Passes the allowed domains for a realm to the frontend, via
page_params.domains. Groundwork for allowing users to add and
remove domains via the admin settings page, rather than via the
realm_alias.py management command.
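For illustration, a minimal sketch of the wiring, assuming a RealmAlias
model with a domain field (implied by the realm_alias.py command);
the helper name is illustrative, not the actual implementation:

    from zerver.models import RealmAlias

    def add_domains_to_page_params(page_params, realm):
        # Expose the realm's allowed domains to the frontend.
        page_params['domains'] = list(
            RealmAlias.objects.filter(realm=realm)
                              .values_list('domain', flat=True))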
This is a preliminary step towards eliminating the realm.domain field
in favor of realm.subdomain. Includes a database migration to create
these for existing realms.
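A hedged sketch of what such a data migration might look like; the
subdomain derivation rule and the migration dependency are assumptions:

    from django.db import migrations

    def populate_subdomains(apps, schema_editor):
        Realm = apps.get_model('zerver', 'Realm')
        for realm in Realm.objects.all():
            # Assumed derivation: first label of the old domain.
            realm.subdomain = realm.domain.split('.')[0]
            realm.save(update_fields=['subdomain'])

    class Migration(migrations.Migration):
        dependencies = [('zerver', '0001_initial')]  # placeholder dependency
        operations = [migrations.RunPython(populate_subdomains)]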
This adds a medium (500px) size avatar thumbnail, that can be
referenced as `{name}-medium.png`. It is intended to be used on the
user's own settings page, though we may come up with other use cases
for high-resolution avatars in the future.
This will automatically generate and upload the medium avatar images
when a new avatar original is uploaded, and contains a migration
(contributed by Kirill Kanakhin) to ensure all pre-existing avatar
images have a medium avatar.
Note that this implementation does not provide an endpoint for
fetching the medium-size avatar for another user.
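A minimal sketch of the thumbnailing step, assuming Pillow; the helper
name and the upload plumbing are illustrative:

    import io
    from PIL import Image

    MEDIUM_AVATAR_SIZE = 500

    def resize_medium_avatar(image_data):
        # Resize the original upload to the 500px medium size.
        im = Image.open(io.BytesIO(image_data))
        im = im.resize((MEDIUM_AVATAR_SIZE, MEDIUM_AVATAR_SIZE),
                       Image.LANCZOS)
        out = io.BytesIO()
        im.save(out, format='PNG')
        return out.getvalue()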
[substantially modified by tabbott]
When we added never_subscribed streams to the data that
populate_subscribers is called on, we failed to add the corresponding
subscriber data to email_dict, the mapping of user IDs to emails
for the subscribers.
Because in the Zephyr world, stream names can be a secret, and also
Zephyr mirroring tends to involve many thousands of streams, we
shouldn't send this data.
This is some of the code we'd need if we wanted to have Zulip generate
avatars for things. Since it is so little useful code, and it's not
clear we will need this feature ever, we can remove this code to make
the codebase less confusing. It'd be easy to dig this out of history
if we ever want it.
Fixes #2101.
- Add tests for when SEND_MISSED_MESSAGE_EMAILS_AS_USER is False (the
default!).
- Reorganize the test case code: remove repeated code, improve code
style, and move common parts into separate class methods.
Fixes #1697.
POST to /typing creates a typing event
Required parameters are 'op' ('start' or 'stop') and 'to' (recipient
emails). If there are multiple recipients, the 'to' parameter
should be a JSON string of the list of recipient emails.
The event created looks like:
    {
        'type': 'typing',
        'op': 'start',
        'sender': 'hamlet@zulip.com',
        'recipients': [{
            'id': 1,
            'email': 'othello@zulip.com'
        }]
    }
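For illustration, a hedged example of calling the endpoint with
python-requests; the server URL and credentials are placeholders:

    import json
    import requests

    requests.post(
        'https://zulip.example.com/typing',
        data={
            'op': 'start',
            'to': json.dumps(['othello@zulip.com', 'iago@zulip.com']),
        },
        auth=('hamlet@zulip.com', 'API_KEY'),  # placeholder credentials
    )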
We now send peer_remove events to folks who have never subscribed
to the streams (except for private streams and zephyr).
We also use logic that is more similar to how
bulk_add_subscriptions() works.
There are two reasons for this change. First, we want to be
consistent with notify_subscriptions_added(), which doesn't
handle "peer" events. Second, we want to fix this code in a
subsequent commit not to do one user at a time, which is
inefficient.
Compare the hash of 'zilencer/management/commands/populate_db' with
'var/populate_db_hash', and 'tools/setup/postgres-init-test-db' with
'var/postgres_init_test_db_hash'; if either comparison fails, rebuild
the database.
With fixes from tabbott.
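A minimal sketch of the check, assuming SHA-1 and the paths above;
the helper names are illustrative:

    import hashlib
    import os

    def file_hash(path):
        with open(path, 'rb') as f:
            return hashlib.sha1(f.read()).hexdigest()

    def database_is_current():
        checks = [
            ('zilencer/management/commands/populate_db',
             'var/populate_db_hash'),
            ('tools/setup/postgres-init-test-db',
             'var/postgres_init_test_db_hash'),
        ]
        for source_file, stored_hash in checks:
            if not os.path.exists(stored_hash):
                return False
            with open(stored_hash) as f:
                if f.read().strip() != file_hash(source_file):
                    return False
        return True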
As best I can tell, this option was completely confused in two ways:
* The name is confusing; it actually controls whether we _clear_ the
timeout associated with the current handler
* It's not clear why one would not want it to be unconditionally true.
From reading the history, I'm pretty sure I had just misread the code
when I created this.
And this caused a real bug; a later refactoring caused us to basically
never cancel the timeouts, which in turn resulted in 90% of all events
traffic being heartbeats sent far more often (every ~5s) than the
intended 45s interval. Removing this code fixes that nasty bug.
merge_vars is the user metadata we store in MailChimp. This commit changes
what we store from realm.domain to realm.id, since realm.domain is being
deprecated, and changes the OPTIN_TIME to a nicer format.
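A hedged sketch of the resulting merge_vars payload; the field names
and the date format are assumptions for illustration:

    import datetime

    def build_merge_vars(user_profile):
        return {
            'REALM_ID': user_profile.realm_id,  # formerly realm.domain
            'OPTIN_TIME': datetime.datetime.now().strftime(
                '%Y-%m-%d %H:%M:%S'),
        }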
This distinguishes between YouTube Videos and Image Previews by adding
a particular “youtube-video” class to the preview along with changing
the title to the video ID rather than the link. This allows the
lightbox to identify when a preview should be treated as a YouTube
video rather than an image preview.
This also modifies the bugdown tests to expect a youtube-video class,
with the title being just the YouTube video ID rather than the
entire URL.
This changes `from django.utils.importlib import import_module` to
`from importlib import import_module`, as `django.utils.importlib` will
be removed in django version 1.9.
The changes that required us to fork this extension had been merged
into upstream CodeHilite, so we can remove it and switch to using the
version that comes with python-markdown.
This updates Bugdown to reflect the changes in the updated
markdown. In particular, we now pass a default config object in the
__init__ for the Bugdown extension, update the make_md_engine function
to take kwargs as opposed to a config list, and have UListProcessor
inherit from ulist as opposed to olist (which no longer works).
We update the (forked from upstream) fenced_code extension's
makeExtension to take args and kwargs, and update
FencedBlockPreprocessor __init__ method with updated Codehilite
arguments.
We update the (forked from upstream) Codehilite extension to
mirror the logic of the latest upstream Codehilite:
* Add a parse_hl_lines function
* Update makeExtension to take args and kwargs instead of a config list
* Add a regex for highlight lines
* Use linenums instead of linenos
* Use get_formatter_by_name instead of HtmlFormatter
* Use get_lexer_by_name instead of TextLexer
* Add hl_lines and use_pygments arguments to the CodeHilite constructor
Add detailed info (description, source and target branch, assignee) to message.
Change subject to 'repo_name / PR #id title'.
Modify some test fixtures for better coverage.
Add detailed info (description, source and target branch, assignee) to message.
Change subject to 'repo_name / PR #id title'.
Modify some test fixtures for better coverage.
Add detailed info (description, source and target branch, assignee) to message.
Change subject to 'repo_name / MR #id title'.
Modify some test fixtures for better coverage.
Fixes: #1883.
Rename:
    PUSH_COMMITS_LIMIT to COMMITS_LIMIT
    PUSH_COMMIT_ROW_TEMPLATE to COMMIT_ROW_TEMPLATE
    PUSH_COMMITS_MORE_THAN_LIMIT_TEMPLATE to COMMITS_MORE_THAN_LIMIT_TEMPLATE
In order to use the latest version of psycopg2 with the latest version
of sqlalchemy we must integrate the changes made in the 2.4.6 release of
psycopg2. See
c86ca7687f
and the release notes for psycopg2 2.4.6.
Change the CountStat object to take an is_gauge variable instead of a
smallest_interval variable. Previously, (smallest_interval, frequency)
could be any of (hour, hour), (hour, day), (hour, gauge), (day, hour),
(day, day), or (day, gauge).
The current change is equivalent to excluding (hour, day) and (day, hour)
from the list above.
This change, along with other recent changes, allows us to simplify how we
handle time intervals. This commit also removes the TimeInterval object.
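A hedged sketch of the revised interface; fields other than frequency
and is_gauge are assumptions:

    class CountStat(object):
        HOUR = 'hour'
        DAY = 'day'

        def __init__(self, property, query, frequency, is_gauge):
            self.property = property
            self.query = query
            # frequency is 'hour' or 'day'; a gauge is sampled at that
            # frequency rather than summed over the interval.
            self.frequency = frequency
            self.is_gauge = is_gauge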
With reactions and other upcoming features, we'll be adding several
places where we need to check whether a particular user can access a
particular message. It's best to just have a single helper function
for this purpose that we can use everywhere.
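A minimal sketch of such a helper, assuming the check is "the user has
a UserMessage row for the message"; the name and error type are
illustrative:

    from zerver.lib.request import JsonableError
    from zerver.models import Message, UserMessage

    def access_message(user_profile, message_id):
        try:
            message = Message.objects.get(id=message_id)
        except Message.DoesNotExist:
            raise JsonableError('Invalid message')
        if not UserMessage.objects.filter(user_profile=user_profile,
                                          message=message).exists():
            raise JsonableError('Invalid message')
        return message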
Previously, we sent users to an "invite your friends" page after they
created an organization. This commit removes that step in the flow and sends
users directly to the home page. We also remove the now-unused
initial_invite_page.html template, initial_invite.js (which pre-filled the
invite emails with characters from literature), and the /invite URL route.
Change the parameter name of some functions from 'md' to 'content',
since the name 'md' seems to be the reason why this parameter was
wrongly annotated.
Previously, we suggested running
`python -c import zerver.tests.test_mytest`
when importing test_mytest failed, which doesn't work.
This commit adds the missing quotes, making it
`python -c 'import zerver.tests.test_mytest'`
test_settings.py was setting EXTERNAL_HOST after importing settings.py,
which has several variables (like SERVER_URI) that are computed from
EXTERNAL_HOST.
[tweaked by tabbott to add comments explaining the story here].
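A hedged sketch of the fix: make sure the value is in place before
settings.py computes its derived variables (the exact mechanism here
is an assumption):

    # test_settings.py (sketch)
    import os

    # Must happen before importing settings.py, which derives
    # SERVER_URI and friends from EXTERNAL_HOST.
    os.environ['EXTERNAL_HOST'] = 'testserver'

    from .settings import *  # noqa: F401,F403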
Adds a new field org_type to Realm. Defaults for restricted_to_domain
and invite_required are now controlled by org_type at time of realm
creation (see zerver.lib.actions.do_create_realm), rather than at the
database level. Note that the backend defaults are all
org_type=corporate, since that matches the current assumptions in the
codebase, whereas the frontend default is org_type=community, since if
a user isn't sure they probably want community.
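An illustrative sketch of the org_type-based defaults in
do_create_realm; the signature and the exact mapping of org_type to
defaults are assumptions:

    from zerver.models import Realm

    def do_create_realm(name, org_type='corporate'):
        # Assumed mapping, for illustration only.
        if org_type == 'corporate':
            restricted_to_domain, invite_required = True, False
        else:  # community
            restricted_to_domain, invite_required = False, True
        realm = Realm(name=name, org_type=org_type,
                      restricted_to_domain=restricted_to_domain,
                      invite_required=invite_required)
        realm.save()
        return realm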
Since we will likely in the future enable/disable various
administrative features based on whether an organization is corporate
or community, we discuss those issues in the realm creation form.
Before we actually implement any such features, we'll want to make
sure users understand what type of organization they are a member of.
Choice of org_type (via radio button) has been added to the realm
creation flow and the realm creation management command, and the
open-realm option removed.
The database defaults have not been changed, which allows our testing code
to work unchanged.
[includes some HTML/CSS work by Brock Whittaker to make it look nice]
Previously, the generate-fixtures shell script called into Django
multiple times in order to check whether the database was in a
reasonable state. Since there's a lot of overhead to starting up
Django, this resulted in `test-backend` and `test-js-with-casper`
being quite slow to run a single small test (2.8s or so) even on my
very fast laptop.
We fix this by moving the checks into a new Python library, so that
we can avoid paying the Django startup overhead 3 times unnecessarily.
The result saves about 1.2s (~40%) from the time required to run a
single backend test.
Fixes #1221.
This is a first pass at building a framework for collecting various
stats about realms, users, streams, etc. Includes:
* New analytics tables for storing counts data
* Raw SQL queries for pulling data from zerver/models.py tables
* Aggregation functions for aggregating hourly stats into daily stats, and
aggregating user/stream level stats into realm level stats
* A management command for pulling the data
Note that counts.py was added to the linter exclude list due to errors
around %%s.
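For a flavor of the aggregation step, a hedged sketch of rolling a
realm's hourly rows into daily rows; the table and column names are
assumptions (and the %(...)s placeholders are what triggers the linter
issue noted above):

    # Aggregate hourly realm counts into daily counts (sketch).
    AGGREGATE_HOUR_TO_DAY_SQL = """
        INSERT INTO analytics_realmcount
            (realm_id, property, value, end_time, interval)
        SELECT realm_id, property, SUM(value),
               date_trunc('day', end_time), 'day'
        FROM analytics_realmcount
        WHERE interval = 'hour'
          AND end_time > %(start)s AND end_time <= %(end)s
        GROUP BY realm_id, property, date_trunc('day', end_time)
    """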
The command to render old messages now looks for all messages
not matching the bugdown version, and it no longer directly calls
into model code. We should still be extremely cautious about
using this code.
This pulls message-related code from models.py into a new
module called message.py, and it starts to break some bugdown
dependencies. All the methods here are basically related to
serializing Message objects as dictionaries for caches and
events.
    extract_message_dict
    stringify_message_dict
    message_to_dict
    message_to_dict_json
    MessageDict.to_dict_uncached
    MessageDict.to_dict_uncached_helper
    MessageDict.build_dict_from_raw_db_row
    MessageDict.build_message_dict
This fix also removes a circular dependency related
to get_avatar_url.
Also, there was kind of a latent bug in Message.need_to_render_content
where it was depending on other calls to Message to import bugdown
and set it globally in the namespace. We really need to just
eliminate the function, since it's so small and used by code that
may be doing very sketchy things, but for now I just fix it. (The
bug would possibly be exposed by moving build_message_dict out to the
library.)
I move these three functions to lib/cache.py:
    to_dict_cache_key_id
    to_dict_cache_key
    flush_message
This will prepare us for a more significant refactoring that
eventually breaks down some circular dependencies with
Message and bugdown.
This removes some unused code on Attachment. Some of these
methods might be useful in a manage.py shell, but without tests,
they are not very trustworthy, and the Attachment model isn't that
complicated to write raw Django queries against.
This moves the logic for renaming a stream to the REST API
update_stream_backend method, eliminating the legacy API endpoint for
doing so.
It also adds a nice test suite covering international stream names.
This improves Google and JWT auth as well as the registration
codepath to log something if the wrong subdomain is encountered.
Ideally, we'd have tests for these, and code to make the Google and JWT
auth cases show a clear error message.
This ensures that everything is using the correct subdomain for
requests. While it probably wouldn't be a real security problem for
the wrong subdomain to work, this enforcement is essential to catching
bugs in the product and users' API scripts.
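A hedged sketch of the enforcement idea; get_subdomain and the error
type are illustrative names, not the actual implementation:

    from django.conf import settings
    from zerver.lib.request import JsonableError  # assumed location

    def validate_subdomain(request, user_profile):
        # Only relevant when realms live on their own subdomains.
        if not settings.REALMS_HAVE_SUBDOMAINS:
            return
        # get_subdomain() is an assumed helper that parses the Host header.
        if get_subdomain(request) != user_profile.realm.subdomain:
            raise JsonableError('Wrong subdomain for this realm')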
This adds support for running a Zulip production server with each
realm on its own unique subdomain, e.g. https://realm_name.example.com.
This patch includes a ton of important features:
* Configuring the Zulip session middleware to issue cookies correctly
for the subdomains case.
* Throwing an error if the user tries to visit an invalid subdomain.
* Running a portion of the Casper tests with REALMS_HAVE_SUBDOMAINS
enabled to test the subdomain signup process.
* Updating our integrations documentation to refer to the current subdomain.
* Enforcing that users can only log in to the subdomain of their realm
(but not restricting the API; that will be tightened in a future commit).
Note that toggling settings.REALMS_HAVE_SUBDOMAINS on a live server is
not supported without manual intervention (the main problem will be
adding "subdomain" values for all the existing realms).
[substantially modified by tabbott as part of merging]
This sets the “title” attribute on the image to the actual title of
the image specified by the user in their markdown, rather than just
the URL of the full link to it.
This was the original way to send messages via the Zulip API in the
very early days of Zulip, but was replaced by the REST API back in
2013.
Fixes: #730.
Sorry, couldn't resist wording the commit message like that. :)
The remove_unreachable() method on Message was no longer being
used, and the commit history made it fairly clear we won't need it
in the future.
We can now rely on UserProfile.last_reminder being time zone
aware, or even if it isn't, it's a self-correcting problem the
first time a reminder is sent. (It's a non-problem to be off
by a few timezones if somebody still has an old value there, because
they will still be outside the 1-minute nag window even with the
timezone disparity.)
This is partly a concession to testing; it's really hard to test
that we are flushing the cache properly if tests need to look
at a global variable in models.py that can be re-assigned on every
request. Extracting this function makes it easy for tests to know
whether a domain is in the local cache.
We currently do
    var invite_suffix = "{{invite_suffix}}";
in javascript in the initial_invite_page.html template.
This sets invite_suffix to "{{invite_suffix}}" when the template is rendered
without invite_suffix in the params, rather than to "" as intended. This
later causes problems in the invite_email validator in initial_invite.js.
We no longer use all the alert words for all the users in the
entire realm when we look for alert words in a newly sent/edited
message. Now we limit the search to only all the alert words
for all the users who will get UserMessage records. This will
hopefully make a big difference for big realms where most messages
are only sent to a small subset of users.
The bugdown parser no longer has a concept of which users need which
alert words, since it can't really do anything actionable with that info
from a rendering standpoint.
Instead, our calling code passes in a set of search words to the parser.
The parser returns the list of words it finds in the message.
Then the model method builds up the list of user ids that should be
flagged as having alert words in the message.
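A hedged sketch of the new division of labor; the names are
illustrative:

    # The renderer only reports which of the supplied words appear:
    def find_alert_words(content, search_words):
        content_lower = content.lower()
        return [w for w in search_words if w.lower() in content_lower]

    # The model code then maps matched words back to user ids, using
    # the realm_alert_words structure (user_id -> list of words):
    def alerted_user_ids(realm_alert_words, words_in_message):
        matched = set(w.lower() for w in words_in_message)
        return set(user_id
                   for user_id, words in realm_alert_words.items()
                   if any(w.lower() in matched for w in words))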
This refactoring is a little more involved than I'd like, but there are
still some circular dependency issues with rendering code, so I need to
pass in the rather complicated realm_alert_words data structure all the way
from the action through the model to the renderer.
This change shouldn't change the overall behavior of the system, except
that it does remove some duplicate regex checks that were occurring when
multiple users may have had the same alert word.
We now use render_incoming_message() to render all incoming
new messages (sends/edits), so that they will get the same treatment.
This change also establishes do_send_messages() as the code
path to get new messages rendered. It removes some
logic from check_message() that only happened on certain code paths
for sending messages, and which would only detect failures by
expensively rendering messages, so it wasn't much of a guard.
This change also helps to phase out maybe_render_content(), which
deepens the call stack without providing much clarity to the reader,
since its behavior is so variable.
Finally, this sets up to fix a flaw in the way we compute which
users have alert words in their messages (in a subsequent commit).
If you supplied an unrecognizable address to our email system,
or you had EMAIL_GATEWAY_PATTERN configured wrong,
the get_missed_message_token_from_address() used to crash
hard and cryptically with a traceback saying that you can't
call startswith() on a None object.
Now we throw a ZulipEmailForwardError exception. This will
still lead to a traceback, but it should be easier to diagnose
the problem.
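A minimal sketch of the new failure mode; the address parsing here is
an assumption for illustration:

    class ZulipEmailForwardError(Exception):
        pass

    def get_missed_message_token_from_address(address):
        local_part = address.split('@')[0]
        if not local_part.startswith('mm'):
            # Previously this path crashed with a cryptic traceback.
            raise ZulipEmailForwardError('Address not recognized.')
        return local_part[2:]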
In our email mirror, we have a special format for missed
message emails that uses a 32-character randomly generated token
that we put into redis that is then prefixed with "mm" for
a total of 34 characters.
We had a bug where we would mis-classify emails like
mmcfoo@example.com as being these system-generated emails
that were part of the redis setup.
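A hedged sketch of the stricter classification, using the token format
described above (the function name is illustrative):

    def is_missed_message_address(address):
        local_part = address.split('@')[0]
        # 'mm' + 32-character random token = 34 characters total;
        # a bare startswith('mm') check wrongly matched, e.g., 'mmcfoo'.
        return local_part.startswith('mm') and len(local_part) == 34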
It's actually a little unclear how the bug in the library
function would have manifested from the user's point of view,
but it was definitely buggy code, and it's possibly related in
a subtle way to an error report we got from a customer where
only one of their users, who happened to have a name like
mmcfoo, was having problems with the mirror.
It appears that the assertRaisesRegexp approach we had before didn't
work properly on some systems, likely due to a bad interaction with
i18n (we haven't definitively determined the cause).
We now raise an exception in bugdown.do_convert() if rendering
fails, to avoid silent failures, and then calling code can convert
the exception to a JsonableError.
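A minimal sketch of the calling-code pattern; the wrapper name is an
assumption:

    import zerver.lib.bugdown as bugdown
    from zerver.lib.request import JsonableError  # assumed location

    def render_or_error(content, message):
        try:
            return bugdown.do_convert(content, message=message)
        except Exception:
            raise JsonableError('Unable to render message')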
The list_to_streams() method now uses create_streams_if_needed() to
do its heavy lifting during the autocreate=True case.
This commit gets us to 100% coverage on the streams view. (The
recently created action.create_streams_if_needed() was easy
to test in isolation, and it has 100% coverage as well, so we are
not cheating here.)
Fixes: #1005.
When we push a device token, we want to clean out any other user's
tokens on the device, but not the current user's. We were wiping
away our own token, if it existed, before creating it again. This
was probably never a user-facing problem; it just made for dead code
and a little unnecessary DB churn. By excluding the current user
from the delete() call, we exercise the update path in our tests now,
so we have 100% coverage.
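A hedged sketch of the corrected delete, assuming a PushDeviceToken
model with user, token, and kind fields:

    from zerver.models import PushDeviceToken

    def add_push_device_token(user_profile, token_str, kind):
        # Clear other users' registrations for this device, but keep
        # (or idempotently re-create) the current user's row.
        PushDeviceToken.objects.filter(token=token_str) \
                               .exclude(user=user_profile).delete()
        PushDeviceToken.objects.get_or_create(user=user_profile,
                                              token=token_str,
                                              kind=kind)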
We now have 100% coverage on views/push_notifications.py, modulo
some dead code which will be addressed in the next commit.
There were some existing tests in test_external.py, but that
module is really intended for tests that hit external services.
The view is a really simple API that updates a DB table, and the
new test code focuses on error handling and idempotency as well
as the happy path.