send_event() expects a list of user ids (ints) except for the special case
of messages. This commit:
1. Fixes this in the call to send_event() in do_send_typing_notification()
2. Renames the variables in do_send_typing_notification() to better reflect
their content (for example, recipient_ids instead of recipients).
3. Renames the id field in the dicts sent in the typing event body (sender,
recipients) to user_id.
4. Adds assertions to the tests to verify that the tornado event user ids
are the same as the recipients in the event body.
5. Adds assertions to the tests to verify that the tornado event user
ids and the recipient user ids (in the event body) are the same as the
expected user ids (obtained from the emails using
get_user_profile_by_email)
6. Changes all assertTrues to assertEquals in the tests
This fixes #2151.
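As a reference for reviewers, a minimal sketch of the event body shape after
the rename (the helper name here is hypothetical; only the user_id field
names come from this commit):

```python
# Hypothetical helper illustrating the renamed fields; not Zulip's
# actual send_event() plumbing.
def build_typing_event(op, sender, recipient_users):
    return {
        'type': 'typing',
        'op': op,  # 'start' or 'stop'
        'sender': {'user_id': sender['user_id'], 'email': sender['email']},
        'recipients': [{'user_id': u['user_id'], 'email': u['email']}
                       for u in recipient_users],
    }
```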
Disallow Realm.string_id's like "streams", "about", and several hundred
others. Also restrict string_id's to be at least 3 characters long, and only
use characters in [a-z0-9-].
Does not restrict realms created by the create_realm.py management command.
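A minimal sketch of the validation this implies (the disallowed set below is
a tiny stand-in for the several-hundred-entry list; the function name is
illustrative):

```python
import re

DISALLOWED_SUBDOMAINS = {'streams', 'about'}  # stand-in for the real list

def check_string_id(string_id):
    """Return an error message, or None if string_id is acceptable."""
    if len(string_id) < 3:
        return 'string_id must be at least 3 characters'
    if not re.match(r'^[a-z0-9-]+$', string_id):
        return 'string_id may only contain characters in [a-z0-9-]'
    if string_id in DISALLOWED_SUBDOMAINS:
        return 'string_id unavailable'
    return None
```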
Previously realm_name and realm_subdomain defaulted to None, which, when
posted to /accounts/register, was submitted as u'None'. "None" is an invalid
Realm.subdomain, since subdomains can't have capital letters.
If a stream is public, we now send notifications to all realm users
if the name or description of the stream changes. For private
streams, the behavior remains the same.
We do this by introducing a method called
can_access_stream_user_ids().
(showell helped with this fix)
Fixes #2195
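A hedged sketch of the idea behind can_access_stream_user_ids(), with plain
values standing in for the Django models:

```python
def can_access_stream_user_ids(is_invite_only, realm_user_ids, subscriber_ids):
    # Private (invite-only) streams: only current subscribers are notified.
    # Public streams: all realm users are notified of name/description changes.
    if is_invite_only:
        return set(subscriber_ids)
    return set(realm_user_ids)
```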
We use the queries_captured() context manager in our tests
to capture queries that happen during certain actions. Usually
we use the list of queries to validate that we're not doing
too many database calls.
For most tests the database calls are fairly deterministic,
but SAVEPOINT-related statements seem to be more random, and the
number of savepoints is usually not relevant to what the test
is trying to guard against, namely more serious problems like
O(N) database fetches.
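An illustrative way to make the counts deterministic again, assuming each
captured query is a dict with a 'sql' key (the helper name is made up;
queries_captured() itself is Zulip's existing test helper):

```python
def without_savepoints(queries):
    # Drop SAVEPOINT noise so query-count assertions stay stable.
    return [q for q in queries
            if not q['sql'].startswith(('SAVEPOINT',
                                        'RELEASE SAVEPOINT',
                                        'ROLLBACK TO SAVEPOINT'))]
```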
Previously, we set restrict_to_domain and invite_required differently
depending on whether we were setting up a community or a corporate
realm. Setting restrict_to_domain requires validation on the domain of the
user's email, which is messy in the web realm creation flow, since we
validate the user's email before knowing whether the user intends to set up
a corporate or community realm. The simplest solution is to have the realm
creation flow impose as few restrictions as possible (community defaults),
and then worry about restrict_to_domain etc. after the user is already in.
We set the test suite to explicitly use the old defaults, since several of
the tests depend on the old defaults.
This commit adds a database migration.
We now send dictionaries for cross-realm bots. This led to the
following changes:
* Create get_cross_realm_dicts() in actions.py.
* Rename the page_params field to cross_realm_bots.
* Fix some back end tests.
* Add cross_realm_dict to people.js.
* Call people.add for cross-realm bots (if they are not already part of the realm).
* Remove hack to add in feedback@zulip.com on the client side.
* Add people.is_cross_realm_email() and use it in compose.js.
* Remove util.string_in_list_case_insensitive().
In preparation for a change to do_create_realm where we will use the
database default for restrict_to_domain rather than computing it within
do_create_realm, after which do_create_realm will no longer know
whether we are creating an open realm or not.
Does a database migration to rename Realm.subdomain to
Realm.string_id, and makes Realm.subdomain a property. Eventually,
Realm.string_id will replace Realm.domain as the handle by which we
retrieve Realm objects.
We now simply exclude all cross-realm bots from the set of emails
under consideration, and then if the remaining emails are all in
the same realm, we're good.
This fix changes two behaviors:
* You can no longer send a PM to an ordinary user in another realm
by piggy-backing a cross-realm bot on to the message. (This was
basically a bug, but it would never manifest under current
configurations.)
* You will be able to send PMs to multiple cross-realm bots at once.
(This was an arbitrary restriction. We don't really care about this
scenario much yet, and it fell out of the new implementation.)
This is a first step towards implementing a message retention policy
feature.
- Add a message_retention_days field to the Realm model, to set the
  period after which a realm's messages expire.
- Add a migration.
- Add a tool to get expired messages for each realm.
- Add tests covering the tool for getting expired messages.
Now that we have updated python-markdown, we remove the deprecated
safe_mode. We used safe_mode to escape raw html, so now instead we
pass in an EscapeHtml markdown extension to the markdown engine.
See https://pythonhosted.org/Markdown/release-2.6.html for details on
the deprecation.
Fixes: #2037 (also addresses the remaining piece of #2043).
This is a preliminary step towards eliminating the realm.domain field
in favor of realm.subdomain. Includes a database migration to create
these for existing realms.
This adds a medium (500px) size avatar thumbnail that can be
referenced as `{name}-medium.png`. It is intended to be used on the
user's own settings page, though we may come up with other use cases
for high-resolution avatars in the future.
This will automatically generate and upload the medium avatar images
when a new avatar original is uploaded, and contains a migration
(contributed by Kirill Kanakhin) to ensure all pre-existing avatar
images have a medium avatar.
Note that this implementation does not provide an endpoint for
fetching the medium-size avatar for another user.
[substantially modified by tabbott]
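A minimal sketch of the resizing step, assuming Pillow (which Zulip uses for
image handling); the storage/upload plumbing is omitted:

```python
from io import BytesIO

from PIL import Image, ImageOps

MEDIUM_AVATAR_SIZE = 500

def resize_avatar_medium(image_data):
    # Crop and scale the original to the 500px square used for
    # {name}-medium.png.
    im = Image.open(BytesIO(image_data))
    im = ImageOps.fit(im, (MEDIUM_AVATAR_SIZE, MEDIUM_AVATAR_SIZE),
                      Image.LANCZOS)
    out = BytesIO()
    im.save(out, format='PNG')
    return out.getvalue()
```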
When we added data on never_subscribed streams to what
populate_subscribers is called on, we failed to add the corresponding
data on subscribers to email_dict, the mapping of user IDs to emails
for the subscribers.
Because in the Zephyr world, stream names can be a secret, and also
Zephyr mirroring tends to involve many thousands of streams, we
shouldn't send this data.
This is some of the code we'd need if we wanted to have Zulip generate
avatars for things. Since it is so little useful code, and it's not
clear we will need this feature ever, we can remove this code to make
the codebase less confusing. It'd be easy to dig this out of history
if we ever want it.
Fixes #2101.
POST to /typing creates a typing event
Required parameters are 'op' ('start' or 'stop') and 'to' (recipient
emails). If there are multiple recipients, the 'to' parameter
should be a JSON string of the list of recipient emails.
The event created looks like:
{
    'type': 'typing',
    'op': 'start',
    'sender': 'hamlet@zulip.com',
    'recipients': [{
        'id': 1,
        'email': 'othello@zulip.com'
    }]
}
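For example, a request might look like this (URL path and auth are
illustrative; only the 'op' and 'to' parameter shapes come from this commit):

```python
import json

import requests

requests.post(
    'https://zulip.example.com/json/typing',  # illustrative URL
    auth=('hamlet@zulip.com', 'API_KEY'),
    data={
        'op': 'start',
        # Multiple recipients: 'to' is a JSON string of the email list.
        'to': json.dumps(['othello@zulip.com', 'cordelia@zulip.com']),
    },
)
```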
We now send peer_remove events to folks who have never subscribed
to the streams (except for private streams and zephyr).
We also use logic that is more similar to how
bulk_add_subscriptions() works.
There are two reasons for this change. First, we want to be
consistent with notify_subscriptions_added(), which doesn't
handle "peer" events. Second, we want to fix this code in a
subsequent commit not to do one user at a time, which is
inefficient.
Compare the hash of 'zilencer/management/commands/populate_db' with
'var/populate_db_hash', and the hash of 'tools/setup/postgres-init-test-db'
with 'var/postgres_init_test_db_hash'; if either comparison fails, rebuild
the database.
With fixes from tabbott.
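A hedged sketch of the comparison logic (the file paths come from this
commit; the helper names, hash algorithm, and error handling are
assumptions):

```python
import hashlib

def file_hash(path):
    with open(path, 'rb') as f:
        return hashlib.sha1(f.read()).hexdigest()

def database_is_current(source_path, stored_hash_path):
    try:
        with open(stored_hash_path) as f:
            stored = f.read().strip()
    except IOError:
        return False  # no recorded hash yet, so rebuild
    return stored == file_hash(source_path)

needs_rebuild = not (
    database_is_current('zilencer/management/commands/populate_db',
                        'var/populate_db_hash')
    and database_is_current('tools/setup/postgres-init-test-db',
                            'var/postgres_init_test_db_hash'))
```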
As best I can tell, this option was completely confused in two ways:
* The name is confusing; it actually controls whether we _clear_ the
timeout associated with the current handler
* It's not clear why one would not want it to be unconditionally true.
From reading the history, I'm pretty sure I had just misread the code
when I created this.
And this caused a real bug; a later refactoring caused us to basically
never cancel the timeouts, which in turn resulted in 90% of all events
traffic being heartbeats, sent at a much shorter interval (~5s) than the
intended 45s. Removing this code fixes that nasty bug.
merge_vars is the user metadata we store in MailChimp. This commit changes
what we store from realm.domain to realm.id, since realm.domain is being
deprecated, and changes the OPTIN_TIME to a nicer format.
This distinguishes between YouTube Videos and Image Previews by adding
a particular “youtube-video” class to the preview along with changing
the title to the video ID rather than the link. This allows the
lightbox to identify when a preview should be treated as a YouTube
video rather than an image preview.
This also modifies the bugdown tests to expect a youtube-video class,
with the title set to just the YouTube video ID rather than the
entire URL.
This changes `from django.utils.importlib import import_module` to
`from importlib import import_module`, as `django.utils.importlib` will
be removed in Django 1.9.
The changes that required us to fork this extension had been merged
into upstream CodeHilite, so we can remove it and switch to using the
version that comes with python-markdown.
This updates Bugdown to reflect the changes in the updated
markdown. In particular, we now pass a default config object in the
__init__ for the Bugdown extension, update the make_md_engine function
to take kwargs as opposed to a config list, and have UListProcessor
inherit from ulist as opposed to olist (which no longer works).
We update the (forked from upstream) fenced_code extension's
makeExtension to take args and kwargs, and update
FencedBlockPreprocessor __init__ method with updated Codehilite
arguments.
We update the (forked from upstream) Codehilite extension to
mirror the logic of the latest upstream Codehilite:
* Add a parse_hl_lines function.
* Update makeExtension to take args and kwargs instead of a config list.
* Add a regex for highlight lines.
* Use linenums instead of linenos.
* Use get_formatter_by_name instead of HtmlFormatter.
* Use get_lexer_by_name instead of TextLexer.
* Add hl_lines and use_pygments arguments to the CodeHilite constructor.
Rename:
PUSH_COMMITS_LIMIT to COMMITS_LIMIT
PUSH_COMMIT_ROW_TEMPLATE to COMMIT_ROW_TEMPLATE
PUSH_COMMITS_MORE_THAN_LIMIT_TEMPLATE to COMMITS_MORE_THAN_LIMIT_TEMPLATE
In order to use the latest version of psycopg2 with the latest version
of sqlalchemy we must integrate the changes made in the 2.4.6 release of
psycopg2. See
c86ca7687f
and the release notes for psycopg2 2.4.6.
Change the CountStat object to take an is_gauge variable instead of a
smallest_interval variable. Previously, (smallest_interval, frequency)
could be any of (hour, hour), (hour, day), (hour, gauge), (day, hour),
(day, day), or (day, gauge).
The current change is equivalent to excluding (hour, day) and (day, hour)
from the list above.
This change, along with other recent changes, allows us to simplify how we
handle time intervals. This commit also removes the TimeInterval object.
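A minimal sketch of the resulting constructor (abridged; field semantics per
the description above):

```python
class CountStat:
    def __init__(self, property, frequency, is_gauge):
        # frequency is 'hour' or 'day'. A gauge is a point-in-time
        # sample taken at that frequency; a non-gauge is aggregated
        # over intervals of that same frequency, so mixed pairs like
        # (hour, day) no longer occur.
        self.property = property
        self.frequency = frequency
        self.is_gauge = is_gauge
```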
With reactions and other upcoming features, we'll be adding several
places where we need to check whether a particular user can access a
particular message. It's best to just have a single helper function
for this purpose that we can use everywhere.
Change the parameter name of some functions from 'md' to 'content',
since the name 'md' seems to be the reason why this parameter was
wrongly annotated.
Previously, we suggested running
`python -c import zerver.tests.test_mytest`
when importing test_mytest failed, which doesn't work.
This commit adds the missing quotes, making it
`python -c 'import zerver.tests.test_mytest'`
Adds a new field org_type to Realm. Defaults for restricted_to_domain
and invite_required are now controlled by org_type at time of realm
creation (see zerver.lib.actions.do_create_realm), rather than at the
database level. Note that the backend defaults are all
org_type=corporate, since that matches the current assumptions in the
codebase, whereas the frontend default is org_type=community, since if
a user isn't sure they probably want community.
Since we will likely in the future enable/disable various
administrative features based on whether an organization is corporate
or community, we discuss those issues in the realm creation form.
Before we actually implement any such features, we'll want to make
sure users understand what type of organization they are a member of.
Choice of org_type (via radio button) has been added to the realm
creation flow and the realm creation management command, and the
open-realm option removed.
The database defaults have not been changed, which allows our testing code
to work unchanged.
[includes some HTML/CSS work by Brock Whittaker to make it look nice]
Previously, the generate-fixtures shell script worked by calling into
Django multiple times in order to check whether the database was in a
reasonable state. Since there's a lot of overhead to starting up
Django, this resulted in `test-backend` and `test-js-with-casper`
being quite slow to run a single small test (2.8s or so) even on my
very fast laptop.
We fix this by moving the checks into a new Python library, so that
we can avoid paying the Django startup overhead 3 times unnecessarily.
The result saves about 1.2s (~40%) from the time required to run a
single backend test.
Fixes #1221.
This is a first pass at building a framework for collecting various
stats about realms, users, streams, etc. Includes:
* New analytics tables for storing counts data
* Raw SQL queries for pulling data from zerver/models.py tables
* Aggregation functions for aggregating hourly stats into daily stats, and
aggregating user/stream level stats into realm level stats
* A management command for pulling the data
Note that counts.py was added to the linter exclude list due to errors
around %%s.
The command to render old messages now looks for all messages
not matching the bugdown version, and it no longer directly calls
into model code. We should still be extremely cautious about
using this code.
This pulls message-related code from models.py into a new
module called message.py, and it starts to break some bugdown
dependencies. All the methods here are basically related to
serializing Message objects as dictionaries for caches and
events.
extract_message_dict
stringify_message_dict
message_to_dict
message_to_dict_json
MessageDict.to_dict_uncached
MessageDict.to_dict_uncached_helper
MessageDict.build_dict_from_raw_db_row
MessageDict.build_message_dict
This fix also removes a circular dependency related
to get_avatar_url.
Also, there was kind of a latent bug in Message.need_to_render_content
where it was depending on other calls to Message to import bugdown
and set it globally in the namespace. We really need to just
eliminate the function, since it's so small and used by code that
may be doing very sketchy things, but for now I just fix it. (The
bug would possibly be exposed by moving build_message_dict out to the
library.)
I move these three functions to lib/cache.py:
to_dict_cache_key_id
to_dict_cache_key
flush_message
This will prepare us for a more significant refactoring that
eventually breaks down some circular dependencies with
Message and bugdown.
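For reference, a sketch of the moved helpers (the function names come from
this commit; the key format string is an assumption):

```python
def to_dict_cache_key_id(message_id, apply_markdown):
    # Key format is illustrative, not necessarily the exact string.
    return 'message_dict:%d:%d' % (message_id, apply_markdown)

def to_dict_cache_key(message, apply_markdown):
    return to_dict_cache_key_id(message.id, apply_markdown)
```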
This adds support for running a Zulip production server with each
realm on its own unique subdomain, e.g. https://realm_name.example.com.
This patch includes a ton of important features:
* Configuring the Zulip session middleware to issue cookies correctly
  for the subdomains case.
* Throwing an error if the user tries to visit an invalid subdomain.
* Running a portion of the Casper tests with REALMS_HAVE_SUBDOMAINS
  enabled to test the subdomain signup process.
* Updating our integrations documentation to refer to the current subdomain.
* Enforcing that users can only log in to the subdomain of their realm
  (but not restricting the API; that will be tightened in a future commit).
Note that toggling settings.REALMS_HAVE_SUBDOMAINS on a live server is
not supported without manual intervention (the main problem will be
adding "subdomain" values for all the existing realms).
[substantially modified by tabbott as part of merging]
This sets the “title” attribute on the image to the actual title of
the image specified by the user in their markdown, rather than just
the URL of the full link to it.
We can now rely on UserProfile.last_reminder being time zone
aware, or even if it isn't, it's a self-correcting problem the
first time a reminder is sent. (It's a non-problem to be off
by a few timezones if somebody still has an old value there, because
they will still be outside the 1-minute nag window even with the
timezone disparity.)
We no longer use all the alert words for all the users in the
entire realm when we look for alert words in a newly sent/edited
message. Now we limit the search to only all the alert words
for all the users who will get UserMessage records. This will
hopefully make a big difference for big realms where most messages
are only sent to a small subset of users.
The bugdown parser no longer has a concept of which users need which
alert words, since it can't really do anything actionable with that info
from a rendering standpoint.
Instead, our calling code passes in a set of search words to the parser.
The parser returns the list of words it finds in the message.
Then the model method builds up the list of user ids that should be
flagged as having alert words in the message.
This refactoring is a little more involved than I'd like, but there are
still some circular dependency issues with rendering code, so I need to
pass in the rather complicated realm_alert_words data structure all the way
from the action through the model to the renderer.
This change shouldn't change the overall behavior of the system, except
that it does remove some duplicate regex checks that were occurring when
multiple users may have had the same alert word.
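A hedged sketch of the new division of labor (function names are
illustrative; realm_alert_words maps user ids to their alert words, as
described above):

```python
def find_alert_words(content, search_words):
    # The parser only reports which of the supplied words appear.
    content_lower = content.lower()
    return [w for w in search_words if w.lower() in content_lower]

def user_ids_with_alert_words(realm_alert_words, words_in_message):
    # The model layer maps found words back to interested user ids.
    found = {w.lower() for w in words_in_message}
    return {user_id
            for user_id, words in realm_alert_words.items()
            if any(w.lower() in found for w in words)}
```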
We now use render_incoming_message() to render all incoming
new messages (sends/edits), so that they will get the same treatment.
This change also establishes do_send_messages() as the code
path to get new messages rendered. It removes some
logic from check_message() that only happened on certain code paths
for sending messages, and which would only detect failures by
expensively rendering messages, so it wasn't much of a guard.
This change also helps to phase out maybe_render_content(), which
deepens the call stack without providing much clarity to the reader,
since its behavior is so variable.
Finally, this sets up to fix a flaw in the way we compute which
users have alert words in their messages (in a subsequent commit).
If you supplied an unrecognizable address to our email system,
or you had EMAIL_GATEWAY_PATTERN configured wrong,
the get_missed_message_token_from_address() used to crash
hard and cryptically with a traceback saying that you can't
call startswith() on a None object.
Now we throw a ZulipEmailForwardError exception. This will
still lead to a traceback, but it should be easier to diagnose
the problem.
In our email mirror, we have a special format for missed
message emails that uses a randomly generated 32-character token
that we put into redis and then prefix with "mm" for
a total of 34 characters.
We had a bug where we would mis-classify emails like
mmcfoo@example.com as being these system-generated emails
that were part of the redis setup.
It's actually a little unclear how the bug in the library
function would have manifested from the user's point of view,
but it was definitely buggy code, and it's possibly related in
a subtle way to an error report we got from a customer where
only one of their users, who happened to have a name like
mmcfoo, was having problems with the mirror.
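A hedged sketch combining this fix with the previous one (the exception name
comes from the earlier commit; the exact parsing is an assumption):

```python
class ZulipEmailForwardError(Exception):
    pass

def get_missed_message_token_from_address(address):
    if not isinstance(address, str) or '@' not in address:
        raise ZulipEmailForwardError('Address not recognized by gateway.')
    local_part = address.split('@')[0]
    # Tokens are "mm" plus 32 characters, 34 total, so an ordinary
    # address like mmcfoo@example.com no longer matches.
    if not (local_part.startswith('mm') and len(local_part) == 34):
        raise ZulipEmailForwardError('Missed message address format unrecognized.')
    return local_part
```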
We now raise an exception in bugdown.do_convert() if rendering
fails, to avoid silent failures, and then calling code can convert
the exception to a JsonableError.
While one often might want to put the user's name in an email
template, `name` here was the user's full name, not their first name,
and thus reads as quite formal.
Our implementation of duplication detection in the Zulip email error
reporting system was buggy in two important ways:
* It did not look at the traceback, and thus considered all errors as
the same.
* It reset the 10-minute duplicate timer every time an error happened,
thus concealing situations where the same error was occurring more
often than once every 10 minutes.
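An illustrative fix for both problems: key the deduplication on the
traceback and use a fixed window that repeat occurrences do not reset (names
and in-memory storage are assumptions):

```python
import hashlib
import time

last_report_time = {}  # traceback hash -> time of last actual report

def should_report(traceback_text, window_seconds=600):
    key = hashlib.sha256(traceback_text.encode('utf-8')).hexdigest()
    now = time.time()
    last = last_report_time.get(key)
    if last is None or now - last >= window_seconds:
        last_report_time[key] = now  # updated only when we report
        return True
    return False  # duplicate within the window; timer is NOT reset
```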
This fixes a problem where the requests to Tornado would attempt to
use a configured outgoing HTTP proxy, when really we want to connect
directly to localhost.
Fixes: #468.
Old behavior is to do something tricky that relies on the server being on
Pacific Time and the users being in the US. The goal is to have this message
appear during business hours, since click-through rates are higher during
business hours. Our server is now on the East Coast though and our users are
in every timezone, so until we do something smarter this seems like a better
heuristic. We're also trying to cleanse our codebase of non-timezone-aware
datetime.datetime objects.
The previous default configuration resulted in delivery problems unless
the Zulip server was authorized in the SPF records for the domains of
all users on the Zulip server.
This has the nice side effect of getting rid of the now-unnecessary
ADMIN_DOMAIN from this codepath -- we really just want whichever
realm ERROR_BOT is in.
Since this delayed sending feature is the only thing
settings.MANDRILL_API_KEY is used for, it seems reasonable for that to
be the gate as to whether we actually use Mandrill.
This sends an event when a new avatar is uploaded that refreshes the
avatar for all browser clients without the need to reload the browser.
Fixes: #1359.
!avatar, !modal_link, !gravatar, etc. were incorrectly being processed
before the escape character for code blocks.
While we're at it, we add tests for these special syntaxes.
This fixes a nasty bug where exporting messages sent by a single user
might only contain some of the messages in the event that the
unspecified sort order by the database didn't happen to be sorted by
message ID.
This commit only addresses tables that currently derive from
user_profile_config in get_realm_config:
zerver_userpresence
zerver_useractivity
zerver_useractivityinterval
zerver_subscription
zerver_recipient
zerver_stream
zerver_huddle
It also introduces an entry in realm.json for a virtual
table called "zerver_userprofile_mirrordummy" for dummy users,
which include prior dummy users and users excluded from the call
to do_export_realm().
Note that this feature is not yet exposed in the management command.
This adds a few new helpful context variables that we can use to
compute URLs in all of our templates:
* external_uri_scheme: http(s)://
* server_uri: The base URL for the server's canonical name
* realm_uri: The base URL for the user's realm
This is preparatory work for making realm_uri != server_uri when we
add support for subdomains.
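A minimal sketch of how the three values relate (the function and its
parameters are illustrative; only the variable names come from this commit):

```python
def build_uri_context(external_host, realm_subdomain=None, use_https=True):
    external_uri_scheme = 'https://' if use_https else 'http://'
    server_uri = external_uri_scheme + external_host
    if realm_subdomain:  # subdomain support makes realm_uri differ
        realm_uri = external_uri_scheme + realm_subdomain + '.' + external_host
    else:
        realm_uri = server_uri
    return {
        'external_uri_scheme': external_uri_scheme,
        'server_uri': server_uri,
        'realm_uri': realm_uri,
    }
```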
The message cache filling script actually used both database and
memcached queries as part of filling the cache (all used to compute
the needed display_recipient values). We would ideally fix this by
using bulk operations to fill the display_recipient cache, but until
we do so, this cache filler is counterproductive.
I believe this disabling fixes an issue where memcached would get
overloaded and stop handling requests during a server restart on busy
servers.
Now attachment data gets written to its own JSON file. We are
splitting this out so that it will be easier for us to cross-check
attachments against messages without holding up writing a lot
of the other realm data. (Message cross-checking is coming soon.)
This commit doesn't change any behavior; it just moves fetching
attachments out of the Config scheme and into its own method.
This prepares us to start writing attachment data to its own
file and cross-checking against message ids (coming soon).
We now just have a single configuration get_realm_config() that
handles most of the top-down realm export tables. (It basically
does everything not related to messages or uploads/avatars.)
Unifying the configs allows us to be more strict in our
configuration about checking for anomalies. In the future
we may need to loosen up some of those restrictions again,
but for now we are picky and paranoid.
Fetch stream data only for stream recipients, instead of
getting streams via realm_id.
(This change is kind of moot for now, since our stream recipients
include all possible stream recipients in the realm, but this
sets us up for when we start restricting users that we export
within the realm.)