Since the intent of our testing code was clearly to clear this cache
for every test, there's no reason for it to be a module-level global.
This allows us to remove an unnecessary import from test_runner.py,
which in combination with DEFAULT_REALM's definition was causing us to
run models code before running migrations inside test-backend.
(That bug, in turn, caused test-backend's check for whether migrations
need to be run to sadly happen only after trying to access a Realm,
triggering a test-backend crash if the Realm model had changed since the
last provision.)
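As a rough illustration of the shape of this change (the names below are
illustrative, not the actual ones from the codebase), the cache moves from a
module-level global onto the test class, where setUp() preserves the
cleared-for-every-test behavior:

    from typing import Any, Dict

    class ZulipTestCaseSketch:
        # Previously this cache (alongside DEFAULT_REALM) lived at module
        # level, so merely importing the module ran models code; keeping it
        # on the test class avoids that, and test_runner.py no longer needs
        # an extra import to clear it.
        _realm_cache: Dict[str, Any] = {}

        def setUp(self) -> None:
            self._realm_cache.clear()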
We use the plumbing introduced in a previous commit to now raise
PushNotificationBouncerRetryLaterError in send_to_push_bouncer when
there are issues talking to the bouncer server. That's a better way of
dealing with these errors than the previous approach of returning a
"failed" boolean, which generally wasn't checked in the calling code
anyway and so did nothing.
The PushNotificationBouncerRetryLaterError exception will be nicely
handled by queue processors, which will retry sending, and because it is
a JsonableError, it will also communicate the error to API users.
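In rough outline, the change to send_to_push_bouncer looks like the sketch
below; this is simplified, assumes the bouncer is reached via requests, and
the URL and error-handling details are illustrative:

    import requests
    from requests.exceptions import ConnectionError, Timeout

    class PushNotificationBouncerRetryLaterError(Exception):
        """Stand-in here; the real class is sketched under the next commit."""

    def send_to_push_bouncer(method: str, endpoint: str, post_data: dict) -> dict:
        url = "https://push.zulipchat.com/api/v1/remotes/" + endpoint
        try:
            res = requests.request(method, url, data=post_data, timeout=30)
        except (ConnectionError, Timeout):
            # Instead of returning a "failed" boolean that callers ignored,
            # raise, so queue processors can requeue the event and retry.
            raise PushNotificationBouncerRetryLaterError(
                "Problem connecting to the push notification bouncer")
        if res.status_code >= 500:
            # A 5xx from the bouncer is likewise a transient, retryable error.
            raise PushNotificationBouncerRetryLaterError(
                "Received %s from the push notification bouncer" % (res.status_code,))
        return res.json()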
We add PushNotificationBouncerRetryLaterError, an exception signaling
that an error occurred while trying to communicate with the bouncer and
that the operation should be retried. We use JsonableError as the base
class, because this signal will need to work in two roles (a rough
sketch of the class follows the list):
1. When the push notification is being issued by the queue worker
PushNotificationsWorker, it signals the worker to requeue the event and
try again later.
2. The exception may also be raised (this will be added in the next
commit) on codepaths coming from a request to an API endpoint (for
example, adding a token via users/me/apns_device_token). In that case,
we need to provide a good error to the API user, and basing this
exception on JsonableError allows that.
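A minimal sketch of the exception itself; the 502 status code here is an
assumption chosen for illustration, since it tells API clients the failure
was in an upstream service (the bouncer) rather than in their own request:

    from zerver.lib.exceptions import JsonableError

    class PushNotificationBouncerRetryLaterError(JsonableError):
        # Assumption: surface the bouncer failure to API clients as a 502,
        # while queue processors catch the exception type itself and requeue.
        http_status_code = 502

For role 1, PushNotificationsWorker then just needs to catch this exception
type and requeue the event for a later retry.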
This includes adding a new endpoint to the push notification bouncer
interface, and code to call it appropriately after resetting a user's
personal API key.
When we add support for a user having multiple API keys, we may need
to add an additional key here to support removing keys associated with
just one client.
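As a rough sketch of the calling side (the endpoint name and helper
locations here are hypothetical, not necessarily the real interface):

    from zerver.lib.push_notifications import send_to_push_bouncer  # assumed location
    from zerver.lib.utils import generate_api_key                   # assumed location

    def do_regenerate_api_key(user_profile) -> None:
        user_profile.api_key = generate_api_key()
        user_profile.save(update_fields=["api_key"])
        # New: call the bouncer endpoint added in this commit, so the bouncer
        # forgets the push device tokens registered for this user.
        send_to_push_bouncer(
            "POST",
            "push/unregister/all",      # hypothetical endpoint name
            {"user_id": user_profile.id},
        )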
The function only used the user's realm anyway, so this is a cleaner
API.
This should also make it more convenient to permanently delete
messages manually, since one doesn't have to fetch a random user in
the realm in order to delete a message using the management shell.
No functional change.
MigrationsTestCase is intentionally omitted from this, since migrations
tests are different in nature, and whatever setUp() ZulipTestCase may do
in the future, MigrationsTestCase may not necessarily want to replicate.
One small change in behavior is that this creates an array with all the
row_objects at once, rather than creating them 1000 at a time.
That should be fine, given that the client batches these in units of
10000 anyway, and so we're just creating 10K rows of a relatively
small data structure in Python code here.
Fixes #1727.
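In simplified form, the new shape of the row creation looks like this
sketch (generalized over the specific model):

    from typing import Any, Dict, List

    def bulk_create_rows(model: Any, rows: List[Dict[str, Any]]) -> None:
        # Build every row object up front -- with client batches capped at
        # 10000 rows, this stays a small in-memory list -- then insert them
        # with a single bulk_create(), instead of 1000 at a time.
        row_objects = [model(**row) for row in rows]
        model.objects.bulk_create(row_objects)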
With the server down, apply migrations 0245 and 0246. 0246 will remove
the pub_date column, so it's essential that the previous migrations
ran correctly to copy data before running this.
There's no reason for this to be a category of error that emails the
server administrator, since there's a good chance that fixing it will
need to happen in the Zulip codebase, not via administrator action.
Historically, Zulip's implementation of wildcard mentions never
triggered either email or push notifications, instead being limited to
desktop notifications and the "mentions" counter.
We fix this just by plumbing the "wildcard_mentioned" flag through our
system.
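In simplified form, the notifiability check just needs to treat the new
flag like a personal mention; this is a sketch, not the actual code path:

    from typing import Set

    def message_is_notifiable(flags: Set[str], is_private_message: bool) -> bool:
        # A wildcard mention (@all / @everyone) now carries its own
        # "wildcard_mentioned" flag and triggers email/push notifications
        # the same way a personal mention does.
        mentioned = "mentioned" in flags or "wildcard_mentioned" in flags
        return is_private_message or mentioned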
Implements much of
https://github.com/zulip/zulip/issues/6040#issuecomment-510157264.
We're also now ready to seriously work on #3750.
Previous cleanups (mostly the removals of Python __future__ imports)
were done in a way that introduced leading newlines. Delete leading
newlines from all files, except static/assets/zulip-emoji/NOTICE,
which is a verbatim copy of the Apache 2.0 license.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This provides a clean warning and 4xx error, rather than a 500, for
this corner case, which is very likely user error.
The test here is awkward because we have to work around
https://github.com/zulip/zulip/issues/12362.
This contains the email of the user to whom the notification is being
sent. It has not been used in any past mobile releases, so it is safe
to remove it.
Since user_id is stable for the user while email is not, it's better
for mobile to start consuming `user_id` instead of email.
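Together, these two changes leave the common notification payload shaped
roughly like the sketch below (the surrounding fields are illustrative; the
point is dropping the email and adding `user_id`):

    from django.conf import settings

    def get_base_payload(user_profile) -> dict:
        # Fields shared by the APNs and GCM payloads (simplified).
        return {
            "server": settings.EXTERNAL_HOST,
            "realm_id": user_profile.realm.id,
            "realm_uri": user_profile.realm.uri,
            "user_id": user_profile.id,    # stable, unlike email
            # "user": user_profile.email,  # removed: unused by mobile releases
        }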
When a bunch of messages with active notifications are all read at
once -- e.g. by the user choosing to mark all messages, or all in a
stream, as read, or just scrolling quickly through a PM conversation
-- there can be a large batch of this information to convey. Doing it
in a single GCM/FCM message is better for server congestion, and for
the device's battery.
The corresponding client-side logic is in zulip/zulip-mobile#3343.
Existing clients today only understand one message ID at a time; so
accommodate them by sending individual GCM/FCM messages up to an
arbitrary threshold, with the rest only as a batch.
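A simplified sketch of that accommodation (the threshold and payload field
names are illustrative):

    from typing import Any, Dict, List

    # Older clients understand only one removed-message ID per GCM/FCM message.
    MAX_LEGACY_REMOVE_MESSAGES = 2  # illustrative threshold

    def build_remove_payloads(message_ids: List[int]) -> List[Dict[str, Any]]:
        payloads: List[Dict[str, Any]] = []
        # Legacy form: one ID per message, for the first few IDs only.
        for message_id in message_ids[:MAX_LEGACY_REMOVE_MESSAGES]:
            payloads.append({"event": "remove", "zulip_message_id": message_id})
        # Batched form: every ID in a single message, for newer clients.
        payloads.append({
            "event": "remove",
            "zulip_message_ids": ",".join(str(message_id) for message_id in message_ids),
        })
        return payloads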
Also add an explicit test for this logic. The existing tests
that happen to cause this function to run don't exercise the
last condition, so without a new test `--coverage` complains.
Since da8f4bc0e back in August, this control flow has caused
`flags.active_mobile_push_notification` to be cleared if we don't send
these `remove` messages at all, and if we send them directly to GCM...
but not if we send them via the Zulip notification bouncer.
As a result, on a server configured to send `remove` notification-messages
via the bouncer, we accumulate "active" messages and never clear them.
If the user then does `mark_all_as_read`, we end up sending a `remove`
for each of those messages again, and all in one giant burst. We've
seen puzzling bursts of hundreds of removals pass through the bouncer
since turning on removals on chat.zulip.org; it's likely many of them
are caused by this bug.
This issue was made more acute with f4478aad5, which unconditionally
enabled removals.
Test added by tabbott.
The client-side fix to make these not a problem was in release
16.2.96, of 2018-08-22. We've been sending them from the
development community server chat.zulip.org since 2018-11-29.
We started forcing clients to upgrade with commit fb7bfbe9a,
deployed 2018-12-05 to zulipchat.com.
(The mobile app unconditionally makes a request to a route on
zulipchat.com to check for this kind of forced upgrade, so that
applies to mobile users of any Zulip server.)
So at this point it's long past safe for us to unconditionally
send these. Hardwire the old `SEND_REMOVE_PUSH_NOTIFICATIONS`
setting to True, and simplify it out.
I was hoping this would make things faster... it does, but sadly only
by about 70ms, 5% of this file's test runtime.
It sure does make this file rather less action-at-a-distance, though,
as well as fixing some duplication.
If we make a practice on the Zulip server of always explicitly setting
the desired priority, then when an old server doesn't set the priority
we can reasonably have the bouncer make a guess.
That is, this now allows a Zulip server to set the `priority`; but if
it doesn't, we use upstream's default value, which has the same effect
as our previous behavior of never setting it at all.
But when this is deployed to the push notifications bouncer server, it
does allow another server to set priority when pushing notifications
through the bouncer.
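On the bouncer side, the handling is roughly as in this sketch (the GCM
client call is a stand-in, not a real library API):

    from typing import Any, Dict, Optional

    def send_android_push(gcm_client: Any, data: Dict[str, Any],
                          priority: Optional[str] = None) -> None:
        kwargs: Dict[str, Any] = {}
        if priority is not None:
            # The requesting Zulip server specified a priority; pass it through.
            kwargs["priority"] = priority
        # When priority is None (e.g. an older server pushing through the
        # bouncer), omit the argument entirely, leaving the upstream library's
        # default in effect -- the same behavior as before this change.
        gcm_client.send(data=data, **kwargs)  # stand-in for the real GCM call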
This adds a new API for sending basic analytics data (number of users,
number of messages sent) from a Zulip server to the Zulip Cloud
central analytics database, which will make it possible for servers to
elect to have their usage numbers counted in published stats on the
size of the Zulip ecosystem.