We use the plumbing introduced in a previous commit to now raise
PushNotificationBouncerRetryLaterError in send_to_push_bouncer in case
of issues with talking to the bouncer server. That's a better way of
dealing with the errors than the previous approach of returning a
"failed" boolean, which generally wasn't checked in the code anyway and
did nothing.
The PushNotificationBouncerRetryLaterError exception will be nicely
handled by queue processors to retry sending again, and due to being a
JsonableError, it will also communicate the error to API users.
We add PushNotificationBouncerRetryLaterError as an exception to signal
an error occurred when trying to communicate with the bouncer and it
should be retried. We use JsonableError as the base class, because this
signal will need to work in two roles:
1. When the push notification was being issued by the queue worker
PushNotificationsWorker, it will signal to the worker to requeue the
event and try again later.
2. The exception will also possibly be raised (this will be added in the
next commit) on codepaths coming from a request to an API endpoint (for
example to add a token, to users/me/apns_device_token). In that case,
we'll need to provide a good error to the API user, and basing this
exception on JsonableError will allow that.
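A hedged sketch of how those two roles play out; the worker shape and the
requeue helper below are simplified stand-ins for the actual queue processor
code:

    from zerver.lib.exceptions import JsonableError

    class PushNotificationBouncerRetryLaterError(JsonableError):
        # Role 2: as a JsonableError, this turns into a proper JSON error
        # response when raised from an API endpoint such as
        # users/me/apns_device_token.
        pass

    class PushNotificationsWorker(QueueProcessingWorker):
        def consume(self, event: dict) -> None:
            try:
                handle_push_notification(event["user_profile_id"], event)
            except PushNotificationBouncerRetryLaterError:
                # Role 1: the bouncer was unreachable; requeue the event so
                # we try sending the notification again later.
                requeue_event("missedmessage_mobile_notifications", event)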
This includes adding a new endpoint to the push notification bouncer
interface, and code to call it appropriately after resetting a user's
personal API key.
When we add support for a user having multiple API keys, we may need
to add an additional key here to support removing keys associated with
just one client.
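A rough sketch of the shape of that call; the endpoint name, payload keys,
and helpers here are hypothetical, illustrating the interface rather than
naming the real route:

    def clear_push_device_tokens(user_profile_id: int) -> None:
        # Called right after the user's personal API key is reset.
        if uses_notification_bouncer():
            post_data = {"user_id": user_profile_id}
            # With multiple API keys per user, an extra key in this payload
            # could scope the removal to tokens registered by one client.
            send_to_push_bouncer("POST", "push/unregister/all", post_data)
            return
        PushDeviceToken.objects.filter(user_id=user_profile_id).delete()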
Fixes #1727.
With the server down, apply migrations 0245 and 0246. 0246 will remove
the pub_date column, so it's essential that the previous migrations
ran correctly to copy data before running this.
This brings us in line, and also allows us to style these more like
unordered lists, which is visually more appealing.
On the backend, we now use the default list blockprocessor + sane list
extension of python-markdown to get proper list markup; on the
frontend, we mostly return to upstream's code as they have followed
CommonMark on this issue.
Using <ol> here necessarily means lists written like 3, 4, 7 will be
renumbered sequentially; hopefully users will be OK with the change.
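A standalone illustration of the backend side, calling python-markdown's
sane_lists extension directly (Zulip's markdown engine wires this into its
own processor chain):

    import markdown

    text = "3. first\n4. second\n7. third\n"
    html = markdown.markdown(text, extensions=["sane_lists"])
    print(html)
    # Emits <ol>/<li> markup, so browsers number the items sequentially
    # instead of preserving the literal 3, 4, 7 from the source.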
Fixes #12822.
Historically, Zulip's implementation of wildcard mentions never
triggered either email or push notifications, instead being limited to
desktop notifications and the "mentions" counter.
We fix this just by plumbing the "wildcard_mentioned" flag through our
system.
Implements much of
https://github.com/zulip/zulip/issues/6040#issuecomment-510157264.
We're also now ready to seriously work on #3750.
This contains the email of the user to whom the notification is being
sent. This has not been used in any past mobile releases, so it is
safe to remove it.
Since user_id is stable for the user while email is not, it's better
to start consuming `user_id` instead of email on mobile.
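A hedged sketch of the payload change, with field names simplified for
illustration:

    def get_base_notification_payload(recipient) -> dict:
        return {
            "recipient_type": "private",
            # Key on the stable user ID; the old "user" field carried the
            # email, which mobile never consumed and which can change.
            "user_id": recipient.id,
        }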
When a bunch of messages with active notifications are all read at
once -- e.g. by the user choosing to mark all messages, or all in a
stream, as read, or just scrolling quickly through a PM conversation
-- there can be a large batch of this information to convey. Doing it
in a single GCM/FCM message is better for server congestion, and for
the device's battery.
The corresponding client-side logic is in zulip/zulip-mobile#3343.
Existing clients today only understand one message ID at a time; so
accommodate them by sending individual GCM/FCM messages up to an
arbitrary threshold, with the rest only as a batch.
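A hedged sketch of that compromise; the threshold, payload fields, and send
helper below are illustrative rather than the exact implementation:

    MAX_INDIVIDUAL_REMOVE_MESSAGES = 10  # arbitrary cutoff for legacy clients

    def send_remove_push_notifications(device_tokens: list, message_ids: list) -> None:
        # Existing clients only understand one message ID per GCM/FCM
        # message, so send the first few individually...
        for message_id in message_ids[:MAX_INDIVIDUAL_REMOVE_MESSAGES]:
            send_gcm_message(device_tokens, {"event": "remove",
                                             "zulip_message_id": message_id})
        # ...then send the full set once as a batch for newer clients
        # (zulip/zulip-mobile#3343), saving server load and device battery.
        send_gcm_message(device_tokens, {"event": "remove",
                                         "zulip_message_ids": ",".join(map(str, message_ids))})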
Also add an explicit test for this logic. The existing tests
that happen to cause this function to run don't exercise the
last condition, so without a new test `--coverage` complains.
Since da8f4bc0e back in August, this control flow has caused
`flags.active_mobile_push_notification` to be cleared if we don't send
these `remove` messages at all, and if we send them directly to GCM...
but not if we send them via the Zulip notification bouncer.
As a result, on a server configured to send `remove` notification-messages
via the bouncer, we accumulate "active" messages and never clear them.
If the user then does `mark_all_as_read`, we end up sending a `remove`
for each of those messages again, and all in one giant burst. We've
seen puzzling bursts of hundreds of removals pass through the bouncer
since turning on removals on chat.zulip.org; it's likely many of them
are caused by this bug.
This issue was made more acute with f4478aad5, which unconditionally
enabled removals.
Test added by tabbott.
This is the only server-side change required for the FCM migration!
Optionally, at some point in the future we might choose to migrate
to the new ("v1") API which FCM also offers. Nothing revolutionary
but there are some nice things about it:
https://firebase.google.com/docs/cloud-messaging/migrate-v1
The client-side fix to make these not a problem was in release
16.2.96, of 2018-08-22. We've been sending them from the
development community server chat.zulip.org since 2018-11-29.
We started forcing clients to upgrade with commit fb7bfbe9a,
deployed 2018-12-05 to zulipchat.com.
(The mobile app unconditionally makes a request to a route on
zulipchat.com to check for this kind of forced upgrade, so that
applies to mobile users of any Zulip server.)
So at this point it's long past safe for us to unconditionally
send these. Hardwire the old `SEND_REMOVE_PUSH_NOTIFICATIONS`
setting to True, and simplify it out.
If we make a practice on the Zulip server of always explicitly setting
the desired priority, then when an old server doesn't set the priority
we can reasonably have the bouncer make a guess.
That is, this allows a Zulip server to now set the `priority`; but if
it doesn't, we use upstream's default value, which has the same effect
as not setting it at all, as we've always done previously.
But when this is deployed to the push notifications bouncer server, it
does allow another server to set priority when pushing notifications
through the bouncer.
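A rough sketch of that fallback; gcm_send stands in for whatever the bouncer
actually uses to talk to GCM/FCM:

    from typing import Optional

    def send_android_push_notification(devices: list, data: dict,
                                       priority: Optional[str] = None) -> None:
        if priority is None:
            # No priority was specified, e.g. the request came from an older
            # Zulip server via the bouncer; omit the field so the upstream
            # library's default applies, matching the historical behavior.
            gcm_send(devices, data)
        else:
            gcm_send(devices, data, priority=priority)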
This adds an early return statement to handle_push_notification when
push_notification_enabled() returns False.
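A minimal sketch of the guard; only the early return at the top reflects this
change, and the rest of the function is elided:

    def handle_push_notification(user_profile_id: int, missed_message: dict) -> None:
        if not push_notification_enabled():
            # Neither mobile push nor the bouncer is configured (the normal
            # state in development and most tests), so return before doing
            # any of the per-message database queries below.
            return
        # ... look up the user and message, build payloads, send to APNs/GCM ...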
This is a significant performance optimization for our unit tests
because the push notifications code path does a number of database
queries, and this change means we don't end up doing those queries
the hundreds of times we send PMs or mentions in our tests where we're
not trying to test the push notifications functionality.
This should also have a small message sending scalability improvement
for any Zulip servers without push notifications enabled.
Tweaked by tabbott to fix a few small issues.
Fixes #10895.
If a user deletes a message between when it triggered a potential push
notification for another user, and when that notification was actually
sent, we'd end up with a situation where the `Message` table didn't
have an entry for the requested message ID.
The clean fix for this is to still throw an exception in the event
that the message doesn't exist at all, but if it exists in
ArchivedMessage, don't throw a user-facing exception.
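A hedged sketch of the lookup described above; the wrapper function is a
simplified stand-in for where this check actually lives:

    from zerver.models import ArchivedMessage, Message

    def access_message_for_push(message_id: int):
        try:
            return Message.objects.get(id=message_id)
        except Message.DoesNotExist:
            if ArchivedMessage.objects.filter(id=message_id).exists():
                # The message was deleted after triggering the notification;
                # skip quietly instead of raising a user-facing exception.
                return None
            # The message never existed at all; keep raising in that case.
            raise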
This seems to happen when Apple is having a partial outage on some of
their APNS shards; it should be treated like other networking errors
connecting to APNS (with an automatic retry).
This prevents the warning about push notifications not being
registered for from being printed in development environment startup
by default. In development, that's the expected state, and we don't
need to spam up the output with that notice.
The previous content made it sound like we were actually sending a
push notification, which could be confusing/alarming in some cases
(see e.g. 9c224ccdd3). Instead, we make
clear that we're sending it to all clients (which one might correctly
suspect is vacuous in the development environment).
While it could make sense to print these logging statements at WARN
level on server startup, it doesn't make sense to do so on every
message (though it perhaps did make sense to do so before more recent
changes added good ways to discover you forgot to configure push
notifications).
Instead, we now just do a WARN log on queue processor startup, and
then at DEBUG level for individual messages.
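A small sketch of the resulting logging split (logger name and messages are
illustrative):

    import logging

    logger = logging.getLogger("zerver.lib.push_notifications")

    def check_push_notifications_configured() -> None:
        # Run once at queue processor startup: one WARN is enough to tell an
        # admin that push notifications aren't set up.
        if not push_notification_enabled():
            logger.warning("Mobile push notifications are not configured.")

    def log_skipped_push_notification(user_profile_id: int) -> None:
        # Per-message, log only at DEBUG to avoid spamming on every message.
        logger.debug("Skipping push notification for user %s: not configured",
                     user_profile_id)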
Fixes #10894.