Appending to bytes in a loop leads to a quadratic slowdown since
Python doesn’t optimize this for bytes like it does for str.
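For illustration (a hypothetical snippet, not the code this commit
touches), collecting chunks and joining once, or using a bytearray,
avoids the repeated reallocation:
```
chunks = [b"hello ", b"world", b"!"]

# Quadratic pattern to avoid: each += copies everything accumulated so far.
buf = b""
for chunk in chunks:
    buf += chunk

# Linear alternatives:
joined = b"".join(chunks)

buf2 = bytearray()
for chunk in chunks:
    buf2 += chunk  # bytearray appends in place
result = bytes(buf2)

assert buf == joined == result
```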
Signed-off-by: Anders Kaseorg <anders@zulip.com>
While accepting an invitation from a user, there was no condition in
place to check if the user sending the invitation was now
deactivated.
Skip sending notifications about newly-joined users to users who are
now disabled.
Fixes #18569.
We don't have to go to the database to get the Recipient
fields for `user_profile.recipient`.
See also 85ed6f332a from a little
over a year ago--it's very similar.
The bug here probably didn't come up too much in
practice, but if we were adding a user to multiple
streams when they already had used all N available
colors, all the new streams would be assigned the same
color, since the size of used_colors would stay at N,
thwarting our little modulo-len hackery.
It's not a terrible bug, since users can obviously
customize their stream colors as they see fit.
Usually when we are adding a user to multiple streams,
the users are fairly new, and thus don't have many
existing streams, so I have never heard this bug
reported in the field.
Anyway, assigning the colors in bulk seems to make more
sense, and I added some tests.
For the situations where all the colors have already
been used, I didn't put a ton of thought into exactly
which repeated colors we want to choose; instead, I
just ensure they're different modulo 24. It's possible
that we should just have more than 24 canned colors, or
we should just assign the same default color every time
and let users change it themselves (once they've gone
beyond the 24, to be clear). Or maybe we can just do
something smarter here. I don't have enough time for a
deep dive on this issue.
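A rough sketch of the bulk-assignment idea described above (the palette
and function names here are placeholders, not the real code):
```
# Placeholder palette; the actual list of canned stream colors has 24 entries.
STREAM_COLOR_PALETTE = ["#76ce90", "#fae589", "#a6c7e5", "#e79ab5"]

def pick_stream_colors(used_colors, num_needed):
    # Prefer colors the user hasn't used yet; once the palette is
    # exhausted, keep cycling so the colors handed out in one batch at
    # least differ modulo the palette size.
    available = [color for color in STREAM_COLOR_PALETTE if color not in used_colors]
    picked = []
    for i in range(num_needed):
        if available:
            picked.append(available.pop(0))
        else:
            picked.append(
                STREAM_COLOR_PALETTE[(len(used_colors) + i) % len(STREAM_COLOR_PALETTE)]
            )
    return picked
```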
Part of our codepath for subscribing users involves
fetching the users' existing subscriptions to make sure
we can do things like properly report to the clients
that the users were already subscribed. This codepath
used to be coupled to code that helped users maintain
unique stream colors.
Suppose you are creating a new stream, and you are
importing users from an older stream with 15k
subscribers, and each of your users is subscribed to
about 20 streams.
The prior code, instead of filtering on recipient_id,
would literally look at every subscription for every
user, which was kind of crazy if you didn't understand
the pick-stream-color complications.
Before this commit, we would fetch 300k rows with 15
columns each (granted, all but one of the columns are
bool/int). That's a total of 4.5 million tiny objects
that we had to glom into Django ORM objects and slice
and dice.
After this commit, we would fetch exactly zero rows
for the are-they-already-subscribed logic.
Yes, ZERO.
If we were to mistakenly try to re-add the same 15k
subscribers to the new stream (under the new code), we
would now fetch 15k Sub rows instead of 300k.
It is worth looking at the prior commit. We go to
great pains to ensure that users get new stream colors
when we invite them to a stream, and we still fetch a
bunch of data for that. Instead of 4.5 million cells,
it's more like 600k cells (2 columns per row), and it's
less than that insofar as some users may only
have 24 distinct colors among their many streams.
It's a lot of work.
This commit sets us up for the next commit, which will
save us a very expensive query.
If you are adding 15k users to a stream, and each user
has about 20 existing streams, then we need to retrieve
300k rows from the database to figure out which stream
colors they already have. We don't need all the extra
fields from Subscription, so now we get just the two
values we need for making a color map.
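A sketch of the narrowed query (the function name and the exact filter
are assumptions, not the real code):
```
from collections import defaultdict

from zerver.models import Recipient, Subscription  # Zulip models

def get_used_stream_colors(user_ids):
    # Only two columns come back per row; no full Subscription ORM objects.
    used_colors = defaultdict(set)
    rows = Subscription.objects.filter(
        user_profile_id__in=user_ids,
        recipient__type=Recipient.STREAM,
    ).values_list("user_profile_id", "color")
    for user_id, color in rows:
        used_colors[user_id].add(color)
    return used_colors
```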
In the next commit we'll eliminate the other use case
for the big query, and I will explain in greater
depth how splitting out the color-picking code can
be a huge win. It is possible that some product decisions
could make this codepath easier. We could also do some
engineering specific to stream colors, such as caching
which colors users have already used.
This does cost us an extra round trip to the database.
Having the `wildcard_mentions_notify` setting turned on does
not necessarily mean that the user will receive notification
for that message. There is more nuance to this, as explained
in the updated comment.
We recently ran into a payload in production that didn't contain
an event type at all. A payload where we can't figure out the event
type is quite rare. Instead of letting these payloads run amok, we
should raise a more informative exception for such unusual payloads.
If we encounter too many of these, then we can choose to conduct a
deeper investigation on a case-by-case basis.
With some changes by Tim Abbott.
Given that these values are uuids, it's better to use UUIDField which is
meant for exactly that, rather than an arbitrary CharField.
This requires modifying some tests to use valid uuids.
We avoid repeating the same calculations over and
over again for the same stream.
This helps, but the real bottleneck in this function
is that send_event usually takes at least a millisecond,
and that adds up quickly if you're doing something
like subscribing 5k users to a new stream.
GIF files can be `.GIF`, and also we determine the file format by
inspecting the image data, so there's no reason to have this
assertion.
(The code for serving still images does not rely on the file being a
GIF.)
We keep process_new_human_user out of
the atomic block because it involves many
different operations and also sends events.
We tried enclosing the event sends in on_commit,
but that would require many changes in the
tests, so we have skipped it for now.
Updates testing helpers in `event_schema.py` for `do_update_message` so
that all stream message fields are present in any edits / updates to
stream messages. Adds verification tests of events returned from private
message edits and from stream message content-only and topic-only edits.
Updates the `update_message` event type to always include a `stream_id`
field when the message being edited is a stream message. This change
aligns with the current definition of the `/get-events` endpoint
in the OpenAPI documentation.
It is better to press on, than stop halfway through due to a user
whose email no longer works. The exception is already logged, which
is sufficient here, as this is generally run interactively.
These fundamentally tested send_email, not build_email, and thus
belong in TestSendEmail, not TestBuildEmail. They also duplicated the
code in test_send_email_exceptions; reuse it.
This allows verify_uploads to use the database
as the authoritative source for what attachments
we need to look for when we're verifying the
images got exported properly, while still
also verifying attachment.json is correct.
It is better for the verifying code to just explicitly
ensure that the exported file bytes match the bytes
in the test image. This introduces a tiny bit
more I/O.
It's easier to read the code without the intermediate
full_data dictionary that obscures where the files live.
We also avoid some unnecessary file I/O in the tests.
We do a sanity check for every table
that gets written to user.json as part of
the single-user export.
If we add more tables to the single-user export,
the test that I modified here will now ask
the author to add a new checker function, which
means we should always have at least a basic
sanity check for every exported table as long
as we stay in this new paradigm.
We also remove a little bit of old code that
became redundant.
This replaces the TERMS_OF_SERVICE and PRIVACY_POLICY settings with
just a POLICIES_DIRECTORY setting, in order to support settings (like
Zulip Cloud) where there are more policies than just those two.
With minor changes by Eeshan Garg.
We do s/TOS/TERMS_OF_SERVICE/ on the name, and while we're at it,
remove the assumed zerver/ namespace for the template, which isn't
correct -- Zulip Cloud related content should be in the corporate/
directory.
We now complain if a test author sends a stream message
that does not result in the sender getting a
UserMessage row for the message.
This is basically 100% equivalent to complaining that
the author failed to subscribe the sender to the stream
as part of the test setup, as far as I can tell, so the
AssertionError instructs the author to subscribe the
sender to the stream.
We exempt bots from this check, although it is
plausible we should only exempt the system bots like
the notification bot.
I considered auto-subscribing the sender to the stream,
but that can be a little more expensive than the
current check, and we generally want test setup to be
explicit.
If there is some legitimate way that a subscribed human
sender can't get a UserMessage, then we probably want
an explicit test for that, or we may want to change the
backend to just write a UserMessage row in that
hypothetical situation.
For most tests, including almost all the ones fixed
here, the author just wants their test setup to
realistically reflect normal operation, and often devs
may not realize that Cordelia is not subscribed to
Denmark or not realize that Hamlet is not subscribed to
Scotland.
Some of us don't remember our Shakespeare from high
school, and our stream subscriptions don't even
necessarily reflect which countries the Bard placed his
characters in.
There may also be some legitimate use case where an
author wants to simulate sending a message to an
unsubscribed stream, but for those edge cases, they can
always set allow_unsubscribed_sender to True.
These variables can be unset if the `os.path.exists` check fails.
That should be rare, since we've previously checked the files do
exist before getting here.
While races here are unlikely, it is most correct to enforce this
invariant at the database layer, and having a database-level
constraint makes the models file a bit more readable.
These are not considered to be "personal"
info, even if you upload them, so we
don't export them.
Generally the only folks who upload
these are admins, who can easily get
them in other ways. In fact, anybody
can get these via the app.
We now ensure that all message ids are sorted BEFORE
we split them into batches.
We now do a few extra "slim" queries to get message
ids up front.
But, now, when we divide them into batches, we no
longer run 2 or 3 different complicated queries in
a loop. We just basically hydrate our message ids,
so `write_message_partials` should be easy to reason
about.
This change also means that for tiny realms with
< 1000 messages you will always have just one
json file, since we aggregate the ids from the
queries before batching.
This accomplishes a few things:
* It extracts `chunkify` rather than having us
clumsily track chunking-related stuff in a
big loop that is doing other stuff.
* It makes it so that all message ids
in message-000001.json < message-000002.json.
* It makes it easier for us to customize
the messages we send to a single user
(coming soon).
BTW we probably have a slicker version of chunkify
somewhere in our codebase, but I couldn't remember
where.
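The extracted helper is presumably of roughly this shape (illustrative
sketch, not necessarily the exact implementation):
```
def chunkify(lst, chunk_size):
    # Yield successive lists of at most chunk_size items, preserving order.
    for i in range(0, len(lst), chunk_size):
        yield lst[i : i + chunk_size]

# Sorting all the ids up front guarantees that every id in
# messages-000001.json is smaller than every id in messages-000002.json.
message_ids = sorted({17, 3, 1200, 42, 998, 2501})
for batch in chunkify(message_ids, 1000):
    ...  # hydrate these ids and write one messages-NNNNNN.json file
```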
Following b3c58f454f, we want to clean up
old topics that may contain the disallowed characters. The Message table
is large, so we go in batches, making sure we limit topic fetches and
UPDATE queries to no more than BATCH_SIZE Message rows per query.
Now all file writes go through our three
helper functions, and we consistently
write a single log message after the file
gets written.
I killed off write_message_exports, since
all but one of its callers can call
write_table_data, which automatically
sorts data. In particular, our Message
and UserMessage data will now be sorted
by ids.
This probably just postpones the list creation until
Django builds the "IN" query, but semantically it's
good to work in sets where we don't have any
meaningful ordering of the list that gets used.
The immediate benefit of this is stronger mypy
checks (avoiding the ugly union caused by message
files).
The subsequent commit will add sorting.
We have test coverage on all these lines insofar
as if you comment out the lines, tests will
explode (i.e. more than superficial line
coverage).
The distinction here wasn't super meaningful
due to the way we order our "elif" statements,
but we want to reserve "normal_parent" for the
majority of use cases, where you simply tell
the Config what the "foreign_key" is.
For realm-wide exports, there is no reason to query
inefficiently against a list of modified users.
We move the Config out of the common child configs.
Even though Django usually treats foo__in
and foo_id__in identically for filters where
foo is a ForeignKey type, we want to insist
on somewhat more consistent syntax, because
we have the odd combo of type and type_id
in Recipient, where type_id is kinda like a
foreign key, but not a ForeignKey.
So we assert for now that all our include_rows
values end in "_id__in".
Zulip shows two guides on how to reply: the first by
the welcome bot and the second via the intro_reply hotspot.
To simplify and avoid redundancy, the intro_reply hotspot
is removed.
Fixes #20482.
Force PostgreSQL to return reactions in ID order, which
is generally chronological order. This results in the
frontend displaying reactions in that order.
Fixes #20060.
In many of our stream notification messages, we make use of the
same silent user mention syntax, the template for which was always
hardcoded. This commit adds a helper function that all relevant
callers can call to get the right syntax when mentioning users.
Thanks to Tim Abbott for this suggestion!
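The helper is presumably along these lines (name and signature are
assumptions):
```
def silent_mention_syntax_for_user(user_profile):
    # Zulip's silent mention syntax: @_**Full Name|user_id**
    return f"@_**{user_profile.full_name}|{user_profile.id}**"
```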
Removed the existing empty narrow divs from app/home.html and created
a new JavaScript module to dynamically load empty narrow messages
using a Handlebars template.
Fixes #18797.
The original intention of this was to prevent coding
errors with realm getters that don't, um, filter
on realm.
Unfortunately, you can still write a broken realm getter
that forgets to filter on realm, but which returns a
Set, and the new safeguards won't see any difference.
We could make all the getters return sorted lists
instead, but that's for another day.
This code does serve another purpose, which is to
prevent egregious bugs in the import itself.
The diff here is ugly, but to summarize:
BEFORE IMPORT:
    define get_user_id
    define get_huddle_hashes

AFTER IMPORT AND MAKING GETTERS:
    check realm id
    define assert_realm_values
    verify emoji codes
    check huddle hashes
We don't have automated test coverage on this yet,
but below are the results from manual testing.
Note that we include the realm icon and logo even
though they were not created by Cordelia.
$ ./manage.py export_single_user cordelia@zulip.com
$ (cd /tmp/zulip-export-4v3mo802/ && find .)
.
./emoji
./emoji/2
./emoji/2/emoji
./emoji/2/emoji/images
./emoji/2/emoji/images/3.jpg
./emoji/records.json
./messages-000001.json
./realm_icons
./realm_icons/2
./realm_icons/2/night_logo.original
./realm_icons/2/night_logo.png
./realm_icons/2/icon.png
./realm_icons/2/icon.original
./realm_icons/records.json
./avatars
./avatars/2
./avatars/2/c5125af0447f4d66ce34c1b32eac75ac27ebe0e7.original
./avatars/2/c5125af0447f4d66ce34c1b32eac75ac27ebe0e7.png
./avatars/records.json
./uploads
./uploads/2
./uploads/2/68
./uploads/2/68/xyEkC5dTIp8m42_6HJ3kBfdt
./uploads/2/68/xyEkC5dTIp8m42_6HJ3kBfdt/denver.jpg
./uploads/2/96
./uploads/2/96/ol5WE6RTUntvuPDSpJUrYTim
./uploads/2/96/ol5WE6RTUntvuPDSpJUrYTim/denver.jpg
./uploads/records.json
./user.json
There are tactical reasons to remove this assertion.
Basically, the reason it's safe to remove is that it's
been around a long time and we would have seen this
operationally. Also, the check to make sure that the
S3 filename thingy matches the avatar hash is a much
stronger check.
We will soon restore a stronger version of this check
that applies to all of our asset types (emojis/avatars/etc.).
This makes it easier to read the calling code and see
the big picture of how the four asset types are
organized.
I also handle uploads first, to be similar to the local
code.
This code is well tested--you can modify any of the callers
to pass in a wrong value of `object_key` and get a failing
test.
The comment explains in more detail, but basically we'd skip
exercising a bit of code in the signup code path if there were no
messages in the last week, resulting in the query count not matching.
"help" command occurs in the command list in
initial PMs or when the bot doesn't understand the message. It doesn't
occur when the bot is responding to the "help" command itself.
This commit adds code to check whether a user is allowed to use
wildcard mention in a large stream or not while editing a message
based on the realm settings.
Previously this was only checked while sending a message; thus a user
could easily use a wildcard mention by first sending a normal
message and then adding a wildcard mention by editing it.
1. The initial welcome message now contains less detail.
2. The bot now responds to these commands: "apps", "edit profile",
"dark mode", "light mode", "streams", "topics", "message formatting",
"keyboard shortcuts" and "help" - the bot still responds if there are
slight variations in these commands.
3. Tests have been added to check whether the bot responds to the advertised
commands (with variations) and gives a negative message if it doesn't
understand the message.
With substantial tweaks by tabbott.
Fixes #19900.
A confirmation link takes a user to the check_prereg_key_and_redirect
endpoint, before getting redirected to POST to /accounts/register/. The
problem was that validation was happening in the check_prereg_key_and_redirect
part and not in /accounts/register/ - meaning that one could submit an
expired confirmation key and be able to register.
We fix this by moving validation into /accounts/register/.
Migrates the `/update-subscription-settings` api endpoint to the
`ignored_parameters_unsupported` model, which is also currently used
by `/update-settings` and `update-realm-user-settings-defaults`.
This change is a step towards preparing for an eventual migration to
have all endpoints return an `ignored_parameters_unsupported` block.
Previously the `/update-subscription-settings` endpoint returned a
copy of the data object sent in the request.
Fixes #15307.
django-scim2 doesn't order the rows when fetching them in response to a
query using the filter syntax. We ensure that ORDER BY id is always
appended to the SQL queries.
We add the following tables to the user export:
AlertWord
CustomProfileFieldValue
RealmAuditLog
Service
UserActivity
UserActivityInterval
UserCount
UserGroup
UserHotspot
UserPresence
UserTopic
Except for UserCount, we achieve this by sharing
code with the realm export via
add_user_profile_child_configs.
UserCount is handled slightly differently than realm
exports due to which key we trigger off.
It's possible that RealmAuditLog is incomplete for
single users, since we may also want rows where they
are the acting_user. This commit finds rows where
they are the modified_user. For non-admins I believe
it's rarely the case that they are the actor, and
they will tend to be the modified user if the two
fields are different at all. For admins it's
arguable we want to see both changes they enacted
and changes that affected them.
This commit switches the BigBlueButton integration
to use SHA256 instead of SHA1 as BigBlueButton supports
it and scalelite does now, too.
Fixes #19966.
This commit changes web_public_streams_enabled to return False if
realm.enable_spectator_access is False. This is added so that
creating web-public streams is not allowed if enable_spectator_access
is False.
Python's behaviour on `sys.exit` is to wait for all non-daemon threads
to exit. In the context of the missedmessage_emails worker, if any
work is pending, a non-daemon Timer thread exists, which is waiting
for 5 seconds. As soon as that thread is serviced, it sets up another
5-second Timer, a process which repeats until all
ScheduledMessageNotificationEmail records have been handled. This
likely takes two minutes, but may theoretically take up to a week
until the thread exits, and thus sys.exit can complete.
Supervisor only gives the process 30 seconds to shut down, so
something else must prevent this endless Timer.
When `stop` is called, take the lock so we can mutate the timer.
However, since `stop` may have been called from a signal handler, our
thread may _already_ have the lock. As Python provides no way to know
if our thread is the one which has the lock, make the lock a
re-entrant one, allowing us to always try to take it.
With the lock in hand, cancel any outstanding timers. A race exists
where the timer may not be able to be canceled because it has
finished, maybe_send_batched_emails has been called, and is itself
blocked on the lock. Handle this case by timing out the thread join
in `stop()`, and signal the running thread to exit by unsetting the
timer event, which will be detected once it claims the lock.
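A rough sketch of the locking/shutdown pattern described above (class
and attribute names are placeholders, not the worker's exact code):
```
import threading

class BatchedEmailWorker:
    def __init__(self):
        # Re-entrant, since stop() may run in a signal handler on the very
        # thread that already holds the lock.
        self.lock = threading.RLock()
        self.timer_event = None  # the pending threading.Timer, if any

    def ensure_timer(self):
        with self.lock:
            if self.timer_event is None:
                self.timer_event = threading.Timer(5, self.maybe_send_batched_emails)
                self.timer_event.start()

    def maybe_send_batched_emails(self):
        with self.lock:
            if self.timer_event is None:
                # stop() unset the timer while we were blocked on the lock;
                # exit without doing (or rescheduling) any work.
                return
            # ... send any due ScheduledMessageNotificationEmail rows ...
            self.timer_event = None
            # ... call ensure_timer() again if work remains ...

    def stop(self):
        with self.lock:
            if self.timer_event is not None:
                self.timer_event.cancel()
                # The timer may already have fired and be blocked on the
                # lock inside maybe_send_batched_emails; don't wait forever.
                self.timer_event.join(timeout=5)
                # Unsetting the timer tells any still-running callback to
                # exit once it acquires the lock.
                self.timer_event = None
```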
Special characters, including `\r`, `\n`, and more esoteric codepoints
like non-characters, can negatively affect rendering and UI behaviour.
Check for, and prevent making new messages with, characters in the
Unicode categories of `Cc` (control characters), `Cs`, (surrogates),
and `Cn` (unassigned, non-characters).
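A check along these lines, sketched with `unicodedata` (the real
implementation may instead use a regex over these categories):
```
import unicodedata

# Control characters, surrogates, and unassigned/non-characters.
DISALLOWED_CATEGORIES = frozenset(["Cc", "Cs", "Cn"])

def contains_disallowed_characters(content: str) -> bool:
    return any(unicodedata.category(char) in DISALLOWED_CATEGORIES for char in content)

assert contains_disallowed_characters("evil\rpayload")
assert not contains_disallowed_characters("ordinary text, accents, and emoji 🎉 are fine")
```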
Fixes #20128.
This commit replaces "dark mode" and "light mode" with "dark theme"
and "light theme" in the message returned and shown in a little
popup in the UI, when color scheme settings are changed through
slash commands.
Adds `wildcard_mentions_notify` as a property that can be updated
by the endpoint and removes mention of potential `null` value in
the return object because it is not possible.
Also cleans up the documentation of `in_home_view` legacy property
and updates the return object description to better reflect what
is actually returned.
Since spectators can't access personal profile settings and
can't view profiles of other users, we don't send realm
custom profile field data or users' profile data to spectators.
Fixes #20301.
Enable spectator access for the test `zulip` realm in the development
setup.
Add option in `do_create_realm` to configure
`enable_spectator_access` field of `Realm`.
We restrict access of messages from web public streams if
anonymous login is disabled via `enable_spectator_access`.
Display of `Anonymous login` button is now controlled by
the value of `enable_spectator_access`.
Admins can toggle `enable_spectator_access` via org settings in UI.
If null is a potential value of data type for a return value or
parameter in the API endpoint, then it is rendered as an option.
This currently relies on the 'nullable' setting in the OpenAPI spec
that was removed in the 3.1.0 release. If/when the OpenAPI version
is updated, then how the `data_type` for parameters and return values
is rendered will need to be reworked.
Fixes #20264.
It turns out these were just wrong. We fix a few things:
* Sort the list of settings so that it's possible to compare with reality.
* Delete additional fields that don't actually exist.
* Fix various fields that were missing past feature level updates.
Neither of these fields use the `realm/update_dict` event type; they
use `realm/update`; we've attempted to clarify that in the previous
commit.
That reality means we don't have automated testing for these values,
and that meant that typos like these could slip through.
The user id is a very useful piece of information that the mobile
client should have access to - instead of only getting the email. This
makes it much simpler to implement clients that might be robust to
changes in email address.
RabbitMQ clients have a setting called prefetch[1], which controls how
many un-acknowledged events the server forwards to the local queue in
the client. The default is 0; this means that when clients first
connect, the server must send them every message in the queue.
This itself may cause unbounded memory usage in the client, but also
has other detrimental effects. While the client is attempting to
process the head of the queue, it may be unable to read from the TCP
socket at the rate that the server is sending to it -- filling the TCP
buffers, and causing the server's writes to block. If the server
blocks for more than 30 seconds, it times out the send, and closes the
connection with:
```
closing AMQP connection <0.30902.126> (127.0.0.1:53870 -> 127.0.0.1:5672):
{writer,send_failed,{error,timeout}}
```
This is https://github.com/pika/pika/issues/753#issuecomment-318119222.
Set a prefetch limit of 100 messages, or the batch size, to better
handle queues which start with large numbers of outstanding events.
Setting prefetch=1 causes significant performance degradation in the
no-op queue worker, to 30% of the prefetch=0 performance. Setting
prefetch=100 achieves 90% of the prefetch=0 performance, and higher
values offer only minor gains above that. For batch workers, their
performance is not notably degraded by prefetch equal to their batch
size, and they cannot function on smaller prefetches than their batch
size.
We also set a 100-count prefetch on Tornado workers, as they are
potentially susceptible to the same effect.
[1] https://www.rabbitmq.com/confirms.html#channel-qos-prefetch
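For reference, applying such a limit with pika looks roughly like this
(illustrative; not the workers' actual connection code):
```
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
# The broker now keeps at most 100 unacknowledged messages in flight to
# this consumer, rather than pushing the entire queue at connect time.
channel.basic_qos(prefetch_count=100)
```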
The `current_queue_size` key in the queue monitoring stats file was
the local queue size, not the global queue size -- d5a6b0f99a
renamed the function, but did not adjust the queue monitoring JSON,
despite the last use of it having been removed in cd9b194d88.
The function is still used to mark "we emptied our queue," and it
remains a reasonable metric for that.
TOR users are legitimate users of the system; however, that system can
also be used for abuse -- specifically, by evading IP-based
rate-limiting.
For the purposes of IP-based rate-limiting, add a
RATE_LIMIT_TOR_TOGETHER flag, defaulting to false, which lumps all
requests from TOR exit nodes into the same bucket. This may allow a
TOR user to deny other TOR users access to the find-my-account and
new-realm endpoints, but this is a low cost for cutting off a
significant potential abuse vector.
If enabled, the list of TOR exit nodes is fetched from their public
endpoint once per hour, via a cron job, and cached on disk. Django
processes load this data from disk, and cache it in memcached.
Requests are spared from the burden of checking disk on failure via a
circuitbreaker, which trips if there are two failures in a row, and
only begins trying again after 10 minutes.
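A sketch of that circuit-breaker behavior using the `circuitbreaker`
package (the function body and file path are hypothetical):
```
import json

from circuitbreaker import circuit

# Trips after two consecutive failures and only retries after ten
# minutes, matching the behavior described above.
@circuit(failure_threshold=2, recovery_timeout=60 * 10)
def read_tor_exit_nodes_from_disk() -> set:
    with open("/path/to/cached-tor-exit-nodes.json") as f:  # hypothetical path
        return set(json.load(f))
```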
Unhandled exceptions propagating to process_queue were not caught there,
causing improper logging - errors didn't land in errors.log as expected.
Exceptions should be caught and explicitly logged by the process_queue
logger. Exceptions occurring while consuming events are caught and
handled inside the worker's logic - however, those that happen while
setting up the worker were not addressed at all, and that's the core bug
we mean to address here.
Furthermore, in multi-threaded mode we want the autoreload mechanism to
work - which it doesn't without catching the exceptions. The
correct approach is to - again - catch the exception, log it and then
send SIGUSR1 signal to trigger exit and autoreload.
I believe that this migration with the default of atomic=True will
fail when trying to convert the field to PositiveIntegerField if there
were any 0 values present in the database when the migration began.
The fix is to have each of the steps be their own transaction.
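A sketch of the Django mechanism involved (illustrative; the real
migration's dependencies and operations are elided):
```
from django.db import migrations

class Migration(migrations.Migration):
    # Disable the single wrapping transaction, so each operation (or each
    # RunPython step) commits on its own.
    atomic = False

    dependencies = []  # elided

    operations = [
        # e.g. a data-fixup step followed by the AlterField to
        # PositiveIntegerField, each committing separately.
    ]
```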
A SIGTERM can show up at any point in the ioloop, even in places which
are not prepared to handle it. This results in the process ignoring
the `sys.exit` which the SIGTERM handler calls, with an uncaught
SystemExit exception:
```
2021-11-09 15:37:49.368 ERR [tornado.application:9803] Uncaught exception
Traceback (most recent call last):
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/http1connection.py", line 238, in _read_message
delegate.finish()
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/httpserver.py", line 314, in finish
self.delegate.finish()
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/routing.py", line 251, in finish
self.delegate.finish()
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/web.py", line 2097, in finish
self.execute()
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/web.py", line 2130, in execute
**self.path_kwargs)
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/gen.py", line 307, in wrapper
yielded = next(result)
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/web.py", line 1510, in _execute
result = method(*self.path_args, **self.path_kwargs)
File "/home/zulip/deployments/2021-11-08-05-10-23/zerver/tornado/handlers.py", line 150, in get
request = self.convert_tornado_request_to_django_request()
File "/home/zulip/deployments/2021-11-08-05-10-23/zerver/tornado/handlers.py", line 113, in convert_tornado_request_to_django_request
request = WSGIRequest(environ)
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/django/core/handlers/wsgi.py", line 66, in __init__
script_name = get_script_name(environ)
File "/home/zulip/deployments/2021-11-08-05-10-23/zerver/tornado/event_queue.py", line 611, in <lambda>
signal.signal(signal.SIGTERM, lambda signum, stack: sys.exit(1))
SystemExit: 1
```
Supervisor then terminates the process with a SIGKILL, which results
in dropping data held in the tornado process, as it does not dump its
queue.
The only command which is safe to run in the signal handler is
`ioloop.add_callback_from_signal`, which schedules the callback to run
during the course of the normal ioloop. This callback does an
orderly shutdown of the server and the ioloop before exiting.
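A minimal sketch of that shape (the shutdown callback here is a
placeholder for the real cleanup logic):
```
import signal

from tornado.ioloop import IOLoop

def shutdown_server() -> None:
    # Runs on the ioloop: stop the HTTP server, dump event queues, then
    # stop the loop so the process can exit normally.
    IOLoop.current().stop()

def handle_sigterm(signum: int, frame: object) -> None:
    # The only safe thing to do in the handler is schedule the real work.
    IOLoop.current().add_callback_from_signal(shutdown_server)

signal.signal(signal.SIGTERM, handle_sigterm)
```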
This commit ensures that all markdown fixtures have unique
test names by rewriting the names of some of them and adding
a test in `test_markdown.py`.
Earlier this was over-writing the value for same keys in
`load_markdown_tests` in `test_markdown.py`.
Race conditions in stream unsubscription may lead to multiple
back-to-back SUBSCRIPTION_DEACTIVATED RealmAuditLog entries for the
same stream. The current logic constructs duplicate UserMessage
entries for such, which then later fail to insert.
Keep a set of message-ids that have been prep'd to be inserted, so
that we don't duplicate them if there is a duplicated
SUBSCRIPTION_DEACTIVATED row. This also renames the `message` local
variable, which otherwise overrode the `message` argument of a
different type.
Previously, our codebase contained links to various versions of the
Django docs, e.g.
https://docs.djangoproject.com/en/1.8/ref/request-response/#django.http.HttpRequest
and
https://docs.djangoproject.com/en/2.2/ref/settings/#std:setting-SERVER_EMAIL.
Opening a link to a doc with an outdated Django version would show a
warning: "This document is for an insecure version of Django that is no
longer supported. Please upgrade to a newer release!"
Most of these links are inside comments.
Following the replacement of these links in our docs, this commit uses
a search with the regex "docs.djangoproject.com/en/([0-9].[0-9]*)/"
and replaces all matches with "docs.djangoproject.com/en/3.2/".
All the new links in this commit have been generated by the above
replace and each link has then been manually checked to ensure that
(1) the page still exists and has not been moved to a new location
(and it has been found that no page has been moved like this), (2)
that the anchor that we're linking to has not been changed (and it has
been found that no anchor has been changed like this).
One comment where we mentioned a Django version in text before linking
to a page for that version has also been changed, the comment
mentioned the specific version when a change happened, and the history
is no longer relevant to us.
This resolves the issues reported in #20108, a major chunk of which
was due to the incomplete support for importing the livechat streams/messages
in the tool. So, it's best not to import any livechat streams/messages for
now, until complete support for importing them is developed.
The decorator form is clearer by being more explicit; additionally,
the api_by_user rate-limit is currently only used in one place, and makes
it difficult to test per-user rate-limits that are more specific.