There are tactical reasons to remove this assertion.
Basically, the reason it's safe to remove is that the
assertion has been around a long time, and we would have
seen any failures operationally. Also, the check to make
sure that the S3 object key matches the avatar hash is a
much stronger check.
We will soon restore a stronger version of this check
that applies to all of our asset types (emojis/avatars/etc.).
This makes it easier to read the calling code and see
the big picture of how the four asset types are
organized.
I also handle uploads first, to be similar to the local
code.
This code is well tested--you can modify any of the callers
to pass in a wrong value of `object_key` and get a failing
test.
The comment explains in more detail, but basically we'd skip
exercising a bit of code in the signup code path if there were no
messages in the last week, resulting in the query count not matching.
"help" command occurs in the command list in
initial pms or when bot doesn't understand the message. It doesn't
occur when the bot is respoding to the "help" command itself.
This commit adds code to check, based on the realm settings, whether
a user is allowed to use a wildcard mention in a large stream while
editing a message.
Previously this was only checked while sending a message, so a user
could easily use a wildcard mention by first sending a normal message
and then adding the wildcard mention via an edit.
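A minimal sketch of the idea, with hypothetical names and an
illustrative threshold, running the same permission check in the edit
path as in the send path:

```
# Hypothetical sketch: apply the wildcard-mention permission check on
# edit, not just on send. Names and the threshold are illustrative.
WILDCARD_MENTIONS = ("@**all**", "@**everyone**", "@**stream**")
LARGE_STREAM_THRESHOLD = 15  # the real limit comes from a realm setting


def check_wildcard_mention(
    content: str, subscriber_count: int, sender_has_permission: bool
) -> None:
    if any(mention in content for mention in WILDCARD_MENTIONS):
        if subscriber_count > LARGE_STREAM_THRESHOLD and not sender_has_permission:
            raise ValueError(
                "You do not have permission to use wildcard mentions here."
            )
```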
1. The initial welcome message now contains less detail.
2. The bot now responds to these commands: "apps", "edit profile",
"dark mode", "light mode", "streams", "topics", "message formatting",
"keyboard shortcuts" and "help" - the bot still responds if there are
slight variations in these commands.
3. Tests have been added to check that the bot responds to the advertised
commands (with variations) and gives a negative message if it doesn't
understand the message.
With substantial tweaks by tabbott.
Fixes #19900.
A confirmation link takes a user to the check_prereg_key_and_redirect
endpoint, before getting redirected to POST to /accounts/register/. The
problem was that validation was happening in the check_prereg_key_and_redirect
part and not in /accounts/register/ - meaning that one could submit an
expired confirmation key and be able to register.
We fix this by moving validation into /accounts/register/.
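As a hypothetical sketch of the shape of the fix (the helper names and
expiry window are illustrative, not the real implementation):

```
import time

CONFIRMATION_EXPIRY_SECONDS = 24 * 60 * 60  # illustrative expiry window


def validate_confirmation_key(key_created_at: float) -> None:
    # Raises on an expired key, so submitting one directly to the
    # registration POST can no longer create an account.
    if time.time() - key_created_at > CONFIRMATION_EXPIRY_SECONDS:
        raise ValueError("Confirmation key has expired.")


def accounts_register(post_data: dict, key_created_at: float) -> None:
    validate_confirmation_key(key_created_at)  # now checked here, too
    # ... proceed with account creation ...
```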
Migrates the `/update-subscription-settings` API endpoint to the
`ignored_parameters_unsupported` model, which is also currently used
by `/update-settings` and `/update-realm-user-settings-defaults`.
This change is a step towards preparing for an eventual migration to
have all endpoints return an `ignored_parameters_unsupported` block.
Previously the `/update-subscription-settings` endpoint returned a
copy of the data object sent in the request.
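Under the new model, a success response might instead look like this
(the ignored parameter name is hypothetical):

```
{
    "result": "success",
    "msg": "",
    "ignored_parameters_unsupported": ["invalid_param_name"],
}
```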
Fixes #15307.
django-scim2 doesn't order the rows when fetching them in response to a
query using the filter syntax. We ensure that ORDER BY id is always
appended to the SQL queries.
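A minimal sketch of the idea (the real change hooks into django-scim2's
query construction; this naive version assumes the query has no
existing ORDER BY or LIMIT clause):

```
def with_stable_order(sql: str) -> str:
    # Append ORDER BY id so paginated SCIM filter results are deterministic.
    return sql + " ORDER BY id"
```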
We add the following tables to the user export:
AlertWord
CustomProfileFieldValue
RealmAuditLog
Service
UserActivity
UserActivityInterval
UserCount
UserGroup
UserHotspot
UserPresence
UserTopic
Except for UserCount, we achieve this by sharing
code with the realm export via
add_user_profile_child_configs.
UserCount is handled slightly differently than in realm
exports, due to which key we trigger off of.
It's possible that RealmAuditLog is incomplete for
single users, since we may also want rows where they
are the acting_user. This commit finds rows where
they are the modified_user. For non-admins I believe
it's rarely the case that they are the actor, and
they will tend to be the modified user if the two
fields are different at all. For admins it's
arguable we want to see both changes they enacted
as well as changes that affected them.
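In Django ORM terms, the current selection is essentially the
following (assuming the RealmAuditLog model and a user_profile object
are in scope):

```
# Select rows where the user is the modified_user; rows where they are
# only the acting_user are not yet included.
rows = RealmAuditLog.objects.filter(modified_user=user_profile)
```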
This commit switches the BigBlueButton integration
to use SHA256 instead of SHA1, as BigBlueButton supports
it and Scalelite now does, too.
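A sketch of the checksum computation under this change (BigBlueButton's
API checksum covers the call name, query string, and shared secret):

```
import hashlib


def bbb_checksum(call_name: str, query_string: str, shared_secret: str) -> str:
    # Same scheme as before, but hashed with SHA256 instead of SHA1.
    return hashlib.sha256(
        (call_name + query_string + shared_secret).encode()
    ).hexdigest()
```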
Fixes #19966.
This commit changes web_public_streams_enabled to return False if
realm.enable_spectator_access is False. This is added so that
creating web-public streams is not allowed if enable_spectator_access
is False.
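A minimal sketch of the new early return (other existing checks
elided):

```
def web_public_streams_enabled(realm) -> bool:
    if not realm.enable_spectator_access:
        return False
    # ... remaining server-level and realm-level checks ...
    return True
```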
Python's behaviour on `sys.exit` is to wait for all non-daemon threads
to exit. In the context of the missedmessage_emails worker, if any
work is pending, a non-daemon Timer thread exists, which is waiting
for 5 seconds. As soon as that thread is serviced, it sets up another
5-second Timer, a process which repeats until all
ScheduledMessageNotificationEmail records have been handled. This
likely takes two minutes, but may theoretically take up to a week
until the thread exits, and thus sys.exit can complete.
Supervisor only gives the process 30 seconds to shut down, so
something else must prevent this endless Timer.
When `stop` is called, take the lock so we can mutate the timer.
However, since `stop` may have been called from a signal handler, our
thread may _already_ have the lock. As Python provides no way to know
if our thread is the one which has the lock, make the lock a
re-entrant one, allowing us to always try to take it.
With the lock in hand, cancel any outstanding timers. A race exists
where the timer may not be able to be canceled because it has
finished, maybe_send_batched_emails has been called, and is itself
blocked on the lock. Handle this case by timing out the thread join
in `stop()`, and signal the running thread to exit by unsetting the
timer event, which will be detected once it claims the lock.
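A condensed sketch of this shutdown logic, with hypothetical names (the
real worker is more involved):

```
import threading
from typing import Optional


class BatchedEmailWorker:
    def __init__(self) -> None:
        # Re-entrant: stop() may run in a signal handler on a thread
        # that already holds the lock.
        self.lock = threading.RLock()
        self.timer: Optional[threading.Timer] = None
        self.timer_event_active = True

    def ensure_timer(self) -> None:
        with self.lock:
            if self.timer is None and self.timer_event_active:
                self.timer = threading.Timer(5, self.maybe_send_batched_emails)
                self.timer.start()

    def maybe_send_batched_emails(self) -> None:
        with self.lock:
            if not self.timer_event_active:
                return  # stop() asked us to exit
            # ... send any due ScheduledMessageNotificationEmail rows ...
            self.timer = None
            self.ensure_timer()  # reschedule if work remains

    def stop(self) -> None:
        with self.lock:
            self.timer_event_active = False  # running timers exit when seen
            if self.timer is not None:
                self.timer.cancel()
                # The timer may already be running and blocked on our
                # lock; don't wait forever for it.
                self.timer.join(timeout=1)
                self.timer = None
```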
Special characters, including `\r`, `\n`, and more esoteric codepoints
like non-characters, can negatively affect rendering and UI behaviour.
Check for, and prevent making new messages with, characters in the
Unicode categories of `Cc` (control characters), `Cs` (surrogates),
and `Cn` (unassigned, non-characters).
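A minimal sketch of the category check; a real implementation would
presumably still allow ordinary whitespace such as `\n` and `\t`:

```
import unicodedata

DISALLOWED_CATEGORIES = {"Cc", "Cs", "Cn"}  # control, surrogate, unassigned


def contains_disallowed_characters(content: str) -> bool:
    return any(
        unicodedata.category(ch) in DISALLOWED_CATEGORIES and ch not in "\n\t"
        for ch in content
    )
```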
Fixes #20128.
This commit replaces "dark mode" and "light mode" with "dark theme"
and "light theme" in the message returned and shown in a little
popup in the UI, when color scheme settings are changed through
slash commands.
Adds `wildcard_mentions_notify` as a property that can be updated
by the endpoint and removes mention of potential `null` value in
the return object because it is not possible.
Also cleans up the documentation of `in_home_view` legacy property
and updates the return object description to better reflect what
is actually returned.
Spectators can't access personal profile settings and
can't view the profiles of other users. Hence, we don't send realm
custom profile field data or users' profile data to spectators.
Fixes #20301.
Enable spectator access for the test `zulip` realm in the development
setup.
Add an option in `do_create_realm` to configure the
`enable_spectator_access` field of `Realm`.
We restrict access to messages from web-public streams if
anonymous login is disabled via `enable_spectator_access`.
Display of the `Anonymous login` button is now controlled by
the value of `enable_spectator_access`.
Admins can toggle `enable_spectator_access` via org settings in the UI.
If null is a potential value of the data type for a return value or
parameter in the API endpoint, then it is rendered as an option.
This currently relies on the 'nullable' setting in the OpenAPI spec
that was removed in the 3.1.0 release. If/when the OpenAPI version
is updated, then how the `data_type` for parameters and return values
is rendered will need to be reworked.
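A hypothetical sketch of how the flag drives the rendering (names are
illustrative):

```
def render_data_type(schema: dict) -> str:
    data_type = schema.get("type", "object")
    if schema.get("nullable", False):  # OpenAPI 3.0.x style; removed in 3.1.0
        data_type += " | null"
    return data_type
```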
Fixes #20264.
It turns out these were just wrong. We fix a few things:
* Sort the list of settings so that it's possible to compare with reality.
* Delete additional fields that don't actually exist.
* Fix various fields missing past feature level updates.
Neither of these fields uses the `realm/update_dict` event type; they
use `realm/update`; we've attempted to clarify that in the previous
commit.
That reality means we don't have automated testing for these values,
and that meant that typos like these could slip through.
The user id is a very useful piece of information that the mobile
client should have access to, instead of only getting the email. This
makes it much simpler to implement clients that are robust to
changes in email address.
RabbitMQ clients have a setting called prefetch[1], which controls how
many un-acknowledged events the server forwards to the local queue in
the client. The default is 0; this means that when clients first
connect, the server must send them every message in the queue.
This itself may cause unbounded memory usage in the client, but also
has other detrimental effects. While the client is attempting to
process the head of the queue, it may be unable to read from the TCP
socket at the rate that the server is sending to it -- filling the TCP
buffers, and causing the server's writes to block. If the server
blocks for more than 30 seconds, it times out the send, and closes the
connection with:
```
closing AMQP connection <0.30902.126> (127.0.0.1:53870 -> 127.0.0.1:5672):
{writer,send_failed,{error,timeout}}
```
This is https://github.com/pika/pika/issues/753#issuecomment-318119222.
Set a prefetch limit of 100 messages, or the batch size, to better
handle queues which start with large numbers of outstanding events.
Setting prefetch=1 causes significant performance degradation in the
no-op queue worker, to 30% of the prefetch=0 performance. Setting
prefetch=100 achieves 90% of the prefetch=0 performance, and higher
values offer only minor gains above that. For batch workers, their
performance is not notably degraded by prefetch equal to their batch
size, and they cannot function on smaller prefetches than their batch
size.
We also set a 100-count prefetch on Tornado workers, as they are
potentially susceptible to the same effect.
[1] https://www.rabbitmq.com/confirms.html#channel-qos-prefetch
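For reference, setting the per-channel prefetch with pika looks roughly
like this (connection parameters are illustrative):

```
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
# Limit the server to 100 unacknowledged deliveries on this channel.
channel.basic_qos(prefetch_count=100)
```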
The `current_queue_size` key in the queue monitoring stats file was
the local queue size, not the global queue size -- d5a6b0f99a
renamed the function, but did not adjust the queue monitoring JSON,
despite the last use of it having been removed in cd9b194d88.
The function is still used to mark "we emptied our queue," and it
remains a reasonable metric for that.
TOR users are legitimate users of the system; however, that system can
also be used for abuse -- specifically, by evading IP-based
rate-limiting.
For the purposes of IP-based rate-limiting, add a
RATE_LIMIT_TOR_TOGETHER flag, defaulting to false, which lumps all
requests from TOR exit nodes into the same bucket. This may allow a
TOR user to deny other TOR users access to the find-my-account and
new-realm endpoints, but this is a low cost for cutting off a
significant potential abuse vector.
If enabled, the list of TOR exit nodes is fetched from their public
endpoint once per hour, via a cron job, and cached on disk. Django
processes load this data from disk, and cache it in memcached.
Requests are spared from the burden of checking the disk on failure via
a circuit breaker, which trips if there are two failures in a row, and
only begins trying again after 10 minutes.
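The bucketing idea, as a simplified sketch with hypothetical names:

```
from typing import Set


def rate_limit_key_for_ip(
    ip: str, tor_exit_nodes: Set[str], tor_together: bool
) -> str:
    # With RATE_LIMIT_TOR_TOGETHER enabled, every TOR exit node shares a
    # single rate-limit bucket instead of getting a per-IP one.
    if tor_together and ip in tor_exit_nodes:
        return "tor-exit-nodes"
    return ip
```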
Unhandled exceptions propagating to process_queue were not caught there,
causing improper logging - errors didn't land in errors.log as expected.
Exceptions should be caught and explicitly logged by the process_queue
logger. Exceptions occurring during consuming events are caught and
handled inside the worker's logic - however those that happen while
setting up the worker were not addressed at all, and that's the core bug
we mean to address here.
Furthermore, in multi-threaded mode we want the autoreload mechanism to
work - which it doesn't without catching the exceptions. The
correct approach is to - again - catch the exception, log it, and then
send a SIGUSR1 signal to trigger exit and autoreload.
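A sketch of the approach (the logger name and helper callables are
illustrative):

```
import logging
import os
import signal

logger = logging.getLogger("process_queue")


def run_worker(setup, consume_loop, threaded: bool) -> None:
    try:
        setup()
    except Exception:
        # Log explicitly so the failure lands in errors.log.
        logger.exception("Worker failed to set up; exiting")
        if threaded:
            # Trigger exit and autoreload in multi-threaded mode.
            os.kill(os.getpid(), signal.SIGUSR1)
        raise
    consume_loop()
```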
I believe that this migration with the default of atomic=True will
fail when trying to convert the field to PositiveIntegerField if there
were any 0 values present in the database when the migration began.
The fix is to have each of the steps run in its own transaction.
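In Django migration terms, the fix looks roughly like this (the model,
field, and dependency names are stand-ins):

```
from django.db import migrations, models


class Migration(migrations.Migration):
    # With atomic = False, each operation commits on its own instead of
    # the whole migration running inside one wrapping transaction.
    atomic = False

    dependencies = [("zerver", "0001_initial")]

    operations = [
        migrations.AlterField(
            model_name="count",
            name="value",
            field=models.PositiveIntegerField(),
        ),
    ]
```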
A SIGTERM can show up at any point in the ioloop, even in places which
are not prepared to handle it. This results in the process ignoring
the `sys.exit` which the SIGTERM handler calls, with an uncaught
SystemExit exception:
```
2021-11-09 15:37:49.368 ERR [tornado.application:9803] Uncaught exception
Traceback (most recent call last):
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/http1connection.py", line 238, in _read_message
delegate.finish()
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/httpserver.py", line 314, in finish
self.delegate.finish()
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/routing.py", line 251, in finish
self.delegate.finish()
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/web.py", line 2097, in finish
self.execute()
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/web.py", line 2130, in execute
**self.path_kwargs)
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/gen.py", line 307, in wrapper
yielded = next(result)
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/web.py", line 1510, in _execute
result = method(*self.path_args, **self.path_kwargs)
File "/home/zulip/deployments/2021-11-08-05-10-23/zerver/tornado/handlers.py", line 150, in get
request = self.convert_tornado_request_to_django_request()
File "/home/zulip/deployments/2021-11-08-05-10-23/zerver/tornado/handlers.py", line 113, in convert_tornado_request_to_django_request
request = WSGIRequest(environ)
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/django/core/handlers/wsgi.py", line 66, in __init__
script_name = get_script_name(environ)
File "/home/zulip/deployments/2021-11-08-05-10-23/zerver/tornado/event_queue.py", line 611, in <lambda>
signal.signal(signal.SIGTERM, lambda signum, stack: sys.exit(1))
SystemExit: 1
```
Supervisor then terminates the process with a SIGKILL, which results
in dropping data held in the tornado process, as it does not dump its
queue.
The only call which is safe to make from the signal handler is
`ioloop.add_callback_from_signal`, which schedules the callback to run
during the course of the normal ioloop. This callback does an
orderly shutdown of the server and the ioloop before exiting.
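A sketch of the safe pattern (the cleanup body is elided):

```
import signal

from tornado.ioloop import IOLoop


def install_sigterm_handler(ioloop: IOLoop) -> None:
    def shut_down() -> None:
        # ... stop the HTTP server and dump the event queues ...
        ioloop.stop()  # ioloop.start() then returns in main, which exits

    # The handler itself only schedules a callback; the shutdown work
    # happens inside the normal ioloop.
    signal.signal(
        signal.SIGTERM,
        lambda signum, frame: ioloop.add_callback_from_signal(shut_down),
    )
```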
This commit ensures that all markdown fixtures have unique
test names by rewriting the names of some of them and adding
a test in `test_markdown.py`.
Earlier, duplicate names were overwriting the values for the same keys
in `load_markdown_tests` in `test_markdown.py`.
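The uniqueness check itself can be as simple as the following sketch
(fixture loading elided):

```
def check_unique_names(tests: list) -> None:
    names = [test["name"] for test in tests]
    duplicates = {name for name in names if names.count(name) > 1}
    assert not duplicates, f"duplicate markdown test names: {duplicates}"
```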
Race conditions in stream unsubscription may lead to multiple
back-to-back SUBSCRIPTION_DEACTIVATED RealmAuditLog entries for the
same stream. The current logic constructs duplicate UserMessage
entries for these, which then fail to insert.
Keep a set of message-ids that have been prepped to be inserted, so
that we don't duplicate them if there is a duplicated
SUBSCRIPTION_DEACTIVATED row. This also renames the `message` local
variable, which otherwise shadowed the `message` argument of a
different type.
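A condensed sketch of the dedup logic, with hypothetical names:

```
def collect_user_message_ids(audit_log_rows) -> list:
    prepped_ids = set()
    user_message_ids = []
    for row in audit_log_rows:
        for message_id in row["message_ids"]:
            if message_id in prepped_ids:
                # Already prepped via an earlier duplicated
                # SUBSCRIPTION_DEACTIVATED row; skip it.
                continue
            prepped_ids.add(message_id)
            user_message_ids.append(message_id)
    return user_message_ids
```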
Previously, our codebase contained links to various versions of the
Django docs, e.g.
https://docs.djangoproject.com/en/1.8/ref/request-response/#django.http.HttpRequest
and
https://docs.djangoproject.com/en/2.2/ref/settings/#std:setting-SERVER_EMAIL.
Opening a link to a doc with an outdated Django version would show the
warning "This document is for an insecure version of Django that is no
longer supported. Please upgrade to a newer release!".
Most of these links are inside comments.
Following the replacement of these links in our docs, this commit uses
a search with the regex "docs.djangoproject.com/en/([0-9].[0-9]*)/"
and replaces all matches with "docs.djangoproject.com/en/3.2/".
All the new links in this commit have been generated by the above
replacement, and each link has then been manually checked to ensure
that (1) the page still exists and has not been moved to a new
location (it was found that no page had been moved like this), and (2)
the anchor that we're linking to has not been changed (it was found
that no anchor had been changed like this).
One comment where we mentioned a Django version in text before linking
to a page for that version has also been changed; the comment
mentioned the specific version when a change happened, and that
history is no longer relevant to us.