GIF files can have a `.GIF` extension, and we determine the file
format by inspecting the image data anyway, so there's no reason to
have this assertion.
(The code for serving still images does not rely on the file being a
GIF.)
We keep process_new_human_user out of
the atomic block because it involves many
different operations and also sends events.
We tried enclosing the event in on_commit, but that
would need many changes in the tests, so we have
skipped it for now.
Updates testing helpers in `event_schema.py` for `do_update_message` so
that all stream message fields are present in any edits / updates to
stream messages. Adds verification tests of events returned from private
message edits and from stream message content-only and topic-only edits.
Updates the `update_message` event type to always include a `stream_id`
field when the message being edited is a stream message. This change
aligns with the current definition of the `/get-events` endpoint
in the OpenAPI documentation.
It is better to press on than to stop halfway through due to a user
whose email no longer works. The exception is already logged, which
is sufficient here, as this is generally run interactively.
These fundamentally tested send_email, not build_email, and thus
belong in TestSendEmail, not TestBuildEmail. They also duplicated the
code in test_send_email_exceptions; reuse it.
This allows verify_uploads to use the database
as the authoritative source for what attachments
we need to look for when we're verifying the
images got exported properly, while still
also verifying attachment.json is correct.
It is better for the verifying code to just explicitly
ensure that the exported file bytes match the bytes
in the test image. This introduces a tiny bit more
I/O.
It's easier to read the code without the intermediate
full_data dictionary that obscures where the files live.
We also avoid some unnecessary file I/O in the tests.
We do a sanity check for every table
that gets written to user.json as part of
the single-user export.
If we add more tables to the single-user export,
the test that I modified here will now ask
the author to add a new checker function, which
means we should always have at least a basic
sanity check for every exported table as long
as we stay in this new paradigm.
We also remove a little bit of old code that
became redundant.
This replaces the TERMS_OF_SERVICE and PRIVACY_POLICY settings with
just a POLICIES_DIRECTORY setting, in order to support deployments (like
Zulip Cloud) where there are more policies than just those two.
With minor changes by Eeshan Garg.
We do s/TOS/TERMS_OF_SERVICE/ on the name, and while we're at it,
remove the assumed zerver/ namespace for the template, which isn't
correct -- Zulip Cloud related content should be in the corporate/
directory.
We now complain if a test author sends a stream message
that does not result in the sender getting a
UserMessage row for the message.
This is basically 100% equivalent to complaining that
the author failed to subscribe the sender to the stream
as part of the test setup, as far as I can tell, so the
AssertionError instructs the author to subscribe the
sender to the stream.
We exempt bots from this check, although it is
plausible we should only exempt the system bots like
the notification bot.
I considered auto-subscribing the sender to the stream,
but that can be a little more expensive than the
current check, and we generally want test setup to be
explicit.
If there is some legitimate way that a subscribed human
sender can't get a UserMessage, then we probably want
an explicit test for that, or we may want to change the
backend to just write a UserMessage row in that
hypothetical situation.
For most tests, including almost all the ones fixed
here, the author just wants their test setup to
realistically reflect normal operation, and often devs
may not realize that Cordelia is not subscribed to
Denmark or not realize that Hamlet is not subscribed to
Scotland.
Some of us don't remember our Shakespeare from high
school, and our stream subscriptions don't even
necessarily reflect which countries the Bard placed his
characters in.
There may also be some legitimate use case where an
author wants to simulate sending a message to an
unsubscribed stream, but for those edge cases, they can
always set allow_unsubscribed_sender to True.
These variables can be unset if the `os.path.exists` check fails.
That should be rare, since we've previously checked the files do
exist before getting here.
While races here are unlikely, it is most correct to enforce this
invariant at the database layer, and having a database-level
constraint makes the models file a bit more readable.
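As a rough sketch (the model and fields here are illustrative, not
necessarily the ones this commit touches), a database-level constraint
in Django looks like:
```python
from django.db import models

class RealmEmoji(models.Model):  # illustrative model
    realm = models.ForeignKey("Realm", on_delete=models.CASCADE)
    name = models.TextField()
    deactivated = models.BooleanField(default=False)

    class Meta:
        constraints = [
            # Enforced by the database itself, so even racing requests
            # cannot violate the invariant.
            models.UniqueConstraint(
                fields=["realm", "name"],
                condition=models.Q(deactivated=False),
                name="unique_realm_emoji_when_not_deactivated",
            )
        ]
```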
These are not considered to be "personal"
info, even if you upload them, so we
don't export them.
Generally the only folks who upload
these are admins, who can easily get
them in other ways. In fact, anybody
can get these via the app.
We now ensure that all message ids are sorted BEFORE
we split them into batches.
We now do a few extra "slim" queries to get message
ids up front.
But, now, when we divide them into batches, we no
longer run 2 or 3 different complicated queries in
a loop. We just basically hydrate our message ids,
so `write_message_partials` should be easy to reason
about.
This change also means that for tiny realms with
< 1000 messages you will always have just one
json file, since we aggregate the ids from the
queries before batching.
This accomplishes a few things:
* It extracts `chunkify` rather than having us
clumsily track chunking-related stuff in a
big loop that is doing other stuff.
* It makes it so that all message ids
in message-000001.json < message-000002.json.
* It makes it easier for us to customize
the messages we send to a single user
(coming soon).
BTW we probably have a slicker version of chunkify
somewhere in our codebase, but I couldn't remember
where.
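For reference, a minimal sketch of such a helper (the real signature
may differ):
```python
from typing import Iterator, List

def chunkify(message_ids: List[int], chunk_size: int) -> Iterator[List[int]]:
    # Yield successive chunk_size-length slices; if the input is
    # sorted, every id in one chunk is less than every id in the next.
    for i in range(0, len(message_ids), chunk_size):
        yield message_ids[i : i + chunk_size]
```
With sorted input, this is what guarantees the per-file ordering
described above.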
Following b3c58f454f, we want to clean up
old topics that may contain the disallowed characters. The Message table
is large, so we go in batches, making sure we limit topic fetches and
the UPDATE query to no more than BATCH_SIZE Message rows per query.
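A sketch of how such batching might look (BATCH_SIZE and the
topic-fixing helper are illustrative):
```python
from django.db.models import Max

BATCH_SIZE = 10000  # illustrative

def clean_topics_in_batches(Message, fix_topic) -> None:
    # Walk the table in fixed-size id windows, so no single query
    # touches more than BATCH_SIZE rows.
    max_id = Message.objects.aggregate(Max("id"))["id__max"] or 0
    for lower in range(0, max_id + 1, BATCH_SIZE):
        batch = Message.objects.filter(
            id__gt=lower, id__lte=lower + BATCH_SIZE
        ).only("id", "subject")
        for message in batch:
            cleaned = fix_topic(message.subject)
            if cleaned != message.subject:
                Message.objects.filter(id=message.id).update(subject=cleaned)
```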
Now all file writes go through our three
helper functions, and we consistently
write a single log message after the file
gets written.
I killed off write_message_exports, since
all but one of its callers can call
write_table_data, which automatically
sorts data. In particular, our Message
and UserMessage data will now be sorted
by ids.
This probably just postpones the list creation until
Django builds the "IN" query, but semantically it's
good to work in sets where we don't have any
meaningful ordering of the list that gets used.
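In sketch form (names illustrative):
```python
def fetch_profiles(UserProfile, subscription_rows):
    # A set, not a list: these ids have no meaningful order, and
    # Django materializes them into the IN clause either way.
    user_ids = {row.user_profile_id for row in subscription_rows}
    return UserProfile.objects.filter(id__in=user_ids)
```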
The immediate benefit of this is stronger mypy
checks (avoiding the ugly union caused by message
files).
The subsequent commit will add sorting.
We have test coverage on all these lines insofar
as if you comment out the lines, tests will
explode (i.e. more than superficial line
coverage).
The distinction here wasn't super meaningful
due to the way we order our "elif" statements,
but we want to reserve "normal_parent" for the
majority of use cases, where you simply tell
the Config what the "foreign_key" is.
For realm-wide exports, there is no reason to query
inefficiently against a list of modified users.
We move the Config out of the common child configs.
Even though Django usually treats foo__in
and foo_id__in identically for filters where
foo is a ForeignKey type, we want to insist
on somewhat more consistent syntax, because
we have the odd combo of type and type_id
in Recipient, where type_id is kinda like a
foreign key, but not a ForeignKey.
So we assert for now that all our include_rows
values end in "_id__in".
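A minimal sketch of that assertion (the real Config takes many more
arguments):
```python
from typing import Optional

class Config:
    def __init__(self, include_rows: Optional[str] = None) -> None:
        if include_rows is not None:
            # Insist on the foo_id__in spelling, even though Django
            # usually accepts foo__in, because fields like
            # Recipient.type_id are not real ForeignKeys.
            assert include_rows.endswith("_id__in")
        self.include_rows = include_rows
```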
Zulip shows two guides on how to reply: the first by
the welcome bot, and the second via the intro_reply hotspot.
To simplify and avoid redundancy, the intro_reply hotspot is
removed.
Fixes #20482.
Force Postgres to return reactions in ID order, which
is generally chronological order. This results in the frontend
displaying reactions in that order.
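In sketch form, assuming a Django-style query (names illustrative):
```python
from typing import List

def fetch_reactions(Reaction, message_ids: List[int]):
    # Without an explicit ORDER BY, Postgres returns rows in whatever
    # order is convenient; ordering by id pins down a stable, roughly
    # chronological order for the frontend.
    return Reaction.objects.filter(message_id__in=message_ids).order_by("id")
```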
Fixes #20060.
In many of our stream notification messages, we make use of the
same silent user mention syntax, the template for which was always
hardcoded. This commit adds a helper function that all relevant
callers can call to get the right syntax when mentioning users.
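A sketch of such a helper, assuming Zulip's silent-mention syntax of
`@_**name|id**` (the actual function name may differ):
```python
def silent_mention_syntax_for_user(user_profile) -> str:
    # Silent mentions render like a mention without notifying the
    # user; the |id suffix disambiguates users with identical names.
    return f"@_**{user_profile.full_name}|{user_profile.id}**"
```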
Thanks to Tim Abbott for this suggestion!
Removed existing empty narrow divs from app/home.html and created
a new JavaScript module to dynamically load empty narrow messages
using a Handlebars template.
Fixes #18797
The original intention of this was to prevent coding
errors with realm getters that don't, um, filter
on realm.
Unfortunately, you can still write a broken realm getter
that forgets to filter on realm, but which returns a
Set, and the new safeguards won't see any difference.
We could make all the getters return sorted lists
instead, but that's for another day.
This code does serve another purpose, which is to
prevent egregious bugs in the import itself.
The diff here is ugly, but to summarize:
BEFORE IMPORT:
define get_user_id
define get_huddle_hashes
AFTER IMPORT AND MAKING GETTERS:
check realm id
define assert_realm_values
verify emoji codes
check huddle hashes
We don't have automated test coverage on this yet,
but below are the results from manual testing.
Note that we include the realm icon and logo even
though they were not created by Cordelia.
$ ./manage.py export_single_user cordelia@zulip.com
$ (cd /tmp/zulip-export-4v3mo802/ && find .)
.
./emoji
./emoji/2
./emoji/2/emoji
./emoji/2/emoji/images
./emoji/2/emoji/images/3.jpg
./emoji/records.json
./messages-000001.json
./realm_icons
./realm_icons/2
./realm_icons/2/night_logo.original
./realm_icons/2/night_logo.png
./realm_icons/2/icon.png
./realm_icons/2/icon.original
./realm_icons/records.json
./avatars
./avatars/2
./avatars/2/c5125af0447f4d66ce34c1b32eac75ac27ebe0e7.original
./avatars/2/c5125af0447f4d66ce34c1b32eac75ac27ebe0e7.png
./avatars/records.json
./uploads
./uploads/2
./uploads/2/68
./uploads/2/68/xyEkC5dTIp8m42_6HJ3kBfdt
./uploads/2/68/xyEkC5dTIp8m42_6HJ3kBfdt/denver.jpg
./uploads/2/96
./uploads/2/96/ol5WE6RTUntvuPDSpJUrYTim
./uploads/2/96/ol5WE6RTUntvuPDSpJUrYTim/denver.jpg
./uploads/records.json
./user.json
There are tactical reasons to remove this assertion.
Basically, the reason it's safe to remove is that it's
been around a long time and we would have seen this
operationally. Also, the check to make sure that the
S3 filename thingy matches the avatar hash is a much
stronger check.
We will soon restore a stronger version of this check
that applies to all of our asset types (emojis/avatars/etc.).
This makes it easier to read the calling code and see
the big picture of how the four asset types are
organized.
I also handle uploads first, to be similar to the local
code.
This code is well tested--you can modify any of the callers
to pass in a wrong value of `object_key` and get a failing
test.
The comment explains in more detail, but basically we'd skip
exercising a bit of code in the signup code path if there were no
messages in the last week, resulting in the query count not matching.
"help" command occurs in the command list in
initial pms or when bot doesn't understand the message. It doesn't
occur when the bot is respoding to the "help" command itself.
This commit adds code to check whether a user is allowed to use
wildcard mentions in a large stream while editing a message,
based on the realm settings.
Previously this was only checked while sending a message, so a user
could easily use a wildcard mention by first sending a normal
message and then adding the wildcard mention via an edit.
1. The initial welcome message now contains less detail.
2. The bot now responds to these commands: "apps", "edit profile",
"dark mode", "light mode", "streams", "topics", "message formatting",
"keyboard shortcuts" and "help" - the bot still responds if there are
slight variations in these commands.
3. Tests have been added to check that the bot responds to the advertised
commands (with variations) and gives a negative message if it doesn't
understand the message.
With substantial tweaks by tabbott.
Fixes #19900.
A confirmation link takes a user to the check_prereg_key_and_redirect
endpoint, before getting redirected to POST to /accounts/register/. The
problem was that validation was happening in the check_prereg_key_and_redirect
part and not in /accounts/register/ - meaning that one could submit an
expired confirmation key and be able to register.
We fix this by moving validation into /accounts/register/.
Migrates the `/update-subscription-settings` API endpoint to the
`ignored_parameters_unsupported` model, which is also currently used
by `/update-settings` and `/update-realm-user-settings-defaults`.
This change is a step towards preparing for an eventual migration to
have all endpoints return an `ignored_parameters_unsupported` block.
Previously the `/update-subscription-settings` endpoint returned a
copy of the data object sent in the request.
Fixes #15307.
django-scim2 doesn't order the rows when fetching them in response to a
query using the filter syntax. We ensure that ORDER BY id is always
appended to the SQL queries.
We add the following tables to the user export:
AlertWord
CustomProfileFieldValue
RealmAuditLog
Service
UserActivity
UserActivityInterval
UserCount
UserGroup
UserHotspot
UserPresence
UserTopic
Except for UserCount, we achieve this by sharing
code with the realm export via
add_user_profile_child_configs.
UserCount is handled slightly differently than realm
exports due to which key we trigger off.
It's possible that RealmAuditLog is incomplete for
single users, since we may also want rows where they
are the acting_user. This commit finds rows where
they are the modified_user. For non-admins I believe
it's rarely the case that they are the actor, and
they will tend to be the modified user if the two
fields are different at all. For admins it's
arguable we want to see both changes they enacted
as well as changes that affected them.
This commit switches the BigBlueButton integration
to use SHA256 instead of SHA1, as BigBlueButton supports
it and Scalelite now does, too.
Fixes #19966.
This commit changes web_public_streams_enabled to return False if
realm.enable_spectator_access is False. This is added so that
creating web-public streams is not allowed if enable_spectator_access
is False.
Python's behaviour on `sys.exit` is to wait for all non-daemon threads
to exit. In the context of the missedmessage_emails worker, if any
work is pending, a non-daemon Timer thread exists, which is waiting
for 5 seconds. As soon as that thread is serviced, it sets up another
5-second Timer, a process which repeats until all
ScheduledMessageNotificationEmail records have been handled. This
likely takes two minutes, but may theoretically take up to a week
until the thread exits, and thus sys.exit can complete.
Supervisor only gives the process 30 seconds to shut down, so
something else must prevent this endless Timer.
When `stop` is called, take the lock so we can mutate the timer.
However, since `stop` may have been called from a signal handler, our
thread may _already_ have the lock. As Python provides no way to know
if our thread is the one which has the lock, make the lock a
re-entrant one, allowing us to always try to take it.
With the lock in hand, cancel any outstanding timers. A race exists
where the timer may not be able to be canceled because it has
finished, maybe_send_batched_emails has been called, and is itself
blocked on the lock. Handle this case by timing out the thread join
in `stop()`, and signal the running thread to exit by unsetting the
timer event, which will be detected once it claims the lock.
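A sketch of the locking scheme described (attribute and method names
are illustrative):
```python
import threading
from typing import Optional

class MissedMessageWorker:
    def __init__(self) -> None:
        # Re-entrant, because stop() may run from a signal handler on
        # the same thread that already holds the lock.
        self.lock = threading.RLock()
        self.timer_event: Optional[threading.Timer] = None

    def stop(self) -> None:
        with self.lock:
            timer = self.timer_event
            if timer is not None:
                # A no-op if the timer already fired and
                # maybe_send_batched_emails is blocked on our lock.
                timer.cancel()
                # Unsetting the timer signals a running callback to
                # exit once it claims the lock.
                self.timer_event = None
        if timer is not None:
            # The callback thread may be mid-run; don't block forever.
            timer.join(timeout=5)
```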
Special characters, including `\r`, `\n`, and more esoteric codepoints
like non-characters, can negatively affect rendering and UI behaviour.
Check for, and prevent making new messages with, characters in the
Unicode categories of `Cc` (control characters), `Cs` (surrogates),
and `Cn` (unassigned, non-characters).
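A sketch of the category check (a real validator would still need to
whitelist legitimate whitespace such as newlines in message bodies):
```python
import unicodedata

def has_disallowed_characters(text: str) -> bool:
    # Cc: control characters; Cs: surrogates; Cn: unassigned
    # codepoints, which include the Unicode non-characters.
    return any(unicodedata.category(char) in ("Cc", "Cs", "Cn") for char in text)
```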
Fixes #20128.
This commit replaces "dark mode" and "light mode" with "dark theme"
and "light theme" in the message returned and shown in a little
popup in the UI, when color scheme settings are changed through
slash commands.
Adds `wildcard_mentions_notify` as a property that can be updated
by the endpoint and removes mention of potential `null` value in
the return object because it is not possible.
Also cleans up the documentation of `in_home_view` legacy property
and updates the return object description to better reflect what
is actually returned.
Since spectators can't access personal profile settings and
can't view profiles for other users, we don't send realm
custom profile field data or users' profile data to spectators.
Fixes #20301.
Enable spectator access for the test `zulip` realm in the development
setup.
Add option in `do_create_realm` to configure
`enable_spectator_access` field of `Realm`.
We restrict access of messages from web public streams if
anonymous login is disabled via `enable_spectator_access`.
Display of `Anonymous login` button is now controlled by
the value of `enable_spectator_access`.
Admins can toggle `enable_spectator_access` via org settings in UI.
If null is a potential value of data type for a return value or
parameter in the API endpoint, then it is rendered as an option.
This currently relies on the 'nullable' setting in the OpenAPI spec
that was removed in the 3.1.0 release. If/when the OpenAPI version
is updated, then how the `data_type` for parameters and return values
is rendered will need to be reworked.
Fixes #20264.
It turns out these were just wrong. We fix a few things:
* Sort the list of settings so that it's possible to compare with reality.
* Delete additional fields that don't actually exist.
* Fix various fields missing past feature level updates.
Neither of these fields use the `realm/update_dict` event type; they
use `realm/update`; we've attempted to clarify that in the previous
commit.
That reality means we don't have automated testing for these values,
and that meant that typos like these could slip through.
The user id is a very useful piece of information that the mobile
client should have access to - instead of only getting the email. This
makes it much simpler to implement clients that might be robust to
changes in email address.
RabbitMQ clients have a setting called prefetch[1], which controls how
many un-acknowledged events the server forwards to the local queue in
the client. The default is 0; this means that when clients first
connect, the server must send them every message in the queue.
This itself may cause unbounded memory usage in the client, but also
has other detrimental effects. While the client is attempting to
process the head of the queue, it may be unable to read from the TCP
socket at the rate that the server is sending to it -- filling the TCP
buffers, and causing the server's writes to block. If the server
blocks for more than 30 seconds, it times out the send, and closes the
connection with:
```
closing AMQP connection <0.30902.126> (127.0.0.1:53870 -> 127.0.0.1:5672):
{writer,send_failed,{error,timeout}}
```
This is https://github.com/pika/pika/issues/753#issuecomment-318119222.
Set a prefetch limit of 100 messages, or the batch size, to better
handle queues which start with large numbers of outstanding events.
Setting prefetch=1 causes significant performance degradation in the
no-op queue worker, to 30% of the prefetch=0 performance. Setting
prefetch=100 achieves 90% of the prefetch=0 performance, and higher
values offer only minor gains above that. For batch workers, their
performance is not notably degraded by prefetch equal to their batch
size, and they cannot function on smaller prefetches than their batch
size.
We also set a 100-count prefetch on Tornado workers, as they are
potentially susceptible to the same effect.
[1] https://www.rabbitmq.com/confirms.html#channel-qos-prefetch
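With pika, the setting is a one-line channel option; a sketch:
```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
# The server now sends at most 100 unacknowledged messages before
# waiting for acks, instead of the entire queue contents at once.
channel.basic_qos(prefetch_count=100)
```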
The `current_queue_size` key in the queue monitoring stats file was
the local queue size, not the global queue size -- d5a6b0f99a
renamed the function, but did not adjust the queue monitoring JSON,
despite the last use of it having been removed in cd9b194d88.
The function is still used to mark "we emptied our queue," and it
remains a reasonable metric for that.
TOR users are legitimate users of the system; however, that system can
also be used for abuse -- specifically, by evading IP-based
rate-limiting.
For the purposes of IP-based rate-limiting, add a
RATE_LIMIT_TOR_TOGETHER flag, defaulting to false, which lumps all
requests from TOR exit nodes into the same bucket. This may allow a
TOR user to deny other TOR users access to the find-my-account and
new-realm endpoints, but this is a low cost for cutting off a
significant potential abuse vector.
If enabled, the list of TOR exit nodes is fetched from their public
endpoint once per hour, via a cron job, and cached on disk. Django
processes load this data from disk, and cache it in memcached.
Requests are spared from the burden of checking disk on failure via a
circuitbreaker, which trips if there are two failures in a row, and
only begins trying again after 10 minutes.
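A sketch, assuming the `circuitbreaker` library and an illustrative
cache path:
```python
import json
from typing import Set

from circuitbreaker import circuit

@circuit(failure_threshold=2, recovery_timeout=600)
def load_tor_exit_nodes() -> Set[str]:
    # After two consecutive failures, callers fail fast for 10 minutes
    # instead of paying the disk-read cost on every request.
    with open("/var/lib/zulip/tor-exit-nodes.json") as f:  # illustrative path
        return set(json.load(f))
```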
Unhandled exceptions propagating to process_queue were not caught there,
causing improper logging - errors didn't land in errors.log as expected.
Exceptions should be caught and explicitly logged by the process_queue
logger. Exceptions occurring during consuming events are caught and
handled inside the worker's logic - however those that happen while
setting up the worker were not addressed at all, and that's the core bug
we mean to address here.
Furthermore, in multi-threaded mode we want the autoreload mechanism to
be working - which it doesn't without catching the exceptions. The
correct approach is to - again - catch the exception, log it and then
send a SIGUSR1 signal to trigger exit and autoreload.
I believe that this migration with the default of atomic=True will
fail when trying to convert the field to PositiveIntegerField if there
were any 0 values present in the database when the migration began.
The fix is to have each of the steps be their own transaction.
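A sketch of what that looks like in a Django migration (the model,
field, and data fix are hypothetical):
```python
from django.db import migrations, models

def fix_zero_values(apps, schema_editor) -> None:
    # Hypothetical data fix: repair rows that would violate the new
    # field type before the column conversion runs.
    Example = apps.get_model("zerver", "Example")
    Example.objects.filter(counter=0).update(counter=1)

class Migration(migrations.Migration):
    # With the default atomic=True, everything runs in one transaction
    # and the AlterField can fail on the pre-fix 0 values;
    # atomic=False lets each step commit separately.
    atomic = False

    dependencies = [("zerver", "0001_initial")]  # illustrative

    operations = [
        migrations.RunPython(fix_zero_values),
        migrations.AlterField(
            model_name="example",
            name="counter",
            field=models.PositiveIntegerField(),
        ),
    ]
```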
A SIGTERM can show up at any point in the ioloop, even in places which
are not prepared to handle it. This results in the process ignoring
the `sys.exit` which the SIGTERM handler calls, with an uncaught
SystemExit exception:
```
2021-11-09 15:37:49.368 ERR [tornado.application:9803] Uncaught exception
Traceback (most recent call last):
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/http1connection.py", line 238, in _read_message
delegate.finish()
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/httpserver.py", line 314, in finish
self.delegate.finish()
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/routing.py", line 251, in finish
self.delegate.finish()
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/web.py", line 2097, in finish
self.execute()
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/web.py", line 2130, in execute
**self.path_kwargs)
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/gen.py", line 307, in wrapper
yielded = next(result)
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/tornado/web.py", line 1510, in _execute
result = method(*self.path_args, **self.path_kwargs)
File "/home/zulip/deployments/2021-11-08-05-10-23/zerver/tornado/handlers.py", line 150, in get
request = self.convert_tornado_request_to_django_request()
File "/home/zulip/deployments/2021-11-08-05-10-23/zerver/tornado/handlers.py", line 113, in convert_tornado_request_to_django_request
request = WSGIRequest(environ)
File "/home/zulip/deployments/2021-11-08-05-10-23/zulip-py3-venv/lib/python3.6/site-packages/django/core/handlers/wsgi.py", line 66, in __init__
script_name = get_script_name(environ)
File "/home/zulip/deployments/2021-11-08-05-10-23/zerver/tornado/event_queue.py", line 611, in <lambda>
signal.signal(signal.SIGTERM, lambda signum, stack: sys.exit(1))
SystemExit: 1
```
Supervisor then terminates the process with a SIGKILL, which results
in dropping data held in the tornado process, as it does not dump its
queue.
The only command which is safe to run in the signal handler is
`ioloop.add_callback_from_signal`, which schedules the callback to run
during the course of the normal ioloop. This callback does an
orderly shutdown of the server and the ioloop before exiting.
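A sketch of the pattern (the shutdown body is illustrative):
```python
import signal

from tornado.ioloop import IOLoop

def install_sigterm_handler(ioloop: IOLoop, server) -> None:
    def shutdown() -> None:
        # Runs inside the normal ioloop, where it is safe to do real
        # work: stop accepting connections, then stop the loop; the
        # process exits once IOLoop.start() returns in main.
        server.stop()
        ioloop.stop()

    # The only safe thing to do in the handler itself is to schedule
    # the real work onto the ioloop.
    signal.signal(
        signal.SIGTERM,
        lambda signum, frame: ioloop.add_callback_from_signal(shutdown),
    )
```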
This commit ensures that all markdown fixtures have unique
test names by rewriting the names of some of them and adding
a test in `test_markdown.py`.
Earlier, duplicate names were overwriting values for the same keys in
`load_markdown_tests` in `test_markdown.py`.
Race conditions in stream unsubscription may lead to multiple
back-to-back SUBSCRIPTION_DEACTIVATED RealmAuditLog entries for the
same stream. The current logic constructs duplicate UserMessage
entries for such, which then later fail to insert.
Keep a set of message-ids that have been prep'd to be inserted, so
that we don't duplicate them if there is a duplicated
SUBSCRIPTION_DEACTIVATED row. This also renames the `message` local
variable, which otherwise overrode the `message` argument of a
different type.
Previously, our codebase contained links to various versions of the
Django docs, e.g.
https://docs.djangoproject.com/en/1.8/ref/request-response/#django.http.HttpRequest
and
https://docs.djangoproject.com/en/2.2/ref/settings/#std:setting-SERVER_EMAIL.
Opening a link to a doc with an outdated Django version would show a
warning "This document is for an insecure version of Django that is no
longer supported. Please upgrade to a newer release!".
Most of these links are inside comments.
Following the replacement of these links in our docs, this commit uses
a search with the regex "docs.djangoproject.com/en/([0-9].[0-9]*)/"
and replaces all matches with "docs.djangoproject.com/en/3.2/".
All the new links in this commit have been generated by the above
replace and each link has then been manually checked to ensure that
(1) the page still exists and has not been moved to a new location
(and it has been found that no page has been moved like this), (2)
that the anchor that we're linking to has not been changed (and it has
been found that no anchor has been changed like this).
One comment where we mentioned a Django version in text before linking
to a page for that version has also been changed; the comment
mentioned the specific version when a change happened, and that
history is no longer relevant to us.
This resolves the issues reported in #20108, a major chunk of which was
due to the incomplete support for importing the livechat streams/messages
in the tool. So, it's best not to import any livechat streams/messages
for now, until complete support for importing them is developed.
The decorator form is clearer by being more explicit; additionally,
the api_by_user rate limit is currently only used in one place, and it
makes it difficult to test per-user rate limits that are more specific.
Both `create_realm_by_ip` and `find_account_by_ip` send emails to
arbitrary email addresses, and as such can be used to spam users.
Lump their IP rate limits into the same bucket; most legitimate users
will likely not be using both of these endpoints at similar times.
The rate is set at 5 in 30 minutes, the more quickly-restrictive of
the two previous rates.
The existing test did not verify that the rate limit only applied to
127.0.0.1, and that other IPs were unaffected. For safety, add an
explicit test of this.
The only use case of rate_limit_rule which does not clear the
RateLimitedIPAddr history is test_hit_ratelimits_as_remote_server,
which is not made any worse by clearing out the IP history for a
non-existent `api_by_remote_server` domain.
Since the `prefers_web_public_view` key in the session is only
relevant to users without an account, this key should no longer
be present in the user's session object.
Fixes #19907
For realm exports, the following changes have been made:
- `./manage.py export --upload` now deletes both the `.tar.gz` and the unpacked dir
- `./manage.py export` only deletes the unpacked dir
We have also removed `--delete-after-upload`, since that behavior is
now the default.
Fixes #20081
If the realm is web-public, spectators can now view the avatars of
other users.
There is a special exception we had to introduce in the rest model to
allow `/avatar` type URLs for `anonymous` access, because they
don't have the /api/v1 prefix.
Fixes #19838.
This commit adds functionality to import messages from the
Discussions having direct channels as their parent. As we don't
have topics in the PMs, the messages are imported in interleaved
form in the imported direct channels/PMs.
This was completely unsupported earlier and would have resulted in
an error.
This commit updates the error message returned when the maximum
invite limit for the day is reached. We update the error returned by
the API to only mention that the limit is reached, and add the
suggestion to use a multi-use link or contact support in the message
shown in the webapp.
Add `escape_navigates_to_default_view` as a bool setting in
UserBaseSettings model and implement it as a checkbox that toggles
the hotkey implementation of escape to the default view in the
advanced user display settings.
With /help/ documentation edits from Alya Abbott.
Fixes #20043.
We always use delivery_email to generate gravatar_url, but in
test_admin_api_hide_emails we were passing email to get_gravatar_url
and matching it against the avatar_url field of the fetched user object.
The tests were passing because email_address_is_realm_public
was using the old realm object, and thus the email field was
incorrectly set to delivery_email even when email_address_visibility
was set to EMAIL_ADDRESS_VISIBILITY_ADMINS.
This commit fixes the test to pass delivery_email to get_gravatar_url.
We create a RealmUserDefault object for the internal realm just
for consistency. The migration code does so, but adding the
corresponding code was missed when creating a new internal realm.
Not proxying these requests through camo is a security concern.
Furthermore, on the desktop client, any embed image which is hosted on
a server with an expired or otherwise invalid certificate will trigger
a blocking modal window with no clear source and a confusing error
message; see zulip/zulip-desktop#1119.
Rewrite all `message_embed_image` URLs through camo, if it is enabled.
We add the discussion ID and URL in the comments, and a highlighted
title to the body of the discussion message, to make it more
meaningful and accessible.
Fixes #19938.
We aim to use Zulip topics thoughtfully in displaying messages from
discussions, as well as linking to the discussion in every message so
that it's easy to view them.
Fixes #19938.
Supporting URL percent-encoded bytes is possible using `%%20`, but this
is not necessarily very understandable to end-users, even those that
understand percent encoding.
Allow `%20` in linkifier URL format strings, and transform them into
`%%20` in the pattern just before they are applied in markdown
translation. Care must be taken here, such that already-escaped `%`s
are not escaped an extra time.
We do this before rendering, and not before storage, as
a simplification; the JS-side linkifier at present only understands
`%(foo)s` and thus needs no changes, and to avoid an un-escaping pass
before showing in the admin UI.
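A sketch of the transformation (function name illustrative):
```python
import re

def escape_percents(url_format: str) -> str:
    # Leave %(name)s substitutions and already-escaped %% untouched;
    # double every other bare %, so "%20" becomes "%%20" while "%%20"
    # stays as-is.
    return re.sub(
        r"(%%|%\(\w+\)s)|%",
        lambda m: m.group(1) if m.group(1) else "%%",
        url_format,
    )

assert escape_percents("https://example.com/%(id)s?q=%20") == (
    "https://example.com/%(id)s?q=%%20"
)
```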
User-supplied custom realm filters have had some sort of regex-based
validation of the URL format since their introduction in
d7e1e4a2c0 -- and this has always been
in addition to the URLValidator. The URLValidator is the one which
does the security-relevant work of validating that the schema is
reasonable, and that the overall shape of the URL is well-formed. The
regex has served primarily to arbitrarily limit the characters that
can appear in the URL, in the mistaken name of safety.
Adjust the regex, such that its only purpose is to verify that the
usages of `%` characters in the URL are reasonable, and leave the URL
validation to the URLValidator, which can do a far better job. This
includes broadening the support to include `%%` as an escape
character; this is likely such a niche case as to be unnecessary, but
costs little.
Fixes #16013.
og:image is supposed to be an absolute URL, but some sites incorrectly
provide a relative URL. In this case, it makes more sense to
interpret it relative to the full page URL after redirects, rather
than relative to just the domain part of the page URL before
redirects.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We don't yet have a do_reactivate_stream function, but we reserve a
number since:
1. It'll likely be added in the future.
2. For now, we can restore archived streams with some manual intervention
in the Django shell, and for that we'll want to create an appropriate
RealmAuditLog entry.
Removes the `/day` and `/night` options from the typeahead menu while
still allowing the commands to be used. Typing `/day` and `/night`
will now suggest `/light` and `/dark`, respectively. Also changes the
`Dark mode` and `Light mode` popups that appear after using the
corresponding command.
Fixes #18318.
Created a schema for the `ignored_parameters_unsupported` block that is
returned by the /settings and /realm/user_settings_defaults endpoints
and removed the duplicated text in the API documentation.
Also cleaned up some small errors in the /realm/user_settings_defaults
definition and the sidebar link /api/update-realm-user-settings-defaults.
Fixes #19674
Since 3853285241, PushNotificationsWorker uses the aioapns library
to send Apple push notifications. This introduces an asyncio event
loop into this worker process, which, if unlucky, can respond poorly
when a SIGALRM is introduced to it:
```
[asyncio] Task exception was never retrieved
future: <Task finished coro=<send_apple_push_notification.<locals>.attempt_send() done, defined at /path/to/zerver/lib/push_notifications.py:166> exception=WorkerTimeoutException(30, 1)>
Traceback (most recent call last):
File "/path/to/zerver/lib/push_notifications.py", line 169, in attempt_send
result = await apns_context.apns.send_notification(request)
File "/path/to/zulip-py3-venv/lib/python3.6/site-packages/aioapns/client.py", line 57, in send_notification
response = await self.pool.send_notification(request)
File "/path/to/zulip-py3-venv/lib/python3.6/site-packages/aioapns/connection.py", line 407, in send_notification
response = await connection.send_notification(request)
File "/path/to/zulip-py3-venv/lib/python3.6/site-packages/aioapns/connection.py", line 189, in send_notification
data = json.dumps(request.message, ensure_ascii=False).encode()
File "/usr/lib/python3.6/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/usr/lib/python3.6/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python3.6/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/path/to/zerver/worker/queue_processors.py", line 353, in timer_expired
raise WorkerTimeoutException(limit, len(events))
zerver.worker.queue_processors.WorkerTimeoutException: Timed out after 30 seconds processing 1 events
```
...which subsequently leads to the worker failing to make any progress
on the queue.
Remove the timeout on the worker. This may result in failing to make
forward progress if Apple/Google take overly long handling requests,
but is likely preferable to failing to make forward progress if _one_
request takes too long and gets unlucky with when the signal comes
through.
This makes logging more consistent between FCM and APNs codepaths, and
makes clear which user-ids are for local users, and which are opaque
integers namespaced from some remote zulip server.
Being able to determine how many distinct users are getting push
notifications per remote host is useful, as is the distribution of
device counts. This parallels the log line in
handle_push_notification for push notifications from local realms,
handled via the event queue.
We should use more selective query for UserGroupMembership
objects in tests for checking adding and removing members.
This is done by checking the membership counts for the
particular user group only.
This will help in keeping the tests more understandable
after we add members to the role-based system groups,
since that would create a lot of membership objects.
We make the UserGroup queries in user group creation and
deletion tests more selective by filtering the user groups
which belong to the realm, and not the ones included in
the lear realm, etc.
This will help us to keep the tests more understandable
when the counts of UserGroup increases due to addition of
system groups. There is no need to consider system groups
of other realms in these tests.
It is confusing to have the plan type constants not be namespaced
by the thing they represent. We already have a namespacing
convention in place for constants, so we should use it for
Realm.plan_type as well.
* Remove unnecessary json_validator for full_name parameter.
* Update frontend to pass the right parameter.
* Update documentation and note the change.
Fixes #18409.
`rendered_content` in historical messages may be empty; examining the
history of them may thus require diff'ing two empty strings, which
itself produces an empty string.
Use `lxml.html.fragment_fromstring` to be able to successfully parse
these, rather than returning a 500.
Part of #19559.
As detailed in the comments, the default behavior is undesirable for us
because we can't really predict all possibilities of exceptions that may
be raised - and thus putting str(e) in the http response is potentially
insecure as it may leak some unexpected sensitive information that was
in the exception.
As a hypothetical example - a KeyError resulting from a buggy
some_dict[secret_string] call would leak information. Though of course
we aim to never write code like that.
We pass allow_realm_admin as True to access_stream_by_id for
the `GET /users/{user_id}/subscriptions/{stream_id}` endpoint,
because we want to allow non-subscribed admins to get
subscription status in private streams.
Fixes #19077.
This commit adds related_name parameter to UserGroup.direct_members
such that we can use direct_groups instead of the default
usergroupmembership_set for getting all the groups of which the
user is direct member.
This commit also sets related_name of UserGroupMembership.user_group
and UserGroupMembership.user_profile to "+" which means that we will
not be having backward relations for these. This change is correct
since we would need to use the recursive queries to get all the
groups of a user and all the members of a group after we add the
subgroups concept in next commit. This leads to us using direct_members
field of UserGroup instead of usergroupmembership_set in mention code,
but this will soon be replaced with the recursive query function to
include subgroup's members as well.
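In sketch form, the field definitions described look something like
this (simplified from the real models):
```python
from django.db import models

class UserGroup(models.Model):
    direct_members = models.ManyToManyField(
        "UserProfile",
        through="UserGroupMembership",
        # user_profile.direct_groups now works in place of the default
        # backward accessor.
        related_name="direct_groups",
    )

class UserGroupMembership(models.Model):
    # related_name="+" disables backward relations; recursive queries
    # will handle subgroup traversal instead.
    user_group = models.ForeignKey(
        UserGroup, on_delete=models.CASCADE, related_name="+"
    )
    user_profile = models.ForeignKey(
        "UserProfile", on_delete=models.CASCADE, related_name="+"
    )
```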
Extracted this commit from #19866.
Authored-by: Anders Kaseorg <anders@zulip.com>
This commit makes the query in get_user_group_direct_members
efficient by directly fetching user-profile ids instead of
first fetching user profile objects and then their ids.
The previous commit apparently didn't have its migration attached properly.
Since this is just a reference to a ManyToManyField, this migration
doesn't actually modify the database, but it is needed for CI to pass.
This commit renames members field of UserGroup to direct_members
for better readability because in the new permissions model, a
user group can be a sub-group of another group and thus technically
members of sub-group will also be members of that group.
This is a prep commit for new permissions model.
Extracted this commit from #19866.
Co-authored-by: Anders Kaseorg <anders@zulip.com>
This is a prep commit for new permissions model in
which a user group would be able to have a subgroup.
This commit renames get_memberships_of_users to
get_direct_memberships_of_users to specify that
the function is used only to fetch the direct
memberships and not memberships of subgroups of
the direct group.
Extracted this commit from #19866.
Co-authored-by: Anders Kaseorg <anders@zulip.com>
This is a prep commit for new permissions model in which a user
group would be able to have a subgroup.
This commit renames get_user_groups to get_direct_user_groups
to specify that the function is used only to fetch the direct
groups that user is part of and not subgroups of the direct
group.
Extracted this commit from #19866.
Co-authored-by: Anders Kaseorg <anders@zulip.com>
This is a prep commit for new permissions model in which a user group would
be able to have a subgroup.
This commit renames get_user_group_members to get_user_group_direct_members
to specify that the function is used only to fetch direct members of group
and excludes the subgroup's members.
Extracted this commit from #19866.
Co-authored-by: Anders Kaseorg <anders@zulip.com>
This will be useful to let users enable/disable
sharing read receipts once we add that feature.
Note: Added "I've" to IGNORED_PHRASES in
tools/lib/capitalization.py to avoid capitalization
errors for the label text of this setting.
Note: These are not functional in enabling/disabling sending of
typing notifications with this commit.
Refactored the privacy settings update to keep the code less
duplicated along with making the addition of new settings easier.
From 430c5cb, in `fetch_initial_state_data`,
we only include legacy settings in the top level of
`state` and the newer ones are stored in `state['user_settings']`.
That should've had a corresponding change in apply_event().
Also, fixed a test related to this logic.
From 430c5cb, we do not send events for settings added after
introduction of `user_settings` event.
This test wasn't considering that and started failing on
adding new user_settings.
This commit removes _test_user_settings_for_adding_streams
and its callers for testing public and private streams
because it uses excessive mocking and we also test the same
thing in _test_user_settings_for_creating_streams without
mocking, so this test doesn't add anything.
For users who are not logged in and for those who don't have
'prefers_web_public_view' set in session, we redirect them
to the default login page where they can choose to login
as spectator or authenticated user.
This commit adds can_create_web_public_streams helper
in models.py which will be used to validate whether
user is allowed to create a web-public stream or not.
This commit also adds the checks for Realm.POLICY_OWNERS_ONLY
in check_has_permission_policies.
This commit adds create_web_public_stream_policy
field to Realm table which controls the roles that
can create web-public streams and by default its
value is set to POLICY_OWNERS_ONLY.
This commit enforces invite_only argument to be named
in _test_user_settings_for_creating_streams. This will
help in improving readability especially when we will
add is_web_public argument in further commits.
This commit adds tests for POLICY_EVERYONE and POLICY_NOBODY
in check_has_permission_policies test. The original code
used these values but these were not covered in test.
We send three events when changing delivery email of a user - one
for updating the delivery_email field of user, one for avatar url
change, and one for changing email field if email_address_visibility
is set to 'EMAIL_ADDRESS_VISIBILITY_EVERYONE'.
There is already a test for delivery_email and avatar_url event with
the visibility setting set to 'EMAIL_ADDRESS_VISIBILITY_ADMINS_ONLY',
but no test for verifying the email update event sent when email
address is public, so this commit adds a test for checking the schema
of event for updating email field.
When email_address_visibility is changed and either the old value
or the updated value is EMAIL_ADDRESS_VISIBILITY_EVERYONE then
email field of all users is updated and we also send the corresponding
event to clients. But apply_event code did not update the data on
receiving the event, so this commit fixes the code to correctly
handle the event in apply_event.
(We also use this event when just changing a user's email address).
This commit also adds the tests and openapi schema for the event.
We use the lists defined in models.py, like Realm.COMMON_POLICY_TYPES,
Realm.COMMON_MESSAGE_POLICY_TYPES, etc., in do_set_realm_property_test
instead of defining the list there (e.g., [4, 3, 2, 1]). We do the
same thing in do_set_realm_property_test in test_realm.py.
We skip email_address_visibility values in this commit because it
requires some change in openapi schema as well.
Since the calls to the translation function `_()` are made outside
of the `send_message_moved_breadcrumbs` function, these strings are
translated outside of the `with override_language` block, leading to
translated strings even when we don't intend them to be translated.
We now use gettext_lazy with appropriate testing to avoid this.
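In sketch form (strings and names illustrative):
```python
from django.utils.translation import gettext_lazy
from django.utils.translation import override as override_language

# gettext_lazy defers the translation lookup until the string is
# rendered, which here happens inside the override_language block.
breadcrumb = gettext_lazy("This topic was moved here from {old} by {user}.")

def render_breadcrumb(language: str, old: str, user: str) -> str:
    with override_language(language):
        return str(breadcrumb).format(old=old, user=user)
```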
Zulip attempts to validate that the regular expressions that admins
enter for linkifiers are well-formatted, and only contain a specific
subset of regex grammar. The process of checking these
properties (via a regex!) can cause denial-of-service via
backtracking.
Furthermore, this validation itself does not prevent the creation of
linkifiers which themselves cause denial-of-service when they are
executed. As the validator accepts literally anything inside of a
`(?P<word>...)` block, any quadratic backtracking expression can be
hidden therein.
Switch user-provided linkifier patterns to be matched in the Markdown
processor by the `re2` library, which is guaranteed constant-time.
This somewhat limits the possible features of the regular
expression (notably, look-head and -behind, and back-references);
however, these features had never been advertised as working in the
context of linkifiers.
A migration removes any existing linkifiers which would not function
under re2, after printing them for posterity during the upgrade; they
are unlikely to be common, and are impossible to fix automatically.
The denial-of-service in the linkifier validator was discovered by
@erik-krogh and @yoff, as GHSL-2021-118.
This removes a false-positive ReDoS, since the input is always
checked-in code. It also incidentally refactors to make the regexes
be more explicit about the values they expect, and removes unnecessary
capturing groups.
It removes an optional parenthesized status code for fixtures,
unnecessary since 981e4f8946, as well as
optional key-value language options, unnecessary since
a2be9a0e2d.
Thank you to @erik-krogh and @yoff for bringing this to our attention.
This fixes the issue where 'None' would appear in the rendered
html in case of a missing tab display_name. Now,
'test-help-documentation' will fail in case of any tab display_name
being missing.
In case of a tab_section with no tabs, currently a single tab with
the name 'null_tab' gets added. Added the display name 'None' for
'null_tab', to keep in line with the existing behaviour.
Fixes #19822
This makes our onboarding guide for education organizations much
simpler, since new organizations will start with these settings
correctly configured.
Fixes #19682
Users wanted a feature where they could specify
which users can create public streams and which users can
create private streams.
This splits stream creation code into two parts,
public and private stream creation.
Fixes #17009.
This commit replaces 'allow_message_deleting' boolean setting
with an integer setting 'delete_own_message_policy'. We have a
separate dropdown now for deciding which user-roles can delete
messages sent by themselves, and the time-limit setting dropdown
is different.
This new setting has two options - everyone and admins only. Other
options including moderators will be added further.
We also remove the "Never" option from the original time-limit
dropdown, as admins are always allowed to delete message. This
never option resembled the case of only admins being allowed to
delete but this state is now resembled by setting the dropdown
to "admins only" and we also disable the time-limit dropdown in
this case as admins are allowed to delete irrespective of limit.
Note, this setting is only for deleting messages sent by the
deleting user themselves, and only admins are allowed to delete
messages sent by others as before.
We make zero an invalid value for message_content_delete_limit_seconds;
for the case of "Allow to delete message any time", the API-level
value of message_content_delete_limit_seconds is "anytime", with
"None" as the DB-level value. We also use these values for the message
retention setting, so it helps maintain consistency.
This commit does not remove the 'enable_login_emails' field from the
RealmUserDefault table, but it is just not used and cannot be
changed from the UI or API, similar to the 'enable_marketing_emails' setting.
This is a somewhat subtle function, that deserves a few comments
explaining subtle details of its logic, and there's no good reason to
have multiple copies of that logic that are slightly inconsistent.
Because the main changes here are just checking for invariant
failures, the behavioral change here should be limited to ensuring
deactivated streams are not considered available even if they were
tagged as web public streams before deactivation.
This fixes a problem where we could not import zerver.lib.streams from
zerver.lib.message, which would otherwise be reasonable, because the
former implicitly imported many modules due to this issue.
Requests to the root subdomain weren't getting request_notes.realm set
even if a realm exists on the root subdomain - which is actually a
common scenario, because simply having one organization, on the root
subdomain, is the simplest and most common setup for self-hosted
deployments.
This reverts commit cd93d0967f.
This check_or is redundant with check_union; it gives a misleading
error message for the non-matching case; and it has no type safety.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
In maybe_send_resolve_topic_notifications, since the calls to the
translation function `_()` are made outside of the `override_language`
block, the strings are not translated correctly.
This commit refactors the function to make sure that the translation
happens in the right block of code.
Fixes #19730.
Apparently, our slack compatible outgoing webhook format didn't
exactly match Slack, especially in the types used for values. Fix
this by using a much more consistent format, where we preserve their
pattern of prefixing IDs with letters.
This fixes a bug where Zulip's team_id could be the empty string,
which tripped up using GitLab's slash commands with Zulip.
Fixes #19588.
This commit removes the existing default_twenty_four_hour_time field in
Realm table which was used to set the twenty_four_hour_time setting of
new user on joining and instead we now use the twenty_four_hour_time
field of RealmUserDefault table for the same.
With some tweaks by tabbott to clarify the documentation.
These values are currently either a string already or a List[int]. We
should do the conversion in
do_update_user_custom_profile_data_if_changed properly: if the value is
already a string, it can be used directly - if it's not, orjson.dumps is
a more future-proof way of converting than str(). Using orjson.dumps
here also allows us to change the converter of the USER type
CustomProfileField to orjson.loads, which is nicer to have than
ast.literal_eval.
While orjson.dumps() and str() give the same output when
given the special case of List[int],
ast.literal_eval was previously used due to orjson.loads not being
a good inverse function to str in general. That gets straightened out
now.
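In sketch form (function name illustrative):
```python
from typing import List, Union

import orjson

def to_stored_value(value: Union[str, List[int]]) -> str:
    # Strings are stored directly; anything else is serialized with
    # orjson, whose output orjson.loads can round-trip, unlike str().
    if isinstance(value, str):
        return value
    return orjson.dumps(value).decode()

assert orjson.loads(to_stored_value([1, 2, 3])) == [1, 2, 3]
```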
None of the existing custom profile field types have the value as an
integer as declared in many places - nor is it a string as currently
declared in types.py. The correct type is Union[str, List[int]]. Rather
than tracking this in so many places throughout the codebase, we add a
new ProfileDataElementValue type and insert it where appropriate.
The old assignment is incorrect - field_value.value is a TextField() and
should always be a string. This didn't strictly break anything, because
django converts the value to a string when .save()ing to the db, but
field_value.value persists as a non-string for the rest of this
codepath. After fixing this, the small code block in
notify_user_update_custom_profile_data handling conversion of
field_value.value to a string becomes redundant.
We're assured that we're not breaking event format by the test
test_custom_profile_field_data_events in test_events.py.
Send update event to client after a stream is made web public.
This has been documented in the API documentation since feature level
73; previously the value was always false.
We allow clients to make existing streams web public via the API.
This feature is still disabled via settings in production
environments, because we may have additional policy rules or UI
warnings we wish to add to this sort of conversion.
Users can now create web-public streams via the /subscribe API.
So, when a web-public stream present in the API request does not
exist, it will now be created by specifying the is_web_public
parameter. The parameter would have been ignored without this
commit.
The new error message is more clear about why, "User cannot create
stream with this settings." was bad English, and in any case removing
an unnecessary string is always an improvement for translators.
This new setting both serves as a guard to allow us to merge API
support for web public streams to main before we're ready for this
feature to be available on Zulip Cloud, and also long term will
protect self-hosted servers from accidentally enabling web-public
streams (which could be a scary possibility for the administrators of
a corporate Zulip server).
Recently, we discovered that our settings_tab/relative Markdown
directives didn't work when they were in a macro that was included
in another Markdown file. Note that without this commit, the
/help/create-your-organization-profile page is broken. This commit
changes the respective priorities of these two extensions such that
these directives are rendered *after* the macro is included in
another file.
Thanks to Alya Abbott for reporting this bug!
All of our custom Markdown extensions have priorities that govern
the order in which the preprocessors will be run. It is more
convenient to have these all in one file so that you can easily
discern the order at first glance.
Thanks to Alya Abbott for reporting the bug that led to this
refactoring!
This is a follow-up to #19388.
We will in the future allow patch requests to change the visibility
of an existing topic, so `last_updated` is a better name for this field.
This commit does not affect the API or events in any way, but only the
database.
Fixes #17456.
The main tricky part has to do with what values the attribute should
have. LDAP defines a Boolean as
Boolean = "TRUE" / "FALSE"
so ideally we'd always see exactly those values. However,
although the issue is now marked as resolved, the discussion in
https://pagure.io/freeipa/issue/1259 shows how this may not always be
respected - meaning it makes sense for us to be more liberal in
interpreting these values.
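A sketch of a suitably liberal parser (the accepted spellings here are
an assumption):
```python
from typing import Union

def parse_ldap_boolean(value: Union[str, bytes]) -> bool:
    # LDAP specifies exactly TRUE / FALSE, but real-world directories
    # have been seen to store other spellings.
    if isinstance(value, bytes):
        value = value.decode()
    return value.strip().lower() in ("true", "yes", "1", "on")
```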
The test now uses submit_reg_form_for_user, meaning a blank
full_name is posted to /accounts/register/ rather than the
parameter being excluded.
Fixes part of #7564
I had to pass stop_after_reg_form=True, as the call to get_user in
verify_signup fails. I am not sure whether this is the expected
behavior. Also this causes the test to use submit_reg_form_for_user,
meaning a blank password is posted to /accounts/register/ rather than
no password.
Fixes part of #7564
get_user_by_delivery_email should be used, given that the email variable
is the realm email address that the account is being created with, not
the .email field which can be a dummy address based on org settings.
Previously, we redirected to /new when the user clicked on "buy
standard" from the root domain. Instead, we now redirect to the
/upgrade page. The /upgrade page asks the user to enter the subdomain
of their organization and then redirects them to the /upgrade
page of their organization.
This better matches the title of the page and more generally our
conventions around naming /help/ articles. We include a redirect
because this is referenced from Welcome Bot messages, and we
definitely don't want those links to break.
This parallels fe25517295, but for mobile notifications. It also
adds a test, which verifies that such content does not crash either
mobile or email notifications.
fe25517295 adjusted the email_notifications codepath to use
`lxml.html.fragment_fromstring` method when parsing
`rendered_content`, but left the tests using a helper which called
`fromstring`.
Switching the tests to match the code as run reveals a bug -- using
`drop_tree` on all `message_inline_image` classes now _does_ remove
all of a top-level image-URL-only message. Previously, such messages
were "safe" from the block that calls `drop_tree` only by dint of
`drop_tree` being a silent no-op for the root element. When parsed
using `fragment_fromstring`, they are no longer the root, and as such
an empty message results.
Reorder relative_to_full_url to check for only one `message_inline_image`
within the top `<div>`, and only run the `drop_tree` path in the
alternate case. Tests must be adjusted for their output now including
one more layer of `<div>`.
The previous commit introduced logging of attempts for username+password
backends. For completeness, we should log, in the same format,
successful attempts via social auth backends.
Our convention is to always have authenticate() called with a request
object. We need to be consistent with that in tests too, to avoid test
failures resulting from breaking that assumption.
We modify assert_login_failure to call client.login() in the same way as
the other similar helpers - with a properly initialized HttpRequest
instance.
Now, when we add a custom animated emoji to the realm,
we also save a still image of it (the first frame of the GIF), so
we can avoid showing an animated emoji every time.
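A sketch of extracting the still frame with Pillow (names and output
format are illustrative):
```python
from PIL import Image

def save_still_image(animated_path: str, still_path: str) -> None:
    with Image.open(animated_path) as image:
        image.seek(0)  # first frame of the GIF
        image.convert("RGBA").save(still_path, "PNG")
```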
create_confirmation_link has validity time as an optional argument,
because it has reasonable defaults. Thus it's a better API for
do_send_confirmation_email to make this optional as well, allowing
relying on create_confirmation_link's defaults.
This extends the invite api endpoints to handle an extra
argument, expiration duration, which states the number of
days before the invitation link expires.
For prereg users, expiration info is attached to event
object to pass it to invite queue processor in order to
create and send confirmation link.
In case of multiuse invites, confirmation links are
created directly inside do_create_multiuse_invite_link().
For filtering valid user invites, expiration info stored in
Confirmation object is used, which is accessed by a prereg
user using reverse generic relations.
Fixes #16359.
The API for changing the batching period was added in
5db4fe8652.
This is a follow up to that commit. We also update the timestamps for
existing scheduled email notifications entries so that the effect of
changing the setting is immediate.
Part of #15280