There is still some miscellaneous cleanup that
has to happen for things like analytics queries
and dead code in node tests, but this should
remove the main use of pointers in the backend.
(We will also still need to drop the DB field.)
Rename ZEPHYR_MIRROR_BUGDOWN_KEY to ZEPHYR_MIRROR_MARKDOWN_KEY and
DEFAULT_BUGDOWN_KEY to DEFAULT_MARKDOWN_KEY.
This commit is part of a series of commits aimed at renaming bugdown to
markdown.
This commit changes the class name of BugdownListPreprocessor to
MarkdownListPreprocessor. It also changes the corresponding references
in tests.
This is part of a series of commits aimed at renaming bugdown to
markdown.
This commit is the first of a few commits which aim to change all the
bugdown references to markdown. This commit renames the files,
updates file path mentions, and changes the imports.
Variables and other references to bugdown will be renamed in subsequent
commits.
We send user_id of the referrer instead of email in the invites dict.
Sending user_ids is more robust, as those are an immutable reference
to a user, rather than something that can change with time.
Updates to the webapp UI to display the inviters for more convenient
inspection will come in a future commit.
In zulip.yaml, add `deprecated` tags to all parameters/keys with
`Deprecated` in the description. Then add tests to ensure that deprecated
parameters/keys will always have the `deprecated` key. Also, in
the API docs, sort the parameters according to the presence of the
`deprecated` key, presenting the `deprecated` keys at the end and
adding a `deprecated` tag next to them.
We send a remove mobile push notification to users who are
no longer mentioned after the content of a message is edited.
This also corrects the notification count for the mobile apps
where a user was previously mentioned in a muted stream / topic, the
message was edited, and the user is no longer mentioned.
Hence, this fixes the case where a user has read all their unreads
but the notification badge on the app is still positive.
Fixes #15428.
do_clear_mobile_push_notifications_for_ids can now be used to
clear push notifications for multiple users at once. This method
loops over users, so no performance optimization is gained.
We've been seeing an exception in server_event_dispatch.js in
production where in large organizations, sometimes when a new user
joined, every other browser in the organization would throw an
exception processing some sort of realm_user/update event.
It turns out the cause was that when a user copies their profile from
an existing user account with a user-uploaded avatar, the code path we
reused to set the avatar would send a realm_user/update event about
the avatar change -- for a user that hadn't been fully created and
certainly hadn't had a realm_user/add event sent for them.
We fix this and add tests and comments to prevent it from recurring.
(Removed an incorrect docstring while working on this.)
This fixes an issue that caused HTML entities inside of inline code
blocks to be converted rather than displayed literally.
The upstream python-markdown now handles this correctly, so we just use
their implementation with our changes for removing .strip(). As a result
of this migration, we switch backtick pattern to an inline processor
too.
Fixes #12056.
For the codeblock counterpart of this issue, we should follow the
upstream PR https://github.com/Python-Markdown/markdown/pull/990.
Co-authored-by: Rohitt Vashishtha <aero31aero@gmail.com>
Note that I don't actually convert the
checker from check_dict to check_dict_only,
because that would be a user-facing change,
but I think we can sweep a lot of things
like this after the next release.
Because of other validation on these values, I don't believe any of
these does anything different, but these changes improve readability
and likely make GitHub's code scanners happy.
We now have our muted topics use tuples internally,
which allows us to tighten up the annotation
for get_topic_mutes, as well as our schema
checking.
We want to deprecate sub_validator=None
for check_list, so we also introduce
check_tuple here. Now we also want to deprecate
check_tuple, but it's at least isolated now.
We will use this for data structures that are tuples,
but which are sent as lists over the wire. Fortunately,
we don't have too many of those.
The plan is to convert tuples to dictionaries,
but backward compatibility may be tricky in some
places.
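A minimal sketch of the idea behind such a tuple validator (hypothetical
signature; the real validator would live alongside our other validators):

from typing import Any, Callable, List, Tuple

Validator = Callable[[str, object], Any]

def check_tuple(sub_validators: List[Validator]) -> Validator:
    # Each position in the fixed-length value gets its own validator;
    # the value itself arrives as a list over the wire.
    def f(var_name: str, val: object) -> Tuple[Any, ...]:
        if not isinstance(val, (list, tuple)):
            raise ValueError(f'{var_name} is not a tuple')
        if len(val) != len(sub_validators):
            raise ValueError(f'{var_name} should have exactly {len(sub_validators)} items')
        return tuple(
            sub_validator(f'{var_name}[{i}]', item)
            for i, (sub_validator, item) in enumerate(zip(sub_validators, val))
        )
    return f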
This commit changes the do_get_user_invites function to not return
multiuse invites to non-admin users. We should only return multiuse
invites to admins, as we only allow admins to create them.
Streams can have lots of subscribers, meaning that the archiving process
will be moving tons of UserMessages per message. For that reason, using
a smaller batch size for stream messages is justified.
Some personal messages need to be added in test_scrub_realm to have
coverage of do_delete_messages_by_sender after these changes.
Currently, we use -1 as the Realm.message_retention_days value to retain
messages forever unless specified at the stream level for a particular stream,
that is, no policy set at the realm level. But this is incoherent with what
we use for Stream.message_retention_days where -1 means
> disable retention policy for this stream unconditionally
that can be confusing from an API standpoint.
So instead of trying some hack to reset the value to NULL or using some
other value like -2 for RETAIN_MESSAGE_FOREVER and using that for the API,
it is much more intuitive to use a string like 'forever' that can be mapped to
RETAIN_MESSAGE_FOREVER in the backend. And this is similar to what we use
for stream settings as well.
To be more consistent with the meaning in the Stream model, and to make
it easier to have a reasonable settings API, we get rid of the None
value for Realm.message_retention_days in favor of the value -1 to
represent the "don't delete messages" default policy.
A generator that yields values without receiving or returning them is
an Iterator. Although every Iterator happens to be iterable, Iterable
is a confusing annotation for generators because a generator is only
iterable once.
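For example (a generic illustration, not code from this commit):

from typing import Iterator

def fibonacci(n: int) -> Iterator[int]:
    # A generator yields values without receiving or returning any,
    # so Iterator[int] is the precise type; Iterable[int] would
    # wrongly suggest the result can be iterated more than once.
    a, b = 0, 1
    for _ in range(n):
        yield a
        a, b = b, a + b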
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Also enable warn_unused_ignores. I think the fact that there are so
few of these is good evidence that it’s not a significant burden for
people fixing type errors.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We only use this in a few places, but they're really important places
for understanding the types in the codebase, and so it's worth having
a bit of expository documentation explaining how we use it.
(And I expect we'll add more with time).
With this implementation of the automatic theme detection
feature, we make the following changes in the backend, frontend and
documentation.
This replaces the previous night_mode boolean with an enum, with the
default value being to use the prefers-color-scheme feature of the
operating system to determine which theme to use.
Fixes: #14451.
Co-authored-by: @kPerikou <44238834+kPerikou@users.noreply.github.com>
We can now invite new users as realm owners. We allow only
owners to invite new users as owners, both for single invites
and multiuse invite links. Also, only owners can revoke or resend
owner invitations.
Old: a validator returns None on success and returns an error string
on error.
New: a validator returns the validated value on success and raises
ValidationError on error.
This allows mypy to catch mismatches between the annotated type of a
REQ parameter and the type that the validator actually validates.
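Schematically, the new convention looks like this (an illustrative
sketch, not the full implementation):

from django.core.exceptions import ValidationError

def check_int(var_name: str, val: object) -> int:
    # Returning the validated value (rather than None) lets mypy
    # connect a REQ parameter annotated as int to an int validator.
    if not isinstance(val, int):
        raise ValidationError(f'{var_name} is not an integer')
    return val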
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Rename the validator to check_union, to conform
more to Python typing nomenclature.
And we rename one of the test helpers to the
simpler `check_types`. (The test helper
was using "variable" in the "var" sense.)
We assert that the post was successful, to give
more immediate feedback for tests that don't
bother to check the return value and may be
implicitly assuming this method just works in
all cases.
And we also make it more convenient for tests
that are happy-path tests--they don't have to
do the assertion themselves. (And they're still
free to do deeper checks on the json.)
We opt out with allow_fail=True. We probably want
a more direct API eventually for tests that are
clearly trying to test the failure path for
subscribing to streams.
It's possible that a couple tests here that I added
allow_fail=True to just have flawed data setup--
I don't have time to investigate all cases, but
hopefully they will at least stand out more.
Two things were broken here:
* we were using name(s) instead of id(s)
* we were always sending lists that only
had one element
Now we just send "stream_id" instead of "subscriptions".
If anything, we should start sending a list of users
instead of a list of streams. For example, see
the code below:
if peer_user_ids:
    for new_user_id in new_user_ids:
        event = dict(type="subscription", op="peer_add",
                     stream_id=stream.id,
                     user_id=new_user_id)
        send_event(realm, event, peer_user_ids)
Note that this only affects the webapp, as mobile/ZT
don't use this.
Currently the API docs do not specify whether a given API parameter
is to be specified in `query` or in `path`. Edit the docs so as
to show the type of argument right beside the argument name.
The loop I added here in 5b49839b08 was
ill-conceived. The critical issue was that despite its name,
do_clear_mobile_push_notifications_for_ids does not immediately clear
push notifications (except in our test suite, where `send_event`
immediately calls into the queue worker code!).
Instead, it queues work to clear those push notifications. Which
means that the first user to declare bankruptcy with a large number of
unreads will fill the queue, and then this will just be an infinite
loop adding more work to the queue.
This adds a new client_capability that clients such as the mobile apps
can use to avoid unreasonable network bandwidth consumed sending
avatar URLs in organizations with 10,000s of users.
Clients don't strictly need this data, as they can always use the
/avatar/{user_id} endpoint to fetch the avatar if desired.
This will be more efficient especially for realms with
10,000+ users because the avatar URLs would increase the
payload size significantly and cost us more bandwidth.
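For instance, a client might opt in when registering an event queue (an
illustrative payload; exact parameter shapes are documented in the API
reference):

import json

register_params = {
    'client_capabilities': json.dumps({
        # Lets the server omit avatar_url from realm user objects to
        # save bandwidth; clients can always fetch /avatar/{user_id}.
        'user_avatar_url_field_optional': True,
    }),
}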
Fixes #15287.
We need this field to avoid O(N) database operations
while fetching realm user data for clients with
`user_avatar_url_field_optional` flag enabled.
Part of #15287.
With #14378, we regressed back to the state prior
to 7e0ea61b00.
We fix this by getting our avatar bucket on
object initialization, and using the appropriate means
of gathering the network location for the URLs.
Fixes #14484.
This commit adds backend support for setting message_retention_days
while creating streams and updating it for an existing stream. We only
allow organization owners to set/update it for a stream.
The 'message_retention_days' field for a stream existed previously also, but
there was no way to set it while creating streams or to update it for an
existing stream using any endpoint.
Previously, we had implemented:
<span class="timestamp" data-timestamp="unix time">Original text</span>
The new syntax is:
<time timestamp="ISO 8601 string">Original text</time>
<span class="timestamp-error">Invalid time format: Original text</span>
Since python and JS interpretations of the ISO format are very
slightly different, we force both of them to drop milliseconds
and use 'Z' instead of '+00:00' to represent that the string is
in UTC. The resultant strings look like: 2011-04-11T10:20:30Z.
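On the Python side, the normalization can be as simple as (a sketch,
assuming a timezone-aware datetime):

from datetime import datetime, timezone

def format_timestamp(dt: datetime) -> str:
    # Drop microseconds and spell UTC as 'Z' rather than '+00:00',
    # matching the JS output: 2011-04-11T10:20:30Z
    dt = dt.astimezone(timezone.utc).replace(microsecond=0)
    return dt.isoformat().replace('+00:00', 'Z')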
Fixes #15431.
Fixes #14498.
When a topic is moved to a different stream, the message may no
longer be reachable by a guest user, if the user is not subscribed
to the new stream.
We used to send message update event to the client in these cases,
which seems to be confusing both to the client updating the message
and the server sending push_notifications for it.
Now, we delete the UserMessage entry for these messages for the
user and send a delete message event to the client, which makes
both push_notification and the event handling client think that
the message was deleted, and hence no confusion in the code
arises.
This makes the system store and track PushDeviceToken objects on
the local Zulip server when using the push notifications bouncer
and includes tests for this.
This is something we need to implement end-to-end encryption for
push notifications. We'll add the encryption key as an additional
property on the local PushDeviceToken object.
It also likely adds some value in the case that a server were to
switch between using the bouncer service and sending notifications
directly, though in practice that's unlikely to happen.
The most important change here is the one in the maybe_send_to_registration
codepath, as the insufficient validation there could lead to fetching
an expired PreregistrationUser that was invited as an administrator
even years ago, leading to this registration ending up with the
new user being a realm administrator.
Combined with the buggy migration in
0198_preregistrationuser_invited_as.py, this led to users incorrectly
joining as organization administrators by accident. But even without
that bug, this issue could have allowed a user who was invited as an
administrator but then had that invitation expire and then joined via
social authentication to incorrectly join as an organization administrator.
The second change is in ConfirmationEmailWorker, where this wasn't a
security problem, but if the server was stopped for long enough, with
some invites to send out email for in the queue, then after starting it
up again, the queue worker would send out emails for invites that
had already expired.
Google has removed the Google Hangouts brand, thus we are removing
it as a video chat provider option.
This commit removes the Google Hangouts integration and adds a migration
that sets all realms that were using Hangouts as their video chat
provider to the default, Jitsi.
With changes by tabbott to improve the overall video call documentation.
Fixes: #15298.
This adds support for a "spoiler" syntax in Zulip's markdown, which
can be used to hide content that one doesn't want to be immediately
visible without a click.
We use our own spoiler block syntax inspired by Zulip's existing quote
and math block markdown extensions, rather than requiring a token on
every line, as is present in some other markdown spoiler
implementations.
Fixes #5802.
Co-authored-by: Dylan Nugent <dylnuge@gmail.com>
This adds a new function `get_apns_badge_count()` to
fetch the count value for a user's push notifications and
then sends that value with the APNs payload.
Once a message is read from the web app, the count is
decremented accordingly and a push notification with
`event: remove` is sent to the iOS clients.
Fixes #10271.
This line was effectively hardcoding a specific stream_post_policy,
overriding the value already present in the event, to no purpose.
(I believe it got here via cargo-culting induced by #13787.)
This commit removes is_old_stream property from the stream objects
returned by the API. This property was unnecessary and is essentially
equivalent to 'stream_weekly_traffic != null'.
We compute sub.is_old_stream in stream_data.update_calculated_fields
in frontend code and it is used to check whether we have a non-null
stream_weekly_traffic or not.
Fixes #15181.
This likely fixes a bug that could leak thousands of messages into the
invalid state where:
* user_message.flags.active_mobile_push_notification is True
* user_message.flags.read is True
which is intended to be impossible except during the transient process
between marking messages as read and sending the "remove push
notifications" event.
The bug is that if a user who is declaring bankruptcy with 10,000s of
unreads ends up having the database query to mark all of those as read
take 60s, the Django/uwsgi request will time out and kill the process.
If the postgres transaction still completes, we'll end up with the
second half of this function never being run.
A safer ordering is to do the smaller queries first.
We do this in a loop for correctness in the unlikely event there are
more than 10,000 of these.
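A sketch of the batching pattern (hypothetical queryset and helper; the
real code operates on UserMessage flags):

from django.db.models import F

BATCH_SIZE = 10000

def clear_flag_in_batches(queryset, flag_mask: int) -> None:
    # Each iteration touches a bounded slice of rows, so no single
    # UPDATE can run long enough to hit the request timeout. Assumes
    # the queryset filters on the flag being set, so updated rows
    # drop out of it and the loop terminates.
    while True:
        batch_ids = list(queryset.values_list('id', flat=True)[:BATCH_SIZE])
        if not batch_ids:
            break
        queryset.model._default_manager.filter(id__in=batch_ids).update(
            flags=F('flags').bitand(~flag_mask)
        )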
This is designed to have no user-facing change unless the client
declares bulk_message_deletion in its client_capabilities.
Clients that do so will receive a single bulk event for bulk deletions
of messages within a single conversation (topic or PM thread).
Backend implementation of #15285.
Whenever we use API queries to mark messages as read we now increment
two new LoggingCount stats, messages_read::hour and
messages_read_interactions::hour.
We add an early return in do_increment_logging_stat function if there
are no changes (increment is 0), as an optimization to avoid
unnecessary database queries.
We also log messages_read_interactions::hour Logging stat
as the number of API queries to mark messages as read.
We don't include tests for the case where do_update_pointer is called
because do_update_pointer will most likely be removed from the
codebase in the near future.
This adds a powerful end-to-end test for Zulip's API documentation:
For every documented API endpoint (with a few declared exceptions that
we hope to remove), we verify that every API response received by our
extensive backend test suite matches the declared schema.
This is a critical step towards being able to have complete, high
quality API documentation.
Fixes #15340.
When querying for matching topic names in a stream, we should do a
case-insensitive exact match for the topic, since that's the data
model for topics in Zulip.
Generated by pyupgrade --py36-plus --keep-percent-format.
Now including %d, %i, %u, and multi-line strings.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The Python 3.6 style does support non-total and even partially-total
TypedDict, but total gives us better guarantees.
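For example (illustrative):

from typing_extensions import TypedDict  # typing.TypedDict on Python >= 3.8

class ProfileDict(TypedDict):
    # Total (the default): mypy guarantees both keys are always
    # present, so no runtime existence checks are needed.
    user_id: int
    full_name: str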
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Use read-only types (List ↦ Sequence, Dict ↦ Mapping, Set ↦
AbstractSet) to guard against accidental mutation of the default
value.
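For example (illustrative):

from typing import Mapping, Sequence

def render_rows(rows: Sequence[str], options: Mapping[str, str] = {}) -> str:
    # Sequence and Mapping expose no mutating methods, so mypy will
    # reject any code that tries to modify the shared default value.
    return '\n'.join(rows)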
Signed-off-by: Anders Kaseorg <anders@zulip.com>
There seems to have been a confusion between two different uses of the
word “optional”:
• An optional parameter may be omitted and replaced with a default
value.
• An Optional type has None as a possible value.
Sometimes an optional parameter has a default value of None, or None
is otherwise a meaningful value to provide, in which case it makes
sense for the optional parameter to have an Optional type. But in
other cases, optional parameters should not have Optional type. Fix
them.
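For example (illustrative):

from typing import Optional

def fetch_messages(num_before: int = 100) -> None:
    # Optional *parameter*, non-Optional type: callers may omit it,
    # but may not pass None.
    ...

def get_narrow(stream: Optional[str] = None) -> None:
    # Here None is itself meaningful ("no stream narrow"), so the
    # Optional *type* is appropriate.
    ...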
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fixes #2665.
Regenerated by tabbott with `lint --fix` after a rebase and change in
parameters.
Note from tabbott: In a few cases, this converts technical debt in the
form of unsorted imports into different technical debt in the form of
our largest files having very long, ugly import sequences at the
start. I expect this change will increase pressure for us to split
those files, which isn't a bad thing.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Automatically generated by the following script, based on the output
of lint with flake8-comma:
import re
import sys

last_filename = None
last_row = None
lines = []
for msg in sys.stdin:
    m = re.match(
        r"\x1b\[35mflake8 \|\x1b\[0m \x1b\[1;31m(.+):(\d+):(\d+): (\w+)", msg
    )
    if m:
        filename, row_str, col_str, err = m.groups()
        row, col = int(row_str), int(col_str)
        if filename == last_filename:
            assert last_row != row
        else:
            if last_filename is not None:
                with open(last_filename, "w") as f:
                    f.writelines(lines)
            with open(filename) as f:
                lines = f.readlines()
            last_filename = filename
        last_row = row
        line = lines[row - 1]
        if err in ["C812", "C815"]:
            lines[row - 1] = line[: col - 1] + "," + line[col - 1 :]
        elif err in ["C819"]:
            assert line[col - 2] == ","
            lines[row - 1] = line[: col - 2] + line[col - 1 :].lstrip(" ")

if last_filename is not None:
    with open(last_filename, "w") as f:
        f.writelines(lines)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This commit adds three `.pysa` model files: `false_positives.pysa`
for ruling out false positive flows with `Sanitize` annotations,
`req_lib.pysa` for educating pysa about Zulip's `REQ()` pattern for
extracting user input, and `redirects.pysa` for capturing the risk
of open redirects within Zulip code. Additionally, this commit
introduces `mark_sanitized`, an identity function which can be used
to selectively clear taint in cases where `Sanitize` models will not
work. This commit also puts `mark_sanitized` to work removing known
false positive flows.
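The helper itself is just an identity function, roughly:

from typing import TypeVar

T = TypeVar('T')

def mark_sanitized(value: T) -> T:
    # A runtime no-op; its only purpose is to be modeled in the
    # .pysa files as clearing taint from the value passed through.
    return value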
The only clients that should use the typing
indicators endpoint are our internal clients,
and they should send a JSON-formatted list
of user_ids.
We now enforce this, which removes some
complexity surrounding legacy ways of sending
users, such as emails and comma-delimited
strings of user_ids.
There may be a very tiny number of mobile
clients that still use the old emails API.
This won't have any user-facing effect on
the mobile users themselves, but if you type
a message to your friend on an old mobile
app, the friend will no longer see typing
indicators.
Also, the mobile team may see some errors
in their Sentry logs from the server rejecting
posts from the old mobile clients.
The error messages we report here are a bit
more generic, since we now just use REQ
to do validation with this code:
validator=check_list(check_int)
This also allows us to remove a test hack
related to the API documentation. (We changed
the docs to reflect the modern API in an
earlier commit, but the tests couldn't be
fixed while we still had the more complex
semantics for the "to" parameter.)
Prevent `JsonableError(_("Missing content"))` from
ever being triggered.
That error wasn't handled by anything, and thus just threw a 500, as
it's not a response to an HTTP request.
The right fix is to adjust the caller to ban the empty string in
content (or content that strips to the empty string).
Closes #15145.
This commit adds some basic checks while adding or removing
realm owner status of a user and adds code to change owner
status of a user using update_user_backend.
This also adds restriction on removing owner status of the
last owner of realm. This restriction was previously on
revoking admin status, but as we have added a more privileged
role of realm owner, we now have this restriction on owner
instead of admin.
We need to apply that restriction both in the role change code path
and the deactivate code path.
This commit sets the role of the user creating the realm as
realm owner after the realm is created.
Previously, the role of user creating the realm was set as admin.
But now we want it to be owner because owners have the highest
privilege level.
The test_management_commands use in particular was causing pickling
errors when a test failed, because Python 3's filter returns an
iterator, not a list.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This commit adds an integration for Thinkst Canaries - physical, VM and
cloud-based canaries for detecting attackers to a network. Thinkst
Canaries can send webhook alerts when canaries have been tripped, and
this integration will post Zulip messages when these webhooks are
received.
Signed-off-by: David Wood <david@davidtw.co>
This was previously hardcoded with agreement between the Zulip backend
and frontend as 86400 seconds (1 day). Now, it's still hardcoded in
the backend, but arranged in a way where we could add a setting
without any changes to the mobile and terminal apps to update logic.
Fixes #15278.
We're migrating to using the cleaner zulip.com domain, which involves
changing all of our links from ReadTheDocs and other places to point
to the cleaner URL.
Generated by pyupgrade --py36-plus --keep-percent-format, but with the
NamedTuple changes reverted (see commit
ba7906a3c6, #15132).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The old topic of the message edit event can be used to help the client
calculate useful information, such as whether a change
in the current narrow is required.
This fixes our re-narrow logic after a stream edit of a topic, with
no change in the topic name itself, since the original topic was not
present in the event received and hence `orig_topic` was
undefined in this case.
An option to disable breadcrumb messages is given in both the message edit
form and the topic edit stream popover.
The user can now select which streams to send the notification
of a stream edit of a topic to, via checkboxes in the UI.
We pipe realm_id through functions where it is available;
this helps us avoid querying for realm_id in a loop when
multiple messages are being processed.
datetime.timezone is available in Python ≥ 3.2. This also lets us
remove a pytz dependency from the PostgreSQL scripts.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fixes warnings like these with python -Wd:
/home/circleci/zulip/zerver/lib/bugdown/__init__.py:327: DeprecationWarning: This method will be removed in future versions. Use 'list(elem)' or iteration over elem instead.
for child in currElementPair.value.getchildren():
/home/circleci/zulip/zerver/lib/bugdown/__init__.py:328: DeprecationWarning: This method will be removed in future versions. Use 'list(elem)' or iteration over elem instead.
if child.getchildren():
/home/circleci/zulip/zerver/lib/bugdown/__init__.py:282: DeprecationWarning: This method will be removed in future versions. Use 'list(elem)' or iteration over elem instead.
for child in currElement.getchildren():
/home/circleci/zulip/zerver/lib/bugdown/__init__.py:283: DeprecationWarning: This method will be removed in future versions. Use 'list(elem)' or iteration over elem instead.
if child.getchildren():
https://docs.python.org/3.8/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element.getchildren
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fixes this warning with python -Wd:
/home/circleci/zulip/zerver/lib/bot_config.py:69: DeprecationWarning: This method will be removed in future versions. Use 'parser.read_file()' instead.
config.readfp(conf)
https://docs.python.org/3/library/configparser.html#configparser.ConfigParser.readfp
Signed-off-by: Anders Kaseorg <anders@zulip.com>
url_to_a returns Union[Element, str], but str cannot be appended to
Element; that would raise TypeError at runtime.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
zerver/lib/i18n.py:34:28: E741 ambiguous variable name 'l'
zerver/lib/webhooks/common.py:103:34: E225 missing whitespace around operator
zerver/tests/test_queue_worker.py:563:9: E306 expected 1 blank line before a nested definition, found 0
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This reimplements our Zoom video call integration to use an OAuth
application. In addition to providing a cleaner setup experience,
especially on zulipchat.com where the server administrators can have
done the app registration already, it also fixes the limitation of the
previous integration that it could only have one call active at a time
when set up with typical Zoom API keys.
Fixes #11672.
Co-authored-by: Marco Burstein <marco@marco.how>
Co-authored-by: Tim Abbott <tabbott@zulipchat.com>
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
Objects whose properties are not described were passing the
current validator. Edit it so that objects with no `properties`
or `additionalProperties` attribute, i.e. opaque objects, are
invalidated.
Also make changes in zulip.yaml to fix any opaque objects (tweaked by
tabbott to edit the documentation for better clarity).
We change do_create_user and create_user to accept
role as a parameter instead of 'is_realm_admin' and 'is_guest'.
These changes are done to minimize data conversions between
role and boolean fields.
request_retry and notify_bot_owner don't use request_data so might
as well not send it to them at all.
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
Using the Python Standard Library's abc library and NotImplementedError
we can better define interfaces (this is mainly to improve readability
and consistency).
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
Integrations can be supplied a logo parameter which is used to construct
their `logo_url`. It would be useful to store this parameter, instead of
computing the path from the URL.
This commit changes the person dict in event sent by do_change_user_role
to send role instead of is_admin or is_guest.
This makes things much more straightforward for our upcoming primary
owners feature.
Currently response return values have to be written twice, once in
the docs and once in zulip.yaml. Create a markdown extension so
that the return values in api docs are rendered using content from
zulip.yaml
This commit changes do_change_user_role to support adding or removing
the realm owner status of user and sending an event.
We also extend the existing test for do_change_user_role to do a bit
more validation to confirm the audit log records all values of role.
The new realm_owner role is added as option for role field in
UserProfile model and is_realm_owner is added as property for the user
profile.
Aside from some basic tests validating the logic, this has no effect,
as users cannot yet end up being set as realm owners.
If a user receives more than one invite to join a
realm, after that user registers, all the remaining
invitations should be revoked, preventing them from being
listed in active invitations on the admin panel.
To do this, we added a new prereg_user status,
STATUS_REVOKED.
We also added a confirmation_link_expired_error page
in case the user tries to click on a revoked invitation.
This page has a link to the login page.
Fixes: #12629
Co-authored-by: Arunika <arunikayadav42@gmail.com>
On the invitations panel, invites were being removed when
the user clicked on the invitation's link. Now we only remove
them when the user completes registration.
Fixes: #12281
mock is just a backport of the standard library’s unittest.mock now.
The SAMLAuthBackendTest change is needed because
MagicMock.call_args.args wasn’t introduced until Python
3.8 (https://bugs.python.org/issue21269).
The PROVISION_VERSION bump is skipped because mock is still an
indirect dev requirement via moto.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We handle fenced code blocks in a preprocessor, and > style blockquotes
are parsed in a blockprocessor. Pymarkdown doesn't run the preprocessors
again on any blocks that it is parsing, and is unlikely to accept our
solution upstream; they intend to convert fenced_code to a block parser.
We simply run all the preprocessors on the text again, with the exception
of NormalizeWhitespace, which removes delimiters used by HtmlStash to mark
preprocessed html code. To counter this, we subclass NormalizeWhitespace
and use our customized version for when it is called from a blockparser.
Upstream issue: https://github.com/Python-Markdown/markdown/issues/53
Fixes #12800.
This commit merges do_change_is_admin and do_change_is_guest to a
single function do_change_user_role which will be used for changing
role of users.
do_change_is_api_super_user is added as a separate function for
changing is_api_super_user field of UserProfile.
This will protect us in case of some kinds of bugs that could allow
making requests such as password authentication attempts to tornado.
Without restricting the domains to which the in-memory backend can
be applied, such bugs would lead to attackers having multiple times
larger rate limits for these sensitive requests.
After a message was reset in our caches via message editing or
adding/removing a reaction, we were sending corrupt data to the cache
because build_message_dict (and thus build_dict_from_raw_db_row) was
improperly being called before sewing in the reaction data.
As a result, we were sending raw database data in the reaction
dictionaries, rather than the reformatted version expected by the API.
Bug introduced in 2a4c62a326.
Fixing this correctly required moving the rendering_realm_id logic one
step higher in the call chain, which is a useful refactoring anyway
(since we're no longer passing a `Message` object down).
We now parse tex and latex as regular languages, highlighting them
with pygments. We only allow 'math' to trigger latex rendering,
which is in line with the documentation.
This commit shifts our timestamp syntax to be of the form:
<span class="timestamp data-timestamp="123456"></span>
since value is not a valid attribute of span elements.
This adds support for syntax like: !time(Jun 7 2017, 6:30 PM) so that
everyone sees the time in their own local timezone. This can be used
when scheduling online meetings, etc.
This adds some hardcoded values for timezones, because there is
no sureshot way of determining the timezone easily. However,
since the main way of using the feature should be a typeahead for
entering the time, this shouldn't be a cause of much concern.
Fixes #5176.
This extends `put_dict_in_redis` to take a token as an argument
and return that as the `key`, using the given key format.
Also, edit the regex for the token to include uppercase letters, as
a token sent during Apple authentication contains uppercase
letters.
Useful for adding "Sign in with Apple" support.
During events such as a stream / topic name edit for a topic, we were
running queries to the db in a loop for each message, for reactions,
submessages and realm_id. This commit reduces the queries to be
done only for realm_id, which is yet to be fixed.
This is accomplished by building messages with empty reactions
and submessages and then updating them in the messages using bulk
queries.
This commit allows non admins to set stream post policy while creating
streams.
The restriction was there to prevent a user from creating a stream in which
the user cannot post themselves, but this will be taken care of with the
stream admin feature.
For unknown reasons, deleting 10,000s of ArchiveTransaction objects
results in rapidly growing memory in the job making the request in the
Django process, eventually leading to an OOM kill.
I don't understand why Django behaves that way; I would have expected
the failure mode to instead be a serious load problem on the database
server, but perhaps the way Django's internal deletion logic handles
cascading the deletes to many millions of ArchiveMessages and other
ForeignKey objects requires tracking a lot of data in memory.
The solution is the same in any case, which is to batch the deletions
to execute a reasonable number of them at once. Doing a single
ArchiveTransaction at a time would likely result in huge numbers of
database queries in a loop, which performs very poorly. So we balance
by batching deletions in groups of 100 ArchiveTransactions; testing
this in production, I saw no spike of memory usage materially beyond
that of a normal Django process, and each bulk-deletion transaction
takes several seconds to process (meaning per-transaction overhead is
negligible).
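The resulting pattern is roughly (a sketch with a hypothetical queryset):

ARCHIVE_TRANSACTION_BATCH = 100

def delete_in_batches(queryset) -> None:
    # Deleting 100 ArchiveTransactions per query keeps per-query
    # overhead negligible while bounding the memory Django needs to
    # track the cascading deletes of ArchiveMessages etc.
    while True:
        batch_ids = list(queryset.values_list('id', flat=True)[:ARCHIVE_TRANSACTION_BATCH])
        if not batch_ids:
            break
        queryset.model._default_manager.filter(id__in=batch_ids).delete()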
I'm not sure exactly what series of history got us here, but we were
fetching the mobile_user_ids data for all users in the organization,
regardless of whether they were recently active (and thus relevant for
the main presence data set). And doing so in a sloppy fashion
(sending every user ID over the wire, rather than just having the
database join on Realm).
Fixing this saves a factor of 4-5 on the total runtime of a presence
request on organizations with 10Ks of users like chat.zulip.org; more
like 25% in an organization with 150. Since large organizations are
very heavily weighted in the overall cost of presence, this is a huge
win.
Fixes part of #13734.
Zulip's openapi specification in zulip.yaml has various examples
for various schemas. Validate the examples with their respective
schemas to ensure that all the examples are schematically correct.
Part of #14100.
The `email` field for identifying the user being modified in these
events was not used by either the webapp or other official Zulip
clients. Instead, it was legacy data from before we switched years
ago to sending user_id fields as the correct way to uniquely identify
a user.
When a user changes their avatar image, the user's avatar in popovers
wasn't being correctly updated, because of browser caching of the
avatar image. We added a version on the request to get the image, in
the same format we use elsewhere, so the browser knows when to use the
cached image or to make a new request to the server.
Edited by Tim to preserve/fix sort orders in some tests, and update
zulip_feature_level.
Fixes: #14290
We remove the `owner` field from `page_params/realm_bots`
and bot-related events.
In the recent commit 155f6da8ba
we added `owner_id`, which we now use everywhere we need
bot owners.
We also bump the `API_FEATURE_LEVEL` to 5 here. We
had already documented this in the prior commit to
add `owner_id`.
Note that we don't have to worry about mobile/ZT clients
here--we only deal with bot data in the webapp.
For the below payloads we want `owner_id` instead
of `owner`, which we should deprecate. (The
`owner` field is actually an email, which is
not a stable key.)
page_params.realm_bots
realm_bot/add
realm_bot/update
IMPORTANT NOTE: Some of the data served in
these payloads is cached with the key
`bot_dicts_in_realm_cache_key`.
For page_params, we get the new field
via `get_owned_bot_dicts`.
For realm_bot/add, we modified
`created_bot_event`.
For realm_bot/update, we modified
`do_change_bot_owner`.
On the JS side, we no longer
look up the bot's owner directly in
`server_events_dispatch` when we get
a realm_bot/update event. Instead, we
delegate that job to `bot_data.js`.
I modified the tests accordingly.
When editing a message where we mention a usergroup, we would remove
the 'mentioned' flag from messages, resulting in the message being
hidden from your mentions in the UI. This was reported by Greg Price in
https://chat.zulip.org/#narrow/stream/9-issues/topic/missing.20mention.
We add the same code that we use in do_send_messages to calculate the
updated mentions_user_ids. We add some tests alongside other user group
mention tests in test_bugdown.
This adds a webhook that can be used to interpret standard Slack
payloads. Since there are a ton of existing Slack integrations out
there, having a webhook which can accept standard Slack payloads can
significantly ease transition pains. Obviously this can't do everything
that Slack payloads can (particularly WRT their widgets/interactions),
but we can ingest text and parse out multi-block payloads into a message
relatively reasonably.
Currently, when the user uploads files with the ".jpe" file extension, the
markdown is converted to a link but the image is not embedded.
This commit adds support for the ".jpe" file extension.
Fixes #14863.
These changes should have been included in bd9b74436c,
as they make sure that a Zulip limited plan realm won't be able to change the
`message_retention_days` setting.
Since production testing of `message_retention_days` is finished, we can
enable this feature in the organization settings page. We already had this
setting in the frontend, but it had bit-rotted and was not rendered in templates.
Here we replaced our past text-input based setting with a
dropdown-with-text-input setting approach which is more consistent with our
existing UI.
Along with frontend changes, we also incorporated a backend change to
handle making the retention period forever. This change introduces a new
converter `to_positive_or_allowed_int`, which only allows positive integers
and an allowed value, for settings like `message_retention_days` which can
be a positive integer or have the value `Realm.RETAIN_MESSAGE_FOREVER` when
we change the setting to retain messages forever.
This change made `to_not_negative_int_or_none` redundant, so we removed it as
well.
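A sketch of what such a converter might look like:

from typing import Callable

def to_positive_or_allowed_int(allowed_integer: int) -> Callable[[str], int]:
    # Accepts any positive integer, or exactly the sentinel value
    # (e.g. Realm.RETAIN_MESSAGE_FOREVER); rejects everything else.
    def converter(s: str) -> int:
        x = int(s)
        if x == allowed_integer:
            return x
        if x <= 0:
            raise ValueError('argument is not positive')
        return x
    return converter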
Fixes: #14854
It's a preliminary step to enabling the message_retention_setting in the org
settings UI, which is a feature only for non-limited plans. So we require a
page_params property that tells us the limited-plan state of the Zulip realm.
Previously, we had a restriction that we could only
edit and move the topics of messages up to 7 days old.
This buggy behaviour is now removed in this
commit.
Fixes #14492.
Part of #13912.
The new path() function changed the way a regex pattern
is created from urls - it adds escape backslashes,
so for testing purposes we need to take care of them
and remove them, to check if urls were tested.
Additionally, regex patterns from urls can have
[^/]+ instead of [^/]*, so we need to take care
of that too.
We no longer have intermediate constants of
`git_described` and `zulip_version_const`.
Instead, we make a `deployment_data` dictionary
that is grep-friendly, and we just let
`deployment_repr` do simple formatting
without translating string constants.
This is pretty easy to test:
- set DEBUG_ERROR_REPORTING = True
- modify some code to throw an exception
- see error output in #errors
- use "/emails" with text-only option to view
errors
This code was bitrotted--we no longer have a file
called `version`.
The info that was probably reported when that feature
was originally written probably lives now
in `zulip-git-version`, although I didn't research
all the history here. Here is the relevant
excerpt from `version.py`:
zulip_git_version_file = os.path.join(
    os.path.dirname(os.path.abspath(__file__)),
    'zulip-git-version')
if os.path.exists(zulip_git_version_file):
    with open(zulip_git_version_file) as f:
        version = f.read().strip()
        if version:
            ZULIP_VERSION = version
The file gets written as follows:
$ cat tools/cache-zulip-git-version
#!/usr/bin/env bash
set -e
cd "$(dirname "$0")/.."
git describe --tags --match='[0-9]*' > zulip-git-version || true
Here is what that might look like:
2.2-dev-2102-gf256ea39eb
Here is an excerpt from one of our recent error reports,
which demonstrates that the code I eliminated here was not
functioning (the third field is missing):
Deployed code:
- git: 2.2-dev-2028-g99ce96d49b-dirty
- ZULIP_VERSION: 2.2-dev-2028-g99ce96d49b
This fixes the main problem reported on #7868. I think
we may just want to close the issue, since the other
`nocoverage` stuff seems harmless to me.
Previously api_description and api_code_examples were two independent
markdown extensions for displaying OpenAPI content used in the same
places. We combine them into a single markdown extension (with two
processors) and move them to the openapi folder to make the codebase
more readable and better group the openapi code in the same place.
Instead of using stream_name + integers as topics, we now
generate topics using pos in `config.generate_data.json`.
This helps us create and test more realistic topics.
For realms with no retention policy on themselves or any of their
streams, no archiving happens, but 3 lines of logs would be generated.
That's redundant and we make changes in this commit to avoid logging
those lines if nothing of interest is happening.
Members of the org can now see the list of invitations sent by them.
We give members permission to revoke and resend the invitations
they have sent, and add tests verifying that a member can revoke and
resend only the invitations they sent themselves.
Fixes #14007.
Previously, the hanging_lists preprocessor didn't consider anything
indented at 4 or more spaces to be a list. This meant that when
we had a list like:

1. 1
2. 2
3. 3
    2. 2a
    1. 1a

We would insert a newline between "3. 3" and "2. 2a". This resulted
in the block processor breaking down 1 list into 2 blocks, which
messed up the nesting and indentation for the second block.
We've had bugs in the past where users with a name in the format
"Alice|999" would confuse our markdown rendering or typeahead. While
that's a fully solvable problem, there's no real use case for that, so
it's probably simpler to just prevent users from setting their name
that way.
Fixes #13923.
Prior to this change, there were reports of 500s in
production due to `export.extra_data` being a
NoneType. This was reproducible using the S3
backend in development when a row was created in
the `RealmAuditLog` table, but the export failed in
the `DeferredWorker`. This left an entry lying
around that was never updated with an `extra_data`
field.
To fix this, we catch any exceptions in the
`DeferredWorker`, and then update `extra_data` to
encode the failure. We also fix the fact that we
never updated the export UI table with pending exports.
These changes also negated the use for the somewhat
hacky `clear_success_banner` logic.
This helps us write the new digest only if the db rebuild
succeeds. We were relying on the caller to
be successful in building the db; this was hacky and unreliable.
We now write the new db digest once the caller succeeds, which ensures
that we write a new digest after every successful attempt.
This fixes the anomaly we were facing where databases were rebuilt
on the 2nd provision attempt with no changes to files or migrations.
This was happening because we didn't write a new digest for the db
after the first provision (the case where the DB didn't exist).
During the 1st provision, we check the template_status() of
both the Dev and Test databases, but database_exists() for the databases
obviously returned false, and we rebuilt the databases,
but forgot to write_new_digest, hence the anomaly in the
second provision explained above.
This ensures that if one deletes `zproject/dev-secrets.conf`, we end
up rebuilding the databases from scratch (which, critically, will
ensure the password that gets setup matches what's in the current
version of the configuration file).
This should address a category of issue we've had where deleting
`zproject/dev-secrets.conf` would result in provision failing.
The logic in do_set_realm_property would previously "change" the email
addresses of every user in the realm, even if they hadn't actually
changed.
We fix this by skipping the logic when it's unnecessary.
bulk_update is used to update the email of user_profile objects in
database when email_address_visibility is changed.
This helps resolve the problem of timeout errors in realms with a large
number of users, due to the large number of database queries run in a
loop.
Since bulk_update doesn't flush caches, we need our own bit of code to
do that.
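The shape of that code is roughly (names illustrative):

user_profiles = list(UserProfile.objects.filter(realm=realm))
for user_profile in user_profiles:
    user_profile.email = get_display_email_address(user_profile)

# One UPDATE statement for the whole realm, instead of one per user.
UserProfile.objects.bulk_update(user_profiles, ['email'])

# bulk_update doesn't send post_save signals, so flush caches by hand.
for user_profile in user_profiles:
    flush_user_profile(sender=UserProfile, instance=user_profile)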
Fixes a part of #14600.
We add URLs to the `links_for_embed` set only when
the `url_embed_preview_enabled` flag is turned on.
So, it is sufficient to check whether `links_for_embed`
is not empty.
This new type eliminates a bunch of messy code that previously
involved passing around long lists of mixed positional keyword and
arguments, instead using a consistent data object for communicating
about the state of an external authentication (constructed in
backends.py).
The result is a significantly more readable interface between
zproject/backends.py and zerver/views/auth.py, though likely more
could be done.
This has the side effect of renaming fields for internally passed
structures from name->full_name, next->redirect_to; this results in
most of the test codebase changes.
Modified by tabbott to add comments and collaboratively rewrite the
initialization logic.
We now prevent these variations:
* <hr/>
* <hr />
* <br/>
* <br />
We could enforce similar consistency for other void
tags, if we wished, but these two are particularly
prevalent.
Add function in openapi.py to access endpoint descriptions written
in zulip.yaml. Use this function for creating a markdown extension
for rendering endpoint descriptions written in zulip.yaml.
We use this extension for a single endpoint to get test coverage.
The post_init cache-flushing behavior in the original alert words
migration was subtly wrong; while it may have passed tests, it didn't
have the right ordering for unlikely races.
We use post_save rather than post_init hooks precisely because they
ensure that we flush the cache after we know the database has been
updated and any future reads from the database will have the latest
state.
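Schematically (illustrative; the real hooks live alongside our cache code):

from django.db.models.signals import post_save
from django.dispatch import receiver

@receiver(post_save, sender=AlertWord)
def flush_realm_alert_words(sender, instance, **kwargs):
    # post_save fires only after the row has been written, so a
    # subsequent cache miss re-reads the already-updated state;
    # post_init fires before any save and can race with readers.
    cache_delete(realm_alert_words_cache_key(instance.realm))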
Previously, alert words were case-insensitive in practice, by which I
mean the Markdown logic had always been case-insensitive; but the data
model was not, so you could create "duplicate" alert words with the
same words in different cases. We fix this inconsistency by making
the database model case-insensitive.
I'd prefer to be using the Postgres `citext` extension to have
postgres take care of case-insensitive logic for us, but that requires
installing a postgres extension as root on the postgres server, which
is a pain and perhaps not worth the effort to arrange given that we
can achieve our goals with transactions when adding alert words.
We take advantage of the migrate_alert_words migration we're already
doing for all users to effect this transition.
Fixes #12563.
Previously, alert words were a JSON list of strings stored in a
TextField on user_profile. That hacky model reflected the fact that
they were an early prototype feature.
This commit migrates from that to a separate table, 'AlertWord'. The
new AlertWord has user_profile, word, id and realm (a denormalization so
we can provide a nice index for fetching all the alert words in a
realm).
This transition requires moving the logic for flushing the Alert Words
caches to their own independent feature.
Note that this commit should not be cherry-picked without the
following commit, which fixes case-sensitivity issues with Alert Words.
This is a precursor commit to change the name of
AlertWordNotificationProcessor to AlertWordsNotificationProcessor
to match the change from UserProfile.alert_words to AlertWord.
Previously, we added support for 'none', 'plain' and 'noop' and a
function `lang = remap_language(lang)`. This also had the potential
to encourage adding more remappings -- something that we deliberately
want to keep to a minimum.
For context, Anders K doesn't want us to keep any remapping (only
keeping 'text' which is the default no-op lexer that pygments has)
and Tim wants to keep 'plain' and 'text'. We should only document
and advertise 'text'.
Previously, the message and event APIs represented the user differently
for the same reaction data. To make this more consistent, I added a
user_id field to the reaction dict for both messages and events. I
updated the front end to use the user_id field rather than the user
dict. Lastly, I updated front end and back end tests that used user
info.
I primarily tested this by running my local Zulip build and
adding/removing reactions from messages.
Fixes #12049.
Some sites don't render correctly unless you are using one of the latest browsers.
YouTube Music, for instance, changes the page title to "Your browser is
deprecated, please upgrade.", which makes our URL previews look bad.
In the original implementation, we were checking for the default language
inside format_code, which resulted in the setting being ignored when set to
quote, math, tex or latex. We shift the validation to `check_for_new_fence`.
We also update the tests to use a saner naming scheme for the variables.
This commit removes can_create_streams and can_subscribe_other_users,
using has_permission as a generic function in the UserProfile model for
these settings policies.
Relevant changes are made to events.py to avoid duplication in some
places.
We have two different digest schemes to make
sure we keep the database up to date. There
is the migration digest, which is NOT in the
scope of this commit, and which already
used the mechanism we use for other tools.
Here we are talking about the digest for
important files like `populate_db.py`.
Now our scheme is more consistent with how we
check file changes for other tools (as
well as the aforementioned migration files).
And we only write one hash file, instead of
seven.
And we only write the file when things have
actually changed.
And we are explicit about side effects.
Finally, we include a couple new bot settings
in the digest:
INTERNAL_BOTS
DISABLED_REALM_INTERNAL_BOTS
NOTE: This will require a one-time transition,
where we rebuild both databases (dev/test).
It takes a little over two minutes for me,
so it's not super painful.
I bump the provision version here, even
though you don't technically need it (since
the relevant tools are actually using the
digest files to determine if they need to
rebuild the database). I figure it's just
good to explicitly make this commit trigger
a provision, and the user will then see
the one-time migration of the hash files
with a little bit less of a surprise.
And I do a major bump, not a minor bump,
because when we go in the reverse direction,
the old code will have to rebuild the
database due to the legacy hash files not
being around, so, again, I just prefer it
to be explicit.
Fixes #14595.
Invalid HTTP requests could end up as an unhandled exception in
skip_200_and_304, due to the record not having the status_code attribute
set. With this change we'll avoid the exception.
Example:
curl -X POST -H 'Transfer-Encoding : chunked' --data-binary 'a' 'http://zulipdev.com:9991/json/messages/57'
2020-04-21 10:56:22.007 WARN [django.server] "POST /json/messages/57 HTTP/1.1" 405 95
2020-04-21 10:56:22.007 INFO [django.server] code 400, message Bad request syntax ('a')
2020-04-21 10:56:22.008 WARN [django.server] "a" 400 -
We remove the `generate_fixtures` option here mostly
for simplicity, but in particular to facilitate
an upcoming commit to simplify the job of
`generate-fixtures` (and remove its `--force` option).
The command line option here for `test-backend`
was really calling `generate_fixtures --force`,
which we're about to rename `tools/rebuild-test-database`.
The `test-backend` tools is already smart about catching
up on migrations, so we generally don't need to tell it
to repair the database.
And if the database does get corrupt, you can just do
it directly with `tools/rebuild-test-database`.
This eliminates the `use_force` flag in
`update_test_databases_if_required`, which was easy
to confuse with `rebuild_test_database`.
The other caller wasn't using `use_force`.
Somewhat confusingly, we have two types of different
digests related to databases. The migration digests
are pragmatic, since changes to migrations are a bit
more frequent for certain use cases and don't
necessitate a complete rebuild of the database.
Anyway, these are just more specific names.
Generated by autopep8 --aggressive, with the setup.cfg configuration
from #14532. In general, an isinstance check may not be equivalent to
a type check because it includes subtypes; however, that’s usually
what you want.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Generated by autopep8, with the setup.cfg configuration from #14532.
I’m not sure why pycodestyle didn’t already flag these.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This refactors `extract_python_code_example` to accept an
`example_regex` parameter. It can now be used to extract code examples
from javascript_examples.py.
Previously, the send_custom_email code path leaked files in paths that
were not `.gitignored`, under templates/zerver/emails.
This became problematic when we added automated tests for this code
path, as it meant we leaked these files every time `test-backend` ran.
Fix this by ensuring all the files we generate are in this special
subdirectory.
The purpose is to provide a way for (non-webapp) clients,
like the mobile and terminal apps, to tell whether the
server it's talking to is new enough to support a given
API feature -- in particular a way that
* is finer-grained than release numbers, so that for
features developed after e.g. 2.1.0 we can use them
immediately on servers deployed from master (like
chat.zulip.org and zulipchat.com) without waiting the
months until a 2.2 release;
* is reliable, unlike e.g. looking at the number of
commits since a release;
* doesn't lead to a growing bag of named feature flags
which the server has to go on sending forever.
Tweaked by tabbott to extend the documentation.
Closes #14618.
We now have two functions related to digests
for processes:
is_digest_obsolete
write_digest_file
In most cases we now **wait** to write the
digest file until after we've successfully
run a process with its new inputs.
In one place, for database migrations, we
continue to write the digest optimistically.
We'll want to fix this, but it requires a
little more code cleanup.
Here is the typical sequence of events:
NEVER RUN -
    is_digest_obsolete returns True
    quickly (we don't compute a hash)

    write_digest_file does a write (duh)

AFTER NO CHANGES -
    is_digest_obsolete returns False
    after reading one file for old
    hash and multiple files to compute
    hash

    most callers skip write_digest_file
    (no files are changed)

AFTER SOME CHANGES -
    is_digest_obsolete returns True
    after doing full checks

    most callers call write_digest_file
    *after* running a process
I make these all functions for consistency,
and in particular I want to continue to avoid
`glob.glob` calls until we are actually
computing hashes.
This is mostly a prep to allow us to do
hashing in two separate places:
- check hashes
- update hashes
We would only update hashes **after** running
processes anew.
For `provision_inner` I considered using a
class to put the three path-related helpers
into a mini namespace, but it felt too heavy.
It wouldn't be completely implausible here
to extract something like a JSON config
file that has a list of globs for each
process that we do path-hashing for, but I
want to clean up other stuff first.
We now remove the `Type` and `_TYPE` suffixes,
as we will start treating this like a real
class with behavior, instead of a glorified
struct.
We pass in `platform_type`, so that we can
just derive some of our data from that,
where naming conventions apply.
And we use the name `migrations_status_path`,
instead of the name `migration_status`, which
had two different meanings before this change.
This is a pure refactor, and we just early-exit
in case the database doesn't exist (knowing that
that can be a bit of a lie now--see the comment
I added.)
Refactored code in actions.py and streams.py to move stream related
functions into streams.py and remove the dependency on actions.py.
validate_sender_can_write_to_stream function in actions.py was renamed
to access_stream_for_send_message in streams.py.
I remove `is_force` from `file_or_package_hash_updated`
and modernize its mypy annotations.
If `is_force` is `True`, we now just run the thing
we want to force-run without having to call
`file_or_package_hash_updated` to expensively
and riskily return `True`.
Another nice outcome of this change is that if
`file_or_package_hash_updated` returns `True`,
you can know that the file or package has
indeed been updated.
For the case of `build_pygments_data` we also
skip an `os.path.exists` check when `is_force`
is `True`.
We will short-circuit more logic in the next
few commits, as well as cleaning up some of
the long/wrapper lines in the `if` statements.
This will be useful for the mobile and desktop apps to hand an uploaded
file off to the system browser so that it can render PDFs (etc.).
The S3 backend implementation is simple; for the local upload backend,
we use Django's signing feature to simulate the same sort of 60-second
lifetime token.
Co-Author-By: Mateusz Mandera <mateusz.mandera@protonmail.com>
For some mobile use cases, 15 seconds is potentially too short for a
busy+slow device to open a browser and fetch the URL. 60 seconds is
plenty, and doesn't carry a materially increased security risk.
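A rough sketch of the local-backend token scheme described above,
assuming Django's django.core.signing module (function names here are
illustrative):

    from django.core import signing

    def generate_download_token(path_id):
        # The signature embeds a timestamp; expiry is enforced at
        # verification time, not here.
        return signing.dumps({"path": path_id})

    def verify_download_token(token, max_age=60):
        # Raises signing.SignatureExpired after max_age seconds,
        # or signing.BadSignature if the token was tampered with.
        data = signing.loads(token, max_age=max_age)
        return data["path"]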
When creating a new webhook integration or updating an existing one, it is
a pain to create or update the screenshots in the documentation. This
commit adds a tool that can trigger a sample notification for the webhook
using a fixture that is likely already written for the tests.
Currently, the developer needs to take a screenshot manually, but this could
be automated using puppeteer or something like that.
Also, the tool does not support webhooks with basic auth, and only supports
webhooks that use json fixtures. These can be fixed in subsequent commits.
The Redis-based rate limiting approach takes a lot of time talking to
Redis with 3-4 network requests to Redis on each request. It had a
negative impact on the performance of `get_events()` since this is our
single highest-traffic endpoint.
This commit introduces an in-process rate-limiting alternative for
the `/json/events` endpoint. The implementation uses the Leaky Bucket
algorithm and Python dictionaries instead of Redis. This drops the
rate limiting time for `get_events()` from about 3000us to less than
100us (on my system).
Fixes #13913.
Co-Author-by: Mateusz Mandera <mateusz.mandera@protonmail.com>
Co-Author-by: Anders Kaseorg <anders@zulipchat.com>
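A hedged sketch of the leaky-bucket idea (parameters and naming are
illustrative; the real implementation is more careful about cleanup
and concurrency):

    import time

    buckets = {}  # user_id -> (current_level, last_update_time)

    def is_rate_limited(user_id, capacity=100.0, leak_rate=5.0):
        # Leaky bucket: the bucket drains at leak_rate requests per
        # second; each request adds 1, and we reject at capacity.
        now = time.time()
        level, last = buckets.get(user_id, (0.0, now))
        level = max(0.0, level - (now - last) * leak_rate)
        if level + 1.0 > capacity:
            buckets[user_id] = (level, now)
            return True
        buckets[user_id] = (level + 1.0, now)
        return False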
Right now, the message is "Invalid characters in emoji name" when
the emoji_name is empty. Changing check_valid_emoji_name() in
zerver/lib/emoji.py, which validates the name, to accommodate the case
of a missing name. The new message is "Emoji name is missing".
Generated by `pyupgrade --py3-plus --keep-percent-format` on all our
Python code except `zthumbor` and `zulip-ec2-configure-interfaces`,
followed by manual indentation fixes.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This adds a new realm setting: default_code_block_language.
This PR also adds a new widget to specify a language, which
behaves somewhat differently from other widgets of the same
kind; instead of exposing methods to the whole module, we
just create a single IIFE that handles all the interactions
with the DOM for the widget.
We also move the code for remapping languages to format_code
function since we want to preserve the original language to
decide if we override it using default_code_block_language.
Fixes #14404.
Semaphore currently has two different versions of their product -
Classic and 2.0. This commit adds support for Semaphore 2.0, alongside
Semaphore Classic, using the same webhook. This would let the integration
work seamlessly for users who have already configured a Zulip integration in
their Semaphore 2.0 projects.
Semaphore 2.0 currently only supports GitHub and their payloads do not
contain URLs for common entities like commits, pull requests and tags. We
construct URLs for them using templates, but also try to support other
services by providing notifications without URLs.
Closes #14171
Co-authored-by: Puneeth Chaganti <punchagan@muse-amuse.in>
If we had a rule like "max 3 requests in 2 seconds", there was an
inconsistency between is_ratelimited() and get_api_calls_left().
If you had:
request #1 at time 0
request #2 and #3 at some times < 2
Next request, if exactly at time 2, would not get ratelimited, but if
get_api_calls_left was called, it would return 0. This was due to
inconsistency on the boundary - the check in is_ratelimited was
exclusive, while get_api_calls_left uses zcount, which is inclusive.
time_reset returned from api_calls_left() was a timestamp, but
mistakenly treated as delta seconds. We change the return value of
api_calls_left() to be delta seconds, to be consistent with the return
value of rate_limit().
The information used to be stored in a request._ratelimit dict, but
there's no need for that, and a list is a simpler structure, so this
allows us to simplify the plumbing somewhat.
That's the value that matters to the code that catches the exception,
and this change allows simplifying the plumbing somewhat, and gets rid
of the get_rate_limit_result_from_request function.
This commit contains a few clean ups:
* In order to scale better for adding multiple commands,
the message formatting and setting switch logic was
extracted to its own function.
* The command lists were removed, as the frontend parses
the slash command from the compose box, and only sends
a single command to the backend for any given command
alias typed.
* The `switch_command` logic was removed because, given
the aforementioned fact, the index of the command will
always be the same. Thus the switch command will always
be the same.
* Switched to using early returns as opposed to nested
conditionals, along with removing single-use variable
declarations.
This commit reuses the existing infrastructure for moving a topic
within a stream to add support for moving topics from one stream to
another.
Split from the original full-feature commit so that we can merge just
the backend, which is finished, at this time.
This is a large part of #6427.
The feature is incomplete, in that we don't have real-time update of
the frontend to handle the event, documentation, etc., but this commit
is a good mergable checkpoint that we can do further work on top of.
We also still ideally would have a test_events test for the backend,
but I'm willing to leave that for follow-up work.
This appears to have switched to tabbott as the author during commit
squashing some time ago, but this commit is certainly:
Co-Authored-By: Wbert Adrián Castro Vera <wbertc@gmail.com>
This simplifies the update_display_settings endpoint to use REQ for
validation, rather than custom if/else statements.
The test changes just take advantage of the now more consistent
syntax.
This changes the payload that is used
to populate `page_params` for the webapp,
as well as responses to the once-every-50-seconds
presence pings.
Now our dictionary of users only has these
two fields in the value:
- active_timestamp
- idle_timestamp
Example data:
{
6: Object { idle_timestamp: 1585746028 },
7: Object { active_timestamp: 1585745774 },
8: Object { active_timestamp: 1585745578,
idle_timestamp: 1585745400}
}
We only send the slimmer type of payload
to clients that have set `slim_presence`
to True.
Note that this commit does not change the format
of the event data, which still looks like this:
{
website: {
client: 'website',
pushable: false,
status: 'active',
timestamp: 1585745225
}
}
This commit migrates the Zulip outgoing webhook payload to
/zulip-outgoing-webhook:post in OpenAPI.
Since this migrates the last payloads from api/fixtures.json to
OpenAPI, this commit removes the api/fixtures.json file and the functions
accessing the file.
Tweaked by tabbott to further remove an unnecessary conditional.
This is critical for importing the very first realm into an empty
server, since in 27b15a9722, we changed
the model to create the internal realm when the first real realm would
be created, but neglected the data import code path.
The distinction between ValueError and TypeError
is not useful in these functions:
- extract_stream_indicator
- extract_private_recipients (or its callees)
These are always invoked in views to validate
user input.
When we use REQ to wrap the validators, any
Exception gets turned into a JsonableError, so
the distinction is not important.
And if we don't use REQ to wrap the validators,
the errors aren't caught.
Now we just let these functions directly produce
the desired end result for both codepaths.
Also, we now flag the error strings for translation.
This setting is being overridden by the frontend since the last
commit, and the security model is clearer and more robust if we don't
make it appear as though the markdown processor is handling this
issue.
Co-authored-by: Tim Abbott <tabbott@zulipchat.com>
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Zulip's modal_link markdown feature has not been used since 2017; it
was a hack used for a 2013-era tutorial feature and was never used
outside that use case.
Unfortunately, its sloppy implementation was exposed in the markdown
processor for all users, not just the tutorial use case.
More importantly, it was buggy, in that it did not validate the link
using the standard validation approach used by our other code
interacting with links.
The right solution is simply to remove it.
When more than one outgoing webhook is configured,
the message which is sent to the webhook bot passes
through the finalize_payload function multiple times,
which mutated the message dict in a way that many keys
were lost from the dict.
This commit fixes that problem by having
`finalize_payload` return a shallow copy of the
incoming dict, instead of mutating it. We still
mutate dicts inside of `post_process_dicts`, though,
for performance reasons.
This was slightly modified by @showell to fix the
`test_both_codepaths` test that was added concurrently
to this work. (I used a slightly verbose style in the
tests to emphasize the transformation from `wide_dict`
to `narrow_dict`.)
I also removed a deepcopy call inside
`get_client_payload`, since we now no longer mutate
in `finalize_payload`.
Finally, I added some comments here and there.
For testing, I mostly protect against the root
cause of the bug happening again, by adding a line
to make sure that `sender_realm_id` does not get
wiped out from the "wide" dictionary.
A better test would exercise the actual code that
exposed the bug here by sending a message to a bot
with two or more services attached to it. I will
do that in a future commit.
Fixes #14384
The `event` parameter is never used by `process_success`,
and eliminating it allows us to greatly simplify tests
that are just confusingly passing in events that are
totally ignored.
We've had a bug for a while that if any ScheduledEmail objects get
created with the wrong email sender address, even after the sysadmin
corrects the problem, they'll still get errors because of the objects
stored with the wrong format.
We solve this by using FromAddress placeholder strings in the
send_future_email function, so that ScheduledEmail objects end up
setting the final `from_address` value when mail is actually sent
using the setting in effect at that time.
Fixes #11008.
When we are fetching messages, we need to hydrate
stream names into the messages for legacy reasons.
(Ideally, we could skip this step for the webapp
and modern mobile clients, since they really only
need stream_ids, but we're not there yet.)
We keep a recipient cache that maps recipient ids
to stream names.
When we populate that cache, we now use `values(...)`
to avoid fat objects and extra DB work.
Note that we are already using a similar technique
for hydrating PM/huddle recipients.
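The values(...) idea is roughly this (model and variable names are
assumed from context, not the exact code):

    def fill_stream_name_cache(needed_recipient_ids):
        # Fetch only the two columns we need, avoiding fat Stream
        # objects and the extra DB work of hydrating full rows.
        rows = Stream.objects.filter(
            recipient_id__in=needed_recipient_ids,
        ).values("recipient_id", "name")
        return {row["recipient_id"]: row["name"] for row in rows}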
The previous system for documenting arguments was very ugly if any of
the examples or descriptions were wrong. After thinking about this
for a while, I concluded the core problem was that a table was the
wrong design element to use for API parameters, and we'd be much
better off with individual card-type widgets instead.
This rewrites the API arguments documentation implementation to use a
basic sort of card-like system with some basic styling; I think the
result is a lot more readable, and it's a lot more clear how we would
add additional OpenAPI details (like parameter types) to the
documentation.
This commit modifies 'zerver/lib/bot_lib.py' to decouple the
user-controllable 'service_name' parameter from the value that is
passed in to 'import_module'. This is done as a precautionary
hardening.
This commit introduces two new functions in 'url_encoding.py' which
centralize two common patterns for constructing redirect URLs. It
also migrates the files using those patterns to use the new
functions.
After subscribing a stream email address to a Mailman email list
and receiving a message from it (using the polling configuration
with an Exim + Dovecot mailserver), the following error message
is emitted by Zulip:
Logger zerver.lib.email_mirror, from module zerver.lib.email_mirror line 77:
Error generated by Anonymous user (not logged in) on zulip deployment
Sender: "Foo Bar" <foo@example.com>
To: No recipient found
Missing recipient in mirror email
This is because the To: header on the received email corresponds
to the email list, and there are no other headers to indicate the
final recipient, apart from the "Envelope-To" header added by
Exim. To resolve this problem, the commit adds "Envelope-To" to
the list of headers to check for a match.
This improves the error handling for invalid values of the
propagate_mode parameter to our message editing endpoints.
Previously, invalid values would just work like change_one rather than
doing nothing.
The previous starred_messages race handling did not correctly consider
the possibility that an event queue might have been registered without
starred_messages.
This avoids operating on RateLimitedObjects and making the classes
depend on each other too strongly. This also allows getting rid of the get_keys()
function from RateLimitedObject, which was a redis rate limiter
implementation detail. RateLimitedObject should only define their own
key() function and the logic forming various necessary redis keys from
them should be in RedisRateLimiterBackend.
type().__name__ is sufficient, and much more readable than type(), so it's
better to use the former for keys.
We also make the classes consistent in forming the keys in the format
type(self).__name__:identifier and adjust logger.warning and statsd to
take advantage of that and simply log the key().
We don't need `do_create_user` to send a partial
event here for bots. The only caller to `do_create_user`
that actually creates bots (apart from some tests that
just need data setup) is `add_bot_backend`, which
sends the more complete event including bot "extras"
like service info.
The modified event tests show the simplification
here (2 events instead of 3).
Also, the bot tests now use tuple unpacking, which
will force a ValueError if we duplicate events
again.
We were going back to the database to get all
the users in the realm, when we had them right
there already. I believe this is a legacy
of us running on a very old version of Django
(back in early days), where `bulk_create`
didn't give you back ids in a nice way.
In the interim we added the `RealmAuditLog`
code, which does take advantage of the
existing profiles (and proves we can rely
on them).
But meanwhile we were still
doing a query to get all N users in the
realm. With `select_related`!
To be fair, bulk_create_users() is by
its very nature a pretty infrequent
operation. This change is more motivated
by code cleanup.
Now we just loop through user_ids for
the Recipient/Subscriber foreign key rows.
I also removed some fairly convoluted code mapping
emails to user_ids and just work in user_id
space.
We try to use the correct variation of `email`
or `delivery_email`, even though in some
databases they are the same.
(To find the differences, I temporarily hacked
populate_db to use different values for email
and delivery_email, and reduced email visibility
in the zulip realm to admins only.)
In places where we want the "normal" realm
behavior of showing emails (and having `email`
be the same as `delivery_email`), we use
the new `reset_emails_in_zulip_realm` helper.
A couple random things:
- I fixed any error messages that were leaking
the wrong email
- a test that claimed to rely on the order
of emails no longer does (we sort user_ids
instead)
- we now use user_ids in some place where we used
to use emails
- for IRC mirrors I just punted and used
`reset_emails_in_zulip_realm` in most places
- for MIT-related tests, I didn't fix email
vs. delivery_email unless it was obvious
I also explicitly reset the realm to a "normal"
realm for a couple tests that I frankly just didn't
have the energy to debug. (Also, we do want some
coverage on the normal case, even though it is
"easier" for tests to pass if you mix up `email`
and `delivery_email`.)
In particular, I just reset data for the analytics
and corporate tests.
I guess `test_classes` has 100% line coverage
enforcement, which is a bit tricky for error
handling.
This fixes that, as well as making the name
snake_case and improving the format of the
errors.
This test was using the anti-pattern of doing an
assertion inside a conditional.
I added the `findOne` helper to make it easier
to write robust tests for scenarios like this.
If I send a message from a normal Zulip client, it is
considered to be "read" by me. But if I send it via
an API program (using my human account), the message
is not immediately "read" by me.
Now we handle this correctly in `get_raw_unread_data`.
The symptom of this was that these messages would get
"stuck" in "Private Messages" narrows until the next
time you reloaded your app.
Using an Exists subquery to avoid scanning the entire Subscription
table seems to speed things up greatly.
Set up with:
./manage.py populate_db --extra-users 2000 --extra-streams 1000
Tested on my computer, the original function was taking ~1.2 seconds;
the optimized version only ~0.05-0.06 seconds.
Likely fixes #13874; we can re-open if after production testing we
feel more work is warranted.
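The Exists pattern, roughly (model and field names here are
illustrative guesses, not the exact query):

    from django.db.models import Exists, OuterRef

    def users_with_active_subscriptions(realm):
        subs = Subscription.objects.filter(
            user_profile_id=OuterRef("id"),
            active=True,
        )
        # The Exists subquery lets the planner probe Subscription by
        # index, instead of scanning/joining the whole table.
        return UserProfile.objects.filter(realm=realm).annotate(
            has_sub=Exists(subs),
        ).filter(has_sub=True)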
In 220c2a5ff3 I
introduced a query to find invites by delivery_email
but was still using email as the key.
For most realms `email` and `delivery_email` are
synonymous, so this temporary bug would not affect
them. For realms that restrict emails, the invite
would have probably failed for other reasons, but
the symptom would have been less clear.
We now have this API...
If you really just need to log in
and not do anything with the actual
user:
self.login('hamlet')
If you're gonna use the user in the
rest of the test:
hamlet = self.example_user('hamlet')
self.login_user(hamlet)
If you are specifically testing
email/password logins (used only in 4 places):
self.login_by_email(email, password)
And for failures uses this (used twice):
self.assert_login_failure(email)
This reduces query counts in some cases, since
we no longer need to look up the user again. In
particular, it reduces some noise when we
count queries for O(N)-related tests.
The query count is usually reduced by 2 per
API call. We no longer need to look up Realm
and UserProfile. In most cases we are saving
these lookups for the whole tests, since we
usually already have the `user` objects for
other reasons. In a few places we are simply
moving where that query happens within the
test.
In some places I shorten names like `test_user`
or `user_profile` to just be `user`.
We want a clean codepath for the vast majority
of cases of using api_get/api_post, which now
uses email and which we'll soon convert to
accepting `user` as a parameter.
These apis that take two different types of
values for the same parameter make sweeps
like this kinda painful, and they're pretty
easy to avoid by extracting helpers to do
the actual common tasks. So, for example,
here I still keep a common method to
actually encode the credentials (since
the whole encode/decode business is an
annoying detail that you don't want to fix
in two places):
def encode_credentials(self, identifier: str, api_key: str) -> str:
"""
identifier: Can be an email or a remote server uuid.
"""
credentials = "%s:%s" % (identifier, api_key)
return 'Basic ' + base64.b64encode(credentials.encode('utf-8')).decode('utf-8')
But then the rest of the code has two separate
codepaths.
And for the uuid functions, we no longer have
crufty references to realm. (In fairness, realm
will also go away when we introduce users.)
For the `is_remote_server` helper, I just inlined
it, since it's now only needed in one place, and the
name didn't make total sense anyway, plus it wasn't
a super robust check. In context, it's easier
just to use a comment now to say what we're doing:
# If `role` doesn't look like an email, it might be a uuid.
if settings.ZILENCER_ENABLED and role is not None and '@' not in role:
# do stuff
Instead of trying to set the _requestor_for_logs attribute in all the
relevant places, we try to use request.user when possible (that will be
when it's a UserProfile or RemoteZulipServer as of now). In other
places, we set _requestor_for_logs to avoid manually editing the
request.user attribute, as it should mostly be left for Django to
manage.
In places where we remove the "request._requestor_for_logs = ..." line,
it is clearly implied by the previous code (or the current surrounding
code) that request.user is of the correct type.
This uses the better, modern, user ID based API for sending messages
internally in the test suite, something that's convenient to do as a
follow-up to the migration to pass UserProfile objects to these
functions.
This commit mostly makes our tests less
noisy, since emails are no longer an important
detail of sending messages (they're not even
really used in the API).
It also sets us up to have more scrutiny
on delivery_email/email in the future
for things that actually matter. (This is
a prep commit for something along those
lines, kind of hard to explain the full
plan.)
We plan to use these records to check and record the schema of Zulip's
events for the purposes of API documentation.
Based on an original messier commit by tabbott.
In theory, a nicer version of this would be able to work directly off
the mypy type system, but this will be good enough for our use case.
This extends our email address visibility settings to deny access to
user email addresses even to organization administrators.
At the moment, they can of course change the setting (which leaves an
audit trail), but in the future only organization owners will be able
to change that setting.
While we're at it, we rewrite the settings_data.js test to cover all
the cases in a more consistent way.
Fixes #14111.
This isn't the only bug in our testing libraries with
EMAIL_ADDRESS_VISIBILITY; but we don't have a lot of tests that need
to deal with that set of settings.
We were using `code` to pass around messages.
The `code` field is designed to be a code, not
a human-readable message.
It's possible that we don't actually need two
flavors of messages for these type of validations,
but I didn't want to change that yet.
We **definitely** don't need to put two types of
message in the exception, so I fix that. Instead,
I just have the caller ask what level of detail
it needs.
I added a non-verbose message for the case of
system bots.
I removed the non-translated version of the message
for deactivated accounts, which didn't have test
coverage and is slightly more prone to leaking
email info that we don't want to leak.
In the prep commits leading up to this, we split
out two new helpers:
validate_email_is_valid
get_errors_for_new_emails
Now when we validate invites we use two separate
loops to filter our emails.
Note that the two extracted functions map to two
of the data structures that used to be handled
in a single loop, and now we break them out:
errors = validate_email_is_valid
skipped = get_errors_for_new_emails
The first loop checks that emails are even valid
to begin with.
The second loop finds out whether emails are already
in use.
The second loop takes advantage of this helper:
get_errors_for_new_emails
The second helper can query all potential new emails
with a single round trip to the database.
This reduces our query count.
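A sketch of the bulk check (the function name is from this commit;
the body is a plausible guess, not the exact code):

    def get_errors_for_new_emails(realm, emails):
        # One round trip: pull every matching email at once.
        existing = set(
            UserProfile.objects.filter(
                delivery_email__in=emails,
                realm=realm,
            ).values_list("delivery_email", flat=True)
        )
        return {
            email: "Already has an account."
            for email in emails
            if email in existing
        }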
The main purpose of this new function is to allow
us to validate emails in bulk, which we don't do
yet (still setting the stage for that).
This is still a speedup, though, since in our
caller we grab only three fields now.
And other than that, we're essentially doing
the same query for the single-email case, just
outside the loop.
We are trying to kill off `validate_email`, so
we no longer call it from these tests.
These tests are already kind of low-level in
nature, so testing the more specific helpers
here should be fine.
Note that we also make the third parameter
to `validate_email` non-optional in this commit,
to preserve 100% coverage. This is really just
refactoring noise--we will soon eliminate the
entire function, but I didn't want to do everything
in a huge commit.
This is a prep commit that will allow us
to more efficiently validate a bunch of
emails in the invite UI.
This commit does not yet change any
behavior or performance.
A secondary goal of this commit is to
prepare us to eliminate some hackiness
related to how we construct
`ValidationError` exceptions.
It preserves some quirks of the prior
implementation:
- the strings we decided to translate
here appear haphazard (and often
get ignored anyway)
- we use `msg` in most codepaths,
but use `code` for invites
Right now we never actually call this with
more than one email, but that will change
soon.
Note that part of the rationale for the inner
method here is to avoid a test coverage bug
with `continue` in loops.
This has two goals:
- sets up a future commit to bulk-validate
emails
- the extracted function is more simple,
since it just has errors, and no codes
or deactivated flags
This commit leaves us in a somewhat funny
intermediate state where we have
`action.validate_email` being a glorified
two-line function with strange parameters,
but subsequent commits will clean this up:
- we will eliminate validate_email
- we will move most of the guts of its
other callee to lib/email_validation.py
To be clear, the code is correct here, just
kinda in an ugly, temporarily-disorganized
intermediate state.
We now use the `get_realm_email_validator()`
helper to build an email validator outside
the loop of emails in our invite list.
This allows us to perform RealmDomain queries
only once per request, instead of once per
email.
Now called:
validate_email_not_already_in_realm
We have a separate validation function that
makes sure that the email fits into a realm's
domain scheme, and we want to avoid naming
confusion here.
Without the fix here, you will get an exception
similar to below if you try to invite one of the
cross realm bots. (The actual exception is
a bit different due to some rebasing on my branch.)
File "/home/zulipdev/zulip/zerver/lib/request.py", line 368, in _wrapped_view_func
return view_func(request, *args, **kwargs)
File "/home/zulipdev/zulip/zerver/views/invite.py", line 49, in invite_users_backend
do_invite_users(user_profile, invitee_emails, streams, invite_as)
File "/home/zulipdev/zulip/zerver/lib/actions.py", line 5153, in do_invite_users
email_error, email_skipped, deactivated = validate_email(user_profile, email)
File "/home/zulipdev/zulip/zerver/lib/actions.py", line 5069, in validate_email
return None, (error.code), (error.params['deactivated'])
TypeError: 'NoneType' object is not subscriptable
Obviously, you shouldn't try to invite a cross
realm bot to your realm, but we want a reasonable
error message.
RESOLUTION:
Populate the `code` parameter for `ValidationError`.
BACKGROUND:
Most callers to `validate_email_for_realm` simply catch
the `ValidationError` and then report a more generic error.
That's also what `do_invite_users` does, but it has the
somewhat convoluted codepath through `validate_email`
that triggers this code:
try:
validate_email_for_realm(user_profile.realm, email)
except ValidationError as error:
return None, (error.code), (error.params['deactivated'])
The way that we're using the `code` parameter for
`ValidationError` feels hacky to me. The intention
behind `code` is to provide a descriptive error to
calling code, and it's not intended for humans, and
it feels strange that we actually translate this in
other places. Here are the Django docs:
https://docs.djangoproject.com/en/3.0/ref/forms/validation/
And then here's an example of us actually translating
a code (not part of this commit, just providing context):
raise ValidationError(_('%s already has an account') %
(email,), code = _("Already has an account."),
params={'deactivated': False})
Those codes eventually get put into InvitationError, which
inherits from JsonableError, and we do actually display
these errors in the webapp:
if skipped and len(skipped) == len(invitee_emails):
# All e-mails were skipped, so we didn't actually invite anyone.
raise InvitationError(_("We weren't able to invite anyone."),
skipped, sent_invitations=False)
I will try to untangle this somewhat in upcoming commits.
Previously, the input:
====================
- One
- Two
Two continued
====================
Would produce the same output as:
====================
- One
- Two
```
Two continued
```
====================
This was because our CodeBlockProcessor had a higher priority than
the ListIndentProcessor. This issue was discussed here:
https://chat.zulip.org/#narrow/stream/9-issues/topic/continuation.20paragraphs.20in.20list.20items.
The /delete_topic endpoint could be used to request the deletion of a topic,
which would cause do_delete_messages to be called with an empty set in
these cases:
1. Requesting deletion of an empty stream.
2. Requesting deletion of a topic in a private stream with history not
public to subscribers, if the requesting admin doesn't have access to
any of the messages in that topic.
This function slims down the data that we get
from the database in order to create the
streams part of our client payload.
We also fix a typo.
We also clearly distinguish between queries
and lists here.
This new method prevents us from getting fat
objects from the database.
Instead, now we just get ids from the database
to build our subqueries.
Note that we could also technically eliminate
the `set(...)` wrappers in this code to have
Django make a subquery and save a round trip.
I am postponing that for another commit (since
it's still somewhat coupled to some other
complexity in `do_get_streams` that I am trying
to cut through, plus it's not the main point
of this commit.)
BEFORE:
# old, still in use for other codepaths
def get_stream_subscriptions_for_user(user_profile: UserProfile) -> QuerySet:
# TODO: Change return type to QuerySet[Subscription]
return Subscription.objects.filter(
user_profile=user_profile,
recipient__type=Recipient.STREAM,
)
user_subs = get_stream_subscriptions_for_user(user_profile).filter(
active=True,
).select_related('recipient')
recipient_check = Q(id__in=[sub.recipient.type_id for sub in user_subs])
AFTER:
# newly added
def get_subscribed_stream_ids_for_user(user_profile: UserProfile) -> QuerySet:
return Subscription.objects.filter(
user_profile_id=user_profile,
recipient__type=Recipient.STREAM,
active=True,
).values_list('recipient__type_id', flat=True)
subscribed_stream_ids = get_subscribed_stream_ids_for_user(user_profile)
recipient_check = Q(id__in=set(subscribed_stream_ids))
We calculate `max_message_id` for the mobile client.
Our query now no longer joins to the Message table
and just grabs one value instead of fat objects.
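Roughly this shape of query (names are illustrative):

    from django.db.models import Max

    def get_max_message_id(user_profile):
        # One scalar straight off UserMessage; no join to the
        # Message table and no fat objects.
        return UserMessage.objects.filter(
            user_profile=user_profile,
        ).aggregate(m=Max("message_id"))["m"]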
For historical reasons we were creating Recipient
objects at some point in the typing-notifications
codepath. Now we just work with UserProfiles.
This removes some queries, as indicated by
the change to `len(queries)` in a couple of the
tests.
The one subtle thing that changes here is huddles.
If user 10 sends a typing notification that they
are talking to users 20 and 30, there might not
actually be a huddle for users 10/20/30, but
we were actually creating huddles on the fly!
There is no need to create huddles just for
typing notifications, since we don't even
share huddle ids with our clients. The clients
just infer the huddles.
Some of the code that gets killed off here as
somewhat "collateral damage" is some
defensive code related to formerly supporting streams
in typing indicators. The support for streams
was killed off almost as soon as we released
the feature, and the codepath is pretty clearly
user-centric at this point.
I actually like this pattern:
def check_send_typing_notification(...):
typing_notification = check_typing_notification(...)
do_send_typing_notification(...)
It can help divide responsibilities nicely and make it easy
to write detailed unit tests against each of the two helpers.
Unfortunately, the good things didn't really happen here, and
instead we got the worst aspects of the pattern:
- The responsibilities for validation leaked into
the second function.
- Both functions were doing sane things individually
that became not-so-sane in the big picture (namely,
we ended up making Recipient objects for no reason,
but if you read each of the helpers, it was just one
step that seemed reasonable).
- Passing around dictionaries for results can be annoying.
Also, the pattern made a lot more sense when the validation
for typing was a lot more complicated. My prior commit makes
it so that we only ever deal with a list of user_ids.
Anyway, now I'm inlining it. :)
Subsequent commits will clean up the more substantive issue
here, which is that we are building Recipients for no reason.
Before the Django 2.x upgrade, the DatabaseCreation
argument took an integer value. To deal with running
multiple test instances, we created a random start
range that could count up 100 workers until the next
random id, arbitrarily limiting the number of workers
to 100.
Post-upgrade, we can now use string values, enabling
the database + worker numbers to be more readable, as
well as removing the cap on the worker count.
This field wasn't accessed by any clients and was a less robust
version of the user_id field. Any client hoping to be interested in
who did message edits should be able to handle working with user IDs
rather than email addresses.
This is preparation for supporting moving messages between streams in
some cases.
It doesn't actually have any functional effect, since flush_message
clears the message unconditionally anyway.
This is mostly refactoring, but we also prevent a new
type of value error (list of non-int-or-string). The
new test code helps enforce that.
Cleanup includes:
- Use early-exit for email case.
- Rename helpers to get_validate_*.
- Avoid clumsy rebuilding of lists in helpers.
- Avoid the confusing `recipient` name (which
can be confused with the model by the same
name).
- Just delegate duplicate-id/email-removal to
the helpers.
The cleaner structure allows us to eliminate a couple
mypy workarounds.
Credits to @xpac1985 for reporting, debugging, and proposing a fix to
the issue. The proposed fix was modified slightly by @hackerkid to set the
correct value for max_invites and upload_quota_gb. Tests added by
@hackerkid.
Fixes #13974
This fixes a confusing aspect of how our automated tests worked
previously, where we'd send almost all HTTP requests in the unlikely
configuration of no User-Agent string specified.
We need to adjust query counts in a few tests that now are a bit
cheaper because they now can take advantage of a Client object created
in server_initialization.py in `process_client`.
To avoid some hidden bugs in tests caused by every ldap user having the
same password, we give each user a different password, generated based
on their uids (to avoid some ugly hard-coding in a bunch of places).
‘req_var in request.GET’ was previously believed to be slow from
profiling results. However, the real explanation for those profiling
results is that WSGIRequest.GET is a lazy cached property, so there’s
no reason to avoid it if we’re accessing request.GET anyway.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
In https://github.com/zulip/zulip/pull/12823 some changes to the realms
structure have been made, so now both in production and development
cross-realm bots live in the realm with string_id "zulipinternal".
There was a TODO in retention code to eliminate a conditional in a query
that became redundant with this change, and also the zulipinternal realm
should be omitted from the archiving process in archive_messages().
We use this single regular expression for processing essentially every
request, so it's definitely worth hinting to Python that we're going
to do so by compiling it. Saves about 40us per request.
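The change amounts to the following (the pattern shown is
illustrative, not the real one):

    import re

    # Compiled once at import time, instead of recompiled per request.
    API_PATH_RE = re.compile(r"^/(api/v1|json)/")

    def looks_like_api_request(path):
        return API_PATH_RE.match(path) is not None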
A sloppy implementation of the main has_request_variables wrapper
function meant that it did two very inefficient things:
* To combine together the GET and POST parameters, it would make a
copy of the request.GET QueryDict object, which combined with the
fact that these objects are slow to access, consumed about 90us per
argument.
* Doing this in a loop (one time per argument), rather than once,
which resulted in us doing this 11 times for a `GET /events` query.
Fixing this to just make a dictionary and combine things with some
small loops saved about 1 millisecond from the total runtime of GET
/events (for comparison, the total actual work of that view function
is about 700ms).
We need to fix at least one test that used a bad mock HttpRequest
object that didn't have a .GET property.
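A hedged sketch of the shape of the fix: flatten both QueryDicts into
one plain dict once per request, before the per-argument loop (which
source wins on a name collision is an assumption here):

    def combined_params(request):
        params = {}
        # Plain-dict access in the argument loop is cheap; the slow
        # QueryDict objects are each touched only once, here.
        for query_dict in (request.POST, request.GET):
            for key, value in query_dict.items():
                params.setdefault(key, value)
        return params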
The comment explains this issue, but effectively, the upgrade to
Django 2.x means that Django's built-in django.request logger was
writing to our errors logs WARNING-level data for every 404 and 400
error. We don't consider user errors to be a problem worth
highlighting in that log file.
Django 2.2.x is the next LTS release after Django 1.11.x; I expect
we'll be on it for a while, as Django 3.x won't have an LTS release
series out for a while.
Because of upstream API changes in Django, this commit includes
several changes beyond requirements:
* urls: django.urls.resolvers.RegexURLPattern has been replaced by
django.urls.resolvers.URLPattern; affects OpenAPI code and related
features which re-parse Django's internals.
https://code.djangoproject.com/ticket/28593
* test_runner: Change number to suffix. Django changed the name in this
ticket: https://code.djangoproject.com/ticket/28578
* Delete now-unnecessary SameSite cookie code (it's now the default).
* forms: urlsafe_base64_encode returns string in Django 2.2.
https://docs.djangoproject.com/en/2.2/ref/utils/#django.utils.http.urlsafe_base64_encode
* upload: Django's File.size property replaces _get_size().
https://docs.djangoproject.com/en/2.2/_modules/django/core/files/base/
* process_queue: Migrate to new autoreload API.
* test_messages: Add an extra query caused by .refresh_from_db() losing
the .select_related() on the Realm object.
* session: Sync SessionHostDomainMiddleware with Django 2.2.
There's a lot more we can do to take advantage of the new release;
this is tracked in #11341.
Many changes by Tim Abbott, Umair Waheed, and Mateusz Mandera are
squashed into this commit.
Fixes #10835.
Since essentially the first use of Tornado in Zulip, we've been
maintaining our Tornado+Django system, AsyncDjangoHandler, with
several hundred lines of Django code copied into it.
The goal for that code was simple: We wanted a way to use our Django
middleware (for code sharing reasons) inside a Tornado process (since
we wanted to use Tornado for our async events system).
As part of the Django 2.2.x upgrade, I looked at upgrading this
implementation to be based off modern Django, and it's definitely
possible to do that:
* Continue forking load_middleware to save response middleware.
* Continue manually running the Django response middleware.
* Continue working out a hack involving copying all of _get_response
to change a couple lines allowing our Tornado code to not
actually return the Django HttpResponse so we can long-poll. The
previous hack of returning None stopped being viable with the Django 2.2
MiddlewareMixin.__call__ implementation.
But I decided to take this opportunity to look at trying to avoid
copying material Django code, and there is a way to do it:
* Replace RespondAsynchronously with a response.asynchronous attribute
on the HttpResponse; this allows Django to run its normal plumbing
happily in a way that should be stable over time, and then we
proceed to discard the response inside the Tornado `get()` method to
implement long-polling. (Better yet might be raising an
exception?). This lets us eliminate maintaining a patched copy of
_get_response.
* Removing the @asynchronous decorator, which didn't add anything now
that we only have one API endpoint backend (with two frontend call
points) that could call into this. Combined with the last bullet,
this lets us remove a significant hack from our
never_cache_responses function.
* Calling the normal Django `get_response` method from zulip_finish
after creating a duplicate request to process, rather than writing
totally custom code to do that. This lets us eliminate maintaining
a patched copy of Django's load_middleware.
* Adding detailed comments explaining how this is supposed to work,
what problems we encounter, and how we solve various problems, which
is critical to being able to modify this code in the future.
A key advantage of these changes is that the exact same code should
work on Django 1.11, Django 2.2, and Django 3.x, because we're no
longer copying large blocks of core Django code and thus should be
much less vulnerable to refactors.
There may be a modest performance downside, in that we now run both
request and response middleware twice when longpolling (once for the
request we discard). We may be able to avoid the expensive part of
it, Zulip's own request/response middleware, with a bit of additional
custom code to save work for requests where we're planning to discard
the response. Profiling will be important to understanding what's
worth doing here.
1) Created a new class `DatabaseType` and accessed its objects inside
`template_database_status()` instead of sending five arguments with
default values.
2) Made `check_files` and `setting_name` local variables instead of
function parameters, since they had the same value (None) for every call.
Fixes #13845.
webpack optimizes JSON modules using JSON.parse("{…}"), which is
faster than the normal JavaScript parser.
Update the backend to use emoji_codes.json too instead of the three
separate JSON files.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This code was very useful when first implemented to help catch errors
where our backend templates didn't render, but has been superseded by
the success of our URL coverage testing (which ensures every URL
supported by Zulip's urls.py is accessed by our tests, with a few
exceptions) and other tests covering all of the emails Zulip sends.
It has a significant maintenance cost because it's a bit hacky and
involves generating fake context, so it makes sense to remove these.
Any future coverage issues with templates should be addressed with a
direct test that just accesses the relevant URL or sends the relevant
email.
We now use realm_id for querying UserPresence
instead of building a big WHERE clause from the
list of user_ids.
This commit may be a bit hard to measure, since
we still get the list of user_ids for the PushToken
query in the same method.
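In ORM terms, the shift is roughly this (names are illustrative):

    # Before: a huge IN clause built from the realm's user ids.
    UserPresence.objects.filter(user_profile_id__in=user_ids)

    # After: a single indexed equality check.
    UserPresence.objects.filter(realm_id=realm.id)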
We now validate streams with a separate
function from PM recipients.
It's confusing enough all the ways you can
encode a stream or encode the PM recipients,
but trying to do it all in one function was
hard to reason about and led to at least one
bug.
In particular, there was a bug where streams
with commas in them would get split. Now
we just don't ever split on commas inside
of `extract_stream_indicator`.
Fixes #13836
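A sketch of the stream-side validator (error handling here is
simplified; the real code raises our JSON-able errors):

    import json

    def extract_stream_indicator(s):
        # Try JSON first: an integer stream id or a quoted name.
        try:
            data = json.loads(s)
        except json.JSONDecodeError:
            # Not JSON: the whole string, commas and all, is the name.
            return s
        if isinstance(data, (str, int)):
            return data
        raise ValueError("Invalid data type for stream")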
After removing internal_send_message() in a recent
commit, we now have only two callers for
extract_recipients, and they are both related
to our REQ mechanism that always passes strings
to converters. (If there are default values,
REQ does not call the converters.)
We therefore make two changes:
- use the more strict annotation of "str"
for the `s` parameter
- don't bother with the isinstance check
This index is intended to optimize the performance of the very
frequently run query of "what is the presence status of all users in a
realm?".
Main changes:
- add realm_id to UserPresence
- add index for realm_id
- backfill realm_id for old rows
- change all writes to UserPresence to include
realm_id
The index is of this form:
"zerver_userpresence_realm_id_5c4ef5a9" btree (realm_id)
We will create an index on (realm_id, timestamp) in a
future commit, but I think it's a bit faster if you do
the backfill before the index.
There's also a minor tweak to the populate_db script.
This is just a refactoring to the more modern API
for sending internal messages.
To make this work we now plumb the email_gateway
flag through `internal_send_stream_message` instead
of `internal_send_message`.
We also change `send_zulip` to have its callers
pass in a full UserProfile object (which one of
them already had).
We prefer this to internal_send_message().
We are trying to deprecate `internal_send_message`,
which has extra moving parts related to
`extract_recipients` and `Addressee.legacy_build`.
There are two chunks of code that I touch here
that look pretty similar, but I'm not quite
sure they're worth de-duplicating, since they
use different topics and different message
content.
Instead of having `notify_new_user` delegate
all the heavy lifting to `send_signup_message`,
we just rename `send_signup_message` to be
`notify_new_user` and remove the one-line
wrapper.
We remove a lot of obsolete complexity:
- `internal` was no longer ever set to True
by real code, so we kill it off, as well
as killing off the internal_blurb code
and the now-obsolete test
- the `sender` parameter was actually an
email, not a UserProfile, but I think
that got past mypy due to the caller
passing in something from settings.py
- we were only passing in NOTIFICATION_BOT
for the sender, so we just hard code
that now
- we eliminate the verbose
`admin_realm_signup_notifications_stream`
parameter and just hard code it to
"signups"
- we weren't using the optional realm
parameter
There's also a long ugly comment in
`get_recipient_info` related to this code
that I amended for now.
We should try to take action in a subsequent
commit.
This avoids an unnecessary join to UserProfile.
To verify this, you can do `print(queries)` in the
`test_get_custom_profile_fields_from_api` test. It's
kinda noisy, so I excerpted them below...
Before:
SELECT ...
FROM "zerver_customprofilefieldvalue"
INNER JOIN "zerver_userprofile" ON ("zerver_customprofilefieldvalue"."user_profile_id" = "zerver_userprofile"."id")
INNER JOIN "zerver_customprofilefield" ON ("zerver_customprofilefieldvalue"."field_id" = "zerver_customprofilefield"."id")
WHERE "zerver_userprofile"."realm_id" = 2
After:
SELECT ...
FROM "zerver_customprofilefieldvalue"
INNER JOIN "zerver_customprofilefield" ON ("zerver_customprofilefieldvalue"."field_id" = "zerver_customprofilefield"."id")
WHERE "zerver_customprofilefield"."realm_id" = 2'
I don't have any way to measure the two queries with
realistic data, but I would assume the second
query is significantly faster on most of our instances,
since CustomProfileField should be tiny.
The line removed here is a noop, as both sides of the
immediately following conditional reassign the
same variable.
This harmless cruft was the result of the recent commit
1ae5964ab8, which added
support for single-user GETs.
Apparently, the arguments passed to template_database_status were
incorrect for the manual testing development database, in that we
didn't pass a status_dir when calling into that code from provision.
The result was that provisioning before running `test-backend` would
ignore changes to the list of check_files (etc.) made after rebasing,
and vice versa.
The cleanest fix is to compute status_dir from other values passed in;
I'm also going to open a follow-up issue for creating a better overall
interface here.
This adds a new API endpoint for querying basic data on a single other
user in the organization, reusing the existing infrastructure (and
view function!) for getting data on all users in an organization.
Fixes #12277.
This code is a bit flatter and just preps the data
for a single user. There is never any interaction
between the data for user A and user B, so we can
mostly avoid complicated nested data structures
and do most of the data-crunching on a per-user basis.
We also do an explicit sort of the data before
running it through groupby. The explicit sort
simplifies how we calculate `most_recent_info`
and also avoids needing to add `dt` to an intermediate
data structure.
Finally, when it comes to the individual client data,
the code has relied on the assumption that there is
only one row per client, which I believe to be true,
but now the code is more explicit about that.
django-phonenumber-field 2.4.0 adds tighter phone number validation
that rejects +12223334444 for having an invalid area code. This was
reverted in 4.0.0, but django-two-factor-auth still requires <3.99.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This commit includes a new `stream_post_policy` setting,
by replacing the `is_announcement_only` field from the Stream model,
which is done by mirroring the structure of the existing
`create_stream_policy`.
It includes the necessary schema and database migrations to migrate
the is_announcement_only boolean field to stream_post_policy,
a PositiveSmallIntegerField similar to many other settings.
This change is done to allow organization administrators to restrict
new members from creating and posting to a stream. However, this does
not affect admins who are new members.
With many tweaks by tabbott to documentation under /help, etc.
Fixes #13616.
This flag affects page_params and the
payload you get back from POSTs to this
url:
users/me/presence
The flag does not yet affect the
presence events that get sent to a
client.
This should ensure that folks rebasing past this commit from an older
database model get their database rebuilt in the way that will
match the test_subs.py query count of 40.
We will want to raise RateLimited in authenticate() in rate limiting
code - Django's authenticate() mechanism catches PermissionDenied, which
we don't want for RateLimited. We want RateLimited to propagate to our
code that called the authenticate() function.
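The distinction matters because of how Django treats exceptions in
authenticate(); a minimal sketch of the intent:

    class RateLimited(Exception):
        # Deliberately not a django.core.exceptions.PermissionDenied
        # subclass: authenticate() swallows PermissionDenied, treating
        # it as "this backend declined the credentials", while we need
        # RateLimited to propagate to our calling code.
        pass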
As more types of rate limiting of requests are added, one request may
end up having various limits applied to it - and the middleware needs to
be able to handle that. We implement that through a set_response_headers
function, which sets the X-RateLimit-* headers in a sensible way based
on all the limits that were applied to the request.
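A sketch of how the headers can be merged across limits (the function
name is from this commit; the attribute names are illustrative):

    def set_response_headers(response, results):
        # With several rate limits applied to one request, advertise
        # the most restrictive view: the smallest remaining quota and
        # the longest wait until the next request is allowed.
        response["X-RateLimit-Limit"] = str(min(r.limit for r in results))
        response["X-RateLimit-Remaining"] = str(min(r.remaining for r in results))
        response["X-RateLimit-Reset"] = str(max(r.seconds_until_reset for r in results))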
While the result of this change doesn't completely do what we need, it
does remove a huge amount of duplicated lists of fields. With a bit
more similar work, we should be able to eliminate a broad category of
potential bugs involving Stream and Subscription objects being
represented inconsistently in the API.
Work towards #13787.
This has the side effect of making new fields we add to Stream be
automatically included, which will help maintain this code as we
upgrade it.
This commit adds is_web_public, history_public_to_subscribers, and
email_notifications fields to the dictionary.
This modifies get_cross_realm_dicts in zerver.lib.users to call
format_user_row. This is done to remove current and prevent future
inconsistencies between in the dictionary formats for get_raw_user_data
and get_cross_realm_dicts.
Implementation substantially rewritten by tabbott.
Fixes #13638.
This moves get_cross_realm_dicts (from zerver.lib.actions),
get_raw_user_data and get_custom_profile_field_values (from
zerver.lib.events) to zerver.lib.users.
This extracts the user_data inner function from get_raw_user_data as a
reusable function. We intend to reuse it for cross-realm user dicts.
A few changes were made while extracting it:
* Renaming the UserProfile argument to acting_user, so we can do loops
over a local user_profile variable.
* Moved it to zerver.lib.users, as that's a more appropriate home for
this function formatting data on users.
* Simplified the calling convention for passing custom profile fields
to reflect the fact that this function processes a single user (and
is expected to be called in a loop).
"Zulip Voyager" was a name invented during the Hack Week to open
source Zulip for what a single-system Zulip server might be called, as
a Star Trek pun on the code it was based on, "Zulip Enterprise".
At the time, we just needed a name quickly, but it was never a good
name, just a placeholder. This removes that placeholder name from
much of the codebase. A bit more work will be required to transition
the `zulip::voyager` Puppet class, as that has some migration work
involved.
A wart that has long been present in Zulip's get_messages API is how
to request "the latest messages" in the API. Previously, the
recommendation was basically to pass anchor=10000000000000000 (for an
appropriately huge number). An accident of the server's implementation
meant that specific number of 0s was actually important to avoid a
buggy (or at least wasteful) value of found_newest=False if the query
had specified num_after=0 (since we didn't check).
This was the cause of the mobile issue
https://github.com/zulip/zulip-mobile/issues/3654.
The solution is to allow passing a special value of anchor='newest',
basically a special string-type value that the server can interpret as
meaning the user precisely just wants the most recent messages. We
also add an analogous anchor='oldest' or similar to avoid folks
needing to write a somewhat ugly anchor=0 for fetching the very first
messages.
We may want to also replace the use_first_unread_anchor argument with
a "first_unread" value for the anchor parameter.
While it's not always ideal to make a value have a variable type like
this, in this case it seems like a really clean way to express the
idea of what the user is asking for in the API.
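A sketch of parsing such a union-typed parameter (the error type and
names are illustrative, not the real validation code):

    def parse_anchor(raw):
        # The anchor is either one of the special strings or a
        # non-negative integer message id.
        if raw in ("newest", "oldest"):
            return raw
        try:
            anchor = int(raw)
        except ValueError:
            raise ValueError("Invalid anchor")
        if anchor < 0:
            raise ValueError("Invalid anchor")
        return anchor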
This is required for the upcoming type behavior of the "anchor"
parameter.
This change is the minimal work required to have our OpenAPI code not
fail when checking a union-type value of this form. We'll likely want
to, in the future, do something nicer, but it'd require more extensive
infrastructure for parsing of OpenAPI data than it's worth with our
current approach (we may want to switch to using a library).
The proximal issue here is that in upcoming commits, we're going to
change the type of the `anchor` field in `get_messages_backend` to
support passing either an integer or a string.
Many of our tests using POSTRequestMock currently define a query
object that uses integer values for the integer fields we're going to
pass into it, e.g. {'num_after': 0}. That is the correct type for
that field in the Zulip API, before HTTP encoding turns it into a
string. However, because POSTRequestMock didn't use HTTP encoding at
all (which will convert the 0 into a '0'), it ended up passing an
integer to a function that can't possibly receive one as an argument.
Ideally, we'd just get rid of POSTRequestMock, since it's a hack, and
just do real HTTP requests instead.
But since it's used in a lot of places making doing so somewhat
impractical, we can get past this issue by just making POSTRequestMock
convert integers to strings.
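A minimal sketch of that conversion (its exact placement inside
POSTRequestMock is assumed):

```python
def http_encode_values(post_data: dict) -> dict:
    # Mimic what HTTP encoding would have done: {'num_after': 0}
    # becomes {'num_after': '0'} before reaching the view code.
    return {
        key: str(value) if isinstance(value, int) else value
        for key, value in post_data.items()
    }
```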
Using logging.info() rather than logger.info() meant that our
zulip.soft_deactivation logger configuration (which, in particular,
included not logging to the console) was not active on this log line,
resulting in the `manage.py soft_deactivate_users` cron job sending
emails every time it ran.
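The fix is the standard named-logger pattern, roughly:

```python
import logging

# A named logger inherits the zulip.soft_deactivation configuration
# (in particular, not logging to the console), unlike logging.info(),
# which goes through the root logger.
logger = logging.getLogger("zulip.soft_deactivation")
logger.info("Soft-deactivated %d user(s)", 5)  # example values
```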
Fixes#13750.
Previously, we didn't track opening and closing fences separately,
which led to bugs like not parsing a list that was immediately after
a quoted fence; we treated each ``` as a new fence.
This commit rewrites the function to maintain a stack of currently
open fences. If any of the parent fences is a code fence, we do not
insert a new line before a list.
We also add some test cases specifically to test this behavior with
complexly nested lists.
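A simplified sketch of the stack-based approach (the fence language
names and exact bookkeeping here are illustrative; the real
implementation tracks more state):

```python
def lines_inside_code_fence(lines):
    # Stack of languages for currently open fences; a bare ``` closes
    # the innermost fence, while ```lang (or a lone ``` when nothing
    # is open) opens a new one.
    fence_stack = []
    result = []
    for line in lines:
        stripped = line.strip()
        if stripped.startswith("```"):
            lang = stripped[3:].strip()
            if fence_stack and not lang:
                fence_stack.pop()
            else:
                fence_stack.append(lang or "code")
        # A list needs a separating newline only when no enclosing
        # (parent) fence is a code fence.
        in_code = any(lang not in ("quote", "math") for lang in fence_stack)
        result.append(in_code)
    return result
```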
Fixes#13745.
The desktop otp flow (to be added in the next commits) will want to generate
one-time tokens for the app that will allow it to obtain an
authenticated session. log_into_subdomain will be the endpoint to pass
the one-time token to. Currently it uses signed data as its input
"tokens", which is not compatible with the otp flow, which requires
simpler (and fixed-length) token. Thus the correct scheme to use is to
store the authenticated data in redis and return a token tied to the
data, which should be passed to the log_into_subdomain endpoint.
In this commit, we replace the "pass signed data around" scheme with the
redis scheme, because there's no point having both.
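The scheme looks roughly like this (the key format, TTL, and redis
client usage are assumptions for illustration):

```python
import secrets

# Store the authenticated data under a short random token, then let
# log_into_subdomain redeem the token exactly once.
def store_login_data(redis_client, data, timeout_seconds=15):
    token = secrets.token_hex(16)  # fixed-length, unlike signed payloads
    redis_client.setex("log_into_subdomain:" + token, timeout_seconds, data)
    return token

def redeem_login_token(redis_client, token):
    key = "log_into_subdomain:" + token
    data = redis_client.get(key)
    redis_client.delete(key)  # one-time use
    return data  # None if expired or already used
```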
This extracts a function for computing show_invites and
show_add_streams, for better readability and testability.
This commit was substantially cleaned up by tabbott.
This legacy cross-realm bot hasn't been used in several years, as far
as I know. If we wanted to re-introduce it, I'd want to implement it
as an embedded bot using those common APIs, rather than the totally
custom hacky code used for it that involves unnecessary queue workers
and similar details.
Fixes#13533.
If an email is sent with the .prefer-html option, but it has no html
body, it's better to fall back to plaintext content instead of treating
it as a user error.
Closes#13484.
These options tell Zulip whether to prefer the plaintext or html version
of the email message. prefer-text is the default behavior, so including
the option doesn't change anything as of now, but we're adding it to
prepare to potentially change the default behavior in the future.
As we add more address options, which will have different behavior than
simply setting option_name=True, we need to migrate this subsystem to
something that better supports more complex logic and will allow
encapsulating it, instead of needing to be put all over the
decode_email_address function.
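For illustration only (the dict shape is an assumption, and only
prefer-text/prefer-html exist at this point), the direction is
something like:

```python
def parse_address_options(option_parts):
    # Map option strings from the address (e.g. "prefer-html") into a
    # structured dict, so future options carrying values or more
    # complex behavior have an obvious, encapsulated home.
    options = {}
    for part in option_parts:
        if part == "prefer-html":
            options["prefer_html"] = True
        elif part == "prefer-text":
            options["prefer_html"] = False
        else:
            raise ValueError("Unknown address option: " + part)
    return options
```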
This essentially unused legacy variable was causing Zulip to query the
database at import time, which is generally not something we aim to
do.
Combined with the issue fixed in the previous commit, this variable
resulted in test-backend providing an unhelpful crash when provision
hadn't updated the unit testing database.
Since the intent of our testing code was clearly to clear this cache
for every test, there's no reason for it to be a module-level global.
This allows us to remove an unnecessary import from test_runner.py,
which in combination with DEFAULT_REALM's definition was causing us to
run models code before running migrations inside test-backend.
(That bug, in turn, caused test-backend's check for whether migrations
need to be run to happen sadly after trying to access a Realm,
triggering a test-backend crash if the Realm model had changed since
the last provision).
Due to a known but unfixed bug in the Python standard library’s
urllib.parse module (CVE-2015-2104), a crafted URL could bypass the
validation in the previous patch and still achieve an open redirect.
https://bugs.python.org/issue23505
Switch to using django.utils.http.is_safe_url, which already contains
a workaround for this bug.
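Usage is roughly the following sketch (the hostname is a placeholder):

```python
from django.utils.http import is_safe_url

# Unlike hand-rolled urllib.parse validation, is_safe_url already
# contains a workaround for the CVE-2015-2104 parsing bug.
def validate_next_url(next_url):
    return is_safe_url(url=next_url, allowed_hosts={"chat.example.com"})
```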
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Zulip has had a small use of WebSockets (specifically, for the code
path of sending messages, via the webapp only) since ~2013. We
originally added this use of WebSockets in the hope that the latency
benefits of doing so would allow us to avoid implementing a markdown
local echo; they did not. Further, HTTP/2 may have eliminated the
latency difference we hoped to exploit by using WebSockets in any
case.
While we’d originally imagined using WebSockets for other endpoints,
there was never a good justification for moving more components to the
WebSockets system.
This WebSockets code path had a lot of downsides/complexity,
including:
* The messy hack involving constructing an emulated request object to
hook into doing Django requests.
* The `message_senders` queue processor system, which increases RAM
needs and must be provisioned independently from the rest of the
server.
* A duplicate check_send_receive_time Nagios test specific to
WebSockets.
* The requirement for users to have their firewalls/NATs allow
WebSocket connections, and a setting to disable them for networks
where WebSockets don’t work.
* Dependencies on the SockJS family of libraries, which has at times
been poorly maintained, and periodically throws random JavaScript
exceptions in our production environments without a deep enough
traceback to effectively investigate.
* A total of about 1600 lines of our code related to the feature.
* Increased load on the Tornado system, especially around a Zulip
server restart, and especially for large installations like
zulipchat.com, resulting in extra delay before messages can be sent
again.
As detailed in
https://github.com/zulip/zulip/pull/12862#issuecomment-536152397, it
appears that removing WebSockets moderately increases the time it
takes for the `send_message` API query to return from the server, but
does not significantly change the time between when a message is sent
and when it is received by clients. We don’t understand the reason
for that change (suggesting the possibility of a measurement error),
and even if it is a real change, we consider that potential small
latency regression to be acceptable.
If we later want WebSockets, we’ll likely want to just use Django
Channels.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Fixes#13416
We used to search only one level deep through the MIME structure,
and thus would miss attachments that were nested deeper (which can
happen with some email clients). We can take advantage of message.walk()
to iterate through each MIME part.
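A sketch of that iteration (the attachment test here is simplified):

```python
import email.message

# walk() traverses the entire MIME tree, so attachments nested inside
# e.g. multipart/mixed within multipart/alternative are found too.
def iter_attachment_parts(message: email.message.Message):
    for part in message.walk():
        if part.get_content_maintype() == "multipart":
            continue  # containers themselves aren't attachments
        if part.get_filename() is not None:
            yield part
```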
Previously, if you tried to invite a user whose account had been
deactivated, we didn't provide a clear path forward for reactivating
the user, which was confusing.
We fix this by plumbing through to the frontend the information that
there is an existing user account with that email address in this
organization, but that it's deactivated. For administrators, we
provide a link for how to reactivate the user.
Fixes#8144.
This experimental setting disables sending private messages in Zulip
in a crude way (i.e. users get an error when they try to send one).
It makes no effort to adjust the UI to avoid advertising the idea of
sending private messages.
Fixes#6617.
We should take advantage of the recipient field being denormalized into
the Stream model. We don't need to make queries to figure out a stream's
recipient id, so we take advantage of that to eliminate some of
those redundant queries and simplify StreamRecipientMap.
This improves the approach of creating multiple parallel processes by
using subprocess.Popen() instead of run_parallel() and
subprocess.call() while exporting an organization's message
history. This prevents forking twice for each individual subprocess.
While this has some performance benefit, the main reason to fix this
is that it fixes an issue with the data export web UI introduced in an
earlier commit (related to detecting when the run_parallel forks
exited).
Fixes#12904.
process_missed_message did nothing other than calling
send_to_missed_message_address with the same arguments, so there's no
reason to have these as separate functions.
Addresses point 1 of #13533.
MissedMessageEmailAddress objects get tied to the specific message that
was missed by the user. A useful benefit of that is that an email sent
to that address will handle topic changes - if the message that was
missed gets its topic changed, the email response will get posted under
the new topic, while in the old model it would get posted under the
old topic, which could potentially be confusing.
Migrating redis data to this new model is a bit tricky, so the migration
code has comments explaining some of the compromises made there, and
test_migrations.py tests handling of the various possible cases that
could arise.
Preparatory commit for making the email mirror use the database instead
of redis for missed message addresses.
This model will represent missed message email addresses, which
currently have their data stored in redis.
The redis data will be converted and migrated into these models and
the email mirror will start using them in the main commit.
For cross realm bots, explicitly set bot_owner_id
to None. This makes it clear that the cross realm
bots have no owner, whereas before it could be
misdiagnosed as the server forgetting to set the
field.
Model classes fetched through apps.get_model don't get methods or class
attributes. It's not feasible to add them to all these objects in
use_db_models, but Recipient.PERSONAL etc. are worth setting, since
doing that increases the range of functions that can successfully be
imported and called in test_migrations.py.
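Concretely, that looks something like this inside use_db_models (where
`apps` is the migration state's app registry; the re-attachment site is
assumed):

```python
# Re-attach the class-level type constants that apps.get_model strips;
# the numeric values match the Recipient type codes in the schema.
Recipient = apps.get_model("zerver", "Recipient")
Recipient.PERSONAL = 1
Recipient.STREAM = 2
Recipient.HUDDLE = 3
```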
Fixes#13504.
This commit is purely an improvement in error handling.
We used to not do any validation on keys before passing them to
memcached, which meant for invalid keys, memcached's own key
validation would throw an exception. Unfortunately, the resulting
error messages are super hard to read; the traceback structure doesn't
even show where the call into memcached happened.
In this commit we add validation to all the basic cache_* functions, and
appropriate handling in their callers.
We also add a lot of tests for the new behavior, which has the nice
effect of giving us decent coverage of all these core caching
functions which previously had been primarily tested manually.
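A sketch of the kind of validation added (the exception name is an
assumption; the limits come from the memcached protocol, which rejects
keys longer than 250 bytes or containing whitespace/control
characters):

```python
MEMCACHED_MAX_KEY_LENGTH = 250

class InvalidCacheKeyError(Exception):
    pass

def validate_cache_key(key: str) -> None:
    # Fail with a readable error here, rather than deep inside the
    # memcached client where the traceback is hard to interpret.
    if len(key) > MEMCACHED_MAX_KEY_LENGTH:
        raise InvalidCacheKeyError("Cache key too long: " + key)
    if any(ord(char) < 33 or ord(char) == 127 for char in key):
        raise InvalidCacheKeyError("Invalid characters in cache key: " + key)
```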
This change should prevent test flakes, plus
it's more deterministic behavior for clients,
who will generally comma-join the ids into
a key for their internal data structures.
I was able to verify test coverage on this
by making the sort reversed, which would
cause test_huddle_send_message_events to
fail.
This should be about 4 times faster, saving something like half a
millisecond on each stream of 10000 subscribers.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
In 3892a8afd8, we restructured the
system for managing uploaded files to a much cleaner model where we
just do parsing inside bugdown.
That new model had potentially buggy handling of cases around both
relative URLs and URLS starting with `realm.host`.
We address this by further rewriting the handling of attachments to
avoid regular expressions entirely, instead relying on urllib for
parsing, and having bugdown output `path_id` values, so that there's
no need for any conversions between formats outside bugdown.
The check_attachment_reference_change function for processing message
updates is significantly simplified in the process.
The new check on the hostname has the side effect of requiring us to
fix some previously weird/buggy test data.
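The regex-free parsing works roughly like this (host handling is
simplified and the helper name is illustrative):

```python
from urllib.parse import urlparse

# Accept relative URLs and URLs on the realm's own host, and return
# the path_id the attachment system stores; foreign links yield None.
def url_to_path_id(url, realm_host):
    parsed = urlparse(url)
    if parsed.netloc not in ("", realm_host):
        return None  # points at some other server
    prefix = "/user_uploads/"
    if not parsed.path.startswith(prefix):
        return None
    return parsed.path[len(prefix):]
```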
Co-Author-By: Anders Kaseorg <anders@zulipchat.com>
Co-Author-By: Rohitt Vashishtha <aero31aero@gmail.com>
This closes an open redirect vulnerability, one case of which was
found by Graham Bleaney and Ibrahim Mohamed using Pysa.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Our open graph parser logic sloppily mixed data obtained by parsing
open graph properties with trusted data set by our oembed parser.
We fix this by consistently using our explicit whitelist of generic
properties (image, title, and description) in both places where we
interact with open graph properties. The fixes are redundant with
each other, but doing both helps in making the intent of the code
clearer.
The issue fixed here was originally reported as an XSS vulnerability
in the upcoming Inline URL Previews feature found by Graham Bleaney
and Ibrahim Mohamed using Pysa. The recent Oembed changes close that
vulnerability, but this change is still worth doing to make the
implementation do what it looks like it does.
For new user onboarding, it's important for it to be easy to verify
that Zulip's mobile push notifications work without jumping through
hoops or potentially making mistakes. For that reason, it makes sense
to toggle the notification defaults for new users to the more
aggressive mode (ignoring whether the user is currently actively
online); they can set the more subtle mode if they find that the
notifications are annoying.
Previously, these accesses used e.g. .select_related("realm"), which
was the only foreign key on the Stream model. Since the intent in
these code paths is to attach the related models for efficient access,
we should just do that for all related models, including Recipient.
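In ORM terms, the change is roughly the following sketch (model access
simplified; `stream_id` is a placeholder):

```python
# Before: only the realm foreign key was prefetched.
stream = Stream.objects.select_related("realm").get(id=stream_id)

# After: with no arguments, select_related() follows all non-null
# foreign keys, so stream.recipient doesn't trigger a second query.
stream = Stream.objects.select_related().get(id=stream_id)
```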
With the recipient field being denormalized into the UserProfile and
Streams models, all current uses of get_stream_recipients can be done
more efficiently, by simply checking the .recipient_id attribute on the
appropriate objects.
With the recipient field being denormalized into the UserProfile and
Streams models, all current uses of bulk_get_recipients can be done more
efficiently, by simply checking the .recipient_id attribute on the
appropriate objects.
The flow in recipient_for_user_profiles previously worked by doing
validation on UserProfile objects (returning a list of IDs), and then
using that data to look up the appropriate Recipient objects.
For the case of sending a private message to another user, the new
UserProfile.recipient column lets us avoid the query to the Recipient
table if we move the step of reducing down to user IDs to only occur
in the Huddle code path.
Previously, if the user had interacted with the Zulip mobile app in
the last ~140 seconds, it's likely the mobile app had sent presence
data to the Zulip server, which in turn means that the Zulip server
might not send that user mobile push notifications (or email
notifications) about new messages for the next few minutes.
The email notifications behavior is potentially desirable, but the
push notifications behavior is definitely not -- a private message
reply to something you sent 2 minutes ago is definitely something you
want a push notification for.
This commit partially addresses that issue, by ignoring presence data
from the ZulipMobile client when determining whether the user is
currently engaging with a Zulip client (essentially, we're only
considering desktop activity as something that predicts the user is
likely to see a desktop notification or is otherwise "online").
This removes the last of the messy use of regular expressions outside
bugdown to make decisions on whether a message contains an attachment
or not. Centralizing questions about links to be decided entirely
within bugdown (rather than doing ad-hoc secondary parsing elsewhere)
makes the system cleaner and more robust.
This commit wraps up the work to remove basic regex based parsing
of messages to handle attachment claiming/unclaiming. We now use
the more dependable Bugdown processor to find potential links and
only operate upon those links instead of parsing the full message
content again.
Previously, we would naively set has_attachment just by searching
the whole message for strings like `/user_uploads/...`. We now
avoid running do_claim_attachments for messages that obviously
do not have an attachment in them, which we previously ran on.
For example: attachments in codeblocks or
attachments that otherwise do not match our link syntax.
The new implementation runs that check on only the urls that
bugdown determines should be rendered. We also refactor some
Attachment tests in test_messages to test this change.
The new method is:
1. Create a list of potential_attachment_urls in Bugdown while rendering.
2. Loop over this list in do_claim_attachments for the actual claiming.
For saving:
3. If we claimed an attachment, set message.has_attachment to True.
For updating:
3. If claimed_attachment != message.has_attachment: update has_attachment.
We do not modify the logic for 'unclaiming' attachments when editing.
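A condensed sketch of the flow described above (render_markdown and
the potential_attachment_urls attribute stand in for the real bugdown
plumbing; only do_claim_attachments is named in the text):

```python
def render_and_claim_attachments(message):
    rendering = render_markdown(message.content)   # bugdown pass
    urls = rendering.potential_attachment_urls     # step 1
    claimed = do_claim_attachments(message, urls)  # step 2
    if message.has_attachment != claimed:          # steps 3/3'
        message.has_attachment = claimed
        message.save(update_fields=["has_attachment"])
```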
add_a, add_oembed_data and add_embed are only called by
InlineInterestingLinksProcessor and this commit allows
these methods to access self.markdown object.
This commit has a side-effect that we also now allow mixed lists,
but they have different syntax from the commonmark implementation
and our marked output. For example, without the closing li tags:
Input      Bugdown      Marked
----------------------------------------------
           <ul>
- Hello    <li>Hello    <ul><li>Hello</ul>
+ World    <li>World    <ul><li>World
+ Again    <li>Again    <li>Again</ul>
* And      <li>And      <ul><li>And
* Again    <li>Again    <li>Again</ul>
           </ul>
The bugdown render is in line with what a user in #13447 requests.
Fixes#13477.
Adds required API and front-end changes to modify and read the
wildcard_mentions_notify field in the Subscription model.
It includes front-end code to add the setting to the user's "manage
streams" page. This setting will be greyed out when a stream is muted.
The PR also includes back-end code to add the setting to the initial
state of a subscription.
New automated tests were added for the API, events system and front-end.
In manual testing, we checked that modifying the setting in the front end
persisted the change in the Subscription model. We noticed the notifications
were not behaving exactly as expected in manual testing; see
https://github.com/zulip/zulip/issues/13073#issuecomment-560263081 .
Tweaked by tabbott to fix real-time synchronization issues.
Fixes: #13429.
Previously, get_recent_private_messages could take 100ms-1s to run,
contributing a substantial portion of the total runtime of `/`.
We fix this by taking advantage of the recent denormalization of
personal_recipient into the UserProfile model, allowing us to avoid
the complex join with Recipient that was previously required.
The change that requires additional commentary is the change to the
main, big SQL query:
1. We eliminate the UserMessage table from the query, because the condition
m.recipient_id=%(my_recipient_id)d
implies m is a personal message to the user being processed - so joining
with UserMessage to check user_profile_id and flags&2048 (which
checks that the message is private) is redundant.
2. We only need to join the Message table with UserProfile
(on sender_id) and get the sender's personal_recipient_id from their
UserProfile row.
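The simplified query then has roughly this shape (table and column
names assumed from the description above):

```python
query = """
    SELECT m.id, sender_profile.personal_recipient_id
    FROM zerver_message m
    JOIN zerver_userprofile sender_profile ON m.sender_id = sender_profile.id
    WHERE m.recipient_id = %(my_recipient_id)d
    ORDER BY m.id DESC
    LIMIT 1000
"""
```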
Fixes#13437.
This adds foreign keys to the corresponding Recipient object on the
UserProfile and Stream tables, a denormalization intended to improve
performance as this is a common query.
In the migration for setting the field correctly for existing users,
we do a direct SQL query (because Django 1.11 doesn't provide any good
method for doing it properly in bulk using the ORM).
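The backfill looks roughly like this (a sketch only; Recipient.PERSONAL
is type 1, and the exact SQL is an assumption):

```python
def backfill_personal_recipients(apps, schema_editor):
    # Set UserProfile.recipient_id from the matching personal Recipient
    # row in one bulk PostgreSQL UPDATE ... FROM, rather than the ORM.
    schema_editor.execute("""
        UPDATE zerver_userprofile
        SET recipient_id = zerver_recipient.id
        FROM zerver_recipient
        WHERE zerver_recipient.type = 1
          AND zerver_recipient.type_id = zerver_userprofile.id
    """)
```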
A consequence of this change to the model is that a bit of code needs
to be added to the functions responsible for creating new users (to
set the field after the Recipient object gets created). Fortunately,
there's only a few code paths for doing that.
Also an adjustment is needed in the import system - this introduces a
circular relation between Recipient and UserProfile. The field cannot be
set until the Recipient objects have been created, but UserProfiles need
to be created before their corresponding Recipients. We deal with this
by first importing UserProfiles the same way as before, but we leave the
personal_recipient field uninitialized. After creating the Recipient
objects, we call a function to set the field for all the imported users
in bulk.
A similar change is made for managing Stream objects.
We used to specify the securityScheme for each REST operation separately.
This is unnecessary, as the securityScheme can be specified at the root
level and is automatically applied to all operations. This also prevents
us from accidentally not specifying the securityScheme for some operations,
as was the case for the /users/me/subscriptions PATCH endpoint. The
root-level securityScheme can also be overridden at the operation level
when necessary.
swagger.io/docs/specification/authentication/#security
We use the plumbing introduced in a previous commit to now raise
PushNotificationBouncerRetryLaterError in send_to_push_bouncer in case
of issues with talking to the bouncer server. That's a better way of
dealing with the errors than the previous approach of returning a
"failed" boolean, which generally wasn't checked in the code anyway and
did nothing.
The PushNotificationBouncerRetryLaterError exception will be nicely
handled by queue processors to retry sending again, and due to being a
JsonableError, it will also communicate the error to API users.
We add PushNotificationBouncerRetryLaterError as an exception to signal
an error occurred when trying to communicate with the bouncer and it
should be retried. We use JsonableError as the base class, because this
signal will need to work in two roles:
1. When the push notification was being issued by the queue worker
PushNotificationsWorker, it will signal to the worker to requeue the
event and try again later.
2. The exception will also possibly be raised (this will be added in the
next commit) on codepaths coming from a request to an API endpoint (for
example to add a token, to users/me/apns_device_token). In that case,
it'll be needed to provide a good error to the API user - and basing
this exception on JsonableError will allow that.
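A hedged sketch of the two pieces together (the import path, status
codes, and message text are illustrative):

```python
import requests

from zerver.lib.request import JsonableError  # import path assumed

class PushNotificationBouncerRetryLaterError(JsonableError):
    pass

def send_to_push_bouncer(method, url, post_data):
    try:
        response = requests.request(method, url, data=post_data, timeout=30)
    except requests.RequestException:
        # Connection problems become a retryable, JSON-friendly error.
        raise PushNotificationBouncerRetryLaterError(
            "Problem connecting to the push notification bouncer")
    if response.status_code >= 500:
        # The bouncer is having a problem; the caller should retry later.
        raise PushNotificationBouncerRetryLaterError(
            "Received 500 from the push notification bouncer")
    return response
```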
This is a performance optimization, since we can avoid doing work
related to wildcard mentions in the common case that the message can't
have any. We also add a unit test for adding wildcard mentions in a
message edit.
We also switch the underlying extract_mention_text method to use
a regular for loop, as well as make the related methods return
tuples of (names, is_wildcard). This abstraction is hidden from the
MentionData callers behind mention_data.message_has_wildcards().
Concerns #13430.
This simple change switches us to take advantage of the
server-maintained data for the pm_conversations system we implemented
originally for mobile use.
This should make it a lot more convenient to find historical private
message conversations, since one can effectively scroll infinitely
into the history.
We'll need to do some profiling of the backend after this is deployed
in production; it's possible we'll need to add some database indexes,
denormalization, or other optimizations to avoid making loading the
Zulip app significantly slower.
Fixes#12502.
The expected signatures for these callbacks seem to have changed
somewhere in https://github.com/pika/pika/pull/1002.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This change makes it possible for users to control the notification
settings for wildcard mentions as a separate control from PMs and
direct @-mentions.
This includes adding a new endpoint to the push notification bouncer
interface, and code to call it appropriately after resetting a user's
personal API key.
When we add support for a user having multiple API keys, we may need
to add an additional key here to support removing keys associated with
just one client.
For organizations with EMAIL_ADDRESS_VISIBILITY_ADMINS, we were using
the wrong email address in the notice telling the user how to log in in
the future.
Then, find and fix a predictable number of previous misuses.
With a small change by tabbott to preserve backwards compatibility for
sending `yes` for the `forged` field.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
The original/legacy emoji reactions endpoints made use of HTTP PUT and
didn't have an API that could correctly handle situations where the
emoji names change over time. We stopped using the legacy endpoints
some time ago, so we can remove them now.
This requires straightforward updates to older tests that were still
written against the legacy API.
Fixes#12940.
The function only used the user's realm anyway, so this is a cleaner
API.
This should also make it more convenient to permanently delete
messages manually, since one doesn't have to fetch a random user in
the realm in order to delete a message using the management shell.
No functional change.
This fixes two regressions in 1946692f9a.
The first bug was actually introduced much earlier, namely that we
were not sending a `bot_owner_id` field at all for bot users without
an owner. The correct behavior would have been to send `None` for the
owner field.
The second bug was simply that we needed to update the webapp to look
for the `bot_owner_id` field, rather than an old email-address format
`bot_owner` field.
Thanks to Vinit Singh for reporting this bug.
The state of the FAKELDAP setup for the dev env has fallen behind the
backend changes and updates to fakeldap (which implemented
SCOPE_ONELEVEL searches), as well as having some other minor issues.
This commit restores it to a working state, and now all three config modes
work properly.
This makes it possible to simulate messages sent by specific clients,
rather than just "test suite". Relevant for sending messages where
`message.sent_by_human()` is True.
Rather than subtracting sets in multiple places, it's simpler/cleaner
to just check which users are in the set when processing them.
This refactoring will be helpful when we extend the get_recipient_info
logic to handle wildcard mentions as well.
django_to_ldap_username is now able to find the correct ldap username in
every supported type of configuration, so we can remove these
conditionals and use django_to_ldap_username in a straightforward
manner.
Previously, we were using user_profile.email rather than
user_profile.delivery_email in all calculations involving Gravatar
URLs, which meant that all organizations with the new
EMAIL_ADDRESS_VISIBILITY_ADMINS setting enabled had useless gravatars
based on the `user15@host.domain` type fake email addresses we
generate for the API to refer to users.
The fix is to convert these calculations to use the user's
delivery_email. Some refactoring is required to ensure the data is
passed through to the parts of the codebase that do the check;
fortunately, our automated tests of schemas are effective in verifying
that the new `sender_delivery_email` field isn't visible to the API.
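For context, a Gravatar URL embeds an MD5 hash of the lowercased email
address, so hashing the fake address can never match a real Gravatar
account; a minimal sketch:

```python
import hashlib

# Hashing the delivery_email (rather than the user15@host.domain fake
# email) makes the hash correspond to the user's actual Gravatar.
def gravatar_hash(delivery_email: str) -> str:
    return hashlib.md5(delivery_email.lower().encode("utf-8")).hexdigest()
```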
Fixes#13369.
Apparently, the refactor months ago that introduced finalize_payload
wasn't applied to the outgoing webhook code path, resulting in message
dicts with an unexpected format with no avatar_url and some extra
values that were intended to be internal details not relevant to
external clients.
Because this API is not widely used, we expect there to be little to
no impact of converting this back to matching the `get_messages`
interface, as it once was and has always been intended to be.
The one somewhat tricky detail is that we include both the `content`
and `rendered_content` fields, rather than asking the client to pick
which they want via the `apply_markdown` flag, because there is no
place for the client to configure that setting.
The code comment explains this issue in some detail, but essentially
in Kubernetes and Docker Swarm systems, the container overlayer
network has a relatively short TCP idle lifetime (about 15 minutes),
which can lead to it killing the connection between Tornado and
RabbitMQ.
We fix this by setting a TCP keepalive on that connection shorter than
15 minutes.
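Assuming pika's tcp_options support, the fix looks roughly like this
(the keepalive interval is illustrative; anything well under the
~15-minute overlay-network idle timeout works):

```python
import pika

# Enable TCP keepalive on the AMQP connection between Tornado and
# RabbitMQ so the container overlay network doesn't kill it as idle.
parameters = pika.ConnectionParameters(
    host="localhost",
    tcp_options={"TCP_KEEPIDLE": 60 * 10},
)
connection = pika.BlockingConnection(parameters)
```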
Fixes#10776.
Most of the failures were due to parameters that are not intended to
be used by third-party code, so the correct fix for those was to set
intentionally_undocumented=True.
Fixes#12969.
MigrationsTestCase is intentionally omitted from this, since migration
tests are different in nature, and whatever setUp() ZulipTestCase may
do in the future, MigrationsTestCase may not necessarily want to
replicate.
new_name and description params should be valid JSON
strings. The format of these params is marked as
json so that the curl example generator can convert
them into JSON strings.