Obviously, this file will soon grow--this
was the easiest way to start without introducing
noise into other commits.
It will soon be structurally similar
to frontend_tests/node_tests/lib/events.js--I
have some ideas there. But this should also
help for things like API docs.
This change makes our handling of youtube-url previews consistent
with how we handle our inline images. This allows the previews to
render next to the paragraph that links to the youtube video.
Follow-up to PR #15773.
This commit rewrites the way addresses are collected. If
the header with the address is not an AddressHeader (for instance,
Delivered-To and Envelope-To), we take its string representation.
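A minimal sketch of the idea, using the stdlib email API (the function name here is illustrative, not Zulip's actual helper):

```
from email.headerregistry import AddressHeader
from email.message import EmailMessage
from typing import List

def collect_recipient_addresses(message: EmailMessage, header_name: str) -> List[str]:
    addresses: List[str] = []
    for header in message.get_all(header_name, []):
        if isinstance(header, AddressHeader):
            # Structured headers (To, Cc, ...) expose parsed .addresses.
            addresses.extend(a.addr_spec for a in header.addresses)
        else:
            # Delivered-To/Envelope-To parse as unstructured headers with
            # no .addresses attribute, so take the string representation.
            addresses.append(str(header))
    return addresses
```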
Fixes: #15864 ("Error in email_mirror - _UnstructuredHeader has no attribute addresses").
This commit re-adds the integration for canarytokens.org, now separate
from the primary Thinkst integration.
Signed-off-by: David Wood <david@davidtw.co>
This commit fixes the Thinkst Canary integration which - based on the
schema in upstream documentation - incorrectly assumed that some fields
would always be sent, which meant that the integration would fail. In
addition, this commit adjusts support for canarytokens to only support
the canarytoken schema with Thinkst Canaries (not Thinkst's
canarytokens.org).
Signed-off-by: David Wood <david@davidtw.co>
A few major themes here:
- We remove short_name from UserProfile
and add the appropriate migration.
- We remove short_name from various
cache-related lists of fields.
- We allow import tools to continue to
write short_name to their export files,
and then we simply ignore the field
at import time.
- We change functions like do_create_user,
create_user_profile, etc.
- We keep short_name in the /json/bots
API. (It actually gets turned into
an email.)
- We don't modify our LDAP code much
here.
We generally want to avoid having two sibling test
suites depend on each other, unless there's a real
compelling reason to share code. (And if there is
code to share, we can usually promote it to either
test_helpers or ZulipTestCase, as I did here.)
This commit is also prep for the next commit, where
I try to simplify all of the helpers in EmojiReactionBase.
Especially now that we have f-strings, it is usually
better to just call api_post explicitly than to
obscure the mechanism with thin wrappers around
api_post. Our url schemes are pretty stable, so it's
unlikely that the helpers are actually gonna prevent
future busywork.
This issue isn't something a system administrator needs to take action
on -- it's a likely minor logic bug around organization
administrators moving topics between streams.
As a result, it shouldn't send error emails to administrators.
This is a hacky fix to avoid spoiler content leaking in emails. The
general idea here is to tell people to open Zulip to view the actual
message in full.
We create a mini-markdown parser here that strips away the fence content
that has the 'spoiler' tag for the text emails.
Our handling of html emails is much better in comparison where we can
use lxml to parse the spoiler blocks.
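A sketch of what such a mini-parser could look like (the regex and function name are illustrative, not the actual implementation):

```
import re

SPOILER_FENCE_RE = re.compile(
    r"^```[ \t]*spoiler[ \t]*(?P<header>[^\n]*)\n"
    r"(?P<content>.*?)"
    r"^```[ \t]*$",
    re.DOTALL | re.MULTILINE,
)

def strip_spoilers_for_text_email(body: str) -> str:
    def replace(m: "re.Match[str]") -> str:
        header = m.group("header").strip()
        # Keep the spoiler header, drop the hidden content.
        return f"{header} (...)" if header else "(...)"
    return SPOILER_FENCE_RE.sub(replace, body)
```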
We include tests for the new implementation, and avoid churning the
codebase too much, so this can be easily reverted when we are able to
re-enable the feature.
We now do something sensible for spoilers in notifications. A message
like:
```spoiler Luke's father is
Vader. Don't tell anyone else.
```
would be rendered as:
Luke's father is (...)
If the push_notification flag for the UserMessage is already active,
we don't send any push notification to the user. This may
happen due to race conditions.
Added and fixed test cases affected by this.
This is similar to our behavior with image previews, and helps
reduce clutter in the final rendered html.
We add the string 'Tweet: ' to our existing tests so those tests
remain the same.
This commit makes our handling of twitter previews consistent with
how we handle our inline images so that tweets render next to the
paragraph that links to the tweet.
We decouple the logic of insertion rules for inline links from
image preview logic. Now, we can use this same logic for other
kinds of link previews as well.
We also remove the post_process_data option.
The sanity check is just overkill at this point,
since the mechanism to find streams is very
direct due to a recent commit.
Before this change we would only export streams
that had actual subscribers, which is usually
harmless, but it was mostly a relic of a one
time migration that we did when we were cleaning
up some dirty data in some of our very early
databases (circa 2016).
Now we work down the table hierarchy in a
more natural way:
- get Streams in Realm
- get Recipients matching above Streams
- get Subscriptions matching above Recipients
Note that for per-user exports, I kept the
same logic (users -> subscriptions -> recipients ->
streams) we had before.
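In ORM terms, the realm-wide traversal looks roughly like this (a simplified sketch, not the actual Config machinery):

```
from zerver.models import Realm, Recipient, Stream, Subscription

def realm_stream_hierarchy(realm: Realm):
    streams = Stream.objects.filter(realm=realm)
    recipients = Recipient.objects.filter(
        type=Recipient.STREAM,
        type_id__in=streams.values_list("id", flat=True),
    )
    subscriptions = Subscription.objects.filter(recipient__in=recipients)
    return streams, recipients, subscriptions
```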
One subtle detail here is that we make our
final Config blocks--which build the final
version of Recipient/Subscription--now hang
off of realm_config.
Fixes #15146.
This particular commit has been a long time coming. For reference,
!avatar(email) was an undocumented syntax that simply rendered an
inline 50px avatar for a user in a message, essentially allowing
you to create a user pill like:
`!avatar(alice@example.com) Alice: hey!`
---
Reimplementation
If we decide to reimplement this or a similar feature in the future,
we could use something like `<avatar:userid>` syntax which is more
in line with creating links in markdown. Even then, it would not be
a good idea to add this instead of supporting inline images directly.
Since any use cases of such a syntax are in automation, we do not need
to make it user-friendly, and something like the following is a better
implementation that doesn't need a custom syntax:
`![avatar for Alice](/avatar/1234?s=50) Alice: hey!`
---
History
We initially added this syntax back in 2012 and it was 'deprecated'
from the get go. Here's what the original commit had to say about
the new syntax:
> We'll use this internally for the commit bot. We might eventually
> disable it for external users.
We eventually did start using this for our github integrations in 2013
but since then, those integrations have been neglected in favor of
our GitHub webhooks which do not use this syntax.
When we copied `!gravatar` to add the `!avatar` syntax, we also noted
that we want to deprecate the `!gravatar` syntax entirely - in 2013!
Since then, we haven't advertised either of these syntaxes anywhere
in our docs, and the only two places where this syntax remains are
our game bots that could easily do without these, and the git commit
integration that we have deprecated anyway.
We do not have any evidence of someone asking about this syntax on
chat.zulip.org when developing an integration, and rightfully so: only
the people who work on Zulip (and specifically, markdown) are likely
to stumble upon it and try it out.
This is also the only piece of code due to which we had to look up
emails -> userid mapping in our backend markdown. By removing this,
we entirely remove the backend markdown's dependency on user emails
to render messages.
---
Relevant commits:
- Oct 2012, Initial commit c31462c278
- Nov 2013, Update commit bot 968c393826
- Nov 2013, Add avatar syntax 761c0a0266
- Sep 2017, Avoid email use c3032a7fe8
- Apr 2019, Remove from webhook 674fcfcce1
Log RealmAuditLog in do_set_realm_property and do_remove_realm_domain.
Tests for the changes are written in test_events, because that saves
duplicating code from test_change_realm_property.
Added a new event type, STREAM_CREATED, to AbstractRealmAuditLog.
Since we ultimately create streams in the create_stream_if_needed
function in zerver/lib/streams.py, we log the realm audit entry there.
Passed acting_user when the create_stream_if_needed or ensure_stream
function is called.
Added tests in test_audit_log.
We had been using the !time() syntax for timestamps so far. Since it's
an unreleased feature, we can make changes without affecting many
people.
Fixes #15442.
Prior to this commit, whenever convert was imported from
zerver.lib.markdown, it was aliased as markdown_convert for readability.
This commit renames the convert function to markdown_convert so that it
can be imported directly without aliasing and without compromising
readability.
This fixes our triggering a RabbitMQ event to send a push notification
to remove the empty set of push notifications, resulting from not
using the correct data structure to determine which message IDs to look at.
This was causing a lot of visible exceptions when running
the `test_messages.py` test suite.
For users who are unsubscribed from the new stream but are in
the old stream, we delete the UserMessage.
We send the delete_message event only to guest users,
who have completely lost access to the moved messages; for other
users we send the normal update_message event, which moves
the messages to the new unsubscribed stream, which
would otherwise look broken to the
user without reloading the webpage.
Our previous OpenAPI schema validator that we implemented ourselves
was useful training wheels for our understanding OpenAPI properly, and
was mostly correct. But given that we've finally reached the point
where our OpenAPI file accurately describes the API, it makes sense to
switch to use an official OpenAPI validator. We lose some ability to
do exclude rules for particular elements, but those were primarily
important for us when we had a lot of them.
As part of this change, we need to add `additionalProperties: false`
for all of our dictionaries/objects where we've documented every
parameter; otherwise the OpenAPI schema checker won't know that we
expect every parameter to be documented.
This was hiding an actual type error in test_cache: a mismatch between
the object ID type, which is str, and the default id_fetcher, which
returns int.
Mypy’s insufficient support for default generic arguments basically
means we can’t use them without a lot of overloading, and there are
not enough callers here to justify that.
https://github.com/python/mypy/issues/3737
We avoid this being super messy where the code calls this by adding
some less generic wrappers for generic_bulk_cached_fetch.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
According to @showell:
> All the slow decorators can die. That was a failed experiment of
> mine from 2014 days. I have meaning to kill them for a couple years
> now. I wrote this with the best of intentions, but I believe it's
> now just cruft. We never made a "fast" mode, for one. And we kept
> writing more and more slow tests, haha.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We remove support for the old clients which required an event for
each message to clear notification.
This is justified since it has been around 1.5 years since we started
supporting the bulk operation (and so essentially nobody is using a
mobile app version so old that it doesn't support the batched
approach) and the unbatched approach has a maintenance and reliability
cost.
This commit removes the bugdown alias and does proper imports from the
markdown module. It also removes the word bugdown and replaces it with
markdown in comments.
This commit is part of a series of commits aimed at renaming bugdown to
markdown.
There is still some miscellaneous cleanup that
has to happen for things like analytics queries
and dead code in node tests, but this should
remove the main use of pointers in the backend.
(We will also still need to drop the DB field.)
Rename ZEPHYR_MIRROR_BUGDOWN_KEY to ZEPHYR_MIRROR_MARKDOWN_KEY and
DEFAULT_BUGDOWN_KEY to DEFAULT_MARKDOWN_KEY.
This commit is part of a series of commits aimed at renaming bugdown to
markdown.
This commit changes the class name of BugdownListPreprocessor to
MarkdownListPreprocessor. It also changes the corresponding references
in tests.
This is part of a series of commits which aim at renaming bugdown to
markdown.
This commit is the first of a few commits which aim to change all the
bugdown references to markdown. This commit renames the files,
file path mentions, and changes the imports.
Variables and other references to bugdown will be renamed in subsequent
commits.
We send user_id of the referrer instead of email in the invites dict.
Sending user_ids is more robust, as those are an immutable reference
to a user, rather than something that can change with time.
Updates to the webapp UI to display the inviters for more convenient
inspection will come in a future commit.
In zulip.yaml, add `deprecated` tags to all parameters/keys with
`Deprecated` in the description. Then add tests to ensure that deprecated
parameters/keys will always have the `deprecated` key. Also, in
the API docs, sort the parameters according to presence of `deprecated`
key, presenting the `deprecated` keys at the end and add a `deprecated`
tag next to them.
We send a remove mobile push notification to the users who are
no longer mentioned after the content of the message was edited.
This also corrects the notification count for the mobile apps
where a user was previously mentioned in a muted stream / topic and the
message was edited such that the user is no longer mentioned.
This fixes the case where a user has read all their unreads
but the notification badge on the app is still positive.
Fixes #15428.
do_clear_mobile_push_notifications_for_ids can now be used to
clear push_notification for multiple users at once. This method
loops over users, so no performance optimization is gained.
We've been seeing an exception in server_event_dispatch.js in
production where in large organizations, sometimes when a new user
joined, every other browser in the organization would throw an
exception processing some sort of realm_user/update event.
It turns out the cause was that when a user copies their profile from
an existing user account with a user-uploaded avatar, the code path we
reused to set the avatar would send a realm_user/update event about
the avatar change -- for a user that hadn't been fully created and
certainly hadn't had the realm_user/add event sent for them.
We fix this and add tests and comments to prevent it recurring.
(Removed an incorrect docstring while working on this).
This fixes an issue that caused HTML entities inside of inline code
blocks to be converted rather than being displayed literally.
The upstream python-markdown now handles this correctly, so we just use
their implementation with our changes for removing .strip(). As a result
of this migration, we switch backtick pattern to an inline processor
too.
Fixes #12056.
For the codeblock counterpart of this issue, we should follow the
upstream PR https://github.com/Python-Markdown/markdown/pull/990.
Co-authored-by: Rohitt Vashishtha <aero31aero@gmail.com>
Note that I don't actually convert the
checker from check_dict to check_dict_only,
because that would be a user-facing change,
but I think we can sweep a lot of things
like this after the next release.
Because of other validation on these values, I don't believe any of
these does anything different, but these changes improve readability
and likely make GitHub's code scanners happy.
We now have our muted topics use tuples internally,
which allows us to tighten up the annotation
for get_topic_mutes, as well as our schema
checking.
We want to deprecate sub_validator=None
for check_list, so we also introduce
check_tuple here. Now we also want to deprecate
check_tuple, but it's at least isolated now.
We will use this for data structures that are tuples,
but which are sent as lists over the wire. Fortunately,
we don't have too many of those.
The plan is to convert tuples to dictionaries,
but backward compatibility may be tricky in some
places.
This commit changes do_get_user_invites function to not return
multiuse invites to non-admin users. We should only return multiuse
invites to admins, as we only allow admins to create them.
Streams can have lots of subscribers, meaning that the archiving process
will be moving tons of UserMessages per message. For that reason, using
a smaller batch size for stream messages is justified.
Some personal messages need to be added in test_scrub_realm to have
coverage of do_delete_messages_by_sender after these changes.
Currently, we use -1 as the Realm.message_retention_days value to retain
messages forever unless specified at the stream level for a particular
stream; that is, no policy is set at the realm level. But this is
incoherent with what we use for Stream.message_retention_days, where -1
means
> disable retention policy for this stream unconditionally
which can be confusing from an API standpoint.
So instead of trying some hack to reset the value to NULL, or using some
other value like -2 for RETAIN_MESSAGE_FOREVER and exposing that in the
API, it is much more intuitive to use a string like 'forever' that can
be mapped to RETAIN_MESSAGE_FOREVER at the backend. And this is similar
to what we use for stream settings as well.
To be more consistent with the meaning in the Stream model, and to make
it easier to have a reasonable settings API, we get rid of the None
value for Realm.message_retention_days in favor of the value -1 to
represent the "don't delete messages" default policy.
A generator that yields values without receiving or returning them is
an Iterator. Although every Iterator happens to be iterable, Iterable
is a confusing annotation for generators because a generator is only
iterable once.
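For example (a generic illustration, not code from this commit):

```
from typing import Iterator

def fib(n: int) -> Iterator[int]:
    # Yields values without receiving or returning any, so Iterator is
    # the accurate annotation; Iterable would wrongly suggest that the
    # result can be iterated more than once.
    a, b = 0, 1
    for _ in range(n):
        yield a
        a, b = b, a + b
```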
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Also enable warn_unused_ignores. I think the fact that there are so
few of these is good evidence that it’s not a significant burden for
people fixing type errors.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We only use this in a few places, but they're really important places
for understanding the types in the codebase, and so it's worth having
a bit of expository documentation explaining how we use it.
(And I expect we'll add more with time).
With this implementation of automatic theme detection, we make the
following changes in the backend, frontend, and documentation.
This replaces the previous night_mode boolean with an enum, with the
default value being to use the prefers-color-scheme feature of the
operating system to determine which theme to use.
Fixes: #14451.
Co-authored-by: @kPerikou <44238834+kPerikou@users.noreply.github.com>
We can now invite new users as realm owners. We allow only
owners to invite new users as owners, both for single invites
and multiuse invite links. Also, only owners can revoke or resend
owner invitations.
Old: a validator returns None on success and returns an error string
on error.
New: a validator returns the validated value on success and raises
ValidationError on error.
This allows mypy to catch mismatches between the annotated type of a
REQ parameter and the type that the validator actually validates.
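A minimal sketch of the new contract (illustrative; the real validators live in zerver/lib/validator.py):

```
from django.core.exceptions import ValidationError

def check_int(var_name: str, val: object) -> int:
    if not isinstance(val, int):
        raise ValidationError(f"{var_name} is not an int")
    # Success: return the validated (and now precisely typed) value.
    return val
```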
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Rename the validator to check_union, to conform
more to Python typing nomenclature.
And we rename one of the test helpers to the
simpler `check_types`. (The test helper
was using "variable" in the "var" sense.)
We assert that the post was successful, to give
more immediate feedback for tests that don't
bother to check the return value and may be
implicitly assuming this method just works in
all cases.
And we also make it more convenient for tests
that are happy-path tests--they don't have to
do the assertion themselves. (And they're still
free to do deeper checks on the json.)
We opt out with allow_fail=True. We probably want
a more direct API eventually for tests that are
clearly trying to test the failure path for
subscribing to streams.
It's possible that a couple tests here that I added
allow_fail=True to just have flawed data setup--
I don't have time to investigate all cases, but
hopefully they will at least stand out more.
Two things were broken here:
* we were using name(s) instead of id(s)
* we were always sending lists that only
had one element
Now we just send "stream_id" instead of "subscriptions".
If anything, we should start sending a list of users
instead of a list of streams. For example, see
the code below:
if peer_user_ids:
    for new_user_id in new_user_ids:
        event = dict(type="subscription", op="peer_add",
                     stream_id=stream.id,
                     user_id=new_user_id)
        send_event(realm, event, peer_user_ids)
Note that this only affects the webapp, as mobile/ZT
don't use this.
Currently the API docs do not specify whether a given API parameter
is to be specified in `query` or in `path`. Edit the docs so as
to show the type of argument right beside the argument name.
The loop I added here in 5b49839b08 was
ill-conceived. The critical issue was that despite its name,
do_clear_mobile_push_notifications_for_ids does not immediately clear
push notifications (Except in our test suite, where `send_event`
immediately calls into the queue worker code!).
Instead, it queues work to clear those push notifications. Which
means that the first user to declare bankruptcy with a large number of
unreads will fill the queue, and then this will just be an infinite
loop adding more work to the queue.
This adds a new client_capability that clients such as the mobile apps
can use to avoid unreasonable network bandwidth consumed sending
avatar URLs in organizations with 10,000s of users.
Clients don't strictly need this data, as they can always use the
/avatar/{user_id} endpoint to fetch the avatar if desired.
This will be more efficient especially for realms with
10,000+ users because the avatar URLs would increase the
payload size significantly and cost us more bandwidth.
Fixes #15287.
We need this field to avoid O(N) database operations
while fetching realm user data for clients with
`user_avatar_url_field_optional` flag enabled.
Part of #15287.
With #14378, we regressed back to the state prior
to 7e0ea61b00.
We fix this by getting our avatar bucket on
object initialization, and use the appropriate means
of gathering the network location for the urls.
Fixes #14484.
This commit adds backend support for setting message_retention_days
while creating streams and updating it for an existing stream. We only
allow organization owners to set/update it for a stream.
'message_retention_days' field for a stream existed previously also, but
there was no way to set it while creating streams or update it for an
existing stream using any endpoint.
Previously, we had implemented:
<span class="timestamp" data-timestamp="unix time">Original text</span>
The new syntax is:
<time timestamp="ISO 8601 string">Original text</time>
<span class="timestamp-error">Invalid time format: Original text</span>
Since python and JS interpretations of the ISO format are very
slightly different, we force both of them to drop milliseconds
and use 'Z' instead of '+00:00' to represent that the string is
in UTC. The resultant strings look like: 2011-04-11T10:20:30Z.
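For instance, producing such a string in Python might look like this (a sketch, not the actual helper):

```
from datetime import datetime, timezone

def to_utc_z_string(dt: datetime) -> str:
    # Drop microseconds and render UTC as 'Z' rather than '+00:00'.
    dt = dt.astimezone(timezone.utc).replace(microsecond=0)
    return dt.isoformat().replace("+00:00", "Z")

to_utc_z_string(datetime(2011, 4, 11, 10, 20, 30, tzinfo=timezone.utc))
# -> '2011-04-11T10:20:30Z'
```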
Fixes #15431.
Fixes #14498.
When a topic is moved to a different stream, the messages may no
longer be reachable to a guest user, if the user is not subscribed
to the new stream.
We used to send a message update event to the client in these cases,
which seems to be confusing both to the client updating the message
and the server sending push_notifications for it.
Now, we delete the UserMessage entries for these messages for the
user and send a delete message event to the client, which makes
both push_notification and the event handling client treat the
message as deleted, so no confusion arises in the code.
This makes the system store and track PushDeviceToken objects on
the local Zulip server when using the push notifications bouncer
and includes tests for this.
This is something we need to implement end-to-end encryption for
push notifications. We'll add the encryption key as an additional
property on the local PushDeviceToken object.
It also likely adds some value in the case that a server were to
switch between using the bouncer service and sending notifications
directly, though in practice that's unlikely to happen.
The most important change here is the one in the
maybe_send_to_registration codepath, as the insufficient validation
there could lead to fetching an expired PreregistrationUser that was
invited as an administrator even years ago, leading to this registration
ending up with the new user being a realm administrator.
Combined with the buggy migration in
0198_preregistrationuser_invited_as.py, this led to users incorrectly
joining as organizations administrators by accident. But even without
that bug, this issue could have allowed a user who was invited as an
administrator but then had that invitation expire and then joined via
social authentication incorrectly join as an organization administrator.
The second change is in ConfirmationEmailWorker, where this wasn't a
security problem, but if the server was stopped for long enough, with
some invites to send out email for in the queue, then after starting it
up again, the queue worker would send out emails for invites that
had already expired.
Google has removed the Google Hangouts brand, thus we are removing
it as a video chat provider option.
This commit removes the Google Hangouts integration and adds a migration
that sets all realms that are using Hangouts as their video chat
provider to the default, Jitsi.
With changes by tabbott to improve the overall video call documentation.
Fixes: #15298.
This adds support for a "spoiler" syntax in Zulip's markdown, which
can be used to hide content that one doesn't want to be immediately
visible without a click.
We use our own spoiler block syntax inspired by Zulip's existing quote
and math block markdown extensions, rather than requiring a token on
every line, as is present in some other markdown spoiler
implementations.
Fixes #5802.
Co-authored-by: Dylan Nugent <dylnuge@gmail.com>
This adds a new function `get_apns_badge_count()` to
fetch the badge count value for a user's push notifications,
and then sends that value with the APNs payload.
Once a message is read from the web app, the count is
decremented accordingly and a push notification with
`event: remove` is sent to the iOS clients.
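A hedged sketch of the counting idea (assuming a flag helper along the lines of UserMessage.where_active_push_notification(); the real query may differ):

```
from zerver.models import UserMessage, UserProfile

def get_apns_badge_count(user_profile: UserProfile) -> int:
    # Unread messages that still have an active mobile push notification.
    return UserMessage.objects.filter(user_profile=user_profile).extra(
        where=[UserMessage.where_active_push_notification()],
    ).count()
```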
Fixes #10271.
This line was effectively hardcoding a specific stream_post_policy,
overriding the value already present in the event, to no purpose.
(I believe it got here via cargo-culting induced by #13787.)
This commit removes is_old_stream property from the stream objects
returned by the API. This property was unnecessary and is essentially
equivalent to 'stream_weekly_traffic != null'.
We compute sub.is_old_stream in stream_data.update_calculated_fields
in frontend code and it is used to check whether we have a non-null
stream_weekly_traffic or not.
Fixes #15181.
This likely fixes a bug that can leak thousands of messages into the
invalid state where:
* user_message.flags.active_mobile_push_notification is True
* user_message.flags.read is True
which is intended to be impossible except during the transient window
between marking messages as read and sending the "remove push
notifications" event.
The bug is that if a user who is declaring bankruptcy with 10,000s of
unreads ends up having the database query to mark all of those as read
take 60s, the Django/uwsgi request will time out and kill the process.
If the postgres transaction still completes, we'll end up with the
second half of this function never being run.
A safer ordering is to do the smaller queries first.
We do this in a loop for correctness in the unlikely event there are
more than 10,000 of these.
This is designed to have no user-facing change unless the client
declares bulk_message_deletion in its client_capabilities.
Clients that do so will receive a single bulk event for bulk deletions
of messages within a single conversation (topic or PM thread).
Backend implementation of #15285.
Whenever we use API queries to mark messages as read we now increment
two new LoggingCount stats, messages_read::hour and
messages_read_interactions::hour.
We add an early return in the do_increment_logging_stat function if there
are no changes (increment is 0), as an optimization to avoid
unnecessary database queries.
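The early return itself is simple (signature assumed from the function name):

```
def do_increment_logging_stat(zerver_object, stat, subgroup, event_time, increment=1):
    if increment == 0:
        # Nothing to record; skip the LoggingCount lookup/update entirely.
        return
    ...  # existing upsert logic for the LoggingCount row
```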
We also log messages_read_interactions::hour Logging stat
as the number of API queries to mark messages as read.
We don't include tests for the case where do_update_pointer is called
because do_update_pointer will most likely be removed from the
codebase in the near future.
This adds a powerful end-to-end test for Zulip's API documentation:
For every documented API endpoint (with a few declared exceptions that
we hope to remove), we verify that every API response received by our
extensive backend test suite matches the declared schema.
This is a critical step towards being able to have complete, high
quality API documentation.
Fixes #15340.
When querying for matching topic names in a stream, we should do a
case-insensitive exact match for the topic, since that's the data
model for topics in Zulip.
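In Django ORM terms this is an `__iexact` lookup (a sketch; the topic column is historically named `subject`):

```
from zerver.models import Message

def messages_in_topic(recipient_id: int, topic_name: str):
    return Message.objects.filter(
        recipient_id=recipient_id,
        subject__iexact=topic_name,  # case-insensitive exact match
    )
```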
Generated by pyupgrade --py36-plus --keep-percent-format.
Now including %d, %i, %u, and multi-line strings.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The Python 3.6 style does support non-total and even partially-total
TypedDict, but total gives us better guarantees.
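For example (illustrative names; on Pythons before 3.8, TypedDict comes from typing_extensions):

```
from typing import TypedDict

class UserDict(TypedDict):  # total=True is the default
    id: int
    email: str

# With a total TypedDict, mypy guarantees both keys are present:
user: UserDict = {"id": 1, "email": "alice@example.com"}
```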
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Use read-only types (List ↦ Sequence, Dict ↦ Mapping, Set ↦
AbstractSet) to guard against accidental mutation of the default
value.
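For example (illustrative):

```
from typing import Sequence

def dedupe(ids: Sequence[int] = ()) -> Sequence[int]:
    # Sequence has no .append, so the shared default can't be mutated.
    return tuple(dict.fromkeys(ids))
```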
Signed-off-by: Anders Kaseorg <anders@zulip.com>
There seems to have been a confusion between two different uses of the
word “optional”:
• An optional parameter may be omitted and replaced with a default
value.
• An Optional type has None as a possible value.
Sometimes an optional parameter has a default value of None, or None
is otherwise a meaningful value to provide, in which case it makes
sense for the optional parameter to have an Optional type. But in
other cases, optional parameters should not have Optional type. Fix
them.
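For example (illustrative):

```
from typing import Optional

# An optional parameter whose type should NOT be Optional:
def render(text: str = "") -> str:
    return text

# An optional parameter where None is meaningful, so Optional fits:
def schedule(delay_seconds: Optional[int] = None) -> None:
    ...
```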
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fixes #2665.
Regenerated by tabbott with `lint --fix` after a rebase and change in
parameters.
Note from tabbott: In a few cases, this converts technical debt in the
form of unsorted imports into different technical debt in the form of
our largest files having very long, ugly import sequences at the
start. I expect this change will increase pressure for us to split
those files, which isn't a bad thing.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Automatically generated by the following script, based on the output
of lint with flake8-comma:
import re
import sys
last_filename = None
last_row = None
lines = []
for msg in sys.stdin:
    m = re.match(
        r"\x1b\[35mflake8 \|\x1b\[0m \x1b\[1;31m(.+):(\d+):(\d+): (\w+)", msg
    )
    if m:
        filename, row_str, col_str, err = m.groups()
        row, col = int(row_str), int(col_str)
        if filename == last_filename:
            assert last_row != row
        else:
            if last_filename is not None:
                with open(last_filename, "w") as f:
                    f.writelines(lines)
            with open(filename) as f:
                lines = f.readlines()
            last_filename = filename
        last_row = row
        line = lines[row - 1]
        if err in ["C812", "C815"]:
            lines[row - 1] = line[: col - 1] + "," + line[col - 1 :]
        elif err in ["C819"]:
            assert line[col - 2] == ","
            lines[row - 1] = line[: col - 2] + line[col - 1 :].lstrip(" ")
if last_filename is not None:
    with open(last_filename, "w") as f:
        f.writelines(lines)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This commit adds three `.pysa` model files: `false_positives.pysa`
for ruling out false positive flows with `Sanitize` annotations,
`req_lib.pysa` for educating pysa about Zulip's `REQ()` pattern for
extracting user input, and `redirects.pysa` for capturing the risk
of open redirects within Zulip code. Additionally, this commit
introduces `mark_sanitized`, an identity function which can be used
to selectively clear taint in cases where `Sanitize` models will not
work. This commit also puts `mark_sanitized` to work removing known
false positive flows.
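Per that description, mark_sanitized is just an identity function whose .pysa model clears taint (a sketch matching the description):

```
from typing import TypeVar

T = TypeVar("T")

def mark_sanitized(content: T) -> T:
    # At runtime this is a no-op; a .pysa model declares the return
    # value taint-free, selectively clearing false-positive flows.
    return content
```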
The only clients that should use the typing
indicators endpoint are our internal clients,
and they should send a JSON-formatted list
of user_ids.
We now enforce this, which removes some
complexity surrounding legacy ways of sending
users, such as emails and comma-delimited
strings of user_ids.
There may be a very tiny number of mobile
clients that still use the old emails API.
This won't have any user-facing effect on
the mobile users themselves, but if you type
a message to your friend on an old mobile
app, the friend will no longer see typing
indicators.
Also, the mobile team may see some errors
in their Sentry logs from the server rejecting
posts from the old mobile clients.
The error messages we report here are a bit
more generic, since we now just use REQ
to do validation with this code:
validator=check_list(check_int)
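In context, the endpoint declaration looks something like this (a sketch; the exact view signature may differ):

```
from typing import List

from django.http import HttpRequest, HttpResponse

from zerver.lib.request import REQ, has_request_variables
from zerver.lib.validator import check_int, check_list
from zerver.models import UserProfile

@has_request_variables
def send_notification_backend(
    request: HttpRequest,
    user_profile: UserProfile,
    to: List[int] = REQ(validator=check_list(check_int)),
) -> HttpResponse:
    ...
```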
This also allows us to remove a test hack
related to the API documentation. (We changed
the docs to reflect the modern API in an
earlier commit, but the tests couldn't be
fixed while we still had the more complex
semantics for the "to" parameter.)
Prevent `JsonableError(_("Missing content"))` from
ever being triggered.
That error wasn't handled by anything, and thus just threw a 500, as
it's not a response to an HTTP request.
The right fix is to adjust the caller to ban the empty string in
content (or content that strips to the empty string).
Closes #15145.
This commit adds some basic checks while adding or removing
realm owner status of a user and adds code to change owner
status of a user using update_user_backend.
This also adds a restriction on removing the owner status of the
last owner of a realm. This restriction was previously on
revoking admin status, but as we have added a more privileged
role of realm owner, we now have this restriction on owners
instead of admins.
We need to apply that restriction both in the role change code path
and the deactivate code path.
This commit sets the role of the user creating the realm as
realm owner after the realm is created.
Previously, the role of the user creating the realm was set as admin.
But now we want it to be owner because owners have the highest
privilege level.
The test_management_commands use in particular was causing pickling
errors when the test failed, because Python 3 filter returns an
iterator, not a list.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This commit adds an integration for Thinkst Canaries - physical, VM and
cloud-based canaries for detecting attackers to a network. Thinkst
Canaries can send webhook alerts when canaries have been tripped, and
this integration will post Zulip messages when these webhooks are
received.
Signed-off-by: David Wood <david@davidtw.co>
This was previously hardcoded with agreement between the Zulip backend
and frontend as 86400 seconds (1 day). Now, it's still hardcoded in
the backend, but arranged in a way where we could add a setting
without any changes to the mobile and terminal apps to update logic.
Fixes #15278.
We're migrating to using the cleaner zulip.com domain, which involves
changing all of our links from ReadTheDocs and other places to point
to the cleaner URL.
Generated by pyupgrade --py36-plus --keep-percent-format, but with the
NamedTuple changes reverted (see commit
ba7906a3c6, #15132).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The old topic of the message edit event can be used to help the client
calculate useful information, such as whether a change
in the current narrow is required.
This fixes our re-narrow logic after a stream edit of a topic with
no change in the topic name itself, since the original topic was not
present in the event received and hence `orig_topic` was
undefined in this case.
An option to disable breadcrumb messages is given in both the message
edit form and the topic edit stream popover.
Users now have the option to select, via checkboxes in the UI, which
streams to send the notification of a topic's stream edit to.
We pipe realm_id through functions where it is available;
this helps us avoid querying for realm_id in a loop when
multiple messages are being processed.
datetime.timezone is available in Python ≥ 3.2. This also lets us
remove a pytz dependency from the PostgreSQL scripts.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fixes warnings like these with python -Wd:
/home/circleci/zulip/zerver/lib/bugdown/__init__.py:327: DeprecationWarning: This method will be removed in future versions. Use 'list(elem)' or iteration over elem instead.
for child in currElementPair.value.getchildren():
/home/circleci/zulip/zerver/lib/bugdown/__init__.py:328: DeprecationWarning: This method will be removed in future versions. Use 'list(elem)' or iteration over elem instead.
if child.getchildren():
/home/circleci/zulip/zerver/lib/bugdown/__init__.py:282: DeprecationWarning: This method will be removed in future versions. Use 'list(elem)' or iteration over elem instead.
for child in currElement.getchildren():
/home/circleci/zulip/zerver/lib/bugdown/__init__.py:283: DeprecationWarning: This method will be removed in future versions. Use 'list(elem)' or iteration over elem instead.
if child.getchildren():
https://docs.python.org/3.8/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element.getchildren
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fixes this warning with python -Wd:
/home/circleci/zulip/zerver/lib/bot_config.py:69: DeprecationWarning: This method will be removed in future versions. Use 'parser.read_file()' instead.
config.readfp(conf)
https://docs.python.org/3/library/configparser.html#configparser.ConfigParser.readfp
Signed-off-by: Anders Kaseorg <anders@zulip.com>
url_to_a returns Union[Element, str], but str cannot be appended to
Element; that would raise TypeError at runtime.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
zerver/lib/i18n.py:34:28: E741 ambiguous variable name 'l'
zerver/lib/webhooks/common.py:103:34: E225 missing whitespace around operator
zerver/tests/test_queue_worker.py:563:9: E306 expected 1 blank line before a nested definition, found 0
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This reimplements our Zoom video call integration to use an OAuth
application. In addition to providing a cleaner setup experience,
especially on zulipchat.com where the server administrators can have
done the app registration already, it also fixes the limitation of the
previous integration that it could only have one call active at a time
when set up with typical Zoom API keys.
Fixes #11672.
Co-authored-by: Marco Burstein <marco@marco.how>
Co-authored-by: Tim Abbott <tabbott@zulipchat.com>
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
Objects whose properties are not described were passing validation with
the current validator. Edit it so that objects with no `properties`
or `additionalProperties` attribute, i.e., opaque objects, get
invalidated.
Also make changes in zulip.yaml to fix any opaque objects (tweaked by
tabbott to edit the documentation for better clarity).
We change do_create_user and create_user to accept
role as a parameter instead of 'is_realm_admin' and 'is_guest'.
These changes are done to minimize data conversions between
role and boolean fields.
request_retry and notify_bot_owner don't use request_data, so we might
as well not send it to them at all.
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
Using the Python Standard Library's abc library and NotImplementedError
we can better define interfaces (this is mainly to improve readability
and consistency).
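For instance (illustrative class and method names):

```
from abc import ABC, abstractmethod

class OutgoingWebhookServiceInterface(ABC):
    # Subclasses must implement this; instantiating a subclass that
    # doesn't raises TypeError at construction time.
    @abstractmethod
    def process_success(self, response_json: dict) -> dict:
        raise NotImplementedError
```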
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
Integrations can be supplied a logo parameter which is used to construct
their `logo_url`. It would be useful to store this parameter, instead of
computing the path from the URL.
This commit changes the person dict in event sent by do_change_user_role
to send role instead of is_admin or is_guest.
This makes things much more straightforward for our upcoming primary
owners feature.
Currently, response return values have to be written twice, once in
the docs and once in zulip.yaml. Create a markdown extension so
that the return values in the API docs are rendered using content from
zulip.yaml.
This commit changes do_change_user_role to support adding or removing
the realm owner status of user and sending an event.
We also extend the existing test for do_change_user_role to do a bit
more validation to confirm the audit log records all values of role.
The new realm_owner role is added as an option for the role field in the
UserProfile model, and is_realm_owner is added as a property on the user
profile.
Aside from some basic tests validating the logic, this has no effect,
as users cannot yet end up set as realm owners.
If a user receives more than one invite to join a
realm, then after that user registers, all the remaining
invitations should be revoked, preventing them from being
listed in active invitations on the admin panel.
To do this, we added a new prereg_user status,
STATUS_REVOKED.
We also added a confirmation_link_expired_error page,
in case the user tries to click on a revoked invitation.
This page has a link to the login page.
Fixes: #12629
Co-authored-by: Arunika <arunikayadav42@gmail.com>
On the invitations panel, invites were being removed when
the user clicked on the invitation's link. Now we only remove
them when the user completes registration.
Fixes: #12281
mock is just a backport of the standard library’s unittest.mock now.
The SAMLAuthBackendTest change is needed because
MagicMock.call_args.args wasn’t introduced until Python
3.8 (https://bugs.python.org/issue21269).
The PROVISION_VERSION bump is skipped because mock is still an
indirect dev requirement via moto.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We handle fenced code blocks in a preprocessor, and > style blockquotes
are parsed in a blockprocessor. Pymarkdown doesn't run the preprocessors
again on any blocks that it is parsing, and is unlikely to accept our
solution upstream; they intend to convert fenced_code to a block parser.
We simply run all the preprocessors on the text again, with the exception
of NormalizeWhitespace, which removes delimiters used by HtmlStash to mark
preprocessed html code. To counter this, we subclass NormalizeWhitespace
and use our customized version when it is called from a blockparser.
Upstream issue: https://github.com/Python-Markdown/markdown/issues/53
Fixes #12800.
This commit merges do_change_is_admin and do_change_is_guest to a
single function do_change_user_role which will be used for changing
role of users.
do_change_is_api_super_user is added as a separate function for
changing is_api_super_user field of UserProfile.
This will protect us in case of some kinds of bugs that could allow
making requests such as password authentication attempts to tornado.
Without restricting the domains to which the in-memory backend can
be applied, such bugs would lead to attackers having multiple times
larger rate limits for these sensitive requests.
After a message was reset in our message cache via message editing or
adding/removing a reaction, we were sending corrupt data to the cache,
because build_message_dict (and thus build_dict_from_raw_db_row) was
improperly being called before sewing in the reaction data.
As a result, we were sending raw database data in the reaction
dictionaries, rather than the reformatted version expected by the API.
Bug introduced in 2a4c62a326.
Fixing this correctly required moving the rendering_realm_id logic one
step higher in the call chain, which is a useful refactoring anyway
(since we're no longer passing a `Message` object down).
We now parse tex and latex as regular languages, highlighting them
with pygments. We only allow 'math' to trigger latex rendering,
which is in line with the documentation.
This commit shifts our timestamp syntax to be of the form:
<span class="timestamp" data-timestamp="123456"></span>
since value is not a valid attribute of span elements.
This adds support for syntax like: !time(Jun 7 2017, 6:30 PM) so that
everyone sees the time in their own local timezone. This can be used
when scheduling online meetings, etc.
This adds some hardcoded values for timezones, because there is
no surefire way of determining the timezone easily. However,
since the main way of using the feature should be a typeahead for
entering the time, this shouldn't be cause for much concern.
Fixes #5176.
This extends `put_dict_in_redis` to take the token as an argument
and return it as the `key`, following the given key format.
Also, edit the regex for the token to include uppercase letters, as
a token sent during Apple authentication contains uppercase
letters.
Useful for adding "Sign in with Apple" support.
During events such as a stream or topic name edit for a topic, we were
running database queries in a loop for each message, for reactions,
submessages, and realm_id. This commit reduces the queries so that they
are done only for realm_id, which is yet to be fixed.
This is accomplished by building messages with empty reactions
and submessages and then updating them in the messages using bulk
queries.
This commit allows non-admins to set the stream post policy while
creating streams.
The restriction was there to prevent a user from creating a stream in
which they cannot post themselves, but this will be taken care of by the
stream admin feature.
For unknown reasons, deleting 10,000s of ArchiveTransaction objects
results in rapidly growing memory in the job making the request in the
Django process, eventually leading to an OOM kill.
I don't understand why Django behaves that way; I would have expected
the failure mode to instead be a serious load problem on the database
server, but perhaps the way Django's internal deletion logic handles
cascading the deletes to many millions of ArchiveMessages and other
ForeignKey objects requires tracking a lot of data in memory.
The solution is the same in any case, which is to batch the deletions
to execute a reasonable number of them at once. Doing a single
ArchiveTransaction at a time would likely result in huge numbers of
database queries in a loop, which performs very poorly. So we balance
by batching deletions in groups of 100 ArchiveTransactions; testing
this in production, I saw no spike of memory usage materially beyond
that of a normal Django process, and each bulk-deletion transaction
takes several seconds to process (meaning per-transaction overhead is
negligible).
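The batching itself is straightforward (a sketch; the real selection criteria for expired transactions are elided):

```
from typing import List

from zerver.models import ArchiveTransaction

def delete_archive_transactions(ids: List[int], batch_size: int = 100) -> None:
    # Each .delete() cascades to only ~batch_size transactions' worth of
    # ArchiveMessages, keeping Django's in-memory bookkeeping bounded.
    for i in range(0, len(ids), batch_size):
        ArchiveTransaction.objects.filter(id__in=ids[i : i + batch_size]).delete()
```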
I'm not sure exactly what series of history got us here, but we were
fetching the mobile_user_ids data for all users in the organization,
regardless of whether they were recently active (and thus relevant for
the main presence data set). And doing so in a sloppy fashion
(sending every user ID over the wire, rather than just having the
database join on Realm).
Fixing this saves a factor of 4-5 on the total runtime of a presence
request on organizations with 10Ks of users like chat.zulip.org; more
like 25% in an organization with 150. Since large organizations are
very heavily weighted in the overall cost of presence, this is a huge
win.
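A sketch of letting the database do the join (model and field names assumed):

```
from typing import Set

from zerver.models import PushDeviceToken, Realm

def get_mobile_user_ids(realm: Realm) -> Set[int]:
    # The join on realm happens in the database; only the matching
    # user IDs come back over the wire.
    return set(
        PushDeviceToken.objects.filter(
            user__realm=realm, user__is_active=True,
        ).values_list("user_id", flat=True)
    )
```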
Fixes part of #13734.
Zulip's openapi specification in zulip.yaml has various examples
for various schemas. Validate the example with their respective
schemas to ensure that all the examples are schematically correct.
Part of #14100.
The `email` field for identifying the user being modified in these
events was not used by either the webapp or other official Zulip
clients. Instead, it was legacy data from before we switched years
ago to sending user_id fields as the correct way to uniquely identify
a user.
When a user changes their avatar image, the user's avatar in popovers
wasn't being correctly updated, because of browser caching of the
avatar image. We added a version to the request to get the image, in
the same format we use elsewhere, so the browser knows when to use the
cached image and when to make a new request to the server.
Edited by Tim to preserve/fix sort orders in some tests, and update
zulip_feature_level.
Fixes: #14290
We remove the `owner` field from `page_params/realm_bots`
and bot-related events.
In the recent commit 155f6da8ba
we added `owner_id`, which we now use everywhere we need
bot owners for.
We also bump the `API_FEATURE_LEVEL` to 5 here. We
had already documented this in the prior commit to
add `owner_id`.
Note that we don't have to worry about mobile/ZT clients
here--we only deal with bot data in the webapp.
For the below payloads we want `owner_id` instead
of `owner`, which we should deprecate. (The
`owner` field is actually an email, which is
not a stable key.)
page_params.realm_bots
realm_bot/add
realm_bot/update
IMPORTANT NOTE: Some of the data served in
these payloads is cached with the key
`bot_dicts_in_realm_cache_key`.
For page_params, we get the new field
via `get_owned_bot_dicts`.
For realm_bot/add, we modified
`created_bot_event`.
For realm_bot/update, we modified
`do_change_bot_owner`.
On the JS side, we no longer
look up the bot's owner directly in
`server_events_dispatch` when we get
a realm_bot/update event. Instead, we
delegate that job to `bot_data.js`.
I modified the tests accordingly.
When editing a message where we mention a usergroup, we would remove
the 'mentioned' flag from messages, resulting in the message being
hidden from your mentions in the UI. This was reported by Greg Price in
https://chat.zulip.org/#narrow/stream/9-issues/topic/missing.20mention.
We add the same code that we use in do_send_messages to calculate the
updated mentions_user_ids. We add some tests alongside other user group
mention tests in test_bugdown.
This adds a webhook that can be used to interpret standard Slack
payloads. Since there are a ton of existing Slack integrations out
there, having a webhook which can accept standard Slack payloads can
significantly ease transition pains. Obviously this can't do everything
that Slack payloads can (particularly WRT their widgets/interactions),
but we can ingest text and parse out multi-block payloads into a message
relatively reasonably.
Currently when the user uploads files with the ".jpe" file extension,
the markdown is converted to a link but the image is not embedded.
This commit adds support for the ".jpe" file extension.
Fixes #14863.
These changes should be included in bd9b74436c,
as they make sure that a Zulip limited-plan realm won't be able to change
the `message_retention_days` setting.
Since production testing of `message_retention_days` is finished, we can
enable this feature in the organization settings page. We already had this
setting in the frontend, but it was bitrotted and not rendered in templates.
Here we replaced our past text-input based setting with a
dropdown-with-text-input setting approach which is more consistent with our
existing UI.
Along with the frontend changes, we also incorporated a backend change to
handle making the retention period forever. This change introduces a new
converter, `to_positive_or_allowed_int`, which only allows positive integers
and an allowed value, for settings like `message_retention_days` which can
be a positive integer or have the value `Realm.RETAIN_MESSAGE_FOREVER` when
we change the setting to retain messages forever.
This change made `to_not_negative_int_or_none` redundant, so we removed it
as well.
Fixes: #14854
It's a preliminary step to enable message_retention_setting in org settings
UI, which is a non-limited plan only feature. So we require a page_param
property that tells us the limited-plan state of the Zulip realm.
Previously, we had a restriction that we could only
edit and move the topics of messages less than 7 days old.
This buggy behaviour is now removed in this
commit.
Fixes #14492.
Part of #13912.
New path() function changed the way a regex pattern
is created from urls - it adds escape backslashes,
so for testing purposes we need to take care of them
and remove them, to check if urls were tested.
Additionally, regex patterns from urls can have
[^/]+ instead of [^/]*, so we need to take care
of it too.
We no longer have intermediate constants of
`git_described` and `zulip_version_const`.
Instead, we make a `deployment_data` dictionary
that is grep-friendly, and we just let
`deployment_repr` do simple formatting
without translating string constants.
This is pretty easy to test:
- set DEBUG_ERROR_REPORTING = True
- modify some code to throw an exception
- see error output in #errors
- use "/emails" with text-only option to view
errors
This code was bitrotted--we no longer have a file
called `version`.
The info that was probably reported when that feature
was originally written probably lives now
in `zulip-git-version`, although I didn't research
all the history here. Here is the relevant
excerpt from `version.py`:
zulip_git_version_file = os.path.join(
    os.path.dirname(os.path.abspath(__file__)),
    'zulip-git-version')
if os.path.exists(zulip_git_version_file):
    with open(zulip_git_version_file) as f:
        version = f.read().strip()
        if version:
            ZULIP_VERSION = version
The file gets written as follows:
$ cat tools/cache-zulip-git-version
#!/usr/bin/env bash
set -e
cd "$(dirname "$0")/.."
git describe --tags --match='[0-9]*' > zulip-git-version || true
Here is what that might look like:
2.2-dev-2102-gf256ea39eb
Here is an excerpt from one of our recent error reports,
which demonstrates that the code I eliminated here was not
functioning (the third field is missing):
Deployed code:
- git: 2.2-dev-2028-g99ce96d49b-dirty
- ZULIP_VERSION: 2.2-dev-2028-g99ce96d49b
This fixes the main problem reported on #7868. I think
we may just want to close the issue, since the other
`nocoverage` stuff seems harmless to me.
Previously api_description and api_code_examples were two independent
markdown extensions for displaying OpenAPI content used in the same
places. We combine them into a single markdown extension (with two
processors) and move them to the openapi folder to make the codebase
more readable and better group the openapi code in the same place.
Instead of using stream_name + integers as topics, we now
generate topics using pos in `config.generate_data.json`.
This helps us create and test more realistic topics.
For realms with no retention policy on themselves or any of their
streams, no archiving happens, but 3 lines of logs would be generated.
That's redundant and we make changes in this commit to avoid logging
those lines if nothing of interest is happening.
Members of the org can now see the list of invitations they sent.
We give members permission to revoke and resend the invitations
they sent, and add tests verifying that a member can revoke and
resend only the invitations they themselves sent.
Fixes #14007.
Previously, hanging_lists preprocessor didn't consider anything
indented at 4 or above spaces to be a list. This meant that when
we had a list like:
1. 1
2. 2
3. 3
    2. 2a
    1. 1a
We would insert a newline between 3. 3 and 2. 2a. This resulted
in the block processor breaking down one list into two blocks, which
messed up the nesting and indentation for the second block.
We've had bugs in the past where users with a name in the format
"Alice|999" would confuse our markdown rendering or typeahead. While
that's a fully solvable problem, there's no real use case for that, so
it's probably simpler to just prevent users from setting their name
that way.
Fixes #13923.
Prior to this change, there were reports of 500s in
production due to `export.extra_data` being a
NoneType. This was reproducible using the s3
backend in development when a row was created in
the `RealmAuditLog` table, but the export failed in
the `DeferredWorker`. This left an entry lying
about that was never updated with an `extra_data`
field.
To fix this, we catch any exceptions in the
`DeferredWorker`, and then update `extra_data` to
encode the failure. We also fix the fact that we
never updated the export UI table with pending exports.
These changes also obviated the need for the somewhat
hacky `clear_success_banner` logic.
This ensures we write the new digest only if the db rebuild
succeeds. We were relying on the caller to
be successful in building the db; this was hacky and unreliable.
We now write the new db digest once the caller succeeds, which ensures
that we write a new digest after every successful attempt.
This fixes the anomaly we were facing where databases were rebuilt
on the 2nd provision attempt with no changes to files or migrations.
This was happening because we didn't write a new digest for the db
after the first provision (the case where the DB didn't exist).
During the 1st provision, we check the template_status() of
both the Dev and Test databases, but database_exists()
obviously returned false, and we rebuilt the database,
but forgot to write_new_digest, hence the anomaly in the
second provision explained above.
This ensures that if one deletes `zproject/dev-secrets.conf`, we end
up rebuilding the databases from scratch (which, critically, will
ensure the password that gets setup matches what's in the current
version of the configuration file).
This should address a category of issue we've had where deleting
`zproject/dev-secrets.conf` would result in provision failing.
The logic in do_set_realm_property would previously "change" the email
addresses of every user in the realm, even if they hadn't actually
changed.
We fix this by skipping the logic when it's unnecessary.
bulk_update is used to update the email of user_profile objects in
the database when email_address_visibility is changed.
This helps resolve the problem of timeout errors in realms with a
large number of users, due to the large number of database queries
run in a loop.
Since bulk_update doesn't flush caches, we need our own bit of code to
do that.
Fixes a part of #14600.
We add URLs to the `links_for_embed` set only when
the `url_embed_preview_enabled` flag is turned on.
So, it is sufficient to check if `links_for_embed`
is not empty.
This new type eliminates a bunch of messy code that previously
involved passing around long lists of mixed positional and keyword
arguments, instead using a consistent data object for communicating
about the state of an external authentication (constructed in
backends.py).
The result is a significantly more readable interface between
zproject/backends.py and zerver/views/auth.py, though likely more
could be done.
This has the side effect of renaming fields for internally passed
structures from name->full_name, next->redirect_to; this results in
most of the test codebase changes.
Modified by tabbott to add comments and collaboratively rewrite the
initialization logic.
We now prevent these variations:
* <hr/>
* <hr />
* <br/>
* <br />
We could enforce similar consistency for other void
tags, if we wished, but these two are particularly
prevalent.
Add function in openapi.py to access endpoint descriptions written
in zulip.yaml. Use this function for creating a markdown extension
for rendering endpoint descriptions written in zulip.yaml.
We use this extension for a single endpoint to get test coverage.
The post_init cache-flushing behavior in the original alert words
migration was subtly wrong; while it may have passed tests, it didn't
have the right ordering for unlikely races.
We use post_save rather than post_init hooks precisely because they
ensure that we flush the cache after we know the database has been
updated and any future reads from the database will have the latest
state.
Previously, alert words were case-insensitive in practice, by which I
mean the Markdown logic had always been case-insensitive; but the data
model was not, so you could create "duplicate" alert words with the
same words in different cases. We fix this inconsistency by making
the database model case-insensitive.
I'd prefer to be using the Postgres `citext` extension to have
postgres take care of case-insensitive logic for us, but that requires
installing a postgres extension as root on the postgres server, which
is a pain and perhaps not worth the effort to arrange given that we
can achieve our goals with transactions when adding alert words.
We take advantage of the migrate_alert_words migration we're already
doing for all users to effect this transition.
Fixes #12563.
Previously, alert words were a JSON list of strings stored in a
TextField on user_profile. That hacky model reflected the fact that
they were an early prototype feature.
This commit migrates from that to a separate table, 'AlertWord'. The
new AlertWord has user_profile, word, id and realm (a denormalization so
we can provide a nice index for fetching all the alert words in a
realm).
This transition requires moving the logic for flushing the Alert Words
caches to their own independent feature.
Note that this commit should not be cherry-picked without the
following commit, which fixes case-sensitivity issues with Alert Words.
This is a precursor commit to change the name of
AlertWordNotificationProcessor to AlertWordsNotificationProcessor
to match the change from UserProfile.alert_words to AlertWord.
Previously, we added support for 'none', 'plain' and 'noop' and a
function `lang = remap_language(lang)`. This also had the potential
to encourage adding more remappings, something that we deliberately
want to keep to a minimum.
For context, Anders K doesn't want us to keep any remapping (only
keeping 'text' which is the default no-op lexer that pygments has)
and Tim wants to keep 'plain' and 'text'. We should only document
and advertise 'text'.
Previously, the message and event APIs represented the user differently
for the same reaction data. To make this more consistent, I added a
user_id field to the reaction dict for both messages and events. I
updated the front end to use the user_id field rather than the user
dict. Lastly, I updated front end and back end tests that used user
info.
I primarily tested this by running my local Zulip build and
adding/removing reactions from messages.
Fixes #12049.
Some sites don't render correctly unless you are using one of the latest browsers.
YouTube Music, for instance, changes the page title to "Your browser is
deprecated, please upgrade.", which makes our URL previews look bad.
In the original implementation, we were checking for the default language
inside format_code, which resulted in the setting being ignored when set to
quote, math, tex or latex. We shift the validation to `check_for_new_fence`.
We also update the tests to use a saner naming scheme for the variables.
This commit replaces can_create_streams and can_subscribe_other_users
with a generic has_permission function in the UserProfile model for
these policy settings.
Relevant changes are made to events.py to avoid duplication at some
places.
We have two different digest schemes to make
sure we keep the database up to date. There
is the migration digest, which is NOT in the
scope of this commit, and which already
used the mechanism we use for other tools.
Here we are talking about the digest for
important files like `populate_db.py`.
Now our scheme is more consistent with how we
check file changes for other tools (as
well as the aforementioned migration files).
And we only write one hash file, instead of
seven.
And we only write the file when things have
actually changed.
And we are explicit about side effects.
Finally, we include a couple new bot settings
in the digest:
INTERNAL_BOTS
DISABLED_REALM_INTERNAL_BOTS
NOTE: This will require a one-time transition,
where we rebuild both databases (dev/test).
It takes a little over two minutes for me,
so it's not super painful.
I bump the provision version here, even
though you don't technically need it (since
the relevant tools are actually using the
digest files to determine if they need to
rebuild the database). I figure it's just
good to explicitly make this commit trigger
a provision, and the user will then see
the one-time migration of the hash files
with a little bit less of a surprise.
And I do a major bump, not a minor bump,
because when we go in the reverse direction,
the old code will have to rebuild the
database due to the legacy hash files not
being around, so, again, I just prefer it
to be explicit.
Fixes #14595.
Invalid HTTP requests could end up in an unhandled exception in
skip_200_and_304 due to the record not having the status_code attribute
set. With this change we'll avoid the exception.
Example:
curl -X POST -H 'Transfer-Encoding : chunked' --data-binary 'a' 'http://zulipdev.com:9991/json/messages/57'
2020-04-21 10:56:22.007 WARN [django.server] "POST /json/messages/57 HTTP/1.1" 405 95
2020-04-21 10:56:22.007 INFO [django.server] code 400, message Bad request syntax ('a')
2020-04-21 10:56:22.008 WARN [django.server] "a" 400 -
We remove the `generate_fixtures` option here mostly
for simplicity, but in particular to facilitate
an upcoming commit to simplify the job of
`generate-fixtures` (and remove its `--force` option).
The command line option here for `test-backend`
was really calling `generate_fixtures --force`,
which we're about to rename `tools/rebuild-test-database`.
The `test-backend` tool is already smart about catching
up on migrations, so we generally don't need to tell it
to repair the database.
And if the database does get corrupt, you can just do
it directly with `tools/rebuild-test-database`.
This eliminates the `use_force` flag in
`update_test_databases_if_required`, which was easy
to confuse with `rebuild_test_database`.
The other caller wasn't using `use_force`.
Somewhat confusingly, we have two different types of
digests related to databases. The migration digests
are pragmatic, since changes to migrations are a bit
more frequent for certain use cases and don't
necessitate a complete rebuild of the database.
Anyway, these are just more specific names.
Generated by autopep8 --aggressive, with the setup.cfg configuration
from #14532. In general, an isinstance check may not be equivalent to
a type check because it includes subtypes; however, that’s usually
what you want.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Generated by autopep8, with the setup.cfg configuration from #14532.
I’m not sure why pycodestyle didn’t already flag these.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This refactors `extract_python_code_example` to accept an
`example_regex` parameter. It can now be used to extract code examples
from javascript_examples.py.
Previously, the send_custom_email code path leaked files in paths that
were not `.gitignored`, under templates/zerver/emails.
This became problematic when we added automated tests for this code
path, as it meant we leaked these files every time `test-backend` ran.
Fix this by ensuring all the files we generate are in this special
subdirectory.
The purpose is to provide a way for (non-webapp) clients,
like the mobile and terminal apps, to tell whether the
server it's talking to is new enough to support a given
API feature -- in particular a way that
* is finer-grained than release numbers, so that for
features developed after e.g. 2.1.0 we can use them
immediately on servers deployed from master (like
chat.zulip.org and zulipchat.com) without waiting the
months until a 2.2 release;
* is reliable, unlike e.g. looking at the number of
commits since a release;
* doesn't lead to a growing bag of named feature flags
which the server has to go on sending forever.
Tweaked by tabbott to extend the documentation.
Closes #14618.
We now have two functions related to digests
for processes:
is_digest_obsolete
write_digest_file
In most cases we now **wait** to write the
digest file until after we've successfully
run a process with its new inputs.
In one place, for database migrations, we
continue to write the digest optimistically.
We'll want to fix this, but it requires a
little more code cleanup.
Here is the typical sequence of events:
NEVER RUN -
is_digest_obsolete returns True
quickly (we don't compute a hash)
write_digest_file does a write (duh)
AFTER NO CHANGES -
is_digest_obsolete returns False
after reading one file for old
hash and multiple files to compute
hash
most callers skip write_digest_file
(no files are changed)
AFTER SOME CHANGES -
is_digest_obsolete returns True
after doing full checks
most callers call write_digest_file
*after* running a process
I make these all functions for consistency,
and in particular I want to continue to avoid
`glob.glob` calls until we are actually
computing hashes.
This is mostly a prep to allow us to do
hashing in two separate places:
- check hashes
- update hashes
We would only update hashes **after** running
processes anew.
For `provision_inner` I considered using a
class to put the three path-related helpers
into a mini namespace, but it felt too heavy.
It wouldn't be completely implausible here
to extract something like a JSON config
file that has a list of globs for each
process that we do path-hashing for, but I
want to clean up other stuff first.
We now remove the `Type` and `_TYPE` suffixes,
as we will start treating this like a real
class with behavior, instead of a glorified
struct.
We pass in `platform_type`, so that we can
just derive some of our data from that,
where naming conventions apply.
And we use the name `migrations_status_path`,
instead of the name `migration_status`, which
had two different meanings before this change.
This is a pure refactor, and we just early-exit
in case the database doesn't exist (knowing that
that can be a bit of a lie now--see the comment
I added.)
Refactored code in actions.py and streams.py to move stream related
functions into streams.py and remove the dependency on actions.py.
validate_sender_can_write_to_stream function in actions.py was renamed
to access_stream_for_send_message in streams.py.
I remove `is_force` from `file_or_package_hash_updated`
and modernize its mypy annotations.
If `is_force` is `True`, we just now run the thing
we want to force-run without having to call
`file_or_package_hash_updated` to expensively
and riskily return `True`.
Another nice outcome of this change is that if
`file_or_package_hash_updated` returns `True`,
you can know that the file or package has
indeed been updated.
For the case of `build_pygments_data` we also
skip an `os.path.exists` check when `is_force`
is `True`.
We will short-circuit more logic in the next
few commits, as well as cleaning up some of
the long/wrapped lines in the `if` statements.
This will be useful for the mobile and desktop apps to hand an uploaded
file off to the system browser so that it can render PDFs, etc.
The S3 backend implementation is simple; for the local upload backend,
we use Django's signing feature to simulate the same sort of 60-second
lifetime token.
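A minimal sketch of the Django-signing approach for the local backend
(the helper names are hypothetical; only TimestampSigner and its
sign/unsign API come from Django, and it requires Django settings
with a SECRET_KEY to be configured):

```python
from django.core.signing import BadSignature, SignatureExpired, TimestampSigner

signer = TimestampSigner()

def generate_signed_path(path: str) -> str:
    # Appends a timestamp and signature to the path, so it can expire.
    return signer.sign(path)

def check_signed_path(token: str, max_age: int = 60) -> str:
    # Raises if the token was tampered with or is older than max_age seconds.
    try:
        return signer.unsign(token, max_age=max_age)
    except (SignatureExpired, BadSignature):
        raise PermissionError("Invalid or expired file access token")
```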
Co-Author-By: Mateusz Mandera <mateusz.mandera@protonmail.com>
For some mobile use cases, 15 seconds is potentially too short for a
busy+slow device to open a browser and fetch the URL. 60 seconds is
plenty, and doesn't carry a materially increased security risk.
When creating a webhook integration or updating an existing one, it is a pain to
create or update the screenshots in the documentation. This commit adds a
tool that can trigger a sample notification for the webhook using a fixture,
that is likely already written for the tests.
Currently, the developer needs to take a screenshot manually, but this could
be automated using puppeteer or something like that.
Also, the tool does not support webhooks with basic auth, and only supports
webhooks that use json fixtures. These can be fixed in subsequent commits.
The Redis-based rate limiting approach takes a lot of time talking to
Redis with 3-4 network requests to Redis on each request. It had a
negative impact on the performance of `get_events()` since this is our
single highest-traffic endpoint.
This commit introduces an in-process rate-limiting alternative for
the `/json/events` endpoint. The implementation uses the Leaky Bucket
algorithm and Python dictionaries instead of Redis. This drops the
rate limiting time for `get_events()` from about 3000us to less than
100us (on my system).
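A rough sketch of the idea -- a per-user leaky bucket kept in a
process-local dictionary (the class name, parameters, and locking
details here are illustrative assumptions, not the actual
implementation):

```python
import time
from collections import defaultdict
from threading import Lock

class InProcessRateLimiter:
    """Leaky bucket: allow bursts up to `capacity`, draining at `rate`/sec."""

    def __init__(self, capacity: float, rate: float) -> None:
        self.capacity = capacity                # burst allowance
        self.rate = rate                        # drain rate (requests/sec)
        self.level = defaultdict(float)         # bucket level per user id
        self.last_update = defaultdict(float)   # last drain time per user id
        self.lock = Lock()

    def is_ratelimited(self, user_id: int) -> bool:
        with self.lock:
            now = time.time()
            # Drain the bucket for the time elapsed since the last request.
            elapsed = now - self.last_update[user_id]
            self.level[user_id] = max(0.0, self.level[user_id] - elapsed * self.rate)
            self.last_update[user_id] = now
            if self.level[user_id] + 1 > self.capacity:
                return True   # bucket would overflow; reject
            self.level[user_id] += 1  # accept and record the request
            return False
```

With, say, capacity=20 and rate=0.5, a client can burst 20 requests
and then sustain one request every 2 seconds, all without touching
Redis.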
Fixes #13913.
Co-Author-by: Mateusz Mandera <mateusz.mandera@protonmail.com>
Co-Author-by: Anders Kaseorg <anders@zulipchat.com>
Right now, the message is "Invalid characters in emoji name" when
the emoji_name is empty. We change check_valid_emoji_name() in
zerver/lib/emoji.py, which validates the name, to accommodate the
case of a missing name. The new message is "Emoji name is missing".
Generated by `pyupgrade --py3-plus --keep-percent-format` on all our
Python code except `zthumbor` and `zulip-ec2-configure-interfaces`,
followed by manual indentation fixes.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This adds a new realm setting: default_code_block_language.
This PR also adds a new widget to specify a language, which
behaves somewhat differently from other widgets of the same
kind; instead of exposing methods to the whole module, we
just create a single IIFE that handles all the interactions
with the DOM for the widget.
We also move the code for remapping languages to the format_code
function, since we want to preserve the original language to
decide if we override it using default_code_block_language.
Fixes #14404.
Semaphore currently has two different versions of their product -
Classic and 2.0. This commit adds support for Semaphore 2.0, alongside
Semaphore Classic, using the same webhook. This would let the integration
work seamlessly for users who have already configured a Zulip integration in
their Semaphore 2.0 projects.
Semaphore 2.0 currently only supports GitHub and their payloads do not
contain URLs for common entities like commits, pull requests and tags. We
construct URLs for them using templates, but also try to support other
services by providing notifications without URLs.
Closes #14171.
Co-authored-by: Puneeth Chaganti <punchagan@muse-amuse.in>
If we had a rule like "max 3 requests in 2 seconds", there was an
inconsistency between is_ratelimited() and get_api_calls_left().
If you had:
request #1 at time 0
request #2 and #3 at some times < 2
Next request, if exactly at time 2, would not get ratelimited, but if
get_api_calls_left was called, it would return 0. This was due to
inconsistency on the boundary - the check in is_ratelimited was
exclusive, while get_api_calls_left uses zcount, which is inclusive.
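The fix is to make both code paths agree on an inclusive boundary. A
tiny plain-Python stand-in for the counting logic (not the actual
Redis code) that matches ZCOUNT's inclusive semantics:

```python
from typing import List

def calls_in_window(timestamps: List[float], now: float, window: float) -> int:
    # Inclusive boundary: a request made exactly `window` seconds ago
    # still counts, just as ZCOUNT's min bound is inclusive.
    boundary = now - window
    return sum(1 for t in timestamps if t >= boundary)

# "max 3 requests in 2 seconds": requests at t=0, 1, 1.5, checked at t=2.
assert calls_in_window([0.0, 1.0, 1.5], now=2.0, window=2.0) == 3
```

Counted this way, both is_ratelimited() and get_api_calls_left() see
three requests in the window at t=2, so they agree that no calls are
left.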
time_reset returned from api_calls_left() was a timestamp, but
mistakenly treated as delta seconds. We change the return value of
api_calls_left() to be delta seconds, to be consistent with the return
value of rate_limit().
The information used to be stored in a request._ratelimit dict, but
there's no need for that, and a list is a simpler structure, so this
allows us to simplify the plumbing somewhat.
That's the value that matters to the code that catches the exception,
and this change allows simplifying the plumbing somewhat, and gets rid
of the get_rate_limit_result_from_request function.
This commit contains a few cleanups:
* In order to scale better for adding multiple commands,
the message formatting and setting switch logic was
extracted to its own function.
* The command lists were removed, as the frontend parses
the slash command from the compose box, and only sends
a single command to the backend for any given command
alias typed.
* The `switch_command` logic was removed because, given
the aforementioned fact, the index of the command will
always be the same. Thus the switch command will always
be the same.
* Switched to using early returns as opposed to nested
conditionals, along with removing single-use variable
declarations.
This commit reuses the existing infrastructure for moving a topic
within a stream to add support for moving topics from one stream to
another.
Split from the original full-feature commit so that we can merge just
the backend, which is finished, at this time.
This is a large part of #6427.
The feature is incomplete, in that we don't have real-time update of
the frontend to handle the event, documentation, etc., but this commit
is a good mergable checkpoint that we can do further work on top of.
We also still ideally would have a test_events test for the backend,
but I'm willing to leave that for follow-up work.
This appears to have switched to tabbott as the author during commit
squashing some time ago, but this commit is certainly:
Co-Authored-By: Wbert Adrián Castro Vera <wbertc@gmail.com>
This simplifies the update_display_settings endpoint to use REQ for
validation, rather than custom if/else statements.
The test changes just take advantage of the now more consistent
syntax.
This changes the payload that is used
to populate `page_params` for the webapp,
as well as responses to the once-every-50-seconds
presence pings.
Now our dictionary of users only has these
two fields in the value:
- active_timestamp
- idle_timestamp
Example data:
{
6: Object { idle_timestamp: 1585746028 },
7: Object { active_timestamp: 1585745774 },
8: Object { active_timestamp: 1585745578,
idle_timestamp: 1585745400}
}
We only send the slimmer type of payload
to clients that have set `slim_presence`
to True.
Note that this commit does not change the format
of the event data, which still looks like this:
{
website: {
client: 'website',
pushable: false,
status: 'active',
timestamp: 1585745225
}
}
This commit migrates the Zulip outgoing webhook payload to
/zulip-outgoing-webhook:post in OpenAPI.
Since this migrates the last payloads from api/fixtures.json to
OpenAPI, this commit removes api/fixtures.json file and the functions
accessing the file.
Tweaked by tabbott to further remove an unnecessary conditional.
This is critical for importing the very first realm into an empty
server, since in 27b15a9722, we changed
the model to create the internal realm when the first real realm would
be created, but neglected the data import code path.
The distinction between ValueError and TypeError
is not useful in these functions:
- extract_stream_indicator
- extract_private_recipients (or its callees)
These are always invoked in views to validate
user input.
When we use REQ to wrap the validators, any
Exception gets turned into a JsonableError, so
the distinction is not important.
And if we don't use REQ to wrap the validators,
the errors aren't caught.
Now we just let these functions directly produce
the desired end result for both codepaths.
Also, we now flag the error strings for translation.
This setting is being overridden by the frontend since the last
commit, and the security model is clearer and more robust if we don't
make it appear as though the markdown processor is handling this
issue.
Co-authored-by: Tim Abbott <tabbott@zulipchat.com>
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Zulip's modal_link markdown feature has not been used since 2017; it
was a hack used for a 2013-era tutorial feature and was never used
outside that use case.
Unfortunately, its sloppy implementation was exposed in the markdown
processor for all users, not just the tutorial use case.
More importantly, it was buggy, in that it did not validate the link
using the standard validation approach used by our other code
interacting with links.
The right solution is simply to remove it.
When more than one outgoing webhook is configured,
the message which is sent to the webhook bot passes
through the finalize_payload function multiple times,
which mutated the message dict in a way that lost
many keys from the dict.
This commit fixes that problem by having
`finalize_payload` return a shallow copy of the
incoming dict, instead of mutating it. We still
mutate dicts inside of `post_process_dicts`, though,
for performance reasons.
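A simplified sketch of the shape of the fix (the specific keys
dropped below are hypothetical; the point is that we copy before
narrowing):

```python
from typing import Any, Dict

def finalize_payload(wide_dict: Dict[str, Any]) -> Dict[str, Any]:
    # Copy first, so calling this once per outgoing webhook service
    # never strips keys (e.g. sender_realm_id) from the shared dict.
    narrow_dict = dict(wide_dict)
    for key in ('sender_realm_id', 'recipient_type'):  # hypothetical keys
        narrow_dict.pop(key, None)
    return narrow_dict
```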
This was slightly modified by @showell to fix the
`test_both_codepaths` test that was added concurrently
to this work. (I used a slightly verbose style in the
tests to emphasize the transformation from `wide_dict`
to `narrow_dict`.)
I also removed a deepcopy call inside
`get_client_payload`, since we now no longer mutate
in `finalize_payload`.
Finally, I added some comments here and there.
For testing, I mostly protect against the root
cause of the bug happening again, by adding a line
to make sure that `sender_realm_id` does not get
wiped out from the "wide" dictionary.
A better test would exercise the actual code that
exposed the bug here by sending a message to a bot
with two or more services attached to it. I will
do that in a future commit.
Fixes #14384.
The `event` parameter is never used by `process_success`,
and eliminating it allows us to greatly simplify tests
that are just confusingly passing in events that are
totally ignored.
We've had a bug for a while that if any ScheduledEmail objects get
created with the wrong email sender address, even after the sysadmin
corrects the problem, they'll still get errors because of the objects
stored with the wrong format.
We solve this by using FromAddress placeholder strings in the
send_future_email function, so that ScheduledEmail objects end up
setting the final `from_address` value when mail is actually sent
using the setting in effect at that time.
Fixes #11008.
When we are fetching messages, we need to hydrate
stream names into the messages for legacy reasons.
(Ideally, we could skip this step for the webapp
and modern mobile clients, since they really only
need stream_ids, but we're not there yet.)
We keep a recipient cache that maps recipient ids
to stream names.
When we populate that cache, we now use `values(...)`
to avoid fat objects and extra DB work.
Note that we are already using a similar technique
for hydrating PM/huddle recipients.
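Schematically, the change is from fetching full ORM objects to
fetching only the columns we need (a sketch against an assumed Stream
model with a recipient_id column; not the exact Zulip query):

```python
# Build the recipient_id -> stream name cache in one slim query.
rows = Stream.objects.filter(
    recipient_id__in=needed_recipient_ids,
).values('recipient_id', 'name')

stream_name_by_recipient_id = {
    row['recipient_id']: row['name'] for row in rows
}
```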
The previous system for documenting arguments was very ugly if any of
the examples or descriptions were wrong. After thinking about this
for a while, I concluded the core problem was that a table was the
wrong design element to use for API parameters, and we'd be much
better off with individual card-type widgets instead.
This rewrites the API arguments documentation implementation to use a
basic sort of card-like system with some basic styling; I think the
result is a lot more readable, and it's a lot more clear how we would
add additional OpenAPI details (like parameter types) to the
documentation.
This commit modifies 'zerver/lib/bot_lib.py' to decouple the
user-controllable 'service_name' parameter from the value that is
passed in to 'import_module'. This is done as a precautionary
hardening.
This commit introduces two new functions in 'url_encoding.py' which
centralize two common patterns for constructing redirect URLs. It
also migrates the files using those patterns to use the new
functions.
After subscribing a stream email address to a Mailman email list
and receiving a message from it (using the polling configuration
with an Exim + Dovecot mailserver), the following error message
is emitted by Zulip:
Logger zerver.lib.email_mirror, from module zerver.lib.email_mirror line 77:
Error generated by Anonymous user (not logged in) on zulip deployment
Sender: "Foo Bar" <foo@example.com>
To: No recipient found
Missing recipient in mirror email
This is because the To: header on the received email corresponds
to the email list, and there are no other headers to indicate the
final recipient, apart from the "Envelope-To" header added by
Exim. To resolve this problem, the commit adds "Envelope-To" to
the list of headers to check for a match.
This improves the error handling for invalid values of the
propagate_mode parameter to our message editing endpoints.
Previously, invalid values would just work like change_one rather than
doing nothing.
The previous starred_messages race handling did not correctly consider
the possibility that an event queue might have been registered without
starred_messages.
Instead of operating on RateLimitedObjects, and making the classes
depend on each other too strongly. This also allows getting rid of the get_keys()
function from RateLimitedObject, which was a redis rate limiter
implementation detail. RateLimitedObject should only define their own
key() function and the logic forming various necessary redis keys from
them should be in RedisRateLimiterBackend.
type().__name__ is sufficient, and much more readable than type(), so it's
better to use the former for keys.
We also make the classes consistent in forming the keys in the format
type(self).__name__:identifier and adjust logger.warning and statsd to
take advantage of that and simply log the key().
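For example, a rate-limited entity then only needs to define key(),
roughly like this (the class is illustrative):

```python
class RateLimitedUser:
    def __init__(self, user_id: int) -> None:
        self.user_id = user_id

    def key(self) -> str:
        # type(self).__name__ yields "RateLimitedUser", so the key is
        # "RateLimitedUser:42"; the redis backend derives all of its
        # internal redis keys from this single value.
        return f"{type(self).__name__}:{self.user_id}"

assert RateLimitedUser(42).key() == "RateLimitedUser:42"
```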
We don't need `do_create_user` to send a partial
event here for bots. The only caller to `do_create_user`
that actually creates bots (apart from some tests that
just need data setup) is `add_bot_backend`, which
sends the more complete event including bot "extras"
like service info.
The modified event tests show the simplification
here (2 events instead of 3).
Also, the bot tests now use tuple unpacking, which
will force a ValueError if we duplicate events
again.
We were going back to the database to get all
the users in the realm, when we had them right
there already. I believe this is a legacy
of us running on a very old version of Django
(back in early days), where `bulk_create`
didn't give you back ids in a nice way.
In the interim we added the `RealmAuditLog`
code, which does take advantage of the
existing profiles (and proves we can rely
on them).
But meanwhile we were still
doing a query to get all N users in the
realm. With `select_related`!
To be fair, bulk_create_users() is by
its very nature a pretty infrequent
operation. This change is more motivated
by code cleanup.
Now we just loop through user_ids for
the Recipient/Subscriber foreign key rows.
I also removed some fairly convoluted code mapping
emails to user_ids and just work in user_id
space.
We try to use the correct variation of `email`
or `delivery_email`, even though in some
databases they are the same.
(To find the differences, I temporarily hacked
populate_db to use different values for email
and delivery_email, and reduced email visibility
in the zulip realm to admins only.)
In places where we want the "normal" realm
behavior of showing emails (and having `email`
be the same as `delivery_email`), we use
the new `reset_emails_in_zulip_realm` helper.
A couple random things:
- I fixed any error messages that were leaking
the wrong email
- a test that claimed to rely on the order
of emails no longer does (we sort user_ids
instead)
- we now use user_ids in some place where we used
to use emails
- for IRC mirrors I just punted and used
`reset_emails_in_zulip_realm` in most places
- for MIT-related tests, I didn't fix email
vs. delivery_email unless it was obvious
I also explicitly reset the realm to a "normal"
realm for a couple tests that I frankly just didn't
have the energy to debug. (Also, we do want some
coverage on the normal case, even though it is
"easier" for tests to pass if you mix up `email`
and `delivery_email`.)
In particular, I just reset data for the analytics
and corporate tests.
I guess `test_classes` has 100% line coverage
enforcement, which is a bit tricky for error
handling.
This fixes that, as well as making the name
snake_case and improving the format of the
errors.
This test was using the anti-pattern of doing an
assertion inside a conditional.
I added the `findOne` helper to make it easier
to write robust tests for scenarios like this.
If I send a message from a normal Zulip client, it is
considered to be "read" by me. But if I send it via
an API program (using my human account), the message
is not immediately "read" by me.
Now we handle this correctly in `get_raw_unread_data`.
The symptom of this was that these messages would get
"stuck" in "Private Messages" narrows until the next
time you reloaded your app.
Using an Exists subquery to avoid scanning the entire Subscription
table seems to speed things up greatly.
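The rough shape of an Exists-based query, sketched against assumed
model names (the real function may differ):

```python
from django.db.models import Exists, OuterRef

# Correlated subquery: for each candidate user, ask Postgres whether a
# matching active Subscription row exists, instead of scanning the
# whole Subscription table.
active_subs = Subscription.objects.filter(
    user_profile_id=OuterRef('id'),
    recipient=recipient,
    active=True,
)
subscribers = UserProfile.objects.filter(realm=realm).annotate(
    is_subscribed=Exists(active_subs),
).filter(is_subscribed=True)
```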
Set up with:
./manage.py populate_db --extra-users 2000 --extra-streams 1000
Tested on my computer, the original function was taking ~1.2 seconds,
the optimized version only ~0.05-0.06 seconds.
Likely fixes#13874; we can re-open if after production testing we
feel more work is warranted.
In 220c2a5ff3 I
introduced a query to find invites by delivery_email
but was still using email as the key.
For most realms `email` and `delivery_email` are
synonymous, so this temporary bug would not affect
them. For realms that restrict emails, the invite
would have probably failed for other reasons, but
the symptom would have been less clear.
We now have this API...
If you really just need to log in
and not do anything with the actual
user:
self.login('hamlet')
If you're gonna use the user in the
rest of the test:
hamlet = self.example_user('hamlet')
self.login_user(hamlet)
If you are specifically testing
email/password logins (used only in 4 places):
self.login_by_email(email, password)
And for failures uses this (used twice):
self.assert_login_failure(email)
This reduces query counts in some cases, since
we no longer need to look up the user again. In
particular, it reduces some noise when we
count queries for O(N)-related tests.
The query count is usually reduced by 2 per
API call. We no longer need to look up Realm
and UserProfile. In most cases we are saving
these lookups for the whole tests, since we
usually already have the `user` objects for
other reasons. In a few places we are simply
moving where that query happens within the
test.
In some places I shorten names like `test_user`
or `user_profile` to just be `user`.
We want a clean codepath for the vast majority
of cases of using api_get/api_post, which now
uses email and which we'll soon convert to
accepting `user` as a parameter.
These APIs that take two different types of
values for the same parameter make sweeps
like this kinda painful, and they're pretty
easy to avoid by extracting helpers to do
the actual common tasks. So, for example,
here I still keep a common method to
actually encode the credentials (since
the whole encode/decode business is an
annoying detail that you don't want to fix
in two places):
def encode_credentials(self, identifier: str, api_key: str) -> str:
"""
identifier: Can be an email or a remote server uuid.
"""
credentials = "%s:%s" % (identifier, api_key)
return 'Basic ' + base64.b64encode(credentials.encode('utf-8')).decode('utf-8')
But then the rest of the code has two separate
codepaths.
And for the uuid functions, we no longer have
crufty references to realm. (In fairness, realm
will also go away when we introduce users.)
For the `is_remote_server` helper, I just inlined
it, since it's now only needed in one place, and the
name didn't make total sense anyway, plus it wasn't
a super robust check. In context, it's easier
just to use a comment now to say what we're doing:
# If `role` doesn't look like an email, it might be a uuid.
if settings.ZILENCER_ENABLED and role is not None and '@' not in role:
# do stuff
Instead of trying to set the _requestor_for_logs attribute in all the
relevant places, we try to use request.user when possible (that will be
when it's a UserProfile or RemoteZulipServer as of now). In other
places, we set _requestor_for_logs to avoid manually editing the
request.user attribute, as it should mostly be left for Django to manage
it.
In places where we remove the "request._requestor_for_logs = ..." line,
it is clearly implied by the previous code (or the current surrounding
code) that request.user is of the correct type.
This uses the better, modern, user ID based API for sending messages
internally in the test suite, something that's convenient to do as a
follow-up to the migration to pass UserProfile objects to these
functions.
This commit mostly makes our tests less
noisy, since emails are no longer an important
detail of sending messages (they're not even
really used in the API).
It also sets us up to have more scrutiny
on delivery_email/email in the future
for things that actually matter. (This is
a prep commit for something along those
lines, kind of hard to explain the full
plan.)
We plan to use these records to check and record the schema of Zulip's
events for the purposes of API documentation.
Based on an original messier commit by tabbott.
In theory, a nicer version of this would be able to work directly off
the mypy type system, but this will be good enough for our use case.
This extends our email address visibility settings to deny access to
user email addresses even to organization administrators.
At the moment, they can of course change the setting (which leaves an
audit trail), but in the future only organization owners will be able
to change that setting.
While we're at it, we rewrite the settings_data.js test to cover all
the cases in a more consistent way.
Fixes #14111.
This isn't the only bug in our testing libraries with
EMAIL_ADDRESS_VISIBILITY, but we don't have a lot of tests that need
to deal with that set of settings.
We were using `code` to pass around messages.
The `code` field is designed to be a code, not
a human-readable message.
It's possible that we don't actually need two
flavors of messages for these type of validations,
but I didn't want to change that yet.
We **definitely** don't need to put two types of
message in the exception, so I fix that. Instead,
I just have the caller ask what level of detail
it needs.
I added a non-verbose message for the case of
system bots.
I removed the non-translated version of the message
for deactivated accounts, which didn't have test
coverage and is slightly more prone to leaking
email info that we don't want to leak.
In the prep commits leading up to this, we split
out two new helpers:
validate_email_is_valid
get_errors_for_new_emails
Now when we validate invites we use two separate
loops to filter our emails.
Note that the two extracted functions map to two
of the data structures that used to be handled
in a single loop, and now we break them out:
errors = validate_email_is_valid
skipped = get_errors_for_new_emails
The first loop checks that emails are even valid
to begin with.
The second loop finds out whether emails are already
in use.
The second loop takes advantage of this helper:
get_errors_for_new_emails
The second helper can query all potential new emails
with a single round trip to the database.
This reduces our query count.
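Schematically, the bulk check looks something like this (field names
assumed for illustration):

```python
# One query for the whole invite list, instead of one per email.
existing_emails = set(
    UserProfile.objects.filter(
        realm=realm,
        delivery_email__in=invitee_emails,
    ).values_list('delivery_email', flat=True)
)

skipped = {
    email: 'Already has an account.'
    for email in invitee_emails
    if email in existing_emails
}
```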
The main purpose of this new function is to allow
us to validate emails in bulk, which we don't do
yet (still setting the stage for that).
This is still a speedup, though, since in our
caller we grab only three fields now.
And other than that, we're essentially doing
the same query for the single-email case, just
outside the loop.
We are trying to kill off `validate_email`, so
we no longer call it from these tests.
These tests are already kind of low-level in
nature, so testing the more specific helpers
here should be fine.
Note that we also make the third parameter
to `validate_email` non-optional in this commit,
to preserve 100% coverage. This is really just
refactoring noise--we will soon eliminate the
entire function, but I didn't want to do everything
in a huge commit.
This is a prep commit that will allow us
to more efficiently validate a bunch of
emails in the invite UI.
This commit does not yet change any
behavior or performance.
A secondary goal of this commit is to
prepare us to eliminate some hackiness
related to how we construct
`ValidationError` exceptions.
It preserves some quirks of the prior
implementation:
- the strings we decided to translate
here appear haphazard (and often
get ignored anyway)
- we use `msg` in most codepaths,
but use `code` for invites
Right now we never actually call this with
more than one email, but that will change
soon.
Note that part of the rationale for the inner
method here is to avoid a test coverage bug
with `continue` in loops.
This has two goals:
- sets up a future commit to bulk-validate
emails
- the extracted function is more simple,
since it just has errors, and no codes
or deactivated flags
This commit leaves us in a somewhat funny
intermediate state where we have
`action.validate_email` being a glorified
two-line function with strange parameters,
but subsequent commits will clean this up:
- we will eliminate validate_email
- we will move most of the guts of its
other callee to lib/email_validation.py
To be clear, the code is correct here, just
kinda in an ugly, temporarily-disorganized
intermediate state.
We now use the `get_realm_email_validator()`
helper to build an email validator outside
the loop of emails in our invite list.
This allows us to perform RealmDomain queries
only once per request, instead of once per
email.
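In outline (using the names from this commit series, with the
surrounding plumbing simplified):

```python
from django.core.exceptions import ValidationError

# One RealmDomain lookup for the whole request...
validate_email_allowed_in_realm = get_realm_email_validator(realm)

errors = {}
for email in invitee_emails:
    try:
        # ...then a cheap in-memory check per email.
        validate_email_allowed_in_realm(email)
    except ValidationError as error:
        errors[email] = str(error)
```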
Now called:
validate_email_not_already_in_realm
We have a separate validation function that
makes sure that the email fits into a realm's
domain scheme, and we want to avoid naming
confusion here.
Without the fix here, you will get an exception
similar to below if you try to invite one of the
cross realm bots. (The actual exception is
a bit different due to some rebasing on my branch.)
File "/home/zulipdev/zulip/zerver/lib/request.py", line 368, in _wrapped_view_func
return view_func(request, *args, **kwargs)
File "/home/zulipdev/zulip/zerver/views/invite.py", line 49, in invite_users_backend
do_invite_users(user_profile, invitee_emails, streams, invite_as)
File "/home/zulipdev/zulip/zerver/lib/actions.py", line 5153, in do_invite_users
email_error, email_skipped, deactivated = validate_email(user_profile, email)
File "/home/zulipdev/zulip/zerver/lib/actions.py", line 5069, in validate_email
return None, (error.code), (error.params['deactivated'])
TypeError: 'NoneType' object is not subscriptable
Obviously, you shouldn't try to invite a cross
realm bot to your realm, but we want a reasonable
error message.
RESOLUTION:
Populate the `code` parameter for `ValidationError`.
BACKGROUND:
Most callers to `validate_email_for_realm` simply catch
the `ValidationError` and then report a more generic error.
That's also what `do_invite_users` does, but it has the
somewhat convoluted codepath through `validate_email`
that triggers this code:
try:
validate_email_for_realm(user_profile.realm, email)
except ValidationError as error:
return None, (error.code), (error.params['deactivated'])
The way that we're using the `code` parameter for
`ValidationError` feels hacky to me. The intention
behind `code` is to provide a descriptive error to
calling code, and it's not intended for humans, and
it feels strange that we actually translate this in
other places. Here are the Django docs:
https://docs.djangoproject.com/en/3.0/ref/forms/validation/
And then here's an example of us actually translating
a code (not part of this commit, just providing context):
raise ValidationError(_('%s already has an account') %
(email,), code = _("Already has an account."),
params={'deactivated': False})
Those codes eventually get put into InvitationError, which
inherits from JsonableError, and we do actually display
these errors in the webapp:
if skipped and len(skipped) == len(invitee_emails):
# All e-mails were skipped, so we didn't actually invite anyone.
raise InvitationError(_("We weren't able to invite anyone."),
skipped, sent_invitations=False)
I will try to untangle this somewhat in upcoming commits.
Previously, the input:
====================
- One
- Two
Two continued
====================
Would produce the same output as:
====================
- One
- Two
```
Two continued
```
====================
This was because our CodeBlockProcessor had a higher priority than
the ListIndentProcessor. This issue was discussed here:
https://chat.zulip.org/#narrow/stream/9-issues/topic/continuation.20paragraphs.20in.20list.20items.
The /delete_topic endpoint could be used to request the deletion of a topic,
which would cause do_delete_messages to be called with an empty set in
these cases:
1. Requesting deletion of an empty stream.
2. Requesting deletion of a topic in a private stream with history not
public to subscribers, if the requesting admin doesn't have access to
any of the messages in that topic.
This function slims down the data that we get
from the database in order to create the
streams part of our client payload.
We also fix a typo.
We also clearly distinguish between queries
and lists here.
This new method prevents us from getting fat
objects from the database.
Instead, now we just get ids from the database
to build our subqueries.
Note that we could also technically eliminate
the `set(...)` wrappers in this code to have
Django make a subquery and save a round trip.
I am postponing that for another commit (since
it's still somewhat coupled to some other
complexity in `do_get_streams` that I am trying
to cut through, plus it's not the main point
of this commit.)
BEFORE:
# old, still in use for other codepaths
def get_stream_subscriptions_for_user(user_profile: UserProfile) -> QuerySet:
# TODO: Change return type to QuerySet[Subscription]
return Subscription.objects.filter(
user_profile=user_profile,
recipient__type=Recipient.STREAM,
)
user_subs = get_stream_subscriptions_for_user(user_profile).filter(
active=True,
).select_related('recipient')
recipient_check = Q(id__in=[sub.recipient.type_id for sub in user_subs])
AFTER:
# newly added
def get_subscribed_stream_ids_for_user(user_profile: UserProfile) -> QuerySet:
return Subscription.objects.filter(
user_profile_id=user_profile,
recipient__type=Recipient.STREAM,
active=True,
).values_list('recipient__type_id', flat=True)
subscribed_stream_ids = get_subscribed_stream_ids_for_user(user_profile)
recipient_check = Q(id__in=set(subscribed_stream_ids))
We calculate `max_message_id` for the mobile client.
Our query now no longer joins to the Message table
and just grabs one value instead of fat objects.
For historical reasons we were creating Recipient
objects at some point in the typing-notifications
codepath. Now we just work with UserProfiles.
This removes some queries, as indicated by
the change to `len(queries)` in a couple of the
tests.
The one subtle thing that changes here is huddles.
If user 10 sends a typing notification that they
are talking to users 20 and 30, there might not
actually be a huddle for users 10/20/30, but
we were actually creating huddles on the fly!
There is no need to create huddles just for
typing notifications, since we don't even
share huddle ids with our clients. The clients
just infer the huddles.
Some of the code that gets killed off here as
somewhat "collateral damage" is some
defensive code related to formerly supporting streams
in typing indicators. The support for streams
was killed off almost as soon as we released
the feature, and the codepath is pretty clearly
user-centric at this point.
I actually like this pattern:
def check_send_typing_notification(...):
typing_notification = check_typing_notification(...)
do_send_typing_notification(...)
It can help divide responsibilities nicely and make it easy
to write detailed unit tests against each of the two helpers.
Unfortunately, the good things didn't really happen here, and
instead we got the worst aspects of the pattern:
- The responsibilities for validation leaked into
the second function.
- Both functions were doing sane things individually
that became not-so-sane in the big picture (namely,
we ended up making Recipient objects for no reason,
but if you read each of the helpers, it was just one
step that seemed reasonable).
- Passing around dictionaries for results can be annoying.
Also, the pattern made a lot more sense when the validation
for typing was a lot more complicated. My prior commit makes
it so that we only ever deal with a list of user_ids.
Anyway, now I'm inlining it. :)
Subsequent commits will clean up the more substantive issue
here, which is that we are building Recipients for no reason.
Before the Django 2.x upgrade, the DatabaseCreation
argument took an integer value. To deal with running
multiple test instances, we created a random start
range that could count up 100 workers until the next
random id, arbitrarily limiting the number of workers
to 100.
Post upgrade, we can now use string values, enabling
the database + worker numbers to be more readable, as
well as removing the cap on the worker count.
This field wasn't accessed by any clients and was a less robust
version of the user_id field. Any client interested in
who did message edits should be able to handle working with user IDs
rather than email addresses.
This is preparation for supporting moving messages between streams in
some cases.
It doesn't actually have any functional effect, since flush_message
clears the message unconditionally anyway.
This is mostly refactoring, but we also prevent a new
type of value error (list of non-int-or-string). The
new test code helps enforce that.
Cleanup includes:
- Use early-exit for email case.
- Rename helpers to get_validate_*.
- Avoid clumsy rebuilding of lists in helpers.
- Avoid the confusing `recipient` name (which
can be confused with the model by the same
name).
- Just delegate duplicate-id/email-removal to
the helpers.
The cleaner structure allows us to eliminate a couple of
mypy workarounds.
Credits to @xpac1985 for reporting, debugging and proposing a fix to the
issue. The proposed fix was modified slightly by @hackerkid to set the
correct value for max_invites and upload_quota_gb. Tests added by
@hackerkid.
Fixes #13974.
This fixes a confusing aspect of how our automated tests worked
previously, where we'd send almost all HTTP requests in the unlikely
configuration with no User-Agent string specified.
We need to adjust query counts in a few tests that now are a bit
cheaper because they now can take advantage of a Client object created
in server_initialization.py in `process_client`.
To avoid some hidden bugs in tests caused by every ldap user having the
same password, we give each user a different password, generated based
on their uids (to avoid some ugly hard-coding in a bunch of places).
‘req_var in request.GET’ was previously believed to be slow from
profiling results. However, the real explanation for those profiling
results is that WSGIRequest.GET is a lazy cached property, so there’s
no reason to avoid it if we’re accessing request.GET anyway.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
In https://github.com/zulip/zulip/pull/12823 some changes to the realms
structure have been made, so now both in production and development
cross-realm bots live in the realm with string_id "zulipinternal".
There was a TODO in retention code to eliminate a conditional in a query
that became redundant with this change, and also the zulipinternal realm
should be omitted from the archiving process in archive_messages().
We use this single regular expression for processing essentially every
request, so it's definitely worth hinting to Python that we're going
to do so by compiling it. Saves about 40us per request.
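For illustration (the pattern below is made up, not Zulip's actual
regular expression):

```python
import re

# Compiling once at import time avoids re-parsing the pattern on every
# request; re's internal cache helps, but a precompiled pattern also
# skips the cache lookup.
API_PATH_RE = re.compile(r'^/(?:api/v1|json)/')

def is_api_request(path: str) -> bool:
    return API_PATH_RE.match(path) is not None
```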
A sloppy implementation of the main has_request_variables wrapper
function meant that it did two very inefficient things:
* To combine together the GET and POST parameters, it would make a
copy of the request.GET QueryDict object, which combined with the
fact that these objects are slow to access, consumed about 90us per
argument.
* Doing this in a loop (one time per argument), rather than once,
which resulted in us doing this 11 times for a `GET /events` query.
Fixing this to just make a dictionary and combine things with some
small loops saved about 1 millisecond from the total runtime of GET
/events (for comparison, the total actual work of that view function
is about 700ms).
We need to fix at least one test that used a bad mock HttpRequest
object that didn't have a .GET property.
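The shape of the fix, much simplified (not the actual wrapper code):
build one plain dict from both QueryDicts up front, then do cheap
dict lookups per declared argument:

```python
def combine_request_params(request) -> dict:
    # One pass over each QueryDict, instead of copying request.GET
    # once per REQ-declared argument.
    params = {}
    for query_dict in (request.GET, request.POST):
        for key, value in query_dict.items():
            params[key] = value
    return params
```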
The comment explains this issue, but effectively, the upgrade to
Django 2.x means that Django's built-in django.request logger was
writing to our errors logs WARNING-level data for every 404 and 400
error. We don't consider user errors to be a problem worth
highlighting in that log file.
Django 2.2.x is the next LTS release after Django 1.11.x; I expect
we'll be on it for a while, as Django 3.x won't have an LTS release
series out for a while.
Because of upstream API changes in Django, this commit includes
several changes beyond the requirements upgrade:
* urls: django.urls.resolvers.RegexURLPattern has been replaced by
django.urls.resolvers.URLPattern; affects OpenAPI code and related
features which re-parse Django's internals.
https://code.djangoproject.com/ticket/28593
* test_runner: Change number to suffix. Django changed the name in this
ticket: https://code.djangoproject.com/ticket/28578
* Delete now-unnecessary SameSite cookie code (it's now the default).
* forms: urlsafe_base64_encode returns string in Django 2.2.
https://docs.djangoproject.com/en/2.2/ref/utils/#django.utils.http.urlsafe_base64_encode
* upload: Django's File.size property replaces _get_size().
https://docs.djangoproject.com/en/2.2/_modules/django/core/files/base/
* process_queue: Migrate to new autoreload API.
* test_messages: Add an extra query caused by .refresh_from_db() losing
the .select_related() on the Realm object.
* session: Sync SessionHostDomainMiddleware with Django 2.2.
There's a lot more we can do to take advantage of the new release;
this is tracked in #11341.
Many changes by Tim Abbott, Umair Waheed, and Mateusz Mandera squashed
are squashed into this commit.
Fixes #10835.
Since essentially the first use of Tornado in Zulip, we've been
maintaining our Tornado+Django system, AsyncDjangoHandler, with
several hundred lines of Django code copied into it.
The goal for that code was simple: We wanted a way to use our Django
middleware (for code sharing reasons) inside a Tornado process (since
we wanted to use Tornado for our async events system).
As part of the Django 2.2.x upgrade, I looked at upgrading this
implementation to be based off modern Django, and it's definitely
possible to do that:
* Continue forking load_middleware to save response middleware.
* Continue manually running the Django response middleware.
* Continue working out a hack involving copying all of _get_response
to change a couple lines, allowing our Tornado code to not
actually return the Django HttpResponse so we can long-poll. The
previous hack of returning None stopped being viable with the Django 2.2
MiddlewareMixin.__call__ implementation.
But I decided to take this opportunity to look at trying to avoid
copying material Django code, and there is a way to do it:
* Replace RespondAsynchronously with a response.asynchronous attribute
on the HttpResponse; this allows Django to run its normal plumbing
happily in a way that should be stable over time, and then we
proceed to discard the response inside the Tornado `get()` method to
implement long-polling. (Better yet might be raising an
exception?). This lets us eliminate maintaining a patched copy of
_get_response.
* Removing the @asynchronous decorator, which didn't add anything now
that we only have one API endpoint backend (with two frontend call
points) that could call into this. Combined with the last bullet,
this lets us remove a significant hack from our
never_cache_responses function.
* Calling the normal Django `get_response` method from zulip_finish
after creating a duplicate request to process, rather than writing
totally custom code to do that. This lets us eliminate maintaining
a patched copy of Django's load_middleware.
* Adding detailed comments explaining how this is supposed to work,
what problems we encounter, and how we solve various problems, which
is critical to being able to modify this code in the future.
A key advantage of these changes is that the exact same code should
work on Django 1.11, Django 2.2, and Django 3.x, because we're no
longer copying large blocks of core Django code and thus should be
much less vulnerable to refactors.
There may be a modest performance downside, in that we now run both
request and response middleware twice when longpolling (once for the
request we discard). We may be able to avoid the expensive part of
it, Zulip's own request/response middleware, with a bit of additional
custom code to save work for requests where we're planning to discard
the response. Profiling will be important to understanding what's
worth doing here.