We also have the caller pass in the property name for an
additional sanity check.
Note that we don't yet handle the possibility of extra_data;
that will be a subsequent commit.
Also, the stream_id fields aren't in Realm.property_types,
so we specify their types in the checker.
This is a pretty big commit, but I really wanted it
to be atomic.
All realm_user/update events look the same from
the top:
_check_realm_user_update = check_events_dict(
    required_keys=[
        ("type", equals("realm_user")),
        ("op", equals("update")),
        ("person", _check_realm_user_person),
    ]
)
And then we have a bunch of fields for person that
are optional, and we usually only send user_id plus
one other field, with the exception of avatar-related
events:
_check_realm_user_person = check_dict_only(
    required_keys=[
        # vertical formatting
        ("user_id", check_int),
    ],
    optional_keys=[
        ("avatar_source", check_string),
        ("avatar_url", check_none_or(check_string)),
        ("avatar_url_medium", check_none_or(check_string)),
        ("avatar_version", check_int),
        ("bot_owner_id", check_int),
        ("custom_profile_field", _check_custom_profile_field),
        ("delivery_email", check_string),
        ("full_name", check_string),
        ("role", check_int_in(UserProfile.ROLE_TYPES)),
        ("email", check_string),
        ("user_id", check_int),
        ("timezone", check_string),
    ],
)
I would start the code review by just skimming the changes
to event_schema.py, to get the big picture of the complexity
here. Basically the schema is just the combined superset of
all the individual schemas that we remove from test_events.
Then I would read test_events.py.
The simplest diffs are basically of this form:
- schema_checker = check_events_dict([
-     ('type', equals('realm_user')),
-     ('op', equals('update')),
-     ('person', check_dict_only([
-         ('role', check_int_in(UserProfile.ROLE_TYPES)),
-         ('user_id', check_int),
-     ])),
- ])
# ...
- schema_checker('events[0]', events[0])
+ check_realm_user_update('events[0]', events[0], {'role'})
Instead of a custom schema checker, we use the "superset"
schema checker, but then we pass in the set of fields that we
expect to be there. Note that 'user_id' is always there.
So most of the heavy lifting happens in this new function
in event_schema.py:
def check_realm_user_update(
    var_name: str, event: Dict[str, Any], optional_fields: Set[str],
) -> None:
    _check_realm_user_update(var_name, event)
    keys = set(event["person"].keys()) - {"user_id"}
    assert optional_fields == keys
But we still do some more custom checks in test_events.py.
custom profile fields: check keys of custom_profile_field
def test_custom_profile_field_data_events(self) -> None:
+ self.assertEqual(
+ events[0]['person']['custom_profile_field'].keys(),
+ {"id", "value", "rendered_value"}
+ )
+ check_realm_user_update('events[0]', events[0], {"custom_profile_field"})
+ self.assertEqual(
+ events[0]['person']['custom_profile_field'].keys(),
+ {"id", "value"}
+ )
avatar fields: check more specific types, since the superset
schema has check_none_or(check_string)
def test_change_avatar_fields(self) -> None:
+ check_realm_user_update('events[0]', events[0], avatar_fields)
+ assert isinstance(events[0]['person']['avatar_url'], str)
+ assert isinstance(events[0]['person']['avatar_url_medium'], str)
+ check_realm_user_update('events[0]', events[0], avatar_fields)
+ self.assertEqual(events[0]['person']['avatar_url'], None)
+ self.assertEqual(events[0]['person']['avatar_url_medium'], None)
Also note that avatar_fields is a set of four fields that
are set in event_schema.
full name: no extra work!
def test_change_full_name(self) -> None:
- schema_checker('events[0]', events[0])
+ check_realm_user_update('events[0]', events[0], {'full_name'})
test_change_user_delivery_email_email_address_visibilty_admins:
no extra work for delivery_email
check avatar fields more directly
roles (several examples) -- actually check the specific role
def test_change_realm_authentication_methods(self) -> None:
- schema_checker('events[0]', events[0])
+ check_realm_user_update('events[0]', events[0], {'role'})
+ self.assertEqual(events[0]['person']['role'], role)
bot_owner_id: no extra work!
- change_bot_owner_checker_user('events[1]', events[1])
+ check_realm_user_update('events[1]', events[1], {"bot_owner_id"})
- change_bot_owner_checker_user('events[1]', events[1])
+ check_realm_user_update('events[1]', events[1], {"bot_owner_id"})
- change_bot_owner_checker_user('events[1]', events[1])
+ check_realm_user_update('events[1]', events[1], {"bot_owner_id"})
timezone: no extra work!
- timezone_schema_checker('events[1]', events[1])
+ check_realm_user_update('events[1]', events[1], {"email", "timezone"})
Obviously, this file will soon grow--this
was the easiest way to start without introducing
noise into other commits.
It will soon be structurally similar
to frontend_tests/node_tests/lib/events.js--I
have some ideas there. But this should also
help for things like API docs.
We add the ability to supply optional_keys,
and we don't mutate the list of required
keys that gets passed into us.
We also enforce that there is a "type"
field.
(We will use optional_keys soon.)
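For illustration, a minimal sketch of the behavior described above (the exact signatures in event_schema.py are assumed here, not copied):
from typing import Any, Callable, Sequence, Tuple

from zerver.lib.validator import check_dict_only

Validator = Callable[[str, Any], None]

def check_events_dict(
    required_keys: Sequence[Tuple[str, Validator]],
    optional_keys: Sequence[Tuple[str, Validator]] = (),
) -> Validator:
    # Copy the inputs so the caller's lists are never mutated.
    required = list(required_keys)
    optional = list(optional_keys)
    # Every event must declare a "type" field.
    assert "type" in {name for name, _ in required}
    return check_dict_only(required_keys=required, optional_keys=optional)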
This change makes our handling of youtube-url previews consistent
with how we handle our inline images. This allows the previews to
render next to the paragraph that links to the youtube video.
Follow-up to PR #15773.
In particular importing gitter data leads to having accounts with these
noreply github emails. We generally only want users to have emails that
we can actually send messages to, so we'll keep the old behavior of
disallowing sign up with such an email address. However, if an account
of this type already exists, we should allow the user to have access to
it.
This commit rewrites the way addresses are collected. If
the header with the address is not an AddressHeader (for instance,
Delivered-To and Envelope-To), we take its string representation.
Fixes: #15864 ("Error in email_mirror - _UnstructuredHeader has no attribute addresses").
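For illustration, a rough sketch of this collection approach using the stdlib email package (the actual helper in email_mirror may differ):
from email.headerregistry import AddressHeader
from email.message import EmailMessage
from typing import List

def collect_addresses(message: EmailMessage) -> List[str]:
    addresses: List[str] = []
    for header_name in ("To", "Cc", "Delivered-To", "Envelope-To"):
        for header in message.get_all(header_name, []):
            if isinstance(header, AddressHeader):
                # Structured headers expose parsed Address objects.
                addresses.extend(addr.addr_spec for addr in header.addresses)
            else:
                # Unstructured headers have no .addresses; fall back to str().
                addresses.append(str(header))
    return addresses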
Zulip converts :) to the 1F642 Unicode emoji and promotes the same emoji
in the popular section of the emoji picker.
Previously Zulip has labeled 1F642 as "slight smile". While that name
conforms to the Unicode standard (which describes the code point as
SLIGHTLY SMILING FACE), it didn't match our use case of the emoji.
If a user types :) or selects the first smile in the emoji picker they
probably mean to express a regular "smile" and not a "slight smile",
which raises the question why they are only smiling slightly.
This commit relabels 1F642 as :smile: and our previous :smile:, 263A, as
:smiling_face:. Note that 263A looks different in our three supported
emoji sets, so it is not suited to be our "default smile".
This change does not require a migration since our emoji system stores
both Unicode code points and names and handles name changes transparently.
The ERROR_BOT setting is not None during testing, so running
test_report_error without creating the errors stream was causing an exception.
This commit creates a stream named "errors", thus removing the exception and
the error log spam it caused in ./tools/test-backend output.
This commit verifies error logging while testing data export in
test_notify_realm_export_on_failure using assertLogs, so that the logs
do not spam the test output.
This commit tests whether error logs are logged when an error occurs during
testing of check_send_webhook_fixture_message, using assertLogs. Using
assertLogs ensures the logs are not printed as spam in the test output.
This commit verifies warning logs while testing validate_api_key when the
profile is an incoming webhook but is_webhook is not set to True.
Verification is done using assertLogs so that the logs do not cause spam
by being printed in the test output.
This commit verifies the error logs printed during testing of the
log_and_report function using assertLogs, without printing them in the
test output and hence avoiding spam.
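For reference, a minimal example of the assertLogs pattern used throughout these test cleanups (the test and logger names here are illustrative):
import logging
from unittest import TestCase

class ExampleLoggingTest(TestCase):
    def test_error_is_logged(self) -> None:
        # The captured records are asserted on instead of being printed
        # to the ./tools/test-backend output.
        with self.assertLogs("zerver.lib.export", level="ERROR") as logs:
            logging.getLogger("zerver.lib.export").error("Data export failed")
        self.assertEqual(len(logs.output), 1)
        self.assertIn("Data export failed", logs.output[0])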
A few major themes here:
- We remove short_name from UserProfile
and add the appropriate migration.
- We remove short_name from various
cache-related lists of fields.
- We allow import tools to continue to
write short_name to their export files,
and then we simply ignore the field
at import time.
- We change functions like do_create_user,
create_user_profile, etc.
- We keep short_name in the /json/bots
API. (It actually gets turned into
an email.)
- We don't modify our LDAP code much
here.
When you post to /json/users, we no longer
require or look at the short_name parameter,
since we don't use it in any meaningful way.
An upcoming commit will eliminate it from the
database.
This fixes up some complex helpers that may
have had some value before f-strings came along,
but they mostly obscured the logic for
people reading the tests.
We still keep really simple helpers for the
common cases, but there are no optional
parameters for them.
One goal of this fix is to remove the
short_name concept, and we just explicitly
set senders everywhere we need them.
We also now have each test just explicitly set
its reaction_type.
For cases where we have custom message ids
or senders, we just inline the simple call
to api_post.
We generally want to avoid having two sibling test
suites depend on each other, unless there's a real
compelling reason to share code. (And if there is
code to share, we can usually promote it to either
test_helpers or ZulipTestCase, as I did here.)
This commit is also prep for the next commit, where
I try to simplify all of the helpers in EmojiReactionBase.
Especially now that we have f-strings, it is usually
better to just call api_post explicitly than to
obscure the mechanism with thin wrappers around
api_post. Our url schemes are pretty stable, so it's
unlikely that the helpers are actually gonna prevent
future busywork.
It's not clear to me why these passed mypy
before, given this:
def assert_realm_values(f: Callable[[Realm], Any], ...
But this is clearly more accurate.
This issue isn't something a system administrator needs to take action
on -- it's a likely minor logic bug around organization
administrators moving topics between streams.
As a result, it shouldn't send error emails to administrators.
This is a hacky fix to avoid spoiler content leaking in emails. The
general idea here is to tell people to open Zulip to view the actual
message in full.
We create a mini-markdown parser here that strips away the fence content
that has the 'spoiler' tag for the text emails.
Our handling of html emails is much better in comparison where we can
use lxml to parse the spoiler blocks.
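A rough sketch of the text-email idea (not the actual implementation), assuming ```-style fences carrying a 'spoiler' tag:
import re

SPOILER_FENCE_RE = re.compile(r"^```\s*spoiler\b.*?^```\s*$", re.DOTALL | re.MULTILINE)

def strip_spoilers_for_text_email(content: str) -> str:
    # Replace the entire fenced spoiler block with a hint to open Zulip.
    return SPOILER_FENCE_RE.sub("(Open Zulip to see the spoiler content.)", content)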
We include tests for the new implementation to avoid churning the
codebase too much so this can be easily reverted when we are able to
re-enable the feature.
The tests had a bunch of different ways to create
users; now we are consistent. (This is a bit of
a prep step, too, to allow us to easily clean
Hamlet's existing words before each test.)
We could certainly do better with the handling here, but using the raw
string that the user gave us is okayish for now.
Proper formatting of timestamps requires handling locales and timezones
of the receiver as well which is a larger project.
We now do something sensible for spoilers in notifications. A message
like:
```spoiler Luke's father is
Vader. Don't tell anyone else.
```
would be rendered as:
Luke's father is (...)
If the push_notification for the UserMessage is already active,
we don't send any push notification to the user. This may
happen due to race conditions.
Added and fixed test cases affected by this.
Previously, we rendered the twitter previews outside of a
spoiler block at the end of the message. The commit series
ending with this commit fixes that by inlining twitter
previews instead of appending them all at the end. As a
consequence of the inlining, we have fixed the issue here.
This commit just adds a test to assert that.
Fixes #15518.
This is similar to our behavior with image previews, and helps
reduce clutter in the final rendered html.
We add the string 'Tweet: ' to our existing tests so those tests
remain the same.
This commit makes our handling of twitter previews consistent with
how we handle our inline images so that tweets render next to the
paragraph that links to the tweet.
This particular commit has been a long time coming. For reference,
!avatar(email) was an undocumented syntax that simply rendered an
inline 50px avatar for a user in a message, essentially allowing
you to create a user pill like:
`!avatar(alice@example.com) Alice: hey!`
---
Reimplementation
If we decide to reimplement this or a similar feature in the future,
we could use something like `<avatar:userid>` syntax which is more
in line with creating links in markdown. Even then, it would not be
a good idea to add this instead of supporting inline images directly.
Since any use cases of such a syntax are in automation, we do not need
to make it user-friendly, and something like the following is a better
implementation that doesn't need a custom syntax:
`![avatar for Alice](/avatar/1234?s=50) Alice: hey!`
---
History
We initially added this syntax back in 2012 and it was 'deprecated'
from the get go. Here's what the original commit had to say about
the new syntax:
> We'll use this internally for the commit bot. We might eventually
> disable it for external users.
We eventually did start using this for our github integrations in 2013
but since then, those integrations have been neglected in favor of
our GitHub webhooks which do not use this syntax.
When we copied `!gravatar` to add the `!avatar` syntax, we also noted
that we want to deprecate the `!gravatar` syntax entirely - in 2013!
Since then, we haven't advertised either of these syntaxes anywhere
in our docs, and the only two places where this syntax remains is
our game bots that could easily do without these, and the git commit
integration that we have deprecated anyway.
We do not have any evidence of someone asking about this syntax on
chat.zulip.org when developing an integration, and rightfully so: only
the people who work on Zulip (and specifically, markdown) are likely
to stumble upon it and try it out.
This is also the only piece of code due to which we had to look up the
emails -> user_id mapping in our backend markdown. By removing this,
we entirely remove the backend markdown's dependency on user emails
to render messages.
---
Relevant commits:
- Oct 2012, Initial commit c31462c278
- Nov 2013, Update commit bot 968c393826
- Nov 2013, Add avatar syntax 761c0a0266
- Sep 2017, Avoid email use c3032a7fe8
- Apr 2019, Remove from webhook 674fcfcce1
Log RealmAuditLog in do_set_realm_property and do_remove_realm_domain.
Tests for the changes are written in test_events because that saves
duplicating code for test_change_realm_property.
Added a new event type, STREAM_CREATED, in AbstractRealmAuditLog.
Since we ultimately create streams in the create_stream_if_needed function
in zerver/lib/streams.py, we log the RealmAuditLog entry there.
We pass acting_user when create_stream_if_needed or ensure_stream
is called.
Added tests in test_audit_log.
This commit moves the SoftDeactivationMessageTest out of
test_messages.py (which at the moment has a mixed category of tests) into
a more logical file, test_soft_deactivation.py.
This commit moves the TestMessageForIdsDisplayRecipientFetching class, which
has tests regarding display_recipient being filled in by MessageDict, to
test_message_dict.py.
This commit moves InternalPrepTest test class to test_message_send.py
because it tests internal_send_* and internal_prep_* functions which are
used for internal message sending in zulip.
This commit moves a few tests related to testing proper sending of private
messages from the PrivateMessagesTest class in test_messages.py to a new class
in test_message_send.py.
The commit moves test_create_mirror_user_despite_race, which is not related
to message sending, from the MessagePOSTTest class in test_message_send.py to
test_mirror_users.py.
This commit extracts out the MessagePOSTTest class from test_messages.py
initially.
In future commits other related message sending tests will be moved from
test_messages.py to test_message_send.py.
Starting with extracting out MirroredMessageUsersTests, as it is more related
to mirror users than to anything message-specific.
In a future commit, may extract out some tests from MessagePOSTTest as well
but still deciding on those.
We had been using the !time() syntax for timestamps so far. Since it's
an unreleased feature, we can make changes without affecting many
people.
Fixes #15442.
Prior to this commit, whenever convert was imported from zerver.lib.markdown
it was aliased as markdown_convert for readability.
This commit renames the convert function to markdown_convert so that it can be
imported directly without aliasing and without compromising readability.
For users who are unsubscribed from the new stream but are in
the old stream, we delete the UserMessage.
We send the delete_message event only to guest users,
who have completely lost access to the moved messages; for other
users we send the normal update_message event, which moves
the messages to the new, unsubscribed stream, which
would otherwise look broken to the
user without reloading the webpage.
Our previous OpenAPI schema validator that we implemented ourselves
was useful training wheels for understanding OpenAPI properly, and
was mostly correct. But given that we've finally reached the point
where our OpenAPI file accurately describes the API, it makes sense to
switch to use an official OpenAPI validator. We lose some ability to
do exclude rules for particular elements, but those were primarily
important for us when we had a lot of them.
As part of this change, we need to add `additionalProperties: false`
for all of our dictionaries/objects where we've documented every
parameter; otherwise the OpenAPI schema checker won't know that we
expect every parameter to be documented.
This was hiding an actual type error in test_cache: a mismatch between
the object ID type, which is str, and the default id_fetcher, which
returns int.
Mypy’s insufficient support for default generic arguments basically
means we can’t use them without a lot of overloading, and there are
not enough callers here to justify that.
https://github.com/python/mypy/issues/3737
We avoid this being super messy at the places where the code calls this by
adding some less generic wrappers for generic_bulk_cached_fetch.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
According to @showell:
> All the slow decorators can die. That was a failed experiment of
> mine from 2014 days. I have been meaning to kill them for a couple years
> now. I wrote this with the best of intentions, but I believe it's
> now just cruft. We never made a "fast" mode, for one. And we kept
> writing more and more slow tests, haha.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We remove support for the old clients which required an event for
each message to clear notification.
This is justified since it has been around 1.5 years since we started
supporting the bulk operation (and so essentially nobody is using a
mobile app version so old that it doesn't support the batched
approach) and the unbatched approach has a maintenance and reliability
cost.
This commit changes the name of the fixture that referenced bugdown.
The word "backend" in backend_markdown is important, to make it clear that
it is backend markdown. These test fixtures are also used in the frontend,
so highlighting this is useful.
This commit is part of series of commits aimed at renaming bugdown to
markdown.
I also fix the code formatting so it's more
considerate of folks that have smaller monitors
or do side-by-side editing. And it's more
diff friendly as well.
I want to avoid creating the same schema checkers
for multiple tests, and it's easiest to create the
schema checkers at module scope (and we may possibly
even move them to another file eventually).
This will allow us to more easily instrument our
code to find duplicate schemas.
The goal here is to make test_events.py be
primarily focused on testing specific actions
and then validating:
- schemas
- apply_events logic
Then the new module here is a bit of a
kitchen sink of old tests, although it's
primarily focused on the actual mechanics
of the event system:
- logging
- register
- queue_ids
- fetching initial state
- client descriptors
The classes toward the bottom arguably
should go into more feature-specific
test modules, but the main goal now is
to purify test_events.py. (We may eventually
want to rename test_events.py to something
more like test_action_events.py, but the
current name has some doc references and
tribal knowledge around it.)
The current description of object and array parameters in
zulip.yaml is wrong and renders incorrect requests when using OpenAPI
tools such as SwaggerIO, etc. Fix it by encoding it correctly and
changing tests accordingly.
Fixes #14380.
There is still some miscellaneous cleanup that
has to happen for things like analytics queries
and dead code in node tests, but this should
remove the main use of pointers in the backend.
(We will also still need to drop the DB field.)
This test is poorly written, in that it doesn't actually do any
verification of the results, but this at least is the correct
conversion of the data setup for it such that if we did verify its
results, the data we populated was relevant.
When we are just building lots of UserActivity
records to simulate user activity, it's misleading
to use some specific endpoint that people reading
the test may think is pertinent to the actual
test.
The pointer endpoints that I replaced in this
commit are a good example of this principle--
they will no longer exist in an upcoming commit,
but these tests would have kept running and been
even more confusing.
Rename the rest of the function names, classes, and comments containing bugdown
to markdown in test_markdown.py. Also change occurrences of the refactored
classes and functions in other files.
This commit is part of series of commits aimed at renaming bugdown to
markdown.
Preparatory commit before removing bugdown alias for markdown. This
will prevent same variable name errors when name markdown is used
instead of bugdown.
This commit is part of series of commits aimed at renaming bugdown to
markdown.
This commit changes the class name BugdownListPreprocessor to
MarkdownListPreprocessor. It also changes corresponding references
in tests.
This is part of a series of commits which aim at renaming bugdown to
markdown.
Rename the file, and all references to the file and module, from test_bugdown.py
to test_markdown.py.
This commit is part of a series of commits that rename bugdown to markdown.
This commit is the first of a few commits which aim to change all the
bugdown references to markdown. This commit renames the files,
file path mentions, and changes the imports.
Variables and other references to bugdown will be renamed in subsequent
commits.
We send user_id of the referrer instead of email in the invites dict.
Sending user_ids is more robust, as those are an immutable reference
to a user, rather than something that can change with time.
Updates to the webapp UI to display the inviters for more convenient
inspection will come in a future commit.
Change the variable `name` to `date_sent`, as `name` actually stores
the date sent. Also change the data types of `name` and `create_time`
to integer, as they actually have an empty decimal value.
Due to authentication restrictions, a deployment may need to direct
traffic for mobile applications to an alternate URI to take advantage of
an alternate authentication mechanism. By default the standard realm URI
will be used, but if overridden in the settings file, an alternate URI
can be substituted.
We send a remove mobile push notification to the users who are
no longer mentioned after the content of the message was edited.
This also corrects the notification count for the mobile apps
where a user was previously mentioned in a muted stream / topic, the
message was edited, and the user is no longer mentioned now.
Hence, this fixes the case where the user has read all their unreads
but the notification badge on the app is still positive.
Fixes #15428.
We've been seeing an exception in server_event_dispatch.js in
production where in large organizations, sometimes when a new user
joined, every other browser in the organization would throw an
exception processing some sort of realm_user/update event.
It turns out the cause was that when a user copies their profile from
an existing user account with a user-uploaded avatar, the code path we
reused to properly set the avatar sent a realm_user/update event about
the avatar change -- for a user that hadn't been fully created and
certainly hadn't had the realm_user/add event sent for them.
We fix this and add tests and comments to prevent it recurring.
(Removed an incorrect docstring while working on this).
The restart event was always handled pretty similarly
to pointer, so I use restart events now for this
test (in preparation of eliminating pointer events).
This fixes an issue that caused HTML entities inside of inline code
blocks to be converted rather than being displayed literally.
The upstream python-markdown now handles this correctly, so we just use
their implementation with our changes for removing .strip(). As a result
of this migration, we switch backtick pattern to an inline processor
too.
Fixes #12056.
For the codeblock counterpart of this issue, we should follow the
upstream PR https://github.com/Python-Markdown/markdown/pull/990.
Co-authored-by: Rohitt Vashishtha <aero31aero@gmail.com>
Note that I don't actually convert the
checker from check_dict to check_dict_only,
because that would be a user-facing change,
but I think we can sweep a lot of things
like this after the next release.
This avoids some code duplication as well
as adding some missing fields.
We also use check_dict_only to prevent
folks from adding new fields to the
relevant events without updating these
tests. (A bigger sweep comes later.)
As the code comment indicates, we just
use a strict check here rather than
pretending that the test exercises a
more complicated schema for the config
data, which is dynamic in nature.
Cleaning up config_data is outside the
scope of this PR; my main goal is to
eliminate check_dict calls (usually in favor
of check_dict_only).
Because of other validation on these values, I don't believe any of
these does anything different, but these changes improve readability
and likely make GitHub's code scanners happy.
The helper should be used instead of constructing the dict manually.
Change get_account_data_dict, on the GitHubAuthBackendTest
class, so it has a third argument, user_avatar_url.
This is preparation for supporting use of the GitHub avatar
upon user registration (when the user logs in using GitHub).
We now have our muted topics use tuples internally,
which allows us to tighten up the annotation
for get_topic_mutes, as well as our schema
checking.
We want to deprecate sub_validator=None
for check_list, so we also introduce
check_tuple here. Now we also want to deprecate
check_tuple, but it's at least isolated now.
We will use this for data structures that are tuples,
but which are sent as lists over the wire. Fortunately,
we don't have too many of those.
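A minimal sketch of the check_tuple idea (the signatures and error handling here are assumed, not the real validator):
from typing import Any, Callable, Sequence

Validator = Callable[[str, Any], None]

def check_tuple(sub_validators: Sequence[Validator]) -> Validator:
    def f(var_name: str, val: Any) -> None:
        # The value arrives as a list over the wire, even though we treat
        # it as a tuple internally.
        if not isinstance(val, (list, tuple)):
            raise ValueError(f"{var_name} is not a list")
        if len(val) != len(sub_validators):
            raise ValueError(f"{var_name} should have exactly {len(sub_validators)} items")
        for i, (item, sub_validator) in enumerate(zip(val, sub_validators)):
            sub_validator(f"{var_name}[{i}]", item)
    return f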
The plan is to convert tuples to dictionaries,
but backward compatibility may be tricky in some
places.
This commit changes do_get_user_invites function to not return
multiuse invites to non-admin users. We should only return multiuse
invites to admins, as we only allow admins to create them.
This commit changes the PreregistrationUser.invite_as dict to have
same set of values as we have for UserProfile.role.
This also adds a data migration to update the already existing
PreregistrationUser and MultiuseInvite objects.
Currently, we use -1 as the Realm.message_retention_days value to retain
messages forever unless specified at the stream level for a particular stream;
that is, no policy is set at the realm level. But this is incoherent with what
we use for Stream.message_retention_days, where -1 means
> disable retention policy for this stream unconditionally
which can be confusing from an API standpoint.
So instead of trying some hack to reset the value to NULL, or using some
other value like -2 for RETAIN_MESSAGE_FOREVER for the API, it is
much more intuitive to use a string like 'forever' that can be mapped to
RETAIN_MESSAGE_FOREVER at the backend. And this is similar to what we use
for stream settings as well.
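A hypothetical helper showing the mapping; the real constant and parameter handling in Zulip may differ:
from typing import Union

RETAIN_MESSAGE_FOREVER = -1

def parse_message_retention_days(value: Union[int, str]) -> int:
    # The API accepts the string 'forever'; the database stores -1.
    if value == "forever":
        return RETAIN_MESSAGE_FOREVER
    if isinstance(value, int) and value > 0:
        return value
    raise ValueError("Bad value for message_retention_days")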
To be more consistent with the meaning in the Stream model, and to make
it easier to have a reasonable settings API, we get rid of the None
value for Realm.message_retention_days in favor of the value -1 to
represent the "don't delete messages" default policy.
subdomain=None didn't make much sense as a value, and wasn't actually in
use anywhere, except one test where it was accidental. All tests specify
the subdomain explicitly, so we should change the type to str, and make
it an obligatory kwarg.
Adds the ability to set a SAML attribute which contains a
list of subdomains the user is allowed to access. This allows a Zulip
server with multiple organizations to use SAML attributes to control
which organization each user can access.
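A simplified sketch of the filtering idea (the attribute name here is illustrative and would be configured by the server admin):
from typing import Dict, List

def user_allowed_in_subdomain(saml_attributes: Dict[str, List[str]], subdomain: str) -> bool:
    # The SAML attribute carries the list of Zulip organizations (subdomains)
    # this user may access; deny the login if the target isn't listed.
    allowed_subdomains = saml_attributes.get("zulip_subdomains", [])
    return subdomain in allowed_subdomains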
Cleaned up and adapted by Mateusz Mandera to fit our conventions and
needs more.
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
A generator that yields values without receiving or returning them is
an Iterator. Although every Iterator happens to be iterable, Iterable
is a confusing annotation for generators because a generator is only
iterable once.
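For example, a yield-only generator function is annotated like this:
from typing import Iterator

def count_from(start: int) -> Iterator[int]:
    # Yields values only; never receives or returns them.
    n = start
    while True:
        yield n
        n += 1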
Signed-off-by: Anders Kaseorg <anders@zulip.com>
In a decorator annotated with generic type (ViewFuncT) -> ViewFuncT,
the type variable ViewFuncT = TypeVar(…) must be instantiated to
the *same* type in both places. This amounts to a claim that the
decorator preserves the signature of the view function, which is not
the case for decorators that add a user_profile parameter.
The corrected annotations enforce no particular relationship between
the input and output signatures, which is not the ideal type we might
get if mypy supported variadic generics, but is better than enforcing
a relationship that is guaranteed to be wrong.
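A small sketch contrasting the two annotation styles (illustrative names, not the actual Zulip decorators):
from typing import Any, Callable, TypeVar

ViewFuncT = TypeVar("ViewFuncT", bound=Callable[..., Any])

def signature_preserving(view_func: ViewFuncT) -> ViewFuncT:
    # Honest use of ViewFuncT -> ViewFuncT: the returned callable has the
    # exact same signature as the original.
    return view_func

def adds_user_profile(view_func: Callable[..., Any]) -> Callable[..., Any]:
    # The wrapper's signature differs from view_func's (it injects a
    # user_profile argument), so we must not claim ViewFuncT -> ViewFuncT.
    def wrapper(request: Any, *args: Any, **kwargs: Any) -> Any:
        user_profile = object()  # stand-in for the real lookup
        return view_func(request, user_profile, *args, **kwargs)
    return wrapper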
This removes a bunch of ‘# type: ignore[call-arg] # mypy doesn't seem
to apply the decorator’ annotations. Mypy does apply the decorator,
but the decorator’s incorrect annotation as signature-preserving made
it appear as if it didn’t.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
With this implementation of the feature of the automatic theme
detection, we make the following changes in the backend, frontend and
documentation.
This replaces the previous night_mode boolean with an enum, with the
default value being to use the prefers-color-scheme feature of the
operating system to determine which theme to use.
Fixes: #14451.
Co-authored-by: @kPerikou <44238834+kPerikou@users.noreply.github.com>
We can now invite new users as realm owners. We restrict only
owners to invite new users as owners both for single invite
and multiuse invite link. Also, only owners can revoke or resend
owner invitations.
Old: a validator returns None on success and returns an error string
on error.
New: a validator returns the validated value on success and raises
ValidationError on error.
This allows mypy to catch mismatches between the annotated type of a
REQ parameter and the type that the validator actually validates.
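A minimal before/after illustration (names are illustrative, not the exact Zulip validators):
from typing import Any, Optional

from django.core.exceptions import ValidationError

def check_int_old(var_name: str, val: Any) -> Optional[str]:
    # Old convention: return an error string, or None on success.
    if not isinstance(val, int):
        return f"{var_name} is not an integer"
    return None

def check_int_new(var_name: str, val: Any) -> int:
    # New convention: return the validated value, or raise on error.
    if not isinstance(val, int):
        raise ValidationError(f"{var_name} is not an integer")
    return val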
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We had a bug in `validate_against_openapi_schema` that prevented it
from correctly inspecting nested arrays.
Fix the bug and address all the exceptions, either via
EXCLUDE_PROPERTIES or fixing them when simple. Also add a test case
for nested verification.
Attachment objects in production are only created in one place, which
passes a size. Additionally, I verified in multiple production
environments with old data that this never actually happens (or has
happened).
So we should make the data model correctly reflect the possibilities here.
Rename the validator to check_union, to conform
more to Python typing nomenclature.
And we rename one of the test helpers to the
simpler `check_types`. (The test helper
was using "variable" in the "var" sense.)
We assert that the post was successful, to give
more immediate feedback for tests that don't
bother to check the return value and may be
implicitly assuming this method just works in
all cases.
And we also make it more convenient for tests
that are happy-path tests--they don't have to
do the assertion themselves. (And they're still
free to do deeper checks on the json.)
We opt out with allow_fail=True. We probably want
a more direct API eventually for tests that are
clearly trying to test the failure path for
subscribing to streams.
It's possible that a couple tests here that I added
allow_fail=True to just have flawed data setup--
I don't have time to investigate all cases, but
hopefully they will at least stand out more.
Two things were broken here:
* we were using name(s) instead of id(s)
* we were always sending lists that only
had one element
Now we just send "stream_id" instead of "subscriptions".
If anything, we should start sending a list of users
instead of a list of streams. For example, see
the code below:
if peer_user_ids:
    for new_user_id in new_user_ids:
        event = dict(type="subscription", op="peer_add",
                     stream_id=stream.id,
                     user_id=new_user_id)
        send_event(realm, event, peer_user_ids)
Note that this only affects the webapp, as mobile/ZT
don't use this.
api docs filenames are basically the operationId of their endpoint
in zulip.yaml with `_` replaced by `-`. But some operationIds have
changed, so change the affected filenames. Make changes in other
files accordingly.
This adds a new client_capability that clients such as the mobile apps
can use to avoid unreasonable network bandwidth consumed sending
avatar URLs in organizations with 10,000s of users.
Clients don't strictly need this data, as they can always use the
/avatar/{user_id} endpoint to fetch the avatar if desired.
This will be more efficient especially for realms with
10,000+ users because the avatar URLs would increase the
payload size significantly and cost us more bandwidth.
Fixes #15287.
This extends the get_accounts_for_email test by adding a deactivated
user and asserting that get_accounts_for_email doesn't return any accounts
for that deactivated user.
Fixes #14807.
_setup_export_files modifies the zulip realm. We used to
call realm.refresh_from_db in tests after _setup_export_files was
called to make sure that the change is reflected. But sometimes
calling refresh_from_db was missed out here and there.
This commit makes calling refresh_from_db after _setup_export_files
unnecessary.
This commit adds backend support for setting message_retention_days
while creating streams and updating it for an existing stream. We only
allow organization owners to set/update it for a stream.
The 'message_retention_days' field for a stream existed previously also, but
there was no way to set it while creating streams or to update it for an
existing stream using any endpoint.
Previously, we had implemented:
<span class="timestamp" data-timestamp="unix time">Original text</span>
The new syntax is:
<time timestamp="ISO 8601 string">Original text</time>
<span class="timestamp-error">Invalid time format: Original text</span>
Since python and JS interpretations of the ISO format are very
slightly different, we force both of them to drop milliseconds
and use 'Z' instead of '+00:00' to represent that the string is
in UTC. The resultant strings look like: 2011-04-11T10:20:30Z.
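A small sketch of producing that form on the Python side (drop microseconds, use a trailing 'Z' for UTC):
from datetime import datetime, timezone

def format_timestamp(dt: datetime) -> str:
    dt = dt.astimezone(timezone.utc).replace(microsecond=0)
    return dt.isoformat().replace("+00:00", "Z")

# format_timestamp(datetime(2011, 4, 11, 10, 20, 30, tzinfo=timezone.utc))
# => '2011-04-11T10:20:30Z'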
Fixes #15431.
The term `parameter` is a better word than `argument` for data passed
to an API endpoint; this is why OpenAPI uses it in its terminology.
Replace `argument` with `parameter` in the API docs to improve their
readability.
Fixes #15435.
Fixes #14498.
When a topic is moved to a different stream, the message may no
longer be reachable for a guest user, if the user is not subscribed
to the new stream.
We used to send message update event to the client in these cases,
which seems to be confusing both to the client updating the message
and the server sending push_notifications for it.
Now, we delete the UserMessage entry for these messages for the
user and send a delete_message event to the client, which makes
both the push_notification code and the event-handling client treat
the message as deleted, and hence no confusion arises in the code.
This makes the system store and track PushDeviceToken objects on
the local Zulip server when using the push notifications bouncer
and includes tests for this.
This is something we need to implement end-to-end encryption for
push notifications. We'll add the encryption key as an additional
property on the local PushDeviceToken object.
It also likely adds some value in the case that a server were to
switch between using the bouncer service and sending notifications
directly, though in practice that's unlikely to happen.
The most important change here is the one in the maybe_send_to_registration
codepath, as the insufficient validation there could lead to fetching
an expired PreregistrationUser that was invited as an administrator
even years ago, leading to this registration ending up with the
new user being a realm administrator.
Combined with the buggy migration in
0198_preregistrationuser_invited_as.py, this led to users incorrectly
joining as organization administrators by accident. But even without
that bug, this issue could have allowed a user who was invited as an
administrator but then had that invitation expire and then joined via
social authentication to incorrectly join as an organization administrator.
The second change is in ConfirmationEmailWorker, where this wasn't a
security problem, but if the server was stopped for long enough, with
some invites to send out email for in the queue, then after starting it
up again, the queue worker would send out emails for invites that
had already expired.
Google has removed the Google Hangouts brand, thus we are removing
it as a video chat provider option.
This commit removes the Google Hangouts integration and adds a migration
that sets all realms that are using Hangouts as their video chat
provider to the default, Jitsi.
With changes by tabbott to improve the overall video call documentation.
Fixes: #15298.
Fixes #14828.
Giving the /subdomain/<token>/ url there could feel buggy if the user
ended up using the token in the desktop app, and then tried clicking the
"continue in browser" link - which had the same token that would now be
expired. It's sufficient to simply link to /login/ instead.
This adds support for a "spoiler" syntax in Zulip's markdown, which
can be used to hide content that one doesn't want to be immediately
visible without a click.
We use our own spoiler block syntax inspired by Zulip's existing quote
and math block markdown extensions, rather than requiring a token on
every line, as is present in some other markdown spoiler
implementations.
Fixes#5802.
Co-authored-by: Dylan Nugent <dylnuge@gmail.com>
This adds a new function `get_apns_badge_count()` to
fetch the badge count value for a user's push notifications and
then send that value with the APNs payload.
Once a message is read from the web app, the count is
decremented accordingly and a push notification with
`event: remove` is sent to the iOS clients.
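A rough sketch of the counting idea, assuming Zulip's UserMessage model and its where_active_push_notification() SQL helper (referenced later in this series); the real implementation may differ:
from zerver.models import UserMessage, UserProfile

def get_apns_badge_count(user_profile: UserProfile) -> int:
    # Count the user's messages that currently have an active mobile
    # push notification outstanding.
    return (
        UserMessage.objects.filter(user_profile=user_profile)
        .extra(where=[UserMessage.where_active_push_notification()])
        .count()
    )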
Fixes #10271.
Mocking `get_base_payload()` verifies the wrong output
when the code is actually correct. So, it's better that
we call the real function here, especially when we are
adding the Apple case.
This commit removes is_old_stream property from the stream objects
returned by the API. This property was unnecessary and is essentially
equivalent to 'stream_weekly_traffic != null'.
We compute sub.is_old_stream in stream_data.update_calculated_fields
in frontend code and it is used to check whether we have a non-null
stream_weekly_traffic or not.
Fixes #15181.
Fixes #15285.
This event will be used more now for guest users when moving
topic between streams (See #15277). So, instead of deleting
messages in the topic as part of different events which is
very slow and a bad UX, we now handle the messages to delete in
bulk which is a much better UX.
This is designed to have no user-facing change unless the client
declares bulk_message_deletion in its client_capabilities.
Clients that do so will receive a single bulk event for bulk deletions
of messages within a single conversation (topic or PM thread).
Backend implementation of #15285.
This commit adds a restriction on admins setting the message retention policy.
We now allow only organization owners to set the message retention
policy.
The dropdown for changing the retention policy is also disabled in the UI for admins.
This uses `where_active_push_notification()` instead of
the flag condition to keep it consistent with the app
code as we use the same function to filter active push
notifications everywhere else.
Overrides some of internal functions of python-social-auth
to handle native flow.
Credits to Mateusz Mandera for the overridden functions.
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
This adds a powerful end-to-end test for Zulip's API documentation:
For every documented API endpoint (with a few declared exceptions that
we hope to remove), we verify that every API response received by our
extensive backend test suite matches the declared schema.
This is a critical step towards being able to have complete, high
quality API documentation.
Fixes #15340.
When doing query for same topic names in a stream, we should do a
case-insensitive exact match for the topic, since that's the data
model for topics in Zulip.
This is a test case verifying the current codebase produces incorrect
results. All the messages should have been edited in this case
regardless of the topic's case unless they belong to a different
stream.
Generated by pyupgrade --py36-plus --keep-percent-format.
Now including %d, %i, %u, and multi-line strings.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The narrow parameter was incorrectly documented as a one-level array
rather than an array of arrays, and the tests incorrectly expected
this due to a combination of design and implementation bugs.
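For illustration, the corrected shape (an array of operator/operand arrays):
narrow = [["stream", "Denmark"], ["topic", "lunch"]]
# and not a flat one-level array like ["stream", "Denmark"]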
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Use read-only types (List ↦ Sequence, Dict ↦ Mapping, Set ↦
AbstractSet) to guard against accidental mutation of the default
value.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
There seems to have been a confusion between two different uses of the
word “optional”:
• An optional parameter may be omitted and replaced with a default
value.
• An Optional type has None as a possible value.
Sometimes an optional parameter has a default value of None, or None
is otherwise a meaningful value to provide, in which case it makes
sense for the optional parameter to have an Optional type. But in
other cases, optional parameters should not have Optional type. Fix
them.
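For example (an illustrative signature, not taken from the codebase):
from typing import Optional

def export_realm(limit: int = 1000, public_only: bool = False,
                 output_dir: Optional[str] = None) -> None:
    # `limit` and `public_only` are optional parameters with meaningful
    # non-None defaults, so they are not Optional; `output_dir` genuinely
    # allows None, so Optional[str] is appropriate there.
    ...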
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We triage on the event's "op" field instead
of using the type check.
And then we still get the implicit assertion
that any "add" event produces a dict
for "subscriptions", since we dereference it
in the assertion for "subscribers".
Fixes #2665.
Regenerated by tabbott with `lint --fix` after a rebase and change in
parameters.
Note from tabbott: In a few cases, this converts technical debt in the
form of unsorted imports into different technical debt in the form of
our largest files having very long, ugly import sequences at the
start. I expect this change will increase pressure for us to split
those files, which isn't a bad thing.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Automatically generated by the following script, based on the output
of lint with flake8-comma:
import re
import sys

last_filename = None
last_row = None
lines = []

for msg in sys.stdin:
    m = re.match(
        r"\x1b\[35mflake8 \|\x1b\[0m \x1b\[1;31m(.+):(\d+):(\d+): (\w+)", msg
    )
    if m:
        filename, row_str, col_str, err = m.groups()
        row, col = int(row_str), int(col_str)
        if filename == last_filename:
            assert last_row != row
        else:
            if last_filename is not None:
                with open(last_filename, "w") as f:
                    f.writelines(lines)
            with open(filename) as f:
                lines = f.readlines()
            last_filename = filename
        last_row = row
        line = lines[row - 1]
        if err in ["C812", "C815"]:
            lines[row - 1] = line[: col - 1] + "," + line[col - 1 :]
        elif err in ["C819"]:
            assert line[col - 2] == ","
            lines[row - 1] = line[: col - 2] + line[col - 1 :].lstrip(" ")

if last_filename is not None:
    with open(last_filename, "w") as f:
        f.writelines(lines)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This new endpoint returns a 'user' dictionary which, as of now,
contains a single key 'is_subscribed' with a boolean value that
represents whether the user with the given 'user_id' is subscribed
to the stream with the given 'stream_id'.
Fixes #14966.
The only clients that should use the typing
indicators endpoint are our internal clients,
and they should send a JSON-formatted list
of user_ids.
We now enforce this, which removes some
complexity surrounding legacy ways of sending
users, such as emails and comma-delimited
strings of user_ids.
There may be a very tiny number of mobile
clients that still use the old emails API.
This won't have any user-facing effect on
the mobile users themselves, but if you type
a message to your friend on an old mobile
app, the friend will no longer see typing
indicators.
Also, the mobile team may see some errors
in their Sentry logs from the server rejecting
posts from the old mobile clients.
The error messages we report here are a bit
more generic, since we now just use REQ
to do validation with this code:
validator=check_list(check_int)
This also allows us to remove a test hack
related to the API documentation. (We changed
the docs to reflect the modern API in an
earlier commit, but the tests couldn't be
fixed while we still had the more complex
semantics for the "to" parameter.)
Add test to validate example responses in zulip.yaml. Also change
zulip.yaml for some wrong examples or for cases which were not
covered by `test-api`. Also enhance `validate_against_openapi_schema`.
ACCESS_TOKEN_URL works a bit differently for Apple authentication, so
we removed and re-mocked the ACCESS_TOKEN_URL mock in the
`register_extra_endpoints` override of the Apple auth test class.
It is cleaner to have it as a generic feature of `social_auth_test`.
So, this commit adds a function that returns the token_data_dict that
we had earlier and is called in the ACCESS_TOKEN_URL mock.
This function is overridden in the Apple auth test class to generate a
payload of the format that Apple auth expects.
Thanks to Mateusz Mandera for the simple idea.
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
This commit adds some basic checks while adding or removing
realm owner status of a user and adds code to change owner
status of a user using update_user_backend.
This also adds restriction on removing owner status of the
last owner of realm. This restriction was previously on
revoking admin status, but as we have added a more privileged
role of realm owner, we now have this restriction on owner
instead of admin.
We need to apply that restriction both in the role change code path
and the deactivate code path.
This commit sets the role of the user creating the realm as
realm owner after the realm is created.
Previously, the role of user creating the realm was set as admin.
But now we want it to be owner because owners have the highest
privilege level.
The test_management_commands use in particular was causing pickling
errors when the test failed, because Python 3 filter returns an
iterator, not a list.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Fixes this error in the dev environment:
$ ./manage.py checkconfig
Error: You must set ZULIP_ADMINISTRATOR in /etc/zulip/settings.py.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This implementation overrides some of PSA's internal backend
functions to handle `state` value with redis as the standard
way doesn't work because of apple sending required details
in the form of POST request.
Includes a mixin test class that'll be useful for testing
Native auth flow.
Thanks to Mateusz Mandera for the idea of using redis and
other important work on this.
Documentation rewritten by tabbott.
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
This was previously hardcoded with agreement between the Zulip backend
and frontend as 86400 seconds (1 day). Now, it's still hardcoded in
the backend, but arranged in a way where we could add a setting
without any changes to the mobile and terminal apps to update logic.
Fixes #15278.
This was implemented in 2012 to avoid showing a loading indicator for
fetching messages for users with no message history. However, the
Zulip onboarding UI always creates some message history, and fetching
history is fast, so this is likely clutter more than a useful
optimization.
This was written by Rishi for a very brief purpose a few years ago,
and it doesn't serve much purpose now other than to be a place we
update in code sweeps.
We're migrating to using the cleaner zulip.com domain, which involves
changing all of our links from ReadTheDocs and other places to point
to the cleaner URL.
Adds a top-level logger `zulip.auth` in `settings.LOGGING`,
with the default handlers `DEFAULT_ZULIP_HANDLERS` and
an extra handler that writes to `/var/log/zulip/auth.log`.
Each auth backend uses its own logger, `self.logger`, which
is of the form 'zulip.auth.<backend name>'.
This way it's clear which auth backend generated the log,
and it is easier to look for all authentication logs in one file.
Besides the above mentioned changes, a `name` attribute is added to
`ZulipAuthMixin` so that these logging calls wouldn't raise
any issues when logging is attempted in a class without a `name` attribute.
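A hypothetical sketch of the per-backend logger naming (not the exact mixin code):
import logging

class ZulipAuthMixin:
    name = "generic"

    @property
    def logger(self) -> logging.Logger:
        # Each backend logs as 'zulip.auth.<backend name>', all of which
        # propagate up to the top-level 'zulip.auth' logger.
        return logging.getLogger(f"zulip.auth.{self.name}")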
Also, in the tests we use a new way to check whether logger calls are made,
i.e. we use `assertLogs` to test whether something is logged.
Thanks to Mateusz Mandera for the idea of having a separate logger
for auth backends and the suggestion of using `assertLogs`.
This commit removes short_name and client_id fields from the user
objects returned by get_profile_backend because neither of them
had a purpose.
* short_name hasn't been present anywhere else in the Zulip API for
several years, and isn't set through any coherent algorithm.
* client_id was a forgotten 2013-era predecessor to the queue_id field
returned by the register_event_queue process.
The combination of these changes gets us close to having `get_profile`
have the exact same format as other endpoints fetching a user object.
This commit changes get_profile_backend to be based on format_user_row
such that it's a superset of the fields for our other endpoints for
getting data on a user.
To be clear, this does not remove any of the existing fields that
were returned by this endpoint.
This change adds some fields to the User object returned by the
endpoint. API docs are updated accordingly for the added fields.
Slack owners and primary owners will be mapped to zulip
realm owners on import.
Previously, we mapped the owner and primary owner roles of slack
to realm admins in zulip. As we have added ROLE_REALM_OWNER in
8bbc074, we now map slack owners and primary owners to owners in
zulip.
Tests are modified to check all the 3 cases:
- Slack workspace primary owner
- Slack workspace owner
- Slack workspace admin
This commit also has docs changes in 'import-from-slack.md'.
Calling jwt.decode without an algorithms list raises a
DeprecationWarning. This is for protecting against
symmetric/asymmetric key confusion attacks.
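For example, with PyJWT (the key and algorithm here are illustrative):
import jwt

SECRET = "example-shared-secret"
token = jwt.encode({"email": "alice@example.com"}, SECRET, algorithm="HS256")
# Passing an explicit algorithms list avoids the DeprecationWarning and the
# key-confusion risk described above.
payload = jwt.decode(token, SECRET, algorithms=["HS256"])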
This is a backwards-incompatible configuration change.
Fixes #15207.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Generated by pyupgrade --py36-plus --keep-percent-format, but with the
NamedTuple changes reverted (see commit
ba7906a3c6, #15132).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This adds a 'target_users' parameter to the `attempt_unsubscribe_of_principal`
function in test_subs.py, which accepts a list of UserProfile objects to be
unsubscribed, instead of defining users in the function itself.
This change makes the code cleaner and more readable.
Also, 'other_user_subbed' parameter is changed to 'target_users_subbed'
to clearly depict the use of this parameter.
An option to disable breadcrumb messages was given in both the message edit
form and the topic edit stream popover.
The user now has the option to select, via checkboxes in the UI, which
streams to send the notification of a topic's stream edit to.
We pipe realm_id through functions where it is available;
this helps us avoid querying for realm_id in a loop when
multiple messages are being processed.
Fixes warnings like this:
/srv/zulip-py3-venv/lib/python3.8/site-packages/django/db/models/fields/__init__.py:1424: RuntimeWarning: DateTimeField MutedTopic.date_muted received a naive datetime (2020-01-01 00:00:00) while time zone support is active.
warnings.warn("DateTimeField %s received a naive datetime (%s)"
Signed-off-by: Anders Kaseorg <anders@zulip.com>
datetime.timezone is available in Python ≥ 3.2. This also lets us
remove a pytz dependency from the PostgreSQL scripts.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
url_to_a returns Union[Element, str], but str cannot be appended to
Element; that would raise TypeError at runtime.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
zerver/lib/i18n.py:34:28: E741 ambiguous variable name 'l'
zerver/lib/webhooks/common.py:103:34: E225 missing whitespace around operator
zerver/tests/test_queue_worker.py:563:9: E306 expected 1 blank line before a nested definition, found 0
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This reimplements our Zoom video call integration to use an OAuth
application. In addition to providing a cleaner setup experience,
especially on zulipchat.com where the server administrators can have
done the app registration already, it also fixes the limitation of the
previous integration that it could only have one call active at a time
when set up with typical Zoom API keys.
Fixes #11672.
Co-authored-by: Marco Burstein <marco@marco.how>
Co-authored-by: Tim Abbott <tabbott@zulipchat.com>
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
This commit fixes the tests to use role instead of is_admin in
update user endpoint. These changes got missed in the original
commit 9fa6067 which included the change of using role in update user
endpoint and were also not caught in tests.
This commit removes redundant lines from the test for changing
full name in test_users.py. Those lines were passing is_admin=False
for already non admin user and were added in 41fbb16, but these lines
are of no use now.
This commit modifies the backend to accept user ids when subscribing
users to streams.
It also migrates all existing tests to use this API, aside from a
small set of tests for the legacy API.
There's no reason to send data beyond the user `id` of the uploader,
and reason not to, as the previous model was both awkward when
`author=None` and resulted in unnecessary parsing complexity for
clients.
Modified by tabbott to add the frontend changes and API documentation.
Fixes #15115.
This commit changes the person dict in event sent by do_change_user_role
to send role instead of is_admin or is_guest.
This makes things much more straightforward for our upcoming primary
owners feature.
This saves the completely unnecessary work of mapping the Client name
to its ID. Because we had in-process caching of the immutable Client
objects, this isn't a material performance win, but it will eventually
let us delete that caching logic and have a simpler system.
This commit changes the update user API endpoint to accept role
as parameter instead of the bool parameters is_guest and is_admin.
User role dropdown in user info modal is also modified to use
"dropdown_options_widget".
Modified by tabbott to document the API change.
This commit changes do_change_user_role to support adding or removing
the realm owner status of user and sending an event.
We also extend the existing test for do_change_user_role to do a bit
more validation to confirm the audit log records all values of role.
The new realm_owner role is added as option for role field in
UserProfile model and is_realm_owner is added as property for the user
profile.
Aside from some basic tests validating the logic, this has no effect
as users cannot yet end up being set as realm owners.
If the key parameter on POST isn't correct, we won't be
able to find the confirmation object, which will lead to
an exception. To deal with it more gracefully, we are
catching the exception and redirecting to the
confirmation_link_expired_error page.
If a user receives more than one invite to join a
realm, after that user registers, all the remaining
invitations should be revoked, preventing them from being
listed in active invitations on the admin panel.
To do this, we added a new prereg_user status,
STATUS_REVOKED.
We also added a confirmation_link_expired_error page
in case the user tries to click on a revoked invitation.
This page has a link to the login page.
Fixes: #12629
Co-authored-by: Arunika <arunikayadav42@gmail.com>
This tests that a user who is already registered is
redirected to the login page when they click on an
invitation.
Co-authored-by: Arunika <arunikayadav42@gmail.com>
Tests attached a UserProfile to confirmation objects,
which is not really valid, as this is the only place
where that is done. Now we attach a PreregUser to
the confirmation object, making the tests correct.
Co-authored-by: Arunika <arunikayadav42@gmail.com>
On the invitations panel, invites were being removed when
the user clicked on the invitation's link. Now we only remove
them when the user completes registration.
Fixes: #12281
This fixes some issues with unclear terminology and visual styling in
the pages for the new free trial.
There's probably more we can and should usefully do in the future.
mock is just a backport of the standard library’s unittest.mock now.
The SAMLAuthBackendTest change is needed because
MagicMock.call_args.args wasn’t introduced until Python
3.8 (https://bugs.python.org/issue21269).
The PROVISION_VERSION bump is skipped because mock is still an
indirect dev requirement via moto.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We handle fenced code blocks in a preprocessor, and > style blockquotes
are parsed in a blockprocessor. Pymarkdown doesn't run the preprocessors
again on any blocks that it is parsing, and is unlikely to accept our
solution upstream; they intend to convert fenced_code to a block parser.
We simply run all the preprocessors on the text again, with the exception
of NormalizeWhitespace which removed delimiters used by HtmlStash to mark
preprocessed html code. To counter this, we subclass NormalizeWhitespace
and use our customized version for when it is called from a blockparser.
Upstream issue: https://github.com/Python-Markdown/markdown/issues/53
Fixes #12800.
This commit merges do_change_is_admin and do_change_is_guest to a
single function do_change_user_role which will be used for changing
role of users.
do_change_is_api_super_user is added as a separate function for
changing is_api_super_user field of UserProfile.
This is important, because lack of this meant that the POST request in
our tests still had the old session, with various params stored in it.
This mechanism doesn't work in reality in SAML, so the backend uses
redis to store and recover the params instead. Without flushing the
session, these tests would fail to catch some breakages in the
redis-based mechanism.
This will protect us in case of some kinds of bugs that could allow
making requests such as password authentication attempts to tornado.
Without restricting the domains to which the in-memory backend can
be applied, such bugs would lead to attackers having multiple times
larger rate limits for these sensitive requests.
Helps to see if users are often trying to log in with deactivated
accounts.
A use case: Track down whether any deactivated bot users are still
trying to access the API.
This implementation adds a new key `inactive_user_id`
to `return_data` in the function `is_user_active`, which
checks whether a `user_profile` is active. This reduces the effort
of getting the `user_id` just before logging.
Modified tests for line coverage.
Instead of plumbing the idp to /complete/saml/ through redis, it's much
more natural to just figure it out from the SAMLResponse, because the
information is there.
This is also a preparatory step for adding IdP-initiated sign in, for
which it is important for /complete/saml/ to be able to figure out which
IdP the request is coming from.
We now parse tex and latex as regular languages, highlighting them
with pygments. We only allow 'math' to trigger latex rendering,
which is in line with the documentation.
This commit shifts our timestamp syntax to be of the form:
<span class="timestamp" data-timestamp="123456"></span>
since value is not a valid attribute of span elements.
This adds support for syntax like: !time(Jun 7 2017, 6:30 PM) so that
everyone sees the time in their own local timezone. This can be used
when scheduling online meetings, etc.
This adds some hardcoded values for timezones, because there is
no surefire way of determining the timezone easily. However,
since the main way of using the feature should be a typeahead for
entering the time, this shouldn't be cause for much concern.
Fixes #5176.
The swagger validator is a basic tool to check whether our
openapi specification file follows the basic syntax. But to ensure
that our zulip.yaml file is not only syntactically compatible but
also describes our API well, we need to add custom tests. This
commit currently checks whether each endpoint has an `operationId`
and a valid tag. It also makes it easier to check for custom rules
in the future.
If the IdP authentication API is flaky for some reason, it can return
bad http responses, which will raise HTTPError inside
python-social-auth. We don't want to generate a traceback
in those cases, but simply log the exception and fail gracefully.
During events such as a stream / topic name edit for a topic, we were
running database queries in a loop for each message, for reactions,
submessages and realm_id. This commit reduces the queries to be
done only for realm_id, which is yet to be fixed.
This is accomplished by building messages with empty reactions
and submessages and then updating them in the messages using bulk
queries.
This commit allows non-admins to set the stream post policy while creating
streams.
The restriction was there to prevent a user from creating a stream in which
they cannot post themselves, but this will be taken care of by the
stream admin feature.
I'm not sure exactly what series of history got us here, but we were
fetching the mobile_user_ids data for all users in the organization,
regardless of whether they were recently active (and thus relevant for
the main presence data set). And doing so in a sloppy fashion
(sending every user ID over the wire, rather than just having the
database join on Realm).
Fixing this saves a factor of 4-5 on the total runtime of a presence
request on organizations with 10Ks of users like chat.zulip.org; more
like 25% in an organization with 150. Since large organizations are
very heavily weighted in the overall cost of presence, this is a huge
win.
Fixes part of #13734.
The query to fetch the latest user activity was missing an
`.order_by('last_visit')`. This meant that the results were being
ordered by the `id`, which resulted in us getting `update_message_flags`
action performed on the client that the user installed last, instead of
being client agnostic and fetching the "global" last
`update_message_flags` action performed by the user.
Zulip's openapi specification in zulip.yaml has various examples
for various schemas. Validate the examples against their respective
schemas to ensure that all the examples are schematically correct.
Part of #14100.
The `email` field for identifying the user being modified in these
events was not used by either the webapp or other official Zulip
clients. Instead, it was legacy data from before we switched years
ago to sending user_id fields as the correct way to uniquely identify
a user.
If the #random channel in Slack is deactivated, we should follow
Zulip's data model of not allowing deactivated, default streams.
This had apparently happened in zulipchat.com for a few organizations,
resulting in weird exceptions trying to invite new users.
When a user changes their avatar image, the user's avatar in popovers
wasn't being correctly updated, because of browser caching of the
avatar image. We add a version to the request to get the image in
the same format we use elsewhere, so the browser knows when to use the
cached image or to make a new request to the server.
Edited by Tim to preserve/fix sort orders in some tests, and update
zulip_feature_level.
Fixes: #14290
We remove the `owner` field from `page_params/realm_bots`
and bot-related events.
In the recent commit 155f6da8ba
we added `owner_id`, which we now use everywhere we need
bot owners for.
We also bump the `API_FEATURE_LEVEL` to 5 here. We
had already documented this in the prior commit to
add `owner_id`.
Note that we don't have to worry about mobile/ZT clients
here--we only deal with bot data in the webapp.
For the below payloads we want `owner_id` instead
of `owner`, which we should deprecate. (The
`owner` field is actually an email, which is
not a stable key.)
page_params.realm_bots
realm_bot/add
realm_bot/update
IMPORTANT NOTE: Some of the data served in
these payloads is cached with the key
`bot_dicts_in_realm_cache_key`.
For page_params, we get the new field
via `get_owned_bot_dicts`.
For realm_bot/add, we modified
`created_bot_event`.
For realm_bot/update, we modified
`do_change_bot_owner`.
On the JS side, we no longer
look up the bot's owner directly in
`server_events_dispatch` when we get
a realm_bot/update event. Instead, we
delegate that job to `bot_data.js`.
I modified the tests accordingly.
When editing a message where we mention a usergroup, we would remove
the 'mentioned' flag from messages, resulting in the message being
hidden from your mentions in the UI. This was reported by Greg Price in
https://chat.zulip.org/#narrow/stream/9-issues/topic/missing.20mention.
We add the same code that we use in do_send_messages to calculate the
updated mentions_user_ids. We add some tests alongside other user group
mention tests in test_bugdown.
Slack has disabled creation of legacy tokens, which means we have to use other
tokens for importing the data. Thus, we shouldn't throw an error if the token
doesn't match the legacy token format.
Since we do not have any other validation for those tokens yet, we log a warning
but still try to continue with the import assuming that the token has the right
scopes.
See https://api.slack.com/changelog/2020-02-legacy-test-token-creation-to-retire.
While this functionality to post slow queries to a Zulip stream was
very useful in the early days of Zulip, when there were only a few
hundred accounts, it has long since stopped being useful, since (1) the
total request volume on larger Zulip servers run by Zulip developers
makes this impractical, and (2) other server operators don't want
real-time notifications of slow backend queries. The right structure
for this is just a log file.
We get rid of the queue and replace it with a "zulip.slow_queries"
logger, which will still log to /var/log/zulip/slow_queries.log for
ease of access to this information and propagate to the other logging
handlers. Reducing the number of queues is good for lowering zulip's
memory footprint and restart performance, since we run at least one
dedicated queue worker process for each one in most configurations.
These changes should be included in bd9b74436c,
as they make sure that realms on Zulip's limited plan won't be able to
change the `message_retention_days` setting.
The pointer doesn't get updated when a user is only reading messages in
narrowed views. But, we use the pointer position to determine the
furthest read time, which causes the bankruptcy banner to show up even
for users who have been actively reading and sending messages.
This commit switches to using the time of the last update_message_flags
activity by a user to determine the time of last activity.
Since production testing of `message_retention_days` is finished, we can
enable this feature on the organization settings page. We already had this
setting in the frontend, but it had bit-rotted and was not rendered in the
templates.
Here we replace our past text-input based setting with a
dropdown-with-text-input setting approach, which is more consistent with our
existing UI.
Along with the frontend changes, we also incorporate a backend change to
handle making the retention period forever. This change introduces a new
converter, `to_positive_or_allowed_int`, which only allows positive integers
and one allowed value, for settings like `message_retention_days` which can
be a positive integer or have the value `Realm.RETAIN_MESSAGE_FOREVER` when
we change the setting to retain messages forever.
This change made `to_not_negative_int_or_none` redundant, so it was removed
as well.
Fixes: #14854
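As a rough sketch (not the actual implementation), the `to_positive_or_allowed_int` converter described above could look something like this:
from typing import Callable
def to_positive_or_allowed_int(allowed_integer: int) -> Callable[[str], int]:
    # Accept either a positive integer or one special allowed value,
    # e.g. Realm.RETAIN_MESSAGE_FOREVER; reject everything else.
    def converter(s: str) -> int:
        x = int(s)
        if x == allowed_integer:
            return x
        if x < 1:
            raise ValueError("argument is negative")
        return x
    return converter
With REQ, usage would presumably look like message_retention_days=REQ(converter=to_positive_or_allowed_int(Realm.RETAIN_MESSAGE_FOREVER)).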
It's a preliminary step to enable message_retention_setting in org settings
UI, which is a non-limited plan only feature. So we require a page_param
property that tells us the limited-plan state of the Zulip realm.
Popular email clients like Gmail will automatically linkify link-like
content present in an HTML email they receive, even if it doesn't have
links in it. This made it possible to include what in Gmail will be a
user-controlled link in invitation emails that Zulip sends, which a
spammer/phisher could try to take advantage of to send really bad spam
(the limitation of having the rest of the invitation email HTML there
makes it hard to do something compelling here).
We close this opportunity by structuring our emails to always show the
user's name inside an existing link, so that Gmail won't do new
linkification, and add a test to help ensure we don't remove this
structure in a future design change.
Co-authored-by: Anders Kaseorg <andersk@mit.edu>
The new path() function changed the way a regex pattern
is created from urls - it adds escape backslashes,
so for testing purposes we need to take care of them
and remove them, to check whether urls were tested.
Additionally, regex patterns from urls can have
[^/]+ instead of [^/]*, so we need to take care
of that too.
Previously api_description and api_code_examples were two independent
markdown extensions for displaying OpenAPI content used in the same
places. We combine them into a single markdown extension (with two
processors) and move them to the openapi folder to make the codebase
more readable and better group the openapi code in the same place.
For privacy-minded folks who don't want to leak the
information of whether they're online, this adds an
option to disable sending presence updates to other
users.
The new setting lives in the "Other notification
settings" section of the "Notification settings"
page, under a "Presence" subheading.
Closes #14798.
This commit extends the "choose email" template to mention, for
users who have unverified emails, that they need to verify them before
using them for Zulip authentication.
Also modified `social_auth_test_finish` to assert that all emails
are present on the "choose email" screen, as we need unverified emails
to be shown to the user and verified emails to log in / sign up.
Fixes #12638, as this was the last task for that issue.
As "choose email" screen is only used for GitHub auth, the part
that deals with it is separated from `social_auth_test` and
dealt in a new function `social_auth_finish`. This new
`social_auth_finish` contains only the code that deals with
authentication backends that do not have "choose email" screen.
But it is overidden in GitHub test class to handle the
"choose email" screen.
It was refactored because `expect_choose_email_screen` blocks
were confusing while figuring out how tests work on non GitHub
auths.
Members of the org can now see the list of invitations they sent.
We give members permission to revoke and resend the invitations
they sent, and add tests verifying that a member can revoke and
resend only their own invitations.
Fixes #14007.
Previously, the hanging_lists preprocessor didn't consider anything
indented by 4 or more spaces to be a list. This meant that when
we had a list like:
1. 1
2. 2
3. 3
2. 2a
1. 1a
We would insert a newline between 3. 3 and 2. 2a. This resulted
in the block processor breaking 1 list into 2 blocks, which
messed up the nesting and indentation for the second block.
This does not rely on the desktop app being able to register for the
zulip:// scheme (which is problematic with, for example, the AppImage
format).
It also is a better interface for managing changes to the system,
since the implementation exists almost entirely in the server/webapp
project.
This provides a smoother user experience, where the user doesn't need
to do the paste step, when combined with
https://github.com/zulip/zulip-desktop/pull/943.
Fixes #13613.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We've had bugs in the past where users with a name in the format
"Alice|999" would confuse our markdown rendering or typeahead. While
that's a fully solvable problem, there's no real use case for that, so
it's probably simpler to just prevent users from setting their name
that way.
Fixes #13923.
Prior to this change, there were reports of 500s in
production due to `export.extra_data` being a
NoneType. This was reproducible using the s3
backend in development when a row was created in
the `RealmAuditLog` table, but the export failed in
the `DeferredWorker`. This left an entry lying
around that was never updated with an `extra_data`
field.
To fix this, we catch any exceptions in the
`DeferredWorker`, and then update `extra_data` to
encode the failure. We also fix the fact that we
never updated the export UI table with pending exports.
These changes also negated the need for the somewhat
hacky `clear_success_banner` logic.
This new type eliminates a bunch of messy code that previously
involved passing around long lists of mixed positional and keyword
arguments, instead using a consistent data object for communicating
about the state of an external authentication (constructed in
backends.py).
The result is a significantly more readable interface between
zproject/backends.py and zerver/views/auth.py, though likely more
could be done.
This has the side effect of renaming fields for internally passed
structures from name->full_name, next->redirect_to; this results in
most of the test codebase changes.
Modified by tabbott to add comments and collaboratively rewrite the
initialization logic.
This changes add_reaction in zerver.views.reactions to allow
calling POST ../messages/{message_id}/reactions api endpoint with
emoji_name only, even in the case of a custom emoji.
Changing test_alert_words to use do_add_alert_words() and
do_remove_alert_words() from lib/actions.py instead of the
existing add_user_alert_words() and remove_user_alert_words()
as is the general policy of calling these functions when we
are updating the database.
This reverts commit 8f32db81a1.
This change unfortunately requires an index that we don't have, and
thus is incredibly expensive. We'll need to do a thoughtful reworking
before we can integrate it again.
The post_init cache-flushing behavior in the original alert words
migration was subtly wrong; while it may have passed tests, it didn't
have the right ordering for unlikely races.
We use post_save rather than post_init hooks precisely because they
ensure that we flush the cache after we know the database has been
updated and any future reads from the database will have the latest
state.
Previously, alert words were case-insensitive in practice, by which I
mean the Markdown logic had always been case-insensitive; but the data
model was not, so you could create "duplicate" alert words with the
same words in different cases. We fix this inconsistency by making
the database model case-insensitive.
I'd prefer to be using the Postgres `citext` extension to have
postgres take care of case-insensitive logic for us, but that requires
installing a postgres extension as root on the postgres server, which
is a pain and perhaps not worth the effort to arrange given that we
can achieve our goals with a transaction when adding alert words.
We take advantage of the migrate_alert_words migration we're already
doing for all users to effect this transition.
Fixes #12563.
Previously, alert words were a JSON list of strings stored in a
TextField on user_profile. That hacky model reflected the fact that
they were an early prototype feature.
This commit migrates from that to a separate table, 'AlertWord'. The
new AlertWord has user_profile, word, id and realm (a denormalization so
we can provide a nice index for fetching all the alert words in a
realm).
This transition requires moving the logic for flushing the Alert Words
caches to their own independent feature.
Note that this commit should not be cherry-picked without the
following commit, which fixes case-sensitivity issues with Alert Words.
When a user is reading messages only in stream or topic narrows, the pointer
can be left far behind. Using this to compute the furthest_read_time causes
the bankruptcy banner to be shown even when a user has been actively
reading messages. This commit switches to using the sent time on the last
message that the user has read to compute the furthest read time.
Previously, the message and event APIs represented the user differently
for the same reaction data. To make this more consistent, I added a
user_id field to the reaction dict for both messages and events. I
updated the front end to use the user_id field rather than the user
dict. Lastly, I updated front end and back end tests that used user
info.
I primarily tested this by running my local Zulip build and
adding/removing reactions from messages.
Fixes #12049.
In the original implementation, we were checking for the default language
inside format_code, which resulted in the setting being ignored when set to
quote, math, tex or latex. We shift the validation to `check_for_new_fence`.
We also update the tests to use a saner naming scheme for the variables.
Internet Explorer does not support `position: sticky`, which improves
floating recipient bar behavior during scrolling and which is one of the
issues blocking PR #9910.
IE also does not support some other features that modern browsers support,
hence Zulip may not work well on it.
This commit adds an error page that'll be displayed when a user logs
in from Internet Explorer. Also, a test is added.
Generated by autopep8 --aggressive, with the setup.cfg configuration
from #14532. In general, an isinstance check may not be equivalent to
a type check because it includes subtypes; however, that’s usually
what you want.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Generated by autopep8, with the setup.cfg configuration from #14532.
I’m not sure why pycodestyle didn’t already flag these.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
The purpose is to provide a way for (non-webapp) clients,
like the mobile and terminal apps, to tell whether the
server they're talking to is new enough to support a given
API feature -- in particular a way that
* is finer-grained than release numbers, so that for
features developed after e.g. 2.1.0 we can use them
immediately on servers deployed from master (like
chat.zulip.org and zulipchat.com) without waiting the
months until a 2.2 release;
* is reliable, unlike e.g. looking at the number of
commits since a release;
* doesn't lead to a growing bag of named feature flags
which the server has to go on sending forever.
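For example, a client could read the new counter from the /register response and gate behavior on it; a rough sketch (the threshold below is arbitrary, not from this commit):
import requests
result = requests.post(
    "https://example.zulipchat.com/api/v1/register",
    auth=("me@example.com", "API_KEY"),
).json()
feature_level = result.get("zulip_feature_level", 0)  # missing on older servers
if feature_level >= 5:
    pass  # rely on the newer API behavior
else:
    pass  # fall back to the legacy behavior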
Tweaked by tabbott to extend the documentation.
Closes #14618.
Refactored code in actions.py and streams.py to move stream related
functions into streams.py and remove the dependency on actions.py.
validate_sender_can_write_to_stream function in actions.py was renamed
to access_stream_for_send_message in streams.py.
This test was passing a string, not an Iterable[str]; it only worked
because a quirk in the remove_alert_words implementation happened to
process each character in the string.
This will be useful for the mobile and desktop apps to hand an uploaded
file off to the system browser so that it can render PDFs (etc.).
The S3 backend implementation is simple; for the local upload backend,
we use Django's signing feature to simulate the same sort of 60-second
lifetime token.
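A minimal sketch of that idea using Django's TimestampSigner (the helper names here are assumptions, not the real code):
from django.core.signing import TimestampSigner
signer = TimestampSigner()
def make_download_token(path_id: str) -> str:
    # Embed a signed timestamp alongside the file path.
    return signer.sign(path_id)
def check_download_token(token: str) -> str:
    # Raises SignatureExpired after 60 seconds, BadSignature if tampered with.
    return signer.unsign(token, max_age=60)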
Co-Author-By: Mateusz Mandera <mateusz.mandera@protonmail.com>
"description: |" supports markdown and is overall better for
writing multiline paragraphs. So use it in multiline paragraphs
and line-wrap the newly formed paragraphs accordingly.
Edited by tabbott to change most single-line descriptions to use this
format as well.
When creating a webhook integration or updating an existing one, it is a pain to
create or update the screenshots in the documentation. This commit adds a
tool that can trigger a sample notification for the webhook using a fixture,
that is likely already written for the tests.
Currently, the developer needs to take a screenshot manually, but this could
be automated using puppeteer or something like that.
Also, the tool does not support webhooks with basic auth, and only supports
webhooks that use json fixtures. These can be fixed in subsequent commits.
If SAML_REQUIRE_LIMIT_TO_SUBDOMAINS is enabled, the configured IdPs will
be validated and cleaned up when the saml backend is initialized.
settings.py would be a tempting and more natural place to do this
perhaps, but in settings.py we don't do logging and we wouldn't be able
to write a test for it.
Through the limit_to_subdomains setting on IdP dicts it's now possible
to limit the IdP to only allow authenticating to the specified realms.
Fixes #13340.
This refactors add_default_stream in zerver/views/streams.py to
take in stream_id as parameter instead of stream_name.
Minor changes have been made to test_subs.py and settings_streams.js
accordingly.
The Redis-based rate limiting approach takes a lot of time talking to
Redis with 3-4 network requests to Redis on each request. It had a
negative impact on the performance of `get_events()` since this is our
single highest-traffic endpoint.
This commit introduces an in-process rate-limiting alternative for
the `/json/events` endpoint. The implementation uses the Leaky Bucket
algorithm and Python dictionaries instead of Redis. This drops the
rate limiting time for `get_events()` from about 3000us to less than
100us (on my system).
Fixes #13913.
Co-Author-by: Mateusz Mandera <mateusz.mandera@protonmail.com>
Co-Author-by: Anders Kaseorg <anders@zulipchat.com>
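As a rough illustration of the idea (not the actual implementation), an in-process leaky-bucket limiter keyed by user id might look like:
import time
from typing import Dict, Tuple
class InMemoryLimiter:
    def __init__(self, max_requests: int, period_secs: float) -> None:
        self.max_requests = max_requests
        self.leak_rate = max_requests / period_secs  # requests drained per second
        self.buckets: Dict[int, Tuple[float, float]] = {}  # user_id -> (level, last_ts)
    def is_ratelimited(self, user_id: int) -> bool:
        now = time.time()
        level, last_ts = self.buckets.get(user_id, (0.0, now))
        level = max(0.0, level - (now - last_ts) * self.leak_rate)  # drain the bucket
        if level + 1 > self.max_requests:
            self.buckets[user_id] = (level, now)
            return True
        self.buckets[user_id] = (level + 1, now)
        return False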
Right now, the message is "Invalid characters in emoji name" when
the emoji_name is empty. Changing check_valid_emoji_name() in
zerver/lib/emoji.py which validates the name to accomodate the case
of missing name. The new message is "Emoji name is missing".
The error is PGroonga specific since `pgroonga_query_extract_keywords` does
not handle empty string inputs correctly. This commit prevents search
narrows from having empty operands.
Closes #14405
Generated by `pyupgrade --py3-plus --keep-percent-format` on all our
Python code except `zthumbor` and `zulip-ec2-configure-interfaces`,
followed by manual indentation fixes.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Option is added to the video_chat_provider settings for disabling
video calls.
The video call icon is hidden in two cases:
1. video_chat_provider is set to disabled.
2. video_chat_provider is set to Jitsi and settings.JITSI_SERVER_URL
is None.
Relevant tests are added and modified.
Fixes #14483
This adds a new realm setting: default_code_block_language.
This PR also adds a new widget to specify a language, which
behaves somewhat differently from other widgets of the same
kind; instead of exposing methods to the whole module, we
just create a single IIFE that handles all the interactions
with the DOM for the widget.
We also move the code for remapping languages to the format_code
function, since we want to preserve the original language to
decide if we override it using default_code_block_language.
Fixes #14404.
If we had a rule like "max 3 requests in 2 seconds", there was an
inconsistency between is_ratelimited() and get_api_calls_left().
If you had:
request #1 at time 0
request #2 and #3 at some times < 2
Next request, if exactly at time 2, would not get ratelimited, but if
get_api_calls_left was called, it would return 0. This was due to
inconsistency on the boundary - the check in is_ratelimited was
exclusive, while get_api_calls_left uses zcount, which is inclusive.
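A tiny sketch of the boundary problem (illustrative numbers only, not the actual code):
timestamps = [0.0, 1.0, 1.5]  # three requests under a "3 per 2 seconds" rule
now, window, limit = 2.0, 2.0, 3
exclusive = [t for t in timestamps if t > now - window]   # 2 requests counted
inclusive = [t for t in timestamps if t >= now - window]  # 3 requests counted
ratelimited = len(exclusive) >= limit   # False: the request is allowed...
calls_left = limit - len(inclusive)     # ...yet 0 calls are reported as left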
This is a prep commit for generating /team page data
using a cron job. The zerver/tests directory is not present in
a production installation, so we move the file from the tests
directory to tools.
This commit reuses the existing infrastructure for moving a topic
within a stream to add support for moving topics from one stream to
another.
Split from the original full-feature commit so that we can merge just
the backend, which is finished, at this time.
This is a large part of #6427.
The feature is incomplete, in that we don't have real-time update of
the frontend to handle the event, documentation, etc., but this commit
is a good mergable checkpoint that we can do further work on top of.
We also still ideally would have a test_events test for the backend,
but I'm willing to leave that for follow-up work.
This appears to have switched to tabbott as the author during commit
squashing sometime ago, but this commit is certainly:
Co-Authored-By: Wbert Adrián Castro Vera <wbertc@gmail.com>
This commit corrects test_change_stream_policy_requires_realm_admin
by setting the date_joined of the user in the test itself.
test_non_admin is added to avoid duplication of code.
Code is added for checking success when admins change
stream_post_policy.
This used to show a blank page. Considering that the links remain valid
only for 15 seconds it's important to show something more informative to
the user.
This is a prep commit for making use of same choices for
create_stream_policy and invite_to_stream_policy as both fields
have same set of choices.
This will be useful as we add other fields using these same types.
This commit replaces WAITING_PERIOD with FULL_MEMBERS in the
create_stream_policy and invite_to_stream_policy choices to
achieve consistency and make the variables more descriptive.
This simplifies the update_display_settings endpoint to use REQ for
validation, rather than custom if/else statements.
The test changes just take advantage of the now more consistent
syntax.
This changes the payload that is used
to populate `page_params` for the webapp,
as well as responses to the once-every-50-seconds
presence pings.
Now our dictionary of users only has these
two fields in the value:
- active_timestamp
- idle_timestamp
Example data:
{
6: Object { idle_timestamp: 1585746028 },
7: Object { active_timestamp: 1585745774 },
8: Object { active_timestamp: 1585745578,
idle_timestamp: 1585745400}
}
We only send the slimmer type of payload
to clients that have set `slim_presence`
to True.
Note that this commit does not change the format
of the event data, which still looks like this:
{
website: {
client: 'website',
pushable: false,
status: 'active',
timestamp: 1585745225
}
}
This commit migrates the zulip outgoing webhook payload to
/zulip-outgoing-webhook:post in OpenAPI.
Since this migrates the last payloads from api/fixtures.json to
OpenAPI, this commit removes api/fixtures.json file and the functions
accessing the file.
Tweaked by tabbott to further remove an unnecessary conditional.
The distinction between ValueError and TypeError
is not useful in these functions:
- extract_stream_indicator
- extract_private_recipients (or its callees)
These are always invoked in views to validate
user input.
When we use REQ to wrap the validators, any
Exception gets turned into a JsonableError, so
the distinction is not important.
And if we don't use REQ to wrap the validators,
the errors aren't caught.
Now we just let these functions directly produce
the desired end result for both codepaths.
Also, we now flag the error strings for translation.
This setting is being overridden by the frontend since the last
commit, and the security model is clearer and more robust if we don't
make it appear as though the markdown processor is handling this
issue.
Co-authored-by: Tim Abbott <tabbott@zulipchat.com>
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Zulip's modal_link markdown feature has not been used since 2017; it
was a hack used for a 2013-era tutorial feature and was never used
outside that use case.
Unfortunately, its sloppy implementation was exposed in the markdown
processor for all users, not just the tutorial use case.
More importantly, it was buggy, in that it did not validate the link
using the standard validation approach used by our other code
interacting with links.
The right solution is simply to remove it.
This makes it relatively easy for a system administrator to
temporarily override these values after a desktop app security
release that they want to ensure all of their users take.
We're not putting this in settings, since we don't want to encourage
accidental long-term overrides of these important-to-security values.
Previously, we only printed the test-case when we had an assertion error.
With this change, we also include timeout errors as well as any other
causes for failure.
We now have Hamlet, not Othello, send the message
to Othello's bot, since that's a more interesting
test and less likely to lead to a false positive.
And then we simplify the recipient check to avoid
the strange mypy mess as well as possible false
negatives.
When more than one outgoing webhook is configured,
the message which is sent to the webhook bot passes
through the finalize_payload function multiple times,
which mutated the message dict in a way that many keys
were lost from the dict obj.
This commit fixes that problem by having
`finalize_payload` return a shallow copy of the
incoming dict, instead of mutating it. We still
mutate dicts inside of `post_process_dicts`, though,
for performance reasons.
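The shape of the fix is roughly the following (a sketch; the real function takes more parameters and strips different fields):
from typing import Any, Dict
def finalize_payload(wide_dict: Dict[str, Any]) -> Dict[str, Any]:
    narrow_dict = dict(wide_dict)  # shallow copy; the caller's dict stays intact
    for key in ("sender_realm_id", "recipient_type", "recipient_type_id"):
        narrow_dict.pop(key, None)  # field names here are just examples
    return narrow_dict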
This was slightly modified by @showell to fix the
`test_both_codepaths` test that was added concurrently
to this work. (I used a slightly verbose style in the
tests to emphasize the transformation from `wide_dict`
to `narrow_dict`.)
I also removed a deepcopy call inside
`get_client_payload`, since we now no longer mutate
in `finalize_payload`.
Finally, I added some comments here and there.
For testing, I mostly protect against the root
cause of the bug happening again, by adding a line
to make sure that `sender_realm_id` does not get
wiped out from the "wide" dictionary.
A better test would exercise the actual code that
exposed the bug here by sending a message to a bot
with two or more services attached to it. I will
do that in a future commit.
Fixes #14384
We now validate the message data explicitly, rather
than comparing it to the event data. This protects
us from false positives where we were only validating
that the request data was a mutated version of the
event message data. (We'll have a commit soon that
fixes a mutation-related bug.)
This code is only used in one test, and having
the indirection of setUp partly obscured a
problem with the fact that our event message
is actually a wide dict that gets mutated
by `build_bot_request`. We'll fix that soon,
but this is a pure code move for now.
The `event` parameter is never used by `process_success`,
and eliminating it allows us to greatly simplify tests
that are just confusingly passing in events that are
totally ignored.
Migrate "call_on_each_event" from api/arguments.json to
/events:real-time in OpenAPI.
This is a bit of a hack, but it lets us eliminate this secondary
arguments.json file, which is probably worth it.
Tweaked by tabbott to fix various formatting issues in the original
documentation while I was looking at it.
We've had a bug for a while that if any ScheduledEmail objects get
created with the wrong email sender address, even after the sysadmin
corrects the problem, they'll still get errors because of the objects
stored with the wrong format.
We solve this by using FromAddress placeholder strings in the
send_future_email function, so that ScheduledEmail objects end up
setting the final `from_address` value when mail is actually sent,
using the setting in effect at that time.
Fixes #11008.
Overall, this change eliminates a lot of
optional parameters and conditionals, plus
some legacy logic related to caches.
For all the places we are just editing topics,
we now just call `check_topic` to see that
the topic got updated.
For places where the topic edit failed, we
just inline the checks that message still
has the old topic and content.
And then for successful **content** edits,
we now do a more rigorous, more sane check
that the messages are properly cached. The
old code here had evolved from 2013 into
something that didn't really make much sense
in the context of editing topics.
Now we are literally pulling data from the
cache and making sure it's valid, rather
than trying to poorly simulate the two
codepaths related to dispatching message
events and fetching messages. Some of the
history here was that when I introduced
`MessageDict` several years ago, I did a
lot of code sweeping and didn't analyze every
single test to make sure it's still valid,
plus some of the tests still had some value
for catching regressions. A recent commit
now gets us coverage on that a lot more
explicitly, rather than in passing.
See the comment in the test for a thorough explanation.
In brief, this test makes sure that the events codepath
for messages produces the same results as the fetch
codepath.
And this sets us up to simplify another test that kind
of poorly tried to do the same thing in passing. (In
fairness the test was really ancient and preceded a lot
of later work that we did here.)
This is a full-stack change:
- server
- JS code
- templates
It's all pretty simple--just use stream_id instead
of stream_name.
I am 99% sure we don't document this API nor use it
in mobile, so it should be a safe change.
The function `prepare_login_url_and_headers` returns a register
link for any non-None value of `is_signup`.
This commit changes it to a boolean for that function and other
functions using it, so that it becomes much clearer when a
register link will be returned.
Also, all occurrences of `is_signup='1'` are changed to
`is_signup=True` to make the code consistent with the above change.
This allows us to block use of the desktop app with insecure versions
(we simply fail to load the Zulip webapp at all, instead rendering an
error page).
For now we block only versions that are known to be both insecure and
not auto-updating, but we can easily adjust these parameters in the
future.
This improves the error handling for invalid values of the
propagate_mode parameter to our message editing endpoints.
Previously, invalid values would just work like change_one rather than
doing nothing.
type().__name__ is sufficient, and much more readable than type(), so it's
better to use the former for keys.
We also make the classes consistent in forming the keys in the format
type(self).__name__:identifier and adjust logger.warning and statsd to
take advantage of that and simply log key().
When a user in the login flow using GitHub auth chooses an email that is
not associated with an existing account, it leads to a "continue to
registration" choice. This could not be tested with the earlier version
of `stage_two_of_registration`.
Also added the test.
Thanks to Mateusz Mandera for the solution.
Co-authored-by: Mateusz Mandera <mateusz.mandera@protonmail.com>
The previous model for GitHub authentication was as follows:
* If the user has only one verified email address, we'll generally just log them in to that account
* If the user has multiple verified email addresses, we will always
prompt them to pick which one to use, with the one registered as
"primary" in GitHub listed at the top.
This change fixes the situation for users going through a "login" flow
(not registration) where exactly one of the emails has an account in
the Zulip organization -- they should just be logged in.
Fixes part of #12638.
URLs for config errors were configured separately for each error,
which is better handled by having the error name as an argument in the URL.
A new view `config_error_view` is added, containing context for
each error, that returns the `config_error` page with the relevant
context.
Also fixed tests and some views in `auth.py` to be consistent with
these changes.
This is a bit more rigorous than just
dereferencing the first element of
a list comprehension, as it will give a
ValueError if more matches are found than
the test was expecting.
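For example (illustrative only, not the helper itself), single-element unpacking complains where indexing would silently pick the first match:
items = ["alpha", "beta", "abbey"]
[match] = [s for s in items if "b" in s]  # ValueError: too many values to unpack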
We don't need `do_create_user` to send a partial
event here for bots. The only caller to `do_create_user`
that actually creates bots (apart from some tests that
just need data setup) is `add_bot_backend`, which
sends the more complete event including bot "extras"
like service info.
The modified event tests show the simplification
here (2 events instead of 3).
Also, the bot tests now use tuple unpacking, which
will force a ValueError if we duplicate events
again.
We now restrict emails on the zulip realm, and now
`email` and `delivery_email` will be different for
users.
This change should make it more likely to catch
errors where we leak delivery emails or use the
wrong field for lookups.
We try to use the correct variation of `email`
or `delivery_email`, even though in some
databases they are the same.
(To find the differences, I temporarily hacked
populate_db to use different values for email
and delivery_email, and reduced email visibility
in the zulip realm to admins only.)
In places where we want the "normal" realm
behavior of showing emails (and having `email`
be the same as `delivery_email`), we use
the new `reset_emails_in_zulip_realm` helper.
A couple random things:
- I fixed any error messages that were leaking
the wrong email
- a test that claimed to rely on the order
of emails no longer does (we sort user_ids
instead)
- we now use user_ids in some place where we used
to use emails
- for IRC mirrors I just punted and used
`reset_emails_in_zulip_realm` in most places
- for MIT-related tests, I didn't fix email
vs. delivery_email unless it was obvious
I also explicitly reset the realm to a "normal"
realm for a couple tests that I frankly just didn't
have the energy to debug. (Also, we do want some
coverage on the normal case, even though it is
"easier" for tests to pass if you mix up `email`
and `delivery_email`.)
In particular, I just reset data for the analytics
and corporate tests.
We specifically give the existing user different
delivery_email and email addresses, to prevent false
positives during the test that checks that users
signing up with an already-existing email get
an error message.
(We also rename the test.)
I guess `test_classes` has 100% line coverage
enforcement, which is a bit tricky for error
handling.
This fixes that, as well as making the name
snake_case and improving the format of the
errors.
This test was using the anti-pattern of doing an
assertion inside a conditional.
I added the `findOne` helper to make it easier
to write robust tests for scenarios like this.
We had a bunch of ugly hacks to monkey patch things due to upstream
being temporarily unmaintained and not merging PRs. Now the project is
active again and the fixes have been merged and included in the latest
version - so we clean up all that code.
If I send a message from a normal Zulip client, it is
considered to be "read" by me. But if I send it via
an API program (using my human account), the message
is not immediately "read" by me.
Now we handle this correctly in `get_raw_unread_data`.
The symptom of this was that these messages would get
"stuck" in "Private Messages" narrows until the next
time you reloaded your app.
I've always thought of distributed teams as the place where Zulip
really shines over other tools, because chat is much more important in
that context.
And I've always been kinda unhappy with "most productive team chat" as
a line.
There's a lot more we should do here, but this is a start.
We now have this API...
If you really just need to log in
and not do anything with the actual
user:
self.login('hamlet')
If you're gonna use the user in the
rest of the test:
hamlet = self.example_user('hamlet')
self.login_user(hamlet)
If you are specifically testing
email/password logins (used only in 4 places):
self.login_by_email(email, password)
And for failures uses this (used twice):
self.assert_login_failure(email)
This reduces query counts in some cases, since
we no longer need to look up the user again. In
particular, it reduces some noise when we
count queries for O(N)-related tests.
The query count is usually reduced by 2 per
API call. We no longer need to look up Realm
and UserProfile. In most cases we are saving
these lookups for the whole tests, since we
usually already have the `user` objects for
other reasons. In a few places we are simply
moving where that query happens within the
test.
In some places I shorten names like `test_user`
or `user_profile` to just be `user`.
We want a clean codepath for the vast majority
of cases of using api_get/api_post, which now
uses email and which we'll soon convert to
accepting `user` as a parameter.
These apis that take two different types of
values for the same parameter make sweeps
like this kinda painful, and they're pretty
easy to avoid by extracting helpers to do
the actual common tasks. So, for example,
here I still keep a common method to
actually encode the credentials (since
the whole encode/decode business is an
annoying detail that you don't want to fix
in two places):
def encode_credentials(self, identifier: str, api_key: str) -> str:
"""
identifier: Can be an email or a remote server uuid.
"""
credentials = "%s:%s" % (identifier, api_key)
return 'Basic ' + base64.b64encode(credentials.encode('utf-8')).decode('utf-8')
But then the rest of the code has two separate
codepaths.
And for the uuid functions, we no longer have
crufty references to realm. (In fairness, realm
will also go away when we introduce users.)
For the `is_remote_server` helper, I just inlined
it, since it's now only needed in one place, and the
name didn't make total sense anyway, plus it wasn't
a super robust check. In context, it's easier
just to use a comment now to say what we're doing:
# If `role` doesn't look like an email, it might be a uuid.
if settings.ZILENCER_ENABLED and role is not None and '@' not in role:
# do stuff
Instead of trying to set the _requestor_for_logs attribute in all the
relevant places, we try to use request.user when possible (that will be
when it's a UserProfile or RemoteZulipServer as of now). In other
places, we set _requestor_for_logs to avoid manually editing the
request.user attribute, as it should mostly be left for Django to manage
it.
In places where we remove the "request._requestor_for_logs = ..." line,
it is clearly implied by the previous code (or the current surrounding
code) that request.user is of the correct type.
This refactors get_members_backend to return user data of a single
user in the form of a dictionary (earlier being a list with a single
dictionary).
This also refactors it to return the data with an appropriate key
(inside a dictionary), "user" or "members", according to the type of
data being returned.
Tweaked by tabbott to use somewhat less opaque code and simple OpenAPI
descriptions.
Previously, get_client_name was responsible for both parsing the
User-Agent data as well as handling the override behavior that we want
to use "website" rather than "Mozilla" as the key for the Client object.
Now, it's just responsible for User-Agent, and the override behavior
is entirely within process_client (the function concerned with Client
objects).
This has the side effect of changing what `Client` object we'll use
for HTTP requests to /json/ endpoints that set the `client` attribute.
I think that's in line with our intent -- we only have a use case for
API clients overriding the User-Agent parsing (that feature is a
workaround for situations where the third party may not control HTTP
headers but does control the HTTP request payload).
This loses test coverage on the `request.GET['client']` code path; I
disable that for now since we don't have a real use for that behavior.
(We may want to change that logic to have Client recognize individual
browsers; doing so requires first using a better User-Agent parsing
library).
Part of #14067.
The "sender" property in `send_message_backend` is meant to only do
something when doing Zephyr mirroring (or similar). We should help
clients behave correctly by banning this property in requests that are
not specifically requesting mirroring behavior.
This commit requires changes to a number of tests that incorrectly
passed this parameter or didn't use the right setup for mirroring.
The special Zephyr mirroring logic is only intended to be used via the
API, so this sets up a more effective test. It also allows us to
remove certain Client parsing logic for the /json/ views using session
authentication.
The email domain restriction to @zulip.com is annoying in development
environment when trying to test sign up. For consistency, it's best to
have tests use the same default, and the tests that require domain
restriction can be adjusted to set that configuration up for themselves
explicitly.
This uses the better, modern, user ID based API for sending messages
internally in the test suite, something that's convenient to do as a
follow-up to the migration to pass UserProfile objects to these
functions.
This commit mostly makes our tests less
noisy, since emails are no longer an important
detail of sending messages (they're not even
really used in the API).
It also sets us up to have more scrutiny
on delivery_email/email in the future
for things that actually matter. (This is
a prep commit for something along those
lines, kind of hard to explain the full
plan.)
This extends our email address visibility settings to deny access to
user email addresses even to organization administrators.
At the moment, they can of course change the setting (which leaves an
audit trail), but in the future only organization owners will be able
to change that setting.
While we're at this, we rewrite the settings_data.js test to cover all
the cases in a more consistent way.
Fixes #14111.
We were using `code` to pass around messages.
The `code` field is designed to be a code, not
a human-readable message.
It's possible that we don't actually need two
flavors of messages for these type of validations,
but I didn't want to change that yet.
We **definitely** don't need to put two types of
message in the exception, so I fix that. Instead,
I just have the caller ask what level of detail
it needs.
I added a non-verbose message for the case of
system bots.
I removed the non-translated version of the message
for deactivated accounts, which didn't have test
coverage and is slightly more prone to leaking
email info that we don't want to leak.
In the prep commits leading up to this, we split
out two new helpers:
validate_email_is_valid
get_errors_for_new_emails
Now when we validate invites we use two separate
loops to filter our emails.
Note that the two extracted functions map to two
of the data structures that used to be handled
in a single loop, and now we break them out:
errors = validate_email_is_valid
skipped = get_errors_for_new_emails
The first loop checks that emails are even valid
to begin with.
The second loop finds out whether emails are already
in use.
The second loop takes advantage of this helper:
get_errors_for_new_emails
The second helper can query all potential new emails
with a single round trip to the database.
This reduces our query count.
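A rough sketch of the two-pass structure (the stand-in helpers below are simplified; the real ones take the realm and return richer errors):
from typing import Dict, List, Optional
def validate_email_is_valid(email: str) -> Optional[str]:
    # Stand-in: is the address even syntactically/policy valid?
    return None if "@" in email else "Invalid address."
def get_errors_for_new_emails(emails: List[str]) -> Dict[str, str]:
    # Stand-in: one bulk query answers "already in use?" for every email.
    existing = {"hamlet@zulip.com"}
    return {email: "Already has an account." for email in emails if email in existing}
def validate_invites(invitee_emails: List[str]) -> None:
    errors = []          # first loop: reject malformed addresses
    valid_emails = []
    for email in invitee_emails:
        message = validate_email_is_valid(email)
        if message is None:
            valid_emails.append(email)
        else:
            errors.append((email, message))
    skipped = get_errors_for_new_emails(valid_emails)  # second pass, one round trip
    print(errors, skipped)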
We are trying to kill off `validate_email`, so
we no longer call it from these tests.
These tests are already kind of low-level in
nature, so testing the more specific helpers
here should be fine.
Note that we also make the third parameter
to `validate_email` non-optional in this commit,
to preserve 100% coverage. This is really just
refactoring noise--we will soon eliminate the
entire function, but I didn't want to do everything
in a huge commit.
This has two goals:
- sets up a future commit to bulk-validate
emails
- the extracted function is more simple,
since it just has errors, and no codes
or deactivated flags
This commit leaves us in a somewhat funny
intermediate state where we have
`action.validate_email` being a glorified
two-line function with strange parameters,
but subsequent commits will clean this up:
- we will eliminate validate_email
- we will move most of the guts of its
other callee to lib/email_validation.py
To be clear, the code is correct here, just
kinda in an ugly, temporarily-disorganized
intermediate state.
We now use the `get_realm_email_validator()`
helper to build an email validator outside
the loop of emails in our invite list.
This allows us to perform RealmDomain queries
only once per request, instead of once per
email.
Without the fix here, you will get an exception
similar to below if you try to invite one of the
cross realm bots. (The actual exception is
a bit different due to some rebasing on my branch.)
File "/home/zulipdev/zulip/zerver/lib/request.py", line 368, in _wrapped_view_func
return view_func(request, *args, **kwargs)
File "/home/zulipdev/zulip/zerver/views/invite.py", line 49, in invite_users_backend
do_invite_users(user_profile, invitee_emails, streams, invite_as)
File "/home/zulipdev/zulip/zerver/lib/actions.py", line 5153, in do_invite_users
email_error, email_skipped, deactivated = validate_email(user_profile, email)
File "/home/zulipdev/zulip/zerver/lib/actions.py", line 5069, in validate_email
return None, (error.code), (error.params['deactivated'])
TypeError: 'NoneType' object is not subscriptable
Obviously, you shouldn't try to invite a cross
realm bot to your realm, but we want a reasonable
error message.
RESOLUTION:
Populate the `code` parameter for `ValidationError`.
BACKGROUND:
Most callers to `validate_email_for_realm` simply catch
the `ValidationError` and then report a more generic error.
That's also what `do_invite_users` does, but it has the
somewhat convoluted codepath through `validate_email`
that triggers this code:
try:
validate_email_for_realm(user_profile.realm, email)
except ValidationError as error:
return None, (error.code), (error.params['deactivated'])
The way that we're using the `code` parameter for
`ValidationError` feels hacky to me. The intention
behind `code` is to provide a descriptive error to
calling code, and it's not intended for humans, and
it feels strange that we actually translate this in
other places. Here are the Django docs:
https://docs.djangoproject.com/en/3.0/ref/forms/validation/
And then here's an example of us actually translating
a code (not part of this commit, just providing context):
raise ValidationError(_('%s already has an account') %
(email,), code = _("Already has an account."),
params={'deactivated': False})
Those codes eventually get put into InvitationError, which
inherits from JsonableError, and we do actually display
these errors in the webapp:
if skipped and len(skipped) == len(invitee_emails):
# All e-mails were skipped, so we didn't actually invite anyone.
raise InvitationError(_("We weren't able to invite anyone."),
skipped, sent_invitations=False)
I will try to untangle this somewhat in upcoming commits.
We allow folks to invite emails that are
associated with a mirror_dummy account.
We had a similar test already for registration,
but not invites.
This logic typically affects MIT realms in the
real world, but the logic should apply to any
realm, so I use accounts from the zulip realm
for convenient testing. (For example, we might
run an IRC mirror for a non-MIT account.)
I use a range here because there's some leak
from another test that causes the count to
vary. Once we get this a bit more under control,
we should be able to analyze the leak better.
The substantive improvement here is to use
a strange casing for Hamlet's email, which
will prevent future casing bugs.
I also log in as Cordelia to prevent confusion
that the test has something to do with
inviting yourself. It's more typical for
somebody to invite another person to a realm
(not realizing they're already there).
I also made two readability tweaks.
Previously, the input:
====================
- One
- Two
Two continued
====================
Would produce the same output as:
====================
- One
- Two
```
Two continued
```
====================
This was because our CodeBlockProcessor had a higher priority than
the ListIndentProcessor. This issue was discussed here:
https://chat.zulip.org/#narrow/stream/9-issues/topic/continuation.20paragraphs.20in.20list.20items.
The /delete_topic endpoint could be used to request the deletion of a topic,
which would cause do_delete_messages to be called with an empty set in
these cases:
1. Requesting deletion of an empty stream.
2. Requesting deletion of a topic in a private stream with history not
public to subscribers, if the requesting admin doesn't have access to
any of the messages in that topic.
We were only checking error handling before, not
the happy path. The structure of the code
made it so that we effectively tested most of the
logic for this use case (since all the other flags
are sort of just filters on top of this), but
obviously we want explicit coverage here. Also,
we weren't testing the is-admin-but-not-api-super-user
error checking until this commit.
For historical reasons we were creating Recipient
objects at some point in the typing-notifications
codepath. Now we just work with UserProfiles.
This removes some queries, as indicated by
the change to `len(queries)` in a couple of the
tests.
The one subtle thing that changes here is huddles.
If user 10 sends a typing notification that they
are talking to users 20 and 30, there might not
actually be a huddle for users 10/20/30, but
we were actually creating huddles on the fly!
There is no need to create huddles just for
typing notifications, since we don't even
share huddle ids with our clients. The clients
just infer the huddles.
Some of the code that gets killed off here as
"collateral damage" is defensive code related to
formerly supporting streams in typing indicators.
The support for streams
was killed off almost as soon as we released
the feature, and the codepath is pretty clearly
user-centric at this point.
The only clients that should use the typing
indicators endpoint are our internal clients,
and they should send a JSON-formatted list
of user_ids.
Unfortunately, some older versions of mobile
still send emails.
In this commit we fix non-user-facing things
like docs and tests to promote the user_ids
interface that has existed since about version
2.0 of the server.
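For reference, the user_ids interface looks roughly like this (a
hedged sketch based on the parameter names used in the tests below;
exact helper calls may differ):

import json

payload = {
    "op": "start",
    # A JSON-encoded list of user IDs, rather than a comma-separated
    # string of email addresses.
    "to": json.dumps([10, 20, 30]),
}
# e.g. result = self.api_post(sender, "/api/v1/typing", payload) in a test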
One annoyance is that we documented the
typing endpoint with emails, instead of the
more modern user_ids, which may have delayed
mobile converting to user_ids (and which
certainly caused confusion). It's trivial
to update the docs, but we need to short
circuit one assertion in the openapi tests.
We also clean up the test structure for the
typing tests:
TypingHappyPathTest.test_start_to_another_user
TypingHappyPathTest.test_start_to_multiple_recipients
TypingHappyPathTest.test_start_to_self
TypingHappyPathTest.test_start_to_single_recipient
TypingHappyPathTest.test_stop_to_another_user
TypingHappyPathTest.test_stop_to_self
TypingValidateOperatorTest.test_invalid_parameter
TypingValidateOperatorTest.test_missing_parameter
TypingValidateUsersTest.test_argument_to_is_not_valid_json
TypingValidateUsersTest.test_bogus_user_id
TypingValidateUsersTest.test_empty_array
TypingValidateUsersTest.test_missing_recipient
TypingValidationHelpersTest.test_recipient_for_user_ids
TypingValidationHelpersTest.test_recipient_for_user_ids_non_existent_id
TypingLegacyMobileSupportTest.test_legacy_email_interface
Users who are using ZulipDesktop or haven't managed to auto-update to
ZulipElectron should be strongly encouraged to upgrade.
We'll likely want to move to something even stricter that blocks
loading the app at all, but this is a good start.
This field wasn't accessed by any clients and was a less robust
version of the user_id field. Any client interested in
who did message edits should be able to work with user IDs
rather than email addresses.
We should not need so many queries here,
although a couple of the queries are just
standard things that apply to all requests.
I will reduce the number of queries in a
later commit.
This is mostly refactoring, but we also prevent a new
type of value error (list of non-int-or-string). The
new test code helps enforce that.
Cleanup includes:
- Use early-exit for email case.
- Rename helpers to get_validate_*.
- Avoid clumsy rebuilding of lists in helpers.
- Avoid the confusing `recipient` name (which
can be confused with the model by the same
name).
- Just delegate duplicate-id/email-removal to
the helpers.
The cleaner structure allows us to eliminate a couple
of mypy workarounds.
Credits to @xpac1985 for reporting, debugging, and proposing a fix for
the issue. The proposed fix was modified slightly by @hackerkid to set
the correct values for max_invites and upload_quota_gb. Tests added by
@hackerkid.
Fixes #13974
This gives them cache-compatible URLs, and also avoids some extra
copies of the sprite sheet images.
Comments on the Octopus emoji added by tabbott.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This fixes a confusing aspect of how our automated tests worked
previously, where we'd send almost all HTTP requests in the unlikely
configuration with no User-Agent string specified.
We need to adjust query counts in a few tests that are now a bit
cheaper because they can take advantage of a Client object created
in server_initialization.py in `process_client`.
There was some duplicated code for testing config error pages for
different auth backends, which can be handled with less duplication
by adding those functions to `SocialAuthBase`.
Also, moving the other tests makes it easier to find the tests related
to an auth backend, since they are now in the same file.
The original idea was that KeyError was only going to happen there in
case of a user passing bad input params to the endpoint, so logging a
generic message seemed sufficient. But this can also happen in case of
misconfiguration, so it's worth logging more info, as it may help in
debugging the configuration.
Create a new page for the desktop auth flow, in which
users can choose between going to the app or
continuing the flow in the browser.
Co-authored-by: Mateusz Mandera <mateusz.mandera@protonmail.com>
To avoid some hidden bugs in tests caused by every ldap user having the
same password, we give each user a different password, generated based
on their uids (to avoid some ugly hard-coding in a bunch of places).
This avoids using `.save()` directly for editing stream properties,
and also uses the API in _send_and_verify_message to avoid confusing
logic around which user is doing what request.
Fixes part of #13823
This adds the OpenAPI format data for the /users/{user_id} endpoint
and also removes 'users/{user_id}' from 'pending_endpoints' in
zerver/tests/test_openapi.py.
A sloppy implementation of the main has_request_variables wrapper
function meant that it did two very inefficient things:
* To combine the GET and POST parameters, it would make a copy of the
  request.GET QueryDict object, which, combined with the fact that
  these objects are slow to access, consumed about 90us per argument.
* It did this in a loop (one time per argument), rather than once,
  which resulted in us doing this 11 times for a `GET /events` query.
Fixing this to just make a dictionary and combine things with some
small loops saved about 1 millisecond from the total runtime of GET
/events (for comparison, the total actual work of that view function
is about 700ms).
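The fix is roughly of this shape (a simplified sketch, not the actual
wrapper code):

from django.http import HttpRequest

def combine_request_params(request: HttpRequest) -> dict:
    # Flatten GET and POST into one plain dict a single time, instead
    # of copying a QueryDict once per argument; plain dict lookups are
    # much cheaper than QueryDict access.
    params = {}
    for query_dict in (request.GET, request.POST):
        for key, value in query_dict.items():
            params[key] = value
    return params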
We need to fix at least one test that used a bad mock HttpRequest
object that didn't have a .GET property.
Django 2.2.x is the next LTS release after Django 1.11.x; I expect
we'll be on it for a while, as Django 3.x won't have an LTS release
series out for some time.
Because of upstream API changes in Django, this commit includes
several changes beyond the requirements upgrade:
* urls: django.urls.resolvers.RegexURLPattern has been replaced by
django.urls.resolvers.URLPattern; affects OpenAPI code and related
features which re-parse Django's internals.
https://code.djangoproject.com/ticket/28593
* test_runner: Change number to suffix. Django changed the name in this
ticket: https://code.djangoproject.com/ticket/28578
* Delete now-unnecessary SameSite cookie code (it's now the default).
* forms: urlsafe_base64_encode returns string in Django 2.2.
https://docs.djangoproject.com/en/2.2/ref/utils/#django.utils.http.urlsafe_base64_encode
* upload: Django's File.size property replaces _get_size().
https://docs.djangoproject.com/en/2.2/_modules/django/core/files/base/
* process_queue: Migrate to new autoreload API.
* test_messages: Add an extra query caused by .refresh_from_db() losing
the .select_related() on the Realm object.
* session: Sync SessionHostDomainMiddleware with Django 2.2.
There's a lot more we can do to take advantage of the new release;
this is tracked in #11341.
Many changes by Tim Abbott, Umair Waheed, and Mateusz Mandera are
squashed into this commit.
Fixes #10835.
This makes it possible to create a Zulip account from the mobile or
desktop apps and have the end result be that the user is logged in on
their mobile device.
We may need small changes in the desktop and/or mobile apps to support
this.
Closes #10859.
webpack optimizes JSON modules using JSON.parse("{…}"), which is
faster than the normal JavaScript parser.
Update the backend to use emoji_codes.json too instead of the three
separate JSON files.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This code was very useful when first implemented to help catch errors
where our backend templates didn't render, but it has been superseded by
the success of our URL coverage testing (which ensures every URL
supported by Zulip's urls.py is accessed by our tests, with a few
exceptions) and other tests covering all of the emails Zulip sends.
It has a significant maintenance cost because it's a bit hacky and
involves generating fake context, so it makes sense to remove it.
Any future coverage issues with templates should be addressed with a
direct test that just accesses the relevant URL or sends the relevant
email.
In Python 3.5-3.6, generic types had an __origin__ attribute which
indicated which generic they originated from; the code was reflecting on
that value to check types against the OpenAPI spec. In Python 3.7, this
changed, and there's no longer an immediately simple way to get this
information in all cases. __origin__ appears to be the implementing
class now, returning `list` or `collections.abc.Iterator` rather than
`typing.List` and `typing.Iterator`. This adds a sloppy-but-effective
mechanism for inferring whether a type maps to the
List/Dict/Iterator/Mapping types and gets the test suite passing again.
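The inference is in this spirit (an illustrative sketch, not the exact
test-suite code):

from typing import Any, Dict, List

def is_list_type(tp: Any) -> bool:
    # On Python 3.5/3.6, __origin__ is typing.List; on 3.7+ it is the
    # runtime class `list`, so accept either.
    return getattr(tp, "__origin__", None) in (list, List)

def is_dict_type(tp: Any) -> bool:
    return getattr(tp, "__origin__", None) in (dict, Dict)

print(is_list_type(List[int]))       # True on 3.6 and on 3.7+
print(is_dict_type(Dict[str, int]))  # True on 3.6 and on 3.7+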
We now validate streams with a separate
function from PM recipients.
There are enough confusing ways to encode a stream
or encode the PM recipients that trying to do it all
in one function was hard to reason about and led to
at least one bug.
In particular, there was a bug where streams
with commas in them would get split. Now
we just don't ever split on commas inside
of `extract_stream_indicator`.
Fixes #13836
After removing internal_send_message() in a recent
commit, we now have only two callers for
extract_recipients, and they are both related
to our REQ mechanism that always passes strings
to converters. (If there are default values,
REQ does not call the converters.)
We therefore make two changes (sketched below):
- use the more strict annotation of "str"
  for the `s` parameter
- don't bother with the isinstance check
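A minimal sketch of the resulting shape (validation details elided;
not the exact implementation):

import json
from typing import List, Union

def extract_recipients(s: str) -> Union[List[int], List[str]]:
    # REQ always passes strings to converters, so `s` can be annotated
    # as str and the old isinstance() escape hatch can be dropped.
    try:
        data = json.loads(s)
    except json.JSONDecodeError:
        data = s
    if isinstance(data, str):
        data = [data]
    if not isinstance(data, list):
        raise ValueError("Invalid data type for recipients")
    # Type validation and duplicate removal are delegated to the
    # helpers described in the earlier commit.
    return data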
Note that while the test mocks the actual message
send, we now have a `get_stream` call in the queue
worker, so we have to set up a real stream for
testing (or we could have mocked that as well, but
it didn't seem necessary). The setup queries add
to the amount of queries reported by the test,
plus the `get_stream` call. I just made the
query count a digits regex, which is a little bit
lame, but I don't think it's worth risking test
flakes for this.
This index is intended to optimize the performance of the very
frequently run query of "what is the presence status of all users in a
realm?".
Main changes:
- add realm_id to UserPresence
- add index for realm_id
- backfill realm_id for old rows
- change all writes to UserPresence to include
realm_id
The index is of this form:
"zerver_userpresence_realm_id_5c4ef5a9" btree (realm_id)
We will create an index on (realm_id, timestamp) in a
future commit, but I think it's a bit faster if you do
the backfill before the index.
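For illustration, the migration is along these lines (a hedged sketch:
the dependency, field options, index name, and backfill SQL are
placeholders, not the exact migration):

from django.db import migrations, models
import django.db.models.deletion

class Migration(migrations.Migration):
    dependencies = [
        ("zerver", "XXXX_previous_migration"),  # placeholder
    ]

    operations = [
        # Add the realm foreign key (without an index yet).
        migrations.AddField(
            model_name="userpresence",
            name="realm",
            field=models.ForeignKey(
                null=True,
                db_index=False,
                on_delete=django.db.models.deletion.CASCADE,
                to="zerver.Realm",
            ),
        ),
        # Backfill realm_id for old rows from the owning user's realm.
        migrations.RunSQL(
            """
            UPDATE zerver_userpresence
            SET realm_id = zerver_userprofile.realm_id
            FROM zerver_userprofile
            WHERE zerver_userpresence.user_profile_id = zerver_userprofile.id
            """,
            reverse_sql=migrations.RunSQL.noop,
        ),
        # Add the index after the backfill, which keeps the build fast.
        migrations.AddIndex(
            model_name="userpresence",
            index=models.Index(fields=["realm"], name="zerver_userpresence_realm_idx"),
        ),
    ]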
There's also a minor tweak to the populate_db script.