We keep the error message the same for all values of
invite_to_realm_policy when a user is not allowed to invite others.
We raise an error with a different message for the guest case because
it is handled by decorators; we aim to change this behavior in the
future. Explaining the details in the error message isn't particularly
important, since these errors are probably only surfaced in the API,
given that we do not show the policy options themselves in the
frontend.
This makes it much more clear that this feature does JSON encoding,
which previously was only indicated in the documentation.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The moderator role was not included in the tests for create_stream_policy
and invite_to_stream_policy. The tests are do_set_realm_property_test
in test_events.py and do_test_realm_update_api in test_realm.py.
This should have been added for create_stream_policy in 5b32dcd and
in 5b32dcd for invite_to_stream_policy, but was missed by mistake.
This commit adds backend code for passing can_invite_others_to_realm
field to clients using the fetch_initial_state_data in the page_params
object.
Though this field is not used by the webapp as of now, it will be
used to fix a bug of incorrectly showing the invite users option in
the settings overlay in the next commit.
We add moderators and full members options to invite_to_realm_policy
by using COMMON_POLICY_TYPES, and use the can_invite_others_to_realm
helper added in the previous commit. This commit only does the backend
work; frontend work will be done in a separate commit.
This commit adds the can_invite_others_to_realm helper, which will be
used in the next commit when invite_to_realm_policy is modified to
support all values of COMMON_POLICY_TYPES.
It is important for this commit's correctness that
INVITE_TO_REALM_POLICY_TYPES was initialized to use the same values.
This commit replaces invite_by_admins_policy, which was a bool field,
with a new enum field invite_to_realm_policy.
Though the final goal is to add moderators and full members options
using COMMON_POLICY_TYPES, that will be done in a separate commit to
make this easy to review.
The tests for can_create_streams and can_subscribe_other_users share
a lot of code, so we deduplicate it by extracting most of it into
check_has_permission_policies, which is now called by the two tests
test_can_create_streams and test_can_subscribe_other_users.
This will also help avoid duplicating code when we convert more
policies to use COMMON_POLICY_TYPES.
We send the whole data set as a part of the event rather than
doing an add/remove operation for a couple of reasons:
* This would make the client logic simpler.
* The playground data is small enough for us to not worry
about performance.
Tweaked both `fetch_initial_state_data` and `apply_events` to
handle the new playground event.
Tests added to validate the event matches the expected schema.
Documented realm_playgrounds sections inside /events and
/register to support our openapi validation system in test_events.
Tweaked other tests like test_event_system.py and test_home.py
to account for the new event being generated.
Lastly, documented the changes to the API endpoints in
api/changelog.md and bumped API_FEATURE_LEVEL.
Tweaked by tabbott to add an `id` field in RealmPlayground objects
sent to clients, which is essential to sending the API request to
remove one.
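For reference, a rough sketch of the shape of the new event, with
illustrative values; the authoritative schema lives in the OpenAPI
documentation mentioned above:

    event = {
        "type": "realm_playgrounds",
        "realm_playgrounds": [
            {
                "id": 1,
                "name": "Python playground",
                "pygments_language": "Python",
                "url_prefix": "https://example.com/run?code=",
            },
        ],
    }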
Similar to the previous commit, we have added a `do_*` function
which does the deletion from the DB. The next commit handles sending
the events when both adding and deleting a playground entry.
Added the OpenAPI format data to zulip.yaml for DELETE
/realm/playgrounds/{playground_id}. Also added Python and curl
examples to remove-playground.md.
Tests added.
This endpoint will allow clients to create a playground entry
containing the name, pygments language and url_prefix for the
playground of their choice.
Introduced the `do_*` function in charge of creating the entry in
the model. Sending the events will be handled in a follow-up commit.
Added the OpenAPI format data to zulip.yaml for POST
/realm/playgrounds. Also added Python and curl examples for using
the endpoint in its markdown documentation (add-playground.md).
Tests added.
Tweaked exports.py to add the config object there so that our export
tool can include the table when exporting. Also includes all the
changes required to import the new table from the exported data.
Helper function `get_realm_playgrounds` added to fetch all
playgrounds in a realm.
Tests amended.
Adds backend code for the mute users feature.
This is just infrastructure work (database
interactions, helpers, tests, events, API docs
etc) and does not involve any behavioral/semantic
aspects of muted users.
Adds POST and DELETE endpoints, to keep the
URL scheme mostly consistent in terms of `users/me`.
TODOs:
1. Add tests for exporting `zulip_muteduser` database table.
2. Add dedicated methods to python-zulip-api to be used
in place of the current `client.call_endpoint` implementation.
This is a prep change to eventually completely
replace the term "filter" with "linkifier" in
the codebase.
This only renames files. Code changes will be
done in further commits.
This renames the test file for muting to have
the term `topic` in it, along with an ambiguously
named helper.
This is a prep change for implementing the mute
users feature.
We use the GIPHY web SDK to create a popover containing GIFs in a
grid format. Simply clicking on a GIF inserts it into the compose
box.
We add the GIPHY logo to the compose box action icons; clicking it
opens the GIPHY picker popover containing GIFs with "Powered by GIPHY"
attribution.
Previously, if a user subscribed to a stream with
history_public_to_subscribers and then was looking at old messages in
the stream, they would not get live-updates for that stream, because
notify_reaction_update only looked at UserMessage rows (we had a
previous workaround involving the `historical` field in `UserMessage`,
which had already made it work if the user themselves added the
reaction).
We fix this by including all subscribers with history access in the
set of recipients for update events.
Fixes a bug that was confused with #16942.
Amazon SES has a limit on the size of address fields, and rejects
emails with too-long "From" combinations of name and address. This
limit is set to 320 bytes and comes from an RFC limitation on the
size of addresses. This RFC standard states that an email address
should not be composed of a local part (before the '@') longer than
64 bytes and a domain part (after the '@') longer than 255 bytes.
It is possible that Amazon SES misinterprets this limitation, as it
checks the length of the combination of the name and the email
address of the sender.
To ensure that this problem is not encountered in Zulip's send_email
module, the length of this combination is now checked against this
limit, and the from_name field is dropped, keeping only the
from_address field, when that is necessary to stay below 320 bytes.
If the from_address field alone is longer than 320 bytes, the
sending process will raise an SMTPDataError exception.
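A minimal sketch of the check described; the helper name and exact
interface here are illustrative, not the actual send_email code:

    from email.headerregistry import Address

    MAX_FROM_LENGTH = 320  # limit enforced by Amazon SES

    def build_from_header(from_name: str, from_address: str) -> str:
        # Prefer "Name <address>"; drop the name if the combination
        # would exceed 320 bytes, keeping just the address.
        full = str(Address(display_name=from_name, addr_spec=from_address))
        if len(full.encode("utf-8")) <= MAX_FROM_LENGTH:
            return full
        # If the bare address alone is still too long, the actual send
        # will fail downstream with smtplib.SMTPDataError.
        return from_address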
Tests for this new check are added to the backend test suite, in
order to verify that build_email correctly outputs an email with both
from_name and from_address filled when the total length is lower
than 320 bytes, and that it correctly throws the from_name field
away when necessary.
Fixes: #17558.
This reverts commit 9c6d8d9d81 (#16916).
This feature has known bugs, and also wants some design changes to
make it customizable like linkifiers, so we’re retargeting this to
post-4.x.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The Session middleware only adds `Vary: cookie` if it sees an access
to the session from inside the request. Because we are effectively,
from the Django session middleware's point of view, returning the
static content of `request.saved_response` and never accessing the
session, it does not set `Vary: cookie` on longpoll requests.
Explicitly mark Tornado requests as varying by cookie.
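A minimal sketch of the fix, assuming Django's patch_vary_headers
utility; the function name here is illustrative:

    from django.utils.cache import patch_vary_headers

    def finish_tornado_response(response):
        # The session middleware never observes a session access for
        # these longpoll requests, so add the Vary header ourselves.
        patch_vary_headers(response, ("Cookie",))
        return response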
Adding an additional `!` to the stream name each time a stream is
deactivated, to a maximum of 21 times, effectively limits the number
of times a stream with a given name can be deactivated. This is
unlikely to come up in common usage, but may be confusing when
testing.
Change what we prepend to deactivated stream names to something with
more entropy than just `!`, by instead prepending a substring of the
hash of the stream's ID. Using 128 bits of the hash means that it
will require more than 10^18 renames to have a 1% chance of collision.
Because too-long stream names are also truncated at 60 characters,
having this entropy in the beginning of the name also helps address
potential issues from stream names that differed only in, e.g. the
60th character.
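A hypothetical sketch of the scheme described (the exact prefix
format in the real code may differ):

    import hashlib

    def deactivated_stream_name(stream_id: int, old_name: str) -> str:
        # 32 hex digits = 128 bits of the hash of the stream's ID.
        prefix = hashlib.sha512(str(stream_id).encode()).hexdigest()[:32]
        # Stream names are truncated at 60 characters, so keeping the
        # entropy at the front means it survives truncation.
        return f"!{prefix}!{old_name}"[:60]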
Fixes #17016.
Instead of validating the `op` value later, this commit does that
in `REQ`.
This also helps avoid duplicating this validation when the stream
typing notifications feature is added.
The `widget_content` key is expected to contain a string which parses
as JSON; in the event that it does not, log the error and notify the
bot owner, instead of failing silently.
Fixes #16850.
In `validate_account_and_subdomain` we check
that the user's realm is not deactivated. In
case this check fails, we raise our standard
JsonableError. While this works well in most
cases, it creates difficulties in the handling
of users with deactivated realms for non-browser
clients.
So we register a new REALM_DEACTIVATED error
code so that clients can distinguish whether the
error is because of a deactivated realm. Following
these changes `validate_account_and_subdomain`
raises RealmDeactivatedError if the user's realm
is deactivated.
This error is also documented in
`/api/rest-error-handling`.
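A self-contained sketch of the pattern; the base-class details here
are simplified assumptions, not the actual zerver.lib.exceptions
code:

    class JsonableError(Exception):
        code = "BAD_REQUEST"

        def __init__(self, msg: str) -> None:
            super().__init__(msg)
            self.msg = msg

        def to_json(self) -> dict:
            # Non-browser clients can branch on the machine-readable
            # code instead of parsing the human-readable message.
            return {"result": "error", "msg": self.msg, "code": self.code}

    class RealmDeactivatedError(JsonableError):
        code = "REALM_DEACTIVATED"

        def __init__(self) -> None:
            super().__init__("This organization has been deactivated")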
Testing: I have mostly relied on automated
backend tests to test this.
Fixes #17763.
In validate_account_and_subdomain we check
that the user's account is not deactivated. In
case this check fails, we raise our standard
JsonableError. While this works well in most
cases, it creates difficulties in the handling
of deactivated accounts for non-browser clients.
So we register a new USER_DEACTIVATED error
code so that clients can distinguish whether the
error is because of a deactivated account. Following
these changes `validate_account_and_subdomain`
raises UserDeactivatedError if the user's account
is deactivated.
This error is also documented in
`/api/rest-error-handling`.
Testing: I have mostly relied on automated
backend tests to test this.
Partially addresses issue #17763.
We add a TUTORIAL_ENABLED setting for self-hosters who want to
disable the tutorial entirely on their system. For this, the
default value (True) is placed in default_settings.py, which
can be overridden by adding an entry in /etc/zulip/settings.py.
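For example, a self-hoster could add this line to
/etc/zulip/settings.py:

    TUTORIAL_ENABLED = False  # disable the new-user tutorial server-wide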
Updated the database query to filter out deactivated streams from the
return of the get_topic_mutes method. Added an optional
include_deactivated parameter to the method, making the filtering the
default behavior while keeping it overridable. Added test cases in
test_muting for these changes. Fixes blueslip warnings thrown by
muting.js set_muted_topics when passed deactivated streams via
page_params.
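A rough sketch of the new signature; the model and field names here
are assumptions for illustration:

    def get_topic_mutes(user_profile, include_deactivated=False):
        # MutedTopic is an assumed model name for illustration.
        rows = MutedTopic.objects.filter(user_profile=user_profile)
        if not include_deactivated:
            # By default, skip topics on deactivated streams, which is
            # what fixes the blueslip warnings in muting.js.
            rows = rows.filter(stream__deactivated=False)
        return [(row.stream.name, row.topic_name) for row in rows]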
This adds the is_user_active field, with the appropriate code for
setting the value correctly in the future. In the following commit a
migration to backfill the value for existing Subscriptions will be
added.
To ensure correct user_profile.is_active handling in tests as well,
we replace all direct .is_active mutations with calls to the
appropriate functions.
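For example, in tests, rather than mutating the flag directly, one
calls the relevant action function (signature simplified here):

    # Instead of:
    #     hamlet.is_active = False
    #     hamlet.save(update_fields=["is_active"])
    # call the action function, so Subscription.is_user_active (and
    # any other derived state) stays consistent:
    do_deactivate_user(hamlet)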
This commit adds a new option of STREAM_POST_POLICY_MODERATORS
in stream_post_policy which will allow only realm admins and
moderators to post in that stream.
The moderators-only option was actually added in the previous
commit for create_stream_policy, as we use the same function
'has_permission' for both policies. But we add the error
handling code and tests for the moderators-only option in this
commit.
This commit modifies the has_permission function to include the
realm moderator role, thus adding a new moderators-only option
for create_stream_policy.
Though this automatically adds the option for invite_to_stream_policy
as well, we will keep the other code, for showing errors and for
tests, in a separate commit.
The session object provides a common place to set headers on all
requests, no matter which implementation.
Because the `headers` attribute of Session is not a true static
attribute, but rather exposed via overriding `__getstate__`, `mock`'s
autospec cannot know about it, and thus throws an error; in tests that
mock the Session, we thus must explicitly set the `session.headers`.
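A sketch of the workaround in tests (names illustrative):

    from unittest import mock
    import requests

    session = mock.create_autospec(requests.Session, instance=True)
    # autospec can't discover the dynamic `headers` attribute, so set
    # it explicitly before the code under test touches it.
    session.headers = {"User-Agent": "ZulipOutgoingWebhook/test"}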
The existing organization, of returning an opaque blob from
`build_bot_request`, which was later consumed by
`send_data_to_server`, is not particularly sensible; the steps become
oddly split between the OutgoingWebhookWorker, `do_rest_call`, and the
`OutgoingWebhookServiceInterface`.
Make the `OutgoingWebhookServiceInterface` in charge of building,
making, and returning the request in one method; another method
handles extracting content from a successful response. `do_rest_call`
is responsible for calling both halves of this, and doing common error
handling.
The comments in the stream-policy tests in test_message_send.py
specify the restriction on creating streams based on
stream_post_policy. But this restriction was removed in 9aaa61963 and
we now allow everyone to create all types of streams. So this commit
fixes the stream creation parts of these comments.
The /events and /register endpoints were excluded from schema
validation because they were previously not completely documented.
However, they can now be added for proper checking. Removed them from
the excluded endpoints list and fixed the documentation for /register
and /events after the checking.
Fixes #17796.
Backend logic for handling user mentions was cluttered
because it was handled at two stages: first in
get_possible_mentions_info, while fetching mention data
based on the message, and then later in UserMentionPattern,
which handles processing of the text for mentions.
Ideally UserMentionPattern should depend on
get_possible_mentions_info only for data, but there was
shared logic between these two that made it hard to debug
any possible bugs.
Updates in this commit make both of these functions
coherent in terms of logic and also add appropriate
comments to improve the readability of these functions.
There was also a hidden bug: if a user A was
mentioned with @**name|id**, then @**invalid|id**
also mentioned A, because of the way we handled mentions
earlier. It is solved as a result of this refactor, and an
appropriate test has been added for this.
This has been tested manually, as well as by adding a new
test to address the missing case.
I noticed this because the test_events.py tests had the extremely
weird pattern of calling the actual change function, and then testing
the `notify` function's state changes (which should always be noops),
rather than actually testing the state change function.
Fixing the test made it clear that the actual logic in events.py
simply did not handle deleting custom_profile_field_value elements
from user objects when a custom_profile_field object was deleted.
So we fix that bit of logic as well.
It appears this bug was unique -- at least we don't have any other
notify_* functions being used directly in test_events.py, and the
handful of state_change_expected=False entries are all events for data
not present in page_params.
* The `op` (operation) field, added in f6fb88549f, was never intended
for the `custom_profile_fields` event. This commit removes the `op`,
as it doesn't have any use in the code.
* As a part of cleanup, this also eliminates the schema check warnings
for `custom_profile_fields` event, mentioned in #17568.
This commit migrates some of the backend tests in test_import_export
to use assertLogs() instead of mock.patch(), as planned in #15331.
Logs for tests in this file are suppressed rather than asserted, as
asserting them made changes to the import/export codebase more
fragile. As we already have checks for the actual functionality, it
made less sense to assert those logs.
Extend our markdown system to support mentioning users
by id as well. Following these changes, it is possible
to mention users with @**|user_id** and silently mention
them using @_**|user_id**.
The main intention in extending the mention syntax is to make
it convenient for bots to mention users using their ids. Note
that the previous syntax is still supported.
Documentation tweaked by tabbott for better readability.
The changes were tested manually in the development server, and
also by adding some new backend and frontend tests.
Fixes: #17487.
We add support for shortening links, and test their shortening, in a
well-organized, clean manner that makes it trivial to extend the
GitHub approach for GitLab and perhaps other services.
We only shorten basic types of GitHub links (issue, PR, commit) that
fit a set of simple common patterns; the default behaviour of Autolink
is kept for everything else.
The logic added to the frontend and backend Markdown processors is
identical. This makes it easy to extend the logic for other services
like GitLab.
Fixes #11895.
This commit changes the list_to_streams function to raise an error
according to the create_stream_policy value when a user cannot create
streams, instead of the same error for all cases.
This commit modifies test_user_settings_for_subscribing_other_users
to check all the possible cases, including the cases when a user
can successfully subscribe other users, along with the already
tested failure cases. This commit also adds checks for guest users,
which were not present before.
This commit replaces the code which directly changes user.role,
realm.create_stream_policy and realm.waiting_period_threshold
with do_change_user_role and do_set_realm_property functions
in test_can_create_streams. This makes the code similar to the
other tests.
We refactor test_can_create_streams and test_can_subscribe_other_users
in test_subs.py. We want to follow a specific order in such tests:
set each policy value in turn, then check that the role named in the
policy returns true and the role just below it returns false. This
approach is explained in detail below.
Following hierarchy of roles is considered for these tests -
1. Realm admin
2. Full members
3. Members
4. Guests.
Then if the policy is set to admins only, we check that having the
admin role returns true and that the role just below it, i.e. full
member, returns false. Similarly, if the policy is set to members
only, we check that a member returns true and that the role below it,
guest, returns false. We basically follow this approach because we
can assume that if a user with a particular role cannot do the
required task, then a user with a role lower in the hierarchy cannot
do it either.
This commit refactors the two tests mentioned above to follow this
workflow.
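A schematic sketch of that check order (function names are
illustrative, not the actual helper):

    def check_has_permission_policies(set_policy, has_permission,
                                      roles, policies):
        # roles is ordered highest to lowest, e.g.
        # [admin, full_member, member, guest]; policies[i] is the
        # policy value that admits roles[i] and above.
        for i, policy in enumerate(policies):
            set_policy(policy)
            assert has_permission(roles[i])  # role named in the policy
            if i + 1 < len(roles):
                # The role just below it must be rejected.
                assert not has_permission(roles[i + 1])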
This commit removes an unnecessary do_change_user_role call
in test_can_subscribe_other_users. This was added in 1aebf3cab,
which replaced multiple functions like do_change_is_admin
and do_change_is_guest with do_change_user_role.
Previously, two functions, do_change_is_admin and do_change_is_guest,
were used because there were two flags, is_realm_admin and is_guest,
which were used to determine the role of a user. But then we added a
single role field to UserProfile, removed the multiple flags, and
replaced the different functions with a single do_change_user_role.
With the addition of the role field, two different do_change_*
functions were no longer needed, as we only have a role field instead
of different flags; but this was missed in 1aebf3cab, and this commit
fixes it.
This adds the schema checker, openapi schema, and also a test for
the realm/deactivated event.
With several block comments by tabbott explaining the logic behind our
behavior here.
Part of #17568.
Modifies `StreamPattern` and `StreamTopicPattern` to inherit
from InlineProcessor instead of Pattern. This change is made
because Pattern stopped checking for matching patterns as soon
as it found a match which was not a valid stream. Due to this,
all subsequent mentions failed, even if they were valid.
This bug was only present in backend rendering, due to
markdown.inlinepatterns.Pattern.
Due to the above changes, verbose_compile is no longer used for
precompiling STREAM_LINK_REGEX and STREAM_TOPIC_LINK_REGEX, as it
adds ^(.*?) and (.*?)$, which cause extra pattern-matching overhead
that is not required. With the new InlineProcessor, these
extra patterns at the beginning and end are not required.
So, StreamPattern and StreamTopicPattern now define their own
__init__ method for precompiling the regex.
Fixes #17535.
These changes were tested locally in the dev server and by adding
some new markdown tests.
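A minimal sketch of the migration, with hypothetical helpers
(find_stream, build_element) standing in for the real lookup and
rendering logic:

    import re
    from markdown.inlinepatterns import InlineProcessor

    class StreamPattern(InlineProcessor):
        def __init__(self, pattern, md):
            # Precompile ourselves instead of using verbose_compile,
            # which would wrap the regex in ^(.*?) ... (.*?)$.
            self.compiled_re = re.compile(pattern, re.DOTALL | re.VERBOSE)
            self.md = md

        def handleMatch(self, m, data):
            # find_stream/build_element are hypothetical helpers.
            stream = find_stream(m.group("stream_name"))
            if stream is None:
                # Unlike Pattern, we can report "no match", so the
                # parser keeps scanning and later, valid mentions
                # still render.
                return None, None, None
            return build_element(stream), m.start(0), m.end(0)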
Modifies `UserGroupMentionPattern` to inherit from InlineProcessor
instead of Pattern. This change is made because Pattern
stopped checking for matching patterns as soon as it found
a match which was not a valid user group. Due to this, all
subsequent user group mentions failed, even if they were
valid. This bug was only present in backend rendering, due to
markdown.inlinepatterns.Pattern.
This was reported as issue #17535.
These changes were tested locally in the dev server and by adding
some new markdown tests.
Modifies `UserMentionPattern` to inherit from InlineProcessor
instead of Pattern. This change is made because Pattern
stopped checking for matching patterns as soon as it found
a match which was not a valid user. Due to this, all
subsequent user mentions failed. This bug was only present in
backend rendering, due to markdown.inlinepatterns.Pattern.
This was reported as issue #17535.
These changes were tested locally in the dev server and by adding
some new markdown tests.
This is preparatory work for investigating reports of missing unread
messages.
It's a little surprising that no test failed after adding the code
without API documentation.
Co-authored-by: Tushar Upadhyay (tushar912).
The message from the bot that was triggered by a 407 error exposed
the exception details to the bot owner in the error message. This
commit handles it more gracefully and shows a generic message.
The messages from the bots which were triggered by the
outgoing_webhooks didn't have the bot name in them. This commit adds
the bot name to them and makes the corresponding changes in the tests.
This adds an option for restricting an LDAP user
to only be allowed to log into certain realms.
This is done by configuring an attribute mapping of "org_membership"
to an LDAP attribute that will contain the list of subdomains the
LDAP user is allowed to access. This is analogous to how it's done
in SAML.
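For example, in /etc/zulip/settings.py (the LDAP attribute name on
the right-hand side is illustrative):

    AUTH_LDAP_USER_ATTR_MAP = {
        "full_name": "cn",
        # List-valued LDAP attribute holding the subdomains this user
        # may log into, analogous to SAML's restriction mechanism.
        "org_membership": "department",
    }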
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
Replying to an email notification from a stream where the user
does not satisfy the stream_post_policy results in a failure. In
such a case, the user previously received no feedback regarding the
failure.
Notify the user via the notification bot if their email
message failed to send.
Fixes #16642.
self.example_user("hamlet") uses get_user_by_delivery_email, so it
doesn't actually cache anything. This should use a cached function, like
the test below: test_do_change_realm_subdomain_clears_user_realm_cache.
This is part of our general process of replacing emails, which are
not stable over time, with user_ids when referring to users in the
API.
We still keep the `email` reference option, since it can be useful for
linking third-party applications to Zulip on an intranet that might
have a user's corporate email handy and not want to do the extra round
trip to look up the user.
The name of the parameter, user_id_or_email, was chosen to make it
clear that the default/preferred option is user_id.
Fixes #14304.
TextField is used to allow users to set long stream + topic narrow
names in the URLs.
We currently restrict users to only set "all_messages" and
"recent_topics" as narrows.
This commit achieves 3 things:
* Removes recent topics as the default view which loads when
hash is empty.
* Loads default_view when hash is empty.
* Loads default_view on pressing escape key when it is unhandled by
other present UI elements.
NOTE: After this commit, loading Zulip with an empty hash will
automatically set the hash to default_view. Ideally, we'd just display
the default view without a hash, but that involves extra complexity.
One exception is when the user is trying to load an overlay directly,
i.e. Zulip is loaded with an overlay hash. In this case,
we render recent topics in the background irrespective of default_view.
We consider this last detail to be a bug not important enough to block
adding this setting.
The query string parameter authentication method is now deprecated for
newly created Slack applications since the 24th of February[1]. This
causes Slack imports to fail, claiming that the token has none of the
required scopes.
Two methods can be used to solve this problem: either include the
authentication token in the header of an HTTP GET request, or include
it in the body of an HTTP POST request. The former is preferred, as
the code was already written to use HTTP GET requests.
Change the way the parameters are passed to the "requests.get" method
calls, to pass the token via the `Authorization` header.
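The header-based pattern this switches to looks roughly like this
(illustrative helper, not the actual import code):

    import requests

    def slack_get(url: str, token: str, **params):
        # The token travels in the Authorization header rather than
        # the query string.
        return requests.get(
            url,
            headers={"Authorization": f"Bearer {token}"},
            params=params,
        )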
[1] https://api.slack.com/changelog/2020-11-no-more-tokens-in-querystrings-for-newly-created-apps
Fixes: #17408.
Minimized code duplication by integrating POSTRequestMock into
HostRequestMock and then updating the required files with
HostRequestMock.
Fixes part of #1211.
This commit updates the stream creation, subscribing others to
streams, wildcard mention, and stream post policy settings to allow
realm moderators, even if they are new and the respective setting
is set to allow full members only.
This commit renames the is_new_member property in models.py
to is_provisional_member, which will return true for any user
who is not a full member. We will add a condition in a further
commit such that this returns False for a moderator, as we
will initially give moderators all the rights that a full
member has.
These are all referring to email_gateway_bot, when they're supposed to
refer to the notification and welcome bots, respectively. The values are
the same though, so the tests were passing anyway.
It's likely that we will implement new hotspots in the future that
aren't a part of the tutorial hotspots; for instance, a hotspot to
advertise new features. Hence, grouping them into categories like
INTRO_HOTSPOTS is a good start. We also have an aggregate of all
types of hotspots we may add in the future, under ALL_HOTSPOTS.
Fixes #17238.
In process_new_human_user, the queries were wrong, revoking all
invites sent to the email address, even in realms other than the one
where the new account just got created.
test_signup: This test was wrong, because the inviter UserProfile was
from a different realm. Such a PreregistrationUser shouldn't be
considered valid.
test_tutorial: The direct call to internal_send_private_message was
using the sender's realm as the realm argument, which is not valid. It
doesn't lead to any error because the codepath seems to mostly not care
about the realm arg if the sender is a cross-realm bot. From my reading
of the code I think that wrong realm arg here would break user mentions,
because it makes its way to check_message() and then to
build_message_send_dict - but overall the message gets sent without
errors. Either way, this was a bug in the test and should be fixed.
This commit includes ROLE_MODERATOR in realm_user_count_by_role.
We also update test_change_role in test_audit_log.py to include
changes for the moderator role as well.
Note that at this point, it's not possible to create moderator users;
this just will make it easier to write tests for logic involving them
as we develop the feature.
Currently there are only tests verifying the error case, and there
are no tests checking the case where messages are sent successfully
in a 'STREAM_POST_POLICY_RESTRICT_NEW_MEMBERS' stream.
This commit adds tests checking that full members, and bots owned
by them, can send messages successfully in streams with post policy
'STREAM_POST_POLICY_RESTRICT_NEW_MEMBERS'.
According to the tests, we should not allow bots without owners to
post in streams with STREAM_POST_POLICY_RESTRICT_NEW_MEMBERS.
But the code did not handle this; the related test passed, raising
an error for bots without an owner only because the bot itself was
a new member.
This commit fixes this by adding a condition that raises an error
when the bot has no owner.
Add a new REST API endpoint, GET /users/{email}, for looking up a
user by email, which is useful especially for corporate API
applications that might already have a user's email address.
Fixes #14302.
A few internal fields used for tracking which types of notifications
have already been sent for a given message, like `handler_id` and the
`push_notified` bundle of fields, were being incorrectly included in
message events delivered to clients.
One could argue these fields might be useful hints to clients, but
because notifications can be triggered later on via
`missedmessage_hook`, they have no useful purpose in the API.
This commit moves these internal event fields into an `internal_data`
object within the event object, and deletes this field in `contents()`
for call points that would serve data to clients.
Tweaked by tabbott to provide a cleaner interface.
We're not bumping API_FEATURE_LEVEL because these fields have always
been documented as being present only due to a bug, so no clients
should be expecting or relying on them.
Fixes: #15947.
zerver/lib/users.py has a function named access_user_by_id, which is
used in /users views to fetch a user by their id. Along with fetching
the user, this function also does important validation of the
permissions required for fetching the target user.
In an attempt to solve the above problem, this commit introduces the
following changes (sketched below):
1. Makes all the parameters except user_profile and target_user_id
keyword-only.
2. Uses a for_admin parameter instead of read_only.
3. Adds a documentation note to the function describing the reason
for the changes, along with the recommended way to call this function
in the future.
4. Changes views and tests to call this function in the new format.
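A sketch of the resulting signature; the extra keyword flags shown
are assumptions beyond what is specified above:

    def access_user_by_id(
        user_profile,
        target_user_id,
        *,
        allow_deactivated=False,  # assumed flag
        allow_bots=False,         # assumed flag
        for_admin,
    ):
        ...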
Changes were tested using ./tools/test-backend.
Fixes #17111.
This commit migrates some of the backend tests to use assertLogs(),
instead of mock.patch() as planned in #15331.
Tweaked by tabbott to avoid tautological assertions.
There were some tests that had mock patches for logging, although no
logging was actually happening there. This commit removes such patches
in `corporate/tests/test_stripe.py`, `zerver/tests/test_cache.py`,
`zerver/tests/test_queue_worker.py`,
and `zerver/tests/test_signup.py`.
EmailLogBackend used to create a new EmailMessage and copy
only certain values from the original EmailMultiAlternatives
object. This resulted in the loss of information and made
it harder to test PRs like
https://github.com/zulip/zulip/pull/17121.
So instead of creating a new EmailMessage, tweak and send the existing
EmailMultiAlternatives object.
This isn't quite the right model, because we're not actually going
through the upload code path, but it does at least provide some inline
image previews in the data.
Fixes part of #14991.
The changes are as follows:
• Fix one day offset in all western zones.
• Correct CST from -64800 to -21600 and CDT from -68400 to -18000.
• Disambiguate PST in favor of -28800 over +28800.
• Add GMT, UTC, WET, previously excluded for being at offset 0.
• Add ACDT, AEDT, AKST, MET, MSK, NST, NZDT, PKT, which the previous
code did not find.
• Remove numbered abbreviations -12, …, +14, which are unnecessary.
• Remove MSD and PKST, which are no longer used.
Hardcode the dict and verify it with a test, so that future
discrepancies won’t go silently unnoticed.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Adjustments made due to changes in Django 3.0:
(https://docs.djangoproject.com/en/3.0/releases/3.0/)
- test_signup: INTERNAL_RESET_URL_TOKEN was moved to
PasswordResetConfirmView.reset_url_token
- test_message_fetch:
"add_never_cache_headers() and never_cache() now add the private
directive to Cache-Control headers."
- "django.utils.html.escape() now uses html.escape() to escape HTML.
This converts ' to ' instead of the previous equivalent decimal
code '." - this requires adjusting the expected decimal code
in some of the string fixtures in tests.
When we were getting an apply_event call for
a subscription/add event, we were trying not to
mutate the event itself, but this clumsy code
was still mutating the actual event:
    # Avoid letting 'subscribers' entries end up in the list
    for i, sub in enumerate(event['subscriptions']):
        event['subscriptions'][i] = \
            copy.deepcopy(event['subscriptions'][i])
        del event['subscriptions'][i]['subscribers']
This is only a theoretical bug.
The only person who receives a subscription/add
event is the current user.
And it wouldn't have affected the current user,
since the apply_event was correctly updating the
state, and we wouldn't actually deliver the event
to the client (because the whole point of apply_event
is to prevent us from having to piggyback the
super-recent events on to our payload or put
them into the event queue and possibly race).
The new code just cleanly makes a copy of each
sub, if necessary, as we add them to state["subscriptions"].
And I updated the event schemas to reflect that
subscribers is always present in subscription/add
event.
Long term we should probably avoid sending subscribers
on this event when the clients don't set something
like include_subscribers. That's a fairly complicated
fix that involves passing in flags to ClientDescriptor.
Alternatively, we could just say that our policy is
that we never send subscribers there, but we instead
use peer_add events. See issue #17089 for more
details.
I eliminate the defaults, since the existing code
was already specifying values for most things.
I move all the booleans to the bottom, for both
parameters and arguments.
I require explicit keywords for everything but
user_profile (which is now first).
And, finally, I format the code in a more
diff-friendly manner.
We eliminate some redundant checks.
We also consistently provide a `subscribers` field
in our stream data with `[]`, even if our users
can't access subscribers. We therefore bump
the API version and tweak the docs. (See further
down for a detailed justification of the change.)
Even though it is sometimes fine to have redundant code
that is defensive in nature, some upcoming changes are going
to move subscriber-related logic out of build_stream_dict_for_sub
for certain codepaths, as part of our effort to streamline
the payload for subscribers within page_params.
So we can't rely on the code that I removed here
inside of build_stream_dict_for_sub.
Anyway, it makes more sense to do these checks explicitly
in the validate function.
The code in build_stream_dict_for_sub was almost effectively
a noop, since the validation function was already preventing
us from getting subscriber info. The only difference it
made was sometimes converting `[]` to `None`, and then
subsequently omitting the subscribers field.
Neither ZT nor the webapp make any distinction between
`[]` or <missing key> for the `subscribers` data in
`page_params`.
The webapp has had this code for a long time (and now
equivalent code elsewhere in this PR):
    if (!Object.prototype.hasOwnProperty.call(sub, "subscribers")) {
        sub.subscribers = new LazySet([]);
    }
The webapp calculates access based on booleans, anyway:
    sub.can_access_subscribers =
        page_params.is_admin || sub.subscribed ||
        (!page_params.is_guest && !sub.invite_only);
And ZT would choke if `subscribers` were missing, except that
it never gets to the relevant code due to other checks:
    def get_other_subscribers_in_stream(<snip>):
        assert stream_id is not None or stream_name is not None

        if stream_id:
            assert self.is_user_subscribed_to_stream(stream_id)

            return [sub
                    for sub in self.stream_dict[stream_id]['subscribers']
                    if sub != self.user_id]
        else:
            return [sub
                    for _, stream in self.stream_dict.items()
                    for sub in stream['subscribers']
                    if stream['name'] == stream_name
                    if sub != self.user_id]
You could make a semantic argument that we should prefer
<missing key> to `[]` when subscribers aren't even available, but
we have precedent from the way that `bulk_get_subscriber_user_ids`
has traditionally populated its result:
    result: Dict[int, List[int]] = {
        stream["id"]: [] for stream in stream_dicts
    }
If we changed `stream_dicts` to `target_stream_dicts`, we
would facilitate a move toward `None`, but it would just cause
headaches for other server code as well as the frontends
(which, to reiterate, already prefer the empty array
for convenience).
By moving the relevant logic from realm.get_bot_domain to
get_fake_email_domain we will make realm.host be used (if possible) for
dummy user addresses. That is, instead of user11@zulipchat.com, the
address will become user11@subdomain.zulipchat.com.
With the change in d70e1bcdb7,
bots got emails like bot@zulip.com with EXTERNAL_HOST="zulip.com",
rather than bot@subdomain.zulip.com, which was the old format. That's
not desirable, so with this commit, realm.host will be used when
possible, falling back to FAKE_EMAIL_DOMAIN only if needed.
We often send only one field (away or status_text)
to be updated.
So we have to make our schema support optional
keys.
As a result of the more flexible schema, we no
longer need to exempt the node fixtures from
our schema checks.
This makes us more efficient when handling
multiple users. We don't have to keep
sending the same two queries to the database.
Note that as part of this we eliminated
a failure mode for the obscure population
of users for whom both `user.is_guest` and
`user.can_access_public_streams()` return
False. We know this would have only affected
Zephyr users (by looking at the code), and
we know we don't actually process Zephyr
users for email digests (or else we would
have raised exceptions in the old code).
We mostly need realm_id, but when we go to build
message lists, we need realm.uri.
We could probably be more aggressive about using
`only` here, but for now I am just trying to
reduce hops to the database.
We now require explicit keywords for all arguments
to fetch_initial_state_data except user_profile.
We provide reasonable defaults to keep the test
code concise.
In the case of reusing a registration link, reuse the
redirect_to_email_login_url helper. This does have the side effect of
now showing a "you've already registered" note, which did not happen
previously, but that seems probably for the best, since the user did
just click a "register" link.
Checking for `validate_email_not_already_in_realm` again (after the
form already did so), but only in the case that the form fails to
validate, means that we may be spending time pushing totally invalid
emails to the DB to check. In the case of emails containing nulls,
this can even trigger a 500 error from PostgreSQL.
Stop calling `validate_email_not_already_in_realm` in the form
validation. The form is currently only used in two places -- in
`accounts_home` and in `maybe_send_to_registration`. The latter is
only called if the address is known to not currently have an account,
so checking in there is unnecessary; and in the former case, we wish
different behaviour (the redirect) than just validation failure, which
is all the validator can do.
Fixes #17015.
Co-authored-by: Alex Vandiver <alexmv@zulip.com>
Add a `--allow-reserved-subdomain` flag which allows creation of
reserved keyword domains. This also always enforces that the domain
is not in use, which was removed in 0258d7d.
Fixes #16924.
When changing the subdomain of a realm, create a deactivated realm with
the old subdomain of the realm, and set its deactivated_redirect to the
new subdomain.
Doing this will help us to do the following:
- When a user visits the old subdomain of a realm, we can tell the user
that the realm has been moved.
- During the registration process, we can ensure that the old
subdomain of the realm is not used to create a new realm.
If the subdomain is changed multiple times, the deactivated_redirect
fields of all the deactivated realms are updated to point to the new
uri.
Instead of just storing the edit history in the message which
triggered the topic edit, we store the edit history in all
the messages that changed. This helps users track the edit history
of a message more reliably.
As of Feb 15th, 2019, Hipchat Cloud and Stride
have reached End Of Life and are no longer
supported by Atlassian. Since it has been almost
two years now, we can remove the migration guides.
Fetching rows with end_time within the last 25 hours would result
in the realmcount queries returning two rows for each realm
if the analytics page was opened within an hour of the
count stats being updated.
Allowing any admins to create arbitrary users is not ideal because it
can lead to abuse issues. We should require something stronger that
requires the server operator's approval and thus we add a new
can_create_users permission.
We change the return type of check_message to be a dataclass instead
of Dict[str, Any]. This refactoring helps us clearly understand the
data structure returned by check_message, which was not possible when
using Dict.
The SendMessageRequest class is added in zerver/lib/message.py, in
spite of it not being used in that file itself, to maintain
consistency, as other TypedDicts and dataclasses are defined in that
file, and to avoid a circular dependency, as SendMessageRequest is
used in lib/widget.py as well.
We also rename the local variable for accessing SendMessageRequest
objects to 'send_request'.
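An illustrative shape of the change; the field list here is a guess,
not the actual definition:

    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class SendMessageRequest:
        message: Any                  # the Message being sent
        stream: Optional[Any] = None  # set for stream messages
        local_id: Optional[str] = None

    # Callers now use attribute access:
    #     send_request = check_message(...)
    #     do_send_messages([send_request])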
We always want to do these at the same time. Previously, message
editing did too much stripping (fixes#16837) and failed to check for
NUL bytes.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The Slack API always (even for failed requests) puts the access scopes
of the token passed in, into "X-OAuth-Scopes"[1], which can be used to
determine if any are missing -- and if so, which.
[1] https://api.slack.com/legacy/oauth-scopes#working-with-scopes
An HTML document sent without a charset in the Content-Type header
needs to be scanned for a charset in <meta> tags. We need to pass
bytes instead of str to Beautiful Soup to allow it to do this.
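The gist of the fix (illustrative):

    from bs4 import BeautifulSoup

    # Pass the raw bytes so Beautiful Soup can sniff <meta charset=...>
    # itself, instead of pre-decoded text with a guessed charset.
    soup = BeautifulSoup(response.content, "html.parser")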
Fixes #16843.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
If a user visits a realm which has been deactivated and its
deactivated_redirect field is set, we should show a message telling
the user that the realm has moved to the deactivated_redirect url.
We export a realm's data, and disable the realm, because the user
is moving from Zulip Cloud (e.g. https://example.zulipchat.com/) to
self-hosting or another platform (e.g. https://zulip.example.com/)
which we do not control. This commit adds a field in the realm object
called deactivated_redirect to store the url to which the realm has
moved.
This handles the conditions when anchor values are larger than
LARGER_THAN_MAX_MESSAGE_ID by clamping them down to it. Also added
tests for the function parse_anchor_value.
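A sketch of the clamping (the constant's value here is illustrative):

    LARGER_THAN_MAX_MESSAGE_ID = 10**15  # illustrative value

    def clamp_anchor(anchor: int) -> int:
        # Anything absurdly large is treated as "newest".
        return min(anchor, LARGER_THAN_MAX_MESSAGE_ID)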
Fixes #16768.
In 1bcb8d8ee8 I made
it so the webapp doesn't include "streams" in its
state from `fetch_initial_state_data`, but I didn't
address all the places in apply_event.
For 3000 messages and 400 users, this saved
about 30 seconds.
We only do two queries per batch of messages
now, and the algorithm is easier to analyze,
as it's just three nested loops.
Note that we are much more efficient about finding
active users here:
- we do one query per realm (instead of per-user)
- we pass the cutoff date to the database
- we get back just a list of distinct ids
The query counts increase here for somewhat
contrived reasons. The tests before this
commit reflected a successful trip to the
UserProfile cache, but that's not actually
realistic in practice.
We don't need to mock the dates here. We also
explicitly clear out all streams first, and then
we explicitly test with both the stream being
current and the stream being old.
We can use the _enqueue_emails_for_realm helper
to avoid all the Tuesday-related logic here.
We also don't bother to create UserActivity
records, since the bot gets excluded by virtue
of its being a bot. (Also, the date ranges
here were sketchy due to the time mocking.)
We can avoid all the date mocking now for all
but a couple tests that exercise the is-it-Tuesday
logic.
And this test now correctly tests that we exclude
recently active users.
And this allows us to remove the other test.
This commit fixes a bug in marked.js which caused it to double-escape
HTML when rendering messages of the form: *[text](url)*.
This fixes a bug introduced in
3bdc8bbaa5, where an unnecessary
escape() call was added for the <em> code path, likely just because it
was adjacent to the others that needed it in the file.
Fix this, and add tests to verify that things are still being escaped
once after removing this extra escape.
Fixes #14845.
I now use sets for stream_ids in more of the digest
code.
As part of this I replaced exclude_subscription_modified_streams
with streams_recently_modified_for_user.
It's easier for the caller to just ask for ids
to delete from its callee than it is to pass
in a set/list to mutate.
The simpler boundary between the functions makes
the tests easier to write--you can see the
`filtered_streams` logic goes away in this diff.
I also make the tests a bit more thorough by using
combinations of Cordelia/Othello and Verona/Denmark
to try to find multiple possible flaws.
And I make the time intervals longer than 1s to
avoid false negatives from slow CI boxes.
If we have multiple users, this reduces the amount
of queries we need to do, because we get all
subscriptions for all users in a single query
to Subscription.
For the single-user case, we are introducing an
extra query hop, but the database is doing
roughly the same work, because we are just breaking
up this complex query into two hops:
    messages =
        select ... from message
        where recipient__type_id in (
            select stream_id from subscription
            where ...
        )

Now it's more like:

    stream_ids =
        select stream_id from subscription
        where ...

    messages =
        select ... from message
        where recipient__type_id in stream_ids
Note that we are not changing anything semantically
or algorithmically yet. The only overhead here
for the single-user case is boxing and unboxing
data into single-item dicts and lists.
The interfaces for callers in the view and the
queue processor remain the same for now.
We didn't need the enough-traffic mock.
We also continue to prep for testing multiple users.
I also finally remove a comment that is about to
be addressed (and which inaccurately refers to huddles).
In 709493cd75 (Feb 2017)
I added code to render_markdown that re-fetched the
sender of the message, to detect whether the sender is
a bot.
It's better to just let the ORM fetch this. The
message object should already have sender.
The diff makes it look like we are saving round trips
to the database, which is true in some cases. For
the main message-send codepath, though, we are only
saving a trip to memcached, since the middleware
will have put our sender's user object into the
cache. The test_message_send test calls internally
to check_send_stream_message, so it was actually
hitting the database in render_markdown (prior to
my change).
Before this change we were clearing the cache on
every SQL usage.
The code to do this was added in February 2017
in 6db4879f9c.
Now we clear the cache just one time, but before
the action/request under test.
Tests that want to count queries with a warm
cache now specify keep_cache_warm=True. Those
tests were particularly flawed before this change.
In general, the old code both over-counted and
under-counted queries.
It under-counted SQL usage for requests that were
able to pull some data out of a warm cache before
they did any SQL. Typically this would have bypassed
the initial query to get UserProfile, so you
will see several off-by-one fixes.
The old code over-counted SQL usage insofar as it
rested on the rather extreme assumption that, during
the action itself, the entries that you put into
the cache would get thrown away; that's essentially
what the prior code simulated.
Now, it's still bad if an action keeps hitting the
cache for no reason, but it's not as bad as hitting
the database. There doesn't appear to be any evidence
of us doing something silly like fetching the same
data from the cache in a loop, but there are
opportunities to prevent second or third round
trips to the cache for the same object, if we
can re-structure the code so that the same caller
doesn't have two callees get the same data.
Note that for invites, we have some cache hits
that are due to the nature of how we serialize
data to our queue processor--we generally just
serialize ids, and then re-fetch objects when
we pop them off the queue.
Steve asked me to remove this, since the tictactoe game was always
intended as a proof of concept. Now that we have poll and todo
widgets, the sample code for tictactoe has much less value.
We replace the content and type in test_widgets.py to maintain
coverage.
Initially, when writing two or more quotes with a
blank line in between them, the quotes got merged.
This created confusion, especially in "quote and reply".
This commit fixes this issue: two or more quotes with
a blank line in between them no longer get merged.
This change is correct both for usability and for improving our
compatibility with CommonMark.
Fixes #14379.
This commit replaces mock.patch with assertLogs().
* Adds a return value to do_rest_call() in outgoing_webhook.py, to
support asserting log output in test_outgoing_webhook_system.py.
* Logs are not asserted in test_realm.py because it would require
users to be queried using users=User.objects.filter(realm=realm), and
the order of the resulting queryset varies for each run.
* In test_decorators.py, replacement of mock.patch is not done because
I'm not sure if it's worth the effort to replace it, as it's a return
value of a function.
Tweaked by tabbott to set proper mypy types.
Then because the ID is now part of the draft dict, we can
(and do) change the structure of the "drafts" parameter
returned from `GET /drafts` from an object (mapping ID to
data) to an array.
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
Sometimes we don't need to specify the expected_drafts field.
So by removing it, we can reduce the clutter a bit.
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
Now the timestamp returned in a draft dict will always be an int.
The endpoints will still accept either an int or a float.
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
Our test-backend validation confirms that we don't log anything to
stdout in the tests, so the fact that CI passes with these removals
shows there was nothing being logged.
While working on shifting toward native browser time zone APIs
(#16451), it was found that all but very recent Chrome and Node
versions reject certain legacy timezone aliases like US/Pacific
(https://crbug.com/364374).
For now, we only canonicalize the timezone property returned in user
objects and not the timezone setting itself.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
During the new user creation code path, there can be no existing
active clients for the user being created, so we can skip the code to
send events to that user's clients.
The tests here reflect that we need to send fewer events, and do
fewer queries that would have been spent computing data for these
clients.
Fixes #16503, combined with the long series of recent changes by Steve
Howell to fix super-linear behavior in this code path.
We now bulk up peer_add/peer_remove events by user if the
same user has subscribed to multiple streams (and just
that single user).
This mostly optimizes the new-user codepath, but the
algorithm is a bit more general in nature.
This test was flaky due to some date-related
non-determinism. I make all the Message objects
current to make add_new_user_history reliably
try to bulk-update UserMessage rows to read.
Because of the very large `oneOf` clause of the formats of events
possible in Zulip's `GET /events` system, we had issues with
`test-backend` failures for missing documentation of a new event
format producing on the order of 1000 lines of output, which was
very much unhelpful.
Fix this by limiting the output to use only the oneOf variants that
are broadly similar to the actual payload received.
Fixes #16023.
The comment still pointed to the 'vacate' event flow, but
we removed the vacate event in a9356508ca.
This commit fixes the comment to describe the correct
purpose of the lines below, i.e. to test the remove
event flow.
We were including 'realm_user' in event_types along with
'subscription', but we don't send an event of type 'realm_user' when
subscribing to a new stream. This was added in 1c332f5d6a.
This commit removes 'realm_user' from event_types.
The name used to be included in the id_token, but this seems to have
been changed by Apple and now it's sent in the `user` request param.
https://github.com/python-social-auth/social-core/pull/483 is the
upstream PR for this - but upstream is currently unmaintained, so we
have to monkey patch.
We also alter the tests to reflect this situation. Tests no longer put
the name in the id_token, but rather in the `user` request param in the
browser flow, just like it happens in reality.
An adaptation has to be made in the native flow: since the name won't
be included by Apple in the id_token anymore, the app, when POSTing
to the /complete/apple/ endpoint, can (and should, for better user
experience) add the `user` param formatted as a JSON dict of the form
{"email": "hamlet@zulip.com", "name": {"firstName": "Full", "lastName": "Name"}}.
This is also reflected by the change in the
native flow tests.
We now can send an implied matrix of user/stream tuples
for peer_add and peer_remove events.
The client code basically does this:
    for stream_id in event['stream_ids']:
        for user_id in event['user_ids']:
            update_sub(stream_id, user_id)
We used to send individual events, which gets really
expensive when you are creating new streams. For
the copy-to-stream case, we should now see events
go from U to 1, where U is the number of users
added.
Note that we don't yet fully optimize the potential
of this schema. For adding a new user with lots
of default streams, we still send S peer_add events.
And if you subscribe a bunch of users to a bunch of
private streams, we only go from U * S to S; we can't
optimize it down to one event easily.
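For example, subscribing one user to three streams now produces a
single event of roughly this shape (values illustrative):

    event = {
        "type": "subscription",
        "op": "peer_add",
        "stream_ids": [11, 12, 13],
        "user_ids": [42],
    }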
All the fields of a stream's recipient object can
be inferred from the Stream, so we just make a local
object. Django will create a Message object without
checking that the child Recipient object has been
saved. If that behavior changes in some upgrade,
we should see some pretty obvious symptom, including
query counts changing.
Tweaked by tabbott to add a longer explanatory comment, and delete a
useless old comment.
This saves us a query for edge cases like when
you try to unsubscribe from a public stream
that you have already unsubscribed from.
But this is mostly to prep for upcoming
optimizations.
Previously, markdown titles were overridden by YouTube and Vimeo
preview titles. Now we check whether a markdown title is present and,
if so, use it instead of the YouTube or Vimeo preview title, when
previews of linked websites are enabled.
Fixes #16100.
Upstream has slightly changed the whitespace around stashes. Take
this opportunity to clean up the extra blank lines we were outputting.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
That class is an artifact of when Stream
didn't have recipient_id. Now it's simpler
to deal with stream subscriptions.
We also save a query during page load (and
other places where we get subscriber
info).
We already trust ids that are put on our queue
for deferred work. For example, see the code for
"mark_stream_messages_as_read_for_everyone"
We now pass stream_recipient_id when we queue
up work for do_mark_stream_messages_as_read.
This generally saves about 3 queries per
user when we unsubscribe them from a stream.
We get two speedups:
* The query to get existing subscribers only
gets the two fields we need. We no longer
need all the overhead of user_profile
and recipient data being returned in the
query.
* We avoid Django making extra hops to the
database to get user info.
We replace get_peer_user_ids_for_stream_change
with two bulk functions to get peers and/or
subscribers.
Note that we have three codepaths that care about
peers:

    subscribing existing users:
        we need to tell peers about new subscribers
        we need to tell subscribed user about old subscribers

    unsubscribing existing users:
        we only need to tell peers who unsubscribed

    subscribing new user:
        we only need to tell peers about the new user
        (right now we generate send_event
        calls to tell the new user about existing
        subscribers, but this is a waste
        of effort that we will fix soon)

The two bulk functions are these:

    bulk_get_subscriber_peer_info
    bulk_get_peers
They have some overlap in the implementation,
but there are some nuanced differences that are
described in the comments.
Looking up peers/subscribers in bulk leads to some
nice optimizations.
We will save some memcached traffic if you are
subscribing to multiple public streams.
We will save a query in the remove-subscriber
case if you are only dealing with private streams.
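A rough sketch of one of these bulk lookups, assuming a Subscription
model keyed by recipient_id and an active_user_ids helper (the real
implementation differs in detail):

```
def bulk_get_peers(realm, streams):
    # Public streams share one realm-wide peer set, computed once;
    # private streams need a per-stream subscriber lookup.
    result = {}
    public_streams = [s for s in streams if not s.invite_only]
    private_streams = [s for s in streams if s.invite_only]

    if public_streams:
        realm_user_ids = set(active_user_ids(realm.id))
        for stream in public_streams:
            result[stream.id] = realm_user_ids

    for stream in private_streams:
        result[stream.id] = set(
            Subscription.objects.filter(
                recipient_id=stream.recipient_id, active=True
            ).values_list("user_profile_id", flat=True)
        )
    return result
```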
We used to send occupy/vacate events when
either the first person entered a stream
or the last person exited.
It appears that our two main apps have never
looked at these events. Instead, it's
generally the case that clients handle
events related to stream creation/deactivation
and subscribe/unsubscribe.
Note that we removed the apply_events code
related to these events. This doesn't affect
the webapp, because the webapp doesn't care
about the "streams" field in do_events_register.
There is a theoretical situation where a
third party client could be the victim of
a race where the "streams" data includes
a stream where the last subscriber has left.
I suspect in most of those situations it
will be harmless, or possibly even helpful
to the extent that they'll learn about
streams that are in a "quasi" state where
they're activated but not occupied.
We could try to patch apply_event to
detect when subscriptions get added
or removed. Or we could just make the
"streams" piece of do_events_register
not care about occupy/vacate semantics.
I favor the latter, since it might
actually be what users want, and it will
also simplify the code and improve
performance.
The query to get "occupied" streams has been expensive
in the past. I'm not sure how much any recent attempts
to optimize that query have mitigated the issue, but
since we clearly aren't sending this data, there is no
reason to compute it.
Using web_public_guest for anonymous users is confusing, since
a 'guest' is actually a logged-in user, whereas a
web_public_guest is not logged in and has only read access
to messages. So, we rename it to web_public_visitor.
This is a more thorough test of adding multiple
streams for multiple users, including streams
that users have already subscribed to.
The extra queries here are due to the fact
that we call `principal_to_user_profile` in
a loop in the view. So that's an example
of O(N) overhead. We may be able to bulk-fetch
these users eventually.
This is a pure extraction, except that I remove a
redundant check that `len(principals) > 0`. Whenever
that value is false, then `new_subscriptions` will
only have one possible entry, which is the current
user, and we skip that in the loop.
We no longer do O(N) queries to get existing streams.
This is a somewhat contrived use case--generally, we
are not trying to re-subscribe a user to several
streams. Still, we want to avoid this.
This commit also makes `test_bulk_subscribe_many`
do more work, and the change to the test helped
me discover this bug.
If a user asks to be subscribed to a stream
that they are already subscribed to, then
that stream won't be in new_stream_user_ids,
and we won't need to send an event for it.
This change makes that happen more automatically.
I think it's important that the callers understand
that bulk_add_subscriptions assumes all streams
are being created within a single realm, so I make
it an explicit parameter.
This may be overkill--I would also be happy if we
just included the assertions from this commit.
ssh always runs its command through a shell (after naïvely joining
multiple arguments with spaces), so it needs an extra level of shell
quoting. This should have no effect because we already validated user
with a regex, but it’s better for escaping to be locally correct in
case the context changes.
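A sketch of the extra quoting layer (the commit does not name a helper;
shlex.quote is one standard way to do it):

```
import shlex
import subprocess

# ssh joins its command arguments with spaces and hands the result to
# the remote shell, so each argument needs its own layer of quoting.
user = "some-validated-user"  # illustrative
subprocess.check_call(["ssh", "host", "some-command", shlex.quote(user)])
```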
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This low-level interface allows consuming from a queue with timeouts.
This can be used to either consume in batches (with an upper timeout),
or one-at-a-time. This is notably more performant than calling
`.get()` repeatedly (what json_drain_queue does under the hood), which
is "*highly discouraged* as it is *very inefficient*"[1].
Before this change:
```
$ ./manage.py queue_rate --count 10000 --batch
Purging queue...
Enqueue rate: 11158 / sec
Dequeue rate: 3075 / sec
```
After:
```
$ ./manage.py queue_rate --count 10000 --batch
Purging queue...
Enqueue rate: 11511 / sec
Dequeue rate: 19938 / sec
```
[1] https://www.rabbitmq.com/consumers.html#fetching
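A sketch of consuming with a timeout via pika's BlockingConnection
(inactivity_timeout is real pika API; the queue and connection details
are illustrative):

```
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

events = []
for method, properties, body in channel.consume("queue_name", inactivity_timeout=1.0):
    if method is None:
        break  # hit the timeout with no new messages; process the batch
    events.append(body)
    channel.basic_ack(method.delivery_tag)
```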
`loopworker_sleep_mock` is a file-level variable used to mock out the
sleep() call in LoopQueueProcessingWorker; don't reuse the variable
name for something else.
Despite its name, the `queue_size` method does not return the number
of items in the queue; it returns the number of items that the local
consumer has delivered but unprocessed. These are often, but not
always, the same.
RabbitMQ's queues maintain the queue of unacknowledged messages; when
a consumer connects, it sends to the consumer some number of messages
to handle, known as the "prefetch." This is a performance
optimization, to ensure the consumer code does not need to wait for a
network round-trip before having new data to consume.
The default prefetch is 0, which means that RabbitMQ immediately dumps
all outstanding messages to the consumer, which slowly processes and
acknowledges them. If a second consumer were to connect to the same
queue, they would receive no messages to process, as the first
consumer has already been allocated them. If the first consumer
disconnects or crashes, all prior events sent to it are then made
available for other consumers on the queue.
The consumer does not know the total size of the queue -- merely how
many messages it has been handed.
No change is made to the prefetch here; however, future changes may
wish to limit the prefetch, either for memory-saving, or to allow
multiple consumers to work the same queue.
Rename the method to make clear that it only contains information
about the local queue in the consumer, not the full RabbitMQ queue.
Also include the waiting message count, which is used by the
`consume()` iterator for similar purpose to the pending events list.
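If we did limit the prefetch later, it would be a one-liner with the
standard AMQP basic_qos call (the value here is illustrative):

```
channel.basic_qos(prefetch_count=100)  # hand the consumer at most 100 unacked messages
```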
For streams in which only full members are allowed to post,
we block guest users from posting.
Guest users were already blocked from posting to admin-only
streams. So now, guest users can only post to
STREAM_POST_POLICY_EVERYONE streams.
This is not a new feature but a bugfix which should have
happened when implementing the full member stream policy / guest users.
SIGALRM is the simplest way to set a specific maximum duration that
queue workers can take to handle a specific message. This only works
in non-threaded environments, however, as signal handlers are
per-process, not per-thread.
The MAX_CONSUME_SECONDS is set quite high, at 10s -- the longest
average worker consume time is embed_links, which hovers near 1s.
Since just knowing the recent mean does not give much information[1],
it is difficult to know how much variance is expected. As such, we
set the threshold to be such that only events which are significant
outliers will be timed out. This can be tuned downwards as more
statistics are gathered on the runtime of the workers.
The exception to this is DeferredWorker, which deals with quite-long
requests, and thus has no enforceable SLO.
[1] https://www.autodesk.com/research/publications/same-stats-different-graphs
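A minimal sketch of the SIGALRM timeout, using only the standard
library (the handler and consume_one names are ours):

```
import signal

class WorkerTimeoutError(Exception):
    pass

def timer_expired(signum, frame):
    raise WorkerTimeoutError

MAX_CONSUME_SECONDS = 10

# Signal handlers are per-process, so this is only safe in
# non-threaded worker environments.
signal.signal(signal.SIGALRM, timer_expired)
signal.alarm(MAX_CONSUME_SECONDS)
try:
    consume_one(event)  # hypothetical message handler
finally:
    signal.alarm(0)  # cancel the pending alarm
```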
Currently, drain_queue and json_drain_queue ack every message as it is
pulled off of the queue, until the queue is empty. This means that if
the consumer crashes between pulling a batch of messages off the
queue, and actually processing them, those messages will be
permanently lost. Sending an ACK on every message also results in a
significant amount of traffic to RabbitMQ, with notable
performance implications.
Send a singular ACK after the processing has completed, by making
`drain_queue` into a contextmanager. Additionally, use the `multiple`
flag to ACK all of the messages at once -- or explicitly NACK the
messages if processing failed. Sending a NACK will re-queue them at
the front of the queue.
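A minimal sketch of that contextmanager, assuming pika's BlockingChannel
(basic_get, basic_ack, and basic_nack are real pika API; the structure
here is ours):

```
from contextlib import contextmanager

@contextmanager
def drain_queue(channel, queue_name):
    # Pull everything currently ready, but defer the ACK until the
    # caller has finished processing; NACK and re-queue on failure.
    messages = []
    last_tag = None
    while True:
        method, properties, body = channel.basic_get(queue_name)
        if method is None:
            break
        messages.append(body)
        last_tag = method.delivery_tag
    try:
        yield messages
    except Exception:
        if last_tag is not None:
            channel.basic_nack(last_tag, multiple=True, requeue=True)
        raise
    else:
        if last_tag is not None:
            channel.basic_ack(last_tag, multiple=True)
```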
Performance of a no-op dequeue before this change:
```
$ ./manage.py queue_rate --count 50000 --batch
Purging queue...
Enqueue rate: 10847 / sec
Dequeue rate: 2479 / sec
```
Performance of a no-op dequeue after this change (a 25% increase):
```
$ ./manage.py queue_rate --count 50000 --batch
Purging queue...
Enqueue rate: 10752 / sec
Dequeue rate: 3079 / sec
```
We add a new wildcard_mention_policy setting to handle wildcard
mentions in large streams, with a wide range of policies available to
organizations.
We set the default to the safe option for preventing accidental spam:
only stream administrators are able to use wildcard mentions in
large streams.
We call build_message_send_dict from check_message instead of
do_send_messages.
This is a prep commit for adding a new setting for handling
wildcard mentions in large streams.
A later commit alters `authenticate` of EmailAuthBackend to
store a `needs_to_change_password` variable in the session,
which is used to insist that users change their weak passwords.
The tests start failing with that change because client.login()
runs `authenticate` without a `request` object. So, this commit
sends a request object with `request.session=self.client.session`
to self.client.login() in tests wherever needed.
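The test adaptation looks roughly like this (a sketch; Django's test
client forwards extra login() kwargs as credentials to authenticate()):

```
from django.http import HttpRequest

request = HttpRequest()
request.session = self.client.session
self.client.login(request=request, username=email, password=password)
```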
We previously used to redirect to a config error page with
a different URL. This commit renders the config error at the same
URL where the configuration error is encountered. This way, when the
configuration error is fixed, the user can refresh to continue
normally, or go back to the login page from the provided link to
choose any other auth backend.
Also moved those URLs to dev_urls.py so that they can be easily
accessed to work on styling etc.
In tests, removed some of the asserts checking the status code to be
200, as the function `assert_in_success_response` does that check.
We now no longer define any schemas in test_events--all
of them are in event_schema, which helps our tooling
cross-check schemas for openapi and node tests.
It happens that whether you add a reaction or remove
a reaction, we send the exact same fields, just using
a different op code.
This sort of symmetry is actually kind of rare, as
usually "add" events have more fields, and "remove" events
might just send an id of something to remove.
Our openapi schema treats these as two separate events,
so we are more consistent with it, and it helps our
schema-checking tooling for node fixtures, too.
Note that we now have to exempt the two events from
our openapi checks, due to the is_mirror_dummy field
in the deprecated user block. We can decide how to
handle this later--one possibility is to just add it
as an optional field on the event_schema side.
Note that we use value_type for value instead of
bool, since properties can be non-bool things
like color, which we just don't test now. We
should test them.
We more than compensate for this by checking
the actual value of the `value` field in
check_subscription_update.
There is a legacy format where we send
singular "message_id" instead of plural
"message_ids".
Then there are different fields for "private"
and "stream" message types.
Note that we make the schema for profile_data
slightly more realistic, but it doesn't actually get
exercised by our current tests (apart from
making sure it's a dict), since we don't have
profile data for our test realm.
We also don't have the optional fields for bots,
since our tests don't exercise that, nor
delivery_email.
So we exempt realm_user_add_event from openapi
checks for now.
When we try to match the openapi specs better, we
will probably want to add a few tests to test_events.
Obviously getting good coverage for adding users
would be nice for all these scenarios:
* delivery_email matters
* bots
* realm has profile fields
This is a prep commit for supporting "presence"
events, where the key of the dictionary is some
arbitrary string like "website" but the value
of the dictionary is another dictionary itself
with keys that are more like variable names.
This also forces us to create TupleType.
We exempt this from the openapi check,
since we haven't figured out how to model
tuples in openapi with the same precision
as event_schema (and it may be impossible).
Long term we just want to stop dealing in
tuples, of course.
StringDict is a data type for representing dictionaries where
all keys and values are strings. Add this data type to data_types.py
and edit other files so that this data type is put to use and tested.
(slightly tweaked by @showell to remove a comment and shorten
a var name now that we have a proper data type)
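A minimal sketch of such a checker, assuming the check_data/schema
conventions of the other classes in data_types.py:

```
class StringDict:
    def check_data(self, var_name, val):
        if not isinstance(val, dict):
            raise AssertionError(f"{var_name} is not a dict")
        for key, value in val.items():
            if not isinstance(key, str):
                raise AssertionError(f"{var_name} has a non-string key")
            if not isinstance(value, str):
                raise AssertionError(f"{var_name}[{key!r}] is not a string")

    def schema(self, var_name):
        return f"{var_name}: string_dict"
```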
We also make our schema in event_schema reflect this,
which in turn makes us match the already accurate
openapi spec, so we no longer need to exempt four
types of events from our sanity checks.
We might want to rename the tool to something more
general now, since we are really reconciling three
things:
- node fixtures
- event_schema checkers for test_events
- openapi specs
The way we compare python and openapi schemas is
as follows:
- first convert openapi schemas to be built
from DictType, ListType, etc. with from_openapi
- do a diff on the schemas
Most of the new code is just having the FooType
family of classes serialize themselves with schema().
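The overall flow, sketched with illustrative names (only from_openapi
and schema() come from this change):

```
python_type = realm_update_event  # a DictType built in event_schema
openapi_type = from_openapi(get_openapi_event_schema("realm", "update"))  # hypothetical lookup

if python_type.schema("event") != openapi_type.schema("event"):
    report_schema_diff(python_type.schema("event"), openapi_type.schema("event"))
```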
This commit renames 'test_message_to_self' and
'test_api_message_to_self' tests to
'test_message_to_stream_by_name' and
'test_api_message_to_stream_by_name' to depict
the actual purpose of these tests.
This argument does not define whether an endpoint "is a webhook"; it is set
for "/api/v1/messages", which is not really a webhook, but allows
access from webhooks.
If multiple filters match the same string, we run into an infinite
loop of converting the string into URLs. To fix it, we mark the matched
string as atomic after the first conversion.
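In Python-Markdown terms, this amounts to wrapping the replaced text in
AtomicString so later inline patterns skip it (a sketch; the element
variable is illustrative):

```
from markdown.util import AtomicString

el.text = AtomicString(matched_text)  # later patterns will not re-process this
```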
This mimics the backend logic for adding the data-attribute -
to know what Pygments language was used to highlight the code
block - in locally echoed messages.
A new test checks our logic for canonicalizing pygments aliases
(for both frontend and backend).
Other fixtures and tests were amended.
In development and test, we keep the Tornado port at 9993 and 9983,
respectively; this allows tests to run while a dev instance is
running.
In production, consistently moving to port 9800 removes an odd edge
case: with just one worker, it was on an entirely different port than
if two workers were used.