We don't need the select_related('user_profile')
optimization any more, because we just keep
track of user info in our own data structures.
In this codepath we are never actually modifying
users; we just occasionally need their ids or
emails.
This can be a pretty substantial improvement if
you are adding a bunch of users to a stream
who each have a bunch of their own subscriptions.
We could also limit the number of full rows in this
query by adding an extra hop to the DB just to
get colors (using values_list), and then only get
full sub info for the streams that we're adding, rather
than getting every single subscription, in full, for each user.
Apart from finding what colors the user has already
used, the only other reason we need all the columns
in Subscription here is to handle streams that
need to be reactivated. Otherwise we could do
only("id", "active", "recipient_id", "user_profile_id")
or similar. Fortunately, Subscription isn't
an overly wide table; it's mostly bool fields.
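As a rough sketch (not the actual change, and assuming the Subscription fields named above), the narrower query could look something like:
```
from zerver.models import Subscription

def get_narrow_subs(user_profile_ids):
    # Skip select_related("user_profile") entirely and fetch only the
    # columns this codepath needs; colors could instead come from a
    # separate values_list("color", flat=True) query.
    return Subscription.objects.filter(
        user_profile_id__in=user_profile_ids,
    ).only("id", "active", "recipient_id", "user_profile_id", "color")
```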
But by far the biggest thing to avoid is bringing
in all the extra user_profile data.
We have pretty good coverage on query counts here,
so I think this fix is pretty low risk.
This class removes a lot of the annoying tuples
we were passing around.
Also, by including the user everywhere, which
is easily available to us when we make instances
of SubInfo, it sets the stage to remove
select_related('user_profile').
We used to send occupy/vacate events when
either the first person entered a stream
or the last person exited.
It appears that our two main apps have never
looked at these events. Instead, it's
generally the case that clients handle
events related to stream creation/deactivation
and subscribe/unsubscribe.
Note that we removed the apply_events code
related to these events. This doesn't affect
the webapp, because the webapp doesn't care
about the "streams" field in do_events_register.
There is a theoretical situation where a
third party client could be the victim of
a race where the "streams" data includes
a stream where the last subscriber has left.
I suspect in most of those situations it
will be harmless, or possibly even helpful
to the extent that they'll learn about
streams that are in a "quasi" state where
they're activated but not occupied.
We could try to patch apply_event to
detect when subscriptions get added
or removed. Or we could just make the
"streams" piece of do_events_register
not care about occupy/vacate semantics.
I favor the latter, since it might
actually be what users want, and it will
also simplify the code and improve
performance.
The query to get "occupied" streams has been expensive
in the past. I'm not sure how much any recent attempts
to optimize that query have mitigated the issue, but
since we clearly aren't sending this data, there is no
reason to compute it.
Using web_public_guest for anonymous users is confusing, since
a 'guest' is actually a logged-in user, whereas a web_public_guest
is not logged in and has only read access to
messages. So, we rename it to
web_public_visitor.
This is a more thorough test of adding multiple
streams for multiple users, including streams
that users have already subscribed to.
The extra queries here are due to the fact
that we call `principal_to_user_profile` in
a loop in the view. So that's an example
of O(N) overhead. We may be able to bulk-fetch
these users eventually.
This is a pure extraction, except that I remove a
redundant check that `len(principals) > 0`. Whenever
that value is false, then `new_subscriptions` will
only have one possible entry, which is the current
user, and we skip that in the loop.
We no longer do O(N) queries to get existing streams.
This is a somewhat contrived use case--generally, we
are not trying to re-subscribe a user to several
streams. Still, we want to avoid this.
This commit also makes `test_bulk_subscribe_many`
do more work, and the change to the test helped
me discover this bug.
If a user asks to be subscribed to a stream
that they are already subscribed to, then
that stream won't be in new_stream_user_ids,
and we won't need to send an event for it.
This change makes that happen more automatically.
Let
U = number of users to subscribe
S = number of streams to subscribe
We were technically doing N^3 amount of work
when we sent certain events, or to be more
precise, U * S * S amount of work. For each
stream, we were looping through a list of tuples
of size U * S to find the users for the stream.
In practice either U or S is usually 1, so the
performance gains here are probably negligible,
especially since the constant factors here
were just slinging around Python data.
But the code is actually more readable now, so
it's a double win.
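For illustration, the reshaped bookkeeping is roughly the following (the names are made up, not the actual code); we index the U * S entries once instead of rescanning them for every stream:
```
from collections import defaultdict

def users_by_stream_id(sub_infos):
    result = defaultdict(list)
    for sub_info in sub_infos:  # one pass over the U * S entries
        result[sub_info.stream.id].append(sub_info.user)
    return result  # per-stream lookups are now O(1)
```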
We rename needs_new_sub (which sounds like
a boolean!) to new_recipient_ids, and we
calculate it explicitly within the loop, so
that we don't need to worry as much about
subsequent passes through the loop mutating it.
This allows us to also remove recipient_ids,
which in turn lets us remove recipients_map,
albeit with a small tweak for stream_map.
I also introduce the my_subs local, which
I use to more directly populate used_colors,
as well as using it as the loop var.
I think it's important that the callers understand
that bulk_add_subscriptions assumes all streams
are being created within a single realm, so I make
it an explicit parameter.
This may be overkill--I would also be happy if we
just included the assertions from this commit.
This function now does all the work that we used
to do with notify_subscriptions_added happening
inside a loop.
There's a small fine-tuning here, where we only
get recent traffic on streams that we're actually
sending events for.
We now just pass in all_subscribers_by_stream, rather
than a callback.
We also move sub_tuples_by_user closer to the
loop where we call notify_subscriptions_added.
This preserves the alpha layer on GIF images that need to be resized
before being uploaded. Two important changes occur here:
1. The new frame is a *copy* of the original image, which preserves the
GIF info.
2. The disposal method of the original GIF is preserved. This
essentially determines what state each frame of the GIF starts from
when it is drawn; see PIL's docs:
https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html#saving
for more info.
This resolves some but not all of the test cases in #16370.
ssh always runs its command through a shell (after naïvely joining
multiple arguments with spaces), so it needs an extra level of shell
quoting. This should have no effect because we already validated user
with a regex, but it’s better for escaping to be locally correct in
case the context changes.
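A hedged sketch of the extra quoting layer (the command itself is illustrative):
```
import shlex
import subprocess

def run_as_remote_user(host: str, user: str) -> None:
    remote_cmd = ["sudo", "-u", user, "whoami"]
    # ssh re-joins its arguments with spaces and passes them to a remote
    # shell, so each remote argument gets its own layer of shell quoting.
    subprocess.check_call(["ssh", host, *map(shlex.quote, remote_cmd)])
```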
Signed-off-by: Anders Kaseorg <anders@zulip.com>
do_send_messages has side effects outside the database and may not
work reliably if its database effects are reordered by being inside a
transaction.
This also fixes a bug where we were doing the update incorrectly on
the Message table.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Since this was previously using repeated individual get() calls, it
could not be monitored for having a consumer. Add it in, by marking
it as queue type "consumer" (the default), and adding Nagios lines for
it.
Also adjust missedmessage_emails to be monitored; it stopped using
LoopQueueProcessingWorker in 5cec566cb9, but was never added back
into the set of monitored consumers.
This low-level interface allows consuming from a queue with timeouts.
This can be used to either consume in batches (with an upper timeout),
or one-at-a-time. This is notably more performant than calling
`.get()` repeatedly (what json_drain_queue does under the hood), which
is "*highly discouraged* as it is *very inefficient*"[1].
Before this change:
```
$ ./manage.py queue_rate --count 10000 --batch
Purging queue...
Enqueue rate: 11158 / sec
Dequeue rate: 3075 / sec
```
After:
```
$ ./manage.py queue_rate --count 10000 --batch
Purging queue...
Enqueue rate: 11511 / sec
Dequeue rate: 19938 / sec
```
[1] https://www.rabbitmq.com/consumers.html#fetching
`loopworker_sleep_mock` is a file-level variable used to mock out the
sleep() call in LoopQueueProcessingWorker; don't reuse the variable
name for something else.
Despite its name, the `queue_size` method does not return the number
of items in the queue; it returns the number of items that the local
consumer has delivered but unprocessed. These are often, but not
always, the same.
RabbitMQ's queues maintain the queue of unacknowledged messages; when
a consumer connects, it sends to the consumer some number of messages
to handle, known as the "prefetch." This is a performance
optimization, to ensure the consumer code does not need to wait for a
network round-trip before having new data to consume.
The default prefetch is 0, which means that RabbitMQ immediately dumps
all outstanding messages to the consumer, which slowly processes and
acknowledges them. If a second consumer were to connect to the same
queue, they would receive no messages to process, as the first
consumer has already been allocated them. If the first consumer
disconnects or crashes, all prior events sent to it are then made
available for other consumers on the queue.
The consumer does not know the total size of the queue -- merely how
many messages it has been handed.
No change is made to the prefetch here; however, future changes may
wish to limit the prefetch, either for memory-saving, or to allow
multiple consumers to work the same queue.
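If we do later bound the prefetch, a minimal sketch (assuming a local RabbitMQ and a pika BlockingChannel) would be:
```
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
# Deliver at most 100 unacknowledged messages to this consumer, leaving the
# rest in the queue for other consumers (and bounding memory use).
channel.basic_qos(prefetch_count=100)
```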
Rename the method to make clear that it only contains information
about the local queue in the consumer, not the full RabbitMQ queue.
Also include the waiting message count, which is used by the
`consume()` iterator for similar purpose to the pending events list.
We modify the access_stream_for_delete_or_update function to also
return the Subscription object along with the stream. This change
will be helpful in avoiding an extra query to get the Subscription
object in the code for updating a subscription's role.
For streams in which only full members are allowed to post,
we block guest users from posting.
Guest users were already blocked from posting to admin-only
streams. So now, guest users can only post to
STREAM_POST_POLICY_EVERYONE streams.
This is not a new feature but a bugfix which should have
happened when implementing the full member stream policy / guest users.
Otherwise, if consume_func raised an exception for any reason *other*
than the alarm being fired, the still-pending alarm would have fired
later at some arbitrary point in the calling code.
We need two try…finally blocks in case the signal arrives just before
signal.alarm(0).
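The shape is roughly the following sketch (names are illustrative, not the exact worker code):
```
import signal

class ConsumeTimeout(Exception):
    pass

def alarm_handler(signum, frame):
    raise ConsumeTimeout

def consume_with_timeout(consume_func, events, seconds: int = 10) -> None:
    signal.signal(signal.SIGALRM, alarm_handler)
    try:
        try:
            signal.alarm(seconds)
            consume_func(events)
        finally:
            # Clear the alarm even if consume_func raised for another reason.
            signal.alarm(0)
    finally:
        # If the signal fires just before signal.alarm(0) above, the inner
        # finally raises; this outer finally still restores the handler.
        signal.signal(signal.SIGALRM, signal.SIG_DFL)
```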
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Replaced ImageOps.fit with ImageOps.pad in zerver/lib/upload.py; pad
returns a resized and padded version of the image, expanded to fill the
requested aspect ratio and size.
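For illustration (the sizes and helper name are made up; this is not the exact upload.py change):
```
import io

from PIL import Image, ImageOps

def resize_logo(image_data: bytes, size: int = 800) -> Image.Image:
    im = Image.open(io.BytesIO(image_data))
    # Unlike ImageOps.fit(), pad() keeps the whole image and fills the
    # leftover area of the requested box instead of cropping it away.
    return ImageOps.pad(im, (size, size))
```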
Fixes part of #16370.
SIGALRM is the simplest way to set a specific maximum duration that
queue workers can take to handle a specific message. This only works
in non-threaded environments, however, as signal handlers are
per-process, not per-thread.
The MAX_CONSUME_SECONDS is set quite high, at 10s -- the longest
average worker consume time is embed_links, which hovers near 1s.
Since just knowing the recent mean does not give much information[1],
it is difficult to know how much variance is expected. As such, we
set the threshold to be such that only events which are significant
outliers will be timed out. This can be tuned downwards as more
statistics are gathered on the runtime of the workers.
The exception to this is DeferredWorker, which deals with quite-long
requests, and thus has no enforceable SLO.
[1] https://www.autodesk.com/research/publications/same-stats-different-graphs
Currently, drain_queue and json_drain_queue ack every message as it is
pulled off of the queue, until the queue is empty. This means that if
the consumer crashes between pulling a batch of messages off the
queue, and actually processing them, those messages will be
permanently lost. Sending an ACK on every message also results in a
significant amount of traffic to RabbitMQ, with notable
performance implications.
Send a singular ACK after the processing has completed, by making
`drain_queue` into a contextmanager. Additionally, use the `multiple`
flag to ACK all of the messages at once -- or explicitly NACK the
messages if processing failed. Sending a NACK will re-queue them at
the front of the queue.
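The contextmanager shape is roughly this simplified sketch (assuming a pika BlockingChannel; not the exact new API):
```
from contextlib import contextmanager

@contextmanager
def drain_queue(channel, queue_name: str):
    messages, last_tag = [], None
    while True:
        method, _properties, body = channel.basic_get(queue_name)
        if method is None:
            break  # queue is empty
        messages.append(body)
        last_tag = method.delivery_tag
    try:
        yield messages
    except Exception:
        if last_tag is not None:
            # Processing failed: requeue everything we pulled, at the front.
            channel.basic_nack(last_tag, multiple=True, requeue=True)
        raise
    else:
        if last_tag is not None:
            # One ACK that covers every message pulled above.
            channel.basic_ack(last_tag, multiple=True)
```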
Performance of a no-op dequeue before this change:
```
$ ./manage.py queue_rate --count 50000 --batch
Purging queue...
Enqueue rate: 10847 / sec
Dequeue rate: 2479 / sec
```
Performance of a no-op dequeue after this change (a 25% increase):
```
$ ./manage.py queue_rate --count 50000 --batch
Purging queue...
Enqueue rate: 10752 / sec
Dequeue rate: 3079 / sec
```
Part of #16094.
Moved the language selection preference logic from home.py to a new
function in i18n.py to avoid repetition in analytics views and home
views.
For users who are not authenticated, we don't need to require 2FA;
we only need it once they are trying to log in.
Tweaked by tabbott to be much more readable; the new style might
require new test coverage.
We add a new wildcard_mention_policy setting to handle wildcard
mentions in large streams, with a wide range of policies available to
organizations.
We set the default to the safe option for preventing accidental spam:
only stream administrators being able to use wildcard mentions in
large streams.
This prevents the memcached connection from being shared across
multiple processes, and hopefully addresses unexpected behavior from
cached functions like get_user_profile_by_id invoked inside the worker
processes.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We call build_message_send_dict from check_message instead of
do_send_messages.
This is a prep commit for adding a new setting for handling
wildcard mentions in large streams.
We extract the loop for building message dict in
do_send_messages in a separate function named
build_message_send_dict.
This is a prep commit for moving the code for building
of message dict in check_message.
There is a bug where we send an event even for
messages which do not have embedded links,
because we are using a single set 'links_for_embed' to
check whether we have to send an event for
embedded links or not.
This commit fixes the bug by adding 'links_for_embed'
to the message dict itself and sending the event only
if that message has embedded links.
As explained in the previous commit, yamole preprocessed allOf with an
algorithm that is not standards compliant. We replicate that
algorithm, but importantly, we only use it for our own code and not
for building the openapi_core RequestValidator.
This improves the time taken by OpenAPISpec().check_reload() from
1.69s to 0.53s, nearly all of which is inside
openapi_core.create_spec.
Closes #10484. Significantly improves #16068.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
yamole preprocesses our schema by naïvely merging all the objects in
an allOf array together, but this fails to capture the meaning of
allOf according to the OpenAPI specification. allOf is supposed to be
a strict logical intersection of each subschema interpreted
independently. It does not combine their properties maps before
interpreting additionalProperties. So according to the old definition
of JsonSuccess, every response is invalid:
    allOf:
      - additionalProperties: false
        properties:
          result:
            type: string
      - required:
          - result
          - msg
        properties:
          msg:
            type: string
because the first subschema disallowed msg and the second subschema
required msg.
To fix this, whenever we use allOf for schema “inheritance”, the base
schema must not specify additionalProperties, and the child schema
must explicitly list all properties recursively inherited from the
base schema in any subschema that uses additionalProperties.
Fixes #16109.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This commit removes the unnecessary comment which was added in
9454683108, when we were using message.get() for keys which
were also passed as args in do_send_messages, but there are no
such keys in the current code.
This commit removes the unnecessary line of code to get
rendered_content from message dict sent by check_message
when it actually does not include the 'rendered_content' key.
This line was added in 9454683108, but now we do not send
rendered_content in the message dict as we render the message
in do_send_messages itself.
A later commit alters `authenticate` of EmailAuthBackend to
store a `needs_to_change_password` variable in the session,
which is useful for insisting that users change their weak password.
The tests start failing with that change because client.login()
runs `authenticate` without a `request` object. So, this commit
sends a request object with `request.session=self.client.session`
to self.client.login() in tests wherever needed.
We previously redirected to the config error page with
a different URL. This commit renders the config error at the same
URL where the configuration error is encountered. This way, when the
configuration error is fixed, the user can refresh to continue
normally or go back to the login page from the link provided to
choose any other auth backend.
Also moved those URLs to dev_urls.py so that they can be easily
accessed to work on styling etc.
In tests, removed some of the asserts checking status code to be 200
as the function `assert_in_success_response` does that check.
We now no longer define any schemas in test_events--all
of them are in event_schema, which helps our tooling
cross-check schemas for openapi and node tests.
It happens that whether you add a reaction or remove
a reaction, we send the exact same fields, just using
a different op code.
This sort of symmetry is actually kind of rare, as
usually "add" events have more fields, and "remove" events
might just send an id of something to remove.
Our openapi schema treats these as two separate events,
so we are more consistent with it, and it helps our
schema-checking tooling for node fixtures, too.
Note that we now have to exempt the two events from
our openapi checks, due to the is_mirror_dummy field
in the deprecated user block. We can decide how to
handle this later--one possibility is to just add it
as an optional field on the event_schema side.
Note that we use value_type for value instead of
bool, since properties can be non-bool things
like color, which we just don't test now. We
should test them.
We more than compensate for this by checking
the actual value of the value in
check_subscription_update.
There is a legacy format where we send
singular "message_id" instead of plural
"message_ids".
Then there are different fields for "private"
and "stream" message types.
Note that we make the schema for profile_data
slightly more realistic, but it doesn't actually get
exercised by our current tests (apart from
making sure it's a dict), since we don't have
profile data for our test realm.
We also don't have the optional fields for bots,
since our tests don't exercise that, nor
delivery_email.
So we exempt realm_user_add_event from openapi
checks for now.
When we try to match the openapi specs better, we
will probably want to add a few tests to test_events.
Obviously getting good coverage for adding users
would be nice for all these scenarios:
* delivery_email matters
* bots
* realm has profile fields
This is a prep commit for supporting "presence"
events, where the key of the dictionary is some
arbitrary string like "website" but the value
of the dictionary is another dictionary itself
with keys that are more like variable names.
This also forces us to create TupleType.
We exempt this from the openapi check,
since we haven't figured out how to model
tuples in openapi with the same precision
as event_schema (and it may be impossible).
Long term we just want to stop dealing in
tuples, of course.
StringDict is a data type for representing dictionaries where
all keys and values are strings. Add this data type to data_types.py
and edit other files so that this data type is put to use and tested.
(slightly tweaked by @showell to remove a comment and shorten
a var name now that we have a proper data type)
We also make our schema in event_schema reflect this,
which in turn makes us match the already accurate
openapi spec, so we no longer need to exempt four
types of events from our sanity checks.
We might want to rename the tool to something more
general now, since we are really reconciling three
things:
- node fixtures
- event_schema checkers for test_events
- openapi specs
The way we compare python and openapi schemas is
as follows:
- first convert openapi schemas to be built
from DictType, ListType, etc. with from_openapi
- do a diff on the schemas
Most of the new code is just having the FooType
family of classes serialize themselves with schema().
Defining types with an object hierarchy
of type classes will allow us to build
functionality that was impossible (or
really janky) with the validators.py
approach of composing functions.
Most of the changes to event_schema.py
were automated search/replaces.
This patch doesn't really yet take
advantage of the new FooType classes,
but we will use it soon to audit our
openapi specs.
Even before GDPR changes, it was strange that we displayed
users differently for fork events vs. all other events.
After GDPR, we don't even get the `username` field any
more.
So now we simply use `display_name` if available, and then
we try `nickname`.
See https://developer.atlassian.com/cloud/bitbucket/bitbucket-api-changes-gdpr/
for more context.
We were trying to share the same format string between
the two different versions of bitbucket, but this only
creates confusion, as the two versions are only close
enough to be confusing.
The format string might be the same, but the semantics
are different, as well as the eventual outputs.
For example, the {username} piece here is simple in version
2, but in version 3 we append a url to the user's name.
This commit renames 'test_message_to_self' and
'test_api_message_to_self' tests to
'test_message_to_stream_by_name' and
'test_api_message_to_stream_by_name' to depict
the actual purpose of these tests.
user_profile will be None for web_public_guests here. Hence, for
settings which require a user_profile (most of which are
inaccessible to web-public guests), we either set an empty value for
them or set them to a default value. This will help render
the frontend or extend support to our clients without breaking
a lot of code.
Tweaked by tabbott to add many comments.
These represent known errors in what the user submitted. This is
slightly complicated by UnsupportedWebhookEventType being an instance
of JsonableError.
allow_webhook_access may be true if the request allows webhook
requests, regardless of whether it is only used for a webhook integration.
Only actually log to the verbose webhook logger if it is explicitly a
webhook endpoint, as judged by `webhook_client_name`. This prevents
requests for `POST /api/v1/messages` from being logged to the webhook
logger if they mistakenly contain a `payload` argument.
This argument does not define if an endpoint "is a webhook"; it is set
for "/api/v1/messages", which is not really a webhook, but allows
access from webhooks.
If multiple filters match the same string, we run into an infinite
loop of converting the string into URLs. To fix it, we mark the matched
string as atomic after the first conversion.
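A minimal sketch of the fix, assuming python-markdown's AtomicString (the element handling here is illustrative):
```
from markdown.util import AtomicString

def mark_link_atomic(anchor_element) -> None:
    # Once a linkifier has produced this <a> element, AtomicString tells
    # python-markdown not to run inline patterns (including this same
    # linkifier) over its text again, breaking the conversion loop.
    anchor_element.text = AtomicString(anchor_element.text or "")
```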
We raise MissingAuthenticationError now, which adds
`www_authenticate=session` header to the error response. This
stops modern web browsers from displaying a login form every time
a 401 response is sent to the client.
Having both of these is confusing; TORNADO_SERVER is used only when
there is one TORNADO_PORT. Its primary use is actually to be _unset_,
and signal that in-process handling is to be done.
Rename to USING_TORNADO, to parallel the existing USING_RABBITMQ, and
switch the places that used it for its contents to using
TORNADO_PORTS.
This system can't update stats while the queue is idle, without using
threads for this, but at least we ensure to update the file after
consuming an event if more than MAX_SECONDS_BEFORE_UPDATE_STATS passed
since the last update, regardless of the number of iterations done so
far.
The race condition is described in the comment block removed by this
commit. This leaves room for another, remaining race condition
that should be virtually impossible, but nevertheless it seems
worthwhile to have it documented in the code, so we put a new comment
describing it.
As a final note, this is not a new race condition,
it was hypothetically possible with the old code as well.
This mimics the backend logic for adding the data-attribute -
to know what Pygments language was used to highlight the code
block - in locally echoed messages.
A new test checks our logic for canonicalizing Pygments aliases
(for both frontend and backend).
Other fixtures and tests are amended.
In ae58ed5a7 we decided to echo back the text, when no Pygments lexer
matching that language was found. When we do so, we must take care to
HTML escape the lang before wrapping it in a data-code-language attribute.
Tweaked by tabbott to make clear the escaping is defensive.
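A hedged sketch of the defensive escaping (the surrounding markup and helper name are illustrative):
```
from django.utils.html import escape

def wrap_code_block(lang: str, highlighted_html: str) -> str:
    # Escape the user-supplied language before placing it in an attribute,
    # even though unmatched languages are rare.
    data_attr = f' data-code-language="{escape(lang)}"' if lang else ""
    return f'<div class="codehilite"{data_attr}>{highlighted_html}</div>'
```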
In development and test, we keep the Tornado port at 9993 and 9983,
respectively; this allows tests to run while a dev instance is
running.
In production, moving to port 9800 consistently removes an odd edge
case, when just one worker is on an entirely different port than if
two workers are used.
tornado.web.Application does not share any inheritance with Django at
all; it has a similar router interface, but tornado.web.Application is
not an instance of Django anything.
Refold the long lines that follow it.
While urllib3 retries all connection errors, it only retries a subset
of read errors, since not all requests are safe to retry if they are
not idempotent, and the far side may have already processed them once.
By default, the only methods that urllib3 retries read errors on
are GET, TRACE, DELETE, OPTIONS, HEAD, and PUT. However, all of the
requests into Tornado from Django are POST requests, which limits the
effectiveness of bb754e0902.
POST requests to `/api/v1/events/internal` are safe to retry; at worst,
they will result in another event queue, which is low cost and will be
GC'd in short order.
POST requests to `/notify_tornado` are _not_ safe to retry, but this
codepath is only used if USING_RABBITMQ is False, which only occurs
during testing.
Enable retries for read errors during all POSTs to Tornado, to better
handle Tornado restarts without 500's.
Without an explicit port number, the `stdout_logfile` values for each
port are identical. Supervisor apparently decides that it will
de-conflict this by appending an arbitrary number to the end:
```
/var/log/zulip/tornado.log
/var/log/zulip/tornado.log.1
/var/log/zulip/tornado.log.10
/var/log/zulip/tornado.log.2
/var/log/zulip/tornado.log.3
/var/log/zulip/tornado.log.7
/var/log/zulip/tornado.log.8
/var/log/zulip/tornado.log.9
```
This is quite confusing, since most other files in `/var/log/zulip/`
use `.1` to mean logrotate was used. Also note that these are not all
sequential -- 4, 5, and 6 are mysteriously missing, though they were
used in previous restarts. This can make it extremely hard to debug
logs from a particular Tornado shard.
Give the logfiles a consistent name, and set them up to logrotate.
Calling `render()` in a middleware before LocaleMiddleware has run
will pick up the most-recently-set locale. This may be from the
_previous_ request, since the current language is thread-local. This
results in the "Organization does not exist" page occasionally being
in not-English, depending on the preferences of the request which that
thread just finished serving.
Move HostDomainMiddleware below LocaleMiddleware; none of the earlier
middlewares call `render()`, so are safe. This will also allow the
"Organization does not exist" page to be localized based on the user's
browser preferences.
Unfortunately, it also means that the default LocaleMiddleware catches
the 404 from the HostDomainMiddleware and helpfully tries to check if
the failure is because the URL lacks a language component (e.g.
`/en/`) by turning it into a 302 to that new URL. We must subclass
the default LocaleMiddleware to remove this unwanted functionality.
Doing so exposes two places in tests that relied (directly or
indirectly) upon the redirection: '/confirmation_key'
was redirected to '/en/confirmation_key', since the non-i18n version
did not exist; and requests to `/stats/realm/not_existing_realm/`
incorrectly were expecting a 302, not a 404.
This regression likely came in during f00ff1ef62, since prior to
that, the HostDomainMiddleware ran _after_ the rest of the request had
completed.
This commit moves docs for users/{user_id}/subscriptions/{stream_id}
endpoint to be after users/me/subscriptions/muted_topics docs.
We are rearranging the docs because after adding the new patch
endpoint for users/{user_id}/subscriptions/{stream_id}, openapi_core
validator tries to match 'users/me/subscriptions/muted_topics'
with 'users/{user_id}/subscriptions/{stream_id}' path in zulip.yaml
and thus gives error while running tests.
This is a bug in 'openapi_core' as it does not follow the OpenAPI spec
to match concrete paths before their templated counterparts. Thus,
this commit rearranges the docs such that openapi_core validator
tries to match muted_topics endpoint with the correct path in
zulip.yaml docs.
When converting fenced code markdown, we add the language (if specified)
in a data-attribute by tweaking the generated HTML. Doing so allows the
frontend to make use of this attr to display a view-in-playground option
for codeblocks.
We use pygments to get the lexer subclass name and use that instead of
directly using the language in the data-attribute. Doing so helps us
map different language aliases (like `js` and `javascript`) to a common
name (like `JavaScript`), and saves the client from dealing with
multiple tags corresponding to the same language.
The html structure for a message like this:
``` js
..content..
```
would now be:
<div class="codehilite" data-codehilite-language="JavaScript">
<pre>..content..</pre>
</div>
Tests and fixtures amended.
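The alias canonicalization boils down to something like this sketch (echoing the raw language back when no lexer matches, as described earlier):
```
from pygments.lexers import get_lexer_by_name
from pygments.util import ClassNotFound

def canonicalize_language(lang: str) -> str:
    try:
        return get_lexer_by_name(lang).name  # e.g. "js" -> "JavaScript"
    except ClassNotFound:
        return lang
```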
This was a broken abstraction that returned to its caller within
multiple forked processes on exceptions, and encouraged ignoring the
error code (as all of its callers did).
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Fixes#16284.
Most of the work for this was done when we implemented correct
behavior for guest users, since they treat public streams like private
streams anyway.
The general method involves moving the messages to the new stream with
special care of UserMessage.
We delete UserMessages for subs who are losing access to the message.
For private streams with protected history, we also create UserMessage
elements for users who are not present in the old stream, since that's
important for those users to access the moved messages.
Previously, S3UploadBackend.delete_export_tarball failed to strip the
leading ‘/’ from the export path. This mistake is now caught by Moto
1.3.15. I expect it caused deletion failures in the real S3, although
I haven’t verified this.
We store export_path in the audit log with a leading ‘/’, but the
actual S3 keys do not have a leading ‘/’. Changing either system
would require a migration. So the new convention is that the
variables named ‘export_path’ have a leading ‘/’, while variables
named ‘path_id’ or ‘key’ do not.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Previously, the GitLab webhook code, namely the `get_objects_assignee`
method, first tried to get a single assignee; if that failed, it
looked for multiple assignees and returned the first
assignee that it found (there's actually a code smell here: a loop
which would always return on the first iteration).
Instead, this commit changes that behavior to first check for
multiple assignees, then for a single assignee if we can't find
multiple assignees. Ultimately it returns a list of all of the
assignees (however many there might be, [0, n]). This method has
been aptly renamed to `get_assignees`.
Finally, we tweaked the code using this method to always use its
output as an "assignees" parameter to templates (there's also an
"assignee" parameter, which we want to avoid here for consistency).
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
For some reason, some of the fixtures had the +x bit set, while
some didn't. What this commit does is make sure that no fixture
is marked as "executable" (for anyone).
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
The previous code only worked by accident and hyperlink 20.0.0 breaks
it.
>>> hyperlink.parse("example.com").replace(scheme="https")
DecodedURL(url=URL.from_text('https:example.com'))
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Django treats path("<name>") like re_path(r"(?P<name>[^/]+)") and
path("<path:name>") like re_path(r"(?P<name>.+)").
This is more readable and consistent than the mix of slightly
different regexes we had before, and fixes various bugs:
• The r'apps/(.*)$' regex was missing a start anchor ^, so it
incorrectly matched all URLs that included apps/ as a substring
anywhere.
• The r'accounts/login/(google)/$' regex was missing a start anchor ^,
so it incorrectly matched all URLs that ended with
accounts/login/google/.
• The type annotation of zerver.views.realm_export.delete_realm_export
takes export_id as an int, but it was previously passed as a string.
• The type annotation of zerver.views.users.avatar takes medium as a
bool, but it was previously passed as a string.
• The [0-9A-Za-z]+ pattern for uidb64 was missing the - and _
characters that can validly be part of a base64url encoded
string (although I think the id is actually a decimal integer here,
in which case only 012345ADEIMNOQTUYcgjkwxyz are present in its
base64url encoding).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
$ref siblings are ignored according to the OpenAPI specification, and
the referenced definitions already have examples.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Replace the default root logger with zulip.auth.apple for Apple auth
in zproject/backends.py, and update the test cases
accordingly in zerver/tests/test_auth_backends.py.
Replaced mock.patch with assertLogs for testing log outputs
in test_auth_backends.py.
This change requires adjusting
test_log_into_subdomain_when_email_is_none to use an explicit token
since that appears in the log output.
This clears it out of the data sent to Sentry, where it is duplicative
with the indexed metadata -- and potentially exposes PHI if Sentry's
"make this issue public" feature is used.
The previous link was to "extended callable" types, which are
deprecated in favor of callback protocols. Unfortunately, defining a
protocol class can't express the typing -- we need some sort of
variadic generics[1]. Specifically, we wish to support hitting the
endpoint with additional parameters; thus, this protocol is
insufficient:
```
class WebhookHandler(Protocol):
def __call__(request: HttpRequest, api_key: str) -> HttpResponse: ...
```
...since it prohibits additional parameters. And allowing extra
arguments:
```
class WebhookHandler(Protocol):
def __call__(request: HttpRequest, api_key: str,
*args: object, **kwargs: object) -> HttpResponse: ...
```
...is similarly problematic, since the view handlers do not support
_arbitrary_ keyword arguments.
[1] https://github.com/python/typing/issues/193
`zulip.zerver.lib.webhooks.common` was very opaque previously,
especially since none of the logging was actually done from that
module.
Adjust to a more explicit logger name.
Any exception is an "unexpected event", which means talking about
having an "unexpected event logger" or "unexpected event exception" is
confusing. As the error message in `exceptions.py` already explains,
this is about an _unsupported_ event type.
This also switches the path that these exceptions are written to,
accordingly.
8e10ab282a moved UnexpectedWebhookEventType into
`zerver.lib.exceptions`, but left an import in
`zerver.lib.webhooks.common` so that webhooks could continue to
import the exception from there.
This clutters things and adds complexity; there is no compelling
reason that the exception's source of truth should not move alongside
all other exceptions.
The main race condition, which actually happened in production, was between
concurrent execution of deliver_email and clear_scheduled_emails.
clear_scheduled_emails could delete all email.users in the middle of
deliver_email's execution, causing it to pass an empty to_user_ids list to
send_email. We mitigate this by getting the list of user ids in a single
query and moving forward with that snapshot, no longer having to worry about
the database data being mutated.
clear_scheduled_emails had potential race conditions with concurrent
execution of itself due to not locking the appropriate rows upon
selecting them for the purpose of potentially deleting them. FOR UPDATE
locks need to be acquired to prevent simultaneous mutation.
Tested manually with some print+sleep debugging to make some races
happen.
fixes #zulip-2k (sentry)
There are three functional side effects:
• Correct an insignificant but mathematically offensive bias toward
repeated characters in generate_api_key introduced in commit
47b4283c4b4c70ecde4d3c8de871c90ee2506d87; its entropy is increased
from 190.52864 bits to 190.53428 bits.
• Use the base32 alphabet in confirmation.models.generate_key; its
entropy is reduced from 124.07820 bits to the documented 120 bits, but
now it uses 1 syscall instead of 24.
• Use the base32 alphabet in get_bigbluebutton_url; its entropy is
reduced from 51.69925 bits to 50 bits, but now it uses 1 syscall
instead of 10.
(The base32 alphabet is A-Z 2-7. We could probably replace all of
these with plain secrets.token_urlsafe, since I expect most callers
can handle the full urlsafe_b64 alphabet A-Z a-z 0-9 - _ without
problems.)
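A hedged sketch of the single-read approach (the length and helper name are illustrative, not the exact implementation):
```
import secrets

BASE32_ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"

def generate_key(length: int = 24) -> str:
    raw = secrets.token_bytes(length)  # one urandom read for the whole key
    # 256 % 32 == 0, so mapping bytes onto the alphabet introduces no bias;
    # each output character carries exactly 5 bits of entropy.
    return "".join(BASE32_ALPHABET[byte % 32] for byte in raw)
```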
Signed-off-by: Anders Kaseorg <anders@zulip.com>
For web-public streams, clients can access full topic history
without being authenticated. They only need to additionally
send "streams:web-public" narrow with their request like all
the other web-public queries.
This verifies that we actually do enqueue a record when there is an
error on non-staging. With the previous commit, it verifies that that
data serializes correctly.
The return type of `ugettext_lazy('...')` (aliased as `_`) is a
promise, which is only forced into a string when it is dealt with in
string context. This `django.utils.functional.lazy.__proxy__` object
is not entirely transparent, however -- it cannot be serialized by
`orjson`, and `isinstance(x, str) == False`, which can lead to
surprising action-at-a-distance.
In the two places which will serialize the role value (either into
Zulip's own error reporting queue, or Sentry's), force the return
value. Failure to do this results in errors being dropped
mostly-silently, as they cannot be serialized and enqueued by the
error reporter logger, which has no recourse but to just log a
warning; see previous commit.
When we do this forcing, explicitly override the language to be the
realm default. Failure to provide this override would translate the
role into the role in the language of the _request_, yielding varying
results.
AdminNotifyHandler is used to notify admins of errors; it is a
critical piece of logic. Failures in reporting errors will compound,
since its `except Exception` clauses cannot generate logging at the
`error` or `exception` level, as that would be recursive. It must
settle for logging at the `warning` level, and hope that admins are
vigilant to the logging there.
Increase the chances of being notified of failures in this logger, by
bubbling up those exceptions to Sentry, which is an orthogonal
reporting stack.
When a user requests a realm that doesn't exist, we raise
an InvalidSubdomainError.
This reduces our effort spent repeatedly ensuring the realm in the
request is valid in web-public queries.
If there are unsupported keys, we still log an error,
but we now also send a message to the stream. (This
is a good tradeoff for the github webhook, since users
can just turn off notifications if they find it spammy.
Also, we intend to support "repository" soon.)
This is a bit of an experiment to see how this plays
in the field:
* will customers notice the change?
* will Sentry reports look any different?
The main thing fixed here is that we weren't turning
our keys into a list. And then I refined the message
a bit more, including sorting the keys.
I also avoid the unnecessary "else".
The EVENT_FUNCTION_MAPPER maps a string event name
to a function handler. Before this we circumvented
mypy checks with a call to get_body_function_based_on_type,
which specified Any as the type of our event function.
Now the types are rigorous.
This change was impossible without the recent commit
to introduce the Helper class.
The Helper class will soon grow, but the immediate
problem it solves is the need to jankily inspect
the parameters of our get_*_body function.
Most of the changes were handled by an ad hoc
munge.py script.
The substantive changes were adding the Helper
class and passing it in.
And then the linter discovered a place where
the optional include_title parameter wasn't used
(which is one of the reasons to avoid the janky
inspect-signature technique).
As a side note, none of the include_title parameters
needed a default value of False, as we always passed
in an explicit value.
We test cover both sides of include_title, which
you can verify by hard coding it to either True or
False (and seeing the relevant failures), although I
suspect most individual codepaths
only test one value, based on whether "topic" is in
the fixture or not.
Finally, I know Helper is not a great name, but I
intend to evolve the class a bit before deciding
whether a more descriptive name is helpful here.
(For example, an upcoming commit will add a
log_unexpected helper method.)
We get the header_event one level up the call
stack now, too.
It's somewhat annoying that we have our own
concept of "event" here, instead of just returning
our event handlers directly, or just calling them
directly, but it's a bit non-trivial to fix that
right away.
In passing, I remove the strange OR for "ping",
which is already a key in EVENT_FUNCTION_MAPPER.
See https://github.com/zulip/zulip/issues/16258 for
possible follow up here.
We now ignore the following two new pull_request
actions (as well as the three existing ones
from before):
approved
converted_to_draft
As the issue above indicates, we may want to actually
support "approved" if we can find somebody to work
on the webhook. (And then the issue goes a little
broader than what changed here.)
We consolidate the tests and remove the fixtures, which
just have a lot of noisy fields that we ignore. Also,
pull_request__request_review_removed was named improperly.
Before this the only way we took advantage
of the summary from UnexpectedWebhookEventType
was by looking at exc_info().
Now we just explicitly add it to the log
message, which also sets us up to call
log_exception_to_webhook_logger directly
with some sort of "summary" info
when we don't actually want a real
exception (for example, we might want to
report anomalous webhook data but still
continue the transaction).
A minor change in passing is that I move
the payload parameter lexically.
Our isort configuration was almost Black-compatible, but we were
missing ensure_newline_before_comments.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Some users set up Zulip with a trailing / at the end, like 'https://meet.jit.si/',
leading to an extra / on clients while generating video chat links.
This commit removes the trailing '/' if it exists, to make it consistent. Manual
testing was done by generating a Jitsi URL.
Fixes #16225
We eliminate optional parameters and replace `request_body`
with `payload`.
There is much less confusion if we just pass in `payload`,
and then we optionally re-format it if it's json.
For unclear reasons the original code was trying to
do `request_body = str(payload)` when `request_body`
was no longer being used.
The query that finds and marks all unread UserMessages in the stream as read
can be quite expensive, so we'll move that work to the deferred_work
queue and split it into batches.
Fixes #15770.
In 468c5b9a58 we changed the method of
getting the list of management commands. Using app_config.path has a
caveat in that the value depends on the path from which we're executing.
An example of things breaking can be reproduced by calling
/home/vagrant/zulip/tools/test-backend TestCommandsCanStart
This makes the app_config.path values start with /home/vagrant/zulip,
but DEPLOY_ROOT in the dev environment is set to /srv/zulip.
/home/vagrant/zulip is a soft link to /srv/zulip, so it's a valid path
to call test-backend through, but it causes self.commands to end up
being an empty list. We fix this by converting app_config.path to the
real path.
Rather than catching, checking action type, and possibly re-raising,
instead return None explicitly from `get_subject_and_body`, which
already signals for a blank success result. This collocates the logic
of the action types in one place, and removes the complexity of the
re-raise.
Sentry may be sent multiple exception stacks, in the case where
a `raise ...` was caught and a new exception was `raise`d. In this
case, the `filename` is from the most recent exception -- but the
exceptions are stored in the `exception` key in the order in which
they occurred. As such, taking the first value with a `stacktrace`
will result in showing the wrong line, or in no stack trace being
resolved at all.
Look from the last `exception` backwards, for matching stacks.
Since ALL_HOTSPOTS is a global object, it is initialized
at the time the backend server is started. Hence, the
title and description are translated only once. Using
ugettext_lazy makes sure that the strings are translated
in each and every request according to the language
of the user.
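Illustrative sketch (not the full hotspot list):
```
from django.utils.translation import ugettext_lazy as _

ALL_HOTSPOTS = {
    "intro_reply": {
        # With ugettext_lazy, these strings are resolved when rendered for a
        # request, not once at server startup.
        "title": _("Reply to a message"),
        "description": _("Click anywhere on a message to reply."),
    },
}
```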
Fixes #16224
This commit fixes examples in "400" response for deactivating user
endpoints to have msg as "Cannot deactivate the last organization
owner" instead of "Cannot deactivate the last organization
administrator".
We had already removed the restriction on deactivating last admin
and added it for last owner, while adding owner role.
The `typing: stop` event did not have any tests in test_events,
hence its documentation wasn't added. So add tests and relevant
documentation for the typing stop event. Also edit the documentation
of `typing: start` to include the fact that servers should use
their own timeout in case the `stop` event isn't received.
Fixes #16122.
We display the text of the consent message, and then continue with the
export, which will scroll the content off the screen. Allow the
administrator time to examine the contents of the message, and decide
whether to proceed based on that and the fraction of users that have
responded so far.
We raise two types of json_unauthorized when
MissingAuthenticationError is raised. Raising the one
with www_authenticate lets the client know that the user needs
to be logged in to access the requested content.
Sending `www_authenticate='session'` header with the response
also stops modern web-browsers from showing a login form to the
user and lets the client handle it completely.
Structurally, this moves the handling of common authentication errors
to a single shared middleware exception handler.
Improve the OpenAPI documentation of /zulip-outgoing-webhook by moving
data and making appropriate additions from its counterpart in the
/outgoing-webhook docs. Then remove the redundant documentation
from the doc and add a command to render the OpenAPI documentation. Also
add a test to outgoing_webhooks_interface.py to ensure that the OpenAPI
documentation is correct.
Fixes #16203.
This lets the backend tests pass if zilencer has been (manually)
removed from EXTRA_INSTALLED_APPS, by skipping the tests that require
it. test-backend complains that some URLs are untested in this case:
ERROR: Some URLs are untested! Here's the list of untested URLs:
api/v1/users/me/android_gcm_reg_id
api/v1/users/me/apns_device_token
team/
Signed-off-by: Anders Kaseorg <anders@zulip.com>
It's never safe to access the mock RemoteZulipServer object; this
caused exceptions on every request in production for any server with
ZILENCER_ENABLED=False.
This commit adds automatic detection of extra output (other than that
printed by the testing library or tools) in stderr and stdout by code under
test, when test-backend is run with the --ban-console-output flag.
It also prints the test that produced the extra console output.
Fixes: #1587.
Extracting a section for presence endpoints and using path() rather
than re_path() results in a much cleaner implementation of this
concept.
This eliminates the last case where test_openapi couldn't correctly
match an endpoint documentation with the OpenAPI definitions for it.
This renames 'group_id' to 'user_group_id' in the api docs to remove
the naming mismatch between the url config and the docs and eventually
remove the 'user_groups' endpoints from 'pending_endpoints' in
test_openapi.py.
'user_groups' endpoints are currently under 'pending_endpoints' in
test_openapi.py (even after being documented except one), due to the
'user_group_id' and 'group_id' parameter name mismatch in the
url config and the view functions.
This commit includes 'path_only=True' for 'user_group_id' parameter in
views to avoid the failure of 'test_openapi_arguments', in
test_openapi.py, which excludes the path parameters. This is a prep
commit for renaming 'group_id' to 'user_group_id' in the documentation
and removing the 'user_groups' endpoints from 'pending_endpoints'.
This queue had a race condition with creation of another Timer while
maybe_send_batched_emails is still doing its work, which may cause
two or more threads to be running maybe_send_batched_emails
at the same time, mutating the shared data simultaneously.
Another less likely potential race condition was that
maybe_send_batched_emails after sending out its email, can call
ensure_timer(). If the consume function is run simultaneously
in the main thread, it will call ensure_timer() too, which,
given unfortunate timings, might lead to both calls setting a new Timer.
We add locking to the queue to avoid such race conditions.
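Roughly, the locking looks like this simplified sketch (not the actual worker code; names are illustrative):
```
import threading

class MissedMessageWorker:
    def __init__(self) -> None:
        self.lock = threading.Lock()
        self.timer = None

    def ensure_timer(self) -> None:
        # Exit immediately if maybe_send_batched_emails currently holds the
        # lock; it re-arms the timer itself before finishing.
        if not self.lock.acquire(blocking=False):
            return
        try:
            if self.timer is None or not self.timer.is_alive():
                self.timer = threading.Timer(1.0, self.maybe_send_batched_emails)
                self.timer.start()
        finally:
            self.lock.release()

    def maybe_send_batched_emails(self) -> None:
        with self.lock:
            # ... flush events_by_recipient, then re-arm the timer if needed
            pass
```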
Tested manually, by print debugging with the following setup:
1. Making handle_missedmessage_emails sleep 2 seconds for each email,
and changed BATCH_DURATION to 1s to make the queue start working
right after launching.
2. Putting a bunch of events in the queue.
3. ./manage.py process_queue --queue_name missedmessage_emails
4. Once maybe_send_batched_emails is called and while it's processing
the events, I pushed more events to the queue. That triggers the
consume() function and ensure_timer().
Before implementing the locking mechanism, this causes two threads
to run maybe_send_batched_emails at the same time, mutating each other's
shared data, causing a traceback such as
Exception in thread Thread-3:
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 1182, in run
self.function(*self.args, **self.kwargs)
File "/srv/zulip/zerver/worker/queue_processors.py", line 507, in maybe_send_batched_emails
del self.events_by_recipient[user_profile_id]
KeyError: '5'
With the locking mechanism, things get handled as expected, and
ensure_timer() exits if it can't obtain the lock due to
maybe_send_batched_emails still working.
Co-authored-by: Tim Abbott <tabbott@zulip.com>
If you look at line number 1121 (new) of commit 14c0a387cf,
I seem to have accidentally set the description for a status
200 response to "Bad Request" instead of "Success" which
is what it really is. It's basically an ugly typo (maybe
due to hastily copy-pasting the template).
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
I noticed RateLimitTests.test_hit_ratelimits fails when run as an
individual test, but never when run after other tests. That's due to the
first API request in a run of tests taking a long time, as detailed in
the comment on the change to the setUp method.
Django always sets request.user to a UserProfile or AnonymousUser
instance, so it's better to mimic that in the tests where we pass a
dummy request objects for rate limiter testing purposes.
The data is now stored in memory if things are happening inside tornado.
That aside, there is no reason for a comment on a rate_limit_user call
to talk about low level implementation details of that function.
I can find no evidence of it being possible to get an Exception when
accessing request.user or for it to be falsy. Django should always set
request.user to either a UserProfile (if logged in) or AnonymousUser
instance. Thus, this seems to be dead code that's handling cases that
can't happen.
`zproject/settings.py` itself is mostly-empty now. Adjust the
references which should now point to `zproject/computed_settings.py`
or `zproject/default_settings.py`.
`update_message_flags` events used `operation` instead of `op`, the
latter being the standard field used in other events. So add `op`
field to `update_message_flags` and mark `operation` as deprecated,
so that it can be removed later.
It's possible that this is a new name for the "due"
field, but it's not totally clear.
In the exception we saw in the field:
payload['action']['data']['old']['dueComplete'] = False
payload['action']['data']['card']['dueComplete'] = True
We remove the fixture for create_check_item, which
has been bit-rotting for as long as we have ignored
this type of card data.
Our new test is more powerful, in the sense that it
shows we successfully ignore all fixtures of this
type.
If we want to handle this, we'll just need to get
new, representative fixture data from trello.
Commit c4254497b2
curiously had get_body() round tripping its data
through json load and dump.
I have seen this done for pretty-printing reasons,
but it doesn't apply here.
And if you're doing it for validation reasons,
you only need to do half the work, as my commit
here demonstrates.
We arguably don't even need the fail-fast code
here, since our fixtures are linted to be proper
json, I believe, plus downstream code probably
gives reasonably easy-to-diagnose symptoms.
We introduce get_payload for the relatively
exceptional cases where webhooks return payloads
as dicts.
Having a simple "str" type for get_body will
allow us to extract test helpers that use
payloads from get_body() without the ugly
`Union[str, Dict[str, str]]` annotations.
I also tightened up annotations in a few places
where we now call get_payload (using Dict[str, str]
instead of Dict[str, Any]).
In the zendesk test I explicitly stringify
one of the parameters to satisfy mypy.
We tighten up the mypy types here. And then
once we know that expected_message and expected_topic
are never None, we don't have to call the do_test_message
and do_test_topic helpers any more, so we eliminate
them, too.
Finally, we don't return a message, since no tests
use the message currently.
If we're not passing in expected_topic or expected_message
to check_webhook, it's better to just call send_webhook_payload,
since we'll want to explicitly check our messages
anyway.
This preps us to always require those fields for
check_webhook, which can prevent insidious testing no-ops.
This forces us to be a bit more explicit about testing
the three key values in any stream message, and it
also de-clutters the code a bit. I eventually want
to phase out do_test_topic and friends, since they
have the pitfall that you can call them and have them
do nothing, because they don't actually require
values to be be passed in.
I also clean up the code a bit for the tests that
have two new messages arriving.
Having an optional stream_name parameter makes
it confusing to read the code if you know your
webhook is sending private messages.
And then the other two callers are already
checking topics, so they might as well check
stream names, too.
We also have the two stream-oriented callers
make their own call to "subscribe". And we
future-proof this by making sure the exception
for no-message-being-sent calls out that gotcha.
Somewhat in passing, we now assert that
self.STREAM_NAME is not None in the main
helper. This is partly to satisfy mypy, but
it's also a good sanity check.
This also sets the stage for the next commit,
where I'll add an assert_stream_message helper.
Not all webhook payloads are json, so send_json_payload was a
bit misleading.
In passing I also remove "bytes" from the Union type for
"payload" parameter.
Almost all webhook tests use this helper, except a few
webhooks that write to private streams.
Being concise is important here, and the name
`self.send_and_test_stream_message` always confused
me, since it sounds like you're sending a stream message,
and it leaves out the webhook piece.
We should consider renaming `send_and_test_private_message`
to something like `check_webhook_private`, but I couldn't
decide on a great name, and it's very rarely used. So
for now I just made sure the docstrings of the two
sibling functions reference each other.
The "EXPECTED_" prefix and "_EVENTS" suffix
usually provided more noise than signal.
We also use module constants to avoid the "self."
noise. It also makes it a bit more clear which
constants actually have to be in the class (e.g.
"FIXTURE_DIR_NAME") to do their job.
This function is a bad idea, as it leads to a possible situation
where you aren't actually testing anything:
    def do_test_message(self, msg: Message, expected_message: Optional[str]) -> None:
        if expected_message is not None:
            self.assertEqual(msg.content, expected_message)
Unfortunately, it's called deep in the stack in some places, but
we can safely replace it with assertEqual here.
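The call sites can then assert directly, with no branch that can
silently skip the check:

    self.assertEqual(msg.content, expected_message)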
We had optional parameters for expected_topic and
expected_message, which are trivial to eliminate,
since the integration is really simple.
And we were doing strange things trying to reset
class variables at the end of tests. Now we just
set them explicitly in the tests.
The test helper here was taking an "expected_topic"
parameter that it just ignored, and then the
dialogflow tests were passing in expected messages
in that slot, so the actual "expected_message" var
was "None" and was ignored. So the tests weren't
testing anything.
Now we eliminate the crufty expected_topic parameter
and require an actual value for "expected_message".
I also clean up the mypy type for content_type,
and I remove the `content_type is None` check,
since all callers either pass in a str content
type or default to "application/json".
Some `<img>` tags do not have an SRC, if they are rewritten using JS
to have one later. Attempting to access `first_image['src']` on these
will raise an exception, as they have no such attribute.
Only look for images which have a defined `src` attribute on them. We
could instead check `first_image.has_attr('src')` after the fact, but
that can only ever find fewer valid images, since it gives up on an
src-less first image rather than skipping past it.
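With BeautifulSoup that is a one-argument change, roughly (assuming
the preview code selects images via find):

    from bs4 import BeautifulSoup

    html = '<img data-lazy="a.png"><img src="b.png">'
    soup = BeautifulSoup(html, "html.parser")

    # src=True matches only <img> tags that actually carry a src, so a
    # JS-rewritten, src-less image is skipped instead of blowing up the
    # later ['src'] lookup.
    first_image = soup.find("img", src=True)
    assert first_image["src"] == "b.png"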
03ca3afbc2 added more codes that are equivalent to 404's; this adds to
the list of cache-as-None codes a couple which are equivalent to
403's. It does not comprise _all_ possible 403-like codes -- many of
them mean "the client is not OK," which is still relevant to log as an
error.
Per [1], the sentry API returns frames sorted from oldest to newest.
As such, matching against the first filename that matches is most
likely not the right frame.
Match against the last frame with the guilty filename.
[1] https://develop.sentry.dev/sdk/event-payloads/stacktrace/
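In sketch form (the real code works on Sentry's frame dicts):

    from typing import Any, Dict, List, Optional

    def guilty_frame(
        frames: List[Dict[str, Any]], filename: str
    ) -> Optional[Dict[str, Any]]:
        # Frames are ordered oldest-to-newest, so the last match is the
        # one closest to where the exception was actually raised.
        matching = [f for f in frames if f.get("filename") == filename]
        return matching[-1] if matching else None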
The original commit was broken here:
b553507412
The intention was to run the same loop for all
settings, but instead, we did a funny loop of
just resetting schema_checker, and then we only
actually tested the last value of the loop.
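A purely hypothetical sketch of the fixed shape (helper names are made
up):

    for value in setting_values:
        schema_checker = build_checker(value)
        events = change_setting(value)          # generate events for this value
        schema_checker("events[0]", events[0])  # ...and verify each one

rather than only resetting schema_checker inside the loop and running
the actual check once afterwards, which verifies just the final value.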
This commit adds "role" field to the Subscription objects passed to
clients. This is important preparation for being able to work on the
frontend for this feature.
While exporting analytics data we were using the wrong table name
'zerver_analytics' in the analytics config. Renamed it to the
correct table name 'zerver_realm'.
Since bug https://bugs.python.org/issue3445 was resolved in Python
3.3, we can avoid the use of assigned=available_attrs(view_func) in
the wraps decorator (which we were only using because we'd copied code
that handled that from Django).
Also, available_attrs was removed in Django 3.0.
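So the decorators can use plain functools.wraps; in sketch form (the
decorator below is made up):

    from functools import wraps
    from typing import Any, Callable

    def require_something(view_func: Callable[..., Any]) -> Callable[..., Any]:
        # No assigned=available_attrs(view_func) needed on Python 3.3+.
        @wraps(view_func)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            return view_func(*args, **kwargs)

        return wrapper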
Django 3.0 removed private Python 2 compatibility APIs, so we use
lru_cache() directly from functools.
We cast lru_cache to Any to avoid an attr-defined error from mypy,
since we add an extra field, 'key_prefix', to this object later.
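In sketch form (the cached function and attribute value below are just
for illustration):

    from functools import lru_cache
    from typing import Any, cast

    def expensive(key: str) -> str:
        return key.upper()

    cached = cast(Any, lru_cache(maxsize=None)(expensive))
    # Without the cast, mypy flags this as attr-defined; at runtime the
    # lru_cache wrapper happily accepts extra attributes.
    cached.key_prefix = "example:"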
This comment stopped being true in 5686821150, and very much stopped
being relevant in dd40649e04 when the middleware entirely stopped
publishing to a queue.
This function now matches the copy in zerver/lib/actions.py.
This is the same migration as
b250e42f61c525029bd2b3bbb8f4ea93ece62072; orjson enforces that we
don't use integers as keys in JSON dictionaries.
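For example (a minimal illustration, not the actual migration):

    import orjson

    orjson.dumps({"1": "admin"})  # fine
    orjson.dumps({1: "admin"})    # raises orjson.JSONEncodeError (a TypeError subclass)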
Apparently, we were incorrectly using constants for title/description
rather than the nice non-constant values from og:title and
og:description in our meta tags.
This commit adds the is_web_public field in the AbstractAttachment
class. This is useful when validating user access to the attachment,
as otherwise we would have to make a query in the db to check if
that attachment was sent in a message in a web-public stream or not.
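A minimal sketch of the new column, assuming the usual Django model
conventions (the real model has many other fields):

    from django.db import models

    class AbstractAttachment(models.Model):
        # Denormalized flag: True if this file was sent to a web-public
        # stream, so access checks can skip an extra message query.
        is_web_public = models.BooleanField(default=False)

        class Meta:
            abstract = True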
The new Stream administrator role is allowed to manage a stream they
administer, including:
* Setting properties like name, description, privacy and post-policy.
* Removing subscribers
* Deactivating the stream
access_stream_for_delete_or_update is modified so that it is used only
to get objects from the database; the further checks for administrative
rights are done by check_stream_access_for_delete_or_update.
We have also added a new exception class StreamAdministratorRequired.
This commit adds a role field to the Subscription class. Currently,
there are two role options: STREAM_ADMINISTRATOR and MEMBER.
We also add a property 'is_stream_admin' for checking whether the
user is a stream admin.
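Roughly (the numeric values are illustrative, following how other role
enums are numbered):

    from django.db import models

    class Subscription(models.Model):
        ROLE_STREAM_ADMINISTRATOR = 20
        ROLE_MEMBER = 50

        role = models.PositiveSmallIntegerField(default=ROLE_MEMBER)

        @property
        def is_stream_admin(self) -> bool:
            return self.role == Subscription.ROLE_STREAM_ADMINISTRATOR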
Via API, users can now access messages which are in web-public
streams without any authentication.
If the user is not authenticated, we assume it is a web-public
query and add the `streams:web-public` operator to the narrow if it
is not already present. web-public streams are also directly accessible.
Any malformed narrow which is not allowed in a web-public query
results in a 400 or 401. See test_message_fetch for the allowed
queries.
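In sketch form (the helper name is made up; narrow terms use the usual
operator/operand dicts):

    from typing import Any, Dict, List

    def add_web_public_narrow(narrow: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        # For unauthenticated requests, force the query to stay inside
        # web-public streams unless the client already said so.
        if any(
            term.get("operator") == "streams" and term.get("operand") == "web-public"
            for term in narrow
        ):
            return narrow
        return narrow + [{"operator": "streams", "operand": "web-public"}]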
django.security.DisallowedHost is only one of a set of exceptions that
are "SuspiciousOperation" exceptions; all return a 400 to the user
when they bubble up[1]; all of them are uninteresting to Sentry.
While they may, in bulk, indicate some sort of misconfiguration of the
application, such a failure should be detected via the increase in
400's, not via these exceptions, which are uninteresting individually.
While all of these are subclasses of SuspiciousOperation, we enumerate
them explicitly for a number of reasons:
- There is no one logger we can ignore that captures all of them.
Each of the errors uses its own logger, and Django does not supply
a `django.security` logger that all of them feed into.
- Nor can we catch this by examining the exception object. The
SuspiciousOperation exception is raised too early in the stack for
us to catch the exception by way of middleware and check
`isinstance`. But at the Sentry level, in `add_context`, it is no
longer an exception but a log entry, and as such we have no
`isinstance` that can be applied; we only know the logger name.
- Finally, there is the semantic argument that while we have decided
to ignore this set of security warnings, we _may_ wish to log new
ones that may be added at some point in the future. It is better
to opt into those ignores than to blanket ignore all messages from
the security logger.
This moves the DisallowedHost `ignore_logger` to be adjacent to its
kin, and not on the middleware that may trigger it. Consistency is
more important than locality in this case.
Of these, the DisallowedHost logger is left as the only one that is
explicitly ignored in the LOGGING configuration in
`computed_settings.py`; it is by far the most frequent, and the least
likely to be malicious or impactful (unlike, say, RequestDataTooBig).
[1] https://docs.djangoproject.com/en/3.0/ref/exceptions/#suspiciousoperation
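The explicit enumeration then boils down to a handful of ignore_logger
calls (logger names follow Django's django.security.<ExceptionName>
convention; the exact list in the diff may differ):

    from sentry_sdk.integrations.logging import ignore_logger

    for logger_name in [
        "django.security.DisallowedHost",
        "django.security.RequestDataTooBig",
        "django.security.SuspiciousFileOperation",
    ]:
        ignore_logger(logger_name)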
These weren’t wrong since orjson.JSONDecodeError subclasses
json.JSONDecodeError which subclasses ValueError, but the more
specific ones express the intention more clearly.
(ujson raised ValueError directly, as did json in Python 2.)
Signed-off-by: Anders Kaseorg <anders@zulip.com>