Even though the required attribute of the stream and stream_id params is marked
false in the OpenAPI specification, the API expects at least one of the
params to be set. There is no way to specify relationships like this in
OpenAPI, and they don't seem to have any plan to implement it in the future.
https://github.com/OAI/OpenAPI-Specification/issues/256
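As a rough illustration of the check this implies on the server side (the
function and error message below are placeholders, not the actual Zulip code):

```
from typing import Optional

def check_stream_params(stream: Optional[str], stream_id: Optional[int]) -> None:
    # OpenAPI can't express "at least one of these", so the server enforces it.
    if stream is None and stream_id is None:
        raise ValueError("Missing both 'stream' and 'stream_id' parameters.")
```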
This limit was introduced in c588c79 as part of the
feature and not due to a performance crisis, so we are
increasing this limit to 7 days. Since topics tend to
naturally fizzle out after a day or two, a 7-day limit
should be good enough.
One small change in behavior is that this creates an array with all the
row_objects at once, rather than creating them 1000 at a time.
That should be fine, given that the client batches these in units of
10000 anyway, and so we're just creating 10K rows of a relatively
small data structure in Python code here.
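To make the behavior change concrete, a minimal sketch (the names
`build_row_object` and `events` are placeholders, not the real code):

```
# Before: rows were materialized in chunks of 1000.
row_objects = []
for chunk in (events[i:i + 1000] for i in range(0, len(events), 1000)):
    row_objects.extend(build_row_object(e) for e in chunk)

# After: one list with all the rows at once, which is fine for batches
# of at most ~10K items.
row_objects = [build_row_object(e) for e in events]
```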
Fixes #1727.
With the server down, apply migrations 0245 and 0246. 0246 will remove
the pub_date column, so it's essential that the previous migrations
ran correctly to copy data before running this.
1. Apply migration 0243 to add date_sent column.
2. Apply migration 0244 to copy pub_date over to date_sent (a sketch of this
step follows the list). This can be done with the server running.
3. With the server down (for consistency between memory and
database state of Django objects), verify consistency with
`Message.objects.exclude(date_sent=F("pub_date")).count() == 0`.
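A hedged sketch of what the data-copy step (migration 0244) might look like;
the migration dependency name and exact implementation are assumptions, not
the actual Zulip migration:

```
from django.db import migrations
from django.db.models import F

def copy_pub_date_to_date_sent(apps, schema_editor):
    # Copy the old column into the new one in a single UPDATE.
    Message = apps.get_model("zerver", "Message")
    Message.objects.update(date_sent=F("pub_date"))

class Migration(migrations.Migration):
    dependencies = [("zerver", "0243_message_add_date_sent")]  # assumed name
    operations = [
        migrations.RunPython(copy_pub_date_to_date_sent, migrations.RunPython.noop),
    ]
```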
Apparently, our change in b8a1050fc4 to
stop caching responses on API endpoints accidentally ended up
affecting uploaded files as well.
Fix this by explicitly setting a Cache-Control header in our Sendfile
responses, as well as changing our outer API caching code to only set
the never cache headers if the view function didn't explicitly specify
them itself.
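A minimal sketch of the two pieces described above; the view and wrapper
function names here are illustrative, not the exact Zulip code:

```
from django.utils.cache import add_never_cache_headers, patch_cache_control
from sendfile import sendfile  # django-sendfile; exact import path may differ

def serve_file_backend(request, path):
    response = sendfile(request, path)  # Sendfile response for the upload
    # Uploaded files are served only to authorized users, so private caching
    # is safe; set the header explicitly so the outer code leaves it alone.
    patch_cache_control(response, private=True, immutable=True)
    return response

def finalize_api_response(response):
    # Only add the never-cache headers if the view didn't set Cache-Control itself.
    if not response.has_header("Cache-Control"):
        add_never_cache_headers(response)
    return response
```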
This does not directly fix #13088, which is a similar issue
affecting the S3 backend.
Thanks to Gert Burger for the report.
The previous code for ensuring that the sort order of emoji choices was
correct relied on an OrderedDict structure, whose ordering isn't guaranteed
to be preserved when passed to the frontend via JSON (in fact, it isn't
preserved, since we converted the way page_params is passed to use
sort_keys=True). Switch it to a list of dictionaries to correct this.
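To illustrate the ordering problem (the field names below are illustrative,
not the actual emoji-choice structure):

```
import json
from collections import OrderedDict

# Key order is discarded once the payload is dumped with sort_keys=True:
json.dumps(OrderedDict([("b", 2), ("a", 1)]), sort_keys=True)
# -> '{"a": 1, "b": 2}'

# A list of dictionaries keeps its order no matter how it is serialized:
json.dumps([{"key": "b"}, {"key": "a"}], sort_keys=True)
# -> '[{"key": "b"}, {"key": "a"}]'
```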
Fixes #13220.
Previously, we were hardcoding the domain s3.amazonaws.com. Given
that we already have an interface for configuring the host in
/etc/zulip/boto.cfg (which, in turn, automatically configures boto), we
just need to actually use the value configured in boto for which S3
hostname to use.
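Roughly, the idea looks like this (a sketch, not the exact code path;
`bucket` and `key` are placeholders):

```
import boto

# boto reads /etc/zulip/boto.cfg (via BOTO_CONFIG), so the configured S3
# host is available from boto's config instead of a hardcoded constant.
host = boto.config.get("s3", "host", "s3.amazonaws.com")
url = f"https://{bucket}.{host}/{key}"
```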
We don't have tests for this new use case, in part because they're
likely annoying to write with `moto` and there hasn't been a huge
amount of demand for it. Since this doesn't regress existing S3
backend support, it seems worth merging.
This patches an issue in f37535044 where we mistakenly tried to send
the function as part of the page_params. Instead, we should just
send the list of configuration options (in their user-displayable
form).
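In sketch form (the setting and function names below are placeholders, not
the real ones):

```
# Before (broken): a function object ends up in page_params, which can't be
# serialized meaningfully for the frontend.
page_params["notification_settings_options"] = get_notification_settings_options

# After: send the computed, user-displayable list instead.
page_params["notification_settings_options"] = get_notification_settings_options()
```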
This change adds the OpenAPI data needed to document the POST and
DELETE methods associated with this endpoint.
Descriptions edited slightly by tabbott.
Apparently, the Zulip notifications (and resulting emails) were
correct, but the download links inside the Zulip UI were incorrectly
not including the S3 prefix in the URL, making them not work.
While we're at it, we rewrite the somewhat convoluted previous
system for formatting the data export output.
When using our EMAIL_ADDRESS_VISIBILITY_ADMINS feature, we were
apparently creating bot users with different email and delivery_email
properties, due to what was effectively an oversight in how the code was
written (the initial migration handled bots correctly, but not bots
created after the transition).
Following the refactor in the last commit, the fix for this is just
adding the missing conditional, a test, and a database migration to
fix any incorrectly created bots leaked previously.
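A hedged sketch of the missing conditional (the helper name and dummy-address
format are illustrative, not the exact Zulip code):

```
def get_display_email(user_profile, realm):
    # Under EMAIL_ADDRESS_VISIBILITY_ADMINS, only human users get a dummy
    # address; bots keep email == delivery_email.
    if (
        realm.email_address_visibility == Realm.EMAIL_ADDRESS_VISIBILITY_ADMINS
        and not user_profile.is_bot
    ):
        return f"user{user_profile.id}@{realm.host}"
    return user_profile.delivery_email
```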
This is also a useful preparatory refactor for having a user setting
controlling whether one's own email address is publicly available
within the organization.
This should dramatically improve the queue processor's performance in
cases where there's a very high volume of requests on a given endpoint
by a given user, as described in the new docstring.
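The batching idea, sketched under the assumption that the worker consumes
events in batches (method and field names here are illustrative, not the
worker's actual code):

```
from collections import defaultdict

def consume_user_activity_batch(events):
    # Coalesce a burst of events into one update per (user, client, query),
    # so N requests from one user on one endpoint cost a single database write.
    counts = defaultdict(int)
    latest = {}
    for event in events:
        key = (event["user_profile_id"], event["client_id"], event["query"])
        counts[key] += 1
        latest[key] = event["time"]
    for key, count in counts.items():
        update_user_activity_row(*key, count=count, last_visit=latest[key])  # assumed helper
```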
Until we test this more broadly in production, we won't know if this
is a full solution to the problem, but I think it's likely. We've
never seen the UserActivityInterval worker end up backlogged without a
total queue processor outage, and it should have a similar workload.
Fixes #13180.
We don't actually need to go to memcached (falling back to the
database) to fetch either user or client objects on every event. For
user objects, we can just pass through the user ID
transparently; for client objects, we can use an in-process cache,
since the mapping of string to ID never changes.
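A minimal sketch of such an in-process cache (a plain module-level dict is
enough because the name -> ID mapping never changes and so never needs
invalidation; the helper name is illustrative):

```
from typing import Dict

client_id_cache: Dict[str, int] = {}

def get_client_id(client_name: str) -> int:
    if client_name not in client_id_cache:
        # Client is the Django model; only hit the database on a cache miss.
        client, _created = Client.objects.get_or_create(name=client_name)
        client_id_cache[client_name] = client.id
    return client_id_cache[client_name]
```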
Given the way these tests are structured, it's unnecessary to have 3 separate
classes, and having them makes it confusing to decide where a potential
additional mm email test should be added.