This commit renames the realm-level setting
'signup_notifications_stream' to 'signup_announcements_stream'.
The new name better reflects what the setting does.
This commit renames the realm-level setting 'notifications_stream'
to 'new_stream_announcements_stream'.
The new name better reflects what the setting does.
5c96f94206 mistakenly appended, rather than prepended, the edit to
the history. This caused AssertionErrors when attempting to view the
history of moved messages, which check that the `last_edit_time`
matches the timestamp of the first edit in the list.
Fix the ordering, and update the `edit_history` for messages that were
affected. We limit the repair to messages edited since that commit was
merged, since that helps bound the affected set somewhat.
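A minimal sketch of the ordering invariant (entry structure is
illustrative, not the exact schema):

```python
# edit_history is ordered newest-first; the history viewer asserts
# that last_edit_time matches the first entry's timestamp.
edit_history = [
    {"timestamp": 200, "prev_topic": "b"},  # most recent edit
    {"timestamp": 100, "prev_topic": "a"},
]
new_entry = {"timestamp": 300, "prev_topic": "c"}

edit_history.insert(0, new_entry)  # correct: prepend the new edit
# edit_history.append(new_entry)   # the bug: appended to the end

last_edit_time = 300
assert edit_history[0]["timestamp"] == last_edit_time
```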
The widening of the time between when a process is marked for
reload (at Tornado startup) and when it sends reload events makes it
unlikely-to-impossible that a single `/` request will span both of
them, and thus hit the WebReloadClientError corner case.
Remove it. The bad behaviour it attempted to prevent (a reload right
after opening `/`) was always still possible -- if the `/` request
completed right before Tornado restarted -- so it is not clear it was
ever worth the complication.
Having a non-identity `cache_transformer` is no different from running
it on every row returned by the query_function. Simplify the caching
codepath by merging the two pieces of code.
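An illustrative before/after of the simplification (function names are
stand-ins, not the actual caching API):

```python
from typing import Dict, List

ROWS: Dict[int, str] = {1: "a", 2: "b"}

def fetch_rows(ids: List[int]) -> List[str]:
    return [ROWS[i] for i in ids]

def to_cached_form(row: str) -> str:
    return row.upper()

# Before: the cache layer had to compose two callables, applying
# cache_transformer to each row the query_function returned.
query_function = fetch_rows
cache_transformer = to_cached_form

# After: the transformer is folded into the query function, so the
# caching codepath deals only in already-transformed rows.
def merged_query_function(ids: List[int]) -> List[str]:
    return [to_cached_form(row) for row in fetch_rows(ids)]
```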
Rather than pass around a list of message objects in-memory, we
instead keep the same constructed QuerySet which includes the later
propagated messages (if any), and use that same query to pick out
affected Attachment objects, rather than limiting to the set of ids.
This is not necessarily a win -- the list of message-ids *may* be very
long, in which case the QuerySet's query is more concise, easier to
send to PostgreSQL, and faster for PostgreSQL to parse. However, the
explicit list of ids is almost certainly better-indexed.
After processing the move, the QuerySet must be re-defined as a search
of ids (and possibly a very long list of such), since there is no
other way which is guaranteed to correctly single out the moved
messages. At this point, it is mostly equivalent to the list of
Message objects, and certainly takes no less memory.
Rather than use `bulk_update()` to batch-move chunks of messages, use
a single SQL query to move the messages. This is much more efficient
for large topic moves. Since the `edit_history` field is not yet
JSON (see #26496), this requires that PostgreSQL cast the current data
into `jsonb`, append the new data (also cast to `jsonb`), and then
re-cast that as text.
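A hedged sketch of the resulting statement (table and column names are
illustrative stand-ins, not the actual schema):

```python
# Single-statement topic move. Because edit_history is stored as
# text, PostgreSQL casts it to jsonb, concatenates the new edit
# entry (also cast to jsonb) in front -- matching the newest-first
# ordering -- and casts the result back to text.
MOVE_MESSAGES_SQL = """
    UPDATE messages
    SET topic = %(new_topic)s,
        edit_history = (%(new_edit)s::jsonb || edit_history::jsonb)::text
    WHERE id = ANY(%(message_ids)s)
"""
```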
For single-message moves, this _increases_ the SQL query count by one,
since we have to re-query for the updated data from the database after
the bulk update. However, this is overall still a performance
improvement, reaching 2x to 3x for larger topic moves. Below
is a table of duration in seconds to run `do_update_message` to move a
topic to a new stream, based on messages in the topic, for before and
after this change:
| Topic size | Before | After |
| ---------- | -------- | ------- |
| 1 | 0.1036 | 0.0868 |
| 2 | 0.1108 | 0.0925 |
| 5 | 0.1139 | 0.0959 |
| 10 | 0.1218 | 0.0972 |
| 20 | 0.1310 | 0.1098 |
| 50 | 0.1759 | 0.1366 |
| 100 | 0.2307 | 0.1662 |
| 200 | 0.3880 | 0.2229 |
| 500 | 0.7676 | 0.4052 |
| 1000 | 1.3990 | 0.6848 |
| 2000 | 2.9706 | 1.3370 |
| 5000 | 7.5218 | 3.2882 |
| 10000 | 14.0272 | 5.4434 |
This applies access restrictions in SQL, so that individual messages
do not need to be walked one-by-one. It only functions for stream
messages.
Use of this method significantly speeds up checking whether we moved
"all visible messages" in a topic, since we no longer need to walk
every remaining message in the old topic to determine that at least
one was visible to the user. Similarly, it significantly speeds up
merging into existing topics, since we no longer need to walk every
message in the new topic to determine if the user could see at least
one.
Finally, it unlocks the ability to bulk-update only messages the user
has access to, in a single query (see subsequent commit).
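A hedged sketch of the approach (illustrative schema, not the actual
tables): the visibility check becomes an `EXISTS` clause evaluated
inside PostgreSQL, rather than a Python loop over individual messages.

```python
# Keep only stream messages whose recipient the user is actively
# subscribed to; PostgreSQL evaluates the access check per row.
ACCESSIBLE_MESSAGES_SQL = """
    SELECT m.id
    FROM messages m
    WHERE m.topic = %(topic)s
      AND EXISTS (
          SELECT 1
          FROM subscriptions s
          WHERE s.user_id = %(user_id)s
            AND s.recipient_id = m.recipient_id
            AND s.active
      )
"""
```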
The problem was that this was previously just an uncaught
JsonableError, leading to a full traceback getting spammed to the
admins. The prior commit introduced a clear .code for this error on
the bouncer side, meaning the self-hosted server can now detect it and
handle it gracefully: just logging.error about it, while also taking
the opportunity to adjust the realm.push_notifications_... flags.
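A hedged sketch of the handling (the exception class and code value
below are hypothetical stand-ins for the bouncer's structured error):

```python
import logging

class PushBouncerError(Exception):  # hypothetical stand-in
    def __init__(self, code: str, msg: str) -> None:
        super().__init__(msg)
        self.code = code
        self.msg = msg

def notify_bouncer(send) -> None:
    try:
        send()
    except PushBouncerError as e:
        if e.code == "PUSH_NOTIFICATIONS_DISALLOWED":  # hypothetical code
            # Expected condition: log it instead of letting the
            # traceback escape, and adjust the realm's
            # push_notifications_... flags here.
            logging.error("Push notification error: %s", e.msg)
            return
        raise
```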
- Renames "Bots and integrations" to "Bots overview" everywhere
(sidebar, page title, page URL).
- Adds a copy of /api/integrations-overview (symbolic link) as the
second page in the Bots & integrations section, titled
"Integrations overview".
Fixes #28758.
Creates an incoming webhook integration for Patreon. The main
use case is getting notifications when new patrons sign up.
Fixes #18321.
Co-authored-by: Hari Prashant Bhimaraju <haripb01@gmail.com>
Co-authored-by: Sudipto Mondal <sudipto.mondal1997@gmail.com>
The previous query suffered from bad corner cases when the user had
received a large number of direct messages but sent very few,
comparatively. This meant that the first half of the UNION would
retrieve a very large number of UserMessage rows, requiring fetching a
large number of Message rows, merely to throw them away upon
determining that the recipient was the current user.
Instead of merging two queries of "last 1k received" + "last 1k sent",
we make better use of the UserMessage rows to find "last 1k
sent or received." This may change the list of recipients, as large
disparities in sent/received messages may result in pushing the
most-recently-sent users off of the list. These are likely uncommon
edge cases, however -- and the disparity is the whole reason for the
performance problem.
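A hedged sketch of the reworked fetch (illustrative schema, not the
actual tables; it leans on the fact that, as in Zulip, the sender also
gets a UserMessage row for each message they send):

```python
# One pass over the user's own rows covers both sent and received
# direct messages, replacing the "last 1k received" UNION
# "last 1k sent" plan.
RECENT_DM_MESSAGE_IDS_SQL = """
    SELECT message_id
    FROM user_messages
    WHERE user_id = %(user_id)s
      AND is_direct_message
    ORDER BY message_id DESC
    LIMIT 1000
"""
```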
This also provides more correct answers. For instance, if the message
the user sent to person A today is already their 1001'th most recent
sent message, while their most recent message received from A arrived
yesterday, the previous plan would report yesterday's received
message-id as the max, rather than the more recent message sent today.
While we could theoretically raise the `RECENT_CONVERSATIONS_LIMIT` to
more frequently match the same recipient list as previously, this
increases the cost of the most common cases unreasonably. With a
1000-message limit, the common cases are slightly faster, and the tail
latencies are very much improved; raising `RECENT_CONVERSATIONS_LIMIT`
would increase the result similarity to the old algorithm, at the cost
of the p50 and p75.
| Statistic | Old (s) | New (s) |
| --------- | ------- | ------- |
| Mean | 0.05287 | 0.02520 |
| p50 | 0.00695 | 0.00556 |
| p75 | 0.05592 | 0.03351 |
| p90 | 0.14645 | 0.08026 |
| p95 | 0.20181 | 0.10906 |
| p99 | 0.30691 | 0.16014 |
| p99.9 | 0.57894 | 0.19521 |
| max | 22.0610 | 0.22184 |
On the whole, however, the much more bounded worst case is worth the
small changes to the resultset.
This is preparatory work towards adding a Topic model.
We plan to use 'topic' as the local variable name for
Topic model objects.
Currently, we use *topic as the local variable name for
topic names.
We rename local variables of the form *topic to *topic_name
so that we don't need to think about type collisions in
individual code paths where we might want to talk about both
Topic objects and strings for the topic name.
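An illustrative before/after (the `message` parameter stands in for a
Zulip Message, whose `topic_name()` accessor returns the string name):

```python
def render_topic_link(message) -> str:
    # Before the rename, the string name occupied `topic`:
    #     topic = message.topic_name()
    # After, string forms are consistently *topic_name, leaving
    # `topic` free to later hold a Topic model instance:
    topic_name = message.topic_name()
    return f"#{topic_name}"
```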
Requests to these endpoints are about a specified user, and therefore
also have a notion of the RemoteRealm for these requests. Until now,
these endpoints weren't receiving the realm_uuid value, because it
wasn't used -- but it is now needed for updating
.last_request_datetime on the RemoteRealm.
`<time:1234567890123>` causes a "signed integer is greater than
maximum" exception from dateutil.parser; datetime also cannot handle
it ("year 41091 is out of range") but that is a ValueError which is
already caught.
Catch the OverflowError thrown by dateutil.
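A minimal sketch of the fix (function name illustrative):

```python
import dateutil.parser

def parse_time_syntax(value: str):
    try:
        return dateutil.parser.parse(value)
    except (ValueError, OverflowError):
        # dateutil raises OverflowError ("signed integer is greater
        # than maximum") for huge inputs like 1234567890123, while
        # datetime's "year ... is out of range" is a ValueError.
        return None
```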
boto3 has two different modalities of making API calls -- through
resources, and through clients. Resources are a higher-level
abstraction, and thus more generally useful, but some APIs are only
accessible through clients. It is possible to get to a client object
from a resource, but not vice versa.
Use `get_bucket(...).meta.client` when we need direct access to the
client object for more complex API calls; this lets all of the
configuration for how to access S3 sit within `get_bucket`. Client
objects are not bound to only one bucket, but we get to them based on
the bucket we will be interacting with, for clarity.
Also remove the cached session object, as it serves no real purpose.
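A minimal sketch of the pattern, with a simplified `get_bucket` (the
real one carries the S3 access configuration; credentials are assumed
to be configured):

```python
import boto3

def get_bucket(bucket_name: str):
    # Resources are the higher-level abstraction; constructing the
    # bucket here keeps all S3 access configuration in one place.
    return boto3.resource("s3").Bucket(bucket_name)

bucket = get_bucket("example-bucket")

# Some APIs are only exposed on the low-level client; reach it via
# the resource. The client is not bound to this one bucket, but
# getting it from the bucket keeps the call site's intent clear.
client = bucket.meta.client
url = client.generate_presigned_url(
    "get_object",
    Params={"Bucket": bucket.name, "Key": "path/to/file"},
)
```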
e883ab057f started caching the boto client, which we had identified
as a slow call. e883ab057f went further, calling
`get_boto_client().generate_presigned_url()` once and caching that
result.
This makes the inner cache on the client useless. Remove it.