The problem is not the list comprehension, as the previous wording
implied, but rather the fact that data is needed from the linked
table.
Be explicit about _what_ in the QuerySet API is helpful for addressing
this -- namely, use of `select_related`.
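A minimal sketch of the difference, using hypothetical model and field names (`Message`, `sender`, `recipient`):

```python
# Without select_related, each msg.sender access in the comprehension
# issues an extra query against the linked table (one per message).
messages = Message.objects.filter(recipient=recipient)

# With select_related, the linked table is joined into the original
# query, so the comprehension triggers no further database work.
messages = Message.objects.filter(recipient=recipient).select_related("sender")
sender_names = [msg.sender.full_name for msg in messages]
```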
Changes to stream color happen very rarely, and the palette takes up a
lot of space in the popover.
This commit hides the palette in the default view of the stream popover.
During events such as a stream or topic name edit for a topic, we were
running database queries in a loop, once per message, to fetch
reactions, submessages, and realm_id. This commit reduces the looped
queries to just the realm_id lookup, which is yet to be fixed.
This is accomplished by building the messages with empty reactions
and submessages and then filling those in via bulk queries.
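A minimal sketch of the pattern, with hypothetical names (`message_dicts`, `message_ids`, a `Reaction` model); the actual code structure may differ:

```python
from collections import defaultdict

# Build every message dict with empty collections up front...
for message in message_dicts:
    message["reactions"] = []
    message["submessages"] = []

# ...then fill them in with one bulk query per related table,
# instead of one query per message.
reactions_by_message = defaultdict(list)
for reaction in Reaction.objects.filter(message_id__in=message_ids):
    reactions_by_message[reaction.message_id].append(reaction)

for message in message_dicts:
    message["reactions"] = reactions_by_message[message["id"]]
```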
This fixes a minor regression in a very recent
commit.
In 7ad5bea3e6 I was
a little too aggressive about deactivating users.
We do want a few users who are outside the realm,
just to prevent regressions where we fail to filter
on realm. The likelihood of such regressions is
fairly low, but it would certainly be an ugly bug.
Without this change, you could get obscure
failures when logging in as Cordelia if you
modified test data by doing something
fairly innocuous like adding a new test user.
Also, the complicated query here to exclude
users was flaky, since it didn't explicitly
order by any field before applying the 'LIMIT 6'.
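For instance, a sketch of the kind of query involved (illustrative only, not the exact test code):

```python
# Flaky: with no ORDER BY, the database is free to return matching
# rows in any order, so which six users survive can vary between runs.
users = UserProfile.objects.exclude(id__in=excluded_ids)[:6]

# Deterministic: ordering by a unique column before slicing makes
# the LIMIT 6 result stable.
users = UserProfile.objects.exclude(id__in=excluded_ids).order_by("id")[:6]
```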
Part of the problem with debugging this flake
was that the failure would happen at login time,
but the data actually gets changed in `setUp`,
which is easy to overlook, since it's not
explicitly invoked.
We continue to keep the seat_count set to
a constant, predictable value, since some
tests are very sensitive to having 6 users.
The navbar uses rendered markdown and rendered HTML within the narrow
description; this inserts e.g. katex--html elements and allows
rendering of inline math formulae. Unfortunately, the previous SCSS
file overlooked this fact and used a generic "span" selector, which
would target all spans within the parent element, direct descendants
or otherwise. This had the side effect of applying padding and margin
to inner KaTeX elements, which broke their appearance.
This commit replaces the "span" selector with "& > span" so that only
spans that are direct children of the parent element are selected,
and katex--html is rendered correctly.
Fixes: #14947.
This commit allows non-admins to set the stream post policy while
creating streams.
The restriction existed to prevent a user from creating a stream in
which they cannot post themselves, but that case will be taken care of
by the stream admin feature.
JSON.parse behaves as we want for numbers, but for strings it throws
an error like 'unexpected token at position 0'. This meant we
couldn't read back the value set by `$input.data('val', 'text')`.
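The analogous behavior is easy to reproduce with Python's json module (an analogy only; the frontend code uses JSON.parse):

```python
import json

json.loads("5")     # -> 5, numbers round-trip fine
json.loads("text")  # raises json.JSONDecodeError:
                    # "Expecting value: line 1 column 1 (char 0)"
```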
We should have removed this function from the codebase when we
switched to using dropdown_list_widget, but it was accidentally left
as-is when making that change.
This significantly reduces the time required to handle events like
stream and topic name edits for topics.
Verified using the Chrome Profiler for a topic with 100 messages:
With this commit: 0.64s to move the topic to a different stream.
Without this commit: 5.5s.
For unknown reasons, deleting tens of thousands of ArchiveTransaction
objects results in rapidly growing memory usage in the Django process
running the job, eventually leading to an OOM kill.
I don't understand why Django behaves that way; I would have expected
the failure mode to instead be a serious load problem on the database
server, but perhaps the way Django's internal deletion logic handles
cascading the deletes to many millions of ArchiveMessages and other
ForeignKey objects requires tracking a lot of data in memory.
The solution is the same in any case, which is to batch the deletions
to execute a reasonable number of them at once. Doing a single
ArchiveTransaction at a time would likely result in huge numbers of
database queries in a loop, which performs very poorly. So we balance
by batching deletions in groups of 100 ArchiveTransactions; testing
this in production, I saw no spike of memory usage materially beyond
that of a normal Django process, and each bulk-deletion transaction
takes several seconds to process (meaning per-transaction overhead is
negligible).
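A minimal sketch of the batching loop, assuming a `timestamp` cutoff field; names are illustrative:

```python
TRANSACTION_DELETION_BATCH_SIZE = 100

def delete_in_batches(expired_before) -> None:
    while True:
        ids = list(
            ArchiveTransaction.objects.filter(timestamp__lt=expired_before)
            .order_by("id")
            .values_list("id", flat=True)[:TRANSACTION_DELETION_BATCH_SIZE]
        )
        if not ids:
            break
        # Each delete() cascades only to the ArchiveMessages of these
        # 100 transactions, bounding Django's in-memory bookkeeping.
        ArchiveTransaction.objects.filter(id__in=ids).delete()
```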