Backfill subscription realm audit log SUBSCRIPTION_CREATED events for
users who are currently subscribed but don't have any subscription
events, presumably due to some historical bug. This is important
because those rows are necessary when reactivating a user who is
currently soft-deactivated.
For each stream, we find the subscribed users who have no
subscription-related realm audit log entries, and create a
`backfill=True` subscription audit log entry dated as late as it
plausibly could have been, based on UserMessage rows. We then
optionally insert a `DEACTIVATION` if the current subscription is not
active.
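A rough sketch of that flow, where `subscriptions_lacking_audit_logs`
and `latest_plausible_subscription_time` are hypothetical helpers (the
actual migration differs in detail):

from zerver.models import RealmAuditLog, Stream

for stream in Stream.objects.all():
    for sub in subscriptions_lacking_audit_logs(stream):
        # The latest time this subscription could plausibly have been
        # created, based on the user's UserMessage rows.
        event_time = latest_plausible_subscription_time(sub)
        RealmAuditLog.objects.create(
            modified_user=sub.user_profile,
            modified_stream=stream,
            event_type=RealmAuditLog.SUBSCRIPTION_CREATED,
            event_time=event_time,
            backfilled=True,
        )
        if not sub.active:
            RealmAuditLog.objects.create(
                modified_user=sub.user_profile,
                modified_stream=stream,
                event_type=RealmAuditLog.SUBSCRIPTION_DEACTIVATED,
                event_time=event_time,
                backfilled=True,
            )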
Black 23 enforces some slightly more specific rules about empty-line
counts and removal of redundant parentheses, but the result is still
compatible with Black 22.
(This does not actually upgrade our Python environment to Black 23
yet.)
Signed-off-by: Anders Kaseorg <anders@zulip.com>
To explain the rationale for this change: for example,
`get_user_activity_summary` accepts a `Collection[UserActivity]`,
because `QuerySet[T]` is not strictly a `Sequence[T]` (its slicing
behavior differs from that `Protocol`), which makes `Collection`
necessary.
Similarly, we should have `Iterable[T]` instead of `List[T]` so that
`QuerySet[T]` will also be an acceptable subtype, or `Sequence[T]` when
we also expect it to be indexed.
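For instance (illustrative signatures, not the actual functions):

from typing import Collection, Optional, Sequence

from zerver.models import UserActivity

# Collection admits list, tuple, and QuerySet arguments alike;
# Sequence would reject QuerySet, whose slicing doesn't match the
# Sequence protocol.
def count_records(records: Collection[UserActivity]) -> int:
    return len(records)

# Use Sequence only where we also expect to index into the argument.
def first_record(records: Sequence[UserActivity]) -> Optional[UserActivity]:
    return records[0] if records else None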
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
Race conditions in stream unsubscription may lead to multiple
back-to-back SUBSCRIPTION_DEACTIVATED RealmAuditLog entries for the
same stream. The current logic constructs duplicate UserMessage
entries for such cases, which then fail to insert.
Keep a set of message ids that have been prepared for insertion, so
that we don't duplicate them if there is a duplicated
SUBSCRIPTION_DEACTIVATED row. This also renames the `message` local
variable, which otherwise shadowed the `message` argument of a
different type.
model__id syntax implies needing a JOIN on the model table to fetch the
id. That's usually redundant, because the first table in the query
simply has a 'model_id' column, so the id can be fetched directly.
Django is actually smart enough not to do those redundant joins, but we
should still avoid this misguided syntax.
The exceptions are ManyToMany fields and queries doing a backward
relationship lookup. If "streams" is a many-to-many relationship, then
streams_id is invalid - streams__id syntax is needed. If "y" is a
foreign key field from X to Y:

class X(models.Model):
    y = models.ForeignKey(Y, on_delete=models.CASCADE)

then object x of class X has the field x.y_id, but y of class Y doesn't
have y.x_id. Thus Y queries need to be done like

Y.objects.filter(x__id__in=some_list)
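For illustration, using the hypothetical X/Y models above:

# Preferred: reads the y_id column on X directly; no JOIN implied.
X.objects.filter(y_id=some_id)

# Misguided: model__id syntax implies a JOIN on Y for the same result
# (Django optimizes it away, but the intent is misleading).
X.objects.filter(y__id=some_id)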
There exists a logic bug (see #18236) which causes duplicate
UserMessage rows to be inserted. Currently, this stops catch-up for
all users.
Catch and record the exception for each affected user, so we at least
make catch-up progress on the other users.
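Sketched, with `users_to_catch_up` standing in for the selected users:

import logging

from zerver.lib.soft_deactivation import add_missing_messages

logger = logging.getLogger("zulip.soft_deactivation")

for user_profile in users_to_catch_up:
    try:
        add_missing_messages(user_profile)
    except Exception:
        # Record the failure (e.g. the duplicate-UserMessage bug in
        # #18236) and keep making progress on the remaining users.
        logger.exception("Catch-up failed for user %s", user_profile.id)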
Fixes #2665.
Regenerated by tabbott with `lint --fix` after a rebase and change in
parameters.
Note from tabbott: In a few cases, this converts technical debt in the
form of unsorted imports into different technical debt in the form of
our largest files having very long, ugly import sequences at the
start. I expect this change will increase pressure for us to split
those files, which isn't a bad thing.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Automatically generated by the following script, based on the output
of lint with flake8-comma:
import re
import sys

last_filename = None
last_row = None
lines = []

for msg in sys.stdin:
    m = re.match(
        r"\x1b\[35mflake8 \|\x1b\[0m \x1b\[1;31m(.+):(\d+):(\d+): (\w+)", msg
    )
    if m:
        filename, row_str, col_str, err = m.groups()
        row, col = int(row_str), int(col_str)
        if filename == last_filename:
            assert last_row != row
        else:
            if last_filename is not None:
                with open(last_filename, "w") as f:
                    f.writelines(lines)
            with open(filename) as f:
                lines = f.readlines()
            last_filename = filename
        last_row = row
        line = lines[row - 1]
        if err in ["C812", "C815"]:
            lines[row - 1] = line[: col - 1] + "," + line[col - 1 :]
        elif err in ["C819"]:
            assert line[col - 2] == ","
            lines[row - 1] = line[: col - 2] + line[col - 1 :].lstrip(" ")

if last_filename is not None:
    with open(last_filename, "w") as f:
        f.writelines(lines)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Using logging.info() rather than logger.info() meant that our
zulip.soft_deactivation logger configuration (which, in particular,
included not logging to the console) was not active on this log line,
resulting in the `manage.py soft_deactivate_users` cron job sending
emails every time it ran.
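The distinction, sketched (`count` is a stand-in variable):

import logging

logger = logging.getLogger("zulip.soft_deactivation")

# Wrong: routes through the root logger, bypassing the
# zulip.soft_deactivation configuration, so it also hit the console.
logging.info("Soft deactivated %d users", count)

# Right: honors the named logger's handlers and levels.
logger.info("Soft deactivated %d users", count)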
Fixes #13750.
Due to my misreading the code and a sloppy search, I thought in commit
8218bf101c that `all_stream_subscription_logs` didn't filter for
streams.
While changing this, we'll switch to using `.modified_stream_id` for
potentially better performance.
If a soft deactivated user had a subscription double-toggled without
any new messages being sent in between, add_missing_messages might
incorrectly process those two subscription changes in the wrong order.
Fortunately, the failure mode was usually to throw this exception:
django.db.utils.IntegrityError: duplicate key value violates unique
constraint "zerver_usermessage_user_profile_id_message_id_4936d0df_uniq"
DETAIL: Key (user_profile_id, message_id)=(4, 57) already exists.
Our unit tests actually had this precise setup some fraction of the
time, because a bit of the test setup code subscribed+unsubscribed the
target user without sending any messages in between, resulting in a
test failure something like 50% of the time.
The original exception was hard to reproduce reliably (resulting in an
extremely annoying nondeterministic test failure), but it is easily
reproducible by changing the "id" to "-id" in this change to always
mis-order the processing of those RealmAuditLog events.
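In code terms, the fix amounts to ordering the audit log rows with the
row id as a tie-breaker (a sketch, assuming `event_last_message_id` is
the per-event message-id watermark field):

subscription_logs = RealmAuditLog.objects.filter(
    modified_user=user_profile,
    modified_stream_id=stream_id,
).order_by("event_last_message_id", "id")
# Ordering by "-id" instead reliably reproduces the mis-ordering.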
Previously, our soft-deactivation logic incorrectly failed to filter
the set of stream subscription changes it examined down to only those
for the target stream.
This could result in unspecified buggy behavior.
`break` will do the same thing as `continue` here, since each
iteration will have the same result; it's also worth explaining why
this isn't one layer up in the loop setup.
When soft deactivation is run in "auto" mode (no emails are
specified and all users inactive for specified number of days are
deactivated), catch-up is also run in the "auto" mode if
AUTO_CATCH_UP_SOFT_DEACTIVATED_USERS is True.
Automatically catching up soft-deactivated users periodically would
ensure a good user experience for returning users, but on some servers
we may want to turn off this option to save on some disk space.
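In settings terms, a server preferring to save disk space would set:

# Trade slower catch-up for returning users against reduced
# UserMessage disk usage.
AUTO_CATCH_UP_SOFT_DEACTIVATED_USERS = False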
Fixes #8858, at least for the default configuration, by eliminating
the situation where there are a very large number of messages to
recover.
A user who has been soft deactivated for a long time might have tens
of thousands of messages of "soft deactivated" history. It might take
a minute or more to add UserMessage rows for all of these messages,
causing timeouts. So, we paginate the creation of these UserMessage
rows.
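A minimal sketch of the pagination (the chunk size is an assumed
value, not necessarily what the code uses):

from zerver.models import UserMessage

def insert_user_messages_in_chunks(user_messages, chunk_size=1000):
    # Bound the work done per query so that a user with a huge
    # backlog of history cannot cause timeouts.
    for i in range(0, len(user_messages), chunk_size):
        UserMessage.objects.bulk_create(user_messages[i : i + chunk_size])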
The previous logic for soft deactivation ended up doing a giant
transaction in the case that there were thousands of users to
deactivate; this was messy and potentially buggy.
The batched transactions were useful for RealmAuditLog management,
however. So the right solution is to do reasonably sized batches
(e.g. 100 users).
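Sketched, with `do_soft_deactivate_user` standing in for the per-user
work:

from django.db import transaction

def soft_deactivate_in_batches(user_profiles, batch_size=100):
    for i in range(0, len(user_profiles), batch_size):
        # One modest transaction per batch keeps that batch's
        # RealmAuditLog writes atomic without one giant transaction
        # covering thousands of users.
        with transaction.atomic():
            for user_profile in user_profiles[i : i + batch_size]:
                do_soft_deactivate_user(user_profile)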