Adds a re-usable lockfile_nonblocking helper to context_managers.
Relying on naive `os.mkdir` is not enough, especially now that the
successful operation of this command is necessary for push notifications
to work for many servers.
We can't use the `lockfile` context manager from
`zerver.lib.context_managers`, because we want the custom behavior of
failing if the lock can't be acquired, instead of waiting.
That's because if an instance of this gets stuck, we don't want to keep
queueing up more processes that wait forever whenever the cronjob runs
again; failing and exiting immediately is preferable.
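A minimal sketch of the idea, assuming an fcntl.flock-based implementation
(the actual helper may differ in details):

    import fcntl
    from contextlib import contextmanager
    from typing import Iterator

    @contextmanager
    def lockfile_nonblocking(filename: str) -> Iterator[bool]:
        """Sketch: yields True if the lock was acquired, False if already held."""
        with open(filename, "w") as lock:
            acquired = False
            try:
                try:
                    # LOCK_NB makes acquisition fail immediately instead of blocking.
                    fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
                    acquired = True
                except BlockingIOError:
                    pass
                yield acquired
            finally:
                if acquired:
                    fcntl.flock(lock, fcntl.LOCK_UN)

The caller checks the yielded flag and exits immediately if the lock is
already held, rather than waiting.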
Given that most of the use cases for the realms-only code path would
really like to upload audit logs too, and the others would likely
produce a better user experience if they uploaded audit logs, we
should just have a single main code path here, i.e.
'send_analytics_to_push_bouncer'.
We still only upload usage statistics according to the documented
option, and only from the analytics cron job.
The error handling takes place in 'send_analytics_to_push_bouncer'
itself.
This metadata is essentially all publicly available anyway, and
making uploading it unconditional will simplify some things.
The documentation is not quite accurate in that it claims the server
will upload some metadata that is not actually uploaded yet (but will
be soon). This seems harmless.
This reduces the giant load spike at 5 minutes past the hour, when all
remote servers currently attempt to submit their records.
We do not wish to slew over a full hour, because we want to ensure
that we do not hold the lock when the next hour's analytics runs. It
is also not necessary to have that much variation; 10 minutes is
picked as an arbitrary "long enough" time to spread requests over.
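As an illustration, the spread can be achieved by sleeping for a random
delay before submitting; this is only a sketch, with the upload callable
standing in for whatever the cron job actually invokes:

    import random
    import time
    from typing import Callable

    JITTER_SECONDS = 10 * 60  # arbitrary "long enough" window described above

    def submit_with_jitter(upload: Callable[[], None]) -> None:
        # Sleep 0-10 minutes so remote servers don't all hit the bouncer
        # at the same moment.
        time.sleep(random.uniform(0, JITTER_SECONDS))
        upload()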
This commit moves constants for system group names to a new
"SystemGroups" class so that we can use these group names
in multiple classes in models.py without worrying about the
order of defining them.
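A sketch of the resulting class; the group names match Zulip's system
groups to the best of my knowledge, but the exact contents are an
assumption:

    # Plain constants on a class, so any model in models.py can reference
    # them without import/definition-order issues.
    class SystemGroups:
        FULL_MEMBERS = "role:fullmembers"
        EVERYONE_ON_INTERNET = "role:internet"
        OWNERS = "role:owners"
        ADMINISTRATORS = "role:administrators"
        MODERATORS = "role:moderators"
        MEMBERS = "role:members"
        EVERYONE = "role:everyone"
        NOBODY = "role:nobody"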
The former name is kind of misleading - this function is for the remote
server to send analytics to the push bouncer. Under our usual
terminology, a "remote server" is a self-hosted Zulip server. So data is
sent FROM not TO a remote server.
_default_manager is the same as objects on most of our models. But
when a model class is stored in a variable, the type system doesn’t
know which model the variable is referring to, so it can’t know that
objects even exists (Django doesn’t add it if the user added a custom
manager of a different name). django-stubs used to incorrectly assume
it exists unconditionally, but it no longer does.
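A hypothetical example of the pattern this enables:

    from typing import Type

    from django.db.models import Model

    def count_rows(model: Type[Model]) -> int:
        # `model` could be any model class, so the type checker can't assume
        # a manager named `objects` exists; `_default_manager` always does.
        return model._default_manager.count()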
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Adds the count of users with the role of guest to the stats view
`page_params` via a database query. This information is then added
to the summary statistics section of the analytics page after being
formatted by `stats.js`.
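A sketch of the kind of query involved; realm and page_params are assumed
to be in scope, and the exact field names are an assumption based on
Zulip's UserProfile model:

    # Count active (non-bot) guest users for the realm being viewed.
    guest_count = UserProfile.objects.filter(
        realm=realm,
        role=UserProfile.ROLE_GUEST,
        is_active=True,
        is_bot=False,
    ).count()
    page_params["guest_count"] = guest_count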
Creates Bassanio as a guest user in the database for the analytics
realm.
Fixes #20162.
Adds the realm's used storage space for attachments to the stats
view `page_params`. This information is then added to the summary
statistics section of the analytics page after being formatted by
`stats.js`.
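A sketch of how the used space might be computed; Attachment.size (bytes)
and the surrounding names are assumptions:

    from django.db.models import Sum

    # Sum attachment sizes (bytes) for the realm; None means no attachments.
    used_space = (
        Attachment.objects.filter(realm=realm).aggregate(Sum("size"))["size__sum"] or 0
    )
    page_params["storage_space_used"] = used_space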
Uses the emoji test image to create an `Attachment` in the database
for the analytics realm. Even though no message is created to claim
the attachment, it is still counted in the storage space used data
for the analytics `/stats/` page.
This commit sets can_remove_subscribers_group to the admins system
group when creating streams, as that will be the default value
of this setting. In the future, we will provide an option to set the
value of this setting to any user group when creating streams
via the API or UI.
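A hypothetical sketch of the default being applied at stream creation
time (helper and variable names are illustrative, not the actual code):

    # Look up the realm's administrators system group and use it as the
    # default for the new setting.
    admins_group = UserGroup.objects.get(
        realm=realm, name=SystemGroups.ADMINISTRATORS, is_system_group=True
    )
    stream = Stream.objects.create(
        realm=realm,
        name=stream_name,
        can_remove_subscribers_group=admins_group,
    )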
This adds the is_user_active field with the appropriate code for setting the
value correctly in the future. In the following commit a migration to
backfill the value for existing Subscriptions will be added.
To ensure correct user_profile.is_active handling also in tests, we
replace all direct .is_active mutations with calls to the appropriate
functions.
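A sketch of such a function; the name change_user_is_active and the
details are assumptions, and UserProfile/Subscription are assumed to be
imported:

    from django.db import transaction

    def change_user_is_active(user_profile: UserProfile, value: bool) -> None:
        # Flip is_active and keep the denormalized Subscription.is_user_active
        # column in sync, atomically.
        with transaction.atomic():
            user_profile.is_active = value
            user_profile.save(update_fields=["is_active"])
            Subscription.objects.filter(user_profile=user_profile).update(
                is_user_active=value
            )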
This is a prep commit. Currently we only pass CountStat.property
to the last_successful_fill function. But it needs access to
CountStat.time_increment as well. We could pass the entire CountStat
object to the function as a workaround, but making last_successful_fill
a property of CountStat seems much cleaner.
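A sketch of the direction, written as an instance method (it could
equally be a @property); the FillState query mirrors the existing
helper, but the details are assumptions and FillState is assumed to be
imported:

    from datetime import datetime
    from typing import Optional

    class CountStat:
        ...

        def last_successful_fill(self) -> Optional[datetime]:
            # With access to self.time_increment, a partially-done fill can be
            # reported as one increment earlier than the recorded end_time.
            fillstate = FillState.objects.filter(property=self.property).first()
            if fillstate is None:
                return None
            if fillstate.state == FillState.DONE:
                return fillstate.end_time
            return fillstate.end_time - self.time_increment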
Unlike stream_stats, I'm not aware of any of these having been used in
the last few years, and they're basically just really bad subsets of the
data in /activity, which also doesn't require shell access to use.
These haven't had real work or usage, AFAIK, since 2013.
A few major themes here:
- We remove short_name from UserProfile
and add the appropriate migration (a sketch
follows this list).
- We remove short_name from various
cache-related lists of fields.
- We allow import tools to continue to
write short_name to their export files,
and then we simply ignore the field
at import time.
- We change functions like do_create_user,
create_user_profile, etc.
- We keep short_name in the /json/bots
API. (It actually gets turned into
an email.)
- We don't modify our LDAP code much
here.
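The UserProfile migration itself is mechanical; a sketch with
placeholder numbering:

    from django.db import migrations

    class Migration(migrations.Migration):
        dependencies = [
            ("zerver", "XXXX_previous_migration"),  # placeholder
        ]

        operations = [
            migrations.RemoveField(
                model_name="userprofile",
                name="short_name",
            ),
        ]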
Fixes #2665.
Regenerated by tabbott with `lint --fix` after a rebase and change in
parameters.
Note from tabbott: In a few cases, this converts technical debt in the
form of unsorted imports into different technical debt in the form of
our largest files having very long, ugly import sequences at the
start. I expect this change will increase pressure for us to split
those files, which isn't a bad thing.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Automatically generated by the following script, based on the output
of lint with flake8-comma:
    import re
    import sys

    last_filename = None
    last_row = None
    lines = []

    for msg in sys.stdin:
        m = re.match(
            r"\x1b\[35mflake8 \|\x1b\[0m \x1b\[1;31m(.+):(\d+):(\d+): (\w+)", msg
        )
        if m:
            filename, row_str, col_str, err = m.groups()
            row, col = int(row_str), int(col_str)
            if filename == last_filename:
                assert last_row != row
            else:
                if last_filename is not None:
                    with open(last_filename, "w") as f:
                        f.writelines(lines)
                with open(filename) as f:
                    lines = f.readlines()
                last_filename = filename
            last_row = row
            line = lines[row - 1]
            if err in ["C812", "C815"]:
                lines[row - 1] = line[: col - 1] + "," + line[col - 1 :]
            elif err in ["C819"]:
                assert line[col - 2] == ","
                lines[row - 1] = line[: col - 2] + line[col - 1 :].lstrip(" ")

    if last_filename is not None:
        with open(last_filename, "w") as f:
            f.writelines(lines)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>