For 3000 messages and 400 users, this saved
about 30 seconds.
We only do two queries per batch of messages
now, and the algorithm is easier to analyze,
as it's just three nested loops.
I think it's important that the callers understand
that bulk_add_subscriptions assumes all streams
are being created within a single realm, so I make
it an explicit parameter.
This may be overkill--I would also be happy if we
just included the assertions from this commit.
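A minimal sketch of the resulting shape (simplified signature and
hypothetical body; the real function takes more arguments):

    def bulk_add_subscriptions(realm, streams, users):
        # The realm is now explicit; every stream passed in is assumed
        # to belong to that single realm, and we assert that up front.
        for stream in streams:
            assert stream.realm_id == realm.id
        # ... the actual subscription logic follows in the real code.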
SIGALRM is the simplest way to set a specific maximum duration that
queue workers can take to handle a specific message. This only works
in non-threaded environments, however, as signal handlers are
per-process, not per-thread.
The MAX_CONSUME_SECONDS is set quite high, at 10s -- the longest
average worker consume time is embed_links, which hovers near 1s.
Since just knowing the recent mean does not give much information[1],
it is difficult to know how much variance is expected. As such, we
set the threshold to be such that only events which are significant
outliers will be timed out. This can be tuned downwards as more
statistics are gathered on the runtime of the workers.
The exception to this is DeferredWorker, which deals with quite-long
requests, and thus has no enforceable SLO.
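For reference, a minimal sketch of the SIGALRM pattern (names here are
hypothetical; the actual worker code is more involved):

    import signal

    MAX_CONSUME_SECONDS = 10

    class WorkerTimeoutError(Exception):
        pass

    def on_alarm(signum, frame):
        raise WorkerTimeoutError(f"consume exceeded {MAX_CONSUME_SECONDS}s")

    def consume_with_timeout(consume, event):
        # Signal handlers are per-process, not per-thread, so this only
        # works in non-threaded worker environments.
        signal.signal(signal.SIGALRM, on_alarm)
        signal.alarm(MAX_CONSUME_SECONDS)
        try:
            consume(event)
        finally:
            signal.alarm(0)  # cancel the pending alarm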
[1] https://www.autodesk.com/research/publications/same-stats-different-graphs
We set wildcard_mention_policy in the test database so that we can
avoid future changes in mention puppeteer tests, as the default
membership of streams in the Zulip development organization is large
enough to prevent random users from using wildcard mentions.
We call build_message_send_dict from check_message instead of
do_send_messages.
This is a prep commit for adding a new setting for handling
wildcard mentions in large streams.
Having both of these is confusing; TORNADO_SERVER is used only when
there is one TORNADO_PORT. Its primary use is actually to be _unset_,
and signal that in-process handling is to be done.
Rename to USING_TORNADO, to parallel the existing USING_RABBITMQ, and
switch the places that used it for its contents to using
TORNADO_PORTS.
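A hypothetical sketch of the settings shape after this change (the
port value is illustrative):

    USING_TORNADO = True  # parallels USING_RABBITMQ; replaces TORNADO_SERVER
    TORNADO_PORTS = [9800]  # callers read the ports directly now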
A few major themes here:
- We remove short_name from UserProfile
and add the appropriate migration.
- We remove short_name from various
cache-related lists of fields.
- We allow import tools to continue to
write short_name to their export files,
and then we simply ignore the field
at import time.
- We change functions like do_create_user,
create_user_profile, etc.
- We keep short_name in the /json/bots
API. (It actually gets turned into
an email.)
- We don't modify our LDAP code much
here.
The prior version clobbered all flags, which means
we had unrealistic values for is_private.
Now we only touch the unread flag, which
also means that when we next go to create alert words,
those will work.
Added a new event type, STREAM_CREATED, to AbstractRealmAuditLog.
Since streams are ultimately created in the create_stream_if_needed
function in zerver/lib/streams.py, we log the RealmAuditLog entry
there.
Passed acting_user when the create_stream_if_needed or ensure_stream
function is called.
Added tests in test_audit_log.
Fixes #2665.
Regenerated by tabbott with `lint --fix` after a rebase and change in
parameters.
Note from tabbott: In a few cases, this converts technical debt in the
form of unsorted imports into different technical debt in the form of
our largest files having very long, ugly import sequences at the
start. I expect this change will increase pressure for us to split
those files, which isn't a bad thing.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Automatically generated by the following script, based on the output
of lint with flake8-comma:
import re
import sys

last_filename = None
last_row = None
lines = []
for msg in sys.stdin:
    m = re.match(
        r"\x1b\[35mflake8 \|\x1b\[0m \x1b\[1;31m(.+):(\d+):(\d+): (\w+)", msg
    )
    if m:
        filename, row_str, col_str, err = m.groups()
        row, col = int(row_str), int(col_str)
        if filename == last_filename:
            assert last_row != row
        else:
            if last_filename is not None:
                with open(last_filename, "w") as f:
                    f.writelines(lines)
            with open(filename) as f:
                lines = f.readlines()
            last_filename = filename
        last_row = row
        line = lines[row - 1]
        if err in ["C812", "C815"]:
            lines[row - 1] = line[: col - 1] + "," + line[col - 1 :]
        elif err in ["C819"]:
            assert line[col - 2] == ","
            lines[row - 1] = line[: col - 2] + line[col - 1 :].lstrip(" ")

if last_filename is not None:
    with open(last_filename, "w") as f:
        f.writelines(lines)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Generated by pyupgrade --py36-plus --keep-percent-format, but with the
NamedTuple changes reverted (see commit
ba7906a3c6, #15132).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
datetime.timezone is available in Python ≥ 3.2. This also lets us
remove a pytz dependency from the PostgreSQL scripts.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We change do_create_user and create_user to accept
role as a parameter instead of 'is_realm_admin' and 'is_guest'.
These changes are done to minimize data conversions between
role and boolean fields.
This commit merges do_change_is_admin and do_change_is_guest to a
single function do_change_user_role which will be used for changing
role of users.
do_change_is_api_super_user is added as a separate function for
changing is_api_super_user field of UserProfile.
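A rough sketch of the resulting API (simplified; the real constants
live on UserProfile, and the values here are assumptions):

    ROLE_REALM_ADMINISTRATOR = 200
    ROLE_MEMBER = 400
    ROLE_GUEST = 600

    def do_change_user_role(user_profile, value):
        # Single entry point replacing do_change_is_admin and
        # do_change_is_guest; `value` is one of the role constants.
        user_profile.role = value
        user_profile.save(update_fields=["role"])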
Instead of using stream_name + integers as topics, we now
generate topics using pos in `config.generate_data.json`.
This helps us create and test more realistic topics.
Generated by `pyupgrade --py3-plus --keep-percent-format` on all our
Python code except `zthumbor` and `zulip-ec2-configure-interfaces`,
followed by manual indentation fixes.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
We now restrict emails on the zulip realm, and now
`email` and `delivery_email` will be different for
users.
This change should make it more likely to catch
errors where we leak delivery emails or use the
wrong field for lookups.
Instead of trying to set the _requestor_for_logs attribute in all the
relevant places, we try to use request.user when possible (that will be
when it's a UserProfile or RemoteZulipServer as of now). In other
places, we set _requestor_for_logs to avoid manually editing the
request.user attribute, as that should mostly be left for Django to
manage.
In places where we remove the "request._requestor_for_logs = ..." line,
it is clearly implied by the previous code (or the current surrounding
code) that request.user is of the correct type.
The email domain restriction to @zulip.com is annoying in the
development environment when trying to test sign up. For consistency,
it's best to
have tests use the same default, and the tests that require domain
restriction can be adjusted to set that configuration up for themselves
explicitly.
This index is intended to optimize the performance of the very
frequently run query of "what is the presence status of all users in a
realm?".
Main changes:
- add realm_id to UserPresence
- add index for realm_id
- backfill realm_id for old rows
- change all writes to UserPresence to include
realm_id
The index is of this form:
"zerver_userpresence_realm_id_5c4ef5a9" btree (realm_id)
We will create an index on (realm_id, timestamp) in a
future commit, but I think it's a bit faster if you do
the backfill before the index.
There's also a minor tweak to the populate_db script.
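A hedged sketch of the backfill step (illustrative only; the real
migration differs in its details):

    def backfill_user_presence_realm_id(apps, schema_editor):
        # Fill in realm_id on old rows from the owning user's realm.
        UserPresence = apps.get_model("zerver", "UserPresence")
        for presence in UserPresence.objects.filter(realm=None):
            presence.realm_id = presence.user_profile.realm_id
            presence.save(update_fields=["realm_id"])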
This commit includes a new `stream_post_policy` setting,
by replacing the `is_announcement_only` field from the Stream model,
which is done by mirroring the structure of the existing
`create_stream_policy`.
It includes the necessary schema and database migrations to migrate
the is_announcement_only boolean field to stream_post_policy, a
PositiveSmallIntegerField similar to many other settings.
This change is done to allow organization administrators to restrict
new members from creating and posting to a stream. However, this does
not affect admins who are new members.
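Illustrative sketch of the new field's shape (constant names and
values are assumptions, not the exact code):

    from django.db import models

    class Stream(models.Model):
        STREAM_POST_POLICY_EVERYONE = 1
        STREAM_POST_POLICY_ADMINS = 2
        STREAM_POST_POLICY_RESTRICT_NEW_MEMBERS = 3

        stream_post_policy = models.PositiveSmallIntegerField(
            default=STREAM_POST_POLICY_EVERYONE
        )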
With many tweaks by tabbott to documentation under /help, etc.
Fixes #13616.
This is a preparatory commit for using isort for sorting all of our
imports, merging changes to files where we can easily review the
changes as something we're happy with.
These are also files with relatively little active development, which
means we don't expect much merge conflict risk from these changes.
This fixes a similar problem to the last commit; we don't use
memcached with the test database, so we don't need to flush memcached
when rebuilding it.
(And if we try, we'll get exceptions trying to access the relevant
settings).
Our recent fixes to using the system's configured memcached settings
broke populate_db, because its hacky clear_database helper is called
with a hacked-up settings module.
We fix this by first moving this out-of-place code from models.py into
populate_db, and then saving the settings required to access memcached
so that we can use them in clear_database.
We also fix a mypy error in flush-memcached that matches the same
issue fixed in clear_database.
Extracting this calculation makes it easier to hack
it when you're trying to load lots of users.
We probably want a slightly more realistic calculation
here for stress testing. And also fewer rows. But
at least now it's a little clearer what it's doing.
This includes adding a new endpoint to the push notification bouncer
interface, and code to call it appropriately after resetting a user's
personal API key.
When we add support for a user having multiple API keys, we may need
to add an additional key here to support removing keys associated with
just one client.
Then, find and fix a predictable number of previous misuses.
With a small change by tabbott to preserve backwards compatibility for
sending `yes` for the `forged` field.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
One small change in behavior is that this creates an array with all the
row_objects at once, rather than creating them 1000 at a time.
That should be fine, given that the client batches these in units of
10000 anyway, and so we're just creating 10K rows of a relatively
small data structure in Python code here.
Fixes #1727.
With the server down, apply migrations 0245 and 0246. 0246 will remove
the pub_date column, so it's essential that the previous migrations
ran correctly to copy data before running this.
Previous cleanups (mostly the removals of Python __future__ imports)
were done in a way that introduced leading newlines. Delete leading
newlines from all files, except static/assets/zulip-emoji/NOTICE,
which is a verbatim copy of the Apache 2.0 license.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This commit alone breaks things; it needs to be merged with the
follow-up ones.
welcome-bot is removed from the explicit list, because it already is in
settings.INTERNAL_BOTS.
Django’s default FileSystemFinder disallows STATICFILES_DIRS from
containing STATIC_ROOT (by raising an ImproperlyConfigured exception),
because STATIC_ROOT is supposed to be the result of collecting all the
static files in the project, not one of the potentially many sources
of static files.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
We rename the send_messages function to generate_and_send_messages, and
factor out the actual sending part of it into a separate function, which
now gets the name send_messages().
Sending messages one-by-one is significantly slower compared to taking
advantage of the batch-handling code in do_send_messages, so we pass all
the messages to the function in one go. This is memory-hungry if there's
a lot of messages, so we will allow splitting into smaller batches in
the next commit.
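Roughly the kind of batching helper that splitting implies (name and
shape are hypothetical):

    def message_batches(messages, batch_size):
        # Yield successive slices so callers can trade memory use
        # against the bulk-send speedup of do_send_messages.
        for i in range(0, len(messages), batch_size):
            yield messages[i : i + batch_size]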
The code generating pub_dates for messages would fail to distribute them
across days if tot_messages was too large.
We refactor this code as a separate function (for clarity and to unit
test for the bug we're fixing), and change the structure and naming to a
form that more clearly describes what's happening. We also shift away
from the approach of all the float-to-int conversions, as this is in
general tricky and bug-prone - Python's timedelta() handles floats as
arguments, so we take advantage of that.
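A minimal illustration of the timedelta-based approach (hypothetical
helper; the real function lives in populate_db):

    from datetime import datetime, timedelta, timezone

    def spread_dates(tot_messages, days):
        # Spread tot_messages timestamps evenly across `days` days;
        # timedelta accepts float arguments, so no manual float-to-int
        # conversion is needed.
        now = datetime.now(tz=timezone.utc)
        spacing = timedelta(days=days / tot_messages)
        return [now - spacing * i for i in range(tot_messages)]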
Add a new custom profile field type, External account.
An external account field links a user's social media
profile with their account, e.g. GitHub, Twitter, etc.
Fixes part of #12302
Rename URL type custom profile field in populate db to avoid confusion
with the "GitHub profile" custom external account profile field we'll
be adding shortly.
This provides a clean warning and 40x error, rather than a 500, for
this corner case which is very likely user error.
The test here is awkward because we have to work around
https://github.com/zulip/zulip/issues/12362.
The hope is that by having a shorter list of initial streams, it'll
avoid some potential confusion about the value of topics.
At the very least, having 5 streams each with 1 topic was not a good
way to introduce Zulip.
This commit minimizes changes to the message content in
`send_initial_realm_messages` to keep the diff readable. Future commits will
reshape the content.