This commit adds a 'skip-delay' option to the
'send_zulip_update_announcements' management command.
It will be useful for self-hosted servers after the 9.0 upgrade, to
avoid the 24-hour delay in receiving update messages after the group
DM is sent to the admins.
One can run the management command with the --skip-delay flag
to immediately send the update messages.
Using --rotate-key without write access to the secrets file is currently
quite painful, since you end up rotating your registration's secret with
no local record of it; effectively you lose your registration and
need help from support. We should just prevent this failure mode.
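A minimal sketch of such a guard, assuming the standard
/etc/zulip/zulip-secrets.conf location; the surrounding option
plumbing is illustrative, not the actual command:

    import os
    import sys

    SECRETS_FILE = "/etc/zulip/zulip-secrets.conf"

    def check_can_rotate_key() -> None:
        # Refuse to rotate if we could not record the new secret locally.
        if not os.access(SECRETS_FILE, os.W_OK):
            print(f"{SECRETS_FILE} is not writable; refusing to rotate the key.")
            sys.exit(1)
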
This commit adds a management command that will run regularly
as a cron job to send zulip updates to realms based on their
current and latest zulip_update_announcements_level.
For realms with:
* level = None: Send a group DM to admins notifying them about
  this new feature & suggesting they set the stream accordingly.
* level = 0:
  * If the stream is still not configured, wait for a week
    before setting their level to the latest level. They will
    miss updates until they configure the stream.
  * If the stream is configured, send updates.
* level > 0: Send one message/update per level & increase
  the level by 1 till the latest level.
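A rough sketch of that per-realm dispatch (the helper functions are
hypothetical; the field names follow the commit text):

    from zerver.models import Realm

    def handle_realm(realm: Realm) -> None:
        level = realm.zulip_update_announcements_level
        if level is None:
            # New feature: notify admins and suggest configuring the stream.
            send_group_dm_to_admins(realm)  # hypothetical helper
        elif level == 0:
            if realm.zulip_update_announcements_stream is None:
                if one_week_since_group_dm(realm):  # hypothetical helper
                    realm.zulip_update_announcements_level = LATEST_LEVEL
                    realm.save(update_fields=["zulip_update_announcements_level"])
            else:
                send_one_update_per_level(realm)  # hypothetical helper
        else:
            # level > 0: one message per level, up to LATEST_LEVEL.
            send_one_update_per_level(realm)
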
Fixes #28604.
This makes no-immediate-reloads the default for runtornado, matching
the production configuration, and changes the development incantation
to be the one that specifies the departure from the norm, with
--immediate-reloads.
Creating the QueueProcessingWorker objects when the ThreadedWorker is
created can lead to a race which causes confusing error messages:
1. A thread tries to call `self.worker = get_worker()`
2. This call raises an exception, which is caught by
`log_and_exit_if_exception`
3. `log_and_exit_if_exception` sends our process a SIGUSR1, _but
otherwise swallows the error_.
4. The thread's `.run()` is called, which tries to access
`self.worker`, which was never set, and throws another exception.
5. The process handles the SIGUSR1, restarting.
Move the creation of the worker to when the thread is started, so the
worker object does not need to be stored, and a failure in its
construction cannot be decoupled from where it is used.
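A minimal sketch of the shape of the fix (class and helper names are
illustrative, not the exact Zulip code):

    import logging
    import threading

    class ThreadedWorker(threading.Thread):
        def __init__(self, queue_name: str) -> None:
            super().__init__()
            # Only remember what run() needs; do not build the worker here.
            self.queue_name = queue_name

        def run(self) -> None:
            try:
                # Constructed at start time, so a failure is raised exactly
                # where the worker would have been used.
                worker = get_worker(self.queue_name)  # assumed helper
                worker.setup()
                worker.start()
            except Exception:
                logging.exception("Worker %s failed", self.queue_name)
                raise
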
If we `.distinct("delivery_email")` then we must also
`.order_by("delivery_email")`; adc987dc43 added the `.order_by`
call, which broke the newsletter codepath, since it did not contain
the `delivery_email` in the ordering fields.
Add a flag to distinct on emails in `send_custom_email`.
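For reference, the Django constraint here is that `.distinct(*fields)`
requires the queryset's ordering to start with those same fields; an
illustrative queryset:

    from zerver.models import UserProfile

    # DISTINCT ON ("delivery_email") is only valid when "delivery_email"
    # leads the ordering; the filter shown is illustrative.
    users = (
        UserProfile.objects.filter(enable_marketing_emails=True)
        .order_by("delivery_email", "id")
        .distinct("delivery_email")
    )
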
The set of users with `enable_marketing_emails=True` is those who have
opted into getting marketing newsletter emails -- but we previously
limited further to only those users active in the last month.
Broaden that to "opted in, and either recently active or an owner or
an admin," with the goal of providing information to folks who may
have tried out Zulip in the past.
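A hedged sketch of the broadened filter; field names like `last_login`
are assumptions from the commit text, not necessarily the real query:

    from datetime import timedelta

    from django.db.models import Q
    from django.utils.timezone import now

    from zerver.models import UserProfile

    recipients = UserProfile.objects.filter(
        Q(enable_marketing_emails=True)
        & (
            Q(last_login__gte=now() - timedelta(days=30))
            | Q(role__in=[UserProfile.ROLE_REALM_OWNER,
                          UserProfile.ROLE_REALM_ADMINISTRATOR])
        )
    )
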
Co-authored-by: Tim Abbott <tabbott@zulip.com>
Saying `**options: str` is a lie, since it contains bools. We pluck
out the two bools that we need properly typed because we will be
pushing them into function calls, and type them explicitly as bools.
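A sketch of the pattern (the option names here are hypothetical):

    from typing import Any

    from django.core.management.base import BaseCommand

    class Command(BaseCommand):
        def handle(self, *args: Any, **options: Any) -> None:
            # Pull out the two flags we pass onward, typed explicitly.
            dry_run: bool = options["dry_run"]
            skip_delay: bool = options["skip_delay"]
            self.do_send(dry_run=dry_run, skip_delay=skip_delay)  # hypothetical
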
We call 'send_server_data_to_push_bouncer' just after registering the
server for push notifications.
This helps ensure the user counts are current when first logging in
after the RemoteRealm flow.
This works around the `/usr/bin/pg_dump` failure described in the
previous commit. Since we are now calling the appropriately-versioned
`pg_dump` binary directly, it is no longer "necessary", but is added
as a defense-in-depth.
`/usr/bin/pg_dump` on Ubuntu and Debian is actually a tool which
attempts to choose which `pg_dump` binary to run from among all of the
installed `postgresql-client-*` packages. However, its logic is
confused by passing empty `--host` and `--port` options -- instead of
looking at the running server instance on the host, it assumes some
remote host and chooses the highest-versioned `pg_dump` which is
installed.
Because Zulip writes binary database backups, they are sensitive to
the version of the client `pg_dump` binary that is used -- and the output
may not be backwards compatible. Using a PostgreSQL 16 `pg_dump`
writes archive format 1.15, which cannot be read by a PostgreSQL 15
`pg_restore`.
Zulip does not currently support PostgreSQL 16 as a server. This
means that servers with `postgresql-client-16` installed did not
successfully round-trip Zulip backups -- their backups were written
using PostgreSQL 16's client, while the `pg_restore` used on restore
was correctly chosen as the one whose version matched the server
(PostgreSQL 15 or below), and thus did not understand the new archive
format.
Existing `./manage.py backups` taken since `postgresql-client-16` was
installed are thus not directly usable by the `restore-backup` script.
They are not useless, however, since they can theoretically be
converted into a format readable by PostgreSQL 15 -- by importing into
a PostgreSQL 16 instance, and re-dumping with a PostgreSQL 15
`pg_dump`.
Fix this issue by hard-coding the path to the binary whose version matches
the version of the server we are connected to. This may theoretically
fail if we are connected to a remote PostgreSQL instance and we do not
have a `postgresql-client` package locally installed which matches the
remote PostgreSQL server's version. However, choosing a matching
version is the only way to ensure that it will be able to be imported
cleanly -- and it is preferable that we fail the backup process rather
than write backups that we cannot easily restore from.
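On Debian and Ubuntu, each `postgresql-client-NN` package installs its
binaries under `/usr/lib/postgresql/NN/bin/`, so the path can be
derived from the connected server's major version; a simplified sketch:

    def pg_dump_path(server_major_version: int) -> str:
        # e.g. a PostgreSQL 15 server gets /usr/lib/postgresql/15/bin/pg_dump,
        # whose archive format a PostgreSQL 15 pg_restore can read.
        return f"/usr/lib/postgresql/{server_major_version}/bin/pg_dump"
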
Fixes: #27160.
Our logic for extracting strings from templates did not properly
handle the syntax for code containing whitespace control characters,
resulting in a couple of strings from subscribe_to_more_streams.hbs not
being processed.
This allows us to not have to keep extending the tool for every
one-off use case and set of users; we build a pipeline to generate the
appropriate JSON file, write a template which uses the data it
provides, and run the tool with them together.
The set of objects in the `users` object can be very large (in some
cases, literally every object in the database) and making them into a
giant `id in (...)` to handle the one tiny corner case which we never
use is silly.
Switch the `--users` codepath to returning a QuerySet as well, so it
can be composed. We pass a QuerySet into send_custom_email as well,
so it can ensure that the realm is `select_related` in, no matter how
the QuerySet was generated.
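An illustrative sketch of the composition; the signature is simplified
from the actual `send_custom_email`:

    from __future__ import annotations

    from django.db.models import QuerySet

    from zerver.models import UserProfile

    def send_custom_email(users: QuerySet[UserProfile], *, dry_run: bool = False) -> None:
        # However the QuerySet was built, fetch the realm in the same query.
        users = users.select_related("realm")
        for user in users:
            ...  # render and enqueue the email for this user
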
This commit updates the select_related calls in queries that fetch
UserProfile objects in the sync_ldap_user_data code to pass "realm"
as an argument to the select_related call.
Also, note that "realm" is the only non-null foreign key
field in the UserProfile model, so select_related() was only
fetching the realm object previously as well. But we should
still pass "realm" as an argument in the select_related call so
that we can make sure that only the required fields are
selected in case we add more foreign keys to UserProfile
in the future.
This commit updates the select_related calls in queries that fetch
UserProfile objects in the send_custom_email code to pass "realm"
as an argument to the select_related call.
Also, note that "realm" is the only non-null foreign key
field in the UserProfile model, so select_related() was only
fetching the realm object previously as well. But we should
still pass "realm" as an argument in the select_related call so
that we can make sure that only the required fields are selected
in case we add more foreign keys to UserProfile in the future.
We now set tos_version to "-1" for imported users and the ones
created using the API or using other methods like LDAP, SCIM, and
management commands. This value will help us to allow users to
change the email address visibility setting during their first login.
Previously, it seemed possible for the scheduled messages API to try
to send infinite copies of a message if we had the very poor luck of a
persistent failure happening after a message was sent.
The failure_message field makes it possible to display what happened
in the scheduled messages modal, though that's not exposed via the API
yet.
The previous logic would attempt to send a large number of unrelated
messages in a single transaction, which is just asking for trouble in
the event that one of the attempts fails.
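A rough sketch of the safer shape; fields other than `failure_message`
and the delivery helper are illustrative:

    from django.db import transaction

    for scheduled_message in due_messages:
        try:
            # One transaction per message, so a failure cannot roll back
            # or re-send unrelated messages.
            with transaction.atomic():
                deliver_scheduled_message(scheduled_message)  # hypothetical
        except Exception as e:
            scheduled_message.failed = True
            scheduled_message.failure_message = str(e)
            scheduled_message.save(update_fields=["failed", "failure_message"])
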
Because education organizations and users have slightly specialized
use cases, we update the Welcome Bot message content sent to new
users and new organization owners for these types of organizations
to link to help center articles/guides geared toward these users
and organizations.
Also, updates the demo organization warning to go only to the new
demo organization owner, because the 30-day deletion text is only
definitely accurate when the organization is created.
Fixes #21694.
This is mainly updating the variable names and relevant docstrings
without actual change to the behavior of the command.
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
So far, we've used the BitField .authentication_methods on Realm
for tracking which backends are enabled for an organization. This
however made it a pain to add new backends (requiring altering the
column and a migration - particularly troublesome if someone wanted to
create their own custom auth backend for their server).
Instead this will be tracked through the existence of the appropriate
rows in the RealmAuthenticationMethods table.
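A hedged sketch of what an "is this backend enabled?" check becomes
(model and field names approximated from the commit text):

    from zerver.models import RealmAuthenticationMethod

    enabled_backends = set(
        RealmAuthenticationMethod.objects.filter(realm=realm).values_list(
            "name", flat=True
        )
    )
    if "GitHub" in enabled_backends:
        ...  # the backend is enabled for this organization
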
Previously, we had an architecture where CSS inlining for emails was
done at provision time in inline_email_css.py. This was necessary
because the library we were using for this, Premailer, was extremely
slow, and doing the inlining for every outgoing email would have been
prohibitively expensive.
Now that we've migrated to a more modern library that inlines the
small amount of CSS we have into emails nearly instantly, we are able
to remove the complex architecture built to work around Premailer
being slow and just do the CSS inlining as the final step in sending
each individual email.
This has several significant benefits:
* Removes a fiddly provisioning step that made the edit/refresh cycle
for modifying email templates confusing; there's no longer a CSS
inlining step that, if you forget to do it, results in your testing a
stale variant of the email templates.
* Fixes internationalization problems related to translators working
with pre-CSS-inlined emails, and then Django trying to apply the
translations to the post-CSS-inlined version.
* Makes the send_custom_email pipeline simpler and easier to improve.
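The commit does not name the new library; as one hedged illustration,
a fast inliner such as `css_inline` makes per-message inlining cheap:

    import css_inline

    def finalize_email_html(rendered_html: str) -> str:
        # Inline the CSS as the final step before sending each email.
        return css_inline.inline(rendered_html)
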
Signed-off-by: Daniil Fadeev <fadeevd@zulip.com>
Actions like deleting realms may leave unreferenced uploads in the
attachment storage backend.
Fix these by walking the complete contents of the attachment storage
backend, and removing files which are no longer present in the
database. This may take quite some time, as it is necessarily O(n) in
the number of files uploaded to the system.
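An illustrative sketch of the walk; the storage-backend API shown is an
assumption, not the actual interface:

    from zerver.models import Attachment

    for path_id in storage_backend.all_attachment_paths():  # assumed API
        if not Attachment.objects.filter(path_id=path_id).exists():
            # No database row references this file any more; remove it.
            storage_backend.delete(path_id)
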
Ever since we started bundling the app with webpack, there’s been less
and less overlap between our ‘static’ directory (files belonging to
the frontend app) and Django’s interpretation of the ‘static’
directory (files served directly to the web).
Split the app out to its own ‘web’ directory outside of ‘static’, and
remove all the custom collectstatic --ignore rules. This makes it
much clearer what’s actually being served to the web, and what’s being
bundled by webpack. It also shrinks the release tarball by 3%.
Signed-off-by: Anders Kaseorg <anders@zulip.com>