As of commit cff40c557b (#9300), these
files are no longer served directly to the browser. Disentangle them
from the static asset pipeline so we can refactor it without worrying
about them.
This has the side effect of eliminating the accidental duplication of
translation data via hash-naming in our release tarballs.
This reverts commit b546391f0b (#1148).
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
When parsing custom HTTP headers in the integrations dev panel, in
the fixtures system, and in send_webhook_fixture_message, we now use
a single source of logic: standardize_headers, which takes care of
converting a dictionary of input headers into the standard form that
Django expects.
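For reference, a minimal sketch of what such a helper might look like
(the real standardize_headers may differ in details; the key fact is
Django's request.META convention):

    from typing import Any, Dict

    def standardize_headers(input_headers: Dict[str, Any]) -> Dict[str, str]:
        # Django exposes "X-Custom-Header: v" as META["HTTP_X_CUSTOM_HEADER"].
        standardized = {}
        for name, value in (input_headers or {}).items():
            key = name.upper().replace("-", "_")
            # Content-Type and Content-Length are special-cased by
            # Django and do not receive the HTTP_ prefix.
            if key not in ("CONTENT_TYPE", "CONTENT_LENGTH"):
                key = "HTTP_" + key
            standardized[key] = str(value)
        return standardized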
The argument parser has default empty values set for the options
`--password` and `--password-file`, which causes the script to try to
read a password file even when the argument was not provided.
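The likely shape of the fix (a hedged sketch; only the option names
come from the text above) is to default both options to None, so that
"not provided" is distinguishable from an empty value:

    parser.add_argument('--password', default=None,
                        help='Password to set for the user.')
    parser.add_argument('--password-file', default=None,
                        help='File containing the password to set.')

    # Later, in handle():
    if options['password_file']:
        # Only read the file when the flag was actually provided.
        with open(options['password_file']) as f:
            password = f.read().strip()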
The upload option will no longer be limited strictly to S3 uploads.
This commit serves as a preliminary step for supporting
LOCAL_UPLOADS_DIR as part of the public-only export feature.
This is a very old commit for #106, which has been on hiatus for a few
years. It was significantly modified by tabbott to:
* Improve coding style and variable names
* Update mypy annotations style
* Clean up the testing logic
* Update for API changes elsewhere in our system
But the actual runtime code is essentially unmodified from the
original work by Kirill.
It contains basic support for archiving Messages, UserMessages, and
Attachments with a nice test suite. It's still not usable in
production (e.g. it will probably break Reactions, SubMessages, etc.),
but upcoming commits will address that.
This lets us handle directly in our tooling the user experience that
we document for exporting a realm with member consent (before, it
required unpleasant manual work).
This renames Subscription.in_home_view field to is_muted, for greater
clarity as to what it does just from seeing the setting name, without
having to look it up.
Also disabled an obsolete test_migrations test.
Fixes #10042.
Using sys.exit in a management command makes it impossible
to unit test the code in question. The correct approach to do the same
thing in Django management commands is to raise CommandError.
Followup of b570c0dafa
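For illustration, a minimal management command using the recommended
pattern (the command and its error message are hypothetical):

    from django.core.management.base import BaseCommand, CommandError

    class Command(BaseCommand):
        help = "Example command demonstrating error handling."

        def handle(self, *args, **options):
            if not options.get("realm"):
                # Unlike sys.exit(1), CommandError can be caught and
                # asserted on in unit tests, and still exits nonzero
                # when run from the command line.
                raise CommandError("You must specify a realm.")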
This commit serves as the first step in supporting "public export" as a
webapp feature. The refactoring was done as a means to allow calling
the export logic from elsewhere in the codebase.
These functions don't really belong in actions.py, so we move them out,
into email_mirror_helpers.py. They can't go directly into
email_mirror.py or we'd get circular imports resulting in ImportError.
We change the send_to_email_mirror management command to send messages
to the email mirror through the mirror_email_message function instead of
process_message - this makes the message follow a similar codepath as
emails sent into the mirror with the postfix configuration, which means
going through the MirrorWorker queue. The reason for this is to make
this command useful for testing the new email mirror rate limiter.
When soft deactivation is run in "auto" mode (no emails are
specified and all users inactive for specified number of days are
deactivated), catch-up is also run in the "auto" mode if
AUTO_CATCH_UP_SOFT_DEACTIVATED_USERS is True.
Automatically catching up soft-deactivated users periodically would
ensure a good user experience for returning users, but on some servers
we may want to turn off this option to save on some disk space.
Fixes#8858, at least for the default configuration, by eliminating
the situation where there are a very large number of messages to recover.
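A hedged sketch of the control flow described above; only
AUTO_CATCH_UP_SOFT_DEACTIVATED_USERS comes from this commit, and the
helper names are illustrative:

    from django.conf import settings

    def handle_auto_mode(inactive_days: int) -> None:
        users = get_inactive_users(inactive_days)      # illustrative
        do_soft_deactivate_users(users)                # illustrative
        if settings.AUTO_CATCH_UP_SOFT_DEACTIVATED_USERS:
            # Catching users up periodically keeps their first page
            # load after returning fast, at the cost of disk space.
            do_catch_up_soft_deactivated_users()       # illustrative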
Earlier, the behavior was to raise an exception, thereby stopping the
whole sync. Now we log an error message and skip the field. Also
fixes the `query_ldap` command to report missing fields without
error.
Fixes: #11780.
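An illustrative sketch of the new behavior (the helper is
hypothetical; ldap_user follows django-auth-ldap's shape): log and
skip instead of raising:

    import logging

    def get_ldap_attr(ldap_user, attr: str):
        # Return None (and log) rather than raising KeyError, so one
        # missing field no longer aborts the whole sync.
        try:
            return ldap_user.attrs[attr][0]
        except KeyError:
            logging.error("LDAP user %s has no attribute %s; skipping "
                          "this field.", ldap_user.dn, attr)
            return None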
This avoids a spurious permission error inside the Postgres
`resolve_symlinks` function if we don’t have access to the current
working directory (e.g. we’re running with cwd /root inside `su
zulip`).
While we’re here, add a defensive `--` argument.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
We do this since we have yet to figure out how the entire realm
internal bots scenario should work; for the time being, we will use
the notification bot to deliver the reminders.
Now, if you pass an api_key, we'll initialize the public room
subscribers to be whatever they were at the time the import happened.
Also, document the situation in the caveats section.
Closes #11195. We add a management command to allow us to send emails
to the email mirror directly. The command doesn't require any
configuring of email sending or receiving for the email mirror,
it passes the emails directly using the process_message function.
This adds a feature to send a notification to the stream, using the
notification bot, when a stream is renamed. user_profile is now
passed to do_rename_stream so that the notification can name the user
who renamed the stream. The notification is sent to the stream via
internal_send_stream_message in do_rename_stream.
Fixes #11034.
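A hedged sketch of the notification inside do_rename_stream (the
internal_send_stream_message signature varies across versions; treat
the details as illustrative):

    def do_rename_stream(stream, new_name, user_profile):
        old_name = stream.name
        stream.name = new_name
        stream.save(update_fields=["name"])
        sender = get_system_bot(settings.NOTIFICATION_BOT)  # illustrative
        content = "@_**%s** renamed stream **%s** to **%s**." % (
            user_profile.full_name, old_name, new_name)
        internal_send_stream_message(
            stream.realm, sender, stream.name, "stream events", content)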
Previously, this wasn't an explicit feature of the export tool.
Note that the current version still includes metadata on private
streams and private message recipients, just not their messages.
Note that a pretty common use case for this is a realm admin sending this to
everyone after an import from HipChat or Slack. So this adds the realm_name
to the title (so that there is something they might recognize) and
keeps the wording generic enough to accommodate the user not having
clicked anything to get this email.
Also strengthens the tests a bit to better test the complicated template
logic.
The previous migration code path was broken in two ways:
* ScheduledEmail objects generally contain a `None` value for
whichever of `to_user_id` and `to_email` isn't in use; this could
result in us sending a [None] to send_email(), which doesn't make
sense.
* We were calling handle_send_email_format_changes in the wrong order
with respect to the JSON loading process.
Thanks to Tom Daff for the report!
Technically, we will only need to process deactivated users for the
purpose of reactivating them (and can ignore, e.g., name changes).
But it's simplest to just process them unconditionally.
This should make life a lot more convenient for organizations that use
the LDAP integration and have their avatars in LDAP already.
This hasn't been end-to-end tested against LDAP yet, so there may be
some minor revisions, but fundamentally, it works, has automated
tests, and should be easy to maintain.
Fixes #286.
Apparently, Django's get_current_site function (used, e.g., in
django-two-factor to look up the domain to use in QR codes) first
tries to use the Sites framework, and if unavailable, does the right
thing (namely, using request.get_host()).
We don't use the Sites framework for anything in Zulip, so the correct
fix is to just remove it.
Fixes #11014.
A key part of this is the new helper, get_user_by_delivery_email. Its
verbose name is important for clarity; it should help avoid blind
copy-pasting of get_user (which we'll also want to rename).
Unfortunately, it requires detailed understanding of the context to
figure out which one to use; each is used in about half of call sites.
Another important note is that this PR doesn't migrate get_user calls
in the tests except where not doing so would cause the tests to fail.
This probably deserves a follow-up refactor to avoid bugs here.
Apparently, we have a second code path where we might try to call
send_email library functions on old data, namely in the
queue_processors codebase. So we apply the same migration logic here.
This adds a function that sends provided email to all administrators
of a realm, but in a single email. As a result, send_email now takes
arguments to_user_ids and to_emails instead of to_user_id and
to_email.
We adjust other APIs to match, but note that send_future_email does
not yet support the multiple recipients model for good reasons.
Tweaked by tabbott to modify `manage.py deliver_email` to handle
backwards-compatibility for any ScheduledEmail objects already in the
database.
Fixes #10896.
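A hedged sketch of the new calling convention (the template name and
surrounding code are illustrative):

    # Before: one email per recipient.
    #   send_email("zerver/emails/example", to_user_id=admin.id, ...)
    # After: a single email with every admin as a recipient.
    admin_ids = [admin.id for admin in admins]  # gathered elsewhere
    send_email("zerver/emails/example", to_user_ids=admin_ids,
               context={"realm_name": realm.name})  # other args omitted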
This library was absolutely essential as part of our Python 2->3
migration process, but all of its calls should be either no-ops or
encode/decode operations.
Note also that the library has been wrong since the incorrect
refactoring in 1f9244e060.
Fixes #10807.
This adds a web flow and management command for reactivating a Zulip
organization, with confirmation from one of the organization
administrators.
Further work is needed to make the emails nicer (ideally, we'd send
one email with all the admins on the `To` line, but the `send_email`
library doesn't support that).
Fixes #10783.
With significant tweaks to the email text by tabbott.
This should make it possible for there to safely be multiple Tornado
processes running on different ports on the same system.
It may also fix a rare race bug in development, where previously, it
was possible for the Tornado processes for Casper and the main
development server to interfere; I haven't investigated whether this
was a real bug or not, but now those two services will use independent
Tornado files.
We still need to add something to direct traffic between the different
Tornado processes.
Masking content can be useful for testing
out conversions where you're dealing
with data from customers and want to avoid
inadvertently reading their content (while
still having semi-realistic messages).
This is a very early version of a tool to convert Hipchat
tar files into data files that can be used by the Zulip
import process.
We include the most fundamental entities--users and
streams. Customers who don't care about past messages
or customizations could start an instance off of this
and start communicating.
Of course, there are a lot of things missing in the
initial version:
* messages!
* file assets -- avatars, emojis, attachments
* probably lots of other minor things
We currently ignore any incoming dates from Hipchat data
and just use the current time. This is consistent with
other imports.
We also don't have any docs yet, although the process
will be extremely similar to the "Slack" process:
https://zulipchat.com/help/import-from-slack
Also, there's a comment at the top of convert_hipchat_data.py
that describes how to test this in dev mode.
I tested this by following the steps in the comment above.
The users just "show up" in /devlogin, so that's nice, and
you can send messages to other users. To verify the stream
data you have to go into the gear menu and click on "All
Streams", then you can subscribe and send a message.
Production users will need to get new passwords and
re-subscribe to streams. We will probably auto-subscribe
all users to public streams.
This fixes an issue where passing a path like `~/exports/foo` would
result in a `~` directory being created and the export/import not
working correctly.
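The natural fix (hedged sketch) is to expand the user's home
directory before using the path:

    import os

    def canonicalize_path(path: str) -> str:
        # "~/exports/foo" -> "/home/user/exports/foo"; realpath also
        # normalizes any relative components.
        return os.path.realpath(os.path.expanduser(path))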
This flag is used to track which user/message pairs correspond to an
active mobile push notification, that should potentially be cleared
when the user reads the message.
This flag should never appear on a message that is also marked as
read; eventually we may want a cron job to check for that condition.
We include a partial index on UserMessage for this flag.
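In a Django migration of that era, a partial index had to be created
with raw SQL; a hedged sketch (the index name and the flag's bit
value are illustrative):

    from django.db import migrations

    class Migration(migrations.Migration):
        dependencies = [("zerver", "XXXX_previous_migration")]

        operations = [
            migrations.RunSQL(
                """
                CREATE INDEX zerver_usermessage_active_mobile_push_id
                ON zerver_usermessage (user_profile_id, message_id)
                WHERE (flags & 4096) != 0;
                """,
                reverse_sql="DROP INDEX zerver_usermessage_active_mobile_push_id;",
            ),
        ]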
The is_private flag is intended to be set if the recipient type is
'private' (1) or 'huddle' (3); otherwise, i.e. if it is 'stream' (2),
it should be unset.
This commit adds a database index for the is_private flag (which
we'll need in order to use it). That index is used to reset the flag
if it was
already set. The already set flags were due to a previous removal of
is_me_message flag for which the values were not cleared out.
For now, the is_private flag is always 0 since the really hard part of
this migration is clearing the unspecified previous state; future
commits will fully implement it actually doing something.
History: Migration rewritten significantly by tabbott to ensure it
runs in only 3 minutes on chat.zulip.org. A key detail in making that
work was to ensure that we use the new index for the queries to find
rows to update (which currently requires the `order_by` and `limit`
clauses).
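A hedged sketch of the batched backfill pattern described above (the
bit value and batch size are illustrative):

    from django.db.models import F

    IS_PRIVATE = 2048   # illustrative bit value
    BATCH_SIZE = 10000

    while True:
        # The new index makes this "find some flagged rows" query
        # cheap; order_by plus a slice keeps each transaction small.
        ids = list(
            UserMessage.objects
            .extra(where=["flags & %d != 0" % IS_PRIVATE])
            .order_by("id").values_list("id", flat=True)[:BATCH_SIZE])
        if not ids:
            break
        UserMessage.objects.filter(id__in=ids).update(
            flags=F("flags").bitand(~IS_PRIVATE))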
The only changes visible at the AST level, checked using
https://github.com/asottile/astpretty, are
zerver/lib/test_fixtures.py:
'\x1b\\[(1|0)m' ↦ '\\x1b\\[(1|0)m'
'\\[[X| ]\\] (\\d+_.+)\n' ↦ '\\[[X| ]\\] (\\d+_.+)\\n'
which is fine because re treats '\\x1b' and '\\n' the same way as
'\x1b' and '\n'.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
Significantly tweaked by tabbott because:
* Argparse was already handling the early checks
* Splitting the bottom loop into two loops means we validate all the
input before trying to run actual import code on anything.
* The argparse documentation was confusing about whether the paths
should be files or directories.
We extract the entire operation of the management command into a
function, create_if_missing_realm_internal_bots, in
zerver/lib/onboarding.py. The logic for determining whether any realm
internal bots have not yet been created is extracted into a function,
missing_any_realm_internal_bots, in actions.py.
We add a conditional infinite sleep to this delivery job as a means
to handle the case of multiple servers in service to a realm running
this job. In such a scenario, race conditions might arise, leading to
multiple deliveries of the same message. This way we match the
behaviour of other jobs in such a case.
Note: We should eventually do something to make such jobs work while
running on multiple servers.
This should help avoid confusing error messages for anyone
accidentally running this twice.
In particular, this also makes it easier to run Zulip inside
Kubernetes, since one doesn't need to worry about duplicate calls.
This fixes exceptions when sending PMs in development (where we were
trying to connect to the localhost push bouncer, which we weren't
authorized for, but even if we were, it wouldn't work, since there's
no APNS/GCM certs).
At the same time, we also set an order of operations that ensures one
has the opportunity to adjust the server URL before submitting
anything to us.
When you're importing with --destroy-rebuild-database, we need to
check subdomain availability after we've cleared out the database;
otherwise, trying to reuse the same subdomain doesn't work.
This logging was apparently broken when sorting imports; it's a fairly
unique thing in our codebase that this would be a problem. Prevent
future regressions by adding this exception explicitly to the isort
configuration.
Issue #2088 asked for a wrapper to be created for
`create_stream_if_needed` (called `ensure_stream`) for the 25 times that
`create_stream_if_needed` is called and ignores whether the stream was
created. This commit replaces relevant occurrences of
`create_stream_if_needed` with `ensure_stream`, including imports.
The changes weren't significant enough to add any tests or do any
additional manual testing.
The refactoring was intended to make the API easier to use in most cases.
The majority of uses of `create_stream_if_needed` ignored the second
parameter.
Fixes: #2088.
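The wrapper itself is tiny; a hedged sketch:

    def ensure_stream(realm: Realm, stream_name: str) -> Stream:
        # Same as create_stream_if_needed, for the many callers that
        # do not care whether the stream already existed.
        stream, _created = create_stream_if_needed(realm, stream_name)
        return stream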
We also delete a couple helper functions that were only used there.
This management command was primarily used before we had a UI for
creating outgoing webhook bots.
We no longer accept URLs when creating emoji, so this management
command was probably overlooked while migrating the realm emoji
infrastructure to the upload backend.
We could fix this to work properly today, but the command was
originally written in a context when Zulip didn't have a UI for
managing realm emoji at all. Now that we do have such a UI, it
doesn't have a compelling use case, and work on migrating the realm
emoji schema demonstrates that this does have a maintenance cost.
So, we simply remove this command.
The freshly imported data shows that users' emails are not included
in the data. However, the data received from the older Slack method
(using legacy tokens) does contain the users' email data.
This code duplicated the code in setup_realm_internal_bots, with some
added logic to avoid trying to create the same bot twice. That logic
was buggy so that it would never work at all -- it subtracted a set of
UserProfile objects from a set of email strings -- so it looked like
the command might blow up when run after the users already existed.
In fact, the buggy logic wasn't necessary, because the work the
command does after it is idempotent -- in particular `create_users`,
within its subroutine `bulk_create_users`, already filters out users
that already exist. So just cut the buggy stuff out, deduplicate the
rest with `setup_realm_internal_bots`, and document that invariant on
the latter.
While we're here, in the common case bail early without doing any
per-realm work in Python, since we're running this on every upgrade.
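For illustration, why the old dedup check was a no-op:

    # A set of model objects minus a set of strings removes nothing,
    # so `missing` always equals the full set of emails.
    existing_bots = set(UserProfile.objects.filter(realm=realm, is_bot=True))
    wanted_emails = {"reminder-bot@zulip.com"}  # illustrative
    missing = wanted_emails - existing_bots     # == wanted_emails, always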
The name `create_logger` suggests something much bigger than what this
function actually does -- the logger doesn't any more or less exist
after the function is called than before. Its one real function is to
send logs to a specific file.
So, pull out that logic to an appropriately-named function just for
it. We already use `logging.getLogger` in a number of places to
simply get a logger by name, and the old `create_logger` callsites can
do the same.
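The replacement is a helper along these lines (hedged sketch; the
name log_to_file matches the pattern described, but details may
differ):

    import logging

    def log_to_file(logger: logging.Logger, filename: str,
                    log_format: str = "%(asctime)s %(levelname)-8s %(message)s",
                    ) -> None:
        """Attach a file handler to an existing logger; nothing more."""
        handler = logging.FileHandler(filename)
        handler.setFormatter(logging.Formatter(log_format))
        logger.addHandler(handler)

    logger = logging.getLogger("zulip.example")  # name illustrative
    log_to_file(logger, "/var/log/zulip/example.log")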
Because calls to `create_logger` generally run after settings are
configured, these would override what we have in `settings.LOGGING` --
which in particular defeated any attempt to set log levels in
`test_settings.py`. Move all of these settings to the same place in
`settings.py`, so they can be overridden in a uniform way.
This is already the loglevel we set on the root logger, so this has no
effect -- except in tests, where `test_settings.py` attempts to set
some of these same loggers to higher loglevels. Because the
`create_logger` call generally runs after we've configured settings,
it clobbers that effect.
The code in `test_settings.py` that tries to suppress logs only works
because it also sets `propagate=False`, which has nothing to do with
loglevels but does cause logs at this logger (and descendants) to be
dropped completely unless we've configured handlers for this logger
(or one of its relevant descendants.)
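An illustrative reconstruction of the pattern in question:

    import logging

    logger = logging.getLogger("zulip.noisy")  # name illustrative
    logger.setLevel(logging.ERROR)  # later clobbered by create_logger
    logger.propagate = False        # this, not the level, is what
                                    # actually drops the records: with
                                    # no handlers attached here, they
                                    # never reach the root handlers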
These are the exceptions to the rule that our queues correspond to
queue-processor workers.
Purging `notify_tornado` in particular is a useful workaround right
now for some error spew in the dev environment.
Do you call get_recipient(Recipient.STREAM, stream_id) or
get_recipient(stream_id, Recipient.STREAM)? I could never
remember, and it was not very type safe, since both parameters
are integers.
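The type-safe replacements follow this pattern (hedged sketch; the
helper names follow the pattern Zulip adopted, but details may
differ):

    def get_stream_recipient(stream_id: int) -> Recipient:
        return Recipient.objects.get(type=Recipient.STREAM,
                                     type_id=stream_id)

    def get_personal_recipient(user_profile_id: int) -> Recipient:
        return Recipient.objects.get(type=Recipient.PERSONAL,
                                     type_id=user_profile_id)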
Before this change, we populated two cache entries for each
message that we sent. The entries were largely redundant,
with the only difference being whether we sent the content
as raw markdown or as the rendered HTML.
This commit makes it so we only have one cache entry per
message, and it includes both content and rendered_content.
One legacy source of confusion here is that `content`
changes meaning when you're on the front end. Here is the
situation going forward:
database:
content = raw
rendered_content = rendered
cache entry:
content = raw
rendered_content = rendered
payload for the frontend:
content = raw (for apply_markdown=False)
content = rendered (for apply_markdown=True)
Now we use 'git ls-files' to get the list of locales that we actually
track. Previously we were using os.listdir to get the contents of the
static/locale directory. This could also return locales which were
present in the directory but are not supported by us, e.g. zh_CN.
We have been assigning the locale to the language code. Mostly the
code and the locale are the same, but for languages like zh-Hans, the
locale is zh_Hans while the code is zh-hans.
After this commit, the compilemessages command should be run.
Previously we were using regexes to extract the language from our
locale files. Now we use LANG_INFO data structure provided by Django
to do the same, and fall back to PO files only when the language code is not
present in the Django data structure.
This should mean that maintaining two Zulip development environments
using the same Git checkout no longer has caching problems keeping
track of the migration status.
Previously, we used to mark a key as untranslated if its value in
translations.json was equal to the key. This had an issue, because it
didn't allow otherwise valid cases where the key was equal to the
value.
This commit solves the problem by disallowing an empty string as a
valid translation and then using the empty string as the value for
all the untranslated keys.
Fixes #5261
compilemessages command now does all the heavy lifting by creating a
language_name_map.json file under locale directory. This file is used
by get_language_list to retrieve the required information.
Fixes: #6486
The commit() call in fix() breaks migrations and tests (unless you
mock) due to outer transactions.
We now explicitly call commit() from the management command.
This commit completely switches us over to using a
dedicated model called MutedTopic to track which topics
a user has muted.
This includes the necessary migrations to create the
table and populate it from legacy data in UserProfile.
A subsequent commit will actually remove the old field
in UserProfile.
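A hedged sketch of the dedicated model (actual fields, lengths, and
constraints may differ):

    class MutedTopic(models.Model):
        user_profile = models.ForeignKey(UserProfile, on_delete=models.CASCADE)
        stream = models.ForeignKey(Stream, on_delete=models.CASCADE)
        topic_name = models.CharField(max_length=60)  # illustrative limit

        class Meta:
            unique_together = ("user_profile", "stream", "topic_name")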
The double forward slash (//) after the protocol in URLs was being
mistakenly considered the beginning of an inline JS comment, causing
internationalization strings to be cut unexpectedly.
Now the check for inline JS comments is only run in .js files.
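For illustration, the failure mode (regex and input are
hypothetical):

    import re

    naive_comment_re = re.compile(r"//.*$")
    line = 'var s = i18n.t("See https://zulipchat.com/help for details");'
    # The "//" inside the URL is treated as a comment start, so the
    # translatable string is cut off at "https:".
    print(naive_comment_re.sub("", line))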
This code empirically doesn't work. It's not entirely clear why, even
having done quite a bit of debugging; partly because the code is quite
convoluted, and because it shows the symptoms of people making changes
over time without really understanding how it was supposed to work.
Moreover, this code targets an old version of the APNs provider API.
Apple deprecated that in 2015, in favor of a shiny new one which uses
HTTP/2 to meet the same needs for concurrency and scale that the old
one had to do a bunch of ad-hoc protocol design for.
So, rip this code out. We'll build a pathway to the new API from
scratch; it's not that complicated.