We create a path structure of the form:
`/var/<uuid>/test-backend/run_1234567/worker_N/`
A settings attribute, `TEST_WORKER_DIR`, was created as a worker's
directory for a given `test-backend` run's file storage. The
appropriate path is created in `setup_test_environment`, while each
worker's subdirectory is created within `init_worker`.
This allows a test class to write to `settings.TEST_WORKER_DIR`,
populating the appropriate directory of a given worker. It also
provides the long-term approach to cleaning up filesystem access
in the backend unit tests.
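A rough sketch of the two steps (the helper bodies and the run-id derivation here are illustrative, not the actual implementation):

```python
import os
import uuid

from django.conf import settings

def setup_test_environment() -> None:
    # Run-level directory, e.g. /var/<uuid>/test-backend/run_1234567/
    # (using the pid as the run id purely for illustration).
    run_dir = os.path.join(
        "/var", str(uuid.uuid4()), "test-backend", "run_%s" % (os.getpid(),)
    )
    os.makedirs(run_dir, exist_ok=True)
    settings.TEST_WORKER_DIR = run_dir

def init_worker(worker_id: int) -> None:
    # Each worker gets its own subdirectory for file storage.
    worker_dir = os.path.join(settings.TEST_WORKER_DIR, "worker_%d" % (worker_id,))
    os.makedirs(worker_dir)
    settings.TEST_WORKER_DIR = worker_dir
```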
This will make it easier to have access to the stream creator.
The indirection also isn't really adding anything, especially given that the
announce message is inlined just above.
Modified by punchagan to:
* Add a separate markdown test for de-duplicating inline previews
* Check the number of unique URLs to see if the per-message limit is crossed
* Use a set for processed URLs instead of a list
Fixes #8379.
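A minimal sketch of the set-based de-duplication (the limit constant is assumed for illustration):

```python
INLINE_URL_EMBED_LIMIT = 5  # assumed per-message preview limit

def get_unique_urls_to_preview(urls):
    processed_urls = set()  # a set de-duplicates repeated URLs for free
    for url in urls:
        if len(processed_urls) >= INLINE_URL_EMBED_LIMIT:
            break  # per-message limit crossed; ignore further URLs
        processed_urls.add(url)
    return processed_urls
```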
The previous code for the validator test was fairly messy due to
checking for both formats of the openapi url, one with
<variable_name> and the other with {variable_name}. To eliminate
this, we have standardized the format and restricted it to
{variable_name} as per the official format at:
https://swagger.io/docs/specification/describing-parameters.
Add a new custom profile field type, External account.
The external account field links a user's social media
profile to their account, e.g. GitHub, Twitter, etc.
Fixes part of #12302
We can simply archive cross-realm personal messages according to the
retention policy of the recipient's realm. It requires adding another
message-archiving query for this case however.
What remains is to figure out how to treat cross-realm huddle messages.
This reverts commit 8f15884c7d. Using the
WITH ( ) ... DELETE method leads to a small performance drop, while
probably not offering many positives, so it seems appropriate to go to
the simpler case of just letting things get cleaned up by CASCADE.
The way the code changed in this commit was written caused Django to
fetch stream.realm from the database for every stream, leading to
redundant, identical queries. Each stream's realm is already known, so
we use that information.
In addition to the test which checks to see if each endpoint in
code (urls.py) is documented in the openapi documentation (and with
the right arguments), we now also have a test to see if every
endpoint in the openapi documentation is a legitimate endpoint
also existing in code.
We do this by piggy-backing on the work done by the former test and
using set operations. This method avoids the need for an extra loop
and uses set operations for additional speed and ease of reading.
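The core of the check is plain set arithmetic, roughly (function and message wording illustrative):

```python
def check_endpoint_coverage(urls_py_paths: set, openapi_paths: set) -> None:
    # Endpoints present in code but missing from the documentation:
    undocumented = urls_py_paths - openapi_paths
    # Endpoints documented but not actually present in code:
    bogus = openapi_paths - urls_py_paths
    assert not bogus, f"Documented endpoints not found in code: {bogus}"
```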
By importing a few view modules in the validation test itself we
can remove a few endpoints which were marked as buggy. What was
happening was that the view functions weren't imported and hence
the arguments map was not filled. Thus the test complained that
there was documentation for request parameters that seemed to be
missing in the code. Also, for the events register endpoint, we
have renamed one of the documented request parameters from
"stream" to "topic" (the API itself was not modified though).
We add a new "documentation_pending" attribute to req variables
so that any arguments that are not currently documented but should
be documented can be properly accounted for.
The conditional block containing the tarball upload logic for both S3
and local uploads was deconstructed and moved to the more appropriate
location within `zerver/lib/upload.py`.
This change is preliminary refactoring in order to improve the test
mocking strategy related to `test_realm_export.py`.
This allows us to simply mock a return value from
`do_export_realm`. We can then use that value as a dummy url to
ensure a file has been served and can be retrieved.
Duplicate handling when INSERTing is switched from the "LEFT JOIN ... id IS
NULL" approach to "ON CONFLICT (id) DO NOTHING", since we now have
postgresql 9.5. The ON CONFLICT approach is more natural, as well as
potentially faster.
We don’t need a hacked copy anymore. We run the installed version out
of node_modules in development, and a Webpack-bundled version of that
in production.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This will allow us to mark a REQ variable as intentionally
undocumented. With this, we can remove some of the endpoints marked
as "buggy" even though they're not actually buggy, we just needed to
specify certain parameters as intentionally undocumented (e.g. the
stream_id for the /users/me/subscriptions/muted_topics endpoint.)
Any REQ variable with intentionally_undocumented set to True
will not be added to the arguments_map data structure.
For some of the other "buggy" endpoints, we would want to mark the
entire endpoint as intentionally undocumented via the urls.py
file.
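For illustration, a view might use the flag roughly like this (the view name and parameter details here are illustrative):

```python
from django.http import HttpRequest, HttpResponse

from zerver.lib.request import REQ, has_request_variables
from zerver.lib.validator import check_int

@has_request_variables
def update_muted_topic(
    request: HttpRequest, user_profile,
    # Skipped by the documentation test, rather than flagged as buggy:
    stream_id: int = REQ(validator=check_int, intentionally_undocumented=True),
) -> HttpResponse:
    ...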
As of commit cff40c557b (#9300), these
files are no longer served directly to the browser. Disentangle them
from the static asset pipeline so we can refactor it without worrying
about them.
This has the side effect of eliminating the accidental duplication of
translation data via hash-naming in our release tarballs.
This reverts commit b546391f0b (#1148).
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
We had gotten to the right documented content, but the previous
chaining of object creation/deletion wasn't quite right.
Fix this in a way that reduces the amount by which tests are dependent
on what other tests are doing.
This is a dramatic redesign of the look and feel of our missed-message
emails, designed to decrease the feeling of clutter and just provide
the content users care about in a clear, visible fashion.
This cleans up the reply_warning feature in favor of a more coherent
explanation of whether or not one can reply.
(Also, critically, it now advertises the ability to enable
missed-message email replies with some administrative configuration
work.)
We reuse the link regexes we use elsewhere in markdown
for parsing links in topic names and add a button to open
them in new tabs, similar to our behavior with linkifiers
in topic names.
Fixes #12391.
When archiving Messages, we stop relying on LEFT JOIN ... IS NULL to
avoid duplicates when INSERTing. Instead we use ON CONFLICT DO UPDATE
(added in postgresql 9.5) to, in case of archiving a Message that
already has corresponding archived objects (this happens if a Message
gets archived, restored and then archived again), re-assign the existing
ArchivedMessage to the new transaction.
This also allows us to fix test_archiving_messages_second_time, which
was temporarily disabled a few commits before.
We combine run_message_batch_query and run_archiving_in_chunks
functions, which makes the code simpler and more readable - we get rid
of hacky generator usage, for example.
In the process, move_expired_messages_* functions are adjusted, and now
they archive Messages as well as their related objects.
Appropriate adjustments in reaction to this are made in the main
archiving functions which call move_expired_messages_* (they no longer
need to call move_related_objects_to_archive).
Instead of having a bunch of custom code in the function, we make it use
run_message_batch_query and run_archiving_in_chunks to do the necessary
operations in a consistent way, using the same codepaths as the rest of
the archiving system.
This breaks test_archiving_messages_second_time temporarily, but we will
fix it and re-enable the test in the next commits, where we'll address
various other issues with re-archiving of messages.
We also remove the @transaction.atomic wrapper, because atomicity is
handled by the logic inside run_archiving_in_chunks.
We add a new model, ArchiveTransaction, to tie archived objects together
in a coherent way, according to the batches in which they are archived.
This enables making a better system for restoring from archive, and it
seems just more sensible to tie the archived objects together in this
way, rather than the somewhat vague setting of archive_timestamp on
each object using timezone_now().
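A rough sketch of the model relationship (fields illustrative, not the exact schema):

```python
from django.db import models
from django.utils.timezone import now as timezone_now

class ArchiveTransaction(models.Model):
    timestamp = models.DateTimeField(default=timezone_now, db_index=True)

class ArchivedMessage(models.Model):
    # Each archived object points at the transaction it was archived in,
    # rather than carrying its own archive_timestamp.
    archive_transaction = models.ForeignKey(ArchiveTransaction, on_delete=models.CASCADE)
```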
For storing HTTP headers as a function of fixture name, previously
we required that the fixture_to_headers method should reside in a
separate module called headers.py.
However, as in many cases this method will only take a few lines,
we decided to move this function into the view.py file of the
integration instead of requiring a whole new file called headers.py.
This commit introduces this small change to the system architecture,
migrates the GitHub integration, and updates the docs accordingly.
In the GitHub integration we established that for many integrations,
we can directly map the fixture filename to the set of required
headers, and by following a simple naming convention we can greatly
simplify the logic required in the fixture_to_headers method.
So to prevent the need for duplicating the logic used by the GitHub
integration, we created a method called `get_http_headers_from_filename`
which will take the name of the HTTP header (key) and then return a
corresponding method (in a decorator-like fashion) which could then be
equated to fixture_to_headers in headers.py.
The GitHub integration was modified to use this method and the docs
were updated to suggest using this when possible.
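A sketch of the decorator-like helper (the naming convention shown here, splitting the fixture name on "__", is an assumption for illustration):

```python
from typing import Callable, Dict

def get_http_headers_from_filename(http_header_key: str) -> Callable[[str], Dict[str, str]]:
    def fixture_to_headers(filename: str) -> Dict[str, str]:
        # By the assumed convention, a fixture named "event__rest.json"
        # carries the part before "__" as the header value.
        if "__" in filename:
            event_type = filename.split("__")[0]
            return {http_header_key: event_type}
        return {}
    return fixture_to_headers

# In the integration's view.py:
fixture_to_headers = get_http_headers_from_filename("HTTP_X_GITHUB_EVENT")
```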
When parsing custom HTTP headers in the integrations dev panel, http
headers from fixtures system and the send_webhook_fixture_message
we now use a singular source of logic: standardize_headers which
will take care of converting a dictionary of input headers into a
standard form that Django expects.
Using this system, we can now associate any fixture of any integration
with a particular set of HTTP headers. A helper method called
determine_http_headers was introduced, and the test suite was upgraded
to use determine_http_headers.
Comments and documentation significantly edited by tabbott.
This function is an alternative to get_admin_users that we use in all
places where we explicitly want only human administrative users (not
administrative bots). The following commits will rename
get_admin_users for better clarity.
Our recently-added code for rewriting user IDs on data import didn't
correctly handle wildcard mentions and mentions generated by very old
versions of Zulip (pre data-user-id).
The previous query ended up doing an awkward join that did not
guarantee use of the Recipient index on zerver_message, turning a very
fast query into something that could take much longer for a single
stream than the rest of the import combined.
We also document support for user IDs in the pm-with narrow operator.
Edited by tabbott to document on /api rather than in the /help page.
Fixes part of #9474.
Namely, here we add the "plan_includes_wide_organization_logo" and
"upgrade_text_for_wide_organization_logo" to the page_params (which
is set in zerver/lib/events.py).
"plan_includes_wide_organization_logo" is True if the plan is not of
the Realm.LIMITED type. We need to add this extra boolean parameter
instead of just using "realm_plan_type" to make things a lot easier
to work with on the frontend side, especially considering that
handlebars won't allow checking for equality in its {{#if}} blocks.
When a realm's plan type is updated using "do_change_plan_type" we
notify active users of the realm. This way certain plan features
could be enabled instantaneously for active users.
A function was written in `test_fixtures.py` to drop a test database
template if the corresponding database id doesn't belong to a file.
In addition, every file that is written is removed after 60
minutes, meaning no potential database template can exist for
longer than one hour.
This follow-up work was added to deal with the potential race
conditions when running `test-backend`, ensuring that all templates
are properly dealt with.
Essentially rewritten by tabbott for cleanliness.
Fixes the remainder of #12426.
The ids that will be used for each particular run of the test suite are
written to a unique file. Each file will then be used as a time
reference of when the suite was run.
This change sets up the ability for a complete clean up of potentially
leaked database templates.
Tweaked by tabbott to remove these files after successful database
cleanup.
When running the test-backend suite in serial mode, `destroy_test_db`
double appends the database id number to the template if passed an
argument for `number`. The comment here explains this behavior.
This fixes an issue that caused LDAP synchronization to fail for
avatars. The problem occurred due to the lack of a 'name' attribute
on the BytesIO object that we pass to the upload backend (which is
only used in the S3 backend for computing Content-Type).
Fixes #12411.
Rather than relying on the CASCADING property of the ForeignKey to the
Message table to clean up these objects, we delete them in the same
query as we archive them - since it's guaranteed that any of these
objects that we archive will be deleted due to their Message being
deleted later.
We don't have this guarantee for Attachment objects, which is why we
can't apply this scheme to them.
To ensure the database retains a consistent state if archiving gets
interrupted, we process each Messages chunk together with related
objects in a single atomic transaction.
We had two duplicate functions for archiving zerver_attachment_messages
rows, doing the same thing - archiving by message_id. One of them had a
redundant INNER JOIN, so we get rid of that too.
Since we loop over realms in the functions for archiving stream messages
and then personal+huddle messages, and also want to split cleaning up
attachments by realm - it makes sense to do it all in one single loop.
Rename notification property `enable_stream_sounds` to
`enable_stream_audible_notifications` to match with other
notification property patterns.
Fixes part of #12304
We batch queries that archive Messages, to limit the maximum amount of
Message objects archived in a single query. This leads to the archiving
of other related objects being batched as well, because we loop over
chunks of archived messages and archive their related objects per-chunk.
N = self.parallel templates are created, and these templates were
previously named 'zulip_test_template_<1, N>'. However, to support
running multiple instances of `test-backend`, a unique
`random_id_range_start` was created for each template database.
Previously this was not a problem because the templates would simply be
used again and thus did not require any clean up. Now that there are
unique database names being created, every time `test-backend` is run
these templates can accumulate on disk. Instead, we clean up our
templates at the end of every complete run of the test suite, or upon a
SIGINT.
Fixes: #12426
This validation is incomplete, in large part because of the long list
of TODOs in this code. But this test should provide a ton of support
for us in avoiding regressions as we work towards having complete API
documentation.
See https://github.com/zulip/zulip/issues/12521 for a bunch of
follow-up improvements.
We add the following behavior:
If stream has message_retention_days set to -1, archiving for it is
disabled.
If stream has message_retention_days set to null, use the realm's
policy. If the realm has no policy, we don't archive for this stream.
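A sketch of that resolution logic (function name illustrative):

```python
from typing import Optional

def stream_message_retention_days(stream, realm) -> Optional[int]:
    if stream.message_retention_days == -1:
        return None  # archiving explicitly disabled for this stream
    if stream.message_retention_days is not None:
        return stream.message_retention_days
    # Fall back to the realm's policy; None means no realm policy either,
    # so we don't archive for this stream.
    return realm.message_retention_days
```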
UserMessages no longer need special handling, they can be archived by
move_models_with_message_key_to_archive and automatically cleaned up
like the other models with a message key with CASCADING=True.
We change the archiving scheme to allow having stream based retention
policies. In the first step of the archiving process, we loop over
streams and archive their expired messages and related objects.
Then we separately archive all expired personal and huddle messages and
related objects. As the last step, we scan for redundant attachments
which can now be deleted.
To achieve this, we have to rewrite a significant portion of the
retention code and rework some of the database queries.
For the sake of simplicity, we neither archive nor delete cross-realm
messages, except cross-realm stream messages – in their case they can
be processed in the same manner as ordinary stream messages.
In the query for archiving personal and huddle messages we simply
exclude those sent by cross-realm bots.
We change the tests to adapt to these modifications.
Since we archive attachments and attachment_messages tied to a list of
ids of Messages that we just archived (so from the current realm), it's
unnecessary to check their realm in the queries. This could potentially
cause archiving of an attachment with the realm_id of another realm, but
that isn't an issue as long as we make sure we don't end up deleting
the original Attachment object incorrectly - and the realm_id check
included in delete_expired_attachments() ensures that.
Previously, we didn't have validation to prevent editing certain flags
that don't make sense for a client to edit, like whether a user was
mentioned in a given message.
This isn't a security issue -- the user could only mess up their own
personal search results (etc.), but it does seem worth fixing to avoid
confusion for folks developing Zulip clients.
While we're at it, clearly document the situation in comments.
This adds a setting to control Zulip's default behavior of sorting to
bottom and graying out inactive streams. The previous logic is still
the default "automatic", but this gives users more control. See the
models.py comment for details.
Fixes #11524.
We were apparently reusing the path for both the development and test
databases, which meant that we would not always correctly run
`generate_fixtures` when changes were required.
This was a recent regression introduced when we added this cache a few
days ago.
We add RETURNING to fetch relevant message and usermessage ids in
archiving queries and use them to make other queries simpler and faster.
A side-effect of this implementation is that with cross-realm messages,
the UserMessage of the recipient and the Message will not be deleted -
but cross-realm messages are rare, will still get correctly put in the
archive tables and so failing to delete should not be a problem for now.
They will be fully handled later.
zerver_archivedmessage is already INNER JOIN-ed earlier in the query, so
we check the pub_date in it, instead of joining zerver_message, which
would just redundantly join the analogous rows.
The lxml parser appends html and body tags to the soup object which
are not required. There are no other major parsing differences between
the two parsers, as long as the HTML input is perfectly formatted.
The lxml parser is much faster than html.parser, but it hardly matters
in our case.
https://www.crummy.com/software/BeautifulSoup/bs4/doc/#differences-between-parsers
In addition to the "+show-sender" option, we now add "+include-footers"
which disables stripping of the footer from the email body if this token
is included in the email address.
To enable a comfortable way of adding more optional tokens in the
address (like current '+show-sender') we change decode_email_address to
return a general dictionary containing options specified through adding
these optional tokens in the To: address. For now, we only have
"+show-sender", but more can be easily added using this change.
Instead of running `what_to_do_with_migrations` unconditionally, we
first hash and compare the files located in `*/migrations/*`. Only if
a migration file has changed (or the hash file does not exist yet) do we
call `what_to_do_with_migrations`.
It was discovered that the call to Django's `showmigrations.py` file was
causing roughly a 500ms increase in `test-backend`'s start up time.
However, this fix only saves about 100ms, apparently because a lot of
that work was importing Django dependencies we need for most tests
anyway.
Fixes: #12428.
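A minimal sketch of the hashing step (the paths and digest storage details are illustrative):

```python
import glob
import hashlib

def migration_paths_digest() -> str:
    sha = hashlib.sha1()
    # Hash every migration file, in a stable order.
    for path in sorted(glob.glob("*/migrations/*.py")):
        with open(path, "rb") as f:
            sha.update(f.read())
    return sha.hexdigest()

# Only call what_to_do_with_migrations() if this digest differs from the
# one recorded on the previous run (or if no recorded digest exists yet).
```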
Ensure that the html is safe before using it. The html is considered safe if it
is in an iframe with an http/https src, based on the recommendations here:
https://oembed.com/#section3
We directly embed the `iframe` html into the lightbox overlay.
We add general code that will archive models that are tied to a specific
Message (such as Reactions and SubMessages). Certain details of the
model are grabbed from a list models_with_message_key, and then used to
create queries that will archive these database tables.
We put Reaction in that list in this commit, and add appropriate tests.
To have archiving of other analogical models (for example SubMessage),
one only needs to make an appropriate entry in the
models_with_message_key list.
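For illustration, an entry in that list might look like this (the exact keys are an assumption based on the description above):

```python
from zerver.models import ArchivedReaction, Reaction

models_with_message_key = [
    {
        "class": Reaction,
        "archive_class": ArchivedReaction,
        "table_name": "zerver_reaction",
        "archive_table_name": "zerver_archivedreaction",
    },
    # An entry for SubMessage (or any other model tied to a Message)
    # would follow the same shape.
]
```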
Sometimes it's useful to run two copies of test-backend at the same
time. The problem with doing so is that we need to make sure no two
threads are using the same test database ID.
Previously, this worked only if at most one of those copies was
running in the single-threaded mode, because we used a random database
ID for the single-threaded code path, but the same IDs counting from 0
for the parallel code path.
Fix this, mostly, by generating a random start for the range of IDs
used by the process, and then counting off database IDs starting from
there (both in the parallel and non-parallel modes).
There's still a very low probability race, see the TODO.
Additionally, there appear to be some other races with running two
copies of test-backend at the same time not related to the database.
See https://github.com/zulip/zulip/issues/12426 for a follow-up issue
that's sorta created by this.
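A sketch of the ID allocation (names illustrative):

```python
import random

# One random starting point per test-backend process.
random_id_range_start = random.randint(1, 10000000)

def database_id_for_worker(worker_index: int = 0) -> int:
    # Both the parallel and non-parallel paths count off from the same
    # random start, so two concurrent runs are unlikely to collide.
    return random_id_range_start + worker_index
```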
The test-backend parallel test runner system doesn't actually use the
zulip_test database; instead, it creates its own databases off the
zulip_test_template database.
We were accidentally running `tools/generate_fixtures` even when there
are no changes, because this function is shared with the
tools/lib/test_server.py codebase, which needs us to do the work of
creating a test database for it off the zulip_test_template database.
Fixing this saves about 1.5s / 4s of the runtime of a single test.
Previously, if you exported a Zulip organization and then re-imported
it, we'd end up renumbering the user IDs and all direct foreign key
references to them in the database, but not the data-user-id
references in mentions. Fix this by parsing the message content and
doing that renumbering.
(Because we import raw markdown, not HTML, from third-party tools,
these changes won't affect data import from slack etc.)
Fixes the high-priority part of #11293.
Modifies the dict with the user info to include the key `bot_owner_id`
so it can be displayed in the user info popover.
Tests concerned with changing bot owner have been modified to have
number of events=2 because while updating the bot info, two events
are fired -- updating the `realm_bot` and `realm_user` since the
key `bot_owner_id` is a part of realm user info.
A unique path was created using the `LOCAL_UPLOADS_DIR` backend, similar
to the code used in `LocalUploadBackend`. The exported tarball was
copied to the directory, and an nginx url was created to serve the file
publicly.
Tweaked by tabbott to output an actual URL.
This cleans up the pattern for how we check which user is logged in
during Zulip's backend unit tests to be much more readable (replacing
the arcane session code that does this check).
We split archive_messages code into two functions: moving to archive and
cleanup. This allows cleaning up the tests - they can call
these functions directly instead of copying several lines of
archive_messages here and there in multiple tests.
This is probably a good idea for the production use case, since then
there's some consistency of behavior, and if we extend logging, one
knows exactly which realms were or were not executed before a logged
failure.
This fixes the nondeterministic test failures we've been seeing in CI:
if you use `-id` in that order_by, it happens consistently.
The upload option will no longer be limited to strictly S3 uploads. This
commit serves as a preliminary step for supporting LOCAL_UPLOADS_DIR as
part of the public only export feature.
log_and_report and its helper functions were mostly old code no longer
well adapted to how the email mirror works currently, as well as having
no test coverage. We rewrite this part of the email mirror to report
errors in a similar manner, and add tests for it. We're able to get rid
of the clunky and now useless debug_info dictionary in process_message,
as log_and_report only needs the recipient email in its third argument.
The only place in which process_stream_message used debug_info was to
set the 'stream' key, which would only be used if ZulipEmailForwardError
was raised after this line in the code - which is impossible, because after
that line only send_zulip (which doesn't raise this exception) and
logger.info get called, then process_stream_message successfully returns
and then process_message successfully returns as well. So this debug_info
code wasn't doing anything. We remove it.
Clients won't have access to user email addresses, and thus won't be
able to compute gravatars.
The tests for this are a bit messy, in large part because our tests
for get_events call subsections of it, rather than the main function.
This is a very old commit for #106, which has been on hiatus for a few
years. It was significantly modified by tabbott to:
* Improve coding style and variable names
* Update mypy annotations style
* Clean up the testing logic
* Update for API changes elsewhere in our system
But the actual runtime code is essentially unmodified from the
original work by Kirill.
It contains basic support for archiving Messages, UserMessages, and
Attachments with a nice test suite. It's still not usable in
production (e.g. it will probably break Reactions, SubMessages, etc.),
but upcoming commits will address that.
This is handy for code that needs to do something with the sent
message. We need it for a retention policy code path, but it seems
likely we'll use it a lot down the line.
This lets us handle directly in our tooling the user experience that
we document for exporting a realm with member consent (before, it
required unpleasant manual work).
We may be successfully able to get the page once, to get the content type, but
the server or network may go down and cause problems when fetching the page for
parsing its meta tags.
Currently, we only show previews for URLs which are HTML pages, which could
contain other media. We don't show previews for links to non-HTML pages, like
pdf documents or audio/video files. To verify that the URL posted is an HTML
page, we verify the content-type of the page, either using server headers or by
sniffing the content.
Closes #8358
`youtube.com/playlist?list=<list-id>` incorrectly matches the regex since the
change in 8afda1c1bb. The regex was modified to
match URLs of the form `youtu.be/<id>` and this playlist URL incorrectly matches
with the `<id>` set to `playlist`.
This commit avoids this match by verifying that the ID is not `playlist`.
This renames Subscription.in_home_view field to is_muted, for greater
clarity as to what it does just from seeing the setting name, without
having to look it up.
Also disabled an obsolete test_migrations test.
Fixes #10042.
Digest emails were disabled for soft deactivated users, since UserMessage
objects are created for such users lazily when they return.
We now compute the message list for gathering hot conversations by looking at
all the messages sent to the streams where the user is subscribed, while they
were subscribed.
Fixes #6297
If the text part of an email message didn't specify the charset in the
Content-Type header, the text content wouldn't be found. We fix this, by
assuming us-ascii charset in those cases, as specified by RFC6657:
https://tools.ietf.org/html/rfc6657
This commit migrates the Subscription's notification fields from a
BooleanField to a NullBooleanField where a value of None means to
inherit the value from user's profile.
Also includes a migration to set the corresponding settings to None
if they match the user profile's values. This migration helps us in
getting rid of the weird "Apply to all" widget that we offered on
subscription settings page.
The mobile apps can't handle None appearing as the stream-level
notification settings, so for backwards-compatibility we arrange to
only send True/False to the mobile apps by applying those defaults
server-side. We introduce a notification_settings_null value within a
client_capabilities structure that newer versions of the mobile apps
can use to request the new model.
This mobile compatibility code is pretty effectively tested by the
existing test_events tests for the subscriptions subsystem.
Currently there's no way to tell the difference between "a server admin
deactivated a realm due to it being spammy" vs "a realm admin deactivated
the realm".
Due to my misreading the code and a sloppy search, I thought in
8218bf101c that
all_stream_subscription_logs didn't filter for streams.
While changing this, we'll switch to using `.modified_stream_id` for
potentially better performance.
This makes the implementation of `get_realm` consistent with its
declared return type of `Realm` rather than `Optional[Realm]`.
Fixes#12263.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This commit replaces the `create_stream_by_admins_only` setting with a
new `create_stream_policy` setting, which mirrors the structure of
the existing `invite_to_stream_policy`.
This is important preparation for migrating the waiting period feature
to be its own independent setting.
Fixes#12236.
Running the backend tests with a high number of processes can cause
unexpected errors with language changes. When certain tests that change
the default language (without explicitly overriding the teardown method
to reset the default language) interleave with other tests that are
expecting the language to be in English, discrepancies arise.
This fixes a common nondeterministic test failure with high levels of
parallelization.
If a soft deactivated user had a subscription double-toggled without
any new messages being sent in between, add_missing_messages might
incorrectly process those two subscription changes in the wrong order.
Fortunately, the failure mode was usually to throw this exception:
django.db.utils.IntegrityError: duplicate key value violates unique
constraint "zerver_usermessage_user_profile_id_message_id_4936d0df_uniq"
DETAIL: Key (user_profile_id, message_id)=(4, 57) already exists.
Our unit tests actually had this precise setup some fraction of the
time, because a bit of the test setup code subscribed+unsubscribed the
target user without sending any messages in between, resulting in a
test failure something like 50% of the time.
The original exception was hard to reproduce reliably originally
(resulting in an extremely annoying nondeterministic test failure), but
is easily reproducible by changing the "id" to "-id" in this change to
always mis-order the processing of those RealmAuditLog events.
Previously, our soft-deactivation logic incorrectly did not filter the
set of stream subscription changes to look at to only include the
target stream.
This could result in unspecified buggy behavior.
Break will do the same thing as continue here, as each iteration will
have the same result, and it's also worth explaining why this isn't
one layer up in the loop setup.
We should definitely be starting each test case with an empty copy of
the per-request caches, since their intended duration is even shorter
than a request.
This was masked by the fact that these caches are automatically
flushed when one makes an actual request to the Zulip API; so the
problems were only manifesting in tests like test_events, where we
call lower-level functions that access a per-request cache without
using the Zulip API.
The make_import_output_dir helper function used a path determined
primarily by the filename of the fixture being used, and expected to
have complete control over that path for the duration of the test.
This resulted in nondeterministic errors if our two test classes that
ran Mattermost import code ran at the same time.
This module is used to render the HTML of pages like our user documentation
into text for use in open graph previews of those articles. It provided somewhat
confusing output in the case that there were paragraph breaks in the original message,
because text with multiple paragraphs and list items doesn't read very well. This commit
adds `|` as a delimiter between paragraphs, and prefixes list items with a `*`.
Closes #12228
When an emoji is nested inside another inline tag - like em or strong -
it was getting double processed because of the way the inlinePattern
TreeProcessor runs (it runs recursively). With this fix, we set the
inner text of the emoji span as an AtomicString, preventing us from
double processing the emoji's text.
Fixes #11621
Test Plan:
* Add test case for **😄**, verify it passes.
* Go into local dev server and send "**😄**" to self and verify the DOM
does not have double <span> tags for the emoji.
* Run zerver.tests.test_push_notifications and verify the markdown test case matches
the text_content field properly
We create rate_limit_entity as a general rate-limiting function for
RateLimitedObjects, built from code that could be abstracted away from
rate_limit_user and that will be used for other kinds of rate limiting.
We make rate_limit_user use this new general framework from now on.
This commit creates a new organization setting that determines whether
a user can invite other users to streams. Previously this was linked
to the waiting period threshold, but this was both not documented and
overly limiting.
With significant tweaks by tabbott to change the database model to not
involve two threshold fields, edit the tests, etc.
This requires follow-up work to make the create stream policy setting
work how this code implies it should.
Fixes#12042.
Allow realms to specify the day of the week when the digest should be sent out.
When enqueueing digests, pick only the realms that chose the current weekday as
the day to send out digests.
The github-services model for how GitHub would send requests to this
legacy integration is no longer available since earlier in 2019.
Removing this integration also allows us to finally remove
authenticated_api_view, the legacy authentication model from 2013 that
had been used for this integration (and other features long since
upgraded).
A few functions that were used by the Beanstalk webhook are moved into
that webhook's implementation directly.
An endpoint was created in zerver/views. Basic rate-limiting was
implemented using RealmAuditLog. The idea here is to simply log each
export event as a realm_exported event. The number of events
occurring in the time delta is checked to ensure that the weekly
limit is not exceeded.
The event is published to the 'deferred_work' queue processor to
prevent the export process from being killed after 60s.
Upon completion of the export the realm admin(s) are notified.
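A rough sketch of the rate limit check (the event-type name, weekly cap, and error message are illustrative):

```python
from datetime import timedelta

from django.utils.timezone import now as timezone_now

from zerver.lib.exceptions import JsonableError
from zerver.models import RealmAuditLog

def check_export_rate_limit(realm, limit: int = 5) -> None:
    week_ago = timezone_now() - timedelta(days=7)
    # Count realm_exported events logged within the time delta.
    event_count = RealmAuditLog.objects.filter(
        realm=realm,
        event_type="realm_exported",  # assumed event-type name
        event_time__gte=week_ago,
    ).count()
    if event_count >= limit:
        raise JsonableError("Exceeded the weekly export limit.")
```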
This reverts commit fd9dd51d16 (#1815).
The issue described does not exist in Python 3, where urllib.parse now
_only_ accepts (Unicode) str and does the right thing with it. The
workaround was not being triggered and would have failed if it were.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This contains the email of the user to whom the notification is being
sent. It has not been used in any past mobile releases, so it is
safe to remove it.
Since user_id is stable for a user while email is not, it's better for
mobile to start consuming `user_id` instead of email.
Calls to `render_markdown_path` weren't getting cached since the context
argument is unhashable, and the `ignore_unhashable_lru_cache` decorator ignores
such calls. This commit adds a couple more decorators - one which converts
dict arguments to the function to a dict items tuple, and another which converts
dict items tuple arguments back to dicts. These two decorators used along with
the `ignore_unhashable_lru_cache` decorator ensure that the calls to
`render_markdown_path` with the context dict argument are also cached.
The time to run zerver.tests.test_urls.PublicURLTest.test_public_urls drops by
about 50% from 8.4s to 4.1s with this commit. The time to run
zerver.tests.test_docs.DocPageTest.test_doc_endpoints drops by about 20% from
3.2s to 2.5s.
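A sketch of what such a decorator pair could look like (modeled on the description above, not necessarily the exact implementation):

```python
import functools

def dict_to_items_tuple(func):
    """Convert dict positional args into hashable items tuples before calling."""
    @functools.wraps(func)
    def wrapper(*args):
        new_args = tuple(
            tuple(arg.items()) if isinstance(arg, dict) else arg for arg in args
        )
        return func(*new_args)
    return wrapper

def items_tuple_to_dict(func):
    """Convert items-tuple positional args back into dicts for the wrapped function."""
    def is_items_tuple(arg):
        return isinstance(arg, tuple) and all(
            isinstance(item, tuple) and len(item) == 2 for item in arg
        )

    @functools.wraps(func)
    def wrapper(*args):
        new_args = tuple(dict(arg) if is_items_tuple(arg) else arg for arg in args)
        return func(*new_args)
    return wrapper
```

The stacking order matters: dict-to-tuple wraps the cache on the outside and tuple-to-dict sits inside it, so the cache only ever sees hashable arguments while the rendering function still receives real dicts.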
This fixes an issue where a hanging unordered list was not
rendering in a blockquote; the problem was that we were not
adding an empty line (to satisfy the markdown) for a hanging
unordered list inside a blockquote. Both blockquote
and code block are fenced, but we want to avoid rendering
the list only when it's in a code block, not when it's in a blockquote.
Fixes: #11916.
This commit serves as the first step in supporting "public export" as a
webapp feature. The refactoring was done as a means to allow calling
the export logic from elsewhere in the codebase.
This is important because upcoming features will include slightly more
complex logic in post_process_state that we'd ideally like to be
included in what this suite tests.
This requires a few related changes:
* A small change to post_process_state to sort the realm_users objects
by user_id to ensure those data structures are stable.
* Improvements to the logic for checking if the initial state has
changed to use match_states for better output.
Extend the list of users that have to be notified when a message is
changed, so that in addition to users who have a UserMessage row, any
users who subscribed later to a stream with history public to
subscribers will also get the update.
Fixes: #8750.
This adds experimental support in /register for sending key
statistical data on the last 1000 private messages that the user is a
participant in. Because it's experimental, we require developers to
request it explicitly in production (we don't use these data yet in
the webapp, and it likely carries some perf cost).
We expect this to be extremely helpful in initializing the mobile app
user experience for showing recent private message conversations.
See the code comments, but this has been heavily optimized to be very
efficient and do all the filtering work at the database layer so that
we minimize network transit with the database.
Fixes #11944.
This is in response to a support ticket where the user had a closed left
sidebar, had added an organization, and then couldn't figure out how to
switch organizations. They had googled and found "The desktop app makes it
easy to switch between different organizations" in our help docs, which was
not sufficiently helpful.
Previously, we could 500 if an organization administrator scanned
possible PreregistrationUser IDs looking for a valid invitation they
can interact with.
They couldn't do anything, so no security issue, but this fixes that
case to just be a 400 error as it should be.
With the previous commit, fixes #1836.
As specified in the issue above, we make
get_email_gateway_message_string_from_address raise an exception if
it doesn't recognise the email gateway address pattern. Then, we make
appropriate adjustments in the codepaths which call this function.
These functions don't really belong in actions.py, so we move them out,
into email_mirror_helpers.py. They can't go directly into
email_mirror.py or we'd get circular imports resulting in ImportError.
The hope is that by having a shorter list of initial streams, it'll
avoid some potential confusion about the value of topics.
At the very least, having 5 streams each with 1 topic was not a good
way to introduce Zulip.
This commit minimizes changes to the message content in
`send_initial_realm_messages` to keep the diff readable. Future commits will
reshape the content.
There were several problems with the old format:
* The sender field was not necessarily the message's sender; it was the
person who did the deletion (which could be an organization administrator)
* It didn't include the ID of the sender, just the email address.
* It didn't include the recipient ID, instead having a semi-malformed
recipient_type_id under the weird name recipient_user_ids.
Since nothing was relying on the old behavior, we can just fix the
event structure.
Closes #2420
We add rate limiting (max X emails within Y seconds per realm) to the
email mirror. By creating RateLimitedRealmMirror class, inheriting from
RateLimitedObject, and rate_limit_mirror_by_realm function, following a
mechanism used by rate_limit_user, we're able to have this
implementation mostly rely on the already existing, and proven over
time, rate_limiter.py code. The rules are configurable in settings.py in
RATE_LIMITING_MIRROR_REALM_RULES, analogously to RATE_LIMITING_RULES.
Rate limit verification happens in the MirrorWorker in
queue_processors.py. We don't rate limit missed message emails, as due
to using one time addresses, they're not a spam threat.
test_mirror_worker is adapted to the altered MirrorWorker code and a new
test - test_mirror_worker_rate_limiting is added in test_queue_worker.py
to provide coverage for these changes.
Fixes #9840.
Old addresses caused bugs in some cases with non-latin characters in
stream names (see issue number above). We switch to using django's
slugify helper function to convert stream names to full ascii, while
also getting rid of problematic non-alphanumeric characters, in a
reasonable way. See Django's documentation for slugify to see more about
how this function works.
Tests extended by tabbott to cover cases where we do end up with ascii.
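For illustration, roughly how slugify can be used when encoding the stream-name part of an address (the exact address layout here is an assumption):

```python
from django.utils.text import slugify

def stream_address(stream_name: str, email_token: str, domain: str) -> str:
    # slugify lowercases, transliterates to ASCII, and replaces runs of
    # non-alphanumeric characters with hyphens.
    return f"{slugify(stream_name)}+{email_token}@{domain}"
```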
To prepare for changing how the stream name gets encoded into mirror
email addresses while making sure old addresses keep working, we ignore
the stream_name part when receiving emails into the mirror and we only
look at the email_token to identify into which stream to mirror the
email.
Previously, these cache keys looked like:
:1:9c26164d3a393e316e0f8210efe270e08710d45astream_by_realm_and_name:...
Now, they look like this:
:1:9c26164d3a393e316e0f8210efe270e08710d45a:stream_by_realm_and_name:...
See the comment, but this is a significant performance optimization
for all of our pages using common_context, because this code path is
called more than a dozen times (recursively) by common_context.
Follow up on 92dc363. This modifies the ScheduledEmail model
and send_future_email to properly support multiple recipients.
Tweaked by tabbott to add some useful explanatory comments and fix
issues with the migration.
When soft deactivation is run in "auto" mode (no emails are
specified and all users inactive for specified number of days are
deactivated), catch-up is also run in the "auto" mode if
AUTO_CATCH_UP_SOFT_DEACTIVATED_USERS is True.
Automatically catching up soft-deactivated users periodically would
ensure a good user experience for returning users, but on some servers
we may want to turn off this option to save on some disk space.
Fixes#8858, at least for the default configuration, by eliminating
the situation where there are a very large number of messages to recover.
A user who has been soft deactivated for a long time might have 10Ks of message
history that was "soft deactivated". It might take a minute or more to add
UserMessage rows for all of these messages, causing timeouts. So, we paginate
the creation of these UserMessage rows.
This field is primarily intended to support avoiding displaying the
"more topics" feature in new organizations and streams, where we might
know that all messages in the stream are already available in the
browser.
Based on original work by Roman Godov, and significantly modified by
tabbott.
The second migration involved here could be expensive on Zulip Cloud,
but is unlikely to be an issue on other servers.
Apparently, our new validator for stream color having a valid format
incorrectly handled colors that had duplicate characters in them.
(This is caused in part by the spectrum.js logic automatically
converting #ffff00 to #ff0, which our validator rejected). Given that
we had old stream colors in the #ff0 format in our database anyway for
legacy reasons, there's no benefit to banning these colors.
In the future, we could imagine standardizing the format, but doing so
will require also changing the frontend to submit colors only in the
6-character format.
Fixes an issue reported in
https://github.com/zulip/zulip/issues/11845#issuecomment-471417073
Addresses point 2 of #10612. We use a regex to detect if a form
of FWD indicator is present at the beginning of the subject, which
means the message has been forwarded.
remove_quotations argument is added to a couple of functions where
it's necessary.
In filter_footer, the criteria for a line to be a possible beginning
of a footer is changed to line.strip() == "--", instead of
line.strip().startswith("--"), because the latter would remove
quotations from plaintext emails. This change makes sense, because
RFC 3676 specifies "-- " as the separator line between the body
and the signature of a message:
https://tools.ietf.org/html/rfc3676
We remove the 'subject' argument of process_stream_message and make
subject processing happen inside the function, as it's a more
appropriate place than the general process_message function and is
needed to have a good way of disabling removing quotations in forwarded
emails sent into the mirror.
Some urls which end with image file extensions (eg .jpg) may link to
html pages. This adds handling for linx.li, wikipedia.org and
pasteboard.co. Where possible, we redirect to the actual image url;
otherwise, we do not attempt to render it as an image.
Fixes #10438.
When a Zephyr user deactivates their account, they should be
automatically turned into a mirror dummy user (so that other users can
continue to interact with them as normal for a Zephyr user who isn't
using Zulip).
Fixes part 3 of #10612. When sending an email to the email mirror to a
stream address, if "+show-sender" is added in the address, the stream
message will now include "From: <sender>" at the top.
This reverts commit ff90c0101c but keeps
the test cases added for reference.
This was reverted because it was both not a clean solution and created
other realm filters bugs involving dashes (etc.).
In commit de65a04 we can see that if the need ever arises to modify
how stream descriptions are rendered, we would need to make changes
at 5 different call points which can be quite cumbersome. So this
functionality has been extracted to a new method called
'render_stream_descriptions'.
This fixes an issue where an invalid emoji name prevented subsequent
emojis from rendering.
This reverts the code change in
8842349629, while still passing the
tests added in that commit (it seems the original commit had
misdiagnosed an ordering bug and thus introduced this issue).
Fixes: #11770.
The night logo synchronization on the settings page was perfect, but
the actual display logic had a few problems:
* We were including the realm_logo in context_processors, even though
it is only used in home.py.
* We used different variable names for the templating in navbar.html
than anywhere else in the codebase.
* The behavior that the night logo would default to the day logo if
only one was uploaded was not correctly implemented for the navbar
position, either in the synchronization for updates code or the
logic in the navbar.html templates.
This commit leverages the Aho-Corasick algorithm to build the set of
user_ids that have their alert_words present in the message. It runs in
time linear in the length of the input message, as opposed to the number
of alert_words. This requires first building an Aho-Corasick automaton,
which takes O(number of alert_words in the entire realm) time but is
usually cached.
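A sketch of how this might look with the pyahocorasick library (the mapping shape and caching are assumptions; word-boundary handling is omitted for brevity):

```python
import ahocorasick  # pyahocorasick

def build_alert_words_automaton(alert_words_to_user_ids):
    # O(total length of the realm's alert words); cached in practice.
    automaton = ahocorasick.Automaton()
    for word, user_ids in alert_words_to_user_ids.items():
        automaton.add_word(word.lower(), (word, user_ids))
    automaton.make_automaton()
    return automaton

def users_with_alert_words(automaton, content: str):
    user_ids = set()
    # Single pass, linear in the length of the message content.
    for _end_index, (_word, ids) in automaton.iter(content.lower()):
        user_ids.update(ids)
    return user_ids
```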
This fixes an issue where blank lines between blocks were causing
auto-numbering of a list to stop before the blank line, resulting
in two separate numbered lists instead of one.
Edited significantly by tabbott to explain the tricky details in the
comments.
Fixes: #11651.
Add `max_int_size` parameter to `to_non_negative_int()` in
decorator.py so it will be able to validate that the integer doesn't
exceed the integer maximum limit.
Fixes #11451
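A sketch of the validator with the new parameter (the default cap shown is an assumption):

```python
def to_non_negative_int(s: str, max_int_size: int = 2**32 - 1) -> int:
    x = int(s)
    if x < 0:
        raise ValueError("argument is negative")
    if x > max_int_size:
        raise ValueError(f"{x} is too large (max {max_int_size})")
    return x
```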
This is important for situations such as with our Zapier app,
where the requesting user may be a bot that would like to access
its owner's subscriptions.
Tweaked by tabbott to eliminate the 2^N growth of cases in
do_get_streams.
We want to use the baseline features of bugdown, but not fancy things
like inline URL previews, since the whole structure of stream
descriptions is to have a single-line thing supporting some
formatting.
The migration part of this change fixes a bug encountered by some
organizations upgrading from older versions of Zulip.
This allows us to have some features using bugdown rendering where
inline image previews will not be rendered (which would be problematic
for e.g. stream descriptions).
Guest users will just get an empty list of default streams; we also
hide the "Default streams" organization view from the guest users UI.
This is for consistency with not providing guest users the full list
of streams in an organization.
Fixing this involves fixing the backend to handle unchanged field
submissions of the Zoom credentials without trying to re-validate the
credentials (for performance) as well as to fetch the already-sent
secret.
We'll use this in the push-notifications code, in a context where
there should definitely already be UserMessage rows if everything's
gone normally... but explicitly checking at the top seems like the
right pattern from a secure-coding perspective.
When a bunch of messages with active notifications are all read at
once -- e.g. by the user choosing to mark all messages, or all in a
stream, as read, or just scrolling quickly through a PM conversation
-- there can be a large batch of this information to convey. Doing it
in a single GCM/FCM message is better for server congestion, and for
the device's battery.
The corresponding client-side logic is in zulip/zulip-mobile#3343 .
Existing clients today only understand one message ID at a time; so
accommodate them by sending individual GCM/FCM messages up to an
arbitrary threshold, with the rest only as a batch.
Also add an explicit test for this logic. The existing tests
that happen to cause this function to run don't exercise the
last condition, so without a new test `--coverage` complains.
Adblock Plus's "Block social media icons tracking" setting blocked
integration logos for social media platforms from loading, so the logos
are renamed to bypass this.
Fixes #11590.
This change should help people discover silent mentions as a part
of Zulip syntax, while differentiating them from regular mentions.
This optimizes test-backend by skipping webhook
tests when run in default mode.
Tweaked by tabbott to extend the documentation and update the CI
commands.
ACCOUNT_ACTIVATION_DAYS doesn't seem to be used anywhere.
INVITATION_LINK_VALIDITY_DAYS seems to do its job currently.
(It was only ever used in very early Zulip commits.)
We were using a hardcoded relative path, which doesn't work if you're
not running this from the root of the Zulip checkout.
As part of fixing this, we need to make `LOCAL_UPLOADS_DIR` an
absolute path.
Fixes #11581.
Since da8f4bc0e back in August, this control flow has caused
`flags.active_mobile_push_notification` to be cleared if we don't send
these `remove` messages at all, and if we send them directly to GCM...
but not if we send them via the Zulip notification bouncer.
As a result, on a server configured to send `remove` notification-messages
via the bouncer, we accumulate "active" messages and never clear them.
If the user then does `mark_all_as_read`, we end up sending a `remove`
for each of those messages again, and all in one giant burst. We've
seen puzzling bursts of hundreds of removals pass through the bouncer
since turning on removals on chat.zulip.org; it's likely many of them
are caused by this bug.
This issue was made more acute with f4478aad5, which unconditionally
enabled removals.
Test added by tabbott.
This is the only server-side change required for the FCM migration!
Optionally, at some point in the future we might choose to migrate
to the new ("v1") API which FCM also offers. Nothing revolutionary
but there are some nice things about it:
https://firebase.google.com/docs/cloud-messaging/migrate-v1
The client-side fix to make these not a problem was in release
16.2.96, of 2018-08-22. We've been sending them from the
development community server chat.zulip.org since 2018-11-29.
We started forcing clients to upgrade with commit fb7bfbe9a,
deployed 2018-12-05 to zulipchat.com.
(The mobile app unconditionally makes a request to a route on
zulipchat.com to check for this kind of forced upgrade, so that
applies to mobile users of any Zulip server.)
So at this point it's long past safe for us to unconditionally
send these. Hardwire the old `SEND_REMOVE_PUSH_NOTIFICATIONS`
setting to True, and simplify it out.
We've for a while had logic to set plan_type to LIMITED when importing
into Zulip Cloud; we need corresponding logic to set it to SELF_HOSTED
when importing into a self-hosted server.
Fixes #11541.
For internal stream messages, most of the time, we have access to
a Stream object. For the few corner cases where we don't, it is a
much cleaner approach to have a separate function that accepts a
stream name than having one multi-option helper that accepts both
names and objects.
This changes the border-radius to 6px for the tabbed display, which is not
in line with the current Zulip style for border-radius (4px). However 6px
really looks a lot better for this (possibly because it's a bigger box than
most of our other boxes?)
If we make a practice on the Zulip server of always explicitly setting
the desired priority, then when an old server doesn't set the priority
we can reasonably have the bouncer make a guess.