Since positional arguments are interpreted differently by different
backends in Django's authentication backend system, it’s safer to
disallow them.
This had been the motivation for previously declaring the parameters
with default values back when we were on Python 2, but that was not very
effective, because Python has no rule against passing default arguments
positionally, and that convention for our authentication backends was
enforced solely by code review.
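For illustration, a minimal sketch of the keyword-only pattern this enables on
Python 3 (the class and parameter names here are made up, not the real backend
signatures):

```python
# The bare `*` makes Python itself reject positional use of these arguments,
# instead of relying on code review to enforce the convention.
class ExampleAuthBackend:
    def authenticate(self, request=None, *, username=None, password=None, realm=None):
        ...
```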
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
The `queue_data` variable is an intermediate step that's unnecessary.
Instead, the values from the queue event are assigned directly.
Also, the `worker` variable is not worth an assignment as it is only
referenced a single time per test case.
A FileNotFoundError was set as the side effect of the do_export_realm
mock, and the DeferredWorker was made to consume the event explicitly.
Previously, the mock of do_export_realm was producing spammy output
as a result of a FileNotFoundError coming from the queue processing of
`do_write_stats_file_for_realm_export`.
A unique path was created using the `LOCAL_UPLOADS_DIR` backend, similar
to the code used in `LocalUploadBackend`. The exported tarball was
copied to the directory, and an nginx URL was created to serve the file
publicly.
Tweaked by tabbott to output an actual URL.
This cleans up the pattern for how we check which user is logged in
during Zulip's backend unit tests to be much more readable (replacing
the arcane session code that does this check).
test_retention.py had various issues - we opt for keeping its essence
(what should the tests do and verify), but rewriting a lot of it in
order to have more clarity in what's happening there.
We split archive_messages code into two functions: moving to archive and
cleanup. This allows cleaning up the tests - they can call
these functions directly instead of copying several lines of
archive_messages here and there in multiple tests.
test_cross_realm_messages_archiving_two_realm_expired doesn't run the
code path patched in commit 3d1aa98b2ea344fba7fbb2373a37d4cf30f53e08,
so it can still fail. We apply the analogous change in the test as
in the cited commit.
This is probably a good idea for the production use case, since then
there's some consistency of behavior, and if we extend logging, one
knows exactly which realms were or were not executed before a logged
failure.
This fixes the nondeterministic test failures we've been seeing in CI:
if you use `-id` in that order_by, it happens consistently.
Sending a PM from hamlet (consented) to othello is a case
of sending a message from a consented user to a non-consented
user. This results in the generation of more than one message
file during realm export. To handle this case, _export_realm
is updated.
The upload option will no longer be limited to strictly S3 uploads. This
commit serves as a preliminary step for supporting LOCAL_UPLOADS_DIR as
part of the public only export feature.
We've been seeing nondeterministic failures in this test suite in CI
that we can't reproduce locally; these print statements should help
track them down.
This is the only function in TestEmailMirrorLibrary, so we rename this
class to the more appropriate TestGetMissedMessageToken, clean it up a bit
and add some extra checks to finally get email_mirror.py to 100% test
coverage.
log_and_report and its helper functions were mostly old code no longer
well adapted to how the email mirror works currently, as well as having no
test coverage. We rewrite this part of the email mirror to report errors
in a similar manner, and add tests for it. We're able to get rid of the
clunky and now useless debug_info dictionary in process_message, as
log_and_report only needs the recipient email in its third argument.
Mostly rewritten by Tim Abbott to ensure it correctly implements the
desired security model.
Administrators should have access to users' real email address so that
they can contact users out-of-band.
Clients won't have access to user email addresses, and thus won't be
able to compute gravatars.
The tests for this are a bit messy, in large part because our tests
for get_events call subsections of it, rather than the main function.
This provides a clean warning and 4xx error, rather than a 500, for
this corner case which is very likely user error.
The test here is awkward because we have to work around
https://github.com/zulip/zulip/issues/12362.
The `LocalUploadBackend` returns a relative URL, while the `S3UploadBackend`
returns an absolute URL. This commit switches to using `urljoin` to obtain the
absolute URL, instead of simply joining strings.
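For illustration, roughly how `urljoin` behaves for the two backends (the
hostnames and paths below are made up):

```python
from urllib.parse import urljoin

# Relative path from LocalUploadBackend: joined onto the base URL.
urljoin("https://example.zulipchat.com", "/user_avatars/exports/20.tar.gz")
# -> 'https://example.zulipchat.com/user_avatars/exports/20.tar.gz'

# Absolute URL from S3UploadBackend: passes through unchanged.
urljoin("https://example.zulipchat.com", "https://s3.amazonaws.com/bucket/exports/20.tar.gz")
# -> 'https://s3.amazonaws.com/bucket/exports/20.tar.gz'
```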
This commit also adds a small functionality change where the results of
each webhook fixture message sent is now displayed to the user.
With a small tweak by tabbott to fix a styling bug.
Fixes #12122.
Note: If you're going to send fixtures which are not JSON or of the
text/plain content type, make sure you set the correct content type
in the custom headers.
E.g. for the WordPress fixtures, the "Content-Type" should be set to
"application/x-www-form-urlencoded".
This is a very old commit for #106, which has been on hiatus for a few
years. It was significantly modified by tabbott to:
* Improve coding style and variable names
* Update mypy annotations style
* Clean up the testing logic
* Update for API changes elsewhere in our system
But the actual runtime code is essentially unmodified from the
original work by Kirill.
It contains basic support for archiving Messages, UserMessages, and
Attachments with a nice test suite. It's still not usable in
production (e.g. it will probably break Reactions, SubMessages, etc.),
but upcoming commits will address that.
This commit introduces a simple field where the user can now specify custom
HTTP headers. This commit does not introduce an improved system for storing
HTTP headers as fixtures - such a change would modify both the existing unit
tests as well as this devtool.
This commit adds a new developer tool: The "integrations dev panel"
which will serve as a replacement for the send_webhook_fixture_message
management command as a way to test integrations with much greater ease.
This lets us handle directly in our tooling the user experience that
we document for exporting a realm with member consent (before, it
required unpleasant manual work).
We may be successfully able to get the page once, to get the content type, but
the server or network may go down and cause problems when fetching the page for
parsing its meta tags.
Currently, we only show previews for URLs which are HTML pages, which could
contain other media. We don't show previews for links to non-HTML pages, like
pdf documents or audio/video files. To verify that the URL posted is an HTML
page, we verify the content-type of the page, either using server headers or by
sniffing the content.
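A rough sketch of that check, assuming a requests-style client (the real
preview code may differ in its details):

```python
import requests

def is_html_page(url: str) -> bool:
    # Only attempt to parse meta tags when the page declares (or sniffs as) HTML.
    response = requests.get(url, stream=True, timeout=15)
    content_type = response.headers.get("Content-Type", "")
    if content_type:
        return content_type.split(";")[0].strip() == "text/html"
    # No declared content type: fall back to sniffing the first bytes of the body.
    head = next(response.iter_content(512), b"")
    return b"<html" in head.lower()
```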
Closes #8358
We had some excessively tight rules about what characters were
allowed, which in particular prevented using `?foo=bar&baz=quux`
structures in the realm filters URLs.
Fixes #12239.
`youtube.com/playlist?list=<list-id>` incorrectly matches the regex since the
change in 8afda1c1bb. The regex was modified to
match URLs of the form `youtu.be/<id>` and this playlist URL incorrectly matches
with the `<id>` set to `playlist`.
This commit avoids this match by verifying that the ID is not `playlist`.
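A hedged sketch of that check (the pattern below is a simplification of the
real bugdown regex):

```python
import re

# Extract a candidate ID, then reject the literal "playlist" path segment,
# which is a playlist page rather than a video ID.
YOUTUBE = re.compile(r'(?:youtube\.com/|youtu\.be/)([\w-]+)')

def youtube_video_id(url: str):
    match = YOUTUBE.search(url)
    if match is None or match.group(1) == "playlist":
        return None
    return match.group(1)
```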
This renames Subscription.in_home_view field to is_muted, for greater
clarity as to what it does just from seeing the setting name, without
having to look it up.
Also disabled an obsolete test_migrations test.
Fixes #10042.
These tests have some code and comments that only applied when these
empty-body scenarios raised the regular ZulipEmailForwardError - now
they raise ZulipEmailForwardUserError.
We adapt the tests to this fact and test by mocking logging.warning and
making sure it gets called with the intended warning message. This is
also needed to cover the ZulipEmailForwardUserError case with tests to
get to 100% coverage of email_mirror.py.
We add a test for the case "if not all(val is not None for val in result):"
on result returned by redis_client.hmget in send_to_missed_message_address.
A couple of tests asserted that the number of queries was within a range,
because they ran one additional query when they were run individually, as
compared to running all the tests in `TestDigestEmailMessages`. We now trigger
these additional queries within the tests, to make the tests deterministic and
assert that the number of queries is a number, instead of a range.
Digest emails were disabled for soft deactivated users, since UserMessage
objects are created for such users lazily when they return.
We now compute the message list for gathering hot conversations by looking at
all the messages sent to the streams where the user is subscribed, while they
were subscribed.
Fixes #6297
If the text part of an email message didn't specify the charset in the
Content-Type header, the text content wouldn't be found. We fix this by
assuming the us-ascii charset in those cases, as specified by RFC 6657:
https://tools.ietf.org/html/rfc6657
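A sketch of the idea (not the actual email_mirror code):

```python
from email.message import Message

def get_text_content(part: Message) -> str:
    # Fall back to us-ascii when the text part does not declare a charset.
    charset = part.get_content_charset() or "us-ascii"
    return part.get_payload(decode=True).decode(charset, errors="replace")
```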
This commit migrates the Subscription's notification fields from a
BooleanField to a NullBooleanField where a value of None means to
inherit the value from the user's profile.
Also includes a migration to set the corresponding settings to None
if they match the user profile's values. This migration helps us in
getting rid of the weird "Apply to all" widget that we offered on
subscription settings page.
The mobile apps can't handle None appearing as the stream-level
notification settings, so for backwards-compatibility we arrange to
only send True/False to the mobile apps by applying those defaults
server-side. We introduce a notification_settings_null value within a
client_capabilities structure that newer versions of the mobile apps
can use to request the new model.
This mobile compatibility code is pretty effectively tested by the
existing test_events tests for the subscriptions subsystem.
If MAX_FILE_UPLOAD_SIZE is set to 0, then UI elements like the upload
icon in the compose and message edit UI and "Attachments" menu in
"/#settings" are not displayed.
A different error message is also displayed if a user tries to drag and
drop or paste a file into the compose message box.
Fixes #12152.
Fixes #12273.
When running the test_query_email_attr test in reverse, the test failed
because self._LDAPUser.attrs was being modified and it was being shared
with other tests.
This makes the implementation of `get_realm` consistent with its
declared return type of `Realm` rather than `Optional[Realm]`.
Fixes #12263.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This commit replaces the `create_stream_by_admins_only` setting with a
new `create_stream_policy` setting, which mirrors the structure of
the existing `invite_to_stream_policy`.
This is important preparation for migrating the waiting period feature
to be its own independent setting.
Fixes #12236.
Using sys.exit in a management command makes it impossible
to unit test the code in question. The correct approach to do the same
thing in Django management commands is to raise CommandError.
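For illustration, a minimal sketch of the pattern (the command and its option
are hypothetical):

```python
from django.core.management.base import BaseCommand, CommandError

class Command(BaseCommand):
    def handle(self, *args, **options):
        # Raising CommandError (rather than calling sys.exit) lets tests invoke
        # the command via call_command() and assert on the failure.
        if options.get("realm") is None:  # hypothetical option
            raise CommandError("You must specify a realm.")
```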
Follow-up to b570c0dafa.
Fixes #12251.
Previously, when disabling name changes in server settings, instead
of realm settings, the name edit button did not get disabled.
Changing the name resulted in a message stating `no changes made`.
Fixes #12132.
Realm setting to disable avatar changes is already present.
The `AVATAR_CHANGES_DISABLED` setting now follows the same
2-setting model as `NAME_CHANGES_DISABLED`.
This is useful when syncing avatars from an integrated LDAP/Active
Directory.
The upload avatar and delete avatar buttons are hidden if avatar
changes are disabled and the user is a non-admin.
If the user has a gravatar set, then the user will not be able to
upload an image as their avatar if avatar changes are disabled.
Part of #12132.
This module is used to render the HTML of pages like our user documentation
into text for use in open graph previews of those articles. It provided somewhat
confusing output in the case that there were paragraph breaks in the original message,
because text with multiple paragraphs and list items doesn't read very well. This commit
adds `|` as a delimiter between paragraphs, and prefixes list items with a `*`.
Closes #12228
When an emoji is nested inside another inline tag - like em or strong -
it was getting double processed because of the way the inlinePattern
TreeProcessor runs (it runs recursively). With this fix, we set the
inner text of the emoji span as an AtomicString, preventing us from
double processing the emoji's text.
Fixes #11621
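A sketch of the core of the fix (element construction simplified; the CSS
class and title are illustrative):

```python
import xml.etree.ElementTree as etree

from markdown.util import AtomicString

span = etree.Element("span")
span.set("class", "emoji emoji-1f604")
span.set("title", "smile")
# AtomicString tells Python-Markdown's inline processors not to run over
# this text a second time.
span.text = AtomicString(":smile:")
```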
Test Plan:
* Add test case for **😄**, verify it passes.
* Go into local dev server and send "**😄**" to self and verify the DOM
does not have double <span> tags for the emoji.
* Run zerver.tests.test_push_notifications and verify the markdown test case matches
the text_content field properly
We create rate_limit_entity as a general rate-limiting function for
RateLimitedObjects, from code that was possible to abstract away from
rate_limit_user and that will be used for other kinds of rate limiting.
We make rate_limit_user use this new general framework from now on.
This enables the function to either return a valid UserProfile or raise
InvalidMirrorInput, which is clearer and more pythonic than the previous
approach of a tuple of a bool and Optional[UserProfile].
In making the type clearer, this improves checking with mypy.
Tests updated.
This commit creates a new organization setting that determines whether
a user can invite other users to streams. Previously this was linked
to the waiting period threshold, but this was both not documented and
overly limiting.
With significant tweaks by tabbott to change the database model to not
involve two threshold fields, edit the tests, etc.
This requires follow-up work to make the create stream policy setting
work how this code implies it should.
Fixes #12042.
The github-services model for how GitHub would send requests to this
legacy integration has not been available since earlier in 2019.
Removing this integration also allows us to finally remove
authenticated_api_view, the legacy authentication model from 2013 that
had been used for this integration (and other features long since
upgraded).
A few functions that were used by the Beanstalk webhook are moved into
that webhook's implementation directly.
An endpoint was created in zerver/views. Basic rate-limiting was
implemented using RealmAuditLog. The idea here is to simply log each
export event as a realm_exported event. The number of events
occurring in the time delta is checked to ensure that the weekly
limit is not exceeded.
The event is published to the 'deferred_work' queue processor to
prevent the export process from being killed after 60s.
Upon completion of the export the realm admin(s) are notified.
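A rough sketch of the weekly check (the limit value and the exact event_type
string are illustrative, not necessarily what the endpoint uses):

```python
from datetime import timedelta

from django.utils import timezone

from zerver.models import Realm, RealmAuditLog

def export_limit_exceeded(realm: Realm, limit: int = 5) -> bool:
    # Count realm_exported events logged in the past week for this realm.
    week_ago = timezone.now() - timedelta(days=7)
    recent_exports = RealmAuditLog.objects.filter(
        realm=realm,
        event_type="realm_exported",  # illustrative constant
        event_time__gte=week_ago,
    ).count()
    return recent_exports >= limit
```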
This slows down the tests by about 5-10% -- the tests go from 0.6s to 0.630s or
so. But, this seems like a change worth making to prevent open-graph metadata
breaking HTML.
The entire idea of doing this operation with unchecked string
replacement in a middleware class is in my opinion extremely
ill-conceived, but this fixes the most pressing problem with it
generating invalid HTML.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This reverts commit fd9dd51d16 (#1815).
The issue described does not exist in Python 3, where urllib.parse now
_only_ accepts (Unicode) str and does the right thing with it. The
workaround was not being triggered and would have failed if it were.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This contains the email of the user to whom the notification is being
sent. It has not been used in any past mobile releases, so it is
safe to remove it.
Since user_id will be stable for the user, but email will not, it's
better to start consuming `user_id` instead of email on mobile.
Calls to `render_markdown_path` weren't getting cached since the context
argument is unhashable, and the `ignore_unhashable_lru_cache` decorator ignores
such calls. This commit adds a couple more decorators - one which converts
dict arguments to the function to a dict items tuple, and another which converts
dict items tuple arguments back to dicts. These two decorators used along with
the `ignore_unhashable_lru_cache` decorator ensure that the calls to
`render_markdown_path` with the context dict argument are also cached.
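A simplified sketch of the approach (decorator and function names here are
illustrative, not the exact ones added in this commit):

```python
import functools

def dict_args_to_items_tuples(func):
    # Convert dict positional arguments into hashable tuples of items so an
    # lru_cache-style wrapper underneath can key on them.
    @functools.wraps(func)
    def wrapper(*args):
        hashable_args = tuple(
            tuple(sorted(arg.items())) if isinstance(arg, dict) else arg
            for arg in args
        )
        return func(*hashable_args)
    return wrapper

@dict_args_to_items_tuples
@functools.lru_cache(maxsize=None)
def render_cached(path, context_items):
    context = dict(context_items)  # the inner function converts back to a dict
    ...
```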
The time to run zerver.tests.test_urls.PublicURLTest.test_public_urls drops by
about 50% from 8.4s to 4.1s with this commit. The time to run
zerver.tests.test_docs.DocPageTest.test_doc_endpoints drops by about 20% from
3.2s to 2.5s.
This fixes an issue where a hanging unordered list was not
rendering in a blockquote; the problem was that we were not
adding an empty line (to satisfy the markdown) for a hanging
unordered list if it is in a blockquote. Both blockquote
and code block are fenced, but we want to avoid rendering
the list only if it's in a code block, not if it's in a blockquote.
Fixes: #11916.
This is important because upcoming features will include slightly more
complex logic in post_process_state that we'd ideally like to be
included in what this suite tests.
This requires a few related changes:
* A small change to post_process_state to sort the realm_users objects
by user_id to ensure those data structures are stable.
* Improvements to the logic for checking if the initial state has
changed to use match_states for better output.
Extend the list of users that have to be notified when a message is
changed, so that in addition to users who have a UserMessage row, any
users who subscribed later to a stream with history public to
subscribers will also get the update.
Fixes: #8750.
This adds experimental support in /register for sending key
statistical data on the last 1000 private messages that the user is a
participant in. Because it's experimental, we require developers to
request it explicitly in production (we don't use these data yet in
the webapp, and it likely carries some perf cost).
We expect this to be extremely helpful in initializing the mobile app
user experience for showing recent private message conversations.
See the code comments, but this has been heavily optimized to be very
efficient and do all the filtering work at the database layer so that
we minimize network transit with the database.
Fixes #11944.
Previously, we had some expensive-to-calculate keys in
zulip_default_context, especially around enabled authentication
backends, which in total were a significant contributor to the
performance of various logged-out pages. Now, these keys are only
computed for the login/registration pages where they are needed.
This is a moderate performance optimization for the loading time of
many logged-out pages.
Closes #11929.
With the previous commit, fixes #1836.
As specified in the issue above, we make
get_email_gateway_message_string_from_address raise an exception if
it doesn't recognise the email gateway address pattern. Then, we make
appropriate adjustments in the codepaths which call this function.
These functions don't really belong in actions.py, so we move them out,
into email_mirror_helpers.py. They can't go directly into
email_mirror.py or we'd get circular imports resulting in ImportError.
The hope is that by having a shorter list of initial streams, it'll
avoid some potential confusion about the value of topics.
At the very least, having 5 streams each with 1 topic was not a good
way to introduce Zulip.
This commit minimizes changes to the message content in
`send_initial_realm_messages` to keep the diff readable. Future commits will
reshape the content.
There were several problems with the old format:
* The sender field was not necessarily the message's sender; it was the
person who did the deletion (which could be an organization administrator)
* It didn't include the ID of the sender, just the email address.
* It didn't include the recipient ID, instead having a semi-malformed
recipient_type_id under the weird name recipient_user_ids.
Since nothing was relying on the old behavior, we can just fix the
event structure.
Closes #2420
We add rate limiting (max X emails within Y seconds per realm) to the
email mirror. By creating RateLimitedRealmMirror class, inheriting from
RateLimitedObject, and rate_limit_mirror_by_realm function, following a
mechanism used by rate_limit_user, we're able to have this
implementation mostly rely on the already existing, and proven over
time, rate_limiter.py code. The rules are configurable in settings.py in
RATE_LIMITING_MIRROR_REALM_RULES, analogously to RATE_LIMITING_RULES.
Rate limit verification happens in the MirrorWorker in
queue_processors.py. We don't rate limit missed message emails, as due
to using one-time addresses, they're not a spam threat.
test_mirror_worker is adapted to the altered MirrorWorker code and a new
test - test_mirror_worker_rate_limiting is added in test_queue_worker.py
to provide coverage for these changes.
We clean up test_mirror_worker for more readability, as well as make it
verify that mirror_email gets called the correct number of times and uses
a correct rcpt_to address, so that the test doesn't fail when some
verification of the address is added in the following commits
implementing rate limiting in the email mirror.
Fixes #9840.
Old addresses caused bugs in some cases with non-latin characters in
stream names (see issue number above). We switch to using Django's
slugify helper function to convert stream names to full ascii, while
also getting rid of problematic non-alphanumeric characters, in a
reasonable way. See Django's documentation for slugify to see more about
how this function works.
Tests extended by tabbott to cover cases where we do end up with ascii.
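For illustration, how Django's slugify treats stream names (the names below
are made up):

```python
from django.utils.text import slugify

slugify("Verona support")  # -> 'verona-support'
slugify("поддержка")       # non-ascii characters are stripped -> ''
```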
To prepare for changing how the stream name gets encoded into mirror
email addresses while making sure old addresses keep working, we ignore
the stream_name part when receiving emails into the mirror and we only
look at the email_token to identify into which stream to mirror the
email.
See the comment, but this is a significant performance optimization
for all of our pages using common_context, because this code path is
called more than a dozen times (recursively) by common_context.
We never intended to render them for this use case as the result would
not look good, and now we have a convenient bugdown option for
controlling this behavior.
Since we're not storing the markdown rendering anywhere, there's
conveniently no data migration required.
Fixes #11889.
This renames references to user avatars, bot avatars, or organization
icons to profile pictures. The strings in the UI are updated,
in addition to the help files, comments, and documentation. Actual
variable/function names, changelog entries, routes, and s3 buckets are
left as-is in order to avoid introducing bugs.
Fixes #11824.
Follow up on 92dc363. This modifies the ScheduledEmail model
and send_future_email to properly support multiple recipients.
Tweaked by tabbott to add some useful explanatory comments and fix
issues with the migration.
Apparently, our invalid realm error page had HTTP status 200, which
could be confusing and in particular broke our mobile app's error
handling for this case.
When soft deactivation is run in "auto" mode (no emails are
specified and all users inactive for specified number of days are
deactivated), catch-up is also run in the "auto" mode if
AUTO_CATCH_UP_SOFT_DEACTIVATED_USERS is True.
Automatically catching up soft-deactivated users periodically would
ensure a good user experience for returning users, but on some servers
we may want to turn off this option to save on some disk space.
Fixes#8858, at least for the default configuration, by eliminating
the situation where there are a very large number of messages to recover.
A user who has been soft deactivated for a long time might have 10Ks of message
history that was "soft deactivated". It might take a minute or more to add
UserMessage rows for all of these messages, causing timeouts. So, we paginate
the creation of these UserMessage rows.
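A rough sketch of the batching (the chunk size and helper name are
illustrative):

```python
from zerver.models import UserMessage

CHUNK_SIZE = 1000

def add_missing_user_messages(user_profile, message_ids):
    # Insert UserMessage rows in fixed-size batches rather than one giant
    # bulk insert, so each individual query stays fast.
    for i in range(0, len(message_ids), CHUNK_SIZE):
        chunk = message_ids[i:i + CHUNK_SIZE]
        UserMessage.objects.bulk_create(
            [UserMessage(user_profile=user_profile, message_id=message_id, flags=0)
             for message_id in chunk]
        )
```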
This logic for passing through whether the user was logged in never
worked, because we were trying to read the client.
Fix this, and add tests to ensure it never breaks again.
Restructured by tabbott to have completely different code with the
same intent.
Fixes #11802.
Previously, the LDAP authentication model ignored the realm-level
settings for who can join a realm. This was sort of reasonable at the
time, because the original LDAP auth was an SSO solution that didn't
allow multiple realms, and so one could fully configure authentication
settings on the LDAP side. But now that we allow multiple realms with
the LDAP backend, one could easily imagine wanting different
restrictions on them, and so it makes sense to add this enforcement.
This field is primarily intended to support avoiding displaying the
"more topics" feature in new organizations and streams, where we might
know that all messages in the stream are already available in the
browser.
Based on original work by Roman Godov, and significantly modified by
tabbott.
The second migration involved here could be expensive on Zulip Cloud,
but is unlikely to be an issue on other servers.
The actual bug in #11791 was caused by code reverted in
3ed85f4cd7, so technically #11791 is
already fixed. However, it makes sense to add tests to ensure that it
doesn't regress in the future as part of closing out the issue.
Fixes #11791.
Apparently, our new validator for stream color having a valid format
incorrectly handled colors that had duplicate characters in them.
(This is caused in part by the spectrum.js logic automatically
converting #ffff00 to #ff0, which our validator rejected). Given that
we had old stream colors in the #ff0 format in our database anyway for
legacy reasons, there's no benefit to banning these colors.
In the future, we could imagine standardizing the format, but doing so
will require also changing the frontend to submit colors only in the
6-character format.
Fixes an issue reported in
https://github.com/zulip/zulip/issues/11845#issuecomment-471417073
Addresses point 2 of #10612. We use a regex to detect if a form
of FWD indicator is present at the beginning of the subject, which
means the message has been forwarded.
A remove_quotations argument is added to a couple of functions where
it's necessary.
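A hedged sketch of the subject check described above (the exact pattern used
may differ):

```python
import re

# Treat common "Fwd:" / "FW:" prefixes as a forwarded-message indicator.
FORWARD_PREFIX = re.compile(r"^\s*fwd?\s*:", re.IGNORECASE)

def is_forwarded(subject: str) -> bool:
    return bool(FORWARD_PREFIX.match(subject))
```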
In filter_footer, the criterion for a line to be a possible beginning
of a footer is changed to line.strip() == "--", instead of
line.strip().startswith("--"), because the old startswith check would
remove quotations from plaintext emails. This change makes sense, because
RFC 3676 specifies "-- " as the separator line between the body
and the signature of a message:
https://tools.ietf.org/html/rfc3676
We remove the 'subject' argument of process_stream_message and make
subject processing happen inside the function, as it's a more
appropriate place than the general process_message function and is
needed to have a good way of disabling removing quotations in forwarded
emails sent into the mirror.
This used to have a single function test_email_subject_stripping which
would run through a sizeable list of example subjects from the subjects.json
fixture, form an email with each subject, send it to the email mirror
and check if the resulting stream message has a correctly stripped
topic. That took too much time, because we run through the entire
process_message and most_recent_message codepaths a lot of times.
We change the way of testing to:
1. Ensure process_message applies subject stripping (only need to run
process_message twice here)
2. Test the strip_from_subject function separately, on all the examples
from the subjects.json fixture. This is very fast.
Some URLs which end with image file extensions (e.g. .jpg) may link to
HTML pages. This adds handling for linx.li, wikipedia.org and
pasteboard.co. If possible, we redirect to the actual image URL;
otherwise we do not attempt to render it as an image.
Fixes #10438.
Fixes part 3 of #10612. When sending an email to the email mirror to a
stream address, if "+show-sender" is added in the address, the stream
message will now include "From: <sender>" at the top.
In several tests, the test_events system was using get_realm to fetch a
realm object, rather than accessing self.user_profile.realm. This
created subtle problems where we were neither directly editing nor
refreshing the `realm` object associated with our UserProfile object
from the database after the `do_*` methods ran.
The payoff for this is we can update the previously confused
`do_change_icon_source` test to actually change the state and have the
correct result.
This reverts commit ff90c0101c but keeps
the test cases added for reference.
This was reverted because it was both not a clean solution and created
other realm filters bugs involving dashes (etc.).
Earlier the behavior was to raise an exception thereby stopping the
whole sync. Now we log an error message and skip the field. Also
fixes the `query_ldap` command to report missing fields without
error.
Fixes: #11780.
This fixes an issue where an invalid emoji name prevented subsequent
emojis from rendering.
This reverts the code change in
8842349629, while still passing the
tests added in that commit (it seems the original commit had
misdiagnosed an ordering bug and thus introduced this issue).
Fixes: #11770.
The night logo synchronization on the settings page was perfect, but
the actual display logic had a few problems:
* We were including the realm_logo in context_processors, even though
it is only used in home.py.
* We used different variable names for the templating in navbar.html
than anywhere else in the codebase.
* The behavior that the night logo would default to the day logo if
only one was uploaded was not correctly implemented for the navbar
position, either in the synchronization for updates code or the
logic in the navbar.html templates.
This commit leverages the Aho-Corasick algorithm to build a set of user_ids
that have their alert_words present in the message. It runs in time linear
in the length of the input message, as opposed to the number of
alert_words. This is after building an Aho-Corasick automaton, which runs
in O(number of alert_words in the entire realm) and is usually cached.
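A sketch of the approach using the pyahocorasick library (the user IDs and
words are made up, and the real code maps each word to the set of users that
have it):

```python
import ahocorasick

automaton = ahocorasick.Automaton()
for user_id, word in [(1, "deploy"), (2, "outage"), (1, "oncall")]:
    automaton.add_word(word.lower(), (user_id, word))
automaton.make_automaton()

content = "the deploy caused a brief outage"
# Scanning is linear in len(content), regardless of how many alert words exist.
user_ids = {user_id for _, (user_id, word) in automaton.iter(content)}
# user_ids == {1, 2}
```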
This fixes an issue where blank lines between blocks were causing
auto-numbering of a list to stop before the blank line, resulting
in two separate numbered lists instead of one.
Edited significantly by tabbott to explain the tricky details in the
comments.
Fixes: #11651.
Add `max_int_size` parameter to `to_non_negative_int()` in
decorator.py so it will be able to validate that the integer doesn't
exceed the integer maximum limit.
Fixes #11451
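A sketch of the validator behavior described above (the default limit is
illustrative):

```python
def to_non_negative_int(s: str, max_int_size: int = 2**32 - 1) -> int:
    x = int(s)
    if x < 0:
        raise ValueError("argument is negative")
    if x > max_int_size:
        raise ValueError("%s is too large (max %s)" % (x, max_int_size))
    return x
```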
This is important for situations such as with our Zapier app,
where the requesting user may be a bot that would like to access
its owner's subscriptions.
Tweaked by tabbott to eliminate the 2^N growth of cases in
do_get_streams.
Tests now run in 7.649s, down from 9.297s. And this test works just as
well with 3 bots, since only 3 database queries with 3 bots confirms we're
not doing linear queries in the number of bots in the organization.
We want to use the baseline features of bugdown, but not fancy things
like inline URL previews, since the whole structure of stream
descriptions is to have a single-line thing supporting some
formatting.
The migration part of this change fixes a bug encountered by some
organizations upgrading from older versions of Zulip.
This allows us to have some features using bugdown rendering where
inline image previews will not be rendered (which would be problematic
for e.g. stream descriptions).
Guest users will just get an empty list of default streams; we also
hide the "Default streams" organization view from the guest users UI.
This is for consistency with not providing guest users the full list
of streams in an organization.
Fixing this involves fixing the backend to handle unchanged field
submissions of the Zoom credentials without trying to re-validate the
credentials (for performance) as well as to fetch the already-sent
secret.
Visually, #zoom_help_text acts like
.organization-settings-parent div:first-of-type when the Zoom option
is selected, but isn't treated as such.
No visual change with the #google_hangouts_domain change; just there to make
the code more readable/defensible.
When a bunch of messages with active notifications are all read at
once -- e.g. by the user choosing to mark all messages, or all in a
stream, as read, or just scrolling quickly through a PM conversation
-- there can be a large batch of this information to convey. Doing it
in a single GCM/FCM message is better for server congestion, and for
the device's battery.
The corresponding client-side logic is in zulip/zulip-mobile#3343 .
Existing clients today only understand one message ID at a time; so
accommodate them by sending individual GCM/FCM messages up to an
arbitrary threshold, with the rest only as a batch.
Also add an explicit test for this logic. The existing tests
that happen to cause this function to run don't exercise the
last condition, so without a new test `--coverage` complains.
We do not anticipate our UI for showing stream descriptions looking
reasonable for multi-line descriptions, so we should just ban creating
them.
Given the frontend changes, multi-line descriptions are only likely to
show up from importing content from other tools, in which case
replacing newlines with spaces is cleaner than the alternative.
This change should help people discover silent mentions as a part of
Zulip syntax, while distinguishing them from regular mentions in the
text.
ACCOUNT_ACTIVATION_DAYS doesn't seem to be used anywhere.
INVITATION_LINK_VALIDITY_DAYS seems to do its job currently.
(It was only ever used in very early Zulip commits.)
Since da8f4bc0e back in August, this control flow has caused
`flags.active_mobile_push_notification` to be cleared if we don't send
these `remove` messages at all, and if we send them directly to GCM...
but not if we send them via the Zulip notification bouncer.
As a result, on a server configured to send `remove` notification-messages
via the bouncer, we accumulate "active" messages and never clear them.
If the user then does `mark_all_as_read`, we end up sending a `remove`
for each of those messages again, and all in one giant burst. We've
seen puzzling bursts of hundreds of removals pass through the bouncer
since turning on removals on chat.zulip.org; it's likely many of them
are caused by this bug.
This issue was made more acute with f4478aad5, which unconditionally
enabled removals.
Test added by tabbott.
The client-side fix to make these not a problem was in release
16.2.96, of 2018-08-22. We've been sending them from the
development community server chat.zulip.org since 2018-11-29.
We started forcing clients to upgrade with commit fb7bfbe9a,
deployed 2018-12-05 to zulipchat.com.
(The mobile app unconditionally makes a request to a route on
zulipchat.com to check for this kind of forced upgrade, so that
applies to mobile users of any Zulip server.)
So at this point it's long past safe for us to unconditionally
send these. Hardwire the old `SEND_REMOVE_PUSH_NOTIFICATIONS`
setting to True, and simplify it out.
For Google auth, the multiuse invite key should be stored in the
csrf_state sent to google along with other values like is_signup,
mobile_flow_otp.
For social auth, the multiuse invite key should be passed as params to
the social-auth backend. The passing of the key is handled by
social_auth pipeline and made available to us when the auth is
completed.
For internal stream messages, most of the time, we have access to
a Stream object. For the few corner cases where we don't, it is a
much cleaner approach to have a separate function that accepts a
stream name than having one multi-option helper that accepts both
names and objects.
Our html collects extra spaces in a couple of places. The most prominent is
paragraphs that look like the following in the .md file:
* some text
continued
The html will have two spaces before "continued".
This changes the border-radius to 6px for the tabbed display, which is not
in line with the current Zulip style for border-radius (4px). However 6px
really looks a lot better for this (possibly because it's a bigger box than
most of our other boxes?)
I was hoping this would make things faster... it does, but sadly only
by about 70ms, 5% of this file's test runtime.
It sure does make this file rather less action-at-a-distance, though,
as well as fixing some duplication.