Extend the list of users that have to be notified when a message is
changed, so that in addition to users who have a UserMessage row, any
users who subscribed later to a stream with history public to
subscribers will also get the update.
Fixes: #8750.
This adds experimental support in /register for sending key
statistical data on the last 1000 private messages that the user is a
participant in. Because it's experimental, we require developers to
request it explicitly in production (we don't use these data yet in
the webapp, and it likely carries some perf cost).
We expect this to be extremely helpful in initializing the mobile app
user experience for showing recent private message conversations.
See the code comments, but this has been heavily optimized to be very
efficient and do all the filtering work at the database layer so that
we minimize network transit with the database.
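For illustration, the database-side filtering might look roughly like
this (a hypothetical sketch, not the exact implementation):

```python
# Hypothetical sketch: fetch key data on the user's most recent private
# messages, filtering entirely at the database layer.
from zerver.models import Message, Recipient

def get_recent_private_messages(user_profile, limit=1000):
    return list(
        Message.objects.filter(
            recipient__type__in=[Recipient.PERSONAL, Recipient.HUDDLE],
            usermessage__user_profile=user_profile,
        )
        .order_by("-id")
        .values("id", "sender_id", "recipient_id")[:limit]
    )
```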
Fixes #11944.
Certain payloads for account updates do not include the
previous_attributes that allow us to figure out what was actually
updated. So, we just ignore such payloads.
This is in response to a support ticket where the user had a closed left
sidebar, had added an organization, and then couldn't figure out how to
switch organizations. They had googled and found "The desktop app makes it
easy to switch between different organizations" in our help docs, which was
not sufficiently helpful.
Previously, we had some expensive-to-calculate keys in
zulip_default_context, especially around enabled authentication
backends, which in total were a significant contributor to the
performance of various logged-out pages. Now, these keys are only
computed for the login/registration pages where they are needed.
This is a moderate performance optimization for the loading time of
many logged-out pages.
Closes #11929.
Previously, we could 500 if an organization administrator scanned
possible PreregistrationUser IDs looking for a valid invitation they
can interact with.
They couldn't do anything, so no security issue, but this fixes that
case to just be a 400 error as it should be.
With the previous commit, fixes #1836.
As specified in the issue above, we make
get_email_gateway_message_string_from_address raise an exception if
it doesn't recognise the email gateway address pattern. Then, we make
appropriate adjustments in the codepaths which call this function.
These functions don't really belong in actions.py, so we move them out,
into email_mirror_helpers.py. They can't go directly into
email_mirror.py or we'd get circular imports resulting in ImportError.
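A minimal sketch of the new behavior (simplified; the exception name
and pattern handling here are illustrative):

```python
import re
from django.conf import settings

class ZulipEmailForwardError(Exception):
    pass

def get_email_gateway_message_string_from_address(address: str) -> str:
    # EMAIL_GATEWAY_PATTERN looks like "%s@streams.example.com".
    parts = [re.escape(p) for p in settings.EMAIL_GATEWAY_PATTERN.split('%s')]
    match = re.match("(.*?)".join(parts), address)
    if match is None:
        # Previously this returned None; now callers must handle the error.
        raise ZulipEmailForwardError("Address not recognized by gateway.")
    return match.group(1)
```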
The hope is that by having a shorter list of initial streams, it'll
avoid some potential confusion about the value of topics.
At the very least, having 5 streams each with 1 topic was not a good
way to introduce Zulip.
This commit minimizes changes to the message content in
`send_initial_realm_messages` to keep the diff readable. Future commits will
reshape the content.
There were several problems with the old format:
* The sender was not necessarily the message's sender; it was the person who did
the deletion (which could be an organization administrator)
* It didn't include the ID of the sender, just the email address.
* It didn't include the recipient ID, instead having a semi-malformed
recipient_type_id under the weird name recipient_user_ids.
Since nothing was relying on the old behavior, we can just fix the
event structure.
We change the send_to_email_mirror management command, to send messages
to the email mirror through the mirror_email_message function instead of
process_message - this makes the message follow a similar codepath as
emails sent into the mirror with the postfix configuration, which means
going through the MirrorWorker queue. The reason for this is to make
this command useful for testing the new email mirror rate limiter.
Closes #2420.
We add rate limiting (max X emails within Y seconds per realm) to the
email mirror. By creating RateLimitedRealmMirror class, inheriting from
RateLimitedObject, and rate_limit_mirror_by_realm function, following a
mechanism used by rate_limit_user, we're able to have this
implementation mostly rely on the already existing, and proven over
time, rate_limiter.py code. The rules are configurable in settings.py in
RATE_LIMITING_MIRROR_REALM_RULES, analogous to RATE_LIMITING_RULES.
Rate limit verification happens in the MirrorWorker in
queue_processors.py. We don't rate limit missed message emails, as due
to using one time addresses, they're not a spam threat.
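A rough sketch of the new pieces (simplified; the exact signatures of
the rate_limiter machinery may differ):

```python
from django.conf import settings
# Assumed import path for the existing machinery:
from zerver.lib.rate_limiter import RateLimitedObject, rate_limit_entity

class RateLimitedRealmMirror(RateLimitedObject):
    def __init__(self, realm):
        self.realm = realm

    def key_fragment(self):
        return "emailmirror:{}".format(self.realm.string_id)

    def rules(self):
        # e.g. RATE_LIMITING_MIRROR_REALM_RULES = [(60, 50)]:
        # at most 50 mirrored emails per 60 seconds, per realm.
        return settings.RATE_LIMITING_MIRROR_REALM_RULES

def rate_limit_mirror_by_realm(recipient_realm):
    # Raises a RateLimited error if the realm is over its limit.
    rate_limit_entity(RateLimitedRealmMirror(recipient_realm))
```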
test_mirror_worker is adapted to the altered MirrorWorker code and a new
test - test_mirror_worker_rate_limiting is added in test_queue_worker.py
to provide coverage for these changes.
We clean up test_mirror_worker for more readability, as well as make it
verify that mirror_email gets called the correct number of times and use
a correct rcpt_to address, so that the test doesn't fail when some
verification of the address is added in the following commits
implementing rate limiting in the email mirror.
Fixes #9840.
Old addresses caused bugs in some cases with non-latin characters in
stream names (see issue number above). We switch to using django's
slugify helper function to convert stream names to full ascii, while
also getting rid of problematic non-alphanumeric characters, in a
reasonable way. See Django's documentation for slugify to see more about
how this function works.
Tests extended by tabbott to cover cases where we do end up with ascii.
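For reference, django's slugify behaves roughly like this:

```python
from django.utils.text import slugify

slugify("café & croissants")  # -> 'cafe-croissants'
slugify("проверка")           # -> '' (fully non-ASCII names can slugify away)
```

Since a slugified stream name can even be empty, the email_token, not
the name, has to be what identifies the stream (see the preparatory
commit below).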
To prepare for changing how the stream name gets encoded into mirror
email addresses while making sure old addresses keep working, we ignore
the stream_name part when receiving emails into the mirror and we only
look at the email_token to identify into which stream to mirror the
email.
I'm surprised that this wasn't a mypy error; we were passing a Realm
object as an integer, and predictably, this resulted in us
constructing a cache key that looked like this:
stream_by_realm_and_name:<Realm: zulip 1>:dd5...
Previously, these cache keys looked like:
:1:9c26164d3a393e316e0f8210efe270e08710d45astream_by_realm_and_name:...
Now, they look like this:
:1:9c26164d3a393e316e0f8210efe270e08710d45a:stream_by_realm_and_name:...
This avoids a bunch of duplicated calls to auth_enabled_helper for our
social auth backends, which added up because auth_enabled_helper can
take 100us to run.
See the comment, but this is a significant performance optimization
for all of our pages using common_context, because this code path is
called more than a dozen times (recursively) by common_context.
We have a few code paths that call get_realm_from_request multiple
times on the same request (e.g. the login page), once inside the view
function and once inside the common context processor code. This
change saves a useless duplicate database query in those code paths.
This block of code with 2 database queries is solely for the /devlogin
endpoint. Removing that block from the /login code path makes it
easier to test /login perf in development.
We never intended to render them for this use case as the result would
not look good, and now we have a convenient bugdown option for
controlling this behavior.
Since we're not storing the markdown rendering anywhere, there's
conveniently no data migration required.
Fixes #11889.
This renames references to user avatars, bot avatars, or organization
icons to profile pictures. The strings in the UI are updated,
in addition to the help files, comments, and documentation. Actual
variable/function names, changelog entries, routes, and s3 buckets are
left as-is in order to avoid introducing bugs.
Fixes #11824.
Follow up on 92dc363. This modifies the ScheduledEmail model
and send_future_email to properly support multiple recipients.
Tweaked by tabbott to add some useful explanatory comments and fix
issues with the migration.
Apparently, our invalid realm error page had HTTP status 200, which
could be confusing and in particular broke our mobile app's error
handling for this case.
When soft deactivation is run in "auto" mode (no emails are
specified and all users inactive for specified number of days are
deactivated), catch-up is also run in the "auto" mode if
AUTO_CATCH_UP_SOFT_DEACTIVATED_USERS is True.
Automatically catching up soft-deactivated users periodically would
ensure a good user experience for returning users, but on some servers
we may want to turn off this option to save on some disk space.
Fixes #8858, at least for the default configuration, by eliminating
the situation where there are a very large number of messages to recover.
A user who has been soft deactivated for a long time might have tens of
thousands of messages of "soft deactivated" history. It might take a minute
or more to add
UserMessage rows for all of these messages, causing timeouts. So, we paginate
the creation of these UserMessage rows.
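The pagination pattern is roughly (helper names hypothetical):

```python
BATCH_SIZE = 1000  # illustrative

def add_missing_messages_paginated(user_profile):
    while True:
        # Re-query each iteration: inserting UserMessage rows shrinks
        # the set of messages that are still missing them.
        batch = find_messages_missing_usermessage_rows(user_profile)[:BATCH_SIZE]
        if not batch:
            break
        bulk_create_usermessage_rows(user_profile, batch)
```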
This logic for passing through whether the user was logged in never
worked, because we were trying to read the client.
Fix this, and add tests to ensure it never breaks again.
Restructured by tabbott to have completely different code with the
same intent.
Fixes #11802.
Previously, the LDAP authentication model ignored the realm-level
settings for who can join a realm. This was sort of reasonable at the
time, because the original LDAP auth was an SSO solution that didn't
allow multiple realms, and so one could fully configure authentication
settings on the LDAP side. But now that we allow multiple realms with
the LDAP backend, one could easily imagine wanting different
restrictions on them, and so it makes sense to add this enforcement.
This field is primarily intended to support avoiding displaying the
"more topics" feature in new organizations and streams, where we might
know that all messages in the stream are already available in the
browser.
Based on original work by Roman Godov, and significantly modified by
tabbott.
The second migration involved here could be expensive on Zulip Cloud,
but is unlikely to be an issue on other servers.
The actual bug in #11791 was caused by code reverted in
3ed85f4cd7, so technically #11791 is
already fixed. However, it makes sense to add tests to ensure that it
doesn't regress in the future as part of closing out the issue.
Fixes #11791.
Apparently, our new validator for stream color having a valid format
incorrectly handled colors that had duplicate characters in them.
(This is caused in part by the spectrum.js logic automatically
converting #ffff00 to #ff0, which our validator rejected). Given that
we had old stream colors in the #ff0 format in our database anyway for
legacy reasons, there's no benefit to banning these colors.
In the future, we could imagine standardizing the format, but doing so
will require also changing the frontend to submit colors only in the
6-character format.
Fixes an issue reported in
https://github.com/zulip/zulip/issues/11845#issuecomment-471417073
According to GitHub's webhook docs, the scope of a membership
event can only be limited to 'teams', which holds true when a
new member is added to a team. However, we just found a payload
in our logs that indicates that when a user is removed from a
team, the scope of the membership is erroneously set to
'organization', not 'team'. This is most likely a bug on
GitHub's end because such behaviour is a direct violation of
their webhook API event specifications. We account for this
by restricting membership events to teams explicitly, at least
till GitHub's docs suggest otherwise.
Now that we've more or less stabilized our authentication/registration
subsystem how we want it, it seems worth adding proper documentation
for this.
Fixes #7619.
Addresses point 2 of #10612. We use a regex to detect if a form
of FWD indicator is present at the beginning of the subject, which
means the message has been forwarded.
A remove_quotations argument is added to a couple of functions where
it's necessary.
In filter_footer, the criteria for a line to be a possible beginning
of a footer is changed to line.strip() == "--", instead of
line.strip().startswith("--"), because the former would remove
quotations from plaintext emails. This change makes sense, because
RFC 3676 specifies "-- " as the separator line between the body
and the signature of a message:
https://tools.ietf.org/html/rfc3676
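Concretely, the changed check is roughly:

```python
def is_footer_separator(line: str) -> bool:
    # Old check -- also matched quoted plaintext like "-- quoted text":
    #   return line.strip().startswith("--")
    # New check -- only the RFC 3676 signature separator "-- ":
    return line.strip() == "--"
```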
We remove the 'subject' argument of process_stream_message and make
subject processing happen inside the function, as it's a more
appropriate place than the general process_message function and is
needed to have a good way of disabling removing quotations in forwarded
emails sent into the mirror.
This used to have a single function test_email_subject_stripping which
would run through a sizeable list of example subjects from subjects.json
fixture, form an email with each subject, send it to the email mirror
and check if the resulting stream message has a correctly stripped
topic. That took too much time, because we run through the entire
process_message and most_recent_message codepaths a lot of times.
We change the way of testing to:
1. Ensure process_message applies subject stripping (only need to run
process_message twice here)
2. Test the strip_from_subject function separately, on all the examples
from the subjects.json fixture. This is very fast.
Some urls which end with image file extensions (e.g. .jpg) may link to
html pages. This adds handling for linx.li, wikipedia.org and
pasteboard.co. Where possible, we redirect to the actual image url;
otherwise, we do not attempt to render it as an image.
Fixes #10438.
When a Zephyr user deactivates their account, they should be
automatically turned into a mirror dummy user (so that other users can
continue to interact with them as normal for a Zephyr user who isn't
using Zulip).
Fixes part 3 of #10612. When sending an email to the email mirror to a
stream address, if "+show-sender" is added in the address, the stream
message will now include "From: <sender>" at the top.
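A sketch of the resulting message construction (names illustrative):

```python
def prepend_sender_if_requested(body: str, options: dict, from_header: str) -> str:
    # For an address like general+show-sender@streams.example.com,
    # options would contain {"show_sender": True}.
    if options.get("show_sender"):
        body = "From: {}\n{}".format(from_header, body)
    return body
```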
The test_events system was in several tests using get_realm to fetch a
realm object, rather than accessing self.user_profile.realm. This
created subtle problems where we were neither directly editing nor
refreshing the `realm` object associated with our UserProfile object
from the database after the `do_*` methods ran.
The payoff for this is we can update the previously confused
`do_change_icon_source` test to actually change the state and have the
correct result.
This reverts commit ff90c0101c but keeps
the test cases added for reference.
This was reverted because it was both not a clean solution and created
other realm filters bugs involving dashes (etc.).
In commit de65a04 we can see that if the need ever arises to modify
how stream descriptions are rendered, we would need to make changes
at 5 different call points, which can be quite cumbersome. So this
functionality has been extracted to a new method called
'render_stream_descriptions'.
Earlier the behavior was to raise an exception thereby stopping the
whole sync. Now we log an error message and skip the field. Also
fixes the `query_ldap` command to report missing fields without
error.
Fixes: #11780.
This fixes an issue where an invalid emoji name prevented the
following emoji in the message from rendering.
This reverts the code change in
8842349629, while still passing the
tests added in that commit (it seems the original commit had
misdiagnosed an ordering bug and thus introduced this issue).
Fixes: #11770.
The night logo synchronization on the settings page was perfect, but
the actual display logic had a few problems:
* We were including the realm_logo in context_processors, even though
it is only used in home.py.
* We used different variable names for the templating in navbar.html
than anywhere else in the codebase.
* The behavior that the night logo would default to the day logo if
only one was uploaded was not correctly implemented for the navbar
position, either in the synchronization for updates code or the
logic in the navbar.html templates.
This commit leverages the Aho-Corasick algorithm to build a set of user_ids
that have their alert_words present in the message. It runs in time linear
in the length of the input message, as opposed to in the number of
alert_words. This is after building an Aho-Corasick automaton, which runs
in O(number of alert_words in the entire realm) and is usually cached.
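A minimal sketch using the pyahocorasick library (ignoring the
word-boundary handling the real code needs):

```python
import ahocorasick

def build_automaton(realm_alert_words):  # {user_id: ["alert word", ...]}
    automaton = ahocorasick.Automaton()
    for user_id, words in realm_alert_words.items():
        for word in words:
            key = word.lower()
            # Several users can share a word; store a set of user_ids.
            if automaton.exists(key):
                automaton.get(key).add(user_id)
            else:
                automaton.add_word(key, {user_id})
    automaton.make_automaton()  # built once per realm; cacheable
    return automaton

def alerted_user_ids(automaton, content):
    # One linear pass over the message content.
    user_ids = set()
    for _end_index, ids in automaton.iter(content.lower()):
        user_ids.update(ids)
    return user_ids
```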
This fixes an issue where blank lines between blocks were causing
auto-numbering of lists to stop before the blank line, resulting
in two separate numbered lists instead of one.
Edited significantly by tabbott to explain the tricky details in the
comments.
Fixes: #11651.
Add `max_int_size` parameter to `to_non_negative_int()` in
decorator.py so it will be able to validate that the integer doesn't
exceed the integer maximum limit.
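A sketch of the extended validator (the default limit here is
illustrative):

```python
def to_non_negative_int(s: str, max_int_size: int = 2**32 - 1) -> int:
    x = int(s)
    if x < 0:
        raise ValueError("argument is negative")
    if x > max_int_size:
        raise ValueError("{} is too large (max {})".format(x, max_int_size))
    return x
```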
Fixes #11451.
This is important for situations such as with our Zapier app,
where the requesting user may be a bot that would like to access
its owner's subscriptions.
Tweaked by tabbott to eliminate the 2^N growth of cases in
do_get_streams.
Tests now run in 7.649s, down from 9.297s. And this test works just as
well with 3 bots, since seeing only 3 database queries with 3 bots
confirms we're not doing a number of queries linear in the number of
bots in the organization.
We want to use the baseline features of bugdown, but not fancy things
like inline URL previews, since the whole structure of stream
descriptions is to have a single-line thing supporting some
formatting.
The migration part of this change fixes a bug encountered by some
organizations upgrading from older versions of Zulip.
This allows us to have some features using bugdown rendering where
inline image previews will not be rendered (which would be problematic
for e.g. stream descriptions).
Guest users will just get an empty list of default streams; we also
hide the "Default streams" organization view from the guest users UI.
This is for consistency with not providing guest users the full list
of streams in an organization.
Fixing this involves fixing the backend to handle unchanged field
submissions of the Zoom credentials without trying to re-validate the
credentials (for performance) as well as to fetch the already-sent
secret.
Visually, #zoom_help_text acts like
.organization-settings-parent div:first-of-type when the Zoom option
is selected, but isn't treated as such.
No visual change with the #google_hangouts_domain change; just there to make
the code more readable/defensible.
This avoids a spurious permission error inside the Postgres
`resolve_symlinks` function if we don’t have access to the current
working directory (e.g. we’re running with cwd /root inside `su
zulip`).
While we’re here, add a defensive `--` argument.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
This saves about 8% of the runtime of our total response middleware,
or equivalently close to 2% of the total Tornado response time. Which
is pretty significant given that we're not sure anyone is using statsd
in production.
It's also useful outside Tornado, but the effect is particularly
significant because of how important Tornado performance is.
This avoids parsing these functions on every request, which was
adding roughly 350us to our per-request response times.
The overall impact was more than 10% of basic Tornado response
runtime.
We'll use this in the push-notifications code, in a context where
there should definitely already be UserMessage rows if everything's
gone normally... but explicitly checking at the top seems like the
right pattern from a secure-coding perspective.
When a bunch of messages with active notifications are all read at
once -- e.g. by the user choosing to mark all messages, or all in a
stream, as read, or just scrolling quickly through a PM conversation
-- there can be a large batch of this information to convey. Doing it
in a single GCM/FCM message is better for server congestion, and for
the device's battery.
The corresponding client-side logic is in zulip/zulip-mobile#3343.
Existing clients today only understand one message ID at a time; so
accommodate them by sending individual GCM/FCM messages up to an
arbitrary threshold, with the rest only as a batch.
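Hypothetical sketch of the batching policy (names and threshold
illustrative, not the exact server code):

```python
INDIVIDUAL_REMOVE_LIMIT = 10  # illustrative threshold

def send_remove_notifications(send_remove, message_ids):
    # Old clients only understand one message ID per notification, so
    # send individual removals up to the threshold...
    for message_id in message_ids[:INDIVIDUAL_REMOVE_LIMIT]:
        send_remove({"zulip_message_id": message_id})
    # ...and everything else in a single batched payload.
    remainder = message_ids[INDIVIDUAL_REMOVE_LIMIT:]
    if remainder:
        send_remove({"zulip_message_ids": remainder})
```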
Also add an explicit test for this logic. The existing tests
that happen to cause this function to run don't exercise the
last condition, so without a new test `--coverage` complains.
Adblock Plus's "Block social media icons tracking" setting blocked
integration logos for social media platforms from loading, so the logos
are renamed to bypass this.
Fixes #11590.
This change was prompted by a possible bug on Clubhouse's end. In
general, if a branch is added, it also prompts a workflow state
transition in its primary story.
However, our webhook error logs show an erroneous payload where
the same branch is added to another story, but there is no
workflow state transition. Also, both the stories are grouped in
the same payload, which confused/broke our code. Ideally, this
shouldn't happen and is most likely a bug on Clubhouse's end.
In most cases, changes included in Clubhouse payloads never
pertain to more than one parent entity (stories) simultaneously,
and we usually operate under the assumption that the changes
included therein are related to each other in terms of their
parent object (story or epic) and not a child object (the GitHub
branch).
A check suite is a collection of check runs. We care a lot more
about the outcomes of check runs in this case because check_run
payloads are a lot more informative than check_suite payloads.
(And in any case, the check_suite events are primarily for notifying
tools like CI to run checks).
We only support notifications for events where a check run has
completed. Notifications for when a check run has been queued or
is in progress are not very informative and may be too noisy.
We do not anticipate our UI for showing stream descriptions looking
reasonable for multi-line descriptions, so we should just ban creating
them.
Given the frontend changes, multi-line descriptions are only likely to
show up from importing content from other tools, in which case
replacing newlines with spaces is cleaner than the alternative.
This change should help people recognize silent mentions in text as
a part of Zulip syntax, while differentiating them from regular
mentions.
This optimizes test-backend by skipping webhook
tests when run in default mode.
Tweaked by tabbott to extend the documentation and update the CI
commands.
The payloads for this event are missing some important details
about the Project's changes, such as the name of the project,
the card's column name, etc. Without such details, the resultant
notifications would not be useful at all!
ACCOUNT_ACTIVATION_DAYS doesn't seem to be used anywhere;
INVITATION_LINK_VALIDITY_DAYS seems to do its job currently.
(It was only ever used in very early Zulip commits.)
We were using a hardcoded relative path, which doesn't work if you're
not running this from the root of the Zulip checkout.
As part of fixing this, we need to make `LOCAL_UPLOADS_DIR` an
absolute path.
Fixes #11581.
Since da8f4bc0e back in August, this control flow has caused
`flags.active_mobile_push_notification` to be cleared if we don't send
these `remove` messages at all, and if we send them directly to GCM...
but not if we send them via the Zulip notification bouncer.
As a result, on a server configured to send `remove` notification-messages
via the bouncer, we accumulate "active" messages and never clear them.
If the user then does `mark_all_as_read`, we end up sending a `remove`
for each of those messages again, and all in one giant burst. We've
seen puzzling bursts of hundreds of removals pass through the bouncer
since turning on removals on chat.zulip.org; it's likely many of them
are caused by this bug.
This issue was made more acute with f4478aad5, which unconditionally
enabled removals.
Test added by tabbott.
This is the only server-side change required for the FCM migration!
Optionally, at some point in the future we might choose to migrate
to the new ("v1") API which FCM also offers. Nothing revolutionary
but there are some nice things about it:
https://firebase.google.com/docs/cloud-messaging/migrate-v1
The client-side fix to make these not a problem was in release
16.2.96, of 2018-08-22. We've been sending them from the
development community server chat.zulip.org since 2018-11-29.
We started forcing clients to upgrade with commit fb7bfbe9a,
deployed 2018-12-05 to zulipchat.com.
(The mobile app unconditionally makes a request to a route on
zulipchat.com to check for this kind of forced upgrade, so that
applies to mobile users of any Zulip server.)
So at this point it's long past safe for us to unconditionally
send these. Hardwire the old `SEND_REMOVE_PUSH_NOTIFICATIONS`
setting to True, and simplify it out.
We've for a while had logic to set plan_type to LIMITED when importing
into Zulip Cloud; we need corresponding logic to set it to SELF_HOSTED
when importing into a self-hosted server.
Fixes #11541.
For Google auth, the multiuse invite key should be stored in the
csrf_state sent to google along with other values like is_signup,
mobile_flow_otp.
For social auth, the multiuse invite key should be passed as params to
the social-auth backend. The passing of the key is handled by
social_auth pipeline and made available to us when the auth is
completed.
For internal stream messages, most of the time, we have access to
a Stream object. For the few corner cases where we don't, it is a
much cleaner approach to have a separate function that accepts a
stream name than having one multi-option helper that accepts both
names and objects.
The bot's details (such as ID and name) are used to format a
connection label in Zapier's UI. This allows users to distinguish
between different Zapier-to-Zulip connections set up with different
bots.
Our html collects extra spaces in a couple of places. The most prominent is
paragraphs that look like the following in the .md file:
* some text
continued
The html will have two spaces before "continued".
This changes the border-radius to 6px for the tabbed display, which is not
in line with the current Zulip style for border-radius (4px). However 6px
really looks a lot better for this (possibly because it's a bigger box than
most of our other boxes?)
I was hoping this would make things faster... it does, but sadly only
by about 70ms, 5% of this file's test runtime.
It sure does make this file rather less action-at-a-distance, though,
as well as fixing some duplication.
If we make a practice on the Zulip server of always explicitly setting
the desired priority, then when an old server doesn't set the priority
we can reasonably have the bouncer make a guess.
That is, this allows a Zulip server to now set the `priority`; but if
it doesn't, we use upstream's default value, which has the same effect
as we've always previously had by not setting it at all.
But when this is deployed to the push notifications bouncer server, it
does allow another server to set priority when pushing notifications
through the bouncer.
If the caller has access to a Stream object, it is wasteful to
query a database for a stream by ID or name. In addition, not
having to go through stream names eliminates various classes of
possible bugs involved with re-fetching the Stream object by name.
If the caller has access to a Stream object, it is wasteful to
query a database for a stream by ID or name. In addition, not
having to go through stream names eliminates various classes of
possible bugs involved with getting a Stream object back.
The name for_stream_name is more appropriate here. The name
for_stream is more suitable for a function that takes in a Stream
object, which we're about to add.
Our hash-naming of production assets interacted badly with the "look
at files in a directory" algorithm used to determine what sound
options exist for the "notification sound" feature. For lack of a
better solution, we fix this by excluding files with an extra `.` in
their name.
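A sketch of the resulting directory scan (paths and extension
illustrative):

```python
import os

def get_available_notification_sounds(sounds_dir: str):
    sounds = []
    for fn in os.listdir(sounds_dir):
        name, ext = os.path.splitext(fn)
        # Hash-named production assets look like "zulip.abc123.ogg";
        # skip anything whose basename still contains a ".".
        if ext == ".ogg" and "." not in name:
            sounds.append(name)
    return sorted(sounds)
```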
Extracts out common tests so that future social-auth backends can
be tested without duplicating tests. I have been careful to not
change any testing logic.
Add all the stop words to page_params, reading from the
`zulip_english.stop` database, with caching to avoid loading the file
on every page load.
Part of #10592.
This causes changing the email_address_visibility field to actually
modify what user_profile.email values are generated for users, both on
user creation and afterwards as email addresses are edited.
The overall feature isn't yet complete, but this brings us pretty close.
The original commit made a number of well-meaning but suboptimal product
changes to the invoice events, such as threading them under the invoice id
rather than the customer id. It also seems to be causing 500s for
invoiceitem events, though I'm not sure why.
This reverts commit 65489b0391.
We had disabled reference style links in bugdown, however,
we hadn't disabled them in marked. This commit rectifies
that and adds test cases for the same.
Fixes #11350.
This helps keep the realm.json small and easy to process; previously,
almost the entire size of that file was the analytics data.
We implement this by refactoring the analytics Config objects into a
separate subroutine that writes to a separate file, plus the
corresponding import code.
Manual testing was performed by exporting the 'analytics' realm, and
importing back to a newly created 'test' realm. The 'test' realm was
then exported and the json files were inspected. The data appeared
consistent with no abnormalities.
Fixes: #11220.
We eliminated use of this function in outgoing_webhook.py in
bdc95b5d72.
Tweaked by tabbott to also eliminate code only used for that mock.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
This commit does the following three things:
1. Update stream model to accommodate rendered description.
2. Render and save the stream rendered description on update.
3. Render and save stream descriptions on creation.
Further, the stream's rendered description is also sent whenever the
stream's description is being sent.
This is preparatory work for eliminating the use of the
non-authoritative marked.js markdown parser for stream descriptions.
This adds a new API for sending basic analytics data (number of users,
number of messages sent) from a Zulip server to the Zulip Cloud
central analytics database, which will make it possible for servers to
elect to have their usage numbers counted in published stats on the
size of the Zulip ecosystem.
This is primarily a feature for onboarding, where an organization
administrator might send a bunch of random test messages as part of
joining, but then want a pristine organization when their users later
join.
But it can theoretically be used for other use cases (e.g. for
moderation or removing threads that are problematic in some way).
Tweaked by tabbott to handle corner cases with
is_history_public_to_subscribers.
Fixes #10912.
This replaces the current usage of stream names with stream ids.
This commit also removes the `traditional` attribute from the invite
form as now we are sending stream_ids as an argument; this was the
only place in the codebase we used traditional=true, and it's great to
have it removed.
Ever since we implemented support for stream IDs in Addressee,
Addressee.stream_name() can now return None. This commit ensures
that _internal_prep_message only calls ensure_stream when
Addressee.stream_name() is not None.
This commit also contains the following auxiliary changes:
* Adds a custom exception, StreamWithIDDoesNotExist for when
a stream with a given ID does not exist because the error
message returned by StreamDoesNotExist only makes sense with stream
names, not IDs.
* Adds a new helper, get_stream_by_id_in_realm, which is similar
to get_user_profile_by_id_in_realm (introduced in #10391).
* Adds a helper, validate_stream_id_with_pm_notification, which
returns the Stream object associated with a given ID and also
handles PM notifications to the bot owner if the message was
sent by a bot and if the stream does not exist or has no
subscribers.
* Modifies the message sent by send_pm_if_empty_stream to
accommodate stream IDs.
Note that all of the above changes are required before check_message
can be modified to support stream IDs.
We've had this code oscillate a few times; the original comparison
was added as part of Stride import but broke HipChat import.
c34a8f2e69 fixed HipChat import but
regressed Stride.
This change fixes this for both HipChat + Stride.
Apparently, the "continue to registration" flow used a subtly invalid
way of encoding the full name. We put in the query part of the action
URL of the HTML form, but apparently HTML forms with a `GET` type will
ignore the query part (replacing it with any input values), which
makes sense but doesn't do what we want here. There are a few sane
ways to fix it, but given that the encoding logic we had before for
including the name in the URL was ugly, I'm pretty happy with just
adding a hidden input to the form for the name.
As part of Google+ being removed, they've eliminated support for the
/plus/v1/people/me endpoint. Replace it with the very similar
/oauth2/v3/userinfo endpoint.
This additional logic to prevent resizing in certain circumstances
(file size, dimensions) is necessary because the pillow gif handling
code seems to be rather flaky with regards to handling gif color
palettes, causing broken gifs after resizing. The workaround is to
only resize when absolutely necessary (e.g. because the file is larger
than 128x128 or 128KB).
Fixes #10351.
We had initially designed the poll widget like a blog
post with comments beneath it but it makes more sense
to think of it as just a simple poll with options.
We add a new syntax which converts the messages like the following:
```
/poll Who do you support?
Nadal
- Djokovic
```
to a poll with the two names as options. The list syntax is optional
since anyone making a poll is likely to want to create a list anyway.
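The parsing this implies is roughly (a simplified sketch):

```python
def parse_poll_message(content: str):
    lines = content.strip().splitlines()
    question = lines[0][len("/poll"):].strip()
    # The leading "-" on option lines is optional.
    options = [line.lstrip("- ").strip() for line in lines[1:] if line.strip()]
    return question, options

parse_poll_message("/poll Who do you support?\nNadal\n- Djokovic")
# -> ("Who do you support?", ["Nadal", "Djokovic"])
```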
This doesn't have any security impact, since we overwrote any other
fields in any case, and also this step happens before the security
part of input validation for stream creation. But this does improve
error messages if one tries to specify other arguments, and also makes
more clear that the `description` argument is supported here.
Refactor the potentially expensive work done by Beautiful Soup into a
function that is called by the alter_content function, so that we can
cache the result. Saves a significant portion of the runtime of
loading of all of our /help/ and /api/ documentation pages (e.g. 12ms
for /api).
Fixes #11088.
Tweaked by tabbott to use the URL path as the cache key, clean up
argument structure, and use a clearer name for the function.
In very old Slack workspaces, slackbot can appear as "Slackbot", and
the import script only checks for "slackbot" (case sensitive). This
breaks the import--it throws the assert that immediately follows the
test. I don't know how common this is, but it definitely affected our
import.
The simple fix is to compare against a lowercased-version of the
user's full name.
Earlier, our realm filters didn't render for languages that do not
use spaces (e.g. Japanese), since we used to check for the presence
of an actual space character. This commit replaces that logic with
a complex scheme to detect word boundaries.
Also, we convert the RealmFilterPattern to subclass InlineProcessor
and make use of the new no-op feature in py-markdown 3.0.1 where we
can tell py-markdown that our pattern didn't find a match despite
the initial regex getting matched.
Fixes #9883.
Since we are building our parser from scratch now:
1. We have control over which processor goes at what priority number.
Thus, we have also shifted the deprecated `.add()` calls to use the
new `.register()` calls with explicit priorities, but maintaining
the original order that the old method generated.
2. We do not have to remove the processors added by py-markdown that
we do not use in Zulip; we explicitly add only the processors we
do require.
3. We can cluster the building of each type of parser in one place,
and in the order they need to be so that when we register them,
there is no need to sort the list. This also makes for a huge
improvement in the readability of the code, as all the components
of each type are registered in the same function.
These are significant performance improvements, because we save on
calls to `str.startswith` in `.add()`, all the resources taken to
generate the default to-be-removed processors and the time taken to
sort the list of processors.
Following are the profiling results for the changes made. Here, we
build 10 engines one after the other and note the time taken to build
each of them. 1st pass represents the state after this commit and 2nd
pass represent the state after some regex modifications in the commits
that follow by Steve Howell. All times are in microseconds.
| nth Engine | Old Time | 1st Pass | 2nd Pass |
| ---------- | -------- | -------- | -------- |
| 1 | 92117.0 | 81775.0 | 76710.0 |
| 2 | 1254.0 | 558.0 | 341.0 |
| 3 | 1170.0 | 472.0 | 305.0 |
| 4 | 1155.0 | 519.0 | 301.0 |
| 5 | 1170.0 | 546.0 | 326.0 |
| 6 | 1271.0 | 609.0 | 416.0 |
| 7 | 1125.0 | 459.0 | 299.0 |
| 8 | 1146.0 | 476.0 | 390.0 |
| 9 | 1274.0 | 446.0 | 301.0 |
| 10 | 1135.0 | 451.0 | 297.0 |
We avoid re-computing the regex string here, and we
also avoid re-compiling the regex itself.
I decided to put the "one_time" decorator in the
bugdown file itself, just to reduce friction in
folks reading the "buyer beware" comments.
Unfortunately, we can't use this for the
get_web_link_regex() function due to testing concerns,
so that continues to do an inelegant cache-with-global-var
scheme.
We use early-exit to flatten the code.
I also tweaked the comments a bit based on some recent
profile findings. (e.g. reading the file isn't actually
a big bottleneck, it's more the regex itself)
This fixes an annoying bug where clicking to subscribe to a stream
would change the color shown in the "manage streams" UI immediately
after you click.
Fixes #11072.
This adds a setting under the "Notification" section of the
"Organization settings" tab, which enables organization administrators to
control whether the missed message emails include the message content or
not.
Fixes: #11123.
Multiple delete message requests for the same message sometimes caused
a 500 error. This happened via the normal IntegrityError being thrown
by delete message/archiving code.
This was manually reproduced by adding latency in function
move_messages_to_archive() in retention.py and
delete_message_backend() in views.py. This addresses the problem by
adding code to handle the exception and throw JsonableError to convert
500 to 400 errors, with an automated test.
Apparently, Clubhouse has two different payloads for story label
changes, one where the label name lives inside the "references"
object, and the other where it lives inside the "actions" object.
Their webhook API is still in beta, so this could just be a bug.
Anyhow, we should support both.
This adds a server-side check to verify whether the user sending a
request to create a stream where only admins can post is an admin or
not; it raises a JsonableError when the user is not a realm admin.
You can now pass in an info field with a value
like "out to lunch" to the /users/me/status,
and the server will include that in its outbound
events.
The semantics here are that both "away" and
"status_text" have to have defined values in order
to cause changes. You can omit the keys or
pass in None when values don't change.
The way you clear info is to pass the empty
string.
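A sketch of those semantics (simplified; not the actual view code):

```python
def update_user_status(user_id, away=None, status_text=None):
    changes = {}
    if away is not None:          # omitted/None means "no change"
        changes["away"] = away
    if status_text is not None:
        # "" clears the status text; any other string sets it.
        changes["status_text"] = status_text.strip()
    if changes:
        apply_status_change(user_id, changes)  # hypothetical helper
```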
We also change page_params to have a dictionary
called "user_status" instead of a set of user
ids. This requires a few small changes on the
frontend. (We will add "status_text" support in
subsequent commits; the changes here just keep
the "away" feature working correctly.)
We now have single function that handle both away
and not-away.
This refactoring sets us up to piggyback "info" more
easily onto status updates.
The only thing that changes here is that we don't
delete database rows any more when users revoke
their away status. Instead we just set the status
to NORMAL.
When I was initially writing the tests to solve issue #10131 in the PR,
I used 2 schema checkers as I modified the code to send the
rendered_value only when required.
When I was using just 1 schema checker shared between two code paths,
we needed _allow_only_listed_keys. But after shifting to 2 schema
checkers for the two different cases, we no longer needed that flag,
and it's better to remove it for a stronger check.
This reverts the temporary fix done in commit
46f4e58782, and replaces it with the fix that
non-admins should be able to see a dropdown to select a non-admin type of
invited user, i.e., normal member or guest user.
`fakeldap` assumes every attribute to be a multi-value attribute
while making comparisons in `_compare_s()`, and so while making
comparisons for the password it gives a false positive. The result
of this was that it was possible to log in to the dev environment
via LDAP using a substring of the password. For example, if the
LDAP password is `ldapuser1`, even entering `u` would log you in.
On the backend, we extend the BlockQuoteProcessor's clean function that
just removes '>' from the start of each line to convert each mention to
have the silent mention syntax, before UserMentionPattern is invoked.
The frontend, however, has an edge case where if you are mentioned in
some message and you quote it while having mentioned yourself above
the quoted message, you wouldn't see the red highlight till we get the
final rendered message from the backend.
This is such a subtle glitch that it's likely not worth worrying about.
Fixes #8025.
These mentions look like regular mentions except they do not
trigger any notification for the person mentioned. These are
primarily to be used when you make a bot take an action and
the bot mentions you, or when you quote a message that mentions
you.
Fixes #11221.
Apparently, zoom's API will (sometimes?) return a 201 (not 200)
Created in response to the API request to create a call. We fix this
by using the proper requests check for whether or not the request
failed.
This commit fixes an error in the logic for allowing admins to edit any
user's CPF (custom profile field) values. The logic allowing users to
edit their own CPF values is, however, sound. What happens is that for
some of the CPF types ("choice fields" as well as "URL" and "date
fields"), when the value is reset/deleted/cleared by the admin in the
Admin UI (organization settings), the frontend would send a null (empty
string) value to the backend for that custom profile field (as this is,
after all, the new value in this case). This would then trigger the backend
validators to return an error message.
We fix this by using the method check_remove_custom_profile_field_value,
that both code paths (user editing their own CPFs and admin editing a
user's CPF) can call.
This moves the logic for deleting the user's custom profile field
value in the remove_user_custom_profile_data view function to a method
named check_remove_custom_profile_field_value in actions.py, so that we
can reuse it in the next commit.
We make this change because setting up reminders in PMs didn't
play really well with our current infrastructure. Basically, the
reminder messages from the bot can't appear in the same narrow as
that of a PM between two people, and therefore we disable it.
Though we make an exception here where a person wants to set up
a reminder for themselves.
We do this since we are yet to figure out how the entire realm
internal bots scenario should work, and therefore for the time
being we will use the notification bot to deliver the reminders.
Previously, the subscription color attribute had a validator of
check_string, but this is insufficient. Hence this commit updates the
validator used to check_color. Fixes #11268.
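A sketch of such a validator, in the old error-string style (not the
exact implementation):

```python
import re

def check_color(var_name: str, val: object):
    # Accept both 3- and 6-digit hex colors, e.g. "#ff0" and "#ffff00".
    if not isinstance(val, str) or not re.fullmatch(
            r"#([0-9a-fA-F]{3}|[0-9a-fA-F]{6})", val):
        return "{} is not a valid hex color code".format(var_name)
    return None
```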
For unsupported or invalid payloads, we should just raise the
UnexpectedWebhookEventType exception and let our logging system
take care of recording the payload that caused the error.
This commit improves a couple of things:
* All of the message templates are now at the top, a convention
we follow in a lot of our webhooks.
* Messages are not prefixed with any emojis. We don't do this in
any of our other webhooks. Plus, the emojis were outdated.
* Remove some superfluous code.
* Use ```quote <quote goes here> ``` style formatting for
quoted text instead of the `>` character.
This commit only adds support for the four events that have sample
payloads provided for them on the Pagerduty developer website.
Support for the remaining events will be added in subsequent
commits, as we get access to more sample payloads.
Previously, zerver.views.registration.confirmation_key was only
available in development; now we make that more structurally clear by
moving it to the special zerver/views/development directory.
Fixes #11256.
Some urls are only available in the development environment
(dev_urls.py); the corresponding views (here, email_log.py) are moved
to the new directory zerver/views/development.
Fixes #11256.
This is a major upgrade, and requires some significant compatibility
work:
* Migrating the pattern-removal logic to use the Registry feature.
* Handling the removal of positional arguments in markdown extensions.
* Handling the removal of safe mode.
This adds a language parameter to send_future_email. As a result, this
properly internationalizes invitation reminder emails, by passing the
correct language into send_future_email.
Fixes #11240.
We still create a Python 2 virtualenv for thumbor but that’s
separate (/srv/zulip-thumbor-venv from
scripts/lib/create-thumbor-venv).
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
Now, if you pass an api_key, we'll initialize the public room
subscribers to be whatever they were at the time the import happened.
Also, document the situation in the caveats section.
We had an inconsistent behavior when `LDAP_APPEND_DOMAIN` was set,
in that we allowed the user to enter a username instead of their email
in the auth form, but later the workflow failed due to a small bug.
Fixes: #10917.
This now sets the user-agent to something like:
ZulipOutgoingWebhook/2.0
(It uses the current ZULIP_VERSION.)
Before this change, the user-agent would be
something like `python-requests/2.18.4`.
Fixes #10741.
Closes #11195. We add a management command to allow us to send emails
to the email mirror directly. The command doesn't require any
configuring of email sending or receiving for the email mirror,
it passes the emails directly using the process_message function.
We need to explicitly check for empty recipient lists in
send_message to ensure that internal_send_huddle_message doesn't
call Addressee.for_private with an empty recipient list.
The check for empty recipient lists (the "message_to" argument)
does not belong in check_message. As we implement support for
sending messages by user IDs (see #9474), we will be extending
much of the existing code in Addressee.legacy_build that validates
recipient lists. Therefore, Addressee.legacy_build is a much more
apt area to check for empty recipient lists.
Also, Addressee.for_private and Addressee.for_user_ids also need
to do their own validation, since not everything goes through
Addressee.legacy_build. It is okay to simply throw a 500 in these
cases because we expect that callers will be doing their own
validation for calls that don't go through Addressee.legacy_build().
This commit is a part of our efforts surrounding #9474.
This adds a feature to send a notification to the stream using the
notification bot when a stream is renamed. user_profile is also passed
to do_rename_stream, to use the name of the user who renamed the stream
in the notification.
The notification is sent to the stream using
internal_send_stream_message in do_rename_stream.
Fixes #11034.
Previously, this wasn't an explicit feature of the export tool.
Note that the current version still includes metadata on private
streams and private message recipients, just not their messages.
This adds a proper template for the /digest page, making it a
reasonable way to view the digest email content for development and
debugging.
Fixes: #11016.
Previously, we had some hand-written logic for parsing the subject
line of the email's headers and turning it into a Python string using
each of the valid encodings for an email. That logic was buggy, and
sometimes resulted in a bytes object being passed into
`send_zulip`, which would eventually throw an exception.
The fix for this is to use the Python standard library make_header
method for handling internationalized email headers.
https://stackoverflow.com/questions/7331351/python-email-header-decoding-utf-8
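The fix boils down to (standard library only):

```python
from email.header import decode_header, make_header

def decode_subject(raw_subject: str) -> str:
    # decode_header() splits the raw value into (bytes, charset) chunks;
    # make_header() reassembles them, and str() yields a proper Python
    # string regardless of the original encodings.
    return str(make_header(decode_header(raw_subject)))
```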
Fixes part 1 of #10612. We use a regex to remove RE:, FWD: (and similar
variations) from email subjects. A unit test is included; we add
subjects.json to the fixtures, containing various subjects to try the
stripping on.
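A sketch of the stripping regex (the real pattern handles more
variants):

```python
import re

# Matches leading "RE:", "FW:", "FWD:", etc., case-insensitively and
# repeatedly, so "Re: FWD: hello" strips down to "hello".
REPLY_PREFIX_RE = re.compile(r"^\s*((RE|FWD?)\s*:\s*)+", re.IGNORECASE)

def strip_from_subject(subject: str) -> str:
    return REPLY_PREFIX_RE.sub("", subject).strip()
```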
Since we have already added the `invite_as` field to models, we can now
replace usage of `invited_as_admin` properly with its equivalent `invite_as
== PreregistrationUser.INVITE_AS['REALM_ADMIN']`.
Hence, we also removed the now-redundant `invited_as_admin`.
This should make it easier for mobile/terminal apps to handle
situations like the user's API key changing.
Also fix the fact we were incorrectly using a 400, not 401, status
code for this case.
The logic for flushing the API key has been broken ever since we
added the cache, since we were incorrectly flushing the new API key,
not the old API key, from the cache after regeneration.
This should eliminate the need to do manual analytics work when
importing organizations imported/exported using the zulip -> zulip
import/export tools.
The slim_mode setting had been incorrectly configured to skip
"deleted" users, resulting in bugs where private messages with deleted
users would not be imported.
This endpoint serves requests which might originate from an image
preview link which had an http url and the message holding the image
link was rendered before we introduced thumbnailing. In that case
we would have used a camo proxy to proxy http content over https and
avoid mixed content warnings.
In near future, we plan to drop use of camo and just rely on thumbor
to serve such images. This endpoint helps maintain backward
compatibility for links which were already rendered.
This setting splits away part of responsibility from THUMBOR_URL.
From now on, this setting will be responsible for controlling whether
we thumbnail images or not by asking bugdown to render image links
to hit our /thumbnail endpoint. This is irrespective of what
THUMBOR_URL is set to, though ideally THUMBOR_URL should be set
to point to a running thumbor instance.
We used to add a sharpen filter for all the image sizes, whereas it
was intended for resized images only, which would have been smoothed
out a bit by the resize operation.
This unnecessary use of the filter used to result in weird issues
with full-size images. For example, the image located at
http://arqex.com/wp-content/uploads/2015/02/trees.png, when rendered
in full size, would have just its boundaries visible.
When trying to find the email gateway address, use the
`email.utils.getaddresses` function to deal with cases
where multiple recipients are included in the email header
or the stream address appears as an angle-addr with a
name given (e.g. if someone added it to their address book).
Added some other headers where the required address may
appear: "Resent" headers are sometimes used for forwarding,
and streams may also be found in CC. There is no way to find
the address if the email was received as a BCC.
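Sketch of the header scan (simplified):

```python
from email.utils import getaddresses

def find_candidate_addresses(message):
    # Gather addresses from the headers where the gateway address may
    # appear; an address received via BCC is unrecoverable by design.
    headers = ("To", "Cc", "Resent-To", "Resent-Cc")
    raw = []
    for header in headers:
        raw.extend(message.get_all(header, []))
    # getaddresses() handles multiple recipients per header and
    # angle-addr forms like 'My Stream <general@streams.example.com>'.
    return [addr for _name, addr in getaddresses(raw)]
```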
This should hopefully be the last commit of this form; ultimately, my
hope is that we'll be able to refactor the semi-duplicated logic in
this file to avoid so much effort going into keeping this correct.
For many actions, we make a single call to
send_event, and it's kind of heavy now to
properly assert we made one call, and we
don't need to exercise all the tornado code
to prove that the action was written correctly.
This makes it possible it include our standard markdown formatting in
one's custom profile fields, allowing for links, emphasis, emoji, etc.
Fixes #10131.
While we're at it, we remove the JSON parsing that was part of the
user field code path, since this function isn't responsible for
rendering user fields.
This adds a model field `invite_as` to the PreregistrationUser model
class, which will replace `invited_as_admin` in the future. The
intention behind adding this is that we can specify "types" of users
for an invited person other than admin or regular; that is, guest
users, or maybe many others in the future.
The octet-stream content type is potentially under-specified, but it's
better than potentially submitting None and increases consistency of
this part of the codebase.
The boto library's s3 interface allows setting only string-format
metadata keys. So we need to cast the last_modified floating-point
timestamp into a string before storing on the S3 object.
This bug mostly broke uploading avatars when using the S3 storage backend.
Apparently, hc-migrate can generate emoticons.json files with a
somewhat different format. Assuming that other files are in the
normal format, we should be able to handle it like this.
See report in #11135.
Apparently, some methods of exporting from HipChat do not include an
emoticons.json file. We could test for this using the
`include_emoticons` field in `metadata.json`, but we currently don't
even bother to read that file. Rather than changing that, we just
print a warning and proceed. This is arguably better anyway, in that
often not having emoticons.json is the result of user error when
exporting, and it's nice to flag that this is happening.
Fixes#11135.
This IntegrityError has been happening occasionally in production due
to races, likely due to some sort of mobile app double-post bug.
Handle this by avoiding a 500, and returning the same 400 we would do
if there hadn't been a race.
This section is largely unnecessary, doesn't convey any useful
information, and is probably a remnant from an older version of
this doc that we forgot to remove.
This commit adds a custom Markdown include extension which is
identical to the original except when a macro file can't
be found, it raises a custom JsonableError exception, which
we can catch and then trigger an appropriate test failure.
Fixes: #10947.
Our HipChat conversion tool didn't properly handle basic avatar
images, resulting in only the medium-size avatar images being imported
properly. This fixes that bug by asking the import tool to do the
thumbnailing for the basic avatar image (from the .original file) as
well as the medium avatar image.
This is a major rewrite of the billing system. It moves subscription
information off of stripe Subscriptions and into a local CustomerPlan
table.
To keep this manageable, it leaves several things unimplemented
(downgrading, etc), and a variety of other TODOs in the code. There are also
some known regressions, e.g. error-handling on /upgrade is broken.
Note that a pretty common use case for this is a realm admin sending this to
everyone after an import from HipChat or Slack. So this adds the realm_name
to the title (so that there is something they might recognize) and keeps the
wording generic enough to accommodate the user not having clicked anything
to get this email.
Also strengthens the tests a bit to better test the complicated template
logic.