If a user deletes a message between when it triggered a potential push
notification for another user and when that notification was actually
sent, we'd end up with a situation where the `Message` table didn't
have an entry for the requested message ID.
The clean fix for this is to still throw an exception in the event
that the message doesn't exist at all, but if it exists in
ArchivedMessage, don't throw a user-facing exception.
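Roughly, the intended behavior (the helper name here is hypothetical):

    from typing import Optional

    from zerver.models import ArchivedMessage, Message

    def get_message_for_push(message_id: int) -> Optional[Message]:
        try:
            return Message.objects.get(id=message_id)
        except Message.DoesNotExist:
            if ArchivedMessage.objects.filter(id=message_id).exists():
                # Deleted after the notification was queued; skip it
                # quietly instead of raising a user-facing exception.
                return None
            raise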
This seems to happen when Apple is having a partial outage on some of
their APNS shards; it should be treated like other networking errors
connecting to APNS (with an automatic retry).
Apparently, we have a second code path where we might try to call
send_email library functions on old data, namely in the
queue_processors codebase. So we apply the same migration logic here.
This adds a function that sends a provided email to all administrators
of a realm, but in a single email. As a result, send_email now takes
arguments to_user_ids and to_emails instead of to_user_id and
to_email.
We adjust other APIs to match, but note that send_future_email does
not yet support the multiple recipients model for good reasons.
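For illustration, the new calling convention looks something like
this (the template name, from_address, and context are placeholders;
`admins` and `realm` are assumed to be in scope):

    from zerver.lib.send_email import FromAddress, send_email

    send_email(
        'zerver/emails/realm_reactivation',
        to_user_ids=[user.id for user in admins],
        from_address=FromAddress.SUPPORT,
        context={'realm': realm},
    )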
Tweaked by tabbott to modify `manage.py deliver_email` to handle
backwards-compatibility for any ScheduledEmail objects already in the
database.
Fixes #10896.
This prevents the warning about the server not being registered for
push notifications from being printed during development environment
startup by default. In development, that's the expected state, and we
don't need to spam up the output with that notice.
Previously, we frequently accessed user_profile.realm from outside the
loops that interact with UserProfile objects. This variable reuse
outside the loop could be confusing and should be a style/lint
violation.
While in this case, the behavior was correct (in that all users in the
loops were within the same realm), extracting a separate `realm`
variable significantly clarifies what's going on here.
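Schematically (illustrative code, not the actual diff):

    # Before: the loop variable quietly leaks past the loop.
    for user_profile in user_profiles:
        process(user_profile)
    notify(user_profile.realm)  # easy to misread

    # After: one clearly-scoped variable.
    realm = user_profiles[0].realm
    for user_profile in user_profiles:
        process(user_profile)
    notify(realm)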
This reverts commit 2fa77d9d54.
Further investigation has determined that this did not fix the
password-reset problem described in the previous commit message;
meanwhile, it causes other problems. We still need to track down the
root cause of the original password-reset bug.
This provides a nice user experience for folks where we do know what
their LDAP credentials are.
Though we need to fix #10917 before the content in the email will be
correct.
This is the first step of letting users use Zulip markdown in their
SHORT_TEXT and LONG_TEXT custom profile fields, so that they can
include emphasis, links, etc.
This doesn't include any frontend logic yet, however.
This commit changes the return type of get_possible_mentions_info to a
list instead of a dict, thus disposing of the hacky logic of storing
users with duplicate full names under name|id keys, which obfuscated
the code.
The other functions continue to use the dicts as before; however, there
are minor variable changes where needed to match the updated definition
of get_possible_mentions_info.
This function is equivalent to recipient_for_emails, but fetches
user_profiles by IDs, not by emails.
This commit is a part of our efforts surrounding #9474, but is
more primarily geared towards adding support for sending typing
notifications by user IDs.
Previously, get_user_profiles() was split into two functions:
* user_profiles_from_unvalidated_emails, which raised a
ValidationError upon encountering a non-existent user email.
* get_user_profiles, which caught the ValidationError raised
by user_profiles_from_unvalidated_emails and raised a
JsonableError instead.
According to Steve Howell, this complexity is partly a relic
of past refactoring and is unnecessarily heavy. It is better to
just raise JsonableError directly.
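A hedged sketch of the flattened approach (error text and exact
import paths may differ):

    from typing import List

    from zerver.lib.exceptions import JsonableError
    from zerver.models import Realm, UserProfile, get_user

    def get_user_profiles(emails: List[str],
                          realm: Realm) -> List[UserProfile]:
        user_profiles = []  # type: List[UserProfile]
        for email in emails:
            try:
                user_profiles.append(get_user(email, realm))
            except UserProfile.DoesNotExist:
                raise JsonableError("Invalid email '%s'" % (email,))
        return user_profiles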
recipient_for_emails is used by our typing notifications code.
user_profiles_from_unvalidated_emails is used by our typing
notifications code *and* for sending messages.
user_profiles_from_unvalidated_emails is a part of a larger
framework used by Addressee to validate recipient emails when sending
messages and will eventually need to be removed as we move forward
with #9474. So it makes sense to just inline this function within
recipient_for_emails so that we don't break our typing notifications
code in the future.
This commit is a part of our efforts surrounding #9474.
This library was absolutely essential as part of our Python 2->3
migration process, but all of its calls should be either no-ops or
encode/decode operations.
Note also that the library has been wrong since the incorrect
refactoring in 1f9244e060.
Fixes #10807.
This adds a web flow and management command for reactivating a Zulip
organization, with confirmation from one of the organization
administrators.
Further work is needed to make the emails nicer (ideally, we'd send
one email with all the admins on the `To` line, but the `send_email`
library doesn't support that).
Fixes #10783.
With significant tweaks to the email text by tabbott.
The previous content made it sound like we were actually sending a
push notification, which could be confusing/alarming in some cases
(see e.g. 9c224ccdd3). Instead, we make
clear that we're sending it to all clients (which one might correctly
suspect is vacuous in the development environment).
While it could make sense to print these logging statements at WARN
level on server startup, it doesn't make sense to do so on every
message (though it perhaps did make sense to do so before more recent
changes added good ways to discover you forgot to configure push
notifications).
Instead, we now just do a WARN log on queue processor startup, and
then at DEBUG level for individual messages.
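Sketch of the intended split (log text illustrative;
`user_profile_id` assumed in scope):

    import logging

    logger = logging.getLogger(__name__)

    # Once, at queue processor startup:
    logger.warning('Mobile push notifications are not configured.')

    # Per message, while still unconfigured:
    logger.debug('Skipping push notification for user %d: '
                 'push notifications are not configured.',
                 user_profile_id)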
Fixes #10894.
Change the truncation marker from `...` to `\n[message truncated]`
when receiving messages from the API or through e-mail. Also, update
tests to account for the change.
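The mechanics are roughly (length limit illustrative):

    MAX_MESSAGE_LENGTH = 10000
    TRUNCATION_MARKER = '\n[message truncated]'

    def truncate_body(body: str) -> str:
        if len(body) > MAX_MESSAGE_LENGTH:
            truncated = body[:MAX_MESSAGE_LENGTH
                             - len(TRUNCATION_MARKER)]
            return truncated + TRUNCATION_MARKER
        return body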
Fix #10871.
Our recent change in 2fa77d9d54 to
disable the cached_db cache backend broke upgrade-zulip-from-git with
an AttributeError; we fix that by checking the session engine before
trying to access its cache-related attributes.
This is initial work, which will help us establish habits of using a
well-tested approach for renaming a Zulip organization (since as part
of https://github.com/zulip/zulip-mobile/issues/3142, we'll likely
need to make this function do more).
This seems like kind of a silly function to extract
to topic.py, but it will theoretically help us sweep
"subject" if we change the DB.
It had test coverage.
We use the message a lot for the query modified
here, so I think it's worth taking the up-front
hit of getting bulkier objects to avoid O(N)
hops back to the database.
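Concretely, this is the classic Django trade-off of something like
(query illustrative; `message_ids` assumed in scope):

    from zerver.models import Message

    # One bulkier query, with related rows joined in up front...
    messages = Message.objects.select_related().filter(
        id__in=message_ids)
    # ...instead of N lazy lookups of message.sender,
    # message.sending_client, etc., each costing another round trip.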
Our webhook-errors.log file is riddled with exceptions that are
logged when a webhook is incorrectly configured to send data in
a non-JSON format. To avoid this, api_key_only_webhook_view
now supports an additional argument, notify_bot_owner_on_invalid_json.
This argument, when True, will send a PM notification to the bot's
owner notifying them of the configuration issue.
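Usage looks something like this (the webhook itself is illustrative):

    from django.http import HttpRequest, HttpResponse

    from zerver.decorator import api_key_only_webhook_view
    from zerver.lib.request import REQ, has_request_variables
    from zerver.lib.response import json_success
    from zerver.models import UserProfile

    @api_key_only_webhook_view('MyService',
                               notify_bot_owner_on_invalid_json=True)
    @has_request_variables
    def api_myservice_webhook(
            request: HttpRequest, user_profile: UserProfile,
            payload: str=REQ(argument_type='body')) -> HttpResponse:
        # If the body is not valid JSON, the decorator PMs the bot
        # owner about the misconfiguration instead of just logging
        # a server-side exception.
        return json_success()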
We (lexically) remove "subject" from the conversion code. The
`build_message` helper calls `set_topic_name` under the hood,
so things still have "subject" in the JSON.
There was good code coverage on `build_message`.
The various vars here that had recipient_subject
in the name now have either bucket or bucket_tup
there.
The shorter names are a bit easier to read, and the
original names were misleading for the PM case.
This was basically two search/replaces, and we have
good test coverage here, so it's pretty low risk
despite the messy diff.
We now attach zulip_db_data to the markdown engines
for classes that need it. This was the last remaining
global we had, so we remove `arguments.py` here.
The Markdown processor makes it fairly simple for
the helper classes to access the `md` engine. We
now write `_md_engine.zulip_message` to avoid having
the current message in the global namespace.
Note that we do reuse engines for multiple messages,
but each engine is specific to a realm. And we therefore
avoid even the theoretical possibility of leaking message
data between realms.
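The pattern is roughly (simplified; `message`, `db_data`, and
`content` are assumed in scope):

    # Attach per-message state to the (per-realm) engine...
    _md_engine.zulip_message = message
    _md_engine.zulip_db_data = db_data
    try:
        rendered_content = _md_engine.convert(content)
    finally:
        # ...and always clear it, so no message data lingers on
        # the engine after rendering.
        _md_engine.zulip_message = None
        _md_engine.zulip_db_data = None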
This makes us consistent with how we import codehilite.
Using Python's normal import mechanism avoids some overhead
with Markdown having to parse dotted notation.
These modules are tiny, so they shouldn't impact startup
too much. Also, by explicitly importing them, we avoid
the pitfall of having a successful startup and a broken
renderer.
We were building the same link regex every time
we build a Markdown engine, which happens twice
per realm. It's an expensive operation due to
the complexity of the regex and us reading a file.
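A hedged sketch of the fix, caching the compiled regex at module
scope (build_link_regex_pattern is a hypothetical stand-in for the
expensive build step):

    import re
    from typing import Optional, Pattern

    _web_link_regex = None  # type: Optional[Pattern[str]]

    def get_web_link_regex() -> Pattern[str]:
        global _web_link_regex
        if _web_link_regex is None:
            # Read the tlds file and compile the big regex exactly
            # once per process.
            _web_link_regex = re.compile(build_link_regex_pattern())
        return _web_link_regex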
Nested classes are kind of expensive in Python,
particularly when you throw in mypy annotations.
Also, flatter is arguably better, although it is
kind of a pain here not to have closures.
This change avoids hitting the Django ORM when
we don't find any possible group mentions in
the message content.
Django doesn't necessarily actually hit the database,
but it's still slow and shows up in profiles.
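Schematically (loosely modeled on the helpers involved):

    from zerver.lib.mention import possible_user_group_mentions
    from zerver.models import Realm, UserGroup

    def get_mentioned_user_groups(realm: Realm, content: str):
        group_names = possible_user_group_mentions(content)
        if not group_names:
            # Common case: no group-mention syntax at all, so
            # don't even build a queryset.
            return []
        return list(UserGroup.objects.filter(realm=realm,
                                             name__in=group_names))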
This commit speeds up the import by avoiding
sender lookups and instead using the data
for users that we already have in memory.
This avoids a few DB hops, many hops to memcached,
plus some object construction.
We now call do_render_markdown() directly. This
also makes it more explicit that the import has
never rendered alert words.
For the import-data codepath, we will call
the extracted function directly in a
subsequent commit.
The do_render_markdown() function has more
required parameters, which allows for more
explicit code and also allows us to flatten
out some logic related to alert words. (We
just pass in empty sets/dicts as needed).
We can rely on `message_realm` being the same
as `message.sender.realm`, which allows us to
skip two queries to the database for the rare
Zephyr mirroring case.
This function requires a message object, whereas
we want to work with JSON data to avoid unnecessary
queries when we import data. Inlining the function
sets us up for a subsequent refactoring.
We change the way we deal with theoretical return
values of `None` to use an assertion; otherwise,
we would have to loosen up a bunch of mypy types
from `str` to `Optional[str]`. It's not clear `None`
is even possible--we've moved toward throwing exceptions
there instead of silently failing.
This is somewhat hairy logic, so it's nice
to extract it and not worry about variable leaks.
Also, this moves some legacy "subject" references out
of actions.py.
We start by including functions that do custom
queries for topic history.
The goal of this library is partly to quarantine
the legacy "subject" column on Message.
Recently, one of our users reported that a JIRA webhook was not
able to send messages to a stream with a space character in its
name. Turns out that JIRA does something weird with webhook URLs,
such that escaped space characters (%20) are escaped again, so
that when the request gets to Zulip, the double-escaped %20 is
evaluated as the literal characters `%20`, and not as a space.
We fix this by unescaping the stream name on our end before
sending the message forward!
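The fix amounts to something like:

    from urllib.parse import unquote

    # The double-escaped name reaches us with literal '%20's;
    # unescape it before using it as the stream name.
    stream = unquote(stream)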
The previous logic was incorrect, in that if `content_type` was set to
None (which happens with Slack/HipChat export, among other things),
then we wouldn't run the `guess_type` logic to auto-detect the
Content-Type to send to S3.
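The corrected logic is roughly (`file_name` assumed in scope; the
fallback value is illustrative):

    from mimetypes import guess_type

    if content_type is None:
        # Auto-detect the Content-Type from the file name.
        content_type = (guess_type(file_name)[0]
                        or 'application/octet-stream')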
The UserMessage table can be huge, so creating a
bunch of entries in `ID_MAP` can overflow memory.
We don't have any tables that depend on `UserMessage`,
and we don't send the 'id' fields from `zerver_usermessage`
to the database, so re-mapping them was just busy-work.
This is a preparatory refactor for supporting hosting different Tornado
processes on different servers; to look up which Tornado server we
should be sending the event to, we'll need the realm object.
Apparently, the QUERY_STRING property of the report object wasn't
actually a string; since we only care about its string representation,
we should just stringify it.
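That is, roughly:

    # We only care about the string representation here.
    report['query_string'] = str(report['query_string'])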
Fixes #10745.
Use get_display_recipient to get stream names, and remove the
references to message.stream_name in push_notifications.py which were
added in 97571a203, as the actual stream names were being retrieved
only for Message objects associated with public streams.
When you send a message to a bot that wants
to talk via an outgoing webhook, and there's
an error (e.g. server is down), we send a
message to the bot's owner that links to the
message that triggered the error.
The code to produce those links was out of
date.
Now we move the important code to the
`url_encoding.py` library and fix the PM
links to use the more modern style (user_ids
instead of emails). We also replace "subject"
with "topic" in the stream urls.
This supports guest users in the user-info-form-modal as well as in the
role section of the admin-user-table.
With some fixes by Tim Abbott and Shubham Dhama.
This will help us eliminate camo from our production installs.
Basically, it helps us deduplicate some code with upcoming code
that will check the validity of a camo url.
Also, rename get_alert_from_message to get_gcm_alert.
With the implementation of get_apns_alert_title and
get_apns_alert_subtitle, the logic within get_alert_from_message
is only relevant to the GCM payload, so we adjust the name
accordingly.
Progresses #9949.
Resolves https://github.com/zulip/zulip-mobile/issues/1316.
The string that is returned from get_alert_from_message is
dependent upon the same message that is passed into get_apns_payload
and get_gcm_payload. The contents of those payloads are tested via
TestGetAPNsPayload and TestGetGCMPayload, which makes the tests for
get_alert_from_message redundant.
Also, simplify the logic by removing the last elif conditional.
If we use an outgoing webhook and the web server
responds with `widget_content` in the payload, we
include that in what we send through the send-message
codepath.
This makes outgoing webhook bots more consistent with
generic bots.
Bots are not allowed to use the same name as
other users in the realm (either bot or human).
This is kind of a big commit, but I wanted to
combine the post/patch (aka add/edit) checks
into one commit, since it's a change in policy
that affects both codepaths.
A lot of the noise is in tests. We had good
coverage on the previous code, including some places
like event testing where we were expediently
not bothering to use different names for
different bots in some longer tests. And then
of course I test some new scenarios that are relevant
to the new policy.
There are two new functions:
check_bot_name_available:
very simple Django query
check_change_bot_full_name:
this diverges from the 3-line
check_change_full_name, where the latter
is still used for the "humans" use case
And then we just call those in appropriate places.
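For reference, a hedged sketch of the first helper (error string
illustrative):

    from django.utils.translation import ugettext as _

    from zerver.lib.exceptions import JsonableError
    from zerver.models import UserProfile

    def check_bot_name_available(realm_id: int,
                                 full_name: str) -> None:
        dup_exists = UserProfile.objects.filter(
            realm_id=realm_id,
            full_name=full_name.strip(),
            is_active=True,
        ).exists()
        if dup_exists:
            raise JsonableError(_("Name is already in use!"))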
Note that there is still a loophole here
where you can get two bots with the same
name if you reactivate a bot named Fred
that was inactive when the second bot named
Fred was created. Also, we don't attempt
to fix historical data. So this commit
shouldn't be considered any kind of lockdown;
it's just meant to keep people from
inadvertently creating two bots with the
same name. For more
context, we are continuing to allow two
human users in the same realm to have the
same full name, and our code should generally
be tolerant of that possibility. (A good
example is our new mention syntax, which disambiguates
same-named people using ids.)
It's also worth noting that our web app client
doesn't try to scrub full_name from its payload in
situations where the user has actually only modified other
fields in the "Edit bot" UI. Starting here
we just handle this on the server, since it's
easy to fix there, and even if we fixed it in the web
app, there's no guarantee that other clients won't be
just as brute force. It wasn't exactly broken before,
but we'd needlessly write rows to audit tables.
Fixes #10509
When we create new ids for message rows, we
now sort the new ids by their corresponding
pub_date values in the rows.
This takes a sizable chunk of memory.
This feature only gets turned on if you
set sort_by_date to True in realm.json.
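Schematically (variable names illustrative):

    if sort_by_date:
        # Sort rows chronologically first, so the freshly allocated
        # ascending ids line up with pub_date order.
        rows.sort(key=lambda row: row['pub_date'])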
We could migrate all the current PREMIUM_FREE organizations to have more
invites, but this setting mainly affects orgs right as they are starting, so
it's probably fine.
We seem to have been doing too much sharpening on the thumbnails.
The purpose of sharpening here was just to counter the softening
effects of a resize on an image, but overdoing it is bad.
The value sharpen(0.5,0.2,true) seems to achieve the best results
across different displays, as revealed in manual trial-and-error
testing.
Thanks to @borisyankov for pointing out the issue and suggesting
the values.
For some webhook endpoints where the third-party API requires us to do
this, the user's API key might appear in error emails via the
`QUERY_STRING` parameter. Fix that by filtering out any actual content
from those parameters; what we usually need for debugging is just which
parameters were provided.
The APNS client libraries (especially the hyper.http20 one) were
determined via profiling to take significant time during the import
process, so we move them to be lazily imported in order to optimize
the overall Zulip import process. This saves up to about 100ms in
import time.
These libraries are only used in certain Django processes inside
zulipchat.com, and so are unnecessary both in development as well as
for self-hosted Zulip servers.
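The pattern is the usual lazy import (sketch; the settings names are
from memory and may differ):

    from django.conf import settings

    _apns_client = None

    def get_apns_client():
        global _apns_client
        if _apns_client is None:
            # Deferred import: apns2 (and its hyper.http20
            # dependency) is slow to import, so only processes that
            # actually send APNS notifications pay that cost.
            from apns2.client import APNsClient
            _apns_client = APNsClient(
                settings.APNS_CERT_FILE,
                use_sandbox=settings.APNS_SANDBOX)
        return _apns_client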
This is a preparatory commit for the upcoming changes. It was
meaningful to extract this one because this function is essentially a
condition check on whether a given url is one of the user_uploads or an
external one. Based on its value, we decide whether a url must be
thumbnailed or not, and thus this function will also be used in an
upcoming commit patching lib/thumbnail.py to do the same check before
thumbnail url generation.
We are basically adding a check for a url being external (belonging
to some 3rd party web site hosting the image) or being one of the
user uploaded files. User uploaded files are served by a separate
endpoint, which is /user_uploads/. Any other local url such as
/user_avatars/ or /static/ should never be sent to thumbor for
thumbnailing.
Not sending /user_avatars/ to thumbor for thumbnailing makes sense
because they are already thumbnailed and stored appropriately.
/static/ urls host very few images we use for demos, and they can
safely be excluded from thumbnailing.
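A hedged sketch of the extracted check (function name hypothetical):

    from urllib.parse import urlparse

    def user_uploads_or_external(url: str) -> bool:
        # Thumbnail external images (any url with a hostname) and
        # our own /user_uploads/ files; never /user_avatars/,
        # /static/, or other local paths.
        return (bool(urlparse(url).netloc)
                or url.startswith('/user_uploads/'))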
The Zulip API is to be used on both development and production
servers, and really we just need to talk about zuliprc files.
There's a similar issue for the JS docs, but we need to fix the
copy/paste issues with those as well.
We don't want really long urls to lead to truncated
keys, or we could theoretically have two different
urls get their previews mixed up.
Also, this suppresses warnings about exceeding the
250 char limit.
Finally, this gives the key a proper prefix.
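A hedged sketch of such a key (prefix and hash choice illustrative):

    import hashlib

    def preview_url_cache_key(url: str) -> str:
        # A prefixed, fixed-length digest: stays under memcached's
        # 250-char key limit and can't collide via truncation.
        digest = hashlib.sha1(url.encode('utf-8')).hexdigest()
        return 'preview_url:%s' % (digest,)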
We use UserMessageLite to avoid Django overhead, and we
do updates in chunks of 10000. (The export may be broken
into several files already, but a reasonable chunking at
import time is good defense against running out of memory.)
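Schematically (`bulk_insert_ums` is a hypothetical bulk-insert
helper):

    CHUNK_SIZE = 10000

    def import_usermessages(all_ums):
        # Insert UserMessageLite rows in fixed-size batches, so a
        # huge export can't exhaust memory at import time.
        for i in range(0, len(all_ums), CHUNK_SIZE):
            bulk_insert_ums(all_ums[i:i + CHUNK_SIZE])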