This works by yielding messages sorted based on timestamp. Because
the Slack exports are broken into files by date, it's convenient to do
a 2-layer sorting process, where we open all the files for a given
day, and then sort their messages by timestamp before yielding them.
Fixes #10930.
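For illustration, a minimal sketch of the two-layer approach, assuming the usual Slack export layout of one directory per channel with one JSON file per day (the real conversion code differs in its details):

```python
import glob
import json
import os
from collections import defaultdict
from typing import Any, Dict, Iterator, List

def yield_messages_sorted(export_dir: str) -> Iterator[Dict[str, Any]]:
    # Group the per-channel files by date; Slack names them YYYY-MM-DD.json.
    files_by_day: Dict[str, List[str]] = defaultdict(list)
    for path in glob.glob(os.path.join(export_dir, "*", "*.json")):
        files_by_day[os.path.basename(path)].append(path)

    # Layer 1: walk the days in date order.
    for day in sorted(files_by_day):
        day_messages: List[Dict[str, Any]] = []
        for path in files_by_day[day]:
            with open(path) as f:
                day_messages += json.load(f)
        # Layer 2: sort that day's messages by timestamp before yielding.
        for message in sorted(day_messages, key=lambda m: float(m["ts"])):
            yield message
```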
If a user deletes a message between when it triggered a potential push
notification for another user, and when that notification was actually
sent, we'd end up with a situation where the `Message` table didn't
have an entry for the requested message ID.
The clean fix for this is to still throw an exception in the event
that the message doesn't exist at all, but if it exists in
ArchivedMessage, don't throw a user-facing exception.
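A hedged sketch of the resulting check; the helper name and import paths are assumptions, not the actual code path in the push notification queue:

```python
from typing import Optional

from django.utils.translation import ugettext as _

from zerver.lib.request import JsonableError  # import path is an assumption
from zerver.models import ArchivedMessage, Message

def access_message_for_push(message_id: int) -> Optional[Message]:
    """Hypothetical helper illustrating the behavior described above."""
    try:
        return Message.objects.get(id=message_id)
    except Message.DoesNotExist:
        if ArchivedMessage.objects.filter(id=message_id).exists():
            # Deleted between queueing and delivery; skip quietly.
            return None
        raise JsonableError(_("Invalid message ID"))
```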
This seems to happen when Apple is having a partial outage on some of
their APNS shards; it should be treated like other networking errors
connecting to APNS (with an automatic retry).
Apparently, we have a second code path where we might try to call
send_email library functions on old data, namely in the
queue_processors codebase. So we apply the same migration logic here.
This adds a function that sends provided email to all administrators
of a realm, but in a single email. As a result, send_email now takes
arguments to_user_ids and to_emails instead of to_user_id and
to_email.
We adjust other APIs to match, but note that send_future_email does
not yet support the multiple recipients model for good reasons.
Tweaked by tabbott to modify `manage.py deliver_email` to handle
backwards-compatibily for any ScheduledEmail objects already in the
database.
Fixes #10896.
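For illustration, a sketch of how a caller might address a single email to every realm administrator with the new signature; everything beyond the to_user_ids parameter (template name, helper name, admin lookup) is an assumption:

```python
from typing import Any, Dict

from zerver.lib.send_email import FromAddress, send_email  # paths assumed
from zerver.models import Realm, UserProfile

def email_realm_admins(realm: Realm, context: Dict[str, Any]) -> None:
    # One email, with all administrators on the recipient list.
    admin_ids = [
        admin.id
        for admin in UserProfile.objects.filter(realm=realm, is_realm_admin=True)
    ]
    send_email(
        "zerver/emails/realm_reactivation",  # template name is illustrative
        to_user_ids=admin_ids,
        from_address=FromAddress.SUPPORT,
        context=context,
    )
```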
While this would never happen for a real article, this prevents a 500
for a situation which is definitely user error and should be a 40x (in
this case, a 404).
As part of this, we refactor the main view code to do validation in a
single code path, since the semi-duplicated-in-3-places logic was
getting pretty buggy.
This prevents the warning about push notifications not being
registered for from being printed in development environment startup
by default. In development, that's the expected state, and we don't
need to spam up the output with that notice.
Previously, we frequently accessed user_profile.realm from outside the
loops that interact with UserProfile objects. This variable reuse
outside the loop could be confusing and should be a style/lint
violation.
While in this case, the behavior was correct (in that all users in the
loops were within the same realm), extracting a separate `realm`
variable significantly clarifies what's going on here.
The previous implementation used run_parallel incorrectly, passing it
a set of very small jobs (each was to download a single file), which
meant that we'd end up forking once for every file to download.
This correct implementation sends each of N threads 1/N of the files
to download, which is more consistent with the goal of distributing
the download work between N threads.
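A generic sketch of the chunking strategy (the real code goes through Zulip's run_parallel helper rather than raw threads):

```python
import threading
from typing import Callable, List

def download_in_parallel(files: List[str], download: Callable[[str], None],
                         num_threads: int = 6) -> None:
    # Give each of the N threads roughly 1/N of the files, instead of
    # spawning one worker per file.
    chunks = [files[i::num_threads] for i in range(num_threads)]

    def worker(chunk: List[str]) -> None:
        for path in chunk:
            download(path)

    threads = [threading.Thread(target=worker, args=(chunk,))
               for chunk in chunks if chunk]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```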
There are (at least) two types of objects that could be sent with a
charge.succeeded event, a Charge (e.g. for credit cards) or a Payment (if
they pay by ACH). We were handling the first but not the second.
This commit also updates the fixture for the existing charge.succeeded event
to the latest API version.
This reverts commit 2fa77d9d54.
Further investigation has determined that this did not fix the
password-reset problem described in the previous commit message;
meanwhile, it causes other problems. We still need to track down the
root cause of the original password-reset bug.
This provides a nice user experience for folks where we do know what
their LDAP credentials are.
Though we need to fix #10917 before the content in the email will be
correct.
This is the first step of letting users use Zulip markdown in their
SHORT_TEXT and LONG_TEXT custom profile fields, so that they can
include emphasis, links, etc.
This doesn't include any frontend logic yet, however.
Recent changes merged in #10877 didn't handle these events
correctly. The linkified_id function breaks for the `discount`
object in the JSON payload. A cursory glance at Stripe's docs
tells me that since a discount is associated with a customer
or a coupon, it makes sense for a `discount` object to not have
an ID that can necessarily be linked to. So, we can just link
to the associated coupon instead.
This commit changes the return type of get_possible_mentions_info to a
list instead of a dict, thus disposing of the hacky logic of storing
users with duplicate full names with name|id keys that made the code
obfuscated.
The other functions continue to use the dicts as before, however, there
are minor variable changes where needed in accordance with the updated
definition of get_possible_mentions_info.
This function is equivalent to recipient_for_emails, but fetches
user_profiles by IDs, not by emails.
This commit is a part of our efforts surrounding #9474, but is
more primarily geared towards adding support for sending typing
notifications by user IDs.
Previously, get_user_profiles() was split into two functions:
* user_profiles_from_unvalidated_emails, which raised a
ValidationError upon encountering a non-existent user email.
* get_user_profiles, which caught the ValidationError raised
by user_profiles_from_unvalidated_emails and raised a
JsonableError instead.
According to Steve Howell, this complexity is partly a relic
of past refactoring and is unnecessarily heavy. It is better to
just raise JsonableError directly.
recipient_for_emails is used by our typing notifications code.
user_profiles_from_unvalidated_emails is used by our typing
notifications code *and* for sending messages.
user_profiles_from_unvalidated_emails is a part of a larger
framework used by Addressee to validate recipient emails when sending
messages and will eventually need to be removed as we move forward
with #9474. So it makes sense to just inline this function within
recipient_for_emails so that we don't break our typing notifications
code in the future.
This commit is a part of our efforts surrounding #9474.
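Roughly, the simplified shape looks like this; the function name and error message are assumptions:

```python
from typing import List

from django.utils.translation import ugettext as _

from zerver.lib.request import JsonableError  # import path is an assumption
from zerver.models import Realm, UserProfile

def user_profiles_from_ids(user_ids: List[int], realm: Realm) -> List[UserProfile]:
    user_profiles = []
    for user_id in user_ids:
        try:
            user_profiles.append(UserProfile.objects.get(id=user_id, realm=realm))
        except UserProfile.DoesNotExist:
            # Raise the user-facing error directly, with no intermediate
            # ValidationError layer to catch and re-raise.
            raise JsonableError(_("Invalid user ID {}").format(user_id))
    return user_profiles
```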
This library was absolutely essential as part of our Python 2->3
migration process, but all of its calls should be either no-ops or
encode/decode operations.
Note also that the library has been wrong since the incorrect
refactoring in 1f9244e060.
Fixes #10807.
This adds a web flow and management command for reactivating a Zulip
organization, with confirmation from one of the organization
administrators.
Further work is needed to make the emails nicer (ideally, we'd send
one email with all the admins on the `To` line, but the `send_email`
library doesn't support that).
Fixes #10783.
With significant tweaks to the email text by tabbott.
The previous content made it sound like we were actually sending a
push notification, which could be confusing/alarming in some cases
(see e.g. 9c224ccdd3). Instead, we make
clear that we're sending it to all clients (which one might correctly
suspect is vacuous in the development environment).
While it could make sense to print these logging statements at WARN
level on server startup, it doesn't make sense to do so on every
message (though it perhaps did make sense to do so before more recent
changes added good ways to discover you forgot to configure push
notifications).
Instead, we now just do a WARN log on queue processor startup, and
then at DEBUG level for individual messages.
Fixes #10894.
For messages with strange senders, we don't import
messages. Basically, we only import a message if
it has a sender with an id that maps to a non-deleted
user.
We now account for streams having users that
may be deleted. We do a couple things:
- use a loop instead of map
- only pass in users to hipchat_subscriber
- early-exit if there are no users
- skip owner/members logic for public streams
Change the truncation marker from `...` to `\n[message truncated]`
when receiving messages from the API or through e-mail. Also, update
tests to account for the new change.
Fix #10871.
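A minimal sketch of the truncation behavior; the length constant is illustrative:

```python
MAX_MESSAGE_LENGTH = 10000  # illustrative limit

def truncate_content(content: str) -> str:
    marker = "\n[message truncated]"
    if len(content) > MAX_MESSAGE_LENGTH:
        return content[: MAX_MESSAGE_LENGTH - len(marker)] + marker
    return content
```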
There are only a handful of non-JSON webhooks that wouldn't
benefit from the notify_bot_owner_on_invalid_json feature.
Specifically, these are the webhooks where the third-party product
uses another format, whether it be HTML form-encoded, XML, or
something else.
Tweaked by tabbott to correct the list of excluded webhooks.
Previously, the Stripe webhook code was riddled with implicit
assertions that there were exactly N event types within a given
category, and we handled the last one in a final `else` clause in the
block. This was likely to cause confusing problems in the event that
we're missing an event type (or Stripe adds a new one).
We fix this by just adding a few more conditionals and raising the
standard "unexpected event type" exception for the others.
Our recent change in 2fa77d9d54 to
disable the cached_db cache backend broke upgrade-zulip-from-git with
an AttributeError; we fix that by checking the session engine before
trying to access its cache-related attributes.
Removes the email_not_verified option. That option was used to assign
email_data a different set of emails for a test. Instead, this
refactor allows specifying the email_data itself in the function
that calls github_oauth2_test. Flags like email_not_verified are
generally used in only one test. This is a preparatory refactor for the
choose-email screen, which might otherwise have introduced multiple flags.
The email_list returned has the primary email as the first element.
Testing: the order of the emails in the test was changed to put a
verified email before the primary one. With that reordering, the tests
would fail without this commit's change.
This is initial work, which will help us establish habits of using a
well-tested approach for renaming a Zulip organization (since as part
of https://github.com/zulip/zulip-mobile/issues/3142, we'll likely
need to make this function do more).
This seems like kind of a silly function to extract
to topic.py, but it will theoretically help us sweep
"subject" if we change the DB.
It had test coverage.
We use the message a lot for the query modified
here, so I think it's worth taking the up-front
hit of getting bulkier objects to avoid O(N)
hops back to the database.
Normal hipchat exports use integer ids for their
users and "rooms," which we just borrowed during
conversion.
Atlassian Stride uses stride UUIDs for these instead, but otherwise
has the same export format.
We now introduce IdMapper to handle external ids
that aren't integers. The IdMapper will map UUID
ids to ints and remember them. For ints it just
leaves them alone.
Fixes #10805.
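A minimal sketch of such a mapper (the real IdMapper may track more state):

```python
from typing import Any, Dict

class IdMapper:
    def __init__(self) -> None:
        self.map: Dict[Any, int] = {}
        self.next_id = 1

    def get(self, external_id: Any) -> int:
        # Integer ids (normal HipChat exports) pass through unchanged.
        if isinstance(external_id, int):
            return external_id
        # Stride-style UUIDs get a stable integer assigned on first sight.
        if external_id not in self.map:
            self.map[external_id] = self.next_id
            self.next_id += 1
        return self.map[external_id]
```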
Our webhook-errors.log file is riddled with exceptions that are
logged when a webhook is incorrectly configured to send data in
a non-JSON format. To avoid this, api_key_only_webhook_view
now supports an additional argument, notify_bot_owner_on_invalid_json.
This argument, when True, will send a PM notification to the bot's
owner notifying them of the configuration issue.
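A rough usage sketch from a webhook view; the webhook name is made up, while the decorator keyword matches the commit:

```python
from typing import Any, Dict

from django.http import HttpRequest, HttpResponse

from zerver.decorator import api_key_only_webhook_view
from zerver.lib.request import REQ, has_request_variables
from zerver.lib.response import json_success
from zerver.models import UserProfile

@api_key_only_webhook_view("ExampleService", notify_bot_owner_on_invalid_json=True)
@has_request_variables
def api_exampleservice_webhook(request: HttpRequest, user_profile: UserProfile,
                               payload: Dict[str, Any]=REQ(argument_type='body')) -> HttpResponse:
    # If the third party posts non-JSON data, the decorator PMs the bot's
    # owner about the misconfiguration instead of logging an exception.
    return json_success()
```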
Until we resolve https://github.com/zulip/zulip/issues/10832, we will
need to maintain our own forked copy of Django's SessionMiddleware.
We apparently let this get out of date.
This fixes a few subtle bugs involving the user logout experience that
were throwing occasional exceptions (e.g. the UpdateError fix you can
see).
We (lexically) remove "subject" from the conversion code. The
`build_message` helper calls `set_topic_name` under the hood,
so things still have "subject" in the JSON.
There was good code coverage on `build_message`.
We're trying to sweep "subject" out of the codebase,
even when it has nothing to do with our legacy "subject"
field. The rewording here will prevent some linter
noise.
This was a pretty nasty error, where we were accidentally accessing
the parent list in this inner loop function.
This appears to have been introduced as a refactoring bug in
7822ef38c2.
The various vars here that had recipient_subject
in the name now have either bucket or bucket_tup
there.
The shorter names are a bit easier to read, and the
original names were misleading for the PM case.
This was basically two search/replaces, and we have
good test coverage here, so it's pretty low risk
despite the messy diff.
We now attach zulip_db_data to the markdown engines
for classes that need it. This was the last remaining
global we had, so we remove `arguments.py` here.
The Markdown processor makes it fairly simple for
the helper classes to access the `md` engine. We
now write `_md_engine.zulip_message` to avoid having
the current message in the global namespace.
Note that we do reuse engines for multiple messages,
but each engine is specific to a realm. And we therefore
avoid even the theoretical possibility of leaking message
data between realms.
This makes us consistent with how we import codehilite.
Using Python's normal import mechanism avoids some overhead
with Markdown having to parse dotted notation.
These modules are tiny, so they shouldn't impact startup
too much. Also, by explicitly importing them, we avoid
the pitfall of having a successful startup and a broken
renderer.
We were building the same link regex every time
we build a Markdown engine, which happens twice
per realm. It's an expensive operation due to
the complexity of the regex and us reading a file.
Nested classes are kind of expensive in Python,
particularly when you throw in mypy annotations.
Also, flatter is arguably better, although it is
kind of a pain here not to have closures.
This change avoids hitting the Django ORM when
we don't find any possible group mentions in
the message content.
Django doesn't necessarily actually hit the database,
but it's still slow and shows up in profiles.
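A sketch of the early exit, with an illustrative regex standing in for the real group-mention pattern in the markdown processor:

```python
import re
from typing import Dict, List

# Illustrative pattern for @*group-name* style mentions.
MENTION_GROUP_RE = re.compile(r"@\*([^*]+)\*")

def possible_user_group_mentions(content: str) -> List[str]:
    return MENTION_GROUP_RE.findall(content)

def get_mentioned_group_ids(content: str) -> Dict[str, int]:
    names = possible_user_group_mentions(content)
    if not names:
        # Early exit: don't touch the ORM (or even build a queryset)
        # when the message can't possibly contain group mentions.
        return {}
    # ... otherwise fall through to a single UserGroup query ...
    return {}
```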
This commit speeds up the import by avoiding
sender lookups and instead using the data
for users that we already have in memory.
This avoids a few DB hops, many hops to memcached,
plus some object construction.
We now call do_render_markdown() directly. This
also makes it more explicit that the import has
never rendered alert words.
For the import-data codepath, we will call
the extracted function directly in a
subsequent commit.
The do_render_markdown() function has more
required parameters, which allows for more
explicit code and also allows us to flatten
out some logic related to alert words. (We
just pass in empty sets/dicts as needed).
We can rely on `message_realm` being the same
as `message.sender.realm`, which allows us to
skip two queries to the database for the rare
Zephyr mirroring case.
This function requires a message object, whereas
we want to work with JSON data to avoid unnecessary
queries when we import data. Inlining the function
sets us up for a subsequent refactoring.
We change the way we deal with theoretical return
values of `None` to use an assertion; otherwise,
we would have to loosen up a bunch of mypy types
from `str` to `Optional[str]`. It's not clear `None`
is even possible--we've moved toward throwing exceptions
there instead of silently failing.
This is somewhat hairy logic, so it's nice
to extract it and not worry about variable leaks.
Also, this moves some legacy "subject" references out
of actions.py.
We start by including functions that do custom
queries for topic history.
The goal of this library is partly to quarantine
the legacy "subject" column on Message.
A recent change to check_send_webhook_message allows webhooks to
unescape stream names before sending a message. This commit adds
a test for the edge case where the webhook URL is escaped twice by
a third-party.
Recently, one of our users reported that a JIRA webhook was not
able to send messages to a stream with a space character in its
name. Turns out that JIRA does something weird with webhook URLs,
such that escaped space characters (%20) are escaped again, so
that when the request gets to Zulip, the double escaped %20 is
evaluated as the literal characters `%20`, and not as a space.
We fix this by unescaping the stream name on our end before
sending the message forward!
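The fix amounts to unquoting the stream name once before sending, roughly:

```python
from urllib.parse import unquote

def unescape_stream_name(stream: str) -> str:
    # JIRA double-escapes, so "my%2520stream" arrives at Zulip as the
    # literal characters "%20"; unquoting once recovers "my stream".
    return unquote(stream)

assert unescape_stream_name("my%20stream") == "my stream"
```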
The previous logic was incorrect, in that if `content_type` was set to
None (which happens with Slack/HipChat export, among other things),
then we wouldn't run the `guess_type` logic to auto-detect the
Content-Type to send to S3.
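A sketch of the corrected logic; the helper name is hypothetical:

```python
import mimetypes
from typing import Optional

def resolve_content_type(content_type: Optional[str], file_name: str) -> str:
    # Previously we only ran guess_type when the caller omitted content_type
    # entirely; now an explicit None also triggers auto-detection.
    if content_type is None:
        guessed, _ = mimetypes.guess_type(file_name)
        return guessed if guessed is not None else "application/octet-stream"
    return content_type
```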
The last_modified field is intended to support setting the
orig-last-modified field in the S3 backend when importing, basically
to keep track of this bit of pre-export data for debugging. In the
event that it isn't available, the correct thing to do is not write
out an invalid `last_modified` field; we should just not write it out
at all.
This fixes the fact that these emoji were sometimes not displaying
properly (because of changes in the emoji names used in the codebase),
while also making this integration more standard (since it was the
only one with such an aggressive use of emoji).
The UserMessage table can be huge, so creating a
bunch of entries in `ID_MAP` can overflow memory.
We don't have any tables that depend on `UserMessage`,
and we don't send the 'id' fields from `zerver_usermessage`
to the database, so re-mapping them was just busy-work.
This is a preparatory refactor for supporting hosting different Tornado
processes on different servers; to look up which Tornado server we
should be sending the event to, we'll need the realm object.
This should make it possible for there to safely be multiple Tornado
processes running on different ports on the same system.
It may also fix a rare race bug in development, where previously, it
was possible for the Tornado processes for Casper and the main
development server to interfere; I haven't investigated whether this
was a real bug or not, but now those two services will use independent
Tornado files.
We still need to add something to direct traffic between the different
Tornado processes.
This reverts commit 3645bb9225.
This change was incorrect, because `is_zephyr_mirror_realm` on Realm
is a Python property (not a database field) and thus isn't available
in the migration codebase.
Since this migration is only run for very old servers, this should
have no impact.
Apparently, the QUERY_STRING property of the report object wasn't
actually a string; since we only care about its string representation,
we should just stringify it.
Apparently, we weren't resetting the query counters inside the
websockets codebase, resulting in broken log results like this:
SOCKET 403 2ms (db: 1ms/2q) /socket/auth [transport=websocket] (unknown via ?)
SOCKET 403 5ms (db: 2ms/3q) /socket/auth [transport=websocket] (unknown via ?)
SOCKET 403 2ms (db: 3ms/4q) /socket/auth [transport=websocket] (unknown via ?)
SOCKET 403 2ms (db: 3ms/5q) /socket/auth [transport=websocket] (unknown via ?)
SOCKET 403 2ms (db: 4ms/6q) /socket/auth [transport=websocket] (unknown via ?)
SOCKET 403 2ms (db: 5ms/7q) /socket/auth [transport=websocket] (unknown via ?)
SOCKET 403 2ms (db: 5ms/8q) /socket/auth [transport=websocket] (unknown via ?)
SOCKET 403 3ms (db: 6ms/9q) /socket/auth [transport=websocket] (unknown via ?)
The correct fix for this is to call reset_queries at the start of each
endpoint within the websockets system. As it turns out, we're already
calling record_request_start_data there, and in fact should be calling
`reset_queries` in all code paths that use that function (the other
code paths, in zerver/middleware.py, do it manually with
connection.connection.queries = []).
So we can clean up the code in a way that reduces risk for similar
future issues and fix this logging bug with this simple refactor.
Fixes #10745.
Use get_display_recipient to get stream names, and remove the
references to message.stream_name in push_notifications.py which were
added in 97571a203, as the actual stream names were being retrieved
only for Message objects associated with public streams.
This is mostly an extraction, but it does change the
way we calculate `content`. We append the markdown
links from ALL files to any content that came in the
message itself.
Separating this out also allows us to add more
test coverage for the extracted code.
We now use subscriber_map for building UserMessage
rows in Slack/Gitter conversions.
This is mostly designed to simplify the code, rather
than having to scan the entire subscribers for each
message.
I am guessing this will improve performance for most
conversions. We sort small lists on every message,
in order to be deterministic, but the sorting cost
is probably more than offset by avoiding the O(N)
scans across all subscriptions. Also, it's probably
negligible in the grand scheme of things, compared
to JSON parsing, file I/O, etc.
This commit also fixes some typos with mentioned_users_id ->
mentioned_user_ids and cleans up a test a bit as well.
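A rough sketch of the subscriber_map approach; the field names loosely follow the export format and are partly assumptions:

```python
from typing import Any, Dict, List, Set

def build_subscriber_map(zerver_subscription: List[Dict[str, Any]]) -> Dict[int, Set[int]]:
    # recipient_id -> set of user_ids subscribed to that recipient.
    subscriber_map: Dict[int, Set[int]] = {}
    for sub in zerver_subscription:
        subscriber_map.setdefault(sub["recipient"], set()).add(sub["user_profile"])
    return subscriber_map

def user_ids_for_message(subscriber_map: Dict[int, Set[int]],
                         message: Dict[str, Any]) -> List[int]:
    # Sort the (small) per-message set so the output stays deterministic.
    return sorted(subscriber_map.get(message["recipient"], set()))
```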
We now have all three third party
conversions (Gitter/Slack/Hipchat)
go through build_user_message().
Hipchat was already using this helper.
We also avoid callers having to pass in
an id to build_user_message().
When you send a message to a bot that wants
to talk via an outgoing webhook, and there's
an error (e.g. server is down), we send a
message to the bot's owner that links to the
message that triggered the error.
The code to produce those links was out of
date.
Now we move the important code to the
`url_encoding.py` library and fix the PM
links to use the more modern style (user_ids
instead of emails). We also replace "subject"
with "topic" in the stream urls.
This supports guest users in the user-info-form-modal as well as in the
role section of the admin-user-table.
With some fixes by Tim Abbott and Shubham Dhama.
The purpose of this commit is to pass information
to the frontend about whether the message response received
has been limited due to plan restrictions or not.
To implement this, the backend for limiting the message
history had to be rewritten, as we used to fetch
only the message rows whose id was greater than
first_visible_message_id. The filtered rows give us
no information on whether the message history was
limited or not. So the backend was rewritten to not
do any limiting of the message rows while
making the query. The limiting of rows is now done in
post_process_limited_query, which also returns
the value of the history_limited flag.
Tweaked by tabbott to note a few cases where the results are
incorrect. I'm merging this despite those, because those cases don't
impact the correctness of the feature, and it may have tricky
performance implications to fix correctly.
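A simplified sketch of the post-processing step, assuming each row is a tuple whose first element is the message id (the real function also handles anchors and the found_* flags):

```python
from typing import Any, List, Tuple

def post_process_limited_query(rows: List[Any],
                               first_visible_message_id: int) -> Tuple[List[Any], bool]:
    # Drop rows the plan doesn't allow the user to see, after the query.
    visible_rows = [r for r in rows if r[0] >= first_visible_message_id]
    # If anything was dropped, the history shown was limited by the plan.
    history_limited = len(visible_rows) < len(rows)
    return visible_rows, history_limited
```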
Apparently, we weren't actually checking that found_oldest had the
correct value; fortunately, this didn't actually result in a problem,
because the values were always correct. But this will be important as
we start extending this test.
This is a preparatory commit which will help us with removing camo.
In the upcoming commits, we introduce a new endpoint based on the
CAMO_URI setting. Since camo could have been hosted on a different
server from the main Zulip server, this change helps us exercise in
tests how that scenario might be dealt with.
This will help us eliminate camo from our production installs.
Basically, it helps us deduplicate some code with upcoming code
that will check the validity of a camo URL.
Also, rename get_alert_from_message to get_gcm_alert.
With the implementation of get_apns_alert_title and
get_apns_alert_subtitle, the logic within get_alert_from_message
is only relevant to the GCM payload, so we adjust the name
accordingly.
Progresses #9949.
Resolves https://github.com/zulip/zulip-mobile/issues/1316.
The string that is returned from get_alert_from_message is
dependent upon the same message that is passed into get_apns_payload
and get_gcm_payload. The contents of those payloads are tested via
TestGetAPNsPayload and TestGetGCMPayload, which makes the tests for
get_alert_from_message redundant.
Also, simplify the logic by removing the last elif conditional.
If we use an outgoing webhook and the web server
responds with `widget_content` in the payload, we
include that in what we send through the send-message
codepath.
This makes outgoing webhook bots more consistent with
generic bots.
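Conceptually, the success handler now does something like the following; the helper name and exact serialization are assumptions:

```python
import json
from typing import Any, Dict, Optional

def extract_widget_content(response_json: Dict[str, Any]) -> Optional[str]:
    # If the outgoing webhook's server included widget_content in its
    # response, pass it along (serialized) into the send-message codepath,
    # just as we do for generic bots.
    widget_content = response_json.get("widget_content")
    if widget_content is None:
        return None
    return json.dumps(widget_content)
```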
The test named `test_archiving_messages_with_attachment`
started flaking recently. We use sets for comparison
instead of lists to avoid arbitrary sorting differences.
Masking content can be useful for testing
out conversions where you're dealing
with data from customers and want to avoid
inadvertently reading their content (while
still having semi-realistic messages).
Having two smaller functions should make it
easier to customize the behavior for each specific
use case. The only reason they were ever coupled
was to keep ids in sequence, but the recent NEXT_ID
changes make that a non-issue now.
We now instantiate NEXT_ID in sequencer.py, which avoids
having multiple modules make multiple copies of a sequencer
and possibly causing id collisions.
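A minimal sketch of a shared sequencer under these assumptions (the real sequencer.py has more structure):

```python
from typing import Callable, Dict

def _make_id_seq() -> Callable[[str], int]:
    counters: Dict[str, int] = {}

    def next_id(name: str) -> int:
        # One counter per table name, shared by every module that imports
        # NEXT_ID, so ids can't collide across callers.
        counters[name] = counters.get(name, 0) + 1
        return counters[name]

    return next_id

NEXT_ID = _make_id_seq()
```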
Bots are not allowed to use the same name as
other users in the realm (either bot or human).
This is kind of a big commit, but I wanted to
combine the post/patch (aka add/edit) checks
into one commit, since it's a change in policy
that affects both codepaths.
A lot of the noise is in tests. We had good
coverage on the previous code, including some places
like event testing where we were expediently
not bothering to use different names for
different bots in some longer tests. And then
of course I test some new scenarios that are relevant
with the new policy.
There are two new functions:
check_bot_name_available:
very simple Django query
check_change_bot_full_name:
this diverges from the 3-line
check_change_full_name, where the latter
is still used for the "humans" use case
And then we just call those in appropriate places.
Note that there is still a loophole here
where you can get two bots with the same
name if you reactivate a bot named Fred
that was inactive when the second bot named
Fred was created. Also, we don't attempt
to fix historical data. So this commit
shouldn't be considered any kind of lockdown,
it's just meant to help people from
inadvertently creating two bots of the same
name where they don't intend to. For more
context, we are continuing to allow two
human users in the same realm to have the
same full name, and our code should generally
be tolerant of that possibility. (A good
example is our new mention syntax, which disambiguates
same-named people using ids.)
It's also worth noting that our web app client
doesn't try to scrub full_name from its payload in
situations where the user has actually only modified other
fields in the "Edit bot" UI. Starting here
we just handle this on the server, since it's
easy to fix there, and even if we fixed it in the web
app, there's no guarantee that other clients won't be
just as brute force. It wasn't exactly broken before,
but we'd needlessly write rows to audit tables.
Fixes #10509.
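The availability check itself is a very small Django query, roughly like the sketch below; the real helper presumably raises a user-facing error rather than returning a boolean:

```python
from zerver.models import UserProfile

def check_bot_name_available(realm_id: int, full_name: str) -> bool:
    # True if no active user (bot or human) in the realm already has this name.
    return not UserProfile.objects.filter(
        realm_id=realm_id,
        full_name__iexact=full_name.strip(),
        is_active=True,
    ).exists()
```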
This bug was introduced very recently and is an
aliasing bug. It caused extra UserMessage rows to
be created as we inadvertently updated the underlying
subscriber_map sets for multiple messages.
This probably mostly affected PMs.
It's doubtful the bug ever got out into the field.
Previously, MissedMessageWorker used a batching strategy of just
grabbing all the events from the last 2 minutes, and then sending them
off as emails. This suffered from the problem that you had a random
time, between 0s and 120s, to edit your message before it would be
sent out via an email.
Additionally, this made the queue hard to monitor, because it was
expected to pile up large numbers of events, even if everything was
fine.
We fix this by batching together the events using a timer; the queue
processor itself just tracks the items, and then a timer-handler
process takes care of ensuring that the emails get sent at least 120s
(and at most 130s) after the first triggering message was sent in Zulip.
This introduces a new unpleasant bug, namely that when we restart a
Zulip server, we can now lose some missed_message email events;
further work is required on this point.
Fixes #6839.
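A rough sketch of the timer-based batching pattern; the class shape, constants, and handler stub are simplified from the actual MissedMessageWorker:

```python
import threading
from typing import Any, Dict, List, Optional

BATCH_WAIT_SECONDS = 120  # send at least this long after the first event

def handle_missedmessage_emails(user_id: int, events: List[Dict[str, Any]]) -> None:
    """Placeholder for the real email-building-and-sending logic."""

class BatchingWorker:
    def __init__(self) -> None:
        self.events_by_user: Dict[int, List[Dict[str, Any]]] = {}
        self.lock = threading.Lock()
        self.timer: Optional[threading.Timer] = None

    def consume(self, event: Dict[str, Any]) -> None:
        with self.lock:
            self.events_by_user.setdefault(event["user_profile_id"], []).append(event)
            if self.timer is None or not self.timer.is_alive():
                # First event since the last flush: start the countdown.
                self.timer = threading.Timer(BATCH_WAIT_SECONDS, self.flush)
                self.timer.start()

    def flush(self) -> None:
        with self.lock:
            events, self.events_by_user = self.events_by_user, {}
        for user_id, user_events in events.items():
            handle_missedmessage_emails(user_id, user_events)
```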