This commit is a step towards the goal of replacing most of the
send_future_email pathway with a call to send_email.
Note that this commit changes the default value of sender from "Zulip
<NOREPLY_EMAIL_ADDRESS>" to "NOREPLY_EMAIL_ADDRESS". NOREPLY_EMAIL_ADDRESS
will soon be changed to include the "Zulip" name in front.
Note that the correctness of this commit relies on the fact that
send_future_email also sets the sender to settings.NOREPLY_EMAIL_ADDRESS by
default (in the body of the function).
This commit replaces all uses of django.core.mail.send_mail with send_email,
other than in the password reset flow, since that code looks like it is just
a patch to Django's password reset code.
The send_email function is in a new file, since putting it in
zerver.lib.notifications would create an import loop with confirmation.models.
send_future_email will soon be moved into email.py as well.
Relies on the fact that all the email template names now follow the same
pattern.
Note that there was some template_prefix-like computation being done in
send_confirmation (conditioned on obj.realm.is_zephyr_mirror_realm); that
computation is now being done in the callers.
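For context, a minimal sketch of what a send_email-style helper with these
defaults might look like (the signature and template suffixes here are
assumptions, not the exact Zulip implementation):

    from django.conf import settings
    from django.core.mail import EmailMultiAlternatives
    from django.template import loader

    def send_email(template_prefix, to_email, from_email=None, context=None):
        # Sketch only: all templates are assumed to follow one naming pattern,
        # <template_prefix>.subject / .txt / .html, as described above.
        context = context or {}
        subject = loader.render_to_string(template_prefix + ".subject", context).strip()
        text_body = loader.render_to_string(template_prefix + ".txt", context)
        html_body = loader.render_to_string(template_prefix + ".html", context)
        if from_email is None:
            # Mirrors the send_future_email default of settings.NOREPLY_EMAIL_ADDRESS.
            from_email = settings.NOREPLY_EMAIL_ADDRESS
        msg = EmailMultiAlternatives(subject, text_body, from_email, [to_email])
        msg.attach_alternative(html_body, "text/html")
        msg.send()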
This increase in the list of needed fields carries a small performance
cost, but it should be very small, and this change is needed to
support outgoing webhooks without additional database queries.
Also puts them into a processing queue, though the queue processor
does nothing.
Rewritten by tabbott to avoid unnecessary database queries in
do_send_messages.
This is a better solution to the problem of how _pg_re_escape should
handle the null character. There's really no good reason to have a
null character in a stream name.
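A minimal sketch of that rule (illustrative only; the real validation lives
in Zulip's stream-name checking and raises its own error type):

    def check_stream_name(stream_name):
        # Rejecting NUL here means escaping helpers like _pg_re_escape
        # never have to handle it.
        if "\0" in stream_name:
            raise ValueError("Stream name cannot contain a null character")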
The new function takes a full UserProfile object for the sender,
which allows us to avoid O(N) calls when creating the stream to
find the user profile of the notification bot. (The calls were
already cached, so this won't necessarily be a huge performance
win.)
We also don't have to worry about sending a blank subject any more.
The new, more direct interface for prepping internal stream
messages circumvents the bug-prone extract_recipients() method,
which has the pitfall that it will try to parse a stream name
as JSON. It also takes a UserProfile object for the sender, so
it's a bit more type-safe.
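For illustration, the extract_recipients() pitfall looks roughly like this
(a simplified sketch, not the exact Zulip code):

    import json

    def extract_recipients(raw):
        # Simplified: try JSON first, fall back to treating the input as a
        # single recipient string.
        try:
            data = json.loads(raw)
        except ValueError:
            data = raw
        return data if isinstance(data, list) else [data]

    print(extract_recipients('social'))      # ['social']
    print(extract_recipients('["a", "b"]'))  # ['a', 'b']: a stream literally
                                             # named ["a", "b"] gets misparsed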
- Add file_name field to `RealmEmoji` model and migration.
- Add emoji upload support to the upload backends.
- Add uploaded file processing to emoji views.
- Use the emoji source URL as the base for the display URL.
- Change emoji form for image uploading.
- Fix back-end tests.
- Fix front-end tests.
- Add tests for emoji uploading.
Fixes #1134.
Move the user_profile data section down into fetch_initial_state_data
so it entirely pulls from register_ret for #3853.
This field requires some changes to the events race handling.
This moves the avatar_ fields in page_params to come from
register_ret. Unlike many fields, changing this had a bit of
complexity, because the avatar update events didn't actually contain
some of the details required for moving these into register_ret to
work correctly without races.
We fix that as part of this change.
Modified significantly by tabbott.
The function internal_prep_message is kind of awkward to
call, so I'm moving most of its implementation to
_internal_prep_message() for upcoming refactorings.
We'll need to implement a version of the simple decoding/decryption
logic used by this library in the mobile code as well, but that should
be simple enough.
All webhook fixtures in zerver/fixtures/<webhook_name> have now
been moved to dedicated webhook-specific directories under
zerver/webhooks/<webhook_name>/fixtures, where <webhook_name> is
the name of the webhook.
This new feature makes it possible to request initial data for a
different set of event_types than the ones an API client is subscribing to.
Primarily useful for mobile apps, where bandwidth constraints might
mean one wants to subscribe to events for a broader set of data than
is initially fetched, planning to fetch the remaining state in future
requests.
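A hedged sketch of how a client might use this (the fetch_event_types
parameter name and payload format are assumptions based on the register
endpoint):

    import json
    import requests

    resp = requests.post(
        "https://zulip.example.com/api/v1/register",
        auth=("bot-email@example.com", "BOT_API_KEY"),
        data={
            # Subscribe to events for a broad set of data...
            "event_types": json.dumps(["message", "subscription", "realm_user"]),
            # ...but only fetch initial state for messages right now.
            "fetch_event_types": json.dumps(["message"]),
        },
    )
    print(resp.json())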
For our Git integrations, we now only display the number of commits
pushed when the pusher also happens to be the only author of the
commits being pushed.
Part of #3968.
Follow-up to #4006.
For Git push messages, we now have a single space character between
the name of a commit's author and the number of commits by that
author, plus a period at the end.
Part of #3968.
Follow-up to #4006.
- Add aggregated info to real-time presence status updates.
- Update the `presence events` test case by adding aggregated
  information to the presence event.
- Add a test case for updating the presence status of a user who
  sends state from multiple clients.
Fixes #4282.
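A rough, hedged sketch of the aggregation idea (the real rules, such as idle
timeouts, are more involved):

    def aggregate_presence(client_statuses):
        # client_statuses: e.g. [{"status": "active", "timestamp": 1490000000},
        #                        {"status": "idle", "timestamp": 1490000100}]
        latest = max(info["timestamp"] for info in client_statuses)
        any_active = any(info["status"] == "active" for info in client_statuses)
        return {"status": "active" if any_active else "idle",
                "timestamp": latest}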
We need to reset the INSTRUMENTED_CALLS variable before running every
subsuite. If we do not do this, then a subsuite running on a
particular process (call it A) will send the accumulated instrumented
calls gathered by the previous subsuites that also ran on process A.
This also fixes the extra delay that we used to experience after the
tests had finished running. The extra delay was due to the duplicate
instrumented calls in the INSTRUMENTED_CALLS list. The size of this
list used to be ~100k in parallel mode, as opposed to ~1800 in serial
mode.
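A minimal sketch of the fix (names are illustrative):

    INSTRUMENTED_CALLS = []  # per-process list of instrumented URL calls

    def run_subsuite(run_tests):
        # Reset before every subsuite, so a worker only reports the calls
        # made by the subsuite it just ran, not everything it ran earlier.
        del INSTRUMENTED_CALLS[:]
        result = run_tests()
        return result, list(INSTRUMENTED_CALLS)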
This commit creates a dedicated file upload directory for every process
when we are running tests in parallel mode. This makes sure that we do
not run into any race conditions due to multiple processes accessing
the same upload directory.
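A hedged sketch of the approach (the environment variable and directory
layout are assumptions):

    import os

    def get_test_upload_dir(base_dir="var/test_uploads"):
        # Each parallel worker writes to its own directory, so workers never
        # race on the same files.
        worker_id = os.environ.get("TEST_WORKER_ID", "0")
        path = os.path.join(base_dir, "worker_%s" % worker_id)
        if not os.path.exists(path):
            os.makedirs(path)
        return path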
This fixes a performance problem where we were previously starting up
a full Django process (~0.7s even on a fast machine) every time a new
email came in, potentially allowing users to accidentally DoS a Zulip
server. Now, we just post over HTTPS, allowing the existing thread
pool support to do its job.
- Add a script wrapper that forwards messages from the postfix pipe to the
  Django web server over HTTP(S). It uses the shared_secret authentication mode.
- Add a Django view to process messages from the email mirror server.
- Clean up the `email-mirror` management command, leaving just the
  functionality needed for cron email processing.
- Add routes for the new Tornado view.
- Change the pipe script in the master process postfix config template
  to the updated script.
- Add tests.
Tweaked by tabbott to adjust the directory and set better defaults.
Fixes #2421.
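A hedged sketch of the wrapper's shape (the URL, field names, and secret
handling are assumptions, not the shipped script):

    #!/usr/bin/env python
    import sys
    import requests

    SHARED_SECRET = "read from /etc/zulip/zulip-secrets.conf in the real script"

    def main():
        # Read the raw email from the postfix pipe and hand it to the running
        # web server instead of booting a full Django process per message.
        msg_text = sys.stdin.read()
        requests.post("https://zulip.example.com/email_mirror_message",
                      data={"secret": SHARED_SECRET,
                            "recipient": sys.argv[1],
                            "msg_text": msg_text})

    if __name__ == "__main__":
        main()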
In this commit we add a logout wrapper to enable developers to just
call self.logout() instead of making a POST request to the logout API
endpoint. This is achieved by adding a wrapper function around
Django's client.logout() from TestCase; we extend ZulipTestCase to
have a logout function.
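A minimal sketch of the wrapper, assuming ZulipTestCase builds on Django's
TestCase and its self.client:

    from django.test import TestCase

    class ZulipTestCase(TestCase):
        def logout(self):
            # Thin wrapper around Django's test client logout, so tests can
            # call self.logout() instead of POSTing to the logout endpoint.
            self.client.logout()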
We now show a few new things:
(1) The number of commits pushed.
(2) Who authored the commits (just counts, not which specific ones, for brevity).
Add tests for case of multiple committers.
Part of #3968.
This refactors the function fetch_initial_state_data to use the
realm.property_types attribute, avoiding some unnecessary code
duplication.
Tweaked slightly by tabbott to simplify the code a bit.
Addresses part of issue #3854.
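A hedged sketch of the shape of the refactoring (the state key prefix and
exact loop are assumptions):

    def add_realm_properties(state, realm):
        # Instead of copying each realm setting by hand, iterate over the
        # Realm.property_types registry.
        for prop_name in realm.property_types:
            state["realm_" + prop_name] = getattr(realm, prop_name)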
This is an incomplete cleaned-up continuation of Lisa Neigut's push
notification bouncer work. It supports registration and
deregistration of individual push tokens with a central push
notification bouncer server.
It still is missing a few things before we can complete this effort:
* A registration form for server admins to configure their server for
this service, with tests.
* Code (and tests) for actually bouncing the notifications.
In cases where old unread messages in the home view might have been
leaked (either due to bugs or unusual muting interactions), it's
theoretically possible for the first unread message in the home view
to be far older than the pointer.
Since the Zulip mobile app is loading messages following the
use_first_unread logic, we need to plug this gap.
Probably a longer-term solution will involve changing how
update_message_flags works to automatically advance the pointer, but
this change should make it possible for the mobile apps to
consistently use the `use_first_unread` mechanism for fetching the
latest home view messages.
With tweaks to the tests by tabbott.
Fixes zulip/zulip-mobile#422.
The previous logic was that anyone with a link to a file could send it
to other users, but only the owner could make a file realm-public.
This had some confusing corner cases.
The new logic is much simpler:
* Only the file's owner/uploader can include a file in a message for
the first time.
* Anyone with access to read a file can share it with others by
including it in messages they send.
* Once a file has been sent to a public stream, any user in the realm
can access it.
Earlier, a stack was being used to go through the message and search
for links. Because of this, in some cases the images were added to
the preview in reverse. Using a queue will keep the image previews in
the same order as they appeared in the message.
Fixes #4453.
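A simplified illustration of why the data structure matters here:

    from collections import deque

    links = ["a.png", "b.png", "c.png"]

    stack = list(links)
    popped = [stack.pop() for _ in range(len(stack))]
    # popped == ['c.png', 'b.png', 'a.png']  (reversed: the old behavior)

    queue = deque(links)
    dequeued = [queue.popleft() for _ in range(len(queue))]
    # dequeued == ['a.png', 'b.png', 'c.png']  (message order: the fix)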
Apparently, Django's _destroy_test_db has a mostly unnecessary
sleep(1) before dropping the database, which obviously wastes a bunch
of time in the single-test runtime of their database teardown logic.
We work around this by monkey-patching that function to not do the sleep.
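A hedged sketch of the monkey-patch, assuming a Django version where
_destroy_test_db is just a sleep(1) followed by a DROP DATABASE:

    from django.db.backends.base.creation import BaseDatabaseCreation

    def _replacement_destroy_test_db(self, test_database_name, verbosity):
        # Same DROP DATABASE as Django's version, minus the time.sleep(1).
        with self.connection._nodb_connection.cursor() as cursor:
            cursor.execute("DROP DATABASE %s"
                           % self.connection.ops.quote_name(test_database_name))

    BaseDatabaseCreation._destroy_test_db = _replacement_destroy_test_db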
Instead of zulip_test, use zulip_test_template for the backend DB. This
makes sure that the DB used by backend tests is different from the DB
used by Casper tests, which will remain zulip_test.
This replaces individual tests for realm properties with a generic
do_test_realm_update_api function to test each property in the
Realm.property_types attribute.
Addresses part of #3854.
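A hedged sketch of how the generic helper gets driven (the test class and
import paths are assumptions; do_test_realm_update_api is the helper named
above):

    from zerver.lib.test_classes import ZulipTestCase
    from zerver.models import Realm

    class RealmAPITest(ZulipTestCase):
        def test_update_realm_properties(self):
            # One generic code path per property instead of a hand-written
            # test for each realm setting.
            for prop in Realm.property_types:
                self.do_test_realm_update_api(prop)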
This adds the option '--rerun' to the `test-backend` infrastructure.
It runs the tests that failed during the last 'test-backend' run. It
works by storing failed test info at var/last_test_failure.json.
Cleaned up by Umair Khan and Tim Abbott.
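A minimal sketch of the mechanism (the file path comes from the description;
the format is assumed):

    import json

    def write_failed_tests(failed_ids, path="var/last_test_failure.json"):
        with open(path, "w") as f:
            json.dump(failed_ids, f)

    def read_failed_tests(path="var/last_test_failure.json"):
        # `test-backend --rerun` reads this list and runs only these tests.
        with open(path) as f:
            return json.load(f)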
By default, Python markdown's tab length for indents is 4 spaces, which
requires using 4 spaces or a tab to create nested elements. This
modifies that setting to specify 2-space indentation for nesting
elements only.
Modified significantly by tabbott to limit the change to just list
indentation.
Fixes #4252.
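For reference, the knob in question looks like this in python-markdown (a
general sketch; the actual change is scoped to list indentation only):

    import markdown

    md = markdown.Markdown(tab_length=2)
    # With tab_length=2, two spaces are enough to nest a list item.
    print(md.convert("* outer\n  * nested"))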
Users editing messages or updating message flags are either already
recorded or not interesting from an audit perspective, and so there's
no need to use log_event with them.
This commit adds the backend support for a new style of tutorial which
allows for highlighting of multiple areas of the page with hotspots that
disappear when clicked by the user.
Modify `bot_owner_user_ids()` to return the user_ids of only
admins and bot owners instead of all the current active users.
This was causing a traceback on the frontend.
Fixes: #3391.
- Add message retention period field to organization settings form.
- Add css for retention period field.
- Add a converter to a non-negative int or to None.
- Add retention period setting processing to back-end.
- Fix tests.
Modified by tabbott to hide the setting, since it doesn't work yet.
The goal of merging this setting code now is to avoid unnecessary
merge conflicts in the future.
Part of #106.
Adds a new webhook integration for Slack to receive messages
from one's Slack team's public channels.
Contains negative tests for broken, missing or invalid data.
Allows two different options for the integration:
1. Receive notification on a single stream with different topics
for each of Slack's public channels.
2. Receive notification on different streams for each of Slack's
public channels.
Steps to choose between the two options are described in the documentation.
Fixes #3569.
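A hedged sketch of the two delivery modes (the parameter and topic format are
assumptions, not necessarily the shipped integration):

    def get_destination(default_stream, slack_channel, channels_map_to_topics):
        if channels_map_to_topics:
            # Option 1: one stream, with one topic per Slack public channel.
            return default_stream, "channel: %s" % slack_channel
        # Option 2: one stream per Slack public channel.
        return slack_channel, "Message from Slack"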
This fixes 2 issues:
* Being added to an invite_only stream did not correctly update the
"streams" key of the initial state.
* Once that's resolved, subscribe_to_stream when called on a
nonexistent stream would both send a "create" event (from
create_stream_if_needed) and an "occupy" event (from
bulk_add_subscriptions).
The second event should just be suppressed in that case, and this
implements that suppression.
We previously didn't apply the default language event change
correctly.
Not super important as a bug, since we require the user to reload the
browser for their changes to take effect, but this will save time if
we ever change that.
This makes it possible for us to do some convenient validation for
developers, checking whether the correct types are passed for each
realm property.
zerver/lib/actions.py: removed do_set_realm_* functions and added
do_set_realm_property, which takes in a realm object and the name and
value of an attribute to update on that realm.
zerver/tests/test_events.py: refactored realm tests with
do_set_realm_property.
Kept the do_set_realm_authentication_methods and
do_set_realm_message_editing functions because their function
signatures are different.
Addresses part of issue #3854.
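A hedged sketch of the consolidated setter (the real function also sends an
update event to clients):

    from zerver.models import Realm

    def do_set_realm_property(realm, name, value):
        # Realm.property_types maps attribute names to their expected types,
        # which enables the validation described above.
        property_type = Realm.property_types[name]
        assert isinstance(value, property_type)
        setattr(realm, name, value)
        realm.save(update_fields=[name])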
This makes get_stream match get_realm, get_user_profile_by_email,
etc., in interface, and is more convenient for mypy annotations
because `get_stream` now doesn't return an Optional[Stream].
We use the same strategy Zulip already uses for starred messages,
namely, creating a new UserMessage row with the "historical" flag set
(which basically means Zulip can ignore this row for most purposes
that use UserMessage rows). The historical flag is ignored, however,
in determining which users' browsers to notify about new reactions,
and thus the user will get to see the reaction appear when they click
a message (and any reactions other users later add, as well!).
There's still something of a race here, in that if some users react to
a message while the user is looking at the unsubscribed stream but
before the user reacts to that message, those reactions will not be
displayed to that user (so counts will be a bit lower, or something).
This race feels small enough to ignore for now.
Fixes #3345.
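A hedged sketch of the UserMessage handling (the flag names follow the
description; the actual code path differs in details):

    from zerver.models import UserMessage

    def ensure_user_message_for_reaction(user_profile, message):
        # Create a "historical" row so the reacting user has a UserMessage
        # entry, mirroring what is already done for starring messages.
        UserMessage.objects.get_or_create(
            user_profile=user_profile,
            message=message,
            defaults=dict(flags=UserMessage.flags.historical | UserMessage.flags.read),
        )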
This adds an organization description field to the Realm model, as well as
an input field to the organization settings template. Added three tests.
Set the max length of the field to 100 characters.
Fixes #3962.
Changing assert_in_success_response to require List[Text] instead of
Iterable[Text] prevents the following misuse:
self.assert_in_response_success("message", response)
Currently, this will check whether 'm', 'e', 's', 'a', and 'g' separately
appear in the response, which is probably not the intended behavior. The
correct usage is as follows:
self.assert_in_response_success(["message"], response)
This of course only works in the 2 minute window where missed-message
emails are planned, but nonetheless likely avoids common cases of
emailing users with deleted messages.
Fixes: #3873.