The name for_stream_name is more appropriate here. The name
for_stream is more suitable for a function that takes in a Stream
object, which we're about to add.
Our hash-naming of production assets interacted badly with the "look
at files in a directory" algorithm used to determine what sound
options exist for the "notification sound" feature. For lack of a
better solution, we fix this by excluding files with an extra `.` in
their name.
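A minimal sketch of the directory-scan filter described above; the directory path and function name are illustrative, not the actual Zulip code:

```
import os

def available_notification_sounds(sounds_dir="static/audio/notification_sounds"):
    # Hash-named build artifacts look like "zulip.abc123.ogg", so any file
    # whose basename still contains a "." after stripping the extension is
    # excluded from the options offered in the settings UI.
    sounds = set()
    for fname in os.listdir(sounds_dir):
        name, extension = os.path.splitext(fname)
        if "." in name:
            continue
        if extension in (".ogg", ".mp3"):
            sounds.add(name)
    return sorted(sounds)
```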
This causes changing the email_address_visibility field to actually
modify what user_profile.email values are generated for users, both on
user creation and afterwards as email addresses are edited.
The overall feature isn't yet complete, but this brings us pretty close.
This commit does the following three things:
1. Update stream model to accommodate rendered description.
2. Render and save the stream rendered description on update.
3. Render and save stream descriptions on creation.
Further, the stream's rendered description is also sent whenever the
stream's description is being sent.
This is preparatory work for eliminating the use of the
non-authoritative marked.js markdown parser for stream descriptions.
Since we implemented support for stream IDs in Addressee,
Addressee.stream_name() can return None. This commit ensures
that _internal_prep_message only calls ensure_stream when
Addressee.stream_name() is not None.
This commit also contains the following auxiliary changes:
* Adds a custom exception, StreamWithIDDoesNotExist, for when
a stream with a given ID does not exist, because the error
message returned by StreamDoesNotExist only makes sense with
stream names, not IDs.
* Adds a new helper, get_stream_by_id_in_realm, which is similar
to get_user_profile_by_id_in_realm (introduced in #10391).
* Adds a helper, validate_stream_id_with_pm_notification, which
returns the Stream object associated with a given ID and also
handles PM notifications to the bot owner if the message was
sent by a bot and if the stream does not exist or has no
subscribers.
* Modifies the message sent by send_pm_if_empty_stream to
accommodate stream IDs.
Note that all of the above changes are required before check_message
can be modified to support stream IDs.
This fixes an annoying bug where clicking to subscribe to a stream
would change the color shown in the "manage streams" UI immediately
after you click.
Fixes #11072.
You can now pass in an info field with a value
like "out to lunch" to the /users/me/status,
and the server will include that in its outbound
events.
The semantics here are that both "away" and
"status_text" have to have defined values in order
to cause changes. You can omit the keys or
pass in None when values don't change.
The way you clear info is to pass the empty
string.
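A rough sketch of those semantics; the helper names below are illustrative, not the actual view code:

```
def update_user_status(user_profile, away=None, status_text=None):
    # Keys the client omits arrive here as None and leave that value unchanged.
    if away is not None:
        set_away_status(user_profile, away)          # illustrative helper
    if status_text is not None:
        # "" clears the info; any other string, e.g. "out to lunch", sets it.
        set_status_text(user_profile, status_text)   # illustrative helper
```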
We also change page_params to have a dictionary
called "user_status" instead of a set of user
ids. This requires a few small changes on the
frontend. (We will add "status_text" support in
subsequent commits; the changes here just keep
the "away" feature working correctly.)
We now have a single function that handles both away
and not-away.
This refactoring sets us up to piggyback "info" more
easily onto status updates.
The only thing that changes here is that we don't
delete database rows any more when users revoke
their away status. Instead we just set the status
to NORMAL.
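A minimal sketch of the unified approach, assuming a UserStatus model with AWAY/NORMAL constants; the field and function names are illustrative:

```
def do_update_user_status_sketch(user_profile, away):
    status = UserStatus.AWAY if away else UserStatus.NORMAL
    # Rather than deleting the row when a user revokes "away", keep the row
    # and flip its status back to NORMAL.
    UserStatus.objects.update_or_create(
        user_profile=user_profile,
        defaults={"status": status},
    )
```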
This moves the logic for deleting the user's custom profile field
value in the remove_user_custom_profile_data view function to a method
named check_remove_user_custom_profile_value in actions.py, so that we
can reuse it in the next commit.
We make this change because setting up reminders in PMs didn't
play really well with our current infrastructure. Basically, the
reminder messages from the bot can't appear in the same narrow as
the PM between the two people, and therefore we disable it.
We do make an exception here for when a person wants to set up
a reminder for themselves.
The check for empty recipient lists (the "message_to" argument)
does not belong in check_message. As we implement support for
sending messages by user IDs (see #9474), we will be extending
much of the existing code in Addressee.legacy_build that validates
recipient lists. Therefore, Addressee.legacy_build is a much more
apt area to check for empty recipient lists.
Addressee.for_private and Addressee.for_user_ids also need
to do their own validation, since not everything goes through
Addressee.legacy_build. It is okay to simply throw a 500 in these
cases because we expect that callers will be doing their own
validation for calls that don't go through Addressee.legacy_build().
This commit is a part of our efforts surrounding #9474.
This adds a feature that sends a notification to the stream, via
the notification bot, when the stream is renamed. user_profile is
also passed to do_rename_stream so that the notification can use
the name of the user who renamed the stream. The notification is
sent to the stream using internal_send_stream_message in
do_rename_stream.
Fixes #11034.
Since we have already added the `invite_as` field to models, we can now
replace usage of `invite_as_admin` properly with its equivalent `invite_as
== PreregistrationUser.INVITE_AS['REALM_ADMIN']`.
Hence, also removed the now-redundant `invite_as_admin`.
The logic for flushing the API key has been broken ever since we
added the cache, since we were incorrectly flushing the new API key,
not the old API key, from the cache after regeneration.
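The essence of the fix, in a hedged sketch; the cache helpers named here are illustrative stand-ins for whatever the real flush path uses:

```
def do_regenerate_api_key(user_profile):
    old_api_key = user_profile.api_key
    user_profile.api_key = generate_api_key()
    user_profile.save(update_fields=["api_key"])
    # Flush the *old* key: flushing the new key leaves a stale cache entry
    # that still maps the old key to this user.
    cache_delete(user_profile_by_api_key_cache_key(old_api_key))  # illustrative
```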
This IntegrityError has been happening occasionally in production due
to races, likely due to some sort of mobile app double-post bug.
Handle this by avoiding a 500, and returning the same 400 we would do
if there hadn't been a race.
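A hedged sketch of the pattern; the callable and error message are illustrative, while json_error/json_success are Zulip's standard response helpers:

```
from django.db import IntegrityError

from zerver.lib.response import json_error, json_success

def insert_tolerating_race(create_row, error_msg):
    # create_row performs the write; error_msg is the 400 we would have
    # returned if we had noticed the duplicate before inserting.
    try:
        create_row()
    except IntegrityError:
        return json_error(error_msg)
    return json_success()
```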
This adds a new realm_logo field, which is a horizontal-format logo to
be displayed in the top-left corner of the webapp, and any other
places where we might want a wide-format branding of the organization.
Tweaked significantly by tabbott to rebase, fix styling, etc.
Fixing the styling of this feature's loading indicator caused me to
notice the loading indicator for the realm_icon feature was also ugly,
so I fixed that too.
Fixes #7995.
It appears that our i18n logic was only using the recipient's language
for logged-in emails, so even properly tagged for translation and
translated emails for functions like "Find my team" and "password
reset" were being always sent in English.
With great work by Vishnu Ks on the tests and the to_emails code path.
Fixes a bug in import_realm where secondary attributes like message
visibility weren't being set, and also makes bugs like this less likely in
the future.
Also, putting the plan_type change at the end of import_realm, so that
future restrictions to LIMITED realms don't affect the import process.
Also, add a new notification sound, "ding". It comes from
https://freesound.org, where the original Zulip notification sound comes
from as well. In the future, new sounds can be added by adding audio
files to the `static/audio/notification_sounds` directory.
Tweaked significantly by tabbott:
* Avoided removing static/audio/zulip.ogg, because that file is
checked for by old versions of the desktop app.
* Added a views check for the sound being valid + tests.
* Added additional tests.
* Restructured the test_events test to be cleaner.
* Removed check_bool_or_string.
* Increased max length of notification_sound.
* Provide available_notification_sounds in events data set if global
notifications settings are requested.
Fixes #8051.
A key part of this is the new helper, get_user_by_delivery_email. Its
verbose name is important for clarity; it should help avoid blind
copy-pasting of get_user (which we'll also want to rename).
Unfortunately, it requires detailed understanding of the context to
figure out which one to use; each is used in about half of call sites.
Another important note is that this PR doesn't migrate get_user calls
in the tests except where not doing so would cause the tests to fail.
This probably deserves a follow-up refactor to avoid bugs here.
This adds a function that sends provided email to all administrators
of a realm, but in a single email. As a result, send_email now takes
arguments to_user_ids and to_emails instead of to_user_id and
to_email.
We adjust other APIs to match, but note that send_future_email does
not yet support the multiple recipients model for good reasons.
Tweaked by tabbott to modify `manage.py deliver_email` to handle
backwards-compatibility for any ScheduledEmail objects already in the
database.
Fixes #10896.
Previously, we frequently accessed user_profile.realm from outside the
loops that interact with UserProfile objects. This variable reuse
outside the loop could be confusing and should be a style/lint
violation.
While in this case, the behavior was correct (in that all users in the
loops were within the same realm), extracting a separate `realm`
variable significantly clarifies what's going on here.
This is the first step of letting users use Zulip markdown in their
SHORT_TEXT and LONG_TEXT custom profile fields, so that they can
include emphasis, links, etc.
This doesn't include any frontend logic yet, however.
This function is equivalent to recipient_for_emails, but fetches
user_profiles by IDs, not by emails.
This commit is a part of our efforts surrounding #9474, but is
more primarily geared towards adding support for sending typing
notifications by user IDs.
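A hedged sketch of what such a helper looks like; get_user_profile_by_id_in_realm is mentioned elsewhere in this series, while the other names are illustrative:

```
def recipient_for_user_ids(user_ids, sender):
    # Same validation flow as recipient_for_emails, but we look users up by
    # ID instead of by email.
    user_profiles = [
        get_user_profile_by_id_in_realm(user_id, sender.realm)
        for user_id in user_ids
    ]
    return recipient_for_user_profiles(user_profiles, sender)  # illustrative
```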
recipient_for_emails is used by our typing notifications code.
user_profiles_from_unvalidated_emails is used by our typing
notifications code *and* for sending messages.
user_profiles_from_unvalidated_emails is a part of a larger
framework used by Addressee to validate recipient emails when sending
messages and will eventually need to be removed as we move forward
with #9474. So it makes sense to just inline this function within
recipient_for_emails so that we don't break our typing notifications
code in the future.
This commit is a part of our efforts surrounding #9474.
This adds a web flow and management command for reactivating a Zulip
organization, with confirmation from one of the organization
administrators.
Further work is needed to make the emails nicer (ideally, we'd send
one email with all the admins on the `To` line, but the `send_email`
library doesn't support that).
Fixes #10783.
With significant tweaks to the email text by tabbott.
Change the truncation marker from `...` to `\n[message truncated]`
when receiving messages from the API or through e-mail. Also, update
tests to account for the new change.
Fixes #10871.
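A minimal sketch of that truncation behavior; the function name and length limit are illustrative:

```
def truncate_body(body, max_length=10000):
    truncation_marker = "\n[message truncated]"
    if len(body) <= max_length:
        return body
    # Keep the result within max_length, ending with the explicit marker
    # instead of a bare "...".
    return body[: max_length - len(truncation_marker)] + truncation_marker
```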
This is initial work, which will help us establish habits of using a
well-tested approach for renaming a Zulip organization (since as part
of https://github.com/zulip/zulip-mobile/issues/3142, we'll likely
need to make this function do more).
This is somewhat hairy logic, so it's nice
to extract it and not worry about variable leaks.
Also, this moves some legacy "subject" references out
of actions.py.
We start by including functions that do custom
queries for topic history.
The goal of this library is partly to quarantine
the legacy "subject" column on Message.
This is a preparatory refactor for supporting hosting different Tornado
processes on different servers; to look up which Tornado server we
should be sending the event to, we'll need the realm object.
This supports guest users in the user-info-form-modal as well as in the
role section of the admin-user-table.
With some fixes by Tim Abbott and Shubham Dhama.
Bots are not allowed to use the same name as
other users in the realm (either bot or human).
This is kind of a big commit, but I wanted to
combine the post/patch (aka add/edit) checks
into one commit, since it's a change in policy
that affects both codepaths.
A lot of the noise is in tests. We had good
coverage on the previous code, including some places
like event testing where we were expediently
not bothering to use different names for
different bots in some longer tests. And then
of course I test some new scenarios that are relevant
with the new policy.
There are two new functions:
check_bot_name_available:
very simple Django query
check_change_bot_full_name:
this diverges from the 3-line
check_change_full_name, where the latter
is still used for the "humans" use case
And then we just call those in appropriate places.
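For the first helper, the "very simple Django query" is roughly the following sketch, assuming the usual Zulip imports; the exact error text is illustrative:

```
def check_bot_name_available(realm_id, full_name):
    dup_exists = UserProfile.objects.filter(
        realm_id=realm_id,
        full_name=full_name.strip(),
        is_active=True,
    ).exists()
    if dup_exists:
        raise JsonableError(_("Name is already in use!"))
```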
Note that there is still a loophole here
where you can get two bots with the same
name if you reactivate a bot named Fred
that was inactive when the second bot named
Fred was created. Also, we don't attempt
to fix historical data. So this commit
shouldn't be considered any kind of lockdown,
it's just meant to keep people from
inadvertently creating two bots of the same
name when they don't intend to. For more
context, we are continuing to allow two
human users in the same realm to have the
same full name, and our code should generally
be tolerant of that possibility. (A good
example is our new mention syntax, which disambiguates
same-named people using ids.)
It's also worth noting that our web app client
doesn't try to scrub full_name from its payload in
situations where the user has actually only modified other
fields in the "Edit bot" UI. Starting here
we just handle this on the server, since it's
easy to fix there, and even if we fixed it in the web
app, there's no guarantee that other clients won't be
just as brute force. It wasn't exactly broken before,
but we'd needlessly write rows to audit tables.
Fixes #10509
We could migrate all the current PREMIUM_FREE organizations to have more
invites, but this setting mainly affects orgs right as they are starting, so
it's probably fine.
We use UserMessageLite to avoid Django overhead, and we
do updates in chunks of 10000. (The export may be broken
into several files already, but a reasonable chunking at
import time is good defense against running out of memory.)
Fixes the urgent part of #10397.
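A rough sketch of that chunking, assuming the lite rows have already been built; the function name and the conversion to full UserMessage objects are illustrative:

```
def bulk_insert_ums(lite_rows, chunk_size=10000):
    # Insert in bounded chunks so a huge export can't exhaust memory in one
    # giant bulk_create.
    for i in range(0, len(lite_rows), chunk_size):
        chunk = lite_rows[i : i + chunk_size]
        UserMessage.objects.bulk_create(
            [
                UserMessage(
                    user_profile_id=row.user_profile_id,
                    message_id=row.message_id,
                    flags=row.flags,
                )
                for row in chunk
            ]
        )
```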
It was discovered that soft-deactivated users don't get mobile push
notifications for messages on private streams that they have configured
to send push notifications.
Reason: `handle_push_notification` calls `access_message`, and that
logic assumes that a user who is a recipient of a message has an
associated UserMessage row. Those UserMessage rows are created
lazily for soft-deactivated users, so they might not exist (yet)
until the user comes back.
Solution: Ensure that a UserMessage row is created for
stream_push_user_ids and stream_email_user_ids in create_user_messages.
For user type custom fields, the field value is a list of user IDs.
We weren't converting this list to a JSON object in the update event
payload. This throws an error in the frontend, because we store a
stringified representation of the custom field value. Therefore, after
the update event is received, the field value's type gets updated from
string to array, which throws a JSON parsing error.
Using early-exit here allows us to more easily
comment why there are certain exemptions to
this logic.
We also only require callers to pass in realm,
not the whole user object.
This prevents leaking some variables into an already
cluttered function.
We also add test coverage for what's now an
early-exit condition in the new function--we exempt
public MIT streams from these events.
This change was partially driven by a quirk in Python
where peephole optimizations make `continue` lines
appear not to be covered.
I also think it's generally a good idiom to extract
functions for loop bodies when they don't actually
accumulate values or maintain other state. With this
commit we now prevent potential bugs for vars like
`is_stream` leaking between loop iterations.
We simulate a race condition by mocking create_user
to actually create a user, but then raise an
IntegrityError (as if another process had actually
created the user, not our test).
I also changed the real code to use explicitly
named parameters.
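A sketch of that mocking trick; the patch target and surrounding test scaffolding are illustrative:

```
from unittest import mock

from django.db import IntegrityError

from zerver.lib.create_user import create_user as real_create_user

def create_user_then_conflict(*args, **kwargs):
    # Really create the user (as if another process had just won the race),
    # then raise the error the "losing" process would see.
    real_create_user(*args, **kwargs)
    raise IntegrityError()

with mock.patch("zerver.lib.actions.create_user",
                side_effect=create_user_then_conflict):
    # Exercise the registration codepath here and assert we get a clean
    # error response, not a 500.
    pass
```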
Our get_streams_traffic function used to query
all streams in the StreamCount table if you
passed in `None` for `streams`.
Now we require that you pass in a list of
stream_ids.
I don't know how much work this will save
the database, since probably the bulk of
the work is aggregating. If we need to fine
tune DB performance, we could possibly add
`realm` as an argument and add it to the filter.
What we'll immediately get, for large multi-realm
installations, is less data over the wire and
less work for the ORM.
The prior code uses an awkward idiom that
pre-dates the `exists()` function, and it
had an unreachable line of code.
The new version should be faster, since we
don't create a throwaway heavy Django object
or send needless data over the wire.
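The gist of that change, sketched against an illustrative query (the model and helper names are placeholders):

```
# Old idiom: fetch a row (or slice the queryset) just to test for emptiness,
# constructing a heavy Django object in the process.
if len(Subscription.objects.filter(recipient=recipient, active=True)[:1]) > 0:
    do_something()

# New idiom: let the database answer with a cheap EXISTS query.
if Subscription.objects.filter(recipient=recipient, active=True).exists():
    do_something()
```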
This function appears to be redundant to
`access_stream_by_name`. The only
meaningful line of code in the function that we're
removing, the code that raises an error,
appears to be unreachable, despite reasonably
extensive tests.
The only thing the function was restricting
was the case where the bot's owner was
unsubscribed from a private stream, which
is already locked down in
`access_stream_by_name` calls inside of
`patch_bot_backend`.
This commit increases test coverage
by removing unreachable code.
It's possible this function had
some theoretical value before we
introduced the `require_non_guest_human_user`
decorator to the `patch_bot_backend`
view, since in theory the bot itself
could have subscribed to a stream that
the owner didn't subscribe to. Even
then it's not clear that allowing the
bot to set that as a default stream
would have been harmful, since they
can already access it.
We want our methodology for extracting the last message
id to be consistent, particularly in terms of how we
handle edge cases. (I'll concede that the
`bulk_remove_subscriptions` codepath never hits that
corner case in practice, but it's harmless to handle
the theoretical case.)
It may also be nice to have this function show up
clearly in profiling.
This also adds some direct testing to the function.
It's not clear to me why we don't use `latest('id')`
in the implementation, but that's outside the scope
of this commit.
This de-clutters check_message a bit and also makes
it easy to audit our rules for who can write to a
stream.
Also, this works around a bug with Python where its
optimizations for the `pass` instruction make it
appear not to run and show up as uncovered in
coverage reports.
Right now it only has one function, but the function
we removed never really belonged in actions.py, and
now we have better test coverage on actions.py, which
is an important module to get to 100%.
This uses the recently introduced active_mobile_push_notification
flag; messages that have had a mobile push notification sent will have
a removal push notification sent as soon as they are marked as read.
Note that this feature is behind a setting,
SEND_REMOVE_PUSH_NOTIFICATIONS, since the notification format is not
supported by the mobile apps yet, and we want to give a grace period
before we start sending notifications that appear as (null) to
clients. But the tracking logic to maintain the set of message IDs
with an active push notification runs unconditionally.
This is designed with at-least-once semantics; so mobile clients need
to handle the possibility that they receive duplicate requests to
remove a push notification.
We reuse the existing missedmessage_mobile_notifications queue
processor for the work, to avoid materially impacting the latency of
marking messages as read.
Fixes #7459, though we'll need to open a follow-up issue for
using these data on iOS.
Fixes a regression introduced in 23246ff816.
However, we'll be shortly removing this feature, since it's legacy
support for an app that no longer is supported.
The "/stats" command doesn't actually do anything
interesting yet, and it also writes to the message
feed instead of replying directly to the user.
The history of this command was that it was
written during a PyCon sprint. It was mainly intended
as an example for subsequent slash commands. The
ones we built after "/stats" have sort of outgrown
"/stats" and don't follow the original structure
for "/stats". (The "/day", "/ping", and "/settings"
commands were built shortly after.)
We probably want to resurrect "/stats" fairly soon,
after figuring out some useful stats and refining
the UI.
As you can see from this commit, resurrecting the
code here shouldn't be too difficult, but it
may actually be pretty rare that we just translate
slash commands into fleshed out messages.
random_api_key, the function we use to generate random tokens for API
keys, has been moved to zerver/lib/utils.py because it's used in more
parts of the codebase (apart from user creation), and having it in
zerver/lib/create_user.py was prone to cyclic dependencies.
The function has also been renamed to generate_api_key to have an
imperative name, that makes clearer what it does.
Now reading API keys from a user is done with the get_api_key wrapper
method, rather than directly fetching it from the user object.
Also, every place where an action should be done for each API key is now
using get_all_api_keys. For the moment, this method returns a
single-item list containing the specified user's API key.
This commit is the first step towards allowing users to have
multiple API keys.
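A sketch consistent with that description (type annotations assume the usual Zulip imports):

```
def get_api_key(user_profile: UserProfile) -> str:
    return user_profile.api_key

def get_all_api_keys(user_profile: UserProfile) -> List[str]:
    # For the moment every user has exactly one API key, so this is a
    # single-item list; callers that act on "all" keys just iterate over it.
    return [user_profile.api_key]
```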
Renaming a user group to a name shared by another group wasn't a scenario
handled by the backend, and the server errored whenever this was
attempted.
Now a json_error is returned, letting the user know that a user group
with that name already exists.
When the last user (only in the case of an admin) unsubscribes from a
private stream, the stream page doesn't get updated, because we delete
the private stream as soon as the last user unsubscribes from it.
So `sub` becomes undefined in the frontend, because the stream is
deleted before the unsubscribe-user-from-stream event is received.
Fix this by changing the order of events sent to the frontend: the
`subscription: remove` event should be sent before the `stream: delete`
event from the backend.
This ensures that the format of this data structures matches that for
in-realm bots in the main users data structure (including avatars,
etc.).
Fixes #10138.
We were getting event-handling exceptions in JS in production if a new
user was created and then went and set a custom profile field, because
there was no `.profile_data` on their user object. We were able to
trace the issue down to the fact that our events didn't include that
field when creating a new user.
This renames Realm.restricted_to_domain field to
emails_restricted_to_domains, for greater clarity as to what it does
just from seeing the setting name, without having to look it up.
Fixes part of #10042.
A stream created in the last few hours likely won't be in StreamCount
(since that gets updated once a day), and hence won't be in the
recent_traffic dict.
However, get_average_weekly_stream_traffic should be None in this case,
not 0.
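The distinction described above, in a hedged sketch with a simplified signature:

```
def get_average_weekly_stream_traffic(stream_id, recent_traffic):
    if stream_id not in recent_traffic:
        # A recently created stream has no StreamCount data yet; that is
        # "unknown traffic", not "zero messages per week".
        return None
    return recent_traffic[stream_id]
```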
This refactors the generate_topic_history_from_db_rows function to not
depend upon the assumption of rows passed as parameter to be sorted in
reverse order of max_message_id field.
Additionally, we add sorting and some tests that verify correct
handling of these cases.
In this commit we add a new endpoint so as to have a way of fetching
topic history for a given stream ID without having to be logged in.
This can only happen if the stream in question is web public; otherwise
we just return an empty topics list. This endpoint is quite analogous
to get_topics_backend, which is used by our main web app.
In this commit we also do a bit of duplication regarding the query
responsible for fetching all the topics from the DB. This query is
exactly the same as what we have in the
get_topic_history_for_stream function in actions.py. Duplicating it
now is the right thing to do, because this query is really going to
change when we add another criterion for filtering messages, namely:
Only topics for messages which were sent during the period the
corresponding stream was web public should be returned.
Once we do that, the query will change, and thus it won't
really be code duplication!
Fixes #7665
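A hedged sketch of the new endpoint's core behavior; the lookup and query helpers named here are illustrative:

```
def get_web_public_topics_backend(request, stream_id):
    stream = get_stream_by_id(stream_id)  # illustrative lookup, no login required
    if not stream.is_web_public:
        # Not web public: return an empty topics list rather than history.
        return json_success({"topics": []})
    # Duplicates (for now) the query used by get_topic_history_for_stream.
    topics = get_topic_history_for_web_public_stream(stream.recipient)  # illustrative
    return json_success({"topics": topics})
```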
In the case of invitation events, an 'invites_changed' event without
any real payload is sent to all the realm admins and the user.
The event is handled by reloading the list to view recent changes.
Commit tweaked by shubhamdhama:
* Send an `invite_changed` event when a user accepts an invite.
Also, added a test for the same.
* No need to delete the invite list in the frontend; the current logic
handles the case when the invite data is changed properly.
* Extracted the common logic for sending an event into
`notify_invites_changed`.
This is all the plumbing that makes it possible to enable the
stream_email_notifications setting via the Zulip API. The flag still
doesn't do anything yet, but this is a nice checkpoint along the way
to implementing this feature.
This commit creates a new field called delivery_email. For now, it is
exactly the same as email upon user profile creation and should stay
that way even when email is changed, and is used only for sending
outgoing email from Zulip.
The purpose of this field is to support an upcoming option where the
existing `email` field in Zulip becomes effectively the user's
"display email" address, as part of making it possible for users
actual email addresses (that can receive email, stored in the
delivery_email field) to not be available to other non-administrator
users in the organization.
Because the `email` field is used in numerous places in display code,
in the API, and in database queries, the shortest path to implementing
this "private email" feature is to keep "email" as-is in those parts
of the codebase, and just set the existing "email" ("display email")
model field to be something generated like
"username@zulip.example.com" for display purposes.
Eventually, we'll want to do further refactoring, either in the form
of having both `display_email` and `delivery_email` as fields, or
renaming "email" to "username".
We extract out the logic for generating a list of all historical
topics for a given stream as a separate function. This avoids code
duplication when we add the similar code path for grabbing all topics
for web public streams.
The main remaining todo for correctly populating
RealmAuditLog.requires_billing_update is supporting the de-seating (and
corresponding re-seating) that happens after being offline for two weeks.
The only changes visible at the AST level, checked using
https://github.com/asottile/astpretty, are
zerver/lib/test_fixtures.py:
'\x1b\\[(1|0)m' ↦ '\\x1b\\[(1|0)m'
'\\[[X| ]\\] (\\d+_.+)\n' ↦ '\\[[X| ]\\] (\\d+_.+)\\n'
which is fine because re treats '\\x1b' and '\\n' the same way as
'\x1b' and '\n'.
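A quick check of that equivalence:

```
import re

# re interprets escape sequences in the pattern itself, so a pattern spelled
# with the two characters "\x1b" matches the ESC byte exactly like a pattern
# containing the literal ESC character.
assert re.match('\\x1b\\[(1|0)m', '\x1b[1m')
assert re.match('\x1b\\[(1|0)m', '\x1b[1m')
```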
Signed-off-by: Anders Kaseorg <andersk@mit.edu>