Before, presence information for an entire realm could only be queried via
the `POST /api/v1/users/me/presence` endpoint. However, this endpoint also
updates the presence information for the user making the request. Therefore,
bot users are not allowed to access this endpoint because they don't have
any presence data.
This commit adds a new endpoint `GET /api/v1/realm/presence` that just
returns the presence information for the realm of the caller.
Fixes #10651.
We don't want really long urls to lead to truncated
keys, since two different urls could then theoretically
have their previews mixed up.
Also, this suppresses warnings about exceeding the
250 char limit.
Finally, this gives the key a proper prefix.
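As a rough sketch of the idea (the helper name and exact prefix here
are illustrative, not the real implementation):

    import hashlib

    def preview_url_cache_key(url: str) -> str:
        # Hashing yields a short, fixed-length key, so arbitrarily long
        # urls can't be truncated into colliding keys, and we stay well
        # under memcached's 250-character limit.
        return "url_embed_data:" + hashlib.sha1(url.encode("utf-8")).hexdigest()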
Now that we allow multiple users to have registered the same token, we
need to configure calls to unregister tokens to only query the
targeted user_id.
We conveniently were already passing the `user_id` into the push
notification bouncer for the remove API, so no migration for older
Zulip servers is required.
If cordelia searches on pm-with:iago@zulip.com,cordelia@zulip.com,
we now properly treat that the same way as pm-with:iago@zulip.com.
Before this fix, the query would initially go through the
huddle code path. The symptom wasn't completely obvious, as
eventually a deeper function would return a recipient id
corresponding to a single PM with @iago@zulip.com, but we would
only get messages where iago was the recipient, and not any
messages where he was the sender to cordelia.
I put the helper function for this in zerver/lib/addressee, which
is somewhat speculative. Eventually, we'll want pm-with queries
to allow for user ids, and I imagine there will be some shared
logic with other Addressee code in terms of how we handle these
strings. The way we deal with lists of emails/users for various
endpoints is kind of haphazard in the current code, although
granted it's mostly just repeating the same simple patterns. It
would be nice for some of this code to converge a bit. This
affects new messages, typing indicators, search filters, etc.,
and some endpoints have strange legacy stuff like supporting
JSON-encoded lists, so it's not trivial to clean this up.
Tweaked by tabbott to add some additional tests.
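A minimal sketch of the kind of helper this adds (the name and exact
signature here are hypothetical):

    from typing import List

    def filter_out_self_from_pm_with(emails: List[str], my_email: str) -> List[str]:
        # If the searching user includes themselves along with other
        # users in a pm-with operand, drop their own email, so the
        # query maps to the same conversation as the shorter form.
        emails = sorted(set(emails))
        if len(emails) > 1 and my_email in emails:
            emails.remove(my_email)
        return emails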
For our bots that use GenericOutgoingWebhookService
(which are basically Zulip style bots), we now
include a "content-type" header of "application/json".
We accomplish this by having the service classes
implement their own custom method called
`send_data_to_server`. For the Slack-related
code, we just extracted code from `do_rest_call`,
and then for the Zulip-related code, we added
a `headers` parameter.
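A sketch of the resulting structure, assuming simplified names (the
real service classes carry more state than shown here):

    import json
    from typing import Any, Dict

    import requests

    class GenericOutgoingWebhookService:
        def __init__(self, base_url: str) -> None:
            self.base_url = base_url

        def send_data_to_server(self, request_data: Dict[str, Any]) -> requests.Response:
            # Zulip-style bots now get the event as a JSON body with
            # an explicit content-type header.
            headers = {"content-type": "application/json"}
            return requests.post(self.base_url, data=json.dumps(request_data),
                                 headers=headers)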
This fixes a couple things:
* process_event() is a pretty vague name
* returning tuples should generally be avoided
* we were producing the same REST parameters in both
subclasses
* relative_url_path was always blank
* request_kwargs was always empty
Now process_event() is called build_bot_request(),
and it only returns request data,
not a tuple of `rest_operation` and `request_data`.
By no longer returning `rest_operation`, there are
fewer moving parts. We just have `do_rest_call` make
a POST call.
Before this change, we instantiated base_url in a superclass,
and subclasses then copied base_url into a dictionary that
got returned to our caller.
Now we just pull base_url out of service when we need to make
the REST call.
We move the JSON parsing step into the
higher level function: process_success_response().
In the unlikely event that we'll start integrating
with a solution that doesn't use JSON, we can deal
with that, and for now doing the parsing in one
place will help us make error reporting more
consistent.
In a subsequent commit we'll introduce better
error handling for malformed JSON.
The earlier code here, if it got a payload with
"response_string" as a key, would prefix the
corresponding value with "Success!". We just
want the bot to set its own content.
The code is reorganized here so that process_success()
always produces a value keyed by "content" from
incoming data, and then process_success_response()
doesn't do any fancy munging of the data.
There's no reason to return a failure message in
process_success(), since it's implied to be part of
the success codepath. I didn't look at the full history
of how the strange API evolved, but the second element
of the tuple was clearly noise by the time I got here.
Neither of the subclasses ever set it, and none of the
consumers used it.
This two-line function wasn't really carrying its
weight, and it just made it harder to refactor the
overall codepath.
Eliminating the function forces us to mock at a slightly
deeper level, which is probably a good thing for what
the test intends to do. The deeper mock still verifies that
we're sending the message (good) without digging into
all the details of how we send it (good).
Note that we will still keep around the similarly named
`fail_with_message` helper, which is a lot more useful.
(The succeed/fail scenarios aren't really symmetric here.
For success, there are fewer codepaths that do more complex
things, whereas we have lots and lots of failure codepaths
that all do the same simple thing of replying with a canned
message.)
Before this change, subclasses of OutgoingWebhookServiceInterface
would return a raw string as the first element of their return
tuples in process_success(). This is not a very flexible
design, as it prevents the bot from passing extra data like
`widget_content`.
It's also possible in the future that we'll want to let outgoing
bots reply directly to senders who mention them on streams, and
again the original design was overly constrained for that.
This commit does not actually change any functionality yet.
Tweaked by tabbott to use a declared constant rather than just use
5000 in multiple places; this also means we can change the count
without updating translations.
Fixes #10446.
Fixes the urgent part of #10397.
It was discovered that soft-deactivated users don't get mobile push
notifications for messages on private streams for which they have
configured push notifications.
Reason: `handle_push_notification` calls `access_message`, and that
logic assumes that a user who is a recipient of a message has an
associated UserMessage row. Those UserMessage rows are created
lazily for soft-deactivated users, so they might not exist (yet)
until the user comes back.
Solution: Ensure that a UserMessage row is created for users in
stream_push_user_ids and stream_email_user_ids in create_user_messages.
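The shape of the fix, as a hedged sketch (parameter names simplified
from the actual create_user_messages code):

    from typing import Set

    def needs_user_message_row(user_id: int,
                               long_term_idle_user_ids: Set[int],
                               stream_push_user_ids: Set[int],
                               stream_email_user_ids: Set[int]) -> bool:
        # Soft-deactivated (long-term idle) users normally get their
        # UserMessage rows created lazily when they return, but users
        # due a push or email notification need a real row now, or
        # handle_push_notification's access_message check will fail.
        if user_id not in long_term_idle_user_ids:
            return True
        return (user_id in stream_push_user_ids
                or user_id in stream_email_user_ids)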
At some point as part of the process of supporting renumbering data,
we changed the structure of our file uploads to expect `path` to match
`s3_path`, with both having the relative path within the overall
hierarchy (including the realm ID). This change updates the more
rarely-used S3 export code path to use that model, fixing a crash when
messages reference an Attachment object with a rewritten path_id.
Note we're no longer using subscriptions_html in the help docs, so no need
to test for it. There is already a test for subscriptions_html in
IntegrationTest.
We start by stripping the ids in front of the name before the database
lookup. This has the advantage of not mentioning anyone if an incorrect
user id and full name combination is specified, as well as not having
to query the database twice, once by full name and then by id.
Previously, we were storing only the most recent person with the same
full name as others; this commit adds new keys to the dict such that
simply looking by name would get you the newest user with this name,
and the get_user_by_id function can index the remaining users.
This is largely inspired by requests from people who dislike
Google's new emoji set. A lot of people were requesting to revert
back to the old blob emoji set, so we are re-enabling this feature
after making the relevant infrastructure changes for supporting
Google's old blob emoji set and re-adding support for the Twitter
emoji set.
Fixes: #10158.
Fixes part of #10297.
Use FAKE_LDAP_NUM_USERS which specifies the number of LDAP users
instead of FAKE_LDAP_EXTRA_USERS which specified the number of
extra users.
This adds a feature in the "Notification" section of the "Settings" tab,
which lets users enable or disable login email notifications.
Tweaked by tabbott to simplify the test.
Fixes: #5795, progress towards #5854.
Also use the form's name for selecting it in the casper tests,
since a form with action=new is present on both /new
and /accounts/new/send_confirm/, which breaks the
test in CircleCI:
waitWhileVisible('form[action^="/new/"]') never stops
waiting.
We also remove some unreachable code. Calling
split() always returns at least one token, even
if it's just the empty string. This is tested
directly on this commit, plus messages with
empty content get rejected pretty early in
the execution path.
In user type custom fields, the field value is a list of user ids. We
weren't converting that list to a JSON object in the update event
payload. This throws an error in the frontend, because we store a
stringified representation of the custom field value. After the
update event is received, the field value type thus gets updated from
string to array, which throws a JSON parsing error.
The function being tested here was kind of an
emergency response to some spam attacks. It
works for a pretty specific set of circumstances,
so it requires a lot of setup.
We may eliminate this function as we improve
our realm "plan types", and if that happens, we
can either eliminate this test or repurpose it.
The output of generate_dev_ldap_dir was being tested against the fixture
located at zerver/tests/fixtures/ldap_dir.json. This didn't make much sense
as generate_dev_ldap_dir was itself used by developers to generate/update
the fixtures. Instead, test_generate_dev_ldap_dir checks the structure of
the dict returned by generate_dev_ldap_dir. The structure is checked by
regex checks, checking whether the dict contains some keys or not, etc.
This prevents leaking some variables into an already
cluttered function.
We also add test coverage for what's now an
early-exit condition in the new function--we exempt
public MIT streams from these events.
This extends a test that proved only what Cordelia
could do with/without super_user privileges when she
was trying to send to an unsubscribed stream as herself.
Now the test shows the same powers extend to Cordelia
when she's sending messages on behalf of a mirrored
user.
We simulate a race condition by mocking create_user
to actually create a user, but then raise an
IntegrityError (as if another process had actually
created the user, not our test).
I also changed the real code to use explicitly
named parameters.
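Roughly, the mock looks like this (import path assumed for
illustration):

    from unittest import mock

    from django.db import IntegrityError

    from zerver.lib.actions import create_user  # path assumed

    def test_create_user_race(self) -> None:
        def wrapped_create_user(*args, **kwargs):
            # Really create the user, then raise, as if another
            # process had created the same user just before us.
            create_user(*args, **kwargs)
            raise IntegrityError()

        with mock.patch('zerver.lib.actions.create_user',
                        side_effect=wrapped_create_user):
            ...  # exercise registration and assert that it recovers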
These test cases are used to test the cost of stream creation.
Three scenarios of stream creation are covered:
1) create a public stream;
2) create a private stream;
3) create a public stream with announce=true when there is a notification stream.
Fix: #4804.
We've been getting reports from users that our Freshdesk webhook
isn't working correctly. It turns out that the issue had nothing
to do with the webhook implementation itself!
In freshdesk/doc.md, we have a JSON template we ask users to
copy/paste into a textbox in the Freshdesk UI. That JSON template
contains "{{" and "}}" characters which we escaped as Unicode
decimals to prevent clashes with Jinja2 syntax in other parts
of the same template. This worked for a while!
But thanks to the changes introduced as part of the
nested_code_blocks extension, such escaped characters were never
decoded, leading users to copy/paste the same template but with
raw escaped unicode representations of "{{" and "}}" inside. And
that eventually broke our webhook implementation.
This commit makes sure that such characters are properly "unescaped",
just for Freshdesk docs.
We have code to prevent newbies on open realms
from inviting users. This is mostly intended
to hinder spammers. This commit just adds some
test coverage.
Our get_streams_traffic function used to query
all streams in the StreamCount table if you
passed in `None` for `streams`.
Now we require that you pass in a list of
stream_ids.
I don't know how much work this will save
the database, since probably the bulk of
the work is aggregating. If we need to fine-tune
DB performance, we could possibly add
`realm` as an argument and add it to the filter.
What we'll immediately get, for large multi-realm
installations, is less data over the wire and
less work for the ORM.
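A hedged sketch of the new shape (model fields simplified; the real
query also filters on the analytics property and date range):

    from typing import Dict, Set

    from django.db.models import Sum

    from analytics.models import StreamCount  # path assumed

    def get_streams_traffic(stream_ids: Set[int]) -> Dict[int, int]:
        # Only aggregate counts for the streams the caller cares
        # about, instead of every stream on the installation.
        rows = (StreamCount.objects
                .filter(stream_id__in=stream_ids)
                .values('stream_id')
                .annotate(value=Sum('value')))
        return {row['stream_id']: row['value'] for row in rows}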
This commit adds some more tests related to patching
a bot's `default_sending_stream`.
Unfortunately, this didn't reach the code that I was
intending to add line coverage to, since checks happen
higher up in the stack, but the test code I added
is probably worthwhile.
We want our methodology for extracting the last message
id to be consistent, particularly in terms of how we
handle edge cases. (I'll concede that the
`bulk_remove_subscriptions` codepath never hits that
corner case in practice, but it's harmless to handle
the theoretical case.)
It may also be nice to have this function show up
clearly in profiling.
This also adds some direct testing to the function.
It's not clear to me why we don't use `latest('id')`
in the implementation, but that's outside the scope
of this commit.
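A minimal sketch of such a helper (the -1 sentinel for an empty table
is an assumption here):

    from django.db.models import Max

    from zerver.models import Message  # path assumed

    def get_last_message_id() -> int:
        # Return -1 when there are no messages at all, so callers
        # always get an integer back in the empty-table corner case.
        last_id = Message.objects.aggregate(Max('id'))['id__max']
        if last_id is None:
            return -1
        return last_id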
If `TEXT_EMOJISET` is the currently selected emoji set, then fall back
to `GOOGLE_EMOJISET` for displaying emojis in the emoji picker and
composebox typeahead. We should pre-load the spritesheets in `emoji.js`
even in the case of the text emoji set; otherwise, on slow networks,
the emoji picker will initially appear empty.
The timestamp used for new login notifications always used the 12-hour
format. We now use the format preferred by the user, as reflected in
their settings.
Fixes #10124.
Users in the waiting period category cannot subscribe other users to
a stream. When a user tries to mention another unsubscribed user, a
warning message appears with a subscribe button on it to subscribe
the other user.
This commit removes the subscribe button and changes the warning text
for users in the waiting period category.
Issue: When you created a new organization with /new, the "new login"
emails were sent. We previously had a hack of adding the
.just_registered property to the user Python object to attempt to
prevent the emails, and checking that in zerver/signals.py. This
commit gets rid of the .just_registered check.
Instead of the .just_registered check, we now check whether the user
joined more than a minute before the login.
A test test_dont_send_login_emails_for_new_user_registration_logins
already exists.
Tweaked by tabbott to introduce the constant JUST_CREATED_THRESHOLD.
Fixes #10179.
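A sketch of the check (threshold value and helper name illustrative):

    from datetime import timedelta

    from django.utils.timezone import now as timezone_now

    JUST_CREATED_THRESHOLD = 60  # seconds

    def is_account_just_created(user_profile) -> bool:
        # Suppress "new login" emails for the login that happens as
        # part of account creation itself.
        return (timezone_now() - user_profile.date_joined
                < timedelta(seconds=JUST_CREATED_THRESHOLD))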
Right now it only has one function, but the function
we removed never really belonged in actions.py, and
now we have better test coverage on actions.py, which
is an important module to get to 100%.
In this commit we fix a bug due to which url preview images for urls
to custom emojis, realm icons or user avatars appeared broken when
such urls would be part of a Zulip message.
This is a preparatory commit toward fixing a bug in which a user posts
a link to a custom emoji, user avatar, or realm icon in a Zulip
message.
In this commit we just adjust the url generation in the backend so
that the generated encrypted url, which the user is supposed to be
redirected to (and which thus ultimately reaches thumbor), includes
the '/user_uploads/' prefix.
This is necessary because 'user_uploads' and 'user_avatars' (or any
other item under the 'user_avatars' endpoint) have different folder
locations under the local file storage backend: the 'user_uploads'
endpoint's content is stored in a 'files' directory, whereas the
'user_avatars' endpoint's content is stored in an 'avatars' directory.
Thumbor needs to know from which directory a particular local file
needs to be retrieved, and therefore zthumbor/loaders.py adds
a prefix location for the directory.
Since in an upcoming commit we are going to add the 'avatars' folder
(the user_avatars directory location) as a prefix, this preparatory
commit simplifies those changes.
The 'last_modified' value in emoji records is
needed for uploading the file to the S3 backend.
We set it in the function 'import_uploads_s3'.
We also have to remove the keyword 'last_modified'
while building the RealmEmoji dict, as it is not
a field which exists in RealmEmoji objects.
This uses the recently introduced active_mobile_push_notification
flag; messages that have had a mobile push notification sent will have
a removal push notification sent as soon as they are marked as read.
Note that this feature is behind a setting,
SEND_REMOVE_PUSH_NOTIFICATIONS, since the notification format is not
supported by the mobile apps yet, and we want to give a grace period
before we start sending notifications that appear as (null) to
clients. But the tracking logic to maintain the set of message IDs
with an active push notification runs unconditionally.
This is designed with at-least-once semantics, so mobile clients need
to handle the possibility that they receive duplicate requests to
remove a push notification.
We reuse the existing missedmessage_mobile_notifications queue
processor for the work, to avoid materially impacting the latency of
marking messages as read.
Fixes #7459, though we'll need to open a follow-up issue for
using these data on iOS.
Historically, queue_json_publish had a special third argument that was
basically its default mock behavior in the test suite. We've been
migrating away from that model, because it was confusing and resulted
in poor test coverage of our queue worker code paths; this was one of
the last holdouts.
As it turns out, we don't exercise this code path in a way that
impacts tests much; the main downside of this change is a likely small
penalty to performance of the full test suite when sending private
messages.
Following recent test flakes that were traced down to this not
having been called (causing `receiver_is_off_zulip` to depend on test
ordering), it makes sense to centralize this.
I think it should always have been in ZulipTestCase; it appears the
reason it wasn't from the beginning was that originally only
test_events.py interacted with it, and do_test there still needs to
call this directly (because it can be called multiple times within a
single test). And then we did the wrong thing as we expanded use of
the Tornado event_queue code in tests to more of the codebase.
This prevents these unit tests from accidentally leaking data outside
their boundaries.
Verified using a test that fails after test_events without this change.
Apparently, we weren't calling the proper clear functions inside the
Tornado tests, which resulted in unexpected behavior in other tests
that were relying on the Tornado event queue system being empty.
(In this case, a new test for mobile push notifications that assumed
receiver_is_off_zulip() was always true failed after this was run).
Private messages are not supported in the Slack-format webhook.
Instead of raising a NotImplementedError, we now warn the user,
via a message sent to them, that the PM service is not
supported.
Added tests for the same.
Fixes #9239.
This implements a significant performance optimization for users
clicking the `Private messages` narrow in the Zulip UI, especially for
those users who do not have 50 recent private messages in an
organization with a lot of stream message traffic (because then
previously, postgres needed to scan through a huge amount of history
to find enough private messages).
The database index powering it can also support many other queries we
might want to do in the future to support "recent conversations" type
features.
Fixes #6896.
The previous message was potentially a lot more ambiguous about
whether this was something about presence. "Deactivated" makes it
explicit that some action was taken to deactivate the account.
After the messages have been imported, set the rendered_content of the
messages instead of leaving its value to be 'None'.
This is important to ensure that:
(1) Performance for users is good after completing the import.
(2) The database's full-text indexes have all of the imported messages
(which only happens properly when Message rows have their
rendered_content field edited).
Fixes #9168.
In certain cases we have to load a template directly because it
isn't in Jinja2's recognized template directories. This commit
adds a test to make sure that absolute paths are recognized
if they are pure Markdown files.
Generates ldap_dir based on the mode and the number of extra users.
It supports three modes, 'a', 'b' and 'c', description for which
can be found in prod_settings_templates.py.
We now update all test messages to have a pub_date
of "now" in the setUp() function in TestRetentionLib.
We've seen tests flake on query counts before this
patch. It's not certain that the test flaked due
to time-related glitches, but it seems the most
plausible explanation.
Since otp_encrypt_api_key only encrypts API keys, it doesn't require
access to the full UserProfile object to work properly. Now the
parameter it accepts is just the API key.
This is preparatory refactoring for removing the api_key field on
UserProfile.
Now reading API keys from a user is done with the get_api_key wrapper
method, rather than directly fetching it from the user object.
Also, every place where an action should be done for each API key is now
using get_all_api_keys. For the moment, this method returns a
single-item list, containing the specified user's API key.
This commit is the first step towards allowing users to have multiple
API keys.
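As a sketch, the wrappers are roughly:

    from typing import List

    def get_api_key(user_profile) -> str:
        return user_profile.api_key

    def get_all_api_keys(user_profile) -> List[str]:
        # A single-item list for now; this gives us one place to
        # change when users can have several API keys.
        return [user_profile.api_key]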
The validate_api_key sentence may look a bit confusing since we are
using webhook_bot's email address but default_bot's API key.
At first sight, and without any context on these tests, it may look like
that's just a typo, but we do want it to be like it is right now because
that way the API key used doesn't correspond to the provided email
address (triggering some untested parts of our backend logic).
Due to copyright issues with potentially displaying Apple emojisets on
non-apple devices, as well as iamcal dropping support for the emojione
emojiset (see https://github.com/iamcal/emoji-data/pull/142), we are
dropping (perhaps temporarily) support for allowing users to switch
emojisets in Zulip.
This commit just hides the feature from the user but leaves most of
the infrastructure in place so that in the future if we decide to
re-enable the support we will not need to redo the infrastructure work
(some JS-side code is deleted, mostly because we'll want to re-add the
feature using the do_settings_change infrastructure anyway).
The most likely emoji set to add is the legacy "blobs" Google emoji
set, since it seems popular with some users.
Tweaked by tabbott to remove some additional JS code and update the
changelog.
This test refactor makes the subscription/stream settings changes use standard
APIs and thus be easier to follow (and more robust to subtle re-fetching bugs).
This is a follow-up to #9181.
Renaming a user group to a name shared by another group wasn't a scenario
handled by the backend, and the server errored whenever this was
attempted.
Now a json_error is returned, letting the user know that a user group
with that name already exists.
The use_first_unread_anchor parameter allows automatically setting the
anchor to the first message that hasn't been read in this narrow.
Therefore it isn't necessary to specify an anchor when this parameter is
enabled.
Note from Tim: Arguably, we should think about making
`use_first_unread_anchor` the default behavior when anchor is
unspecified, but that's for later consideration.
We found out in #9953 that, apparently, loading the OpenAPI file was
taking about 5% of the Zulip server startup time.
Since in many cases (especially in development) having the file loaded
won't be necessary at all, we now read it the first time data from the
OpenAPI spec is needed.
Tweaked by tabbott to add a test.
Automatically detect if the OpenAPI spec file has been modified since
the last time it was loaded into memory, and if it has, automatically
reload it to have the latest version.
This feature is designed with development environments in mind. The main
benefit is being able to see the changes made to the OpenAPI document
without needing to restart the development server, which is tedious and
slows the documentation workflow down.
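Taken together, these two commits amount to something like this sketch
(class and attribute names assumed):

    import os

    import yaml

    class OpenAPISpec:
        def __init__(self, path: str) -> None:
            self.path = path
            self.last_load_time = 0.0
            self.data = None

        def reload(self) -> None:
            self.last_load_time = os.path.getmtime(self.path)
            with open(self.path) as f:
                self.data = yaml.safe_load(f)

        def spec(self):
            # Parse the file lazily on first use, and re-read it
            # whenever it has changed on disk since the last load.
            if (self.data is None
                    or os.path.getmtime(self.path) > self.last_load_time):
                self.reload()
            return self.data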
When the last user (only in the case of an admin) unsubscribes from a
private stream, the stream page doesn't get updated, because we delete
the private stream as soon as the last user unsubscribes from it.
As a result, `sub` becomes undefined in the frontend, since the stream
is deleted before the unsubscribe-user-from-stream event is received.
Fix this by changing the order of events sent to the frontend: the
`subscription: remove` event should be sent before the `stream: delete`
event from the backend.
This fixes a bug where administrators couldn't remove private
unsubscribed streams from the "default streams" list, because
access_stream_by_name didn't give them access to the stream object.
This commit adds a 'resize_gif()' function which extracts each frame,
resizes it, and coalesces the frames again to form the resized GIF
while preserving the duration of the GIF. I read some stackoverflow
answers, all of which were referring to BigglesZX's script
(https://gist.github.com/BigglesZX/4016539) for working with animated
GIFs. I modified the script to fit our usecase and did some manual
testing, but the function was failing for some specific GIFs and was
not preserving the duration of the animation. So I went ahead and read
about the GIF format itself as well as PIL's `GifImagePlugin` code and
came up with this simple function which gets the work done in a much
cleaner way. I tested this function on a number of GIF images from
giphy.com and it resized all of them correctly.
Fixes: #9945.
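The approach, as a hedged sketch using Pillow (frame handling
simplified; the real function deals with more GIF corner cases):

    import io

    from PIL import Image, ImageSequence

    def resize_gif(im: Image.Image, size: int = 64) -> bytes:
        # Resize every frame individually, then reassemble the
        # animation, carrying over per-frame durations so playback
        # speed is preserved.
        frames = []
        durations = []
        for frame in ImageSequence.Iterator(im):
            durations.append(frame.info.get('duration', 100))
            frames.append(frame.convert('RGBA').resize((size, size),
                                                       Image.LANCZOS))
        out = io.BytesIO()
        frames[0].save(out, format='GIF', save_all=True,
                       append_images=frames[1:], duration=durations,
                       loop=im.info.get('loop', 0))
        return out.getvalue()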
Email notifications for new logins displayed the login timestamp's
timezone in the location format (e.g. "Asia/Taipei"). Since that can
lead users to understand the login came from that place, the timezone in
those emails is now represented in +/-HHMM format.
Fixes #10178.
This adds a new function called handle_remove_push_notification in
zerver/lib/push_notifications.py, which takes the user_profile id and
the id of the message which has to be removed.
For now, the function only supports GCM (and is mostly there for
prototyping).
The payload which is being delivered needs to contain the narrow
information and the content of the message.
This should make it much simpler for the mobile apps to line up the
data from server_settings against the data in the notifications.
Addresses part of #10094.
This ensures that the format of these data structures matches that for
in-realm bots in the main users data structure (including avatars,
etc.).
Fixes #10138.
This renames Realm.show_digest_email field to
digest_emails_enabled, for greater clarity as to what it does
just from seeing the setting name, without having to look it up.
Fixes part of #10042.
We were getting event-handling exceptions in JS in production if a new
user was created and then went and set a custom profile field, because
there was no `.profile_data` on their user object. We were able to
trace the issue down to the fact that our events didn't include that
field when creating a new user.
This renames Realm.restricted_to_domain field to
emails_restricted_to_domains, for greater clarity as to what it does
just from seeing the setting name, without having to look it up.
Fixes part of #10042.
We already had a setting for whether these logs were enabled; now it
also controls which stream the messages go to.
As part of this migration, we disable the feature in dev/production by
default; it's not useful for most environments.
Fixes the proximal data-export issue reported in #10078 (namely, a
stream having been created with nobody ever subscribed to it).
This is a preparatory refactor for adding
UserProfile.can_subscribe_other_users.
Although there existed a test for limiting users from creating
streams at `test_subs.test_user_settings_for_adding_streams`,
it did not test the logic inside can_add_streams; tests have
been added to solve that issue.
It's sort of an unusual state to get into, to have a user own a
deactivated bot, when they can't create a bot of that type, but
definitely a valid possibility that we should be checking for.
Fixes #10087.
This setting isn't intended to exist long term, but instead to make it
possible to merge our search pills code before we're ready to cut over
production environments to use it.
Gitter mentions are in the format '@usermention',
and the mentions are included in the export data as:

    "mentions": [
        {
            "screenName": "usermention",
            "userId": "54d7876c15522ed4b3dbbefb",
            "userIds": []
        }
    ]
We extract this data and map this mention to @**usermention**
for Zulip.
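A minimal sketch of the mapping step (the helper name and mention
regex here are illustrative):

    import re
    from typing import Dict

    def convert_gitter_mentions(content: str, name_map: Dict[str, str]) -> str:
        # name_map maps Gitter screen names to Zulip full names.
        def replace(match):
            screen_name = match.group(1)
            return '@**%s**' % (name_map.get(screen_name, screen_name),)
        return re.sub(r'@(\w+)', replace, content)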
Various pieces of our thumbor-based thumbnailing system were already
merged; this adds the remaining pieces required for it to work:
* a THUMBOR_URL Django setting that controls whether thumbor is
enabled on the Zulip server (and if so, where thumbor is hosted).
* Replaces the overly complicated prototype cryptography logic
* Adds a /thumbnail endpoint (supported both on web and mobile) for
accessing thumbnails in messages, designed to support hosting both
external URLs as well as uploaded files (and applying Zulip's
security model for access to thumbnails of uploaded files).
* Modifies bugdown to, when THUMBOR_URL is set, render images with the
`src` attribute pointing to /thumbnail (to provide a small thumbnail
for the image), along with adding a "data-original" attribute that
can be used to access the "original/full" size version of the image.
There are a few things that don't work quite yet:
* The S3 backend support is incomplete and doesn't work yet.
* The error pages for unauthorized access are ugly.
* We might want to rename data-original and /thumbnail?size=original
to use some other name, like "full", that better reflects the fact
that we're potentially not serving the original image URL.
This adds support to the event queue system for triggering
missed-message notifications (whether push or email) to support the
stream push notifications feature.
This modifies the logic for formatting outgoing missed-message emails
to support the upcoming stream email notifications feature (providing
a new format for the subject, etc.).
This change converts our logic for determining whether the current
user was mentioned in a group of messages from the implicit "if it was
sent to a stream, it's a mention" to the explicit "we actually know
there was a mention in the message". This is an important
prerequisite for our upcoming feature to support getting email
notifications for streams always (even without a mention).
Because in upcoming commits, we'll want to pass additional per-message
data into do_send_missedmessage_events_reply_in_zulip, we need to
expand the format for how we represent messages to account for that.
This refactors the generate_topic_history_from_db_rows function to not
depend upon the assumption of rows passed as parameter to be sorted in
reverse order of max_message_id field.
Additionally, we add sorting and some tests that verify correct
handling of these cases.
In this commit we add a new endpoint so as to have a way of fetching
topic history for a given stream id without having to be logged in.
This can only happen if said stream is web-public; otherwise we
just return an empty topics list. This endpoint is quite analogous
to get_topics_backend, which is used by our main web app.
In this commit we also do a bit of duplication regarding the query
responsible for fetching all the topics from the DB. This query is
exactly the same as what we have in the
get_topic_history_for_stream function in actions.py. Duplicating it
now is the right thing to do, because this query is really going to
change when we add another criterion for filtering messages, namely:
Only topics for messages which were sent during the period the
corresponding stream was web-public should be returned.
Once we add that, the query will change, and thus it won't
really be a code duplication!
This migrates Zulip to use a dramatically better set of names and
aliases for our emoji set, defined in emoji_names.py (which is in turn
manually generated from our hand-curated CSV file).
This should significantly improve the experience of using Zulip's
emoji picker and emoji typeahead for finding what one is looking for.
Fixes #7665.
In case of invitation events, an 'invites_changed' event without
any real payload is sent to all the realm admins and the user.
The event is handled by reloading the list to view recent changes.
Commit tweaked by shubhamdhama:
* Send an `invite_changed` event when a user accepts an invite.
Also, added the test for the same.
* No need to delete the invite list in frontend, current logic
handles the case when the invite data is changed properly.
* Extracted the common logic for sending an event into
`notify_invites_changed`.
POST and DELETE operations in /users/me/alert_words may leave the
user's list of alert words in an unknown state: POSTing adds words to a
list that the client may not know from the beginning, and the same with
DELETE.
Replying with the current status of the alert words list is the best way
of letting the client alter the list and knowing its contents after
being updated with a single query.
This is especially useful taking into account that POSTing words that
were already present and DELETing non-existing words both produce a
successful response.
An extra test has been added to avoid leaving GET /users/me/alert_words
too untested.
For importing huddles we have to have unique huddle hashes.
Huddle hashes are extracted from the list of users participating
in a huddle. So to extract these user ids, we first use the huddle
id to get the matching recipient, and then we use subscriptions
to get the user ids from the recipient id.
Added tests for the same (tests slightly tweaked by tabbott).
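For reference, a huddle hash is roughly computed like this (a sketch;
the exact digest scheme is an assumption):

    import hashlib
    from typing import List

    def huddle_hash(user_ids: List[int]) -> str:
        # A huddle is identified by the sorted, de-duplicated list
        # of participating user ids.
        id_list = sorted(set(user_ids))
        hash_key = ','.join(str(x) for x in id_list)
        return hashlib.sha1(hash_key.encode('utf-8')).hexdigest()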
This is all the plumbing that makes it possible to enable the
stream_email_notifications setting via the Zulip API. The flag still
doesn't do anything yet, but this is a nice checkpoint along the way
to implementing this feature.
This commit adds a Markdown tree-processor extension that renders
multi-line code blocks that are nested inside lists with the correct
formatting. Note that the code block could be nested inside multiple
list levels and would still get rendered correctly.
Tim: This fixes the need for unpleasant workarounds like
f5bfa4e793 and makes nested code blocks
in our documentation look exactly how users would expect them to.
Given that we allow adding emoji reactions by only using the
emoji_name, we should offer the same possibility for removing
reactions to make the experience for API clients not require looking
up emoji codes.
Since this is an additional optional parameter, this also preserves
backward compatibility.
Complete, correct implementations of Zulip's emoji reactions API need
to send both emoji_code and emoji_name in order to add a reaction;
this is important for corner cases around clicking on a reaction in a
message that was first reacted to a year ago, when the emoji
name->code mappings have changed for the given code point in the
intervening time.
However, for folks building tools using the Zulip API, that corner
case is not particularly common; as a result, it makes sense to offer
an interface that allows adding a reaction by only specifying the
emoji name.
This is why the only field that needs to be required is emoji_name,
which can now be mapped to a single emoji. Both fields will be
necessary when "voting" an old reaction, but since we stil allow
specifying the two of them, these changes offer retrocompatibility.
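For example, with the Python API client, removing a reaction can now
be as simple as (the message id here is made up):

    import zulip

    client = zulip.Client(config_file='~/zuliprc')
    result = client.remove_reaction({
        'message_id': 42,         # hypothetical message id
        'emoji_name': 'octopus',  # emoji_code may now be omitted
    })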
This adds a new setting, SOCIAL_AUTH_SUBDOMAIN, which specifies which
domain should be used for GitHub auth and other python-social-auth
backends.
If one is running a single-realm Zulip server like chat.zulip.org, one
doesn't need to use this setting, but for multi-realm servers using
social auth, this fixes an annoying bug where the session cookie that
python-social-auth sets early in the auth process on the root domain
ends up masking the session cookie that would have been used to
determine a user is logged in. The end result was that logging in
with GitHub on one domain on a multi-realm server like zulipchat.com
would appear to log you out from all the others!
We fix this by moving python-social-auth to a separate subdomain.
Fixes: #9847.
* If `zerver_realmauditlog` is present in the exported data,
`RealmAuditLog` would be imported normally.
* If it is not present, the `create_subscription_events`
function would create the `subscription_created`
events for RealmAuditLog. The reason this function
is in `import_realm` module and not in the individual
export tool scripts (like Slack) is because this
function would be common for all export tools.
This fixes#9846 for users who have not already done an import of
their organization from Slack.
Fixes #9846.
Custom profile field values are stored in a different structure
compared to other profile fields in events, so the generic way of
updating fields wasn't updating custom profile fields in the
`apply_event` function.
Fix this by adding a check for custom fields in `apply_event`.
This also adds the appropriate test_events test to verify this code path.
Fixes part of #9875.
This has two advantages:
* We can split bugdown/__init__.py into several modules, and each
module can access these arguments by importing them
* We get rid of the super-ugly `global db_data` construct, replacing
it with an only slightly ugly monkey-ish patching of the
`zerver.lib.bugdown.arguments` module, which is at least
considerably clearer on reading as to what its purpose is.
This commit moves all files previously under the 'app' bundle in
the Django pipeline to being compiled by webpack under the 'app'
entry point. In the process, it moves assets under the app entry
to a file called app.js that consumes all relevant css and js files.
This commit also edits the webpack config to be able to expose certain
variables for third party libraries that are currently required by
some modules. This is bad coding form and should be refactored to
requiring whatever dependencies a module may have; we're just
deferring that to the future to simplify the series of transitions we
need to do here. The variable exposure is done using expose-loader in
webpack.
The app/index.html template is edited to override the newly introduced
'commonjs' block in the base template. This is done as a temporary
measure so as not to disrupt other pages on the app during the transition.
It also fixes the value of the 'this' context that was being inferred
as window by third party libraries. This is done using imports-loader
in the webpack config. This is also messy and probably isn't how we
want things to work long term.
We need to do a small monkey-patching of python-social-auth to ensure
that it doesn't 500 the request when a user does something funny in
their browser (e.g. using the back button in the auth flow) that is
fundamentally a user error, not a server error.
This was present in the pre-rewrite version of our Social auth
codebase, without clear documentation; I've fixed the explanation
part here.
It's perhaps worth investigating with the core social auth team
whether there's a better way to do this.
It's possible to make GitHub social authentication support letting the
user pick which of their verified email addresses to use, using the
python-social-auth pipeline feature. We need to add an additional
screen to let the user pick, so we're not adding support for that now,
but this at least migrates this to use the data set of all emails that
have been verified as associated with the user's GitHub account (and
we just assume the user wants their primary email).
This also fixes the inability for very old GitHub accounts (where the
`email` field in the details might be a string the user wanted on
their GitHub profile page) to use GitHub auth to log in.
Fixes #9127.
https://github.com/houstondatavis/slack-export/blob/master/users.json
JSON or JavaScript decodes "\/" to / (and some encoders always write
"\/" to avoid accidentally creating a </script> tag), while Python
assumes "\/" is a typo for "\\/" and decodes it to \/.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
The only changes visible at the AST level, checked using
https://github.com/asottile/astpretty, are
zerver/lib/test_fixtures.py:
'\x1b\\[(1|0)m' ↦ '\\x1b\\[(1|0)m'
'\\[[X| ]\\] (\\d+_.+)\n' ↦ '\\[[X| ]\\] (\\d+_.+)\\n'
which is fine because re treats '\\x1b' and '\\n' the same way as
'\x1b' and '\n'.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
When GETting an unedited message's edit history, the server wasn't able
to reply properly and produced a 500 error.
Now when that happens, we return a message history that only contains
the original message.
Messages can be bulky, and storing them in a single
data structure can cause a memory error.
In this commit, the messages are written to a file
batch-wise, thus avoiding the memory error.
Previously, the messages were being stored in an output file from
outside the function 'convert_slack_workspace_messages', but
now we store them from inside the mentioned function.
This will help in processing and saving the messages batch-wise
so as to avoid a memory error.
Reactions are returned separately from 'convert_slack_workspace_messages'
rather than 'message_json'.
Also updated test for 'convert_slack_workspace_messages' and an additional
test for reactions is added.
This fixes a test flake introduced here:
317a2fff2a
We need a higher bogus bot owner id to prevent
flakes where our userid sequence gets to 100. (Tests
aren't completely deterministic in what data you
use, since sequences don't get rolled back when
you roll back transactions.)
Add 3 new Markdown emoji tests for newlines, emphasis, and links. The
goal of these tests is to ensure that Markdown operations concerning
emoji are performed in the proper order, with emoji being added correctly
based on other Markdown operations.
See suggestion here: https://git.io/flF5W.
The slash in the command is stripped in the backend,
rather than in the client, to make the client code
cleaner.
This especially helps keep the client code clean for
slash commands which include parameters.
This bug is caused by the conversion of newlines to `<br>` tags,
since `>` is not allowed as a character around an emoticon during
translation.
Also, add a new test case for preventing this bug from occurring in the
future.
Fix #9763.
We're adding more stream types, e.g. splitting private streams into
with/without shared history, adding publicly-archived streams, adding
announce-only streams, etc. So maintaining this text is going to get more
complicated over time.
Also, the right place to explain this stuff is in the stream header, or near
the z-in-a-circle.
This commit also adds translation tags to the messages.
In the records in 'records.json', IDs like the realm_id and
user_profile_id should be integers. This was missing in the
S3 backend, and this commit fixes that.
Added tests for this as well.
For the emojis, in 'records.json', the record should contain
the attribute 'file_name', which was missing in the S3 backend.
This commit adds this attribute, as well as tests for the
records of uploads, avatars and emojis in both local and S3 backend.
Move the zcommands from '/views/messages.py' to
'/lib/zcommand'.
Also, move the zcommand tests from '/tests/test_messages.py'
to '/tests/test_zcommand'.
This fixes two issues:
* Our guest users feature gave guest users access to public stream
attachments even if they couldn't access the public stream.
* After a user joins a private stream with our new shared history
feature, they couldn't see images uploaded before they joined.
The tests need to check for a few types of issues:
* The actual access control permissions.
* How many database queries are used in the various
cases for that second model, especially with multiple messages
referencing an attachment. This function gets called a lot, and we
want to keep it fast.
Fixes #9372.
This new implementation model is a lot cleaner and should extend
better to the non-oauth backend supported by python-social-auth (since
we're not relying on monkey-patching `do_auth` in the OAuth backend
base class).
This adds a common function `access_user_by_id` to access user id
within same realm, complete with a full suite of unit tests.
Tweaked by tabbott to make the test much more readable.
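A hedged sketch of the helper (the error type and import paths are
assumptions):

    from django.utils.translation import ugettext as _

    from zerver.lib.request import JsonableError  # path assumed
    from zerver.models import UserProfile

    def access_user_by_id(user_profile: UserProfile, user_id: int) -> UserProfile:
        # Return the same error whether the id doesn't exist or
        # belongs to another realm, to avoid leaking which ids exist.
        try:
            target = UserProfile.objects.get(id=user_id)
        except UserProfile.DoesNotExist:
            raise JsonableError(_('No such user'))
        if target.realm_id != user_profile.realm_id:
            raise JsonableError(_('No such user'))
        return target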
We've for a long time had the behavior that a bot mentioned in a
stream message receives the notification, regardless of whether the
bot was actually subscribed to the stream.
Apparently, this behavior also triggered if you mentioned a bot in a
private message (i.e. the bot would be delivered the private message
and would probably respond unhelpfully in a new group private message
thread with the PM's original recipients plus the bot).
The fix for this bug is simple: exclude this feature for private
messages.
The new can_access_all_realm_members function is meant to act as a
base function for guest users and Zephyr realm users regarding the
accessibility of the information of other users in the realm.
This fixes an issue where if you make #announce (the default
announcement stream) announce-only, then creating a new stream will
throw an exception (because notification-bot can't send there).
Fixes #9636.
These two slash commands now use zcommand to talk to
the server, so we have no Message overhead, and if you're
on a stream, you no longer spam people by accident.
The commands now also give reasonable messages
if you are already in the mode you ask for.
It should be noted that by moving these commands out of
widget.py, they are no longer behind the ALLOW_SUB_MESSAGES
setting guard.
This adds a /ping command that will be useful for users
to see what the round trip to the Zulip server is (including
only a tiny bit of actual server time to basically give a
200).
It also introduces the "/zcommand" endpoint and zcommand.js
module.
For some reason in my original version I was sending both
content and data to the client for submessage events,
where data === JSON.parse(content). There's no reason
to not just let the client parse it, since the client
already does it for data that comes on the original
message, and since we might eventually have non-JSON
payloads.
The server still continues to validate that the payload
is JSON, and the client will blueslip if the server
regresses and sends bad JSON for some reason.
We now have a simple algorithm: First, look at the URL path
(e.g. /de/, which is intended to be an override). Second, look at the
language the user has specified in their settings.
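In pseudocode, the algorithm is simply this (Django's URL-prefix
helper shown; the settings fallback is simplified):

    from django.utils.translation import get_language_from_path

    def get_user_language(request) -> str:
        # 1. An explicit /de/-style prefix in the URL is an override.
        lang = get_language_from_path(request.path_info)
        if lang is not None:
            return lang
        # 2. Otherwise, use the language from the user's settings.
        return request.user.default_language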
This adds a common function `access_bot_by_id` to access bot id within
same realm. It probably fixes some corner case bugs where we weren't
checking for deactivated bots when regenerating API keys.
Fixes the avatar/emoji part of #8177.
Does not address the issue with uploaded images, since we don't do
anything with them.
Also adds 3 images with different orientation exif tags to
test-images.
Previously, if you had LDAPAuthBackend enabled, we basically blocked
any other auth backends from working at all, by requiring the user's
login flow include verifying the user's LDAP password.
We still want to enforce that in the case that the account email
matches LDAP_APPEND_DOMAIN, but there's a reasonable corner case:
Having effectively guest users from outside the LDAP domain.
We don't want to allow creating a Zulip-level password for a user
inside the LDAP domain, so we still verify the LDAP password in that
flow, but if the email is allowed to register (due to invite or
whatever) but is outside the LDAP domain for the organization, we
allow it to create an account and set a password.
For the moment, this solution only covers EmailAuthBackend. It's
likely that just extending the list of other backends we check for in
the new conditional on `email_auth_backend` would be correct, but we
haven't done any testing for those cases, and with auth code paths,
it's better to disallow than allow untested code paths.
Fixes #9422.
This is the analog of the last commit, for the password reset flow.
For these users, they should be managing/changing their password in
the LDAP server.
The error message for users doing the wrong thing here is nonexistent,
which isn't great, but it should be a rare situation.
Previously, if both EmailAuthBackend and LDAPAuthBackend were enabled,
LDAP users could set a password using EmailAuthBackend and continue to
use that password, even if their LDAP account was later deactivated.
That configuration wasn't supported at all before, so this doesn't fix
a pre-existing security issue, but now that we're making that a valid
configuration, we need to cover this case.
This reflects the changes to the default URL publicly
displayed to the user. It also changes the default
URL of the default test server outgoing webhook, which
prevented the test server flaskbotrc from working out
of the box.
Export of RealmEmoji should also include the image
file of those emojis.
Here, we export emojis for both the local and S3 backends
in a manner similar to attachments and avatars.
Added tests for the same.
This adds the fields `trigger` and `service_email`
to each message event dispatched by outgoing webhook bots.
`trigger` will be used by the Botserver to determine if
a bot is mentioned in the message.
`service_email` will be used by the Botserver to determine
by which outgoing webhook bot the message should be handled.
This should make it easier for us to iterate on a less-dense Zulip.
We create two classes on body, less_dense_mode and more_dense_mode, so
that it's easy as we refactor to separate the two concepts from things
like colors that are independent.
API users, particularly bots, can now send a field
called "widget_content" that will be turned into
a submessage for the web app to look at. (Other
clients can still rely on "content" to be there,
although it's up to the bot author to make the
experience good for those clients as well.)
Right now widget_content will be a JSON string that
encodes a "zform" widget with "choices." Our first
example will be a trivia bot, where users will see
something like this:
Which fruit is orange in color?
[A] orange
[B] blackberry
[C] strawberry
The letters will be turned into buttons on the webapp
and have canned replies.
This commit has a few parts:
- receive widget_content in the request (simply
validating that it's a string)
- parse the JSON in check_message and deeply
validate its structure
- turn it into a submessage in widget.py
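For illustration, the widget_content a trivia bot might send looks
roughly like this (the schema is hedged from the description above):

    import json

    widget_content = json.dumps({
        'widget_type': 'zform',
        'extra_data': {
            'type': 'choices',
            'heading': 'Which fruit is orange in color?',
            'choices': [
                {'type': 'multiple_choice', 'short_name': 'A',
                 'long_name': 'orange', 'reply': 'answer A'},
                {'type': 'multiple_choice', 'short_name': 'B',
                 'long_name': 'blackberry', 'reply': 'answer B'},
                {'type': 'multiple_choice', 'short_name': 'C',
                 'long_name': 'strawberry', 'reply': 'answer C'},
            ],
        },
    })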
This should significantly improve the user experience for creating
additional accounts on zulipchat.com.
Currently, disabled in production pending some work on visual styling.
This is intended to support our upcoming feature to support copying a
user's customization settings from an existing account that user owns
in another organization.
We essentially stop running create_realm_internal_bots during
every provisioning and move its operations to run from populate_db.
In fact, to speed things up a bit, we actually make populate_db call
the funcs which create_realm_internal_bots calls behind the scenes.
Fixes: #9467.
We extract the entire operation of the management command to a
function create_if_missing_realm_internal_bots in the
zerver/lib/onboarding.py. The logic for determining if there are any realm
internal bots which have not been created is extracted to a function
missing_any_realm_internal_bots in actions.py.
This isn't a complete long-term fix, in that ideally we'd be doing
this check at the view layer, but various structural things make that
annoying, and we'll want this test either way.