The TeamCity webhook plugin supports multiple payload formats that
are customized to be used by different services such as Slack,
Flowdock, etc. We don't support such payloads, so we should ignore
them and stick to parsing only the generic ones. We should also
notify the bot owner about the error.
This fixes an issue where passing a path like `~/exports/foo` would
result in a `~` directory being created and the export/import not
working correctly.
Fixes #10124.
Users in the waiting period category cannot subscribe other users to
a stream. When a user tries to mention another unsubscribed user, a
warning message appears with a subscribe button on it to subscribe
the other user.
This commit removes the subscribe button and changes the warning text
for users in the waiting period category.
Issue: When you created a new organization with /new, the "new login"
emails were sent.
.just_registered property to the user Python object to attempt to
prevent the emails, and checking that in zerver/signals.py. This
commit gets rid of the .just_registered check.
Instead of the .just_registered check, this checks whether the user
joined more than a minute earlier.
A test test_dont_send_login_emails_for_new_user_registration_logins
already exists.
Tweaked by tabbott to introduce the constant JUST_CREATED_THRESHOLD.
Fixes #10179.
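For reference, a rough sketch of the kind of check described above; the
constant name comes from the commit, but the helper shape here is
illustrative rather than the exact code:

```python
from datetime import timedelta

from django.utils.timezone import now as timezone_now

JUST_CREATED_THRESHOLD = 60  # seconds

def was_just_created(user_profile) -> bool:
    # Skip "new login" emails for accounts created within the last
    # minute, e.g. the login that immediately follows /new registration.
    return timezone_now() - user_profile.date_joined < timedelta(
        seconds=JUST_CREATED_THRESHOLD
    )
```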
Right now it only has one function, but the function
we removed never really belonged in actions.py, and
now we have better test coverage on actions.py, which
is an important module to get to 100%.
In this commit we fix a bug due to which url preview images for urls
to custom emojis, realm icons or user avatars appeared broken when
such urls would be part of a Zulip message.
This is a preparatory commit to fix a bug in which a user posts
a link of custom emoji, user avatar or realm icon in a Zulip
message.
In this commit we just adjust the url generation in the backend so
that the generated encrypted url (which the user is redirected to, and
which therefore ultimately reaches thumbor) includes the
'/user_uploads/' prefix.
This is necessary because 'user_uploads' and 'user_avatars' (or any
other item under 'user_avatars' endpoint) have a different folder
location under the local file storage backend. The 'user_uploads'
endpoint's content is stored in a 'files' directory, whereas the
'user_avatars' endpoint's content is stored in an 'avatars' directory.
Thumbor needs to know from which directory a particular local file
needs to be retrieved and therefore the zthumbor/loaders.py adds
a prefix location for the directory.
Since an upcoming commit is going to add the 'avatars' directory
location as a prefix for the user_avatars endpoint, this preparatory
commit keeps those changes simple.
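For reference, the prefixing idea looks roughly like this (function and
argument names here are illustrative, not the exact code in
zthumbor/loaders.py):

```python
import os

def get_local_storage_path(source_type: str, path: str,
                           local_uploads_dir: str) -> str:
    # 'user_uploads' content lives under files/, while avatars, realm
    # icons and custom emoji live under avatars/ in the local backend.
    prefix = "files" if source_type == "user_uploads" else "avatars"
    return os.path.join(local_uploads_dir, prefix, path)
```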
The 'last_modified' value in emoji records is
needed for uploading the file to the S3 backend.
We set the same in the function 'import_uploads_s3'.
We also have to remove the keyword 'last_modified'
while building the RealmEmoji dict, as it is not
a field which exists in RealmEmoji objects.
This uses the recently introduced active_mobile_push_notification
flag; messages that have had a mobile push notification sent will have
a removal push notification sent as soon as they are marked as read.
Note that this feature is behind a setting,
SEND_REMOVE_PUSH_NOTIFICATIONS, since the notification format is not
supported by the mobile apps yet, and we want to give a grace period
before we start sending notifications that appear as (null) to
clients. But the tracking logic to maintain the set of message IDs
with an active push notification runs unconditionally.
This is designed with at-least-once semantics; so mobile clients need
to handle the possibility that they receive duplicate requests to
remove a push notification.
We reuse the existing missedmessage_mobile_notifications queue
processor for the work, to avoid materially impacting the latency of
marking messages as read.
Fixes#7459, though we'll need to open a follow-up issue for
using these data on iOS.
Fixes a regression introduced in 23246ff816.
However, we'll be shortly removing this feature, since it's legacy
support for an app that no longer is supported.
Historically, queue_json_publish had a special third argument that was
basically its default mock behavior in the test suite. We've been
migrating away from that model, because it was confusing and resulted
in poor test coverage of our queue worker code paths; this was one of
the last holdouts.
As it turns out, we don't exercise this code path in a way that
impacts tests much; the main downside of this change is a likely small
penalty to performance of the full test suite when sending private
messages.
Following recent testing flakes that were traced down to this not
having been called causing `receiver_is_off_zulip` to depend on test
ordering, it makes sense to centralize this.
I think it should always have been in ZulipTestCase; it appears the
reason it wasn't from the beginning was that originally only
test_events.py interacted with it, and do_test there still needs to
call this directly (because it can be called multiple times within a
single test). And then we did the wrong thing as we expanded the use
of Tornado event_queue code in tests to more of the codebase.
This prevents these unit tests from accidentally leaking data outside
their boundaries.
Verified using a test that fails after test_events without this change.
Apparently, we weren't calling the proper clear functions inside the
Tornado tests, which resulted in unexpected behavior in other tests
that were relying on the Tornado event queue system being empty.
(In this case, a new test for mobile push notifications that assumed
receiver_is_off_zulip() was always true failed after this was run).
The s3 import code path made a hard assumption about `user_profile_id`
being set (we'd already fixed this in the local uploads code path).
Ideally, it should be, and I've opened #10268 for fixing that, but for
now this is how it needs to work.
Private messages are not supported in the Slack-format webhook.
Instead of raising a NotImplementedError, we warn the user that
private messages are not supported by sending them a message.
Added tests for the same.
Fixes #9239.
This implements a significant performance optimization for users
clicking the `Private messages` narrow in the Zulip UI, especially for
those users who do not have 50 recent private messages in an
organization with a lot of stream message traffic (because then
previously, postgres needed to scan through a huge amount of history
to find enough private messages).
The database index powering it can also support many other queries we
might want to do in the future to support "recent conversations" type
features.
Fixes #6896.
The previous message was potentially a lot more ambiguous about
whether this was something about presence. "Deactivated" makes it
explicit that some action was taken to deactivate the account.
After the messages have been imported, set the rendered_content of the
messages instead of leaving its value to be 'None'.
This is important to ensure that:
(1) Performance for users is good after completing the import.
(2) The database's full-text indexes have all of the imported messages
(which only happens properly when Message rows have their
rendered_content field edited).
Fixes #9168.
In certain cases we have to load a template directly because it
isn't in Jinja2's recognized template directories. This commit
adds a test to make sure that absolute paths are recognized
if they are pure Markdown files.
Generates ldap_dir based on the mode and the number of extra users.
It supports three modes, 'a', 'b' and 'c', description for which
can be found in prod_settings_templates.py.
One disadvantage of relying on Jinja2 to load all templates is that it
only searches a finite set of pre-configured template directories.
Unfortunately, that breaks when someone tries to enable a custom
privacy or terms page and has the corresponding template in a
directory outside of Jinja2's recognized directories (for instance, it
won't find `/etc/zulip/terms.md`, the recommended path).
This commit makes it so that render_markdown_path can be more
sensible about pure Markdown files and load templates with
absolute paths directly without relying on Jinja2, if need be.
We now update all test messages to have a pub_date
of "now" in the setUp() function in TestRetentionLib.
We've seen tests flake on query counts before this
patch. It's not certain that the test flaked due
to time-related glitches, but it seems the most
plausible explanation.
The "/stats" command doesn't actually do anything
interesting yet, and it also writes to the message
feed instead of replying directly to the user.
The history of this command was that it was
written during a PyCon sprint. It was mainly intended
as an example for subsequent slash commands. The
ones we built after "/stats" have sort of outgrown
"/stats" and don't follow the original structure
for "/stats". (The "/day", "/ping", and "/settings"
commands were built shortly after.)
We probably want to resurrect "/stats" fairly soon,
after figuring out some useful stats and refining
the UI.
As you can see from this commit, resurrecting the
code here shouldn't be too difficult, but it
may actually be pretty rare that we just translate
slash commands into fleshed out messages.
Since otp_encrypt_api_key only encrypts API keys, it doesn't require
access to the full UserProfile object to work properly. Now the
parameter it accepts is just the API key.
This is preparatory refactoring for removing the api_key field on
UserProfile.
random_api_key, the function we use to generate random tokens for API
keys, has been moved to zerver/lib/utils.py because it's used in more
parts of the codebase (apart from user creation), and having it in
zerver/lib/create_user.py was prone to cyclic dependencies.
The function has also been renamed to generate_api_key to have an
imperative name, which makes it clearer what it does.
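For illustration, a minimal version of what such a generator might look
like (the actual implementation may differ in details):

```python
import base64
import os

def generate_api_key() -> str:
    # 32 random hex characters; plenty of entropy for an API key.
    return base64.b16encode(os.urandom(16)).decode("utf-8").lower()
```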
Now reading API keys from a user is done with the get_api_key wrapper
method, rather than directly fetching it from the user object.
Also, every place where an action should be done for each API key is now
using get_all_api_keys. This method returns for the moment a single-item
list, containing the specified user's API key.
This commit is the first step towards allowing users to have multiple
API keys.
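A sketch of the wrappers described above; the single-item list is the
point, so callers are already written as if a user could have several
keys (treat this as illustrative rather than the exact code):

```python
from typing import List

from zerver.models import UserProfile

def get_api_key(user_profile: UserProfile) -> str:
    return user_profile.api_key

def get_all_api_keys(user_profile: UserProfile) -> List[str]:
    # For the moment every user has exactly one API key; returning a
    # list lets callers iterate as if there could be several.
    return [user_profile.api_key]
```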
The validate_api_key sentence may look a bit confusing since we are
using webhook_bot's email address but default_bot's API key.
At first sight, and without any context on these tests, it may look like
that's just a typo, but we do want it to be like it is right now because
that way the API key used doesn't correspond to the provided email
address (triggering some untested parts of our backend logic).
Due to copyright issues with potentially displaying Apple emojisets on
non-apple devices, as well as iamcal dropping support for the emojione
emojiset (see https://github.com/iamcal/emoji-data/pull/142), we are
dropping (perhaps temporarily) support for allowing users to switch
emojisets in Zulip.
This commit just hides the feature from the user but leaves most of
the infrastructure in place so that in the future if we decide to
re-enable the support we will not need to redo the infrastructure work
(some JS-side code is deleted, mostly because we'll want to re-add the
feature using the do_settings_change infrastructure anyway).
The most likely emoji set to add is the legacy "blobs" Google emoji
set, since it seems popular with some users.
Tweaked by tabbott to remove some additional JS code and update the
changelog.
Importing the Django test client is somewhat expensive, and we only
use it within one view function that's not used in production. So
there's a significant startup-time performance optimization in doing
an import inside the view code.
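The pattern is simply to defer the import into the view body, roughly
like this (the view name is illustrative):

```python
from django.http import HttpRequest, HttpResponse

def dev_only_testing_view(request: HttpRequest) -> HttpResponse:
    # Imported here rather than at module scope, so the (slow) Django
    # test client import doesn't add to production startup time.
    from django.test import Client

    client = Client()
    response = client.get("/")
    return HttpResponse(response.content)
```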
python-twitter was consuming a significant amount of import time.
However, this commit seems to not save any time at all, probably
because its recursive dependencies are imported elsewhere in Zulip.
This test refactor makes the subscription/stream settings changes use standard
APIs and thus be easier to follow (and more robust to subtle re-fetching bugs).
This is a follow-up to #9181.
Renaming a user group to a name shared by other group wasn't a scenario
handled by the backend, and the server errored whenever this was
attempted.
Now a json_error is returned, letting the user know that a user group
with that name already exists.
The use_first_unread_anchor parameter allows automatically setting the
anchor to the first message that hasn't been read in this narrow.
Therefore it isn't necessary to specify an anchor when this parameter is
enabled.
Note from Tim: Arguably, we should think about making
`use_first_unread_anchor` the default behavior when anchor is
unspecified, but that's for later consideration.
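For example, a client can now fetch the messages around its first
unread in a narrow with something like this (an illustrative request
against the standard GET /api/v1/messages endpoint):

```python
import requests

response = requests.get(
    "https://zulip.example.com/api/v1/messages",
    auth=("user@example.com", "API_KEY"),
    params={
        # With use_first_unread_anchor enabled, no explicit anchor is needed.
        "use_first_unread_anchor": "true",
        "num_before": 50,
        "num_after": 50,
        "narrow": '[{"operator": "is", "operand": "private"}]',
    },
)
print(response.json()["messages"])
```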
We found out in #9953 that, apparently, loading the OpenAPI file was
taking about 5% of the Zulip server startup time.
Since in many cases (especially in development) having the file loaded
won't be necessary at all, we now read it the first time data from the
OpenAPI spec is needed.
Tweaked by tabbott to add a test.
Automatically detect if the OpenAPI spec file has been modified since
the last time it was loaded into memory, and if it has, automatically
reload it to have the latest version.
This feature is designed with development environments in mind. The main
benefit is being able to see the changes made to the OpenAPI document
without needing to restart the development server, which is tedious and
slows the documentation workflow down.
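A sketch of the lazy-load-plus-reload idea from these two commits
(module-level variable names and the spec path here are illustrative):

```python
import os

import yaml

OPENAPI_SPEC_PATH = "zerver/openapi/zulip.yaml"  # illustrative path

_spec = None        # loaded lazily, on first use
_spec_mtime = 0.0   # mtime of the file when it was last loaded

def get_openapi_spec():
    global _spec, _spec_mtime
    mtime = os.path.getmtime(OPENAPI_SPEC_PATH)
    # Load on first use, and reload whenever the file has changed on
    # disk (handy in development, where the server isn't restarted).
    if _spec is None or mtime > _spec_mtime:
        with open(OPENAPI_SPEC_PATH) as f:
            _spec = yaml.safe_load(f)
        _spec_mtime = mtime
    return _spec
```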
When the last user (only in the case of an admin) unsubscribes from a
private stream, the stream page doesn't get updated, because we delete
the private stream as soon as the last user unsubscribes from it.
So `sub` becomes undefined in the frontend, because the stream is
deleted before the unsubscribe-user-from-stream event is received.
Fix this by changing the order of events sent to the frontend: the
`subscription: remove` event should be sent before the `stream: delete`
event from the backend.
This fixes a bug where administrators couldn't remove private
unsubscribed streams from the "default streams" list, because
access_stream_by_name didn't give them access to the stream object.
This commit adds a 'resize_gif()' function which extracts each frame,
resizes it, and coalesces them again to form the resized GIF while
preserving the duration of the GIF. I read some stackoverflow
answers, all of which were referring to BigglesZX's script
(https://gist.github.com/BigglesZX/4016539) for working with animated
GIFs. I modified the script to fit our use case and did some manual
testing, but the function was failing for some specific GIFs and was
not preserving the duration of the animation. So I went ahead and read
about the GIF format itself as well as PIL's `GifImagePlugin` code and
came up with this simple function which gets the work done in a much
cleaner way. I tested this function on a number of GIF images from
giphy.com and it resized all of them correctly.
Fixes: #9945.
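Roughly, the approach is to iterate the frames with Pillow, resize each
one, and re-assemble the animated GIF while carrying over the frame
duration; a simplified sketch (not the exact function from this commit):

```python
import io

from PIL import Image, ImageSequence

def resize_gif(im: Image.Image, size: int = 100) -> bytes:
    # Resize every frame and coalesce them back into an animated GIF,
    # preserving the original duration and loop settings.
    frames = [
        frame.copy().resize((size, size), Image.LANCZOS)
        for frame in ImageSequence.Iterator(im)
    ]
    out = io.BytesIO()
    frames[0].save(
        out,
        format="GIF",
        save_all=True,
        append_images=frames[1:],
        duration=im.info.get("duration", 100),
        loop=im.info.get("loop", 0),
    )
    return out.getvalue()
```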
Email notifications for new logins displayed the login timestamp's
timezone in the location format (e.g. "Asia/Taipei"). Since that can
lead users to believe the login came from that place, the timezone in
those emails is now represented in +/-HHMM format.
Fixes #10178.
This flag is used to track which user/message pairs correspond to an
active mobile push notification, that should potentially be cleared
when the user reads the message.
This flag should never appear on a message that is also marked as
read; eventually we may want a cron job to check for that condition.
We include a partial index on UserMessage for this flag.
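In a migration, such a partial (conditional) index can be expressed
with raw SQL along these lines; the flag's bit value below is
illustrative, not necessarily the real one:

```python
from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [("zerver", "0176_previous_migration")]  # illustrative

    operations = [
        migrations.RunSQL(
            # Index only rows that actually have the flag set, so the
            # index stays small even though UserMessage is huge.
            sql="""
                CREATE INDEX IF NOT EXISTS
                    zerver_usermessage_active_mobile_push_notification_id
                ON zerver_usermessage (user_profile_id, message_id)
                WHERE (flags & 4096) != 0;
            """,
            reverse_sql="""
                DROP INDEX IF EXISTS
                    zerver_usermessage_active_mobile_push_notification_id;
            """,
        ),
    ]
```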
This adds a new function called handle_remove_push_notification in
zerver/lib/push_notifications.py, which takes the user_profile id and
the id of the message whose notification should be removed.
For now, the function only supports GCM (and is mostly there for
prototyping).
The payload which is being delivered needs to contain the narrow
information and the content of the message.
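A very rough sketch of the shape of that function; the payload field
names and the GCM helper used below are assumptions, not the exact
implementation (the real payload also carries the narrow information
and the message content, as noted above):

```python
from zerver.models import Message, get_user_profile_by_id

def handle_remove_push_notification(user_profile_id: int, message_id: int) -> None:
    user_profile = get_user_profile_by_id(user_profile_id)
    message = Message.objects.get(id=message_id)
    gcm_payload = {
        "event": "remove",               # tell the app to clear the notification
        "zulip_message_id": message.id,  # which notification to remove
    }
    # Assumed helper for delivering a GCM payload to all of a user's
    # Android devices; only GCM is supported for now.
    send_android_push_notification_to_user(user_profile, gcm_payload)
```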
This should make it much simpler for the mobile apps to line up the
data from server_settings against the data in the notifications.
Addresses part of #10094.
This ensures that the format of this data structures matches that for
in-realm bots in the main users data structure (including avatars,
etc.).
Fixes #10138.
For realms that don't have any presence-active users, we know for a
fact that there aren't any active clients that will be reloading just
after the server restarts, so we can skip filling the cache with data
related to that realm.
For zulipchat.com, this results in a significant performance
optimization for the recipient and stream caches, and a moderate
performance improvement for the user caches as well.
Private messages make up the bulk of Recipient objects. While private
messages are ~50% of messages, if you weight by messages received
(which is what is important for message-loading performance), it's
pretty strongly balanced towards stream messages.
We don't need to include long-term idle or other inactive users here,
since fetching them consumed the vast majority of the time.
(On chat.zulip.org, this decreased the runtime for populating the user
cache by 5x, removing only users we're unlikely to need to access).
This doesn't seem to have a huge performance downside (less than 1s
extra time for loading / on chat.zulip.org), and it makes it much less
likely that users will have so many unreads that we get weird/buggy
behavior.
We'll still want a better experience for users who somehow go over
this limit, but it can be pretty firmly "you need to go mark some
things as read".
This renames Realm.show_digest_email field to
digest_emails_enabled, for greater clarity as to what it does
just from seeing the setting name, without having to look it up.
Fixes part of #10042.
We were getting event-handling exceptions in JS in production if a new
user was created and then went and set a custom profile field, because
there was no `.profile_data` on their user object. We were able to
trace the issue down to the fact that our events didn't include that
field when creating a new user.
This renames Realm.restricted_to_domain field to
emails_restricted_to_domains, for greater clarity as to what it does
just from seeing the setting name, without having to look it up.
Fixes part of #10042.
The previous error messages for this were written for a tool only to
be used by a couple people, and didn't make clear what potential
causes were. Tweak these to provide greater clarity about what's
going on.
The main cause of these errors appearing in practice was fixed in
7ea5987e5d, but nothing strongly
prevents a similar issue from being introduced in the future.
Fixes #10078.
Apparently, our old unminify logic relied on the fact that the
filenames displayed in tracebacks were of the form "app.js" (and the
`app.js` copy of the source map in the appropriate
`/home/zulip/deployments/`). The correct behavior is to just look up
the source map for the appropriate hash-named
`app.a40806b10565c1dee5bf.js` type file.
We fix this with a few small tweaks to the regular expressions. I
wish this file had reasonable unit tests.
We already had a setting for whether these logs were enabled; now it
also controls which stream the messages go to.
As part of this migration, we disable the feature in dev/production by
default; it's not useful for most environments.
Fixes the proximal data-export issue reported in #10078 (namely, a
stream with nobody ever subscribed to having been created).
The is_private flag is intended to be set if the recipient type is
'private' (1) or 'huddle' (3); otherwise, i.e. if it is 'stream' (2),
it should be unset.
This commit adds a database index for the is_private flag (which we'll
need in order to use it). That index is used to reset the flag if it was
already set. The already set flags were due to a previous removal of
is_me_message flag for which the values were not cleared out.
For now, the is_private flag is always 0 since the really hard part of
this migration is clearing the unspecified previous state; future
commits will fully implement it actually doing something.
History: Migration rewritten significantly by tabbott to ensure it
runs in only 3 minutes on chat.zulip.org. A key detail in making that
work was to ensure that we use the new index for the queries to find
rows to update (which currently requires the `order_by` and `limit`
clauses).
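For reference, the batched backfill looks roughly like this sketch (the
bit value and batch size are illustrative; the ORDER BY/LIMIT shape is
what lets postgres walk the new index instead of scanning the whole
table):

```python
from django.db import connection

def clear_stale_is_private_flags() -> None:
    # Sketch: clear the is_private bit (2048 here, illustratively) in
    # batches until no flagged rows remain.
    with connection.cursor() as cursor:
        while True:
            cursor.execute(
                """
                UPDATE zerver_usermessage
                SET flags = flags & ~2048
                WHERE id IN (
                    SELECT id FROM zerver_usermessage
                    WHERE (flags & 2048) != 0
                    ORDER BY id
                    LIMIT 10000
                )
                """
            )
            if cursor.rowcount == 0:
                break
```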
As part of our effort to change the data model away from each user
having a single API key, we're eliminating the couple requests that
were made from Django to Tornado (as part of a /register or home
request) where we used the user's API key grabbed from the database
for authentication.
Instead, we use the (already existing) internal_notify_view
authentication mechanism, which uses the SHARED_SECRET setting for
security, for these requests, and just fetch the user object using
get_user_profile_by_id directly.
Tweaked by Yago to include the new /api/v1/events/internal endpoint in
the exempt_patterns list in test_helpers, since it's an endpoint we call
through Tornado. Also added a couple missing return type annotations.
This is a preparatory refactor for adding
UserProfile.can_subscribe_other_users.
Although there existed a test for limiting users from creating
streams at `test_subs.test_user_settings_for_adding_streams`,
it did not test the logic inside can_add_streams; tests have
been added to cover that.
Extracting this helper library will help us avoid an import loop
between notifications.py and message.py (with bugdown in between).
But in addition to that, it's a more natural model, since some of the
uses for these functions weren't part of the notifications code
anyway.
It's sort of an unusual state to get into, to have a user own a
deactivated bot, when they can't create a bot of that type, but it's
definitely a valid possibility that we should be checking for.
Fixes #10087.
This is a follow-up in response to Tim's comments on #9951.
In instances where all messages from a BitBucket integration are
grouped under one user specified topic (specified in the URL), we
should include the title of the PR in the message body, since
the availability of a user-specified topic precludes us from
including it in the topic itself (which was the default behaviour).
This is a follow-up in response to Tim's comments on #9951.
In instances where all messages from a Gogs integration are
grouped under one user specified topic (specified in the URL), we
should include the title of the PR in the message body, since
the availability of a user-specified topic precludes us from
including it in the topic itself (which was the default behaviour).
This is a follow-up in response to Tim's comments on #9951.
In instances where all messages from a GitHub integration are
grouped under one user specified topic (specified in the URL), we
should include the title of the issue/PR in the message body, since
the availability of a user-specified topic precludes us from
including it in the topic itself (which was the default behaviour).
This is a follow-up in response to Tim's comments on #9951.
In instances where all messages from a Gitlab integration are
grouped under one user specified topic (specified in the URL), we
should include the title of the issue/MR in the message body, since
the availability of a user-specified topic precludes us from
including it in the topic itself (which was the default behaviour).
Implement this function in 'bulk_import_model'
and 'update_model_ids'.
This lets us save on redundant-feeling arguments in these
frequently-called helper functions.
This deletes the unused Subscription.notifications field and removes
it from some testing and analytics code (which should not have been
using it in the first place).
Fixes #10042.
One of the code examples for GET /users was using the raw call_endpoint
method from our Python bindings, rather than get_members, which has been
specifically designed to interact with this endpoint.
Now all the examples here use the appropriate method.
Whenever a parameter for an endpoint in our REST API has a default
value, it is displayed under the "Description" section of the
arguments table in the docs.
This way, we don't need to explicitly indicate the default values in the
description, thus avoiding duplicate information in the OpenAPI source.
This has the benefit that we now get the usual data about the
user/request/etc. in error emails related to bugdown exceptions;
previously we were just getting the traceback in the emails (since our
`mail_admins` template was very simple) and no other debugging
details.
Comments tweaked by tabbott to help make clear exactly what's going on
here, since it's a little subtle and a little hacky.
Fixes #8843.
This makes a few tweaks from a pass through the file following up
on the recent 1eb67e4bf, which added documentation comments and
docstrings to a number of fields and models. Also reorder a few
more fields, where that makes things clearer.
There are a lot of specific edits, made as I read through and
particularly where I didn't immediately understand things, or
imagined a reader might have questions. A couple of themes:
* Use blank lines everywhere to set off a given comment and the
items it applies to.
* In a few places where we didn't already, specify concretely how
the meaning is represented in the value (answers include: it's a
number of messages; it's a name from such-and-such namespace;
it's a JSON blob.)
This reorganizes the field definitions to be more readable, and adds
descriptive comments for a number of fields whose purpose might be
otherwise unclear.
Our nested code block processor wasn't using the correct test for
whether a paragraph was empty of other content; first, we need to
confirm it has no children, and second, we need to confirm there is no text
before/after the code block element inside the p tag.
None of the file types here are actually processed by our static asset
pipeline in a way that would result in the hash-named versions of the
files (stored in staticfiles.json) being accessed. (If they were,
we'd be using something like `render_bundle` to access their paths).
So, we should not be generating/shipping these hash-named versions of
image, audio, and locale files.
This change decreases the size of a Zulip release tarball from 153MB
to 93MB, by removing all of the duplicated copies of various asset
files.
This may also help with #10038, but I'm not marking it as completing
that issue yet, because part of #10038 is that the non-hash-named
image files in prod-static/generated/emoji do not seem to have been
properly overwritten on upgrade, and it's unclear why.
Fixes #5971.
A stream created in the last few hours likely won't be in StreamCount
(since that gets updated once a day), and hence won't be in the
recent_traffic dict.
However, get_average_weekly_stream_traffic should be None in this case,
not 0.
This commit closes a long pending issue which involved moving the
`EMOTICON_CONVERSION` mapping to build_emoji infrastructure so
that there is only one source of truth. This was pending from the
time when this feature was implemented.
This setting isn't intended to exist long term, but instead to make it
possible to merge our search pills code before we're ready to cut over
production environments to use it.
The function 'update_model_ids' should be used on
the models BotStorageData and BotConfigData.
It was wrongly added here for the UserGroup model.
Also the sequence name for BotStorageData and
BotConfigData is 'zerver_botuserstatedata_id_seq' and
'zerver_botuserconfigdata_id_seq' respectively, which
should be specifically mentioned in the function
'allocate_ids'.
This fixes some nondeterministic test failures.
The gitter mentions are in the format '@usermention'
and the mentions are included in the export data as:
    "mentions": [
        {
            "screenName": "usermention",
            "userId": "54d7876c15522ed4b3dbbefb",
            "userIds": []
        }
    ]
We extract this data and map this mention to @**usermention**
for Zulip.
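A simplified sketch of that mapping step (names are illustrative):

```python
import re
from typing import Any, Dict, List

def convert_gitter_mentions(content: str, mentions: List[Dict[str, Any]]) -> str:
    # Each entry in `mentions` has the structure shown above in the
    # gitter export data.
    for mention in mentions:
        screen_name = mention["screenName"]
        content = re.sub(
            r"@%s\b" % (re.escape(screen_name),),
            "@**%s**" % (screen_name,),
            content,
        )
    return content
```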
Messages can be bulky, and storing them in a single
data structure can cause a memory error.
In this commit, the messages are written to a file
batch-wise, thus avoiding the memory error.
Similar to commit 6b7b6b38ad
This will be used for any ManyToMany field which is being imported.
We add an internal function which takes in the old ID list
of the ManyToMany field and returns the new updated ID list.
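In essence, the helper just translates each old ID through the ID map
built earlier during the import; something like this sketch (names are
illustrative):

```python
from typing import Dict, List

def get_new_ids(old_id_list: List[int], id_map: Dict[int, int]) -> List[int]:
    # id_map records, for each object imported earlier, the new
    # database ID allocated for its old (exported) ID.
    return [id_map[old_id] for old_id in old_id_list]
```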
Various pieces of our thumbor-based thumbnailing system were already
merged; this adds the remaining pieces required for it to work:
* a THUMBOR_URL Django setting that controls whether thumbor is
enabled on the Zulip server (and if so, where thumbor is hosted).
* Replaces the overly complicated prototype cryptography logic
* Adds a /thumbnail endpoint (supported both on web and mobile) for
accessing thumbnails in messages, designed to support hosting both
external URLs as well as uploaded files (and applying Zulip's
security model for access to thumbnails of uploaded files).
* Modifies bugdown to, when THUMBOR_URL is set, render images with the
`src` attribute pointing to /thumbnail (to provide a small thumbnail
for the image), along with adding a "data-original" attribute that
can be used to access the "original/full" size version of the image.
There are a few things that don't work quite yet:
* The S3 backend support is incomplete and doesn't work yet.
* The error pages for unauthorized access are ugly.
* We might want to rename data-original and /thumbnail?size=original
to use some other name, like "full", that better reflects the fact
that we're potentially not serving the original image URL.
This adds support to the event queue system for triggering
missed-message notifications (whether push or email) to support the
stream push notifications feature.
This modifies the logic for formatting outgoing missed-message emails
to support the upcoming stream email notifications feature (providing
a new format for the subject, etc.).
Because we're passing through the trigger for notifications to
do_send_missedmessage_events_reply_in_zulip, we don't need to go back
to the database to determine which messages actually mentioned the
user.
This change converts our logic for determining whether the current
user was mentioned in a group of messages from the implicit "if it was
sent to a stream, it's a mention" to the explicit "we actually know
there was a mention in the message". This is an important
prerequisite for our upcoming feature to support getting email
notifications for streams always (even without a mention).
Previously, maybe_enqueue_notifications had this very subtle logic,
where it set the notice variable only inside the block for push
notifications, but then also used it inside the block for email
notifications.
This "worked", because previously the conditions for push
notifications were always true if the conditions for email
notifications were, but the code was unnecessarily confusing. The
only good reason to write it this way is if build_offline_notification
was expensive; in fact, the most expensive thing it does is calling
time.time(), so that reason does not apply here.
This was further confusing, in that in the original logic, we relied
on the fact that push notification code path edited the "notice"
dictionary for further processing.
Instead, we just call it separately and setup the data separately in
each code path.
This data will be required for correctly implementing the upcoming
stream_push_notify feature; it also helps support cleaning up the code
for the existing stream mentions logic.
Because in upcoming commits, we'll want to pass additional per-message
data into do_send_missedmessage_events_reply_in_zulip, we need to
expand the format for how we represent messages to account for that.
This refactors the generate_topic_history_from_db_rows function to not
depend upon the assumption that the rows passed as a parameter are
sorted in reverse order of the max_message_id field.
Additionally, we add sorting and some tests that verify correct
handling of these cases.
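After the refactor, the function's shape is roughly as below (a sketch:
keep one canonical, case-insensitive entry per topic and sort the
result ourselves instead of relying on the rows' order):

```python
from typing import Dict, List, Tuple

def generate_topic_history_from_db_rows(
    rows: List[Tuple[str, int]],
) -> List[Dict[str, object]]:
    # rows are (topic_name, max_message_id) pairs, in no particular order.
    canonical_topics: Dict[str, Tuple[int, str]] = {}
    for topic_name, max_message_id in rows:
        canonical_name = topic_name.lower()
        seen = canonical_topics.get(canonical_name)
        if seen is None or max_message_id > seen[0]:
            canonical_topics[canonical_name] = (max_message_id, topic_name)
    history = [
        dict(name=topic_name, max_id=max_message_id)
        for max_message_id, topic_name in canonical_topics.values()
    ]
    return sorted(history, key=lambda x: -x["max_id"])
```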
In this commit we add a new endpoint so as to have a way of fetching
topic history for a given stream id without having to be logged in.
This can only happen if the stream in question is web public;
otherwise we just return an empty topics list. This endpoint is quite
analogous to get_topics_backend, which is used by our main web app.
In this commit we also do a bit of duplication regarding the query
responsible for fetching all the topics from the DB. Basically this
query is exactly the same as what we have in the
get_topic_history_for_stream function in actions.py. Duplicating it
now is the right thing to do, because this query is really going to
change when we add another criterion for filtering messages, namely:
Only topics for messages which were sent during the period the
corresponding stream was web public should be returned.
Once we do that, the query will change, and thus it won't
really be code duplication!
We already include the issue title in the topic. But if one chooses
to group all gitlab notifications under one topic, the message body
is misleading in the sense that only the Issue ID and the description
are displayed, not the title, which isn't super helpful if the topic
doesn't tell you the title either.
I think we should err on the side of always including the title in
the main message body, which is what this commit does.
Fixes #9913.
This migrates Zulip to use a dramatically better set of names and
aliases for our emoji set, defined in emoji_names.py (which is in turn
manually generated from our hand-curated CSV file).
This should significantly improve the experience of using Zulip's
emoji picker and emoji typeahead for finding what one is looking for.
Fixes #7665.
In the case of invitation events, an 'invites_changed' event without
any real payload is sent to all the realm admins and the user.
The event is handled by reloading the list to view recent changes.
Commit tweaked by shubhamdhama:
* Send an `invite_changed` event when a user accepts an invite.
Also, added the test for the same.
* No need to delete the invite list in frontend, current logic
handles the case when the invite data is changed properly.
* Extracted the common logic for sending an event into
`notify_invites_changed`.
Some of the arguments in our REST API have to be sent as JSON objects,
which only accept double quotes for strings.
If we display the examples as normal Python objects, the syntax would be
quite similar, but it would use single quotes, which is invalid JSON (and
isn't accepted by the server).
That's why all the examples should be JSON-serialized in order to comply
with the API's requirements.
Until now, we were displaying an empty "Arguments" section in the REST
API docs whenever an endpoint didn't use input arguments.
In the case of OpenAPI-based docs, that was also annoying because it
required removing the {generate_api_arguments_table|...} template tag or
leaving an empty "parameters" field in zulip.yaml.
After this, we show a paragraph indicating that the endpoint doesn't
need arguments under the "Arguments" section.
If the argument table generator isn't able to reach a file that it is
supposed to read, the two most likely causes are:
- The source .md documentation file that is requesting the table has a
typo in the path.
- The file with the arguments isn't there, for some reason.
In either case, we don't want the server to fail silently-ish and
display the docs as if there were no arguments for that endpoint. That's
why the most logical thing to do is to raise an exception and let the
admins know that there's something wrong.
<script charset=…>, <script type=…>, and <style type=…> are “obsolete
but conforming” in HTML5. They make the validator.nu output noisier
and real problems a little harder to find.
(type was required in HTML 4, which is not relevant to us.)
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
POST and DELETE operations in /users/me/alert_words may leave the
user's list of alert words in an unknown state: POSTing adds words to a
list that the client may not know from the beginning, and the same with
DELETE.
Replying with the current status of the alert words list is the best way
of letting the client alter the list and knowing its contents after
being updated with a single query.
This is especially useful taking into account that POSTing words that
were already present and DELETing non-existing words both produce a
successful response.
An extra test has been added to avoid leaving GET /users/me/alert_words
too untested.
Querying an endpoint with no information (thus a noop) and getting a
successful response doesn't seem like expected behavior.
If the client makes such a query with no content, it is probably
unintentional, and the API should let them know about it.
For importing huddles we have to have unique huddle hashes.
Huddle hashes are extracted from the list of users participating
in a huddle. So to extract these user ids, we first use huddle
id to get the matching recipient, and then we use the subscriptions
to get the user ids from the recipient id.
Added tests for the same (tests slightly tweaked by tabbott).
The tests for GET /users were looking for a specific user, assuming that
it would always be in the same position. Since the users' sorting isn't
guaranteed in any way, this can lead to errors in the tests.
Now we make sure the user we grab from the list is the one we need by
checking its email address.
This is just a hotfix that addresses the short-term problem: we have
already made some efforts to make sure these tests are more
deterministic, and now we only need to finish the migration of the old
endpoints to the new system as a long-term solution.
This is all the plumbing that makes it possible to enable the
stream_email_notifications setting via the Zulip API. The flag still
doesn't do anything yet, but this is a nice checkpoint along the way
to implementing this feature.
This commit creates a new field called delivery_email. For now, it is
exactly the same as email upon user profile creation and should stay
that way even when email is changed, and is used only for sending
outgoing email from Zulip.
The purpose of this field is to support an upcoming option where the
existing `email` field in Zulip becomes effectively the user's
"display email" address, as part of making it possible for users'
actual email addresses (that can receive email, stored in the
delivery_email field) to not be available to other non-administrator
users in the organization.
Because the `email` field is used in numerous places in display code,
in the API, and in database queries, the shortest path to implementing
this "private email" feature is to keep "email" as-is in those parts
of the codebase, and just set the existing "email" ("display email")
model field to be something generated like
"username@zulip.example.com" for display purposes.
Eventually, we'll want to do further refactoring, either in the form
of having both `display_email` and `delivery_email` as fields, or
renaming "email" to "username".
This commit adds a Markdown tree-processor extension that renders
multi-line code blocks that are nested inside lists with the correct
formatting. Note that the code block could be nested inside multiple
list levels and would still get rendered correctly.
Tim: This fixes the need for unpleasant workarounds like
f5bfa4e793 and makes nested code blocks
in our documentation look exactly how users would expect them to.
Given that we allow adding emoji reactions by only using the
emoji_name, we should offer the same possibility for removing
reactions to make the experience for API clients not require looking
up emoji codes.
Since this is an additional optional parameter, this also preserves
backward compatibility.
Complete, correct implementations of Zulip's emoji reactions API need
to send both emoji_code and emoji_name in order to add a reaction;
this is important for corner cases around clicking on a reaction in a
message that was first reacted to a year ago, when the emoji
name->code mappings have changed for the given code point in the
intervening time.
However, for folks building tools using the Zulip API, that corner
case is not particularly common; as a result, it makes sense to offer
an interface that allows adding a reaction by only specifying the
emoji name.
This is why the only field that needs to be required is emoji_name,
which can now be mapped to a single emoji. Both fields will be
necessary when "voting" an old reaction, but since we still allow
specifying the two of them, these changes preserve backwards
compatibility.
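With this, an API client can drop a reaction knowing only the emoji
name, e.g. (an illustrative request against the reactions endpoint):

```python
import requests

message_id = 42  # illustrative message ID
response = requests.delete(
    "https://zulip.example.com/api/v1/messages/%d/reactions" % (message_id,),
    auth=("bot@example.com", "API_KEY"),
    data={"emoji_name": "octopus"},  # emoji_code no longer required
)
print(response.json())
```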
This adds a new settings, SOCIAL_AUTH_SUBDOMAIN, which specifies which
domain should be used for GitHub auth and other python-social-auth
backends.
If one is running a single-realm Zulip server like chat.zulip.org, one
doesn't need to use this setting, but for multi-realm servers using
social auth, this fixes an annoying bug where the session cookie that
python-social-auth sets early in the auth process on the root domain
ends up masking the session cookie that would have been used to
determine a user is logged in. The end result was that logging in
with GitHub on one domain on a multi-realm server like zulipchat.com
would appear to log you out from all the others!
We fix this by moving python-social-auth to a separate subdomain.
Fixes: #9847.
* If `zerver_realmauditlog` is present in the exported data,
`RealmAuditLog` would be imported normally.
* If it is not present, `create_subscription_events`
function would create the `subscription_created`
events for RealmAuditLog. The reason this function
is in `import_realm` module and not in the individual
export tool scripts (like Slack) is because this
function would be common for all export tools.
This fixes #9846 for users who have not already done an import of
their organization from Slack.
Fixes #9846.
Custom profile field values are stored in a different structure
compared to other profile fields in events, so the generic way to
update fields wasn't updating custom profile fields in the
`apply_event` function.
Fix this by adding a check for custom fields in `apply_event`.
This also adds the appropriate test_events test to verify this code path.
Fixes part of #9875.
We extract out the logic for generating a list of all historical
topics for a given stream as a separate function. This avoids code
duplication when we add the similar code path for grabbing all topics
for web public streams.
This has two advantages:
* We can split bugdown/__init__.py into several modules, and each
module can access these arguments by importing them.
* We get rid of the super-ugly `global db_data` construct, replacing
it with an only slightly ugly monkey-ish patching of the
`zerver.lib.bugdown.arguments` module, which is at least
considerably clearer on reading as to what its purpose is.
The main remaining todo for correctly populating
RealmAuditLog.requires_billing_update is supporting the de-seating (and
corresponding re-seating) that happens after being offline for two weeks.
In this commit we fix a kinda serious unnoticed bug in the way
run_db_migrations worked for the test db.
Basically run_db_migrations runs new migrations on db (dev or test).
When we talk about the dev platform this process is straight forward.
We have a single DB zulip which was once created and now has some data.
Introduction of new migration causes a schema change or does something
else but bottom line being we just migrate the zulip DB and stuff works
fine.
Now coming to the zulip test db (zulip_test), the situation is a bit
more complex in comparison to the dev db. Basically this is because we
make use of what we call zulip_test_template to make test fixture
restoration after tests run fast. Before we introduced the performance
optimisation of just doing migrations when possible, introducing a
migration would result in provisioning doing a full rebuild of the test
database. When that used to happen, the sequence of events was
something like this:
* Create a zulip_test db from the zulip_test_base template (which
holds only an absolutely basic schema).
* Migrate and populate the zulip_test db.
* Create/Re-create zulip_test_template from the latest zulip_test.
After we introduced just doing migrations instead of a full db rebuild
when possible, what happened was that the zulip_test db got
successfully migrated, but when the test suites ran they would try to
create zulip_test from zulip_test_template (so that individual tests
don't affect each other at the db level).
This is where the problem resided: zulip_test_template wasn't migrated,
and we just scrapped zulip_test and re-created it using
zulip_test_template as a template, and hence zulip_test would not hold
the latest schema.
This is what we fix in this commit.
This commit moves all files previously under the 'app' bundle in
the Django pipeline to being compiled by webpack under the 'app'
entry point. In the process, it moves assets under the app entry
to a file called app.js that consumes all relevant css and js files.
This commit also edits the webpack config to be able to expose certain
variables for third party libraries that are currently required by
some modules. This is bad coding form and should be refactored to
requiring whatever dependencies a module may have; we're just
deferring that to the future to simplify the series of transitions we
need to do here. The variable exposure is done using expose-loader in
webpack.
The app/index.html template is edited to override the newly introduced
'commonjs' block in the base template. This is done as a temporary
measure so as not to disrupt other pages on the app during the transition.
It also fixes the value of the 'this' context that was being inferred
as window by third party libraries. This is done using imports-loader
in the webpack config. This is also messy and probably isn't how we
want things to work long term.
We need to do a small monkey-patching of python-social-auth to ensure
that it doesn't 500 the request when a user does something funny in
their browser (e.g. using the back button in the auth flow) that is
fundamentally a user error, not a server error.
This was present in the pre-rewrite version of our Social auth
codebase, without clear documentation; I've fixed the explanation
part here.
It's perhaps worth investigating with the core social auth team
whether there's a better way to do this.
It's possible to make GitHub social authentication support letting the
user pick which of their verified email addresses to pick, using the
python-social-auth pipeline feature. We need to add an additional
screen to let the user pick, so we're not adding support for that now,
but this at least migrates this to use the data set of all emails that
have been verified as associated with the user's GitHub account (and
we just assume the user wants their primary email).
This also fixes the inability for very old GitHub accounts (where the
`email` field in the details might be a string the user wanted on
their GitHub profile page) to use GitHub auth to log in.
Fixes #9127.
https://github.com/houstondatavis/slack-export/blob/master/users.json
JSON or JavaScript decodes "\/" to / (and some encoders always write
"\/" to avoid accidentally creating a </script> tag), while Python
assumes "\/" is a typo for "\\/" and decodes it to \/.
Signed-off-by: Anders Kaseorg <andersk@mit.edu>