Add tests verifying that the avatar image was uploaded properly in
`tests.BotTest.test_add_bot_with_user_avatar` and
`tests.BotTest.test_patch_bot_avatar`.
Add `get_test_image_file()` and `avatar_disk_path()` to
`zerver.lib.test_helpers` and deduplicate some code.
Fixes #1276.
Previously, we included a special subscribe button in new stream
notifications, but that had 2 problems:
(1) The subscribe button would render badly if the stream was renamed.
(2) There wasn't an easy way to look at the stream when deciding
whether to subscribe.
This fixes the second problem, but not really the first.
This adds support for only allowing normal users with account age
equal to or greater than a "waiting period" threshold to create streams;
this is useful for open organizations that want new members to
understand the community before creating streams.
Previously, if the create_stream_by_admins_only setting was set to True,
only admin users were able to create streams. Now normal users with an
account age greater than or equal to the waiting period threshold can
also create streams. Account age is defined as the number of days since
the user created their account.
Fixes: #2308.
Tweaked by tabbott to clean up the actual can_create_streams logic and
the tests.
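A minimal sketch of the resulting check, assuming illustrative names like
waiting_period_threshold on the realm (the interaction with
create_stream_by_admins_only is omitted here):

    from datetime import timedelta
    from django.utils import timezone

    def can_create_streams(user_profile, realm):
        # Realm administrators can always create streams.
        if user_profile.is_realm_admin:
            return True
        # Otherwise, require the account to be at least as old as the
        # realm's waiting period (measured in days since account creation).
        account_age = timezone.now() - user_profile.date_joined
        return account_age >= timedelta(days=realm.waiting_period_threshold)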
Add additional information about the user profile to
`zerver.views.pointer.get_profile_backend`, such as the `user_id`,
`full_name`, `email`, `is_bot`, `is_admin`, and `short_name` of the
user.
Zulip didn't previously make use of the standard Django is_staff flag
(in that the Django admin site is disabled), but since conceptually
the /activity page would be part of the Django admin site if we were
using it (i.e. for server-level administrators), it makes sense to key
off of that rather than the previous, fragile check for the realm
domain name.
Adding a reaction is now a PUT request to
/messages/<message_id>/emoji_reactions/<emoji_name>
Similarly, removing a reaction is now a DELETE request to
/messages/<message_id>/emoji_reactions/<emoji_name>
This commit changes the url and updates the views and tests.
This commit also adds a test for invalid emoji when removing reaction.
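For illustration, a client could exercise the new URL scheme roughly like
this with the Python requests library (host, credentials, message id, and
emoji name below are made up; the URL prefix may differ):

    import requests

    url = "https://zulip.example.com/messages/42/emoji_reactions/smile"
    auth = ("hamlet@zulip.com", "API_KEY")  # basic auth, purely illustrative

    requests.put(url, auth=auth)     # add the 'smile' reaction to message 42
    requests.delete(url, auth=auth)  # remove it again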
This includes making the default stream description setting into a
dict. That is an API change; we'll discuss it in the changelog but it
seems small enough to be OK.
With some small tweaks by tabbott to remove unnecessary backwards
compatibility code for the settings.
Fixes #2427.
This change adds support for displaying inline open graph previews for
links posted into Zulip.
It is designed to interact correctly with message editing.
This adds the new settings.INLINE_URL_EMBED_PREVIEW setting to control
whether this feature is enabled.
By default, this setting is currently disabled, so that we can burn it
in for a bit before it impacts users more broadly.
Eventually, we may want to make this manageable via a (set of?)
per-realm settings. E.g. I can imagine a realm wanting to be able to
enable/disable it for certain URLs.
This automates the inclusion of markdown files under
`templates/zerver/help/` among the pages tested by `test_public_urls` (in
zerver/tests/test_signup.py).
Fixes #2465.
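A rough sketch of how such a test can enumerate the help pages (the
directory layout and URL scheme here are assumptions based on the paths
above):

    import os

    help_dir = "templates/zerver/help"
    urls = ["/help/" + name[:-len(".md")]
            for name in sorted(os.listdir(help_dir))
            if name.endswith(".md")]
    # Each generated URL can then be fetched and checked for a 200 response.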
This can be useful in scenarios where the network doesn't support
websockets. We don't include it in prod_settings_template.py since
it's a very rare setting to need.
Fixes #1528.
A few functions had arguments removed without having their type
annotations updated accordingly. As a result mypy version 0.4.6
thinks that the first type in the annotation is supposed to be
the type of `self`, a new mypy feature which we are not intending
to use here.
This commit adds support for removing reactions via DELETE requests to
the /reactions endpoint with parameters emoji_name and message_id.
The reaction is deleted from the database and a reaction event is sent
out with 'op' set to 'remove'.
Tests are added to check:
1. Removing a reaction that does not exist fails
2. When removing a reaction, the event payload and users are correct
The markdown files under templates/zerver/help/ are technically not
templates in the standard sense, and thus should not be being
checked with this code path.
(We probably do want to add a test to make sure they all render fine,
but that can be its own project.)
Modify backend test of create_streams_if_needed so that the newly
created streams have descriptions.
Modify casperjs test of filling out stream_creation_form so that
the newly created stream has a description.
Fixes: #2428.
- Add base tornado test case class.
- Add test for websocket connection.
- Add test for websocket authentication.
- Add test for sending private message with websocket.
- Add test for sending stream message with websocket.
Fixes #2230
This commit adds the following:
1. A reaction model that consists of a user, a message and an emoji that
are unique together (a user cannot react to a particular message more
than once with the same emoji)
2. A reaction event that looks like:
{
    'type': 'reaction',
    'op': 'add',
    'message_id': 3,
    'emoji_name': 'doge',
    'user': {
        'user_id': 1,
        'email': 'hamlet@zulip.com',
        'full_name': 'King Hamlet'
    }
}
3. A new API endpoint, /reactions, that accepts POST requests to add a
reaction to a message
4. A migration to add the new model to the database
5. Tests that check that
(a) Invalid requests cannot be made
(b) The reaction event body contains all the info
(c) The reaction event is sent to the appropriate users
(d) Reacting more than once fails
It is still missing important features like removing emoji and
fetching them alongside messages.
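A minimal sketch of a model matching the description in (1); field names
and types are assumptions rather than the exact ones in the migration:

    from django.db import models

    class Reaction(models.Model):
        user_profile = models.ForeignKey('UserProfile')
        message = models.ForeignKey('Message')
        emoji_name = models.TextField()

        class Meta(object):
            # A user cannot react to the same message twice with the
            # same emoji.
            unique_together = ('user_profile', 'message', 'emoji_name')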
Refactor list_to_streams and create_streams_if_needed to take a list
of dictionaries, instead of a list of stream names. This is
preparation for being able to pass additional arguments into the
stream creation process.
An important note: This removes a set of validation code from the
start of add_subscriptions_backend; doing so is correct because
list_to_streams has that same validation code already.
[with some tweaks by tabbott for clarity]
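For illustration, the list-of-dictionaries argument might look roughly like
this (the 'description' key anticipates the stream-creation changes built
on top of this; exact keys are assumptions):

    streams_raw = [
        {'name': 'social', 'description': 'Water-cooler conversation'},
        {'name': 'errors', 'description': ''},
    ]
    # list_to_streams/create_streams_if_needed now consume entries like
    # these instead of bare stream names such as ['social', 'errors'].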
Clean up the instances of self.assertIn("string", result.content.decode("utf-8")),
and replace them with self.assert_in_response("string").
Fixes: #2313
We are prone to case-sensitivity bugs, so I added AARON and ZOE.
Also, for good measure, I insert them in non-alphabetical order
to try to drive out bugs from non-consistent sorting of user ids.
Previously, we rejected the HEAD requests that the trello integration
uses to check if the server accepts the integration.
Add decorator for returning 200 status code if request is HEAD.
Fixes: #2311.
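A sketch of what such a decorator could look like (the name and the exact
success response are assumptions):

    from functools import wraps
    from django.http import HttpResponse

    def return_success_on_head_request(view_func):
        @wraps(view_func)
        def _wrapped_view(request, *args, **kwargs):
            if request.method == 'HEAD':
                # Integrations like Trello probe with HEAD; just answer 200.
                return HttpResponse(status=200)
            return view_func(request, *args, **kwargs)
        return _wrapped_view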
Updates the HTML docs to match changes to the Desk.com website,
including all new screenshots for the custom action workflow.
Tests four types of messages that could be sent as notifications from
Desk.com. Desk.com allows an account administrator to send any text
in a custom action, so there isn't a standard format.
Custom actions send URL-encoded POST data, so the test fixtures contain
URL-encoded text like what could be sent by a custom action configured
as described in the Zulip Integrations documentation to post a new
message to a stream. (See also #2169, errors in this documentation.)
New zerver.tests.webhooks.test_deskdotcom.DeskDotComHookTests:
* Static text: minimal plain text string
* Case updated: activity alert with link to a Desk.com case and message
* Unicode text Italian: activity alert with message in Italian
* Unicode text Japanese: activity alert with message in Japanese
Each posts a new message in the deskdotcom stream.
Tested on Ubuntu 14.04. I created the fixtures with Emacs; I would
appreciate it if someone could check that the Italian and Japanese
messages look OK. I used the same text for a live test and it displayed
correctly.
Fixes #2031
Previously, if a new message arrived between when a user is subscribed
to the default streams and when the user's initial messages are
queried, we would try to create two UserMessage rows for the same
Message, resulting in an IntegrityError crash. We fix this and add a
test for that race condition.
In Django 1.10, the get_token function returns a salted version of the
csrf token, which changes whenever get_token is called. This gives
us the wrong result when we compare the state after returning from
Google's authentication servers. The solution is to unsalt the token
and use that token to compute the HMAC, so that we get the same value
as long as the token is the same.
If the user comes in to HomepageForm with a set subdomain, use that to
determine the signup realm instead of the email address.
In the non-REALMS_HAVE_SUBDOMAINS case, still allow using the email address
if no subdomain is passed.
The `django.contrib.auth.get_user` function was updated in Django 1.10;
since the password hash changes every time we update a user's password,
this now causes authentication failures. Previously, our code worked
correctly because we use our own session middleware, and the `get_user`
code had a conditional statement which allowed our code to bypass the
authentication code.
Previously, the way that render_messages was calling bugdown meant
that the preview feature didn't have access to realm data like the
list of users or streams, resulting in previews for those elements
being wrong.
Now render_message_backend uses zerver.lib.render_markdown to render
messages correctly.
[Commit message tweaked and test added by tabbott]
No change to behavior. non_mit_mailing_list never returned False, so it was
never possible to reach the line "Otherwise, the user is an MIT mailing
list, and .."
send_event() expects a list of user ids (ints) except for the special case
of messages. This commit:
1. Fixes this in the call to send_event() in do_send_typing_notification()
2. Renames the variables in do_send_typing_notification() to better reflect
their content (for example, recipient_ids instead of recipients).
3. Renames the id field in the dicts sent in the typing event body (sender,
recipients) to user_id.
4. Adds assertions to the tests to verify that the tornado event user ids
are the same as the recipients in the event body.
5. Adds assertions to the tests to verify that the tornado event user
ids and the recipient user ids (in the event body) are the same as the
expected user ids (obtained from the emails using
get_user_profile_by_email)
6. Changes all assertTrues to assertEquals in the tests
This fixes #2151.
This makes it possible to configure only certain authentication
methods to be enabled on a per-realm basis.
Note that the authentication_methods_dict function (which checks what
backends are supported on the realm) requires an in-function import
due to a circular dependency.
We recently made it so that a cross-realm bot can only send
messages to one realm at a time. (It can send to a realm
outside of its official realm, but only one of them.) This
test adds coverage for that.
Disallow Realm.string_id's like "streams", "about", and several hundred
others. Also restrict string_id's to be at least 3 characters long, and only
use characters in [a-z0-9-].
Does not restrict realms created by the create_realm.py management command.
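A simplified sketch of the validation described above (the disallowed-names
list is abbreviated and the function name is made up):

    import re

    DISALLOWED_STRING_IDS = {'streams', 'about'}  # real list has several hundred entries

    def check_string_id(string_id):
        if len(string_id) < 3:
            raise ValueError('string_id must be at least 3 characters long')
        if not re.match(r'^[a-z0-9-]+$', string_id):
            raise ValueError('string_id may only use characters in [a-z0-9-]')
        if string_id in DISALLOWED_STRING_IDS:
            raise ValueError('string_id is reserved')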
Before it was in UserSignUpTest, now it is in RealmCreationTest. The diff
makes it look like test_user_default_language is the target of the move,
but it isn't.
If a stream is public, we now send notifications to all realm users
if the name or description of the stream changes. For private
streams, the behavior remains the same.
We do this by introducing a method called
can_access_stream_user_ids().
(showell helped with this fix)
Fixes #2195
Previously, we set restrict_to_domain and invite_required differently
depending on whether we were setting up a community or a corporate
realm. Setting restrict_to_domain requires validation on the domain of the
user's email, which is messy in the web realm creation flow, since we
validate the user's email before knowing whether the user intends to set up
a corporate or community realm. The simplest solution is to have the realm
creation flow impose as few restrictions as possible (community defaults),
and then worry about restrict_to_domain etc. after the user is already in.
We set the test suite to explicitly use the old defaults, since several of
the tests depend on the old defaults.
This commit adds a database migration.
This test seems intended to verify registration in the case of a
unique completely open domain; but because of the mit.edu realm, it
instead tested that a logic bug in the non-subdomains case was
present.
We now send dictionaries for cross-realm bots. This led to the
following changes:
* Create get_cross_realm_dicts() in actions.py.
* Rename the page_params field to cross_realm_bots.
* Fix some back end tests.
* Add cross_realm_dict to people.js.
* Call people.add for cross-realm bots (if they are not already part of the realm).
* Remove hack to add in feedback@zulip.com on the client side.
* Add people.is_cross_realm_email() and use it in compose.js.
* Remove util.string_in_list_case_insensitive().
Adds a database migration, adds a new string_id argument to the management
realm creation command, and adds a short name field to the web realm
creation form when REALMS_HAVE_SUBDOMAINS is False.
Does a database migration to rename Realm.subdomain to
Realm.string_id, and makes Realm.subdomain a property. Eventually,
Realm.string_id will replace Realm.domain as the handle by which we
retrieve Realm objects.
We now simply exclude all cross-realm bots from the set of emails
under consideration, and then if the remaining emails are all in
the same realm, we're good.
This fix changes two behaviors:
* You can no longer send a PM to an ordinary user in another realm
by piggy-backing a cross-realm bot on to the message. (This was
basically a bug, but it would never manifest under current
configurations.)
* You will be able to send PMs to multiple cross-realm bots at once.
(This was an arbitrary restriction. We don't really care about this
scenario much yet, and it fell out of the new implementation.)
We can currently send a PM to a user in another realm, as long
as we copy a cross-realm bot from the same realm. This loophole
doesn't yet affect us in practice--all cross-realm bots are
generally configured for the "admin" realm like the old zulip.com--
but we should lock it down in a subsequent commit.
Having each condition in a separate test was confusing to read,
especially since the tests were doing inconsistent setup, sometimes
calling user2 the user from 2.example.com realm and other times
calling user2 the cross-realm bot, etc.
Note that we still need the equivalent function in our
user-facing API, so there is not much code removal yet.
(Also, we will probably always keep this in our API,
as bot authors will usually just want a simple endpoint
here, whereas our client code gets page_params and events.)
Previously, we used to create one Google OAuth callback url entry
per subdomain. This commit allows us to authenticate subdomain users
against a single Google OAuth callback url entry.
This is a first step towards implementing a message retention policy
feature.
- Add a message_retention_days field to the Realm model to set up
the message expiration period for a realm.
- Add migration.
- Add tool to get expired messages for each Realm.
- Add tests to cover tool for getting expired messages.
Passes the allowed domains for a realm to the frontend, via
page_params.domains. Groundwork for allowing users to add and
remove domains via the admin setting page, rather than via the
realm_alias.py management command.
This is a preliminary step towards eliminating the realm.domain field
in favor of realm.subdomain. Includes a database migration to create
these for existing realms.
This adds a medium (500px) size avatar thumbnail, that can be
referenced as `{name}-medium.png`. It is intended to be used on the
user's own settings page, though we may come up with other use cases
for high-resolution avatars in the future.
This will automatically generate and upload the medium avatar images
when a new avatar original is uploaded, and contains a migration
(contributed by Kirill Kanakhin) to ensure all pre-existing avatar
images have a medium avatar.
Note that this implementation does not provide an endpoint for
fetching the medium-size avatar for another user.
[substantially modified by tabbott]
This is some of the code we'd need if we wanted to have Zulip generate
avatars for things. Since it is so little useful code, and it's not
clear we will need this feature ever, we can remove this code to make
the codebase less confusing. It'd be easy to dig this out of history
if we ever want it.
Fixes #2101.
- Add tests for the case where SEND_MISSED_MESSAGE_EMAILS_AS_USER is
False (the default!).
- Reorganized test case code by removing repeated parts of code,
improving code style and moving common parts to separate class
methods.
Fixes #1697.
A POST to /typing creates a typing event.
Required parameters are 'op' ('start' or 'stop') and 'to' (recipient
emails). If there are multiple recipients, the 'to' parameter
should be a JSON string of the list of recipient emails.
The event created looks like:
{
    'type': 'typing',
    'op': 'start',
    'sender': 'hamlet@zulip.com',
    'recipients': [{
        'id': 1,
        'email': 'othello@zulip.com'
    }]
}
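A hypothetical request exercising the endpoint (host and credentials are
illustrative; per the description above, 'to' is a JSON-encoded list when
there are multiple recipients):

    import json
    import requests

    requests.post(
        'https://zulip.example.com/typing',
        auth=('hamlet@zulip.com', 'API_KEY'),
        data={
            'op': 'start',
            'to': json.dumps(['othello@zulip.com', 'cordelia@zulip.com']),
        },
    )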
We now send peer_remove events to folks who have never subscribed
to the streams (except for private streams and zephyr).
We also use logic that is more similar to how
bulk_add_subscriptions() works.
This distinguishes between YouTube Videos and Image Previews by adding
a particular “youtube-video” class to the preview along with changing
the title to the video ID rather than the link. This serves to allow
the lightbox to ID when a lightbox preview should be treated like a
YouTube video rather than an image preview.
This also modifies the tests in bugdown to expect a youtube-video class
along with the title to just be the video ID on YouTube rather than the
entire URL link.
Add detailed info (description, source and target branch, assignee) to message.
Change subject to 'repo_name / PR #id title'.
Modify some test fixtures for better coverage.
Add detailed info (description, source and target branch, assignee) to message.
Change subject to 'repo_name / PR #id title'.
Modify some test fixtures for better coverage.
Add detailed info (description, source and target branch, assignee) to message.
Change subject to 'repo_name / MR #id title'.
Modify some test fixtures for better coverage.
Fixes: #1883.
Rename:
PUSH_COMMITS_LIMIT to COMMITS_LIMIT
PUSH_COMMIT_ROW_TEMPLATE to COMMIT_ROW_TEMPLATE
PUSH_COMMITS_MORE_THAN_LIMIT_TEMPLATE to COMMITS_MORE_THAN_LIMIT_TEMPLATE
With reactions and other upcoming features, we'll be adding several
places where we need to check whether a particular user can access a
particular message. It's best to just have a single helper function
for this purpose that we can use everywhere.
Previously, we sent users to an "invite your friends" page after they
created an organization. This commit removes that step in the flow and sends
users directly to the home page. We also remove the now-unused
initial_invite_page.html template, initial_invite.js (which pre-filled the
invite emails with characters from literature), and the /invite URL route.
test_settings.py was setting EXTERNAL_HOST after importing settings.py,
which has several variables (like SERVER_URI) that are computed from
EXTERNAL_HOST.
[tweaked by tabbott to add comments explaining the story here].
Adds a new field org_type to Realm. Defaults for restricted_to_domain
and invite_required are now controlled by org_type at time of realm
creation (see zerver.lib.actions.do_create_realm), rather than at the
database level. Note that the backend defaults are all
org_type=corporate, since that matches the current assumptions in the
codebase, whereas the frontend default is org_type=community, since if
a user isn't sure they probably want community.
Since we will likely in the future enable/disable various
administrative features based on whether an organization is corporate
or community, we discuss those issues in the realm creation form.
Before we actually implement any such features, we'll want to make
sure users understand what type of organization they are a member of.
Choice of org_type (via radio button) has been added to the realm
creation flow and the realm creation management command, and the
open-realm option removed.
The database defaults have not been changed, which allows our testing code
to work unchanged.
[includes some HTML/CSS work by Brock Whittaker to make it look nice]
This pulls message-related code from models.py into a new
module called message.py, and it starts to break some bugdown
dependencies. All the methods here are basically related to
serializing Message objects as dictionaries for caches and
events.
extract_message_dict
stringify_message_dict
message_to_dict
message_to_dict_json
MessageDict.to_dict_uncached
MessageDict.to_dict_uncached_helper
MessageDict.build_dict_from_raw_db_row
MessageDict.build_message_dict
This fix also removes a circular dependency related
to get_avatar_url.
Also, there was kind of a latent bug in Message.need_to_render_content
where it was depending on other calls to Message to import bugdown
and set it globally in the namespace. We really need to just
eliminate the function, since it's so small and used by code that
may be doing very sketchy things, but for now I just fix it. (The
bug would possibly be exposed by moving build_message_dict out to the
library.)
This moves the logic for renaming a stream to the REST API
update_stream_backend method, eliminating the legacy API endpoint for
doing so.
It also adds a nice test suite covering international stream names.
This adds support for running a Zulip production server with each
realm on its own unique subdomain, e.g. https://realm_name.example.com.
This patch includes a ton of important features:
* Configuring the Zulip session middleware to issue cookies correctly
for the subdomains case.
* Throwing an error if the user tries to visit an invalid subdomain.
* Runs a portion of the Casper tests with REALMS_HAVE_SUBDOMAINS
enabled to test the subdomain signup process.
* Updating our integrations documentation to refer to the current subdomain.
* Enforces that users can only login to the subdomain of their realm
(but does not restrict the API; that will be tightened in a future commit).
Note that toggling settings.REALMS_HAVE_SUBDOMAINS on a live server is
not supported without manual intervention (the main problem will be
adding "subdomain" values for all the existing realms).
[substantially modified by tabbott as part of merging]
This sets the “title” attribute on the image to the actual title of
the image specified by the user in their markdown, rather than just
the URL of the full link to it.
This was the original way to send messages via the Zulip API in the
very early days of Zulip, but was replaced by the REST API back in
2013.
Fixes: #730.
We no longer use all the alert words for all the users in the
entire realm when we look for alert words in a newly sent/edited
message. Now we limit the search to only all the alert words
for all the users who will get UserMessage records. This will
hopefully make a big difference for big realms where most messages
are only sent to a small subset of users.
The bugdown parser no longer has a concept of which users need which
alert words, since it can't really do anything actionable with that info
from a rendering standpoint.
Instead, our calling code passes in a set of search words to the parser.
The parser returns the list of words it finds in the message.
Then the model method builds up the list of user ids that should be
flagged as having alert words in the message.
This refactoring is a little more involved than I'd like, but there are
still some circular dependency issues with rendering code, so I need to
pass in the rather complicated realm_alert_words data structure all the way
from the action through the model to the renderer.
This change shouldn't change the overall behavior of the system, except
that it does remove some duplicate regex checks that were occurring when
multiple users may have had the same alert word.
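Roughly, the data flow described above looks like this (function and
variable names are illustrative, not the actual signatures):

    content = 'the deploy caused an outage'
    realm_alert_words = {
        10: ['outage'],
        11: ['outage', 'deploy'],
    }

    # The renderer only receives the flat set of words to search for...
    search_words = {w for words in realm_alert_words.values() for w in words}
    words_in_message = render_and_find_alert_words(content, search_words)

    # ...and the model layer maps the words found back to user ids.
    user_ids_with_alert_words = {
        user_id
        for user_id, words in realm_alert_words.items()
        if set(words) & set(words_in_message)
    }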
If you supplied an unrecognizable address to our email system,
or you had EMAIL_GATEWAY_PATTERN configured wrong,
the get_missed_message_token_from_address() used to crash
hard and cryptically with a traceback saying that you can't
call startswith() on a None object.
Now we throw a ZulipEmailForwardError exception. This will
still lead to a traceback, but it should be easier to diagnose
the problem.
In our email mirror, we have a special format for missed
message emails that uses a 32-character randomly generated token
that we put into redis and that is then prefixed with "mm" for
a total of 34 characters.
We had a bug where we would mis-classify emails like
mmcfoo@example.com as being these system-generated emails
that were part of the redis setup.
It's actually a little unclear how the bug in the library
function would have manifested from the user's point of view,
but it was definitely buggy code, and it's possibly related in
a subtle way to an error report we got from a customer where
only one of their users, who happened to have a name like
mmcfoo, was having problems with the mirror.
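A sketch of the stricter classification implied above (not the exact
library code): a missed-message address has a local part of exactly 34
characters, "mm" plus the 32-character token, so an address that merely
starts with "mm" should not match.

    def is_missed_message_address(address):
        local_part = address.split('@')[0]
        # "mm" prefix plus the 32-character redis token == 34 characters.
        return local_part.startswith('mm') and len(local_part) == 34

    is_missed_message_address('mmcfoo@example.com')        # False
    is_missed_message_address('mm' + 'a' * 32 + '@x.com')  # True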
It appears that the assertRaisesRegexp approach we had before didn't
work properly on some systems, likely due to a bad interaction with
i18n (we haven't definitively determined the cause).
We now raise an exception in bugdown.do_convert() if rendering
fails, to avoid silent failures, and then calling code can convert
the exception to a JsonableError.
We now have 100% coverage on views/push_notifications.py, modulo
some dead code which will be addressed in the next commit.
There were some existing tests in test_external.py, but that
module is really intended for tests that hit external services.
The view is a really simple API that updates a DB table, and the
new test code focuses on error handling and idempotency as well
as the happy path.
Apparently, in urllib.parse, one needs to extract the query string from
the rest of the URL before parsing the query string; otherwise the
very first query parameter will have the rest of the URL in its name.
This results in a nondeterministic failure that happens 1/N of the
time, where N is the number of fields marshalled from a dictionary
into the query string.
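A small example of the difference, using Python 3's urllib.parse:

    from urllib.parse import urlparse, parse_qs

    url = 'https://example.com/callback?state=abc&next=/'

    # Parsing the whole URL folds everything before '?' into the first
    # parameter's name:
    parse_qs(url)
    # {'https://example.com/callback?state': ['abc'], 'next': ['/']}

    # Extracting the query string first gives the expected result:
    parse_qs(urlparse(url).query)
    # {'state': ['abc'], 'next': ['/']}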
This commit extracts compose_views() from update_subscriptions_backend(),
and it implements the correct behavior for forcing transactions to roll
back, which is to raise an exception.
There were really three steps in this commit:
- Extract buggy code to compose_views().
- Add tests on compose_views().
- Fix bugs exposed by the new tests by converting errors to exceptions.
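A rough sketch of the pattern (the function shape and error type are
illustrative, not the exact implementation):

    import json

    from django.db import transaction

    class ComposeViewError(Exception):
        pass

    def compose_views(request, user_profile, method_kwarg_pairs):
        json_dict = {}
        with transaction.atomic():
            for method, kwargs in method_kwarg_pairs:
                response = method(request, user_profile, **kwargs)
                if response.status_code != 200:
                    # Raising here makes the atomic block roll back
                    # everything done by the earlier views in the list.
                    raise ComposeViewError(response.content)
                json_dict.update(json.loads(response.content))
        return json_dict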
This adds support for using PGroonga to back the Zulip full-text
search feature. Built-in PostgreSQL full-text search doesn't support
languages that don't put spaces between terms, such as Japanese and
Chinese; PGroonga supports all languages, including Japanese and
Chinese.
Developers will need to re-provision when rebasing past this patch for
the tests to pass, since provision is what installs the PGroonga
package and extension.
PGroonga is enabled by default in development but not in production;
the hope is that after the PGroonga support is tested further, we can
enable it by default.
Fixes #615.
[docs and tests tweaked by tabbott]
The previous default configuration resulted in delivery problems if
the Zulip server was not authorized in the SPF records for the domains
of all users on the Zulip server.
This sends an event when a new avatar is uploaded that refreshes the
avatar for all browser clients without the need to reload the browser.
Fixes: #1359.
Previously, missed message emails with multiple senders would
incorrectly have a "," outside the quoted sender name part of the from
address string, resulting in confusing email output.
Adds a new default language field to the zerver_realm model.
This realm-level default language will be used as the default language
for newly created users. The realm-level default language can be
changed from the administration page.
Fixes #1372.
Often, users will copy email addresses with a name (rather than pure
email addresses) into the Zulip "invite users" UI. Previously, that
would throw an error.
This change also adds a get_invitee_emails_set function for parsing
emails content and a test suite for this new feature.
Fixes: #1419.
Define Integration and WebhookIntegration classes.
Change the webhook part of the integrations guide.
Replace hardcoded webhook URLs with URLs generated
from the WEBHOOKS list.
This exists primarily in order to allow us to mock settings.DEBUG for
the purposes of rate limiting, without actually mocking
settings.DEBUG, which I suspect Django never intended one to do, and
thus caused some very strange test failures (see
https://github.com/zulip/zulip/pull/776 for details).
This was in AdminCreateUserTest.test_create_user_backend().
For end-to-end tests we are logged in, and we need to verify
that our decorators add UserProfile to the parameters of
the view on our behalf, so that we don't get false positives.
In an upcoming commit, we will want to be able to serialize
the parameters for client_put to produce url coverage reports,
so that is another reason not to pass in the UserProfile
object. (That was how this was discovered.)
This makes us more consistent, since we have other wrappers
like client_patch, client_put, and client_delete.
Wrapping also will facilitate instrumentation of our posting code.
This new helper combines two old helpers, one of which was misnamed
and the other of which was always called after the first, so it
made sense to just combine the helpers.
Fixes: #1386
This commit adds these two tests:
test_use_first_unread_anchor_with_some_unread_messages
test_use_first_unread_anchor_with_no_unread_messages
The new tests add coverage to the conditional logic in
get_old_messages_backend() that looks at first_unread_result
when use_first_unread_anchor is set to True.
The test is now called test_use_first_unread_anchor_with_muted_topics().
Before this commit, the test exercised setting
use_first_unread_anchor to True, but it didn't inspect the
most relevant query affected by the flag. Now it does.
This test is still kind of hard to read, and it's far from ideal,
but I'm reluctant to remove it from the test suite.
This increases test coverage by exercising highlight_string().
It also gives deeper test coverage to NarrowBuilder.by_search(),
which had test coverage before, but only in terms of inspecting
the SQL that was generated. This test actually runs the SQL
under the hood.
This partly fixes #1006.
This allows the frontend to fetch data on the subscribers list (etc.)
for streams where the user has never been subscribed, making it
possible to implement UI showing details like subscribe counts on the
subscriptions page.
This is likely a performance regression for very large teams with
large numbers of streams; we'll want to do some testing to determine
the impact (and thus whether we should make this feature only fully
enabled for larger realms).
There were a bunch of authorization and well-formedness checks in
zerver.lib.actions.do_update_message that I moved to
zerver.views.messages.update_message_backend.
Reason: by convention, functions in actions.py complete their actions;
error checking should be done outside the file when possible.
Fixes: #1150.
This is controlled through the admin tab and a new field in the Realms table.
Notes:
* The admin tab setting takes a value in minutes, whereas the backend stores it
in seconds.
* This setting is unused when allow_message_editing is false.
* There is some generosity in how the limit is enforced. For instance, if the
user sees the hovering edit button, we ensure they have at least 5 seconds to
click it, and if the user gets to the message edit form, we ensure they have
at least 10 seconds to make the edit, by relaxing the limit.
* This commit also includes a countdown timer in the message edit form.
Resolves #903.
Taiga's webhook integration would give output events in a random
order which caused test failures on python 3 (seems like python
3 is more prone to non-deterministic failures). Fix that by
sorting the outputs obtained from events before concatenating them.
Use ujson.dumps to render raw messages sent by the PagerDuty
integration instead of using pprint.pformat. pprint.pformat
gives different results on python 2 and 3.
Correctly encode and decode strings in convert_html_to_markdown.
It wasn't possible to use universal_newlines=True since
Popen.communicate doesn't encode/decode strings correctly on
python 2.
Bitbucket changed the format of their API. The old format is still
useful for Bitbucket Enterprise, but for the main cloud version of
Bitbucket, we need a new BitBucket integration supporting the new API.
This is controlled through the admin tab and a new field in the Realms
table. This mirrors the behavior of the old hardcoded setting
feature_flags.disable_message_editing. Partially resolves #903.
This reverts commit f1f48f305e.
The use of sklearn unfortunately caused a substantial slowdown to the
Zulip provisioning process, which didn't seem worth it for a
relatively minor feature.
The subscribers list is appended to in `peer_add` events with no
regard for preserving the ordering, and ordering isn't really
important here, so it seems best to just sort it in these checks.
Also encode/decode strings appropriately when using api_keys to generate
basic auth header.
Also fix clashing annotations in zerver/tests/test_external.py.
We would like to know which kind of authentication backends the server
supports.
This is information you can get from /login, but not in a way easily
parseable by API apps (e.g. the Zulip mobile apps).
This reverts commit e985b57259.
This commit will break production when we next do a release, because
we haven't done a migration to create Attachment objects for
previously uploaded files.
For a long time, rest_dispatch has had this hack where we have to
create a copy of it in each views file using it, in order to directly
access the globals list in that file. This removes that hack, instead
making rest_dispatch just use Django's import_string to access the
target method to use.
[tweaked and reorganized from acrefoot's original branch in various
ways by tabbott]
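For reference, Django's import_string resolves a dotted path to the target
callable, so rest_dispatch can look handlers up by name rather than in the
caller's globals (the dotted path below is just an example):

    from django.utils.module_loading import import_string

    handler = import_string('zerver.views.streams.update_stream_backend')
    # handler can now be called like any other view function.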
get_display_recipient's annotation clashes with other wrong annotations.
Fix those wrong annotations.
Since get_display_recipient returns a Union, use isinstance checks and
casts to make mypy checks succeed.
Now that we have a working S3 mock and an effective way to toggle the
upload backend that Zulip is using, we can re-enable this important
end-to-end test of the Zulip S3 upload backend.
This has no functional changes; we just replace the old hacky
assignment of functions with assignment of the upload backend to a
variable.
I'm not totally happy with this, because we end up having to copy the
type annotations of the three methods 4 times each, but this should
make it a lot easier to test the (non-default-in-tests) S3 backend
using end-to-end tests, which would have caught
13bac1cc2a.
I expect we'll iterate on the interface over time; ideally, I'd like
all the code that checks LOCAL_UPLOADS_DIR to be inside upload.py, and
primarily in these classes.
In order to genericize use of Zulip outside companies,
all instances of coworkers have been changed to users.
NOTABLE EXCEPTION: When the Zulip instance is domain-
locked, the reference to coworkers remains. The reason
for this is twofold: first, the majority of Zulip instances
which require a particular domain will be locked to a
company, and second, the template variable for the domain
necessary should be added to the alert so it is clear
to the user what the domain needs to be for access.
Fixes: #861.
Just render the templates without the actual workflow to see if they
don't return a 500 error; this lets us catch various classes of
template bugs automatically.
Fixes #784.
Like the recent change blocking JSON endpoints for deactivated users
and users in deactivated realms, this change is a hardening
improvement. Those users should be unable to get an active session
anyway, but if somehow one is leaked, this means they won't be able to
access any user data.
Previously, api_fetch_api_key would not give clear error messages if
password auth was disabled or the user's realm had been deactivated;
additionally, the account disabled error stopped triggering when we
moved the active account check into the auth decorators.
This commit adds the capability to keep track and remove uploaded
files. Unclaimed attachments are files that have been uploaded to the
server but are not referenced in any messages. A management command to
remove old unclaimed files after a week is also included.
Tests for getting the file referred in messages are also included.
Since we don't have a stable way to get the Dropbox preview failure
image (and it was sorta a weird setup anyway), it seems best to just
remove the condition.