When you edit a message to contain links, and URL previews are
enabled, previously we'd throw an exception, because the realm ID
wasn't included in the event.
Also adds a test so that we have effective test coverage on this
codepath, though the history is actually that I found the bug through
writing this test :).
This fixes a weird issue where the following sequence of tests would fail:
test-backend
zerver.tests.test_messages.PersonalMessagesTest.test_personal_to_self
zerver.tests.test_report.TestReport.test_report_error
zerver.tests.test_templates.TemplateTestCase.test_custom_tos_template
It appears that all 3 tests are required for the failure.
While it's not entirely clear what the cause is, a very likely factor
is that settings.DEBUG is special, and so changing it at runtime is
likely to cause weird problems like this.
We fix this by replacing it with settings.DEVELOPMENT, which has the
same value in all environments, but doesn't have this problem of being
a special Django thing.
Change `from django.utils.timezone import now` to
`from django.utils import timezone`.
This is both because now() is ambiguous (could be datetime.datetime.now),
and more importantly to make it easier to write a lint rule against
datetime.datetime.now().
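As a small illustrative sketch of the before/after call sites:

    # Before: ambiguous at the call site and hard to lint for.
    # from django.utils.timezone import now
    # timestamp = now()

    # After: the module-qualified call is unambiguous, and a lint rule can
    # flag any remaining datetime.datetime.now() uses.
    from django.utils import timezone

    timestamp = timezone.now()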
page_params is kind of a monster object. Ideally, we'd have it
constructed in a much less haphazard fashion, and make sure that all
the useful data in it is available via the `/register` endpoint for
mobile/API. This change reorganizes page_params to be sorted by data
source, which is an important prerequisite for doing that.
- Add server version to `fetch_initial_state_data`.
- Add server version to register event queue api endpoint.
- Add server version to `get_auth_backends` api endpoint.
- Change source for server version in `home` endpoint.
- Fix tests.
Fixes #3663
- Add stamp file creation for the failed templates compilation.
- Add error response to `home` route if the stamp file exists. It appears
only in the development environment.
- Add jinja2 template for failed handlebars templates compilation error.
Fixes #3650.
This adds to Zulip support for a user changing their own email
address.
It's backed by a huge amount of work by Steve Howell on making email
changes actually work from a UI perspective.
Fixes #734.
The comments explain why this change is correct. This change is
useful because it's better to not have dead code paths, both because
it makes our life easier for coverage analysis, and because the else
statement provided the illusion that it could actually happen.
If the analysis in that comment is wrong, we'd rather have a 500 error
so we fix the bug than things silently sorta working.
This arguably regresses the Zephyr experience, in that we no longer
consider 'foo.d.d.d.d.d' to be something that gets narrowed in with
the rest, but that's a pretty rare use case anyway.
In practice, using that many '.d's only happens a few times a
year.
This makes life a lot easier for people inviting users to a new Zulip
organization, since they can give some form of context now.
Modified by tabbott to clean up CSS, backend code flow, and improve
the formatting of the emails.
Fixes: #1409.
There's a new option, `include_subscribers`, that controls whether the
API sends down subscriber data for the various streams you are
subscribed to.
This has significant performance savings for large realms with naive
clients, and saves a bunch of bandwidth as well.
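As an illustrative sketch only (the endpoint shape, server URL, and
credentials here are placeholders/assumptions, not part of this change),
a client could opt out of subscriber data like this:

    import requests

    resp = requests.post(
        "https://zulip.example.com/api/v1/register",
        auth=("bot@example.com", "API_KEY"),
        data={"include_subscribers": "false"},
    )
    # Subscriptions still come back, just without their subscriber lists.
    print(resp.json().get("subscriptions", []))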
This makes it super easy for frontend code using this view code to
produce a nice display of the history.
This also fixes an off-by-one error with the timestamps.
Based on work by Kartik Maji in #1204.
This has a few significant changes from the original version:
* We correctly handle filling in data for topic edits
* Has a complete test suite verifying correctness of the logic
* Currently, it doesn't include a special "start" entry
Things we may want to further change include:
* Adding a special "start" entry.
* Reversing the order of the history data returned for clarity.
We were apparently incorrectly hardcoding the client for the main
logged-in site loading to "website", rather than using the existing
logic that could sort out the desktop apps.
In some cases here we simplify things by calling avatar_url()
instead of get_avatar_url(), when we have a user_profile record
handy. For other cases we pass in an extra avatar_version
parameter to get_avatar_url(), including from avatar_url().
We have a field called user_profile.avatar_version that will
track avatar versions and be used tactically in avatar urls
to get browsers to refresh their caches (in future commits).
This commit bumps the avatar version when we update avatars.
We do this in do_change_avatar_fields(), which was
do_change_avatar_source() before this change.
Adarsh did the initial work here, and Steve Howell (showell) also
made changes.
For our user administration, we now primarily work with user ids
that get put into data-user-id attributes. We still put emails in the
tags to make our Casper tests easy to maintain.
This requires a minor change to the back end to pass down user ids
for the /users endpoint (in get_members_backend).
This moves do_events_register, fetch_initial_state_data and friends to
a new file.
Modified significantly by tabbott for correctness and to remove unused
imports.
Fixes #3635.
Disallows the characters @, *, `, >, and " in your name. Added test
cases similar to the MAX_NAME_LENGTH check.
Copied initial code from:
https://github.com/zulip/zulip/pull/2473
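A minimal sketch of the kind of check involved (the function name and
error handling here are illustrative, not the exact implementation):

    NAME_INVALID_CHARS = ['*', '`', '>', '"', '@']

    def check_full_name(full_name: str) -> str:
        full_name = full_name.strip()
        if any(character in full_name for character in NAME_INVALID_CHARS):
            raise ValueError("Invalid characters in name!")
        return full_name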
This changes the query for DevAuthBackend so that the Shakespearean
users are not omitted while limiting the number of extra users to be
rendered to something reasonable.
Fixes: #3578.
A bug in Zulip's implementation of the "stream exists" endpoint meant
that any user of a Zulip server could subscribe to an invite-only
stream without needing to be invited by using the "autosubscribe"
argument.
Thanks to Rafid Aslam for discovering this issue.
The realm with string_id of "simple" just has three users
named alice, bob, and cindy for now. It is useful for testing
scenarios where realms don't have special zulip.com exception
handling.
This fixes a bug where update_message_backend would do one memcached
query per user receiving a given message. Right now we just do a
single bulk database query, but in principle we could use
generic_bulk_cached_fetch to use the cache as well.
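Roughly, the shape of the change (illustrative Django ORM code, not the
exact query):

    from zerver.models import UserProfile

    user_ids = [1, 2, 3]  # ids of users receiving the edited message

    # Before: one lookup per recipient.
    # user_profiles = [get_user_profile_by_id(user_id) for user_id in user_ids]

    # After: a single bulk database query for all recipients at once.
    user_profiles = list(UserProfile.objects.filter(id__in=user_ids))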
Apparently, we were comparing the full list of enabled authentication
methods (whether or not supported by the server) against the user's
selections among those supported by the server, which resulted in
authentication methods being always reported as different.
This changes bugdown to use the realm passed in by the caller (if any)
for rendering, fixing a problem where bots such as the notification
bot would have their messages rendering using the admin realm's
settings, not the settings of the realm their messages are being sent
into.
Also adds a test for the notification bot case.
Fixes #3215.
A lot of care has been taken to ensure we're using the realm that the
message is being sent into, not the realm of the sender, to correctly
handle the logic for cross-realm bot users such as the notifications
bot.
In order to correctly handle messages sent by cross-realm bots, we
need to specify the realm that the messages are being sent into in the
send message code path. The commit and its successors convert that
code path to include the realm the message is being sent to explicitly.
Contributor visualization showing the avatar, user name, and number
of commits for each contributor. The JSON data is updated upon
deployment, triggered by the `update-prod-static` script.
This should substantially improve the clarity of the code, since
inside bugdown, this is only being used as a hash key that happens to
usually be a realm ID, not used as a Realm ID.
- Change `stream_name` into `stream_id` on some API endpoints that use
`stream_name` in their URLs to prevent confusion of `views` selection.
For example:
If the stream name is "foo/members", the URL would trigger
"^streams/(?P<stream_name>.*)/members$", which is confusing because
we intended to use the endpoint with the "^streams/(?P<stream_name>.*)$" regex.
All stream-related endpoints now use stream id instead of stream name,
except for a single endpoint that lets you convert stream names to stream ids.
See https://github.com/zulip/zulip/issues/2930#issuecomment-269576231
- Add `get_stream_id()` method to Zulip API client, and change
`get_subscribers()` method to comply with the new stream API
(replace `stream_name` with `stream_id`).
Fixes #2930.
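For example, with the Python API bindings (the config path is a
placeholder, and the exact return shapes are as described above):

    import zulip

    client = zulip.Client(config_file="~/.zuliprc")

    # Resolve the stream name to an id once, then use the id everywhere else.
    stream_id = client.get_stream_id("foo/members")["stream_id"]
    subscribers = client.get_subscribers(stream=stream_id)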
Remove events that don't exist.
Move handling issue events to separate function.
Format strings using the format function.
Change camelCase variable names to use underscores.
Make unknown events error more clear.
Add issue_event_type_name param to all fixtures.
The general __init__ file is a more natural home, and where other endpoints
(e.g. create_realm, etc) live.
Also changes forms.ValidationError to django.core.exceptions.ValidationError
to match the rest of the file/codebase.
Finishes the refactoring started in c1bbd8d. The goal of the refactoring is
to change the argument to get_realm from a Realm.domain to a
Realm.string_id. The steps were
* Add a new function, get_realm_by_string_id.
* Change all calls to get_realm to use get_realm_by_string_id instead.
* Remove get_realm.
* (This commit) Rename get_realm_by_string_id to get_realm.
Part of a larger migration to remove the Realm.domain field entirely.
The new GitHub dispatcher integration was apparently totally broken,
because we hadn't tagged the new dispatcher endpoint as exempt from
CSRF checking. I'm not sure why the test suite didn't catch this.
We only write domain to the session variable in one place,
accounts_home_with_domain, where we check that the domain is valid, that the
domain corresponds to an open realm, and that we are in the non-subdomains
case.
Previously, we were confusingly checking only a subset of the conditions
on reading back the domain in create_preregistration_user, and not checking
any of them when reading back the domain in get_realm_from_request.
Previously, we included a special subscribe button in new stream
notifications, but that had 2 problems:
(1) The subscribe button would render badly if the stream was renamed.
(2) There wasn't an easy way to look at the stream when deciding
whether to subscribe.
This fixes the second problem, but not really the first.
This adds support for only allowing normal users with account age
equal to or greater than a "waiting period" threshold to create streams;
this is useful for open organizations that want new members to
understand the community before creating streams.
Previously, if the create_stream_by_admins_only setting was set to True,
only admin users were able to create streams. Now normal users whose
account age is greater than or equal to the waiting period threshold can
also create streams.
Account age is defined as the number of days since the user created
their account.
Fixes: #2308.
Tweaked by tabbott to clean up the actual can_create_streams logic and
the tests.
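A rough sketch of the resulting can_create_streams check (attribute
names here are assumptions based on the description above, not
necessarily the exact fields):

    from datetime import timedelta

    from django.utils import timezone

    def can_create_streams(user_profile) -> bool:
        realm = user_profile.realm
        if user_profile.is_realm_admin:
            return True
        if realm.create_stream_by_admins_only:
            return False
        account_age = timezone.now() - user_profile.date_joined
        return account_age >= timedelta(days=realm.waiting_period_threshold)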
Add additional information about the user profile to
`zerver.views.pointer.get_profile_backend`, like `user_id`,
`full_name`, `email`, `is_bot`, `is_admin`, and `short_name` of the
user.
Adding a reaction is now a PUT request to
/messages/<message_id>/emoji_reactions/<emoji_name>
Similarly, removing a reaction is now a DELETE request to
/messages/<message_id>/emoji_reactions/<emoji_name>
This commit changes the url and updates the views and tests.
This commit also adds a test for invalid emoji when removing reaction.
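Illustrative requests against the new URL (the /api/v1 prefix, server,
and credentials are placeholders/assumptions, not part of this commit):

    import requests

    url = "https://zulip.example.com/api/v1/messages/3/emoji_reactions/doge"
    auth = ("hamlet@zulip.com", "API_KEY")

    requests.put(url, auth=auth)     # add the reaction
    requests.delete(url, auth=auth)  # remove the reaction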
This change adds support for displaying inline open graph previews for
links posted into Zulip.
It is designed to interact correctly with message editing.
This adds the new settings.INLINE_URL_EMBED_PREVIEW setting to control
whether this feature is enabled.
By default, this setting is currently disabled, so that we can burn it
in for a bit before it impacts users more broadly.
Eventually, we may want to make this manageable via a (set of?)
per-realm settings. E.g. I can imagine a realm wanting to be able to
enable/disable it for certain URLs.
This can be useful in scenarios where the network doesn't support
websockets. We don't include it in prod_settings_template.py since
it's a very rare setting to need.
Fixes #1528.
This commit adds support for removing reactions via DELETE requests to
the /reactions endpoint with parameters emoji_name and message_id.
The reaction is deleted from the database and a reaction event is sent
out with 'op' set to 'remove'.
Tests are added to check:
1. Removing a reaction that does not exist fails
2. When removing a reaction, the event payload and users are correct
This commit adds the following:
1. A reaction model that consists of a user, a message and an emoji that
are unique together (a user cannot react to a particular message more
than once with the same emoji)
2. A reaction event that looks like:
    {
        'type': 'reaction',
        'op': 'add',
        'message_id': 3,
        'emoji_name': 'doge',
        'user': {
            'user_id': 1,
            'email': 'hamlet@zulip.com',
            'full_name': 'King Hamlet'
        }
    }
3. A new API endpoint, /reactions, that accepts POST requests to add a
reaction to a message
4. A migration to add the new model to the database
5. Tests that check that
(a) Invalid requests cannot be made
(b) The reaction event body contains all the info
(c) The reaction event is sent to the appropriate users
(d) Reacting more than once fails
It is still missing important features like removing emoji and
fetching them alongside messages.
Refactor list_to_streams and create_streams_if_needed to take a list
of dictionaries, instead of a list of stream names. This is
preparation for being able to pass additional arguments into the
stream creation process.
An important note: This removes a set of validation code from the
start of add_subscriptions_backend; doing so is correct because
list_to_streams has that same validation code already.
[with some tweaks by tabbott for clarity]
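Sketch of the new argument shape (the keys shown are illustrative; the
point is that callers now pass dicts rather than bare names, leaving
room for per-stream options later):

    stream_names = ["social", "engineering"]

    # Before: list_to_streams(stream_names, ...)
    # After: each stream is described by a dict.
    stream_dicts = [{"name": name, "description": ""} for name in stream_names]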
Previously, we rejected the HEAD requests that the Trello integration
uses to check whether the server accepts the integration.
Add decorator for returning 200 status code if request is HEAD.
Fixes: #2311.
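A minimal sketch of such a decorator (the name and response body are
illustrative, not the exact implementation):

    from functools import wraps

    from django.http import HttpResponse

    def return_success_on_head_request(view_func):
        @wraps(view_func)
        def _wrapped_view_func(request, *args, **kwargs):
            if request.method == "HEAD":
                return HttpResponse(status=200)
            return view_func(request, *args, **kwargs)
        return _wrapped_view_func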
No change in behavior. The for_you block had already been removed in
portico.html long ago in a6889080ce. The
contents of the block are still present in the non-portico 404.html
and 5xx.html error pages.
In Django 1.10, the get_token function returns a salted version of
the CSRF token, which changes whenever get_token is called. This gives
us the wrong result when we compare the state after returning from
Google's authentication servers. The solution is to unsalt the token
and use that token to compute the HMAC, so that we get the same value
as long as the token is the same.
Needed in case the user was allowed to join the realm when they got the
confirmation email, but is no longer allowed to do so. Check was previously
applied to invited users (those with a prereg_user.referred_by), and is now
applied regardless of how they get to accounts_register.
Ensure domain and subdomain correspond to the same realm when being passed
to forms.HomepageForm. Previously this was not the case when e.g. we got
here via the /register/<domain> endpoint.
This also effectively disables the register/<domain> endpoint when
REALMS_HAVE_SUBDOMAINS is enabled; or rather, foo.server.org/register/bar.com
will try to register you for the realm with string_id foo rather than
the realm with domain bar.com.
Previously, the way that render_messages was calling bugdown meant
that the preview feature didn't have access to realm data like the
list of users or streams, resulting in previews for those elements
being wrong.
Now render_message_backend uses zerver.lib.render_markdown to render
messages correctly.
[Commit message tweaked and test added by tabbott]
This (1) removes the check on whether the domain of the email matches
the Realm.domain of an existing realm and (2) avoids setting `realm =
get_realm(domain)` in the realm creation flow, which would cause the
wrong code path to be followed in the event that the domain in a
user's email address happens to match a deactivated realm.
We now send dictionaries for cross-realm bots. This led to the
following changes:
* Create get_cross_realm_dicts() in actions.py.
* Rename the page_params field to cross_realm_bots.
* Fix some back end tests.
* Add cross_realm_dict to people.js.
* Call people.add for cross-realm bots (if they are not already part of the realm).
* Remove hack to add in feedback@zulip.com on the client side.
* Add people.is_cross_realm_email() and use it in compose.js.
* Remove util.string_in_list_case_insensitive().
Adds a database migration, adds a new string_id argument to the management
realm creation command, and adds a short name field to the web realm
creation form when REALMS_HAVE_SUBDOMAINS is False.
Does a database migration to rename Realm.subdomain to
Realm.string_id, and makes Realm.subdomain a property. Eventually,
Realm.string_id will replace Realm.domain as the handle by which we
retrieve Realm objects.
Previously, we used to create one Google OAuth callback url entry
per subdomain. This commit allows us to authenticate subdomain users
against a single Google OAuth callback url entry.
For some reason, we were using a 'load' function that doesn't exist in the
JWT library. This commit updates the code to use the correct interface
of the JWT library.
The signature verification is done by the decode function.
Passes the allowed domains for a realm to the frontend, via
page_params.domains. Groundwork for allowing users to add and
remove domains via the admin setting page, rather than via the
realm_alias.py management command.
This adds a medium (500px) size avatar thumbnail, that can be
referenced as `{name}-medium.png`. It is intended to be used on the
user's own settings page, though we may come up with other use cases
for high-resolution avatars in the future.
This will automatically generate and upload the medium avatar images
when a new avatar original is uploaded, and contains a migration
(contributed by Kirill Kanakhin) to ensure all pre-existing avatar
images have a medium avatar.
Note that this implementation does not provide an endpoint for
fetching the medium-size avatar for another user.
[substantially modified by tabbott]
POST to /typing creates a typing event
Required parameters are 'op' ('start' or 'stop') and 'to' (recipient
emails). If there are multiple recipients, the 'to' parameter
should be a JSON string of the list of recipient emails.
The event created looks like:
    {
        'type': 'typing',
        'op': 'start',
        'sender': 'hamlet@zulip.com',
        'recipients': [{
            'id': 1,
            'email': 'othello@zulip.com'
        }]
    }
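An illustrative request against this endpoint (the /api/v1 prefix,
server, and credentials are placeholders):

    import json

    import requests

    requests.post(
        "https://zulip.example.com/api/v1/typing",
        auth=("hamlet@zulip.com", "API_KEY"),
        data={"op": "start", "to": json.dumps(["othello@zulip.com"])},
    )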
Add detailed info (description, source and target branch, assignee) to message.
Change subject to 'repo_name / PR #id title'.
Modify some test fixtures for better coverage.
Add detailed info (description, source and target branch, assignee) to message.
Change subject to 'repo_name / PR #id title'.
Modify some test fixtures for better coverage.
Add detailed info (description, source and target branch, assignee) to message.
Change subject to 'repo_name / MR #id title'.
Modify some test fixtures for better coverage.
Fixes: #1883.
With reactions and other upcoming features, we'll be adding several
places where we need to check whether a particular user can access a
particular message. It's best to just have a single helper function
for this purpose that we can use everywhere.
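A very rough sketch of the kind of shared helper meant here (the name
and the check are illustrative; the real logic also has to consider
things like public stream history):

    from zerver.models import UserMessage

    def user_can_access_message(user_profile, message_id: int) -> bool:
        return UserMessage.objects.filter(
            user_profile=user_profile, message_id=message_id
        ).exists()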
Previously, we sent users to an "invite your friends" page after they
created an organization. This commit removes that step in the flow and sends
users directly to the home page. We also remove the now-unused
initial_invite_page.html template, initial_invite.js (which pre-filled the
invite emails with characters from literature), and the /invite URL route.
Adds a new field org_type to Realm. Defaults for restricted_to_domain
and invite_required are now controlled by org_type at time of realm
creation (see zerver.lib.actions.do_create_realm), rather than at the
database level. Note that the backend defaults are all
org_type=corporate, since that matches the current assumptions in the
codebase, whereas the frontend default is org_type=community, since if
a user isn't sure they probably want community.
Since we will likely in the future enable/disable various
administrative features based on whether an organization is corporate
or community, we discuss those issues in the realm creation form.
Before we actually implement any such features, we'll want to make
sure users understand what type of organization they are a member of.
Choice of org_type (via radio button) has been added to the realm
creation flow and the realm creation management command, and the
open-realm option removed.
The database defaults have not been changed, which allows our testing code
to work unchanged.
[includes some HTML/CSS work by Brock Whittaker to make it look nice]
This pulls message-related code from models.py into a new
module called message.py, and it starts to break some bugdown
dependencies. All the methods here are basically related to
serializing Message objects as dictionaries for caches and
events.
extract_message_dict
stringify_message_dict
message_to_dict
message_to_dict_json
MessageDict.to_dict_uncached
MessageDict.to_dict_uncached_helper
MessageDict.build_dict_from_raw_db_row
MessageDict.build_message_dict
This fix also removes a circular dependency related
to get_avatar_url.
Also, there was kind of a latent bug in Message.need_to_render_content
where it was depending on other calls to Message to import bugdown
and set it globally in the namespace. We really need to just
eliminate the function, since it's so small and used by code that
may be doing very sketchy things, but for now I just fix it. (The
bug would possibly be exposed by moving build_message_dict out to the
library.)
I move these three functions to lib/cache.py:
to_dict_cache_key_id
to_dict_cache_key
flush_message
This will prepare us for a more significant refactoring that
eventually breaks down some circular dependencies with
Message and bugdown.
This moves the logic for renaming a stream to the REST API
update_stream_backend method, eliminating the legacy API endpoint for
doing so.
It also adds a nice test suite covering international stream names.
This improves Google and JWT auth as well as the registration
codepath to log something if the wrong subdomain is encountered.
Ideally, we'd have tests for these, and code to make the Google and JWT
auth cases show a clear error message.
This adds support for running a Zulip production server with each
realm on its own unique subdomain, e.g. https://realm_name.example.com.
This patch includes a ton of important features:
* Configuring the Zulip session middleware to issue cookies correctly
for the subdomains case.
* Throwing an error if the user tries to visit an invalid subdomain.
* Runs a portion of the Casper tests with REALMS_HAVE_SUBDOMAINS
enabled to test the subdomain signup process.
* Updating our integrations documentation to refer to the current subdomain.
* Enforces that users can only log in to the subdomain of their realm
(but does not restrict the API; that will be tightened in a future commit).
Note that toggling settings.REALMS_HAVE_SUBDOMAINS on a live server is
not supported without manual intervention (the main problem will be
adding "subdomain" values for all the existing realms).
[substantially modified by tabbott as part of merging]
This was the original way to send messages via the Zulip API in the
very early days of Zulip, but was replaced by the REST API back in
2013.
Fixes: #730.
We currently do
var invite_suffix = "{{invite_suffix}}";
in javascript in the initial_invite_page.html template.
This sets invite_suffix to "{{invite_suffix}}" when the template is rendered
without invite_suffix in the params, rather than to "" as intended. This
later causes problems in the invite_email validator in initial_invite.js.
We no longer use all the alert words for all the users in the
entire realm when we look for alert words in a newly sent/edited
message. Now we limit the search to only all the alert words
for all the users who will get UserMessage records. This will
hopefully make a big difference for big realms where most messages
are only sent to a small subset of users.
We now use render_incoming_message() to render all incoming
new messages (sends/edits), so that they will get the same treatment.
This change also establishes do_send_messages() as the code
path to get new messages rendered. It removes some
logic from check_message() that only happened on certain code paths
for sending messages, and which would only detect failures by
expensively rendering messages, so it wasn't much of a guard.
This change also helps to phase out maybe_render_content(), which
deepens the call stack without providing much clarity to the reader,
since its behavior is so variable.
Finally, this sets up to fix a flaw in the way we compute which
users have alert words in their messages (in a subsequent commit).
The list_to_streams() method now uses create_streams_if_needed() to
do its heavy lifting during the autocreate=True case.
This commit gets us to 100% coverage on the streams view. (The
recently created actions.create_streams_if_needed() was easy
to test in isolation, and it has 100% coverage as well, so we are
not cheating here.)
Fixes: #1005.
When we push a device token, we want to clean out any other user's
tokens on the device, but not the current user's. We were wiping
away our own token, if it existed, before creating it again. This
was probably never a user-facing problem; it just made for dead code
and a little unnecessary DB churn. By excluding the current user
from the delete() call, we exercise the update path in our tests now,
so we have 100% coverage.
This commit extracts compose_views() from update_subscriptions_backend(),
and it implements the correct behavior for forcing transactions to roll
back, which is to raise an exception.
There were really three steps in this commit:
- Extract buggy code to compose_views().
- Add tests on compose_views().
- Fix bugs exposed by the new tests by converting errors to exceptions.
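A simplified sketch of the compose_views pattern (not the exact
implementation; error type and response handling are illustrative):

    import json

    from django.db import transaction

    class JsonableError(Exception):
        pass

    def compose_views(thunks):
        # Run several view-style callables inside one transaction; raising
        # (rather than returning an error response) forces a rollback.
        json_dict = {}
        with transaction.atomic():
            for thunk in thunks:
                response = thunk()
                if response.status_code != 200:
                    raise JsonableError(response.content)
                json_dict.update(json.loads(response.content))
        return json_dict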
These annotations aren't perfect because the sqlalchemy stubs in
typeshed are broken (e.g. a `Select` doesn't have the ability to do
`.where()`), but we've at least used some typevars to make it easy to
address that when the sqlalchemy stubs are less broken.
This adds support for using PGroonga to back the Zulip full-text
search feature, because the built-in PostgreSQL full-text search doesn't
support languages that don't put spaces between terms, such as Japanese
and Chinese. PGroonga supports all languages, including Japanese
and Chinese.
Developers will need to re-provision when rebasing past this patch for
the tests to pass, since provision is what installs the PGroonga
package and extension.
PGroonga is enabled by default in development but not in production;
the hope is that after the PGroonga support is tested further, we can
enable it by default.
Fixes #615.
[docs and tests tweaked by tabbott]
Apparently, we had incorrectly concluded that our highlight_string
search result highlighting offsets coming from tsearch_extras were
measured in bytes, whereas in fact they are measured in characters.
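The distinction only shows up with non-ASCII text, e.g.:

    s = "café matters"
    print(len(s))                  # 12 characters
    print(len(s.encode("utf-8")))  # 13 bytes; "é" is two bytes in UTF-8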
Also changes all links to /integrations to be relative rather than
absolute, so that users will primarily access the /integrations page
for their subdomain.
This sends an event when a new avatar is uploaded that refreshes the
avatar for all browser clients without the need to reload the browser.
Fixes: #1359.
This adds a few new helpful context variables that we can use to
compute URLs in all of our templates:
* external_uri_scheme: http(s)://
* server_uri: The base URL for the server's canonical name
* realm_uri: The base URL for the user's realm
This is preparatory work for making realm_uri != server_uri when we
add support for subdomains.
Most directly useful for the migration to zulipchat.com.
Creates a new field in UserProfile to store the tos_version, as well as two
new settings TOS_VERSION and FIRST_TIME_TOS_TEMPLATE. We check for a version
mismatch between what the user has signed and the current
settings.TOS_VERSION whenever the user hits the home page, and redirect them
if needed.
Note that accounts_accept_terms.html and
zerver.views.accounts_accept_terms were unused before this commit
(they date from c327446537)
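A rough sketch of the version-mismatch check described above (the
redirect target is an assumption):

    from django.conf import settings
    from django.shortcuts import redirect

    def check_terms_of_service(request, user_profile):
        if (settings.TOS_VERSION is not None
                and user_profile.tos_version != settings.TOS_VERSION):
            # Placeholder URL; the real flow renders accounts_accept_terms.
            return redirect("/accounts/accept_terms/")
        return None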
Adds a new default language field to the zerver_realm model.
This realm-level default language will be used as the default language
for newly created users. The realm-level default language can be
changed from the administration page.
Fixes #1372.
The previous export tool would only work properly for small realms,
and was missing a number of important features:
* Export of avatars and uploads from S3
* Export of presence data, activity data, etc.
* Faithful export/import of timestamps
* Parallel export of messages
* Not OOM killing for large realms
The new tool runs as a pair of documented management commands, and
solves all of those problems.
Also we add a new management command for exporting the data of an
individual user.
Often, users will copy email addresses with a name (rather than pure
email addresses) into the Zulip "invite users" UI. Previously, that
would throw an error.
This change also adds a get_invitee_emails_set function for parsing
emails content and a test suite for this new feature.
Fixes: #1419.
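A sketch of the kind of parsing get_invitee_emails_set does (the regex
and exact behavior here are illustrative, not the actual implementation):

    import re

    def get_invitee_emails_set(invitee_emails_raw: str) -> set:
        invitee_emails = set()
        for email in re.split(r"[,\n]", invitee_emails_raw):
            # Accept "Full Name <user@example.com>" as well as bare addresses.
            match = re.search(r"<(?P<email>[^>]*)>", email)
            if match:
                email = match.group("email")
            email = email.strip()
            if email:
                invitee_emails.add(email)
        return invitee_emails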
When logging in through GitHub, if the user's realm doesn't exist,
we are redirected to the registration page, but the form action
points to the complete URL of the GitHub OAuth flow.
The MitUser model caused a constant series of little problems for
users with mit.edu email addresses trying to sign up for different
Zulip servers.
The new implementation just uses conditionals on the realm object when
selecting the confirmation template to use.
In python 3, subprocess uses bytes for input and output if
universal_newlines=False (the default). It uses str for input and
output if universal_newlines=True.
Since we mostly deal with strings, add universal_newlines=True to
subprocess.check_output.
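For example:

    import subprocess

    # With universal_newlines=True, check_output returns str rather than
    # bytes on both Python 2 and Python 3.
    output = subprocess.check_output(["git", "rev-parse", "HEAD"],
                                     universal_newlines=True)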
This allows the frontend to fetch data on the subscribers list (etc.)
for streams where the user has never been subscribed, making it
possible to implement UI showing details like subscribe counts on the
subscriptions page.
This is likely a performance regression for very large teams with
large numbers of streams; we'll want to do some testing to determine
the impact (and thus whether we should make this feature only fully
enabled for larger realms).
The muting logic in approximate_unread_count() was confusing
stream/subject and only using the first of many stream/subject
pairs, so it was rarely excluding rows from the count, and when
it did exclude rows, they were the wrong rows.
This fixes part of #1300, but we may want to keep the issue open.
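Roughly, the intended check (data and names here are illustrative):

    muted_topics = [["devel", "noise"], ["social", "lunch"]]  # example data
    muted = {(stream.lower(), topic.lower()) for stream, topic in muted_topics}

    def is_muted(stream: str, topic: str) -> bool:
        # Every row's own stream/topic pair is checked, not just the first
        # muted pair in the list.
        return (stream.lower(), topic.lower()) in muted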
There were a bunch of authorization and well-formedness checks in
zerver.lib.actions.do_update_message that I moved to
zerver.views.messages.update_message_backend.
Reason: by convention, functions in actions.py complete their actions;
error checking should be done outside the file when possible.
Fixes: #1150.
This is controlled through the admin tab and a new field in the Realms table.
Notes:
* The admin tab setting takes a value in minutes, whereas the backend stores it
in seconds.
* This setting is unused when allow_message_editing is false.
* There is some generosity in how the limit is enforced. For instance, if the
user sees the hovering edit button, we ensure they have at least 5 seconds to
click it, and if the user gets to the message edit form, we ensure they have
at least 10 seconds to make the edit, by relaxing the limit.
* This commit also includes a countdown timer in the message edit form.
Resolves #903.
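A rough sketch of the limit-plus-grace check described above (attribute
names and the exact shape of the relaxation are assumptions):

    def may_edit(realm, message_age_seconds: float, in_edit_form: bool) -> bool:
        if not realm.allow_message_editing:
            return False
        limit = realm.message_content_edit_limit_seconds
        # Give the user at least 5 seconds to click the edit button, and at
        # least 10 seconds once the edit form is open.
        grace = 10 if in_edit_form else 5
        return message_age_seconds <= limit + grace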
Taiga's webhook integration would give output events in a random
order which caused test failures on python 3 (seems like python
3 is more prone to non-deterministic failures). Fix that by
sorting the outputs obtained from events before concatenating them.
Use ujson.dumps to render raw messages sent by the PagerDuty
integration instead of using pprint.pformat. pprint.pformat
gives different results on python 2 and 3.
Bitbucket changed the format of their API. The old format is still
useful for Bitbucket Enterprise, but for the main cloud version of
Bitbucket, we need a new Bitbucket integration supporting the new API.