This is a first step towards implementing a message retention policy
feature.
- Add a message_retention_days field to the Realm model to set the
message expiration period for a realm.
- Add a migration.
- Add a tool to get expired messages for each realm.
- Add tests covering the tool for getting expired messages.
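A minimal sketch of what the new field and the expired-message query
might look like, assuming Django models named Realm and Message with a
pub_date timestamp (exact names and defaults in the real code may differ):
from datetime import timedelta

from django.db import models
from django.utils import timezone


class Realm(models.Model):
    # NULL means "never expire messages for this realm" (assumed default).
    message_retention_days = models.IntegerField(null=True, default=None)


def get_expired_messages(realm):
    # Messages older than the realm's retention period; skip realms that
    # have no retention policy configured.
    if realm.message_retention_days is None:
        return Message.objects.none()
    cutoff = timezone.now() - timedelta(days=realm.message_retention_days)
    return Message.objects.filter(sender__realm=realm, pub_date__lt=cutoff)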
Passes the allowed domains for a realm to the frontend, via
page_params.domains. Groundwork for allowing users to add and
remove domains via the admin settings page, rather than via the
realm_alias.py management command.
This is a preliminary step towards eliminating the realm.domain field
in favor of realm.subdomain. Includes a database migration to create
these for existing realms.
This adds a medium-size (500px) avatar thumbnail that can be
referenced as `{name}-medium.png`. It is intended to be used on the
user's own settings page, though we may come up with other use cases
for high-resolution avatars in the future.
This will automatically generate and upload the medium avatar images
when a new avatar original is uploaded, and contains a migration
(contributed by Kirill Kanakhin) to ensure all pre-existing avatar
images have a medium avatar.
Note that this implementation does not provide an endpoint for
fetching the medium-size avatar for another user.
[substantially modified by tabbott]
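A rough sketch of the medium-thumbnail generation, assuming Pillow and
a simple local file layout; the real upload backends and resizing
details may differ:
from PIL import Image

MEDIUM_AVATAR_SIZE = 500


def write_medium_avatar(original_path, base_path):
    # Resize the original avatar down to 500x500 and store it next to
    # the existing sizes as "<base>-medium.png".
    im = Image.open(original_path).convert("RGBA")
    im = im.resize((MEDIUM_AVATAR_SIZE, MEDIUM_AVATAR_SIZE), Image.LANCZOS)
    medium_path = base_path + "-medium.png"
    im.save(medium_path)
    return medium_path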
This is some of the code we'd need if we wanted to have Zulip generate
avatars for things. Since it is so little useful code, and it's not
clear we will need this feature ever, we can remove this code to make
the codebase less confusing. It'd be easy to dig this out of history
if we ever want it.
Fixes #2101.
- Add tests for when SEND_MISSED_MESSAGE_EMAILS_AS_USER is False (the
default!).
- Reorganize the test case code by removing repeated code, improving
code style, and moving common parts into separate class methods.
Fixes #1697.
POST to /typing creates a typing event
Required parameters are 'op' ('start' or 'stop') and 'to' (recipient
emails). If there are multiple recipients, the 'to' parameter
should be a JSON string of the list of recipient emails.
The event created looks like:
{
    'type': 'typing',
    'op': 'start',
    'sender': 'hamlet@zulip.com',
    'recipients': [{
        'id': 1,
        'email': 'othello@zulip.com'
    }]
}
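For example, hitting the endpoint with python-requests might look like
the sketch below (the URL prefix and API-key auth are illustrative, not
a statement of the exact route or auth mechanism):
import json

import requests

requests.post(
    "https://zulip.example.com/typing",
    auth=("hamlet@zulip.com", "API_KEY"),
    data={
        "op": "start",
        # With multiple recipients, 'to' must be a JSON string of emails.
        "to": json.dumps(["othello@zulip.com", "cordelia@zulip.com"]),
    },
)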
We now send peer_remove events to folks who have never subscribed
to the streams (except for private streams and zephyr).
We also use logic that is more similar to how
bulk_add_subscriptions() works.
This distinguishes between YouTube videos and image previews by adding
a “youtube-video” class to the preview and changing the title to the
video ID rather than the link. This allows the lightbox to identify
when a preview should be treated as a YouTube video rather than an
image preview.
This also modifies the bugdown tests to expect a youtube-video class
and a title that is just the YouTube video ID rather than the entire
URL.
Add detailed info (description, source and target branch, assignee) to message.
Change subject to 'repo_name / PR #id title'.
Modify some test fixtures for better coverage.
Add detailed info (description, source and target branch, assignee) to message.
Change subject to 'repo_name / PR #id title'.
Modify some test fixtures for better coverage.
Add detailed info (description, source and target branch, assignee) to message.
Change subject to 'repo_name / MR #id title'.
Modify some test fixtures for better coverage.
Fixes: #1883.
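The subject format above amounts to something like the following
(illustrative helper, not the actual webhook code; 'PR' vs. 'MR'
depends on the service):
def build_subject(repo_name, change_type, change_id, title):
    # e.g. build_subject('zulip', 'PR', 123, 'Add typing notifications')
    # -> 'zulip / PR #123 Add typing notifications'
    return "%s / %s #%s %s" % (repo_name, change_type, change_id, title)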
Rename:
PUSH_COMMITS_LIMIT to COMMITS_LIMIT
PUSH_COMMIT_ROW_TEMPLATE to COMMIT_ROW_TEMPLATE
PUSH_COMMITS_MORE_THAN_LIMIT_TEMPLATE to COMMITS_MORE_THAN_LIMIT_TEMPLATE
With reactions and other upcoming features, we'll be adding several
places where we need to check whether a particular user can access a
particular message. It's best to just have a single helper function
for this purpose that we can use everywhere.
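A hedged sketch of such a helper, under the assumption that "can
access" means either having a UserMessage row or the message being on
a public stream in the user's realm (the real checks are more involved):
from zerver.models import Recipient, Stream, UserMessage


def user_can_access_message(user_profile, message):
    # The user received the message directly (has a UserMessage row) ...
    if UserMessage.objects.filter(user_profile=user_profile,
                                  message=message).exists():
        return True
    # ... or the message went to a public stream in the user's realm.
    recipient = message.recipient
    if recipient.type == Recipient.STREAM:
        stream = Stream.objects.get(id=recipient.type_id)
        return (not stream.invite_only
                and stream.realm_id == user_profile.realm_id)
    return False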
Previously, we sent users to an "invite your friends" page after they
created an organization. This commit removes that step in the flow and sends
users directly to the home page. We also remove the now-unused
initial_invite_page.html template, initial_invite.js (which pre-filled the
invite emails with characters from literature), and the /invite URL route.
test_settings.py was setting EXTERNAL_HOST after importing settings.py,
which has several variables (like SERVER_URI) that are computed from
EXTERNAL_HOST.
[tweaked by tabbott to add comments explaining the story here].
Adds a new field org_type to Realm. Defaults for restricted_to_domain
and invite_required are now controlled by org_type at time of realm
creation (see zerver.lib.actions.do_create_realm), rather than at the
database level. Note that the backend defaults are all
org_type=corporate, since that matches the current assumptions in the
codebase, whereas the frontend default is org_type=community, since if
a user isn't sure they probably want community.
Since we will likely in the future enable/disable various
administrative features based on whether an organization is corporate
or community, we discuss those issues in the realm creation form.
Before we actually implement any such features, we'll want to make
sure users understand what type of organization they are a member of.
Choice of org_type (via radio button) has been added to the realm
creation flow and the realm creation management command, and the
open-realm option removed.
The database defaults have not been changed, which allows our testing code
to work unchanged.
[includes some HTML/CSS work by Brock Whittaker to make it look nice]
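A sketch of how the org_type choice might drive those creation-time
defaults (constants and exact values here are illustrative; see
zerver.lib.actions.do_create_realm for the real logic):
CORPORATE = 1
COMMUNITY = 2


def creation_defaults(org_type):
    if org_type == CORPORATE:
        # Corporate realms: restrict signups to the company domain,
        # no invitation required.
        return dict(restricted_to_domain=True, invite_required=False)
    # Community realms: open to any email domain, but invite-only.
    return dict(restricted_to_domain=False, invite_required=True)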
This pulls message-related code from models.py into a new
module called message.py, and it starts to break some bugdown
dependencies. All the methods here are basically related to
serializing Message objects as dictionaries for caches and
events.
extract_message_dict
stringify_message_dict
message_to_dict
message_to_dict_json
MessageDict.to_dict_uncached
MessageDict.to_dict_uncached_helper
MessageDict.build_dict_from_raw_db_row
MessageDict.build_message_dict
This fix also removes a circular dependency related
to get_avatar_url.
Also, there was kind of a latent bug in Message.need_to_render_content
where it was depending on other calls to Message to import bugdown
and set it globally in the namespace. We really need to just
eliminate the function, since it's so small and used by code that
may be doing very sketchy things, but for now I just fix it. (The
bug would possibly be exposed by moving build_message_dict out to the
library.)
This moves the logic for renaming a stream to the REST API
update_stream_backend method, eliminating the legacy API endpoint for
doing so.
It also adds a nice test suite covering international stream names.
This adds support for running a Zulip production server with each
realm on its own unique subdomain, e.g. https://realm_name.example.com.
This patch includes a ton of important features:
* Configuring the Zulip session middleware to issue cookies correctly
for the subdomains case.
* Throwing an error if the user tries to visit an invalid subdomain.
* Running a portion of the Casper tests with REALMS_HAVE_SUBDOMAINS
enabled to test the subdomain signup process.
* Updating our integrations documentation to refer to the current subdomain.
* Enforcing that users can only log in to the subdomain of their realm
(but not restricting the API; that will be tightened in a future commit).
Note that toggling settings.REALMS_HAVE_SUBDOMAINS on a live server is
not supported without manual intervention (the main problem will be
adding "subdomain" values for all the existing realms).
[substantially modified by tabbott as part of merging]
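A simplified sketch of the subdomain check implied by the last bullet,
assuming settings.EXTERNAL_HOST and the new realm.subdomain field (the
real middleware and auth code handle more cases):
from django.conf import settings


def get_subdomain(request):
    # "myrealm.example.com" with EXTERNAL_HOST "example.com" -> "myrealm"
    host = request.get_host().lower()
    suffix = "." + settings.EXTERNAL_HOST.lower()
    if host.endswith(suffix):
        return host[:-len(suffix)]
    return ""


def user_matches_subdomain(request, user_profile):
    if not settings.REALMS_HAVE_SUBDOMAINS:
        return True
    return get_subdomain(request) == user_profile.realm.subdomain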
This sets the “title” attribute on the image to the actual title of
the image specified by the user in their markdown, rather than just
the URL of the full link to it.
This was the original way to send messages via the Zulip API in the
very early days of Zulip, but was replaced by the REST API back in
2013.
Fixes: #730.
We no longer use all the alert words for all the users in the
entire realm when we look for alert words in a newly sent/edited
message. Now we limit the search to only all the alert words
for all the users who will get UserMessage records. This will
hopefully make a big difference for big realms where most messages
are only sent to a small subset of users.
The bugdown parser no longer has a concept of which users need which
alert words, since it can't really do anything actionable with that info
from a rendering standpoint.
Instead, our calling code passes in a set of search words to the parser.
The parser returns the list of words it finds in the message.
Then the model method builds up the list of user ids that should be
flagged as having alert words in the message.
This refactoring is a little more involved than I'd like, but there are
still some circular dependency issues with rendering code, so I need to
pass in the rather complicated realm_alert_words data structure all the way
from the action through the model to the renderer.
This change shouldn't change the overall behavior of the system, except
that it does remove some duplicate regex checks that were occurring when
multiple users may have had the same alert word.
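In outline, the new flow looks something like this (names are
illustrative; the real code threads realm_alert_words through the
action, the model, and bugdown):
def find_alert_words_and_users(content, realm_alert_words):
    # realm_alert_words maps user_id -> list of that user's alert words.
    search_words = set()
    for words in realm_alert_words.values():
        search_words.update(w.lower() for w in words)

    # The renderer only reports which of the search words appear; here
    # we stand in for bugdown with a naive substring scan.
    content_lower = content.lower()
    words_in_message = {w for w in search_words if w in content_lower}

    # The model layer then maps matched words back to user ids.
    user_ids = {
        user_id
        for user_id, words in realm_alert_words.items()
        if any(w.lower() in words_in_message for w in words)
    }
    return words_in_message, user_ids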
If you supplied an unrecognizable address to our email system,
or you had EMAIL_GATEWAY_PATTERN configured wrong,
the get_missed_message_token_from_address() function used to crash
hard and cryptically with a traceback saying that you can't
call startswith() on a None object.
Now we throw a ZulipEmailForwardError exception. This will
still lead to a traceback, but it should be easier to diagnose
the problem.
In our email mirror, we have a special format for missed
message emails that uses a 32-character randomly generated token
that we put into redis and then prefix with "mm" for
a total of 34 characters.
We had a bug where we would mis-classify emails like
mmcfoo@example.com as being these system-generated emails
that were part of the redis setup.
It's actually a little unclear how the bug in the library
function would have manifested from the user's point of view,
but it was definitely buggy code, and it's possibly related in
a subtle way to an error report we got from a customer where
only one of their users, who happened to have a name like
mmcfoo, was having problems with the mirror.
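The stricter classification amounts to requiring the full token
length, not just the "mm" prefix; a hedged sketch (exact token length
and validation in the real code may differ):
def is_missed_message_address(address):
    # Local parts like "mmcfoo" should NOT be treated as system-generated;
    # only "mm" followed by the full 32-character token qualifies (the
    # real code additionally verifies the token against redis).
    local_part = address.split("@")[0]
    return (local_part.startswith("mm")
            and len(local_part) == 34
            and local_part[2:].isalnum())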
It appears that the assertRaisesRegexp approach we had before didn't
work properly on some systems, likely due to a bad interaction with
i18n (we haven't definitively determined the cause).
We now raise an exception in bugdown.do_convert() if rendering
fails, to avoid silent failures, and then the calling code can convert
the exception to a JsonableError.
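Schematically, the pattern is roughly as below (do_convert and
JsonableError come from the description above; the exception class
name and the markdown_pipeline placeholder are illustrative):
class BugdownRenderingException(Exception):
    pass


def do_convert(content):
    try:
        return markdown_pipeline(content)  # placeholder for the real renderer
    except Exception:
        # Fail loudly rather than silently returning nothing.
        raise BugdownRenderingException("Rendering failed")


def render_content_for_request(content):
    try:
        return do_convert(content)
    except BugdownRenderingException:
        raise JsonableError("Unable to render message")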
We now have 100% coverage on views/push_notifications.py, modulo
some dead code which will be addressed in the next commit.
There were some existing tests in test_external.py, but that
module is really intended for tests that hit external services.
The view is a really simple API that updates a DB table, and the
new test code focuses on error handling and idempotency as well
as the happy path.
Apparently, in urllib.parse, one needs to extract the query string from
the rest of the URL before parsing the query string; otherwise the
very first query parameter will have the rest of the URL in its name.
This results in a nondeterministic failure that happens 1/N of the
time, where N is the number of fields marshalled from a dictionary
into the query string.
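Concretely, with Python 3's urllib.parse (the URL below is just an
example):
from urllib.parse import parse_qs, urlsplit

url = "https://example.com/endpoint?alpha=1&beta=2"

# Parsing the full URL glues the path onto the first parameter's name:
parse_qs(url)
# {'https://example.com/endpoint?alpha': ['1'], 'beta': ['2']}

# Split off the query string first, then parse it:
parse_qs(urlsplit(url).query)
# {'alpha': ['1'], 'beta': ['2']}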
This commit extracts compose_views() from update_subscriptions_backend(),
and it implements the correct behavior for forcing transactions to roll
back, which is to raise an exception.
There were really three steps in this commit:
- Extract buggy code to compose_views().
- Add tests on compose_views().
- Fix bugs exposed by the new tests by converting errors to exceptions.
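A sketch of the compose_views() shape (JsonableError is Zulip's
standard API exception; other names and signatures here are
illustrative, not the exact Zulip code):
import ujson
from django.db import transaction


def compose_views(thunks):
    # Run several view-like callables inside one transaction and merge
    # their JSON results; raising is what forces the rollback.
    json_dict = {}
    with transaction.atomic():
        for thunk in thunks:
            response = thunk()
            if response.status_code != 200:
                raise JsonableError(response.content)  # rolls everything back
            json_dict.update(ujson.loads(response.content))
    return json_dict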
This adds support for using PGroonga to back the Zulip full-text
search feature. The built-in PostgreSQL full-text search doesn't
support languages that don't put spaces between terms, such as
Japanese and Chinese; PGroonga supports all languages, including
those.
Developers will need to re-provision when rebasing past this patch for
the tests to pass, since provision is what installs the PGroonga
package and extension.
PGroonga is enabled by default in development but not in production;
the hope is that after the PGroonga support is tested further, we can
enable it by default.
Fixes #615.
[docs and tests tweaked by tabbott]
The previous default configuration resulted in delivery problems if
the Zulip server was not authorized in the SPF records for the domains
of all users on the Zulip server.
This sends an event when a new avatar is uploaded that refreshes the
avatar for all browser clients without the need to reload the browser.
Fixes: #1359.
Previously, missed message emails with multiple senders would
incorrectly have a "," outside the quoted sender name part of the from
address string, resulting in confusing email output.
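For illustration, the comma-containing display name needs to be quoted
as a single unit in the From header; email.utils.formataddr shows the
correctly quoted form (names and address below are made up, and whether
the actual fix uses formataddr is not stated here):
from email.utils import formataddr

senders = "Iago, Prospero from The Tempest"
formataddr((senders, "noreply@zulip.example.com"))
# '"Iago, Prospero from The Tempest" <noreply@zulip.example.com>'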
Adds a new default language field to the zerver_realm model.
This realm-level default language will be used as the default
language for newly created users. The realm-level default language
can be changed from the administration page.
Fixes #1372.
Often, users will copy email addresses with a name (rather than pure
email addresses) into the Zulip "invite users" UI. Previously, that
would throw an error.
This change also adds a get_invitee_emails_set function for parsing
the emails input, and a test suite for this new feature.
Fixes: #1419.
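A hedged sketch of what get_invitee_emails_set might do: split on
commas and newlines, accept "Name <email>" forms, and return the bare
addresses (details in the real implementation may differ):
import re


def get_invitee_emails_set(invitee_emails_raw):
    invitee_emails = set()
    for entry in re.split(r"[,\n]+", invitee_emails_raw):
        match = re.search(r"<(?P<email>[^>]+)>", entry)
        email = match.group("email") if match else entry
        invitee_emails.add(email.strip())
    invitee_emails.discard("")
    return invitee_emails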
Define Integration and WebhookIntegration classes.
Change the webhook part of the integrations guide.
Replace hardcoded webhook URLs with URLs generated from the
WEBHOOKS list.
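An illustrative sketch of the classes and the URL generation
(attribute names here are assumptions, not the exact Zulip API):
class Integration(object):
    def __init__(self, name):
        self.name = name


class WebhookIntegration(Integration):
    DEFAULT_URL = r'^api/v1/external/{name}$'

    def __init__(self, name, url=None):
        super(WebhookIntegration, self).__init__(name)
        self.url = url or self.DEFAULT_URL.format(name=name)


WEBHOOKS = [
    WebhookIntegration('github'),
    WebhookIntegration('gitlab'),
]

# URL patterns can now be generated from WEBHOOKS instead of hardcoded:
urls = [webhook.url for webhook in WEBHOOKS]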
This exists primarily in order to allow us to mock settings.DEBUG for
the purposes of rate limiting, without actually mocking
settings.DEBUG, which I suspect Django never intended one to do, and
thus caused some very strange test failures (see
https://github.com/zulip/zulip/pull/776 for details).
This was in AdminCreateUserTest.test_create_user_backend().
For end-to-end tests we are logged in, and we need to verify
that our decorators add UserProfile to the parameters of
the view on our behalf, so that we don't get false positives.
In an upcoming commit, we will want to be able to serialize
the parameters for client_put to produce url coverage reports,
so that is another reason not to pass in the UserProfile
object. (That was how this was discovered.)
This makes us more consistent, since we have other wrappers
like client_patch, client_put, and client_delete.
Wrapping also will facilitate instrumentation of our posting code.
This new helper combines two old helpers, one of which was misnamed
and the other of which was always called after the first, so it
made sense to just combine the helpers.
Fixes: #1386.
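The wrapper itself is tiny; something along these lines, as a method on
the shared test-case base class (class name here is illustrative, and
the real helper also hooks into the planned instrumentation):
from django.test import TestCase


class ZulipTestCase(TestCase):
    def client_post(self, url, info={}, **kwargs):
        # Single choke point for POSTs in tests, so we can add
        # instrumentation (e.g. URL coverage) later.
        return self.client.post(url, info, **kwargs)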
This commit adds these two tests:
test_use_first_unread_anchor_with_some_unread_messages
test_use_first_unread_anchor_with_no_unread_messages
The new tests add coverage to the conditional logic in
get_old_messages_backend() that looks at first_unread_result
when use_first_unread_anchor is set to True.
The test is now called test_use_first_unread_anchor_with_muted_topics().
Before this commit, the test exercised setting
use_first_unread_anchor to True, but it didn't inspect the
most relevant query affected by the flag. Now it does.
This test is still kind of hard to read, and it's far from ideal,
but I'm reluctant to remove it from the test suite.
This increases test coverage by exercising highlight_string().
It also gives deeper test coverage to NarrowBuilder.by_search(),
which had test coverage before, but only in terms of inspecting
the SQL that was generated. This test actually runs the SQL
under the hood.
This partly fixes #1006.
This allows the frontend to fetch data on the subscribers list (etc.)
for streams where the user has never been subscribed, making it
possible to implement UI showing details like subscribe counts on the
subscriptions page.
This is likely a performance regression for very large teams with
large numbers of streams; we'll want to do some testing to determine
the impact (and thus whether we should make this feature only fully
enabled for larger realms).
There were a bunch of authorization and well-formedness checks in
zerver.lib.actions.do_update_message that I moved to
zerver.views.messages.update_message_backend.
Reason: by convention, functions in actions.py complete their actions;
error checking should be done outside the file when possible.
Fixes: #1150.
This is controlled through the admin tab and a new field in the Realms table.
Notes:
* The admin tab setting takes a value in minutes, whereas the backend stores it
in seconds.
* This setting is unused when allow_message_editing is false.
* There is some generosity in how the limit is enforced. For instance, if the
user sees the hovering edit button, we ensure they have at least 5 seconds to
click it, and if the user gets to the message edit form, we ensure they have
at least 10 seconds to make the edit, by relaxing the limit.
* This commit also includes a countdown timer in the message edit form.
Resolves #903.
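A sketch of the backend check implied by the bullets above (field and
constant names are illustrative; the stored limit is in seconds, with a
small grace period so a user already in the edit form isn't cut off):
from datetime import timedelta

from django.utils import timezone

EDIT_FORM_GRACE_SECONDS = 10  # extra time once the edit form is open


def can_edit_message_content(realm, message):
    if not realm.allow_message_editing:
        return False
    limit_seconds = realm.message_content_edit_limit_seconds
    if limit_seconds == 0:
        return True  # zero means "no time limit"
    deadline = message.pub_date + timedelta(
        seconds=limit_seconds + EDIT_FORM_GRACE_SECONDS)
    return timezone.now() <= deadline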
Taiga's webhook integration would give output events in a random
order, which caused test failures on python 3 (it seems python 3 is
more prone to non-deterministic failures). Fix that by sorting the
outputs obtained from the events before concatenating them.
Use ujson.dumps to render raw messages sent by the PagerDuty
integration instead of using pprint.pformat. pprint.pformat
gives different results on python 2 and 3.
Correctly encode and decode strings in convert_html_to_markdown.
It wasn't possible to use universal_newlines=True since
Popen.communicate doesn't encode/decode strings correctly on
python 2.