This completes a major redesign of the Zulip login and registration
pages, making them look much more slick and modern.
Major features include:
* Display of the realm name, description, and icon on the login and
registration pages in the subdomains case.
* Much slicker looking buttons and input fields.
* A new overall style for the exterior of these portico pages.
This new feature makes it possible to request a different set of
initial data than the event_types an API client is subscribing to.
It is primarily useful for mobile apps, where bandwidth constraints
might mean one wants to subscribe to events for a broader set of data
than is initially fetched, and then fetch the current state in future
requests.
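A hedged example of the intended usage (the exact parameter names, such as
`fetch_event_types`, and the credentials here are assumptions for
illustration, not taken from this commit):

    import json
    import requests

    resp = requests.post(
        "https://zulip.example.com/api/v1/register",
        auth=("mobile-bot@example.com", "API_KEY"),  # hypothetical credentials
        data={
            # Subscribe to events for a broad set of data...
            "event_types": json.dumps(["message", "subscription", "realm_user", "presence"]),
            # ...but only fetch initial state for a narrower set right now.
            "fetch_event_types": json.dumps(["message"]),
        },
    )
    print(resp.json()["queue_id"])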
For our Git integrations, we now only display the number of commits
pushed when the pusher also happens to be the only author of the
commits being pushed.
Part of #3968.
Follow-up to #4006.
For Git push messages, we now have a single space character between
the name of a commit's author and the number of commits by that
author, plus a period at the end.
Part of #3968.
Follow-up to #4006.
We now use the name of a commit's author, rather than the committer,
when formatting Git messages with multiple committers.
Part of #3968.
Follow-up to #4006.
- Add aggregated info to real-time updated presence status.
- Update the `presence events` test case to include the aggregated
information added to presence events.
- Add a test case for updating the presence status of a user who
sends state from multiple clients.
Fixes #4282.
We need to reset the INSTRUMENTED_CALLS variable before running every
subsuite. If we do not do this, then the subsuite running on a
particular process (call it A) will send the accumulated instrumented
calls gathered by the previous subsuites which also ran on process A.
This also fixes the extra delay that we used to experience after the
tests had finished running. The extra delay was due to the duplicate
instrumented calls in the INSTRUMENTED_CALLS list. The size of this
list used to be ~100k in parallel mode, as opposed to ~1800 in serial
mode.
This commit creates a dedicated file upload directory for every process
when we are running tests in parallel mode. This makes sure that we do
not run into any race conditions due to multiple processes accessing
the same upload directory.
This fixes a confusing issue where a user might try resetting the
password for an email account that is part of a different Zulip
organization.
This is a useful early step towards making Zulip support reusing an
email address in multiple realms.
Fixes: #4557.
This fixes a performance problem where we were previously starting up
a full Django process (~0.7s even on a fast machine) every time a new
email came in, potentially allowing users to accidentally DoS a Zulip
server. Now, we just post over HTTPS, allowing the existing thread
pool support to do its job.
- Add a script wrapper to connect the postfix pipe to the Django web
server over HTTP(S). It uses the shared_secret authentication mode.
- Add a Django view to process messages from the email mirror server.
- Clean up the `email-mirror` management command, leaving just the
functionality needed for cron email processing.
- Add routes for the new tornado view.
- Change the pipe script in the master process postfix config template
to use the updated script.
- Add tests.
Tweaked by tabbott to adjust the directory and set better defaults.
Fixes #2421.
Rename the 'zulip_internal' decorator to 'require_server_admin', and add
documentation for 'server_admin', explaining how to give permission
for the ./activity page.
Fixes: #1463.
This is basically just using the new check_dict_only everywhere, with
a few exceptions:
* New self.check_events_dict automatically adds the id field to avoid
duplicating it ~80 times.
* Set log=False for many of the testing action functions to remove the
timestamp field from their returned event dictionaries, since it's
not needed and is the result of a deprecated log_event function.
I wasn't sure whether the subscription_field list in do_test_subscribe_events
could contain optional arguments, so I left that call to check_dict in
place along with a TODO.
Fixes: #1370.
In this commit we add a logout wrapper so that developers can simply
call self.logout() instead of making a POST request to the API logout
endpoint. This is achieved by adding a wrapper function for Django's
client.logout from TestCase; we add this by extending ZulipTestCase to
have a logout function.
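A minimal sketch of the helper described above, assuming ZulipTestCase
wraps Django's test client (the real class has many more helpers):

    from django.test import TestCase

    class ZulipTestCase(TestCase):
        def logout(self):
            # Delegate to Django's test client instead of POSTing to the
            # API logout endpoint in every test.
            self.client.logout()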
We now show a few new things:
(1) The number of commits pushed.
(2) Who authored the commits (just counts, not which specific ones, for brevity).
Add tests for case of multiple committers.
Part of #3968.
The exclude variable was superfluous. The realm properties
listed in the exclude variable are not in the
realm.property_types dict, so they do not need to
be explicitly excluded.
This refactors the function fetch_initial_state_data to use the
realm.property_types attribute, avoiding some unnecessary code
duplication.
Tweaked slightly by tabbott to simplify the code a bit.
Addresses part of issue #3854.
This is an incomplete cleaned-up continuation of Lisa Neigut's push
notification bouncer work. It supports registration and
deregistration of individual push tokens with a central push
notification bouncer server.
It still is missing a few things before we can complete this effort:
* A registration form for server admins to configure their server for
this service, with tests.
* Code (and tests) for actually bouncing the notifications.
In cases where old unread messages in the home view might have been
leaked (either due to bugs or unusual muting interactions), it's
theoretically possible for the first unread message in the home view
to be far older than the pointer.
Since the Zulip mobile app is loading messages following the
use_first_unread logic, we need to plug this gap.
Probably a longer-term solution will involve changing how
update_message_flags works to automatically advance the pointer, but
this change should make it possible for the mobile apps to
consistently use the `use_first_unread` mechanism for fetching the
latest home view messages.
With tweaks to the tests by tabbott.
Fixes zulip/zulip-mobile#422.
The previous logic was that anyone with a link to a file could send it
to other users, but only the owner could make a file realm-public.
This had some confusing corner cases.
The new logic is much simpler:
* Only the file's owner/uploader can include a file in a message for
the first time.
* Anyone with access to read a file can share it with others by
including it in messages they send.
* Once a file has been sent to a public stream, any user in the realm
can access it.
In this commit we fix the occasionally failing test
test_home.HomeTest.test_bad_narrow, which was the result of
us patching global settings in test_upload to add some new emails
to CROSS_REALM_BOT_EMAILS and not rolling that change back.
Textsearch-based full text search doesn't match text inside link tags,
but PGroonga-based full text search can. Without this change,
highlighting text inside a link tag generates broken HTML.
Moved error handling to the beginning of the update_realm
function. Removed several if statements and replaced them with
a block of code that loops through realm properties and updates
them if an update has been sent through the request. Also
created an 'exclude' list for realm properties that do not fit
into the general pattern that most other realm properties
follow for updating. Those properties are handled separately.
Some comments added by tabbott.
Addresses part of issue #3854.
This commit makes sure that GitHubAuthBackend will only authenticate
using its own authenticate method. This is done by adding a new
Python Social Auth strategy which, instead of calling Django's authenticate
method, calls the backend's authenticate method directly.
The problem this commit solves is that while authenticating through
GitHub backend, we were ending up getting authenticated through
ZulipDummyBackend. This might happen because the default strategy used
by Python Social Auth calls the authenticate method of Django which
iterates over all the backends and tries the authenticate methods
which match with the function arguments. The new strategy this commit
adds calls the authenticate method of the GitHub backend directly, which
makes sense because we already know that we want to authenticate with
GitHub.
The actual problem of why we are ending up on ZulipDummyBackend is
still a mystery because the function arguments passed to its
authenticate method are different. It shouldn't be called.
Earlier, a stack was being used to go through the message and search
for links. Because of this, in some cases the images were added to
the preview in reverse. Using a queue will keep the image previews in
the same order as they appeared in the message.
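An illustrative sketch (not the actual message-processing code) of why a
FIFO queue preserves the order in which links are found, while a stack
reverses it:

    from collections import deque

    links = ["first.png", "second.png", "third.png"]  # order found in the message

    stack = list(links)
    print([stack.pop() for _ in links])      # ['third.png', 'second.png', 'first.png']

    queue = deque(links)
    print([queue.popleft() for _ in links])  # ['first.png', 'second.png', 'third.png']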
Fixes #4453.
Some Handlebars strings contained whitespace characters at their ends.
With this change, such characters are removed, as well as multiple spaces
(like the ones produced by code indentation).
This also includes a couple of fixes that remove spaces that were
intentionally placed before/after the string to translate.
Useful for the upcoming check_realmauditlog_by_user_query, if nothing else.
But I suspect it will indeed get use; looking for events around or within a
certain time is pretty natural for an audit log.
The main argument against, I would say, is that this should actually be a
joint index with something else. I'm not sure what that something else
should be, so I'm just optimizing for what I think
check_realmauditlog_by_user_query will need for now.
This fixes an issue with a nondeterministic number of database queries
being used in fetching bulk messages from the database. The source of
the problem was that we were fetching _all_ messages, not just the 600
that had been created by the test, and thus if the set of streams
present in messages in the test fixtures (which is random) changes,
the number of streams used (and thus number of queries) would change.
Apparently, Django's _destroy_test_db has a mostly unnecessary
sleep(1) before dropping the database, which obviously wastes a bunch
of time in the single-test runtime of their database teardown logic.
We work around this by monkey-patching that function to not do the sleep.
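A hedged sketch of the monkey-patch, assuming Django 1.10's implementation
of _destroy_test_db (the actual patch lives in our test runner):

    from django.db.backends.base.creation import BaseDatabaseCreation

    def destroy_test_db_without_sleep(self, test_database_name, verbosity):
        # Same as Django's _destroy_test_db, minus the time.sleep(1) it
        # performs before dropping the database.
        with self.connection._nodb_connection.cursor() as cursor:
            cursor.execute("DROP DATABASE %s"
                           % self.connection.ops.quote_name(test_database_name))

    BaseDatabaseCreation._destroy_test_db = destroy_test_db_without_sleep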
Instead of zulip_test, use zulip_test_template for the backend DB. This
makes sure that the DB used by backend tests is different from the one
used by Casper tests, which will be zulip_test.
Show a user-friendly message to the user if the email is invalid.
Currently we show a generic message:
"Your username or password is incorrect."
The only backend which can accept a non-email username is LDAP,
so we check whether it is enabled before showing the custom message.
The pages in question are already cached automatically by Zulip, and
the lru_cache decorator doesn't work, since `context` might contain
unhashable objects.
This removes individual tests for realm properties and replaces them
with a generic do_set_realm_property_test function to test each
property in the Realm.property_types attribute.
Addresses part of #3854.
This replaces individual tests for realm properties with a generic
do_test_realm_update_api function to test each property in the
Realm.property_types attribute.
Addresses part of #3854.
This adds the option '--rerun' to the `test-backend` infrastructure.
It reruns the tests that failed during the last 'test-backend' run. It
works by storing failed test info at var/last_test_failure.json.
Cleaned up by Umair Khan and Tim Abbott.
This better sets expectations for the fact that in Zulip, the
Organization settings UI is available read-only to non-administrator
users.
Tweaked by tabbott to update some additional references.
This is a remerge of e985b57259 (after
resolving merge conflicts, updating the tests, adding mypy annotations
etc.), which should now be correct, because we've done the necessary
database migration.
The rebase/remerge work was done by Tim Abbott and Aditya Bansal.
This is an important part of #320.
By default, Python markdown's tab length for indents is 4 spaces, which
requires using 4 spaces or a tab to create nested elements. This
modifies that setting to specify 2-space indentation for nesting
elements only.
Modified significantly by tabbott to limit the change to just list
indentation.
Fixes #4252.
Users editing messages or updating message flags are either already
recorded or not interesting from an audit perspective, and so there's
no need to use log_event with them.
Django uses the arguments to differentiate between the authenticate
functions of different backends, so it is important to pass arguments in
a predictable manner. Keyword args also test the name of the argument.
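A hedged illustration of the point above: Django's authenticate() tries
each configured backend and skips any backend whose authenticate()
signature does not accept the credentials passed, so keyword argument
names must match:

    from django.contrib.auth import authenticate

    # Matched by backends defined as
    #   def authenticate(self, username=None, password=None): ...
    user = authenticate(username="hamlet@zulip.com", password="secret")

    # A backend defined with an `email` parameter instead would be skipped
    # for the call above, but matched by:
    user = authenticate(email="hamlet@zulip.com", password="secret")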
The web app doesn't need any presence data for its first ping to
the server, because it already has up-to-date presence info in
page_params. So now we can tell the server not to send us a big
payload that we were already ignoring.
Most of this code was simply moved from activity.js with some
minor renaming of functions like set_presence_info -> set_info.
Some functions were slightly nontrivial extractions:
is_not_offline:
came from activity.huddle_fraction_present
get_status/get_mobile:
simple getters
set_user_status:
partial extraction from activity.set_user_status
last_active_date:
pulled out of admin.js code
We also fixed activity.filter_and_sort to take user_ids.
Due to a PGroonga regression, there is a difference in search
results between Travis and the development environment, which causes one
of our tests to fail. This commit makes sure that the test passes
for both strings until the PGroonga bug is resolved.
This commit adds the backend support for a new style of tutorial which
allows for highlighting of multiple areas of the page with hotspots that
disappear when clicked by the user.
This fixes an exception we had in the user_activity queue processor
when changing email addresses, since the URL containing the
confirmation key was longer than 50 characters.
Modify `bot_owner_user_ids()` to return the user_ids of only
admins and bot owners instead of all the current active users.
This was causing a traceback on the frontend.
Fixes: #3391.
- Add a message retention period field to the organization settings form.
- Add CSS for the retention period field.
- Add a converter to a non-negative int or to None.
- Add retention period setting processing to the back end.
- Fix tests.
Modified by tabbott to hide the setting, since it doesn't work yet.
The goal of merging this setting code now is to avoid unnecessary
merge conflicts in the future.
Part of #106.
This adds helpful email notifications for users who just logged into a
Zulip server, as a security protection against accounts being hacked.
Text tweaked by tabbott.
Fixes #2182.
This fixes a leak of this setting change that resulted from the
unusual way that our Tornado system sets this variable early in the
management command.
Fixes #3685.
Adds a new webhook integration for Slack to receive messages
from one's Slack team's public channels.
Contains negative tests for broken, missing or invalid data.
Allows two different options for integration:
1. Receive notification on a single stream with different topics
for each of Slack's public channels.
2. Receive notification on different streams for each of Slack's
public channels.
The steps to choose between the two options are described in the documentation.
Fixes #3569.
This fixes 2 issues:
* Being added to an invite_only stream did not correctly update the
"streams" key of the initial state.
* Once that's resolved, subscribe_to_stream when called on a
nonexistent stream would both send a "create" event (from
create_stream_if_needed) and an "occupy" event (from
bulk_add_subscriptions).
The second event should just be suppressed in that case, and this
implements that suppression.
We previously didn't apply the default language event change
correctly.
Not super important as a bug, since we require the user to reload the
browser for their changes to take effect, but this will save time if
we ever change that.
This makes it possible for us to do some convenient validation for
developers, checking whether the correct types are passed for each
realm property.
zerver/lib/actions: removed do_set_realm_* functions and added
do_set_realm_property, which takes in a realm object and the name and
value of an attribute to update on that realm.
zerver/tests/test_events.py: refactored realm tests with
do_set_realm_property.
Kept the do_set_realm_authentication_methods and
do_set_realm_message_editing functions because their function
signatures are different.
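A minimal sketch of the shape of do_set_realm_property described above
(assumed details: the real function also sends a realm update event to
clients after saving):

    from typing import Any
    from zerver.models import Realm

    def do_set_realm_property(realm, name, value):
        # type: (Realm, str, Any) -> None
        # Validate against the type declared in Realm.property_types.
        property_type = Realm.property_types[name]
        assert isinstance(value, property_type)
        setattr(realm, name, value)
        realm.save(update_fields=[name])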
Addresses part of issue #3854.
This makes get_stream match get_realm, get_user_profile_by_email,
etc., in interface, and is more convenient for mypy annotations
because `get_stream` now doesn't return an Optional[Stream].
We use the same strategy Zulip already uses for starred messages,
namely, creating a new UserMessage row with the "historical" flag set
(which basically means Zulip can ignore this row for most purposes
that use UserMessage rows). The historical flag is ignored, however,
in determining which users' browsers to notify about new reactions,
and thus the user will get to see the reaction appear when they click
a message (and any reactions other users later add, as well!).
There's still something of a race here, in that if some users react to
a message while the user is looking at the unsubscribed stream but
before the user reacts to that message, those reactions will not be
displayed to that user (so counts will be a bit lower, or something).
This race feels small enough to ignore for now.
Fixes #3345.
If `render()` is called from middleware that runs before the
authentication middleware, then this code path will be called with a
request object where request.user is not yet set. Handle this by
providing a reasonable error message.
In aa880b0419, we used the raw
do_set_realm_description method rather than calling the API, which
meant that the API success path wasn't actually tested.
This adds an organization description field to the Realm model, as well as
an input field to the organization settings template. Added three tests.
Set the max length of the field to 100 characters.
Fixes #3962.
An empty narrow (i.e., the home view) can be represented in code as either
`None` or `[]`, but we had incorrect handling that failed to
properly deal with either case.
(1) In `get_stream_name_from_narrow`, we failed to deal with `None` by
trying to always iterate over `narrow`.
(2) In several other places, we failed to deal with `[]` by explicitly
checking `if narrow is None` or `if narrow is not None`. Changing these
to truthiness checks should work for both the `None` and `[]` cases.
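A small illustration of the truthiness-check fix (illustrative values,
not the actual narrow-handling code):

    def is_home_view(narrow):
        # A truthiness check treats both None and [] as "no narrow",
        # whereas `narrow is None` misses the empty-list case.
        return not narrow

    for narrow in (None, [], [["stream", "Denmark"]]):
        print(narrow, "->", "home view" if is_home_view(narrow) else "narrowed")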
A previous commit changed a `get` (which can throw `DoesNotExist`) to use an
existing object, but kept the `try` / `except` block:
4bf3ace444
Removing this unused code path allows us to achieve 100% test coverage.
Changing assert_in_success_response to require List[Text] instead of
Iterable[Text] prevents the following misuse:
self.assert_in_success_response("message", response)
Currently, this will check whether 'm', 'e', 's', 'a', and 'g' separately
appear in the response, which is probably not the intended behavior. The
correct usage is as follows:
self.assert_in_success_response(["message"], response)
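A quick illustration of why Iterable[Text] permitted the misuse: a str is
itself an iterable of one-character strings, so the old annotation
type-checked both calls (the function below is illustrative, not the real
helper):

    def substrings_in_response(substrings, response_text):
        return [s for s in substrings if s in response_text]

    print(substrings_in_response("message", "body"))    # iterates 'm', 'e', 's', ...
    print(substrings_in_response(["message"], "body"))  # checks the whole word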
This of course only works in the 2 minute window where missed-message
emails are planned, but nonetheless likely avoids common cases of
emailing users with deleted messages.
Fixes: #3873.
This fixes an issue where if you saved a Python file (even just
changing whitespace) while casper tests were running, the Tornado
server being used would restart, triggering a confusing error like
this:
ReferenceError: Can't find variable: $
Traceback:
undefined:2
:4
Suite explicitly interrupted without any message given.
Django 1.10 has changed the implementation of this function to
match our custom implementation; in addition to this, we prefer
render().
Fixes #1914 via #4093.
- Add push, create and pull request events.
- Handle 'opened', 'closed' and 'merged' actions in the 'pull request' event.
- Include tests for all of the above events, including 'push' with more
commits than the limit.
Missed-message email replies using the reply-to of
noreply@zulipchat.com shouldn't advertise that "just replying" will
work.
Rebased and commit message rewritten by tabbott.
Fixes #3965.
On reloading the page after disabling email changes, the "Prevent users
from changing their email address" checkbox was not checked.
Adding realm_email_changes_disabled to page_params_core_fields fixes the problem.
validate_user_access_to_subscribers_helper never uses
stream_dict['realm__domain']. I imagine it was there originally to do the
is_zephyr_mirror_realm check.
Previously we used the topic "Realm.domain" for new user signups, but topic
"Realm.string_id" for the realm creation. This changes the user signup
messages to be on the same topic thread as the realm creation.
This fixes 3 related issues:
* We incorrectly would report authentication methods that are
supported by a server (but have been disabled for a given
realm/subdomain) as supported.
* We did not return an error with an invalid subdomain on a valid
Zulip server.
* We did not return an error when requesting auth backends for the
homepage if SUBDOMAINS_HOMEPAGE is set.
Comes with complete tests.
Our linter for translation strings shouldn't check test files, since
then we'll end up translating non-user-facing strings.
So we fix that, and actually add the opposite lint rule.
All current calls to do_activate_user just use the default value of
timezone.now(). Having a date_joined other than timezone.now() raises an
interesting RealmAuditLog question (namely, which time should be used),
which we don't have to answer if we remove the argument.
The change applies to both the subdomains and non-subdomains cases, though we use
just the EXTERNAL_HOST in the non-subdomains case if there is only 1 realm.
Fixes #3903.
This makes the outcome if a user didn't have an avatar due to a past
email change reasonable; the user will just be bumped back to
gravatar, fixing their invalid state.
This commit introduces a migration for moving avatars from email based
to user id based storage.
This is in response to the change in behaviour of user_avatar_path to
return a path comprising the realm id and a hash based on the user id. We
also fix test_helpers accordingly.
Fixes #3776.
- Add settings parameter for max realm icon size.
- Add settings parameter for max user avatar size.
- Add file size checks to the avatar and icon
upload views.
- Transfer file size limit parameter to frontend.
- Add tests.
Add a webhook to create messages from Splunk search alerts. The search
alert JSON includes the first search result and a link to view the full
results. The following fields are used:
* search_name - the name of the saved search
* results_link - URL of the full search results
* host - the host the search result came from
* source - the source file on that host
* _raw - the raw text of the logged event.
The Zulip message contains:
* search name
* host
* source
* raw
The destination stream and message topic are configurable: the default
stream is "splunk" and the default topic is "Splunk Alert". If the topic is
not provided in the URL, the search name is used instead (truncated if too
long). If a needed field is missing, a default value is used instead,
for example: "Missing source".
It is possible to configure a Splunk search to not include some values,
so I've provided defaults rather than return an error for missing data.
In practice, these fields are unlikely to be deliberately suppressed.
Note: alerts are only available for Splunk servers using a valid trial,
developer, or paid license.
I've added tests for the normal case of one search result, the topic from
the search name, and for a search missing one of the fields used. Tested
using Splunk Enterprise 6.5.1.
Fixes #3477.
- Add an `OFFLINE_THRESHOLD_SECS` settings parameter
to control the offline period.
- Set the aggregated status to offline if the user's status
hasn't changed for the `OFFLINE_THRESHOLD_SECS` period.
- Add a test for offline aggregated status.
- Add the aggregated status to the user presence status dict.
- Add tests for aggregated presence status.
- Fix removing unused keys from the status dict
with aggregated data for a user.
Fixes #3692.
- Add a new 'missedmessage_email_senders' queue for sending missed-message emails.
- Add a new worker to process the 'missedmessage_email_senders' queue.
- Split aggregating missed messages and sending missed-message emails
into separate queue workers.
- Adapt the tests for sending missed-message emails to the new logic.
Fixes #2607.
This system was quite complicated, and never had great semantics.
Eventually, we'll want some other system for gating which server
should generate digest emails for which realm controlled via the
database.
This feature hardcoded zulip.com, and never really made much sense
("feedback" should generally go to the local server administrator, not
to the Zulip development community).
This completes the process of simplifying the interface of the
send_*_push_notification functions, so that they can effectively
support a push notification forwarding workflow.
This refactoring is preparation for being able to forward push
notifications to users on behalf of another Zulip server.
The goal is to remove access to the current server's database from the
send_*_push_notification code paths.
This code was added as part of the Django 1.10 migration to make our
tests work with both Django 1.8 and 1.10. Now that we're on 1.10,
it's no longer required.
In this commit we replace user_avatar_hash with user_avatar_path, which
now returns paths to avatars based on the email hash.
Tweaked by tabbott to avoid an import loop.
"Local" datetimes are local to the server (or rather, are using
settings.TIME_ZONE), which in most cases is not what the recipient of the
message is expecting.
In this commit we change the upload_avatar_image function to accept
two user_profiles: acting_user_profile and target_user_profile. Basically,
the email param is dropped in favor of a target_user_profile so that avatars
can be moved later on to user-id-based storage.
This currently only supports this in emoji reactions, not in actual
emoji in message bodies, but it's a great start for people who want a
text-only view.
Tweaked to update the text by tabbott.
Fixes #3169.
Standardizing the Zulip codebase to use UTC everywhere. Note that unlike
many recent commits in this line, this change does result in a change in
behavior.
datetime.utcnow() is a timezone-naive datetime. The Django ORM interprets it
in the settings.TIME_ZONE timezone (e.g. 'America/New_York' in the
development server). We perhaps haven't noticed errors yet since with
'America/New_York' all it means is that emails are sent 5 hours early, or a
slightly different set of messages are included in the digest.
When you pass a naive datetime to the Django ORM, it uses settings.TIME_ZONE
for the time zone. In the development environment, both settings.TIME_ZONE
and datetime.now() use 'America/New_York', so there is no change in behavior
there. (fromtimestamp with no tz argument uses the same timezone as
datetime.now)
We are soon going to change settings.TIME_ZONE to UTC, so need to remove
naive datetimes from queries to the ORM.
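A brief illustration of the distinction (assuming a Django settings
context with USE_TZ enabled):

    from datetime import datetime
    from django.utils import timezone

    naive = datetime.utcnow()  # tzinfo is None; the ORM interprets it in settings.TIME_ZONE
    aware = timezone.now()     # timezone-aware (UTC), unambiguous regardless of TIME_ZONE
    print(naive.tzinfo, aware.tzinfo)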
As often happens when adding tests for rare-case code, it turns out that the logic
for handling null characters in our Zephyr postgres query escaping
never worked, in multiple ways. First, it always changed the second
character in s, not the current one being inspected, and second, the
value it replaced it with was not the correct postgres escape of the
null byte. We fix this and add tests.
This completes the effort to get zerver/views/messages.py to 100%
test coverage.
Fixes #1006.
When you edit a message to contain links, and URL previews are
enabled, previously we'd throw an exception, because the realm ID
wasn't included in the event.
Also adds a test so that we can have effective test coverage on this
codepath, though the actual history is that I found the bug through
writing this test :).
This fixes a weird issue where the following sequence of tests would fail:
test-backend
zerver.tests.test_messages.PersonalMessagesTest.test_personal_to_self
zerver.tests.test_report.TestReport.test_report_error
zerver.tests.test_templates.TemplateTestCase.test_custom_tos_template
It appears that all 3 tests are required for the failure.
While it's not entirely clear what the cause is, a very likely factor
is that settings.DEBUG is special, and so changing it at runtime is
likely to cause weird problems like this.
We fix this by replacing it with settings.DEVELOPMENT, which has the
same value in all environments, but doesn't have this problem of being
a special Django thing.
Fix an administration page JavaScript TypeError that occurs
due to an undefined variable access in static/js/bot_data.js.
Reactivating a bot was not updating the state in `bot_data`.
Sending an event on reactivating a bot fixes this issue.
Fixes: #2840
Change `from django.utils.timezone import now` to
`from django.utils import timezone`.
This is both because now() is ambiguous (could be datetime.datetime.now),
and more importantly to make it easier to write a lint rule against
datetime.datetime.now().
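The preferred call-site style this enables, with the source of now() made
explicit:

    from django.utils import timezone

    # Reads unambiguously as Django's timezone-aware now(), and is easy to
    # lint for, unlike a bare now() import.
    timestamp = timezone.now()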
page_params is kind of a monster object. Ideally, we'd construct it
in a much less haphazard fashion, and make sure that all
the useful data in it is available via the `/register` endpoint for
mobile/API. This change reorganizes page_params to be sorted by data
source, which is an important prerequisite for doing that.
- Add server version to `fetch_initial_state_data`.
- Add server version to register event queue api endpoint.
- Add server version to `get_auth_backends` api endpoint.
- Change source for server version in `home` endpoint.
- Fix tests.
Fixes #3663.
- Add stamp file creation on failed template compilation.
- Add an error response to the `home` route if the stamp file exists. It
appears only in the development environment.
- Add a jinja2 template for the failed handlebars template compilation error.
Fixes #3650.
Modify the `bot_list` to hold all the bots owned by a user
irrespective of whether the bot is active or inactive. Also
include the `is_active` field in `active_bot_dict_fields` to
distinguish between inactive and active bots.
Use `name_to_codepoint.json` file (and the similar structure in
emoji_codes.js) to map emoji names directly to codepoints and change
the rendered emoji image to `unicode/<codepoint.png>` rather than
`<emoji_name>.png`.
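A hedged sketch of the mapping described above, assuming
name_to_codepoint.json has the shape {"smile": "1f604", ...}:

    import json

    with open("name_to_codepoint.json") as f:
        name_to_codepoint = json.load(f)

    def emoji_image_path(emoji_name):
        # Render unicode/<codepoint>.png rather than <emoji_name>.png.
        return "unicode/%s.png" % (name_to_codepoint[emoji_name],)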
Fixes: #3539.
This changes the time render to be done on the client-side and
therefore take advantage of knowing the client’s timezone, along with
being formatted in a more human-parseable way.
This adds to Zulip support for a user changing their own email
address.
It's backed by a huge amount of work by Steve Howell on making email
changes actually work from a UI perspective.
Fixes #734.
* Created a drafts modal to display/restore/delete drafts
* Created a Draft model to support storing draft data in localstorage
* Removed existing restore-draft functionality
* Added casper and node tests for drafts functionality
Fixes #1717.
The comments explain why this change is correct. This change is
useful because it's better to not have dead code paths, both because
it makes our life easier for coverage analysis, and because the else
statement provided the illusion that it could actually happen.
If the analysis in that comment is wrong, we'd rather have a 500 error
so we fix the bug than things silently sorta working.
This arguably regresses the Zephyr experience, in that we no longer
consider 'foo.d.d.d.d.d' to be something that gets narrowed in with
the rest, but that's a pretty rare use case anyway.
In practice, using that many '.d's anyway only happens a few times a
year.
Our client code will now receive avatar_url in
page_params.people_list during page load, so it will be
able to use more current urls for old messages (the client
already had some logic for that and was just missing the
data).
We also add avatar_url to the realm_user/add event.
When we change the avatar, we make sure to always send a
realm_user/update event (even for bots).
We also needed to add avatar_version and
avatar_source to our active users cache.
This makes life a lot easier for people inviting users to a new Zulip
organization, since they can give some form of context now.
Modified by tabbott to clean up CSS, backend code flow, and improve
the formatting of the emails.
Fixes: #1409.
We now make tests that call EventsRegisterTest.do_test()
explicitly specify whether calls to apply_events() would
change the state of initially fetched data. Generally
these tests exist to test that logic (as well as verifying
schemas of events), so if they stop testing that logic, it
is usually a broken test.
Some tests are exempted from the check here, because I think
they don't really change state--such as updating messages or
notifications. You can set state_change_expected to False
for those tests.
For all the tests that deal with flipping boolean flags, I
set their value to False before calling do_test twice now.
For the authentication backends, I mock the settings so that
more backends are "supported" and therefore part of the event
and the fetched state.
Finally, for the bot tests, I make sure to use a bot the user
can access.
The original include_subscribers implementation did not correctly
update the apply_events code path to avoid adding 'subscribers' dicts
to things. This corrects that oversight.
There's a new option, `include_subscribers`, that controls whether the
API sends down subscriber data for the various streams you are
subscribed to.
This has significant performance savings for large realms with naive
clients, and saves a bunch of bandwidth as well.
This fixes a performance regression loading the Zulip homepage.
While it decreases the utility of the display of messages, it's only
so much loss (because the display recipient for PMs was totally broken
anyway).
Fixes #268.
Modified significantly by tabbott to:
* improve code cleanliness / repetition
* add missing translation tags
* move code into message_edit.js
* correspond with the new backend.
* not display the option for messages only topic-edited
This makes it super easy for frontend code using this view code to
produce a nice display of the history.
This also fixes an off-by-one error with the timestamps.
Our lists of rabbitmq queues were likely to end up out of date, since
there was nothing enforcing that the various lists of queues were
correct or the same as each other.
Based on work by Kartik Maji in #1204.
This has a few significant changes from the original version:
* We correctly handle filling in data for topic edits
* Has a complete test suite verifying correctness of the logic
* Currently, it doesn't include a special "start" entry
Things we may want to further change include:
* Adding a special "start" entry.
* Reversing the order of the history data returned for clarity.
This is important for, in the future, being able to display who edited
the topic of a message if that wasn't the person who originally sent
the message.
Our URL routing previously attempted to segment the /users/ endpoint
namespace into /me (affecting yourself) or /username@domain (affecting
other users) by regular expressions, but did so incorrectly, specifically
in the case of email addresses starting with `me`. This prevented various
admin actions like removing a user as an organization administrator.
This is a fairly risky, invasive change that speeds up
stream deactivation by no longer sending subscription/remove
events for individual subscribers to all of the clients who
care about a stream. Instead, we let the client handle the
stream deactivation on a coarser level.
The back end changes here are pretty straightforward.
On the front end we handle stream deactivations by removing the
stream (as needed) from the streams sidebar and/or the stream
settings page. We also remove the stream from the internal data
structures.
There may be some edge cases where live updates don't handle
everything, such as if you are about to compose a message to a
stream that has been deactivated. These should be rare, as admins
generally deactivate streams that have been dormant, and they
should be recoverable either by getting proper error handling when
you try to send to the stream or via reload.
This fix prevents stream deactivation from being basically
unusable for medium to large sites. Instead of calling
bulk_remove_subscriptions one at a time for every individual
member of the realm, we call it once for all the users that
care about the stream. This change makes a huge difference, but
the feature is still a bit clunky, and we should only temporarily
revert to this fix if future, more-invasive fixes have flaws.
Fixes #3631.
We were apparently incorrectly hardcoding the client for the main
logged-in site loading to website, rather than using the existing
logic that could sort out the desktop apps.
This significantly simplifies the logic for our logging process, making
it the case that websockets message sending requests are always logged
as having the exact same client as a normal AJAX request from that
server.
This commit changes test_patch_bot_avatar to upload avatars to a
different directory so that there is no race condition when tests are
run in parallel mode.
In some cases here we simplify things by calling avatar_url()
instead of get_avatar_url(), when we have a user_profile record
handy. For other cases we pass in an extra avatar_version
parameter to get_avatar_url(), including from avatar_url().
We have a field called user_profile.avatar_version that will
track avatar versions and be used tactically in avatar urls
to get browsers to refresh their caches (in future commits).
This commit bumps the avatar version when we update avatars.
We do this in do_change_avatar_fields(), which was
do_change_avatar_source() before this change.
Adarsh did the initial work here, and Steve Howell (showell) also
made changes.
In Zulip, we mark messages that you send to yourself as read if and
only if they were sent from a known client that represents a human
user use case. The purpose of this logic is to (1) mark messages
humans send as read while (2) still making it convenient to have a bot
that sends messages to yourself for something like Google calendar,
where you actually want to read those messages.
It's possible that we want to move the control for this behavior into
a client-specific flag rather than doing this off User-Agent.
Fixes #3694.
This test would fail if settings.RUNNING_INSIDE_TORNADO
was True, which seemed to happen due to other tests changing
that setting, although I did not fully investigate.
For our user administration, we now primarily work with user ids
that get put into data-user-id attributes. We still put emails in the
tags to make our Casper tests easy to maintain.
This requires a minor change to the back end to pass down user ids
for the /users endpoint (in get_members_backend).
Something in c14e981e00 broke test
failures being reported properly; this isn't the right fix, but it works
and will let us avoid reverting the original change until it can be
fixed properly.
I dug into why we never did this before, and it turns out we did, but
using `$.trim()` (which removes leading whitespace as well!); the behavior
was lost when the `$.trim()` usage was removed.
Fixes #3294.
This commit adds html versions of the invite and signup mails and renames
the existing .txt files to the preferred file extensions '.subject', '.html'
and '.txt'. The html versions of the mails are being sent along with the
text-only versions by the 'send_confirmation' function.
This fixes #3134.
The original test was written as a shell script which launches a new
Django instance for every test. By doing it in Python, we avoid
the overhead and reduce the test time to <1 second.
Fixes #3620.
This moves do_events_register, fetch_initial_state_data and friends to
a new file.
Modified significantly by tabbott for correctness and to remove unused
imports.
Fixes #3635.
In Django, a TestSuite can contain objects of type TestSuite as well
as TestCases. This is why the run method of TestSuite is
responsible for iterating over the items of the TestSuite.
This also gives us access to the result object, which is used by unittest
to gather information about test cases.
Use append_instrumentation_data to append data to the INSTRUMENTED_DATA.
This gives us a layer of abstraction when we need to add instrumentation
data from other modules e.g. while running tests in parallel mode.
This function can be used to perform processing on instrumentation data.
For example, this can be used to send the instrumentation data gathered
in the test suite running in the child process to the parent process for
aggregation.
Having `restricted_to_domain` set to True if there are no more aliases
left means the user is either confused or forgot to set it to False. It
should be set to False automatically when the last alias is deleted.
I believe this completes the project of ensuring that our recent work
on limiting what characters can appear in users' full names covers
the entire codebase.
Disallows you from putting the characters @, *, `, >, and " in
your name. Added test cases similar to the MAX_NAME_LENGTH check.
Copied initial code from:
https://github.com/zulip/zulip/pull/2473
Adds a new webhook integration for WordPress blogs. Both WordPress.com
and self-installed blogs are supported, with minor differences that
are described in the documentation. It creates a new message for each
action; the stream and topic may be specified or use default values.
WordPress actions supported:
publish_post: a new blog post was published
publish_page: a new page was published
user_register: a new user account was created
wp_login: a user logged in
Notes: comment_post only provides the id of the parent post, not the title
or link, so it was not included. On further testing, I found edit_post is
not very practical: it also fires while a new post is being written, and
when posts are deleted. (I think it tracks drafts too.) I've removed it,
as it seems more confusing than useful.
Fixes #3245.
boto's stubs have been updated in mypy 0.4.7, which has given us
more information about what type of strings are expected as
parameters in various functions.
Wrap `list.append` in a lambda before assigning it to
event_queue.process_notification to prevent errors when
event_queue.process_notification is used with keyword arguments.
This also removes an error message by mypy 0.4.7.
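An illustrative sketch of the wrapping described above (variable names
here are hypothetical):

    notices = []  # type: list

    # Instead of: process_notification = notices.append
    # (a built-in bound method that rejects keyword arguments and has an
    # opaque signature for mypy), wrap it in a plain lambda:
    process_notification = lambda notice: notices.append(notice)

    process_notification(notice={'event': 'restart'})  # keyword argument now works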
In zerver.tests.test_decorators.test_check_dict, the variable
'keys' has to be explicitly annotated to pass mypy 0.4.7.
See https://github.com/python/mypy/issues/2777 for more info.
This changes the query for DevAuthBackend so that the Shakespearean
users are not omitted, while limiting the number of extra users to be
rendered to something reasonable.
Fixes: #3578.
Zulip's previous model for managing static asset files via Django
pipeline had some broken behavior around upgrades. In particular, it
was for some reason storing the information as to which static files
should be used in a memcached cache that was shared between different
deployments of Zulip. This means that during the upgrade process,
some clients might be served a version of the static assets that does
not correspond to the server they were connected to.
We've replaced that model with using ManifestStaticFilesStorage, which
instead allows each Zulip deployment directory to have its own
complete copy of the mapping of files to static assets, as it should
be.
We have to do a little bit of hackery with the staticfiles.json path
to make this work, basically because Django expects staticfiles.json
to be under STATIC_ROOT (aka the path nginx is serving to users), but
doing that doesn't really make sense for Zulip, since that directory
is shared between different deployments.
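For reference, the stock Django setting involved; Zulip subclasses the
storage backend to control where staticfiles.json lives, so treat this as
a simplified sketch:

    # settings.py (simplified)
    STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage'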
In a Zulip production environment, STATIC_ROOT points to the shared
directory that static assets are served from, and so the
compilemessages management command was trying to process every
historical version in there.
We do not use `get_link_embed_data` for messages sent by
bots, as bots often repeat the same URL over and over again
and are generally either text-focused or have their own
mechanisms to provide preview content.
Fixes #2968.