On reloading the page after disabling email changes, the "Prevent users
from changing their email address" checkbox was not checked.
Adding realm_email_changes_disabled to page_params_core_fields fixes the problem.
validate_user_access_to_subscribers_helper never uses
stream_dict['realm__domain']. I imagine it was there originally to do the
is_zephyr_mirror_realm check.
Previously we used the topic "Realm.domain" for new user signups, but the
topic "Realm.string_id" for the realm creation. This changes the user signup
messages to be on the same topic thread as the realm creation.
This fixes 3 related issues:
* We would incorrectly report authentication methods that are
supported by a server (but have been disabled for a given
realm/subdomain) as supported.
* We did not return an error with an invalid subdomain on a valid
Zulip server.
* We did not return an error when requesting auth backends for the
homepage if SUBDOMAINS_HOMEPAGE is set.
Comes with complete tests.
Our linter for translation strings shouldn't check test files, since
then we'll end up translating non-user-facing strings.
So we fix that, and actually add the opposite lint rule.
All current calls to do_activate_user just use the default value of
timezone.now(). Having a date_joined other than timezone.now() raises an
interesting RealmAuditLog question (namely, which time should be used),
which we don't have to answer if we remove the argument.
This change applies to both the subdomains and non-subdomains cases, though we
just use the EXTERNAL_HOST in the non-subdomains case if there is only 1 realm.
Fixes #3903.
This makes the outcome reasonable if a user didn't have an avatar due to
a past email change; the user will just be bumped back to gravatar,
fixing their invalid state.
This commit introduces a migration for moving avatars from email based
to user id based storage.
This is in response to the change in behavior of user_avatar_path, which
now returns a path comprising the realm id and a hash based on the user
id. We also fix test_helpers accordingly.
Fixes #3776.
- Add settings parameter for max realm icon size.
- Add settings parameter for max user avatar size.
- Add file size checks to the avatar and icon
upload views (see the sketch after this list).
- Pass the file size limit parameter to the frontend.
- Add tests.
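A minimal sketch of the size check described above; the function name,
error text, and the MAX_*_FILE_SIZE setting names are assumptions, not the
actual view code:

```python
from django.utils.translation import ugettext as _

from zerver.lib.request import JsonableError  # import path assumed


def check_upload_size(user_file, max_size_mib):
    # Reject the upload before doing any processing or storage.
    if user_file.size > max_size_mib * 1024 * 1024:
        raise JsonableError(
            _("Uploaded file is larger than the allowed limit of %s MiB") % (max_size_mib,))

# In the avatar/icon upload views (setting names assumed):
#     check_upload_size(user_file, settings.MAX_AVATAR_FILE_SIZE)
#     check_upload_size(user_file, settings.MAX_ICON_FILE_SIZE)
```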
Add a webhook to create messages from Splunk search alerts. The search
alert JSON includes the first search result and a link to view the full
results. The following fields are used:
* search_name - the name of the saved search
* results_link - URL of the full search results
* host - the host the search result came from
* source - the source file on that host
* _raw - the raw text of the logged event.
The Zulip message contains:
* search name
* host
* source
* raw
The destination stream and message topic are configurable: the default
stream is "splunk" and the default topic is "Splunk Alert". If the topic is
not provided in the URL, the search name is used instead (truncated if too
long). If a needed field is missing, a default value is used instead.
Example: "Missing source"
It is possible to configure a Splunk search to not include some values,
so I've provided defaults rather than return an error for missing data.
In practice, these fields are unlikely to be deliberately suppressed.
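As a hedged sketch of that default handling (the payload keys come from the
field list above, while the default strings, the function name, and the
60-character truncation are assumptions):

```python
def splunk_alert_fields(payload, topic_from_url=None):
    # Substitute a default for any field the Splunk search was configured
    # not to include, rather than rejecting the alert.
    search_name = payload.get("search_name", "Missing search_name")
    host = payload.get("host", "Missing host")
    source = payload.get("source", "Missing source")
    raw = payload.get("_raw", "Missing _raw")

    # Topic: use the URL-provided value if any, otherwise fall back to the
    # search name, truncated to fit Zulip's topic length limit.
    topic = topic_from_url or search_name[:60]
    return search_name, host, source, raw, topic
```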
Note: alerts are only available for Splunk servers using a valid trial,
developer, or paid license.
I've added tests for the normal case of one search result, the topic from
the search name, and for a search missing one of the fields used. Tested
using Splunk Enterprise 6.5.1.
Fixes #3477
- Add `OFFLINE_THRESHOLD_SECS` settings parameter
to define the offline period.
- Set aggregated status to offline if the user's status
hasn't changed for the `OFFLINE_THRESHOLD_SECS` period
(see the sketch below).
- Add test for offline aggregated status.
- Add aggregated status to user presence status dict.
- Add tests for aggregated presence status.
- Fix removal of unused keys from the status dict
containing aggregated data for the user.
Fixes #3692
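A minimal sketch of the aggregation rule described above; the function name
and the (status, timestamp) data shape are assumptions, while
`OFFLINE_THRESHOLD_SECS` comes from this change:

```python
from datetime import timedelta

from django.conf import settings
from django.utils import timezone


def aggregate_presence(client_statuses):
    # client_statuses: list of (status, timestamp) pairs, one per client,
    # e.g. ("active", <datetime>) for the website, another for the desktop app.
    if not client_statuses:
        return "offline"
    status, newest = max(client_statuses, key=lambda pair: pair[1])
    if timezone.now() - newest > timedelta(seconds=settings.OFFLINE_THRESHOLD_SECS):
        # No client has reported anything recently enough: user is offline.
        return "offline"
    return status  # most recently reported status, e.g. "active" or "idle"
```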
- Add a new 'missedmessage_email_senders' queue for sending missed-message emails.
- Add a new worker to process the 'missedmessage_email_senders' queue (sketched below).
- Split aggregating missed messages and sending missed-message emails
into separate queue workers.
- Adapt the tests for sending missed-message emails to the new logic.
Fixes #2607
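A hedged sketch of the dedicated sender worker, built on Zulip's queue-worker
decorator and base class; the class body and the email-sending helper are
assumptions, not the actual implementation:

```python
from zerver.worker.queue_processors import QueueProcessingWorker, assign_queue


def send_missedmessage_email(event):
    # Hypothetical stand-in for the real email-rendering/sending helper.
    raise NotImplementedError


@assign_queue('missedmessage_email_senders')
class MissedMessageSendingWorker(QueueProcessingWorker):
    def consume(self, event):
        # The aggregation worker has already bundled this user's missed
        # messages into `event`; this worker only renders and sends the email.
        send_missedmessage_email(event)
```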
This system was quite complicated, and never had great semantics.
Eventually, we'll want some other system for gating which server
should generate digest emails for which realm, controlled via the
database.
This feature hardcoded zulip.com, and never really made much sense
("feedback" should generally go to the local server administrator, not
to the Zulip development community).
This completes the process of simplifying the interface of the
send_*_push_notification functions, so that they can effectively
support a push notification forwarding workflow.
This refactoring is preparation for being able to forward push
notifications to users on behalf of another Zulip server.
The goal is to remove access to the current server's database from the
send_*_push_notification code paths.
This code was added as part of the Django 1.10 migration to make our
tests work with both Django 1.8 and 1.10. Now that we're on 1.10,
it's no longer required.
In this commit we replace user_avatar_hash with user_avatar_path, which
now returns paths to avatars based on the email hash.
Tweaked by tabbott to avoid an import loop.
"Local" datetimes are local to the server (or rather, are using
settings.TIME_ZONE), which in most cases is not what the recipient of the
message is expecting.
In this commit we just change the upload_avatar_image function to accept
two user_profiles: acting_user_profile and target_user_profile. Basically,
the email param is replaced by target_user_profile so that avatars can
later be moved to user-id-based storage.
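Roughly the new shape (a sketch of the signature described above, not the
verbatim code):

```python
def upload_avatar_image(user_file, acting_user_profile, target_user_profile):
    # The email parameter is gone: the avatar is stored for target_user_profile,
    # while acting_user_profile is whoever performed the upload (which may
    # differ, e.g. for bots). Keying everything off the target user is what
    # later allows moving storage from email-hash paths to user-id-based paths.
    ...
```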
Currently this only applies to emoji reactions, not to actual emoji in
message bodies, but it's a great start for people who want a text-only
view.
Tweaked by tabbott to update the text.
Fixes #3169.
Standardizing the Zulip codebase to use UTC everywhere. Note that unlike
many recent commits in this line, this change does result in a change in
behavior.
datetime.utcnow() is a timezone-naive datetime. The Django ORM interprets it
in the settings.TIME_ZONE timezone (e.g. 'America/New_York' in the
development server). We perhaps haven't noticed errors yet since with
'America/New_York' all it means is that emails are sent 5 hours early, or a
slightly different set of messages is included in the digest.
When you pass a naive datetime to the Django ORM, it uses settings.TIME_ZONE
for the time zone. In the development environment, both settings.TIME_ZONE
and datetime.now() use 'America/New_York', so there is no change in behavior
there. (fromtimestamp with no tz argument uses the same timezone as
datetime.now)
We are soon going to change settings.TIME_ZONE to UTC, so we need to remove
naive datetimes from queries to the ORM.
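For illustration, a minimal sketch of the difference (the model and field
names in the comments are just examples):

```python
from datetime import datetime

from django.utils import timezone

naive = datetime.utcnow()   # tzinfo is None; the ORM interprets it in settings.TIME_ZONE
aware = timezone.now()      # tzinfo is UTC (with USE_TZ = True); unambiguous everywhere

# With TIME_ZONE = 'America/New_York', a filter on the naive value is
# effectively five hours off from the intended UTC instant:
#     Message.objects.filter(pub_date__gte=naive)   # read as New York time
#     Message.objects.filter(pub_date__gte=aware)   # read as the UTC instant
```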
Like much rare-case code with new tests, it turns out that the logic
for handling null characters in our Zephyr postgres query escaping
never worked, in multiple ways. First, it always changed the second
character in s, not the current one being inspected, and second, the
value it replaced it with was not the correct postgres escape of the
null byte. We fix this and add tests.
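For context, a sketch of the buggy and fixed patterns (illustrative only;
`escaped_null` stands in for whatever the correct postgres escape is, and
this is not Zulip's actual escaping code):

```python
def escape_nulls(s, escaped_null):
    # Buggy version: the loop always overwrote position 1 of the buffer
    # rather than position i, and used the wrong replacement value:
    #     for i in range(len(chars)):
    #         if chars[i] == '\x00':
    #             chars[1] = wrong_escape   # wrong index, wrong value
    #
    # Fixed version: replace each null byte, where it occurs, with the
    # correct escape.
    return s.replace('\x00', escaped_null)
```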
This completes the effort to get zerver/views/messages.py to 100%
test coverage.
Fixes #1006.
When you edit a message to contain links, and URL previews are
enabled, previously we'd throw an exception, because the realm ID
wasn't included in the event.
Also adds a test so that we can have effective test coverage on this
codepath, though the actual history is that I found the bug by writing
this test :).
This fixes a weird issue where the following sequence of tests would fail:
test-backend
zerver.tests.test_messages.PersonalMessagesTest.test_personal_to_self
zerver.tests.test_report.TestReport.test_report_error
zerver.tests.test_templates.TemplateTestCase.test_custom_tos_template
It appears that all 3 tests are required for the failure.
While it's not entirely clear what the cause is, a very likely factor
is that settings.DEBUG is special, and so changing it at runtime is
likely to cause weird problems like this.
We fix this by replacing it with settings.DEVELOPMENT, which has the
same value in all environments, but doesn't have this problem of being
a special Django thing.
Fix an administration page JavaScript TypeError that occurs due to an
undefined variable access in static/js/bot_data.js.
Reactivating a bot was not updating the state in `bot_data`.
Sending an event on reactivating a bot fixes this issue.
Fixes: #2840
Change `from django.utils.timezone import now` to
`from django.utils import timezone`.
This is both because now() is ambiguous (could be datetime.datetime.now),
and more importantly to make it easier to write a lint rule against
datetime.datetime.now().
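That is, the preferred pattern keeps the module qualifier:

```python
# Preferred: the module qualifier makes the call unambiguous and easy to lint for.
from django.utils import timezone

timestamp = timezone.now()

# Avoided: a bare now() reads just like datetime.datetime.now().
#     from django.utils.timezone import now
#     timestamp = now()
```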
page_params is kind of a monster object. Ideally, we'd make it be
constructed in a much less haphazard fashion, and make sure that all
the useful data in it is available via the `/register` endpoint for
mobile/API. This change reorganizes page_params to be sorted by data
source, which is an important prerequisite for doing that.
- Add server version to `fetch_initial_state_data`.
- Add server version to register event queue api endpoint.
- Add server version to `get_auth_backends` api endpoint.
- Change source for server version in `home` endpoint.
- Fix tests.
Fixes #3663
- Add stamp file creation for failed handlebars template compilation.
- Add an error response to the `home` route if the stamp file exists. This
applies only to the development environment.
- Add a jinja2 template for the failed handlebars template compilation error.
Fixes #3650.
Modify the `bot_list` to hold all the bots owned by a user
irrespective of whether the bot is active or inactive. Also
include the `is_active` field in `active_bot_dict_fields` to
distinguish between inactive and active bots.
Use `name_to_codepoint.json` file (and the similar structure in
emoji_codes.js) to map emoji names directly to codepoints and change
the rendered emoji image to `unicode/<codepoint>.png` rather than
`<emoji_name>.png`.
Fixes: #3539.
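A sketch of the lookup described above (the file path, URL layout, and
function name are assumptions):

```python
import json

# Load the emoji-name -> codepoint mapping (path is illustrative).
with open("name_to_codepoint.json") as f:
    name_to_codepoint = json.load(f)


def emoji_image_path(emoji_name):
    # e.g. "smile" -> "unicode/1f604.png" instead of "smile.png"
    return "unicode/%s.png" % (name_to_codepoint[emoji_name],)
```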
This changes the time render to be done on the client-side and
therefore take advantage of knowing the client’s timezone, along with
being formatted in a more human-parseable way.
This adds to Zulip support for a user changing their own email
address.
It's backed by a huge amount of work by Steve Howell on making email
changes actually work from a UI perspective.
Fixes #734.
* Created a drafts modal to display/restore/delete drafts
* Created a Draft model to support storing draft data in localstorage
* Removed existing restore-draft functionality
* Added casper and node tests for drafts functionality
Fixes#1717.
The comments explain why this change is correct. This change is
useful because it's better to not have dead code paths, both because
it makes our life easier for coverage analysis, and because the else
statement provided the illusion that it could actually happen.
If the analysis in that comment is wrong, we'd rather have a 500 error,
so that we fix the bug, than have things silently sorta working.
This arguably regresses the Zephyr experience, in that we no longer
consider 'foo.d.d.d.d.d' to be something that gets narrowed in with
the rest, but that's a pretty rare use case anyway.
In practice, using that many '.d's only happens a few times a year
anyway.
Our client code will now receive avatar_url in
page_params.people_list during page load, so it will be
able to use more current urls for old messages (the client
already had some logic for that and was just missing the
data).
We also add avatar_url to the realm_user/add event.
When we change the avatar, we make sure to always send a
realm_user/update event (even for bots).
We also needed to add avatar_version and
avatar_source to our active users cache.
This makes life a lot easier for people inviting users to a new Zulip
organization, since they can give some form of context now.
Modified by tabbott to clean up CSS, backend code flow, and improve
the formatting of the emails.
Fixes: #1409.
We now make tests that call EventsRegisterTest.do_test()
explicitly specify whether calls to apply_events() would
change the state of initially fetched data. Generally
these tests exist to test that logic (as well as verifying
schemas of events), so if they stop testing that logic, it
is usually a broken test.
Some tests are exempted from the check here, because I think
they don't really change state--such as updating messages or
notifications. You can set state_change_expected to False
for those tests.
For all the tests that deal with flipping boolean flags, I now
set their value to False before calling do_test twice.
For the authentication backends, I mock the settings so that
more backends are "supported" and therefore part of the event
and the fetched state.
Finally, for the bot tests, I make sure to use a bot the user
can access.
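A hedged sketch of what a caller looks like after this change; the action
shown is hypothetical, and only the state_change_expected flag comes from
this commit:

```python
# Inside an EventsRegisterTest case: the caller must now say whether
# apply_events() is expected to transform the initially fetched state the
# same way a fresh fetch would.
events = self.do_test(
    lambda: do_change_some_realm_setting(self.user_profile.realm, True),  # hypothetical action
    state_change_expected=True,
)
```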
The original include_subscribers implementation did not correctly
update the apply_events code path to avoid adding 'subscribers' dicts
to things. This corrects that oversight.
There's a new option, `include_subscribers`, that controls whether the
API sends down subscriber data for the various streams you are
subscribed to.
This has significant performance savings for large realms with naive
clients, and saves a bunch of bandwidth as well.
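For example, a naive client can now opt out when registering an event queue
(a sketch: the /api/v1/register endpoint and the parameter name come from
this change, while the exact payload encoding and credentials below are
placeholders):

```python
import requests

email = "bot@example.com"   # placeholder credentials
api_key = "abcd1234"        # placeholder credentials

resp = requests.post(
    "https://zulip.example.com/api/v1/register",
    auth=(email, api_key),
    data={
        "event_types": '["subscription"]',
        "include_subscribers": "false",   # skip the per-stream subscriber lists
    },
)
```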
This fixes a performance regression loading the Zulip homepage.
While it decreases the utility of the message display, it's only so much
of a loss (because the display recipient for PMs was totally broken
anyway).
Fixes #268.
Modified significantly by tabbott to:
* improve code cleanliness / reduce repetition
* add missing translation tags
* move code into message_edit.js
* correspond with the new backend.
* not display the option for messages only topic-edited
This makes it super easy for frontend code using this view code to
produce a nice display of the history.
This also fixes an off-by-one error with the timestamps.
Our lists of rabbitmq queues were likely to end up out of date, since
there was nothing enforcing that the various lists of queues were
correct or the same as each other.