This changes the timestamp rendering to happen on the client side, so
it can take advantage of knowing the client's timezone and format the
time in a more human-readable way.
This adds support to Zulip for a user changing their own email
address.
It's backed by a huge amount of work by Steve Howell on making email
changes actually work from a UI perspective.
Fixes #734.
* Created a drafts modal to display/restore/delete drafts
* Created a Draft model to support storing draft data in localStorage
* Removed existing restore-draft functionality
* Added casper and node tests for drafts functionality
Fixes #1717.
The comments explain why this change is correct. This change is
useful because it's better not to have dead code paths, both for
coverage analysis and because the else statement provided the
illusion that that case could actually happen.
If the analysis in that comment is wrong, we'd rather get a 500 error
and fix the bug than have things silently sort of work.
This arguably regresses the Zephyr experience, in that we no longer
consider 'foo.d.d.d.d.d' to be something that gets narrowed in with
the rest, but that's a pretty rare use case anyway.
In practice, using that many '.d's only happens a few times a year.
Our client code will now receive avatar_url in
page_params.people_list during page load, so it will be
able to use more current URLs for old messages (the client
already had some logic for that and was just missing the
data).
We also add avatar_url to the realm_user/add event.
When we change the avatar, we make sure to always send a
realm_user/update event (even for bots).
We also needed to add avatar_version and
avatar_source to our active users cache.
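A rough sketch of the shape of this change (the field names here are
illustrative, not the exact event payload):

    def realm_user_add_event(user):
        # The person dict now carries avatar_url, so clients also get
        # current avatar data for users they learn about via events.
        return dict(
            type="realm_user",
            op="add",
            person=dict(
                user_id=user.id,
                email=user.email,
                full_name=user.full_name,
                avatar_url=user.avatar_url,
            ),
        )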
This makes life a lot easier for people inviting users to a new Zulip
organization, since they can give some form of context now.
Modified by tabbott to clean up CSS, backend code flow, and improve
the formatting of the emails.
Fixes: #1409.
We now make tests that call EventsRegisterTest.do_test()
explicitly specify whether calls to apply_events() would
change the state of initially fetched data. Generally
these tests exist to test that logic (as well as verifying
schemas of events), so if they stop testing that logic, it
is usually a broken test.
Some tests are exempted from the check here, because I think
they don't really change state--such as updating messages or
notifications. You can set state_change_expected to False
for those tests.
For all the tests that deal with flipping boolean flags, I now set
their value to False before calling do_test twice.
For the authentication backends, I mock the settings so that
more backends are "supported" and therefore part of the event
and the fetched state.
Finally, for the bot tests, I make sure to use a bot the user
can access.
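A minimal sketch of the check being added (the helper names are
stand-ins for the real test plumbing, passed in here as callables):

    import copy

    def do_test(fetch_state, capture_events, apply_events, action,
                state_change_expected=True):
        initial_state = fetch_state()
        hybrid_state = copy.deepcopy(initial_state)
        events = capture_events(action)
        apply_events(hybrid_state, events)
        if state_change_expected:
            # If applying the events never changes the fetched state,
            # the test is probably not exercising the logic it claims to.
            assert hybrid_state != initial_state
        # The event-updated state should match a fresh fetch.
        assert hybrid_state == fetch_state()
        return events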
The original include_subscribers implementation did not correctly
update the apply_events code path to avoid adding 'subscribers' dicts
to things. This corrects that oversight.
There's a new option, `include_subscribers`, that controls whether the
API sends down subscriber data for the various streams you are
subscribed to.
This has significant performance savings for large realms with naive
clients, and saves a bunch of bandwidth as well.
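A simplified sketch of the server-side idea (not the actual Zulip
code):

    def build_stream_dicts(streams, include_subscribers=True):
        results = []
        for stream in streams:
            info = dict(stream_id=stream.id, name=stream.name)
            if include_subscribers:
                # Skipping this is where the savings come from for large
                # realms whose clients don't need subscriber lists.
                info["subscribers"] = sorted(stream.subscriber_ids)
            results.append(info)
        return results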
This fixes a performance regression loading the Zulip homepage.
While it decreases the utility of the message display, the loss is
limited (because the display recipient for PMs was totally broken
anyway).
Fixes #268.
Modified significantly by tabbott to:
* improve code cleanliness / repetition
* add missing translation tags
* move code into message_edit.js
* correspond with the new backend.
* not display the option for messages that were only topic-edited
This makes it super easy for frontend code using this view code to
produce a nice display of the history.
This also fixes an off-by-one error with the timestamps.
Our lists of RabbitMQ queues were likely to end up out of date, since
there was nothing enforcing that the various lists of queues were
correct or the same as each other.
Based on work by Kartik Maji in #1204.
This has a few significant changes from the original version:
* We correctly handle filling in data for topic edits
* Has a complete test suite verifying correctness of the logic
* Currently, it doesn't include a special "start" entry
Things we may want to further change include:
* Adding a special "start" entry.
* Reversing the order of the history data returned for clarity.
This is important for, in the future, being able to display who edited
the topic of a message if that wasn't the person who originally sent
the message.
Our URL routing previously attempted to segment the /users/ endpoint
namespace into /me (affecting yourself) and /username@domain (affecting
other users) using regular expressions, and did so incorrectly for
email addresses starting with `me`. This prevented various admin
actions, like removing a user as an organization administrator.
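An illustrative (non-Zulip) example of the routing problem and the
fix: anchoring the `me` pattern so an email that merely starts with
"me" falls through to the per-user pattern:

    import re

    ME_PATTERN = re.compile(r'^users/me$')
    USER_PATTERN = re.compile(r'^users/(?P<email>[^/]+)$')

    def route(path):
        if ME_PATTERN.match(path):
            return ("self", None)
        match = USER_PATTERN.match(path)
        if match:
            return ("user", match.group("email"))
        return (None, None)

    assert route("users/me") == ("self", None)
    assert route("users/me@example.com") == ("user", "me@example.com")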
This is a fairly risky, invasive change that speeds up
stream deactivation by no longer sending subscription/remove
events for individual subscribers to all of the clients who
care about a stream. Instead, we let the client handle the
stream deactivation on a coarser level.
The back end changes here are pretty straightforward.
On the front end we handle stream deactivations by removing the
stream (as needed) from the streams sidebar and/or the stream
settings page. We also remove the stream from the internal data
structures.
There may be some edge cases where live updates don't handle
everything, such as if you are about to compose a message to a
stream that has been deactivated. These should be rare, as admins
generally deactivate streams that have been dormant, and they
should be recoverable either by getting proper error handling when
you try to send to the stream or via reload.
This fix prevents stream deactivation from being basically
unusable for medium to large sites. Instead of calling
bulk_remove_subscriptions one at a time for every individual
member of the realm, we call it once for all the users that
care about the stream. This change makes a huge difference, but
the feature is still a bit clunky, and we should only temporarily
revert to this fix if future, more-invasive fixes have flaws.
Fixes #3631.
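A sketch of the change in spirit (the function names here are
stand-ins for the real ones):

    # Before: one call per realm member, each with its own queries/events.
    #   for user in all_realm_users:
    #       bulk_remove_subscriptions([user], [stream])
    #
    # After: a single call covering just the users subscribed to the stream.
    def deactivate_stream_subscriptions(stream, get_subscribers,
                                        bulk_remove_subscriptions):
        subscribers = get_subscribers(stream)
        bulk_remove_subscriptions(subscribers, [stream])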
We were apparently incorrectly hardcoding the client for the main
logged-in site's page load as "website", rather than using the existing
logic that can sort out the desktop apps.
This significantly simplifies the logic for our logging process, making
it the case that websocket message-sending requests are always logged
as having the exact same client as a normal AJAX request from that
server.
This commit changes test_patch_bot_avatar to upload avatars to a
different directory so that there is no race condition when tests are
run in parallel mode.
In some cases here we simplify things by calling avatar_url()
instead of get_avatar_url(), when we have a user_profile record
handy. For other cases we pass in an extra avatar_version
parameter to get_avatar_url(), including from avatar_url().
We have a field called user_profile.avatar_version that will
track avatar versions and be used tactically in avatar urls
to get browsers to refresh their caches (in future commits).
This commit bumps the avatar version when we update avatars.
We do this in do_change_avatar_fields(), which was
do_change_avatar_source() before this change.
Adarsh did the initial work here, and Steve Howell (showell) also
made changes.
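A simplified sketch of the mechanism (not the real implementation; the
URL format is illustrative):

    def user_avatar_url(user):
        # The version query parameter forces browsers to refetch the
        # image after an avatar change instead of using a stale cache.
        return "/user_avatars/%s.png?version=%d" % (user.id, user.avatar_version)

    def do_change_avatar_fields(user, new_avatar_source):
        user.avatar_source = new_avatar_source
        user.avatar_version += 1
        user.save(update_fields=["avatar_source", "avatar_version"])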
In Zulip, we mark messages that you send to yourself as read if and
only if they were sent from a known client that represents a human
user use case. The purpose of this logic is to (1) mark messages
humans send as read while (2) still making it convenient to have a bot
that sends messages to yourself for something like Google calendar,
where you actually want to read those messages.
It's possible that we want to move the control for this behavior into
a client-specific flag rather than doing this off User-Agent.
Fixes #3694.
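A rough sketch of the rule (the client names are examples, not the
actual list):

    HUMAN_CLIENTS = {"website", "ZulipDesktop", "ZulipMobile"}  # illustrative

    def self_sent_message_is_read(sender_id, recipient_ids, client_name):
        # Mark read only if the message goes solely to the sender and
        # came from a client that represents a human typing, so bot
        # reminders sent to yourself stay unread.
        return set(recipient_ids) == {sender_id} and client_name in HUMAN_CLIENTS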
This test would fail if settings.RUNNING_INSIDE_TORNADO
was True, which seemed to happen due to other tests changing
that setting, although I did not fully investigate.
For our user administration, we now primarily work with user ids
that get put into data-user-id attributes. We still put emails in the
tags to make our Casper tests easy to maintain.
This requires a minor change to the back end to pass down user ids
for the /users endpoint (in get_members_backend).
Something in c14e981e00 broke test failures being reported properly;
this isn't the right fix, but it works
and will let us avoid reverting the original change until it can be
fixed properly.
I dug into why we never did this before, and it turns out we did, but
using `$.trim()` (which removes leading whitespace as well!). That
behavior was lost when the `$.trim()` usage was removed.
Fixes #3294.
This commit adds HTML versions of the invite and signup emails and
renames the existing .txt files to use the preferred file extensions
'.subject', '.html' and '.txt'. The HTML versions of the emails are
sent along with the text-only versions by the 'send_confirmation'
function.
This fixes #3134.
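In Django terms, sending both versions looks roughly like the
following (the template names here are placeholders):

    from django.core.mail import EmailMultiAlternatives
    from django.template.loader import render_to_string

    def send_confirmation_email(to_address, context):
        subject = render_to_string("invite.subject", context).strip()
        text_body = render_to_string("invite.txt", context)
        html_body = render_to_string("invite.html", context)
        message = EmailMultiAlternatives(subject, text_body, None, [to_address])
        message.attach_alternative(html_body, "text/html")
        message.send()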
The original test was written as a shell script which launched a new
Django instance for every test. By doing it in Python, we avoid that
overhead and reduce the test time to <1 second.
Fixes #3620.
This moves do_events_register, fetch_initial_state_data and friends to
a new file.
Modified significantly by tabbott for correctness and to remove unused
imports.
Fixes #3635.
In Django, a TestSuite can contain TestSuite objects as well as
TestCases. This is why the run method of TestSuite is responsible for
iterating over the items of the TestSuite.
This also gives us access to the result object, which is used by
unittest to gather information about test cases.
Use append_instrumentation_data to append data to the INSTRUMENTED_DATA.
This gives us a layer of abstraction when we need to add instrumentation
data from other modules e.g. while running tests in parallel mode.
This function can be used to perform processing on instrumentation data.
For example, this can be used to send the instrumentation data gathered
in the test suite running in the child process to the parent process for
aggregation.
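An illustrative sketch of both ideas (not the actual runner code):

    import unittest

    INSTRUMENTED_DATA = []  # stand-in for the real module-level store

    def append_instrumentation_data(data):
        # Single entry point so other modules (e.g. parallel-mode workers
        # reporting back to the parent) append through the same helper.
        INSTRUMENTED_DATA.append(data)

    class Suite(unittest.TestSuite):
        def run(self, result):
            for item in self:
                # Items may be TestCases or nested TestSuites; calling
                # the item runs either, and nested suites recurse.
                item(result)
            return result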
Having `restricted_to_domain` set to True if there are no more aliases
left means the user is either confused or forgot to set it to False. It
should be set to False automatically when the last alias is deleted.
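A sketch of the intended behavior (the function and field names are
assumed):

    def do_remove_realm_alias(realm, alias):
        realm.aliases.remove(alias)
        if not realm.aliases and realm.restricted_to_domain:
            # With no aliases left, restricted_to_domain would lock
            # everyone out, so clear it automatically.
            realm.restricted_to_domain = False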
I believe this completes the project of ensuring that our recent work
on limiting what characters can appear in users' full names covers
the entire codebase.
Disallows you from putting the characters @, *, `, >, and " in
your name. Added test cases similar to the MAX_NAME_LENGTH check.
Copied initial code from:
https://github.com/zulip/zulip/pull/2473
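A simple sketch of the kind of check described (the character set
follows the commit text; the real helper may differ):

    NAME_INVALID_CHARACTERS = set('@*`>"')

    def check_full_name(full_name):
        if set(full_name) & NAME_INVALID_CHARACTERS:
            raise ValueError("Invalid characters in name!")
        return full_name.strip()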
Adds a new webhook integration for WordPress blogs. Both WordPress.com
and self-installed blogs are supported, with minor differences that
are described in the documentation. It creates a new message for each
action; the stream and topic may be specified or left to their default
values.
WordPress actions supported:
publish_post: a new blog post was published
publish_page: a new page was published
user_register: a new user account was created
wp_login: a user logged in
Notes: comment_post only provides the id of the parent post, not its
title or link, so it was not included. On further testing, I found
edit_post is not very practical: it also fires while a new post is
being written and when posts are deleted. (I think it tracks drafts
too.) I've removed it, as it seems more confusing than useful.
Fixes #3245.
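A rough sketch of the action-to-message mapping (the templates and
payload fields are illustrative, not the integration's exact format):

    TEMPLATES = {
        "publish_post": "New post published: {title} {url}",
        "publish_page": "New page published: {title} {url}",
        "user_register": "New user account created: {name}",
        "wp_login": "User {name} logged in.",
    }

    def render_wordpress_message(hook, payload):
        template = TEMPLATES.get(hook)
        if template is None:
            raise ValueError("Unknown WordPress webhook action: %s" % (hook,))
        return template.format(**payload)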
boto's stubs have been updated in mypy 0.4.7, which has given us
more information about what type of strings are expected as
parameters in various functions.
Wrap `list.append` in a lambda before assigning it to
event_queue.process_notification to prevent errors when
event_queue.process_notification is used with keyword arguments.
This also silences an error reported by mypy 0.4.7.
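Concretely, the problem being avoided looks like this:

    notices = []

    process_notification = notices.append
    # process_notification(notice=event)  # TypeError: append() takes no
    #                                     # keyword arguments

    process_notification = lambda notice: notices.append(notice)
    process_notification(notice={"type": "example"})  # works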
In zerver.tests.test_decorators.test_check_dict, the variable
'keys' has to be explicitly annotated to pass mypy 0.4.7.
See https://github.com/python/mypy/issues/2777 for more info.
This changes the query for DevAuthBackend so that the Shakespearean
users are not omitted while limiting the number of extra users to be
rendered to something reasonable.
Fixes: #3578.
Zulip's previous model for managing static asset files via Django
pipeline had some broken behavior around upgrades. In particular, it
was for some reason storing the information as to which static files
should be used in a memcached cache that was shared between different
deployments of Zulip. This meant that during the upgrade process,
some clients might be served a version of the static assets that did
not correspond to the server they were connected to.
We've replaced that model with using ManifestStaticFilesStorage, which
instead allows each Zulip deployment directory to have its own
complete copy of the mapping of files to static assets, as it should
be.
We have to do a little bit of hackery with the staticfiles.json path
to make this work, basically because Django expects staticfiles.json
to be under STATIC_ROOT (aka the path nginx is serving to users), but
doing that doesn't really make sense for Zulip, since that directory
is shared between different deployments.
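The settings-level piece of this looks roughly like the following (a
sketch, not the exact configuration):

    # Per-deployment manifest-based static file storage.
    STATICFILES_STORAGE = (
        "django.contrib.staticfiles.storage.ManifestStaticFilesStorage"
    )
    # Django reads/writes staticfiles.json under STATIC_ROOT by default,
    # which in Zulip production is the shared serving directory; the
    # path hackery mentioned above keeps each deployment's manifest in
    # its own deployment directory instead.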
In a Zulip production environment, STATIC_ROOT points to the shared
directory that static assets are served from, and so the
compilemessages management command was trying to process every
historical version in there.
We do not use `get_link_embed_data` for messages sent by
bots, as bots often repeat the same URL over and over again
and are generally either text-focused or have their own
mechanisms to provide preview content.
Fixes #2968.
(The commit 7ef4e40258280e202325c9295579c93fb948b replaced
data-user-email with data-user-id, but we still need to support
data-user-email for old clients, like non-updated Android apps, and we
still want to start the migration forward to data-user-id.)
The goal of this library is to make it a lot easier to prevent bugs
like CVE-2017-0881 by having all of our view logic for fetching a
stream go through a couple of carefully tested code paths.
A bug in Zulip's implementation of the "stream exists" endpoint meant
that any user of a Zulip server could subscribe to an invite-only
stream without needing to be invited by using the "autosubscribe"
argument.
Thanks to Rafid Aslam for discovering this issue.
Apparently, we weren't returning the `json_error`, so users
encountering this condition received a 500 rather than the proper
40x error.
This fixes a regression introduced in 9ae68ade8b.
Previously, if you searched for ':offi..' you would see both 🏢 and
:office_building: as possible completions, both of which are shortcodes for
the same unicode codepoint (and hence which have the same image). Also, we
sort the emoji in our emoji pickers alphabetically by shortcode, and so the
images for 🏢 and :office_building: show up next to each other, which
looks like a bug. This removes :office_building: as a shortcode, along with
several hundred other duplicates. It leaves some duplicates in that won't
give autocomplete or alphabetical ordering a problem, like (🚗,
:automobile:).
This fixes a regression introduced by our migration to track
subscribers for all public streams, where users who were added to an
invite-only stream would receive a mark_subscribed event for a stream
their browser didn't know existed, causing an exception.
To fix this, we now send a stream create event to the browser just
before the user receives the notification that they were added to the
invite-only stream.
The realm with string_id of "simple" just has three users
named alice, bob, and cindy for now. It is useful for testing
scenarios where realms don't have special zulip.com exception
handling.
In cases where realms have subdomains and the user hasn't yet been
populated in the Django User model, `ZulipLDAPAuthBackend` should not
rely on the user's email domain to determine which realm the user
should be created in.
Fixes: #2227.
This fixes a bug where update_message_backend would do one memcached
query per user receiving a given message. Right now we just do a
single bulk database query, but in principle we could use
generic_bulk_cached_fetch to use the cache as well.
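A sketch of the shape of the fix (UserProfile here stands in for
Zulip's Django user model):

    def bulk_get_users(user_ids):
        # One database query for all recipients, instead of a memcached
        # round trip per user; a later refinement could route this
        # through generic_bulk_cached_fetch to use the cache as well.
        return {user.id: user
                for user in UserProfile.objects.filter(id__in=user_ids)}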
Apparently, we were comparing the full list of enabled authentication
methods (whether or not supported by the server) against the user's
selections among those supported by the server, which resulted in
authentication methods being always reported as different.
It turns out we were using malformed URLs in the image tags
(containing just a hostname, but no http(s)!) in what we were passing
to the Django templates for our digest/, which resulted in the Django
templates treating these URLs as http. Gmail recently cracked down on
loading images over HTTP, causing the emoji links to appear broken in
emails Zulip sends.
Fixes #3258.
This old helper has for years been used only by populate_db, and got
buggy (as of a recent refactoring). So we just call do_send_messages
directly instead.
Fixes the provisioning error we currently get in Travis CI.
This is a pretty minor change, but it makes it clear that we
have user_id in all the relevant states/events, so we might as
well use that for the check, since email is mutable and
slightly more difficult to reason about.
This changes bugdown to use the realm passed in by the caller (if any)
for rendering, fixing a problem where bots such as the notification
bot would have their messages rendered using the admin realm's
settings, not the settings of the realm their messages are being sent
into.
Also adds a test for the notification bot case.
Fixes #3215.
This moves the realm_filter_key variable, primarily used for clarity,
up from Bugdown into the render_markdown function.
We'll need this for the upcoming commits.
A lot of care has been taken to ensure we're using the realm that the
message is being sent into, not the realm of the sender, to correctly
handle the logic for cross-realm bot users such as the notifications
bot.
In order to correctly handle messages sent by cross-realm bots, we
need to specify the realm that the messages are being sent into in the
send message code path. This commit and its successors convert that
code path to explicitly pass the realm the message is being sent into.
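The direction described is roughly the following (the signature is
illustrative, not the real one):

    def check_send_message(sender, client, recipients, content, realm=None):
        # `realm` is the realm the message is being sent into; cross-realm
        # bots must pass it explicitly, because sender.realm would be the
        # wrong one for them.
        if realm is None:
            realm = sender.realm  # normal users send into their own realm
        # ... build and render the message using `realm` (not sender.realm)
        # so mentions, linkifiers, etc. use the destination realm's
        # settings ...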