Use append_instrumentation_data to append data to INSTRUMENTED_DATA.
This gives us a layer of abstraction when we need to add instrumentation
data from other modules, e.g. while running tests in parallel mode.
This function can be used to perform processing on instrumentation data.
For example, this can be used to send the instrumentation data gathered
in the test suite running in the child process to the parent process for
aggregation.
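A minimal sketch of the abstraction this gives us (the list type and the
shape of the data here are assumptions, not the real implementation):

    from typing import Any, Dict, List

    INSTRUMENTED_DATA = []  # type: List[Dict[str, Any]]

    def append_instrumentation_data(data):
        # type: (Dict[str, Any]) -> None
        # Single entry point for recording instrumentation data; other
        # modules (e.g. a child process running part of the test suite)
        # call this instead of touching INSTRUMENTED_DATA directly, so
        # any processing or forwarding to the parent can live here.
        INSTRUMENTED_DATA.append(data)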
Having `restricted_to_domain` set to True if there are no more aliases
left means the user is either confused or forgot to set it to False. It
should be set to False automatically when the last alias is deleted.
I believe this completes the project of ensuring that our recent work
on limiting what characters can appear in users' full names covers
the entire codebase.
Adds a new webhook integration for WordPress blogs. Both WordPress.com
and self-installed blogs are supported, with minor differences that
are described in the documentation. It creates a new message for each
action; the stream and topic may be specified or left to their default
values.
WordPress actions supported:
publish_post: a new blog post was published
publish_page: a new page was published
user_register: a new user account was created
wp_login: a user logged in
Notes: comment_post only provides the id of the parent post, not its title
or link, so it was not included. On further testing, I found edit_post is
not very practical: it also fires while a new post is being written and
when posts are deleted. (I think it tracks drafts too.) I've removed it,
as it seems more confusing than useful.
Fixes #3245
boto's stubs have been updated in mypy 0.4.7, which has given us
more information about what type of strings are expected as
parameters in various functions.
Wrap `list.append` in a lambda before assigning it to
event_queue.process_notification, to prevent errors when
event_queue.process_notification is called with keyword arguments.
This also silences an error reported by mypy 0.4.7.
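The pattern, as a runnable sketch (a module-level name stands in for the
real assignment to event_queue.process_notification):

    from typing import Any, Dict, List

    notifications = []  # type: List[Dict[str, Any]]

    # Assigning notifications.append directly would make
    # process_notification(notice=...) raise a TypeError (list.append
    # takes no keyword arguments) and gives mypy a mismatched signature;
    # the lambda's explicit parameter name fixes both.
    process_notification = lambda notice: notifications.append(notice)

    process_notification({'type': 'restart'})
    process_notification(notice={'type': 'restart'})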
We do not use `get_link_embed_data` for messages sent by
bots, as bots often repeat the same URL over and over again
and are generally either text-focused or have their own
mechanisms to provide preview content.
Fixes #2968.
(The commit q7ef4e40258280e202325c9295579c93fb948b replaced
data-user-email with data-user-id, but we still need to
support data-user-email for old clients, such as Android apps that
have not been updated, while still moving the migration to
data-user-id forward.)
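An illustrative sketch of emitting both attributes during the transition
(the helper and markup shape here are hypothetical, not the actual
bugdown code):

    def render_user_mention(user_id, email, full_name):
        # type: (int, str, str) -> str
        # Hypothetical helper: keep data-user-email so non-updated
        # clients can still resolve the mention, while adding
        # data-user-id so newer clients can move to the stable id.
        return ('<span class="user-mention" data-user-id="%d" '
                'data-user-email="%s">@%s</span>'
                % (user_id, email, full_name))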
The goal of this library is to make it a lot easier to prevent bugs
like CVE-2017-0881 by having all of our view logic for fetching a
stream go through a couple of carefully tested code paths.
This fixes a regression introduced by our migration to track
subscribers for all public streams, where users who were added to
an invite-only stream would receive a mark_subscribed event
for a stream their browser didn't know existed, causing an exception.
To fix this, we now send a stream create event to the browser just
before the user receives the notification that they were added to the
invite-only stream.
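A runnable sketch of the ordering the fix enforces (event shapes are
simplified and the helper name is illustrative):

    from typing import Any, Dict, List

    def notify_subscription_added(events, user_id, stream):
        # type: (List[Dict[str, Any]], int, Dict[str, Any]) -> None
        if stream['invite_only']:
            # The browser has never heard of an invite-only stream it
            # was just added to, so describe the stream first...
            events.append(dict(type='stream', op='create',
                               streams=[stream], users=[user_id]))
        # ...and only then announce the subscription, so the event never
        # references a stream the client doesn't know about.
        events.append(dict(type='subscription', op='add',
                           subscriptions=[stream], users=[user_id]))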
It turns out we were using malformed URLs in the image tags
(containing just a hostname, but no http(s)!) in what we were passing
to the Django templates for our digest emails, which resulted in the
Django templates treating these URLs as http. Gmail recently cracked
down on loading images over HTTP, causing the emoji links to appear
broken in emails Zulip sends.
Fixes #3258.
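A sketch of the kind of fix involved (the helper name is hypothetical):
make the scheme explicit before the URL reaches the email templates.

    def ensure_absolute_https_url(url):
        # type: (str) -> str
        # Image URLs that are just "hostname/path" get treated as http
        # by mail clients; make https explicit instead.
        if url.startswith(('http://', 'https://')):
            return url
        return 'https://' + url.lstrip('/')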
For years, this old helper has been used only by populate_db, and it
became buggy in a recent refactoring. So we just call do_send_messages
directly instead.
Fixes the provisioning error we currently get in Travis CI.
This is a pretty minor change, but it makes it clear that we
have user_id in all the relevant states/events, so we might as
well use that for the check, since email is mutable and
slightly more difficult to reason about.
This changes bugdown to use the realm passed in by the caller (if any)
for rendering, fixing a problem where bots such as the notification
bot would have their messages rendered using the admin realm's
settings, not the settings of the realm their messages are being sent
into.
Also adds a test for the notification bot case.
Fixes #3215.
This moves the realm_filter_key variable, primarily used for clarity,
up from Bugdown into the render_markdown function.
We'll need this for the upcoming commits.
A lot of care has been taken to ensure we're using the realm that the
message is being sent into, not the realm of the sender, to correctly
handle the logic for cross-realm bot users such as the notifications
bot.
In order to correctly handle messages sent by cross-realm bots, we
need to specify the realm that the messages are being sent into in the
send message code path. This commit and its successors convert that
code path to pass the realm the message is being sent into explicitly.
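A stand-in sketch of the convention (the real send-message helpers take
many more arguments than this):

    from typing import Any, Dict, Optional

    class Sender(object):
        def __init__(self, realm):
            # type: (str) -> None
            self.realm = realm

    def check_message(sender, content, realm=None):
        # type: (Sender, str, Optional[str]) -> Dict[str, Any]
        # The caller passes the realm the message is being sent *into*,
        # falling back to the sender's realm only when none is given.
        # For cross-realm bots such as the notification bot, the two
        # realms differ, which is the case this series fixes.
        if realm is None:
            realm = sender.realm
        return dict(sender=sender, realm=realm, content=content)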
Adds a contributor visualization showing the avatar, user name, and
number of commits for each contributor. The JSON data is updated
upon deployment, triggered by the `update-prod-static` script.
This should substantially improve the clarity of the code, since
inside bugdown, this value is only being used as a hash key (one that
usually happens to be a realm ID), not as a Realm ID per se.
This reverts commit 29673411df8dffd50198c0f01c5db8561a782adf, because
the bug is no longer reproducible (it seems likely this was a bug in
Django fixed between Django 1.8 and 1.10).
Fixes #1297.
Finishes the refactoring started in c1bbd8d. The goal of the refactoring is
to change the argument to get_realm from a Realm.domain to a
Realm.string_id. The steps were:
* Add a new function, get_realm_by_string_id.
* Change all calls to get_realm to use get_realm_by_string_id instead.
* Remove get_realm.
* (This commit) Rename get_realm_by_string_id to get_realm.
Part of a larger migration to remove the Realm.domain field entirely.
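Call-site illustration of the rename (the lookup table below is a
stand-in for the Realm model, and the example arguments are made up):

    from typing import Dict

    REALMS_BY_STRING_ID = {'zulip': object()}  # type: Dict[str, object]

    def get_realm(string_id):
        # type: (str) -> object
        # Stand-in for illustration; the real function looks up a Realm
        # row by its string_id (with caching).
        return REALMS_BY_STRING_ID.get(string_id)

    # Before the series, callers wrote get_realm('zulip.com') with a
    # Realm.domain; after this final rename, they write:
    realm = get_realm('zulip')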
Update integration to use the latest Google API client.
Move Google Account authorization code to a separate file.
Move relevant files from 'bots/' to 'api/integrations/google/'.
Add documentation for integration.
Removes the dependence on postmonkey, which is a wrapper around
MailChimp API v1.3. MailChimp recommends using their v3.0 API directly,
rather than through a wrapper library.
While this may not have created a clear user-facing bug, it seems less
confusing for do_invite_users to only create PreregistrationUser
objects for users who actually received an email invitation.
Add tests verifying that the avatar image is uploaded properly in
`tests.BotTest.test_add_bot_with_user_avatar` and
`tests.BotTest.test_patch_bot_avatar`.
Add `get_test_image_file()` and `avatar_disk_path()` to
`zerver.lib.test_helpers` and deduplicate some code.
Fixes #1276.
This adds support for only allowing normal users with account age
equal to or greater than a "waiting period" threshold to create streams;
this is useful for open organizations that want new members to
understand the community before creating streams.
Previously, if the create_stream_by_admins_only setting was set to True,
only admin users were able to create streams. Now normal users whose
account age is greater than or equal to the waiting period threshold can
also create streams.
Account age is defined as the number of days since the user created
their account.
Fixes: #2308.
Tweaked by tabbott to clean up the actual can_create_streams logic and
the tests.
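A simplified sketch of the waiting-period check (how it combines with
create_stream_by_admins_only is as described above; the attribute names
are assumptions, and the actual can_create_streams logic after the
cleanup may differ):

    from datetime import datetime, timedelta
    from typing import Any

    def can_create_streams(user, realm):
        # type: (Any, Any) -> bool
        # `user` and `realm` can be any objects with these attributes.
        if user.is_realm_admin:
            return True
        # "Account age" is the number of days since the account was created.
        account_age = datetime.now() - user.date_joined
        return account_age >= timedelta(days=realm.waiting_period_threshold)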
First step in cleaning up populate_db.create_streams and
bulk_create.bulk_create_streams. Part of a series of commits to remove
Realm.domain from populate_db.
Django 1.10 introduced a change whereby the session hash changes
whenever a user's password is changed. This affects us because we cache
user profile objects, and those cached objects need to be refreshed.
However, the signal Django sends to refresh those objects fails to
refresh the cache for Tornado, because Tornado uses a different cache
prefix.
Note: Backend tests are not affected because they don't rely on Tornado.
This includes making the default stream description setting into a
dict. That is an API change; we'll discuss it in the changelog but it
seems small enough to be OK.
With some small tweaks by tabbott to remove unnecessary backwards
compatibility code for the settings.
Fixes #2427.
This change adds support for displaying inline open graph previews for
links posted into Zulip.
It is designed to interact correctly with message editing.
This adds the new settings.INLINE_URL_EMBED_PREVIEW setting to control
whether this feature is enabled.
By default, this setting is currently disabled, so that we can burn it
in for a bit before it impacts users more broadly.
Eventually, we may want to make this manageable via a (set of?)
per-realm settings. E.g. I can imagine a realm wanting to be able to
enable/disable it for certain URLs.
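Roughly how the toggle is meant to be used (the gating helper below is
illustrative, not the real code path):

    from typing import Any, Callable, Dict, Optional

    # Server-level setting; disabled by default while the feature burns in.
    INLINE_URL_EMBED_PREVIEW = False

    def maybe_render_url_preview(url, fetch_open_graph_data):
        # type: (str, Callable[[str], Optional[Dict[str, Any]]]) -> Optional[Dict[str, Any]]
        # Do no preview work at all unless the server admin has opted in.
        if not INLINE_URL_EMBED_PREVIEW:
            return None
        return fetch_open_graph_data(url)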
A few functions had arguments removed without having their type
annotations updated accordingly. As a result, mypy version 0.4.6
thinks that the first type in the annotation is supposed to be
the type of `self`, a new mypy feature which we are not intending
to use here.
This commit adds support for removing reactions via DELETE requests to
the /reactions endpoint with parameters emoji_name and message_id.
The reaction is deleted from the database and a reaction event is sent
out with 'op' set to 'remove'.
Tests are added to check:
1. Removing a reaction that does not exist fails
2. When removing a reaction, the event payload and users are correct
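A runnable stand-in for the flow being tested (a set of tuples replaces
the Reaction table, and the response/event shapes are simplified):

    from typing import Any, Dict

    # (user_id, message_id, emoji_name) tuples stand in for Reaction rows.
    reactions = set()

    def remove_reaction(user_id, message_id, emoji_name):
        # type: (int, int, str) -> Dict[str, Any]
        key = (user_id, message_id, emoji_name)
        if key not in reactions:
            # Test 1: removing a reaction that does not exist fails.
            return {'result': 'error', 'msg': 'Reaction does not exist'}
        reactions.remove(key)
        # Test 2: an event with 'op' set to 'remove' goes out to the
        # relevant users (recipient computation omitted here).
        event = {'type': 'reaction', 'op': 'remove',
                 'message_id': message_id, 'emoji_name': emoji_name,
                 'user': {'user_id': user_id}}
        return {'result': 'success', 'event': event}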
The markdown files under templates/zerver/help/ are technically not
templates in the standard sense, and thus should not be checked with
this code path.
(We probably do want to add a test to make sure they all render fine,
but that can be its own project.)
This commit adds the following:
1. A reaction model that consists of a user, a message and an emoji that
are unique together (a user cannot react to a particular message more
than once with the same emoji)
2. A reaction event that looks like:
   {
       'type': 'reaction',
       'op': 'add',
       'message_id': 3,
       'emoji_name': 'doge',
       'user': {
           'user_id': 1,
           'email': 'hamlet@zulip.com',
           'full_name': 'King Hamlet'
       }
   }
3. A new API endpoint, /reactions, that accepts POST requests to add a
reaction to a message
4. A migration to add the new model to the database
5. Tests that check that
(a) Invalid requests cannot be made
(b) The reaction event body contains all the info
(c) The reaction event is sent to the appropriate users
(d) Reacting more than once fails
It is still missing important features, like removing reactions and
fetching them alongside messages.
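A sketch of the model from item 1, assuming standard Django fields (the
field names follow this description; the real migration may differ in
details):

    from django.db import models

    class Reaction(models.Model):
        # A user may react to a given message with a given emoji at most
        # once; the unique_together constraint enforces that.
        user_profile = models.ForeignKey('UserProfile', on_delete=models.CASCADE)
        message = models.ForeignKey('Message', on_delete=models.CASCADE)
        emoji_name = models.TextField()

        class Meta(object):
            unique_together = ('user_profile', 'message', 'emoji_name')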
Refactor list_to_streams and create_streams_if_needed to take a list
of dictionaries, instead of a list of stream names. This is
preparation for being able to pass additional arguments into the
stream creation process.
An important note: This removes a set of validation code from the
start of add_subscriptions_backend; doing so is correct because
list_to_streams has that same validation code already.
[with some tweaks by tabbott for clarity]
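Call-site illustration of the new interface (the keys beyond 'name' are
assumptions about the extra arguments this prepares for):

    # Before: a list of bare stream names, e.g.
    #     create_streams_if_needed(realm, ['social', 'engineering'])
    # After: one dict per stream, leaving room for per-stream options.
    stream_dicts = [
        {'name': 'social', 'description': 'Water cooler chatter'},
        {'name': 'engineering', 'description': 'Engineering discussion'},
    ]
    # create_streams_if_needed(realm, stream_dicts)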
Previously, the key prefix was based on the process id, which meant the
JS tests couldn't properly flush user profiles from the cache, since
our application spans multiple processes. The problem becomes apparent
when, after the json_change_settings view changes the user_profile, the
Tornado views continue to get the cached user profile corresponding to
their own process id.
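A before/after sketch of the prefix scheme (the shared-location file
name here is just an assumption for illustration):

    import os

    # Before: each process derived its own prefix, so a flush issued by
    # the test runner or a Django process could not invalidate entries
    # written by Tornado.
    KEY_PREFIX = 'pid:%s:' % (os.getpid(),)

    def get_shared_key_prefix(path='var/test_cache_prefix'):
        # type: (str) -> str
        # After: every process of the same run reads one prefix from a
        # shared location, so flushing a user profile works everywhere.
        with open(path) as f:
            return f.read().strip()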
Clean up the instances of self.assertIn("string", result.content.decode("utf-8")),
and replace them with self.assert_in_response("string").
Fixes: #2313
We now instrument URL coverage whenever you run the back end tests,
and if you run the full suite and fail to test all endpoints, we
exit with a non-zero exit code and report failures to you.
If you are running just a subset of the test suite, you'll still
be able to see var/url_coverage.txt, which has some useful info.
With some tweaks to the output from tabbott.
Fixes #1441.