We enable the data_suffix option when creating Coverage instances, which
causes the output files to include the hostname, pid, and a random id.
Before each run, erase is called, which clears all existing coverage data
files. Then, at the end of the test run, the combine method merges the
reports.
We collect coverage in the main process, which captures data from
imports and also covers single-process mode. In the workers, we collect
coverage in run_subsuite. This creates more stats files than strictly
required, but I don't see a better place to save the stats when stopping
workers.
Note that this has the side effect of enabling parallel testing in
Travis CI.
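Below is a minimal sketch of the coverage.py flow described above; the
real wiring lives in the test runner, and run_suite() here is a
hypothetical stand-in for whatever actually runs the (sub)suite.

```python
import coverage

# data_suffix=True makes coverage write ".coverage.<host>.<pid>.<random>"
# files, so parallel processes never clobber each other's data.
cov = coverage.Coverage(data_suffix=True)
cov.erase()      # clear existing coverage data files before the run
cov.start()

run_suite()      # hypothetical: runs the tests for this process

cov.stop()
cov.save()
cov.combine()    # merge the per-process data files into one report
cov.report()
```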
Previously, every notification preference setting had a dedicated test
and setter. Now, all of them are handled through a single modular
function using the property_types framework.
In the future, the type annotation should use Deque in order to be
compatible with the latest mypy version. See
https://github.com/python/mypy/pull/2845 for more info.
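For reference, a sketch of what such an annotation would look like,
assuming a typing module new enough to provide Deque:

```python
import collections
from typing import Deque

# annotate the element type of the deque explicitly
pending: Deque[str] = collections.deque()
pending.append("first")
```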
If a Markdown macro contains Jinja2 template code, it isn't rendered,
because render_markdown_path calls template.render on the including
.md file before the macro has been included; only then is the including
.md file converted to HTML. As a result, any template code within a
Markdown macro never gets rendered and is returned as-is.
Now, after the source .md file is converted to HTML,
render_markdown_path renders the resulting HTML, so any template code
within included macros is finally rendered.
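A simplified sketch of the ordering change, not the actual Zulip
implementation; the loader path and function body are illustrative.

```python
import jinja2
import markdown

env = jinja2.Environment(loader=jinja2.FileSystemLoader("templates"))

def render_markdown_path(md_path: str, context: dict) -> str:
    # Old order: render Jinja2 on the including .md file, then convert to HTML.
    # Template code pulled in via included macros arrived too late to be rendered.
    md_source = env.get_template(md_path).render(context)
    html = markdown.markdown(md_source)
    # New step: render the resulting HTML once more, so template code that came
    # from included Markdown macros is also expanded.
    return env.from_string(html).render(context)
```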
mypy errors on the "request" attribute of the LogRecord object. Since
this field is added dynamically in our tests and is not on the base
class, for now we ignore the type.
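A sketch of the kind of suppression involved; the record and attribute
usage here are illustrative.

```python
import logging

record = logging.LogRecord("zulip", logging.INFO, __file__, 1, "msg", None, None)
record.request = object()  # attached dynamically, as our tests do

# mypy doesn't know LogRecord has a "request" attribute, so silence it
req = record.request  # type: ignore # "request" is added dynamically in tests
```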
This is probably not the right long-term solution to the cross-realm
bots problem (that solution is probably to eliminate cross-realm bots
and replace them with per-realm bots). But in the short term, this
will at least make it possible for mobile apps to interact with these
cross-realm bots using the `realm_user` data set.
The changes made to get_push_commits_event_message in
zerver/lib/webhooks/git.py are common to all Git integrations that use
it: github, github_webhook, gitlab, gogs, bitbucket, and bitbucket2. In
some cases (for instance, gitlab), no further changes to gitlab/view.py
will be required to support pushing a local branch without commits;
adding a fixture and tests should suffice.
Now this function deletes tokens from RemotePushDeviceToken if it is
running on the notification bouncer, or from PushDeviceToken if it is
running on a server that doesn't use the notification bouncer.
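A rough sketch of the branching described; the settings flag and the
exact filter fields are illustrative rather than the real Zulip code.

```python
from django.conf import settings
from zerver.models import PushDeviceToken
from zilencer.models import RemotePushDeviceToken

def remove_push_device_token(user_profile, token_str: str, kind: int) -> None:
    if settings.RUNNING_AS_PUSH_BOUNCER:  # hypothetical flag name
        # running on the notification bouncer itself
        RemotePushDeviceToken.objects.filter(token=token_str, kind=kind).delete()
    else:
        # a regular server that stores its own tokens
        PushDeviceToken.objects.filter(
            user=user_profile, token=token_str, kind=kind
        ).delete()
```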
The regex we were using didn't cover all the Unicode blocks to which
our emoji belong. This commit fixes the regex to include all of those
blocks and also updates the corresponding JS regex in marked.js.
Fixes: #3460.
Unicode codepoints are written with a minimum length of 4 hex digits,
padded with leading zeroes when shorter. This commit fixes
`unicode_emoji_to_codepoint()` to ensure that the codepoints it
generates have the correct length.
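A minimal sketch of the zero-padding behavior described; the real
implementation may differ in details.

```python
def unicode_emoji_to_codepoint(unicode_emoji: str) -> str:
    codepoint = hex(ord(unicode_emoji))[2:]
    # pad with leading zeroes so the codepoint is at least 4 hex digits long
    return codepoint.zfill(4)

assert unicode_emoji_to_codepoint("\u2764") == "2764"   # HEAVY BLACK HEART
assert unicode_emoji_to_codepoint("\u00a9") == "00a9"   # COPYRIGHT SIGN, padded
```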
When we create a stream, we usually send a welcome message on the
stream itself as well as an announcement on the announcement stream,
but we no longer PM the individual users. Hopefully this will be
more pleasant for users (less spammy), and it also will make creating a
stream a lot faster.
We still send notifications when we add subscribers to an existing
stream.
We now pre-populate the streams in DEFAULT_NEW_REALM_STREAMS
(social/general/zulip, unless somebody changes settings.py) with
welcome messages. This makes the streams appear to be active
right away, and it also gives the Zulip realm less of a
blank-slate feeling when you create it.
This change only affects the normal web-based create-realm flow.
It doesn't impact the management commands for creating realms
or setting default streams.
This makes the new user experience in an active community like
chat.zulip.org substantially nicer, since the new user will have the
same level of initial messages to populate topics (etc.) as an
existing user who is caught up.
Without this, there was an undue level of fading-for-inactivity in the
default streams.
This fixes 2 issues:
* The term "@-mentioned" is simplified to "mentioned".
* We would incorrectly list other people who sent context messages as
among the people who mentioned you.
Now, in the event of messages between two other members of a huddle,
the missed message emails are threaded in "Group PMs with name1 and
name2" and not in separate threads by sender.
Also, the order of recipients in get_display_recipient is now
consistent with the order of names that appears in the list of personal
messages in the left sidebar.
Fixes most of #4553.
Since realm emoji are now required to be lowercase,
an appropriate migration was added to retroactively
fix any emoji that might have contained uppercase
letters.
Also, the validator on the model was changed to
reject uppercase letters.
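A rough sketch of the kind of data migration involved; the migration
dependency below is a placeholder, not the actual one.

```python
from django.db import migrations

def emoji_to_lowercase(apps, schema_editor):
    RealmEmoji = apps.get_model("zerver", "RealmEmoji")
    for emoji in RealmEmoji.objects.all():
        lowered = emoji.name.lower()
        if lowered != emoji.name:
            emoji.name = lowered
            emoji.save(update_fields=["name"])

class Migration(migrations.Migration):
    dependencies = [
        ("zerver", "0001_initial"),  # placeholder for the real previous migration
    ]
    operations = [
        migrations.RunPython(emoji_to_lowercase),
    ]
```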
These handlers kick into action when is_signup is False. If the account
exists, the user is logged in; otherwise, the user is asked whether they
want to proceed to registration.
This fixes a major performance issue, where we would fetch
user_profile objects inside a code path that had already bulk-fetched
the necessary user objects.
Like the similar related changes we just made, the fix is to marshal
and pass the data into the avatar library directly.
Due to the refactoring of the avatar URL codepath that added realm IDs
to the URLs, we ended up calling `get_user_profile_by_email` inside
`get_avatar_url`, which in turn was called in a loop over all users
in a realm.
Needless to say, this resulted in a significant performance problem.
We fix this issue by passing in the data needed to compute the avatar
URL, rather than looking it up by email address.
This is one of the last major endpoints that were still done in the
pre-REST style.
While we're at it, we change the endpoint to expect a stream ID, not a
stream name.
This fixes most cases where we were assigning an email address to the
variable email and then calling get_user_profile_by_email with that
variable.
(This was fixed mostly with a script.)
The example_user() function is specifically designed for
AARON, hamlet, cordelia, and friends, and it provides a concise
way of using their built-in user profiles. Eventually, the
widespread use of example_user() should help us with refactorings
such as moving the test users out of the "zulip.com" realm
and deprecating get_user_profile_by_email.
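A simplified sketch of the pattern being replaced, not an exact excerpt
from the test suite; the module paths are from memory and may differ.

```python
from zerver.lib.test_classes import ZulipTestCase
from zerver.models import get_user_profile_by_email

class ExampleTest(ZulipTestCase):
    def test_lookup(self) -> None:
        # before: look the user up by a hard-coded email address
        user_profile = get_user_profile_by_email("hamlet@zulip.com")

        # after: use the built-in example user directly
        user_profile = self.example_user("hamlet")
        self.assertEqual(user_profile.email, "hamlet@zulip.com")
```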
On slower systems or in virtual environments, when running tests in
parallel mode, the time taken to send messages in this test can exceed
1 second, which resets the limit.
We used to use constructions like
from_email = "Zulip <%s>" % (settings.NOREPLY_EMAIL_ADDRESS,)
but no longer do. References to settings.NOREPLY_EMAIL_ADDRESS in the
codebase no longer append a display name.
This commit is a step towards the goal of replacing most of the
send_future_email pathway with a call to send_email.
Note that this commit changes the default value of sender from "Zulip
<NOREPLY_EMAIL_ADDRESS>" to "NOREPLY_EMAIL_ADDRESS". NOREPLY_EMAIL_ADDRESS
will soon be changed to include the "Zulip" display name in front.
Note that the correctness of this commit relies on the fact that
send_future_email also sets the sender to settings.NOREPLY_EMAIL_ADDRESS by
default (in the body of the function).
This commit replaces all uses of django.core.mail.send_mail with send_email,
other than in the password reset flow, since that code looks like it is just
a patch to Django's password reset code.
The send_email function is in a new file, since putting it in
zerver.lib.notifications would create an import loop with confirmation.models.
send_future_email will soon be moved into email.py as well.
Fixes regression introduced in 326f9a85. The test indirectly makes a call to
email_is_not_mit_mailing_list, which then calls
DNS.dnslookup("%s.pobox.ns.athena.mit.edu" % username, DNS.Type.TXT).
If a user is trying to register for a mit zephyr mirroring realm, we send
them a specific registration email with a link to a few more instructions.
There is only one server that we know about that has such a realm, and that
server uses subdomains. This commit changes the logic to work in the
subdomains case, rather than in the non-subdomains case (though see the
next paragraph).
Note that the current check is deceptive, and is not actually correct in the
non-subdomains case. The prereg user has a realm only in the atypical case
of someone registering via the special URL for completely-open realms.
To do this correctly in the non-subdomains case, we would need to copy a
bunch of the logic from the beginning of accounts_register to figure out
which realm the user is signing up for, so that we can check if that realm
is a zephyr mirroring realm. Given how complicated the registration code is
already, I think it is probably not worth it at the moment. This commit also
removes the partial (deceptive) check, since I think it does more harm than
good.
Relies on the fact that all the email template names now follow the same
pattern.
Note that there was some template_prefix-like computation being done in
send_confirmation (conditioned on obj.realm.is_zephyr_mirror_realm); that
computation is now being done in the callers.
Specifically, this makes data on the server's configuration easily
available to the desktop and mobile apps, including important details
like the realm icon, name, and description.
It deprecates /api/v1/get_auth_backends.
This increase in the list of needed fields carries a small performance
cost, but it should be very small, and this change is needed to
support outgoing webhooks without additional database queries.
Also puts them into a processing queue, though the queue processor
does nothing.
Rewritten by tabbott to avoid unnecessary database queries in
do_send_messages.
Previously, api_key_only_webhook_view passed 3 positional arguments
(request, user_profile, and client) into the decorated view function.
However, most of our other auth decorators pass only 2 positional
arguments. For the sake of consistency, api_key_only_webhook_view now
sets request.client and passes only request and user_profile as
positional arguments.
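A sketch of what a webhook view signature looks like after this change,
assuming a hypothetical ExampleWebhook integration:

```python
from zerver.decorator import api_key_only_webhook_view
from zerver.lib.response import json_success

@api_key_only_webhook_view("ExampleWebhook")  # hypothetical client name
def api_example_webhook(request, user_profile):
    # The client is now available as request.client rather than being
    # passed as a third positional argument, matching the other decorators.
    return json_success()
```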
This is a better solution to the problem of how _pg_re_escape should
handle the null character. There's really no good reason to have a
null character in a stream name.
The new function takes a full UserProfile object for the sender,
which allows us to avoid O(N) calls when creating the stream to
find the user profile of the notification bot. (The calls were
already cached, so this won't necessarily be a huge performance
win.)
We also don't have to worry about sending a blank subject any more.
The new, more direct interface for prepping internal stream
messages circumvents the bug-prone extract_recipients() method,
which has the pitfall that it will try to parse a stream name
as JSON. It also takes a UserProfile object for the sender, so
it's a bit more type-safe.
- Add file_name field to `RealmEmoji` model and migration.
- Add emoji upload support to the upload backends.
- Add uploaded file processing to the emoji views.
- Use the emoji source URL as the base for the display URL.
- Change the emoji form to support image uploading.
- Fix back-end tests.
- Fix front-end tests.
- Add tests for emoji uploading.

Fixes #1134.
In cases where the webhook payload doesn't have the username for the
author of a particular commit (this can happen if the author doesn't
have a GitHub account or the author's email is not associated with
their GitHub account), we now use the author's full name to format
messages.
Move the user_profile data section down into fetch_initial_state_data
so it entirely pulls from register_ret for #3853.
This field requires some changes to the events race handling.
This moves the avatar_ fields in page_params to come from
register_ret. Unlike many fields, changing this had a bit of
complexity, because the avatar update events didn't actually contain
some of the details required for moving these into register_ret to
work correctly without races.
We fix that as part of this change.
Modified significantly by tabbott.
The function internal_prep_message is kind of awkward to
call, so I'm moving most of its implementation to
_internal_prep_message() for upcoming refactorings.
This makes it possible for the Zulip mobile apps to use the normal web
authentication/OAuth flows, so that they can support GitHub, Google,
and other authentication methods we support on the backend, without
needing to write significant custom mobile-app-side code for each
authentication backend.
This PR only provides support for Google auth; a bit more refactoring
would be needed to support this for the GitHub/Social backends.
Modified by tabbott to use the mobile_auth_otp library to protect the
API key.
We'll need to implement a version of the simple decoding/decryption
logic used by this library in the mobile code as well, but that should
be simple enough.
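A minimal sketch of the XOR-based one-time-pad scheme the mobile client
would need to reverse, assuming the API key is hex-encoded and XORed
with a same-length hex OTP (see zerver/lib/mobile_auth_otp.py for the
real implementation):

```python
def ascii_to_hex(plain: str) -> str:
    return "".join(format(ord(c), "02x") for c in plain)

def hex_to_ascii(hexed: str) -> str:
    return "".join(chr(int(hexed[i:i + 2], 16)) for i in range(0, len(hexed), 2))

def xor_hex_strings(a: str, b: str) -> str:
    assert len(a) == len(b)
    return "".join(format(int(x, 16) ^ int(y, 16), "x") for x, y in zip(a, b))

def otp_decrypt_api_key(encrypted_api_key: str, otp: str) -> str:
    return hex_to_ascii(xor_hex_strings(encrypted_api_key, otp))

# round-trip check with illustrative values
api_key = "abcd1234efgh5678"
otp = "0f" * len(api_key)
encrypted = xor_hex_strings(ascii_to_hex(api_key), otp)
assert otp_decrypt_api_key(encrypted, otp) == api_key
```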
All webhook fixtures in zerver/fixtures/<webhook_name> have now
been moved to dedicated webhook-specific directories under
zerver/webhooks/<webhook_name>/fixtures, where <webhook_name> is
the name of the webhook.
This completes a major redesign of the Zulip login and registration
pages, making them look much more slick and modern.
Major features include:
* Display of the realm name, description and icon on the login page
and registration pages in the subdomains case.
* Much slicker looking buttons and input fields.
* A new overall style for the exterior of these portico pages.
This new feature makes it possible to request initial data for a
different set of event types than the event_types an API client is
subscribing to. It is primarily useful for mobile apps, where bandwidth
constraints might mean one wants to subscribe to events for a broader
set of data than is initially fetched, and plan to fetch the current
state in future requests.
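A sketch of what such a register call might look like, assuming the new
parameter is named fetch_event_types and using a hypothetical server and
bot account:

```python
import requests

# Subscribe to a broad set of events, but only fetch initial state for "realm".
resp = requests.post(
    "https://chat.example.com/api/v1/register",
    auth=("example-bot@example.com", "EXAMPLE_API_KEY"),
    data={
        "event_types": '["message", "realm", "realm_user", "subscription"]',
        "fetch_event_types": '["realm"]',
    },
)
print(sorted(resp.json().keys()))
```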
For our Git integrations, we now only display the number of commits
pushed when the pusher also happens to be the only author of the
commits being pushed.
Part of #3968.
Follow-up to #4006.
For Git push messages, we now have a single space character between
the name of a commit's author and the number of commits by that
author, plus a period at the end.
Part of #3968.
Follow-up to #4006.
We now use the name of the author of a commit as opposed to the
committer to format Git messages with multiple committers.
Part of #3968.
Follow-up to #4006.
- Add aggregated info to real-time presence status updates.
- Update the `presence events` test case to include the aggregated
  information in the presence event.
- Add a test case for updating the presence status of a user who sends
  state from multiple clients.

Fixes #4282.
We need to reset the INSTRUMENTED_CALLS variable before running every
subsuite. If we do not do this, a subsuite running on a particular
process A will send along the accumulated instrumented calls gathered by
the previous subsuites that also ran on process A.
This also fixes the extra delay that we used to experience after the
tests had finished running. The extra delay was due to the duplicate
instrumented calls in the INSTRUMENTED_CALLS list. The size of this
list used to be ~100k in parallel mode as opposed to ~1800 in serial
mode.
This commit creates a dedicated file upload directory for every process
when we are running tests in parallel mode. This makes sure that we do
not run into any race conditions due to multiple processes accessing
the same upload directory.
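A small sketch of the per-process isolation described; the directory
naming is illustrative, not the actual test-runner code.

```python
import os

def worker_upload_dir(base_dir: str, worker_id: int) -> str:
    # Give each parallel test process its own upload directory, so two
    # workers never read or write the same files.
    path = os.path.join(base_dir, "test_uploads_worker_%d" % (worker_id,))
    os.makedirs(path, exist_ok=True)
    return path
```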
This fixes a confusing issue where a user might try resetting the
password for an email account that is part of a different Zulip
organization.
It is a useful early step towards making Zulip support reusing an email
in multiple realms.
Fixes: #4557.
This fixes a performance problem where we were previously starting up
a full Django process (~0.7s even on a fast machine) every time a new
email came in, potentially allowing users to accidentally DoS a Zulip
server. Now, we just post over HTTPS, allowing the existing thread
pool support to do its job.
- Add a script wrapper that forwards messages from the postfix pipe to
  the Django web server over HTTP(S), using shared_secret
  authentication.
- Add a Django view to process messages from the email mirror server.
- Clean up the `email-mirror` management command, leaving just the
  functionality needed for cron email processing.
- Add routes for the new Tornado view.
- Update the pipe script in the postfix master process config template
  to use the new script.
- Add tests.

Tweaked by tabbott to adjust the directory and set better defaults.

Fixes #2421.
Rename the 'zulip_internal' decorator to 'require_server_admin', and
add documentation for 'server_admin', explaining how to grant
permission for the ./activity page.
Fixes: #1463.
This is basically just using the new check_dict_only everywhere, with
a few exceptions:
* New self.check_events_dict automatically adds the id field to avoid
duplicating it ~80 times.
* Set log=False for many of the testing action functions to remove the
timestamp field from their returned event dictionaries, since it's
not needed and is the result of a deprecated log_event function.
I wasn't sure whether the subscription_field list in
do_test_subscribe_events could contain optional arguments, so I left the
call to check_dict there, along with a TODO.
Fixes: #1370.
In this commit we add a logout wrapper so that developers can simply
call self.logout() instead of making a POST request to the API logout
endpoint. This is achieved by adding a wrapper around the client.logout
method that Django's TestCase provides; we extend ZulipTestCase to have
a logout function.
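A minimal sketch of the wrapper described above:

```python
from django.test import TestCase

class ZulipTestCase(TestCase):
    def logout(self) -> None:
        # thin wrapper around Django's test client logout
        self.client.logout()
```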
We now show a few new things:
(1) The number of commits pushed.
(2) Who authored the commits (just counts, not which specific ones, for brevity).
Add tests for the case of multiple committers.
Part of #3968.
The exclude variable was superfluous. The realm properties
listed in the exclude variable are not in the
realm.property_types dict, so they do not need to
be explicitly excluded.
This refactors the function fetch_initial_state_data to use the
realm.property_types attribute, avoiding some unnecessary code
duplication.
Tweaked slightly by tabbott to simplify the code a bit.
Addresses part of issue #3854.
This is an incomplete cleaned-up continuation of Lisa Neigut's push
notification bouncer work. It supports registration and
deregistration of individual push tokens with a central push
notification bouncer server.
It still is missing a few things before we can complete this effort:
* A registration form for server admins to configure their server for
this service, with tests.
* Code (and tests) for actually bouncing the notifications.
In cases where old unread messages in the home view might have been
leaked (either due to bugs or unusual muting interactions), it's
theoretically possible for the first unread message in the home view
to be far older than the pointer.
Since the Zulip mobile app is loading messages following the
use_first_unread logic, we need to plug this gap.
Probably a longer-term solution will involve changing how
update_message_flags works to automatically advance the pointer, but
this change should make it possible for the mobile apps to
consistently use the `use_first_unread` mechanism for fetching the
latest home view messages.
With tweaks to the tests by tabbott.
Fixes zulip/zulip-mobile#422.
The previous logic was that anyone with a link to a file could send it
to other users, but only the owner could make a file realm-public.
This had some confusing corner cases.
The new logic is much simpler:
* Only the file's owner/uploader can include a file in a message for
the first time.
* Anyone with access to read a file can share it with others by
including it in messages they send.
* Once a file has been sent to a public stream, any user in the realm
can access it.
In this commit we fix the occasionally failing
test_home.HomeTest.test_bad_narrow, which broke because test_upload
patched the global settings to add some new emails to
CROSS_REALM_BOT_EMAILS without rolling the change back.
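A sketch of the kind of scoped patching that avoids the leak, assuming
Django's override_settings; the bot email is illustrative.

```python
from django.conf import settings
from django.test import override_settings

# The change is confined to the with-block and restored automatically on
# exit, so later tests (e.g. test_bad_narrow) never see the patched value.
with override_settings(CROSS_REALM_BOT_EMAILS={"some-bot@zulip.com"}):
    assert "some-bot@zulip.com" in settings.CROSS_REALM_BOT_EMAILS
assert "some-bot@zulip.com" not in settings.CROSS_REALM_BOT_EMAILS
```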
Textsearch-based full text search doesn't match text inside link tags,
but PGroonga-based full text search can. Without this change,
highlighting text inside a link tag generates broken HTML.