This commit uses the Markdown extension defined in
zerver/lib/bugdown/api_generate_examples to generate the
example fixture and code example, so that both are tested
in zerver/lib/api_test_helpers.
Some of our code examples can only be run with administrator
credentials (such as create-user). Thus, the Markdown extension
for generating code examples should have an option to include
the lines that recommend using an admin zuliprc instead of a
non-admin one.
This commit uses the Markdown extension defined in
zerver/lib/bugdown/api_generate_examples to generate the
example fixture and code example, so that both are tested
in tools/lib/api_tests.
The Markdown extension defined in
zerver/lib/bugdown/api_generate_examples depended on code in the
tools/lib/* directory, which caused the production tests to fail, since
the tools/ directory doesn't exist in a production environment.
This commit adds a Markdown extension that allows the following
syntax,
{generate_code_example|<md_file_name>|<fixture or method>}
to generate code examples and fixtures found in tools/lib/api_tests.py
and templates/zerver/api/fixtures.json, respectively.
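As a rough illustration (not the extension's actual code), the
preprocessor half of such an extension might look like the sketch
below; the generate() body is a placeholder for the logic that loads
from tools/lib/api_tests.py or templates/zerver/api/fixtures.json:
```python
import re
from typing import List

from markdown.preprocessors import Preprocessor

REGEXP = re.compile(r'\{generate_code_example\|\s*(.+?)\s*\|\s*(.+?)\s*\}')

class APICodeExamplesPreprocessor(Preprocessor):
    def run(self, lines: List[str]) -> List[str]:
        output = []
        for line in lines:
            match = REGEXP.search(line)
            if match:
                output.extend(self.generate(match.group(1), match.group(2)))
            else:
                output.append(line)
        return output

    def generate(self, md_file_name: str, kind: str) -> List[str]:
        # Placeholder: load the code example or fixture named by the
        # directive and return it as fenced-code lines.
        return ['```', '# %s (%s)' % (md_file_name, kind), '```']
```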
We'll replace this primarily with per-realm quotas (plus the simple
per-file limit of settings.MAX_FILE_UPLOAD_SIZE, 25 MiB by default).
We do want per-user quotas too, but they'll need some more management
apparatus around them so an admin has a practical way to set them
differently for different users. And the error handling in this
existing code is rather confused. Just clear this feature out
entirely for now; then we'll build the per-realm version more cleanly,
and then we can later add back per-user quotas modelled after that.
The migration to actually remove the field is in a subsequent commit.
Based in part on work by Vishnu Ks (hackerkid).
In order to get test coverage on topic name checks, we
do them in Addressee, so that we don't hit an assertion
first. The assertion in question is in Addressee.topic(),
and it was added partly to appease mypy.
Previously we had a problem of id clashes while importing converted
Slack data into an existing Zulip instance whose realms were actively
populating the database.
This counts the total objects to be imported and, in a db transaction,
increases the SEQUENCE number for that table by that count, thereby
allocating a range of ids for the to-be-converted Slack data objects.
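A sketch of the allocation step, assuming PostgreSQL and Django (names
are illustrative, not necessarily the real helper):
```python
from typing import Any, List

from django.db import connection, transaction

def allocate_ids(model: Any, count: int) -> List[int]:
    # Advance the table's id sequence by `count` in one transaction,
    # reserving that range of ids for the converted Slack objects.
    table = model._meta.db_table
    with transaction.atomic():
        with connection.cursor() as cursor:
            cursor.execute(
                "SELECT nextval(pg_get_serial_sequence(%s, 'id')) "
                "FROM generate_series(1, %s)",
                [table, count])
            return [row[0] for row in cursor.fetchall()]
```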
Adds a check for newlines that was present on the backend, but missing
in the frontend markdown implementation. Updating messages now uses the
is_me_message flag received from the server instead of its own partial
test. Similarly, rendering previews uses the markdown code.
Fixes #6493.
This is the first step for allowing users
to edit a bot's service entries, namely the
outgoing webhook configuration entries. The
chosen data structures allow for a future
with multiple services per bot; right now,
only one service per bot is supported.
This is responsible for:
1.) Handling all the incoming requests at the
messages endpoint which have the defer param set. This is similar to
send_message_backend, except that instead of actually
sending a message it schedules one to be sent later.
2.) Doing some preliminary checks, such as validating the timestamp
for scheduling a message, preventing scheduling a message in the
past, and ensuring the correct format of the message to be scheduled.
3.) Extracting the time of scheduled delivery from the message.
4.) Adding tests for the newly introduced function.
5.) timezone: Adding get_timezone() to obtain a tz object from a
string (see the sketch below). This helps in obtaining a timezone (tz)
object from a timezone specified as a string. This string needs to be
a pytz-defined timezone string, which we use to specify users' local
timezones.
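A minimal sketch of what get_timezone() amounts to, assuming pytz:
```python
import pytz
from pytz.tzinfo import BaseTzInfo

def get_timezone(tz_name: str) -> BaseTzInfo:
    # Raises pytz.UnknownTimeZoneError for an unrecognized name.
    return pytz.timezone(tz_name)
```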
This code takes care of the environment running Python 3.4 when
a test label is passed directly to the test-backend command:
./tools/test-backend test_alert_words
This also amends a commit from Brock Whittaker <brock@zulipchat.com>
that merges two separate functions for YouTube videos and Vimeo videos
into a generic video recall function.
Fixes #7550.
We add two functions:
1.) check_schedule_message(): This function is responsible for
doing the essential initial checks to verify the validity of
the message. These checks include things like whether the user is
allowed to send messages to a given stream and whether the user is
a super_user. All this is basically done by further calling
check_message() with appropriate parameters. This is along the same
lines as check_send_message().
2.) do_schedule_messages(): This function is responsible for
creating ScheduleMessage table rows for a list of messages that
are to be scheduled. This basically accumulates the ScheduleMessage
objects in a list and then bulk-creates the rows (sketched below).
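A simplified sketch of the second function; ScheduleMessage is the
model this commit introduces, and the field names here are assumptions:
```python
from typing import Any, Dict, List

def do_schedule_messages(messages: List[Dict[str, Any]]) -> None:
    # Accumulate ScheduleMessage rows, then bulk-create them in one
    # query rather than saving each row individually.
    rows = []
    for message in messages:
        rows.append(ScheduleMessage(
            sender=message['sender'],                    # assumed field
            content=message['content'],                  # assumed field
            scheduled_timestamp=message['deliver_at'],   # assumed field
        ))
    ScheduleMessage.objects.bulk_create(rows)
```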
This adds UI fields in the bot settings for specifying
configuration values like API keys for a bot. The names
and placeholder values for each bot's config fields are
fetched from the bot's <bot>.conf template file in the
zulip_bots package. This also adds giphy and followup
as embedded bots.
* For strikethrough formatting: Slack's '~strike~' to Zulip's '~~strike~~'.
* For bold formatting: Slack's '*bold*' to Zulip's '**bold**'.
* For italic formatting: Slack's '_italic_' to Zulip's '*italic*'.
* For mention formatting: Slack's <@slack_id|short_name> to Zulip's @**full_name**.
* Checking links.
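A rough sketch of the text substitutions (illustrative regexes only;
the mention conversion additionally needs the Slack-id-to-full-name
mapping and is omitted):
```python
import re

def convert_slack_formatting(text: str) -> str:
    # Order matters: convert *bold* before _italic_, since the italic
    # rule emits single asterisks that the bold rule would re-match.
    text = re.sub(r'~([^~\n]+)~', r'~~\1~~', text)    # ~strike~  -> ~~strike~~
    text = re.sub(r'\*([^*\n]+)\*', r'**\1**', text)  # *bold*    -> **bold**
    text = re.sub(r'_([^_\n]+)_', r'*\1*', text)      # _italic_  -> *italic*
    return text
```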
load_bot_config_template(bot) parses the <bot>.conf
template file, which can be found in the zulip_bots
package for each bot. It then returns the INI content
of that file as a dict.
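A sketch under the assumption that the template lives next to the
bot's module in the zulip_bots package and uses the bot's name as its
section header:
```python
import configparser
import importlib
import os
from typing import Dict

def load_bot_config_template(bot: str) -> Dict[str, str]:
    # Locate <bot>.conf inside the zulip_bots package (path logic is
    # an assumption) and return its section as a plain dict.
    module = importlib.import_module('zulip_bots.bots.' + bot)
    path = os.path.join(os.path.dirname(module.__file__), bot + '.conf')
    parser = configparser.ConfigParser()
    with open(path) as template:
        parser.read_file(template)
    return dict(parser[bot])
```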
Currently the zip file is extracted to the root of the
zulip directory no matter where the zip file is located.
The extracted data is not useful after running the command,
which pollutes the zulip directory. It makes more sense to
extract it to the same directory as the zip file, especially
when the zip file gets downloaded to /temp, as in the tests.
This commit adds a setting to limit creation of generic bots
to admins for realms that want that restriction. (Generic
bots, apart from being considered spammy on some realms,
have less locked down permissions than webhook bots).
Fixes #7066.
We no longer have a special UI setting and model
field ("emoji_alt_code") for saying users want text-only
emojis. We now instead make "text" be a fifth choice
for "emojiset".
Fixes #7406.
This commit does the following:
* Move the Arguments table data from stream-message.md and
private-message.md to a JSON file.
* Add a Markdown extension that allows one to include and render
a table from a JSON file like so:
{generate_arguments_table|arguments.json|private-stream.md}
* Use Bootstrap's .table class to format the table instead of
relying on custom CSS.
This uses the correct regex for strikethrough. Also adds
a test to make sure that strikethrough works when it contains
a link with whitespace.
Fixes #7596.
When I added this "Deployed code" feature to the error reporting,
I apparently hadn't worked out enough of how this code works to
realize that `notify_server_error` may be in a different process,
at a different time and potentially even on a different machine
from the actual error being reported.
Given that architecture, all the data about the error must be computed
in `AdminNotifyHandler`, before sending the report through the queue,
or else it risks being wrong. The job of `notify_server_error` and
friends is only to format the data and send it off. So, move the
implementation of this feature in order to do that.
(@showell added some "nocoverage" directives here for code that
is hard to test (exceptions being thrown, deployment files not
existing) and that was originally part of a file that didn't
require 100% coverage)
This helps prevent them from diverging and getting different sets of
features and fixes. As a bonus, the email path gets a nice tweak that
the Zulip path has had for years, since f7f2ec0ac, which makes the
emails clearer and less broken-looking when logging a message with no
stack trace.
This name hasn't been right since f7f2ec0ac back in 2013; this handler
sends the log record to a queue, whose consumer will not only maybe
send a Zulip message but definitely send an email. I found this
pretty confusing when I first worked on this logging code and was
looking for how exception emails got sent; so now that I see exactly
what's actually happening here, fix it.
This is just a basic Dropbox webhook integration. It just
notifies a user when something has changed; it does not
specify what changed. Doing so would require storing data,
as the Dropbox API was created mainly for file managers, not
integrations like this.
Closes #5672.
We have shifted to a generic queue to send all the emails. This queue
can retry in case of network issues; this makes sure that the emails are
always sent.
The name `create_logger` suggests something much bigger than what this
function actually does -- the logger doesn't any more or less exist
after the function is called than before. Its one real function is to
send logs to a specific file.
So, pull out that logic to an appropriately-named function just for
it. We already use `logging.getLogger` in a number of places to
simply get a logger by name, and the old `create_logger` callsites can
do the same.
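A sketch of the resulting split (the formatter string, logger name,
and file path are illustrative):
```python
import logging

def log_to_file(logger: logging.Logger, filename: str) -> None:
    # The one real job the old create_logger performed.
    handler = logging.FileHandler(filename)
    handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)-8s %(message)s'))
    logger.addHandler(handler)

# Old create_logger callsites reduce to:
logger = logging.getLogger('zulip.example')          # hypothetical name
log_to_file(logger, '/var/log/zulip/example.log')    # hypothetical path
```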
From the docs:
> This function does nothing if the root logger already has handlers
> configured for it.
Which we do if we've started up Django and configured settings, and in
particular allowed Django to process `settings.LOGGING`.
So, cut it out -- all it can do is confuse people about how logging
works.
If we ever actually used the `log_format` parameter, this would be
doubly confused, because only the first call would have any effect.
Because calls to `create_logger` generally run after settings are
configured, these would override what we have in `settings.LOGGING` --
which in particular defeated any attempt to set log levels in
`test_settings.py`. Move all of these settings to the same place in
`settings.py`, so they can be overridden in a uniform way.
This is already the loglevel we set on the root logger, so this has no
effect -- except in tests, where `test_settings.py` attempts to set
some of these same loggers to higher loglevels. Because the
`create_logger` call generally runs after we've configured settings,
it clobbers that effect.
The code in `test_settings.py` that tries to suppress logs only works
because it also sets `propagate=False`, which has nothing to do with
loglevels but does cause logs at this logger (and descendants) to be
dropped completely unless we've configured handlers for this logger
(or one of its relevant descendants).
Adds a markdown preprocessor that finds ordered lists where all items
use the same number and changes them to be in normal increasing order,
starting with that number.
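A simplified sketch of the renumbering pass (the real preprocessor
only rewrites lists whose items all share the same number; this
version renumbers any run of numbered lines):
```python
import re
from typing import List

LI_RE = re.compile(r'^(\s*)(\d+)\.(\s+)(.*)$')

def renumber(lines: List[str]) -> List[str]:
    output = []
    counter = None
    for line in lines:
        match = LI_RE.match(line)
        if match is None:
            counter = None  # a non-list line ends the current list
            output.append(line)
            continue
        if counter is None:
            counter = int(match.group(2))  # start from the first item's number
        output.append('%s%d.%s%s' % (match.group(1), counter,
                                     match.group(3), match.group(4)))
        counter += 1
    return output
```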
Fixes #5159.
The original logic is buggy now that emails can belong to (and be
invited to) multiple realms.
The new logic in the `invites` queue worker also avoids the bug where
when the PreregistrationUser was gone by the time the queue worker got
to the invite (e.g., because it'd been revoked), we threw an exception.
[greg: fix upgrade-compatibility logic; add test; explain
revoked-invite race above]
This code changes frequently enough that errors are bound to creep in. The
main change is that this sends the original invitation email instead of the
reminder email, but I think that's fine.
Empirically, the retry in `_on_connection_closed` didn't actually work
-- if a reconnect failed, that was it, and the exception handler
didn't get run. A traceback would get logged, but all its frames were
in Tornado or Pika, not our own code; presumably something magic and
async was happening to the exception.
Moreover, though we would make one attempt to reconnect if we had a
connection that got closed, we didn't have any form of retry if the
original attempt at connecting failed in the first place.
Happily, upstream offers a perfectly reasonable bit of API that avoids
both of these problems: the on-open-error callback. So use that.
This method was new in Tornado 4.0. It saves us from having to get
the time ourselves and do the arithmetic -- which not only makes the
code a bit shorter, but also easier to get right. Tornado docs (see
http://www.tornadoweb.org/en/stable/ioloop.html) say we should have
been getting the time from `ioloop.time()` rather than hardcoding
`time.time()`, because the loop could e.g. be running on the
`time.monotonic()` clock.
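The change amounts to something like this (the callback and delay are
illustrative):
```python
import tornado.ioloop

def on_open() -> None:
    print('connection attempt')  # illustrative callback

retry_secs = 5  # illustrative delay
ioloop = tornado.ioloop.IOLoop.current()

# Before -- hand-rolled deadline, hardcoding the wall clock:
#   ioloop.add_timeout(time.time() + retry_secs, on_open)
# After (Tornado >= 4.0) -- the loop does the arithmetic on its own clock:
ioloop.call_later(retry_secs, on_open)
```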
Adding it afterward is inherently racy, and upstream's API is quite
reasonable for avoiding that -- just like we can pass an on-open
callback up front, we can do the same with the on-close callback.
This is a more thorough version of 4adf2d5c2 from back in 2013-04.
The default value of this parameter is already False upstream.
(It was already False in pika version 0.9.6, which we were
supposedly using when we introduced this in 4baeaaa52; not sure
what the story was there.)
Previously, we weren't doing a proper left join in
user_groups_in_realm_serialized, resulting in empty user groups being
excluded from the query. We want to leave decisions about excluding
empty user groups to the UI layer, so we include these here.
Because we use access_stream_by_id here, and that checks for an active
subscription to interact with a private stream, this didn't work.
The correct fix is to add an option to access_stream_by_id to accept an
argument indicating whether we need an active subscription; for this
use case, we definitely do not.
[Modified by greg to (1) keep `USERNAME_FIELD = 'email'`,
(2) silence the corresponding system check, and (3) ban
reusing a system bot's email address, just like we do in
realm creation.]
As we migrate to allow reuse of the same email with multiple realms,
we need to replace the old "no email reuse" validators. Because
stealing the email for a system bot would be problematic, we still ban
doing so.
This commit only affects the realm creation logic, not registering an
account in an existing realm.
Originally this used signals, namely SIGRTMIN. But in prod, the
signal handler never fired; I debugged fruitlessly for a while, and
suspect uwsgi was foiling it in a mysterious way (which is kind of
the only way uwsgi does anything.)
So, we listen on a socket. Bit more code, and a bit trickier to
invoke, but it works.
This was developed for the investigation of memory-bloating on
chat.zulip.org that led to a331b4f64 "Optimize query_all_subs_by_stream()".
For usage instructions, see docstring.
We would allow a user with a valid invitation for one realm to use it
on a different realm instead. On a server with multiple realms, an
authorized user of one realm could use this (by sending invites to
other email addresses they control) to create accounts on other
realms. (CVE-2017-0910)
With this commit, when sending an invitation, we record the inviting
user's realm on the PreregistrationUser row; and when registering a
user, we check that the PreregistrationUser realm matches the realm the
user is trying to register on. This resolves CVE-2017-0910 for
newly-sent invitations; the next commit completes the fix.
[greg: rewrote commit message]
This fixes some subtle JavaScript exceptions we've been getting in
zulipchat.com, caused by the way the system bot realm there not being
"zulip" interacts with get_cross_realm_users.
This should help protect us from future issues with the way that
`bulk_get_users` does caching.
It's likely that we'll want to further restructure `bulk_get_users` to
not have this base_query code path altogether (since it's kinda
buggy), but I'm going to defer that for a time when we have another
user.
The previous implementation had a subtle caching bug: because it was
sharing its cache with the `get_user_profile_by_email` cache, if a
user happened to have an email in that cache, we'd return it, even
though that user didn't match `base_query`.
This causes `get_cross_realm_users` to no longer have a problematic
caching bug.
Hides the URL if the message content is identical to the image URL, so
that sending gifs or images feels less cluttered. Uses the url_to_a()
function to generate the expected URL string for matching.
Fixes #7324.
We include ERROR_BOT in this set, even though it's not technically
cross-realm (it just lives in the admin realm).
This code path does not correctly handle emails that correspond to
multiple accounts (because `get_system_bot` does not). Since it's
intended to only be used by system bots, we add an appropriate
assertion to ensure it is only used for system bots.
This was causing problems, because internal_send_message assumes that
there is a unique user (across all realms) with the given email
address (which is sorta required to support cross-realm bot messages
the way it does).
With this change, it now, in practice, only sends cross-realm bot
messages.
Previously, this was a ValidationError, but that doesn't really make
sense, since this condition reflects an actual bug in the code.
Because this happened to be our only test coverage for the
ValidationError catch on line 84 of registration.py, we add nocoverage
there for now.
This buggy logic from e1686f427c had
broken do-destroy-rebuild-test-database.
Now that we're not just trying to add the Recipient objects for every
user on the system here to profiles_by_id, we also shouldn't be
processing every Recipient object on the system. The fix is simple:
because of the patch we got merged into Django upstream,
recipients_to_create actually has the object IDs added to the
Recipient objects passed into Recipient.objects.bulk_create.
This was missed in manual testing, since it only broke `populate_db
--test-suite`.
An Integration object doesn't need access to the context dict used
to render its doc.md, since the context dict is just passed directly to
render_markdown_path.
This fixes a bug where, when a user is unsubscribed from a stream,
their unread messages on that stream might leak. While it might
seem to be a minor problem, it can cause significant problems for
computing the `unread_msgs` data structures, since it means we need to
add an extra filter for whether the user is still subscribed, either
in the backend or in the UI.
Fixes #7095.
In order to provide more explicit error messages, I have merged the
`emoji_code_is_valid()` and `emoji_name_is_valid()` functions into
`check_emoji_code_consistency()` and `check_emoji_name_consistency()`
respectively.
This often can cause minor caching problems.
Obviously, it'd be better if we had access to the AST and thus could
do this rule for UserProfile objects in general.
Instead of populating the context dict with integration-specific
information in render_markdown_path, we now do that in
zerver.views.integrations.integration_doc instead.
Fixes #7401.
Tweaked by tabbott to use cast to handle the typing issues here.
The character ">" now only starts a blockquote if the resulting
blockquote would be non-empty. Thus, by itself, ">" is now
interpreted literally by bugdown, fixing #687. The message
with contents consisting of ">>>" is now parsed as a doubly
(not triply) nested blockquote with contents ">". Properly
formed blockquotes have identical behavior as before, but now
bugdown can no longer produce empty blockquotes as output.
Fixes #2886, #687.
Storage limitations are only set on the value of
a config entry, since this is the only user-accessible
part of the schema. Keys are statically set by each
embedded bot.
This endpoint will allow us to add/delete emoji reactions whose emoji
got renamed during various emoji infra changes. This was also a
required change for realm emoji migration.
This commit was tweaked significantly by tabbott for greater clarity
(with no changes to the actual logic).
When the RabbitMQ server disappears, we log errors like these:
```
Traceback (most recent call last):
File "./zerver/lib/queue.py", line 114, in json_publish
self.publish(queue_name, ujson.dumps(body))
File "./zerver/lib/queue.py", line 108, in publish
self.ensure_queue(queue_name, do_publish)
File "./zerver/lib/queue.py", line 88, in ensure_queue
if not self.connection.is_open:
AttributeError: 'NoneType' object has no attribute 'is_open'
During handling of the above exception, another exception occurred:
[... traceback of connection failure inside the retried self.publish()]
```
That's a type error -- a programming error, not an exceptional
condition from outside the program. Fix the programming error.
Also move the retry out of the `except:` block, so that if it also
fails we don't get the exceptions stacked on each other. This is a
new feature of Python 3 which is sometimes indispensable for
debugging, and which surfaced this nit in the logs (on Python 2 we'd
never see the AttributeError part), but in some cases it can cause a
lot of spew if care isn't taken.
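The resulting method body is roughly this (a sketch; _reconnect and
the exact exception handling are assumptions):
```python
import logging
import ujson

def json_publish(self, queue_name: str, body: object) -> None:
    try:
        self.publish(queue_name, ujson.dumps(body))
        return
    except Exception:
        logging.exception("Failed to publish to rabbitmq; reconnecting")
    # Retry *outside* the except block, so a second failure is not
    # reported as an exception-during-exception.
    self._reconnect()  # assumed helper
    self.publish(queue_name, ujson.dumps(body))
```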
This commit helps reduce clutter on the navigation sidebar.
Creates new directories and moves relevant files into them.
Modifies index.rst, symlinks, and image paths accordingly.
This commit also enables expandable/collapsible navigation items,
renames files in docs/development and docs/production, and
modifies /tools/test-documentation so that it overrides a theme setting.
It also updates links to other docs, file paths in the codebase that
point to developer documents, and files that should be excluded from
lint tests.
Note that this commit does not update direct links to
zulip.readthedocs.io in the codebase; those will be resolved in an
upcoming follow-up commit (it'll be easier to verify all the links
once this is merged and ReadTheDocs is updated).
Fixes #5265.
While fixing an issue related to email gateway messages not getting
rendered properly, I unknowingly introduced a bug in the markdown
engine update code. This commit fixes it. The issue was that for
a realm with an email gateway set up, updating realm filters would
update only one of the markdown engines, not both.
In remove_members_from_group_backend, we were passing a user group
object to remove_members_from_user_group, which expects a
user_group_id.
This fixes a regression in ae5ba7f4fd,
where Zulip would 500 if the newly added system bots didn't exist on
the server.
This also fixes a moderate size performance problem where we'd fetch 5
users from memcached or the database in a loop.
The intended use of $$ is for inline expressions, not for multiline
ones; ```math is an acceptable alternative for the latter. Hence,
the $$-syntax for inline TeX no longer permits newlines within it.
This was also necessary for the next change to be sensible; namely
allowing for spaces around both $$ when crafting inline TeX instead of
forcing everything to be crammed together, e.g. $$x=7$$. In order to
avoid unintentionally creating inline expressions, the opening and
closing $$'s of an inline expression must now both exactly consist of
two dollar signs, no more and no less.
Fixes: #6488.
Previously, these push notification events were being generated, but
then ignored in handle_push_notification because there was no
user_message object.
Generally emails are not written with markdown in mind and hence
sometimes render in strange ways. This commit fixes a particular
issue that was causing whitespace before paragraphs to be treated
as a code block, which made email content render in a box with a
lot of horizontal scrolling.
Fixes: #7045.
This change affects realm_users and realm_non_active_users.
Note that we still send full avatar urls in realm_user/add
events, so apply_events has to do something mildly hacky to
turn the avatar_url to None in that case.
Fixing the event is probably not worth the trouble, as single
urls are not bandwidth hogs; we only need this optimization
for bulk data.
This change affects these values:
* page_params.avatar_url
* page_params.avatar_url_medium
It requires passing the client_gravatar flag through this
codepath:
* home_real
* do_events_register
* fetch_initial_state_data
* avatar_url
This commit allows clients to register client_gravatar=True, and
then we recognize that flag for message events. If the flag is
True, we will not calculate gravatar URLs and let the clients do
it themselves. (Clients can calculate gravatar URLs based on
emails with just a little bit of code.)
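Client-side, that computation is just the standard Gravatar
construction (the exact query parameters a client uses may differ):
```python
import hashlib

def gravatar_url(email: str) -> str:
    # Gravatar hashes the trimmed, lowercased email with MD5.
    digest = hashlib.md5(email.strip().lower().encode('utf-8')).hexdigest()
    return 'https://secure.gravatar.com/avatar/%s?d=identicon' % (digest,)
```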
This refactoring doesn't change behavior, but it sets us up
to more easily handle a register setting for `client_gravatar`,
which will allow clients to tell us they're going to compute
their own gravatar URLs.
The `client_gravatar` flag already exists in our code, but it
is only used for Django views (users/messages) but not for
Zulip events.
The main change is to move the call to `set_sender_avatar` into
`finalize_payload`, which adds the boolean `client_gravatar`
parameter to that function. And then we update various callers
to supply that flag.
One small performance benefit of this change is that we now
lazily compute the client message payloads in
`event_queue.process_message_event`, so this will improve
performance if all interested clients have the same value of
`apply_markdown`. But the change here is really preparing us
for the additional boolean parameter, which will cause us to
have four variations of the payload.
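In sketch form (names other than finalize_payload are illustrative):
```python
from typing import Any, Callable, Dict, List, Tuple

def build_client_payloads(clients: List[Any],
                          wide_dict: Dict[str, Any],
                          finalize_payload: Callable[..., Dict[str, Any]]) -> None:
    # At most four variants: the (apply_markdown, client_gravatar) booleans.
    cache = {}  # type: Dict[Tuple[bool, bool], Dict[str, Any]]
    for client in clients:
        key = (client.apply_markdown, client.client_gravatar)
        if key not in cache:
            cache[key] = finalize_payload(wide_dict, key[0], key[1])
        client.add_event(dict(type='message', message=cache[key]))
```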
The main limitation of this version is that it's controlled entirely
from settings, with nothing in the database and no web UI or even
management command to control it. That makes it a bit more of a
burden for the server admins than it'd ideally be, but that's fine
for now.
Relatedly, the web flow for realm creation still requires choosing a
subdomain even if the realm is destined to live at an alias domain.
Specific to the dev environment, there is an annoying quirk: the
special dev login flow doesn't work on a REALM_HOSTS realm. Also,
in this version the `add_new_realm` and `add_new_user` management
commands, which are intended for use in development environments only,
don't support this feature.
In manual testing, I've confirmed that a REALM_HOSTS realm works for
signup and login, with email/password, Google SSO, or GitHub SSO.
Most of that was in dev; I used zulipstaging.com to also test
* logging in with email and password;
* logging in with Google SSO... far enough to correctly determine
that my email address is associated with some other realm.
The original PR to allow generic bots to be mentioned had
some merge issues that we detected about a week after the
fact. This commit restores the logic from the original PR.
The reason we didn't detect this bug earlier is that the
merge issues didn't break any existing behavior. Instead,
they made it so that only UserMessage rows got written for
bots, but no events were being sent. The part of the commit
that got lost is restored here, so now events get sent as
well.
Thanks to @derAnfaenger for reporting this and being patient
as we tracked it down.
Fixes #7140.
This adds the data model and bugdown support for the new UserGroup
mention feature.
Before it'll be fully operational, we'll still need:
* A backend API for making these.
* A UI for interacting with that API.
* Typeahead on the frontend.
* CSS to make them look pretty and see who's in them.
Because this is for tests, a heuristic like this that's right in most
situations is actually fine; we can override it in the few cases where
a test might set up a situation where it fails.
So just make it clear for the next reader that that's what's going on,
and also adjust the helper's interface slightly so that its callers
do have that flexibility.
We extract get_bulk_stream_subscriber_info() from this
function to remove some of the complexity. Also, in that
new function we avoid a hop to the database by querying
on stream ids instead of recipient ids. The query that
gets changed here does require a join to the recipient
table (to get the stream id), so it's a little bit of a
tradeoff.
There's an implicit assumption in bulk_remove_subscriptions
that all users belong to the same realm. We use the realm
for things like comparing occupied streams before and
after our main operation of deactivating streams.
Before this change, we just used the user_profile variable
that leaked from some prior loop to look up the realm, which
was super brittle.
Now we're a bit more explicit.
We were using Google's diff-match-patch library to diff HTML. The
problem with that approach is that it is a text differ, not an HTML
differ and so it ends up messing up the HTML tags. `lxml` is a safer
option.
Fixes: #7219.
Note that this code leads to a slightly different query, because
we join to one row in the small Recipient table to match
stream_id to recipient.type_id.
The first method we extract to this library is
get_active_subscriptions_for_stream_id().
We also move num_subscribers_for_stream_id() to here, which
is slightly annoying (having the method on Stream was nice)
but avoids some circular dependency issues.
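The extracted function looks roughly like this, using the Subscription
and Recipient models named above (exact filter fields assumed):
```python
def get_active_subscriptions_for_stream_id(stream_id: int):
    # Join through Recipient to map stream_id to recipient.type_id.
    return Subscription.objects.filter(
        recipient__type=Recipient.STREAM,
        recipient__type_id=stream_id,
        active=True,
    )
```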