Previously, we had a problem of ID clashes when importing converted
Slack data into an existing Zulip instance whose realms were actively
populating the database.
This counts the total objects to be imported and does a DB transaction
to increase the SEQUENCE number for that table by that count,
and hence allocates a range of IDs for the to-be-converted Slack data
objects (see the sketch below).
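The mechanism, roughly: each Django model's table draws its IDs from a
Postgres SEQUENCE, so advancing that sequence by the object count
reserves a clash-free range of IDs. A minimal sketch, with the
sequence-name convention assumed:

```python
from django.db import connection

def allocate_ids(model_class, count):
    # Advance the table's sequence by `count` in one statement and
    # return the reserved IDs for the converted Slack objects.
    sequence_name = '%s_id_seq' % (model_class._meta.db_table,)
    with connection.cursor() as cursor:
        cursor.execute(
            "SELECT nextval(%s) FROM generate_series(1, %s)",
            [sequence_name, count],
        )
        return [row[0] for row in cursor.fetchall()]
```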
Adds a check for newline that was present in the backend, but missing
in the frontend markdown implementation. Updating messages now uses the
is_me_message flag received from the server instead of its own partial
test. Similarly, rendering previews uses the markdown code.
Fixes #6493.
This is the first step toward allowing users
to edit a bot's service entries, namely the
outgoing webhook configuration entries. The
chosen data structures allow for a future
with multiple services per bot; right now,
only one service per bot is supported.
This is responsible for:
1.) Handling all the incoming requests at the
messages endpoint which have the defer param set. This is similar to
send_message_backend, except that instead of actually sending a
message, it schedules one to be sent later on.
2.) Doing some preliminary checks, such as validating the timestamp
for scheduling a message, preventing scheduling a message in the past,
and ensuring the correct format of the message to be scheduled.
3.) Extracting the time of scheduled delivery from the message.
4.) Adding tests for the newly introduced function.
5.) timezone: Adding get_timezone() to obtain a tz object from a
string (see the sketch below). This helps in obtaining a timezone (tz)
object from a timezone specified as a string. This string needs to be
a pytz-defined timezone string, which we use to specify users' local
timezones.
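A minimal sketch of the helper from item 5, assuming the pytz library:

```python
from datetime import tzinfo

import pytz

def get_timezone(timezone: str) -> tzinfo:
    # Raises pytz.exceptions.UnknownTimeZoneError for strings that
    # are not pytz-defined timezone names.
    return pytz.timezone(timezone)

# Usage: turn a user's stored timezone string into a tz object.
tz = get_timezone('America/New_York')
```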
This code takes care of environments running Python 3.4 when a
test label is passed directly to the test-backend command:
./tools/test-backend test_alert_words
This also amends a commit from Brock Whittaker <brock@zulipchat.com>
that merges two separate functions for YouTube videos and Vimeo videos
into a generic video recall function.
Fixes #7550.
We add two functions:
1.) check_schedule_message(): This function is responsible for
doing the essential initial checks to verify the validity of
the message. These checks include things like whether the user is
allowed to send messages to a given stream, or whether the user is
a super_user. All this is basically done by calling
check_message() with the appropriate parameters, along the same
lines as check_send_message().
2.) do_schedule_messages(): This function is responsible for
creating ScheduleMessage table rows for a list of messages that
are to be scheduled. It basically accumulates the ScheduleMessage
objects in a list and then bulk-creates the rows, as sketched below.
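A minimal sketch of the bulk-create pattern from item 2, assuming
Django's ORM; the model fields shown here are illustrative:

```python
def do_schedule_messages(messages):
    # Accumulate one unsaved ScheduleMessage row per message, then
    # insert them all with a single bulk query instead of N INSERTs.
    rows = []
    for message in messages:
        rows.append(ScheduleMessage(
            sender=message['sender'],
            recipient=message['recipient'],
            content=message['content'],
            scheduled_timestamp=message['deliver_at'],
        ))
    ScheduleMessage.objects.bulk_create(rows)
```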
This adds UI fields in the bot settings for specifying
configuration values like API keys for a bot. The names
and placeholder values for each bot's config fields are
fetched from the bot's <bot>.conf template file in the
zulip_bots package. This also adds giphy and followup
as embedded bots.
This endpoint is about to become an API-style route and have the legacy
decorator removed from its view. Other endpoints will be used in tests
instead.
* For strikethrough formatting: Slack's '~strike~' to Zulip's '~~strike~~' (these substitutions are sketched below).
* For bold formatting: Slack's '*bold*' to Zulip's '**bold**'.
* For italic formatting: Slack's '_italic_' to Zulip's '*italic*'.
* For mention formatting: Slack's '<@slack_id|short_name>' to Zulip's '@**full_name**'.
* Checking links.
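A sketch of those substitutions as regex passes; the real converter
must be more careful about code spans and nesting, and user_map
(slack_id to full_name) is an assumed input:

```python
import re

def convert_slack_formatting(text: str, user_map: dict) -> str:
    text = re.sub(r'~(?!~)(.+?)~', r'~~\1~~', text)      # strikethrough
    text = re.sub(r'\*(?!\*)(.+?)\*', r'**\1**', text)   # bold: must run
    text = re.sub(r'\b_(.+?)_\b', r'*\1*', text)         # ...before italic
    text = re.sub(r'<@(\w+)\|\w+>',                      # mentions
                  lambda m: '@**%s**' % (user_map[m.group(1)],), text)
    return text

# e.g. convert_slack_formatting('*hi* ~bye~', {}) -> '**hi** ~~bye~~'
```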
load_bot_config_template(bot) parses the <bot>.conf
template file, which can be found in the zulip_bots
package for each bot. It then returns the INI content
of that file as a dict.
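A sketch of that parsing, assuming the bot's module layout in the
zulip_bots package; the exact paths are illustrative:

```python
import configparser
import importlib
import os

def load_bot_config_template(bot: str) -> dict:
    # Locate <bot>.conf next to the bot's module in the zulip_bots
    # package and return its INI content as a dict.
    bot_module = importlib.import_module('zulip_bots.bots.%s' % (bot,))
    config_path = os.path.join(os.path.dirname(bot_module.__file__),
                               '%s.conf' % (bot,))
    config = configparser.ConfigParser()
    config.read(config_path)
    if config.has_section(bot):
        return dict(config.items(bot))
    return {}
```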
Currently the zip file is extracted to the root of the
zulip directory no matter where the zip file is located.
The extracted data is not useful after running the command,
and it pollutes the zulip directory. It makes more sense to
extract it to the same directory as the zip file, especially
when the zip file gets downloaded to /tmp, as in the tests.
This commit adds a setting to limit creation of generic bots
to admins, for realms that want that restriction. (Generic
bots, apart from being considered spammy on some realms,
have less locked-down permissions than webhook bots.)
Fixes #7066.
We no longer have a special UI setting and model
field ("emoji_alt_code") for saying users want text-only
emojis. We now instead make "text" be a fifth choice
for "emojiset".
Fixes #7406
This commit does the following:
* Move the Arguments table data from stream-message.md and
private-message.md to a JSON file.
* Add a Markdown extension that allows one to include and render
a table from a JSON file like so:
{generate_arguments_table|arguments.json|private-stream.md}
* Use Bootstrap's .table class to format the table instead of
relying on custom CSS.
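A sketch of how such an extension could be wired up with
Python-Markdown's preprocessor API; the macro syntax is from this
commit, while the JSON field names are assumptions:

```python
import json
import re
from typing import List

from markdown.preprocessors import Preprocessor

MACRO_RE = re.compile(r'\{generate_arguments_table\|(?P<file>\S+)\|(?P<doc>\S+)\}')

class ArgumentsTablePreprocessor(Preprocessor):
    def run(self, lines: List[str]) -> List[str]:
        output = []
        for line in lines:
            match = MACRO_RE.search(line)
            if not match:
                output.append(line)
                continue
            with open(match.group('file')) as f:
                # The JSON file maps each doc page to its argument list.
                arguments = json.load(f)[match.group('doc')]
            # Bootstrap's .table class does the styling.
            output.append('<table class="table">')
            output.append('<thead><tr><th>Argument</th>'
                          '<th>Description</th></tr></thead>')
            output.append('<tbody>')
            for arg in arguments:  # field names here are illustrative
                output.append('<tr><td>%s</td><td>%s</td></tr>'
                              % (arg['argument'], arg['description']))
            output.append('</tbody></table>')
        return output
```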
This commit splits usage.md into two separate docs,
stream-message.md and private-message.md. The arguments and return
values for sending a stream message are somewhat different from
those of sending a private message, so it made sense to split the
two up for clarity.
There might be a case where NOTIFICATION_BOT is None, so before sending
the stream announce notification, first check that
settings.NOTIFICATION_BOT is not None.
This uses the correct regex for strikethrough. Also adds
a test to make sure that strikethrough works when it contains a
link with whitespace.
Fixes #7596.
A `None` value is not properly handled in this function, which
indicates some lack of testing or a recent regression we don't
understand. We were getting lots of tracebacks from this line
of code on our test server:
mentioned = 'mentioned' in flags and 'read' not in flags
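The eventual guard presumably needs to tolerate a missing flags list; a
hypothetical minimal fix, assuming flags can arrive as None in the
event payload:

```python
# Hypothetical defensive fix: treat an absent flags list as empty
# instead of letting the `in` checks crash on None.
flags = flags if flags is not None else []
mentioned = 'mentioned' in flags and 'read' not in flags
```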
This diff is nothing but dedentation -- it's empty under
`git diff -b`. These with-statements are only needed for
a pretty narrow scope of code, so make that clear in the
source.
There are two different things you need to patch in order to get error
emails (at `/emails`) in dev. Flip one of them in dev all the time,
and make the comment on the other a bit more explicit.
When I added this "Deployed code" feature to the error reporting,
I apparently hadn't worked out enough of how this code works to
realize that `notify_server_error` may be in a different process,
at a different time and potentially even on a different machine
from the actual error being reported.
Given that architecture, all the data about the error must be computed
in `AdminNotifyHandler`, before sending the report through the queue,
or else it risks being wrong. The job of `notify_server_error` and
friends is only to format the data and send it off. So, move the
implementation of this feature in order to do that.
(@showell added some "nocoverage" directives here for code that
is hard to test (exceptions being thrown, deployment files not
existing) and that was originally part of a file that didn't
require 100% coverage)
This helps prevent them from diverging and getting different sets of
features and fixes. As a bonus, the email path gets a nice tweak that
the Zulip path has had for years, since f7f2ec0ac, which makes the
emails clearer and less broken-looking when logging a message with no
stack trace.
This deduplicates a little bit of logic, and also has us always put
things into `report` the same way.
Empirically an exception in this codepath is very rare, so we won't
complicate the code by trying to salvage a lot of partial information
if it happens -- just log the traceback, and try to get a minimal
notification sent of the bare fact this happened.
This name hasn't been right since f7f2ec0ac back in 2013; this handler
sends the log record to a queue, whose consumer will not only maybe
send a Zulip message but definitely send an email. I found this
pretty confusing when I first worked on this logging code and was
looking for how exception emails got sent; so now that I see exactly
what's actually happening here, fix it.
This is just a basic Dropbox webhook integration. It just
notifies a user when something has changed; it does not
specify what changed. Doing so would require storing data,
as the Dropbox API was created mainly for file managers, not
integrations like this.
Closes #5672
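A minimal sketch of such a webhook view; the decorator and helpers
follow Zulip's conventions of the era, but treat the specifics (stream
name, message text) as illustrative. Dropbox additionally verifies the
endpoint with a GET carrying a challenge parameter that must be echoed
back:

```python
from django.http import HttpRequest, HttpResponse

from zerver.decorator import api_key_only_webhook_view
from zerver.lib.actions import check_send_stream_message
from zerver.lib.response import json_success
from zerver.models import UserProfile

@api_key_only_webhook_view('Dropbox')
def api_dropbox_webhook(request: HttpRequest,
                        user_profile: UserProfile) -> HttpResponse:
    if request.method == 'GET':
        # Endpoint verification: echo Dropbox's challenge back.
        return HttpResponse(request.GET['challenge'])
    # POST notifications carry no file details, hence the generic text.
    check_send_stream_message(user_profile, request.client, 'dropbox',
                              'Dropbox', 'File has been updated on Dropbox!')
    return json_success()
```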
We have shifted to a generic queue to send all the emails. This queue
can retry in case of network issues; this makes sure that the emails are
always sent.
This commit just copies all the code from MissedMessageSendingWorker
class to a new EmailSendingWorker class. All the logic to send an email
through a queue was already there. This commit only makes the logic
generic. It does so by creating a special purpose queue called
'email_senders' to send any type of email. To make
MissedMessageSendingWorker still work we derive it from
EmailSendingWorker. All the tests that were testing
MissedMessageSendingWorker now run against EmailSendingWorker.
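A sketch of the resulting class structure, assuming Zulip's
queue-processor conventions (assign_queue, QueueProcessingWorker) and
the send_email_from_dict helper:

```python
from zerver.lib.send_email import send_email_from_dict
from zerver.worker.queue_processors import QueueProcessingWorker, assign_queue

@assign_queue('email_senders')
class EmailSendingWorker(QueueProcessingWorker):
    def consume(self, event: dict) -> None:
        # The queue framework retries the event on network failure,
        # which is what guarantees the email eventually goes out.
        send_email_from_dict(event)

# The old worker keeps its queue name for compatibility; all of the
# sending logic now lives in the generic parent class.
@assign_queue('missedmessage_email_senders')
class MissedMessageSendingWorker(EmailSendingWorker):
    pass
```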
Such payloads are generated when a GitLab repository has merge
request approvals enabled and a project member approves a merge
request. Approving is not the same as merging.
This reverts commit 620b2cd6e.
Contributors setting up a new development environment were getting
errors like this:
```
++ dirname tools/do-destroy-rebuild-database
[...]
+ ./manage.py purge_queue --all
Traceback (most recent call last):
[...]
File "/home/zulipdev/zulip/zproject/legacy_urls.py", line 3, in <module>
import zerver.views.streams
File "/home/zulipdev/zulip/zerver/views/streams.py", line 187, in <module>
method_kwarg_pairs: List[FuncKwargPair]) -> HttpResponse:
File "/usr/lib/python3.5/typing.py", line 1025, in __getitem__
tvars = _type_vars(params)
[...]
File "/usr/lib/python3.5/typing.py", line 277, in _get_type_vars
for t in types:
TypeError: 'ellipsis' object is not iterable
```
The issue appears to be that we're using the `typing` module from the
3.5 stdlib, rather than the `typing==3.6.2` in our requirements files,
and that doesn't understand the `Callable[..., HttpResponse]` that
appears in the definition of `FuncKwargPair`.
Revert for now to get provision working again; at least one person
reports that reverting this sufficed. We'll need to do more testing
before putting this change back in.
The name `create_logger` suggests something much bigger than what this
function actually does -- the logger doesn't any more or less exist
after the function is called than before. Its one real function is to
send logs to a specific file.
So, pull out that logic to an appropriately-named function just for
it. We already use `logging.getLogger` in a number of places to
simply get a logger by name, and the old `create_logger` callsites can
do the same.
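A sketch of that split: callers get their logger by name and use a
narrowly-scoped helper for the one real job, attaching a file handler.
The helper name mirrors the commit's intent; the format string, logger
name, and path are assumptions:

```python
import logging

def log_to_file(logger: logging.Logger, filename: str,
                log_format: str = '%(asctime)s %(levelname)-8s %(message)s',
                ) -> None:
    # The one real job of the old create_logger(): send this
    # logger's records to a specific file.
    handler = logging.FileHandler(filename)
    handler.setFormatter(logging.Formatter(log_format))
    logger.addHandler(handler)

# Old create_logger() callsites become:
logger = logging.getLogger('zulip.retention')
log_to_file(logger, '/var/log/zulip/retention.log')
```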
From the docs:
> This function does nothing if the root logger already has handlers
> configured for it.
Which we do if we've started up Django and configured settings, and in
particular allowed Django to process `settings.LOGGING`.
So, cut it out -- all it can do is confuse people about how logging
works.
If we ever actually used the `log_format` parameter, this would be
doubly confused, because only the first call would have any effect.
Because calls to `create_logger` generally run after settings are
configured, these would override what we have in `settings.LOGGING` --
which in particular defeated any attempt to set log levels in
`test_settings.py`. Move all of these settings to the same place in
`settings.py`, so they can be overridden in a uniform way.
This is already the loglevel we set on the root logger, so this has no
effect -- except in tests, where `test_settings.py` attempts to set
some of these same loggers to higher loglevels. Because the
`create_logger` call generally runs after we've configured settings,
it clobbers that effect.
The code in `test_settings.py` that tries to suppress logs only works
because it also sets `propagate=False`, which has nothing to do with
loglevels but does cause logs at this logger (and descendants) to be
dropped completely unless we've configured handlers for this logger
(or one of its relevant descendants).
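For illustration, the pattern in Django LOGGING-dict form; the logger
name is illustrative. Per the analysis above, the level here was being
clobbered by the later create_logger() call, so it was propagate=False
(with no handlers configured on the logger) that actually kept these
records from reaching the root handlers:

```python
# Illustrative test_settings.py-style snippet.
LOGGING = {
    'version': 1,
    'loggers': {
        'zulip.requests': {
            'level': 'CRITICAL',   # ineffective: clobbered by create_logger()
            'propagate': False,    # this is what actually dropped the logs
        },
    },
}
```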
Adds a markdown preprocessor that finds ordered lists where all items
use the same number and changes them to be in normal increasing order,
starting with that number (see the sketch below).
Fixes #5159.
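A minimal sketch of such a preprocessor, using Python-Markdown's API;
the class name and regex are illustrative:

```python
import re
from typing import List, Match

from markdown.preprocessors import Preprocessor

LI_RE = re.compile(r'^(\s*)(\d+)\.(\s+)(.*)$')

class AutoNumberOListPreprocessor(Preprocessor):
    def run(self, lines: List[str]) -> List[str]:
        output: List[str] = []
        block: List[Match] = []
        for line in lines + ['']:  # blank sentinel flushes the last block
            match = LI_RE.match(line)
            if match:
                block.append(match)
                continue
            output.extend(self.renumber(block))
            block = []
            output.append(line)
        return output[:-1]  # drop the sentinel

    def renumber(self, block: List[Match]) -> List[str]:
        if len({m.group(2) for m in block}) != 1:
            # Items use different numbers: leave the list untouched.
            return [m.group(0) for m in block]
        start = int(block[0].group(2))
        return ['%s%d.%s%s' % (m.group(1), start + i, m.group(3), m.group(4))
                for i, m in enumerate(block)]
```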
Eventually this check for the realm will be done in get_object_from_key
itself. Rewriting this to fit the pattern in get_object_from_key.
No change to behavior.
Commit d4ee3023 and its parent have the history behind this code.
Since d4ee3023^, all new PreregistrationUser objects, except those for
realm creation, have a non-None `realm`. Since d4ee3023, any legacy
PreregistrationUsers, with a `realm` of None despite not being for
realm creation, are treated as expired. Now, we ignore them
completely, and remove any that exist from the database.
The user-visible effect is to change the error message for
registration (or invitation) links created before d4ee3023^ to be
"link does not exist", rather than "link expired".
This change will at most affect users upgrading straight from 1.7 or
earlier to 1.8 (rather than from 1.7.1), but I think that's not much
of a concern (such installations are probably long-running
installations, without many live registration or invitation links).
[greg: tweaked commit message]
We should omit these for mypy. For most class definitions,
mypy doesn't need `Any`, and it provides no real useful info.
For clever monkeypatches, you should provide a more specific
type than `Any`.
The original logic is buggy now that emails can belong to (and be
invited to) multiple realms.
The new logic in the `invites` queue worker also avoids the bug
where, when the PreregistrationUser was gone by the time the queue
worker got to the invite (e.g., because it'd been revoked), we threw
an exception.
[greg: fix upgrade-compatibility logic; add test; explain
revoked-invite race above]
This code changes frequently enough that errors are bound to creep in. The
main change is that this sends the original invitation email instead of the
reminder email, but I think that's fine.
We'll need the expanded test coverage when we move
check_prereg_key_and_redirect to zerver/views/registration.py to avoid
test failures, and these are also tests we should really have anyway.
Empirically, the retry in `_on_connection_closed` didn't actually work
-- if a reconnect failed, that was it, and the exception handler
didn't get run. A traceback would get logged, but all its frames were
in Tornado or Pika, not our own code; presumably something magic and
async was happening to the exception.
Moreover, though we would make one attempt to reconnect if we had a
connection that got closed, we didn't have any form of retry if the
original attempt at connecting failed in the first place.
Happily, upstream offers a perfectly reasonable bit of API that avoids
both of these problems: the on-open-error callback. So use that.
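A sketch of that wiring, assuming the pika 0.x TornadoConnection
adapter in use at the time; the retry delay and callback names are
illustrative:

```python
import logging

from pika import ConnectionParameters
from pika.adapters import TornadoConnection

def connect() -> TornadoConnection:
    return TornadoConnection(
        ConnectionParameters('localhost'),
        on_open_callback=on_open,
        # Fires when a connection attempt fails -- the very first
        # attempt as well as any reconnect -- unlike the close path.
        on_open_error_callback=on_open_error,
    )

def on_open(connection: TornadoConnection) -> None:
    logging.info('RabbitMQ connection established')

def on_open_error(connection: TornadoConnection,
                  error_message: str = None) -> None:
    logging.warning('RabbitMQ connection failed; retrying in 1s')
    connection.ioloop.call_later(1, connect)
```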