Previously, we had a problem of id clashes when importing converted
Slack data into an existing Zulip instance whose realms were actively
populating the database.
This counts the total number of objects to be imported and runs a db
transaction that increases the SEQUENCE number for that table by that
count, thereby allocating a range of ids for the to-be-imported Slack
data objects.
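A minimal sketch of the idea, assuming a Django/Postgres setup; the
function name and exact SQL here are illustrative, not the actual
import code:

    # Sketch only: reserve a contiguous block of ids by advancing the
    # table's Postgres SEQUENCE inside a transaction.
    from django.db import connection, transaction

    def allocate_id_range(table_name: str, count: int) -> int:
        """Return the first id of a freshly reserved block of `count` ids."""
        sequence_name = "%s_id_seq" % (table_name,)  # Django's default naming
        with transaction.atomic():
            with connection.cursor() as cursor:
                # Each nextval() call advances the sequence, so concurrent
                # inserts by live realms start after the reserved block.
                cursor.execute(
                    "SELECT nextval(%s) FROM generate_series(1, %s)",
                    [sequence_name, count],
                )
                first_id = cursor.fetchone()[0]
        return first_id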
Adds a check for newlines that was present in the backend but missing
from the frontend markdown implementation. Updating messages now uses
the is_me_message flag received from the server instead of its own
partial test. Similarly, rendering previews now uses the markdown code.
Fixes #6493.
This is the first step toward allowing users
to edit a bot's service entries, namely the
outgoing webhook configuration entries. The
chosen data structures allow for a future
with multiple services per bot; right now,
only one service per bot is supported.
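A rough illustration of the shape this implies; the field names below
are guesses for illustration, not the actual schema:

    # Illustration only: keeping each bot's services in a list leaves
    # room for multiple services per bot later, even though only one
    # is populated today.
    services_by_bot = {
        "outgoing-webhook-bot@example.com": [
            {"name": "default",
             "base_url": "https://example.com/webhook",
             "token": "abc123"},
        ],
    }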
This is responsible for:
1.) Handling all incoming requests at the
messages endpoint which have the defer param set. This is similar to
send_message_backend, except that instead of actually sending a
message it schedules one to be sent later on.
2.) Doing some preliminary checks, such as validating the timestamp
for scheduling a message, preventing scheduling a message in the past,
and ensuring the correct format of the message to be scheduled.
3.) Extracting the time of scheduled delivery from the message.
4.) Adding tests for the newly introduced function.
5.) timezone: Add get_timezone() to obtain a tz object from a string.
This helps in obtaining a timezone (tz) object from a timezone
specified as a string. This string needs to be a pytz-defined
timezone string, which we use to specify users' local timezones.
(A minimal sketch of this helper follows the list.)
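Assuming pytz, that helper might look like this; the exact location
and signature in the codebase may differ:

    from datetime import tzinfo

    import pytz

    def get_timezone(timezone_name: str) -> tzinfo:
        # e.g. get_timezone('America/New_York'); raises
        # pytz.exceptions.UnknownTimeZoneError for an unknown string.
        return pytz.timezone(timezone_name)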
This code takes care of environments running Python 3.4 when a test
label is passed directly to the test-backend command:
./tools/test-backend test_alert_words
This also amends a commit from Brock Whittaker <brock@zulipchat.com>
that merges two separate functions for YouTube videos and Vimeo videos
into a generic video recall function.
Fixes #7550.
We add two functions:
1.) check_schedule_message(): This function is responsible for
doing the essential initial checks to verify the validity of
the message. These checks include things like whether the user is
allowed to send messages to a given stream and whether the user is
a super_user. All of this is basically done by calling
check_message() with the appropriate parameters, along the same
lines as check_send_message().
2.) do_schedule_messages(): This function is responsible for
creating ScheduleMessage table rows for a list of messages that
are to be scheduled. It basically accumulates the ScheduleMessage
objects in a list and then bulk-creates the rows (see the sketch
below).
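A rough sketch of that bulk-create flow; the ScheduleMessage field
names here are illustrative, not the real schema:

    # Sketch only: accumulate ScheduleMessage rows, then insert them
    # with a single bulk_create() query instead of one query per message.
    def do_schedule_messages(messages):
        rows = []
        for message in messages:
            rows.append(ScheduleMessage(          # assumed Django model
                sender=message['sender'],          # illustrative field names
                recipient=message['recipient'],
                content=message['content'],
                scheduled_timestamp=message['deliver_at'],
            ))
        ScheduleMessage.objects.bulk_create(rows)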
This adds UI fields in the bot settings for specifying
configuration values like API keys for a bot. The names
and placeholder values for each bot's config fields are
fetched from the bot's <bot>.conf template file in the
zulip_bots package. This also adds giphy and followup
as embedded bots.
This endpoint is about to become an API-style route and have the legacy
decorator removed from its view. Other endpoints will be used in tests
instead of it.
* For strikethrough formatting: Slack's '~strike~' to Zulip's '~~strike~~'.
* For bold formatting: Slack's '*bold*' to Zulip's '**bold**'.
* For italic formatting: Slack's '_italic_' to Zulip's '*italic*'.
* For mention formatting: Slack's <@slack_id|short_name> to Zulip's @**full_name**.
* Checking links.
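A hedged sketch of the conversions listed above; the real converter
also has to resolve Slack user ids to full names and guard more edge
cases:

    import re

    def convert_slack_formatting(text: str) -> str:
        text = re.sub(r'~([^~\n]+)~', r'~~\1~~', text)    # ~strike~ -> ~~strike~~
        text = re.sub(r'\*([^*\n]+)\*', r'**\1**', text)  # *bold*   -> **bold**
        text = re.sub(r'_([^_\n]+)_', r'*\1*', text)      # _italic_ -> *italic*
        return text

    def convert_slack_mention(text: str, slack_id_to_full_name: dict) -> str:
        # <@U1234|short_name> -> @**Full Name**
        def replace(match):
            full_name = slack_id_to_full_name.get(match.group(1))
            return '@**%s**' % (full_name,) if full_name else match.group(0)
        return re.sub(r'<@([A-Z0-9]+)(?:\|[^>]*)?>', replace, text)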
load_bot_config_template(bot) parses the <bot>.conf
template file, which can be found in the zulip_bots
package for each bot. It then returns the INI content
of that file as a dict.
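A plausible sketch of that helper using the standard configparser
module; the actual path handling inside the zulip_bots package may
differ:

    import configparser
    import importlib
    import os

    def load_bot_config_template(bot_name: str) -> dict:
        # Locate <bot>.conf next to the bot's module in the zulip_bots package.
        bot_module = importlib.import_module('zulip_bots.bots.%s' % (bot_name,))
        config_path = os.path.join(os.path.dirname(bot_module.__file__),
                                   '%s.conf' % (bot_name,))
        parser = configparser.ConfigParser()
        parser.read(config_path)
        # Return the bot's section (field name -> placeholder value) as a dict.
        if parser.has_section(bot_name):
            return dict(parser.items(bot_name))
        return {}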
Currently the zip file is extracted to the root of the
zulip directory no matter where the zip file is located.
The extracted data is not useful after running the command,
so it pollutes the zulip directory. It makes more sense to
extract it to the same directory as the zip file, especially
when the zip file gets downloaded to /tmp as in the tests.
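A small sketch of the intended behavior using the standard zipfile
module; the function name is illustrative:

    import os
    import zipfile

    def extract_next_to_archive(zip_path: str) -> str:
        # Extract into the directory that holds the zip file (e.g. /tmp
        # when the archive was downloaded there) instead of the project root.
        target_dir = os.path.dirname(os.path.abspath(zip_path))
        with zipfile.ZipFile(zip_path) as archive:
            archive.extractall(target_dir)
        return target_dir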
This commit adds a setting to limit creation of generic bots
to admins for realms that want that restriction. (Generic
bots, apart from being considered spammy on some realms,
have less locked-down permissions than webhook bots.)
Fixes #7066.
We no longer have a special UI setting and model
field ("emoji_alt_code") for saying users want text-only
emojis. We now instead make "text" be a fifth choice
for "emojiset".
Fixes #7406.
This commit does the following:
* Move the Arguments table data from stream-message.md and
private-message.md to a JSON file.
* Add a Markdown extension that allows one to include and render
a table from a JSON file like so (a sketch of the table-rendering
step follows below):
{generate_arguments_table|arguments.json|private-stream.md}
* Use Bootstrap's .table class to format the table instead of
relying on custom CSS.
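A hedged sketch of the table-rendering step the extension performs;
the JSON key names below are assumptions about the data file's shape:

    import json

    def render_arguments_table(json_path: str) -> str:
        # Assumed input: a list of {"argument": ..., "description": ...}.
        with open(json_path) as f:
            arguments = json.load(f)
        rows = ''.join(
            '<tr><td><code>%s</code></td><td>%s</td></tr>'
            % (arg['argument'], arg['description'])
            for arg in arguments
        )
        # Bootstrap's .table class does the styling; no custom CSS needed.
        return ('<table class="table"><thead><tr>'
                '<th>Argument</th><th>Description</th></tr></thead>'
                '<tbody>%s</tbody></table>' % (rows,))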
This commit splits usage.md into two separate docs,
stream-message.md and private-message.md. The arguments and return
values for sending a stream message are somewhat different from
those of sending a private message, so it made sense to split the
two up for clarity.
There might be a case where NOTIFICATION_BOT is None, so before sending the stream
announce notification, first check that settings.NOTIFICATION_BOT is not None.
This uses the correct regex for strikethrough. Also, this adds
a test to make sure that strikethrough works when it contains a
link with whitespace.
Fixes #7596.
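An illustrative regex along those lines, not necessarily the exact
pattern used in the frontend markdown code:

    import re

    # Delimiters are a pair of '~~'; the body may contain spaces, so a
    # span wrapping a link plus surrounding words still matches.
    STRIKETHROUGH_RE = re.compile(r'~~([^~\n]+?)~~')

    def render_strikethrough(text: str) -> str:
        return STRIKETHROUGH_RE.sub(r'<del>\1</del>', text)

    # e.g. '~~see https://zulip.com for details~~' is struck through as a whole.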
A `None` value is not properly handled in this function, which
indicates some lack of testing or a recent regression we don't
understand. We were getting lots of tracebacks from this line
of code on our test server:
mentioned = 'mentioned' in flags and 'read' not in flags
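A defensive guard like the following sketch avoids the traceback while
the root cause is investigated; the actual fix may differ:

    # 'mentioned' in None raises TypeError, so normalize flags first.
    flags = flags if flags is not None else []
    mentioned = 'mentioned' in flags and 'read' not in flags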
This diff is nothing but dedentation -- it's empty under
`git diff -b`. These with-statements are only needed for
a pretty narrow scope of code, so make that clear in the
source.
There are two different things you need to patch in order to get error
emails (at `/emails`) in dev. Flip one of them in dev all the time,
and make the comment on the other a bit more explicit.
When I added this "Deployed code" feature to the error reporting,
I apparently hadn't worked out enough of how this code works to
realize that `notify_server_error` may be in a different process,
at a different time and potentially even on a different machine
from the actual error being reported.
Given that architecture, all the data about the error must be computed
in `AdminNotifyHandler`, before sending the report through the queue,
or else it risks being wrong. The job of `notify_server_error` and
friends is only to format the data and send it off. So, move the
implementation of this feature into `AdminNotifyHandler` accordingly.
(@showell added some "nocoverage" directives here for code that
is hard to test (exceptions being thrown, deployment files not
existing) and that was originally part of a file that didn't
require 100% coverage.)
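A rough sketch of that shape; the handler and queue names follow the
commit text where given, and the report fields are illustrative:

    import logging

    from zerver.lib.queue import queue_json_publish  # Zulip's queue helper

    class AdminNotifyHandler(logging.Handler):
        def emit(self, record: logging.LogRecord) -> None:
            # Gather everything that depends on the local process/machine
            # *now* -- including the "Deployed code" details read from
            # local deployment files -- before the report crosses the queue.
            report = {
                'message': record.getMessage(),
                'stack_trace': self.format(record),
            }
            # The consumer (notify_server_error and friends) only formats
            # this data and sends the email / Zulip message.
            queue_json_publish('error_reports', report)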
This helps prevent them from diverging and getting different sets of
features and fixes. As a bonus, the email path gets a nice tweak that
the Zulip path has had for years, since f7f2ec0ac, which makes the
emails clearer and less broken-looking when logging a message with no
stack trace.
This deduplicates a little bit of logic, and also has us always put
things into `report` the same way.
Empirically an exception in this codepath is very rare, so we won't
complicate the code by trying to salvage a lot of partial information
if it happens -- just log the traceback, and try to get a minimal
notification sent of the bare fact that this happened.
This name hasn't been right since f7f2ec0ac back in 2013; this handler
sends the log record to a queue, whose consumer will not only maybe
send a Zulip message but definitely send an email. I found this
pretty confusing when I first worked on this logging code and was
looking for how exception emails got sent; so now that I see exactly
what's actually happening here, fix it.
This is just a basic Dropbox webhook integration. It just
notifies a user when something has changed; it does not
specify what changed. Doing so would require storing data,
as the Dropbox API was created mainly for file managers, not
integrations like this.
Closes #5672.
We have shifted to a generic queue to send all the emails. This queue
can retry in case of network issues; this makes sure that the emails are
always sent.
This commit just copies all the code from the MissedMessageSendingWorker
class to a new EmailSendingWorker class. All the logic to send an email
through a queue was already there; this commit only makes that logic
generic. It does so by creating a special-purpose queue called
'email_senders' for sending any type of email. To keep
MissedMessageSendingWorker working, we derive it from
EmailSendingWorker. All the tests that were testing
MissedMessageSendingWorker now run against EmailSendingWorker.
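A rough sketch of the relationship; the decorator and base-class names
are assumptions about the queue-processor scaffolding, and
send_email_from_dict stands in for whatever helper builds and sends
the email:

    @assign_queue('email_senders')
    class EmailSendingWorker(QueueProcessingWorker):
        def consume(self, event: dict) -> None:
            # Generic path: any kind of email arrives here as queue data.
            send_email_from_dict(event)

    class MissedMessageSendingWorker(EmailSendingWorker):
        # Kept so existing callers and tests keep working; all the
        # sending logic now lives in the parent class.
        pass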
Such payloads are generated when a GitLab repository has merge
request approvals enabled and a project member approves a merge
request. Approving is not the same as merging.
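A hedged sketch of distinguishing an approval from an actual merge when
handling such a payload; the key names follow GitLab's merge request
webhook format, but the handler shape is illustrative:

    def describe_merge_request_event(payload: dict) -> str:
        # GitLab reports an approval as object_attributes.action == 'approved';
        # an actual merge arrives with action == 'merge'.
        action = payload['object_attributes'].get('action', 'unknown')
        user = payload['user']['name']
        iid = payload['object_attributes']['iid']
        if action == 'approved':
            return '%s approved merge request !%s.' % (user, iid)
        if action == 'merge':
            return '%s merged merge request !%s.' % (user, iid)
        return '%s updated merge request !%s.' % (user, iid)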