Adds realm_bot delete event. On bot ownership change, an add event is
sent to the new bot owner (if not an admin) and a delete event to the
previous bot owner (if not an admin). For admins, an update event is sent.
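To make the dispatch concrete, here is a minimal sketch of that logic;
`send_event` and the exact payload shapes are placeholders for
illustration, not the actual server code:

    def notify_bot_owner_change(bot_dict, previous_owner_id, new_owner_id,
                                admin_ids, send_event):
        # Admins can already see every bot, so they only need an update event.
        send_event({'type': 'realm_bot', 'op': 'update', 'bot': bot_dict},
                   list(admin_ids))
        # The new owner (if not an admin) gains access to the bot: add event.
        if new_owner_id not in admin_ids:
            send_event({'type': 'realm_bot', 'op': 'add', 'bot': bot_dict},
                       [new_owner_id])
        # The previous owner (if not an admin) loses access: delete event.
        if previous_owner_id not in admin_ids:
            send_event({'type': 'realm_bot', 'op': 'delete',
                        'bot': {'email': bot_dict['email']}},
                       [previous_owner_id])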
If the test fails, the 'output_dir' would not be deleted, and hence
the next test run would error out, as 'do_convert_data' expects an
empty 'output_dir'. The unzipped data file should likewise be removed
if the test fails at 'do_convert_data'.
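One way to guarantee that cleanup is unittest's addCleanup, sketched
below with made-up paths:

    import shutil
    import unittest

    class SlackImporterTest(unittest.TestCase):
        def test_slack_conversion(self) -> None:
            output_dir = '/tmp/test-slack-output'
            unzipped_dir = '/tmp/test-slack-unzipped'
            # addCleanup runs even if the test body raises, so a failing
            # test can no longer leave a stale 'output_dir' (or the
            # unzipped data) behind for the next run to trip over.
            self.addCleanup(shutil.rmtree, output_dir, ignore_errors=True)
            self.addCleanup(shutil.rmtree, unzipped_dir, ignore_errors=True)
            # ... run 'do_convert_data' and assertions here ...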
The messages were previously being read and passed to the helper
functions channel-wise. This function builds a list of all the messages
in all the channels beforehand, which is then passed to the helper
functions.
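Roughly, the collection step looks like the following (the layout of
one JSON file per day per channel mirrors a Slack export; the names
here are illustrative):

    import json
    import os

    def get_all_messages(slack_data_dir, channels):
        # Read every message from every channel up front, so the helper
        # functions receive one combined list instead of per-channel calls.
        all_messages = []
        for channel in channels:
            channel_dir = os.path.join(slack_data_dir, channel)
            for fname in sorted(os.listdir(channel_dir)):
                with open(os.path.join(channel_dir, fname)) as f:
                    all_messages += json.load(f)
        return all_messages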
There's probably follow-up work to do here to eliminate these
completely, but this dramatically shrinks the ~1 minute race window
that was previously present between the import and the test function
being called.
This commit:
* Restructures the doc to use a numbered-step format.
Note that there are no screenshots. I signed up for a
Fabric/Crashlytics account, but you have to link an Android/iOS app
to even get to the settings panel, which seemed like too much work
just to get a screenshot.
However, the way we can verify (somewhat) the correctness of the
last step is that it is a paraphrase of the first paragraph of
Fabric's Webhook docs, which can be found here:
https://docs.fabric.io/apple/crashlytics/custom-web-hooks.html
This commit modifies the text to:
1. Remove unnecessary screenshots.
2. Use the numbered-step format.
3. Remove the instructions for generating an access token. I took a
   look at Dropbox's docs and you shouldn't need that for a webhook
   setup; the whole point behind a webhook is that one can get by
   without using OAuth.
4. Rearrange the instructions to contain only 4 steps. For
   uncomplicated instructions, that seems to be the ideal number.
This field has been unused by clients for some time, and isn't great
for our public archive feature plans (where we won't want to be
including email addresses in messages).
Add `translate_emoticons` to `prop_types` and `expected_keys`.
Furthermore, create an emoji-translating Markdown inline pattern.
Also add a JavaScript version of `translate_emoticons` and use it
during Markdown previews and as a preprocessor. This is only needed
for previews, because usually emoticon translation happens on the
backend after sending.
Add tests for emoticon translation, a settings UI, and a /help/ page
as well.
Tweaked by tabbott to fix various test failures as well as how this
handles whitespace, requiring emoticons to not have adjacent
characters.
Fixes #1768.
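The heart of the emoticon translation can be pictured with a small
sketch like this; the mapping and regex are illustrative, not the
project's actual emoticon table:

    import re

    EMOTICON_CONVERSIONS = {
        ':)': ':smile:',
        '(:': ':smile:',
        ':(': ':frown:',
        ';)': ':wink:',
    }

    # Require whitespace (or start/end of string) on both sides, matching
    # the rule that emoticons must not have adjacent characters.
    EMOTICON_RE = re.compile(
        r'(?<!\S)(' +
        '|'.join(re.escape(e) for e in EMOTICON_CONVERSIONS) +
        r')(?!\S)')

    def translate_emoticons(text):
        return EMOTICON_RE.sub(lambda m: EMOTICON_CONVERSIONS[m.group(1)], text)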
These two classes are tricky to test, and nocoverage-ing them
allows us to mark queue_processors.py as fully covered. We
still want to cover these two workers at some point, but for
now, it's nice to enforce full coverage for any future changes
to queue_processors.py.
Fixes (sort of) #6542.
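For context, the exclusion works roughly like this, assuming the
coverage configuration treats a `# nocoverage` comment as an exclusion
marker:

    class TrickyWorker:  # nocoverage
        # Everything under the marked line is excluded from the coverage
        # report, so the rest of queue_processors.py can be enforced at
        # 100% without these hard-to-test workers dragging it down.
        def consume(self, event):
            print('hard-to-test side effects happen here', event)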
This sets up a new test class with a simple
test, mostly for increasing coverage. The class
should in the future be extended to properly
verify the handle_feedback() logic.
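The shape of that test class is roughly as follows; the class name and
the smoke-level assertion are placeholders until the handle_feedback()
logic gets real verification:

    import unittest
    from unittest import mock

    class TestFeedbackBot(unittest.TestCase):
        def test_handle_feedback_smoke(self) -> None:
            # Placeholder-level check: for now we only exercise the code
            # path; a follow-up should assert on what handle_feedback()
            # actually does with the event.
            event = {'message': {'content': 'I like Zulip'},
                     'trigger': 'private_message'}
            handler = mock.Mock()  # stands in for the real handler
            handler(event)
            handler.assert_called_once_with(event)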
We already check in get_service_bot_events() if a bot is mentioned,
and then only pass on the call to the bot handler if it is. The
commit removes the additional check in the embedded bot queue
processor simply because it is impossible to obtain test coverage
for it (there is no meaningful way to trigger the content of the
if-clause, because there will never be a message reaching the bot
without @-mentioning it).
To alleviate the danger of a potential regression, the check is not
removed completely, but rather replaced by an assert statement.
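In sketch form (the function and variable names below are stand-ins
for the actual queue processor code):

    def consume_embedded_bot_event(message, is_mentioned):
        # get_service_bot_events() only queues events for mentioned bots,
        # so a non-mentioned message can never reach this point.  Keeping
        # the invariant as an assert documents it without leaving an
        # if-branch that no test can ever exercise.
        assert is_mentioned
        # ... pass the message on to the bot handler ...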
Previously, when a user updated the config data of an
embedded bot, only the updated fields were dispatched
back to the client. Dispatching all config data fields makes the
client-side updating code less brittle.
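A sketch of the resulting event payload (the exact field names in the
real event may differ):

    def build_bot_update_event(bot_id, config_data):
        # Send the complete config_data, not just the changed keys, so a
        # client can overwrite its local copy wholesale instead of
        # merging partial updates.
        return {
            'type': 'realm_bot',
            'op': 'update',
            'bot': {
                'user_id': bot_id,
                'services': [{'config_data': config_data}],
            },
        }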
Webhook functions wrapped by the decorator:
@authenticated_api_view(is_webhook=True)
now log payloads that cause exceptions to webhook-errors.log.
Note that authenticated_api_view is only used by webhooks/github
and not anywhere else.
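The logging behavior amounts to something like the following wrapper
(the logger name and its wiring to webhook-errors.log are assumptions
here):

    import functools
    import logging

    webhook_logger = logging.getLogger('zulip.zerver.webhooks')

    def log_exceptions_with_payload(view_func):
        # If the wrapped webhook view raises, record the request body so
        # the offending payload ends up in webhook-errors.log (wherever
        # that logger's handler is configured to write).
        @functools.wraps(view_func)
        def wrapper(request, *args, **kwargs):
            try:
                return view_func(request, *args, **kwargs)
            except Exception:
                webhook_logger.exception('webhook failed; payload was:\n%s',
                                         request.body)
                raise
        return wrapper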
During a Slack import, we don't have medium-size avatars already
available in the export data set (and possibly also with a normal
import/export?). The medium-size avatar can be created by the
'ensure_medium_avatar_image' function, which checks whether the medium
image exists and, if it doesn't, creates it.
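In outline, the import-side fix looks like this (the loop and the
backend variable are simplified relative to the real importer):

    def ensure_medium_avatars(user_profiles, upload_backend):
        for user_profile in user_profiles:
            # 'ensure_medium_avatar_image' checks whether the medium-size
            # avatar already exists and, if it does not, creates it from
            # the original, so calling it unconditionally here is safe.
            upload_backend.ensure_medium_avatar_image(user_profile=user_profile)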
This commit was substantially edited by tabbott to get rid of an
undefined variable bug, avoid initializing the upload backend classes
in a loop, and add some TODO notes on things that could be improved
later.