The previous export tool would only work properly for small realms,
and was missing a number of important features:
* Export of avatars and uploads from S3
* Export of presence data, activity data, etc.
* Faithful export/import of timestamps
* Parallel export of messages
* Not getting OOM-killed on large realms
The new tool runs as a pair of documented management commands, and
solves all of those problems.
We also add a new management command for exporting the data of an
individual user.
Define Integration and WebhookIntegration classes.
Update the webhook section of the integrations guide.
Replace hardcoded webhook URLs with URLs generated from
the WEBHOOKS list.
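
A minimal sketch of the idea, with simplified constructors and a URL pattern
chosen purely for illustration (the real classes may differ):

```python
class Integration(object):
    def __init__(self, name, client_name=None):
        self.name = name
        self.client_name = client_name or name

class WebhookIntegration(Integration):
    DEFAULT_URL = u'api/v1/external/{name}'  # pattern assumed for illustration

    def __init__(self, name, client_name=None, url=None):
        super(WebhookIntegration, self).__init__(name, client_name)
        # The URL is derived from the integration's name rather than being
        # hardcoded in the URL configuration.
        self.url = url if url is not None else self.DEFAULT_URL.format(name=name)

WEBHOOKS = [
    WebhookIntegration('github'),
    WebhookIntegration('travis'),
]
# urls.py can then emit one entry per item in WEBHOOKS instead of listing
# each webhook URL by hand.
```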
The old behavior was to raise an exception, but Django was catching
the exception and doing unexpected things. For instance, in the
manage.py shell, printing out a ModelReprMixin object (with
__unicode__ not implemented) would result in nothing being printed,
rather than raising an error or otherwise alerting the programmer as
to what was going on.
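
A sketch of the shape of the fix, assuming a placeholder return value (the
real default text may differ); this is Python 2 code, matching the codebase:

```python
class ModelReprMixin(object):
    # Subclasses are expected to override __unicode__.
    def __unicode__(self):
        # Old behavior: raise an exception here. Django's shell swallowed it,
        # so printing the object showed nothing at all.
        # New behavior (assumed for illustration): return something visible,
        # so the programmer can see which class is missing __unicode__.
        return u"Unimplemented __unicode__ for %s" % (self.__class__.__name__,)

    def __str__(self):
        return self.__unicode__().encode("utf-8")  # Python 2 __str__ wants bytes

    def __repr__(self):
        return self.__str__()
```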
This fixes a regression where missed message emails would not be sent
at all in the event that EMAIL_GATEWAY_PATTERN was unset.
The overall experience still isn't great, but it's better than crashing.
Fixes: #1411
[commit message expanded by tabbott]
This will lead to minor differences in the warnings that
people see when they run slow tests. We call out the slowness
more clearly in the test output, and we simplify the calculation
of the slowness threshold.
We still allow more time for tests with the `@slow` decorator
to run, but we don't use their expected_run_time.
Our flush functions update user profile cache entries which can cause
confusing race conditions (see e.g. #1257). To resolve this, we move
all the user_profile flush functions to delete the entry instead of
updating it -- it will then be fetched as part of the next request
that needs to access the user object.
There are still races here, and there is perhaps an argument that a
better fix for this would be to re-fetch the object and then put it
into the cache, but this resolves the main cache correctness problem
we had with the previous implementation.
Fixes: #1322.
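
A minimal sketch of the delete-on-flush approach; the cache helper and
module names below are assumptions modeled on the cache library:

```python
from django.db.models.signals import post_save

# Helper and model names assumed for illustration.
from zerver.lib.cache import cache_delete, user_profile_by_id_cache_key
from zerver.models import UserProfile

def flush_user_profile(sender, **kwargs):
    user_profile = kwargs['instance']
    # Rather than writing the (possibly stale) object back into memcached,
    # drop the entry; the next request that needs this user will re-fetch it
    # from the database and repopulate the cache.
    cache_delete(user_profile_by_id_cache_key(user_profile.id))

post_save.connect(flush_user_profile, sender=UserProfile)
```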
This makes us more consistent, since we have other wrappers
like client_patch, client_put, and client_delete.
Wrapping will also facilitate instrumentation of our posting code.
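
A sketch of the wrapper, assuming it lives alongside the existing
client_patch/client_put/client_delete helpers:

```python
from django.test import TestCase

class AuthedTestCase(TestCase):
    def client_post(self, url, info={}, **kwargs):
        # One choke point for every POST made by tests, so instrumentation
        # (e.g. recording which URLs get exercised) can be added in one place.
        return self.client.post(url, info, **kwargs)
```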
This function is only called in cases where user_profile isn't None,
and the code reads better if we just check that first rather than
checking it on every line that accesses user_profile.
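
For illustration (hypothetical function), the pattern is simply to check once
at the top:

```python
def home_view(request, user_profile):
    assert user_profile is not None  # callers guarantee this; check it once
    # Every line below can now use user_profile directly, with no per-line
    # None checks cluttering the logic.
    return user_profile.realm.domain
```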
This allows the frontend to fetch data on the subscribers list (etc.)
for streams where the user has never been subscribed, making it
possible to implement UI showing details like subscriber counts on the
subscriptions page.
This is likely a performance regression for very large teams with
large numbers of streams; we'll want to do some testing to determine
the impact (and thus whether we should make this feature only fully
enabled for larger realms).
There were a bunch of authorization and well-formedness checks in
zerver.lib.actions.do_update_message that I moved to
zerver.views.messages.update_message_backend.
Reason: by convention, functions in actions.py complete their actions;
error checking should be done outside the file when possible.
Fixes: #1150.
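
A sketch of the resulting division of labor; the module paths, error strings,
and specific checks below are assumptions:

```python
# zerver/views/messages.py (sketch)
from zerver.lib.actions import do_update_message
from zerver.lib.request import JsonableError
from zerver.lib.response import json_success
from zerver.models import Message

def update_message_backend(request, user_profile, message_id, content=None):
    message = Message.objects.get(id=message_id)
    # Authorization and well-formedness checks happen in the view ...
    if message.sender != user_profile:
        raise JsonableError("You don't have permission to edit this message")
    if content is not None and len(content.strip()) == 0:
        raise JsonableError("Content can't be empty")
    # ... so that do_update_message only ever receives requests it should
    # complete, per the actions.py convention.
    do_update_message(user_profile, message, content)
    return json_success()
```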
This is controlled through the admin tab and a new field in the Realms table.
Notes:
* The admin tab setting takes a value in minutes, whereas the backend stores it
in seconds.
* This setting is unused when allow_message_editing is false.
* There is some generosity in how the limit is enforced. For instance, if the
user sees the hovering edit button, we ensure they have at least 5 seconds to
click it, and if the user gets to the message edit form, we ensure they have
at least 10 seconds to make the edit, by relaxing the limit (see the sketch below).
* This commit also includes a countdown timer in the message edit form.
Resolves #903.
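
A sketch of how the relaxed enforcement might look on the backend; the names
and structure are assumptions, only the 5-second and 10-second grace periods
come from the description above:

```python
import time

HOVER_GRACE_SECONDS = 5   # user can still see the hovering edit button
FORM_GRACE_SECONDS = 10   # user already has the message edit form open

def within_edit_limit(realm, message_timestamp, in_edit_form=False):
    if not realm.allow_message_editing:
        return False
    limit = realm.message_content_edit_limit_seconds  # stored in seconds
    deadline = message_timestamp + limit
    # Relax the limit slightly so a user who is already interacting with the
    # UI isn't rejected at the last moment.
    deadline += FORM_GRACE_SECONDS if in_edit_form else HOVER_GRACE_SECONDS
    return time.time() <= deadline
```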
Correctly encode and decode strings in convert_html_to_markdown.
It wasn't possible to use universal_newlines=True since
Popen.communicate doesn't encode/decode strings correctly on
Python 2.
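
A sketch of the Python 2-safe pattern: pass bytes into the subprocess and
decode its output explicitly (the converter command here is a stand-in):

```python
import subprocess

def convert_html_to_markdown(html):
    # Pass/receive bytes and do the unicode conversion ourselves, since
    # universal_newlines=True mishandles this on Python 2.
    p = subprocess.Popen(['html2text'],  # command name assumed for illustration
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    stdout, _ = p.communicate(input=html.encode('utf-8'))
    return stdout.decode('utf-8')
```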
Add methods assert_equals_response and assert_in_response to
AuthedTestCase. These methods make it convenient to check if
a string equals the contents of an HttpResponse's body or if a
string is a substring of the contents of an HttpResponse's body.
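
A sketch of the two helpers, assuming the response body is decoded as UTF-8:

```python
from django.test import TestCase

class AuthedTestCase(TestCase):
    def assert_in_response(self, substring, response):
        self.assertIn(substring, response.content.decode('utf-8'))

    def assert_equals_response(self, string, response):
        self.assertEqual(string, response.content.decode('utf-8'))
```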
response.content is binary data, but code usually assumes it to
be text. Fix this by decoding response.content where required.
Don't do this in tests yet.
This is controlled through the admin tab and a new field in the Realms
table. This mirrors the behavior of the old hardcoded setting
feature_flags.disable_message_editing. Partially resolves #903.
This reverts commit f1f48f305e.
The use of sklearn unfortunately caused a substantial slowdown to the
Zulip provisioning process, which didn't seem worth it for a
relatively minor feature.
Originally this cache was used to transmit data from Django to Tornado
(and also for general message caching purposes), but now nothing
actually reads from this cache, so we can eliminate it.
Apparently, the message cache we were filling was completely useless
and unused, and furthermore, the cache we were filling as part of
restarting the server was also totally useless, since it didn't have
the messages users would be requesting.
The functions truncate_content, truncate_body and truncate_topic
are only meant to be used on text strings, so change their
parameter types from AnyStr to text_type.
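
The resulting signature, with a simplified, assumed body for illustration:

```python
from six import text_type

def truncate_topic(topic, max_length):
    # type: (text_type, int) -> text_type
    if len(topic) > max_length:
        topic = topic[:max_length - 3] + u'...'
    return topic
```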
Many stubs in xml.etree.ElementTree use Union[str, bytes] as
return type. Mypy wants us to correctly handle each case. This
is correct, but not useful for us since we know that we'll always
get str. So force the return value to text_type, to suppress mypy
errors.
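
A sketch of the cast, using a hypothetical helper:

```python
import xml.etree.ElementTree as ET
from typing import cast
from six import text_type

def first_element_text(fragment):
    # type: (text_type) -> text_type
    elem = ET.fromstring(fragment)
    # The ElementTree stubs say .text may be bytes (or None), but in our
    # usage it is always str, so force the type to keep mypy quiet.
    return cast(text_type, elem.text)
```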
Also encode/decode strings appropriately when using api_keys to generate
a basic auth header.
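
A sketch of the encode/decode around the base64 step (the helper name is
hypothetical):

```python
import base64
from six import text_type

def basic_auth_header(email, api_key):
    # type: (text_type, text_type) -> text_type
    credentials = u"%s:%s" % (email, api_key)
    # b64encode wants bytes; decode the result so the header value is text.
    encoded = base64.b64encode(credentials.encode('utf-8'))
    return u"Basic " + encoded.decode('utf-8')
```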
Also fix clashing annotations in zerver/tests/test_external.py.
* Add Optional where required.
* Set the type of req_redis_key to `(text_type) -> text_type` for consistency,
  as sketched below; almost all our cache keys and redis keys use this signature.
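
A sketch of the resulting annotation; the parameter name and key format are
assumptions:

```python
from six import text_type

def req_redis_key(handler_id):
    # type: (text_type) -> text_type
    return u'ratelimit:%s' % (handler_id,)
```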