Includes this change:
* openapi/python_examples: Update get_single_user.
This updates get_single_user to pass keyword arguments to
get_user_by_id instead of passing a dictionary.
This is required for CI to pass, as we did fix the API of that
function (which had only been present with the wrong API for one release).
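A sketch of the call-style change (the keyword argument shown is
illustrative; the real example may pass different parameters):

    import zulip

    # Assumes a populated ~/.zuliprc and that user id 8 exists.
    client = zulip.Client(config_file="~/.zuliprc")

    # Before: a dictionary as the second positional argument, which the
    # corrected get_user_by_id API rejects.
    # client.get_user_by_id(8, {"include_custom_profile_fields": True})

    # After: plain keyword arguments.
    result = client.get_user_by_id(8, include_custom_profile_fields=True)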
This commit removes can_create_streams and can_subscribe_other_users
in favor of has_permission, a generic function on the UserProfile model
for these policy settings.
Relevant changes are made to events.py to avoid duplication in some
places.
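A minimal, self-contained sketch of the pattern (the real method lives
on the Django UserProfile model, whose policy constants differ):

    POLICY_MEMBERS_ONLY = 1       # hypothetical constant values
    POLICY_ADMINS_ONLY = 2
    POLICY_FULL_MEMBERS_ONLY = 3

    class UserProfileSketch:
        def __init__(self, is_realm_admin, is_guest, is_new_member,
                     realm_policies):
            self.is_realm_admin = is_realm_admin
            self.is_guest = is_guest
            self.is_new_member = is_new_member
            self.realm_policies = realm_policies  # {"create_stream_policy": 1, ...}

        def has_permission(self, policy_name: str) -> bool:
            assert policy_name in ("create_stream_policy",
                                   "invite_to_stream_policy")
            if self.is_realm_admin:
                return True
            if self.is_guest:
                return False
            policy = self.realm_policies[policy_name]
            if policy == POLICY_MEMBERS_ONLY:
                return True
            if policy == POLICY_FULL_MEMBERS_ONLY:
                return not self.is_new_member
            return False  # admins-only policy, and we're not an admin

        # The old per-setting methods reduce to one-line wrappers:
        def can_create_streams(self) -> bool:
            return self.has_permission("create_stream_policy")

        def can_subscribe_other_users(self) -> bool:
            return self.has_permission("invite_to_stream_policy")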
We have two different digest schemes to make
sure we keep the database up to date. There
is the migration digest, which is NOT in the
scope of this commit, and which already
used the mechanism we use for other tools.
Here we are talking about the digest for
important files like `populate_db.py`.
Now our scheme is more consistent with how we
check file changes for other tools (as
well as the aforementioned migration files).
And we only write one hash file, instead of
seven.
And we only write the file when things have
actually changed.
And we are explicit about side effects.
Finally, we include a couple new bot settings
in the digest:
INTERNAL_BOTS
DISABLED_REALM_INTERNAL_BOTS
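Roughly the new scheme (a sketch; the real file lists, paths, and
setting names live in the provision code):

    import hashlib
    import os

    def files_and_settings_digest(paths, settings_values) -> str:
        sha = hashlib.sha1()
        for path in sorted(paths):  # e.g. populate_db.py and friends
            with open(path, "rb") as f:
                sha.update(f.read())
        # Fold relevant settings (e.g. INTERNAL_BOTS) into the same digest.
        for value in settings_values:
            sha.update(repr(value).encode())
        return sha.hexdigest()

    def maybe_write_digest(digest_path: str, new_digest: str) -> None:
        # Only write when something actually changed (an explicit side effect).
        if os.path.exists(digest_path):
            with open(digest_path) as f:
                if f.read().strip() == new_digest:
                    return
        with open(digest_path, "w") as f:
            f.write(new_digest + "\n")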
NOTE: This will require a one-time transition,
where we rebuild both databases (dev/test).
It takes a little over two minutes for me,
so it's not super painful.
I bump the provision version here, even
though you don't technically need it (since
the relevant tools are actually using the
digest files to determine if they need to
rebuild the database). I figure it's just
good to explicitly make this commit trigger
a provision, and the user will then see
the one-time migration of the hash files
with a little bit less of a surprise.
And I do a major bump, not a minor bump,
because when we go in the reverse direction,
the old code will have to rebuild the
database due to the legacy hash files not
being around, so, again, I just prefer it
to be explicit.
Fixes #14595.
Invalid HTTP requests could end up in an unhandled exception in
skip_200_and_304, due to the record not having the status_code attribute
set. With this change, we'll avoid the exception.
Example:
curl -X POST -H 'Transfer-Encoding : chunked' --data-binary 'a' 'http://zulipdev.com:9991/json/messages/57'
2020-04-21 10:56:22.007 WARN [django.server] "POST /json/messages/57 HTTP/1.1" 405 95
2020-04-21 10:56:22.007 INFO [django.server] code 400, message Bad request syntax ('a')
2020-04-21 10:56:22.008 WARN [django.server] "a" 400 -
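The fix amounts to defensive attribute access; a sketch in the spirit
of the filter in zerver/lib/logging_util.py:

    import logging

    def skip_200_and_304(record: logging.LogRecord) -> bool:
        # Normal request records carry a status_code attribute; records
        # for malformed requests (like the 400 above) never get one, so
        # use getattr with a default instead of record.status_code.
        if getattr(record, "status_code", None) in (200, 304):
            return False
        return True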
This will work around https://bugs.python.org/issue34939 when we
convert the type comment to a Python 3.6 style annotation.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
We remove the `generate_fixtures` option here mostly
for simplicity, but in particular to facilitate
an upcoming commit to simplify the job of
`generate-fixtures` (and remove its `--force` option).
The command line option here for `test-backend`
was really calling `generate_fixtures --force`,
which we're about to rename to `tools/rebuild-test-database`.
The `test-backend` tool is already smart about catching
up on migrations, so we generally don't need to tell it
to repair the database.
And if the database does get corrupt, you can just do
it directly with `tools/rebuild-test-database`.
This eliminates the `use_force` flag in
`update_test_databases_if_required`, which was easy
to confuse with `rebuild_test_database`.
The other caller wasn't using `use_force`.
Somewhat confusingly, we have two different types of
digests related to databases. The migration digests
are pragmatic, since changes to migrations are a bit
more frequent for certain use cases and don't
necessitate a complete rebuild of the database.
Anyway, these are just more specific names.
Generated by autopep8 --aggressive, with the setup.cfg configuration
from #14532. In general, an isinstance check may not be equivalent to
a type check because it includes subtypes; however, that’s usually
what you want.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Generated by autopep8, with the setup.cfg configuration from #14532.
I’m not sure why pycodestyle didn’t already flag these.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This refactors `extract_python_code_example` to accept an
`example_regex` parameter. It can now be used to extract code examples
from javascript_examples.py.
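A simplified, self-contained sketch of the parameterized helper (the
real one differs in detail):

    import re
    from typing import List, Pattern

    def extract_code_example(lines: List[str],
                             example_regex: Pattern[str]) -> List[str]:
        snippets: List[str] = []
        inside = False
        for line in lines:
            if example_regex.match(line):
                inside = not inside  # start/end markers toggle capture
                continue
            if inside:
                snippets.append(line)
        return snippets

    # The Python docs use '# {code_example|start}' style markers; a
    # JavaScript caller would pass a regex for its own comment syntax.
    python_marker = re.compile(r"\s*# \{code_example\|(start|end)\}")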
Previously, the send_custom_email code path leaked files in paths that
were not `.gitignored`, under templates/zerver/emails.
This became problematic when we added automated tests for this code
path, as it meant we leaked these files every time `test-backend` ran.
Fix this by ensuring all the files we generate are in this special
subdirectory.
The purpose is to provide a way for (non-webapp) clients,
like the mobile and terminal apps, to tell whether the
server they're talking to is new enough to support a given
API feature (see the sketch after this list) -- in particular, a way that
* is finer-grained than release numbers, so that for
features developed after e.g. 2.1.0 we can use them
immediately on servers deployed from master (like
chat.zulip.org and zulipchat.com) without waiting the
months until a 2.2 release;
* is reliable, unlike e.g. looking at the number of
commits since a release;
* doesn't lead to a growing bag of named feature flags
which the server has to go on sending forever.
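For example, a client could check the advertised level like this (a
sketch; the zulip_feature_level field matches the API, but the
threshold 27 is made up):

    import requests

    def server_supports(base_url: str, required_feature_level: int) -> bool:
        info = requests.get(base_url + "/api/v1/server_settings").json()
        # Servers that predate the field don't send it; treat that as level 0.
        return info.get("zulip_feature_level", 0) >= required_feature_level

    if server_supports("https://chat.zulip.org", 27):
        ...  # safe to use the newer API feature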
Tweaked by tabbott to extend the documentation.
Closes #14618.
We now have two functions related to digests
for processes:
is_digest_obsolete
write_digest_file
In most cases we now **wait** to write the
digest file until after we've successfully
run a process with its new inputs.
In one place, for database migrations, we
continue to write the digest optimistically.
We'll want to fix this, but it requires a
little more code cleanup.
Here is the typical sequence of events:
NEVER RUN -
is_digest_obsolete returns True
quickly (we don't compute a hash)
write_digest_file does a write (duh)
AFTER NO CHANGES -
is_digest_obsolete returns False
after reading one file for old
hash and multiple files to compute
hash
most callers skip write_digest_file
(no files are changed)
AFTER SOME CHANGES -
is_digest_obsolete returns True
after doing full checks
most callers call write_digest_file
*after* running a process
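A condensed sketch of the two helpers (bodies simplified; the real
versions live in the provision tooling):

    import glob
    import hashlib
    import os

    def compute_hash(globs) -> str:
        sha = hashlib.sha1()
        for pattern in globs:
            for path in sorted(glob.glob(pattern)):  # glob only when hashing
                with open(path, "rb") as f:
                    sha.update(f.read())
        return sha.hexdigest()

    def is_digest_obsolete(digest_path: str, globs) -> bool:
        if not os.path.exists(digest_path):
            return True  # never run: answer quickly, no hash computed
        with open(digest_path) as f:
            old_hash = f.read().strip()
        return old_hash != compute_hash(globs)

    def write_digest_file(digest_path: str, globs) -> None:
        # Called *after* the process succeeds with its new inputs.
        with open(digest_path, "w") as f:
            f.write(compute_hash(globs) + "\n")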
I make these all functions for consistency,
and in particular I want to continue to avoid
`glob.glob` calls until we are actually
computing hashes.
This is mostly a prep to allow us to do
hashing in two separate places:
- check hashes
- update hashes
We would only update hashes **after** running
processes anew.
For `provision_inner` I considered using a
class to put the three path-related helpers
into a mini namespace, but it felt too heavy.
It wouldn't be completely implausible here
to extract something like a JSON config
file that has a list of globs for each
process that we do path-hashing for, but I
want to clean up other stuff first.
We now remove the `Type` and `_TYPE` suffixes,
as we will start treating this like a real
class with behavior, instead of a glorified
struct.
We pass in `platform_type`, so that we can
just derive some of our data from that,
where naming conventions apply.
And we use the name `migrations_status_path`,
instead of the name `migration_status`, which
had two different meanings before this change.
This is a pure refactor, and we just early-exit
in case the database doesn't exist (knowing that
that can be a bit of a lie now -- see the comment
I added).
Refactored code in actions.py and streams.py to move stream-related
functions into streams.py and remove the dependency on actions.py.
The validate_sender_can_write_to_stream function in actions.py was
renamed to access_stream_for_send_message in streams.py.
This test was passing a string, not an Iterable[str]; a quirk in the
remove_alert_words implementation meant that processing each character
of the string happened to work.
Our OpenAPI documentation had descriptive response codes for endpoints
with multiple responses for the same response code, but this does not
comply with the OpenAPI specification. So change descriptive response
codes like "400_auth" and "400_anauth" to "400_0" and "400_1" for all
such endpoints. Also make the necessary changes in openapi.py so that
it can read the schema and generate examples in such cases.
zulip.yaml does not comply with the OpenAPI specification. Edit it so
that it passes validation as an OpenAPI specification file.
Fixes #14582.
We do limit the length to 100 in the frontend, but no such check
exists in the backend.
Check that a new alert word has a maximum length of 100 in the
alert_words endpoint.
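A sketch of the missing backend check (the error string and exact
placement are illustrative):

    from typing import List

    from zerver.lib.exceptions import JsonableError  # import path may differ

    def check_alert_word_lengths(alert_words: List[str]) -> None:
        # Mirror the frontend's 100-character limit on the backend.
        for word in alert_words:
            if len(word) > 100:
                raise JsonableError(
                    "Alert word is too long (limit: 100 characters)")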
I remove `is_force` from `file_or_package_hash_updated`
and modernize its mypy annotations.
If `is_force` is `True`, we now just run the thing
we want to force-run without having to call
`file_or_package_hash_updated` to expensively
and riskily return `True`.
Another nice outcome of this change is that if
`file_or_package_hash_updated` returns `True`,
you can know that the file or package has
indeed been updated.
For the case of `build_pygments_data` we also
skip an `os.path.exists` check when `is_force`
is `True`.
We will short-circuit more logic in the next
few commits, as well as clean up some of
the long, wrapped lines in the `if` statements.
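A sketch of the short-circuit this enables (the provision helper is
stubbed here; the tool path is illustrative):

    import subprocess
    from typing import List

    def file_or_package_hash_updated(paths: List[str], hash_name: str) -> bool:
        ...  # the provision helper described above (stubbed for the sketch)

    def maybe_build_pygments_data(is_force: bool) -> None:
        paths: List[str] = ["tools/setup/build_pygments_data"]
        # With is_force True, skip the expensive hash check (and, per this
        # commit, the os.path.exists check) entirely and just run the tool.
        if is_force or file_or_package_hash_updated(paths, "build_pygments_data"):
            subprocess.check_call(["tools/setup/build_pygments_data"])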
To be able to get a screenshot of the personal message using the
`generate-integration-docs-screenshot` tool, this commit changes the
personal build to be triggered by iago, instead of cordelia.
The webhook view used a default value for the email, which gave
uninformative errors when the webhook was incorrectly configured without
the email parameter.
The event name seems to have been incorrectly called `todo_due_date_changed`
instead of `todo_due_on_changed`. The API docs for webhooks don't mention
the correct event name, but the TODO json payload[1] seems to contain the
`due_on` field, aside from the fixture actually referring to the
`todo_due_on_changed` event type.
[1]: https://github.com/basecamp/bc3-api/blob/master/sections/todos.md
This will be useful for the mobile and desktop apps to hand an uploaded
file off to the system browser so that it can render PDFs (etc.).
The S3 backend implementation is simple; for the local upload backend,
we use Django's signing feature to simulate the same sort of 60-second
lifetime token.
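A sketch of the local-backend approach with Django's signing tools (the
salt and helper names here are illustrative):

    from django.core import signing

    signer = signing.TimestampSigner(salt="local_file_download")

    def sign_path(path: str) -> str:
        # Embed a timestamp alongside the signed path.
        return signer.sign(path)

    def check_signed_path(token: str, max_age_seconds: int = 60) -> str:
        # Raises signing.SignatureExpired / signing.BadSignature on
        # stale or forged tokens.
        return signer.unsign(token, max_age=max_age_seconds)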
Co-authored-by: Mateusz Mandera <mateusz.mandera@protonmail.com>
For some mobile use cases, 15 seconds is potentially too short for a
busy+slow device to open a browser and fetch the URL. 60 seconds is
plenty, and doesn't carry a materially increased security risk.
"description: |" supports markdown and is overall better for
writing multiline paragraphs. So use it in multiline paragraphs
and line-wrap the newly formed paragraphs accordingly.
Edited by tabbott to change most single-line descriptions to use this
format as well.
When creating a new webhook integration or updating an existing one, it is a pain to
create or update the screenshots in the documentation. This commit adds a
tool that can trigger a sample notification for the webhook using a fixture,
that is likely already written for the tests.
Currently, the developer needs to take a screenshot manually, but this could
be automated using puppeteer or something like that.
Also, the tool does not support webhooks with basic auth, and only supports
webhooks that use json fixtures. These can be fixed in subsequent commits.
If SAML_REQUIRE_LIMIT_TO_SUBDOMAINS is enabled, the configured IdPs will
be validated and cleaned up when the saml backend is initialized.
settings.py would be a tempting and more natural place to do this
perhaps, but in settings.py we don't do logging and we wouldn't be able
to write a test for it.
Through the limit_to_subdomains setting on IdP dicts it's now possible
to limit the IdP to only allow authenticating to the specified realms.
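For illustration, an IdP dict using the new setting might look like
this (other keys elided; values are placeholders):

    SOCIAL_AUTH_SAML_ENABLED_IDPS = {
        "testshib": {
            "entity_id": "https://idp.testshib.org/idp/shibboleth",
            # Only allow this IdP to authenticate to these realm subdomains:
            "limit_to_subdomains": ["engineering", ""],
        },
    }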
Fixes #13340.
This refactors add_default_stream in zerver/views/streams.py to
take stream_id as a parameter instead of stream_name.
Minor changes have been made to test_subs.py and settings_streams.js
accordingly.
This commit moves the /rest-error-handling examples to the components
section so that they can be re-used in individual endpoints, where each
example can be highlighted more easily.
The Redis-based rate limiting approach spends a lot of time talking to
Redis, with 3-4 network requests to Redis on each request. This had a
negative impact on the performance of `get_events()`, which is our
single highest-traffic endpoint.
This commit introduces an in-process rate limiting alternative for the
`/json/events` endpoint. The implementation uses the leaky bucket
algorithm and Python dictionaries instead of Redis. This drops the
rate limiting time for `get_events()` from about 3000us to less than
100us (on my system).
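A self-contained sketch of the idea -- the real backend in
zerver/lib/rate_limiter.py is more elaborate:

    import time
    from collections import defaultdict

    class LeakyBucketLimiter:
        def __init__(self, capacity: int, drain_per_second: float) -> None:
            self.capacity = capacity
            self.drain_per_second = drain_per_second
            self.level = defaultdict(float)
            self.last_update = defaultdict(float)

        def is_ratelimited(self, key: str) -> bool:
            now = time.time()
            elapsed = now - self.last_update[key]
            self.last_update[key] = now
            # Drain the bucket for the time that has passed.
            self.level[key] = max(
                0.0, self.level[key] - elapsed * self.drain_per_second)
            if self.level[key] + 1 > self.capacity:
                return True
            self.level[key] += 1
            return False

    # e.g. roughly "3 requests per 2 seconds" per user:
    limiter = LeakyBucketLimiter(capacity=3, drain_per_second=1.5)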
Fixes #13913.
Co-authored-by: Mateusz Mandera <mateusz.mandera@protonmail.com>
Co-authored-by: Anders Kaseorg <anders@zulipchat.com>
Since commit 1d72629dc4, we have been
maintaining a patched copy of Django’s
SessionMiddleware.process_response in order to unconditionally ignore
our own optional cookie_domain setting that we don’t set.
Instead, let’s not do that.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Right now, the message is "Invalid characters in emoji name" when
the emoji_name is empty. Change check_valid_emoji_name() in
zerver/lib/emoji.py, which validates the name, to accommodate the case
of a missing name. The new message is "Emoji name is missing".
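A sketch of the adjusted validator (the character whitelist shown is
illustrative of the existing check, not a copy of it):

    import re

    from django.utils.translation import gettext as _  # ugettext on older Django

    from zerver.lib.exceptions import JsonableError  # import path may differ

    def check_valid_emoji_name(emoji_name: str) -> None:
        # New: report a missing name before the character check fires.
        if not emoji_name:
            raise JsonableError(_("Emoji name is missing"))
        if not re.fullmatch(r"[0-9a-z.\-_]+", emoji_name):
            raise JsonableError(_("Invalid characters in emoji name"))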
The error is PGroonga specific since `pgroonga_query_extract_keywords` does
not handle empty string inputs correctly. This commit prevents search
narrows from having empty operands.
Closes #14405.
Generated by `pyupgrade --py3-plus --keep-percent-format` on all our
Python code except `zthumbor` and `zulip-ec2-configure-interfaces`,
followed by manual indentation fixes.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
An option is added to the video_chat_provider setting for disabling
video calls.
The video call icon is hidden in two cases:
1. video_chat_provider is set to disabled.
2. video_chat_provider is set to Jitsi and settings.JITSI_SERVER_URL
is None.
Relevant tests are added and modified.
Fixes #14483.
This adds a new realm setting: default_code_block_language.
This PR also adds a new widget to specify a language, which
behaves somewhat differently from other widgets of the same
kind; instead of exposing methods to the whole module, we
just create a single IIFE that handles all the interactions
with the DOM for the widget.
We also move the code for remapping languages to the format_code
function, since we want to preserve the original language to
decide if we override it using default_code_block_language.
Fixes #14404.
We use retry_event in queue_processors.py to handle retrying on failures,
without getting stuck in permanent retry loops if the event ends up
leading to failure on every attempt and we just keep sending NACK to
rabbitmq forever (or until the channel crashes). Tornado queues haven't
been using this, but they should.
Semaphore currently has two different versions of their product -
Classic and 2.0. This commit adds support for Semaphore 2.0, alongside
Semaphore Classic, using the same webhook. This would let the integration
work seamlessly for users who have already configured a Zulip integration in
their Semaphore 2.0 projects.
Semaphore 2.0 currently only supports GitHub and their payloads do not
contain URLs for common entities like commits, pull requests and tags. We
construct URLs for them using templates, but also try to support other
services by providing notifications without URLs.
Closes #14171.
Co-authored-by: Puneeth Chaganti <punchagan@muse-amuse.in>
The change in 180d8abed6, while correct
for the Django part of the codebase, had the nasty side effect of
exposing a failure mode in the process_notification logic if the users
list was empty.
This, in turn, could cause our process_notification code to fail with
an IndexError when trying to process the event, which would result in
that tornado process not automatically recovering, due to the outer
try/except handler for consume triggering a NACK and thus repeating
the event.
If we had a rule like "max 3 requests in 2 seconds", there was an
inconsistency between is_ratelimited() and get_api_calls_left().
If you had:
request #1 at time 0
request #2 and #3 at some times < 2
Next request, if exactly at time 2, would not get ratelimited, but if
get_api_calls_left was called, it would return 0. This was due to
inconsistency on the boundary - the check in is_ratelimited was
exclusive, while get_api_calls_left uses zcount, which is inclusive.
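A worked example of the boundary mismatch:

    # "max 3 requests in 2 seconds"; requests at t=0, 0.5, 1.0, and a
    # new attempt arrives exactly at t=2.0.
    window = 2
    timestamps = [0.0, 0.5, 1.0]
    now = 2.0

    exclusive = sum(1 for t in timestamps if t > now - window)   # 2 -> allowed
    inclusive = sum(1 for t in timestamps if t >= now - window)  # 3 -> 0 calls left
    # is_ratelimited() used the exclusive check, while get_api_calls_left()
    # (via zcount) used the inclusive one; the fix makes them consistent.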
time_reset returned from api_calls_left() was a timestamp, but
mistakenly treated as delta seconds. We change the return value of
api_calls_left() to be delta seconds, to be consistent with the return
value of rate_limit().
The information used to be stored in a request._ratelimit dict, but
there's no need for that, and a list is a simpler structure, so this
allows us to simplify the plumbing somewhat.
That's the value that matters to the code that catches the exception,
and this change allows simplifying the plumbing somewhat, and gets rid
of the get_rate_limit_result_from_request function.
This commit contains a few clean ups:
* In order to scale better for adding multiple commands,
the message formatting and setting switch logic was
extracted to its own function.
* The command lists were removed, as the frontend parses
the slash command from the compose box, and only sends
a single command to the backend for any given command
alias typed.
* The `switch_command` logic was removed because, given
the aforementioned fact, the index of the command will
always be the same. Thus the switch command will always
be the same.
* Switched to using early returns as opposed to nested
  conditionals, along with removing single-use variable
  declarations.
This is a prep commit for generating /team page data
using a cron job. The zerver/tests directory is not present in a
production installation, so move the file from the tests directory
to tools.
This commit reuses the existing infrastructure for moving a topic
within a stream to add support for moving topics from one stream to
another.
Split from the original full-feature commit so that we can merge just
the backend, which is finished, at this time.
This is a large part of #6427.
The feature is incomplete, in that we don't have real-time update of
the frontend to handle the event, documentation, etc., but this commit
is a good mergable checkpoint that we can do further work on top of.
We also still ideally would have a test_events test for the backend,
but I'm willing to leave that for follow-up work.
This appears to have switched to tabbott as the author during commit
squashing some time ago, but this commit is certainly:
Co-Authored-By: Wbert Adrián Castro Vera <wbertc@gmail.com>
This commit corrects test_change_stream_policy_requires_realm_admin
by setting the date_joined of the user in the tests themselves.
test_non_admin is added to avoid duplication of code.
Code is added to check that admins can successfully change
stream_post_policy.
This used to show a blank page. Considering that the links remain valid
only for 15 seconds, it's important to show something more informative to
the user.
This breaks provisioning, because running this at import time would
require language_name_map.json to be generated by `manage.py
compilemessages` before we can run any management commands :(.
We could potentially fix this in the future by changing the generated
language files to be things we commit to the project.
This is a prep commit for using the same choices for
create_stream_policy and invite_to_stream_policy, as both fields
have the same set of choices.
This will be useful as we add other fields using these same types.
This commit replaces WAITING_PERIOD with FULL_MEMBERS in the
create_stream_policy and invite_to_stream_policy choices, to
achieve consistency and make the variable names more descriptive.
This simplifies the update_display_settings endpoint to use REQ for
validation, rather than custom if/else statements.
The test changes just take advantage of the now more consistent
syntax.
This changes the payload that is used
to populate `page_params` for the webapp,
as well as responses to the once-every-50-seconds
presence pings.
Now our dictionary of users only has these
two fields in the value:
- active_timestamp
- idle_timestamp
Example data:
{
6: Object { idle_timestamp: 1585746028 },
7: Object { active_timestamp: 1585745774 },
8: Object { active_timestamp: 1585745578,
idle_timestamp: 1585745400}
}
We only send the slimmer type of payload
to clients that have set `slim_presence`
to True.
Note that this commit does not change the format
of the event data, which still looks like this:
{
website: {
client: 'website',
pushable: false,
status: 'active',
timestamp: 1585745225
}
}
This commit migrates the Zulip outgoing webhook payload to
/zulip-outgoing-webhook:post in OpenAPI.
Since this migrates the last payloads from api/fixtures.json to
OpenAPI, this commit removes api/fixtures.json file and the functions
accessing the file.
Tweaked by tabbott to further remove an unnecessary conditional.
This is critical for importing the very first realm into an empty
server, since in 27b15a9722, we changed
the model to create the internal realm when the first real realm would
be created, but neglected the data import code path.
The distinction between ValueError and TypeError
is not useful in these functions:
- extract_stream_indicator
- extract_private_recipients (or its callees)
These are always invoked in views to validate
user input.
When we use REQ to wrap the validators, any
Exception gets turned into a JsonableError, so
the distinction is not important.
And if we don't use REQ to wrap the validators,
the errors aren't caught.
Now we just let these functions directly produce
the desired end result for both codepaths.
Also, we now flag the error strings for translation.
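A simplified sketch of the new shape of extract_stream_indicator (the
real function handles more cases):

    import ujson
    from typing import Union

    from django.utils.translation import gettext as _  # ugettext on older Django

    from zerver.lib.exceptions import JsonableError  # import path may differ

    def extract_stream_indicator(s: str) -> Union[str, int]:
        try:
            data = ujson.loads(s)
        except ValueError:
            # Not JSON; treat the raw string as a stream name.
            data = s
        if isinstance(data, (str, int)):
            return data
        # Directly produce the end-user error, rather than raising
        # ValueError/TypeError and hoping REQ converts it.
        raise JsonableError(_("Invalid data type for stream"))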