Refactored code in actions.py and streams.py to move stream-related
functions into streams.py and remove the dependency on actions.py.
The validate_sender_can_write_to_stream function in actions.py was
renamed to access_stream_for_send_message and moved to streams.py.
This test was passing a string, not an Iterable[str]; a quirk in the
remove_alert_words implementation happened to make processing each
character of the string work anyway.
Our OpenAPI documentation had descriptive response codes for endpoints
with multiple responses for the same response code. But this does not
fall in line with the OpenAPI specification, so change descriptive
response codes like "400_auth" and "400_unauth" to "400_0" and "400_1"
for all such endpoints. Also make the necessary changes in openapi.py
so that it can read the schema and generate examples in such cases.
zulip.yaml is not in compliance with the OpenAPI specification. Edit
it so that it passes validation as an OpenAPI specification file.
Fixes #14582.
We do limit the length to 100 in the frontend, but no such check
exists in the backend.
Check that a new alert word has a maximum length of 100 in the
alert_words endpoint.
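A minimal sketch of what the backend check could look like (the helper
name and error wording here are assumptions, not the actual code):

```python
from typing import List

from django.utils.translation import ugettext as _

from zerver.lib.exceptions import JsonableError

def check_alert_word_lengths(alert_words: List[str]) -> None:
    # Mirror the frontend's 100-character cap on the backend.
    for alert_word in alert_words:
        if len(alert_word) > 100:
            raise JsonableError(_("Alert word too long (limit: 100 characters)"))
```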
I remove `is_force` from `file_or_package_hash_updated`
and modernize its mypy annotations.
If `is_force` is `True`, we now just run the thing
we want to force-run, without having to call
`file_or_package_hash_updated` to expensively
and riskily return `True`.
Another nice outcome of this change is that if
`file_or_package_hash_updated` returns `True`,
you can know that the file or package has
indeed been updated.
For the case of `build_pygments_data` we also
skip an `os.path.exists` check when `is_force`
is `True`.
We will short-circuit more logic in the next
few commits, as well as clean up some of
the long, wrapped lines in the `if` statements.
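The resulting calling pattern, as a rough sketch (the surrounding
names and paths are illustrative):

```python
from typing import List

def maybe_build_pygments_data(is_force: bool, paths: List[str]) -> None:
    # `or` short-circuits: when is_force is True, we skip both the
    # expensive hash comparison and the os.path.exists check entirely.
    if is_force or file_or_package_hash_updated(paths, "build_pygments_data_hash"):
        build_pygments_data()
```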
To be able to get a screenshot of the personal message using the
`generate-integration-docs-screenshot` tool, this commit changes the
personal build to be triggered by iago, instead of cordelia.
The webhook view used a default value for the email, which gave
uninformative errors when the webhook was incorrectly configured
without the email parameter.
The event name seems to have been incorrectly called `todo_due_date_changed`
instead of `todo_due_on_changed`. The API docs for webhooks don't mention
the correct event name, but the TODO json payload[1] seems to contain the
`due_on` field, aside from the fixture actually referring to the
`todo_due_on_changed` event type.
[1]: https://github.com/basecamp/bc3-api/blob/master/sections/todos.md
This will be useful for the mobile and desktop apps to hand an
uploaded file off to the system browser so that it can render PDFs
(etc.).
The S3 backend implementation is simple; for the local upload backend,
we use Django's signing feature to simulate the same sort of 60-second
lifetime token.
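For the local backend, the mechanism is roughly as follows (function
names are illustrative of zerver/lib/upload.py; details are
assumptions):

```python
from django.core import signing

def generate_unauthed_file_access_url(path_id: str) -> str:
    # signing.dumps embeds a timestamp alongside the signed payload.
    token = signing.dumps({"path_id": path_id})
    return "/local/uploads/temporary/" + token

def get_local_file_path_id_from_token(token: str) -> str:
    # max_age enforces the 60-second lifetime, mirroring the S3 URLs;
    # an expired token raises signing.SignatureExpired.
    data = signing.loads(token, max_age=60)
    return data["path_id"]
```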
Co-authored-by: Mateusz Mandera <mateusz.mandera@protonmail.com>
For some mobile use cases, 15 seconds is potentially too short for a
busy+slow device to open a browser and fetch the URL. 60 seconds is
plenty, and doesn't carry a materially increased security risk.
"description: |" supports markdown and is overall better for
writing multiline paragraphs. So use it in multiline paragraphs
and line-wrap the newly formed paragraphs accordingly.
Edited by tabbott to change most single-line descriptions to use this
format as well.
When creating a webhook integration or updating an existing one, it is
a pain to create or update the screenshots in the documentation. This
commit adds a tool that can trigger a sample notification for the
webhook using a fixture that is likely already written for the tests.
Currently, the developer needs to take a screenshot manually, but this could
be automated using puppeteer or something like that.
Also, the tool does not support webhooks with basic auth, and only supports
webhooks that use json fixtures. These can be fixed in subsequent commits.
If SAML_REQUIRE_LIMIT_TO_SUBDOMAINS is enabled, the configured IdPs will
be validated and cleaned up when the saml backend is initialized.
settings.py would be a tempting and more natural place to do this
perhaps, but in settings.py we don't do logging and we wouldn't be able
to write a test for it.
Through the limit_to_subdomains setting on IdP dicts it's now possible
to limit the IdP to only allow authenticating to the specified realms.
Fixes #13340.
This refactors add_default_stream in zerver/views/streams.py to
take in stream_id as a parameter instead of stream_name.
Minor changes have been made to test_subs.py and settings_streams.js
accordingly.
This commit moves the /rest-error-handling examples to the components
section so that they can be re-used in individual endpoints where
their examples can be highlighted more easily.
The Redis-based rate limiting approach takes a lot of time talking to
Redis, with 3-4 network round trips on each request. It had a negative
impact on the performance of `get_events()`, which is our single
highest-traffic endpoint.
This commit introduces an in-process rate limiting alternative for the
`/json/events` endpoint. The implementation uses the leaky bucket
algorithm and Python dictionaries instead of Redis. This drops the
rate limiting time for `get_events()` from about 3000us to less than
100us (on my system).
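A minimal sketch of the idea (not the exact implementation in
zerver/lib/rate_limiter.py):

```python
import time
from typing import Dict, Tuple

class InProcessRateLimiter:
    def __init__(self) -> None:
        # entity key -> (time of last update, current bucket level)
        self.buckets: Dict[str, Tuple[float, float]] = {}

    def is_ratelimited(self, key: str, max_calls: int, period: float) -> bool:
        now = time.time()
        last_time, level = self.buckets.get(key, (now, 0.0))
        # The bucket drains at a constant rate of max_calls per period.
        level = max(0.0, level - (now - last_time) * max_calls / period)
        if level + 1.0 > max_calls:
            self.buckets[key] = (now, level)
            return True
        self.buckets[key] = (now, level + 1.0)
        return False
```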
Fixes #13913.
Co-authored-by: Mateusz Mandera <mateusz.mandera@protonmail.com>
Co-authored-by: Anders Kaseorg <anders@zulipchat.com>
Since commit 1d72629dc4, we have been
maintaining a patched copy of Django’s
SessionMiddleware.process_response in order to unconditionally ignore
our own optional cookie_domain setting that we don’t set.
Instead, let’s not do that.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Right now, the message is "Invalid characters in emoji name" when
the emoji_name is empty. Change check_valid_emoji_name() in
zerver/lib/emoji.py, which validates the name, to accommodate the
case of a missing name. The new message is "Emoji name is missing".
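A sketch of the adjusted validator (the character regex is
illustrative of the existing check):

```python
import re

from django.utils.translation import ugettext as _

from zerver.lib.exceptions import JsonableError

def check_valid_emoji_name(emoji_name: str) -> None:
    if emoji_name:
        if re.match(r"^[0-9a-z.\-_]+(?<![.\-_])$", emoji_name):
            return
        raise JsonableError(_("Invalid characters in emoji name"))
    raise JsonableError(_("Emoji name is missing"))
```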
The error is PGroonga-specific, since `pgroonga_query_extract_keywords`
does not handle empty string inputs correctly. This commit prevents
search narrows from having empty operands.
Closes #14405
Generated by `pyupgrade --py3-plus --keep-percent-format` on all our
Python code except `zthumbor` and `zulip-ec2-configure-interfaces`,
followed by manual indentation fixes.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
An option is added to the video_chat_provider setting for disabling
video calls.
The video call icon is hidden in two cases:
1. video_chat_provider is set to disabled.
2. video_chat_provider is set to Jitsi and settings.JITSI_SERVER_URL
is None.
Relevant tests are added and modified.
Fixes #14483
This adds a new realm setting: default_code_block_language.
This PR also adds a new widget to specify a language, which
behaves somewhat differently from other widgets of the same
kind; instead of exposing methods to the whole module, we
just create a single IIFE that handles all the interactions
with the DOM for the widget.
We also move the code for remapping languages to the format_code
function, since we want to preserve the original language to decide
whether to override it using default_code_block_language.
Fixes #14404.
We use retry_event in queue_processors.py to handle retrying on failures,
without getting stuck in permanent retry loops if the event ends up
leading to failure on every attempt and we just keep sending NACK to
rabbitmq forever (or until the channel crashes). Tornado queues haven't
been using this, but they should.
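The pattern, roughly (retry_event's behavior follows
zerver/lib/queue.py; the consumer body and queue name are
illustrative):

```python
import logging
from typing import Any, Dict

from zerver.lib.queue import retry_event

class TornadoQueueConsumer:  # illustrative
    def consume(self, event: Dict[str, Any]) -> None:
        try:
            process_notification(event)  # the actual Tornado work
        except Exception:
            def failure_processor(event: Dict[str, Any]) -> None:
                logging.warning("Maximum retries exceeded for event %s", event)
            # retry_event bumps event['failed_tries'] and requeues the
            # event, handing it to failure_processor once it has failed
            # too many times, instead of NACKing to rabbitmq forever.
            retry_event("tornado_events", event, failure_processor)
```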
Semaphore currently has two different versions of their product -
Classic and 2.0. This commit adds support for Semaphore 2.0, alongside
Semaphore Classic, using the same webhook. This lets the integration
work seamlessly for users who have already configured a Zulip
integration in their Semaphore 2.0 projects.
Semaphore 2.0 currently only supports GitHub and their payloads do not
contain URLs for common entities like commits, pull requests and tags. We
construct URLs for them using templates, but also try to support other
services by providing notifications without URLs.
Closes #14171
Co-authored-by: Puneeth Chaganti <punchagan@muse-amuse.in>
The change in 180d8abed6, while correct
for the Django part of the codebase, had the nasty side effect of
exposing a failure mode in the process_notification logic if the users
list was empty.
This, in turn, could cause our process_notification code to fail with
an IndexError when trying to process the event, which would result in
that tornado process not automatically recovering, due to the outer
try/except handler for consume triggering a NACK and thus repeating
the event.
If we had a rule like "max 3 requests in 2 seconds", there was an
inconsistency between is_ratelimited() and get_api_calls_left().
If you had:
request #1 at time 0
request #2 and #3 at some times < 2
The next request, if made exactly at time 2, would not get ratelimited,
but if get_api_calls_left was called, it would return 0. This was due
to an inconsistency on the boundary - the check in is_ratelimited was
exclusive, while get_api_calls_left used zcount, which is inclusive.
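The mismatch in miniature (a standalone illustration, not the
Redis-backed code):

```python
# "max 3 requests in 2 seconds"; three requests so far:
timestamps = [0.0, 1.2, 1.8]
now, window = 2.0, 2.0

# is_ratelimited used an exclusive boundary: the request at time 0 has
# aged out at exactly now == 2, so a fourth request would be allowed.
in_window_exclusive = sum(1 for t in timestamps if t > now - window)   # == 2

# get_api_calls_left used ZCOUNT, which is inclusive: the request at
# time 0 still counts, so it reported 0 calls left.
in_window_inclusive = sum(1 for t in timestamps if t >= now - window)  # == 3
```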
The time_reset value returned from api_calls_left() was a timestamp,
but it was mistakenly treated as delta seconds. We change the return
value of api_calls_left() to be delta seconds, to be consistent with
the return value of rate_limit().
The information used to be stored in a request._ratelimit dict, but
there's no need for that, and a list is a simpler structure, so this
allows us to simplify the plumbing somewhat.
That's the value that matters to the code that catches the exception,
and this change allows simplifying the plumbing somewhat, and gets rid
of the get_rate_limit_result_from_request function.
This commit contains a few cleanups:
* In order to scale better for adding multiple commands,
the message formatting and setting switch logic was
extracted to its own function.
* The command lists were removed, as the frontend parses
the slash command from the compose box, and only sends
a single command to the backend for any given command
alias typed.
* The `switch_command` logic was removed because, given
  the aforementioned fact, the index of the command will
  always be the same, and thus so will the resulting
  switch command.
* Switched to using early returns as opposed to nested
  conditionals, and removed single-use variable
  declarations.
This is a prep commit for generating /team page data using a cron
job. The zerver/tests directory is not present in a production
installation, so move the file from the tests directory to tools.
This commit reuses the existing infrastructure for moving a topic
within a stream to add support for moving topics from one stream to
another.
Split from the original full-feature commit so that we can merge just
the backend, which is finished, at this time.
This is a large part of #6427.
The feature is incomplete, in that we don't have real-time update of
the frontend to handle the event, documentation, etc., but this commit
is a good mergeable checkpoint that we can do further work on top of.
We also still ideally would have a test_events test for the backend,
but I'm willing to leave that for follow-up work.
This appears to have switched to tabbott as the author during commit
squashing some time ago, but this commit is certainly:
Co-Authored-By: Wbert Adrián Castro Vera <wbertc@gmail.com>
This commit corrects test_change_stream_policy_requires_realm_admin
by setting the date_joined of the user in the tests themselves.
test_non_admin is added to avoid duplication of code.
Code is added for checking success on changing stream_post_policy
by admins.
This used to show a blank page. Considering that the links remain valid
only for 15 seconds, it's important to show something more informative to
the user.
This breaks provisioning because running this at import time would
require language_name_map.json to be generated by `manage.py
compilemessages` before we can run any management commands :(.
We could potentially fix this in the future by changing the generated
language files to be things we commit to the project.
This is a prep commit for making use of the same choices for
create_stream_policy and invite_to_stream_policy, as both fields
have the same set of choices.
This will be useful as we add other fields using these same types.
This commit replaces WAITING_PERIOD with FULL_MEMBERS in the
create_stream_policy and invite_to_stream_policy choices, to achieve
consistency and make the variable names more descriptive.
This simplifies the update_display_settings endpoint to use REQ for
validation, rather than custom if/else statements.
The test changes just take advantage of the now more consistent
syntax.
This changes the payload that is used
to populate `page_params` for the webapp,
as well as responses to the once-every-50-seconds
presence pings.
Now our dictionary of users only has these
two fields in the value:
- active_timestamp
- idle_timestamp
Example data:
{
6: Object { idle_timestamp: 1585746028 },
7: Object { active_timestamp: 1585745774 },
8: Object { active_timestamp: 1585745578,
idle_timestamp: 1585745400}
}
We only send the slimmer type of payload
to clients that have set `slim_presence`
to True.
Note that this commit does not change the format
of the event data, which still looks like this:
{
website: {
client: 'website',
pushable: false,
status: 'active',
timestamp: 1585745225
}
}
This commit migrates the Zulip outgoing webhook payload to
/zulip-outgoing-webhook:post in OpenAPI.
Since this migrates the last payloads from api/fixtures.json to
OpenAPI, this commit removes the api/fixtures.json file and the functions
accessing the file.
Tweaked by tabbott to further remove an unnecessary conditional.
This is critical for importing the very first realm into an empty
server, since in 27b15a9722, we changed
the model to create the internal realm when the first real realm would
be created, but neglected the data import code path.
The distinction between ValueError and TypeError
is not useful in these functions:
- extract_stream_indicator
- extract_private_recipients (or its callees)
These are always invoked in views to validate
user input.
When we use REQ to wrap the validators, any
Exception gets turned into a JsonableError, so
the distinction is not important.
And if we don't use REQ to wrap the validators,
the errors aren't caught.
Now we just let these functions directly produce
the desired end result for both codepaths.
Also, we now flag the error strings for translation.
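A sketch of the resulting shape of one of these validators (list
handling is elided, and the error string is illustrative):

```python
from typing import Union

import ujson
from django.utils.translation import ugettext as _

from zerver.lib.exceptions import JsonableError

def extract_stream_indicator(s: str) -> Union[str, int]:
    try:
        data = ujson.loads(s)
    except ValueError:
        # If there was no JSON encoding, assume it's a raw stream name.
        data = s
    if isinstance(data, (str, int)):
        return data  # a stream name or a stream id
    raise JsonableError(_("Invalid data type for stream"))
```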
This setting is being overridden by the frontend since the last
commit, and the security model is clearer and more robust if we don't
make it appear as though the markdown processor is handling this
issue.
Co-authored-by: Tim Abbott <tabbott@zulipchat.com>
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Zulip's modal_link markdown feature has not been used since 2017; it
was a hack used for a 2013-era tutorial feature and was never used
outside that use case.
Unfortunately, its sloppy implementation was exposed in the markdown
processor for all users, not just the tutorial use case.
More importantly, it was buggy, in that it did not validate the link
using the standard validation approach used by our other code
interacting with links.
The right solution is simply to remove it.
This makes it relatively easy for a system administrator to
temporarily override these values after a desktop app security
release that they want to ensure all of their users take.
We're not putting this in settings, since we don't want to encourage
accidental long-term overrides of these important-to-security values.
Previously, we only printed the test case when we had an assertion error.
With this change, we also include timeout errors as well as any other
causes for failure.
We now have Hamlet, not Othello, send the message
to Othello's bot, since that's a more interesting
test and less likely to lead to a false positive.
And then we simplify the recipient check to avoid
the strange mypy mess as well as possible false
negatives.
When more than one outgoing webhook is configured,
the message that is sent to the webhook bot passes
through the finalize_payload function multiple times,
which mutated the message dict in a way that many keys
were lost from the dict object.
This commit fixes that problem by having
`finalize_payload` return a shallow copy of the
incoming dict, instead of mutating it. We still
mutate dicts inside of `post_process_dicts`, though,
for performance reasons.
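In outline (the signature and the exact fields stripped are
assumptions):

```python
import copy
from typing import Any, Dict

def finalize_payload(obj: Dict[str, Any], apply_markdown: bool,
                     client_gravatar: bool) -> Dict[str, Any]:
    # Shallow-copy first, so that stripping "wide" fields below doesn't
    # mutate the shared event dict when several services process it.
    new_obj = copy.copy(obj)
    for field in ["sender_realm_id", "sender_avatar_source"]:  # e.g.
        new_obj.pop(field, None)
    return new_obj
```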
This was slightly modified by @showell to fix the
`test_both_codepaths` test that was added concurrently
to this work. (I used a slightly verbose style in the
tests to emphasize the transformation from `wide_dict`
to `narrow_dict`.)
I also removed a deepcopy call inside
`get_client_payload`, since we now no longer mutate
in `finalize_payload`.
Finally, I added some comments here and there.
For testing, I mostly protect against the root
cause of the bug happening again, by adding a line
to make sure that `sender_realm_id` does not get
wiped out from the "wide" dictionary.
A better test would exercise the actual code that
exposed the bug here by sending a message to a bot
with two or more services attached to it. I will
do that in a future commit.
Fixes #14384
If we have an old event that's missing the field
`sender_delivery_email`, we now patch it at the top
of `process_message_event`, rather than for each call
to `get_client_payload`. This will make an upcoming
commit a bit easier to reason about. Basically, it's
simpler to shim the incoming event one time rather
than doing it up to four times. We know that
`get_client_payload` is non-destructive, because it
does a deepcopy.
We now validate the message data explicitly, rather
than comparing it to the event data. This protects
us from false positives where we were only validating
that the request data was a mutated version of the
event message data. (We'll have a commit soon that
fixes a mutation-related bug.)
This code is only used in one test, and having
the indirection of setUp partly obscured a
problem with the fact that our event message
is actually a wide dict that gets mutated
by `build_bot_request`. We'll fix that soon,
but this is a pure code move for now.
The `event` parameter is never used by `process_success`,
and eliminating it allows us to greatly simplify tests
that are just confusingly passing in events that are
totally ignored.
In zulip.yaml, a simple JSON success response which only contains the
'msg' and 'result' properties has been described repeatedly in multiple
endpoints. Instead, use the SimpleSuccess template for such responses
to increase modularity and reusability.
Migrate "call_on_each_event" from api/arguments.json to
/events:real-time in OpenAPI.
This is a bit of a hack, but it lets us eliminate this secondary
arguments.json file, which is probably worth it.
Tweaked by tabbott to fix various formatting issues in the original
documentation while I was looking at it.
Most of "/message/{message_id}" is already migrated to OpenAPI. This
commit migrates the remaining payload, "update-message-edit-permission-error",
from "api/fixtures.json" to OpenAPI. This commit also fixes an error
schema in "zulip.yaml" for this payload.
We've had a bug for a while that if any ScheduledEmail objects get
created with the wrong email sender address, even after the sysadmin
corrects the problem, they'll still get errors because of the objects
stored with the wrong format.
We solve this by using FromAddress placeholder strings in the
send_future_email function, so that ScheduledEmail objects end up
setting the final `from_address` value when the mail is actually sent,
using the setting in effect at that time.
Fixes #11008.
This refactors get_single_user to use get_user_by_id instead of
call_endpoint. Doing so is only possible now that we've upgraded
python-zulip-api to a version with the new function.
Overall, this change eliminates a lot of
optional parameters and conditionals, plus
some legacy logic related to caches.
For all the places we are just editing topics,
we now just call `check_topic` to see that
the topic got updated.
For places where the topic edit failed, we
just inline the checks that message still
has the old topic and content.
And then for successful **content** edits,
we now do a more rigorous, more sane check
that the messages are properly cached. The
old code here had evolved from 2013 into
something that didn't really make much sense
in the context of editing topics.
Now we are literally pulling data from the
cache and making sure it's valid, rather
than trying to poorly simulate the two
codepaths related to dispatching message
events and fetching messages. Some of the
history here was that when I introduced
`MessageDict` several years ago, I did a
lot of code sweeping and didn't analyze every
single test to make sure it's still valid,
plus some of the tests still had some value
for catching regressions. A recent commit
now gets us coverage on that a lot more
explicitly, rather than in passing.
See the comment in the test for a thorough explanation.
In brief, this test makes sure that the events codepath
for messages produces the same results as the fetch
codepath.
And this sets us up to simplify another test that kind
of poorly tried to do the same thing in passing. (In
fairness the test was really ancient and preceded a lot
of later work that we did here.)
When we are fetching messages, we need to hydrate
stream names into the messages for legacy reasons.
(Ideally, we could skip this step for the webapp
and modern mobile clients, since they really only
need stream_ids, but we're not there yet.)
We keep a recipient cache that maps recipient ids
to stream names.
When we populate that cache, we now use `values(...)`
to avoid fat objects and extra DB work.
Note that we are already using a similar technique
for hydrating PM/huddle recipients.
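The shape of the cache fill, roughly (model and field names are
assumptions):

```python
# values() gives plain dicts with just the two columns we need, rather
# than instantiating full Stream objects only to read two attributes.
rows = Stream.objects.filter(id__in=needed_stream_ids).values("id", "name")
stream_name_by_id = {row["id"]: row["name"] for row in rows}
```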
For event types that we don't yet support, like worklog_created (and
likely many more in the future), it doesn't make sense to call a
function that only parses issue events correctly.
The previous system for documenting arguments was very ugly if any of
the examples or descriptions were wrong. After thinking about this
for a while, I concluded the core problem was that a table was the
wrong design element to use for API parameters, and we'd be much
better off with individual card-type widgets instead.
This rewrites the API arguments documentation implementation to use a
basic sort of card-like system with some basic styling; I think the
result is a lot more readable, and it's a lot more clear how we would
add additional OpenAPI details (like parameter types) to the
documentation.
This is a full-stack change:
- server
- JS code
- templates
It's all pretty simple--just use stream_id instead
of stream_name.
I am 99% sure we don't document this API nor use it
in mobile, so it should be a safe change.
This commit modifies 'zerver/lib/bot_lib.py' to decouple the
user-controllable 'service_name' parameter from the value that is
passed in to 'import_module'. This is done as a precautionary
hardening.
This commit introduces two new functions in 'url_encoding.py' which
centralize two common patterns for constructing redirect URLs. It
also migrates the files using those patterns to use the new
functions.
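As an illustration of the sort of pattern being centralized (the
helper name here is hypothetical):

```python
from urllib.parse import urlencode

def add_query_to_redirect_url(location: str, query: str) -> str:
    # Append a query string, using '&' if the URL already has one.
    return location + ("&" if "?" in location else "?") + query

# Usage sketch:
url = add_query_to_redirect_url("/login/", urlencode({"next": "/upgrade/"}))
```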
After subscribing a stream email address to a Mailman email list
and receiving a message from it (using the polling configuration
with an Exim + Dovecot mailserver), the following error message
is emitted by Zulip:
Logger zerver.lib.email_mirror, from module zerver.lib.email_mirror line 77:
Error generated by Anonymous user (not logged in) on zulip deployment
Sender: "Foo Bar" <foo@example.com>
To: No recipient found
Missing recipient in mirror email
This is because the To: header on the received email corresponds
to the email list, and there are no other headers to indicate the
final recipient, apart from the "Envelope-To" header added by
Exim. To resolve this problem, the commit adds "Envelope-To" to
the list of headers to check for a match.
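In sketch form (the exact header list, order, and helper names in
zerver/lib/email_mirror.py may differ):

```python
import email.message
from typing import Optional

# Headers checked, in order, for a usable destination address;
# "Envelope-To" is the new entry covering the Exim + Dovecot case.
RECIPIENT_HEADERS = ["X-Gm-Original-To", "Delivered-To", "Envelope-To", "To"]

def find_emailgateway_recipient(message: email.message.Message) -> Optional[str]:
    for header in RECIPIENT_HEADERS:
        for value in message.get_all(header, []):
            if is_valid_email_gateway_address(value):  # hypothetical check
                return value
    return None  # the caller logs "Missing recipient in mirror email"
```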
The function `prepare_login_url_and_headers` returns a register
link for any value of `is_signup` that is not None.
This commit changes it to a boolean for that function and other
functions using it so that it becomes much clearer when a
register link will be returned.
Also, all occurrences of `is_signup='1'` are changed to
`is_signup=True` to make the code consistent with the above change.
This allows us to block use of the desktop app with insecure versions
(we simply fail to load the Zulip webapp at all, instead rendering an
error page).
For now we block only versions that are known to be both insecure and
not auto-updating, but we can easily adjust these parameters in the
future.
This improves the error handling for invalid values of the
propagate_mode parameter to our message editing endpoints.
Previously, invalid values would just work like change_one rather than
doing nothing.
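One plausible way to express the validation with Zulip's request
machinery (treat the exact validator spelling as an assumption):

```python
from zerver.lib.request import REQ, has_request_variables
from zerver.lib.validator import check_string_in

@has_request_variables
def update_message_backend(
    request, user_profile,
    propagate_mode: str = REQ(
        default="change_one",
        str_validator=check_string_in(["change_one", "change_later", "change_all"]),
    ),
):
    ...
```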
setup_event_queue() generates some logs about loaded event queues, and
it's good for the logging system to have access to the port at that
point already.