This commit renames the 'test_message_to_self' and
'test_api_message_to_self' tests to
'test_message_to_stream_by_name' and
'test_api_message_to_stream_by_name' to reflect
the actual purpose of these tests.
This argument does not determine whether an endpoint "is a webhook";
it is set for "/api/v1/messages", which is not really a webhook, but
allows access from webhooks.
If multiple filters match the same string, we run into an infinite
loop of converting the string into URLs. To fix this, we mark the
matched string as atomic after the first conversion.
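For illustration, a minimal sketch of the mechanism using
Python-Markdown's AtomicString (the handler below is hypothetical, not
the actual filter code):

```python
from xml.etree.ElementTree import Element

from markdown.util import AtomicString

def make_link(url: str, text: str) -> Element:
    a = Element("a")
    a.set("href", url)
    # AtomicString marks the text as final: Python-Markdown's inline
    # processors skip it, so a second filter can no longer match the
    # converted string and recurse forever.
    a.text = AtomicString(text)
    return a
```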
This mimics the backend logic for adding the data-attribute
(recording which Pygments language was used to highlight the code
block) in locally echoed messages.
A new test checks our logic for canonicalizing Pygments aliases
(for both frontend and backend).
Other fixtures and tests are amended.
In development and test, we keep the Tornado port at 9993 and 9983,
respectively; this allows tests to run while a dev instance is
running.
In production, moving to port 9800 consistently removes an odd edge
case, where just one worker would be on an entirely different port
than if two workers were used.
When converting fenced code Markdown, we add the language (if
specified) in a data-attribute by tweaking the generated HTML. Doing
so allows the frontend to use this attribute to display a
view-in-playground option for code blocks.
We use Pygments to get the lexer subclass name and use that instead of
the raw language in the data-attribute. Doing so helps us map
different language aliases (like `js` and `javascript`) to a common
name (like `JavaScript`), and saves the client from dealing with
multiple tags corresponding to the same language.
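A rough sketch of the canonicalization step via Pygments (the helper
name is illustrative):

```python
from pygments.lexers import get_lexer_by_name
from pygments.util import ClassNotFound

def canonicalize_language(alias: str) -> str:
    # Both "js" and "javascript" resolve to the same lexer subclass,
    # whose name is "JavaScript".
    try:
        return get_lexer_by_name(alias).name
    except ClassNotFound:
        return alias
```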
The HTML structure for a message like this:
``` js
..content..
```
would now be:
<div class="codehilite" data-codehilite-language="JavaScript">
<pre>..content..</pre>
</div>
Tests and fixtures amended.
Fixes #16284.
Most of the work for this was done when we implemented correct
behavior for guest users, since they treat public streams like private
streams anyway.
The general method involves moving the messages to the new stream
while taking special care with UserMessage rows.
We delete UserMessage rows for subscribers who are losing access to
the messages.
For private streams with protected history, we also create UserMessage
rows for users who were not present in the old stream, since those
users need them to access the moved messages.
Previously, S3UploadBackend.delete_export_tarball failed to strip the
leading ‘/’ from the export path. This mistake is now caught by Moto
1.3.15. I expect it caused deletion failures in the real S3, although
I haven’t verified this.
We store export_path in the audit log with a leading ‘/’, but the
actual S3 keys do not have a leading ‘/’. Changing either system
would require a migration. So the new convention is that the
variables named ‘export_path’ have a leading ‘/’, while variables
named ‘path_id’ or ‘key’ do not.
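A minimal sketch of the convention (hypothetical helper, not the
actual backend code):

```python
def export_path_to_key(export_path: str) -> str:
    # 'export_path' carries a leading '/', as stored in the audit log;
    # the S3 key must not, or deletion targets a nonexistent key.
    assert export_path.startswith("/")
    return export_path[1:]
```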
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Django treats path("<name>") like re_path(r"(?P<name>[^/]+)") and
path("<path:name>") like re_path(r"(?P<name>.+)").
This is more readable and consistent than the mix of slightly
different regexes we had before, and fixes various bugs:
• The r'apps/(.*)$' regex was missing a start anchor ^, so it
incorrectly matched all URLs that included apps/ as a substring
anywhere.
• The r'accounts/login/(google)/$' regex was missing a start anchor ^,
so it incorrectly matched all URLs that ended with
accounts/login/google/.
• The type annotation of zerver.views.realm_export.delete_realm_export
takes export_id as an int, but it was previously passed as a string.
• The type annotation of zerver.views.users.avatar takes medium as a
bool, but it was previously passed as a string.
• The [0-9A-Za-z]+ pattern for uidb64 was missing the - and _
characters that can validly be part of a base64url encoded
string (although I think the id is actually a decimal integer here,
in which case only 012345ADEIMNOQTUYcgjkwxyz are present in its
base64url encoding).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Replace the default root logger with a zulip.auth.apple logger for
Apple auth in zproject/backends.py, and update the test cases
accordingly in zerver/tests/test_auth_backends.py.
Replace mock.patch with assertLogs for testing log output
in test_auth_backends.py.
This change requires adjusting
test_log_into_subdomain_when_email_is_none to use an explicit token
since that appears in the log output.
This clears it out of the data sent to Sentry, where it is duplicative
with the indexed metadata -- and potentially exposes PHI if Sentry's
"make this issue public" feature is used.
Any exception is an "unexpected event", which means talking about
having an "unexpected event logger" or "unexpected event exception" is
confusing. As the error message in `exceptions.py` already explains,
this is about an _unsupported_ event type.
This also switches the path that these exceptions are written to,
accordingly.
8e10ab282a moved UnexpectedWebhookEventType into
`zerver.lib.exceptions`, but left the import in
`zerver.lib.webhooks.common` so that webhooks could continue to
import the exception from there.
This clutters things and adds complexity; there is no compelling
reason that the exception's source of truth should not move alongside
all other exceptions.
The main race condition, which actually happened in production, was
between concurrent execution of deliver_email and
clear_scheduled_emails. clear_scheduled_emails could delete all
email.users in the middle of deliver_email's execution, causing it to
pass an empty to_user_ids list to send_email. We mitigate this by
getting the list of user ids in a single query and moving forward with
that snapshot, no longer having to worry about the database data being
mutated.
clear_scheduled_emails had potential race conditions with concurrent
execution of itself, due to not locking the appropriate rows upon
selecting them for the purpose of potentially deleting them. FOR
UPDATE locks need to be acquired to prevent simultaneous mutation.
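A sketch of the locking fix with Django's ORM (ScheduledEmail here is
a stand-in for the scheduled-email model; not the exact queue code):

```python
from django.db import transaction

def clear_scheduled_emails(user_id: int) -> None:
    # SELECT ... FOR UPDATE must run inside a transaction; it blocks
    # concurrent runs from mutating the same rows mid-decision.
    with transaction.atomic():
        emails = ScheduledEmail.objects.filter(
            users__id=user_id,
        ).select_for_update()
        for email in emails:
            email.users.remove(user_id)
            if not email.users.exists():
                email.delete()
```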
Tested manually with some print+sleep debugging to make some races
happen.
fixes #zulip-2k (sentry)
There are three functional side effects:
• Correct an insignificant but mathematically offensive bias toward
repeated characters in generate_api_key introduced in commit
47b4283c4b4c70ecde4d3c8de871c90ee2506d87; its entropy is increased
from 190.52864 bits to 190.53428 bits.
• Use the base32 alphabet in confirmation.models.generate_key; its
entropy is reduced from 124.07820 bits to the documented 120 bits, but
now it uses 1 syscall instead of 24.
• Use the base32 alphabet in get_bigbluebutton_url; its entropy is
reduced from 51.69925 bits to 50 bits, but now it uses 1 syscall
instead of 10.
(The base32 alphabet is A-Z 2-7. We could probably replace all of
these with plain secrets.token_urlsafe, since I expect most callers
can handle the full urlsafe_b64 alphabet A-Z a-z 0-9 - _ without
problems.)
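A sketch of the single-syscall version for the 120-bit case (not the
exact confirmation-key code):

```python
import base64
import secrets

def generate_key() -> str:
    # 15 random bytes = 120 bits = exactly 24 base32 characters
    # (A-Z 2-7), obtained from one urandom read instead of one
    # secrets.choice call per character.
    return base64.b32encode(secrets.token_bytes(15)).decode()
```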
Signed-off-by: Anders Kaseorg <anders@zulip.com>
For web-public streams, clients can access full topic history
without being authenticated. They only need to additionally
send the "streams:web-public" narrow with their request, like all
the other web-public queries.
This verifies that we actually do enqueue a record when there is an
error on non-staging. With the previous commit, it verifies that that
data serializes correctly.
When a user requests a realm that doesn't exist, we raise an
InvalidSubdomainError.
This saves us from repeatedly ensuring that the realm in the request
is valid in web-public queries.
Before this, the only way we took advantage of the summary from
UnexpectedWebhookEventType was by looking at exc_info().
Now we just explicitly add it to the log message, which also sets us
up to call log_exception_to_webhook_logger directly with some sort of
"summary" info when we don't actually want a real exception (for
example, we might want to report anomalous webhook data but still
continue the transaction).
A minor change in passing is that I move the payload parameter
lexically.
Our isort configuration was almost Black-compatible, but we were
missing ensure_newline_before_comments.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The query that finds and marks all unread UserMessages in the stream
as read can be quite expensive, so we move that work to the
deferred_work queue and split it into batches, as sketched below.
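A sketch of the batching pattern (batch size and flag handling are
illustrative, not the actual deferred_work code):

```python
from typing import List

from django.db.models import F

BATCH_SIZE = 1000

def mark_as_read_in_batches(user_message_ids: List[int]) -> None:
    # Updating in fixed-size chunks keeps each UPDATE (and its row
    # locks) small, instead of one huge statement over the stream.
    # UserMessage is the model holding per-user read flags.
    for i in range(0, len(user_message_ids), BATCH_SIZE):
        chunk = user_message_ids[i : i + BATCH_SIZE]
        UserMessage.objects.filter(id__in=chunk).update(
            flags=F("flags").bitor(UserMessage.flags.read)
        )
```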
Fixes #15770.
In 468c5b9a58 we changed the method of
getting the list of management commands. Using app_config.path has a
caveat in that the value depends on the path from which we're executing.
An example of things breaking can be reproduced by calling
/home/vagrant/zulip/tools/test-backend TestCommandsCanStart
This makes the app_config.path values start with /home/vagrant/zulip,
but DEPLOY_ROOT in the dev environment is set to /srv/zulip.
/home/vagrant/zulip is a soft link to /srv/zulip, so it's a valid path
to call test-backend through, but it causes self.commands to end up
being an empty list. We fix this by converting app_config.path to the
real path.
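The fix boils down to resolving the symlink before using the path
(hypothetical helper):

```python
import os

def get_commands_dir(app_config_path: str) -> str:
    # app_config.path may be reached via a symlink such as
    # /home/vagrant/zulip -> /srv/zulip; realpath resolves it so the
    # result matches DEPLOY_ROOT however test-backend was invoked.
    real_path = os.path.realpath(app_config_path)
    return os.path.join(real_path, "management", "commands")
```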
The `typing: stop` event did not have any tests in test_events,
and hence its documentation wasn't added. So add tests and relevant
documentation for the typing stop event. Also edit the documentation
of `typing: start` to include the fact that servers should use
their own timeout in case a `stop` event isn't received.
Fixes #16122.
We display the text of the consent message, and then continue with the
export, which will scroll the content off the screen. Allow the
administrator time to examine the contents of the message, and decide
whether to proceed based on that and the fraction of users that have
responded so far.
Improve the OpenAPI documentation of /zulip-outgoing-webhook by
moving data and making appropriate additions from its counterpart in
the /outgoing-webhook docs. Then remove the redundant documentation
from the doc and add a command to render the OpenAPI documentation.
Also add a test to outgoing_webhooks_interface.py to ensure that the
OpenAPI documentation is correct.
Fixes #16203.
This lets the backend tests pass if zilencer has been (manually)
removed from EXTRA_INSTALLED_APPS, by skipping the tests that require
it. test-backend complains that some URLs are untested in this case:
ERROR: Some URLs are untested! Here's the list of untested URLs:
api/v1/users/me/android_gcm_reg_id
api/v1/users/me/apns_device_token
team/
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Extracting a section for presence endpoints and using path() rather
than re_path() results in a much cleaner implementation of this
concept.
This eliminates the last case where test_openapi couldn't correctly
match an endpoint documentation with the OpenAPI definitions for it.
This renames 'group_id' to 'user_group_id' in the api docs to remove
the naming mismatch between the url config and the docs and eventually
remove the 'user_groups' endpoints from 'pending_endpoints' in
test_openapi.py.
This queue had a race condition: another Timer could be created while
maybe_send_batched_emails was still doing its work, which could cause
two or more threads to run maybe_send_batched_emails at the same time,
mutating the shared data simultaneously.
Another, less likely, race condition was that
maybe_send_batched_emails, after sending out its emails, calls
ensure_timer(). If the consume function runs simultaneously in the
main thread, it calls ensure_timer() too, which, given unfortunate
timing, could lead to both calls setting a new Timer.
We add locking to the queue to avoid such race conditions.
Tested manually, by print debugging with the following setup:
1. Making handle_missedmessage_emails sleep 2 seconds for each email,
and changing BATCH_DURATION to 1s so the queue starts working
right after launching.
2. Putting a bunch of events in the queue.
3. ./manage.py process_queue --queue_name missedmessage_emails
4. Once maybe_send_batched_emails is called and while it's processing
the events, I pushed more events to the queue. That triggers the
consume() function and ensure_timer().
Before implementing the locking mechanism, this caused two threads
to run maybe_send_batched_emails at the same time, mutating each
other's shared data and causing a traceback such as
Exception in thread Thread-3:
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 1182, in run
self.function(*self.args, **self.kwargs)
File "/srv/zulip/zerver/worker/queue_processors.py", line 507, in maybe_send_batched_emails
del self.events_by_recipient[user_profile_id]
KeyError: '5'
With the locking mechanism, things get handled as expected, and
ensure_timer() exits if it can't obtain the lock due to
maybe_send_batched_emails still working.
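The shape of the locking is roughly (names simplified from the
worker code):

```python
import threading

class MissedMessageWorker:
    lock = threading.Lock()

    def ensure_timer(self) -> None:
        # Bail out if maybe_send_batched_emails currently holds the
        # lock, instead of racing it to start a second Timer.
        if not self.lock.acquire(blocking=False):
            return
        try:
            self.start_timer_if_needed()  # hypothetical helper
        finally:
            self.lock.release()
```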
Co-authored-by: Tim Abbott <tabbott@zulip.com>
I noticed RateLimitTests.test_hit_ratelimits fails when run as an
individual test, but never when run after other tests. That's due to the
first API request in a run of tests taking a long time, as detailed in
the comment on the change to the setUp method.
Django always sets request.user to a UserProfile or AnonymousUser
instance, so it's better to mimic that in the tests where we pass
dummy request objects for rate limiter testing purposes.
Some `<img>` tags do not have a `src` attribute if they are rewritten
using JS to gain one later. Attempting to access `first_image['src']`
on these will raise an exception, as they have no such attribute.
Only look for images which have a defined `src` attribute on them. We
could instead check `first_image.has_attr('src')`, but this seems
likely only to produce fewer valid images.
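A small demonstration of the difference with BeautifulSoup:

```python
from bs4 import BeautifulSoup

html = '<img class="lazy-loaded"><img src="/static/logo.png">'
soup = BeautifulSoup(html, "html.parser")

# soup.find("img")["src"] would raise KeyError here, since the first
# <img> has no src; src=True skips src-less images entirely.
first_image = soup.find("img", src=True)
assert first_image["src"] == "/static/logo.png"
```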
The original commit was broken here:
b553507412
The intention was to run the same loop for all
settings, but instead, we did a funny loop of
just resetting schema_checker, and then we only
actually tested the last value of the loop.
Apparently, we were incorrectly using constants for title/description
rather than the nice non-constant values from og:title and
og:description in our meta tags.
This commit adds the is_web_public field to the AbstractAttachment
class. This is useful when validating user access to the attachment,
as otherwise we would have to make a database query to check whether
the attachment was sent in a message in a web-public stream.
The new Stream administrator role is allowed to manage a stream they
administer, including:
* Setting properties like name, description, privacy and post-policy.
* Removing subscribers
* Deactivating the stream
access_stream_for_delete_or_update is modified to be used only to get
objects from the database; further checking of administrative rights
is done by check_stream_access_for_delete_or_update.
We have also added a new exception class, StreamAdministratorRequired.
Via API, users can now access messages which are in web-public
streams without any authentication.
If the user is not authenticated, we assume it is a web-public
query and add the `streams:web-public` narrow if it is not already
present. Web-public streams are also directly accessible.
Any malformed narrow which is not allowed in a web-public query
results in a 400 or 401. See test_message_fetch for the allowed
queries.
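For instance, an unauthenticated fetch might look like this (the
server URL is a placeholder, and the parameters follow the usual GET
/api/v1/messages API):

```python
import json

import requests

params = {
    "anchor": "newest",
    "num_before": 10,
    "num_after": 0,
    # Without authentication, the server requires (or adds) the
    # web-public narrow; operators outside the allowed set are
    # rejected with a 400 or 401.
    "narrow": json.dumps([{"operator": "streams", "operand": "web-public"}]),
}
response = requests.get("https://chat.example.com/api/v1/messages", params=params)
```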
This adds 'user_id' to the simple success response for 'POST /users'
api endpoint, to make it convenient for API clients to get details
about users they just created. Appropriate changes have been made in
the docs and test_users.py.
Fixes #16072.
It is more suited for `process_request`, since it should stop
execution of the request if the domain is invalid. This code was
likely added as a process_response (in ea39fb2556) because there was
already a process_response at the time (added 7e786d5426, and no
longer necessary since dce6b4a40f).
It quiets an unnecessary warning when logging in at a non-existent
realm.
This stops performing unnecessary work when we are going to throw it
away and return a 404. The edge case to this is if the request
_creates_ a realm, and is made using the URL of the new realm; this
change would prevent the request before it occurs. While this does
arise in tests, the tests do not reflect reality -- real requests to
/accounts/register/ are made via POST to the same (default) realm,
redirected there from `confirm-preregistrationuser`. The tests are
adjusted to reflect real behavior.
Tweaked by tabbott to add a block comment in HostDomainMiddleware.
The exception trace only goes from where the exception was thrown up
to where the `logging.exception` call is; any context as to where
_that_ was called from is lost, unless `stack_info` is passed as well.
Having the stack is particularly useful for Sentry exceptions, which
gain the full stack trace.
Add `stack_info=True` on all `logging.exception` calls with a
non-trivial stack; we omit `wsgi.py`. Adjusts tests to match.
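Concretely, the change has this shape:

```python
import logging

def process(payload: object) -> None:
    try:
        handle(payload)  # hypothetical handler
    except Exception:
        # stack_info=True appends the stack of this call site; the
        # exception traceback alone covers only the frames between the
        # raise and this handler, losing the callers' context.
        logging.exception("Error while processing payload", stack_info=True)
```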
In f8bcf39014, we fixed buggy
marshalling of Streams and similar data structures where we were
including the Stream object rather than its ID in dictionaries passed
to ujson, and ujson happily wrote that large object dump into the
RealmAuditLog.extra_data field.
This commit includes a migration to fix those corrupted RealmAuditLog
entries, and because the migration loop is the same, also fixes the
format of similar RealmAuditLog entries to be in a more natural format
that doesn't weirdly nest and duplicate the "property" field.
Fixes #16066.
It doesn’t end well. Or sometimes it doesn’t end (OverflowError:
Maximum recursion level reached).
Introduced by commits ccdf52fef6 and
94d2de8b4a (#15601).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Our intent throughout the codebase is to treat email addresses
case-insensitively.
The only codepath affected by this bug is remote_user_sso, as that's the
only one that currently passes potentially both a user_profile and
ExternalAuthDataDict when creating the ExternalAuthResult. That's why we
add a test specifically for that codepath.
Use a `ujson.loads(ujson.dumps(...))` wrapper on events sent for
OpenAPI testing, so that all tuples are converted into arrays, as
tuples aren't valid in JSON.
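For example:

```python
import ujson

event = {"message_ids": (42, 43)}
# JSON has no tuple type, so a dump/load round trip normalizes tuples
# to lists before the event is validated against the OpenAPI schema.
assert ujson.loads(ujson.dumps(event)) == {"message_ids": [42, 43]}
```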
To make it easier to check if there is user information to be used
in the error report emails, we create a user object inside the report.
Now, to check if we have the user's full name, email, etc., we just
need to check report['user']['user_full_name'] rather than checking
each field one by one, because if the value of one key in the report
is not None, all the others will be as well.
The S3 data export tool's upload code path uses this nice boto
callback feature for showing a progress bar, which is nice for the
management command. It's spammy/broken in production and the backend
tests, so we change percent_callback to be a parameter passed in so
that it can only be used in the contexts where it makes sense.
This reverts commit c3779338c6 (part
of #14638), which incorrectly depended on commits from the future,
with the effect of either halting the flow of entropic time in an
irresolvable temporal paradox, summoning extradimensional beings to
rain destruction on the galaxy, or failing CI.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This commit modifies the /streams endpoint so that the web-public
streams are included in the default list of streams that users
have access to.
This is part of PR #14638 that aims to allow guest users to
browse and subscribe themselves to web public streams.
Modifies filter_stream_authorization so that web-public streams are
added to the list of authorized streams that a guest user can
subscribe to.
This commit is part of PR #14638 that aims to allow guest users
to browse and subscribe to web-public streams.
In this commit, we grant guest users access to stream history,
message sending, and common stream data for web-public streams.
This is part of PR #14638 that aims to allow guest users to
browse and subscribe to web-public streams.
This modification allows guest users to have access to the
subscribers of web-public streams, even if they aren't subscribed or
never subscribed to that stream.
This commit is part of PR #14638 that aims to allow guest users to
browse and subscribe to web-public streams.
Now, gather_subscriptions includes web-public streams in the three
sets of streams that it returns: subscribed, unsubscribed, and never
subscribed.
This is part of PR #14638 that aims to allow guest users to browse and
subscribe to web-public streams.
Unlike the other Python datetime to Unix timestamp conversion
function (`datetime_to_timestamp`), `datetime_to_precise_timestamp`
won't drop the microseconds.
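A sketch of the distinction (function bodies here are illustrative):

```python
from datetime import datetime, timezone

def datetime_to_timestamp(dt: datetime) -> int:
    return int(dt.timestamp())  # drops the microseconds

def datetime_to_precise_timestamp(dt: datetime) -> float:
    return dt.timestamp()  # keeps them

dt = datetime(2020, 8, 1, 12, 30, 45, 123456, tzinfo=timezone.utc)
assert datetime_to_timestamp(dt) == 1596285045
assert datetime_to_precise_timestamp(dt) == 1596285045.123456
```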
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
The Apple developer webapp consistently refers to this as the App ID,
so this naming clears up any confusion that could occur.
Since python-social-auth only requires us to include the App ID in
_AUDIENCE (a list), we do that in computed settings, making it easier
for server admins, and we make it clearer by setting it to APP_ID
instead of BUNDLE_ID.
Uses the git release, as version 3.4.0 is not released to PyPI.
This is required for removing some overridden functions of the
apple auth backend class AppleAuthBackend.
With the update, we also make the following changes:
* Fix full name being populated as "None None".
c5c74f27dd, which is included in the update, assigns first_name and
last_name to None when no name is provided by Apple. Due to this, our
code fills return_data['full_name'] with 'None None'. This commit
fixes it by making the first and last names empty strings.
* Remove decode_id_token override.
Python social auth merged the PR we sent including the changes
we made to decode_id_token function. So, now there is no
necessity for the override.
* Add _AUDIENCE setting in computed_settings.py.
`decode_id_token` is dependent on this setting.
Edit the function `validate_against_openapi_schema` and add some
helper functions to allow for validation of documented events.
Also add OpenAPI response validation in `verify_action` as it is
called in a large number of `/events` tests.
This lets us test the recursion bug behavior of this logging handler
without resulting in `logging.error` output being printed to the
console in the event that the test passes.
Some parameters such as `to` and `topic` have been intentionally
undocumented, and hence fail request validation. So mark tests which
fail due to this accordingly.
Change the condition for allowing failed validation to: if the test
fails, the response status code begins with 4. Also add an
`intentionally_undocumented` argument to `validate_request` to allow
tests to pass which return `200` responses but fail validation due to
some intentionally undocumented feature in the OpenAPI specification.