We set the stream_weekly_traffic field to "null" for Subscription
objects in the zephyr mirror realm, as we do not need stream traffic
data in the zephyr mirror realm. This makes the subscription data
consistent with the streams data.
This commit also updates the tests to check never_subscribed data for
the zephyr mirror realm.
Instead of having a "realm.is_zephyr_mirror_realm" check for every
get_streams_traffic call, this commit updates get_streams_traffic to
accept the realm as a parameter and return "None" for zephyr mirror realms.
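A minimal sketch of the new signature, assuming the actual traffic
lookup is delegated to an existing helper; the body here is
illustrative, not the real implementation:
```python
from typing import Dict, Optional, Set

def get_streams_traffic(stream_ids: Set[int], realm) -> Optional[Dict[int, int]]:
    # realm is the zerver Realm model instance.
    if realm.is_zephyr_mirror_realm:
        # Zephyr mirror realms never need stream traffic data.
        return None
    return query_recent_traffic_for_streams(stream_ids)  # hypothetical helper
```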
Almost all users of JsonSuccessBase seem to also include
SuccessDescription. /server_settings used a different description from
the rest of the JsonSuccessBase users, but the difference is small
enough that using the generic description of the former
SuccessDescription is fine.
Migrates existing ScheduledEmails for onboarding emails that have
either "zerver/emails/followup_day1" or "zerver/emails/followup_day2"
as the email template prefix to instead use the new template
prefixes "zerver/emails/account_registered" and
"zerver/emails/zulip_onboarding_topics".
Now that we're using the new templates for the onboarding emails,
remove "followup_day1" and "followup_day2" from the EMAIL_TYPES
that are used for scheduled emails.
The "followup_day2" email template name is not clear or descriptive
about the purpose of the email. Creates a duplicate of those email
template files with the template name "zulip_onboarding_topics".
Because any existing scheduled emails that use the "followup_day2"
templates will need to be updated before the current templates can
be removed, we don't do a simple file rename here.
The "followup_day1" email template name is not clear or descriptive
about the purpose of the email. Creates a duplicate of those email
template files with the template name "account_registered".
Because any existing scheduled emails that use the "followup_day1"
templates will need to be updated before the current templates can
be removed, we don't do a simple file rename here.
In servers with `application_server.http_only = true` and
`loadbalancer.ips` set, the DetectProxyMisconfiguration middleware
prevents access over HTTP from IP addresses other than the
loadbalancer.
However, this misses the case of access from localhost over HTTP,
which is safe and expected -- for instance, the `email-mirror-postfix`
script used in the email gateway[^1] will post to `http://localhost/`
by default in such configurations. With the
DetectProxyMisconfiguration middleware installed, this would result
in a 403 response.
Exempt requests from `127.0.0.1` and `::1` from
proxy-misconfiguration rejections.
[^1]: https://zulip.readthedocs.io/en/latest/production/email-gateway.html
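A minimal sketch of the exemption check inside the middleware;
inspecting `REMOTE_ADDR` is standard Django, but the helper name and
surrounding control flow are illustrative:
```python
from django.http import HttpRequest

def is_exempt_local_request(request: HttpRequest) -> bool:
    # Direct requests from localhost (e.g. email-mirror-postfix
    # posting to http://localhost/) are safe and expected, so the
    # middleware should never treat them as a proxy misconfiguration.
    return request.META["REMOTE_ADDR"] in ("127.0.0.1", "::1")
```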
Removes a response example in the `POST users/me/subscriptions`
documentation that was listed as a 400 error response. It is
actually a variation on the success response for this endpoint.
The current rendering of our API documentation is not set up to
support `"anyOf"` which would allow for validating examples that
match multiple response schemas.
This migration applies under the assumption that extra_data_json has
been populated for all existing and future audit log entries.
- This removes the manual conversions back and forth for extra_data
throughout the codebase including the orjson.loads(), orjson.dumps(),
and str() calls.
- The custom handler used for converting Decimal is removed since
DjangoJSONEncoder handles that for extra_data.
- We remove None-checks for extra_data because it is now no longer
nullable.
- Meanwhile, we want the bouncer to support processing RealmAuditLog entries for
remote servers before and after the JSONField migration on extra_data.
- Since extra_data should now always be a dict for newer remote
servers, which have been migrated, the test cases are updated to
create RealmAuditLog objects by passing a dict for extra_data before
sending over the analytics data. Note that while JSONField allows for
non-dict values, a proper remote server always passes a dict for
extra_data.
- We still test out the legacy extra_data format because not all
remote servers have migrated to use JSONField extra_data.
This verifies that support for extra_data being a string or None has not
been dropped.
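A minimal sketch of how the bouncer can accept both formats; the
helper name is hypothetical, and real handling of legacy encodings
may differ:
```python
import orjson

def normalize_extra_data(extra_data):
    # Migrated remote servers send a dict; legacy servers send a
    # JSON-encoded string, or None when there is no extra data.
    if extra_data is None:
        return {}
    if isinstance(extra_data, str):
        return orjson.loads(extra_data)
    return extra_data
```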
Co-authored-by: Siddharth Asthana <siddharthasthana31@gmail.com>
Signed-off-by: Zixuan James Li <p359101898@gmail.com>
This commit removes the private stream subscriptions of the bot if the
original owner is deactivated and we change the owner to the user who
is reactivating the bot. We unsubscribe the bot from private streams
that the new owner is not subscribed to.
Fixes part of #21700.
We remove the bot's subscriptions for private streams to which the
new owner is not subscribed and keep the ones to which the new
owner is subscribed when changing the owner.
This commit also changes the code for sending subscription
remove events to use transaction.on_commit, since we call
the function inside a transaction in do_change_bot_owner. This
also requires some changes to tests in test_events.
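A minimal sketch of the deferral pattern; `send_event` stands in for
Zulip's event-sending helper, and the surrounding names are
illustrative:
```python
from django.db import transaction

def send_subscription_remove_event_on_commit(realm, event, user_ids) -> None:
    # do_change_bot_owner runs inside a transaction, so defer the
    # event until commit; clients should never see an event for
    # state that could still be rolled back.
    transaction.on_commit(lambda: send_event(realm, event, user_ids))
```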
Fixes the `/api/register-queue` endpoint documentation so that
`realm_emoji` has the correct type: an object that contains objects.
By correcting the API documentation, we also fix an error in the
test for the events system, which had been relying on the API
documentation having a list as a possible type for `realm_emoji`
in the register response.
Earlier, for topic wildcard mentions, the 'wildcard_mentioned'
flag was set for all the user messages (similar to stream wildcard
mentions).
The flag should be set for the topic participants only.
The bug was introduced in 4c9d26c.
For topic wildcard mentions, the 'wildcard_mentioned' flag is set
for those user messages having 'user_profile_id' in
'topic_participant_user_ids', i.e. all topic participants.
Earlier, the flag was set if the 'user_profile_id' exists in
'all_topic_wildcard_mention_user_ids'.
'all_topic_wildcard_mention_user_ids' contains the ids of those
users who are topic participants and have enabled notifications
for '@topic' mentions.
The earlier approach was incorrect, as it would set the
'wildcard_mentioned' flag only for those topic participants
who have enabled the notifications for '@topic' mention instead
of setting the flag for all the topic participants.
The bug was introduced in 4c9d26c.
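A minimal sketch of the corrected check, using the set names from
this commit; the surrounding flag-computation code is illustrative:
```python
def is_topic_wildcard_mentioned(user_profile_id, topic_participant_user_ids) -> bool:
    # Fixed check: flag every topic participant, not just those who
    # enabled notifications for '@topic' mentions (the buggy version
    # checked all_topic_wildcard_mention_user_ids instead).
    return user_profile_id in topic_participant_user_ids
```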
This commit addresses the issue where the topic highlighting
in search results was offset by one character when an
apostrophe was present. The problem stemmed from a disparity
between the HTML escaping generated by the function
`func.escape_html`, which is used to obtain `topic_matches`, and
the escaping performed by the function `django.utils.html.escape`
for apostrophes (').
func.escape_html | django.utils.html.escape
-----------------+--------------------------
&#39;            | &#x27;
To fix this, the SQL query is changed to return the HTML-escaped
topic name generated by the function `func.escape_html`.
Fixes: #25633.
Currently, we display the "Complete the organization profile"
banner immediately after the organization is created. It's important
to strongly encourage orgs to configure their profile, so we should
delay the banner, showing it only if the profile has still not been
configured 15 days after the organization was created. This also
allows users to check out Zulip and see how it works before
configuring the organization settings.
Fixes: #24122.
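A minimal sketch of the gating condition, assuming the threshold is
measured from the realm's creation time; the profile-check helper is
hypothetical:
```python
from datetime import timedelta

from django.utils.timezone import now

def should_show_profile_banner(realm) -> bool:
    # Show the banner only once the organization is at least 15 days
    # old and its profile has still not been configured.
    old_enough = now() >= realm.date_created + timedelta(days=15)
    return old_enough and not is_profile_configured(realm)  # hypothetical helper
```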
Previously, test_openapi_arguments had assumed that an endpoint
not using rest_dispatch used the GET method for the request. This
was not the case for the "/fetch_api_key" and "/dev_fetch_api_key"
endpoints, which is why those endpoints were marked as pending
even though they were documented in `zerver/openapi/zulip.yaml`.
Updates test_openapi_arguments to check a set of endpoints that
are documented and don't use the GET method so that these endpoints
can be tested and removed from the pending_endpoints set.
This adds API support to reorder linkifiers and makes sure that the
returned lists of linkifiers from `GET /events`, `POST /register`, and
`GET /realm/linkifiers` are always sorted in the order in which they
should be processed when rendering linkifiers.
We set the new `order` field to the ID with the migration. This
preserves the order of the existing linkifiers.
Newly added linkifiers will always be ordered last. When reordering,
the `order` field of all linkifiers in the same realm is updated, in
a manner similar to how we implement ordering for
`custom_profile_fields`.
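A minimal sketch of the reorder operation, assuming linkifiers live
on the RealmFilter model with the new `order` field; the function
name and request shape are illustrative:
```python
def do_reorder_linkifiers(realm, ordered_linkifier_ids) -> None:
    # Reassign `order` for every linkifier in the realm so the stored
    # order matches the requested processing order.
    linkifiers = {
        linkifier.id: linkifier
        for linkifier in RealmFilter.objects.filter(realm=realm)
    }
    for order, linkifier_id in enumerate(ordered_linkifier_ids):
        linkifiers[linkifier_id].order = order
    RealmFilter.objects.bulk_update(linkifiers.values(), ["order"])
```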
The curl examples of reordering linkifiers require there to be some
linkifiers in the database to be reordered. This adjusts some test cases
so they do not assume that there is no linkifier in the test db.
Each unittest subTest can fail without interrupting the other subTests.
By wrapping the test for each view function, we can get all validation
errors at once, which can be useful if multiple endpoints are updated.
More importantly, if the test fails anywhere inside test_openapi but
before the formatted output is printed, we will not lose the information
of which view function fails the validation. Because we attach the name
of the function to the subTest:
```
FAIL: test_openapi_arguments (zerver.tests.test_openapi.OpenAPIArgumentsTest) [zerver.views.alert_words.add_alert_words]
```
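A minimal sketch of the wrapping; the per-function validation call is
illustrative:
```python
for function_name, function in sorted(view_functions.items()):
    with self.subTest(function_name):
        # A failure here is reported with the view function's name
        # and does not abort validation of the remaining functions.
        self.validate_against_openapi(function_name, function)  # hypothetical
```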
The number of affected objects may be quite high, and they are
selected by an `id IN (...)` query and updated with a giant `CASE`.
This turns out to be quadratic, and can cause large queries to take
hours, in a state where they cannot be terminated, when PostgreSQL >11
tries to JIT the query.
Set a batch_size as a stopgap performance fix before moving to
`.update()` as a real fix.
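A minimal sketch of the stopgap, using Django's bulk_update; the
model, field, and batch size here are illustrative:
```python
# Batching keeps each generated CASE statement small enough to avoid
# the quadratic cost when PostgreSQL >11 tries to JIT the query.
RealmAuditLog.objects.bulk_update(rows, ["extra_data_json"], batch_size=1000)
```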
We have historically cached two types of values
on a per-request basis inside of memory:
* linkifiers
* display recipients
Both of these caches were hand-written, and they
both actually cache values that are also in memcached,
so the per-request cache essentially only saves us
from a few memcached hits.
I think the linkifier per-request cache is a necessary
evil. It's an important part of message rendering, and
it's not super easy to structure the code to just get
a single value up front and pass it down the stack.
I'm not so sure we even need the display recipient
per-request cache any more, as we are generally pretty
smart now about hydrating recipient data in terms of
how the code is organized. But I haven't done thorough
research on that hypothesis.
Fortunately, it's not rocket science to just write
a glorified memoize decorator and tie it into key
places in the code:
* middleware
* tests (e.g. asserting db counts)
* queue processors
That's what I did in this commit.
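A minimal sketch of such a decorator, with illustrative names; the
real version also ties flushing into middleware, tests, and queue
processors:
```python
FUNCTION_NAME_TO_PER_REQUEST_CACHE: dict = {}

def return_same_value_during_entire_request(f):
    # Memoize f for the duration of one request, keyed by its first
    # argument (e.g. a realm id or recipient id).
    cache: dict = {}
    FUNCTION_NAME_TO_PER_REQUEST_CACHE[f.__name__] = cache

    def wrapper(key, *args, **kwargs):
        if key not in cache:
            cache[key] = f(key, *args, **kwargs)
        return cache[key]

    return wrapper

def flush_per_request_caches() -> None:
    # Called at the end of each request so stale values never leak
    # across requests.
    for cache in FUNCTION_NAME_TO_PER_REQUEST_CACHE.values():
        cache.clear()
```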
This commit definitely reduces the amount of code
to maintain. I think it also gets us closer to
possibly phasing out this whole technique, but that
effort is beyond the scope of this PR. We could
add some instrumentation to the decorator to see
how often we get a non-trivial number of saved
round trips to memcached.
Note that when we flush linkifiers, we just use
a big hammer and flush the entire per-request
cache for linkifiers, since there is only ever
one realm in the cache.