For spectators, check locally, without sending any request to the
server, whether the hash requires authentication or points to a
feature that requires authentication; if it does, we show the
login_to_access modal to the user.
We will use this modal for any narrow/hash or other UI element that
requires an actual account, so that something reasonable happens when
a user clicks on those things.
aioapns already has a retry loop. By default it retries forever on
ConnectionError and ConnectionClosed, so our own retry loop would
never be reached. Remove our retry loop, and configure aioapns to
retry APNS_MAX_RETRIES times on ConnectionError like the previous
version did. It still retries forever on ConnectionClosed; that’s not
configurable but probably fine.
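Roughly, the new configuration looks like this (a sketch, assuming
aioapns exposes the retry limit via its `max_connection_attempts`
argument; the real setup in Zulip's push notification code differs):
```python3
# Sketch only: bound retries on ConnectionError via aioapns itself,
# instead of wrapping the send call in our own retry loop.
from aioapns import APNs

APNS_MAX_RETRIES = 10  # illustrative value


def make_apns_client() -> APNs:
    return APNs(
        client_cert="/path/to/apns-cert.pem",      # placeholder credentials
        topic="org.zulip.Zulip",                   # placeholder APNs topic
        max_connection_attempts=APNS_MAX_RETRIES,  # retries on ConnectionError
    )
```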
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This utilizes the generic `BaseNotes` we added for multipurpose
patching. With this migration as an example, we can further support
more types of notes to replace the monkey-patching approach we have used
throughout the codebase for type safety.
The motivation for adding `BaseNotes` was to support monkey-patching
temporary attributes onto objects (such as `.trigger` on `Message`)
when working on the django-stubs migration in #18777.
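As a rough illustration of the pattern (not Zulip's exact
implementation; `MessageNotes`, the stand-in `Message` class, and the
id-keyed storage are just for this sketch):
```python3
from typing import Any, Dict, Generic, TypeVar

T = TypeVar("T")


class Message:
    ...  # stand-in for the real Django model


class BaseNotes(Generic[T]):
    """Typed side storage for temporary per-object data, replacing
    monkey-patched attributes like `message.trigger`."""

    def __init_subclass__(cls, **kwargs: Any) -> None:
        super().__init_subclass__(**kwargs)
        cls._notes_map: Dict[int, Any] = {}  # one table per notes class

    @classmethod
    def get_notes(cls, obj: T) -> Any:
        # Lazily create the notes object on first access; real code would
        # also need to clear this map once the request/test is done.
        return cls._notes_map.setdefault(id(obj), cls())


class MessageNotes(BaseNotes[Message]):
    trigger: str = ""  # a typed field instead of an ad-hoc attribute


message = Message()
MessageNotes.get_notes(message).trigger = "mention"
```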
This way, we no longer have to manually keep the import code's upload
paths in sync with the upload path code in zerver/lib/upload.py.
This was originally suggested in
https://github.com/zulip/zulip/pull/19478#issuecomment-911479530.
This change fixes a bug when importing into a server using the local
file uploads backend, where the `import_realm.py` copy wasn't using
our standard 256-directory approach to avoid putting too many files in
a single directory.
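For illustration, the 256-directory scheme looks roughly like this
(the exact path layout in zerver/lib/upload.py may differ in detail):
```python3
import os
import secrets


def local_upload_path(realm_id: int, filename: str) -> str:
    # A random one-byte hex directory ("0".."ff") spreads uploads across
    # up to 256 subdirectories, so no single directory grows huge.
    shard = format(secrets.randbelow(256), "x")
    return os.path.join(str(realm_id), shard, secrets.token_urlsafe(18), filename)


print(local_upload_path(2, "photo.png"))  # e.g. "2/a3/kTq0.../photo.png"
```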
de04f0ad67 changed how notification recipients were calculated, in a
manner that caused them to be sent when they should not have been.
ac70a2d2e1 was supposed to resolve this, but appears to have been
insufficient, as all three of these cases have been observed to still
happen.
Add safety checks immediately before notification, until the
underlying logic error can be sussed out.
This information can be gleaned from the stacktrace, but making it
explicit in the stringification makes it much easier to differentiate
types of errors at a glance, particularly in Sentry.
We move the emojiset_choices method from the UserProfile class to the
UserBaseSettings class, because the emojiset setting itself lives in
UserBaseSettings and will be used for realm-level settings as well,
along with the existing user-level settings.
We rename the user group with is_system_group=True in the example for
'GET /user_groups' to 'Moderators', since is_system_group will be set
to True only for role-based user groups.
The default is kept as no retries. Since retries with exponential
backoff are a good thing to make easy, the int form defaults to
setting a backoff_factor.
Unfortunately, urllib3 retry backoff does not implement jitter.
Switching this to use the `backoff` library[1] rather than urllib3's
native Retry is left as a future extension.
[1] https://pypi.org/project/backoff/
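For reference, a minimal sketch of wiring urllib3's Retry with a
backoff_factor onto a requests session (illustrative names; not
Zulip's actual session wrapper):
```python3
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# Retries sleep roughly backoff_factor * 2**n between attempts;
# urllib3 adds no jitter.
retry = Retry(total=3, backoff_factor=1)
adapter = HTTPAdapter(max_retries=retry)
session.mount("https://", adapter)
session.mount("http://", adapter)

response = session.get("https://example.com/api", timeout=5)
```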
This adds the X-Smokescreen-Role header to proxy connections, to track
usage from various codepaths, and enforces a timeout. Timeouts were
kept consistent with their previous values, or set to 5s if they had
none previously.
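For illustration, roughly what a tagged, time-limited proxied request
looks like (the role name and proxy address below are assumptions, not
values from this change):
```python3
import requests

session = requests.Session()
# Hypothetical role name; Smokescreen logs can then attribute usage.
session.headers["X-Smokescreen-Role"] = "outgoing_webhooks"
session.proxies = {
    "http": "http://localhost:4750",   # assumed Smokescreen address
    "https": "http://localhost:4750",
}

# 5 seconds, matching the default applied to codepaths that previously
# had no timeout.
response = session.get("https://example.com/endpoint", timeout=5)
```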
Ideally, we will also add a /help/ page explaining these with an
example, but it seems appropriate for the labels to frame this around
what sort of emails we actually intend to send (newsletters with
updates about Zulip, not a "drip campaign").
This commit removes some unnecessary checks for `self.md.zulip_message`,
which were put there historically, as earlier we used to add additional
properties like mentions_user_ids, alert_words, etc. only to the Message
dict. These were later moved to the MessageRenderingResult class in
commit 75cea329b, but the checks weren't removed.
This is important because, while rendering messages imported from
other chat tools (like Rocket.Chat), the Message dict is not passed to
the markdown processor, so the checks for `self.md.zulip_message` fail
and, as a result, things like user mentions and stream/topic mentions
are not rendered properly in the imported messages.
The `user_activity_interval` worker calls:
```python3
last = UserActivityInterval.objects.filter(user_profile=user_profile).order_by("-end")[0]
```
which results in a query like:
```sql
SELECT "zerver_useractivityinterval"."id", "zerver_useractivityinterval"."user_profile_id", "zerver_useractivityinterval"."start", "zerver_useractivityinterval"."end" FROM "zerver_useractivityinterval" WHERE "zerver_useractivityinterval"."user_profile_id" = 12345 ORDER BY "zerver_useractivityinterval"."end" DESC LIMIT 1
```
For users which have at least one matching row, this results in a
query plan like:
```
Limit (cost=0.56..711.38 rows=1 width=24) (actual time=0.078..0.078 rows=1 loops=1)
-> Index Scan Backward using zerver_useractivityinterval_7f021a14 on zerver_useractivityinterval (cost=0.56..1031399.46 rows=1451 width=24) (actual time=0.077..0.078 rows=1 loops=1)
Filter: (user_profile_id = 12345)
Rows Removed by Filter: 98
Planning Time: 0.059 ms
Execution Time: 0.088 ms
```
But for users that have just been created, with no matching rows, this
is considerably more expensive:
```
Limit (cost=0.56..711.38 rows=1 width=24) (actual time=10798.146..10798.146 rows=0 loops=1)
-> Index Scan Backward using zerver_useractivityinterval_7f021a14 on zerver_useractivityinterval (cost=0.56..1031399.46 rows=1451 width=24) (actual time=10798.145..10798.145 rows=0 loops=1)
Filter: (user_profile_id = 12345)
Rows Removed by Filter: (count of every single row in the table, redacted)
Planning Time: 0.053 ms
Execution Time: 10798.158 ms
```
Regular vacuuming can force the use of the index on `user_profile_id`
as long as there are few enough users, which is fast -- however, at
some point, the query planner decides that index is insufficiently
specific and always chooses the effectively-whole-table scan.
Add an index on `(user_profile_id, end)`, which is expected to be
sufficiently specific that it is used even with large numbers of user
profiles.
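A sketch of what the corresponding migration might look like (the
migration dependency and index name here are placeholders):
```python3
from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [
        ("zerver", "0001_initial"),  # placeholder dependency
    ]

    operations = [
        migrations.AddIndex(
            model_name="useractivityinterval",
            index=models.Index(
                # A backward scan of this composite index satisfies
                # "WHERE user_profile_id = %s ORDER BY end DESC LIMIT 1"
                # without filtering most of the table.
                fields=["user_profile", "end"],
                name="zerver_uai_user_profile_end",
            ),
        ),
    ]
```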
Ref #19250.
The usual output from this command looks like:

    Notice: Compiled catalog for localhost in environment production in 2.33 seconds
    Notice: /Stage[main]/Zulip::Apt_repository/Exec[setup_apt_repo]/returns: current_value 'notrun', should be ['0'] (noop)
    Notice: Class[Zulip::Apt_repository]: Would have triggered 'refresh' from 1 event
    Notice: Stage[main]: Would have triggered 'refresh' from 1 event
    Notice: Applied catalog in 1.20 seconds

which doesn’t seem abnormally alarming, and hiding it makes failures
harder to diagnose.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This was originally meant to fix the emoji mapping conflict during a
Slack import. In Slack, 🎉 and ㊗️ have different
symbols, but they both map to 🎉 in Zulip prior to this commit.
㊗️ now refers to the Japanese character version, as is
observed in Matrix and Slack.
I expand the fix to include all other Japanese characters. Matrix.org
and Slack already have those characters in their symbol section, and so
this is to reach feature parity.
See the discussion thread in https://chat.zulip.org/#narrow/stream/9-issues/topic/duplicate.20emoji.20in.20data.20import
Until now, we've been forking django-auth-ldap at
https://github.com/zulip/django-auth-ldap to add the
LDAPReverseEmailSearch feature to it, hoping to get it merged
upstream in https://github.com/django-auth-ldap/django-auth-ldap/pull/150
The efforts to get it merged have stalled for now, however, and we
don't want to be on the fork forever, so this commit puts the email
search feature inside our codebase as a clumsy workaround and switches
to using the latest upstream release instead of the fork.
The transforms called from `build_message_payload` use
`lxml.html.fromstring` to parse (and stringify, and re-parse) the HTML
generated by Markdown. However, this function fails if it is passed
an empty document. "empty" is broader than just the empty string; it
also includes any document made entirely out of control characters,
spaces, unpaired surrogates, U+FFFE, or U+FFFF, and so forth. These
documents would fail to parse, and raise a ParserError.
Using `lxml.html.fragment_fromstring` handles these cases, but does so
by wrapping the contents in a `<div>` every time it is called. As
such, replacing each `fromstring` with `fragment_fromstring` would
nest another layer of `<div>`.
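For example (a minimal illustration, not the actual
`build_message_payload` code):
```python3
import lxml.html
from lxml.etree import ParserError

try:
    lxml.html.fromstring("   ")  # whitespace-only, i.e. an "empty" document
except ParserError:
    print("fromstring raises ParserError on empty documents")

# fragment_fromstring with create_parent=True wraps the input in a <div>,
# so it accepts empty input, at the cost of one extra wrapper element.
fragment = lxml.html.fragment_fromstring("   ", create_parent=True)
print(lxml.html.tostring(fragment))  # e.g. b'<div>   </div>'
```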
Instead of each of the helper functions re-parsing, modifying, and
stringifying the HTML, parse it once with `fragment_fromstring` and
pass around the parsed document to each helper, which modifies it
in-place. This adds one outer `<div>`, requiring minor changes to
tests and the prepend-sender functions.
The modification to add the sender is left using BeautifulSoup, as
that sort of transform is much less readable, and more fiddly, in raw
lxml.
Partial fix for #19559.
We previously used `zulip-puppet-apply` with a custom config file,
with an updated PostgreSQL version but more limited set of
`puppet_classes`, to pre-create the basic settings for the new cluster
before running `pg_upgradecluster`.
Unfortunately, the supervisor config uses `purge => true` to remove
all supervisor configuration files that are not included in the puppet
configuration; this leads to it removing all other supervisor
processes during the upgrade, only to add them back and start them
during the second `zulip-puppet-apply`.
It also leads to `process-fts-updates` not being started after the
upgrade completes; it is the one supervisor config file which was not
removed and re-added, and is thus the only one not restarted as a
side effect of being re-added. This was not detected in CI, because
CI added a `start-server` command which was not in the upgrade
documentation.
Set a custom facter fact that prevents the `purge` behaviour of the
supervisor configuration. We want to preserve that behaviour in
general, and using `zulip-puppet-apply` continues to be the best way
to pre-set-up the PostgreSQL configuration -- but we wish to avoid
that behaviour when we know we are applying a subset of the puppet
classes.
Since supervisor configs are no longer removed and re-added, this
requires an explicit start-server step in the instructions after the
upgrades complete. This brings the documentation into alignment with
what CI is testing.