This change is intended to reduce confusion between zulip-bot-shell
(test bots interactively without a server) and the zulip/zulip-terminal
project and its associated command (zulip-term).
Safari interprets `transparent` as `rgba(0, 0, 0, 0)`
(transparent black) instead of `rgba(255, 255, 255, 0)`.
We explicitly define the transparent color to help Safari
understand the gradients.
This fixes the bug where our gradients look black on Safari
on narrow screens.
This commit renames the "Automatic" option in the color scheme setting
dropdown to "Sync with computer". We do not change any variables
used in the code, just the text in the dropdown visible to the user.
Fixes part of #20228.
Enable spectator access for the test `zulip` realm in the development
setup.
Add option in `do_create_realm` to configure
`enable_spectator_access` field of `Realm`.
We restrict access to messages from web-public streams if
anonymous login is disabled via `enable_spectator_access`.
Display of the `Anonymous login` button is now controlled by
the value of `enable_spectator_access`.
Admins can toggle `enable_spectator_access` via org settings in UI.
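As a rough illustration of the new `do_create_realm` option described above, a hedged sketch follows; the import path and exact signature are assumptions and may differ from the actual code:
```
# Hedged sketch: import path and signature are assumptions.
from zerver.lib.actions import do_create_realm

realm = do_create_realm(
    string_id="test",
    name="Test Realm",
    # New option: allow anonymous (spectator) access to web-public streams.
    enable_spectator_access=True,
)
print(realm.enable_spectator_access)  # True
```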
This helps increase the probability that folks read the guidelines for how the
chat.zulip.org community works and what streams to use before arriving there.
Fixes #19827.
We had an incident where someone didn't notice for a week that they'd
accidentally enabled a 30-day message retention policy, and thus we
were unable to restore the deleted content.
After some review of what other products do (e.g., Dropbox preserves
things in a restorable state for 30 days), we're adjusting this
setting's default value to be substantially longer, to give more time
for users to notice their mistake and correct it before data is
irrevocably deleted.
It is possible to be in recovery, and downloading WAL logs from
archives, and not yet be replicating. If one only checks the
streaming log status, it reports as "no replicas" which is technically
accurate but not a useful summation of the state of the replica.
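A minimal sketch of the distinction being drawn, assuming psycopg2 and the standard PostgreSQL introspection views; the database name is a placeholder and the actual check script may query different views:
```
# Sketch only: distinguishing "replaying WAL from archive" from
# "streaming replication"; names here are assumptions.
import psycopg2

conn = psycopg2.connect("dbname=zulip")
with conn.cursor() as cur:
    cur.execute("SELECT pg_is_in_recovery()")
    in_recovery = cur.fetchone()[0]
    cur.execute("SELECT status FROM pg_stat_wal_receiver")
    receiver = cur.fetchone()

if in_recovery and receiver is None:
    # In recovery but not streaming: likely replaying WAL from the
    # archive, which a bare "no replicas" summary would misrepresent.
    print("in recovery, replaying WAL from archive")
elif in_recovery:
    print(f"streaming replication, receiver status: {receiver[0]}")
```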
If null is a potential value of the data type for a return value or
parameter in an API endpoint, then it is rendered as an option.
This currently relies on the 'nullable' setting in the OpenAPI spec
that was removed in the 3.1.0 release. If/when the OpenAPI version
is updated, then how the `data_type` for parameters and return values
is rendered will need to be reworked.
Fixes #20264.
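For context, a sketch of the two spellings and how a renderer might detect both once the spec version is updated; the helper name is illustrative, not the actual implementation:
```
# Illustrative only: OpenAPI 3.0.x expresses nullability with a
# `nullable` flag; 3.1.0 folds "null" into the type list instead.
schema_3_0 = {"type": "string", "nullable": True}
schema_3_1 = {"type": ["string", "null"]}

def is_nullable(schema: dict) -> bool:
    if schema.get("nullable"):  # 3.0.x style, relied on currently
        return True
    types = schema.get("type")
    return isinstance(types, list) and "null" in types  # 3.1.0 style

assert is_nullable(schema_3_0) and is_nullable(schema_3_1)
```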
At some point we must have made a change that caused the "create
stream" and "#stream name" headings to take up more vertical space,
resulting in the dividing line for the headings on the right side of
the subscription overlay being misaligned with the corresponding line
on the left side. For the "create stream" panel, it also caused the
scroll bar and some content to be visible through the partially
transparent bottom section in night mode.
In this commit we reduce the padding for those headings so that things
don't look broken anymore.
Previously, running `./tools/run-dev.py` when provision was required
would lead to a warning along the lines of:
```
Before we run tests, we make sure your provisioning version
is correct by looking at var/provision_version, which is at
version 165.1, and we compare it to the version in source
control (version.py), which is 165.2.
It looks like you checked out a branch that has added
dependencies beyond what you last provisioned. Your command
is likely to fail until you add dependencies by provisioning.
Do this: `./tools/provision`
If you really know what you are doing, use --skip-provision-check to
run anyway.
```
The assumption that we're trying to run tests might cause some
confusion, especially if it's the first time you're seeing the
provision warning. Hence, we reword the first paragraph to avoid
making that assumption.
The second paragraph has also been slightly altered, since (1) it's
possible that we didn't check out a different branch, but e.g. just
rebased with upstream, and (2) we might not be on a VM.
The warning you'd get after this commit would be along the lines of:
```
Provisioning state check failed! This check compares
`var/provision_version` (currently 165.2) to the version in
source control (`version.py`), which is 164.6, to see if you
likely need to provision before this command can run
properly.
The branch you are currently on expects an older version of
dependencies than the version you provisioned last. This may
be ok, but it's likely that you either want to rebase your
branch on top of upstream/main or re-provision your machine.
Do this: `./tools/provision`
If you really know what you are doing, use --skip-provision-check to
run anyway.
```
or along the lines of:
```
Provisioning state check failed! This check compares
`var/provision_version` (currently 165.2) to the version in
source control (`version.py`), which is 167.2, to see if you
likely need to provision before this command can run
properly.
The branch you are currently on has added dependencies beyond
what you last provisioned. Your command is likely to fail
until you add dependencies by provisioning.
Do this: `./tools/provision`
If you really know what you are doing, use --skip-provision-check to
run anyway.
```
The `cron` resource places its contents in the user's crontab, which
makes it unlike every other cron job that Zulip installs.
Switch to using `/etc/cron.d` files, like all other cron jobs.
It turns out these were just wrong. We fix a few things:
* Sort the list of settings so that it's possible to compare with reality.
* Delete additional fields that don't actually exist.
* Fix various fields that were missing past feature level updates.
OneLogin has removed the old app. The new app is nearly identical, just
with some additional configurable settings that we don't want to touch
anyway, as the defaults are fine, and a different set of default
Parameters; we also update the screenshot to match how it looks
with the new app.
Neither of these fields uses the `realm/update_dict` event type; they
use `realm/update`; we've attempted to clarify that in the previous
commit.
That reality means we don't have automated testing for these values,
which meant that typos like these could slip through.
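For reference, the two event shapes being distinguished look roughly like this; the property names and values are illustrative:
```
# Illustrative payloads; actual properties vary by setting.
realm_update_event = {
    "type": "realm",
    "op": "update",
    "property": "some_setting",
    "value": True,
}
realm_update_dict_event = {
    "type": "realm",
    "op": "update_dict",
    "property": "default",
    "data": {"some_setting": True},
}
```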
The user ID is a very useful piece of information that the mobile
client should have access to, instead of only getting the email. This
makes it much simpler to implement clients that are robust to
changes in email address.
RabbitMQ clients have a setting called prefetch[1], which controls how
many un-acknowledged events the server forwards to the local queue in
the client. The default is 0; this means that when clients first
connect, the server must send them every message in the queue.
This itself may cause unbounded memory usage in the client, but also
has other detrimental effects. While the client is attempting to
process the head of the queue, it may be unable to read from the TCP
socket at the rate that the server is sending to it -- filling the TCP
buffers, and causing the server's writes to block. If the server
blocks for more than 30 seconds, it times out the send, and closes the
connection with:
```
closing AMQP connection <0.30902.126> (127.0.0.1:53870 -> 127.0.0.1:5672):
{writer,send_failed,{error,timeout}}
```
This is https://github.com/pika/pika/issues/753#issuecomment-318119222.
Set a prefetch limit of 100 messages, or the batch size, to better
handle queues which start with large numbers of outstanding events.
Setting prefetch=1 causes significant performance degradation in the
no-op queue worker, to 30% of the prefetch=0 performance. Setting
prefetch=100 achieves 90% of the prefetch=0 performance, and higher
values offer only minor gains above that. For batch workers, their
performance is not notably degraded by prefetch equal to their batch
size, and they cannot function on smaller prefetches than their batch
size.
We also set a 100-count prefetch on Tornado workers, as they are
potentially susceptible to the same effect.
[1] https://www.rabbitmq.com/confirms.html#channel-qos-prefetch
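A minimal sketch of setting such a limit with pika, the client linked above; the connection details and the exact value are illustrative:
```
# Cap in-flight (unacknowledged) deliveries per consumer, so the
# broker stops pushing once 100 messages are outstanding instead of
# flooding the client with the entire backlog on connect.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.basic_qos(prefetch_count=100)
```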
The `current_queue_size` key in the queue monitoring stats file was
the local queue size, not the global queue size -- d5a6b0f99a
renamed the function, but did not adjust the queue monitoring JSON,
despite the last use of it having been removed in cd9b194d88.
The function is still used to mark "we emptied our queue," and it
remains a reasonable metric for that.
TOR users are legitimate users of the system; however, TOR can
also be used for abuse -- specifically, to evade IP-based
rate-limiting.
For the purposes of IP-based rate-limiting, add a
RATE_LIMIT_TOR_TOGETHER flag, defaulting to false, which lumps all
requests from TOR exit nodes into the same bucket. This may allow a
TOR user to deny other TOR users access to the find-my-account and
new-realm endpoints, but this is a low cost for cutting off a
significant potential abuse vector.
If enabled, the list of TOR exit nodes is fetched from their public
endpoint once per hour, via a cron job, and cached on disk. Django
processes load this data from disk, and cache it in memcached.
Requests are spared from the burden of checking disk on failure via a
circuit breaker, which trips if there are two failures in a row, and
only begins trying again after 10 minutes.
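A hedged sketch of that shape, using the `circuitbreaker` package; the file path and function name are assumptions, not Zulip's actual code:
```
# Sketch only: path and function name are assumptions.
import json

from circuitbreaker import circuit

@circuit(failure_threshold=2, recovery_timeout=10 * 60)
def get_tor_exit_nodes() -> set:
    # Read the on-disk cache written hourly by the cron job. If this
    # raises twice in a row, the breaker opens and callers fail fast
    # for 10 minutes instead of hitting the disk on every request.
    with open("/var/lib/zulip/tor-exit-nodes.json") as f:
        return set(json.load(f))
```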
It's better for this to catch all exit(...) calls with a non-zero exit
code, given that the purpose is to catch all exits with failure, as
opposed to only exit(1).
Unhandled exceptions propagating to process_queue were not caught there,
causing improper logging - errors didn't land in errors.log as expected.
Exceptions should be caught and explicitly logged by the process_queue
logger. Exceptions occurring while consuming events are caught and
handled inside the worker's logic - however, those that happen while
setting up the worker were not addressed at all, and that's the core bug
we mean to address here.
Furthermore, in multi-threaded mode we want the autoreload mechanism to
work - which it doesn't without catching the exceptions. The correct
approach is to - again - catch the exception, log it, and then send the
SIGUSR1 signal to trigger exit and autoreload.
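A sketch of the approach described, with assumed names; the worker API and logger wiring here are illustrative:
```
# Sketch only: worker API and names are assumptions.
import logging
import os
import signal

logger = logging.getLogger("process_queue")

def run_worker(worker) -> None:
    try:
        worker.setup()  # exceptions here were previously unhandled
        worker.start()
    except Exception:
        logger.exception("Worker failed; requesting restart")
        # Trigger exit and autoreload in multi-threaded mode.
        os.kill(os.getpid(), signal.SIGUSR1)
```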
I believe that this migration with the default of atomic=True will
fail when trying to convert the field to PositiveIntegerField if there
were any 0 values present in the database when the migration began.
The fix is to have each of the steps be their own transaction.
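A hedged sketch of the non-atomic shape, with placeholder app, model, and field names:
```
# Sketch only: app/model/field names are placeholders.
from django.db import migrations, models

class Migration(migrations.Migration):
    # Disable the single wrapping transaction so each operation
    # commits on its own; the data fix then lands before the type
    # conversion runs.
    atomic = False

    dependencies = [("someapp", "0001_previous_migration")]

    operations = [
        migrations.RunSQL("UPDATE someapp_model SET count = 1 WHERE count = 0"),
        migrations.AlterField(
            model_name="model",
            name="count",
            field=models.PositiveIntegerField(),
        ),
    ]
```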
Note from tabbott: It's not clear we want fundamentally dynamic
corporate website elements like this jobs page to be in zulip/zulip,
but there's even less reason for there to be an out-of-date copy there.