The documentation for restoring backups noted that the restore must be
onto the same version of PostgreSQL, but did not explain how to do
that.
Link to the relevant section of the installer documentation, and name
the flag explicitly.
Fixes: #23691
These hooks are run immediately around the critical section of the
upgrade. If the upgrade fails for preparatory reasons, the pre-deploy
hook may not be run; if it fails during the upgrade, the post-deploy
hook will not be run. Hooks are called from the CWD of the new
deploy, with arguments of the old version and the new version. If
they exit with a non-zero exit code, the deploy aborts.
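For illustration, a minimal hook might look like the sketch below (the
file name and how the hook is registered are assumptions; only the
calling convention -- old and new version as arguments, non-zero exit
aborting the deploy -- is described above):

```
#!/usr/bin/env python3
# Hypothetical pre-deploy hook sketch.  It is invoked from the new
# deploy's working directory with the old and new versions as its two
# arguments; a non-zero exit status aborts the deploy.
import sys

old_version, new_version = sys.argv[1], sys.argv[2]
print(f"About to upgrade from {old_version} to {new_version}")

# Perform any site-specific checks or notifications here; a failed
# check could call sys.exit(1) to abort the upgrade.
sys.exit(0)
```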
We will hopefully be able to just link to this in #16208 to document
what users need to configure in order to do this manually, but the
content here will be useful for anyone who hasn't set that up
regardless.
This adds a new endpoint /jwt/fetch_api_key that accepts a JWT and can
be used to fetch API keys for a certain user. The target realm is
inferred from the request and the user email is part of the JWT.
A JSON object containing the user's API key, delivery email and
(optionally) raw user profile data is returned in the response.
The profile data in the response is optional and can be retrieved by
setting the POST param "include_profile" to "true" (default=false).
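A minimal sketch of calling the endpoint (the server URL, the name of
the POST parameter carrying the JWT, and the response field names are
assumptions here; the endpoint path and "include_profile" are as
described above):

```
import requests

jwt_token = "<JWT signed with the shared key, containing the user's email>"
resp = requests.post(
    "https://zulip.example.com/jwt/fetch_api_key",
    data={"token": jwt_token, "include_profile": "true"},  # "token" name is an assumption
)
resp.raise_for_status()
result = resp.json()
# Field names below are assumptions based on the description above.
print(result["api_key"], result["email"])
```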
Co-authored-by: Mateusz Mandera <mateusz.mandera@zulip.com>
- Renames "Customize Zulip" to "Server configuration".
- Cross-links "Server configuration" with "System and deployment
configuration".
Fixes part of #23984.
The `postfix.mailname` setting in `/etc/zulip.conf` was previously
only used for incoming mail, to identify in Postfix configuration
which messages were "local."
Also set `/etc/mailname`, which is used by Postfix to set how it
identifies to other hosts when sending outgoing email.
Co-authored-by: Alex Vandiver <alexmv@zulip.com>
When file uploads are stored in S3, Zulip responds to requests for
them with a 302 redirect to S3. Because browsers do not cache
redirects, no image contents can be cached -- and upon every page load
or reload, every recently-posted image must be re-fetched. This incurs
extra load on the Zulip server, as well as potentially excessive
bandwidth usage from S3 and on the client's connection.
Switch to having nginx fetch the content from S3 and serve it
directly. These responses have `Cache-Control: private, immutable`
headers set, allowing browsers to cache them locally.
Because nginx fetching from S3 can be slow, and requests for uploads
will generally be bunched around when the message containing them is
first posted, we instruct nginx to cache the contents locally. This
is safe because uploaded file contents are immutable; access control
is still mediated by Django. The nginx cache key is the URL without
query parameters, as those parameters include a time-limited signed
authentication parameter which lets nginx fetch the non-public file.
This adds a number of nginx-level configuration parameters to control
the caching which nginx performs, including the size of the in-memory
index for the cache, the maximum storage of the cache on disk, and how
long data is retained in the cache. The currently-chosen figures are
reasonable for small to medium deployments.
The most notable effect of this change is in allowing browsers to
cache uploaded image content; however, in addition to there being many
fewer requests, it also improves request latency. The
following tests were done with a non-AWS client in SFO, a server and
S3 storage in us-east-1, and with 100 requests after 10 requests of
warm-up (to fill the nginx cache). The mean and standard deviation
are shown.
| | Redirect to S3 | Caching proxy, hot | Caching proxy, cold |
| ----------------- | ------------------- | ------------------- | ------------------- |
| Time in Django | 263.0 ms ± 28.3 ms | 258.0 ms ± 12.3 ms | 258.0 ms ± 12.3 ms |
| Small file (842b) | 586.1 ms ± 21.1 ms | 266.1 ms ± 67.4 ms | 288.6 ms ± 17.7 ms |
| Large file (660k) | 959.6 ms ± 137.9 ms | 609.5 ms ± 13.0 ms | 648.1 ms ± 43.2 ms |
The hot-cache performance is faster for both large and small files,
since it saves the client from having to make a second request to a
separate host. This performance improvement remains at least 100ms
even if the client is on the same coast as the server.
Cold nginx caches are only slightly slower than hot caches, because
VPC access to S3 endpoints is extremely fast (assuming it is in the
same region as the host), and nginx can pool connections to S3 and
reuse them.
However, all of the 648ms taken to serve a cold-cache large file is
occupied in nginx, as opposed to the only 263ms which was spent in
nginx when using redirects to S3. This means that, to overall spend
less time responding to uploaded-file requests in nginx, clients will
need to find files in their local cache, and skip making an
uploaded-file request, at least 60% of the time (1 - 263/648 ≈ 59%).
Modeling shows a reduction in the number of client requests by about
70% - 80%.
The `Content-Disposition` header logic can now also be entirely shared
with the local-file codepath, as can the `url_only` path used by
mobile clients. While we could provide the direct-to-S3 temporary
signed URL to mobile clients, we choose to provide the
served-from-Zulip signed URL, to better control caching headers on it,
and for greater consistency. In doing so, we adjust the salt used for the
URL; since these URLs are only valid for 60s, the effect of this salt
change is minimal.
Moving `/user_avatars/` to being served partially through Django
removes the need for the `no_serve_uploads` nginx reconfiguring when
switching between S3 and local backends. This is important because a
subsequent commit will move S3 attachments to being served through
nginx, which would make `no_serve_uploads` an entirely nonsensical
name.
Serve the files through Django, with an offload for the actual image
response to an internal nginx route. In development, serve the files
directly in Django.
We do _not_ mark the contents as immutable for caching purposes, since
the path for avatar images is hashed only from the user ID and a salt,
and as such is reused when a user's avatar is updated.
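A minimal sketch of the offload pattern (the view name, URL layout,
and internal nginx location are illustrative, not the actual Zulip
routes):

```
from django.http import HttpResponse

def serve_avatar(request, path: str) -> HttpResponse:
    # Django performs any access control here; the actual bytes are
    # then served by nginx via an internal redirect to an
    # internal-only location.
    response = HttpResponse()
    response["X-Accel-Redirect"] = f"/internal/user_avatars/{path}"
    # Deliberately not "immutable": avatar URLs are reused when the
    # avatar changes.
    response["Cache-Control"] = "private"
    return response
```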
The authenticate_by_username limit of 5 attempts per 30 minutes can get
annoying in some cases where the user really forgot their password and
should be allowed to keep trying with admin approval - so we should
document the command that allows unblocking them.
The documentation included the full policy for the file uploads
bucket, but only one additional statement for the avatars bucket; the
reader needed to assemble the full policy themselves.
Switch to explicitly providing the full policy for both.
Fixes #23110.
Since this is being moved to admin-facing documentation, also adds a
paragraph about the main concern with enabling this on a server that's
not zulip.com.
We should rearrange Zulip's developer docs to make it easier to
find the documentation that new contributors need.
Name changes:
Rename "Code contribution guide" section -> "Contributing to Zulip".
Rename "Contributing to Zulip" page -> "Contributing guide".
Organizational changes to the newly-named "Contributing to Zulip":
Move up "Contributing to Zulip", as the third link in sidebar index.
Move up renamed "Contributing guide" page to the top of this section.
Move up "Zulip code of Conduct", as the second link of this section.
Move down "Licensing", as the last link of this section.
Move "Accessibility" just below "HTML and CSS" in Subsystems section.
Update all links according to the changes above.
Redirects should be added as needed.
Fixes: #22517.
This is not a feature intended to be used outside zulip.com, since it
just sets your server to have the zulip.com landing pages. I think
it's only been turned on by people who were confused by this text.
The default value in uwsgi is 4k; receiving more than this amount from
nginx leads to a 502 response (though, happily, the backend uwsgi does not
terminate).
ab18dbfde5 originally increased it from the unstated uwsgi default
of 4096 to 8192; b1da797955 made it configurable, in order to allow
requests from clients with many cookies without causing 502s[1].
nginx defaults to a limitation of 1k, with 4 additional 8k header
lines allowed[2]; any request larger than that returns a response of
`400 Request Header Or Cookie Too Large`. The largest header size
theoretically possible from nginx, by default, is thus 33k, though
that would require packing four separate headers to exactly 8k each.
Remove the gap between nginx's limit and uwsgi's, which could trigger
502s, by removing the uwsgi configurability, and setting a 64k size in
uwsgi (the max allowable), which is larger than nginx's default limit.
uWSGI's documentation of `buffer-size` ([3], [4]) also notes that "It
is a security measure too, so adapt to your app needs instead of
maxing it out." Python has no security issues with buffers of 64k,
and there is no appreciable memory footprint difference to having a
larger buffer available in uwsgi.
[1]: https://chat.zulip.org/#narrow/stream/31-production-help/topic/works.20in.20Edge.20not.20Chrome/near/719523
[2]: https://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size
[3]: https://uwsgi-docs.readthedocs.io/en/latest/ThingsToKnow.html
[4]: https://uwsgi-docs.readthedocs.io/en/latest/Options.html#buffer-size
Backups are written every 16k of WAL archive, and by default do not
have an upper limit on how out of date they are, as `archive_timeout`
defaults to 0.
Also emphasize that these are streaming backups, not just one
point-in-time backup daily.
Fixes #21976.
Notable changes:
- Describe `X-Forwarded-For` by name.
- Switch each specific proxy to numbered steps.
- Link back to the `X-Forwarded-For` section in each proxy.
- Default to using HTTPS, not HTTP, for the backend.
- Include the HTTP-to-HTTPS redirect code for all proxies; it is
important that it happen at the proxy, as the backend is unaware of
it.
- Call out Apache2 modules which are necessary.
- Specify where the dhparam.pem file can be found.
- Call out the `Host:` header forwarding necessary, and document
`USE_X_FORWARDED_HOST` if that is not possible.
- Standardize on 20 minutes of connection timeout.
As a consequence:
• Bump minimum supported Python version to 3.8.
• Move Vagrant environment to Ubuntu 20.04, which has Python 3.8.
• Move CI frontend tests to Ubuntu 20.04.
• Move production build test to Ubuntu 20.04.
• Move 3.4 upgrade test to Ubuntu 20.04.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
On the Debian 10 -> 11 upgrade, the server is running Zulip 4.x, which
lets us pass `--audit-fts-indexes` to `upgrade-zulip-stage-2` rather
than run the command as a separate step.
The reindex-textual-data tool needs the venv to be able to run;
switch the order of the last two steps, making them now match the
Debian 9 -> 10 and 10 -> 11 upgrades.
Ref #21296.
When the credentials are provided by dint of being run on an EC2
instance with an assigned Role, we must be able to fetch the instance
metadata from IMDS -- which is precisely the type of internal-IP
request that Smokescreen denies.
While botocore supports a `proxies` argument to the `Config` object,
this is not actually respected when making the IMDS queries; only the
environment variables are read from. See
https://github.com/boto/botocore/issues/2644
As such, implement S3_SKIP_PROXY by monkey-patching the
`botocore.utils.should_bypass_proxies` function, to allow requests to
IMDS to be made without Smokescreen impeding them.
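Schematically, the monkey-patch looks something like the sketch below
(the exact signature of `should_bypass_proxies` is an assumption;
169.254.169.254 is the standard IMDS address):

```
import botocore.utils

_original_should_bypass_proxies = botocore.utils.should_bypass_proxies

def _should_bypass_proxies(url, *args, **kwargs):
    # Always bypass the proxy for the EC2 instance metadata service,
    # so that Smokescreen's internal-IP denial does not break
    # credential fetching.
    if url.startswith("http://169.254.169.254"):
        return True
    return _original_should_bypass_proxies(url, *args, **kwargs)

botocore.utils.should_bypass_proxies = _should_bypass_proxies
```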
Fixes #20715.
Previously, it was possible to configure `wal-g` backups without
replication enabled; this resulted in only daily backups, not
streaming backups. It was also possible to enable replication without
configuring the `wal-g` backups bucket; this simply failed to work.
Make `wal-g` backups always streaming, and warn loudly if replication
is enabled but `wal-g` is not configured.
Ubuntu 22.04 pushed a post-feature-freeze update to Python 3.10,
breaking virtual environments in a Debian patch
(https://bugs.launchpad.net/ubuntu/+source/python3.10/+bug/1962791).
Also, our antique version of Tornado doesn’t work in 3.10, and we’ll
need to do some work to upgrade that.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This uses the myst_heading_anchors option to automatically generate
header anchors and make Sphinx aware of them. See
https://myst-parser.readthedocs.io/en/latest/syntax/optional.html#auto-generated-header-anchors.
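In the Sphinx `conf.py`, this is a one-line option (the depth value
here is illustrative):

```
# Generate GitHub-style anchors for headings up to this depth.
myst_heading_anchors = 6
```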
Note: to be compatible with GitHub, MyST-Parser uses a slightly
different convention for .md fragment links than .html fragment links
when punctuation is involved. This does not affect the generated
fragment links in the HTML output.
Fixes #13264.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We previously had a convention of redundantly including the directory
in relative links to reduce mistakes when moving content from one file
to another. However, these days we have a broken link checker in
test-documentation, and after #21237, MyST-Parser will check relative
links (including fragments) when you run build-docs.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This was only used for upgrading from Zulip < 1.9.0, which is no
longer possible because Zulip < 2.1.0 had no common supported
platforms with current main.
If we ever want this optimization for a future migration, it would be
better implemented using Django merge migrations.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This is required in order to lock down the RabbitMQ port to only
listen on localhost. If the nodename is `rabbit@hostname`, in most
circumstances the hostname will resolve to an external IP, which the
rabbitmq port will not be bound to.
Installs which used `rabbit@hostname`, due to RabbitMQ having been
installed before Zulip, would not have functioned if the host or
RabbitMQ service was restarted, as the localhost restrictions in the
RabbitMQ configuration would have made rabbitmqctl (and Zulip cron
jobs that call it) unable to find the rabbitmq server.
The previous commit ensures that configure-rabbitmq is re-run after
the nodename has changed. However, rabbitmq needs to be stopped
before `rabbitmq-env.conf` is changed; we use an `onlyif` on an `exec`
to print the warning about the node change, and let the subsequent
config change, and its notifications of the service and of
configure-rabbitmq, complete the re-configuration.
The Erlang `epmd` daemon listens on port 4369, and provides
information (without authentication) about which Erlang processes are
listening on what ports. This information is not itself a
vulnerability, but may provide information for remote attackers about
what local Erlang services (such as `rabbitmq-server`) are running,
and where.
`epmd` supports an `ERL_EPMD_ADDRESS` environment variable to limit
which interfaces it binds on. While this environment variable is set
in `/etc/default/rabbitmq-server`, Zulip unfortunately attempts to
start `epmd` using an explicit `exec` block, which ignores those
settings.
Regardless, the lack of the `ERL_EPMD_ADDRESS` variable only affects
`epmd`'s startup upon first installation. Upon reboot, there are two
ways in which `epmd` might be started, neither of which respect
`ERL_EPMD_ADDRESS`:
- On Focal, an `epmd` service exists and is activated, which uses
systemd's configuration to choose which interfaces to bind on, and
thus `ERL_EPMD_ADDRESS` is irrelevant.
- On Bionic (and Focal, due to a broken dependency from
`rabbitmq-server` to `epmd@` instead of `epmd`, which may lead to
the explicit `epmd` service losing a race), `epmd` is started by
`rabbitmq-server` when it does not detect a running instance.
Unfortunately, only `/etc/init.d/rabbitmq-server` would respect
`/etc/default/rabbitmq-server` -- and it defers the actual startup
to using systemd, which does not pass the environment variable
down. Thus, `ERL_EPMD_ADDRESS` is also irrelevant here.
We unfortunately cannot limit `epmd` to only listening on localhost,
due to a number of overlapping bugs and limitations:
- Manually starting `epmd` with `-address 127.0.0.1` silently fails
to start on hosts with IPv6 disabled, due to an Erlang bug ([1],
[2]).
- The dependencies of the systemd `rabbitmq-server` service can be
fixed to include the `epmd` service, and systemd can be made to
bind to `127.0.0.1:4369` and pass that socket to `epmd`, bypassing
the above bug. However, the startup of this service is not
guaranteed, because it races with other sources of `epmd` (see
below).
- Any process that runs `rabbitmqctl` results in `epmd` being started
if one is not currently running; these instances do not respect any
environment variables as to which addresses to bind on. This is
also triggered by `service rabbitmq-server status`, as well as
various Zulip cron jobs which inspect the rabbitmq queues. As
such, it is difficult-to-impossible to ensure that some other
`epmd` process will not win the race and open the port on all
interfaces.
Since the only known exposure from leaving port 4369 open is
information that rabbitmq is running on the host, and the complexity
of adjusting this to only bind on localhost is high, we remove the
setting which does not address the problem, and document that the port
is left open, and should be protected via system-level or
network-level firewalls.
[1]: https://bugs.launchpad.net/ubuntu/+source/erlang/+bug/1374109
[2]: https://github.com/erlang/otp/issues/4820
As a consequence:
• Bump minimum supported Python version to 3.7.
• Move Vagrant environment to Debian 10, which has Python 3.7.
• Move CI frontend tests to Debian 10.
• Move production build test to Debian 10.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
With tweaks to security-model.md by tabbott to expand the SSO acronym.
Ignored, but still needs discussion on whether we should exclude this
rule:
```
The word ‘install’ is not a noun.
✗ ...ble to connect to the client during the install process: So you'll need to shut down a...
^^^^^^^
✓ ...ble to connect to the client during the installation process: So you'll need to shut down a...
A_INSTALL: a/the + install
The word ‘install’ is not a noun.
✗ ...detected at install time will cause the install to abort. If you already have PostgreSQ...
^^^^^^^
✓ ...detected at install time will cause the installation to abort. If you already have PostgreSQ...
A_INSTALL: a/the + install
```
Because Camo includes logic to deny access to private subnets, routing
its requests through Smokescreen is generally not necessary. However,
it may be necessary if Zulip has configured a non-Smokescreen exit
proxy.
Default Camo to using the proxy only if it is not Smokescreen, with a
new `proxy.enable_for_camo` setting to override this behaviour if need
be. Note that that setting is in `zulip.conf` on the host with Camo
installed -- not the Zulip frontend host, if they are different.
Fixes: #20550.
For `no_serve_uploads` and `http_only`, which previously treated any
non-empty value as enabled, this tightens which values are considered
true. For `pgroonga` and `queue_workers_multiprocess`, this broadens
the possible values from only `enabled` and `true`, respectively.
Restarting the uwsgi processes by way of supervisor opens a window
during which nginx 502's all responses. uwsgi has a configuration
called "chain reloading" which allows for rolling restart of the uwsgi
processes, such that only one process at once in unavailable; see
uwsgi documentation ([1]).
The tradeoff is that this requires that the uwsgi processes load the
libraries after forking, rather than before ("lazy apps"); in theory
this can lead to larger memory footprints, since they are not shared.
In practice, as Django defers much of the loading, this is not as much
of an issue. In a very basic test of memory consumption (measured by
total memory - free - caches - buffers; 6 uwsgi workers), both
immediately after restarting Django, and after requesting `/` 60 times
with 6 concurrent requests:
| Non-lazy | Lazy app | Difference
------------------+------------+------------+-------------
Fresh | 2,827,216 | 2,870,480 | +43,264
After 60 requests | 3,332,284 | 3,409,608 | +77,324
..................|............|............|.............
Difference | +505,068 | +539,128 | +34,060
That is, "lazy app" loading increased the footprint pre-requests by
43MB, and after 60 requests grew the memory footprint by 539MB, as
opposed to non-lazy loading, which grew it by 505MB. Using wsgi "lazy
app" loading does increase the memory footprint, but not by a large
percentage.
The other effect is that requests may be served by either old or new
code during the restart window. This may cause transient failures
when new frontend code talks to old backend code.
Enable chain-reloading during graceful, puppetless restarts, but only
if enabled via a zulip.conf configuration flag.
Fixes #2559.
[1]: https://uwsgi-docs.readthedocs.io/en/latest/articles/TheArtOfGracefulReloading.html#chain-reloading-lazy-apps
This replaces the TERMS_OF_SERVICE and PRIVACY_POLICY settings with
just a POLICIES_DIRECTORY setting, in order to support settings (like
Zulip Cloud) where there's more policies than just those two.
With minor changes by Eeshan Garg.
The certbot package installs its own systemd timer (and cron job,
which disables itself if systemd is enabled) which updates
certificates. This process races with the cron job which Zulip
installs -- the only difference being that Zulip respects the
`certbot.auto_renew` setting, and that it passes the deploy hook.
This means that occasionally nginx would not be reloaded, when the
systemd timer caught the expiration first.
Remove the custom cron job and `certbot-maybe-renew` script, and
reconfigure certbot to always reload nginx after deploying, using
certbot directory hooks.
Since `certbot.auto_renew` can't have an effect, remove the setting.
In turn, this removes the need for `--no-zulip-conf` to
`setup-certbot`. `--deploy-hook` is similarly removed, as running
deploy hooks to restart nginx is now the default; pass
`--no-directory-hooks` in standalone mode to not attempt to reload
nginx. The other property of `--deploy-hook`, of skipping symlinking
into place, is given its own flag.
We recently changed /developer-community to /development-community.
Now that this change is in production, we can also migrate the
external links in our ReadTheDocs documentation.
PostgreSQL 11 and below used a configuration file named
`recovery.conf` to manage replicas and standbys; support for this was
removed in PostgreSQL 12[1], and the configuration parameters were
moved into the main `postgresql.conf`.
Add `zulip.conf` settings for the primary server hostname and
replication username, so that the complete `postgresql.conf`
configuration on PostgreSQL 14 can continue to be managed, even when
replication is enabled. For consistency, also begin writing out the
`recovery.conf` for PostgreSQL 11 and below.
In PostgreSQL 12 configuration and later, the `wal_level =
hot_standby` setting is removed, as `hot_standby` is equivalent to
`replica`, which is the default value[2]. Similarly, the
`hot_standby = on` setting is also the default[3].
Documentation is added for these features, and the commentary on the
"Export and Import" page referencing files under `puppet/zulip_ops/`
is removed, as those files no longer have any replication-specific
configuration.
[1]: https://www.postgresql.org/docs/current/recovery-config.html
[2]: https://www.postgresql.org/docs/12/runtime-config-wal.html#GUC-WAL-LEVEL
[3]: https://www.postgresql.org/docs/12/runtime-config-replication.html#GUC-HOT-STANDBY
When Zulip is run behind one or more reverse proxies, you must
configure `loadbalancer.ips` so that Zulip respects the client IP
addresses found in the `X-Forwarded-For` header. This is not
immediately clear from the documentation, so this commit makes it more
clear and augments the existing examples to showcase this need.
Fixes: #19073
The OIDC config features a get_secret call (so it requires adding an
import), and much of its instructions come in the form of comments on
the various keys of the config dict - thus users should really update
settings.py to pick all of that up.
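For illustration, the kind of settings.py change involved looks
roughly like this (the setting name, import path, and dict keys here
are assumptions based on the description, not a copy of the upstream
template):

```
from zproject.config import get_secret  # import path is an assumption

SOCIAL_AUTH_OIDC_ENABLED_IDPS = {
    "example": {
        "oidc_url": "https://idp.example.com/",
        "display_name": "Example IdP",
        "client_id": "your-client-id",
        "secret": get_secret("social_auth_oidc_secret"),
    },
}
```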
The upstream of the `camo` repository[1] has been unmaintained for
several years, and is now archived by the owner. Additionally, it has
a number of limitations:
- It is installed as a sysinit service, which does not run under
Docker
- It does not prevent access to internal IPs, like 127.0.0.1
- It does not respect standard `HTTP_proxy` environment variables,
making it unable to use Smokescreen to prevent the prior flaw
- It occasionally just crashes, and thus must have a cron job to
restart it.
Swap camo out for the drop-in replacement go-camo[2], which has the
same external API, requiring no changes to Django code, but is
actively maintained. Additionally, it resolves all of the above
complaints.
go-camo is not configured to use Smokescreen as a proxy, because its
own private-IP filtering prevents using a proxy which lies within that
IP space. It is also unclear if the addition of Smokescreen would
provide any additional protection over the existing IP address
restrictions in go-camo.
go-camo has a subset of the security headers that our nginx reverse
proxy sets, and which camo set; provide the missing headers with `-H`
to ensure that go-camo, if exposed from behind some other non-nginx
load-balancer, still provides the necessary security headers.
Fixes #18351 by moving to supervisor.
Fixes zulip/docker-zulip#298, also by moving to supervisor.
[1] https://github.com/atmos/camo
[2] https://github.com/cactus/go-camo
This is an additional security hardening step, to make Zulip default
to preventing SSRF attacks. The overhead of running Smokescreen is
minimal, and there is no reason to force deployments to take
additional steps in order to secure themselves against SSRF attacks.
Deployments which already have a different external proxy configured
will not gain a local Smokescreen installation, and running without
Smokescreen is supported by explicitly unsetting the `host` or `port`
values in `/etc/zulip/zulip.conf`.
This helps increase the probability that folks read the guidelines for how the
chat.zulip.org community works and what streams to use before arriving there.
Fixes #19827.
Previously, our docs had links to various versions of the Django docs,
e.g. https://docs.djangoproject.com/en/1.10/topics/migrations/ and
https://docs.djangoproject.com/en/2.0/ref/signals/#post-save. Opening
a link to a doc for an outdated Django version would show the warning
"This document is for an insecure version of Django that is no longer
supported. Please upgrade to a newer release!".
This commit uses a search with the regex
"docs.djangoproject.com/en/([0-9].[0-9]*)/" and replaces all matches
inside the /docs/ folder with "docs.djangoproject.com/en/3.2/".
All the new links in this commit have been generated by the above
replace and each link has then been manually checked to ensure that
(1) the page still exists and has not been moved to a new location
(and it has been found that no page has been moved like this), (2)
that the anchor that we're linking to has not been changed (and it has
been found that this happened once, for https://docs.djangoproject.com
/en/1.8/ref/django-admin/#runserver-port-or-address-port, where
/#runserver-port-or-address-port was changed to /#runserver).
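The replacement itself is the equivalent of the sketch below (dots are
escaped here; reading and writing the files is omitted):

```
import re

def update_django_doc_links(text: str) -> str:
    # Replace any versioned Django docs link with the 3.2 one.
    return re.sub(
        r"docs\.djangoproject\.com/en/([0-9]\.[0-9]*)/",
        "docs.djangoproject.com/en/3.2/",
        text,
    )
```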
The links we have now redirect to "My groups" and not to our
Google group. Also, the RSS feed is no longer supported by Google,
so we should no longer link to it.
Fixes #19560.
It's better to explicitly list the possibilities. Also, the
recommendation regarding is_active should be changed to a strict
"Don't", as Subscription.is_user_active is a denormalized field and
flipping a user's is_active will cause inconsistent state by leaving
Subscriptions unupdated. Given that similar things can be introduced in
the future for any other flag not officially supported by having a
setter, the recommendation should be "Don't" in general.
It feels like the "Same as" content was unnecessarily requiring the
user to bounce around in these cases.
(I've left the "Same as" text for the Ubuntu ones, where it's two
steps in a row to follow).
Fixes #17456.
The main tricky part has to do with what values the attribute should
have. LDAP defines a Boolean as
Boolean = "TRUE" / "FALSE"
so ideally we'd always see exactly those values. However,
although the issue is now marked as resolved, the discussion in
https://pagure.io/freeipa/issue/1259 shows how this may not always be
respected - meaning it makes sense for us to be more liberal in
interpreting these values.
Support for bullseye was added in #17951, but was not documented at
the time, as bullseye was frozen and did not yet have proper
configuration files. Now that bullseye has been released as a stable
version, its support can be documented.
We previously used `zulip-puppet-apply` with a custom config file,
with an updated PostgreSQL version but more limited set of
`puppet_classes`, to pre-create the basic settings for the new cluster
before running `pg_upgradecluster`.
Unfortunately, the supervisor config uses `purge => true` to remove
all supervisor configuration files that are not included in the puppet
configuration; this leads to it removing all other supervisor
processes during the upgrade, only to add them back and start them
during the second `zulip-puppet-apply`.
It also leads to `process-fts-updates` not being started after the
upgrade completes; this is the one supervisor config file which was
not removed and re-added, and thus the one that did not get restarted
as a side effect of being re-added. This was not detected in CI
because CI added a `start-server` command which was not in the upgrade
documentation.
Set a custom facter fact that prevents the `purge` behaviour of the
supervisor configuration. We want to preserve that behaviour in
general, and using `zulip-puppet-apply` continues to be the best way
to pre-set-up the PostgreSQL configuration -- but we wish to avoid
that behaviour when we know we are applying a subset of the puppet
classes.
Since supervisor configs are no longer removed and re-added, this
requires an explicit start-server step in the instructions after the
upgrades complete. This brings the documentation into alignment with
what CI is testing.
Recommonmark is no longer maintained, and MyST-Parser is much more
complete.
https://myst-parser.readthedocs.io/
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Commit 30eaed0378 (#15001) incorrectly
inserted a different section between the anchor and the heading.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The auth attempt rate limit is quite low (on purpose), so it can be a
common scenario for a user to ask their admin to reset the limit
instead of waiting. We should provide a tool for administrators to
handle such requests without fiddling around with code in a manage.py
shell.
These checks suffer from a couple of notable problems:
- They are only enabled on staging hosts -- where they should never
be run. Since ef6d0ec5ca, these supervisor processes are only
run on one host, and never on the staging host.
- They run as the `nagios` user, which does not have appropriate
permissions, and thus the checks always fail. Specifically,
`nagios` does not have permissions to run `supervisorctl`, since
the socket is owned by the `zulip` user, and mode 0700; and the
`nagios` user does not have permission to access Zulip secrets to
run `./manage.py print_email_delivery_backlog`.
Rather than rewrite these checks to run on a cron as zulip, and check
those file contents as the nagios user, drop these checks -- they can
be rewritten at a later point, or replaced with Prometheus alerting,
and currently serve only to cause always-failing Nagios checks, which
normalizes alert failures.
Leave the files installed if they currently exist, rather than
cluttering puppet with `ensure => absent`; they do no harm if they are
left installed.
Strictly speaking, this sentence is talking about the IdP configuration,
while the backend is just GenericOpenIdConnectBackend, so the new
phrasing is more correct.
The script is added to upgrade steps for 20.04 and Buster because
those are the upgrades that cross glibc 2.28, which is most
problematic. It will also be called out in the upgrade notes, to
catch those that have already done that upgrade.
Fixes #17277.
The main limitation of this implementation is that the sync happens
only if the authenticating user already exists. This means that a new
user going through the signup flow will not have their custom fields
synced upon finishing it. The fields will get synced on their next
login via
syncing code further down the codepaths to login_or_register_remote_user
and plumbing the data through to the user creation process.
We detail that limitation in the documentation.
Updated the install documentation to explain the two new install
options `--postgresql-database-name` and `--postgresql-database-user`.
* Move the extended documentation of code blocks to a separate page.
* Merge "code playgrounds" documentation to be a section of that page.
* Document copy widget on code blocks.
* This commit changes how we refer to "```python" type syntax for code
blocks. Instead of being called a syntax highlighting label, this is
now referred to as a "language tag", since it serves both syntax
highlighting and playgrounds.
* Remap all the links.
* Advertise this new page in various places that previously did not have a link.
Requesting external images is a privacy risk, so route all external
images through Camo.
Tweaked by tabbott for better test coverage, more comments, and to fix
bugs.
This allows access to be more configurable than just setting one
attribute. This can be configured by setting the setting
AUTH_LDAP_ADVANCED_REALM_ACCESS_CONTROL.
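An illustrative configuration sketch (the exact structure expected by
the setting is documented elsewhere; the shape shown here -- a realm
subdomain mapping to a list of attribute-requirement dicts, any one of
which grants access -- is an assumption):

```
AUTH_LDAP_ADVANCED_REALM_ACCESS_CONTROL = {
    "zulip": [
        # A user matching all attributes in any one dict is allowed in.
        {"department": "engineering"},
        {"department": "support", "location": "us"},
    ],
}
```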
This hits the unauthenticated GitHub API to get the list of tags,
which is rate-limited to 60 requests per hour. This means that the
tool can only be run 60 times per hour before it starts to exit with
errors, but that seems like a reasonable limit for the moment.
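The request involved is simply the following (the repository name is
illustrative):

```
import requests

# Unauthenticated GitHub API requests are limited to 60 per hour per IP.
resp = requests.get("https://api.github.com/repos/zulip/zulip/tags")
resp.raise_for_status()
print([tag["name"] for tag in resp.json()])
```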
Update `docs/production/install.md` and
`docs/production/deployment.md` to more clearly document the flags
that can be passed to the installer.
Fixes #18122.
Using `supervisorctl stop all` to stop the server is not terribly
discoverable, and may stop services which are not part of Zulip
proper.
Add an explicit tool which only stops the relevant services. It also
more carefully controls the order in which services are stopped to
minimize lost requests, and maximally quiesce the server.
Locations which may be stopping _older_ versions of Zulip (without
this script) are left using `supervisorctl stop all`.
Fixes #14959.