Apparently, our nginx configuration's use of "localhost", combined
with the default in modern Linux of having localhost resolve to both
the IPv4 and IPv6 addresses on a given machine, resulted in `nginx`
load-balancing requests to a given Zulip server between the IPv4 and
IPv6 addresses. This, in turn, resulted in intermittent 502 errors
every few minutes on the /events endpoint for some clients.
Disabling IPv6 on the server resolved the problem, as does simply
spelling localhost as 127.0.0.1 for the `nginx` upstreams that we
declare for proxying to non-Django services on localhost.
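For reference, the fix amounts to declaring those upstreams with an
explicit IPv4 address; a minimal sketch (the upstream name and port
here are illustrative, not necessarily the exact ones in our config):

    upstream tornado {
        server 127.0.0.1:9993;
    }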
Now, one can just set `no_serve_uploads` in `zulip.conf` to prevent
`nginx` from serving locally uploaded files.
This should help simplify the S3 integration setup process.
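A minimal sketch of the setting in /etc/zulip/zulip.conf (the section
name shown is an assumption):

    [application_server]
    no_serve_uploads = true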
This option is intended to support situations like a quick Docker
setup where doing HTTPS adds more setup overhead than it's worth.
It's not intended to be used in actual production environments.
This is preferred, since we don't currently have a way to run Django
logic on the postgres hosts with the Docker implementation.
This is a necessary part of removing the need for the docker-zulip
package to patch this file to make Zulip work with Docker.
The main purpose of this change is to guarantee that
`manage.py register_server --rotate-key` can edit the
/etc/zulip/zulip-secrets.conf configuration via crudini.
But it also adds value by ensuring zulip-secrets.conf is not readable
by other users.
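For reference, the kind of edit crudini performs here looks roughly
like the following (the secret name is illustrative, not necessarily
the exact one register_server rotates):

    # Hypothetical example of the equivalent crudini invocation:
    crudini --set /etc/zulip/zulip-secrets.conf secrets zulip_org_key "$NEW_KEY"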
With modern apt-key, the fingerprints are displayed in the more fully
written-out format with spaces, and so `apt-key add` was being run
every time.
This fixes some unnecessary work being done on each puppet run on
Debian stretch.
I would have preferred to not need to do this by upgrading to
upstream, but see #7423 for notes on why that isn't going to work
(basically they broke support for puppet older than 4).
Apparently, these confused the puppet template parser, since they are
somewhat similar to its syntax, resulting in errors trying to use
these templates. It's easy enough to just remove the example
content from the base postgres config file.
We can't really do this in the zulip manifests (since it's sort of a
sysadmin policy decision), but these scripts can cause significant
load when Nagios logs into a server (because many of them take 50ms or
more of work to run). So we just get rid of them.
It seems unlikely we're going to add support for additional older
Debian-based distributions, so it makes sense to just use an else
statement. This should save a bit of busywork every time we add a new
distro.
Mostly, this involves adding the big block at the bottom and making
the version number (10) a variable, so that it's easier to compare
different versions of these config files.
I did an audit of the configuration changes between 9.6 and 10, so
this should be fine, but it hasn't been tested yet.
Our recent addition of Content-Security-Policy to the file uploads
backend broke in-browser previews of PDFs.
The content-types change in the last commit fixed loading PDFs for
most users; but the result was ugly, because e.g. Chrome would put the
PDF previewer into a frame (so there were 2 left scrollbars).
There were two changes needed to fix this:
* Loading the stylesheet the viewer plugin uses. We corrected this by
adding `style-src 'self' 'unsafe-inline';`.
* Loading the plugin itself. Our CSP blocked loading the PDF viewer
plugin. To correct this, we add `object-src 'self'`, and then limit
`plugin-types` to just the one for application/pdf.
We verified this new CSP using https://csp-evaluator.withgoogle.com/
in addition to manual testing.
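Putting those pieces together, the resulting policy looks roughly
like the nginx directive below. This is a sketch: the default-src and
img-src values are assumptions; only the style-src, object-src, and
plugin-types parts are the ones described above.

    add_header Content-Security-Policy "default-src 'none'; img-src 'self'; style-src 'self' 'unsafe-inline'; object-src 'self'; plugin-types application/pdf";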
Previously, user-uploaded PDF files were not properly rendered by
browsers with the local uploads backend, because we weren't setting
the correct content-type.
This adds a basic Content-Security-Policy for user-uploaded avatars
served by the LOCAL_UPLOADS backend.
I think this is for now an unnecessary follow-up to
d608a9d315, but is worth doing because
we may later change what can be uploaded in the avatars directory.
This adds a basic Content-Security-Policy for user-uploaded files with local uploads.
While over time, we plan to add CSP for the main site as well, this CSP is particularly
important for the local-uploads backend, which often shares a domain with the main site.
Running this on additional machines would be redundant; additionally,
the FillState checker cron job runs only on cron systems, so this will
crash on other app frontends.
While this is a different system than I'd written up in #8004, I think
this is a better solution to the general problem of cron jobs to run
on just one server.
Fixes #8004.
Revert c8f034e9a "queue: Remove missedmessage_email_senders code."
As the comment in the code says, it ensures a smooth upgrade path
from 1.7.x; we can delete it in master after 1.8.0 is released.
The removal commit was merged early due to a communication failure.
From here on, we authenticate uploaded-file requests before serving
these files in production. This involves allowing nginx to pass these
file requests on to Django for authentication, and then serving the
files via an internal redirect using the X-Accel-Redirect header. The
redirection of the request and the setting of the X-Accel-Redirect
header are handled by django-sendfile.
NOTE: This commit starts authenticating these requests only for Zulip
servers running Ubuntu Xenial (16.04) or above.
Fixes: #320 and #291 partially.
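Roughly, the flow looks like this sketch (the location name and
filesystem path are illustrative, not the exact ones used):

    # nginx: not reachable directly by clients; only via X-Accel-Redirect.
    location /internal_uploads/ {
        internal;
        alias /home/zulip/uploads/files/;
    }

    # Django (via django-sendfile) authenticates the request and then
    # responds with a header like:
    #   X-Accel-Redirect: /internal_uploads/<path/to/file>
    # after which nginx serves the file itself.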
This should make it possible to use the zulip_ops base rules
successfully on chat.zulip.org. Many of the changes in this commit
are hacks and probably can be cleaned up later, but given that we plan
to drop trusty support soon, it's likely that most of them will simply
be deleted then.
We've been running this change on zulipchat.com for a couple of months
now. Before then, we used to regularly get exceptions like this:
File "./zerver/views/messages.py", line 749, in get_messages_backend
setter=stringify_message_dict)
File "./zerver/lib/cache.py", line 275, in generic_bulk_cached_fetch
cache_set_many(items_for_remote_cache)
File "./zerver/lib/cache.py", line 215, in cache_set_many
get_cache_backend(cache_name).set_many(items, timeout=timeout)
File "/home/zulip/deployments/2017-09-28-21-04-12/zulip-py3-venv/lib/python3.5/site-packages/django/core/cache/backends/memcached.py", line 150, in set_many
self._cache.set_multi(safe_data, self.get_backend_timeout(timeout))
pylibmc.Error: error 48 from memcached_set_multi
This error means memcached was unable to find space for the new value.
You might think that because memcached provides an LRU cache, this
shouldn't happen because it would just evict something... but in fact
* memcached splits its data into "slabs" by object size, and
* until recently, once a 1MiB "chunk" is allocated to a given "slab"
i.e. size class, it wouldn't be reclaimed to allocate to another.
So once the cache has been filled up with objects of some distribution
of sizes, if some objects come in that would go in a different size
class, we have no chunks for that size class / slab, and can't get one.
And that's exactly what was happening on zulipchat.com.
Useful background can be found in
  https://github.com/memcached/memcached/wiki/ServerMaint#slab-imbalance
  https://github.com/memcached/memcached/wiki/ReleaseNotes1411
  https://github.com/memcached/memcached/wiki/ReleaseNotes1425
  https://github.com/memcached/memcached/wiki/ReleaseNotes150
We're already running v1.4.25, which provides an "automover" that should
be well equipped to fix this; v1.5.0 turns it on by default.
With this commit, adopt the "modern start line" recommended in the
release notes for our v1.4.25, including turning on the automover.
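In /etc/memcached.conf terms, that amounts to something like the
sketch below; the memory size is illustrative, and the exact set of
defaults that "-o modern" enables (slab_reassign, slab_automove, and
friends) is per the 1.4.25 release notes rather than spelled out here.

    # Illustrative; see the memcached 1.4.25 release notes for what
    # the "modern" shortcut turns on.
    -m 512
    -o modern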
This doesn't yet pass all Nagios checks correctly, and still has a few
flaws:
* The ideal setup code for the `nagios` user in the database isn't included.
* Some of the other details are a bit off; we need to split some host roles.
But it's better than nothing, and we can iterate from here.
This commit just copies all the code from MissedMessageSendingWorker
class to a new EmailSendingWorker class. All the logic to send an email
through a queue was already there. This commit only makes the logic
generic. It does so by creating a special purpose queue called
'email_senders' to send any type of email. To make
MissedMessageSendingWorker still work we derive it from
EmailSendingWorker. All the tests that were testing
MissedMessageSendingWorker now run against EmailSendingWorker.
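A rough sketch of the resulting shape (the import paths and helper
names approximate the Zulip codebase and should be treated as
assumptions, not exact references):

    from zerver.worker.queue_processors import QueueProcessingWorker, assign_queue
    from zerver.lib.send_email import send_email_from_dict

    @assign_queue('email_senders')
    class EmailSendingWorker(QueueProcessingWorker):
        def consume(self, event):
            # Generic logic for sending any email described by a queue
            # event; previously this lived in MissedMessageSendingWorker.
            send_email_from_dict(event)

    @assign_queue('missedmessage_email_senders')
    class MissedMessageSendingWorker(EmailSendingWorker):
        # Kept so the existing queue and tests keep working; all
        # behavior is inherited from EmailSendingWorker.
        pass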
This fixes a bug where, when a user is unsubscribed from a stream,
any unread messages they had on that stream could leak. While it might
seem to be a minor problem, it can cause significant problems for
computing the `unread_msgs` data structures, since it means we need to
add an extra filter for whether the user is still subscribed, either
in the backend or in the UI.
Fixes #7095.
This causes the cron job to run only when a Zulip-managed certbot
install is actually set up.
Inside `install`, zulip.conf doesn't yet exist when we run
setup-certbot, so we write the setting later. But we also give
setup-certbot the ability to write the setting itself, so that we
can recommend it in instructions for adopting certbot in an
existing Zulip installation.
If we were making an old-fashioned webroot where hand-written static
HTML files went, somewhere under `/srv` would be most appropriate.
Here, this webroot is really more of an implementation detail of the
certbot set up by the Zulip installer/packaging, containing transient
state. So someplace under `/var` is appropriate, and specifically
under `/var/lib/zulip` in order to properly namespace it.
For background on `/var/www` and friends, see the top couple of answers
on
https://unix.stackexchange.com/questions/47436/why-web-server-var-www
For some reason, we have the USING_PGROONGA setting on in development
right now. I'm going to disable that in another commit to match what
we're doing in production, but we'll still want that setting to work
in development.
The problem here was that process_fts_updates only attempted to read
the USING_PGROONGA setting from a /etc/zulip/zulip.conf source, and
thus would just not be updating the index in development.
We weren't compressing SVG, while at the same time we were incorrectly
compressing octet-stream (which meant that downloading .tar.gz files in
Chrome would get double-compressed).
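In nginx terms, the fix is to the gzip_types list, roughly as below;
the list here is abbreviated, not our full configuration:

    # Drop application/octet-stream (already-compressed downloads); add SVG.
    gzip_types text/plain text/css application/json application/javascript image/svg+xml;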
Sparkle was the auto-update system used by the legacy desktop app. We
haven't been capable of using it for auto-update in years, so there's
no reason to keep around the configuration.
The new Electron app uses a different system anyway.
Whatever dist/ functionality this had in 2014 is now served by
zulip.org, and since this serves as a sample, it should be as simple
as possible.
Previously, this was more cluttered than it needed to be.
The old limits were such that these would sometimes oscillate too
high and page erroneously. The purpose of this check is to prevent
large memory leaks, and will still achieve that with a higher limit.
This allows the Nagios user to access redis without having full access
to the redis system. Ideally, this would eventually use a password
that only has statistics read access, but I'm not sure redis supports
that.
This old puppet configuration was never really used, and regardless
hardcoded an ancient zulip.net hostname. We fix this to use the
zulipconf system to get the host domain (though not, at present, the
hostname).
If a machine is configured with no swap intentionally, that
shouldn't be a Nagios problem. This alert is intended to flag
machines which are swapping.
Arguably, we should make this a symlink, but it's probably a good idea
to have every change in the production Nagios configuration go through
the zulip-puppet-apply diff experience.
Since we've found that it's fairly frequent that we want to recommend
to developers that they upgrade to a version of Zulip from Git, it
makes sense to include that by default.
It's needed by scripts/install-yarn.sh. This hadn't been discovered
because most systems end up having curl installed even though it isn't
technically a required package.
This code empirically doesn't work. It's not entirely clear why, even
having done quite a bit of debugging; partly because the code is quite
convoluted, and because it shows the symptoms of people making changes
over time without really understanding how it was supposed to work.
Moreover, this code targets an old version of the APNs provider API.
Apple deprecated that in 2015, in favor of a shiny new one which uses
HTTP/2 to meet the same needs for concurrency and scale that the old
one had to do a bunch of ad-hoc protocol design for.
So, rip this code out. We'll build a pathway to the new API from
scratch; it's not that complicated.
Whenever you restarted supervisord services, we'd end up leaking one
process from the process_queue group, eventually resulting in running
out of memory.
Fixes #6184.
This causes `upgrade-zulip-from-git`, as well as a no-option run of
`tools/build-release-tarball`, to produce a Zulip install running
Python 3, rather than Python 2. In particular this means that the
virtualenv we create, in which all application code runs, is Python 3.
One shebang line, on `zulip-ec2-configure-interfaces`, explicitly
keeps Python 2, and at least one external ops script, `wal-e`, also
still runs on Python 2. See discussion on the respective previous
commits that made those explicit. There may also be some other
third-party scripts we use, outside of this source tree and running
outside our virtualenv, that still run on Python 2.
The Zulip server's settings are only available if process-fts-updates
is running on the same server as a Zulip production deployment. So
we instead check whether we have pgroonga configured in
/etc/zulip/zulip.conf.
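A minimal sketch of that check, assuming Python's configparser and the
standard config path (the section and key names shown are assumptions):

    import configparser

    config = configparser.RawConfigParser()
    config.read('/etc/zulip/zulip.conf')
    # A missing section or key simply means pgroonga isn't enabled.
    pgroonga_setting = config.get('machine', 'pgroonga', fallback='false')
    USING_PGROONGA = pgroonga_setting.lower() in ('1', 'yes', 'true', 'enabled')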
On `trusty` there is no package for `boto` or `gevent` on Python 3, both
of which are dependencies of `wal-e` (at the version we've pinned.) This
is something used only on database servers and only in a replication
scenario, and it doesn't involve any of our code outside the wal-e repo,
so the Python version it uses is quite independent of the Zulip
application server itself and the rest of our code. For now, keep it
explicitly on Python 2 while we move forward for most everything else.
This script in `zulip_ops` is handy for managing EC2 instances. It uses
`boto`, which isn't available in `trusty` for Python 3. The use of
`boto` here isn't particularly deep, so we could replace it with some
more manual HTTP calls if it comes to that. For now, just mark it to
stay on Python 2 while we move the app and all the rest of the ops code
(except this and another straggler or two) to Python 3.
Also make a comment on this package in the Puppet manifest clearer
about what it specifically refers to.
This consists of the `zulip_ops::stats` Puppet class, which has apparently
not been used since 2014, and a number of files that I believe were
only used for that. Also a couple of tiny loose ends in other files.
This is only actually used in our `wal-e` setup, which is in
zulip_ops::postgres_common. (In fact the only mentions of `gevent` in
our whole Git history are for `wal-e`.) So remove where we mention it
on the broader zulip::postgres_common module, and move it where it's
needed.
This follows up on 98cef0ab4 by eliminating the only dependency
outside of the `zulip_ops` Puppet tree on a system Python-library
package which isn't available in `trusty` for Python 3.
This follows up on 207cf6302 from last year to clean up cases that
have apparently popped up since then. Invoking the scripts directly
makes a cleaner command line in any case, and moreover is essential
to how we control running a Zulip install as either Python 2 or 3
(soon, how we always ensure it runs as Python 3.)
One exception: we're currently forcing `provision` in dev to run
Python 3, while still running both Python 2 and Python 3 jobs in CI.
We use a non-shebang invocation to force Python 3 there.
In some of these contexts, we may still be *using* the Python 2
version, but at least this should eliminate running into
`ImportError`s one by one in scripts that run outside a virtualenv,
as we update their shebangs to refer to Python 3.
Several Python libraries we use don't come in Python 3 versions on
trusty: gevent, boto, twisted, django, django-tagging, whisper.
The latter two don't come in Python 3 versions even on xenial.
So some work required before we can actually switch the code that
relies on those libraries to run as Python 3 -- probably the best
solution will be to backport them all in our apt repo. (All but
`whisper` are packaged in zesty; `whisper` upstream just grew Python 3
support this year.)
These are no longer useful, with our spiffy new analytics framework,
and we haven't in fact been using them for some time, while the
`active-user-stats` cron job does cause regular mail from cron.
Just delete them.
It's rare that there's value in having the log files get this big, and
these changes mean these log files should never consume more than a
few gigabytes.
And in particular, the server.log is far more important than the other
log files, and grows much faster, so we might as well spend most of
the space we are spending on that.
I estimate that the total size of log files from this is going to be
under 1-2GB, since 75MB (compressed size) * 10 (compressed logs) +
500MB (uncompressed size) = 1.25GB from server.log, and the rest is
negligible.
Fixes part of #5724.
Most of these log files are useless except for a few minutes after an
event happens, and the aggregate effect of the original size limits
meant that Zulip's logs could consume many gigabytes of disk.
The new logging strategy should limit our usage from supervisor logs
to at most 3 gigabytes:
* 20MB * 3 backups = 60MB per queue worker => <1GB total.
* 100MB * 10 backups = 1GB for the Django and Tornado logs.
Fixes part of #5724.
While running queue processors multithreaded will limit the
performance available to very small systems, it's easy to fix that by
adding more RAM, and previously, Zulip didn't work on such systems at
all, so this is unambiguously an improvement there.
Fixes #32.
Fixes #34.
(Commit message expanded significantly by tabbott.)
Also puts them into a processing queue, though the queue processor
does nothing.
Rewritten by tabbott to avoid unnecessary database queries in
do_send_messages.
This fixes a performance problem where we were previously starting up
a full Django process (~0.7s even on a fast machine) every time a new
email came in, potentially allowing users to accidentally DoS a Zulip
server. Now, we just post over HTTPS, allowing the existing thread
pool support to do its job.
- Add a script wrapper that connects the postfix pipe to the Django web
server over HTTP(S). It uses the shared_secret authentication mode.
- Add a Django view to process messages from the email mirror server.
- Clean up the `email-mirror` management command, leaving just the
functionality needed for cron email processing.
- Add routes for the new tornado view.
- Change the pipe script in the postfix master-process config template
to use the updated script.
- Add tests.
Tweaked by tabbott to adjust the directory and set better defaults.
Fixes #2421.
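A minimal sketch of what the wrapper script does, assuming the
requests library; the endpoint path, field names, and how the shared
secret is read are all illustrative, not the real ones:

    #!/usr/bin/env python
    # Invoked by postfix's pipe(8); reads one inbound email on stdin and
    # forwards it to the Zulip server over HTTP(S), authenticated with
    # the shared secret.
    import sys
    import requests

    shared_secret = 'REPLACE_ME'  # in practice, read from /etc/zulip/zulip-secrets.conf
    response = requests.post(
        'https://127.0.0.1/email_mirror_message',  # illustrative endpoint
        data={'secret': shared_secret,
              'recipient': sys.argv[1],      # illustrative field names
              'msg_text': sys.stdin.read()},
        verify=False,  # the real script handles the local certificate properly
    )
    response.raise_for_status()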
- Enable the `master` parameter in the `uwsgi` configuration.
It allows cleaning up leaked worker processes if the parent
process exits unexpectedly or is killed with SIGKILL; child
processes follow the master and shut themselves down after
the main process.
Fixes #3855.
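In uwsgi ini terms, the change boils down to something like the
following sketch (surrounding options and the file path omitted):

    [uwsgi]
    master = true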
- Add a new 'missedmessage_email_senders' queue for sending missed-message emails.
- Add a new worker to process the 'missedmessage_email_senders' queue.
- Split the aggregation of missed messages and the sending of missed-message
emails into separate queue workers.
- Adapt the tests for sending missed-message emails to the new logic.
Fixes #2607.
Using `supervisorctl restart all` carried longer downtime (since it
just restarts everything at the same moment) and was less under our
control; I'm not sure it had any advantages.
Since browser clients send messages via websockets and not the API,
this is an important element in making sure mission-critical Zulip
functionality is working.
I'm not altogether happy with this (a better solution would be
database-level locking), but I think it solves the immediate problem
of folks with 2 servers being very likely to run analytics on both of
them.
This results in a brief service interruption (not a graceful restart),
but fixes a bug where on a `supervisorctl restart zulip-django`, we'd
end up leaking a bunch of uwsgi processes.
The mechanism was that sending SIGHUP to uwsgi was a command for it to
gracefully restart, so it'd start doing that (whereas supervisor
expected it to be dying)... and then supervisor would start up the new
uwsgi process group, resulting in 2 uwsgi process groups running.
This, in turn, led to a memory leak that could eventually result in
OOM kills.