This has two purposes:
1. Prevent stupid stacks of diacritical marks from overflowing into
other messages. Fixes #7843.
2. Prevent Chrome from collapsing the inside bottom margin with the
.messagebox outside (in a way that Firefox doesn’t).
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This allows the system to get updates to the Groonga repository
signing key, so `apt update` doesn’t start failing when the key
changes (like it recently did).
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
debian-archive-keyring is a dependency of the essential package apt,
so it is present in every Debian system.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
When using our EMAIL_ADDRESS_VISIBILITY_ADMINS feature, we were
apparently creating bot users with different email and delivery_email
properties, due to effectively an oversight in how the code was
written (the initial migration handled bots correctly, but not bots
created after the transition).
Following the refactor in the last commit, the fix for this is just
adding the missing conditional, a test, and a database migration to
fix any incorrectly created bots leaked previously.
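The shape of the missing check, as a standalone sketch (the constants,
helper name, and dummy-address handling here are illustrative, not
Zulip's actual code):

    # Illustrative sketch only; constants and names are not Zulip's real ones.
    EMAIL_ADDRESS_VISIBILITY_EVERYONE = 1
    EMAIL_ADDRESS_VISIBILITY_ADMINS = 3

    def visible_email(delivery_email: str, visibility: int,
                      is_bot: bool, dummy_email: str) -> str:
        # Bots should always end up with email == delivery_email; the bug
        # was that bots created after the transition fell through to the
        # dummy-address branch below.
        if is_bot or visibility == EMAIL_ADDRESS_VISIBILITY_EVERYONE:
            return delivery_email
        # Realms hiding addresses store a placeholder in .email instead.
        return dummy_email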
This is also a useful preparatory refactor for having a user setting
controlling whether one's own email address is publicly available
within the organization.
virtualenv on Ubuntu 16.04, when creating a new environment, downloads
the current version of setuptools, then replaces its pkg_resources
with an old copy from
/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl.
This causes problems, a simple example of which is reproducible from
the ubuntu:16.04 Docker base image as follows:
    apt-get update
    apt-get -y install python3-virtualenv
    python3 -m virtualenv -p python3 /ve
    /ve/bin/pip install sockjs-tornado
    /ve/bin/pip download sockjs-tornado
→ `AttributeError: '_NamespacePath' object has no attribute 'sort'`
More relevantly, it breaks pip-compile in the same way. To fix this,
we need to force setuptools to be reinstalled, even if we’re asking
for the same version.
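One way to express the forced reinstall in a provisioning script, as a
rough sketch (the venv path and pinned version are placeholders, not our
real values):

    # Rough sketch; VENV_PATH and SETUPTOOLS_VERSION are placeholders.
    import os
    import subprocess

    VENV_PATH = "/ve"
    SETUPTOOLS_VERSION = "41.0.0"

    subprocess.check_call([
        os.path.join(VENV_PATH, "bin", "pip"),
        "install",
        # --force-reinstall makes pip reinstall even when the requested
        # version is already installed, replacing the stale pkg_resources.
        "--force-reinstall",
        "setuptools==" + SETUPTOOLS_VERSION,
    ])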
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This makes no changes to the locked versions in *.txt, but it reduces
duplicate information and gives us sane workflows for
* upgrading packages: remove some or all lines from *.txt and re-run
`update-locked-requirements`;
* marking packages as intentionally held back: add a version bound
to *.in with an explanatory comment.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
The reason that `pip-tools` running on Python 3 didn’t detect the
right requirements for `thumbor` on Python 2 is simply that some of
them are conditional on the Python version.
As for the requirements that had been manually added as a workaround:
`backports-abc` and `singledispatch` are now correctly detected, while
`backports.ssl-match-hostname` was vendored into `urllib3` some time
ago and `certifi` is no longer necessary.
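For reference, this kind of Python-version-conditional requirement is
expressed with an environment marker; here is a small illustration using
the packaging library (not part of this change):

    # Evaluating an environment marker like the ones pip-tools emits.
    from packaging.markers import Marker

    marker = Marker('python_version < "3"')
    print(marker.evaluate())                           # False under Python 3
    print(marker.evaluate({"python_version": "2.7"}))  # True for Python 2.7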
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Since LoopQueueProcessingWorker jobs poll rather than consuming as
events arrive, they never register connected consumers, so they can't
be monitored with check_consumers. That's OK, because that monitoring
was redundant with the monitoring we already have for potential growth
in their queues.
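A simplified sketch of why there's no consumer to check for: this style
of worker drains its queue on a timer instead of registering a consumer
(names here are illustrative, not the real worker classes):

    # Illustrative polling worker; not Zulip's actual classes, and
    # queue.drain() stands in for the real queue interface.
    import time

    class LoopWorkerSketch:
        sleep_delay = 1  # seconds between polls

        def run(self, queue) -> None:
            while True:
                # Drain whatever is currently queued, then sleep; nothing
                # stays registered as a consumer between polls, so a
                # "connected consumers" check would always come up empty.
                events = queue.drain()
                if events:
                    self.consume_batch(events)
                time.sleep(self.sleep_delay)

        def consume_batch(self, events) -> None:
            raise NotImplementedError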
Also clean up the block comments for the two other similar queue
processors.
Add a specific command to restart Vagrant to adopt the new
configuration.
(When naïvely using only `vagrant halt` + `vagrant up --provision`,
external devices remained unable to connect; per `netstat -nltp`, the
host IP of forwarded ports remained `127.0.0.1`.)
This should dramatically improve the queue processor's performance in
cases where there's a very high volume of requests on a given endpoint
by a given user, as described in the new docstring.
Until we test this more broadly in production, we won't know if this
is a full solution to the problem, but I think it's likely. We've
never seen the UserActivityInterval worker end up backlogged without a
total queue processor outage, and it should have a similar workload.
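The core idea, sketched with hypothetical names (the real worker's
interface may differ): deduplicate a whole batch of events per
(user, client, query) key before touching the database, so many events
for one hot endpoint cost one write instead of one write each.

    # Hedged sketch; the event fields and do_update_user_activity() are
    # assumptions, not the worker's exact interface.
    from collections import defaultdict
    from typing import Any, Dict, List, Tuple

    def consume_activity_batch(events: List[Dict[str, Any]]) -> None:
        counts: Dict[Tuple[int, int, str], int] = defaultdict(int)
        last_time: Dict[Tuple[int, int, str], float] = {}
        for event in events:
            key = (event["user_profile_id"], event["client_id"], event["query"])
            counts[key] += 1
            last_time[key] = max(last_time.get(key, 0), event["time"])
        for key, count in counts.items():
            user_id, client_id, query = key
            # Hypothetical helper; one UPDATE per distinct key, not per event.
            do_update_user_activity(user_id, client_id, query,
                                    count, last_time[key])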
Fixes #13180.
We don't actually need to go to memcached (falling back to the
database) to fetch either user or client objects on every event. For
user objects, we actually can just pass through the user ID
transparently; for client objects, we can use an in-process cache,
since the mapping of string to ID never changes.
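A minimal sketch of the in-process cache idea for clients (the fetch
function stands in for the real memcached/database lookup):

    # Minimal sketch; fetch_client_from_db() is a stand-in for the real lookup.
    from typing import Dict

    client_cache: Dict[str, "Client"] = {}

    def get_client(name: str) -> "Client":
        # The string -> Client mapping never changes, so a process-local
        # dict is safe and avoids a memcached round trip per event.
        if name not in client_cache:
            client_cache[name] = fetch_client_from_db(name)
        return client_cache[name]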
Given how these tests are written, it's unnecessary to have 3 separate
classes, and having them split up makes it confusing to decide where a
potential additional mm email test should go.
Bootstrap v2.2.0^2~40^2~6 changes this default to false, so this is a
prerequisite to upgrading Bootstrap, and it’s also safer.
This closes an HTML injection path via user full names in the emoji
reaction tooltip. It doesn’t appear to be exploitable for cross-site
scripting because we disallow `>` in full names, and the code happens
to be written such that the next `>` is in a different parser
invocation.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This simple backwards-compatible change saves approximately 12% in the
compressed size of the chat.zulip.org page_params. We can do much,
much better by changing the format, but this seems like a good
intermediate step.
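For anyone re-checking that number, the compressed size can be measured
roughly like this (a sketch; page_params here is a placeholder for the
actual rendered payload):

    # Rough measurement of a payload's compressed size; page_params is a
    # placeholder, not the real chat.zulip.org data.
    import gzip
    import json

    page_params = {"example": "placeholder"}
    payload = json.dumps(page_params).encode()
    print(len(payload), len(gzip.compress(payload)))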
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
In a gigantic realm where we send several MB of `page_params`, it’s
slightly better to have the rest of the `<body>` available to the
browser earlier, so it can show the “Loading…” spinner and start
fetching subresources.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
any_oauth_backend_enabled is all about whether we will have extra
buttons on the login/register pages for logging in with some non-native
backends (like GitHub, Google, etc.). And this isn't specifically about
OAuth backends, but about "social" backends generally, which may not
rely on OAuth at all. This will have more concrete relevance when SAML
authentication is added - a "social" backend requiring an additional
button, but not OAuth-based.
SOCIAL_AUTH_BACKENDS / OAUTH_BACKEND_NAMES currently cover the same
backends: all OAuth backends are social, and all social backends are
OAuth-based. So we get rid of OAUTH_BACKEND_NAMES and use only
SOCIAL_AUTH_BACKENDS.
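A condensed sketch of the resulting shape (the backend names and helper
here are illustrative, not the exact code):

    # Illustrative only; real backends are classes, and the helper's
    # signature is an assumption.
    SOCIAL_AUTH_BACKENDS = ["GitHubAuthBackend", "GoogleAuthBackend"]  # SAML later

    def any_social_backend_enabled(enabled_backend_names) -> bool:
        # Drives whether login/register pages render the extra buttons.
        return any(name in enabled_backend_names for name in SOCIAL_AUTH_BACKENDS)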