All the event handler did was reset some entries in the edit
bot form. This is unnecessary, because the whole form gets
destroyed anyway when closed.
This is done by moving the JS manipulations of the DOM tree
into the bot-settings.handlebars template. Dead code involving
the affected JS variables is removed.
This is the first step in cleaning up the bot edit code.
Since the bot edit form appears dynamically, we remove
it from the static HTML scaffold, of which settings_sidebar
is a part.
The freshly imported data shows that the users' emails are not
included in the data. However, the data received from Slack's older
method (which uses legacy tokens) does contain the users' email data.
When removing the description from a stream (i.e. setting it to ""),
the UI was not correctly updating the description. This is because we
were incorrectly checking for a falsy value, rather than for the
specific value undefined (which means the description wasn't changed).
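The fix itself lives in the frontend JS, but the pattern is general:
distinguish "this field wasn't part of the update" from "this field was
set to an empty value". A minimal sketch of that pattern in Python,
where None plays the role of undefined and all names are hypothetical:

    from typing import Optional

    def apply_stream_update(stream: dict, new_description: Optional[str]) -> None:
        # Wrong: `if new_description:` would skip "" and leave a stale
        # description in the UI. Only a missing value (None) means the
        # description wasn't changed.
        if new_description is not None:
            stream["description"] = new_description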
It's too easy to go over the rate limits when using the webapp.
The correct fix for this probably involves some changes to which
routes get covered by what sort of rate limit, but for now, just
increase the limits.
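As illustration, Zulip expresses its rate limits in the server settings
as (seconds, max_requests) tuples; the setting name is taken from the
Zulip codebase, but the values below are placeholders, not the actual
diff:

    # zproject/settings.py -- illustrative values only
    RATE_LIMITING_RULES = [
        (60, 200),  # e.g. allow up to 200 requests per minute
    ]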
If some bug in Bugdown results in rendered message content that is
bigger than twice the maximum message size, we now just throw an
exception from Bugdown. This is considerably better than the old
behavior, which might result in an enormous message being placed in
the database (potentially bigger than the 1MB limit for storing a
value in memcached), which would in turn have tragic consequences.
This fixes #8322, in that it prevents the super bad outcome seen there
(where basically Zulip became unusable for everyone on the stream
where the message was posted). Now, the failure mode is just the
message failing to send. Still not ideal (and it requires further work
on the URL embed feature), but now a minor problem, not a major one.
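A minimal sketch of the guard described above, with hypothetical names
(the actual limit constant and exception type live in the Zulip
codebase and may differ):

    MAX_MESSAGE_LENGTH = 10000  # hypothetical value, for illustration only

    class BugdownRenderingException(Exception):
        pass

    def check_rendered_size(rendered_content: str) -> str:
        # Fail the render rather than store a runaway message that could
        # blow past memcached's 1MB value limit downstream.
        if len(rendered_content) > MAX_MESSAGE_LENGTH * 2:
            raise BugdownRenderingException("Rendered content exceeds the size limit")
        return rendered_content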
For now, we still need the Travis badge, since Travis is where we test the
production installation process. But ideally, we'll end up removing that too.
This commit adds tests (and thus, an extra code example) for
unsubscribing another user from a particular stream by passing in
the `principals` argument to client.remove_subscriptions. The
ability to pass in `principals` was added in the latest release
of the zulip API PyPI package.
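For reference, the call being exercised looks roughly like this (the
stream name and email address are made-up values):

    import zulip

    client = zulip.Client(config_file="~/zuliprc")

    # Passing `principals` unsubscribes the listed users rather than
    # the calling user.
    result = client.remove_subscriptions(
        ["Denmark"],
        principals=["newbie@example.com"],
    )
    print(result)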
We now have a separate page for common error payloads, for example,
the payload for when the client's API key is invalid. All error
payloads that are presented on this page will be tested similarly
to our other non-error sample fixtures.
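For example, the invalid-API-key payload on that page looks like the
following (the exact `code` value here should be checked against the
page itself):

    {
        "code": "INVALID_API_KEY",
        "msg": "Invalid API key",
        "result": "error"
    }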
The zulip user has no need to see this file; it's used by nginx.
And when we set up the cert early in install, there's no zulip user
yet anyway, so this fails.
Otherwise prepare-base is likely to fail when first run, because the
container isn't up yet when we try to operate in it (but then succeed
when rerun, because the container is left running).
Also clean up the placement of `set -e` vs `set -x`.
We've been running this change on zulipchat.com for a couple of months
now. Before then, we used to regularly get exceptions like this:
File "./zerver/views/messages.py", line 749, in get_messages_backend
setter=stringify_message_dict)
File "./zerver/lib/cache.py", line 275, in generic_bulk_cached_fetch
cache_set_many(items_for_remote_cache)
File "./zerver/lib/cache.py", line 215, in cache_set_many
get_cache_backend(cache_name).set_many(items, timeout=timeout)
File "/home/zulip/deployments/2017-09-28-21-04-12/zulip-py3-venv/lib/python3.5/site-packages/django/core/cache/backends/memcached.py", line 150, in set_many
self._cache.set_multi(safe_data, self.get_backend_timeout(timeout))
pylibmc.Error: error 48 from memcached_set_multi
This error means memcached was unable to find space for the new value.
You might think this shouldn't happen, since memcached provides an
LRU cache and would just evict something... but in fact:
* memcached splits its data into "slabs" by object size, and
* until recently, once a 1MiB "chunk" was allocated to a given "slab",
i.e. size class, it wouldn't be reclaimed for allocation to another.
So once the cache has been filled up with objects of some distribution
of sizes, if some objects come in that would go in a different size
class, we have no chunks for that size class / slab, and can't get one.
And that's exactly what was happening on zulipchat.com.
Useful background can be found in:
https://github.com/memcached/memcached/wiki/ServerMaint#slab-imbalance
https://github.com/memcached/memcached/wiki/ReleaseNotes1411
https://github.com/memcached/memcached/wiki/ReleaseNotes1425
https://github.com/memcached/memcached/wiki/ReleaseNotes150
We're already running v1.4.25, which provides an "automover" that should
be well equipped to fix this; v1.5.0 turns it on by default.
With this commit, adopt the "modern start line" recommended in the
release notes for our v1.4.25, including turning on the automover.
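Concretely, the "modern start line" is a set of -o options passed to
memcached; the combination below follows the v1.4.25 release notes'
recommendation, but treat it as a sketch and verify against the notes
linked above:

    # /etc/memcached.conf (illustrative)
    -o slab_reassign,slab_automove,lru_crawler,lru_maintainer,maxconns_fast,hash_algorithm=murmur3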