This reverts commit 64aced74012101f3bdbd3a4e6066b46ad8e1f4ea,
which was always intended as temporary code until we upgraded
the back end to support dictionary-style narrow operators.
(imported from commit b8d3a19f3aad3d9d6a26b9dcc07f502c55b18edd)
The filter_term() function was supporting the transition
from using tuples for search terms to using dictionaries,
but now all of the JS code should be dictionary-compatible.
(We had already abandoned the tuple safety net on staging,
and a couple of days of use have given me confidence that we
can pull the shim code.)
The one side effect this change has is that search terms will be
initialized to {} instead of []. This distinction matters
when it comes to calling JSON.stringify on the search terms.
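For example (the operator/operand values here are just illustrative):

    // Old tuple form vs. new dictionary form of a single term:
    JSON.stringify(['stream', 'devel']);                     // '["stream","devel"]'
    JSON.stringify({operator: 'stream', operand: 'devel'});  // '{"operator":"stream","operand":"devel"}'
    // And the empty-initialization case that changes with this commit:
    JSON.stringify([]);  // '[]'
    JSON.stringify({});  // '{}'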
(imported from commit 1fbe11011d8953dbea28c0657cbf88384d343e00)
The narrow_parameter converter now converts tuples of
(operator, operand) into dictionaries so that downstream
functions/classes deal in dictionaries. This
affects get_old_messages_backend, messages_in_narrow_backend,
and NarrowBuilder.
(imported from commit 7e8cb887f7872ec687acd8c4857d1d5222ab0d5f)
The channel module now keeps track of pending ajax requests and has
an abort_all function to abort all of them.
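The idea, in rough sketch form (not the actual channel.js code; jQuery's
jqXHR objects support .abort()):

    var pending_requests = [];

    function call(options) {
        var jqxhr = $.ajax(options);
        pending_requests.push(jqxhr);
        return jqxhr;
    }

    exports.abort_all = function () {
        pending_requests.forEach(function (jqxhr) {
            jqxhr.abort();
        });
        pending_requests = [];
    };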
(imported from commit 4e78ab24d2295bd67de5633e3a200dfa489825b1)
When we typed "stream:" into the search bar, the empty operand
triggered an error in the Dict class for an undefined key, because
we were using opts[0] as a "defensive" fallback for opts.operand,
but an opts.operand of '' is more correct than opts[0] being undefined.
Now we only fall back to opts[0] when opts.operand is undefined, and
we emit a blueslip error when that happens.
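In sketch form, the new fallback looks something like this (the error
message is illustrative):

    var operand = opts.operand;
    if (operand === undefined) {
        blueslip.error('Tuple-style term encountered: ' + JSON.stringify(opts));
        operand = opts[0];
    }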
(imported from commit 88a196d3bc3d67689c36bc036f378da744c652f9)
Before we deploy this commit, we must migrate the data from the staging redis
server to the new, dedicated redis server. The steps for doing so are the
following:
* Remove the zulip::redis puppet class from staging's zulip.conf
* ssh once from staging to redis-staging.zulip.net so that the host key is known
* Create a tunnel from redis0.zulip.net to staging.zulip.net
  * zulip@redis0:~$ ssh -N -L 127.0.0.1:6380:127.0.0.1:6379 -o ServerAliveInterval=30 -o ServerAliveCountMax=3 staging.zulip.net
* Set the redis instance on redis0.zulip.net to replicate the one on staging.zulip.net
  * redis 127.0.0.1:6379> slaveof 127.0.0.1 6380
* Stop the app on staging
* Stop redis-server on staging
* Promote the redis server on redis0.zulip.net to a master
  * redis 127.0.0.1:6379> slaveof no one
* Do a puppet apply at this commit on staging (this will bring up the tunnel to redis0)
* Deploy this commit to staging (start the app on staging)
* Kill the tunnel from redis0.zulip.net to staging.zulip.net
* Uninstall redis-server on staging
The steps for migrating prod will be the same modulo s/staging/prod0/.
(imported from commit 546d258883ac299d65e896710edd0974b6bd60f8)
The zulip::redis puppet class should be added to all our frontends' zulip.conf
after this is deployed. No puppet apply is required.
(imported from commit ccea89f4779c6c49c0cbe837adcb5be21bfe55ab)
Apparently the "inline" treeprocessor is what runs the inline
patterns. Also re-enable the rewriting-to-https support.
(imported from commit 2fde2c1f15217a784f26b16db25ee745f424f2f0)
Have the server send down the stream's id for removal
events, and have the client use that id to look up the
stream in its internal data structures. This sets the
stage for eventually just sending the stream id (and not
the stream name) down to clients, once all our clients
are ready to use the stream id.
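Client-side, the removal path can now be keyed on the id, roughly like
this (a sketch; stream_data.get_sub_by_id() is described in a later
commit in this log):

    function handle_remove_event(stream_id) {
        var sub = stream_data.get_sub_by_id(stream_id);
        if (sub === undefined) {
            blueslip.error('Removal event for unknown stream_id: ' + stream_id);
            return;
        }
        // ... update internal data structures and the UI for this sub ...
    }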
(imported from commit 922516c98fb79ffad8ae7da0396646663ca54fd0)
Splitting out subs.mark_sub_unsubscribed gives the calling
code flexibility to track down the "sub" as it sees fit.
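The calling pattern becomes roughly (sketch):

    // The caller finds the sub however it likes (by id, by name, from an
    // event payload, ...), and the helper just does the unsubscribe work:
    var sub = stream_data.get_sub_by_id(stream_id);
    subs.mark_sub_unsubscribed(sub);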
(imported from commit 4052f5a2a0f6fd58a2177f2b5961ce72f052cf94)
Before this change, we were using sequentially generated ids
on the client side to identify streams. Now we just use
the ids from the server. The goal here is to reduce the
confusion of having two different ids attached to a stream.
Also, not that it matters a ton, but this means that
the browser basically has an immutable id for each stream
that is future-proof against reloads, multiple create_sub calls, etc.
It is also a bit easier to grep for ".stream_id" than ".id".
(imported from commit 057f9e50dfee127edfe3facd52da93108241666a)
We have shim code that makes our internal narrow operators
support both a tuple interface and an object interface. We
are removing the shim on staging to help expose any dark
corners of the code that still rely on operators being
represented as tuples.
(imported from commit f9d101dbb7f49a4abec14806734b9c86bd93c4e1)
Here, we don't want to check the uploading user's realm when determining
message privacy, because that would prevent non-Zulip users from having
email-mirror-uploaded images. Instead, we just pass along the target
realm for the message explicitly to upload_message_image().
(imported from commit 6891261552135b1f41ff9da55ffe963ee5000556)
Otherwise, we will enable the postfix config on all frontends,
regardless of whether Enterprise deployments requested it.
(imported from commit 9592be3706adcee7547f6795f32fe7b8d85e71ee)
Our overall guideline is that event type names are singular, and the
corresponding lists of events are plural. 'subscriptions' was not following
this guideline and (potentially as a result) had a bug where it was
impossible for clients to explicitly subscribe to subscription change events.
(imported from commit 7b3162141fd673746e0489199966c29ea32ee876)
Chrome was showing a memory leak after many auto-reloads. Emptying
the collections and removing the event listeners reduces the severity.
Before this change, 40 reloads would grow memory usage to about 140MB;
now it stays around 50MB.
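The cleanup is along these lines (a sketch; the collection name is
illustrative, and $(...).off() is jQuery's "remove all attached
handlers" call):

    function cleanup_before_reload() {
        // Empty the big client-side collections so the old objects can
        // be garbage-collected after the reload.
        all_messages.length = 0;
        // Remove jQuery event handlers that would otherwise keep the old
        // DOM nodes and closures alive.
        $(document).off();
        $(window).off();
    }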
(imported from commit 55fbeff9bdd0363bb95929f2981a2de238ff35d8)
This is yet another change related to phasing out the
[operator, operand] tuple data structure for representing
terms in a narrow.
(imported from commit 508e58fc4eebae8a24a8ae59919ba5d94fc66850)
The JS code can now call stream_data.get_sub_by_id() to get
a sub from a stream_id. Subs have stream_id due to a prior commit,
and we keep track of the mapping in stream_data's subs_by_stream_id
variable.
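In rough form (a sketch; add_sub_to_id_map is a made-up name for wherever
subs get registered, and the real code may use the Dict class rather than
a plain object):

    var subs_by_stream_id = {};

    exports.add_sub_to_id_map = function (sub) {
        subs_by_stream_id[sub.stream_id] = sub;
    };

    exports.get_sub_by_id = function (stream_id) {
        return subs_by_stream_id[stream_id];
    };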
(imported from commit 409e13d6d2e79d909441a66c85ee651529d15cd2)