# Real-time push and events

Zulip's "events system" is the server-to-client push system that
powers our real-time sync. This document explains how it works; to
read an example of how a complete feature using this system works,
check out the
[new application feature tutorial](../tutorials/new-feature-tutorial.md).

Any single-page web application like Zulip needs a story for how
changes made by one client are synced to other clients, though having
a good architecture for this is particularly important for a chat tool
like Zulip, since the state is constantly changing. When we talk
about clients, think a browser tab, mobile app, or API bot that needs
to receive updates to the Zulip data. The simplest example is a new
message being sent by one client; other clients must be notified in
order to display the message. But a complete application like Zulip
has dozens of different types of data that need to be synced to other
clients, whether it be new streams, changes in a user's name or
avatar, settings changes, etc. In Zulip, we call these updates that
need to be sent to other clients **events**.

An important thing to understand when designing such a system is that
events need to be synced to every client that has a copy of the old
data if one wants to avoid clients displaying inaccurate data to
users. So if a user has two browser windows open and sends a message,
every client controlled by that user as well as any recipients of the
message, including both of those two browser windows, will receive
that event. (Technically, we don't need to send events to the client
that triggered the change, but this approach saves a bunch of
unnecessary duplicate UI update code, since the client making the
change can just use the same code as every other client, maybe plus a
little notification that the operation succeeded.)

Architecturally, there are a few things needed to make a successful
real-time sync system work:

* **Generation**. Generating events when changes happen to data, and
  determining which users should receive each event.
* **Delivery**. Efficiently delivering those events to interested
  clients, ideally in an exactly-once fashion.
* **UI updates**. Updating the UI in the client once it has received
  events from the server.

Reactive JavaScript libraries like React and Vue can help simplify the
last piece, but there aren't good standard systems for doing
generation and delivery, so we have to build them ourselves.

This document discusses how Zulip solves the generation and delivery
problems in a scalable, correct, and predictable way.

## Generation system

Zulip's generation system is built around a Python function,
`send_event(realm, event, users)`. It accepts the realm (used for
sharding), the event data structure (just a Python dictionary with
some keys and values; `type` is always one of the keys, but the rest
depend on the specific event), and a list of user IDs for the users
whose clients should receive the event. In special cases such as
message delivery, the list of users will instead be a list of dicts
mapping user IDs to user-specific data like whether that user was
mentioned in that message. The data passed to `send_event` are simply
marshalled as JSON and placed in the `notify_tornado` RabbitMQ queue
to be consumed by the delivery system.
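
For example, a sketch of how an organization-level settings change
might be sent (the event fields and the `active_user_ids` helper here
are illustrative assumptions, not authoritative; real event formats
are documented in the `GET /events` API documentation):

```python
# Hedged sketch: tell every client in the realm that the organization
# name changed. Field names are illustrative, not authoritative.
event = dict(
    type="realm",
    op="update",
    property="name",
    value="New organization name",
)
send_event(realm, event, active_user_ids(realm.id))
```
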
Usually, this list of users is one of three things:

* A single user (e.g. for user-level settings changes).
* Everyone in the realm (e.g. for organization-level settings changes,
  like new realm emoji).
* Everyone who would receive a given message (for messages, emoji
  reactions, message editing, etc.); i.e. the subscribers to a stream
  or the people on a private message thread.

It is the responsibility of the caller of `send_event` to choose the
list of user IDs correctly. There can be security problems if e.g. an
event containing private message content is sent to the entire
organization. However, if an event isn't sent to enough clients,
there will likely be user-visible real-time sync bugs.

Most of the hard work in event generation is about defining consistent
event dictionaries that are clear, readable, useful to the wide range
of possible clients, and easy for developers to work with.

## Delivery system

Zulip's event delivery (real-time push) system is based on Tornado,
which is ideal for handling a large number of open requests. Details
on Tornado are available in the
[architecture overview](../overview/architecture-overview.md), but in short it
is good at holding open a large number of connections for a long time.
The complete system is about 2000 lines of code in `zerver/tornado/`,
primarily `zerver/tornado/event_queue.py`.

Zulip's event delivery system is based on "long-polling"; basically,
clients make `GET /json/events` calls to the server, and the server
doesn't respond to the request until it has an event to deliver to the
client. This approach is reasonably efficient and works everywhere
(unlike websockets, which have a decreasing but nonzero level of
client compatibility problems).

For each connected client, the **Event Queue Server** maintains an
**event queue**, which contains any events that are to be delivered to
that client and have not yet been acknowledged by that client.
Ignoring the subtle details around error handling, the protocol is
pretty simple; when a client does a `GET /json/events` call, the
server checks if there are any events in the queue. If there are, it
returns the events immediately. If there aren't, it records that
queue as having a waiting client (often called a `handler` in the
code).

When the Event Queue Server pulls an event off the `notify_tornado`
RabbitMQ queue, it simply delivers the event to each queue associated
with one of the target users. If the queue has a waiting client, it
breaks the long-poll connection by returning an HTTP response to the
waiting client request. If there is no waiting client, it simply
pushes the event onto the queue.
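
In outline, that dispatch step might look like the following sketch
(simplified; the helper names are approximations of the real code in
`zerver/tornado/event_queue.py`, not exact signatures):

```python
# Simplified sketch of the Tornado-side dispatch described above.
def process_notification(notice: dict) -> None:
    event = notice["event"]
    for user in notice["users"]:
        # For message events, "users" is a list of dicts with
        # per-user data; otherwise it is a list of user IDs.
        user_id = user["id"] if isinstance(user, dict) else user
        for client in get_client_descriptors_for_user(user_id):
            # add_event finishes the long-poll response if a handler
            # is waiting; otherwise it just queues the event.
            client.add_event(event.copy())
```
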
When starting up, each client makes a `POST /json/register` request to
the server, which creates a new event queue for that client and returns
the `queue_id` as well as an initial `last_event_id` to the client (it
can also, optionally, fetch the initial data to save an RTT and avoid
races; see the below section on initial data fetches for details on
why this is useful). Once the event queue is registered, the client
can just do an infinite loop calling `GET /json/events` with those
parameters, updating `last_event_id` each time to acknowledge any
events it has received (see `call_on_each_event` in the
[Zulip Python API bindings][api-bindings-code] for a complete example
implementation). When handling each `GET /json/events` request, the
queue server can safely delete any events that have an event ID less
than or equal to the client's `last_event_id` (event IDs are just a
counter for the events a given queue has received).
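
Putting the protocol together, a minimal client could be structured
like the sketch below. It talks raw HTTP to the endpoints named above;
the URL, credentials, and response-field handling are illustrative
assumptions, and real clients should use the API bindings, which also
handle errors and retries:

```python
# Minimal long-polling client sketch; all error handling omitted.
import requests

server = "https://zulip.example.com"  # placeholder
session = requests.Session()
session.auth = ("bot-email@example.com", "API_KEY")  # placeholder

register = session.post(f"{server}/json/register").json()
queue_id = register["queue_id"]
last_event_id = register["last_event_id"]

while True:
    result = session.get(
        f"{server}/json/events",
        params={"queue_id": queue_id, "last_event_id": last_event_id},
    ).json()
    for event in result["events"]:
        print("got event:", event["type"])  # application logic goes here
        last_event_id = max(last_event_id, event["id"])
```
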
If network failures were impossible, the `last_event_id` parameter in
the protocol would not be required, but it is important for enabling
exactly-once delivery in the presence of potential failures. (Without
it, the queue server would have to delete events from the queue as
soon as it attempted to send them to the client; if that specific HTTP
response didn't reach the client due to a network TCP failure, then
those events could be lost.)
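
Conceptually, the server-side pruning this enables is just a filter
over the queue (a sketch, not the real code):

```python
# Sketch: when handling GET /json/events, drop acknowledged events.
queue.events = [e for e in queue.events if e["id"] > last_event_id]
```
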
[api-bindings-code]: https://github.com/zulip/python-zulip-api/blob/master/zulip/zulip/__init__.py

The queue servers are a very high-traffic system, processing at a
minimum one request for every message delivered to every Zulip client.
Additionally, as a workaround for low-quality NAT servers that kill
HTTP connections that are open without activity for more than 60s, the
queue servers also send a heartbeat event to each queue at least once
every 45s or so (if no other events have arrived in the meantime).

To avoid leaking memory and other resources, the queues are
garbage collected after (by default) 10 minutes of inactivity from a
client, under the theory that the client has likely gone off the
Internet (or no longer exists); this happens constantly. If
the client returns, it will receive a "queue not found" error when
requesting events; its handler for this case should just restart the
client / reload the browser so that it refetches initial data the same
way it would on startup. Since clients have to implement their
startup process anyway, this approach adds minimal technical
complexity to clients. A nice side effect is that if the Event Queue
Server (which stores queues in memory) were to crash and lose
its data, clients would recover, just as if they had lost Internet
access briefly (there is some DoS risk to manage, though).
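
A client's handling of the "queue not found" case can be as simple as
the sketch below. (The `BAD_EVENT_QUEUE_ID` error code is an
assumption here; check the error payload documented in the API docs
rather than trusting this string.)

```python
# Sketch of client-side recovery from a garbage-collected queue.
result = session.get(f"{server}/json/events", params=params).json()
if result["result"] == "error" and result.get("code") == "BAD_EVENT_QUEUE_ID":
    # The queue is gone; re-run the normal startup flow, which
    # re-registers a queue and refetches the initial state.
    restart_client()  # placeholder for the client's startup logic
```
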
(The Event Queue Server is designed to save any event queues to disk
and reload them when the server is restarted, and catches exceptions
carefully, so such incidents are very rare, but it's nice to have a
design that handles them without leaving broken out-of-date clients
anyway.)

## The initial data fetch

When a client starts up, it usually wants to get two things from the
server:

* The "current state" of various pieces of data, e.g. the current
  settings, set of users in the organization (for typeahead), streams,
  messages, etc. (aka the "initial state").
* A subscription to receive updates to those data when they are
  changed by a client (aka an event queue).

Ideally, one would get those two things atomically, i.e. if some other
user changes their name, either the name change happens after the
fetch (and thus the old name is in the initial state and there will be
an event in the queue for the name change) or before (the new name is
in the initial state, and there is no event for that name change in
the queue).

Achieving this atomicity goal saves a huge amount of work: the N
clients for Zulip don't need to worry about a wide range of potential
rare and hard-to-reproduce race conditions; we just have to implement
things correctly once in the Zulip server.

This is quite challenging to do technically, because fetching the
initial state for a complex web application like Zulip might involve
dozens of queries to the database, caches, etc. over the course of
100ms or more, and it is thus nearly impossible to do all of those
things together atomically. So instead, we use a more complicated
algorithm that can produce the atomic result from non-atomic
subroutines. Here's how it works when you make a `register` API
request; the logic is in `zerver/views/events_register.py` and
`zerver/lib/events.py`. The request is directly handled by Django
(a pseudocode sketch follows the list):

* Django makes an HTTP request to Tornado, requesting that a new event
  queue be created, and records its queue ID.
* Django does all the various database/cache/etc. queries to fetch the
  data, non-atomically, from the various data sources (see
  the `fetch_initial_state_data` function).
* Django makes a second HTTP request to Tornado, requesting any events
  that had been added to the Tornado event queue since it
  was created.
* Finally, Django "applies" the events (see the `apply_events`
  function) to the initial state that it fetched. E.g. for a name
  change event, it finds the user data in the `realm_user` data
  structure, and updates it to have the new name.
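
In pseudocode, the flow looks roughly like this (only
`fetch_initial_state_data` and `apply_events` are real names from the
text above; the other helpers are placeholders):

```python
# Rough sketch of the register endpoint's atomicity algorithm.
def do_events_register(user_profile):
    queue_id = create_tornado_event_queue(user_profile)  # step 1 (placeholder)
    state = fetch_initial_state_data(user_profile)       # step 2: non-atomic queries
    events = get_events_from_tornado(queue_id)           # step 3 (placeholder)
    apply_events(state, events)                          # step 4: reconcile in place
    state["queue_id"] = queue_id
    return state
```
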
### Testing

The design above achieves everything we desire, at the cost that we need to
write a correct `apply_events` function. This is a difficult function to
implement correctly, because the situations that it handles almost never
happen (being race conditions) during manual testing. Fortunately, we have
a protocol for testing `apply_events` in our automated backend tests.

#### Overview

Once you are completely confident that an "action function" works correctly
in terms of "normal" operation (which typically involves writing a
full-stack test for the corresponding POST/GET operation), then you will be
ready to write a test in `test_events.py`.

The actual code for a `test_events` test can be quite concise:

```python
def test_default_streams_events(self) -> None:
    stream = get_stream("Scotland", self.user_profile.realm)
    events = self.verify_action(lambda: do_add_default_stream(stream))
    check_default_streams("events[0]", events[0])
    # (some details omitted)
```

The real trick is debugging these tests.

The test example above has three things going on:

* Set up some data (`get_stream`)
* Call `verify_action` with an action function (`do_add_default_stream`)
* Use a schema checker to validate data (`check_default_streams`)

#### verify_action

All the heavy lifting that pertains to `apply_events` happens within the
call to `verify_action`, which is a test helper in the `BaseAction` class
within `test_events.py`.

The `verify_action` function simulates the possible race condition in
order to verify that the `apply_events` logic works correctly in the
context of some action function. To use our concrete example above,
we are verifying that applying the events from the
`do_add_default_stream` action inside of `apply_events` to a stale
copy of your state results in the same state dictionary as doing the
action and then fetching a fresh copy of the state.

In particular, `verify_action` does the following (see the sketch
after this list):

* Call `fetch_initial_state_data` to get the current state.
* Call the action function (e.g. `do_add_default_stream`).
* Capture the events generated by the action function.
* Check the events generated are documented in the [OpenAPI
  schema](../documentation/api.md) defined in
  `zerver/openapi/zulip.yaml`.
* Call `apply_events(state, events)`, to get the resulting "hybrid state".
* Call `fetch_initial_state_data` again to get the "normal state".
* Compare the two results.
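
A condensed sketch of that flow (the real helper takes many parameters
and does additional checking; the internal helper names here are
placeholders):

```python
# Condensed sketch of verify_action's core logic.
def verify_action(self, action):
    hybrid_state = fetch_initial_state_data(self.user_profile)
    events = capture_send_event_calls(action)   # placeholder helper
    validate_against_openapi_schema(events)     # placeholder helper
    apply_events(hybrid_state, events)
    normal_state = fetch_initial_state_data(self.user_profile)
    assert_states_equal(hybrid_state, normal_state)  # fails with a diff
    return events
```
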
In the event that you wrote the `apply_events` logic correctly the
first time, then the two states will be identical, and the
`verify_action` call will succeed and return the events that came from
the action.

Often you will get the `apply_events` logic wrong at first, which will
cause `verify_action` to fail. To help you debug, it will print a diff
between the "hybrid state" and the "normal state" obtained from
calling `fetch_initial_state_data` after the changes. If you encounter a
diff like this, you may be in for a challenging debugging exercise. It
will be helpful to re-read this documentation to understand the rationale
behind the `apply_events` function. It may also be helpful to read the code
for `verify_action` itself. Finally, you may want to ask for help on chat.

Before we move on to the next step, it's worth noting that `verify_action`
only has one required parameter, which is the action function. We
typically express the action function as a lambda, so that we
can pass in arguments:

```python
events = self.verify_action(lambda: do_add_default_stream(stream))
```

There are some notable optional parameters for `verify_action`
(illustrated in the sketch after this list):

* `state_change_expected` must be set to `False` if your action
  doesn't actually require state changes for some reason; otherwise,
  `verify_action` will complain that your test doesn't really
  exercise any `apply_events` logic. Typing notifications (which
  are ephemeral) are a common place where we use this.

* `num_events` will tell `verify_action` how many events the
  `hamlet` user will receive after the action (the default is 1).

* parameters such as `client_gravatar` and `slim_presence` get
  passed along to `fetch_initial_state_data` (and it's important
  to test both boolean values of these parameters for relevant
  actions).
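
The calls below are purely illustrative (the action functions are
placeholders, not real Zulip functions):

```python
# An action that generates an ephemeral event: no state change expected.
events = self.verify_action(
    lambda: do_something_ephemeral(self.user_profile),  # placeholder
    state_change_expected=False,
)

# An action known to generate two events, with a
# fetch_initial_state_data flag passed through.
events = self.verify_action(
    lambda: do_change_something(self.user_profile),  # placeholder
    num_events=2,
    client_gravatar=False,
)
```
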
For advanced use cases of `verify_action`, we highly recommend reading
the code itself in `BaseAction` (in `test_events.py`).

#### Schema checking

The `test_events.py` system has two forms of schema checking. The
first is verifying that you've updated the [GET /events API
documentation](https://zulip.com/api/get-events) to document your new
event's format for the benefit of the developers of Zulip's mobile app,
terminal app, and other API clients. See the [API documentation
docs](../documentation/api.md) for details on the OpenAPI
documentation.

The second is a higher-detail check inside `test_events` that this
specific test generated the expected series of events. Let's look at
the last line of our example test snippet:

```python
# ...
events = self.verify_action(lambda: do_add_default_stream(stream))
check_default_streams("events[0]", events[0])
```

We have discussed `verify_action` in some detail, and you will
note that it returns the actual events generated by the action
function. It is part of our test discipline in `test_events` to
verify that the events are formatted in a predictable way.

Ideally, we would test that events match the exact data that we
expect, but it can be difficult to do this due to unpredictable
things like database IDs. So instead, we just verify the "schema"
of the event(s). We use a schema checker like `check_default_streams`
to validate the types of the data.

If you are creating a new event format, then you will have to
write your own schema checker in `event_schema.py`. Here is
the example relevant to our example:

```python
default_streams_event = event_dict_type(
    required_keys=[
        ("type", Equals("default_streams")),
        ("default_streams", ListType(DictType(basic_stream_fields))),
    ]
)
check_default_streams = make_checker(default_streams_event)
```

Note that `basic_stream_fields` is not shown in these docs. The
best way to understand how to write schema checkers is to read
`event_schema.py`. There is a large block comment at the top of
the file, and then you can skim the rest of the file to see the
patterns.

When you create a new schema checker for a new event, you not only
make the `test_events` test more rigorous, you also allow our other
tools to use the same schema checker to validate event formats in our
node test fixtures and our OpenAPI documentation.

#### Node testing

Once you've completed backend testing, be sure to add an example event
in `frontend_tests/node_tests/lib/events.js`, a test of the
`server_events_dispatch.js` code for that event in
`frontend_tests/node_tests/dispatch.js`, and verify your example
against the two versions of the schema that you declared above using
`tools/check-node-fixtures`.

#### Code coverage

The final detail needed to ensure that `apply_events` always works
correctly is to make sure that we have relevant tests for
every event type that can be generated by Zulip. This can be tested
manually using `test-backend --coverage BaseAction` and then
checking that all the calls to `send_event` are covered. Someday
we'll add automation that verifies this directly by inspecting the
coverage data.

#### page_params

In the Zulip webapp, the data returned by the `register` API is
available via the `page_params` object.

### Messages

One exception to the protocol described in the last section is the
actual messages. Because Zulip clients usually fetch them in a
separate AJAX call after the rest of the site is loaded, we don't need
them to be included in the initial state data. To handle those
correctly, clients are responsible for discarding events related to
messages that the client has not yet fetched.
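
A sketch of that client-side rule (every name here is illustrative,
not the webapp's actual code):

```python
# Sketch: ignore message events for history we haven't fetched yet.
def on_message_event(event: dict) -> None:
    message = event["message"]
    if not have_fetched_messages_near(message["id"]):  # placeholder
        # Discard the event; this message will be included when the
        # client fetches that part of the history via the fetch API.
        return
    insert_message(message)  # placeholder
```
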
Additionally, see
[the master documentation on sending messages](../subsystems/sending-messages.md).
|