This is a follow-up to da79fd206a.
I accidentally skipped over pm_conversations. Same
ideas as the bigger previous commit--we pass in params
to the initialize function and do the delete cleanup
within ui_init.
Let's say you have a module hello.js like so:
// hello.js
const hello_world = i18n.t('Hello world');
exports.get_greeting = () => hello_world;
And then two modules like this:
// apple.js
const hello = require('hello');
exports.foo = () => {
show_greeting(hello.get_greeting());
};
// banana.js
const hello = require('hello');
exports.foo = () => {
display_greeting(hello.get_greeting());
};
The test for apple.js could look like this,
and it won't crash due to the stub:
set_global('i18n', {t: () => {}});
zrequire('hello');
zrequire('apple');
Now let's say you write this broken version
of a test for banana.js:
zrequire('hello');
zrequire('banana');
If you run `./tools/test-js-with-node`, the
"banana" test will pass, because while it
does require "hello", it won't actually
*execute* the code that happens at require
time for "hello", because it's already in
the cache. Here is the code that gets
skipped:
const hello_world = i18n.t('Hello world');
But then if you try to run the banana test
individually, the above line of code will
cause the test to crash. And it will crash
even before you actually try to test the
meaningful code here:
exports.foo = () => {
display_greeting(hello.get_greeting());
};
This commit fixes this leak scenario by just
aggressively clearing out things from the
require cache.
This slows tests down by about 10%, which I think
is worth the extra safety here.
This cleans up the handoff of page_params
data between ui_init and modules that
take over ownership of page_params-derived
data.
Read the long comment in ui_init for a bit
more context.
Most of this diff is actually test cleanup.
And a lot of the diff to "real" code is
just glorified `s/page_params/params/`
in the `initialize` functions.
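The pattern is roughly this (module and field names here are
just illustrative):

    // ui_init.js
    widgets.initialize({
        widget_data: page_params.widget_data,
    });
    // ui_init owns the cleanup, so nothing else can read
    // page_params.widget_data later by accident.
    delete page_params.widget_data;

    // widgets.js
    let widget_data;
    exports.initialize = function (params) {
        widget_data = params.widget_data;
    };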
One little oddity is that we don't actually
surrender ownership of `page_params.user_id`
to `people.js`. We could plausibly sweep
the rest of the codebase to just use
`people.my_user_id()` consistently, but it's
not a super high priority thing to fix,
since the value never changes.
The stream_data situation is a bit messy,
since we consume `page_params` data in the
initialize() function in addition to the
`params` data we "own". I added a comment
there and intend to follow up. I tried
to mostly avoid the "word soup" by extracting
three locals at the top.
Finally, I don't touch `alert_words` yet,
despite it also doing the delete-page-params-data
dance. The problem is that `alert_words`
doesn't have a proper `initialize()`. We
should clean that up and have it use a
`Map` internally, too.
This gives them cache-compatible URLs, and also avoids some extra
copies of the sprite sheet images.
Comments on the Octopus emoji added by tabbott.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This is not always a behavior-preserving translation: _.defaults
mutates its first argument. However, the code does not always appear
to have been written to expect that.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This is not always a behavior-preserving translation: _.extend mutates
its first argument. However, the code does not always appear to have
been written to expect that.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This is not always a behavior-preserving translation: $.extend mutates
its first argument. However, the code does not always appear to have
been written to expect that.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This test wasn't particularly high value and was flaky; it would be
better rewritten as a set of node tests verifying the same logic,
which would run 100x as fast and more reliably, with similar
testing fidelity.
This moves some code from settings_display.js
into the new module settings_config.js.
Extracting this module breaks some dependencies
on settings_display.js (which has some annoying
transitive dependencies, including jQuery).
In particular this isolates stream_data from settings_display.js.
Two of the three structures that we moved here
weren't even directly used by settings_display.js,
since we do a lot of rendering in the modules
admin.js and settings.js.
We make get_all_display_settings() a function
to avoid a require-time dependency on page_params.
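The difference is just require time vs. call time; roughly this
(the builder function here is hypothetical):

    // A require-time dependency would look like this and force
    // page_params to exist before the module can even be required:
    //     exports.all_display_settings = build_display_settings(page_params);

    // Wrapping it in a function defers the page_params read until
    // a caller actually asks for the data:
    exports.get_all_display_settings = function () {
        return build_display_settings(page_params);
    };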
Breaking the dependencies simplifies a few
node tests.
Most of the node test complexity came from the
following commit in March 2019:
5a130097bf
The commit itself seems harmless enough, but
dependencies can have a somewhat "viral" nature,
where making stream_data depend on settings_display
caused us to modify four different node tests.
This refactoring is the first step toward sharing
our markdown code with mobile. This focuses on
the Zulip layer, not the underlying third party `marked`
library.
In this commit we do a one-time initialization to
wire up the markdown functions, but after further
discussions with Greg, it might make more sense
to just pass in helpers on every use of markdown
(which is generally only once per sent message).
I'll address that in follow-up commits.
Even though it looks like a pretty invasive change,
you will note that we barely needed to modify the
node tests to make this pass. And we have pretty
decent test coverage here.
All of the places where we used to depend on
other Zulip modules now use helper functions that
any client (e.g. mobile) can configure themselves.
Of course, in the webapp, we configure these from
modules like people/stream_data/hash_util/etc.
Even in places where markdown used to deal directly with
data structures from other modules, we now use functions.
We may revisit this in a future commit, and we might
just pass data directly for certain things.
I decided to keep the helpers data structure completely flat,
so we don't have ugly nested names like
`helpers.emoji.get_emoji_codepoint`. Because of this,
some of the names aren't 1:1, which I think is fine.
For example, we map `user_groups.is_member_of` to
`is_member_of_user_group`.
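As a rough sketch of the wiring (exact call signatures in the
webapp may differ a bit):

    // One-time initialization from the webapp side:
    markdown.initialize({
        // flat namespace, so no helpers.emoji.* nesting
        get_emoji_name: emoji.get_emoji_name,
        get_emoji_codepoint: emoji.get_emoji_codepoint,
        get_realm_emoji_url: emoji.get_realm_emoji_url,
        is_member_of_user_group: user_groups.is_member_of,
        my_user_id: people.my_user_id,
    });

    // Inside markdown.js, the parser only ever talks to helpers:
    if (helpers.is_member_of_user_group(group_id, helpers.my_user_id())) {
        // ...
    }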
It's likely that mobile already has different names
for their versions of these functions, so trying for
fake consistency would only help the webapp. In some
cases, I think the webapp functions have names that
could be improved, but we can clean that up in future
commits, and since the names aren't coupled to markdown
itself (i.e. only the config), we will be less
constrained.
It's worth noting that `marked` has an `options`
data structure that it uses for configuration, but
I didn't piggyback onto it, since the `marked`
options are more at the lexing/parsing layer vs.
the app-data layer stuff that our helpers mostly
help with.
Hopefully it's obvious why I just put helpers in
the top-level namespace for the module rather than
passing it around through multiple layers of the
parser.
There were a couple places in markdown where we
were doing awkward `hasOwnProperty` checks for
emoji-related stuff. Now we use the Python
principle of ask-forgiveness-not-permission and
just handle the getters returning falsy data. (It
should be `undefined`, but any falsy value is
unworkable in the places I changed, so I use
the simpler, less brittle form.)
We also break our direct dependency on
`emoji_codes.json` (with some help from the
prior commit).
In one place I rename streamName to stream_name,
fixing up an ancient naming violation that goes
way back to before this code was even extracted
away from echo.js. I didn't bother to split this
out into a separate commit, since 2 of the 4
lines would be immediately re-modified in the
subsequent commit.
Note that we still depend on `fenced_code`
via the global namespace, instead of simply
requiring it directly or injecting it. The
reason I'm postponing any action there is that
we'll have to change things once we move
markdown into a shared library. (The most
likely outcome is that we'll rename/move both files
at the same time and fix the namespace/require
details as part of that commit.)
Also the markdown code still relies on `_` being
available in the global namespace. We aren't
quite ready to share code with mobile yet, but the
underscore dependency should not be problematic,
since mobile already uses underscore to use the
webapp's shared typing_status module.
This mostly moves logic into people.js.
The people functions added here are glorified
two-liners.
One thing that changes here is that we
are a bit more rigorous about duplicate
names.
The code is slightly awkward, because this
commit preserves the strange behavior
that if 'alice|42' doesn't match on
the user with the name "alice" and user_id
"42", we instead look for a user whose
name is "alice|42". That seems like a
misfeature to me, but there's a test for
it, so I want to check with Tim that it's not
intentional behavior before I simplify
the code.
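In pseudocode, the preserved behavior is roughly this (the
function names below are illustrative, not the actual people.js
API):

    function get_person_from_typeahead(value) {
        // e.g. value === 'alice|42'
        const match = /^(.*)\|(\d+)$/.exec(value);
        if (match) {
            const person = get_by_user_id(parseInt(match[2], 10));
            if (person && person.full_name === match[1]) {
                return person;
            }
        }
        // The strange fallback: treat the whole string, pipe and
        // all, as a full name, so we'd match a user literally
        // named 'alice|42'.
        return get_by_full_name(value);
    }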
We add this API to emoji.js, so that markdown
doesn't need to look at internal data structures
(or even need to understand any kind of record
format for results).
Here are the functions:
get_realm_emoji_url()
get_emoji_name()
get_emoji_codepoint()
We use the API now in markdown, which eliminates
the need for the markdown parser to require
the emoji JSON file.
Each function has a simple docstring:
get_emoji_name('1f384') === 'holiday_tree'
get_emoji_codepoint('avocado') === '1f951'
get_realm_emoji_url('shrug') === '/user_avatars/2/emoji/images/31.png'
Also we have simple test coverage for the API
(including tests that verify the docstrings).
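The docstring checks boil down to assertions like these (a
sketch, not the literal test file):

    assert.equal(emoji.get_emoji_name('1f384'), 'holiday_tree');
    assert.equal(emoji.get_emoji_codepoint('avocado'), '1f951');
    assert.equal(
        emoji.get_realm_emoji_url('shrug'),
        '/user_avatars/2/emoji/images/31.png'
    );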
This name was misleading, because we weren't
actually setting realm_filters (that's what
`page_params.realm_filters = realm_filters`
is for); we were instead updating our
realm filter rules.
This allows us to collect coverage for Handlebars templates, and also
improves the readability of Handlebars-related stack traces.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
These functions were just shims that were
used in the somewhat painful migration from
subject_* to topic_*.
The commit 4572be8c27
fixed it so that the client never needs to
deal with "subject_links".
So now we just go back to simpler code:
message.topic_links = links
links = message.topic_links
I am not quite ready to declare victory on
the subject/topic migration, but we are super
close. In this commit I bump a blueslip
warning to a blueslip error, so that we'll
be notified of any codepath that is still
using the janky fall-back-to-subject defensive
code here.
If we go a couple days without any errors, then
we can remove the blueslip warning and the
defensive code immediately and then inline
the callers at our leisure. I wouldn't be
wildly against keeping these wrappers in some
parts of the code, but that debate is out of
the scope of this immediate fix, and I haven't
thought hard about it yet.
We could basically sweep set_message_topic() now,
if we wanted to, since it's truly just a one-liner.
(At one point it was encapsulating something
like `message.subject = foo`).
This required a tiny change to compose_fade
test setup.
Most of this logic is specific to markdown
message processing, so we move the code to
markdown.js.
The only responsibility that we leave with
`emoji.js` is to provide us with a list
of translations (regex and replacement text).
But now `markdown.js` actually (directly) executes
those translations against Zulip messages
as part of its preprocessing.
This should simplify the upcoming mobile conversion.
Instead of mobile needing to duplicate this fairly
complex function, they will just need to pass
us in a list similar to `emoji_translations` inside
of `emoji.js`. That code has a comment that shows
what the data structure looks like.
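The shape of that list is roughly this (contents abbreviated; see
the comment in emoji.js for the real thing):

    const emoji_translations = [
        {regex: /(:\))/g, replacement_text: ':smile:'},
        {regex: /(<3)/g, replacement_text: ':heart:'},
    ];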
I am 99% sure we can rely on trimRight() and
trim() being available in all browsers that
we support. I verified in FF.
This removes the util dependency from both
modules touched here.
We now treat util like a leaf module and
use "require" to import it everywhere it's used.
An earlier version of this commit moved
util into our "shared" library, but we
decided to wait on that. Once we're ready
to do that, we should only need to do a
simple search/replace on various
require/zrequire statements plus a small
tweak to one of the custom linter checks.
It turns out we don't really need util.js
for our most immediate code-sharing goal,
which is to reuse our markdown code on
mobile. There's a little bit of cleanup
still remaining to break the dependency,
but it's minor.
The util module still calls the global
blueslip module in one place, but that
code is about to be removed in the next
few commits.
I am pretty confident that once we start
sharing things like the typeahead code
more aggressively, we'll start having
dependencies on util. The module is barely
more than 300 lines long, so we'll probably
just move the whole thing into shared
rather than break it apart. Also, we
can continue to nibble away at the
cruftier parts of the module.
We used to have a block of code doing this just in the presence
endpoint because that's where we'd had error-handling problems with it
not being present, but it seems more correct for it to run
unconditionally on all HTTP requests.
This requires adding a dependency of channel on reload_state, which we
record in the webpack configuration for now.
webpack optimizes JSON modules using JSON.parse("{…}"), which is
faster than the normal JavaScript parser.
Update the backend to use emoji_codes.json too instead of the three
separate JSON files.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
I believe we can remove these and rely on
other parts of our testing/code-review
to ensure template quality.
These tests never really exercised our
app code, as evidenced by us not regressing
any of the 100%-line-coverage files.
We have a couple other ways that we verify
the correct format of the templates:
- webpack (can they compile?)
- check-templates (are they nicely indented?)
For deep testing, we have Casper, which
exercises most of our most important templates
in some meaningful way.
I think it's pretty rare that we get bugs
now that are directly caused by bad templates,
and an even smaller subset of them would
have been caught by the node tests.
If that trend changes in the future, I would prefer to
just do something "greenfield" to address
any common problems rather than resurrect
this code, but we could always resurrect it
from git.
The template node tests did check a little bit of
detail about which fields are there, but not
in an integrated way, so that aspect of the tests
wasn't very useful either.
This effectively reverts the following
commit from May 2019:
be527905ca
The implementation of closest() was a bit
buggy and complex. It's easy enough
to just stub the method yourself. We may
want to eventually re-implement it, but we
should follow the template of parent/set_parent.
If you fail to stub `closest` zjquery gives
a fairly helpful error message:
Error: You must create a stub for $("link-stub").closest
We stub out jquery elements rather than giving
the illusion of having real DOM.
Also, we make it so that the message_store
interaction has an assertion attached to it.
In the next commit we're going to change what the
server sends for the following:
- page_params
- server responses to /json/users/me/presence
We will **not** yet be changing the format of the data
that we get in events when users update their presence.
It's also just a bit in flux what our final formats
will be for various presence payloads, and different
optimizations may lead us to use different data
structures in different payloads.
So for now we decouple these two things (sketched in code below):
raw_info: this is intended to represent a
snapshot of the latest data from the
server, including some data like
timestamps that are only used
in downstream calculations and not
user-facing
exports.presence_info: this is calculated
info for modules like buddy_data that
just need to know active vs. idle and
last_active_date
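Concretely, the two structures look something like this (field
names are approximate):

    // raw_info: latest server snapshot, keyed by user_id
    raw_info.set(101, {
        active_timestamp: 1580000000,
        idle_timestamp: 1579990000,
        server_timestamp: 1580000030,
    });

    // presence_info: calculated values consumed by buddy_data
    exports.presence_info.set(101, {
        status: 'active',  // or 'idle'
        last_active_date: new Date(1580000000 * 1000),
    });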
Another change that happens here is we rename
set_info_for_user to update_info_for_event,
which just makes it clear that the function
expects data in the "event" format (as opposed
to the format for page_params or server
responses).
As of now keeping the intermediate raw_info data
around feels slightly awkward, because we just
immediately calculate presence_info for any kind
of update. This may be sorta surprising if you
just skim the code and see the various timeout
constants. You would think we might be automatically
expiring "active" statuses in the client due to
the simple passage of time, but in fact the precise
places we do this are all triggered by new data
from the server and we re-calculate statuses
immediately.
(There are indirect ways that clients
have timing logic, since they ask the
server for new data at various intervals, but a
smarter client could simply expire users on its
own, or at least with a more efficient transfer
of info between it and the server. One of
the things that complicate client-side logic
is that server and client clocks may be out
of sync. Also, it's not inherently super expensive
to get updates from the server.)
The important details for the test setup here
are just the number of users who are active.
We don't need to simulate the currently awkward
way of populating this data.
The _.each calls with an inline function expression have already been
converted to for…of loops. We could do that here, but using .forEach
when we’re just reusing an existing function seems like a good
guideline.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
predicate is expected to return a function, not a boolean. The
boolean true was causing _.filter to match items with a property named
"true", which is definitely not what was intended. Matching no items
is probably also not intended, but matching every item causes the test
to fail.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
We just get the stream_name from the sub struct now.
This mostly affects node tests.
The only place in real code where we called add_sub()
was when we initialized data from the server.
We now require all of our unit tests to handle
blueslip errors for warn/error/fatal. This
simplifies the zblueslip code to not have any
options passed in.
Most of the places changed here fell into two
categories:
- We were just missing a random piece of
setup data in a happy path test.
- We were testing error handling in just
a lazy way to ensure 100% coverage. Often
these error codepaths were fairly
contrived.
The one place where we were especially lazy was
the stream_data tests, and those are now
more thorough.
This saves a tiny bit of bandwidth, but more
importantly, it protects us against races for
stream name changes. There's some argument that
if the user is thinking they're sending to
old_stream_name, and unbeknownst to them, the
stream has changed to new_stream_name, then we
should fail. But I think 99% of the time the
user just wants the message to go to that stream
despite any renames.
In order to verify the blueslip error, we
had to turn on error checking, which required
a tiny fix to a place where we left out
a stream_id for add_sub.
We avoid complicated code to update unread counts
by just using vdom.js.
One small change here is that if you click on "more
topics", we replace it with the spinner instead
of putting the spinner after it. This saves us
a redraw under the new scheme.
Babel strict generates more code for [...x] than you’d like, while
Babel loose mode assumes x is an array.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
We had a plan at some point to use this to display a phone icon or
something for users who would receive push notifications if you
messaged them. It's not clear that feature was a good idea in any
case, but it certainly shouldn't be synced as presence data; it would
change >100x less often than the rest of presence and so should likely
be synced differently, maybe as a property on user. So it's best to
delete this prototype.
The “Smileys & People” category has been split into “Smilys & Emotion”
and “People & Body”.
Also, fix generate_sha1sum_emoji to read the emoji-datasource-google
version from yarn.lock, since package.json only gives a version range.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
When quoting a message with fenced code blocks without a language,
we used to have ambiguity in which '```' fence terminates the quote.
This commit adds explicitly non-interfering fences, which fixes the
above issue as well as makes the raw message easier to quickly read.
Fixes #12446.
This commit adds a new `stream_post_policy` setting, replacing the
`is_announcement_only` field on the Stream model and mirroring the
structure of the existing `create_stream_policy`.
It includes the necessary schema and database migrations to migrate
the is_announcement_only boolean field to stream_post_policy, a
small positive integer field similar to many other settings.
This change is done to allow organization administrators to restrict
new members from creating and posting to a stream. However, this does
not affect admins who are new members.
With many tweaks by tabbott to documentation under /help, etc.
Fixes #13616.
This flag affects page_params and the
payload you get back from POSTs to this
url:
users/me/presence
The flag does not yet affect the
presence events that get sent to a
client.
This is defensive code for the scenario that we
have a user_id in presence but not people. This is
unlikely to occur by the time that we actually render
the buddy list, which is the context for this code.
We have previously been reporting an error here via
the people code, but we add an additional warning.
Also, we filter the user_id from the result.
This reverts commit d84646f091 (which
incorrectly assumed in unread_topic_counter that the messages were
present in the message store), while fixing the type confusion problem
by using IntDict for stream_id keys.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
stream_count and topic_count in the actual code have been IntDict
since commit 9ba1829243 (#13569).
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Fixes type confusion in unread_topic_counter, which uses stream IDs as
keys.
Since unread_topic_counter calls message_store.get now, update the
mocks so that message_store.get knows about our mocked messages.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Previously the sender was not included in display_recipient when
a private message was locally echoed. This broke the copy conversation
link functionality if the user tried to copy the link immediately
after sending the message. This issue is present only during local
echo.
This was fixed by including the sender in display_recipient during
local echo.
Fixes #13547.
Edited the warning to clearly state that most members / most stream
members will be notified when using wildcard mentions, along with
the specific mention used (e.g. @all, @everyone and @stream).
Added a separate check for all wildcard mentions in util.js and
stored the corresponding mention in wildcard_mention inside
compose.js.
Fixes: #13636
For a few settings, like `waiting_period_threshold`, it makes sense
for the "value" attribute of an option to be something other than the
actual setting value, because multiple settings depend on the same
dropdown, so handling them in JS code makes more sense. But for many
settings (which have integer values), we have followed a wrong trend
over time of representing every new dropdown with human-readable
values and manually handling them in JS code, where it would make
more sense to use the actual setting value. As a result, the code has
become less concise, harder to follow, and more error-prone.
We now use vdom-ish techniques to track the
list items for the pm list. When we go to update
the list, we only re-render nodes whose data
has changed, with two exceptions:
- Obviously, the first time we do a full render.
- If the keys for the items have changed (i.e.
a new node has come in or the order has changed),
we just re-render the whole list.
If the keys are the same since the last re-render, we
only re-render individual items if their data has
changed.
Most of the new code is in these two modules:
- pm_list_dom.js
- vdom.js
We remove all of the code in pm_list.js that is
related to updating DOM with unread counts.
For presence updates, we are now *never*
re-rendering the whole list, since presence
updates only change individual line items and
don't affect the keys. Instead, we just update
any changed elements in place.
The main thing that makes this all work is the
`update` method in `vdom`, which is totally generic
and essentially does a few simple jobs (sketched below):
- detect if keys are different
- just render the whole ul as needed
- for items that change, do the appropriate
jQuery to update the item in place
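In sketch form, update does something like this (simplified; the
real vdom.js handles more details):

    exports.update = function (replace_content, find, new_dom, old_dom) {
        const new_keys = new_dom.children.map((child) => child.key);
        const old_keys = old_dom.children.map((child) => child.key);

        if (!_.isEqual(new_keys, old_keys)) {
            // New or re-ordered keys: just re-render the whole ul.
            replace_content(exports.render_tag(new_dom));
            return;
        }

        // Same keys: patch only the children whose data changed.
        new_dom.children.forEach((new_child, i) => {
            const old_child = old_dom.children[i];
            if (!new_child.eq(old_child)) {
                find().children().eq(i).replaceWith(new_child.render());
            }
        });
    };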
Note that this code seems to play nice with simplebar.
Also, this code continues to use templates to render
the individual list items.
FWIW this code isn't radically different than list_render,
but it's got some key differences:
- There are fewer bells and whistles in this code.
Some of the stuff that list_render does is overkill
for the PM list.
- This code detects data changes.
Note that the vdom scheme is agnostic about templates;
it simply requires the child nodes to provide a render
method. (This is similar to list_render, which is also
technically agnostic about rendering, but which also
does use templates in most cases.)
These fixes are somewhat related to #13605, but we
haven't gotten a solid repro on that issue, and
the scrolling issues there may be orthogonal to the
redraws. But having fewer moving parts here should
help, and we won't get the rug pulled out from under
us on every presence update.
There are two possible extensions to this that are
somewhat overlapping in nature, but can be done
one at a time.
* We can do a deeper vdom approach here that
gets us away from templates, and just have
nodes write to an AST. I have this on another
branch, but it might be overkill.
* We can avoid some redraws by detecting where
keys are moving up and down. I'm not completely
sure we need it for the PM list.
If this gets merged, we may want to try similar
things for the stream list, which also does a fairly
complicated mixture of big-hammer re-renders and
surgical updates-in-place (with custom code).
BTW we have 100% line coverage for vdom.js.
Now that the anchor parameter supports being passed as a string,
this is a much more natural way to implement
use_first_unread_anchor.
We still support the old interface to avoid breaking compatibility
with legacy versions of the mobile apps.
This makes the code more readable, by just passing the anchor through
without changing its field name back and forth.
There's no reason for this parameter to involve parsing an integer --
it should be a number in all incoming code paths.
The feature is used for editing stream descriptions as well, and in
any case, what's important is that it's a content-editable widget (aka
a form of input box).
We now give "slight smile" precedence over
"small airplane" if you type "sm".
More generally, we favor popularity over prefix
matches for emoji matches, as long as the popular
emoji matches on any of its pieces.
This extracts get_emoji_matcher and all the
functions it depended on, most of which were
in composebox_typeahead.js.
We also move remove_diacritics out of the people
module.
This is the first major step for #13728.
We used to put the user's email in a value, which was
redundant (we could find the value from
our parent's label) and brittle (would break
on email changes).
Now the DOM's a bit slimmer and more robust.
Also note that we now deal with user_ids, not emails,
in the call stack until we hit the "edge" and convert
to emails for the server.
We now only go to the server if both of these
conditions are true:
- our message data seems incomplete for
the stream
- we haven't already fetched history
This function will make more sense when we start
tracking api calls that retrieve topic history.
The unit tests here are kinda duplicating what we
have in the stream_data tests. If we move the
function out of stream_data, we can kill off the
tests there, but for now I think a bit of duplicate
testing is fine here.
This is mostly for tactical reasons. It's hard to
get 100% test coverage on topic_list.js, but it
should be easy to get 100% test coverage on this
very important function.
I considered just moving this code into topic_data.js,
but it just didn't feel quite right. I feel like
this is a pretty core piece of code that's nice
to be by itself and not be near other complicated
code that does stuff like build widgets or talk
to servers. (And, again, it's not just the actual
code here, which is pretty small, it's the unit
tests, which are inherently verbose to exercise
all the edge cases.)
This test mostly tests how we glue everything
together, but I want to change that in an upcoming
commit.
Also, the data stuff that it tests is now better
covered by the recent tests I added.
The only place we ever set active-sub-filter is
right after we build the template, so there is
no reason to have it be a separate step.
(I made a similar fix to pm_list recently, and
this helps set the stage for doing vdom-like
stuff.)
We may revisit this in the future, but similar to is:private, the
current Zulip user experience makes users expect that in the
is:mentioned view, they should really be able to mark messages as
read.
Further, the practical use case for not marking them as read is
narrow, since it's rare for someone to have so many mentions that
revisiting the mentions view isn't sufficient to see everything that
needs their attention.
Previously, is_exactly() had already been replaced with can_bucket_by().
This commit removes is_exactly() and replaces its usage in our tests
with can_bucket_by().
This new Casper testing function improperly used
`casper.then` in a nested fashion rather than in series, which doesn't
work how one expects. This likely caused the test flakes we've
started seeing with this code path since adding
submit_notifications_stream_settings (though it's hard to prove).
We now incorporate people.get_message_people() in our
logic for compose/PM typeaheads. This not only gives
users better results in some cases, but it will also
improve performance for large realms in some cases.
A recent commit removed test coverage for the
actual filtering/sorting of mention typeaheads
when you did a non-silent method. This commit
now tests that important step again.
Note that we also had (and still have) tests
that make sure the is_silent flag is set
correctly by get_candidates.
We don't have a true full-stack test, but those
can be quite tricky to set up and maintain.
This is relatively unobtrusive, and we don't send
anything to the server.
But any user can now enter blueslip.timings in the
console to see a map of how long things take in
milliseconds. We only record one timing per
event label (i.e. the most recent).
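The bookkeeping is basically this (close to, but not necessarily
identical to, the real code):

    exports.timings = new Map();

    exports.measure_time = function (label, f) {
        const t1 = performance.now();
        const result = f();
        const t2 = performance.now();
        // Only the most recent timing per label is kept.
        exports.timings.set(label, t2 - t1);
        return result;
    };

Callers then wrap chunks of work, e.g.
blueslip.measure_time('initialize_everything', () => { ... }).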
It's pretty easy to test this by just clicking
around. For 300 users/streams most things are
fast except for:
- initialize_everything
- manage streams (render_subscriptions)
Both do lots of nontrivial work, although
"manage streams" is a bit surprising, since
we're only measuring how long to build the
HTML from the templates (whereas the real
time is probably browser rendering costs).
This change sets us up to optimize how we
filter users in the admin user settings.
See #13554 for more context on the user
facing issues.
This fix is basically three related things:
- Add filterer options to list_render.
- Add helper method to people.js.
- Use filterer in settings_users.js.
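The filterer option is basically a whole-list hook, where the
existing predicate stays a per-item check. A sketch of the dispatch
(names other than filterer/predicate are assumptions):

    function filter_list_data(opts, all_items, query) {
        if (opts.filter.filterer) {
            // The caller filters the whole list themselves.
            return opts.filter.filterer(all_items, query);
        }
        // Default: keep items that match the per-item predicate.
        return all_items.filter((item) => opts.filter.predicate(item, query));
    }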
The filter "callback" was only a "callback" in the
most general sense of the word.
It's just a filter predicate that returns a bool.
This is to prepare for another filtering option,
where the caller can filter the whole list
themselves. I haven't figured out what I will name
the new option yet, but I know I want to make the
two options have specific names.
We are already providing callbacks everywhere, so
it would be nice to eliminate some dead code.
This also speeds things up ever so slightly (no
longer type-checking the option every time through
the loop).
We also split out exports.filter to make unit testing
easier. The function seems kinda silly now, being so
small, but I hope to add another filtering option soon.
Zulip has had a small use of WebSockets (specifically, for the code
path of sending messages, via the webapp only) since ~2013. We
originally added this use of WebSockets in the hope that the latency
benefits of doing so would allow us to avoid implementing a markdown
local echo; they were not sufficient. Further, HTTP/2 may have eliminated the
latency difference we hoped to exploit by using WebSockets in any
case.
While we’d originally imagined using WebSockets for other endpoints,
there was never a good justification for moving more components to the
WebSockets system.
This WebSockets code path had a lot of downsides/complexity,
including:
* The messy hack involving constructing an emulated request object to
hook into doing Django requests.
* The `message_senders` queue processor system, which increases RAM
needs and must be provisioned independently from the rest of the
server.
* A duplicate check_send_receive_time Nagios test specific to
WebSockets.
* The requirement for users to have their firewalls/NATs allow
WebSocket connections, and a setting to disable them for networks
where WebSockets don’t work.
* Dependencies on the SockJS family of libraries, which has at times
been poorly maintained, and periodically throws random JavaScript
exceptions in our production environments without a deep enough
traceback to effectively investigate.
* A total of about 1600 lines of our code related to the feature.
* Increased load on the Tornado system, especially around a Zulip
server restart, and especially for large installations like
zulipchat.com, resulting in extra delay before messages can be sent
again.
As detailed in
https://github.com/zulip/zulip/pull/12862#issuecomment-536152397, it
appears that removing WebSockets moderately increases the time it
takes for the `send_message` API query to return from the server, but
does not significantly change the time between when a message is sent
and when it is received by clients. We don’t understand the reason
for that change (suggesting the possibility of a measurement error),
and even if it is a real change, we consider that potential small
latency regression to be acceptable.
If we later want WebSockets, we’ll likely want to just use Django
Channels.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Currently, if we change the stream, it is saved immediately, but it
is more convenient to have "Save" and "Discard" buttons, as we do
everywhere else in the organization settings subsystem.
As part of making the notification stream settings change via the
"save/discard" widget instead of saving immediately, we need access
to the stream id that is currently selected.
For "New stream notifications" and "New user notifications" it is more
intuitive to just use the new system for showing success/saving status
feedback.
Extracting the function makes it a bit easier to
test and use in a generic way.
Also, I wanted this to live in stream_data, so that
it's easier to find if we change how we model
subscriber data.
Finally, I use _.every to do the subset check
instead of `_.difference`, since _.difference
is actually N-squared:
    _.difference = restArguments(function(array, rest) {
        rest = flatten(rest, true, true);
        return _.filter(array, function(value){
            return !_.contains(rest, value);
        });
    });
And we don't actually want to build a list only
to check that it's zero vs. nonzero length.
We now do this, which short circuits as soon
as it finds any key that is only in sub1:
    return _.every(sub1.subscribers.keys(), (key) => {
        return sub2_set.has(key);
    });
First, there are no more convoluted signals.
We also simplify the parameter to just the "mentioned"
object corresponding to either a user or a broadcast
mention.
For the user group scenario, this has always been dead
code, which you only realized when you got to the comment
at the bottom. Now we actually do nothing.
And I moved the relevant comment to the
typeahead code (with new wording).
I also moved the is_silent check to the caller. I don't
feel too strongly about that either way. It's kind of silly
to call a function only to give that function an additional
responsibility to worry about. On the other hand, I see
the logic of that function enforcing everything. I went
with the former for now.
Arguably we should have a warning for silent mentions,
since doing a silent mention of somebody not on a stream
is a good indication of a typo. I do understand the use
case, but the user can always ignore the warning. Anyway,
we have decent test coverage on this.
This isn't really an extraction; it's more giving
a name to an anonymous function and moving it to
higher module scope.
We convert this to an ordinary function call, which
allows us to move it out of initialize().
Since there's just one simple parameter now (linked_stream),
we can avoid some error checking.
We also avoid the comment that describes the function,
since it now has a name.
And then one minor tweak is to do the inexpensive
`invite_only` check higher in the function. This will be
a nice speedup when you link to really large public
streams.
The unit tests are also a bit easier to read now--less
setup and more explicit names.
Previously, if you tried to invite a user whose account had been
deactivated, we didn't provide a clear path forward for reactivating
the users, which was confusing.
We fix this by plumbing through to the frontend the information that
there is an existing user account with that email address in this
organization, but that it's deactivated. For administrators, we
provide a link for how to reactivate the user.
Fixes #8144.
The sort_recipients helper is used for many different
typeaheads, such as compose PMs, compose mentions,
and some settings-related code.
We now avoid unnecessary sorting steps in cases
where we have plenty of results in the top buckets
(such as users who match on prefix).
This change should not have any user-facing
implications.
This method is a bit complex, but I think it's
worthwhile to force PM autocompletes and mention
autocompletes through the same code path.
We also kill off this method:
typeahead_helper.sort_people_and_user_groups
For historical reasons pm_list was handling just
one possible edge case where is:private was
combined with other search terms, namely the
pm-with operator.
The code was correct in realizing the is:private
was redundant there, but now we handle that
upstream in Filter.fix_operators (see previous
commit).
Now we just look for any is:private term.
This change makes these two functions more alike:
- get_search_result
- get_search_result_legacy
To test the UI modify zerver/views/home.py by
replacing `settings.SEARCH_PILLS_ENABLED` with
`True`. I only did a quick sanity check, since
any bugs with the new system are more likely due
to bitrot than any changes I have made here.
The history is this:
Tim cloned the code (before the smaller
helpers were extracted):
db4f6e278f
In 8b153f6452
Shubham removed get_operator_subset_suggestions but
accidentally left a `concat` statement in that got
misapplied to the previous suggestions:
- suggestions = get_operator_subset_suggestions(operators);
result = result.concat(suggestions);
The error there was carried over in some recent changes,
but this commit fixes that strangeness.
In 73e4f3b3fa
Shubham made this change, which makes sense only for
pills, and this code remains intact.
- if (operators.length > 0) {
- last = operators.slice(-1)[0];
+ if (query_operators.length > 0) {
+ last = query_operators.slice(-1)[0];
+ } else {
+ // If query_operators = [] then last will remain
+ // {operator: '', operand: '', negated: false}; from above.
+ // `last` has not yet been added to operators/query_operators.
+ // The code below adds last to operators/query_operators
+ operators.push(last);
+ query_operators.push(last);
}
Mohit made a couple changes to both old and new.
Anders made a couple non-substantive changes related to
the ES6 migration.
Steve (me) made several structural changes to the code. For
some of them I only changed the legacy code, not the pills
code. I didn't fix Shubham's mistake until this change.
Now the two functions should look similar except in the places
where they are intentionally different. I also added a comment
explaining the get_operator_subset_suggestions difference.
Fixes #13609.
I want to be able to easily test this without
having to simulate all the jQuery side effects.
This simply preserves the old logic, which seems
to handle one edge case without handling every
possible edge case. The edge cases aren't super
important here, though, since the only thing it affects
is bolding "Private Messages", and when to do that
is somewhat up to personal tastes.
Having said that, we could definitely improve
this code and possibly should move some of this
logic to either narrow_state.js or filter.js.
Instead of doing various ad-hoc calculations of
which PM is "active" and plumbing it through various
functions and then updating it via jQuery instead of
just the template, we now just calculate `is_active`
in `_build_private_messages_list` with a little
helper function.
This test mostly tests logic that I'm about
to remove in subsequent commits, and it's a bit
messy.
This commit removes 100% line coverage, but I
will restore that a few commits later.
In 3cfc3ca24b I removed
the feature that limited PM conversations to five or
less (including the active conversation), but I
didn't clean up this parameter. I think lint was
confused by the fact that we did mutate it.
I am wondering if this started out as an experiment
and was never fully polished before the push? Or
maybe I was just careless. Anyway, I don't
think there were any symptoms here--it was just dead code
that we didn't need.
This fixes a rebase issue between the introduction of int_dict and
its use in people.js, and the introduction of filter_values in
dict.js and its use inside people.js.
Note that we haven't fully swept this for Dict,
since some dicts are keyed by strings. For
example PM counts can have a huddle like
"101,102,103" as a key.
This should be slightly more performant, and we
often call this function N times, such as when
rendering the buddy list.
There's a minor change to pm_list to avoid
an unnecessary computation on huddles that would
otherwise trigger a blueslip warning for the
huddles case.
When we are pulling data from message.display_recipient
for private messages, the user_id field is always
called 'id', not 'user_id', so we can simplify
some defensive code.
This required lots of manual testing:
- search/navigate user presence
- send PM and mention user
- pay attention to compose fade
- send stream msg and mention user
- open Private Messages in top-left and click
- test unread counts
- invite user who already has account
- search for users in search bar
- check user settings
- User Groups
- Users
- Deactivated Users
- Bots
- create a bot
- mention user groups
- send group PM then click on lower right
- view/edit/create streams
If there are still pieces of code that don't convert
ids to ints, the code should still work but report
blueslip errors.
I try to mostly convert user_ids to ints in the callers,
since often the callers are dealing with small amounts
of data, like user ids from huddles.
We only ever show 3 or 4 people in search suggestions
(possibly w/a couple variations, like pm-with/sender/etc.),
so we can try to search a smaller subset of people
before going through the entire realm.
We use message_store.user_ids() for this, since you
typically want to search messages for people that
have sent messages recently, and we already sort
based on PM conversations.
This should avoid some memory allocations.
We also use build_person_matcher to avoid
repeating the same logic over and over
again to process the query into termlets.
We also remove people.get_all_persons() and
people.person_matches_query().
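The build_person_matcher pattern is roughly this (a sketch; the
real matching logic is more thorough):

    exports.build_person_matcher = function (query) {
        // Process the query into termlets once...
        const termlets = query.trim().toLowerCase().split(/\s+/);
        // ...and return a closure that is cheap to run per person.
        return function (person) {
            const names = person.full_name.toLowerCase().split(/\s+/);
            return termlets.every((termlet) =>
                names.some((name) => name.startsWith(termlet)));
        };
    };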
This looks like simple code cleanup, but it's more
than that.
The code cleanup here is that we don't have three
callbacks to get a list of typeaheads for bootstrap.
Instead, we just have one function that does all the
main work.
And then the speedup comes from the fact we no longer
need to remove diacritics from the query for every
time through our loop of seeing if a person matches
the query.
It's a bit subtle to see in the diff, but these are
the relevant lines:
const matcher = exports.get_person_or_user_group_matcher(query);
const filtered_results = _.filter(people_and_groups, matcher);
Before this, bootstrap was doing $.grep, and we'd have
to reinitialize the matcher for every person.
If you profile this before and after, you'll see that
remove_diacritics gets called fewer times.
To profile this, you want to load lots of users into
your DB and try to autocomplete "Extra", as in "Extra1 User".
If you try to autocomplete something else, then my patch
won't really help, and `remove_diacritics` will still
show up as expensive. Because it is that expensive a function.
This may seem silly now, since we are returning a function
that still dispatches over all flavors of search for
every item, but subsequent commits will make it obvious
why I'm doing this.
We want to do our own matching of items, rather than
just giving a callback to bootstrap, which does $.grep
on all the items.
Doing our own matching gives us flexibility for future
improvements like custom data structures for searching
through big amounts of data. Even in the short term
we can speed up searches by pulling expensive operations
outside the grep/filter call.
This architecture has been in place for our search
bar since ~2014.
The benchmark is commented out. It takes only a few
milliseconds to run, so there may be no reason not
to always run it. It doesn't test correctness, so
it would arguably inflate line coverage, but set/get
are obviously covered elsewhere.
We now require the actual tests to explicitly
zrequire Dict, rather than magically adding this.
In one case, the use of Dict was clearly just for
the test (not the app), so I converted that to an ordinary
JS object (see timerender.js).
We have ~5 years of proof that we'll probably never
extend Dict with more options.
Breaking this into two classes makes both a little faster
(no options to check), and we remove some options
in FoldDict that are never used (from/from_array).
A possible next step is to fine-tune the Dict to use
Map internally.
Note that the TypeScript types for FoldDict are now
more specific (requiring string keys). Of course,
this isn't really enforced until we convert other
modules to TS.
We had a potentially nasty bug where we
weren't guaranteeing that all/stream/everyone
collated in consistent ways inside of
`compare_people_for_relevance`, which can
send certain types of sort algorithms into
an infinite loop. I doubt this ever happened
in practice, but it's obviously worth fixing.
Now we also have a clear tiebreaker between
any two all/everyone/stream mentions, which
is the idx field.
Finally, this should be a bit more efficient.
We don't have people named "all". Instead, we
create pseudo person objects with email/full_name
of "all" (along with some other fields). The tests
now reflect this.
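For context, those pseudo person objects look roughly like this
(field names approximate and abbreviated):

    const broadcast_mentions = ['all', 'everyone', 'stream'].map((name, idx) => ({
        email: name,
        full_name: name,
        is_broadcast: true,
        // idx gives any two broadcast mentions a clear, stable
        // tiebreaker in compare_people_for_relevance.
        idx,
    }));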