webpack optimizes JSON modules using JSON.parse("{…}"), which is
faster than the normal JavaScript parser.
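Roughly, a JSON module in the emitted bundle ends up looking something like
this (illustrative only; the exact output depends on the webpack version and
the size of the JSON):

    module.exports = JSON.parse(
        '{"1f600":"grinning","1f604":"smile","2708":"airplane"}'
    );

Parsing one big string with JSON.parse is cheaper than having the JavaScript
engine parse the equivalent object literal.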
Update the backend to use emoji_codes.json too instead of the three
separate JSON files.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
I believe we can remove these and rely on
other parts of our testing/code-review
to ensure template quality.
These tests never really exercised our
app code, as evidenced by us not regressing
any of the 100%-line-coverage files.
We have a couple other ways that we verify
the correct format of the templates:
- webpack (can they compile?)
- check-templates (are they nicely indented?)
For deep testing, we have Casper, which
exercises most of our most important templates
in some meaningful way.
I think it's pretty rare that we get bugs
now that are directly caused by bad templates,
and an even smaller subset of them would
have been caught by the node tests.
If that trend changes in the future, I would prefer to
just do something "greenfield" to address
any common problems rather than resurrect
this code, but we could always resurrect it
from git.
The template node tests did check a little bit of
detail about which fields are there, but not
in an integrated way, so that aspect of the tests
wasn't very useful either.
This effectively reverts the following
commit from May 2019:
be527905ca
The implementation of closest() was a bit
buggy and complex. It's easy enough
to just stub the method yourself. We may
want to eventually re-implement it, but we
should follow the template of parent/set_parent.
If you fail to stub `closest` zjquery gives
a fairly helpful error message:
Error: You must create a stub for $("link-stub").closest
We stub out jquery elements rather than giving
the illusion of having real DOM.
Also, we make it so that the message_store
interaction has an assertion attached to it.
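For example, a node test can stub `closest` on a zjquery element along these
lines (a minimal sketch; the selector and return value here are made up, and
it assumes the usual node-test setup with zjquery's $.create):

    const $link = $.create("link-stub");
    $link.closest = (selector) => {
        assert.equal(selector, ".message_row");   // selector is illustrative
        return $.create("row-stub");
    };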
In the next commit we're going to change what the
server sends for the following:
- page_params
- server responses to /json/users/me/presence
We will **not** yet be changing the format of the data
that we get in events when users update their presence.
It's also just a bit in flux what our final formats
will be for various presence payloads, and different
optimizations may lead us to use different data
structures in different payloads.
So for now we decouple these two things (a rough sketch of the shapes follows the list):
raw_info: this is intended to represent a
snapshot of the latest data from the
server, including some data like
timestamps that are only used
in downstream calculations and not
user-facing
exports.presence_info: this is calculated
info for modules like buddy_data that
just need to know active vs. idle and
last_active_date
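Roughly, the per-user entries look like this (field names and values are
illustrative, not an exact spec):

    const raw_example = {
        active_timestamp: 1574000000,
        idle_timestamp: 1573999000,
    };

    const presence_example = {
        status: "active",                              // "active" | "idle" | "offline"
        last_active_date: new Date(1574000000 * 1000),
    };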
Another change that happens here is we rename
set_info_for_user to update_info_for_event,
which just makes it clear that the function
expects data in the "event" format (as opposed
to the format for page_params or server
responses).
As of now keeping the intermediate raw_info data
around feels slightly awkward, because we just
immediately calculate presence_info for any kind
of update. This may be sorta surprising if you
just skim the code and see the various timeout
constants. You would think we might be automatically
expiring "active" statuses in the client due to
the simple passage of time, but in fact the precise
places we do this are all triggered by new data
from the server and we re-calculate statuses
immediately.
(There are indirect ways that clients
have timing logic, since they ask the
server for new data at various intervals, but a
smarter client could simply expire users on its
own, or at least with a more efficient transfer
of info between it and the server. One of
the things that complicate client-side logic
is that server and client clocks may be out
of sync. Also, it's not inherently super expensive
to get updates from the server.)
The important details for the test setup here
are just the number of users who are active.
We don't need to simulate the currently awkward
way of populating this data.
The _.each calls with an inline function expression have already been
converted to for…of loops. We could do that here, but using .forEach
when we’re just reusing an existing function seems like a good
guideline.
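In other words (illustrative names):

    const rows = ["a", "b"];
    const render_row = (row) => row.toUpperCase();

    // reusing an existing function: .forEach reads fine
    rows.forEach(render_row);

    // an inline function expression: prefer for...of instead
    for (const row of rows) {
        render_row(row);
    }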
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
predicate is expected to return a function, not a boolean. The
boolean true was causing _.filter to match items with a property named
"true", which is definitely not what was intended. Matching no items
is probably also not intended, but matching every item causes the test
to fail.
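A small illustration of the underlying underscore behavior (not the test code
itself):

    const _ = require("underscore");
    const items = [{true: 1}, {name: "x"}];

    // a boolean iteratee falls back to _.property, i.e. item["true"]
    _.filter(items, true);        // => [{true: 1}]

    // a function iteratee is what was actually intended
    _.filter(items, () => true);  // => both items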
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
We just get the stream_name from the sub struct now.
This mostly affects node tests.
The only place in real code where we called add_sub()
was when we initialized data from the server.
We now require all of our unit tests to handle
blueslip errors for warn/error/fatal. This
simplifies the zblueslip code to not have any
options passed in.
Most of the places changed here fell into two
categories:
- We were just missing a random piece of
setup data in a happy path test.
- We were testing error handling in just
a lazy way to ensure 100% coverage. Often
these error codepaths were fairly
contrived.
The one place where we were especially lazy was
the stream_data tests, and those are now
more thorough.
This saves a tiny bit of bandwidth, but more
importantly, it protects us against races for
stream name changes. There's some argument that
if the user is thinking they're sending to
old_stream_name, and unbeknownst to them, the
stream has changed to new_stream_name, then we
should fail. But I think 99% of the time the
user just wants the message to go to that stream
despite any renames.
In order to verify the blueslip error, we
had to turn on error checking, which required
a tiny fix to a place where we left out
a stream_id for add_sub.
We avoid complicated code to update unread counts
by just using vdom.js.
One small change here is that if you click on "more
topics", we replace it with the spinner instead
of putting the spinner after it. This saves us
a redraw under the new scheme.
Babel strict generates more code for [...x] than you’d like, while
Babel loose mode assumes x is an array.
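Illustration (the exact compiled output varies by Babel version):

    const set = new Set([1, 2, 2]);

    // spec-compliant (strict) output spreads any iterable, so this is [1, 2]
    const from_strict = [...set];

    // loose mode compiles [...x] to roughly [].concat(x), which only works
    // when x is already an array; a Set ends up as a single element instead
    const from_loose = [].concat(set);   // => [Set {1, 2}], not [1, 2]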
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
We had a plan at some point to use this to display a phone icon or
something for users who would receive push notifications if you
messaged them. It's not clear that feature was a good idea in any
case, but it certainly shouldn't be synced as presence data; it would
change >100x less often than the rest of presence and so should likely
be synced differently, maybe as a property on user. So it's best to
delete this prototype.
The “Smileys & People” category has been split into “Smileys & Emotion”
and “People & Body”.
Also, fix generate_sha1sum_emoji to read the emoji-datasource-google
version from yarn.lock, since package.json only gives a version range.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
When quoting a message with fenced code blocks without a language,
we used to have ambiguity in which '```' fence terminates the quote.
This commit adds explicitly non-interfering fences, which fixes the
above issue as well as makes the raw message easier to quickly read.
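The idea is roughly the following (a sketch, not the exact compose code):
pick a fence longer than any backtick run in the quoted message, so the inner
fences can never terminate the quote.

    function get_quote_fence(message_content) {
        const runs = [...message_content.matchAll(/`+/g)].map((m) => m[0].length);
        const longest = Math.max(2, ...runs);
        return "`".repeat(longest + 1);   // at least ``` and longer than any inner run
    }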
Fixes #12446.
This commit adds a new `stream_post_policy` setting,
replacing the `is_announcement_only` field on the Stream model,
mirroring the structure of the existing `create_stream_policy`.
It includes the necessary schema and database migrations to migrate
the is_announcement_only boolean field to stream_post_policy,
a positive small integer field similar to many other settings.
This change is done to allow organization administrators to restrict
new members from creating and posting to a stream. However, this does
not affect admins who are new members.
With many tweaks by tabbott to documentation under /help, etc.
Fixes #13616.
This flag affects page_params and the
payload you get back from POSTs to this
url:
users/me/presence
The flag does not yet affect the
presence events that get sent to a
client.
This is defensive code for the scenario that we
have a user_id in presence but not people. This is
unlikely to occur by the time that we actually render
the buddy list, which is the context for this code.
We have previously been reporting an error here via
the people code, but we add an additional warning.
Also, we filter the user_id from the result.
This reverts commit d84646f091 (which
incorrectly assumed in unread_topic_counter that the messages were
present in the message store), while fixing the type confusion problem
by using IntDict for stream_id keys.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
stream_count and topic_count in the actual code have been IntDict
since commit 9ba1829243 (#13569).
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Fixes type confusion in unread_topic_counter, which uses stream IDs as
keys.
Since unread_topic_counter calls message_store.get now, update the
mocks so that message_store.get knows about our mocked messages.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Previously the sender was not included in display_recipient when
a private message was locally echoed. This broke the copy conversation
link functionality if the user tried to copy the link immediately after
sending the message. This issue is present only during local echo.
We fix this by including the sender in display_recipient during
local echo.
Fixes #13547.
Edited the warning to clearly state that most members/most stream members
will be notified when using wildcard mentions, along with the specific
mention used (e.g. @all, @everyone, and @stream).
Added a separate check for all wildcard mentions in util.js and stored the
corresponding mention in wildcard_mention inside compose.js.
Fixes: #13636
For a few settings like `waiting_period_threshold`, it makes sense for the
"value" attribute of an option to be something other than the actual setting
value, because multiple settings depend on this dropdown, so handling them
in JS code makes more sense. But for many settings (which have integer
values), we have followed a wrong trend over time of representing every new
dropdown with human-readable values and manually handling them in JS code,
where it makes more sense to use the actual setting value. As a result, the
code has become less concise, less sensible, and more likely to contain
mistakes.
We now use vdom-ish techniques to track the
list items for the pm list. When we go to update
the list, we only re-render nodes whose data
has changed, with two exceptions:
- Obviously, the first time we do a full render.
- If the keys for the items have changed (i.e.
a new node has come in or the order has changed),
we just re-render the whole list.
If the keys are the same since the last re-render, we
only re-render individual items if their data has
changed.
Most of the new code is in these two modules:
- pm_list_dom.js
- vdom.js
We remove all of the code in pm_list.js that is
related to updating DOM with unread counts.
For presence updates, we are now *never*
re-rendering the whole list, since presence
updates only change individual line items and
don't affect the keys. Instead, we just update
any changed elements in place.
The main thing that makes this all work is the
`update` method in `vdom`, which is totally generic
and essentially does a few simple jobs (a rough sketch follows this list):
- detect if keys are different
- just render the whole ul as needed
- for items that change, do the appropriate
jQuery to update the item in place
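A rough sketch of that update flow (not the exact vdom.js API):

    function update(container, new_nodes, old_nodes) {
        const keys_changed =
            old_nodes === undefined ||
            old_nodes.length !== new_nodes.length ||
            new_nodes.some((node, i) => node.key !== old_nodes[i].key);

        if (keys_changed) {
            // first render, or keys changed: just redraw the whole ul
            container.html(new_nodes.map((node) => node.render()).join(""));
            return;
        }

        // same keys: only touch items whose data actually changed
        new_nodes.forEach((node, i) => {
            if (!node.eq(old_nodes[i])) {
                container.find(`[data-key="${node.key}"]`).replaceWith(node.render());
            }
        });
    }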
Note that this code seems to play nice with simplebar.
Also, this code continues to use templates to render
the individual list items.
FWIW this code isn't radically different than list_render,
but it's got some key differences:
- There are fewer bells and whistles in this code.
Some of the stuff that list_render does is overkill
for the PM list.
- This code detects data changes.
Note that the vdom scheme is agnostic about templates;
it simply requires the child nodes to provide a render
method. (This is similar to list_render, which is also
technically agnostic about rendering, but which also
does use templates in most cases.)
These fixes are somewhat related to #13605, but we
haven't gotten a solid repro on that issue, and
the scrolling issues there may be orthogonal to the
redraws. But having fewer moving parts here should
help, and we won't get the rug pulled out from under
us on every presence update.
There are two possible extensions to this that are
somewhat overlapping in nature, but can be done
one at a time.
* We can do a deeper vdom approach here that
gets us away from templates, and just have
nodes write to an AST. I have this on another
branch, but it might be overkill.
* We can avoid some redraws by detecting where
keys are moving up and down. I'm not completely
sure we need it for the PM list.
If this gets merged, we may want to try similar
things for the stream list, which also does a fairly
complicated mixture of big-hammer re-renders and
surgical updates-in-place (with custom code).
BTW we have 100% line coverage for vdom.js.
Now that anchor supports being passed as a string, this is a much more
natural way to implement use_first_unread_anchor.
We still support the old interface to avoid breaking compatibility
with legacy versions of the mobile apps.
This makes the code more readable, by just passing the anchor through
without changing its field name back and forth.
There's no reason for this parameter to involve parsing an integer --
it should be a number in all incoming code paths.
The feature is used for editing stream descriptions as well, and in
any case, what's important is that it's a content-editable widget (aka
a form of input box).
We now give "slight smile" precedence over
"small airplane" if you type "sm".
More generally, we favor popularity over prefix
matches for emoji matches, as long as the popular
emoji matches on any of its pieces.
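Roughly the ordering we want (a sketch, not the actual triage code):

    function sort_emoji_matches(matches, query, popular_names) {
        const popular = matches.filter((m) => popular_names.has(m.emoji_name));
        const rest = matches.filter((m) => !popular_names.has(m.emoji_name));
        const prefix = rest.filter((m) => m.emoji_name.startsWith(query));
        const other = rest.filter((m) => !m.emoji_name.startsWith(query));
        return [...popular, ...prefix, ...other];
    }

With the query "sm", a popular emoji like slight_smile (already matched on its
"smile" piece) then sorts ahead of the plain prefix match small_airplane.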
This extracts get_emoji_matcher and all the
functions it depended on, most of which were
in composebox_typeahead.js.
We also move remove_diacritics out of the people
module.
This is the first major step for #13728.
We now only go to the server if both of these
conditions are true:
- our message data seems incomplete for
the stream
- we haven't already fetched history
This function will make more sense when we start
tracking api calls that retrieve topic history.
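The two conditions amount to something like this (all names here are
illustrative):

    const fetched_history_stream_ids = new Set();

    function should_get_history_from_server(stream_id, data_looks_complete) {
        // go to the server only if our local message data for the stream
        // seems incomplete and we have not already fetched its history
        return !data_looks_complete && !fetched_history_stream_ids.has(stream_id);
    }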
The unit tests here are kinda duplicating what we
have in the stream_data tests. If we move the
function out of stream_data, we can kill off the
tests there, but for now I think a bit of duplicate
testing is fine here.
This is mostly for tactical reasons. It's hard to
get 100% test coverage on topic_list.js, but it
should be easy to get 100% test coverage on this
very important function.
I considered just moving this code into topic_data.js,
but it just didn't feel quite right. I feel like
this is a pretty core piece of code that's nice
to be by itself and not be near other complicated
code that does stuff like build widgets or talk
to servers. (And, again, it's not just the actual
code here, which is pretty small, it's the unit
tests, which are inherently verbose to exercise
all the edge cases.)
This test mostly tests how we glue everything
together, but I want to change that in an upcoming
commit.
Also, the data stuff that it tests is now better
covered by the recent tests I added.
The only place we ever set active-sub-filter is
right after we build the template, so there is
no reason to have it be a separate step.
(I made a similar fix to pm_list recently, and
this helps set the stage for doing vdom-like
stuff.)
We may revisit this in the future, but similar to is:private, the
current Zulip user experience makes users expect that in the
is:mentioned view, they should really be able to mark messages as
read.
Further, the practical use case for not marking them as read is very
low, since it's rare for someone to have so many mentions that
revisiting the mentions view isn't sufficient to see everything that
needs their attention.
Previously, is_exactly() had already been replaced with can_bucket_by().
This commit removes is_exactly() and replaces its usage in our tests
with can_bucket_by().
We now incorporate people.get_message_people() in our
logic for compose/PM typeaheads. This not only gives
users better results in some cases, but it will also
improve performance for large realms in some cases.
A recent commit removed test coverage for the
actual filtering/sorting of mention typeaheads
when you did a non-silent mention. This commit
now tests that important step again.
Note that we also had (and still have) tests
that make sure the is_silent flag is set
correctly by get_candidates.
We don't have a true full-stack test, but those
can be quite tricky to set up and maintain.
This is relatively unobtrusive, and we don't send
anything to the server.
But any user can now enter blueslip.timings in the
console to see a map of how long things take in
milliseconds. We only record one timing per
event label (i.e. the most recent).
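The recording side is tiny; roughly (the helper name here is hypothetical,
only blueslip.timings itself is described above):

    exports.timings = new Map();

    exports.measure_time = function (label, f) {
        const start = performance.now();
        const result = f();
        // one entry per label, so repeated events keep only the latest timing
        exports.timings.set(label, performance.now() - start);
        return result;
    };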
It's pretty easy to test this by just clicking
around. For 300 users/streams most things are
fast except for:
- initialize_everything
- manage streams (render_subscriptions)
Both do lots of nontrivial work, although
"manage streams" is a bit surprising, since
we're only measuring how long to build the
HTML from the templates (whereas the real
time is probably browser rendering costs).
This change sets us up to optimize how we
filter users in the admin user settings.
See #13554 for more context on the user
facing issues.
This fix is basically three related things:
- Add filterer options to list_render.
- Add helper method to people.js.
- Use filterer in settings_users.js.
The filter "callback" was only a "callback" in the
most general sense of the word.
It's just a filter predicate that returns a bool.
This is to prepare for another filtering option,
where the caller can filter the whole list
themselves. I haven't figured out what I will name
the new option yet, but I know I want to make the
two options have specific names.
We are already providing callbacks everywhere, so
it would be nice to eliminate some dead code.
This also speeds things up ever so slightly (no
longer type-checking the option every time through
the loop).
We also split out exports.filter to make unit testing
easier. The function seems kinda silly now, being so
small, but I hope to add another filtering option soon.
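Roughly, the difference between a per-item filter predicate and a whole-list
filterer looks like this (option names are approximate, not an exact API
reference):

    // predicate: called once per item
    const opts_with_predicate = {
        filter: {
            predicate: (item, query) =>
                item.full_name.toLowerCase().includes(query.toLowerCase()),
        },
    };

    // filterer: called once with the whole list; the caller filters it
    const opts_with_filterer = {
        filter: {
            filterer: (items, query) =>
                items.filter((item) =>
                    item.full_name.toLowerCase().includes(query.toLowerCase())),
        },
    };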
Zulip has had a small use of WebSockets (specifically, for the code
path of sending messages, via the webapp only) since ~2013. We
originally added this use of WebSockets in the hope that the latency
benefits of doing so would allow us to avoid implementing a markdown
local echo; they did not. Further, HTTP/2 may have eliminated the
latency difference we hoped to exploit by using WebSockets in any
case.
While we’d originally imagined using WebSockets for other endpoints,
there was never a good justification for moving more components to the
WebSockets system.
This WebSockets code path had a lot of downsides/complexity,
including:
* The messy hack involving constructing an emulated request object to
hook into doing Django requests.
* The `message_senders` queue processor system, which increases RAM
needs and must be provisioned independently from the rest of the
server.
* A duplicate check_send_receive_time Nagios test specific to
WebSockets.
* The requirement for users to have their firewalls/NATs allow
WebSocket connections, and a setting to disable them for networks
where WebSockets don’t work.
* Dependencies on the SockJS family of libraries, which has at times
been poorly maintained, and periodically throws random JavaScript
exceptions in our production environments without a deep enough
traceback to effectively investigate.
* A total of about 1600 lines of our code related to the feature.
* Increased load on the Tornado system, especially around a Zulip
server restart, and especially for large installations like
zulipchat.com, resulting in extra delay before messages can be sent
again.
As detailed in
https://github.com/zulip/zulip/pull/12862#issuecomment-536152397, it
appears that removing WebSockets moderately increases the time it
takes for the `send_message` API query to return from the server, but
does not significantly change the time between when a message is sent
and when it is received by clients. We don’t understand the reason
for that change (suggesting the possibility of a measurement error),
and even if it is a real change, we consider that potential small
latency regression to be acceptable.
If we later want WebSockets, we’ll likely want to just use Django
Channels.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
As part of making the notification stream settings change using the
"save/discard" widget instead of saving immediately, we need to access the
stream id that is currently selected.
For "New stream notifications" and "New user notifications" it is more
intuitive to just use the new system for showing success/saving status
feedback.
Extracting the function makes it a bit easier to
test and use in a generic way.
Also, I wanted this to live in stream_data, so that
it's easier to find if we change how we model
subscriber data.
Finally, I use _.every to do the subset check
instead of `_.difference`, since _.difference
is actually N-squared:
_.difference = restArguments(function(array, rest) {
rest = flatten(rest, true, true);
return _.filter(array, function(value){
return !_.contains(rest, value);
});
});
And we don't actually want to build a list only
to check that it's zero vs. nonzero length.
We now do this, which short circuits as soon
as it finds any key that is only in sub1:
return _.every(sub1.subscribers.keys(), (key) => {
return sub2_set.has(key);
});
First, there are no more convoluted signals.
We also simplify the parameter to just the "mentioned"
object corresponding to either a user or a broadcast
mention.
For the user group scenario, this has always been dead
code, which you only realized when you got to the comment
at the bottom. Now we actually do nothing.
And I moved the relevant comment to the
typeahead code (with new wording).
I also moved the is_silent check to the caller. I don't
feel too strongly about that either way. It's kind of silly
to call a function only to give that function an additional
responsibility to worry about. On the other hand, I see
the logic of that function enforcing everything. I went
with the former for now.
Arguably we should have a warning for silent mentions,
since doing a silent mention of somebody not on a stream
is a good indication of a typo. I do understand the use
case, but the user can always ignore the warning. Anyway,
we have decent test coverage on this.
This isn't really an extraction; it's more giving
a name to an anonymous function and moving it to
higher module scope.
We convert this to an ordinary function call, which
allows us to move it out of initialize().
Since there's just one simple parameter now (linked_stream),
we can avoid some error checking.
We also avoid the comment that describes the function,
since it now has a name.
And then one minor tweak is to do the inexpensive
`invite_only` check higher in the function. This will be
a nice speedup when you link to really large public
streams.
The unit tests are also a bit easier to read now--less
setup and more explicit names.
Previously, if you tried to invite a user whose account had been
deactivated, we didn't provide a clear path forward for reactivating
the users, which was confusing.
We fix this by plumbing through to the frontend the information that
there is an existing user account with that email address in this
organization, but that it's deactivated. For administrators, we
provide a link for how to reactivate the user.
Fixes #8144.
The sort_recipients helper is used for many different
typeaheads, such as compose PMs, compose mentions,
and some settings-related code.
We now avoid unnecessary sorting steps in cases
where we have plenty of results in the top buckets
(such as users who match on prefix).
This change should not have any user-facing
implications.
This method is a bit complex, but I think it's
worthwhile to force PM autocompletes and mention
autocompletes through the same code path.
We also kill off this method:
typeahead_helper.sort_people_and_user_groups
For historical reasons pm_list was handling just
one possible edge case where is:private was
combined with other search terms, namely the
pm-with operator.
The code was correct in realizing the is:private
was redundant there, but now we handle that
upstream in Filter.fix_operators (see previous
commit).
Now we just look for any is:private term.
This change makes these two functions more alike:
- get_search_result
- get_search_result_legacy
To test the UI modify zerver/views/home.py by
replacing `settings.SEARCH_PILLS_ENABLED` with
`True`. I only did a quick sanity check, since
any bugs with the new system are more likely due
to bitrot than any changes I have made here.
The history is this:
Tim cloned the code (before the smaller
helpers were extracted):
db4f6e278f
In 8b153f6452
Shubham removed get_operator_subset_suggestions but
accidentally left a `concat` statement in that got
misapplied to the previous suggestions:
- suggestions = get_operator_subset_suggestions(operators);
result = result.concat(suggestions);
The error there was carried over in some recent changes,
but this commit fixes that strangeness.
In 73e4f3b3fa
Shubham made this change, which makes sense only for
pills, and this code remains intact.
- if (operators.length > 0) {
- last = operators.slice(-1)[0];
+ if (query_operators.length > 0) {
+ last = query_operators.slice(-1)[0];
+ } else {
+ // If query_operators = [] then last will remain
+ // {operator: '', operand: '', negated: false}; from above.
+ // `last` has not yet been added to operators/query_operators.
+ // The code below adds last to operators/query_operators
+ operators.push(last);
+ query_operators.push(last);
}
Mohit made a couple changes to both old and new.
Anders made a couple non-substantive changes related to
the ES6 migration.
Steve (me) made several structural changes to the code. For
some of them I only changed the legacy code, not the pills
code. I didn't fix Shubham's mistake until this change.
Now the two functions should look similar except in the places
where they are intentionally different. I also added a comment
explaining the get_operator_subset_suggestions difference.
Fixes #13609
I want to be able to easily test this without
having to simulate all the jQuery side effects.
This simply preserves the old logic, which seems
to handle one edge case without handling every
possible edge case. The edge cases aren't super
important here, though, since the only thing it affects
is bolding "Private Messages", and when to do that
is somewhat up to personal tastes.
Having said that, we could definitely improve
this code and possibly should move some of this
logic to either narrow_state.js or filter.js.
Instead of doing various ad-hoc calculations of
which PM is "active" and plumbing it through various
functions and then updating it via jQuery instead of
just the template, we now just calculate `is_active`
in `_build_private_messages_list` with a little
helper function.
This test mostly tests logic that I'm about
to remove in subsequent commits, and it's a bit
messy.
This commit removes 100% line coverage, but I
will restore that a few commits later.
In 3cfc3ca24b I removed
the feature that limited PM conversations to five or
less (including the active conversation), but I
didn't clean up this parameter. I think lint was
confused by the fact that we did mutate it.
I am wondering if this started out as an experiment
and was never fully polished before the push? Or
maybe I was just careless. Anyway, I don't
think there were any symptoms here--it was just dead code
that we didn't need.
This fixes a rebase issue between the int_dict introduction and its use
in people.js, and the introduction of filter_values in dict.js and its use
inside people.js.
Note that we haven't fully swept this for Dict,
since some dicts are keyed by strings. For
example PM counts can have a huddle like
"101,102,103" as a key.
This should be slightly more performant, and we
often call this function N times, such as when
rendering the buddy list.
There's a minor change to pm_list to avoid
an unnecessary computation on huddles that would
otherwise trigger a blueslip warning for the
huddles case.
When we are pulling data from message.display_recipient
for private messages, the user_id field is always
called 'id', not 'user_id', so we can simplify
some defensive code.
This required lots of manual testing:
- search/navigate user presence
- send PM and mention user
- pay attention to compose fade
- send stream msg and mention user
- open Private Messages in top-left and click
- test unread counts
- invite user who already has account
- search for users in search bar
- check user settings
- User Groups
- Users
- Deactivated Users
- Bots
- create a bot
- mention user groups
- send group PM then click on lower right
- view/edit/create streams
If there are still pieces of code that don't convert
ids to ints, the code should still work but report
blueslip errors.
I try to mostly convert user_ids to ints in the callers,
since often the callers are dealing with small amounts
of data, like user ids from huddles.
We only ever show 3 or 4 people in search suggestions
(possibly w/a couple variations, like pm-with/sender/etc.),
so we can try to search a smaller subset of people
before going through the entire realm.
We use message_store.user_ids() for this, since you
typically want to search messages for people that
have sent messages recently, and we already sort
based on PM conversations.
This should avoid some memory allocations.
We also use build_person_matcher to avoid
repeating the same logic over and over
again to process the query into termlets.
We also remove people.get_all_persons() and
people.person_matches_query().
This looks like simple code cleanup, but it's more
than that.
The code cleanup here is that we don't have three
callbacks to get a list of typeaheads for bootstrap.
Instead, we just have one function that does all the
main work.
And then the speedup comes from the fact we no longer
need to remove diacritics from the query for every
time through our loop of seeing if a person matches
the query.
It's a bit subtle to see in the diff, but these are
the relevant lines:
const matcher = exports.get_person_or_user_group_matcher(query);
const filtered_results = _.filter(people_and_groups, matcher);
Before this, bootstrap was doing $.grep, and we'd have
to reinitialize the matcher for every person.
If you profile this before and after, you'll see that
remove_diacritics gets called fewer times.
To profile this, you want to load lots of users into
your DB and try to autocomplete "Extra", as in "Extra1 User".
If you try to autocomplete something else, then my patch
won't really help, and `remove_diacritics` will still
show up as expensive. Because it is that expensive a function.
This may seem silly now, since we are returning a function
that still dispatches over all flavors of search for
every item, but subsequent commits will make it obvious
why I'm doing this.
We want to do our own matching of items, rather than
just giving a callback to bootstrap, which does $.grep
on all the items.
Doing our own matching gives us flexibility for future
improvements like custom data structures for searching
through big amounts of data. Even in the short term
we can speed up searches by pulling expensive operations
outside the grep/filter call.
This architecture has been in place for our search
bar since ~2014.
The benchmark is commented out. It takes only a few
milliseconds to run, so there may be no reason not
to always run it. It doesn't test correctness, so
it would arguably inflate line coverage, but set/get
are obviously covered elsewhere.
We now require the actual tests to explicitly
zrequire Dict, rather than magically adding this.
In one case, the use of Dict was clearly just for
the test (not the app), so I converted that to an ordinary
JS object (see timerender.js).
We have ~5 years of proof that we'll probably never
extend Dict with more options.
Breaking the classes apart makes both a little faster
(no options to check), and we remove some options
in FoldDict that are never used (from/from_array).
A possible next step is to fine-tune the Dict to use
Map internally.
Note that the TypeScript types for FoldDict are now
more specific (requiring string keys). Of course,
this isn't really enforced until we convert other
modules to TS.
We had a potentially nasty bug where we
weren't guaranteeing that all/stream/everyone
collated in consistent ways inside of
`compare_people_for_relevance`, which can
send certain types of sort algorithms into
an infinite loop. I doubt this ever happened
in practice, but it's obviously worth fixing.
Now we also have a clear tiebreaker between
any two all/everyone/stream mentions, which
is the idx field.
Finally, this should be a bit more efficient.
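The tiebreak itself is trivial; something like this (a sketch based on the
description above):

    function compare_broadcast_mentions(person_a, person_b) {
        // all/everyone/stream pseudo-persons carry an idx field, which gives
        // a stable, consistent ordering between any two of them
        return person_a.idx - person_b.idx;
    }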
We don't have people named "all". Instead, we
create pseudo person objects with email/full_name
of "all" (along with some other fields). The tests
now reflect this.
We want to mostly deprecate this function (see
the comment I added), so I gave it a more specific
name.
Ideally I'd just fix `stream_create`, but it does
use this function in a couple places, and it's helpful
to reuse the same sort here. In one place stream_create
actually unshifts the "me" user back to the top of the
list, which makes sense for its use case.
If two user_ids in a recent huddle have ids
that sort lexically differently than numerically,
such as 7 and 66, then we were creating two
different buckets in pm_conversations.
This regression was introduced in
263ac0eb45 on
November 21, 2019.
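The fix described in the following commits amounts to sorting the ids
numerically before building the key; roughly (the helper name is
illustrative):

    function canonical_key(user_ids) {
        return user_ids
            .map((user_id) => parseInt(user_id, 10))
            .sort((a, b) => a - b)
            .join(",");
    }

    canonical_key([66, 7]);     // "7,66"
    canonical_key(["7", "66"]); // "7,66" -- the same bucket either way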
Instead of having our callers pass in a possibly
non-canonical version of a user_ids_string, just
have them pass in a list.
The next commit will canonicalize the sort.
The server may send us ids in the order
[11, 2], instead of [2, 11]. We don't want
to rely on server behavior, regardless, for
the sort.
Our tests now show we process that data.
The current code is still buggy and causes
us to show the same huddle two different times
for situations where the lexical sort doesn't
match the numerical sort.
This happens on czo often, where Tim is user
7, and his id sorts lexically after ids like
58, 622, 4444, etc.
This should make any operation on subscribed
streams faster (we won't need to filter out
unsubscribed streams every time).
I started writing this before I realized we
had a bug where we call `subscribed_streams`
in a nested loop.
After fixing the bugs, this is not as much of
a bottleneck, but it's still a speedup in many
important places:
* build left sidebar
* every keystroke in search bar
* first keystroke in making #stream_links
* every keystroke in compose stream box
The streams settings code is kinda complicated.
It does a non-deterministic sort of the "others"
bucket when you add elements to the left panel.
They get hidden, anyway. Our values() call now
puts subscribed streams first. It never guaranteed
order, but putting subscribed streams first is
probably a good behavior for most situations.
This defers O(N*S) operations, where
N = number of streams
S = number of subscribers per stream
In many cases we never do an O(N) operation on
a stream. Exceptions include:
- checking stream links from the compose box
- editing a stream
- adding members to a newly added stream
An operation that used to be O(N)--computing
the number of subscribers--is now O(1), and we
don't even pay O(N) on a one-time basis to
compute it (not counting the cost to build the
array from JSON, but we have to do that).
Calling `set_filter_out_inactives` is expensive, since we
count up the number of subscribed streams, which iterates
through all your streams, creates a new list of subscribed
streams, then counts them.
In my dev setup, I created 700 streams, and this shaved
about 700ms off of the initial call to `build_stream_list`.
If we aren't showing users emails, then we don't
want to use emails in the search.
And if we are showing users emails, we want to
search on the email that's displayed to them.
For admins this will be delivery_email.
For regular users we arguably shouldn't search
on emails either, since it mostly causes confusion,
but this commit just preserves the current
behavior for those users (unless `show_email` is
false).
We want to be able to unit test this value,
since it's conditional on several factors:
- am I an admin?
- can non-admins view emails?
- do we have delivery_email for the user?
I'm mocking show_email in the tests, since the
show_email code is in `settings_org` and
kind of hard to unit test. It's not impossible,
but it's too much for this commit. (Either
we need to extract it out to a nice file or
deal with mocking jQuery. That module is
mostly data-oriented, so it would be nice
to have something like `settings_config` that
is actually pure data.)
This was duplicate code. I'm moving it to people
for pragmatic reasons--it's hard to unit test stuff
in settings_users.js due to all the jQuery.
It's also nice to have all people-related search
code in one place, just for auditing purposes.
Once we have max_items results, stop trying
to get more items.
This should really help large realms when
you do a search on streams that turns up
more than N streams (where N is about 12).
We won't even bother to find people.
This isn't a huge speedup, but it's an easy
code change.
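The short circuit looks roughly like this (a sketch, not the exact
search_suggestion code):

    function filter_with_limit(items, matcher, max_items) {
        const results = [];
        for (const item of items) {
            if (results.length >= max_items) {
                break;   // stop scanning once we already have enough suggestions
            }
            if (matcher(item)) {
                results.push(item);
            }
        }
        return results;
    }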
We remove the two-liner highlight_with_escaping,
which was only called in one place, and when
we inline it into the caller, we can pull the
first line, which builds the regex, out of the
loop.
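Schematically (an illustrative sketch, not the actual typeahead_helper code):

    function escape_regex(s) {
        return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    }

    function highlight_all(query, strings) {
        // build the regex once, outside the loop over the matches
        const regex = new RegExp("(" + escape_regex(query) + ")", "ig");
        return strings.map((s) => s.replace(regex, "<strong>$1</strong>"));
    }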
The streams:all advertisement notice in search should only appear
after we've already received the response from the server, to avoid a
mix of problems ranging from misplaced loading indicator to scrolling
issues to the notice just being distracting while you're waiting for
the server to return results.
We need to add a pre_scroll_cont parameter to the message_fetch API,
since adding this notice would otherwise potentially throw off the
scroll positioning logic for which message to select.
Fixes #13441.
In 452e226ea2 and
648a60baf6, we changed how `search:`
narrows work to:
(1) Never mark messages as read inside searches (search:)
(2) Take you to the bottom, not the first unread, if a `near:` or
similar wasn't specified.
This is far better behavior for these use cases, because in these
narrows, you can't actually see all the context around the target
messages, so marking them as read is counterproductive. This is
especially important in `has:mention` where your goal is likely
specifically to keep track of which threads mentioning you haven't
been read. But in many other narrows, the current behavior is
effectively (1) setting the read bit on random messages and (2) if the
search term matches many messages in a muted stream with 1000s of
unreads, making it hard or impossible to find recent search matches.
The new behavior is that any narrow that is structurally a search of
history (including everything that isn't a stream, topic,
pm-with, "all messages" or "private messages") gets that new behavior
of being unable to mark messages as read and narrows taking you to the
latest matching messages.
A few corner cases of interest:
* `is:private` is keeping the old behavior, because users on
chat.zulip.org found it confusing for `is:private` to not mark
messages as read when one could see them all. Possibly a more
complex answer is required here.
* `near:` narrows are getting the new behavior, even if it's a stream:
+ topic: narrow. This is debatable, but is probably better than
what was happening before.
Modified significantly by tabbott for cleanliness of implementation,
this commit message, and unit tests.
Fixes #9893. Follow-up to #12556.
In 1fe4f795af, we added the
wildcard_mentions_notify setting, which controls whether wildcard
mentions should be treated as mentions for the purposes of
notifications. The original implementation focused on the more
important area of email/push notifications, and neglected to address
desktop notifications for wildcard mentions.
This change makes the wildcard_mentions_notify flag behave correctly
for desktop/sound notifications, including unit tests.
Fixes #13073.
Adds required API and front-end changes to modify and read the
wildcard_mentions_notify field in the Subscription model.
It includes front-end code to add the setting to the user's "manage
streams" page. This setting will be greyed out when a stream is muted.
The PR also includes back-end code to add the setting to the initial state of
a subscription.
New automated tests were added for the API, events system and front-end.
In manual testing, we checked that modifying the setting in the front end
persisted the change in the Subscription model. We noticed the notifications
were not behaving exactly as expected in manual testing; see
https://github.com/zulip/zulip/issues/13073#issuecomment-560263081 .
Tweaked by tabbott to fix real-time synchronization issues.
Fixes: #13429.
If a message begins with /me, we do not have any cases where the
rendered content would not begin with `<p>/me`. Thus, we can safely
remove the redundant checks both on the backend and frontend.
In e42c3f7418, we made the assumption
that compose_pm_pill.get_recipient() would return no users for stream
messages. It turns out, due to the confusing name of
compose_state.recipient (which we just renamed to
compose_state.private_message_recipient), this assumption was wrong.
As a result, when composing a stream message using the reply hotkeys,
we'd end up sending typing notifications to the person who sent the
message we're replying to as though a PM was being composed.
We fix this by avoiding passing an (expected to be unused) value for
private_message_recipient to compose_state.start.
The compose_state.recipient field was only actually the recipient for
the message if it was a private_message_recipient (in the sense of
other code); we store the stream in compose_state.stream instead.
As a result, the name was quite confusing, resulting in the
possibility of problematic correctness bugs where code assumes this
field has a valid value for stream messages. Fix this by changing it
to compose_state.private_message_recipient for clarity.
Fixes commit id 648a60baf6. When
allow_use_first_unread_when_narrowing() is false, the last message of the
narrow is shown in the view.
Comments rewritten by tabbott to explain in detail what's happening.