I think we could write this test better, but it's not a big deal for
this to break in the rare event that we change/remove one of the 2
strings it interacts with.
We now only use `page_params.realm_default_streams` during
initialization, and then after that we use `stream_data`
APIs to get default stream ids and related info. (And
for the events that replace the data, we just update our
internal data structures as well.)
Long term we should have the server just send us ids here,
since we are now hydrating info from stream data in all places.
We only used get_default_stream_names() in a
test, so now it's being replaced with a function
that just gets ids.
We'll have use for get_default_streams_ids()
in an upcoming commit.
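A minimal sketch of the new accessor, assuming the module keeps
its default stream ids in an internal Set (the variable name
below is illustrative):

exports.get_default_streams_ids = function () {
    // default_stream_ids is the module's internal Set of ids;
    // hand back a copy so callers can't mutate our data.
    return Array.from(default_stream_ids);
};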
Now if a default stream gets deleted, we just
redraw the table. We always have a small number
of default streams, and the way that we were removing
rows without the actual consent of `list_render` was
really janky (and just a vestige of pre-list-render
code that never got fully ported).
This also makes us consistent with how we handle
added streams (i.e. just call
`update_default_streams_table`).
ASIDE:
Ideally we will update `list_render` at some point to
have an API for adding and removing elements. It does
now allow you to call `data()` to reset its data, but
for now we just build a new `list_render` object every
time.
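For reference, the redraw is roughly this shape (the selector,
template helper, and exact list_render options below are
assumptions, not the literal code):

exports.update_default_streams_table = function () {
    const subs = stream_data.get_default_streams_ids().map(
        (stream_id) => stream_data.get_sub_by_id(stream_id)
    );
    const table = $("#admin_default_streams_table");
    table.empty();
    list_render.create(table, subs, {
        name: "default_streams_list",
        // render_default_stream_row is a hypothetical template helper
        modifier: (sub) => render_default_stream_row({stream: sub}),
    });
};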
We stopped needing this with
0329b67048
(Dec 2016).
The function sets `bot.can_admin`,
which was only used in `bot_data.get_editable`.
We removed two tests (and then put back
some test setup that needed to leak down
to the last test).
This is code simplification motivated
by a recent bug that we fixed with some
server changes, but which was really
caused in some sense by our client code
using an overly finicky
condition to check falsiness.
For cross-realm bots, the value of
`user.bot_owner_id` may be `null`, or it
may simply be `undefined`, depending
on whether the server passes `None`
or simply omits the field.
We don't want our client code to be
coupled to that rather arbitrary
decision.
We were doing a `!== null` check instead
of checking for falsiness, which led to
blueslip errors in the past. Because a
bot owner id could plausibly be 0, a falsiness
check would be brittle in a different way.
Now we avoid that ugliness by calling
`get_bot_owner_user`, which either returns
an object or `undefined`.
And then the caller can just do a concise
check for whether `bot_owner` exists.
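A sketch of the new helper (presumably in people.js; the exact
body may differ):

exports.get_bot_owner_user = function (user) {
    const owner_id = user.bot_owner_id;
    if (owner_id === undefined || owner_id === null) {
        // cross-realm bots may have no owner
        return undefined;
    }
    return exports.get_by_user_id(owner_id);
};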
And we also fix up the crufty code that
was putting `bot_owner_full_name` on to
the object instead of using a local.
We have a bug report for this again, although
it might be on an old branch.
Fixes #13621.
Instead of having logical expressions in templates, it's always
preferable to calculate them in JavaScript and pass the results
in as context. It also enhances the readability of templates,
and testing such logic is easier in JS than in templates.
The use case for this is small or fixed tables, which do not need
filtering support. Thus we are able to omit the unnecessary
search input inside the HTML parent container.
It is not used at present, but will be required when we refactor
the settings pages.
We also split out exports.validate_filter function for
unit testing the above condition.
Before this commit, the reactions code would
take the `message.reactions` structure from
the server and try to "collapse" all the reactions
for the same users into the same reactions,
but with each reaction having a list of user_ids.
It was a strangely denormalized structure that
was awkward to work with, and it made it really
hard to reason about whether the data was in
the original structure that the server sent or
the modified structure.
Now we use a cleaner, normalized Map to keep
each reaction (i.e. one per emoji), and we
write that to `message.clean_reactions`.
The `clean_reactions` structure is now the
authoritative source for all reaction-related
operations. As soon as you try to do anything
with reactions, we build the `clean_reactions`
data on the fly from the server data.
In particular, when we process events, we just
directly manipulate the `clean_reactions` data,
which is much easier to work with, since it's
a Map and doesn't duplicate any data.
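Roughly, the conversion looks like this (field names beyond the
obvious ones, and keying purely by emoji_name, are simplifications
of the real code):

function set_clean_reactions(message) {
    message.clean_reactions = new Map();
    for (const reaction of message.reactions) {
        const local_id = reaction.emoji_name;  // one entry per emoji
        if (!message.clean_reactions.has(local_id)) {
            message.clean_reactions.set(local_id, {
                emoji_name: reaction.emoji_name,
                emoji_code: reaction.emoji_code,
                user_ids: [],
            });
        }
        message.clean_reactions.get(local_id).user_ids.push(reaction.user_id);
    }
}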
This rewrite should avoid some obscure bugs.
I use `r` as shorthand for the clean reaction
structures, so as not to confuse it with
data from the server's message.reactions.
It also avoids some confusion where we use
`reaction` as a var name for the reaction
elements.
Fixes #14254
You can test this on dev:
* do "-stream:Verona" in the search bar (the minus
sign negates the search here)
* reload the browser
You should see the same search (all streams besides Verona).
We had this API:
people.add_in_realm = full-fledged user
people.add = not necessarily in realm
Now the API is this:
people.add = full-fledged user
people._add_user = internal API for cross-realm bots
and deactivated users
I think in most of our tests the distinction between
people.add() and people.add_in_realm() was just an
accident of history and didn't reflect any real intention.
And if I had to guess the intention in 99% of the cases,
folks probably thought they were just creating ordinary,
active users in the current realm.
In places where the distinction was obviously important
(because a test failed), I deactivated the user via
`people.deactivate`.
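In test terms, the new calls look something like this (the user
objects are illustrative):

const isaac = {user_id: 30, full_name: 'Isaac', email: 'isaac@example.com'};
const welcome_bot = {user_id: 4, full_name: 'Welcome Bot', email: 'welcome-bot@example.com'};

people.add(isaac);               // normal, active user in the current realm
people._add_user(welcome_bot);   // cross-realm bot / deactivated-user path
people.deactivate(isaac);        // only when a test needs an inactive user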
For the 'basics' test in the people test suite, I clean
up the test setup for Isaac. Before this commit I was
adding him first as a non-realm user then as a full-fledged
user, but this was contrived and confusing, and we
didn't really need it for test coverage purposes.
We want to move more logic to stream_data to facilitate
testing.
Both before and after this commit, we essentially build a
new list of users for typeahead, but now the new list
excludes subscribed users. We can do even better than
this in a follow-up commit.
Before this commit, presence used get_realm_count()
to determine whether a realm was "small" (and thus
should show all human users in the buddy list, even
humans that had not been active in a while).
The `get_realm_count` function--despite a very wrong,
misleading comment--was including bots in its count.
The new function truly counts only active humans
(and no bots).
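The commit text doesn't name it above, but the replacement is
conceptually something like this (the function and internal Map
names are assumptions):

exports.get_active_human_count = function () {
    let count = 0;
    for (const person of active_user_dict.values()) {
        if (!person.is_bot) {
            count += 1;
        }
    }
    return count;
};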
Because we were overcounting users before this change,
we should technically adjust `BIG_REALM_COUNT` down
by some amount to reflect our original intention there
on the parameter. I'm leaving it alone for now, though,
since we've improved the performance of the buddy list
over time, and it's probably fine if a few "big" realms
get re-classified as small realms (and show more users)
by virtue of this change.
(Also note that this cutoff value only affects the
"normal" view of the buddy list; both small realms
and large realms will show long-inactive users if you
do searches.)
Fixes #14215
Given that can_mark_messages_read is called whenever the blue box
cursor stops on a message and that it is calculated purely on the
basis of sorted_term_types, it makes sense to cache the result.
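One way to cache it, sketched below; the private field names and
the underscore isEqual comparison are illustrative, not the
literal implementation:

Filter.prototype.can_mark_messages_read = function () {
    const term_types = this.sorted_term_types();
    if (this._stored_term_types === undefined ||
        !_.isEqual(this._stored_term_types, term_types)) {
        this._stored_term_types = term_types;
        // compute_can_mark_messages_read stands in for the
        // existing pure computation over term_types.
        this._can_mark_messages_read = compute_can_mark_messages_read(term_types);
    }
    return this._can_mark_messages_read;
};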
If you were in the "Starred messages" narrow and
your pointer was on a message with the stream/topic
of "social/lunch", we wouldn't move you to the unread
messages for that topic.
I fixed this by removing the code that looked at
the current message's topic. Instead, we only look
at the active narrow to figure out the "next" topic
to go to.
Fixes #14120.
The original email address is shown to admin users in the subscriber
list when email_address_visibility is set to "Admins only", by passing
delivery_email in the required places. Email addresses are not shown to
non-admin users when visibility is set to "Admins only".
Tweaked by tabbott to fix a few bugs and dead code.
Fixes a part of #13541.
The intent is that we'll be able to reuse this when editing streams
as well.
* Rename method: filter_with_new_topic to filter_with_new_param.
* Fix tests and method calls.
This extends our email address visibility settings to deny access to
user email addresses even to organization administrators.
At the moment, they can of course change the setting (which leaves an
audit trail), but in the future only organization owners will be able
to change that setting.
While we're at this, we rewrite the settings_data.js test to cover all
the cases in a more consistent way.
Fixes #14111.
The file populates `window.i18n`, so now
the file name matches our convention.
Note that the module really just initializes
`i18next` and then does this:
window.i18n = i18next;
It doesn't really add any functionality to
the third party library.
Before this test, we were validating the behavior
of `i18next`, but we weren't validating our light
layer that sits on top of `i18next`, which currently
resides in the slightly misnamed `translations.js`
file.
The translations module is now so small that I'll
just quote it verbatim here:
import i18next from 'i18next';

i18next.init({
    lng: 'lang',
    resources: {
        lang: {
            translation: page_params.translation_data,
        },
    },
    nsSeparator: false,
    keySeparator: false,
    interpolation: {
        prefix: "__",
        suffix: "__",
    },
    returnEmptyString: false, // Empty string is not a valid translation.
});

window.i18n = i18next;
We now just do `zrequire('translations')` to initialize
the `i18next` library, which allows us to have simpler
test setup and to actually exercise the above call to
`i18next.init`.
This change now gives us 100% line coverage of `translations.js`,
which of course isn't that hard to achieve (see above).
This extracts a new module with three
functions, which we will test with 100%
line coverage:
- show_email
- email_for_user_settings
- get_time_preferences
The first two break several dependencies
in the codebase on `settings_org.js`. The
`get_time_preferences` breaks an annoying
dependency on `page_params` within people.
The module is pretty cohesive, in the sense that
all three functions are just light wrappers
around `page_params` and/or `settings_config`.
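As an illustration, show_email reduces to a small wrapper like
this (the exact setting values checked are a sketch):

exports.show_email = function () {
    if (page_params.is_admin) {
        return true;
    }
    return page_params.realm_email_address_visibility ===
        settings_config.email_address_visibility_values.everyone.code;
};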
Now all the modules that want to call show_email()
only have to require `settings_data`, instead of
having a dependency on the much heavier
`settings_org.js` module.
I also make some of the unit tests here be more
full-stack, where instead of stubbing show_email,
I basically just toggle `page_params.is_admin`.
Explicitly stubbing i18n in 48 different files
is mostly busy work at this point, and it doesn't
provide much signal, since often it's invoked
only to satisfy transitive dependencies.
This follows the convention of other code calling into
add_sub_to_table of checking whether the stream settings overlay is
open (and thus in the DOM) before trying to rerender it.
We add these two functions to the API,
so that we no longer have `alert_words_ui`
using private data from `alert_words`:
alert_words.has_alert_word()
alert_words.get_word_list()
And to initialize the data, we have a proper
`initialize` method that is passed in only
the parameters that it needs from `ui_init`.
(We also move the step of deleting `alert_words`
from `page_params` to the `ui_init` module.)
Because it's a bit less cumbersome to initialize
`alert_words`, we now just initialize it directly in the
node tests for `alert_words_ui`.
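The new surface is small; a sketch (the internal list name is an
assumption):

let my_alert_words = [];

exports.has_alert_word = function (word) {
    return my_alert_words.includes(word);
};

exports.get_word_list = function () {
    // Hand back a copy so alert_words_ui can't mutate our data.
    return [...my_alert_words];
};

exports.initialize = function (params) {
    my_alert_words = params.alert_words;
};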
This is a follow-up to da79fd206a
I accidentally skipped over pm_conversations. Same
ideas as the bigger previous commit--we pass in params
to the initialize function and do the delete cleanup
within ui_init.
This cleans up the handoff of page_params
data between ui_init and modules that
take over ownership of page_params-derived
data.
Read the long comment in ui_init for a bit
more context.
Most of this diff is actually test cleanup.
And a lot of the diff to "real" code is
just glorified `s/page_params/params/`
in the `initialize` functions.
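The pattern, using a hypothetical module and field purely for
illustration:

// ui_init.js
widget_module.initialize({
    widgets: page_params.widgets,
});
delete page_params.widgets;  // ui_init now owns this cleanup

// widget_module.js
let widgets;
exports.initialize = function (params) {
    widgets = params.widgets;  // was: page_params.widgets
};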
One little oddity is that we don't actually
surrender ownership of `page_params.user_id`
to `people.js`. We could plausibly sweep
the rest of the codebase to just use
`people.my_user_id()` consistently, but it's
not a super high priority thing to fix,
since the value never changes.
The stream_data situation is a bit messy,
since we consume `page_params` data in the
initialize() function in addition to the
`params` data we "own". I added a comment
there and intend to follow up. I tried
to mostly avoid the "word soup" by extracting
three locals at the top.
Finally, I don't touch `alert_words` yet,
despite it also doing the delete-page-params-data
dance. The problem is that `alert_words`
doesn't have a proper `initialize()`. We
should clean that up and have it use a
`Map` internally, too.
This gives them cache-compatible URLs, and also avoids some extra
copies of the sprite sheet images.
Comments on the Octopus emoji added by tabbott.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This is not always a behavior-preserving translation: _.defaults
mutates its first argument. However, the code does not always appear
to have been written to expect that.
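To make the difference concrete (the spread shown is just one
possible replacement, not necessarily what this commit uses):

const opts = {stream: 'Verona'};
_.defaults(opts, {topic: 'lunch'});
// opts is now {stream: 'Verona', topic: 'lunch'}; mutated in place.

const merged = {topic: 'lunch', ...opts};
// merged has the same contents, but the original object is untouched.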
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This is not always a behavior-preserving translation: _.extend mutates
its first argument. However, the code does not always appear to have
been written to expect that.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This moves some code from settings_display.js
into the new module settings_config.js.
Extracting this module breaks some dependencies
on settings_display.js (which has some annoying
transitive dependencies, including jQuery).
In particular this isolates stream_data from
settings_display.js.
Two of the three structures that we moved here
weren't even directly used by settings_display.js,
since we do a lot of rendering in the modules
admin.js and settings.js.
We make get_all_display_settings() a function
to avoid a require-time dependency on page_params.
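The idea, in miniature (build_settings is a stand-in for the real
construction):

// before: evaluated when the module is first required
exports.all_display_settings = build_settings(page_params);

// after: page_params is only read when a caller asks for the data
exports.get_all_display_settings = function () {
    return build_settings(page_params);
};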
Breaking the dependencies simplifies a few
node tests.
Most of the node test complexity came from the
following commit in March 2019:
5a130097bf
The commit itself seems harmless enough, but
dependencies can have a somewhat "viral" nature,
where making stream_data depend on settings_display
caused us to modify four different node tests.
This refactoring is the first step toward sharing
our markdown code with mobile. This focuses on
the Zulip layer, not the underlying third party `marked`
library.
In this commit we do a one-time initialization to
wire up the markdown functions, but after further
discussions with Greg, it might make more sense
to just pass in helpers on every use of markdown
(which is generally only once per sent message).
I'll address that in follow-up commits.
Even though it looks like a pretty invasive change,
you will note that we barely needed to modify the
node tests to make this pass. And we have pretty
decent test coverage here.
All of the places where we used to depend on
other Zulip modules now use helper functions that
any client (e.g. mobile) can configure themselves.
Of course, in the webapp, we configure these from
modules like people/stream_data/hash_util/etc.
Even in places where markdown used to deal directly with
data structures from other modules, we now use functions.
We may revisit this in a future commit, and we might
just pass data directly for certain things.
I decided to keep the helpers data structure completely flat,
so we don't have ugly nested names like
`helpers.emoji.get_emoji_codepoint`. Because of this,
some of the names aren't 1:1, which I think is fine.
For example, we map `user_groups.is_member_of` to
`is_member_of_user_group`.
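The wiring in the webapp ends up looking roughly like this (only
a few helpers shown, and the exact setup entry point is an
approximation):

markdown.initialize({
    get_emoji_name: emoji.get_emoji_name,
    get_emoji_codepoint: emoji.get_emoji_codepoint,
    get_realm_emoji_url: emoji.get_realm_emoji_url,
    is_member_of_user_group: user_groups.is_member_of,
    get_stream_id: stream_data.get_stream_id,
});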
It's likely that mobile already has different names
for their versions of these functions, so trying for
fake consistency would only help the webapp. In some
cases, I think the webapp functions have names that
could be improved, but we can clean that up in future
commits, and since the names aren't coupled to markdown
itself (i.e. only the config), we will be less
constrained.
It's worth noting that `marked` has an `options`
data structure that it uses for configuration, but
I didn't piggyback onto it, since the `marked`
options are more at the lexing/parsing layer vs.
the app-data layer stuff that our helpers mostly
help with.
Hopefully it's obvious why I just put helpers in
the top-level namespace for the module rather than
passing it around through multiple layers of the
parser.
There were a couple places in markdown where we
were doing awkward `hasOwnProperty` checks for
emoji-related stuff. Now we use the Python
principle of ask-forgiveness-not-permission and
just handle the getters returning falsy data. (It
should be `undefined`, but any falsy value is
unworkable in the places I changed, so I use
the simpler, less brittle form.)
We also break our direct dependency on
`emoji_codes.json` (with some help from the
prior commit).
In one place I rename streamName to stream_name,
fixing up an ancient naming violation that goes
way back to before this code was even extracted
away from echo.js. I didn't bother to split this
out into a separate commit, since 2 of the 4
lines would be immediately re-modified in the
subsequent commit.
Note that we still depend on `fenced_code`
via the global namespace, instead of simply
requiring it directly or injecting it. The
reason I'm postponing any action there is that
we'll have to change things once we move
markdown into a shared library. (The most
likely outcome is that we'll rename/move both files
at the same time and fix the namespace/require
details as part of that commit.)
Also the markdown code still relies on `_` being
available in the global namespace. We aren't
quite ready to share code with mobile yet, but the
underscore dependency should not be problematic,
since mobile already uses underscore to use the
webapp's shared typing_status module.
This mostly moves logic into people.js.
The people functions added here are glorified
two-liners.
One thing that changes here is that we
are a bit more rigorous about duplicate
names.
The code is slightly awkward, because this
commit preserves the strange behavior
that if 'alice|42' doesn't match on
the user with the name "alice" and user_id
"42", we instead look for a user whose
name is "alice|42". That seems like a
misfeature to me, but there's a test for
it, so I want to check with Tim that it's not
intentional behavior before I simplify
the code.
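The preserved behavior, roughly (the helper names here are
approximations, not the exact people.js API):

function get_person_from_typeahead_text(text) {
    const [name, id_str] = text.split('|');
    if (id_str !== undefined) {
        const user = maybe_get_user_by_id(Number(id_str));  // hypothetical lookup
        if (user && user.full_name === name) {
            return user;
        }
    }
    // Fall back to treating the whole string ("alice|42") as a
    // full name, which is the strange behavior described above.
    return maybe_get_user_by_name(text);  // hypothetical lookup
}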
We add this API to emoji.js, so that markdown
doesn't need to look at internal data structures
(or even need to understand any kind of record
format for results).
Here are the functions:
get_realm_emoji_url()
get_emoji_name()
get_emoji_codepoint()
We use the API now in markdown, which eliminates
the need for the markdown parser to require
the emoji JSON file.
Each function has a simple docstring:
get_emoji_name('1f384') === 'holiday_tree'
get_emoji_codepoint('avocado') === '1f951'
get_realm_emoji_url('shrug') === '/user_avatars/2/emoji/images/31.png'
Also we have simple test coverage for the API
(including tests that verify the docstrings).
This name was misleading, because we weren't
actually setting realm_filters (that's what
`page_params.realm_filters = realm_filters`
is for); we were instead updating our
realm filter rules.
These functions were just shims that were
used in the somewhat painful migration from
subject_* to topic_*.
The commit 4572be8c27
fixed it so that the client never needs to
deal with "subject_links".
So now we just go back to simpler code:
message.topic_links = links
links = message.topic_links
I am not quite ready to declare victory on
the subject/topic migration, but we are super
close. In this commit I bump a blueslip
warning to a blueslip error, so that we'll
be notified of any codepath that is still
using the janky fall-back-to-subject defensive
code here.
If we go a couple days without any errors, then
we can remove the blueslip warning and the
defensive code immediately and then inline
the callers at our leisure. I wouldn't be
wildly against keeping these wrappers in some
parts of the code, but that debate is out of
the scope of this immediate fix, and I haven't
thought hard about it yet.
We can basically sweep set_message_topic() now,
if we wanted to, since it's truly just a one-liner.
(At one point it was encapsulating something
like `message.subject = foo`).
This required a tiny change to compose_fade
test setup.
Most of this logic is specific to markdown
message processing, so we move the code to
markdown.js.
The only responsibility that we leave with
`emoji.js` is to provide us with a list
of translations (regex and replacement text).
But now `markdown.js` actually (directly) executes
those translations against Zulip messages
as part of its preprocessing.
This should simplify the upcoming mobile conversion.
Instead of mobile needing to duplicate this fairly
complex function, they will just need to pass
us in a list similar to `emoji_translations` inside
of `emoji.js`. That code has a comment that shows
what the data structure looks like.
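For reference, the shape of that list is roughly (the specific
entries are illustrative):

const emoji_translations = [
    {regex: /(:\))/g, replacement_text: ':smile:'},
    {regex: /(<3)/g, replacement_text: ':heart:'},
];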
I am 99% sure we can rely on trimRight() and
trim() being available in all browsers that
we support. I verified in FF.
This removes the util dependency from both
modules touched here.
We now treat util like a leaf module and
use "require" to import it everywhere it's used.
An earlier version of this commit moved
util into our "shared" library, but we
decided to wait on that. Once we're ready
to do that, we should only need to do a
simple search/replace on various
require/zrequire statements plus a small
tweak to one of the custom linter checks.
It turns out we don't really need util.js
for our most immediate code-sharing goal,
which is to reuse our markdown code on
mobile. There's a little bit of cleanup
still remaining to break the dependency,
but it's minor.
The util module still calls the global
blueslip module in one place, but that
code is about to be removed in the next
few commits.
I am pretty confident that once we start
sharing things like the typeahead code
more aggressively, we'll start having
dependencies on util. The module is barely
more than 300 lines long, so we'll probably
just move the whole thing into shared
rather than break it apart. Also, we
can continue to nibble away at the
cruftier parts of the module.
We used to have a block of code doing this just in the presence
endpoint because that's where we'd had error-handling problems with it
not being present, but it seems more correct for it to run
unconditionally on all HTTP requests.
This requires adding a dependency of channel on reload_state, which we
record in the webpack configuration for now.
webpack optimizes JSON modules using JSON.parse("{…}"), which is
faster than the normal JavaScript parser.
Update the backend to use emoji_codes.json too instead of the three
separate JSON files.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
I believe we can remove these and rely on
other parts of our testing/code-review
to ensure template quality.
These tests never really exercised our
app code, as evidenced by us not regressing
any of the 100%-line-coverage files.
We have a couple other ways that we verify
the correct format of the templates:
- webpack (can they compile?)
- check-templates (are they nicely indented?)
For deep testing, we have Casper, which
exercises most of our most important templates
in some meaningful way.
I think it's pretty rare that we get bugs
now that are directly caused by bad templates,
and an even smaller subset of them would
have been caught by the node tests.
If that trend changes in the future, I would prefer to
just do something "greenfield" to address
any common problems rather than resurrect
this code, but we could always resurrect it
from git.
The template node tests did check a little bit of
detail about which fields are there, but not
in an integrated way, so that aspect of the tests
wasn't very useful either.
This effectively reverts the following
commit from May 2019:
be527905ca
The implementation of closest() was a bit
buggy and complex. It's easy enough
to just stub the method yourself. We may
want to eventually re-implement it, but we
should follow the template of parent/set_parent.
If you fail to stub `closest` zjquery gives
a fairly helpful error message:
Error: You must create a stub for $("link-stub").closest
We stub out jquery elements rather than giving
the illusion of having real DOM.
Also, we make it so that the message_store
interaction has an assertion attached to it.
In the next commit we're going to change what the
server sends for the following:
- page_params
- server responses to /json/users/me/presence
We will **not** yet be changing the format of the data
that we get in events when users update their presence.
It's also just a bit in flux what our final formats
will be for various presence payloads, and different
optimizations may lead us to use different data
structures in different payloads.
So for now we decouple these two things:
raw_info: this is intended to represent a
    snapshot of the latest data from the
    server, including some data like
    timestamps that are only used
    in downstream calculations and not
    user-facing

exports.presence_info: this is calculated
    info for modules like buddy_data that
    just need to know active vs. idle and
    last_active_date
Another change that happens here is we rename
set_info_for_user to update_info_for_event,
which just makes it clear that the function
expects data in the "event" format (as opposed
to the format for page_params or server
responses).
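A sketch of the renamed function's role (the event field names
are guesses at the payload, and status_from_raw stands in for the
internal calculation):

exports.update_info_for_event = function (user_id, info, server_time) {
    const raw = {
        server_timestamp: server_time,
        last_active: info.timestamp,
        status: info.status,
    };
    raw_info.set(user_id, raw);

    // We immediately recompute the user-facing status from the raw
    // snapshot; there is no client-side timer expiring statuses.
    exports.presence_info.set(user_id, status_from_raw(raw));
};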
As of now keeping the intermediate raw_info data
around feels slightly awkward, because we just
immediately calculate presence_info for any kind
of update. This may be sorta surprising if you
just skim the code and see the various timeout
constants. You would think we might be automatically
expiring "active" statuses in the client due to
the simple passage of time, but in fact the precise
places we do this are all triggered by new data
from the server and we re-calculate statuses
immediately.
(There are indirect ways that clients
have timing logic, since they ask the
server for new data at various intervals, but a
smarter client could simply expire users on its
own, or at least with a more efficient transfer
of info between it and the server. One of
the things that complicate client-side logic
is that server and client clocks may be out
of sync. Also, it's not inherently super expensive
to get updates from the server.)
The important details for the test setup here
are just the number of users who are active.
We don't need to simulate the currently awkward
way of populating this data.