webpack optimizes JSON modules using JSON.parse("{…}"), which is
faster than the normal JavaScript parser.
Update the backend to use emoji_codes.json too, instead of
the three separate JSON files.
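As a sketch, loading the consolidated file on the backend might
look like this (the path and key names are illustrative
assumptions, not necessarily the real structure):

    import json

    EMOJI_CODES_PATH = "static/generated/emoji/emoji_codes.json"

    with open(EMOJI_CODES_PATH) as fp:
        emoji_codes = json.load(fp)

    # One consolidated file now serves what the three separate
    # JSON files provided.
    name_to_codepoint = emoji_codes["name_to_codepoint"]
    codepoint_to_name = emoji_codes["codepoint_to_name"]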
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
I believe we can remove these and rely on
other parts of our testing/code-review
to ensure template quality.
These tests never really exercised our
app code, as evidenced by the fact that
we did not regress any of the
100%-line-coverage files.
We have a couple of other ways that we verify
the correct format of the templates:
- webpack (can they compile?)
- check-templates (are they nicely indented?)
For deep testing, we have Casper, which
exercises most of our most important templates
in some meaningful way.
I think it's pretty rare that we get bugs
now that are directly caused by bad templates,
and an even smaller subset of them would
have been caught by the node tests.
If that trend changes in the future, I would prefer to
just do something "greenfield" to address
any common problems rather than resurrect
this code, but we could always resurrect it
from git.
The template node tests did check a little bit of
detail about which fields are there, but not
in an integrated way, so that aspect of the tests
wasn't very useful either.
This adds Ubuntu 19.10 as a valid provisioning target.
The release test in setup-apt-repo was changed from a list of values to
a regex check for brevity.
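As a sketch of the idea (in Python for illustration; the actual
check lives in a shell script, and the exact version list here
is an assumption):

    import re

    # One pattern replaces an enumerated list of allowed values.
    SUPPORTED_RELEASES = re.compile(r"^(16\.04|18\.04|18\.10|19\.04|19\.10)$")

    def check_release(release: str) -> None:
        if not SUPPORTED_RELEASES.match(release):
            raise RuntimeError(f"Unsupported release: {release}")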
This code was very useful when first implemented to help catch errors
where our backend templates didn't render, but has been superseded by
the success of our URL coverage testing (which ensures every URL
supported by Zulip's urls.py is accessed by our tests, with a few
exceptions) and other tests covering all of the emails Zulip sends.
It has a significant maintenance cost because it's a bit hacky and
involves generating fake context, so it makes sense to remove these.
Any future coverage issues with templates should be addressed with a
direct test that just accesses the relevant URL or sends the relevant
email.
In Python 3.5-3.6, generic types had an __origin__ attribute which
indicated which generic they originated from; the code was reflecting on
that value to check types against the OpenAPI spec. In Python 3.7, this
changed, and there's no longer an immediately simple way to get this
information in all cases. __origin__ appears to be the implementing
class now, returning `list` or `collections.abc.Iterator` rather than
`typing.List` and `typing.Iterator`. This adds a sloppy-but-effective
mechanism for inferring if a type maps to the List/Dict/Iterator/Mapping
types and gets the test suite passing again.
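A minimal sketch of that mechanism (the helper name and the
exact mapping are assumptions, not the real code):

    import collections.abc
    from typing import Any, Dict, Iterator, List, Mapping

    def likely_container(annotation: Any) -> str:
        # Python 3.5/3.6: __origin__ is the typing alias itself
        # (e.g. typing.List); Python 3.7+: it is the implementing
        # class (list, collections.abc.Iterator, ...). Accept both.
        origin = getattr(annotation, "__origin__", None)
        if origin in (list, List, Iterator, collections.abc.Iterator):
            return "array"
        if origin in (dict, Dict, Mapping, collections.abc.Mapping):
            return "object"
        return "unknown"

    # These hold on 3.5 through 3.7+:
    assert likely_container(List[int]) == "array"
    assert likely_container(Dict[str, Any]) == "object"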
Every CLI program should have a usage message.
Also add a mention in the `push-to-pull-request` usage message of
its participation in the `refs/remotes/pr/` pseudo-remote feature.
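The general pattern, sketched in Python for illustration (the
actual tools may be shell scripts; this is not their real
implementation, and the description text is invented):

    import argparse

    # argparse generates a usage line and `--help` handling for free.
    parser = argparse.ArgumentParser(
        description="Push the current branch back to the PR it came from.",
    )
    parser.parse_args()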
This gives us the right behavior when using the `url.*.insteadOf`
mechanism for aliases in Git remote URLs. For example, if
one's ~/.gitconfig has:
[url "git@github.com:"]
insteadOf = gh:
then `git remote add upstream gh:zulip/zulip` will work great, as
the nice, short, mnemonic `gh:` prefix gets expanded to the more
finicky `git@github.com:`. I use just such a prefix routinely.
But the feature does require that scripts go through the right
abstractions. In particular, `git remote get-url` has existed
since Git 2.7 (from 2016) for exactly this reason. A plain `git config`
command bypasses the expansion, getting the verbatim `gh:...`
version, which doesn't work.
So, switch to that.
As a bonus, we get to behave correctly if for some reason the user
has configured a push URL distinct from the fetch URL for this
remote, just by adding `--push`. With `git config`, we'd have had
to manually implement the fallback from `remote.upstream.pushUrl` to
`remote.upstream.url` in order to properly handle that case.
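In script form, the switch looks roughly like this (a Python
sketch for illustration; the real tooling may differ):

    import subprocess

    # `git remote get-url` (Git >= 2.7) applies url.<base>.insteadOf
    # expansion, so an alias like gh: resolves correctly.
    url = subprocess.check_output(
        ["git", "remote", "get-url", "upstream"], text=True,
    ).strip()

    # Bonus: with --push, Git itself falls back from
    # remote.upstream.pushUrl to remote.upstream.url.
    push_url = subprocess.check_output(
        ["git", "remote", "get-url", "--push", "upstream"], text=True,
    ).strip()

    # By contrast, `git config remote.upstream.url` would return
    # the verbatim, unexpanded `gh:...` value.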
The URL used earlier no longer contains the authentication
guides for GitHub and Google. So, two separate permalinks into
authentication.html, one for Google and one for GitHub, are
added to config_error.html to direct the user to the proper
authentication setup guide.
We now use realm_id for querying UserPresence
instead of building a big WHERE clause from the
list of user_ids.
This commit may be a bit hard to measure, since
we still get the list of user_ids for the PushToken
query in the same method.
It adds this index:
"zerver_userpresence_realm_id_timestamp_25f410da_idx" btree (realm_id, "timestamp")
We expect this index to provide a major performance improvement when
fetching presence data for the whole realm from the database on
servers like zulipchat.com hosting several realms.
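Roughly, the query change looks like this (illustrative ORM
calls, assuming Zulip's UserPresence model and a realm in scope;
not the exact code):

    # Before: a big WHERE clause over every user id in the realm.
    presence_rows = UserPresence.objects.filter(
        user_profile_id__in=user_profile_ids,
    )

    # After: one indexed filter on the denormalized realm_id,
    # served by the (realm_id, timestamp) index above.
    presence_rows = UserPresence.objects.filter(realm_id=realm.id)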
We now validate streams with a separate
function from PM recipients.
It's confusing enough all the ways you can
encode a stream or encode the PM recipients,
but trying to do it all in one function was
hard to reason about and led to at least one
bug.
In particular, there was a bug where streams
with commas in them would get split. Now
we just don't ever split on commas inside
of `extract_stream_indicator`.
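A sketch of the stream-specific function's shape (names and
error handling simplified from the real code):

    import json
    from typing import Union

    def extract_stream_indicator(s: str) -> Union[str, int]:
        # A stream may arrive as a JSON-encoded id or name...
        try:
            data = json.loads(s)
        except ValueError:
            # ...or as a plain stream name, returned verbatim.
            # Crucially, we never split on commas here, so stream
            # names containing commas survive intact.
            return s
        if isinstance(data, (str, int)):
            return data
        raise ValueError("Invalid data type for stream")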
Fixes #13836
After removing internal_send_message() in a recent
commit, we now have only two callers for
extract_recipients, and they are both related
to our REQ mechanism that always passes strings
to converters. (If there are default values,
REQ does not call the converters.)
We therefore make two changes:
- use the more strict annotation of "str"
for the `s` parameter
- don't bother with the isinstance check
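The resulting converter, roughly (simplified from the real
code):

    import json
    from typing import List, Union

    def extract_recipients(s: str) -> List[str]:
        # REQ always hands converters a string, so there is no
        # isinstance check on `s` anymore.
        data: Union[str, List[str]]
        try:
            data = json.loads(s)
        except ValueError:
            data = s
        if isinstance(data, str):
            data = data.split(",")
        return [recipient.strip() for recipient in data]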
Note that while the test mocks the actual message
send, we now have a `get_stream` call in the queue
worker, so we have to set up a real stream for
testing (or we could have mocked that as well, but
it didn't seem necessary). The setup queries, plus
the `get_stream` call, add to the number of queries
reported by the test. I just made the
query count a digits regex, which is a little bit
lame, but I don't think it's worth risking test
flakes for this.
This effectively reverts the following
commit from May 2019:
be527905ca
The implementation of closest() was a bit
buggy and complex. It's easy enough
to just stub the method yourself. We may
want to eventually re-implement it, but we
should follow the template of parent/set_parent.
If you fail to stub `closest`, zjquery gives
a fairly helpful error message:

    Error: You must create a stub for $("link-stub").closest
We stub out jquery elements rather than giving
the illusion of having real DOM.
Also, we make it so that the message_store
interaction has an assertion attached to it.
This index is intended to optimize the performance of the very
frequently run query of "what is the presence status of all users in a
realm?".
Main changes:
- add realm_id to UserPresence
- add index for realm_id
- backfill realm_id for old rows
- change all writes to UserPresence to include
realm_id
The index is of this form:
"zerver_userpresence_realm_id_5c4ef5a9" btree (realm_id)
We will create an index on (realm_id, timestamp) in a
future commit, but I think it's a bit faster if you do
the backfill before the index.
There's also a minor tweak to the populate_db script.
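The backfill can be done realm by realm with plain ORM updates
(a sketch, assuming Zulip's models; the real change is a
migration):

    from zerver.models import Realm, UserPresence

    # Copy the owning user's realm into the new denormalized
    # column, one realm at a time, before the index exists.
    for realm in Realm.objects.all():
        UserPresence.objects.filter(
            user_profile__realm_id=realm.id,
        ).update(realm_id=realm.id)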
This is just a refactoring to the more modern API
for sending internal messages.
To make this work we now plumb the email_gateway
flag through `internal_send_stream_message` instead
of `internal_send_message`.
We also change `send_zulip` to have its callers
pass in a full UserProfile object (which one of
them already had).
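The call shape this moves toward, roughly (the parameter order
and module path here are assumptions, not the actual signature):

    from zerver.lib.actions import internal_send_stream_message
    from zerver.models import Stream, UserProfile

    def send_zulip(sender: UserProfile, stream: Stream,
                   topic: str, content: str) -> None:
        # email_gateway now rides through the stream-specific
        # helper instead of internal_send_message.
        internal_send_stream_message(
            stream.realm, sender, stream, topic, content,
            email_gateway=True,
        )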
We prefer this to internal_send_message().
We are trying to deprecate `internal_send_message`,
which has extra moving parts related to
`extract_recipients` and `Addressee.legacy_build`.
There are two chunks of code that I touch here
that look pretty similar, but I'm not quite
sure they're worth de-duplicating, since they
use different topics and different message
content.
Instead of having `notify_new_user` delegate
all the heavy lifting to `send_signup_message`,
we just rename `send_signup_message` to be
`notify_new_user` and remove the one-line
wrapper.
We remove a lot of obsolete complexity:
- `internal` was no longer ever set to True
  by real code, so we kill it off, along with
  the internal_blurb code and the now-obsolete
  test
- the `sender` parameter was actually an
email, not a UserProfile, but I think
that got past mypy due to the caller
passing in something from settings.py
- we were only passing in NOTIFICATION_BOT
for the sender, so we just hard code
that now
- we eliminate the verbose
`admin_realm_signup_notifications_stream`
parameter and just hard code it to
"signups"
- we weren't using the optional realm
parameter
There's also a long ugly comment in
`get_recipient_info` related to this code
that I amended for now.
We should try to take action in a subsequent
commit.
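After the simplification, the shape is roughly (a sketch; the
hard-coded values match the description above, and the rest is
elided):

    from django.conf import settings
    from zerver.models import UserProfile, get_system_bot

    def notify_new_user(user_profile: UserProfile) -> None:
        # The sender is hard-coded to NOTIFICATION_BOT, and the
        # destination stream to "signups"; both used to be
        # parameters.
        sender = get_system_bot(settings.NOTIFICATION_BOT)
        stream_name = "signups"
        ...  # compose and send the signup notification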
In the next commit we're going to change what the
server sends for the following:
- page_params
- server responses to /json/users/me/presence
We will **not** yet be changing the format of the data
that we get in events when users update their presence.
It's also just a bit in flux what our final formats
will be for various presence payloads, and different
optimizations may lead us to use different data
structures in different payloads.
So for now we decouple these two things:

- raw_info: this is intended to represent a
  snapshot of the latest data from the server,
  including some data, like timestamps, that is
  only used in downstream calculations and is
  not user-facing.

- exports.presence_info: this is calculated
  info for modules like buddy_data that just
  need to know active vs. idle and
  last_active_date.
Another change that happens here is we rename
set_info_for_user to update_info_for_event,
which just makes it clear that the function
expects data in the "event" format (as opposed
to the format for page_params or server
responses).
As of now keeping the intermediate raw_info data
around feels slightly awkward, because we just
immediately calculate presence_info for any kind
of update. This may be sorta surprising if you
just skim the code and see the various timeout
constants. You would think we might be automatically
expiring "active" statuses in the client due to
the simple passage of time, but in fact the precise
places we do this are all triggered by new data
from the server and we re-calculate statuses
immediately.
(There are indirect ways that clients
have timing logic, since they ask the
server for new data at various intervals, but a
smarter client could simply expire users on its
own, or at least with a more efficient transfer
of info between it and the server. One of the
things that complicates client-side logic
is that server and client clocks may be out
of sync. Also, it's not inherently super expensive
to get updates from the server.)
The important details for the test setup here
are just the number of users who are active.
We don't need to simulate the currently awkward
way of populating this data.