Instead of having `notify_new_user` delegate
all the heavy lifting to `send_signup_message`,
we just rename `send_signup_message` to be
`notify_new_user` and remove the one-line
wrapper.
We remove a lot of obsolete complexity:
- `internal` was no longer ever set to True
by real code, so we kill it off, along with
the internal_blurb code and the now-obsolete
test
- the `sender` parameter was actually an
email, not a UserProfile, but I think
that got past mypy due to the caller
passing in something from settings.py
- we were only passing in NOTIFICATION_BOT
for the sender, so we just hard code
that now
- we eliminate the verbose
`admin_realm_signup_notifications_stream`
parameter and just hard code it to
"signups"
- we weren't using the optional realm
parameter
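
Schematically, the before/after looks roughly like this (a sketch
based on the description above, with bodies elided; the real
signatures may have differed in details):

    from django.conf import settings  # for NOTIFICATION_BOT

    # Before (roughly): a one-line wrapper plus a do-everything helper.
    def notify_new_user(user_profile, internal=False):
        send_signup_message(settings.NOTIFICATION_BOT, "signups",
                            user_profile, internal)

    # sender was typed as a UserProfile but actually received an
    # email string from settings.py.
    def send_signup_message(sender, admin_realm_signup_notifications_stream,
                            user_profile, internal=False, realm=None):
        ...

    # After (roughly): one function; the sender is hard-coded to
    # NOTIFICATION_BOT and the stream to "signups".
    def notify_new_user(user_profile):
        ...
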
There's also a long ugly comment in
`get_recipient_info` related to this code
that I amended for now.
We should try to address it in a subsequent
commit.
In the next commit we're going to change what the
server sends for the following:
- page_params
- server responses to /json/users/me/presence
We will **not** yet be changing the format of the data
that we get in events when users update their presence.
Our final formats for the various presence
payloads are also still a bit in flux, and
different optimizations may lead us to use
different data structures in different payloads.
So for now we decouple these two things:
- raw_info: this is intended to represent a
snapshot of the latest data from the
server, including some data like
timestamps that are only used
in downstream calculations and not
user-facing
- exports.presence_info: this is calculated
info for modules like buddy_data that
just need to know active vs. idle and
last_active_date
Another change here is that we rename
set_info_for_user to update_info_for_event,
which just makes it clear that the function
expects data in the "event" format (as opposed
to the format for page_params or server
responses).
As of now keeping the intermediate raw_info data
around feels slightly awkward, because we just
immediately calculate presence_info for any kind
of update. This may be sorta surprising if you
just skim the code and see the various timeout
constants. You would think we might be automatically
expiring "active" statuses in the client due to
the simple passage of time, but in fact the precise
places we do this are all triggered by new data
from the server and we re-calculate statuses
immediately.
(There are indirect ways that clients
have timing logic, since they ask the
server for new data at various intervals, but a
smarter client could simply expire users on its
own, or at least with a more efficient transfer
of info between it and the server. One of
the things that complicate client-side logic
is that server and client clocks may be out
of sync. Also, it's not inherently super expensive
to get updates from the server.)
The important details for the test setup here
are just the number of users who are active.
We don't need to simulate the currently awkward
way of populating this data.
The _.each calls with an inline function expression have already been
converted to for…of loops. We could do that here, but using .forEach
when we’re just reusing an existing function seems like a good
guideline.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
The predicate is expected to return a function, not a boolean. The
boolean true was causing _.filter to match items with a property named
"true", which is definitely not what was intended. Matching no items
is probably also not intended, but matching every item causes the test
to fail.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This should somewhat reduce the gravity of the failure mode for cases
where the message the user clicked cannot be found (which would be a
significant bug on its own merit in any case).
Since the recent Map migration, the keys for message_store are
intended to be integer message IDs, not strings (and they likely were
always intended to be integers; the failure mode may simply have
shifted).
This may just be a new bug, but it may also fix #9549; certainly
we'll want to redo any investigation with this fix in place.
Fixes #9549.
We just get the stream_name from the sub struct now.
This mostly affects node tests.
The only place in real code where we called add_sub()
was when we initialized data from the server.
We now have 100% line coverage on 71 JS files.
This is thanks to about 150 people who have
contributed code to frontend/node_tests.
Another 126 files are still short of 100% line
coverage.
We now enforce line coverage with a set called
EXEMPT_FILES, which are the files for which
we do NOT expect to have 100% coverage.
Using an exemption list makes it so that adding
a new JS file to the project without 100% line
coverage will cause the build to fail. This will
encourage folks to be intentional about their
lack of test coverage.
If a file that had 100% coverage somehow regressed
to 0% coverage, we would report an error to the
console, but we weren't treating it as an actual
failure.
We've probably always had this bug, but it was
probably rarely an issue, since devs might have
seen the error locally, or whatever crazy thing
you did to totally remove coverage would
hopefully have had other symptoms.
If this was intentional, I suspect it might have
had something to do with wanting to get coverage
reports when you just run individual tests. But
a while back we changed it so that when you run
individual tests, we don't do the line coverage
enforcement.
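
Concretely, the enforcement amounts to something like the following
(a minimal sketch; the EXEMPT_FILES entries, file path, and
coverage-summary format here are illustrative, not the actual tool
code):

    import json
    import sys

    # Files for which we do NOT expect 100% line coverage (illustrative).
    EXEMPT_FILES = {"frontend/js/some_legacy_module.js"}

    with open("coverage-summary.json") as f:  # illustrative path/format
        summary = json.load(f)

    failed = False
    for filename, data in summary.items():
        if filename in EXEMPT_FILES:
            continue
        if data["lines"]["pct"] < 100:
            # Report it *and* treat it as a real failure, not just a
            # console error.
            print(f"ERROR: {filename} no longer has 100% line coverage")
            failed = True

    if failed:
        sys.exit(1)
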
This avoids an unnecessary join to UserProfile.
To verify this, you can do `print(queries)` in the
`test_get_custom_profile_fields_from_api` test. It's
kinda noisy, so I excerpted them below...
Before:
SELECT ...
FROM "zerver_customprofilefieldvalue"
INNER JOIN "zerver_userprofile" ON ("zerver_customprofilefieldvalue"."user_profile_id" = "zerver_userprofile"."id")
INNER JOIN "zerver_customprofilefield" ON ("zerver_customprofilefieldvalue"."field_id" = "zerver_customprofilefield"."id")
WHERE "zerver_userprofile"."realm_id" = 2
After:
SELECT ...
FROM "zerver_customprofilefieldvalue"
INNER JOIN "zerver_customprofilefield" ON ("zerver_customprofilefieldvalue"."field_id" = "zerver_customprofilefield"."id")
WHERE "zerver_customprofilefield"."realm_id" = 2'
I don't have any way to measure the two queries with
realistic data, but I would assume the second
query is significantly faster on most of our instances,
since CustomProfileField should be tiny.
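
In ORM terms the change is roughly the following (a sketch;
CustomProfileFieldValue is the model behind the table above, and the
exact queryset in the real code path may look a bit different):

    # Before: scope by realm via a join through UserProfile.
    CustomProfileFieldValue.objects.filter(user_profile__realm_id=realm.id)

    # After: scope by the field's realm directly; no UserProfile join.
    CustomProfileFieldValue.objects.filter(field__realm_id=realm.id)
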
I am trying to optimize a query in this endpoint.
I don't think I'll actually reduce the number of
queries, but I wanted to capture the query and
this was the easiest way to do it, so might as
well check in the code! :)
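
For reference, capturing the queries in a test looks roughly like
this (queries_captured is the existing test helper; the endpoint URL
here is just an example):

    with queries_captured() as queries:
        result = self.client_get("/json/realm/profile_fields")
    print(queries)
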
The line removed here is a noop, as both sides of the
immediately following conditional reassign the
same variable.
This harmless cruft was the result of the recent commit
1ae5964ab8, which added
support for single-user GETs.
This fixes a bug where our asynchronous requests were only copying the
Content-Type header (i.e. the one case we'd noticed) from the Django
HttpResponse. I'm not sure what the impact of this would be; the
rate-limiting headers rarely come up when breaking a long-polled
request. But it seems like a clear improvement to do this in a
consistent fashion.
Only the headers piece is a change; in Tornado

    self.finish(x)

is equivalent to:

    self.write(x)
    self.finish()
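
For context, the consistent version amounts to something like this
(a sketch assuming a Tornado RequestHandler relaying a Django
HttpResponse; the helper name here is made up):

    def relay_django_response(handler, response):
        handler.set_status(response.status_code)
        # Copy every header from the Django response, not just
        # Content-Type.
        for header, value in response.items():
            handler.set_header(header, value)
        handler.write(response.content)
        handler.finish()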