The test named `test_archiving_messages_with_attachment`
started flaking recently. We now compare sets
instead of lists, to avoid failures from arbitrary ordering differences.
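In miniature (with made-up ids), the idea is just:

```python
# Minimal sketch (hypothetical ids): compare sets instead of lists so the
# assertion doesn't depend on the order the rows happen to come back in.
expected_ids = [3, 1, 2]   # the ids we expect, in whatever order we wrote them
actual_ids = [1, 2, 3]     # the ids the query happens to return
assert set(actual_ids) == set(expected_ids)
```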
Masking content can be useful for testing
out conversions where you're dealing
with data from customers and want to avoid
inadvertently reading their content (while
still having semi-realistic messages).
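A minimal sketch of the masking idea (the function and its rule are illustrative, not the actual implementation):

```python
import re

def mask_content(text: str) -> str:
    # Replace letters with 'x' and digits with '0', keeping whitespace and
    # punctuation, so the masked message still has a realistic shape.
    text = re.sub(r"[A-Za-z]", "x", text)
    return re.sub(r"[0-9]", "0", text)

print(mask_content("Call me at 555-1234 tomorrow!"))  # -> "xxxx xx xx 000-0000 xxxxxxxx!"
```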
Having two smaller functions should make it
easier to customize the behavior for each specific
use case. The only reason they were ever coupled
was to keep ids in sequence, but the recent NEXT_ID
changes make that a non-issue now.
We now instantiate NEXT_ID in sequencer.py, which avoids
having multiple modules each make their own copy of a sequencer
and possibly cause id collisions.
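The shape of the shared sequencer is roughly this (a sketch; the real sequencer.py may differ in detail):

```python
# sequencer.py (sketch): a single, module-level sequencer instance.
from typing import Callable, Dict

def sequencer() -> Callable[[str], int]:
    counters: Dict[str, int] = {}

    def next_id(name: str) -> int:
        # Each name ('user', 'message', ...) gets its own monotonically
        # increasing counter.
        counters[name] = counters.get(name, 0) + 1
        return counters[name]

    return next_id

# Because this lives at module level, every module that does
# `from sequencer import NEXT_ID` shares the same counters,
# so ids can't collide across modules.
NEXT_ID = sequencer()
```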
Bots are not allowed to use the same name as
other users in the realm (either bot or human).
This is kind of a big commit, but I wanted to
combine the post/patch (aka add/edit) checks
into one commit, since it's a change in policy
that affects both codepaths.
A lot of the noise is in tests. We had good
coverage on the previous code, including some places
like event testing where we were expediently
not bothering to use different names for
different bots in some longer tests. And then
of course I test some new scenarios that are relevant
to the new policy.
There are two new functions:

    check_bot_name_available:
        very simple Django query (see the sketch after this list)

    check_change_bot_full_name:
        this diverges from the 3-line
        check_change_full_name, which is still
        used for the "humans" use case

And then we just call those in appropriate places.
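For reference, check_bot_name_available amounts to roughly the following (treat the exact fields and error type as a sketch):

```python
from django.core.exceptions import ValidationError

from zerver.models import UserProfile  # Zulip's user/bot model

def check_bot_name_available(realm_id: int, full_name: str) -> None:
    # A name is unavailable if any active user (bot or human) in the
    # realm already has it.
    dup_exists = UserProfile.objects.filter(
        realm_id=realm_id,
        full_name=full_name.strip(),
        is_active=True,
    ).exists()
    if dup_exists:
        raise ValidationError("Name is already in use!")
```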
Note that there is still a loophole here
where you can get two bots with the same
name if you reactivate a bot named Fred
that was inactive when the second bot named
Fred was created. Also, we don't attempt
to fix historical data. So this commit
shouldn't be considered any kind of lockdown;
it's just meant to keep people from
inadvertently creating two bots with the
same name. For more
context, we are continuing to allow two
human users in the same realm to have the
same full name, and our code should generally
be tolerant of that possibility. (A good
example is our new mention syntax, which disambiguates
same-named people using ids.)
It's also worth noting that our web app client
doesn't try to scrub full_name from its payload in
situations where the user has actually only modified other
fields in the "Edit bot" UI. Starting here
we just handle this on the server, since it's
easy to fix there, and even if we fixed it in the web
app, there's no guarantee that other clients won't be
just as brute force. It wasn't exactly broken before,
but we'd needlessly write rows to audit tables.
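A tiny sketch of the server-side idea (the names and types here are stand-ins, not the actual code): treat a resent-but-unchanged full_name as a no-op, so we never touch the audit tables for it.

```python
from dataclasses import dataclass

@dataclass
class Bot:  # stand-in for the real bot profile model
    full_name: str

def name_change_is_noop(bot: Bot, requested_name: str) -> bool:
    # The web app may resend full_name even when the user only edited
    # other fields; treat an identical name as "no change requested".
    return requested_name.strip() == bot.full_name

bot = Bot(full_name="Fred Bot")
assert name_change_is_noop(bot, "Fred Bot ")      # skip: nothing to write
assert not name_change_is_noop(bot, "Fred 2.0")   # real rename: proceed
```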
Fixes #10509
This is the natural behavior that most users will
probably expect. If you need to go to All Messages when
topics are zoomed in, you can just hit ESC twice.
Before this change, if you hit ESC, the hotkey
code would call search.clear_search, which would
call narrow.deactivate(), which would then use
`$('#search_query')` to clear the value, and then
search.clear_search would blur the input and
disable the exit button. It was all confusing.
Things are a bit more organized now.
Now the code works like this:

    hotkey.process_escape_key:
        Just call narrow.deactivate.

    $('#search_exit').on('click', ...):
        Just call narrow.deactivate.

    narrow.deactivate:
        Just call search.clear_search_form.

    search.clear_search_form:
        Just do simple jQuery stuff. Don't
        change the user's entire narrow, not
        even indirectly!
There's still a two-way interaction between
the narrow.js module and the search.js module,
but in each direction it's a one-liner.
The guiding principle here is that we only
want one top-level API, which is narrow.deactivate,
and that does the whole "kitchen sink" of
clearing searches, closing popovers, switching
views, etc. And then all the functions it
calls out to tend to have much smaller jobs to
do.
This commit can mostly be considered a refactoring, but the
order of operations changes slightly. Basically, as
soon as you hit ESC or click on the search "X", we
clear the search widget. Most users won't notice
any difference, because we don't have to hit the
server to populate the home view. And it's arguably
an improvement to give more immediate feedback.
If you zoom into "more topics" for a stream that has
a LOT of topics, and then scroll down to the bottom,
and then zoom out by selecting "All messages" or
similar upper-left-sidebar options, we now try to scroll
the more recently active stream back into place after we zoom
out.
Before this change, it was possible for your lower left
sidebar to appear empty, as it would keep the
scroll offset from "more topics".
If our topic list isn't zoomed in, avoid calling
stream_list.zoom_out_topics().
This commit also introduces `zoomed_in` to track
our topic zooming state.
This small module nicely breaks down the
responsibilities of topic_list and stream_list
when it comes to zooming in and out of topics
(also known as hitting "more topics" or "All
Streams").
Before this, neither module was clearly in
charge, and there were kind of complicated
callback mechanisms. The stream_list code
was asking topic_list to create click handlers
that called back into stream_list.
Now we just have topic_zoom set up its own click
handlers and delegate out to the other two
modules.
This bug was introduced very recently and is an
aliasing bug: extra UserMessage rows got created
because we inadvertently mutated underlying
subscriber_map sets shared across multiple messages.
This probably mostly affected PMs.
It's doubtful the bug ever got out into the field.
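The bug in miniature (made-up ids): mutating a shared set affects every message that aliases it, so the fix is to copy per message.

```python
# Buggy pattern (sketch): both messages alias the same set object, so
# adding a user for one message silently adds it for the other too.
subscriber_map = {101: {1, 2}}          # recipient_id -> set of user_ids

buggy_msg_a = subscriber_map[101]
buggy_msg_b = subscriber_map[101]
buggy_msg_a.add(3)
assert 3 in buggy_msg_b                 # oops: aliasing

# Fixed pattern: take a copy per message before mutating.
subscriber_map = {101: {1, 2}}
fixed_msg_a = set(subscriber_map[101])
fixed_msg_b = set(subscriber_map[101])
fixed_msg_a.add(3)
assert 3 not in fixed_msg_b             # each message has its own set
```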
Previously, MissedMessageWorker used a batching strategy of just
grabbing all the events from the last 2 minutes, and then sending them
off as emails. This suffered from the problem that you had a random
time, between 0s and 120s, to edit your message before it would be
sent out via an email.
Additionally, this made the queue hard to monitor, because it was
expected to pile up large numbers of events, even if everything was
fine.
We fix this by batching together the events using a timer; the queue
processor itself just tracks the items, and then a timer-handler
process takes care of ensuring that the emails get sent at least 120s
(and at most 130s) after the first triggering message was sent in Zulip.
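A simplified sketch of the timer approach (the constants and helper are illustrative; the real worker also deals with the queue, the database, and locking):

```python
import threading
import time
from typing import Any, Dict, List, Tuple

BATCH_WINDOW_S = 120   # wait at least this long after an event arrives
CHECK_PERIOD_S = 10    # timer wake-up interval, hence "at most ~130s"

pending: List[Tuple[float, Dict[str, Any]]] = []
lock = threading.Lock()

def consume(event: Dict[str, Any]) -> None:
    # The queue processor just records events; it never sends email itself.
    with lock:
        pending.append((time.time(), event))

def send_batch(events: List[Dict[str, Any]]) -> None:
    print(f"sending missed-message email batch for {len(events)} events")

def check_and_send() -> None:
    # Timer handler: flush any events whose batching window has expired,
    # then re-arm the timer.
    now = time.time()
    with lock:
        ready = [e for (t, e) in pending if now - t >= BATCH_WINDOW_S]
        pending[:] = [(t, e) for (t, e) in pending if now - t < BATCH_WINDOW_S]
    if ready:
        send_batch(ready)
    threading.Timer(CHECK_PERIOD_S, check_and_send).start()
```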
This introduces a new unpleasant bug, namely that when we restart a
Zulip server, we can now lose some missed_message email events;
further work is required on this point.
Fixes #6839.
The actual implementation of the change will be a cron job that runs once a
day and generates invoices for anyone with an account_balance > 0.
There are currently no tests for that part of the flow, so no tests had to
change.
These tests are for the handling of HipChat
sender info. The data formats are somewhat
inconsistent and sometimes require us to
generate "mirror" users, so this is potentially
fragile code if we don't cover it well.
We extract this function and put it in the shared
library `import_util.py`.
Also, we now build it once, higher up in the call
stack, rather than re-building it for every batch
of messages. I doubt this was super expensive, but
there's no reason to repeatedly execute this.
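The shape of the change (with hypothetical names, since the extracted function isn't spelled out here): build the lookup once, high in the call stack, then reuse it for every batch.

```python
from typing import Any, Callable, Dict, List

def build_user_lookup(users: List[Dict[str, Any]]) -> Callable[[int], Dict[str, Any]]:
    # Built once; cheap to call repeatedly afterwards.
    by_id = {user["id"]: user for user in users}
    return lambda user_id: by_id[user_id]

users = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
get_user = build_user_lookup(users)            # once, high up in the call stack
for batch in ([1], [2, 1]):                    # reused for every message batch
    senders = [get_user(user_id)["name"] for user_id in batch]
```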
Before this fix, we were creating two copies of every
PM Message in zerver_message, each with only a single
corresponding UserMessage row.
Now we only create one PM Message per message, which
we accomplish by making sure we only use imported
messages from the sender's history.json file. And
then we write UserMessage rows for both participants
by making sure to include sender_id in the set of
user_ids that feeds into making UserMessage. For
the case where you PM yourself, there's just one
UserMessage row.
It does not appear that we need to support huddles
yet.
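In miniature (hypothetical ids), the UserMessage fan-out is now:

```python
from typing import Set

def usermessage_user_ids(sender_id: int, recipient_ids: Set[int]) -> Set[int]:
    # Including sender_id guarantees the sender gets a UserMessage row;
    # for a PM to yourself this collapses to a single row.
    return {sender_id} | recipient_ids

assert usermessage_user_ids(5, {7}) == {5, 7}   # normal 1:1 PM -> two rows
assert usermessage_user_ids(5, {5}) == {5}      # PM to yourself -> one row
```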
When we create new ids for message rows, we
now sort the new ids by their corresponding
pub_date values in the rows.
This takes a sizable chunk of memory.
This feature only gets turned on if you
set sort_by_date to True in realm.json.
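Conceptually (with made-up rows and ids): allocate the block of new ids, then hand them out in pub_date order.

```python
# Sketch: assign the new (sorted) ids to rows in pub_date order, so that
# lower message ids correspond to earlier messages.
rows = [
    {"old_id": "b", "pub_date": 300},
    {"old_id": "a", "pub_date": 100},
    {"old_id": "c", "pub_date": 200},
]
new_ids = [11, 12, 13]  # pretend these came from the id allocator

for row, new_id in zip(sorted(rows, key=lambda r: r["pub_date"]), sorted(new_ids)):
    row["id"] = new_id

assert [r["old_id"] for r in sorted(rows, key=lambda r: r["id"])] == ["a", "c", "b"]
```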
We could migrate all the current PREMIUM_FREE organizations to have more
invites, but this setting mainly affects orgs right as they are starting, so
it's probably fine.
We recently received a bug report that implied that for certain
payloads, the `requested_reviewers` key was empty whereas a
singular `requested_reviewer` key containing one reviewer's
information was present in its stead. Naturally, this raised
some not-so-pretty IndexError exceptions.
After some investigation and generating a few similar payloads,
I discovered that in every case both the `requested_reviewers`
and the `requested_reviewer` keys were correctly populated, so I
had to manually edit the payload to reproduce the error on my end.
My guess is that this anomaly goes back to when GitHub's reviewer
request feature was new and didn't support requesting multiple
reviewers, and that the singular `requested_reviewer` key could
possibly just be there for backwards compatibility or might just
be an oversight. Either way, the solution here is to look for the
plural `requested_reviewers` key, and if that is empty, fall back
to the singular `requested_reviewer` key.
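The fallback is roughly this (a sketch; exactly where each key nests in GitHub's payload is an assumption here):

```python
from typing import Any, Dict, List

def get_requested_reviewers(payload: Dict[str, Any]) -> List[Dict[str, Any]]:
    # Prefer the plural list; fall back to the singular key when the
    # list is empty.
    reviewers = payload["pull_request"].get("requested_reviewers", [])
    if not reviewers and payload.get("requested_reviewer") is not None:
        reviewers = [payload["requested_reviewer"]]
    return reviewers

payload = {"pull_request": {"requested_reviewers": []},
           "requested_reviewer": {"login": "octocat"}}
assert get_requested_reviewers(payload) == [{"login": "octocat"}]
```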
Fixes #10706.
Issue: Before this commit, the `upgrade-zulip-from-git` script would
run successfully when its `refname` positional argument was a branch
name on the given remote, but it would fail if provided with a tag
or commit ID.
Solution: `git clone -q -b refname LOCAL_GIT_CACHE_DIR deploy_path`
is split into two commands:
1. `git clone -q LOCAL_GIT_CACHE_DIR deploy_path`
2. `git checkout -b deploy_timestamp refname`, which makes a new
branch with the same name as the timestamp used in make_deploy_path.