`wal-g wal-push` has a known bug where it occasionally hangs after
uploading a file to S3[1]; set a fairly long timeout on the upload
process so that we don't simply stall forever when archiving WAL
segments.
[1] https://github.com/wal-g/wal-g/issues/656
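A minimal sketch of the kind of change this implies, assuming WAL
archiving is driven by `archive_command` in postgresql.conf; the
wrapper and exact timeout value are illustrative, not the actual
setting used:

```
# postgresql.conf (illustrative): bound how long a single wal-push may
# run, so a hung upload fails and is retried instead of stalling
# archiving indefinitely.
archive_command = 'timeout 600 wal-g wal-push %p'
```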
Update the New Relic webhook and its tests to match the format
specified in the New Relic documentation. The new format sends a JSON
body instead of using URL parameters. According to New Relic support
staff, the old format is no longer supported; as a result, the fixtures
for the old test cases were removed and fixtures for the new test cases
were added.
Fixes: #16393.
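A hedged illustration of the new request shape, a JSON document in the
request body rather than URL parameters; the URL and field names below
are placeholders, not the actual New Relic payload schema:

```python
import requests

# Placeholder payload; real New Relic alerts define their own fields.
payload = {"id": 42, "title": "CPU usage above threshold", "severity": "CRITICAL"}

requests.post(
    "https://example.zulipchat.com/api/v1/external/newrelic?api_key=<key>&stream=alerts",
    json=payload,  # sent as a JSON body with Content-Type: application/json
)
```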
We seem to periodically get contributors who rebase upstream commits
onto their pull request rather than the other way around, resulting in
a lot of GitHub noise. The PRs where this happens were made from
branches named master. We have always documented that you should work
on a feature branch, but not from this page; maybe this will help
reduce that kind of confusion.
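An illustrative version of the recommended workflow (branch and remote
names are examples): develop on a feature branch, and bring in upstream
changes by rebasing your branch onto upstream, never by rebasing
upstream commits onto your branch.

```
git fetch upstream
git checkout -b my-feature upstream/master
# ...commit your work...
git pull --rebase upstream master
```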
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The changes made in this commit are as follows:
* The `remove_messages` function is moved from `ui.js` to
  `message_events.js`.
* We refactor `MessageListData.change_message_id` to no
  longer require an `opts` parameter; the function now just
  returns whether we need to rerender.
The blueslip error block can be removed: since we changed to no longer
defer the data updates in commit 3b5ba6b2c1, this case can no longer
occur.
The changes made in this commit are as follows:
* We remove the now unused `ui.find_message` which was added
in commit 1666403850.
* We change the function parameter to accept message ids
  instead of messages, eliminating a redundant message id to
  message conversion, since only the id is required.
* The remove method in MessageListData removed messages only
  from the items list, not from the hash; this fixes it.
* This commit also fixes a bug where messages were not added
  to the current message list when an event was received moving
  messages into the current narrow; only the message removal
  logic was present, and it has been refactored in this commit.
For 3000 messages and 400 users, this saved
about 30 seconds.
We only do two queries per batch of messages
now, and the algorithm is easier to analyze,
as it's just three nested loops.
Note that we are much more efficient about finding
active users here:
- we do one query per realm (instead of per-user)
- we pass the cutoff date to the database
- we get back just a list of distinct ids
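A hedged sketch of that per-realm query; the import path and field
names (`last_visit`, `user_profile`) are assumptions for illustration,
and `realm` and the cutoff are placeholders:

```python
from datetime import timedelta

from django.utils.timezone import now

from zerver.models import UserActivity  # assumed import path

cutoff = now() - timedelta(days=5)  # illustrative cutoff

# One query for the whole realm, with the cutoff applied in the
# database, returning only a list of distinct user ids.
active_user_ids = set(
    UserActivity.objects.filter(
        user_profile__realm=realm,
        last_visit__gt=cutoff,
    )
    .values_list("user_profile_id", flat=True)
    .distinct()
)
```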
This function is going away completely soon. It is
querying everybody's entire UserActivity history instead
of passing the cutoff date to the database!
The query counts increase here for somewhat
contrived reasons. The tests before this
commit reflected a successful trip to the
UserProfile cache, but that's not actually
realistic in practice.
We don't need to mock the dates here. We also
explicitly clear out all streams first, and then
we explicitly test with both the stream being
current and the stream being old.
We can use the _enqueue_emails_for_realm helper
to avoid all the Tuesday-related logic here.
We also don't bother to create UserActivity
records, since the bot gets excluded by virtue
of its being a bot. (Also, the date ranges
here were sketchy due to the time mocking.)
We can avoid all the date mocking now for all
but a couple tests that exercise the is-it-Tuesday
logic.
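For those remaining tests, a hedged sketch of the kind of date mocking
involved; the patched symbol is an assumption for illustration:

```python
from datetime import datetime, timezone
from unittest import mock

tuesday = datetime(2020, 1, 7, 12, 0, tzinfo=timezone.utc)  # a Tuesday
with mock.patch("zerver.lib.digest.timezone_now", return_value=tuesday):
    ...  # exercise the is-it-Tuesday scheduling logic under test
```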
And this test now correctly tests that we exclude
recently active users.
And this allows us to remove the other test.
Logging `Host` is useful for determining access patterns to realms,
especially if ROOT_DOMAIN_LANDING_PAGE is set. Total response time is
useful in debugging access and performance patterns.
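A hedged sketch of an nginx `log_format` carrying those two fields; the
format name and surrounding fields are illustrative:

```
log_format combined_host '$host $remote_addr - $remote_user [$time_local] '
                         '"$request" $status $body_bytes_sent '
                         '"$http_referer" "$http_user_agent" $request_time';
```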
We add navigating to the user's home directory inside the WSL virtual
disk as another step, since many users clone Zulip inside a mounted
Windows disk and run into permission issues when running provision.
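A hedged sketch of the documented step; in practice you would clone
your own fork rather than the upstream URL shown here:

```
# Work inside the WSL filesystem (e.g. your home directory), not under
# a mounted Windows drive like /mnt/c.
cd ~
git clone https://github.com/zulip/zulip.git
```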