On OSX, the user id and group id don't match. So while the previous
code was always wrong, it only produced incorrect output there. We can fix
this by replacing `whoami` with `id -g` for finding the current user's
group ID.
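A minimal sketch of the distinction, assuming the calling code only needs
the numeric group id (the actual change simply swaps the shell command
`whoami` for `id -g`):
```python
import os
import subprocess

uid = os.getuid()   # numeric user id of the current user
gid = os.getgid()   # numeric group id; on macOS this is typically 20 ("staff"),
                    # so it generally does not equal the user id

# Shell-level equivalent of the fix: `id -g` prints the numeric group id,
# whereas `whoami` prints the user *name*, which was never the right value.
gid_from_shell = int(subprocess.check_output(["id", "-g"]).strip())
print(uid, gid, gid_from_shell)
```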
When a user clicks the compose `+` button, create a popover at the
bottom right of the screen including buttons for opening a new stream
message or a new private message.
Use CSS to display a `+` button on mobile but keep the more verbose
buttons on desktop. In the future, this button will be used to display
a popover for a new message.
Apparently, the QUERY_STRING property of the report object wasn't
actually a string; since we only care about its string representation,
we should just stringify it.
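A minimal sketch of the coercion described, with a hypothetical shape for
the report object:
```python
from typing import Any, Dict

def get_query_string(report: Dict[str, Any]) -> str:
    # QUERY_STRING may arrive as some non-string object; since we only care
    # about its string representation, coerce it explicitly.
    return str(report.get("QUERY_STRING", ""))
```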
Apparently, we weren't resetting the query counters inside the
websockets codebase, resulting in broken log results like this:
SOCKET 403 2ms (db: 1ms/2q) /socket/auth [transport=websocket] (unknown via ?)
SOCKET 403 5ms (db: 2ms/3q) /socket/auth [transport=websocket] (unknown via ?)
SOCKET 403 2ms (db: 3ms/4q) /socket/auth [transport=websocket] (unknown via ?)
SOCKET 403 2ms (db: 3ms/5q) /socket/auth [transport=websocket] (unknown via ?)
SOCKET 403 2ms (db: 4ms/6q) /socket/auth [transport=websocket] (unknown via ?)
SOCKET 403 2ms (db: 5ms/7q) /socket/auth [transport=websocket] (unknown via ?)
SOCKET 403 2ms (db: 5ms/8q) /socket/auth [transport=websocket] (unknown via ?)
SOCKET 403 3ms (db: 6ms/9q) /socket/auth [transport=websocket] (unknown via ?)
The correct fix for this is to call `reset_queries` at the start of
each endpoint within the websockets system. As it turns out, we're
already calling `record_request_start_data` there, and in fact should
be calling `reset_queries` in all code paths that use that function
(the other code paths, in zerver/middleware.py, do it manually with
`connection.connection.queries = []`).
So with this simple refactor, we can fix this logging bug and clean
up the code in a way that reduces the risk of similar issues in the
future.
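A hedged sketch of the refactor's shape; the real
`record_request_start_data` records more timing data than shown here:
```python
import time
from typing import Any, Dict

from django.db import reset_queries  # clears Django's per-connection query log

def record_request_start_data(log_data: Dict[str, Any]) -> None:
    # Resetting the query log here means every code path that records
    # request-start data (including the websocket endpoints) begins with
    # fresh query counters, instead of each caller clearing
    # connection.connection.queries by hand.
    reset_queries()
    log_data["time_started"] = time.time()
```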
Guest users can't subscribe themselves to streams, so we shouldn't
display the subscription button at the end of the stream message view.
Fixes part of #10749.
Guest users can't subscribe themselves to any stream, so we hide the
"Subscribe" button. Previously, it was showing Subscribe button after
a guest user unsubscribed from a stream.
Fixes part of #10749.
The "notification settings" page previously advertised support for
mobile push notifications via checkboxes, even if the server hadn't
yet been registered for push notifications. This was a frequent
source of onboarding pain for new Zulip organizations.
We fix this by providing a clear warning and disabling the relevant
inputs on the settings pages.
Modified significantly by tabbott to correct some tricky logic errors
as well as some copy-paste bugs.
Fixes #10331.
This test started failing recently; the apparent cause is that
zerver/lib/generate_test_data.py sometimes generates messages
containing bulleted lists, whose rendered HTML ends with a `</ul>`
tag rather than a `</p>` tag; as a result, this test failed
nondeterministically in CI.
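An illustration of why the check was flaky, using the third-party
`markdown` package rather than Zulip's own renderer:
```python
import markdown  # pip install Markdown

html = markdown.markdown("Some text\n\n* first item\n* second item")
print(html)                   # rendered HTML ends with "</ul>", not "</p>"
print(html.endswith("</p>"))  # False, so an endswith("</p>") check fails
```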
There isn't really a useful version of this check that would cover
that case (or the case where the entire message body is a bulleted
list), so we just remove the check; I don't think it's ever caught
any actual bugs.
Fixes #10745.
Use get_display_recipient to get stream names, and remove the
references to message.stream_name in push_notifications.py which were
added in 97571a203, as the actual stream names were being retrieved
only for Message objects associated with public streams.
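A hedged sketch of the idea, not the actual push_notifications.py code
(the import path is an assumption):
```python
from zerver.models import get_display_recipient  # import path assumed

def get_message_stream_name(message) -> str:
    # Resolve the stream name from the message's recipient, rather than
    # relying on a message.stream_name attribute that was only populated
    # for public streams.
    return get_display_recipient(message.recipient)
```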
This means you'll need access to our Stripe API key to add new fixtures.
This will be undone eventually, but having it in place will make it
easier to finish the `mock.patch` to `mock_stripe` migration.
This is mostly an extraction, but it does change the
way we calculate `content`. We append the markdown
links from ALL files to any content that came in the
message itself.
Separating this out also allows us to add more
test coverage for the extracted code.
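A minimal sketch of the described `content` calculation, with hypothetical
field names for the uploaded-file records:
```python
from typing import Dict, List

def build_content(message_text: str, uploaded_files: List[Dict[str, str]]) -> str:
    # Append a markdown link for every attached file to whatever content
    # came in the message itself ("name"/"url" keys are assumptions).
    links = [f"[{file_info['name']}]({file_info['url']})" for file_info in uploaded_files]
    return "\n".join([message_text] + links) if links else message_text
```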
We now use subscriber_map for building UserMessage
rows in Slack/Gitter conversions.
This is mostly designed to simplify the code, so that we no
longer have to scan the entire set of subscriptions for each
message.
I am guessing this will improve performance for most
conversions. We sort small lists on every message,
in order to be deterministic, but the sorting cost
is probably more than offset by avoiding the O(N)
scans across all subscriptions. Also, it's probably
negligible in the grand scheme of things, compared
to JSON parsing, file I/O, etc.
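A hedged sketch of the subscriber_map idea, with hypothetical field names
for the exported subscription records:
```python
from collections import defaultdict
from typing import Any, Dict, List, Set

def build_subscriber_map(zerver_subscription: List[Dict[str, Any]]) -> Dict[int, Set[int]]:
    # Map each recipient id to the set of subscribed user ids once, instead
    # of scanning every subscription row for every message.
    subscriber_map: Dict[int, Set[int]] = defaultdict(set)
    for sub in zerver_subscription:
        subscriber_map[sub["recipient"]].add(sub["user_profile"])
    return subscriber_map

def subscriber_ids_for(subscriber_map: Dict[int, Set[int]], recipient_id: int) -> List[int]:
    # Sort the (small) per-recipient set so UserMessage rows are built in a
    # deterministic order; this is the per-message sorting cost noted above.
    return sorted(subscriber_map.get(recipient_id, set()))
```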
This commit also fixes some typos with mentioned_users_id ->
mentioned_user_ids and cleans up a test a bit as well.
We now have all three third-party
conversions (Gitter/Slack/Hipchat)
go through build_user_message().
Hipchat was already using this helper.
We also avoid callers having to pass in
an id to build_user_message().
When you send a message to a bot that wants
to talk via an outgoing webhook, and there's
an error (e.g. server is down), we send a
message to the bot's owner that links to the
message that triggered the error.
The code to produce those links was out of
date.
Now we move the important code to the
`url_encoding.py` library and fix the PM
links to use the more modern style (user_ids
instead of emails). We also replace "subject"
with "topic" in the stream urls.
We want to avoid `blueslip.error` in cases where
the root cause could just be bad data that is
human-entered.
There are a few callers here who **should** be
sending good data all the time, but hopefully
they either have good test coverage, other
obvious failure symptoms, or, ideally, just
do what the user would mostly expect in the
face of bad data.
This supports guest users in the user-info-form-modal as well as in the
role section of the admin-user-table.
With some fixes by Tim Abbott and Shubham Dhama.
The purpose of this commit is to pass information to the frontend
about whether the message response received has been limited due to
plan restrictions.
To implement this, the backend for limiting the message history had
to be rewritten, since we used to fetch only the message rows whose
id was greater than first_visible_message_id. The filtered rows give
us no information on whether the message history was limited or not.
So the backend was rewritten to not restrict the message rows in the
query itself; the limiting of rows is now done in
post_process_limited_query, which also returns the value of the
history_limited flag.
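A hedged sketch of the post-processing idea, with a hypothetical helper
name and a simplified signature (the real post_process_limited_query does
more bookkeeping, and as noted below some edge cases remain imperfect):
```python
from typing import Any, Dict, List, Sequence, Tuple

def limit_query_rows(rows: Sequence[Dict[str, Any]],
                     first_visible_message_id: int) -> Tuple[List[Dict[str, Any]], bool]:
    # Instead of filtering by first_visible_message_id in the SQL query,
    # fetch the rows unrestricted and drop the hidden ones here, so we can
    # tell whether the plan's history limit actually removed anything.
    visible_rows = [row for row in rows if row["id"] >= first_visible_message_id]
    history_limited = len(visible_rows) < len(rows)
    return visible_rows, history_limited
```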
Tweaked by tabbott to note a few cases where the results are
incorrect. I'm merging this despite those, because those cases don't
impact the correctness of the feature, and fixing them correctly may
have tricky performance implications.