Most of the code in show_unreads is for diagnosing unread
count issues, and we may not use that often.
We're creating a dedicated fix_unreads management command with
less clutter.
This change is mostly based on a similar commit from hackerkid
in a feature branch; it borrows both code and ideas. Some of
the code is my own, since I was working on a newer branch.
We now call get_user_including_cross_realm_email() inside of
user_profiles_from_unvalidated_emails(), instead of using
get_user_profile_by_email.
This requires a few of our callers to pass `sender` down to us.
One consequence of this change is that the symptoms change when
you try to send to emails outside of your realm. In some cases,
we now simply raise an error that the email is invalid, instead
of getting into the deeper validate_recipient_user_profiles
check.
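As a rough sketch of the new shape of that helper (the argument
order of the lookup, the exception handling, and the error wording
below are assumptions for illustration, not the real implementation):

    def user_profiles_from_unvalidated_emails(emails, sender):
        # `sender` is now passed down so we can use the cross-realm-aware
        # lookup instead of get_user_profile_by_email.
        user_profiles = []
        for email in emails:
            try:
                user_profile = get_user_including_cross_realm_email(email, sender)
            except UserProfile.DoesNotExist:
                raise ValidationError("Invalid email '%s'" % (email,))
            user_profiles.append(user_profile)
        return user_profiles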
We are trying to convert emails to user_profiles earlier in
the codepath. This may cause subtle changes in which errors
appear, but it's probably generally good to report bad
addressees sooner rather than later.
This class simplifies the calling sequence to methods like
check_message and _internal_prep_message, and it's also more
type safe.
Checking for message types is encapsulated with calls to is_stream()
and is_private(). There are also shortcut constructors for when
you already know the type of the address (stream vs. private),
which is often the case.
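A minimal sketch of the shape such a class might take (the
is_stream()/is_private() checks are from this commit; the class
name, field names, and constructor signatures below are
illustrative guesses):

    class Addressee(object):
        # msg_type is 'stream' or 'private'.
        def __init__(self, msg_type, user_profiles=None, stream_name=None, topic=None):
            self._msg_type = msg_type
            self._user_profiles = user_profiles
            self._stream_name = stream_name
            self._topic = topic

        def is_stream(self):
            return self._msg_type == 'stream'

        def is_private(self):
            return self._msg_type == 'private'

        # Shortcut constructors for when the caller already knows the
        # type of the address.
        @staticmethod
        def for_stream(stream_name, topic):
            return Addressee('stream', stream_name=stream_name, topic=topic)

        @staticmethod
        def for_private(user_profiles):
            return Addressee('private', user_profiles=user_profiles)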
Here we basically seed a single message for the user who will be
soft deactivated, by sending a stream message / group PM, to
ensure that the user has at least one UserMessage row, since in
the real world every human user will always have at least one
UserMessage row.
This causes `upgrade-zulip-from-git`, as well as a no-option run of
`tools/build-release-tarball`, to produce a Zulip install running
Python 3, rather than Python 2. In particular this means that the
virtualenv we create, in which all application code runs, is Python 3.
One shebang line, on `zulip-ec2-configure-interfaces`, explicitly
keeps Python 2, and at least one external ops script, `wal-e`, also
still runs on Python 2. See discussion on the respective previous
commits that made those explicit. There may also be some other
third-party scripts we use, outside of this source tree and running
outside our virtualenv, that still run on Python 2.
An expression like `force_bytes(chr(...))`, on Python 3 where
`force_bytes` actually has something to do because `chr` returns
a text string, gives the UTF-8 encoding of the given value,
interpreted as a Unicode codepoint.
Here, we don't want that -- rather we want the given value as a
single byte. We can do that with `struct.pack`.
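A concrete illustration, assuming Django's force_bytes (which
encodes text as UTF-8):

    import struct
    from django.utils.encoding import force_bytes

    force_bytes(chr(0xFE))   # b'\xc3\xbe' -- UTF-8 encoding of U+00FE
    struct.pack('B', 0xFE)   # b'\xfe'     -- the single byte we actually want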
This fixes an issue where the "Link with Webathena" flow was producing
invalid credential caches when run on Python 3, breaking the Zephyr
mirror for any user who went through it anew.
This management command creates the same indexes as migrations
82, 83, and 95, which are all indexes on the huge UserMessage
table. (*)
This command quickly no-ops with clear messaging when the
indexes already exist, so it's idempotent in that regard. (If
somebody somehow creates an index by the same name incorrectly,
they can always drop it in dbshell and re-run this command.)
If any of the migrations have not been run, which we detect
simply by checking whether the indexes exist, then we create the
missing indexes with `CREATE INDEX CONCURRENTLY`. This functionality in
postgres allows you to create indexes against large tables
without disrupting queries against those tables. The tradeoff
here is that creating indexes concurrently takes significantly
longer than doing them non-concurrently.
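As a rough sketch of that pattern (the helper name comes from this
commit message, but its signature and body here are assumptions,
not the actual implementation):

    from django.db import connection

    def create_index_if_not_exist(index_name, table_name, column_string, where_clause):
        # type: (str, str, str, str) -> None
        with connection.cursor() as cursor:
            # Quickly no-op with clear messaging if the index already exists.
            cursor.execute("SELECT 1 FROM pg_class WHERE relname = %s", [index_name])
            if cursor.fetchone():
                print("Index %s already exists." % (index_name,))
                return
            # CONCURRENTLY builds the index without blocking reads/writes on
            # the table; note it cannot run inside a transaction block.
            cursor.execute("CREATE INDEX CONCURRENTLY %s ON %s (%s) %s" % (
                index_name, table_name, column_string, where_clause))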
Since most tables are small, we typically just use regular
Django migrations and run them during a brief interval while
the app is down.
For indexes on big tables, we will want to run this command
as part of the upgrade process, and we will want to run
it while the app is still up, otherwise it's pointless.
All the code in create_indexes() is literally copy/pasted
from the relevant migrations, and that scheme should work
going forward. (It uses a different implementation of
create_index_if_not_exist than the migrations use, but the
body of the function is lexically identical.)
If we ever do major restructuring of our large tables, such
as UserMessage, and we end up dropping some of these indexes,
then we will need to make this command migrations-aware. For
now it's safe to assume that indexes are generally additive in
nature, and the sooner we create them during the upgrade process,
the better.
(*) UserMessage is huge for large installations, of course.
Before this change, server searches for both
`is:mentioned` and `is:alerted` would return all messages
where the user is specifically mentioned (but not
wildcard mentions such as @all).
Now we follow the JS semantics:
    is:mentioned -- all mentions, including wildcards
    is:alerted -- has an alert word
Here is one relevant JS snippet:
    } else if (operand === 'mentioned') {
        return message.mentioned;
    } else if (operand === 'alerted') {
        return message.alerted;
And here you see that `mentioned` is OR'ed over both mention flags:
    message.mentioned = convert_flag('mentioned') || convert_flag('wildcard_mentioned');
The `alerted` flag on the JS side is a simple mapping:
    message.alerted = convert_flag('has_alert_word');
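In rough Python terms, the server-side conditions we now want look
like this (the flag mask values below are purely illustrative, not
the real ones):

    MENTIONED = 1 << 3            # illustrative mask values only
    WILDCARD_MENTIONED = 1 << 4
    HAS_ALERT_WORD = 1 << 9

    def matches_is_mentioned(flags):
        # type: (int) -> bool
        # all mentions, including wildcards
        return bool(flags & (MENTIONED | WILDCARD_MENTIONED))

    def matches_is_alerted(flags):
        # type: (int) -> bool
        # has an alert word
        return bool(flags & HAS_ALERT_WORD)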
Fixes #5020.
Given typeahead and the fact that this only worked if the person
had a full name that didn't contain whitespace, this side effect
of the original @shortname mention feature that we removed was
experienced by users as a bug.
Fixes #6142.
We apparently were using the default of num_before=1, not
num_before=0, which meant that if the very last randomly generated
message was one by cordelia mentioning lunch,
test_get_messages_with_search would fail because there were actually 3
matches.
This adds the authors of the Zulip repository on GitHub to
/authors/, along with re-styling the page to fit the same
aesthetic as /for/open-source/ and other product pages.
This fixes the significant duplication of code between the
authenticate_log_and_execute_json code path and the `validate_api_key`
code path.
There's still a bit of duplication, in the form of `process_client`
and `request._email` interactions, but it is very minor at this point.
The old iOS app has been gone from the app store for 8 months, never
had a huge userbase, and its latest version didn't need this hack. So
this code is unlikely to do anything in the future; remove it to
declutter our authentication decorators codebase.
The check itself was correct, but the error message was in fact the
opposite of what this check is for. In other words, the only thing
these users can do is post messages, and the error message when you
tried to do something else was to tell you that the user can't post
messages.
This technically changes the behavior in the case that
!settings.ZILENCER_ENABLED but is_remote_zulip_server(role).
Fortunately, that case is mostly irrelevant (in that remote Zulip
servers are a Zilencer feature). The old behavior was also probably
slightly wrong, in that you'd get a zilencer-specific error message in
that case.
The new endpoints are:
    /json/mark_stream_as_read: takes stream name
    /json/mark_topic_as_read: takes stream name, topic name
The /json/flags endpoint no longer allows streams or topics
to be passed in as parameters.
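Hypothetical client-side usage, for illustration only (the
parameter names and the authenticated-session setup are
assumptions, not part of this commit):

    import requests

    session = requests.Session()  # assume this session is already authenticated
    site = "https://zulip.example.com"

    # Mark every message in a stream as read.
    session.post(site + "/json/mark_stream_as_read",
                 data={"stream_name": "Denmark"})

    # Mark a single topic as read.
    session.post(site + "/json/mark_topic_as_read",
                 data={"stream_name": "Denmark", "topic_name": "lunch"})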
This function optimizes marking streams and topics as read,
by using UserMessage.where_unread(), which uses a partial
index on the "read" flag.
This also simplifies the code path for ordinary message
flag updates.
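A sketch of the kind of helper this relies on (the exact SQL
fragment and call shape are assumptions; the key point is that the
condition must match the partial index's predicate so postgres can
use it):

    from django.db import models

    class UserMessage(models.Model):
        # ... existing fields ...

        @staticmethod
        def where_unread():
            # type: () -> str
            # Matches the partial index's predicate on the "read" bit of
            # flags, so postgres can use that index for unread-only queries.
            return 'flags & 1 = 0'

    # Used roughly like:
    #   UserMessage.objects.filter(...).extra(where=[UserMessage.where_unread()])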
In order to keep 100% line coverage, I simplified the
logging in update_message_flags, so now all requests
will show the "actually" format.
This is an interim step toward creating dedicated endpoints
for marking streams/topics as read, so we do the error checking
for flag/operation with asserts, which avoids introducing a
temporary translation string.
This is mostly a pure code extraction, except that we now
disregard the `messages` option for stream/topic updates,
since the web app always passes in an empty list (and this
commit is really just an incremental step toward creating
new endpoints).
This is the first part of a larger migration to convert Zulip's
reactions storage to something based on the codepoint, not the emoji
name that the user typed in, so that we don't need to worry about
changes in the names we're using breaking the emoji storage.
We recently changed the populate_db data set to include more variable
message content, which happened to include the possibility of the word
"lunch" appearing in the test messages. This caused occasional
failures of the search tests that looked for messages containing
"lunch" starting at the beginning of time, not the beginning of the
test.
This commit adds new helper functions:
* get_users_for_soft_deactivation(): This function can be used to
fetch a list of human users who meet the minimum inactivity
period (in days) passed as a parameter to the function.
* do_soft_activate_users(): Given a list of users, this function
reactivates them and helps them catch up with the missing message
rows for them in the UserMessage table.
This function will help us create an undisturbed experience for
returning soft-deactivated users.
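A hypothetical usage sketch (the import path and argument forms
are assumptions for illustration):

    from zerver.lib.soft_deactivation import (
        do_soft_activate_users,
        get_users_for_soft_deactivation,
    )

    # Human users who have been inactive for at least 90 days are
    # candidates for soft deactivation.
    candidates = get_users_for_soft_deactivation(90)

    # When soft-deactivated users return, reactivate them and backfill
    # their missing UserMessage rows.  (Called on `candidates` here only
    # to show the call shape; in practice you'd pass the returning users.)
    do_soft_activate_users(candidates)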
Tweaked by tabbott to fix minor performance and clarity issues.
This field is convenient for bankruptcy checks. Clients could
calculate it from page_params.unread_msgs before this change, but
it would be kind of a painful calculation.
To add count, we had to simplify the mypy annotations, which weren't
really accurate before.
Update Email, Beanstalk, Hubot, JIRA, and Trello integrations
links.
The Hubot integrations section (/integrations#hubot-integrations)
was removed in an earlier redesign of /integrations. This commit
replaces the link with the hubot-scripts organization on
GitHub, which displays the comprehensive list of all integrations
available via Hubot.
Fixes #5875.
We were exiting this function in certain cases before updating
mentions. This bug was always there, but whether the tests failed
depended on flaky details of the database setup, so now the
relevant test sends three consecutive messages.
We also avoid putting duplicate message ids in mentions.
This should significantly improve the user experience for new users
signing up with GitHub/Google auth. It comes complete with tests for
the various cases. Further work may be needed for LDAP to not prompt
for a password, however.
Fixes #886.
This allows us to go to the registration form directly. This
behavior is similar to what we follow in GitHub OAuth. Before
this, in the registration flow, if an account was not found, the
user was asked whether they wanted to go to the registration
flow. That confirmation behavior is still followed for the login
OAuth path.