Without the fix here, you will get an exception
similar to the one below if you try to invite one of the
cross realm bots. (The actual exception is
a bit different due to some rebasing on my branch.)
File "/home/zulipdev/zulip/zerver/lib/request.py", line 368, in _wrapped_view_func
return view_func(request, *args, **kwargs)
File "/home/zulipdev/zulip/zerver/views/invite.py", line 49, in invite_users_backend
do_invite_users(user_profile, invitee_emails, streams, invite_as)
File "/home/zulipdev/zulip/zerver/lib/actions.py", line 5153, in do_invite_users
email_error, email_skipped, deactivated = validate_email(user_profile, email)
File "/home/zulipdev/zulip/zerver/lib/actions.py", line 5069, in validate_email
return None, (error.code), (error.params['deactivated'])
TypeError: 'NoneType' object is not subscriptable
Obviously, you shouldn't try to invite a cross
realm bot to your realm, but we want a reasonable
error message.
RESOLUTION:
Populate the `code` parameter for `ValidationError`.
BACKGROUND:
Most callers to `validate_email_for_realm` simply catch
the `ValidationError` and then report a more generic error.
That's also what `do_invite_users` does, but it has the
somewhat convoluted codepath through `validate_email`
that triggers this code:
    try:
        validate_email_for_realm(user_profile.realm, email)
    except ValidationError as error:
        return None, (error.code), (error.params['deactivated'])
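To make the fix concrete, here is a hedged sketch of what populating
`code` (and `params`) looks like; the bot-email check and the message
text below are illustrative stand-ins, not the exact code in this commit:

```
# Sketch only: the check and strings are placeholders for illustration.
from django.core.exceptions import ValidationError
from django.utils.translation import gettext as _

CROSS_REALM_BOT_EMAILS = {"notification-bot@zulip.com"}  # illustrative

def validate_email_for_realm(target_realm, email: str) -> None:
    if email.lower() in CROSS_REALM_BOT_EMAILS:
        # Populate both `code` and `params`, so callers unpacking
        # error.code and error.params['deactivated'] never hit None.
        raise ValidationError(
            _("%s is reserved for system bots") % (email,),
            code=_("Reserved for system bots."),
            params={'deactivated': False},
        )
    # ... remaining validation (duplicate accounts, deactivated users, etc.)
```

With `code` and `params` populated, the tuple unpacking in
`validate_email` above no longer trips over `None`.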
The way that we're using the `code` parameter for
`ValidationError` feels hacky to me. The intention
behind `code` is to provide a descriptive error to
calling code; it's not intended for humans, so it feels
strange that we actually translate it in
other places. Here are the Django docs:
https://docs.djangoproject.com/en/3.0/ref/forms/validation/
And then here's an example of us actually translating
a code (not part of this commit, just providing context):
    raise ValidationError(_('%s already has an account') %
                          (email,), code = _("Already has an account."),
                          params={'deactivated': False})
Those codes eventually get put into InvitationError, which
inherits from JsonableError, and we do actually display
these errors in the webapp:
    if skipped and len(skipped) == len(invitee_emails):
        # All e-mails were skipped, so we didn't actually invite anyone.
        raise InvitationError(_("We weren't able to invite anyone."),
                              skipped, sent_invitations=False)
I will try to untangle this somewhat in upcoming commits.
We allow folks to invite emails that are
associated with a mirror_dummy account.
We had a similar test already for registration,
but not invites.
This logic typically affects MIT realms in the
real world, but the logic should apply to any
realm, so I use accounts from the zulip realm
for convenient testing. (For example, we might
run an IRC mirror for a non-MIT account.)
I use a range here because there's some leak
from another test that causes the count to
vary. Once we get this a bit more under control,
we should be able to analyze the leak better.
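For concreteness, the general shape of such a test looks something like
the sketch below; the class name, choice of users, and query counts are
illustrative, it assumes Zulip's standard test helpers (`ZulipTestCase`,
`queries_captured`), and it calls `do_invite_users` directly rather than
going through the HTTP endpoint:

```
from zerver.lib.actions import do_invite_users
from zerver.lib.test_classes import ZulipTestCase
from zerver.lib.test_helpers import queries_captured
from zerver.models import get_stream


class InviteMirrorDummyUserTest(ZulipTestCase):
    # A sketch, not the literal test added in this commit.
    def test_invite_mirror_dummy_user(self) -> None:
        inviter = self.example_user('cordelia')

        # Pretend hamlet is a mirror_dummy account (e.g. created by an
        # IRC mirror) that has never actually registered.
        dummy = self.example_user('hamlet')
        dummy.is_mirror_dummy = True
        dummy.is_active = False
        dummy.save()

        stream = get_stream('Denmark', inviter.realm)
        with queries_captured() as queries:
            do_invite_users(inviter, [dummy.delivery_email], [stream])

        # Assert a range rather than an exact count, because of the
        # cross-test leak described above.
        self.assertTrue(35 <= len(queries) <= 50)
```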
The substantive improvement here is to use
an unusual casing for Hamlet's email, which
will help catch future casing bugs.
I also log in as Cordelia to prevent confusion
that the test has something to do with
inviting yourself. It's more typical for
somebody to invite another person to a realm
(not realizing they're already there).
I also made two readability tweaks.
Replaced unique_together with UniqueConstraint in models whose
constraints cover nullable fields, since the database indexes that
unique_together creates don't enforce uniqueness when subgroup=None.
Instead, we add conditional unique indexes, which correctly prevent
invalid duplicate Count data.
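For context, the resulting pattern looks roughly like the sketch below;
the model and constraint names are illustrative, not the actual
analytics models:

```
from django.db import models
from django.db.models import Q, UniqueConstraint


class ExampleCount(models.Model):
    property = models.CharField(max_length=32)
    subgroup = models.CharField(max_length=16, null=True)  # nullable
    end_time = models.DateTimeField()

    class Meta:
        # unique_together would not block duplicates when subgroup is NULL,
        # so we declare two conditional unique indexes instead.
        constraints = [
            UniqueConstraint(
                fields=["property", "subgroup", "end_time"],
                condition=Q(subgroup__isnull=False),
                name="unique_examplecount",
            ),
            UniqueConstraint(
                fields=["property", "end_time"],
                condition=Q(subgroup__isnull=True),
                name="unique_examplecount_null_subgroup",
            ),
        ]
```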
Added 0015_clear_duplicate_counts migration to handle existing
data that violates the constraints.
Also corrected a test case in test_counts.py which didn't clear its
state properly and thus was accidentally taking advantage of this
database schema bug.
The copy of the styling for users_hover_info:
```
-#users_hover_info {
- left: 25px;
- top: -40px;
-}
```
This looks worse than the common #hoverinfo styling, so we remove it.
Currently, the cursor for the date input field in the settings page
is 'not-allowed' as it has the disabled attribute because we want
users to pick the date from the date picker. But this leads to
confusion about whether the field is editable at all.
Change the cursor to 'pointer' to make it clear that the field has
a click action associated with it.
Clicking on the 'Owner' value for a row in the list of bots does
nothing, and causes a blueslip error.
This is because the map object in which we store the users has
integer keys, while we pass the owner id as a string.
This is fixed by parsing the owner id to integer before passing it
on.
Fixes #14107.
We need to request access to read:org scope to be able to check org/team
membership. Without it, the SOCIAL_AUTH_GITHUB_ORG_NAME and
SOCIAL_AUTH_GITHUB_TEAM_ID settings don't work and simply lead to all
auth attempts failing.
Tested manually.
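For reference, here is a sketch of the relevant settings with
placeholder values; the ORG_NAME/TEAM_ID settings are the ones named
above, and SOCIAL_AUTH_GITHUB_SCOPE is python-social-auth's generic way
to request extra scopes (whether this commit requests the scope via a
setting or in the backend code itself is not asserted here):

```
# Placeholder values, shown only to illustrate how the pieces fit together.
SOCIAL_AUTH_GITHUB_KEY = "<github oauth client id>"
SOCIAL_AUTH_GITHUB_SECRET = "<github oauth client secret>"

# read:org is the GitHub OAuth scope needed to check org/team membership.
SOCIAL_AUTH_GITHUB_SCOPE = ["user:email", "read:org"]

# Restrict who may authenticate, either by org membership...
SOCIAL_AUTH_GITHUB_ORG_NAME = "example-org"
# ...or by membership in a specific team.
SOCIAL_AUTH_GITHUB_TEAM_ID = "123456"
```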
Several of our queues are capable of doing work that includes
rendering markdown (outgoing_webhook, embedded_bots, embed_links, and
email_mirror). As a result, it's essential that these don't cache
per-request data (specifically, realm filters) longer than they
should; otherwise, editing or deleting a linkifier could leave these
workers using the old settings until the relevant process was restarted.
Flushing these caches is extremely cheap (just clearing two
dictionaries) and thus is reasonable to do after every queue event,
rather than trying to do it in only the ~1/3 of queues that specifically
do markdown processing. We do the same in our middleware for
reset_queries.
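In outline, the pattern is simply the sketch below; the cache names and
worker loop are stand-ins for the real ones in Zulip's cache and
queue-processor modules:

```
from typing import Any, Callable, Dict, List

# Stand-ins for the per-request caches; in Zulip these are module-level
# dictionaries used to memoize data such as realm filters for one request.
realm_filters_cache: Dict[int, Any] = {}
per_request_cache: Dict[str, Any] = {}


def flush_per_request_caches() -> None:
    # Just clearing two dictionaries, so it's cheap enough to run after
    # every queue event.
    realm_filters_cache.clear()
    per_request_cache.clear()


def consume_events(consume: Callable[[Dict[str, Any]], None],
                   events: List[Dict[str, Any]]) -> None:
    for event in events:
        consume(event)
        # Flush unconditionally, rather than only in the ~1/3 of queues
        # that render markdown.
        flush_per_request_caches()
```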
It's not worth writing a test for this because it's very difficult to
create the test setup situation for this bug with a single test worker
process; one needs to edit the linkifier configuration in a different
process than the one sending the message in order to see the bug.
This was a much larger visible bug on Zulip 2.1.x, where the presence
of the message_sender queue meant that this would apply to messages
sent via a browser.
Fixes#14095.
Previously, the input:

====================
- One
- Two
  Two continued
====================

Would produce the same output as:

====================
- One
- Two

```
Two continued
```
====================
This was because our CodeBlockProcessor had a higher priority than
the ListIndentProcessor. This issue was discussed here:
https://chat.zulip.org/#narrow/stream/9-issues/topic/continuation.20paragraphs.20in.20list.20items.
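For background on the mechanism (a sketch of python-markdown 3.x's
registry API, not the Zulip-specific code): block processors run in
priority order, so the list-indent processor has to outrank the
code-block processor for an indented continuation line to stay attached
to its list item:

```
import markdown
from markdown.blockprocessors import CodeBlockProcessor, ListIndentProcessor

md = markdown.Markdown()

# Higher priority runs first: 'indent' gets the first chance to claim an
# indented block that follows a list, and only otherwise does the block
# become an indented code block.
md.parser.blockprocessors.register(ListIndentProcessor(md.parser), 'indent', 90)
md.parser.blockprocessors.register(CodeBlockProcessor(md.parser), 'code', 80)

# With the default tab length of 4, the indented continuation stays in
# the second list item instead of rendering as <pre><code>.
print(md.convert("- One\n- Two\n\n    Two continued"))
```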
The /delete_topic endpoint could be used to request the deletion of a
topic in a way that caused do_delete_messages to be called with an
empty set of messages in these cases (see the sketch after the list):
1. Requesting deletion of an empty stream.
2. Requesting deletion of a topic in a private stream with history not
public to subscribers, if the requesting admin doesn't have access to
any of the messages in that topic.
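One way to make this robust is sketched below; whether the real fix
guards in the view or inside do_delete_messages itself is not asserted
here:

```
from typing import Iterable


def do_delete_messages(realm: "Realm", messages: Iterable["Message"]) -> None:
    messages = list(messages)
    if not messages:
        # /delete_topic can legitimately produce an empty set (an empty
        # stream, or a private-history topic whose messages the requesting
        # admin can't access), so just return rather than proceeding.
        return
    ...  # existing deletion/archival logic
```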
This function slims down the data that we get
from the database in order to create the
streams part of our client payload.
We also fix a typo.
We also clearly distinguish between queries
and lists here.
This new method prevents us from getting fat
objects from the database.
Instead, now we just get ids from the database
to build our subqueries.
Note that we could also technically eliminate
the `set(...)` wrappers in this code to have
Django make a subquery and save a round trip.
I am postponing that for another commit (since
it's still somewhat coupled to some other
complexity in `do_get_streams` that I am trying
to cut through, plus it's not the main point
of this commit).
BEFORE:

    # old, still in use for other codepaths
    def get_stream_subscriptions_for_user(user_profile: UserProfile) -> QuerySet:
        # TODO: Change return type to QuerySet[Subscription]
        return Subscription.objects.filter(
            user_profile=user_profile,
            recipient__type=Recipient.STREAM,
        )

    user_subs = get_stream_subscriptions_for_user(user_profile).filter(
        active=True,
    ).select_related('recipient')
    recipient_check = Q(id__in=[sub.recipient.type_id for sub in user_subs])

AFTER:

    # newly added
    def get_subscribed_stream_ids_for_user(user_profile: UserProfile) -> QuerySet:
        return Subscription.objects.filter(
            user_profile_id=user_profile,
            recipient__type=Recipient.STREAM,
            active=True,
        ).values_list('recipient__type_id', flat=True)

    subscribed_stream_ids = get_subscribed_stream_ids_for_user(user_profile)
    recipient_check = Q(id__in=set(subscribed_stream_ids))
We calculate `max_message_id` for the mobile client.
Our query now no longer joins to the Message table
and just grabs one value instead of fat objects.
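In rough form, the slimmer query looks like the sketch below, assuming
the value comes from the user's UserMessage rows; this is not
necessarily the exact code:

```
from zerver.models import UserMessage, UserProfile  # assumes Zulip's models


def get_max_message_id_for_user(user_profile: UserProfile) -> int:
    # Pull a single integer column; no join to Message, no fat ORM objects.
    max_message_id = (
        UserMessage.objects
        .filter(user_profile=user_profile)
        .order_by("-message_id")
        .values_list("message_id", flat=True)
        .first()
    )
    return -1 if max_message_id is None else max_message_id
```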
We were only checking error handling before, not
the happy path. The structure of the code
made it so that we effectively tested most of the
logic for this use case (since all the other flags
are sort of just filters on top of this), but
obviously we want explicit coverage here. Also,
we weren't testing the is-admin-but-not-api-super-user
error checking until this commit.
The file populates `window.i18n`, so now
the file name matches our convention.
Note that the module really just initializes
`i18next` and then does this:
    window.i18n = i18next;
It doesn't really add any functionality to the
third party library.
Before this test, we were validating the behavior
of `i18next`, but we weren't validating our light
layer that sits on top of `i18next`, which currently
resides in the slightly misnamed `translations.js`
file.
The translations module is now so small that I'll
just quote it verbatim here:
    import i18next from 'i18next';

    i18next.init({
        lng: 'lang',
        resources: {
            lang: {
                translation: page_params.translation_data,
            },
        },
        nsSeparator: false,
        keySeparator: false,
        interpolation: {
            prefix: "__",
            suffix: "__",
        },
        returnEmptyString: false, // Empty string is not a valid translation.
    });

    window.i18n = i18next;
We now just do `zrequire('translations')` to initialize
the `i18next` library, which allows us to have simpler
test setup and to actually exercise the above call to
`i18next.init`.
This change now gives us 100% line coverage of `translations.js`,
which of course isn't that hard to achieve (see above).
Before 2018, we used a feature of i18next where
we would cache translations in local storage
for up to two weeks:
    var cacheOptions = {
        // ...
        prefix: 'i18next:' + page_params.server_generation + ':',
        expirationTime: 2*7*24*60*60*1000, // 2 weeks
    };

    i18next.init({
        // ...
        cache: cacheOptions,
    });
Because `server_generation` would change each time you
upgraded a server, a frequently upgraded server like
chat.zulip.org would cause its active users to start
to accumulate lots of obsolete key/value pairs in local
storage over the two weeks.
See #4443 for more details.
We eventually reduced the cache life to 2 days. And then
on top of that, newer versions of the server would start
to clean up after themselves using this commit from
April 2017:
e3f1d025ae
We then removed the caching option altogether a year
later in May 2018:
cff40c557b
We kept around the code to remove all the old keys, though.
This was particularly important for users who may have
been hitting servers that did an upgrade to the new
version from some older version that didn't have the
key-fixing code.
But mostly the problem takes care of itself after
either two days or two weeks, even on really out-of-date
servers.
The original problem was most likely to affect server
admins that did a lot of upgrades (and possibly only really
affected chat.zulip.org), so as long as those server
admins continued their patterns, it's highly likely that
they've done several upgrades since May 2018 that would
have cleaned these keys out for good.
And, again, even if there is some strange straggler here,
they probably only have one set of keys that will expire
either two days or two weeks after an upgrade, depending
on how long ago the prior upgrade was. (All of their
keys based on older versions of `server_generation` would
have long since expired.)
Finally, any upgrade certainly won't make the problem
worse for any users under this hypothetical situation,
since the new server won't be writing new keys.
So I am removing the cleanup code.
This extracts a new module with three
functions, which we will test with 100%
line coverage:
- show_email
- email_for_user_settings
- get_time_preferences
The first two break several dependencies
in the codebase on `settings_org.js`. The
`get_time_preferences` function breaks an annoying
dependency on `page_params` within `people.js`.
The module is pretty cohesive, in the sense that
all three functions are just light wrappers
around `page_params` and/or `settings_config`.
Now all the modules that want to call show_email()
only have to require `settings_data`, instead of
having a dependency on the much heavier
`settings_org.js` module.
I also make some of the unit tests here be more
full-stack, where instead of stubbing show_email,
I basically just toggle `page_params.is_admin`.
Explicitly stubbing i18n in 48 different files
is mostly busy work at this point, and it doesn't
provide much signal, since often it's invoked
only to satisfy transitive dependencies.