c2526844e9 removed the `signups` queue
worker, and the command-line tool that enqueues to it -- but not the
automated process that enqueues during signups itself.
Remove that enqueue as well, since the queue is no longer in use.
Previously, the data type of parameters wasn't displayed in the API
Documentation, even though that OpenAPI data is carefully validated
against the implementation. Here we add a recursive function to
render the data types visibly in the API documentation.
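As a rough sketch of the recursion involved (illustrative only; the
function name, schema keys handled, and rendering format here are not
the actual implementation), the renderer descends into `items` for
arrays and `oneOf` for union types:

    from typing import Any, Dict

    def render_data_type(schema: Dict[str, Any]) -> str:
        # Union types: render each alternative and join them.
        if "oneOf" in schema:
            return " | ".join(render_data_type(s) for s in schema["oneOf"])
        # Arrays: render the element type recursively.
        if schema.get("type") == "array":
            return "(" + render_data_type(schema["items"]) + ")[]"
        # Objects and scalars render as their plain type name.
        return schema.get("type", "object")
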
This only covers the request parameters; we'll want to do something
similar for response parameters in a follow-up PR.
Fixes part of #15967.
When we were getting an apply_event call for
a subscription/add event, we were trying not to
mutate the event itself, but this clumsy code
was still mutating the actual event:
    # Avoid letting 'subscribers' entries end up in the list
    for i, sub in enumerate(event['subscriptions']):
        event['subscriptions'][i] = \
            copy.deepcopy(event['subscriptions'][i])
        del event['subscriptions'][i]['subscribers']
This is only a theoretical bug.
The only person who receives a subscription/add
event is the current user.
And it wouldn't have affected the current user,
since apply_event was correctly updating the
state, and we wouldn't actually deliver the event
to the client (because the whole point of apply_event
is to prevent us from having to piggyback
super-recent events onto our payload or put
them into the event queue and possibly race).
The new code just cleanly makes a copy of each
sub, if necessary, as we add them to state["subscriptions"].
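Roughly, the new shape is the sketch below (the `include_subscribers`
condition is illustrative, not the exact check in the code); the key
point is that we copy before deleting, so the event is left untouched:

    import copy

    for sub in event["subscriptions"]:
        if not include_subscribers:
            # Copy before stripping, so the event object handed to us
            # is never mutated; the original sub stays intact.
            sub = copy.deepcopy(sub)
            del sub["subscribers"]
        state["subscriptions"].append(sub)
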
And I updated the event schemas to reflect that
subscribers is always present in the subscription/add
event.
Long term we should probably avoid sending subscribers
on this event when the clients don't set something
like include_subscribers. That's a fairly complicated
fix that involves passing in flags to ClientDescriptor.
Alternatively, we could just say that our policy is
that we never send subscribers there, but we instead
use peer_add events. See issue #17089 for more
details.
It's always cleaner to work in id space. It probably
would have required a perfect storm to have broken
the existing code, but using ids is obviously more
robust in theory, and just as simple.
We now require keyword arguments, so that there is no
pitfall of mixing up boolean parameters.
Positional parameters are basically evil
when you have a bunch of bools.
I also make user_profile the first argument.
Finally, the code is more diff-friendly.
I eliminate the defaults, since the existing code
was already specifying values for most things.
I move all the booleans to the bottom for both
parameters and arguments.
I require explicit keywords for everything but
user_profile (which is now first).
And, finally, I format the code in a more
diff-friendly manner.
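In other words, the signatures now follow the pattern sketched below
(the function and parameter names are placeholders, not the actual
ones being changed):

    from typing import Any, Dict

    def example_fetch(
        user_profile,
        *,  # parameters after this must be passed by keyword
        include_subscribers: bool,
        include_public: bool,
        include_all_active: bool,
    ) -> Dict[str, Any]:
        ...

A call site then reads example_fetch(user_profile,
include_subscribers=True, include_public=False,
include_all_active=False), so two booleans can never be silently
swapped.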
We eliminate some redundant checks.
We also consistently provide a `subscribers` field
in our stream data with `[]`, even if our users
can't access subscribers. We therefore bump
the API version and tweak the docs. (See further
down for a detailed justification of the change.)
Even though it is sometimes fine to have redundant code
that is defensive in nature, some upcoming changes are going to
move subscriber-related logic out of build_stream_dict_for_sub
for certain codepaths as part of our effort to streamline
the payload for subscribers within page_params.
So we can't rely on the code that I removed here
inside of build_stream_dict_for_sub.
Anyway, it makes more sense to do these checks explicitly
in the validate function.
The code in build_stream_dict_for_sub was almost effectively
a noop, since the validation function was already preventing
us from getting subscriber info. The only difference it
made was sometimes converting `[]` to `None`, and then
subsequently omitting the subscribers field.
Neither ZT nor the webapp make any distinction between
`[]` or <missing key> for the `subscribers` data in
`page_params`.
The webapp has had this code for a long time (and now
equivalent code elsewhere in this PR):
    if (!Object.prototype.hasOwnProperty.call(sub, "subscribers")) {
        sub.subscribers = new LazySet([]);
    }
The webapp calculates access based on booleans, anyway:
    sub.can_access_subscribers =
        page_params.is_admin || sub.subscribed ||
        (!page_params.is_guest && !sub.invite_only);
And ZT would choke if `subscribers` were missing, except that
it never gets to the relevant code due to other checks:
    def get_other_subscribers_in_stream(<snip>):
        assert stream_id is not None or stream_name is not None
        if stream_id:
            assert self.is_user_subscribed_to_stream(stream_id)
            return [sub
                    for sub in self.stream_dict[stream_id]['subscribers']
                    if sub != self.user_id]
        else:
            return [sub
                    for _, stream in self.stream_dict.items()
                    for sub in stream['subscribers']
                    if stream['name'] == stream_name
                    if sub != self.user_id]
You could make a semantic argument that we should prefer
<missing key> to `[]` when subscribers aren't even available, but
we have precedent from the way that `bulk_get_subscriber_user_ids`
has traditionally populated its result:
    result: Dict[int, List[int]] = {stream["id"]: [] for stream in stream_dicts}
If we changed `stream_dicts` to `target_stream_dicts` we
would facilitate a move toward `None`, but it would just cause
headaches for other server code as well as the frontends
(which, to reiterate, already prefer the empty array
for convenience).
As my comment indicates, I would prefer to handle
this explicitly by raising JsonableError in an
else statement here, but it's not a big deal.
This function can probably be simplified with a
bit of work, mostly on the testing side to make
sure we are covering all edge cases, but that
is out of the scope of my current PR.
By moving the relevant logic from realm.get_bot_domain to
get_fake_email_domain we will make realm.host be used (if possible) for
dummy user addresses. That is, instead of user11@zulipchat.com, the
address will become user11@subdomain.zulipchat.com.
With the change in d70e1bcdb7,
bots get email addresses like bot@zulip.com with EXTERNAL_HOST="zulip.com",
rather than bot@subdomain.zulip.com, which was the old format. That's
not desirable, so with this commit, realm.host will be used when
possible, falling back to FAKE_EMAIL_DOMAIN only if needed.
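The fallback logic is roughly the following sketch (not the exact
code; it just illustrates preferring realm.host and falling back to
FAKE_EMAIL_DOMAIN when realm.host isn't usable as an email domain):

    from django.conf import settings
    from django.core.exceptions import ValidationError
    from django.core.validators import validate_email

    def get_fake_email_domain(realm) -> str:
        try:
            # Check that "user@<realm.host>" would be well-formed.
            validate_email(f"user@{realm.host}")
            return realm.host
        except ValidationError:
            return settings.FAKE_EMAIL_DOMAIN
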
We often send only one field (away or status_text)
to be updated.
So we have to make our schema support optional
keys.
As a result of the more flexible schema, we no
longer need to exempt the node fixtures from
our schema checks.
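As a generic illustration (this is not the project's actual validator
API), an update payload with optional keys can be checked like this:

    from typing import Any, Dict

    # Both keys are optional, but nothing outside this set is allowed,
    # and each value must have the right type.
    OPTIONAL_FIELDS = {"away": bool, "status_text": str}

    def check_user_status_update(payload: Dict[str, Any]) -> None:
        for key, value in payload.items():
            if key not in OPTIONAL_FIELDS:
                raise ValueError(f"unexpected key: {key}")
            if not isinstance(value, OPTIONAL_FIELDS[key]):
                raise ValueError(f"bad type for {key}")
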
Since recipient_id (id of the PERSONAL Recipient of the user) was
denormalized into the UserProfile model, this query can be simplified by
getting rid of the zerver_recipient JOIN.
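Illustratively (model and field names approximate, not the exact query
being changed), the simplification is from a join-based lookup to
reading the denormalized column:

    # Before: find the user's PERSONAL Recipient row via an extra
    # query/join on zerver_recipient.
    recipient_id = Recipient.objects.get(
        type=Recipient.PERSONAL, type_id=user_profile.id
    ).id

    # After: read the id that is now denormalized onto UserProfile.
    recipient_id = user_profile.recipient_id
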
This makes us more efficient when handling
multiple users. We don't have to keep
sending the same two queries to the database.
Note that as part of this we eliminated
a failure mode for the obscure population
of users for whom both `user.is_guest` and
`user.can_access_public_streams()` return
False. We know this would have only affected
Zephyr users (by looking at the code), and
we know we don't actually process Zephyr
users for email digests (or else we would
have raised exceptions in the old code).
We mostly need realm_id, but when we go to build
message lists, we need realm.uri.
We could probably be more aggressive about using
`only` here, but for now I am just trying to
reduce hops to the database.
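For reference, the general Django pattern being leaned on is below (a
sketch, not the actual digest query); select_related keeps realm
access from costing an extra round trip, and a later `.only(...)`
could narrow the columns fetched:

    # Fetch users together with their realm in a single query; .only()
    # could later restrict this to just the fields we use
    # (realm_id now, realm.uri when building message lists).
    users = UserProfile.objects.select_related("realm").filter(realm_id=realm_id)
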
The `deployment` key was only set in `do_report_error`, which is now
only used in one codepath (the queue worker). The logging handlers on
staging call notify_server_error directly, which omits the
`deployment` key.
Remove the odd one-off key, and instead simply do dispatch in
`do_report_error`.
The codepath for moving a topic changes the message.recipient_id to the
id of the new recipient, but later, in update_messages_for_topic_edit,
it uses message.recipient when querying for messages with the matching
topic in the *old* stream (because those are the other messages that
need to be moved). This is a bug which happens to work fine, because in
Django 2, if message.recipient gets fetched first and then
message.recipient_id is mutated, message.recipient will not be altered
and thus will retain the outdated, previously fetched value.
In Django 3 changing .recipient_id causes .recipient to be updated to
the new Recipient objects, which is the Recipient of the *new* stream.
That will cause the bug to manifest.
This is a bugfix preparing for the upgrade to Django 3.
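The defensive pattern for the fix is sketched below (variable and
field names approximate): capture the old recipient id before
reassigning, and query with the captured value rather than the
possibly refreshed message.recipient.

    # Capture before mutating; under Django 3, assigning recipient_id
    # refreshes message.recipient to the *new* Recipient.
    old_recipient_id = message.recipient_id
    message.recipient_id = new_stream_recipient_id

    # Query the old stream's messages with the captured id (the real
    # query also filters on the topic being moved).
    messages_to_move = Message.objects.filter(recipient_id=old_recipient_id)
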
Support for saving the language preference in the session was dropped
in Django 3; the cookie is the mechanism that needs to be used. The
relevant i18n code doesn't have access to the response objects and
thus needs to delegate setting the cookie to LocaleMiddleware.
Fixes the LocaleMiddleware point of #16030.
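For context, the cookie LocaleMiddleware consumes is Django's standard
language cookie; a generic sketch of persisting it once a response
object is in hand (not the exact Zulip code) looks like:

    from django.conf import settings
    from django.http import HttpResponse
    from django.utils import translation

    def persist_language(response: HttpResponse, lang_code: str) -> HttpResponse:
        # Activate for this request/response cycle, then store the
        # choice in the cookie LocaleMiddleware reads on later requests.
        translation.activate(lang_code)
        response.set_cookie(settings.LANGUAGE_COOKIE_NAME, lang_code)
        return response
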
We now require explicit keywords for all arguments
to fetch_initial_state_data except user_profile.
We provide reasonable defaults to keep the test
code concise.
In the case of reusing a registration link, reuse the
redirect_to_email_login_url helper. This does have the side effect of
now showing a "you've already registered" note, which did not happen
previously, but that is probably for the best, since the user did
just click a "register" link.
ecfafc05c0 shifted to using a different parameter name to hint that
the user had previously signed up -- and in so doing also stopped
pre-filling the "email" box. Also send along the email address, to
save users time.
Checking for `validate_email_not_already_in_realm` again (after the
form already did so), but only in the case that the form fails to
validate, means that we may be spending time pushing totally invalid
emails to the DB to check. In the case of emails containing nulls,
this can even trigger a 500 error from PostgreSQL.
Stop calling `validate_email_not_already_in_realm` in the form
validation. The form is currently only used in two places -- in
`accounts_home` and in `maybe_send_to_registration`. The latter is
only called if the address is known to not currently have an account,
so checking in there is unnecessary; and in the former case, we want
different behaviour (the redirect) rather than just validation failure,
which is all the validator can do.
Fixes #17015.
Co-authored-by: Alex Vandiver <alexmv@zulip.com>
Add a `--allow-reserved-subdomain` flag which allows creation of
reserved keyword subdomains. This also always enforces that the
subdomain is not already in use, a check which was removed in 0258d7d.
Fixes #16924.
When changing the subdomain of a realm, create a deactivated realm with
the old subdomain of the realm, and set its deactivated_redirect to the
new subdomain.
Doing this will help us to do the following:
- When a user visits the old subdomain of a realm, we can tell the user
that the realm has been moved.
- During the registration process, we can ensure that the old subdomain
of the realm is not used to create a new realm.
If the subdomain is changed multiple times, the deactivated_redirect
fields of all the deactivated realms are updated to point to the new
uri.
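A rough sketch of that bookkeeping (not the exact implementation; the
filter shown assumes older placeholder realms currently redirect to
the realm's previous uri):

    # Point every deactivated placeholder realm that redirected to the
    # old location at the realm's new uri instead.
    Realm.objects.filter(
        deactivated=True, deactivated_redirect=old_uri
    ).update(deactivated_redirect=new_uri)
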
Instead of just storing the edit history in the message which
triggered the topic edit, we store the edit history in all
the messages that changed. This helps users track the edit history
of a message more reliably.
This change updates the GitHub integration webhook's
get_opened_or_update_pull_request_body method so that
the description is only printed if it actually changed.
If the update event is the result of some other
attribute update, such as an assignee change, then the
description is not included in the message sent to
the Zulip stream.
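The check itself can be sketched roughly as follows (this assumes the
shape of GitHub's webhook payload for edited pull requests, and is not
the actual integration code):

    # GitHub's "edited" pull_request payloads carry a "changes" object
    # listing which fields were edited; only include the description
    # when the body is among them.
    include_description = "body" in payload.get("changes", {})
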
Fixes #16345.
As of Feb 15th 2019, Hipchat Cloud and Stride
have reached End Of Life and are no longer
supported by Atlassian. Since it has been almost 2 years,
we can remove the migration guides.
Fetching rows with end_time within the last 25 hours would result
in the RealmCount queries returning two rows for each realm
if the analytics page was opened within an hour of the
count stats being updated.
Allowing any admin to create arbitrary users is not ideal, because it
can lead to abuse issues. We should gate this behind something stronger
that requires the server operator's approval, and thus we add a new
can_create_users permission.
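Conceptually, the new gate looks like the sketch below (names and the
error type are illustrative); the flag lives on the user and can only
be granted by the server operator, not by organization admins:

    def require_can_create_users(user_profile) -> None:
        # can_create_users is the new per-user permission; being an
        # organization administrator no longer qualifies on its own.
        if not user_profile.can_create_users:
            raise PermissionError("Insufficient permission to create users")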