If your browser width was between 701px and 750px, you got the mobile
view but without the mobile header, which prevented you from changing
sections in the settings menu.
This was caused by a media-query mismatch:
subscriptions.scss used @media (max-width: 750px)
settings.scss, however, used @media (max-width: 700px)
Comments added by tabbott to help avoid future bugs like this.
* Don't annoyingly open the first section when switching
between the Settings and Organization tabs.
* Don't highlight the currently active section in the settings list
(we don't display the currently active section in the mobile settings
list, so it isn't actually active).
* Remove nearly invisible and buggy no-border logic.
Update the REQ check for profile_data in
update_user_backend by tweaking `check_profile_data`
to use `check_dict_only`.
Here is the relevant URL:
path('users/<int:user_id>', rest_dispatch,
     {'GET': 'zerver.views.users.get_members_backend',
It would be nice to unify the validator
for these two views, but they are different:
update_user_backend
update_user_custom_profile_data
It's not completely clear to me why update_user_backend
seems to support a superset of the functionality
of `update_user_custom_profile_data`, but it has
this code to allow you to remove custom profile fields:
clean_profile_data = []
for entry in profile_data:
    assert isinstance(entry["id"], int)
    if entry["value"] is None or not entry["value"]:
        field_id = entry["id"]
        check_remove_custom_profile_field_value(target, field_id)
    else:
        clean_profile_data.append({
            "id": entry["id"],
            "value": entry["value"],
        })
Whereas the other view is much simpler:
def update_user_custom_profile_data(
    <snip>
) -> HttpResponse:
    validate_user_custom_profile_data(user_profile.realm.id, data)
    do_update_user_custom_profile_data_if_changed(user_profile, data)
    # We need to call this explicitly, otherwise constraints are not checked.
    return json_success()
This tightens our checking of user-supplied data
for this endpoint:
path('users/me/profile_data', rest_dispatch,
     {'PATCH': 'zerver.views.custom_profile_fields.update_user_custom_profile_data',
...
We now explicitly require the `value` field
to be present in the dicts being passed in
here, as part of `REQ`. There is no reason
that our current clients would be sending
extra fields here, and we would just ignore
them anyway, so we also move to using
check_dict_only.
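For illustration, here is a rough sketch of what the tightened REQ
validator could look like, using the check_dict_only/check_list helpers
from zerver.lib.validator; the sub-validator for `value` is a
simplification, since real values can also be lists:
```
from zerver.lib.validator import check_dict_only, check_int, check_list, check_string

check_profile_data = check_list(
    check_dict_only([
        ("id", check_int),
        # "value" is now a required key; unknown keys are rejected outright.
        ("value", check_string),  # simplified; real values may also be lists
    ]),
)
```
With check_dict_only, an entry carrying unexpected keys is rejected
outright rather than silently ignored, and listing `value` as a
required key enforces its presence.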
Here is some relevant webapp code (see settings_account.js):
fields.push({id: field.id, value: user_ids});
update_user_custom_profile_fields(fields, channel.patch);
settings_ui.do_settings_change(method, "/json/users/me/profile_data",
    {data: JSON.stringify([field])}, spinner_element);
The webapp code sends fields one at a time
as one-element arrays, which is strange, but
that is out of the scope of this change.
The flake was caused by current_msg_list not being populated with any
messages in rare cases. We were missing a check to guard against the
current message list having no messages. The failure screenshot shows
no messages, confirming this.
Relevant error:
Evaluation failed: TypeError: Cannot read property 'raw_content' of undefined
After some discussion, everyone seems to agree that 3.0 is the more
appropriate version number for our next major release. This updates
our documentation to reflect that we'll be using 3.0 as our next major
release.
Ubuntu 20.04 "focal" comes up to runlevel 5 several seconds before it
is able to successfully resolve hosts, causing `prepare-base` to fail
while fetching from the apt repositories.
Add an additional check to verify that outbound networking is running
before returning from `lxc-wait`.
`/api/v1/fetch_api_key`'s response had a key `email` with the user's
delivery email. But its JSON counterpart `/json/fetch_api_key`, which
has a completely different implementation, did not return `email` in
its success response.
So, to avoid confusion, the non-API endpoint's (`/json/fetch_api_key`)
response has been made identical to its `/api` counterpart by adding
the `email` key. This is also safe to send, since the calling user will
only see their own email.
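Concretely, the tail of the `/json/fetch_api_key` success path now
looks roughly like this sketch (surrounding authentication and api_key
handling elided; `delivery_email` mirrors what the `/api` endpoint
already returns):
```
# Sketch of the end of the /json/fetch_api_key view; the api_key variable
# is computed earlier in the (elided) view body.
return json_success({
    "api_key": api_key,
    "email": user_profile.delivery_email,
})
```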
We now have our muted topics use tuples internally,
which allows us to tighten up the annotation
for get_topic_mutes, as well as our schema
checking.
We want to deprecate sub_validator=None
for check_list, so we also introduce
check_tuple here. We will eventually want to
deprecate check_tuple as well, but it's at
least isolated now.
We will use this for data structures that are tuples,
but which are sent as lists over the wire. Fortunately,
we don't have too many of those.
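As a hedged sketch, check_tuple conceptually looks like the following,
written in the error-string style of zerver/lib/validator.py; the real
implementation may differ in error wording or may raise exceptions
instead:
```
from zerver.lib.validator import check_string

def check_tuple(sub_validators):
    def f(var_name, val):
        if not isinstance(val, list):
            return f"{var_name} is not a list"
        if len(val) != len(sub_validators):
            return f"{var_name} should have exactly {len(sub_validators)} items"
        for i, sub_validator in enumerate(sub_validators):
            error = sub_validator(f"{var_name}[{i}]", val[i])
            if error:
                return error
        return None  # no error

    return f

# Example: a muted topic travels over the wire as [stream_name, topic_name].
check_muted_topic = check_tuple([check_string, check_string])
```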
The plan is to convert tuples to dictionaries,
but backward compatibility may be tricky in some
places.
This commit changes the do_get_user_invites function to not return
multiuse invites to non-admin users. We should only return multiuse
invites to admins, as we only allow admins to create them.
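A rough sketch of the intended behavior; the helper names here are
hypothetical stand-ins, not the actual serialization code:
```
from zerver.models import MultiuseInvite

def do_get_user_invites(user_profile):
    # prereg_invite_dicts() and multiuse_invite_dict() are hypothetical
    # helpers standing in for the real serialization code.
    invites = prereg_invite_dicts(user_profile.realm)
    if user_profile.is_realm_admin:
        # Only admins may create multiuse invites, so only admins see them.
        for invite in MultiuseInvite.objects.filter(realm=user_profile.realm):
            invites.append(multiuse_invite_dict(invite))
    return invites
```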
As in the previous commit, we can no longer pre-install the wrong
version of postgres. Unfortunately, this leaves it out of the base
image and thus makes testing installs longer.
49a7a66004 and immediately previous commits began installing
PostgreSQL 12 from their apt repository. On machines which already
have the distribution-provided version of PostgreSQL installed,
however, this leads to failure to apply puppet when restarting
PostgreSQL 12, as both attempt to claim the same port.
During installation, if we will be installing PostgreSQL, look for
other versions than what we will install, and abort if they are
found. This is safer than attempting to automatically uninstall or
reconfigure existing databases.
This allows for installing from-scratch with a different pinned
version of PostgreSQL, and provides a single place to change when the
default should increase.
Using the existence of `/etc/init.d/postgresql` to detect whether
PostgreSQL is on the server is incorrect, because this check runs
_before_ puppet and any packages are installed. Thus, it cannot tell
the difference between a new one-host Ubuntu first-time install without
PostgreSQL yet, and a host which is merely a front-end and will never
have PostgreSQL. This leads to failures in first-time installs:
```
Error: Evaluation Error: Error while evaluating a Function Call,
Could not find template 'zulip/postgresql//postgresql.conf.template.erb'
```
The only way to detect if PostgreSQL will be present in the _end_
state of the install is to examine the puppet classes that are
applied.
To do this, we must inspect `PUPPET_CLASSES`. Unfortunately, this can
be fragile to subclassing (e.g. `zulip_ops::postgres_appdb`). We
might desire to use `puppet apply --write-catalog-summary` to deduce
the _applied_ classes, which would unroll the inheritance; however,
this causes a chicken-and-egg problem, because `zulip.conf` must be
already written out (including a value for `postgresql.version`, if
necessary!) before such a puppet run could successfully complete.
Switch to predicating the `postgresql.version` key on the puppet
classes that are known to install postgres.
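Conceptually, the new predicate amounts to something like the sketch
below; the class names listed are examples only, and the real installer
code and its list of postgres-installing classes differ:
```
# Decide whether a postgresql.version key should be written to zulip.conf,
# based on the puppet classes configured for this host.
POSTGRES_CLASSES = {
    "zulip::postgres_appdb_tuned",  # example class that installs postgres
    "zulip_ops::postgres_appdb",    # example subclass mentioned above
}

def wants_postgres(puppet_classes: str) -> bool:
    classes = {c.strip() for c in puppet_classes.split(",") if c.strip()}
    return bool(classes & POSTGRES_CLASSES)
```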
Support for Xenial and Stretch was removed (5154ddafca, 0f4b1076ad,
8944e0ad53, 79acd5ae40, 1219a2e854), but not all codepaths were
updated to remove their conditionals on it.
Remove all code predicated on Xenial or Stretch. debathena support
was migrated to Bionic, since that appears to be the current state of
existing debathena servers.
0f4b1076ad removed Ubuntu 16.04 "xenial" and Debian 9 "stretch" from
the printed list of supported operating systems, but left them in the
verification check that controls if that message is printed,
effectively continuing to support them.
Conversely, 439f0d3004 added Ubuntu 20.04 "focal" to the check, but
not to the printed list.
Synchronize the check and the printed list to the correct supported distributions:
Ubuntu 18.04 "bionic", Ubuntu 20.04 "focal", and Debian 10 "buster".
The previous commit removed the only behavior difference between the
two flags; both of them skip user/database creation, and the tables
therein.
Of the two options, `--no-init-db` is more explicit about what it does,
whereas `--remote-postgres` names just one facet of when it might be
used; remove `--remote-postgres`.
The previous architecture did not work properly with the automatically
detected night theme, resulting in a weird mix of the night and day
themes on code blocks.
I'm not thrilled with the requirement this imposes that all of our
night theme CSS needs to be in one file, but we do need to get a quick
fix out here.
Fixes #15554.
This commit removes the invited_as_values map in settings_invites.js,
to avoid duplication, since we already have the role values in
settings_config.js.
A similar map is now created from settings_config.user_role_values
and is used to populate invited_as_text for invites.
This commit changes the PreregistrationUser.invite_as dict to have the
same set of values as we have for UserProfile.role.
This also adds a data migration to update the already existing
PreregistrationUser and MultiuseInvite objects.
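A hedged sketch of what such a data migration could look like; the
old-to-new value mapping and the migration dependency below are
placeholders, not the actual constants:
```
from django.db import migrations

# Illustrative mapping only: old invited_as values -> UserProfile.role values.
OLD_TO_NEW_INVITED_AS = {1: 400, 2: 200, 3: 600}

def update_invited_as(apps, schema_editor):
    for model_name in ("PreregistrationUser", "MultiuseInvite"):
        model = apps.get_model("zerver", model_name)
        for old, new in OLD_TO_NEW_INVITED_AS.items():
            model.objects.filter(invited_as=old).update(invited_as=new)

class Migration(migrations.Migration):
    dependencies = [("zerver", "NNNN_previous_migration")]  # placeholder
    operations = [migrations.RunPython(update_invited_as)]
```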
We leverage the composebox typeaheads to show flatpickr to pick dates
and times for the !time syntax.
We use moment.js to try to parse the time from the current token. If
we are successful, we initialize flatpickr with the parsed time;
otherwise, we default to using the current time.
Streams can have lots of subscribers, meaning that the archiving process
will be moving tons of UserMessages per message. For that reason, using
a smaller batch size for stream messages is justified.
Some personal messages need to be added in test_scrub_realm to have
coverage of do_delete_messages_by_sender after these changes.
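Illustratively (the constants and function below are hypothetical, not
the real archiving code), the change boils down to using a smaller
chunk size for stream messages:
```
# Hypothetical constants for illustration only.
MESSAGE_BATCH_SIZE = 1000         # e.g. personal messages
STREAM_MESSAGE_BATCH_SIZE = 100   # smaller: each stream message can fan out
                                  # to one UserMessage row per subscriber

def archive_stream_messages(streams, chunk_size=STREAM_MESSAGE_BATCH_SIZE):
    # Move messages (and all their UserMessage rows) in chunks of chunk_size.
    ...
```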
Currently we display -1 in the id_realm_message_retention_days input
box when realm_message_retention_days is -1, which isn't user friendly;
displaying an empty input box is more intuitive.
And if the user tries to submit an empty input box, we throw an
"invalid JSON" error, which isn't user friendly either, so that is
fixed too. Ideally, we shouldn't send the request to the backend at
all when we don't have any input.
Currently, we use -1 as the Realm.message_retention_days value to
retain messages forever unless specified at the stream level for a
particular stream, i.e. no policy is set at the realm level. But this
is incoherent with what we use for Stream.message_retention_days,
where -1 means
> disable retention policy for this stream unconditionally
which can be confusing from an API standpoint.
So, instead of trying some hack to reset the value to NULL, or using
some other value like -2 for RETAIN_MESSAGE_FOREVER in the API, it is
much more intuitive to use a string like 'forever' that can be mapped
to RETAIN_MESSAGE_FOREVER at the backend. This is also similar to what
we use for stream settings.
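A minimal sketch of the backend mapping, with a hypothetical helper
name; RETAIN_MESSAGE_FOREVER is the constant discussed above:
```
RETAIN_MESSAGE_FOREVER = -1  # internal sentinel

def parse_message_retention_days(value):
    # Hypothetical helper: translate the API-level value into the sentinel.
    if value == "forever":
        return RETAIN_MESSAGE_FOREVER
    if isinstance(value, int) and value > 0:
        return value
    raise ValueError(f"Bad value for message_retention_days: {value!r}")
```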
The `get_input_element_value()` function is a more reliable way to
detect an input element's type and extract its value. But the current
way of setting the value of input elements relies on first checking the
`property_value` type. That is fine, except when the property value is
null and we want to set the element value to empty; in that case this
method throws an error, because it is unable to detect the appropriate
element type. The new `set_input_element_value()` function relies on
the property value first and then uses `setting-widget-type` as a
fallback.