We stopped needing this with
0329b67048
(Dec 2016).
The function sets `bot.can_admin`,
which was only used in `bot_data.get_editable`.
We removed two tests (and then put back
some test setup that needed to leak down
to the last test).
This is code simplification motivated
by a recent bug that we fixed with some
server changes, but which was really
caused by our client code using an
overly finicky condition to check
whether a bot owner exists.
For cross-realm bots, the value of
`user.bot_owner_id` may be `null`, or it
may simply be `undefined`, depending
on whether the server passes `None`
or simply omits the field.
We don't want our client code to be
coupled to that rather arbitrary
decision.
We were doing a `!== null` check instead
of checking for falsiness, which led to
blueslip errors in the past. Because a
bot owner id could plausibly be 0, a falsiness
check would be brittle in a different way.
Now we avoid that ugliness by calling
`get_bot_owner_user`, which returns either
an object or `undefined`.
And then the caller can just do a concise
check for whether `bot_owner` exists.
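To illustrate the pattern (the real code is JavaScript; this is just a
rough Python sketch with hypothetical names):
```
# Rough sketch of the pattern, not the actual client code (which is
# JavaScript): normalize "no owner" (a null or omitted bot_owner_id)
# into a single "nothing" return value.
def get_bot_owner_user(user, people_by_id):
    owner_id = user.get("bot_owner_id")  # may be None or absent entirely
    if owner_id is None:
        return None
    # An owner id of 0 is handled correctly, since we check for None
    # rather than falsiness.
    return people_by_id.get(owner_id)

people_by_id = {0: {"full_name": "Example Owner"}}
cross_realm_bot = {"user_id": 5}               # bot_owner_id omitted
owned_bot = {"user_id": 6, "bot_owner_id": 0}  # owner id happens to be 0

for bot in (cross_realm_bot, owned_bot):
    owner = get_bot_owner_user(bot, people_by_id)
    if owner:  # one concise existence check in the caller
        print(owner["full_name"])
```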
And we also fix up the crufty code that
was putting `bot_owner_full_name` onto
the object instead of using a local.
We have a bug report for this again, although
it might be on an old branch.
Fixes #13621.
This allows us to block the use of insecure versions of the desktop app
(we simply fail to load the Zulip webapp at all, instead rendering an
error page).
For now we block only versions that are known to be both insecure and
not auto-updating, but we can easily adjust these parameters in the
future.
We know that by default the value of `enable_stream_push_notifications` is
true. So on a server where push notifications are disabled, due to
```
{{#unless show_push_notifications_tooltip}}
```
the `enable_stream_push_notifications` checkbox is rendered as unchecked.
Then, when we update any other setting in the "Stream notifications"
section, `settings_notifications.update_page()` resets all checkboxes to
their current values, which means the checkbox above gets checked.
We could deal with this bug by making changes in
`settings_notifications.update_page()`, but we should show the
checked/unchecked state of these checkboxes anyway, so removing the
`unless` condition fixes the bug.
Instead of having logical expressions in templates, it is always
preferable to compute them in JavaScript and pass the results in as
context. This also improves the readability of templates, and such logic
is easier to test in JavaScript than in templates.
The name `all_notification_settings_labels` misleadingly suggests that
this variable is a list of notification setting labels, so we changed it
to `all_notification_settings`.
The reason for extracting this function is that getting the text,
integer, or boolean value from input elements (like checkboxes and
dropdowns) is a common task, and we can reuse this function to get input
element values in `settings_notifications` in an upcoming commit.
This is a bug fix where, if a list_render
object with the given name exists and its items
have been sorted, then the filtered_list's data
does not get updated on re-rendering.
This line was present in the original commit
9576d5caef.
The use case for this is small or fixed tables, which do not need
filtering support. This lets us avoid including an unnecessary
search input inside the HTML parent container.
It is not used at present, but will be required when we refactor
the settings pages.
We also split out an `exports.validate_filter` function for
unit testing the above condition.
This improves the error handling for invalid values of the
propagate_mode parameter to our message editing endpoints.
Previously, invalid values would just work like `change_one` rather than
doing nothing.
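As a minimal sketch of the idea (not the actual endpoint code),
validating the parameter up front looks roughly like this:
```
# Minimal sketch, not the actual Zulip endpoint code: reject unknown
# propagate_mode values instead of silently treating them like
# "change_one".
VALID_PROPAGATE_MODES = ("change_one", "change_later", "change_all")

def check_propagate_mode(propagate_mode):
    if propagate_mode not in VALID_PROPAGATE_MODES:
        raise ValueError(
            "Invalid propagate_mode: {!r}; expected one of {}".format(
                propagate_mode, VALID_PROPAGATE_MODES
            )
        )
    return propagate_mode
```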
As a consequence of too many options in the bottom `Other permissions`
subsection, the `Save` button could end up so far down the page that it
might appear offscreen on low-height laptops.
We fix this by reorganizing the settings in a way that is both more
intuitive and also ensures that none of the subsections are too tall.
Fixes: #14274.
setup_event_queue() generates some logs about loaded event queues, and
it's good for the logging system to have access to the port at that
point already.
In the development environment, we use `get_secret` to get
`SOCIAL_AUTH_GITLAB_KEY` from `dev-secrets.conf`. But in
production, we don't need this, as we ask the user
to set that value in `prod_settings_template.py`.
This stops the code from looking in `zulip-secrets.conf`
for `social_auth_gitlab_key` in production.
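Roughly, the intended settings behavior (the helper below is a stand-in,
not Zulip's actual `get_secret` implementation):
```
import configparser

def get_secret(key, default=None, path="dev-secrets.conf"):
    # Stand-in for the real helper: read a named secret from an
    # ini-style secrets file, returning a default when it is absent.
    config = configparser.ConfigParser()
    config.read(path)
    if config.has_option("secrets", key):
        return config.get("secrets", key)
    return default

DEVELOPMENT = True  # illustrative flag

# Only the development environment consults dev-secrets.conf; in
# production the admin sets SOCIAL_AUTH_GITLAB_KEY directly in
# prod_settings_template.py.
if DEVELOPMENT:
    SOCIAL_AUTH_GITLAB_KEY = get_secret("social_auth_gitlab_key")
```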
Before this commit, the reactions code would
take the `message.reactions` structure from
the server and try to "collapse" all the reactions
for the same emoji into a single reaction,
but with each reaction having a list of user_ids.
It was a strangely denormalized structure that
was awkward to work with, and it made it really
hard to reason about whether the data was in
the original structure that the server sent or
the modified structure.
Now we use a cleaner, normalized Map to keep
each reaction (i.e. one per emoji), and we
write that to `message.clean_reactions`.
The `clean_reactions` structure is now the
authoritative source for all reaction-related
operations. As soon as you try to do anything
with reactions, we build the `clean_reactions`
data on the fly from the server data.
In particular, when we process events, we just
directly manipulate the `clean_reactions` data,
which is much easier to work with, since it's
a Map and doesn't duplicate any data.
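The real code is JavaScript, but the normalization itself is easy to
illustrate; here is a rough Python sketch (field names are invented for
the example):
```
# Rough sketch of the normalization described above; the real code is
# JavaScript and stores the result in a Map on message.clean_reactions.
def make_clean_reactions(server_reactions):
    clean = {}
    for reaction in server_reactions:
        r = clean.setdefault(
            reaction["emoji_name"],
            {"emoji_name": reaction["emoji_name"], "user_ids": []},
        )
        r["user_ids"].append(reaction["user_id"])
    return clean

server_reactions = [
    {"emoji_name": "octopus", "user_id": 1},
    {"emoji_name": "octopus", "user_id": 2},
    {"emoji_name": "tada", "user_id": 1},
]
# One entry per emoji, each with the full list of user_ids:
# {'octopus': {'emoji_name': 'octopus', 'user_ids': [1, 2]},
#  'tada': {'emoji_name': 'tada', 'user_ids': [1]}}
print(make_clean_reactions(server_reactions))
```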
This rewrite should avoid some obscure bugs.
I use `r` as shorthand for the clean reaction
structures, so as not to confuse it with
data from the server's `message.reactions`.
It also avoids some confusion where we use
`reaction` as a var name for the reaction
elements.
Previously, we only did apt updates when our sources.list files or
keys changed, which could result in provisioning errors for
development systems that don't routinely update their apt cache
(probably including ~all Vagrant environments).
importlib-metadata and importlib-resources are dependencies of jsonschema
and cfn-lint respectively. They are built-in modules in later versions of
Python (3.8 and 3.7, respectively). When update-locked-requirements is run
under Python 3.7 or 3.8, they generate differences in the locked files, so
we build these modules separately to avoid such conflicts.
transifex-client 0.13.4 did not support Python 3.8, so we updated it to
the latest version. Earlier, we had pinned transifex-client to 0.13.4
because 0.13.5 added overly strict version bounds on six and urllib3;
with the latest version this is no longer the case.
Bumped provision version.
cgi.escape was deprecated in Python 3.2 and removed in Python 3.8.
The function was also unsafe because its quote parameter is false by
default, so we removed it and replaced it with the safer html.escape.
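For reference, a quick comparison (html.escape quotes by default, so no
extra argument is needed):
```
import html

# html.escape escapes double and single quotes by default (quote=True),
# unlike the removed cgi.escape, whose quote parameter defaulted to False.
print(html.escape('<a href="x">O\'Brien & co</a>'))
# &lt;a href=&quot;x&quot;&gt;O&#x27;Brien &amp; co&lt;/a&gt;
```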
Used the get_venv_dependencies function to return the correct dependencies
for RHEL, CentOS, and Fedora, rather than importing them as a separate
COMMON_YUM_DEPENDENCIES in provision and create-production-venv.
In virtualenv ≥ 20, the site_packages variable was removed from
activate_this.py. To avoid a KeyError, replace
activate_locals['site_packages'] with os.path.join(venv, 'lib',
python_version), where python_version is the 'pythonX.Y' name of the
directory where site-packages resides in the virtualenv.
Fixes #14025.
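Roughly, the replacement path is computed like this (the venv path is
illustrative):
```
import os
import sys

venv = "/srv/zulip-py3-venv"  # illustrative virtualenv path

# The 'pythonX.Y' name of the directory where site-packages resides,
# e.g. 'python3.8' when running under Python 3.8.
python_version = "python{}.{}".format(*sys.version_info[:2])

# Used in place of the removed activate_locals['site_packages'].
print(os.path.join(venv, "lib", python_version))
# e.g. /srv/zulip-py3-venv/lib/python3.8
```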
I'm not sure what causes some Jira webhook events to not include the
metadata that other events do, but it's definitely a format sent by
real installations of Jira (likely a very old version, since this payload
is missing fields that modern Jira includes) and we've seen it in
production.
The best we can do is encourage users to upgrade Jira for better data.
The previous starred_messages race handling did not correctly consider
the possibility that an event queue might have been registered without
starred_messages.
This avoids operating on RateLimitedObjects directly and making the
classes depend on each other too strongly. It also allows getting rid of
the get_keys() function from RateLimitedObject, which was a redis rate
limiter implementation detail. A RateLimitedObject should only define its
own key() function, and the logic forming the various necessary redis
keys from it should live in RedisRateLimiterBackend.
type().__name__ is sufficient, and much more readable than type(), so
it's better to use the former for keys.
We also make the classes consistent in forming keys in the format
`type(self).__name__:identifier`, and adjust logger.warning and statsd to
take advantage of that and simply log the key().
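As a toy sketch of the key scheme (these are not the actual Zulip
classes, just an illustration of the format):
```
# Toy illustration of the "type(self).__name__:identifier" key format;
# not the actual Zulip rate limiter classes.
class RateLimitedObject:
    def key(self):
        return "{}:{}".format(type(self).__name__, self.identifier())

    def identifier(self):
        raise NotImplementedError

class RateLimitedUser(RateLimitedObject):
    def __init__(self, user_id):
        self.user_id = user_id

    def identifier(self):
        return str(self.user_id)

print(RateLimitedUser(42).key())  # RateLimitedUser:42
```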
This returns us to a consistent logging format regardless of whether
the request is authenticated.
We also update some log examples in docs to be consistent with the new
style.
When a user in the login flow using GitHub auth chooses an email that is
not associated with an existing account, it leads to a "continue to
registration" choice. This could not be tested with the earlier version
of `stage_two_of_registration`.
Also added the test.
Thanks to Mateusz Mandera for the solution.
Co-authored-by: Mateusz Mandera <mateusz.mandera@protonmail.com>
The previous model for GitHub authentication was as follows:
* If the user has only one verified email address, we'll generally just log them in to that account
* If the user has multiple verified email addresses, we will always
prompt them to pick which one to use, with the one registered as
"primary" in GitHub listed at the top.
This change fixes the situation for users going through a "login" flow
(not registration) where exactly one of the emails has an account in
the Zulip organization -- they should just be logged in.
Fixes part of #12638.
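A rough sketch of the decision being changed (function and variable names
here are hypothetical, not the actual backend code):
```
# Hypothetical sketch of the login-flow decision described above.
def pick_github_email(verified_emails, emails_with_accounts, is_login_flow):
    if is_login_flow:
        matching = [e for e in verified_emails if e in emails_with_accounts]
        if len(matching) == 1:
            # Exactly one verified email has an account in this Zulip
            # organization: just log the user in to it.
            return matching[0]
    # Otherwise, prompt the user to pick which email to use.
    return None

print(pick_github_email(
    ["a@example.com", "b@example.com"],
    {"b@example.com"},
    is_login_flow=True,
))  # b@example.com
```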