We rework the landing page for companies in the same way we've
recently revamped the landing pages for other use cases.
This implementation unfortunately duplicates a lot of content from
/plans; we should clean that up at some point.
This reverts commit 1965584eec.
This syntax has a bad interaction with table syntax and needs to be
rethought.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The Slack bot emails we generate can be identical for two different
bots. If such a case occurs, append a counter to the email to make it
unique.
To maintain the counter of duplicate emails and the final
email assigned to each bot, a class-based approach is used with
static variables and static (class) methods. This keeps all the
data related to Slack bot emails in one place, easily
accessible from anywhere inside the module (without defining any
class instance and passing it around).
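
A minimal sketch of this class-based bookkeeping; the names (`SlackBotEmail`,
`get_email`) are hypothetical and only illustrate the approach described above:

```
from collections import defaultdict
from typing import Dict


class SlackBotEmail:
    # How many times each base email has been handed out so far.
    duplicate_email_count: Dict[str, int] = defaultdict(int)
    # Final unique email assigned to each Slack bot id.
    assigned_email: Dict[str, str] = {}

    @classmethod
    def get_email(cls, bot_id: str, base_email: str) -> str:
        if bot_id in cls.assigned_email:
            return cls.assigned_email[bot_id]
        count = cls.duplicate_email_count[base_email]
        cls.duplicate_email_count[base_email] += 1
        if count == 0:
            email = base_email
        else:
            # Append a counter to disambiguate duplicate emails.
            username, at, domain = base_email.partition("@")
            email = f"{username}-{count}{at}{domain}"
        cls.assigned_email[bot_id] = email
        return email
```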
Fixes: #16793
These checks suffer from a couple notable problems:
- They are only enabled on staging hosts -- where they should never
be run. Since ef6d0ec5ca, these supervisor processes are only
run on one host, and never on the staging host.
- They run as the `nagios` user, which does not have appropriate
permissions, and thus the checks always fail. Specifically,
`nagios` does not have permissions to run `supervisorctl`, since
the socket is owned by the `zulip` user, and mode 0700; and the
`nagios` user does not have permission to access Zulip secrets to
run `./manage.py print_email_delivery_backlog`.
Rather than rewrite these checks to run on a cron as zulip, and check
those file contents as the nagios user, drop these checks -- they can
be rewritten at a later point, or replaced with Prometheus alerting,
and currently serve only to cause always-failing Nagios checks, which
normalizes alert failures.
Leave the files installed if they currently exist, rather than
cluttering puppet with `ensure => absent`; they do no harm if they are
left installed.
This is more efficient than get_lexer_by_name, since we don’t need to
instantiate the class just to get its name.
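
For context, a small illustration of the difference with Pygments, assuming the
replacement is `find_lexer_class_by_name`, which returns the lexer class without
instantiating it:

```
from pygments.lexers import find_lexer_class_by_name, get_lexer_by_name

# Instantiates a lexer object just to read its .name attribute.
name_via_instance = get_lexer_by_name("python").name

# Returns the class itself; no instantiation needed to read the name.
name_via_class = find_lexer_class_by_name("python").name

assert name_via_instance == name_via_class == "Python"
```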
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The BlockingChannel annotations in TornadoQueueClient were flat-out
wrong. BlockingChannel and Channel have no common base classes.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This is effectively a step closer to what was proposed in
https://github.com/zulip/zulip/pull/18678#discussion_r644490540 when
this code was written in #18678.
If the Customer object has neither a Stripe id nor any historical
plans, then there's no real billing association contained in the
existence of the Customer object, and it's safe to delete.
This fixes a regression in de04f0ad67.
We'll do a proper test in a follow-up commit; this is a quick fix to
make sure master works.
The emails will bounce, but it'll create all sorts of infrastructure
headaches.
We added "user_settings" object containing all the user settings in
previous commit. This commit modifies the code to send the existing
setting fields in the top-level object only if user_settings_object
client_capabilities field is False.
This commit adds "user_settings_object" field to
client_capabilities which will be used to determine
if the client needs 'update_display_settings' and
'update_global_notifications' event.
We send an event with type 'user_settings' on updating a user's display
and notification settings.
The old event types - 'update_global_notifications' and
'update_display_settings', are still supported for backwards
compatibility.
We do not require separate tests for checking events when changing
"enable_drafts_synchronization" as we already do this in the display
settings test because this setting is included in property_types.
Return zulip_merge_base alongside zulip_version
in `/register`, `/event` and `/server_settings`
endpoints so that the value can be used by other
clients.
These were added at some point in the past, but were not complete, and
it makes sense to document the current feature level as and when they
become available, since clients should not use the drafts endpoints on
older feature levels.
The main reason why this is needed is because this seems to be
convention and because we can't easily test event creation without
doing this.
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
This commit allows importing the following from rocketchat:
* All users
* All public/private channels
* All teams and their public/private channels
* All discussion rooms as topics in their parent channel
* All the messages in all the channels
* All private conversations
* Reactions on messages (except for custom emojis)
* Mentions in messages (except @all, @here mentions)
Previously, we checked for the `enable_offline_email_notifications` and
`enable_offline_push_notifications` settings (which determine whether the
user will receive notifications for PMs and mentions) just before sending
notifications. This has a few problems:
1. We do not have access to all the user settings in the notification
handlers (`handle_missedmessage_emails` and `handle_push_notifications`),
and therefore, we cannot correctly determine whether the notification should
be sent. Checks like the following, which existed previously, would, for
example, incorrectly skip sending notifications even when stream email
notifications are enabled:
```
if not receives_offline_email_notifications(user_profile):
    return
```
With this commit, we simply do not enqueue notifications if the "offline"
settings are disabled, which fixes that bug.
Additionally, this also fixes a bug with the "online push notifications"
feature, which was that if someone were to:
* turn off notifications for PMs and mentions (`enable_offline_push_notifications`)
* turn on stream push notifications (`enable_stream_push_notifications`)
* turn on "online push" (`enable_online_push_notifications`)
then, they would still receive notifications for PMs when online.
This isn't how the "online push enabled" feature is supposed to work;
it should only act as a wrapper around the other notification settings.
The buggy code was this in `handle_push_notifications`:
```
if not (
    receives_offline_push_notifications(user_profile)
    or receives_online_push_notifications(user_profile)
):
    return
# send notifications
```
This commit removes that code, and extends our `notification_data.py` logic
to cover this case, along with tests (see the sketch after this list).
2. The name for these settings is slightly misleading. They essentially
describe "what to send notifications for" (PMs and mentions), not
"when to send notifications" (offline). This commit improves the situation
by restricting the use of this term to the database field only, and using
clearer names everywhere else. This distinction will be important for keeping
the code non-confusing when we implement multiple notification options
as a dropdown in the future (never/when offline/when offline or online, etc.).
3. We should ideally re-check all notification settings just before the
notifications are sent. This is especially important for email notifications,
which may be sent a long time after the message was sent. We will add code
in the future to thoroughly re-check settings before sending
notifications in a clean manner, but temporarily not re-checking isn't
a terrible scenario either.
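
A minimal sketch of how the "online push enabled" setting can act purely as a
wrapper at enqueue time; the function and parameter names here are hypothetical
and not the exact `notification_data.py` implementation:

```
def should_enqueue_push_notification(
    *,
    is_pm_or_mention: bool,
    stream_push_enabled: bool,
    enable_offline_push_notifications: bool,
    enable_online_push_notifications: bool,
    user_is_online: bool,
) -> bool:
    # First decide whether this message type is one the user wants
    # push notifications for at all.
    wants_push = (
        is_pm_or_mention and enable_offline_push_notifications
    ) or stream_push_enabled
    if not wants_push:
        return False
    # "Online push" only wraps the decision above: it controls whether
    # we also notify while the user is online, never whether we notify
    # for message types the user has turned off.
    if user_is_online and not enable_online_push_notifications:
        return False
    return True
```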
Part of #19272
We still keep referring to this model with "MutedTopic" to reduce the
diff size of this commit. The alias will be removed in the next commit.
This commit skips on renaming the `date_muted` field to something more
general. That will be done in further commits, along with the code and
API changes.
In this commit:
* We update the `UserStatus` model to use
`AbstractReaction` as a base class, so that we get all the
fields needed to store the status emoji.
* We update the user status endpoint
(`users/me/status`) to accept status emoji fields.
* We update the user status event to add status emoji
fields.
Co-authored-by: Yash Rathore <33805964+YashRE42@users.noreply.github.com>
This commit adds can_add_custom_emoji
helper to check whether the user can
add custom emoji or not.
This function will be used further when
add_custom_emoji_policy will be extended
to include all COMMON_POLICY_VALUES.
This commit replaces boolean field add_emoji_by_admins_only with an
integer field add_custom_emoji_policy as we would also add full members
and moderators option for this setting in further commits.
This removes a bunch of non-functional duplicate JavaScript, HTML, and
CSS that was interfering with maintenance on the functional originals,
because it was never clear how to update the duplicates or how to
check that you’d updated the duplicates correctly.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This commit moves "enter_sends" setting to property_types dict.
With this change, changing enter_sends setting also sends an
event of type "update_display_settings" and thus enables us
to live-update the UI.
We incorrectly include many realm settings in the data section of
'realm/update_dict' schema. It should only contain the settings
related to message edit, realm icon, realm logo and authentication
methods and not other settings, because all the other settings send
'realm/update' event and not 'realm/update_dict' event.
This commit only removes 'invite_to_realm_policy' and others will
be removed separately.
Instead of directly changing the `POST` attribute of a request, we
utilize the `HostRequestMock` initializer to produce requests with
different post data.
The decorators require the decorated function to be a valid view
function. This changes the way the mocked view functions and requests
are implemented such that we can invoke view functions without future
type errors.
`export_realm` accepts an HttpRequest as the first argument,
while `self.client_post` conflicts with it. Though the argument is
unused in `export_realm`, we keep it to be compliant with the
view function type.
As we return the actual decorator as-is only if `function` is
`None`, we can use `@overload` to accurately annotate the return type
of the decorator.
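
A generic sketch of this `@overload` pattern for a decorator that can be used
with or without arguments; the decorator name is hypothetical:

```
from typing import Callable, Optional, TypeVar, Union, overload

FuncT = TypeVar("FuncT", bound=Callable[..., object])


@overload
def log_calls(function: FuncT) -> FuncT: ...
@overload
def log_calls(function: None = None) -> Callable[[FuncT], FuncT]: ...
def log_calls(
    function: Optional[FuncT] = None,
) -> Union[FuncT, Callable[[FuncT], FuncT]]:
    # Called as @log_calls: `function` is the decorated function.
    # Called as @log_calls(): `function` is None, so return the decorator itself.
    def decorator(func: FuncT) -> FuncT:
        return func  # the real logic would wrap `func` here

    if function is None:
        return decorator
    return decorator(function)
```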
When calling some functions or assigning values to certain attributes,
the arguments/right operand do not match the exact type that the
functions/attributes expect, and thus we fix that by converting types
beforehand.
Of the two other logging mocks left in this file, one checks
a logging call isn't made and another makes sure errors
aren't allowed by raising an exception as a side_effect
to the logger.
Cross realm bots will soon stop being a thing. This param is responsible
for displaying "System Bot" in the user info popover - so this rename is the
right way to handle the situation.
We will likely want to rename the `cross_realm_bots` section as well,
but that is a more involved API migration.
The code didn't account for existence of SOCIAL_AUTH_SUBDOMAIN. So the
redirects would happen to endpoints on the SOCIAL_AUTH_SUBDOMAIN, which
is incorrect. The redirects should happen to the realm from which the
user came.
This fixes a batch of mypy errors of the following format:
'Item "None" of "Optional[Something]" has no attribute "abc"
Since we have already been recklessly using these attributes
in the tests, adding assertions beforehand is justified, presuming
that they shouldn't be None.
If a user doesn't have enable_drafts_synchronization set to True, then
don't let them access the drafts API. This will help protect us
against client bugs accidentally sending drafts to the server when the
feature is disabled.
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
This field will control whether or not a user wants to sync their
drafts between different clients. Defaults to enabled.
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
We allow a maximum value of one week to make sure there aren't a huge
number of rows in the table for any user (this could happen if stream
notifications are enabled).
This commit also fixes a small error in the user_settings test.
We only have one query which will change database state in this function,
and we already have a lock on the process itself, so there's no need for
a transaction.
This was added in ebb4eab0f9.
These modern landing pages cover use cases previously not detailed on
our website. Technically, we had a /for/research page before, but it
wasn't finished or linked to from anywhere.
Removed "function-url-quotes" stylelint rule
since I need to use quotes in url to use an
svg as list bullet point. There are spacing issues
using it as an image. Also, using quotes in url
is actually the recommended way to do it otherwise
there could be issue with escaping.
There might be good reasons to have other external authentication
methods such as SAML configured, but none of them is available.
This happens, for example, when you have enabled SAML so that Zulip is
able to generate the metadata in XML format, but you haven't
configured an IdP yet. This commit makes sure that the phrase _OR_ is
only shown on the login/account page when there are actually other
authentication methods available. When they are just configured, but
not available yet, the page looks like as if no external
authentication methods are be configured.
We achieve this by deleting any_social_backend_enabled, which was very
similar to page_params.external_authentication_methods, which
correctly has one entry per configured SAML IdP.
This is necessary to break the uncollectable reference cycle created
by our ‘request_notes.saved_response = json_response(…)’, Django’s
‘response._resource_closers.append(request.close)’, and Python’s
https://bugs.python.org/issue44680.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This prevents a memory leak caused by the `SimpleLazyObject` instance of
`UserProfile` that creates a reference loop with the request object
via `ZulipRequestNotes`.
This API change removes unnecessary complexity from a client that
wants to change a user's personal settings, and also saves developers
from needing to make decisions about what sort of setting something is
at the API level.
We preserve the old settings endpoints as mapping to the same function
as the new one for backwards-compatibility. We delete the
documentation for the old endpoints, though the documentation for the
merged /settings endpoint mentions how to use the old endpoints when
needed.
We migrate all backend tests to the new endpoints, except for
individual tests for each legacy endpoint to verify they still work.
Co-authored-by: sahil839 <sahilbatra839@gmail.com>
This prevents a memory leak arising from Python’s inability to collect
a reference cycle from a WeakKeyDictionary value to its key
(https://bugs.python.org/issue44680).
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This makes several changes:
* Fixes a bug where the help text explaining our policies was not displayed.
* Adds help text for the many organization types that previously had none defined.
* Copy-edits the help text somewhat.
* Offers all of the organization type options.
* Removes the 100% coverage requirement because it's annoying to test
the e.currentTarget click handler.
Most of the Markdown Preprocessors followed a common
template, and the `run` and `init` code was duplicated
multiple times for different preprocessors.
This commit adds a base class from which the preprocessors
following the pattern can inherit, overriding the
`render` and `generate_text` functions to execute their code.
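
A rough sketch of such a base class, built on python-markdown's `Preprocessor`;
the class name and marker handling below mirror the description but are
otherwise assumptions, not the exact implementation:

```
from typing import Any, List

from markdown import Markdown
from markdown.preprocessors import Preprocessor


class BasePreprocessor(Preprocessor):
    """Shared run/init template; subclasses override render/generate_text."""

    def __init__(self, md: Markdown, config: Any, marker: str) -> None:
        super().__init__(md)
        self.config = config
        self.marker = marker  # e.g. a "{generate_..." directive prefix

    def run(self, lines: List[str]) -> List[str]:
        # Replace every line starting with our marker with rendered text.
        output: List[str] = []
        for line in lines:
            if line.strip().startswith(self.marker):
                output.extend(self.render(line.strip()))
            else:
                output.append(line)
        return output

    def render(self, line: str) -> List[str]:
        return self.generate_text(line)

    def generate_text(self, line: str) -> List[str]:
        raise NotImplementedError  # subclasses supply the actual content
```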
Sometime in the deep past, the Zulip GET /users/me/subscriptions
endpoint started returning subscribers. We noticed this and made it
optional via the include_subscribers parameter in
1af72a2745; however, we didn't notice
that they were being returned as emails rather than user IDs.
We migrated the core /register code paths to use subscriber IDs years
ago; this change completes that for the endpoints we forgot about.
The documentation allowed this error because we apparently had no
tests for this code path that used the actual API.
We remove the "full_name" and "account_email" fields from the response
of 'PATCH /settings' endpoint. These fields were part of the response
to make sure that we tell that the parameters not present in response
were ignored.
We can remove these fields as 'ignored_parameters_unsupported' now
specifies which parameters were ignored and not supported by the
endpoint.
We add "ignored_parameters_unsupported" field to the response object
of 'PATCH /settings' endpoint. This will contain the parameters
passed to the endpoint which are not changed by the endpoint and are
ignored.
This will help in removing the other fields like "full_name" from
response which was essentially present to specify that only these
fields were updated by the endpoint and rest were ignored.
We will also change other endpoints to follow this in future.
We migrated the main method in the API bindings project to
get_subscriptions some time ago, and apparently neglected to change
the API documentation as well.
This is more robust towards rerunning failed tests (which ran
partially and added some events to a queue before failing).
The tearDown code was added in 571f8b8664.
We are starting to run into situations where this data could be
quite useful for making future decisions, so it makes sense to store it
in the database, not just in an email.
Moving forward we are hoping to collect data on org types from our
users, so it makes sense to display the org type on the "Counts"
tab of our /activity page.
We incorrectly include many realm settings in the data section of
'realm/update_dict' schema. It should only contain the settings
related to message edit, realm icon, realm logo and authentication
methods and not other settings, because all the other settings send
'realm/update' event and not 'realm/update_dict' event.
This commit only removes 'add_emoji_by_admins_only' and others will
be removed separately.
Previously, non-admin emoji authors were allowed to
delete an emoji only if add_emoji_by_admins_only
was false. But, as the add_emoji_by_admins_only setting
controls who can add emojis, not who can delete them, it
should not affect the behavior of deleting emojis,
and users should always be allowed to delete the
emojis which they added themselves.
This commit adds moderators and full members options for
user_group_edit_policy by using COMMON_POLICY_TYPES.
Moderators are not required to be a member of a user group in
order to edit or remove the user group, if they are allowed
to do so according to user_group_edit_policy.
But full members need to be a member of the user group to edit
or remove it.
There is no need to have an error message which specifies the
roles having permission to edit user groups; we can simply
use "Insufficient permission" as the error message, as we already
show the roles having permission clearly in the UI.
This concludes the HttpRequest migration to eliminate arbitrary
attributes (except private ones that belong to Django) attached
to the request object at runtime, migrating them to a
separate data structure dedicated to the purpose of attaching
information (so-called notes) to an HttpRequest.
This migrates some mocked Request classes and mocked requests built
with namedtuple in test_decorators and test_mirror_users to use the
refactored HostRequestMock.
Since weakref cannot be used with namedtuple, this old way of mocking a
request object should be migrated to using HostRequestMock. Only after
this change can we extract the client from the request object and store it
via ZulipRequestNotes.
This includes the migration of fields that require only trivial changes
to be stored with ZulipRequestNotes.
Specifically _requestor_for_logs, _set_language, _query, error_format,
placeholder_open_graph_description, and saved_response, which were all
previously set on the HttpRequest object at some point. This migration
allows them to be typed.
We will no longer use the HttpRequest to store the rate limit data.
Using ZulipRequestNotes, we can access rate_limit and ratelimits_applied
with type hints support. We also save the process of initializing
ratelimits_applied by giving it a default value.
We create a class called ZulipRequestNotes as a new home for all the
additional attributes that we add to the Django HttpRequest object.
This allows mypy to typecheck them and also enforces type safety.
Most of the attributes are added in the middleware, and thus it is
generally safe to assert that they are not None in a code path that
goes through the middleware. Otherwise, the caller is obligated to do
the type check manually.
This also resolves some cyclic dependencies that zerver.lib.request
had with zerver.lib.rate_limiter and zerver.tornado.handlers.
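
A minimal sketch of what such a notes container and accessor could look like;
this is not the exact Zulip implementation, and any fields beyond those named
in these commits are assumptions:

```
from dataclasses import dataclass, field
from typing import Any, List, Optional

from django.http import HttpRequest


@dataclass
class ZulipRequestNotes:
    # Typed replacements for attributes formerly set directly on HttpRequest.
    rate_limit: Optional[str] = None
    ratelimits_applied: List[Any] = field(default_factory=list)
    saved_response: Optional[Any] = None
    requestor_for_logs: Optional[str] = None
    error_format: Optional[str] = None


def get_request_notes(request: HttpRequest) -> ZulipRequestNotes:
    # Store the notes on a single private attribute of the request,
    # so mypy only needs to know about one typed container.
    notes = getattr(request, "_zulip_request_notes", None)
    if notes is None:
        notes = ZulipRequestNotes()
        request._zulip_request_notes = notes  # type: ignore[attr-defined]
    return notes
```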
Previously, we stored up to 2 minutes worth of email events in memory
before processing them. So, if the server were to go down we would lose
those events.
To fix this, we store the events in the database.
This is a prep change for allowing users to set a custom grace period for
email notifications, since the bug noted above will be aggravated by
longer grace periods.
This will be used to store the missedmessage events received
during the waiting time for email notifications (which is currently
2 minutes, hardcoded).
The change in `test_retention` is because we've set `on_delete=CASCADE`
for the message field of this table.
The new query is like so:
```
DELETE FROM "zerver_missedmessageemailentry"
WHERE "zerver_missedmessageemailentry"."message_id" IN (
1545, 1546, 1547, 1548, 1549, 1550, 1551, 1552, 1553
)
```
This reduces loose strings in the codebase, and allows us to not worry
about the exact naming (`stream_email_enabled` or `stream_emails_enabled`?)
and tense (`mentioned` or `mention`?).
Ideally this new class should have been in `lib/notification_data.py`,
which is our file for things like this. But, the next commit requires
using this data in `models.py`, and importing `notification_data.py`
into `models.py` causes recursive imports.
The operationId is directly used in URLs of API doc pages
to find the OpenAPI data to render. However, this is dash-
separated in the URLs, and having underscore_separated IDs
in OpenAPI data doesn't allow direct comparison of the two.
This commit changes all OperationIDs from underscore_separated
to dash-separated.
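
For illustration only (hypothetical variable names), the lookup the docs
renderer needs becomes a direct string comparison once the IDs are
dash-separated:

```
# URL slug of an API doc page, e.g. /api/get-own-user
url_slug = "get-own-user"

# With underscore_separated operationIds, a conversion was needed:
assert "get_own_user".replace("_", "-") == url_slug

# With dash-separated operationIds, the comparison is direct:
assert "get-own-user" == url_slug
```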
Work around Fatal1ty/aioapns#15, by silencing error-level logging from
the aioapns logger. We deal with the results of failed
send_notification calls by examining the `result.description` and
handling them; the extra logging message merely clutters the Sentry
logs.
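
One common way to silence error-level output from a third-party logger like
this (a sketch; not necessarily how Zulip configures it):

```
import logging

# Drop ERROR records from aioapns; failed sends are already handled by
# inspecting result.description on the send_notification return value.
logging.getLogger("aioapns").setLevel(logging.CRITICAL)
```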
Previously, one needed to specify all the HTTP status
codes that we want to render along with the operation,
but the primary use case just needs the responses for
all the status codes, not the response for a single one.
This commit modifies the Markdown extension to render
all the responses of all status codes of a specified
operation in a loop.
The `# nocoverage` was unnecessary apart from for the compatibility code,
so add a test for that code and remove the `# nocoverage`.
The `message_id` -> `message_ids` conversion was done in
9869153ae8.
Previously, even non-admins had the option to override built-in
emojis in the `Settings Emoji` UI.
This commit limits that functionality, allowing only realm
administrators to override built-in emojis with their custom emojis,
by adding an authorization check in the backend.
It also adds relevant tests in `test_realm_emoji` which cover
the cases where an admin and a non-admin try to override
a built-in emoji.
Fixes #18860.
We added this function in 8e1a7cfb52
in order to make things more readable in examples which hard-code user
ids. The point is to validate that the id indeed refers to the user that
the person writing the example expects, while providing information to
readers of the code so they don't have to do db queries to figure out
the user. As mentioned in the commit referred to above, this is
particularly useful when some db changes cause renumbering of user ids -
because then all these ids have to be adjusted and it's nice to know the
intended user.
Zulip identifies users by realm+delivery_email which means that the
Django changepassword command doesn't work well -
since it looks only at the .email field.
Thus we fork its code to our own change_password command.
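
A rough sketch of what such a forked management command might look like,
looking the user up by realm and delivery_email; the argument names are
illustrative assumptions, and `get_realm`/`UserProfile` refer to the existing
Zulip models:

```
from argparse import ArgumentParser
from getpass import getpass
from typing import Any

from django.core.management.base import BaseCommand, CommandError

from zerver.models import UserProfile, get_realm


class Command(BaseCommand):
    help = "Change a user's password, looking the user up by realm + delivery_email."

    def add_arguments(self, parser: ArgumentParser) -> None:
        parser.add_argument("-r", "--realm", required=True, help="realm subdomain")
        parser.add_argument("email", help="the user's delivery email address")

    def handle(self, *args: Any, **options: Any) -> None:
        realm = get_realm(options["realm"])
        try:
            user = UserProfile.objects.get(
                realm=realm, delivery_email__iexact=options["email"]
            )
        except UserProfile.DoesNotExist:
            raise CommandError("No such user in that realm.")
        user.set_password(getpass(f"New password for {user.delivery_email}: "))
        user.save(update_fields=["password"])
```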
We use subs as a common variable name for a collection of stream
data structures used in settings, in a lot of modules. So this
rename clears up a bunch of related shadowed variables.
This function had a confusing name, which could result in someone
using it unintentionally when they meant do_reactivate_user.
We also add docstrings for both functions.
We don't want this rate limit to affect legitimate users, so hitting it
should be abnormal - thus it's worth logging, so that we can spot if we're
rate limiting legitimate users and know to increase the limit.
If the user is logged in, we'll stick to rate limiting by the
UserProfile. In case of requests without authentication, we'll apply the
same limits but to the IP address.
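
A hedged sketch of that dispatch; the helper names stand in for the real rate
limiter entry points and are not the exact API:

```
from django.http import HttpRequest


# Hypothetical helpers standing in for the real rate limiter entry points.
def rate_limit_user(user: object, domain: str) -> None: ...
def rate_limit_ip(ip: str, domain: str) -> None: ...


def rate_limit_request(request: HttpRequest) -> None:
    # Authenticated requests are limited per UserProfile; unauthenticated
    # ones get the same limits applied to the client IP, under "api_by_ip".
    if request.user.is_authenticated:
        rate_limit_user(request.user, domain="api_by_user")
    else:
        rate_limit_ip(request.META["REMOTE_ADDR"], domain="api_by_ip")
```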
This option of specifying a different domain isn't used anywhere as of
now and we don't have a concrete way it could be used in the near
future. It's also getting in the way of how we want to do rate limiting
by IP, for which we'll want to apply a new domain 'api_by_ip'. That's
incompatible with how this decorator wants to determine the domain based
on the argument it receives when called to decorate a view function.
If in the future we want to have more granular control over API domains,
this can be refactored to be more general, but as of now it's just
imposing restrictions on how we can write the rate limiting code inside
it.
We add a new class, UserBaseSettings, and will be moving some of
the user settings to this class from UserProfile, which will
inherit from it.
This is a prep commit for adding a RealmUserDefault table, which will
be used to set the realm-wide defaults for user settings like night
mode, etc. Adding UserBaseSettings will help us avoid copying
the same fields into RealmUserDefault.
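
A minimal sketch of this abstract-base-model pattern in Django; the specific
fields shown are illustrative assumptions, and "Realm" refers to the existing
realm model:

```
from django.db import models


class UserBaseSettings(models.Model):
    # Per-user settings shared by UserProfile and (later) RealmUserDefault.
    color_scheme = models.PositiveSmallIntegerField(default=1)
    enable_drafts_synchronization = models.BooleanField(default=True)

    class Meta:
        abstract = True  # no table of its own; fields are inherited


class UserProfile(UserBaseSettings):
    # ... the existing UserProfile fields continue to live here ...
    full_name = models.CharField(max_length=100)


class RealmUserDefault(UserBaseSettings):
    # Realm-wide default values for the same settings, without duplicating
    # the field definitions.
    realm = models.ForeignKey("Realm", on_delete=models.CASCADE)
```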
We remove timezone setting from UserProfile.property_types
so that we can directly use UserProfile.property_types for
implementation of realm-default values of various user
settings.
This commit removes the redundant `compute_show_invites` function,
which computes the `show_invites` page parameter in `lib/users.py`.
This is because commit 13399833b0 removed
the `show_invites` context variable passed to index.html.
Hence, the `show_invites` page_param key no
longer needs to be computed in the backend, as it can be replaced with
`settings_data.user_can_invite_others_to_realm()` in the frontend.
This commit also removes the `test_compute*` tests in
`test_home` that were concerned with the `show_invites` page parameter,
as they are no longer required.
* `stream_name`: This field is actually redundant. The email/push
notification handlers don't use that field from the dict, and they
query for the message anyway, so we're safe in deleting this field,
even if in the future we end up needing the stream name.
* `timestamp`: This is totally unused by the email/push notification
handlers, and isn't sent to push clients either.
* `type` is used only by the push notifications handler, since only
push notifications can be revoked, so we move that handling to run only there.
The code to also notify for wildcard mentions was added in
0ed0bb6828.
But that showed the same text for both the cases. This commit fixes
that.
This is more of change for correctness. The mobile app currently does
not rely on this text for notifications, but constructs the text by
itself from the data in the payload.
This also fixes the "stream_push_notify" case to consistently show
a `#` before the stream name.
Running notify_server_error directly from the logging handler can lead
to database queries running in a random context. Among the many
potential problems that could cause, one actual problem is a
SynchronousOnlyOperation exception when running in an asyncio event
loop.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This fixes a bug introduced in 95b46549e1
which made the worker simply log a warning about the timeout and then
continue consume()ing the event that should have also been interrupted.
The idea here is to introduce an exception which can be used to
interrupt the consume() process without triggering the regular handling
of exceptions that happens in _handle_consume_exception.
Since do_create_realm also creates the general and core team streams,
we rename general to Verona right after the realm is created, mostly
because we don't really want two additional streams, and this will
probably make it easier to review things.
There are puppeteer test changes because we have a new "core team"
stream in tests, as well as a new default notification stream
"Verona". Because of this, tests in message-basics, for example, have
to be changed, since the newly added core team stream affects the order
in which we navigate through the streams using arrow keys.
The extra await for a selector was added in the subscriptions test to make
the tests wait. Without the await, the tests were passing occasionally
and failing at other times.
Fixes #6967.
The previous logic was incorrect and was not flushing the stream from
cache after deletion.
```
stream = get_realm_stream("Verona", realm.id)
stream.delete()
get_realm_stream("Verona", realm.id)
```
In the above example, the last line of code would have returned
the stream from cache instead of throwing a Stream.DoesNotExist
error. This is fixed in the commit.
I have verified that this commit indeed fixes the issue by checking
that calling get_realm_stream again after deleting the stream
results in a Stream.DoesNotExist error.
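
One way such a fix can look; this is a sketch assuming a hypothetical cache key
helper rather than the exact Zulip cache API:

```
from typing import Any

from django.core.cache import cache


def get_realm_stream_cache_key(stream_name: str, realm_id: int) -> str:
    # Hypothetical cache key format for the per-realm stream lookup.
    return f"stream_by_realm_and_name:{realm_id}:{stream_name.lower()}"


def delete_stream_and_flush_cache(stream: Any, stream_name: str, realm_id: int) -> None:
    stream.delete()
    # Flush the cached lookup so a later get_realm_stream() call raises
    # Stream.DoesNotExist instead of returning the stale cached object.
    cache.delete(get_realm_stream_cache_key(stream_name, realm_id))
```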
This commit migrates the `navbar.html` Django template
to handlebars by creating a new file as `navbar.hbs`
within `/static/templates` which is then rendered
using `ui_init` module.
As a part of migration, we also remove the `search_pills_enabled`
and `embedded` parameters from the context attribute as they
are no longer needed now.
Fixes part of #18792.
Throwing an exception is excessive in case of this worker, as it's
expected for it to time out sometimes if the urls take too long to
process.
With a test added by tabbott.
This allows specific queue workers to override the default behavior and
implement their own response to the timer expiring. We will want to use
this for the embed_links queue at least.
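
A sketch of that hook; the `timer_expired` method name and the class names are
assumptions, not the exact Zulip worker API:

```
import logging
from typing import Any, Dict, List

logger = logging.getLogger(__name__)


class QueueProcessingWorker:
    def timer_expired(self, limit: int, events: List[Dict[str, Any]]) -> None:
        # Default behavior: treat an expired consume timer as a hard error.
        raise TimeoutError(f"Timed out after {limit}s processing {len(events)} events")


class FetchLinksEmbedData(QueueProcessingWorker):
    def timer_expired(self, limit: int, events: List[Dict[str, Any]]) -> None:
        # embed_links is expected to time out occasionally on slow URLs,
        # so just log and move on instead of raising.
        logger.warning(
            "Timed out fetching URL previews for %d events; skipping", len(events)
        )
```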