The test_docs change is because Django runs test cases with DEBUG =
False, which ordinarily means it doesn’t serve /static during tests.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
The documentation suggests that you can get the dev server to use
production assets by setting PIPELINE_ENABLED = True, but that
resulted in Django being unable to find any static files because
FileSystemFinder was missing from STATICFILES_FINDERS. Using the
production storage configuration in this case reduces the number of
possible configurations and seems to result in things being less
broken.
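As a rough illustration of the Django settings involved (a sketch, not Zulip's actual settings code; the exact storage configuration differs):

```python
# Sketch only: the dev server can serve production-style assets only if
# the static files machinery is fully configured, including the finders.
PIPELINE_ENABLED = True  # the django-pipeline setting referenced in the docs

STATICFILES_FINDERS = [
    # Without FileSystemFinder, files listed in STATICFILES_DIRS are
    # invisible to Django's static file handling in development.
    "django.contrib.staticfiles.finders.FileSystemFinder",
    "django.contrib.staticfiles.finders.AppDirectoriesFinder",
]
```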
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
We create a path structure of the form:
`/var/<uuid>/test-backend/run_1234567/worker_N/`
A settings attribute, `TEST_WORKER_DIR`, was created as a worker's
directory for a given `test-backend` run's file storage. The
appropriate path is created in `setup_test_environment`, while each
worker's subdirectory is created within `init_worker`.
This allows a test class to write to `settings.TEST_WORKER_DIR`,
populating the appropriate directory for a given worker. It also
provides the long-term approach for cleaning up filesystem access
in the backend unit tests.
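A minimal sketch of the directory layout described above (helper names here are hypothetical, not the actual test-runner code):

```python
import os
import tempfile
import uuid

def setup_run_dir() -> str:
    # e.g. <tmp>/<uuid>/test-backend/run_<pid>/
    run_dir = os.path.join(
        tempfile.gettempdir(), str(uuid.uuid4()), "test-backend", f"run_{os.getpid()}"
    )
    os.makedirs(run_dir, exist_ok=True)
    return run_dir

def init_worker_dir(run_dir: str, worker_id: int) -> str:
    # Each parallel test worker writes only inside its own subdirectory,
    # which the test suite exposes as settings.TEST_WORKER_DIR.
    worker_dir = os.path.join(run_dir, f"worker_{worker_id}")
    os.makedirs(worker_dir, exist_ok=True)
    return worker_dir
```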
We don’t need a hacked copy anymore. We run the installed version out
of node_modules in development, and a Webpack-bundled version of that
in production.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
In each URL entry of urls.py, if we want to mark an endpoint as being
intentionally undocumented, then in the kwargs, instead of mapping
directly like 'METHOD': 'zerver.views.package.foo', we can provide a
tag called 'intentionally_undocumented' and map like:
'METHOD': ('zerver.views.package.foo', {'intentionally_undocumented'}).
If an endpoint is marked as intentionally undocumented, but we find
some OpenAPI documentation for it then we'll throw an error, in which
case either the 'intentionally_undocumented' tag should be removed or
the faulty documentation should be removed.
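A hypothetical urls.py entry illustrating the two forms (the URL patterns and view paths are placeholders, not real Zulip endpoints):

```python
from django.conf.urls import url
from zerver.lib.rest import rest_dispatch

urlpatterns = [
    # Documented endpoint: plain string mapping.
    url(r'^json/example/foo$', rest_dispatch,
        {'GET': 'zerver.views.package.foo'}),
    # Intentionally undocumented endpoint: tuple of (view path, tags).
    url(r'^json/example/bar$', rest_dispatch,
        {'GET': ('zerver.views.package.bar', {'intentionally_undocumented'})}),
]
```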
This will allow us to mark a REQ variable as intentionally
undocumented. With this, we can remove some of the endpoints marked
as "buggy" even though they're not actually buggy, we just needed to
specify certain parameters as intentionally undocumented (e.g. the
stream_id for the /users/me/subscriptions/muted_topics endpoint.)
Any REQ variable with intentionally_undocumentated set to True
will not be added to the arguments_map data structure.
For some of the other "buggy" endpoints, we would want to mark the
entire endpoint as intentionally undocumented via the urls.py file.
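For illustration, a REQ parameter marked this way might look roughly like the following (the view name, parameter, and validator are placeholders, not the real view code):

```python
from zerver.lib.request import REQ, has_request_variables
from zerver.lib.validator import check_int

@has_request_variables
def update_muted_topic(request, user_profile,
                       stream_id=REQ(validator=check_int, default=None,
                                     intentionally_undocumented=True)):
    # stream_id is accepted but deliberately omitted from arguments_map,
    # so the OpenAPI documentation tests skip it.
    ...
```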
As of commit cff40c557b (#9300), these
files are no longer served directly to the browser. Disentangle them
from the static asset pipeline so we can refactor it without worrying
about them.
This has the side effect of eliminating the accidental duplication of
translation data via hash-naming in our release tarballs.
This reverts commit b546391f0b (#1148).
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
It appears not to have been useful and makes it marginally harder to
reason about how module resolution works. Paths to static content in
node_modules should be resolved through Webpack instead.
(This node_modules symlink was originally created in the pre-webpack world
where all of our static asset paths were based in static/.)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
It's common for a broken settings.py file to result in Django not
starting, thus never writing to `/var/log/zulip/errors.log`. Such
behavior can be discouraging when the server 500s without a traceback
to accompany it. To fix this, we simply catch any exception raised by
django.setup() and log it appropriately.
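Roughly, the idea is something like this (a sketch, not the exact wsgi.py change):

```python
import logging
import django

try:
    django.setup()
except Exception:
    # Make sure a broken settings.py leaves a traceback behind instead of
    # silently 500ing with an empty /var/log/zulip/errors.log.
    logging.basicConfig(filename="/var/log/zulip/errors.log", level=logging.ERROR)
    logging.exception("django.setup() failed; check your settings.py")
    raise
```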
Comments tweaked by tabbott.
Closes: #7032.
Previously, our GitHub authentication backend just used the user's
primary email address associated with GitHub, which was a reasonable
default, but quite annoying for users who have several email addresses
associated with their GitHub account.
We fix this by adding a new screen where users can select which of
their (verified) GitHub email addresses to use for authentication.
This is implemented using the "partial" feature of the
python-social-auth pipeline system.
Each email is displayed as a button. Clicking on that button chooses
the email. The email value is stored in a hidden input above the
button. The `primary_email` is displayed on top followed by
`verified_non_primary_emails`. Backend name is also passed as
`backend` to the template, which in our case is GitHub.
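A heavily simplified sketch of the "partial" pipeline step (the function, template, and context names here are hypothetical, not the real Zulip implementation):

```python
from django.shortcuts import render
from social_core.pipeline.partial import partial

@partial
def pick_github_email(strategy, backend, details, *args, **kwargs):
    chosen = strategy.request_data().get('email')
    if chosen:
        details['email'] = chosen
        return  # resume the rest of the pipeline with the chosen address
    # Pause the pipeline and render the email-selection screen.
    return render(strategy.request, 'social_auth_select_email.html', {
        'backend': backend.name,  # e.g. "github"
        'primary_email': kwargs.get('primary_email'),
        'verified_non_primary_emails': kwargs.get('verified_non_primary_emails', []),
    })
```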
Fixes #9876.
This fixes an issue that caused LDAP synchronization to fail for
avatars. The problem occurred due to the lack of a 'name' attribute
on the BytesIO object that we pass to the upload backend (which is
only used in the S3 backend for computing Content-Type).
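The gist of the fix, schematically (not the exact upload-backend code):

```python
from io import BytesIO

avatar_image_data = b"..."  # raw image bytes fetched from LDAP
avatar_file = BytesIO(avatar_image_data)
# The S3 backend guesses the Content-Type from file.name, so the
# in-memory file needs one; without it, the sync failed.
avatar_file.name = "avatar.png"
```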
Fixes #12411.
`AUTH_LDAP_ALWAYS_UPDATE_USER` is `True` by default, which syncs the
attributes defined in `AUTH_LDAP_USER_ATTR_MAP` to the user profile on
every login. But the default code in `django-auth-ldap` only handles the
`full_name` field correctly. This commit disables the setting by default,
in favour of using the `sync_ldap_user_data` script as a cron job.
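In settings terms, the change amounts to something like this (the attribute mappings shown are only examples):

```python
# django-auth-ldap: don't rewrite user attributes on every login.
AUTH_LDAP_ALWAYS_UPDATE_USER = False

AUTH_LDAP_USER_ATTR_MAP = {
    "full_name": "cn",       # example LDAP attribute names; deployments vary
    "avatar": "jpegPhoto",
}

# Instead, a cron job runs the management command periodically, e.g.:
#   ./manage.py sync_ldap_user_data
```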
Since positional arguments are interpreted differently by different
backends in Django's authentication backend system, it’s safer to
disallow them.
This had been the motivation for previously declaring the parameters
with default values when we were on Python 2, but that was not super
effective because Python has no rule against positional default
arguments and that convention for our authentication backends was
solely enforced by code review.
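Schematically (the parameter names are illustrative, not the real backends' signatures):

```python
class ExampleAuthBackend:
    # The bare `*` makes every credential keyword-only, so callers can't
    # accidentally bind a password or realm to the wrong positional slot.
    def authenticate(self, request=None, *,
                     username=None, password=None, realm=None):
        ...
```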
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This commit adds a new developer tool: the "integrations dev panel",
which will serve as a replacement for the send_webhook_fixture_message
management command as a way to test integrations with much greater ease.
Jitsi Meet is the correct name for the product we integrate with. There is
one other reference to Jitsi, but it's in the db and will require a
migration.
This makes the implementation of `get_realm` consistent with its
declared return type of `Realm` rather than `Optional[Realm]`.
Fixes #12263.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Fixes #12132.
Realm setting to disable avatar changes is already present.
The `AVATAR_CHANGES_DISABLED` setting now follows the same
2-setting model as `NAME_CHANGES_DISABLED`.
The GitHub Services model that GitHub used to send requests to this
legacy integration has not been available since earlier in 2019.
Removing this integration also allows us to finally remove
authenticated_api_view, the legacy authentication model from 2013 that
had been used for this integration (and other features long since
upgraded).
A few functions that were used by the Beanstalk webhook are moved into
that webhook's implementation directly.
An endpoint was created in zerver/views. Basic rate-limiting was
implemented using RealmAuditLog. The idea here is to simply log each
export event as a realm_exported event. The number of events
occurring in the time delta is checked to ensure that the weekly
limit is not exceeded.
The event is published to the 'deferred_work' queue processor to
prevent the export process from being killed after 60s.
Upon completion of the export, the realm admin(s) are notified.
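The rate-limit check is conceptually along these lines (the limit, helper name, and event-type constant are illustrative, not the exact view code):

```python
from datetime import timedelta

from django.utils import timezone
from zerver.models import RealmAuditLog

MAX_EXPORTS_PER_WEEK = 5  # hypothetical limit

def export_rate_limited(realm) -> bool:
    week_ago = timezone.now() - timedelta(days=7)
    recent_exports = RealmAuditLog.objects.filter(
        realm=realm,
        event_type=RealmAuditLog.REALM_EXPORTED,  # the realm_exported event
        event_time__gte=week_ago,
    ).count()
    return recent_exports >= MAX_EXPORTS_PER_WEEK
```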
The main point here is that you should use a symlink rather than
changing it, since it's more maintenance work to update our nginx
configuration to use an alternative path than to just create a
symbolic link.
Fixes #12157.
Previously, we had some expensive-to-calculate keys in
zulip_default_context, especially around enabled authentication
backends, which in total were a significant contributor to the
performance of various logged-out pages. Now, these keys are only
computed for the login/registration pages where they are needed.
This is a moderate performance optimization for the loading time of
many logged-out pages.
Closes #11929.
These previously lived in the "Optional settings" section, which
generally caused users not to read them.
(Also do a bit of reorganization of the "optional settings" area).
Closes #2420
We add rate limiting (max X emails within Y seconds per realm) to the
email mirror. By creating a RateLimitedRealmMirror class, inheriting from
RateLimitedObject, and a rate_limit_mirror_by_realm function, following
the mechanism used by rate_limit_user, we're able to have this
implementation mostly rely on the already existing, and proven over
time, rate_limiter.py code. The rules are configurable in settings.py in
RATE_LIMITING_MIRROR_REALM_RULES, analogously to RATE_LIMITING_RULES.
Rate limit verification happens in the MirrorWorker in
queue_processors.py. We don't rate limit missed-message emails: since
they use one-time addresses, they're not a spam threat.
test_mirror_worker is adapted to the altered MirrorWorker code, and a
new test, test_mirror_worker_rate_limiting, is added in
test_queue_worker.py to provide coverage for these changes.
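A loose sketch of the new class (the actual RateLimitedObject interface in rate_limiter.py requires more methods than shown, and the names here may not match exactly):

```python
from django.conf import settings
from zerver.lib.rate_limiter import RateLimitedObject

class RateLimitedRealmMirror(RateLimitedObject):
    def __init__(self, realm) -> None:
        self.realm = realm

    def key(self) -> str:
        # One rate-limit bucket per realm for mirrored emails.
        return f"email_mirror:{self.realm.string_id}"

    def rules(self):
        # List of (seconds, max_emails) tuples, configured like
        # RATE_LIMITING_RULES.
        return settings.RATE_LIMITING_MIRROR_REALM_RULES
```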
This avoids repeatedly calling a Django auth function that takes a few
hundred microseconds to run in auth_enabled_helper, which itself is
currently called 14 times in every request to pages using
common_context.
This renames references to user avatars, bot avatars, or organization
icons to profile pictures. The strings in the UI are updated,
in addition to the help files, comments, and documentation. Actual
variable/function names, changelog entries, routes, and s3 buckets are
left as-is in order to avoid introducing bugs.
Fixes #11824.
When soft deactivation is run in "auto" mode (no emails are
specified and all users inactive for the specified number of days are
deactivated), catch-up is also run in "auto" mode if
AUTO_CATCH_UP_SOFT_DEACTIVATED_USERS is True.
Automatically catching up soft-deactivated users periodically would
ensure a good user experience for returning users, but on some servers
we may want to turn off this option to save on some disk space.
Fixes #8858, at least for the default configuration, by eliminating
the situation where there are a very large number of messages to recover.
Previously, the LDAP authentication model ignored the realm-level
settings for who can join a realm. This was sort of reasonable at the
time, because the original LDAP auth was an SSO solution that didn't
allow multiple realms, and so one could fully configure authentication
settings on the LDAP side. But now that we allow multiple realms with
the LDAP backend, one could easily imagine wanting different
restrictions on them, and so it makes sense to add this enforcement.
Now that we've more or less stabilized our authentication/registration
subsystem how we want it, it seems worth adding proper documentation
for this.
Fixes #7619.
Earlier, the behavior was to raise an exception, thereby stopping the
whole sync. Now we log an error message and skip the field. This also
fixes the `query_ldap` command to report missing fields without
error.
Fixes: #11780.
When a bunch of messages with active notifications are all read at
once -- e.g. by the user choosing to mark all messages, or all in a
stream, as read, or just scrolling quickly through a PM conversation
-- there can be a large batch of this information to convey. Doing it
in a single GCM/FCM message is better for server congestion, and for
the device's battery.
The corresponding client-side logic is in zulip/zulip-mobile#3343.
Existing clients today only understand one message ID at a time; so
accommodate them by sending individual GCM/FCM messages up to an
arbitrary threshold, with the rest only as a batch.
Also add an explicit test for this logic. The existing tests
that happen to cause this function to run don't exercise the
last condition, so without a new test `--coverage` complains.
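Schematically, the batching looks something like this (the threshold and payload field names are illustrative, not the exact wire format):

```python
from typing import Any, Dict, List

MAX_INDIVIDUAL_REMOVALS = 10  # hypothetical cutoff for legacy clients

def build_remove_payloads(message_ids: List[int]) -> List[Dict[str, Any]]:
    payloads: List[Dict[str, Any]] = []
    # Older clients only understand one message ID per notification, so
    # send the first few individually...
    for message_id in message_ids[:MAX_INDIVIDUAL_REMOVALS]:
        payloads.append({"event": "remove", "zulip_message_id": message_id})
    # ...and everything beyond the threshold as a single batched payload.
    rest = message_ids[MAX_INDIVIDUAL_REMOVALS:]
    if rest:
        payloads.append({
            "event": "remove",
            "zulip_message_ids": ",".join(str(mid) for mid in rest),
        })
    return payloads
```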
ACCOUNT_ACTIVATION_DAYS doesn't seem to be used anywhere.
INVITATION_LINK_VALIDITY_DAYS seems to do its job currently.
(It was only ever used in very early Zulip commits).
We were using a hardcoded relative path, which doesn't work if you're
not running this from the root of the Zulip checkout.
As part of fixing this, we need to make `LOCAL_UPLOADS_DIR` an
absolute path.
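For example, the path can be resolved against the settings file's location rather than the working directory (illustrative; the real settings code may differ):

```python
import os

# Anchor LOCAL_UPLOADS_DIR to the repository layout instead of whatever
# directory the process happens to be started from.
LOCAL_UPLOADS_DIR = os.path.abspath(
    os.path.join(os.path.dirname(__file__), "..", "var", "uploads")
)
```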
Fixes #11581.
The client-side fix to make these not a problem was in release
16.2.96, of 2018-08-22. We've been sending them from the
development community server chat.zulip.org since 2018-11-29.
We started forcing clients to upgrade with commit fb7bfbe9a,
deployed 2018-12-05 to zulipchat.com.
(The mobile app unconditionally makes a request to a route on
zulipchat.com to check for this kind of forced upgrade, so that
applies to mobile users of any Zulip server.)
So at this point it's long past safe for us to unconditionally
send these. Hardwire the old `SEND_REMOVE_PUSH_NOTIFICATIONS`
setting to True, and simplify it out.
For Google auth, the multiuse invite key should be stored in the
csrf_state sent to google along with other values like is_signup,
mobile_flow_otp.
For social auth, the multiuse invite key should be passed as params to
the social-auth backend. The passing of the key is handled by
social_auth pipeline and made available to us when the auth is
completed.
This is primarily a feature for onboarding, where an organization
administrator might send a bunch of random test messages as part of
joining, but then want a pristine organization when their users later
join.
But it can theoretically be used for other use cases (e.g. for
moderation or removing threads that are problematic in some way).
Tweaked by tabbott to handle corner cases with
is_history_public_to_subscribers.
Fixes #10912.
This reverts commit ec9f6702d8.
Now that pika 0.13.0 has merged our PR to not import twisted unless it
is needed, we don't need to use this performance hack in order to
avoid wasting time importing twisted and all its dependencies.
Previously, zerver.views.registration.confirmation_key was only
available in development; now we make that more structurally clear by
moving it to the special zerver/views/development directory.
Fixes #11256.
Some URLs are only available in the development environment
(dev_urls.py); the corresponding views (here email_log.py) are moved to
the new directory zerver/views/development.
Fixes #11256.
We had inconsistent behavior when `LDAP_APPEND_DOMAIN` was set,
in that we allowed users to enter their username instead of their email
in the auth form, but later the workflow failed due to a small bug.
Fixes: #10917.