Fixes #11878
Instead of a confusing mix of the django_auth_ldap backend applying
ldap_to_django_username in its internals for one part of the
translation, and then custom logic grabbing it from the email
attribute of the ldap user in ZulipLDAPAuthBackend.get_or_build_user
for the second part of the translation,
we put all the logic in a single function, user_email_from_ldapuser,
which will be used by get_or_build_user of both ZulipLDAPUserPopulator
and ZulipLDAPAuthBackend.
This, building on the previous commits with the email search feature,
fixes the ldap sync bug from issue #11878.
If we can get upstream django-auth-ldap to merge
https://github.com/django-auth-ldap/django-auth-ldap/pull/154, we'll
be able to go back to using the version of ldap_to_django_username
that accepts a _LDAPUser object.
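A rough standalone sketch of the unified translation point described above; the handling of LDAP_APPEND_DOMAIN and LDAP_EMAIL_ATTR here is illustrative, based on the settings this codebase already uses, not the exact final implementation:

```python
from django.conf import settings
from django_auth_ldap.backend import _LDAPUser


def user_email_from_ldapuser(username: str, ldap_user: _LDAPUser) -> str:
    if settings.LDAP_APPEND_DOMAIN:
        # The Zulip email is formed as username@LDAP_APPEND_DOMAIN.
        return "@".join((username, settings.LDAP_APPEND_DOMAIN))
    if settings.LDAP_EMAIL_ATTR is not None:
        # Grab the email from an attribute of the LDAP entry.
        return ldap_user.attrs[settings.LDAP_EMAIL_ATTR][0]
    # Otherwise, the LDAP username already is the email address.
    return username
```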
With this, django_to_ldap_username can take an email and find the ldap
username of the ldap user who has this email - if email search is
configured.
This allows successful authenticate() with ldap email and ldap password,
instead of ldap username. This is especially useful because when
a user wants to fetch their api key, the server attempts to authenticate
with user_profile.email - and this used to fail if the user was an ldap
user (because the ldap username was required to authenticate
successfully). See issue #9277.
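A standalone sketch of the email search idea, using python-ldap directly; the search base, filter, and `uid` attribute are illustrative assumptions rather than the actual configuration, which lives in the LDAP settings:

```python
from typing import Optional

import ldap
from django.conf import settings
from ldap.filter import escape_filter_chars


def ldap_username_for_email(email: str) -> Optional[str]:
    conn = ldap.initialize(settings.AUTH_LDAP_SERVER_URI)
    conn.simple_bind_s(settings.AUTH_LDAP_BIND_DN,
                       settings.AUTH_LDAP_BIND_PASSWORD)
    # Find the entry whose mail attribute matches this email, and return
    # the attribute we treat as the LDAP username.
    results = conn.search_s(
        "ou=users,dc=example,dc=com",  # assumed search base
        ldap.SCOPE_SUBTREE,
        "(mail=%s)" % (escape_filter_chars(email),),
        ["uid"],  # assumed username attribute
    )
    if not results:
        return None
    dn, attrs = results[0]
    return attrs["uid"][0].decode()
```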
This fixes a collection of bugs surrounding LDAP configurations A and
C (i.e. LDAP_APPEND_DOMAIN=None) with EmailAuthBackend also enabled.
The core problem was that our desired security model in that setting
of requiring LDAP authentication for accounts managed by LDAP was not
implementable without a way to check whether a given email address
belongs to an account managed by LDAP.
Now admins can configure an LDAPSearch query that will find if there
are users in LDAP that have the email address and
email_belongs_to_ldap() will take advantage of that - no longer
returning True in response to all requests and thus blocking email
backend authentication.
In the documentation, we describe this as mandatory configuration for
users (and likely will make it so soon in the code) because the
failure modes for this not being configured are confusing.
But making that change is pending work to improve the relevant error
messages.
Fixes #11715.
There are a few outstanding issues that we expect to resolve before
including this in a release, but this is a good checkpoint to merge.
This PR is a collaboration with Tim Abbott.
Fixes #716.
If the social backend doesn't have get_verified_emails, and we
simply grab kwargs["details"].get("email") for the email, we should
still validate that it is correct.
Needed for SAML. This will get covered by tests in upcoming commits that
add SAML support.
Previously, while Django code that relied on EXTERNAL_HOST and other
settings would know the Zulip server is actually on port 9991, the
upcoming Django SAML code in python-social-auth would end up detecting
a port of 9992 (the one the Django server is actually listening on).
We fix this using X-Forwarded-Port.
any_oauth_backend_enabled is all about whether we will have extra
buttons on the login/register pages for logging in with some non-native
backends (like GitHub, Google, etc.). And this isn't specifically about
OAuth backends, but generally about "social" backends - which may not
rely specifically on OAuth. This will have more concrete relevance when
SAML authentication is added - which will be a "social" backend,
requiring an additional button, but not OAuth-based.
SOCIAL_AUTH_BACKENDS / OAUTH_BACKEND_NAMES currently cover the same
backends: all OAuth backends are social, and all social backends are
OAuth-based. So we get rid of OAUTH_BACKEND_NAMES and use only
SOCIAL_AUTH_BACKENDS.
Also cleans up the interface between the management command and the
LDAP backends code to not guess/recompute under what circumstances
what should be logged.
Co-authored-by: mateuszmandera <mateusz.mandera@protonmail.com>
The order of operations for our LDAP synchronization code wasn't
correct: We would run the code to sync avatars (etc.) even for
deactivated users.
Thanks to niels for the report.
Co-authored-by: mateuszmandera <mateusz.mandera@protonmail.com>
Fixes #13130.
django_auth_ldap doesn't give any other way of detecting that LDAPError
happened other than catching the signal it emits - so we have to
register a receiver. In the receiver we just raise our own Exception
which will properly propagate without being silenced by
django_auth_ldap. This will stop execution before the user gets
deactivated.
Fixes #9401.
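A minimal sketch of the receiver approach, assuming django-auth-ldap's documented ldap_error signal; the exception class and handler name are illustrative:

```python
from django.dispatch import receiver
from django_auth_ldap.backend import ldap_error


class LDAPSyncError(Exception):
    pass


@receiver(ldap_error)
def catch_ldap_error(signal, **kwargs):
    # django-auth-ldap sends context="populate_user" when the LDAPError
    # happened while syncing a user; re-raise as our own exception so it
    # propagates instead of being silenced, halting the sync before the
    # user would be wrongly deactivated.
    if kwargs["context"] == "populate_user":
        raise LDAPSyncError(str(kwargs["exception"]))
```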
This adds a FAKE_EMAIL_DOMAIN setting, which should be used if
EXTERNAL_HOST is not a valid domain, and something else is needed to
form bot and dummy user emails (if email visibility is turned off).
It defaults to EXTERNAL_HOST.
get_fake_email_domain() should be used to get this value. It validates
that it's correctly set - that it can be used to form valid emails.
If it's not set correctly, an exception is raised. This is the right
approach, because it's undesirable to have the server seemingly
peacefully operating with that setting misconfigured, as that could
mask some hidden sneaky bugs due to UserProfiles with invalid emails,
which would blow up the moment some code that does validate the emails
is called.
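A sketch of the validation helper described here (the exception class and error text are illustrative):

```python
from django.conf import settings
from django.core.exceptions import ValidationError
from django.core.validators import validate_email


class FakeEmailDomainError(Exception):
    pass


def get_fake_email_domain() -> str:
    try:
        # Make sure the configured domain can actually form valid addresses.
        validate_email("bot1@" + settings.FAKE_EMAIL_DOMAIN)
    except ValidationError:
        raise FakeEmailDomainError(
            settings.FAKE_EMAIL_DOMAIN + " is not a valid email domain. "
            "Consider setting the FAKE_EMAIL_DOMAIN setting."
        )
    return settings.FAKE_EMAIL_DOMAIN
```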
Previously, several of our URL patterns accidentally did not end with
`$`, and thus ended up controlling just the stated URL, but actually a
much broader set of URLs starting with it.
I did an audit and fixed what I believe are all instances of this URL
pattern behavior. In the process, I fixed a few tests that were
unintentionally relying on the behavior.
Fixes #13082.
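The class of bug looks like this (the path and view are placeholders, not one of the actual affected URLs):

```python
from django.conf.urls import url
from django.http import HttpResponse


def example_view(request):
    return HttpResponse("ok")


urlpatterns = [
    # Unanchored: also matches /api/v1/example/anything-at-all.
    url(r'^api/v1/example', example_view),
    # Anchored with `$`: matches only the stated URL.
    url(r'^api/v1/example$', example_view),
]
```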
In bf14a0af4, we refactored the Google authentication system to use
the same code as GitHub auth, but neglected to provide a
backwards-compatible URL available for use by older versions of the
mobile apps.
Fixes #13081.
This restructures the API endpoints that we currently have implemented
more or less for exclusive use by the mobile and desktop apps (things
like checking what authentication methods are supported) to use a
system that can be effectively parsed by our test_openapi
documentation.
This brings us close to being able to eliminate
`buggy_documentation_endpoints` as a persistently nonempty list.
For the emails that are associated with an existing account in an
organisation, the avatars will be displayed on the email selection
page. This includes avatar data in what is passed to the page.
Added `avatar_urls` to the context in `test_templates.py`.
Apparently GitHub changed the email address for these; we need to
update our code accordingly.
One cannot receive emails at username@users.noreply.github.com addresses, so
if someone tries creating an account with such an email address, that
person would not be able to verify the account.
We were incorrectly setting LOCAL_UPLOADS_DIR to the empty string in
this code path, which would result in uploaded files being written to the
root directory of the repository.
Fixes #12909.
Previous cleanups (mostly the removals of Python __future__ imports)
were done in a way that introduced leading newlines. Delete leading
newlines from all files, except static/assets/zulip-emoji/NOTICE,
which is a verbatim copy of the Apache 2.0 license.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
The current code looks like it's trying to redirect /integrations/doc/email
to /integrations when EMAIL_GATEWAY_PATTERN is not set.
I think it doesn't currently do this. The test for that pathway has a bug:
self.get_doc('integrations/doc-html/email', subdomain='zulip') needs a
leading slash, and putting the slash back in results in the test failing.
This redirection is not really desired behavior -- better is to
unconditionally show that the email integration exists, and just point the
user to https://zulip.readthedocs.io/en/latest/production/email-gateway.html
(this is done in a child commit).
This commit alone breaks things, needs to be merged with the follow-up
ones.
welcome-bot is removed from the explicit list, because it already is in
settings.INTERNAL_BOTS.
This feature is intended to cover all of our ways of exporting a
realm, not just the initial "public export" feature, so we should name
things appropriately for that goal.
Additionally, we don't want to include data exports in page_params;
the original implementation was actually buggy and would have.
Django’s default FileSystemFinder disallows STATICFILES_DIRS from
containing STATIC_ROOT (by raising an ImproperlyConfigured exception),
because STATIC_ROOT is supposed to be the result of collecting all the
static files in the project, not one of the potentially many sources
of static files.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This change serves to declutter webhook-errors.log, which is
filled with too many UnexpectedWebhookEventType exceptions.
Keeping UnexpectedWebhookEventType in zerver/lib/webhooks/common.py
led to a cyclic import when we tried to import the exception in
zerver/decorators.py, so this commit also moves this exception to
another appropriate module. Note that our webhooks still import
this exception via zerver/lib/webhooks/common.py.
This replaces the two custom Google authentication backends originally
written in 2012 with using the shared python-social-auth codebase that
we already use for the GitHub authentication backend. These are:
* GoogleMobileOauth2Backend, the ancient code path for mobile
authentication last used by the EOL original Zulip Android app.
* The `finish_google_oauth2` code path in zerver/views/auth.py, which
was the webapp (and modern mobile app) Google authentication code
path.
This change doesn't fix any known bugs; its main benefit is that we
get to remove hundreds of lines of security-sensitive semi-duplicated
code, replacing it with a widely trusted, high quality third-party
library.
The test_docs change is because Django runs test cases with DEBUG =
False, which ordinarily means it doesn’t serve /static during tests.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
The documentation suggests that you can get the dev server to use
production assets by setting PIPELINE_ENABLED = True, but that
resulted in Django being unable to find any static files because
FileSystemFinder was missing from STATICFILES_FINDERS. Using the
production storage configuration in this case reduces the number of
possible configurations and seems to result in things being less
broken.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
We create a path structure in the form:
`/var/<uuid>/test-backend/run_1234567/worker_N/`
A settings attribute, `TEST_WORKER_DIR`, was created as a worker's
directory for a given `test-backend` run's file storage. The
appropriate path is created in `setup_test_environment`, while each
worker's subdirectory is created within `init_worker`.
This allows a test class to write to `settings.TEST_WORKER_DIR`,
populating the appropriate directory of a given worker. It also
provides the long-term approach for cleaning up filesystem access
in the backend unit tests.
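Roughly, the directory bookkeeping described above might look like this (the paths and helper names are illustrative, not the exact implementation):

```python
import os
import tempfile
import uuid

from django.conf import settings


def setup_test_environment_dirs() -> None:
    # One run_<id> directory per test-backend invocation.
    settings.TEST_WORKER_DIR = os.path.join(
        tempfile.gettempdir(), str(uuid.uuid4()), "test-backend",
        "run_%s" % (os.getpid(),))
    os.makedirs(settings.TEST_WORKER_DIR, exist_ok=True)


def init_worker_dir(worker_id: int) -> None:
    # Each parallel worker gets its own subdirectory to write into.
    worker_dir = os.path.join(settings.TEST_WORKER_DIR,
                              "worker_%d" % (worker_id,))
    os.makedirs(worker_dir, exist_ok=True)
    settings.TEST_WORKER_DIR = worker_dir
```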
We don’t need a hacked copy anymore. We run the installed version out
of node_modules in development, and a Webpack-bundled version of that
in production.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
In each url of urls.py, if we want to mark an endpoint as being
intentionally undocumented, then in the kwargs instead of directly
mapping like 'METHOD': 'zerver.views.package.foo', we can provide
a tag called 'intentionally_undocumented' and map like:
'METHOD': ('zerver.views.package.foo', {'intentionally_undocumented'}).
If an endpoint is marked as intentionally undocumented, but we find
some OpenAPI documentation for it then we'll throw an error, in which
case either the 'intentionally_undocumented' tag should be removed or
the faulty documentation should be removed.
This will allow us to mark a REQ variable as intentionally
undocumented. With this, we can remove some of the endpoints marked
as "buggy" even though they're not actually buggy, we just needed to
specify certain parameters as intentionally undocumented (e.g. the
stream_id for the /users/me/subscriptions/muted_topics endpoint.)
Any REQ variable with intentionally_undocumented set to True
will not be added to the arguments_map data structure.
For some of the other "buggy" endpoints, we would want to mark the
entire endpoint as being intentionally undocumented via the urls.py
file.
As of commit cff40c557b (#9300), these
files are no longer served directly to the browser. Disentangle them
from the static asset pipeline so we can refactor it without worrying
about them.
This has the side effect of eliminating the accidental duplication of
translation data via hash-naming in our release tarballs.
This reverts commit b546391f0b (#1148).
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
It appears not to have been useful and makes it marginally harder to
reason about how module resolution works. Paths to static content in
node_modules should be resolved through Webpack instead.
(This node_modules symlink was originally created in the pre-webpack world
where all of our static asset paths were based in static/.)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
It's common for a broken settings.py file to result in Django not
starting, thus never writing to `/var/log/zulip/errors.log`. Such
behavior can be discouraging when the server 500s without a traceback
to accompany it. To fix this, we simply catch any exceptions in
django.setup(), if raised, and log the exception appropriately.
Comments tweaked by tabbott.
Closes: #7032.
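The gist of the change, as a sketch (the log path and logger setup are illustrative):

```python
import logging

import django

try:
    django.setup()
except Exception:
    # A broken settings.py would otherwise 500 with no traceback on disk;
    # record the exception before re-raising.
    logging.basicConfig(filename="/var/log/zulip/errors.log",
                        level=logging.ERROR)
    logging.exception("Exception while running django.setup()")
    raise
```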
Previously, our GitHub authentication backend just used the user's
primary email address associated with GitHub, which was a reasonable
default, but quite annoying for users who have several email addresses
associated with their GitHub account.
We fix this, by adding a new screen where users can select which of
their (verified) GitHub email addresses to use for authentication.
This is implemented using the "partial" feature of the
python-social-auth pipeline system.
Each email is displayed as a button. Clicking on that button chooses
the email. The email value is stored in a hidden input above the
button. The `primary_email` is displayed on top followed by
`verified_non_primary_emails`. Backend name is also passed as
`backend` to the template, which in our case is GitHub.
Fixes #9876.
This fixes an issue that caused LDAP synchronization to fail for
avatars. The problem occurred due to the lack of a 'name' attribute
on the BytesIO object that we pass to the upload backend (which is
only used in the S3 backend for computing Content-Type).
Fixes #12411.
The `AUTH_LDAP_ALWAYS_UPDATE_USER` setting is `True` by default, and this would sync
the attributes defined in the `AUTH_LDAP_USER_ATTR_MAP` to the user profile. But
the default code in `django-auth-ldap` would work correctly only for the `full_name`
field. This commit disables the setting by default, in favour of using the
`sync_ldap_user_data` script as a cron job.
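In settings terms, the change amounts to something like this (the attribute names in the map are illustrative):

```python
# Don't sync LDAP attributes on every login; the sync_ldap_user_data cron
# job handles that instead.
AUTH_LDAP_ALWAYS_UPDATE_USER = False

AUTH_LDAP_USER_ATTR_MAP = {
    "full_name": "cn",
    "avatar": "thumbnailPhoto",
}
```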
Since positional arguments are interpreted differently by different
backends in Django's authentication backend system, it’s safer to
disallow them.
This had been the motivation for previously declaring the parameters
with default values when we were on Python 2, but that was not super
effective because Python has no rule against positional default
arguments and that convention for our authentication backends was
solely enforced by code review.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
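A sketch of what the keyword-only style looks like on a backend (the parameter list is illustrative, not a specific backend's real signature):

```python
from typing import Any, Optional


class ExampleAuthBackend:
    def authenticate(self, request: Any = None, *,
                     username: Optional[str] = None,
                     password: Optional[str] = None) -> Optional[Any]:
        # The bare `*` forces callers to pass credentials by keyword, so a
        # positional argument can't silently be interpreted as the wrong
        # parameter by a different backend.
        ...
```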
This commit adds a new developer tool: The "integrations dev panel"
which will serve as a replacement for the send_webhook_fixture_message
management command as a way to test integrations with much greater ease.
Jitsi Meet is the correct name for the product we integrate with. There is
one other reference to Jitsi, but it's in the db and will require a
migration.
This makes the implementation of `get_realm` consistent with its
declared return type of `Realm` rather than `Optional[Realm]`.
Fixes #12263.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Fixes #12132.
Realm setting to disable avatar changes is already present.
The `AVATAR_CHANGES_DISABLED` setting now follows the same
2-setting model as `NAME_CHANGES_DISABLED`.
The github-services model for how GitHub would send requests to this
legacy integration has not been available since earlier in 2019.
Removing this integration also allows us to finally remove
authenticated_api_view, the legacy authentication model from 2013 that
had been used for this integration (and other features long since
upgraded).
A few functions that were used by the Beanstalk webhook are moved into
that webhook's implementation directly.
An endpoint was created in zerver/views. Basic rate-limiting was
implemented using RealmAuditLog. The idea here is to simply log each
export event as a realm_exported event. The number of events
occurring in the time delta is checked to ensure that the weekly
limit is not exceeded.
The event is published to the 'deferred_work' queue processor to
prevent the export process from being killed after 60s.
Upon completion of the export the realm admin(s) are notified.
The main point here is that you should use a symlink rather than
changing it, since it's more maintenance work to update our nginx
configuration to use an alternative path than to just create a
symbolic link.
Fixes #12157.
Previously, we had some expensive-to-calculate keys in
zulip_default_context, especially around enabled authentication
backends, which in total were a significant contributor to the
performance of various logged-out pages. Now, these keys are only
computed for the login/registration pages where they are needed.
This is a moderate performance optimization for the loading time of
many logged-out pages.
Closes #11929.
These previously lived in the optional settings section, which generally
caused users to not read them.
(Also do a bit of reorganization of the "optional settings" area).
Closes #2420
We add rate limiting (max X emails within Y seconds per realm) to the
email mirror. By creating a RateLimitedRealmMirror class, inheriting from
RateLimitedObject, and a rate_limit_mirror_by_realm function, following the
mechanism used by rate_limit_user, we're able to have this
implementation mostly rely on the already existing, and proven over
time, rate_limiter.py code. The rules are configurable in settings.py via
RATE_LIMITING_MIRROR_REALM_RULES, analogously to RATE_LIMITING_RULES.
Rate limit verification happens in the MirrorWorker in
queue_processors.py. We don't rate limit missed-message emails, since,
due to using one-time addresses, they're not a spam threat.
test_mirror_worker is adapted to the altered MirrorWorker code and a new
test - test_mirror_worker_rate_limiting is added in test_queue_worker.py
to provide coverage for these changes.
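For example, a settings.py entry might look like this (the numbers are illustrative, not the shipped defaults):

```python
# Allow at most 50 mirrored emails per realm within any 60-second window.
RATE_LIMITING_MIRROR_REALM_RULES = [
    (60, 50),
]
```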
This avoids repeatedly calling a Django auth function that takes a few
hundred microseconds to run in auth_enabled_helper, which itself is
currently called 14 times in every request to pages using
common_context.
This renames references to user avatars, bot avatars, or organization
icons to profile pictures. The strings in the UI are updated,
in addition to the help files, comments, and documentation. Actual
variable/function names, changelog entries, routes, and s3 buckets are
left as-is in order to avoid introducing bugs.
Fixes #11824.
When soft deactivation is run in "auto" mode (no emails are
specified and all users inactive for the specified number of days are
deactivated), catch-up is also run in "auto" mode if
AUTO_CATCH_UP_SOFT_DEACTIVATED_USERS is True.
Automatically catching up soft-deactivated users periodically would
ensure a good user experience for returning users, but on some servers
we may want to turn off this option to save on some disk space.
Fixes #8858, at least for the default configuration, by eliminating
the situation where there are a very large number of messages to recover.
Previously, the LDAP authentication model ignored the realm-level
settings for who can join a realm. This was sort of reasonable at the
time, because the original LDAP auth was an SSO solution that didn't
allow multiple realms, and so one could fully configure authentication
settings on the LDAP side. But now that we allow multiple realms with
the LDAP backend, one could easily imagine wanting different
restrictions on them, and so it makes sense to add this enforcement.
Now that we've more or less stabilized our authentication/registration
subsystem how we want it, it seems worth adding proper documentation
for this.
Fixes #7619.
Earlier the behavior was to raise an exception, thereby stopping the
whole sync. Now we log an error message and skip the field. Also
fixes the `query_ldap` command to report missing fields without
error.
Fixes: #11780.
When a bunch of messages with active notifications are all read at
once -- e.g. by the user choosing to mark all messages, or all in a
stream, as read, or just scrolling quickly through a PM conversation
-- there can be a large batch of this information to convey. Doing it
in a single GCM/FCM message is better for server congestion, and for
the device's battery.
The corresponding client-side logic is in zulip/zulip-mobile#3343 .
Existing clients today only understand one message ID at a time; so
accommodate them by sending individual GCM/FCM messages up to an
arbitrary threshold, with the rest only as a batch.
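A sketch of that accommodation (the threshold value and payload field names are illustrative, not the real GCM/FCM payload format):

```python
from typing import Any, Dict, List

LEGACY_INDIVIDUAL_LIMIT = 10  # assumed cutoff for one-ID-per-message clients


def build_remove_payloads(message_ids: List[int]) -> List[Dict[str, Any]]:
    payloads: List[Dict[str, Any]] = []
    # Old clients only understand one message ID per notification.
    for message_id in message_ids[:LEGACY_INDIVIDUAL_LIMIT]:
        payloads.append({"event": "remove", "zulip_message_id": message_id})
    # Anything beyond the threshold is sent once, as a batch.
    remainder = message_ids[LEGACY_INDIVIDUAL_LIMIT:]
    if remainder:
        payloads.append({"event": "remove", "zulip_message_ids": remainder})
    return payloads
```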
Also add an explicit test for this logic. The existing tests
that happen to cause this function to run don't exercise the
last condition, so without a new test `--coverage` complains.
ACCOUNT_ACTIVATION_DAYS doesn't seem to be used anywhere.
INVITATION_LINK_VALIDITY_DAYS seems to do its job currently.
(It was only ever used in very early Zulip commits).
We were using a hardcoded relative path, which doesn't work if you're
not running this from the root of the Zulip checkout.
As part of fixing this, we need to make `LOCAL_UPLOADS_DIR` an
absolute path.
Fixes #11581.
The client-side fix to make these not a problem was in release
16.2.96, of 2018-08-22. We've been sending them from the
development community server chat.zulip.org since 2018-11-29.
We started forcing clients to upgrade with commit fb7bfbe9a,
deployed 2018-12-05 to zulipchat.com.
(The mobile app unconditionally makes a request to a route on
zulipchat.com to check for this kind of forced upgrade, so that
applies to mobile users of any Zulip server.)
So at this point it's long past safe for us to unconditionally
send these. Hardwire the old `SEND_REMOVE_PUSH_NOTIFICATIONS`
setting to True, and simplify it out.
For Google auth, the multiuse invite key should be stored in the
csrf_state sent to Google along with other values like is_signup and
mobile_flow_otp.
For social auth, the multiuse invite key should be passed as params to
the social-auth backend. The passing of the key is handled by
social_auth pipeline and made available to us when the auth is
completed.
This is primarily a feature for onboarding, where an organization
administrator might send a bunch of random test messages as part of
joining, but then want a pristine organization when their users later
join.
But it can theoretically be used for other use cases (e.g. for
moderation or removing threads that are problematic in some way).
Tweaked by tabbott to handle corner cases with
is_history_public_to_subscribers.
Fixes #10912.
This reverts commit ec9f6702d8.
Now that pika 0.13.0 has merged our PR to not import twisted unless it
is needed, we don't need to use this performance hack in order to
avoid wasting time importing twisted and all its dependencies.
Previously, zerver.views.registration.confirmation_key was only
available in development; now we make that more structurally clear by
moving it to the special zerver/views/development directory.
Fixes #11256.
Some URLs are only available in the development environment
(dev_urls.py); the corresponding views (here, email_log.py) are moved to the
new directory zerver/views/development.
Fixes #11256.
We had inconsistent behavior when `LDAP_APPEND_DOMAIN` was set,
in that we allowed the user to enter their username instead of their email
in the auth form, but later the workflow failed due to a small bug.
Fixes: #10917.
This endpoint serves requests which might originate from an image
preview link which had an http url and the message holding the image
link was rendered before we introduced thumbnailing. In that case
we would have used a camo proxy to proxy http content over https and
avoid mix content warnings.
In near future, we plan to drop use of camo and just rely on thumbor
to serve such images. This endpoint helps maintain backward
compatibility for links which were already rendered.
This setting splits away part of the responsibility from THUMBOR_URL.
From now on, this setting will be responsible for controlling whether
we thumbnail images or not, by asking bugdown to render image links
to hit our /thumbnail endpoint. This is irrespective of what
THUMBOR_URL is set to, though ideally THUMBOR_URL should be set
to point to a running thumbor instance.
This is somewhat hacky, in that in order to do what we're doing, we
need to parse the HTML of the rendered page to extract the first
paragraph to include in the open graph description field. But
BeautifulSoup does a good job of it.
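Roughly, the extraction amounts to something like this (the function name, parser choice, and fallback are illustrative):

```python
from bs4 import BeautifulSoup


def open_graph_description(rendered_html: str) -> str:
    soup = BeautifulSoup(rendered_html, "html.parser")
    first_paragraph = soup.find("p")
    if first_paragraph is None:
        return ""
    return first_paragraph.get_text().strip()
```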
This carries a nontrivial performance penalty for loading these pages,
but overall /help/ is a low-traffic site compared to the main app, so
it doesn't matter much.
(As a sidenote, it wouldn't be a bad idea to cache this stuff).
There's lots of things we can improve in this, largely through editing
the articles, but we can deal with that over time.
Thanks to Rishi for writing all the tests.
This adds a new realm_logo field, which is a horizontal-format logo to
be displayed in the top-left corner of the webapp, and any other
places where we might want a wide-format branding of the organization.
Tweaked significantly by tabbott to rebase, fix styling, etc.
Fixing the styling of this feature's loading indicator caused me to
notice the loading indicator for the realm_icon feature was also ugly,
so I fixed that too.
Fixes #7995.
This should make it possible for blueslip error reports to be sent on
our logged-out portico pages, which should in turn make it possible to
debug any such issues as they occur.
This should make life a lot more convenient for organizations that use
the LDAP integration and have their avatars in LDAP already.
This hasn't been end-to-end tested against LDAP yet, so there may be
some minor revisions, but fundamentally, it works, has automated
tests, and should be easy to maintain.
Fixes #286.
Apparently, Django's get_current_site function (used, e.g., in
django-two-factor to look up the domain to use in QR codes) first
tries to use the Sites framework, and if unavailable, does the right
thing (namely, using request.get_host()).
We don't use the Sites framework for anything in Zulip, so the correct
fix is to just remove it.
Fixes #11014.
A key part of this is the new helper, get_user_by_delivery_email. Its
verbose name is important for clarity; it should help avoid blind
copy-pasting of get_user (which we'll also want to rename).
Unfortunately, it requires detailed understanding of the context to
figure out which one to use; each is used in about half of call sites.
Another important note is that this PR doesn't migrate get_user calls
in the tests except where not doing so would cause the tests to fail.
This probably deserves a follow-up refactor to avoid bugs here.
This makes it possible to still run the deliver_scheduled_messages
queue worker, even though we're not creating reminder-bot by default
in new organizations.
This reverts commit 2fa77d9d54.
Further investigation has determined that this did not fix the
password-reset problem described in the previous commit message;
meanwhile, it causes other problems. We still need to track down the
root cause of the original password-reset bug.
This adds a web flow and management command for reactivating a Zulip
organization, with confirmation from one of the organization
administrators.
Further work is needed to make the emails nicer (ideally, we'd send
one email with all the admins on the `To` line, but the `send_email`
library doesn't support that).
Fixes #10783.
With significant tweaks to the email text by tabbott.
Realm object is not json-serializable; store the realm id instead
and retrieve the realm in social_auth_finish using
`Realm.objects.get(id=return_data["realm_id"])`.
The email_list returned has the primary email as the first element.
Testing: The order of the emails in the test was changed to put a
verified email before the primary one. The tests would fail without
this commit's change after the changes in the order of test emails.
See https://github.com/zulip/zulip/issues/10856 for details on the bug
here; but basically, users who reset their password were unable to
login until the next time we flushed memcached. The issue disappeared
after stopping using the Django cached_db session engine, so it's
pretty clear that some sort of bug with that session engine
interacting with our password reset logic is the root cause.
Further debugging is required to understand this fully, but for now,
it seems wise to just disable the backend.
The cost of doing so is a small performance decrease, which is likely
acceptable until we can resolve this (it's certainly a more minor
problem than the "Can't login" bug that disabling this removes).
This is a preparatory commit which will help us with removing camo.
In the upcoming commits we introduce a new endpoint which is based
on the setting CAMO_URI. Since camo could have been hosted on
a different server from the main Zulip server, this change
will help us realise in tests how that scenario might be dealt with.
These lazy imports save a significant amount of time on Zulip's core
import process, because mock imports pbr, which in turn imports
pkg_resources, which is in turn incredibly slow to import.
Fixes part of #9953.
This is a performance optimization; see the comment. This fixes part
of #9953.
Eventually, we should do the same thing for importing Tornado as well,
but it's less important because Tornado is a much smaller library.
Before, presence information for an entire realm could only be queried via
the `POST /api/v1/users/me/presence` endpoint. However, this endpoint also
updates the presence information for the user making the request. Therefore,
bot users are not allowed to access this endpoint because they don't have
any presence data.
This commit adds a new endpoint `GET /api/v1/realm/presence` that just
returns the presence information for the realm of the caller.
Fixes #10651.
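Hypothetical usage with a bot's credentials; the hostname, bot email, and response field name here are assumptions for illustration:

```python
import requests

response = requests.get(
    "https://example.zulipchat.com/api/v1/realm/presence",
    auth=("presence-bot@example.com", "BOT_API_KEY"),
)
presence_by_user = response.json()["presences"]
```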
Some admins setting up Zulip's LDAP auth against Active Directory see
a rather baffling error message: "In order to perform this operation a
successful bind must be completed on the connection". This happens
despite AUTH_LDAP_BIND_DN and auth_ldap_bind_password being set
perfectly correctly, and on a query that the `ldapsearch` command-line
tool performs quite happily.
Empirically, adding a setting like this to /etc/zulip/settings.py
resolves the issue:
AUTH_LDAP_CONNECTION_OPTIONS = {
    ldap.OPT_REFERRALS: 0,
}
Some useful, concise background on the LDAP "referral" concept is here:
https://docs.oracle.com/javase/jndi/tutorial/ldap/referral/overview.html
and a pertinent bit of docs for the underlying Python `ldap` client:
https://www.python-ldap.org/en/latest/faq.html
and some very helpful documentation for Active Directory:
https://docs.microsoft.com/en-us/windows/desktop/ad/referrals
Based on the docs above, the story appears to be something like this:
* This server has the information for part of the scope of our query
-- in particular it happens to have the information we actually want.
* But there are other areas ("subordinate domains") that our query is
in principle asking about, and this server doesn't know if there are
matches there, so it gives us a referral.
* And by default, python-ldap lets `libldap` run ahead and attempt to
bind to those referrals and do those queries too -- which raises an
error because, unlike Microsoft's "LDAP API", it doesn't reuse the
credentials.
So if we simply skip trying to follow the referrals, there's no
error... and we already have, from the original response, the answer
we actually need. That's what the `ldap.OPT_REFERRALS` option does.
There may be more complex situations where the referral really is
relevant, because the desired user info is split across servers. Even
then, unless an anonymous query will be acceptable, there's no point
in letting `libldap` follow the referral and setting this option is
still the right thing. When someone eventually comes to this bridge,
some code will be required to cross it, by following the referrals.
That code might look a bit like this (unfinished) example:
https://bugs.launchpad.net/nav/+bug/1209178
Manually tested by tabbott.
Fixes #343, which was effectively a report of the need for this
OPT_REFERRALS setting.
Fixes #349, since with this change, we no longer require tricky manual
configuration to get Active Directory up and running.
The term "username" confusingly refers both to the Django concept of
"username" (meaning "the name the user types into the login form") and
a concept the admin presumably already has in their existing
environment; which may or may not be the same thing, and in fact this
is where we document the admin's choice of whether and how they should
correspond. The Django concept in particular isn't obvious, and is
counterintuitive when it means something like an email address.
Explicitly explain the Django "username" concept, under the name of
"Zulip username" to take responsibility for our choice of how it's
exposed in the settings interface. Then use an explicit qualifier,
like "LDAP username", whenever referring to some other notion of
username. And make a pass over this whole side of the instructions,
in particular for consistent handling of these concepts.
Expand on a few things that tend to confuse people (especially the
`%(user)s` thing); move the `LDAPSearchUnion` example out to docs;
adjust the instructions to fit a bit better in their new docs/ home.
This makes it easier to iterate on these, and to expand supplemental
information (like troubleshooting, or unusual configurations) without
further straining the already-dauntingly-long settings.py.
It also makes it easier to consult the instructions while editing the
secrets file, or testing things, etc. -- most admins will find it more
natural to keep a browser open somewhere than a second terminal.
Fixes part of #10297.
Use FAKE_LDAP_NUM_USERS, which specifies the total number of LDAP users,
instead of FAKE_LDAP_EXTRA_USERS, which specified the number of
extra users.
Also use the form's name for selecting it in the casper tests,
as a form with action=new is present in both /new
and /accounts/new/send_confirm/, which breaks the
test in CircleCI because
waitWhileVisible('form[action^="/new/"]') never stops
waiting.
This uses the recently introduced active_mobile_push_notification
flag; messages that have had a mobile push notification sent will have
a removal push notification sent as soon as they are marked as read.
Note that this feature is behind a setting,
SEND_REMOVE_PUSH_NOTIFICATIONS, since the notification format is not
supported by the mobile apps yet, and we want to give a grace period
before we start sending notifications that appear as (null) to
clients. But the tracking logic to maintain the set of message IDs
with an active push notification runs unconditionally.
This is designed with at-least-once semantics; so mobile clients need
to handle the possibility that they receive duplicate requests to
remove a push notification.
We reuse the existing missedmessage_mobile_notifications queue
processor for the work, to avoid materially impacting the latency of
marking messages as read.
Fixes #7459, though we'll need to open a follow-up issue for
using these data on iOS.
This uses the MockLDAP class of fakeldap to fake an LDAP server, based
on the approach already used in the tests in `test_auth_backends.py`.
Adds the following settings:
- FAKE_LDAP_MODE: Lets the user choose among three preset configurations.
The default mode if someone erases the entry in settings is 'a'. The
fake LDAP server is disabled if this option is set to None.
- FAKE_LDAP_EXTRA_USERS: Number of extra users in the LDAP directory beyond
the default 8.
Fixes #9934.
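An illustrative dev-settings snippet for these options (the values shown are examples, not the defaults):

```python
# Preset directory layout: one of "a", "b", or "c"; None disables the fake
# LDAP server entirely.
FAKE_LDAP_MODE = "a"

# Extra generated users beyond the default 8.
FAKE_LDAP_EXTRA_USERS = 10
```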
Generates ldap_dir based on the mode and the number of extra users.
It supports three modes, 'a', 'b' and 'c', descriptions of which
can be found in prod_settings_templates.py.
That value is necessary to configure anonymous binds in
django-auth-ldap, which are useful when we're using LDAP just to
populate the user database.
Fixes #10257.
This lets us have a slightly cleaner interface for cases where we need
a custom default value than an `or None` in the definition (which
might cause issues with other falsey values).
The authenticate function now follows the Django 2.0 signature:
https://github.com/django-auth-ldap/django-auth-ldap/commit/27a8052b26f1d3a43cdbcdfc8e7dc0322580adae
Also, AUTH_LDAP_CACHE_GROUPS is deprecated in favor of
AUTH_LDAP_CACHE_TIMEOUT.
We already had a setting for whether these logs were enabled; now it
also controls which stream the messages go to.
As part of this migration, we disable the feature in dev/production by
default; it's not useful for most environments.
Fixes the proximal data-export issue reported in #10078 (namely, a
stream with nobody ever subscribed to having been created).
As part of our effort to change the data model away from each user
having a single API key, we're eliminating the couple of requests that
were made from Django to Tornado (as part of a /register or home
request) where we used the user's API key grabbed from the database
for authentication.
Instead, we use the (already existing) internal_notify_view
authentication mechanism, which uses the SHARED_SECRET setting for
security, for these requests, and just fetch the user object using
get_user_profile_by_id directly.
Tweaked by Yago to include the new /api/v1/events/internal endpoint in
the exempt_patterns list in test_helpers, since it's an endpoint we call
through Tornado. Also added a couple missing return type annotations.
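A sketch of what such an internal request looks like, authenticated by the shared secret rather than a user's API key; the port, payload shape, and helper name here are illustrative assumptions:

```python
import json

import requests
from django.conf import settings


def notify_tornado(event_data: dict) -> None:
    # Authenticate with the server-level shared secret instead of a user's
    # API key; Tornado's internal_notify_view checks this value.
    requests.post(
        "http://127.0.0.1:9993/api/v1/events/internal",
        data={"secret": settings.SHARED_SECRET,
              "data": json.dumps(event_data)},
    )
```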
Input pills require a contenteditable div with a class named input
to fall inside the pill container. On converting the input tag into
a div, the size of the input decreases, which is compensated for by a
line-height of 40px. The comment above letter-spacing:normal was removed,
as Chrome and Firefox do not change the letter-spacing to normal
for a div via the default browser stylesheet.
NOTE: Currently writing something into the div will call the action
corresponding to that key in the keyboard shortcuts. The input will
work fine once the pills have been initiated.
For the casper tests, for now, we just use the legacy search code.
When we change that, $.val() cannot be used on a contenteditable div, so
$.html() will need to be used instead in select_item_via_typeahead.
This setting isn't intended to exist long term, but instead to make it
possible to merge our search pills code before we're ready to cut over
production environments to use it.
Various pieces of our thumbor-based thumbnailing system were already
merged; this adds the remaining pieces required for it to work:
* a THUMBOR_URL Django setting that controls whether thumbor is
enabled on the Zulip server (and if so, where thumbor is hosted).
* Replaces the overly complicated prototype cryptography logic
* Adds a /thumbnail endpoint (supported both on web and mobile) for
accessing thumbnails in messages, designed to support hosting both
external URLs as well as uploaded files (and applying Zulip's
security model for access to thumbnails of uploaded files).
* Modifies bugdown to, when THUMBOR_URL is set, render images with the
`src` attribute pointing to /thumbnail (to provide a small thumbnail
for the image), along with adding a "data-original" attribute that
can be used to access the "original/full" size version of the image.
There are a few things that don't work quite yet:
* The S3 backend support is incomplete and doesn't work yet.
* The error pages for unauthorized access are ugly.
* We might want to rename data-original and /thumbnail?size=original
to use some other name, like "full", that better reflects the fact
that we're potentially not serving the original image URL.
In this commit we add a new endpoint so as to have a way of fetching
topic history for a given stream id without having to be logged in.
This can only happen if the said stream is web-public; otherwise we
just return an empty topics list. This endpoint is quite analogous
to get_topics_backend, which is used by our main web app.
In this commit we also do a bit of duplication regarding the query
responsible for fetching all the topics from the DB. Basically this
query is exactly the same as what we have in the
get_topic_history_for_stream function in actions.py. Duplicating
it now is the right thing to do, because this query is
really going to change when we add another criterion for filtering
messages, which is:
Only topics for messages which were sent during the period the
corresponding stream was web-public should be returned.
Once we do this, the query will change, and thus it won't
really be code duplication!
This commit adds the necessary puppet configuration and
installer/upgrade code for installing and managing the thumbor service
in production. This configuration is gated by the 'thumbor.pp'
manifest being enabled (which is not yet the default), and so this
commit should have no effect in a default Zulip production environment
(or in the long term, in any Zulip production server that isn't using
thumbor).
Credit for this effort is shared by @TigorC (who initiated the work on
this project), @joshland (who did a great deal of work on this and got
it working during PyCon 2017) and @adnrs96, who completed the work.
This adds a new settings, SOCIAL_AUTH_SUBDOMAIN, which specifies which
domain should be used for GitHub auth and other python-social-auth
backends.
If one is running a single-realm Zulip server like chat.zulip.org, one
doesn't need to use this setting, but for multi-realm servers using
social auth, this fixes an annoying bug where the session cookie that
python-social-auth sets early in the auth process on the root domain
ends up masking the session cookie that would have been used to
determine a user is logged in. The end result was that logging in
with GitHub on one domain on a multi-realm server like zulipchat.com
would appear to log you out from all the others!
We fix this by moving python-social-auth to a separate subdomain.
Fixes: #9847.
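In settings terms (the subdomain value is illustrative):

```python
# Run python-social-auth flows on a dedicated subdomain, so its session
# cookie can't mask the per-realm session cookies on a multi-realm server.
SOCIAL_AUTH_SUBDOMAIN = "auth"
```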
In this commit we are fixing a kinda serious unnoticed bug with
the way run_db_migrations worked for the test db.
Basically run_db_migrations runs new migrations on db (dev or test).
When we talk about the dev platform this process is straight forward.
We have a single DB zulip which was once created and now has some data.
Introduction of new migration causes a schema change or does something
else but bottom line being we just migrate the zulip DB and stuff works
fine.
Now coming to the zulip test db (zulip_test), the situation is a bit complex
in comparison to the dev db. Basically this is because we make use of
what we call zulip_test_template to make test fixture restoration
after tests run fast. Now before we introduced the performance
optimisation of just doing migrations when possible, introduction of
a migration would ideally result in provisioning doing a full rebuild of
the test database. When that used to happen sequence of events used to
be something like this:
* Create a zulip_test db from the zulip_test_base template (which holds
an absolutely basic schema).
* Migrate and populate the zulip_test db.
* Create/Re-create zulip_test_template from the latest zulip_test.
Now after we introduced just doing migrations instead of a full db rebuild
when possible, what used to happen was that the zulip_test db got
successfully migrated, but when test suites would run they would try to
create zulip_test from zulip_test_template (so that individual tests
don't affect each other at the db level).
This is where the problem resides; zulip_test_template wasn't migrated
and we just scrapped zulip_test and re-created it using
zulip_test_template as a template and hence zulip_test will not hold the
latest schema.
This is what we fix in this commit.
This commit moves all files previously under the 'app' bundle in
the Django pipeline to being compiled by webpack under the 'app'
entry point. In the process, it moves assets under the app entry
to a file called app.js that consumes all relevant css and js files.
This commit also edits the webpack config to be able to expose certain
variables for third party libraries that are currently required by
some modules. This is bad coding form and should be refactored to
requiring whatever dependencies a module may have; we're just
deferring that to the future to simplify the series of transitions we
need to do here. The variable exposure is done using expose-loader in
webpack.
The app/index.html template is edited to override the newly introduced
'commonjs' block in the base template. This is done as a temporary
measure so as not to disrupt other pages on the app during the transition.
It also fixes the value of the 'this' context that was being inferred
as window by third party libraries. This is done using imports-loader
in the webpack config. This is also messy and probably isn't how we
want things to work long term.
We need to do a small monkey-patching of python-social-auth to ensure
that it doesn't 500 the request when a user does something funny in
their browser (e.g. using the back button in the auth flow) that is
fundamentally a user error, not a server error.
This was present in the pre-rewrite version of our Social auth
codebase, without clear documentation; I've fixed the explanation
part here.
It's perhaps worth investigating with the core social auth team
whether there's a better way to do this.
It's possible to make GitHub social authentication support letting the
user pick which of their verified email addresses to use, using the
python-social-auth pipeline feature. We need to add an additional
screen to let the user pick, so we're not adding support for that now,
but this at least migrates this to use the data set of all emails that
have been verified as associated with the user's GitHub account (and
we just assume the user wants their primary email).
This also fixes the inability of very old GitHub accounts (where the
`email` field in the details might be a string the user wanted on
their GitHub profile page) to use GitHub auth to log in.
Fixes #9127.
Adds search_pill.js to the static asset pipeline. The items
for search pill contain 2 keys, display_value and search_string.
Adding all the operator information, i.e. the operator, operand, and
negated fields along with the search_string and description was tried out.
It was dropped because it didn't provide any advantage as one had to
always calculate the search_string and the description from the operator.
This fixes an important issue where the realm_users cache could grow
beyond 1MB when a Zulip server had more than about 10K users. The
result was that Zulip would start 500ing with that size of userbase.
There are probably better long-term fixes, but because the realm_users
data set caches well, this change should be sufficient to let us
handle 50-100K users or more on that metric (though at some point,
we'll start having other problems interacting with the realm_users
data set).