The current code looks like it's trying to redirect /integrations/doc/email
to /integrations when EMAIL_GATEWAY_PATTERN is not set.
I think it doesn't currently do this. The test for that pathway has a bug:
self.get_doc('integrations/doc-html/email', subdomain='zulip') needs a
leading slash, and putting the slash back in results in the test failing.
This redirection is not really desired behavior -- better is to
unconditionally show that the email integration exists, and just point the
user to https://zulip.readthedocs.io/en/latest/production/email-gateway.html
(this is done in a child commit).
This commit alone breaks things; it needs to be merged with the follow-up
ones.
welcome-bot is removed from the explicit list, because it already is in
settings.INTERNAL_BOTS.
This feature is intended to cover all of our ways of exporting a
realm, not just the initial "public export" feature, so we should name
things appropriately for that goal.
Additionally, we don't want to include data exports in page_params;
the original implementation was actually buggy and would have.
Django’s default FileSystemFinder disallows STATICFILES_DIRS from
containing STATIC_ROOT (by raising an ImproperlyConfigured exception),
because STATIC_ROOT is supposed to be the result of collecting all the
static files in the project, not one of the potentially many sources
of static files.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This change serves to declutter webhook-errors.log, which is
filled with too many UnexpectedWebhookEventType exceptions.
Keeping UnexpectedWebhookEventType in zerver/lib/webhooks/common.py
led to a cyclic import when we tried to import the exception in
zerver/decorators.py, so this commit also moves this exception to
another appropriate module. Note that our webhooks still import
this exception via zerver/lib/webhooks/common.py.
This replaces the two custom Google authentication backends originally
written in 2012 with the shared python-social-auth codebase that
we already use for the GitHub authentication backend. These are:
* GoogleMobileOauth2Backend, the ancient code path for mobile
authentication last used by the EOL original Zulip Android app.
* The `finish_google_oauth2` code path in zerver/views/auth.py, which
was the webapp (and modern mobile app) Google authentication code
path.
This change doesn't fix any known bugs; its main benefit is that we
get to remove hundreds of lines of security-sensitive semi-duplicated
code, replacing it with a widely trusted, high quality third-party
library.
The test_docs change is because Django runs test cases with DEBUG =
False, which ordinarily means it doesn’t serve /static during tests.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
The documentation suggests that you can get the dev server to use
production assets by setting PIPELINE_ENABLED = True, but that
resulted in Django being unable to find any static files because
FileSystemFinder was missing from STATICFILES_FINDERS. Using the
production storage configuration in this case reduces the number of
possible configurations and seems to result in things being less
broken.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
We create a path structure of the form:
`/var/<uuid>/test-backend/run_1234567/worker_N/`
A settings attribute, `TEST_WORKER_DIR`, was created as a worker's
directory for a given `test-backend` run's file storage. The
appropriate path is created in `setup_test_environment`, while each
worker's subdirectory is created within `init_worker`.
This allows a test class to write to `settings.TEST_WORKER_DIR`,
populating the appropriate directory of a given worker. This also
provides the long-term approach to clean up filesystem access
in the backend unit tests.
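A minimal usage sketch from a test's point of view (the subdirectory name
here is illustrative, not taken from the actual tests):

    import os

    from django.conf import settings

    # Write scratch files into this worker's isolated directory.
    export_dir = os.path.join(settings.TEST_WORKER_DIR, "test-export")
    os.makedirs(export_dir, exist_ok=True)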
We don’t need a hacked copy anymore. We run the installed version out
of node_modules in development, and a Webpack-bundled version of that
in production.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
In each url of urls.py, if we want to mark an endpoint as being
intentionally undocumented, then in the kwargs instead of directly
mapping like 'METHOD': 'zerver.views.package.foo', we can provide
a tag called 'intentionally_undocumented' and map like:
'METHOD': ('zerver.views.package.foo', {'intentionally_undocumented'}).
If an endpoint is marked as intentionally undocumented, but we find
some OpenAPI documentation for it then we'll throw an error, in which
case either the 'intentionally_undocumented' tag should be removed or
the faulty documentation should be removed.
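For illustration, a urls.py entry using this convention might look like the
following; the path and view name are placeholders taken from the
description above, and the import paths are assumptions:

    from django.conf.urls import url

    from zerver.lib.rest import rest_dispatch

    urlpatterns = [
        url(r'^users/me/foo$', rest_dispatch,
            {'GET': 'zerver.views.package.foo',
             # Marked as intentionally undocumented for this method only.
             'DELETE': ('zerver.views.package.foo',
                        {'intentionally_undocumented'})}),
    ]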
This will allow us to mark a REQ variable as intentionally
undocumented. With this, we can remove some of the endpoints marked
as "buggy" even though they're not actually buggy, we just needed to
specify certain parameters as intentionally undocumented (e.g. the
stream_id for the /users/me/subscriptions/muted_topics endpoint.)
Any REQ variable with intentionally_undocumented set to True
will not be added to the arguments_map data structure.
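A sketch of what this might look like in a view signature; the view name and
validator are illustrative, while the keyword is the one added here:

    from zerver.lib.request import REQ, has_request_variables
    from zerver.lib.validator import check_int

    @has_request_variables
    def update_muted_topic(request, user_profile,
                           # Documented implicitly by the URL, so skip it in
                           # the generated arguments_map.
                           stream_id=REQ(validator=check_int,
                                         intentionally_undocumented=True)):
        ...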
For some of the other "buggy" endpoints, we would want to mark the
entire endpoint as being undocumented intentionally via the urls.py
file.
As of commit cff40c557b (#9300), these
files are no longer served directly to the browser. Disentangle them
from the static asset pipeline so we can refactor it without worrying
about them.
This has the side effect of eliminating the accidental duplication of
translation data via hash-naming in our release tarballs.
This reverts commit b546391f0b (#1148).
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
It appears not to have been useful and makes it marginally harder to
reason about how module resolution works. Paths to static content in
node_modules should be resolved through Webpack instead.
(This node_modules symlink was originally created in the pre-webpack world
where all of our static asset paths were based in static/.)
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
It's common for a broken settings.py file to result in Django not
starting, thus never writing to `/var/log/zulip/errors.log`. Such
behavior can be discouraging when the server 500s without a traceback
to accompany it. To fix this, we simply catch any exceptions in
django.setup(), if raised, and log the exception appropriately.
Comments tweaked by tabbott.
Closes: #7032.
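A minimal sketch of the approach (the log path matches the one mentioned
above; the exact logging setup in the real code may differ):

    import logging

    import django

    try:
        django.setup()
    except Exception:
        # Make sure a broken settings.py still leaves a traceback behind.
        logging.basicConfig(filename="/var/log/zulip/errors.log",
                            level=logging.ERROR)
        logging.exception("Exception while running django.setup()")
        raise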
Previously, our GitHub authentication backend just used the user's
primary email address associated with GitHub, which was a reasonable
default, but quite annoying for users who have several email addresses
associated with their GitHub account.
We fix this by adding a new screen where users can select which of
their (verified) GitHub email addresses to use for authentication.
This is implemented using the "partial" feature of the
python-social-auth pipeline system.
Each email is displayed as a button. Clicking on that button chooses
the email. The email value is stored in a hidden input above the
button. The `primary_email` is displayed on top followed by
`verified_non_primary_emails`. Backend name is also passed as
`backend` to the template, which in our case is GitHub.
Fixes #9876.
This fixes an issue that caused LDAP synchronization to fail for
avatars. The problem occurred due to the lack of a 'name' attribute
on the BytesIO object that we pass to the upload backend (which is
only used in the S3 backend for computing Content-Type).
Fixes #12411.
The `AUTH_LDAP_ALWAYS_UPDATE_USER` setting is `True` by default, and this would sync the
attributes defined in the `AUTH_LDAP_USER_ATTR_MAP` to the user profile. But,
the default code in `django-auth-ldap` would work correctly only for `full_name`
field. This commit disables the setting by default, in favour of using the
`sync_ldap_user_data` script as a cron job.
Since positional arguments are interpreted differently by different
backends in Django's authentication backend system, it’s safer to
disallow them.
This had been the motivation for previously declaring the parameters
with default values when we were on Python 2, but that was not super
effective because Python has no rule against positional default
arguments and that convention for our authentication backends was
solely enforced by code review.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This commit adds a new developer tool: The "integrations dev panel"
which will serve as a replacement for the send_webhook_fixture_message
management command as a way to test integrations with much greater ease.
Jitsi Meet is the correct name for the product we integrate with. There is
one other reference to Jitsi, but it's in the db and will require a
migration.
This makes the implementation of `get_realm` consistent with its
declared return type of `Realm` rather than `Optional[Realm]`.
Fixes #12263.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Fixes #12132.
Realm setting to disable avatar changes is already present.
The `AVATAR_CHANGES_DISABLED` setting now follows the same
2-setting model as `NAME_CHANGES_DISABLED`.
The github-services model for how GitHub would send requests to this
legacy integration is no longer available since earlier in 2019.
Removing this integration also allows us to finally remove
authenticated_api_view, the legacy authentication model from 2013 that
had been used for this integration (and other features long since
upgraded).
A few functions that were used by the Beanstalk webhook are moved into
that webhook's implementation directly.
An endpoint was created in zerver/views. Basic rate-limiting was
implemented using RealmAuditLog. The idea here is to simply log each
export event as a realm_exported event. The number of events
occurring in the time delta is checked to ensure that the weekly
limit is not exceeded.
The event is published to the 'deferred_work' queue processor to
prevent the export process from being killed after 60s.
Upon completion of the export the realm admin(s) are notified.
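A rough sketch of the weekly-limit check described above (the helper name,
limit value, and event_type string are assumptions for illustration):

    from datetime import timedelta

    from django.utils import timezone

    from zerver.models import Realm, RealmAuditLog

    def export_limit_reached(realm: Realm, weekly_limit: int = 5) -> bool:
        # Count realm_exported events logged for this realm in the last week.
        week_ago = timezone.now() - timedelta(days=7)
        recent_exports = RealmAuditLog.objects.filter(
            realm=realm,
            event_type='realm_exported',
            event_time__gte=week_ago,
        ).count()
        return recent_exports >= weekly_limit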
The main point here is that you should use a symlink rather than
changing it, since it's more maintenance work to update our nginx
configuration to use an alternative path than to just create a
symbolic link.
Fixes #12157.
Previously, we had some expensive-to-calculate keys in
zulip_default_context, especially around enabled authentication
backends, which in total were a significant contributor to the
performance of various logged-out pages. Now, these keys are only
computed for the login/registration pages where they are needed.
This is a moderate performance optimization for the loading time of
many logged-out pages.
Closes #11929.
These previously lived in Optional settings, which generally caused
users to not read them.
(Also do a bit of reorganization of the "optional settings" area).
Closes #2420.
We add rate limiting (max X emails within Y seconds per realm) to the
email mirror. By creating RateLimitedRealmMirror class, inheriting from
RateLimitedObject, and rate_limit_mirror_by_realm function, following a
mechanism used by rate_limit_user, we're able to have this
implementation mostly rely on the already existing, and proven over
time, rate_limiter.py code. The rules are configurable in settings.py in
RATE_LIMITING_MIRROR_REALM_RULES, analogously to RATE_LIMITING_RULES.
Rate limit verification happens in the MirrorWorker in
queue_processors.py. We don't rate limit missed message emails, as due
to using one time addresses, they're not a spam threat.
test_mirror_worker is adapted to the altered MirrorWorker code and a new
test - test_mirror_worker_rate_limiting is added in test_queue_worker.py
to provide coverage for these changes.
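For example, the new setting uses the same (seconds, max_count) tuple format
as RATE_LIMITING_RULES; the values below are illustrative only:

    RATE_LIMITING_MIRROR_REALM_RULES = [
        (60, 50),  # at most 50 mirrored emails per realm per minute
    ]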
This avoids repeatedly calling a Django auth function that takes a few
hundred microseconds to run in auth_enabled_helper, which itself is
currently called 14 times in every request to pages using
common_context.
This renames references to user avatars, bot avatars, or organization
icons to profile pictures. The strings in the UI are updated,
in addition to the help files, comments, and documentation. Actual
variable/function names, changelog entries, routes, and s3 buckets are
left as-is in order to avoid introducing bugs.
Fixes #11824.
When soft deactivation is run in "auto" mode (no emails are
specified and all users inactive for specified number of days are
deactivated), catch-up is also run in the "auto" mode if
AUTO_CATCH_UP_SOFT_DEACTIVATED_USERS is True.
Automatically catching up soft-deactivated users periodically would
ensure a good user experience for returning users, but on some servers
we may want to turn off this option to save on some disk space.
Fixes #8858, at least for the default configuration, by eliminating
the situation where there are a very large number of messages to recover.
Previously, the LDAP authentication model ignored the realm-level
settings for who can join a realm. This was sort of reasonable at the
time, because the original LDAP auth was an SSO solution that didn't
allow multiple realms, and so one could fully configure authentication
settings on the LDAP side. But now that we allow multiple realms with
the LDAP backend, one could easily imagine wanting different
restrictions on them, and so it makes sense to add this enforcement.
Now that we've more or less stabilized our authentication/registration
subsystem how we want it, it seems worth adding proper documentation
for this.
Fixes #7619.
Earlier the behavior was to raise an exception thereby stopping the
whole sync. Now we log an error message and skip the field. Also
fixes the `query_ldap` command to report missing fields without
error.
Fixes: #11780.
When a bunch of messages with active notifications are all read at
once -- e.g. by the user choosing to mark all messages, or all in a
stream, as read, or just scrolling quickly through a PM conversation
-- there can be a large batch of this information to convey. Doing it
in a single GCM/FCM message is better for server congestion, and for
the device's battery.
The corresponding client-side logic is in zulip/zulip-mobile#3343 .
Existing clients today only understand one message ID at a time; so
accommodate them by sending individual GCM/FCM messages up to an
arbitrary threshold, with the rest only as a batch.
Also add an explicit test for this logic. The existing tests
that happen to cause this function to run don't exercise the
last condition, so without a new test `--coverage` complains.
ACCOUNT_ACTIVATION_DAYS doesn't seem to be used anywhere.
INVITATION_LINK_VALIDITY_DAYS seems to do its job currently.
(It was only ever used in very early Zulip commits).
We were using a hardcoded relative path, which doesn't work if you're
not running this from the root of the Zulip checkout.
As part of fixing this, we need to make `LOCAL_UPLOADS_DIR` an
absolute path.
Fixes #11581.
The client-side fix to make these not a problem was in release
16.2.96, of 2018-08-22. We've been sending them from the
development community server chat.zulip.org since 2018-11-29.
We started forcing clients to upgrade with commit fb7bfbe9a,
deployed 2018-12-05 to zulipchat.com.
(The mobile app unconditionally makes a request to a route on
zulipchat.com to check for this kind of forced upgrade, so that
applies to mobile users of any Zulip server.)
So at this point it's long past safe for us to unconditionally
send these. Hardwire the old `SEND_REMOVE_PUSH_NOTIFICATIONS`
setting to True, and simplify it out.
For Google auth, the multiuse invite key should be stored in the
csrf_state sent to google along with other values like is_signup,
mobile_flow_otp.
For social auth, the multiuse invite key should be passed as params to
the social-auth backend. The passing of the key is handled by
social_auth pipeline and made available to us when the auth is
completed.
This is primarily a feature for onboarding, where an organization
administrator might send a bunch of random test messages as part of
joining, but then want a pristine organization when their users later
join.
But it can theoretically be used for other use cases (e.g. for
moderation or removing threads that are problematic in some way).
Tweaked by tabbott to handle corner cases with
is_history_public_to_subscribers.
Fixes #10912.
This reverts commit ec9f6702d8.
Now that pika 0.13.0 has merged our PR to not import twisted unless it
is needed, we don't need to use this performance hack in order to
avoid wasting time importing twisted and all its dependencies.
Previously, zerver.views.registration.confirmation_key was only
available in development; now we make that more structurally clear by
moving it to the special zerver/views/development directory.
Fixes #11256.
Some urls are only available in the development environment
(dev_urls.py); the corresponding view (here email_log.py) is moved to the
new directory zerver/views/development.
Fixes #11256.
We had an inconsistent behavior when `LDAP_APPEND_DOMAIN` was set
in that we allowed the user to enter a username instead of their email in
the auth form, but later the workflow failed due to a small bug.
Fixes: #10917.
This endpoint serves requests which might originate from an image
preview link which had an http url and the message holding the image
link was rendered before we introduced thumbnailing. In that case
we would have used a camo proxy to proxy http content over https and
avoid mixed content warnings.
In the near future, we plan to drop use of camo and just rely on thumbor
to serve such images. This endpoint helps maintain backward
compatibility for links which were already rendered.
This setting splits away part of responsibility from THUMBOR_URL.
From now on, this setting will be responsible for controlling whether
we thumbnail images or not by asking bugdown to render image links
to hit our /thumbnail endpoint. This is irrespective of what
THUMBOR_URL is set to though ideally THUMBOR_URL should be set
to point to a running thumbor instance.
This is somewhat hacky, in that in order to do what we're doing, we
need to parse the HTML of the rendered page to extract the first
paragraph to include in the open graph description field. But
BeautifulSoup does a good job of it.
This carries a nontrivial performance penalty for loading these pages,
but overall /help/ is a low-traffic site compared to the main app, so
it doesn't matter much.
(As a sidenote, it wouldn't be a bad idea to cache this stuff).
There's lots of things we can improve in this, largely through editing
the articles, but we can deal with that over time.
Thanks to Rishi for writing all the tests.
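A minimal sketch of the parsing step (not the exact implementation; the
function name is illustrative):

    from bs4 import BeautifulSoup

    def open_graph_description(rendered_html: str) -> str:
        # Grab the text of the first paragraph of the rendered /help/ page.
        soup = BeautifulSoup(rendered_html, "html.parser")
        paragraph = soup.find("p")
        return paragraph.get_text().strip() if paragraph else ""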
This adds a new realm_logo field, which is a horizontal-format logo to
be displayed in the top-left corner of the webapp, and any other
places where we might want a wide-format branding of the organization.
Tweaked significantly by tabbott to rebase, fix styling, etc.
Fixing the styling of this feature's loading indicator caused me to
notice the loading indicator for the realm_icon feature was also ugly,
so I fixed that too.
Fixes #7995.
This should make it possible for blueslip error reports to be sent on
our logged-out portico pages, which should in turn make it possible to
debug any such issues as they occur.
This should make life a lot more convenient for organizations that use
the LDAP integration and have their avatars in LDAP already.
This hasn't been end-to-end tested against LDAP yet, so there may be
some minor revisions, but fundamentally, it works, has automated
tests, and should be easy to maintain.
Fixes #286.
Apparently, Django's get_current_site function (used, e.g., in
django-two-factor to look up the domain to use in QR codes) first
tries to use the Sites framework, and if unavailable, does the right
thing (namely, using request.get_host()).
We don't use the Sites framework for anything in Zulip, so the correct
fix is to just remove it.
Fixes #11014.
A key part of this is the new helper, get_user_by_delivery_email. Its
verbose name is important for clarity; it should help avoid blind
copy-pasting of get_user (which we'll also want to rename).
Unfortunately, it requires detailed understanding of the context to
figure out which one to use; each is used in about half of call sites.
Another important note is that this PR doesn't migrate get_user calls
in the tests except where not doing so would cause the tests to fail.
This probably deserves a follow-up refactor to avoid bugs here.
This makes it possible to still run the deliver_scheduled_messages
queue worker, even though we're not creating reminder-bot by default
in new organizations.
This reverts commit 2fa77d9d54.
Further investigation has determined that this did not fix the
password-reset problem described in the previous commit message;
meanwhile, it causes other problems. We still need to track down the
root cause of the original password-reset bug.
This adds a web flow and management command for reactivating a Zulip
organization, with confirmation from one of the organization
administrators.
Further work is needed to make the emails nicer (ideally, we'd send
one email with all the admins on the `To` line, but the `send_email`
library doesn't support that).
Fixes #10783.
With significant tweaks to the email text by tabbott.
Realm object is not json-serializable; store the realm id instead
and retrieve the realm in social_auth_finish using
`Realm.objects.get(id=return_data["realm_id"])`.
The email_list returned has the primary email as the first element.
Testing: The order of the emails in the test was changed to put a
verified email before the primary one. The tests would fail without
this commit's change after the changes in the order of test emails.
See https://github.com/zulip/zulip/issues/10856 for details on the bug
here; but basically, users who reset their password were unable to
login until the next time we flushed memcached. The issue disappeared
after stopping using the Django cached_db session engine, so it's
pretty clear that some sort of bug with that session engine
interacting with our password reset logic is the root cause.
Further debugging is required to understand this fully, but for now,
it seems wise to just disable the backend.
The cost of doing so is a small performance decrease, which is likely
acceptable until we can resolve this (it's certainly a more minor
problem than the "Can't login" bug that disabling this removes).
This is a preparatory commit which will help us with removing camo.
In the upcoming commits we introduce a new endpoint which is based
on the setting CAMO_URI. Since camo could have been hosted on
a different server from the main Zulip server, this change
will help us realise in tests how that scenario might be dealt with.
These lazy imports save a significant amount of time on Zulip's core
import process, because mock imports pbr, which in turn imports
pkg_resources, which is in turn incredibly slow to import.
Fixes part of #9953.
This is a performance optimization; see the comment. This fixes part
of #9953.
Eventually, we should do the same thing for importing Tornado as well,
but it's less important because Tornado is a much smaller library.
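The pattern is simply to defer the import into the function that needs it; a
sketch (the function name and patch target below are stand-ins, not the
actual call sites):

    def run_with_patched_sleep() -> None:
        # Importing mock lazily avoids paying the mock -> pbr ->
        # pkg_resources import cost on every process startup.
        import mock

        with mock.patch("time.sleep"):
            ...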
Before, presence information for an entire realm could only be queried via
the `POST /api/v1/users/me/presence` endpoint. However, this endpoint also
updates the presence information for the user making the request. Therefore,
bot users are not allowed to access this endpoint because they don't have
any presence data.
This commit adds a new endpoint `GET /api/v1/realm/presence` that just
returns the presence information for the realm of the caller.
Fixes #10651.
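An example request against the new endpoint (the server URL, credentials,
and response key here are assumptions for illustration):

    import requests

    response = requests.get(
        "https://zulip.example.com/api/v1/realm/presence",
        auth=("presence-bot@example.com", "API_KEY"),
    )
    presences = response.json()["presences"]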
Some admins setting up Zulip's LDAP auth against Active Directory see
a rather baffling error message: "In order to perform this operation a
successful bind must be completed on the connection". This happens
despite AUTH_LDAP_BIND_DN and auth_ldap_bind_password being set
perfectly correctly, and on a query that the `ldapsearch` command-line
tool performs quite happily.
Empirically, adding a setting like this to /etc/zulip/settings.py
resolves the issue:
    AUTH_LDAP_CONNECTION_OPTIONS = {
        ldap.OPT_REFERRALS: 0
    }
Some useful, concise background on the LDAP "referral" concept is here:
https://docs.oracle.com/javase/jndi/tutorial/ldap/referral/overview.html
and a pertinent bit of docs for the underlying Python `ldap` client:
https://www.python-ldap.org/en/latest/faq.html
and some very helpful documentation for Active Directory:
https://docs.microsoft.com/en-us/windows/desktop/ad/referrals
Based on the docs above, the story appears to be something like this:
* This server has the information for part of the scope of our query
-- in particular it happens to have the information we actually want.
* But there are other areas ("subordinate domains") that our query is
in principle asking about, and this server doesn't know if there are
matches there, so it gives us a referral.
* And by default, python-ldap lets `libldap` run ahead and attempt to
bind to those referrals and do those queries too -- which raises an
error because, unlike Microsoft's "LDAP API", it doesn't reuse the
credentials.
So if we simply skip trying to follow the referrals, there's no
error... and we already have, from the original response, the answer
we actually need. That's what the `ldap.OPT_REFERRALS` option does.
There may be more complex situations where the referral really is
relevant, because the desired user info is split across servers. Even
then, unless an anonymous query will be acceptable, there's no point
in letting `libldap` follow the referral and setting this option is
still the right thing. When someone eventually comes to this bridge,
some code will be required to cross it, by following the referrals.
That code might look a bit like this (unfinished) example:
https://bugs.launchpad.net/nav/+bug/1209178
Manually tested by tabbott.
Fixes #343, which was effectively a report of the need for this
OPT_REFERRALS setting.
Fixes #349, since with this change, we no longer require tricky manual
configuration to get Active Directory up and running.
The term "username" confusingly refers both to the Django concept of
"username" (meaning "the name the user types into the login form") and
a concept the admin presumably already has in their existing
environment; which may or may not be the same thing, and in fact this
is where we document the admin's choice of whether and how they should
correspond. The Django concept in particular isn't obvious, and is
counterintuitive when it means something like an email address.
Explicitly explain the Django "username" concept, under the name of
"Zulip username" to take responsibility for our choice of how it's
exposed in the settings interface. Then use an explicit qualifier,
like "LDAP username", whenever referring to some other notion of
username. And make a pass over this whole side of the instructions,
in particular for consistent handling of these concepts.
Expand on a few things that tend to confuse people (especially the
`%(user)s` thing); move the `LDAPSearchUnion` example out to docs;
adjust the instructions to fit a bit better in their new docs/ home.
This makes it easier to iterate on these, and to expand supplemental
information (like troubleshooting, or unusual configurations) without
further straining the already-dauntingly-long settings.py.
It also makes it easier to consult the instructions while editing the
secrets file, or testing things, etc. -- most admins will find it more
natural to keep a browser open somewhere than a second terminal.
Fixes part of #10297.
Use FAKE_LDAP_NUM_USERS which specifies the number of LDAP users
instead of FAKE_LDAP_EXTRA_USERS which specified the number of
extra users.
Also use the form name for selecting the form in casper tests,
as a form with action=new is present in both /new
and /accounts/new/send_confirm/, which breaks the
test in CircleCI because
waitWhileVisible('form[action^="/new/"]') never stops
waiting.
This uses the recently introduced active_mobile_push_notification
flag; messages that have had a mobile push notification sent will have
a removal push notification sent as soon as they are marked as read.
Note that this feature is behind a setting,
SEND_REMOVE_PUSH_NOTIFICATIONS, since the notification format is not
supported by the mobile apps yet, and we want to give a grace period
before we start sending notifications that appear as (null) to
clients. But the tracking logic to maintain the set of message IDs
with an active push notification runs unconditionally.
This is designed with at-least-once semantics; so mobile clients need
to handle the possibility that they receive duplicate requests to
remove a push notification.
We reuse the existing missedmessage_mobile_notifications queue
processor for the work, to avoid materially impacting the latency of
marking messages as read.
Fixes #7459, though we'll need to open a follow-up issue for
using these data on iOS.
This uses the MockLDAP class of fakeldap to fake an LDAP server, based
on the approach already used in the tests in `test_auth_backends.py`.
Adds the following settings:
- FAKE_LDAP_MODE: Lets the user choose from three preset configurations.
The default mode if someone erases the entry in settings is 'a'. The
fake LDAP server is disabled if this option is set to None.
- FAKE_LDAP_EXTRA_USERS: Number of extra users in LDAP directory beyond
the default 8.
Fixes #9934.
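For example, a development configuration using these settings might look
like this (the values are illustrative):

    FAKE_LDAP_MODE = 'a'       # one of the preset modes 'a', 'b', 'c'; None disables it
    FAKE_LDAP_EXTRA_USERS = 5  # users in the directory beyond the default 8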
Generates ldap_dir based on the mode and the number of extra users.
It supports three modes, 'a', 'b' and 'c', description for which
can be found in prod_settings_templates.py.
That value is necessary to configure anonymous binds in
django-auth-ldap, which are useful when we're using LDAP just to
populate the user database.
Fixes #10257.
This lets us have a slightly cleaner interface for cases where we need
a custom default value than an `or None` in the definition (which
might cause issues with other falsey values).
The authenticate function now follows the signature of Django 2.0:
https://github.com/django-auth-ldap/django-auth-ldap/commit/27a8052b26f1d3a43cdbcdfc8e7dc0322580adae
Also AUTH_LDAP_CACHE_GROUPS is deprecated in favor of
AUTH_LDAP_CACHE_TIMEOUT.
We already had a setting for whether these logs were enabled; now it
also controls which stream the messages go to.
As part of this migration, we disable the feature in dev/production by
default; it's not useful for most environments.
Fixes the proximal data-export issue reported in #10078 (namely, a
stream with nobody ever subscribed to having been created).
As part of our effort to change the data model away from each user
having a single API key, we're eliminating the couple requests that
were made from Django to Tornado (as part of a /register or home
request) where we used the user's API key grabbed from the database
for authentication.
Instead, we use the (already existing) internal_notify_view
authentication mechanism, which uses the SHARED_SECRET setting for
security, for these requests, and just fetch the user object using
get_user_profile_by_id directly.
Tweaked by Yago to include the new /api/v1/events/internal endpoint in
the exempt_patterns list in test_helpers, since it's an endpoint we call
through Tornado. Also added a couple missing return type annotations.
Input pills require a contenteditable div with a class named input
to fall inside the pill container. On converting the input tag into
a div, the size of the input decreases which is compensated by a
line-height of 40px. Comment above letter-spacing:normal was removed
as chrome and firefox do not change the letter-spacing to normal
for a div via the default browser stylesheet.
NOTE: Currently writing something into the div will call the action
corresponding to that key in the keyboard shortcuts. The input will
work fine once the pills have been initiated.
For the casper tests, for now, we just use the legacy search code.
When we change that, $.val() cannot be used on contenteditable div, so
$.html() will need to be used instead in select_item_via_typeahead.
This setting isn't intended to exist long term, but instead to make it
possible to merge our search pills code before we're ready to cut over
production environments to use it.
Various pieces of our thumbor-based thumbnailing system were already
merged; this adds the remaining pieces required for it to work:
* a THUMBOR_URL Django setting that controls whether thumbor is
enabled on the Zulip server (and if so, where thumbor is hosted).
* Replaces the overly complicated prototype cryptography logic
* Adds a /thumbnail endpoint (supported both on web and mobile) for
accessing thumbnails in messages, designed to support hosting both
external URLs as well as uploaded files (and applying Zulip's
security model for access to thumbnails of uploaded files).
* Modifies bugdown to, when THUMBOR_URL is set, render images with the
`src` attribute pointing to /thumbnail (to provide a small thumbnail
for the image), along with adding a "data-original" attribute that
can be used to access the "original/full" size version of the image.
There are a few things that don't work quite yet:
* The S3 backend support is incomplete and doesn't work yet.
* The error pages for unauthorized access are ugly.
* We might want to rename data-original and /thumbnail?size=original
to use some other name, like "full", that better reflects the fact
that we're potentially not serving the original image URL.
In this commit we add a new endpoint so as to have a way of fetching
topic history for a given stream id without having to be logged in.
This can only happen if the said stream is web public otherwise we
just return an empty topics list. This endpoint is quite analogous
to get_topics_backend which is used by our main web app.
In this commit we also do a bit of duplication regarding the query
responsible for fetching all the topics from DB. Basically this
query is exactly the same as what we have in the
get_topic_history_for_stream function in actions.py. Basically
duplicating now is the right thing to do because this query is
really going to change when we add another criterion for filtering
messages which is:
Only topics for messages which were sent during the period the
corresponding stream was web public should be returned.
Now when we will do this, the query will change and thus it won't
really be a code duplication!
This commit adds the necessary puppet configuration and
installer/upgrade code for installing and managing the thumbor service
in production. This configuration is gated by the 'thumbor.pp'
manifest being enabled (which is not yet the default), and so this
commit should have no effect in a default Zulip production environment
(or in the long term, in any Zulip production server that isn't using
thumbor).
Credit for this effort is shared by @TigorC (who initiated the work on
this project), @joshland (who did a great deal of work on this and got
it working during PyCon 2017) and @adnrs96, who completed the work.
This adds a new settings, SOCIAL_AUTH_SUBDOMAIN, which specifies which
domain should be used for GitHub auth and other python-social-auth
backends.
If one is running a single-realm Zulip server like chat.zulip.org, one
doesn't need to use this setting, but for multi-realm servers using
social auth, this fixes an annoying bug where the session cookie that
python-social-auth sets early in the auth process on the root domain
ends up masking the session cookie that would have been used to
determine a user is logged in. The end result was that logging in
with GitHub on one domain on a multi-realm server like zulipchat.com
would appear to log you out from all the others!
We fix this by moving python-social-auth to a separate subdomain.
Fixes: #9847.
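For example, a multi-realm server might configure (the subdomain value is
illustrative):

    SOCIAL_AUTH_SUBDOMAIN = 'auth'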
In this commit we are fixing a fairly serious unnoticed bug with
the way run_db_migrations worked for the test db.
Basically run_db_migrations runs new migrations on db (dev or test).
When we talk about the dev platform this process is straight forward.
We have a single DB zulip which was once created and now has some data.
Introduction of new migration causes a schema change or does something
else but bottom line being we just migrate the zulip DB and stuff works
fine.
Now coming to the zulip test db (zulip_test), the situation is a bit complex
in comparison to the dev db. Basically this is because we make use of
what we call zulip_test_template to make test fixture restoration
after tests run fast. Now before we introduced the performance
optimisation of just doing migrations when possible, introduction of
a migration would ideally result in provisioning doing a full rebuild of
the test database. When that used to happen sequence of events used to
be something like this:
* Create a zulip_test db from zulip_test_base template (An absolute
basic schema holding)
* Migrate and populate the zulip_test db.
* Create/Re-create zulip_test_template from the latest zulip_test.
Now after we introduced doing just migrations instead of a full db rebuild
when possible, what used to happen was that the zulip_test db got
successfully migrated but when test suites would run they would try to
create zulip_test from zulip_test_template (so that individual tests
don't affect each other on db level).
This is where the problem resides; zulip_test_template wasn't migrated
and we just scrapped zulip_test and re-created it using
zulip_test_template as a template and hence zulip_test will not hold the
latest schema.
This is what we fix in this commit.
This commit moves all files previously under the 'app' bundle in
the Django pipeline to being compiled by webpack under the 'app'
entry point. In the process, it moves assets under the app entry
to a file called app.js that consumes all relevant css and js files.
This commit also edits the webpack config to be able to expose certain
variables for third party libraries that are currently required by
some modules. This is bad coding form and should be refactored to
requiring whatever dependencies a module may have; we're just
deferring that to the future to simplify the series of transitions we
need to do here. The variable exposure is done using expose-loader in
webpack.
The app/index.html template is edited to override the newly introduced
'commonjs' block in the base template. This is done as a temporary
measure so as not to disrupt other pages on the app during the transition.
It also fixes the value of the 'this' context that was being inferred
as window by third party libraries. This is done using imports-loader
in the webpack config. This is also messy and probably isn't how we
want things to work long term.
We need to do a small monkey-patching of python-social-auth to ensure
that it doesn't 500 the request when a user does something funny in
their browser (e.g. using the back button in the auth flow) that is
fundamentally a user error, not a server error.
This was present in the pre-rewrite version of our Social auth
codebase, without clear documentation; I've fixed the explanation
part here.
It's perhaps worth investigating with the core social auth team
whether there's a better way to do this.
It's possible to make GitHub social authentication support letting the
user pick which of their verified email addresses to use, using the
python-social-auth pipeline feature. We need to add an additional
screen to let the user pick, so we're not adding support for that now,
but this at least migrates this to use the data set of all emails that
have been verified as associated with the user's GitHub account (and
we just assume the user wants their primary email).
This also fixes the inability of very old GitHub accounts (where the
`email` field in the details might be a string the user wanted on
their GitHub profile page) to use GitHub auth to log in.
Fixes #9127.
Adds search_pill.js to the static asset pipeline. The items
for search pill contain 2 keys, display_value and search_string.
Adding all the operator information i.e the operator, operand and
negated fields along with the search_string and description was tried out.
It was dropped because it didn't provide any advantage as one had to
always calculate the search_string and the description from the operator.
This fixes an important issue where the realm_users cache could grow
beyond 1MB when a Zulip server had more than about 10K users. The
result was that Zulip would start 500ing with that size of userbase.
There are probably better long-term fixes, but because the realm_users
data set caches well, this change should be sufficient to let us
handle 50-100K users or more on that metric (though at some point,
we'll start having other problems interacting with the realm_users
data set).
This new implementation model is a lot cleaner and should extend
better to the non-oauth backend supported by python-social-auth (since
we're not relying on monkey-patching `do_auth` in the OAuth backend
base class).
This adds a /ping command that will be useful for users
to see what the round trip to the Zulip server is (including
only a tiny bit of actual server time to basically give a
200).
It also introduces the "/zcommand" endpoint and zcommand.js
module.
The previous logic made it look like catching ZulipLDAPException on
the authenticate() line was possible, but it isn't, because that
exception is actually being handled inside django-auth-ldap's
authenticate method.
Previously, if you had LDAPAuthBackend enabled, we basically blocked
any other auth backends from working at all, by requiring the user's
login flow include verifying the user's LDAP password.
We still want to enforce that in the case that the account email
matches LDAP_APPEND_DOMAIN, but there's a reasonable corner case:
Having effectively guest users from outside the LDAP domain.
We don't want to allow creating a Zulip-level password for a user
inside the LDAP domain, so we still verify the LDAP password in that
flow, but if the email is allowed to register (due to invite or
whatever) but is outside the LDAP domain for the organization, we
allow it to create an account and set a password.
For the moment, this solution only covers EmailAuthBackend. It's
likely that just extending the list of other backends we check for in
the new conditional on `email_auth_backend` would be correct, but we
haven't done any testing for those cases, and with auth code paths,
it's better to disallow than allow untested code paths.
Fixes #9422.
Previously, if both EmailAuthBackend and LDAPAuthBackend were enabled,
LDAP users could set a password using EmailAuthBackend and continue to
use that password, even if their LDAP account was later deactivated.
That configuration wasn't supported at all before, so this doesn't fix
a pre-existing security issue, but now that we're making that a valid
configuration, we need to cover this case.
A "zform" knows how to render data that follows our
schema for widget messages with form elements like
buttons and choices.
This code won't be triggered until a subsequent
server-side commit takes widget_content from
API callers such as the trivial chat bot and
creates submessages for us.
This starts the concept of a schema checker, similar to
zerver/lib/validator.py on the server. We can use this
to validate incoming data. Our server should filter most
of our incoming data, but it's useful to have client-side
checking to defend against things like upgrade
regressions (i.e. what if we change the name of the field
on the server side without updating all client uses).
The only purpose of this commit is to make the django templates
of Two Factor Auth work. We probably won't need this commit once
we upgrade the admin backend of Two Factor Auth to use handlebar
templates.
This commit improves the output that blueslip produces while
showing error stack traces on the front-end. This is done by
using a library called error-stack-parser to format the stack
traces.
This commit also edits the webpack config to use a different
devtool setting since the previous one did not support sourcemaps
within stack traces. It also removes a plugin that was obviated
by this change.
We add conditional infinite sleep to this delivery job as a means to
handle case of multiple servers in service to a realm running this
job. In such a scenario race conditions might arise leading to
multiple deliveries for same message. This way we try to match the
behaviour of what other jobs do in such a case.
Note: We should eventually do something to make such jobs work
while running on multiple servers.
Slow queries during backend tests send messages to Error Bot,
which affects the database state, causing the tests to fail.
This fixes the occasional flakes due to that.
This commit lays the foundation to handle submessages for
plugin widgets. Right now it just logs events, but subsequent
commits will add widget functionality.
This moves the documentation for this feature out of
prod_settings_template.py, so that we can edit it more easily.
We also add a bucket policy, which is part of what one would need in
order to use this in production.
This addresses much, but not all, of #9361.
We don't reference this anymore (it was only ever used by the Dropbox
integration, which was hardcoded-off for years before being removed in
e6833b6427)
This fixes exceptions when sending PMs in development (where we were
trying to connect to the localhost push bouncer, which we weren't
authorized for, but even if we were, it wouldn't work, since there's
no APNS/GCM certs).
At the same time, we also set an order of operations that ensures one
has the opportunity to adjust the server URL before submitting
anything to us.
Most of this was straightforward.
Most functions that were grabbed verbatim and whole from
the original class still have one-line wrappers.
Many functions are just-the-data versions of functions that
remain in MessageList: see add, append, prepend, remove as
examples. In a typical pattern the MessageList code becomes
super simple:
    prepend: function MessageList_prepend(messages) {
        var viewable_messages = this.data.prepend(messages);
        this.view.prepend(viewable_messages);
    },
Two large functions had some minor surgery:
triage_messages =
top half of add_messages +
API to pass three lists back
change_message_id =
original version +
two simple callbacks to list
For the function update_muting_and_rerender(), we continue
to early-exit if this.muting_enabled is false, and we copied
that same defensive check to the new function
named update_items_for_muting(), even though it's technically
hidden from that codepath by the caller.
This commit moves the stylesheets under the archive bundle in
the Django pipeline to being compiled by webpack instead. It
also removes a remaining call to a portico stylesheet that no
longer exists.
This commit transitions landing-page.css from the Django pipeline
to being compiled by webpack as landing-page.scss under the
'landing-page' and 'integration' bundles.
This commit transitions all styles in app.css in the Django pipeline
to being compiled by webpack in an app-styles bundle, and renames the
various files to now be processed as SCSS.
To implement this transition, we move the old CSS file references in
settings.py and replace them with a bundle declared in
`webpack.assets.json` and included in the index.html template.
Tweaked by tabbott to keep the list of files in `app.css` in
`webpack.assets.json`, and to preserve the ordering from the old
`settings.py`.
This commit removes the need for portico.css to be generated
by the Django pipeline and makes the error page use the css
file compiled by webpack instead.
We haven't seen significant traffic from the legacy desktop app in
over a year, and users using it get a warning to upgrade since last
summer, so it's probably OK to stop providing special fonts for it.
This introduces a generic class called list_cursor to handle the
main details of navigating the buddy list and wires it into
activity.js. It replaces some fairly complicated code that
was coupled to stream_list and used lots of jQuery.
The new code interacts with the buddy_list API instead of jQuery
directly. It also persists the key across redraws, so we don't
lose our place when a focus ping happens or we type more characters.
Note that we no longer cycle to the top when we hit the bottom, or
vice versa. Cycling can be kind of an anti-feature when you want to
just lay on the arrow keys until they hit the end.
The changes to stream_list.js here do not affect the left sidebar;
they only remove code that was used for the right sidebar.
This was a bit more than moving code. I extracted the
following things:
$widget (and three helper methods)
$input
text()
empty()
expand_column
close_widget
activity.clear_highlight
There was a minor bug before this commit, where we were inconsistent
about trimming spaces. The introduction of text() and empty() should
prevent bugs where users type the space bar into search.
static/styles/scss/portico.scss is now compiled by webpack
and supports SCSS syntax.
Changed the server-side templates to render the portico-styles
bundle instead of directly requiring the portico stylesheet. This
allows webpack to handle stylesheet compilation and minification.
We use the mini-css-extract-plugin to extract out css from the
includes in webpack and let webpacks production mode handle
minification. Currently we're not able to use it for dev mode
because it does not support HMR so we use style-loader instead.
Once the plugin supports HMR we can go on to use it for both
dev and prod.
The downside of this is that when reloading pages in the development
environment, there's an annoying flash of unstyled content :(.
It is now possible to make a change in any of the styles included
by static/styles/scss/portico.scss and see the code reload live
in the browser. This is because style-loader which we currently
use has the module.accept code built-in.
This shouldn't have a material performance impact, since we don't
query these except during login, and meanwhile this fixes an issue
where users needed to restart memcached (which usually manifested as
"needing to reboot the whole server" after updating their LDAP
configuration before a user who was migrated from one OU to another
could login).
Fixes #9057.
The code in maybe_send_to_registration incorrectly used the
`get_realm_from_request` function to fetch the subdomain. This usage
was incorrect in a way that should have been irrelevant, because that
function only differs if there's a logged-in user, and in this code
path, a user is never logged in (it's the code path for logged-out
users trying to sign up).
This bug could confuse unit tests that might run with a logged-in
client session. This made it possible for several of our GitHub auth
tests to have a totally invalid subdomain value (the root domain).
Fixing that bug in the tests, in turn, let us delete a code path in
the GitHub auth backend logic in `backends.py` that is impossible in
production, and had just been left around for these broken tests.
This is done mainly because this backend has the simplest code path
for calling login_or_register_remote_user, more than because we expect
this case to come up. It'll make it easier to write unit tests for
the `invalid_subdomain` corner case.
This is a mobile-specific endpoint used for logging into a dev server.
On mobile without this realm_uri it's impossible to send a login request
to the corresponding realm on the dev server and proceed further; we can
only guess, which doesn't work for using multiple realms.
Also rename the endpoint to reflect the additional data.
Testing Plan:
Sent a request to the endpoint, and inspected the result.
[greg: renamed function to match, squashed renames with data change,
and adjusted commit message.]
This should help a bit more in making this file navigable.
I think there's further work that could be done to organize the
settings better: e.g., group LDAP with the auth section; separate
resource limits, from debugging and error reporting, from configuring
service dependencies like Redis and Rabbit. That'd require reordering
many settings, and also taking a closer look at many settings one by
one in order to do a good job. Leaving that for another day.
I've found this file hard to navigate for a while. We actually have a
little hierarchy of section headings which applies to a lot of the
file already; make the boundaries bolder.
Add a clear heading, and use fewer words and simpler sentences. Also
explain the password thing a bit more, and put that more inline next
to the username.
Also, on checking the Django docs, the default for EMAIL_USE_TLS
is False and for EMAIL_PORT is 25. So most admins, certainly any that
are using an SMTP service on the public Internet (that is at all
decently run), will need to set those settings. Mention that.
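For example, a typical public SMTP provider configuration would need
something like the following (the values are illustrative):

    EMAIL_HOST = 'smtp.example.com'
    EMAIL_PORT = 587       # Django's default is 25
    EMAIL_USE_TLS = True   # Django's default is False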
This is a pretty thin abstraction to prevent having to put
magic numbers in code, doing the which/keyCode hack, and remembering
to call preventDefault.
Hopefully we'll expand it to handle things like shift/alt keys
for components that want their own keyboard handlers (vs. going
through hotkey.js).
We don't intend for this server-level setting to exist in the long
term; the purpose of this is just to make it easy to test this code
path for development purposes.
This implements much of the Message side part of #2745.
This should eliminate the occasional problems that some installations
have had with RabbitMQ trying to load-balance requests between
127.0.0.1 and ::1 (the ipv6 version of localhost).
There are several ways we open help for keyboard shortcuts,
markdown help, and search operators.
- from the gear menu
- from the compose box
- from the search box
- hitting ? for keyboard help
- arrowing/clicking through the tabs
This just moves the relevant code into a module and changes a
bunch of one-line calls in various places.
Now that we have support for displaying custom profile fields, this
adds administrator-level support for creating them.
Tweaked by tabbott to fix a few small bugs and clean up the commit message.
Fixes #1760.
This bot was basically a duplicate of NOTIFICATION_BOT for some
specific corner cases, and didn't add much value. It's better to just
eliminate it, which also removes some ugly corner cases around what
happens if the user account doesn't exist.
In 18e43895ff we replaced
stream subscribe buttons with stream links. The new feature
has been well tested and well received for over a year now,
so it's safe to remove the older feature at this point.
Older sites will have super old messages that still have the
rendered markup; this commit does not attempt to address those
situations. Most likely, clicking on an old button in the old
message will either do nothing or look like a message reply.
This feature isn't really ready yet -- the relevance isn't good, so
the emails aren't a great experience. More work needed; pending that,
just don't send them.
There's already a per-realm setting, which doesn't have a control in
the org settings UI but does suppress it in the per-user settings UI.
Piggyback on that to suppress that UI control when the feature is
disabled at the server level too.
Also cut a comment that hasn't really made sense since the logic was
changed months ago -- the comment originally explained why we sent
digests on Tuesday, Wednesday, and Thursday, and doesn't correspond to
why we dialled back to weekly on Tuesdays.
This is nicer than the "For manual testing ..." comment. :-)
Also as a proper setting we can have it control some logging I
added locally while testing my recent changes to pika logging.
In this commit we start to support redirects to urls supplied as a
'next' param for the following two backends:
* GoogleOAuth2 based backend.
* GitHubAuthBackend.
Update perfect-scrollbar to fix stutter space-scrolling in #8544. Also
reworked deprecated `element.perfectScrollbar` to `new
PerfectScrollbar(element)`. Lastly, updated provision version and
changed node module path to new path.
This also refactors perfect-scrollbar in help.js to work with updated
version of perfect-scrollbar. Because the update also changed
perfect-scrollbar's css selectors for all scrollbars in zulip, we
update those too.
Fixes #8544.
In some environments, these exceptions happen regularly on upgrading
Zulip. They're harmless because we reconnect, so avoid complaining
noisily about them.
This applies only on a server open for anyone to create a realm.
Moreover, if the server admins have granted any given realm a
max_invites greater than the default, that realm is exempt too.
This makes this value much easier for a server admin to change than it
was when embedded directly in the code. (Note this entire mechanism
already only applies on a server open for anyone to create a realm.)
Doing this also means getting the default out of the database.
Instead, we make the column nullable, and when it's NULL in the
database, treat that as whatever the current default is. This better
matches anyway the likely model where there are a few realms with
specially-set values, and everything else should be treated uniformly.
The migration contains a `RenameField` step, which sounds scary
operationally -- but it really does mean just the *field*, in
the model within the Python code. The underlying column's name
doesn't change.
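A sketch of the accessor pattern this describes, as it might appear on the
Realm model (the field, column, and setting names here are assumptions, not
necessarily the real ones):

    from django.conf import settings
    from django.db import models

    class Realm(models.Model):
        # RenameField renamed only the model field; db_column keeps the
        # underlying column name, and NULL means "use the server default".
        _max_invites = models.IntegerField(null=True, db_column='max_invites')

        @property
        def max_invites(self) -> int:
            if self._max_invites is None:
                return settings.INVITES_DEFAULT_REALM_DAILY_MAX
            return self._max_invites

        @max_invites.setter
        def max_invites(self, value: int) -> None:
            self._max_invites = value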
This mostly moves code from ui.js.
We change the arguments to `message_fetch.load_more_messages()`
to be `opts` with callbacks for `show_loading` and `hide_loading`.
We also defer starting the scroll handler until `message_fetch.js`
has been initialized.
@brockwhittaker wrote the original prototype for having
pills in the recipient box when users compose PMs (either
1:1 or huddle). The prototype was test deployed on our
main realm for several weeks.
This commit includes all the original CSS and HTML from
the prototype.
After some things changed with the codebase after the initial
test deployment, I made the following changes:
* In prior commits I refactored out a module called
`user_pill.js` that implemented some common functions
against a more streamlined version of `input_pill.js`,
and this commit largely integrates with that.
* I made changes in a prior commit to handle Zephyr
semantics (emails don't get validated) and tested
this commit with zephyr.
* I fixed a reload bug by extracting code out to
`compose_pm_pill.js` and re-ordering some
calls to `initialize`.
There are still two flaws related to un-pill-ified text in the
input:
* We could be more aggressive about trying to pill-ify
emails when you blur or tab away.
* We only look at the pills when you send the message,
instead of complaining about the un-pill-ified text.
(Some folks may consider that a feature, but it's
probably surprising to others.)
The main point of this change is to streamline the core
code for input pills, and we also modify user groups to use it.
The main change to input_pill.js is that you now
configure a function called `create_item_from_text`, and
that can return an arbitrary object, and it just needs
a field called `display_value`.
Other changes:
* You now call `input.create(opts)` to create the
widget.
* There is no longer a cache, because we can
write smarter code in typeahead `source` functions
that exclude ids up front.
* There is no value/optionalKey complexity, because
the calling code can supply arbitrary objects and
do their own external data management on the pill
items.
* We eliminate `prependPill`.
* We eliminate `data`, `keys`, and `values`, and just
have `items`.
This is necessary for mobile apps to do the right thing when only
RemoteUserBackend is enabled, namely, directly redirect to the
third-party SSO auth site as soon as the user enters the server URL
(no need to display a login form, since it'll be useless).
Previously, we used to raise an exception if the direct dev login code
path was attempted when:
* we were running under production environment.
* dev login was not enabled.
Now we redirect to an error page and give an explanatory message to the
user.
Fixes #8249.
We now isolate the code to transmit messages into transmit.js.
It is stable code that most folks doing UI work in compose.js don't
care about the details of, so it's just clutter there. Also, we may
soon have other widgets than the compose box that send messages.
This change mostly preserves test coverage, although in some cases
we stub at a higher level for the compose path (this is a good thing).
Extracting out transmit.js allows us to lock down 100% coverage on that
file.
This uses an actual query to the backend to check if the subdomain is
available, using the same logic we would use to check when the
subdomain is in fact created.
This adds a button under "Organization profile" settings, which
deactivates the organization, sends an "event" to all the
active users, and logs them out.
Fixes: #8212.
From here on we start to authenticate uploaded file requests before
serving these files in production. This involves allowing NGINX to
pass on these file requests to Django for authentication and then
serve these files by making use of internal redirect requests with the
x-accel-redirect field. The redirection of requests and loading
of the x-accel-redirect param is handled by django-sendfile.
NOTE: This commit starts to authenticate these requests for Zulip
servers running Ubuntu Xenial (16.04) or above.
Fixes: #320 and #291 partially.
This comment didn't really explain things unless you were looking at
the code, and/or had a strong enough association for what "cn" means
that it was obvious what must be meant. Maybe this will be clearer.
There is one other meaningful key, which is optional: "short_name",
for which I guess a typical value if supplied would be "uid" or
"userid". I'm not sure we really do much with a user's `short_name`,
though, so didn't add a comment for it. When this key is omitted,
we set the user's `short_name` to the same thing as `full_name`.
It's too easy to go over the rate limits when using the webapp.
The correct fix for this probably involves some changes to which
routes get covered by what sort of rate limit, but for now, just
increase the limits.
The original code made a 3/4-hearted effort to generically accommodate
more banners/"panels" later, but named itself after the first one made.
[greg: expanded commit message.]
Different formats for configuration files have a wide variety of ways
of representing lists; so if you're not accustomed to Python syntax,
or aren't thinking of this file as Python code, the syntax for several
ALLOWED_HOSTS entries may not be obvious. And this setting is one
that an admin is likely to want to touch quite early in using Zulip.
So, demonstrate a multi-element list.
For similar reasons, demonstrate an IP address. This one is in a
range reserved for documentation (by RFC 5737), like `example.com`.
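The template's example might then look something like this (the hostname is
illustrative; 192.0.2.1 is from the RFC 5737 documentation range):

    ALLOWED_HOSTS = ['zulip.example.com', '192.0.2.1']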
For now, this does nothing in a production environment, but it should
simplify the process of doing testing on the Thumbor implementation,
by integrating a lot of dependency management logic.