This adds support for running a Zulip production server with each
realm on its own unique subdomain, e.g. https://realm_name.example.com.
This patch includes a ton of important features:
* Configuring the Zulip session middleware to issue cookies correctly
for the subdomains case (see the sketch at the end of this message).
* Throwing an error if the user tries to visit an invalid subdomain.
* Running a portion of the Casper tests with REALMS_HAVE_SUBDOMAINS
enabled to test the subdomain signup process.
* Updating our integrations documentation to refer to the current subdomain.
* Enforcing that users can only log in to the subdomain of their realm
(but not yet restricting the API; that will be tightened in a future commit).
Note that toggling settings.REALMS_HAVE_SUBDOMAINS on a live server is
not supported without manual intervention (the main problem will be
adding "subdomain" values for all the existing realms).
[substantially modified by tabbott as part of merging]
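To make the session cookie work across realm subdomains, it has to be
issued for the parent domain rather than a single host. A minimal
sketch of the idea, assuming an EXTERNAL_HOST-style setting (the exact
setting names and plumbing in Zulip may differ):

    # Sketch only: everything besides Django's SESSION_COOKIE_DOMAIN
    # is illustrative.
    REALMS_HAVE_SUBDOMAINS = True
    EXTERNAL_HOST = "example.com:443"

    if REALMS_HAVE_SUBDOMAINS:
        # Issue the session cookie for ".example.com" so browsers send
        # it to realm_name.example.com as well as the bare domain.
        SESSION_COOKIE_DOMAIN = "." + EXTERNAL_HOST.split(":")[0]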
This works around a nasty problem with Webpack: you can't run two
copies of the Webpack development server on the same project at the
same time (even on different ports). The second copy doesn't fail; it
just hangs waiting for some lock, which is confusing. But even if that
were solved, we don't actually need the Webpack development server
running to run the Casper tests; we just need bundle.js built.
So the easy solution is to just run webpack manually and be sure to
include bundle.js in the JS_SPECS entry.
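For instance, the test runner can build bundle.js once up front instead
of starting a second development server; a rough sketch (the webpack
config path and invocation here are illustrative, not the exact
commands Zulip's tooling uses):

    # Build the bundle once before running the Casper suite.
    import subprocess

    subprocess.check_call([
        "node_modules/.bin/webpack",
        "--config", "tools/webpack.config.js",
    ])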
As a follow-up to this change, we should clean up how test_settings.py
is implemented to not require duplicating code from settings.py.
Fixes #878.
Camo is a caching image proxy, used in Zulip to avoid mixed-content
warnings by proxying HTTP image content over HTTPS. We've been using
it in zulip.com production for years; this change makes it available
in standalone Zulip deployments.
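For reference, a Camo proxy URL is the HMAC-SHA1 digest of the image
URL (keyed with the shared Camo secret) followed by the hex-encoded
URL itself. A hedged sketch of generating one (the CAMO_URI and
CAMO_KEY names are placeholders for whatever the deployment configures):

    import hashlib
    import hmac

    CAMO_URI = "https://example.com/external_content/"  # placeholder
    CAMO_KEY = b"shared-camo-secret"                     # placeholder

    def get_camo_url(image_url: str) -> str:
        # Camo expects <hmac-sha1 hex digest>/<hex-encoded original URL>.
        encoded = image_url.encode("utf-8")
        digest = hmac.new(CAMO_KEY, encoded, hashlib.sha1).hexdigest()
        return CAMO_URI + digest + "/" + encoded.hex()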
This could help with debugging exactly what happened with an issue
down the line.
(imported from commit cc7321d742875b644d4727a084b462dcd01dcf10)
I figure we can start with 600s as a maximum age -- our threads do
many dozens of requests per minute, so we should get most of the
benefit of permanently persisting connections this way. I could also
be convinced to do just 60s, though the impact will likely be less
visible on staging. 600s seems to be what Django originally had for
this parameter before they disabled it by default. See:
https://groups.google.com/forum/#!msg/django-developers/rH0QQP7tI6w/yBusiFTNBR4J
for discussion, which also suggests we might have issues with
runserver that we should watch out for.
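In Django settings terms this is just the CONN_MAX_AGE key in the
DATABASES configuration; a minimal sketch (the connection details are
illustrative):

    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql_psycopg2",
            "NAME": "zulip",        # illustrative; real values differ
            "HOST": "localhost",
            # Reuse each database connection for up to 600 seconds
            # instead of opening a new one for every request.
            "CONN_MAX_AGE": 600,
        }
    }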
(imported from commit 0ae09fa4f1b39cc88c76fa58258aaf20ab168dcf)
This requires no changes in production, but is tagged as manual to
remind developers that they need to edit and run the tools/migrate-db
script to fix up their local database instances.
(imported from commit fbf764fb61592ef994d6d2ad56edad65ff01f14b)
This includes a hack to preserve humbug/backends.py as a symlink, so
that we don't need to regenerate all our old sessions.
(imported from commit b7918988b31c71ec01bbdc270db7017d4069221d)