Previously, we set restrict_to_domain and invite_required differently
depending on whether we were setting up a community or a corporate
realm. Setting restrict_to_domain requires validating the domain of the
user's email, which is messy in the web realm creation flow, since we
validate the user's email before knowing whether they intend to set up
a corporate or community realm. The simplest solution is to have the realm
creation flow impose as few restrictions as possible (community defaults),
and then worry about restrict_to_domain etc. after the user is already in.
We set the test suite to explicitly use the old defaults, since several
of the tests depend on them.
This commit adds a database migration, a new string_id argument to the
management realm creation command, and a short name field to the web
realm creation form when REALMS_HAVE_SUBDOMAINS is False.
Adds a new field org_type to Realm. Defaults for restricted_to_domain
and invite_required are now controlled by org_type at time of realm
creation (see zerver.lib.actions.do_create_realm), rather than at the
database level. Note that the backend defaults are all
org_type=corporate, since that matches the current assumptions in the
codebase, whereas the frontend default is org_type=community, since if
a user isn't sure they probably want community.
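A hedged sketch of the idea in zerver.lib.actions.do_create_realm; the
exact signature, the Realm.CORPORATE constant, and the per-type defaults
shown here are illustrative, not the real code:

    from zerver.models import Realm

    def do_create_realm(string_id, name, org_type):
        # Defaults now flow from org_type at creation time, rather than
        # from the database schema.
        if org_type == Realm.CORPORATE:  # assumed constant name
            restricted_to_domain, invite_required = True, False
        else:
            restricted_to_domain, invite_required = False, True
        realm = Realm(string_id=string_id, name=name, org_type=org_type,
                      restricted_to_domain=restricted_to_domain,
                      invite_required=invite_required)
        realm.save()
        return realm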
Since we will likely in the future enable/disable various
administrative features based on whether an organization is corporate
or community, we discuss those issues in the realm creation form.
Before we actually implement any such features, we'll want to make
sure users understand what type of organization they are a member of.
Choice of org_type (via radio button) has been added to the realm
creation flow and the realm creation management command, and the
open-realm option removed.
The database defaults have not been changed, which allows our testing code
to work unchanged.
[includes some HTML/CSS work by Brock Whittaker to make it look nice]
Previously, the generate-fixtures shell script called into Django
multiple times in order to check whether the database was in a
reasonable state. Since there's a lot of overhead to starting up
Django, this resulted in `test-backend` and `test-js-with-casper`
being quite slow to run a single small test (2.8s or so) even on my
very fast laptop.
We fix this by moving the checks into a new Python library, so that
we can avoid paying the Django startup overhead 3 times unnecessarily.
The result saves about 1.2s (~40%) from the time required to run a
single backend test.
Fixes #1221.
This is a convenient tool to have around.
We require an unusual argument value of "YES" to send to everyone on
the server, since that's something one should do with a great deal of
care.
We no longer have an in-process code path to export
UserMessage rows. We want to only maintain the
subprocess code, which we'll always use in production,
and which will work fine in dev.
The previous export tool would only work properly for small realms,
and was missing a number of important features:
* Export of avatars and uploads from S3
* Export of presence data, activity data, etc.
* Faithful export/import of timestamps
* Parallel export of messages
* Not getting OOM-killed on large realms
The new tool runs as a pair of documented management commands, and
solves all of those problems.
We also add a new management command for exporting the data of an
individual user.
All other Zulip management command names have underscores, so
rename email-mirror to email_mirror.
This will also make it possible to import this module, which will
help in writing tests for it.
Running `./manage.py email-mirror` used to fail on python 3
because twisted.mail.imap4 is not python 3 compatible.
Display a message informing the user that email-mirror is not
available on python 3 instead of failing with a traceback.
Also add tools/test-management to py3-backend.
runtornado unbuffers its output using
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0).
This is not python 3 compatible since we can't specify
buffering on a text stream in python 3. So use the '-u'
option of python when calling runtornado.py to make output
unbuffered.
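A minimal sketch of the workaround; the real caller and paths may
differ:

    # Python 2 idiom that breaks on Python 3 text streams:
    #     sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
    # Instead, run the subcommand with unbuffered output via -u:
    import subprocess
    subprocess.check_call(["python", "-u", "./manage.py", "runtornado"])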
* get_realm returns None if no matching realm is present, but
create_stream.py assumed it raises Realm.DoesNotExist.
* encode/decode strings properly.
* Replace filter with a list comprehension.
* Add '# type: ignore' to statements which use attributes from
module `posix`, since stubs for posix are missing on python 3.
This prototype from Dropbox Hack Week turned out to be too inefficient
to be used for realms with any significant amount of history, so we're
removing it.
It will be replaced by https://github.com/zulip/zulip/pull/673.
This results in a substantial performance improvement for all of
Zulip's backend templates.
Changes in templates:
- Change `block.super` to `super()`.
- Remove `load` tag because Jinja2 doesn't support it.
- Use `minified_js()|safe` instead of `{% minified_js %}`.
- Use `compressed_css()|safe` instead of `{% compressed_css %}`.
- `forloop.first` -> `loop.first`.
- Use `{{ csrf_input }}` instead of `{% csrf_token %}`.
- Use `{# ... #}` instead of `{% comment %}`.
- Use `url()` instead of `{% url %}`.
- Use `_()` instead of `{% trans %}` because in Jinja `trans` is a block tag.
- Use `{% trans %}` instead of `{% blocktrans %}`.
- Use `{% raw %}` instead of `{% verbatim %}`.
Changes in tools:
- Check for `trans` block in `check-templates` instead of `blocktrans`.
Changes in backend:
- Create a custom `render_to_response` function which takes a `request`
object instead of a `RequestContext` object (see the sketch after this
list). There are two reasons to do this:
1. `RequestContext` is not compatible with Jinja2.
2. `RequestContext` in `render_to_response` is deprecated.
- Add Jinja2 related support files in zproject/jinja2 directory. It
includes a custom backend and a template renderer, compressors for js
and css and Jinja2 environment handler.
- Enable `slugify` and `pluralize` filters in Jinja2 environment.
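A minimal sketch of such a wrapper, assuming Django's Jinja2-aware
render() shortcut; the actual helper in zproject/jinja2 may differ:

    from django.shortcuts import render

    def render_to_response(template_name, context=None, request=None,
                           **kwargs):
        # Pass the request directly; Jinja2 cannot consume a
        # RequestContext, and constructing one here is deprecated anyway.
        return render(request, template_name, context, **kwargs)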
Fixes #620.
This commit adds the capability to keep track of and remove uploaded
files. Unclaimed attachments are files that have been uploaded to the
server but are not referenced by any message. A management command to
remove unclaimed files older than a week is also included.
Tests for getting the files referenced by messages are also included.
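A hedged sketch of the cleanup query; the model and field names
(Attachment, messages, create_time) are assumptions:

    from datetime import timedelta
    from django.utils import timezone
    from zerver.models import Attachment  # assumed model name

    def get_old_unclaimed_attachments(weeks_ago=1):
        # Unclaimed: uploaded, but referenced by no message.
        cutoff = timezone.now() - timedelta(weeks=weeks_ago)
        return Attachment.objects.filter(messages=None,
                                         create_time__lt=cutoff)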
As documented in https://github.com/zulip/zulip/issues/441, Guardian
has quite poor performance, and in fact almost 50% of the time spent
running the Zulip backend test suite on my laptop was inside Guardian.
As part of this migration, we also clean up the old API_SUPER_USERS
variable used to mark EMAIL_GATEWAY_BOT as an API super user; now that
permission is managed entirely via the database.
When rebasing past this commit, developers will need to do a
`manage.py migrate` in order to apply the migration changes before the
server will run again.
We can't yet remove Guardian from INSTALLED_APPS, requirements.txt,
etc. in this release, because otherwise the reverse migration won't
work.
Fixes #441.
This change drops the memory used for Python processes run by Zulip in
development from about 1GB to 300MB on my laptop.
On the front of safety, http://pika.readthedocs.org/en/latest/faq.html
explains "Pika does not have any notion of threading in the code. If
you want to use Pika with threading, make sure you have a Pika
connection per thread, created in that thread. It is not safe to share
one Pika connection across threads.". Since this code only connects
to rabbitmq inside the individual threads, I believe this should be
safe.
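A minimal sketch of the connection-per-thread pattern the FAQ
prescribes; parameters are illustrative:

    import threading
    import pika

    class QueueWorker(threading.Thread):
        def run(self):
            # Create the connection inside the thread that will use it;
            # a Pika connection must never be shared across threads.
            connection = pika.BlockingConnection(
                pika.ConnectionParameters(host="localhost"))
            channel = connection.channel()
            # ... declare and consume queues on this thread only ...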
Progress towards #32.
The new Tornado handler tracking logic properly handled requests that
threw an exception or followed the RespondAsynchronously code path,
but did not properly de-allocate the handler in the synchronous case.
An easy reproducer for this is to load a new Zulip browser window;
that will leak 2 handler objects for the 2 synchronous requests made
from Django to Tornado as part of initial state fetching.
The recent Tornado memory leak fix
(1396eb7022) didn't use the correct
variable name for the current handler ID, causing this cleanup code to
fail in the event that a view raised an exception.
In 2ea0daab19, handlers were moved to
being tracked via the handlers_by_id dict, but nothing cleared this
dict, resulting in every handler object being leaked. Since a Tornado
process uses a different handler object for every request, this
resulted in a significant memory leak. We fix this by clearing the
handlers_by_id dict in the two code paths that would result in a
Tornado handler being de-allocated: the exception codepath and the
handler disconnect codepath.
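A minimal sketch of the bookkeeping; the surrounding Tornado code is
omitted and the function name is illustrative:

    # handlers_by_id maps handler IDs to Tornado handler objects.
    handlers_by_id = {}

    def clear_handler_by_id(handler_id):
        # Called on both the exception codepath and the handler
        # disconnect codepath, so handler objects are not leaked.
        del handlers_by_id[handler_id]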
Fixes #463.
Add a function email_allowed_for_realm that checks whether a user with
given email is allowed to join a given realm (either because the email
has the right domain, or because the realm is open), and use it
whenever deciding whether to allow adding a user to a realm.
This commit is not intended to change any behavior, except in one case
where the Zulip realm's domain was not being converted to lowercase.
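A minimal sketch of the check, assuming the realm model's
restricted_to_domain and domain fields:

    def email_allowed_for_realm(email, realm):
        # Open realms accept any email address.
        if not realm.restricted_to_domain:
            return True
        domain = email.split("@", 1)[-1].lower()
        return domain == realm.domain.lower()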
Previously, client descriptors were referenced directly from the
handler object. Once we split the Tornado process into separate queue
and connection servers, these will no longer be in the same process,
so we need to reference them by ID instead.
Django's `manage.py runserver` prints a relatively low-information log
line for every request of the form:
[14/Dec/2015 00:43:06]"GET /static/js/message_list.js HTTP/1.0" 200 21969
This is pretty spammy, especially given that we already have our own
middleware printing a more detailed version of the same log lines:
2015-12-14 00:43:06,935 INFO 127.0.0.1 GET 200 0ms /static/js/message_list.js (unauth via ?)
Since runserver doesn't support controlling whether these log
lines are printed, we wrap it with a small bit of code that silences
the log lines for 200/304 requests (aka the uninteresting ones).
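A hedged sketch of the silencing trick; the exact patch point in our
wrapper may differ:

    from django.core.servers.basehttp import WSGIRequestHandler

    _orig_log_message = WSGIRequestHandler.log_message

    def _quiet_log_message(self, format, *args):
        # For the standard request log line, args is
        # (request_line, status_code, size); skip the boring statuses.
        if len(args) < 2 or args[1] not in ("200", "304"):
            _orig_log_message(self, format, *args)

    WSGIRequestHandler.log_message = _quiet_log_message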
notify_new_user was recently moved to zerver.lib.actions from
zerver.views and this wasn't properly updated. This would give an
error when doing a `manage.py create_user` from the command line.
get_realm is better in two key ways:
* It uses memcached to fetch the data from the cache and thus is faster.
* It does a case-insensitive query and thus is safer.
This also removes the convenient way to run statsd in the Dev VM,
because we don't anticipate anyone doing that. It's just 2 lines of
config to configure it anyway:
STATSD_HOST = 'localhost'
STATSD_PREFIX = 'user'
(imported from commit 5b09422ee0e956bc7f336dd1e575634380b8bfa2)
The one-time-use address is a unique token which maps to state stored
in redis. We store the user_id, recipient_id, and subject. When an email
is received at this address it is sent to the stored recipient by the
stored user. Anyone with this address can send a single message as this
user.
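A hedged sketch of the redis bookkeeping; the key layout and the
hmset/hgetall calls (redis-py of this era) are assumptions:

    import redis

    redis_client = redis.StrictRedis()

    def store_missed_message_address(token, user_id, recipient_id,
                                     subject):
        key = "missed_message:" + token
        redis_client.hmset(key, {"user_id": user_id,
                                 "recipient_id": recipient_id,
                                 "subject": subject})

    def consume_missed_message_address(token):
        key = "missed_message:" + token
        data = redis_client.hgetall(key)
        # One-time use: remove the token once it has been redeemed.
        redis_client.delete(key)
        return data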
(imported from commit 4219417bdc30c033a6cf7a0c7c0939f7d0308144)
We already have a try-except earlier in the file about email_gateway_user, so we don't
need to check for it again.
(imported from commit 2d9fa357fab2605916c5c5cb61961c0a121b1211)
That way if all you do is briefly check Zulip because you got the
email, we'll send you another one tomorrow.
(imported from commit fcbbd264c5e5fea7352f0fee6989e000af7b7bed)
This must be run manually on staging after deployment. Once it has been run,
it can be deleted. It only needs to be run on staging, not prod.
(imported from commit 79252c23ba8cda93500a18aa7b02575f406dd379)
We now have two variables:
EXTERNAL_API_PATH: e.g. staging.zulip.com/api
EXTERNAL_API_URI: e.g. https://staging.zulip.com/api
The former is primarily needed for certain integrations.
(imported from commit 3878b99a4d835c5fcc2a2c6001bc7eeeaf4c9363)
This requires a puppet apply on each of staging and prod0 to update
the nginx configuration to support the new URL when it is deployed.
(imported from commit a35a71a563fd1daca0d3ea4ec6874c5719a8564f)
This command should be run continuously via supervisor. It periodically
checks for new email messages to send, and then sends them. This is for
sending email that you've queued via the Email table, instead of mandrill
(as is the case for our localserver/development deploys).
(imported from commit a2295e97b70a54ba99d145d79333ec76b050b291)
In particular, EXTERNAL_HOST doesn't specify the protocol, which gets
coerced to HTTPS.
(imported from commit 53f2e8106cf33114dcdd2ad17e09b41609641e71)
We only needed a transaction here to workaround problems associated
with not having database-level autocommit.
(imported from commit 240ba05a4a4a846a7ff62e6e59e403ab0d78ab11)
The register_json_consumer() function now expects its callback
function to accept a single argument, which is the payload, as
none of the callbacks cared about channel, method, and properties.
This change breaks down as follows:
* A couple test stubs and subclasses were simplified.
* All the consume() and consume_wrapper() functions in
queue_processors.py were simplified.
* Two callbacks via runtornado.py were simplified. One
of the callbacks was socket.respond_send_message, which
had an additional caller, i.e. not register_json_consumer()
calling back to it, and the caller was simplified not
to pass None for the three removed arguments.
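A minimal sketch of the new callback shape; the queue name and handler
body are illustrative, and queue_client is assumed to be a
SimpleQueueClient instance:

    # Before: callback(channel, method, properties, payload)
    # After:  callback(payload)
    def process_event(event):
        # `event` is the decoded JSON payload; channel, method, and
        # properties are no longer passed in.
        print(event)

    queue_client.register_json_consumer("test_suite", process_event)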
(imported from commit 792316e20be619458dd5036745233f37e6ffcf43)
We now ensure `create_realm` adds you to a default deployment and that
`create_deployment` removes the old deployment association when
performed.
(imported from commit 5b94fb07b8e11332765b057dc640a5ed873ec99e)
Before we were removing items individually from the queue. We now
directly use RabbitMQ's queue purging mechanism.
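A minimal sketch using pika's server-side purge; connection setup and
the queue name are illustrative:

    import pika

    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    # One server-side operation, instead of consuming and discarding
    # messages one at a time.
    channel.queue_purge(queue="user_activity")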
(imported from commit 62ab52c724c5a221b4c81a967154a4046a579f84)
This will allow us to redirect clients to the correct local site.
To apply this migration, just run:
python manage.py migrate zilencer 0002
(imported from commit 7bd39b5f035145b6b52e1b2cb2ad5f6720d598ce)
Here we introduce a new Django app, zilencer. The intent is to not have
this app enabled on LOCALSERVER instances, and for it to grow to include
all the functionality we want to have in our central server that isn't
relevant for local deployments.
Currently we have to modify functions in zerver/* to match; in the
future, it would be cool to have the relevant shared code broken out
into a separate library.
This commit includes both the migration to create the models as well as a
data migration that (for non-LOCALSERVER) creates a single default
Deployment for zulip.com.
To apply this migration to your system, run:
./manage.py migrate zilencer
(imported from commit 86d5497ac120e03fa7f298a9cc08b192d5939b43)
New dependency: sockjs-tornado
One known limitation is that we don't clean up sessions for
non-websockets transports. This is a bug in Tornado so I'm going to
look at upgrading us to the latest version:
https://github.com/mrjoes/sockjs-tornado/issues/47
(imported from commit 31cdb7596dd5ee094ab006c31757db17dca8899b)
The current version should only be used for testing; for example,
if you want to create a bunch of streams for stress testing, you
can run this in a loop.
(imported from commit ec51a431fb9679fc18379e4c6ecdba66bc75a395)
Handled by the queue processor for signups. Added a management command
that accomplishes the same task, in case it's needed for manually added users,
or in case we goof and need to remove queued emails for a given user.
This addresses Trac #1807
(imported from commit 6727b82a07fa6a3ea3d827860c9e60fd0602297a)
Empirically, we only get these for malformed emails where the charset
specified in a message part header does not match the true encoding of
the part. I checked what the resulting Zulip looked like for the
original offender, and it looked fine when ignoring errors.
(imported from commit ac6ba65b611cb22d4ec547b75a585abce6fc50b0)
The overall message charset may be null or not match the part's
charset. Even though it's unclear from the documentation,
experimentally using the charset for a message part seems to give you
the charset even for non-multipart emails.
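A hedged sketch of the decode logic using the stdlib email API; the
surrounding handler is illustrative:

    def extract_part_text(part):
        # Prefer the part's own charset; experimentally this works even
        # for non-multipart emails. Ignore errors, since a part's
        # declared charset sometimes does not match its true encoding.
        charset = part.get_content_charset()
        if charset is None:
            return None
        return part.get_payload(decode=True).decode(charset, "ignore")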
(imported from commit 0e1d23073f4c53041f9760e66a6635f8a94893d1)
This is useful in debugging when you just want to discard all the
messages in a queue because they have the wrong structure.
(imported from commit 8559ac74f11841430b4d0c801d5506ebcb74c3eb)
These engagement data will be useful both for making pretty graphs of
how addicted our users are as well as for allowing us to check whether
a new deployment is actually using the product or not.
This measures "number of minutes during which each user had checked
the app within the previous 15 minutes". It should correctly not
count server-initiated reloads.
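A hedged sketch of the metric, counting minutes with a check in the
preceding 15 minutes; the real aggregation code surely differs:

    from datetime import timedelta

    def minutes_active(check_times, start, end):
        # Count each minute m in [start, end] such that the user
        # checked the app at some time in (m - 15 minutes, m].
        count = 0
        minute = start
        window = timedelta(minutes=15)
        while minute <= end:
            if any(minute - window < t <= minute for t in check_times):
                count += 1
            minute += timedelta(minutes=1)
        return count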
It's possible that we should use something less aggressive than
mousemove; I'm a little torn on that because you really can check the
app for new messages without doing anything active.
This is somewhat tested but there are a few outstanding issues:
* Mobile apps don't report these data. It should be as easy as having
them send in update_active_status queries with new_user_input=true.
* The semantics of this should be better documented (e.g. the
management script should print out the spec above).
(imported from commit ec8b2dc96b180e1951df00490707ae916887178e)
This will allow mail with an implicit destination (mailing lists, complex
forwarding) to be received by our system.
See http://support.google.com/a/bin/answer.py?hl=en&answer=2368151 for
documentation of Google's behaviour that adds this header.
(imported from commit f8fd500e3c27e12af5941c63c91d5c796a2cd24a)
Previously the email gateway had to be the only address in a recipient
field or we'd mis-parse the recipient.
This commit also makes the mirror correctly handle addresses of the
form "Jessica McKellar <jesstess@zulip.com>".
(imported from commit 7435f2b59b8f47dc599cc869f64597a730af7d12)
The mirror will use INBOX when deployed and Test locally. Send an
e-mail to the Test mailbox by including the word "localhost" in the
subject; a GMail filter will place it in Test on receipt.
(imported from commit bacf9a9554c8c5e1f3ec8497761edf2c15d3745d)
For now, just do this, and we'll reach out to realms having trouble
manually. We may eventually need to automatically reply to the e-mail,
reach out to a realm admin, etc.
(imported from commit 5c5ac354066f9e9be3fb928e1f8801613c22c1ac)
This needs to be deployed to both staging and prod at the same
off-peak time (and the schema migration run).
At the time it is deployed, we need to make a few changes directly in
the database:
(1) UPDATE django_content_type set app_label='zerver' where app_label='zephyr';
(2) UPDATE south_migrationhistory set app_name='zerver' where app_name='zephyr';
(imported from commit eb3fd719571740189514ef0b884738cb30df1320)