Running `./manage.py email-mirror` used to fail on Python 3
because twisted.mail.imap4 is not Python 3 compatible.
Instead of failing with a traceback, display a message informing
the user that email-mirror is not available on Python 3.
Also add tools/test-management to py3-backend.
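A minimal sketch of the guard this describes, assuming a Django management command (the exact message and placement in the real command may differ):

    import sys

    from django.core.management.base import BaseCommand

    class Command(BaseCommand):
        help = "Mirror emails into Zulip."

        def handle(self, *args, **options):
            if sys.version_info[0] >= 3:
                # twisted.mail.imap4 has no Python 3 support, so print a
                # readable message instead of dying with a traceback.
                print("email-mirror is not available on Python 3.")
                sys.exit(1)
            # ... existing Python 2 mirroring logic continues here.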
runtornado unbuffers its output using
`sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)`.
This is not Python 3 compatible, since we can't specify
buffering on a text stream in Python 3. Instead, pass Python's
'-u' option when calling runtornado.py to make its output
unbuffered.
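A minimal sketch of the idea, assuming the development runner starts runtornado via subprocess (command details are illustrative):

    import subprocess
    import sys

    # '-u' forces unbuffered stdout/stderr, replacing the old
    # os.fdopen(..., 'w', 0) trick that Python 3 rejects for text streams.
    cmd = [sys.executable, '-u', './manage.py', 'runtornado']
    subprocess.Popen(cmd)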
* get_realm returns None if no matching realm is present, but
create_stream.py assumed it raises Realm.DoesNotExist (see the sketch
after this list).
* Encode/decode strings properly.
* Replace `filter` with a list comprehension.
* Add '# type: ignore' to statements which use attributes from
the `posix` module, since stubs for posix are missing on Python 3.
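A minimal sketch of the get_realm fix mentioned above (import path and error handling are illustrative; the surrounding create_stream.py code isn't reproduced here):

    import sys

    from zerver.models import get_realm

    def get_realm_or_exit(domain):
        # Before: the script assumed get_realm() raises Realm.DoesNotExist.
        # After: get_realm() returns None when no realm matches, so check for it.
        realm = get_realm(domain)
        if realm is None:
            print("Unknown realm: %s" % (domain,))
            sys.exit(1)
        return realm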
This prototype from Dropbox Hack Week turned out to be too inefficient
to be used for realms with any significant amount of history, so we're
removing it.
It will be replaced by https://github.com/zulip/zulip/pull/673.
This results in a substantial performance improvement for all of
Zulip's backend templates.
Changes in templates:
- Change `block.super` to `super()`.
- Remove `load` tag because Jinja2 doesn't support it.
- Use `minified_js()|safe` instead of `{% minified_js %}`.
- Use `compressed_css()|safe` instead of `{% compressed_css %}`.
- `forloop.first` -> `loop.first`.
- Use `{{ csrf_input }}` instead of `{% csrf_token %}`.
- Use `{# ... #}` instead of `{% comment %}`.
- Use `url()` instead of `{% url %}`.
- Use `_()` instead of `{% trans %}` because in Jinja `trans` is a block tag.
- Use `{% trans %}` instead of `{% blocktrans %}`.
- Use `{% raw %}` instead of `{% verbatim %}`.
Changes in tools:
- Check for `trans` block in `check-templates` instead of `blocktrans`.
Changes in backend:
- Create a custom `render_to_response` function which takes a `request`
object instead of a `RequestContext` object (see the sketch after this
list). There are two reasons to do this:
  1. `RequestContext` is not compatible with Jinja2.
  2. `RequestContext` in `render_to_response` is deprecated.
- Add Jinja2-related support files in the zproject/jinja2 directory. This
includes a custom backend, a template renderer, compressors for JS and
CSS, and a Jinja2 environment handler.
- Enable `slugify` and `pluralize` filters in Jinja2 environment.
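A minimal sketch of the custom `render_to_response` helper described in the backend changes above (module placement and extra keyword handling in the real helper may differ):

    from django.shortcuts import render

    def render_to_response(template_name, context=None, request=None, **kwargs):
        # Take the request object directly; RequestContext is both
        # incompatible with Jinja2 and deprecated in render_to_response.
        return render(request, template_name, context, **kwargs)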
Fixes #620.
This commit adds the capability to keep track of and remove uploaded
files. Unclaimed attachments are files that have been uploaded to the
server but are not referred to in any message. A management command to
remove old unclaimed files after a week is also included.
Tests for getting the files referred to in messages are also included.
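A rough sketch of what the cleanup command could look like, assuming an Attachment model with a `messages` relation and a `create_time` field (all names here are illustrative, not necessarily the real schema):

    from datetime import timedelta

    from django.core.management.base import BaseCommand
    from django.utils import timezone

    from zerver.models import Attachment  # illustrative import path

    class Command(BaseCommand):
        help = "Remove unclaimed attachments older than one week."

        def handle(self, *args, **options):
            cutoff = timezone.now() - timedelta(weeks=1)
            old_unclaimed = Attachment.objects.filter(messages=None,
                                                      create_time__lt=cutoff)
            for attachment in old_unclaimed:
                # In practice we'd also delete the file from storage.
                attachment.delete()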
As documented in https://github.com/zulip/zulip/issues/441, Guardian
has quite poor performance, and in fact almost 50% of the time spent
running the Zulip backend test suite on my laptop was inside Guardian.
As part of this migration, we also clean up the old API_SUPER_USERS
variable used to mark EMAIL_GATEWAY_BOT as an API super user; that
permission is now managed entirely via the database.
When rebasing past this commit, developers will need to do a
`manage.py migrate` in order to apply the migration changes before the
server will run again.
We can't yet remove Guardian from INSTALLED_APPS, requirements.txt,
etc. in this release, because otherwise the reverse migration won't
work.
Fixes #441.
This change drops the memory used for Python processes run by Zulip in
development from about 1GB to 300MB on my laptop.
On the safety front, http://pika.readthedocs.org/en/latest/faq.html
explains: "Pika does not have any notion of threading in the code. If
you want to use Pika with threading, make sure you have a Pika
connection per thread, created in that thread. It is not safe to share
one Pika connection across threads." Since this code only connects
to rabbitmq inside the individual threads, I believe this should be
safe.
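A minimal sketch of the per-thread connection pattern the FAQ calls for (illustrative; not the actual Zulip queue worker code):

    import threading

    import pika

    def consume(queue_name):
        # Each thread opens its own connection and channel; Pika
        # connections must never be shared across threads.
        connection = pika.BlockingConnection(
            pika.ConnectionParameters(host='localhost'))
        channel = connection.channel()
        channel.queue_declare(queue=queue_name)
        for method, properties, body in channel.consume(queue_name):
            channel.basic_ack(method.delivery_tag)

    threading.Thread(target=consume, args=('test_queue',)).start()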
Progress towards #32.
The new Tornado handler tracking logic properly handled requests that
threw an exception or followed the RespondAsynchronously code path,
but did not properly de-allocate the handler in the synchronous case.
An easy reproducer for this is to load a new Zulip browser window;
that will leak 2 handler objects for the 2 synchronous requests made
from Django to Tornado as part of initial state fetching.
The recent Tornado memory leak fix
(1396eb7022) didn't use the correct
variable name for the current handler ID, causing this cleanup code to
fail in the event that a view raised an exception.
In 2ea0daab19, handlers were moved to
being tracked via the handlers_by_id dict, but nothing cleared this
dict, resulting in every handler object being leaked. Since a Tornado
process uses a different handler object for every request, this
resulted in a significant memory leak. We fix this by clearing the
handlers_by_id dict in the two code paths that would result in a
Tornado handler being de-allocated: the exception codepath and the
handler disconnect codepath.
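A minimal sketch of the bookkeeping pattern described here (names are illustrative, not the exact handler-tracking code):

    import itertools

    # In-flight Tornado handlers, keyed by a per-request ID.
    handlers_by_id = {}
    next_handler_id = itertools.count(1)

    def allocate_handler_id(handler):
        handler_id = next(next_handler_id)
        handlers_by_id[handler_id] = handler
        return handler_id

    def clear_handler_by_id(handler_id):
        # Called on both the exception codepath and the handler
        # disconnect codepath, so finished handlers no longer leak.
        handlers_by_id.pop(handler_id, None)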
Fixes #463.
Add a function email_allowed_for_realm that checks whether a user with
a given email is allowed to join a given realm (either because the email
has the right domain, or because the realm is open), and use it
whenever deciding whether to allow adding a user to a realm.
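A minimal sketch of what such a check could look like, assuming the realm has a `restricted_to_domain` flag and a `domain` field (field names are illustrative):

    def email_allowed_for_realm(email, realm):
        # Open realms accept any email address.
        if not realm.restricted_to_domain:
            return True
        # Otherwise the email's domain must match the realm's domain,
        # compared case-insensitively.
        domain = email.split('@', 1)[-1].lower()
        return domain == realm.domain.lower()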
This commit is not intended to change any behavior, except in one case
where the Zulip realm's domain was not being converted to lowercase.
Previously, client descriptors were referenced directly from the
handler object. Once we split the Tornado process into separate queue
and connection servers, these will no longer be in the same process,
so we need to reference them by ID instead.
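A minimal sketch of the indirection this describes (names are illustrative):

    # Client descriptors registered by ID, so callers in another process
    # can refer to them without holding the objects themselves.
    descriptors_by_id = {}

    def register_descriptor(descriptor_id, descriptor):
        descriptors_by_id[descriptor_id] = descriptor

    def get_descriptor_by_id(descriptor_id):
        return descriptors_by_id.get(descriptor_id)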
Django's `manage.py runserver` prints a relatively low-information log
line for every request of the form:
[14/Dec/2015 00:43:06]"GET /static/js/message_list.js HTTP/1.0" 200 21969
This is pretty spammy, especially given that we already have our own
middleware printing a more detailed version of the same log lines:
2015-12-14 00:43:06,935 INFO 127.0.0.1 GET 200 0ms /static/js/message_list.js (unauth via ?)
Since runserver doesn't support controlling whether these log
lines are printed, we wrap it with a small bit of code that silences
the log lines for 200/304 requests (aka the uninteresting ones).
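A minimal sketch of one way to do that wrapping, by subclassing the runserver request handler so 200/304 lines are dropped (a sketch of the approach, not necessarily the exact code):

    from django.core.servers.basehttp import WSGIRequestHandler

    class QuietWSGIRequestHandler(WSGIRequestHandler):
        def log_message(self, format, *args):
            # For the standard '"%s" %s %s' request log call, args[1] is
            # the status code; skip the uninteresting 200/304 lines.
            if len(args) >= 2 and str(args[1]) in ('200', '304'):
                return
            WSGIRequestHandler.log_message(self, format, *args)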
notify_new_user was recently moved from zerver.views to
zerver.lib.actions, and this reference wasn't properly updated. This
would cause an error when running `manage.py create_user` from the
command line.
get_realm is better in two key ways:
* It uses memcached to fetch the data from the cache and thus is faster.
* It does a case-insensitive query and thus is safer.
This also removes the convenient way to run statsd in the Dev VM,
because we don't anticipate anyone doing that. It's just two lines of
config anyway:
STATSD_HOST = 'localhost'
STATSD_PREFIX = 'user'
(imported from commit 5b09422ee0e956bc7f336dd1e575634380b8bfa2)
The one-time-use address is a unique token which maps to state stored
in redis. We store the user_id, recipient_id, and subject. When an email
is received at this address, it is sent to the stored recipient by the
stored user. Anyone with this address can send a single message as this
user.
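A rough sketch of the redis bookkeeping this describes (key format, token generation, and field names are illustrative, not the actual implementation):

    import random
    import string

    import redis

    client = redis.StrictRedis()

    def create_one_time_address(user_id, recipient_id, subject):
        token = ''.join(random.choice(string.ascii_lowercase + string.digits)
                        for _ in range(32))
        client.hmset('one_time_address:' + token,
                     {'user_id': user_id,
                      'recipient_id': recipient_id,
                      'subject': subject})
        # The token becomes the local part of the reply address.
        return token + '@mail.example.com'

    def lookup_one_time_address(token):
        return client.hgetall('one_time_address:' + token)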
(imported from commit 4219417bdc30c033a6cf7a0c7c0939f7d0308144)
We already have a try-except earlier in the file about email_gateway_user, so we don't
need to check for it again.
(imported from commit 2d9fa357fab2605916c5c5cb61961c0a121b1211)
That way if all you do is briefly check Zulip because you got the
email, we'll send you another one tomorrow.
(imported from commit fcbbd264c5e5fea7352f0fee6989e000af7b7bed)
This must be run manually on staging after deployment. Once it has been run,
it can be deleted. It only needs to be run on staging, not prod.
(imported from commit 79252c23ba8cda93500a18aa7b02575f406dd379)
We now have two variables:
EXTERNAL_API_PATH: e.g. staging.zulip.com/api
EXTERNAL_API_URI: e.g. https://staging.zulip.com/api
The former is primarily needed for certain integrations.
(imported from commit 3878b99a4d835c5fcc2a2c6001bc7eeeaf4c9363)
This requires a puppet apply on each of staging and prod0 to update
the nginx configuration to support the new URL when it is deployed.
(imported from commit a35a71a563fd1daca0d3ea4ec6874c5719a8564f)
This command should be run continuously via supervisor. It periodically
checks for new email messages to send, and then sends them. This is for
sending email that you've queued via the Email table, instead of via
Mandrill (as is the case for our localserver/development deploys).
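A rough sketch of such a delivery loop, assuming an Email queue model with a `sent` flag (model and field names are illustrative):

    import time

    from django.core.mail import send_mail
    from django.core.management.base import BaseCommand

    from zerver.models import Email  # illustrative model/import path

    class Command(BaseCommand):
        help = "Continuously send email queued in the Email table."

        def handle(self, *args, **options):
            while True:
                for email in Email.objects.filter(sent=False):
                    send_mail(email.subject, email.body,
                              email.from_address, [email.to_address])
                    email.sent = True
                    email.save()
                time.sleep(60)  # poll periodically for newly queued email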
(imported from commit a2295e97b70a54ba99d145d79333ec76b050b291)
In particular, EXTERNAL_HOST doesn't specify the protocol, which gets
coerced to HTTPS.
(imported from commit 53f2e8106cf33114dcdd2ad17e09b41609641e71)