Several recently merged webhooks were not checking that the actual
webhook request succeeded rather than returning an error. While the
tests would usually still fail when checking whether the message came
back correctly, this hid the root-cause errors and thus made them
much harder to debug.
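A minimal sketch of the missing check, assuming Zulip's webhook test
helpers (the base class, endpoint URL, and helper names are assumptions,
not exact code):

    class ExampleWebhookTest(WebhookTestCase):  # base class name is an assumption
        def test_build_notification(self):
            result = self.client.post(
                "/api/v1/external/example?api_key=abcd&stream=commits",
                self.fixture_data("example", "build_passed"),
                content_type="application/json")
            # This is the assertion that was missing: fail loudly (with the
            # actual error message) if the webhook request itself errored.
            self.assert_json_success(result)
            # Only then check that the expected message was produced.
            msg = self.get_last_message()
            self.assertEqual(msg.subject, "example build")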
We were incorrectly applying the rate limiting rules to webhooks even
if rate limiting was disabled (as in the test suite), causing test
failures when the total number of webhook tests in Zulip got too high.
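Roughly, the intended behavior is the following (a sketch; the function
and helper names are illustrative, only settings.RATE_LIMITING comes from
the actual configuration):

    from django.conf import settings

    def maybe_rate_limit(user_profile):
        # Hypothetical sketch: when rate limiting is disabled (as in the
        # test suite), skip it entirely instead of still counting requests
        # against a limit.
        if not settings.RATE_LIMITING:
            return
        rate_limit_user(user_profile)  # stand-in for the real rate-limit check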
This integration relies on the TeamCity "tcWebHooks" plugin, which is
available at
https://netwolfuk.wordpress.com/category/teamcity/tcplugins/tcwebhooks/
It posts build failure and success notifications to a stream specified
in the webhook URL, using the name of the build configuration as the
topic.
For personal builds, it tries to map the TeamCity username to a Zulip
username, and sends a private message to that person.
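The routing logic is roughly the following (a simplified sketch; the
payload field names and helper functions are illustrative, not the
actual integration code):

    def route_teamcity_notification(payload, stream, realm):
        topic = payload["buildName"]  # build configuration name as the topic
        content = format_build_message(payload)
        if payload.get("personal"):
            # Personal build: try to map the TeamCity username to a Zulip
            # user and send a private message instead of posting publicly.
            recipient = lookup_zulip_user(payload["triggeredBy"], realm)
            if recipient is not None:
                send_private_message(recipient, content)
                return
        send_stream_message(stream, topic, content)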
As documented in https://github.com/zulip/zulip/issues/441, Guardian
has quite poor performance, and in fact almost 50% of the time spent
running the Zulip backend test suite on my laptop was inside Guardian.
As part of this migration, we also clean up the old API_SUPER_USERS
variable used to mark EMAIL_GATEWAY_BOT as an API super user; now that
permission is managed entirely via the database.
When rebasing past this commit, developers will need to do a
`manage.py migrate` in order to apply the migration changes before the
server will run again.
We can't yet remove Guardian from INSTALLED_APPS, requirements.txt,
etc. in this release, because otherwise the reverse migration won't
work.
Fixes #441.
When an uploaded avatar image is not a valid image file, PIL raises an
IOError. Catch the IOError raised by PIL and raise a JsonableError
instead, which returns a response with status code 400.
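A minimal sketch of the handling (the helper name and the JsonableError
import path are assumptions):

    import io

    from PIL import Image
    from zerver.lib.request import JsonableError  # import path is an assumption

    def check_valid_avatar(image_data):
        # PIL raises IOError when given bytes that are not a valid image;
        # convert that into a JsonableError, which the API layer turns into
        # an HTTP 400 response instead of a 500.
        try:
            Image.open(io.BytesIO(image_data)).verify()
        except IOError:
            raise JsonableError("Could not decode the uploaded avatar image")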
S3Test is now only the S3-specific test (which isn't even run), so we
can now invest in making FileUploadTest have good coverage of the
(local) file upload code paths.
Previously, the UserProfile objects were created in the order produced
by iterating over a set, which meant tests would randomly start failing
if the code that runs before this part of populate_db changed (and thus
caused the set used to pass users into bulk_create_users to enumerate
in a different order).
This fixes the issue in two ways: first by sorting the users inside
bulk_create_users, and second by attaching subscriptions to users
based on a deterministic ordering.
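A sketch of the first half of the fix (simplified; the real
bulk_create_users takes more arguments, and create_user_profile here is
an illustrative helper):

    def bulk_create_users(realm, users_raw):
        # users_raw is a set of (email, full_name) tuples; sorting it gives
        # a deterministic creation order, so UserProfile ids (and the later
        # subscription ordering) no longer depend on set iteration order.
        users = sorted(users_raw)
        for email, full_name in users:
            create_user_profile(realm, email, full_name)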
The restarted Tornado processes seemed to escape the process group and
thus continue running after run-dev.py finished.
While we're at it, we don't need to dump/reload event queues in the
test suite either.
The previous version of sanitize_name dropped all unicode characters
and mangled filenames with multiple `.`s in the extension, leading to
confusing URLs for files uploaded to Zulip.
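A sketch of the improved behavior (not necessarily the exact
implementation merged):

    import re
    import unicodedata

    def sanitize_name(value):
        # Keep (normalized) unicode word characters and the "."s separating
        # extension parts, so a name like "résumé.final.pdf" survives mostly
        # intact instead of being mangled into an unreadable slug.
        value = unicodedata.normalize("NFKC", value)
        value = re.sub(r"[^\w\s.-]", "", value, flags=re.UNICODE).strip()
        return re.sub(r"[-\s]+", "-", value, flags=re.UNICODE)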
Fixes #321.
[tweaked significantly by tabbott]
It's always been the case that in production, Tornado dumps all the
event queues when shut down so that they can be reloaded by the
replacement Tornado process. This never worked in development because
the codepath for auto-reload didn't go through either a signal or
sys.exit (it re-execs the process instead).
This meant that we didn't have a mechanism for testing the event queue
dump/load functionality in the development environment. We fix this
by adding such dumping/loading. However, this breaks the automatic
reloading of open browser windows on a server restart, so we add that
back by having the special `restart` events pass an `immediate` flag
when used in development.
This also has the benefit of removing the "Bad event queue" errors one
would get in the Python console on every file-save-induced restart.
This is a no-op right now, but we'll want the new structure for the
next commit, and splitting this out makes it a lot easier to read what
is actually changed in the next commit.
This change drops the memory used for Python processes run by Zulip in
development from about 1GB to 300MB on my laptop.
On the safety front, http://pika.readthedocs.org/en/latest/faq.html
explains: "Pika does not have any notion of threading in the code. If
you want to use Pika with threading, make sure you have a Pika
connection per thread, created in that thread. It is not safe to share
one Pika connection across threads." Since this code only connects
to RabbitMQ inside the individual threads, I believe this should be
safe.
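In other words, each thread builds its own connection rather than
sharing one, along these lines (a simplified sketch, not the actual
queue worker code):

    import threading
    import pika

    def start_consumer(queue_name, callback):
        # The pika connection and channel are created inside the thread
        # that uses them, per the FAQ quoted above; nothing is shared
        # across threads.
        def consume():
            connection = pika.BlockingConnection(
                pika.ConnectionParameters(host="localhost"))
            channel = connection.channel()
            channel.queue_declare(queue=queue_name, durable=True)
            channel.basic_consume(callback, queue=queue_name)  # pre-1.0 pika signature
            channel.start_consuming()

        thread = threading.Thread(target=consume)
        thread.daemon = True
        thread.start()
        return thread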
Progress towards #32.
Apparently, our event queue garbage collection logic never actually
disconnected any existing handler objects.
We fix this by disconnecting the handler inside cleanup(), adding a
special check to avoid creating a pointless timeout object.
The new Tornado handler tracking logic properly handled requests that
threw an exception or followed the RespondAsynchronously code path,
but did not properly de-allocate the handler in the synchronous case.
An easy reproducer for this is to load a new Zulip browser window;
that will leak 2 handler objects for the 2 synchronous requests made
from Django to Tornado as part of initial state fetching.
This line appears to have been lost in rebasing from the original
implementation of 1396eb7022faec4c2d91553800a35781a96dd5bd; so the
previous fix actually only addressed the issue in a rare exception
case.
The recent Tornado memory leak fix
(1396eb7022) didn't use the correct
variable name for the current handler ID, causing this cleanup code to
fail in the event that a view raised an exception.
Replaced calls to ifilterfalse with list comprehensions, because
ifilterfalse is not part of Python 3. Also changed some lists to sets
for faster lookup.
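For example, the pattern changes like this (variable names are
illustrative, not the exact call sites):

    # Before (Python 2 only): itertools.ifilterfalse does not exist in
    # Python 3, where it was renamed to itertools.filterfalse.
    #
    #   from itertools import ifilterfalse
    #   unread = ifilterfalse(lambda msg_id: msg_id in read_ids, message_ids)
    #
    # After: a list comprehension, with read_ids turned into a set so that
    # the membership test is O(1) instead of O(n) per message.
    message_ids = [1, 2, 3, 4]
    read_ids = set([2, 4])
    unread = [msg_id for msg_id in message_ids if msg_id not in read_ids]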
Refer to #256.
In 2ea0daab19, handlers were moved to
being tracked via the handlers_by_id dict, but nothing cleared this
dict, resulting in every handler object being leaked. Since a Tornado
process uses a different handler object for every request, this
resulted in a significant memory leak. We fix this by clearing the
handlers_by_id dict in the two code paths that would result in a
Tornado handler being de-allocated: the exception codepath and the
handler disconnect codepath.
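A rough sketch of the shape of the fix (the helper name is illustrative,
not the exact Zulip function):

    handlers_by_id = {}  # handler_id -> handler object, as described above

    def clear_handler_by_id(handler_id):
        # Forget the handler so the Tornado process does not keep a
        # reference to every request it has ever served.
        if handler_id in handlers_by_id:
            del handlers_by_id[handler_id]

    # Called from both paths that retire a handler:
    #   * the exception codepath, after the error response is written
    #   * the disconnect codepath, when the long-poll connection closes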
Fixes #463.
Add a function email_allowed_for_realm that checks whether a user with
a given email is allowed to join a given realm (either because the
email has the right domain, or because the realm is open), and use it
whenever deciding whether to allow adding a user to a realm.
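A simplified sketch of the check (the realm field names here, such as
restricted_to_domain, are assumptions about the model):

    def email_allowed_for_realm(email, realm):
        # Open realms accept any address; otherwise the email's domain must
        # match the realm's domain (compared case-insensitively).
        if not realm.restricted_to_domain:
            return True
        domain = email.split("@", 1)[-1].lower()
        return domain == realm.domain.lower()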
This commit is not intended to change any behavior, except in one case
where the Zulip realm's domain was not being converted to lowercase.
While I believe this actually produced correct output since users are
always subscribed to streams within their realm, this code was
definitely wrong.
Discovered using the mypy type-checking tool.
If the content wasn't rendered, both rendered_content and
rendered_content_version would be None. In addition to being
confusing, this code breaks in Python 3, where `None < 2` raises a
TypeError.
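The fix is to guard the version comparison on whether the content was
rendered at all, roughly (a sketch; the surrounding rendering code and
the function name are illustrative):

    def need_to_render(message, current_version=2):
        # Only compare versions when there is rendered content; in Python 3,
        # `None < 2` raises TypeError instead of being (confusingly)
        # treated as True.
        return (message.rendered_content is None
                or message.rendered_content_version is None
                or message.rendered_content_version < current_version)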
At present, we only do a few simple checks on the client type inside
the event system, and this saves database/memcached queries.
Note that this preserves the structure of the marshalled name in
to_dict/from_dict as client_type to avoid an unnecessary migration.
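Schematically, the marshalling keeps the old key name while storing the
client's name as a string (a stand-in sketch, not the actual descriptor
class):

    class ClientInfo(object):
        def __init__(self, user_profile_id, client_type_name):
            self.user_profile_id = user_profile_id
            self.client_type_name = client_type_name

        def to_dict(self):
            # Keep the old key name "client_type" so previously dumped
            # queues still load without a data migration.
            return dict(user_profile_id=self.user_profile_id,
                        client_type=self.client_type_name)

        @classmethod
        def from_dict(cls, d):
            return cls(d["user_profile_id"], d["client_type"])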
Previously, client descriptors were referenced directly from the
handler object. Once we split the Tornado process into separate queue
and connection servers, these will no longer be in the same process,
so we need to reference them by ID instead.
This commit is somewhat ugly, but its purpose is to be early
preparation for splitting Tornado into a queue server and a frontend
server, and this code belongs, by and large, in the queue server
component.
89a2765553 didn't include the database
migration corresponding to the change, which means it didn't take full
effect when it was merged.
I noticed this because `manage.py makemigrations` would generate these
migrations; that suggests a good idea for a test to add.
Previously these were hardcoded in zproject/settings.py to be accessed
on localhost.
[Modified by Tim Abbott to adjust comments and fix configure-rabbitmq]
Previously:
* It wouldn't raise an exception if the stream didn't exist
* It didn't correctly handle being passed a stream name
that differed in case from the stream name in the database.
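A sketch of the corrected lookup, assuming the Django ORM (the model and
exception choices are illustrative; the commit does not name the exact
function):

    def get_stream(stream_name, realm):
        # Look the stream up case-insensitively and raise if it does not
        # exist, rather than silently returning None.
        try:
            return Stream.objects.get(name__iexact=stream_name, realm=realm)
        except Stream.DoesNotExist:
            raise ValueError("Unknown stream: %s" % (stream_name,))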
The previous implementation didn't work because HomepageForm rejected
the email as not having a domain. Additionally, the logic in
accounts_register didn't work with Google auth because that code path
doesn't pass through accounts_home. Since whether there's a unique
open realm for the server is effectively a configuration property, we
can fix the bug and make the logic clearer by moving it into the
"figure out the user's realm" function.
The browser registers for events via loading the home view, not this
interface, and this functionality is available via the API-format
register route anyway.
This removes from our cache a moderate amount of totally useless alert
word data corresponding to users who don't have any alert words.
Thanks to @dbiollo for the suggestion!
Just doing the database query is more readable, and has about the same
performance as before in the case where active user dicts for the
realm are in cache (and is substantially better in the rare case that
this isn't in the cache).
Thanks to @dbiollo for the perf investigation and suggestion!
This makes it possible to use DevAuthBackend when doing
performance/scalability testing on Zulip with many thousands of users.
It's unlikely that anyone testing this backend will find it valuable
to have more than 100 login buttons on the same page, and if they do,
they can always just change this limit.
Thanks to @dbiollo for the suggestion!
This fixes a performance issue looking up UserProfile objects for
realms with a large number of users in the case that a UserProfile
object is not in the cache.
Thanks to @dbiollo for the suggestion!
Django's `manage.py runserver` prints a relatively low-information log
line for every request of the form:
[14/Dec/2015 00:43:06]"GET /static/js/message_list.js HTTP/1.0" 200 21969
This is pretty spammy, especially given that we already have our own
middleware printing a more detailed version of the same log lines:
2015-12-14 00:43:06,935 INFO 127.0.0.1 GET 200 0ms /static/js/message_list.js (unauth via ?)
Since runserver doesn't support controlling whether these log lines are
printed, we wrap it with a small bit of code that silences the log
lines for 200/304 requests (aka the uninteresting ones).
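A sketch of the wrapping idea (the actual run-dev.py plumbing differs;
the command flags here are illustrative, and only the filtering rule is
the point):

    import re
    import subprocess
    import sys

    # Drop runserver request lines whose status code is 200 or 304; let
    # everything else (errors, redirects, 404s) through unchanged.
    BORING_LINE = re.compile(r'" (200|304) \d+$')

    proc = subprocess.Popen(
        ["./manage.py", "runserver", "--nostatic"],
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
        universal_newlines=True)
    for line in proc.stdout:
        if BORING_LINE.search(line.rstrip("\n")):
            continue
        sys.stdout.write(line)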
notify_new_user was recently moved from zerver.views to
zerver.lib.actions, and this call site wasn't properly updated. This
would cause an error when running `manage.py create_user` from the
command line.