This commit adds support for removing reactions via DELETE requests to
the /reactions endpoint with parameters emoji_name and message_id.
The reaction is deleted from the database and a reaction event is sent
out with 'op' set to 'remove'.
Tests are added to check:
1. Removing a reaction that does not exist fails
2. When removing a reaction, the event payload and users are correct
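As a rough illustration of the new endpoint (the /api/v1 prefix, server URL,
and credentials below are assumptions; only the endpoint and its two
parameters come from this change), removing a reaction might look like:

import requests

# Hypothetical values for illustration only.
response = requests.delete(
    "https://zulip.example.com/api/v1/reactions",
    auth=("hamlet@zulip.com", "API_KEY"),
    data={"message_id": 3, "emoji_name": "doge"},
)
response.raise_for_status()
# On success, the reaction row is deleted and a reaction event with
# 'op': 'remove' is sent to the users who received the message.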
This commit adds the following:
1. A reaction model that consists of a user, a message and an emoji that
are unique together (a user cannot react to a particular message more
than once with the same emoji; a rough model sketch appears below)
2. A reaction event that looks like:
{
'type': 'reaction',
'op': 'add',
'message_id': 3,
'emoji_name': 'doge',
'user': {
'user_id': 1,
'email': 'hamlet@zulip.com',
'full_name': 'King Hamlet'
}
}
3. A new API endpoint, /reactions, that accepts POST requests to add a
reaction to a message
4. A migration to add the new model to the database
5. Tests that check that
(a) Invalid requests cannot be made
(b) The reaction event body contains all the info
(c) The reaction event is sent to the appropriate users
(d) Reacting more than once fails
It is still missing important features like removing emoji and
fetching them alongside messages.
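A minimal sketch of the reaction model described in (1), assuming standard
Django conventions (the exact field names and options in the real migration
may differ):

from django.db import models

class Reaction(models.Model):
    user_profile = models.ForeignKey("zerver.UserProfile")
    message = models.ForeignKey("zerver.Message")
    emoji_name = models.TextField()

    class Meta(object):
        # A user cannot react to a particular message more than once
        # with the same emoji.
        unique_together = ("user_profile", "message", "emoji_name")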
This makes it possible to configure only certain authentication
methods to be enabled on a per-realm basis.
Note that the authentication_methods_dict function (which checks what
backends are supported on the realm) requires an in-function import
due to a circular dependency.
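The in-function import is the standard way to break such a cycle; a minimal
sketch, where the backends-module helper and the realm field are assumptions:

def authentication_methods_dict(realm):
    # zproject.backends imports from this module, so a module-level import
    # here would be circular; import inside the function instead.
    from zproject.backends import all_backend_names  # assumed helper

    enabled = set(realm.enabled_auth_methods)  # assumed realm field
    return {name: (name in enabled) for name in all_backend_names()}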
Note that we still need the equivalent function in our
user-facing API, so there is not much code removal yet.
(Also, we will probably always keep this in our API,
as bot authors will usually just want a simple endpoint
here, whereas our client code gets page_params and events.)
Previously, we used to create one Google OAuth callback url entry
per subdomain. This commit allows us to authenticate subdomain users
against a single Google OAuth callback url entry.
The actual logic is that if the user already exists, the function
should return False, and if the user does not exist, the function
should first create the user and then return True.
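A minimal sketch of that contract (the function name is hypothetical, and the
creation step is simplified):

from zerver.models import UserProfile

def maybe_create_user(email, realm):
    # Hypothetical helper: returns False if the user already exists;
    # otherwise creates the user first and returns True.
    if UserProfile.objects.filter(email__iexact=email, realm=realm).exists():
        return False
    UserProfile.objects.create(email=email, realm=realm)  # simplified; real creation does more
    return True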
- To avoid redefining the migrate management command, add a new application
configuration class that hooks into the post_migrate signal, as sketched
below. This signal requires a models module inside the application and uses
the AppConfig instance as the signal sender. Documentation:
https://docs.djangoproject.com/en/1.8/ref/signals/#post-migrate.
- Add the AppConfig subclass to the zerver app's __init__.py so the
application loads it by default.
Fixes #1084.
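A minimal sketch of that configuration (the receiver body is an assumption;
the signal wiring is standard Django):

# zerver/apps.py
from django.apps import AppConfig
from django.db.models.signals import post_migrate

def flush_cache(sender, **kwargs):
    # Assumed receiver: whatever work needs to run after `manage.py migrate`.
    pass

class ZerverConfig(AppConfig):
    name = "zerver"

    def ready(self):
        # Django sends post_migrate once per application after migrations run,
        # with this AppConfig instance as the sender.
        post_migrate.connect(flush_cache, sender=self)

# zerver/__init__.py
# default_app_config = "zerver.apps.ZerverConfig"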
This creates the new topic_list.js module, and the first
function that we extract is topic_list.update_count_in_dom().
This function needed to be decoupled from some non-topic-list
stuff which was overly complicated.
POST to /typing creates a typing event
Required parameters are 'op' ('start' or 'stop') and 'to' (recipient
emails). If there are multiple recipients, the 'to' parameter
should be a JSON string of the list of recipient emails.
The event created looks like:
{
'type': 'typing',
'op': 'start',
'sender': 'hamlet@zulip.com',
'recipients': [{
'id': 1,
'email': 'othello@zulip.com'
}]
}
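A rough illustration (the /api/v1 prefix, server URL, and credentials are
assumptions); note that 'to' is a JSON-encoded list when there are multiple
recipients:

import json
import requests

requests.post(
    "https://zulip.example.com/api/v1/typing",
    auth=("hamlet@zulip.com", "API_KEY"),
    data={
        "op": "start",
        "to": json.dumps(["othello@zulip.com", "cordelia@zulip.com"]),
    },
)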
For each database query made by an analytics function, log time spent and
the number of rows changed to var/logs/analytics.log.
In the spirit of write ahead logging, for each (stat, end_time)
update, log the start and end of the "transaction", as well as time
spent.
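A minimal sketch of the logging pattern this describes (the logger name and
function shape are assumptions):

import logging
import time

logger = logging.getLogger("zulip.analytics")  # assumed; configured to write to var/logs/analytics.log

def logged_query(stat, end_time, run_query):
    # Write-ahead-logging style: record the start of the (stat, end_time)
    # "transaction", run the query, then record time spent and rows changed.
    logger.info("START %s %s", stat, end_time)
    start = time.time()
    rows_changed = run_query()
    logger.info("DONE %s %s: %d rows in %.3fs",
                stat, end_time, rows_changed, time.time() - start)
    return rows_changed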
Previously, we sent users to an "invite your friends" page after they
created an organization. This commit removes that step in the flow and sends
users directly to the home page. We also remove the now-unused
initial_invite_page.html template, initial_invite.js (which pre-filled the
invite emails with characters from literature), and the /invite URL route.
test_settings.py was setting EXTERNAL_HOST after importing settings.py,
which has several variables (like SERVER_URI) that are computed from
EXTERNAL_HOST.
[tweaked by tabbott to add comments explaining the story here].
This moves the logic for renaming a stream to the REST API
update_stream_backend method, eliminating the legacy API endpoint for
doing so.
It also adds a nice test suite covering international stream names.
This improves Google and JWT auth as well as the registration
codepath to log something if the wrong subdomain is encountered.
Ideally, we'd have tests for these, and code to make the Google and JWT
auth cases show a clear error message.
This adds support for running a Zulip production server with each
realm on its own unique subdomain, e.g. https://realm_name.example.com.
This patch includes a ton of important features:
* Configuring the Zulip session middleware to issue cookies correctly
for the subdomains case.
* Throwing an error if the user tries to visit an invalid subdomain.
* Running a portion of the Casper tests with REALMS_HAVE_SUBDOMAINS
enabled to test the subdomain signup process.
* Updating our integrations documentation to refer to the current subdomain.
* Enforcing that users can only log in to the subdomain of their realm
(but not restricting the API; that will be tightened in a future commit).
Note that toggling settings.REALMS_HAVE_SUBDOMAINS on a live server is
not supported without manual intervention (the main problem will be
adding "subdomain" values for all the existing realms).
[substantially modified by tabbott as part of merging]
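A rough sketch of the subdomain idea (the helper name and the exact handling
of ports and the root domain are assumptions):

from django.conf import settings

def get_subdomain(request):
    # With REALMS_HAVE_SUBDOMAINS enabled, https://realm_name.example.com
    # should resolve to the realm whose subdomain is "realm_name".
    host = request.get_host().lower()
    external_host = settings.EXTERNAL_HOST.lower()
    if host.endswith("." + external_host):
        return host[:-(len(external_host) + 1)]
    return ""  # root domain: no realm subdomain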
This adds an event listener (by way of delegation) to the
.message_inline_image elements that pops up the overlay and hides it
when the overlay exit is clicked.
Fixes #654.
This was the original way to send messages via the Zulip API in the
very early days of Zulip, but was replaced by the REST API back in
2013.
Fixes: #730.
The main purpose of the "var" convention is to make it easy to write stuff
inside of our git repo when running a dev instance, and then "var" gets
excluded from checkins. For production, that's not as much of a concern.
For upgrades we don't want to be changing the directory around and confusing
matters, especially with the extra moving part of nginx configs (which have
their own issues in terms of being overwritten by accident when admins go to
S3).
This adds support for using PGroonga to back the Zulip full-text
search feature. The built-in PostgreSQL full-text search doesn't
support languages that don't put spaces between terms, such as Japanese
and Chinese; PGroonga supports all languages, including Japanese
and Chinese.
Developers will need to re-provision when rebasing past this patch for
the tests to pass, since provision is what installs the PGroonga
package and extension.
PGroonga is enabled by default in development but not in production;
the hope is that after the PGroonga support is tested further, we can
enable it by default.
Fixes #615.
[docs and tests tweaked by tabbott]
The previous default configuration resulted in delivery problems if
the Zulip server was not authorized in the SPF records for the domains
of all users on the Zulip server.
These error messages are pretty spammy because most servers on the
public Internet receive some amount of HTTP(S) scanning traffic.
We still log them, just don't email the administrators.
This adds a few new helpful context variables that we can use to
compute URLs in all of our templates:
* external_uri_scheme: http(s)://
* server_uri: The base URL for the server's canonical name
* realm_uri: The base URL for the user's realm
This is preparatory work for making realm_uri != server_uri when we
add support for subdomains.
Most directly useful for the migration to zulipchat.com.
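A rough sketch of how these might be computed in a context processor (the
function name, the EXTERNAL_URI_SCHEME setting, and the realm subdomain field
are assumptions):

from django.conf import settings

def add_url_context(request):
    external_uri_scheme = settings.EXTERNAL_URI_SCHEME            # "http://" or "https://"
    server_uri = external_uri_scheme + settings.EXTERNAL_HOST     # canonical server URL
    realm = getattr(getattr(request, "user", None), "realm", None)
    if realm is not None and getattr(realm, "subdomain", ""):
        # Once subdomains land, the realm's URL differs from the server's.
        realm_uri = external_uri_scheme + realm.subdomain + "." + settings.EXTERNAL_HOST
    else:
        realm_uri = server_uri
    return {
        "external_uri_scheme": external_uri_scheme,
        "server_uri": server_uri,
        "realm_uri": realm_uri,
    }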
Creates a new field in UserProfile to store the tos_version, as well as two
new settings TOS_VERSION and FIRST_TIME_TOS_TEMPLATE. We check for a version
mismatch between what the user has signed and the current
settings.TOS_VERSION whenever the user hits the home page, and redirect them
if needed.
Note that accounts_accept_terms.html and
zerver.views.accounts_accept_terms were unused before this commit
(they date from c327446537)
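A minimal sketch of the home-page check (the function name and redirect
target are assumptions):

from django.conf import settings
from django.shortcuts import redirect

def accept_terms_if_needed(request):
    # Called from the home view: if the signed version doesn't match the
    # current settings.TOS_VERSION, send the user to the accept-terms flow.
    user = request.user
    if settings.TOS_VERSION is not None and user.tos_version != settings.TOS_VERSION:
        return redirect("/accounts/accept_terms/")  # assumed URL
    return None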
Create `media.css` using media queries that had been at the bottom
of `zulip.css`, then update miscellaneous settings/docs files.
I also add `.screen-medium-show` and `.screen-narrow-show` to
`media.css`, as they seem to be an important part of our
responsive design.
Fixes #1532.
Define Integration and WebhookIntegration classes.
Update the webhook part of the integrations guide.
Replace hardcoded webhook URLs with URLs generated
from the WEBHOOKS list.
Both key and secret settings of team and organization default to
SOCIAL_AUTH_GITHUB_KEY and SOCIAL_AUTH_GITHUB_SECRET respectively.
SOCIAL_AUTH_GITHUB_TEAM_ID and SOCIAL_AUTH_GITHUB_ORG_NAME default
to `None`.
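In settings terms, that defaulting might look roughly like this (the team/org
key and secret setting names follow python-social-auth conventions and are
assumptions here):

# Sketch for zproject/settings.py
SOCIAL_AUTH_GITHUB_KEY = "github-client-id"          # placeholder
SOCIAL_AUTH_GITHUB_SECRET = "github-client-secret"   # placeholder

# Team and organization auth reuse the main GitHub app credentials by default.
SOCIAL_AUTH_GITHUB_TEAM_KEY = SOCIAL_AUTH_GITHUB_KEY
SOCIAL_AUTH_GITHUB_TEAM_SECRET = SOCIAL_AUTH_GITHUB_SECRET
SOCIAL_AUTH_GITHUB_TEAM_ID = None

SOCIAL_AUTH_GITHUB_ORG_KEY = SOCIAL_AUTH_GITHUB_KEY
SOCIAL_AUTH_GITHUB_ORG_SECRET = SOCIAL_AUTH_GITHUB_SECRET
SOCIAL_AUTH_GITHUB_ORG_NAME = None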
This exists primarily in order to allow us to mock settings.DEBUG for
the purposes of rate limiting, without actually mocking
settings.DEBUG, which I suspect Django never intended one to do, and
thus caused some very strange test failures (see
https://github.com/zulip/zulip/pull/776 for details).
All other zulip management command names have underscores, so
rename email-mirror to email_mirror.
This will also make it possible to import this module, which will
help in writing tests for it.
This reverts commit be93b6ea28.
Unfortunately, the newer jquery comes with a huge performance
regression affecting the hotkeys code, which has the effect of making
typing super slow.
Fixes: #1449.
When email mirroring is done via polling, the IMAP account's
password should be stored in zulip-secrets.conf in
email_gateway_password, not in email_gateway_login.
tools/provision.py: Create directory var/uploads.
zproject/local_settings_template.py: Update upload dir to var/uploads.
zproject/dev_settings.py: Update upload dir to var/uploads.
Bitbucket changed the format of their API. The old format is still
useful for Bitbucket Enterprise, but for the main cloud version of
Bitbucket, we need a new Bitbucket integration supporting the new API.
This works around a nasty problem with Webpack that you can't run two
copies of the Webpack development server on the same project at the
same time (even if on different ports). The second copy doesn't fail;
it just hangs waiting for some lock, which is confusing. But even if
that were to be solved, we don't actually need the webpack development
server running to run the Casper tests; we just need bundle.js built.
So the easy solution is to just run webpack manually and be sure to
include bundle.js in the JS_SPECS entry.
As a follow-up to this change, we should clean up how test_settings.py
is implemented to not require duplicating code from settings.py.
Fixes #878.
The manage.py change effectively switches the Zulip production server
to use the virtualenv, since all of our supervisord commands for the
various Python services go through manage.py.
Additionally, this migrates the production scripts and Nagios plugins
to use the virtualenv as well.
We would like to know which kinds of authentication backends the server
supports.
This is information you can get from /login, but not in a way easily
parseable by API apps (e.g. the Zulip mobile apps).
This prototype from Dropbox Hack Week turned out to be too inefficient
to be used for realms with any significant amount of history, so we're
removing it.
It will be replaced by https://github.com/zulip/zulip/pull/673.
For a long time, rest_dispatch has had this hack where we have to
create a copy of it in each views file using it, in order to directly
access the globals list in that file. This removes that hack, instead
making rest_dispatch just use Django's import_string to access the
target method to use.
[tweaked and reorganized from acrefoot's original branch in various
ways by tabbott]
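A rough sketch of the new shape (the dispatcher's exact signature and error
handling are assumptions; import_string itself is Django's):

from django.http import HttpResponseNotAllowed
from django.utils.module_loading import import_string

METHODS = ("GET", "POST", "PUT", "PATCH", "DELETE", "HEAD")

def rest_dispatch(request, **kwargs):
    # urls.py now supplies handlers as dotted-path strings, e.g.
    #   {'PATCH': 'zerver.views.streams.update_stream_backend'},
    # alongside any URL pattern kwargs. import_string resolves the path,
    # so one shared rest_dispatch no longer needs each views file's globals().
    handlers = dict((m, kwargs.pop(m)) for m in METHODS if m in kwargs)
    handler_path = handlers.get(request.method)
    if handler_path is None:
        return HttpResponseNotAllowed(sorted(handlers))
    return import_string(handler_path)(request, **kwargs)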
Currently we use the deprecated Django patterns() prefix feature.
This makes it hard to read the router logic in zproject/urls.py.
This commit denormalizes the URLs so that they can be read
more easily, at the expense of some verbosity. This also makes it
easier to reorganize URLs in that file.
We skip denormalizing rest_dispatch due to its unique complications.
The recent changes to api_fetch_api_key to receive detailed data via
the "return_data" object did not properly update the LDAP backend to
accept that argument, causing mobile password authentication to not
work with the LDAP backend.
Apparently, there are like 5 independently developed jquery-caret
plugins, none of which are great. The previous one we were using was
last modified in 2010. This new one comes from
https://github.com/acdvorak/jquery.caret and at least doesn't use
deprecated jQuery syntax and has a repository on GitHub.
This plugin is way larger than it needs to be for what it does, but we
can deal with that later.
Previously, uploaded files were served:
* With S3UploadBackend, via get_uploaded_file (redirects to S3)
* With LocalUploadBackend in production, via nginx directly
* With LocalUploadBackend in development, via Django's static file server
This changes that last case to use get_uploaded_file in development,
which is a key step towards being able to do proper access control
authorization.
Does not affect production.
`makemessages` escapes the `%` sign in `.po` files, but Jinja2 does
not unescape it while replacing the translation strings. In Jinja2,
there is an updated implementation of gettext available called
new-style gettext which handles escaping better; this commit switches
to using that.
Fixes #906.
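A sketch of what the switch involves in the Jinja2 environment setup (the
exact location in our codebase is an assumption; the i18n extension API is
Jinja2's):

from django.utils import translation
from jinja2 import Environment

env = Environment(extensions=["jinja2.ext.i18n"])
# newstyle=True turns on Jinja2's new-style gettext, which substitutes
# variables without tripping over the %-escaping that makemessages adds.
env.install_gettext_translations(translation, newstyle=True)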
Previously, api_fetch_api_key would not give clear error messages if
password auth was disabled or the user's realm had been deactivated;
additionally, the account disabled error stopped triggering when we
moved the active account check into the auth decorators.
The security model for deactivated users (and users in deactivated
realms) being unable to access the service is intended to work via two
mechanisms:
* All active user sessions are deleted, and all login code paths
(where a user could get a new session) check whether the user (or
realm) is inactive before authorizing the request, preventing the
user from accessing the website and AJAX endpoints.
* All API code paths (which don't require a session) check whether the
user (and realm) are active.
However, this security model was not implemented correctly. In
particular, the check for whether a user has an active account in the
login process was done inside the login form's validators, which meant
that authentication mechanisms that did not use the login form
(e.g. Google and REMOTE_USER auth) could succeed in granting a session
even with an inactive account. The Zulip homepage would still fail to
load because the code for / includes an API call to Tornado authorized
by the user's token that would fail, but this mechanism could allow an
inactive user to access realm data or users to access data in a
deactivated realm.
This fixes the issue by adding explicit checks for inactive users and
inactive realms in all authentication backends (even those that were
already protected by the login form validator).
Mirror dummy users are already inactive, so we can remove the explicit
code around mirror dummy users.
The following commits add a complete set of tests for Zulip's inactive
user and realm security model.
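A minimal sketch of the guard each backend now needs (the helper name is an
assumption; the real change places these checks inside every authenticate()
implementation):

def require_active(user_profile):
    # Reject inactive users and users in deactivated realms in every
    # authentication backend, not just in the login form validators.
    if user_profile is None:
        return None
    if not user_profile.is_active:
        return None
    if user_profile.realm.deactivated:
        return None
    return user_profile

Each backend's authenticate() would then return
require_active(user_profile) rather than the profile directly.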
This results in a substantial performance improvement for all of
Zulip's backend templates.
Changes in templates:
- Change `block.super` to `super()`.
- Remove `load` tag because Jinja2 doesn't support it.
- Use `minified_js()|safe` instead of `{% minified_js %}`.
- Use `compressed_css()|safe` instead of `{% compressed_css %}`.
- `forloop.first` -> `loop.first`.
- Use `{{ csrf_input }}` instead of `{% csrf_token %}`.
- Use `{# ... #}` instead of `{% comment %}`.
- Use `url()` instead of `{% url %}`.
- Use `_()` instead of `{% trans %}` because in Jinja `trans` is a block tag.
- Use `{% trans %}` instead of `{% blocktrans %}`.
- Use `{% raw %}` instead of `{% verbatim %}`.
Changes in tools:
- Check for `trans` block in `check-templates` instead of `blocktrans`
Changes in backend:
- Create a custom `render_to_response` function which takes a `request` object
instead of a `RequestContext` object. There are two reasons to do this:
1. `RequestContext` is not compatible with Jinja2
2. `RequestContext` in `render_to_response` is deprecated.
- Add Jinja2 related support files in zproject/jinja2 directory. It
includes a custom backend, a template renderer, compressors for JS
and CSS, and a Jinja2 environment handler.
- Enable `slugify` and `pluralize` filters in Jinja2 environment.
Fixes #620.
In theory these should be the same, but in misconfigured environments
(such as Travis CI) where /etc/hosts has multiple entries for
"localhost", 127.0.0.1 is safer than "localhost".
Camo is a caching image proxy, used in Zulip to avoid mixed-content
warnings by proxying HTTP image content over HTTPS. We've been using
it in zulip.com production for years; this change makes it available
in standalone Zulip deployments.
Previously, if a user had only authenticated via Google auth, they
would be unable to reset their password in order to set one (which is
needed to set up the mobile apps, for example).
In theory, tools like populate_db should probably be in zerver, not
zilencer, but until we migrate them out, we need to include these in
EXTRA_INSTALLED_APPS in development.
The previous separated-out configuration wasn't helping us, and this
makes it easier to make the extra installed applications pluggable in
the following commits.
This will merge conflict with every new integration in flight, which
is unfortunate, but will make there be fewer merge conflicts as people
add new webhooks in the future (currently, every pair of new
integrations conflict because folks are adding them all at the end,
whereas after this change, there will only be merge conflicts when
adding two integrations near each other alphabetically).
This integration relies on the Teamcity "tcWebHooks" plugin which is
available at
https://netwolfuk.wordpress.com/category/teamcity/tcplugins/tcwebhooks/
It posts build fail and success notifications to a stream specified in
the webhook URL.
It uses the name of the build configuration as the topic.
For personal builds, it tries to map the Teamcity username to a Zulip
username, and sends a private message to that person.
As documented in https://github.com/zulip/zulip/issues/441, Guardian
has quite poor performance, and in fact almost 50% of the time spent
running the Zulip backend test suite on my laptop was inside Guardian.
As part of this migration, we also clean up the old API_SUPER_USERS
variable used to mark EMAIL_GATEWAY_BOT as an API super user; now that
permission is managed entirely via the database.
When rebasing past this commit, developers will need to do a
`manage.py migrate` in order to apply the migration changes before the
server will run again.
We can't yet remove Guardian from INSTALLED_APPS, requirements.txt,
etc. in this release, because otherwise the reverse migration won't
work.
Fixes #441.
Move recenter_pointer_on_display, suppress_scroll_pointer_update,
fast_forward_pointer, furthest_read, and server_furthest_read to
a new pointer module in pointer.js.
While we already don't link to /terms anywhere on the site, it can still be
accessed by navigating to /terms directly. Now, those routes will only be
available on the Zulip.com service.
We should ideally provide a mechanism for deployments to specify their own
terms without modifying source code; in the interim, sites that have already
customised the provided Zulip.com terms can simply carry a patch reverting this
commit.
Previously these were hardcoded in zproject/settings.py to be accessed
on localhost.
[Modified by Tim Abbott to adjust comments and fix configure-rabbitmq]
The browser registers for events by loading the home view, not this
interface, and this functionality is available via the API-format
register route anyway.
It's needed for the Tornado server. Otherwise, you get errors like
2015-12-20 09:33:55,124 ERROR Internal Server Error: /api/v1/events
Traceback (most recent call last):
File "/home/zulip/deployments/2015-12-20-13-44-47/zerver/management/commands/runtornado.py", line 209, in get_response
response = middleware_method(request)
File "/usr/lib/python2.7/dist-packages/django/middleware/common.py", line 62, in process_request
host = request.get_host()
File "/usr/lib/python2.7/dist-packages/django/http/request.py", line 101, in get_host
raise DisallowedHost(msg)
DisallowedHost: Invalid HTTP_HOST header: 'localhost:9993'. You may need to add u'localhost' to ALLOWED_HOSTS.
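The fix is the usual ALLOWED_HOSTS adjustment the error suggests; a sketch,
with the exact list contents and settings-file placement being assumptions:

# e.g. in the settings used by the Tornado server
ALLOWED_HOSTS += ["localhost"]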