Long ago, there was work on an experimental integration model where
every user in a realm would have administrative control over all bots,
with the goal of simplifying the process of setting up communally
administered bots for smaller teams. While that new model was never
fully implemented (and thus never set up as an option), an error in
that original implementation meant that the data on all bots in a
realm, including their API keys, was sent to the browsers of users via
the `realm_bots` variable in `page_params`. The data wasn't displayed
in the UI for non-admin users, but was available via e.g. the
JavaScript console.
This commit updates this behavior to only send sensitive bot data like
API keys to the owner of the bot (and realm admins).
We may in the future implement a model simplifying communally
administered integrations, but if we do that, those bots should be
limited in their capabilities (e.g. only able to send webhook
messages).
This bug has been present since Zulip was released as open source.
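A minimal sketch of the ownership check described above (helper name
and exact fields are illustrative, not the actual diff):

    from zerver.models import UserProfile

    def get_owned_bot_dicts(user_profile):
        # Only expose sensitive fields like api_key to the bot's owner
        # or to a realm administrator.
        bots = UserProfile.objects.filter(realm=user_profile.realm,
                                          is_bot=True, is_active=True)
        return [{'email': bot.email,
                 'full_name': bot.full_name,
                 'api_key': bot.api_key}
                for bot in bots
                if user_profile.is_realm_admin
                or bot.bot_owner_id == user_profile.id]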
The old code for this lookup was unnecessarily complicated because we
were working around Guardian, where the `is_realm_admin` check was
extremely expensive.
Previously we relied on having two matching lists of fields for
get_active_user_dicts_in_realm, one in the actual code and the other
in the caching system. By unifying these lists to have a single
source, we eliminate a class of caching bugs we might otherwise
regularly introduce.
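Roughly, the unified structure looks like this (the field list is
illustrative; the point is that the query and the cache machinery
import the same name):

    # Shared by the query below and by the cache key/flush code,
    # so the two can never drift apart.
    active_user_dict_fields = ['id', 'full_name', 'short_name',
                               'email', 'is_active', 'is_bot']

    def get_active_user_dicts_in_realm(realm):
        return UserProfile.objects.filter(realm=realm, is_active=True) \
                                  .values(*active_user_dict_fields)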
This results in a substantial performance improvement for all of
Zulip's backend templates.
Changes in templates:
- Change `block.super` to `super()`.
- Remove `load` tag because Jinja2 doesn't support it.
- Use `minified_js()|safe` instead of `{% minified_js %}`.
- Use `compressed_css()|safe` instead of `{% compressed_css %}`.
- `forloop.first` -> `loop.first`.
- Use `{{ csrf_input }}` instead of `{% csrf_token %}`.
- Use `{# ... #}` instead of `{% comment %}`.
- Use `url()` instead of `{% url %}`.
- Use `_()` instead of `{% trans %}` because in Jinja `trans` is a block tag.
- Use `{% trans %}` instead of `{% blocktrans %}`.
- Use `{% raw %}` instead of `{% verbatim %}`.
Changes in tools:
- Check for `trans` block in `check-templates` instead of `blocktrans`.
Changes in backend:
- Create custom `render_to_response` function which takes `request` objects
instead of `RequestContext` object. There are two reasons to do this:
1. `RequestContext` is not compatible with Jinja2
2. `RequestContext` in `render_to_response` is deprecated.
- Add Jinja2 related support files in zproject/jinja2 directory. It
includes a custom backend and a template renderer, compressors for js
and css and Jinja2 environment handler.
- Enable `slugify` and `pluralize` filters in Jinja2 environment.
Fixes #620.
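The custom render_to_response is essentially a thin shim over Django's
render (a sketch under those assumptions, not the exact code):

    from django.shortcuts import render

    def render_to_response(template_name, context=None, request=None, **kwargs):
        # Pass the HttpRequest itself instead of a RequestContext:
        # Jinja2 cannot consume a RequestContext, and passing one to
        # render_to_response is deprecated anyway.
        return render(request, template_name, context, **kwargs)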
This fixes an exception where client_id was never set in an error code
path. It shouldn't be needed, but I think this makes the code clearer
and this will help in debugging the actual problem.
Related to #753.
The error message for a test file that doesn't import properly was
previously pretty difficult to understand and it wasn't clear how to
debug the issue.
This commit adds the capability to keep track of and remove uploaded
files. Unclaimed attachments are files that have been uploaded to the
server but are not referred to in any messages. A management command to
remove old unclaimed files after a week is also included.
Tests for getting the files referred to in messages are also included.
Since we don't have a stable way to get the Dropbox preview failure
image (and it was sort of a weird setup anyway), it seems best to just
remove the condition.
Previously we needed to use a specified password when activating a
formerly mirror dummy user, in order for that user to be able to
(re)set their password and login. Now that we have our own password
reset form, this is no longer required.
As documented in https://github.com/zulip/zulip/issues/441, Guardian
has quite poor performance, and in fact almost 50% of the time spent
running the Zulip backend test suite on my laptop was inside Guardian.
As part of this migration, we also clean up the old API_SUPER_USERS
variable used to mark EMAIL_GATEWAY_BOT as an API super user; now that
permission is managed entirely via the database.
When rebasing past this commit, developers will need to do a
`manage.py migrate` in order to apply the migration changes before the
server will run again.
We can't yet remove Guardian from INSTALLED_APPS, requirements.txt,
etc. in this release, because otherwise the reverse migration won't
work.
Fixes #441.
When an uploaded avatar image is not a valid image file, PIL raises
IOError. Catch the IOError raised by PIL and raise JsonableError.
This will return a response with status code 400.
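A sketch of the conversion (the JsonableError import path and helper
name are assumptions):

    import io

    from PIL import Image
    from zerver.lib.request import JsonableError

    def check_valid_avatar(image_data):
        try:
            Image.open(io.BytesIO(image_data)).verify()
        except IOError:
            # PIL signals an unparseable file with IOError; re-raise as
            # JsonableError so the client sees a 400 instead of a 500.
            raise JsonableError("Could not decode the uploaded avatar image.")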
Previously, the UserProfile objects were created in the order
generated by a Set, which meant tests would randomly start failing if
the code that runs before this part of populate_db changed (and thus
caused the Set object used to pass users into bulk_create_users to
have a different order when enumerated).
This fixes the issue in two ways -- one by sorting the users inside
bulk_create_users, and second by attaching subscriptions to users
based on a deterministic ordering.
The restarted Tornado processes seemed to escape the process group and
thus continue running after run-dev.py finished.
While we're at it, we don't need to dump/reload event queues in the
test suite either.
The previous version of sanitize_name dropped all unicode characters
and mangled filenames with multiple `.`s in the extension, leading to
confusing URLs for files uploaded to Zulip.
Fixes #321.
[tweaked significantly by tabbott]
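The fixed behavior, roughly (the exact character classes in the real
implementation may differ):

    import re
    import unicodedata

    def sanitize_name(name):
        # Keep unicode word characters and the dots before the file
        # extension, instead of dropping everything non-ASCII.
        name = unicodedata.normalize('NFKC', name)
        name = re.sub(r'[^\w\s.-]', '', name, flags=re.UNICODE).strip()
        return re.sub(r'[-\s]+', '-', name)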
It's always been the case that in production, Tornado dumps all the
event queues when shut down so that they can be reloaded by the
replacement Tornado process. This never worked in development because
the codepath for auto-reload didn't go through either a signal or
sys.exit (it re-execs the process instead).
This meant that we didn't have a mechanism for testing the event queue
dump/load functionality in the development environment. We fix this
by adding such dumping/loading. However, this breaks the automatic
reloading of open browser windows on a server restart, so we add that
back in by adjusting the special `restart` events to pass a special
`immediate` flag when used in development.
This also has the benefit of removing the "Bad event queue" errors one
would get on every file-save-induced restart on the Python console.
This is a no-op right now, but we'll want the new structure for the
next commit, and splitting this out makes it a lot easier to read what
is actually changed in the next commit.
This change drops the memory used for Python processes run by Zulip in
development from about 1GB to 300MB on my laptop.
On the front of safety, http://pika.readthedocs.org/en/latest/faq.html
explains "Pika does not have any notion of threading in the code. If
you want to use Pika with threading, make sure you have a Pika
connection per thread, created in that thread. It is not safe to share
one Pika connection across threads.". Since this code only connects
to rabbitmq inside the individual threads, I believe this should be
safe.
Progress towards #32.
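The per-thread pattern looks like this (connection parameters are
illustrative):

    import threading

    import pika

    local_data = threading.local()

    def get_channel():
        # Per the Pika FAQ quoted above: one connection per thread,
        # created in that thread, never shared across threads.
        if not hasattr(local_data, 'connection'):
            local_data.connection = pika.BlockingConnection(
                pika.ConnectionParameters(host='localhost'))
        return local_data.connection.channel()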
Apparently, our event queue garbage collection logic never actually
disconnected any existing handler objects.
We fix this by disconnecting the handler inside cleanup(), adding a
special check to avoid creating a pointless timeout object.
This line appears to have been lost in rebasing from the original
implementation of 1396eb7022faec4c2d91553800a35781a96dd5bd; so the
previous fix actually only addressed the issue in a rare exception
case.
Replaced calls to ifilterfalse with list comprehensions, because
ifilterfalse is not part of Python 3. Also changed some lists to sets
for faster lookup.
Refer to #256.
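The shape of the conversion (variable names are illustrative):

    streams = ['design', 'errors', 'social']
    valid_streams = {'design', 'social'}

    # Python 2 spelling being removed:
    #   from itertools import ifilterfalse
    #   bad_streams = list(ifilterfalse(lambda s: s in valid_streams, streams))

    # Portable replacement; the set makes each membership test O(1):
    bad_streams = [s for s in streams if s not in valid_streams]  # ['errors']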
In 2ea0daab19, handlers were moved to
being tracked via the handlers_by_id dict, but nothing cleared this
dict, resulting in every handler object being leaked. Since a Tornado
process uses a different handler object for every request, this
resulted in a significant memory leak. We fix this by clearing the
handlers_by_id dict in the two code paths that would result in a
Tornado handler being de-allocated: the exception codepath and the
handler disconnect codepath.
Fixes #463.
Add a function email_allowed_for_realm that checks whether a user with
given email is allowed to join a given realm (either because the email
has the right domain, or because the realm is open), and use it
whenever deciding whether to allow adding a user to a realm.
This commit is not intended to change any behavior, except in one case
where the Zulip realm's domain was not being converted to lowercase.
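A sketch of the check, including the lowercase fix (Realm field names
assumed from this era's model):

    def email_allowed_for_realm(email, realm):
        if not realm.restricted_to_domain:
            return True  # open realm: any address may join
        domain = email.split('@', 1)[-1].lower()
        return domain == realm.domain.lower()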
While I believe this actually produced correct output since users are
always subscribed to streams within their realm, this code was
definitely wrong.
Discovered using the mypy type-checking tool.
At present, we only do a few simple checks on the client type inside
the event system, and this saves database/memcached queries.
Note that this preserves the structure of the marshalled name in
to_dict/from_dict as client_type to avoid an unnecessary migration.
Previously, client descriptors were referenced directly from the
handler object. Once we split the Tornado process into separate queue
and connection servers, these will no longer be in the same process,
so we need to reference them by ID instead.
This commit is somewhat ugly, but its purpose is to be early
preparation for splitting Tornado into a queue server and a frontend
server, and this code belongs, by and large, in the queue server
component.
Previously these were hardcoded in zproject/settings.py to be accessed
on localhost.
[Modified by Tim Abbott to adjust comments and fix configure-rabbitmq]
Previously:
* It wouldn't raise an exception if the stream didn't exist
* It didn't correctly handle being passed a stream name
that differed in case from the stream name in the database.
This removes from our cache a moderate amount of totally useless alert
word data corresponding to users who don't have any alert words.
Thanks to @dbiollo for the suggestion!
Just doing the database query is more readable, and has about the same
performance as before in the case where active user dicts for the
realm are in cache (and is substantially better in the rare case that
this isn't in the cache).
Thanks to @dbiollo for the perf investigation and suggestion!
Prior to adding reply-to-missed-message-email functionality, this adds
automated tests for the simpler case: incoming stream messages. Added
to the new file test_email_mirror.py.
Also removed the "if not body" code from process_stream_message that
will never run because of an upstream ZulipEmailForwardError exception.
get_realm is better in two key ways:
* It uses memcached to fetch the data and thus is faster.
* It does a case-insensitive query and thus is safer.
Previously we only did this when new human users were created via the
login process, which meant the management command to create a user did
not add the user to default streams (for example) and any future code
that might want to register a new Zulip user (such as the LDAP
integration) would need to import views/__init__.py in order to
properly set this up.
The do_send_missedmessage_events_reply_in_zulip function in the email
mirror didn't support an EMAIL_GATEWAY_PATTERN that wasn't of the form
%s@example.com (which resulted in replies to missed message emails failing
to be parsed).
This also requires updating the required version of oauthlib; previously an
appropriate version was being installed only because it was a dependency of
the wrong twitter library.
This only affects development environments and/or hand-built
installations relying on the contents of requirements.txt.
To fix existing environments, the incorrect API package needs to be
explicitly removed with `pip uninstall twitter`.
Fixes #86.
This reverts commit 39f2908a32c0276b1d87ecedc876c71dd35a9b2f.
We're not including the preview_fail.png image in the release.
(imported from commit 2de1451de2f9b1727fc3a7e64c380b71c0f2caa8)
This also removes the convenient way to run statsd in the Dev VM,
because we don't anticipate anyone doing that. It's just 2 lines of
config to configure it anyway:
STATSD_HOST = 'localhost'
STATSD_PREFIX = 'user'
(imported from commit 5b09422ee0e956bc7f336dd1e575634380b8bfa2)
Django commit 596564e80808 stores the user id in the session as a
string, which broke our code that extracts the user id and compares
it to the id of a UserProfile object.
(imported from commit 99defd7fea96553550fa19e0b2f3e91a1baac123)
We can add it back later but for now we can just stick with localhost
since that's what most people will want.
(imported from commit c5fe524282219dc62a0670f569c0cb6af04be339)
Fixes:

    File "/srv/zulip/zerver/lib/actions.py", line 605, in recipient_for_emails
      if not (normalized_emails & admin_realm_admin_emails or normalized_emails & settings.CROSS_REALM_BOT_EMAILS):
    TypeError: unsupported operand type(s) for &: 'set' and 'list'
(imported from commit f39a95dad7b3207e9188fc03926cd116061ef3f3)
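The fix is just to normalize the settings list to a set before
intersecting (a sketch; the function name is illustrative):

    from django.conf import settings

    def emails_are_allowed(normalized_emails, admin_realm_admin_emails):
        # CROSS_REALM_BOT_EMAILS is a list in settings; `set & list`
        # raises TypeError, so convert it before intersecting.
        cross_realm_bot_emails = set(settings.CROSS_REALM_BOT_EMAILS)
        return bool(normalized_emails & admin_realm_admin_emails or
                    normalized_emails & cross_realm_bot_emails)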
Include new field on Realm to control whether e-mail invitations are required
separately from whether the e-mail domain must match.
Allow control of these fields from admin panel.
Update logic in registration page to use these fields.
(imported from commit edc7f0a4c43b57361d9349e258ad4f217b426f88)
Also increase the number of messages sent as context from 5 to 10 and
look up to 15 minutes into the past for context.
(imported from commit bfaed9bcff1ee2047fc3b7a63acf93cd2d47cc7d)
Now we have 2 different Zulip apps out there, and they are signed with
two certs: Zulip and Dropbox. The Dropbox-signed apps are going to need
to be sent APNS notifications from the appropriate APNS connection.
(imported from commit 6db50c5811847db4f08e5c997c7bbb4b46cfc462)
Missed message emails were including the context messages in the number
of messages you were mentioned in.
(imported from commit 1749c5d272d2e17d6e28456ace932f80715103a3)
* Fixes a few bugs with missed message address for PMs and huddles.
* Uses missed message address for all missed message reply-to headers on
the zulip.com realm.
(imported from commit 61dd09386e1bbdf9a5096e2400984d31e73a5b74)
The one-time-use addresses are unique tokens which map to state stored
in Redis. We store the user_id, recipient_id, and subject. When an email
is received at this address, it is sent to the stored recipient by the
stored user. Anyone with this address can send a single message as this
user.
(imported from commit 4219417bdc30c033a6cf7a0c7c0939f7d0308144)
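Roughly (key layout, TTL, and address format are illustrative):

    import uuid

    def create_one_time_address(redis_client, user_profile_id, recipient_id, subject):
        # A random token maps, via Redis, back to who may send, to
        # whom, and under what subject.
        token = uuid.uuid4().hex
        key = 'missed_message:%s' % (token,)
        redis_client.hmset(key, {'user_profile_id': user_profile_id,
                                 'recipient_id': recipient_id,
                                 'subject': subject})
        redis_client.expire(key, 60 * 60 * 24 * 5)
        return 'mm%s@example.com' % (token,)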
Send a different missed message email for each recipient. This allows us
to set a different reply to address for each one. PMs and huddles use
the existing logic, replies will be sent to all parties via email.
Missed @-mention emails will have the reply to address set to the
stream's email address.
(imported from commit bfb7cf7c1382adbf3720caa74cbb927c10dea267)
One common place that this happens (for us) is on a local
Dropbox .dev.corp.dropbox.com instance, which can't be reached
by the Zulip servers.
This commit also:
* Fixes the test suite
* Properly previews /photos/ links
(imported from commit b4788b6236e7a9d390e1efc4673be34d9ba5e091)
Truthfully, the actual way to do this is going to be a bit
more involved and also involves changing Realm.NOTIFICATION_STREAM_NAME,
probably on a realm-by-realm basis.
(imported from commit b6a05849d215e07ee6716d116ff5e2c819d5b4be)
When you are @-mentioned in a stream, we will now send you up to the
last five messages which were sent in the past 5 minutes on the same
topic and stream.
(imported from commit 6df6c1cf868722a7bf76e54710e38741a7ac8f31)
URLs with a realm of "unk" will be queried against the new bucket to
determine the relevant realm of the uploading user.
(imported from commit 5d39801951face3cc33c46a61246ba434862a808)
This way if two browsers are disagreeing about your active status, the
active one wins. The active browser continues to update your timestamp,
and the idle browser's changes are discarded until the timestamp on your
active status expires.
(imported from commit dc29e013d045c4b72793097f611ba6802c58e57a)
We still don't show this in the frontend, aside from our usual "Not
delivered" message that we also show when you send to a non-existent
user.
Addresses #2349
(imported from commit 2f348b15a4d539987ddbcccbbf40e2be87c1f92d)
In a test run with a hand-constructed query, this sped up the query time from
280ms to 50ms.
(imported from commit 8cbe199ca50a487491d13d6d6ef940ea668c1038)
See #2357. We now support `~~~ .py ` with that trailing space.
Note that the test coverage is Python-side only due to
bugdown_matches_marked being set to false, since we don't yet
support language syntax on the client side.
(imported from commit ccd5fcb0eee01478d349161400103480678d7486)
Adds APIs to edit a bot's default_to_stream, default_events_register_stream
and default_all_public_streams.
(imported from commit c848a94b7932311143dad770c901d6688c936b6d)
Support setting default_to_stream, default_events_register_stream, and
default_all_public_streams in the bot creation API.
(imported from commit bef484dd8be9f8aacd65a959594075aea8bdf271)
This allows bot owners to configure which streams messages are delivered
to without needing to change webhook URLs or configuration files.
(imported from commit 32a0c26657c145b001cd8cb3ce0a0364d48902ce)
Before saving a Message object, call update_calculated_fields()
to set the has_attachment/has_image/has_link fields.
Note that the pre_save hook we added here does not get called
if you call bulk_create, hence the explicit call to
update_calculated_fields() in do_send_messages().
(imported from commit 1d60ae5908ef186aa5ff1e39277dbb2b765e60d4)
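The hook itself is small (import paths as in Zulip; the caveat about
bulk_create is the important part):

    from django.db.models.signals import pre_save
    from django.dispatch import receiver

    from zerver.models import Message

    @receiver(pre_save, sender=Message)
    def pre_save_message(sender, instance, **kwargs):
        # Fires on Message.save(), but NOT on bulk_create() -- which is
        # why do_send_messages() calls update_calculated_fields() itself.
        instance.update_calculated_fields()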
A stream is vacant when it has no subscribers and occupied when it has at least
one subscriber.
We have a slightly odd model where stream creation is conflated with
subscription creation. Streams are created by attempting to subscribe to a
stream that doesn't exist. We also hide streams with no subscribers from users
to make it seem like they've gone away. However, we can't actually remove those
streams because we want to preserve history.
This commit moves us towards a separation of these two concepts. By sending
events for stream creation, occupation, vacancy, and deletion, we allow clients
to directly observe the global state of streams rather than indirectly observing
subscription information. A more complete solution would involve adding a view
for explicitly creating streams without subscribing to them.
This commit does not handle the intricacies of invite-only streams. We
currently simply do not send these events for invite-only streams.
(imported from commit 5430e5a5eecefafcdba4f5d4f9aa665556fcc559)
Previously, the email mirror queue worker used the API bindings to send
messages to Zulip, as if it were any other API client.
This is inefficient since we're running the worker inside the Django
context on a machine with database access; we can instead just use the
internal message-sending functions we use elsewhere. This also resolves
potential issues with SSL certificates, etc. that might occur when we
were previously making an HTTPS connection.
(imported from commit 6de8015829bec440f1af0199a2138828e86ed2a4)
Previously, digest emails provided links to Zulip that didn't correctly
encode "/" if it occurred in a stream name or topic. By explicitly
specifying `safe=""`, we can request that urllib.quote escape such
slashes.
Closes trac #2294.
(imported from commit 2e6334672969d4cf4032d2ea5dc80091af96d672)
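For illustration (Python 2 spelling from this era; on Python 3 it is
urllib.parse.quote):

    import urllib

    urllib.quote('commits/tip')            # -> 'commits/tip' (slash kept)
    urllib.quote('commits/tip', safe='')   # -> 'commits%2Ftip' (slash escaped)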
We now allow the list of recipients to be sent as a
comma-delimited string with optional JSON encoding.
(imported from commit e928b037bbd258348eb5b2ecca486d0bb77f593e)
We now will match an alert word even if it is used at the boundary of
bolding, backtick escaping, or caret quoting.
Closes trac #2186.
(imported from commit 984bc63eb621772c95a01ca5c5bfeb190767f71f)
Before we deploy this commit, we must migrate the data from the staging redis
server to the new, dedicated redis server. The steps for doing so are the
following:
* Remove the zulip::redis puppet class from staging's zulip.conf
* ssh once from staging to redis-staging.zulip.net so that the host key is known
* Create a tunnel from redis0.zulip.net to staging.zulip.net
* zulip@redis0:~$ ssh -N -L 127.0.0.1:6380:127.0.0.1:6379 -o ServerAliveInterval=30 -o ServerAliveCountMax=3 staging.zulip.net
* Set the redis instance on redis0.zulip.net to replicate the one on staging.zulip.net
* redis 127.0.0.1:6379> slaveof 127.0.0.1 6380
* Stop the app on staging
* Stop redis-server on staging
* Promote the redis server on redis0.zulip.net to a master
* redis 127.0.0.1:6379> slaveof no one
* Do a puppet apply at this commit on staging (this will bring up the tunnel to redis0)
* Deploy this commit to staging (start the app on staging)
* Kill the tunnel from redis0.zulip.net to staging.zulip.net
* Uninstall redis-server on staging
The steps for migrating prod will be the same modulo s/staging/prod0/.
(imported from commit 546d258883ac299d65e896710edd0974b6bd60f8)
Apparently the "inline" treeprocessor is what runs the inline
patterns. Also re-enable the rewriting-to-https support.
(imported from commit 2fde2c1f15217a784f26b16db25ee745f424f2f0)
Have the server send down the stream's id for removal
events, and have the client use that id to look up the
stream in its internal data structures. This sets the
stage for eventually just sending the stream id (and not
the stream name) down to clients, once all our clients
are ready to use the stream id.
(imported from commit 922516c98fb79ffad8ae7da0396646663ca54fd0)
Here, we don't want to check the uploading users' realm when determining
message privacy, because that'll prevent non-Zulip users from having
email-mirror-uploaded images. Instead, we just pass along the target
realm for the message explicitly to upload_message_image()
(imported from commit 6891261552135b1f41ff9da55ffe963ee5000556)
Our overall guideline is that type names for events are singular, and the
list of events of that type is plural. 'subscriptions' was not following
this guideline, and (potentially as a result) had a bug where it was
impossible for clients to explicitly subscribe to subscription change
events properly.
(imported from commit 7b3162141fd673746e0489199966c29ea32ee876)
For EventsRegisterTest tests that exercise updates to streams and
subscriptions, we now validate that the events generated by
the actions under test conform to predicted schemas.
We define the schemas with help from the validators code
that is also sometimes used to validate incoming request
parameters for our views.
(imported from commit b4222b920a588e15cccee4a2349c074ca9697448)
This change also makes it so that the test_rename_stream()
test exercises the code path. We need to subscribe the user
to the stream in order to generate events.
(imported from commit 77f965efbf5a766eb8de23486e303fa135b2e638)
We now show the module name (e.g. "tests" or "test_hooks") in the
test output. This change also eliminates the intermediate use
of slashes in the test_name var, which was passed to
bounce_key_prefix_for_testing().
(imported from commit 58e73301037a0b07d7e437514c247f7cb559420e)
Instead of having home() set page_params.realm_name directly from
the user_profile object, have fetch_initial_state_data() set it.
This is more consistent with how we treat other data, and it protects
us against a race condition where realm name updates arrive during
the DB fetching.
(imported from commit 545e3bd73f150438126e3f941e9bebc7aa1d0614)
In particular, make the stream history inaccessible and free up the
name to be re-used.
(imported from commit 6063b7a484ed0ba0279a17d2b3e9a92b5ef1f762)
The file test_runner.py has our subclass of DjangoTestSuiteRunner
and various methods that help it work.
(imported from commit 8eca39a7ed3f8312c986224a810d4951559e7a8b)
The function update_user_profile_caches now operates on a list
of user_profiles, so callers like flush_realm() can benefit from
having a single cache_set_many() call. This slightly complicates
the call from flush_user_profile().
(imported from commit e064871d849b873c6ca388f00d4f7afaba1bf222)
For the realm-wide caches of active user dicts and alert words, just
make a single call to cache_delete() when you are deactivating a
realm. Before this change, we were doing O(N) cache_deletes as
part of the code path through flush_user_profile(). Now we just
call update_user_profile_caches() directly to clear the user_profile
caches.
This change also sets us up to turn flush_realm() into a post-save hook.
(imported from commit 699b4ea226ae15fc8c402cb4bc64ff6bdc041fc2)
This is a slight behavior change, as we now flush user_profile
caches for bots as well as humans.
(imported from commit 24c72c44d851ee4c66a67a4728cd6c548faeedcd)
This function updates all the user_profile-related caches
that are keyed on a per-user basis.
(This had some test coverage already.)
(imported from commit 37979400514a7b46a6dcb7e36665b0fee2f3c525)
Stream name and description updates were being sent to all of the
active users on a realm. They are now only sent to users who would have
information about that stream.
(imported from commit 2621ee8029f7356bf44ec493d7b5361bd546a8f5)
This is a lot simpler and eliminates a possible failure mode in the
data transfer path.
(imported from commit 19308d2715bbd12dc9385234f1d9156f91bdfae0)
To mutate the state for removing subscribers, the previous
code was essentially adding in event['subscriptions'] to
state['unsubscribed'], but that was a naive approach, since
the event object only has the name of the subscriber, and not
the full subscription info.
We instead effectively copy records over from state['subscriptions']
to state['unsubscribed'], and we also do surgery on the subscribers
that made me need to add the user_profile parameter to apply_events().
With the code apparently working now, I was able to remove the
match_except() test helper and use a more thorough matcher in
the test on do_remove_subscription().
Part of fixing the "remove" case was cleaning up the "add" case,
since they aren't quite symmetric opposites of each other, although
under this refactoring they now share the new name() helper.
(imported from commit 0deab67d0c7b08b3ad962493efae3762a835fd29)
Because full_name and is_admin changes go through many similar,
generic codepaths, it is almost more work at this point to keep
is_admin out of page_params as it is to just put it in. So
I put it in. This should pave the path for showing admins in
the GUI.
This commit actually started with my adding a test
that calling apply_events() with the notification you get
when calling do_change_is_admin() updates
state['realm_users'] to be similar to what you would get
out of fetch_initial_state_data(). We didn't have test coverage
there before. Making that test pass forced my hand to
either add is_admin to page_params or to special-case
apply_events() not to update page_params with is_admin. I
chose the former approach.
(imported from commit 1e49d59c66540014284529c29d5007224be6a0c6)
A description was added to streams and it is now displayed on the
subscriptions page. It cannot be set in the UI yet.
(imported from commit 81d08b65eee42dba87cd99dd5bd30106c4eb6c6a)
While we're at it, let's comment up the function so I know what this is
doing next time :)
(imported from commit e745be75fcd6dbce9997e1d73464619fc8b73996)
If a name change event arrived during the call to
fetch_initial_state_data(), we would call apply_events() to
update the data structure that eventually becomes page_params.
Our update code, instead of surgically updating the fields, was
just overwriting the record, which led to is_bot being taken
out of the record when only full_name was in the event.
Apart from fixing the "update" case to do the right thing,
this commit also does a bit of cleanup on the code handling
"realm_user" events to make it more generic in how it does
add/remove/update. If we could standardize our events a bit
more, this could eventually lead to DRYing up some of the
apply_events() code.
(imported from commit 772e2fcd6a5605ccb6e8d1bc499b5f336934cf3c)
When new streams are created we now send a message with a custom
markdown tag that renders a subscribe button.
(imported from commit 9dfba280b3b4ff4f32f6431ef9227867c8bf4b40)
Added a default_desktop_notifications boolean to userprofile with a UI
in Zulip Labs. This flag is used to default the notification flag on new
subscriptions.
(imported from commit a25223cc5ecf09980cf877991e25034bb3fd4046)
Google Groups won't let you add an email address that has a '+' in it
as a member of a group, so we allow '.' as well.
This commit also fixes a current issue with email mirroring, where it
doesn't work if you had a + in the stream name.
This fixes Trac #2102.
(imported from commit 9a7a5f5d16087f6f74fb5308e170a6f04387599e)
Cross-realm private messages are only used to respond to support
requests. So now cross-realm private messages are only allowed if
exactly two realms are in the private message and one of them is
zulip.com.
(imported from commit f01a2824e214682acb22a6995714a9d1b0d0c66f)
This does result in a few more rabbitmq events to be processed (though
a negligible number compared to what we already do), but it saves a
database query from inside Tornado whenever we occasionally have a
cache miss looking up the UserProfile, which is far more important.
(imported from commit a553a00a3004ba27bfb54ffbc3e9c9b170ebae4d)
When we edit a message, send out UserMessage flags to the recipients.
This sets the stage for making sure that changes related to
user-specific alert words or mentions get sent out to users.
(imported from commit bce1de19acef44b5e106352f261203352ece02b9)
Image and video links in the Twitter API are media and need to be
handled separately. We also include a preview image if the media link
is to a picture.
(imported from commit 2bd00d267e51b29ad0ba681195b2bfea9b991d8c)
This converts links in tweets to <a> tags. We also convert the displayed
text to the target of the Twitter short URL. Mentions are linked to the
user's Twitter page.
(imported from commit 192d5546a7eea82759f9ae30d82c102aed15ff71)
Previously, we were processing mirror dummy users (and deactivated
users) as recipients in Tornado -- resulting in bugs where the missed
message hooks would fire for these nonexistent users.
Our Tornado real-time delivery system only needs the list of active
users both for delivery and for presence information updates.
(imported from commit b81143f106a4d0eefa4b838e7c074b2963259746)
Before this is deployed to prod, we need to manually frob our database
to set the is_mirror_dummy=True bit for all existing mirror users.
(imported from commit 39f1938cef091cf1d7d97307f76b137fe1d92b6c)
This contains the various fixes that needed to be made in order to get
accurate statistics.
Most notably, the active_users_between function in the previous
version of zerver/lib/statistics.py was broken for end dates in the
past, because it used the UserActivity table to get its data -- so in
fact it really was querying "users last active between".
This commit isn't super clean, but I figure we're probably better off
having our latest code for historical usage data in git so it doesn't
bitrot and anyone can improve on it.
(imported from commit 24ff2f24a22e5bdc004ea8043d8da12deb97ff2f)
Features:
* Only shows messages in the narrow
* New messages in the narrow will arrive as they are sent
* Works even for streams you're not subscribed to
* Automatically subscribes you to a stream on send
* Doesn't update your pointer
* All searches etc. automatically have the narrow added
(imported from commit 2e12b76849f6ca0f53dda5985dad477a04f7bbac)
This sets up a scheme to validate complex data structures and
give specific error messages for improperly typed parameters.
(imported from commit 33b2f070d993da4ee929119dd41503bd0128c8eb)
* Deal with shorter tweet IDs
(some old tweets don't have a full 18-character ID)
* Allow trailing slash
* Deal with old-style #! syntax
* Deal with links that link to a photo
(imported from commit 008a98c806f3b8dddd9e2f18a8f002af6932766f)
These images at least load now, but that's because Camo redirects
the browser to the origin server, so the only effect is an extra
round-trip time.
(imported from commit 0d6b9c888a5cdfaa9299272d74a085e872dfa434)
In plaintext e-mails these will be simple links.
In HTML e-mails these become a <link> and <img>, which some web mail
clients may inline.
(imported from commit b1242dfd917008a019981eb2224c1c7f5f84739f)
You can't have unread PMs sent by you, so we weren't explicitly
checking this, but when testing locally we often ignore the unread
check. Filter PMs sent by you to reduce confusion when testing
locally.
(imported from commit 0205c4a3ed67790b9d60d4f2b927e4cb9e720bf3)
Since we changed the initial subscription data to include
user_profile_ids rather than emails, we need to preserve that when
adding in events generated during the page load.
(imported from commit 4f4071b8ba30e57c6f64c9e7b54c1cc754e8f010)
This will allow us to substantially decrease the server-side work that
we do to support our Mirroring systems (since the personal mirrors can
request only messages that user sent) and also is what we need to
support a single-stream Zulip widget that we embed in webpages.
(imported from commit 055f2e9a523920719815181f8fdb44d3384e4a34)
This caused problems with our test suite where we were using a logged
in browser session and actually acting as a different user.
(imported from commit 73b8cb39d5d669e682fbacf2f7e574c228885c2f)
This replaces the AppleDeviceToken table with a generic
PushDeviceToken with a `kind` field to make it easier to add functionality
like per-device/per-stream settings that share code between Android and
iOS devices.
The schema must continue to work on prod with the old table name, so we
add the new table in parallel and can drop the old table once this code
hits prod and any necessary data is copied.
(imported from commit 0209a7013f2850ac6311f23c3d6f92c65ffd19e3)
We have observed additional exceptions being thrown from zulip_finish and we
need to make sure that the handler is disconnected from the queue, or else the
event queue will keep throwing exceptions due to the handler being closed.
(imported from commit 59273aa14495216430b9eb1525b2cce230d8913d)
This should dramatically improve the speed of the dump/load part of
our restart process, especially with large long-lived event queues.
(imported from commit ae4ae20ba2ca4433e25a5e7beeb4fa4882c53972)
Previously, we had an issue with the ACKing protocol, where if a
virtualizable event (like a "read" flag) was dispatched to a queue
client immediately, we would not properly ACK the change because it
had been made a virtual event.
(imported from commit ea09812f8a5ba1d5aad65f536022e3dbc77b0f9e)
Unbundle the push notifications from the missed message queue processors
and handlers. This makes notifications more immediate, and sets things up
for better badge count handling, and possibly per-stream filtering.
(imported from commit 11840301751b0bbcb3a99848ff9868d9023b665b)
This should dramatically improve the speed of the dump/load part of
our restart process, especially with large long-lived event queues.
(imported from commit cc493fa50b4c339257e060b3f0c0956c682e449d)
Now that we support email aliases, we have to be careful when going from
an email address to a domain that we assume we can use to get a Realm
object for. When we care about the Realm's domain, we want to follow
any RealmAliases that exist for a certain domain.
When we just care about the original email address domain itself,
for comparison or other purposes, use split_email_from_domain.
This removes the ambiguity of having to decide when to use
email_to_domain + RealmAlias or just email_to_domain.
(imported from commit 0e199495502d946ce2e1aae56263e7e8665be4ed)
Now we can nest fenced code/quote blocks inside of quote
blocks down to arbitrary depths. Code blocks are always leaves.
Fenced blocks start with at least three tildes or backticks,
and the clump of punctuation then becomes the terminator for
the block. If the user ends their message without terminators,
all blocks are automatically closed.
When inside a quote block, you can start another fenced block
with any header that doesn't match the end-string of the outer
block. (If you don't want to specify a language, then you
can change the number of backticks/tildes to avoid ambiguity.)
Most of the heavy lifting happens in FencedBlockPreprocessor.run().
The parser works by pushing handlers on to a stack and popping
them off when the ends of blocks are encountered. Parents communicate
with their children by passing in a simple Python list of strings
for the child to append to. Handlers also maintain their own
lists for their own content, and when their done() method is called,
they render their data as needed.
The handlers are objects returned by functions, and the handler
functions close over the variables push, pop, and processor. The closure
style here makes the handlers pretty tightly coupled to the outer
run() method. If we wanted to move to a class-based style, the
tradeoff would be that the class instances would have to marshall
push/pop/processor etc., but we could test the components more
easily in isolation.
Dealing with blank lines is very fiddly inside of bugdown.
The new functionality here is captured in the test
BugdownTest.test_complexly_nested_quote().
(imported from commit 53886c8de74bdf2bbd3cef8be9de25f05bddb93c)
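A toy reduction of the push/pop structure (the handler objects here
are hypothetical stand-ins for the real FencedBlockPreprocessor
machinery):

    def run(lines, make_handler):
        # make_handler(output, line) returns a handler object when
        # `line` opens a fenced block, else None.
        output = []
        stack = []
        for line in lines:
            if stack and stack[-1].is_end(line):
                stack.pop().done()     # terminator: render and pop
                continue
            handler = make_handler(output, line)
            if handler is not None:
                stack.append(handler)  # new fenced block: push
            elif stack:
                stack[-1].handle_line(line)
            else:
                output.append(line)
        while stack:
            stack.pop().done()         # auto-close unterminated blocks
        return output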
This reverts commit 1147814b22fb9737a807057ddbdbe0e9554086e0.
This seems to sometimes screw up our Zephyr mirroring script.
(imported from commit 4f82452f1b0ca98e6b895db020e071d2daa325e4)
This requires a puppet apply on each of staging and prod0 to update
the nginx configuration to support the new URL when it is deployed.
(imported from commit a35a71a563fd1daca0d3ea4ec6874c5719a8564f)
There will be browser errors on staging when this is deployed due to the socket
protocol changing.
(imported from commit f1eda5b5c2ec9c60c23b3ca96277a61debadf5bb)
I believe there may also be others. I'm still not sure why clients would be
sending open requests without session or csrf values in their cookies, though.
(imported from commit 7e9660c1c4d5c2abf55ff21b433ba0117180eb82)
CUSTOMER13 doesn't want it, and there's currently no nginx config
or configurable Camo URI, so it wouldn't work if image preview
were enabled.
(imported from commit 615d4a32acbc4d4d590f88cf4e7d45d8f49db1d3)
ScheduledJobs with type Email displace the usual Mandrill codepaths
in the Zulip Enterprise deploys.
* Email-specific helper functions will appear in deliver_email.py
* 0058_auto__add_scheduledjob.py
(imported from commit 8db08d8a279600322acfdbed792dc1a676f7a0ab)
The !gravatar markdown no longer hard codes to Gravatar, but
instead it serves up our generic avatar URL.
(imported from commit 4e3e2baeb3374bcf025a18ff27a8452b975c22b7)
We also now separate out the times for the socket overhead, the
request service time, and the queuing delays.
(imported from commit e1683f7f28b968b86ebb701b0ac29b00ac6d67c3)
One quirk here is that the Request object is built in the
message_sender worker, not Tornado. This means that the request time
only counts time taken for the actual sending and does not account
for socket overhead. For this reason, I've left the fake logging in
for now so we can compare the two times.
(imported from commit b0c60a3017527a328cadf11ba68166e59cf23ddf)
This logging is kinda excessive since it adds like 4 log lines per
recipient, so I expect we'll end up reverting it once we've debugged
the proximal issue.
(imported from commit 5e6ab3e230f32b65ad9cf0d95f20ffbc0fe7397e)
This will hopefully help with the send dialog being stuck on
"sending" as well as allowing us to not show errors to the user on
reconnect.
(imported from commit 31ee889853f348e486863073dc130cdfb4e1338d)
Clients can only have one connection at a time, anyway, so we can
just keep track of a client id, instead. This makes reconnections
easier.
It's a little funny to use queue ids for the client id, but we know
they should exist by the time the client is connecting and they are
guaranteed to already be unique and authenticatable. We will also
eventually be integrating the event system and the socket code closer
anyway.
(imported from commit 1f60e06fb16d31d6c121deafd493fb304d19a6c2)
If you don't have a cookie or basic auth and the request looks like
a top-level page in the browser, redirect to the login page.
(imported from commit fc1bcb1080591522bd1b694664255f7049a5d443)
The register_json_consumer() function now expects its callback
function to accept a single argument, which is the payload, as
none of the callbacks cared about channel, method, and properties.
This change breaks down as follows:
* A couple test stubs and subclasses were simplified.
* All the consume() and consume_wrapper() functions in
queue_processors.py were simplified.
* Two callbacks via runtornado.py were simplified. One
of the callbacks was socket.respond_send_message, which
had an additional caller, i.e. not register_json_consumer()
calling back to it, and the caller was simplified not
to pass None for the three removed arguments.
(imported from commit 792316e20be619458dd5036745233f37e6ffcf43)
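The wrapper boils down to this (the consumer registration method and
JSON library are assumptions):

    import ujson

    def register_json_consumer(self, queue_name, callback):
        # Callers now supply a callback taking just the decoded payload;
        # the channel/method/properties plumbing stays in the wrapper.
        def wrapped_consumer(ch, method, properties, body):
            callback(ujson.loads(body))
            ch.basic_ack(delivery_tag=method.delivery_tag)
        self.register_consumer(queue_name, wrapped_consumer)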
Looking at the historical data, fewer than 50% of active users have
completed the checklist, which means that it is just persistent
clutter. We also have other better ways of encouraging people to send
traffic and get the apps now.
This commit removes both the frontend UI and backend work but leaves
the db row for now for the historical data.
(imported from commit e8f5780be37bbc75f794fb118e4dd41d8811f2bf)
Here we introduce a new Django app, zilencer. The intent is to not have
this app enabled on LOCALSERVER instances, and for it to grow to include
all the functionality we want to have in our central server that isn't
relevant for local deployments.
Currently we have to modify functions in zerver/* to match; in the
future, it would be cool to have the relevant shared code broken out
into a separate library.
This commit includes both the migration to create the models as well as a
data migration that (for non-LOCALSERVER) creates a single default
Deployment for zulip.com.
To apply this migration to your system, run:
./manage.py migrate zilencer
(imported from commit 86d5497ac120e03fa7f298a9cc08b192d5939b43)
Trac #1734
This is implemented by bouncing uploaded file links through a view
that checks authentication and redirects to an expiring S3 URL.
This makes file uploads return a domain-relative URI. The client converts
this to an absolute URI when it's in the composebox, then back to relative
when it's submitted to the server.
We need the relative URI because the same message may be viewed across
{staging,www,zephyr}.zulip.com, which have different cookies.
(imported from commit 33acb2abaa3002325f389d5198fb20ee1b30f5fa)
This has a small bug where we don't actually filter the message out of
the home view; fixing that requires adding an index on the "flags"
field of UserMessage.
(imported from commit 492c99d0a8e87b253e577be6564bec12099bd8e9)
Because our authentication system reads cookies from the initial
connection attempt, several SockJS transports can't be used.
(imported from commit 34b9571225d39072985b8223fb12c43c7235841f)
New dependency: sockjs-tornado
One known limitation is that we don't clean up sessions for
non-websockets transports. This is a bug in Tornado so I'm going to
look at upgrading us to the latest version:
https://github.com/mrjoes/sockjs-tornado/issues/47
(imported from commit 31cdb7596dd5ee094ab006c31757db17dca8899b)
The gather_subscriptions_helper() does a separate query to
get emails from user_ids, and it returns an email_dict to its
caller.
This may seem like a step backward, since gather_subscriptions()
now needs to do an additional query, but there is some benefit
in passing fewer redundant emails over the wire from the DB.
The real payoff, though, will come in subsequent commits, where
we will reduce the amount of data going over the wire to the browser,
which will benefit users with slow connections.
(imported from commit bf1cc5828a4c5f68cafd052ea29a177837970206)
Arguably the nl2br extension should be doing this for us. Given that
we're using nl2br, the "two spaces at the end of a line makes a line
break" rule doesn't make any sense (since every newline leads to a
linebreak), so we disable it.
(imported from commit 5ffa2ac8a825642ad31e085c532091e076665710)
clear_followup_emails_queue now filters by from_email too.
send_local_email_template_with_delay passes the template_payload into
the subject template.
The current version should only be used for testing; for example,
if you want to create a bunch of streams for stress testing, you
can run this in a loop.
(imported from commit ec51a431fb9679fc18379e4c6ecdba66bc75a395)
It makes the event queue return all messages on public streams, rather
than only the user's subscriptions. It's meant for use with chat bots.
(imported from commit 12d7e9e9586369efa7e7ff9eb060f25360327f71)
Trac #1162
The process_fence method replaces code blocks with placeholders, so
indexes stored before the replacement are incorrect. However, because
the closed code blocks have been replaced, we can simply search the
whole string for any remaining opening code block markers.
(imported from commit 6a9e6924840f8f3ca5175da7c52a905e27c1fabd)
I added filter() statements to do_update_message_flags().
Here is some context:
Steve Howell: Case 1, have AND clause to reduce work for DB.
humbug=> update zerver_usermessage set flags = (flags & ~1) where id > 9000;
UPDATE 382
humbug=> select count(*) from zerver_usermessage where (flags & 1) = 0;
count
-------
382
(1 row)
humbug=> explain analyze update zerver_usermessage set flags = (flags | 1) where (flags & 1) = 0;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
Update on zerver_usermessage (cost=0.00..266.85 rows=47 width=27) (actual time=5.727..5.727 rows=0 loops=1)
-> Seq Scan on zerver_usermessage (cost=0.00..266.85 rows=47 width=27) (actual time=0.045..2.751 rows=382 loops=1)
Filter: ((flags & 1::bigint) = 0)
Rows Removed by Filter: 9000
Total runtime: 5.759 ms
(5 rows)
humbug=> select count(*) from zerver_usermessage where (flags & 1) = 0;
count
-------
0
(1 row)
Leo Franchi: Sounds reasonable, but I know way less than zev about DBs so I'll defer to his judgement :)
Steve Howell: Case 2, how the code works now:
humbug=> update zerver_usermessage set flags = (flags & ~1) where id > 9000;
UPDATE 382
humbug=> select count(*) from zerver_usermessage where (flags & 1) = 0;
count
-------
382
(1 row)
humbug=> explain analyze update zerver_usermessage set flags = (flags | 1);
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
Update on zerver_usermessage (cost=0.00..243.28 rows=9382 width=27) (actual time=362.075..362.075 rows=0 loops=1)
-> Seq Scan on zerver_usermessage (cost=0.00..243.28 rows=9382 width=27) (actual time=0.008..6.138 rows=9382 loops=1)
Total runtime: 362.105 ms
(3 rows)
humbug=> select count(*) from zerver_usermessage where (flags & 1) = 0;
count
-------
0
(1 row)
Steve Howell: In both trials, we set it up so that only 382 of 9382 rows need to be updated. The first trial runs about 63x as fast. The second trial, if my theory is correct, is doing 24x as many writes as it needs. Both trials are reading all 9382 rows.
Steve Howell: The expense of the update statement seems to be proportional to the number of rows you "update", not the number of rows that you actually change.
Steve Howell: For now I created #1869.
Zev Benjamin: That sounds like a reasonable explanation. The disk IO can be expensive
(imported from commit d9090daee1f81cad76c430de0956f9bd504da075)
Handled by the queue processor for signups. Added a management command
that accomplishes the same task, in case it's needed for manually added users,
or in case we goof and need to remove queued emails for a given user.
This addresses Trac #1807
(imported from commit 6727b82a07fa6a3ea3d827860c9e60fd0602297a)
We want to avoid opening a DB connection in the markdown thread,
as its DB connection might live for a long time.
(imported from commit 7700b2ca793ee5e9add7f071b92f22a4bf576b3d)
This will hopefully incentivize people to click one and get back into
the app.
We'll also need this for digest emails.
(imported from commit 57191c3fcca3b12df93a81e4692bb7eb8ccc83b2)
The text of manual links is already an AtomicString, so linkified strings
should be too.
Moves emoji detection to happen after linkification, so the emoji rule
won't look at links.
(imported from commit 9c56bce6a0e873b398255e0762dfb312a4a9a64e)
InlinePatterns should return None on failure, not text that may
have placeholders in it.
(imported from commit f9d8d22b2b8cfa7a92ecf3e52a6c76b48e6f0175)
This function doesn't require the whole UserProfile object to
create the avatar url, and we call it from Message.to_dict_uncached().
(imported from commit e814caab101c4fedd1ba66df041a3408014e4085)
The realm should always be the realm of the stream, and we should
always pass in a stream rather than sometimes passing in a stream name
and other times passing in a stream.
(imported from commit a098d6ed3db218a37c1b6b7c956e847c316c2d13)
We have been persisting muting preferences on the back end for
a while, but we haven't been adding them to page_params for the
client to have at reload/startup time.
(imported from commit d9ca68aa0e4d22bfb0e6ce67fc0bc63981175c8b)
We now bulk-fetch subscription information once from the database
and use it throughout bulk_add_subscriptions in order to avoid
hitting the db O(streams) times.
On my machine this shaved the accounts_register API call from making
66 queries to making 37 queries.
(imported from commit 5dd5ad3f50b2a6edf85b5f1d55ebd697a1c60647)
When we send a message, we send some presence information to Tornado
to help it figure out how to generate emails for idle recipients of
a message. This change limits the presence info to being the
intersection of present users and recipients of the message. It is
just an internal optimization to avoid queueing up unneeded data.
The history behind this feature is that I implemented it a while
back, but I think I made a rebase mistake that sent all the presence
data over the wire, despite having code to filter on recipients.
It was mostly harmless, just leading to some inefficiency which is
now fixed.
(imported from commit 7c8e97705afb299c67b99053909e952fbc823551)
For a 4-person stream, we were hitting the DB 8 times, and 4 of
those queries were to lazily get user.email for the 4 recipients
due to upstream code using only(). I added user_profile__email
to the only() call.
I believe this regression started 9/18, and after pushing this
to prod, we should look at this graph:
https://stats1.zulip.net/graphs/8274cd84588
(imported from commit 70629cb69fe5955c674ba76482609dfe78e5faaf)
Use stream.num_subscribers() in check_if_a_bot_is_sending_a_message_to_an_empty_stream().
The num_subscribers() function uses Django's count() method, which returns
a single row, vs. len() on an iterator of query rows.
(imported from commit 6157fe248945e9288ee71d8cc39fb6dda4e9a247)
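A sketch of the model method, mirroring the subscriber query shown in
a nearby commit (filter fields assumed):

    from zerver.models import Recipient, Subscription

    def num_subscribers(self):
        # count() issues SELECT COUNT(*) and transfers one row, instead
        # of materializing every Subscription just to len() a list.
        return Subscription.objects.filter(
            recipient__type=Recipient.STREAM,
            recipient__type_id=self.id,
            user_profile__is_active=True,
            active=True,
        ).count()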
Some bots created by us do not have owners. Don't try to send a
message to the nonexistent owner.
(imported from commit ab952eccd7d6c4728e9477a106142214b5c81ca9)
Instead just rely on the 2-minute delay in the management command to
batch conversations.
We've had people report being confused or thinking the feature was
broken when they didn't get e-mails because of our rate-limiting, so
let's see if this is not too overwhelming.
(imported from commit 706ddb07b906b5c2edea1159c04acc2ee6f06e29)
Don't send peer_add notifications to users who are already
getting add notifications, because they will already know
about subscribers.
(imported from commit 726b54ae0e30b71440b17d9c51b026872ea96218)
It only grabs the user_profile_id column now. This leads to a
speedup of about 16x between grabbing large ORM objects vs.
small 1-column dictionaries.
(imported from commit 95150bff3fdcbe250b04f014062224af42a6644f)
Splitting out notify_peers() will give us flexibility for cleaning
up how we notify peers for bulk adds.
(imported from commit e108fa2c432cc1fe54d788c58c82c983e0f2394e)
If you expand subscribers on your settings page, you will now see
a query like this in your postgres logs:
SELECT "zerver_userprofile"."email"
FROM "zerver_subscription" INNER JOIN "zerver_recipient" ON ("zerver_subscription"."recipient_id" = "zerver_recipient"."id") INNER JOIN "zerver_userprofile" ON ("zerver_subscription"."user_profile_id" = "zerver_userprofile"."id") WHERE ("zerver_recipient"."type" = 2 AND "zerver_subscription"."active" = true AND "zerver_recipient"."type_id" = 40 AND "zerver_userprofile"."is_active" = true )
The join's still complicated, but the list of fields is one instead of 40+.
(imported from commit 48de1f888193a4d23fcea52d0b633d134e4a3ff7)
get_subscribers_backend() now calls the new get_subscriber_emails()
function, which just queries the email field:
"zerver_userprofile"."email"
...instead of querying about 40 fields that it never uses.
I was able to verify the query slimming by watching my postgres server log.
Also, you can verify that the ORM does roughly 16x less work using values():
>>> def f(): return [sub.user_profile.email for sub in list(Subscription.objects.all().select_related())]
...
>>> def g(): return [row['user_profile__email'] for row in list(Subscription.objects.all().values('user_profile__email'))]
...
>>> def timeit(func): t = time.time(); func(); return time.time() - t
...
>>> timeit(f)
0.045198917388916016
>>> timeit(g)
0.002752065658569336
(imported from commit a69f690a96d076b323fdfc2f4821b0548bdfac7f)
LinkPattern returned a string which contained a placeholder if the URL was
considered invalid. AtomicLinkPattern wrapped this in an AtomicString,
where the placeholder doesn't get removed properly.
m.group(0) is always incorrect because python-markdown modifies your regex
to include more than you specified (this is why part of the message got
duplicated).
(imported from commit 576bdf09c2b677cf4bc56484c363eb05f2110158)
We have to read the data anyway, and we don't have a convenient file
handle for uploads from attachments sent through the e-mail gateway.
(imported from commit 86260a4eaceef85c82707929a80558e11dc54ef6)
The get_status_dict_by_realm helper gets called whenever our
realm user_presences cache expires, and it used to query these fields:
"zerver_userpresence"."id", "zerver_userpresence"."user_profile_id", "zerver_userpresence"."client_id", "zerver_userpresence"."timestamp", "zerver_userpresence"."status", "zerver_userprofile"."id", "zerver_userprofile"."password", "zerver_userprofile"."last_login", "zerver_userprofile"."is_superuser", "zerver_userprofile"."email", "zerver_userprofile"."is_staff", "zerver_userprofile"."is_active", "zerver_userprofile"."is_bot", "zerver_userprofile"."date_joined", "zerver_userprofile"."bot_owner_id", "zerver_userprofile"."full_name", "zerver_userprofile"."short_name", "zerver_userprofile"."pointer", "zerver_userprofile"."last_pointer_updater", "zerver_userprofile"."realm_id", "zerver_userprofile"."api_key", "zerver_userprofile"."enable_desktop_notifications", "zerver_userprofile"."enable_sounds", "zerver_userprofile"."enter_sends", "zerver_userprofile"."enable_offline_email_notifications", "zerver_userprofile"."last_reminder", "zerver_userprofile"."rate_limits", "zerver_userprofile"."avatar_source", "zerver_userprofile"."tutorial_status", "zerver_userprofile"."onboarding_steps", "zerver_userprofile"."invites_granted", "zerver_userprofile"."invites_used", "zerver_userprofile"."alert_words", "zerver_userprofile"."muted_topics", "zerver_client"."id", "zerver_client"."name"
Now it queries just the fields it needs:
"zerver_client"."name", "zerver_userpresence"."status", "zerver_userpresence"."timestamp", "zerver_userprofile"."email" FROM "zerver_userpresence"
Also, get_status_dict_by_realm is now namespaced under UserPresence as a static method.
(imported from commit be1266844b6bd28b6c615594796713c026a850a1)
This function gets user presence information, which changes rapidly
and requires a pretty simple query.
(imported from commit f9b9f0f22277335c76eb4371930a4fff2758a240)
The do_send_messages() populates the user_presences data structure
for process_new_message(), so that Tornado code never needs to hit
the database or memcached to get the user presence info.
(imported from commit 194aeaead8fa712297a2ee8aff5aa773b92f1207)
This reduces the number of memcached calls we make in our time-
slice-limited tornado event handler.
(imported from commit 8903ce4ac754ba82d57e04d1b0356be7533edee2)
These engagement data will be useful both for making pretty graphs of
how addicted our users are as well as for allowing us to check whether
a new deployment is actually using the product or not.
This measures "number of minutes during which each user had checked
the app within the previous 15 minutes". It should correctly not
count server-initiated reloads.
It's possible that we should use something less aggressive than
mousemove; I'm a little torn on that because you really can check the
app for new messages without doing anything active.
This is somewhat tested but there are a few outstanding issues:
* Mobile apps don't report these data. It should be as easy as having
them send in update_active_status queries with new_user_input=true.
* The semantics of this should be better documented (e.g. the
management script should print out the spec above).
(imported from commit ec8b2dc96b180e1951df00490707ae916887178e)
We found that since bugdown processes are threaded, the cost of
doing a db query in a markdown processor is quite high---each
thread must start up a new db connection, including an SSL handshake
etc. We should strive to keep our rendering pipeline free of mandatory
DB queries.
(imported from commit 555066bd03da6c681b74ce6137acc264eb41c55d)
It is triggered by specifying the "language" of a code block to
"quote" or "quoted":
Hamlet said:
~~~ quote
To be or **not** to be.
That is the question
~~~
(imported from commit 847a0602e335e9f2955e32d9955adf8ac8de068c)
It was getting hard to follow and is going to get more complicated
with a new super user check in a later commit.
(imported from commit 8d5cfa960824519d87ce0f09aab3a120ba9ef357)
An important part of this is updating the various caches that cache
the display_recipient.
(imported from commit 888bf54fd205516cf31a25ba3f4e45ad11bbd4d5)
This shows up when you're not running a Zephyr mirroring bot and lets
you use Webathena to have us run it. Obviously needs more docs.
Current problems include:
* supervisorctl reload ends up recreating /var/run/supervisor.sock
with the wrong permissions, so it only works once in a row before
you need to chmod that.
* /etc/supervisor/conf.d needs to be humbug-writeable; this is a clear
local root vulnerability
* This uses SSH and thus is kinda slow.
(imported from commit 7029979615ffd50b10f126ce2cf9a85a5eefd7a2)
Before this it was [deleted]. Using parens is consistent with how we put
in (no topic) if you don't specify a topic.
(imported from commit 931c06a1096cf7b0d226336cbe82535abd2e6032)
This helps make our statuses more meaningful and should resolve trac #1534.
As part of this, we lower OFFLINE_THRESHOLD_SECS to 70 seconds and
mark the user as idle after 5 minutes.
(imported from commit ee6b1ad203554a84b11e16c4c6195be9df5bcf4f)
This change would allow anyone in the realm to set a topic for a "no topic"
message. As soon as the message topic is set, only the sender can change it again.
(imported from commit 0a91a93b8fd14549965cedc79f45ffd869d82307)
This has the amusing side effect of showing all the Zulip bots in the
administration view because none of them have the is_bot flag set.
(imported from commit cdec19d2109c092018c1f331aa32f345d1587683)
ALLOW_REGISTER was no longer being used in determining whether you could
register for the app, so I've removed it to avoid additional local-dev /
production issues.
This closes #1613.
(imported from commit c928c6d350602d35f745ae1e60d734e4567885fc)
On Debian systems, this is found in the `python-dns` package.
On OS X and others, install "pydns" using your Python package manager.
(imported from commit 17827d0a1d3d72b12945df5563295a1573bfa1ed)
This was previously causing us to generate a traceback every time we
hit a duplicated zephyr due to CC'ing.
(imported from commit 240e1559655d0166dcd864e84649ab97b87a29ad)
Our API documentation says that we do, and it seems like it could be
useful to clients, so we might as well do it.
(imported from commit c391e4952a09d41df4dc06e3dc6ee094f774822b)