Previously, we suggested running
`python -c import zerver.tests.test_mytest`
when importing test_mytest failed, which doesn't work.
This commit adds the missing quotes, making it
`python -c 'import zerver.tests.test_mytest'`
Adds a new field org_type to Realm. Defaults for restricted_to_domain
and invite_required are now controlled by org_type at time of realm
creation (see zerver.lib.actions.do_create_realm), rather than at the
database level. Note that the backend defaults are all
org_type=corporate, since that matches the current assumptions in the
codebase, whereas the frontend default is org_type=community, since if
a user isn't sure they probably want community.
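A sketch of the idea in do_create_realm (the exact flag mapping below is
illustrative, not the committed code):

```python
CORPORATE, COMMUNITY = 1, 2  # illustrative constants for Realm.org_type

def realm_creation_defaults(org_type):
    # Pick restricted_to_domain/invite_required at realm-creation time based
    # on org_type, instead of relying on database-level defaults.
    if org_type == CORPORATE:
        return dict(restricted_to_domain=True, invite_required=False)
    return dict(restricted_to_domain=False, invite_required=True)
```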
Since we will likely in the future enable/disable various
administrative features based on whether an organization is corporate
or community, we discuss those issues in the realm creation form.
Before we actually implement any such features, we'll want to make
sure users understand what type of organization they are a member of.
Choice of org_type (via radio button) has been added to the realm
creation flow and the realm creation management command, and the
open-realm option removed.
The database defaults have not been changed, which allows our testing code
to work unchanged.
[includes some HTML/CSS work by Brock Whittaker to make it look nice]
Previously, the generate-fixtures shell script called into Django
multiple times in order to check whether the database was in a
reasonable state. Since there's a lot of overhead to starting up
Django, this resulted in `test-backend` and `test-js-with-casper`
being quite slow to run a single small test (2.8s or so) even on my
very fast laptop.
We fix this by moving the checks into a new Python library, so that
we can avoid paying the Django startup overhead 3 times unnecessarily.
The result saves about 1.2s (~40%) from the time required to run a
single backend test.
Fixes #1221.
This is a first pass at building a framework for collecting various
stats about realms, users, streams, etc. Includes:
* New analytics tables for storing counts data
* Raw SQL queries for pulling data from zerver/models.py tables
* Aggregation functions for aggregating hourly stats into daily stats, and
aggregating user/stream level stats into realm level stats
* A management command for pulling the data
Note that counts.py was added to the linter exclude list due to errors
around %%s.
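A minimal sketch of the hourly-to-daily aggregation step, with a hypothetical
data shape (the committed code in counts.py uses raw SQL):

```python
from collections import defaultdict

def aggregate_hourly_to_daily(hourly_rows):
    # hourly_rows: iterable of (day, property, value) tuples, where `day` is
    # the calendar day the hourly bucket falls on.  Returns daily totals
    # keyed by (day, property).
    daily = defaultdict(int)
    for day, prop, value in hourly_rows:
        daily[(day, prop)] += value
    return daily
```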
The command to render old messages now looks for all messages
not matching the bugdown version, and it no longer directly calls
into model code. We should still be extremely cautious about
using this code.
This pulls message-related code from models.py into a new
module called message.py, and it starts to break some bugdown
dependencies. All the methods here are basically related to
serializing Message objects as dictionaries for caches and
events.
extract_message_dict
stringify_message_dict
message_to_dict
message_to_dict_json
MessageDict.to_dict_uncached
MessageDict.to_dict_uncached_helper
MessageDict.build_dict_from_raw_db_row
MessageDict.build_message_dict
This fix also removes a circular dependency related
to get_avatar_url.
Also, there was kind of a latent bug in Message.need_to_render_content
where it was depending on other calls to Message to import bugdown
and set it globally in the namespace. We really need to just
eliminate the function, since it's so small and used by code that
may be doing very sketchy things, but for now I just fix it. (The
bug would possibly be exposed by moving build_message_dict out to the
library.)
I move these three functions to lib/cache.py:
to_dict_cache_key_id
to_dict_cache_key
flush_message
This will prepare us for a more significant refactoring that
eventually breaks down some circular dependencies with
Message and bugdown.
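Roughly the shape of the relocated cache-key helpers (a sketch; the exact key
format may differ):

```python
def to_dict_cache_key_id(message_id, apply_markdown):
    # type: (int, bool) -> text_type
    return u'message_dict:%d:%d' % (message_id, apply_markdown)

def to_dict_cache_key(message, apply_markdown):
    # type: (Message, bool) -> text_type
    return to_dict_cache_key_id(message.id, apply_markdown)
```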
This adds support for running a Zulip production server with each
realm on its own unique subdomain, e.g. https://realm_name.example.com.
This patch includes a ton of important features:
* Configuring the Zulip session middleware to issue cookies correctly
for the subdomains case.
* Throwing an error if the user tries to visit an invalid subdomain.
* Running a portion of the Casper tests with REALMS_HAVE_SUBDOMAINS
enabled to test the subdomain signup process.
* Updating our integrations documentation to refer to the current subdomain.
* Enforcing that users can only log in to the subdomain of their realm
(but does not restrict the API; that will be tightened in a future commit).
Note that toggling settings.REALMS_HAVE_SUBDOMAINS on a live server is
not supported without manual intervention (the main problem will be
adding "subdomain" values for all the existing realms).
[substantially modified by tabbott as part of merging]
This sets the “title” attribute on the image to the actual title of
the image specified by the user in their markdown, rather than just
the URL of the full link to it.
We can now rely on UserProfile.last_reminder being time zone
aware, or even if it isn't, it's a self-correcting problem the
first time a reminder is sent. (It's a non-problem to be off
by a few timezones if somebody still has an old value there, because
they will still be outside the 1-minute nag window even with the
timezone disparity.)
We no longer use all the alert words for all the users in the
entire realm when we look for alert words in a newly sent/edited
message. Now we limit the search to only all the alert words
for all the users who will get UserMessage records. This will
hopefully make a big difference for big realms where most messages
are only sent to a small subset of users.
The bugdown parser no longer has a concept of which users need which
alert words, since it can't really do anything actionable with that info
from a rendering standpoint.
Instead, our calling code passes in a set of search words to the parser.
The parser returns the list of words it finds in the message.
Then the model method builds up the list of user ids that should be
flagged as having alert words in the message.
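A sketch of the new model-side step, assuming realm_alert_words maps user ids
to their alert words and words_in_message is what the parser reported finding
(both shapes are assumptions for illustration):

```python
def users_with_alert_words(realm_alert_words, words_in_message):
    # realm_alert_words: {user_id: [alert words]} for the users who will get
    # UserMessage rows; words_in_message: the words bugdown found in the message.
    found = {word.lower() for word in words_in_message}
    return {
        user_id
        for user_id, words in realm_alert_words.items()
        if any(word.lower() in found for word in words)
    }
```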
This refactoring is a little more involved than I'd like, but there are
still some circular dependency issues with rendering code, so I need to
pass in the rather complicated realm_alert_words data structure all the way
from the action through the model to the renderer.
This change shouldn't change the overall behavior of the system, except
that it does remove some duplicate regex checks that were occurring when
multiple users may have had the same alert word.
We now use render_incoming_message() to render all incoming
new messages (sends/edits), so that they will get the same treatment.
This change also establishes do_send_messages() as the code
path to get new messages rendered. It removes some
logic from check_message() that only happened on certain code paths
for sending messages, and which would only detect failures by
expensively rendering messages, so it wasn't much of a guard.
This change also helps to phase out maybe_render_content(), which
deepens the call stack without providing much clarity to the reader,
since its behavior is so variable.
Finally, this sets up to fix a flaw in the way we compute which
users have alert words in their messages (in a subsequent commit).
If you supplied an unrecognizable address to our email system,
or you had EMAIL_GATEWAY_PATTERN configured wrong,
the get_missed_message_token_from_address() used to crash
hard and cryptically with a traceback saying that you can't
call startswith() on a None object.
Now we throw a ZulipEmailForwardError exception. This will
still lead to a traceback, but it should be easier to diagnose
the problem.
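Roughly the shape of the fix (the token extraction itself is elided, and the
function name here is a hypothetical stand-in):

```python
class ZulipEmailForwardError(Exception):
    # The real exception already lives in the email mirror module; it is
    # redefined here only to keep the sketch self-contained.
    pass

def validate_missed_message_token(msg_string):
    # msg_string is whatever the gateway pattern extracted from the address;
    # it is None when the address doesn't match EMAIL_GATEWAY_PATTERN at all.
    if msg_string is None:
        raise ZulipEmailForwardError('Address not recognized by gateway.')
    if not msg_string.startswith('mm'):
        raise ZulipEmailForwardError('Could not parse missed-message address.')
    return msg_string
```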
In our email mirror, we have a special format for missed
message emails that uses a 32-character randomly generated token
that we put into redis that is then prefixed with "mm" for
a total of 34 characters.
We had a bug where we would mis-classify emails like
mmcfoo@example.com as being these system-generated emails
that were part of the redis setup.
It's actually a little unclear how the bug in the library
function would have manifested from the user's point of view,
but it was definitely buggy code, and it's possibly related in
a subtle way to an error report we got from a customer where
only one of their users, who happened to have a name like
mmcfoo, was having problems with the mirror.
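A sketch of a stricter classification check, assuming tokens are 32 lowercase
hex characters prefixed with "mm" (the helper name is hypothetical):

```python
import re

MISSED_MESSAGE_TOKEN_RE = re.compile(r'^mm[0-9a-f]{32}$')

def is_missed_message_token(local_part):
    # "mmcfoo" fails this check; a real 34-character token passes.
    return MISSED_MESSAGE_TOKEN_RE.match(local_part) is not None
```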
We now raise an exception in bugdown.do_convert() if rendering
fails, to avoid silent failures, and then calling code can convert
the exception to a JsonableError.
While one often might want to put the user's name in an email
template, `name` here was the user's full name, not their first name,
and thus reads as quite formal.
Our implementation of duplication detection in the Zulip email error
reporting system was buggy in two important ways:
* It did not look at the traceback, and thus considered all errors as
the same.
* It reset the 10-minute duplicate timer every time an error happened,
thus concealing situations where the same error was occurring more
often than once every 10 minutes.
This fixes a problem where the requests to Tornado would attempt to
use a configured outgoing HTTP proxy, when really we want to connect
directly to localhost.
Fixes: #468.
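One common way to force a direct connection for localhost requests (a sketch,
not necessarily how this commit does it; the URL and port are illustrative):

```python
import requests

session = requests.Session()
session.trust_env = False  # ignore any http_proxy/https_proxy environment settings
response = session.get('http://127.0.0.1:9993/')
```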
Old behavior is to do something tricky that relies on the server being on
Pacific Time and the users being in the US. The goal is to have this message
appear during business hours, since click through rates are higher during
business hours. Our server is now on the East Coast though and our users are
in every timezone, so until we do something smarter this seems like a better
heuristic. We're also trying to cleanse our codebase of non-timezone-aware
datetime.datetime objects.
The previous default configuration resulted in delivery problems if
the Zulip server wasn't authorized in the SPF records for the domains of
all users on the Zulip server.
This has the nice side effect of getting rid of the now-unnecessary
ADMIN_DOMAIN from this codepath -- we really just want whichever
realm ERROR_BOT is in.
Since this delayed sending feature is the only thing
settings.MANDRILL_API_KEY is used for, it seems reasonable for that to
be the gate as to whether we actually use Mandrill.
This sends an event when a new avatar is uploaded that refreshes the
avatar for all browser clients without the need to reload the browser.
Fixes: #1359.
!avatar, !modal_link, !gravatar, etc. were incorrectly being processed
before the escape character for code blocks.
While we're at it, we add tests for these special syntaxes.
This fixes a nasty bug where an export of the messages sent by a single
user might contain only some of those messages, in the event that the
database's unspecified sort order didn't happen to be by message ID.
This commit only addresses tables that currently derive from
user_profile_config in get_realm_config:
zerver_userpresence
zerver_useractivity
zerver_useractivityinterval
zerver_subscription
zerver_recipient
zerver_stream
zerver_huddle
It also introduces an entry in realm.json for a virtual
table called "zerver_userprofile_mirrordummy" for dummy users,
which include prior dummy users and users excluded from the call
to do_export_realm().
Note that this feature is not yet exposed in the management command.
This adds a few new helpful context variables that we can use to
compute URLs in all of our templates:
* external_uri_scheme: http(s)://
* server_uri: The base URL for the server's canonical name
* realm_uri: The base URL for the user's realm
This is preparatory work for making realm_uri != server_uri when we
add support for subdomains.
The message cache filling script actually used both database and
memcached queries as part of filling the cache (all used to compute
the needed display_recipient values). We would ideally fix this by
using bulk operations to fill the display_recipient cache, but until
we do so, this cache filler is counterproductive.
I believe this disabling fixes an issue where memcached would get
overloaded and stop handling requests during a server restart on busy
servers.
Now attachment data gets written to its own json file. We are
splitting this out so that it will be easier for us to cross-check
attachments against messages without holding up writing a lot
of the other realm data. (message cross-checking is coming soon)
This commit doesn't change any behavior; it just moves fetching
attachments out of the Config scheme and into its own method.
This prepares us to start writing attachment data to its own
file and cross-checking against message ids (coming soon).
We now just have a single configuration get_realm_config() that
handles most of the top-down realm export tables. (It basically
does everything not related to messages or uploads/avatars.)
Unifying the configs allows us to be more strict in our
configuration about checking for anomalies. In the future
we may need to loosen up some of those restrictions again,
but for now we are picky and paranoid.
Fetch stream data only for stream recipients, instead of
getting streams via realm_id.
(This change is kind of moot for now, since our stream recipients
include all possible stream recipients in the realm, but this
sets us up for when we start restricting users that we export
within the realm.)
Previously, missed message emails with multiple senders would
incorrectly have a "," outside the quoted sender name part of the from
address string, resulting in confusing email output.
This commit introduces the ability to do custom fetches
and to essentially use temp tables for intermediate results.
(The temp table stuff deals with recipients/subscriptions
having three different flavors--user, stream, and huddle.)
Most directly useful for the migration to zulipchat.com.
Creates a new field in UserProfile to store the tos_version, as well as two
new settings TOS_VERSION and FIRST_TIME_TOS_TEMPLATE. We check for a version
mismatch between what the user has signed and the current
settings.TOS_VERSION whenever the user hits the home page, and redirect them
if needed.
Note that accounts_accept_terms.html and
zerver.views.accounts_accept_terms were unused before this commit
(they date from c327446537)
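A hedged sketch of the home-page check (names approximate):

```python
from django.conf import settings

def need_accept_tos(user_profile):
    # type: (UserProfile) -> bool
    return (settings.TOS_VERSION is not None
            and user_profile.tos_version != settings.TOS_VERSION)

# In the home view, roughly:
#     if need_accept_tos(request.user):
#         return redirect('zerver.views.accounts_accept_terms')
```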
The function to create the message partial files has been
renamed to export_partial_message_files(). It now gets its own
list of user profile ids and recipient ids from the response,
so that we can de-clutter do_export_realm().
The name avatar_bucket was confusing for a boolean, and
in some places it was used for non-S3 paths.
I considered the more concise 'is_avatar', but that
was still confusing when you are processing multiple
files, because you think it's a calculated property
on one file instead of an overall codepath switch.
I also considered splitting up some functions, but
there is a lot of common logic between handling
file uploads and avatars that's not trivial to extract
into helpers, especially on the S3 side.
I did some minor moving around of code that made us have
one fewer function without any additional conditional
logic. The names are more explicit about saying
"from_local" and "from_s3". Also, there is less clutter
now in do_export_realm(), which is evolving into more of
a dispatcher and less of a worker.
This is pretty minor cleanup, but it makes it a little more
explicit what we're writing to the shard file, and it allows
us to use a more specific mypy type when calling
floatify_datetime_fields.
We no longer have an in-process code path to export
UserMessage rows. We want to only maintain the
subprocess code, which we'll always use in production,
and which will work fine in dev.
Adds a new field, default_language, to the zerver_realm model.
This realm level default language will be used as default language
for newly created users. Realm level default language can be
changed from the administration page.
Fixes #1372.
I also fixed some small things like removing unnecessary return
statements, and adding a TODO.
In some cases I explicitly cast stuff at run-time to set() or
str() to appease mypy, as well as make it clear to somebody
reading the code that the callee might not respect ordering
or tolerate unicode.
The previous export tool would only work properly for small realms,
and was missing a number of important features:
* Export of avatars and uploads from S3
* Export of presence data, activity data, etc.
* Faithful export/import of timestamps
* Parallel export of messages
* Not OOM killing for large realms
The new tool runs as a pair of documented management commands, and
solves all of those problems.
Also we add a new management command for exporting the data of an
individual user.
Define Integration and WebhookIntegration classes.
Update the webhook part of the integrations guide.
Replace hardcoded webhook URLs with URLs generated
from the WEBHOOKS list.
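A rough sketch of the two classes and the URL generation (not the exact
committed code; the URL template and entries are illustrative):

```python
class Integration(object):
    def __init__(self, name):
        # type: (str) -> None
        self.name = name

class WebhookIntegration(Integration):
    DEFAULT_URL = u'api/v1/external/{name}'

    def __init__(self, name, url=None):
        # type: (str, str) -> None
        super(WebhookIntegration, self).__init__(name)
        self.url = url if url is not None else self.DEFAULT_URL.format(name=name)

WEBHOOKS = [
    WebhookIntegration('github'),
    WebhookIntegration('travis'),
]  # the integrations guide can then render each webhook's URL from .url
```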
The old behavior was to raise an exception, but Django was catching
the exception and doing unexpected things. For instance, in the
manage.py shell, printing out a ModelReprMixin object (with
__unicode__ not implemented) would result in nothing being printed,
rather than it raising an error or otherwise alerting the programmer as
to what was going on.
This fixes a regression where missed message emails would not be sent
at all in the event that EMAIL_GATEWAY_PATTERN was unset.
The overall experience still isn't great, but it's better than crashing.
Fixes: #1411
[commit message expanded by tabbott]
This will lead to minor differences in the warnings that
people see when they run tests that are slow. We call out
the slowness a little more clearly from a visual standpoint,
and we simplify the calculation of the slowness threshold.
We still allow more time for tests with the `@slow` decorator
to run, but we don't use their expected_run_time.
Our flush functions update user profile cache entries which can cause
confusing race conditions (see e.g. #1257). To resolve this, we move
all the user_profile flush functions to delete the entry instead of
updating it -- it will then be fetched as part of the next request
that needs to access the user object.
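The shape of the change, sketched (cache key helper names approximate):

```python
from zerver.lib.cache import (cache_delete, user_profile_by_email_cache_key,
                              user_profile_by_id_cache_key)

def flush_user_profile(sender, **kwargs):
    # On save, drop the cache entries instead of rewriting them; the next
    # request that needs the user object will re-fetch and re-cache it.
    user_profile = kwargs['instance']
    cache_delete(user_profile_by_id_cache_key(user_profile.id))
    cache_delete(user_profile_by_email_cache_key(user_profile.email))
```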
There are still races here, and there is perhaps an argument that a
better fix for this would be to re-fetch the object and then put it
into the cache, but this resolves the main cache correctness problem
we had with the previous implementation.
Fixes: #1322.
This makes us more consistent, since we have other wrappers
like client_patch, client_put, and client_delete.
Wrapping also will facilitate instrumentation of our posting code.
This function is only called in cases where user_profile isn't None,
and the code reads better if we just check that first rather than
checking it on every line that accesses user_profile.
This allows the frontend to fetch data on the subscribers list (etc.)
for streams where the user has never been subscribed, making it
possible to implement UI showing details like subscribe counts on the
subscriptions page.
This is likely a performance regression for very large teams with
large numbers of streams; we'll want to do some testing to determine
the impact (and thus whether we should make this feature only fully
enabled for larger realms).
There were a bunch of authorization and well-formedness checks in
zerver.lib.actions.do_update_message that I moved to
zerver.views.messages.update_message_backend.
Reason: by convention, functions in actions.py complete their actions;
error checking should be done outside the file when possible.
Fixes: #1150.
This is controlled through the admin tab and a new field in the Realms table.
Notes:
* The admin tab setting takes a value in minutes, whereas the backend stores it
in seconds.
* This setting is unused when allow_message_editing is false.
* There is some generosity in how the limit is enforced. For instance, if the
user sees the hovering edit button, we ensure they have at least 5 seconds to
click it, and if the user gets to the message edit form, we ensure they have
at least 10 seconds to make the edit, by relaxing the limit.
* This commit also includes a countdown timer in the message edit form.
Resolves #903.
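A sketch of the relaxed server-side check (the field names and the
"0 means unlimited" convention are assumptions for illustration):

```python
import datetime

def can_edit_content(message, realm, now):
    if not realm.allow_message_editing:
        return False
    limit = realm.message_content_edit_limit_seconds
    if limit == 0:
        return True  # assumed convention: 0 disables the time limit
    # Grant a ~10-second grace period so a user who opened the edit form just
    # inside the window still has time to submit.
    deadline = message.pub_date + datetime.timedelta(seconds=limit + 10)
    return now <= deadline
```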
Correctly encode and decode strings in convert_html_to_markdown.
It wasn't possible to use universal_newlines=True since
Popen.communicate doesn't encode/decode strings correctly on
python 2.
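A sketch of the pattern (the converter command is a stand-in): encode the
input to bytes, decode the output back to text, and skip universal_newlines
entirely.

```python
from subprocess import PIPE, Popen

def convert_html_to_markdown(html):
    # type: (text_type) -> text_type
    converter = Popen(['html2text'], stdin=PIPE, stdout=PIPE)  # command illustrative
    stdout, _ = converter.communicate(input=html.encode('utf-8'))
    return stdout.decode('utf-8').strip()
```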
Add methods assert_equals_response and assert_in_response to
AuthedTestCase. These methods make it convenient to check if
a string equals the contents of an HttpResponse's body or if a
string is a substring of the contents of an HttpResponse's body.
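Roughly what the two helpers look like (a sketch, assuming AuthedTestCase
extends Django's TestCase):

```python
from django.test import TestCase

class AuthedTestCase(TestCase):
    def assert_equals_response(self, string, response):
        # type: (text_type, HttpResponse) -> None
        self.assertEqual(string, response.content.decode('utf-8'))

    def assert_in_response(self, substring, response):
        # type: (text_type, HttpResponse) -> None
        self.assertIn(substring, response.content.decode('utf-8'))
```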
response.content is binary data, but code usually assumes it to
be text. Fix this by decoding response.content where required.
Don't do this in tests yet.
This is controlled through the admin tab and a new field in the Realms
table. This mirrors the behavior of the old hardcoded setting
feature_flags.disable_message_editing. Partially resolves #903.
This reverts commit f1f48f305e.
The use of sklearn unfortunately caused a substantial slowdown to the
Zulip provisioning process, which didn't seem worth it for a
relatively minor feature.
Originally this cache was used to transmit data from Django to Tornado
(and also for general message caching purposes), but now nothing
actually reads from this cache, so we can eliminate it.
Apparently, the message cache we were filling was completely useless
and unused, and furthermore, the cache we were filling as part of
restarting the server was also totally useless, since it didn't have
the messages users would be requesting.
The functions truncate_content, truncate_body and truncate_topic
are only meant to be used on text strings. So change their
parameter types from AnyStr to text_type.
Many stubs in xml.etree.ElementTree use Union[str, bytes] as
return type. Mypy wants us to correctly handle each case. This
is correct, but not useful for us since we know that we'll always
get str. So force the return value to text_type, to suppress mypy
errors.
Also encode/decode strings appropriately when using api_keys to generate
basic auth header.
Also fix clashing annotations in zerver/tests/test_external.py.
* Add Optional where required.
* Set type of req_redis_key as `(text_type) -> text_type` for consistency.
Almost all our cache keys and redis keys use this signature.
For a long time, rest_dispatch has had this hack where we have to
create a copy of it in each views file using it, in order to directly
access the globals list in that file. This removes that hack, instead
making rest_dispatch just use Django's import_string to access the
target method to use.
[tweaked and reorganized from acrefoot's original branch in various
ways by tabbott]
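The core of the new approach, sketched (the dotted path is just an example):

```python
from django.utils.module_loading import import_string

# rest_dispatch can now resolve the target view from its dotted path at
# request time, instead of peeking at the calling module's globals().
target = import_string('zerver.views.messages.update_message_backend')
```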
If a user's session cookie expired, the next REST API request their
browser did would go into the json_unauthorized code path. This
returned a response with a WWW-Authenticate tag for HTTP Basic Auth
(since that's what the REST API uses), even for /json requests which
should only be authenticated using session auth.
We fix this by explicitly passing the desired WWW-Authenticate state.
Fixes: #800.
After annotating rate_limiter.rules, mypy complained that rules does
not support cmp. So use key to customize sort instead of cmp.
Python docs also recommend using key over cmp.
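For example, with rules stored as (seconds, max_calls) pairs (the shape and
values are illustrative):

```python
rules = [(60, 100), (1, 5)]           # (seconds, max_calls) pairs
rules.sort(key=lambda rule: rule[0])  # sort by window length; no cmp= needed
```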
zerver.lib.initial_password.initial_password is supposed to return an
Optional[text_type], but it returns an Optional[binary_type] instead.
Encode the return value to make sure it returns an Optional[text_type].
Add a class 'BaseHandler' and make it a base class of OuterHandler,
QuoteHandler and CodeHandler. This will help annotate some functions
and improve type checking.
get_display_recipient's annotation clashes with other wrong annotations.
Fix those wrong annotations.
Since get_display_recipient returns a Union, use isinstance checks and
casts to make mypy checks succeed.
generate_random_token used to return a value of type six.binary_type
and its return type was annotated as `str`. This commit fixes that
by making it return a value of type `six.text_type` and updating
the annotation accordingly.
Also fix clashing annotations.
Change choices of UserProfile.avatar_sources and UserProfile.tutorial_status
from str literals to unicode literals. This is done because these fields
are CharFields, which are of type `six.text_type`. So the set of values
which they can take should also be of the type `six.text_type`.
Also fix clashing annotations.
Change `str` to `text_type` in annotations in zerver/models.py
related to realm emoji and realm filters.
Also fix clashing annotations in zerver/lib/bugdown/__init__.py.
Due to a cyclic dependency issue, functions having models as parameters
were annotated as Any.
That issue is fixed by importing models inside an `if False:` block,
so that mypy sees them but they are not imported at runtime.
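The pattern, sketched with a hypothetical function:

```python
if False:
    # Only mypy reads this import; the block never runs, so there is no
    # circular import at runtime.
    from zerver.models import UserProfile

def do_something(user_profile):
    # type: (UserProfile) -> None
    pass
```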
In update_user_profile_caches, the return type in annotation was
marked as Any. Change that to None because nothing is being returned
in that function.
This changes the type annotations for the cache keys in Zulip to be
consistently text_type, and updates the annotations for values that
are used as cache keys across the codebase.
str_utils.py has functions for converting strings from one type to
another. It also has a TypeVar called NonBinaryStr, which is like AnyStr
except that it doesn't allow bytes.
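For instance (the converter below is a hypothetical illustration, not
necessarily one of the functions in str_utils.py):

```python
from typing import Text, TypeVar

# Like AnyStr, but bytes is deliberately not an allowed type.
NonBinaryStr = TypeVar('NonBinaryStr', str, Text)

def to_text(s, encoding='utf-8'):
    # type: (NonBinaryStr, str) -> Text
    if isinstance(s, bytes):  # only possible for a native str on Python 2
        return s.decode(encoding)
    return s
```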
Previously, uploaded files were served:
* With S3UploadBackend, via get_uploaded_file (redirects to S3)
* With LocalUploadBackend in production, via nginx directly
* With LocalUploadBackend in development, via Django's static file server
This changes that last case to use get_uploaded_file in development,
which is a key step towards being able to do proper access control
authorization.
Does not affect production.
This has no functional changes; we just replace the old hacky
assignment of functions with assignment of the upload backend to a
variable.
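The shape of the new assignment, approximately (the backend class names are
assumptions about upload.py):

```python
from django.conf import settings
from zerver.lib.upload import LocalUploadBackend, S3UploadBackend  # names assumed

if settings.LOCAL_UPLOADS_DIR is not None:
    upload_backend = LocalUploadBackend()
else:
    upload_backend = S3UploadBackend()

def upload_message_image(*args, **kwargs):
    # Module-level functions now just delegate to the chosen backend object.
    return upload_backend.upload_message_image(*args, **kwargs)
```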
I'm not totally happy with this, because we end up having to copy the
type annotations of the three methods 4 times each, but this should
make it a lot easier to test the (non-default-in-tests) S3 backend
using end-to-end tests, which would have caught
13bac1cc2a.
I expect we'll iterate on the interface over time; ideally, I'd like
all the code that checks LOCAL_UPLOADS_DIR to be inside upload.py, and
primarily in these classes.