This was the original way to send messages via the Zulip API in the
very early days of Zulip, but was replaced by the REST API back in
2013.
Fixes: #730.
Sorry, couldn't resist wording the commit message like that. :)
The remove_unreachable() method on Message was no longer being
used, and the commit history made it fairly clear we won't need it
in the future.
We can now rely on UserProfile.last_reminder being timezone-aware;
even if it isn't, it's a self-correcting problem the first time a
reminder is sent. (It's a non-problem to be off by a few timezones
if somebody still has an old value there, because they will still be
outside the 1-minute nag window even with the timezone disparity.)
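Roughly the idea (a sketch, not the actual code):

    from datetime import datetime, timedelta, timezone

    NAG_WINDOW = timedelta(minutes=1)

    def should_send_reminder(last_reminder):
        # last_reminder may be naive if it predates this change;
        # treating it as UTC is safe, since even a value off by a few
        # timezones is far outside the 1-minute window.
        if last_reminder is not None and last_reminder.tzinfo is None:
            last_reminder = last_reminder.replace(tzinfo=timezone.utc)
        now = datetime.now(timezone.utc)
        return last_reminder is None or now - last_reminder > NAG_WINDOW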
This is partly a concession to testing; it's really hard to test
that we are flushing the cache properly if tests need to look
at a global variable in models.py that can be re-assigned on every
request. Extracting this function makes it easy for tests to know
whether a domain is in the local cache.
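A sketch of the extracted helper, as it might look in models.py
(names illustrative):

    _realm_cache = {}  # domain -> Realm, local to this process

    def get_realm(domain):
        if domain not in _realm_cache:
            _realm_cache[domain] = Realm.objects.get(domain=domain)
        return _realm_cache[domain]

    def realm_in_local_cache(domain):
        # Tests call this instead of inspecting a module global that
        # request handling may reassign at any time.
        return domain in _realm_cache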
We currently do
var invite_suffix = "{{invite_suffix}}";
in JavaScript in the initial_invite_page.html template.
This sets invite_suffix to "{{invite_suffix}}" when the template is rendered
without invite_suffix in the params, rather than to "" as intended. This
later causes problems in the invite_email validator in initial_invite.js.
We no longer use all the alert words for all the users in the
entire realm when we look for alert words in a newly sent/edited
message. Now we limit the search to the alert words of only those
users who will get UserMessage records. This will
hopefully make a big difference for big realms where most messages
are only sent to a small subset of users.
The bugdown parser no longer has a concept of which users need which
alert words, since it can't really do anything actionable with that info
from a rendering standpoint.
Instead, our calling code passes in a set of search words to the parser.
The parser returns the list of words it finds in the message.
Then the model method builds up the list of user ids that should be
flagged as having alert words in the message.
This refactoring is a little more involved than I'd like, but there are
still some circular dependency issues with rendering code, so I need to
pass in the rather complicated realm_alert_words data structure all the way
from the action through the model to the renderer.
This change shouldn't alter the overall behavior of the system, except
that it removes some duplicate regex checks that occurred when
multiple users had the same alert word.
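Roughly the new flow, assuming realm_alert_words maps user_id to that
user's alert words (find_alert_words is a stand-in for the parser
entry point):

    def get_user_ids_with_alert_words(realm_alert_words, content,
                                      message_user_ids):
        # Flatten to one set; the parser doesn't care who wants
        # which word.
        search_words = set()
        for user_id in message_user_ids:
            search_words.update(realm_alert_words.get(user_id, []))

        # The parser just reports which of the words appear.
        words_present = find_alert_words(content, search_words)

        # The model layer maps found words back to interested users,
        # so each word is regex-checked once, not once per user.
        return {
            user_id
            for user_id in message_user_ids
            if words_present & set(realm_alert_words.get(user_id, []))
        }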
We now use render_incoming_message() to render all incoming
new messages (sends/edits), so that they will get the same treatment.
This change also establishes do_send_messages() as the code
path to get new messages rendered. It removes some
logic from check_message() that only happened on certain code paths
for sending messages, and which would only detect failures by
expensively rendering messages, so it wasn't much of a guard.
This change also helps to phase out maybe_render_content(), which
deepens the call stack without providing much clarity to the reader,
since its behavior is so variable.
Finally, this sets up to fix a flaw in the way we compute which
users have alert words in their messages (in a subsequent commit).
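A sketch of the consolidated entry point (names and signature
illustrative):

    def render_incoming_message(message, content, realm_alert_words,
                                message_user_ids):
        # One choke point for sends and edits, so both get identical
        # markdown treatment and alert-word handling.
        try:
            rendered_content = render_markdown(
                message,
                content,
                realm_alert_words=realm_alert_words,
                message_users=message_user_ids,
            )
        except BugdownRenderingException:
            raise JsonableError("Unable to render message")
        return rendered_content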
If you supplied an unrecognizable address to our email system,
or you had EMAIL_GATEWAY_PATTERN configured wrong,
the get_missed_message_token_from_address() function used to crash
hard and cryptically with a traceback saying that you can't
call startswith() on a None object.
Now we throw a ZulipEmailForwardError exception. This will
still lead to a traceback, but it should be easier to diagnose
the problem.
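Roughly (match_email_gateway_pattern is a stand-in for the real
parsing helper):

    class ZulipEmailForwardError(Exception):
        pass

    def get_missed_message_token_from_address(address):
        msg_string = match_email_gateway_pattern(address)
        if msg_string is None:
            # Unrecognized address, or EMAIL_GATEWAY_PATTERN is
            # misconfigured; previously we fell through to
            # msg_string.startswith() and crashed on None.
            raise ZulipEmailForwardError("Address not recognized by gateway")
        if not msg_string.startswith("mm"):
            raise ZulipEmailForwardError("Missed message address has bad prefix")
        return msg_string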
In our email mirror, we have a special format for missed
message emails that uses a randomly generated 32-character token
that we put into redis and that is prefixed with "mm" for
a total of 34 characters.
We had a bug where we would mis-classify emails like
mmcfoo@example.com as being these system-generated emails
that were part of the redis setup.
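The stricter check looks something like this (constant names
illustrative):

    MISSED_MESSAGE_PREFIX = "mm"
    REDIS_TOKEN_LENGTH = 32
    ADDRESS_TOKEN_LENGTH = REDIS_TOKEN_LENGTH + len(MISSED_MESSAGE_PREFIX)  # 34

    def is_missed_message_address(local_part):
        # Matching on the "mm" prefix alone mis-classified ordinary
        # users like mmcfoo@example.com; require the exact length too.
        return (local_part.startswith(MISSED_MESSAGE_PREFIX)
                and len(local_part) == ADDRESS_TOKEN_LENGTH)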
It's actually a little unclear how the bug in the library
function would have manifested from the user's point of view,
but it was definitely buggy code, and it's possibly related in
a subtle way to an error report we got from a customer where
only one of their users, who happened to have a name like
mmcfoo, was having problems with the mirror.
It appears that the assertRaisesRegexp approach we had before didn't
work properly on some systems, likely due to a bad interaction with
i18n (we haven't definitively determined the cause).
We now raise an exception in bugdown.do_convert() if rendering
fails, to avoid silent failures, and then calling code can convert
the exception to a JsonableError.
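Schematically (md_engine stands in for the markdown engine):

    class BugdownRenderingException(Exception):
        pass

    def do_convert(content):
        try:
            return md_engine.convert(content)
        except Exception:
            # Propagate instead of returning None; callers turn this
            # into a JsonableError for the API response.
            raise BugdownRenderingException()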
The list_to_streams() method now uses create_streams_if_needed() to
do its heavy lifting during the autocreate=True case.
This commit gets us to 100% coverage on the streams view. (The
recently created action.create_streams_if_needed() was easy
to test in isolation, and it has 100% coverage as well, so we are
not cheating here.)
Fixes: #1005.
When we push a device token, we want to clean out any other user's
tokens on the device, but not the current user's. We were wiping
away our own token, if it existed, before creating it again. This
was probably never a user-facing problem; it just made for dead code
and a little unnecessary DB churn. By excluding the current user
from the delete() call, we exercise the update path in our tests now,
so we have 100% coverage.
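In Django ORM terms, the fix is roughly (field names illustrative):

    # Clear any other user's token for this device, but leave ours
    # alone.
    PushDeviceToken.objects.filter(token=token).exclude(
        user=user_profile).delete()

    # Create our row if missing, or update it in place; the update
    # branch is what the tests exercise now.
    PushDeviceToken.objects.update_or_create(
        user=user_profile,
        token=token,
        defaults={"kind": kind, "ios_app_id": ios_app_id},
    )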
We now have 100% coverage on views/push_notifications.py, modulo
some dead code which will be addressed in the next commit.
There were some existing tests in test_external.py, but that
module is really intended for tests that hit external services.
The view is a really simple API that updates a DB table, and the
new test code focuses on error handling and idempotency as well
as the happy path.
Apparently, in urllib.parse, one needs to extract the query string from
the rest of the URL before parsing the query string; otherwise the
very first query parameter will have the rest of the URL in its name.
This results in a nondeterministic failure that happens 1/N of the
time, where N is the number of fields marshalled from a dictionary
into the query string.
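For example:

    from urllib.parse import parse_qs, urlparse

    url = "https://example.com/path?foo=1&bar=2"

    # Wrong: the path gets fused into the first parameter's name.
    parse_qs(url)
    # -> {'https://example.com/path?foo': ['1'], 'bar': ['2']}

    # Right: split the query string off first.
    parse_qs(urlparse(url).query)
    # -> {'foo': ['1'], 'bar': ['2']}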
This commit extracts compose_views() from update_subscriptions_backend(),
and it implements the correct behavior for forcing transactions to roll
back, which is to raise an exception.
There were really three steps in this commit:
- Extract buggy code to compose_views().
- Add tests on compose_views().
- Fix bugs exposed by the new tests by converting errors to exceptions.
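The shape of the extracted function, roughly (ComposeError stands in
for the real exception type):

    import json

    from django.db import transaction

    class ComposeError(Exception):
        pass

    def compose_views(thunks):
        # Run several view calls as one unit of work.  Raising is the
        # only way to make transaction.atomic() roll back; returning
        # an error response would leave earlier calls committed.
        json_dict = {}
        with transaction.atomic():
            for thunk in thunks:
                response = thunk()
                if response.status_code != 200:
                    raise ComposeError(response.content)
                json_dict.update(json.loads(response.content))
        return json_dict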
In HTML, the line break immediately following a start tag is ignored
(see: https://www.w3.org/TR/html4/appendix/notes.html#h-B.3.1). An
extra span tag has been introduced in the upstream Pygments
HtmlFormatter in order to preserve the first new line. The Bugdown
Tests as well as our fenced_code.js frontend markdown processor have
been updated to reflect this new behavior.
These annotations aren't perfect because the sqlalchemy stubs in
typeshed are broken (e.g. a `Select` doesn't have the ability to do
`.where()`), but we've at least used some typevars to make it easy to
address that when the sqlalchemy stubs are less broken.
While one often might want to put the user's name in an email
template, `name` here was the user's full name, not their first name,
and thus reads as quite formal.
Our implementation of duplication detection in the Zulip email error
reporting system was buggy in two important ways:
* It did not look at the traceback, and thus considered all errors to
be the same.
* It reset the 10-minute duplicate timer every time an error happened,
thus concealing situations where the same error was occurring more
often than once every 10 minutes.
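A sketch of the fixed logic, using an in-memory map for brevity (the
real thing would use a shared store):

    import hashlib
    import time

    DEDUP_WINDOW = 60 * 10  # ten minutes
    _last_sent = {}  # error fingerprint -> time we last emailed it

    def should_email_error(subject, traceback_text):
        # Include the traceback in the fingerprint so distinct errors
        # don't suppress one another.
        key = hashlib.sha256(
            (subject + traceback_text).encode("utf-8")).hexdigest()
        now = time.time()
        if now - _last_sent.get(key, 0.0) < DEDUP_WINDOW:
            # Crucially, don't refresh the timer here: resetting it on
            # every occurrence would hide an error firing more often
            # than once per window.
            return False
        _last_sent[key] = now
        return True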
This fixes a problem where the requests to Tornado would attempt to
use a configured outgoing HTTP proxy, when really we want to connect
directly to localhost.
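With the requests library, that looks something like:

    import requests

    # Sketch: ignore any HTTP_PROXY/HTTPS_PROXY environment settings,
    # since Tornado is reached directly over localhost.  Port and
    # path here are illustrative.
    session = requests.Session()
    session.trust_env = False
    session.post("http://localhost:9993/notify_tornado", data={})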
Fixes: #468.
This adds support for using PGroonga to back the Zulip full-text
search feature, since the built-in PostgreSQL full-text search doesn't
support languages that don't put spaces between terms, such as
Japanese and Chinese. PGroonga supports all languages, including
Japanese and Chinese.
Developers will need to re-provision when rebasing past this patch for
the tests to pass, since provision is what installs the PGroonga
package and extension.
PGroonga is enabled by default in development but not in production;
the hope is that after the PGroonga support is tested further, we can
enable it by default.
Fixes: #615.
[docs and tests tweaked by tabbott]
The old behavior was to do something tricky that relied on the server being on
Pacific Time and the users being in the US. The goal is to have this message
appear during business hours, since click through rates are higher during
business hours. Our server is now on the East Coast though and our users are
in every timezone, so until we do something smarter this seems like a better
heuristic. We're also trying to cleanse our codebase of non-timezone-aware
datetime.datetime objects.
Apparently, we had incorrectly concluded that our highlight_string
search result highlighting offsets coming from tsearch_extras were
measured in bytes, whereas in fact they are measured in characters.
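The distinction only matters once multibyte characters are involved:

    s = "café matches"

    char_offset = s.index("matches")                   # 5 (characters)
    byte_offset = s.encode("utf-8").index(b"matches")  # 6 (é is 2 bytes)

    # Interpreting a character offset as a byte offset shifts every
    # highlight right by one position per preceding multibyte
    # character.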
The previous default configuration resulted in delivery problems if
the Zulip server wasn't authorized in the SPF records for the domains
of all users on the Zulip server.
Also changes all links to /integrations to be relative rather than
absolute, so that users will primarily access the /integrations page
for their subdomain.
This has the nice side effect of getting rid of the now-unnecessary
ADMIN_DOMAIN from this codepath -- we really just want whichever
realm ERROR_BOT is in.
Since this delayed sending feature is the only thing
settings.MANDRILL_API_KEY is used for, it seems reasonable for that to
be the gate as to whether we actually use Mandrill.
This is a convenient tool to have around.
We require an unusual argument value of "YES" to send to everyone on
the server, since that's something one should do with a great deal of
care.
This sends an event when a new avatar is uploaded that refreshes the
avatar for all browser clients without the need to reload the browser.
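The event is something like this (payload shape and helper names
illustrative):

    # Tell every active client in the realm that this user's avatar
    # changed, so they can re-fetch it without a reload.
    event = dict(
        type="realm_user",
        op="update",
        person=dict(
            email=user_profile.email,
            avatar_url=avatar_url(user_profile),
        ),
    )
    send_event(event, active_user_ids(user_profile.realm))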
Fixes: #1359.
!avatar, !modal_link, !gravatar, etc. were incorrectly being processed
before the escape character for code blocks.
While we're at it, we add tests for these special syntaxes.
This fixes a nasty bug where exporting messages sent by a single user
might include only some of the messages, in the event that the
database's unspecified sort order didn't happen to be sorted by
message ID.
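This is the classic missing-ORDER-BY failure when paging with an id
cursor; a sketch of the corrected loop (names illustrative):

    CHUNK_SIZE = 1000

    min_id = -1
    while True:
        chunk = list(
            Message.objects.filter(sender=user_profile, id__gt=min_id)
            .order_by("id")[:CHUNK_SIZE]  # without this, chunks can skip rows
        )
        if not chunk:
            break
        write_message_chunk(chunk)  # hypothetical writer
        min_id = chunk[-1].id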
This commit only addresses tables that currently derive from
user_profile_config in get_realm_config:
zerver_userpresence
zerver_useractivity
zerver_useractivityinterval
zerver_subscription
zerver_recipient
zerver_stream
zerver_huddle
It also introduces an entry in realm.json for a virtual
table called "zerver_userprofile_mirrordummy" for dummy users,
which include prior dummy users and users excluded from the call
to do_export_realm().
Note that this feature is not yet exposed in the management command.
This adds a few new helpful context variables that we can use to
compute URLs in all of our templates:
* external_uri_scheme: http(s)://
* server_uri: The base URL for the server's canonical name
* realm_uri: The base URL for the user's realm
This is preparatory work for making realm_uri != server_uri when we
add support for subdomains.
The message cache filling script actually used both database and
memcached queries as part of filling the cache (all used to compute
the needed display_recipient values). We would ideally fix this by
using bulk operations to fill the display_recipient cache, but until
we do so, this cache filler is counterproductive.
I believe this disabling fixes an issue where memcached would get
overloaded and stop handling requests during a server restart on busy
servers.
Now attachment data gets written to its own json file. We are
splitting this out so that it will be easier for us to cross-check
attachments against messages without holding up writing a lot
of the other realm data. (message cross-checking is coming soon)
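A sketch of the writer (model and helpers assumed):

    import json
    import os

    def export_attachments(realm, output_dir):
        # Attachments get their own file so that writing the bulk of
        # the realm data isn't held up by cross-checking attachments
        # against message ids.
        rows = list(Attachment.objects.filter(realm=realm).values())
        with open(os.path.join(output_dir, "attachment.json"), "w") as f:
            json.dump({"zerver_attachment": rows}, f, default=str)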
This commit doesn't change any behavior; it just moves fetching
attachments out of the Config scheme and into its own method.
This prepares us to start writing attachment data to its own
file and cross-checking against message ids (coming soon).
We now just have a single configuration get_realm_config() that
handles most of the top-down realm export tables. (It basically
does everything not related to messages or uploads/avatars.)
Unifying the configs allows us to be more strict in our
configuration about checking for anomalies. In the future
we may need to loosen up some of those restrictions again,
but for now we are picky and paranoid.
Fetch stream data only for stream recipients, instead of
getting streams via realm_id.
(This change is kind of moot for now, since our stream recipients
include all possible stream recipients in the realm, but this
sets us up for when we start restricting users that we export
within the realm.)
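Roughly (the row shapes here are illustrative):

    # Fetch only streams that are actually recipients in this export,
    # rather than everything attached to the realm_id.
    stream_recipients = [r for r in recipient_rows
                         if r["type"] == Recipient.STREAM]
    stream_ids = {r["type_id"] for r in stream_recipients}
    streams = Stream.objects.filter(id__in=stream_ids)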
Previously, missed message emails with multiple senders would
incorrectly have a "," outside the quoted sender name part of the from
address string, resulting in confusing email output.
This commit introduces the ability to do custom fetches
and to essentially use temp tables for intermediate results.
(The temp table stuff deals with recipients/subscriptions
having three different flavors--user, stream, and huddle.)
Most directly useful for the migration to zulipchat.com.
Creates a new field in UserProfile to store the tos_version, as well as two
new settings TOS_VERSION and FIRST_TIME_TOS_TEMPLATE. We check for a version
mismatch between what the user has signed and the current
settings.TOS_VERSION whenever the user hits the home page, and redirect them
if needed.
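The gate in the home view is roughly (simplified):

    from django.conf import settings

    def need_accept_tos(user_profile):
        if settings.TOS_VERSION is None:
            return False  # feature disabled on this server
        return user_profile.tos_version != settings.TOS_VERSION

    # In the home view, before rendering anything else:
    # if need_accept_tos(user_profile):
    #     return redirect(accounts_accept_terms)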
Note that accounts_accept_terms.html and
zerver.views.accounts_accept_terms were unused before this commit
(they date from c327446537)
The function to create the message partial files has been
renamed to export_partial_message_files(). It now gets its own
list of user profile ids and recipient ids from the response,
so that we can de-clutter do_export_realm().
The name avatar_bucket was confusing for a boolean, and
in some places it was used for non-S3 paths.
I considered the more concise 'is_avatar', but that
was still confusing when you are processing multiple
files, because you think it's a calculated property
on one file instead of an overall codepath switch.
I also considered splitting up some functions, but
there is a lot of common logic between handling
file uploads and avatars that's not trivial to extract
into helpers, especially on the S3 side.
I did some minor moving around of code that made us have
one fewer function without any additional conditional
logic. The names are more explicit about saying
"from_local" and "from_s3". Also, there is less clutter
now in do_export_realm(), which is evolving into more of
a dispatcher and less of a worker.
This is pretty minor cleanup, but it makes it a little more
explicit what we're writing to the shard file, and it allows
us to use a more specific mypy type when calling
floatify_datetime_fields.
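For reference, a simplified version of what floatify_datetime_fields
does (the real signature may differ):

    import datetime

    def floatify_datetime_fields(rows, fields):
        # Replace datetime values with float UNIX timestamps in
        # place, so the shard file is plain JSON-serializable data.
        for row in rows:
            for field in fields:
                value = row[field]
                if isinstance(value, datetime.datetime):
                    row[field] = value.timestamp()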
We no longer have an in-process code path to export
UserMessage rows. We want to only maintain the
subprocess code, which we'll always use in production,
and which will work fine in dev.
Adds a new default language field to the zerver_realm model.
This realm-level default language will be used as the default language
for newly created users. The realm-level default language can be
changed from the administration page.
Fixes: #1372.