This diff is nothing but dedentation -- it's empty under
`git diff -b`. These with-statements are only needed for
a pretty narrow scope of code, so make that clear in the
source.
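For instance (an illustrative sketch, not the actual diff; the patched name and helpers are made up):

```python
from unittest import mock

# Before: everything sat inside the with-block.
with mock.patch('zerver.lib.example.helper'):
    result = do_patched_thing()
    check_result(result)
    check_more(result)

# After -- a pure dedent, empty under `git diff -b`: only the code
# that actually needs the patch stays inside.
with mock.patch('zerver.lib.example.helper'):
    result = do_patched_thing()
check_result(result)
check_more(result)
```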
There are two different things you need to patch in order to get error
emails (at `/emails`) in dev. Flip one of them in dev all the time,
and make the comment on the other a bit more explicit.
When I added this "Deployed code" feature to the error reporting,
I apparently hadn't worked out enough of how this code works to
realize that `notify_server_error` may be in a different process,
at a different time and potentially even on a different machine
from the actual error being reported.
Given that architecture, all the data about the error must be computed
in `AdminNotifyHandler`, before sending the report through the queue,
or else it risks being wrong. The job of `notify_server_error` and
friends is only to format the data and send it off. So, move the
implementation of this feature in order to do that.
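A sketch of the resulting division of labor (helpers other than `AdminNotifyHandler`, `notify_server_error`, and `queue_json_publish` are illustrative):

```python
import logging

class AdminNotifyHandler(logging.Handler):
    def emit(self, record):
        # Compute everything about the error here, in the process (and
        # at the time, and on the machine) where it actually happened.
        report = dict(
            message=record.getMessage(),
            stack_trace=self.format(record),
            deployed_code=current_deployment_version(),  # illustrative
        )
        queue_json_publish('error_reports', dict(type='server', report=report))

def notify_server_error(report):
    # The consumer's only job: format the precomputed data and send it.
    send_error_report_email(report)  # illustrative
```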
(@showell added some "nocoverage" directives here for code that
is hard to test (exceptions being thrown, deployment files not
existing) and that was originally part of a file that didn't
require 100% coverage)
This helps prevent them from diverging and getting different sets of
features and fixes. As a bonus, the email path gets a nice tweak that
the Zulip path has had for years, since f7f2ec0ac, which makes the
emails clearer and less broken-looking when logging a message with no
stack trace.
This deduplicates a little bit of logic, and also has us always put
things into `report` the same way.
Empirically an exception in this codepath is very rare, so we won't
complicate the code by trying to salvage a lot of partial information
if it happens -- just log the traceback, and try to send a minimal
notification reporting the bare fact that this happened.
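Paraphrased, the fallback is just:

```python
try:
    report = build_error_report(record)  # the normal, detailed path
except Exception:
    logging.exception("Exception while preparing error report")
    # Don't try to salvage details -- just report that it happened.
    report = dict(message="(sending of error report failed)")
```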
This name hasn't been right since f7f2ec0ac back in 2013; this handler
sends the log record to a queue, whose consumer will not only maybe
send a Zulip message but definitely send an email. I found this
pretty confusing when I first worked on this logging code and was
looking for how exception emails got sent; so now that I see exactly
what's actually happening here, fix it.
This is a basic Dropbox webhook integration. It notifies a user when
something has changed, but does not say what changed: doing so would
require storing data, as the Dropbox API was designed mainly for file
managers, not integrations like this.
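The handler is roughly this shape (a paraphrased sketch, using Zulip's standard webhook plumbing):

```python
@api_key_only_webhook_view('Dropbox')
@has_request_variables
def api_dropbox_webhook(request, user_profile,
                        stream=REQ(default='dropbox')):
    if request.method == 'GET':
        # Dropbox verifies the endpoint by echoing a challenge parameter.
        return HttpResponse(request.GET['challenge'])
    # POST notifications only say *that* something changed, so the
    # message can't say *what* changed.
    check_send_stream_message(user_profile, request.client, stream,
                              'Dropbox', 'File has been updated on Dropbox!')
    return json_success()
```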
Closes #5672
We have shifted to a generic queue for sending all emails. This queue
can retry in case of network issues, which helps ensure that emails
are eventually sent.
This commit copies all the code from the MissedMessageSendingWorker
class to a new EmailSendingWorker class; all the logic to send an email
through a queue was already there. The commit only makes that logic
generic, by creating a special-purpose queue called 'email_senders' to
send any type of email. To keep MissedMessageSendingWorker working, we
derive it from EmailSendingWorker. All the tests that were testing
MissedMessageSendingWorker now run against EmailSendingWorker.
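In outline (paraphrased, not the exact diff):

```python
@assign_queue('email_senders')
class EmailSendingWorker(QueueProcessingWorker):
    def consume(self, event):
        # Generic: any kind of email comes through this queue, and the
        # queue's retry machinery covers transient network failures.
        send_email_from_dict(event)

# Kept so existing producers keep working; all the logic is inherited.
@assign_queue('missedmessage_email_senders')
class MissedMessageSendingWorker(EmailSendingWorker):
    pass
```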
Such payloads are generated when a GitLab repository has merge
request approvals enabled and a project member approves a merge
request. Approving is not the same as merging.
This reverts commit 620b2cd6e.
Contributors setting up a new development environment were getting
errors like this:
```
++ dirname tools/do-destroy-rebuild-database
[...]
+ ./manage.py purge_queue --all
Traceback (most recent call last):
[...]
File "/home/zulipdev/zulip/zproject/legacy_urls.py", line 3, in <module>
import zerver.views.streams
File "/home/zulipdev/zulip/zerver/views/streams.py", line 187, in <module>
method_kwarg_pairs: List[FuncKwargPair]) -> HttpResponse:
File "/usr/lib/python3.5/typing.py", line 1025, in __getitem__
tvars = _type_vars(params)
[...]
File "/usr/lib/python3.5/typing.py", line 277, in _get_type_vars
for t in types:
TypeError: 'ellipsis' object is not iterable
```
The issue appears to be that we're using the `typing` module from the
3.5 stdlib, rather than the `typing=3.6.2` in our requirements files,
and that doesn't understand the `Callable[..., HttpResponse]` that
appears in the definition of `FuncKwargPair`.
Revert for now to get provision working again; at least one person
reports that reverting this sufficed. We'll need to do more testing
before putting this change back in.
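A minimal sketch of the failure as we understand it, assuming the 3.5-stdlib `typing` (with `typing==3.6.2` it works; `HttpResponse` is replaced by `Any` here to keep the sketch self-contained):

```python
from typing import Any, Callable, Dict, List, Tuple

# Stand-in for zerver's FuncKwargPair:
FuncKwargPair = Tuple[Callable[..., Any], Dict[str, Any]]

# Evaluating the annotation below raises, on the 3.5 stdlib typing:
#   TypeError: 'ellipsis' object is not iterable
def run_pairs(pairs: List[FuncKwargPair]) -> None:
    pass
```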
The name `create_logger` suggests something much bigger than what this
function actually does -- the logger doesn't exist any more or less
after the function is called than it did before. Its one real function
is to send logs to a specific file.
So, pull out that logic to an appropriately-named function just for
it. We already use `logging.getLogger` in a number of places to
simply get a logger by name, and the old `create_logger` callsites can
do the same.
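A sketch of the split (the helper name, logger name, and path are illustrative):

```python
import logging

def log_to_file(logger, filename,
                log_format='%(asctime)s %(levelname)-8s %(message)s'):
    """Send future logs from `logger` to `filename`."""
    handler = logging.FileHandler(filename)
    handler.setFormatter(logging.Formatter(log_format))
    logger.addHandler(handler)

# Old create_logger callsites become:
logger = logging.getLogger('zulip.example')
log_to_file(logger, '/var/log/zulip/example.log')
```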
From the docs for `logging.basicConfig`:
> This function does nothing if the root logger already has handlers
> configured for it.
Which we do if we've started up Django and configured settings, and in
particular allowed Django to process `settings.LOGGING`.
So, cut it out -- all it can do is confuse people about how logging
works.
If we ever actually used the `log_format` parameter, this would be
doubly confusing, because only the first call would have any effect.
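A quick demonstration of the trap:

```python
import logging

logging.basicConfig(format='first: %(message)s')
# The root logger now has a handler, so this second call silently
# does nothing -- its format argument is ignored:
logging.basicConfig(format='second: %(message)s')
logging.warning('hello')  # prints "first: hello"
```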
Because calls to `create_logger` generally run after settings are
configured, these would override what we have in `settings.LOGGING` --
which in particular defeated any attempt to set log levels in
`test_settings.py`. Move all of these settings to the same place in
`settings.py`, so they can be overridden in a uniform way.
This is already the loglevel we set on the root logger, so this has no
effect -- except in tests, where `test_settings.py` attempts to set
some of these same loggers to higher loglevels. Because the
`create_logger` call generally runs after we've configured settings,
it clobbers that effect.
The code in `test_settings.py` that tries to suppress logs only works
because it also sets `propagate=False`, which has nothing to do with
loglevels but does cause logs at this logger (and descendants) to be
dropped completely unless we've configured handlers for this logger
(or one of its relevant descendants).
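A small standalone demonstration of that mechanism:

```python
import logging

logging.basicConfig(level=logging.DEBUG)  # puts a handler on the root logger
noisy = logging.getLogger('zulip.noisy')

noisy.info('printed: the record propagates up to the root handler')

noisy.propagate = False
noisy.info('dropped: no handler on this logger, and no propagation')
```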
Adds a markdown preprocessor that finds ordered lists where all items
use the same number, and changes them to be in normal increasing order,
starting with that number.
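For instance (a standalone sketch of the idea; the real version is a Python-Markdown preprocessor):

```python
import re

def renumber_list(lines):
    matches = [re.match(r'(\d+)\. ', line) for line in lines]
    if lines and all(matches) and len({m.group(1) for m in matches}) == 1:
        start = int(matches[0].group(1))
        return ['%d. %s' % (start + i, line[m.end():])
                for i, (m, line) in enumerate(zip(matches, lines))]
    return lines

renumber_list(['3. foo', '3. bar', '3. baz'])
# -> ['3. foo', '4. bar', '5. baz']
```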
Fixes #5159.
Eventually this check for the realm will be done in get_object_from_key
itself. Rewriting this to fit the pattern in get_object_from_key.
No change to behavior.
Commit d4ee3023 and its parent have the history behind this code.
Since d4ee3023^, all new PreregistrationUser objects, except those for
realm creation, have a non-None `realm`. Since d4ee3023, any legacy
PreregistrationUsers, with a `realm` of None despite not being for
realm creation, are treated as expired. Now, we ignore them
completely, and remove any that exist from the database.
The user-visible effect is to change the error message for
registration (or invitation) links created before d4ee3023^ to be
"link does not exist", rather than "link expired".
This change will at most affect users upgrading straight from 1.7 or
earlier to 1.8 (rather than from 1.7.1), but I think that's not much
of a concern (such installations are probably long-running
installations, without many live registration or invitation links).
[greg: tweaked commit message]
We should omit these for mypy. For most class definitions,
mypy doesn't need `Any`, and it provides no real useful info.
For clever monkeypatches, you should provide a more specific
type than `Any`.
The original logic is buggy now that emails can belong to (and be
invited to) multiple realms.
The new logic in the `invites` queue worker also avoids a bug where,
if the PreregistrationUser was gone by the time the queue worker got
to the invite (e.g., because it'd been revoked), we threw an exception.
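The race-safe shape of the worker's consume logic, paraphrased (the event field name is illustrative):

```python
def consume(self, data):
    try:
        prereg_user = PreregistrationUser.objects.get(id=data['prereg_id'])
    except PreregistrationUser.DoesNotExist:
        # The invite was revoked between being queued and being
        # processed; nothing to send, and nothing to crash on.
        return
    # ...send the original invitation email to prereg_user.
```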
[greg: fix upgrade-compatibility logic; add test; explain
revoked-invite race above]
This code changes frequently enough that errors are bound to creep in. The
main change is that this sends the original invitation email instead of the
reminder email, but I think that's fine.
We'll need the expanded test coverage when we move
check_prereg_key_and_redirect to zerver/views/registration.py to avoid
test failures, and these are also tests we should really have anyway.
Empirically, the retry in `_on_connection_closed` didn't actually work
-- if a reconnect failed, that was it, and the exception handler
didn't get run. A traceback would get logged, but all its frames were
in Tornado or Pika, not our own code; presumably something magic and
async was happening to the exception.
Moreover, though we would make one attempt to reconnect if we had a
connection that got closed, we didn't have any form of retry if the
original attempt at connecting failed in the first place.
Happily, upstream offers a perfectly reasonable bit of API that avoids
both of these problems: the on-open-error callback. So use that.
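Sketch of the resulting pattern (callback signatures vary somewhat across pika versions):

```python
import pika

def on_open(connection):
    pass  # set up channels, etc.

def on_open_error(connection, error):
    # Covers the case the old handler missed: the *initial* connection
    # attempt failing, not just an established connection closing.
    retry_connect()

def on_closed(connection, *args):
    retry_connect()

def retry_connect():
    pika.SelectConnection(
        pika.ConnectionParameters('localhost'),
        on_open_callback=on_open,
        on_open_error_callback=on_open_error,
        on_close_callback=on_closed,
    )
    # (then start the adapter's ioloop, as usual for pika's async API)
```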
This method was new in Tornado 4.0. It saves us from having to get
the time ourselves and do the arithmetic -- which not only makes the
code a bit shorter, but also easier to get right. Tornado docs (see
http://www.tornadoweb.org/en/stable/ioloop.html) say we should have
been getting the time from `ioloop.time()` rather than hardcoding
`time.time()`, because the loop could e.g. be running on the
`time.monotonic()` clock.
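Concretely, assuming the method in question is `IOLoop.call_later`:

```python
# Before: do the clock arithmetic by hand (and, per the docs, it
# should have been ioloop.time(), not a hardcoded time.time()):
ioloop.add_timeout(time.time() + 5, callback)

# After, with Tornado >= 4.0: let the loop use its own clock.
ioloop.call_later(5, callback)
```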
Adding it afterward is inherently racy, and upstream's API is quite
reasonable for avoiding that -- just like we can pass an on-open
callback up front, we can do the same with the on-close callback.
This is a more thorough version of 4adf2d5c2 from back in 2013-04.
The default value of this parameter is already False upstream.
(It was already False in pika version 0.9.6, which we were
supposedly using when we introduced this in 4baeaaa52; not sure
what the story was there.)
Previously, we weren't doing a proper left join in
user_groups_in_realm_serialized, resulting in empty user groups being
excluded from the query. We want to leave decisions about excluding
empty user groups to the UI layer, so we include these here.
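The shape of the fix, paraphrased (not the exact query):

```python
# Build an entry for every group first, so groups with no memberships
# still appear (with members=[]); a plain inner join dropped them.
group_dicts = {
    group.id: dict(id=group.id, name=group.name, members=[])
    for group in UserGroup.objects.filter(realm=realm)
}
memberships = UserGroupMembership.objects.filter(user_group__realm=realm)
for group_id, user_id in memberships.values_list('user_group_id',
                                                 'user_profile_id'):
    group_dicts[group_id]['members'].append(user_id)
```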
These are the exceptions to the rule that our queues correspond to
queue-processor workers.
Purging `notify_tornado` in particular is a useful workaround right
now for some error spew in the dev environment.
Because we use access_stream_by_id here, and that checks for an active
subscription to interact with a private stream, this didn't work.
The correct fix is to add an argument to access_stream_by_id
indicating whether we need an active subscription; for this use case,
we definitely do not.
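Shape of the fix (the keyword argument and fetch helper here are illustrative):

```python
def access_stream_by_id(user_profile, stream_id, require_active=True):
    (stream, recipient, sub) = fetch_stream_and_subscription(
        user_profile, stream_id)  # sub may be None or inactive
    if stream.invite_only and (
            sub is None or (require_active and not sub.active)):
        raise JsonableError(_("Invalid stream id"))
    return (stream, recipient, sub)

# This callsite does not need an active subscription:
(stream, recipient, sub) = access_stream_by_id(
    user_profile, stream_id, require_active=False)
```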
For example, this means that if a user already has an account on one
realm and they try to make an account on another by hitting "Sign in
with Google" (rather than following the little "Register" link to a
"Sign up with Google" button instead), they'll get to make an account
instead of getting an error.
Until very recently, if the user existed on another realm, any attempt
to register with that email address had to fail in the end, so this
logic gave the user a useful error message early. We introduced it in
c23aaa178 "GitHub: Show error on login page for wrong subdomain"
back in 2016-10 for that purpose. No longer! We now support reusing
an email on multiple realms, so we let the user proceed instead.
This function's interface is kind of confusing, but I believe when its
callers use it properly, `invalid_subdomain` should only ever be true
when `user_profile` is None -- in which case the revised
`invalid_subdomain` condition in this commit can never actually fire,
and the `invalid_subdomain` parameter no longer has any effect. (At
least some unit tests call this function improperly in that respect.)
I've kept this commit to a minimal change, but it would be a good
followup to go through the call sites, verify that, eliminate the use
of `invalid_subdomain`, then remove it from the function entirely.
[Modified by greg to (1) keep `USERNAME_FIELD = 'email'`,
(2) silence the corresponding system check, and (3) ban
reusing a system bot's email address, just like we do in
realm creation.]
As we migrate to allow reuse of the same email with multiple realms,
we need to replace the old "no email reuse" validators. Because
stealing the email for a system bot would be problematic, we still ban
doing so.
This commit only affects the realm creation logic, not registering an
account in an existing realm.
Just now this is largely redundant with `test_signup_already_active`;
but very soon when we allow reusing an email across realms, the logic
will diverge.
The one thing this bit of logic is used for is to decide whether
there's an existing user which is a mirror dummy that we should
activate. This change causes us to ignore such an existing user if
it's on some other realm, and go straight into `do_create_user`.
This code appears to exist to cover a few extra lines in
zerver/lib/digest.py. But it's rather brittle, tucked as it is into
the middle of a different test's loop, and with the upcoming
introduction of the `lear` realm in testing, this test code itself
loses coverage.
For now, rather than fix this test code up, just delete it; we don't
have 100% coverage on `zerver/lib/digest.py`, while we do on this test
file, so that avoids breaking coverage in CI. As a followup, we
should add back some logic like this but in a more robust way,
probably as its own separate test method.