Our implementation requires at least 1 space after the
'#' so as not to break existing linkifiers like '#123', etc.,
which generally follow the convention we show in linkifier
examples.
- [valid] : # Hello
- [valid] : #    Hello
- [invalid]: #Hello
For the frontend, we have taken the code from v0.7.0 of
upstream marked and made minor changes to avoid having
to refactor a significant part of our marked code.
For the backend, we merely have to change the regex to
require at least one space after '#', and add hashheader to our
list of blockparsers.
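For illustration, a minimal sketch of the kind of regex involved (not
the exact pattern used by the backend parser):

    import re

    # Require at least one space between the leading '#'s and the heading
    # text, so '#123'-style linkifier references are never parsed as
    # headings.
    HEADING_RE = re.compile(r'^(?P<level>#{1,6}) +(?P<header>.*?)\s*$')

    assert HEADING_RE.match("# Hello")
    assert HEADING_RE.match("#    Hello")
    assert HEADING_RE.match("#Hello") is None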
Fixes #11418.
This fixes two issues:
* The syntax check logic we had for zerver.tornado.autoreload would
end up clearing _reload_hooks if one of the files that had changed
was zerver.tornado.autoreload itself (because we'd have re-imported
the current module), which could be incredibly confusing when trying
to test the autoreload logic. It seems better to just not run the
syntax check for syntax errors in this file.
Similarly, because reloading event_queue.py would destroy the state
in the queues, we avoid that as well.
* We make sure to flush stdout after running any reload hooks, to make
sure their output reaches the user.
We were apparently not running our own forked Tornado autoreload
library when adding reload hooks, which meant that our autoreload
hooks didn't run at all.
This fixes an issue that made dump_event_queues never run and thus the
local development environment difficult to use for testing event queues.
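As a rough sketch of the shape of both changes (file paths and helper
names here are illustrative, not our actual Tornado fork's code):

    import sys

    # Hypothetical registry of reload hooks, mirroring tornado.autoreload's
    # add_reload_hook() mechanism.
    _reload_hooks = []

    # Re-importing these as part of the syntax check would wipe in-process
    # state (_reload_hooks, or the event queues themselves), so skip them.
    SKIP_SYNTAX_CHECK = {"zerver/tornado/autoreload.py",
                         "zerver/tornado/event_queue.py"}

    def on_files_changed(paths, syntax_check):
        for path in paths:
            if path not in SKIP_SYNTAX_CHECK:
                syntax_check(path)
        for hook in _reload_hooks:
            hook()
        sys.stdout.flush()  # so hook output reaches the user before re-exec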
We simply state that certain options are `Optional`.
The following files are affected:
add_users_to_mailing_list
send_to_email_mirror
fill_memcached_caches
client_activity
When typing `**options` as `Optional[str]`, we will see errors
in the form of `None type has no attribute 'split'`. This change
allows mypy to effectively handle the `None` case.
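An illustrative sketch of the resulting pattern in a management command
(the --streams option name is made up):

    from typing import Any, Optional

    from django.core.management.base import BaseCommand, CommandError

    class Command(BaseCommand):
        def handle(self, *args: Any, **options: Optional[str]) -> None:
            streams = options["streams"]  # hypothetical --streams option
            if streams is None:
                # mypy now forces us to handle the None case explicitly.
                raise CommandError("Please provide --streams.")
            stream_names = streams.split(",")
            self.stdout.write(", ".join(stream_names))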
This commit reduces the number of values returned by the
channel_to_zerver_stream function by setting the values
directly in the realm dict and returning it instead.
Fixes #11209.
This requires changing how zadd is used in rate_limiter.py:
In redis-py >= 3.0 the pairs to ZADD need to be passed as a dictionary,
not as *args or **kwargs, as described at
https://pypi.org/project/redis/3.2.1/ in the section
"Upgrading from redis-py 2.X to 3.0".
The rate_limiter change has to be in one commit with the redis upgrade,
because the dict format is not supported before redis-py 3.0.
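Concretely, the call sites change roughly like this (key and values are
illustrative):

    import time

    import redis

    client = redis.StrictRedis()
    key = "ratelimit:example"
    now = time.time()

    # redis-py 2.x accepted score/member pairs as positional arguments:
    #     client.zadd(key, now, str(now))
    # redis-py 3.x requires a single dict mapping members to scores:
    client.zadd(key, {str(now): now})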
We know via the `AbstractMessage` class that `sender`
is of type `UserProfile`. We type this as `Optional`
to tell mypy that the operands to the right of the first
`or` can indeed be evaluated within the following `for` loop.
As a result of dropping support for trusty, we can remove our old
pattern of putting `if False` before importing the typing module,
which was essential for Python 3.4 support, but not required and maybe
harmful on newer versions.
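The pattern being removed looks like this:

    # Old (Python 3.4-compatible) pattern: only import typing for type
    # checkers, since the typing module might not exist at runtime.
    if False:
        from typing import Any, Dict

    # New pattern, now that all supported Python versions ship typing:
    from typing import Any, Dict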
cron_file_helper
check_rabbitmq_consumers
hash_reqs
check_zephyr_mirror
check_personal_zephyr_mirrors
check_cron_file
zulip_tools
check_postgres_replication_lag
api_test_helpers
purge-old-deployments
setup_venv
node_cache
clean_venv_cache
clean_node_cache
clean_emoji_cache
pg_backup_and_purge
restore-backup
generate_secrets
zulip-ec2-configure-interfaces
diagnose
check_user_zephyr_mirror_liveness
This verifies that the client passed a last_event_id that actually
came from the queue instead of making up an ID from the future. It
turns out one of our tests was making up such an ID, but legitimate
clients are expected not to do so.
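A hedged sketch of the check (attribute and error names are
illustrative, not the actual event queue code):

    def check_last_event_id(queue, last_event_id):
        # An event queue knows the largest event id it has ever handed out,
        # so any larger value from a client must be made up.
        if last_event_id > queue.newest_pruned_id:  # illustrative attribute
            raise ValueError(
                "last_event_id %d is newer than any event in the queue"
                % (last_event_id,))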
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This feature is intended to cover all of our ways of exporting a
realm, not just the initial "public export" feature, so we should name
things appropriately for that goal.
Additionally, we don't want to include data exports in page_params;
the original implementation was actually buggy and would have.
When a person creates a new realm, they'll likely want to create a
bunch of initial streams at once. When doing so, it could be annoying
to have to mark all of the new stream notification messages as read.
Thus, to make this process smoother, we should automatically mark
the messages generated by the Notification Bot in the notifications
(announcements) stream, as well as in the newly created stream itself,
as read by the stream creator.
Fixes #12765.
This commit adds a pretty elaborate extension to the existing
OpenAPI documentation validation test: test_openapi_arguments.
This does a metacode analysis, comparing the OpenAPI documentation
with the appropriate function's declaration, default values, etc.
While it has some limitations, it is able to catch various common
classes of mistakes in the types declared for our OpenAPI
documentation.
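A rough sketch of the kind of comparison involved (the helper shape and
excluded parameter names are assumptions, not the test's actual code):

    import inspect

    def check_function_matches_openapi(function, openapi_parameters):
        documented = {param["name"] for param in openapi_parameters}
        accepted = set(inspect.signature(function).parameters)
        # request/user_profile are passed by the framework, not the API.
        accepted -= {"request", "user_profile"}
        assert documented == accepted, documented.symmetric_difference(accepted)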
In `force_str` we assume that Python 2 strings should be
considered. This is no longer the case, so we replace all
occurrences of `Text` with `str`, and remove the unreachable
condition.
(Probably further cleanup is possible, but this code shouldn't be
modified again in any case).
While it's true that `datetime` is reachable implicitly via `pytz`,
it makes sense that mypy should now complain about declaring our
return type as `pytz.datetime.tzinfo`, when such a type doesn't
actually exist.
Investigation into #12876, a mysterious bug where users were seeing
messages reappear as unread, determined that the root cause was
missing headers to disable client-side caching for Zulip's REST API
endpoints.
This manifested, in particular, for `GET /messages`, which is
essentially the only API GET endpoint used by the webapp at all. When
using the `Ctrl+Shift+T` feature of browsers to restore a recently
closed tab (and potentially other code paths), the browser would
return from its disk cache a cached copy of the GET /messages results.
Because we include message flags on messages fetched from the server,
this in particular meant that those tabs would get a stale version of
the unread flag for the batches of the most recent ~1200 messages that
Zulip fetches upon opening a new browser tab.
The issue took some care to reproduce as well, in large part because
the arguments to those initial GET /messages requests will vary as one
reads messages (because the `pointer` moves forward) and then enters
the "All messages" view; the disk cache is only used for GET requests
with the exact same URL parameters.
We will probably still want to merge the events error-handling changes
we had previously proposed for this, but the conclusion of this being
a straightforward case of missing cache-control headers is much more
satisfying than the "badly behaving Chrome" theory discussed in the
issue thread.
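The shape of the fix is to send no-caching headers on API responses; a
minimal sketch in Django terms (not the exact middleware/view code):

    from django.utils.cache import add_never_cache_headers

    def set_api_cache_headers(response):
        # Adds Cache-Control: max-age=0, no-cache, no-store, must-revalidate,
        # so browsers never serve API responses like GET /messages from
        # their disk cache.
        add_never_cache_headers(response)
        return response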
Fixes #12876.
Django’s default FileSystemFinder disallows STATICFILES_DIRS from
containing STATIC_ROOT (by raising an ImproperlyConfigured exception),
because STATIC_ROOT is supposed to be the result of collecting all the
static files in the project, not one of the potentially many sources
of static files.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
In the rare case that Zulip receives an email with only an HTML
format, we originally (code dating to 2013) shelled out to
html2markdown/python-html2text in order to convert the HTML into
markdown.
We long since added html2text as a reasonably managed Python
dependency of Zulip; we should just use it here.
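Usage of the library is roughly a one-liner:

    import html2text

    markdown_body = html2text.html2text("<p>Hello <b>world</b></p>")
    # -> "Hello **world**\n\n"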
This change serves to declutter webhook-errors.log, which is
filled with too many UnexpectedWebhookEventType exceptions.
Keeping UnexpectedWebhookEventType in zerver/lib/webhooks/common.py
led to a cyclic import when we tried to import the exception in
zerver/decorators.py, so this commit also moves this exception to
another appropriate module. Note that our webhooks still import
this exception via zerver/lib/webhooks/common.py.
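A hedged sketch of the resulting layout (the destination module is
illustrative; only common.py's re-export is described by this commit):

    # In zerver/lib/exceptions.py (or another module with no webhook
    # imports):
    class UnexpectedWebhookEventType(Exception):
        pass

    # In zerver/lib/webhooks/common.py, re-export it so existing webhook
    # code can keep importing from the old location:
    from zerver.lib.exceptions import UnexpectedWebhookEventType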
Changed the requirements for UserProfile in order to allow use of
the formataddr function in send_mail.py.
Converted send_email to use formataddr in conjunction with the commit
that strengthened requirements for full_name, such that they can now be
used in the to field of emails.
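For reference, formataddr builds an RFC 5322 address from a
(name, email) pair:

    from email.utils import formataddr

    to = formataddr(("Full Name", "user@example.com"))
    # -> 'Full Name <user@example.com>'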
Fixes #4676.
This changes the requirements for UserProfile to disallow some
additional characters, with the overall goal of being able to use
formataddr in send_mail.py.
We don't need to be particularly careful in the database migration,
because user full_names are not required to be unique.
We were seeing errors when publishing typical events in the form of
`Dict[str, Any]`, because the expected type was a `Union`. So we instead
change the only non-dictionary call to pass a dict instead of a `str`.
Per the import line:
`from unittest import loader, runner # type: ignore # Mypy cannot pick
these up.`
Because `TextTestResult` inherits from `runner.TextTestResult`, mypy
doesn't see `self` as having an attribute `stream`, so we ignore these
instead of cluttering with `casts` or `isinstances`.
The code generating pub_dates for messages would fail to distribute them
across days if tot_messages was too large.
We refactor this code as a separate function (for clarity and to unit
test for the bug we're fixing), and change the structure and naming to a
form that more clearly describes what's happening. We also shift away
from the approach of all the float-to-int conversions, as this is in
general tricky and bug-prone; Python's timedelta() handles floats as
arguments, so we take advantage of that.
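A small illustration of the timedelta behavior we rely on (the numbers
are made up):

    from datetime import timedelta

    # timedelta supports float arguments and float arithmetic directly, so
    # no manual float-to-int conversion is needed to space messages out.
    spacing = timedelta(days=10) / 1000   # gap between consecutive messages
    offset_of_message_37 = spacing * 37   # still an exact timedelta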
Apparently, a subtle mismatch between the filename/URL formats for our
upload codebases meant that importing Slack avatars into systems using
S3_UPLOAD_BACKEND would end up with the avatars having the wrong URLs.
This replaces the two custom Google authentication backends originally
written in 2012 with using the shared python-social-auth codebase that
we already use for the GitHub authentication backend. These are:
* GoogleMobileOauth2Backend, the ancient code path for mobile
authentication last used by the EOL original Zulip Android app.
* The `finish_google_oauth2` code path in zerver/views/auth.py, which
was the webapp (and modern mobile app) Google authentication code
path.
This change doesn't fix any known bugs; its main benefit is that we
get to remove hundreds of lines of security-sensitive semi-duplicated
code, replacing it with a widely trusted, high quality third-party
library.
During the time between when we refactored the GitHub authentication
backend to use SocialAuthBase and now (when we're about to migrate
GoogleAuthBackend to use that code path as well), we accidentally
added some GitHub-specific authentication backend tests to the common
test class.
Fix this by moving them to the GitHub-specific subclass.
This is a prep commit for adding validation of the request variable
types, since that will require actually analyzing the code of the
function itself, and hence having a variable storing the function
itself.
In commit 7c71e98, we added a special exception for the
/users/me/subscriptions endpoint in the automatic validation test.
By adding some extra documentation, we now remove this extra code,
as well as the endpoint from the list of pending endpoints.
In the validation test, we now use a different message for when there
is an endpoint in pending_endpoints with some documentation already.
This change is a bit hackish, but it's okay since we'll be removing it
once we've resolved all pending endpoints (which is bound to happen).
The previous iteration did not properly handle languages with a
different word order than English.
Discovered via warning output in `manage.py makemessages`.
When we add Plus, the first sentence should change to "Available on Zulip
Standard and Plus".
I copied the styling of .tip out of expediency, but it's also possible that
long term we'll want only 1 tip-like box styling.
The hover styling is a bit random, but I tried to copy other hover styles I
found in settings.scss.
Note that this renames .upgrade_realm_plan_type_suggestion to .upgrade-tip.