Previously, api_description and api_code_examples were two
independent markdown extensions for displaying OpenAPI content, used
in the same places. We combine them into a single markdown extension
(with two processors) and move them to the openapi folder, which
makes the codebase more readable and groups the openapi code in one
place.
To facilitate re-use of the same parameters in other paths, this
commit stores the content of the parameter
"include_custom_profile_fields" in components.
To facilitate re-use of the same parameters in other paths, this
commit stores the content of the parameter
"history_public_to_subscribers" in components.
Instead of using stream_name + integers as topics, we now
generate topics using pos in `config.generate_data.json`.
This helps us create and test more realistic topics.
This page isn't fully polished and I'm not sure it's the best
decision tree here, but it's definitely better to have this page than
not, and we can always adjust it going forward.
Fixes #10033.
For realms with no retention policy on themselves or any of their
streams, no archiving happens, but three lines of logs were generated
anyway. That's redundant, and we make changes in this commit to avoid
logging those lines when nothing of interest is happening.
For privacy-minded folks who don't want to leak the
information of whether they're online, this adds an
option to disable sending presence updates to other
users.
The new setting lives in the "Other notification
settings" section of the "Notification settings"
page, under a "Presence" subheading.
Closes #14798.
This commit extends the template for "choose email" to mention for
users who have unverified emails that they need to verify them before
using them for Zulip authentication.
Also modified `social_auth_test_finish` to assert that all emails
are present on the "choose email" screen, as we need unverified
emails to be shown to the user and verified emails for login/signup.
Fixes #12638, as this was the last task for that issue.
As "choose email" screen is only used for GitHub auth, the part
that deals with it is separated from `social_auth_test` and
dealt in a new function `social_auth_finish`. This new
`social_auth_finish` contains only the code that deals with
authentication backends that do not have "choose email" screen.
But it is overidden in GitHub test class to handle the
"choose email" screen.
It was refactored because `expect_choose_email_screen` blocks
were confusing while figuring out how tests work on non GitHub
auths.
Sentry has client SDKs for many programming languages and frameworks.
Sentry has deprecated their old "Raven" series of client SDKs in favor
of a new series of client SDKs following their unified API format.
As it stood, our Sentry integration was already outdated: it was
written for the version 5 payloads (the Raven SDKs stopped at version
6, which is already vastly different from version 5), while the
current and prominently used version is version 7.
This commit completely rewrites the existing Sentry integration.
Tested and supported events:
- Issue created, resolved, assigned, and ignored events.
- "Sentry events" for "capture exception" and "capture message" with
the Golang, Node.js, and Python SDKs (other SDKs should also work but
only these were used for testing).
For reference:
- Old (Raven) SDK for python:
https://github.com/getsentry/raven-python
- New (Unified API format) SDK for python:
https://github.com/getsentry/sentry-python
Signed-off-by: Hemanth V. Alluri <hdrive1999@gmail.com>
**Features:**
Improving `./manage.py convert_gitter_data`
- If messages have been post-processed to add a 'room' field, we
create as many streams as existing rooms.
- Messages with a 'room' field go to the corresponding stream.
- This modification is backward compatible (up to renaming of the
default stream/topic). I.e.
+ messages that have no 'room' field go to the default stream/topic
+ messages that do go to a specific stream
**Implementation:**
- adding a map `stream_map` to map room names to stream ids
- creating one stream per distinct room, plus one default stream
Takes advantage of https://github.com/minrk/archive-gitter/pull/5.
Members of the org can now see the list of invitations they have
sent. Members are given permission to revoke and resend the
invitations they sent, and tests were added to verify that a member
can revoke and resend only their own invitations.
Fixes #14007.
Previously, the hanging_lists preprocessor didn't consider anything
indented at 4 or more spaces to be a list. This meant that when
we had a list like:
1. 1
2. 2
3. 3
    2. 2a
    1. 1a
We would insert a newline between "3. 3" and "2. 2a". This resulted
in the block processor breaking one list into two blocks, which
messed up the nesting and indentation for the second block.
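A minimal sketch of the idea behind the fix (the regex and helper
here are illustrative, not Zulip's actual preprocessor code):

    import re

    # Treat a line as a list item regardless of indentation depth, so
    # nested items indented 4+ spaces don't get a spurious blank line
    # inserted before them.
    LIST_ITEM_RE = re.compile(r"^\s*(\d+\.|[*+-])\s")

    def is_list_item(line: str) -> bool:
        return bool(LIST_ITEM_RE.match(line))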
This does not rely on the desktop app being able to register for the
zulip:// scheme (which is problematic with, for example, the AppImage
format).
It also is a better interface for managing changes to the system,
since the implementation exists almost entirely in the server/webapp
project.
This provides a smoother user experience, where the user doesn't need
to do the paste step, when combined with
https://github.com/zulip/zulip-desktop/pull/943.
Fixes #13613.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We've had bugs in the past where users with a name in the format
"Alice|999" would confuse our markdown rendering or typeahead. While
that's a fully solvable problem, there's no real use case for that, so
it's probably simpler to just prevent users from setting their name
that way.
Fixes #13923.
Prior to this change, there were reports of 500s in
production due to `export.extra_data` being
`None`. This was reproducible using the S3
backend in development when a row was created in
the `RealmAuditLog` table, but the export failed in
the `DeferredWorker`. This left an entry lying
around that was never updated with an `extra_data`
field.
To fix this, we catch any exceptions in the
`DeferredWorker`, and then update `extra_data` to
encode the failure. We also fix the fact that we
never updated the export UI table with pending exports.
These changes also obviated the need for the somewhat
hacky `clear_success_banner` logic.
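A hedged sketch of the worker-side handling described above (the
do_export helper and field layout are hypothetical stand-ins):

    import time
    import ujson

    # Inside the DeferredWorker's processing (sketch): record the
    # failure on the RealmAuditLog row instead of leaving extra_data
    # unset.
    def run_export(export_row, event: dict) -> None:
        try:
            do_export(event)  # hypothetical stand-in for the export work
        except Exception:
            export_row.extra_data = ujson.dumps(
                dict(failed_timestamp=time.time()))
            export_row.save(update_fields=["extra_data"])
            raise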
This helps us write the new digest only if the db rebuild succeeds.
We were relying on the caller to be successful in building the db;
that was hacky and unreliable. We now write the new db digest once
the caller succeeds, which ensures that we write a new digest after
every successful attempt.
This fixes the anomaly we were facing where databases were rebuilt
on the 2nd provision attempt with no changes to files or migrations.
This was happening because we didn't write a new digest for the db
after the first provision (the case where the DB didn't exist).
During the 1st provision, we check the template_status() of both the
Dev and Test databases, but database_exists() obviously returned
false, so we rebuilt the databases but forgot to write_new_digest,
hence the anomaly in the second provision explained above.
Our previous set of indexes for the Message table did not contain
anything to optimize queries for all the messages in a topic in an
organization where the same topic name might appear in 10,000s of
messages in many streams.
We add two indexes here to support common queries
* A `(recipient_id, upper(subject), id)` index to support
"Fetch all messages from a topic" queries.
* A `(recipient_id, subject, id)` index to support
"Fetch all messages by topic" queries.
We use `DESC NULLS LAST` on both indexes because we almost always
want to query from the "Latest N messages" on a topic, not the
"Earliest N messages".
These indexes dramatically improve the performance of fetching topic
history (which remains not good enough in my opinion; we'll likely
need caching to make it nice), and more importantly make it possible
to check quickly which users have sent messages to a topic for the
"Topics I follow" feature.
Fixes part of #13726.
This ensures that if one deletes `zproject/dev-secrets.conf`, we end
up rebuilding the databases from scratch (which, critically, will
ensure the password that gets set up matches what's in the current
version of the configuration file).
This should address a category of issue we've had where deleting
`zproject/dev-secrets.conf` would result in provision failing.
The logic in do_set_realm_property would previously "change" the
email addresses of every user in the realm, even if they hadn't
actually changed.
We fix this by skipping the logic when it's unnecessary.
bulk_update is used to update the email of user_profile objects in
database when email_address_visibility is changed.
This helps resolve the problem of timeout errors in realms with a
large number of users, caused by the large number of database queries
previously run in a loop.
Since bulk_update doesn't flush caches, we need our own bit of code to
do that.
Fixes a part of #14600.
This will make Django automatically remove them when we run
squashmigrations. There are still some RunSQL statements which
we will have to take care of manually.
We add URLs to the `links_for_embed` set only when
the `url_embed_preview_enabled` flag is turned on.
So, it is sufficient to check that `links_for_embed`
is not empty.
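A tiny illustration of the resulting check (the queueing helper is a
hypothetical stand-in):

    def maybe_queue_embed_previews(message, links_for_embed: set) -> None:
        # links_for_embed is only populated when url_embed_preview_enabled
        # was on, so checking non-emptiness is sufficient.
        if links_for_embed:
            queue_url_embed_jobs(message, links_for_embed)  # hypothetical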
I imagine this can be improved in various ways, but I've initialized
this with all the **Changes** entries recorded in either zulip.yaml or
the rest of the API documentation, and I expect we'll be able to
iterate on this effectively.
It'll also be useful as a record of changes that we should remember to
cover in the API documentation as we document more endpoints that
currently don't discuss these issues.
While working on this, I fixed various issues where feature levels
should have been mentioned or endpoints didn't properly document changes.
This new type eliminates a bunch of messy code that previously
involved passing around long lists of mixed positional and keyword
arguments, instead using a consistent data object for communicating
about the state of an external authentication (constructed in
backends.py).
The result is a significantly more readable interface between
zproject/backends.py and zerver/views/auth.py, though likely more
could be done.
This has the side effect of renaming fields for internally passed
structures from name->full_name and next->redirect_to; this accounts
for most of the changes in the test codebase.
Modified by tabbott to add comments and collaboratively rewrite the
initialization logic.
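A minimal sketch of the shape such a data object can take (the class
name and everything beyond the full_name/redirect_to fields mentioned
above are assumptions):

    from dataclasses import dataclass, field
    from typing import Any, Dict, Optional

    @dataclass
    class ExternalAuthState:  # hypothetical name for the new type
        # The authenticated user, if any (a UserProfile in the real code).
        user_profile: Optional[Any] = None
        # Internally passed data; per this commit, keys include
        # full_name (formerly name) and redirect_to (formerly next).
        data_dict: Dict[str, Any] = field(default_factory=dict)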
This changes add_reaction in zerver.views.reactions to allow calling
the POST ../messages/{message_id}/reactions API endpoint with
emoji_name only, even in the case of a custom emoji.
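For illustration, after this change a request along these lines works
with just emoji_name, even for a custom emoji (server URL, message id,
and credentials are placeholders):

    import requests

    # Add a reaction supplying only emoji_name; the server resolves
    # the emoji, including custom emoji, from the name alone.
    requests.post(
        "https://zulip.example.com/api/v1/messages/42/reactions",
        auth=("bot@example.com", "BOT_API_KEY"),
        data={"emoji_name": "octopus"},
    )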
We now prevent these variations:
* <hr/>
* <hr />
* <br/>
* <br />
We could enforce similar consistency for other void
tags, if we wished, but these two are particularly
prevalent.
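A hedged sketch of the kind of check involved (the regex and function
shape are illustrative, not the actual linter rule):

    import re

    # Flag self-closing or space-closed variants of <br> and <hr>.
    BAD_VOID_TAG_RE = re.compile(r"<(br|hr)\s*/>")

    def check_line(line: str) -> bool:
        """Return True if the line uses a disallowed void-tag variant."""
        return bool(BAD_VOID_TAG_RE.search(line))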
Firstly, change endpoint descriptions in zulip.yaml so that they
match their counterparts in the API docs. Then edit the API docs
so that they use the api description markdown extension for
displaying endpoint descriptions.
Add a function in openapi.py to access endpoint descriptions written
in zulip.yaml. Use this function to create a markdown extension
for rendering endpoint descriptions written in zulip.yaml.
We use this extension for a single endpoint to get test coverage.
Change test_alert_words to use do_add_alert_words() and
do_remove_alert_words() from lib/actions.py instead of the existing
add_user_alert_words() and remove_user_alert_words(), as it is the
general policy to call these functions when we are updating the
database.
This reverts commit 8f32db81a1.
This change unfortunately requires an index that we don't have, and
thus is incredibly expensive. We'll need to do a thoughtful reworking
before we can integrate it again.
The post_init cache-flushing behavior in the original alert words
migration was subtly wrong; while it may have passed tests, it didn't
have the right ordering for unlikely races.
We use post_save rather than post_init hooks precisely because they
ensure that we flush the cache after we know the database has been
updated and any future reads from the database will have the latest
state.
Previously, alert words were case-insensitive in practice, by which I
mean the Markdown logic had always been case-insensitive; but the data
model was not, so you could create "duplicate" alert words with the
same words in different cases. We fix this inconsistency by making
the database model case-insensitive.
I'd prefer to be using the Postgres `citext` extension to have
Postgres take care of case-insensitive logic for us, but that
requires installing a Postgres extension as root on the Postgres
server, which is a pain and perhaps not worth the effort to arrange,
given that we can achieve our goals with a transaction when adding
alert words.
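A sketch of what the transactional, case-insensitive duplicate
handling can look like (manager calls simplified; not the exact
implementation):

    from django.db import transaction

    def add_alert_words(user_profile, new_words):
        # Compare lowercased forms inside a transaction so that
        # "duplicate" words differing only in case can't be created.
        with transaction.atomic():
            existing = {
                w.lower()
                for w in AlertWord.objects.filter(
                    user_profile=user_profile
                ).values_list("word", flat=True)
            }
            AlertWord.objects.bulk_create(
                [AlertWord(user_profile=user_profile,
                           realm=user_profile.realm, word=w)
                 for w in new_words
                 if w.lower() not in existing]
            )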
We take advantage of the migrate_alert_words migration we're already
doing for all users to effect this transition.
Fixes #12563.
Previously, alert words were a JSON list of strings stored in a
TextField on user_profile. That hacky model reflected the fact that
they were an early prototype feature.
This commit migrates from that to a separate table, 'AlertWord'. The
new AlertWord has user_profile, word, id, and realm (a denormalization
so we can provide a nice index for fetching all the alert words in a
realm).
This transition requires moving the logic for flushing the Alert Words
caches to their own independent feature.
Note that this commit should not be cherry-picked without the
following commit, which fixes case-sensitivity issues with Alert Words.
This is a precursor commit to change the name of
AlertWordNotificationProcessor to AlertWordsNotificationProcessor
to match the change from UserProfile.alert_words to AlertWord.
Previously, we added support for 'none', 'plain' and 'noop' and a
function `lang = remap_language(lang)`. This also had the potential
to encourage adding more remappings, something that we deliberately
want to keep to a minimum.
For context, Anders K doesn't want us to keep any remapping (only
keeping 'text' which is the default no-op lexer that pygments has)
and Tim wants to keep 'plain' and 'text'. We should only document
and advertise 'text'.
When a user is reading messages only in stream or topic narrows, the
pointer can be left far behind. Using this to compute the
furthest_read_time causes the bankruptcy banner to be shown even when
a user has been actively reading messages. This commit switches to
using the sent time of the last message that the user has read to
compute the furthest read time.
This hack was important when only the mobile apps (and not the webapp)
were using the unread_msgs data structure and the first_unread
infrastructure. Now that the webapp is using those things, there
aren't leaked ancient unread messages that aren't accessible on the
webapp, so the few users still in this situation can get out of it by
just reading the problematic messages.
I don't think we've had a use for these tools since our unread systems
stabilized shortly after they were written, so it makes sense to just
remove them rather than updating them for the pointer migration.
In Django 2.1, the preferred way to express a nullable BooleanField
changed from NullBooleanField to passing null=True to BooleanField.
This updates our codebase to use the preferred API. Tweaked by
tabbott to update the linter rules.
The migration is a noop for Django accounting only.
Part of #11341.
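A before/after illustration (the model and field names are
hypothetical):

    from django.db import models

    class ExampleModel(models.Model):
        # Before (pre-Django 2.1 style):
        #     flagged = models.NullBooleanField(default=None)
        # After (preferred since Django 2.1):
        flagged = models.BooleanField(null=True, default=None)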
This cleans up any messages that might have been exchanged with
`NEW_USER_BOT` or `FEEDBACK_BOT` (cross-realm bots that were last
used, as far as we know, years ago) that have been completely removed
from the codebase.
Details on the algorithm are in the migration code itself.
Fixes #13583.
Previously, the message and event APIs represented the user differently
for the same reaction data. To make this more consistent, I added a
user_id field to the reaction dict for both messages and events. I
updated the front end to use the user_id field rather than the user
dict. Lastly, I updated front end and back end tests that used user
info.
I primarily tested this by running my local Zulip build and
adding/removing reactions from messages.
Fixes #12049.
Some sites don't render correctly unless you are one of the latest browsers.
YouTube Music, for instance, changes the page title to "Your browser is
deprecated, please upgrade.", which makes our URL previews look bad.
This ancient migration imports boto, which interferes
with our upgrade to boto3.
> git name-rev f13d6a18eb
f13d6a18eb tags/1.6.0~1082
We can safely assume nobody is upgrading from a server on <1.6.0,
since we have no supported platforms in common with those releases.
This ancient migration imports boto, which interferes
with our upgrade to boto3.
> git name-rev 8ae35211b5
8ae35211b5 tags/1.6.0~1924
We can safely assume nobody is upgrading from a server on <1.6.0,
since we have no supported platforms in common with those releases.
This ancient migration imports boto, which interferes
with our upgrade to boto3.
> git name-rev a32f666f5c
a32f666f5c tags/1.6.0~2384
We can safely assume nobody is running servers on <1.6.0; there are no
supported platforms in common with 1.6.0 anyway.
In the original implementation, we were checking for the default language
inside format_code, which resulted in the setting being ignored when set to
quote, math, tex or latex. We shift the validation to
`check_for_new_fence`.
We also update the tests to use a saner naming scheme for the variables.
Internet Explorer does not support `position: sticky`, which improves
floating recipient bar behavior during scrolling; this is one of the
issues blocking PR #9910.
IE also does not support some other features that modern browsers
support, and hence Zulip may not work super well in it.
This commit adds an error page that'll be displayed when a user logs
in from Internet Explorer. Also, a test is added.
Includes this change:
* openapi/python_examples: Update get_single_user.
This updates get_single_user to pass keyword arguments to
get_user_by_id instead of passing a dictionary.
This is required for CI to pass, as we indeed fixed the API of that
function (which had only been present with the wrong API for one release).
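An illustration of the updated example (the exact parameter shown is
an assumption):

    import zulip

    client = zulip.Client(config_file="~/zuliprc")

    # Before: the example passed a dictionary of request parameters:
    #     client.get_user_by_id(8, {"client_gravatar": False})
    # After: keyword arguments, matching the fixed function API:
    result = client.get_user_by_id(8, client_gravatar=False)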
This commit removes can_create_streams and can_subscribe_other_users
in favor of has_permission, a generic function in the UserProfile
model for these policy settings.
Relevant changes are made to events.py to avoid duplication at some
places.
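A hedged sketch of such a generic helper (the method shape, policy
field names, and POLICY_* constants are assumptions, not the exact
code):

    # Sketch of methods on the UserProfile model.
    def has_permission(self, policy_name: str) -> bool:
        policy = getattr(self.realm, policy_name)
        if policy == Realm.POLICY_ADMINS_ONLY:
            return self.is_realm_admin
        if policy == Realm.POLICY_MEMBERS_ONLY:
            return not self.is_guest
        return False

    def can_create_streams(self) -> bool:
        return self.has_permission("create_stream_policy")

    def can_subscribe_other_users(self) -> bool:
        return self.has_permission("invite_to_stream_policy")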
We have two different digest schemes to make
sure we keep the database up to date. There
is the migration digest, which is NOT in the
scope of this commit, and which already
used the mechanism we use for other tools.
Here we are talking about the digest for
important files like `populate_db.py`.
Now our scheme is more consistent with how we
check file changes for other tools (as
well as the aforementioned migration files).
And we only write one hash file, instead of
seven.
And we only write the file when things have
actually changed.
And we are explicit about side effects.
Finally, we include a couple new bot settings
in the digest:
INTERNAL_BOTS
DISABLED_REALM_INTERNAL_BOTS
NOTE: This will require a one-time transition,
where we rebuild both databases (dev/test).
It takes a little over two minutes for me,
so it's not super painful.
I bump the provision version here, even
though you don't technically need it (since
the relevant tools are actually using the
digest files to determine if they need to
rebuild the database). I figure it's just
good to explicitly make this commit trigger
a provision, and the user will then see
the one-time migration of the hash files
with a little bit less of a surprise.
And I do a major bump, not a minor bump,
because when we go in the reverse direction,
the old code will have to rebuild the
database due to the legacy hash files not
being around, so, again, I just prefer it
to be explicit.
Fixes #14595.
Invalid HTTP requests could end up in an unhandled exception in
skip_200_and_304, due to the record not having the status_code
attribute set. With this change, we'll avoid the exception.
Example:
curl -X POST -H 'Transfer-Encoding : chunked' --data-binary 'a' 'http://zulipdev.com:9991/json/messages/57'
2020-04-21 10:56:22.007 WARN [django.server] "POST /json/messages/57 HTTP/1.1" 405 95
2020-04-21 10:56:22.007 INFO [django.server] code 400, message Bad request syntax ('a')
2020-04-21 10:56:22.008 WARN [django.server] "a" 400 -
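The guard presumably looks something like this (a sketch of the fix,
assuming the filter keeps its existing shape):

    import logging

    def skip_200_and_304(record: logging.LogRecord) -> bool:
        # Malformed requests may never get a status_code attribute set
        # on the log record; getattr with a default avoids the
        # AttributeError.
        if getattr(record, "status_code", None) in (200, 304):
            return False
        return True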
This will work around https://bugs.python.org/issue34939 when we
convert the type comment to a Python 3.6 style annotation.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
We remove the `generate_fixtures` option here mostly
for simplicity, but in particular to facilitate
an upcoming commit to simplify the job of
`generate-fixtures` (and remove its `--force` option).
The command line option here for `test-backend`
was really calling `generate_fixtures --force`,
which we're about to rename `tools/rebuild-test-database`.
The `test-backend` tool is already smart about catching
up on migrations, so we generally don't need to tell it
to repair the database.
And if the database does get corrupt, you can just do
it directly with `tools/rebuild-test-database`.
This eliminates the `use_force` flag in
`update_test_databases_if_required`, which was easy
to confuse with `rebuild_test_database`.
The other caller wasn't using `use_force`.
Somewhat confusingly, we have two different types of
digests related to databases. The migration digests
are pragmatic, since changes to migrations are a bit
more frequent for certain use cases and don't
necessitate a complete rebuild of the database.
Anyway, these are just more specific names.
Generated by autopep8 --aggressive, with the setup.cfg configuration
from #14532. In general, an isinstance check may not be equivalent to
a type check because it includes subtypes; however, that’s usually
what you want.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
Generated by autopep8, with the setup.cfg configuration from #14532.
I’m not sure why pycodestyle didn’t already flag these.
Signed-off-by: Anders Kaseorg <anders@zulipchat.com>
This refactors `extract_python_code_example` to accept an
`example_regex` parameter. It can now be used to extract code examples
from javascript_examples.py.
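A rough sketch of the parameterized helper (the signature and
behavior are approximated from the description above; the regex is
assumed to capture the example text in group 1):

    import re
    from typing import List, Pattern

    def extract_code_example(document: List[str],
                             example_regex: Pattern) -> List[str]:
        # Collect the example lines whose markers match example_regex,
        # so one helper can serve both python_examples.py and
        # javascript_examples.py.
        examples = []
        for line in document:
            match = example_regex.search(line)
            if match:
                examples.append(match.group(1))
        return examples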
Previously, the send_custom_email code path leaked files in paths that
were not `.gitignored`, under templates/zerver/emails.
This became problematic when we added automated tests for this code
path, as it meant we leaked these files every time `test-backend` ran.
Fix this by ensuring all the files we generate are in this special
subdirectory.
The purpose is to provide a way for (non-webapp) clients,
like the mobile and terminal apps, to tell whether the
server they're talking to is new enough to support a given
API feature -- in particular a way that
* is finer-grained than release numbers, so that for
features developed after e.g. 2.1.0 we can use them
immediately on servers deployed from master (like
chat.zulip.org and zulipchat.com) without waiting the
months until a 2.2 release;
* is reliable, unlike e.g. looking at the number of
commits since a release;
* doesn't lead to a growing bag of named feature flags
which the server has to go on sending forever.
Tweaked by tabbott to extend the documentation.
Closes #14618.
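For example, a client can gate newer API features on the level the
server advertises (sketch; uses the unauthenticated server_settings
route and treats a missing field as a pre-feature-level server):

    import requests

    # Check the server's advertised feature level before relying on a
    # newer API capability.
    settings = requests.get(
        "https://zulip.example.com/api/v1/server_settings").json()
    if settings.get("zulip_feature_level", 0) >= 1:
        pass  # safe to use an API feature introduced at level 1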
We now have two functions related to digests
for processes:
is_digest_obsolete
write_digest_file
In most cases we now **wait** to write the
digest file until after we've successfully
run a process with its new inputs.
In one place, for database migrations, we
continue to write the digest optimistically.
We'll want to fix this, but it requires a
little more code cleanup.
Here is the typical sequence of events:
NEVER RUN -
is_digest_obsolete returns True
quickly (we don't compute a hash)
write_digest_file does a write (duh)
AFTER NO CHANGES -
is_digest_obsolete returns False
after reading one file for old
hash and multiple files to compute
hash
most callers skip write_digest_file
(no files are changed)
AFTER SOME CHANGES -
is_digest_obsolete returns True
after doing full checks
most callers call write_digest_file
*after* running a process
I make these all functions for consistency,
and in particular I want to continue to avoid
`glob.glob` calls until we are actually
computing hashes.
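A simplified sketch of this pair of functions (the helper layout and
digest-file format are assumptions):

    import glob
    import hashlib
    import os

    def compute_sha1sum(filenames) -> str:
        sha = hashlib.sha1()
        for fn in sorted(filenames):
            with open(fn, "rb") as f:
                sha.update(f.read())
        return sha.hexdigest()

    def is_digest_obsolete(digest_path: str, globs) -> bool:
        if not os.path.exists(digest_path):
            return True  # never run: no hash computation needed
        with open(digest_path) as f:
            old_hash = f.read().strip()
        files = [fn for pattern in globs for fn in glob.glob(pattern)]
        return compute_sha1sum(files) != old_hash

    def write_digest_file(digest_path: str, globs) -> None:
        files = [fn for pattern in globs for fn in glob.glob(pattern)]
        with open(digest_path, "w") as f:
            f.write(compute_sha1sum(files))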
This is mostly a prep to allow us to do
hashing in two separate places:
- check hashes
- update hashes
We would only update hashes **after** running
processes anew.
For `provision_inner` I considered using a
class to put the three path-related helpers
into a mini namespace, but it felt too heavy.
It wouldn't be completely implausible here
to extract something like a JSON config
file that has a list of globs for each
process that we do path-hashing for, but I
want to clean up other stuff first.
We now remove the `Type` and `_TYPE` suffixes,
as we will start treating this like a real
class with behavior, instead of a glorified
struct.
We pass in `platform_type`, so that we can
just derive some of our data from that,
where naming conventions apply.
And we use the name `migrations_status_path`,
instead of the name `migration_status`, which
had two different meanings before this change.
This is a pure refactor, and we just early-exit
in case the database doesn't exist (knowing that
that can be a bit of a lie now; see the comment
I added).
Refactored code in actions.py and streams.py to move stream-related
functions into streams.py and remove the dependency on actions.py.
The validate_sender_can_write_to_stream function in actions.py was
renamed to access_stream_for_send_message in streams.py.
This test was passing a string, not an Iterable[str]; a quirk in
the remove_alert_words implementation, which processed each character
in the string, happened to make this work anyway.