Zulip's OpenAPI specification in zulip.yaml includes examples
for its various schemas. Validate each example against its
respective schema to ensure that all of the examples actually
conform to those schemas.
Part of #14100.
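A minimal sketch of the kind of check this adds, assuming the
PyYAML and jsonschema packages and that each example sits inline
next to its schema (the actual test's layout may differ):

    # Illustrative only: walk zulip.yaml and validate each inline
    # `example` against the schema it belongs to.
    import yaml
    from jsonschema import validate

    with open("zerver/openapi/zulip.yaml") as f:
        spec = yaml.safe_load(f)

    for name, schema in spec.get("components", {}).get("schemas", {}).items():
        if "example" in schema:
            # Raises jsonschema.exceptions.ValidationError on a mismatch.
            validate(instance=schema["example"], schema=schema)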
We use this new widget in bot settings panels
(personal and org). It lets you re-assign a
bot to a new human user.
Ideally we can improve this code to use
our existing list widgets to make it more
performant for realms with lots of users.
While this functionality to post slow queries to a Zulip stream was
very useful in the early days of Zulip, when there were only a few
hundred accounts, it has long since stopped being useful, since
(1) the total request volume on the larger Zulip servers run by
Zulip developers makes such a stream far too noisy to follow, and
(2) other server operators don't want real-time notifications of
slow backend queries. The right structure for this is just a log
file.
We get rid of the queue and replace it with a "zulip.slow_queries"
logger, which will still log to /var/log/zulip/slow_queries.log for
ease of access to this information and propagate to the other logging
handlers. Reducing the number of queues is good for lowering
Zulip's memory footprint and restart performance, since we run at
least one dedicated queue worker process for each queue in most
configurations.
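A hedged sketch of what that logger can look like in Django's
LOGGING setting; the handler name and level below are illustrative,
not the exact configuration:

    # Illustrative only: a "zulip.slow_queries" logger that writes to
    # its own file and still propagates to the parent handlers.
    LOGGING = {
        "version": 1,
        "disable_existing_loggers": False,
        "handlers": {
            "slow_queries_file": {
                "class": "logging.handlers.WatchedFileHandler",
                "filename": "/var/log/zulip/slow_queries.log",
            },
        },
        "loggers": {
            "zulip.slow_queries": {
                "handlers": ["slow_queries_file"],
                "level": "INFO",
                # Propagate so entries also reach the other handlers.
                "propagate": True,
            },
        },
    }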
We use cloud-config for setting up the SSH keys and executing
some commands. When cloud-config sets the SSH key it doesn't override
the existing keys. So we need to set the SSH keys manually using a command
instead. This means we no longer require cloud-config at all; we
can pass a bash script as the user data in its place.
I also included a command to set the root user's SSH key.
size_slug represents the plan the droplet should be created on.
Since the new base droplet is created on the cheaper but more
feature-rich new plan, we have to update size_slug as well
to take advantage of the cheaper plan.
Previously a YAML syntax error resulted in an
UnhandledPromiseRejectionWarning and a successful exit code; now it
gives a clear message and a failing exit code.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
The major PROVISION_VERSION bump would not otherwise be needed,
but it was missing in commit 5ab62a3514 (#14834),
so I’m doing it here.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
We now use the `--streamlined` option for `run-dev.py`
when we use `test_server_running` for `test-api` and
`test-js-with-casper` (and its experimental
replacement, `test-js-with-puppeteer`).
This means we don't slow anything down with
processes like thumbor, process_fts_updates, etc.,
which aren't meaningfully exercised by these tests.
We may eventually want some tests to meaningfully
exercise those processes, and when that day comes,
we will need to add an extra argument to
`test_server_running`, probably, but until then,
we just always set `--streamlined` in that codepath.
There is actually a tool called `./tools/test-run-dev`
that we run in CI, and it will use the full mode.
It just doesn't verify much; it mostly polls
the server without testing specific features.
This seems to save about 1s of the startup time on a system I use
(~10.6s -> ~9.7s).
For basic testing (either manual or automated), we
generally only need the server and tornado running.
Obviously, it's nice to test the complete system,
but if you're on a slow PC, the overhead can be
annoying.
Note that we don't launch any of these processes
in `--streamlined` mode:
process_queue
process_fts_updates
deliver_scheduled_messages
thumbor
And then by not launching process_queue, we avoid
several child processes.
Basic functionality like sending messages will
still work here.
The streamlined mode may be helpful in debugging
our generally slow server startup time. Obviously,
some of the problem with startup is the auxiliary
processes here, but removing them as a variable
could help us focus on getting the core stuff fast.
Note that we still have the webpack watcher running
in streamlined mode.
For the particular case of thumbor, note that we
modify the proxy server to explicitly print and
return an error if we get a `/thumbor/*` request.
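A hypothetical sketch of that proxy behavior, assuming a
Tornado-based dev proxy (the handler name and messages are
illustrative, not the actual code):

    # Illustrative only: reject /thumbor/* requests with a clear
    # error when thumbor isn't running in --streamlined mode.
    import tornado.web

    class ThumborNotRunningHandler(tornado.web.RequestHandler):
        def get(self, *args: str) -> None:
            print("/thumbor request received, but thumbor is not running")
            self.set_status(502)
            self.finish("Thumbor is disabled in --streamlined mode.\n")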
We clean up the code related to launching
processes here.
We extract:
server_processes
We also extract these helpers for the webpack
logic:
do_one_time_webpack_compile
start_webpack_watcher
And then we move the code to actually launch
them lexically within the file (so as not to
be obscured by various function definitions).
Here is the new output for displaying ports:
Zulip services will listen on ports:
9991: web proxy
9992: Django
9993: Tornado
9994: webpack
9995: Thumbor
Note to Vagrant users: Only the proxy port (9991) is exposed.
I tone down the yellow for the Vagrant warning, and I show
the web proxy in cyan to emphasize it.
I also extracted the code into a function, and I don't call
that function until after `app.listen()`. (The users probably
won't notice much difference in the timing of this message, but
the message won't show if the `listen` step fails for some
reason, which I think is what we want here.)
We remove the import-tools code that was plunked
right into the middle of our command line
arguments.
Then we add a local var called `DESCRIPTION` to
fix some ugly code formatting, and we drop the
unnecessary `r` prefix from the multi-line string.
This does not rely on the desktop app being able to register for the
zulip:// scheme (which is problematic with, for example, the AppImage
format).
It also is a better interface for managing changes to the system,
since the implementation exists almost entirely in the server/webapp
project.
This provides a smoother user experience, where the user doesn't need
to do the paste step, when combined with
https://github.com/zulip/zulip-desktop/pull/943.
Fixes #13613.
Signed-off-by: Anders Kaseorg <anders@zulip.com>
This ensures we write the new digest only if the db rebuild
succeeds. We were relying on the caller to succeed in building
the db, which was hacky and unreliable. We now write the new db
digest once the rebuild succeeds, which ensures the digest is
updated after every successful attempt.
This fixes the anomaly we were seeing where the databases were
rebuilt on the 2nd provision attempt despite no changes to files
or migrations. That happened because we didn't write a new
digest for the db after the first provision (the case where the
DB didn't exist yet). During the 1st provision, we check
template_status() of both the Dev and Test databases, but
database_exists() obviously returned false, so we rebuilt the
databases but forgot to write_new_digest, hence the anomaly in
the second provision explained above.
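A hedged sketch of the resulting control flow, using the helper
names mentioned above (template_is_stale and do_rebuild_database
are hypothetical placeholders; the real code is structured
differently):

    # Illustrative only: record the digest right after a successful
    # rebuild instead of trusting the caller to have done so.
    def rebuild_database_if_needed() -> None:
        needs_rebuild = not database_exists() or template_is_stale()
        if needs_rebuild:
            do_rebuild_database()  # the actual rebuild steps
            write_new_digest()     # now written after every successful rebuild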
Yes, it's slightly janky to create an
argparse.Namespace object like this, but it
saves us from shelling out to a script whose
only real value-add is parsing a single
`threshold_days` argument.
This saves about 130ms for a no-op provision.
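A hedged sketch of the idea; the attribute name comes from the
message above, while the callee is a hypothetical placeholder:

    import argparse

    # Illustrative only: build the parsed-args object directly
    # instead of shelling out just to parse `threshold_days`.
    args = argparse.Namespace(threshold_days=14)  # illustrative value
    clean_old_caches(args)  # hypothetical callee expecting parsed args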
We now just have two modes for setting up a dev/test
database. This makes it easy to see these things
side-by-side, when you're trying to understand how
the two different databases get built:
# dev:
USERNAME=zulip
DBNAME=zulip
STATUS_FILE_NAME=migration_status_dev
# test:
USERNAME=zulip_test
DBNAME=zulip_test
STATUS_FILE_NAME=migration_status_test
And then we make more explicit the things that
are common between dev and test (which are
important things to understand when troubleshooting
provision-related glitches):
SEARCH_PATH=zulip,public
PASSWORD=$("$(dirname "$0")/../../scripts/get-django-setting" LOCAL_DATABASE_PASSWORD)
DBNAME_BASE=${DBNAME}_base
We lose some "generality" here, but passing in arbitrary
combinations of username/dbname/status_file to the script
would cause chaos for our digest checks, and all the different
template/base databases could cause confusion too.
We now prevent these variations:
* <hr/>
* <hr />
* <br/>
* <br />
We could enforce similar consistency for other void
tags, if we wished, but these two are particularly
prevalent.
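A minimal sketch of how such a check could look, assuming the
enforced spelling is the plain `<br>`/`<hr>` form (the regex and
message are illustrative, not the actual linter rule):

    import re

    # Illustrative only: flag the self-closing spellings of <br>
    # and <hr> so a single consistent form is used everywhere.
    SELF_CLOSED_VOID_TAG = re.compile(r"<(?:br|hr)\s*/\s*>")

    def check_line(filename: str, line_number: int, line: str) -> bool:
        if SELF_CLOSED_VOID_TAG.search(line):
            print(f"{filename}:{line_number}: write <br> or <hr> without the slash")
            return False
        return True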
Instead of figuring out the image path from the integration name in the
puppeteer script, we do it in the `generate-integration-docs-screenshot`
script and pass it as an argument to `message-screenshot.js`.