When you use nyc, its code instrumentation transforms
the code so that line numbers and columns no longer
make sense, and the long stack trace is likely to cause
more confusion than convenience.
We want to encourage a workflow where you debug your
node tests using the normal (and much quicker) mode
before running `--coverage`.
`do_delete_users` had two bugs:

1. Creating the replacement dummy users with `active=True`.
2. Creating the replacement dummy users with an email domain
   set to `realm.uri`, which may not be a valid email domain.
Prior commits fixed the bugs, and this migration fixes the pre-existing
objects.
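For context, the repair can be done with a data migration roughly
like this (a minimal sketch; the model, field, and domain names
here are assumptions, not the actual Zulip migration):

```python
from django.db import migrations


def fix_dummy_users(apps, schema_editor):
    UserProfile = apps.get_model("zerver", "UserProfile")
    # Hypothetical field names: deactivate the dummy users and move
    # their emails onto a reserved placeholder domain that is always
    # a syntactically valid email domain.
    for user in UserProfile.objects.filter(is_mirror_dummy=True, is_active=True):
        user.is_active = False
        user.email = f"deleteduser{user.id}@zulip.invalid"
        user.save(update_fields=["is_active", "email"])


class Migration(migrations.Migration):
    dependencies = [("zerver", "0001_previous_migration")]  # placeholder
    operations = [migrations.RunPython(fix_dummy_users)]
```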
The existing callsites of this are via `source` or being inlined into
the startup of a new host; in both of these cases, the surrounding
script is already `set -eu`. However, if run as a standalone tool, it
should also configure itself to catch checksum failures and other
problems.
This is a fairly straightforward extraction.
It's good to test this with Iago, and then go into
Manage Streams and add/remove subscribers for a stream
like devel.
I copy/pasted two small functions that will soon
diverge from stream_edit. The get_stream_id function
will either use a module variable (since we're
generally only editing subscribers for one stream, and
we already have the singleton assumption with
`input_pill`) or a stricter CSS selector. And then
get_sub_for_target depends on get_stream_id. We may not
always need full subs, anyway, and when we adapt some
of this code for creating streams, things are likely to
change.
I stopped exporting a couple functions that have no
callers outside of this module.
The main entry point for the module is
enable_subscriber_management.
We continue to export invite_user_to_stream and
remove_user_from_stream, which should possibly be just
pulled into their own module to lessen some
dependencies, but they don't have too much baggage,
since they just wrap channel calls.
Appending to bytes in a loop leads to a quadratic slowdown since
Python doesn’t optimize this for bytes like it does for str.
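For illustration, the difference looks like this (`chunks` is just a
stand-in for whatever is being accumulated):

```python
chunks = [b"x" * 100] * 10_000

# Quadratic: each += copies the entire accumulated buffer, since
# CPython's in-place concatenation optimization covers str, not bytes.
out = b""
for chunk in chunks:
    out += chunk

# Linear: collect the pieces and join once at the end...
out = b"".join(chunks)

# ...or accumulate into a mutable bytearray and convert at the end.
buf = bytearray()
for chunk in chunks:
    buf += chunk
out = bytes(buf)
```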
Signed-off-by: Anders Kaseorg <anders@zulip.com>
Given that these values are UUIDs, it's better to use UUIDField,
which is meant for exactly that, rather than an arbitrary CharField.
This requires modifying some tests to use valid uuids.
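A minimal sketch of the pattern (the model and field names are
placeholders, not the actual ones changed here):

```python
import uuid

from django.db import models


class Device(models.Model):
    # Before: token = models.CharField(max_length=36, unique=True)
    # After: the ORM and database now reject values that aren't real
    # UUIDs, which is why tests must switch to valid ones.
    token = models.UUIDField(default=uuid.uuid4, unique=True)
```

Tests that previously stuffed arbitrary strings into the field now
need something like `uuid.uuid4()` instead.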
It should not use the configured Zulip username, but should instead
pull from the login user (likely `nagios`), or an explicitly
provided alternate PostgreSQL username. Failure to do so results
in Nagios failures because the `nagios` login does not have
permission to authenticate as the `zulip` PostgreSQL user.
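A sketch of the username-selection logic (the flag name and defaults
here are illustrative, not the script's exact interface):

```python
import argparse
import getpass

import psycopg2

parser = argparse.ArgumentParser()
parser.add_argument("--postgres-user", help="explicit PostgreSQL username")
args = parser.parse_args()

# Prefer an explicitly provided username; otherwise fall back to the
# login user (e.g. `nagios`), never the configured Zulip username.
pg_user = args.postgres_user or getpass.getuser()
conn = psycopg2.connect(dbname="zulip", user=pg_user)
```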
This requires CI changes, as the install tests install as the `zulip`
login username, which allowed Nagios tests to pass previously; with
the custom database and username, however, they must be passed to
process_fts_updates explicitly when validating the install.
I have looked at ~100 errors in the last week as part
of fixing the tooling, and it's quite common to want to just
see what the improved file would look like. Now I show the
desired output with line numbers.
I also try to encourage devs to scroll up, since newbies
often don't do that for some reason when confronted with
error output.
Finally, I add some color. I try to repeat myself without
color for certain things in case colors on certain
backgrounds are hard to read.
A fast way to test this is to just break up a long tag
into two lines.
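The repeat-without-color idea looks roughly like this (a hypothetical
sketch, not the tool's exact output):

```python
RED = "\033[91m"
ENDC = "\033[0m"


def print_error(fn: str, line: int, msg: str) -> None:
    # Colored headline for scannability...
    print(f"{RED}ERROR{ENDC} in {fn}")
    # ...then the key facts again in plain text, in case the color
    # is hard to read on the user's background.
    print(f"error: {fn}:{line}: {msg}")
```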
`Press Enter to send` used to hide the `Send` button; we remove
that behaviour.

We show the current state of the `Enter` hotkey action via text
below the `Send` button, which toggles the behaviour on click.
get_object_from_key should be used when trying to fetch a Confirmation
object. There are some places that need to make
Confirmation.objects.filter(...) queries, so we can't completely ban the
pattern, but we can ban .get(...) and
.filter(..., confirmation_key=..., ...).
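To illustrate (simplified; the exact signature of
get_object_from_key may differ, and `key` and `realm` stand in for
real values):

```python
from confirmation.models import Confirmation, get_object_from_key

# Preferred: centralizes key validation and expiry handling.
obj = get_object_from_key(key, Confirmation.USER_REGISTRATION)

# Banned by the new lint rule: raw lookups skip those checks.
obj = Confirmation.objects.get(confirmation_key=key)
obj = Confirmation.objects.filter(realm=realm, confirmation_key=key).first()
```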
Now we only tokenize the file once, and we pass
**validated** tokens to the pretty printer.
There are a few reasons for this:
* It obviously saves a lot of extra computation
just in terms of tokenization.
* It allows our validator to add fields
to the Token objects that help the pretty
printer (see the sketch below).
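A minimal sketch of the new shape (illustrative names; the real
tokenizer and printer are much richer):

```python
from dataclasses import dataclass


@dataclass
class Token:
    kind: str
    s: str
    indent: str = ""  # example of a field the validator fills in


def tokenize(text: str) -> list[Token]:
    # Stand-in tokenizer: one token per line.
    return [Token(kind="text", s=line) for line in text.splitlines(keepends=True)]


def validate(text: str) -> list[Token]:
    tokens = tokenize(text)
    for token in tokens:
        # Annotate each token while validating...
        token.indent = token.s[: len(token.s) - len(token.s.lstrip(" "))]
    return tokens


def pretty_print(text: str) -> str:
    # ...so the printer consumes validated tokens; no second tokenize pass.
    return "".join(token.s for token in validate(text))
```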
I also removed/tweaked a lot of legacy tests for
pretty_print.py that were exercising bizarrely
formatted HTML that we now simply ban during the
validation phase.
This accomplishes a few things:
* lighten the load for the main validation loop
* defer indentation checks until we are sure the author
even knows how to match up tags
* add some info to the Token objects that we may soon
consume in our pretty-printer
We now complain about programmers who don't use
4-space indents in template files, rather than
letting the pretty printer fix them.
This is partly just to simplify the pretty printer
code (in future commits), but it also makes the
symptom more obvious to newbie developers. They
are probably just as able to react to the direct
error messages as they are to figure out how
to read diffs from the pretty printer and grok
the `--fix` syntax. And once they learn the convention
and configure their editor, it should then be a
one-time problem.
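A sketch of erroring rather than fixing (simplified; the real
indentation rules have more exceptions):

```python
def check_indentation(fn: str, text: str) -> None:
    # Complain about non-4-space indents instead of silently fixing
    # them, so the symptom is obvious to the developer.
    for i, line in enumerate(text.splitlines(), start=1):
        indent = len(line) - len(line.lstrip(" "))
        if line.strip() and indent % 4 != 0:
            raise RuntimeError(
                f"{fn}:{i}: indent of {indent} is not a multiple of 4 spaces"
            )
```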
We now create tokens for whitespace and text, such that you
could rebuild the template file with "".join(token.s for
token in tokens).
I also fixed a few bugs related to not parsing
whitespace-control tokens.
We no longer ignore template variables, although we could do
a lot better at validating them.
The most immediate use case for the more thorough parser is
to simplify the pretty printer, but it should also make it
less likely for us to skip over new template constructs
(i.e. the tool will fail hard rather than act strangely).
Note that this speeds up the tool by almost 3x, which may be
slightly surprising considering we are building more tokens.
The reason is that we are now munching efficiently through
big chunks of whitespace and text at a time, rather than
checking each individual character to see if it starts one
of the N other token types.
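A toy version of the munching strategy, including the rebuild
property (greatly simplified; the real tokenizer knows about many
more token types):

```python
import re
from dataclasses import dataclass


@dataclass
class Token:
    kind: str
    s: str


def tokenize(text: str) -> list[Token]:
    tokens = []
    pos = 0
    while pos < len(text):
        if text[pos] == "<":
            # Toy handling; the real parser is far more careful here.
            end = text.index(">", pos) + 1
            tokens.append(Token("html_tag", text[pos:end]))
        elif text[pos].isspace():
            # Munch the whole whitespace run in one step...
            end = pos + re.match(r"\s+", text[pos:]).end()
            tokens.append(Token("whitespace", text[pos:end]))
        else:
            # ...and likewise a whole run of text.
            end = pos + re.match(r"[^<\s]+", text[pos:]).end()
            tokens.append(Token("text", text[pos:end]))
        pos = end
    # Nothing is skipped, so the file can be rebuilt from the tokens.
    assert "".join(token.s for token in tokens) == text
    return tokens
```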
The changes to the pretty_print module here are a bit ugly,
but they should mostly be made irrelevant in subsequent
commits.