It's possible we should just eliminate this mechanism, but this fixes
a proximal problem where the multi-line get_subscribers endpoint
description was being handled incorrectly.
Travis CI's model of installing every version of postgres on the test
VM and then shutting down all the versions other than the one
requested does not work very well with apt upgrades.  The best way to
resolve this seems to be to just uninstall the versions we don't need.
The point of the lock is to prevent two deployments from happening at
the same time and racing with each other, not to prevent any future
deployments after an error happens (which is what the current
implementation does in practice).
Addresses part of #208.
We ran into a bug with the Travis CI infrastructure where postgres 9.1
is installed on the system, and so when we'd do an apt upgrade with a
new version of 9.1, the 9.1 daemon would end up getting started and
conflict with the 9.3 daemon we were trying to run.
Django's `manage.py runserver` prints a relatively low-information log
line for every request of the form:
[14/Dec/2015 00:43:06]"GET /static/js/message_list.js HTTP/1.0" 200 21969
This is pretty spammy, especially given that we already have our own
middleware printing a more detailed version of the same log lines:
2015-12-14 00:43:06,935 INFO 127.0.0.1 GET 200 0ms /static/js/message_list.js (unauth via ?)
Since runserver doesn't support controlling whether these log lines
are printed, we wrap it with a small bit of code that silences the log
lines for 200/304 responses (i.e. the uninteresting ones).
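A minimal sketch of one way to do this, by monkey-patching the
development server's request handler (the wrapper in this commit may
be structured differently):

    from django.core.servers import basehttp

    original_log_message = basehttp.WSGIRequestHandler.log_message

    def quiet_log_message(self, format, *args):
        # For normal requests the args are (request_line, status_code,
        # size); skip the uninteresting 200/304 responses and log
        # everything else (errors, redirects, etc.) as before.
        if len(args) >= 2 and str(args[1]) in ("200", "304"):
            return
        original_log_message(self, format, *args)

    basehttp.WSGIRequestHandler.log_message = quiet_log_message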
Several of these rules only apply to one of Python and JavaScript, and
separating them by language simplifies the logic and should make our
linter code more readable.
In the process, we add support for per-rule/file pair exclusions to
handle the tab exception for codehilite.py.
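To illustrate the shape of this (the names, patterns, and paths below
are made up for the example, not the linter's actual tables), a
per-language rule list where a rule can carry its own file exclusions
might look like:

    import re

    python_rules = [
        {"pattern": r"\t",
         "description": "avoid tab-based whitespace",
         # per-rule/file pair exclusion: codehilite.py legitimately
         # contains tabs (path shown here is illustrative)
         "exclude": set(["zerver/lib/bugdown/codehilite.py"])},
    ]

    def check_line(fn, line_number, line, rules):
        for rule in rules:
            if fn in rule.get("exclude", set()):
                continue
            if re.search(rule["pattern"], line):
                print("%s:%d: %s" % (fn, line_number, rule["description"]))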
The #! line processing interpreted the argument to pass to `env` as
"python2.7 -u", which obviously isn't a real program.
We fix this by setting the PYTHONUNBUFFERED environment variable
inside the program, which has the same effect.
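For illustration, a sketch of the fix (the exact value and placement
in the real script may differ):

    #!/usr/bin/env python2.7
    # `#!/usr/bin/env python2.7 -u` makes env look for a program
    # literally named "python2.7 -u" on some platforms, so instead we
    # export PYTHONUNBUFFERED early; Python processes started with this
    # environment then run unbuffered.
    import os
    os.environ["PYTHONUNBUFFERED"] = "y"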
Thanks to Dan Fedele for the bug report and suggested solution!
This test caught a few bugs where refactoring had made management
commands fail (and would have caught a few more recent ones).
Ideally we'd replace this with a more advanced test that actually
verifies that the management commands do something useful, but it's a
start.
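A minimal sketch of what such a test can look like, assuming Django's
standard test machinery (the real test may cover more than this):

    from django.core.management import get_commands, load_command_class
    from django.test import TestCase

    class TestManagementCommands(TestCase):
        def test_commands_still_load(self):
            # load_command_class() raises if a refactor broke a
            # command's imports or class definition, which is exactly
            # the class of bug described above.
            for command, app_name in get_commands().items():
                if app_name.startswith("zerver"):
                    load_command_class(app_name, command)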
The node packages 'jQuery' and 'jquery' are different: 'jQuery' is the
legacy support package that Zulip needs, so the require statements in
the tests were updated.
Travis uses node 4.0 by default and we are using 0.10, so the command to
install the correct version had to be added to the .travis.yml file.
This tests whether a new patch introduces any regressions related to
any of the Python 3 compatibility fixers we've run in the past, so
that we can make continuous forward progress on our path towards
Python 3 compatibility.
This produces error output that looks like this:
"""
Testing for additions of Python 2 patterns we've removed as part of moving towards Python 3 compatibility.
Running Python 3 compatibility test lib2to3.fixes.fix_apply
Running Python 3 compatibility test lib2to3.fixes.fix_except
diff --git a/zerver/views/__init__.py b/zerver/views/__init__.py
index b5c0102..2defd46 100644
--- a/zerver/views/__init__.py
+++ b/zerver/views/__init__.py
@@ -296,7 +296,7 @@ def accounts_register(request):
do_activate_user(user_profile)
do_change_password(user_profile, password)
do_change_full_name(user_profile, full_name)
- except UserProfile.DoesNotExist, e:
+ except UserProfile.DoesNotExist as e:
user_profile = do_create_user(email, password, realm, full_name, short_name,
prereg_user=prereg_user,
newsletter_data={"IP": request.META['REMOTE_ADDR']})
Python 3 compatibility error(s) detected! See diff above for what you need to change.
"""
Running the check-templates test fails when there are 'blocktrans'
tags in Django templates.  The fix is to add 'blocktrans' to the
is_django_block_tag function in check-templates.
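The change itself is small; a simplified sketch of the function with
'blocktrans' included (the real tag list in check-templates may
differ):

    def is_django_block_tag(tag):
        return tag in [
            'autoescape',
            'block',
            'blocktrans',  # added: {% blocktrans %}...{% endblocktrans %}
            'comment',
            'for',
            'if',
            'ifequal',
            'verbatim',
            'with',
        ]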
With this change, we are now testing the production static asset
pipeline and installation process in a new testing job (and also run
the frontend/backend tests separately).
This means that changes that break the Zulip static asset pipeline or
production installation process are more likely to fail tests. The
testing is imperfect in that it does not have proper isolation -- we
build a complete Zulip development environment and then install a
Zulip production environment on top of it, so e.g. any apt
dependencies installed for Zulip development will still be available
for the Zulip production environment. But, it's better than nothing!
A good v2 of this would be to have the production setup process just
install the minimum stuff needed to run `build-release-tarball` and
then uninstall it / clean it up so that we can do a cleaner production
installation, but that's more work.
This fixes an annoying issue where one tries to rebuild the database,
and it fails due to there being existing connections.
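One way to implement the "terminate existing connections, then drop"
step with psycopg2 (a sketch; the database name here is assumed and
the actual script may do this differently):

    import psycopg2

    conn = psycopg2.connect(dbname="postgres")
    conn.autocommit = True  # DROP DATABASE can't run in a transaction
    cursor = conn.cursor()
    cursor.execute("""
        SELECT pg_terminate_backend(pid)
        FROM pg_stat_activity
        WHERE datname = %s AND pid != pg_backend_pid()
    """, ["zulip"])
    cursor.execute("DROP DATABASE IF EXISTS zulip")
    conn.close()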
The one thing that is potentially scary about this implementation is
that it means it's now a lot easier to accidentally drop your
production database by running the wrong script; might be worth adding
a "--force" flag controlling this behavior or something.
Thanks to Nemanja Stanarevic and Neeraj Wahi for prototypes of this
implementation! They did most of the work and testing for this.
This fixes some issues that we've had where commands will fail in
confusing ways after the database is rebuilt, because data from before
the database was dropped is still in the memcached cache.
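A sketch of the idea (the real change may flush memcached
differently), assuming Django settings are already configured:

    from django.core.cache import cache

    def flush_cache_after_rebuild():
        # Anything cached before the drop refers to rows that no
        # longer exist, so clear it all rather than chase stale keys.
        cache.clear()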
Instead, build them automatically when provision the development
environment and in update-prod-static.
(imported from commit aac8dfeaafbe872c113e5f2b6bd8f655a1af36f2)
It doesn't have any sensitive data since that lives in a separate
configuration file, and it's potentially useful.
(imported from commit 094e315439f8bd23ad07a8c2bc7d9776c8c7f096)
The tarball build process runs in DEVELOPMENT mode, assuming it is run
on a dev VM (since then there is no /etc/zulip directory). Commit
d067bcfe9d71 made settings.py import local_settings_template.py in
DEVELOPMENT mode (then "not DEPLOYED"), not local_settings.py.
(imported from commit 9a08138d748dfca9c4ab8b366bee5c2fb96c25af)
Just importing zerver.lib.cache creates a file memcached_prefix that
is mode 0444, so we need to use -f or rm will prompt about whether to
remove it. Not sure why this is apparently a new issue.
(imported from commit 93c5140b66992339859e2b204c200d1dd7a35f2d)
This commit loses some indexes, unique constraints etc. that were
manually added by the old migrations. I plan to add them to a new
migration in a subsequent commit.
(imported from commit 4bcbf06080a7ad94788ac368385eac34b54623ce)
We don't need to check whether the user exists before creating it:
CREATE USER failing is fine.
(imported from commit e8b2bc5495e328ee30d15445a566c0edff2f069d)
If we run provision.py a second time, there will already be
zulip/zulip_test users, so the CREATE USER will fail and the password
won't get updated to the newly generated value. By creating the user
and setting the password in two commands, we allow the creation to
fail without affecting whether the password is set.
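Sketched in Python for illustration (the provisioning code itself may
shell out differently); the key point is that the CREATE USER is
allowed to fail while the password update always runs:

    import subprocess

    generated_password = "s3cr3t"  # stands in for the newly generated value

    def psql(sql):
        return subprocess.call(["psql", "postgres", "-c", sql])

    psql("CREATE USER zulip;")  # fails harmlessly if the role exists
    psql("ALTER USER zulip PASSWORD '%s';" % (generated_password,))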
Also the quoting for updating .pgpass was wrong.
(imported from commit 5e249813c17cb4829e4e4958e92aaa30563c5f96)
Sometimes I get the error "Selected message id not in MessageList"
when running the casper tests. I think it's probably when the test
user's home view does not contain any messages.
Ideally we would fix this in a way that guarantees that we generate
whatever messages the test suite needs...
(imported from commit 51a02da612dda88d60681b9e09cd6e6a2c39a470)
Source LOCAL_DATABASE_PASSWORD and INITIAL_PASSWORD_SALT from the secrets file.
Fix the creation of the pgpass file.
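For context, reading such values typically looks something like this
(the path and section name below are assumptions for the example):

    from six.moves import configparser

    config = configparser.RawConfigParser()
    config.read("/etc/zulip/zulip-secrets.conf")
    local_database_password = config.get("secrets", "local_database_password")
    initial_password_salt = config.get("secrets", "initial_password_salt")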
Tim's note: This will definitely break the original purpose of the
tool but it should be pretty easy to add that back as an option.
(imported from commit 8ab31ea2b7cbc80a4ad2e843a2529313fad8f5cf)
The old language was confusing because "the interface" could refer to something
like eth0, but in actuality refers to the IP/hostname to listen on.
(imported from commit 4f77d72a4dfcdbe7e7747c6228975aa68dfbe6ac)
It's been very buggy for a while, has limited usefulness compared with
unread counts, and profiling over the weekend indicates that it's very
slow.
(imported from commit 716fe47f2bbec1bd8a6e4d265ded5c64efe2ad5c)
This doesn't change the alerting UI logic, it just turns
alert_words_ui into a module and calls the setup code from settings.js
when the settings page is rendered.
(imported from commit 05f95383b046086641280f82f648be58688efe61)
Before this change, the way we stripped punctuation from tags was just
sort of messed up: we stripped the start tags one way and the end tags
another, and we had conditionals for the different flavors of tags,
instead of doing the stripping at the point where we already knew what
flavor of tag we were dealing with.
(imported from commit 60c5ebd45e21b88bbfc98ff4b43dbbc6b32b38a1)
This makes it simpler to test between two VMs by allowing you to bind
to non-localhost interfaces.
(imported from commit f70755533b52ff8c49fd916941d2210fb8c33b47)
Importing zerver.worker.queue_processors (which is needed to get the list of
workers to start) is slow because it, in turn, imports a bunch of stuff. So we
move the process of starting up queue processing workers into another script
that gets started in parallel with everything else.
(imported from commit 839bada6dc7b93825c69b0d8fd9fbe2de75eabee)
Add a patch_global helper that changes a global and then resets it to
the original value after a test file is complete.
(imported from commit 1b65ff6ea8693ad61b7f18f35dafa942429252a8)
In the early days of the node tests we didn't have an index.js
driver, so each test set up its own environment.  That ship
has long since sailed, so now we just require assert in index.js
as a global and make it so that the linter doesn't complain.
(imported from commit 1ded3d330ff40603cf4dd7c5578f6a47088d7cc8)
I apparently screwed up when backing up the process_loaded_for_unread
move in a way that just lost the function.
(imported from commit 91dfcf1abc85d439274cb8b0be380e9230942ebb)
The production database has a public schema. I thought we had dropped it, but
apparently not. We should match what exists in prod either way.
(imported from commit 1bf956360029ebbd59afc3cc30fca9a859343adf)
We do this by creating a new zulip{_test}_base database that only has the zulip
schema and the tsearch_extras extension. We then use that as a template when
creating zulip{_test}.
(imported from commit 8adb4b98410e4042a0187902e89c99561eac8c8f)