Source LOCAL_DATABASE_PASSWORD and INITIAL_PASSWORD_SALT from the secrets file.
Fix the creation of the pgpass file.
Tim's note: This will definitely break the original purpose of the
tool but it should be pretty easy to add that back as an option.
(imported from commit 8ab31ea2b7cbc80a4ad2e843a2529313fad8f5cf)
The old language was confusing because "the interface" could refer to something
like eth0, but it actually refers to the IP/hostname to listen on.
(imported from commit 4f77d72a4dfcdbe7e7747c6228975aa68dfbe6ac)
It's been very buggy for a while, has limited usefulness compared with
unread counts, and profiling over the weekend indicates that it's very
slow.
(imported from commit 716fe47f2bbec1bd8a6e4d265ded5c64efe2ad5c)
This doesn't change the alerting UI logic; it just turns
alert_words_ui into a module and calls the setup code from settings.js
when the settings page is rendered.
(imported from commit 05f95383b046086641280f82f648be58688efe61)
Before this change, the way we stripped punctuation from tags
was inconsistent: we stripped the start tags one way and the end
tags another, and we had conditionals for the different flavors
of tags, instead of doing the stripping once we already knew what
flavor of tag we were dealing with.
(imported from commit 60c5ebd45e21b88bbfc98ff4b43dbbc6b32b38a1)
This makes it simpler to test between two VMs by allowing you to bind to
non-localhost interfaces.
(imported from commit f70755533b52ff8c49fd916941d2210fb8c33b47)
Importing zerver.worker.queue_processors (which is needed to get the list of
workers to start) is slow because it, in turn, imports a bunch of stuff. So we
move the process of starting up queue processing workers into another script
that gets started in parallel with everything else.
(imported from commit 839bada6dc7b93825c69b0d8fd9fbe2de75eabee)
Add a patch_global helper that changes a global and then resets it to the
original value after a test file completes.
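For illustration, such a helper amounts to something like the sketch
below; the restore hook and names here are assumptions, not the actual
implementation.

var pending_restores = [];

function patch_global(obj, name, value) {
    // Stash the original value so it can be restored once the
    // current test file finishes.
    var original = obj[name];
    obj[name] = value;
    pending_restores.push(function () {
        obj[name] = original;
    });
}

// Called by the test driver after each test file completes (assumption).
function restore_globals() {
    pending_restores.forEach(function (restore) {
        restore();
    });
    pending_restores = [];
}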
(imported from commit 1b65ff6ea8693ad61b7f18f35dafa942429252a8)
In the early days of the node tests we didn't have an index.js
driver, so each test set up its own environment. That ship
has long since sailed, so now we just require assert in index.js
as a global and configure the linter so that it doesn't complain.
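Concretely, the index.js setup boils down to something like this
sketch (the exact lines and linter configuration may differ):

// Make assert available to every test file as a global, instead of
// each file doing its own require('assert').
global.assert = require('assert');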
(imported from commit 1ded3d330ff40603cf4dd7c5578f6a47088d7cc8)
I apparently screwed up when backing up the process_loaded_for_unread
move in a way that just lost the function.
(imported from commit 91dfcf1abc85d439274cb8b0be380e9230942ebb)
The production database has a public schema. I thought we had dropped it, but
apparently not. We should match what exists in prod either way.
(imported from commit 1bf956360029ebbd59afc3cc30fca9a859343adf)
We do this by creating a new zulip{_test}_base database that only has the zulip
schema and the tsearch_extras extension. We then use that as a template when
creating zulip{_test}.
(imported from commit 8adb4b98410e4042a0187902e89c99561eac8c8f)
These classes are in test_hooks.py now. They still run as part of
the regular suite, so this is just to make it easier to navigate the
files.
JiraHookTests
BeanstalkHookTests
GithubV1HookTests
GithubV2HookTests
PivotalV3HookTests
PivotalV5HookTests
NewRelicHookTests
StashHookTests
FreshdeskHookTests
ZenDeskHookTests
(imported from commit 26a9572dd5170f9516e739d587a119bd1f87959a)
We changed our endpoint from "get_public_streams" in August, but the
API call whitelist was not updated.
(imported from commit 293c1da8e43c24ad8188ed2096a47992ad3a2c89)
Have run-dev.py watch for template changes by calling
`./tools/compile-handlebars-templates forever`. This doesn't
have much effect until the subsequent commit, but it does
alert users to broken templates.
(imported from commit 3fa5f403cabe0057f6f43180f1d09db669d98682)
The "forever" option causes the tool to continue looking for
template changes and, when they happen, to recompile them.
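As a rough sketch, the forever mode amounts to a polling loop like the
one below; the polling approach, interval, and the compile_all helper
are assumptions, not the tool's actual internals.

var fs = require('fs');

function watch_forever(template_dir, compile_all) {
    var last_mtimes = {};
    setInterval(function () {
        var changed = false;
        fs.readdirSync(template_dir).forEach(function (name) {
            var mtime = fs.statSync(template_dir + '/' + name).mtime.getTime();
            if (last_mtimes[name] !== mtime) {
                last_mtimes[name] = mtime;
                changed = true;
            }
        });
        if (changed) {
            compile_all();
        }
    }, 500); // check for changed templates every half second
}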
(imported from commit 2fa719a205f02c7c90cc071f99252148a888654f)
Before this change, we were compiling handlebars templates but then
still sometimes used the copy of compiled.js from the previous deploy,
which made zero sense. Since compiling is super fast, we continue to
compile during every deployment, and now we actually use the results
of compiling. Forcing a recompile every time avoids pitfalls like
failing to notice deleted files or failing to notice a Handlebars
upgrade.
(imported from commit 675932428ec420bfe0fd5a5c748a85600206764c)
postgres-init-db was expecting to be located two directories below the
project root. It is only one directory below the root of the project, so
remove one set of `..`.
(imported from commit 228b47ac2539caf2b6cd12a1b5f399534cf0c866)
This replaces the --noworkers option; the new --minimal option
starts a couple "essential" workers.
(imported from commit 4ca08709052c47257bc0448e51760edb4969d92e)
This allows us to avoid a circular import when importing models.py
from inside bugdown for the realm-filters-from-database branch.
(imported from commit 7de85b54243132ade6818b080abdc8c5e8ad84f5)
There are now 2 cases for narrowing:
1. We narrowed, but only backwards in time (i.e. no unread messages were
read). In this case, try to go back to exactly where we were before
narrowing. This behavior is unchanged.
2. We read some unread messages in a narrow. Instead of going back to
where we were before the narrow, go to our first unread message (or
the bottom of the feed, if there are no unread messages). This is new.
This means that after catching up through the sidebar, on returning
home you'll be at the bottom of your feed.
Searching for the first unread message in a message list with 40,000
messages only takes 17ms according to:
function timeit() {
    var t0 = new Date().getTime();
    _.find(current_msg_list.all(), unread.message_unread);
    var t1 = new Date().getTime();
    console.log('Find first unread: ' + (t1 - t0) + ' ms');
}
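For case 2, the selection logic is then roughly the sketch below
(select_id and last are assumed message-list methods here):

var first_unread = _.find(current_msg_list.all(), unread.message_unread);
if (first_unread !== undefined) {
    // Jump to the first unread message.
    current_msg_list.select_id(first_unread.id);
} else {
    // Nothing unread, so go to the bottom of the feed.
    current_msg_list.select_id(current_msg_list.last().id);
}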
(imported from commit 87c467578a2cced0aa976d8ae2924371b85d2445)
This basically prints out a template JSON data structure to
be used with a handlebars template that you specify on the command
line. (You can actually supply multiple files, too.)
Example usage:
$ ./tools/get-handlebar-vars static/templates/tab_bar.handlebars
=== static/templates/tab_bar.handlebars
{
    "tabs": [
        {
            "hash": "",
            "title": "",
            "active": "",
            "icon": true,
            "data": "",
            "cls": ""
        }
    ]
}
(imported from commit d7239fcae7d94038fa0e4b34c8b1208a1070ecbb)
This uses git ls-files -m, which will show modified and added files,
but it doesn't seem to show staged files, so buyer beware.
(imported from commit 6ecc1d5ee628deae17197addf5586f1f6bcd4b9c)
If you use persistent ssh connections, ssh'ing as admin will cause an sshd
process to hang around on the server, preventing us from deleting the admin
account. Therefore, we disable persistent connections for that ssh connection.
(imported from commit 2d043768417d20ef2f12695475a20b74bf3374de)
This requires a puppet apply on each of staging and prod0 to update
the nginx configuration to support the new URL when it is deployed.
(imported from commit a35a71a563fd1daca0d3ea4ec6874c5719a8564f)
When you upload a 2nd avatar to Zulip, the URL doesn't actually
change, so even new messages can show the old avatar, if your
browser is caching. We work against the cache by having the
"stamp" argument, which we vary at reload time and also when
we upload the new avatar. The browser still benefits from
cached images as new messages come in.
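Sketched out, the idea looks something like this; the parameter name
and helpers below are assumptions based on the description above.

var avatar_stamp = 0;

function update_avatar_stamp() {
    // Called at reload time and after uploading a new avatar.
    avatar_stamp += 1;
}

function avatar_url_with_stamp(base_url) {
    // The browser caches each stamped URL, but treats the URL as new
    // whenever the stamp changes.
    return base_url + '?stamp=' + avatar_stamp;
}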
(imported from commit 84869c8d7f251c9f2498026a5e9e3b2451784879)
The check-handlebars-templates script now looks at most of our
back end templates to try and find imbalanced tags. This commit
fixes a bunch of the existing templates.
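Conceptually, the check is a stack-based scan along the lines of this
sketch (a simplification; the real script also has to cope with
template syntax and other edge cases):

function check_balanced_tags(html) {
    // Push open tags, pop on close tags, and flag any mismatch.
    var tag_re = /<(\/?)([a-zA-Z0-9]+)[^>]*?(\/?)>/g;
    var void_tags = {br: true, hr: true, img: true, input: true, meta: true, link: true};
    var stack = [];
    var match;
    while ((match = tag_re.exec(html)) !== null) {
        var is_close = match[1] === '/';
        var name = match[2].toLowerCase();
        var self_closing = match[3] === '/';
        if (void_tags[name] || self_closing) {
            continue;
        }
        if (!is_close) {
            stack.push(name);
        } else if (stack.pop() !== name) {
            return false; // unbalanced or mis-nested
        }
    }
    return stack.length === 0;
}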
(imported from commit fad4a5d85d68160370dd588b41d6f125f64d198f)
Because git < 1.8.1 ignores lines with trailing slashes, and
1.8.1.1 through 1.8.1.6 ignore lines without them!
(imported from commit 8139a742f4a52ccb1bce4e06fb24c9626fdb01f2)
update-prod-static needs DEBUG=False. This also replaces our
local_settings.py before generating anything included in the tarball.
(imported from commit 890cd9d1a44acfd2c20e1662e0c68132c633d1b3)
We still need it in integrations, because those don't require Python
2.7, but we don't need it in any of our code that runs on internal
servers.
(imported from commit 3c340567f1a372dcb4206c6af9a6e5e18005b1b8)
The corporate "app" is not a full-fledged Django app, but it has
a urls.py and a templates directory. This commit creates the app
and moves the jobs pages into it. Localserver deployments will
not see any of the corporate code.
(imported from commit 35889c3cf92329258c30741fdfa564769a4fac1a)
The main use-case here is for ensuring that we don't deploy to master
while doing demos.
(imported from commit d3c8ba502052b352291548200032f39e0743b774)
Run the following commands as root before deploying this branch:
# /root/zulip/tools/migrate-server-config
# rm /etc/zulip/machinetype /etc/zulip/server /etc/zulip/local /etc/humbug-machinetype /etc/humbug-server /etc/humbug-local
(imported from commit aa7dcc50d2f4792ce33834f14761e76512fca252)
This moves the list of removed files from .gitattributes to
tools/build-local-server-tarball because static/ and tools/ are
necessary for update-prod-static, and it seemed best to keep the
entire list in one place.
(imported from commit 2a447cbde29e90d776da43bb333650a40d4d363c)
update-deployment has been replaced by upgrade-zulip for local server
instances, since those won't be running off a git repository, and
update-prod-static won't be needed since we plan on shipping minified
JavaScript.
When we deploy this, the deployment will fail, and then we'll need to
update the git checkout from which post-receive runs on git.zulip.net.
(imported from commit 86aaedbab09c60ae86ac1d0ae492d0d1bc45569f)
I switched narrow.by_subject and narrow.by_recipient to use the all_msg_list
instead of current_msg_list, since we wanted to be able to narrow to messages
specifically not in the current_msg_list. However, in searches which revealed
old messages outside the range of all_msg_list (which only has a single contiguous range),
this broke narrowing.
Let's use msg_metadata_cache instead.
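Roughly, that changes the lookup from scanning a message list to a
direct cache lookup, as in this sketch (the cache being keyed by
message id and the narrow call itself are assumptions):

exports.by_subject = function (target_id, opts) {
    // Direct lookup instead of searching a message list that may not
    // contain old messages revealed by a search.
    var original = msg_metadata_cache[target_id];
    if (original === undefined) {
        return;
    }
    narrow_to_stream_and_subject(original, opts); // placeholder for the real narrow call
};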
(imported from commit 427f717484b4ae83d9bb4cc6e51ce17177d037fe)
Looking at the historical data, fewer than 50% of active users have
completed the checklist, which means that it is just persistent
clutter. We also have other better ways of encouraging people to send
traffic and get the apps now.
This commit removes both the frontend UI and backend work but leaves
the db row for now for the historical data.
(imported from commit e8f5780be37bbc75f794fb118e4dd41d8811f2bf)
We need to maintain this by labeling, in the .gitattributes file, the
files that we don't want to ship with our local server tarballs.
(imported from commit e29f38c477a4cdfd80fbb8e4e95c611b34f3b212)
Check that settings.html has at least balanced tags, and
automate the checking of those tags.
(imported from commit 35e9be269caa211803d64f2b54cb0287e13707b3)
We need to do a puppet apply immediately after deploying this, or our
rabbitmq consumer state files will become stale.
(imported from commit 18696a5a8b1ff431425d1f71c208acc9bf0694f2)
This requires no changes in production, but is tagged as manual to
remind developers that they need to edit and run the tools/migrate-db
script to fix up their local database instances.
(imported from commit fbf764fb61592ef994d6d2ad56edad65ff01f14b)
This commit must be simultaneously deployed on both staging and
prod0. It also requires completely taking down the app.
To deploy these changes, do:
* check out this commit at /root/zulip on postgres0, postgres1, staging, and prod0
* stop the process_fts_updates job on postgres0 and postgres1
* stop the app on staging and prod0
* do a puppet apply on postgres0, postgres1, staging, and prod0
* move the new client certificates into place on staging and app
* move the new server certificates into place on postgres0 and postgres1
* reload the database config on postgres0 and postgres1 (this might
actually require a restart)
* run tools/migrate-db on postgres0 as root
* do a deploy through this commit on staging and prod0
* start the process_fts_updates job on postgres0 and postgres1
* do a puppet apply on nagios
(imported from commit 819bdd14326c1425e2d3041a491a8ca3b9716506)
New dependency: sockjs-tornado
One known limitation is that we don't clean up sessions for
non-websockets transports. This is a bug in Tornado, so I'm going to
look at upgrading us to the latest version:
https://github.com/mrjoes/sockjs-tornado/issues/47
(imported from commit 31cdb7596dd5ee094ab006c31757db17dca8899b)
This may require just doing an mv on the home directory, plus changing
the home directory in /etc/passwd. It should of course be done carefully.
(imported from commit 660997d897ee6d33563af74f0fc5d4267a911755)
This requires doing an `mv` and then `puppet apply` on each of staging
and prod as part of the deployment process.
(imported from commit 5d0be64a3846f7151d2036d2e0b31049bc1c2dd2)
You can immediately apply the changes by passing '-f' or '--force' as
the first argument.
(imported from commit 91f00f6afd3c42e2b11f60e92fc20962bc952f0d)
This temporarily breaks the rabbitmq consumer checks for the
user_activity and notify_tornado queues because their state files
were renamed to match their queue names. It will be fixed for
staging in the next commit.
(imported from commit a6aaa330a1134d8ddffe8f4959deb12b219f241a)
The minify logic doesn't have an easy way to detect that you
removed a file since the last deployment.
(imported from commit 50d05fcdad382a586073c06d29d279433d1bba81)
people_list and people_dict include the feedback bot and anyone you've
cross-realm PM'd with. Useful for autocomplete, but not for admin and
stream settings views.
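A sketch of the kind of filtering this implies for those views (the
fields and realm check below are assumptions, not the actual change):

function realm_people() {
    // Restrict to human users in the current realm; people_list itself
    // still keeps the feedback bot and cross-realm users for autocomplete.
    return _.filter(people_list, function (person) {
        return !person.is_bot                                     // assumed field
            && person.email.split('@')[1] === page_params.domain; // assumed realm check
    });
}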
Fixes the UI part of Trac #1772.
(imported from commit cdefd4e86980447aad5190e7fc8ae3666d66e3c3)