This will allow us to substantially decrease the server-side work that
we do to support our Mirroring systems (since the personal mirrors can
request only messages that the user sent), and is also what we need to
support a single-stream Zulip widget that we embed in webpages.
(imported from commit 055f2e9a523920719815181f8fdb44d3384e4a34)
This way command-line scripts that use our `optparse` populator can
still specify a custom client without munging their `parser` object.
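A hedged sketch of what this enables; the helper names (generate_option_group,
init_from_options) and the client= keyword are assumptions about our bindings,
not a spec:

    import optparse
    import humbug

    parser = optparse.OptionParser()
    # Adds --user, --api-key, --site, etc. without touching the parser by hand.
    parser.add_option_group(humbug.generate_option_group(parser))
    (options, args) = parser.parse_args()

    # The assumed client= keyword lets the script report its own client type
    # without munging parser or options afterwards.
    client = humbug.init_from_options(options, client="MyLittleZulip/1.0")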
(imported from commit df8d28a46a4d4574523b106030dbfed2d9ac931e)
We previously sent our client type to the server as a GET/POST parameter
of "client=<something>", most commonly "client=API: Python".
We switch here to providing the same information as a User-Agent header
sent on each request, which is more standards-compliant.
Also added is some data about the platform the user is using. If your
client string was set to "MyLittleZulip/1.0", the resultant string could
look something like this:
MyLittleZulip/1.0 (Ubuntu; 12.04)
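Illustrative only: one way to assemble a User-Agent value like the above from
the configured client string plus platform data (not necessarily the exact
code in the bindings):

    import platform

    def build_user_agent(client_string):
        # Python 2-era API for distro info; fall back to the generic
        # system/release pair on non-Linux platforms.
        vendor, version, _ = platform.linux_distribution()
        if not vendor:
            vendor, version = platform.system(), platform.release()
        return "%s (%s; %s)" % (client_string, vendor, version)

    ua = build_user_agent("MyLittleZulip/1.0")  # e.g. "MyLittleZulip/1.0 (Ubuntu; 12.04)"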
(imported from commit 39fd187a8f9d4b3c9b63fc623e0836e57a4099ca)
This is designed to be run as a "change-commit" trigger. See [Perforce's
documentation][1] on how to set up a trigger for your platform.
[1]: http://www.perforce.com/perforce/r12.1/manuals/cmdref/triggers.html
Closes trac #1034.
(imported from commit d94fa4e50637ade2847a96eab8c5514de3811c24)
Since we're using the module outside of a git repository context, we
don't have a git config to reference. Instead, we'll just use the
environment variables we're passed.
(imported from commit 8ae707d5d60eb700052e0ee89e7d36c660e00bb6)
Previously, users of setuptools would get our data embedded in eggs.
Eggs are horrible things, but more importantly our package data should
be free, in a well-known albeit system-dependent path that is
independent of the package.
By specifying [install_data][1] as an alias of install, we ensure that
our data (examples, integrations, etc.) are placed in $data/share as
$DEITY intended.
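A minimal sketch of one way to express that aliasing (class names, version,
and data paths here are illustrative, not the committed setup.py):

    from setuptools import setup
    from setuptools.command.install import install as _install

    class install(_install):
        def run(self):
            _install.run(self)
            # Make sure the distutils install_data step runs, so data_files
            # end up under $prefix/share rather than inside an egg.
            self.run_command('install_data')

    setup(
        name='humbug',
        version='0.1.9',          # illustrative
        packages=['humbug'],
        cmdclass={'install': install},
        data_files=[('share/humbug/examples', ['examples/send-message'])],  # illustrative paths
    )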
Alternative suggestions included force-adding "--old-and-unmanageable",
which would invoke the distutils-style install command, but that had the
unfortunate side effect of turning off eggs and dependency resolution
altogether.
We could also use "--single-version-externally-managed", but I think
that was designed to be used by package managers, not by us.
In any case, both of the above were fragile and might break if the user
specified additional options to setup.py.
In closing, Python module management is horrible. See [this][2],
[this][3], and [this][4] for info about the status quo, and [this][5]
for information about crack to be smoked later down the road. Don't even
get me started about [PEP 427 -- Python wheels][6].
[1]: http://docs.python.org/2/distutils/commandref.html#install-data
[2]: http://lucumr.pocoo.org/2012/6/22/hate-hate-hate-everywhere/
[3]: http://stackoverflow.com/a/6522905/90777
[4]: http://python-notes.boredomandlaziness.org/en/latest/pep_ideas/core_packaging_api.html
[5]: https://python-packaging-user-guide.readthedocs.org/en/latest/future.html
[6]: http://www.python.org/dev/peps/pep-0427/
(imported from commit 6cf1bd2b8f5a60b2f02f5d11094e4a41cc5e48aa)
Due to limitations in their API, we have to poll and check for
creation and completion events.
(imported from commit be65e507fac16a7f8ad3dc57b2af9c4b98aacf39)
This requires renaming the account in Google Apps at the time we
deploy this; we'll probably want to do this during off hours to avoid
any user-visible downtime.
This also updates some related email addresses.
(imported from commit fce7629b359a4f278bbf7815c8d177a8fa0484fe)
Before, these examples weren't obviously blocking calls (they seemed
more like callback registrations, which may make more sense in the future).
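For context, a sketch of what one of these examples boils down to, assuming
the call_on_each_message helper from our bindings (credentials are
placeholders):

    import humbug

    client = humbug.Client(email="bot@example.com", api_key="0123456789abcdef")

    def print_message(message):
        print(message)

    # This call never returns: it long-polls the server and invokes the
    # callback once for each new message.
    client.call_on_each_message(print_message)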
(imported from commit 78fdf98d791b19843526437c710901d8dff62e8c)
This mirrors all the "events" in a Basecamp account onto a stream in Zulip
(default "basecamp"); it sets the topic as the calendar or project that the
event belongs to.
Unfortunately, Basecamp will not host hooks, and neither do we, so this script
is currently intended to be run by our customers, much like the Trello mirror.
(imported from commit 484bc12681a43cd01fe0189c072ab4230eb32c22)
Previously it only provided the list of all public streams; now it
allows one to request any union of the following (a sketch of such a
request appears below):
* all public streams
* all streams the user is subscribed to
(the most relevant being the union of those two, which is what we want
for the "streams" page).
Or:
* all streams in the realm (superuser only)
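A hedged sketch of what requesting such a union might look like over the
HTTP API; the endpoint path, flag names, and authentication style below are
assumptions for illustration, not the final interface:

    import requests

    resp = requests.get("https://api.humbughq.com/v1/streams",
                        params={"include_public": "true",       # all public streams
                                "include_subscribed": "true",    # plus the user's subscriptions
                                "include_all_active": "false"},  # superuser only: every stream in the realm
                        auth=("you@example.com", "0123456789abcdef"))  # assumed: email + API key
    print(resp.json())  # .json() on requests >= 1.0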
The manual task required is that when this is pushed to prod, we need
to also deploy the new sync-public-streams version to zmirror.
(imported from commit 27848b8bd136e2777f399b7d05b2fdcec35e4e21)
When we deploy this, we'll need to of course actually build and deploy
the new API tarball.
(imported from commit 03c853e8a9424a63f1c74bb83637d5a1e50a159a)
This makes our life a bit nicer if the message is super-long,
because then even when it's "condensed", we still get a link
to the actual article.
(imported from commit 32e70d29cb702ce73f6cd0c04dbc58457cd2e6b5)
Even though we support a command-line option of --user=,
it gets stored in a field called 'email'.
(imported from commit f2956524517a93187ed182caf8e2d85ccbc1a0f4)
Previously, we would return a JSONDecodeError to the user in the event
that the server returned a 500 error (or other non-JSON content).
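A hedged sketch of the shape of the fix (function and field names are
illustrative, not necessarily our exact code):

    import simplejson

    def extract_json(res):
        # If the body isn't JSON (e.g. an HTML 500 page), return a structured
        # error rather than letting the decode exception escape to the caller.
        try:
            return simplejson.loads(res.text)
        except ValueError:
            return {"result": "http-error",
                    "msg": "Unexpected error from the server",
                    "status_code": res.status_code}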
(imported from commit 1624dfec6ac65d34216f4de91e33116a54e414fa)
urlparse.urljoin(base_url, url) will drop any path inside base_url if
either the url has a leading "/" or base_url doesn't have a trailing
"/". So adjust our API bindings to ensure that doesn't happen.
(imported from commit c080ee8c04b89127888609da28afc8b388af1911)
This must be deployed after we update our running nginx configuration
to serve api.humbughq.com.
(imported from commit b5c34ebdd595f55eecd6dca6a18a37f105107bd5)
Since in the future we might want requests to add subscriptions to
include things like colors, in_home_view, etc., we're changing the
data format for the add_subscriptions API call to pass each stream as
a dictionary, giving a convenient place to put any added options.
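A sketch of the new request shape (credentials are placeholders; the optional
keys mentioned in the comment are examples of what this makes room for, not
options that exist yet):

    import humbug

    client = humbug.Client(email="you@example.com", api_key="0123456789abcdef")

    # Each stream is now a dict, so later options like
    # {"name": "social", "color": "#c2c2c2", "in_home_view": False} have a home.
    client.add_subscriptions([
        {"name": "engineering"},
        {"name": "social"},
    ])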
The manual step required here is updating the API version available in
AFS for use with the zephyr_mirror.py system.
(imported from commit 364960cca582a0658f0d334668822045c001b92c)
I believe this should require no special work on deploy, since some
grepping of logs suggests we are not currently using this API query.
(imported from commit 240086f900c6680cbc90bf6a2f334a9e1f172df6)
The --site= option is really only for internal developer use, so I
don't think we gain anything from being strict about it.
(And it doesn't help that the error message one got before this patch
was super confusing.) Fixes Trac #937.
(imported from commit 8d699982aa6830f9eae2bccd6d0c7a1e0e53dd56)
Give better examples, and rewrite option parsing to be more consistent across examples.
Make it more obvious that you can use "--user" and "--api-key" with our Python examples.
This bumps our Python bindings to v0.1.9.
(imported from commit 297468088f864b7d585e567dc45523ea681f1856)
Previously, the receive API examples in our API tarballs were broken
because we weren't including the receive API bindings that they used.
This requires our deploying the built API tarball to the prod server
when we deploy it so that the link on /api isn't broken.
(imported from commit 14ecaab34556f4e29c72f4f567d8af73c89d6297)
Currently the interface for editing messages is limited to a
command-line API tool; it's great for testing with e.g.:
./api/examples/edit-message --message=348135 --content="test $(date +%s)" --site=http://localhost:9991 --subject="test"
The next commit will add a user interface for actually doing the editing.
(imported from commit bdd408cec2946f31c2292e44f724f96ed5938791)
This post-commit hook depends on pysvn. After a transaction is completed,
a Humbug message is sent to a configurable stream with the modified repo,
actor, and commit message.
(imported from commit 75cab82d5fe993ea7c4c05be07a7b61e770aff81)
Now that we're distributing these examples, we shouldn't be promoting
the use of the "--site" option.
(imported from commit 01ded4a851dc799fa6c7e902e937c4331ed92bf8)
The previous version of our code only worked with python-requests <
1.0 (as is the case on our servers); the new version will work with
any python-requests new enough to have .json at all.
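For reference, requests < 1.0 exposed the parsed body as a .json property,
while 1.0+ made it a .json() method; a shim along these lines works with
either (a sketch, not necessarily our exact code):

    def response_json(res):
        if callable(res.json):
            return res.json()   # python-requests >= 1.0: .json is a method
        return res.json         # python-requests < 1.0: .json is a property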
(imported from commit 77ffe3e0d890fe88776c313e0e3289aee1bb30ea)
Previously we sent it always as "data", which caused problems for GET
requests where there is no request body.
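A sketch of the distinction, using requests' own params/data keywords (the
wrapper function itself is illustrative):

    import requests

    def api_request(method, url, request):
        if method == "GET":
            # Query-string parameters; GET requests have no body.
            return requests.request(method, url, params=request)
        # Everything else carries the parameters in the request body.
        return requests.request(method, url, data=request)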
(imported from commit 20084d1da1b8228cc484536ca4d6f77f547a9d78)
We also switch the Python client to use a client string of "API: Python"
to allow us to determine more easily which bindings our users are using.
(imported from commit 7216c3d150b371835f14d1bc8d81979a92e44925)
After this commit, we built an API tarball and sent it to
CUSTOMER4, and then promptly reverted the commit so that
we could continue as we had been before.
(imported from commit 662519a79edd508e7c115b451a7ec6fbdf1fc0a4)
To incorporate the site parsing fix from a couple weeks ago.
Before deploying this to prod we need to run build-api-tarball and
deploy the code to humbughq.com as usual for API releases.
(imported from commit f6711f5cc07d174c30866029032a595ecee785a3)
At Ksplice we used /usr/bin/python because we shipped dependencies as Debian /
Red Hat packages, which would be installed against the system Python. We were
also very careful to use only Python 2.3 features so that even old system
Python would still work.
None of that is true at Humbug. We expect users to install dependencies
themselves, so it's more likely that the Python in $PATH is correct. On OS X
in particular, it's common to have five broken Python installs and there's no
expectation that /usr/bin/python is the right one.
The files which aren't marked executable are not interesting to run as scripts,
so we just remove the line there. (In general it's common to have libraries
that can also be executed, to run test cases or whatever, but that's not the
case here.)
(imported from commit 437d4aee2c6e66601ad3334eefd50749cce2eca6)
We need to run build-api-tarball and release it on prod when pushing
this commit to prod.
(imported from commit 09e86500d2d208b1972c87444b4c2d56faafc8e6)
Before pushing this to prod, we need to build the 0.1.2 API tarball
and deploy it to the appropriate place on our servers.
(imported from commit ec1a07b3cc2a3e360dac32823ff7cd9de9de1da2)
Previously we were installing data files to e.g. /usr/local/example
and /usr/local/integrations, which is really not OK.
(imported from commit 0efb50412f93efabfe55443d5cac57a8ebb9fe06)
This works much better for working with staging, since rather than
needing to tell each individual tool that you're using staging, you
just specify that along with your API key (which at the moment implies
whether you should be using staging or prod).
(imported from commit c1de8e72c24f35ef2160bce5339a5f03c6e1da95)
It doesn't matter for this script, but I worry the code will be
copy-pasted into other new plugins.
(imported from commit 0fe5280af5aa05a7efc3d146f1570f9a72c62027)
This should fix the symptoms of the problem we've been having where a
few API clients using the MIT Zephyr mirroring system sometimes seem
to end up with a too-old value of last.
(imported from commit 9f2426fa6a7e8365e8d3443bfd2cce3238cc9510)
I would prefer to be testing the attribute itself rather than the
version, but it's not easy to access without an actual request object,
and I'd prefer to compute this once-and-for-all on startup, rather
than on each request, since the latter just seems fragile.
(imported from commit dd74cadb1b2359faeb3e1b482faeee4003dfad77)
Bots are not part of what we distribute, so put them in the repo root.
We also updated some of the bots to use relative path names.
(imported from commit 0471d863450712fd0cdb651f39f32e9041df52ba)
My previous commit (fbdc092029bbafea716e27fbb99fec58a6f24392)
incorrectly specified that you must have a version of python-requests
greater than 0.12.1, when it should be a >= relation since 0.12.1 is
sufficient.
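For reference, the corrected requirement as it would appear in setup.py
(the surrounding fields are illustrative):

    from setuptools import setup

    setup(
        name="humbug",
        version="0.1.4",                        # illustrative
        packages=["humbug"],
        install_requires=["requests>=0.12.1"],  # ">=", not ">": 0.12.1 itself is sufficient
    )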
(imported from commit 9f716af6dfe0ce17d982fc22d507f144e9543bec)
Previously, if users of our code put the API folder in their pyshared
they would have to import it as "humbug.humbug". By moving Humbug's API
into a directory named "humbug" and moving the API into __init__.py, you
can just "import humbug".
(imported from commit 1d2654ae57f8ecbbfe76559de267ec4889708ee8)
The old-style api.humbug is deprecated, so we rewrite the path for things
to work correctly.
(imported from commit 360282b70fab1ad9d9d517157fc4fe06f9acd6a2)
This was causing, about 10% of the time, personals being forwarded by
tabbott/extra@ATHENA.MIT.EDU to get this error:
zwrite: Field too long for buffer while sending notice to
tabbott/extra@ATHENA.MIT.EDU
We don't fully understand the cause of the problem, but this fixes it.
(imported from commit 22c39a1061cde9ad6973ef07aee7227623a3d47d)
Sometimes the very first message we send isn't received by
python-zephyr; this could be because we let a 17-message queue of
zephyrs build up before servicing them, so let's not do that.
(imported from commit 281bf1807442b6335b05c803b1a47e0a162bef4e)
This ensures that Humbug receives the verbatim text of the Nagios
body, rather than a slightly modified version. Tested by running
manually.
(imported from commit 7e0eea0b230fd8c5552860acfb7372099598f036)
The refactoring that we did a couple weeks ago to make the zephyr
mirror script restart itself automatically (by splitting it into
zephyr_mirror.py and zephyr_mirror_backend.py) had a poor interaction
with our code for killing old zephyr_mirror processes (to prevent
double-mirroring). If you manually ran two copies of the outer
mirroring script (zephyr_mirror.py), then the second one would on
startup kill the first one's zephyr_mirror_backend.py children (for
being duplicate zephyr_mirror_backend.py processes that would result
in double-mirroring). However, importantly, it did not kill the first
mirroring script's zephyr_mirror.py parent process, so the first
mirroring script would then proceed to start up new children. The
process then repeats, with the two scripts' roles reversed.
This issue didn't affect the sharded mirroring case, where I had been
doing the testing of the kill code with that refactoring, because we
don't have a version of the outer zephyr_mirror.py loop for that
situation (a consequence of it being hard to restart the threads
properly with the run_parallel API that we're using to spawn all the
children).
(imported from commit d4886ac77312a6b0ebd0d612f6fb084970bf23a2)
This is probably a lot more annoying to use, in that e.g. there are
separate log files for the two directions of the mirror, but we
haven't used these logs for much, so whatever.
(imported from commit d3bc407d90099214d242526c01cd3d3cd9d9d9bd)
Previously all that we logged was the sender, which turns out to be a
constant for humbug=>zephyr mirroring.
(imported from commit 527a3ac4b9b815a2b4d6b63c3ad92d9d5ad99a6e)
feedback-bot and zephyr_mirror will need to be updated and restarted
when this is deployed to prod.
(imported from commit fe2b524424c174bcb1b717a851a5d3815fda3f69)
Features include:
* Not forking into two processes (shells out to zwrite to send
instead). This makes life easier since we're not doing concurrent
programming.
* Eliminated a lot of hard-to-read or unnecessary debugging output.
* Adding explanatory text suggesting the likely problem for some
common sets of received messages.
* Much less code duplication.
* Support for testing a sharded zephyr_mirror script (--sharded).
* Use of the logging module to print timestamps -- makes debugging
some issues a lot easier.
* Only one sleep, and for only 10 seconds, between sending the
outgoing messages and checking that they were received.
* Support for running two copies of this script at the same time, so that
running it manually doesn't screw up Nagios.
* Passed 100 test runs in a row.
(imported from commit a3ec02ac1d1a04972e469ca30fec1790c4fb53bc)
We should be canonicalizing stream names to class names in
update_subscriptions_from_humbug, before we even decide which classes
to subscribe to; otherwise deduplication and tracking of which classes
we're already subscribed to won't work.
(imported from commit a751b6fca1022390a087516a0730ff77f13d7edf)
This causes e.g. call_on_each_message to switch to dont_block mode
after the first error.
(imported from commit b6a5a10970c987faf8017f0ddae4e0b64a513c6f)
Also fixes bugs where the retry code wouldn't work correctly if
verbose wasn't set, prints out which API query had the error, and is
more consistent about printing something to end the "..." sequences.
(imported from commit 15c1e0e4a14c5c5559b43bafe4ec268451ee04f5)
Previously, we were sending "Skipping message we got from Humbug!"
for messages we wouldn't have forwarded anyway.
(imported from commit 36df85a61336ac00e3d7913d5a417d6b42764350)
In this case, if we're configured to not forward personals, there's no
point in logging a decision not to forward one.
(imported from commit 62c37591c6a70afb6235de626b0c6a3502cbcb27)
It seems that check-mirroring was reporting a lot of spurious failures
because the setup phase of check-mirroring's receive path sometimes took
more than 3 seconds to run. So fix this:
(1) Wait a bit more than 3 seconds for the receiver to subscribe to
messages
(2) Subscribe to Humbug messages before forking (we can't do this with
zephyr.init() because python-zephyr gets totally messed up if you use
it from multiple processes with a shared initialization)
(3) Get rid of the old time.sleep(0.x) values that were intended to
make messages arrive in order -- since we're now checking that
messages correctly arrived using set(), they aren't needed.
(4) Use a single request to subscribe to both zephyr classes we need
to subscribe to (saves 1 RTT).
(imported from commit d96aef05405ce43e9a4a549de189da9a2e393875)
- The default 'default' is None
- The default 'dest' is the option name, with '-' replaced by '_'
- The default 'action' is 'store'
Not coincidentally, these defaults are correct for most of our existing code.
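Illustrated with optparse itself (option name and value are placeholders):

    import optparse

    parser = optparse.OptionParser()
    parser.add_option("--api-key")  # relies on the defaults: action='store', dest='api_key', default=None
    (options, args) = parser.parse_args(["--api-key", "0123456789abcdef"])
    print(options.api_key)                      # 0123456789abcdef
    print(parser.get_default_values().api_key)  # None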
(imported from commit 9c6078bd778324e08e1ca214a97281a7bdce08c2)