Otherwise it applies to all password-type <input>s, which is not necessarily
what we want.
(imported from commit da2bb86961f4ff1dcc48e89e51abac6dbea79548)
We now have the bar color to indicate (for most users) whether the password is
valid, so revert to the default validation behavior and don't validate before
the first blur.
(imported from commit 5c2f6e05a8796033942a2af62f244b61459ff1bb)
And scroll there on any error (previously, we would scroll only if we ended up
submitting the form).
(imported from commit 63597c4da78ac92cd5c2314d6d174d178b1caaf3)
It seems to have no effect and does not appear anywhere else in our repository
or in jquery.validate.min.js.
(imported from commit c4d2f730f3b680e15af17cefee34f6930e64ade0)
Otherwise, if you get an error, those e-mails are still around the next
time you try to invite someone.
(imported from commit b521a74f4d6c0d67271f804221f519d1aa7551ff)
This avoids tens of seconds of delay when you invite several people at
once through the web UI.
(imported from commit 75acdbdb04caf62bbb08affc7796330246d8a00e)
This fixes user-visible browser errors caused by trying to use the id
of messages in an empty message list.
One error could be triggered by trying to go to the end of your feed
with the End key during a reload.
Another could be triggered by trying to narrow to a stream or subject
using hotkeys while in an empty narrow.
(imported from commit a0e5456fd3b475aecac6eddd7104772baaf3aeb8)
This also changes the API for GET /json/subscriptions/property to
only retrieve the property for a particular stream instead of
returning all streams and their properties. We weren't using this
functionality anywhere and the change makes the API more consistent.
(imported from commit 2799aec2550fd0558e2282beb19734d60801bdb8)
I noticed that on chrome, calling narrow.deactivate() actually ended
up calling itself recursively due to the hashchange code not correctly
handling the fact that in Chrome if you set
window.location.hash = '#';
and then read out the value, you get '' back out.
(imported from commit 9b5047fbe0e2ac1846e5325d066c72306634c523)
What was happening is that if you un-narrowed immediately after
receiving a message (e.g. because you just sent it), the autoscroll
animation from the zfilt table would still be running after you return
to the home view, resulting in the viewport being scrolled to an
apparently random point in the home view (even though the pointer was
still in the right place).
This cancels the autoscroll animations whenever you do one of:
(1) hashchange (e.g. to go to the settings page)
(2) select a message (covers narrowing/unnarrowing as well as keyboard hotkeys)
(3) mousewheel scroll
since those are basically the cases where we set the viewport
scrolltop directly.
Arguably this should instead be something where we somehow detect
which scroll events are triggered by what and cancel for any scroll
event not from the animation or re-rendering, but that seems hard.
(imported from commit f776021303404c87b36241c733b3d1bcb083163b)
The previous code for adding users to default streams wouldn't do so
if the user didn't have a PreregistrationUser row.
(imported from commit 25f1383f6771319542d07660b29d891368889212)
Now that our plugin is in the Jenkins marketplace thing,
we don't need to have the user laboriously download it
from us and upload it themselves.
(imported from commit 25e9926f7f2314db8f3ea6c00c40514b6fd546c3)
For our primary measures of user engagement, messages sent by bots can
confuse the picture (e.g. a realm could be dead, but not appear to be,
because they didn't bother uninstalling their GitHub and Jenkins
hooks). So it's best to leave those out of our main stats.
(imported from commit 4d0f0e6442093daab164d0ed016fff1d1aa906c7)
When testing locally this bar sort of lies, because the actual bottleneck
is Django→S3.
In prod, our connection to S3 will supposedly be really fast so this won't
matter.
(imported from commit c9f4b4882cbfdf3bbb8180f1500f35d8481c1f39)
This allows users to drag and drop content onto the compose box, storing
their data in Amazon S3.
New dependencies:
- python-boto
(imported from commit 339874e483db5c36312c9ceae56db29da6ca0d99)
This creates a new management command, subscribe_new_users, which should be
run as a daemon process. When new users are created, an event is passed to
RabbitMQ including the following data:
* Email
* Full name
* IP address of the person who confirmed registration
* Time of registration confirmation
MailChimp strongly encourages the collection of the last two to enable
responses to abuse requests, and providing more data lowers the chance that
we could get banned from their service if complaints do occur.
To use this commit, you need to install the "postmonkey" module from
PyPI.
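For a rough sense of the shape of that event, here is a sketch of the payload; the function name and key names are illustrative assumptions, not the actual code:

import time

def build_new_user_event(email, full_name, ip_address):
    # Fields described above; the IP and timestamp help with abuse handling.
    return {
        "email": email,
        "full_name": full_name,
        "ip_address": ip_address,        # who confirmed registration
        "confirmed_at": time.time(),     # time of registration confirmation
    }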
(imported from commit 20c628c3fa8bb985aaead85a80ad3b38bf94b9dc)
Apparently it no longer coalesces adjacent blank lines in a code block (which
seems like an improvement). The new test case doesn't have adjacent blank
lines and will work on old and new versions alike (tested on staging).
(imported from commit e49902be041cf1e7d6fbe489685b966cf4eae108)
We accidentally lost this when we did the User/UserProfile merge (this
commit also deletes the old code to add the auth_user index in
do-destroy-rebuild-database).
What follows is mostly just notes for future reference, but when
deploying this change to staging, we should consider running the
following instead of using the migration directly:
CREATE UNIQUE INDEX CONCURRENTLY zephyr_userprofile_email_uniq ON zephyr_userprofile(email);
ALTER TABLE zephyr_userprofile ADD CONSTRAINT zephyr_userprofile_email_uniq UNIQUE USING INDEX zephyr_userprofile_email_uniq;
CREATE INDEX CONCURRENTLY zephyr_userprofile_email ON zephyr_userprofile(email);
But I think it might be the case that it's fine to just run it
directly, since the ALTER TABLE part seems to hang if there's an open
transaction working on a UserProfile object anyway.
(imported from commit 1bf34ce242de51e97c91c8bab86b6b273e17fb43)
This is preparatory for removing the StreamColor model, so we also set
things up so anything changing the StreamColor model changes the
Subscription model too.
The manual task is to run the copy_colors.py management command after
deployment to each of staging and prod.
(imported from commit 1be7523ca59f5266eb2c4dc2009e31209ed49635)
This allows blueslip to catch exceptions from the event handlers on
these elements in addition to the other benefits that not using
inline handlers provides.
(imported from commit 2bdcb2496c6c08fa7228a20ce6164b527cf64e41)
The close handler will be called on cancel anyway, so we don't need
to delete in the click handler.
(imported from commit 0fcf4b0d1408312a0889f2b69e01207c9c3835fa)
Previously, narrowing to a stream name that only contained digits
would throw an exception.
(imported from commit dc76877427078d70e3d5625622c665be3302c976)
Otherwise you could encounter errors if you POST to a method
with this decorator applied.
(imported from commit bcb31f336ea2a1eeee6b9e3e9dfeed1d205ae26a)
I generally don't like this sort of state variable, but I don't see a
better solution. The codepath is that when you start out on the
subscriptions page and then click one of the left sidebar links to
narrow to something:
(1) hashchanged() would call ui.change_tab
(2) ui.change_tab triggers a gear change event
(3) The ui.js gear-changed event handler updates the hash
Resulting in the hash ending up at "#". Since there's no easy way to
pass arguments through to the event handler, we just use a global
variable inside hash_change.js to track whether we're currently
handling a hashchange event.
(imported from commit 7bb905a223b5539240fc36de7896ee8074ebc62e)
We previously had 2 mechanisms for narrowing used by the left sidebar
-- the top few links used the hashchange mechanism, while the streams
links used a custom click handler. Both were buggy -- the hashchange
one hadn't been updated to just select the first unread message,
whereas the click handler didn't change tabs.
Fixes #1141.
(imported from commit 8a8af974e78cc5c33937ac0078f04a9b5452b94a)
This appears to have been caused by our code for preventing the
viewport from being recentered if you move the pointer away from the
edge of the viewport from a position near the edge, which was being
run even when it was not triggered by a scroll event.
(imported from commit 0a4b3dcca75a6e5dbf1beb77a5249bd6a9c61341)
The old directional hotkey calculation system was fragile, and because
of this, didn't scroll when you used the home/end keys.
(imported from commit dca4786de13a4ed2864600dadbf4b1a5ba848074)
...rather than embedding them into index.html.
This is only acceptable for dev, but the next commit adds an alternative
mechanism for prod.
There isn't actually a manual deployment step here. However, this commit won't
work on staging / prod without the next one (since we don't serve
zephyr/static/templates in prod).
(imported from commit dce7ddfe89e07afc3a96699bb972fd124335aa05)
Not needed for any specific reason, but we will need the .runtime.js file
eventually, and we should use a version of the library that matches the
Handlebars compiler.
(imported from commit 5600bc8d44b681999e2e5bbf04b890e2bb8477a1)
Beanstalk integration uses webhooks that use HTTP basic auth to authenticate
the sending user.
(imported from commit bd65f5b2d052a3c1eb04da64d055a3640a384892)
I think all one needs to do to deploy this commit is to run
`generate-fixtures --force` on developer laptops.
(imported from commit 34916341435fef0875b5a2c7f53c2f5606cd16cd)
When this is deployed to staging, we need to run
./manage.py logout_all_users --realm=humbughq.com
When this is deployed to prod, we need to run
./manage.py logout_all_users
(imported from commit d6c6ea4b1c347f3d9122742db23c7b67767a7349)
This is intended to be used for logging out users during our deployment of
the UserProfile merge, but it could be useful for other things too.
(imported from commit bfe896d854f997f7a4d06e5bc0f19ec5b1aa5e69)
Previously, we weren't clearing the users out of memcached (we just
killed them in the database), so in fact users were not logged out for
up to an hour after we deactivated them (until the memcached caches
expired).
(imported from commit 0f0a2f70e003c184106c73b22b876f57c1ef3371)
The associated function was moved into zephyr.lib, but the file
location was never updated.
(imported from commit 24c3348533324b0af7c52d6a121eef8b00615275)
And keep the fields updated, by copying on UserProfile creation and
updating the UserProfile object whenever we're updating the User
object; also add management commands to (1) initially ensure that they
match and (2) check that they still match (i.e., that the updating code
is working).
The copy_user_to_userprofile migration needs to be run after this is
deployed to prod.
(imported from commit 0a598d2e10b1a7a2f5c67dd5140ea4bb8e1ec0b8)
This has the nice side effect of not requiring us to trigger the
events manually in the success callbacks of our subscribe/unsubscribe
ajax calls.
(imported from commit e8d9970b708e9832d22be4803570071bacb46792)
We currently only use these events to change the autocomplete lists.
I figure that the presence list will be updated by presence events.
(imported from commit e9c1466659c4bfd463806656e0023984a4ea4177)
I can't reproduce the problem this works around anymore. If it comes back,
let's debug and figure out what's happening.
(imported from commit 26096405a93a530e449c9f1f60d8110b1bb0e96b)
We were incorrectly using User objects, rather than UserProfile
objects, for fetching Recipient objects for generated messages.
(imported from commit c3dfe52f4e0a68400e22ca49293b5bf2d6986402)
Our testing code had a number of places where it was using User
objects where it should have been using UserProfile objects --
e.g. using a User id as the type_id in a Recipient table. This commit
addresses this in the filter_by_subscriptions code paths.
(imported from commit e305bc8e2a8bdbfd04c93c59d56955e7971552af)
This way we're not directly manipulating user.password() in random
management commands.
(imported from commit e6e32ae422015ab55184d5d8111148793a8aca36)
The previous situation was bad for two reasons:
(1) It had a lot of copies of the code, some of them missing pieces:
UserProfile.objects.get(user__email__iexact=foo)
This was in particular going to be inconvenient since we are dropping
the __user part of that.
(2) It didn't take advantage of our memcached caching.
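For illustration, the centralized, cached lookup would look roughly like the sketch below; the function name, cache key, and direct use of Django's low-level cache API are assumptions (the real code presumably goes through our own caching helpers):

from django.core.cache import cache
from zephyr.models import UserProfile  # assumed module path

def get_user_profile_by_email(email):
    key = "user_profile_by_email:%s" % (email.lower(),)
    user_profile = cache.get(key)
    if user_profile is None:
        # Note: no __user part, per point (1) above.
        user_profile = UserProfile.objects.get(email__iexact=email)
        cache.set(key, user_profile)
    return user_profile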
(imported from commit 2325795f288a7cf306cdae191f5d3080aac0651a)
Only a few of them took a User as an argument anyway.
This is preparatory work for merging the User and UserProfile models.
(imported from commit 65b2bd2453597531bcf135ccf24d2a4615cd0d2a)
The previous version of our code only worked with python-requests <
1.0 (as is the case on our servers); the new version will work with
any python-requests new enough to have a .json at all.
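The version-agnostic approach is roughly the following sketch (the helper name is illustrative): in python-requests < 1.0, response.json is a property, while in 1.0+ it is a method, so we only call it if it is callable.

def extract_json(response):
    json_data = response.json
    if callable(json_data):
        # python-requests >= 1.0 makes .json a method.
        json_data = json_data()
    return json_data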
(imported from commit 77ffe3e0d890fe88776c313e0e3289aee1bb30ea)
A ticket is filed, and this error is not fatal to the UI but rather
a warning to investigate, which we will now do.
(imported from commit 3f67ec2b503e91b3921e33b89febd97790e389f1)
Before this commit, if you try to arrow around when the selected
message is outside the pointer threshold for recentering, you get a
big jump, even if you are arrowing towards the center of the viewport.
(imported from commit 5c15d5ccccdf027a8bfa8b79bf519fccbfa971d8)
We have to be careful about timing here. If Tornado fails to load
existing queues on startup then all clients will reload at once. On
the other hand, if we don't reload immediately then the client won't
get any events until the reload. For now, I've opted for the
user-friendly approach, so we need to make sure that Tornado gets a
chance to dump and reload its queues correctly.
(imported from commit 51a6ab31cb461e1e3373486dcec2e57eb12a8077)
Clients can now request to receive only certain kinds of events,
although they always receive restart events.
(imported from commit 1e72981f8fe763829ab2abde1e35f94cad5c34e4)
The new nginx configuration file needs to be copied to
/etc/nginx/humbug-include and nginx needs to be restarted when this
commit is deployed.
(imported from commit 6c43f3c2c7a6acee6a852c672c96a38bda01dd0d)
This version has several limitations that are addressed in later
commits in this series.
(imported from commit 5d452b312d4204935059c4d602af0b9a8be1a009)
When we added rabbitmq usage within Tornado, we inadvertently caused
the Tornado ioloop to be initialized in runtornado.py's imports,
before we overwrote the _poll method. The end result was that we
weren't running our instrumented Tornado poll function.
Fix this by moving that code to its own file which we import at the
top of runtornado.py, and adding comments documenting the situation so
we don't break this in some future import reorganization.
(imported from commit 016717476f10566fef4ed2b656f29f865d2084db)
Previously user_profile was a kwarg, which was inconsistent with all other
_backend functions.
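An illustrative before/after sketch (the view name and extra parameter are made up, not the actual signature):

# Before: user_profile arrived as a keyword argument.
def example_backend_old(request, stream_name, user_profile=None):
    return user_profile, stream_name

# After: user_profile is an ordinary positional argument, matching the
# other _backend functions.
def example_backend_new(request, user_profile, stream_name):
    return user_profile, stream_name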
(imported from commit 6b857bcb2c3c978079af2f6edd367c1804d51988)
This is to allow flexibility in functions that we think should be callable
via either GET or POST.
As part of this, POSTRequestMock was extended to populate the REQUEST
dict.
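For context, Django at the time exposed request.REQUEST as GET and POST merged (with POST taking precedence); the sketch below shows the shape of a mock that mirrors that, though the real POSTRequestMock may differ:

class FakePOSTRequest(object):
    # Hypothetical stand-in for POSTRequestMock.
    def __init__(self, post_data):
        self.method = "POST"
        self.GET = {}
        self.POST = dict(post_data)
        # Views callable via either GET or POST read from REQUEST.
        self.REQUEST = dict(self.GET)
        self.REQUEST.update(self.POST)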
(imported from commit b9d32d2b65ff8a25885452992cf7dd37b9664246)
This includes a process_patch_as_post decorator which enables this view
to be invoked as a PATCH on an object.
Hopefully this decorator can go away once POST values are correctly parsed
in Django for PATCH verb invocations.
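A rough sketch of what such a decorator might look like (an assumption about the implementation, not the actual code): for a PATCH request, parse the form-encoded body ourselves, since Django only does that automatically for POST.

from functools import wraps
from django.http import QueryDict

def process_patch_as_post(view_func):
    @wraps(view_func)
    def wrapper(request, *args, **kwargs):
        if request.method == "PATCH":
            # Django does not parse form data for PATCH, so do it here.
            request.POST = QueryDict(request.body, encoding=request.encoding)
        return view_func(request, *args, **kwargs)
    return wrapper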
(imported from commit 6cf9d69cfb9dea5354ea37408566146757b5be54)
This slightly reduces code duplication and in the future the {api,json}_ methods
will hopefully go away, leaving only the _backend methods.
(imported from commit 82a6e4a2ff2ba5d272068e9ff043ea47a1a8d278)
Instead we now rely on the request._client value, which we were previously
passing along to s_m_b in all but one case.
For that one case, we just modify the Request object to include the value
beforehand.
(imported from commit 542f38f94bc447149cd4d2efaa5e8f48f756725b)
Addresses a complaint brought up in our usability study.
We now hook into the "show" event on .subscription_settings elements and
do some obnoxious math to move the scrollbar the way we want.
Closes trac #1015.
(imported from commit 5d9cee1ffc242eb7b743fdccd2bd76bf0a7ba060)
This can result in a significant performance benefit because we only
need to update the columns that changed.
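Assuming this refers to something like Django's save(update_fields=...) (my assumption; the model and field below are illustrative), the pattern is roughly:

def update_pointer(user_profile, new_pointer):
    # Write only the changed column rather than re-saving the whole row.
    user_profile.pointer = new_pointer
    user_profile.save(update_fields=["pointer"])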
(imported from commit 42bef1fcc58ad79bd864f89263fe82e90743ee5b)
The policy this implements is:
* 1 week for most persistent data (Clients, etc.)
* 1 day for messages
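In concrete terms, that works out to roughly the following timeouts (constant names are illustrative):

SECONDS_PER_DAY = 60 * 60 * 24
PERSISTENT_CACHE_TIMEOUT = 7 * SECONDS_PER_DAY  # Clients and other mostly-static data
MESSAGE_CACHE_TIMEOUT = 1 * SECONDS_PER_DAY     # message cache entries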
(imported from commit d57bb2c6b9626ffa2155c6d0ef9b60827d1f2381)
This saves 2 database queries per user in the huddle when sending the
first message to a particular huddle.
(imported from commit f71aa32df846fb4b82651a93ff9608087ffcaa5a)
This is in addition to only successfully reporting a given error once
per session. Previously, if an error was triggered many times before
the ajax call to report the error returned, we'd end up making many
ajax requests to report the error.
(imported from commit 559179e3c8c3fbf03bbb091a67361d447c80b7bb)
Also improve the display of elapsed times -- we now display short times
in milliseconds for easier reading.
(imported from commit 08e1e7e6acbef48453080864946f7602a3395e7c)
Previously we had around 4 copies of the logic for deciding whether we
should publish data via a SimpleQueueClient queue, a
TornadoQueueClient queue, or directly handle the operation, which
resulted in them getting out of sync and becoming buggy (see, e.g., the
previous commit).
We need to add a lock around adding things to the queue to work around
a bug with pika's BlockingConnection.
I should note that the previous logic in some places had a bunch of
tests of the form "elif settings.TEST_SUITE" for doing the work that
would have been done by the queue processor directly; these should
have just been "else" clauses -- since we generally want that code to
run on development environments whether or not the test suite is
currently running.
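A minimal sketch of the single shared helper this describes; the function name, the client interface, and the exact lock usage are assumptions about the shape of the code, not the actual implementation:

import json
import threading

queue_lock = threading.Lock()

def queue_json_publish(queue_name, event, processor, queue_client=None):
    if queue_client is not None:
        # pika's BlockingConnection is not thread-safe, so serialize
        # access to it.
        with queue_lock:
            queue_client.publish(queue_name, json.dumps(event))
    else:
        # No queue available (e.g. development): do the work inline,
        # whether or not the test suite is running.
        processor(event)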
(imported from commit 16bdbed4fff04b1bda6fde3b16bee7359917720b)
Previously we had several files which initialized SimpleQueueClient()
for sending items to the UserActivity queue, even though those code
paths aren't used outside Tornado. This resulted in slower Tornado
startup times.
(imported from commit ad97021ec18d3927233744037c548c22db33c321)
The actual database query that we use to fill the UserMessage cache
only takes a few hundred milliseconds to run; however the process of
iterating through the results would take 3-5 seconds because the
Django ORM is not very efficient for small tables where we're only
interested in the integer values in a couple columns.
So we can save most of that Tornado startup time by just doing this
one query manually; I left the original query next to it in a comment
so it is easy to keep it all up to date as we change our product.
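A sketch of the hand-rolled query (the table name zephyr_usermessage and the helper name are assumptions):

from django.db import connection

def fetch_user_message_pairs():
    # ORM equivalent, kept for reference (slow to iterate here):
    #   UserMessage.objects.values_list('user_profile_id', 'message_id')
    with connection.cursor() as cursor:
        cursor.execute(
            "SELECT user_profile_id, message_id FROM zephyr_usermessage")
        return cursor.fetchall()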
(imported from commit ac4675bcdda5d812ebfbe211450c85ee2787ee66)
See http://bugs.python.org/issue5876 for an explanation for why this
is needed -- basically __repr__() needs to return a string, not a
unicode object in Python 2.
This causes problems on Django 1.5 because the more expressive
exception code in model.objects.get() will crash with a __repr__()
containing non-ascii unicode characters.
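A Python 2 sketch of the fix this implies (the class is illustrative): format as unicode, then encode to a byte string before returning.

class ExampleModel(object):
    def __init__(self, name):
        self.name = name  # may contain non-ASCII characters

    def __repr__(self):
        # Returning the unicode object directly would trip
        # http://bugs.python.org/issue5876 on Python 2.
        return (u"<ExampleModel: %s>" % (self.name,)).encode("utf-8")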
(imported from commit f44085e67d9d14629b821a29bbf65738f1794d6c)
We made this change for performance reasons that don't exist now that
we only render a small portion of your messages, and it causes a
distracting flicker when you scroll through messages slowly.
(imported from commit 33379320f6b90d93ec8beac17323b287f8bb2485)
Those examples make the tutorial feel much longer, and they aren't
relevant to people who aren't using Humbug to talk about code.
(imported from commit c3213775d26cf533b3d9bde691de08a53d427939)
It's not so black and white in a world where we auto-scroll at the
bottom, and we've observed that people trying Humbug over-focus on it.
(imported from commit 2057643f179d5d1666cb33438c5a513977197b37)
This is a lot cleaner, and also cuts about 50-70 ms off of page load time in
local testing (with lots of users), presumably because there's less work to be
done by the slow Django template engine.
(imported from commit 257b700238ee5d9a4ae00a53011ed5bce018124c)
This fixes tests that have been failing for me for, well, months, and
that I've been ignoring:
======================================================================
FAIL: test_successful_subscriptions_list (zephyr.tests.SubscriptionAPITest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/jesstess/dev/humbug/zephyr/tests.py", line 631, in test_successful_subscriptions_list
self.assertIsInstance(stream['name'], str)
AssertionError: u'Denmark' is not an instance of <type 'str'>
======================================================================
FAIL: test_get_stream_colors (zephyr.tests.SubscriptionPropertiesTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/jesstess/dev/humbug/zephyr/tests.py", line 515, in test_get_stream_colors
self.assertIsInstance(color, str)
AssertionError: u'#c2c2c2' is not an instance of <type 'str'>
----------------------------------------------------------------------
The more comprehensive fix to this is going through both our API and
JSON calls and ensuring that we always return unicode objects,
documenting that, and then testing that more specifically. For now,
this at least gets the tests passing.
(imported from commit ed1875ea1f66c1f1e89f80502c0d6abb323dc489)