No change to behavior. non_mit_mailing_list never returned False, so it was
never possible to reach the line "Otherwise, the user is an MIT mailing
list, and .."
send_event() expects a list of user ids (ints) except for the special case
of messages. This commit:
1. Fixes this in the call to send_event() in do_send_typing_notification()
2. Renames the variables in do_send_typing_notification() to better reflect
their content (for example, recipient_ids instead of recipients).
3. Renames the id field in the dicts sent in the typing event body (sender,
recipients) to user_id.
4. Adds assertions to the tests to verify that the tornado event user ids
are the same as the recipients in the event body.
5. Adds assertions to the tests to verify that the tornado event user
ids and the recipient user ids (in the event body) are the same as the
expected user ids (obtained from the emails using
get_user_profile_by_email)
6. Changes all assertTrues to assertEquals in the tests
This fixes #2151.
This makes it possible to configure only certain authentication
methods to be enabled on a per-realm basis.
Note that the authentication_methods_dict function (which checks what
backends are supported on the realm) requires an in function import
due to a circular dependency.
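The in-function import pattern it uses looks roughly like this (the module and
helper names below are made up for illustration, not the real Zulip ones):

    def authentication_methods_dict(realm):
        # Importing at module level would create an import cycle, because
        # the backends module also imports this one; importing inside the
        # function defers the lookup until the first call.
        from myproject import auth_backends  # hypothetical module
        return {
            name: auth_backends.enabled_for_realm(realm, name)  # hypothetical helper
            for name in auth_backends.SUPPORTED_BACKEND_NAMES   # hypothetical constant
        }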
We recently made it so that a cross-realm bot can only send
messages to one realm at a time. (It can send to a realm
outside of its official realm, but only one of them.) This
test adds coverage for that.
Disallow Realm.string_id's like "streams", "about", and several hundred
others. Also restrict string_id's to be at least 3 characters long, and only
use characters in [a-z0-9-].
Does not restrict realms created by the create_realm.py management command.
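A rough sketch of the kind of validation described above (the disallowed list
here is just a stand-in for the real, much longer one):

    import re

    DISALLOWED_STRING_IDS = {"streams", "about", "api", "static"}  # abbreviated example

    def check_string_id(string_id):
        # At least 3 characters, using only lowercase letters, digits, and hyphens.
        if not re.match(r'^[a-z0-9-]{3,}$', string_id):
            raise ValueError("string_id must be at least 3 characters of [a-z0-9-]")
        if string_id in DISALLOWED_STRING_IDS:
            raise ValueError("This string_id is reserved")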
Before it was in UserSignUpTest, now it is in RealmCreationTest. The diff
makes it look like test_user_default_language is the target of the move,
but it isn't.
If a stream is public, we now send notifications to all realm users
if the name or description of the stream changes. For private
streams, the behavior remains the same.
We do this by introducing a method called
can_access_stream_user_ids().
(showell helped with this fix)
Fixes #2195
Previously, we set restrict_to_domain and invite_required differently
depending on whether we were setting up a community or a corporate
realm. Setting restrict_to_domain requires validation on the domain of the
user's email, which is messy in the web realm creation flow, since we
validate the user's email before knowing whether the user intends to set up
a corporate or community realm. The simplest solution is to have the realm
creation flow impose as few restrictions as possible (community defaults),
and then worry about restrict_to_domain etc. after the user is already in.
We set the test suite to explicitly use the old defaults, since several of
the tests depend on them.
This commit adds a database migration.
This test seems intended to verify registration in the case of a
unique completely open domain; but because of the mit.edu realm, it
instead tested that a logic bug in the non-subdomains case was
present.
We now send dictionaries for cross-realm bots. This led to the
following changes:
* Create get_cross_realm_dicts() in actions.py.
* Rename the page_params field to cross_realm_bots.
* Fix some back end tests.
* Add cross_realm_dict to people.js.
* Call people.add for cross-realm bots (if they are not already part of the realm).
* Remove hack to add in feedback@zulip.com on the client side.
* Add people.is_cross_realm_email() and use it in compose.js.
* Remove util.string_in_list_case_insensitive().
Adds a database migration, adds a new string_id argument to the management
realm creation command, and adds a short name field to the web realm
creation form when REALMS_HAVE_SUBDOMAINS is False.
Does a database migration to rename Realm.subdomain to
Realm.string_id, and makes Realm.subdomain a property. Eventually,
Realm.string_id will replace Realm.domain as the handle by which we
retrieve Realm objects.
We now simply exclude all cross-realm bots from the set of emails
under consideration, and then if the remaining emails are all in
the same realm, we're good.
This fix changes two behaviors:
* You can no longer send a PM to an ordinary user in another realm
by piggy-backing a cross-realm bot on to the message. (This was
basically a bug, but it would never manifest under current
configurations.)
* You will be able to send PMs to multiple cross-realm bots at once.
(This was an arbitrary restriction. We don't really care about this
scenario much yet, and it fell out of the new implementation.)
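A simplified sketch of that validation logic (function and argument names are
illustrative, not the actual implementation):

    def check_recipient_realms(recipient_emails, sender_realm,
                               cross_realm_bot_emails, get_realm_for_email):
        # Exclude cross-realm bots up front; every remaining recipient
        # must then belong to the sender's realm.
        ordinary = [e for e in recipient_emails if e not in cross_realm_bot_emails]
        if any(get_realm_for_email(e) != sender_realm for e in ordinary):
            raise ValueError("You can't send private messages outside of your realm.")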
We can currently send a PM to a user in another realm, as long
as we copy a cross-realm bot from the same realm. This loophole
doesn't yet affect us in practice--all cross-realm bots are
generally configured for the "admin" realm like the old zulip.com--
but we should lock it down in a subsequent commit.
Having each condition in a separate test was confusing to read,
especially since the tests were doing inconsistent setup, sometimes
calling user2 the user from the 2.example.com realm and other times
calling user2 the cross-realm bot, etc.
Note that we still need the equivalent function in our
user-facing API, so there is not much code removal yet.
(Also, we will probably always keep this in our API,
as bot authors will usually just want a simple endpoint
here, whereas our client code gets page_params and events.)
Previously, we used to create one Google OAuth callback url entry
per subdomain. This commit allows us to authenticate subdomain users
against a single Google OAuth callback url entry.
This is a first step towards implementing a message retention policy
feature.
- Add a message_retention_days field to the Realm model to configure the
message expiration period for a realm.
- Add migration.
- Add tool to get expired messages for each Realm.
- Add tests to cover tool for getting expired messages.
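A minimal sketch of what the expired-message lookup could look like with the
Django ORM (the field names and helper signature are assumptions, not the
actual tool):

    from datetime import timedelta
    from django.utils import timezone

    def get_expired_messages(realm, message_model):
        # Realms with no retention policy keep everything.
        if realm.message_retention_days is None:
            return message_model.objects.none()
        cutoff = timezone.now() - timedelta(days=realm.message_retention_days)
        # pub_date / sender__realm are assumed names for the message's
        # timestamp and the sender's realm.
        return message_model.objects.filter(sender__realm=realm,
                                            pub_date__lt=cutoff)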
Passes the allowed domains for a realm to the frontend, via
page_params.domains. Groundwork for allowing users to add and
remove domains via the admin setting page, rather than via the
realm_alias.py management command.
This is a preliminary step towards eliminating the realm.domain field
in favor of realm.subdomain. Includes a database migration to create
these for existing realms.
This adds a medium (500px) size avatar thumbnail, that can be
referenced as `{name}-medium.png`. It is intended to be used on the
user's own settings page, though we may come up with other use cases
for high-resolution avatars in the future.
This will automatically generate and upload the medium avatar images
when a new avatar original is uploaded, and contains a migration
(contributed by Kirill Kanakhin) to ensure all pre-existing avatar
images have a medium avatar.
Note that this implementation does not provide an endpoint for
fetching the medium-size avatar for another user.
[substantially modified by tabbott]
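A rough sketch of generating the 500px medium thumbnail with Pillow (the real
code lives in the avatar upload backend; names here are illustrative):

    import io
    from PIL import Image

    MEDIUM_AVATAR_SIZE = 500  # pixels

    def make_medium_avatar(original_bytes):
        # Resize the uploaded original to 500x500 and return PNG bytes,
        # suitable for storing as "{name}-medium.png".
        im = Image.open(io.BytesIO(original_bytes)).convert("RGBA")
        im = im.resize((MEDIUM_AVATAR_SIZE, MEDIUM_AVATAR_SIZE), Image.LANCZOS)
        out = io.BytesIO()
        im.save(out, format="PNG")
        return out.getvalue()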
This is some of the code we'd need if we wanted to have Zulip generate
avatars for things. Since it is so little useful code, and it's not
clear we will need this feature ever, we can remove this code to make
the codebase less confusing. It'd be easy to dig this out of history
if we ever want it.
Fixes #2101.
- Add tests for SEND_MISSED_MESSAGE_EMAILS_AS_USER is False (the
default!).
- Reorganized test case code by removing repeated parts of code,
improving code style and moving common parts to separate class
methods.
Fixes #1697.
POST to /typing creates a typing event
Required parameters are 'op' ('start' or 'stop') and 'to' (recipient
emails). If there are multiple recipients, the 'to' parameter
should be a JSON string of the list of recipient emails.
The event created looks like:
    {
        'type': 'typing',
        'op': 'start',
        'sender': 'hamlet@zulip.com',
        'recipients': [{
            'id': 1,
            'email': 'othello@zulip.com'
        }]
    }
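For example, a client could trigger the 'start' event with something like the
following (the exact URL prefix and authentication details are assumptions
about the deployment, not part of this commit):

    import json
    import requests

    requests.post(
        "https://zulip.example.com/api/v1/typing",    # hypothetical server/prefix
        auth=("hamlet@zulip.com", "<api-key>"),       # email + API key basic auth
        data={
            "op": "start",
            # 'to' is a JSON-encoded list when there are multiple recipients
            "to": json.dumps(["othello@zulip.com", "cordelia@zulip.com"]),
        },
    )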
We now send peer_remove events to folks who have never subscribed
to the streams (except for private streams and zephyr).
We also use logic that is more similar to how
bulk_add_subscriptions() works.
This distinguishes between YouTube Videos and Image Previews by adding
a particular “youtube-video” class to the preview along with changing
the title to the video ID rather than the link. This serves to allow
the lightbox to ID when a lightbox preview should be treated like a
YouTube video rather than an image preview.
This also modifies the bugdown tests to expect a youtube-video class and a
title that is just the YouTube video ID rather than the entire URL link.
Add detailed info (description, source and target branch, assignee) to message.
Change subject to 'repo_name / PR #id title'.
Modify some test fixtures for better coverage.
Add detailed info (description, source and target branch, assignee) to message.
Change subject to 'repo_name / PR #id title'.
Modify some test fixtures for better coverage.
Add detailed info (description, source and target branch, assignee) to message.
Change subject to 'repo_name / MR #id title'.
Modify some test fixtures for better coverage.
Fixes: #1883.
Rename:
PUSH_COMMITS_LIMIT to COMMITS_LIMIT
PUSH_COMMIT_ROW_TEMPLATE to COMMIT_ROW_TEMPLATE
PUSH_COMMITS_MORE_THAN_LIMIT_TEMPLATE to COMMITS_MORE_THAN_LIMIT_TEMPLATE
With reactions and other upcoming features, we'll be adding several
places where we need to check whether a particular user can access a
particular message. It's best to just have a single helper function
for this purpose that we can use everywhere.
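One plausible shape for such a helper, assuming a UserMessage table that links
users to the messages delivered to them (the real function may apply different
checks):

    def user_can_access_message(user_profile, message, user_message_model):
        # Hypothetical sketch: a user can access a message if it was
        # delivered to them, i.e. a UserMessage row exists for the pair.
        return user_message_model.objects.filter(
            user_profile=user_profile, message=message).exists()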
Previously, we sent users to an "invite your friends" page after they
created an organization. This commit removes that step in the flow and sends
users directly to the home page. We also remove the now-unused
initial_invite_page.html template, initial_invite.js (which pre-filled the
invite emails with characters from literature), and the /invite URL route.
test_settings.py was setting EXTERNAL_HOST after importing settings.py,
which has several variables (like SERVER_URI) that are computed from
EXTERNAL_HOST.
[tweaked by tabbott to add comments explaining the story here].
Adds a new field org_type to Realm. Defaults for restricted_to_domain
and invite_required are now controlled by org_type at time of realm
creation (see zerver.lib.actions.do_create_realm), rather than at the
database level. Note that the backend defaults are all
org_type=corporate, since that matches the current assumptions in the
codebase, whereas the frontend default is org_type=community, since if
a user isn't sure they probably want community.
Since we will likely in the future enable/disable various
administrative features based on whether an organization is corporate
or community, we discuss those issues in the realm creation form.
Before we actually implement any such features, we'll want to make
sure users understand what type of organization they are a member of.
Choice of org_type (via radio button) has been added to the realm
creation flow and the realm creation management command, and the
open-realm option removed.
The database defaults have not been changed, which allows our testing code
to work unchanged.
[includes some HTML/CSS work by Brock Whittaker to make it look nice]
This pulls message-related code from models.py into a new
module called message.py, and it starts to break some bugdown
dependencies. All the methods here are basically related to
serializing Message objects as dictionaries for caches and
events.
extract_message_dict
stringify_message_dict
message_to_dict
message_to_dict_json
MessageDict.to_dict_uncached
MessageDict.to_dict_uncached_helper
MessageDict.build_dict_from_raw_db_row
MessageDict.build_message_dict
This fix also removes a circular dependency related
to get_avatar_url.
Also, there was kind of a latent bug in Message.need_to_render_content
where it was depending on other calls to Message to import bugdown
and set it globally in the namespace. We really need to just
eliminate the function, since it's so small and used by code that
may be doing very sketchy things, but for now I just fix it. (The
bug would possibly be exposed by moving build_message_dict out to the
library.)
This moves the logic for renaming a stream to the REST API
update_stream_backend method, eliminating the legacy API endpoint for
doing so.
It also adds a nice test suite covering international stream names.
This adds support for running a Zulip production server with each
realm on its own unique subdomain, e.g. https://realm_name.example.com.
This patch includes a ton of important features:
* Configuring the Zulip session middleware to issue cookies correctly
for the subdomains case.
* Throwing an error if the user tries to visit an invalid subdomain.
* Running a portion of the Casper tests with REALMS_HAVE_SUBDOMAINS
enabled to test the subdomain signup process.
* Updating our integrations documentation to refer to the current subdomain.
* Enforcing that users can only log in to the subdomain of their realm
(but does not restrict the API; that will be tightened in a future commit).
Note that toggling settings.REALMS_HAVE_SUBDOMAINS on a live server is
not supported without manual intervention (the main problem will be
adding "subdomain" values for all the existing realms).
[substantially modified by tabbott as part of merging]
This sets the “title” attribute on the image to the actual title of
the image specified by the user in their markdown, rather than just
the URL of the full link to it.
This was the original way to send messages via the Zulip API in the
very early days of Zulip, but was replaced by the REST API back in
2013.
Fixes: #730.
We no longer use all the alert words for all the users in the
entire realm when we look for alert words in a newly sent/edited
message. Now we limit the search to only all the alert words
for all the users who will get UserMessage records. This will
hopefully make a big difference for big realms where most messages
are only sent to a small subset of users.
The bugdown parser no longer has a concept of which users need which
alert words, since it can't really do anything actionable with that info
from a rendering standpoint.
Instead, our calling code passes in a set of search words to the parser.
The parser returns the list of words it finds in the message.
Then the model method builds up the list of user ids that should be
flagged as having alert words in the message.
This refactoring is a little more involved than I'd like, but there are
still some circular dependency issues with rendering code, so I need to
pass in the rather complicated realm_alert_words data structure all the way
from the action through the model to the renderer.
This change shouldn't change the overall behavior of the system, except
that it does remove some duplicate regex checks that were occurring when
multiple users may have had the same alert word.
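A simplified sketch of the new division of labor (function names are
illustrative, not the actual Zulip code):

    def find_alert_words(message_words, search_words):
        # Parser side: given the full set of words to look for, return
        # only the ones that actually appear in the message.
        return [word for word in search_words if word in message_words]

    def alerted_user_ids(found_words, realm_alert_words):
        # Model side: realm_alert_words maps user_id -> that user's alert
        # words; flag every user whose words were found in the message.
        found = set(found_words)
        return {user_id for user_id, words in realm_alert_words.items()
                if found & set(words)}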
If you supplied an unrecognizable address to our email system,
or you had EMAIL_GATEWAY_PATTERN configured wrong,
the get_missed_message_token_from_address() used to crash
hard and cryptically with a traceback saying that you can't
call startswith() on a None object.
Now we throw a ZulipEmailForwardError exception. This will
still lead to a traceback, but it should be easier to diagnose
the problem.
In our email mirror, we have a special format for missed
message emails that uses a 32-character randomly generated token
that we put into redis and then prefix with "mm" for
a total of 34 characters.
We had a bug where we would mis-classify emails like
mmcfoo@example.com as being these system-generated emails
that were part of the redis setup.
It's actually a little unclear how the bug in the library
function would have manifested from the user's point of view,
but it was definitely buggy code, and it's possibly related in
a subtle way to an error report we got from a customer where
only one of their users, who happened to have a name like
mmcfoo, was having problems with the mirror.
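A sketch of the stricter classification check (the token length and helper
name are assumptions based on the description above):

    def is_missed_message_address(local_part):
        # System-generated missed-message addresses are "mm" followed by a
        # 32-character token, 34 characters total.  Checking only the "mm"
        # prefix would misclassify ordinary users such as mmcfoo@example.com.
        return (local_part.startswith("mm")
                and len(local_part) == 34
                and local_part[2:].isalnum())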
It appears that the assertRaisesRegexp approach we had before didn't
work properly on some systems, likely due to a bad interaction with
i18n (we haven't definitively determined the cause).
We now raise an exception in bugdown.do_convert() if rendering
fails, to avoid silent failures, and then calling code can convert
the exception to a JsonableError.
We now have 100% coverage on views/push_notifications.py, modulo
some dead code which will be addressed in the next commit.
There were some existing tests in test_external.py, but that
module is really intended for tests that hit external services.
The view is a really simple API that updates a DB table, and the
new test code focuses on error handling and idempotency as well
as the happy path.
Apparently, in urllib.parse, one needs to extract the query string from
the rest of the URL before parsing the query string; otherwise the
very first query parameter will have the rest of the URL in its name.
This results in a nondeterministic failure that happens 1/N of the
time, where N is the number of fields marshalled from a dictionary
into the query string.
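Concretely, with the standard library:

    from urllib.parse import urlparse, parse_qs

    url = "https://example.com/accounts/login/?next=/foo&email=iago@zulip.com"

    # Parsing the whole URL as if it were a query string folds the path
    # into the first parameter's name:
    parse_qs(url)
    # {'https://example.com/accounts/login/?next': ['/foo'], 'email': ['iago@zulip.com']}

    # Extracting the query string first gives the intended result:
    parse_qs(urlparse(url).query)
    # {'next': ['/foo'], 'email': ['iago@zulip.com']}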
This commit extracts compose_views() from update_subscriptions_backend(),
and it implements the correct behavior for forcing transactions to roll
back, which is to raise an exception.
There were really three steps in this commit:
- Extract buggy code to compose_views().
- Add tests on compose_views().
- Fix bugs exposed by the new tests by converting errors to exceptions.
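A sketch of that pattern, assuming Django's transaction machinery (the names
and response handling are illustrative, not the exact implementation):

    import json
    from django.db import transaction

    class JsonableError(Exception):
        # Stand-in for Zulip's JsonableError.
        pass

    def compose_views(request, user_profile, method_kwarg_pairs):
        # Run several view functions inside one transaction; raising an
        # exception (rather than returning an error response) is what
        # makes the whole transaction roll back.
        json_dict = {}
        with transaction.atomic():
            for method, kwargs in method_kwarg_pairs:
                response = method(request, user_profile, **kwargs)
                if response.status_code != 200:
                    raise JsonableError(response.content)
                json_dict.update(json.loads(response.content))
        return json_dict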
This adds support for using PGroonga to back the Zulip full-text
search feature. The built-in PostgreSQL full-text search doesn't
support languages that don't put spaces between terms, such as Japanese
and Chinese; PGroonga supports all languages, including Japanese
and Chinese.
Developers will need to re-provision when rebasing past this patch for
the tests to pass, since provision is what installs the PGroonga
package and extension.
PGroonga is enabled by default in development but not in production;
the hope is that after the PGroonga support is tested further, we can
enable it by default.
Fixes #615.
[docs and tests tweaked by tabbott]
The previous default configuration resulted in delivery problems if
the Zulip server wasn't authorized in the SPF records for the domains of
all users on the Zulip server.
This sends an event when a new avatar is uploaded that refreshes the
avatar for all browser clients without the need to reload the browser.
Fixes: #1359.
Previously, missed message emails with multiple senders would
incorrectly have a "," outside the quoted sender name part of the from
address string, resulting in confusing email output.
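One way to get this right is to let the standard library quote the display
name; for example:

    from email.utils import formataddr

    # A display name containing a comma must be quoted, otherwise the
    # comma reads as an address separator in the From header:
    formataddr(("Hamlet, Othello and Iago (Zulip)", "noreply@example.com"))
    # '"Hamlet, Othello and Iago (Zulip)" <noreply@example.com>'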
Adds a new default language field to the zerver_realm model.
This realm-level default language will be used as the default language
for newly created users. The realm-level default language can be
changed from the administration page.
Fixes #1372.
Often, users will copy email addresses with a name (rather than pure
email addresses) into the Zulip "invite users" UI. Previously, that
would throw an error.
This change also adds a get_invitee_emails_set function for parsing
emails content and a test suite for this new feature.
Fixes: #1419.
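The standard library can do most of this parsing; a minimal sketch of what
get_invitee_emails_set might build on (the real implementation may differ):

    from email.utils import getaddresses

    def get_invitee_emails_set(invitee_emails_raw):
        # Accept both "Full Name <user@example.com>" entries and bare
        # addresses, separated by commas or newlines.
        normalized = invitee_emails_raw.replace("\n", ",")
        return {address for _name, address in getaddresses([normalized]) if address}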
Define Integration and WebhookIntegration classes.
Update the webhook part of the integrations guide.
Replace hardcoded webhook URLs with URLs generated from
the WEBHOOKS list.
This exists primarily in order to allow us to mock settings.DEBUG for
the purposes of rate limiting, without actually mocking
settings.DEBUG, which I suspect Django never intended one to do, and
thus caused some very strange test failures (see
https://github.com/zulip/zulip/pull/776 for details).
This was in AdminCreateUserTest.test_create_user_backend().
For end to end tests we are logged in, and we need to verify
that our decorators add UserProfile to the parameters of
the view on our behalf, so that we don't get false positives.
In an upcoming commit, we will want to be able to serialize
the parameters for client_put to produce url coverage reports,
so that is another reason not to pass in the UserProfile
object. (That was how this was discovered.)
This makes us more consistent, since we have other wrappers
like client_patch, client_put, and client_delete.
Wrapping also will facilitate instrumentation of our posting code.
This new helper combines two old helpers, one of which was misnamed
and the other of which was always called after the first, so it
made sense to just combine the helpers.
Fixes: #1386
This commit adds these two tests:
test_use_first_unread_anchor_with_some_unread_messages
test_use_first_unread_anchor_with_no_unread_messages
The new tests add coverage to the conditional logic in
get_old_messages_backend() that looks at first_unread_result
when use_first_unread_anchor is set to True.
The test is now called test_use_first_unread_anchor_with_muted_topics().
Before this commit, the test exercised setting
use_first_unread_anchor to True, but it didn't inspect the
most relevant query affected by the flag. Now it does.
This test is still kind of hard to read, and it's far from ideal,
but I'm reluctant to remove it from the test suite.
This increases test coverage by exercising highlight_string().
It also gives deeper test coverage to NarrowBuilder.by_search(),
which had test coverage before, but only in terms of inspecting
the SQL that was generated. This test actually runs the SQL
under the hood.
This partly fixes #1006.
This allows the frontend to fetch data on the subscribers list (etc.)
for streams where the user has never been subscribed, making it
possible to implement UI showing details like subscribe counts on the
subscriptions page.
This is likely a performance regression for very large teams with
large numbers of streams; we'll want to do some testing to determine
the impact (and thus whether we should make this feature only fully
enabled for larger realms).
There were a bunch of authorization and well-formedness checks in
zerver.lib.actions.do_update_message that I moved to
zerver.views.messages.update_message_backend.
Reason: by convention, functions in actions.py complete their actions;
error checking should be done outside the file when possible.
Fixes: #1150.
This is controlled through the admin tab and a new field in the Realms table.
Notes:
* The admin tab setting takes a value in minutes, whereas the backend stores it
in seconds.
* This setting is unused when allow_message_editing is false.
* There is some generosity in how the limit is enforced. For instance, if the
user sees the hovering edit button, we ensure they have at least 5 seconds to
click it, and if the user gets to the message edit form, we ensure they have
at least 10 seconds to make the edit, by relaxing the limit.
* This commit also includes a countdown timer in the message edit form.
Resolves #903.
Taiga's webhook integration would give output events in a random
order which caused test failures on python 3 (seems like python
3 is more prone to non-deterministic failures). Fix that by
sorting the outputs obtained from events before concatenating them.
Use ujson.dumps to render raw messages sent by the PagerDuty
integration instead of using pprint.pformat. pprint.pformat
gives different results on python 2 and 3.
Correctly encode and decode strings in convert_html_to_markdown.
It wasn't possible to use universal_newlines=True since
Popen.communicate doesn't encode/decode strings correctly on
python 2.
Bitbucket changed the format of their API. The old format is still
useful for BitBucket enterprise, but for the main cloud version of
Bitbucket, we need a new BitBucket integration supporting the new API.
This is controlled through the admin tab and a new field in the Realms
table. This mirrors the behavior of the old hardcoded setting
feature_flags.disable_message_editing. Partially resolves #903.
This reverts commit f1f48f305e.
The use of sklearn unfortunately caused a substantial slowdown to the
Zulip provisioning process, which didn't seem worth it for a
relatively minor feature.
The subscribers list is appended to in `peer_add` events with no
regard for preserving the ordering, and ordering isn't really
important here, so it seems best to just sort it in these checks.
Also encode/decode strings appropriately when using api_keys to generate
basic auth header.
Also fix clashing annotations in zerver/tests/test_external.py.
We would like to know which kind of authentication backends the server
supports.
This is information you can get from /login, but not in a way easily
parseable by API apps (e.g. the Zulip mobile apps).
This reverts commit e985b57259.
This commit will break production when we next do a release, because
we haven't done a migration to create Attachment objects for
previously uploaded files.
For a long time, rest_dispatch has had this hack where we have to
create a copy of it in each views file using it, in order to directly
access the globals list in that file. This removes that hack, instead
making rest_dispatch just use Django's import_string to access the
target method to use.
[tweaked and reorganized from acrefoot's original branch in various
ways by tabbott]
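The core of the change is looking the handler up by its dotted path with
Django's import_string; schematically (the handler paths and error handling
here are hypothetical):

    from django.utils.module_loading import import_string

    def rest_dispatch(request, **kwargs):
        # kwargs maps HTTP methods to dotted paths of view functions, e.g.
        # {'GET': 'zerver.views.streams.get_streams_backend'} (hypothetical).
        supported_methods = {k: v for k, v in kwargs.items()
                             if k in ("GET", "POST", "PUT", "PATCH", "DELETE")}
        if request.method not in supported_methods:
            raise ValueError("Method not allowed")  # the real code returns a JSON error
        target_function = import_string(supported_methods[request.method])
        return target_function(request)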
get_display_recipient's annotation clashes with other wrong annotations.
Fix those wrong annotations.
Since get_display_recipient returns a Union, use isinstance checks and
casts to make mypy checks succeed.
Now that we have a working S3 mock and an effective way to toggle the
upload backend that Zulip is using, we can re-enable this important
end-to-end test of the Zulip S3 upload backend.
This has no functional changes; we just replace the old hacky
assignment of functions with assignment of the upload backend to a
variable.
I'm not totally happy with this, because we end up having to copy the
type annotations of the three methods 4 times each, but this should
make it a lot easier to test the (non-default-in-tests) S3 backend
using end-to-end tests, which would have caught
13bac1cc2a.
I expect we'll iterate on the interface over time; ideally, I'd like
all the code that checks LOCAL_UPLOADS_DIR to be inside upload.py, and
primarily in these classes.
In order to genericize use of Zulip outside companies,
all instances of coworkers have been changed to users.
NOTABLE EXCEPTION: When the Zulip instance is domain-
locked, the reference to coworkers remains. The reason
for this is twofold: first, the majority of Zulip instances
which require a particular domain will be locked to a
company, and second, the template variable for the necessary domain
should be added to the alert so it is clear
to the user what the domain needs to be for access.
Fixes: #861.
Just render the templates without the actual workflow to see if they
don't return a 500 error; this lets us catch various classes of
template bugs automatically.
Fixes #784.
Like the recent change blocking JSON endpoints for deactivated users
and users in deactivated realms, this change is a hardening
improvement. Those users should be unable to get an active session
anyway, but if somehow one is leaked, this means they won't be able to
access any user data.
Previously, api_fetch_api_key would not give clear error messages if
password auth was disabled or the user's realm had been deactivated;
additionally, the account disabled error stopped triggering when we
moved the active account check into the auth decorators.
This commit adds the capability to keep track and remove uploaded
files. Unclaimed attachments are files that have been uploaded to the
server but are not referred to in any messages. A management command to
remove old unclaimed files after a week is also included.
Tests for getting the files referred to in messages are also included.
Since we don't have a stable way to get the Dropbox preview failure
image (and it was sorta a weird setup anyway), it seems best to just
remove the condition.
Several recently merged webhooks were incorrectly not checking that
the actual webhook result didn't return an error. While they would
usually still fail in most cases when checking whether the message
came back correctly, this hid the root cause errors and thus made it
much harder to debug.
This integration relies on the Teamcity "tcWebHooks" plugin which is
available at
https://netwolfuk.wordpress.com/category/teamcity/tcplugins/tcwebhooks/
It posts build fail and success notifications to a stream specified in
the webhook URL.
It uses the name of the build configuration as the topic.
For personal builds, it tries to map the Teamcity username to a Zulip
username, and sends a private message to that person.
S3Test is now only the S3-specific test (which isn't even run), so we
can now invest in making FileUploadTest have good coverage of the
(local) file upload code paths.
For reasons I don't understand, it appears that in Travis CI we're now
seeing errors using Casper that seem to correspond to a compatibility
issue introduced in PhantomJS 2, even though we're still using 1.9.8.
The solution for that compatibility issue of patching casper's
bootstrap.js to get arguments from system.args at a slightly different
time than before seems to work in our setting as well, and that's what
this implements.
Probably the right long-term solution involves upgrading both
phantomjs and Casper to the latest versions.
The tests run as iago, who is now an administrator and therefore has
control over many more bots. Be specific about which bot to operate on.
(imported from commit 7a9d3e12da905338624747dd402702bb66907cfd)
After I reverted the change to the bot settings page, this
broke a test. This commit fixes that.
(imported from commit 394b29fea4f75096f7cb8d819145a9adc386276b)
When the date changes between an existing group and a new group the
existing date separator needs to be updated. This is done by rerendering
the existing group.
(imported from commit a3775815e33872b0ec07704dc7ccf5fd2671fa21)
Now that we are not directly using message in the message list view
rename the uses of message that are message_containers.
(imported from commit 5c355703a8934a74864f5de6ecb1e2fd851e5d41)
The messages being passed to the handlebars templates were global
messages to which we were adding per-list details (show name bar, etc.).
This causes rendering bugs when you try to rerender a message, because a
different list may have changed it. This commit moves the global message
data to a msg attribute on the message_container, which will contain the
per-list attributes.
(imported from commit 26b1f0d2c72d6288a6d3e7ed5f8692426f2a97ad)
The goal is to have a more data-centric piece that can be unit tested.
We also try to minimise the number of one-off jQuery DOM updates and
rerender handlebars fragments instead. This will prevent the
message_group and DOM from drifting apart and not being able to rerender
correctly.
(imported from commit 03f09803f2bc0c3b8187f76f2cfe90be9f7512a3)
Before this change, we were incorrectly trying to do local
filtering on negated has searches.
(imported from commit d1a6f1feef6b3cc1c984eb91a73cd16c4e66874e)
We show a user as "on mobile" if:
* They are only active on mobile
* They are inactive on all devices and can receive push notifications
(imported from commit 0510b9371727cd19c72f6990df7112921c36ad48)
This doesn't affect code when not in testing. It shaves 7 seconds off of casper
test time on my machine.
(imported from commit 7e27fa781bcf16f36d9c8f058427ba57c41068bd)
Normally, casper delays checking the waitFor condition for 100 milliseconds and
further does not act on that check for another 100 milliseconds. This is just
silly.
(imported from commit ad046ceda81abda5c609ce25ef0d4fb27d3da716)
send_message -> then_send_message
send_many -> then_send_many
wait_and_send -> then_wait_and_send
Hopefully this makes it clearer that they should not be called inside of steps.
(imported from commit 4fcc971817b25056100311ba55303da2c5527f0f)
Casper was calling casper.then(then) instead of calling the callback directly.
This meant that the callback was being added as a step, which worked, but was
not consistent with the rest of the casper model.
(imported from commit b3bf916f7c56dd3d4e7be3569ebdf9d3045cd085)
Previously, if you searched for "in:home search:foo", we
weren't making "in:home" a public operator, so the back end
wouldn't know to exclude muted messages, but the front end
also wouldn't exclude muted messages, because it assumed
that queries with "search:" in them were fully narrowed by
the back end.
Prior commits made it so that the back end is now capable
of doing "in:home" narrowing, so to get the properly narrowed
results, we simply needed to make in:home be a public operator
in this commit. We also made in:all be public for convenience,
although it's essentially a no-op.
(imported from commit e4a8b10813b50163c431b1721bd316b676be1b83)
This commit finishes up support for has:* searches by adding
the front-end pieces, specifically the part that "has" operators
will not be applied locally. It also implements basic
descriptions for search suggestions and canonicalization
of operands from plural to singular.
(imported from commit a3285bc33d06d76b5a2b403ebcdd911b4cc03980)
Typing "stream:foo -topic:b" leads to "stream:foo -topic:bar" properly
as a suggestion now.
(imported from commit bb0acf52744f7b13977a3db5d3c130d1402b09b7)
The match_subject and match_content template vars are notorious
for causing bugs due to the way handlebars forces the strange
../../.. syntax on us, so now we have some test coverage.
(imported from commit c6b151b964ae8b6fb199d9cdbe533a87c6b58947)
This helps the common case of users who don't like our default of having
audible and desktop notifications enabled, by not making them adjust the
settings on every existing stream to fix it.
(imported from commit be75edb2c1385d1bd9a289416e2dffd8007f5e0a)
As part of this, I also made test_basics() have a third
stream that makes false positives in the test less likely.
(imported from commit d5ba64ec9346741818e30abe9e9594788c339fab)
Now that we no longer use tables for our message list, we can
more logically group messages together.
(imported from commit 9923a092f91a45fe3ef06f2f00e23e4e3fb62a37)
This changes Filter.describe and Filter.operator_to_prefix
to handle negated terms correctly.
(imported from commit 673c0d3a5a77784e95772c14e12534ad2daecda2)
Commit "ecf0eb85 Redesign styles for message pane" removed the
right_part class, updates the tests to not use it.
(imported from commit 277eb3748913895b13ab7bdca11e668033c9f9b3)
Remove the options to narrow by topic/person from the menu,
because there are better ways to do this in the UI, and
remove the time travel option, because the "Link to this
conversation" achieves mostly the same effect.
(imported from commit b7e0cfe64c0760e5a7bf7a8c9c05ed1a5b747300)
For the Filter helper functions above, we generally want to
ignore negated search terms, since their existence should
really only impact filter predicates and nothing else on the
JS side. The exception is search, where even the existence
of a negated search needs to be noted to know that we can't
apply a filter locally.
(imported from commit 8bbb410a85fefed549d359e4c779a134ad830c11)
For negated search terms, we weren't explicitly setting
"negated" to false when callers left it undefined, which was
mostly fine, since undefined is falsey, but it is better to
define it explicitly for debugging/testing purposes.
(imported from commit 68a2790b510d17caed8ca11c38188545d1dcc347)
Behind a feature flag you can now do searches like this:
-pm-with:othello@example.com is:private
The "-" in front of "pm-with" tells us to exclude messages
with Othello from our search. We support "-" in front of
all operators, although the behavior for "-search:" and
and "-near:" doesn't really change in this commit.
Note that the filtering out of "negated" predicates only
happens on the client side in this commit. On the server
side we ignore negated predicates and send back a superset
of the results.
(imported from commit 6cdeaf32f2d493fbbb838630f0da3da880b1ca18)
This removes one forced relayout of the page on unnarrow. This
saves about 100ms for me.
(imported from commit 0755f425abbe3d99b8a99765549a5bbf3c620b9a)
The filter_term() function was supporting the transition
from using tuples for search terms to using dictionaries,
but now all of the JS code should be dictionary-compatible.
(We had already abandoned the tuples safety net on staging,
and a couple days of use have given me confidence we can
pull the shim code.)
The one side effect this change has is that search terms will be
initialized to {} instead of []. This distinction matters
when it comes to calling JSON.stringify on the search terms.
(imported from commit 1fbe11011d8953dbea28c0657cbf88384d343e00)
When we typed "stream:" into the search bar, the empty operand
triggered an error in the Dict class for an undefined key, because
we were using opts[0] as a "defensive" workaround to opts.operand,
but opts.operand of '' is more correct than opts[0] being undefined.
Now we only fall back to opts[0] when opts.operand is undefined, and
we emit a blueslip error when that happens.
(imported from commit 88a196d3bc3d67689c36bc036f378da744c652f9)
We have shim code that makes our internal narrow operators
support both a tuple interface and an object interface. We
are removing the shim on staging to help expose any dark
corners of the code that still rely on operators being
represented as tuples.
(imported from commit f9d101dbb7f49a4abec14806734b9c86bd93c4e1)
This is yet another change related to phasing out the
[operator, operand] tuple data structure for representing
terms in a narrow.
(imported from commit 508e58fc4eebae8a24a8ae59919ba5d94fc66850)
The JS code can now call stream_data.get_sub_by_id() to get
a sub from a stream_id. Subs have stream_id due to a prior commit,
and we keep track of the mapping in stream_data's subs_by_stream_id
variable.
(imported from commit 409e13d6d2e79d909441a66c85ee651529d15cd2)
The tutorial introduces "engineering" messages that might not
be in the user's normal subscription, and they would get a gray
border if we did not override the stream color. Before this change,
we accomplished this by overriding the core data structure in
stream_data.js. Now we are a bit more future-proof; we only
override stream_color.default_color.
(imported from commit 0d0845b72f766912679f5aa7641ae9a60fdbb4ce)
Add try/catch blocks to get_updates_success and send a blueslip error on
errors we catch. This will let get_updates_success return successfully
so that the next call to get_updates will start immediately.
(imported from commit 44d8b85d9d8e930a5552a5fbf4af1d0e5e8c07e8)
Add a helper, patch_global, to change a global and then reset it to the
original value after a test file is complete.
(imported from commit 1b65ff6ea8693ad61b7f18f35dafa942429252a8)
In the early days of the node tests we didn't have an index.js
driver, so each test set up its own environment. That ship
has long since sailed, so now we just require assert in index.js
as a global and make it so that the linter doesn't complain.
(imported from commit 1ded3d330ff40603cf4dd7c5578f6a47088d7cc8)
Having to explicitly call out the underscore/Dict dependencies
in nearly every test has proven to be more cumbersome than helpful.
We never monkeypatch those modules in the tests.
(imported from commit 49ef70c835edd4e22a5869eda9235ef3ffc3c59b)
If you do a search like id:5 topic:foo and message #5
does not have the topic "foo", we now return zero results.
(imported from commit 8121fac1dbd79024c51af1f310d831dab9242e36)
By having Filter.canonicalize_tuple() call filter_term(),
we make it so that Filter objects get operator/operand
fields in their terms when we initialize this.
This mostly caused test breakage for tests that were doing
assert.deepEqual; now we just check to make sure that the
fields we need are there.
(imported from commit 63b2516dc72edeb11e76a1fa4442570b9c605baa)
Consumers of Filter.parse() can now reference
search term parts like so: term.operand, term.operator
(Legacy code can still use term[0] and term[1].)
(imported from commit 06d0da65f13f1eb7e3ba8eac0e69448aab2735ab)
Previously, while you'd get the event saying you'd been knighted,
which would make the Administration tab visible, clicking on the tab
would error out because the admin page HTML was never sent over on
page load (since you weren't an admin at that point).
(imported from commit 90ad351533515bebece630d67baf4b142d320754)
Add javascript to handle the button clicks and update the status based
on the subscribe and unsubscribe events from the server.
(imported from commit 6b9c0b40d9084e3d8b64bed701ebc786bef6d432)
There was a bug where you would type "is:private je" into the search
suggestion and see undefined:jesstess@zulip.com. Now we use
the "pm-with" operator. The search suggestions for people are kind
of complicated now, because there is some overlap between
get_private_suggestions and get_person_suggestions.
(imported from commit 7d330f34f4a433995420de6eb90cb41229b70272)
This function can redraw the lock icon (or lack of lock icon)
for a stream in the stream sidebar. It can be called when
admins change the stream privacy.
(imported from commit 880133d02525137094c48ecad8cf2dfff59f3307)
This is a node test that verifies that
stream_list.add_stream_to_sidebar() creates the right
DOM when it renders the stream_sidebar_row template.
The test also makes sure that the DOM gets put in the
correct place to be retrieved by stream_list.get_stream_li()
calls.
(imported from commit ed4c0148da2261870e3db5a9b553913788b4eccd)
The node tests will now throw an exception if you haven't
at least compiled one of the handlebars templates as part
of running template.js. This includes a one-line fix to
include tutorial_welcome.handlebars.
(imported from commit 51b4cae293d54c1f374a84623b4928519775e228)
expected_messages was ltrimming headings, but we had a newline at the end
of innerText. The ltrim was changed to a trim to correct this.
(imported from commit 5e411c5fc46a2cb675c1268041e95bbb2522c8f9)
This is the UI piece that finishes the features to let admins
make streams private or public.
(imported from commit 1a193165a6304dc358982e9850a75965fb3a03fd)
When we rebuild the user list from scratch, set the unread
counts in the templates to avoid multiple DOM updates.
(imported from commit 2d0c9b0fb99b382332e464ba7c3caad95e05363e)
Features:
* Only shows messages in the narrow
* New messages in the narrow will arrive as they are sent
* Works even for streams you're not subscribed to
* Automatically subscribes you to a stream on send
* Doesn't update your pointer
* All searches etc. automatically have the narrow added
(imported from commit 2e12b76849f6ca0f53dda5985dad477a04f7bbac)
This adds coverage on all of our handlebars templates. It renders
each template with representative data and generally performs at
least one sanity check on the DOM that gets created.
Also, the tests output the rendered HTML to .test-js-with-node.html,
which can serve as a way of documenting all of our templates.
(imported from commit 63dd7502457a8199dd35277fc5ab80cd53e2af22)
We convert sender:me to sender:steve@zulip.com at parsing time,
so users will see the canonicalization in the search bar. Likewise
for pm-with.
(imported from commit aa9951f13d4633cfef85f03e5486d607fdef414f)
This requires the node.js version of jQuery. Talk to Luke if you are having issues
installing it.
(imported from commit 86dc91a9a41b2ef9c2dbcc4fb0085109250d7af7)
The node tests can write to .test-js-with-node.html now, so that
when you are unit-testing template-related code, you can use
the browser to inspect the output.
(imported from commit 3cbd7470ec0da1973124f79acb665c3a3e17c580)
R means "I want to send a PM, you can guess the destination"
r means "I want send a stream message, you can guess the destination"
C means "I want to send a PM and specify the destination"
c means "I want to send a stream message and specify the destination"
(imported from commit 755c92aed79ab79089b2e35d2c100582f012736a)
Add a method that lets us know what percentage of a huddle is
present. (We can use this later to set the opacity of huddles
in the UI.)
(imported from commit 8a2383951807d7bfbf9d730a8980d977cf23b379)
This reverts commit c10d9c1a0d23891acce88bf8d79866c08cb75681.
This reverts commit 9259a246946cd968a8725c38ff5ef2d4b4793717.
(imported from commit 50e9e0136c2487cc63d75ae2b78df0c70a1b0be1)
The simulated people_dict in the activity.js test was not
matching the production code, and going forward, we'll want to
share the people_dict setup for all of our tests.
(imported from commit fc21a02216b9422130b9fe9c11bcf80590612844)
Activity.js now has the capability to track huddles that
come through in loaded messages and return them in reverse
chronological order by their most recent message. Right
now this is only connected to a unit test, not any production
code.
(imported from commit 59957086fa2e454e5711472df091f178217aed2b)
This test has been broken for a couple months, and nobody has taken
ownership of fixing it. It's always slow, sometimes it fails
randomly, sometimes it fails for things that aren't really problems,
and it's generally been way more trouble than it's worth.
(imported from commit 8080e81b226a372e763a2558f4e5668c3a4d087c)
We really should be setting a variable in Javascript to indicate that
we've finished loading, but this hasn't bitten us yet.
(imported from commit ee1f7c76d9f3c482561cc5c44b81537c7e9636be)
Summary blocks can contain hundreds of messages. When the rendering window
code didn't take this into account, it would lead to all kinds of
unpleasant behavior when you scroll.
Trac #1888
Unfortunately, this replaces a subtraction with a function that iterates
through all the messages.
(imported from commit 9259a246946cd968a8725c38ff5ef2d4b4793717)
When decoding an operand, a + can be converted to a space
only if the operand is not an email address.
(imported from commit 08fc36a579bbe6409137c60c0fa9579fe3ab2c43)
There is a scenario where we call process_read_message()
for a message that we haven't recorded as unread before.
I'm not sure how it happens, but I put back code to
guard against crashing. The regression happened in
5752458c821.
(imported from commit 5ce15d2e236b738b445ed88f1733aa0612be0ff3)
Update get_counts() so that it ignores counts for muted topics
when calculating stream/home unread counts.
(imported from commit 9b4e4da4346c225c535e97d709d3dee032603cc5)
The indirection was more confusing than helpful, especially
since the function had side effects, despite its getter-like
name.
(imported from commit 85d9cf642b4177f62488136f0e0f7f6c9304942e)
Instead of collapsing muted messages, just hide them altogether
in views where it makes sense to hide them.
(imported from commit 1c2c987ff302ceb135a025753cf421b4de1aea71)
Warn inside these functions when you get data on streams that you
are not subscribed to:
add_subscriber
remove_subscriber
user_is_subscribed
The back end should be smart enough not to spam us with subscriber
info that we don't care about.
(imported from commit b27644be2abc37c11ddff884ef392ea208bd1bd3)
We create a blueslip error for undefined keys in Dict. This led
to a straightforward change in the unit tests for Dict. For the
unread test, to avoid the blueslip error, we had to be more specific
in setting up a user in one place, but this reduced our coverage,
leading to another small test being added.
(imported from commit 33e14795500d9283de2a7c03c4c58aec11cea4b8)
The exceptions were cryptic before, and they were inconsistent with
the fold_case: false behavior.
(imported from commit a40704d1a22bcdc60d91be832ee3c81eb416c6dd)
There was nothing to ensure that the changes resulting from scrolling
happened before the unread counts were checked. We already had a long
wait there; might as well do those checks after it to ensure that the
DOM is updated.
(imported from commit 0d4014ae6a74dd684521fecabefc4bf79015f842)
This is experimental, for staging only. There might be a better
way to model this than dueling force_expand/force_collapse flags,
but it works for now. The code in collapse_recipient_group()
could also be DRYed up relative to expand_summary_row().
(imported from commit 107151d1ecd640970fb7700d41278a003bd1abaa)
I was saying bar.d in places where I wasn't really specifically
testing the .d feature, and it was distracting and just an
unintentional consequence of copy/paste.
(imported from commit 7b137b28cb33c72b83f02fe1d2961c5c6accc263)
This change will allow us to test the muting feature on
staging. Any topic named "muted" will automatically be
muted. You can also mute any other topic on the console:
muting.mute_topic('devel', 'ios');
current_msg_list.rerender();
More UI around this experiment will be coming soon, as well
as support for muting entire streams.
The muting module keeps track of which topics are muted, but a
user can expand muted messages, and once that happens, the
messages are marked with the "force_expand" flag that gets
persisted to the back end.
Muted messages are rendered in similar fashion to the summarized
rows, and as part of unifying some of that code, we have
made it so that expanding a summarized section doesn't remove
individual flags related to summaries; instead, the messages
get the force_expand flag set.
(imported from commit acee4190e63813d46850415c41ff8ebfae4a6953)
I regressed this recently, thinking that all our operators are
strings, but I forgot about the "near:" operator used in the
"Narrow messages around this time" feature. The user facing
symptom was that the search bar showed up empty instead
of saying near:50, which might actually be the better
behavior, but it certainly was not intentional. :)
(imported from commit fcb93cecbe9a052bb9bc1af7fcac5aecaba5aafb)
I'm trying to move well-isolated methods out of narrow.js, so that
narrow.js is more strongly focused on UI/ajax interactions and
big, heavy lifting stuff. The logical home for parse/unparse
seemed to be Filter, and they brought along two private methods
with them. The big code moves involved trivial follow ups
like s/exports/Filter/.
(imported from commit ace0fe5aa1c7abce0334d079ba9eb8d9a57bd10f)
This is hard to break up into separate commits; sorry about that.
Before this commit we had 3 tests:
1. Claims to check that an unread count was 3, but actually doesn't.
2. Checks that scrolling down causes the left-sidebar stream unread
count to decrease.
3. Claims to check that unread counts are correct, but actually doesn't.
From talking to Leo, it seems that he originally tried to actually do
what tests 1 and 3 claim to do, but found it too fragile to check an
exact unread count because of font sizes, layout, etc.
We now have 4 tests. For each of the stream sidebar and user sidebar, we
test that:
1. Scrolling down causes the unread count to decrease
2. Logging out and back in again leaves the unread count unchanged
I've removed the two bogus tests and some other code that didn't seem to
serve a purpose.
(imported from commit 9f8e4b521e2765099510426d0b7e2960885e6f19)
You used to have to call casper.test.done(N) where N was the number of
tests run. This is no longer required and is deprecated in CasperJS 1.1.
(imported from commit 0de9ecb1930cbce416fa02c24a882e926cdc8e87)
Have ui.set_presence_list() only touch the presence list.
Before this change, it was calling update_unread_counts(), which
has a bunch of side effects unrelated to the presence list.
(imported from commit 690f754d78874a03fa36f8ff8765d5a63e431d28)
The functions add_dependencies() and set_global() are convenience
methods that allow you to modify the global namespace while
the current file is running but then have it be cleaned up
by index.js when you're done.
(imported from commit f75b8a10c19f773a8d2d3a8fa4bc39b1679566fe)
This is like Python's dict.setdefault. I don't love the name, but
the consistency is nice.
We have lots of places where we do things like:
    if (! dict.has('foo')) {
        dict.set('foo', []);
    }
    var arr = dict.get('foo');
    arr.push(3);

We can now write:

    var arr = dict.setdefault('foo', []);
    arr.push(3);
(imported from commit b8933809c69ba47ec346ed51d53966793403e56c)
The function narrow.unparse() is used in a bunch of places in
the search suggestion code, and now it no longer lower cases
operands. This change contributes to fixing trac #1659.
(imported from commit 6b44b8a818482b5c8b4f9a45bc7d3a9d21e04eba)
Streams are converted to their "official" names now.
Topics are not canonicalized at all.
All other operands continue to be lowercased.
Since we don't lowercase stream/topic at the parsing stage,
we have to modify the predicate function to do the lowercasing
of stream/topic to enable case-insensitive comparisons. This
is slightly more expensive. The server-side predicate
functions are already case-insensitive.
(imported from commit 286f118c6c3ff9d23b37c7f958cab4c0eacd5feb)
If we have a stream named "Denmark" and we're narrowed to it,
then use "Denmark" as the default stream name in the compose box
even if the narrow operators are lowercase.
(imported from commit e9f06b7307c73231aa887dc95849e0307984e6f0)
This function returns the stream's actual name, if we can get it;
otherwise, it's the identity function.
(imported from commit 7a981adba9632d6c6eba54cb6514a9226d1e83e8)
There are no functional changes; you can still use the shell script
tools/test-js-with-node. It just delegates now to the new index.js to
iterate through all the other .js files in the test directory and run
them. This sets the stage for Istanbul to correctly compute test
coverage.
(imported from commit 6f521c78b7a314d010fa113f9c2c971ab999b637)