IFTTT allows custom templating for their payloads, so the onus is
on the user to ensure that their custom templates conform to the
expectations outlined in our IFTTT webhook docs. For that reason,
these payloads weren't generated, but were manually edited.
After discovering a couple of bugs, I decided to thoroughly test
and rewrite this integration from scratch. The older code wasn't
generating coherent messages.
This commit also gets this integration up to 100% test coverage.
Test coverage was improved by removing an unused function and
removing some code (written by me) that was actually handling
Test Hook event types incorrectly.
It was a painful amount of work to generate the actual payload.
Since the only difference was a small build URL, I manually
edited the payload and used that for testing.
This commit gets our GitHub webhook up to 100% test coverage.
Some of the page build message code had insufficient test coverage.
I looked at generating the payloads that would allow me to test
the lines of code in question, but it was too much work to
generate the payloads and this seemed like a vague event anyway.
So I just rewrote the logic so that the lines missing
coverage are implicitly covered.
This is a part of our efforts to get this webhook's coverage
up to 100%.
Note that apart from just testing an uncovered line of code, this
commit also fixes a minor bug in the code for messages about issue
comment deletion and editing.
Note that Freshdesk allows custom templating for outgoing payloads
in their webhook UI. Therefore, the payloads added in this commit
did not have to be official payloads from Freshdesk.
Instead of just referring to the commit with the raw URL, we
should use the commit ID as the text of the hyperlink.
Note that in commit_status_changed type messages, the name of the
commit isn't available.
The function that generates the body of the commit_status_changed
event messages generated an invalid commit URL.
Most likely, we missed this because this event type is fairly
vague and it is possible it was never tested by users much,
if at all.
The lack of coverage was due to:
* A function that was never used anywhere.
* get_commit_status_changed_body was using a regex where it didn't
really need to use one. And there was an if statement that
assumed that the payload might NOT contain the URL to the commit.
However, I checked the payload and there shouldn't be any instances
where a commit event is generated but there is no URL to the commit.
* get_push_tag_body had an `else` condition that really can't happen
in any payload. I verified this by checking the Bitbucket webhook
docs.
We shouldn't just ignore exceptions when encoding the incoming
auth credentials. It is better to know explicitly whether the
credentials were improperly encoded or whether something else
failed, rather than silently swallowing both cases.
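A minimal sketch of the idea, with hypothetical names (this is not the
actual code path): decode the credentials explicitly and raise a
specific error instead of a catch-all pass.

```
import base64
from typing import Tuple


class InvalidCredentialsEncodingError(Exception):
    """Hypothetical error type; raised instead of silently swallowing failures."""


def decode_basic_auth(header: str) -> Tuple[str, str]:
    # Decode the Authorization header explicitly; a malformed encoding
    # produces a specific error rather than being ignored alongside
    # every other possible failure.
    scheme, _, encoded = header.partition(" ")
    if scheme.lower() != "basic":
        raise InvalidCredentialsEncodingError("unsupported authorization scheme")
    try:
        decoded = base64.b64decode(encoded, validate=True).decode("utf-8")
    except ValueError as e:
        raise InvalidCredentialsEncodingError("malformed basic auth credentials") from e
    username, _, password = decoded.partition(":")
    return username, password
```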
This is a very early version of a tool to convert Hipchat
tar files into data files that can be used by the Zulip
import process.
We include the most fundamental entities--users and
streams. Customers who don't care about past messages
or customizations could start an instance off of this
and start communicating.
Of course, there are a lot of things missing in the
initial version:
* messages!
* file assets -- avatars, emojis, attachments
* probably lots of other minor things
We currently ignore any incoming dates from Hipchat data
and just use the current time. This is consistent with
other imports.
We also don't have any docs yet, although the process
will be extremely similar to the "Slack" process:
https://zulipchat.com/help/import-from-slack
Also, there's a comment at the top of convert_hipchat_data.py
that describes how to test this in dev mode.
I tested this by following the steps in the comment above.
The users just "show up" in /devlogin, so that's nice, and
you can send messages to other users. To verify the stream
data you have to go into the gear menu and click on "All
Streams", then you can subscribe and send a message.
Production users will need to get new passwords and
re-subscribe to streams. We will probably auto-subscribe
all users to public streams.
The code was needlessly querying the DB to get full
objects for entities where we only needed user_id,
realm_id, and stream_id.
With my test data of ~1000 records this sped up the
function from ~8s to ~0.5s. The speedup would probably
be even more for larger data sets.
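A sketch of the Django idiom involved; the model and field names here
are assumptions, not the exact code:

```
from typing import List

from zerver.models import Subscription  # assumed import path


def active_subscriber_ids(recipient_id: int) -> List[int]:
    # Ask the database for just the user ids instead of hydrating full
    # Subscription/UserProfile objects whose other fields we never read.
    return list(
        Subscription.objects.filter(
            recipient_id=recipient_id, active=True
        ).values_list("user_profile_id", flat=True)
    )
```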
The `match_subject` field is supposed to contain HTML; that's how
the highlighting is done. But the `subject` field is plain text --
it must be encoded if we want corresponding HTML.
Of the three places the `match_subject` field is populated -- two
here in messages_in_narrow_backend, one in get_messages_backend --
two of them already do this correctly, via get_search_fields.
Fix the remaining one, where in a `/messages/matches_narrow` query
we populate `match_subject` even if the query didn't involve a
full-text search.
This doesn't affect the webapp, which ignores `match_subject` unless
it knows it did a full-text search; nor the mobile app, which
doesn't use `/messages/matches_narrow` at all.
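A sketch of what the fix amounts to (names are illustrative): when no
full-text search ran, the plain-text subject must be HTML-escaped
before being exposed as `match_subject`.

```
from typing import Optional

from django.utils.html import escape


def build_match_subject(subject: str, highlighted_html: Optional[str]) -> str:
    # The search backend supplies highlighted HTML only for full-text
    # searches; otherwise escape the plain-text subject so the field is
    # consistently HTML.
    if highlighted_html is not None:
        return highlighted_html
    return escape(subject)
```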
Fixes the urgent part of #10397.
It was discovered that soft-deactivated users don't get mobile push
notifications for messages on private streams for which they have
configured push notifications.
Reason: `handle_push_notification` calls `access_message`, and that
logic assumes that a user who is a recipient of a message has an
associated UserMessage row. Those UserMessage rows are created
lazily for soft-deactivated users, so they might not exist (yet)
until the user comes back.
Solution: Ensure that a UserMessage row is created for users in
stream_push_user_ids and stream_email_user_ids in create_user_messages.
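A rough sketch of the idea behind the fix, with simplified names:
soft-deactivated (long-term idle) users normally get UserMessage rows
lazily, but those who want stream push or email notifications need the
rows created eagerly so `access_message` succeeds.

```
from typing import Set


def user_ids_needing_usermessage_rows(
    active_user_ids: Set[int],
    long_term_idle_user_ids: Set[int],
    stream_push_user_ids: Set[int],
    stream_email_user_ids: Set[int],
) -> Set[int]:
    # Everyone who is not soft-deactivated gets a row as before; among
    # the soft-deactivated users, include those who asked for push or
    # email notifications on this stream.
    eager = active_user_ids - long_term_idle_user_ids
    notifiable_idle = long_term_idle_user_ids & (
        stream_push_user_ids | stream_email_user_ids
    )
    return eager | notifiable_idle
```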
At some point as part of the process of supporting renumbering data,
we changed the structure of our file uploads to expect `path` to match
`s3_path`, with both having the relative path within the overall
hierarchy (including the realm ID). This change updates the more
rarely-used S3 export code path to use that model, fixing a crash when
messages reference an Attachment object with a rewritten path_id.
If any user had sent the reply to the welcome bot recommended by our
tutorial, then the Zulip export/import process didn't work properly,
because we weren't including (and then remapping) the recipient ID for
sending PMs to the cross-realm bots. This commit fixes that gap, by
recording the necessary data on the export side, and doing the
appropriate remapping on the import side.
Previously, our realm import logic only did the special remapping
logic for the original notifications_stream_id; when we added the new
signup_notifications_stream_id field, we neglected to handle it in the
same way.
Note we're no longer using subscriptions_html in the help docs, so no need
to test for it. There is already a test for subscriptions_html in
IntegrationTest.
In the event that two processes are racing to be the
first to load data from zulip.yaml, we now make the
race result in duplicated effort instead of having
the second racer get an AttributeError on `data`.
We do this by declaring victory only after setting
`data`. "Declaring victory" in this case is a matter
of setting `last_update`.
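A sketch of that ordering with hypothetical names: `last_update` is
assigned only after `data` is fully loaded, so a concurrent caller
either redoes the load or sees complete data.

```
import time
from typing import Any, Dict, Optional

import yaml

data: Optional[Dict[str, Any]] = None
last_update: Optional[float] = None


def get_data(path: str = "zulip.yaml", max_age: float = 60.0) -> Dict[str, Any]:
    global data, last_update
    now = time.time()
    if last_update is None or now - last_update > max_age:
        with open(path) as f:
            loaded = yaml.safe_load(f)
        # Publish the data first, then "declare victory"; the worst a
        # racing process can do is duplicate this work.
        data = loaded
        last_update = now
    assert data is not None
    return data
```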
We are still possibly vulnerable to corrupted data
here, so we should investigate a mutex, or just
read the data on every call (but it's strangely
expensive, almost 3.5s on my instance), or converting
the YAML to code before launching the server.
This fixes an inconsistent test failure with test_users.py (that
depended on the ordering between this migration and the creation of
test database users like hamlet).
We start by stripping the ids in front of the name before the database
lookup. This has the advantage of not mentioning anyone if an incorrect
user id and full name combination is specified, as well as not having
to query the database twice, once by full name and then by id.
Previously, when multiple users shared the same full name, we were
storing only the most recent of them; this commit adds new keys to the
dict so that simply looking up by name still returns the newest user
with that name, while the get_user_by_id function can index the
remaining users.
This is largely inspired by requests from people who don't like
Google's new emojiset. A lot of people were asking to revert
to the old blob emojiset, so we are re-enabling this feature
after making the relevant infrastructure changes to support Google's
old blob emojiset and re-adding support for the Twitter emojiset.
Fixes: #10158.
Fixes part of #10297.
Use FAKE_LDAP_NUM_USERS which specifies the number of LDAP users
instead of FAKE_LDAP_EXTRA_USERS which specified the number of
extra users.
This adds a feature in the "Notification" section of the "Settings"
tab, which lets users enable or disable login email notifications.
Tweaked by tabbott to simplify the test.
Fixes: #5795, progress towards #5854.
Also use the form name for selecting the form in Casper tests,
since a form with action=new is present in both /new and
/accounts/new/send_confirm/, which breaks the test in CircleCI
because waitWhileVisible('form[action^="/new/"]') never stops
waiting.
We also remove some unreachable code. Calling
split() always returns at least one token, even
if it's just the empty string. This is tested
directly in this commit, plus messages with
empty content get rejected pretty early in
the execution path.
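For reference, the behavior in question (this holds when an explicit
separator is passed; whitespace splitting of an empty string instead
yields an empty list):

```
assert "".split("/") == [""]          # one token: the empty string
assert "foo".split("/") == ["foo"]    # separator absent: still one token
assert "a/b".split("/") == ["a", "b"]
assert "".split() == []               # the whitespace-splitting exception
```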
In user type custom fields, the field value is a list of user ids. We
weren't converting the list to a JSON object in the update event
payload. This throws an error in the frontend, because we store a
stringified representation of the custom field value. Therefore, after
the update event is received, the field value type changes from string
to array, which throws a JSON parsing error.
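A sketch of the shape of the fix (names are illustrative, and the
standard json module stands in for whatever serializer the code
actually uses): the list of user ids is serialized to a JSON string
before going into the event payload.

```
import json
from typing import List


def user_field_value_for_event(user_ids: List[int]) -> str:
    # The frontend stores the stringified form of the custom field
    # value, so the update event must carry a JSON string rather than a
    # raw array.
    return json.dumps(user_ids)


assert user_field_value_for_event([11, 42]) == "[11, 42]"
```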
Having HTML (or HTML-like) content in the examples was making parts of
the content invisible, since the browser identified them as HTML tags
rather than verbatim text.
There are some endpoints that don't fall into the currently available
categories, so this new function will be used for calling the tests for
server and realm-related endpoints.
Using early-exit here allows us to more easily
comment why there are certain exemptions to
this logic.
We also only require callers to pass in realm,
not the whole user object.
The function being tested here was kind of an
emergency response to some spam attacks. It
works for a pretty specific set of circumstances,
so it requires a lot of setup.
We may eliminate this function as we improve
our realm "plan types", and if that happens, we
can either eliminate this test or repurpose it.
Since this class was built, folks have always chosen
to subclass JsonableError for situations where
the default of ErrorCode.BAD_REQUEST is insufficient.
So now we simplify the use cases, which also gets
us 100% coverage on this core module.
The output of generate_dev_ldap_dir was being tested against the fixture
located at zerver/tests/fixtures/ldap_dir.json. This didn't make much sense
as generate_dev_ldap_dir was itself used by developers to generate/update
the fixtures. Instead, test_generate_dev_ldap_dir checks the structure of
the dict returned by generate_dev_ldap_dir. The structure is checked
via regexes, by verifying whether the dict contains certain keys, etc.
This commit adds FIELD_TYPE_CHOICES_DICT to page_params and replaces
FIELD_TYPE_CHOICES.
FIELD_TYPE_CHOICES_DICT includes all field types with their keyword, id,
and display name. Using this field-type dict, we can access field type
information by its keyword, and remove all static uses of a field
type's name or id in the frontend.
This commit also modifies the functions in js where this page_params
field-types data is used.
This commit modifies the FIELD_TYPE_DATA dict in the `CustomProfileField`
model to store the keyword of each field type, and creates a new dict,
FIELD_TYPE_CHOICES_DICT, to store all field type information
(i.e. id and name) keyed by field type keyword.
This is a preparatory commit to remove all static uses of field
types in the frontend and access field types by keyword instead
of display name.
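A sketch of the kind of structure this describes; the keywords, ids,
and display names below are illustrative placeholders, not the real
values:

```
FIELD_TYPE_CHOICES_DICT = {
    "SHORT_TEXT": {"id": 1, "name": "Short text"},
    "LONG_TEXT": {"id": 2, "name": "Long text"},
    "DATE": {"id": 4, "name": "Date picker"},
    "USER": {"id": 6, "name": "Person picker"},
}

# The frontend can now look up a field type by keyword instead of
# hard-coding its id or display name:
date_field_id = FIELD_TYPE_CHOICES_DICT["DATE"]["id"]
```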
This prevents leaking some variables into an already
cluttered function.
We also add test coverage for what's now an
early-exit condition in the new function--we exempt
public MIT streams from these events.
This extends a test that proved only what Cordelia
could do with/without super_user privileges when she
was trying to send to an unsubscribed stream as herself.
Now the test shows the same powers extend to Cordelia
when she's sending messages on behalf of a mirrored
user.
This change was partially driven by a quirk in Python
where peephole optimizations make `continue` lines
appear not to be covered.
I also think it's generally a good idiom to extract
functions for loop bodies when they don't actually
accumulate values or maintain other state. With this
commit we now prevent potential bugs from vars like
`is_stream` leaking between loop iterations.
We simulate a race condition by mocking create_user
to actually create a user, but then raise an
IntegrityError (as if another process had actually
created the user, not our test).
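A self-contained sketch of the mocking pattern (stand-in names
throughout): the wrapper really performs the creation and then raises,
so the caller's IntegrityError handling runs against realistic state.

```
from unittest import mock


class IntegrityError(Exception):
    """Stand-in for django.db.IntegrityError in this self-contained sketch."""


created = []


def create_user(email: str) -> None:
    created.append(email)


real_create_user = create_user


def racy_create_user(email: str) -> None:
    # Really create the user (so the database state matches what a
    # losing racer would see), then raise as if another process had
    # inserted the row first.
    real_create_user(email)
    raise IntegrityError()


with mock.patch(f"{__name__}.create_user", side_effect=racy_create_user):
    try:
        create_user("cordelia@example.com")
    except IntegrityError:
        pass

assert created == ["cordelia@example.com"]
```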
I also changed the real code to use explicitly
named parameters.
I don't understand why this didn't cause test failures in CI; this
change was clearly required and test_change_realm_property was failing
consistently for me locally.
The missing fields are checked by the `full_clean()` method.
The datetime field errors are ignored, as they are fixed
in the `import_realm` script. The fields that are
allowed to be null are not included while building
this object.
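A sketch of that pattern, with hypothetical field names: run
`full_clean()`, then drop the errors for the datetime fields that
import_realm repairs later.

```
from django.core.exceptions import ValidationError
from django.db import models

# Hypothetical set of datetime fields whose errors we tolerate here.
IGNORED_DATE_FIELDS = {"date_sent", "last_login"}


def check_missing_fields(obj: models.Model) -> None:
    try:
        obj.full_clean()
    except ValidationError as e:
        remaining = {
            field: msgs
            for field, msgs in e.message_dict.items()
            if field not in IGNORED_DATE_FIELDS
        }
        if remaining:
            raise ValidationError(remaining)
```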
These test cases are used to test the cost of stream creation.
Three scenarios of stream creation are covered:
1) create a public stream;
2) create a private stream;
3) create a public stream with announce=true when there is a notification stream.
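A sketch of what such a cost test can look like, using Django's
assertNumQueries for illustration (the helper names and the query
count are placeholders, not the real values):

```
from django.test import TestCase


class StreamCreationCostTest(TestCase):
    def test_create_public_stream_query_count(self) -> None:
        user = self.example_user("hamlet")  # hypothetical test helper
        # Pin the query count so stream creation cannot silently become
        # more expensive; 40 is a placeholder, not a measured value.
        with self.assertNumQueries(40):
            self.common_subscribe_to_streams(user, ["new stream"])  # hypothetical helper
```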
Fix: #4804.
We've been getting reports from users that our Freshdesk webhook
isn't working correctly. It turns out that the issue had nothing
to do with the webhook implementation itself!
In freshdesk/doc.md, we have a JSON template we ask users to
copy/paste into a textbox in the Freshdesk UI. That JSON template
contains "{{" and "}}" characters which we escaped as Unicode
decimals to prevent clashes with Jinja2 syntax in other parts
of the same template. This worked for a while!
But thanks to the changes introduced as part of the
nested_code_blocks extension, such escaped characters were never
decoded, leading users to copy/paste the same template but with
raw escaped unicode representations of "{{" and "}}" inside. And
that eventually broke our webhook implementation.
This commit makes sure that such characters are properly "unescaped",
just for Freshdesk docs.
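Concretely, the "unescaping" amounts to something like this (the
decimal entities are the escaped forms of "{" and "}"; the actual hook
point in the docs-rendering code may differ):

```
def unescape_curly_braces(rendered_doc: str) -> str:
    # doc.md escapes "{" and "}" as decimal entities so Jinja2 does not
    # interpret the Freshdesk placeholders; restore the literal braces
    # before presenting the copy/paste template to the user.
    return rendered_doc.replace("&#123;", "{").replace("&#125;", "}")


assert unescape_curly_braces("&#123;&#123;ticket.id&#125;&#125;") == "{{ticket.id}}"
```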
We have code to prevent newbies on open realms
from inviting users. This is mostly intended
to hinder spammers. This commit just adds some
test coverage.
Our get_streams_traffic function used to query
all streams in the StreamCount table if you
passed in `None` for `streams`.
Now we require that you pass in a list of
stream_ids.
I don't know how much work this will save
the database, since probably the bulk of
the work is aggregating. If we need to fine
tune DB performance, we could possibly add
`realm` as an argument and add it to the filter.
What we'll immediately get, for large multi-realm
installations, is less data over the wire and
less work for the ORM.
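A sketch of the narrower query, with assumed field names on
StreamCount; filtering by the passed-in stream ids keeps both the
result set and the ORM work proportional to the caller's streams.

```
from typing import Dict, List

from analytics.models import StreamCount  # assumed import path


def get_streams_traffic(stream_ids: List[int]) -> Dict[int, int]:
    # Only aggregate rows for the streams the caller actually cares
    # about, instead of scanning StreamCount for every realm.
    traffic: Dict[int, int] = {}
    rows = StreamCount.objects.filter(stream_id__in=stream_ids).values(
        "stream_id", "value"
    )
    for row in rows:
        traffic[row["stream_id"]] = traffic.get(row["stream_id"], 0) + row["value"]
    return traffic
```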
The prior code used an awkward idiom that
pre-dates the `exists()` function, and it
had an unreachable line of code.
The new version should be faster, since we
don't create a throwaway heavy Django object
or send needless data over the wire.
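The new shape is roughly this (model and field names are assumptions):
exists() lets the database answer the yes/no question without
materializing a row.

```
from zerver.models import Subscription, UserProfile  # assumed import path


def has_active_subscription(user_profile: UserProfile) -> bool:
    # exists() issues a cheap SELECT ... LIMIT 1 instead of building a
    # Django object we would immediately throw away.
    return Subscription.objects.filter(
        user_profile=user_profile, active=True
    ).exists()
```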
This function appears to be redundant with
`access_stream_by_name`. The only
meaningful line of code in the function that we're
removing, the code that raises an error,
appears to be unreachable, despite reasonably
extensive tests.
The only thing the function was restricting
was the case where the bot's owner was
unsubscribed from a private stream, which
is already locked down in
`access_stream_by_name` calls inside of
`patch_bot_backend`.
This commit increases test coverage
by removing unreachable code.
It's possible this function had
some theoretical value before we
introduced the `require_non_guest_human_user`
decorator to the `patch_bot_backend`
view, since in theory the bot itself
could have subscribed to a stream that
the owner didn't subscribe to. Even
then it's not clear that allowing the
bot to set that as a default stream
would have been harmful, since they
can already access it.
This commit adds some more tests related to patching
a bot's `default_sending_stream`.
Unfortunately, this didn't reach the code that I was
intending to add line coverage to, since checks happen
higher up in the stack, but the test code I added
is probably worthwhile.
We want our methodology for extracting the last message
id to be consistent, particularly in terms of how we
handle edge cases. (I'll concede that the
`bulk_remove_subscriptions` codepath never hits that
corner case in practice, but it's harmless to handle
the theoretical case.)
It may also be nice to have this function show up
clearly in profiling.
This also adds some direct testing to the function.
It's not clear to me why we don't use `latest('id')`
in the implementation, but that's outside the scope
of this commit.
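A sketch of such a helper, with assumed names and an assumed sentinel
for the empty corner case:

```
from django.db.models import Max

from zerver.models import Message, Recipient  # assumed import path


def get_last_message_id_for_recipient(recipient: Recipient) -> int:
    # Aggregate in the database; Max("id") over an empty queryset is
    # None, which we normalize to a sentinel so callers always get an int.
    last_id = Message.objects.filter(recipient=recipient).aggregate(Max("id"))[
        "id__max"
    ]
    if last_id is None:
        return -1
    return last_id
```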