The major changes are:
* Remove the --destroy-rebuild-database option
* Merge the new and existing self-hosted server sections
* Change the wording of the Gitter document to match the Slack one
IFTTT allows custom templating for their payloads, so the onus is
on the user to ensure that their custom templates conform to the
expectations outlined in our IFTTT webhook docs. For that reason,
these payloads weren't generated, but were manually edited.
After discovering a couple of bugs, I decided to thoroughly test
and rewrite this integration from scratch. The older code wasn't
generating coherent messages.
This commit also gets this integration up to 100% test coverage.
Test coverage was improved by removing an unused function and some
code (written by me) that was handling Test Hook event types
incorrectly.
It was a painful amount of work to generate the actual payload.
Since the only difference was a small build URL, I manually
edited the payload and used that for testing.
This commit gets our GitHub webhook up to 100% test coverage.
Some of the page build message code had insufficient test coverage.
I looked at generating the payloads that would allow me to test
the lines of code in question, but generating them was too much
work, and this seemed like an obscure event anyway.
So I just rewrote the logic so that the lines missing
coverage are implicitly covered.
This is a part of our efforts to get this webhook's coverage
up to 100%.
Note that apart from just testing an uncovered line of code, this
commit also fixes a minor bug in the code for messages about issue
comment deletion and editing.
Note that Freshdesk allows custom templating for outgoing payloads
in their webhook UI. Therefore, the payloads added in this commit
did not have to be official payloads from Freshdesk.
Instead of just referring to the commit with the raw URL, we
should use the commit ID as the text of the hyperlink.
Note that in commit_status_changed type messages, the name of the
commit isn't available.
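For illustration, here is a minimal sketch of the intended link
format (the helper name is hypothetical, not the integration's
actual code):

```python
def format_commit_link(commit_url: str) -> str:
    # Hypothetical helper: use the commit ID (the last path segment
    # of the URL) as the link text instead of showing the raw URL.
    commit_id = commit_url.rstrip("/").split("/")[-1]
    return f"[{commit_id}]({commit_url})"
```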
The function that generates the body of the commit_status_changed
event messages generated an invalid commit URL.
Most likely, we missed this because this event type is fairly
obscure and may never have been used much, if at all.
The lack of coverage was due to:
* A function that was never used anywhere.
* get_commit_status_changed_body was using a regex where it didn't
  really need one, and had an if statement assuming that the payload
  might NOT contain the URL to the commit. However, I checked the
  payload, and there shouldn't be any case where a commit event is
  generated without a URL to the commit. See the sketch after this
  list.
* get_push_tag_body had an `else` condition that really can't happen
in any payload. I verified this by checking the BitBucket webhook
docs.
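Below is a hedged sketch of that simplification; the payload field
paths are illustrative assumptions, not BitBucket's verified schema:

```python
def get_commit_status_changed_body(payload: dict) -> str:
    # Sketch only: every commit_status_changed payload links to its
    # commit, so there is no need for a regex or a missing-URL
    # fallback branch. Field paths here are assumptions.
    commit_url = payload["commit_status"]["links"]["commit"]["href"]
    state = payload["commit_status"]["state"]
    return f"Commit status changed to {state}: {commit_url}"
```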
We shouldn't just ignore exceptions when encoding the incoming
auth credentials. It is better to know whether the incoming
credentials were properly encoded, and to see the actual error
when they weren't or when something else fails.
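As a minimal sketch of the idea (not the actual decorator code),
let a decoding failure propagate with its real cause:

```python
import base64

def decode_auth_credentials(encoded: str) -> str:
    # Hypothetical helper: no bare try/except here. If the
    # credentials are malformed, b64decode() or .decode() raises,
    # and the failure surfaces instead of being silently ignored.
    return base64.b64decode(encoded).decode("utf-8")
```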
This is a very early version of a tool to convert Hipchat
tar files into data files that can be used by the Zulip
import process.
We include the most fundamental entities--users and
streams. Customers who don't care about past messages
or customizations could start an instance off of this
and start communicating.
Of course, there are a lot of things missing in the
initial version:
* messages!
* file assets -- avatars, emojis, attachments
* probably lots of other minor things
We currently ignore any incoming dates from Hipchat data
and just use the current time. This is consistent with
other imports.
We also don't have any docs yet, although the process
will be extremely similar to the "Slack" process:
https://zulipchat.com/help/import-from-slack
Also, there's a comment at the top of convert_hipchat_data.py
that describes how to test this in dev mode.
I tested this by following the steps in the comment above.
The users just "show up" in /devlogin, so that's nice, and
you can send messages to other users. To verify the stream
data you have to go into the gear menu and click on "All
Streams", then you can subscribe and send a message.
Production users will need to get new passwords and
re-subscribe to streams. We will probably auto-subscribe
all users to public streams.
The code was needlessly querying the DB to get full
objects for entities where we only needed user_id,
realm_id, and stream_id.
With my test data of ~1000 records, this sped up the function
from ~8s to ~0.5s. The speedup would probably be even greater for
larger data sets.
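In Django ORM terms, the change looks roughly like this sketch
(the helper itself is hypothetical):

```python
from zerver.models import Realm, UserProfile

def get_user_ids(realm: Realm) -> list[int]:
    # values_list("id", flat=True) asks the database for just the
    # id column instead of building a full UserProfile object for
    # every row.
    return list(
        UserProfile.objects.filter(realm=realm).values_list("id", flat=True)
    )
```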
The `match_subject` field is supposed to contain HTML; that's how
the highlighting is done. But the `subject` field is plain text --
it must be HTML-escaped if we want the corresponding HTML.
Of the three places the `match_subject` field is populated -- two
here in messages_in_narrow_backend, one in get_messages_backend --
two of them already do this correctly, via get_search_fields.
Fix the remaining one, where in a `/messages/matches_narrow` query
we populate `match_subject` even if the query didn't involve a
full-text search.
This doesn't affect the webapp, which ignores `match_subject` unless
it knows it did a full-text search; nor the mobile app, which
doesn't use `/messages/matches_narrow` at all.
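A minimal sketch of the escaping step, with a hypothetical helper
name:

```python
from django.utils.html import escape

def plain_subject_as_html(subject: str) -> str:
    # With no full-text search there is no highlighting to add, so
    # the HTML form of `match_subject` is just the escaped
    # plain-text subject.
    return escape(subject)
```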
The boilerplate here is supposed to be an example of how to apply the Apache
license to files in the project, and is not intended to be applied to the
LICENSE file itself. See https://github.com/benbalter/licensee/issues/103
for some discussion of this.
Moved the text from the LICENSE file to a NOTICE file, with one wording
change: "this file" -> "this project".
Previously, we only added the active class to the Date uploaded
column, assuming the list was already sorted by upload date by
default. However, it wasn't, so now we explicitly sort it by
upload date, fixing the broken sorting.
Fixes #10518.