While it does work, it's more an accident than intentional behavior
and not something we want to be encouraging (and it's messier code).
(imported from commit 3797147fc21836135a6304412bd3f958873a0576)
This shows the number of messages sent by humans for the last
eight 24-hour periods, for each realm. "Messages sent" isn't a
perfect metric of activity, but it's easier to query with our
current data model than certain other statistics.
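For illustration, a rough sketch of the kind of query involved
(field names like pub_date and sender__is_bot are assumptions
about our models):

    from datetime import timedelta
    from django.db.models import Count
    from django.utils import timezone
    from zerver.models import Message

    def human_message_counts_by_realm(days=8):
        counts = {}
        now = timezone.now()
        for day in range(days):
            start = now - timedelta(hours=24 * (day + 1))
            end = now - timedelta(hours=24 * day)
            rows = (Message.objects
                    .filter(pub_date__gte=start, pub_date__lt=end,
                            sender__is_bot=False)
                    .values('sender__realm__domain')
                    .annotate(cnt=Count('id')))
            for row in rows:
                counts.setdefault(row['sender__realm__domain'],
                                  []).append(row['cnt'])
        return counts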
(imported from commit 9de3c479640a0b9dbc017b245dda21d951f4efa4)
Validators are similar to converters, but they don't have
to parse JSON, and they are told the name of the request
variable to help format error messages.
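A minimal sketch of the convention (simplified from the real
validators):

    def check_string(var_name, val):
        # Return an error message on failure, None on success; the
        # var_name lets us say exactly which parameter was bad.
        if not isinstance(val, str):
            return '%s is not a string' % (var_name,)
        return None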
(imported from commit 3c33e301892519c67e70675006d5686d9f013353)
Make sure that principles is a list of strings (unless it
is None). This includes a unit test.
(imported from commit c2e3f1c0cafc207ceca67d5a174ef4e29a32c6ca)
This sets up a scheme to validate complex data structures and
give specific error messages for improperly typed parameters.
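A hedged sketch of how such validators can compose for nested data
(check_list and check_string here are simplified stand-ins):

    def check_list(sub_validator):
        # Build a validator for lists whose items must also validate.
        def f(var_name, val):
            if not isinstance(val, list):
                return '%s is not a list' % (var_name,)
            for i, item in enumerate(val):
                error = sub_validator('%s[%d]' % (var_name, i), item)
                if error:
                    return error
            return None
        return f

    # e.g. check_list(check_string)('emails', ['x@zulip.com', 5])
    # returns 'emails[1] is not a string'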
(imported from commit 33b2f070d993da4ee929119dd41503bd0128c8eb)
* Deal with shorter tweet IDs
(some old tweets don't have a full 18-character ID)
* Allow trailing slash
* Deal with old-style #! syntax
* Deal with links that link to a photo
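A hedged sketch of the looser matching described above (the real
pattern in bugdown may differ):

    import re

    TWEET_LINK_RE = re.compile(
        r'^https?://twitter\.com'
        r'(?:/#!)?'            # old-style #! syntax
        r'/(?P<user>\w+)'
        r'/status(?:es)?'
        r'/(?P<id>\d{1,18})'   # some old tweets have short IDs
        r'(?:/photo/\d+)?'     # links that point at a photo
        r'/?$')                # optional trailing slash

    m = TWEET_LINK_RE.match('https://twitter.com/#!/zulip/status/123/')
    tweet_id = m.group('id') if m else None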
(imported from commit 008a98c806f3b8dddd9e2f18a8f002af6932766f)
These images at least load now, but that's because Camo redirects
the browser to the origin server, so the only effect is an extra
round-trip time.
(imported from commit 0d6b9c888a5cdfaa9299272d74a085e872dfa434)
This will allow us to substantially decrease the server-side work that
we do to support our Mirroring systems (since the personal mirrors can
request only messages that the user sent), and it is also what we need to
support a single-stream Zulip widget that we embed in webpages.
(imported from commit 055f2e9a523920719815181f8fdb44d3384e4a34)
Now that we support email aliases, we have to be careful when
deriving, from an email address, a domain that we can use to look
up a Realm object. When we care about the Realm's domain, we want
to follow any RealmAliases that exist for that domain.
When we just care about the original email address's domain itself,
for comparison or other purposes, use split_email_from_domain.
This removes the ambiguity of having to decide when to use
email_to_domain + RealmAlias or just email_to_domain.
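A hedged sketch of the two paths (the real helpers and RealmAlias
lookup details may differ):

    def split_email_from_domain(email):
        # Just the literal domain of the address, for comparisons.
        return email.split('@')[-1].lower()

    def resolve_realm_domain(email):
        # Follow a RealmAlias, if one exists, to the Realm's domain.
        from zerver.models import RealmAlias
        domain = split_email_from_domain(email)
        try:
            return RealmAlias.objects.get(domain=domain).realm.domain
        except RealmAlias.DoesNotExist:
            return domain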
(imported from commit 0e199495502d946ce2e1aae56263e7e8665be4ed)
It's a little weird that these still open in a new tab, but it might
be best to keep them consistent with all other links?
This is a first pass on Trac #1927.
(imported from commit 390bdef790a83af4240ad5f5a82e572ef5824756)
Now we can nest fenced code/quote blocks inside of quote
blocks down to arbitrary depths. Code blocks are always leaves.
Fenced blocks start with at least three tildes or backticks,
and the clump of punctuation then becomes the terminator for
the block. If the user ends their message without terminators,
all blocks are automatically closed.
When inside a quote block, you can start another fenced block
with any header that doesn't match the end-string of the outer
block. (If you don't want to specify a language, then you
can change the number of backticks/tildes to avoid ambiguity.)
Most of the heavy lifting happens in FencedBlockPreprocessor.run().
The parser works by pushing handlers on to a stack and popping
them off when the ends of blocks are encountered. Parents communicate
with their children by passing in a simple Python list of strings
for the child to append to. Handlers also maintain their own
lists for their own content, and when their done() method is called,
they render their data as needed.
The handlers are objects returned by functions, and the handler
functions close on variables push, pop, and processor. The closure
style here makes the handlers pretty tightly coupled to the outer
run() method. If we wanted to move to a class-based style, the
tradeoff would be that the class instances would have to marshall
push/pop/processor etc., but we could test the components more
easily in isolation.
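For flavor, a heavily simplified sketch of that closure style (the
real run() also handles code blocks, nesting rules, and language
headers):

    import re

    FENCE_RE = re.compile(r'^(~{3,}|`{3,})')

    def run(lines):
        output = []
        stack = []

        def push(handler):
            stack.append(handler)

        def pop():
            stack.pop()

        def quote_handler(parent_lines, fence):
            # Closes over push/pop; children would append to my_lines.
            my_lines = []

            def handle_line(line):
                if line.rstrip() == fence:
                    pop()
                    done()
                else:
                    my_lines.append(line)

            def done():
                parent_lines.extend('> ' + s for s in my_lines)

            return handle_line, done

        for line in lines:
            if stack:
                stack[-1][0](line)
            else:
                m = FENCE_RE.match(line)
                if m:
                    push(quote_handler(output, m.group(1)))
                else:
                    output.append(line)

        while stack:  # ended without terminators: close all blocks
            handle_line, done = stack.pop()
            done()
        return output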
Dealing with blank lines is very fiddly inside of bugdown.
The new functionality here is captured in the test
BugdownTest.test_complexly_nested_quote().
(imported from commit 53886c8de74bdf2bbd3cef8be9de25f05bddb93c)
The register_json_consumer() function now expects its callback
function to accept a single argument, which is the payload, as
none of the callbacks cared about channel, method, and properties.
This change breaks down as follows:
* A couple test stubs and subclasses were simplified.
* All the consume() and consume_wrapper() functions in
queue_processors.py were simplified.
* Two callbacks via runtornado.py were simplified. One
of them, socket.respond_send_message, also had a caller
other than register_json_consumer(); that caller was
simplified so it no longer passes None for the three
removed arguments.
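For illustration (the queue name and process() are made up):

    def handle_event(event):  # just the JSON payload now
        process(event)

    # previously: def handle_event(channel, method, properties, event)
    queue_client.register_json_consumer('user_activity', handle_event)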
(imported from commit 792316e20be619458dd5036745233f37e6ffcf43)
Subclasses of QueueProcessingWorker that don't override start() will
have their consume() functions wrapped by consume_wrapper(), which
will catch exceptions and log data from troublesome events to a log
file.
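A hedged sketch of the wrapping (the exact log format and file
layout under /var/log/zulip/queue_error are assumptions):

    import logging
    import os

    def consume_wrapper(self, data):
        try:
            self.consume(data)
        except Exception:
            logging.exception("Problem handling data on queue %s"
                              % (self.queue_name,))
            fn = os.path.join('/var/log/zulip/queue_error',
                              '%s.errors' % (self.queue_name,))
            with open(fn, 'a') as f:
                f.write('%s\n' % (data,))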
We need to do a puppet apply to create /var/log/zulip/queue_error.
(imported from commit 3bd7751da5fdef449eeec3f7dd29977df11e2b9c)
The "Your Account" and "Notifications" boxes on the Settings
page each had their own border and their own "Save changes"
button, but they were part of the same form and posted to
the same backend endpoint.
This commit creates a separate form and endpoint for each
of the two boxes.
(imported from commit 04d4d16938f20749a18d2c6887da3ed3cf21ef74)
Trac #1734
This is implemented by bouncing uploaded file links through a view
that checks authentication and redirects to an expiring S3 URL.
This makes file uploads return a domain-relative URI. The client converts
this to an absolute URI when it's in the composebox, then back to relative
when it's submitted to the server.
We need the relative URI because the same message may be viewed across
{staging,www,zephyr}.zulip.com, which have different cookies.
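A hedged sketch of the bounce view (the decorator, conn, and BUCKET
are stand-ins for Zulip's own auth decorator and boto setup):

    from django.contrib.auth.decorators import login_required
    from django.http import HttpResponseRedirect

    @login_required
    def serve_uploaded_file(request, path):
        # conn is an authenticated boto S3 connection built elsewhere
        url = conn.generate_url(expires_in=60, method='GET',
                                bucket=BUCKET, key=path)
        return HttpResponseRedirect(url)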
(imported from commit 33acb2abaa3002325f389d5198fb20ee1b30f5fa)
As it turns out, some of these tests used message IDs 1 and 2, which
Hamlet didn't even necessarily receive as the messages to update --
which meant that they previously updated 0 messages and returned
success. So those tests started failing when I added a check for not
updating anything in the update_message_flags backend -- and this
commit fixes the tests to actually update a nonempty set of messages.
(imported from commit 9034b415d4862216a266416a8e509d987050ffd7)
The gather_subscriptions_helper() does a separate query to
get emails from user_ids, and it returns an email_dict to its
caller.
This may seem like a step backward, since gather_subscriptions()
now needs to do an additional query, but there is some benefit
in passing fewer redundant emails over the wire from the DB.
The real payoff, though, will come in subsequent commits, where
we will reduce the amount of data going over the wire to the browser,
which will benefit users with slow connections.
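Roughly (assuming the usual UserProfile model):

    email_dict = dict(UserProfile.objects
                      .filter(id__in=user_ids)
                      .values_list('id', 'email'))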
(imported from commit bf1cc5828a4c5f68cafd052ea29a177837970206)
I am about to change the behavior of the internal API, and it's really more
important to have test coverage on the external API anyway.
(imported from commit 8a0723cbcb4ac1819a63397584aa40e69ceb827d)
Arguably the nl2br extension should be doing this for us. Given that
we're using nl2br, the "two spaces at the end of a line makes a line
break" rule doesn't make any sense (since every newline leads to a
linebreak), so we disable it.
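A hedged sketch (in python-markdown 2.x the two-space rule is the
'linebreak' inline pattern; the key name may vary by version):

    import markdown

    md = markdown.Markdown(extensions=['nl2br'])
    del md.inlinePatterns['linebreak']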
(imported from commit 5ffa2ac8a825642ad31e085c532091e076665710)
In order to support iOS Push Notifications, we need to keep track
of a device's unique APNS Token. These are delivered to our iOS
code after registering for remote notifications.
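A hedged sketch of the storage (field names are illustrative, not
necessarily the real model's):

    from django.db import models

    class AppleDeviceToken(models.Model):
        user = models.ForeignKey('UserProfile')
        token = models.CharField(max_length=255, unique=True)
        last_updated = models.DateTimeField(auto_now=True)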
(imported from commit bbe34483e1380dc20a1c93e3ffa1fcfdb9087e67)
Trac #1162
The process_fence method replaces code blocks with placeholders, so
indexes stored before the replacement are incorrect. However, because
the closed code blocks have been replaced, we can simply search the
whole string for any remaining opening code block markers.
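Roughly:

    import re

    FENCE_RE = re.compile(r'^ {0,3}(~{3,}|`{3,})', re.MULTILINE)

    def has_unclosed_fence(content):
        # Closed blocks are already placeholders, so any marker left
        # must be an unterminated opener.
        return FENCE_RE.search(content) is not None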
(imported from commit 6a9e6924840f8f3ca5175da7c52a905e27c1fabd)
We want to avoid opening a DB connection in the markdown thread,
as its DB connection might live for a long time.
(imported from commit 7700b2ca793ee5e9add7f071b92f22a4bf576b3d)
A few "slow" tests aren't as slow any more, for whatever reason,
so we're setting a higher bar going forward.
(imported from commit 642137cebb7826f4512b5635da9d7b75bd5c35f4)
The text of manual links is already an AtomicString, so linkified
strings should be too.
Moves emoji detection to happen after linkification, so the emoji rule
won't look at links.
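A hedged sketch (make_link is a stand-in; AtomicString is real in
python-markdown 2.x):

    import markdown
    from xml.etree import ElementTree

    def make_link(url, text):
        a = ElementTree.Element('a')
        a.set('href', url)
        # AtomicString keeps later inline rules, like the emoji rule,
        # from re-processing the generated text.
        a.text = markdown.util.AtomicString(text)
        return a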
(imported from commit 9c56bce6a0e873b398255e0762dfb312a4a9a64e)
InlinePatterns should return None on failure, not text that may
have placeholders in it.
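A hedged sketch of the convention (is_safe and make_link are
stand-ins):

    import markdown

    class MyLinkPattern(markdown.inlinepatterns.Pattern):
        def handleMatch(self, m):
            url = m.group('url')
            if not is_safe(url):
                return None  # failure: leave the text alone
            return make_link(url, m.group('text'))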
(imported from commit f9d8d22b2b8cfa7a92ecf3e52a6c76b48e6f0175)
This tab shows how long each user has been on during the last 24
hours, using data from UserActivityInterval. Much of the code
is borrowed from analyze_user_activity.py, but in this version
we set the time interval to be the last 24 hours and sort by
realm and email. I also ensure that it only executes one
query to get all the data (and there's test coverage for that).
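A hedged sketch of the single-query fetch (field names follow
UserActivityInterval's obvious shape but are assumptions):

    from datetime import timedelta
    from django.utils import timezone

    def time_active_in_last_day():
        day_ago = timezone.now() - timedelta(hours=24)
        intervals = (UserActivityInterval.objects
                     .filter(end__gte=day_ago)
                     .select_related('user_profile__realm'))
        totals = {}
        for interval in intervals:  # one query; the math is in memory
            start = max(interval.start, day_ago)
            key = (interval.user_profile.realm.domain,
                   interval.user_profile.email)
            totals[key] = (totals.get(key, timedelta(0))
                           + (interval.end - start))
        return totals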
(imported from commit 7a2b80f52679054b03c5f5f42b2cda07d5599432)
Waseem is ok with removing the client-specific tabs on the
main /activity page. This reduces the number of queries from
25 to 1. We might eventually restore some of that logic, but
we will do it more efficiently. A lot of the data for
non-website clients is kind of unreliable, anyway.
The page looks kind of funny with only one tab, but that
will be fixed in the next commit.
(imported from commit 54f08f89d5242ad3e045d8ca0d97b86617c15380)
When we don't already have old messages in cache, we need to
fetch data from the database and create dictionaries for the
cache. This commit makes that process work in 50ms, instead
of 130ms, for the data set in test_bulk_message_fetching(),
which is 602 records. Before this commit we had about 132
microseconds of unnecessary churn per message, because we
were fetching DB fields we didn't need and incurring the cost
of the Django ORM. Now we use values() to get only the columns
we need, and we take advantage of previous commits that make
our code less OO and more function-driven, so we can pass the
values directly to build_message_dict() without having to create
objects.
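Roughly (the column list and build_message_dict call are
illustrative):

    rows = Message.objects.filter(id__in=needed_ids).values(
        'id', 'sender_id', 'recipient_id', 'subject',
        'content', 'rendered_content', 'pub_date')
    message_dicts = [build_message_dict(**row) for row in rows]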
A couple caveats on this commit:
1) I haven't been able to get good measurements on the overall
effect on get_old_messages_backend(). If you kill the cache to
force DB queries, you introduce noise related to sessions and
user profiles.
2) Look at the long comment in this commit related to
re-rendering messages in this codepath. The problem precedes
this commit.
(imported from commit dcb64aa9416f0e9583355ddd6dc3adfa746b9fc7)
Before, it was trying to use connection.queries, but Django
could pull the rug out from under us. Now we monkeypatch
the CursorDebugWrapper methods instead.
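A hedged sketch of the monkeypatching (module path is Django
1.x-era; TEST_QUERIES is an assumed accumulator):

    import time
    from django.db.backends import util

    TEST_QUERIES = []

    orig_execute = util.CursorDebugWrapper.execute

    def instrumented_execute(self, sql, params=None):
        start = time.time()
        try:
            return orig_execute(self, sql, params)
        finally:
            TEST_QUERIES.append((sql, time.time() - start))

    util.CursorDebugWrapper.execute = instrumented_execute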
(imported from commit 25d5bab47673f2b66a6325f48d33e66c31055ab3)
For a 4-person stream, we were hitting the DB 8 times, and 4 of
those queries were to lazily get user.email for the 4 recipients
due to upstream code using only(). I added user_profile__email
to the only() call.
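A hedged sketch of the shape of the fix (the real query lives in
the upstream code):

    subscribers = (Subscription.objects
                   .filter(recipient=recipient, active=True)
                   .select_related('user_profile')
                   .only('id', 'user_profile__email'))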
I believe this regression started 9/18, and after pushing this
to prod, we should look at this graph:
https://stats1.zulip.net/graphs/8274cd84588
(imported from commit 70629cb69fe5955c674ba76482609dfe78e5faaf)
This tests that a bot's owner gets sent a message if the bot
sends a message to a stream with no subscribers. (Presumably
the message will be a PM; we could make the test more precise
in the future.)
(imported from commit 0aaf931a90cb9c7bc3fde8ac545c6b6ad0a55668)
Don't send peer_add notifications to users who are already
getting add notifications, because they will already know
about subscribers.
(imported from commit 726b54ae0e30b71440b17d9c51b026872ea96218)
LinkPattern returned a string which contained a placeholder if the URL was
considered invalid. AtomicLinkPattern wrapped this in an AtomicString,
where the placeholder doesn't get removed properly.
m.group(0) is always incorrect because python-markdown modifies your regex
to include more than you specified (this is why part of the message got
duplicated).
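An illustration of the group shift (python-markdown wraps your
pattern roughly like this):

    import re

    pattern = r'\[(?P<text>[^\]]*)\]\((?P<url>[^\)]*)\)'
    wrapped = re.compile(r'^(.*?)%s(.*?)$' % pattern, re.DOTALL)
    m = wrapped.match('before [click](http://example.com) after')
    assert m.group(0) == 'before [click](http://example.com) after'
    assert m.group('url') == 'http://example.com'  # named groups are safe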
(imported from commit 576bdf09c2b677cf4bc56484c363eb05f2110158)
The test will fail if a new attribute is added to the structure that
gather_subscriptions() returns. It should only be concerned with the
subscription's color.
(imported from commit fd5bad97bbce2544e0078ee029f54d4e45da9c15)
It is triggered by specifying the "language" of a code block as
"quote" or "quoted":
Hamlet said:
~~~ quote
To be or **not** to be.
That is the question
~~~
(imported from commit 847a0602e335e9f2955e32d9955adf8ac8de068c)
Previously, when added to an invite-only stream, you got notifications
like this:
Hi there! We thought you'd like to know that Some Person just subscribed
you to the invite-only stream 'Secret stuff'
You can see historical content on a non-invite-only stream by narrowing
to it.
Note that the second line is irrelevant and confusing in light of the
first!
This commit leaves this out, and also, to make sure I didn't mess
anything up (and because I needed to change the tests anyway), adds a
test for invite-only stream notifications.
(imported from commit 49c333629c78fc06f6d2f1ec8a627c6d38e7716a)
Previously it only provided the list of all public streams; now it
allows one to specify any union of the following:
* all public streams
* all streams the user is subscribed to
(the most relevant being the union of those two, which is what we want
for the "streams" page).
Or:
* all streams in realm (superuser only)
The manual task required is that when this is pushed to prod, we need
to also deploy the new sync-public-streams version to zmirror.
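A hedged sketch of calling the endpoint with the new knobs
(parameter names are assumptions based on the description):

    import requests

    response = requests.get('https://zulip.example.com/api/v1/streams',
                            params={'include_public': 'true',
                                    'include_subscribed': 'true',
                                    'include_all_active': 'false'},
                            auth=(bot_email, api_key))
    streams = response.json()['streams']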
(imported from commit 27848b8bd136e2777f399b7d05b2fdcec35e4e21)
For syncing streams between Zephyr and Zulip, we need to be able to
have the API client send the server a long list of streams, some of
which might be invite-only, and have the server add the ones that it
can and skip the ones it cannot, without a bunch of annoying round
trips dropping unauthorized streams one by one. This argument makes
that possible.
We might find other applications as well.
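A hedged sketch (the endpoint path and parameter encoding are
assumptions):

    import json
    import requests

    response = requests.post(
        'https://zulip.example.com/api/v1/users/me/subscriptions',
        data={'subscriptions': json.dumps(stream_names),
              'authorization_errors_fatal': 'false'},
        auth=(bot_email, api_key))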
(imported from commit 9236d185897c42218ab6cac3d8f3ddcb1bbc94e9)