We now consistently set our query limits so that we get at
least `num_after` rows such that id > anchor. (Obviously, the
caveat is that if there aren't enough rows that fulfill the
query, we'll return the full set of rows, but that may be less
than `num_after`.) Likewise for `num_before`.
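As a rough sketch of these limit semantics (the helper name and
SQLAlchemy usage here are illustrative, not the actual implementation):

```
from sqlalchemy import column

def apply_window(query, anchor, num_before, num_after):
    # Reserve one extra slot on each side for the anchor row itself,
    # so a matching anchor row doesn't eat into the num_after budget.
    before_query = query.where(column("id") <= anchor) \
                        .order_by(column("id").desc()).limit(num_before + 1)
    after_query = query.where(column("id") >= anchor) \
                       .order_by(column("id").asc()).limit(num_after + 1)
    return before_query, after_query
```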
Before this change, we would sometimes return one too few rows
for narrow queries.
Now, we're still a bit broken, but in a more consistent way. If
we have a query that does not match the anchor row (which could
be true even for a non-narrow query), but which does match lots
of rows after the anchor, we'll return `num_after + 1` rows
on the right hand side, whether or not the query has narrow
parameters.
The off-by-one semantics here have probably been moot all along,
since our windows are approximate to begin with. If we set
num_after to 100, it's just a rough performance optimization to
begin with, so it doesn't matter whether we return 99 or 101 rows,
as long as we set the anchor correctly on the subsequent query.
We will make the results more rigorous in a follow up commit.
We now force downloads for attachment files. We do this for all
files except images and PDFs, which we would like to open in the
browser itself.
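A minimal sketch of the idea (the MIME type list and helper name are
assumptions, not the actual code):

```
INLINE_MIME_TYPES = [
    "image/gif",
    "image/jpeg",
    "image/png",
    "application/pdf",
]  # assumed list; the real one may differ

def content_disposition_for(mime_type: str) -> str:
    # Images and PDFs open in the browser; everything else downloads.
    if mime_type in INLINE_MIME_TYPES:
        return "inline"
    return "attachment"
```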
Tweaked by tabbott for comment clarity and correctness.
This will allow realm admins to remove others from private streams to
which the realm administrator is not subscribed; this is important for
managing those streams, because previously nobody could remove users
from private streams that didn't have any realm administrators
subscribed.
This will allow realm admins to access subscribers of unsubscribed
private streams. This is a preparatory commit for letting realm admins
remove those users.
This will allow realm admins to update the names and descriptions of
private streams even if they are not subscribed, which fixes the buggy
behavior that previously nobody could(!).
If anchor is 0, there is no sense doing a before_query.
Likewise, if anchor is `LARGER_THAN_MAX_MESSAGE_ID`, there is
no sense doing an after_query.
We introduce variables called `need_before_query` and
`need_after_query` to enforce those conditions.
This also adds some comments explaining the fallthrough case
where neither query makes sense.
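A minimal sketch of those flags (the sentinel's value is assumed here):

```
def get_query_flags(anchor: int, num_before: int, num_after: int):
    # Sentinel meaning "positive infinity" for anchors; value assumed.
    LARGER_THAN_MAX_MESSAGE_ID = 10 ** 15
    need_before_query = num_before > 0 and anchor > 0
    need_after_query = num_after > 0 and anchor < LARGER_THAN_MAX_MESSAGE_ID
    return need_before_query, need_after_query
```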
If use_first_unread_anchor is set and we don't have any unread
messages, then our anchor is effectively "positive infinity" and
we can streamline queries.
In the past we'd have clauses like `message_id <= 999999999999999`
in the query that were harmless but crufty.
Apparently, we did essentially all the work to support showing full
topic history to newly subscribed users from a data flow perspective,
but didn't actually enable this feature by having the topic history
endpoint grant access to historical topics. This fixes that gap.
I'm not altogether happy with how the code and tests read for this
feature; the code itself has more duplication than I'd like, and the
tests do too, but it works.
We now include whether the message was a private or group private
message; this is particularly important with the new setting to
disable including any message content in these emails (since in that
case, one doesn't know anything about the message types).
If a new private stream is created by a realm admin who is not
subscribed to it, the created stream is not automatically added to the
realm admin's stream list; we have to reload the browser to get the
newly created stream in the stream list. This is because the private
stream creation event is only sent to the users subscribed to the
private stream, so even if the realm admin is the acting user, they
don't get the creation event.
We should send the private stream creation event to realm admin users
along with the users subscribed to the stream, as realm admins can
access unsubscribed private streams.
Tweaked by tabbott to fix various typos and clean up the code.
Until now, we had been storing the realm emoji's name in the emoji
code field of the Reaction model. This commit migrates it to store the
realm emoji's id instead. It is part of an effort to migrate realm
emojis to be referenced by their id rather than by name.
"incorrect" here means rejected by a bot's validate_config() method.
A common scenario for this is validating API keys before the bot is
created. If validate_config() fails, the bot will not be created.
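For illustration, a hypothetical validate_config() that verifies an
API key (the endpoint, config keys, and exception name are all
assumptions):

```
import requests

class ConfigValidationError(Exception):
    # Exception name assumed for this sketch.
    pass

def validate_config(config: dict) -> None:
    # Ping the third-party API with the configured key before the bot
    # is created; a failure here prevents bot creation.
    response = requests.get(
        "https://api.example.com/ping",  # placeholder endpoint
        params={"key": config.get("api_key", "")},
    )
    if response.status_code != 200:
        raise ConfigValidationError("API key verification failed")
```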
Adds a realm_bot delete event. On bot ownership change, an add event
is sent to the bot_owner (if not an admin) and a delete event to the
previous bot owner (if not an admin). For admins, an update event is
sent.
If the test fails, the 'output_dir' would not be deleted, and hence
it would give an error when we run the tests next time, as
'do_convert_data' expects an empty 'output_dir'.
Also, the unzipped data file should be removed if the test fails at
'do_convert_data'.
The messages were previously being read and passed to the helper
functions channel-wise.
This function builds a list of all the messages in all the channels
beforehand, which is then passed to the helper functions.
This field has been unused by clients for some time, and isn't great
for our public archive feature plans (where we'll not want to be
including email addresses in messages).
Add `translate_emoticons` to `prop_types` and `expected_keys`.
Furthermore, create an emoji-translating Markdown inline pattern.
Also use a JavaScript version of `translate_emoticons` and then use
this function during Markdown previews and as a preprocessor. This
is only needed for previews, because usually emoticon translation
happens on the backend after sending.
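A sketch of what the translation might look like (the mapping and
boundary rules here are assumptions, not the actual tables):

```
import re

EMOTICON_TRANSLATIONS = {
    ":)": ":smile:",
    ":(": ":frown:",
}  # assumed subset

def translate_emoticons(text: str) -> str:
    for emoticon, name in EMOTICON_TRANSLATIONS.items():
        # Only translate emoticons with no adjacent non-whitespace
        # characters (see the whitespace note below).
        pattern = r"(?<![^\s])" + re.escape(emoticon) + r"(?![^\s])"
        text = re.sub(pattern, name, text)
    return text
```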
Add tests for emoticon translation, a settings UI, and a /help/ page
as well.
Tweaked by tabbott to fix various test failures as well as how this
handles whitespace, requiring emoticons to not have adjacent
characters.
Fixes #1768.
This sets up a new test class with a simple
test, mostly for increasing coverage. The class
should in the future be extended to properly
verify the handle_feedback() logic.
Webhook functions wrapped by the decorator:
@authenticated_api_view(is_webhook=True)
now log payloads that cause exceptions to webhook-errors.log.
Note that authenticated_api_view is only used by webhooks/github
and not anywhere else.
Slack avatar URLs have the format:
'https://ca.slack-edge.com/<team_id>-<user_id>-<avatar_hash>-<size>'
For any URL of this form, if the user hasn't uploaded an image,
Slack uses a default gravatar, but we don't have a way of knowing
whether Slack has used the uploaded image or the default gravatar,
e.g. https://ca.slack-edge.com/T5YFFM2QY-U6006P1CN-gd41c3c33cbe-512.
Hence, avatar_source should be mapped to 'U'.
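A sketch of the mapping (the regex and fallback value are assumptions):

```
import re

SLACK_AVATAR_URL = re.compile(
    r"^https://ca\.slack-edge\.com/"
    r"[A-Z0-9]+-[A-Z0-9]+-\w+-\d+$"
)

def get_avatar_source(url: str) -> str:
    if SLACK_AVATAR_URL.match(url):
        # We can't distinguish an uploaded image from Slack's default
        # gravatar, so treat every such URL as an uploaded avatar.
        return "U"
    return "G"  # assumed fallback meaning "gravatar"
```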
Previously, this function executed the same test as
test_bots.py/test_create_embedded_bot_with_incorrect_service_name().
Now, instead of testing to add an embedded bot with an incorrect service
name, we test messaging an embedded bot with an incorrect service
name.
This requires updating one of the tests for the group_pm_with feature
in test_narrow to use the new style of tautology generated by SQLAlchemy.
Thanks to Sinwar for investigating this.
Fixes #8381.
The check for the channels ('general' and 'random') must be added
before the 'build_defaultstream' function is called, and only then
should the id be incremented. Otherwise, the id appended to the second
defaultstream object would be greater than the total number of
defaultstream objects and would crash at
'defaultstream_id_list[defaultstream_id]', which is a parameter of
'build_defaultstream'.
Added tests to prevent the same.
This is necessary for mobile apps to do the right thing when only
RemoteUserBackend is enabled, namely, directly redirect to the
third-party SSO auth site as soon as the user enters the server URL
(no need to display a login form, since it'll be useless).
Previously, we used to raise an exception if the direct dev login
code path was attempted when:
* we were running under a production environment.
* dev login was not enabled.
Now we redirect to an error page and give an explanatory message to
the user.
Fixes #8249.
This uses an actual query to the backend to check if the subdomain is
available, using the same logic we would use to check when the
subdomain is in fact created.
This commit prefixes stream names in urls with stream ids,
so that the urls don't break when we rename streams.
stream name: foo bar.com%
before: #narrow/stream/foo.20bar.2Ecom.25
after: #narrow/stream/20-foo-bar.2Ecom.25
For new realms, everything is simple under the new scheme, since
we just parse out the stream id every time to figure out where
to narrow.
For old realms, any old URLs will still work under the new scheme,
assuming the stream hasn't been renamed (and of course old urls
wouldn't have survived stream renaming in the first place). The one
exception is the hopefully rare case of a stream name starting with
something like "99-" and colliding with another stream whose id is 99.
The way that we encode the stream name portion of the URL is kind
of unimportant now, since we really only look at the stream id, but
we still want a safe encoding of the name that is mostly human
readable, so we now convert spaces to dashes in the stream name. Also,
we try to ensure more code on both sides (frontend and backend) calls
common functions to do the encoding.
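A rough sketch of the encoding (the escape rules are reverse-engineered
from the examples above and may not match the real helpers exactly):

```
def hash_util_encode(string: str) -> str:
    # Dot-escape everything that isn't alphanumeric or a dash,
    # e.g. '.' -> '.2E' and '%' -> '.25'.
    return "".join(
        ch if ch.isalnum() or ch == "-" else ".{:02X}".format(ord(ch))
        for ch in string
    )

def encode_stream(stream_id: int, stream_name: str) -> str:
    # The numeric id prefix is what we actually parse when narrowing;
    # spaces become dashes so the slug stays readable.
    return "{}-{}".format(stream_id, hash_util_encode(stream_name.replace(" ", "-")))

def decode_stream_id(encoded: str) -> int:
    # Everything before the first dash is the stream id.
    return int(encoded.split("-", 1)[0])
```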
Fixes #4713.
We use the command
'select nextval('sequence') from generate_series(1, increment_number)'
which returns a list of allocated values for the ids.
This list is used to assign ids to the objects to be converted.
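A minimal sketch of that allocation step (cursor handling assumed):

```
def allocate_ids(cursor, sequence: str, count: int) -> list:
    # Pre-allocate `count` ids from the given Postgres sequence.
    cursor.execute(
        "SELECT nextval(%s) FROM generate_series(1, %s)",
        (sequence, count),
    )
    return [row[0] for row in cursor.fetchall()]
```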
This adds a button under "Organization profile" settings which
deactivates the organization, sends an "event" to all the active
users, and logs them out.
Fixes: #8212.
This enforces `**` around all mentions, including "at-all" and
"at-everyone" mentions. Hence this makes `@all` and `@everyone`
invalid mentions, resulting in the proper syntax for these mentions
being `@**all**` and `@**everyone**` respectively.
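For illustration, the stricter syntax corresponds to a pattern roughly
like this (the exact regex in the codebase may differ):

```
import re

# Mentions must be wrapped in `**`; bare @all / @everyone no longer match.
MENTIONS_RE = re.compile(r"@\*\*(?P<name>[^*]+)\*\*")

def find_mentions(content: str) -> list:
    return [m.group("name") for m in MENTIONS_RE.finditer(content)]
```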
Note from tabbott: This removes an old feature/syntax, which made
sense back when @Tim was also a way to mention a user with Tim as
their first name. Given how nice typeahead is now, the user part of
the feature was removed a while ago; this should have gone at the same
time.
Fixes: #8143.
This may be helpful for some API clients, since it avoids them
needing to do somewhat messy post-processing on the results (the data was
always available via scanning for the first unread message in the result).
Fixes #6244.
From here on, we authenticate uploaded file requests before serving
these files in production. This involves allowing NGINX to pass these
file requests on to Django for authentication, and then serving the
files via internal redirect requests carrying the x-accel-redirect
field. The redirection of requests and loading of the x-accel-redirect
param is handled by django-sendfile.
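On the Django side, the view can be as simple as this sketch using
django-sendfile (the view name and auth decorator are assumptions):

```
from django.contrib.auth.decorators import login_required
from sendfile import sendfile  # django-sendfile

@login_required
def serve_uploaded_file(request, path: str):
    # Authenticate in Django, then let NGINX serve the bytes via an
    # internal X-Accel-Redirect (with the nginx SENDFILE_BACKEND).
    return sendfile(request, path)
```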
NOTE: This commit starts authenticating these requests for Zulip
servers running Ubuntu Xenial (16.04) or above.
Fixes: #320 and #291 partially.
External bots may call bot_handler.quit() when
they wish to terminate, e.g. due to a misconfiguration.
Currently, embedded bots ignore calls to quit(), even
though they signal a problem. This commit takes the first
step in handling quit() calls by logging a warning.
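A minimal sketch of that first step (class and method placement
assumed):

```
import logging

class EmbeddedBotHandler:
    def quit(self, message: str = "") -> None:
        # Embedded bots can't terminate the worker process; for now we
        # just surface the problem instead of silently ignoring it.
        logging.warning("Embedded bot quit with message: %s", message)
```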
It catches the `UserProfile.DoesNotExist` exception and hence
prevents an internal server error.
Also removes the option to select an empty bot owner.
Fixes: #8334.
When the answer is False, this will allow the mobile app to show a
warning that push notifications will not work and the server admin
should set them up.
Based partly on Kunal's PR #7810. Provides the necessary backend API
for zulip/zulip-mobile#1507.
We keep having to change the same thing in three places here; and also
the duplicates have accumulated unnecessary variation that makes it
hard to see what's actually supposed to be different and not different
in the three cases.
* Put imports in order.
* `import stripe`; that's the style upstream docs recommend, and it avoids
confusion e.g. between our StripeError and the library's StripeError.
* Simplify loading JSON.
* Keep lines largely to 100 columns.
I got distracted, came back later to a successful test run in my
terminal, and thought I remembered finishing the change and just
kicking off a final test run to check.
In fact, there was an `assert False` right in the normal case for
production, and I just hadn't finished a test for that path. (m.-)
Definitely the most grateful I've been for our coverage checks,
which highlighted this for me.
Remove the `assert False`, and also finish writing the test it was
there to help me write. Those lines are covered now.
This is a wrapper over the lru_cache function. It adds the following
features on top of lru_cache:
* It will not cache the result of functions with unhashable arguments.
* It will clear the cache whenever zerver.lib.cache.KEY_PREFIX changes.
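A sketch of such a wrapper, assuming the decorator name and the
details of the KEY_PREFIX check:

```
from functools import lru_cache, wraps

from zerver.lib import cache  # KEY_PREFIX lives here

def ignore_unhashable_lru_cache(maxsize: int = 128):
    def decorator(func):
        cached = lru_cache(maxsize=maxsize)(func)
        last_prefix = [cache.KEY_PREFIX]

        @wraps(func)
        def wrapper(*args, **kwargs):
            if cache.KEY_PREFIX != last_prefix[0]:
                # KEY_PREFIX changed (e.g. between tests): drop the cache.
                cached.cache_clear()
                last_prefix[0] = cache.KEY_PREFIX
            try:
                return cached(*args, **kwargs)
            except TypeError:
                # Unhashable arguments (e.g. a list): skip caching.
                return func(*args, **kwargs)
        return wrapper
    return decorator
```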
The freshly imported data shows that the users' emails are not
included in the data. However, the data received from Slack's older
method (which uses legacy tokens) contains the users' email data.
If some bug in Bugdown results in rendered message content that is
bigger than twice the message size, we now just throw an exception
from Bugdown. This is considerably better than the old behavior,
which might result in an enormous message being placed in the database
(potentially, bigger than the 1MB limit to store in memcached), which
would in turn result in tragic consequences.
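Roughly, the new guard looks like this sketch (constant value and
exception name assumed):

```
MAX_MESSAGE_LENGTH = 10000  # assumed value

class BugdownRenderingException(Exception):
    pass

def check_rendered_size(rendered_content: str) -> None:
    if len(rendered_content) > MAX_MESSAGE_LENGTH * 2:
        # Refuse to store an enormous rendering; the send just fails.
        raise BugdownRenderingException("Rendered content too long")
```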
This fixes#8322, in that it prevents the super bad outcome seen there
(where basically Zulip became unusable for everyone on the stream
where the message is posted). Now, the failure mode is just the
message failing to send. Still not ideal (and requires further work
on the URL embed feature), but now it's a minor problem, not a major
one.
The count of integrations is automatically computed now, so this will
change every time we add 10 more. Just stop asserting on the number.
Thanks to @hackerkid for spotting the issue.
Users having an account in only one realm will not be distracted by
the realm name in the subject lines of every email. Users who have
accounts in multiple realms can turn this setting on and receive the
corresponding realm name in the email subject.
Tweaked by tabbott to rebase and address a few small issues.
Fixes #5489.
For messages where the entire rendered body is a message_inline_image
object, we actually don't display any text and just display the
image. These messages may have links to images which might or might
not be internal to Zulip, but in both cases there is a chance of
these links being broken when accessed by an email server like Gmail
that doesn't possess the recipient user's cookies.
We don't want to have ugly looking broken images displayed in email
notifications. So we patch this by inserting a replacement for the
`message_inline_image` block in which we essentially replace the
content with the textual link.
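A sketch of the patching step using BeautifulSoup (the selector and
helper name are assumptions):

```
from bs4 import BeautifulSoup

def replace_inline_images(rendered: str) -> str:
    soup = BeautifulSoup(rendered, "html.parser")
    for div in soup.find_all("div", class_="message_inline_image"):
        link = div.find("a")
        if link is None:
            continue
        # Swap the image preview for a plain textual link.
        href = link.get("href", "")
        anchor = soup.new_tag("a", href=href)
        anchor.string = href
        div.replace_with(anchor)
    return str(soup)
```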
Edited for clarity by tabbott.
This will let us defer configuring outbound email to the end of the
install procedure, so we can greatly simplify it by consolidating
several scripted steps.
The new flow could be simplified further by giving the user the full
form in the first place, rather than first a form for just their
email address and then a form with the other details. We'll leave
that improvement for a separate change.
This fixes an issue where the user's own avatar was being sent down
the wire as None. We could have fixed it, as in #8265, by adding code
in the webapp and mobile apps to compute medium-size gravatar URLs as
well, but that would be messy, and there's little benefit to that
complexity (saving at most 2 URLs from the payload).
Fixes #8253.
If an exception was thrown inside `send_email` resulting in a retry,
we would include the `failed_tries` data in the event, which turned
out to throw an exception itself.
This fixes that flow, including deepening the test so that it would
fail if we didn't have the new logic.
We'll replace this primarily with per-realm quotas (plus the simple
per-file limit of settings.MAX_FILE_UPLOAD_SIZE, 25 MiB by default).
We do want per-user quotas too, but they'll need some more management
apparatus around them so an admin has a practical way to set them
differently for different users. And the error handling in this
existing code is rather confused. Just clear this feature out
entirely for now; then we'll build the per-realm version more cleanly,
and then we can later add back per-realm quotas modelled after that.
The migration to actually remove the field is in a subsequent commit.
Based in part on work by Vishnu Ks (hackerkid).
This is a little cleaner in that the try/except blocks for
SMTPException are a lot narrower; and it'll facilitate an upcoming
change to sometimes skip sending mail.
In this commit we also fix a test which would fail as a result of
doing this cleanup, since the test wasn't designed to take into
account the space characters which might occur at the beginning of an
HTML line.
In the email integration, previously, EMAIL_GATEWAY_EXAMPLE wasn't
rendered at all, which was recently fixed. So, now, we should make
sure that it gets rendered!
In order to get test coverage on topic name checks, we
do them in Addressee, so that we don't hit an assertion
first. The assertion in question is in Addressee.topic(),
and it was added partly to appease mypy.
Adds a check for newline that was present on the backend, but missing
in the frontend markdown implementation. Updating messages now uses
the is_me_message flag received from the server instead of its own
partial test. Similarly, rendering previews uses the markdown code.
Fixes #6493.
This is the first step for allowing users
to edit a bot's service entries, namely the
outgoing webhook configuration entries. The
chosen data structures allow for a future
with multiple services per bot; right now,
only one service per bot is supported.
This is responsible for:
1.) Handling all the incoming requests at the messages endpoint which
have the defer param set. This is similar to send_message_backend,
except that instead of actually sending a message it schedules one to
be sent later.
2.) Doing some preliminary checks, such as validating the timestamp
for scheduling a message, preventing scheduling a message in the past,
and ensuring the correct format of the message to be scheduled.
3.) Extracting the time of scheduled delivery from the message.
4.) Adding tests for the newly introduced function.
5.) timezone: Add get_timezone() to obtain a tz object from a string
(see the sketch after this list). This helps in obtaining a timezone
(tz) object from a timezone specified as a string. This string needs
to be a pytz-defined timezone string, which we use to specify the
local timezones of the users.
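A minimal sketch of get_timezone() and its use (the function body is
assumed to be a thin pytz wrapper):

```
from datetime import datetime

import pytz

def get_timezone(tz_name: str) -> pytz.tzinfo.BaseTzInfo:
    # Raises pytz.UnknownTimeZoneError for invalid timezone strings.
    return pytz.timezone(tz_name)

# e.g. interpreting a scheduled delivery time in the user's timezone:
local_tz = get_timezone("America/New_York")
delivery_time = local_tz.localize(datetime(2018, 3, 1, 9, 30))
```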
This also amends a commit from Brock Whittaker <brock@zulipchat.com>
that merges two separate functions for YouTube videos and Vimeo videos
into a generic video recall function.
Fixes #7550.