To ensure that we have some basic data for custom profile settings,
in the `populate_db` data set, remove `options['test_suite']` check
for adding initial custom profile data.
It's possible that this won't work with some versions of the
third-party backend, but tabbott has tested carefully that it does
work correctly with the Apache basic auth backend in our test
environment.
In this commit we start to support redirects to urls supplied as a
'next' param for the following two backends:
* GoogleOAuth2 based backend.
* GitHubAuthBackend.
This commit migrates realm emoji to be addressed by their `id` rather
than their name. This fixes a long-standing issue which was causing
an error on uploading an emoji with the same name as a deactivated
realm emoji.
Fixes: #6977.
The original implementation of this migration had a highly unfortunate
bug that would result in it deleting all reactions to realm emoji on
the server; we missed this in review, so essentially all historical
realm emoji reactions on chat.zulip.org were lost :(.
We both correct the problem and add logging of the deleted rows,
which would help should anything be deleted erroneously.
Because the base class's __init__ calls `_connect`, when we set the
value after that call has already returned, our new value only takes
effect if the first connection fails and we have to reconnect.
Make it take effect from the beginning.
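A minimal sketch of the ordering issue, with a hypothetical attribute name (the real client code differs in detail):

```python
class BaseClient:
    def __init__(self) -> None:
        self._connect()  # the base constructor connects immediately

    def _connect(self) -> None:
        pass  # stand-in for the real connection setup

class OurClient(BaseClient):
    def __init__(self) -> None:
        # Set the value *before* calling the base __init__, so the very
        # first _connect() already sees it; setting it afterwards only
        # took effect if that first connection failed and we reconnected.
        self.connect_timeout = 30  # hypothetical attribute
        super().__init__()
```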
This parameter isn't used anywhere. A good thing, because if it were,
the code would immediately raise an exception -- `self._on_open_cbs`
hasn't been initialized yet when we first call `_connect`, from the
base class's `__init__`.
So, just cut it. If we later need something like this, it's easy to
add a working version then.
We disabled the original "colorized HTML edit-history" feature way
back in 2013 in c51056ff8e.
That original feature involved showing what had been edited inline in
message bodies, so one could easily see what had been changed.
That old feature has since been replaced with the "view edit history"
menu option, and we're unlikely to ever want the old feature back.
So, we can just remove its code. There are a few supporting variables
that were created to help implement this; we can clean those up and
simplify the `update_message` code now that this feature is fully
removed.
For a personal build, the teamcity webhook still sends a private
message using check_send_private_message since a personal build
should never trigger a public notification.
For a non-personal build, check_send_webhook_message is used,
which can either send a PM or a stream message based on whether
a stream is specified in the webhook URL or not.
We now give users only two options: specify a stream and receive
public notifications for their goals, or leave it out and receive
PMs, thus keeping their goals private. This simplifies the docs!
This applies only on a server open for anyone to create a realm.
Moreover, if the server admins have granted any given realm a
max_invites greater than the default, that realm is exempt too.
This makes this value much easier for a server admin to change than it
was when embedded directly in the code. (Note this entire mechanism
already only applies on a server open for anyone to create a realm.)
Doing this also means getting the default out of the database.
Instead, we make the column nullable, and when it's NULL in the
database, treat that as whatever the current default is. This also
better matches the likely model where there are a few realms with
specially-set values, and everything else should be treated uniformly.
The migration contains a `RenameField` step, which sounds scary
operationally -- but it really does mean just the *field*, in
the model within the Python code. The underlying column's name
doesn't change.
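A sketch of what this pattern can look like on the model; the field, column, and setting names here are illustrative rather than the exact ones:

```python
from django.conf import settings
from django.db import models

class Realm(models.Model):
    # Renamed field in Python; db_column keeps the old column name, so the
    # RenameField migration doesn't touch the database schema.  NULL means
    # "use the current server-wide default".
    _max_invites = models.IntegerField(null=True, db_column='max_invites')

    @property
    def max_invites(self) -> int:
        if self._max_invites is None:
            return settings.INVITES_DEFAULT_REALM_DAILY_MAX  # assumed setting name
        return self._max_invites
```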
This fixes an unpleasant regression in
f5edeb01ae, where we stopped correctly
filtering users who have an open browser session that's idle. These
users are tagged as "UserPresence.IDLE" with a current timestamp in
the database, and should be treated as idle for presence purposes.
As a result, if you had an open Zulip browser session, you incorrectly
wouldn't get missed-message emails for PMs and mentions before this fix.
This fixes a regression in 93678e89cd
and a4979410f9, where the webhooks using
authenticated_rest_api_view were migrated to a new model that didn't
include setting a custom Client string for the webhook.
When restoring these webhooks' client strings, we also fix places
where the client string was not capitalized the same way as the
product's name.
This commit migrates all of our webhooks to use
check_send_webhook_message, except the following:
beeminder: Rishi wanted to wait on this one.
teamcity: This one is slightly more work.
yo: This one is PM-only. I am still trying to decide whether we
should have a force_private argument or something in
check_send_webhook_message.
facebook: No point in migrating this, will be removed as part of
#8433.
slack: Slightly more work too with the `channel_to_topics` feature.
Warrants a longer discussion.
This changes the followup_day2 email delay from one day later to two
days later, provided that delivery still lands on a working day
(i.e. Mon - Fri). For a Thursday signup, the delay is compromised to just
the next day (Friday), since postponing to Monday would be too late, and
for a Friday signup the email should go out on Monday.
Finally, the emails are sent one hour before the time computed above, so
that users can catch them while they are still dealing with this kind of
thing.
Fixes: #7078.
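A rough sketch of that scheduling rule (a simplified illustration, not the exact production code):

```python
from datetime import datetime, timedelta

def followup_day2_send_time(signup_time: datetime) -> datetime:
    days = 2  # default: two days after signup (lands on a working day)
    if signup_time.isoweekday() == 4:    # Thursday: two days later is Saturday,
        days = 1                         # so compromise to the next day (Friday).
    elif signup_time.isoweekday() == 5:  # Friday: deliver on Monday.
        days = 3
    # Send one hour before the computed time, so users catch the email
    # while they are still dealing with this kind of thing.
    return signup_time + timedelta(days=days) - timedelta(hours=1)
```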
These changes are in one commit, since the previous typing of check_url
did not match the centralized strict definition (object/Any vs Text)
already used elsewhere in validator.py, and it also had a different API.
check_url is updated here to match the API of the other check_* functions,
i.e. val is an object (not Text) and it returns Optional[str]. It also now
checks that the value is text explicitly at run-time, which was previously
only type-checked. Tests are updated accordingly.
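A minimal sketch of the updated signature, following the check_* convention (simplified; the real implementation in validator.py may differ in details):

```python
from typing import Optional
from django.core.exceptions import ValidationError
from django.core.validators import URLValidator

def check_url(var_name: str, val: object) -> Optional[str]:
    # Explicit run-time check that the value is text; previously this was
    # only enforced by the type annotation.
    if not isinstance(val, str):
        return '%s is not a string' % (var_name,)
    try:
        URLValidator()(val)
        return None
    except ValidationError:
        return '%s is not a URL' % (var_name,)
```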
Currently, when another private stream subscriber adds a realm admin to
the stream, a new copy of the private stream is created in the realm
admin's streams. This results in an error, because there are then two
similar stream elements in the stream settings.
When a new subscriber is added to a private stream, we first send them a
stream `create` event, because private streams are not visible until the
user has been subscribed at least once. But realm admins can now always
access private streams, so when a realm admin is subscribed to a stream,
they would get a stream `create` event even if the stream already exists
on their client side.
Fix this by excluding realm admins from the stream `create` event on the
`add` subscription operation, and instead sending the private stream
`create` event to all realm admins on the stream creation operation.
Fixes #8695.
These are the straightforward ones.
Note that there is a line in zerver.lib.test_classes.build_webhook_url
that lost test coverage. That's because most of our tests test using
stream messages so the webhook URLs being tested always have a query
parameter. So the line that accounts for there being no query
parameters never gets called, which is fine, but we should still
keep it.
This commit adds a generic function called check_send_webhook_message
that does the following:
* If a stream is specified in the webhook URL, it sends a stream
message, otherwise sends a PM to the owner of the bot.
* In the case of a stream message, if a custom topic is specified
in the webhook URL, it uses that topic as the subject of the
stream message.
Also, note that we need not test this anywhere except for the
helloworld webhook. Since helloworld is our default example for
webhooks, it is here to stay and it made sense that tests for a
generic function such as check_send_webhook_message be tested
with an actual generic webhook!
Fixes #8607.
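A minimal sketch of that dispatch logic (argument names and the helper calls are approximations, not the exact code):

```python
from zerver.lib.actions import (check_send_private_message,
                                check_send_stream_message)

def check_send_webhook_message(request, user_profile, topic, body):
    stream = request.GET.get('stream')
    if stream is None:
        # No stream in the webhook URL: PM the owner of the bot.
        check_send_private_message(user_profile, request.client,
                                   user_profile.bot_owner, body)
    else:
        # A custom topic in the webhook URL overrides the default subject.
        topic = request.GET.get('topic', topic)
        check_send_stream_message(user_profile, request.client,
                                  stream, topic, body)
```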
We solved the problem the TODO raised by using a different type
annotation syntax, and I'm not sure whether that refactor would
actually improve the code.
The previous system would crash with some files (because for some
reason the comment count was 1 but there was no "initial comment") and
also the file comment and file name were sorta redundant.
The 'make_new_dir' bool value was used to create a new directory
every time True was passed. Now that the avatars and uploads directories
are being created separately, we don't need this anymore.
If an emoji that was deleted was the only realm emoji, or more
generally if all realm emoji were deleted, then we would just leave
the reaction unchanged, with an `emoji_code` that is now corrupt.
Instead, treat this case the same as if only this emoji was deleted
while others remain.
The domain name is being set in the helper function
'slack_workspace_to_realm', but it should be set in the main function
'do_convert_data', as we need it in other child functions of
'do_convert_data'.
This code was originally written when we were using the old South
system, and hasn't been used in a few years. It probably doesn't
work, and thus only serves to clutter the codebase.
Many declarations were previously annotated with
Callable[..., HttpResponse]; this is equivalent to ViewFuncT, so here we
switch to it.
To enable this migration, the WrappedViewFuncT alias is removed; this is
equivalent to the simple & legible Callable[[ViewFuncT], ViewFuncT], so
for essentially no change in length, a clearer return type is possible.
Originally was going to centralize this in zerver/lib/request.pyi, but this
file is not visible at run-time, being only a stub. The matching request.py
file seemed inappropriate, as it doesn't actually use ViewFuncT.
Namely, annotate as best as possible, and add notes to indicate preference,
if QuerySet develops generic typing.
Note that the return values of functions with annotations changed in this
commit are used elsewhere as QuerySets, so the Sequence[T] approach used
for some functions in models.py is not applicable.
Other functions took the form of returning Sequence[T] when the QuerySet
functionality is unused beyond the function, with T being the objects
filtered for in the function body; this commit follows that practice for the
one remaining python2 comment-annotated function, completing the transition
of models.py to py3.5 function annotations.
A note is also added to another function regarding a need to return a
QuerySet, and ideally a QuerySet[T] in line with the other functions, as and
when QuerySet becomes annotated as a generic.
We now consistently set our query limits so that we get at
least `num_after` rows such that id > anchor. (Obviously, the
caveat is that if there aren't enough rows that fulfill the
query, we'll return the full set of rows, but that may be less
than `num_after`.) Likewise for `num_before`.
Before this change, we would sometimes return one too few rows
for narrow queries.
Now, we're still a bit broken, but in a more consistent way. If
we have a query that does not match the anchor row (which could
be true even for a non-narrow query), but which does match lots
of rows after the anchor, we'll return `num_after + 1` rows
on the right hand side, whether or not the query has narrow
parameters.
The off-by-one semantics here have probably been moot all along,
since our windows are approximate to begin with. If we set
num_after to 100, it's just a rough performance optimization to
begin with, so it doesn't matter whether we return 99 or 101 rows,
as long as we set the anchor correctly on the subsequent query.
We will make the results more rigorous in a follow up commit.
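Roughly, the convention for the right-hand side looks like this (illustrative only; the real query construction in get_messages_backend is more involved):

```python
from sqlalchemy import column

def limit_after_query(query, anchor, num_after):
    # Ask for one extra row so that, when the anchor row itself matches
    # the query, we still get at least `num_after` rows with id > anchor.
    # When the anchor row does not match, this is what yields the
    # consistent `num_after + 1` rows on the right-hand side.
    return (query
            .where(column('id') >= anchor)
            .order_by(column('id').asc())
            .limit(num_after + 1))
```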
Note that the "Save" button has no text in the Taiga webhook
setup UI. There is a small floppy disk symbol for saving which
is visible right beside the form fields. So I simply said,
"Save the form".
We start to force downloads for the attachment files. We do this
for all files except images or PDFs. We would like images or PDFs
to open up in the browser itself.
Tweaked by tabbott for comment clarity and correctness.
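A minimal sketch of that decision (the helper and constant names here are assumptions, not the real serving code):

```python
import mimetypes

INLINE_MIME_TYPES = ['image/png', 'image/jpeg', 'image/gif', 'application/pdf']

def content_disposition_for(filename: str) -> str:
    # Images and PDFs open up in the browser itself; everything else is
    # forced to download.
    mime_type, _ = mimetypes.guess_type(filename)
    if mime_type in INLINE_MIME_TYPES:
        return 'inline'
    return 'attachment'
```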
This will allow realm admins to remove others from a private stream to
which the realm administrator is not subscribed; this is important for
managing those streams, because previously nobody could remove users
from private streams that didn't have any realm administrators
subscribed.
This will allow realm admins to access subscribers of an unsubscribed
private stream. This is a preparatory commit for letting realm admins
remove those users.
This will allow realm admins to update the names and descriptions of
private streams even if they are not subscribed, which fixes the buggy
behavior that previously nobody could(!).
This generic function isolates the before/after logic that really
is independent of Message and doesn't need to clutter up
`get_messages_backend`. Also, introducing a new namespace
reduces some shadowing/mutation with variables like `query`.
It's a pure code move, with some very minor renaming (e.g.
inner_msg_id_col -> id_col).
If anchor is 0, there is no sense doing a before_query.
Likewise, if anchor is `LARGER_THAN_MAX_MESSAGE_ID`, there is
no sense doing an after_query.
We introduce variables called `need_before_query` and
`need_after_query` to enforce those conditions.
This also adds some comments explaining the fallthrough case
where neither query makes sense.
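A sketch of the two flags as described above (simplified; the sentinel value is a stand-in):

```python
LARGER_THAN_MAX_MESSAGE_ID = 10 ** 15  # stand-in for the real sentinel

def plan_queries(anchor: int, num_before: int, num_after: int):
    # No sense looking before an anchor of 0, nor after an anchor that is
    # effectively "positive infinity".
    need_before_query = (num_before > 0) and (anchor > 0)
    need_after_query = (num_after > 0) and (anchor < LARGER_THAN_MAX_MESSAGE_ID)
    return need_before_query, need_after_query
```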
If use_first_unread_anchor is set and we don't have any unread
messages, then our anchor is effectively "positive infinity" and
we can streamline queries.
In the past we'd have clauses like `message_id <= 999999999999999`
in the query that were harmless but crufty.
We want to say `if num_after > 0` when we expect num_after to be
a positive integer. We don't want any confusion that we will
execute the blocks for values of -7 or None.
We also delete a couple helper functions that were only used there.
This management command was primarily used before we had a UI for
creating outgoing webhook bots.
In theory, we should be able to delete this, since first, if there are
no users in the organization, we'll end up with an equivalent value
(an empty collection of users), and second, it shouldn't be possible
for an active Zulip realm to have 0 active users in it anyway.
But the way we construct the database query in query_for_ids is such
that it's necessary to avoid a 500.
Apparently, we did essentially all the work to support showing full
topic history to newly subscribed users from a data flow perspective,
but didn't actually enable this feature by having the topic history
endpoint grant access to historical topics. This fixes that gap.
I'm not altogether happy with how the code and tests read for this
feature; the code itself has more duplication than I'd like, and the
tests do too, but it works.
This commit refactors bugdown to perform a lookup only on active
realm emojis. This was needed because once we migrate realm emojis
to be addressed by `id` rather than name, it will be costly to
perform a lookup on all the realm emojis.
We no longer accept URLs while creating emoji, so this management
command was probably overlooked while migrating the realm emoji
infrastructure to the upload backend.
We could fix this to work properly today, but the command was
originally written in a context when Zulip didn't have a UI for
managing realm emoji at all. Now that we do have such a UI, it
doesn't have a compelling use case, and work on migrating the realm
emoji schema demonstrates that this does have a maintenance cost.
So, we simply remove this command.
If a user has disabled message content in missed-message email
notifications, we shouldn't send any information about the missed
messages, i.e. sender, stream, message text, etc., to email servers.
We are already hiding this information in the email templates, but we
shouldn't expose any information about message content if the user
has disabled this setting.
We now include whether the message was a private or group private
message; this is particularly important with the new setting to
disable including any message content in these emails (since in that
case, one doesn't know anything about the message types).
If a new private stream is created by a realm admin who is not
subscribed to it, the newly created stream is not automatically added
to the realm admin's stream list; we have to reload the browser to get
the newly created stream into the stream list. That's because the
private stream creation event is only sent to the users subscribed to
the private stream, so even if the realm admin is the acting user, they
don't get the creation event.
We should send the private stream creation event to realm admin users
along with the users subscribed to the stream, since realm admins can
access unsubscribed private streams.
Tweaked by tabbott to fix various typos and clean up the code.
Until now, we had been storing the realm emoji's name in the emoji code
field in the reactions' model. This commit migrates it to store the realm
emoji's id instead. It is part of an effort to migrate realm emojis to be
referenced by their id and not by name.
Rewritten in significant part by tabbott to actually be correct.
One particularly nasty thing the original webhook integration did is
do `current_time = time.time()` at the top of `view.py` -- that
means that code ran at import time, not runtime.
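The difference, in a small sketch (the webhook and helper names are just for illustration):

```python
import time

# Wrong: runs once, at import time, so every request sees the same
# stale timestamp.
current_time = time.time()

def api_mywebhook_bad(request):
    return process_payload(request, current_time)  # process_payload is hypothetical

# Right: the timestamp is computed at runtime, per request.
def api_mywebhook_good(request):
    return process_payload(request, time.time())
```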
"incorrect" here means rejected by a bot's validate_config() method.
A common scenario for this is validating API keys before the bot is
created. If validate_config() fails, the bot will not be created.
Adds a realm_bot delete event. On bot ownership change, an add event is
sent to the bot_owner (if not an admin) and a delete event to the
previous bot owner (if not an admin). For admins, an update event is sent.
If the test fails, the 'output_dir' would not be deleted, and
hence it would give an error when we run the tests the next time,
as 'do_convert_data' expects an empty 'output_dir'.
Also, the unzipped data file should be removed if the test fails
at 'do_convert_data'.
The messages were previously being read and passed to the helper
functions channel-wise.
This function makes a list of all the messages in all the channels
beforehand, which is then passed to the helper functions.
There's probably follow-up work to do here to eliminate these
completely, but this dramatically shrinks the ~1 minute race window
that was previously present between import and test function being
called.
This commit:
* Restructures the doc to use a numbered-step format.
Note that there are no screenshots. I signed up for a
Fabric/Crashlytics account but you have to link an Android/iOS app
to even get to the settings panel, which seemed like too much work
just to get a screenshot.
However, the way we can verify (somewhat) the correctness of the
last step is that it is a paraphrase of the first paragraph of
Fabric's Webhook docs, which can be found here:
https://docs.fabric.io/apple/crashlytics/custom-web-hooks.html
This commit modifies the text to:
1. Remove unnecessary screenshots.
2. Use the numbered-step format.
3. Remove the instructions for generating an access token.
I took a look at Dropbox's docs and you shouldn't need that
for a webhook setup. The whole point behind a webhook is that
one can get by without using OAuth.
4. Rearrange the instructions to only contain 4 steps. For
uncomplicated instructions, that seems to be the ideal number.
This field has been unused by clients for some time, and isn't great
for our public archive feature plans (where we'll not want to be
including email addresses in messages).
Add `translate_emoticons` to `prop_types` and `expected_keys`.
Furthermore, create an emoji-translating Markdown inline pattern.
Also use a JavaScript version of `translate_emoticons` and then use
this function during Markdown previews and as a preprocessor. This
is only needed for previews, because usually emoticon translation
happens on the backend after sending.
Add tests for emoticon translation, a settings UI, and a /help/ page
as well.
Tweaked by tabbott to fix various test failures as well as how this
handles whitespace, requiring emoticons to not have adjacent
characters.
Fixes #1768.
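A rough sketch of the backend preprocessing idea, with an illustrative (not the real) emoticon table; the lookbehind/lookahead enforce that emoticons have no adjacent non-whitespace characters:

```python
import re

# Illustrative subset; the real mapping lives with the shared emoji data.
EMOTICON_CONVERSIONS = {
    ':)': ':smile:',
    '<3': ':heart:',
}

def translate_emoticons(text: str) -> str:
    for emoticon, replacement in EMOTICON_CONVERSIONS.items():
        pattern = r'(?<![^\s])' + re.escape(emoticon) + r'(?![^\s])'
        text = re.sub(pattern, replacement, text)
    return text
```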
These two classes are tricky to test, and nocoverage-ing them
allows us to mark queue_processors.py as fully covered. We
still want to cover these two workers at some point, but for
now, it's nice to enforce full coverage for any future changes
to queue_processors.py.
Fixes (sort of) #6542.
This sets up a new test class with a simple
test, mostly for increasing coverage. The class
should in the future be extended to properly
verify the handle_feedback() logic.
We already check in get_service_bot_events() if a bot is mentioned,
and then only pass on the call to the bot handler if it is. The
commit removes the additional check in the embedded bot queue
processor simply because it is impossible to obtain test coverage
for it (there is no meaningful way to trigger the content of the
if-clause, because there will never be a message reaching the bot
without @-mentioning it).
To alleviate the danger of a potential regression, the check is not
removed completely, but rather replaced by an assert statement.
Previously, when a user updated the config data of an
embedded bot, only the updated fields were dispatched
back to the client. Dispatching all config data fields
makes the client updating code less brittle.
Webhook functions wrapped by the decorator:
@authenticated_api_view(is_webhook=True)
now log payloads that cause exceptions to webhook-errors.log.
Note that authenticated_api_view is only used by webhooks/github
and not anywhere else.
During a slack import, we don't have medium-size avatars already
available in the export data set (and possibly also with a normal
import/export?). The medium size avatar can be created by the
'ensure_medium_avatar_image' function, which checks if the medium
image exists, and if it doesn't, it creates the image.
This commit was substantially edited by tabbott to get rid of an
undefined variable bug, avoid initializing the upload backend classes
in a loop, and add some TODO notes on things that could be improved
later.
Slack avatar URLs have the format:
'https://ca.slack-edge.com/<team_id>-<user_id>-<avatar_hash>-<size>'
For any URL of this form, if the user hasn't uploaded an image,
Slack uses a default gravatar, but we don't have a way of knowing
whether Slack used the uploaded image or the gravatar,
e.g. https://ca.slack-edge.com/T5YFFM2QY-U6006P1CN-gd41c3c33cbe-512.
Hence, avatar_source should be mapped to 'U'.
When our code raises an exception and Django converts it to a 500
response (in django.core.handlers.exception.handle_uncaught_exception),
it attaches the request to the log record, and we use this in our
AdminNotifyHandler to include data like the user and the URL path
in the error email sent to admins.
On this line, when our code raises an exception but we've decided (in
`TagRequests`) to format any errors as JSON errors, we suppress the
exception so we have to generate the log record ourselves. Attach the
request here, just like Django does when we let it do the job.
This still isn't an awesome solution, in that there are lots of other
places where we call `logging.error` or `logging.exception` while
inside a request; this just covers one of them. This is one of the
most common, though, so it's a start.
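A sketch of the idea (simplified; the function name here is hypothetical):

```python
import logging

def log_json_error(request, error: Exception) -> None:
    # Mirror what Django's handle_uncaught_exception does: attach the
    # request via `extra`, so a handler like AdminNotifyHandler can read
    # the user and URL path off `record.request`.
    logging.error('JSON response error: %s', error,
                  exc_info=True, extra={'request': request})
```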
models.py should only contain thin wrapper functions. Furthermore,
this move allows us to remove the circular imports. The two moved
functions are interdependent and are thus moved in one commit.
Revert c8f034e9a "queue: Remove missedmessage_email_senders code."
As the comment in the code says, it ensures a smooth upgrade path
from 1.7.x; we can delete it in master after 1.8.0 is released.
The removal commit was merged early due to a communication failure.
Previously, this function executed the same test as
test_bots.py/test_create_embedded_bot_with_incorrect_service_name().
Now, instead of testing to add an embedded bot with an incorrect service
name, we test messaging an embedded bot with an incorrect service
name.
This requires updating one of the tests for the group_pm_with feature
in test_narrow to use the new style of tautology generated by SQLAlchemy.
Thanks to Sinwar for investigating this.
Fixes #8381.
GitLab recently changed their event name for build notifications
from "Build Hook" to "Job Hook". Instead of just supporting the
latter, we should support both just in case people are running
older versions of GitLab.
The check for the channels ('general' and 'random') must be added before
the 'build_defaultstream' function is called and the id is incremented.
Otherwise, the id appended at the end of the second defaultstream object
would be greater than the total number of defaultstream objects, and would
crash at 'defaultstream_id_list[defaultstream_id]', which is a parameter
of 'build_defaultstream'.
Added tests to prevent the same.
This is necessary for mobile apps to do the right thing when only
RemoteUserBackend is enabled, namely, directly redirect to the
third-party SSO auth site as soon as the user enters the server URL
(no need to display a login form, since it'll be useless).
At some point, GitLab decided to change the name of the event for
CI notifications from "Build Hook" to "Job Hook" and we started
running into errors in webhook-logger.log.
This commit:
* Removes the unnecessary screenshot. The UI is intuitive enough
and standalone instructions should suffice.
* Rearranges the instructions into 4 steps.
* Makes the wording more explicit.
This commit:
* Removes the unnecessary screenshot. The user should be able to
easily see the fields in question in this case.
* Wraps the text at 80 chars.
* Combines the instructions into 4 steps.
The docs for this can easily be combined into 4 steps. For
uncomplicated setups, 4 seems like a good number.
Again, I have no way of verifying the correctness of the instructions
here because Airbrake doesn't let you do anything till you specify
your credit card information, which I didn't want to do.
This commit modifies the text to:
* Use the number 1 for all steps and let Markdown take care of the rest.
* Remove the line that says this webhook is "experimental". It isn't
anymore.
This commit modifies the doc.md to:
* Use consistent language and style.
* Use the number 1 for all numbered steps and let Markdown take
care of the rest.
* Have detailed steps on how to get to the Integrations settings
instead of just linking to the page.
* Remove unnecessary screenshots.
This commit:
* Adds a missing step to the documentation.
* Replaces wording such as "Go to X" with "Click on X".
* Removes the unnecessary screenshots.
* Rearranges the doc to contain only 4 steps. For uncomplicated
setups, 4 seems to be the right number.
This commit:
* Removes the unnecessary screenshot.
* Reorders the instructions and combines them in to 4 steps.
* Improves the contents of the webhook-url-with-bot-email-indented.md
macro and makes it more consistent with create-bot-construct-url.md.
* Sets the recommended stream name to "commits", since that's what
the webhook function for Beanstalk expects in
zerver/webhooks/beanstalk/view.py. This allows us to use the
create-stream.md macro.
This got broken at some point when we moved around the context
processing logic for integrations/webhooks. Thankfully, the
context value for external_uri_scheme was only used in a couple of
our less popular integration docs. It should render perfectly now.
* Remove unnecessary screenshot. It doesn't help very much in this
case.
* Update text to instruct users to not leave the `Title` field
empty (it cannot be blank).
* Replace wording such as `Go to Settings` with `Click on Settings`.
* Combine the "fill out the form" and "click 'Save'" steps.
* Replace "Choose X on the left-hand side" with "Choose X".
* Replace "Remember to check the X" with "Check the X".
Creating the very first organization administrator user and
subscribing them to streams before any messages were sent resulted in
RealmAuditLog entries being created with a `event_last_message_id` of
None, because that's the maximum ID in the empty set.
We correct this by fixing the incorrectly created RealmAuditLog
entries, both for new servers and also fixing old broken entries on
existing servers.
This fixes an issue where if a user set up a Zulip server with just the
organization administrator, and then forgot about it (so that the
initial user became soft-deactivated), trying to sign in 3 weeks later
would throw an exception.
This fixes the issue reported here:
https://chat.zulip.org/#narrow/stream/9-issues/subject/500.20error.20on.20login/near/511981
The total number of stream objects was allocated based on
total_users; it should be allocated based on total_channels.
This passed the tests because the total number of users in the test
was greater than the total number of channels.
Previously, we used to raise an exception if the direct dev login code
path was attempted when:
* we were running in a production environment.
* dev login was not enabled.
Now we redirect to an error page and give an explanatory message to the
user.
Fixes #8249.
The Markdown extension that lives inside
zerver/lib/bugdown/api_code_example.py previously used ujson.
ujson's `dumps` function doesn't accept a `separators` argument,
which means we have no control over how the JSON is pretty-printed.
This resulted in JSON fixtures with no spaces after the colon, which
looks unnecessarily convoluted.
So now, we use the built-in `json` module to get around this.
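For instance (a small illustration of the difference):

```python
import json
import ujson

payload = {'result': 'success', 'msg': ''}

# ujson's dumps has no `separators` argument, and its pretty-printed
# output omits the space after the colon, e.g. "result":"success".
ujson.dumps(payload, indent=4)

# The built-in json module lets us control the separators, so fixtures
# render as "result": "success".
json.dumps(payload, indent=4, separators=(',', ': '))
```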
For further reading, this issue
<https://github.com/esnme/ultrajson/issues/82> opened on ujson's
repo explains why they are reluctant to support such formatting
due to performance considerations.
This commit adds a test for the sample fixture for when an invalid
stream name is passed to a query that expects a valid stream name
as an argument. This is the case with almost all of our queries
documented under the sidebar heading "Streams".
EDIT: Actually, I was wrong. This payload is highly specific to
get-stream-id, so it shouldn't be a part of common-error-payloads
at all.
In templates/zerver/api/delete-queue.md, we have a sample fixture
for when the queue_id passed to client.deregister_queue is not
valid or the event queue in question has already been deleted.
This commit tests that fixture.
Note that this error payload is specific to client.deregister_queue.
In templates/zerver/api/create-user.md, we have a sample fixture
for when a client attempts to create a user with the same email
as an existing user. This commit adds a test for that fixture.
Note that this error payload is specific to client.create_user
and this error payload isn't generated anywhere else.
In templates/zerver/api/add-subscriptions, we have a sample fixture
for when the user being subscribed is already subscribed to a
stream. This commit tests that fixture against a running server.
This commit adds tests for the fixture for when a user is not
authorized (perhaps because the query requires the use of admin
privileges) for a particular query.
In templates/zerver/api/update-message.md, we have a sample fixture
for when a zulip.Client does not have the permission to update/edit
a particular message. This commit adds a test for that fixture.
Also, tools/test-api now also uses a non-admin client for this test,
which might come in handy in the future.
This commit adds tests for the sample fixture for when a required
request argument is missing. Also, it moves the sample fixture
to common-error-payloads.md, since this is an error payload that
is common to most requests (except the ones that don't take any
arguments).
In templates/zerver/api/private-message.md, we have a sample fixture
for when the email address of the PM's recipient is invalid. This
commit makes sure that fixture is tested against a running server.
In templates/zerver/api/stream-message.md, we have a sample fixture
for when the target stream does not exist. This commit adds a test
for that sample fixture.
I think it makes more sense to first tell the user that the character
they are entering is invalid than to tell them that the minimum length
requirement is not satisfied.
Fixes #3058.
This uses an actual query to the backend to check if the subdomain is
available, using the same logic we would use to check when the
subdomain is in fact created.
This changes the missed-message logic to use the new encoding
scheme for stream url fragments that prefixes them with
"{stream_id}-".
For simplicity's sake, we just get a Stream object to pass to
the helper functions, and we only call get_display_recipient()
now for the HUDDLE case. (We were calling it wastefully before
for the PM case.)
This commit prefixes stream names in urls with stream ids,
so that the urls don't break when we rename streams.
stream name: foo bar.com%
before: #narrow/stream/foo.20bar.2Ecom.25
after: #narrow/stream/20-foo-bar.2Ecom.25
For new realms, everything is simple under the new scheme, since
we just parse out the stream id every time to figure out where
to narrow.
For old realms, any old URLs will still work under the new scheme,
assuming the stream hasn't been renamed (and of course old urls
wouldn't have survived stream renaming in the first place). The one
exception is the hopefully rare case of a stream name starting with
something like "99-" and colliding with another stream whose id is 99.
The way that we encode the stream name portion of the URL is kind
of unimportant now, since we really only look at the stream id, but
we still want a safe encoding of the name that is mostly human
readable, so we now convert spaces to dashes in the stream name. Also,
we try to ensure more code on both sides (frontend and backend) calls
common functions to do the encoding.
Fixes #4713
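A rough sketch of the encoding (illustrative; the real shared helpers on the frontend and backend differ in structure):

```python
import urllib.parse

def encode_stream(stream_id: int, stream_name: str) -> str:
    # The id prefix is what we actually parse when narrowing; the name part
    # is just for human readability, with spaces turned into dashes.
    slug = stream_name.replace(' ', '-')
    return '%d-%s' % (stream_id, hash_util_encode(slug))

def hash_util_encode(s: str) -> str:
    # Percent-encode, then swap '%' for '.', keeping the fragment readable;
    # '.' itself is encoded first so the two steps don't collide.
    return urllib.parse.quote(s, safe='').replace('.', '.2E').replace('%', '.')

# encode_stream(20, 'foo bar.com%') == '20-foo-bar.2Ecom.25'
```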
We use the command
'select nextval('sequence') from generate_series(1, increment_number)',
which returns a list of allocated values for the ids.
This list is used to assign ids to the objects to be converted.
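A sketch of how that allocation can be done through Django's database connection (the sequence name is a placeholder):

```python
from django.db import connection

def allocate_ids(sequence: str, count: int) -> list:
    # Fetch `count` fresh values from the sequence in one round trip; each
    # value is then assigned to one of the objects being converted.
    with connection.cursor() as cursor:
        cursor.execute("SELECT nextval(%s) FROM generate_series(1, %s)",
                       [sequence, count])
        return [row[0] for row in cursor.fetchall()]
```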
This adds a button under "Organization profile" settings, which
deactivates the organization and sends an "event" to all the
active users, logging them out.
Fixes: #8212.
This enforces `**` around all the mentions including "at-all" and
"at-everyone" mentions. Hence this makes `@all` and `@everyone`
invalid mentions, resulting in the proper syntax for these mentions
being `@**all**` and `@**everyone**` respectively.
Note from tabbott: This removes an old feature/syntax, which made
sense back when @Tim was also a way to mention a user with Tim as
their first name. Given how nice typeahead is now, the user part of
the feature was removed a while ago; this should have gone at the same
time.
Fixes: #8143.
This commit modifies the Markdown extension in bugdown/api_code_examples.py
to support rendering code examples in multiple languages by specifying
the language like so:
{generate_code_example(python)|doc.md|example}
This makes us one step closer towards adding support for testable
JavaScript code examples.
This commit simply adds a comment in zerver/lib/api_test_helpers.py
explaining why it isn't worth the effort to explicitly test the
code example in api/get-events-from-queue.md.
This may be helpful for some API clients, since it avoids them needing
to do somewhat messy post-processing on the results (the data was
always available via scanning for the first unread message in the result).
Fixes #6244.
From here on we start to authenticate uploaded file requests before
serving these files in production. This involves allowing NGINX to
pass on these file requests to Django for authentication and then
serve these files by making use of internal redirect requests with the
x-accel-redirect field. The redirection of requests and loading
of the x-accel-redirect param is handled by django-sendfile.
NOTE: This commit starts to authenticate these requests for Zulip
servers running Ubuntu Xenial (16.04) or above.
Fixes: #320 and #291 partially.
With minor fixes by eeshangarg!
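A minimal sketch of the Django side of this flow, assuming django-sendfile is configured with its nginx backend (the access check and path lookup are hypothetical helpers):

```python
from django.http import HttpResponseForbidden
from sendfile import sendfile

def serve_local_file(request, path_id):
    # Authenticate first; only then hand the actual file transfer back to
    # nginx via an X-Accel-Redirect internal redirect.
    if not user_can_access_file(request.user, path_id):  # hypothetical check
        return HttpResponseForbidden()
    local_path = local_file_path(path_id)  # hypothetical helper
    return sendfile(request, local_path)
```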
Eeshan: I decided to remove the screenshot. It looks very old and
was blurry and the instructions were very screenshot-agnostic
anyway!
I couldn't update the screenshot because Airbrake doesn't even let
you use the free trial till you give them your credit card info,
which I didn't want to do!
External bots may call bot_handler.quit() when
they wish to terminate, e.g. due to a misconfiguration.
Currently, embedded bots ignore calls to quit(), even
though they signal a problem. This commit does the first
step in handling quit() calls by logging a warning.
This is based on usage in bulk_change_user_names.py, and that
the RealmAuditLog acting_user field is Optional[UserProfile].
This could be more meaningfully changed in future, perhaps to
indicate that the command was run by a specific zulip user.
It catches the `UserProfile.DoesNotExist` exception and
hence prevents an internal server error.
Also removes the option to select an empty bot owner.
Fixes: #8334.
When the answer is False, this will allow the mobile app to show a
warning that push notifications will not work and the server admin
should set them up.
Based partly on Kunal's PR #7810. Provides the necessary backend API
for zulip/zulip-mobile#1507.
We keep having to change the same thing in three places here; and also
the duplicates have accumulated unnecessary variation that makes it
hard to see what's actually supposed to be different and not different
in the three cases.
* Put imports in order.
* `import stripe`; that's the style upstream docs recommend, and it avoids
confusion e.g. between our StripeError and the library's StripeError.
* Simplify loading JSON.
* Keep lines largely to 100 columns.
The error message, in relevant part:
```
zerver/templatetags/app_filters.py: error: INTERNAL ERROR --
please report a bug at https://github.com/python/mypy/issues version: 0.560
File ".../site-packages/mypy/nodes.py", line 497, in accept
return visitor.visit_func_def(self)
File ".../site-packages/mypy/report.py", line 317, in visit_func_def
assert start_indent is not None and start_indent > old_indent
AssertionError:
zerver/templatetags/app_filters.py: : note: use --pdb to drop into pdb
```
Seems mypy gets confused by the comment following the decorator.
I got distracted, came back later to a successful test run in my
terminal, and thought I remembered finishing the change and just
kicking off a final test run to check.
In fact, there was an `assert False` right in the normal case for
production, and I just hadn't finished a test for that path. (m.-)
Definitely the most grateful I've been for our coverage checks,
which highlighted this for me.
Remove the `assert False`, and also finish writing the test it was
there to help me write. Those lines are covered now.
This is a wrapper over the lru_cache function. It adds the following
features on top of lru_cache:
* It will not cache the result of functions with unhashable arguments.
* It will clear the cache whenever zerver.lib.cache.KEY_PREFIX changes.
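A rough sketch of how such a wrapper can work (simplified relative to the real decorator):

```python
from functools import lru_cache, wraps
from zerver.lib import cache  # for cache.KEY_PREFIX

def ignore_unhashable_lru_cache(maxsize=128):
    def decorator(func):
        cached_func = lru_cache(maxsize=maxsize)(func)
        state = {'key_prefix': cache.KEY_PREFIX}

        @wraps(func)
        def wrapper(*args, **kwargs):
            # KEY_PREFIX changes (e.g. between test cases); drop stale entries.
            if state['key_prefix'] != cache.KEY_PREFIX:
                cached_func.cache_clear()
                state['key_prefix'] = cache.KEY_PREFIX
            try:
                return cached_func(*args, **kwargs)
            except TypeError:
                # Unhashable arguments: call through without caching.
                return func(*args, **kwargs)

        return wrapper
    return decorator
```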
The freshly imported data shows that the users' emails are not included
in the data. However, the data received from the older method of Slack
(which uses legacy tokens) contains the email data of the users.
If some bug in Bugdown results in a rendered message content that is
bigger than twice the message size, we now just throw an exception
from Bugdown. This is considerably better than the old behavior,
which might result in an enormous message being placed in the database
(potentially, bigger than the 1MB limit to store in memcached), which
would in turn result in tragic consequences.
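A sketch of the guard (the constant and exception names here are assumptions):

```python
class BugdownRenderingException(Exception):
    pass

MAX_MESSAGE_LENGTH = 10000  # assumed limit, for illustration

def verify_rendered_size(rendered_content: str) -> None:
    # If rendering blew the content up past twice the allowed message size,
    # refuse to send rather than storing an enormous message (which could
    # even exceed memcached's 1MB limit).
    if len(rendered_content) > MAX_MESSAGE_LENGTH * 2:
        raise BugdownRenderingException('Rendered content exceeds the allowed size')
```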
This fixes #8322, in that it prevents the super bad outcome seen there
(where basically Zulip became unusable for everyone on the stream
where the message is posted). Now, the failure mode is just the
message failing to send. Still not ideal (and requires further work
on the URL embed feature), but a minor problem, not a major one.
This commit adds tests (and thus, an extra code example) for
unsubscribing another user from a particular stream by passing in
the `principals` argument to client.remove_subscriptions. The
ability to pass in `principals` was added in the latest release
of the zulip API PyPI package.
We now have a separate page for common error payloads, for example,
the payload for when the client's API key is invalid. All error
payloads that are presented on this page will be tested similarly
to our other non-error sample fixtures.