Apparently, from a data flow perspective, we did essentially all the
work to support showing full topic history to newly subscribed users,
but didn't actually enable this feature by having the topic history
endpoint grant access to historical topics. This fixes that gap.
I'm not altogether happy with how the code and tests read for this
feature; the code itself has more duplication than I'd like, and the
tests do too, but it works.
There might be a case where settings.NOTIFICATION_BOT is None, so
before sending the stream announcement notification, first check that
settings.NOTIFICATION_BOT is not None.
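A minimal sketch of the intended guard, assuming a hypothetical helper
for sending the announcement (names here are illustrative, not the
actual code path):
```
from django.conf import settings

def maybe_announce_new_stream(send_announcement):
    # NOTIFICATION_BOT can be None on servers that have no notification
    # bot configured, so skip the announcement rather than crash.
    if settings.NOTIFICATION_BOT is not None:
        send_announcement(settings.NOTIFICATION_BOT)
```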
This reverts commit 620b2cd6e.
Contributors setting up a new development environment were getting
errors like this:
```
++ dirname tools/do-destroy-rebuild-database
[...]
+ ./manage.py purge_queue --all
Traceback (most recent call last):
[...]
File "/home/zulipdev/zulip/zproject/legacy_urls.py", line 3, in <module>
import zerver.views.streams
File "/home/zulipdev/zulip/zerver/views/streams.py", line 187, in <module>
method_kwarg_pairs: List[FuncKwargPair]) -> HttpResponse:
File "/usr/lib/python3.5/typing.py", line 1025, in __getitem__
tvars = _type_vars(params)
[...]
File "/usr/lib/python3.5/typing.py", line 277, in _get_type_vars
for t in types:
TypeError: 'ellipsis' object is not iterable
```
The issue appears to be that we're using the `typing` module from the
3.5 stdlib, rather than the `typing=3.6.2` in our requirements files,
and that doesn't understand the `Callable[..., HttpResponse]` that
appears in the definition of `FuncKwargPair`.
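For reference, the construct that triggers the error looks roughly like
this (simplified from zerver/views/streams.py; the exact alias
definition may differ):
```
from typing import Any, Callable, Dict, List, Tuple

from django.http import HttpResponse

# A view-like callable paired with the keyword arguments to call it with.
# The `Callable[..., HttpResponse]` ellipsis is what the Python 3.5 stdlib
# typing module fails to handle.
FuncKwargPair = Tuple[Callable[..., HttpResponse], Dict[str, Any]]

def compose_views(method_kwarg_pairs: List[FuncKwargPair]) -> HttpResponse:
    ...
```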
Revert for now to get provision working again; at least one person
reports that reverting this sufficed. We'll need to do more testing
before putting this change back in.
Since subscribed_to_stream is only doing an id lookup
on the Stream model to find out if a user is subscribed to
a stream, there's no reason to require a full Stream object.
It's currently the case that all callers do have full Stream
objects handy to pass in to this function, but it's still a
good practice to have functions only ask for objects that they
need.
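A hedged sketch of the resulting signature, assuming the usual
Subscription/Recipient models (the query details are illustrative):
```
from zerver.models import Recipient, Subscription  # assumed imports

def subscribed_to_stream(user_profile, stream_id):
    # Only the stream id is needed for the membership check, so callers
    # are not forced to fetch a full Stream object first.
    return Subscription.objects.filter(
        user_profile=user_profile,
        active=True,
        recipient__type=Recipient.STREAM,
        recipient__type_id=stream_id,
    ).exists()
```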
In anticipation of having all unread message ids available to the
web app in page_params (via a separate effort), we are simplifying
the /topics endpoint to no longer return unread counts.
Instead we return a list of tiny dictionaries with these fields:
name - name of the topic
max_id - max message id for the topic (aka most recent)
The items in the list are ordered most-recent-topic-first.
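For illustration, the topic list in the response now looks something
like this (values made up):
```
topics = [
    {"name": "lunch", "max_id": 1029},            # most recent topic first
    {"name": "sprint planning", "max_id": 987},
    {"name": "office hours", "max_id": 312},
]
```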
This route is called only in `js/compose.js`, to handle autosubscribe.
That code doesn't check this "exists" field, because there's no need
-- the same information is already carried in whether the result was
success or failure. So just eliminate it.
This makes the logic here a little simpler. It also eliminates
another usage of the `data` parameter to `json_error`. I have half a
mind to eliminate that parameter, in favor of making `JsonableError`
subclasses whenever there's structured data to include, in particular
to get the benefits of typing. There are a couple of places where
that change isn't locally a clear win, but this is not one of them.
This provides the main infrastructure for fixing #5598. From here,
it's a matter of on the one hand upgrading exception handlers -- the
many except-blocks in the codebase that look for JsonableError -- to
look beyond the string `msg` and pass on the machine-readable full
error information to their various downstream recipients, and on the
other hand adjusting places where we raise errors to take advantage
of this mechanism to give the errors structured details.
In an ideal future, I think all exception handlers that look (or
should look) for a JsonableError would use its contents in structured
form, never mentioning `msg`; but the majority of error sites might
continue to just instantiate JsonableError with a string message. The
latter is the simplest thing to do, and probably most error types will
never have code looking for them specifically.
Because the new API refactors the `to_json_error_msg` method, which was
designed for subclasses to override, update the 4 subclasses that did
so to take full advantage of the new API instead.
This simplifies things for all codepaths not involving this feature.
Using this feature becomes slightly easier when you're already
defining a subclass, but now requires you to define a subclass.
Currently we use it just once out of >100 uses of JsonableError, and
that use already has a subclass, so this seems like a win.
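As a hedged sketch of the direction (the module path and constructor
details are assumptions, not necessarily the final API), an error that
carries structured details becomes a small JsonableError subclass
rather than a json_error(..., data=...) call:
```
from zerver.lib.exceptions import JsonableError  # assumed module path

class StreamDoesNotExistError(JsonableError):
    # Illustrative subclass: the stream name travels with the exception as
    # structured data, and the human-readable `msg` is derived from it.
    def __init__(self, stream_name):
        self.stream_name = stream_name
        super().__init__("Stream '{}' does not exist".format(stream_name))
```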
With #5598 there will soon be an application-level error code
optionally associated with a `JsonableError`, so rename this
field to make clear that it specifically refers to an
HTTP status code.
Also take this opportunity to eliminate most of the places
that refer to it, which only do so to repeat the default value.
The whole thing is an error, so "message" is a more apt word for the
error message specifically. We abbreviate that as `msg` in the actual
HTTP responses and in the signatures of `json_error` and friends, so
do the same here.
In most cases, we do have data on which other user was
responsible for subscribing the target user to new streams.
The main case where we don't is when the user is created and gets the
default streams.
When we create a stream, we usually send a welcome message on the
stream itself as well as an announcement on the announcement stream,
but we no longer PM the individual users. Hopefully this will be
more pleasant for users (less spammy), and it also will make creating a
stream a lot faster.
We still send notifications when we add subscribers to an existing
stream.
This is one of the last major endpoints that were still done in the
pre-REST style.
While we're at it, we change the endpoint to expect a stream ID, not a
stream name.
The new function takes a full UserProfile object for the sender,
which allows us to avoid O(N) calls when creating the stream to
find the user profile of the notification bot. (The calls were
already cached, so this won't necessarily be a huge performance
win.)
We also don't have to worry about sending a blank subject any more.
The new, more direct interface for prepping internal stream
messages circumvents the bug-prone extract_recipients() method,
which has the pitfall that it will try to parse a stream name
as JSON. It also takes a UserProfile object for the sender, so
it's a bit more type-safe.
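A rough sketch of the shape of the new interface (the name, parameters,
and return value are illustrative):
```
def internal_prep_stream_message(realm, sender, stream_name, topic, content):
    # `sender` is a full UserProfile, so callers don't repeat the
    # notification-bot lookup, and `stream_name` is taken as-is rather
    # than going through extract_recipients(), which would try to parse
    # it as JSON.
    return {
        "realm": realm,
        "sender": sender,
        "stream": stream_name,
        "topic": topic,
        "content": content,
    }
```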
A bug in Zulip's implementation of the "stream exists" endpoint meant
that any user of a Zulip server could subscribe to an invite-only
stream without needing to be invited by using the "autosubscribe"
argument.
Thanks to Rafid Aslam for discovering this issue.
In order to correctly handle messages sent by cross-realm bots, we
need to specify the realm that the messages are being sent into in the
send message code path. This commit and its successors convert that
code path to explicitly include the realm the message is being sent to.
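A hedged sketch of the kind of signature change involved (the function
and parameter names are illustrative):
```
def check_send_message(sender, client, message_type_name, message_to,
                       topic_name, message_content, realm=None):
    # Cross-realm bots live in a special realm, so the realm the message
    # is being sent into must be passed explicitly; otherwise we fall
    # back to the sender's own realm.
    realm = realm if realm is not None else sender.realm
    ...
```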
- Change `stream_name` to `stream_id` on some API endpoints that use
`stream_name` in their URLs, to avoid ambiguity in which view gets
selected. For example:
If the stream name is "foo/members", the URL would trigger
"^streams/(?P<stream_name>.*)/members$", which is confusing because
we intend it to be handled by the "^streams/(?P<stream_name>.*)$"
pattern instead (see the sketch below the list).
All stream-related endpoints now use stream id instead of stream name,
except for a single endpoint that lets you convert stream names to stream ids.
See https://github.com/zulip/zulip/issues/2930#issuecomment-269576231
- Add `get_stream_id()` method to Zulip API client, and change
`get_subscribers()` method to comply with the new stream API
(replace `stream_name` with `stream_id`).
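A runnable sketch of why stream names in URLs are ambiguous and stream
ids are not (patterns abbreviated from the actual urls files):
```
import re

# Old style: a stream literally named "foo/members" matches the members
# pattern (with stream_name="foo") instead of the plain stream pattern.
members_pattern = re.compile(r'^streams/(?P<stream_name>.*)/members$')
stream_pattern = re.compile(r'^streams/(?P<stream_name>.*)$')
assert members_pattern.match('streams/foo/members')  # unintended match

# New style: numeric stream ids route unambiguously.
id_members_pattern = re.compile(r'^streams/(?P<stream_id>\d+)/members$')
assert id_members_pattern.match('streams/42/members')
assert not id_members_pattern.match('streams/foo/members')
```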
Fixes #2930.
Previously, we included a special subscribe button in new stream
notifications, but that had 2 problems:
(1) The subscribe button would render badly if the stream was renamed.
(2) There wasn't an easy way to look at the stream when deciding
whether to subscribe.
This fixes the second problem, but not really the first.
Refactor list_to_streams and create_streams_if_needed to take a list
of dictionaries, instead of a list of stream names. This is
preparation for being able to pass additional arguments into the
stream creation process.
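A sketch of the new calling convention; field names other than "name"
are illustrative of the kind of per-stream options this enables:
```
# Before: create_streams_if_needed(realm, ['social', 'engineering'])
# After: each stream is a dict, leaving room for extra per-stream arguments.
streams_raw = [
    {"name": "social", "invite_only": False},
    {"name": "engineering", "invite_only": True},
]
# created, existing = create_streams_if_needed(realm, streams_raw)
```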
An important note: This removes a set of validation code from the
start of add_subscriptions_backend; doing so is correct because
list_to_streams has that same validation code already.
[with some tweaks by tabbott for clarity]
This moves the logic for renaming a stream to the REST API
update_stream_backend method, eliminating the legacy API endpoint for
doing so.
It also adds a nice test suite covering international stream names.
The list_to_streams() method now uses create_streams_if_needed() to
do its heavy lifting during the autocreate=True case.
This commit gets us to 100% coverage on the streams view. (The
recently created action.create_streams_if_needed() was easy
to test in isolation, and it has 100% coverage as well, so we are
not cheating here.)
Fixes: #1005.
This commit extracts compose_views() from update_subscriptions_backend(),
and it implements the correct behavior for forcing transactions to roll
back, which is to raise an exception.
There were really three steps in this commit:
- Extract buggy code to compose_views().
- Add tests on compose_views().
- Fix bugs exposed by the new tests by converting errors to exceptions.
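A minimal sketch of the rollback pattern, much simplified relative to
the real compose_views:
```
import json

from django.db import transaction

class BatchError(Exception):
    # Stand-in for JsonableError in this sketch.
    pass

def compose_views(request, user_profile, method_kwarg_pairs):
    # Run all sub-views inside one transaction.  Raising an exception is
    # what actually rolls the transaction back; merely returning an error
    # response would leave the earlier sub-views' writes committed.
    result = {}
    with transaction.atomic():
        for method, kwargs in method_kwarg_pairs:
            response = method(request, user_profile, **kwargs)
            if response.status_code != 200:
                raise BatchError(response.content)
            result.update(json.loads(response.content.decode()))
    return result
```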
For a long time, rest_dispatch has had this hack where we have to
create a copy of it in each views file using it, in order to directly
access the globals list in that file. This removes that hack, instead
making rest_dispatch just use Django's import_string to access the
target method to use.
[tweaked and reorganized from acrefoot's original branch in various
ways by tabbott]
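A hedged sketch of the core idea, greatly simplified relative to the
real rest_dispatch:
```
from django.utils.module_loading import import_string

def rest_dispatch(request, **kwargs):
    # Handlers are named as dotted-path strings in the urls files
    # (e.g. 'zerver.views.streams.update_stream_backend'), so the
    # per-views-file copies of rest_dispatch that peeked at each file's
    # globals() are no longer needed.
    handler_name = kwargs[request.method]  # e.g. kwargs['PATCH']
    target_function = import_string(handler_name)
    return target_function(request)
```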
As documented in https://github.com/zulip/zulip/issues/441, Guardian
has quite poor performance, and in fact almost 50% of the time spent
running the Zulip backend test suite on my laptop was inside Guardian.
As part of this migration, we also clean up the old API_SUPER_USERS
variable used to mark EMAIL_GATEWAY_BOT as an API super user; now that
permission is managed entirely via the database.
When rebasing past this commit, developers will need to do a
`manage.py migrate` in order to apply the migration changes before the
server will run again.
We can't yet remove Guardian from INSTALLED_APPS, requirements.txt,
etc. in this release, because otherwise the reverse migration won't
work.
Fixes #441.