Previously, we displayed "0 subscribers" if the user couldn't access
a stream's subscribers. Replace the subscriber count with a lock
icon for private streams the user isn't subscribed to, in the
"All streams" list.
Display a warning saying "You can not access private stream
subscribers, in which you aren't subscribed" when the user cannot
access the subscribers, instead of showing zero subscribers for the
stream.
It makes sense to make our Python and JS API examples more visible
than our curl examples, since Python is what most people will tend
to use.
Also, from a design perspective, an API documentation page that
starts off with a shiny Python example with syntax highlighting
looks much better than one where a bland curl example is the first
thing readers see.
In order to do development on the installer itself in a sane way,
we need a reasonably fast and automatic way to get a fresh environment
to try to run it in.
This calls for some form of virtualization. Choices include
* A public cloud, like EC2 or Digital Ocean. These could work, if we
wrote some suitable scripts against their APIs, to manage
appropriate base images (as AMIs or snapshots respectively) and to
start fresh instances/droplets from a base image. There'd be some
latency on starting a new VM, and this would also require the user
to have an account on the relevant cloud with API access to create
images and VMs.
* A local whole-machine VM system (hypervisor) like VirtualBox or
VMware, perhaps managing the configuration through Vagrant. These
hypervisors can be unstable and painfully slow. They're often the
only way to get development work done on a Mac or Windows machine,
which is why we use them there for the normal Zulip development
environment; but I don't really want to find out how their
instability scales when constantly spawning fresh VMs from an image.
* Containers. The new hotness, the name on everyone's lips, is Docker.
But Docker is not designed for virtualizing a traditional Unix server,
complete with its own init system and a fleet of processes with a
shared filesystem -- in other words, the platform Zulip's installer
and deployment system are for. Docker brings its own quite
different model of deployment, and someday we may port Zulip from
the traditional Unix server to the Docker-style deployment model,
but for testing our traditional-Unix-server deployment we need a
(virtualized) traditional Unix server.
* Containers, with LXC. LXC provides containers that function as
traditional Unix servers; because of the magic of containers, the
overhead is quite low, and LXC offers handy snapshotting features
so that we can quickly start up a fresh environment from a base
image. Running LXC does require a Linux base system. For
contributors whose local development machine isn't already Linux,
the same solutions are available as for our normal development
environment: the base system for running LXC could be e.g. a
Vagrant-managed VirtualBox VM, or a machine in a public cloud.
This commit adds a first version of such a thing, using LXC to manage
a base image plus a fresh container for each test run. The test
containers function as VMs: once installed, all the Zulip services run
normally in them and can be managed in the normal production ways.
This initial version is short on usage messages and docs, and
likely has some sharp edges. It also requires familiarity with the
basics of LXC commands in order to make good use of the resulting
containers: `lxc-ls -f`, `lxc-attach`, `lxc-stop`, and `lxc-start`,
in particular.
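As a rough illustration of that base-image-plus-fresh-container
workflow (not the actual tooling added in this commit), here is a
Python sketch driving the standard LXC CLI via subprocess; the base
image and container names are made up, and the real scripts may well
differ:

```python
# Illustrative sketch only: clone a fresh container from a base image
# for one test run, start it, poke at it, and tear it down afterwards.
# The names "zulip-install-base" and "zulip-install-test-1" are
# invented.  LXC commands typically need to be run as root.
import subprocess

BASE = "zulip-install-base"      # hypothetical base image/container
RUN = "zulip-install-test-1"     # hypothetical per-test-run container

# Snapshot-clone the base container so each run starts from a clean image.
subprocess.check_call(["lxc-copy", "--snapshot", "--name", BASE, "--newname", RUN])
subprocess.check_call(["lxc-start", "--name", RUN])

# Run a command inside the container, e.g. to kick off the installer.
subprocess.check_call(["lxc-attach", "--name", RUN, "--", "uname", "-a"])

# Inspect running containers with `lxc-ls -f`, then clean up when done.
subprocess.check_call(["lxc-stop", "--name", RUN])
subprocess.check_call(["lxc-destroy", "--name", RUN])
```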
This changes the way 'Sending...' is displayed. Instead of
hardcoding it into the template, we change the paradigm so that we
can show a flexible message about what's happening, rather than
always saying 'Sending...'. For example, this will help with the
upcoming Scheduled Messages feature by having this message say
'Scheduling...'.
This is responsible for:
1.) Handling all incoming requests at the messages endpoint that have
the defer param set. This is similar to send_message_backend, except
that instead of actually sending a message it schedules one to be
sent later.
2.) Doing some preliminary checks, such as validating the timestamp
for scheduling a message, preventing scheduling a message in the
past, and ensuring the message to be scheduled has the correct format.
3.) Extracting the time of scheduled delivery from the message.
4.) Adding tests for the newly introduced function.
5.) timezone: Adding get_timezone() to obtain a tz object from a
string. This helps in obtaining a timezone (tz) object from a
timezone specified as a string. The string needs to be a pytz-defined
timezone name, which we use to specify users' local timezones.
(A minimal sketch of such a helper follows below.)
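To make the get_timezone() piece and the past-timestamp check
concrete, here is a minimal sketch assuming pytz; apart from
pytz.timezone(), the function names are illustrative, not the actual
endpoint code:

```python
# Illustrative sketch only: a get_timezone() wrapper around pytz and a
# simple "no scheduling in the past" check.  check_deliver_at() is an
# invented name, not the function added in this commit.
from datetime import datetime, tzinfo

import pytz

def get_timezone(tz_name: str) -> tzinfo:
    # tz_name must be a pytz-defined timezone string, e.g.
    # "America/New_York"; pytz raises UnknownTimeZoneError otherwise.
    return pytz.timezone(tz_name)

def check_deliver_at(deliver_at: datetime, tz_name: str) -> datetime:
    # Localize the naive timestamp to the user's timezone and reject
    # delivery times that are already in the past.
    tz = get_timezone(tz_name)
    deliver_at_utc = tz.localize(deliver_at).astimezone(pytz.utc)
    if deliver_at_utc <= datetime.now(pytz.utc):
        raise ValueError("Scheduled delivery time must be in the future.")
    return deliver_at_utc
```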
This is a server setting, so I created a "Server settings" section in
the help sidebar for it to go under. I also rewrote the copy and
retook the images that were originally done by @Privisus, due to some
issues with them.
Hopefully this version makes it somewhat clearer how the different
methods relate to each other, how to choose between them, what
`ZulipRemoteUserBackend` is for, and how the latter works.
The main cleanup here is to move the examples out of the bulleted
list, which was getting rendered in an ugly/confusing way.
I also removed some redundant text, fixed some typos, and changed
the wording a bit.
This code takes care of the environment running Python 3.4 when a
test label is passed directly to the test-backend command:
./tools/test-backend test_alert_words
Several changes:
* De-duplicate code for different error types.
* No need to list lots of error subtypes where we aren't treating
them differently; StripeError is the base class of them all.
* Unexpected, non-Stripe-related exceptions we can handle in the normal
  way; just make them show up in the billing-specific log too.
* The Stripe client library already logs type, code, param, and message
before raising an error, so we don't need to repeat those; just add the
HTTP status code (because it's not there already and sure why not),
and the Python exception type the client library chose to raise
in case that makes things a bit easier to interpret.
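A rough sketch of what the resulting error handling might look like;
this is illustrative only, and the billing_logger name and decorator
shape are assumptions rather than the code from this commit:

```python
# Illustrative sketch: catch all Stripe errors via the StripeError base
# class, log the HTTP status and the concrete exception type to a
# billing-specific logger, and let unexpected exceptions propagate
# after logging them too.  "billing_logger" is an assumed name.
import functools
import logging

import stripe

billing_logger = logging.getLogger("billing")

def catch_stripe_errors(func):
    @functools.wraps(func)
    def wrapped(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except stripe.error.StripeError as e:
            # The Stripe client library has already logged type, code,
            # param, and message; add the HTTP status and exception class.
            billing_logger.error(
                "Stripe error: %s %s", e.http_status, type(e).__name__
            )
            raise
        except Exception:
            # Unexpected, non-Stripe exceptions: handle them normally,
            # but make sure they show up in the billing log as well.
            billing_logger.exception("Uncaught error in billing code")
            raise
    return wrapped
```

Re-raising after logging keeps the normal handling for unexpected
exceptions while still leaving a trace in the billing log.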