If localstorage has the `annual` schedule set, the upgrade page for
a free trial will show the annual schedule. Previously, this stored
value overrode the schedule even when we had fixed it to a specific
value; we fix that here.
For the self-hosted basic plan, we need to allow customers to
subscribe without purchasing 10 licenses, and we also need to let
customers make full use of the available discount, so that if they
add more users in the future, the full discount has already been
applied.
To fix this, we set the minimum user count to the least number of
licenses required for the charge to remain positive after applying
the complete discount.
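To illustrate the arithmetic, here is a minimal sketch, assuming a
fixed-amount discount and per-license prices in cents (names and
values are hypothetical, not the actual billing code):

```python
import math

def minimum_licenses_for_positive_charge(
    price_per_license_cents: int, discount_cents: int
) -> int:
    # Smallest license count whose total charge stays positive after
    # subtracting the complete fixed discount.
    return math.floor(discount_cents / price_per_license_cents) + 1

# E.g., a 6900-cent discount against 350 cents/license needs at least
# 20 licenses: 19 * 350 = 6650 <= 6900, but 20 * 350 = 7000 > 6900.
assert minimum_licenses_for_positive_charge(350, 6900) == 20
```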
149bea8309 added a separate config file
for smokescreen (which is necessary because it can be installed
separately) but failed to notice that `zulip.template.erb` already had
a config line for it. This leads to failures starting the logrotate
service:
```
logrotate[4158688]: error: zulip:1 duplicate log entry for /var/log/zulip/smokescreen.log
logrotate[4158688]: error: found error in file zulip, skipping
```
Remove the duplicate line.
Fixed an issue in the linkifier and custom profile tables where
dragging darker rows caused color changes in the background.
Following a CZO discussion on using alpha values in HSL,
I implemented a fix using the CSS `color-mix()` function. This approach
mixes the original color with var(--color-background-modal) in
sRGB mode, effectively eliminating the use of alpha and preventing
color leaks. For more context, see the CZO discussion:
[https://chat.zulip.org/#narrow/stream/6-frontend/topic/alphas.20in.20color.20definitions].
Fixes #26480.
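To see why this eliminates alpha: `color-mix(in srgb, ...)` is a
per-channel interpolation that yields a fully opaque color, matching
what alpha compositing over an opaque background would have produced.
A purely illustrative check in Python (not project code):

```python
def mix_srgb(fg, bg, fg_weight):
    # Per-channel linear interpolation in sRGB, as color-mix(in srgb,
    # ...) does; the result is opaque, so nothing behind it can leak.
    return tuple(
        round(a * fg_weight + b * (1 - fg_weight)) for a, b in zip(fg, bg)
    )

# Mixing a row color 30/70 with a white modal background gives the
# same channels as rendering the row at alpha 0.3 over it, minus the
# alpha itself:
assert mix_srgb((136, 84, 255), (255, 255, 255), 0.3) == (219, 204, 255)
```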
As part of the process of moving from stream names to ids, we now only
pass the stream id in compose args to `compose_actions.start()`.
Where we still need the stream name and have access to the compose
args, we compute it from the id exactly where needed, to localise
the places that use stream names.
Updates title and main description to follow the general style
of the API endpoint documentation.
Updates `token` description to clarify suggested mobile client
behavior.
Adds a set of excluded endpoints for the test of generated curl
examples in the API documentation.
Currently, only the `api/test-notify` endpoint is excluded, since
testing its generated curl example would require a push notification
bouncer to be set up.
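The mechanism amounts to skipping listed endpoints when the test
runs each generated example; a purely illustrative sketch (these
names are hypothetical, not the actual test code):

```python
# Hypothetical name for the exclusion set described above.
UNTESTED_GENERATED_CURL_EXAMPLES = {"/test-notify:post"}

def should_test_generated_curl_example(endpoint: str) -> bool:
    # Endpoints here need external setup (a push notification
    # bouncer), so their examples are skipped.
    return endpoint not in UNTESTED_GENERATED_CURL_EXAMPLES

assert not should_test_generated_curl_example("/test-notify:post")
assert should_test_generated_curl_example("/messages:post")
```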
We return expected_end_timestamp as `None` for plans scheduled to be
downgraded if the number of users is not more than
MAX_USERS_WITHOUT_PLAN, since such customers will be downgraded to
the self-managed plan and will still have push notifications enabled.
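A minimal sketch of that condition, with hypothetical names and an
illustrative constant:

```python
MAX_USERS_WITHOUT_PLAN = 10  # illustrative value

def expected_end_timestamp(
    plan_end_timestamp: int, scheduled_for_downgrade: bool, user_count: int
):
    # Small enough installations fall back to the self-managed plan
    # and keep push notifications, so no end timestamp is reported.
    if scheduled_for_downgrade and user_count <= MAX_USERS_WITHOUT_PLAN:
        return None
    return plan_end_timestamp

assert expected_end_timestamp(1735689600, True, 8) is None
assert expected_end_timestamp(1735689600, True, 25) == 1735689600
```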
Requests to these endpoints are about a specified user, and therefore
also have a notion of the RemoteRealm for these requests. Until now
these endpoints weren't getting the realm_uuid value, because it wasn't
used - but now it is needed for updating .last_request_datetime on the
RemoteRealm.
For the RemoteRealm case, we can only set this in endpoints where the
remote server sends us the realm_uuid. So we're missing that for the
endpoints:
- remotes/push/unregister and remotes/push/unregister/all
- remotes/push/test_notification
This should be added in a follow-up commit.
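Where the realm_uuid is available, the update itself is small; a
hedged sketch in Django ORM terms (the import path for RemoteRealm
and the surrounding plumbing are assumptions):

```python
from django.utils.timezone import now as timezone_now

from zilencer.models import RemoteRealm  # import path assumed

def update_remote_realm_last_request(server, realm_uuid) -> None:
    # Only possible when the remote server sent us realm_uuid; the
    # endpoints listed above don't include it yet.
    if realm_uuid is None:
        return
    RemoteRealm.objects.filter(server=server, uuid=realm_uuid).update(
        last_request_datetime=timezone_now()
    )
```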
os.path.getmtime needs to be mock.patched; otherwise the success of
the test depends on the filesystem state and breaks if version.py
hasn't been modified in a while.
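For instance, a minimal sketch of pinning the mtime in a test
(illustrative, not the exact test being fixed):

```python
import os.path
from unittest import mock

def test_independent_of_filesystem_state() -> None:
    # Pin the mtime so the result doesn't depend on when version.py
    # was last modified on the local filesystem.
    with mock.patch("os.path.getmtime", return_value=1700000000):
        assert os.path.getmtime("version.py") == 1700000000
```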
Earlier, the 'handle_customer_migration_from_server_to_realms'
function was called during the send-analytics step.
This resulted in an error for customers with multiple Zulip servers
sharing a push bouncer registration, where one server was for
testing and the others were not.
The migration step when run in a test instance caused customers to
have their legacy plan migrated to a test realm, resulting in them
losing their legacy plan.
This commit moves the migration step to run during the plan
management login step. This reduces the chances of losing the legacy
plan, as we expect customers to only verify that the 8.0 upgrade
works and not bother trying to log in to plan management from their
test instance.
`<time:1234567890123>` causes a "signed integer is greater than
maximum" exception from dateutil.parser; datetime also cannot handle
it ("year 41091 is out of range") but that is a ValueError which is
already caught.
Catch the OverflowError thrown by dateutil.
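A minimal sketch of the fix (the helper name is hypothetical; the
exception messages are the ones quoted above):

```python
import dateutil.parser

def parse_time_syntax(time_string: str):
    # "1234567890123" makes dateutil raise OverflowError ("signed
    # integer is greater than maximum"); out-of-range years surface
    # as ValueError, which was already caught.
    try:
        return dateutil.parser.parse(time_string)
    except (ValueError, OverflowError):
        return None

assert parse_time_syntax("1234567890123") is None
```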
boto3 has two different modalities of making API calls -- through
resources, and through clients. Resources are a higher-level
abstraction, and thus more generally useful, but some APIs are only
accessible through clients. It is possible to get to a client object
from a resource, but not vice versa.
Use `get_bucket(...).meta.client` when we need direct access to the
client object for more complex API calls; this lets all of the
configuration for how to access S3 sit within `get_bucket`. Client
objects are not bound to only one bucket, but we get to them based on
the bucket we will be interacting with, for clarity.
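A minimal sketch of the pattern (bucket and key names are
illustrative; the real `get_bucket` carries the project's S3
configuration):

```python
import boto3

def get_bucket(bucket_name: str):
    # All configuration for how to access S3 (credentials, endpoint,
    # etc.) is centralized here.
    return boto3.resource("s3").Bucket(bucket_name)

bucket = get_bucket("example-uploads")

# Resource-level call, for the common simple operations:
bucket.upload_file("avatar.png", "avatars/1/avatar.png")

# Client-level call via the resource's meta, for APIs that resources
# don't expose, such as presigned URLs:
url = bucket.meta.client.generate_presigned_url(
    "get_object",
    Params={"Bucket": bucket.name, "Key": "avatars/1/avatar.png"},
)
```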
We removed the cached session object, as it serves no real purpose.
e883ab057f started caching the boto client, which we had identified
as a slow call. e883ab057f went further, calling
`get_boto_client().generate_presigned_url()` once and caching that
result.
This makes the inner cache on the client useless. Remove it.
This was weird, and I think incorrect. Places that call add_message
seem to expect consistent data structures.
I tried looking into it a bit, and couldn't find anywhere where
returning true made more sense. I'm sort of confused that this
hasn't caused issues, though.
Adds a support action for updating the minimum licenses on a
customer object once a default discount has also been set.
If the current billing entity has an active plan or a scheduled
upgrade to a new plan, then the minimum licenses will not be
updated.
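A hedged sketch of those guards (the names and shape here are
hypothetical stand-ins, not the actual support code):

```python
from dataclasses import dataclass
from typing import Optional

class SupportRequestError(Exception):
    pass

@dataclass
class Customer:
    # Hypothetical stand-in for the customer object described above.
    default_discount: Optional[int]
    has_active_plan: bool
    has_scheduled_upgrade: bool
    minimum_licenses: Optional[int] = None

def set_minimum_licenses(customer: Customer, minimum_licenses: int) -> None:
    if customer.default_discount is None:
        raise SupportRequestError("Set a default discount first.")
    if customer.has_active_plan or customer.has_scheduled_upgrade:
        raise SupportRequestError(
            "Cannot update minimum licenses with an active or "
            "scheduled plan."
        )
    customer.minimum_licenses = minimum_licenses
```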
Previously, the message string was sent as a success response in the
context, which could be confusing or ignored when shown in the
support admin view.
This protects us from incorrectly handling situations where someone
tested an upgrade to 8.0 for a backup on a separate hostname, and
left the test system live while upgrading the main system, in a way
that results in duplicate RemoteRealm objects that are all marked as
locally deleted.
Further work is required to figure out how to avoid the original
duplication problem.
It seems most correct to answer the question of whether push
notifications are working specifically for the exact set of realms
that the server self-reported to us as existing.
Sending data on any additional realms that were not referenced in the
request (if that's somehow possible without them being locally
deleted) is likely to only be confusing.
And the client should reasonably be able to expect to get a response
covering exactly the realms it told us about.