This is a simple computed field. It's intended to more clearly
capture the meaning of this restriction for users in Zephyr mirror
realms, and eventually to support guest user accounts in normal Zulip
realms.
This is part of the effort to remove the use of is_zephyr_mirror_realm
across code paths for situations that might be relevant for other
users. It helps keep the code readable.
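As a rough illustration of what such a computed field could look like (the property name and logic below are assumptions, not the actual change):
```py
# Illustrative sketch (names are assumptions, not the actual Zulip API):
# a computed field on the user model, so call sites ask a question about
# the user instead of checking realm.is_zephyr_mirror_realm directly.
class UserProfile:
    def __init__(self, is_guest: bool, is_zephyr_mirror_realm: bool) -> None:
        self.is_guest = is_guest
        self.is_zephyr_mirror_realm = is_zephyr_mirror_realm

    @property
    def can_access_public_streams(self) -> bool:
        # Zephyr mirror realm users (and, eventually, guests) can't browse
        # public streams they aren't subscribed to.
        return not (self.is_guest or self.is_zephyr_mirror_realm)
```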
This commit sends the event for renaming of a private stream to
organization admins of the realm, in addition to the obvious list of
subscribers of the private stream.
Normally, admins can manage a private stream (e.g. unsubscribing a
user). But when an admin tried to unsubscribe a user from a
previously renamed stream, we were throwing a JS error, because
the webapp hadn't been notified about the new stream name.
Fixes #9034.
This was stored as a fixture file under zerver/fixtures, which caused
problems, since we don't ship that directory to production (as it's
part of the test system).
The simplest emergency fix here would be to just move the file, but
when looking at it, it's clear that we don't need or want a fixture
file here; we want a Python object, so we just do that.
A valuable follow-up improvement to this block would be to create an
actual new Realm object (not saved to the database), and dump it with
the same code we use in the export tool; that should handle the vast
majority of these correctly.
Fixes #9123.
The size information of an avatar is not required during the import.
See the functions 'import_uploads_local' and 'import_uploads_s3'
in 'export.py' for details.
We should still short-circuit the iteration in
`add_missing_messages` if the unsubscription was the last
thing to happen to the user before soft deactivation.
Rishi and I decided that it makes sense to get rid of the Facebook
integration for a few reasons, some of which are:
* The setup process is too complicated on Facebook's end. The users
will surely have to browse Facebook's huge API reference before even
having a vague idea of what they want.
* Slack chooses not to have a Facebook integration, but relies on
Zapier for it. Zaps that integrate with Facebook are much more
streamlined and the setup process isn't as much of a pain. Zapier's
Facebook Zaps are much more fine-tuned and there are different Zaps
for different parts of the FB API, a luxury that would likely span
2K+ lines of code on our end if we were to implement it from
scratch. So, I think we should relegate integration with Facebook to
Zapier as well!
* After testing the setup process ourselves, we concluded that the
person who submitted the FB integration didn't really test it
thoroughly, because there were some gaping holes in the docs (missing
steps, user permissions, etc.).
This extends the /user_uploads API endpoint to support passing the
authentication credentials via the URL, not the HTTP_AUTHORIZATION
headers. This is an important workaround for the fact that React
Native's Webview system doesn't support setting HTTP_AUTHORIZATION;
the app will be responsible for rewriting URLs for uploaded files
directly to add this parameter.
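A small sketch of the URL rewriting the app would do; the query parameter name ("api_key") is an assumption here, not necessarily the exact name the endpoint accepts:
```py
# Illustrative sketch of the client-side URL rewriting; the "api_key"
# query parameter name is an assumption.
from urllib.parse import urlencode, urlparse, urlunparse

def add_api_key_to_upload_url(url: str, api_key: str) -> str:
    parts = urlparse(url)
    extra = urlencode({"api_key": api_key})
    query = parts.query + "&" + extra if parts.query else extra
    return urlunparse(parts._replace(query=query))

# e.g. add_api_key_to_upload_url("/user_uploads/1/ab/file.png", "SECRET")
# -> "/user_uploads/1/ab/file.png?api_key=SECRET"
```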
This commit increases the rendered_content limit from 2x to 10x of the
original message length.
Earlier, we had placed a limit of MAX_MESSAGE_LENGTH * 2 for the
rendered content (explained in commit
77addc5456). That limit was based on
the assumption that in most cases, the rendered content wouldn't cause
a large increase in message length. However, most prominently for
syntax-highlighted code blocks, that isn't true, and this caused the
limit condition to be hit for long messages composed primarily of code
blocks.
Example: The following message would render close to 10x its original size.
```py
if:
def:
print("x", var)
x = y
```
Because syntax-highlighted content is extremely compressible, having
rendered_content reach up to 100KB doesn't create a network
performance problem.
The medium-size avatar image, if it exists, is deleted before calling
the function 'ensure_medium_avatar_image', to avoid potentially confusing
problems with left-over medium-size avatar images from a previous run
being used when repeatedly importing the same realm in a development
environment.
Fixes #8949.
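A rough sketch of the idea for the local-storage case; the "-medium.png" path convention and function name below are assumptions:
```py
# Illustrative sketch only: remove any stale medium-size avatar before
# regenerating it, so repeated imports don't pick up a left-over file.
# The "-medium.png" path convention is an assumption.
import os

def refresh_medium_avatar(avatar_path: str) -> None:
    medium_path = avatar_path + "-medium.png"
    if os.path.exists(medium_path):
        os.remove(medium_path)  # left-over from a previous import run
    # ...then call ensure_medium_avatar_image() to regenerate the file.
```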
Remove models.get_user_profiles_by_ids(), and obtain the user's bot
profiles in actions.get_service_dicts_for_bots() via
users.user_ids_to_users() instead.
Fixes #8939
We don't intend for this server-level setting to exist in the long
term; the purpose of this is just to make it easy to test this code
path for development purposes.
This implements much of the Message-side part of #2745.
The new name can_access_stream_history_by_name gets to the point of
what this function actually does. And passing in a user object lets
us define the behavior based on whether the user is subscribed.
The comments explain this pretty well, but basically because we
rewrite the realm ID during the import process, we need to edit all
the message bodies that link to an attachment to instead link to the
post-processed URL where that file will be hosted on the new server.
Fixes #8926.
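A rough sketch of the kind of rewrite involved, assuming upload URLs embed the realm ID as their first path component (the URL layout and helper name are assumptions):
```py
# Illustrative sketch: rewrite /user_uploads/<old_realm_id>/... links in a
# message body so they point at the path the file will occupy on the new
# server.  The URL layout assumed here is an approximation.
import re

def fix_upload_links(content: str, old_realm_id: int, new_realm_id: int) -> str:
    pattern = r"/user_uploads/%d/" % (old_realm_id,)
    return re.sub(pattern, "/user_uploads/%d/" % (new_realm_id,), content)

# e.g. fix_upload_links("see /user_uploads/3/ab/report.pdf", 3, 42)
# -> "see /user_uploads/42/ab/report.pdf"
```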
Implement a few optimizations for reading an admin's bot dicts from the
database in a constant number of queries (see the sketch below):
- add models.get_user_profiles_by_ids() for reading bot profiles
  from the database with a single query
- add models.get_services_for_bots() for reading services for bots
  from the database with a single query
- add bot_config.get_bot_configs() for reading config data for bots
  from the database with a single query
Fixes #8838
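The batching idea behind those helpers, sketched in Django ORM terms; the function names match the commit, but the bodies here are illustrative assumptions rather than the actual implementations:
```py
# Illustrative sketch: fetch all bot profiles and their services with one
# query each, instead of one query per bot.  Treat these bodies as an
# approximation of the real helpers, not the actual implementations.
from zerver.models import Service, UserProfile

def get_user_profiles_by_ids(user_ids):
    return list(UserProfile.objects.filter(id__in=user_ids))

def get_services_for_bots(bot_ids):
    services_by_bot = {bot_id: [] for bot_id in bot_ids}
    for service in Service.objects.filter(user_profile_id__in=bot_ids):
        services_by_bot[service.user_profile_id].append(service)
    return services_by_bot
```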
A 'processing_emojis' check is added in the 'import_uploads'
function, so that the emoji files present in the data being
imported can be uploaded.
The procedure for saving emoji files in the Slack importer is the same
as for saving attachments and avatars, and the import follows a similar
procedure.
Change the 'get_user_data' function into a more general function
for getting data from the Slack API using legacy tokens.
Also, change the error handling: with an invalid token,
the response status is 200, but the response has an error
field in it.
For example, visit the following link with an invalid token:
https://slack.com/api/emoji.list?token=xoxp-249056023425
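A small sketch of that error handling, following the standard Slack Web API convention that every response is HTTP 200 with an "ok" flag and, on failure, an "error" field (the function signature here is illustrative):
```py
# Illustrative sketch: Slack returns HTTP 200 even for bad tokens, so we
# must inspect the JSON body ("ok"/"error") rather than the status code.
import requests

def get_slack_api_data(url: str, get_param: str, token: str):
    response = requests.get("%s?token=%s" % (url, token))
    if response.status_code != 200:
        raise Exception("HTTP error accessing the Slack API.")
    result = response.json()
    if not result["ok"]:
        # e.g. {"ok": false, "error": "invalid_auth"} for a bad token
        raise Exception("Error accessing Slack API: %s" % (result["error"],))
    return result[get_param]

# e.g. get_slack_api_data("https://slack.com/api/emoji.list", "emoji", token)
```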
Remove the allocation ID function from the Slack import script. All
ID counts will now start from 0, so the ID list returned by the
allocation function is of no use, and we remove its implementation.
(For example, the get_total_messages_and_attachments function is no
longer needed, so we remove it.)
In importing avatars, we use the implementation where the 'avatar_path'
is separately calculated using the realm and user ID, and then the content
of the path provided in the avatar's 'records.json' is copied to this
'avatar_path'.
Similarly, for the uploads, 's3_file_name' is separately calculated
using the realm ID and the uploaded file name, and then the content of the
path provided in the upload's 'records.json' is copied to this 's3_file_name'.
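A rough sketch of that flow for the uploads case; the records.json keys and path layout below are assumptions based on the description above:
```py
# Illustrative sketch: recompute the destination path from the new realm
# ID plus the uploaded file's name, then copy the file named in the
# records.json entry to that path.  Record keys and layout are assumptions.
import os
import shutil

def import_upload_record(record: dict, new_realm_id: int, uploads_dir: str) -> None:
    file_name = os.path.basename(record["s3_path"])
    s3_file_name = os.path.join(str(new_realm_id), file_name)
    destination = os.path.join(uploads_dir, s3_file_name)
    os.makedirs(os.path.dirname(destination), exist_ok=True)
    shutil.copy(record["path"], destination)
```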
'recipient_field' is added as a bool variable to the function
'update_id_map' to update the Recipient foreign keys.
The Recipient foreign key equals the UserProfile ID if the
type is 1, and the Stream ID if the type is 2.
Hence a check is added in 'update_id_map' for this.
All the objects with a realm ID as a foreign key need to
be remapped and updated with the allocated ID.
Also, the ID of the realm object itself is updated with the allocated
ID.
The 'id_field' bool variable is added to the function just to check
whether the field is the ID of that object itself, rather than a foreign
key relation.
For foreign key field names, an "_id" suffix has to be appended to the
field name; however, we don't need that for the object's own ID field.
Fixes #8853.
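A minimal sketch of that remapping, assuming the exported data is a list of dicts keyed by field name; the helper name and signature below are illustrative:
```py
# Illustrative sketch: remap a field's old IDs to newly allocated IDs.
# For a foreign key we touch "<field>_id"; with id_field=True we touch
# the object's own ID column instead.  Names here are assumptions.
def re_map_ids(objects, field_name, id_map, id_field=False):
    key = field_name if id_field else field_name + "_id"
    for obj in objects:
        old_id = obj[key]
        if old_id in id_map:
            obj[key] = id_map[old_id]
```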
In certain cases, the browser is not able to look up the message.
Include the recipient data for the message in the delete_message event,
so the browser doesn't need to look up those attributes itself.
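A sketch of what such an event payload might carry; the field names below are assumptions for illustration, not the documented event format:
```py
# Illustrative sketch only: include enough recipient data in the event
# that the client need not find the message locally.  Field names here
# are assumptions, not the documented event format.
event = {
    "type": "delete_message",
    "message_id": 12345,
    "message_type": "stream",
    "stream_id": 7,
    "topic": "example topic",
}
```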
This PR solves some of the parity issues in the emoticon translation
logic. I was unable to find a way of matching only one of the
lookaround groups, so we still have some inconsistency (see the
testcase). Adding another check during conversion just for this
seemed inefficient, so I've left that last case as it is.
This bot was basically a duplicate of NOTIFICATION_BOT for some
specific corner cases, and didn't add much value. It's better to just
eliminate it, which also removes some ugly corner cases around what
happens if the user account doesn't exist.
Since the test database is in part controlled by the Zulip settings
files for testing, these settings files should be included in the list
of files that require populate_db to be rerun.
This issue was found due to changes to internal bots.
Also switches the default behaviour of the code to not translate the
emoticons. Earlier, the code was testing-aware, and used to translate
when there was no user profile data available (assuming that meant a
testing environment).
The webhook-errors.log file is cluttered with Stream.DoesNotExist
errors, which hide the errors that we actually need to see. So,
since check_message already sends the bot owner a PM if the webhook
bot tries to send a message to a non-existent stream, we can ignore
such exceptions.
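A minimal sketch of the idea, assuming an exception-logging wrapper around webhook views (the decorator shape and logger name are assumptions):
```py
# Illustrative sketch: skip logging Stream.DoesNotExist in the webhook
# error path, since check_message already PMs the bot owner when the
# target stream is missing.  Decorator shape and logger name are assumptions.
import logging
from functools import wraps

from zerver.models import Stream

webhook_logger = logging.getLogger("zulip.zerver.webhooks")

def log_webhook_exceptions(view_func):
    @wraps(view_func)
    def wrapper(request, *args, **kwargs):
        try:
            return view_func(request, *args, **kwargs)
        except Stream.DoesNotExist:
            raise  # don't log; the bot owner was already notified
        except Exception:
            webhook_logger.exception("webhook error")
            raise
    return wrapper
```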