Ideally we would not be relying on something that requires a 4-line
comment, and also makes it harder to add more static elements at the
end of the navbar, but this block should be acceptable for now.
One alternative would be a "grow-1" class or similar, but we might
need to think that through.
Prior to commit eb4a2b9d4e the center
area of the navbar was based on a structure that appended crumbs or
"tabs" as <li>s, forming a tab_bar and a tab_list.
However, in eb4a2b9d4e we applied a new
style and structure to the navbar which lets go of the convention of
tabs. Hence, we'd like to purge the tab_bar and tab_list labels from
our code base. This commit pushes us towards that goal.
Previously, this element was part of both searchbox and
searchbox_legacy, of which only one would render, based on the search
pills flag. This wasn't great because:
* it made it likely that someone would change only one of the two and
unintentionally introduce regressions.
* it meant that search_icon selectors within the searchbox would mess
with the search_icon elements within the tab_bar, leading us to rely
on messy CSS overriding.
Since there doesn't seem to be a strong reason to keep the previous
arrangement, this commit extracts "#tab_bar".
It's worth keeping in mind that we use the "#tab_bar" element as an
anchor to append the #tab_list onto.
Previously, we had the entire div within the conditional, instead of
just the contents, which were the only variable elements.
This change moves the conditional so that it wraps just the contents
of the div, which improves readability.
Previously, the navbar sub count would not live update as users
subscribed or unsubscribed; this commit adds the relevant calls in
stream_events.
It would have been better to have a single call within
server_events_dispatch, but that seems difficult due to the way
mark_subscribed and mark_unsubscribed are structured.
stream_events.mark_unsubscribed conditionally calls
subs.update_settings_for_unsubscribed, which calls
subs.rerender_subscriptions_settings and as such handles the update
for the subscriptions modal on its own. Hence, we simply rely on
stream_data.update_calculated_fields to ensure the subscriber counts
are updated, and make a call to
tab_bar.maybe_rerender_title_area_for_stream(sub).
stream_events.mark_subscribed is similar.
Previously, we were overriding narrow_state.is_for_stream_id() to make
sure we test the functions we intend to.
In this commit we:
* zrequire('narrow_state')
* set the filter to "stream:frontend" before the test cases which
were overriding is_for_stream_id to return true (and remove the
overrides).
* reset_current_filter() at the end of the above cases (and remove
the lines overriding is_for_stream_id to return false)
This is a prep commit for adding live update of the sub_count in the
navbar.
Previously, the lines in this block were duplicated across all the
stream update paths, which was a little brittle.
Hence, in this commit, we extract those lines into one place with a
comment explaining what they do, and call that code from all the
previously duplicated spots.
My previous message_header fix
0b4568d249
accidentally changed the alignment of dates in private messages (so that
it was inconsistent with the alignment in other narrows).
We use narrow_state.stream_sub instead of narrow_state.stream to get
the sub object directly, rather than the stream name, when subscribing
to a stream from the stream narrow.
This commit changes stream_data.is_user_subscribed to use a stream id
instead of a stream name.
We are using stream ids so that we can avoid bugs related to live
update after a stream rename.
Fixes #12868.
We now also include the Python version, in the format
'major.minor.patchlevel', when generating the hash for a
requirements file. This was necessary since packages tend to
break on different versions of Python, so it is important to
track the version on which the venv was set up.
WARN: This commit will force all Zulip venvs to be recreated.
We were already using package names along with their versions
to generate the hash for the requirements file, as we were passing
the `.txt` files to the hash_reqs file instead of the intended `.in`
files for which the functions in this file were originally designed.
Changed the expand_reqs_helper function to adapt to the `.txt` files.
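A minimal sketch of the hashing scheme described above (the function
name and exact inputs are illustrative, not the actual hash_reqs
code):

    import hashlib
    import platform

    def hash_requirements(requirements_path: str) -> str:
        # Mix the interpreter version ('major.minor.patchlevel', e.g.
        # "3.8.10") into the hash so a venv built under a different
        # Python is not reused.
        sha = hashlib.sha1()
        sha.update(platform.python_version().encode())
        # The `.txt` file already pins package==version lines, so
        # hashing its contents covers package names and versions.
        with open(requirements_path, "rb") as f:
            sha.update(f.read())
        return sha.hexdigest()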
No plugins are installed inside the /usr/local/munin/lib that this
creates in munin-node, nor are they symlinked into /etc/munin/plugins,
so no non-default plugins are added by this.
The contents of the database are unchanged across the PostgreSQL
restart; as such, there is no reason to invalidate the caches.
This step was inherited from the general operating system upgrade
documentation. When Python versions change, such as during OS
upgrades, we must ensure that memcached is cleared. However, the
`do-release-upgrade` process already uninstalls and upgrades to a new
memcached, and likely restarts the system as well; a separate step for
OS upgrades to restart memcached is thus unnecessary.
pg_upgradecluster has two possibilities for `--method`: `dump`, and
`upgrade`. The former is the default, and does a `pg_dump` of all of
the databases in the old cluster and feeds them into the new cluster.
This is a sure-fire way of getting the same information in both
databases, but may be extremely slow on large databases, and is
guaranteed to fail on servers whose databases take up >50% of their
disk.
The `--method=upgrade` method, by contrast, uses pg_upgrade to copy
the raw database data files over to the new cluster, and then fiddles
with their internal structure as needed by the upgrade to let them be
correct for the new version[1]. This is slightly faster than the
dump/load method, since it skips the serialization step, but still
requires that there be enough space on disk for both old and new
versions at once. `pg_upgrade` is currently supported for all
versions of PostgreSQL from 8.4 to 12.
Using `pg_upgrade` incurs slightly more risk, but since it is
widely used by now, using it in the relatively controlled Zulip server
environment is reasonable. The expected worst failure is failure to
upgrade, not corruption or data loss.
Additionally, passing `--link` uses hardlinks to link the data files
into both the old and new directories simultaneously. This reduces
both the runtime of the operation and the disk space usage.
The only potential downside to this is that as soon as writes have
occurred on the upgraded cluster, the old cluster can no longer be
started. Since this tooling intends to remove the old cluster
immediately after the upgrade completes successfully, this is not a
significant drawback.
Switch to using `--method=upgrade --link`. This technique spits out
two shell scripts which are expected to be run after completion of the
upgrade; one re-analyzes the statistics, the other does an `rm -rf` of
the data where it is still hardlinked in the old cluster. Extract the
location of these scripts by parsing the `pg_upgradecluster` output;
since the path is not static, we must rely on it being relatively easy
to parse. The risk of the path changing is lower, and its failure
modes more obvious, than those of inserting the current contents of
these upgrade steps into the overall `upgrade-postgres`.
[1] https://www.postgresql.org/docs/12/pgupgrade.html
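A rough sketch of the parsing approach described above (this is not
the actual `upgrade-postgres` code; the invocation details, version
placeholder, and regex are assumptions for illustration):

    import re
    import subprocess

    OLD_VERSION = "9.5"  # placeholder; the real tooling detects this

    # Run the upgrade and capture its output; the generated helper
    # scripts' paths appear in what pg_upgradecluster prints.
    result = subprocess.run(
        ["pg_upgradecluster", "--method=upgrade", "--link",
         OLD_VERSION, "main"],
        check=True, capture_output=True, text=True,
    )

    # Grab whatever *.sh paths the output mentions, and run them once
    # the upgrade has completed successfully (one re-analyzes
    # statistics, the other deletes the old hardlinked cluster data).
    for script in re.findall(r"\S+\.sh", result.stdout):
        subprocess.run(["sh", script], check=True)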
We have logic in place to update the UI for re-sending messages
on receiving the acknowledgement from the server for that API call.
However, if the acknowledgement is received through the get-events
request before the `on_success` of `resend_message`, the message
gets re-rendered, leaving the failed message actions clickable.
Now, we update the ".message_failed" UI for both cases. This helps
prevent the "Trying to get local_id from row that has reified
message id" exception.
Fixes #15351.
Previously, we rendered Twitter previews outside of a
spoiler block at the end of the message. The commit series
ending with this commit fixes that by inlining Twitter
previews instead of appending them all at the end. As a
consequence of the inlining, we have fixed the issue here.
This commit just adds a test to assert that.
Fixes #15518.
This is similar to our behavior with image previews, and helps
reduce clutter in the final rendered HTML.
We add the string 'Tweet: ' to our existing tests so those tests
remain the same.
This commit makes our handling of Twitter previews consistent with
how we handle our inline images, so that tweets render next to the
paragraph that links to the tweet.
We decouple the insertion rules for inline links from the image
preview logic. Now, we can use this same logic for other kinds of
link previews as well.
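To illustrate the decoupled insertion rule, here is a generic sketch
(not the actual markdown extension code; the helper name is
hypothetical):

    from xml.etree import ElementTree as etree

    def insert_preview_after(parent: etree.Element,
                             paragraph: etree.Element,
                             preview: etree.Element) -> None:
        # Place the preview element directly after the paragraph that
        # contains the link, instead of appending it to the end of the
        # message, so image and tweet previews can share this rule.
        index = list(parent).index(paragraph)
        parent.insert(index + 1, preview)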
fadeTo is not a good method to hide elements, since it sets
opacity to 0, in which case the element can still consume space and
be clickable. We set its display to none using the fadeOut method.
Also, allow this method to be called via ui_report.error.
The one complexity is that hosts_fullstack are treated differently, as
they are not currently found in the manual `hosts` list, and as such
do not get munin monitoring.
check_memcached does not support memcached authentication even in its
latest release (it’s in a TODO item comment, and that’s it), and was
never particularly useful.
Although mktemp is deprecated due to security issues, this is not a
security issue.
The security problems with mktemp happen when you open the resulting
filename (without O_EXCL) in a publicly writable directory, because
then someone else might have predicted the filename and created or
symlinked or hardlinked something there between the mktemp and the
open, causing you to write to a file you didn’t expect.
Here we don’t open the resulting filename; we symlink to it. symlink
will refuse to clobber an existing file, and we handle the error that
arises from this case. This is the normal way to atomically create a
symlink.
We should still replace mktemp because it’s deprecated, but we can’t
replace it with a function that creates the temporary file. Instead
we build a random filename ourselves.
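A minimal sketch of this pattern (the helper name and retry loop are
illustrative, not the actual code):

    import os
    import secrets

    def atomic_symlink(target: str, linkname: str) -> None:
        while True:
            # Build a random temporary name ourselves instead of
            # calling mktemp.
            tmp = f"{linkname}.{secrets.token_hex(16)}.tmp"
            try:
                # symlink() refuses to clobber an existing file; if the
                # random name is somehow taken, just pick another one.
                os.symlink(target, tmp)
                break
            except FileExistsError:
                continue
        # rename() then atomically moves the symlink into place.
        os.rename(tmp, linkname)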
Signed-off-by: Anders Kaseorg <anders@zulip.com>