mirror of https://github.com/zulip/zulip.git
f37ac80384
On my data (about 10 million messages in 1600 streams) this used to take about 40 hours, while the improved statement completes in roughly 30 seconds.

The old solution had postgres go through the entire table until the first match for each stream. The time spent scanning therefore grew with each stream, because postgres always started at the beginning of the table (and apparently did not use any indices) and had to skip over all rows until it found the first message of the stream it was looking for. The new statement instead performs a single bulk operation: it scans the table only once and inserts the results directly into the destination table.

Slightly more verbose information about this change can be found in:
https://chat.zulip.org/#narrow/stream/31-production-help/topic/Import.20Rocketchat.20data/near/1408867

Signed-off-by: Florian Pritz <bluewind@xinu.at>
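For readers curious what such a bulk statement can look like, here is a minimal sketch of the pattern the message describes: one table scan that collects the first message id per stream, and one INSERT of the results, rather than a separate "first match" query per stream. The table and column names (first_message_per_stream, zerver_message, recipient_id) and the Django cursor wrapper are illustrative assumptions, not the exact statement from this commit.

    # Hypothetical sketch only: find the first message of every stream in a
    # single scan and bulk-insert the results, instead of issuing one
    # "first match" query per stream.
    from django.db import connection

    def bulk_insert_first_messages() -> None:
        with connection.cursor() as cursor:
            cursor.execute(
                """
                INSERT INTO first_message_per_stream (recipient_id, message_id)
                SELECT DISTINCT ON (recipient_id) recipient_id, id
                FROM zerver_message
                ORDER BY recipient_id, id
                """
            )

The ORDER BY makes DISTINCT ON keep the lowest message id per recipient, so the whole computation stays inside a single PostgreSQL statement and a single pass over the message table.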
actions
data_import
integration_fixtures/nagios
lib
management
migrations
openapi
tests
tornado
views
webhooks
worker
__init__.py
apps.py
context_processors.py
decorator.py
filters.py
forms.py
logging_handlers.py
middleware.py
models.py
signals.py