Hi!
I’m using Team Edition, self-hosted in a Docker container.
I’m trying to import a Slack export, and it’s kinda huge (the uncompressed JSONL file is ~7 GB):
Users 691
Posts 6483790
Direct Channels 5642
Direct Posts 1786848
That’s without attachments (which I’ll try later; there are around 130 GB of them).
I’ve already edited a too-long profile description and two strange duplicate channels (renamed them to channelname2, including in the users’ channel-members entries).
Import of users: successful. Channels, including private ones: successful. Some of the posts (850k): also successful.
But then it throws an error (this is from the Mattermost container; the same appears in the job description):
{"timestamp":"2024-09-09 11:41:43.470 Z","level":"error","msg":"SimpleWorker: job execution error","caller":"jobs/base_workers.go:96","worker_name":"ImportProcess","job_id":"tq14i45mtiffmykyy9usih84ko","job_type":"import_process","job_create_at":"Sep 9 11:22:42.309","error":"importMultiplePostLines: Unable to save the Post., failed to save Post: pq: canceling statement due to user request"}
And this is the error from the Postgres container:
```
2024-09-09 14:19:11.762 UTC [48] DETAIL: parameters: $1 = '1725902351761', $2 = 'error', $3 = '{"error": "Error during job execution. — importMultiplePostLines: Unable to overwrite the Post., MError:\nfailed to update Post with id=rf1yk4x9i7rmmd8r9pa3obsd3w: pq: canceling statement due to user request\ndriver: bad connection\n2 errors total.\n", "local_mode": "false", "import_file": "d1kros96wbf9jqgcyqm87xbtfy_mm_double.zip", "line_number": "488542", "extract_content": "true"}', $4 = '-1', $5 = '9quky6wyn3rhujancrzcrsf1zc', $6 = 'in_progress'
```
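For what it’s worth, from what I’ve read, “canceling statement due to user request” on the Postgres side means the *client* canceled the statement, not that Postgres hit one of its own timeouts. So I’m starting to suspect Mattermost’s client-side SqlSettings.QueryTimeout (30 seconds by default, as far as I know) is what’s firing. A sketch of what I’m considering trying, assuming that setting is the relevant one (container name is a placeholder for my setup):

```sh
# Sketch: raise Mattermost's client-side query timeout, on the assumption that
# SqlSettings.QueryTimeout (default 30, in seconds) is what cancels the import's
# long statements. "mattermost" is my container name; --local assumes local mode
# is enabled, otherwise mmctl needs to authenticate first.
docker exec mattermost mmctl config set SqlSettings.QueryTimeout 600 --local
docker restart mattermost
```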
I’ve tried increasing the VM resources (10 CPU, 10 GB RAM); during the import the load average rises to 8-10, while RAM usage stays at just 2-3 GB.
I’ve tried increasing the Postgres connection timeout (to 600s), but I haven’t seen any long-running sessions in Postgres before the error.
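For reference, this is roughly how I watched for long-running statements during the import (container, user, and DB names are placeholders for my setup); nothing ran anywhere near the timeout before the error hit:

```sh
# Sketch: list currently running statements, longest first.
# "postgres", "mmuser" and "mattermost" are placeholders for my
# container name, DB user and DB name.
docker exec postgres psql -U mmuser -d mattermost -c "
  SELECT pid, now() - query_start AS runtime, state, left(query, 80) AS query
  FROM pg_stat_activity
  WHERE state <> 'idle'
  ORDER BY runtime DESC;"
```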
What else could I try? Where should I look, and how can I troubleshoot this further? Any ideas?