"Error": "We could not count the users." in Slack to Mattermost migration

The “type not supported” messages occur for the following post_subtypes:

  • channel_archive
  • group_join
  • bot_add
  • bot_remove

These are most likely just status messages; I'm not sure why they cannot be imported, but skipping them should not be a big problem.

The following user accounts cannot be found and are part of the export:

  • USLACKBOT
  • UK4CE473K

Can you check the Slack dump and verify that these users are available there?

  • error retrieving file …

I think this is a result of running the command without the attachments. When you use the export-with-emails-and-attachments.zip file, the files should be available.

@agriesser Thank you very much for your help. We tried the import again after tinkering with channels.json and the channel folder names inside the export file (.zip) generated from Slack, because a constraint was causing an error when channel names start with a hyphen. We removed the leading hyphen from the channel names (both in the folder names and in channels.json).

After performing all the steps for importing, we are getting the error log mentioned below. Does this mean that there is some sort of member limit for Mattermost teams due to our Mattermost edition?

{"timestamp":"2022-09-19 19:31:20.326 +05:00","level":"error","msg":"SimpleWorker: job execution error","caller":"jobs/base_workers.go:83","worker":"ImportProcess","job_id":"y8tcud1k9tnj7p5qxk7r1ofdie","error":"BulkImport: Unable to import team membership because no more members are allowed in that team, limit exceeded: what: TeamMember count: 51 metadata: team members limit exceeded"}

Interesting, good to know - thanks!

The number of members in a team can be configured. Please check your /opt/mattermost/config/config.json file and look for the TeamSettings.MaxUsersPerTeam option (which defaults to 50).

Change the value and restart your Mattermost server, then run the import again.
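
For example, the relevant snippet in config.json might look like this after raising the limit (the value 500 is just an assumption; pick whatever fits your team size):

{
  "TeamSettings": {
    "MaxUsersPerTeam": 500
  }
}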


Hi @agriesser, thanks for your quick support.
When I run this command with mmctl: mmctl import upload ./mattermost-bulk-import.zip, it displays the following error:

$ mmctl import upload ./mattermost-bulk-import.zip
Error: failed to upload data: Post "http://localhost:8065/api/v4/uploads/iw61m9ptt7f5fdce499dicde3h": write tcp 127.0.0.1:53950->127.0.0.1:8065: use of closed network connection

The log for this error is the following:
{"timestamp":"2022-09-20 13:12:15.655 +05:00","level":"error","msg":"Unable to write the file.","caller":"web/context.go:117","path":"/api/v4/uploads/ao8dudzh6ffnt81xodfx3uydgy","request_id":"rwqrjytzg7dbjjycfnc8x34k4c","ip_addr":"127.0.0.1","user_id":"ietkk6nsejfzdkobcpxc6t13qr","method":"POST","err_where":"WriteFile","http_code":500,"error":"WriteFile: Unable to write the file., unable write the data in the file data/import/ao8dudzh6ffnt81xodfx3uydgy_mattermost-bulk-import.zip.tmp: read tcp 127.0.0.1:8065->127.0.0.1:46736: i/o timeout"}

Then I set "Forward80To443": false in my config.json file, but when I execute mmctl import upload ./mattermost-bulk-import.zip it displays the same error. Can you help us to resolve this issue?

Hmm… Is your Mattermost running inside a container? Are you sure that the directory data/import/ is writable by the Mattermost application server?


Yes @agriesser, we are running it inside a Docker container. Secondly, how can we check whether our data/import/ directory is writable by the Mattermost application server or not? Please assist us. Thanks

When you followed the deployment instructions, you should have created the volumes subfolder and set the permissions properly, but I’m not sure if you used these instructions.
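
If you want to check the permissions yourself, here is a quick sketch (the container name mattermost is an assumption; substitute your actual container name or ID):

# List owner and permissions of the data directory inside the container
docker exec mattermost ls -ld /mattermost/data /mattermost/data/import
# Try to create and remove a test file as the user the server runs as
# (data/import may not exist until the first upload attempt)
docker exec mattermost touch /mattermost/data/import/.write-test
docker exec mattermost rm /mattermost/data/import/.write-test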

Does your container maybe restart while you try to upload the file? Also, how big is the file?
Can you please post the output of the following commands:

ls -lh ./mattermost-bulk-import.zip
docker ps

Hi @agriesser, the file size is almost 22 GB.


I ran the commands that you mentioned above; they show the following results.
[screenshot of the command output]

OK, there’s probably a timeout with such a big file upload. But you do not need to use mmctl for the upload; you can inject the file directly. To do that, we need to know which data directory you mapped from the host into the container.
Can you run the following command, please? I could not prefill the container ID of your Mattermost container because it was not listed in the output, so please fill that in on your own (or is the Mattermost container running on a different machine?):

docker inspect <ID-OF-YOUR-MATTERMOST-CONTAINER> | grep -B1 '"Destination": "/mattermost/data",' | sed -n 's/^.*Source": "\(.*\)",$/\1/p'

This should print a directory which is being mapped to the container’s /mattermost/data. You can move or copy the mattermost-bulk-import.zip file to this directory on the host and this will have the same effect as uploading it using the mmctl import upload command.
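
A sketch of the whole workaround (the host path /opt/mattermost-data is an assumption; substitute whatever the command above printed):

# Copy the bulk import file into the import/ subfolder of the mapped data directory
cp ./mattermost-bulk-import.zip /opt/mattermost-data/import/
# The file should now be listed as available and can be processed without an upload
mmctl import list available
mmctl import process mattermost-bulk-import.zip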


Hi @agriesser, the Mattermost container is running on the same machine, and we have tried your command with the container IDs of the 4 containers shown in the screenshot above; however, there is no output. What might be the reason for this?

Hmm… OK, let me guess: you’re running an HA deployment here and your data is not stored on the filesystem but in an S3 bucket provided by the minio container, right? If so, I think the problem is that the 22 GB file upload needs to traverse your load balancer and the Mattermost containers into the minio container, and this process takes too long and causes a timeout somewhere along the road.

I’m not familiar with the K8S deployments here (if this is one?), so I’m not sure where to start looking now, but I think if you’re really using S3 as the backend, you could try to check the logs on the minio container for possible timeouts there.


I got the following info after asking the staff for help:

Yeah, basically any S3 operations have a default timeout of 30secs now. You’ll need to bump that up higher for operations like import.
We are planning to have a separate/remove timeout for the import bit because that seems like a special case.
The setting is FileSettings.AmazonS3RequestTimeoutMilliseconds
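
For reference, a minimal sketch of where this lives in config.json (the value 600000, i.e. ten minutes, is just an assumption; size it to your import):

{
  "FileSettings": {
    "AmazonS3RequestTimeoutMilliseconds": 600000
  }
}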


Hi @agriesser, thank you for your response. Can you tell me what maximum limit we can set for AmazonS3RequestTimeoutMilliseconds? I set 90000000000000000 and it displays an error. Please tell me the exact maximum limit for it in digits. Thanks

Hi @agriesser, you are a great man. Alhamdulillah (thanks to Allah), we have completed the migration from Slack to Mattermost (emails and attachments) successfully.

I just copied mattermost-bulk-import.zip manually from the migration folder to mattermost-server/data/import instead of running mmctl import upload ./mattermost-bulk-import.zip, because running that command shows this error log:
{"timestamp":"2022-09-20 13:12:15.655 +05:00","level":"error","msg":"Unable to write the file.","caller":"web/context.go:117","path":"/api/v4/uploads/ao8dudzh6ffnt81xodfx3uydgy","request_id":"rwqrjytzg7dbjjycfnc8x34k4c","ip_addr":"127.0.0.1","user_id":"ietkk6nsejfzdkobcpxc6t13qr","method":"POST","err_where":"WriteFile","http_code":500,"error":"WriteFile: Unable to write the file., unable write the data in the file data/import/ao8dudzh6ffnt81xodfx3uydgy_mattermost-bulk-import.zip.tmp: read tcp 127.0.0.1:8065->127.0.0.1:46736: i/o timeout"}
Thank you so much for your quick responses and support throughout. :heart:

Hi again,

sorry, I was rather busy today and could not check the forums.
I will try to find out the maximum limit for this value and will let you know - I could not find the related documentation so far either. In the code, this value is of type Int64, and the maximum for Int64 would be 9223372036854775807, which is bigger than the value of 90000000000000000 you tried. If I find out more, I’ll report back here.

So did I get this right, you’re up and running now? If so: Woohoo! :slight_smile: Glad to hear that!


@agriesser Yes, done successfully now. :slightly_smiling_face:

Awesome, thanks for confirming! I’ve marked this issue as resolved now :slight_smile:


Hi @agriesser, hope you are doing well. As I mentioned above, we completed the migration successfully on a local server without an S3 bucket, but now, when we place our import data on an S3 bucket, we are unable to proceed after the command mmctl import upload ./mattermost-bulk-import.zip. We want to import our data with S3 as the storage backend while the remaining files stay on the local server. Can you assist us in this situation?
Is there any way to move the data of this successful migration to the S3 bucket without starting from scratch?

I’m not sure I can follow, sorry.
You validated that the import file mattermost-bulk-import.zip is good, because you managed to import it into a locally running Mattermost instance and everything is working there.
You now want to run the import in your production environment, which uses S3 as its file storage backend, and it’s timing out with the error message you already posted here, right?

I think I’m a bit lost at understanding the current situation and the target situation, sorry.