Can I migrate from binary to docker?

I run a Mattermost server using the binary/tarball. It’s not too bad, but the upgrade process is a little annoying and I’m always a little worried something will go wrong. I get the impression that if I were using Docker for this, it would be simpler.

Is it simpler to maintain/upgrade a Mattermost server using Docker?
Is it feasible to migrate my existing production server to use Docker?

Hi Johnny,

I think it all depends on your personal preference and how you’d like to work. I find the Docker deployment method less transparent than the binary deployment, and the binary is much more configurable because I’m not limited to what’s in the container; but as a sysadmin I have to say that, I guess :slight_smile: If you don’t want to take care of things like that and are happy with what is being provided, Docker might be the better option for you.

The upgrade instructions are basically the same for both deployments, it all boils down to:

  • get the new version
  • extract it
  • backup the old version (or move it aside)
  • install the new version
  • start the new version
  • watch the magic (aka database migration) happen
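For the binary deployment, those steps might look roughly like this (a sketch only: the version placeholder X.Y.Z, the /opt/mattermost path, and the systemd service name are assumptions you’d adjust to your setup):

```shell
# Get and extract the new version (replace X.Y.Z with the actual release)
wget https://releases.mattermost.com/X.Y.Z/mattermost-X.Y.Z-linux-amd64.tar.gz
tar -xzf mattermost-X.Y.Z-linux-amd64.tar.gz --transform='s,^mattermost,mattermost-new,'

sudo systemctl stop mattermost

# Move the old version aside as a backup, put the new one in place
sudo mv /opt/mattermost /opt/mattermost-backup
sudo mv mattermost-new /opt/mattermost

# Carry over the directories that hold your state
sudo cp -a /opt/mattermost-backup/config /opt/mattermost-backup/data \
           /opt/mattermost-backup/logs /opt/mattermost/

# Start it again; the database migration runs on first start
sudo systemctl start mattermost
```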

What Docker does is decouple the directories you want to keep during an upgrade (config, data, logs, etc.) from the application, by moving them out to the “volumes” subdirectory, which is then mounted into the container. That way it’s easy to just install a new container without having to worry about the rest.
You can achieve the same thing with the binary installation: there are configuration options to point those directories to different paths outside the Mattermost installation. If you do that, the upgrade process for the binary installation changes a bit and also becomes easier, because you no longer have to copy things around; you just start the new version.
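For example, the relevant config.json settings could look like this (the /srv/mattermost paths are purely illustrative):

```json
{
  "FileSettings": {
    "Directory": "/srv/mattermost/data/"
  },
  "LogSettings": {
    "FileLocation": "/srv/mattermost/logs"
  },
  "PluginSettings": {
    "Directory": "/srv/mattermost/plugins",
    "ClientDirectory": "/srv/mattermost/client/plugins"
  }
}
```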

So there are pros and cons for both approaches, and if you want to switch from one to the other, we’ll help you with that.

I’ve not done this yet, but I could imagine the following points need to be done:

  1. install the same Mattermost version and edition with Docker, on the same or a different system, without any specific configuration, so that an empty setup is running; this confirms everything is configured correctly (containers, SSL offloading, database, etc.)
  2. shut down the container deployment
  3. copy the directories “data”, “logs”, “config”, “client” and “plugins” from your binary installation over to “volumes/app/mattermost” on your docker deployment
  4. run chown -R 2000:2000 volumes/app/mattermost (the Docker container uses UID 2000, while your binary installation probably uses a “mattermost” user with a different UID)
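Steps 2 to 4 could be sketched like this (assuming the binary install lives in /opt/mattermost and the mattermost/docker checkout is the current directory; on older Docker versions the command is docker-compose instead of docker compose):

```shell
# Stop the container deployment first
docker compose down

# Copy the persisted directories from the binary installation
for d in config data logs client plugins; do
  cp -a "/opt/mattermost/$d" ./volumes/app/mattermost/
done

# The official image runs Mattermost as UID/GID 2000
chown -R 2000:2000 ./volumes/app/mattermost
```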

If you want to keep the database outside the containers, this should be all that’s needed, because the config.json you copied over from your binary installation already points to the existing database. As long as you use a TCP connection rather than a socket connection, this should work (you cannot access the socket from within the container).
SMTP and other external resources (like Elasticsearch) will also need to be tested.
If you’re using the bleve index, you can either reindex after migrating to the Docker deployment, or copy over the “bleve” directory in the same step where you copied config, data, etc.
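A TCP-based database connection in config.json looks something like this (hostname and credentials are placeholders):

```json
{
  "SqlSettings": {
    "DriverName": "mysql",
    "DataSource": "mmuser:secret@tcp(db.example.com:3306)/mattermost?charset=utf8mb4,utf8"
  }
}
```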

If you also want to migrate the database into Docker, it’s important to note that the Docker Compose templates from Mattermost use PostgreSQL by default, so I’ve never tried deploying with a MariaDB/MySQL container for the database part, but that will probably also work. If you’re running PostgreSQL in your binary deployment, you would dump the database there (man pg_dump) and restore it into the database running as a container (man pg_restore).
If your binary deployment is on MariaDB/MySQL, you should either try to get that version running inside a container instead of the PostgreSQL container, or we could take a whole different approach from the one outlined above and go the mmctl export and mmctl import route.
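The PostgreSQL route could be sketched like this (the database name, user, and container name “postgres” are assumptions; adjust them to your setup and compose file):

```shell
# On the binary deployment host: dump the database in custom format
pg_dump -U mmuser -d mattermost -F c -f mattermost.dump

# Copy the dump into the database container and restore it there
docker cp mattermost.dump postgres:/tmp/mattermost.dump
docker exec postgres pg_restore -U mmuser -d mattermost --clean /tmp/mattermost.dump
```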

Sorry for this wall of text, but it all depends a bit on what your current setup looks like, the details not mentioned so far and if you want to go all-in on the docker approach or leave parts outside of it.


Thanks for the response! I’ll try this out.

My deployment does use MySQL, but for now I think I’ll try keeping the DB outside of the container. If all goes well, I might follow up and try moving the DB into Docker later.

Alright, good luck with that and when you’re stuck on the way, let me know :slight_smile:

I’ve been trying to set this up using the mattermost/docker repo, but I’m getting an error when the container starts up.

Error: failed to load configuration: failed to create store: unable to load on store creation: invalid config: Config.IsValid: model.config.is_valid.site_url.app_error, 

I modified docker-compose.yml so it doesn’t create the PostgreSQL container.
I’ve copied my existing data from my binary deployment to volumes/app/mattermost, so the site URL is the existing one for that deployment. I tried changing it to other values such as http://localhost, with no luck. (I’m using port 8888 for HTTP so I don’t conflict with my existing deployment.)

Environment variables always overrule values in the config.json and at the bottom of your .env file, you will see some of them (like MM_SERVICESETTINGS_SITEURL, MM_SQLSETTINGS_DRIVERNAME, etc.).
You either need to make sure that they contain the correct values, or you remove them from the environment: section in your docker-compose.yml.

      # timezone inside container
      - TZ

      # necessary Mattermost options/variables (see env.example)

      # necessary for bleve

      # additional settings

The message you’re seeing here might also be related to your docker-compose version being too old to handle variable substitution in the .env file, but if you do not want to use the environment variables here, you don’t need to worry about that and can just get rid of them.
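For reference, the relevant entries in .env might look like this (illustrative values only; yours come from your existing config.json and env.example):

```shell
# Must match your real site URL, including the port
MM_SERVICESETTINGS_SITEURL=http://mm.example.com:8888
MM_SQLSETTINGS_DRIVERNAME=mysql
MM_SQLSETTINGS_DATASOURCE=mmuser:secret@tcp(db.example.com:3306)/mattermost?charset=utf8mb4,utf8
```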

Ah, that was it, I wasn’t setting the port in MM_SERVICESETTINGS_SITEURL. Adding the port got it past that error. Progress!

That being said, now I’m running into an error connecting to the DB:

{"timestamp":"2022-07-24 23:24:38.135 -05:00","level":"info","msg":"Server is initializing...","caller":"app/server.go:237","go_version":"go1.16.7"}
{"timestamp":"2022-07-24 23:24:38.139 -05:00","level":"info","msg":"Starting websocket hubs","caller":"app/web_hub.go:93","number_of_hubs":2}
{"timestamp":"2022-07-24 23:24:38.139 -05:00","level":"info","msg":"Pinging SQL","caller":"sqlstore/store.go:262","database":"master"}
{"timestamp":"2022-07-24 23:24:38.148 -05:00","level":"error","msg":"Failed to ping DB","caller":"sqlstore/store.go:272","error":"dial tcp connect: connection refused","retrying in seconds":10}


I’m using this format, but I wouldn’t be surprised if I made a mistake here:


The data source is based on what was already in config.json

When you connect to localhost from within the container, this connection will not leave the container, so although your MySQL server is running on localhost on the host, from the container’s point of view it’s “outside” on the host network.

So if your server has, for example, an internal IP on your LAN, make sure that MySQL is also listening on that IP and that connections to it are allowed. Your container then needs to connect to that IP instead of localhost.


Still more progress, but a different error this time. I changed localhost to my IP and now I’m getting this error: i/o timeout

{"timestamp":"2022-07-24 23:52:26.790 -05:00","level":"info","msg":"Server is initializing...","caller":"app/server.go:237","go_version":"go1.16.7"}
{"timestamp":"2022-07-24 23:52:26.791 -05:00","level":"info","msg":"Starting websocket hubs","caller":"app/web_hub.go:93","number_of_hubs":2}
{"timestamp":"2022-07-24 23:52:26.791 -05:00","level":"info","msg":"Pinging SQL","caller":"sqlstore/store.go:262","database":"master"}
{"timestamp":"2022-07-24 23:52:36.791 -05:00","level":"error","msg":"Failed to ping DB","caller":"sqlstore/store.go:272","error":"dial tcp REDACTED_IP:3306: i/o timeout","retrying in seconds":10}

On the host OS (outside of your containers), please run:

lsof -n -i :3306

and post the output.
Also, please log in to your MySQL server via the command line and run the following command, replacing “mmuser” with your database username for the Mattermost database, if it’s not “mmuser”:

select user,host from mysql.user where user="mmuser";
lsof -n -i :3306
mysqld    500229      mysql   24u  IPv4 123161825      0t0  TCP (LISTEN)
mysqld    500229      mysql   25u  IPv4 232568145      0t0  TCP> (ESTABLISHED)
mysqld    500229      mysql   44u  IPv4 232568163      0t0  TCP> (ESTABLISHED)
mysqld    500229      mysql   55u  IPv4 233527217      0t0  TCP> (ESTABLISHED)
mysqld    500229      mysql  116u  IPv4 232568177      0t0  TCP> (ESTABLISHED)
mattermos 980450 mattermost    6u  IPv4 232568144      0t0  TCP> (ESTABLISHED)
mattermos 980450 mattermost    7u  IPv4 233527216      0t0  TCP> (ESTABLISHED)
mattermos 980450 mattermost   12u  IPv4 232568162      0t0  TCP> (ESTABLISHED)
mattermos 980450 mattermost   19u  IPv4 232568176      0t0  TCP> (ESTABLISHED)
mysql> select user,host from mysql.user where user="mmuser";
+--------+------+
| user   | host |
+--------+------+
| mmuser | %    |
+--------+------+
1 row in set (0.00 sec)

Alright, so the mmuser account is allowed to connect from all IPs, which is good, but MySQL is only listening on the loopback interface. Please check your MySQL config files for an option called “bind-address”; it is currently restricting MySQL to localhost and should be commented out. Once you’ve done that, restart your MySQL server and you will see in the lsof -n -i :3306 output that it’s also listening on your internal IP.

Please note that this change allows access from other systems to your mysql server, so if your network is not trusted, you will also have to set up firewall rules on your host then to prevent unwanted access to this service from other hosts.
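As a sketch, the server section of the config would end up looking like this (the exact file location varies by distribution):

```ini
[mysqld]
# previously restricted to the loopback interface:
# bind-address = 127.0.0.1
# listen on all interfaces (pair this with firewall rules):
bind-address = 0.0.0.0
```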

I checked /etc/mysql/conf.d/mysql.cnf and its contents were just


I tried editing it to

bind-address =

per “MySQL Bind Address” on FOSS TechNix, but this didn’t change anything.

Can you share a list of the files below /etc/mysql on your system? Sometimes the right file is a bit hidden and really tedious to find:

find /etc/mysql

You applied the change to the [mysql] section, which is the client part, but we need to add it to the [mysqld] section, the server part. So look for a file that contains [mysqld]; on my system, this is the file /etc/mysql/mariadb.conf.d/50-server.cnf:

# grep -rF "[mysqld]" /etc/mysql
find /etc/mysql

I see, it’s in /etc/mysql/mysql.conf.d/mysqld.cnf. These configs really are tedious!

I updated bind-address in mysqld.cnf and restarted MySQL, but I’m still running into the same error when the Mattermost container starts up:

{"timestamp":"2022-07-26 17:59:45.908 -05:00","level":"info","msg":"Pinging SQL","caller":"sqlstore/store.go:262","database":"master"}
{"timestamp":"2022-07-26 17:59:55.934 -05:00","level":"error","msg":"Failed to ping DB","caller":"sqlstore/store.go:272","error":"dial tcp REDACTED_PUBLIC_IP:3306: i/o timeout","retrying in seconds":10}

I’ve also tried running MySQL in a container (on port 3333), using a copy of my existing DB’s configs. Regardless of which one I point Mattermost at, I have the same issue.

FWIW, though, I can use a SQL client (Sequeler) to connect from my laptop to the DB running in Docker on port 3333, and I was also able to do so before changing the config. However, I cannot connect to the DB running on 3306 (it times out).

Are there any firewall rules on your system that could prevent access to mysql on port 3306? Please post the output of:

lsof -n -i :3306
iptables -L -n -v | grep 3306
~# lsof -n -i :3306
mysqld    1007060      mysql   24u  IPv4 235051886      0t0  TCP *:mysql (LISTEN)
mysqld    1007060      mysql   27u  IPv4 235052005      0t0  TCP> (ESTABLISHED)
mysqld    1007060      mysql   33u  IPv4 235052023      0t0  TCP> (ESTABLISHED)
mysqld    1007060      mysql   37u  IPv4 235052029      0t0  TCP> (ESTABLISHED)
mysqld    1007060      mysql   38u  IPv4 235374276      0t0  TCP> (ESTABLISHED)
mysqld    1007060      mysql   41u  IPv4 235052037      0t0  TCP> (ESTABLISHED)
mysqld    1007060      mysql   75u  IPv4 235055409      0t0  TCP> (ESTABLISHED)
mattermos 1007103 mattermost    6u  IPv4 235052004      0t0  TCP> (ESTABLISHED)
mattermos 1007103 mattermost   12u  IPv4 235052022      0t0  TCP> (ESTABLISHED)
mattermos 1007103 mattermost   15u  IPv4 235052028      0t0  TCP> (ESTABLISHED)
mattermos 1007103 mattermost   19u  IPv4 235052035      0t0  TCP> (ESTABLISHED)
mattermos 1007103 mattermost   64u  IPv4 235374274      0t0  TCP> (ESTABLISHED)
plugin-li 1007762 mattermost   12u  IPv4 235055408      0t0  TCP> (ESTABLISHED)
~# iptables -L -n -v | grep 3306
    7   380 ACCEPT     tcp  --  !br-73fec2cf3b15 br-73fec2cf3b15             tcp dpt:3306

Hmm… OK, next try. When you look at the IP addresses configured on your host system, you should see an interface called docker0; here’s the output on my system for reference:

# ifconfig | grep -A1 docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet  netmask  broadcast

Can you try to use this IP address for the MySQL connection inside your container? In my case it would be the inet address of docker0 shown above.
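If ifconfig isn’t available on your system, the same address can be read with ip (the docker0 address is commonly 172.17.0.1, but check your own output):

```shell
# Show the host's address on the Docker bridge interface
ip -4 addr show docker0 | grep inet
```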

# ifconfig | grep -A1 docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet  netmask  broadcast

I tried switching the IP in the mysql data source to the docker0 address, but it still fails with the same error:

{"timestamp":"2022-07-26 22:24:24.319 -05:00","level":"error","msg":"Failed to ping DB","caller":"sqlstore/store.go:272","error":"dial tcp i/o timeout","retrying in seconds":10}

I’m wondering if there’s an issue with how I’ve structured the environment variables in the mattermost container:


Btw thanks for your perseverance, you’ve been very helpful despite this issue!