Migrate from Mattermost docker compose to the Mattermost Kubernetes operator

I have a running installation of Mattermost based on the official docker compose setup (the mattermost/docker repository on GitHub).

I'd like to use the Kubernetes operator instead of docker compose.

I tried to connect my Kubernetes operator Mattermost to the Postgres database that our docker compose Mattermost was using, but it did not start up with all the data. Instead, it created a new installation.

Is there some documentation on how we can go about moving to the operator?

Hi Hunter-Thompson and welcome to the Mattermost forums!

Pointing the new deployment at the database alone might not be enough, depending on the rest of your configuration. In your docker deployment, did you enable S3 as the storage backend or a local directory for your file attachments? And did you move the configuration into the database, or are you still using the config.json file?

We enabled local directory for our file attachments.

I see a config.json file inside the container at /mattermost/config/config.json, so I believe we use the config.json file.

In this case you will also have to copy the data directory over to the new installation, not just the database.
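If you are on the default paths from the mattermost/docker repository, copying the data directory over could look roughly like this. This is a sketch: the pod name `mattermost-0`, the namespace, and the target path `/mattermost/data` are assumptions, so adjust them to your actual deployment.

```shell
# On the docker compose host: archive the local attachment data.
# ./volumes/app/mattermost/data is the default bind mount in mattermost/docker.
tar czf mattermost-data.tgz -C ./volumes/app/mattermost/data .

# Copy the archive into the new Mattermost pod and unpack it there.
# Pod name and namespace are hypothetical; check with "kubectl get pods".
kubectl -n mattermost cp mattermost-data.tgz mattermost-0:/tmp/mattermost-data.tgz
kubectl -n mattermost exec mattermost-0 -- \
  tar xzf /tmp/mattermost-data.tgz -C /mattermost/data
```

If the operator deployment uses a persistent volume for file storage, make sure you unpack into that volume so the data survives pod restarts.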

This is, unfortunately, not a good indication. If you migrated your configuration to the database, only the database connection string will be used out of the existing config.json, everything else will then be pulled from the config in the DB.
Anyway, a new installation usually only happens when the application server is unable to connect to the database properly. Missing file attachments will not prevent it from starting; you will just be unable to download the attachments. So there must be something wrong with the connection to your database. And since you saw a new install, there was at least some successful connection, possibly to a default configuration. I'm not sure about your setup in detail, but if the docker compose deployment runs on a different host than the k8s cluster, you will not be able to reach the database container on the docker host, because by default that database is not reachable from the outside.
You will have to transfer this container along with its data volume to your k8s cluster and start the pod there, or start a new database server there with an empty database, dump the data out of your old container, and import it into the new one.
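The dump-and-import route could look something like this. The container name `postgres` is the default from mattermost/docker, but the database user, database name, and the target host are placeholders here; take the real values from your `.env` file and your new database.

```shell
# On the docker compose host: dump the existing Mattermost database.
# "mmuser" and "mattermost" are placeholder names; use the values
# from your .env file (POSTGRES_USER / POSTGRES_DB).
docker exec postgres pg_dump -U mmuser -d mattermost > mattermost.sql

# Restore the dump into the new, empty database that is reachable
# from the k8s cluster. Host and credentials are placeholders.
psql "postgres://mmuser:password@new-db-host:5432/mattermost" < mattermost.sql
```

Stop the docker compose Mattermost before taking the dump so you don't lose writes that happen during the migration.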

I made sure to make changes in the postgres config to allow connections from 0.0.0.0.

The connection was successful. I feel the culprit could be config.json.

Is there a way to move the contents of the config.json to the DB?

Well, in this case you need to change the database connection string, since this is probably a cross-host connection and the namespaces are different.
By default, in the docker deployment, the Mattermost application server is connecting to a host called postgres which is the internal name of one of the containers running on the same host, in the same docker namespace.
If you configured this container on the source system to be publicly reachable, you will also need to change the database connection string to point to this hostname.

I don’t think that moving the configuration to the database will help you here, but to answer your question: here’s the documentation on how to do that. You really should not do it now, though, before we know what your setup looks like.

https://docs.mattermost.com/configure/configuration-in-mattermost-database.html
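For reference, that documentation boils down to running the `mattermost config migrate` CLI inside the application container, roughly like this. The paths are the defaults from the official image and the DSN is a placeholder; substitute your real connection string.

```shell
# Run inside the Mattermost application container.
# Migrates the file-based config.json into the configured database.
# The DSN below is a placeholder, not your actual credentials.
/mattermost/bin/mattermost config migrate \
  /mattermost/config/config.json \
  "postgres://user:pass@10.140.22.53:5432/mattermost?connect_timeout=10"
```

After the migration, only the database connection settings are still read from the environment or config.json; everything else comes from the config stored in the database.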

Would a URI like this work?

postgres://user:pass@10.140.22.53:5432/mattermost?connect_timeout=10

Yes, this looks correct.
To verify that this connection is reachable, log in to your new Mattermost application container (or any container in your k8s deployment) and run a connection test. The Mattermost application container does have curl, so you could use the following command:

curl -v telnet://10.140.22.53:5432

If it works, you should see a message telling you that you're connected:

* Connected to 10.140.22.53 (10.140.22.53) port 5432 (#0)

The Mattermost operator runs an init container that checks the DB connection before it starts the Mattermost container.

If the Postgres connection was not working, the init container would fail. But for me, it did not fail.

See the code: database_external.go at commit 0202e3e5a6c5960fe277fd8e24441b6743780168 in the mattermost/mattermost-operator repository on GitHub.

If I read this correctly, you're spinning up a container as part of your operator code, and this is exactly what you should not do, since you want to connect to the existing database on a different host.
The pg_isready command is being run from inside your postgres:13 container, if I interpret the code correctly. You do not need a PostgreSQL container in k8s if you just want to connect to the existing database; you just need to set the environment variable with the connection string to the existing database for your application container, and it should then connect properly.
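For the record, the operator's supported way to use an existing database is an external database secret instead of the operator-managed one. A sketch, assuming the `mattermost` namespace and the secret name `external-db` (both are placeholders):

```shell
# Create a secret holding the connection string to the existing database.
# The DSN is the one discussed above; credentials are placeholders.
kubectl -n mattermost create secret generic external-db \
  --from-literal=DB_CONNECTION_STRING="postgres://user:pass@10.140.22.53:5432/mattermost?connect_timeout=10"

# Reference the secret from the Mattermost custom resource.
cat <<'EOF' | kubectl apply -f -
apiVersion: installation.mattermost.com/v1beta1
kind: Mattermost
metadata:
  name: mattermost
  namespace: mattermost
spec:
  database:
    external:
      secret: external-db
EOF
```

With `spec.database.external.secret` set, the operator should not provision its own database and instead points the application at the connection string from the secret.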

While I agree that it is overkill to spin up a container to check for the pg connection when the DB is on another host, that has nothing to do with why Mattermost is creating a whole new installation even though the Postgres connection is successful.

The init container simply runs pg_isready; it is not causing any issues…