I have enabled Bleve in my Mattermost instance, but the index seems to
be gone every time I upgrade the instance, so search stops working for my users.
I need to rebuild the Bleve index at least once per month (i.e. every time I upgrade).
The odd thing, however, is that
I can't find any physical files for the indexes in the configured directory,
even when search works just fine (as it does at the time of this writing):
server /opt/mattermost-docker/volumes/app/mattermost/bleveindexes # ls -la
total 8
drwxr-xr-x 2 2000 user 4096 Dec 13 2020 .
drwxr-xr-x 8 2000 user 4096 Dec 13 2020 ..
server /opt/mattermost-docker/volumes/app/mattermost/bleveindexes #
Questions
I was hoping someone might have ideas on:
Where do the indexes live in my setup?
How can the Bleve index be maintained across upgrades?
NB: My Mattermost instance is small (fewer than 10 users and about 100 messages daily). Is there any chance that the indexes are kept in the database?
Setup
Mattermost Docker 7.9.1 (the problem has been going on for a while now, though)
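For context, Bleve is configured through the BleveSettings block in config.json (the System Console edits the same keys). A rough sketch; the IndexDir value shown here is illustrative and depends on whether the path is resolved inside or outside the container:
"BleveSettings": {
    "IndexDir": "/opt/mattermost-docker/volumes/app/mattermost/bleveindexes",
    "EnableIndexing": true,
    "EnableSearching": true,
    "EnableAutocomplete": true
}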
Please check that the Bleve index directory is properly mapped outside the container, otherwise its contents will get lost every time you get a new container. The default path is different from what you're seeing in your System Console, maybe that's the reason (missing -), but it could also be that you're not mapping the directory at all. To find out, check how the directory is mapped in your compose setup in /opt/mattermost-docker.
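A quick way to check is a sketch like this (the file names assume the standard layouts of the old and new Docker setups):
# look for any Bleve-related volume mapping or variable
grep -i bleve docker-compose.yml .env
# or inspect the resolved configuration of the stack
docker-compose config | grep -i bleve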
This helped me realize that I'm still using the old mattermost-docker image structure, and so
the docker-compose.yml and .env files are outdated; specifically, with regard to Bleve, they are missing the relevant variables altogether.
I suppose I could add them manually, but the future-proof thing to do is probably to redeploy using the mattermost/mattermost-team-edition image, as documented here.
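For the record, adding them manually might look roughly like this, under the mattermost service in docker-compose.yml (the path and variable names here are guesses modeled on the new repo's conventions, not copied from it):
volumes:
  - ./volumes/app/mattermost/bleve-indexes:/mattermost/bleve-indexes:rw
environment:
  # MM_* variables override the matching keys in config.json
  - MM_BLEVESETTINGS_ENABLEINDEXING=true
  - MM_BLEVESETTINGS_INDEXDIR=/mattermost/bleve-indexes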
You should switch points 5 and 6 and do the dump while the server is offline (otherwise it might not be consistent).
Also, you can skip point 4; the database will be overwritten anyway when you import it later on.
For the copy of the data directory, you could use rsync to pre-sync the files while your server is still running, so that the downtime is shorter.
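For example, something along these lines (the source and destination paths are purely illustrative and need to be adapted to your old and new deployment directories):
rsync -avh --delete /opt/mattermost-docker/volumes/app/mattermost/data/ /opt/mattermost-new/volumes/app/mattermost/data/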
This command will keep the destination directory in sync with the source directory, so the first run will take a while, but the second run will finish quickly (because there's nothing new to copy, or just the latest attachments, emojis, etc.). The rsync method will also speed up the recovery process if you have to revert and try again, because it simply continues to keep the destination directory in sync; if anything has changed there, it will just be overwritten.
Please note that the trailing / characters are important - don't skip them.
I'm not sure whether the file ownership is the same for your old and new deployments, so you could (since you haven't started the new deployment yet anyway) move the chown command from point 2 down so that it runs after point 9 in your list; that way all the files are changed to the new owner after you've copied them as well.
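For example (the UID/GID of 2000 is based on the directory listing above and the destination path is illustrative; use whatever your new deployment expects):
sudo chown -R 2000:2000 /opt/mattermost-new/volumes/app/mattermost/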
Thanks @agriesser, I have updated my previous post based on your recommendations.
However, would pg_dump work with the database container down? Maybe the cleaner approach is to shut down the app container but not the database container, and then do the database dump.
Also, any thoughts about copying the other directories under ./volumes/app/mattermost besides data? Plugins in particular feel important.
Good point, sorry for that - pg_dump will not work when the database instance is down, so please shut down only the application container. Thanks for pointing that out.
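A sketch of that order of operations (the service and credential names are assumptions based on the old repo's defaults; take them from your docker-compose.yml and .env):
# stop only the application container, keep the database running
docker-compose stop app
# dump the database from inside the db container to a file on the host
docker-compose exec -T db pg_dump -U mmuser -d mattermost > mattermost_dump.sql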
Plugins are bundled with the installation and will be deployed automatically on first start. If you have manually installed plugins, you can copy them over or install them again after you've started Mattermost. Fortunately, plugins are always optional, so you can take your time installing them after the migration.
Is the length of the downtime important for you? If it is and you need to keep the downtime as short as possible, there are additional options, such as also rsyncing the database container. That only works if the source and destination Postgres versions are identical. In that case you could skip the pg_dump and restore part and rsync the application data as well as the Postgres data directory while the server is running, until you can afford a short downtime; then shut down the source (including the database container), run a final rsync to copy over the last delta, and start up the destination. That should be all, and it gives you the shortest possible downtime. If the downtime doesn't matter, your approach looks good.
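A rough outline of that variant (the paths and the Postgres data directory location are illustrative; adjust them to your deployments):
# pre-sync both directories while everything is still running (can be repeated)
rsync -avh --delete /opt/mattermost-docker/volumes/app/mattermost/ /opt/mattermost-new/volumes/app/mattermost/
rsync -avh --delete /opt/mattermost-docker/volumes/db/ /opt/mattermost-new/volumes/db/
# short downtime: stop the old stack, copy the final delta, start the new one
(cd /opt/mattermost-docker && docker-compose down)
rsync -avh --delete /opt/mattermost-docker/volumes/app/mattermost/ /opt/mattermost-new/volumes/app/mattermost/
rsync -avh --delete /opt/mattermost-docker/volumes/db/ /opt/mattermost-new/volumes/db/
(cd /opt/mattermost-new && docker-compose up -d)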
Just wanted to share my experience in case it's useful to someone else.
I needed to iron out a few kinks before I finally got the migration from the old mattermost-docker repo to work, namely:
Database (Postgres)
Importing a dump into the mattermost database already created by the new instance didn't work:
I couldn't log in if I let docker-compose bring up the containers first and imported the dump afterwards.
My assumption is that certain values were not overwritten while importing the dump.
I had to:
Fire up the new database container (without the app)
Log into the container and create the mattermost database manually
Import the dump
Shut down the new Postgres container
Fire up both containers via the standard docker-compose command
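A sketch of that sequence (the service names and credentials are assumptions; take them from your docker-compose.yml and .env):
# start only the database service
docker-compose up -d postgres
# create the mattermost database manually
docker-compose exec postgres psql -U mmuser -d postgres -c "CREATE DATABASE mattermost;"
# import the dump
docker-compose exec -T postgres psql -U mmuser -d mattermost < mattermost_dump.sql
# stop the database again, then bring up the full stack
docker-compose stop postgres
docker-compose up -d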
Docker Networking
I noticed that the Docker network that was created used a dynamic bridge name and incremented the second octet of the /16 network every time I brought the stack up (e.g. 172.18.x.x → 172.19.x.x and so on).
This is a problem for me, since I need static mappings/variables in my Shorewall configuration files to make it work.
After some digging into Docker networking fundamentals, I added the following stanza to docker-compose.without-nginx.yml:
# added manually to ensure the network is always using the same /16 and bridge interface
networks:
  mm:
    ipam:
      config:
        - subnet: 172.25.0.0/16
          gateway: 172.25.0.1
    driver_opts:
      com.docker.network.bridge.name: br-e60e7b360248
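To double-check the result, the resolved network can be inspected like this (the network name as Docker sees it depends on the compose project name, so <project>_mm below is a placeholder):
# show the configured subnet/gateway of the network
docker network inspect <project>_mm --format '{{ json .IPAM.Config }}'
# confirm the fixed bridge interface exists on the host
ip link show br-e60e7b360248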
Thanks for sharing this and good to hear that you’re up and running now!
Are there any issues left with the new installation or should we mark this thread as solved?