Bleve indexes need to be rebuilt after upgrades

Greetings fellow Matter…mosters! :wave:

Problem statement

I have enabled Bleve in my Mattermost instance, but the index seems to
be gone every time I upgrade the instance, so search stops working for my users.

I need to rebuild the Bleve index at least once per month (i.e. every time I upgrade).

The odd thing, however, is that
I can't find any physical files for the indexes in the configured directory,
even when search works just fine, as it does at the time of this writing:

server /opt/mattermost-docker/volumes/app/mattermost/bleveindexes # ls -la
total 8
drwxr-xr-x 2 2000 user 4096 Dec 13  2020 .
drwxr-xr-x 8 2000 user 4096 Dec 13  2020 ..
server /opt/mattermost-docker/volumes/app/mattermost/bleveindexes # 


I was hoping someone might have ideas on:

  • Where do the indexes live in my setup?
  • How can the Bleve index be maintained across upgrades?

NB: My Mattermost instance is small (fewer than 10 users, about 100 messages daily). Is there any chance that the indexes are kept in the database?


  • Mattermost docker 7.9.1 (though the problem has been ongoing for a while now)
  • PostgreSQL 13.8 database
  • Gentoo Linux server

Hi @daydr3am3r,

please check that the Bleve index directory is properly mapped to a path outside the container, otherwise its contents will be lost every time you get a new container. The default path is different from what you're seeing in your System Console (note the missing -), so maybe that's the reason, but it could also be that you're not mapping the directory at all. To find out, search docker-compose.yml in /opt/mattermost-docker for the bleve index mapping; it should contain a line similar to this:

      - ${MATTERMOST_BLEVE_INDEXES_PATH}:/mattermost/bleve-indexes:rw

If yours doesn't, and you can't figure out how to fix it on your own, please post what you find here.
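You can also check the running container's mounts directly - the container name mattermost here is just an assumption, use docker ps to find yours:

```shell
# print every mount of the container as "host path -> container path"
# and filter for the bleve index destination
docker inspect mattermost \
  --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' \
  | grep bleve
```

If the grep comes back empty, the index directory only exists inside the container and will be lost on recreation.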

Thank you for the pointers, @agriesser

They helped me realize that I'm still using the old mattermost-docker image structure, so
my docker-compose.yml and .env files are outdated; specifically with regard to Bleve, they are missing the variables altogether.

I suppose I could add them manually, but the future-proof thing to do is probably to redeploy using the mattermost/mattermost-team-edition image, as documented here

Yes, that’s the future-proof method. Let me know if you require help with that.


Since you’re kind enough to help out, let me run the steps by you (or anyone else really), just in case:

  1. Clone the new docker repo into a new location, say /opt/docker, using
    git clone
  2. Create the required directories under /opt/docker:
    sudo mkdir -p ./volumes/app/mattermost/{config,data,logs,plugins,client/plugins,bleve-indexes}
  3. Copy env.example to .env in the new instance directory and edit to taste (e.g.
    set Mattermost version, server name, database credentials, etc.)
  4. Run docker-compose down on the old instance
  5. Do a pg_dump of the old instance
  6. Copy ./volumes/app/mattermost/config/config.json of the old instance to the new instance location
  7. Copy (using rsync) ./volumes/app/mattermost/data of the old instance to the new instance location
  8. Import the database dump of the old instance into the new instance
  9. Set permissions in the new instance: sudo chown -R 2000:2000 ./volumes/app/mattermost
  10. Since I'm using my own Nginx server, in the new instance run docker-compose -f docker-compose.yml -f docker-compose.without-nginx.yml up -d
  • Should I also copy over the rest of the directories from the old instance to the new one under ./volumes/app/mattermost/ ?
  • Is there anything I am missing?
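For my own notes, the steps above roughly translate to this sketch (container names, credentials, and the repo URL are assumptions - I'd adjust them before running anything):

```shell
# a sketch of the migration plan; names and paths are assumptions,
# adjust them to your environment before running anything

# clone the new repo and create the volume directories
git clone https://github.com/mattermost/docker /opt/docker
cd /opt/docker
mkdir -p ./volumes/app/mattermost/{config,data,logs,plugins,client/plugins,bleve-indexes}

# configure the environment
cp env.example .env    # then edit version, domain, database credentials, ...

# stop the old app container (keep the DB up so pg_dump can reach it),
# then dump the old database
(cd /opt/mattermost-docker && docker-compose stop app)
docker exec old-db-container pg_dump -U mmuser mattermost > /tmp/mattermost.sql

# copy config.json and presync the data directory
cp /opt/mattermost-docker/volumes/app/mattermost/config/config.json ./volumes/app/mattermost/config/
rsync -av --delete /opt/mattermost-docker/volumes/app/mattermost/data/ ./volumes/app/mattermost/data/

# import the dump into the new instance's database (details depend on your setup),
# then fix ownership for the new containers
chown -R 2000:2000 ./volumes/app/mattermost

# start the stack without the bundled nginx
docker-compose -f docker-compose.yml -f docker-compose.without-nginx.yml up -d
```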

EDIT: Updated post to incorporate suggestions from @agriesser

You should switch points 5 and 6 and do the dump when the server is offline (otherwise it might not be consistent).
You can also skip point 4; the database will be overwritten anyway when you import the dump later on.

For the copy of the data directory, you could use rsync in order to presync the files while your server is still running, so the downtime is shorter.

rsync -av --delete /path/to/old/volumes/app/mattermost/data/ /path/to/new/volumes/app/mattermost/data/

This command keeps the destination directory in sync with the source directory. The first run will take long; subsequent runs finish quickly, because there's little or nothing new to copy (just the latest attachments, emojis, etc.). The rsync method also speeds up recovery if you have to revert and try again, because it simply continues to keep the destination in sync, and anything that has changed there will be overwritten.
Please note that the trailing slashes are important - don't skip them.

I'm not sure if the file ownerships of your old and new deployments are the same, so you could move the chown command from point 2 down to run after point 9 in your list (you haven't started the new deployment yet anyway); that makes sure all files get the new owner even after you've copied them.

Thanks @agriesser, I have updated my previous post based on your recommendations.

Would pg_dump work with the database container being down, though? Maybe the cleaner approach is to shut down the app container but not the database container, and then do the database dump.

Also, any thoughts on copying the other directories under ./volumes/app/mattermost besides data? Plugins in particular feel important.

Good point, sorry for that - pg_dump will not work when the database instance is down, so please shut down only the application container. Thanks for pointing that out.
Plugins are bundled with the installation and will be deployed automatically on first start. If you have manually installed plugins, you can copy them over or reinstall them after you've started Mattermost again. Fortunately, plugins are always optional, so you can take your time installing them after the migration.
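If you do have manually installed plugins, copying them over is a one-liner per directory (the paths here assume the old and new locations from earlier in this thread):

```shell
# copy manually installed server and webapp plugins from the old volume to the new one
rsync -av /opt/mattermost-docker/volumes/app/mattermost/plugins/ /opt/docker/volumes/app/mattermost/plugins/
rsync -av /opt/mattermost-docker/volumes/app/mattermost/client/plugins/ /opt/docker/volumes/app/mattermost/client/plugins/
```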

Is the length of the downtime important for you? If it is and you need to keep it as short as possible, there are additional options, such as rsyncing the database container's volume as well - but that only works if the source and destination Postgres versions are identical. In that case you could skip the pg_dump and restore part entirely: rsync the application data and the Postgres database while the server is still running, then shut down the source (including the database container), run a final rsync to copy over the last delta, and start up the destination. That's the shortest possible downtime. If downtime doesn't matter, your approach looks good.
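Sketched out, that low-downtime variant could look like this - /old and /new are hypothetical paths, and it assumes identical Postgres versions on both sides:

```shell
# presync while the old server is still running; repeat as often as you like,
# each run only transfers the delta since the last one
rsync -av --delete /old/volumes/app/mattermost/data/ /new/volumes/app/mattermost/data/
rsync -av --delete /old/volumes/db/ /new/volumes/db/

# short downtime window: stop everything, copy the final delta, start the new stack
(cd /old && docker-compose down)
rsync -av --delete /old/volumes/app/mattermost/data/ /new/volumes/app/mattermost/data/
rsync -av --delete /old/volumes/db/ /new/volumes/db/
(cd /new && docker-compose -f docker-compose.yml -f docker-compose.without-nginx.yml up -d)
```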


Just wanted to share my experience in case it's useful to someone else.

I needed to iron out a few kinks before the migration from the old mattermost-docker repo finally worked, namely:

Database (Postgres)

Importing a dump into the mattermost database already created by the new instance didn't work:
I couldn't log in if I let docker-compose bring up the containers first and imported the dump afterwards.
My assumption is that certain values were not overwritten during the import.

I had to:

  1. Fire up the new database image (without the app)
  2. Log into the image and create the mattermost database manually
  3. Import the dump
  4. Shut down the new postgres image
  5. Fire up both images via the standard docker-compose command
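In command form, that looked roughly like this (service name, container name, database user, and dump path are assumptions from a default setup - adjust to taste):

```shell
# 1. start only the database service
docker-compose up -d postgres

# 2./3. create the mattermost database manually, then import the dump
docker exec new-postgres-container createdb -U mmuser mattermost
docker exec -i new-postgres-container psql -U mmuser mattermost < /tmp/mattermost.sql

# 4./5. stop it again, then bring up both containers the standard way
docker-compose down
docker-compose -f docker-compose.yml -f docker-compose.without-nginx.yml up -d
```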

Docker Networking

I noticed that the docker network being created was using a dynamic bridge name
and was incrementing the second octet of its /16 network every time I brought it up (e.g. 172.18.x.x
→ 172.19.x.x and so on).

This is a problem for me, since I need to have static mappings/variables in my Shorewall configuration files to make it work.

After some digging into docker networking fundamentals, I added the following stanza to docker-compose.without-nginx.yml:

# added manually to ensure the network always uses the same /16 and bridge interface
networks:
  default:
    driver_opts:
      com.docker.network.bridge.name: br-e60e7b360248
    ipam:
      config:
        - subnet: 172.18.0.0/16
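To verify the stanza took effect after bringing the stack up, something like this should print the fixed subnet and bridge name (the network name docker_default is an assumption; docker network ls shows yours):

```shell
docker network inspect docker_default --format \
  'subnet={{(index .IPAM.Config 0).Subnet}} bridge={{index .Options "com.docker.network.bridge.name"}}'
```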

HTH, and feedback is always appreciated!

Thanks for sharing this and good to hear that you’re up and running now!
Are there any issues left with the new installation or should we mark this thread as solved?