I’ve been using Mattermost for team communication in my organization, and overall, I’m really impressed with its features and flexibility. However, as our team grows, I’m starting to notice some slowdowns in performance, especially when navigating channels and searching for messages.
Are there any best practices or configuration tips for improving performance in Mattermost when working with a larger team? I’ve already looked into things like archiving old messages and optimizing the server, but I’d love to hear any additional advice from those with experience in scaling Mattermost for bigger teams.
Also, I’ve been exploring automation in our workflows and I’m curious about the best RPA tools to integrate with Mattermost. Has anyone used RPA (Robotic Process Automation) tools like UiPath or Automation Anywhere to automate tasks within Mattermost, and how did that impact performance?
Lastly, I’m curious about any plugins or add-ons that could enhance the experience without affecting performance too much.
Welcome to the community, @wajaces! For optimizing Mattermost performance with a growing team, I recommend checking out our official deployment guide to start.
For RPA integrations, you might also find some helpful tools in the Integrations Directory.
Hi. I can share some experience from deploying Mattermost in Docker.
At the moment we have a server with 500 users.
Our setup looks like this:
For the application we allocate 4 CPU cores and 16 GB of RAM.
For Postgres we allocate 4 cores and 32 GB of RAM; the database is currently about 30 GB.
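For reference, here is roughly how those limits could be expressed in a Compose file. This is only a sketch, not our actual stack: the service names, image tags, and the use of `deploy.resources.limits` (honored by the Docker Compose v2 plugin and Swarm) are illustrative.

```yaml
# Sketch of the resource allocation – image tags and service names are illustrative.
services:
  mattermost:
    image: mattermost/mattermost-team-edition:latest
    deploy:
      resources:
        limits:
          cpus: "4"      # 4 CPU cores for the application
          memory: 16G    # 16 GB of RAM for the application

  postgres:
    image: postgres:15
    deploy:
      resources:
        limits:
          cpus: "4"      # 4 CPU cores for the database
          memory: 32G    # 32 GB of RAM for the database
```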
Postgres also runs in a cluster, and we set the following values in its configuration:

#CUSTOM OPTIONS
max_connections = 1500
superuser_reserved_connections = 30
shared_buffers = 21GB
work_mem = 32MB
effective_cache_size = 21GB
maintenance_work_mem = 512MB

#WORKERS
max_worker_processes = 16 # (change requires restart)
max_parallel_workers_per_gather = 2 # taken from max_parallel_workers
max_parallel_maintenance_workers = 2 # taken from max_parallel_workers
max_parallel_workers = 16
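If you run Postgres as a plain container rather than through cluster tooling, one way to apply overrides like these is to pass them as command-line flags in the Compose file. The snippet below is only an assumed example of that approach (a clustered setup such as Patroni would take these values in its own configuration instead):

```yaml
# Assumed example: passing a few of the overrides as -c flags to a
# standalone Postgres container; not our actual cluster configuration.
services:
  postgres:
    image: postgres:15
    command: >
      postgres
      -c max_connections=1500
      -c shared_buffers=21GB
      -c effective_cache_size=21GB
      -c work_mem=32MB
      -c maintenance_work_mem=512MB
```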