Image Processing Takes a Long Time

Summary
Uploading images to Mattermost running on an Amazon EC2 t3.small (2 vCPUs, 2.0 GB RAM) is only about half as fast as uploading the same images to Slack, because of the image processing. Is there a way to optimize this, or a way to change the image processing options?

Steps to reproduce
Spin up an Amazon EC2 t3.small instance and install Mattermost 10.6.1 using the omnibus installation method. Upload an image and note how long the upload takes.
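For repeatable numbers, something like the following can time the upload endpoint directly (a rough sketch: the server URL, token, and channel ID are placeholders, and it only times the file upload/processing call rather than the full post flow the client goes through):

```python
import time
import requests

# Placeholders: substitute your own server URL, access token, and channel ID.
SERVER = "https://mattermost.example.com"
TOKEN = "your-personal-access-token"
CHANNEL_ID = "your-channel-id"
IMAGES = ["PXL_20250326_142113995.jpg"]  # any of the attached test images

headers = {"Authorization": f"Bearer {TOKEN}"}
files = [("files", (name, open(name, "rb"), "image/jpeg")) for name in IMAGES]

start = time.monotonic()
# POST /api/v4/files uploads the file(s); server-side processing such as
# thumbnail/preview generation should be reflected in how long this call takes.
resp = requests.post(
    f"{SERVER}/api/v4/files",
    headers=headers,
    files=files,
    data={"channel_id": CHANNEL_ID},
)
resp.raise_for_status()
print(f"upload took {time.monotonic() - start:.1f}s")
```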

Expected behavior
Mattermost should process images in roughly the same time as comparable services such as Slack.

Observed behavior
The image upload takes much longer than expected. Here is some timing data I’ve collected:

Image Attachments (single post with 4 images):

  • CFAB8D85-FFB1-4093-897D-8431D45F258B.jpg
  • D4658D7A-97AE-46A7-A72D-A185AB4387B4.jpg
  • 717E268F-ABBA-49F5-9F29-55150E0D5E6C.jpg
  • 26BF1E2F-806F-4377-9445-8E114B69F388.jpg

  Trial:       1    2    3    4    5
  Mattermost:  62s  56s  56s  63s  63s
  Slack:       46s  51s  41s  52s  42s

Image Attachment (single post with one image):

  • PXL_20250326_142113995.jpg

  Trial:       1    2    3    4    5
  Mattermost:  11s  12s  12s  11s  12s
  Slack:        7s   6s   6s   5s   6s

All test images are attached.




Hi Dylan! Thanks for bringing this up. Optimizing image processing is a great idea. You might find it helpful to check out this guide on configuration settings, as it highlights key configurations that could improve speed. Let us know how it goes!
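For what it’s worth, one quick way to see which file/image options the server exposes is to dump the FileSettings block of config.json, something like this sketch (the path is an assumption, and an omnibus install may keep its configuration elsewhere or in the database, in which case the System Console is the place to look):

```python
import json

# Path is an assumption for a typical Linux install; adjust to wherever your
# config.json actually lives (omnibus installs may store config in the database).
CONFIG_PATH = "/opt/mattermost/config/config.json"

with open(CONFIG_PATH) as f:
    config = json.load(f)

# FileSettings is where upload/image-related options (e.g. MaxFileSize,
# MaxImageResolution) live; exact keys vary by Mattermost version.
print(json.dumps(config.get("FileSettings", {}), indent=2))
```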

Hey John,
Do you have any particular configuration setting in mind, or are you just suggesting I review the configuration settings in general? I’m not seeing any settings related to image processing.
Thanks in advance!

So, this probably isn’t a fair comparison, as Slack is undoubtedly a containerized platform that can spin up containers and schedule them on much beefier nodes than a t3.small! Have you watched htop or something similar on that node while performing the upload? I suspect you’ll find that RAM is exhausted.

Also, we don’t recommend the Omnibus install for production use, since it presents a huge single point of failure: you’re running your Postgres database on the same 2 GB node as your Mattermost server. I bet if you repeated your test with 4 GB, or better yet 8 GB, you’d see better results.
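If you want to capture numbers rather than eyeballing htop, a rough sketch like this (using the psutil package; the one-second sampling interval and output format are just illustrative) will log CPU and memory while you repeat the upload:

```python
import psutil

# Sample overall CPU and memory once per second; stop with Ctrl+C.
try:
    while True:
        cpu = psutil.cpu_percent(interval=1)  # averaged over the 1 s interval
        mem = psutil.virtual_memory()
        print(f"cpu={cpu:5.1f}%  "
              f"mem_used={mem.used / 2**20:7.1f} MiB  "
              f"mem_avail={mem.available / 2**20:7.1f} MiB")
except KeyboardInterrupt:
    pass
```

If available memory collapses during the upload, that would point at the 2 GB node being the bottleneck.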

That explanation makes sense! Watching htop shows that it’s CPU-bound during the processing step. Is there a way to tune the image processing step, maybe by changing the encoding options?

I’ve asked, but I really doubt it… you’re probably going to need a bigger instance, or to move your Postgres onto its own instance. And even then, a t3.small may still not be enough.