Mattermost docker with nginx-proxy - 502 Bad Gateway

Hello everyone,

I hope you can help me with my problem; I have been stuck for a couple of days now.

In order to provide my fiancée's research team with a Mattermost instance during COVID-19, I tried to set up a Docker version of it on my DigitalOcean droplet.

What I am trying to achieve: Docker Compose for the app and db, with nginx-proxy and the letsencrypt companion as an SSL reverse proxy.
The nginx-proxy is reachable, but always fails with "502 Bad Gateway".

Maybe someone can help me here.

The OS is: Ubuntu 16.04.6 LTS
The main installation document I followed is: docs.mattermost[.]com/install/prod-docker.html

I modified the yaml file to remove the web server part, plus I switched to the team edition, as I want the free Mattermost:

app:
  build:
    context: app
    args:
      - edition=team

Then I followed "How do I set up an NGINX proxy with the Mattermost Docker installation?" from Configuring NGINX as a proxy for Mattermost Server — Mattermost 5.21 documentation.

It isn't explicitly stated, but I assumed the nginx-proxy image (via `docker pull jwilder/nginx-proxy:alpine`) is required.

I started the nginx-proxy and added it to the mattermost docker network:
docker run -d -p 443:443 -p 80:8000 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
docker network connect mattermost-docker_default nginx-proxy-instance-name
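A useful debugging step at this point (a sketch; substitute your actual container name — `/etc/nginx/conf.d/default.conf` is where jwilder/nginx-proxy writes its generated config by default):

```shell
# Check that both containers share the network, then inspect what
# nginx-proxy generated for the VIRTUAL_HOST (upstream server + port):
docker network inspect mattermost-docker_default \
  --format '{{range .Containers}}{{.Name}} {{end}}'
docker exec nginx-proxy-instance-name cat /etc/nginx/conf.d/default.conf
```

A 502 from nginx-proxy usually means the upstream in that generated file points at a port the app container isn't actually listening on.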

According to this post: Use nginx-proxy and v3.7.1 (Docker) - Installation - Mattermost, Inc. (forum.mattermost[.]org/t/use-nginx-proxy-and-v3-7-1-docker/3062)
I also exposed port 8000 in the yaml file, as it is the team edition.

As nginx-proxy uses port 80 by default, I also tried VIRTUAL_PORT=8000 (GitHub - nginx-proxy/nginx-proxy: Automated nginx proxy for Docker containers using docker-gen):

environment:
  # set same as db credentials and dbname
  - MM_USERNAME=mmuser
  - MM_PASSWORD=mmuser_password
  - MM_DBNAME=mattermost

  - VIRTUAL_HOST=temp.tld
  - VIRTUAL_PORT=8000
  - LETSENCRYPT_HOST=temp.tld
  - LETSENCRYPT_EMAIL=temp.tld
expose:
  - "8000"

When I list the running containers, it looks like this:

c83b81fb5bf4 jwilder/nginx-proxy "/app/docker-entrypo…" 50 minutes ago Up 50 minutes 80/tcp, 0.0.0.0:80->8000/tcp hardcore_tharp
9f8cce41c2a1 mattermost-docker_app "/entrypoint.sh matt…" About an hour ago Up 27 minutes (healthy) 443/tcp, 8000/tcp mattermost-docker_app_1
b6a533bcea03 mattermost-docker_db "/entrypoint.sh post…" 23 hours ago Up About an hour (healthy) 5432/tcp mattermost-docker_db_1
Still, it doesn't work. I have no clue where I am stuck or what I am doing wrong.
Does anyone have an idea? And how would this also work with port 443 for HTTPS?

Your help is really appreciated. Kinda frustrating.

Cheers
Tom

PS: I had to remove some links, as I am a new user.

@paulrothrock Would the support team be familiar with this issue?

Hey Tom,
Did you change the default port for Mattermost in the config of the Mattermost server?
The default port is 8065, not 8000 :slight_smile:

Let me know if this helps.
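The effective port is whatever ServiceSettings.ListenAddress says in the server's config.json (inside the container that file usually lives at /mattermost/config/config.json). A self-contained sketch, using a sample file instead of the real container path:

```shell
# The listen port comes from ServiceSettings.ListenAddress in config.json.
# A sample file is used here so the snippet is self-contained:
cat > /tmp/mm-config-sample.json <<'EOF'
{ "ServiceSettings": { "ListenAddress": ":8000" } }
EOF
grep -o '"ListenAddress"[^,}]*' /tmp/mm-config-sample.json
# → "ListenAddress": ":8000"
```

Against the real container the same grep can be run via `docker exec` on the mounted config.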

Hi Sven,
Good point. This is also super confusing when looking at the docs :slight_smile:
When I changed the docker build to team edition, it started to expose port 8000.

I will try it out.

Could it be there is a mismatch between documentation, Docker image, and app server…?!

9f8cce41c2a1        mattermost-docker_app   "/entrypoint.sh matt…"   2 hours ago         Up 48 minutes (healthy)   443/tcp, 8000/tcp                          mattermost-docker_app_1
b6a533bcea03        mattermost-docker_db    "/entrypoint.sh post…"   24 hours ago        Up 2 hours (healthy)      5432/tcp                                   mattermost-docker_db_1

Would be highly appreciated, thanks @amy.blais

Hi Tom,
I would recommend not using the Let's Encrypt companion; it isn't easier if you're new to this. The simplest setup would be to run Postgres and Mattermost in Docker and nginx & certbot on the host, and you'll get nginx updates from the Ubuntu security team. Add a cron job or systemd timer to renew the certs and you're done. The companion only saves you the cert creation and renewal, at the price of Docker socket access (root), and you need to provide your custom nginx template anyway, because Mattermost uses websockets and the companion will just set up a config with a plain reverse proxy.

Don’t forget to change foo.bar in the compose file, nginx conf and certbot command to your desired domain.

Use this docker-compose file

version: "3"

services:
  db:
    container_name: postgres_mattermost
    image: postgres:alpine
    restart: unless-stopped
    volumes:
      - ./postgres:/var/lib/postgresql/data
      - /etc/localtime:/etc/localtime:ro
    environment:
      POSTGRES_USER: mmuser
      POSTGRES_PASSWORD: mmuser_password
      POSTGRES_DB: mattermost

  app:
    depends_on:
      - db
    container_name: mattermost
    image: mattermost/mattermost-team-edition:release-5.21
    restart: unless-stopped
    volumes:
      - ./mattermost/config:/mattermost/config:rw
      - ./mattermost/data:/mattermost/data:rw
      - ./mattermost/logs:/mattermost/logs:rw
      - ./mattermost/plugins:/mattermost/plugins:rw
      - ./mattermost/client-plugins:/mattermost/client/plugins:rw
      - /etc/localtime:/etc/localtime:ro
    environment:
      MM_SERVICESETTINGS_SITEURL: https://foo.bar
      MM_SQLSETTINGS_DRIVERNAME: postgres
      MM_SQLSETTINGS_DATASOURCE: postgres://mmuser:mmuser_password@db:5432/mattermost?sslmode=disable&connect_timeout=10
    ports:
      - "127.0.0.1:8065:8065"

Don’t forget to change the db username and password in both sections.
Afterwards create the needed folders (in the directory containing the compose file, or change the volume mappings):

$ mkdir -p postgres mattermost/{config,data,logs,plugins,client-plugins}
$ sudo chown -R 2000:2000 mattermost
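The brace expansion in that `mkdir` creates the whole tree in one go (2000:2000 is, as far as I know, the UID/GID of the mattermost user inside the official image). A quick sanity check in a throwaway directory (`/tmp/mm-demo` is just for illustration):

```shell
# Reproduce the layout in a scratch directory to see what gets created:
mkdir -p /tmp/mm-demo/postgres /tmp/mm-demo/mattermost/{config,data,logs,plugins,client-plugins}
ls /tmp/mm-demo/mattermost
# prints the five subdirectories: client-plugins config data logs plugins
```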

Install nginx-full and certbot on the host and create a cert

$ sudo certbot certonly --standalone -d foo.bar
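One caveat: `--standalone` spins up its own temporary web server on port 80, so it has to run while nginx is not yet (or briefly not) listening there. For renewals with nginx running, the webroot plugin fits the ACME-challenge location in the nginx config below, which serves /var/www/html (a sketch, same domain placeholder):

```shell
# Renew through the running nginx instead of stopping it:
# the ACME-challenge location serves /var/www/html, which is
# exactly the directory --webroot needs.
sudo certbot certonly --webroot -w /var/www/html -d foo.bar
```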

Drop the following config to /etc/nginx/sites-enabled/mattermost.conf

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;

server {
    server_name foo.bar;
    listen 80;
    listen [::]:80;

    # ACME-challenge
    location ^~ /.well-known/acme-challenge/ {
        root /var/www/html;
    }

    # redirect all HTTP requests to HTTPS with a 301 Moved Permanently response.
    return 301 https://$host$request_uri;
}

server {
    server_name foo.bar;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    access_log /var/log/nginx/access.log;

    # ssl
    ssl_certificate /etc/letsencrypt/live/foo.bar/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/foo.bar/privkey.pem;
    ssl_session_timeout 1d;
    ssl_session_cache shared:MozSSL:10m;  # about 40000 sessions
    ssl_session_tickets off;

    # curl https://ssl-config.mozilla.org/ffdhe2048.txt > /path/to/dhparam.pem
    #ssl_dhparam /path/to/dhparam.pem;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # HSTS (ngx_http_headers_module is required) (63072000 seconds)
    add_header Strict-Transport-Security "max-age=63072000" always;

    # other security headers (see https://securityheaders.com)
    add_header Referrer-Policy 'strict-origin-when-cross-origin';
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options 'nosniff' 'always';
    add_header X-Frame-Options 'SAMEORIGIN' 'always';

    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;

    # verify chain of trust of OCSP response using Root CA and Intermediate certs
    #ssl_trusted_certificate /path/to/root_CA_cert_plus_intermediates;

    
    location ~ /api/v[0-9]+/(users/)?websocket$ {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        client_max_body_size 50M;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Frame-Options SAMEORIGIN;
        proxy_buffers 256 16k;
        proxy_buffer_size 16k;
        client_body_timeout 60;
        send_timeout 300;
        lingering_timeout 5;
        proxy_connect_timeout 90;
        proxy_send_timeout 300;
        proxy_read_timeout 90s;
        proxy_pass http://localhost:8065;
   }

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Frame-Options SAMEORIGIN;
        client_max_body_size 50M;
        proxy_set_header Connection "";
        proxy_buffers 256 16k;
        proxy_buffer_size 16k;
        proxy_read_timeout 600s;
        proxy_cache mattermost_cache;
        proxy_cache_revalidate on;
        proxy_cache_min_uses 2;
        proxy_cache_use_stale timeout;
        proxy_cache_lock on;
        proxy_http_version 1.1;
        proxy_pass http://localhost:8065;
    }
}

You can verify the nginx conf with `sudo nginx -t` and afterwards restart nginx with `sudo systemctl restart nginx`. For the auto-renewal of the cert, put the following two snippets into /etc/systemd/system/ as certbot.service:

[Unit]
Description=certbot renew
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/bin/certbot renew

[Install]
WantedBy=multi-user.target

and certbot.timer:

[Unit]
Description=Run certbot renew

[Timer]
Persistent=true
OnCalendar=*-*-* 5:00:00
Unit=certbot.service

[Install]
WantedBy=timers.target

Reload the daemon files and enable the timer:

$ sudo systemctl daemon-reload
$ sudo systemctl enable --now certbot.timer

Spin up the containers and the setup should be up and running. If you want to harden your nginx, create the dhparam file and point the OCSP stapling cert setting to the SSL bundle from Ubuntu (probably something like /etc/ssl/certs/ca-certificates.crt).

Marco


So, I changed the docker compose file to port 8065, and it looks like this now:

app:
  build:
    # change `build: app` to `build:` and uncomment following lines for team edition or change UID/GID
    context: app
    args:
      - edition=team
      # - PUID=1000
      # - PGID=1000
  restart: unless-stopped
  volumes:
    - ./volumes/app/mattermost/config:/mattermost/config:rw
    - ./volumes/app/mattermost/data:/mattermost/data:rw
    - ./volumes/app/mattermost/logs:/mattermost/logs:rw
    - ./volumes/app/mattermost/plugins:/mattermost/plugins:rw
    - ./volumes/app/mattermost/client-plugins:/mattermost/client/plugins:rw
    - /etc/localtime:/etc/localtime:ro
  environment:
    # set same as db credentials and dbname
    - MM_USERNAME=mmuser
    - MM_PASSWORD=mmuser_password
    - MM_DBNAME=mattermost
    - VIRTUAL_HOST=domain
    - VIRTUAL_PORT=8065
    - LETSENCRYPT_HOST=domain
    - LETSENCRYPT_EMAIL=admin@domain
    # use the credentials you've set above, in the format:
    # MM_SQLSETTINGS_DATASOURCE=postgres://${MM_USERNAME}:${MM_PASSWORD}@db:5432/${MM_DBNAME}?sslmode=disable&connect_timeout=10
    - MM_SQLSETTINGS_DATASOURCE=postgres://mmuser:mmuser_password@db:5432/mattermost?sslmode=disable&connect_timeout=10

    # in case your config is not in default location
    #- MM_CONFIG=/mattermost/config/config.json

  expose:
    - "8065"

Did a `docker-compose stop && docker-compose start`.

Same result: calling port 80 on the nginx-proxy returns 502 Bad Gateway.
Checking the running containers, the app still exposes 443 and 8000. Very strange.
docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7804c08be997 jwilder/nginx-proxy "/app/docker-entrypo…" 18 hours ago Up 18 hours 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp clever_shannon
9f8cce41c2a1 mattermost-docker_app "/entrypoint.sh matt…" 20 hours ago Up 3 minutes (healthy) 443/tcp, 8000/tcp mattermost-docker_app_1
b6a533bcea03 mattermost-docker_db "/entrypoint.sh post…" 42 hours ago Up 3 minutes (healthy) 5432/tcp mattermost-docker_db_1

Thank you @marcokundt. I also thought it might be faster and easier to just install it all on the host.
I might even install the app and the db on the host itself. Or do you see any benefit to having both in a Docker setup?
My first idea was to have a nice, containerized setup, but that's probably not the easiest way. :thinking:

Yet it seemed quite obvious. The nginx-proxy wiki on GitHub mentions setting the VIRTUAL_PORT env var to select a dedicated port, which would be 8000 for the team edition app server (according to the documentation). The VIRTUAL_HOST env variable is used to configure the nginx-proxy hostname.

What I really don't understand is why `docker container ls` shows that the app container exposes 443 and 8000, even though this is not configured in the yaml file.

@HachimanSec, the half-Docker, half-host approach has some benefits in my eyes. You are exposed to the image and base-image update processes in Dockerland, and therefore it's advisable to run at least nginx on your host, because you will get the updates from paranoid (in a good sense) people. If no one builds a new image directly after a security bug is found, your direct point to the internet may be attackable. Mattermost and Postgres inside Docker have the benefit that you can update them more easily and save multiple steps at once. Postgres is only accessible from inside the Mattermost/Postgres network (which is created at `docker-compose up`), and the Mattermost images are rebuilt on a regular basis.

What I really don't understand is why docker container ls shows that the app container exposes 443 and 8000, even though this is not configured in the yaml file.

The yaml is not the only source of truth. The ports are exposed in the Dockerfiles from which your images are built; the yaml just overrides the already-defined exposures.
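A tiny illustration with a sample Dockerfile (the contents here are made up, but mirror the 8000/443 shown by `docker container ls` above):

```shell
# EXPOSE directives live in the image's Dockerfile, not in the compose file.
# Sample Dockerfile standing in for mattermost-docker's app image:
cat > /tmp/Dockerfile.sample <<'EOF'
FROM alpine:3
EXPOSE 8000 443
EOF
grep '^EXPOSE' /tmp/Dockerfile.sample
# → EXPOSE 8000 443
```

Also worth knowing: `docker-compose stop && docker-compose start` reuses the existing containers, so edits to the compose file only take effect after a `docker-compose up -d`, which recreates containers whose configuration changed.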


Thank you @marcokundt, good thoughts. :+1:
I will follow your approach, getting nginx on the host and the Mattermost app and db in Docker.

Also thanks for the hint with the ports. I was sure I missed something :wink:

Hi @marcokundt, I have now tried to configure the local nginx and the docker compose setup. Unfortunately I still receive a 502 Bad Gateway.

Local testing via curl reveals:

curl http://localhost:8065
curl: (52) Empty reply from server

The docker compose yaml has been adapted according to your proposal; I added the ports and the MM_SERVICESETTINGS_SITEURL env variable.

app:
    build: app
    # change `build: app` to `build:` and uncomment following lines for team edition or change UID/GID
    # context: app
    # args:
    #   - edition=team
    #   - PUID=1000
    #   - PGID=1000
    ports:
      - "127.0.0.1:8065:8065"
    restart: unless-stopped
    volumes:
      - ./volumes/app/mattermost/config:/mattermost/config:rw
      - ./volumes/app/mattermost/data:/mattermost/data:rw
      - ./volumes/app/mattermost/logs:/mattermost/logs:rw
      - ./volumes/app/mattermost/plugins:/mattermost/plugins:rw
      - ./volumes/app/mattermost/client-plugins:/mattermost/client/plugins:rw
      - /etc/localtime:/etc/localtime:ro
    environment:
      # set same as db credentials and dbname
      - MM_USERNAME=mmuser
      - MM_PASSWORD=pass
      - MM_DBNAME=mattermost
      - MM_SERVICESETTINGS_SITEURL:https://collaboration.temp
      # use the credentials you've set above, in the format:
      # MM_SQLSETTINGS_DATASOURCE=postgres://${MM_USERNAME}:${MM_PASSWORD}@db:5432/${MM_DBNAME}?sslmode=disable&connect_timeout=10
      - MM_SQLSETTINGS_DATASOURCE=postgres://mmuser:pass@db:5432/mattermost?sslmode=disable&connect_timeout=10

      # in case your config is not in default location
      #- MM_CONFIG=/mattermost/config/config.json

Strangely, the Dockerfile for the app has 8000. Do you know why 8000 is used instead of 8065? I am aware this port 8000 is only exposed on the internal docker network, but do I have to somehow rewrite it to 8065?

docker container ls
CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS                    PORTS                                NAMES
bc342a6437e6        mattermost-docker_app   "/entrypoint.sh matt…"   15 minutes ago      Up 15 minutes (healthy)   8000/tcp, 127.0.0.1:8065->8065/tcp   mattermost-docker_app_1
cd1fa5d5f028        mattermost-docker_db    "/entrypoint.sh post…"   15 minutes ago      Up 15 minutes (healthy)   5432/tcp                             mattermost-docker_db_1

For my nginx, I kept the default config file and added your snippets.
Checking the file reveals one error:

/etc/nginx/sites-enabled# nginx -t
nginx: [emerg] "proxy_cache" zone "mattermost_cache" is unknown in /etc/nginx/nginx.conf:63
nginx: configuration file /etc/nginx/nginx.conf test failed

I removed the proxy_cache lines just to make it work.
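(For reference, that emerg means nginx hit a `proxy_cache mattermost_cache;` directive before the zone was declared. The zone is created by the `proxy_cache_path` line at the top of the config Marco posted; since files in sites-enabled are included inside the http block, putting it at the top of the site file is enough:)

```nginx
# Declares the "mattermost_cache" zone; must be at http level,
# before any location uses `proxy_cache mattermost_cache;`.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;
```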
My nginx config looks like this:

server {

    listen [::]:443 ssl http2 ipv6only=on; # managed by Certbot
    listen 443 ssl http2; # managed by Certbot

    ssl_certificate /etc/letsencrypt/live/collaboration.temp/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/collaboration.temp/privkey.pem; # managed by Certbot

    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    # HSTS (63072000 seconds = 2 years)
    add_header Strict-Transport-Security "max-age=63072000; includeSubdomains;" always;

    # other security headers (see https://securityheaders.com)
    add_header Referrer-Policy 'strict-origin-when-cross-origin';
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options 'nosniff' 'always';
    add_header X-Frame-Options 'SAMEORIGIN' 'always';

    location ~ /api/v[0-9]+/(users/)?websocket$ {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        client_max_body_size 50M;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Frame-Options SAMEORIGIN;
        proxy_buffers 256 16k;
        proxy_buffer_size 16k;
        client_body_timeout 60;
        send_timeout 300;
        lingering_timeout 5;
        proxy_connect_timeout 90;
        proxy_send_timeout 300;
        proxy_read_timeout 90s;
        proxy_pass http://localhost:8065;
    }

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Frame-Options SAMEORIGIN;
        client_max_body_size 50M;
        proxy_set_header Connection "";
        proxy_buffers 256 16k;
        proxy_buffer_size 16k;
        proxy_read_timeout 600s;
        #proxy_cache mattermost_cache;
        #proxy_cache_revalidate on;
        #proxy_cache_min_uses 2;
        #proxy_cache_use_stale timeout;
        #proxy_cache_lock on;
        proxy_http_version 1.1;
        proxy_pass http://localhost:8065;
    }
}
server {
    if ($host = collaboration.temp) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name collaboration.temp;
    return 404; # managed by Certbot
}

Do you have any advice? It still seems to be an issue with the docker compose file / Dockerfile?!

Best
Tom

Hey,
don't use mattermost-docker at all. The compose file I provided uses the prebuilt images from the Mattermost staff and should work out of the box. Right now this isn't working because the mattermost-docker repo uses a non-standard port.
You probably forgot to define the proxy_cache, but without seeing your nginx.conf I can't say where the problems are. My advice is to use the configuration I provided, because that should work.

Thanks @marcokundt, I didn't realize you had provided your own docker compose file "instead of" the other one.
I will check it out later today. Thanks again for pointing this out.

For the proxy I also realized I probably missed this line in the conf:

`proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;`

I got it working now, thank you @marcokundt. (I even got it working with port 8000.)