Bridge Nginx and Mattermost on the same server

I have Mattermost set up and running using the default web server/configuration. I would like to have it running on Nginx. I’ve configured Nginx following the Nginx Server instructions. I can reach the page in the browser with SSL.

I’m having trouble pointing the Mattermost installation/configuration to the Nginx configuration.

Any help will be greatly appreciated.

When you say “the page,” is that the default SSL page, or the Mattermost setup/login page?

I meant the intended site page served by Nginx, using my site’s sub-domain. What I can’t figure out is how to get Mattermost to load on that site instead of using the default web server.

Below, I’ve attached a copy of my nginx configuration file:

cat /etc/nginx/conf.d/mattermost
upstream backend {
   server;
   keepalive 32;
}

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:10m max_size=3g inactive=120m use_temp_path=off;

server {
   listen 4431 default_server;
   server_name ;
   return 301 https://$server_name$request_uri;
}

server {
   listen 4431 http2;

   ssl on;
   ssl_certificate /etc/nginx/certs/OCert.pem;
   ssl_certificate_key /etc/nginx/certs/priv_key.pem;
   ssl_client_certificate /etc/nginx/certs/cloudflare.crt;
   ssl_verify_client on;
   ssl_session_timeout 1d;
   ssl_protocols TLSv1.2 TLSv1.3;
   ssl_prefer_server_ciphers on;
   ssl_session_cache shared:SSL:50m;
   # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
   add_header Strict-Transport-Security max-age=15768000;
   # OCSP Stapling ---
   # fetch OCSP records from URL in ssl_certificate and cache them
   ssl_stapling on;
   ssl_stapling_verify on;

   location ~ /api/v[0-9]+/(users/)?websocket$ {
       proxy_set_header Upgrade $http_upgrade;
       proxy_set_header Connection "upgrade";
       client_max_body_size 50M;
       proxy_set_header Host $http_host;
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header X-Forwarded-Proto $scheme;
       proxy_set_header X-Frame-Options SAMEORIGIN;
       proxy_buffers 256 16k;
       proxy_buffer_size 16k;
       client_body_timeout 60;
       send_timeout 300;
       lingering_timeout 5;
       proxy_connect_timeout 90;
       proxy_send_timeout 300;
       proxy_read_timeout 90s;
       proxy_pass http://backend;
   }

   location / {
       client_max_body_size 50M;
       proxy_set_header Connection "";
       proxy_set_header Host $http_host;
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
       proxy_set_header X-Forwarded-Proto $scheme;
       proxy_set_header X-Frame-Options SAMEORIGIN;
       proxy_buffers 256 16k;
       proxy_buffer_size 16k;
       proxy_read_timeout 600s;
       proxy_cache mattermost_cache;
       proxy_cache_revalidate on;
       proxy_cache_min_uses 2;
       proxy_cache_use_stale timeout;
       proxy_cache_lock on;
       proxy_http_version 1.1;
       proxy_pass http://backend;
   }
}
Did you remove the default enabled site with sudo rm /etc/nginx/sites-enabled/default and enable your mattermost site configuration by symlinking the configuration file in /etc/nginx/sites-available/my-file to /etc/nginx/sites-enabled/my-file?

Also, can I ask why the server IP address is If you’re referring to the localhost, it should be or just plain localhost, right?

Yes, I did, and I set the above config file as the default.

I guess I’m somewhat confused then, since it appears in the top portion of the above configuration file you still have the local IP address as, and this is incorrect. Here is why:

According to Wikipedia,
In the Internet Protocol Version 4, the address is a non-routable meta-address used to designate an invalid, unknown or non-applicable target. This address is assigned specific meanings in a number of contexts, such as on clients or on servers.
Which directly states that the IP address in question is non-routable.

If you don’t know what a non-routable IP address is, Nexor Cybersecurity provides a nice explanation:
The term non-routable just means exactly that; that IP packets cannot be directed from one network to another.
This states that a non-routable IP address cannot direct any packets to any location on any network, be that internal, public, private, virtualized, etc., and as such it effectively makes a dead-end or “black hole” of sorts for internet information and network data transfer.

You may be thinking, “I should google this; this is speaking in networking terms, not related to being on a server.” Don’t worry, the following covers specifically using the IP address on servers!

ThreatBrief provides a great explanation in detail of the use of in server networking; however, here I include a small paraphrase of the general information, to get the point across: – you may have noticed this IP address several times during your online sojourns. Many users think this IP address belongs to some hackers, while others think nothing of it. Tech geeks know that it’s a ‘no particular address’ placeholder.
While it is true, actually specifies all IP addresses on all interfaces on the system. It’s a non-routable meta-address that’s used to define an unknown or invalid target. If we talk about servers, means all IPv4 addresses on the local machine. In the case of a route entry, it means the default route.

Your PC will pop up a address when it is not connected to any TCP/IP network, because it has no IP address to resolve. By default, if you see a IP address, your PC/laptop is offline. Similarly, if the host PC is unable to obtain an IP address from DHCP, may be automatically assigned. Once is assigned, it can’t communicate with any other device over IP. In simple language, it means no internet or intranet access.

So in other terms, even if somehow this were able to be used as a network-connected address, which would require sophisticated reconfiguration of your entire iptables setup and could likely severely impact your DHCP and/or even static network connections to the public internet, the standard system treatment of the IP address is that it is a placeholder and non-routable address. A further note: in the event that it was being detected and parsed as a wildcard, the full range of IP addresses in the IP block would all be searched for a connection to a web server at the same time, which would also result in errors, if not total failure.

That all being said, your Nginx configuration file is functioning as a reverse proxy, which means that it takes the IP address found in the upstream backend, listens to that address on your local machine, and then takes the information and data that it receives from the internal IP address and ports it to the desired external port (i.e. the port you are connecting to from your website: the standard 80, 443, or a custom port. The external port is what is exposed to the world and connected to).
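In other words, a minimal reverse proxy has just two parts: an upstream pointing at the application’s local address, and a server block that listens on the public port and passes requests along. The sketch below is illustrative only; the domain, port, and local address are placeholders, not values taken from your file:

```nginx
# Minimal reverse-proxy sketch. "" and the domain
# are assumptions for illustration; substitute your real values.
upstream backend {
    server;   # the application's local listen address
    keepalive 32;

server {
    listen 443 ssl;                 # the public-facing port
    server_name chat.example.com;   # placeholder domain

    location / {
        proxy_http_version 1.1;
        proxy_set_header Host $http_host;
        proxy_pass http://backend;  # relay to the upstream above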

This is where the issue is.
In the configuration file which you have posted above, the upstream backend IP address is shown as, which is effectively a data black hole. (See below for the snippet from your configuration file posted above that shows this.)

cat /etc/nginx/conf.d/mattermost
upstream backend {
   server;
   keepalive 32;
}

Instead of being set to, this should be the localhost internal IP address, which is most commonly (and by default) or just plain localhost. Set the IP address to either of the two, and ensure that the Mattermost config.json file reflects the same listening address. (The example of the referenced portion of config.json shown below is from my personal Mattermost installation.)

    "ServiceSettings": {
        "WebsocketURL": "",
        "LicenseFileLocation": "",
        "ListenAddress": ":8065",
        "ConnectionSecurity": "",
        "TLSCertFile": "",
        "TLSKeyFile": "",
        "TLSMinVer": "1.2",
        "TLSStrictTransport": false,
        "TLSStrictTransportMaxAge": 63072000,
        "TLSOverwriteCiphers": [],
        "UseLetsEncrypt": false,
        "LetsEncryptCertificateCacheFile": "./config/letsencrypt.cache",
        "Forward80To443": false,
        "TrustedProxyIPHeader": [

Seeing that the local port used in your upstream backend is port 4431, I would suggest just verifying that the port declared in "ListenAddress": is set as "ListenAddress": ":4431",
As something to note, you may notice that in the Mattermost configuration files there are no options to set the listening IP address. This is more secure because it forces the user to reverse proxy the server’s instance to the public internet, which in principle is a much safer way to do it, not to mention that there are many other reasons why it’s just better.
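To tie those two settings together, the upstream block and config.json simply have to agree on the same loopback address and port. Assuming you keep 4431 as the Mattermost listen port, as suggested above, the upstream would look like:

```nginx
# Assumes config.json contains "ListenAddress": ":4431"
upstream backend {
    server;   # must match Mattermost's listen port
    keepalive 32;
```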

I hope this helps! If you’re still experiencing trouble, or I am somehow misunderstanding the issue, let me know and I’ll get back to you as soon as I can!


I’m sorry, I’ve not been able to respond promptly.

Back on the issue we’re trying to solve: I do understand the addressing scheme (I left in the attachment for provisional purposes; pardon my mistake, I should have used the generic loopback address). Nonetheless, I appreciate your effortless knowledgeability on the matter and willingness to explain everything in sufficient detail.

While digging deeper into the issue over the weekend, I noticed the proxy_pass http://backend; line, which I believe should be pointing to the Mattermost installation’s address and port, e.g. proxy_pass http://localhost:8065; or the correct port based on the config.

Now, I’ve not tried this implementation, but I think that could be part of the problem. Also, just to clarify, both my Nginx and Mattermost installations are running on the same server.
I can’t set Nginx to listen on the same port as Mattermost because then one of the two won’t start.

Again, I can get to the Mattermost installation from the browser as well as through Nginx, except that Nginx throws a 503 Service Temporarily Unavailable error (nginx/1.17.6); see below.


At this point, I believe it’s my proxy settings that are failing me.

Yes indeed, you aren’t supposed to run your Nginx server and Mattermost on the same port. The intended way to do this is to use Nginx as the proxy: to take a port (for example, 8080) and redirect that data to, say, port 80, or for HTTPS, to port 443.

To do this you would need to set your Mattermost port to 8080 and proxy it to the port that you want your outside instance to connect to, taking care to make sure you also reverse proxy your WebSockets as well.
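The WebSocket part of that is just a matter of forwarding the HTTP Upgrade handshake. A minimal location block for Mattermost’s websocket endpoints might look like the following sketch (the backend upstream name is an assumption carried over from the earlier config):

```nginx
# Without the Upgrade/Connection headers, the websocket handshake
# is dropped and Mattermost's real-time features fail.
location ~ /api/v[0-9]+/(users/)?websocket$ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://backend;
```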

I feel like we’re getting closer to the solution, but this is exactly where I’m hazy (proxying). I haven’t really grasped the concept, but I’m going to read up on how to get it going, especially with Nginx.

I do agree. Basically, a reverse proxy with either Nginx or Apache follows the same concept as routing port 8080 (as I used in my example) to port 80 with iptables or a similar mechanism, except that it can also be configured to pass headers, allow WebSockets, and so on.


Hello, I don’t know where to look, or else I can’t find it. How can I change the listening port, or use a non-standard port in my web address URL, on Omnibus? I have made several posts asking for help and nobody has answered, and this thread is the closest thing to being able to do something similar.

As seen in the image, I need my URL to show another port, which I understand should be changed in nginx, but I don’t see how. As noted, I already have the certificate assigned through the installation script of Mattermost Omnibus, but as seen in the console those parameters cannot be changed there.

These are my posts on the subject in case they are worth referencing. I just need my address to be like this: where the listening port is this one and not the standard 443. I don’t know if I am explaining myself well.

Thank you for the help that will be welcome.

So the electricity goes out right before the critical final scene of the movie.
What happened then? Have you solved how to run both on the same server? I’ve been stuck here for days with no helpful recommendations. I got lost in circles…

Hi hsd and welcome to the Mattermost forums,

since this is your first post I’m not sure where you’re stuck and how this is related to this current topic. Can you create a new topic with the information related to your issue or add some context here, please?

Hi agriesser,
Thanks a lot.
I believe I am having exactly the same issue as @Mattermo.
I installed Nginx and Mattermost on the same server following the (not so clear) instructions here:

Configure NGINX with SSL and HTTP/2 — Mattermost documentation

I am now trying to play with various config combinations between these two files.

Speaking of the Mattermost configuration:

  • I am not sure how to write SiteURL: with or without the port number, and which port number?
  • I am not sure about the ListenAddress: since, from what I understood from the correspondence with @XxLilBoPeepsxX above, Mattermost will use 8065 to listen for Nginx, and Nginx will listen to the web via 443. So the Mattermost configuration becomes more complicated, because we cannot use Let’s Encrypt if we do not configure it to listen via 443…

I am totally confused here.

My conf looks like below (only the subject-critical ones):

"SiteURL": "http://teamup.etc…com",
"WebsocketURL": "",
"ListenAddress": ":8065",
"ConnectionSecurity": "",
"TLSCertFile": "",
"TLSKeyFile": "",
"TLSMinVer": "1.2",
"TLSStrictTransport": false,
"UseLetsEncrypt": false,
"LetsEncryptCertificateCacheFile": "./config/letsencrypt.cache",
"Forward80To443": false,

and the Nginx configuration looks like:

upstream backend {
server 185.248.xx.xx:443;

server {
server_name http://teamup.etc…com


location / {

proxy_pass http://backend

Thanks in advance; please let me know if you need to see more.

The SiteURL should read https://teamup.etc…com.

The ListenAddress can be set to to only listen on the local interface on port 8065. This will help with the system not being reachable directly from the outside, except when being accessed via nginx.
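In config.json, those two settings would then look something like this (your domain kept as you abbreviated it; https assumed since nginx will terminate TLS):

```json
"ServiceSettings": {
    "SiteURL": "https://teamup.etc…com",
    "ListenAddress": ""
}
```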

The nginx config needs to be changed: the upstream backend part needs to point to your backend server. So, from the parts you posted, this needs to be changed:

    upstream backend {

    server {
        server_name teamup.etc…com;

        location / {
            proxy_pass http://backend;

I did not fully understand whether you already have the SSL certificate or whether you need one at all. The current nginx snippet you posted looks like it is running on http only. Please let me know the desired configuration and what you have already done with regards to the certificate, so I can continue from there.
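For reference, once the certificate question is settled, a TLS-terminating server block for this setup would be shaped roughly like the following sketch (the certificate paths are placeholders, not your real ones):

```nginx
server {
    listen 443 ssl http2;
    server_name teamup.etc…com;

    # placeholder paths — adjust to wherever your certificate lives
    ssl_certificate     /etc/letsencrypt/live/example/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example/privkey.pem;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Host $http_host;
        proxy_pass http://backend;   # plain http to the local Mattermost
```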

BTW, there’s also a simplified installation method that has been available for quite some time: it’s called the Mattermost Omnibus setup, and it comes with nginx as a reverse proxy by default.

Thanks for all your help and comments.
Yes, I do already have the SSL certificate via Let’s Encrypt, as per the installation instructions. It works with the configuration you have given above, but when I tried changing proxy_pass to https I ended up with a 502 Bad Gateway error.

What happens when the certificate expires?

Nginx holds the certificate; proxy_pass defines the connection to the Mattermost server, which is unencrypted, so http is OK there.

You will need to manually rerun the command you used to create the certificate from time to time in order to renew it.
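If the certificate came from certbot, renewal can also be automated rather than rerun by hand; a typical crontab entry would be something like this (the schedule and the reload hook are assumptions about your setup):

```cron
# attempt renewal twice a day; certbot only acts when near expiry
0 3,15 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"
```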

Now we have all the answers in one place. Thanks for your prompt response.