iRedMail blocking Mattermost sites

What does green-rainbow.party resolve to for you? For me, it’s 86.38.217.68

I would remove it from DataSource and replace it with 127.0.0.1

John, I changed 127.0.1.1 to 'localhost'; then systemctl start mattermost.service returned no errors.
But browsing to either (IP):8065 or green-rainbow.party:8065 still doesn't work.
Thanks for your attention to this.
I also put my home IP, from which I SSH, into fail2ban's ignore list.
I can get to https://green-rainbow.party/mail/ successfully, but not https://green-rainbow.party:8065

So, sudo systemctl status mattermost.service shows the service is running with no errors?

What is ListenAddress and SiteURL in config.json?
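(For reference, those two keys sit under ServiceSettings in /opt/mattermost/config/config.json. A minimal sketch of the shape, with a placeholder hostname:)

```json
{
  "ServiceSettings": {
    "SiteURL": "https://chat.example.com",
    "ListenAddress": ":8065"
  }
}
```

A ListenAddress of ":8065" with no host part binds all interfaces; "127.0.0.1:8065" would bind loopback only, which matters once a reverse proxy enters the picture.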

What IP address(es) does your host have? If you have a browser on that host, can you browse to http://127.0.0.1:8065 and get a response? From another host on the same segment, can you reach http://<ip.address>:8065, where ip.address is your Ethernet or wifi interface?

Open up two terminals.
In one: sudo systemctl stop mattermost.service
In the other: tail -f /opt/mattermost/logs/mattermost.log
Hit Enter a few times to open up some white space.
In the first: sudo systemctl start mattermost.service
Watch what goes by in the second terminal. Look for errors.

Hi John,
"SiteURL": "https://green-rainbow.party",
"ListenAddress": ":8065",
I don't think I can browse from within the SSH session on the VPS, but I can now curl (VPS IP):8065 and get some HTML that looks like the Mattermost login page.
In my home machine's browser, (VPS IP):8065 times out, while (VPS IP)/mattermost brings up a 404 page.

I did the two windows thing: The only thing looking like an error is a warn…
{"timestamp":"2024-03-28 20:16:24.198 Z","level":"warn","msg":"failed to get public IP address for local interface","caller":"app/plugin_api.go:1006","plugin_id":"com.mattermost.calls","origin":"main.(*logger).Warn log.go:108","localAddr":"127.0.0.1","error":"failed to get public address: write udp4 127.0.0.1:8443->52.72.139.62:3478: sendto: invalid argument"}
{"timestamp":"2024-03-28 20:16:24.262 Z","level":"info","msg":"got public IP address for local interface","caller":"app/plugin_api.go:1000","plugin_id":"com.mattermost.calls","origin":"main.(*logger).Info log.go:104","localAddr":"86.38.217.68","remoteAddr":"86.38.217.68"}

Correction: On shutdown, a plugin failed to behave.
{"timestamp":"2024-03-28 20:15:43.205 Z","level":"info","msg":"Shutting down plugins","caller":"app/plugin.go:389"}
{"timestamp":"2024-03-28 20:15:43.206 Z","level":"error","msg":"RPC call OnDeactivate to plugin failed.","caller":"plugin/client_rpc_generated.go:33","plugin_id":"com.mattermost.nps","error":"connection is shut down"}
{"timestamp":"2024-03-28 20:15:43.206 Z","level":"warn","msg":"error closing client during Kill","caller":"plugin/hclog_adapter.go:70","plugin_id":"com.mattermost.nps","wrapped_extras":"errconnection is shut down"}
{"timestamp":"2024-03-28 20:15:43.206 Z","level":"warn","msg":"plugin failed to exit gracefully","caller":"plugin/hclog_adapter.go:72","plugin_id":"com.mattermost.nps"}
{"timestamp":"2024-03-28 20:15:43.206 Z","level":"error","msg":"RPC call OnDeactivate to plugin failed.","caller":"plugin/client_rpc_generated.go:33","plugin_id":"com.mattermost.calls","error":"connection is shut down"}
{"timestamp":"2024-03-28 20:15:43.206 Z","level":"warn","msg":"error closing client during Kill","caller":"plugin/hclog_adapter.go:70","plugin_id":"com.mattermost.calls","wrapped_extras":"errconnection is shut down"}
{"timestamp":"2024-03-28 20:15:43.206 Z","level":"warn","msg":"plugin failed to exit gracefully","caller":"plugin/hclog_adapter.go:72","plugin_id":"com.mattermost.calls"}
{"timestamp":"2024-03-28 20:15:43.206 Z","level":"warn","msg":"error closing client during Kill","caller":"plugin/hclog_adapter.go:70","plugin_id":"playbooks","wrapped_extras":"errconnection is shut down"}
{"timestamp":"2024-03-28 20:15:43.206 Z","level":"warn","msg":"plugin failed to exit gracefully","caller":"plugin/hclog_adapter.go:72","plugin_id":"playbooks"}
{"timestamp":"2024-03-28 20:15:43.207 Z","level":"info","msg":"stopping websocket hub connections","caller":"platform/web_hub.go:120"}
{"timestamp":"2024-03-28 20:15:43.212 Z","level":"info","msg":"Server stopped","caller":"app/server.go:786"}

What is the output of ip link sh and ip a?

root@green-rainbow:/opt/mattermost/config# ip link sh
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 0a:56:0d:da:9f:2f brd ff:ff:ff:ff:ff:ff
root@green-rainbow:/opt/mattermost/config# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 0a:56:0d:da:9f:2f brd ff:ff:ff:ff:ff:ff
inet 86.38.217.68/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 2a02:4780:10:e50c::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::856:dff:feda:9f2f/64 scope link
valid_lft forever preferred_lft forever

You mentioned a VPS (Virtual Private Server?). There has to be some layer of abstraction between your VM and the Internet. 443 and 80 are showing as open but with no valid cert; 8065 doesn't show as open at all. All I can think is that there's either hypervisor networking, a firewall, or some kind of load balancer in front of your VM.

I would get the simplest possible setup going: two VMs with no weird networking, no firewalls, nothing impeding traffic. Get Mattermost started and running without error, to where you can curl http://127.0.0.1:8065 and get a response. Then be able to hit 8065 via curl or browser from the other VM. Then start worrying about firewalls or whatever.
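A minimal version of that two-step check, assuming curl is available on both machines (192.0.2.10 is a placeholder, substitute your server's address):

```shell
# 1. On the Mattermost host itself (loopback, so no firewall in the way):
curl -sS -o /dev/null --max-time 3 -w 'local:  %{http_code}\n' http://127.0.0.1:8065 || true

# 2. From the second machine, against the server's real address:
curl -sS -o /dev/null --max-time 3 -w 'remote: %{http_code}\n' http://192.0.2.10:8065 || true
```

A 200 or 3xx on step 1 but 000 (timeout) on step 2 points at the network path (host firewall, hypervisor, or provider edge) rather than at Mattermost itself.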

John, I appreciate your taking this time. I'm reluctant to leave anything open without a firewall for any period of time. The Hostinger firewall has 443 and 80 open, as well as a few more. I have yet to get a cert from Let's Encrypt this time - that's the next step after getting Mattermost running. Your last note got me to notice that 8065 wasn't open yet - I opened it. It still times out going from 75.68.204.180 to https://86.38.217.68:8065/
I have curled 127.0.0.1:8065 and get what looks like a mattermost page - that’s done.
This may now be not a Mattermost question but an nginx/Debian/HTML one, I understand.
I don’t see how to make things simpler - I have just one Virtual Private Server. Two VMs seem to me to add complexity.
I've gotten Mattermost working by itself. What's difficult is integrating iRedMail and Mattermost. But this is potentially valuable: iRedMail provides sophisticated spam and virus control (SpamAssassin, FreshClam, and such), Mattermost competes with Slack, and extensions offer full email participation through Mailermost. If we can figure this out and write it up as a thorough step-by-step configuration guide, with steps and tests to show that each step was completed successfully, it could be a boon to both projects. I think we are nearly there. I expect the next step is conferring with the nginx and Debian docs and forums.

At least now I can see that 8065 is available, but still filtered:

sudo nmap -p 8065 86.38.217.68
Starting Nmap 7.80 ( https://nmap.org ) at 2024-03-28 19:24 CDT
Nmap scan report for 86.38.217.68
Host is up (0.056s latency).

PORT     STATE    SERVICE
8065/tcp filtered unknown

Nmap done: 1 IP address (1 host up) scanned in 0.79 seconds

I stopped the fail2ban server: I got the same result you did with nmap.

"Filtered" usually means a firewall. There may be a firewall not on your host but in front of it. I would test by temporarily disabling the firewall on your host. Maybe check system logs for something like SELinux or AppArmor or fapolicy issues.
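One way to split "host firewall vs. something in front of the host": first confirm the bind, then poke the local firewall. The ufw and AppArmor commands below assume a Debian/Ubuntu host and are shown commented out since they need root:

```shell
# Confirm the service really is bound on all interfaces:
# a ListenAddress of ":8065" should show *:8065 or 0.0.0.0:8065 here.
ss -tln | grep 8065 || echo "nothing listening on 8065"

# Then rule out the host firewall (ufw assumed; adjust for raw
# iptables/nftables):
#   sudo ufw status verbose
#   sudo ufw allow 8065/tcp      # or, briefly: sudo ufw disable
# And look for AppArmor denials in the kernel log:
#   sudo dmesg | grep -i apparmor | tail
```

If the socket is listening and the host firewall passes 8065 but nmap still reports "filtered" from outside, the block is upstream of the VM (provider firewall, hypervisor networking).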

It seems to me that you should post / check your nginx configs. I have iRedMail and Mattermost running on the same box. It also seems you changed the port from 80 to 8065, so you need to make sure that all of your configs are adjusted appropriately.

Here is my mattermost.conf for NGINX. I don't think you should be accessing port 8065 directly - only via the reverse proxy on 80/443 on a (sub)domain. The iRedMail redirects are handled via location statements in the includes:

    include /etc/nginx/templates/iredadmin.tmpl;
    include /etc/nginx/templates/roundcube.tmpl;
    include /etc/nginx/templates/sogo.tmpl;

Depending on your desired domain set up, you may want to specify a subdomain e.g. chat.yourdomain.com or yourdomain.com/chat and configure that in the mattermost.conf for NGINX.

I’ve tweaked a couple other parameters to fit my needs, so if you copy this you may (or may not) have an issue that you have to hunt down.

upstream backend {
    server 127.0.0.1:8065;
    keepalive 32;
}

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mattermost_cache:50m max_size=3g inactive=120m use_temp_path=off;

server {
    server_name chat.domain.name;

    location ~ /api/v[0-9]+/(users/)?websocket$ {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        client_max_body_size 50M;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Frame-Options SAMEORIGIN;
        proxy_buffers 256 16k;
        proxy_buffer_size 16k;
        client_body_timeout 60;
        send_timeout 300;
        lingering_timeout 5;
        proxy_connect_timeout 90;
        proxy_send_timeout 300;
        proxy_read_timeout 90s;
        proxy_pass http://backend;
    }

    location / {
        client_max_body_size 50M;
        proxy_set_header Connection "";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Frame-Options SAMEORIGIN;
        proxy_buffers 256 16k;
        proxy_buffer_size 16k;
        proxy_read_timeout 600s;
        proxy_cache mattermost_cache;
        proxy_cache_revalidate on;
        proxy_cache_min_uses 2;
        proxy_cache_use_stale timeout;
        proxy_cache_lock on;
        proxy_http_version 1.1;
        proxy_pass http://backend;
    }

    listen 443 ssl; # managed by Certbot
    listen [::]:443 ssl;
    http2 on;
    ssl_certificate /etc/letsencrypt/live/chat.domain.name/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/chat.domain.name/privkey.pem; # managed by Certbot
    # include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    ssl_session_timeout 1d;

    # Enable TLS versions (TLSv1.3 is required for the upcoming HTTP/3 QUIC).
    ssl_protocols TLSv1.2 TLSv1.3;

    # Enable TLSv1.3's 0-RTT. Use $ssl_early_data when reverse proxying to
    # prevent replay attacks.
    #
    # @see: https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_early_data
    ssl_early_data on;

    ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES256-SHA;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:50m;
    # HSTS (ngx_http_headers_module is required) (15768000 seconds = six months)
    add_header Strict-Transport-Security max-age=15768000;
    # OCSP Stapling ---
    # fetch OCSP records from URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;
}


server {
    if ($host = chat.domain.name) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    listen 80;
#    listen [::]:443;
    server_name chat.domain.name;
    return 404; # managed by Certbot

}