Docker with nginx install ending up in a 301 redirect loop

ifb-user@mattermost:~/matterm$ lsof -i :443
COMMAND    PID     USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
rootlessk 1017 ifb-user   11u  IPv4 1569701      0t0  TCP *:https (LISTEN)
rootlessk 1017 ifb-user   14u  IPv6 1569713      0t0  TCP *:https (LISTEN)

and

ifb-user@mattermost:~/matterm$ ps ax | grep 1017
   1017 ?        Ssl    0:01 rootlesskit --net=slirp4netns --mtu=65520 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=builtin --copy-up=/etc --copy-up=/run --propagation=rslave /home/ifb-user/bin/dockerd-rootless.sh

so obviously this doesn’t look like yours… but yes, I am running rootless Docker for user ifb-user…

as far as I remember, no, I didn’t touch the nginx configuration…

OK, I don’t have a rootless setup here to test that, sorry. All I can say is that, according to your nginx logs (mm.access.log), the requests for the default vhost are not hitting the container, and I’m not sure how to debug that further.

Looking at the default nginx config, a request to /robots.txt should immediately return the robots.txt file with an HTTP 200:

https://mattermost.france-bioinformatique.fr/robots.txt

→ This does not work on your server; it also redirects in an endless loop. So something is really wrong with your setup, and I think we need to continue debugging your rootless configuration.
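
One way to see the loop from the command line (a hedged example; the exact headers will vary):

curl -sIL --max-redirs 5 https://mattermost.france-bioinformatique.fr/robots.txt
# healthy: a single HTTP/1.1 200 OK with the robots.txt headers
# here: a 301 after every 301, i.e. the endless loop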
Can you describe this setup, please? Can you deploy a vanilla nginx container from Docker Hub, for example, and successfully access files in it when it is running on port 443?
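
Something along these lines would do as a smoke test (a sketch; the container name and host are just examples):

# publish the container’s plain-HTTP port 80 on the host’s 443; TLS isn’t needed for this test
docker run -d --name nginx-test -p 443:80 nginx
curl -s http://<your-host>:443/ | head -n 5   # should print the nginx welcome page
docker rm -f nginx-test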

First, thank you so much for your patience and interest (and I am learning heaps about Docker debugging).

Now, rootless setup:

  • using /home/ifb-user/bin/dockerd-rootless.sh as shown above
  • and running the Docker daemon via ‘systemctl --user’ for a user (ifb-user) who has no password and no sudo rights (roughly the standard rootless install; see the sketch after this list)
  • btw, this was the reason why my previous issue here, #14062, had to be solved using a mapping from the rootless container to ifb-user’s namespace (and not 2000)
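
For reference, the install was roughly the standard rootless procedure from the Docker docs (a sketch from memory, not a verbatim history):

# as ifb-user (no sudo involved)
dockerd-rootless-setuptool.sh install
systemctl --user enable --now docker
export DOCKER_HOST=unix:///run/user/$UID/docker.sock
# run once by an admin, since ifb-user has no sudo:
#   loginctl enable-linger ifb-user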

Nginx: I am not good with Docker, nginx containers, etc…

However, I tried the setup without the nginx container, running nginx as a global service (systemctl) instead and configuring the nginx backend as localhost:8065
→ without luck, I had the same problem
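
For clarity, the global-nginx variant looked roughly like this (a sketch with placeholder names and certificate paths, not my exact config):

server {
    listen 443 ssl;
    server_name mattermost.example.org;
    ssl_certificate     /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;

    location / {
        proxy_pass http://localhost:8065;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # WebSocket upgrade, which Mattermost needs
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}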

and still, I am running other publicly facing VMs with the same setup (rootless Docker + global nginx service) and all works well

So I don’t know what to say, or do:

Can you deploy a vanilla nginx container from Docker Hub, for example, and successfully access files in it when it is running on port 443?

I am not sure I know how to do that… :frowning:
in other words: how do I make a vanilla nginx container talk to the other containers…
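
For what it’s worth, I think the test meant here is something like this (a sketch; all the names are made up):

docker network create testnet
docker run -d --name web --network testnet nginx
docker run -d --name probe --network testnet alpine sleep 3600
# containers on the same user-defined network resolve each other by name
docker exec probe wget -qO- http://web/
docker rm -f web probe; docker network rm testnet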

We could leave it at that, and I’ll try omnibus…

I’ll leave shortly; happy to take this wherever… but I’ll leave it to you. Ta!

I think I will cry :frowning: and leave it there for now.
Following this excellent page, 4 Reasons Why Your Docker Containers Can’t Talk to Each Other by Maxim Orlov,
I can show that the containers do talk to each other:

ifb-user@mattermost:~/matterm$ docker exec matterm-nginx-1 ping matterm-mattermost-1 -c2
PING matterm-mattermost-1 (172.22.0.2): 56 data bytes
64 bytes from 172.22.0.2: seq=0 ttl=64 time=0.095 ms
64 bytes from 172.22.0.2: seq=1 ttl=64 time=0.098 ms

--- matterm-mattermost-1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.095/0.096/0.098 ms
ifb-user@mattermost:~/matterm$ docker exec matterm-mattermost-1 ping matterm-nginx-1 -c2
PING matterm-nginx-1 (172.22.0.4) 56(84) bytes of data.
64 bytes from matterm-nginx-1.mattermost (172.22.0.4): icmp_seq=1 ttl=64 time=0.039 ms
64 bytes from matterm-nginx-1.mattermost (172.22.0.4): icmp_seq=2 ttl=64 time=0.071 ms

--- matterm-nginx-1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 33ms
rtt min/avg/max/mdev = 0.039/0.055/0.071/0.016 ms
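
A step beyond ping would be to test the actual HTTP path rather than just ICMP, e.g. (assuming Mattermost listens on its default port 8065 and curl is available in the nginx image):

docker exec matterm-nginx-1 curl -sI http://matterm-mattermost-1:8065/
# any HTTP response here would show the app itself is reachable from nginx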

so, this is above my skill level, and actually beyond even my most basic comprehension…
I’ll see if I am luckier with omnibus.

Who knows, someone might have a bright idea one day…

Hello: indeed, I did find the culprit. The HAProxy behind which the machine sat was incorrectly configured. It took a long time (and lots of effort) to figure that out, and it’s a component I have no control over.
Thanks again.
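
For anyone else chasing the same symptom: a classic way a front-end HAProxy produces such a loop is terminating TLS but forwarding plain HTTP without telling the backend, which then keeps redirecting to https. The usual fix looks something like this (a sketch, not the actual configuration involved here):

frontend fe_https
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    # tell the backend that the original request was already HTTPS
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    default_backend be_mattermost

backend be_mattermost
    server mm 192.0.2.10:443 ssl verify none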

Hi froggypaule, and thanks for providing us with this info!

I assumed it must be something outside our control here, and I wonder how we could have identified that earlier… But still, thanks for the follow-up!