Unrecognized error message in server log

Recently upgraded from 6.0.4 to 7.2 and everything went smoothly as far as I can tell. I’ve been toying around with Calls but haven’t gotten it working yet. I noticed an error message I’ve never seen before and can’t find any information on it when searching the web:

{"timestamp":"2022-08-19 12:16:39.117 +02:00","level":"error","msg":"callback failed: call state is missing from channel state","caller":"app/plugin_api.go:975","plugin_id":"com.mattermost.calls","origin":"main.(*Plugin).handleLeave websocket.go:372"}

Is this serious? What does it mean?

Also, the Calls configuration states that the RTC server will listen on port 8443, but I can’t see anything listening on that port when I check. Is that a problem?

Hi and welcome to the Mattermost forums!

If nothing is listening on port 8443, then the plugin either did not start correctly or your deployment model prevents the RTC server from starting.
Can you describe your environment a bit? What deployment model are you using? Binary? Docker?
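
To double-check from the OS side, something like this should show whether anything is bound to that UDP port (assuming ss is available; lsof works just as well):

# ss -lunp | grep 8443

If that prints nothing, the RTC server really isn’t up.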

In the Mattermost logs, directly after startup where the loaded plugins are listed, you should see lines like these:

{"timestamp":"2022-08-19 05:16:18.868 +02:00","level":"info","msg":"got public IP address","caller":"app/plugin_api.go:972","plugin_id":"com.mattermost.calls","origin":"main.(*logger).Info log.go:84","addr":"your-public-ip"}
{"timestamp":"2022-08-19 05:16:18.869 +02:00","level":"info","msg":"rtc: server is listening on udp 8443","caller":"app/plugin_api.go:972","plugin_id":"com.mattermost.calls","origin":"main.(*logger).Info log.go:84"}
{"timestamp":"2022-08-19 05:16:18.869 +02:00","level":"info","msg":"rtc: server is listening on udp 8443","caller":"app/plugin_api.go:972","plugin_id":"com.mattermost.calls","origin":"main.(*logger).Info log.go:84"}

One rtcd process is being started per CPU core.
Can you see messages like these in the logs?

The log line you quoted can be disregarded for now, at least until we’re sure that the plugin is working properly.
When you disable and re-enable the plugin, what logs are printed to the console/log file?
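
If this is a binary deployment with the default paths, you can follow the relevant log lines live while you toggle the plugin, for example (the log location is an assumption based on a standard binary install):

# tail -f /opt/mattermost/logs/mattermost.log | grep com.mattermost.calls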

I’m using a binary deployment, single server on RHEL 8.

These lines appear in the log:

{"timestamp":"2022-08-19 12:42:19.662 +02:00","level":"info","msg":"rtc: server is listening on udp 8443","caller":"app/plugin_api.go:972","plugin_id":"com.mattermost.calls","origin":"main.(*logger).Info lo
g.go:84"}                                                                                                                                                                                                     {"timestamp":"2022-08-19 12:42:19.662 +02:00","level":"info","msg":"rtc: server is listening on udp 8443","caller":"app/plugin_api.go:972","plugin_id":"com.mattermost.calls","origin":"main.(*logger).Info lo
g.go:84"}

I don’t have a line regarding the public IP, BUT I did set the IP at “ICE Host Override” in the configuration.

When disabling/enabling the plugin the log states the following:

{"timestamp":"2022-08-19 12:48:11.622 +02:00","level":"warn","msg":"Unrecognized config permissions tag value.","caller":"api4/config.go:431","tag_value":"sysconsole_write_*_read"}
{"timestamp":"2022-08-19 12:48:11.634 +02:00","level":"info","msg":"rtc: server was shutdown","caller":"app/plugin_api.go:972","plugin_id":"com.mattermost.calls","origin":"main.(*logger).Info log.go:84"}
{"timestamp":"2022-08-19 12:48:11.642 +02:00","level":"info","msg":"plugin process exited","caller":"plugin/hclog_adapter.go:61","plugin_id":"com.mattermost.calls","wrapped_extras":"pathplugins/com.mattermost.calls/server/dist/plugin-linux-amd64pid59095"}

{"timestamp":"2022-08-19 12:48:47.299 +02:00","level":"warn","msg":"Unrecognized config permissions tag value.","caller":"api4/config.go:431","tag_value":"sysconsole_write_*_read"}
{"timestamp":"2022-08-19 12:48:47.408 +02:00","level":"info","msg":"rtc: server is listening on udp 8443","caller":"app/plugin_api.go:972","plugin_id":"com.mattermost.calls","origin":"main.(*logger).Info log.go:84"}
{"timestamp":"2022-08-19 12:48:47.410 +02:00","level":"info","msg":"rtc: server is listening on udp 8443","caller":"app/plugin_api.go:972","plugin_id":"com.mattermost.calls","origin":"main.(*logger).Info log.go:84"}

Yah, there you go - rtcd is listening on udp:8443 on this machine.
You should be able to see it with the command:

# lsof -n -i :8443
COMMAND    PID       USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
plugin-li 1644 mattermost   12u  IPv4  72756      0t0  UDP *:8443
plugin-li 1644 mattermost   14u  IPv4  72757      0t0  UDP *:8443
plugin-li 1644 mattermost   16u  IPv4  72758      0t0  UDP *:8443
plugin-li 1644 mattermost   18u  IPv4  72759      0t0  UDP *:8443
plugin-li 1644 mattermost   20u  IPv4  72760      0t0  UDP *:8443
plugin-li 1644 mattermost   22u  IPv4  72761      0t0  UDP *:8443

There will be one process per CPU core in your system.
Since this is a UDP stream, connections to it need to be allowed directly on the server (maybe something like ufw is running?) and on any firewalls in front of your server. You cannot tunnel the call stream through a reverse proxy, for example. So from a networking point of view, please make sure to allow tcp/443 to your Mattermost server (assuming you’re running an SSL-enabled deployment, which is also a requirement for Calls to work in the web app) and udp/8443 from all possible call clients.
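
If firewalld or ufw is what’s running on the host, opening those ports would look roughly like this (a sketch only, adjust to your actual zones and ruleset):

# firewall-cmd --permanent --add-port=8443/udp
# firewall-cmd --permanent --add-service=https
# firewall-cmd --reload

or with ufw:

# ufw allow 8443/udp
# ufw allow 443/tcp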

Ok, so when I run the lsof-command like you did I get this:

COMMAND     PID       USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
plugin-li 59121 mattermost   12u  IPv4 574618      0t0  UDP *:pcsync-https
plugin-li 59121 mattermost   14u  IPv4 574619      0t0  UDP *:pcsync-https

Looks right except for the name pcsync-https, but I guess that’s just a name assigned to port 8443 by the OS.

I’ll check with my comms team if the firewall is blocking udp/8443!

Yah, the name resolution for service names can differ; I bet that in your /etc/services file you will see an entry resolving 8443 to pcsync-https.
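
You can confirm that mapping, or skip it entirely, with something like this (the output shown is just what I’d expect from the stock IANA services file):

# grep -w 8443 /etc/services
pcsync-https    8443/tcp                # PCsync HTTPS
pcsync-https    8443/udp                # PCsync HTTPS

# lsof -n -P -i :8443

The -P flag tells lsof to print numeric ports instead of service names.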

But yes, this is the correct reading, so it’s most likely only a problem with the connection to your server.
In such a single-server scenario, the ICE Host Override setting should be set to the domain of your Mattermost application server. So if you access your Mattermost installation at https://mattermost.mycompany.com, make sure that ICE Host Override is set to mattermost.mycompany.com and restart the plugin (disable/enable).
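
If you prefer the command line over the System Console, the restart can also be done with mmctl (assuming mmctl is set up for your instance, e.g. in local mode):

# mmctl plugin disable com.mattermost.calls
# mmctl plugin enable com.mattermost.calls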

Ok, so I’m talking with comms and the server is listening and accepting connections fine on 8443. When I try calling a colleague we connect just fine but it drops after about 10 seconds. I turned on debug logging in the mattermost server and got this when performing a call:

 {"timestamp":"2022-08-19 14:36:40.867 +02:00","level":"debug","msg":"leave message","caller":"app/plugin_api.go:969","plugin_id":"com.mattermost.calls","origin":"main.(*Plugin).WebSocketMessageHasBeenPosted
 websocket.go:654","userID":"gh4ug4ys57d6ufai8op7mjkszy","connID":"59h6dnt983ffmeg6uxy4ezjjda"}                                                                                                               {"timestamp":"2022-08-19 14:36:40.867 +02:00","level":"debug","msg":"websocket.NextReader: client side closed socket","caller":"app/web_conn.go:824","user_id":"gh4ug4ys57d6ufai8op7mjkszy"}
{"timestamp":"2022-08-19 14:36:40.867 +02:00","level":"debug","msg":"handleLeave","caller":"app/plugin_api.go:969","plugin_id":"com.mattermost.calls","origin":"main.(*Plugin).handleLeave websocket.go:337","
userID":"gh4ug4ys57d6ufai8op7mjkszy","connID":"59h6dnt983ffmeg6uxy4ezjjda","channelID":"ho336fyyjpb7jd483pnk4o1mkr"}
{"timestamp":"2022-08-19 14:36:40.868 +02:00","level":"debug","msg":"user left call","caller":"app/plugin_api.go:969","plugin_id":"com.mattermost.calls","origin":"main.(*Plugin).handleLeave websocket.go:344
","userID":"gh4ug4ys57d6ufai8op7mjkszy","connID":"59h6dnt983ffmeg6uxy4ezjjda","channelID":"ho336fyyjpb7jd483pnk4o1mkr"}
{"timestamp":"2022-08-19 14:36:40.868 +02:00","level":"debug","msg":"Received HTTP request","caller":"web/handlers.go:156","method":"GET","url":"/api/v4/websocket","request_id":"561sa9bxbifwzgnnemqifmz6fc"}
{"timestamp":"2022-08-19 14:36:40.869 +02:00","level":"debug","msg":"closing ws channel for session","caller":"app/plugin_api.go:969","plugin_id":"com.mattermost.calls","origin":"main.(*Plugin).OnWebSocketD
isconnect websocket.go:271","userID":"gh4ug4ys57d6ufai8op7mjkszy","connID":"59h6dnt983ffmeg6uxy4ezjjda","channelID":"ho336fyyjpb7jd483pnk4o1mkr"}                                                             {"timestamp":"2022-08-19 14:36:40.871 +02:00","level":"debug","msg":"closeRTCSession","caller":"app/plugin_api.go:969","plugin_id":"com.mattermost.calls","origin":"main.(*Plugin).closeRTCSession websocket.g
o:699","userID":"gh4ug4ys57d6ufai8op7mjkszy","connID":"59h6dnt983ffmeg6uxy4ezjjda","channelID":"ho336fyyjpb7jd483pnk4o1mkr"}
{"timestamp":"2022-08-19 14:36:40.871 +02:00","level":"debug","msg":"peer connection closed","caller":"app/plugin_api.go:969","plugin_id":"com.mattermost.calls","origin":"main.(*logger).Debug log.go:80","sessionID":"59h6dnt983ffmeg6uxy4ezjjda"}
{"timestamp":"2022-08-19 14:36:40.871 +02:00","level":"debug","msg":"ice closed","caller":"app/plugin_api.go:969","plugin_id":"com.mattermost.calls","origin":"main.(*logger).Debug log.go:80","sessionID":"59
h6dnt983ffmeg6uxy4ezjjda"}                                                                                                                                                                                    {"timestamp":"2022-08-19 14:36:40.872 +02:00","level":"debug","msg":"removing session from state","caller":"app/plugin_api.go:969","plugin_id":"com.mattermost.calls","origin":"main.(*Plugin).removeSession s
ession.go:221","userID":"gh4ug4ys57d6ufai8op7mjkszy","connID":"59h6dnt983ffmeg6uxy4ezjjda","originalConnID":"59h6dnt983ffmeg6uxy4ezjjda"}                                                                     {"timestamp":"2022-08-19 14:36:40.883 +02:00","level":"debug","msg":"removing session from state","caller":"app/plugin_api.go:969","plugin_id":"com.mattermost.calls","origin":"main.(*Plugin).removeSession s
ession.go:221","userID":"gh4ug4ys57d6ufai8op7mjkszy","connID":"59h6dnt983ffmeg6uxy4ezjjda","originalConnID":"59h6dnt983ffmeg6uxy4ezjjda"}
{"timestamp":"2022-08-19 14:36:40.884 +02:00","level":"error","msg":"callback failed: call state is missing from channel state","caller":"app/plugin_api.go:975","plugin_id":"com.mattermost.calls","origin":"
main.(*Plugin).handleLeave websocket.go:372"}

Anything obviously wrong here?

This usually happens when the websocket connection drops (you can think of the websocket as the control channel for the call, while the UDP connection is the data channel).
Is there a load balancer/reverse proxy in front of your Mattermost installation? If so, please make sure it does not terminate websockets prematurely. I experienced such call drops when I first introduced Calls on my installation and could fix the problem by increasing the tunnel timeouts in the reverse proxy configuration.
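
For reference, on an nginx reverse proxy the relevant knobs are the proxy timeouts on the websocket location. A minimal sketch, assuming an upstream named backend as in the usual Mattermost nginx example (adapt values and names to your own config):

location ~ /api/v4/websocket {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 600s;
    proxy_send_timeout 600s;
    proxy_pass http://backend;
}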

There is a load balancer in the Mattermost installation, but I’ve configured Calls to bypass the proxied URL, so I don’t think it should interfere. However, there is an active firewalld running on the installation’s server, but I’ve verified with netcat that it doesn’t block our Calls port. Maybe I should monitor the firewalld logs to see if something happens when initiating a Calls session…

I’m not sure if you will see socket timeouts in the firewalld logs, since those happen inside an established connection, but maybe it will help narrow down the issue.
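
If you do want to watch firewalld, you can turn on logging of denied packets and follow the kernel log while starting a call (this can get noisy, so switch it back off afterwards with --set-log-denied=off):

# firewall-cmd --set-log-denied=all
# journalctl -kf | grep -iE 'reject|drop'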

For a call to work, all participants need to be able to establish a non-idling connection to https://mattermost.yourdomain.com as well as at least one RTC stream from their client to mattermost.yourdomain.com on port udp/8443.
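
A quick way to test the UDP path from a client machine is netcat, for example (flags may vary slightly between netcat variants, and since UDP is connectionless a “succeeded” result only means nothing actively rejected the packet):

# nc -vzu mattermost.yourdomain.com 8443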

The problem is solved now. Firewalld was blocking outgoing connections; once we disabled firewalld, Calls started working perfectly.

Thank you for your time!

Awesome, thanks for confirming that you’re up and running now :slight_smile: