Mattermost v7.9.1 team-edition helm deployment issues

This list should be quite comprehensive and you’ll see that there’s a scheme behind the environment variable names:

https://docs.mattermost.com/configure/configuration-settings.html

If you want to override the configuration setting TeamSettings.EnableCustomBrand in your config.json with an environment variable, its name is MM_TEAMSETTINGS_ENABLECUSTOMBRAND.
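The naming scheme can be sketched in shell (a toy illustration, not something shipped by Mattermost): prefix with MM_, uppercase the whole key, and replace the dot between section and setting with an underscore:

```shell
# Toy sketch of the env var naming scheme: MM_ + section + setting,
# uppercased, with the dot replaced by an underscore.
key="TeamSettings.EnableCustomBrand"
env_name="MM_$(echo "$key" | tr '[:lower:]' '[:upper:]' | tr '.' '_')"
echo "$env_name"   # -> MM_TEAMSETTINGS_ENABLECUSTOMBRAND
```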

That is a known problem with kubectl.

Try chown 1001:1001 -R plugins. Replace 1001 with whatever user the other folders use.

By default, anything created during the deployment for plugins is owned by root.
Short of a root login, there is no way to fix this issue. I will assume from this that the team-edition deployment has disabled plugins as part of version restrictions.

Thanks everyone who has tried to help

My last post on the subject. Enterprise Edition, even when installed without a license, still has the correct permissions and deploys the default plugins.

mattermost@mattermost-ent-mattermost-enterprise-edition-678857cdcf-jhf7n:~$ ls -l client/ | grep plugin
drwxrwxrwx 8 root root 4096 Mar 21 23:13 plugins
mattermost@mattermost-ent-mattermost-enterprise-edition-678857cdcf-jhf7n:~$ cd client/plugins/
mattermost@mattermost-ent-mattermost-enterprise-edition-678857cdcf-jhf7n:~/client/plugins$ ls
com.mattermost.calls com.mattermost.nps com.mattermost.plugin-channel-export focalboard jitsi playbooks
mattermost@mattermost-ent-mattermost-enterprise-edition-678857cdcf-jhf7n:~/client/plugins

So it appears to be a code issue and not much I can do about that.

So you’re saying that the enterprise build has no problem with the plugins, but the team edition does? I’m not sure what you meant by “still has the correct permissions”.

I am suggesting it’s not a license issue, just some code. Team Edition does complain about the lack of a license.

Back door fix:
Make sure you have SSH access to the worker nodes in the cluster. I had not added an RSA key to my EKS deployment and had to add one to get things working.

As the deployment spins up, run kubectl get pod -o wide and note the node the pod is running on.
Log into that node.
Run sudo docker ps | grep edition to get the Docker container ID.

Run docker exec -u 0 instanceID chmod 777 /mattermost/client/plugins

Add an S3 bucket in file storage, then restart the pod; this allows the default plugins (including apps) to deploy:

kubectl rollout restart deployment/mattermost-server-mattermost-team-edition -n mattermost
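Put together, the steps above could be scripted roughly like this (a dry-run sketch that only prints the commands rather than executing them; namespace and deployment name are taken from this thread, and the container ID is a placeholder you fill in from the docker ps output):

```shell
#!/bin/sh
# Dry-run sketch: prints each command from the manual fix instead of running it.
NS=mattermost
DEPLOY=mattermost-server-mattermost-team-edition
CONTAINER_ID='instanceID'   # placeholder: take it from the docker ps output

echo "kubectl get pod -n $NS -o wide"   # find the node the pod runs on
echo "sudo docker ps | grep edition"    # on that node: find the container
echo "docker exec -u 0 $CONTAINER_ID chmod 777 /mattermost/client/plugins"
echo "kubectl rollout restart deployment/$DEPLOY -n $NS"
```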

I don’t know if it will help because my English is not good.

Env:
on-prem, k8s v1.24.11
Mattermost Team Edition v7.9.1, installed via Helm chart
storage uses a MinIO bucket.

I installed based on this guide:
https://docs.mattermost.com/install/installing-team-edition-helm-chart.html

My problem was also a permission problem when installing the plugin.

I added this env to values.yaml:

config:
  MM_FILESETTINGS_AMAZONS3SSL: "false"

You can also set this in Mattermost under System Console > File Storage > Enable Secure Amazon S3 Connections: false

Hi @new2001yy and welcome to the Mattermost forums!

Are you maybe using an insecure deployment of MinIO for your S3 backend? That would explain why you have to disable SSL for the S3 backend.

I found a workaround. Add this to your helm values:

extraInitContainers:
  - command:
      - sh
      - '-c'
      - chown -R 2000:2000 /client/plugins
    image: busybox
    name: changeowner
    volumeMounts:
      - mountPath: /client/plugins
        name: mattermost-plugins

This runs an init container which changes the owner of the plugins folder.

Interesting workaround - thanks for sharing!

Thanks! Will take a look at that soon

For the sake of completeness, the same approach can be used to resolve problems fetching profile pictures etc. from the data directory:

extraInitContainers:
  - command:
      - sh
      - '-c'
      - chown -R 2000:2000 /client/plugins
    image: busybox
    name: changeowner-plugins
    volumeMounts:
      - mountPath: /client/plugins
        name: mattermost-plugins
  - command:
      - sh
      - '-c'
      - chown -R 2000:2000 /data
    image: busybox
    name: changeowner-data
    volumeMounts:
      - mountPath: /data
        name: mattermost-data

@agnivade is that how it’s supposed to be or are we overseeing something here?

Sorry, there seems to be various things being discussed here. How can I help?

The main problem seems to be that the filesystem permissions in the application containers, especially /client/plugins, /plugins and obviously also some parts below /data, are not writeable by the pods. Someone came up with the idea of starting an init container to fix that during initialization, but I don’t believe that this is the only way to fix permission issues. I’m not sure what needs to be done for the permissions to be set correctly in such a deployment scenario, and since you’re experienced with the k8s environment, I pinged you for help.
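One alternative worth checking (an assumption on my side, since it depends on the chart actually exposing a pod-level securityContext in its values) would be to let Kubernetes set group ownership on the mounted volumes via fsGroup instead of chown-ing them in an init container:

```yaml
# Pod-level securityContext (sketch): with fsGroup set, Kubernetes chowns
# supported volumes to this group at mount time, so a mattermost process
# running as GID 2000 can write to them without an extra init container.
securityContext:
  fsGroup: 2000
```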

Ah well, you got the wrong person here :stuck_out_tongue: Let me bring this to the attention of our cloud platform team.


Sorry man - and thanks for forwarding.

Hello,

Thank you @michaelkoelle for the workaround. Unfortunately it didn’t work for me; my init container does not have permission to modify the ownership of the /client/plugins folder. I hope this issue will be resolved by Mattermost. I have encountered the problem since I started the deployment at my company with version 7.7.0.

Hello, any news about this problem?