At Mattermost, we are developing a sample data server to prototype LLM use cases where responses are personalized based on context from a self-hosted, open source collaboration platform (1:1 and group messaging, bot integration, audio calls and screen sharing, content sharing, automations and integrations, enterprise administration and management, etc.).
The intention is to augment requests from users to LLMs with an embedding that captures user context: user identity, channel memberships, channel discussions and interactions (threads, emoji reactions, etc.), and other data and metadata. Initial scenarios we're envisioning include:
1 - Personalized Meeting Summarization - Recordings of non-confidential team meetings where LLM summarization returns different sets of summary bullet points for different users. For example, the Director of Product Engineering may get a more technical summary focused on internal development efforts, whereas the Head of Content might get a more generalized summary highlighting potential content topics the company may want to share with external audiences.
2 - Personalized Channel Summarization - We have a number of channels pulling in public data, and the intention is for summaries to be personalized based on user context. For example, we have a channel where an automation pushes in Hacker News articles relevant to different topics, and different users asking for summaries should get different results.
Future scenarios may include automatically summarizing channel history when users join or return to channels, alerting users to discussions relevant to their priorities, and eventually personalized summarization in highly confidential, real-time scenarios such as security incident response and remediation (e.g. a controlled workflow with automated, contextual summarization delivered to security engineering, customer contact, and public relations contact points as information develops).
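The augmentation flow described above can be sketched roughly as follows. This is a minimal illustration, not Mattermost or OpenOps code; all names (`UserContext`, `build_augmented_prompt`, and the example fields) are hypothetical, and a real implementation would retrieve context via embeddings and the platform's APIs rather than hard-coded data.

```python
from dataclasses import dataclass


@dataclass
class UserContext:
    """Hypothetical per-user context assembled from platform data."""
    user_id: str
    role: str
    channels: list[str]
    recent_topics: list[str]


def build_augmented_prompt(request: str, ctx: UserContext) -> str:
    """Prepend user context to a request so an LLM can personalize its response."""
    context_block = (
        f"User role: {ctx.role}\n"
        f"Channel memberships: {', '.join(ctx.channels)}\n"
        f"Recent discussion topics: {', '.join(ctx.recent_topics)}\n"
    )
    return f"[USER CONTEXT]\n{context_block}[REQUEST]\n{request}"


# The same summarization request yields different prompts for different users.
engineer = UserContext(
    user_id="u1",
    role="Director of Product Engineering",
    channels=["dev-internal", "hn-feed"],
    recent_topics=["release pipeline", "LLM infrastructure"],
)
prompt = build_augmented_prompt("Summarize today's team meeting.", engineer)
```

A user with a different role and channel set would produce a different context block for the identical request, which is what lets the LLM return role-appropriate summaries.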
At Mattermost, our expertise is in the application layer of generative AI, and we're learning as we go while working with different LLMs.
We would love to run our concepts, data, and early implementations by organizations with LLM platforms (e.g. OpenAI, Azure AI, AWS Bedrock, Meta/Llama, Anthropic) to understand how aligned (or misaligned) we are with the vision and roadmap of different backends, and what approaches we might explore to make the fastest progress.
There are many workplace messaging platforms, such as Teams, Slack, Discord, and Google Chat, that are probably exploring (or will soon be exploring) the same things we're building at Mattermost.
The approach we're taking in the ecosystem is to offer an enduring open source, self-hosted option for the application layer (Mattermost), along with a bot framework and future AI personalization framework (OpenOps), for organizations that need to maintain full control of data and infrastructure in their Gen AI workflows.