Is there a way for Mattermost to display streaming text as it receives it from bots? (LLMs, by SSE)


LLMs such as GPT-* send tokens as they produce them (usually via Server-Sent Events), and generation is usually quite slow. Reading the message as it is being generated (instead of waiting for it to be entirely generated before being able to start reading it) is a very important UX feature in practice, especially for long responses that routinely take many seconds to complete. That's why all the major clients of these LLMs (ChatGPT, Bing, Perplexity.ai, Phind, open source clients, etc.) have this feature.
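
For context, consuming such a stream on the bot side is straightforward. Here is a minimal sketch in Python using `requests`; the endpoint URL and JSON payload are illustrative assumptions, not any specific vendor's API:

```python
import json
import requests

def stream_tokens(url: str, api_key: str, prompt: str):
    """Yield tokens from an SSE endpoint as they arrive (illustrative endpoint)."""
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt, "stream": True},  # assumed request shape
        stream=True,  # keep the connection open and read as data arrives
    )
    resp.raise_for_status()
    for line in resp.iter_lines():
        # SSE events arrive as "data: <payload>" lines
        if not line or not line.startswith(b"data: "):
            continue
        payload = line[len(b"data: "):]
        if payload == b"[DONE]":  # common end-of-stream sentinel
            break
        yield json.loads(payload)
```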

It seems that there is a window of opportunity where Mattermost could position itself as an AI-friendly chat framework. If this feature is not currently possible, it might be wise to prioritize it. I, for one, am considering basing my next AI startup on Mattermost, but the absence/impossibility of this feature would be a dealbreaker.

Hi @vibl and welcome to the Mattermost forums!

There’s no such feature currently. Given the way posts are handled by the system, this would only be possible if your bot sends the initial text and then a series of edits afterwards to append the additional text as it arrives. So technically you could do that, but it’s not optimal. You might want to raise this idea at Mattermost’s ProductBoard - if it gets enough votes, chances are high that we will see this feature in the future.
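
The create-then-edit workaround could look roughly like the sketch below. It uses the Mattermost REST API v4 endpoints for creating (`POST /api/v4/posts`) and patching (`PUT /api/v4/posts/{post_id}/patch`) a post; the server URL, bot token, and throttle interval are placeholder assumptions:

```python
import time
import requests

MM_URL = "https://your-mattermost.example.com"  # hypothetical server URL
HEADERS = {"Authorization": "Bearer <bot-access-token>"}  # bot's access token

def stream_to_post(channel_id: str, tokens):
    # 1. Create an (initially empty) post in the target channel.
    post = requests.post(
        f"{MM_URL}/api/v4/posts",
        headers=HEADERS,
        json={"channel_id": channel_id, "message": ""},
    ).json()

    # 2. As tokens arrive, append them and patch the post in place.
    #    Throttling the edits (here every ~0.5 s) avoids hammering the API.
    message, last_edit = "", 0.0
    for token in tokens:
        message += token
        if time.time() - last_edit > 0.5:
            requests.put(
                f"{MM_URL}/api/v4/posts/{post['id']}/patch",
                headers=HEADERS,
                json={"message": message},
            )
            last_edit = time.time()

    # 3. Send one final patch with the complete text.
    requests.put(
        f"{MM_URL}/api/v4/posts/{post['id']}/patch",
        headers=HEADERS,
        json={"message": message},
    )
```

Each patch reaches connected clients as a post edit, so the message appears to grow in place - which is why it works, but also why it's not optimal: every token batch costs a full API round trip and an edit event.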

Thank you @agriesser! I submitted it to the ProductBoard.

By the way, I see no trace or acknowledgment of it in ProductBoard or my emails. Now I’m not sure whether it was really submitted or whether the process was interrupted by my signing up…

@amy.blais can you verify that the request was received?

Yes, I see that the request was submitted in ProductBoard. I don’t see a way to link it here though.


There seems to be some progress on that: the official Mattermost AI plugin supports streaming text (see the video on this page):