Exploits and possible LLM attacks plus M365 Copilot
There is a German idiom called “in between the years”, meaning the timeframe from Christmas to New Year’s eve. It is a time of reflection and of course family. This year it was also the time of the 37th Chaos Communication Congress (37C3) in Hamburg. The 37C3 was a hybrid event, meaning it was a physical event in Hamburg and a virtual event on the internet.
Real-world exploits and mitigations in LLM applications
One of the talks that I found very interesting was the talk about Real-world exploits and mitigations in LLM applications (37c3). The talk was about a new type of attacks that are based on LLMs by Johann Rehberger. Johann is currently a Red Team Director at Electronic Arts and worked as a Principal Security Engineer Manager for years at Microsoft. Red Teams are tasked with simulating attacks on a company’s systems to test the effectiveness of its security measures. So it’s very interessting so see his thoughts on this topic.
Content of the talk
The talk was about a new type of attacks that are based on LLMs. LLM stands for “Large Language Models” and is a term that is used to describe models like GPT-3, which is a language model that is trained on a large corpus of text data. The talk was about how these models can be used to generate exploits for real-world applications. This can be as easy as just asking the model to give away information it shouldn’t but can get a bit more tricky if you want to generate payloads that exfiltrate data from a chat or user.
Johann shows hoe easy it was in 2023 to just add some additional text to a html page and then use a LLM to generate a summary of this page and inject your paylod into the instruction set of the LLM. He documented how the different LLMs adopted to different payloads but one is left behind with a slight reminder of the very early years of the internet, where passwords were just hidden in the source code of the website.
I already tried a couple of his ideas with Microsoft365 Copilot, but this is a different story. I will write about this in a different post. As I wasn’t able to reproduce his findings with the current version of M365 Copilot, I will not go into detail here.
Nevertheless I think it is very important to be aware of these new types of attacks and to think about how to mitigate them. I think that the talk is a great starting point for this discussion and I hope that it will lead to more awareness in this area. I also hope that it will lead to better security measures for LLM applications and again the needed awareness for the developers and the companies that use these models.