It was bound to happen sooner or later. For what looks like the first time ever, bug hunters used ChatGPT in a successful Pwn2Own exploit, helping researchers hijack software used in industrial applications and win $20,000.
To be clear: the AI did not find the vulnerability nor write and run code to exploit a specific flaw. But its successful usage in the bug-reporting contest could be a harbinger of hacks to come.
“This is not interpreting the Rosetta Stone,” Dustin Childs, head of threat awareness at Trend Micro’s Zero Day Initiative (ZDI), told The Register.
“It is a first step towards something more. We don’t think AI is the future of hacking, but it could certainly turn into a great assistant for when a researcher comes up against a piece of code they aren’t familiar with or a defense they weren’t expecting.”
At last week’s contest in Miami, Florida, Claroty’s Team82 asked ChatGPT to help them develop a remote code execution attack against Softing edgeAggregator Siemens — software that provides connectivity at the interface between OT (operational technology) and IT in industrial applications.
The technical details are limited due to the nature of Pwn2Own: people find security holes, demonstrate them on stage, privately disclose how they did it to the developer or vendor, claim a prize, and everyone waits for the details and patches to be released when ready.
In the meantime, we can say this: the humans involved in the exploit, security researchers Noam Moshe and Uri Katz, identified a vulnerability in an OPC Unified Architecture (OPC UA) client, presumably within the edgeAggregator industrial software suite. OPC UA is a machine-to-machine communication protocol used in industrial automation.
After finding the bug, the researchers asked ChatGPT to develop a backend module for an OPC UA server to test their remote execution exploit. It would seem this module was needed to build what was essentially a malicious server that attacked the vulnerable client through the hole the duo found.
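To make the shape of that attack concrete, here is a minimal, purely schematic sketch of the general pattern: a hostile server that waits for a vulnerable client to connect and then answers with attacker-crafted bytes. This is not the Team82 exploit or real OPC UA wire format; it uses plain TCP sockets and a made-up payload, and the function and constant names are hypothetical.

```python
# Schematic sketch only: a hostile server that attacks a *connecting client*,
# the general pattern behind client-side exploits like the one described above.
# Plain TCP and a fabricated payload stand in for the real OPC UA protocol.
import socket
import threading

# Hypothetical attacker-controlled reply; a real exploit would send
# carefully malformed protocol messages instead.
MALFORMED_REPLY = b"ACK" + b"\x41" * 64

def hostile_server(host: str = "127.0.0.1", port: int = 0) -> int:
    """Start a one-shot hostile server; return the port it listens on."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port 0 lets the OS pick a free port
    srv.listen(1)
    actual_port = srv.getsockname()[1]

    def serve() -> None:
        conn, _ = srv.accept()
        conn.recv(4096)             # read the client's hello, then ignore it
        conn.sendall(MALFORMED_REPLY)  # answer with crafted bytes
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return actual_port

if __name__ == "__main__":
    port = hostile_server()
    cli = socket.create_connection(("127.0.0.1", port))
    cli.sendall(b"HELLO")
    print(cli.recv(4096)[:3])  # b'ACK'
```

In the real attack, the "client" would be the vulnerable OPC UA component, and the crafted reply would trigger the flaw the researchers found rather than just delivering junk bytes.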
“Because we had to make a lot of modifications for our exploitation technique to work, we had to make many changes to existing open source OPC UA projects,” Moshe and Katz told The Register.
“Since we were not familiar with the specific server SDK implementation, we used ChatGPT to expedite the process by helping us use and modify the existing server.”
The team provided the AI with instructions, and admitted they had to go through a few rounds of corrections and “minor” alterations before it came up with a workable backend server module.
But overall, we’re told, the chatbot proved a useful tool that saved them time, filling in knowledge gaps (such as how to write a backend module) and freeing the humans to focus on implementing the exploit.
“ChatGPT has the capacity to be a great tool for accelerating the coding process,” the duo said, adding that it boosted their efficiency.
“It’s like doing many rounds of Google searches for a specific code template, then adding multiple rounds of modifications to the code based on our specific needs, solely by instructing it what we wanted to achieve,” Moshe and Katz said.
According to Childs, this is probably how we’ll see cybercriminals use ChatGPT in real-life attacks against industrial systems.
“Exploiting complex systems is challenging, and often, threat actors aren’t familiar with every aspect of a particular target,” he said. Childs added that he doesn’t expect to see AI-generated tools writing exploits, “but providing that last piece of the puzzle needed for success.”
And he’s not concerned about AI taking over Pwn2Own. At least not yet.
“That’s still quite a way off,” Childs said. “However, the use of ChatGPT here shows how AI can help to turn a vulnerability into an exploit – provided the researcher knows how to ask the right questions and ignore the wrong answers. It’s an interesting development in the competition’s history, and we look forward to seeing where it may lead.” ®