First, the agents were able to uncover new vulnerabilities in a test environment, but that doesn't mean they can find all kinds of vulnerabilities in all kinds of environments. In the simulations the researchers ran, the AI agents were essentially shooting fish in a barrel. These may have been new species of fish, but they knew, in general, what a fish looked like. "We haven't found any evidence that these agents can find new types of vulnerabilities," says Kang.
LLMs can find new uses for common vulnerabilities
Instead, the agents found new instances of very common types of vulnerabilities, such as SQL injection. "Large language models, though advanced, are not yet capable of fully understanding or navigating complex environments autonomously without significant human oversight," says Ben Gross, security researcher at cybersecurity firm JFrog.
Nor was there much diversity in the vulnerabilities tested, Gross says: they were primarily web-based and could be exploited easily because of their simplicity.
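SQL injection, the class of flaw named above, is simple to illustrate. The sketch below is not from the researchers' work; it is a generic, minimal example using an in-memory SQLite database, showing why splicing user input into a query string is exploitable while a parameterized query is not.

```python
import sqlite3

# Throwaway in-memory database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_vulnerable(name):
    # UNSAFE: user input is spliced directly into the SQL string,
    # so input can change the query's logic.
    query = f"SELECT name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # SAFE: a parameterized query treats the input as data, not SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic payload that makes the WHERE clause always true:
payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # returns every row: [('alice',)]
print(find_user_safe(payload))        # returns []; no user has that literal name
```

The point is that the vulnerability class itself is decades old and well understood; what the agents did was find fresh instances of it in new targets, not invent a new kind of attack.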