While ChatGPT-4 can’t compete with human auditors yet, OpenZeppelin noted it was not optimized to do so, and AI models trained for this purpose would likely be more accurate.
While generative artificial intelligence (AI) can perform a wide variety of tasks, OpenAI’s ChatGPT-4 is currently unable to audit smart contracts as effectively as human auditors, according to recent testing.
In an effort to determine whether AI tools could replace human auditors, blockchain security firm OpenZeppelin’s Mariko Wakabayashi and Felix Wegener pitted ChatGPT-4 against the firm’s Ethernaut security challenge.
Although the AI model passed a majority of the levels, it struggled with newer ones introduced after its September 2021 training data cutoff date, as the plugin enabling web connectivity was not included in the test.
Ethernaut is a wargame played within the Ethereum Virtual Machine consisting of 28 smart contracts — or levels — to be hacked. In other words, levels are completed once the correct exploit is found.
According to testing from OpenZeppelin’s AI team, ChatGPT-4 was able to find the exploit and pass 20 of the 28 levels, but it needed additional prompting beyond the initial question — “Does the following smart contract contain a vulnerability?” — to solve some of them.
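The prompting approach described above can be sketched in a few lines of Python. This is purely illustrative — the function and variable names are hypothetical and do not reflect OpenZeppelin's actual test harness; it only shows the pattern of an initial vulnerability question followed by a more targeted follow-up hint when the first answer misses the exploit.

```python
from typing import Optional

# The initial question the testers reportedly used for each level.
INITIAL_PROMPT = "Does the following smart contract contain a vulnerability?"

def build_audit_prompt(contract_source: str, follow_up: Optional[str] = None) -> str:
    """Combine the initial question (and any optional follow-up hint)
    with the contract source code into a single prompt string."""
    parts = [INITIAL_PROMPT, "", contract_source]
    if follow_up:
        parts += ["", follow_up]
    return "\n".join(parts)

# Example: a first pass over a level's source, then a follow-up prompt
# nudging the model toward a specific area of the contract.
contract = "contract Fallback { /* ...level source... */ }"
first_prompt = build_audit_prompt(contract)
second_prompt = build_audit_prompt(
    contract, "Focus on the access control around the fallback function."
)
```

In practice each prompt string would be sent to the model's chat API, with the follow-up issued only when the first response fails to identify the exploit.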
In response to questions from Cointelegraph, Wegener noted that OpenZeppelin expects its auditors to be able to complete all Ethernaut levels, as any capable auditor should be able to.
While Wakabayashi and Wegener concluded that ChatGPT-4 is currently unable to replace human auditors, they highlighted that it can still be used as a tool to boost the efficiency of smart contract auditors and detect security vulnerabilities, noting:
“To the community of Web3 BUIDLers, we have a word of comfort — your job is safe! If you know what you are doing, AI can be leveraged to improve your efficiency.”
When asked whether a tool that increases the efficiency of human auditors would mean firms like OpenZeppelin would not need as many, Wegener told Cointelegraph that the total demand for audits exceeds the capacity to provide high-quality audits, and they expect the number of people employed as auditors in Web3 to continue growing.
In a May 31 Twitter thread, Wakabayashi said that large language models (LLMs) like ChatGPT are not yet ready for smart contract security auditing, as it is a task that requires a considerable degree of precision, while LLMs are optimized for generating text and holding human-like conversations.
Because LLMs try to predict the most probable outcome every time, the output isn’t consistent.
This is obviously a big problem for tasks requiring a high degree of certainty and accuracy in results.
— Mariko (@mwkby) May 31, 2023
However, Wakabayashi suggested that an AI model trained on tailored data with specific output goals could provide more reliable solutions than the publicly available chatbots, which are trained on large amounts of general data.
What does this mean for AI in web3 security?
If we train an AI model with more targeted vulnerability data and specific output goals, we can build more accurate and reliable solutions than powerful LLMs trained on vast amounts of data.
— Mariko (@mwkby) May 31, 2023