AI tokens jump 10% as Vitalik Buterin explains how AI can enhance security and efficiency on Ethereum
The Ethereum co-founder argued that AI could help solve the blockchain network's "biggest technical risk."
Digital assets in the artificial intelligence sector continued their recent surge, increasing by 10%, as Ethereum co-founder Vitalik Buterin stated that the technology could help the blockchain network mitigate its “biggest technical risk.”
CryptoSlate’s data show that several tokens in the sector, including SingularityNET’s AGIX, 0x0.ai, Ocean Protocol’s OCEAN, The Graph’s GRT, Fetch.AI’s FET, and others, each rose more than 10% during the past day, pushing the sector’s market capitalization to $18.06 billion and its trading volume to approximately $2.5 billion.
How AI can aid Ethereum
According to Buterin, AI could help blockchain network developers identify bugs and assist with code verification.
“Right now ethereum’s biggest technical risk probably is bugs in code, and anything that could significantly change the game on that would be amazing,” he added.
His remarks come in the wake of his recent exploration of the synergy between AI and crypto, detailed in a blog post on Jan. 30. Buterin outlined various potential applications of AI within the crypto sphere.
He pointed out that integrating AI could revolutionize crypto-systems, particularly in scenarios where individual participants are replaced by AI entities, enabling more efficient operation on a micro-scale.
While recognizing AI integration’s functionality and safety enhancements, Buterin also urged caution, especially in high-value and high-risk settings.
Beyond Ethereum, other protocols and exchanges, including Solana, Polkadot, Binance, and stake.link, are also exploring AI-driven solutions.
However, despite the enthusiasm for AI, concerns linger regarding its efficacy in bug detection. Daniel “Haxx” Stenberg of cURL highlighted potential challenges, including the risk of AI-generated false positives complicating bug identification efforts.
“An (AI-generated) security report can take away a developer from fixing a really annoying bug, because a security issue is always more important than other bugs. If the report turned out to be crap, we did not improve security and we missed out time on fixing bugs or developing a new feature,” Stenberg argued.