
Hoskinson proposes ‘generative AI-proof watermarking’ to thwart deep fakes

Countering AI deep fakes: technological advancements will soon make fakery indistinguishable from real life.


Input Output CEO Charles Hoskinson said the emergence of AI deep fakes would force us to assume everything we see and hear is fraudulent.

AI deep fakes to become indistinguishable from real life

During a recent AMA, Hoskinson was asked about using blockchain technology to authenticate visual content. In response, he acknowledged that a verification system is needed.

He further predicted that, within the next 12 to 24 months, rapid advances in AI technology will make deep fake video and audio indistinguishable from real life. With that will come a gradual shift from the customary "seeing is believing" attitude to one of assuming everything is fake.

Continuing, he warned of propagandistic uses of the technology, with nation-states producing and distributing deep fakes as "instruments of polarization."

Hoskinson suggested the way to combat this is to use the immutable properties of blockchains to store a verifiable "chain of evidence" from image capture to upload, an approach he called "generative AI-proof watermarking."

“The only way to get out of it is to have verified information and verified content. So at the time of creation, you need to create an NFT and sign it, and have some chain of evidence that it was created on a legitimate device.”
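To make the idea concrete, the following Python sketch shows one minimal way such a provenance record could work: hash the media at capture time, sign the hash with a device-held key, and let anyone with the device's public key verify the attestation later. This is an illustrative assumption, not Hoskinson's actual design or any real product; the Ed25519 device key, the `capture_record` and `verify_record` helpers, and the JSON record format are all hypothetical, and the on-chain NFT minting step is only indicated in a comment.

import hashlib
import json
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stands in for a private key embedded in a camera or phone at manufacture.
device_key = Ed25519PrivateKey.generate()

def capture_record(media_bytes: bytes) -> dict:
    """Create a signed provenance record at the moment of capture."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    payload = json.dumps({"sha256": digest, "ts": time.time()}).encode()
    return {
        "payload": payload,
        "signature": device_key.sign(payload),  # device attests to the capture
    }

def verify_record(record: dict, public_key) -> bool:
    """Anyone holding the device's public key can check the attestation."""
    try:
        public_key.verify(record["signature"], record["payload"])
        return True
    except InvalidSignature:
        return False

record = capture_record(b"raw image bytes from the sensor")
print(verify_record(record, device_key.public_key()))  # True
# In the proposal, this record (or its hash) would then be minted as an
# NFT on a blockchain, making the chain of evidence immutable from
# capture to upload.

The key design point is that the signature is bound to the content hash at creation time, so any later edit to the media breaks verification; the blockchain's role is simply to make the record tamper-evident and publicly checkable.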

Other approaches to combat deep fakes

Gizmodo pointed out that deep fake detection technology had been in development well before ChatGPT launched. However, the recent surge in generative AI use has added urgency to the search for detection tools that are up to the task.

While Hoskinson favors a blockchain-based content traceability model, companies like Optic and FakeCatch focus on identifying AI involvement in audio and visual content. Fictitious.AI takes a similar non-blockchain approach, but for written content.

Arjun Narayan, the former Trust and Safety Lead at Google, said that although detection systems have had a degree of success, he suspects the technology is "playing catch up" with deep fakes.
