Michael Selig, chair of the US Commodity Futures Trading Commission, has argued that blockchain technology could play a central role in verifying whether content is authentic or artificially generated. Speaking on The Pomp Podcast on Thursday, Selig addressed questions from host Anthony Pompliano about the use of AI-generated images and memes in financial markets. He stated that regulators are focused on maintaining US leadership in crypto and emphasized that, in his view, artificial intelligence and blockchain are inseparable technologies.
Selig also addressed the growing prevalence of autonomous trading in financial markets, where regulators face pressure to draw clear distinctions between automated tools and fully autonomous AI agents. He indicated that the CFTC is actively assessing how AI models are being deployed in markets. His comments suggest that enforcement efforts will concentrate on participants directly engaged in financial activity, rather than on the underlying technology itself.
A central challenge accompanying the rapid expansion of artificial intelligence is the difficulty of distinguishing genuine content from synthetic media. Selig’s remarks reflect a wider effort among policymakers and developers to apply blockchain as a tool for content verification and provenance tracking. Establishing reliable methods to confirm the origin and authenticity of digital content has become an increasing priority as concerns over misinformation intensify.
One approach gaining attention is proof-of-personhood systems, which are designed to confirm that an online account belongs to a real, unique individual rather than an automated bot. The most prominent example is Sam Altman’s World, whose World ID protocol allows users to demonstrate their humanity without disclosing personal data. The system relies on encrypted biometric iris scans stored on users’ own devices, though it has attracted criticism regarding privacy risks and the potential for coercion.
In March, World introduced AgentKit, a toolkit enabling AI agents to demonstrate a verified link to a real human while interacting with online services. The toolkit integrates proof-of-personhood credentials with the x402 micropayments protocol, which was developed by Coinbase and Cloudflare. This combination allows agents to pay for access to services while simultaneously presenting cryptographic proof of human backing.
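The pattern described above can be sketched as a simplified simulation of an HTTP 402 "Payment Required" handshake of the kind x402 builds on. This is an illustration only, not the actual x402 or AgentKit API: the header names `X-PAYMENT` and `X-HUMAN-PROOF`, the `server` function, and the credential strings are all hypothetical placeholders.

```python
# Toy simulation of a payment-plus-personhood handshake. All names
# here are illustrative placeholders, not the real x402/AgentKit spec.

def server(headers: dict) -> tuple[int, str]:
    """Grant access only when a payment and a personhood proof are attached."""
    if "X-PAYMENT" not in headers:
        return 402, "Payment Required"           # classic HTTP 402 status
    if "X-HUMAN-PROOF" not in headers:
        return 403, "No verified human backing"  # hypothetical AgentKit-style check
    return 200, "OK"

# An agent first probes the service, learns payment is required, then
# retries with a signed payment and a proof-of-personhood credential.
status, _ = server({})
assert status == 402

status, body = server({
    "X-PAYMENT": "signed-payment-payload",
    "X-HUMAN-PROOF": "world-id-credential",
})
assert status == 200
```

The key design point is that the two credentials travel together in one request: the service never has to trust the agent's self-description, only verify what is cryptographically attached.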
Ethereum co-founder Vitalik Buterin has separately proposed using cryptography and blockchain infrastructure to make online systems more verifiable. His proposals include zero-knowledge proofs and onchain timestamps that could help validate how content is generated and distributed. Importantly, these methods are designed to achieve verification without exposing sensitive user data.
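The timestamping idea can be illustrated with a minimal sketch: hash a piece of content and record the digest with a timestamp in an append-only log, with a plain Python dict standing in for a blockchain. The `ProvenanceLog` class and its methods are invented for this example, and a zero-knowledge layer of the kind Buterin describes is beyond the scope of the sketch.

```python
import hashlib
import time

class ProvenanceLog:
    """Toy append-only log standing in for an onchain timestamp registry."""

    def __init__(self):
        self._entries = {}  # digest -> first-seen timestamp

    def register(self, content: bytes) -> str:
        """Record the content's SHA-256 digest with a timestamp."""
        digest = hashlib.sha256(content).hexdigest()
        # Append-only: a digest keeps its first timestamp forever.
        self._entries.setdefault(digest, time.time())
        return digest

    def verify(self, content: bytes):
        """Return the registration time if this exact content was logged."""
        return self._entries.get(hashlib.sha256(content).hexdigest())

log = ProvenanceLog()
log.register(b"original article text")
assert log.verify(b"original article text") is not None  # provenance confirmed
assert log.verify(b"tampered article text") is None      # any edit breaks the hash
```

Because verification only compares hashes, a third party can confirm when content first appeared without the log ever exposing the content itself.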
These developments are unfolding as US policymakers consider broader frameworks for regulating artificial intelligence. On March 20, the Trump administration released a national framework calling for a unified federal approach to AI governance. The framework warned that a fragmented set of state-level laws could undermine innovation and the country’s global competitiveness in the sector.
Originally reported by CoinTelegraph.
