Fintech and crypto markets were abuzz this week as Ethereum co-founder Vitalik Buterin spotlighted mounting risks surrounding artificial intelligence (AI) governance. In high-level comments echoed across X (formerly Twitter) and the crypto news circuit, Buterin cautioned that naive, unguarded AI-driven governance models are especially susceptible to manipulation—especially through social engineering tactics known as “jailbreaks.”
This is also why naive "AI governance" is a bad idea.
— vitalik.eth (@VitalikButerin) September 13, 2025
If you use an AI to allocate funding for contributions, people WILL put a jailbreak plus "gimme all the money" in as many places as they can.
As an alternative, I support the info finance approach ( https://t.co/Os5I1voKCV… https://t.co/a5EYH6Rmz9
Buterin, widely regarded for his contributions in decentralized finance, argued that permitting AI unchecked influence over critical financial and governance decisions could pave the way for systemic exploitation. Citing recent examples, he highlighted how attackers have leveraged simple jailbreak prompts to redirect financial flows or trigger unanticipated AI behaviors, underscoring a profound need for reinforced oversight.
Info Finance as a Blueprint: Human and Algorithmic Diversity
Buterin’s solution is “info finance”—a model blending open market contributions with human jury spot checks on AI models. Such a framework aims to benefit from diverse input and continuous review, effectively aligning the incentives of both developers and end-users. Human oversight isn’t just a safeguard but central to Buterin’s vision for robust, tamper-resistant decentralized infrastructure.
These views resonated broadly after Eito Miyamura, a respected voice in AI and crypto, demonstrated a ChatGPT “jailbreak” that allowed extraction of private email data via crafted prompts. Miyamura’s findings revealed that a single malicious calendar invite could trick generative models into disclosing confidential information. His takeaway: AI tools adeptly follow user instructions but lack the capacity for common-sense risk filtering.
Integrating Oversight with Innovation
Further debate, triggered by blockchain researcher Sreeram Kannan, revolved around deploying info finance to steer public goods funding. Kannan noted that conditional markets may lack reliable ground truths for sustainable resource allocation. In response, Buterin advocated that every modern system—especially those empowering AI—must include trusted signals derived from human juries, ideally with assistance from large language models. He warned that the spectrum of attacks extends beyond “jailbreaking” and includes subtler manipulations aimed at faking adoption or legitimacy.
Do you have a version of info finance to make incentive decisions on public goods for example?
— Sreeram Kannan (@sreeramkannan) September 13, 2025
Conditional markets seem very weak in this setting as the truth value (which is the best thing to fund) doesn’t even have a correct answer in the future.
Jailbreaking is a short…
What emerges is an urgent consensus: while AI offers unprecedented efficiencies, its rapid infusion into financial and crypto governance requires rigorous human oversight, constant auditability, and institutional seriousness. Emerging frameworks that blend open competition, incentives, and layered review could shape the next chapter in responsible, resilient fintech infrastructure.
You might be interested in:



