Ethereum‘s co-creator, Vitalik Buterin, recently voiced concerns about utilizing artificial intelligence (AI) in governance systems, labeling it as a “poor choice.” He shared his thoughts on X in a recent message:
“If AI is implemented to allocate funds for contributions, individuals will inevitably incorporate jailbreak methods along with requests for substantial financial gains wherever possible.”
The Problem with AI in Governance
Buterin’s statement came as a response to Eito Miyamura, the co-founder and CEO of EdisonWatch, an AI data governance platform. Miyamura highlighted a significant security vulnerability discovered within ChatGPT. In his Friday post, Miyamura revealed that the integration of full support for MCP (Model Context Protocol) tools on ChatGPT has opened doors for potential exploitation of the AI.
This update, implemented the previous Wednesday, allows ChatGPT to connect and retrieve information from various applications, including Gmail, Calendar, and Notion.
Miyamura pointed out that an email address is now sufficient to potentially “extract all of your confidential information.” He outlined a straightforward three-step process that malicious actors could use to access private data:
The first step involves sending a harmful calendar invitation containing a jailbreak prompt to the intended target. A jailbreak prompt is a piece of code designed to circumvent restrictions and obtain administrative privileges.
Miyamura clarified that the data breach can occur even if the recipient doesn’t accept the malicious invitation.

Don’t Get Left Holding the Bag
Join The Crypto Investor Blueprint — 5 days of pro-level strategies to turbocharge your portfolio.
Brought to you by CryptoSlate
The next step requires the victim to utilize ChatGPT for daily planning or preparation. Finally, once ChatGPT processes the compromised calendar invitation, it becomes vulnerable, allowing the attacker to take complete control of the AI, search the victim’s private emails, and forward the extracted data to the attacker’s own email address.
Buterin’s Proposed Solution
Buterin advocates for the “info finance” model for AI governance. This approach envisions an open marketplace where developers can contribute diverse AI models. According to Buterin, this marketplace would feature a spot-check mechanism, initiated by anyone and evaluated by a panel of human experts.
In a separate communication, Buterin elaborated that individual human jurors would be supported by large language models (LLMs).
Buterin believes that this “institution design” strategy is fundamentally more resilient. He argues that it promotes model diversity in real-time and incentivizes both developers and independent analysts to monitor and address potential issues.
While many are enthusiastic about the potential of “AI as a governor,” Buterin cautioned:
“I believe this approach carries risks, both for traditional AI safety concerns and the near-term potential for significant value destruction.”


