In a recent blog post, Ethereum co-founder Vitalik Buterin outlined his approach to artificial intelligence, with an emphasis on privacy and security. He described a personal setup that runs entirely on local hardware using the open-source Qwen3.5:35B model, and voiced concerns about cloud-based AI tools, which he views as significant privacy risks.
Buterin built a messaging daemon that prevents his AI agent from sending messages or executing crypto transactions without explicit human approval. He likened this arrangement to two-factor authentication: the AI proposes an action, and a human sign-off serves as the second factor.
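The post does not include the daemon's code, but the core idea, an approval gate that queues agent-drafted actions until a human explicitly releases them, can be sketched roughly as follows (all names here are hypothetical, not from Buterin's implementation):

```python
# Hypothetical sketch of a human-in-the-loop approval gate: the agent may
# draft outbound messages, but nothing is sent without explicit human approval.
from dataclasses import dataclass


@dataclass
class OutboundMessage:
    recipient: str
    body: str
    approved: bool = False


class ApprovalGate:
    """Holds agent-drafted messages until a human explicitly approves them."""

    def __init__(self):
        self.pending: list[OutboundMessage] = []
        self.sent: list[OutboundMessage] = []

    def queue(self, recipient: str, body: str) -> OutboundMessage:
        # Called by the AI agent: the message only enters the pending queue.
        msg = OutboundMessage(recipient, body)
        self.pending.append(msg)
        return msg

    def approve_and_send(self, msg: OutboundMessage) -> None:
        # Called only by a human-triggered action; a real daemon would hand
        # the approved message off to Signal or an email client here.
        msg.approved = True
        self.pending.remove(msg)
        self.sent.append(msg)
```

The point of the design is that the send path is reachable only through the human-facing `approve_and_send` call, so a compromised or misbehaving agent cannot exfiltrate data by messaging on its own.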
This latest update marks an evolution from Buterin’s previously stated vision for privacy-preserving AI, shared earlier this year during a discussion on a comprehensive Ethereum-AI roadmap. In February, he had outlined various aspects of private AI usage, agent markets, and governance mechanisms. The new post, however, dives deeper into his actual implementation of these principles.
For his AI configuration, Buterin runs the Qwen3.5:35B model locally via llama-server on a laptop equipped with an Nvidia 5090 GPU, achieving around 90 tokens per second. He has also built a local repository of Wikipedia articles and technical documentation to limit reliance on external search queries, which he views as potential privacy vulnerabilities.
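The post does not describe how that local corpus is queried; one simple approach, sketched here under the assumption that the saved pages are plain-text files in a directory, is a keyword search over the files on disk, so lookups never leave the machine:

```python
# Hypothetical sketch: answer the agent's lookups from a local corpus
# (e.g. a directory of saved Wikipedia/doc pages) instead of a web search.
from pathlib import Path


def local_search(corpus_dir: str, query: str, max_hits: int = 3) -> list[str]:
    """Return the names of local documents containing every query term."""
    terms = [t.lower() for t in query.split()]
    hits: list[str] = []
    for doc in sorted(Path(corpus_dir).glob("*.txt")):
        text = doc.read_text(encoding="utf-8", errors="ignore").lower()
        if all(t in text for t in terms):
            hits.append(doc.name)
        if len(hits) >= max_hits:
            break
    return hits
```

A production setup would likely use an embedding index rather than substring matching, but the privacy property is the same: the query is only ever matched against files already on disk.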
A key element of his setup is the connection between the AI and his Ethereum wallet and messaging accounts. The messaging daemon he developed lets the AI read Signal messages and emails, while any outbound communication must first receive human approval. He urged teams building AI-enhanced Ethereum wallet tools to adopt similar safeguards, such as requiring explicit confirmation for transactions above $100, which matches his own practices for handling crypto assets: he keeps roughly 90% of his funds in a multisig Safe wallet, with keys distributed among trusted contacts to avoid any single point of failure.
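The $100 rule he recommends amounts to a simple policy check before signing. A minimal sketch (the threshold value comes from the recommendation; the function names and return strings are illustrative):

```python
# Hypothetical policy check: small transactions pass through, while anything
# above a configurable threshold is blocked until a human confirms it.
APPROVAL_THRESHOLD_USD = 100.0  # threshold from the recommendation; configurable


def requires_confirmation(amount_usd: float,
                          threshold: float = APPROVAL_THRESHOLD_USD) -> bool:
    """True if this transaction must be confirmed by a human before signing."""
    return amount_usd > threshold


def submit_transaction(amount_usd: float, human_confirmed: bool = False) -> str:
    # In a real wallet tool, "signed" would trigger the actual signing flow.
    if requires_confirmation(amount_usd) and not human_confirmed:
        return "blocked: awaiting human confirmation"
    return "signed"
```

The check is deliberately applied before the signing step, so an AI agent with wallet access can initiate small payments but can never move a large amount unilaterally.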
Addressing broader concerns in the tech community, Buterin cited research indicating that around 15% of community-built tools for OpenClaw, a rapidly growing GitHub repository, contained malicious code capable of exfiltrating user data without users' knowledge. He warned that the privacy gains achieved through technologies like end-to-end encryption risk being undone by the growing normalization of cloud-based AI systems, which often collect vast amounts of user data.
In summary, Buterin’s setup and recommendations highlight an ongoing commitment to privacy in the rapidly evolving landscape of AI and blockchain technology, advocating for robust security measures that empower users to maintain control over their data.


