Moltbook, described as a “social media” platform for AI agents, has recently drawn significant attention amid concerns over its security vulnerabilities. Marketed as the “front page of the agent internet,” the site allows AI agents to interact autonomously, without human intervention. Buzz around Moltbook intensified after some users came to believe it was a groundbreaking experiment showcasing AI agents communicating freely among themselves.
However, a critical misconfiguration in Moltbook’s backend left its database openly accessible through auto-generated APIs, potentially allowing anyone to take control of these AI agents and manipulate their posts. Security researcher Jameson O’Reilly uncovered the flaw and disclosed it to 404 Media, drawing on his previous experience identifying security issues in AI platforms.
O’Reilly explained that Moltbook runs on Supabase, an open-source backend platform built on PostgreSQL that automatically generates REST APIs for database tables. The flaw, he noted, came down to one of two scenarios: either Row Level Security (RLS) was never enabled on Moltbook’s agents table, or RLS was enabled but no policies were configured. Either way, sensitive information, including the API keys of every agent registered on the platform, was publicly accessible through a URL found on Moltbook’s website.
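To illustrate the class of exposure described here: Supabase serves an auto-generated PostgREST API under `/rest/v1/`, and with RLS off, a table can be read by anyone holding the project’s public anon key. A minimal sketch of such a probe — the project URL, key, and `agents` table name are placeholders, not Moltbook’s real values:

```python
import json
import urllib.request


def rest_endpoint(project_url: str, table: str) -> str:
    """Build the PostgREST URL that Supabase exposes for a table."""
    return f"{project_url.rstrip('/')}/rest/v1/{table}?select=*"


def fetch_table(project_url: str, anon_key: str, table: str):
    """Read rows from `table` using only the public anon key.

    With RLS enabled and sensible policies, this returns nothing useful;
    with RLS off, it dumps the entire table to any caller.
    """
    req = urllib.request.Request(
        rest_endpoint(project_url, table),
        headers={"apikey": anon_key, "Authorization": f"Bearer {anon_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example with placeholder values (never probe systems you don't own):
# rows = fetch_table("https://example-project.supabase.co", "public-anon-key", "agents")
```

The headers and endpoint shape follow Supabase’s documented REST conventions; everything project-specific above is assumed for illustration.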
When O’Reilly reached out to Moltbook’s creator, Matt Schlicht, about the vulnerabilities and offered his assistance in fixing them, Schlicht’s response brushed off the warnings; he said he remained focused on delivering everything he could to AI.
A day after this initial exchange, O’Reilly made a more troubling discovery: the exposed API keys allowed anyone to take over any account on the platform, with no prior access required. He emphasized that even basic SQL commands could have prevented the exposure, and pointed to the risk to influential figures such as OpenAI co-founder Andrej Karpathy, whose API key was among those exposed. In malicious hands, the keys could have been used to spread disinformation under those figures’ names, causing significant reputational damage.
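The “basic SQL commands” in question are standard PostgreSQL, which Supabase is built on. A minimal sketch of the fix, assuming a table named `agents` with an `owner_id` column — both guesses at Moltbook’s schema, not confirmed details:

```sql
-- Enable Row Level Security: the auto-generated REST API stops
-- returning rows to callers that no policy explicitly allows.
ALTER TABLE agents ENABLE ROW LEVEL SECURITY;

-- Let an authenticated agent read only its own row. auth.uid() is
-- Supabase's helper for the caller's identity; owner_id is hypothetical.
CREATE POLICY agents_select_own ON agents
  FOR SELECT
  USING (auth.uid() = owner_id);
```

Enabling RLS without any policies denies all access by default, which is why forgetting the second step (rather than the first) is the more common — and less catastrophic — mistake.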
To demonstrate the risk, O’Reilly updated his own Moltbook account through the exposed keys, illustrating how the lack of security could be exploited for nefarious purposes. Schlicht did not respond to 404 Media’s inquiries, though O’Reilly said he had since been contacted for help securing the platform.
The developments surrounding Moltbook have raised critical questions about the governance of AI agents and their interactions online. While technology enthusiasts celebrate the potential of these autonomous agents, there is growing concern about the lack of basic security measures in the platforms that host them. The incident serves as a cautionary tale for the tech community, highlighting the urgent need for effective security protocols in rapidly evolving technology spaces.
As Moltbook continues to draw attention, the implications of its recent vulnerabilities leave unanswered questions about the integrity of the content generated by AI agents, casting doubt on how much of the discourse surrounding this innovation is genuinely independent. O’Reilly captured the moment’s urgency: “This is the pattern I keep seeing: ship fast, capture attention, figure out security later. Except later sometimes means after 1.49 million records are already exposed.”

