CertiK, a cybersecurity firm, has warned that the growing adoption of AI assistant platforms such as OpenClaw exposes users to serious risks, including unauthorized system actions, data breaches, and drained cryptocurrency wallets. OpenClaw is a self-hosted AI agent that connects with messaging platforms including WhatsApp, Slack, and Telegram, and can autonomously manage email, calendars, and files on users’ computers. The platform claims approximately 2 million monthly active users, and a McKinsey study from November found that 62% of surveyed organizations were already experimenting with AI agents.
OpenClaw originated as a side project called Clawdbot, launched in November 2025, and rapidly accumulated over 300,000 stars on GitHub, reflecting a sharp rise in popularity. However, CertiK notes that this rapid growth came with a significant accumulation of security flaws: the platform has amassed more than 280 GitHub Security Advisories and over 100 Common Vulnerabilities and Exposures (CVEs), and has been subjected to multiple ecosystem-level attacks since its launch.
Security researchers identified the scale of exposure early on. Bitsight found 30,000 internet-exposed instances of OpenClaw within weeks of its launch, while SecurityScorecard researchers discovered 135,000 instances across 82 countries, with 15,200 specifically vulnerable to remote code execution. CertiK has described the platform as the most aggressively scrutinized AI agent from a security standpoint, calling it a primary supply chain attack vector operating at scale.
Because OpenClaw acts as a bridge between external inputs and local system execution, it introduces several classic attack vectors, according to CertiK researchers. These include local gateway hijacking, in which malicious websites or payloads exploit the agent’s presence on a local machine to extract sensitive data or run unauthorized commands. Malicious plugins can also add harmful channels, tools, and services, while so-called malicious skills, installed from local or marketplace sources, can manipulate agent behavior through natural-language instructions, making them resistant to conventional security scanning.
CertiK researchers told Cointelegraph that attackers deliberately planted malicious skills across high-value categories, including utilities for Phantom, wallet trackers, insider-wallet finders, Polymarket tools, and Google Workspace integrations. The primary payloads were designed to simultaneously target a wide range of browser extension wallets, including MetaMask, Trust Wallet, Coinbase Wallet, and OKX Wallet, among others. Researchers noted a clear overlap with broader crypto-theft tactics, such as social engineering, fake utility lures, credential theft, and wallet-focused phishing.
Earlier this month, cybersecurity firm OX Security reported a phishing campaign that used fake GitHub posts and a fraudulent token called CLAW to trick OpenClaw developers into connecting their crypto wallets. OpenClaw founder Peter Steinberg, who recently joined OpenAI, acknowledged the security concerns at the ClawCon event in Tokyo, stating that the team had spent the past two months focused on security improvements. He indicated that conditions have improved but did not provide specific details.
In light of these risks, CertiK advised everyday users who are not security professionals or experienced developers to refrain from installing OpenClaw and instead wait for more mature and hardened versions of the platform. Separately, cybersecurity company SlowMist introduced a security framework for AI agents in March, positioning it as a defense against risks associated with autonomous systems that handle on-chain actions and digital assets. The developments highlight growing industry concern over the security implications of deploying AI agents with broad system-level access.
Originally reported by CoinTelegraph.
