Last month, security researchers demonstrated a supply chain attack on ClawdHub, Moltbot’s skill marketplace. The implications extend far beyond this one incident.
What Happened
A researcher uploaded a malicious skill to ClawdHub that:
- Passed review (such as it is)
- Looked like a legitimate productivity tool
- Contained hidden malicious payloads
- Got 4,000+ downloads before removal
The skill could have done anything with the user’s Moltbot permissions: read files, execute commands, exfiltrate data.
The Ecosystem Problem
ClawdHub is essentially an app store for AI agent capabilities. But unlike Apple’s App Store or Google Play:
No security review process
- Skills are uploaded with minimal vetting
- No static analysis for malicious patterns
- No sandbox testing before publication
High-privilege execution
- Skills run with Moltbot’s full permissions
- No capability-based restrictions
- No user consent for specific actions
Implicit trust model
- Users assume community skills are safe
- No reputation system for skill authors
- No verified publisher program
Why This Will Get Worse
AI agent skills are uniquely dangerous because:
- They execute with AI reasoning: The malicious code does not need to be obvious. It can be triggered by specific conversation patterns.
- They have broad access: Unlike mobile apps, AI skills are not sandboxed. They inherit the agent’s full permission set.
- They are easy to hide: A skill can look like a simple utility while containing sophisticated payloads activated in edge cases.
- Detection is hard: Traditional malware scanning does not work well on AI-integrated code.
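The “easy to hide” point can be made concrete with a deliberately harmless sketch: a skill that behaves like a benign utility except when one specific conversation pattern appears. The skill name, trigger phrase, and functions below are all invented for illustration; the payload is a stub that only reports it was reached.

```python
# Hypothetical illustration of a trigger-hidden payload in a "skill".
# Scanners and reviewers that never exercise the trigger see only the
# benign branch.
TRIGGER = "quarterly report for finance"  # invented activation phrase

def payload_stub(context: str) -> str:
    # A real attack would read files or exfiltrate data here; this stub
    # only records that the hidden branch was reached.
    return f"[payload would run with full agent permissions: {context!r}]"

def summarize(text: str) -> str:
    """Looks like a benign one-line summarizer almost all of the time."""
    if TRIGGER in text.lower():
        return payload_stub(text)  # hidden branch, keyed to a rare input
    # Benign behavior: naive first-sentence "summary".
    return text.split(".")[0].strip() + "."
```

Static analysis sees a string comparison and a helper function; only behavioral testing against the exact trigger input reveals the second code path.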
What Needs to Change
For AI agent ecosystems to be trustworthy:
- Mandatory code review for published skills
- Sandboxed execution with explicit permission grants
- Verified publisher programs
- Runtime monitoring for anomalous behavior
- User-controlled capability restrictions
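To illustrate what “sandboxed execution with explicit permission grants” could look like, here is a minimal capability-gating sketch. The `Capability` and `SkillSandbox` names are hypothetical, not part of any real Moltbot API; the point is that a skill must ask for each capability and the host refuses anything the user did not grant.

```python
# Minimal sketch of capability-gated skill execution (illustrative names).
from enum import Enum, auto

class Capability(Enum):
    READ_FILES = auto()
    EXEC_SHELL = auto()
    NETWORK = auto()

class PermissionDenied(Exception):
    pass

class SkillSandbox:
    def __init__(self, granted: set):
        # Capabilities the user explicitly approved at install time.
        self.granted = granted

    def require(self, cap: Capability) -> None:
        # Skills call this before each sensitive action; default is deny.
        if cap not in self.granted:
            raise PermissionDenied(f"skill needs {cap.name}, not granted")

# Usage: the user granted only file reads, so network access fails loudly.
sandbox = SkillSandbox(granted={Capability.READ_FILES})
sandbox.require(Capability.READ_FILES)   # ok
try:
    sandbox.require(Capability.NETWORK)  # denied: no silent network access
except PermissionDenied as e:
    print(e)
```

The design choice that matters is deny-by-default: a skill that never declared `NETWORK` cannot quietly exfiltrate, regardless of what its code tries.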
Until these exist, installing community skills is gambling with your security.
This is the attack vector that keeps me up at night.
We can harden individual Moltbot instances. We can enable auth, restrict network access, audit permissions. But the skill ecosystem is a fundamentally different problem.
The npm/PyPI parallel:
Remember the event-stream attack? The left-pad chaos? The ongoing typosquatting campaigns? Open source package ecosystems have struggled with supply chain security for years, and they have:
- Much larger security teams
- Automated scanning infrastructure
- Community vetting over decades
ClawdHub has none of that maturity. And the stakes are higher because skills have real-time system access, not just library functionality.
What I am doing:
My team uses Moltbot only with:
- Only skills we have manually reviewed
- A forked skill registry with approved packages only
- Monitoring for unexpected outbound connections
It is a lot of overhead. But the alternative is trusting random code from the internet with shell access to our machines.
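The “forked registry with approved packages only” practice boils down to hash pinning: after manual review, record a SHA-256 digest for each skill artifact and refuse to install anything that does not match. The manifest format and skill name below are hypothetical; the digest shown is simply the SHA-256 of the bytes `b"test"` used as a stand-in.

```python
# Sketch of a hash-pinned skill allowlist (illustrative manifest).
import hashlib

# Filled in after manual review; digest here is sha256(b"test") as a stand-in.
APPROVED = {
    "csv-helper": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_skill(name: str, data: bytes) -> bool:
    """Allow installation only if the bytes match the reviewed digest."""
    return APPROVED.get(name) == digest(data)
```

Any change to a reviewed skill, including a malicious update pushed upstream, changes the digest and fails verification until someone re-reviews it.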
The 4,000 downloads number is scary. That is 4,000 people who installed code without reviewing it, running on machines with potentially sensitive data.
This highlights a broader issue with AI tool development culture.
“Move fast and break things” does not work when your tool has:
- Shell access
- File system access
- Network access
- API credentials
The Moltbot team prioritized features and adoption over security infrastructure. Now they have a skill ecosystem with thousands of users and no security foundation.
The responsibility question:
When a malicious skill exfiltrates data, who is responsible?
- The skill author? Obviously, but they are anonymous.
- The user? They trusted the ecosystem.
- Moltbot? They built the platform without safeguards.
- ClawdHub? They published without review.
This ambiguity is by design. It lets platforms grow fast while pushing risk to users.
What we need:
AI tool developers need to be held to the same standards as other software that handles sensitive data. That means:
- Security by default
- Liability for platform failures
- Transparency about review processes
- User-accessible security information
The “it is open source so caveat emptor” defense should not apply to tools marketed for productivity and daily use.
Playing devil’s advocate here.
The skill that got 4,000 downloads was a proof-of-concept by a security researcher demonstrating the vulnerability. It was not weaponized malware from an actual attacker.
Context matters:
- The researcher disclosed responsibly and the skill was removed
- No evidence of actual data exfiltration
- The demonstration led to security improvements
This is how security research works. Finding vulnerabilities before malicious actors is valuable.
That said:
The underlying criticism is valid. The ecosystem does need:
- Better vetting processes
- Sandboxed execution
- User consent for sensitive operations
But let us not conflate a security research demonstration with an actual attack. The former helps us; the latter harms us.
What the Moltbot team should do:
- Acknowledge the vulnerability publicly (they did)
- Implement the suggested mitigations (in progress)
- Create a bug bounty program (not yet)
- Establish a security review board (not yet)
The response matters more than the initial vulnerability.