Fenris AI
← Back to Blog

What Are Claws, and Why Is Everyone Losing Their Minds?

March 19, 2026 · Molly Edwards

OpenClaw logo

A plain-language breakdown of the biggest shift in AI since ChatGPT.

If you've been anywhere near tech news in 2026, you've probably seen the word "claw" and had no idea what anyone was talking about. Or maybe you saw a lobster emoji and kept scrolling. Fair.

Here's what's actually happening, and why it matters even if you never plan to install one.

The Short Version

A "claw" is an AI agent that doesn't just talk to you. It does things for you. It can open your apps, send your emails, manage your files, browse the web, book things, write code, and run tasks on your computer. All from a chat message. You text it like you'd text a friend, and it goes and handles it.

The one that started all of this is called OpenClaw. It's free, open-source, and runs locally on your machine. You connect it to a large language model (Claude, GPT, DeepSeek, whatever you want), and then you talk to it through WhatsApp, Telegram, Discord, or Signal. It has access to your computer, your files, your apps. And it acts on your behalf.

That's the key distinction. ChatGPT talks. A claw acts.

Or as DigitalOcean put it, developers have described OpenClaw as "the closest thing to JARVIS we've seen." That's not wrong. It remembers your preferences, learns your patterns, runs 24/7, and can handle tasks while you sleep. It's less chatbot, more personal employee.

How We Got Here

OpenClaw was built by Peter Steinberger, a developer who spent 13 years building PSPDFKit before pivoting to this. It started under different names (first Clawdbot, then Moltbot) before settling on OpenClaw. The lobster branding stuck. The "claw" metaphor is about grabbing and executing tasks. It's not an acronym. It's just a lobster.

It went viral in late January 2026. Hit 60,000 GitHub stars in 72 hours. Within weeks, it passed 250,000 stars and overtook React as the most-starred non-aggregator software project on the platform. By February, Steinberger announced he was joining OpenAI, and the project was being moved to an open-source foundation.

In his own announcement, Steinberger wrote: "I'm a builder at heart... What I want is to change the world, not build a large company." His stated mission? To "build an agent that even my mum can use." That's why he joined OpenAI instead of raising a round. He called it "the fastest way to bring this to everyone."

Sam Altman's post on X about the hire was telling: he called Steinberger "a genius with a lot of amazing ideas about the future of very smart agents interacting with each other."

The speed of adoption is what caught the industry's attention. Jensen Huang, NVIDIA's CEO, compared OpenClaw to the launch of GPT itself, calling it "probably the single most important release of software... probably ever." He framed it as the shift from AI that generates to AI that works: "An AI that was able to perceive became an AI that could generate... an AI that can reason now became an AI that can actually do work."

Whether or not you buy the hyperbole, the comparison is directionally right. GPT gave us AI that could write. Claws give us AI that can execute. And Steinberger predicted that 80% of apps will eventually disappear because of it.

The Big Players

Once OpenClaw took off, everyone piled in. Here's the landscape:

OpenClaw is the original. Free, open-source, runs locally. Over 50 integrations (Gmail, GitHub, Spotify, Obsidian, etc.) and 100+ preconfigured "skills" that extend what it can do. Think of it as the Linux of AI agents: powerful, flexible, and very much a build-your-own situation. Steinberger put it bluntly in a Y Combinator interview: because it runs on your computer, "it can do every effing thing."

NVIDIA NemoClaw is NVIDIA's answer to the security problem. It wraps OpenClaw with enterprise-grade guardrails: sandboxing, policy-based privacy controls, and the ability to run open models locally on NVIDIA hardware. Jensen Huang's pitch to enterprises: "Every company now needs to have an OpenClaw strategy." NVIDIA is partnering with CrowdStrike, Cisco, Google, and Microsoft on this.

IronClaw is the security-first alternative, built in Rust by NEAR AI. It runs in encrypted enclaves, stores credentials in a vault the AI never sees directly, sandboxes every tool in WebAssembly, and scans outbound traffic for leaks. It's the "I want a claw but I don't trust a claw" option. Starts at $5/month.

The pattern is clear. OpenClaw is the wild, powerful open-source base. NemoClaw is NVIDIA making it enterprise-safe. IronClaw is the security-paranoid alternative. And behind all of them, the big model providers (OpenAI, Anthropic, Google) are building their own agentic capabilities directly into their products.

The Security Problem Is Real

Here's where it gets uncomfortable.

A security audit by Giskard in January 2026 found that OpenClaw's default configuration uses a shared session for all direct messages. That means environment variables and API keys loaded into one person's session were available to anyone who could message the bot. Files saved in one user's session could be retrieved by another user. That's not a theoretical risk. That's an open door.

A broader audit found 512 vulnerabilities, eight classified as critical. The worst one allowed full compromise of the system: an attacker could run arbitrary commands on your machine.

But it gets worse. Researchers found that the link preview feature in messaging apps like Telegram could be weaponized. Through prompt injection (basically tricking the AI into following hidden instructions), an attacker could get your claw to generate a URL that exfiltrates your private data to their server. Your AI assistant, sending your stuff to a stranger, because it got fooled by a message.

And then there's ClawHub, the skill marketplace. Cisco's security team found that the #1-ranked skill in the repository ("What Would Elon Do?") contained active data exfiltration, direct prompt injection, command injection through embedded bash commands, and malicious payloads. Over 800 malicious skills were discovered across the registry. That's roughly 20% of the entire marketplace. One in five plugins was compromised.

OpenClaw's own documentation acknowledges it: "There is no 'perfectly secure' setup."

China restricted government agencies from running OpenClaw on work computers. Microsoft, Cisco, and Kaspersky all published security advisories. The phrase "security nightmare" appeared in more than one headline.

This is not theoretical. This is real, documented, already-exploited risk.

What People Are Missing

Here's what I think the conversation is getting wrong.

The hype crowd is treating claws like the next app store. Install it, add some skills, let it run your life. They're glossing over the fact that you're giving an AI full access to your computer, your accounts, and your data. And the security model is, charitably, a work in progress. The 20% malicious skill rate should make anyone pause. Even Steinberger himself told interviewers: "I literally had to argue with people that told me, 'Yeah, but my agent said this and this.' So, we, as a society, we have some catching up to do in terms of understanding that AI is incredibly powerful, but it's not always right."

The fear crowd is treating claws like they're uniquely dangerous and missing the fact that this is where all of AI is headed. Agentic AI (AI that acts, not just talks) is the entire trajectory of the field. OpenAI, Anthropic, Google, and Microsoft are all building this into their products. OpenClaw just got there first as an open-source project, which means the problems showed up faster and louder.

What both sides are missing: This is the same pattern we've seen with every major technology shift. The capability arrives before the safety infrastructure. The early adopters take the risks. The security community scrambles. The enterprise players build guardrails. And eventually, it becomes normal.

The car didn't ship with seatbelts. The internet didn't ship with HTTPS. And claws didn't ship with adequate security. That doesn't mean the technology is wrong. It means we're early.

What This Means for You

If you're a regular user of AI (someone who uses ChatGPT or Claude to write emails, brainstorm, or get answers), here's what claws mean for your near future:

The tools you already use are going agentic. Claude can already use tools, browse the web, and execute code. ChatGPT has plugins and actions. The line between "chatbot" and "agent" is disappearing inside the products you're already paying for. You don't need to install OpenClaw to experience this shift. It's coming to you.

The permission model is going to matter a lot. Right now, when you use Claude or ChatGPT, you type something and it responds. With agentic AI, you're giving it permission to do things: send messages, modify files, interact with other services. Understanding what you're authorizing becomes a real skill, not just clicking "accept."

"Vibe tasking" is the next vibe coding. Just like vibe coding let non-developers build software by describing what they wanted, claws let non-technical people automate workflows by describing what they need done. The barrier to automation just dropped dramatically.

Security literacy is no longer optional. If your AI can access your email, your files, and your calendar, you need to understand what that means. Not at an engineering level. But at a "what am I comfortable with" level. The same way you (hopefully) don't click random email attachments, you'll need judgment about what you let an AI agent do on your behalf.

The Bottom Line

Claws are AI that does things instead of just saying things. OpenClaw is the project that made it real, fast, and free. The security problems are serious and documented. The enterprise world is scrambling to build guardrails. And whether you install one or not, the shift to agentic AI is already baked into every major AI product you use.

This is not a hype cycle. This is the next layer. And like every layer before it (the web, mobile, cloud, generative AI), understanding it early gives you an advantage.

You don't need to run a claw today. But you should understand what they are, because the tools you're already using are becoming them.

ME

Molly Edwards

Founder of Fenris AI. Background in art history and SaaS product implementation. Building ethical AI education for everyone.

Learn AI With Fenris

Join the waitlist for practical AI training with ethics certification and a real community. Launching Spring 2026.

Join the Waitlist