OpenClaw Proved the Demand. Now Enterprises Need the Infrastructure.

Over the weekend, OpenAI beat out Meta to hire Peter Steinberger, the creator of OpenClaw, to help build OpenAI’s story for running agentic workflows in the enterprise. It will be interesting to see how OpenAI and Steinberger translate the ideas that made OpenClaw a viral sensation among developers into the very different world of the enterprise.
In this article, I want to break down some of the key design choices that made OpenClaw such a sensation, what the biggest friction points are for enterprises trying to adopt and run agents at scale, and how the open-source work we’ve been doing at Platformatic can bridge that gap.
Developers and the Path of Least Resistance
OpenClaw racked up 196,000 GitHub stars, caught the eye of Meta and OpenAI, and got flagged by Gartner as an “unacceptable cybersecurity risk” for enterprises. So what’s really going on?
Let’s first take a look at what made OpenClaw so appealing to everyday developers. Namely, it brought the world of LLMs and agents to where developers were most excited to apply them, i.e., the data and apps on their own machines. (This, interestingly enough, is a common thread between consumers and enterprise teams, which I’ll touch on later.)
Second, it came with a fantastic developer experience out of the box. Because it was built on Node.js, OpenClaw shipped with a rich ecosystem that let developers hook their agents up to … well, pretty much whatever they wanted, with just a few lines of code.
Your Agents, Your System
So what does OpenClaw’s viral appeal teach us about bringing agents to the enterprise?
Well, it turns out, agents are most useful when you run them where they can do useful things: your data, your files, all that good stuff. That’s greatly simplified if your agent runs on your own system, and it’s this simplicity that’s largely been missing from most cloud-based agentic platforms.
This is because enterprises need something that integrates with the infrastructure they’ve already invested in.
When we talk about the sometimes ambiguous notion of “the enterprise”, what we are really referring to are teams that have invested years of engineering effort and millions of dollars (in engineering hours and commercial licences) building heavily customized Kubernetes platforms, replete with observability stacks, CI/CD pipelines, security policies, and compliance systems, all tailored to the ergonomics of their developers and domain. So you can imagine how platform teams respond when a new vendor says,
“Great news, agentic AI is here. You just need to adopt this entirely new platform to run it.”
Here’s where Watt comes in: making your existing stack agent-ready.
Why Node.js Is the Runtime for Agents
OpenClaw’s architecture is a 390,000-line TypeScript codebase running on Node.js 22 or higher. Its Gateway, the control plane that manages every agent interaction across WhatsApp, Telegram, Slack, Discord, iMessage, and more, is written entirely in JavaScript and TypeScript. It works anywhere Node.js works.
If you’ve ever looked closely at how agents work, this makes a lot of sense. Agents aren’t batch jobs; they are persistent, event-driven processes that keep long WebSocket connections open, respond to messages across multiple channels at once, call external APIs, and manage conversations over time. This is exactly what Node.js was built for. The event loop, the main feature that makes Node.js great for high-concurrency I/O, lets an agent handle many conversations, tool calls, and streaming LLM responses at the same time without needing a separate thread for each connection.
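A tiny, self-contained sketch of that point (the agent and tool calls are stubs, not OpenClaw code): one Node.js process interleaves many slow, I/O-bound conversations on the event loop, with no thread per connection.

```javascript
// Stub for a slow external call (e.g. an LLM or messaging API).
const slowCall = (reply, ms) =>
  new Promise((resolve) => setTimeout(() => resolve(reply), ms));

// Each conversation awaits several external calls in sequence.
async function handleConversation(id) {
  await slowCall(`ack ${id}`, 50);    // acknowledge the message
  await slowCall(`tool ${id}`, 50);   // call a tool
  return slowCall(`reply ${id}`, 50); // send back a reply
}

// Ten conversations run concurrently. Total time is roughly 150 ms,
// not 10 × 150 ms, because the waits overlap on the event loop.
async function main() {
  const start = Date.now();
  const replies = await Promise.all(
    Array.from({ length: 10 }, (_, i) => handleConversation(i))
  );
  return { replies, elapsed: Date.now() - start };
}
```

The same shape scales from ten stubbed conversations to thousands of real ones, which is the workload profile agents present.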
OpenClaw chose Node.js because no other runtime handles this pattern as smoothly. Python would struggle with concurrency. Go could work, but it lacks the rich ecosystem that let Steinberger build integrations for every major messaging platform in just weeks. The npm ecosystem, with libraries such as Baileys for WhatsApp, grammY for Telegram, discord.js, and Slack’s Bolt SDK, is why a single developer could build something in weeks that would take an enterprise team months.
Watt: The Primitive That Makes Your Existing Stack Agent-Ready
At its core, Watt implements ideas that are simple to grasp but challenging to execute elegantly from an engineering perspective. Namely, we wanted to 1) truly unlock the power of multi-threading for Node.js by running your application as a worker thread within Watt, and 2) provide a universal primitive to run your app across any infrastructure, with all the NFRs (observability, thread management, and so on) handled out of the box.
So, what are the benefits of using tools like Watt to run and manage your agents as isolated worker threads? Let’s do a quick reality check.
Can you see every long-running, event-driven process in your stack right now?
Do you have automated visibility into which connections are open, what messages are moving, or how your agents scale during spikes in requests?
If you hesitate, you’re not alone. Most enterprise stacks aren’t built for persistent, event-driven workloads. That’s exactly where agentic AI exposes the cracks.
Long-running operations for agents. Agents are stateful; they inherently operate in a “loop” and must remain active for hours, days, or even longer, maintaining state, holding connections, and reacting to events across multiple channels. Sub-agents can be spawned on demand to adapt the system on the fly. Watt lets your application do all of this in isolated worker threads, and manages the full lifecycle of long-running Node.js agents on Kubernetes, including smooth restarts, health monitoring, and resource management, without losing agent state.
For enterprise teams, this brings real improvements: Watt's ability to recycle and self-heal threads means agentic workflows keep running without interruption.
Put another way, if your agent is in the middle of a conversation with a customer, coordinating across Slack and email, and your pod is rescheduled on Kubernetes, you lose your state and frustrate your users. With Watt, we automatically detect service degradation and act accordingly, gracefully hot-swapping threads before Kubernetes (or your customer) notices anything has gone awry.
Out-of-the-Box Observability for Node.js. The OpenClaw security nightmare was as much about bad defaults as anything. Let’s be honest: configuring security and observability will be perceived as a distracting side quest by an excited developer who just wants to ship (they are called “NFRs” for a reason, after all). Our answer was to provide all of this out of the box, for both devs and the platform teams that look after them.
To this end, Watt’s Intelligent Command Center (and its companion Admin service) provides continuous profiling, event loop monitoring, and application-level metrics, giving DevOps teams and security leaders a clear view of every Node.js process in their cluster. You can’t secure what you can’t see.
Intelligent autoscaling tied to Node.js internals. Agents often have unpredictable workloads. One agent might be idle for hours, then suddenly need to handle dozens of LLM calls when a user starts a complex workflow.
Watt’s autoscaler understands Node.js event loop metrics, not just CPU and memory, and scales based on real application-level demand. This kind of event-loop-aware scaling can deliver strong business results: application-level autoscaling strategies like this can cut cloud compute costs by 25 percent or more by avoiding overprovisioning during slow periods and preventing slowdowns during traffic spikes.
Put another way: autoscaling on the wrong metrics is expensive, both financially and in terms of performance SLOs.
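A simplified, illustrative sketch of event-loop-aware scaling logic (not Watt’s real algorithm): derive the replica count from event loop utilization rather than CPU alone. Utilization near 1.0 means the loop has no idle time and latency is about to climb, even when container-level CPU averages look fine; the thresholds below are invented for the example.

```javascript
// Pure decision function, so the policy itself is easy to test.
// elu is event loop utilization in [0, 1]; in a real process you
// would sample it with performance.eventLoopUtilization() from
// node:perf_hooks and compare successive snapshots.
function desiredReplicas({
  current,
  elu,
  scaleUpAt = 0.9,
  scaleDownAt = 0.3,
  min = 1,
  max = 10,
}) {
  if (elu > scaleUpAt) return Math.min(current + 1, max);   // loop saturated
  if (elu < scaleDownAt) return Math.max(current - 1, min); // mostly idle
  return current;                                           // steady state
}
```

For example, an agent fleet at 3 replicas with utilization 0.95 would step up to 4, while the same fleet idling at 0.1 would step down to 2, which is exactly the overprovisioning-versus-latency trade-off described above.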
Enterprise-grade operations without rewrites. A big driver of adoption for us has been the fact that we don’t ask teams to rewrite their Node.js applications or give up their current infrastructure.
Watt wraps your Node.js app and adds operational features such as profiling, logging, tracing, and scaling, all without code changes. It integrates with your current Kubernetes setup, works with your observability tools, and fits into your deployment workflows. If your team has been building agent features on Node.js, Watt makes those agents ready for production on the infrastructure you already have.
Watt and the Multi-agent-verse
Let’s imagine a multi-agent workflow you could put into production next quarter:
A sales agent gets a message from a customer about a delayed order.
Instead of forwarding the ticket manually, the sales agent automatically works with a logistics-tracking agent to check the shipment status.
If there’s a problem, an incident response agent opens a case in the ITSM system and notifies the customer proactively, all without human intervention.
Your teams see faster response times, fewer dropped tickets, and a better customer experience, and the whole process is auditable from start to finish.
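The hand-off above can be sketched as plain async functions wired together; every name, status, and system here is made up for illustration, and real agents would of course call external shipment-tracking and ITSM systems rather than stubs.

```javascript
// Pretend lookup against a shipment-tracking system.
const logisticsAgent = async (orderId) =>
  orderId === 'A-100'
    ? { status: 'delayed', eta: '2 days' }
    : { status: 'on-time' };

// Pretend case creation in an ITSM system, plus customer notification.
const incidentAgent = async (orderId, shipment) => ({
  caseId: `CASE-${orderId}`,
  notified: true,
  detail: `Shipment ${shipment.status}, new ETA ${shipment.eta}`,
});

// The sales agent coordinates the other two — no human forwarding.
async function salesAgent(message) {
  const shipment = await logisticsAgent(message.orderId);
  if (shipment.status === 'delayed') {
    const incident = await incidentAgent(message.orderId, shipment);
    return {
      reply: `We opened ${incident.caseId} and will keep you posted.`,
      incident,
    };
  }
  return { reply: 'Your order is on time.' };
}
```

Each of these functions is a candidate for its own isolated worker, which is where the operational questions below come in.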
At its core, this is a distributed systems problem, and one that ties back to Node’s core strengths, with its event-driven architecture, streaming capabilities, and unmatched ecosystem for live communication.
But distributed systems also need operational infrastructure. They require monitoring, lifecycle management, security boundaries, and proven operational tools that Matteo and I have spent the last decade building.
OpenClaw showed that a single developer using Node.js can build an agent platform that excites hundreds of thousands of people. Imagine what happens when enterprises bring the same capabilities and add proper security, observability, and operational controls.
What if you could deploy AI agents the same way you deploy microservices, with worker isolation, auto-scaling, health checks, and hot-reload, on infrastructure you already own? Watt could run each agent type as an isolated application with its own worker pool, sandboxed filesystem, and tool policy, while a single gateway handles authentication, role-based access control, and routing across Slack, Teams, Telegram, or any HTTP client through an OpenAI-compatible API. No vendor lock-in, no data leaving your network, and the same Node.js runtime your team already knows, just pointed at a harder problem.
That’s the world Watt is making real.
Time to take the lobster by the claws.
If you’re leading an enterprise and watching OpenClaw unfold, here’s my take:
Don’t ban agentic AI. The demand is real, and your teams will find workarounds if you try. Instead, invest in the infrastructure that ensures safety. The pull of the ecosystem is strong. Your agent strategy is really a Node.js strategy.
Get your operations in order. You need visibility into long-running Node.js processes, autoscaling that understands the event loop, and lifecycle management for processes that aren’t just stateless web servers.
Start with what you already have. If your teams are running Node.js (and most likely they are), the path to production-ready agents is shorter than you think. Watt is built to meet you where you are.
The OpenClaw moment is just the beginning. Enterprises that build the right infrastructure now will be the ones to take advantage of agentic AI. Those who respond with bans and blocks will spend years trying to catch up.
Node.js made OpenClaw possible. Your cloud investment made your infrastructure real. Watt connects the two, turning them into enterprise-grade platforms that run secure, scalable, and durable agents.