Why Your AI Agent Stack Is Three Separate Problems — And One Platform
Pablo Marin, CTO @ KSGai.com · March 2, 2026
Strip away the hype and an AI agent is just a prompt. A sophisticated prompt, maybe — with chain-of-thought reasoning and tool-calling syntax — but a prompt nonetheless. What makes that prompt production-ready are the three things wrapped around it: tools to take action, knowledge to know how, and a computer to execute code.
Most teams building agents today are assembling a Frankenstein stack. One vendor for tool access. Another for code execution. And domain knowledge? That's hard-coded in application logic, scattered across repos, invisible to everyone except the engineer who wrote it. Each piece works in isolation. None of them talk to each other.
We think about agent infrastructure through three pillars: Tools. Knowledge. Computer. Every production agent needs all three. The question is whether you manage them as three separate problems — with three separate vendors, three separate dashboards, and three separate blind spots — or as one integrated platform.
- The Tools — MCP Servers
- The Knowledge — Agent Skills
- The Computer — Sandboxes
This post maps the landscape. We'll walk through each pillar, name the alternatives honestly, and show where the gaps are. Not because the alternatives are bad — many are excellent at what they do — but because solving one pillar in isolation creates new problems that only surface when you try to run agents at scale.
The Tools — MCP Servers
The first pillar is tool access. Agents need to interact with the outside world: call APIs, query databases, read files, send messages, trigger workflows. The Model Context Protocol (MCP) has become the standard for this — a universal interface that lets any agent talk to any service through a common protocol. MCP servers are the connectors that make this possible, exposing tools, resources, and prompts to agents in a consistent format.
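Under the hood, MCP messages ride on JSON-RPC 2.0. As a rough sketch, a tool invocation on the wire looks something like this — the tool name and arguments below are made up for illustration:

```python
import json

# MCP requests are JSON-RPC 2.0; a client invokes a server-side tool via the
# "tools/call" method. The tool name and arguments here are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_orders",                 # hypothetical tool
        "arguments": {"customer_id": "c_123"},  # tool-specific args
    },
}

wire = json.dumps(request)
print(wire)
```

Because every server speaks this same shape, an agent that can emit one `tools/call` request can talk to any MCP server — which is what makes a gateway in front of many servers feasible in the first place.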
Several strong products have emerged in this space. ACI.dev (formerly Gate22) provides a hosted MCP registry with unified authentication — a clean approach to managing tool connections from a single dashboard. Composio offers 250+ tool integrations with managed authentication, making it straightforward to connect agents to popular SaaS tools. Obot takes an open-source, agent-framework approach. Context Forge (IBM) targets enterprise MCP management. And API gateway players like Kong and Portkey have extended their existing platforms to support MCP traffic. Bifrost handles protocol translation, converting OpenAPI specs to MCP servers automatically.
These are all legitimate approaches, and each solves real problems. But when you evaluate them for production deployment, patterns emerge. Most are hosted-only — your tool traffic flows through a third-party cloud, which is a non-starter for regulated industries where data cannot leave the network perimeter. Many support a limited set of server types (typically just SSE or HTTP), which means you cannot connect to tools that run as local processes, Docker containers, or WebAssembly modules. Few offer multi-tenancy, so isolating tool access between teams, environments, or customers requires manual configuration. And credential management is often basic — static API keys rather than a proper vault with rotation and audit trails.
MCP Gateway approaches this differently. It supports six server types: stdio, SSE, streamable HTTP, Docker, WASM, and remote gateway federation. It runs entirely self-hosted on your own infrastructure — your Kubernetes cluster, your VPC, your compliance boundary. Multi-tenancy is built in, with full tenant isolation at the data and network level. And credentials are managed through an integrated vault with rotation policies, not scattered across environment variables.
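To make the multi-tenant, multi-server-type idea concrete, here is a hypothetical configuration sketch. The field names are invented for this post — they illustrate the shape of the problem, not actual MCP Gateway syntax:

```yaml
# Hypothetical gateway config — field names are illustrative, not real syntax.
tenants:
  - name: payments-team
    servers:
      - name: jira-tools
        type: stdio              # tool runs as a local process
        command: ["npx", "jira-mcp-server"]
      - name: analytics
        type: docker             # container-backed MCP server
        image: internal/analytics-mcp:1.4
    credentials:
      vault_path: payments/jira  # resolved from the integrated vault
      rotation: 30d              # keys rotate automatically
```

The point of the sketch: server type, tenant boundary, and credential lifecycle are declared in one place, rather than spread across environment variables and per-vendor dashboards.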
The Knowledge — Agent Skills
The second pillar is the one nobody is talking about.
Tools give agents the ability to act. But knowing what to do — the domain knowledge, the multi-step workflows, the institutional expertise that separates a useful agent from a hallucinating liability — that requires something else entirely. We call these skills: packaged, reusable artifacts that encode procedural knowledge as versionable, shareable SKILL.md files.
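As a concrete illustration, a minimal SKILL.md might look like the following — the structure below is illustrative, not a normative schema:

```markdown
---
name: refund-triage
version: 1.2.0
description: Decide whether a refund request can be auto-approved.
---

## Instructions
1. Look up the order and its payment status.
2. Auto-approve if the amount is under the team's threshold and the order
   is less than 30 days old; otherwise escalate to a human reviewer.

## Parameters
- order_id (string, required)
- requested_amount (number, required)
```

Because the artifact is a plain file with a version, it can be diffed, reviewed, shared, and rolled back like any other piece of infrastructure.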
No competitor has this. Search the entire MCP ecosystem — ACI.dev, Composio, Obot, Context Forge, every API gateway vendor, every sandbox provider — and you will find zero platforms that manage agent skills as a first-class capability. This is the empty column in every comparison table.
Without managed skills, teams fall into a familiar trap. Domain knowledge gets hard-coded directly into application logic — buried in prompt templates, scattered across repositories, duplicated between teams with no visibility into what exists or what works. Every team reinvents the wheel. You cannot audit what knowledge an agent is using. You cannot version it. You cannot share it across teams. You cannot monitor whether a skill is performing well or drifting.
MCP Gateway treats skills as infrastructure, not application code. The platform provides a skill catalog where teams can browse, search, and discover existing skills. An AI-powered generation pipeline lets you describe what you need in natural language and get a working skill — complete with instructions, parameters, and workflow steps. Import and export enables sharing across teams and organizations. Every skill is versioned, so you can roll back when something breaks. And usage monitoring shows you which skills are being invoked, by which agents, how often, and with what outcomes.
When your CISO asks “what knowledge is this agent using to make decisions?” you need an answer more specific than “whatever the engineer put in the prompt.”
For enterprises, this is not a nice-to-have. It is a compliance requirement. Managed skills provide that answer: versioned, auditable, governed artifacts with clear ownership and usage telemetry.
The Computer — Sandboxes
The third pillar is code execution. Anthropic's research showed that agents writing and executing code — rather than calling tools directly — achieve 98.7% token savings on complex tasks. But letting an AI write and run arbitrary code on your infrastructure without isolation is, to put it mildly, inadvisable. Sandboxes provide secure, isolated environments where agents can write, execute, and iterate on code without risking the host system.
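The core idea can be sketched in a few lines: run the generated code in a separate, short-lived process with a hard timeout. To be clear, this toy gives only process-level isolation — a real sandbox adds container, filesystem, and network isolation on top:

```python
import os
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Execute agent-generated Python in a throwaway process.

    This is a minimal sketch: it bounds runtime and detaches the child from
    the parent's environment, but a production sandbox would also restrict
    the filesystem, network, and resource usage.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, "-I", path],  # -I: isolated mode (no env/site)
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout
    finally:
        os.unlink(path)

print(run_untrusted("print(2 + 2)").strip())  # prints 4
```

The `timeout` is what turns a runaway loop from an outage into a caught exception; everything else — warm pools, snapshots, network policy — is layered on this primitive.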
Good options exist here. E2B offers cloud-first sandboxes with impressively fast cold starts — their infrastructure is purpose-built for ephemeral code execution and it shows. Daytona provides open-source development environments with strong isolation guarantees. Scrapybara specializes in browser-based agent execution — useful when agents need to interact with web applications directly.
The limitations follow the same pattern. Most sandbox providers are SaaS-only: your data and code leave your network, execute on someone else's infrastructure, and return results over the public internet. For many organizations, that is an acceptable trade-off. For banks, hospitals, defense contractors, and anyone operating under strict data residency requirements, it is not. Beyond data sovereignty, standalone sandbox providers operate in isolation from the rest of the agent stack. There is no visibility into what tools or skills triggered the code execution. You cannot correlate a sandbox session with the agent action that spawned it. Debugging becomes archaeology.
MCP Gateway runs sandboxes as self-hosted containers on your own infrastructure. Warm container pools eliminate cold-start latency — sandboxes are ready before the agent needs them. And critically, sandbox execution is integrated with tool calls and skill invocations in a single observability trace. When an agent calls a tool, invokes a skill, and then executes code, you see the entire chain in one timeline — not three separate logs from three separate systems.
The Real Problem: Fragmentation
Suppose you pick the best vendor for each pillar individually. ACI.dev for tool management. E2B for sandboxes. And for skills — well, there is no vendor, so you build something in-house. You now have three separate systems. Three separate dashboards to monitor. Three separate authentication mechanisms to manage. Three separate audit logs to correlate.
You cannot trace an agent's action end-to-end: from the tool it called, to the skill that guided its reasoning, to the code it executed in a sandbox.
This fragmentation has a concrete cost. Each system has its own view of what happened, but no system has the complete picture. When something goes wrong — and it will — you are stitching together timestamps from three different logging pipelines, hoping the clocks were synchronized.
For regulated industries, this fragmented audit trail is a non-starter. Banking regulators do not accept “we think the agent did X based on correlating logs from three vendors.” Healthcare compliance does not work with partial observability. Government contracts require unified audit trails, not best-effort log aggregation.
The unified platform argument is not about convenience — it is about operational necessity. One credential vault managing all secrets. One observability pipeline capturing all telemetry. One RBAC system governing all access. One API for all agent infrastructure. When your security team needs to answer “what did this agent have access to, what knowledge was it using, and what code did it execute?” the answer comes from one place, not three.
This is not a novel pattern. It is the same arc as API management (Kong, Apigee), service mesh (Istio, Linkerd), and cloud networking (AWS VPC). Every time distributed services proliferate, organizations eventually need a control plane to manage them. Agent infrastructure is following the same trajectory — and the organizations that recognize this early will have a structural advantage over those still duct-taping point solutions together.
How the Landscape Breaks Down
| Capability | MCP Gateway | ACI.dev | Composio | E2B | Daytona |
|---|---|---|---|---|---|
| Self-hosted | ✓ | — | — | — | ✓ |
| Multi-tenant | ✓ | — | — | — | — |
| MCP Server Management | ✓ | ✓ | ✓ | — | — |
| Skill Management (unique) | ✓ | — | — | — | — |
| Sandboxes | ✓ | — | — | ✓ | ✓ |
| Unified Observability | ✓ | Partial | Partial | — | — |
| Credential Vault | ✓ | ✓ | ✓ | — | — |
| RBAC | ✓ | — | Partial | — | — |
| AI Generation | ✓ | — | — | — | — |
The pattern is clear. Each alternative excels at one or two pillars but leaves the rest to you. Only MCP Gateway covers all three — tools, knowledge, and computer — in a single, self-hosted platform with unified governance.
What's next
Tools. Knowledge. Computer. One Platform.
Your agents need all three pillars. You can assemble them from separate vendors and accept the blind spots, or you can run them on one platform with one audit trail, one credential vault, and one control plane.