Case Study


How VIVI Powers Hospitality AI with MCP Gateway

VIVI is a KSG portfolio product · Hospitality · March 2, 2026

VIVI is an AI studio for hospitality. Its customers call it "the HR department for AI agents" — a platform where you onboard, train, monitor, and report on AI agents like new employees. These agents handle phone calls, messaging, room service, concierge requests, housekeeping, and more across hotel properties, languages, and channels.

But building the agent is only half the problem. The other half is giving it the ability to actually do things.

How It Starts
01

The Customer Builds the Agent

Every VIVI agent is created in collaboration with the customer. Resort managers and headquarters teams are the ones who know how the front desk answers the phone, how reservations work, what the agent should and shouldn't say, and what the hotel's brand voice sounds like. Together with the VIVI customer success team, they build the agent's system prompt — its personality, persona, operating rules, and brand guidelines.

That system prompt is powerful. But there's a big gap between knowing what to say and being able to actually do something — like looking up a reservation, placing a room service order, or dispatching housekeeping to room 412.

The problem isn't building one AI agent. It's bridging the gap between a system prompt and 20 legacy APIs that have never heard of MCP.

The Gap
02

The Gap Between Prompt and Action

Hotels run on a patchwork of legacy systems. For reservations and room management, there's Opera, Cloudbeds, Infor, Mews, Stayntouch, and others. For food & beverage, Agilysys, Toast, Symphony, among others. For CRM, HotSOS, Salesforce, and often custom internal APIs that hotels have built themselves over the years.

None of these vendors have official MCP servers. Many of these systems are on-premises, legacy, and speak only HTTP REST or SOAP. The agent speaks MCP. Somebody has to bridge that gap — and do it efficiently, because every extra token in the agent's context window costs money and slows down responses.

Two things are missing between the system prompt and the ability to take action:

The Two Missing Pieces

1. An MCP translation layer that converts the agent's MCP protocol calls into HTTP calls against each hotel's specific APIs — token-efficiently, without exposing 3,000 individual tools to the agent.

2. Operational knowledge — not just which tools to call, but how to orchestrate them. A late checkout isn't one API call. It's a multi-step flow that checks availability, updates the folio, notifies housekeeping, and confirms with the guest. That orchestration logic needs to live somewhere — and it needs to run somewhere.
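To make the first piece concrete, here is a minimal sketch of what a translation layer does: map an MCP-style tool call onto the REST request a legacy hotel API actually understands. Everything here — the tool ids, routes, and payload fields — is illustrative, not VIVI's actual API, and the function returns the HTTP request it would make rather than sending it, so the routing logic stands on its own.

```python
def translate_mcp_call(tool_name: str, arguments: dict) -> dict:
    """Map one MCP tools/call request onto the legacy REST call it stands for.

    Hypothetical routing table: tool name -> (method, path template,
    body fields). Real hotel APIs (Opera, Cloudbeds, ...) differ.
    """
    routes = {
        "lookup_reservation": ("GET", "/reservations/{confirmation}", None),
        "dispatch_housekeeping": ("POST", "/housekeeping/tasks", ("room", "task")),
    }
    method, path_template, body_fields = routes[tool_name]

    # Arguments referenced in the path template go into the URL...
    path_args = {k: v for k, v in arguments.items()
                 if "{" + k + "}" in path_template}
    request = {"method": method, "path": path_template.format(**path_args)}

    # ...and the rest become the request body, if this route takes one.
    if body_fields:
        request["body"] = {k: arguments[k] for k in body_fields}
    return request


print(translate_mcp_call("lookup_reservation", {"confirmation": "ABC123"}))
# {'method': 'GET', 'path': '/reservations/ABC123'}
```

In a real translation layer the returned description would be handed to an HTTP client, with auth, retries, and error mapping per backend system.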

The Solution
03

Three Pillars, One Platform

MCP Gateway fills both gaps with three capabilities that map directly to what any AI agent needs to function in the real world:

MCP Servers · The Tools
Agent Skills · The Knowledge
Sandboxes · The Computer

MCP Servers are the tools. VIVI's engineering team builds token-efficient MCP servers that translate the MCP protocol to each hotel system's HTTP API — Opera, Cloudbeds, Agilysys, Salesforce, custom internal systems. But you can't expose 3,000 individual tools to the agent — that would blow up the context window, burn through tokens, and make the agent completely unusable. What the agent actually needs is just two operations: search for the right tool based on customer intent, and execute it. MCP Gateway provides exactly that — a unified endpoint where the agent searches available tools, finds the ones relevant to the current request, and executes them. The context stays minimal, the token window stays efficient, and the agent stays fast.
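The search-and-execute pattern can be sketched in a few lines. This is an illustration, not VIVI's actual gateway API — the class, tool names, handlers, and keyword scoring below are invented for the sketch (a production gateway would use real relevance ranking):

```python
from dataclasses import dataclass, field


@dataclass
class Tool:
    name: str
    description: str
    handler: callable  # stands in for the HTTP call to the hotel system


@dataclass
class GatewayIndex:
    """The agent only ever sees two operations, search and execute,
    never the full catalog of thousands of tools."""
    tools: dict = field(default_factory=dict)

    def register(self, tool: Tool) -> None:
        self.tools[tool.name] = tool

    def search(self, query: str, limit: int = 3) -> list:
        """Naive keyword overlap as a stand-in for real relevance ranking."""
        terms = query.lower().split()
        scored = []
        for tool in self.tools.values():
            text = f"{tool.name} {tool.description}".lower()
            score = sum(term in text for term in terms)
            if score:
                scored.append((score, tool))
        scored.sort(key=lambda pair: -pair[0])
        return [{"name": t.name, "description": t.description}
                for _, t in scored[:limit]]

    def execute(self, name: str, **args):
        return self.tools[name].handler(**args)


gateway = GatewayIndex()
gateway.register(Tool(
    name="opera_lookup_reservation",
    description="Look up a guest reservation in Opera by confirmation number",
    handler=lambda confirmation: {"status": "confirmed", "room": "412"},
))
gateway.register(Tool(
    name="agilysys_place_order",
    description="Place an in-room dining order through Agilysys",
    handler=lambda room, items: {"accepted": True},
))

hits = gateway.search("look up a reservation")
print(hits[0]["name"])  # opera_lookup_reservation
print(gateway.execute(hits[0]["name"], confirmation="ABC123"))
```

The point of the pattern: the agent's context holds two tool definitions instead of thousands, and the gateway does the narrowing server-side.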

Agent Skills are the knowledge. A skill is a structured prompt that tells the agent how to use the MCP tools for a specific workflow — reservations, in-room dining, housekeeping dispatch, concierge requests, late checkout. But skills are more than prompts. They're also a way to be token-efficient: a skill can chain multiple MCP tool calls into a single Python or Node.js script, execute them in sequence, and return a concise result to the agent. Instead of five separate round-trips through the LLM, the skill makes the calls programmatically and hands back one clean response. That efficiency matters when you're handling hundreds of concurrent calls.
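A hypothetical skill script for the late-checkout flow described earlier might look like this. `execute_tool`, the tool ids, and the canned responses are all stand-ins for real gateway calls, invented for the sketch:

```python
def execute_tool(name, **args):
    # Stand-in for the gateway call; a real skill would hit the
    # MCP Gateway endpoint here and return the backend's response.
    canned = {
        "check_room_availability": {"available_until": "14:00"},
        "update_folio": {"charge_added": 25.00},
        "notify_housekeeping": {"ticket": "HK-7831"},
    }
    return canned[name]


def late_checkout_skill(room: str, requested_time: str) -> str:
    """Runs the whole late-checkout flow in one sandbox execution,
    so the agent pays for a single LLM round-trip instead of four."""
    availability = execute_tool("check_room_availability",
                                room=room, until=requested_time)
    if requested_time > availability["available_until"]:
        return f"Late checkout only possible until {availability['available_until']}."

    folio = execute_tool("update_folio", room=room, item="late_checkout_fee")
    execute_tool("notify_housekeeping", room=room, new_checkout=requested_time)

    # One concise result goes back into the agent's context.
    return (f"Late checkout confirmed for room {room} until {requested_time}; "
            f"a ${folio['charge_added']:.2f} fee was added to the folio.")


print(late_checkout_skill("412", "13:00"))
```

The design choice worth noting: branching, sequencing, and error handling live in ordinary code, where they are cheap and deterministic, and only the final summary consumes context tokens.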

Sandboxes are the computer. Skills need to run their scripts somewhere. Sandboxes provide isolated, containerized execution environments with their own filesystem, network rules, and resource limits. Each skill execution gets its own sandbox — no agent can affect another's runtime. The warm pool keeps startup latency low so guests aren't waiting.
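The warm-pool idea can be sketched as follows — a toy model where a dict stands in for a real containerized environment with its own filesystem, network policy, and resource limits:

```python
import queue


class SandboxPool:
    """Toy warm pool: sandboxes are booted ahead of demand so a skill
    execution never pays container cold-start latency."""

    def __init__(self, size: int):
        self._warm = queue.Queue()
        for i in range(size):
            self._warm.put(self._boot(i))

    def _boot(self, sandbox_id: int) -> dict:
        # A real implementation would start an isolated container here.
        return {"id": sandbox_id, "used": False}

    def run(self, script):
        box = self._warm.get()          # grab a pre-warmed sandbox
        try:
            return script(box)          # each execution gets its own box
        finally:
            # Recycle with a fresh boot: no state leaks between skills.
            self._warm.put(self._boot(box["id"]))


pool = SandboxPool(size=2)
result = pool.run(lambda box: f"ran in sandbox {box['id']}")
print(result)  # ran in sandbox 0
```

Replacing rather than reusing the sandbox after each run is what gives the "no agent can affect another's runtime" guarantee; the warm queue is what keeps guests from waiting on startup.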

A system prompt gives an agent personality. MCP Gateway gives it the ability to do its job.

The Bigger Picture
04

Not Just Hospitality

VIVI's use case is hospitality. But the pattern is universal. Any SaaS company that wants to offer AI agents to its customers will hit the same gap: the agent has a prompt, but it can't actually do anything until you connect it to tools, teach it workflows, and give it a place to execute code.

Healthcare platforms need agents that connect to EHR systems. Financial services need agents that call trading and compliance APIs. Logistics companies need agents that orchestrate warehouse and shipping systems. In every case, the three requirements are the same:

Every Agent Needs Three Things

Tools — MCP servers that translate the agent protocol to legacy HTTP APIs, unified behind a single gateway so the agent isn't overwhelmed with thousands of endpoints.

Knowledge — Skills that encode domain-specific workflows and orchestrate multi-step API calls token-efficiently.

A computer — Sandboxes that execute skill scripts in isolation, with security and resource guarantees.

That's what MCP Gateway is. The platform where an agent — which at its core is just a system prompt — can actually do stuff.

What's Next

Build Your Own Agent Infrastructure

MCP Gateway gives your AI agents the same tool routing, skill management, and sandboxed execution that powers VIVI — deployed on your infrastructure, under your control.