Insight

MCP, the AI Agent Standard: Connecting Claude to OpsNow

OpsNow Team

An introduction to Anthropic's Model Context Protocol (MCP)

Why We Need a New Connectivity Standard for AI: The background of the rise of MCP

From the picture above:

Left: Before MCP
Without a standard protocol, each AI model must connect to tools like Slack, Google Drive, and GitHub using its own unique API. This results in fragmented connections and complex, repetitive integrations.

Right: After MCP
With MCP, models can access various tools through a single, standardized interface. Integration becomes much simpler and more unified across systems.

Large language models (LLMs), often referred to as AI chatbots or virtual assistants, are highly intelligent but still face a significant limitation: they are typically disconnected from the outside world. In simple terms, no matter how advanced AI becomes, it will always be constrained if it cannot access essential data, such as the Internet or internal company databases. For example, if you ask a current AI to "generate a graph using our sales data," it struggles because it doesn't have direct access to that data. This limitation has led to numerous efforts to link AI with external tools and data sources. However, the traditional approach involved writing custom code for each AI model and tool, and each new tool required complex development. It's similar to the past, when a different "cable" was needed for each printer or keyboard: there were many incompatible "specifications" between AI and its tools.

Anthropic has introduced the MCP (Model Context Protocol) as an open standard protocol designed to solve this issue.

It can be simplified as "USB-C for AI." Just like USB-C connects multiple devices through a single, unified port, MCP serves as the standard for linking AI applications with various external systems in a standardized way. This eliminates redundancy and simplifies integration between different systems. For instance, if three AI applications need to connect to three tools, without MCP, you would need to establish 3×3, or 9 separate connections. With MCP, only 3+3, or 6 standardized connections are required. This makes AI-tool connections more efficient, allowing AI to access and use the necessary data for improved responses.
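The arithmetic above generalizes: with M AI applications and N tools, point-to-point integration requires M × N connectors, while a shared standard needs only M + N implementations. A quick sketch (the counts are illustrative, not from any real deployment):

```python
def integrations_without_standard(apps: int, tools: int) -> int:
    # Each app needs its own custom connector for every tool
    return apps * tools

def integrations_with_standard(apps: int, tools: int) -> int:
    # Each app and each tool implements the shared protocol exactly once
    return apps + tools

# The 3-apps / 3-tools example from the text
print(integrations_without_standard(3, 3))  # 9
print(integrations_with_standard(3, 3))     # 6
```

The gap widens quickly: at 10 apps and 20 tools, point-to-point wiring needs 200 connectors versus 30 protocol implementations.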

Latest Trends: Claude and the Industry’s Shift Toward MCP Adoption

The growing trend of linking AI models with external tools isn’t unique to Anthropic. Since 2023, OpenAI’s ChatGPT has been expanding its capabilities through plugins and function calling, enabling tasks like web searches and calculator functions. Microsoft has also integrated ChatGPT into Bing to enhance web search functionality. These efforts reflect a broader industry movement focused on making AI more practical and integrated into daily workflows. In this landscape, Anthropic made a notable impact by introducing the Model Context Protocol (MCP) in November 2024, offering a standardized approach that’s quickly gaining recognition as a potential industry standard. Claude, Anthropic’s AI assistant, comes equipped with a built-in MCP client, making it easy for apps powered by Claude to connect directly to MCP-compatible servers. For example, in Claude’s desktop application, users can connect services like Google Drive, Slack, and GitHub with just a few clicks, allowing Claude to retrieve relevant information from those platforms in real time.

After MCP’s launch, many developer tools and services quickly began to support the protocol. GitHub Copilot and the development IDE Cursor were among the first to announce MCP integration. Other platforms, including Zed Editor, Replit, Codeium, and Sourcegraph, are also embedding MCP into their ecosystems to enhance how AI assistants understand and interact with code. In summary, the push to connect AI models with external systems is a major trend in the tech industry, and MCP is rapidly gaining traction as an open standard. Some experts say MCP has gained enough momentum between 2023 and 2025 to become a front-runner in the race to define the “AI agent standard.”

As more companies adopt MCP or experiment with similar solutions, the industry is seeing rapid progress in making AI assistants more powerful, flexible, and useful in practical environments.

MCP Strengths: Supporting Diverse LLMs and Operating in Closed Networks

MCP is gaining industry attention for several compelling advantages. First and foremost, it standardizes the way AI models connect with tools, significantly improving development efficiency. For developers, this means that once a new AI service is integrated using the MCP standard, it can be reused across multiple systems, streamlining workflows and boosting productivity. This standardized approach also leads to more consistent code implementation, resulting in greater service stability and easier maintenance. From the end-user perspective, this translates into a smoother, more unified experience when interacting with AI features across different applications.

Another key benefit is MCP’s flexibility, as it’s not tied to any specific AI vendor or model. It supports a wide range of large language models (LLMs), including proprietary systems like Anthropic’s Claude and OpenAI’s GPT series, as well as open-source alternatives. For example, developers can utilize open-source LLMs like Llama 2 or DeepSeek by simply meeting the MCP standard, avoiding lock-in to a single provider’s API. This flexibility extends to closed-network environments—organizations operating in secure, internet-restricted settings can still deploy AI assistants by installing open-source models and MCP connectors internally. Even without external internet access, AI functionality can be implemented as long as the MCP protocol is followed, making it ideal for high-security or offline systems.

Additionally, MCP is designed to work across various infrastructure environments. It’s compatible with a wide range of hypervisors and virtualization platforms, whether on-premises or in the cloud. This includes solutions like VMware, Proxmox, and OpenStack, all of which can be managed by AI through a single MCP interface. Since MCP operates independently of specific infrastructure types, organizations can integrate AI without overhauling their existing systems. Moreover, MCP is built for two-way communication: AI can send requests to external tools and receive real-time notifications when specific events occur. Access control is also highly customizable; the MCP server can define granular permissions to ensure AI only accesses the necessary data, preserving security and privacy. In summary, MCP stands out as a modern integration standard that balances flexibility, compatibility, and secure deployment across diverse IT environments.
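As a concrete illustration of that kind of granular permissioning, an MCP server could check each incoming tool call against a per-client allow-list before executing it. The sketch below is a simplified assumption of how such a gate might look; the client IDs and tool names are invented, and this is not actual MCP SDK code:

```python
# Hypothetical per-client tool permissions an MCP server might enforce
PERMISSIONS = {
    "finops-dashboard": {"get-cost", "get-assets"},
    "readonly-bot": {"get-cost"},
}

def is_allowed(client_id: str, tool_name: str) -> bool:
    """Return True only if the client has been granted access to the tool."""
    return tool_name in PERMISSIONS.get(client_id, set())

print(is_allowed("readonly-bot", "get-cost"))    # True
print(is_allowed("readonly-bot", "get-assets"))  # False
```

In a real deployment the registry would live in configuration or a database rather than a module-level dict, but the check itself stays this simple.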

OpsNow Use Case: Conversational Virtual Machine Management with MCP

OpsNow demonstrates practical MCP integration through two use cases. Setting up MCP-based workflows is simple, and the foundational setup can be done as follows:

  1. Combine OpsNow FinOps with MCP.

  • Third-Party MCP Servers
    • Connect to a variety of publicly available external MCP servers to extend capabilities.
  • LLM Vendor Desktop Application (Claude)
    • Claude, Anthropic’s powerful LLM, connects seamlessly with various MCP servers to process user commands in natural language. It interprets user requests, initiates the appropriate MCP server calls, and facilitates interaction with external systems.
  • OpsNow MCP Server
    • Claude connects to two custom-built MCP servers designed specifically to access cloud cost and asset information within OpsNow.
      • OpsNow Cost MCP Server: Retrieves cloud cost information from OpsNow.
      • OpsNow Asset MCP Server: Provides detailed data on active assets, including servers, networks, databases, and more.
  • OpsNow MCP Provider
    • Each MCP server is internally connected to the OpsNow MCP Provider, a component that bridges Claude’s MCP requests to the actual OpsNow API.
  • OpsNow Resources
    • The actual data is provided by OpsNow’s internal system, known as OpsNow Resources.

In this context, existing FinOps services and resources are referred to as OpsNow Resources. We'll explore the practicality of MCP by developing the green-highlighted section in a simplified way. For now, the setup will operate using dummy data for testing purposes rather than live production data.

OpsNow MCP Provider

This project acts as a provider that enables Claude to communicate with MCP servers to retrieve asset and cost information from OpsNow. The overall system is structured as follows:

  • Asynchronous web server built with FastAPI
  • API client for asset queries (asset_api_client.py)
  • API client for cost queries (cost_api_client.py)
  • Main application that handles requests from the MCP server (main.py)

main.py: The entry point for handling MCP requests

Python

from fastapi import FastAPI
from asset_api_client import get_assets
from cost_api_client import get_costs

app = FastAPI()

@app.get("/health")
async def health_check():
    return {"status": "ok"}

@app.get("/assets")
async def get_assets_data():
    return await get_assets()

@app.get("/costs")
async def get_costs_data():
    return await get_costs()
  • /assets: API endpoint called when Claude requests the asset list
  • /costs: API endpoint called when Claude requests cost data
  • /health: Health check endpoint used to verify the MCP server’s status

cost_api_client.py: Provides cost data

Python

async def get_costs():
    return {
        "costs": [
            {
                "cloud_provider": "<CSP_NAME>",  # Example: AWS, Azure, GCP, etc.
                "monthly_costs": [
                    {
                        "month": "<YYYY-MM>",  # Example: 2025-03
                        "total": "<TOTAL_COST>",
                        "by_service": {
                            "<SERVICE_NAME_1>": "<COST_1>",
                            "<SERVICE_NAME_2>": "<COST_2>",
                            # ...
                        }
                    },
                    # ...Multiple monthly cost data
                ]
            },
            # ...Multiple CSP
        ]
    }
  • It provides cost data based on dummy data rather than actual OpsNow API integration. 
  • When Claude receives requests like "What is the cost for this month?", this data will be returned.
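To show how a client such as Claude could answer "What is the cost for this month?" from this structure, here is a small helper that totals a given month across providers. The field names mirror the dummy schema above; the figures themselves are made up:

```python
def total_for_month(cost_payload: dict, month: str) -> float:
    """Sum the monthly totals across all cloud providers for one month."""
    total = 0.0
    for provider in cost_payload["costs"]:
        for entry in provider["monthly_costs"]:
            if entry["month"] == month:
                total += float(entry["total"])
    return total

# Sample payload in the same shape as the dummy data (values are invented)
sample = {
    "costs": [
        {"cloud_provider": "AWS",
         "monthly_costs": [{"month": "2025-03", "total": "120.50",
                            "by_service": {"EC2": "100.00", "S3": "20.50"}}]},
        {"cloud_provider": "Azure",
         "monthly_costs": [{"month": "2025-03", "total": "79.50",
                            "by_service": {"VM": "79.50"}}]},
    ]
}

print(total_for_month(sample, "2025-03"))  # 200.0
```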

asset_api_client.py: Provides asset data

Python

async def get_assets():
    return {
        "AWS": [
            {
                "id": "<RESOURCE_ID>",
                "type": "<RESOURCE_TYPE>",  # Example: EC2, RDS 
                "region": "<REGION_CODE>",
                "status": "<STATUS>"        # Example: running, stopped 
            },
            # ...Multiple Assets
        ]
    }

  • It provides an asset list based on dummy data, rather than actual OpsNow API integration. 
  • When Claude receives requests like "What resources are currently in use?", this data will be returned.
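Similarly, a question like "What resources are currently in use?" maps to a simple filter over this structure. The snippet assumes the same dummy shape as above; ids and regions are illustrative:

```python
def running_resources(assets: dict) -> list:
    """Collect (cloud, id) pairs for resources whose status is 'running'."""
    found = []
    for cloud, resources in assets.items():
        for r in resources:
            if r["status"] == "running":
                found.append((cloud, r["id"]))
    return found

# Sample payload in the same shape as the dummy data (values are invented)
sample_assets = {
    "AWS": [
        {"id": "i-0abc", "type": "EC2", "region": "ap-northeast-2", "status": "running"},
        {"id": "db-01", "type": "RDS", "region": "ap-northeast-2", "status": "stopped"},
    ]
}

print(running_resources(sample_assets))  # [('AWS', 'i-0abc')]
```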

How to Run

# Default Installation
pip install -r requirements.txt
# Run the server
python main.py

Full Source Code: opsnow-mcp-provider

OpsNow MCP Server

OpsNow Cost MCP Server

Now, let's take a closer look at the role of the "OpsNow Cost MCP Server" in the Claude MCP integration structure. This component receives requests from the Claude desktop application and responds with OpsNow cost data using the MCP protocol.

Before starting this section, it is highly recommended to review the following document.

Key Technologies

  • Node.js + TypeScript
  • @modelcontextprotocol/sdk: Official SDK for developing the Claude MCP server
  • node-fetch: Used for communication with the Provider API
  • zod: Schema validation library

src/index.ts: MCP server initialization

TypeScript

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Create server instance
const server = new McpServer({
  name: "cloud-cost",
  version: "1.0.0",
});

Define Input Schema and Register Tools

TypeScript

server.tool(
  "get-cost",
  "Get cloud cost summary for multiple vendors and months",
  {
    vendors: z.array(z.string()).optional().describe("List of cloud vendor names"),
    months: z.array(z.string()).optional().describe("List of months in YYYY-MM format"),
  },
  async ({ vendors, months }) => {
    ...
  }
);

Retrieve cost data from the Provider API

TypeScript

async function readCostData(): Promise<any | null> {
  const response = await fetch('http://localhost:8000/api/v1/costs/info');
  ...
  return data;
}
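For readers following the Python provider side, the same fetch-with-fallback pattern might look like the sketch below. The URL and the decision to return None on any failure are assumptions for illustration; the TypeScript above is the actual implementation:

```python
import json
from urllib import request, error

def read_cost_data(url: str = "http://localhost:8000/costs"):
    """Fetch cost data from the provider API; return None if it is unreachable."""
    try:
        with request.urlopen(url, timeout=5) as resp:
            return json.load(resp)
    except (error.URLError, ValueError):
        # Server down, bad URL, or non-JSON body: signal "no data" to the caller
        return None
```

Returning None rather than raising lets the MCP tool handler respond with a graceful "cost data is unavailable" message instead of crashing mid-conversation.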

Build

# Default Installation
npm install
# Build
npm run build

Full Source Code: opsnow-mcp-cost-server

OpsNow Asset MCP Server

Since the structure is identical to the OpsNow Cost MCP Server, the explanation is omitted.

Full Source Code: opsnow-mcp-asset-server

Usage in Claude Desktop

Environment Settings

1. Claude Desktop Settings > Developer > Edit Settings

2. Open the claude_desktop_config.json file.

Register cloud-cost-server and cloud-asset-server settings.

JSON

{
  "mcpServers": {
   "cloud-cost-server": {
      "command": "node",
      "args": [
        "/Users/tae-joolee/codeProject/opsnow-mcp-cost-server/build/index.js"
      ]
    },
    "cloud-asset-server": {
      "command": "node",
      "args": [
        "/Users/tae-joolee/codeProject/opsnow-mcp-asset-server/build/index.js"
      ]
    }
  }
}

3. Verify successful registration

Check for the hammer (tools) icon in the prompt input field; the number next to it shows how many tools are registered.

You can view the list of registered MCP servers by clicking the icon.

Usage Example

Powered by the Claude Desktop agent, the integration demonstrates genuinely useful potential.

1. "What is the cloud cost for April?"

2. "Is it higher than March? If so, what’s the reason?"

3. "What is cloud usage?"

4. "Then, visualize it."

Deploying My MCP Server

Smithery.ai is a central hub that allows you to search, install, and manage Model Context Protocol (MCP) servers to extend the capabilities of large language models (LLMs). Developers can use Smithery to integrate various external tools and data with LLMs, enabling the creation of more powerful AI systems.

Key Features of Smithery.ai

  • MCP Server Registry: Smithery hosts over 4,500 MCP servers, enabling LLMs to perform a wide range of functions. For example, it supports integrations with GitHub, Google Drive, PostgreSQL, Slack, Brave Search, and more.

  1. Log in to Smithery. Click the login button at the top right of the screen; you’ll need to sign in with your GitHub account.

  2. Click "+Add Server" at the top right of the screen.

  3. Select one of the MCP server projects registered in your GitHub account.

  4. Here, we select opsnow-mcp-asset-server-no-server.

  5. Click the Create button.

  6. Enter the information and click the Save button.

  7. The deployment runs automatically (it can also be triggered manually).

Registering the Deployed MCP Server

1. Open the claude_desktop_config.json file.

Register the MCP server information for the deployed cost and asset servers.

JSON

{
  "mcpServers": {
    "opsnow-mcp-cost-server-no-server": {
      "command": "npx",
      "args": [
        "-y",
        "@smithery/cli@latest",
        "run",
        "@taejulee/opsnow-mcp-cost-server-no-server",
        "--key",
        "19e6494b-1fc7-47dc-81fb-2c26fb2f0277"
      ]
    },
    "opsnow-mcp-asset-server-no-server": {
      "command": "npx",
      "args": [
        "-y",
        "@smithery/cli@latest",
        "run",
        "@taejulee/opsnow-mcp-asset-server-no-server",
        "--key",
        "19e6494b-1fc7-47dc-81fb-2c26fb2f0277"
      ]
    }
  }
}

Future Expansion Proposal: Claude MCP-Based OpsNow Integration Architecture

  • OpsNow Desktop Application
    • It is a hybrid desktop application that integrates control of various MCP-supported LLM desktop apps, such as Claude and ChatGPT.
    • Supports various MCP-compatible LLMs (Claude, GPT, etc.)
    • Integrates with third-party MCP servers (Sequential Thinking, GitHub, etc.) and the internal OpsNow MCP Server
    • Users can directly enter their LLM API Key to connect to the desired model
    • Provides a natural workflow within the Chat UI using the OpsNow ReAct (Reasoning + Acting) framework
  • OpsNow MCP Provider
    • It serves as an API service that provides OpsNow's asset, cost, and connection information to external LLMs.
      • Endpoint server serving data based on OpsNow Resources
      • Built-in License Key Manager:
        • Authentication and Authorization Isolation via OpsNow License Key
        • Company-Specific Mapping by Key
        • Provide Customized Information for Each Customer (e.g., limit access to specific service ranges)
        • Capability Extension through OpsNow API Bridge and Connect API
  • License Key-Based Control System
    • All MCP communications are authenticated based on the License Key, providing the following security/control features:
      • Limit Call Scope by Key (e.g., restrict access to specific asset queries)
      • Provide Dedicated Data Based on Customer ID
      • Ensure Scalability for Multiple Companies and Customers
  • Diverse MCP Compatibility and Third-Party Integration
    • As MCP support strengthens among other LLM vendors (e.g., the GPT and Gemini families), third-party LLMs can also be integrated through the MCP server, in addition to Claude.
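The license-key gate proposed above can be sketched as a simple lookup that resolves a key to a company and its permitted call scopes before any MCP request is served. Everything in this sketch is invented for illustration (key values, company names, scope strings); it only shows the shape of the control, not a real OpsNow component:

```python
# Hypothetical license-key registry: key -> (company, allowed call scopes)
LICENSE_KEYS = {
    "KEY-ACME-001": ("acme", {"cost:read", "asset:read"}),
    "KEY-BETA-002": ("beta", {"cost:read"}),
}

def authorize(license_key: str, scope: str):
    """Resolve a license key to its company if the scope is permitted, else None."""
    entry = LICENSE_KEYS.get(license_key)
    if entry is None:
        return None  # unknown key: reject the MCP call outright
    company, scopes = entry
    return company if scope in scopes else None

print(authorize("KEY-BETA-002", "cost:read"))   # beta
print(authorize("KEY-BETA-002", "asset:read"))  # None
```

Because the key also resolves to a company, the same check doubles as the customer-specific mapping: the provider can use the returned company ID to serve that tenant's dedicated data.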

Through this system design and PoC, we have confirmed that Claude-based AI can extend beyond simple Q&A to become an agent that directly queries and interprets OpsNow's FinOps data.

Through this structure, we have discovered the following possibilities for the future:

  • AI-based operational analysis automation for all resources within the company
  • Integration of a customer-specific portal with LLM agents
  • User-specific LLM model selection → License-based control → Completion of natural language interface

Claude MCP has now moved beyond the experimental introduction phase and is becoming a core tool for AI operations. Based on this, OpsNow aims to leap forward by offering new user interfaces and enhanced customer experiences.

2. Integration of OpsNow Prime with MCP

MCP technology will also be integrated into OpsNow Prime, the on-premises CMP solution scheduled for release in the first half of 2025. This integration enables infrastructure automation through AI, allowing users to perform complex tasks with simple commands.

OpsNow Prime is a Private & On-Premises Cloud Management Platform designed to manage enterprise IT infrastructure and cloud resources. By integrating MCP, we can demonstrate how an AI assistant simplifies infrastructure operations. In simple terms, users can issue commands through a chat interface, and the AI handles tasks like creating or shutting down virtual machines (VMs). For example, when an administrator asks the OpsNow AI chatbot, “Create a new virtual server,” the AI will automatically call the appropriate virtualization platform API to provision the VM and return the result. Previously, developing such services required significant time, effort, and resources; with MCP as an open standard, these integrations become easier and more efficient.

The OpsNow team has developed a dedicated MCP server that integrates internal systems, such as virtualization hypervisor APIs, into the standardized Model Context Protocol (MCP) framework. By connecting large language models (LLMs) like Anthropic Claude to OpsNow as MCP clients, AI can now interact directly with hypervisors like Proxmox or VMware. These models can execute real-world infrastructure tasks based on natural language commands. Within OpsNow, a specialized module called the MCP Provider acts as an adapter, bridging various hypervisors and OpsNow’s native APIs. For example, when a user instructs the AI to “delete a VM,” the MCP Provider identifies the correct platform, performs the appropriate API call, and sends the result back through the MCP interface. This seamless integration lets users manage infrastructure effortlessly, with no need to access complex consoles: simply speak, and the AI handles the rest.

Let’s explore a real-world use case where Claude Desktop utilizes the OpsNow Prime MCP Server to easily create and activate virtual machines (VMs) on Proxmox, with full lifecycle management handled by OpsNow Prime.

This overview covers the entire process—from training the MCP server to deployment and operation. The MCP server was developed using Cursor, and it's built on Node.js. The basic server structure was obtained from the official MCP repository on GitHub.

To begin, we cloned the source code from the example MCP server Git repository to utilize the basic structure. Using the Postman collection file and TypeScript files provided, we studied the architecture and workflow, which allowed us to successfully create the OpsNow Prime MCP server.

Postman collection file

JSON

"item": [
   {
      "name": "Proxmox",
      "item": [
         {
            "name": "VM list lookup (PRX-IF-VM-003)",
            "request": {
               "auth": {
                  "type": "bearer",
                  ... (omitted)
               },
               ... (omitted)
            },
            "response": [
               {
                  "name": "VM list lookup (PRX-IF-VM-003)",
                  ... (omitted)
                  ],
... (omitted)

route.ts

TypeScript

import { Express, Request, Response } from 'express';
import { ProxmoxApiManager } from './proxmox/proxmoxApiManager';
import { getLogger } from './utils/logger';

const logger = getLogger('Routes');
const proxmoxManager = ProxmoxApiManager.getInstance();

export const setupRoutes = (app: Express) => {
  // Default health check endpoint
  app.get('/api/health', (req: Request, res: Response) => {
    res.status(200).json({ status: 'ok', message: 'Prime MCP server is running normally.' });
  });

  // Node related endpoints
  app.get('/api/nodes', async (req: Request, res: Response) => {
    try {
      const result = await proxmoxManager.getNodes();
      res.status(200).json(result);
    } catch (error) {
      logger.error('Node list lookup failed:', error);
      res.status(500).json({ error: 'An error occurred trying to query the node list.' });
    }
  });

// ....(skip)

After training is complete, source files for the OpsNow Prime MCP server are generated from the learned results. To build it, use the commands below; Node.js version 20 or higher is required. After installing the required modules and completing the build, verify that the server starts properly. For the OpsNow Prime MCP server, results varied somewhat depending on the LLM model, so we conducted retraining and testing to improve accuracy.

Shell

## Check node version
node --version

## Build the project
npm install && npm run build

## Check that the server starts normally
node dist/index.js

When you ask Cursor to run tests for the OpsNow Prime MCP server, the API call tests are executed, and if any issues arise, Cursor fixes them automatically. Once you confirm there are no errors, click the settings button at the top of Cursor to register the newly created MCP server.

Click on "MCP" on the left to navigate to the MCP management screen.

Click on "Add New Global MCP Server" in the screen above, and specify the absolute path of the index.js file generated after the build in the "args" section.

After that, when you create a server or retrieve instance information through Cursor or Claude, the relevant API is called to perform the requested operation. If the API definitions or authentication information are incorrect, errors may occur several times, but the system automatically goes through the retraining and rebuilding process via Cursor.

While the MCP server can also be used in Cursor, we tested it across platforms to verify that it works well in a cross-platform environment. Below is how to register the MCP server in Claude.

Now, we're ready to use the learned MCP server to manage resources. Simply enter "Create 1 VM in Prime Proxmox" in Claude Desktop, and you'll be working with the OpsNow Prime MCP server to create the VM.

You can quickly confirm that the VMs are created in Proxmox, and you can also check their status in OpsNow Prime. Notably, even when storage connectivity errors occurred during the process, alternative solutions were selected and the VM creation completed without issue. The created VMs can be viewed in real time from the Proxmox console, and their status can also be monitored in real time from the VM management menu in OpsNow Prime.

Now, let's abort/delete the created resources. If you type "Delete the VM you just created" in Claude Desktop, it will automatically call the API to stop the VM you just created and then delete it.

You can see in real-time from the Proxmox console that the VM has been deleted.

OpsNow's use of MCP has already been demonstrated through developed and tested features such as VM creation, querying, and deletion within the Proxmox virtualization environment.

We are also expanding additional capabilities, such as shutting down VMs. For instance, what used to require an administrator to manually create a VM through the Proxmox Web interface can now be automated simply by instructing the OpsNow AI assistant.

Interestingly, these capabilities are not limited to specific virtualization offerings; they can be readily generalized to VMware, OpenStack, and more. Since the OpsNow team has already built its connectivity around the MCP standard, integrating new platforms is relatively easy. Moreover, as mentioned earlier, even in a closed-network data center environment, deploying open-source LLMs and MCP servers can provide the same AI capabilities.

OpsNow Prime also has a roadmap to support MCP servers for detecting and automating unused or abnormal resources, automating system failures, and managing ITSM (Resource Application/Authorization), expanding beyond just managing resource creation and deletion in the context of On-Premise infrastructure operations.

As demonstrated in the OpsNow case, the introduction of MCP is expected to empower AI to act as the "hands and feet" of the data center, ushering in a new era of automation in infrastructure operations.

____________________________________________________________________________________________________________________________________

Anthropic's MCP has emerged as a universal connector, bringing the world's information to isolated AIs, and its scope of use is steadily expanding. While it's still in the early stages, it has garnered significant attention from the industry, and practical advantages are rapidly being proven through real-world cases like OpsNow. It will be interesting to see how quickly MCP becomes a standard and integrates into everyday services. The "MCP era", connecting data and AI, is fast approaching.

Get Started Today with OpsNow

Insight

With the AI Agent Standard, MCP Connecting Claude to OpsNow

OpsNow Team

The introduction to Anthropic's Model Context Protocol (MCP)

Why We Need a New Connectivity Standard for AI: The background of the rise of MCP

From the picture above:

Left: Before MCP
Without a standard protocol, each AI model must connect to tools like Slack, Google Drive, and GitHub using its own unique API. This results in fragmented connections and complex, repetitive integrations.

Right: After MCP
With MCP, models can access various tools through a single, standardized interface. Integration becomes much simpler and more unified across systems.

Large language models (LLMs), often referred to as AI chatbots or virtual assistants, are highly intelligent but still face a significant limitation, they are typically disconnected from the outside world. In simple terms, no matter how advanced AI becomes, it will always have constraints if it cannot access essential data, like the Internet or internal company databases. For example, if you ask current AI to "generate a graph using our sales data," it struggles because it doesn't have direct access to that data. This issue has led to numerous efforts to link AI with external tools and data sources. However, the traditional approach involved writing custom code for each AI model and tool, with each new tool requiring complex development. It’s similar to the past when different "cables" were needed for each printer or keyboard, there were various "specifications" between AI and the tools.

Anthropic has introduced the MCP (Model Context Protocol) as an open standard protocol designed to solve this issue.

It can be simplified as "USB-C for AI." Just like USB-C connects multiple devices through a single, unified port, MCP serves as the standard for linking AI applications with various external systems in a standardized way. This eliminates redundancy and simplifies integration between different systems. For instance, if three AI applications need to connect to three tools, without MCP, you would need to establish 3×3, or 9 separate connections. With MCP, only 3+3, or 6 standardized connections are required. This makes AI-tool connections more efficient, allowing AI to access and use the necessary data for improved responses.

Latest Trends: Claude and the Industry’s Shift Toward MCP Adoption

The growing trend of linking AI models with external tools isn’t unique to Anthropic. Since 2023, OpenAI’s ChatGPT has been expanding its capabilities through plugins and function calling, enabling tasks like web searches and calculator functions. Microsoft has also integrated ChatGPT into Bing to enhance web search functionality. These efforts reflect a broader industry movement focused on making AI more practical and integrated into daily workflows. In this landscape, Anthropic made a notable impact by introducing the Model Context Protocol (MCP) in November 2024, offering a standardized approach that’s quickly gaining recognition as a potential industry standard. Claude, Anthropic’s AI assistant, comes equipped with a built-in MCP client, making it easy for apps powered by Claude to connect directly to MCP-compatible servers. For example, in Claude’s desktop application, users can connect services like Google Drive, Slack, and GitHub with just a few clicks, allowing Claude to retrieve relevant information from those platforms in real time.

After MCP’s launch, many developer tools and services quickly began to support the protocol. GitHub Copilot and the development IDE Cursor were among the first to announce MCP integration. Other platforms, including Zed Editor, Replit, Codeium, and Sourcegraph, are also embedding MCP into their ecosystems to enhance how AI assistants understand and interact with code. In summary, the push to connect AI models with external systems is a major trend in the tech industry, and MCP is rapidly gaining traction as an open standard. Some experts say the MCP has actually gained momentum enough to become a favorite in the race to define the “AI Agent Standard” between 2023 and 2025.

As more companies adopt MCP or experiment with similar solutions, the industry is seeing rapid progress in making AI assistants more powerful, flexible, and useful in practical environments.

MCP Strengths: Supporting Diverse LLMs and Operating in Closed Network

MCP is gaining industry attention for several compelling advantages. First and foremost, it standardizes the way AI models connect with tools, significantly improving development efficiency. For developers, this means that once a new AI service is integrated using the MCP standard, it can be reused across multiple systems, streamlining workflows and boosting productivity. This standardized approach also leads to more consistent code implementation, resulting in greater service stability and easier maintenance. From the end-user perspective, this translates into a smoother, more unified experience when interacting with AI features across different applications.

Another key benefit is MCP’s flexibility, as it’s not tied to any specific AI vendor or model. It supports a wide range of large language models (LLMs), including proprietary systems like Anthropic’s Claude and OpenAI’s GPT series, as well as open-source alternatives. For example, developers can utilize open-source LLMs like Llama 2 or DeepSeek by simply meeting the MCP standard, avoiding lock-in to a single provider’s API. This flexibility extends to closed-network environments—organizations operating in secure, internet-restricted settings can still deploy AI assistants by installing open-source models and MCP connectors internally. Even without external internet access, AI functionality can be implemented as long as the MCP protocol is followed, making it ideal for high-security or offline systems.

Additionally, MCP is designed to work across various infrastructure environments. It’s compatible with a wide range of hypervisors and virtualization platforms, whether on-premises or in the cloud. This includes solutions like VMware, Proxmox, and OpenStack, all of which can be managed by AI through a single MCP interface. Since MCP operates independently of specific infrastructure types, organizations can integrate AI without overhauling their existing systems. Moreover, MCP is built for two-way communication: AI can send requests to external tools and receive real-time notifications when specific events occur. Access control is also highly customizable; the MCP server can define granular permissions to ensure AI only accesses the necessary data, preserving security and privacy. In summary, MCP stands out as a modern integration standard that balances flexibility, compatibility, and secure deployment across diverse IT environments.

OpsNow Use Case: Conversational Virtual Machine Management with MCP

OpsNow demonstrates practical MCP integration through two use cases. Setting up MCP-based workflows is simple, and the foundational setup can be done as follows:

  1. Combine OpsNow FinOps with MCP.

  • Third-Party MCP Servers
    • Connects to a variety of publicly available external MCP servers to extend capabilities.
  • LLM Vendor Desktop Application (Claude)
    • Claude, Anthropic’s powerful LLM, connects seamlessly with various MCP servers to process user commands in natural language. It interprets user requests, initiates the appropriate MCP server calls, and facilitates interaction with external systems.
  • OpsNow MCP Server
    • Claude connects to two custom-built MCP servers designed specifically to access cloud cost and asset information within OpsNow.
      • OpsNow Cost MCP Server: Retrieves cloud cost information from OpsNow.
      • OpsNow Asset MCP Server: Provides detailed data on active assets, including servers, networks, databases, and more.
  • OpsNow MCP Provider
    • Each MCP server is internally connected to the OpsNow MCP Provider, a component that bridges Claude’s MCP requests to the actual OpsNow API.
  • OpsNow Resources
    • The actual data is provided by OpsNow’s internal system, known as OpsNow Resources.

In this context, existing FinOps services and resources are referred to as OpsNow Resources. We'll explore the practicality of MCP by developing the green-highlighted section in a simplified way. For now, the setup will operate using dummy data for testing purposes rather than live production data.

OpsNow MCP Provider

This project acts as a provider that enables Claude to communicate with MCP servers to retrieve asset and cost information from OpsNow. The overall system is structured as follows:

  • Asynchronous web server built with FastAPI
  • API client for asset queries (asset_api_client.py)
  • API client for cost queries (cost_api_client.py)
  • Main application that handles requests from the MCP server (main.py)

main.py: The entry point for handling MCP requests

Python

from fastapi import FastAPI
from asset_api_client import get_assets
from cost_api_client import get_costs

app = FastAPI()

@app.get("/health")
async def health_check():
    return {"status": "ok"}

@app.get("/assets")
async def get_assets_data():
    return await get_assets()

@app.get("/costs")
async def get_costs_data():
    return await get_costs()
  • /assets: API endpoint called when Claude requests the asset list
  • /costs: API endpoint called when Claude requests cost data
  • /health: Health check endpoint used to verify the MCP server’s status
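Because the provider exposes plain HTTP endpoints, its contract can be smoke-tested without any MCP tooling. The sketch below is illustrative only: it stands up a stdlib stub with the same three routes and placeholder payloads (none of this is the actual OpsNow code) and queries /health.

```python
# Minimal stdlib stub of the provider's HTTP surface, assuming the three
# endpoints described above (/health, /assets, /costs). Payloads are placeholders.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

DUMMY = {
    "/health": {"status": "ok"},
    "/assets": {"AWS": []},   # placeholder asset payload
    "/costs": {"costs": []},  # placeholder cost payload
}

class ProviderStub(BaseHTTPRequestHandler):
    def do_GET(self):
        payload = DUMMY.get(self.path)
        status = 200 if payload is not None else 404
        body = json.dumps(payload if payload is not None else {"error": "not found"}).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to an ephemeral port and serve from a background thread
server = HTTPServer(("127.0.0.1", 0), ProviderStub)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
    health = json.load(resp)
print(health)  # {'status': 'ok'}
server.shutdown()
```

The same pattern works for /assets and /costs, which is handy for checking the Claude-facing MCP servers against a known payload before wiring in real data.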

cost_api_client.py: Provides cost data

Python

async def get_costs():
    return {
        "costs": [
            {
                "cloud_provider": "<CSP_NAME>",  # Example: AWS, Azure, GCP, etc.
                "monthly_costs": [
                    {
                        "month": "<YYYY-MM>",  # Example: 2025-03
                        "total": "<TOTAL_COST>",
                        "by_service": {
                            "<SERVICE_NAME_1>": "<COST_1>",
                            "<SERVICE_NAME_2>": "<COST_2>",
                            # ...
                        }
                    },
                    # ...Multiple monthly cost data
                ]
            },
            # ...Multiple CSP
        ]
    }
  • It provides cost data based on dummy data rather than actual OpsNow API integration. 
  • When Claude receives requests like "What is the cost for this month?", this data will be returned.
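To see how a prompt like "Is it higher than last month?" maps onto this payload shape, here is a small illustrative sketch. The async client and all figures below are invented sample values, not OpsNow data.

```python
# Hedged sketch: aggregate the dummy cost payload by month and compare.
import asyncio

async def get_costs():
    # Same shape as cost_api_client.py, filled with made-up numbers
    return {
        "costs": [
            {
                "cloud_provider": "AWS",
                "monthly_costs": [
                    {"month": "2025-03", "total": 1200.0,
                     "by_service": {"EC2": 900.0, "RDS": 300.0}},
                    {"month": "2025-04", "total": 1350.0,
                     "by_service": {"EC2": 1000.0, "RDS": 350.0}},
                ],
            }
        ]
    }

def month_total(payload, month):
    # Sum a month's total across every cloud provider in the payload
    return sum(
        m["total"]
        for provider in payload["costs"]
        for m in provider["monthly_costs"]
        if m["month"] == month
    )

payload = asyncio.run(get_costs())
march, april = month_total(payload, "2025-03"), month_total(payload, "2025-04")
print(april - march)  # 150.0
```

This is essentially the arithmetic Claude performs conversationally when asked to compare months.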

asset_api_client.py: Provides asset data

Python

async def get_assets():
    return {
        "AWS": [
            {
                "id": "<RESOURCE_ID>",
                "type": "<RESOURCE_TYPE>",  # Example: EC2, RDS 
                "region": "<REGION_CODE>",
                "status": "<STATUS>"        # Example: running, stopped 
            },
            # ...Multiple Assets
        ]
    }

  • It provides an asset list based on dummy data, rather than actual OpsNow API integration. 
  • When Claude receives requests like "What resources are currently in use?", this data will be returned.
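Answering "What resources are currently in use?" from this payload is a simple filter and count. The sketch below uses invented sample records mirroring the shape above; it is not OpsNow code.

```python
# Illustrative only: count running resources from the dummy asset payload.
import asyncio
from collections import Counter

async def get_assets():
    # Same shape as asset_api_client.py, with made-up resource records
    return {
        "AWS": [
            {"id": "i-0001", "type": "EC2", "region": "ap-northeast-2", "status": "running"},
            {"id": "i-0002", "type": "EC2", "region": "ap-northeast-2", "status": "stopped"},
            {"id": "db-001", "type": "RDS", "region": "us-east-1", "status": "running"},
        ]
    }

assets = asyncio.run(get_assets())
# Flatten across providers and keep only running resources
running = [a for group in assets.values() for a in group if a["status"] == "running"]
by_type = Counter(a["type"] for a in running)
print(len(running), dict(by_type))  # 2 {'EC2': 1, 'RDS': 1}
```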

How to Run

# Default Installation
pip install -r requirements.txt
# Run the server
python main.py

Full Source Code: opsnow-mcp-provider

OpsNow MCP Server

OpsNow Cost MCP Server

Now, let's take a closer look at the role of the "OpsNow Cost MCP Server" in the Claude MCP integration structure. This component receives requests from the Claude desktop application and responds with OpsNow cost data using the MCP protocol.

Before starting this section, it is highly recommended to review the following document.

Key Technologies

  • Node.js + TypeScript
  • @modelcontextprotocol/sdk: Official SDK for developing the Claude MCP server
  • node-fetch: Used for communication with the Provider API
  • zod: Schema validation library

src/index.ts: MCP server initialization

TypeScript

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Create server instance
const server = new McpServer({
  name: "cloud-cost",
  version: "1.0.0",
});

Define Input Schema and Register Tools

TypeScript

server.tool(
  "get-cost",
  "Get cloud cost summary for multiple vendors and months",
  {
    vendors: z.array(z.string()).optional().describe("List of cloud vendor names"),
    months: z.array(z.string()).optional().describe("List of months in YYYY-MM format"),
  },
  async ({ vendors, months }) => {
    ...
  }
);

Retrieve cost data from the Provider API

TypeScript

async function readCostData(): Promise<any | null> {
  const response = await fetch('http://localhost:8000/api/v1/costs/info');
  ...
  return data;
}

Build

# Default Installation
npm install
# Build
npm run build

Full Source Code: opsnow-mcp-cost-server

OpsNow Asset MCP Server

Since the structure is identical to the OpsNow Cost MCP Server, the explanation is omitted.

Full Source Code: opsnow-mcp-asset-server

Usage in Claude Desktop

Environment Settings

1. Claude Desktop Settings > Developer > Edit Settings

2. Open the claude_desktop_config.json file.

Register cloud-cost-server and cloud-asset-server settings.

JSON

{
  "mcpServers": {
   "cloud-cost-server": {
      "command": "node",
      "args": [
        "/Users/tae-joolee/codeProject/opsnow-mcp-cost-server/build/index.js"
      ]
    },
    "cloud-asset-server": {
      "command": "node",
      "args": [
        "/Users/tae-joolee/codeProject/opsnow-mcp-asset-server/build/index.js"
      ]
    }
  }
}

3. Verify successful registration

Check for the hammer icon (with the number 2) in the prompt input field.

You can view the registered MCP server list by clicking it.

Usage Example

The integration, powered by the Claude Desktop agent, demonstrates genuinely useful potential.

1. "What is the cloud cost for April?"

2. "Is it higher than March? If so, what’s the reason?"

3. "What is cloud usage?"

4. Then, visualize it.

Deploying My MCP Server

Smithery.ai is a central hub that allows you to search, install, and manage Model Context Protocol (MCP) servers to extend the capabilities of large language models (LLMs). Developers can use Smithery to integrate various external tools and data with LLMs, enabling the creation of more powerful AI systems.

Key Features of Smithery.ai

  • MCP Server Registry: Smithery hosts over 4,500 MCP servers, enabling LLMs to perform a wide range of functions. For example, it supports integrations with GitHub, Google Drive, PostgreSQL, Slack, Brave Search, and more.

To deploy your own MCP server:

  1. Log in to Smithery. Click the login button at the top right of the screen; you’ll need to log in using your GitHub account.

  2. Click "+Add Server" at the top right of the screen.

  3. Select one of the MCP Server projects registered in my GitHub.

  4. Here, I will select opsnow-mcp-asset-server-no-server.

  5. Click the Create button.

  6. Enter the information and click the Save button.

  7. The deployment is performed automatically (though it can also be done manually).

Registering the Deployed MCP Server

1. Open the claude_desktop_config.json file.

Register the MCP server information for cloud-cost-server and cloud-asset-server.

JSON

{
  "mcpServers": {
    "opsnow-mcp-cost-server-no-server": {
      "command": "npx",
      "args": [
        "-y",
        "@smithery/cli@latest",
        "run",
        "@taejulee/opsnow-mcp-cost-server-no-server",
        "--key",
        "19e6494b-1fc7-47dc-81fb-2c26fb2f0277"
      ]
    },
    "opsnow-mcp-asset-server-no-server": {
      "command": "npx",
      "args": [
        "-y",
        "@smithery/cli@latest",
        "run",
        "@taejulee/opsnow-mcp-asset-server-no-server",
        "--key",
        "19e6494b-1fc7-47dc-81fb-2c26fb2f0277"
      ]
    }
  }
}

Future Expansion Proposal: Claude MCP-Based OpsNow Integration Architecture

  • OpsNow Desktop Application
    • It is a hybrid desktop application that integrates control of various MCP-supported LLM desktop apps, such as Claude and ChatGPT.
    • Supports various MCP-compatible LLMs (Claude, GPT, etc.)
    • Integrates with third-party MCP servers (Sequential Thinking, GitHub, etc.) and the internal OpsNow MCP Server
    • Users can directly enter their LLM API Key to connect to the desired model
    • Provides a natural workflow within the Chat UI using the OpsNow ReAct (Reasoning + Acting) framework
  • OpsNow MCP Provider
    • It serves as an API service that provides OpsNow's asset, cost, and connection information to external LLMs.
      • Endpoint server serving data based on OpsNow Resources
      • Built-in License Key Manager:
        • Authentication and Authorization Isolation via OpsNow License Key
        • Company-Specific Mapping by Key
        • Provide Customized Information for Each Customer (e.g., limit access to specific service ranges)
        • Capability Extension through OpsNow API Bridge and Connect API
  • License Key-Based Control System
    • All MCP communications are authenticated based on the License Key, providing the following security/control features:
      • Limit Call Scope by Key (e.g., restrict access to specific asset queries)
      • Provide Dedicated Data Based on Customer ID
      • Ensure Scalability for Multiple Companies and Customers
  • Diverse MCP Compatibility and Third-Party Integration
    • With the strengthened MCP support from LLM vendors like GPT and Gemini, third-party LLMs can also be integrated through the MCP server, in addition to Claude.
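The License Key-based control described above could be sketched roughly as follows. The keys, scopes, and company mappings here are entirely hypothetical; the actual OpsNow implementation is internal.

```python
# Hypothetical sketch of License Key-scoped authorization for MCP calls.
# All key values, scopes, and company names are invented for illustration.
LICENSE_KEYS = {
    "key-acme-001": {"company": "acme", "scopes": {"assets:read", "costs:read"}},
    "key-beta-002": {"company": "beta", "scopes": {"costs:read"}},
}

def authorize(license_key: str, required_scope: str) -> str:
    """Return the company for a key if it grants the scope, else raise."""
    entry = LICENSE_KEYS.get(license_key)
    if entry is None:
        raise PermissionError("unknown license key")
    if required_scope not in entry["scopes"]:
        raise PermissionError(f"scope {required_scope!r} not granted")
    return entry["company"]

print(authorize("key-acme-001", "assets:read"))  # acme
```

Resolving the company from the key is what would allow the provider to serve customer-specific data while limiting each key's call scope.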

Through this system design and PoC, we have confirmed that Claude-based AI can extend beyond simple Q&A to become an agent that directly queries and interprets OpsNow's FinOps data.

Through this structure, we have discovered the following possibilities for the future:

  • AI-based operational analysis automation for all resources within the company
  • Integration of a customer-specific portal with LLM agents
  • User-specific LLM model selection → License-based control → Completion of natural language interface

Claude MCP has now moved beyond the experimental introduction phase and is becoming a core tool for AI operations. Based on this, OpsNow aims to leap forward by offering new user interfaces and enhanced customer experiences.

2. Integration of OpsNow Prime with MCP

MCP technology will also be integrated into OpsNow Prime, the on-premises CMP solution scheduled for release in the first half of 2025. This integration enables infrastructure automation through AI, allowing users to perform complex tasks with simple commands.

OpsNow Prime is a Private & On-Premises Cloud Management Platform designed to manage enterprise IT infrastructure and cloud resources. By integrating MCP, we can demonstrate how an AI assistant simplifies infrastructure operations. In simple terms, users can issue commands through a chat interface, and the AI handles tasks like creating or shutting down virtual machines (VMs). For example, when an administrator asks the OpsNow AI chatbot, “Create a new virtual server,” the AI will automatically call the appropriate virtualization platform API to provision the VM and return the result. Previously, building such integrations required significant time, effort, and resources; with MCP as an open standard, they become much easier and more efficient to assemble.

The OpsNow team has developed a dedicated MCP server that integrates internal systems, such as virtualization hypervisor APIs, into the standardized Model Context Protocol (MCP) framework. By connecting large language models (LLMs) like Anthropic Claude to OpsNow as MCP clients, AI can now interact directly with hypervisors like Proxmox or VMware. These models can execute real-world infrastructure tasks based on natural language commands. Within OpsNow, a specialized module called the MCP Provider acts as an adapter, bridging various hypervisors and OpsNow’s native APIs. For example, when a user instructs the AI to “delete a VM,” the MCP Provider identifies the correct platform, performs the appropriate API call, and sends the result back through the MCP interface. This seamless integration allows users to manage infrastructure effortlessly, with no need to access complex consoles. Simply speak, and the AI handles the rest.
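As a rough illustration of that bridging step, a provider-side helper might translate a "create a VM" intent into a Proxmox REST call like this. The helper name and parameters are assumptions for this sketch (not OpsNow's actual code), though the endpoint path follows the public Proxmox VE API.

```python
# Illustrative sketch: map a natural-language "create a VM" intent onto a
# Proxmox VE API request. Helper and defaults are hypothetical.
def build_vm_create_request(node: str, vmid: int, name: str,
                            cores: int = 2, memory_mb: int = 2048):
    """Describe the HTTP call the MCP Provider would issue to Proxmox."""
    return {
        "method": "POST",
        # Proxmox VE exposes VM creation under the node's qemu collection
        "path": f"/api2/json/nodes/{node}/qemu",
        "params": {"vmid": vmid, "name": name,
                   "cores": cores, "memory": memory_mb},
    }

req = build_vm_create_request("pve01", 101, "demo-vm")
print(req["path"])  # /api2/json/nodes/pve01/qemu
```

A "delete a VM" intent would map the same way onto a DELETE against the corresponding VM resource, with the Provider supplying authentication and the node/VM lookup.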

Let’s explore a real-world use case where Claude Desktop utilizes the OpsNow Prime MCP Server to easily create and activate virtual machines (VMs) on Proxmox, with full lifecycle management handled by OpsNow Prime.

This overview covers the entire process—from training the MCP server to deployment and operation. The MCP server was developed using Cursor, and it's built on Node.js. The basic server structure was obtained from the official MCP repository on GitHub.

To begin, we cloned the source code from the example MCP server Git repository to utilize the basic structure. Using the Postman collection file and TypeScript files provided, we studied the architecture and workflow, which allowed us to successfully create the OpsNow Prime MCP server.

Postman collection file

JSON

"item":[
   {
      "name":"Proxmox",
      "item":[
         {
            "name":"VM list lookup (PRX-IF-VM-003)",
            "request":{
               "auth":{
                  "type":"bearer",
                  ... (omitted)
               },
               ... (omitted)
            },
            "response":[
               {
                  "name":"VM list lookup (PRX-IF-VM-003)",
                  ... (omitted)
                  ],
... (omitted)

route.ts

TypeScript

import { Express, Request, Response } from 'express';
import { ProxmoxApiManager } from './proxmox/proxmoxApiManager';
import { getLogger } from './utils/logger';

const logger = getLogger('Routes');
const proxmoxManager = ProxmoxApiManager.getInstance();

export const setupRoutes = (app: Express) => {
  // Default health check endpoint
  app.get('/api/health', (req: Request, res: Response) => {
    res.status(200).json({ status: 'ok', message: 'Prime MCP server is running normally.' });
  });

  // Node related endpoints
  app.get('/api/nodes', async (req: Request, res: Response) => {
    try {
      const result = await proxmoxManager.getNodes();
      res.status(200).json(result);
    } catch (error) {
      logger.error('Node list lookup failed:', error);
      res.status(500).json({ error: 'An error occurred trying to query the node list.' });
    }
  });

// ....(skip)

After the training step is complete, source files for the OpsNow Prime MCP server are generated from the learned results. Build the project with the commands below; Node.js version 20 or higher is required. After installing the required modules and running the build, verify that the server starts properly. Results for the OpsNow Prime MCP server varied somewhat between LLM models, so we conducted retraining and testing to improve accuracy.

Shell

## Check node version
node --version

## Build the project
npm install && npm run build

## Check that the server starts normally
node dist/index.js

When you ask Cursor to run tests for the OpsNow Prime MCP server, it executes the API call tests and automatically fixes any issues that arise. Once you confirm there are no errors, click the settings button at the top of Cursor to register the newly created MCP server.

Click on "MCP" on the left to navigate to the MCP management screen.

Click on "Add New Global MCP Server" in the screen above, and specify the absolute path of the index.js file generated after the build in the "args" section.

After that, when you create a server and retrieve instance information through Cursor or Claude, the relevant API is called to perform the requested operation. If the API's definitions or authentication information is incorrect, errors may occur multiple times, but the system will automatically go through the retraining and rebuilding process via Cursor.

While the MCP server can also be used in Cursor, we tested it functionally on multiple platforms to verify that it works well in a cross-platform environment. Below is how to register the MCP server in Claude.

Now, we're ready to use the learned MCP server to manage resources. Simply enter "Create 1 VM in Prime Proxmox" in Claude Desktop, and you'll be working with the OpsNow Prime MCP server to create the VM.

You can easily and quickly see that VMs are created in Proxmox, and you can also check the status of VMs created in OpsNow Prime. Notably, even when storage connectivity errors occurred along the way, alternative solutions were settled on and the VM creation completed without any issues. The created VMs can be viewed in real time from the Proxmox console, and you can also check their status in real time from the VM management menu in OpsNow Prime.

Now, let's stop and delete the created resources. If you type "Delete the VM you just created" in Claude Desktop, it will automatically call the API to stop the VM you just created and then delete it.

You can see in real-time from the Proxmox console that the VM has been deleted.

OpsNow's use of MCP has already been demonstrated and tested through features such as VM creation, querying, and deletion within the Proxmox virtualization environment.

We are also expanding additional capabilities, such as shutting down VMs. For instance, what used to require an administrator to manually create a VM through the Proxmox Web interface can now be automated simply by instructing the OpsNow AI assistant.

Interestingly, these capabilities are not limited to specific virtualization offerings; they can be easily generalized to VMware, OpenStack, and more. Since the OpsNow team has already built its connectivity on the MCP standard, integrating new platforms is relatively easy. Moreover, as mentioned earlier, even in a closed-network data center environment, deploying open-source LLMs and MCP servers can provide the same AI capabilities.

OpsNow Prime also has a roadmap to support MCP servers for detecting and remediating unused or abnormal resources, automating recovery from system failures, and managing ITSM (Resource Application/Authorization), expanding beyond resource creation and deletion in on-premises infrastructure operations.
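As a taste of the unused-resource detection item on that roadmap, a minimal sketch might flag VMs that have been stopped longer than some threshold. The data model and threshold below are invented for illustration; they are not OpsNow Prime's actual logic.

```python
# Illustrative sketch: flag VMs stopped beyond a threshold as "unused".
from datetime import date, timedelta

def find_unused_vms(vms, today, stopped_days_threshold=30):
    """Return ids of VMs that have been stopped longer than the threshold."""
    cutoff = today - timedelta(days=stopped_days_threshold)
    return [
        vm["id"]
        for vm in vms
        if vm["status"] == "stopped" and vm["stopped_since"] <= cutoff
    ]

# Made-up inventory records
vms = [
    {"id": "vm-101", "status": "running", "stopped_since": None},
    {"id": "vm-102", "status": "stopped", "stopped_since": date(2025, 1, 10)},
    {"id": "vm-103", "status": "stopped", "stopped_since": date(2025, 4, 1)},
]
print(find_unused_vms(vms, today=date(2025, 4, 15)))  # ['vm-102']
```

An MCP tool wrapping a check like this would let the AI both report idle resources and, with the appropriate permissions, propose shutting them down.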

As demonstrated in the OpsNow case, the introduction of MCP is expected to empower AI to act as the "hands and feet" of the data center, ushering in a new era of automation in infrastructure operations.

____________________________________________________________________________________________________________________________________

Anthropic's MCP has emerged as a universal connector, bringing the world's information to isolated AIs, and its scope of use is steadily expanding. While it's still in the early stages, it has garnered significant attention from the industry, and practical advantages are rapidly being proven through real-world cases like OpsNow. It will be interesting to see how quickly MCP becomes a standard and integrates into everyday services. The "MCP era", connecting data and AI, is fast approaching.

With the AI Agent Standard, MCP Connecting Claude to OpsNow

The introduction to Anthropic's Model Context Protocol (MCP)

Why We Need a New Connectivity Standard for AI: The background of the rise of MCP

From the picture above:

Left: Before MCP
Without a standard protocol, each AI model must connect to tools like Slack, Google Drive, and GitHub using its own unique API. This results in fragmented connections and complex, repetitive integrations.

Right: After MCP
With MCP, models can access various tools through a single, standardized interface. Integration becomes much simpler and more unified across systems.

Large language models (LLMs), often referred to as AI chatbots or virtual assistants, are highly intelligent but still face a significant limitation, they are typically disconnected from the outside world. In simple terms, no matter how advanced AI becomes, it will always have constraints if it cannot access essential data, like the Internet or internal company databases. For example, if you ask current AI to "generate a graph using our sales data," it struggles because it doesn't have direct access to that data. This issue has led to numerous efforts to link AI with external tools and data sources. However, the traditional approach involved writing custom code for each AI model and tool, with each new tool requiring complex development. It’s similar to the past when different "cables" were needed for each printer or keyboard, there were various "specifications" between AI and the tools.

Anthropic has introduced the MCP (Model Context Protocol) as an open standard protocol designed to solve this issue.

It can be simplified as "USB-C for AI." Just like USB-C connects multiple devices through a single, unified port, MCP serves as the standard for linking AI applications with various external systems in a standardized way. This eliminates redundancy and simplifies integration between different systems. For instance, if three AI applications need to connect to three tools, without MCP, you would need to establish 3×3, or 9 separate connections. With MCP, only 3+3, or 6 standardized connections are required. This makes AI-tool connections more efficient, allowing AI to access and use the necessary data for improved responses.

Latest Trends: Claude and the Industry’s Shift Toward MCP Adoption

The growing trend of linking AI models with external tools isn’t unique to Anthropic. Since 2023, OpenAI’s ChatGPT has been expanding its capabilities through plugins and function calling, enabling tasks like web searches and calculator functions. Microsoft has also integrated ChatGPT into Bing to enhance web search functionality. These efforts reflect a broader industry movement focused on making AI more practical and integrated into daily workflows. In this landscape, Anthropic made a notable impact by introducing the Model Context Protocol (MCP) in November 2024, offering a standardized approach that’s quickly gaining recognition as a potential industry standard. Claude, Anthropic’s AI assistant, comes equipped with a built-in MCP client, making it easy for apps powered by Claude to connect directly to MCP-compatible servers. For example, in Claude’s desktop application, users can connect services like Google Drive, Slack, and GitHub with just a few clicks, allowing Claude to retrieve relevant information from those platforms in real time.

After MCP’s launch, many developer tools and services quickly began to support the protocol. GitHub Copilot and the development IDE Cursor were among the first to announce MCP integration. Other platforms, including Zed Editor, Replit, Codeium, and Sourcegraph, are also embedding MCP into their ecosystems to enhance how AI assistants understand and interact with code. In summary, the push to connect AI models with external systems is a major trend in the tech industry, and MCP is rapidly gaining traction as an open standard. Some experts say the MCP has actually gained momentum enough to become a favorite in the race to define the “AI Agent Standard” between 2023 and 2025.

As more companies adopt MCP or experiment with similar solutions, the industry is seeing rapid progress in making AI assistants more powerful, flexible, and useful in practical environments.

MCP Strengths: Supporting Diverse LLMs and Operating in Closed Network

MCP is gaining industry attention for several compelling advantages. First and foremost, it standardizes the way AI models connect with tools, significantly improving development efficiency. For developers, this means that once a new AI service is integrated using the MCP standard, it can be reused across multiple systems, streamlining workflows and boosting productivity. This standardized approach also leads to more consistent code implementation, resulting in greater service stability and easier maintenance. From the end-user perspective, this translates into a smoother, more unified experience when interacting with AI features across different applications.

Another key benefit is MCP’s flexibility, as it’s not tied to any specific AI vendor or model. It supports a wide range of large language models (LLMs), including proprietary systems like Anthropic’s Claude and OpenAI’s GPT series, as well as open-source alternatives. For example, developers can utilize open-source LLMs like Llama 2 or DeepSeek by simply meeting the MCP standard, avoiding lock-in to a single provider’s API. This flexibility extends to closed-network environments—organizations operating in secure, internet-restricted settings can still deploy AI assistants by installing open-source models and MCP connectors internally. Even without external internet access, AI functionality can be implemented as long as the MCP protocol is followed, making it ideal for high-security or offline systems.

Additionally, MCP is designed to work across various infrastructure environments. It’s compatible with a wide range of hypervisors and virtualization platforms, whether on-premises or in the cloud. This includes solutions like VMware, Proxmox, and OpenStack, all of which can be managed by AI through a single MCP interface. Since MCP operates independently of specific infrastructure types, organizations can integrate AI without overhauling their existing systems. Moreover, MCP is built for two-way communication, AI can send requests to external tools and receive real-time notifications when specific events occur. Access control is also highly customizable; the MCP server can define granular permissions to ensure AI only accesses the necessary data, preserving security and privacy. In summary, MCP stands out as a modern integration standard that balances flexibility, compatibility, and secure deployment across diverse IT environments.

OpsNow Use Case: Conversational Virtual Machine Management with MCP

OpsNow demonstrates practical MCP integration through two use cases. Setting up MCP-based workflows is simple, and the foundational setup can be done as follows:

  1. Combine OpsNow FinOps with MCP.

  • Third Parties MCP Server
    • Connect with a variety of already public external MCP servers to extend capabilities.
  • LLM Vendor Desktop Application (Claude)
    • Claude, Anthropic’s powerful LLM, connects seamlessly with various MCP servers to process user commands in natural language. It interprets user requests, initiates the appropriate MCP server calls, and facilitates interaction with external systems.
  • OpsNow MCP Server
    • Claude connects to two custom-built MCP servers designed specifically to access cloud cost and asset information within OpsNow.
      • OpsNow Cost MCP Server: Retrieves cloud cost information from OpsNow.
      • OpsNow Asset MCP Server: Provides detailed data on active assets, including servers, networks, databases, and more.
  • OpsNow MCP Provider
    • Each MCP server is internally connected to the OpsNow MCP Provider, a component that bridges Claude’s MCP requests to the actual OpsNow API.
  • OpsNow Resources
    • The actual data is provided by OpsNow’s internal system, known as OpsNow Resources.

In this context, existing FinOps services and resources are referred to as OpsNow Resources. We'll explore the practicality of MCP by developing the green-highlighted section in a simplified way. For now, the setup will operate using dummy data for testing purposes rather than live production data.

OpsNow MCP Provider

This project acts as a provider that enables Claude to communicate with MCP servers to retrieve asset and cost information from OpsNow. The overall system is structured as follows:

  • Asynchronous web server built with FastAPI
  • API client for asset queries (asset_api_client.py)
  • API client for cost queries (cost_api_client.py)
  • Main application that handles requests from the MCP server (main.py)

main.py: The entry point for handling MCP requests

Python

from fastapi import FastAPI
from asset_api_client import get_assets
from cost_api_client import get_costs

app = FastAPI()

@app.get("/health")
async def health_check():
    return {"status": "ok"}

@app.get("/assets")
async def get_assets_data():
    return await get_assets()

@app.get("/costs")
async def get_costs_data():
    return await get_costs()
  • /assets: API endpoint called when Claude requests the asset list
  • /costs: API endpoint called when Claude requests cost data
  • /health: Health check endpoint used to verify the MCP server’s status

cost_api_client.py: Provides cost data

Python

async def get_costs():
    return {
        "costs": [
            {
                "cloud_provider": "<CSP_NAME>",  # Example: AWS, Azure, GCP, etc.
                "monthly_costs": [
                    {
                        "month": "<YYYY-MM>",  # Example: 2025-03
                        "total": "<TOTAL_COST>",
                        "by_service": {
                            "<SERVICE_NAME_1>": "<COST_1>",
                            "<SERVICE_NAME_2>": "<COST_2>",
                            # ...
                        }
                    },
                    # ...Multiple monthly cost data
                ]
            },
            # ...Multiple CSP
        ]
    }

  • It provides cost data based on dummy data rather than actual OpsNow API integration.
  • When Claude receives requests like "What is the cost for this month?", this data will be returned.
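
To make the template concrete, here is a small Python sketch, using made-up numbers, of the kind of aggregation Claude performs over this payload when asked, for example, whether April's cost is higher than March's:

```python
# Hypothetical sample shaped like the get_costs() template above,
# with concrete dummy numbers filled in for illustration.
sample = {
    "costs": [
        {
            "cloud_provider": "AWS",
            "monthly_costs": [
                {"month": "2025-03", "total": 1200.0,
                 "by_service": {"EC2": 800.0, "RDS": 400.0}},
                {"month": "2025-04", "total": 1500.0,
                 "by_service": {"EC2": 900.0, "RDS": 600.0}},
            ],
        }
    ]
}

def monthly_totals(payload):
    """Aggregate total cost per month across all cloud providers."""
    totals = {}
    for csp in payload["costs"]:
        for mc in csp["monthly_costs"]:
            totals[mc["month"]] = totals.get(mc["month"], 0.0) + mc["total"]
    return totals

totals = monthly_totals(sample)
print(totals)                                 # {'2025-03': 1200.0, '2025-04': 1500.0}
print(totals["2025-04"] - totals["2025-03"])  # 300.0
```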

asset_api_client.py: Provides asset data

Python

async def get_assets():
    return {
        "AWS": [
            {
                "id": "<RESOURCE_ID>",
                "type": "<RESOURCE_TYPE>",  # Example: EC2, RDS 
                "region": "<REGION_CODE>",
                "status": "<STATUS>"        # Example: running, stopped 
            },
            # ...Multiple Assets
        ]
    }

  • It provides an asset list based on dummy data, rather than actual OpsNow API integration. 
  • When Claude receives requests like "What resources are currently in use?", this data will be returned.
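
Likewise, a short Python sketch (with hypothetical asset entries) of how this payload can be summarized to answer "What resources are currently in use?":

```python
from collections import Counter

# Hypothetical sample shaped like the get_assets() template above.
sample_assets = {
    "AWS": [
        {"id": "i-0abc", "type": "EC2", "region": "us-east-1", "status": "running"},
        {"id": "db-01", "type": "RDS", "region": "us-east-1", "status": "running"},
        {"id": "i-0def", "type": "EC2", "region": "ap-northeast-2", "status": "stopped"},
    ]
}

def summarize_assets(payload):
    """Count assets per (provider, status) pair."""
    summary = Counter()
    for provider, assets in payload.items():
        for asset in assets:
            summary[(provider, asset["status"])] += 1
    return dict(summary)

print(summarize_assets(sample_assets))
# {('AWS', 'running'): 2, ('AWS', 'stopped'): 1}
```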

How to Run

# Default Installation
pip install -r requirements.txt
# Run the server
python main.py

Full Source Code: opsnow-mcp-provider

OpsNow MCP Server

OpsNow Cost MCP Server

Now, let's take a closer look at the role of the "OpsNow Cost MCP Server" in the Claude MCP integration structure. This component receives requests from the Claude desktop application and responds with OpsNow cost data using the MCP protocol.

Before starting this section, it is highly recommended to review the following document.

Key Technologies

  • Node.js + TypeScript
  • @modelcontextprotocol/sdk: Official SDK for developing the Claude MCP server
  • node-fetch: Used for communication with the Provider API
  • zod: Schema validation library

src/index.ts: MCP server initialization

TypeScript

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Create server instance
const server = new McpServer({
  name: "cloud-cost",
  version: "1.0.0",
});

Define Input Schema and Register Tools

TypeScript

server.tool(
  "get-cost",
  "Get cloud cost summary for multiple vendors and months",
  {
    vendors: z.array(z.string()).optional().describe("List of cloud vendor names"),
    months: z.array(z.string()).optional().describe("List of months in YYYY-MM format"),
  },
  async ({ vendors, months }) => {
    ...
  }
);
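
The elided handler above typically applies the optional vendors/months filters to the Provider's data. A minimal sketch of that filtering logic, written here in Python for brevity (the flattened record shape is hypothetical):

```python
def filter_costs(records, vendors=None, months=None):
    """Mimic the get-cost tool's optional filters: when a filter
    list is omitted (None), that dimension is not restricted."""
    out = []
    for rec in records:
        if vendors is not None and rec["vendor"] not in vendors:
            continue
        if months is not None and rec["month"] not in months:
            continue
        out.append(rec)
    return out

records = [
    {"vendor": "AWS", "month": "2025-03", "total": 1200.0},
    {"vendor": "AWS", "month": "2025-04", "total": 1500.0},
    {"vendor": "GCP", "month": "2025-04", "total": 300.0},
]
print(filter_costs(records, vendors=["AWS"]))    # the two AWS records
print(filter_costs(records, months=["2025-04"])) # the two April records
```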

Retrieve cost data from the Provider API

TypeScript

// Fetch cost data from the Provider API; return null on failure.
async function readCostData(): Promise<any | null> {
  try {
    const response = await fetch('http://localhost:8000/api/v1/costs/info');
    const data = await response.json();
    return data;
  } catch (error) {
    return null;
  }
}

Build

# Default Installation
npm install
# Build
npm run build

Full Source Code: opsnow-mcp-cost-server

OpsNow Asset MCP Server

Since the structure is identical to the OpsNow Cost MCP Server, the explanation is omitted.

Full Source Code: opsnow-mcp-asset-server

Usage in Claude Desktop

Environment Settings

1. Claude Desktop Settings > Developer > Edit Settings

2. Open the claude_desktop_config.json file.

Register cloud-cost-server and cloud-asset-server settings.

JSON

{
  "mcpServers": {
   "cloud-cost-server": {
      "command": "node",
      "args": [
        "/Users/tae-joolee/codeProject/opsnow-mcp-cost-server/build/index.js"
      ]
    },
    "cloud-asset-server": {
      "command": "node",
      "args": [
        "/Users/tae-joolee/codeProject/opsnow-mcp-asset-server/build/index.js"
      ]
    }
  }
}

3. Verify successful registration

Check that the hammer (tools) icon appears in the prompt input field, showing the number of registered tools (here, "2").

Clicking the icon displays the list of registered MCP servers.

Usage Example

Powered by the Claude Desktop agent, the integration demonstrates genuinely useful potential.

1. "What is the cloud cost for April?"

2. "Is it higher than March? If so, what’s the reason?"

3. "What is cloud usage?"

4. Then, visualize it.
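
The visualization step does not strictly need a charting library; as a rough illustration, monthly totals (made-up numbers here) can even be rendered as a text bar chart:

```python
def ascii_bar_chart(totals, width=30):
    """Render monthly totals as a simple text bar chart."""
    peak = max(totals.values())
    lines = []
    for month in sorted(totals):
        # Scale each bar relative to the most expensive month.
        bar = "#" * round(totals[month] / peak * width)
        lines.append(f"{month} | {bar} {totals[month]:.0f}")
    return "\n".join(lines)

print(ascii_bar_chart({"2025-03": 1200.0, "2025-04": 1500.0}))
```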

Deploying My MCP Server

Smithery.ai is a central hub that allows you to search, install, and manage Model Context Protocol (MCP) servers to extend the capabilities of large language models (LLMs). Developers can use Smithery to integrate various external tools and data with LLMs, enabling the creation of more powerful AI systems.

Key Features of Smithery.ai

  • MCP Server Registry: Smithery hosts over 4,500 MCP servers, enabling LLMs to perform a wide range of functions. For example, it supports integrations with GitHub, Google Drive, PostgreSQL, Slack, Brave Search, and more.

  1. Log in to Smithery: click the login button at the top right of the screen and sign in with your GitHub account.

  2. Click "+Add Server" at the top right of the screen.

  3. Select one of the MCP Server projects registered in your GitHub account.

  4. Here, I will select opsnow-mcp-asset-server-no-server.

  5. Click the Create button.

  6. Enter the information and click the Save button.

  7. Deployment runs automatically (it can also be triggered manually).

Registering the Deployed MCP Server

1. Open the claude_desktop_config.json file.

Register the MCP server information for opsnow-mcp-cost-server-no-server and opsnow-mcp-asset-server-no-server.

JSON

{
  "mcpServers": {
    "opsnow-mcp-cost-server-no-server": {
      "command": "npx",
      "args": [
        "-y",
        "@smithery/cli@latest",
        "run",
        "@taejulee/opsnow-mcp-cost-server-no-server",
        "--key",
        "19e6494b-1fc7-47dc-81fb-2c26fb2f0277"
      ]
    },
    "opsnow-mcp-asset-server-no-server": {
      "command": "npx",
      "args": [
        "-y",
        "@smithery/cli@latest",
        "run",
        "@taejulee/opsnow-mcp-asset-server-no-server",
        "--key",
        "19e6494b-1fc7-47dc-81fb-2c26fb2f0277"
      ]
    }
  }
}

Future Expansion Proposal: Claude MCP-Based OpsNow Integration Architecture

  • OpsNow Desktop Application
    • It is a hybrid desktop application that integrates control of various MCP-supported LLM desktop apps, such as Claude and ChatGPT.
    • Supports various MCP-compatible LLMs (Claude, GPT, etc.)
    • Integrates with third-party MCP servers (Sequential Thinking, GitHub, etc.) and the internal OpsNow MCP Server
    • Users can directly enter their LLM API Key to connect to the desired model
    • Provides a natural workflow within the Chat UI using the OpsNow ReAct (Reasoning + Acting) framework
  • OpsNow MCP Provider
    • It serves as an API service that provides OpsNow's asset, cost, and connection information to external LLMs.
      • Endpoint server serving data based on OpsNow Resources
      • Built-in License Key Manager:
        • Authentication and Authorization Isolation via OpsNow License Key
        • Company-Specific Mapping by Key
        • Provide Customized Information for Each Customer (e.g., limit access to specific service ranges)
        • Capability Extension through OpsNow API Bridge and Connect API
  • License Key-Based Control System
    • All MCP communications are authenticated based on the License Key, providing the following security/control features:
      • Limit Call Scope by Key (e.g., restrict access to specific asset queries)
      • Provide Dedicated Data Based on Customer ID
      • Ensure Scalability for Multiple Companies and Customers
  • Diverse MCP Compatibility and Third-Party Integration
    • With the strengthened MCP support from LLM vendors like GPT and Gemini, third-party LLMs can also be integrated through the MCP server, in addition to Claude.
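
A minimal Python sketch of the license-key control idea above, where each key maps to a company and an allowed set of tool scopes (all names hypothetical):

```python
# Hypothetical key registry: each OpsNow License Key maps to a
# company and the MCP tool scopes it may call.
KEY_REGISTRY = {
    "key-acme-001": {"company": "acme", "scopes": {"get-cost"}},
    "key-beta-002": {"company": "beta", "scopes": {"get-cost", "get-assets"}},
}

def authorize(license_key, tool):
    """Return the key's company on success; raise PermissionError otherwise."""
    entry = KEY_REGISTRY.get(license_key)
    if entry is None:
        raise PermissionError("unknown license key")
    if tool not in entry["scopes"]:
        raise PermissionError(f"tool '{tool}' not allowed for this key")
    return entry["company"]

print(authorize("key-beta-002", "get-assets"))  # beta
```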

Through this system design and PoC, we have confirmed that Claude-based AI can extend beyond simple Q&A to become an agent that directly queries and interprets OpsNow's FinOps data.

This structure points to several possibilities for the future:

  • AI-based operational analysis automation for all resources within the company
  • Integration of a customer-specific portal with LLM agents
  • User-specific LLM model selection → License-based control → Completion of natural language interface

Claude MCP has now moved beyond the experimental introduction phase and is becoming a core tool for AI operations. Based on this, OpsNow aims to leap forward by offering new user interfaces and enhanced customer experiences.

2. Integration of OpsNow Prime with MCP

MCP technology will also be integrated into OpsNow Prime, the on-premises CMP solution scheduled for release in the first half of 2025. This integration enables infrastructure automation through AI, allowing users to perform complex tasks with simple commands.

OpsNow Prime is a Private & On-Premises Cloud Management Platform designed to manage enterprise IT infrastructure and cloud resources. By integrating MCP, we can demonstrate how an AI assistant simplifies infrastructure operations. In simple terms, users can issue commands through a chat interface, and the AI handles tasks like creating or shutting down virtual machines (VMs). For example, when an administrator asks the OpsNow AI chatbot, “Create a new virtual server,” the AI will automatically call the appropriate virtualization platform API to provision the VM and return the result. Previously, building such integrations required significant time, effort, and resources; with MCP as an open standard, they are far easier and more efficient to assemble.

The OpsNow team has developed a dedicated MCP server that integrates internal systems, such as virtualization hypervisor APIs, into the standardized Model Context Protocol (MCP) framework. By connecting large language models (LLMs) like Anthropic Claude to OpsNow as MCP clients, AI can now interact directly with hypervisors like Proxmox or VMware. These models can execute real-world infrastructure tasks based on natural language commands. Within OpsNow, a specialized module called the MCP Provider acts as an adapter, bridging various hypervisors and OpsNow’s native APIs. For example, when a user instructs the AI to “delete a VM,” the MCP Provider identifies the correct platform, performs the appropriate API call, and sends the result back through the MCP interface. This seamless integration lets users manage infrastructure without touching complex consoles: simply ask, and the AI handles the rest.
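
Conceptually, the MCP Provider's adapter role can be sketched as a small dispatcher; the class and method names below are illustrative, not the actual OpsNow implementation:

```python
class ProxmoxClient:
    """Stand-in for the real Proxmox API wrapper."""
    def create_vm(self, name): return f"proxmox: created {name}"
    def delete_vm(self, name): return f"proxmox: deleted {name}"

class VMwareClient:
    """Stand-in for the real VMware API wrapper."""
    def create_vm(self, name): return f"vmware: created {name}"
    def delete_vm(self, name): return f"vmware: deleted {name}"

class McpProvider:
    """Route an MCP tool call to the matching hypervisor client."""
    def __init__(self):
        self.platforms = {"proxmox": ProxmoxClient(), "vmware": VMwareClient()}

    def handle(self, platform, action, name):
        client = self.platforms[platform]
        return getattr(client, action)(name)

provider = McpProvider()
print(provider.handle("proxmox", "create_vm", "vm-100"))  # proxmox: created vm-100
```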

Let’s explore a real-world use case where Claude Desktop utilizes the OpsNow Prime MCP Server to easily create and activate virtual machines (VMs) on Proxmox, with full lifecycle management handled by OpsNow Prime.

This overview covers the entire process, from having Cursor learn the API structure and generate the MCP server through to deployment and operation. The MCP server was developed using Cursor and is built on Node.js. The basic server structure was taken from the official MCP repository on GitHub.

To begin, we cloned the source code from the example MCP server Git repository to utilize the basic structure. Using the Postman collection file and TypeScript files provided, we studied the architecture and workflow, which allowed us to successfully create the OpsNow Prime MCP server.

Postman collection file

JSON

"item": [
   {
      "name": "Proxmox",
      "item": [
         {
            "name": "VM list lookup (PRX-IF-VM-003)",
            "request": {
               "auth": {
                  "type": "bearer",
                  ... (snip)
               },
               ... (snip)
            },
            "response": [
               {
                  "name": "VM list lookup (PRX-IF-VM-003)",
                  ... (snip)
               }
            ]
         }
      ]
   }
]

route.ts

TypeScript

import { Express, Request, Response } from 'express';
import { ProxmoxApiManager } from './proxmox/proxmoxApiManager';
import { getLogger } from './utils/logger';

const logger = getLogger('Routes');
const proxmoxManager = ProxmoxApiManager.getInstance();

export const setupRoutes = (app: Express) => {
  // Default health check endpoint
  app.get('/api/health', (req: Request, res: Response) => {
    res.status(200).json({ status: 'ok', message: 'Prime MCP server is running normally.' });
  });

  // Node related endpoints
  app.get('/api/nodes', async (req: Request, res: Response) => {
    try {
      const result = await proxmoxManager.getNodes();
      res.status(200).json(result);
    } catch (error) {
      logger.error('Node list lookup failed:', error);
      res.status(500).json({ error: 'An error occurred trying to query the node list.' });
    }
  });

// ....(skip)

After the training is complete, source files for the OpsNow Prime MCP server are generated from the learned results. Build the project with the following commands; Node.js version 20 or higher is required. After installing the required modules and completing the build, verify that the server starts properly. For the OpsNow Prime MCP server, results varied somewhat depending on the LLM model, so we conducted retraining and testing to improve accuracy.

Shell

# Check node version
node --version

# Build the project
npm install && npm run build

# Check that the project starts normally
node dist/index.js

When you ask Cursor to run tests for the OpsNow Prime MCP server, it executes the API call tests and automatically fixes any issues that come up. Once you confirm there are no errors, click the settings button at the top of Cursor to register the newly created MCP server.

Click on "MCP" on the left to navigate to the MCP management screen.

Click on "Add New Global MCP Server" in the screen above, and specify the absolute path of the index.js file generated after the build in the "args" section.

After that, when you create a server and retrieve instance information through Cursor or Claude, the relevant API is called to perform the requested operation. If the API's definitions or authentication information is incorrect, errors may occur multiple times, but the system will automatically go through the retraining and rebuilding process via Cursor.

While the MCP server can also be used in Cursor, it was functionally tested across platforms to verify if it works well in a cross-platform environment. Below is how to register the MCP in Claude.

Now, we're ready to use the learned MCP server to manage resources. Simply enter "Create 1 VM in Prime Proxmox" in Claude Desktop, and you'll be working with the OpsNow Prime MCP server to create the VM.

You can quickly confirm that the VMs are created in Proxmox and also check the status of VMs created in OpsNow Prime. Notably, even when storage connectivity errors occurred during the process, we settled on alternative solutions and completed the VM creation without issue. The created VMs can be viewed in real time from the Proxmox console, and their status can likewise be checked in real time from the VM management menu in OpsNow Prime.

Now, let's abort/delete the created resources. If you type "Delete the VM you just created" in Claude Desktop, it will automatically call the API to stop the VM you just created and then delete it.

You can see in real-time from the Proxmox console that the VM has been deleted.

OpsNow's use of MCP has already been demonstrated through developed and tested features such as VM creation, querying, and deletion in the Proxmox virtualization environment.

We are also expanding additional capabilities, such as shutting down VMs. For instance, what used to require an administrator to manually create a VM through the Proxmox Web interface can now be automated simply by instructing the OpsNow AI assistant.

Interestingly, these capabilities are not limited to specific virtualization offerings; they can be easily generalized to VMware, OpenStack, and more. Since the OpsNow team has already built its connectivity against the MCP standard, integrating new platforms is relatively easy. Moreover, as mentioned earlier, even in a closed-network data center environment, deploying open-source LLMs and MCP servers can provide the same AI capabilities.

OpsNow Prime also has a roadmap to support MCP servers for detecting and automating unused or abnormal resources, automating system failures, and managing ITSM (Resource Application/Authorization), expanding beyond just managing resource creation and deletion in the context of On-Premise infrastructure operations.

As demonstrated in the OpsNow case, the introduction of MCP is expected to empower AI to act as the "hands and feet" of the data center, ushering in a new era of automation in infrastructure operations.

____________________________________________________________________________________________________________________________________

Anthropic's MCP has emerged as a universal connector, bringing the world's information to isolated AIs, and its scope of use is steadily expanding. While it's still in the early stages, it has garnered significant attention from the industry, and practical advantages are rapidly being proven through real-world cases like OpsNow. It will be interesting to see how quickly MCP becomes a standard and integrates into everyday services. The "MCP era", connecting data and AI, is fast approaching.


With the AI Agent Standard, MCP Connecting Claude to OpsNow

OpsNow Team

The introduction to Anthropic's Model Context Protocol (MCP)

Why We Need a New Connectivity Standard for AI: The background of the rise of MCP

From the picture above:

Left: Before MCP
Without a standard protocol, each AI model must connect to tools like Slack, Google Drive, and GitHub using its own unique API. This results in fragmented connections and complex, repetitive integrations.

Right: After MCP
With MCP, models can access various tools through a single, standardized interface. Integration becomes much simpler and more unified across systems.

Large language models (LLMs), often referred to as AI chatbots or virtual assistants, are highly intelligent but still face a significant limitation, they are typically disconnected from the outside world. In simple terms, no matter how advanced AI becomes, it will always have constraints if it cannot access essential data, like the Internet or internal company databases. For example, if you ask current AI to "generate a graph using our sales data," it struggles because it doesn't have direct access to that data. This issue has led to numerous efforts to link AI with external tools and data sources. However, the traditional approach involved writing custom code for each AI model and tool, with each new tool requiring complex development. It’s similar to the past when different "cables" were needed for each printer or keyboard, there were various "specifications" between AI and the tools.

Anthropic has introduced the MCP (Model Context Protocol) as an open standard protocol designed to solve this issue.

It can be simplified as "USB-C for AI." Just like USB-C connects multiple devices through a single, unified port, MCP serves as the standard for linking AI applications with various external systems in a standardized way. This eliminates redundancy and simplifies integration between different systems. For instance, if three AI applications need to connect to three tools, without MCP, you would need to establish 3×3, or 9 separate connections. With MCP, only 3+3, or 6 standardized connections are required. This makes AI-tool connections more efficient, allowing AI to access and use the necessary data for improved responses.

Latest Trends: Claude and the Industry’s Shift Toward MCP Adoption

The growing trend of linking AI models with external tools isn’t unique to Anthropic. Since 2023, OpenAI’s ChatGPT has been expanding its capabilities through plugins and function calling, enabling tasks like web searches and calculator functions. Microsoft has also integrated ChatGPT into Bing to enhance web search functionality. These efforts reflect a broader industry movement focused on making AI more practical and integrated into daily workflows. In this landscape, Anthropic made a notable impact by introducing the Model Context Protocol (MCP) in November 2024, offering a standardized approach that’s quickly gaining recognition as a potential industry standard. Claude, Anthropic’s AI assistant, comes equipped with a built-in MCP client, making it easy for apps powered by Claude to connect directly to MCP-compatible servers. For example, in Claude’s desktop application, users can connect services like Google Drive, Slack, and GitHub with just a few clicks, allowing Claude to retrieve relevant information from those platforms in real time.

After MCP’s launch, many developer tools and services quickly began to support the protocol. GitHub Copilot and the development IDE Cursor were among the first to announce MCP integration. Other platforms, including Zed Editor, Replit, Codeium, and Sourcegraph, are also embedding MCP into their ecosystems to enhance how AI assistants understand and interact with code. In summary, the push to connect AI models with external systems is a major trend in the tech industry, and MCP is rapidly gaining traction as an open standard. Some experts say the MCP has actually gained momentum enough to become a favorite in the race to define the “AI Agent Standard” between 2023 and 2025.

As more companies adopt MCP or experiment with similar solutions, the industry is seeing rapid progress in making AI assistants more powerful, flexible, and useful in practical environments.

MCP Strengths: Supporting Diverse LLMs and Operating in Closed Network

MCP is gaining industry attention for several compelling advantages. First and foremost, it standardizes the way AI models connect with tools, significantly improving development efficiency. For developers, this means that once a new AI service is integrated using the MCP standard, it can be reused across multiple systems, streamlining workflows and boosting productivity. This standardized approach also leads to more consistent code implementation, resulting in greater service stability and easier maintenance. From the end-user perspective, this translates into a smoother, more unified experience when interacting with AI features across different applications.

Another key benefit is MCP’s flexibility, as it’s not tied to any specific AI vendor or model. It supports a wide range of large language models (LLMs), including proprietary systems like Anthropic’s Claude and OpenAI’s GPT series, as well as open-source alternatives. For example, developers can utilize open-source LLMs like Llama 2 or DeepSeek by simply meeting the MCP standard, avoiding lock-in to a single provider’s API. This flexibility extends to closed-network environments—organizations operating in secure, internet-restricted settings can still deploy AI assistants by installing open-source models and MCP connectors internally. Even without external internet access, AI functionality can be implemented as long as the MCP protocol is followed, making it ideal for high-security or offline systems.

Additionally, MCP is designed to work across various infrastructure environments. It’s compatible with a wide range of hypervisors and virtualization platforms, whether on-premises or in the cloud. This includes solutions like VMware, Proxmox, and OpenStack, all of which can be managed by AI through a single MCP interface. Since MCP operates independently of specific infrastructure types, organizations can integrate AI without overhauling their existing systems. Moreover, MCP is built for two-way communication, AI can send requests to external tools and receive real-time notifications when specific events occur. Access control is also highly customizable; the MCP server can define granular permissions to ensure AI only accesses the necessary data, preserving security and privacy. In summary, MCP stands out as a modern integration standard that balances flexibility, compatibility, and secure deployment across diverse IT environments.

OpsNow Use Case: Conversational Virtual Machine Management with MCP

OpsNow demonstrates practical MCP integration through two use cases. Setting up MCP-based workflows is simple, and the foundational setup can be done as follows:

  1. Combine OpsNow FinOps with MCP.

  • Third Parties MCP Server
    • Connect with a variety of already public external MCP servers to extend capabilities.
  • LLM Vendor Desktop Application (Claude)
    • Claude, Anthropic’s powerful LLM, connects seamlessly with various MCP servers to process user commands in natural language. It interprets user requests, initiates the appropriate MCP server calls, and facilitates interaction with external systems.
  • OpsNow MCP Server
    • Claude connects to two custom-built MCP servers designed specifically to access cloud cost and asset information within OpsNow.
      • OpsNow Cost MCP Server: Retrieves cloud cost information from OpsNow.
      • OpsNow Asset MCP Server: Provides detailed data on active assets, including servers, networks, databases, and more.
  • OpsNow MCP Provider
    • Each MCP server is internally connected to the OpsNow MCP Provider, a component that bridges Claude’s MCP requests to the actual OpsNow API.
  • OpsNow Resources
    • The actual data is provided by OpsNow’s internal system, known as OpsNow Resources.

In this context, existing FinOps services and resources are referred to as OpsNow Resources. We'll explore the practicality of MCP by developing the green-highlighted section in a simplified way. For now, the setup will operate using dummy data for testing purposes rather than live production data.

OpsNow MCP Provider

This project acts as a provider that enables Claude to communicate with MCP servers to retrieve asset and cost information from OpsNow. The overall system is structured as follows:

  • Asynchronous web server built with FastAPI
  • API client for asset queries (asset_api_client.py)
  • API client for cost queries (cost_api_client.py)
  • Main application that handles requests from the MCP server (main.py)

main.py: The entry point for handling MCP requests

Python

from fastapi import FastAPI
from asset_api_client import get_assets
from cost_api_client import get_costs

app = FastAPI()

@app.get("/health")
async def health_check():
    return {"status": "ok"}

@app.get("/assets")
async def get_assets_data():
    return await get_assets()

@app.get("/costs")
async def get_costs_data():
    return await get_costs()
  • /assets: API endpoint called when Claude requests the asset list
  • /costs: API endpoint called when Claude requests cost data
  • /health: Health check endpoint used to verify the MCP server’s status

cost_api_client.py: Provides cost data

Python

async def get_costs():
    return {
        "costs": [
            {
                "cloud_provider": "<CSP_NAME>",  # Example: AWS, Azure, GCP, etc.
                "monthly_costs": [
                    {
                        "month": "<YYYY-MM>",  # Example: 2025-03
                        "total": "<TOTAL_COST>",
                        "by_service": {
                            "<SERVICE_NAME_1>": "<COST_1>",
                            "<SERVICE_NAME_2>": "<COST_2>",
                            # ...
                        }
                    },
                    # ...Multiple monthly cost data
                ]
            },
            # ...Multiple CSP
        ]
    }
  • It provides cost data based on dummy data rather than actual OpsNow API integration. 
  • When Claude receives requests like "What is the cost for this month?", this data will be returned.

asset_api_client.py: Provides asset data

Python

async def get_assets():
    return {
        "AWS": [
            {
                "id": "<RESOURCE_ID>",
                "type": "<RESOURCE_TYPE>",  # Example: EC2, RDS 
                "region": "<REGION_CODE>",
                "status": "<STATUS>"        # Example: running, stopped 
            },
            # ...Multiple Assets
        ]
    }

  • It provides an asset list based on dummy data, rather than actual OpsNow API integration. 
  • When Claude receives requests like "What resources are currently in use?", this data will be returned.

How to Run

# Default Installation
pip install -r requirements.txt
# 서버 실행
python main.py

Full Source Code: opsnow-mcp-provider

OpsNow MCP Server

OpsNow Cost MCP Server

Now, let's take a closer look at the role of the "OpsNow Cost MCP Server" in the Claude MCP integration structure. This component receives requests from the Claude desktop application and responds with OpsNow cost data using the MCP protocol.

Before starting this section, it is highly recommended to review the following document.

Key Technologies

  • Node.js + TypeScript
  • @modelcontextprotocol/sdk: Official SDK for developing the Claude MCP server
  • node-fetch: Used for communication with the Provider API
  • zod: Schema validation library

src/index.ts: MCP server initialization

JavaScript

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Create server instance
const server = new McpServer({
  name: "cloud-cost",
  version: "1.0.0",
});

Define Input Schema and Register Tools

JavaScript

server.tool(
  "get-cost",
  "Get cloud cost summary for multiple vendors and months",
  {
    vendors: z.array(z.string()).optional().describe("List of cloud vendor names"),
    months: z.array(z.string()).optional().describe("List of months in YYYY-MM format"),
  },
  async ({ vendors, months }) => {
    ...
  }
);

Retrieve cost data from the Provider API

JavaScript

async function readCostData(): Promise<any | null> {
  const response = await fetch('http://localhost:8000/api/v1/costs/info');
  ...
  return data;
}

Build

# Default Installation
npm install
# Build
npm run build

Full Source Code: opsnow-mcp-cost-server

OpsNow Asset MCP Server

Since the structure is identical to the OpsNow Cost MCP Server, the explanation is omitted.

Full Source Code: opsnow-mcp-asset-server

Usage in Claude Desktop

Environment Settings

1. Claude Desktop Settings > Developer > Edit Settings

2. Open the claude_desktop_config.json file.

Register cloud-cost-server and cloud-asset-server settings.

JavaScript

{
  "mcpServers": {
   "cloud-cost-server": {
      "command": "node",
      "args": [
        "/Users/tae-joolee/codeProject/opsnow-mcp-cost-server/build/index.js"
      ]
    },
    "cloud-asset-server": {
      "command": "node",
      "args": [
        "/Users/tae-joolee/codeProject/opsnow-mcp-asset-server/build/index.js"
      ]
    }
  }
}

3. Verify successful registration

Check for "Hammer2" in the prompt input field

You can view the MCP server list by clicking

Usage Example

Based on the powerful performance of the Claude Desktop Agent, it demonstrates quite useful potential.

1. "What is the cloud cost for April?"

2. "Is it higher than March? If so, what’s the reason?"

3. "What is cloud usage?"

4. Then, visualize it.

Deploying My MCP Server

Smithery.ai is a central hub that allows you to search, install, and manage Model Context Protocol (MCP) servers to extend the capabilities of large language models (LLMs). Developers can use Smithery to integrate various external tools and data with LLMs, enabling the creation of more powerful AI systems.

Key Features of Smithery.ai

  • MCP Server Registry: smithery hosts over 4,500 MCP servers, enabling LLMs to perform a wide range of functions. For example, it supports integrations with GitHub, Google Drive, PostgreSQL, Slack, Brave Search, and more.

  1. Log in to Smithery. Click the login button at the top right of the screen, and you’ll need to log in using your GitHub account.

  2. Click "+Add Server" at the top right of the screen.

  3. Select one of the MCP Server projects registered in my GitHub.

  4. Here, I will select opsnow-mcp-asset-server-no-server.

  5. Click the Create button.

  6. Enter the information and click the Save button.

  7. The deployment is performed automatically (though it can also be done manually).

Registering the Deployed MCP Server

1. Open the claude_desktop_config.json file.

Register the MCP server information for the cloud-cost-server and cloud-asset-server.

JSON

{
  "mcpServers": {
    "opsnow-mcp-cost-server-no-server": {
      "command": "npx",
      "args": [
        "-y",
        "@smithery/cli@latest",
        "run",
        "@taejulee/opsnow-mcp-cost-server-no-server",
        "--key",
        "19e6494b-1fc7-47dc-81fb-2c26fb2f0277"
      ]
    },
    "opsnow-mcp-asset-server-no-server": {
      "command": "npx",
      "args": [
        "-y",
        "@smithery/cli@latest",
        "run",
        "@taejulee/opsnow-mcp-asset-server-no-server",
        "--key",
        "19e6494b-1fc7-47dc-81fb-2c26fb2f0277"
      ]
    }
  }
}

Future Expansion Proposal: Claude MCP-Based OpsNow Integration Architecture

  • OpsNow Desktop Application
    • It is a hybrid desktop application that integrates control of various MCP-supported LLM desktop apps, such as Claude and ChatGPT.
    • Supports various MCP-compatible LLMs (Claude, GPT, etc.)
    • Integrates with third-party MCP servers (Sequential Thinking, GitHub, etc.) and the internal OpsNow MCP Server
    • Users can directly enter their LLM API Key to connect to the desired model
    • Provides a natural workflow within the Chat UI using the OpsNow ReAct (Reasoning + Acting) framework
  • OpsNow MCP Provider
    • It serves as an API service that provides OpsNow's asset, cost, and connection information to external LLMs.
      • Endpoint server serving data based on OpsNow Resources
      • Built-in License Key Manager:
        • Authentication and Authorization Isolation via OpsNow License Key
        • Company-Specific Mapping by Key
        • Provide Customized Information for Each Customer (e.g., limit access to specific service ranges)
        • Capability Extension through OpsNow API Bridge and Connect API
  • License Key-Based Control System
    • All MCP communications are authenticated based on the License Key, providing the following security/control features:
      • Limit Call Scope by Key (e.g., restrict access to specific asset queries)
      • Provide Dedicated Data Based on Customer ID
      • Ensure Scalability for Multiple Companies and Customers
  • Diverse MCP Compatibility and Third-Party Integration
    • As MCP support strengthens among other LLM vendors (e.g., OpenAI's GPT and Google's Gemini), third-party LLMs can also be integrated through the MCP server, in addition to Claude.
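
The license-key control described above can be illustrated with a small authorization sketch. All names and data here are hypothetical; in the proposed architecture, this logic would live in the License Key Manager inside the OpsNow MCP Provider.

```typescript
// Hypothetical sketch of license-key scoping; not actual OpsNow code.
type KeyRecord = { companyId: string; allowedScopes: Set<string> };

// Illustrative key store; real keys would map to customer accounts.
const keyStore = new Map<string, KeyRecord>([
  ["demo-key-1", { companyId: "acme", allowedScopes: new Set(["costs"]) }],
]);

// Every MCP call is checked against the key's scope before data is served.
function authorize(licenseKey: string, scope: string): KeyRecord {
  const rec = keyStore.get(licenseKey);
  if (!rec) throw new Error("unknown license key");
  if (!rec.allowedScopes.has(scope)) {
    throw new Error(`scope '${scope}' not allowed for this key`);
  }
  return rec; // the caller uses companyId to serve customer-specific data
}
```

A lookup like this is what would limit call scope by key and map each key to company-specific data.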

Through this system design and PoC, we have confirmed that Claude-based AI can extend beyond simple Q&A to become an agent that directly queries and interprets OpsNow's FinOps data.

Through this structure, we have discovered the following possibilities for the future:

  • AI-based operational analysis automation for all resources within the company
  • Integration of a customer-specific portal with LLM agents
  • User-specific LLM model selection → License-based control → Completion of natural language interface

Claude MCP has now moved beyond the experimental introduction phase and is becoming a core tool for AI operations. Based on this, OpsNow aims to leap forward by offering new user interfaces and enhanced customer experiences.

2. Integration of OpsNow Prime with MCP

MCP technology will also be integrated into OpsNow Prime, the on-premises CMP solution scheduled for release in the first half of 2025. This integration enables infrastructure automation through AI, allowing users to perform complex tasks with simple commands.

OpsNow Prime is a Private & On-Premises Cloud Management Platform designed to manage enterprise IT infrastructure and cloud resources. By integrating MCP, we can demonstrate how an AI assistant simplifies infrastructure operations. In simple terms, users can issue commands through a chat interface, and the AI handles tasks like creating or shutting down virtual machines (VMs). For example, when an administrator asks the OpsNow AI chatbot, “Create a new virtual server,” the AI will automatically call the appropriate virtualization platform API to provision the VM and return the result. Previously, building such integrations required significant time, effort, and resources; with MCP as an open standard, they become far easier and more efficient to implement.

The OpsNow team has developed a dedicated MCP server that integrates internal systems, such as virtualization hypervisor APIs, into the standardized Model Context Protocol (MCP) framework. By connecting large language models (LLMs) like Anthropic Claude to OpsNow as MCP clients, AI can now interact directly with hypervisors like Proxmox or VMware. These models can execute real-world infrastructure tasks based on natural language commands. Within OpsNow, a specialized module called the MCP Provider acts as an adapter, bridging various hypervisors and OpsNow’s native APIs. For example, when a user instructs the AI to “delete a VM,” the MCP Provider identifies the correct platform, performs the appropriate API call, and sends the result back through the MCP interface. This seamless integration allows users to manage infrastructure effortlessly, with no need to access complex consoles. Simply speak, and the AI handles the rest.
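
The MCP Provider's adapter role described above can be sketched as a simple dispatch table. The class and method names below are illustrative only, not the actual OpsNow implementation.

```typescript
// Illustrative adapter dispatch: the provider identifies the platform that
// owns a VM, then routes the request to the matching hypervisor adapter.
interface HypervisorAdapter {
  deleteVm(vmId: string): string;
}

class ProxmoxAdapter implements HypervisorAdapter {
  deleteVm(vmId: string): string { return `proxmox: deleted ${vmId}`; }
}

class VmwareAdapter implements HypervisorAdapter {
  deleteVm(vmId: string): string { return `vmware: deleted ${vmId}`; }
}

const adapters: Record<string, HypervisorAdapter> = {
  proxmox: new ProxmoxAdapter(),
  vmware: new VmwareAdapter(),
};

function deleteVm(platform: string, vmId: string): string {
  const adapter = adapters[platform];
  if (!adapter) throw new Error(`unsupported platform: ${platform}`);
  return adapter.deleteVm(vmId);
}
```

Adding support for a new hypervisor then amounts to registering one more adapter, which is why generalizing beyond Proxmox is straightforward.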

Let’s explore a real-world use case where Claude Desktop utilizes the OpsNow Prime MCP Server to easily create and activate virtual machines (VMs) on Proxmox, with full lifecycle management handled by OpsNow Prime.

This overview covers the entire process, from “training” the MCP server in Cursor (having the AI learn the target API structure) to deployment and operation. The MCP server was developed using Cursor and is built on Node.js; the basic server structure was obtained from the official MCP repository on GitHub.

To begin, we cloned the source code from the example MCP server Git repository to utilize the basic structure. Using the Postman collection file and TypeScript files provided, we studied the architecture and workflow, which allowed us to successfully create the OpsNow Prime MCP server.

Postman collection file

JSON

"item":[
   {
      "name":"Proxmox",
      "item":[
         {
            "name":"VM 리스트 조회(PRX-IF-VM-003)",
            "request":{
               "auth":{
                  "type":"bearer",
                  ... 중략
               },
               ... 중략
            },
            "response":[
               {
                  "name":"VM 리스트 조회(PRX-IF-VM-003)",
                  ... 중략
                  ],
... 중략

route.ts

TypeScript

import { Express, Request, Response } from 'express';
import { ProxmoxApiManager } from './proxmox/proxmoxApiManager';
import { getLogger } from './utils/logger';

const logger = getLogger('Routes');
const proxmoxManager = ProxmoxApiManager.getInstance();

export const setupRoutes = (app: Express) => {
  // Default health check endpoint
  app.get('/api/health', (req: Request, res: Response) => {
    res.status(200).json({ status: 'ok', message: 'Prime MCP server is running normally.' });
  });

  // Node related endpoints
  app.get('/api/nodes', async (req: Request, res: Response) => {
    try {
      const result = await proxmoxManager.getNodes();
      res.status(200).json(result);
    } catch (error) {
      logger.error('Node list lookup failed:', error);
      res.status(500).json({ error: 'An error occurred trying to query the node list.' });
    }
  });

  // ... (remaining endpoints omitted)
};

Once this learning phase is complete, Cursor generates the source files for the OpsNow Prime MCP server. The project is built with the commands below and requires Node.js version 20 or higher. After installing the required modules and building, verify that the server starts correctly. Results for the OpsNow Prime MCP server varied somewhat depending on the LLM model used, so we repeated the learning and testing cycle to improve accuracy.

Shell

## Check the Node.js version
node --version

## Build the project
npm install && npm run build

## Check project normal startup
node dist/index.js

When you ask Cursor to run tests for the OpsNow Prime MCP server, the API call tests are executed, and if any issues arise, Cursor fixes them automatically. Once you confirm there are no errors, click the settings button at the top of Cursor to register the newly created MCP server.

Click on "MCP" on the left to navigate to the MCP management screen.

Click on "Add New Global MCP Server" in the screen above, and specify the absolute path of the index.js file generated after the build in the "args" section.

After that, when you create a server and retrieve instance information through Cursor or Claude, the relevant API is called to perform the requested operation. If the API's definitions or authentication information is incorrect, errors may occur multiple times, but the system will automatically go through the retraining and rebuilding process via Cursor.

While the MCP server can also be used in Cursor, we tested it functionally on other platforms as well to verify that it works in a cross-platform environment. Below is how to register the MCP server in Claude.

Now we're ready to use the MCP server to manage resources. Simply enter "Create 1 VM in Prime Proxmox" in Claude Desktop, and Claude will work with the OpsNow Prime MCP server to create the VM.

You can quickly see the VMs being created in Proxmox, as well as the status of the created VMs in OpsNow Prime. Notably, even when storage connectivity errors occurred during the process, alternative approaches were agreed on and the VM creation completed without issue. The created VMs can be viewed in real time from the Proxmox console, and their status can also be checked in real time from the VM management menu in OpsNow Prime.

Now, let's stop and delete the created resources. If you type "Delete the VM you just created" in Claude Desktop, it will automatically call the APIs to stop the VM and then delete it.

You can see in real-time from the Proxmox console that the VM has been deleted.

OpsNow's use of MCP has already been demonstrated and tested through features such as VM creation, querying, and deletion in the Proxmox virtualization environment.

We are also expanding additional capabilities, such as shutting down VMs. For instance, what used to require an administrator to manually create a VM through the Proxmox Web interface can now be automated simply by instructing the OpsNow AI assistant.

Interestingly, these capabilities are not limited to a specific virtualization product; they generalize easily to VMware, OpenStack, and more. Since the OpsNow team has already built its connectivity on the MCP standard, integrating new platforms is relatively straightforward. Moreover, as mentioned earlier, even in a closed-network data center environment, deploying open-source LLMs and MCP servers can provide the same AI capabilities.

OpsNow Prime also has a roadmap to support MCP servers for detecting and automating unused or abnormal resources, automating system failures, and managing ITSM (Resource Application/Authorization), expanding beyond just managing resource creation and deletion in the context of On-Premise infrastructure operations.

As demonstrated in the OpsNow case, the introduction of MCP is expected to empower AI to act as the "hands and feet" of the data center, ushering in a new era of automation in infrastructure operations.

____________________________________________________________________________________________________________________________________

Anthropic's MCP has emerged as a universal connector, bringing the world's information to isolated AIs, and its scope of use is steadily expanding. While it's still in the early stages, it has garnered significant attention from the industry, and practical advantages are rapidly being proven through real-world cases like OpsNow. It will be interesting to see how quickly MCP becomes a standard and integrates into everyday services. The "MCP era", connecting data and AI, is fast approaching.

Insight

With the AI Agent Standard, MCP Connecting Claude to OpsNow

OpsNow Team

The introduction to Anthropic's Model Context Protocol (MCP)

Why We Need a New Connectivity Standard for AI: The background of the rise of MCP

From the picture above:

Left: Before MCP
Without a standard protocol, each AI model must connect to tools like Slack, Google Drive, and GitHub using its own unique API. This results in fragmented connections and complex, repetitive integrations.

Right: After MCP
With MCP, models can access various tools through a single, standardized interface. Integration becomes much simpler and more unified across systems.

Large language models (LLMs), often referred to as AI chatbots or virtual assistants, are highly intelligent but still face a significant limitation, they are typically disconnected from the outside world. In simple terms, no matter how advanced AI becomes, it will always have constraints if it cannot access essential data, like the Internet or internal company databases. For example, if you ask current AI to "generate a graph using our sales data," it struggles because it doesn't have direct access to that data. This issue has led to numerous efforts to link AI with external tools and data sources. However, the traditional approach involved writing custom code for each AI model and tool, with each new tool requiring complex development. It’s similar to the past when different "cables" were needed for each printer or keyboard, there were various "specifications" between AI and the tools.

Anthropic has introduced the MCP (Model Context Protocol) as an open standard protocol designed to solve this issue.

It can be simplified as "USB-C for AI." Just like USB-C connects multiple devices through a single, unified port, MCP serves as the standard for linking AI applications with various external systems in a standardized way. This eliminates redundancy and simplifies integration between different systems. For instance, if three AI applications need to connect to three tools, without MCP, you would need to establish 3×3, or 9 separate connections. With MCP, only 3+3, or 6 standardized connections are required. This makes AI-tool connections more efficient, allowing AI to access and use the necessary data for improved responses.

Latest Trends: Claude and the Industry’s Shift Toward MCP Adoption

The growing trend of linking AI models with external tools isn’t unique to Anthropic. Since 2023, OpenAI’s ChatGPT has been expanding its capabilities through plugins and function calling, enabling tasks like web searches and calculator functions. Microsoft has also integrated ChatGPT into Bing to enhance web search functionality. These efforts reflect a broader industry movement focused on making AI more practical and integrated into daily workflows. In this landscape, Anthropic made a notable impact by introducing the Model Context Protocol (MCP) in November 2024, offering a standardized approach that’s quickly gaining recognition as a potential industry standard. Claude, Anthropic’s AI assistant, comes equipped with a built-in MCP client, making it easy for apps powered by Claude to connect directly to MCP-compatible servers. For example, in Claude’s desktop application, users can connect services like Google Drive, Slack, and GitHub with just a few clicks, allowing Claude to retrieve relevant information from those platforms in real time.

After MCP’s launch, many developer tools and services quickly began to support the protocol. GitHub Copilot and the development IDE Cursor were among the first to announce MCP integration. Other platforms, including Zed Editor, Replit, Codeium, and Sourcegraph, are also embedding MCP into their ecosystems to enhance how AI assistants understand and interact with code. In summary, the push to connect AI models with external systems is a major trend in the tech industry, and MCP is rapidly gaining traction as an open standard. Some experts say the MCP has actually gained momentum enough to become a favorite in the race to define the “AI Agent Standard” between 2023 and 2025.

As more companies adopt MCP or experiment with similar solutions, the industry is seeing rapid progress in making AI assistants more powerful, flexible, and useful in practical environments.

MCP Strengths: Supporting Diverse LLMs and Operating in Closed Network

MCP is gaining industry attention for several compelling advantages. First and foremost, it standardizes the way AI models connect with tools, significantly improving development efficiency. For developers, this means that once a new AI service is integrated using the MCP standard, it can be reused across multiple systems, streamlining workflows and boosting productivity. This standardized approach also leads to more consistent code implementation, resulting in greater service stability and easier maintenance. From the end-user perspective, this translates into a smoother, more unified experience when interacting with AI features across different applications.

Another key benefit is MCP’s flexibility, as it’s not tied to any specific AI vendor or model. It supports a wide range of large language models (LLMs), including proprietary systems like Anthropic’s Claude and OpenAI’s GPT series, as well as open-source alternatives. For example, developers can utilize open-source LLMs like Llama 2 or DeepSeek by simply meeting the MCP standard, avoiding lock-in to a single provider’s API. This flexibility extends to closed-network environments—organizations operating in secure, internet-restricted settings can still deploy AI assistants by installing open-source models and MCP connectors internally. Even without external internet access, AI functionality can be implemented as long as the MCP protocol is followed, making it ideal for high-security or offline systems.

Additionally, MCP is designed to work across various infrastructure environments. It’s compatible with a wide range of hypervisors and virtualization platforms, whether on-premises or in the cloud. This includes solutions like VMware, Proxmox, and OpenStack, all of which can be managed by AI through a single MCP interface. Since MCP operates independently of specific infrastructure types, organizations can integrate AI without overhauling their existing systems. Moreover, MCP is built for two-way communication, AI can send requests to external tools and receive real-time notifications when specific events occur. Access control is also highly customizable; the MCP server can define granular permissions to ensure AI only accesses the necessary data, preserving security and privacy. In summary, MCP stands out as a modern integration standard that balances flexibility, compatibility, and secure deployment across diverse IT environments.

OpsNow Use Case: Conversational Virtual Machine Management with MCP

OpsNow demonstrates practical MCP integration through two use cases. Setting up MCP-based workflows is simple, and the foundational setup can be done as follows:

  1. Combine OpsNow FinOps with MCP.

  • Third Parties MCP Server
    • Connect with a variety of already public external MCP servers to extend capabilities.
  • LLM Vendor Desktop Application (Claude)
    • Claude, Anthropic’s powerful LLM, connects seamlessly with various MCP servers to process user commands in natural language. It interprets user requests, initiates the appropriate MCP server calls, and facilitates interaction with external systems.
  • OpsNow MCP Server
    • Claude connects to two custom-built MCP servers designed specifically to access cloud cost and asset information within OpsNow.
      • OpsNow Cost MCP Server: Retrieves cloud cost information from OpsNow.
      • OpsNow Asset MCP Server: Provides detailed data on active assets, including servers, networks, databases, and more.
  • OpsNow MCP Provider
    • Each MCP server is internally connected to the OpsNow MCP Provider, a component that bridges Claude’s MCP requests to the actual OpsNow API.
  • OpsNow Resources
    • The actual data is provided by OpsNow’s internal system, known as OpsNow Resources.

In this context, existing FinOps services and resources are referred to as OpsNow Resources. We'll explore the practicality of MCP by developing the green-highlighted section in a simplified way. For now, the setup will operate using dummy data for testing purposes rather than live production data.

OpsNow MCP Provider

This project acts as a provider that enables Claude to communicate with MCP servers to retrieve asset and cost information from OpsNow. The overall system is structured as follows:

  • Asynchronous web server built with FastAPI
  • API client for asset queries (asset_api_client.py)
  • API client for cost queries (cost_api_client.py)
  • Main application that handles requests from the MCP server (main.py)

main.py: The entry point for handling MCP requests

Python

from fastapi import FastAPI
from asset_api_client import get_assets
from cost_api_client import get_costs

app = FastAPI()

@app.get("/health")
async def health_check():
    return {"status": "ok"}

@app.get("/assets")
async def get_assets_data():
    return await get_assets()

@app.get("/costs")
async def get_costs_data():
    return await get_costs()
  • /assets: API endpoint called when Claude requests the asset list
  • /costs: API endpoint called when Claude requests cost data
  • /health: Health check endpoint used to verify the MCP server’s status

cost_api_client.py: Provides cost data

Python

async def get_costs():
    return {
        "costs": [
            {
                "cloud_provider": "<CSP_NAME>",  # Example: AWS, Azure, GCP, etc.
                "monthly_costs": [
                    {
                        "month": "<YYYY-MM>",  # Example: 2025-03
                        "total": "<TOTAL_COST>",
                        "by_service": {
                            "<SERVICE_NAME_1>": "<COST_1>",
                            "<SERVICE_NAME_2>": "<COST_2>",
                            # ...
                        }
                    },
                    # ...Multiple monthly cost data
                ]
            },
            # ...Multiple CSP
        ]
    }
  • It provides cost data based on dummy data rather than actual OpsNow API integration. 
  • When Claude receives requests like "What is the cost for this month?", this data will be returned.

asset_api_client.py: Provides asset data

Python

async def get_assets():
    return {
        "AWS": [
            {
                "id": "<RESOURCE_ID>",
                "type": "<RESOURCE_TYPE>",  # Example: EC2, RDS 
                "region": "<REGION_CODE>",
                "status": "<STATUS>"        # Example: running, stopped 
            },
            # ...Multiple Assets
        ]
    }

  • It provides an asset list based on dummy data, rather than actual OpsNow API integration. 
  • When Claude receives requests like "What resources are currently in use?", this data will be returned.

How to Run

# Default Installation
pip install -r requirements.txt
# 서버 실행
python main.py

Full Source Code: opsnow-mcp-provider

OpsNow MCP Server

OpsNow Cost MCP Server

Now, let's take a closer look at the role of the "OpsNow Cost MCP Server" in the Claude MCP integration structure. This component receives requests from the Claude desktop application and responds with OpsNow cost data using the MCP protocol.

Before starting this section, it is highly recommended to review the following document.

Key Technologies

  • Node.js + TypeScript
  • @modelcontextprotocol/sdk: Official SDK for developing the Claude MCP server
  • node-fetch: Used for communication with the Provider API
  • zod: Schema validation library

src/index.ts: MCP server initialization

JavaScript

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Create server instance
const server = new McpServer({
  name: "cloud-cost",
  version: "1.0.0",
});

Define Input Schema and Register Tools

JavaScript

server.tool(
  "get-cost",
  "Get cloud cost summary for multiple vendors and months",
  {
    vendors: z.array(z.string()).optional().describe("List of cloud vendor names"),
    months: z.array(z.string()).optional().describe("List of months in YYYY-MM format"),
  },
  async ({ vendors, months }) => {
    ...
  }
);

Retrieve cost data from the Provider API

JavaScript

async function readCostData(): Promise<any | null> {
  const response = await fetch('http://localhost:8000/api/v1/costs/info');
  ...
  return data;
}

Build

# Default Installation
npm install
# Build
npm run build

Full Source Code: opsnow-mcp-cost-server

OpsNow Asset MCP Server

Since the structure is identical to the OpsNow Cost MCP Server, the explanation is omitted.

Full Source Code: opsnow-mcp-asset-server

Usage in Claude Desktop

Environment Settings

1. Claude Desktop Settings > Developer > Edit Settings

2. Open the claude_desktop_config.json file.

Register cloud-cost-server and cloud-asset-server settings.

JavaScript

{
  "mcpServers": {
   "cloud-cost-server": {
      "command": "node",
      "args": [
        "/Users/tae-joolee/codeProject/opsnow-mcp-cost-server/build/index.js"
      ]
    },
    "cloud-asset-server": {
      "command": "node",
      "args": [
        "/Users/tae-joolee/codeProject/opsnow-mcp-asset-server/build/index.js"
      ]
    }
  }
}

3. Verify successful registration

Check for "Hammer2" in the prompt input field

You can view the MCP server list by clicking

Usage Example

Based on the powerful performance of the Claude Desktop Agent, it demonstrates quite useful potential.

1. "What is the cloud cost for April?"

2. "Is it higher than March? If so, what’s the reason?"

3. "What is cloud usage?"

4. Then, visualize it.

Deploying My MCP Server

Smithery.ai is a central hub that allows you to search, install, and manage Model Context Protocol (MCP) servers to extend the capabilities of large language models (LLMs). Developers can use Smithery to integrate various external tools and data with LLMs, enabling the creation of more powerful AI systems.

Key Features of Smithery.ai

  • MCP Server Registry: smithery hosts over 4,500 MCP servers, enabling LLMs to perform a wide range of functions. For example, it supports integrations with GitHub, Google Drive, PostgreSQL, Slack, Brave Search, and more.

  1. Log in to Smithery. Click the login button at the top right of the screen, and you’ll need to log in using your GitHub account.

  1. Click "+Add Server" at the top right of the screen.

  1. Select one of the MCP Server projects registered in my GitHub.

  1. Here, I will select opsnow-mcp-asset-server-no-server.

  1. Click the Create button.

  1. Enter the information and click the Save button.

  1. The deployment is performed automatically (though it can also be done manually).

Registering the Deployed MCP Server

1. Open the claude_desktop_config.json file.

Register the MCP server information for cloud-cost-server, cloud-asset-server.

JavaScript

{
  "mcpServers": {
    "opsnow-mcp-cost-server-no-server": {
      "command": "npx",
      "args": [
        "-y",
        "@smithery/cli@latest",
        "run",
        "@taejulee/opsnow-mcp-cost-server-no-server",
        "--key",
        "19e6494b-1fc7-47dc-81fb-2c26fb2f0277"
      ]
    },
    "opsnow-mcp-asset-server-no-server": {
      "command": "npx",
      "args": [
        "-y",
        "@smithery/cli@latest",
        "run",
        "@taejulee/opsnow-mcp-asset-server-no-server",
        "--key",
        "19e6494b-1fc7-47dc-81fb-2c26fb2f0277"
      ]
    }
  }
}

Future Expansion Proposal: Claude MCP-Based OpsNow Integration Architecture

  • OpsNow Desktop Application
    • It is a hybrid desktop application that integrates control of various MCP-supported LLM desktop apps, such as Claude and ChatGPT.
    • Supports various MCP-compatible LLMs (Claude, GPT, etc.)
    • Integrates with third-party MCP servers (Sequential Thinking, GitHub, etc.) and the internal OpsNow MCP Server
    • Users can directly enter their LLM API Key to connect to the desired model
    • Provides a natural workflow within the Chat UI using the OpsNow ReAct (Reasoning + Acting) framework
  • OpsNow MCP Provider
    • It serves as an API service that provides OpsNow's asset, cost, and connection information to external LLMs.
      • Endpoint server serving data based on OpsNow Resources
      • Built-in License Key Manager:
        • Authentication and Authorization Isolation via OpsNow License Key
        • Company-Specific Mapping by Key
        • Provide Customized Information for Each Customer (e.g., limit access to specific service ranges)
        • Capability Extension through OpsNow API Bridge and Connect API
  • License Key-Based Control System
    • All MCP communications are authenticated based on the License Key, providing the following security/control features:
      • Limit Call Scope by Key (e.g., restrict access to specific asset queries)
      • Provide Dedicated Data Based on Customer ID
      • Ensure Scalability for Multiple Companies and Customers
  • Diverse MCP Compatibility and Third-Party Integration
    • With the strengthened MCP support from LLM vendors like GPT and Gemini, third-party LLMs can also be integrated through the MCP server, in addition to Claude.

Through this system design and PoC, we have confirmed that Claude-based AI can extend beyond simple Q&A to become an agent that directly queries and interprets OpsNow's FinOps data.

Through this structure, we have discovered the following possibilities for the future:

  • AI-based operational analysis automation for all resources within the company
  • Integration of a customer-specific portal with LLM agents
  • User-specific LLM model selection → License-based control → Completion of natural language interface

Claude MCP has now moved beyond the experimental introduction phase and is becoming a core tool for AI operations. Based on this, OpsNow aims to leap forward by offering new user interfaces and enhanced customer experiences.

2. Integration of OpsNow Prime with MCP

MCP technology will also be integrated into OpsNow Prime, the on-premises CMP solution scheduled for release in the first half of 2025. This integration enables infrastructure automation through AI, allowing users to perform complex tasks with simple commands.

OpsNow Prime is a Private & On-Premises Cloud Management Platform designed to manage enterprise IT infrastructure and cloud resources. By integrating MCP, we can demonstrate how an AI assistant simplifies infrastructure operations. In simple terms, users can issue commands through a chat interface, and the AI handles tasks like creating or shutting down virtual machines (VMs). For example, when an administrator asks the OpsNow AI chatbot, “Create a new virtual server,” the AI will automatically call the appropriate virtualization platform API to provision the VM and return the result. Previously, developing these services required a lot of time, effort, and resources, but MCP, an open standard, made it easier and more efficient to link together.

The OpsNow team has developed a dedicated MCP server that integrates internal systems—such as virtualization hypervisor APIs into the standardized Model Context Protocol (MCP) framework. By connecting large language models (LLMs) like Anthropic Claude to OpsNow as MCP clients, AI can now interact directly with hypervisors like Proxmox or VMware. These models can execute real-world infrastructure tasks based on natural language commands. Within OpsNow, a specialized module called the MCP Provider acts as an adapter, bridging various hypervisors and OpsNow’s native APIs. For example, when a user instructs the AI to “delete a VM,” the MCP Provider identifies the correct platform, performs the appropriate API call, and sends the result back through the MCP interface. This seamless integration allows users to manage infrastructure effortlessly—no need to access complex consoles. Simply speak, and the AI handles the rest.

Let’s explore a real-world use case where Claude Desktop utilizes the OpsNow Prime MCP Server to easily create and activate virtual machines (VMs) on Proxmox, with full lifecycle management handled by OpsNow Prime.

This overview covers the entire process, from building the MCP server to deployment and operation. The MCP server was developed using Cursor and is built on Node.js. The basic server structure was obtained from the official MCP repository on GitHub.

To begin, we cloned the source code from the example MCP server Git repository to utilize the basic structure. Using the Postman collection file and TypeScript files provided, we studied the architecture and workflow, which allowed us to successfully create the OpsNow Prime MCP server.

Postman collection file

JSON

"item":[
   {
      "name":"Proxmox",
      "item":[
         {
            "name":"VM 리스트 조회(PRX-IF-VM-003)",
            "request":{
               "auth":{
                  "type":"bearer",
                  ... 중략
               },
               ... 중략
            },
            "response":[
               {
                  "name":"VM 리스트 조회(PRX-IF-VM-003)",
                  ... 중략
                  ],
... 중략

route.ts

TypeScript

import { Express, Request, Response } from 'express';
import { ProxmoxApiManager } from './proxmox/proxmoxApiManager';
import { getLogger } from './utils/logger';

const logger = getLogger('Routes');
const proxmoxManager = ProxmoxApiManager.getInstance();

export const setupRoutes = (app: Express) => {
  // Default health check endpoint
  app.get('/api/health', (req: Request, res: Response) => {
    res.status(200).json({ status: 'ok', message: 'Prime MCP server is running normally.' });
  });

  // Node related endpoints
  app.get('/api/nodes', async (req: Request, res: Response) => {
    try {
      const result = await proxmoxManager.getNodes();
      res.status(200).json(result);
    } catch (error) {
      logger.error('Node list lookup failed:', error);
      res.status(500).json({ error: 'An error occurred trying to query the node list.' });
    }
  });

// ....(skip)

Once this iterative development is complete, the generated source files make up the OpsNow Prime MCP server. Build it with the following commands; Node.js version 20 or higher is currently required. After installing the required modules and running the build, verify that the server starts properly. Results varied somewhat depending on the LLM model used, so we repeated the refinement and testing cycle to improve accuracy.

Shell

## Check Node.js version
node --version

## Build the project
npm install && npm run build

## Check that the server starts normally
node dist/index.js

When you ask Cursor to run tests for the OpsNow Prime MCP server, it executes the API call tests and automatically fixes any issues that arise. Once you confirm there are no errors, click the settings button at the top of Cursor to register the newly created MCP server.

Click on "MCP" on the left to navigate to the MCP management screen.

Click on "Add New Global MCP Server" in the screen above, and specify the absolute path of the index.js file generated after the build in the "args" section.
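Cursor stores global MCP server entries as JSON (in `~/.cursor/mcp.json` on recent versions). A minimal sketch of such an entry is shown below; the server name `opsnow-prime` and the path are placeholders to be replaced with your own build output.

```json
{
  "mcpServers": {
    "opsnow-prime": {
      "command": "node",
      "args": ["/absolute/path/to/prime-mcp/dist/index.js"]
    }
  }
}
```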

After that, when you create a server or retrieve instance information through Cursor or Claude, the relevant API is called to perform the requested operation. If the API definitions or authentication information are incorrect, errors may occur several times, but Cursor automatically repeats the fix-and-rebuild cycle until they are resolved.

While the MCP server can also be used in Cursor, we tested it functionally across platforms to verify that it works well in a cross-platform environment. Below is how to register the MCP server in Claude.
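Claude Desktop reads MCP servers from its `claude_desktop_config.json` file (reachable via Settings > Developer) using the same `mcpServers` schema. A minimal sketch, with the server name and absolute path as placeholders:

```json
{
  "mcpServers": {
    "opsnow-prime": {
      "command": "node",
      "args": ["/absolute/path/to/prime-mcp/dist/index.js"]
    }
  }
}
```

After saving the file and restarting Claude Desktop, the server's tools become available in the chat interface.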

Now we are ready to use the MCP server to manage resources. Simply enter "Create 1 VM in Prime Proxmox" in Claude Desktop, and it will work with the OpsNow Prime MCP server to create the VM.
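Behind the scenes, Claude translates that sentence into an MCP `tools/call` request to the server. The JSON below follows the request shape defined by the MCP specification; the tool name `create_vm` and its arguments are hypothetical, since the actual tool names exposed by the OpsNow Prime MCP server are not shown here.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "create_vm",
    "arguments": {
      "platform": "proxmox",
      "node": "pve01",
      "name": "test-vm-01"
    }
  }
}
```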

You can quickly confirm that the VM is created in Proxmox and check its status in OpsNow Prime. Notably, even when a storage connectivity error occurred along the way, an alternative approach was chosen and the VM creation completed without issue. The created VMs can be viewed in real time from the Proxmox console, and their status can also be checked in the VM management menu in OpsNow Prime.

Now, let's stop and delete the created resources. If you type "Delete the VM you just created" in Claude Desktop, it will automatically call the APIs to stop the VM and then delete it.
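Under the hood, "delete the VM" maps to two Proxmox calls: stop the guest, then remove it. The sketch below builds that request sequence as plain data. The endpoint paths follow the Proxmox VE REST API, but the `planVmDeletion` helper is an illustrative name of ours, not part of the OpsNow Prime MCP server.

```typescript
// Sketch: the API call sequence behind "Delete the VM you just created".
// Endpoint paths follow the Proxmox VE REST API; planVmDeletion itself
// is a hypothetical helper for illustration.

interface ApiCall {
  method: "POST" | "DELETE";
  path: string;
}

function planVmDeletion(node: string, vmid: number): ApiCall[] {
  return [
    // 1. Stop the running guest first.
    { method: "POST", path: `/api2/json/nodes/${node}/qemu/${vmid}/status/stop` },
    // 2. Then remove its configuration and disks.
    { method: "DELETE", path: `/api2/json/nodes/${node}/qemu/${vmid}` },
  ];
}

console.log(planVmDeletion("pve01", 101));
```

Keeping the plan as data makes the sequencing easy to log and verify before any destructive call is actually issued.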

You can see in real-time from the Proxmox console that the VM has been deleted.

OpsNow's use of MCP has already been demonstrated through developed and tested features such as VM creation, querying, and deletion in the Proxmox virtualization environment.

We are also expanding additional capabilities, such as shutting down VMs. For instance, what used to require an administrator to manually create a VM through the Proxmox Web interface can now be automated simply by instructing the OpsNow AI assistant.

Interestingly, these capabilities are not limited to a specific virtualization product; they can easily be generalized to VMware, OpenStack, and more. Since the OpsNow team has already built its connectivity around the MCP standard, integrating new platforms is relatively easy. Moreover, as mentioned earlier, deploying open-source LLMs and MCP servers can provide the same AI capabilities even in a closed-network data center environment.

OpsNow Prime's roadmap also includes MCP servers for detecting and handling unused or abnormal resources, automating responses to system failures, and managing ITSM (resource application/authorization), expanding beyond resource creation and deletion in on-premises infrastructure operations.

As demonstrated in the OpsNow case, the introduction of MCP is expected to empower AI to act as the "hands and feet" of the data center, ushering in a new era of automation in infrastructure operations.

____________________________________________________________________________________________________________________________________

Anthropic's MCP has emerged as a universal connector, bringing the world's information to isolated AIs, and its scope of use is steadily expanding. While it's still in the early stages, it has garnered significant attention from the industry, and practical advantages are rapidly being proven through real-world cases like OpsNow. It will be interesting to see how quickly MCP becomes a standard and integrates into everyday services. The "MCP era", connecting data and AI, is fast approaching.