From the picture above:
Left: Before MCP
Without a standard protocol, each AI model must connect to tools like Slack, Google Drive, and GitHub using its own unique API. This results in fragmented connections and complex, repetitive integrations.
Right: After MCP
With MCP, models can access various tools through a single, standardized interface. Integration becomes much simpler and more unified across systems.
Large language models (LLMs), often referred to as AI chatbots or virtual assistants, are highly intelligent but still face a significant limitation: they are typically disconnected from the outside world. In simple terms, no matter how advanced AI becomes, it will always be constrained if it cannot access essential data, like the Internet or internal company databases. For example, if you ask a current AI to "generate a graph using our sales data," it struggles because it doesn't have direct access to that data. This issue has led to numerous efforts to link AI with external tools and data sources. However, the traditional approach involved writing custom code for each AI model and tool, with each new tool requiring complex development. It's similar to the past, when every printer or keyboard needed its own cable: each AI-tool pairing had its own "specification."
Anthropic has introduced the MCP (Model Context Protocol) as an open standard protocol designed to solve this issue.
It can be simplified as "USB-C for AI." Just like USB-C connects multiple devices through a single, unified port, MCP serves as the standard for linking AI applications with various external systems in a standardized way. This eliminates redundancy and simplifies integration between different systems. For instance, if three AI applications need to connect to three tools, without MCP, you would need to establish 3×3, or 9 separate connections. With MCP, only 3+3, or 6 standardized connections are required. This makes AI-tool connections more efficient, allowing AI to access and use the necessary data for improved responses.
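The arithmetic above generalizes: M applications and N tools need M×N point-to-point integrations without a shared protocol, but only M+N when each side implements the standard once. A toy illustration:

```python
# Integrations needed for M AI apps and N tools,
# without and with a shared protocol like MCP.
def integrations_without_mcp(apps: int, tools: int) -> int:
    return apps * tools  # every app wired to every tool separately

def integrations_with_mcp(apps: int, tools: int) -> int:
    return apps + tools  # each side implements the protocol once

print(integrations_without_mcp(3, 3))  # 9
print(integrations_with_mcp(3, 3))     # 6
```

The gap widens quickly: ten apps and ten tools mean 100 custom integrations without a standard, but only 20 with one.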
The growing trend of linking AI models with external tools isn’t unique to Anthropic. Since 2023, OpenAI’s ChatGPT has been expanding its capabilities through plugins and function calling, enabling tasks like web searches and calculator functions. Microsoft has also integrated ChatGPT into Bing to enhance web search functionality. These efforts reflect a broader industry movement focused on making AI more practical and integrated into daily workflows. In this landscape, Anthropic made a notable impact by introducing the Model Context Protocol (MCP) in November 2024, offering a standardized approach that’s quickly gaining recognition as a potential industry standard. Claude, Anthropic’s AI assistant, comes equipped with a built-in MCP client, making it easy for apps powered by Claude to connect directly to MCP-compatible servers. For example, in Claude’s desktop application, users can connect services like Google Drive, Slack, and GitHub with just a few clicks, allowing Claude to retrieve relevant information from those platforms in real time.
After MCP’s launch, many developer tools and services quickly began to support the protocol. GitHub Copilot and the development IDE Cursor were among the first to announce MCP integration. Other platforms, including Zed Editor, Replit, Codeium, and Sourcegraph, are also embedding MCP into their ecosystems to enhance how AI assistants understand and interact with code. In summary, the push to connect AI models with external systems is a major trend in the tech industry, and MCP is rapidly gaining traction as an open standard. Some experts say MCP has gained enough momentum between 2023 and 2025 to become the favorite in the race to define the “AI agent standard.”
As more companies adopt MCP or experiment with similar solutions, the industry is seeing rapid progress in making AI assistants more powerful, flexible, and useful in practical environments.
MCP is gaining industry attention for several compelling advantages. First and foremost, it standardizes the way AI models connect with tools, significantly improving development efficiency. For developers, this means that once a new AI service is integrated using the MCP standard, it can be reused across multiple systems, streamlining workflows and boosting productivity. This standardized approach also leads to more consistent code implementation, resulting in greater service stability and easier maintenance. From the end-user perspective, this translates into a smoother, more unified experience when interacting with AI features across different applications.
Another key benefit is MCP’s flexibility, as it’s not tied to any specific AI vendor or model. It supports a wide range of large language models (LLMs), including proprietary systems like Anthropic’s Claude and OpenAI’s GPT series, as well as open-source alternatives. For example, developers can utilize open-source LLMs like Llama 2 or DeepSeek by simply meeting the MCP standard, avoiding lock-in to a single provider’s API. This flexibility extends to closed-network environments—organizations operating in secure, internet-restricted settings can still deploy AI assistants by installing open-source models and MCP connectors internally. Even without external internet access, AI functionality can be implemented as long as the MCP protocol is followed, making it ideal for high-security or offline systems.
Additionally, MCP is designed to work across various infrastructure environments. It’s compatible with a wide range of hypervisors and virtualization platforms, whether on-premises or in the cloud. This includes solutions like VMware, Proxmox, and OpenStack, all of which can be managed by AI through a single MCP interface. Since MCP operates independently of specific infrastructure types, organizations can integrate AI without overhauling their existing systems. Moreover, MCP is built for two-way communication: AI can send requests to external tools and receive real-time notifications when specific events occur. Access control is also highly customizable; the MCP server can define granular permissions to ensure AI only accesses the necessary data, preserving security and privacy. In summary, MCP stands out as a modern integration standard that balances flexibility, compatibility, and secure deployment across diverse IT environments.
OpsNow demonstrates practical MCP integration through two use cases. Setting up MCP-based workflows is simple, and the foundational setup can be done as follows:
In this context, existing FinOps services and resources are referred to as OpsNow Resources. We'll explore the practicality of MCP by developing the green-highlighted section in a simplified way. For now, the setup will operate using dummy data for testing purposes rather than live production data.
OpsNow MCP Provider
This project acts as a provider that enables Claude to communicate with MCP servers to retrieve asset and cost information from OpsNow. The overall system is structured as follows:
main.py: The entry point for handling MCP requests
Python
from fastapi import FastAPI
from asset_api_client import get_assets
from cost_api_client import get_costs

app = FastAPI()

@app.get("/health")
async def health_check():
    return {"status": "ok"}

@app.get("/assets")
async def get_assets_data():
    return await get_assets()

@app.get("/costs")
async def get_costs_data():
    return await get_costs()
cost_api_client.py: Provides cost data
Python
async def get_costs():
    return {
        "costs": [
            {
                "cloud_provider": "<CSP_NAME>",  # Example: AWS, Azure, GCP, etc.
                "monthly_costs": [
                    {
                        "month": "<YYYY-MM>",  # Example: 2025-03
                        "total": "<TOTAL_COST>",
                        "by_service": {
                            "<SERVICE_NAME_1>": "<COST_1>",
                            "<SERVICE_NAME_2>": "<COST_2>",
                            # ...
                        }
                    },
                    # ... multiple months of cost data
                ]
            },
            # ... multiple CSPs
        ]
    }
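To make the placeholder schema concrete, here is a small sample in that shape and a helper that totals costs per provider. The provider, services, and amounts are invented for illustration:

```python
# Hypothetical sample in the schema returned by get_costs() above;
# the provider, services, and amounts are invented for illustration.
sample = {
    "costs": [
        {
            "cloud_provider": "AWS",
            "monthly_costs": [
                {"month": "2025-03", "total": 1200.0,
                 "by_service": {"EC2": 800.0, "RDS": 400.0}},
                {"month": "2025-04", "total": 1500.0,
                 "by_service": {"EC2": 900.0, "RDS": 600.0}},
            ],
        }
    ]
}

def total_by_provider(costs: dict) -> dict:
    """Sum the monthly totals for each cloud provider."""
    result = {}
    for entry in costs["costs"]:
        result[entry["cloud_provider"]] = sum(
            m["total"] for m in entry["monthly_costs"]
        )
    return result

print(total_by_provider(sample))  # {'AWS': 2700.0}
```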
asset_api_client.py: Provides asset data
Python
async def get_assets():
    return {
        "AWS": [
            {
                "id": "<RESOURCE_ID>",
                "type": "<RESOURCE_TYPE>",  # Example: EC2, RDS
                "region": "<REGION_CODE>",
                "status": "<STATUS>"  # Example: running, stopped
            },
            # ... multiple assets
        ]
    }
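As with the cost data, a small invented sample in this shape shows how a client might summarize the assets, for example counting resources by status across providers:

```python
from collections import Counter

# Hypothetical sample in the shape returned by get_assets() above;
# the IDs, types, and regions are invented for illustration.
sample = {
    "AWS": [
        {"id": "i-001", "type": "EC2", "region": "us-east-1", "status": "running"},
        {"id": "i-002", "type": "EC2", "region": "us-west-2", "status": "stopped"},
        {"id": "db-001", "type": "RDS", "region": "us-east-1", "status": "running"},
    ]
}

def status_counts(assets: dict) -> Counter:
    """Count resources by status across all providers."""
    return Counter(
        resource["status"]
        for resources in assets.values()
        for resource in resources
    )

print(status_counts(sample))  # Counter({'running': 2, 'stopped': 1})
```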
How to Run
# Default Installation
pip install -r requirements.txt
# Run the server
python main.py
Full Source Code: opsnow-mcp-provider
OpsNow MCP Server
OpsNow Cost MCP Server
Now, let's take a closer look at the role of the "OpsNow Cost MCP Server" in the Claude MCP integration structure. This component receives requests from the Claude desktop application and responds with OpsNow cost data using the MCP protocol.
Before starting this section, it is highly recommended to review the following document.
Key Technologies
src/index.ts: MCP server initialization
JavaScript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Create server instance
const server = new McpServer({
  name: "cloud-cost",
  version: "1.0.0",
});
Define Input Schema and Register Tools
JavaScript
server.tool(
  "get-cost",
  "Get cloud cost summary for multiple vendors and months",
  {
    vendors: z.array(z.string()).optional().describe("List of cloud vendor names"),
    months: z.array(z.string()).optional().describe("List of months in YYYY-MM format"),
  },
  async ({ vendors, months }) => {
    ...
  }
);
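The handler body is elided above; its core job is to filter the provider's cost data by the optional vendors and months arguments, where omitting either means "all". That logic might look like the following sketch (shown in Python for brevity; the actual server implements it in TypeScript, and the sample data is invented):

```python
def filter_costs(costs, vendors=None, months=None):
    """Keep only the requested vendors and months; None means 'all'."""
    out = []
    for entry in costs:
        if vendors and entry["cloud_provider"] not in vendors:
            continue  # vendor not requested
        monthly = [
            m for m in entry["monthly_costs"]
            if not months or m["month"] in months
        ]
        if monthly:
            out.append({**entry, "monthly_costs": monthly})
    return out

# Invented sample data in the provider's schema
data = [
    {"cloud_provider": "AWS",
     "monthly_costs": [{"month": "2025-03", "total": 100},
                       {"month": "2025-04", "total": 120}]},
    {"cloud_provider": "GCP",
     "monthly_costs": [{"month": "2025-03", "total": 50}]},
]

print(filter_costs(data, vendors=["AWS"], months=["2025-04"]))
# [{'cloud_provider': 'AWS', 'monthly_costs': [{'month': '2025-04', 'total': 120}]}]
```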
Retrieve cost data from the Provider API
JavaScript
async function readCostData(): Promise<any | null> {
  const response = await fetch('http://localhost:8000/api/v1/costs/info');
  ...
  return data;
}
Build
# Default Installation
npm install
# Build
npm run build
Full Source Code: opsnow-mcp-cost-server
OpsNow Asset MCP Server
Since the structure is identical to the OpsNow Cost MCP Server, the explanation is omitted.
Full Source Code: opsnow-mcp-asset-server
Usage in Claude Desktop
Environment Settings
1. Claude Desktop Settings > Developer > Edit Settings
2. Open the claude_desktop_config.json file.
Register cloud-cost-server and cloud-asset-server settings.
JSON
{
  "mcpServers": {
    "cloud-cost-server": {
      "command": "node",
      "args": [
        "/Users/tae-joolee/codeProject/opsnow-mcp-cost-server/build/index.js"
      ]
    },
    "cloud-asset-server": {
      "command": "node",
      "args": [
        "/Users/tae-joolee/codeProject/opsnow-mcp-asset-server/build/index.js"
      ]
    }
  }
}
3. Verify successful registration
Check for the hammer icon (with the number 2) in the prompt input field
Clicking it displays the list of registered MCP servers
Usage Example
Powered by the Claude Desktop agent, the integration demonstrates genuinely useful potential.
1. "What is the cloud cost for April?"
2. "Is it higher than March? If so, what’s the reason?"
3. "What is cloud usage?"
4. Then, visualize it.
Deploying My MCP Server
Smithery.ai is a central hub that allows you to search, install, and manage Model Context Protocol (MCP) servers to extend the capabilities of large language models (LLMs). Developers can use Smithery to integrate various external tools and data with LLMs, enabling the creation of more powerful AI systems.
Key Features of Smithery.ai
Registering the Deployed MCP Server
1. Open the claude_desktop_config.json file.
Register the MCP server information for cloud-cost-server, cloud-asset-server.
JSON
{
  "mcpServers": {
    "opsnow-mcp-cost-server-no-server": {
      "command": "npx",
      "args": [
        "-y",
        "@smithery/cli@latest",
        "run",
        "@taejulee/opsnow-mcp-cost-server-no-server",
        "--key",
        "19e6494b-1fc7-47dc-81fb-2c26fb2f0277"
      ]
    },
    "opsnow-mcp-asset-server-no-server": {
      "command": "npx",
      "args": [
        "-y",
        "@smithery/cli@latest",
        "run",
        "@taejulee/opsnow-mcp-asset-server-no-server",
        "--key",
        "19e6494b-1fc7-47dc-81fb-2c26fb2f0277"
      ]
    }
  }
}
Future Expansion Proposal: Claude MCP-Based OpsNow Integration Architecture
Through this system design and PoC, we have confirmed that Claude-based AI can extend beyond simple Q&A to become an agent that directly queries and interprets OpsNow's FinOps data.
Through this structure, we have discovered the following possibilities for the future:
Claude MCP has now moved beyond the experimental introduction phase and is becoming a core tool for AI operations. Based on this, OpsNow aims to leap forward by offering new user interfaces and enhanced customer experiences.
2. Integration of OpsNow Prime with MCP
MCP technology will also be integrated into OpsNow Prime, the on-premises CMP solution scheduled for release in the first half of 2025. This integration enables infrastructure automation through AI, allowing users to perform complex tasks with simple commands.
OpsNow Prime is a Private & On-Premises Cloud Management Platform designed to manage enterprise IT infrastructure and cloud resources. By integrating MCP, we can demonstrate how an AI assistant simplifies infrastructure operations. In simple terms, users can issue commands through a chat interface, and the AI handles tasks like creating or shutting down virtual machines (VMs). For example, when an administrator asks the OpsNow AI chatbot, “Create a new virtual server,” the AI will automatically call the appropriate virtualization platform API to provision the VM and return the result. Previously, building such services required significant time, effort, and resources; with MCP as an open standard, linking them together becomes easier and more efficient.
The OpsNow team has developed a dedicated MCP server that integrates internal systems, such as virtualization hypervisor APIs, into the standardized Model Context Protocol (MCP) framework. By connecting large language models (LLMs) like Anthropic’s Claude to OpsNow as MCP clients, AI can now interact directly with hypervisors like Proxmox or VMware. These models can execute real-world infrastructure tasks based on natural language commands. Within OpsNow, a specialized module called the MCP Provider acts as an adapter, bridging various hypervisors and OpsNow’s native APIs. For example, when a user instructs the AI to “delete a VM,” the MCP Provider identifies the correct platform, performs the appropriate API call, and sends the result back through the MCP interface. This seamless integration lets users manage infrastructure without touching complex consoles: simply speak, and the AI handles the rest.
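Conceptually, the MCP Provider's dispatch step can be sketched as a small adapter table. The sketch below is a simplified illustration in Python; the real Provider, its backends, and their API names are internal to OpsNow:

```python
# Simplified sketch of an adapter that routes a "delete VM" request
# to the right hypervisor backend; all names here are illustrative.
class ProxmoxBackend:
    def delete_vm(self, vm_id: str) -> str:
        # A real backend would call the Proxmox REST API here
        return f"proxmox: deleted {vm_id}"

class VMwareBackend:
    def delete_vm(self, vm_id: str) -> str:
        # A real backend would call the vSphere API here
        return f"vmware: deleted {vm_id}"

# Adapter table mapping platform names to backends
BACKENDS = {"proxmox": ProxmoxBackend(), "vmware": VMwareBackend()}

def handle_delete(platform: str, vm_id: str) -> str:
    """Identify the platform and forward the request to its backend."""
    backend = BACKENDS.get(platform)
    if backend is None:
        raise ValueError(f"unknown platform: {platform}")
    return backend.delete_vm(vm_id)

print(handle_delete("proxmox", "vm-101"))  # proxmox: deleted vm-101
```

Adding a new hypervisor then amounts to registering one more backend in the table, which is why the same AI commands generalize across platforms.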
Let’s explore a real-world use case where Claude Desktop utilizes the OpsNow Prime MCP Server to easily create and activate virtual machines (VMs) on Proxmox, with full lifecycle management handled by OpsNow Prime.
This overview covers the entire process, from generating the MCP server with Cursor to deployment and operation. The server is built on Node.js, and its basic structure was taken from the official MCP repository on GitHub.
To begin, we cloned the source code from the example MCP server Git repository to utilize the basic structure. Using the Postman collection file and TypeScript files provided, we studied the architecture and workflow, which allowed us to successfully create the OpsNow Prime MCP server.
postman collection file
JSON
"item": [
  {
    "name": "Proxmox",
    "item": [
      {
        "name": "VM list lookup (PRX-IF-VM-003)",
        "request": {
          "auth": {
            "type": "bearer",
            ... (omitted)
          },
          ... (omitted)
        },
        "response": [
          {
            "name": "VM list lookup (PRX-IF-VM-003)",
            ... (omitted)
          }
        ],
        ... (omitted)
route.ts
JavaScript
import { Express, Request, Response } from 'express';
import { ProxmoxApiManager } from './proxmox/proxmoxApiManager';
import { getLogger } from './utils/logger';

const logger = getLogger('Routes');
const proxmoxManager = ProxmoxApiManager.getInstance();

export const setupRoutes = (app: Express) => {
  // Default health check endpoint
  app.get('/api/health', (req: Request, res: Response) => {
    res.status(200).json({ status: 'ok', message: 'Prime MCP server is running normally.' });
  });

  // Node-related endpoints
  app.get('/api/nodes', async (req: Request, res: Response) => {
    try {
      const result = await proxmoxManager.getNodes();
      res.status(200).json(result);
    } catch (error) {
      logger.error('Node list lookup failed:', error);
      res.status(500).json({ error: 'An error occurred while querying the node list.' });
    }
  });

  // ... (omitted)
After Cursor finishes learning the structure, it generates the source files for the OpsNow Prime MCP server. To build the server, use the commands below; Node.js version 20 or higher is required. After installing the required modules and completing the build, verify that the server starts properly. Results for the OpsNow Prime MCP server varied somewhat across LLM models, so we repeated the generation and testing cycle to improve accuracy.
# Check node version
node --version
# Build the project
npm install && npm run build
# Check that the project starts normally
node dist/index.js
When you ask Cursor to run tests for the OpsNow Prime MCP server, it executes the API call tests and automatically fixes any issues that arise. Once you confirm there are no errors, click the settings button at the top of Cursor to register the newly created MCP server.
Click on "MCP" on the left to navigate to the MCP management screen.
Click on "Add New Global MCP Server" in the screen above, and specify the absolute path of the index.js file generated after the build in the "args" section.
After that, when you create a server and retrieve instance information through Cursor or Claude, the relevant API is called to perform the requested operation. If the API's definitions or authentication information is incorrect, errors may occur multiple times, but the system will automatically go through the retraining and rebuilding process via Cursor.
While the MCP server can also be used in Cursor, it was functionally tested across platforms to verify if it works well in a cross-platform environment. Below is how to register the MCP in Claude.
Now, we're ready to use the learned MCP server to manage resources. Simply enter "Create 1 VM in Prime Proxmox" in Claude Desktop, and you'll be working with the OpsNow Prime MCP server to create the VM.
You can quickly see the VMs being created in Proxmox, as well as their status in OpsNow Prime. Notably, when storage connectivity errors occurred during the process, the assistant proposed alternative solutions and still completed the VM creation without issue. The created VMs can be viewed in real time from the Proxmox console, and their status can also be checked in real time from the VM management menu in OpsNow Prime.
Now, let's abort/delete the created resources. If you type "Delete the VM you just created" in Claude Desktop, it will automatically call the API to stop the VM you just created and then delete it.
You can see in real-time from the Proxmox console that the VM has been deleted.
OpsNow's use of MCP has already been demonstrated through developed and tested features such as VM creation, querying, and deletion in the Proxmox virtualization environment.
We are also expanding additional capabilities, such as shutting down VMs. For instance, what used to require an administrator to manually create a VM through the Proxmox Web interface can now be automated simply by instructing the OpsNow AI assistant.
Interestingly, these capabilities are not limited to specific virtualization offerings; they can be easily generalized to VMware, OpenStack, and more. Since the OpsNow team has already built its connectivity against the MCP standard, integrating new platforms is relatively easy. Moreover, as mentioned earlier, even in a closed-network data center environment, deploying open-source LLMs and MCP servers can provide the same AI capabilities.
OpsNow Prime also has a roadmap to support MCP servers for detecting and automating unused or abnormal resources, automating system failures, and managing ITSM (Resource Application/Authorization), expanding beyond just managing resource creation and deletion in the context of On-Premise infrastructure operations.
As demonstrated in the OpsNow case, the introduction of MCP is expected to empower AI to act as the "hands and feet" of the data center, ushering in a new era of automation in infrastructure operations.
____________________________________________________________________________________________________________________________________
Anthropic's MCP has emerged as a universal connector, bringing the world's information to isolated AIs, and its scope of use is steadily expanding. While it's still in the early stages, it has garnered significant attention from the industry, and practical advantages are rapidly being proven through real-world cases like OpsNow. It will be interesting to see how quickly MCP becomes a standard and integrates into everyday services. The "MCP era", connecting data and AI, is fast approaching.
From the picture above:
Left: Before MCP
Without a standard protocol, each AI model must connect to tools like Slack, Google Drive, and GitHub using its own unique API. This results in fragmented connections and complex, repetitive integrations.
Right: After MCP
With MCP, models can access various tools through a single, standardized interface. Integration becomes much simpler and more unified across systems.
Large language models (LLMs), often referred to as AI chatbots or virtual assistants, are highly intelligent but still face a significant limitation, they are typically disconnected from the outside world. In simple terms, no matter how advanced AI becomes, it will always have constraints if it cannot access essential data, like the Internet or internal company databases. For example, if you ask current AI to "generate a graph using our sales data," it struggles because it doesn't have direct access to that data. This issue has led to numerous efforts to link AI with external tools and data sources. However, the traditional approach involved writing custom code for each AI model and tool, with each new tool requiring complex development. It’s similar to the past when different "cables" were needed for each printer or keyboard, there were various "specifications" between AI and the tools.
Anthropic has introduced the MCP (Model Context Protocol) as an open standard protocol designed to solve this issue.
It can be simplified as "USB-C for AI." Just like USB-C connects multiple devices through a single, unified port, MCP serves as the standard for linking AI applications with various external systems in a standardized way. This eliminates redundancy and simplifies integration between different systems. For instance, if three AI applications need to connect to three tools, without MCP, you would need to establish 3×3, or 9 separate connections. With MCP, only 3+3, or 6 standardized connections are required. This makes AI-tool connections more efficient, allowing AI to access and use the necessary data for improved responses.
The growing trend of linking AI models with external tools isn’t unique to Anthropic. Since 2023, OpenAI’s ChatGPT has been expanding its capabilities through plugins and function calling, enabling tasks like web searches and calculator functions. Microsoft has also integrated ChatGPT into Bing to enhance web search functionality. These efforts reflect a broader industry movement focused on making AI more practical and integrated into daily workflows. In this landscape, Anthropic made a notable impact by introducing the Model Context Protocol (MCP) in November 2024, offering a standardized approach that’s quickly gaining recognition as a potential industry standard. Claude, Anthropic’s AI assistant, comes equipped with a built-in MCP client, making it easy for apps powered by Claude to connect directly to MCP-compatible servers. For example, in Claude’s desktop application, users can connect services like Google Drive, Slack, and GitHub with just a few clicks, allowing Claude to retrieve relevant information from those platforms in real time.
After MCP’s launch, many developer tools and services quickly began to support the protocol. GitHub Copilot and the development IDE Cursor were among the first to announce MCP integration. Other platforms, including Zed Editor, Replit, Codeium, and Sourcegraph, are also embedding MCP into their ecosystems to enhance how AI assistants understand and interact with code. In summary, the push to connect AI models with external systems is a major trend in the tech industry, and MCP is rapidly gaining traction as an open standard. Some experts say the MCP has actually gained momentum enough to become a favorite in the race to define the “AI Agent Standard” between 2023 and 2025.
As more companies adopt MCP or experiment with similar solutions, the industry is seeing rapid progress in making AI assistants more powerful, flexible, and useful in practical environments.
MCP is gaining industry attention for several compelling advantages. First and foremost, it standardizes the way AI models connect with tools, significantly improving development efficiency. For developers, this means that once a new AI service is integrated using the MCP standard, it can be reused across multiple systems, streamlining workflows and boosting productivity. This standardized approach also leads to more consistent code implementation, resulting in greater service stability and easier maintenance. From the end-user perspective, this translates into a smoother, more unified experience when interacting with AI features across different applications.
Another key benefit is MCP’s flexibility, as it’s not tied to any specific AI vendor or model. It supports a wide range of large language models (LLMs), including proprietary systems like Anthropic’s Claude and OpenAI’s GPT series, as well as open-source alternatives. For example, developers can utilize open-source LLMs like Llama 2 or DeepSeek by simply meeting the MCP standard, avoiding lock-in to a single provider’s API. This flexibility extends to closed-network environments—organizations operating in secure, internet-restricted settings can still deploy AI assistants by installing open-source models and MCP connectors internally. Even without external internet access, AI functionality can be implemented as long as the MCP protocol is followed, making it ideal for high-security or offline systems.
Additionally, MCP is designed to work across various infrastructure environments. It’s compatible with a wide range of hypervisors and virtualization platforms, whether on-premises or in the cloud. This includes solutions like VMware, Proxmox, and OpenStack, all of which can be managed by AI through a single MCP interface. Since MCP operates independently of specific infrastructure types, organizations can integrate AI without overhauling their existing systems. Moreover, MCP is built for two-way communication, AI can send requests to external tools and receive real-time notifications when specific events occur. Access control is also highly customizable; the MCP server can define granular permissions to ensure AI only accesses the necessary data, preserving security and privacy. In summary, MCP stands out as a modern integration standard that balances flexibility, compatibility, and secure deployment across diverse IT environments.
OpsNow demonstrates practical MCP integration through two use cases. Setting up MCP-based workflows is simple, and the foundational setup can be done as follows:
In this context, existing FinOps services and resources are referred to as OpsNow Resources. We'll explore the practicality of MCP by developing the green-highlighted section in a simplified way. For now, the setup will operate using dummy data for testing purposes rather than live production data.
OpsNow MCP Provider
This project acts as a provider that enables Claude to communicate with MCP servers to retrieve asset and cost information from OpsNow. The overall system is structured as follows:
main.py: The entry point for handling MCP requests
Python
from fastapi import FastAPI
from asset_api_client import get_assets
from cost_api_client import get_costs
app = FastAPI()
@app.get("/health")
async def health_check():
return {"status": "ok"}
@app.get("/assets")
async def get_assets_data():
return await get_assets()
@app.get("/costs")
async def get_costs_data():
return await get_costs()
cost_api_client.py: Provides cost data
Python
async def get_costs():
return {
"costs": [
{
"cloud_provider": "<CSP_NAME>", # Example: AWS, Azure, GCP, etc.
"monthly_costs": [
{
"month": "<YYYY-MM>", # Example: 2025-03
"total": "<TOTAL_COST>",
"by_service": {
"<SERVICE_NAME_1>": "<COST_1>",
"<SERVICE_NAME_2>": "<COST_2>",
# ...
}
},
# ...Multiple monthly cost data
]
},
# ...Multiple CSP
]
}
asset_api_client.py: Provides asset data
Python
async def get_assets():
return {
"AWS": [
{
"id": "<RESOURCE_ID>",
"type": "<RESOURCE_TYPE>", # Example: EC2, RDS
"region": "<REGION_CODE>",
"status": "<STATUS>" # Example: running, stopped
},
# ...Multiple Assets
]
}
How to Run
# Default Installation
pip install -r requirements.txt
# 서버 실행
python main.py
Full Source Code: opsnow-mcp-provider
OpsNow MCP Server
OpsNow Cost MCP Server
Now, let's take a closer look at the role of the "OpsNow Cost MCP Server" in the Claude MCP integration structure. This component receives requests from the Claude desktop application and responds with OpsNow cost data using the MCP protocol.
Before starting this section, it is highly recommended to review the following document.
Key Technologies
src/index.ts: MCP server initialization
JavaScript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
// Create server instance
const server = new McpServer({
name: "cloud-cost",
version: "1.0.0",
});
Define Input Schema and Register Tools
JavaScript
server.tool(
"get-cost",
"Get cloud cost summary for multiple vendors and months",
{
vendors: z.array(z.string()).optional().describe("List of cloud vendor names"),
months: z.array(z.string()).optional().describe("List of months in YYYY-MM format"),
},
async ({ vendors, months }) => {
...
}
);
Retrieve cost data from the Provider API
JavaScript
async function readCostData(): Promise<any | null> {
const response = await fetch('http://localhost:8000/api/v1/costs/info');
...
return data;
}
Build
# Default Installation
npm install
# Build
npm run build
Full Source Code: opsnow-mcp-cost-server
OpsNow Asset MCP Server
Since the structure is identical to the OpsNow Cost MCP Server, the explanation is omitted.
Full Source Code: opsnow-mcp-asset-server
Usage in Claude Desktop
Environment Settings
1. Claude Desktop Settings > Developer > Edit Settings
2. Open the claude_desktop_config.json file.
Register cloud-cost-server and cloud-asset-server settings.
JavaScript
{
"mcpServers": {
"cloud-cost-server": {
"command": "node",
"args": [
"/Users/tae-joolee/codeProject/opsnow-mcp-cost-server/build/index.js"
]
},
"cloud-asset-server": {
"command": "node",
"args": [
"/Users/tae-joolee/codeProject/opsnow-mcp-asset-server/build/index.js"
]
}
}
}
3. Verify successful registration
Check for "Hammer2" in the prompt input field
You can view the MCP server list by clicking
Usage Example
Based on the powerful performance of the Claude Desktop Agent, it demonstrates quite useful potential.
1. "What is the cloud cost for April?"
2. "Is it higher than March? If so, what’s the reason?"
3. "What is cloud usage?"
4. Then, visualize it.
Deploying My MCP Server
Smithery.ai is a central hub that allows you to search, install, and manage Model Context Protocol (MCP) servers to extend the capabilities of large language models (LLMs). Developers can use Smithery to integrate various external tools and data with LLMs, enabling the creation of more powerful AI systems.
Key Features of Smithery.ai
Registering the Deployed MCP Server
1. Open the claude_desktop_config.json file.
Register the MCP server information for cloud-cost-server, cloud-asset-server.
JavaScript
{
"mcpServers": {
"opsnow-mcp-cost-server-no-server": {
"command": "npx",
"args": [
"-y",
"@smithery/cli@latest",
"run",
"@taejulee/opsnow-mcp-cost-server-no-server",
"--key",
"19e6494b-1fc7-47dc-81fb-2c26fb2f0277"
]
},
"opsnow-mcp-asset-server-no-server": {
"command": "npx",
"args": [
"-y",
"@smithery/cli@latest",
"run",
"@taejulee/opsnow-mcp-asset-server-no-server",
"--key",
"19e6494b-1fc7-47dc-81fb-2c26fb2f0277"
]
}
}
}
Future Expansion Proposal: Claude MCP-Based OpsNow Integration Architecture
Through this system design and PoC, we have confirmed that Claude-based AI can extend beyond simple Q&A to become an agent that directly queries and interprets OpsNow's FinOps data.
Through this structure, we have discovered the following possibilities for the future:
Claude MCP has now moved beyond the experimental introduction phase and is becoming a core tool for AI operations. Based on this, OpsNow aims to leap forward by offering new user interfaces and enhanced customer experiences.
2. Integration of OpsNow Prime with MCP
MCP technology will also be integrated into OpsNow Prime, the on-premises CMP solution scheduled for release in the first half of 2025. This integration enables infrastructure automation through AI, allowing users to perform complex tasks with simple commands.
OpsNow Prime is a Private & On-Premises Cloud Management Platform designed to manage enterprise IT infrastructure and cloud resources. By integrating MCP, we can demonstrate how an AI assistant simplifies infrastructure operations. In simple terms, users issue commands through a chat interface, and the AI handles tasks like creating or shutting down virtual machines (VMs). For example, when an administrator tells the OpsNow AI chatbot, "Create a new virtual server," the AI automatically calls the appropriate virtualization platform API to provision the VM and returns the result. Previously, building such capabilities required significant time, effort, and resources; the open MCP standard makes these integrations far easier and more efficient.
The OpsNow team has developed a dedicated MCP server that integrates internal systems, such as virtualization hypervisor APIs, into the standardized Model Context Protocol (MCP) framework. By connecting large language models (LLMs) like Anthropic's Claude to OpsNow as MCP clients, AI can now interact directly with hypervisors like Proxmox or VMware and execute real-world infrastructure tasks from natural language commands. Within OpsNow, a specialized module called the MCP Provider acts as an adapter, bridging various hypervisors and OpsNow's native APIs. For example, when a user instructs the AI to "delete a VM," the MCP Provider identifies the correct platform, performs the appropriate API call, and sends the result back through the MCP interface. This integration lets users manage infrastructure without touching complex consoles: simply speak, and the AI handles the rest.
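The dispatch role of the MCP Provider described above can be sketched roughly as follows (all names here are hypothetical illustrations, not OpsNow's actual API):

```javascript
// Hypothetical sketch of an MCP Provider-style adapter.
// Each platform handler would wrap that hypervisor's native API.
const platformHandlers = {
  proxmox: { deleteVm: (vmId) => `proxmox API: VM ${vmId} deleted` },
  vmware:  { deleteVm: (vmId) => `vmware API: VM ${vmId} deleted` },
};

// Routes a "delete VM" request to the right hypervisor handler.
function dispatchDeleteVm(platform, vmId) {
  const handler = platformHandlers[platform];
  if (!handler) throw new Error(`Unsupported platform: ${platform}`);
  return handler.deleteVm(vmId);
}
```

New platforms slot in by adding another entry to the handler map, which is why generalizing to other hypervisors is comparatively easy.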
Let’s explore a real-world use case where Claude Desktop utilizes the OpsNow Prime MCP Server to easily create and activate virtual machines (VMs) on Proxmox, with full lifecycle management handled by OpsNow Prime.
This overview covers the entire process, from building the MCP server to deployment and operation. The MCP server was developed using Cursor and is built on Node.js; the basic server structure came from the official MCP repository on GitHub.
To begin, we cloned the source code from the example MCP server Git repository to utilize the basic structure. Using the Postman collection file and TypeScript files provided, we studied the architecture and workflow, which allowed us to successfully create the OpsNow Prime MCP server.
Postman collection file
JSON
"item":[
{
"name":"Proxmox",
"item":[
{
"name":"VM list lookup (PRX-IF-VM-003)",
"request":{
"auth":{
"type":"bearer",
... (omitted)
},
... (omitted)
},
"response":[
{
"name":"VM list lookup (PRX-IF-VM-003)",
... (omitted)
],
... (omitted)
route.ts
JavaScript
import { Express, Request, Response } from 'express';
import { ProxmoxApiManager } from './proxmox/proxmoxApiManager';
import { getLogger } from './utils/logger';
const logger = getLogger('Routes');
const proxmoxManager = ProxmoxApiManager.getInstance();
export const setupRoutes = (app: Express) => {
// Default health check endpoint
app.get('/api/health', (req: Request, res: Response) => {
res.status(200).json({ status: 'ok', message: 'Prime MCP server is running normally.' });
});
// Node related endpoints
app.get('/api/nodes', async (req: Request, res: Response) => {
try {
const result = await proxmoxManager.getNodes();
res.status(200).json(result);
} catch (error) {
logger.error('Node list lookup failed:', error);
res.status(500).json({ error: 'An error occurred trying to query the node list.' });
}
});
// ... (remaining routes omitted)
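Following the same try/catch pattern, a VM-creation endpoint might look like the sketch below (hypothetical; `createVm` and its parameters are assumptions, and the handler is written as a plain factory function so the pattern is visible without an Express app):

```javascript
// Hypothetical VM-creation handler in the same style as the routes above.
// `manager` stands in for ProxmoxApiManager.getInstance().
function makeCreateVmHandler(manager) {
  return async (req, res) => {
    try {
      // Delegate provisioning to the hypervisor manager
      const result = await manager.createVm(req.body);
      res.status(201).json(result);
    } catch (error) {
      res.status(500).json({ error: 'An error occurred while creating the VM.' });
    }
  };
}
```

In the real route file this would be registered with `app.post('/api/vms', makeCreateVmHandler(proxmoxManager))` or similar.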
Once Cursor finishes generating code from the provided materials, the source files for the OpsNow Prime MCP server are ready. Build them with the following commands; Node.js version 20 or higher is currently used. After installing the required modules and completing the build, verify that the server starts properly. For the OpsNow Prime MCP server, results differed somewhat depending on the LLM model, so we repeated the generation and testing cycle to improve accuracy.
Shell
# Check Node version
node --version
# Build the project
npm install && npm run build
# Check that the server starts normally
node dist/index.js
When you ask Cursor to run tests for the OpsNow Prime MCP server, it executes the API call tests and automatically fixes any issues that come up. Once you confirm there are no errors, click the settings button at the top of Cursor to register the newly created MCP server.
Click on "MCP" on the left to navigate to the MCP management screen.
Click on "Add New Global MCP Server" in the screen above, and specify the absolute path of the index.js file generated after the build in the "args" section.
After that, when you create a server or retrieve instance information through Cursor or Claude, the relevant API is called to perform the requested operation. If the API definitions or authentication information are incorrect, errors may occur several times, but Cursor automatically iterates through regenerating and rebuilding until they are resolved.
While the MCP server can also be used from Cursor, we tested it functionally from other clients as well to verify that it works in a cross-platform environment. Below is how to register the MCP server in Claude.
Now we're ready to use the finished MCP server to manage resources. Simply enter "Create 1 VM in Prime Proxmox" in Claude Desktop, and it will work with the OpsNow Prime MCP server to create the VM.
You can quickly see the VMs being created in Proxmox, and you can also check the status of the created VMs in OpsNow Prime. Notably, when storage connectivity errors occurred during the process, the assistant settled on alternative approaches and still completed VM creation without issue. The created VMs can be viewed in real time from the Proxmox console, and their status can also be checked in real time from the VM management menu in OpsNow Prime.
Now, let's stop and delete the created resources. If you type "Delete the VM you just created" in Claude Desktop, it automatically calls the API to stop the VM and then delete it.
You can see in real-time from the Proxmox console that the VM has been deleted.
OpsNow's use of MCP has already been demonstrated through developed and tested features such as VM creation, querying, and deletion in the Proxmox virtualization environment.
We are also expanding additional capabilities, such as shutting down VMs. For instance, what used to require an administrator to manually create a VM through the Proxmox Web interface can now be automated simply by instructing the OpsNow AI assistant.
Interestingly, these capabilities are not limited to a specific virtualization product; they generalize readily to VMware, OpenStack, and more. Since the OpsNow team has already built its connectors against the MCP standard, integrating new platforms is relatively easy. Moreover, as mentioned earlier, even in a closed-network data center environment, deploying open-source LLMs and MCP servers can provide the same AI capabilities.
OpsNow Prime's roadmap also includes MCP servers for detecting and remediating unused or abnormal resources, automating responses to system failures, and handling ITSM tasks (resource requests and approvals), expanding beyond resource creation and deletion in on-premises infrastructure operations.
As demonstrated in the OpsNow case, the introduction of MCP is expected to empower AI to act as the "hands and feet" of the data center, ushering in a new era of automation in infrastructure operations.
____________________________________________________________________________________________________________________________________
Anthropic's MCP has emerged as a universal connector, bringing the world's information to isolated AIs, and its scope of use is steadily expanding. While it's still in the early stages, it has garnered significant attention from the industry, and practical advantages are rapidly being proven through real-world cases like OpsNow. It will be interesting to see how quickly MCP becomes a standard and integrates into everyday services. The "MCP era", connecting data and AI, is fast approaching.
From the picture above:
Left: Before MCP
Without a standard protocol, each AI model must connect to tools like Slack, Google Drive, and GitHub using its own unique API. This results in fragmented connections and complex, repetitive integrations.
Right: After MCP
With MCP, models can access various tools through a single, standardized interface. Integration becomes much simpler and more unified across systems.
Large language models (LLMs), often referred to as AI chatbots or virtual assistants, are highly intelligent but still face a significant limitation, they are typically disconnected from the outside world. In simple terms, no matter how advanced AI becomes, it will always have constraints if it cannot access essential data, like the Internet or internal company databases. For example, if you ask current AI to "generate a graph using our sales data," it struggles because it doesn't have direct access to that data. This issue has led to numerous efforts to link AI with external tools and data sources. However, the traditional approach involved writing custom code for each AI model and tool, with each new tool requiring complex development. It’s similar to the past when different "cables" were needed for each printer or keyboard, there were various "specifications" between AI and the tools.
Anthropic has introduced the MCP (Model Context Protocol) as an open standard protocol designed to solve this issue.
It can be simplified as "USB-C for AI." Just like USB-C connects multiple devices through a single, unified port, MCP serves as the standard for linking AI applications with various external systems in a standardized way. This eliminates redundancy and simplifies integration between different systems. For instance, if three AI applications need to connect to three tools, without MCP, you would need to establish 3×3, or 9 separate connections. With MCP, only 3+3, or 6 standardized connections are required. This makes AI-tool connections more efficient, allowing AI to access and use the necessary data for improved responses.
The growing trend of linking AI models with external tools isn’t unique to Anthropic. Since 2023, OpenAI’s ChatGPT has been expanding its capabilities through plugins and function calling, enabling tasks like web searches and calculator functions. Microsoft has also integrated ChatGPT into Bing to enhance web search functionality. These efforts reflect a broader industry movement focused on making AI more practical and integrated into daily workflows. In this landscape, Anthropic made a notable impact by introducing the Model Context Protocol (MCP) in November 2024, offering a standardized approach that’s quickly gaining recognition as a potential industry standard. Claude, Anthropic’s AI assistant, comes equipped with a built-in MCP client, making it easy for apps powered by Claude to connect directly to MCP-compatible servers. For example, in Claude’s desktop application, users can connect services like Google Drive, Slack, and GitHub with just a few clicks, allowing Claude to retrieve relevant information from those platforms in real time.
After MCP’s launch, many developer tools and services quickly began to support the protocol. GitHub Copilot and the development IDE Cursor were among the first to announce MCP integration. Other platforms, including Zed Editor, Replit, Codeium, and Sourcegraph, are also embedding MCP into their ecosystems to enhance how AI assistants understand and interact with code. In summary, the push to connect AI models with external systems is a major trend in the tech industry, and MCP is rapidly gaining traction as an open standard. Some experts say the MCP has actually gained momentum enough to become a favorite in the race to define the “AI Agent Standard” between 2023 and 2025.
As more companies adopt MCP or experiment with similar solutions, the industry is seeing rapid progress in making AI assistants more powerful, flexible, and useful in practical environments.
MCP is gaining industry attention for several compelling advantages. First and foremost, it standardizes the way AI models connect with tools, significantly improving development efficiency. For developers, this means that once a new AI service is integrated using the MCP standard, it can be reused across multiple systems, streamlining workflows and boosting productivity. This standardized approach also leads to more consistent code implementation, resulting in greater service stability and easier maintenance. From the end-user perspective, this translates into a smoother, more unified experience when interacting with AI features across different applications.
Another key benefit is MCP’s flexibility, as it’s not tied to any specific AI vendor or model. It supports a wide range of large language models (LLMs), including proprietary systems like Anthropic’s Claude and OpenAI’s GPT series, as well as open-source alternatives. For example, developers can utilize open-source LLMs like Llama 2 or DeepSeek by simply meeting the MCP standard, avoiding lock-in to a single provider’s API. This flexibility extends to closed-network environments—organizations operating in secure, internet-restricted settings can still deploy AI assistants by installing open-source models and MCP connectors internally. Even without external internet access, AI functionality can be implemented as long as the MCP protocol is followed, making it ideal for high-security or offline systems.
Additionally, MCP is designed to work across various infrastructure environments. It’s compatible with a wide range of hypervisors and virtualization platforms, whether on-premises or in the cloud. This includes solutions like VMware, Proxmox, and OpenStack, all of which can be managed by AI through a single MCP interface. Since MCP operates independently of specific infrastructure types, organizations can integrate AI without overhauling their existing systems. Moreover, MCP is built for two-way communication, AI can send requests to external tools and receive real-time notifications when specific events occur. Access control is also highly customizable; the MCP server can define granular permissions to ensure AI only accesses the necessary data, preserving security and privacy. In summary, MCP stands out as a modern integration standard that balances flexibility, compatibility, and secure deployment across diverse IT environments.
OpsNow demonstrates practical MCP integration through two use cases. Setting up MCP-based workflows is simple, and the foundational setup can be done as follows:
In this context, existing FinOps services and resources are referred to as OpsNow Resources. We'll explore the practicality of MCP by developing the green-highlighted section in a simplified way. For now, the setup will operate using dummy data for testing purposes rather than live production data.
OpsNow MCP Provider
This project acts as a provider that enables Claude to communicate with MCP servers to retrieve asset and cost information from OpsNow. The overall system is structured as follows:
main.py: The entry point for handling MCP requests
Python
from fastapi import FastAPI
from asset_api_client import get_assets
from cost_api_client import get_costs
app = FastAPI()
@app.get("/health")
async def health_check():
return {"status": "ok"}
@app.get("/assets")
async def get_assets_data():
return await get_assets()
@app.get("/costs")
async def get_costs_data():
return await get_costs()
cost_api_client.py: Provides cost data
Python
async def get_costs():
return {
"costs": [
{
"cloud_provider": "<CSP_NAME>", # Example: AWS, Azure, GCP, etc.
"monthly_costs": [
{
"month": "<YYYY-MM>", # Example: 2025-03
"total": "<TOTAL_COST>",
"by_service": {
"<SERVICE_NAME_1>": "<COST_1>",
"<SERVICE_NAME_2>": "<COST_2>",
# ...
}
},
# ...Multiple monthly cost data
]
},
# ...Multiple CSP
]
}
asset_api_client.py: Provides asset data
Python
async def get_assets():
return {
"AWS": [
{
"id": "<RESOURCE_ID>",
"type": "<RESOURCE_TYPE>", # Example: EC2, RDS
"region": "<REGION_CODE>",
"status": "<STATUS>" # Example: running, stopped
},
# ...Multiple Assets
]
}
How to Run
# Default Installation
pip install -r requirements.txt
# 서버 실행
python main.py
Full Source Code: opsnow-mcp-provider
OpsNow MCP Server
OpsNow Cost MCP Server
Now, let's take a closer look at the role of the "OpsNow Cost MCP Server" in the Claude MCP integration structure. This component receives requests from the Claude desktop application and responds with OpsNow cost data using the MCP protocol.
Before starting this section, it is highly recommended to review the following document.
Key Technologies
src/index.ts: MCP server initialization
JavaScript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
// Create server instance
const server = new McpServer({
name: "cloud-cost",
version: "1.0.0",
});
Define Input Schema and Register Tools
JavaScript
server.tool(
"get-cost",
"Get cloud cost summary for multiple vendors and months",
{
vendors: z.array(z.string()).optional().describe("List of cloud vendor names"),
months: z.array(z.string()).optional().describe("List of months in YYYY-MM format"),
},
async ({ vendors, months }) => {
...
}
);
Retrieve cost data from the Provider API
JavaScript
async function readCostData(): Promise<any | null> {
const response = await fetch('http://localhost:8000/api/v1/costs/info');
...
return data;
}
Build
# Default Installation
npm install
# Build
npm run build
Full Source Code: opsnow-mcp-cost-server
OpsNow Asset MCP Server
Since the structure is identical to the OpsNow Cost MCP Server, the explanation is omitted.
Full Source Code: opsnow-mcp-asset-server
Usage in Claude Desktop
Environment Settings
1. Claude Desktop Settings > Developer > Edit Settings
2. Open the claude_desktop_config.json file.
Register cloud-cost-server and cloud-asset-server settings.
JavaScript
{
"mcpServers": {
"cloud-cost-server": {
"command": "node",
"args": [
"/Users/tae-joolee/codeProject/opsnow-mcp-cost-server/build/index.js"
]
},
"cloud-asset-server": {
"command": "node",
"args": [
"/Users/tae-joolee/codeProject/opsnow-mcp-asset-server/build/index.js"
]
}
}
}
3. Verify successful registration
Check for "Hammer2" in the prompt input field
You can view the MCP server list by clicking
Usage Example
Based on the powerful performance of the Claude Desktop Agent, it demonstrates quite useful potential.
1. "What is the cloud cost for April?"
2. "Is it higher than March? If so, what’s the reason?"
3. "What is cloud usage?"
4. Then, visualize it.
Deploying My MCP Server
Smithery.ai is a central hub that allows you to search, install, and manage Model Context Protocol (MCP) servers to extend the capabilities of large language models (LLMs). Developers can use Smithery to integrate various external tools and data with LLMs, enabling the creation of more powerful AI systems.
Key Features of Smithery.ai
Registering the Deployed MCP Server
1. Open the claude_desktop_config.json file.
Register the MCP server information for cloud-cost-server, cloud-asset-server.
JavaScript
{
"mcpServers": {
"opsnow-mcp-cost-server-no-server": {
"command": "npx",
"args": [
"-y",
"@smithery/cli@latest",
"run",
"@taejulee/opsnow-mcp-cost-server-no-server",
"--key",
"19e6494b-1fc7-47dc-81fb-2c26fb2f0277"
]
},
"opsnow-mcp-asset-server-no-server": {
"command": "npx",
"args": [
"-y",
"@smithery/cli@latest",
"run",
"@taejulee/opsnow-mcp-asset-server-no-server",
"--key",
"19e6494b-1fc7-47dc-81fb-2c26fb2f0277"
]
}
}
}
Future Expansion Proposal: Claude MCP-Based OpsNow Integration Architecture
Through this system design and PoC, we have confirmed that Claude-based AI can extend beyond simple Q&A to become an agent that directly queries and interprets OpsNow's FinOps data.
Through this structure, we have discovered the following possibilities for the future:
Claude MCP has now moved beyond the experimental introduction phase and is becoming a core tool for AI operations. Based on this, OpsNow aims to leap forward by offering new user interfaces and enhanced customer experiences.
2. Integration of OpsNow Prime with MCP
MCP technology will also be integrated into OpsNow Prime, the on-premises CMP solution scheduled for release in the first half of 2025. This integration enables infrastructure automation through AI, allowing users to perform complex tasks with simple commands.
OpsNow Prime is a Private & On-Premises Cloud Management Platform designed to manage enterprise IT infrastructure and cloud resources. By integrating MCP, we can demonstrate how an AI assistant simplifies infrastructure operations. In simple terms, users can issue commands through a chat interface, and the AI handles tasks like creating or shutting down virtual machines (VMs). For example, when an administrator asks the OpsNow AI chatbot, “Create a new virtual server,” the AI will automatically call the appropriate virtualization platform API to provision the VM and return the result. Previously, developing these services required a lot of time, effort, and resources, but MCP, an open standard, made it easier and more efficient to link together.
The OpsNow team has developed a dedicated MCP server that integrates internal systems—such as virtualization hypervisor APIs into the standardized Model Context Protocol (MCP) framework. By connecting large language models (LLMs) like Anthropic Claude to OpsNow as MCP clients, AI can now interact directly with hypervisors like Proxmox or VMware. These models can execute real-world infrastructure tasks based on natural language commands. Within OpsNow, a specialized module called the MCP Provider acts as an adapter, bridging various hypervisors and OpsNow’s native APIs. For example, when a user instructs the AI to “delete a VM,” the MCP Provider identifies the correct platform, performs the appropriate API call, and sends the result back through the MCP interface. This seamless integration allows users to manage infrastructure effortlessly—no need to access complex consoles. Simply speak, and the AI handles the rest.
Let’s explore a real-world use case where Claude Desktop utilizes the OpsNow Prime MCP Server to easily create and activate virtual machines (VMs) on Proxmox, with full lifecycle management handled by OpsNow Prime.
This overview covers the entire process—from training the MCP server to deployment and operation. The MCP server was developed using Cursor, and it's built on Node.js. The basic server structure was obtained from the official MCP repository on GitHub.
To begin, we cloned the source code from the example MCP server Git repository to utilize the basic structure. Using the Postman collection file and TypeScript files provided, we studied the architecture and workflow, which allowed us to successfully create the OpsNow Prime MCP server.
postman collection file
JavaScript
"item":[
{
"name":"Proxmox",
"item":[
{
"name":"VM 리스트 조회(PRX-IF-VM-003)",
"request":{
"auth":{
"type":"bearer",
... 중략
},
... 중략
},
"response":[
{
"name":"VM 리스트 조회(PRX-IF-VM-003)",
... 중략
],
... 중략
route.ts
JavaScript
import { Express, Request, Response } from 'express';
import { ProxmoxApiManager } from './proxmox/proxmoxApiManager';
import { getLogger } from './utils/logger';
const logger = getLogger('Routes');
const proxmoxManager = ProxmoxApiManager.getInstance();
export const setupRoutes = (app: Express) => {
// Default health check endpoint
app.get('/api/health', (req: Request, res: Response) => {
res.status(200).json({ status: 'ok', message: 'Prime MCP 서버가 정상 동작 중입니다.' });
});
// Node related endpoints
app.get('/api/nodes', async (req: Request, res: Response) => {
try {
const result = await proxmoxManager.getNodes();
res.status(200).json(result);
} catch (error) {
logger.error('Node list lookup failed:', error);
res.status(500).json({ error: 'An error occurred trying to query the node list.' });
}
});
// ....(skip)
After completing the training, source files are generated based on the learned results for the OpsNow Prime MCP server. To build it, you use the following command. Currently, Node.js version 20 or higher is used. After installing the required modules and proceeding with the build, you verify whether the server is running properly. For the OpsNow Prime MCP server, there were some differences depending on the LLM model, but to improve accuracy, we conducted retraining and testing.
JavaScript
## Check node version
node -version
## build a project
npm install & npm run build
## Check project normal startup
node dist/index.js
When you're asked to run tests for the OpsNow Prime MCP server in Cursor, the API call test will be executed. If any issues arise, it will automatically fix them. Once you confirm there are no errors, click the settings button at the top of Cursor to proceed with registering the newly created MCP server.
Click on "MCP" on the left to navigate to the MCP management screen.
Click on "Add New Global MCP Server" in the screen above, and specify the absolute path of the index.js file generated after the build in the "args" section.
After that, when you create a server and retrieve instance information through Cursor or Claude, the relevant API is called to perform the requested operation. If the API's definitions or authentication information is incorrect, errors may occur multiple times, but the system will automatically go through the retraining and rebuilding process via Cursor.
While the MCP server can also be used in Cursor, it was functionally tested across platforms to verify if it works well in a cross-platform environment. Below is how to register the MCP in Claude.
Now, we're ready to use the learned MCP server to manage resources. Simply enter "Create 1 VM in Prime Proxmox" in Claude Desktop, and you'll be working with the OpsNow Prime MCP server to create the VM.
You can easily and quickly see that VMs are created in Proxmox, and you can also check the status of VMs created in OpsNow Prime. It should be noted that even if storage connectivity errors occur during the process, we decided on alternative solutions and successfully completed the VM creation without any issues. The created VMs can be viewed in real time from the Proxmox console, and you can also check their status in real time from the VM management menu in OpsNow Prime.
Now, let's abort/delete the created resources. If you type "Delete the VM you just created" in Claude Desktop, it will automatically call the API to stop the VM you just created and then delete it.
You can see in real-time from the Proxmox console that the VM has been deleted.
OpsNow's MCP utilization has already been demonstrated through features such as VM creation, querying, and deletion within the Proxmox virtualization environment via MCP, which has been developed and tested.
We are also expanding additional capabilities, such as shutting down VMs. For instance, what used to require an administrator to manually create a VM through the Proxmox Web interface can now be automated simply by instructing the OpsNow AI assistant.
Interestingly, these capabilities are not limited to specific virtualization offerings; they can be easily generalized to VMware, OpenStack, and more. Since the OpsNow team has already deployed connectivity to MCP standards, it is relatively easy to integrate new platforms. Moreover, as mentioned earlier, even in a closed-network data center environment, deploying open-source LLM and MCP servers can provide the same AI capabilities.
OpsNow Prime also has a roadmap to support MCP servers for detecting and automating unused or abnormal resources, automating system failures, and managing ITSM (Resource Application/Authorization), expanding beyond just managing resource creation and deletion in the context of On-Premise infrastructure operations.
As demonstrated in the OpsNow case, the introduction of MCP is expected to empower AI to act as the "hands and feet" of the data center, ushering in a new era of automation in infrastructure operations.
____________________________________________________________________________________________________________________________________
Anthropic's MCP has emerged as a universal connector, bringing the world's information to isolated AIs, and its scope of use is steadily expanding. While it's still in the early stages, it has garnered significant attention from the industry, and practical advantages are rapidly being proven through real-world cases like OpsNow. It will be interesting to see how quickly MCP becomes a standard and integrates into everyday services. The "MCP era", connecting data and AI, is fast approaching.
From the picture above:
Left: Before MCP
Without a standard protocol, each AI model must connect to tools like Slack, Google Drive, and GitHub using its own unique API. This results in fragmented connections and complex, repetitive integrations.
Right: After MCP
With MCP, models can access various tools through a single, standardized interface. Integration becomes much simpler and more unified across systems.
Large language models (LLMs), often referred to as AI chatbots or virtual assistants, are highly intelligent but still face a significant limitation, they are typically disconnected from the outside world. In simple terms, no matter how advanced AI becomes, it will always have constraints if it cannot access essential data, like the Internet or internal company databases. For example, if you ask current AI to "generate a graph using our sales data," it struggles because it doesn't have direct access to that data. This issue has led to numerous efforts to link AI with external tools and data sources. However, the traditional approach involved writing custom code for each AI model and tool, with each new tool requiring complex development. It’s similar to the past when different "cables" were needed for each printer or keyboard, there were various "specifications" between AI and the tools.
Anthropic has introduced the MCP (Model Context Protocol) as an open standard protocol designed to solve this issue.
It can be simplified as "USB-C for AI." Just like USB-C connects multiple devices through a single, unified port, MCP serves as the standard for linking AI applications with various external systems in a standardized way. This eliminates redundancy and simplifies integration between different systems. For instance, if three AI applications need to connect to three tools, without MCP, you would need to establish 3×3, or 9 separate connections. With MCP, only 3+3, or 6 standardized connections are required. This makes AI-tool connections more efficient, allowing AI to access and use the necessary data for improved responses.
The growing trend of linking AI models with external tools isn’t unique to Anthropic. Since 2023, OpenAI’s ChatGPT has been expanding its capabilities through plugins and function calling, enabling tasks like web searches and calculator functions. Microsoft has also integrated ChatGPT into Bing to enhance web search functionality. These efforts reflect a broader industry movement focused on making AI more practical and integrated into daily workflows. In this landscape, Anthropic made a notable impact by introducing the Model Context Protocol (MCP) in November 2024, offering a standardized approach that’s quickly gaining recognition as a potential industry standard. Claude, Anthropic’s AI assistant, comes equipped with a built-in MCP client, making it easy for apps powered by Claude to connect directly to MCP-compatible servers. For example, in Claude’s desktop application, users can connect services like Google Drive, Slack, and GitHub with just a few clicks, allowing Claude to retrieve relevant information from those platforms in real time.
After MCP’s launch, many developer tools and services quickly began to support the protocol. GitHub Copilot and the development IDE Cursor were among the first to announce MCP integration. Other platforms, including Zed Editor, Replit, Codeium, and Sourcegraph, are also embedding MCP into their ecosystems to enhance how AI assistants understand and interact with code. In summary, the push to connect AI models with external systems is a major trend in the tech industry, and MCP is rapidly gaining traction as an open standard. Some experts say MCP has gained enough momentum to become the favorite in the 2023–2025 race to define an "AI agent standard."
As more companies adopt MCP or experiment with similar solutions, the industry is seeing rapid progress in making AI assistants more powerful, flexible, and useful in practical environments.
MCP is gaining industry attention for several compelling advantages. First and foremost, it standardizes the way AI models connect with tools, significantly improving development efficiency. For developers, this means that once a new AI service is integrated using the MCP standard, it can be reused across multiple systems, streamlining workflows and boosting productivity. This standardized approach also leads to more consistent code implementation, resulting in greater service stability and easier maintenance. From the end-user perspective, this translates into a smoother, more unified experience when interacting with AI features across different applications.
Another key benefit is MCP’s flexibility, as it’s not tied to any specific AI vendor or model. It supports a wide range of large language models (LLMs), including proprietary systems like Anthropic’s Claude and OpenAI’s GPT series, as well as open-source alternatives. For example, developers can utilize open-source LLMs like Llama 2 or DeepSeek by simply meeting the MCP standard, avoiding lock-in to a single provider’s API. This flexibility extends to closed-network environments—organizations operating in secure, internet-restricted settings can still deploy AI assistants by installing open-source models and MCP connectors internally. Even without external internet access, AI functionality can be implemented as long as the MCP protocol is followed, making it ideal for high-security or offline systems.
Additionally, MCP is designed to work across various infrastructure environments. It’s compatible with a wide range of hypervisors and virtualization platforms, whether on-premises or in the cloud. This includes solutions like VMware, Proxmox, and OpenStack, all of which can be managed by AI through a single MCP interface. Since MCP operates independently of specific infrastructure types, organizations can integrate AI without overhauling their existing systems. Moreover, MCP is built for two-way communication: AI can send requests to external tools and receive real-time notifications when specific events occur. Access control is also highly customizable; the MCP server can define granular permissions to ensure AI only accesses the necessary data, preserving security and privacy. In summary, MCP stands out as a modern integration standard that balances flexibility, compatibility, and secure deployment across diverse IT environments.
OpsNow demonstrates practical MCP integration through two use cases. Setting up MCP-based workflows is simple, and the foundational setup can be done as follows:
In this context, existing FinOps services and resources are referred to as OpsNow Resources. We'll explore the practicality of MCP by developing the green-highlighted section in a simplified way. For now, the setup will operate using dummy data for testing purposes rather than live production data.
OpsNow MCP Provider
This project acts as a provider that enables Claude to communicate with MCP servers to retrieve asset and cost information from OpsNow. The overall system is structured as follows:
main.py: The entry point for handling MCP requests
Python
from fastapi import FastAPI
from asset_api_client import get_assets
from cost_api_client import get_costs

app = FastAPI()

@app.get("/health")
async def health_check():
    return {"status": "ok"}

@app.get("/assets")
async def get_assets_data():
    return await get_assets()

@app.get("/costs")
async def get_costs_data():
    return await get_costs()
cost_api_client.py: Provides cost data
Python
async def get_costs():
    return {
        "costs": [
            {
                "cloud_provider": "<CSP_NAME>",  # Example: AWS, Azure, GCP, etc.
                "monthly_costs": [
                    {
                        "month": "<YYYY-MM>",  # Example: 2025-03
                        "total": "<TOTAL_COST>",
                        "by_service": {
                            "<SERVICE_NAME_1>": "<COST_1>",
                            "<SERVICE_NAME_2>": "<COST_2>",
                            # ...
                        }
                    },
                    # ...multiple monthly cost entries
                ]
            },
            # ...multiple CSPs
        ]
    }
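To make the shape of this payload concrete, here is a small, self-contained sketch (plain Python; the provider names and figures are hypothetical dummy values, not real OpsNow data) that totals one month's spend across providers — the kind of aggregation Claude performs when asked about "the cloud cost for April":

```python
# Hypothetical dummy payload in the same shape as get_costs() above.
costs_payload = {
    "costs": [
        {
            "cloud_provider": "AWS",
            "monthly_costs": [
                {"month": "2025-03", "total": 120.0, "by_service": {"EC2": 80.0, "RDS": 40.0}},
                {"month": "2025-04", "total": 150.0, "by_service": {"EC2": 90.0, "RDS": 60.0}},
            ],
        },
        {
            "cloud_provider": "Azure",
            "monthly_costs": [
                {"month": "2025-04", "total": 50.0, "by_service": {"VM": 50.0}},
            ],
        },
    ]
}

def total_for_month(payload: dict, month: str) -> float:
    """Sum the given month's total across all cloud providers."""
    return sum(
        m["total"]
        for csp in payload["costs"]
        for m in csp["monthly_costs"]
        if m["month"] == month
    )

print(total_for_month(costs_payload, "2025-04"))  # 200.0
```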
asset_api_client.py: Provides asset data
Python
async def get_assets():
    return {
        "AWS": [
            {
                "id": "<RESOURCE_ID>",
                "type": "<RESOURCE_TYPE>",  # Example: EC2, RDS
                "region": "<REGION_CODE>",
                "status": "<STATUS>"  # Example: running, stopped
            },
            # ...multiple assets
        ]
    }
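A consumer of this payload typically flattens the per-CSP lists before answering questions like "what is currently running?". A minimal sketch, assuming dummy values in the same shape (the resource IDs below are made up for illustration):

```python
# Hypothetical dummy payload in the same shape as get_assets() above.
assets_payload = {
    "AWS": [
        {"id": "i-0abc", "type": "EC2", "region": "us-east-1", "status": "running"},
        {"id": "db-1", "type": "RDS", "region": "us-east-1", "status": "stopped"},
    ]
}

def running_assets(payload: dict) -> list:
    """Flatten the per-CSP lists and keep only resources in 'running' state."""
    return [
        {**asset, "cloud_provider": csp}
        for csp, assets in payload.items()
        for asset in assets
        if asset["status"] == "running"
    ]

print([a["id"] for a in running_assets(assets_payload)])  # ['i-0abc']
```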
How to Run
# Default Installation
pip install -r requirements.txt
# Run the server
python main.py
Full Source Code: opsnow-mcp-provider
OpsNow MCP Server
OpsNow Cost MCP Server
Now, let's take a closer look at the role of the "OpsNow Cost MCP Server" in the Claude MCP integration structure. This component receives requests from the Claude desktop application and responds with OpsNow cost data using the MCP protocol.
Before starting this section, it is highly recommended to review the following document.
Key Technologies
src/index.ts: MCP server initialization
JavaScript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
// Create server instance
const server = new McpServer({
  name: "cloud-cost",
  version: "1.0.0",
});
Define Input Schema and Register Tools
JavaScript
server.tool(
  "get-cost",
  "Get cloud cost summary for multiple vendors and months",
  {
    vendors: z.array(z.string()).optional().describe("List of cloud vendor names"),
    months: z.array(z.string()).optional().describe("List of months in YYYY-MM format"),
  },
  async ({ vendors, months }) => {
    // ...
  }
);
Retrieve cost data from the Provider API
JavaScript
async function readCostData(): Promise<any | null> {
  const response = await fetch('http://localhost:8000/api/v1/costs/info');
  // ...
  return data;
}
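The optional `vendors` and `months` parameters in the tool schema imply a filtering step applied to the data returned by `readCostData()`. The actual server does this in TypeScript; the following is only a language-neutral sketch of that logic in Python, with a hypothetical `filter_costs` helper and made-up sample values, mirroring the provider payload shape shown earlier:

```python
def filter_costs(payload, vendors=None, months=None):
    """Keep only the requested cloud vendors and YYYY-MM months.
    None means 'no filter', matching the optional zod schema above."""
    result = []
    for csp in payload["costs"]:
        if vendors and csp["cloud_provider"] not in vendors:
            continue
        monthly = [
            m for m in csp["monthly_costs"]
            if not months or m["month"] in months
        ]
        if monthly:
            result.append({**csp, "monthly_costs": monthly})
    return result

sample = {"costs": [
    {"cloud_provider": "AWS", "monthly_costs": [{"month": "2025-03", "total": 1}, {"month": "2025-04", "total": 2}]},
    {"cloud_provider": "GCP", "monthly_costs": [{"month": "2025-04", "total": 3}]},
]}
print(filter_costs(sample, vendors=["AWS"], months=["2025-04"]))
```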
Build
# Default Installation
npm install
# Build
npm run build
Full Source Code: opsnow-mcp-cost-server
OpsNow Asset MCP Server
Since the structure is identical to the OpsNow Cost MCP Server, the explanation is omitted.
Full Source Code: opsnow-mcp-asset-server
Usage in Claude Desktop
Environment Settings
1. Claude Desktop Settings > Developer > Edit Settings
2. Open the claude_desktop_config.json file.
Register cloud-cost-server and cloud-asset-server settings.
JSON
{
  "mcpServers": {
    "cloud-cost-server": {
      "command": "node",
      "args": [
        "/Users/tae-joolee/codeProject/opsnow-mcp-cost-server/build/index.js"
      ]
    },
    "cloud-asset-server": {
      "command": "node",
      "args": [
        "/Users/tae-joolee/codeProject/opsnow-mcp-asset-server/build/index.js"
      ]
    }
  }
}
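A syntax error in claude_desktop_config.json can silently prevent the servers from loading, so a quick shape check saves debugging time. A minimal sketch, stdlib only (`check_mcp_config` and the sample string are hypothetical helpers for illustration, not part of OpsNow or Claude):

```python
import json

def check_mcp_config(text: str) -> list:
    """Return the MCP server names found, or raise if the shape is wrong."""
    config = json.loads(text)  # raises json.JSONDecodeError on syntax errors
    servers = config.get("mcpServers", {})
    for name, spec in servers.items():
        assert "command" in spec, f"{name}: missing 'command'"
        assert isinstance(spec.get("args", []), list), f"{name}: 'args' must be a list"
    return sorted(servers)

example = '''{
  "mcpServers": {
    "cloud-cost-server": {"command": "node", "args": ["build/index.js"]}
  }
}'''
print(check_mcp_config(example))  # ['cloud-cost-server']
```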
3. Verify successful registration
Check that a hammer icon with the number 2 (the count of registered tools) appears in the prompt input field
Clicking it shows the list of registered MCP servers
Usage Example
Backed by the strong performance of the Claude Desktop agent, the integration demonstrates genuinely useful potential.
1. "What is the cloud cost for April?"
2. "Is it higher than March? If so, what’s the reason?"
3. "What is cloud usage?"
4. Then, visualize it.
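Behind prompts like these, the Claude client invokes the server's registered tools over the MCP protocol, which is based on JSON-RPC 2.0. An illustrative `tools/call` request for the `get-cost` tool defined earlier might look like the following (the `id` and argument values are examples only):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get-cost",
    "arguments": { "vendors": ["AWS"], "months": ["2025-04"] }
  }
}
```

The server's response travels back over the same stdio transport, and Claude weaves the returned cost data into its answer.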
Deploying My MCP Server
Smithery.ai is a central hub that allows you to search, install, and manage Model Context Protocol (MCP) servers to extend the capabilities of large language models (LLMs). Developers can use Smithery to integrate various external tools and data with LLMs, enabling the creation of more powerful AI systems.
Key Features of Smithery.ai
Registering the Deployed MCP Server
1. Open the claude_desktop_config.json file.
Register the MCP server information for cloud-cost-server and cloud-asset-server.
JSON
{
  "mcpServers": {
    "opsnow-mcp-cost-server-no-server": {
      "command": "npx",
      "args": [
        "-y",
        "@smithery/cli@latest",
        "run",
        "@taejulee/opsnow-mcp-cost-server-no-server",
        "--key",
        "19e6494b-1fc7-47dc-81fb-2c26fb2f0277"
      ]
    },
    "opsnow-mcp-asset-server-no-server": {
      "command": "npx",
      "args": [
        "-y",
        "@smithery/cli@latest",
        "run",
        "@taejulee/opsnow-mcp-asset-server-no-server",
        "--key",
        "19e6494b-1fc7-47dc-81fb-2c26fb2f0277"
      ]
    }
  }
}
Future Expansion Proposal: Claude MCP-Based OpsNow Integration Architecture
Through this system design and PoC, we have confirmed that Claude-based AI can extend beyond simple Q&A to become an agent that directly queries and interprets OpsNow's FinOps data.
Through this structure, we have discovered the following possibilities for the future:
Claude MCP has now moved beyond the experimental introduction phase and is becoming a core tool for AI operations. Based on this, OpsNow aims to leap forward by offering new user interfaces and enhanced customer experiences.
2. Integration of OpsNow Prime with MCP
MCP technology will also be integrated into OpsNow Prime, the on-premises CMP solution scheduled for release in the first half of 2025. This integration enables infrastructure automation through AI, allowing users to perform complex tasks with simple commands.
OpsNow Prime is a Private & On-Premises Cloud Management Platform designed to manage enterprise IT infrastructure and cloud resources. By integrating MCP, we can demonstrate how an AI assistant simplifies infrastructure operations. In simple terms, users issue commands through a chat interface, and the AI handles tasks like creating or shutting down virtual machines (VMs). For example, when an administrator asks the OpsNow AI chatbot to "create a new virtual server," the AI automatically calls the appropriate virtualization platform API to provision the VM and returns the result. Previously, building such services took significant time, effort, and resources, but the open MCP standard makes these integrations easier and more efficient.
The OpsNow team has developed a dedicated MCP server that brings internal systems, such as virtualization hypervisor APIs, into the standardized Model Context Protocol (MCP) framework. By connecting large language models (LLMs) like Anthropic's Claude to OpsNow as MCP clients, AI can now interact directly with hypervisors like Proxmox or VMware and execute real-world infrastructure tasks from natural language commands. Within OpsNow, a specialized module called the MCP Provider acts as an adapter, bridging various hypervisors and OpsNow's native APIs. For example, when a user instructs the AI to "delete a VM," the MCP Provider identifies the correct platform, performs the appropriate API call, and sends the result back through the MCP interface. This integration lets users manage infrastructure without touching complex consoles: simply ask, and the AI handles the rest.
Let’s explore a real-world use case where Claude Desktop utilizes the OpsNow Prime MCP Server to easily create and activate virtual machines (VMs) on Proxmox, with full lifecycle management handled by OpsNow Prime.
This overview covers the entire process—from training the MCP server to deployment and operation. The MCP server was developed using Cursor, and it's built on Node.js. The basic server structure was obtained from the official MCP repository on GitHub.
To begin, we cloned the source code from the example MCP server Git repository to utilize the basic structure. Using the Postman collection file and TypeScript files provided, we studied the architecture and workflow, which allowed us to successfully create the OpsNow Prime MCP server.
Postman collection file
JSON
"item": [
  {
    "name": "Proxmox",
    "item": [
      {
        "name": "VM list lookup (PRX-IF-VM-003)",
        "request": {
          "auth": {
            "type": "bearer",
            ... (omitted)
          },
          ... (omitted)
        },
        "response": [
          {
            "name": "VM list lookup (PRX-IF-VM-003)",
            ... (omitted)
          }
        ],
        ... (omitted)
route.ts
JavaScript
import { Express, Request, Response } from 'express';
import { ProxmoxApiManager } from './proxmox/proxmoxApiManager';
import { getLogger } from './utils/logger';

const logger = getLogger('Routes');
const proxmoxManager = ProxmoxApiManager.getInstance();

export const setupRoutes = (app: Express) => {
  // Default health check endpoint
  app.get('/api/health', (req: Request, res: Response) => {
    res.status(200).json({ status: 'ok', message: 'Prime MCP server is running normally.' });
  });

  // Node-related endpoints
  app.get('/api/nodes', async (req: Request, res: Response) => {
    try {
      const result = await proxmoxManager.getNodes();
      res.status(200).json(result);
    } catch (error) {
      logger.error('Node list lookup failed:', error);
      res.status(500).json({ error: 'An error occurred trying to query the node list.' });
    }
  });

  // ... (remaining endpoints omitted)
};
Once this learning step is complete, source files for the OpsNow Prime MCP server are generated from the results. The project is built with the commands below; Node.js version 20 or higher is required. After installing the required modules and running the build, verify that the server starts properly. Results varied somewhat depending on the LLM model used, so we repeated the learning and testing cycle to improve accuracy.
Shell
# Check the Node.js version
node --version

# Build the project
npm install && npm run build

# Check that the server starts normally
node dist/index.js
When you ask Cursor to run tests for the OpsNow Prime MCP server, it executes the API call tests and automatically fixes any issues that arise. Once you confirm there are no errors, click the settings button at the top of Cursor to register the newly created MCP server.
Click on "MCP" on the left to navigate to the MCP management screen.
Click on "Add New Global MCP Server" in the screen above, and specify the absolute path of the index.js file generated after the build in the "args" section.
After that, when you create a server and retrieve instance information through Cursor or Claude, the relevant API is called to perform the requested operation. If the API's definitions or authentication information is incorrect, errors may occur multiple times, but the system will automatically go through the retraining and rebuilding process via Cursor.
While the MCP server can be used from Cursor, we also tested it in other clients to confirm that it works in a cross-platform environment. Below is how to register the MCP server in Claude.
Now, we're ready to use the learned MCP server to manage resources. Simply enter "Create 1 VM in Prime Proxmox" in Claude Desktop, and you'll be working with the OpsNow Prime MCP server to create the VM.
You can easily and quickly confirm that VMs are created in Proxmox, and you can also check the status of the VMs created in OpsNow Prime. Notably, even though storage connectivity errors occurred along the way, the AI settled on alternative solutions and completed the VM creation without further issues. The created VMs can be viewed in real time from the Proxmox console, and their status can also be checked in real time from the VM management menu in OpsNow Prime.
Now, let's abort/delete the created resources. If you type "Delete the VM you just created" in Claude Desktop, it will automatically call the API to stop the VM you just created and then delete it.
You can see in real-time from the Proxmox console that the VM has been deleted.
OpsNow's use of MCP has already been demonstrated through developed and tested features such as VM creation, querying, and deletion in the Proxmox virtualization environment.
We are also expanding additional capabilities, such as shutting down VMs. For instance, what used to require an administrator to manually create a VM through the Proxmox Web interface can now be automated simply by instructing the OpsNow AI assistant.
Interestingly, these capabilities are not limited to any specific virtualization product; they generalize readily to VMware, OpenStack, and more. Since the OpsNow team has already built its connectors to the MCP standard, integrating a new platform is relatively easy. Moreover, as mentioned earlier, even in a closed-network data center environment, deploying open-source LLMs and MCP servers can provide the same AI capabilities.
OpsNow Prime also has a roadmap to support MCP servers for detecting and automating unused or abnormal resources, automating system failures, and managing ITSM (Resource Application/Authorization), expanding beyond just managing resource creation and deletion in the context of On-Premise infrastructure operations.
As demonstrated in the OpsNow case, the introduction of MCP is expected to empower AI to act as the "hands and feet" of the data center, ushering in a new era of automation in infrastructure operations.
____________________________________________________________________________________________________________________________________
Anthropic's MCP has emerged as a universal connector, bringing the world's information to isolated AIs, and its scope of use is steadily expanding. While it's still in the early stages, it has garnered significant attention from the industry, and practical advantages are rapidly being proven through real-world cases like OpsNow. It will be interesting to see how quickly MCP becomes a standard and integrates into everyday services. The "MCP era", connecting data and AI, is fast approaching.
From the picture above:
Left: Before MCP
Without a standard protocol, each AI model must connect to tools like Slack, Google Drive, and GitHub using its own unique API. This results in fragmented connections and complex, repetitive integrations.
Right: After MCP
With MCP, models can access various tools through a single, standardized interface. Integration becomes much simpler and more unified across systems.
Large language models (LLMs), often referred to as AI chatbots or virtual assistants, are highly intelligent but still face a significant limitation, they are typically disconnected from the outside world. In simple terms, no matter how advanced AI becomes, it will always have constraints if it cannot access essential data, like the Internet or internal company databases. For example, if you ask current AI to "generate a graph using our sales data," it struggles because it doesn't have direct access to that data. This issue has led to numerous efforts to link AI with external tools and data sources. However, the traditional approach involved writing custom code for each AI model and tool, with each new tool requiring complex development. It’s similar to the past when different "cables" were needed for each printer or keyboard, there were various "specifications" between AI and the tools.
Anthropic has introduced the MCP (Model Context Protocol) as an open standard protocol designed to solve this issue.
It can be simplified as "USB-C for AI." Just like USB-C connects multiple devices through a single, unified port, MCP serves as the standard for linking AI applications with various external systems in a standardized way. This eliminates redundancy and simplifies integration between different systems. For instance, if three AI applications need to connect to three tools, without MCP, you would need to establish 3×3, or 9 separate connections. With MCP, only 3+3, or 6 standardized connections are required. This makes AI-tool connections more efficient, allowing AI to access and use the necessary data for improved responses.
The growing trend of linking AI models with external tools isn’t unique to Anthropic. Since 2023, OpenAI’s ChatGPT has been expanding its capabilities through plugins and function calling, enabling tasks like web searches and calculator functions. Microsoft has also integrated ChatGPT into Bing to enhance web search functionality. These efforts reflect a broader industry movement focused on making AI more practical and integrated into daily workflows. In this landscape, Anthropic made a notable impact by introducing the Model Context Protocol (MCP) in November 2024, offering a standardized approach that’s quickly gaining recognition as a potential industry standard. Claude, Anthropic’s AI assistant, comes equipped with a built-in MCP client, making it easy for apps powered by Claude to connect directly to MCP-compatible servers. For example, in Claude’s desktop application, users can connect services like Google Drive, Slack, and GitHub with just a few clicks, allowing Claude to retrieve relevant information from those platforms in real time.
After MCP’s launch, many developer tools and services quickly began to support the protocol. GitHub Copilot and the development IDE Cursor were among the first to announce MCP integration. Other platforms, including Zed Editor, Replit, Codeium, and Sourcegraph, are also embedding MCP into their ecosystems to enhance how AI assistants understand and interact with code. In summary, the push to connect AI models with external systems is a major trend in the tech industry, and MCP is rapidly gaining traction as an open standard. Some experts say the MCP has actually gained momentum enough to become a favorite in the race to define the “AI Agent Standard” between 2023 and 2025.
As more companies adopt MCP or experiment with similar solutions, the industry is seeing rapid progress in making AI assistants more powerful, flexible, and useful in practical environments.
MCP is gaining industry attention for several compelling advantages. First and foremost, it standardizes the way AI models connect with tools, significantly improving development efficiency. For developers, this means that once a new AI service is integrated using the MCP standard, it can be reused across multiple systems, streamlining workflows and boosting productivity. This standardized approach also leads to more consistent code implementation, resulting in greater service stability and easier maintenance. From the end-user perspective, this translates into a smoother, more unified experience when interacting with AI features across different applications.
Another key benefit is MCP’s flexibility, as it’s not tied to any specific AI vendor or model. It supports a wide range of large language models (LLMs), including proprietary systems like Anthropic’s Claude and OpenAI’s GPT series, as well as open-source alternatives. For example, developers can utilize open-source LLMs like Llama 2 or DeepSeek by simply meeting the MCP standard, avoiding lock-in to a single provider’s API. This flexibility extends to closed-network environments—organizations operating in secure, internet-restricted settings can still deploy AI assistants by installing open-source models and MCP connectors internally. Even without external internet access, AI functionality can be implemented as long as the MCP protocol is followed, making it ideal for high-security or offline systems.
Additionally, MCP is designed to work across various infrastructure environments. It’s compatible with a wide range of hypervisors and virtualization platforms, whether on-premises or in the cloud. This includes solutions like VMware, Proxmox, and OpenStack, all of which can be managed by AI through a single MCP interface. Since MCP operates independently of specific infrastructure types, organizations can integrate AI without overhauling their existing systems. Moreover, MCP is built for two-way communication, AI can send requests to external tools and receive real-time notifications when specific events occur. Access control is also highly customizable; the MCP server can define granular permissions to ensure AI only accesses the necessary data, preserving security and privacy. In summary, MCP stands out as a modern integration standard that balances flexibility, compatibility, and secure deployment across diverse IT environments.
OpsNow demonstrates practical MCP integration through two use cases. Setting up MCP-based workflows is simple, and the foundational setup can be done as follows:
In this context, existing FinOps services and resources are referred to as OpsNow Resources. We'll explore the practicality of MCP by developing the green-highlighted section in a simplified way. For now, the setup will operate using dummy data for testing purposes rather than live production data.
OpsNow MCP Provider
This project acts as a provider that enables Claude to communicate with MCP servers to retrieve asset and cost information from OpsNow. The overall system is structured as follows:
main.py: The entry point for handling MCP requests
Python
from fastapi import FastAPI
from asset_api_client import get_assets
from cost_api_client import get_costs
app = FastAPI()
@app.get("/health")
async def health_check():
return {"status": "ok"}
@app.get("/assets")
async def get_assets_data():
return await get_assets()
@app.get("/costs")
async def get_costs_data():
return await get_costs()
cost_api_client.py: Provides cost data
Python
async def get_costs():
return {
"costs": [
{
"cloud_provider": "<CSP_NAME>", # Example: AWS, Azure, GCP, etc.
"monthly_costs": [
{
"month": "<YYYY-MM>", # Example: 2025-03
"total": "<TOTAL_COST>",
"by_service": {
"<SERVICE_NAME_1>": "<COST_1>",
"<SERVICE_NAME_2>": "<COST_2>",
# ...
}
},
# ...Multiple monthly cost data
]
},
# ...Multiple CSP
]
}
asset_api_client.py: Provides asset data
Python
async def get_assets():
return {
"AWS": [
{
"id": "<RESOURCE_ID>",
"type": "<RESOURCE_TYPE>", # Example: EC2, RDS
"region": "<REGION_CODE>",
"status": "<STATUS>" # Example: running, stopped
},
# ...Multiple Assets
]
}
How to Run
# Default Installation
pip install -r requirements.txt
# 서버 실행
python main.py
Full Source Code: opsnow-mcp-provider
OpsNow MCP Server
OpsNow Cost MCP Server
Now, let's take a closer look at the role of the "OpsNow Cost MCP Server" in the Claude MCP integration structure. This component receives requests from the Claude desktop application and responds with OpsNow cost data using the MCP protocol.
Before starting this section, it is highly recommended to review the following document.
Key Technologies
src/index.ts: MCP server initialization
JavaScript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
// Create server instance
const server = new McpServer({
name: "cloud-cost",
version: "1.0.0",
});
Define Input Schema and Register Tools
JavaScript
server.tool(
"get-cost",
"Get cloud cost summary for multiple vendors and months",
{
vendors: z.array(z.string()).optional().describe("List of cloud vendor names"),
months: z.array(z.string()).optional().describe("List of months in YYYY-MM format"),
},
async ({ vendors, months }) => {
...
}
);
Retrieve cost data from the Provider API
JavaScript
async function readCostData(): Promise<any | null> {
const response = await fetch('http://localhost:8000/api/v1/costs/info');
...
return data;
}
Build
# Default Installation
npm install
# Build
npm run build
Full Source Code: opsnow-mcp-cost-server
OpsNow Asset MCP Server
Since the structure is identical to the OpsNow Cost MCP Server, the explanation is omitted.
Full Source Code: opsnow-mcp-asset-server
Usage in Claude Desktop
Environment Settings
1. Claude Desktop Settings > Developer > Edit Settings
2. Open the claude_desktop_config.json file.
Register cloud-cost-server and cloud-asset-server settings.
JavaScript
{
"mcpServers": {
"cloud-cost-server": {
"command": "node",
"args": [
"/Users/tae-joolee/codeProject/opsnow-mcp-cost-server/build/index.js"
]
},
"cloud-asset-server": {
"command": "node",
"args": [
"/Users/tae-joolee/codeProject/opsnow-mcp-asset-server/build/index.js"
]
}
}
}
3. Verify successful registration
Check for "Hammer2" in the prompt input field
You can view the MCP server list by clicking
Usage Example
Based on the powerful performance of the Claude Desktop Agent, it demonstrates quite useful potential.
1. "What is the cloud cost for April?"
2. "Is it higher than March? If so, what’s the reason?"
3. "What is cloud usage?"
4. Then, visualize it.
Deploying My MCP Server
Smithery.ai is a central hub that allows you to search, install, and manage Model Context Protocol (MCP) servers to extend the capabilities of large language models (LLMs). Developers can use Smithery to integrate various external tools and data with LLMs, enabling the creation of more powerful AI systems.
Key Features of Smithery.ai
Registering the Deployed MCP Server
1. Open the claude_desktop_config.json file.
Register the MCP server information for cloud-cost-server, cloud-asset-server.
JavaScript
{
"mcpServers": {
"opsnow-mcp-cost-server-no-server": {
"command": "npx",
"args": [
"-y",
"@smithery/cli@latest",
"run",
"@taejulee/opsnow-mcp-cost-server-no-server",
"--key",
"19e6494b-1fc7-47dc-81fb-2c26fb2f0277"
]
},
"opsnow-mcp-asset-server-no-server": {
"command": "npx",
"args": [
"-y",
"@smithery/cli@latest",
"run",
"@taejulee/opsnow-mcp-asset-server-no-server",
"--key",
"19e6494b-1fc7-47dc-81fb-2c26fb2f0277"
]
}
}
}
Future Expansion Proposal: Claude MCP-Based OpsNow Integration Architecture
Through this system design and PoC, we have confirmed that Claude-based AI can extend beyond simple Q&A to become an agent that directly queries and interprets OpsNow's FinOps data.
Through this structure, we have discovered the following possibilities for the future:
Claude MCP has now moved beyond the experimental introduction phase and is becoming a core tool for AI operations. Based on this, OpsNow aims to leap forward by offering new user interfaces and enhanced customer experiences.
2. Integration of OpsNow Prime with MCP
MCP technology will also be integrated into OpsNow Prime, the on-premises CMP solution scheduled for release in the first half of 2025. This integration enables infrastructure automation through AI, allowing users to perform complex tasks with simple commands.
OpsNow Prime is a Private & On-Premises Cloud Management Platform designed to manage enterprise IT infrastructure and cloud resources. By integrating MCP, we can demonstrate how an AI assistant simplifies infrastructure operations. In simple terms, users can issue commands through a chat interface, and the AI handles tasks like creating or shutting down virtual machines (VMs). For example, when an administrator asks the OpsNow AI chatbot, “Create a new virtual server,” the AI will automatically call the appropriate virtualization platform API to provision the VM and return the result. Previously, developing these services required a lot of time, effort, and resources, but MCP, an open standard, made it easier and more efficient to link together.
The OpsNow team has developed a dedicated MCP server that integrates internal systems—such as virtualization hypervisor APIs into the standardized Model Context Protocol (MCP) framework. By connecting large language models (LLMs) like Anthropic Claude to OpsNow as MCP clients, AI can now interact directly with hypervisors like Proxmox or VMware. These models can execute real-world infrastructure tasks based on natural language commands. Within OpsNow, a specialized module called the MCP Provider acts as an adapter, bridging various hypervisors and OpsNow’s native APIs. For example, when a user instructs the AI to “delete a VM,” the MCP Provider identifies the correct platform, performs the appropriate API call, and sends the result back through the MCP interface. This seamless integration allows users to manage infrastructure effortlessly—no need to access complex consoles. Simply speak, and the AI handles the rest.
Let’s explore a real-world use case where Claude Desktop utilizes the OpsNow Prime MCP Server to easily create and activate virtual machines (VMs) on Proxmox, with full lifecycle management handled by OpsNow Prime.
This overview covers the entire process—from building the MCP server to deployment and operation. The MCP server was developed with AI assistance in Cursor and is built on Node.js. The basic server structure was obtained from the official MCP repository on GitHub.
To begin, we cloned the source code from the example MCP server Git repository to utilize the basic structure. Using the Postman collection file and TypeScript files provided, we studied the architecture and workflow, which allowed us to successfully create the OpsNow Prime MCP server.
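For context, an MCP server advertises its capabilities to clients as tool descriptors. The fragment below is a hypothetical example of how a VM-creation tool might be described, following the general shape of the MCP tools listing (the tool name and fields are illustrative, not OpsNow's actual definitions):

```json
{
  "name": "create_vm",
  "description": "Create a new virtual machine on a Proxmox node",
  "inputSchema": {
    "type": "object",
    "properties": {
      "node": { "type": "string", "description": "Target Proxmox node name" },
      "name": { "type": "string", "description": "Name for the new VM" }
    },
    "required": ["node"]
  }
}
```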
Postman collection file
JSON
"item": [
  {
    "name": "Proxmox",
    "item": [
      {
        "name": "VM list lookup (PRX-IF-VM-003)",
        "request": {
          "auth": {
            "type": "bearer",
            ... (omitted)
          },
          ... (omitted)
        },
        "response": [
          {
            "name": "VM list lookup (PRX-IF-VM-003)",
            ... (omitted)
          }
        ],
        ... (omitted)
route.ts
TypeScript
import { Express, Request, Response } from 'express';
import { ProxmoxApiManager } from './proxmox/proxmoxApiManager';
import { getLogger } from './utils/logger';

const logger = getLogger('Routes');
const proxmoxManager = ProxmoxApiManager.getInstance();

export const setupRoutes = (app: Express) => {
  // Default health check endpoint
  app.get('/api/health', (req: Request, res: Response) => {
    res.status(200).json({ status: 'ok', message: 'Prime MCP server is running normally.' });
  });

  // Node-related endpoints
  app.get('/api/nodes', async (req: Request, res: Response) => {
    try {
      const result = await proxmoxManager.getNodes();
      res.status(200).json(result);
    } catch (error) {
      logger.error('Node list lookup failed:', error);
      res.status(500).json({ error: 'An error occurred while querying the node list.' });
    }
  });

  // ... (omitted)
Once the AI-assisted generation is complete, the source files for the OpsNow Prime MCP server are produced. To build it, use the following commands; Node.js version 20 or higher is required. After installing the required modules and building the project, verify that the server starts properly. Results for the OpsNow Prime MCP server varied somewhat depending on the LLM model, so we ran additional rounds of regeneration and testing to improve accuracy.
Shell
## Check the Node.js version
node --version
## Build the project
npm install && npm run build
## Check that the server starts normally
node dist/index.js
When you ask Cursor to run tests for the OpsNow Prime MCP server, it executes the API call tests and automatically fixes any issues that arise. Once you confirm there are no errors, click the settings button at the top of Cursor to register the newly created MCP server.
Click on "MCP" on the left to navigate to the MCP management screen.
Click on "Add New Global MCP Server" in the screen above, and specify the absolute path of the index.js file generated after the build in the "args" section.
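For reference, the resulting entry in Cursor's MCP configuration typically looks like the following—the server name and file path here are placeholders for your own build output:

```json
{
  "mcpServers": {
    "opsnow-prime-mcp": {
      "command": "node",
      "args": ["/absolute/path/to/dist/index.js"]
    }
  }
}
```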
After that, when you create a server or retrieve instance information through Cursor or Claude, the relevant API is called to perform the requested operation. If the API definitions or authentication information are incorrect, errors may occur several times, but Cursor automatically handles the regeneration and rebuilding process.
While the MCP server can also be used in Cursor, it was functionally tested across platforms to verify if it works well in a cross-platform environment. Below is how to register the MCP in Claude.
Now, we're ready to use the learned MCP server to manage resources. Simply enter "Create 1 VM in Prime Proxmox" in Claude Desktop, and you'll be working with the OpsNow Prime MCP server to create the VM.
You can easily and quickly see that VMs are created in Proxmox, and you can also check the status of VMs created in OpsNow Prime. Notably, even when a storage connectivity error occurred during the process, the AI settled on an alternative approach and completed the VM creation without further issues. The created VMs can be viewed in real time from the Proxmox console, and you can also check their status in real time from the VM management menu in OpsNow Prime.
Now, let's abort/delete the created resources. If you type "Delete the VM you just created" in Claude Desktop, it will automatically call the API to stop the VM you just created and then delete it.
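The stop-then-delete ordering can be sketched as follows. The `ProxmoxClient` interface here is hypothetical, standing in for a real REST client; in Proxmox, a running VM must be stopped before it can be removed, which is why the AI issues the two calls in sequence:

```typescript
// Hypothetical client interface; a real one would call the Proxmox REST API
// (e.g., the VM status and deletion endpoints).
interface ProxmoxClient {
  stopVm(node: string, vmid: number): Promise<void>;
  deleteVm(node: string, vmid: number): Promise<void>;
}

// A running VM must be stopped before Proxmox allows deletion,
// so the two calls are strictly ordered.
async function deleteVmSafely(client: ProxmoxClient, node: string, vmid: number): Promise<string[]> {
  const steps: string[] = [];
  await client.stopVm(node, vmid);
  steps.push(`stopped ${vmid}`);
  await client.deleteVm(node, vmid);
  steps.push(`deleted ${vmid}`);
  return steps;
}
```

Sequencing like this is exactly the kind of multi-step orchestration the AI performs on the user's behalf from a single natural-language command.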
You can see in real-time from the Proxmox console that the VM has been deleted.
OpsNow's use of MCP has already been demonstrated through features such as VM creation, querying, and deletion in the Proxmox virtualization environment, all developed and tested through the MCP server.
We are also expanding additional capabilities, such as shutting down VMs. For instance, what used to require an administrator to manually create a VM through the Proxmox Web interface can now be automated simply by instructing the OpsNow AI assistant.
Interestingly, these capabilities are not limited to specific virtualization offerings; they can be easily generalized to VMware, OpenStack, and more. Since the OpsNow team has already built its integration layer to the MCP standard, adding support for new platforms is relatively straightforward. Moreover, as mentioned earlier, even in a closed-network data center environment, deploying open-source LLMs and MCP servers can provide the same AI capabilities.
OpsNow Prime's roadmap also includes MCP servers for detecting and remediating unused or abnormal resources, automating responses to system failures, and handling ITSM tasks (resource requests and approvals)—expanding on-premises infrastructure operations beyond simple resource creation and deletion.
As demonstrated in the OpsNow case, the introduction of MCP is expected to empower AI to act as the "hands and feet" of the data center, ushering in a new era of automation in infrastructure operations.
____________________________________________________________________________________________________________________________________
Anthropic's MCP has emerged as a universal connector, bringing the world's information to isolated AIs, and its scope of use is steadily expanding. While it's still in the early stages, it has garnered significant attention from the industry, and practical advantages are rapidly being proven through real-world cases like OpsNow. It will be interesting to see how quickly MCP becomes a standard and integrates into everyday services. The "MCP era", connecting data and AI, is fast approaching.