My previous MCP posts leaned toward theory; this time let's get hands-on. Today I'll walk through quickly building an MCP Server and Client with the Java SDK.
I. Background
The Java SDK for the Model Context Protocol (MCP) implements standardized integration between AI models and tools. The SDK supports bidirectional protocol interaction on both the client and server side, and its layered architecture is designed for flexible extension.
Features
The MCP client and server implementations support the following core capabilities:
- Protocol negotiation: protocol-version compatibility checks and negotiation
- Tool management: tool discovery, execution, and list-change notifications
- Resource management: URI-template-based access and a subscription system
- Prompt handling: management and dynamic generation of prompt templates
- Sampling support: sampling control for AI model interactions
- Multiple transports:
  - Default transport: inter-process communication over standard input/output (stdio)
  - HTTP SSE transport: Java HttpClient and Servlet implementations
  - Spring integration: HTTP streaming with WebFlux (reactive) and WebMVC (Servlet)
Architecture
The SDK follows a layered architecture with a clear separation of concerns:
- Client/server layer (McpClient/McpServer): both build on McpSession for synchronous and asynchronous operations; McpClient handles client-side protocol operations, while McpServer manages the server side.
- Session layer (McpSession): manages communication patterns and state via the DefaultMcpSession implementation.
- Transport layer (McpTransport): handles JSON-RPC message serialization and deserialization through:
  - StdioTransport (stdin/stdout) in the core module
  - HTTP SSE transports in dedicated transport modules (Java HttpClient, Spring WebFlux, Spring WebMVC)
The overall architecture of a Java application built around an AI model is shown below:
The transport-layer architectures for StdioTransport and SSE are shown below:
Sequence diagram:
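Whatever the transport, the session layer exchanges JSON-RPC 2.0 envelopes. As a rough sketch (not the SDK's actual code; `tools/call` is the request method defined by the MCP spec), building the message a client sends when invoking a tool looks like this:

```typescript
// Minimal illustration of the JSON-RPC 2.0 envelope the transport layer
// serializes/deserializes. This is a sketch, not the SDK's implementation.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

// Build a tools/call request, e.g. the one sent when a client
// invokes a tool named "exchange" with { rmb: 100 }.
function buildToolCall(id: number, name: string, args: Record<string, unknown>): string {
  const req: JsonRpcRequest = {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
  // Over stdio, each message travels as one JSON-encoded line.
  return JSON.stringify(req);
}

console.log(buildToolCall(1, "exchange", { rmb: 100 }));
```

The transport's job is then only to move these strings: StdioTransport writes them to stdout/stdin, while the SSE transports carry them over HTTP.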
II. Environment Setup
1. System requirements
- Java 17 or later
- Spring Boot 3.3.x or later
2. Add dependencies
# Maven
<dependencies>
    <dependency>
        <groupId>org.springframework.ai</groupId>
        <artifactId>spring-ai-mcp-server-spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-web</artifactId>
    </dependency>
</dependencies>
3. Configure properties
Set up the application configuration in application.properties or application.yml. The banner and console logging pattern are cleared so that stdout stays clean, which matters when the server communicates over stdio:
# application.properties
spring.main.banner-mode=off
logging.pattern.console=

# application.yml
logging:
  pattern:
    console:
spring:
  main:
    banner-mode: off
III. Implementing an MCP Server
1. MCP Server (over SSE)
The example below uses the TypeScript SDK (@modelcontextprotocol/sdk) together with Express:
import express from "express";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
import { z } from "zod";

const server = new McpServer({
  name: "demo-sse",
  version: "1.0.0"
});

server.tool("exchange",
  'RMB exchange-rate conversion',
  { rmb: z.number() },
  async ({ rmb }) => {
    // Fixed rates for demo purposes; a real tool should call a rate API
    const usdRate = 0.14; // 1 CNY ≈ 0.14 USD
    const hkdRate = 1.09; // 1 CNY ≈ 1.09 HKD
    const usd = (rmb * usdRate).toFixed(2);
    const hkd = (rmb * hkdRate).toFixed(2);
    return {
      content: [{
        type: "text",
        text: `${rmb} CNY equals:\n${usd} USD\n${hkd} HKD`
      }]
    };
  },
);

const app = express();
const sessions: Record<string, { transport: SSEServerTransport; response: express.Response }> = {};

app.get("/sse", async (req, res) => {
  console.log(`New SSE connection from ${req.ip}`);
  const sseTransport = new SSEServerTransport("/messages", res);
  const sessionId = sseTransport.sessionId;
  if (sessionId) {
    sessions[sessionId] = { transport: sseTransport, response: res };
  }
  await server.connect(sseTransport);
});

app.post("/messages", async (req, res) => {
  const sessionId = req.query.sessionId as string;
  const session = sessions[sessionId];
  if (!session) {
    res.status(404).send("Session not found");
    return;
  }
  await session.transport.handlePostMessage(req, res);
});

app.listen(3001);
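The handler's arithmetic is easy to factor out and check in isolation. A pure-function version of the conversion, using the same hard-coded demo rates as the tool above:

```typescript
// Same fixed demo rates as the "exchange" tool above; a real tool
// would fetch live rates from an exchange-rate API.
function exchange(rmb: number): { usd: string; hkd: string } {
  const usdRate = 0.14; // 1 CNY ≈ 0.14 USD
  const hkdRate = 1.09; // 1 CNY ≈ 1.09 HKD
  return {
    usd: (rmb * usdRate).toFixed(2),
    hkd: (rmb * hkdRate).toFixed(2),
  };
}

console.log(exchange(100)); // { usd: '14.00', hkd: '109.00' }
```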
2. Debugging
The MCP project ships an official inspector; we can use it to debug the server we just wrote:
npx @modelcontextprotocol/inspector
2.1 Connect to the server
2.2 List the tools
2.3 Run a tool call
IV. Implementing an MCP Client
1. Configuration file
const config = [
  {
    name: 'demo-stdio',
    type: 'command',
    command: 'node ~/code-open/cursor-toolkits/mcp/build/demo-stdio.js',
    isOpen: true
  },
  {
    name: 'weather-stdio',
    type: 'command',
    command: 'node ~/code-open/cursor-toolkits/mcp/build/weather-stdio.js',
    isOpen: true
  },
  {
    name: 'demo-sse',
    type: 'sse',
    url: 'http://localhost:3001/sse',
    isOpen: false
  }
];

export default config;
2. Interaction model
The MCP client is still driven by an LLM: when the model decides it needs an external system, it invokes a tool exposed by an MCP server. Conversation remains the entry point, and the simplest setup is to chat directly in the terminal, using readline to capture user input. The OpenAI API serves as the model, and tool routing is handled with function calling.
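To let one LLM route across several servers, the client that follows prefixes every tool name with its server name and splits the name back apart when a function call comes in. The convention boils down to two small helpers (a sketch of the same scheme used in the client code below):

```typescript
// Qualify a tool name with its server name so the function name the
// LLM sees uniquely identifies the (server, tool) pair.
function qualify(serverName: string, toolName: string): string {
  return `${serverName}__${toolName}`;
}

// Invert the mapping: the part before the first "__" is the server,
// everything after it is the tool name.
function resolve(qualified: string): { server: string; tool: string } {
  const [server, ...rest] = qualified.split("__");
  return { server, tool: rest.join("__") };
}

console.log(qualify("demo-sse", "exchange")); // demo-sse__exchange
console.log(resolve("demo-sse__exchange"));   // { server: 'demo-sse', tool: 'exchange' }
```

Note the scheme assumes server names contain no `__`; the full client below uses a plain `split('__')` and takes only the first two parts, so there a tool name containing a double underscore would be truncated.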
3. Writing the MCP Client (over SSE)
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport, StdioServerParameters } from "@modelcontextprotocol/sdk/client/stdio.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";
import OpenAI from "openai";
import { Tool } from "@modelcontextprotocol/sdk/types.js";
import { ChatCompletionMessageParam } from "openai/resources/chat/completions.js";
import { createInterface } from "readline";
import { homedir } from 'os';
import config from "./mcp-server-config.js";
// Initialize environment variables
const OPENAI_API_KEY = process.env.OPENAI_API_KEY;
if (!OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY environment variable is required");
}
interface MCPToolResult {
  content: string;
}

interface ServerConfig {
  name: string;
  type: 'command' | 'sse';
  command?: string;
  url?: string;
  isOpen?: boolean;
}

class MCPClient {
  static getOpenServers(): string[] {
    return config.filter(cfg => cfg.isOpen).map(cfg => cfg.name);
  }

  private sessions: Map<string, Client> = new Map();
  private transports: Map<string, StdioClientTransport | SSEClientTransport> = new Map();
  private openai: OpenAI;

  constructor() {
    this.openai = new OpenAI({
      apiKey: OPENAI_API_KEY
    });
  }
  async connectToServer(serverName: string): Promise<void> {
    const serverConfig = config.find(cfg => cfg.name === serverName) as ServerConfig;
    if (!serverConfig) {
      throw new Error(`Server configuration not found for: ${serverName}`);
    }
    let transport: StdioClientTransport | SSEClientTransport;
    if (serverConfig.type === 'command' && serverConfig.command) {
      transport = await this.createCommandTransport(serverConfig.command);
    } else if (serverConfig.type === 'sse' && serverConfig.url) {
      transport = await this.createSSETransport(serverConfig.url);
    } else {
      throw new Error(`Invalid server configuration for: ${serverName}`);
    }
    const client = new Client(
      {
        name: "mcp-client",
        version: "1.0.0"
      },
      {
        capabilities: {
          prompts: {},
          resources: {},
          tools: {}
        }
      }
    );
    await client.connect(transport);
    this.sessions.set(serverName, client);
    this.transports.set(serverName, transport);
    // List the tools this server exposes
    const response = await client.listTools();
    console.log(`\nConnected to server '${serverName}' with tools:`, response.tools.map((tool: Tool) => tool.name));
  }
  private async createCommandTransport(shell: string): Promise<StdioClientTransport> {
    const [command, ...shellArgs] = shell.split(' ');
    if (!command) {
      throw new Error("Invalid shell command");
    }
    // Expand tilde paths in the arguments
    const args = shellArgs.map(arg => {
      if (arg.startsWith('~/')) {
        return arg.replace('~', homedir());
      }
      return arg;
    });
    const serverParams: StdioServerParameters = {
      command,
      args,
      env: Object.fromEntries(
        Object.entries(process.env).filter(([_, v]) => v !== undefined)
      ) as Record<string, string>
    };
    return new StdioClientTransport(serverParams);
  }

  private async createSSETransport(url: string): Promise<SSEClientTransport> {
    return new SSEClientTransport(new URL(url));
  }
  async processQuery(query: string): Promise<string> {
    if (this.sessions.size === 0) {
      throw new Error("Not connected to any server");
    }
    const messages: ChatCompletionMessageParam[] = [
      {
        role: "user",
        content: query
      }
    ];
    // Collect the tool lists from every connected server
    const availableTools: any[] = [];
    for (const [serverName, session] of this.sessions) {
      const response = await session.listTools();
      const tools = response.tools.map((tool: Tool) => ({
        type: "function" as const,
        function: {
          name: `${serverName}__${tool.name}`,
          description: `[${serverName}] ${tool.description}`,
          parameters: tool.inputSchema
        }
      }));
      availableTools.push(...tools);
    }
    // Call the OpenAI API
    const completion = await this.openai.chat.completions.create({
      model: "gpt-4-turbo-preview",
      messages,
      tools: availableTools,
      tool_choice: "auto"
    });
    const finalText: string[] = [];
    // Handle OpenAI's response
    for (const choice of completion.choices) {
      const message = choice.message;
      if (message.content) {
        finalText.push(message.content);
      }
      if (message.tool_calls) {
        for (const toolCall of message.tool_calls) {
          const [serverName, toolName] = toolCall.function.name.split('__');
          const session = this.sessions.get(serverName);
          if (!session) {
            finalText.push(`[Error: Server ${serverName} not found]`);
            continue;
          }
          const toolArgs = JSON.parse(toolCall.function.arguments);
          // Execute the tool call
          const result = await session.callTool({
            name: toolName,
            arguments: toolArgs
          });
          const toolResult = result as unknown as MCPToolResult;
          finalText.push(`[Calling tool ${toolName} on server ${serverName} with args ${JSON.stringify(toolArgs)}]`);
          console.log(toolResult.content);
          finalText.push(toolResult.content);
          // Continue the conversation with the tool result
          messages.push({
            role: "assistant",
            content: "",
            tool_calls: [toolCall]
          });
          messages.push({
            role: "tool",
            tool_call_id: toolCall.id,
            content: toolResult.content
          });
          // Get the follow-up response
          const nextCompletion = await this.openai.chat.completions.create({
            model: "gpt-4-turbo-preview",
            messages,
            tools: availableTools,
            tool_choice: "auto"
          });
          if (nextCompletion.choices[0].message.content) {
            finalText.push(nextCompletion.choices[0].message.content);
          }
        }
      }
    }
    return finalText.join("\n");
  }
  async chatLoop(): Promise<void> {
    console.log("\nMCP Client Started!");
    console.log("Type your queries or 'quit' to exit.");
    const readline = createInterface({
      input: process.stdin,
      output: process.stdout
    });
    const askQuestion = () => {
      return new Promise<string>((resolve) => {
        readline.question("\nQuery: ", resolve);
      });
    };
    try {
      while (true) {
        const query = (await askQuestion()).trim();
        if (query.toLowerCase() === 'quit') {
          break;
        }
        try {
          const response = await this.processQuery(query);
          console.log("\n" + response);
        } catch (error) {
          console.error("\nError:", error);
        }
      }
    } finally {
      readline.close();
    }
  }

  async cleanup(): Promise<void> {
    for (const transport of this.transports.values()) {
      await transport.close();
    }
    this.transports.clear();
    this.sessions.clear();
  }

  hasActiveSessions(): boolean {
    return this.sessions.size > 0;
  }
}
// Entry point
async function main() {
  const openServers = MCPClient.getOpenServers();
  console.log("Connecting to servers:", openServers.join(", "));
  const client = new MCPClient();
  try {
    // Connect to every enabled server
    for (const serverName of openServers) {
      try {
        await client.connectToServer(serverName);
      } catch (error) {
        console.error(`Failed to connect to server '${serverName}':`, error);
      }
    }
    if (!client.hasActiveSessions()) {
      throw new Error("Failed to connect to any server");
    }
    await client.chatLoop();
  } finally {
    await client.cleanup();
  }
}

// Run the entry point
main().catch(console.error);
4. Running it
NODE_TLS_REJECT_UNAUTHORIZED=0 node build/client.js
I hope this article helps you practice working with MCP. If you have any questions or suggestions, leave a comment and let's discuss and improve together. More hands-on MCP case studies are coming in future posts, so stay tuned.