{ "tools":[ { "name":"get_current_time", "description":"\n Returns the current date and time in the 'Asia/Shanghai' timezone.\n\n Returns:\n datetime: The current datetime object localized to 'Asia/Shanghai' timezone.\n ", "inputSchema":{ "type":"object", "properties":{}, "title":"get_current_timeArguments" } } ] }
```python
@app.get_prompt()
async def get_prompt(
    name: str, arguments: dict[str, str] | None = None
) -> types.GetPromptResult:
    if name not in PROMPTS:
        raise ValueError(f"Prompt not found: {name}")

    if name == "git-commit":
        changes = arguments.get("changes") if arguments else ""
        return types.GetPromptResult(
            messages=[
                types.PromptMessage(
                    role="user",
                    content=types.TextContent(
                        type="text",
                        text=f"Generate a concise but descriptive commit message "
                             f"for these changes:\n\n{changes}"
                    )
                )
            ]
        )

    if name == "explain-code":
        code = arguments.get("code") if arguments else ""
        language = arguments.get("language", "Unknown") if arguments else "Unknown"
        return types.GetPromptResult(
            messages=[
                types.PromptMessage(
                    role="user",
                    content=types.TextContent(
                        type="text",
                        text=f"Explain how this {language} code works:\n\n{code}"
                    )
                )
            ]
        )

    raise ValueError("Prompt implementation not found")
```
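The `PROMPTS` registry that `get_prompt` checks against is not shown above. A minimal sketch of what it might contain, using plain dicts instead of the SDK's `types.Prompt` objects purely for illustration:

```python
# Hypothetical registry of available prompts, keyed by prompt name.
# In a real server these entries would be mcp.types.Prompt objects;
# plain dicts are used here only to show the shape of the data.
PROMPTS = {
    "git-commit": {
        "description": "Generate a commit message for a set of changes",
        "arguments": [
            {"name": "changes", "description": "Git diff or file changes", "required": True},
        ],
    },
    "explain-code": {
        "description": "Explain how a piece of code works",
        "arguments": [
            {"name": "code", "description": "Code to explain", "required": True},
            {"name": "language", "description": "Programming language", "required": False},
        ],
    },
}

# A list_prompts handler would return these entries so the client
# knows which prompt names it can pass to get_prompt.
print(sorted(PROMPTS))  # ['explain-code', 'git-commit']
```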
Looking at this as a whole, it is very much like a function definition: the prompt's name is effectively the function name, the prompt's arguments are the function's parameters, and the "body" simply substitutes the arguments into the familiar message format (the usual role/user/content structure) and returns that message. Then, inside `get_prompt`, a different prompt message is returned depending on the prompt name.
Seen this way, does that mean users cannot change the actual content of a given prompt? Although the documentation says prompts are user-controlled, the "control" here appears to mean only choosing which prompt to use, not modifying its contents.
      // For images:
      data?: string,                // base64 encoded
      mimeType?: string
    }
  }],
  modelPreferences?: {
    hints?: [{
      name?: string                 // Suggested model name/family
    }],
    costPriority?: number,          // 0-1, importance of minimizing cost
    speedPriority?: number,         // 0-1, importance of low latency
    intelligencePriority?: number   // 0-1, importance of capabilities
  },
  systemPrompt?: string,
  includeContext?: "none" | "thisServer" | "allServers",
  temperature?: number,
  maxTokens: number,
  stopSequences?: string[],
  metadata?: Record<string, unknown>
}
Most of these fields are similar to those in an OpenAI request. One difference is `name` under `modelPreferences.hints`: this field takes a string that is matched against full or partial model names, such as "claude-3" or "sonnet". Combined with the priority fields below it, the client automatically selects a suitable model, e.g. the cheapest, the fastest, or the most capable one.
A concrete example of a request:
{ "method":"sampling/createMessage", "params":{ "messages":[ { "role":"user", "content":{ "type":"text", "text":"What files are in the current directory?" } } ], "systemPrompt":"You are a helpful file system assistant.", "includeContext":"thisServer", "maxTokens":100 } }
The response format is also similar to what you get from the OpenAI API:
```
{
  model: string,               // Name of the model used
  stopReason?: "endTurn" | "stopSequence" | "maxTokens" | string,
  role: "user" | "assistant",
  content: {
    type: "text" | "image",
    text?: string,
    data?: string,
    mimeType?: string
  }
}
```
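A server consuming such a response might unpack it like the following sketch; the helper name and the sample response dict are illustrative, not part of the SDK:

```python
def extract_text(response: dict) -> str:
    """Pull the text out of a sampling response, raising if the model
    returned an image or the generation was cut off at maxTokens."""
    if response.get("stopReason") == "maxTokens":
        # The reply was truncated; callers may prefer to retry with a
        # larger maxTokens rather than use a partial answer.
        raise ValueError("response truncated at maxTokens")
    content = response["content"]
    if content["type"] != "text":
        raise ValueError(f"unexpected content type: {content['type']}")
    return content["text"]


# Hypothetical response, shaped like the schema above.
resp = {
    "model": "claude-3-5-sonnet-20241022",
    "stopReason": "endTurn",
    "role": "assistant",
    "content": {"type": "text", "text": "The directory contains main.py and README.md."},
}

print(extract_text(resp))  # The directory contains main.py and README.md.
```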