Table of Contents
- I. Why integrate MCP?
- II. What is MCP?
- 1. What problem does MCP solve?
- 2. How it differs from traditional tool calling
- III. Implementation
- Step 1: Connecting the MCP service
- Step 2: Registering it as a tool
- Step 3: Wiring it into the LangGraph Agent flow
- Let normalize_decision() accept web_search
- Give the router an explicit web_search selection rule
- Add a conservative fallback rule
- Minor adjustments to tool execution
I. Why integrate MCP?
The paper-reading Agent system we built earlier can answer users' questions about the papers in its local knowledge base, but it has one shortcoming: when a user asks about recent trends or the latest developments, the local knowledge base has no relevant information and the system cannot answer. So we need to give the system the ability to obtain external information, and that is where MCP comes in.
II. What is MCP?
According to IBM China: the Model Context Protocol (MCP) is a standardized layer that lets AI applications communicate effectively with external services such as data sources, tools, and workflows.
1. What problem does MCP solve?
My own understanding: for an agent, MCP is a set of usage rules for the data sources, tools, and workflows offered by external providers. When an agent sends a request to a provider according to these rules, it gets that provider's service in return. You can think of it as:

MCP = a unified socket for plugging external capabilities into an AI application.

From the material I have read, the protocol was motivated by cutting down duplicate development of common tools. Basic capabilities such as weather lookup or web search are fairly generic, and having every agent re-implement them is a real waste of effort. Packaged as MCP services, these common capabilities can be reused with very little work.
2. How it differs from traditional tool calling
With traditional tool calling you write the tool functions yourself and wire them in by hand; with MCP, the agent just sends a request that follows the protocol and gains access to the external capability. The difference between MCP and tool calling is like plugging into a wall socket versus soldering your own wires.
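To make the contrast concrete, here is a minimal sketch (the server URL is hypothetical, and the commented-out client calls assume the `langchain-mcp-adapters` API used later in this article): with traditional tool calling you implement and register the function yourself, while with MCP you only declare where the server lives and receive ready-made tools.

```python
# Traditional tool calling: you implement and register the tool by hand.
def get_weather(city: str) -> str:
    # Stubbed result purely for illustration
    return f"Weather for {city}: sunny"

local_tools = {"get_weather": get_weather}  # manual registry

# MCP style: you only describe the server; the tool implementations
# are provided by the server itself (hypothetical URL below).
mcp_config = {
    "weather": {
        "transport": "http",
        "url": "https://example.com/mcp",
    }
}
# client = MultiServerMCPClient(mcp_config)
# tools = await client.get_tools()  # tools arrive ready to use

print(local_tools["get_weather"]("Beijing"))  # → Weather for Beijing: sunny
```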
III. Implementation
Before starting, install one dependency: LangChain's official adapter library for MCP. The capability I want to add is web search, and I chose Zhipu's MCP service:
```shell
pip install langchain-mcp-adapters
```

Step 1: Connecting the MCP service
First, create a new mcp_tools.py under the app directory. It needs three pieces: a function that connects to the Zhipu MCP service and sends the request, a function that tidies up the returned result, and a wrapper that packages it all as a tool:
```python
import asyncio
import json
import os

from dotenv import load_dotenv
from langchain_mcp_adapters.client import MultiServerMCPClient

from app.logger_config import setup_logger

load_dotenv()
logger = setup_logger()


async def _call_zhipu_web_search(
    query: str,
    recency: str = "noLimit",
    content_size: str = "medium",
    location: str = "us",
):
    api_key = os.getenv("ZHIPU_API_KEY")
    if not api_key:
        raise ValueError("ZHIPU_API_KEY not found in .env")

    client = MultiServerMCPClient({
        "zhipu_search": {
            "transport": "http",
            "url": "https://open.bigmodel.cn/api/mcp/web_search_prime/mcp",
            "headers": {"Authorization": f"Bearer {api_key}"},
        }
    })
    tools = await client.get_tools()
    search_tool = next(
        (tool for tool in tools if tool.name == "web_search_prime"), None
    )
    if search_tool is None:
        raise RuntimeError("web_search_prime not found in MCP tools")

    result = await search_tool.ainvoke({
        "search_query": query,
        "search_recency_filter": recency,
        "content_size": content_size,
        "location": location,
    })
    return result


def _parse_mcp_search_result(raw_result) -> list[dict]:
    if not raw_result:
        return []

    raw_text = ""
    if isinstance(raw_result, list) and len(raw_result) > 0:
        first_item = raw_result[0]
        if isinstance(first_item, dict):
            raw_text = first_item.get("text", "")
        else:
            raw_text = getattr(first_item, "text", "")
    else:
        raw_text = str(raw_result)

    data = raw_text
    # In this MCP response, `text` may be "JSON inside a string",
    # so attempt json.loads at most twice.
    for _ in range(2):
        if isinstance(data, str):
            try:
                data = json.loads(data)
            except Exception:
                break
    if isinstance(data, list):
        return data
    return []


def web_search_tool(query: str) -> str:
    logger.info(f"[web_search_tool] query: {query}")
    raw_result = asyncio.run(
        _call_zhipu_web_search(
            query=query,
            recency="oneMonth",
            content_size="medium",
            location="us",
        )
    )
    items = _parse_mcp_search_result(raw_result)
    if not items:
        logger.warning("[web_search_tool] parsed result is empty, fallback to raw text")
        return str(raw_result)

    lines = []
    for idx, item in enumerate(items[:5], start=1):
        title = item.get("title", "No title")
        link = item.get("link", "")
        content = item.get("content", "")
        lines.append(f"[{idx}] {title}\n{content}\n{link}")

    final_text = "\n\n".join(lines)
    logger.info("[web_search_tool] search finished successfully")
    return final_text
```
Step 2: Registering it as a tool

Once the tool is wrapped, we can register it in tools.py. Here the difference from plain tool calling shows up: tools written locally have to be wired into the agent by hand, but the MCP-backed tool only needs to be registered in the tool list, with no extra integration step:
```python
from datetime import datetime

from app.rag_system import RAGSystem
from app.llm_utils import client
from app.config import CHAT_MODEL
from app.mcp_tools import web_search_tool


def rag_tool(query, rag: RAGSystem, chat_history=None):
    return rag.ask(query, chat_history=chat_history)


def calculator_tool(expression):
    try:
        return str(eval(expression))
    except Exception:
        return "Invalid expression"


def time_tool(_):
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")


def llm_tool(query, chat_history=None):
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    if chat_history:
        messages.extend(chat_history)
    messages.append({"role": "user", "content": query})
    response = client.chat.completions.create(
        model=CHAT_MODEL,
        messages=messages,
    )
    return response.choices[0].message.content


TOOLS = [
    {"name": "rag", "description": "Use for paper/document questions", "func": rag_tool},
    {"name": "calculator", "description": "Use for math calculations", "func": calculator_tool},
    {"name": "time", "description": "Get current time", "func": time_tool},
    {
        "name": "web_search",
        "description": "Use for external web search when local documents are not enough or when real-time web information is needed",
        "func": web_search_tool,
    },
    {"name": "llm", "description": "Use for general questions", "func": llm_tool},
]
```
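Since the registry is just a list of dicts, dispatch is a name lookup. A self-contained sketch with only the two dependency-free tools (calculator and time) shows the pattern; `run_tool` is a hypothetical helper, not part of the project code:

```python
from datetime import datetime

def calculator_tool(expression):
    try:
        return str(eval(expression))
    except Exception:
        return "Invalid expression"

def time_tool(_):
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")

TOOLS = [
    {"name": "calculator", "description": "Use for math calculations", "func": calculator_tool},
    {"name": "time", "description": "Get current time", "func": time_tool},
]

def run_tool(name: str, tool_input: str) -> str:
    # Look the tool up by name and invoke its function.
    tool = next((t for t in TOOLS if t["name"] == name), None)
    if tool is None:
        return f"Unknown tool: {name}"
    return tool["func"](tool_input)

print(run_tool("calculator", "2 + 3 * 4"))  # → 14
```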
Step 3: Wiring it into the LangGraph Agent flow

Let normalize_decision() accept web_search
After registering the tool, the set of valid tools is no longer just rag, llm, time, and calculator, so the tool-normalization step needs to be extended to include our web_search tool:
```python
def normalize_decision(decision: dict, query: str, valid_tool_names: set[str]) -> dict:
    if not isinstance(decision, dict):
        return {"tool": "llm", "input": query}

    tool = str(decision.get("tool", "")).strip().lower()
    tool_input = str(decision.get("input", "")).strip()

    if tool not in valid_tool_names:
        return {"tool": "llm", "input": query}

    # For rag / llm / time / web_search, always keep the original query
    if tool in {"rag", "llm", "time", "web_search"}:
        return {"tool": tool, "input": query}

    # calculator may keep the extracted expression
    if tool == "calculator":
        if not tool_input:
            return {"tool": "calculator", "input": query}
        return {"tool": "calculator", "input": tool_input}

    return {"tool": "llm", "input": query}
```
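A few quick checks illustrate the normalization behavior (the function is repeated in condensed form so the snippet runs standalone):

```python
def normalize_decision(decision, query, valid_tool_names):
    # Condensed copy of the function above.
    if not isinstance(decision, dict):
        return {"tool": "llm", "input": query}
    tool = str(decision.get("tool", "")).strip().lower()
    tool_input = str(decision.get("input", "")).strip()
    if tool not in valid_tool_names:
        return {"tool": "llm", "input": query}
    if tool in {"rag", "llm", "time", "web_search"}:
        return {"tool": tool, "input": query}
    if tool == "calculator":
        return {"tool": "calculator", "input": tool_input or query}
    return {"tool": "llm", "input": query}

valid = {"rag", "calculator", "time", "web_search", "llm"}
q = "What are the latest RAG papers?"

# web_search keeps the user's original question, not the router's rewrite
print(normalize_decision({"tool": "web_search", "input": "rewritten"}, q, valid))
# an unknown tool name falls back to llm
print(normalize_decision({"tool": "browser", "input": q}, q, valid))
```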
Give the router an explicit web_search selection rule

The routing prompt in the original build_choose_tool_node() needs adjusting: when the user's question contains words like "recent", "current", or "latest", the router should pick web_search:
```python
prompt = f"""
You are a tool router. Your job is ONLY to choose the best tool and prepare its input.
Do NOT answer the user's question.
Do NOT rewrite the user's question into an answer.
Return JSON only.

Available tools:
{tool_desc}

Tool selection guidance:
- Use rag for questions about the loaded local papers/documents, such as paper1, paper2, this paper, the PDF, or document-based comparison/analysis.
- Use calculator for clear math calculations.
- Use time for current time questions.
- Use web_search for questions that explicitly need web information, latest information, recent updates, current events, online search, or information likely not contained in the local PDFs.
- Use llm for general questions that do not need document retrieval, calculation, time, or web search.

Rules:
1. You must return exactly one JSON object.
2. JSON format: {{"tool": "...", "input": "..."}}
3. tool must be one of: rag, calculator, time, web_search, llm
4. For rag, llm, time, and web_search:
   - input should stay the same as the user's original question
   - do not invent a new sentence
5. For calculator:
   - input should be the math expression only if you can extract it
6. Do not include markdown, explanations, or code fences.

User question: {query}
"""
```
Add a conservative fallback rule

The route produced by the large language model is really a soft route: because it depends on the model's judgment, the result carries some uncertainty. So we add a rigid fallback on top of it: match keywords in the user's question, and if words such as "latest", "recent", or their Chinese equivalents like 联网 ("go online") appear, route directly to the web search tool:
```python
def maybe_force_web_search(query: str, decision: dict) -> dict:
    q = query.lower()
    web_keywords = [
        "latest", "recent", "current", "today", "news",
        "web", "online", "internet", "search the web",
        "最新", "最近", "当前", "今天", "联网", "网上", "搜索一下",
    ]
    local_doc_keywords = [
        "paper1", "paper2", "this paper", "the paper",
        "pdf", "document", "论文", "文档",
    ]
    has_web_signal = any(k in q for k in web_keywords)
    looks_like_local_doc = any(k in q for k in local_doc_keywords)
    if has_web_signal and not looks_like_local_doc:
        return {"tool": "web_search", "input": query}
    return decision
```
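Two example queries show the fallback in action (a condensed copy of the function, with only the English keywords, so the snippet runs standalone):

```python
def maybe_force_web_search(query: str, decision: dict) -> dict:
    # Condensed copy of the fallback above (English keywords only).
    q = query.lower()
    web_keywords = ["latest", "recent", "current", "today", "news",
                    "web", "online", "internet", "search the web"]
    local_doc_keywords = ["paper1", "paper2", "this paper", "the paper",
                          "pdf", "document"]
    has_web_signal = any(k in q for k in web_keywords)
    looks_like_local_doc = any(k in q for k in local_doc_keywords)
    if has_web_signal and not looks_like_local_doc:
        return {"tool": "web_search", "input": query}
    return decision

soft = {"tool": "llm", "input": "ignored"}

# "latest" with no local-document cue forces web_search
print(maybe_force_web_search("What is the latest on MCP?", soft)["tool"])   # → web_search
# "recent" plus "this paper" keeps the soft route untouched
print(maybe_force_web_search("Summarize recent work cited in this paper", soft)["tool"])  # → llm
```

Note that the local-document keywords win: a question that mentions both "recent" and "this paper" stays on the soft route, which keeps the fallback from hijacking document questions.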
Minor adjustments to tool execution

With the constraints above in place, the tool-execution part only needs a corresponding small change: apply the hard constraints to the tool-selection result:
```python
raw_decision = json.loads(cleaned)
decision = normalize_decision(raw_decision, query, valid_tool_names)
decision = maybe_force_web_search(query, decision)
logger.info(f"[choose_tool_node] raw decision: {raw_decision}")
logger.info(f"[choose_tool_node] normalized decision: {decision}")
return {"decision": decision}
```

If this article helped you, a like would be appreciated~
Full code: https://github.com/1186141415/LangChain-for-A-Paper-Rag-Agent