
Middleware

Middleware is the headline feature of this release; many of the new capabilities, such as dynamic system prompts, human-in-the-loop, and dynamic context injection, are built on it.

In this section we will use middleware to implement four features:

  • Budget control

  • Message truncation

  • Banned-word filtering

  • Filtering of sensitive user information

1. Budget Control

As a conversation accumulates turns, the history grows and so does the cost of every request. To keep the budget under control, we can switch to a lower-cost model once the conversation exceeds a certain number of turns. This can be implemented with middleware.

import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent
from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
from langchain_core.messages import HumanMessage
from langgraph.graph import MessagesState

# Load model configuration (API key and base URL) from .env
_ = load_dotenv()

# Low-cost model
basic_model = ChatOpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url=os.getenv("DASHSCOPE_BASE_URL"),
    model="qwen3-coder-plus",
)

# High-cost model
advanced_model = ChatOpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url=os.getenv("DASHSCOPE_BASE_URL"),
    model="qwen3-max",
)

Specifically, the budget control below is implemented as a piece of middleware built with the @wrap_model_call decorator.

All of the decorators can be found in the API reference; a minimal @dynamic_prompt sketch follows the list:

  • @before_agent: run logic before the agent starts

  • @before_model: run logic before each model call

  • @after_agent: run logic after the agent finishes

  • @after_model: run logic after each model call

  • @wrap_model_call: intercept and control the model call

  • @wrap_tool_call: intercept and control tool calls

  • @dynamic_prompt: build the system prompt dynamically

  • @hook_config: configure hook behavior
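
Most of these share the same shape: decorate a plain function, then pass it to create_agent through the middleware list. As a quick taste (not used in the four examples below), here is a minimal @dynamic_prompt sketch; the decorated function receives the ModelRequest and returns the system prompt string, recomputed on every model call:

from langchain.agents.middleware import dynamic_prompt, ModelRequest

@dynamic_prompt
def message_count_prompt(request: ModelRequest) -> str:
    """Rebuild the system prompt from the current state on every model call."""
    n = len(request.state["messages"])
    return f"You are a concise assistant. This conversation has {n} messages so far."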

@wrap_model_call
def dynamic_model_selection(request: ModelRequest, handler) -> ModelResponse:
    """Choose model based on conversation complexity."""
    message_count = len(request.state["messages"])

    if message_count > 5:
        # Use an advanced model for longer conversations
        model = advanced_model
    else:
        model = basic_model

    request.model = model
    print(f"message_count: {message_count}")
    print(f"model_name: {model.model_name}")

    return handler(request)

agent = create_agent(
    model=basic_model,  # Default model
    middleware=[dynamic_model_selection]
)
state: MessagesState = {"messages": []}
items = ['car', 'airplane', 'motorcycle', 'bicycle']
for idx, i in enumerate(items):
    print(f"\n=== Round {idx+1} ===")
    state["messages"] += [HumanMessage(content=f"{i}有几个轮子,请简单回答")]
    result = agent.invoke(state)
    state["messages"] = result["messages"]
    print(f"content: {result["messages"][-1].content}")

=== Round 1 ===
message_count: 1
model_name: qwen3-coder-plus
content: A car has 4 wheels.

=== Round 2 ===
message_count: 3
model_name: qwen3-coder-plus
content: An airplane has 3 wheels (landing gear).

=== Round 3 ===
message_count: 5
model_name: qwen3-coder-plus
content: A motorcycle has 2 wheels.

=== Round 4 ===
message_count: 7
model_name: qwen3-max
content: A bicycle has 2 wheels.
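
Each round appends one human message before the model call and one AI reply after it, so the middleware sees message counts of 1, 3, 5 and 7. The switch to qwen3-max therefore happens in round 4, the first time the count exceeds 5.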

2. Message Truncation

An agent's context window is finite. Once the limit is exceeded, the context must be compressed somehow, and the simplest, bluntest approach of all is truncation. Truncation can be implemented with the @before_model decorator.

from langchain.messages import RemoveMessage
from langgraph.graph.message import REMOVE_ALL_MESSAGES
from langgraph.checkpoint.memory import InMemorySaver
from langchain.agents import create_agent, AgentState
from langchain.agents.middleware import before_model
from langgraph.runtime import Runtime
from langchain_core.runnables import RunnableConfig
from typing import Any

In the example below, we tell the agent our name is bob in the very first message and make the middleware always keep that first message, so the agent always remembers that we are bob.

@before_model
def trim_messages(state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
    """Keep only the last few messages to fit context window."""
    messages = state["messages"]

    if len(messages) <= 3:
        return None  # No changes needed

    first_msg = messages[0]
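    # Keep the last 3 messages when the count is even, otherwise the last 4,
    # presumably so that, with first_msg prepended, roles keep alternating.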
    recent_messages = messages[-3:] if len(messages) % 2 == 0 else messages[-4:]
    new_messages = [first_msg] + recent_messages

    return {
        "messages": [
            RemoveMessage(id=REMOVE_ALL_MESSAGES),
            *new_messages
        ]
    }

agent = create_agent(
    basic_model,
    middleware=[trim_messages],
    checkpointer=InMemorySaver(),
)

config: RunnableConfig = {"configurable": {"thread_id": "1"}}

def agent_invoke(agent):
    agent.invoke({"messages": "hi, my name is bob"}, config)
    agent.invoke({"messages": "write a short poem about cats"}, config)
    agent.invoke({"messages": "now do the same but for dogs"}, config)
    final_response = agent.invoke({"messages": "what's my name?"}, config)
    
    final_response["messages"][-1].pretty_print()

agent_invoke(agent)
================================== Ai Message ==================================

Your name is Bob! You introduced yourself to me earlier.

Next, let's change the middleware to keep only the last two messages. Now the agent no longer remembers that we are bob.

@before_model
def trim_without_first_message(state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
    """Keep only the last few messages to fit context window."""
    messages = state["messages"]

    return {
        "messages": [
            RemoveMessage(id=REMOVE_ALL_MESSAGES),
            *messages[-2:]
        ]
    }

agent = create_agent(
    basic_model,
    middleware=[trim_without_first_message],
    checkpointer=InMemorySaver(),
)

agent_invoke(agent)
================================== Ai Message ==================================

I don't have access to your name or personal information. I don't know who you are beyond our current conversation. If you'd like to share your name, I'd be happy to use it, but I can't access that information on my own. Is there something specific I can help you with today?
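
Counting messages is a crude proxy for the real constraint, which is tokens. As a variation, the same hook could use the token-aware trim_messages utility from langchain_core.messages.utils, imported under an alias here to avoid clashing with the middleware above. A minimal sketch, assuming count_tokens_approximately is available in your langchain_core version, with an arbitrary 2048-token budget:

from langchain_core.messages.utils import (
    count_tokens_approximately,
    trim_messages as trim_to_budget,
)

@before_model
def trim_by_tokens(state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
    """Keep the most recent messages that fit an approximate token budget."""
    trimmed = trim_to_budget(
        state["messages"],
        max_tokens=2048,  # assumed budget; tune per model
        strategy="last",  # drop the oldest messages first
        token_counter=count_tokens_approximately,
        start_on="human",  # keep the human/AI alternation intact
    )
    if len(trimmed) == len(state["messages"]):
        return None  # nothing was dropped
    return {"messages": [RemoveMessage(id=REMOVE_ALL_MESSAGES), *trimmed]}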

3. Guardrails: Banned-Word Filtering

Guardrails is the umbrella term for the content-safety capabilities an agent provides. Models do have built-in safety training, but it is easily bypassed, which is commonly known as jailbreaking. An agent can add safety measures outside the model itself, enforced as mandatory checks in the surrounding engineering.

In LangGraph, guardrails can be implemented as middleware. Below we build a simple guardrail: if the user's latest input contains one of the specified banned words, the agent refuses to answer.

from typing import Any

from langchain.agents.middleware import before_agent, AgentState
from langgraph.runtime import Runtime

banned_keywords = ["hack", "exploit", "malware"]

@before_agent(can_jump_to=["end"])
def content_filter(state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
    """Deterministic guardrail: Block requests containing banned keywords."""
    # Inspect the latest message; only filter human input
    if not state["messages"]:
        return None

    last_message = state["messages"][-1]
    if last_message.type != "human":
        return None

    content = last_message.content.lower()

    # Check for banned keywords
    for keyword in banned_keywords:
        if keyword in content:
            # Block execution before any processing
            return {
                "messages": [{
                    "role": "assistant",
                    "content": "I cannot process requests containing inappropriate content. Please rephrase your request."
                }],
                "jump_to": "end"
            }

    return None

agent = create_agent(
    model=basic_model,
    middleware=[content_filter],
)

# This request will be blocked before any processing
result = agent.invoke({
    "messages": [{"role": "user", "content": "How do I hack into a database?"}]
})
for message in result["messages"]:
    message.pretty_print()
================================ Human Message =================================

How do I hack into a database?
================================== Ai Message ==================================

I cannot process requests containing inappropriate content. Please rephrase your request.
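
Substring matching like the above is deterministic and cheap, but brittle: "hack" also matches inside harmless words such as "hackathon". A minimal word-boundary variant (the contains_banned_keyword helper is illustrative, not part of the middleware above):

import re

# Match banned words only at word boundaries, so "hackathon" is not blocked.
banned_pattern = re.compile(r"\b(hack|exploit|malware)\b", re.IGNORECASE)

def contains_banned_keyword(text: str) -> bool:
    return banned_pattern.search(text) is not None

print(contains_banned_keyword("How do I hack into a database?"))   # True
print(contains_banned_keyword("I went to a hackathon last week"))  # False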

4. Guardrails: PII Detection

Next, we continue with guardrails. PII (Personally Identifiable Information) detection is a guardrail that filters sensitive user information. In the example below, we detect data such as email addresses, IPs, home addresses, and bank card numbers.

We will try two strategies for handling detected PII (a deterministic regex sketch follows the list):

  1. Refuse to answer the user's question

  2. Replace the sensitive information with a run of asterisks ********
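
PII with a fixed shape can also be caught deterministically, without any model call; here is a minimal sketch with illustrative patterns (the mask_pii helper is hypothetical). The LLM-based detector used below additionally catches cases a fixed regex cannot, such as usernames embedded in file paths.

import re

PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),     # email addresses
    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),  # IPv4 addresses
    re.compile(r"\b(?:\d[ -]?){13,19}\b"),       # card-like digit runs
]

def mask_pii(text: str) -> str:
    """Replace every match of the patterns above with a run of asterisks."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("********", text)
    return text

print(mask_pii("Reach alice@example.com from 192.168.0.1"))
# Reach ******** from ********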

from textwrap import dedent
from pydantic import BaseModel, Field

# A trusted model, typically hosted locally; for convenience we still use Qwen here
trusted_model = ChatOpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url=os.getenv("DASHSCOPE_BASE_URL"),
    model="qwen3-coder-plus",
)

# Structured output for the detector: True when PII is found, False otherwise
class PiiCheck(BaseModel):
    """Structured output indicating whether text contains PII."""
    is_pii: bool = Field(description="Whether the text contains PII")

def message_with_pii(pii_middleware):
    agent = create_agent(
        model=basic_model,
        middleware=[pii_middleware],
    )

    # This request embeds a username in file paths, which counts as PII
    result = agent.invoke({
        "messages": [{
            "role": "user",
            "content": dedent(
                """
                File "/home/luochang/proj/agent.py", line 53, in my_agent
                    agent = create_react_agent(
                            ^^^^^^^^^^^^^^^^^^^
                File "/home/luochang/miniconda3/lib/python3.12/site-packages/typing_extensions.py", line 2950, in wrapper
                    return arg(*args, **kwargs)
                        ^^^^^^^^^^^^^^^^^^^^
                File "/home/luochang/miniconda3/lib/python3.12/site-packages/langgraph/prebuilt/chat_agent_executor.py", line 566, in create_react_agent
                    model = cast(BaseChatModel, model).bind_tools(
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                AttributeError: 'RunnableLambda' object has no attribute 'bind_tools'
    
                ---
    
                Why am I getting this error?
                """).strip()
        }]
    })

    return result

Strategy 1: if PII is found, refuse to reply.

@before_agent(can_jump_to=["end"])
def content_blocker(state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
    """LLM-based guardrail: block requests that contain PII."""
    # Inspect the latest message; only check human input
    if not state["messages"]:
        return None

    last_message = state["messages"][-1]
    if last_message.type != "human":
        return None

    content = last_message.content.lower()
    prompt = (
        "You are a privacy-protection assistant. Identify any personally "
        "identifiable information (PII) in the text below, for example names, "
        "ID numbers, passport numbers, phone numbers, emails, home addresses, "
        "bank card numbers, social-media handles, or license plates. Note in "
        "particular that a username appearing in code or file paths also counts "
        "as sensitive. Return {\"is_pii\": true} if PII is present, otherwise "
        "{\"is_pii\": false}. Respond strictly in JSON and output nothing but "
        "the JSON. The text follows:\n\n" + content
    )

    pii_agent = trusted_model.with_structured_output(PiiCheck)
    result = pii_agent.invoke(prompt)

    if result.is_pii:
        # Block execution before any processing
        return {
            "messages": [{
                "role": "assistant",
                "content": "I cannot process requests containing inappropriate content. Please rephrase your request."
            }],
            "jump_to": "end"
        }
    else:
        print("No PII found")

    return None

result = message_with_pii(pii_middleware=content_blocker)

for message in result["messages"]:
    message.pretty_print()
================================ Human Message =================================

File "/home/luochang/proj/agent.py", line 53, in my_agent
    agent = create_react_agent(
            ^^^^^^^^^^^^^^^^^^^
File "/home/luochang/miniconda3/lib/python3.12/site-packages/typing_extensions.py", line 2950, in wrapper
    return arg(*args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^
File "/home/luochang/miniconda3/lib/python3.12/site-packages/langgraph/prebuilt/chat_agent_executor.py", line 566, in create_react_agent
    model = cast(BaseChatModel, model).bind_tools(
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'RunnableLambda' object has no attribute 'bind_tools'

---

Why am I getting this error?
================================== Ai Message ==================================

I cannot process requests containing inappropriate content. Please rephrase your request.

Strategy 2: if PII is found, mask it with asterisks.

@before_agent(can_jump_to=["end"])
def content_filter(state: AgentState, runtime: Runtime) -> dict[str, Any] | None:
    """LLM-based guardrail: mask PII instead of blocking the request."""
    # Inspect the latest message; only check human input
    if not state["messages"]:
        return None

    last_message = state["messages"][-1]
    if last_message.type != "human":
        return None

    content = last_message.content.lower()
    prompt = (
        "You are a privacy-protection assistant. Identify any personally "
        "identifiable information (PII) in the text below, for example names, "
        "ID numbers, passport numbers, phone numbers, emails, home addresses, "
        "bank card numbers, social-media handles, or license plates. Note in "
        "particular that a username appearing in code or file paths also counts "
        "as sensitive. Return {\"is_pii\": true} if PII is present, otherwise "
        "{\"is_pii\": false}. Respond strictly in JSON and output nothing but "
        "the JSON. The text follows:\n\n" + content
    )

    pii_agent = trusted_model.with_structured_output(PiiCheck)
    result = pii_agent.invoke(prompt)

    if result.is_pii:
        mask_prompt = (
            "You are a privacy-protection assistant. Replace every piece of "
            "personally identifiable information (PII) in the text below with "
            "asterisks (*). Replace only the sensitive spans and leave all other "
            "text unchanged. Output only the processed text, with no explanation "
            "or extra content. The text follows:\n\n" + last_message.content
        )
        masked_message = basic_model.invoke(mask_prompt)
        return {
            "messages": [{
                "role": "assistant",
                "content": masked_message.content
            }]
        }
    else:
        print("No PII found")

    return None

result = message_with_pii(pii_middleware=content_filter)

for message in result["messages"]:
    message.pretty_print()
================================ Human Message =================================

File "/home/luochang/proj/agent.py", line 53, in my_agent
    agent = create_react_agent(
            ^^^^^^^^^^^^^^^^^^^
File "/home/luochang/miniconda3/lib/python3.12/site-packages/typing_extensions.py", line 2950, in wrapper
    return arg(*args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^
File "/home/luochang/miniconda3/lib/python3.12/site-packages/langgraph/prebuilt/chat_agent_executor.py", line 566, in create_react_agent
    model = cast(BaseChatModel, model).bind_tools(
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'RunnableLambda' object has no attribute 'bind_tools'

---

Why am I getting this error?
================================== Ai Message ==================================

File "/home/********/proj/agent.py", line 53, in my_agent
    agent = create_react_agent(
            ^^^^^^^^^^^^^^^^^^^
File "/home/********/miniconda3/lib/python3.12/site-packages/typing_extensions.py", line 2950, in wrapper
    return arg(*args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^
File "/home/********/miniconda3/lib/python3.12/site-packages/langgraph/prebuilt/chat_agent_executor.py", line 566, in create_react_agent
    model = cast(BaseChatModel, model).bind_tools(
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'RunnableLambda' object has no attribute 'bind_tools'

---

Why am I getting this error?
================================== Ai Message ==================================

The cause of this error: **the `model` you passed to `create_react_agent` is a `RunnableLambda` object, not a chat model that supports the `bind_tools` method**.

## Problem Analysis

`create_react_agent` expects a chat model that implements `bind_tools` (such as `ChatOpenAI` or `ChatAnthropic`), but you passed a `RunnableLambda` object, which has no `bind_tools` method.

## Solutions

### Option 1: Use a chat model directly (recommended)

```python
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

# Use the chat model directly
model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

agent = create_react_agent(
    model=model,
    tools=your_tools,
    # other arguments...
)
```

### Option 2: If you need a RunnableLambda, wrap things separately

```python
from langchain_openai import ChatOpenAI
from langchain_core.runnables import RunnableLambda

# Create the chat model first
chat_model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# If you need custom processing logic
def custom_process(input_data):
    # your custom logic here
    return input_data

# Create the RunnableLambda
custom_runnable = RunnableLambda(custom_process)

# Still pass the original chat model to create_react_agent
agent = create_react_agent(
    model=chat_model,  # use the original chat model, not the RunnableLambda
    tools=your_tools,
)
```

### Option 3: Check how you created the model

You may have created the model like this:

```python
# Wrong
model = RunnableLambda(some_function)
agent = create_react_agent(model=model, tools=tools)  # this raises the error
```

It should be changed to:

```python
# Correct
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-3.5-turbo")  # 或其他聊天模型
agent = create_react_agent(model=model, tools=tools)
```

## Summary

Make sure the `model` argument you pass to `create_react_agent` is a standard chat model object, not a `RunnableLambda` or any other object that lacks the `bind_tools` method.
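
One thing to notice in this last transcript: unlike Strategy 1, content_filter does not jump to "end". It only appends the masked text as an assistant message; the original human message, username intact, stays in state, so the main model still sees it and goes on to answer, which is why the output contains two AI messages. To keep PII away from the main model entirely, the middleware would instead need to remove the original message (as in the truncation section) and re-insert it with masked content.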