
Memory

Memory is an optional module. Unless you actually need it, there is no reason to add a Memory module to an agent: StateGraph already stores the message history, which covers the most basic "memory" requirement.

Situations that do call for a Memory module include:

  1. The message history exceeds the context limit, so memories must be stored with an external tool

  2. A human-in-the-loop interrupt fires, and the agent state must be stored temporarily

  3. User preferences need to be extracted across conversations

LangGraph divides memory into short-term memory and long-term memory.

In addition, LangMem also provides memory storage and retrieval.

PS: I have no idea why the dev team split memory into so many fragments. These modules do not feel mature yet, and are likely to change a lot.

import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.checkpoint.memory import InMemorySaver

# Load environment configuration
_ = load_dotenv()

# Initialize the model
model = ChatOpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url=os.getenv("DASHSCOPE_BASE_URL"),
    model="qwen3-coder-plus",
    temperature=0.7,
)

# Define the assistant node
def assistant(state: MessagesState):
    return {'messages': [model.invoke(state['messages'])]}

I. Short-Term Memory

Short-term memory (working memory) is generally used for temporary storage and is strongly tied to the current conversation. Unlike relying on the raw context window alone, short-term memory can deliberately persist important content, which improves engineering stability.

1) Using Short-Term Memory in a StateGraph

For ease of demonstration, we use InMemorySaver to store short-term memory. This means the memory lives in process memory: if the program exits, the memory is gone.

# Create the short-term memory checkpointer
checkpointer = InMemorySaver()

# Build the graph
builder = StateGraph(MessagesState)

# Add the node
builder.add_node('assistant', assistant)

# Add the edges
builder.add_edge(START, 'assistant')
builder.add_edge('assistant', END)

graph = builder.compile(checkpointer=checkpointer)

# Tell the agent my name is luochang
result = graph.invoke(
    {'messages': ['hi! i am luochang']},
    {"configurable": {"thread_id": "1"}},
)

for message in result['messages']:
    message.pretty_print()
================================ Human Message =================================

hi! i am luochang
================================== Ai Message ==================================

Hi Luochang! Nice to meet you. How are you doing today? Is there anything I can help you with?
# Ask the agent for my name
result = graph.invoke(
    {"messages": [{"role": "user", "content": "What is my name?"}]},
    {"configurable": {"thread_id": "1"}},  
)

for message in result['messages']:
    message.pretty_print()
================================ Human Message =================================

hi! i am luochang
================================== Ai Message ==================================

Hi Luochang! Nice to meet you. How are you doing today? Is there anything I can help you with?
================================ Human Message =================================

What is my name?
================================== Ai Message ==================================

Your name is Luochang, as you introduced yourself to me earlier!

2) Using Short-Term Memory with create_agent

from langchain.agents import create_agent

# Create the short-term memory checkpointer
checkpointer = InMemorySaver()

agent = create_agent(
    model=model,
    checkpointer=checkpointer
)

# Tell the agent my name is luochang
result = agent.invoke(
    {'messages': ['hi! i am luochang']},
    {"configurable": {"thread_id": "2"}},
)

for message in result['messages']:
    message.pretty_print()
================================ Human Message =================================

hi! i am luochang
================================== Ai Message ==================================

Hi Luochang! Nice to meet you. How are you doing today? Is there anything I can help you with?
# Ask the agent for my name
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What is my name?"}]},
    {"configurable": {"thread_id": "2"}},  
)

for message in result['messages']:
    message.pretty_print()
================================ Human Message =================================

hi! i am luochang
================================== Ai Message ==================================

Hi Luochang! Nice to meet you. How are you doing today? Is there anything I can help you with?
================================ Human Message =================================

What is my name?
================================== Ai Message ==================================

Your name is Luochang, as you introduced yourself to me earlier!

To verify that InMemorySaver actually works, comment out checkpointer=checkpointer and observe whether the agent can still answer with my name.

3) Short-Term Memory Backed by an External Database

If we persist the current working state with SQLite, then even after exiting the program we can restore the previous state on the next run. Let's test this.

Before using SQLite as the external database for short-term memory, install the Python package that enables this feature:

pip install langgraph-checkpoint-sqlite
# Remove the SQLite database so we start from a clean slate
if os.path.exists("short-memory.db"):
    os.remove("short-memory.db")
import os
import sqlite3

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.sqlite import SqliteSaver
from langchain.agents import create_agent

# Load environment configuration
_ = load_dotenv()

# Initialize the model
model = ChatOpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url=os.getenv("DASHSCOPE_BASE_URL"),
    model="qwen3-coder-plus",
    temperature=0.7,
)

# Create a SQLite-backed short-term memory checkpointer
checkpointer = SqliteSaver(
    sqlite3.connect("short-memory.db", check_same_thread=False)
)

# Create the agent
agent = create_agent(
    model=model,
    checkpointer=checkpointer,
)

# Tell the agent my name is luochang
result = agent.invoke(
    {'messages': ['hi! i am luochang']},
    {"configurable": {"thread_id": "3"}},
)

for message in result['messages']:
    message.pretty_print()
================================ Human Message =================================

hi! i am luochang
================================== Ai Message ==================================

Hi Luochang! Nice to meet you. How are you doing today? Is there anything I can help you with?

After restarting the Jupyter Notebook, check whether the agent can recall my name from SQLite.

Restart via Kernel -> Restart Kernel..., then run the following code.

import os
import sqlite3

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.sqlite import SqliteSaver
from langchain.agents import create_agent

# Load environment configuration
_ = load_dotenv()

# Initialize the model
model = ChatOpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url=os.getenv("DASHSCOPE_BASE_URL"),
    model="qwen3-coder-plus",
    temperature=0.7,
)

# Create a SQLite-backed short-term memory checkpointer
checkpointer = SqliteSaver(
    sqlite3.connect("short-memory.db", check_same_thread=False)
)

# Create the agent
agent = create_agent(
    model=model,
    checkpointer=checkpointer,
)

# Ask the agent to recall my name
result = agent.invoke(
    {'messages': ['What is my name?']},
    {"configurable": {"thread_id": "3"}},
)

for message in result['messages']:
    message.pretty_print()
================================ Human Message =================================

hi! i am luochang
================================== Ai Message ==================================

Hi Luochang! Nice to meet you. How are you doing today? Is there anything I can help you with?
================================ Human Message =================================

What is my name?
================================== Ai Message ==================================

Your name is Luochang, as you introduced yourself to me earlier!

II. Long-Term Memory

import os
from dotenv import load_dotenv
from openai import OpenAI
from langchain_openai import ChatOpenAI
from langchain_core.runnables import RunnableConfig
from langchain.agents import create_agent
from langchain.tools import tool, ToolRuntime
from langgraph.store.memory import InMemoryStore
from dataclasses import dataclass

EMBED_MODEL = "text-embedding-v4"
EMBED_DIM = 1024

# Load environment configuration
_ = load_dotenv()

# Client used to request text embeddings
client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url=os.getenv("DASHSCOPE_BASE_URL"),
)

# Initialize the model
model = ChatOpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url=os.getenv("DASHSCOPE_BASE_URL"),
    model="qwen3-coder-plus",
    temperature=0.7,
)
# Embedding helper
def embed(texts: list[str]) -> list[list[float]]:
    response = client.embeddings.create(
        model=EMBED_MODEL,
        input=texts,
        dimensions=EMBED_DIM,
    )

    return [item.embedding for item in response.data]

# Check that text embeddings are generated correctly
texts = [
    "LangGraph's middleware is very powerful",
    "LangGraph's MCP is also easy to use",
]
vectors = embed(texts)

len(vectors), len(vectors[0])
(2, 1024)
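
The store below ranks semantic-search hits by vector similarity (the score field in the results). As a quick illustration of the idea, here is a plain cosine similarity in pure Python; applying it to vectors[0] and vectors[1] above would measure how close the two sentences are (the store computes its scores internally, so this is only a sketch of the concept):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Dot product of the two vectors divided by the product of their norms
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors; with real embeddings: cosine_similarity(vectors[0], vectors[1])
print(round(cosine_similarity([1.0, 0.0], [1.0, 1.0]), 4))  # 0.7071
```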

1) Reading and Writing Long-Term Memory Directly

# InMemoryStore keeps data in an in-memory dictionary. Use a DB-backed store in production.
store = InMemoryStore(index={"embed": embed, "dims": EMBED_DIM})

# Add two user records
namespace = ("users", )
key = "user_1"
store.put(
    namespace,
    key,
    {
        "rules": [
            "User likes short, direct language",
            "User only speaks English & python",
        ],
        "rule_id": "3",
    },
)

store.put( 
    ("users",),  # Namespace to group related data together (users namespace for user data)
    "user_2",  # Key within the namespace (user ID as key)
    {
        "name": "John Smith",
        "language": "English",
    }  # Data to store for the given user
)

# Get a "memory" by namespace and key (returns None if the key does not exist)
item = store.get(namespace, "user_1")

# search for "memories" within this namespace, filtering on content equivalence, sorted by vector similarity
items = store.search( 
    namespace, filter={"rule_id": "3"}, query="language preferences"
)

items
[Item(namespace=['users'], key='user_1', value={'rules': ['User likes short, direct language', 'User only speaks English & python'], 'rule_id': '3'}, created_at='2025-11-04T10:08:24.319215+00:00', updated_at='2025-11-04T10:08:24.319226+00:00', score=0.4085710154661828)]

2) Reading Long-Term Memory from a Tool

@dataclass
class Context:
    user_id: str

@tool
def get_user_info(runtime: ToolRuntime[Context]) -> str:
    """Look up user info."""
    # Access the store - same as that provided to `create_agent`
    store = runtime.store 
    user_id = runtime.context.user_id
    # Retrieve data from store - returns StoreValue object with value and metadata
    user_info = store.get(("users",), user_id) 
    return str(user_info.value) if user_info else "Unknown user"

agent = create_agent(
    model=model,
    tools=[get_user_info],
    # Pass store to agent - enables agent to access store when running tools
    store=store, 
    context_schema=Context
)

# Run the agent
result = agent.invoke(
    {"messages": [{"role": "user", "content": "look up user information"}]},
    context=Context(user_id="user_2") 
)

for message in result['messages']:
    message.pretty_print()
================================ Human Message =================================

look up user information
================================== Ai Message ==================================
Tool Calls:
  get_user_info (call_fb3ff8f64e7f4bd1b7e2d854)
 Call ID: call_fb3ff8f64e7f4bd1b7e2d854
  Args:
================================= Tool Message =================================
Name: get_user_info

{'name': 'John Smith', 'language': 'English'}
================================== Ai Message ==================================

The user's name is John Smith and their language is English.

3) Writing Long-Term Memory from a Tool

from typing_extensions import TypedDict

# InMemoryStore saves data to an in-memory dictionary. Use a DB-backed store in production.
store = InMemoryStore() 

@dataclass
class Context:
    user_id: str

# TypedDict defines the structure of user information for the LLM
class UserInfo(TypedDict):
    name: str

# Tool that allows agent to update user information (useful for chat applications)
@tool
def save_user_info(user_info: UserInfo, runtime: ToolRuntime[Context]) -> str:
    """Save user info."""
    # Access the store - same as that provided to `create_agent`
    store = runtime.store 
    user_id = runtime.context.user_id 
    # Store data in the store (namespace, key, data)
    store.put(("users",), user_id, user_info) 
    return "Successfully saved user info."

agent = create_agent(
    model=model,
    tools=[save_user_info],
    store=store,
    context_schema=Context
)

# Run the agent
agent.invoke(
    {"messages": [{"role": "user", "content": "My name is John Smith"}]},
    # user_id passed in context to identify whose information is being updated
    context=Context(user_id="user_123") 
)

# You can access the store directly to get the value
store.get(("users",), "user_123").value
{'name': 'John Smith'}