Migrating from RefineDocumentsChain
RefineDocumentsChain implements a strategy for analyzing long texts. The strategy is as follows:
- Split a text into smaller documents;
- Apply a process to the first document;
- Refine or update the result based on the next document;
- Repeat through the sequence of documents until finished.
A common application of this strategy is summarization, in which a running summary is modified as we proceed through chunks of a long text. This is particularly useful for texts that are significantly larger than the context window of a given LLM.
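In plain Python, the strategy amounts to a simple fold over the document sequence. The sketch below is illustrative only; summarize and refine are hypothetical callables standing in for the LLM calls defined later in this guide:
def refine_loop(chunks, summarize, refine):
    # Summarize the first chunk, then fold each remaining chunk
    # into the running summary.
    summary = summarize(chunks[0])
    for chunk in chunks[1:]:
        summary = refine(summary, chunk)
    return summary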
A LangGraph implementation offers a number of advantages for this problem:
- Whereas RefineDocumentsChain refines the summary via a for loop inside the class, a LangGraph implementation lets you step through the execution to monitor it or otherwise steer it as needed.
- The LangGraph implementation supports streaming of both execution steps and individual tokens.
- Because it is assembled from modular components, it is also easy to extend or modify (e.g., to incorporate tool calling or other behavior).
Below we will go through both RefineDocumentsChain and a corresponding LangGraph implementation, first on a simple example for illustrative purposes.
Let's first load a chat model:
Select chat model:
pip install -qU "langchain[google-genai]"
import getpass
import os
if not os.environ.get("GOOGLE_API_KEY"):
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter API key for Google Gemini: ")
from langchain.chat_models import init_chat_model
llm = init_chat_model("gemini-2.0-flash", model_provider="google_genai")
Example
Let's go through an example where we summarize a sequence of documents. First, we generate some simple documents for illustrative purposes:
from langchain_core.documents import Document
documents = [
    Document(page_content="Apples are red", metadata={"title": "apple_book"}),
    Document(page_content="Blueberries are blue", metadata={"title": "blueberry_book"}),
    Document(page_content="Bananas are yellow", metadata={"title": "banana_book"}),
]
API Reference: Document
Legacy implementation
Below we show an implementation with RefineDocumentsChain. We define the prompt templates for the initial summarization and successive refinements, instantiate separate LLMChain objects for these two purposes, and instantiate RefineDocumentsChain with these components.
from langchain.chains import LLMChain, RefineDocumentsChain
from langchain_core.prompts import ChatPromptTemplate, PromptTemplate
from langchain_openai import ChatOpenAI
# This controls how each document will be formatted. Specifically,
# it will be passed to `format_document` - see that function for more
# details.
document_prompt = PromptTemplate(
    input_variables=["page_content"], template="{page_content}"
)
document_variable_name = "context"
# The prompt here should take as an input variable the
# `document_variable_name`
summarize_prompt = ChatPromptTemplate(
    [
        ("human", "Write a concise summary of the following: {context}"),
    ]
)
initial_llm_chain = LLMChain(llm=llm, prompt=summarize_prompt)
initial_response_name = "existing_answer"
# The prompt here should take as an input variable the
# `document_variable_name` as well as `initial_response_name`
refine_template = """
Produce a final summary.
Existing summary up to this point:
{existing_answer}
New context:
------------
{context}
------------
Given the new context, refine the original summary.
"""
refine_prompt = ChatPromptTemplate([("human", refine_template)])
refine_llm_chain = LLMChain(llm=llm, prompt=refine_prompt)
chain = RefineDocumentsChain(
    initial_llm_chain=initial_llm_chain,
    refine_llm_chain=refine_llm_chain,
    document_prompt=document_prompt,
    document_variable_name=document_variable_name,
    initial_response_name=initial_response_name,
)
We can now invoke our chain:
result = chain.invoke(documents)
result["output_text"]
'Apples are typically red in color, blueberries are blue, and bananas are yellow.'
The LangSmith trace consists of three LLM calls: one to generate the initial summary, and two more to update that summary. The process completes when we update the summary with the content of the final document.
LangGraph
Below we show a LangGraph implementation of this process:
- We use the same two templates as before.
- We generate a simple chain for the initial summary that plucks out the first document, formats it into a prompt, and runs inference with our LLM.
- We generate a second chain, refine_summary_chain, that operates on each successive document, refining the initial summary.
We will need to install langgraph:
pip install -qU langgraph
import operator
from typing import List, Literal, TypedDict
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI
from langgraph.constants import Send
from langgraph.graph import END, START, StateGraph
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
# Initial summary
summarize_prompt = ChatPromptTemplate(
    [
        ("human", "Write a concise summary of the following: {context}"),
    ]
)
initial_summary_chain = summarize_prompt | llm | StrOutputParser()
# Refining the summary with new docs
refine_template = """
Produce a final summary.
Existing summary up to this point:
{existing_answer}
New context:
------------
{context}
------------
Given the new context, refine the original summary.
"""
refine_prompt = ChatPromptTemplate([("human", refine_template)])
refine_summary_chain = refine_prompt | llm | StrOutputParser()
# For LangGraph, we will define the state of the graph to hold the
# document contents, the index of the current document, and the
# running summary.
class State(TypedDict):
    contents: List[str]
    index: int
    summary: str
# We define functions for each node, including a node that generates
# the initial summary:
async def generate_initial_summary(state: State, config: RunnableConfig):
    summary = await initial_summary_chain.ainvoke(
        state["contents"][0],
        config,
    )
    return {"summary": summary, "index": 1}
# And a node that refines the summary based on the next document
async def refine_summary(state: State, config: RunnableConfig):
    content = state["contents"][state["index"]]
    summary = await refine_summary_chain.ainvoke(
        {"existing_answer": state["summary"], "context": content},
        config,
    )
    return {"summary": summary, "index": state["index"] + 1}
# Here we implement logic to either exit the application or refine
# the summary.
def should_refine(state: State) -> Literal["refine_summary", END]:
    if state["index"] >= len(state["contents"]):
        return END
    else:
        return "refine_summary"
graph = StateGraph(State)
graph.add_node("generate_initial_summary", generate_initial_summary)
graph.add_node("refine_summary", refine_summary)
graph.add_edge(START, "generate_initial_summary")
graph.add_conditional_edges("generate_initial_summary", should_refine)
graph.add_conditional_edges("refine_summary", should_refine)
app = graph.compile()
API Reference: StrOutputParser | ChatPromptTemplate | RunnableConfig | ChatOpenAI | Send | StateGraph
from IPython.display import Image
Image(app.get_graph().draw_mermaid_png())
We can step through the execution as follows, printing out the summary as it is refined:
async for step in app.astream(
    {"contents": [doc.page_content for doc in documents]},
    stream_mode="values",
):
    if summary := step.get("summary"):
        print(summary)
Apples are typically red in color.
Apples are typically red in color, while blueberries are blue.
Apples are typically red in color, blueberries are blue, and bananas are yellow.
In the LangSmith trace we again recover three LLM calls, performing the same functions as before.
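If only the final summary is needed, the compiled graph can also be invoked directly; ainvoke returns the final state. A minimal sketch (run inside a notebook or other async context):
# Run the full graph and read the summary from the final state.
final_state = await app.ainvoke(
    {"contents": [doc.page_content for doc in documents]}
)
print(final_state["summary"])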
Note that we can stream tokens from the application, including from intermediate steps:
async for event in app.astream_events(
    {"contents": [doc.page_content for doc in documents]}, version="v2"
):
    kind = event["event"]
    if kind == "on_chat_model_stream":
        content = event["data"]["chunk"].content
        if content:
            print(content, end="|")
    elif kind == "on_chat_model_end":
        print("\n\n")
Ap|ples| are| characterized| by| their| red| color|.|
Ap|ples| are| characterized| by| their| red| color|,| while| blueberries| are| known| for| their| blue| hue|.|
Ap|ples| are| characterized| by| their| red| color|,| blueberries| are| known| for| their| blue| hue|,| and| bananas| are| recognized| for| their| yellow| color|.|
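Because the graph executes step by step, you can also pause and steer it between steps. Below is a minimal sketch, assuming LangGraph's in-memory checkpointer, that interrupts before each refinement so the running summary can be inspected (or the state edited) before resuming; it is one possible pattern, not the only one:
from langgraph.checkpoint.memory import MemorySaver

# Compile with a checkpointer and interrupt before each refinement step.
checkpointed_app = graph.compile(
    checkpointer=MemorySaver(),
    interrupt_before=["refine_summary"],
)
config = {"configurable": {"thread_id": "1"}}

# Runs until the first interrupt; inspect the running summary.
await checkpointed_app.ainvoke(
    {"contents": [doc.page_content for doc in documents]}, config
)
print(checkpointed_app.get_state(config).values["summary"])

# Resume from the checkpoint by passing None as the input.
await checkpointed_app.ainvoke(None, config)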
Next steps
See this tutorial for more LLM-based summarization strategies.
Check out the LangGraph documentation for detail on building with LangGraph.