How to use reference examples when doing extraction
The quality of extractions can often be improved by providing reference examples to the LLM.
Data extraction attempts to generate structured representations of information found in text and other unstructured or semi-structured formats. Tool-calling LLM features are often used in this context. This guide demonstrates how to build few-shot examples of tool calls to help steer the behavior of extraction and similar applications.
While this guide focuses on how to use examples with a tool-calling model, the technique is generally applicable and will work with JSON-mode or prompt-based techniques as well.
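As an illustration of the prompt-based variant, few-shot examples can simply be inlined as text and the model asked to emit JSON directly. The following is a minimal sketch (the prompt wording and variable names are illustrative, not code from this guide):

# A prompt-based variant (sketch): examples are inlined as plain text and the
# model is asked to return JSON directly, with no tool calling involved.
few_shot_block = (
    "Example input: Fiona traveled far from France to Spain.\n"
    'Example output: {"people": [{"name": "Fiona", "hair_color": null, '
    '"height_in_meters": null}]}'
)
prompt_text = (
    "Extract information about people from the text as JSON.\n\n"
    + few_shot_block
    + "\n\nInput: {text}\nOutput:"
)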
LangChain implements a tool-call attribute on messages from LLMs that include tool calls. See our how-to guide on tool calling for more detail. To build reference examples for data extraction, we build a chat history containing a sequence of:
- HumanMessage with example inputs;
- AIMessage with example tool calls;
- ToolMessage with example tool outputs.
LangChain adopts this convention for structuring tool calls into conversations across LLM model providers.
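To make the convention concrete, here is a single hand-written example in that shape (illustrative only: the "Data" tool name and its arguments anticipate the schema defined below, and a reusable adapter for generating such messages is built later in this guide):

from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

# One few-shot example following the HumanMessage / AIMessage / ToolMessage convention.
example = [
    HumanMessage(content="Fiona traveled far from France to Spain."),
    AIMessage(
        content="",
        tool_calls=[
            {
                "id": "example-call-1",  # an arbitrary, unique id
                "name": "Data",  # the tool name matches the pydantic model name
                "args": {
                    "people": [
                        {"name": "Fiona", "hair_color": None, "height_in_meters": None}
                    ]
                },
            }
        ],
    ),
    ToolMessage(content="You have correctly called this tool.", tool_call_id="example-call-1"),
]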
First, we build a prompt template that includes a placeholder for these messages:
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# Define a custom prompt to provide instructions and any additional context.
# 1) You can add examples into the prompt template to improve extraction quality
# 2) Introduce additional parameters to take context into account (e.g., include metadata
#    about the document from which the text was extracted.)
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are an expert extraction algorithm. "
            "Only extract relevant information from the text. "
            "If you do not know the value of an attribute asked "
            "to extract, return null for the attribute's value.",
        ),
        # ↓↓↓↓↓↓↓↓↓↓↓↓ ↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓
        MessagesPlaceholder("examples"),  # <-- EXAMPLES!
        # ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑
        ("human", "{text}"),
    ]
)
Test out the template:
from langchain_core.messages import (
    HumanMessage,
)

prompt.invoke(
    {"text": "this is some text", "examples": [HumanMessage(content="testing 1 2 3")]}
)
ChatPromptValue(messages=[SystemMessage(content="You are an expert extraction algorithm. Only extract relevant information from the text. If you do not know the value of an attribute asked to extract, return null for the attribute's value.", additional_kwargs={}, response_metadata={}), HumanMessage(content='testing 1 2 3', additional_kwargs={}, response_metadata={}), HumanMessage(content='this is some text', additional_kwargs={}, response_metadata={})])
Define the schema
Let's reuse the person schema from the extraction tutorial.
from typing import List, Optional

from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field


class Person(BaseModel):
    """Information about a person."""

    # ^ Doc-string for the entity Person.
    # This doc-string is sent to the LLM as the description of the schema Person,
    # and it can help to improve extraction results.

    # Note that:
    # 1. Each field is an `optional` -- this allows the model to decline to extract it!
    # 2. Each field has a `description` -- this description is used by the LLM.
    # Having a good description can help improve extraction results.
    name: Optional[str] = Field(..., description="The name of the person")
    hair_color: Optional[str] = Field(
        ..., description="The color of the person's hair if known"
    )
    height_in_meters: Optional[str] = Field(..., description="Height in METERs")


class Data(BaseModel):
    """Extracted data about people."""

    # Creates a model so that we can extract multiple entities.
    people: List[Person]
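As a quick sanity check (not part of the original tutorial), the schema can be instantiated and serialized directly; the model's tool-call arguments will have the same dict shape:

sample = Data(people=[Person(name="Fiona", hair_color=None, height_in_meters=None)])
print(sample.model_dump())
# {'people': [{'name': 'Fiona', 'hair_color': None, 'height_in_meters': None}]}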
Define reference examples
Examples can be defined as a list of input-output pairs.
Each example contains an example input text and an example output showing what should be extracted from the text.
This is a bit in the weeds, so feel free to skip.
The format of the example needs to match the API used (e.g., tool calling, JSON mode, etc.).
Here, the formatted examples will match the format expected by the tool-calling API since that's what we're using.
import uuid
from typing import Dict, List, TypedDict

from langchain_core.messages import (
    AIMessage,
    BaseMessage,
    HumanMessage,
    SystemMessage,
    ToolMessage,
)
from pydantic import BaseModel, Field


class Example(TypedDict):
    """A representation of an example consisting of text input and expected tool calls.

    For extraction, the tool calls are represented as instances of pydantic model.
    """

    input: str  # This is the example text
    tool_calls: List[BaseModel]  # Instances of pydantic model that should be extracted


def tool_example_to_messages(example: Example) -> List[BaseMessage]:
    """Convert an example into a list of messages that can be fed into an LLM.

    This code is an adapter that converts our example to a list of messages
    that can be fed into a chat model.

    The list of messages per example corresponds to:

    1) HumanMessage: contains the content from which content should be extracted.
    2) AIMessage: contains the extracted information from the model
    3) ToolMessage: contains confirmation to the model that the model requested a tool correctly.

    The ToolMessage is required because some of the chat models are hyper-optimized for agents
    rather than for an extraction use case.
    """
    messages: List[BaseMessage] = [HumanMessage(content=example["input"])]
    tool_calls = []
    for tool_call in example["tool_calls"]:
        tool_calls.append(
            {
                "id": str(uuid.uuid4()),
                "args": tool_call.dict(),
                # The name of the function right now corresponds
                # to the name of the pydantic model
                # This is implicit in the API right now,
                # and will be improved over time.
                "name": tool_call.__class__.__name__,
            },
        )
    messages.append(AIMessage(content="", tool_calls=tool_calls))
    # "tool_outputs" is an optional key (not declared on the Example TypedDict);
    # when absent, a generic confirmation is used for each tool call.
    tool_outputs = example.get("tool_outputs") or [
        "You have correctly called this tool."
    ] * len(tool_calls)
    for output, tool_call in zip(tool_outputs, tool_calls):
        messages.append(ToolMessage(content=output, tool_call_id=tool_call["id"]))
    return messages
Next, let's define our examples and then convert them into message format.
examples = [
    (
        "The ocean is vast and blue. It's more than 20,000 feet deep. There are many fish in it.",
        Data(people=[]),
    ),
    (
        "Fiona traveled far from France to Spain.",
        Data(people=[Person(name="Fiona", height_in_meters=None, hair_color=None)]),
    ),
]

messages = []

for text, tool_call in examples:
    messages.extend(
        tool_example_to_messages({"input": text, "tool_calls": [tool_call]})
    )
Let's test out the prompt:
example_prompt = prompt.invoke({"text": "this is some text", "examples": messages})

for message in example_prompt.messages:
    print(f"{message.type}: {message}")
system: content="You are an expert extraction algorithm. Only extract relevant information from the text. If you do not know the value of an attribute asked to extract, return null for the attribute's value." additional_kwargs={} response_metadata={}
human: content="The ocean is vast and blue. It's more than 20,000 feet deep. There are many fish in it." additional_kwargs={} response_metadata={}
ai: content='' additional_kwargs={} response_metadata={} tool_calls=[{'name': 'Data', 'args': {'people': []}, 'id': '240159b1-1405-4107-a07c-3c6b91b3d5b7', 'type': 'tool_call'}]
tool: content='You have correctly called this tool.' tool_call_id='240159b1-1405-4107-a07c-3c6b91b3d5b7'
human: content='Fiona traveled far from France to Spain.' additional_kwargs={} response_metadata={}
ai: content='' additional_kwargs={} response_metadata={} tool_calls=[{'name': 'Data', 'args': {'people': [{'name': 'Fiona', 'hair_color': None, 'height_in_meters': None}]}, 'id': '3fc521e4-d1d2-4c20-bf40-e3d72f1068da', 'type': 'tool_call'}]
tool: content='You have correctly called this tool.' tool_call_id='3fc521e4-d1d2-4c20-bf40-e3d72f1068da'
human: content='this is some text' additional_kwargs={} response_metadata={}
Create an extractor
Let's select an LLM. Since we are using tool calling, we will need a model that supports the tool-calling feature. See this table for available LLMs.
pip install -qU "langchain[google-genai]"
import getpass
import os

if not os.environ.get("GOOGLE_API_KEY"):
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter API key for Google Gemini: ")

from langchain.chat_models import init_chat_model

llm = init_chat_model("gemini-2.0-flash", model_provider="google_genai")
Following the extraction tutorial, we use the .with_structured_output method to structure model outputs according to the desired schema:
runnable = prompt | llm.with_structured_output(
    schema=Data,
    method="function_calling",
    include_raw=False,
)
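If you want to inspect the model's raw tool calls alongside the parsed output (helpful when debugging few-shot behavior), .with_structured_output also supports include_raw=True. A sketch:

# Debugging variant: include_raw=True returns a dict instead of a parsed object.
raw_runnable = prompt | llm.with_structured_output(
    schema=Data,
    method="function_calling",
    include_raw=True,
)
result = raw_runnable.invoke({"text": "this is some text", "examples": []})
# result["raw"] is the AIMessage, result["parsed"] is a Data instance (or None),
# and result["parsing_error"] holds any exception raised during parsing.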
Without examples 😿
Notice that even capable models can fail with a very simple test case!
for _ in range(5):
    text = "The solar system is large, but earth has only 1 moon."
    print(runnable.invoke({"text": text, "examples": []}))
people=[Person(name='earth', hair_color='null', height_in_meters='null')]
people=[Person(name='earth', hair_color='null', height_in_meters='null')]
people=[]
people=[Person(name='earth', hair_color='null', height_in_meters='null')]
people=[]
With examples 😻
Reference examples help to fix the failure!
for _ in range(5):
    text = "The solar system is large, but earth has only 1 moon."
    print(runnable.invoke({"text": text, "examples": messages}))
people=[]
people=[]
people=[]
people=[]
people=[]
Note that we can see the few-shot examples as tool calls in the LangSmith trace.
And we retain performance on this positive sample:
runnable.invoke(
    {
        "text": "My name is Harrison. My hair is black.",
        "examples": messages,
    }
)
Data(people=[Person(name='Harrison', hair_color='black', height_in_meters=None)])