
ChatAnthropic

This notebook provides a quick overview for getting started with Anthropic chat models. For detailed documentation of all ChatAnthropic features and configurations, head to the API reference.

Anthropic offers a number of chat models. You can find information about their latest models, as well as costs, context windows, and supported input types, in the Anthropic docs.

AWS Bedrock 和 Google VertexAI

Note that certain Anthropic models can also be accessed via AWS Bedrock and Google VertexAI. See the ChatBedrock and ChatVertexAI integrations to use Anthropic models via these services.
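For instance, a minimal sketch of going through Bedrock instead, assuming the langchain-aws package is installed and your AWS credentials are configured (the model ID below is illustrative and must be available in your region):

import os

from langchain_aws import ChatBedrockConverse

# Assumes AWS credentials and region are configured in the environment;
# the model ID is illustrative.
os.environ.setdefault("AWS_DEFAULT_REGION", "us-east-1")
llm = ChatBedrockConverse(model="anthropic.claude-3-5-sonnet-20240620-v1:0")
llm.invoke("Hello!")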

Overview

Integration details

| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
| ChatAnthropic | langchain-anthropic | ❌ | beta | ✅ | PyPI - Downloads | PyPI - Version |

Model features

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |

Setup

To access Anthropic models you'll need to create an Anthropic account, get an API key, and install the langchain-anthropic integration package.

Credentials

Head to https://console.anthropic.com/ to sign up for Anthropic and generate an API key. Once you've done this, set the ANTHROPIC_API_KEY environment variable:

import getpass
import os

if "ANTHROPIC_API_KEY" not in os.environ:
os.environ["ANTHROPIC_API_KEY"] = getpass.getpass("Enter your Anthropic API key: ")

To enable automated tracing of your model calls, set your LangSmith API key:

# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"

Installation

The LangChain Anthropic integration lives in the langchain-anthropic package:

%pip install -qU langchain-anthropic
This guide requires langchain-anthropic>=0.3.13.

Instantiation

Now we can instantiate our model object and generate chat completions:

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-3-5-sonnet-20240620",
    temperature=0,
    max_tokens=1024,
    timeout=None,
    max_retries=2,
    # other params...
)
API Reference:ChatAnthropic

Invocation

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg
AIMessage(content="J'adore la programmation.", response_metadata={'id': 'msg_018Nnu76krRPq8HvgKLW4F8T', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 29, 'output_tokens': 11}}, id='run-57e9295f-db8a-48dc-9619-babd2bedd891-0', usage_metadata={'input_tokens': 29, 'output_tokens': 11, 'total_tokens': 40})
print(ai_msg.content)
J'adore la programmation.
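ChatAnthropic also supports token-level streaming via the standard .stream method. A minimal sketch reusing the messages above:

# Stream the response token by token; .text() extracts the text
# content from each chunk.
for chunk in llm.stream(messages):
    print(chunk.text(), end="")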

Chaining

We can chain our model with a prompt template like so:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
API Reference:ChatPromptTemplate
AIMessage(content="Here's the German translation:\n\nIch liebe Programmieren.", response_metadata={'id': 'msg_01GhkRtQZUkA5Ge9hqmD8HGY', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 23, 'output_tokens': 18}}, id='run-da5906b4-b200-4e08-b81a-64d4453643b6-0', usage_metadata={'input_tokens': 23, 'output_tokens': 18, 'total_tokens': 41})

Content blocks

Content from a single Anthropic AI message can either be a single string or a list of content blocks. For example, when an Anthropic model invokes a tool, the tool invocation is part of the message content (as well as being exposed in the standardized AIMessage.tool_calls):

from pydantic import BaseModel, Field


class GetWeather(BaseModel):
    """Get the current weather in a given location"""

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


llm_with_tools = llm.bind_tools([GetWeather])
ai_msg = llm_with_tools.invoke("Which city is hotter today: LA or NY?")
ai_msg.content
[{'text': "To answer this question, we'll need to check the current weather in both Los Angeles (LA) and New York (NY). I'll use the GetWeather function to retrieve this information for both cities.",
'type': 'text'},
{'id': 'toolu_01Ddzj5PkuZkrjF4tafzu54A',
'input': {'location': 'Los Angeles, CA'},
'name': 'GetWeather',
'type': 'tool_use'},
{'id': 'toolu_012kz4qHZQqD4qg8sFPeKqpP',
'input': {'location': 'New York, NY'},
'name': 'GetWeather',
'type': 'tool_use'}]
ai_msg.tool_calls
[{'name': 'GetWeather',
'args': {'location': 'Los Angeles, CA'},
'id': 'toolu_01Ddzj5PkuZkrjF4tafzu54A'},
{'name': 'GetWeather',
'args': {'location': 'New York, NY'},
'id': 'toolu_012kz4qHZQqD4qg8sFPeKqpP'}]
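To complete the tool-calling loop, tool results can be passed back as ToolMessage objects, after which the model can generate a final answer. A minimal sketch (the weather reading is a placeholder, not real tool output):

from langchain_core.messages import ToolMessage

follow_up = [
    ("human", "Which city is hotter today: LA or NY?"),
    ai_msg,  # the AIMessage containing the tool calls above
]
for tool_call in ai_msg.tool_calls:
    # Placeholder result; a real application would call a weather API here.
    follow_up.append(
        ToolMessage("It's 75°F and sunny.", tool_call_id=tool_call["id"])
    )

llm_with_tools.invoke(follow_up)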

Multimodal

Claude supports image and PDF inputs as content blocks, both in Anthropic's native format (see the docs on vision and PDF support) as well as in LangChain's standard format.
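For example, a base64-encoded image can be passed in Anthropic's native format using the llm instantiated above (a minimal sketch; the file path is illustrative):

import base64

# Read and base64-encode a local image; the path is illustrative.
with open("/path/to/image.png", "rb") as f:
    image_data = base64.b64encode(f.read()).decode("utf-8")

input_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        {
            "type": "image",
            "source": {
                "type": "base64",
                "media_type": "image/png",
                "data": image_data,
            },
        },
    ],
}
llm.invoke([input_message])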

Files API

Claude also supports interactions with files through its managed Files API. See the examples below.

The Files API can also be used to upload files to a container for use with Claude's built-in code execution tool. See the code execution section below for details.

Images
# Upload image

import anthropic

client = anthropic.Anthropic()
file = client.beta.files.upload(
    # Supports image/jpeg, image/png, image/gif, image/webp
    file=("image.png", open("/path/to/image.png", "rb"), "image/png"),
)
image_file_id = file.id


# Run inference
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-sonnet-4-20250514",
    betas=["files-api-2025-04-14"],
)

input_message = {
    "role": "user",
    "content": [
        {
            "type": "text",
            "text": "Describe this image.",
        },
        {
            "type": "image",
            "source": {
                "type": "file",
                "file_id": image_file_id,
            },
        },
    ],
}
llm.invoke([input_message])
API Reference:ChatAnthropic
PDFs
# Upload document

import anthropic

client = anthropic.Anthropic()
file = client.beta.files.upload(
    file=("document.pdf", open("/path/to/document.pdf", "rb"), "application/pdf"),
)
pdf_file_id = file.id


# Run inference
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-sonnet-4-20250514",
    betas=["files-api-2025-04-14"],
)

input_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this document."},
        {"type": "document", "source": {"type": "file", "file_id": pdf_file_id}},
    ],
}
llm.invoke([input_message])
API Reference:ChatAnthropic

Extended thinking

Claude 3.7 Sonnet supports an extended thinking feature, which will output the step-by-step reasoning process that led to its final answer.

To use it, specify the thinking parameter when initializing ChatAnthropic. It can also be passed in as a kwarg during invocation.

You will need to specify a token budget to use this feature. See the usage example below:

import json

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-3-7-sonnet-latest",
    max_tokens=5000,
    thinking={"type": "enabled", "budget_tokens": 2000},
)

response = llm.invoke("What is the cube root of 50.653?")
print(json.dumps(response.content, indent=2))
API Reference:ChatAnthropic
[
{
"signature": "ErUBCkYIARgCIkCx7bIPj35jGPHpoVOB2y5hvPF8MN4lVK75CYGftmVNlI4axz2+bBbSexofWsN1O/prwNv8yPXnIXQmwT6zrJsKEgwJzvks0yVRZtaGBScaDOm9xcpOxbuhku1zViIw9WDgil/KZL8DsqWrhVpC6TzM0RQNCcsHcmgmyxbgG9g8PR0eJGLxCcGoEw8zMQu1Kh1hQ1/03hZ2JCOgigpByR9aNPTwwpl64fQUe6WwIw==",
"thinking": "To find the cube root of 50.653, I need to find the value of $x$ such that $x^3 = 50.653$.\n\nI can try to estimate this first. \n$3^3 = 27$\n$4^3 = 64$\n\nSo the cube root of 50.653 will be somewhere between 3 and 4, but closer to 4.\n\nLet me try to compute this more precisely. I can use the cube root function:\n\ncube root of 50.653 = 50.653^(1/3)\n\nLet me calculate this:\n50.653^(1/3) \u2248 3.6998\n\nLet me verify:\n3.6998^3 \u2248 50.6533\n\nThat's very close to 50.653, so I'm confident that the cube root of 50.653 is approximately 3.6998.\n\nActually, let me compute this more precisely:\n50.653^(1/3) \u2248 3.69981\n\nLet me verify once more:\n3.69981^3 \u2248 50.652998\n\nThat's extremely close to 50.653, so I'll say that the cube root of 50.653 is approximately 3.69981.",
"type": "thinking"
},
{
"text": "The cube root of 50.653 is approximately 3.6998.\n\nTo verify: 3.6998\u00b3 = 50.6530, which is very close to our original number.",
"type": "text"
}
]

Prompt caching

Anthropic supports caching of elements of your prompts, including messages, tool definitions, tool results, images, and documents. This allows you to re-use large documents, instructions, few-shot examples, and other data to reduce latency and costs.

To enable caching on an element of a prompt, mark its associated content block with the cache_control key. See the examples below:

Messages

import requests
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-7-sonnet-20250219")

# Pull LangChain readme
get_response = requests.get(
    "https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md"
)
readme = get_response.text

messages = [
    {
        "role": "system",
        "content": [
            {
                "type": "text",
                "text": "You are a technology expert.",
            },
            {
                "type": "text",
                "text": f"{readme}",
                "cache_control": {"type": "ephemeral"},
            },
        ],
    },
    {
        "role": "user",
        "content": "What's LangChain, according to its README?",
    },
]

response_1 = llm.invoke(messages)
response_2 = llm.invoke(messages)

usage_1 = response_1.usage_metadata["input_token_details"]
usage_2 = response_2.usage_metadata["input_token_details"]

print(f"First invocation:\n{usage_1}")
print(f"\nSecond:\n{usage_2}")
API Reference:ChatAnthropic
First invocation:
{'cache_read': 0, 'cache_creation': 1458}

Second:
{'cache_read': 1458, 'cache_creation': 0}
Extended caching

The cache lifetime is 5 minutes by default. If this is too short, you can apply one-hour caching by enabling the "extended-cache-ttl-2025-04-11" beta header:

llm = ChatAnthropic(
    model="claude-3-7-sonnet-20250219",
    betas=["extended-cache-ttl-2025-04-11"],
)

and specifying "cache_control": {"type": "ephemeral", "ttl": "1h"} on the relevant content blocks.
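For example, a minimal sketch that reuses the readme pulled above, caching the system prompt for one hour:

messages = [
    {
        "role": "system",
        "content": [
            {
                "type": "text",
                "text": f"{readme}",
                # Cache this block with a one-hour TTL
                "cache_control": {"type": "ephemeral", "ttl": "1h"},
            },
        ],
    },
    {
        "role": "user",
        "content": "What's LangChain, according to its README?",
    },
]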

Details of cached token counts will be included in the InputTokenDetails of the response's usage_metadata:

response = llm.invoke(messages)
response.usage_metadata
{
    "input_tokens": 1500,
    "output_tokens": 200,
    "total_tokens": 1700,
    "input_token_details": {
        "cache_read": 0,
        "cache_creation": 1000,
        "ephemeral_1h_input_tokens": 750,
        "ephemeral_5m_input_tokens": 250,
    }
}

Tools

from langchain_anthropic import convert_to_anthropic_tool
from langchain_core.tools import tool

# For demonstration purposes, we artificially expand the
# tool description.
description = (
    f"Get the weather at a location. By the way, check out this readme: {readme}"
)


@tool(description=description)
def get_weather(location: str) -> str:
    return "It's sunny."


# Enable caching on the tool
weather_tool = convert_to_anthropic_tool(get_weather)
weather_tool["cache_control"] = {"type": "ephemeral"}

llm = ChatAnthropic(model="claude-3-7-sonnet-20250219")
llm_with_tools = llm.bind_tools([weather_tool])
query = "What's the weather in San Francisco?"

response_1 = llm_with_tools.invoke(query)
response_2 = llm_with_tools.invoke(query)

usage_1 = response_1.usage_metadata["input_token_details"]
usage_2 = response_2.usage_metadata["input_token_details"]

print(f"First invocation:\n{usage_1}")
print(f"\nSecond:\n{usage_2}")
First invocation:
{'cache_read': 0, 'cache_creation': 1809}

Second:
{'cache_read': 1809, 'cache_creation': 0}

Incremental caching in conversational applications

Prompt caching can be used in multi-turn conversations to maintain context from earlier messages without redundant processing.

We can enable incremental caching by marking the final message with cache_control. Claude will automatically use the longest previously-cached prefix for follow-up messages.

Below, we implement a simple chatbot that incorporates this feature. We follow the LangChain chatbot tutorial, but add a custom reducer that automatically marks the last content block in each user message with cache_control. See below:

import requests
from langchain_anthropic import ChatAnthropic
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, StateGraph, add_messages
from typing_extensions import Annotated, TypedDict

llm = ChatAnthropic(model="claude-3-7-sonnet-20250219")

# Pull LangChain readme
get_response = requests.get(
    "https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md"
)
readme = get_response.text


def messages_reducer(left: list, right: list) -> list:
    # Update last user message
    for i in range(len(right) - 1, -1, -1):
        if right[i].type == "human":
            right[i].content[-1]["cache_control"] = {"type": "ephemeral"}
            break

    return add_messages(left, right)


class State(TypedDict):
    messages: Annotated[list, messages_reducer]


workflow = StateGraph(state_schema=State)


# Define the function that calls the model
def call_model(state: State):
    response = llm.invoke(state["messages"])
    return {"messages": [response]}


# Define the (single) node in the graph
workflow.add_edge(START, "model")
workflow.add_node("model", call_model)

# Add memory
memory = MemorySaver()
app = workflow.compile(checkpointer=memory)

from langchain_core.messages import HumanMessage

config = {"configurable": {"thread_id": "abc123"}}

query = "Hi! I'm Bob."

input_message = HumanMessage([{"type": "text", "text": query}])
output = app.invoke({"messages": [input_message]}, config)
output["messages"][-1].pretty_print()
print(f'\n{output["messages"][-1].usage_metadata["input_token_details"]}')
API Reference:HumanMessage
================================== Ai Message ==================================

Hello, Bob! It's nice to meet you. How are you doing today? Is there something I can help you with?

{'cache_read': 0, 'cache_creation': 0}
query = f"Check out this readme: {readme}"

input_message = HumanMessage([{"type": "text", "text": query}])
output = app.invoke({"messages": [input_message]}, config)
output["messages"][-1].pretty_print()
print(f'\n{output["messages"][-1].usage_metadata["input_token_details"]}')
================================== Ai Message ==================================

I can see you've shared the README from the LangChain GitHub repository. This is the documentation for LangChain, which is a popular framework for building applications powered by Large Language Models (LLMs). Here's a summary of what the README contains:

LangChain is:
- A framework for developing LLM-powered applications
- Helps chain together components and integrations to simplify AI application development
- Provides a standard interface for models, embeddings, vector stores, etc.

Key features/benefits:
- Real-time data augmentation (connect LLMs to diverse data sources)
- Model interoperability (swap models easily as needed)
- Large ecosystem of integrations

The LangChain ecosystem includes:
- LangSmith - For evaluations and observability
- LangGraph - For building complex agents with customizable architecture
- LangGraph Platform - For deployment and scaling of agents

The README also mentions installation instructions (`pip install -U langchain`) and links to various resources including tutorials, how-to guides, conceptual guides, and API references.

Is there anything specific about LangChain you'd like to know more about, Bob?

{'cache_read': 0, 'cache_creation': 1498}
query = "What was my name again?"

input_message = HumanMessage([{"type": "text", "text": query}])
output = app.invoke({"messages": [input_message]}, config)
output["messages"][-1].pretty_print()
print(f'\n{output["messages"][-1].usage_metadata["input_token_details"]}')
================================== Ai Message ==================================

Your name is Bob. You introduced yourself at the beginning of our conversation.

{'cache_read': 1498, 'cache_creation': 269}

LangSmith trace 中,切换“raw output”将显示发送到聊天模型的准确消息,包括 cache_control 键。

Token-efficient tool use

Anthropic supports a (beta) token-efficient tool use feature. To use it, specify the relevant beta header when instantiating the model.

from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool

llm = ChatAnthropic(
    model="claude-3-7-sonnet-20250219",
    temperature=0,
    model_kwargs={
        "extra_headers": {"anthropic-beta": "token-efficient-tools-2025-02-19"}
    },
)


@tool
def get_weather(location: str) -> str:
    """Get the weather at a location."""
    return "It's sunny."


llm_with_tools = llm.bind_tools([get_weather])
response = llm_with_tools.invoke("What's the weather in San Francisco?")
print(response.tool_calls)
print(f'\nTotal tokens: {response.usage_metadata["total_tokens"]}')
API Reference:ChatAnthropic | tool
[{'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'toolu_01EoeE1qYaePcmNbUvMsWtmA', 'type': 'tool_call'}]

Total tokens: 408

Citations

Anthropic supports a citations feature that lets Claude attach context to its answers based on source documents supplied by the user. When document or search result content blocks with "citations": {"enabled": True} are included in a query, Claude may generate citations in its response.

Simple example

In this example we pass a plain text document. In the background, Claude automatically chunks the input text into sentences, which are used when generating citations.

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-haiku-latest")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "document",
                "source": {
                    "type": "text",
                    "media_type": "text/plain",
                    "data": "The grass is green. The sky is blue.",
                },
                "title": "My Document",
                "context": "This is a trustworthy document.",
                "citations": {"enabled": True},
            },
            {"type": "text", "text": "What color is the grass and sky?"},
        ],
    }
]
response = llm.invoke(messages)
response.content
API Reference:ChatAnthropic
[{'text': 'Based on the document, ', 'type': 'text'},
{'text': 'the grass is green',
'type': 'text',
'citations': [{'type': 'char_location',
'cited_text': 'The grass is green. ',
'document_index': 0,
'document_title': 'My Document',
'start_char_index': 0,
'end_char_index': 20}]},
{'text': ', and ', 'type': 'text'},
{'text': 'the sky is blue',
'type': 'text',
'citations': [{'type': 'char_location',
'cited_text': 'The sky is blue.',
'document_index': 0,
'document_title': 'My Document',
'start_char_index': 20,
'end_char_index': 36}]},
{'text': '.', 'type': 'text'}]

In tool results (agentic RAG)

Requires langchain-anthropic>=0.3.17.

Claude supports a search_result content block representing citable results from queries against a knowledge base or other custom source. These content blocks can be passed to Claude both as top-level content (as in the example above) and within a tool result. This allows Claude to cite elements of its response using the results of a tool call.

To pass search results in response to tool calls, define a tool that returns a list of search_result content blocks in Anthropic's native format. For example:

def retrieval_tool(query: str) -> list[dict]:
    """Access my knowledge base."""

    # Run a search (e.g., with a LangChain vector store)
    results = vector_store.similarity_search(query=query, k=2)

    # Package the results into search_result content blocks
    return [
        {
            "type": "search_result",
            # Customize fields as desired, using document metadata or otherwise
            "title": "My Document Title",
            "source": "Source description or provenance",
            "citations": {"enabled": True},
            "content": [{"type": "text", "text": doc.page_content}],
        }
        for doc in results
    ]

We will also need to specify the search-results-2025-06-09 beta when instantiating ChatAnthropic. You can see this in the end-to-end example below.

End-to-end example with LangGraph

Here we demonstrate an end-to-end example in which we populate a LangChain vector store with sample documents and equip Claude with a tool that queries those documents. The tool here takes a search query and a category string literal, but any valid tool signature can be used.

from typing import Literal

from langchain.chat_models import init_chat_model
from langchain.embeddings import init_embeddings
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.prebuilt import create_react_agent


# Set up vector store
embeddings = init_embeddings("openai:text-embedding-3-small")
vector_store = InMemoryVectorStore(embeddings)

document_1 = Document(
    id="1",
    page_content=(
        "To request vacation days, submit a leave request form through the "
        "HR portal. Approval will be sent by email."
    ),
    metadata={
        "category": "HR Policy",
        "doc_title": "Leave Policy",
        "provenance": "Leave Policy - page 1",
    },
)
document_2 = Document(
    id="2",
    page_content="Managers will review vacation requests within 3 business days.",
    metadata={
        "category": "HR Policy",
        "doc_title": "Leave Policy",
        "provenance": "Leave Policy - page 2",
    },
)
document_3 = Document(
    id="3",
    page_content=(
        "Employees with over 6 months tenure are eligible for 20 paid vacation days "
        "per year."
    ),
    metadata={
        "category": "Benefits Policy",
        "doc_title": "Benefits Guide 2025",
        "provenance": "Benefits Policy - page 1",
    },
)

documents = [document_1, document_2, document_3]
vector_store.add_documents(documents=documents)


# Define tool
async def retrieval_tool(
    query: str, category: Literal["HR Policy", "Benefits Policy"]
) -> list[dict]:
    """Access my knowledge base."""

    def _filter_function(doc: Document) -> bool:
        return doc.metadata.get("category") == category

    results = vector_store.similarity_search(
        query=query, k=2, filter=_filter_function
    )

    return [
        {
            "type": "search_result",
            "title": doc.metadata["doc_title"],
            "source": doc.metadata["provenance"],
            "citations": {"enabled": True},
            "content": [{"type": "text", "text": doc.page_content}],
        }
        for doc in results
    ]



# Create agent
llm = init_chat_model(
    "anthropic:claude-3-5-haiku-latest",
    betas=["search-results-2025-06-09"],
)

checkpointer = InMemorySaver()
agent = create_react_agent(llm, [retrieval_tool], checkpointer=checkpointer)


# Invoke on a query
config = {"configurable": {"thread_id": "session_1"}}

input_message = {
    "role": "user",
    "content": "How do I request vacation days?",
}
async for step in agent.astream(
    {"messages": [input_message]},
    config,
    stream_mode="values",
):
    step["messages"][-1].pretty_print()

Using with text splitters

Anthropic also lets you specify your own splits using custom document types. LangChain text splitters can be used to generate meaningful splits for this purpose. See the example below, where we split the LangChain README (a markdown document) and pass it to Claude as context:

import requests
from langchain_anthropic import ChatAnthropic
from langchain_text_splitters import MarkdownTextSplitter


def format_to_anthropic_documents(documents: list[str]):
    return {
        "type": "document",
        "source": {
            "type": "content",
            "content": [{"type": "text", "text": document} for document in documents],
        },
        "citations": {"enabled": True},
    }


# Pull readme
get_response = requests.get(
    "https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md"
)
readme = get_response.text

# Split into chunks
splitter = MarkdownTextSplitter(
    chunk_overlap=0,
    chunk_size=50,
)
documents = splitter.split_text(readme)

# Construct message
message = {
    "role": "user",
    "content": [
        format_to_anthropic_documents(documents),
        {"type": "text", "text": "Give me a link to LangChain's tutorials."},
    ],
}

# Query LLM
llm = ChatAnthropic(model="claude-3-5-haiku-latest")
response = llm.invoke([message])

response.content
[{'text': "You can find LangChain's tutorials at https://python.langchain.com/docs/tutorials/\n\nThe tutorials section is recommended for those looking to build something specific or who prefer a hands-on learning approach. It's considered the best place to get started with LangChain.",
'type': 'text',
'citations': [{'type': 'content_block_location',
'cited_text': "[Tutorials](https://python.langchain.com/docs/tutorials/):If you're looking to build something specific orare more of a hands-on learner, check out ourtutorials. This is the best place to get started.",
'document_index': 0,
'document_title': None,
'start_block_index': 243,
'end_block_index': 248}]}]

Built-in tools

Anthropic supports a variety of built-in tools, which can be bound to the model in the usual way. Claude will generate tool calls adhering to its internal schema for the tool:

Web search

Claude can use a web search tool to run searches and ground its responses with citations.

The web search tool is supported since langchain-anthropic>=0.3.13.
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-sonnet-latest")

tool = {"type": "web_search_20250305", "name": "web_search", "max_uses": 3}
llm_with_tools = llm.bind_tools([tool])

response = llm_with_tools.invoke("How do I update a web app to TypeScript 5.5?")
API Reference:ChatAnthropic

Code execution

Claude can use a code execution tool to execute Python code in a sandboxed environment.

Code execution is supported since langchain-anthropic>=0.3.14.
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-sonnet-4-20250514",
    betas=["code-execution-2025-05-22"],
)

tool = {"type": "code_execution_20250522", "name": "code_execution"}
llm_with_tools = llm.bind_tools([tool])

response = llm_with_tools.invoke(
    "Calculate the mean and standard deviation of [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]"
)
API Reference:ChatAnthropic
Use with the Files API

Using the Files API, Claude can write code to access files for data analysis and other purposes. See the example below:

# Upload file

import anthropic

client = anthropic.Anthropic()
file = client.beta.files.upload(
    file=open("/path/to/sample_data.csv", "rb"),
)
file_id = file.id


# Run inference
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-sonnet-4-20250514",
    betas=["code-execution-2025-05-22"],
)

tool = {"type": "code_execution_20250522", "name": "code_execution"}
llm_with_tools = llm.bind_tools([tool])

input_message = {
    "role": "user",
    "content": [
        {
            "type": "text",
            "text": "Please plot these data and tell me what you see.",
        },
        {
            "type": "container_upload",
            "file_id": file_id,
        },
    ],
}
llm_with_tools.invoke([input_message])
API Reference:ChatAnthropic

Note that Claude may generate files as part of its code execution. You can use the Files API to access these files:

# Take all file outputs for demonstration purposes
file_ids = []
for block in response.content:
    if block["type"] == "code_execution_tool_result":
        file_ids.extend(
            content["file_id"]
            for content in block.get("content", {}).get("content", [])
            if "file_id" in content
        )

for i, file_id in enumerate(file_ids):
    file_content = client.beta.files.download(file_id)
    file_content.write_to_file(f"/path/to/file_{i}.png")

Remote MCP

Claude can use an MCP connector tool for model-generated calls to remote MCP servers.

Remote MCP is supported since langchain-anthropic>=0.3.14.
from langchain_anthropic import ChatAnthropic

mcp_servers = [
    {
        "type": "url",
        "url": "https://mcp.deepwiki.com/mcp",
        "name": "deepwiki",
        "tool_configuration": {  # optional configuration
            "enabled": True,
            "allowed_tools": ["ask_question"],
        },
        "authorization_token": "PLACEHOLDER",  # optional authorization
    }
]

llm = ChatAnthropic(
    model="claude-sonnet-4-20250514",
    betas=["mcp-client-2025-04-04"],
    mcp_servers=mcp_servers,
)

response = llm.invoke(
    "What transport protocols does the 2025-03-26 version of the MCP "
    "spec (modelcontextprotocol/modelcontextprotocol) support?"
)
API Reference:ChatAnthropic

Text editor

The text editor tool can be used to view and modify text files. See the documentation here for details.

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-7-sonnet-20250219")

tool = {"type": "text_editor_20250124", "name": "str_replace_editor"}
llm_with_tools = llm.bind_tools([tool])

response = llm_with_tools.invoke(
    "There's a syntax error in my primes.py file. Can you help me fix it?"
)
print(response.text())
response.tool_calls
API Reference:ChatAnthropic
I'd be happy to help you fix the syntax error in your primes.py file. First, let's look at the current content of the file to identify the error.
[{'name': 'str_replace_editor',
'args': {'command': 'view', 'path': '/repo/primes.py'},
'id': 'toolu_01VdNgt1YV7kGfj9LFLm6HyQ',
'type': 'tool_call'}]
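The view tool call can then be answered with the file's contents in a ToolMessage, after which Claude will typically propose a fix. A minimal sketch (the file content below is an illustrative stand-in for primes.py, not real project code):

from langchain_core.messages import ToolMessage

# Illustrative file content containing a syntax error (missing colon)
file_content = "def is_prime(n)\n    return n > 1"

messages = [
    ("human", "There's a syntax error in my primes.py file. Can you help me fix it?"),
    response,  # the AIMessage containing the `view` tool call
    ToolMessage(file_content, tool_call_id=response.tool_calls[0]["id"]),
]
next_response = llm_with_tools.invoke(messages)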

API reference

For detailed documentation of all ChatAnthropic features and configurations, head to the API reference: https://python.langchain.com/api_reference/anthropic/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html