
How to cache chat model responses

Prerequisites

This guide assumes familiarity with the following concepts:

LangChain provides an optional caching layer for chat models. This is useful for two main reasons:

  • It can save you money by reducing the number of API calls you make to the LLM provider, if you are often requesting the same completion multiple times. This is especially useful during app development.
  • It can speed up your application by reducing the number of API calls you make to the LLM provider.

This guide will walk you through how to enable this in your apps.

pip install -qU "langchain[google-genai]"
import getpass
import os

if not os.environ.get("GOOGLE_API_KEY"):
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter API key for Google Gemini: ")

from langchain.chat_models import init_chat_model

llm = init_chat_model("gemini-2.0-flash", model_provider="google_genai")
from langchain_core.globals import set_llm_cache
API Reference: set_llm_cache

In-memory cache

This is an ephemeral cache that stores model calls in memory. It is wiped when your environment restarts, and it is not shared across processes.

from langchain_core.caches import InMemoryCache

set_llm_cache(InMemoryCache())
API Reference: InMemoryCache
%%time
# The first time, it is not yet in cache, so it should take longer
llm.invoke("Tell me a joke")
CPU times: user 645 ms, sys: 214 ms, total: 859 ms
Wall time: 829 ms
AIMessage(content="Why don't scientists trust atoms?\n\nBecause they make up everything!", response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 11, 'total_tokens': 24}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-b6836bdd-8c30-436b-828f-0ac5fc9ab50e-0')
%%time
# The second time it is, so it goes faster
llm.invoke("Tell me a joke")
CPU times: user 822 µs, sys: 288 µs, total: 1.11 ms
Wall time: 1.06 ms
AIMessage(content="Why don't scientists trust atoms?\n\nBecause they make up everything!", response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 11, 'total_tokens': 24}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-b6836bdd-8c30-436b-828f-0ac5fc9ab50e-0')

SQLite cache

This cache implementation uses a SQLite database to store responses, so it persists across process restarts.

!rm .langchain.db
# We can do the same thing with a SQLite cache
from langchain_community.cache import SQLiteCache

set_llm_cache(SQLiteCache(database_path=".langchain.db"))
API Reference: SQLiteCache
%%time
# The first time, it is not yet in cache, so it should take longer
llm.invoke("Tell me a joke")
CPU times: user 9.91 ms, sys: 7.68 ms, total: 17.6 ms
Wall time: 657 ms
AIMessage(content='Why did the scarecrow win an award? Because he was outstanding in his field!', response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 11, 'total_tokens': 28}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-39d9e1e8-7766-4970-b1d8-f50213fd94c5-0')
%%time
# The second time it is, so it goes faster
llm.invoke("Tell me a joke")
CPU times: user 52.2 ms, sys: 60.5 ms, total: 113 ms
Wall time: 127 ms
AIMessage(content='Why did the scarecrow win an award? Because he was outstanding in his field!', id='run-39d9e1e8-7766-4970-b1d8-f50213fd94c5-0')
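Because responses are persisted in the .langchain.db file, they survive process restarts. Below is a minimal sketch of reusing the cache from a fresh Python process, assuming the same model configuration as above (the cache key includes the model's parameters, so a different model or settings would not hit the cache):

# In a fresh process, recreate the model and point the global cache
# at the same SQLite file used earlier (assumes GOOGLE_API_KEY is set).
from langchain.chat_models import init_chat_model
from langchain_core.globals import set_llm_cache
from langchain_community.cache import SQLiteCache

llm = init_chat_model("gemini-2.0-flash", model_provider="google_genai")
set_llm_cache(SQLiteCache(database_path=".langchain.db"))

# This prompt was cached in the earlier session, so the response comes
# from .langchain.db without a new API call.
llm.invoke("Tell me a joke")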

Next steps

You've now learned how to cache model responses to save time and money.

Next, check out the other how-to guides on chat models in this section, like how to get a model to return structured output or how to create your own custom chat model.