RunPod LLM
Get started with RunPod LLMs.
Overview
This guide covers how to use the LangChain RunPod LLM class to interact with text generation models hosted on RunPod Serverless.
Setup
- Install the package: pip install -qU langchain-runpod
- Deploy an LLM endpoint: Follow the setup steps in the RunPod Provider Guide to deploy a compatible text generation endpoint on RunPod Serverless and obtain its Endpoint ID.
- Set environment variables: Make sure RUNPOD_API_KEY and RUNPOD_ENDPOINT_ID are set.
import getpass
import os
# Make sure environment variables are set (or pass them directly to RunPod)
if "RUNPOD_API_KEY" not in os.environ:
os.environ["RUNPOD_API_KEY"] = getpass.getpass("Enter your RunPod API Key: ")
if "RUNPOD_ENDPOINT_ID" not in os.environ:
os.environ["RUNPOD_ENDPOINT_ID"] = input("Enter your RunPod Endpoint ID: ")
Instantiation
Initialize the RunPod class. You can pass model-specific parameters via model_kwargs and configure polling behavior.
from langchain_runpod import RunPod
llm = RunPod(
    # runpod_endpoint_id can be passed here if not set in env
    model_kwargs={
        "max_new_tokens": 256,
        "temperature": 0.6,
        "top_k": 50,
        # Add other parameters supported by your endpoint handler
    },
    # Optional: Adjust polling
    # poll_interval=0.3,
    # max_polling_attempts=100
)
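As the comment above notes, the endpoint ID can also be passed at construction time instead of relying on the environment variable. A minimal sketch, where the endpoint ID string is a placeholder for your own:

llm_explicit = RunPod(
    runpod_endpoint_id="your-endpoint-id",  # placeholder; use the ID from your RunPod console
    model_kwargs={"max_new_tokens": 128},
)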
Invocation
Use the standard LangChain .invoke() and .ainvoke() methods to call the model. Streaming is also supported via .stream() and .astream() (simulated by polling the RunPod /stream endpoint).
prompt = "Write a tagline for an ice cream shop on the moon."
# Invoke (Sync)
try:
    response = llm.invoke(prompt)
    print("--- Sync Invoke Response ---")
    print(response)
except Exception as e:
    print(
        f"Error invoking LLM: {e}. Ensure endpoint ID/API key are correct and endpoint is active/compatible."
    )
# Stream (Sync, simulated via polling /stream)
print("\n--- Sync Stream Response ---")
try:
    for chunk in llm.stream(prompt):
        print(chunk, end="", flush=True)
    print()  # Newline
except Exception as e:
    print(
        f"\nError streaming LLM: {e}. Ensure endpoint handler supports streaming output format."
    )
Async Usage
# AInvoke (Async)
try:
    async_response = await llm.ainvoke(prompt)
    print("--- Async Invoke Response ---")
    print(async_response)
except Exception as e:
    print(f"Error invoking LLM asynchronously: {e}.")
# AStream (Async)
print("\n--- Async Stream Response ---")
try:
    async for chunk in llm.astream(prompt):
        print(chunk, end="", flush=True)
    print()  # Newline
except Exception as e:
    print(
        f"\nError streaming LLM asynchronously: {e}. Ensure endpoint handler supports streaming output format."
    )
Chaining
The LLM integrates seamlessly into LangChain Expression Language (LCEL) chains.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
# Assumes 'llm' variable is instantiated from the 'Instantiation' cell
prompt_template = PromptTemplate.from_template("Tell me a joke about {topic}")
parser = StrOutputParser()
chain = prompt_template | llm | parser
try:
    chain_response = chain.invoke({"topic": "bears"})
    print("--- Chain Response ---")
    print(chain_response)
except Exception as e:
    print(f"Error running chain: {e}")
# Async chain
try:
    async_chain_response = await chain.ainvoke({"topic": "robots"})
    print("--- Async Chain Response ---")
    print(async_chain_response)
except Exception as e:
    print(f"Error running async chain: {e}")
API Reference: StrOutputParser | PromptTemplate
Endpoint Considerations
- Input: The endpoint handler should expect the prompt string within {"input": {"prompt": "...", ...}} (see the sketch after this list).
- Output: The handler should return the generated text within the "output" key of the final status response (e.g. {"output": "Generated text..."} or {"output": {"text": "..."}}).
- Streaming: For simulated streaming via the /stream endpoint, the handler must populate the "stream" key in its status response with a list of dictionaries, e.g. [{"output": "token1"}, {"output": "token2"}].
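To make these expectations concrete, here is a minimal sketch of a RunPod Serverless handler that returns output in the shape described above. It assumes the runpod Python SDK; generate_text is a hypothetical stand-in for your actual model call, and the kwargs it receives are whatever you passed via model_kwargs.

import runpod  # RunPod Serverless SDK


def generate_text(prompt: str, **params) -> str:
    # Hypothetical placeholder for your model call (e.g. a transformers pipeline).
    return f"Echo: {prompt}"


def handler(job):
    # The RunPod LLM class sends {"input": {"prompt": "...", ...model_kwargs}}.
    job_input = dict(job["input"])
    prompt = job_input.pop("prompt")
    # Returning a string makes the final status response look like
    # {"output": "generated text..."}, which the LLM class can parse.
    return generate_text(prompt, **job_input)


# For streaming, a generator handler that yields chunks populates the
# "stream" key instead (see the RunPod Serverless docs).
runpod.serverless.start({"handler": handler})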
API Reference
For detailed documentation of the RunPod LLM class, its parameters, and methods, refer to the source code or the generated API reference (if available).
Link to source code: https://github.com/runpod/langchain-runpod/blob/main/langchain_runpod/llms.py
Related
- LLM conceptual guide
- LLM how-to guides