Fireworks
Fireworks accelerates generative AI product development through an innovative AI experimentation and production platform.
This example shows how to use LangChain to interact with Fireworks models.
Overview
Integration details
| Class | Package | Local | Serializable | JS support |
|---|---|---|---|---|
| Fireworks | langchain_fireworks | ❌ | ❌ | ✅ |
Setup
Credentials
Sign in to Fireworks AI to get an API key to access our models, and make sure it is set as the FIREWORKS_API_KEY environment variable.
Set up your model using a model ID. If no model is set, the default model is fireworks-llama-v2-7b-chat. See fireworks.ai for the complete, most up-to-date list of models.
import getpass
import os
if "FIREWORKS_API_KEY" not in os.environ:
os.environ["FIREWORKS_API_KEY"] = getpass.getpass("Fireworks API Key:")
Installation
You need to install the langchain_fireworks Python package for the rest of this notebook to work.
%pip install -qU langchain-fireworks
Note: you may need to restart the kernel to use updated packages.
Instantiation
from langchain_fireworks import Fireworks
# Initialize a Fireworks model
llm = Fireworks(
model="accounts/fireworks/models/llama-v3p1-8b-instruct",
base_url="https://api.fireworks.ai/inference/v1/completions",
)
API Reference: Fireworks
Invocation
You can call the model directly with a string prompt to get a completion.
output = llm.invoke("Who's the best quarterback in the NFL?")
print(output)
If Manningville Station, Lions rookie EJ Manuel's
Invoking with multiple prompts
# Calling multiple prompts
output = llm.generate(
[
"Who's the best cricket player in 2016?",
"Who's the best basketball player in the league?",
]
)
print(output.generations)
[[Generation(text=" We're not just asking, we've done some research. We'")], [Generation(text=' The conversation is dominated by Kobe Bryant, Dwyane Wade,')]]
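As the output above shows, `generate` returns a result whose `generations` field is a list of lists: one inner list per prompt, each holding one or more completions. A minimal stand-in sketch of how you might pull out the text of the first completion per prompt (using a plain `namedtuple` in place of LangChain's `Generation` class, purely for illustration, so it runs without an API key):

```python
from collections import namedtuple

# Stand-in for langchain_core's Generation type, for illustration only.
Generation = namedtuple("Generation", ["text"])

# Shape mirrors output.generations from llm.generate(...) above:
# one inner list per prompt.
generations = [
    [Generation(text="completion for prompt 1")],
    [Generation(text="completion for prompt 2")],
]

# Take the first completion's text for each prompt.
texts = [gens[0].text for gens in generations]
print(texts)
```

With the real `output` from `llm.generate(...)`, the same comprehension over `output.generations` yields one string per prompt.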
Invoking with additional parameters
# Setting additional parameters: temperature, max_tokens, top_p
llm = Fireworks(
model="accounts/fireworks/models/llama-v3p1-8b-instruct",
temperature=0.7,
max_tokens=15,
top_p=1.0,
)
print(llm.invoke("What's the weather like in Kansas City in December?"))
December is a cold month in Kansas City, with temperatures of
Chaining
You can use the LangChain Expression Language (LCEL) to create a simple chain with non-chat models.
from langchain_core.prompts import PromptTemplate
from langchain_fireworks import Fireworks
llm = Fireworks(
model="accounts/fireworks/models/llama-v3p1-8b-instruct",
temperature=0.7,
max_tokens=15,
top_p=1.0,
)
prompt = PromptTemplate.from_template("Tell me a joke about {topic}?")
chain = prompt | llm
print(chain.invoke({"topic": "bears"}))
API Reference: PromptTemplate | Fireworks
What do you call a bear with no teeth? A gummy bear!
Streaming
If you prefer, you can stream the output.
for token in chain.stream({"topic": "bears"}):
print(token, end="", flush=True)
Why do bears hate shoes so much? They like to run around in their
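Each item yielded by `stream` is a string token, so you can also accumulate them into the full completion instead of printing as you go. A sketch using a stand-in generator in place of `chain.stream(...)`, so it runs without an API key:

```python
def demo_stream():
    # Stand-in for chain.stream({"topic": "bears"}); yields string tokens.
    for token in ["Why ", "do ", "bears ", "hate ", "shoes?"]:
        yield token

# Accumulate streamed tokens into the full completion.
full_text = "".join(demo_stream())
print(full_text)
```

The same `"".join(...)` pattern works directly on `chain.stream({"topic": "bears"})`.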
API reference
For detailed documentation of all Fireworks LLM features and configurations, head to the API reference: https://python.langchain.com/api_reference/fireworks/llms/langchain_fireworks.llms.Fireworks.html#langchain_fireworks.llms.Fireworks
Related
- LLM conceptual guide
- LLM how-to guides