UpTrain
UpTrain [github || website || docs] is an open-source platform to evaluate and improve LLM applications. It provides grades for 20+ preconfigured checks (covering language, code, and embedding use cases), performs root cause analysis on failure cases, and gives guidance on how to resolve them.
UpTrain Callback Handler
This notebook showcases how the UpTrain callback handler seamlessly integrates into your pipeline, enabling diverse evaluations. We have selected a few evaluations that we deemed appropriate for evaluating the chains. These evaluations run automatically, and their results appear in the output. More details on UpTrain's evaluations can be found here.
For the sake of the demonstration, the following selected retrievers from LangChain are highlighted:
1. Vanilla RAG:
RAG plays a crucial role in retrieving context and generating a response. To ensure its performance and response quality, we run the following evaluations:
- Context Relevance: Determines if the context extracted from the query is relevant to the response.
- Factual Accuracy: Assesses if the LLM is hallucinating or providing incorrect information.
- Response Completeness: Checks if the response contains all the information requested by the query.
2. Multi Query Generation:
MultiQueryRetriever creates multiple variants of a question that carry a similar meaning to the original question. Given the complexity, we include the previous evaluations and add:
- Multi Query Accuracy: Assures that the multi-queries generated mean the same as the original query.
3. Context Compression and Reranking:
Reranking involves reordering nodes based on relevance to the query and choosing the top n nodes. Since the number of nodes can reduce once reranking is complete, we perform the following evaluations:
- Context Reranking: Checks if the order of re-ranked nodes is more relevant to the query than the original order.
- Context Conciseness: Examines whether the reduced number of nodes still provides all the required information.
Together, these evaluations ensure the robustness and effectiveness of the RAG, the MultiQueryRetriever, and the reranking process in the chain.
Install Dependencies
%pip install -qU langchain langchain_openai langchain-community uptrain faiss-cpu flashrank
NOTE: You can also install faiss-gpu instead of faiss-cpu if you want to use the GPU-enabled version of the library.
Import Libraries
from getpass import getpass
from langchain.chains import RetrievalQA
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import FlashrankRerank
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_community.callbacks.uptrain_callback import UpTrainCallbackHandler
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers.string import StrOutputParser
from langchain_core.prompts.chat import ChatPromptTemplate
from langchain_core.runnables.passthrough import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import (
RecursiveCharacterTextSplitter,
)
Load the documents
loader = TextLoader("../../how_to/state_of_the_union.txt")
documents = loader.load()
Split the document into chunks
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
chunks = text_splitter.split_documents(documents)
Create the retriever
embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(chunks, embeddings)
retriever = db.as_retriever()
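Before wiring the retriever into a chain, you can optionally sanity-check that it returns relevant chunks. A minimal sketch (not part of the original notebook; the query string is just an example) using the retriever's standard invoke interface:
# Optional sanity check: inspect what the retriever returns for a sample query
sample_docs = retriever.invoke("What did the president say about Ketanji Brown Jackson")
print(len(sample_docs), sample_docs[0].page_content[:200])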
Define the LLM
llm = ChatOpenAI(temperature=0, model="gpt-4")
Setup
UpTrain provides you with:
- Dashboards with advanced drill-down and filtering options
- Insights and common topics among failing cases
- Observability and real-time monitoring of production data
- Regression testing via seamless integration with your CI/CD pipelines
You can choose between the following options for evaluating with UpTrain:
1. UpTrain's Open-Source Software (OSS):
You can use the open-source evaluation service to evaluate your model. In this case, you will need to provide an OpenAI API key; UpTrain uses GPT models to evaluate the responses generated by the LLM. You can get your key here: https://platform.openai.com/account/api-keys
In order to view your evaluations in the UpTrain dashboard, you will need to set it up by running the following commands in your terminal:
git clone https://github.com/uptrain-ai/uptrain
cd uptrain
bash run_uptrain.sh
This will start the UpTrain dashboard on your local machine. You can access it at http://localhost:3000/dashboard.
Parameters:
- key_type="openai"
- api_key="OPENAI_API_KEY"
- project_name="PROJECT_NAME"
2. UpTrain Managed Service and Dashboards:
Alternatively, you can use UpTrain's managed service to evaluate your model. You can create a free UpTrain account here: https://uptrain.ai/ and get free trial credits. If you want more trial credits, book a call with the maintainers of UpTrain here: https://calendly.com/uptrain-sourabh/30min
The benefits of using the managed service are:
- No need to set up the UpTrain dashboard on your local machine.
- Access to many LLMs without needing their API keys.
Once the evaluations have run, you can view them in the UpTrain dashboard at https://dashboard.uptrain.ai/dashboard
Parameters:
- key_type="uptrain"
- api_key="UPTRAIN_API_KEY"
- project_name="PROJECT_NAME"
Note: project_name is the project name under which the evaluations will be shown in the UpTrain dashboard.
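For reference, the two options map onto the callback handler roughly like this (a sketch with the placeholder values listed above; pick the one that matches your setup):
# Option 1: UpTrain's open-source evaluations, scored with your OpenAI key
uptrain_callback = UpTrainCallbackHandler(
    key_type="openai", api_key="OPENAI_API_KEY", project_name="PROJECT_NAME"
)
# Option 2: UpTrain's managed service and dashboards
uptrain_callback = UpTrainCallbackHandler(
    key_type="uptrain", api_key="UPTRAIN_API_KEY", project_name="PROJECT_NAME"
)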
Set the API key
The notebook will prompt you to enter the API key. You can choose between an OpenAI API key and an UpTrain API key by changing the key_type parameter in the cell below.
KEY_TYPE = "openai" # or "uptrain"
API_KEY = getpass()
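If you entered an OpenAI key above, the same key is also needed by the ChatOpenAI and OpenAIEmbeddings objects used in this notebook. A small optional sketch (an assumption about your environment, not an UpTrain requirement) that exports it:
import os

# Reuse the OpenAI key for the LLM and embeddings below; skip this if
# OPENAI_API_KEY is already set or if you are using key_type="uptrain".
if KEY_TYPE == "openai" and "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = API_KEY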
1. Vanilla RAG
The UpTrain callback handler automatically captures the query, context, and response once they are generated, and runs the following three evaluations (graded from 0 to 1) on the response:
- Context Relevance: Checks if the context extracted from the query is relevant to the response.
- Factual Accuracy: Checks how factually accurate the response is.
- Response Completeness: Checks if the response contains all the information requested by the query.
# Create the RAG prompt
template = """Answer the question based only on the following context, which can include text and tables:
{context}
Question: {question}
"""
rag_prompt_text = ChatPromptTemplate.from_template(template)
# Create the chain
chain = (
{"context": retriever, "question": RunnablePassthrough()}
| rag_prompt_text
| llm
| StrOutputParser()
)
# Create the uptrain callback handler
uptrain_callback = UpTrainCallbackHandler(key_type=KEY_TYPE, api_key=API_KEY)
config = {"callbacks": [uptrain_callback]}
# Invoke the chain with a query
query = "What did the president say about Ketanji Brown Jackson"
docs = chain.invoke(query, config=config)
2024-04-17 17:03:44.969 | INFO     | uptrain.framework.evalllm:evaluate_on_server:378 - Sending evaluation request for rows 0 to <50 to the Uptrain
2024-04-17 17:04:05.809 | INFO     | uptrain.framework.evalllm:evaluate:367 - Local server not running, start the server to log data and visualize in the dashboard!
``````output
Question: What did the president say about Ketanji Brown Jackson
Response: The president mentioned that he had nominated Ketanji Brown Jackson to serve on the United States Supreme Court 4 days ago. He described her as one of the nation's top legal minds who will continue Justice Breyer’s legacy of excellence. He also mentioned that she is a former top litigator in private practice, a former federal public defender, and comes from a family of public school educators and police officers. He described her as a consensus builder and noted that since her nomination, she has received a broad range of support from various groups, including the Fraternal Order of Police and former judges appointed by both Democrats and Republicans.
Context Relevance Score: 1.0
Factual Accuracy Score: 1.0
Response Completeness Score: 1.0
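The callback handler runs these checks for you. If you ever want to score a single question/context/response row outside of a LangChain run, a minimal sketch using UpTrain's open-source EvalLLM API could look like the following (field and check names follow UpTrain's documented conventions; treat the exact output keys as version-dependent):
from uptrain import EvalLLM, Evals

# Standalone UpTrain evaluation on one (question, context, response) row.
eval_llm = EvalLLM(openai_api_key=API_KEY)  # assumes KEY_TYPE == "openai"
results = eval_llm.evaluate(
    data=[
        {
            "question": "What did the president say about Ketanji Brown Jackson",
            "context": "...retrieved context goes here...",
            "response": "...generated response goes here...",
        }
    ],
    checks=[
        Evals.CONTEXT_RELEVANCE,
        Evals.FACTUAL_ACCURACY,
        Evals.RESPONSE_COMPLETENESS,
    ],
)
print(results[0])  # score keys may vary by UpTrain version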
2. Multi Query Generation
The MultiQueryRetriever is used to tackle the problem that a RAG pipeline might not return the best set of documents for a given query. It generates multiple queries that mean the same as the original query and then fetches documents for each of them.
To evaluate this retriever, UpTrain runs the following evaluation:
- Multi Query Accuracy: Checks if the multi-queries generated mean the same as the original query.
# Create the retriever
multi_query_retriever = MultiQueryRetriever.from_llm(retriever=retriever, llm=llm)
# Create the uptrain callback
uptrain_callback = UpTrainCallbackHandler(key_type=KEY_TYPE, api_key=API_KEY)
config = {"callbacks": [uptrain_callback]}
# Create the RAG prompt
template = """Answer the question based only on the following context, which can include text and tables:
{context}
Question: {question}
"""
rag_prompt_text = ChatPromptTemplate.from_template(template)
chain = (
{"context": multi_query_retriever, "question": RunnablePassthrough()}
| rag_prompt_text
| llm
| StrOutputParser()
)
# Invoke the chain with a query
question = "What did the president say about Ketanji Brown Jackson"
docs = chain.invoke(question, config=config)
2024-04-17 17:04:10.675 | INFO     | uptrain.framework.evalllm:evaluate_on_server:378 - Sending evaluation request for rows 0 to <50 to the Uptrain
2024-04-17 17:04:16.804 | INFO     | uptrain.framework.evalllm:evaluate:367 - Local server not running, start the server to log data and visualize in the dashboard!
``````output
Question: What did the president say about Ketanji Brown Jackson
Multi Queries:
- How did the president comment on Ketanji Brown Jackson?
- What were the president's remarks regarding Ketanji Brown Jackson?
- What statements has the president made about Ketanji Brown Jackson?
Multi Query Accuracy Score: 0.5
``````output
2024-04-17 17:04:22.027 | INFO     | uptrain.framework.evalllm:evaluate_on_server:378 - Sending evaluation request for rows 0 to <50 to the Uptrain
2024-04-17 17:04:44.033 | INFO     | uptrain.framework.evalllm:evaluate:367 - Local server not running, start the server to log data and visualize in the dashboard!
``````output
Question: What did the president say about Ketanji Brown Jackson
Response: The president mentioned that he had nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to serve on the United States Supreme Court 4 days ago. He described her as one of the nation's top legal minds who will continue Justice Breyer’s legacy of excellence. He also mentioned that since her nomination, she has received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
Context Relevance Score: 1.0
Factual Accuracy Score: 1.0
Response Completeness Score: 1.0
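Independently of UpTrain's report above, you can also surface the generated query variants in your own logs; a small optional sketch using the logger that MultiQueryRetriever writes to:
import logging

# Print the alternative queries generated by MultiQueryRetriever.
logging.basicConfig()
logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)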
3. Context Compression and Reranking
The reranking process involves reordering nodes based on relevance to the query and choosing the top n nodes. Since the number of nodes can reduce once reranking is complete, we perform the following evaluations:
- Context Reranking: Checks if the order of re-ranked nodes is more relevant to the query than the original order.
- Context Conciseness: Examines whether the reduced number of nodes still provides all the required information.
# Create the retriever
compressor = FlashrankRerank()
compression_retriever = ContextualCompressionRetriever(
base_compressor=compressor, base_retriever=retriever
)
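Since the reranker trims the retrieved set, it can help to compare how many documents survive compression before running the chain; an optional sketch (query string assumed) that makes the Context Conciseness and Context Reranking checks easier to interpret:
# Optional: compare the base retriever with the compression retriever
query = "What did the president say about Ketanji Brown Jackson"
print(len(retriever.invoke(query)), "docs before reranking")
print(len(compression_retriever.invoke(query)), "docs after reranking")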
# Create the chain
chain = RetrievalQA.from_chain_type(llm=llm, retriever=compression_retriever)
# Create the uptrain callback
uptrain_callback = UpTrainCallbackHandler(key_type=KEY_TYPE, api_key=API_KEY)
config = {"callbacks": [uptrain_callback]}
# Invoke the chain with a query
query = "What did the president say about Ketanji Brown Jackson"
result = chain.invoke(query, config=config)
2024-04-17 17:04:46.462 | INFO     | uptrain.framework.evalllm:evaluate_on_server:378 - Sending evaluation request for rows 0 to <50 to the Uptrain
2024-04-17 17:04:53.561 | INFO     | uptrain.framework.evalllm:evaluate:367 - Local server not running, start the server to log data and visualize in the dashboard!
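The RetrievalQA chain returns a dict; a minimal follow-up sketch to print the generated answer (using RetrievalQA's standard query/result keys):
# RetrievalQA returns a dict containing the original "query" and the generated "result".
print(result["result"])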