
Needle Retriever

Needle lets you create RAG pipelines with minimal effort.

For more details, see our API documentation.

Overview

The Needle document loader is a utility for integrating Needle collections with LangChain. It enables seamless document storage, retrieval, and use within retrieval-augmented generation (RAG) workflows.

This example demonstrates:

  • Storing documents in a Needle collection.
  • Setting up a retriever to fetch documents.
  • Building a retrieval-augmented generation (RAG) pipeline.

Setup

Before you begin, make sure the following environment variables are set:

  • NEEDLE_API_KEY: your API key for authenticating with Needle.
  • OPENAI_API_KEY: your OpenAI API key for language model operations.

Initialization

To initialize the NeedleLoader, you need the following parameters:

  • needle_api_key: your Needle API key (it can also be set as an environment variable).
  • collection_id: the ID of the Needle collection you want to use.

import os
os.environ["NEEDLE_API_KEY"] = ""
os.environ["OPENAI_API_KEY"] = ""
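
If you are working interactively, you can prompt for the keys instead of hardcoding them. This is a generic pattern rather than anything Needle-specific; a minimal sketch:

import getpass
import os

# Prompt for each key only if it is not already set in the environment
for key in ("NEEDLE_API_KEY", "OPENAI_API_KEY"):
    if not os.environ.get(key):
        os.environ[key] = getpass.getpass(f"{key}: ")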

Instantiation

from langchain_community.document_loaders.needle import NeedleLoader

collection_id = "clt_01J87M9T6B71DHZTHNXYZQRG5H"

# Initialize NeedleLoader to store documents to the collection
document_loader = NeedleLoader(
    needle_api_key=os.getenv("NEEDLE_API_KEY"),
    collection_id=collection_id,
)
API Reference: NeedleLoader

Loading

To add files to a Needle collection:

files = {
    "tech-radar-30.pdf": "https://www.thoughtworks.com/content/dam/thoughtworks/documents/radar/2024/04/tr_technology_radar_vol_30_en.pdf"
}

document_loader.add_files(files=files)
# Show the documents in the collection
# collections_documents = document_loader.load()
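
To verify the upload, the load() call shown in the comment above returns the collection's contents as LangChain Document objects. A minimal sketch:

# Fetch the documents currently stored in the collection
collection_documents = document_loader.load()
print(f"Documents in collection: {len(collection_documents)}")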

Usage

Use within a chain

Below is a complete example of setting up a RAG pipeline in a chain with Needle:

import os

from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_community.retrievers.needle import NeedleRetriever
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)

# Initialize the Needle retriever (make sure your Needle API key is set as an environment variable)
retriever = NeedleRetriever(
    needle_api_key=os.getenv("NEEDLE_API_KEY"),
    collection_id="clt_01J87M9T6B71DHZTHNXYZQRG5H",
)

# Define system prompt for the assistant
system_prompt = """
You are an assistant for question-answering tasks.
Use the following pieces of retrieved context to answer the question.
If you don't know, say so concisely.\n\n{context}
"""

prompt = ChatPromptTemplate.from_messages(
    [("system", system_prompt), ("human", "{input}")]
)

# Define the question-answering chain using a document chain (stuff chain) and the retriever
question_answer_chain = create_stuff_documents_chain(llm, prompt)

# Create the RAG (Retrieval-Augmented Generation) chain by combining the retriever and the question-answering chain
rag_chain = create_retrieval_chain(retriever, question_answer_chain)

# Define the input query
query = {"input": "Did RAG move to accepted?"}

response = rag_chain.invoke(query)

response
{'input': 'Did RAG move to accepted?',
'context': [Document(metadata={}, page_content='New Moved in/out No change\n\n© Thoughtworks, Inc. All Rights Reserved. 12\n\nTechniques\n\n1. Retrieval-augmented generation (RAG)\nAdopt\n\nRetrieval-augmented generation (RAG) is the preferred pattern for our teams to improve the quality of \nresponses generated by a large language model (LLM). We’ve successfully used it in several projects, \nincluding the popular Jugalbandi AI Platform. With RAG, information about relevant and trustworthy \ndocuments — in formats like HTML and PDF — are stored in databases that supports a vector data \ntype or efficient document search, such as pgvector, Qdrant or Elasticsearch Relevance Engine. For \na given prompt, the database is queried to retrieve relevant documents, which are then combined \nwith the prompt to provide richer context to the LLM. This results in higher quality output and greatly \nreduced hallucinations. The context window — which determines the maximum size of the LLM input \n— is limited, which means that selecting the most relevant documents is crucial. We improve the \nrelevancy of the content that is added to the prompt by reranking. Similarly, the documents are usually \ntoo large to calculate an embedding, which means they must be split into smaller chunks. This is often \na difficult problem, and one approach is to have the chunks overlap to a certain extent.'),
 ...],
'answer': 'Yes, RAG has moved to the "Adopt" status.'}
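
The retriever can also be queried on its own, outside the chain. NeedleRetriever implements LangChain's standard retriever interface, so invoke returns a list of Document objects; a minimal sketch:

# Query the retriever directly for the top matching chunks
docs = retriever.invoke("Did RAG move to accepted?")
for doc in docs:
    print(doc.page_content[:200])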

API Reference

For detailed documentation of all Needle features and configurations, visit the API reference: https://docs.needle-ai.com