
Contextual AI

Contextual AI provides state-of-the-art RAG components designed for accurate and reliable enterprise AI applications. Our LangChain integration exposes standalone API endpoints for our specialized models:

  • Grounded Language Model (GLM): The world's most grounded language model, engineered to minimize hallucinations by prioritizing retrieved knowledge. The GLM delivers exceptional factual accuracy with inline attributions, making it ideal for enterprise RAG and agentic applications where reliability is critical.

  • Instruction-Following Reranker: The first reranker that can follow custom instructions, intelligently prioritizing documents by criteria such as relevance, source, or document type. Our reranker outperforms competitors on industry benchmarks and resolves the challenge of conflicting information in enterprise knowledge bases.

Founded by the inventors of RAG technology, Contextual AI builds specialized components that help innovative teams accelerate the development of production-ready RAG agents that deliver exceptionally accurate responses.

Grounded Language Model (GLM)

The Grounded Language Model (GLM) is engineered specifically to minimize hallucinations in enterprise RAG and agentic applications. The GLM delivers:

  • Strong performance, with 88% factual accuracy on the FACTS benchmark (see benchmark results)
  • Responses grounded strictly in the provided knowledge sources, with inline attributions (read product details)
  • Precise source citations integrated directly into generated responses
  • Prioritization of retrieved context over parametric knowledge (see technical overview)
  • Clear acknowledgment of uncertainty when information is unavailable

The GLM serves as a drop-in replacement for general-purpose LLMs in RAG pipelines, dramatically improving reliability for mission-critical enterprise applications.
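
As a minimal sketch of the drop-in idea (assuming only the invoke signature shown in the full example below, with CONTEXTUAL_AI_API_KEY set in the environment; the answer helper and its arguments are hypothetical):

from langchain_contextual import ChatContextual

llm = ChatContextual(model="v1")


def answer(query: str, retrieved_chunks: list[str]) -> str:
    # The retrieved chunks go in via the `knowledge` argument; with a
    # general-purpose chat model they would be stuffed into the prompt instead.
    ai_msg = llm.invoke([("human", query)], knowledge=retrieved_chunks)
    return ai_msg.content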

Instruction-Following Reranker

The world's first Instruction-Following Reranker revolutionizes document ranking with unprecedented control and accuracy. Key capabilities include:

  • Natural-language instructions that prioritize documents by relevance, source, metadata, and more (see how it works)
  • Superior performance on the BEIR benchmark, scoring 61.2 and significantly outperforming competitors (see benchmark data)
  • Intelligent resolution of conflicting information across multiple knowledge sources
  • Seamless drop-in replacement for existing rerankers
  • Dynamic control over document ranking through natural-language commands

The reranker excels at handling enterprise knowledge bases that may contain conflicting information, letting you specify precisely which sources should take priority in a given scenario; a few illustrative instruction strings follow.
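
The instructions are plain natural-language strings. The examples below are invented for illustration; the full end-to-end reranking walkthrough appears later on this page:

# Hypothetical instruction strings; any natural-language priority statement works.
by_recency = "More recent documents should be ranked higher."
by_source = "Prioritize internal engineering documents over public blog posts."
by_type = "Prefer official specification documents; rank forum threads last."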

Using Contextual AI with LangChain

See the detailed documentation here (/docs/integrations/chat/contextual).

This integration makes it easy to incorporate Contextual AI's GLM and Instruction-Following Reranker into your LangChain workflows. The GLM keeps your application's responses strictly grounded in fact, while the reranker significantly improves retrieval quality by intelligently prioritizing the most relevant documents.

Whether you're building for regulated industries or safety-conscious domains, Contextual AI delivers the accuracy, control, and reliability your enterprise use case demands.

Start a free trial today to experience the most grounded language model and the instruction-following reranker, both built for enterprise AI applications.
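
The examples below assume the integration package is installed. The package name here follows LangChain's usual naming convention for the `langchain_contextual` import; verify it against the official installation instructions:

%pip install -qU langchain-contextual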

Grounded Language Model

# Integrating the Grounded Language Model
import getpass
import os

from langchain_contextual import ChatContextual

# Set credentials
if not os.getenv("CONTEXTUAL_AI_API_KEY"):
    os.environ["CONTEXTUAL_AI_API_KEY"] = getpass.getpass(
        "Enter your Contextual API key: "
    )

# initialize the Contextual GLM
llm = ChatContextual(
    model="v1",
    api_key="",  # set here, or via the CONTEXTUAL_AI_API_KEY environment variable
)
# include a system prompt (optional)
system_prompt = "You are a helpful assistant that uses all of the provided knowledge to answer the user's query to the best of your ability."

# provide your own knowledge from your knowledge base here, as a list of strings
knowledge = [
    "There are 2 types of dogs in the world: good dogs and best dogs.",
    "There are 2 types of cats in the world: good cats and best cats.",
]

# create your message
messages = [
    ("human", "What type of cats are there in the world and what are the types?"),
]

# invoke the GLM, passing the knowledge strings and the optional system prompt;
# pass avoid_commentary=True to turn off the GLM's commentary
ai_msg = llm.invoke(
    messages, knowledge=knowledge, system_prompt=system_prompt, avoid_commentary=True
)

print(ai_msg.content)
According to the information available, there are two types of cats in the world:

1. Good cats
2. Best cats
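
The "acknowledges uncertainty" behavior from the feature list above is easy to probe: ask about something the provided knowledge does not cover. A minimal sketch reusing the `llm` and `knowledge` objects defined above; the exact wording of the response will vary:

# Ask about a topic absent from the knowledge strings; per the product
# description, the GLM should acknowledge that the information is unavailable
# rather than answer from parametric knowledge.
off_topic = [("human", "What types of birds are there in the world?")]
response = llm.invoke(off_topic, knowledge=knowledge)
print(response.content)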

Instruction-Following Reranker

import getpass
import os

from langchain_contextual import ContextualRerank
from langchain_core.documents import Document

if not os.getenv("CONTEXTUAL_AI_API_KEY"):
    os.environ["CONTEXTUAL_AI_API_KEY"] = getpass.getpass(
        "Enter your Contextual API key: "
    )

api_key = ""  # set here, or via the CONTEXTUAL_AI_API_KEY environment variable
model = "ctxl-rerank-en-v1-instruct"

# initialize the reranker
compressor = ContextualRerank(
    model=model,
    api_key=api_key,
)

query = "What is the current enterprise pricing for the RTX 5090 GPU for bulk orders?"
instruction = "Prioritize internal sales documents over market analysis reports. More recent documents should be weighted higher. Enterprise portal content supersedes distributor communications."

document_contents = [
    "Following detailed cost analysis and market research, we have implemented the following changes: AI training clusters will see a 15% uplift in raw compute performance, enterprise support packages are being restructured, and bulk procurement programs (100+ units) for the RTX 5090 Enterprise series will operate on a $2,899 baseline.",
    "Enterprise pricing for the RTX 5090 GPU bulk orders (100+ units) is currently set at $3,100-$3,300 per unit. This pricing for RTX 5090 enterprise bulk orders has been confirmed across all major distribution channels.",
    "RTX 5090 Enterprise GPU requires 450W TDP and 20% cooling overhead.",
]

metadata = [
    {
        "Date": "January 15, 2025",
        "Source": "NVIDIA Enterprise Sales Portal",
        "Classification": "Internal Use Only",
    },
    {"Date": "11/30/2023", "Source": "TechAnalytics Research Group"},
    {
        "Date": "January 25, 2025",
        "Source": "NVIDIA Enterprise Sales Portal",
        "Classification": "Internal Use Only",
    },
]

# pair each content string with its metadata
documents = [
    Document(page_content=content, metadata=metadata[i])
    for i, content in enumerate(document_contents)
]

# rerank the documents against the query, steered by the instruction
reranked_documents = compressor.compress_documents(
    query=query,
    instruction=instruction,
    documents=documents,
)
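
To close the loop described in the overview, here is a sketch that inspects the instruction-driven ordering and then hands the reranked contents to the GLM as knowledge. It reuses the `llm` instance from the GLM section above; the expected ordering is an assumption based on the instruction and metadata in this example:

# Inspect the new ordering; given the instruction, the recent internal
# sales-portal documents should outrank the 2023 market analysis report.
for doc in reranked_documents:
    print(doc.metadata["Date"], "-", doc.metadata["Source"])

# Ground the final answer in the prioritized documents by passing their
# contents to the GLM as knowledge (reuses `llm` from the GLM section).
knowledge = [doc.page_content for doc in reranked_documents]
answer = llm.invoke([("human", query)], knowledge=knowledge)
print(answer.content)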