How to recursively split text by character
This text splitter is the recommended one for generic text. It is parameterized by a list of characters, and it tries to split on them in order until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""]. This has the effect of trying to keep all paragraphs (and then sentences, and then words) together as long as possible, since those are generically the most semantically related pieces of text.
- How the text is split: by a list of characters.
- How the chunk size is measured: by number of characters.
Below we show example usage.
To obtain the string content directly, use .split_text.
To create LangChain Document objects (e.g., for use in downstream tasks), use .create_documents.
%pip install -qU langchain-text-splitters
from langchain_text_splitters import RecursiveCharacterTextSplitter
# Load example document
with open("state_of_the_union.txt") as f:
state_of_the_union = f.read()
text_splitter = RecursiveCharacterTextSplitter(
# Set a really small chunk size, just to show.
chunk_size=100,
chunk_overlap=20,
length_function=len,
is_separator_regex=False,
)
texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
print(texts[1])
API Reference: RecursiveCharacterTextSplitter
page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and'
page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.'
text_splitter.split_text(state_of_the_union)[:2]
['Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and',
'of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.']
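As a side note (not part of the original example), create_documents also accepts an optional metadatas list, one dict per input text, which is copied onto each resulting Document:

docs = text_splitter.create_documents(
    [state_of_the_union],
    metadatas=[{"source": "state_of_the_union.txt"}],  # one dict per input text
)
print(docs[0].metadata)  # {'source': 'state_of_the_union.txt'}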
Let's go through the parameters set above for RecursiveCharacterTextSplitter:
- chunk_size: the maximum size of a chunk, where size is determined by the length_function.
- chunk_overlap: the target overlap between chunks. Overlapping chunks helps mitigate loss of information when context is divided between chunks. Note how the two chunks above share the text "of Congress and": that is this overlap at work.
- length_function: the function determining the chunk size.
- is_separator_regex: whether the separator list (defaulting to ["\n\n", "\n", " ", ""]) should be interpreted as regex.
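To make length_function concrete, here is a brief sketch (an illustration, not from the original page; the word_count helper is our own) that measures chunk size in whitespace-delimited words instead of characters:

def word_count(text: str) -> int:
    # Hypothetical helper: measure length in whitespace-delimited words.
    return len(text.split())

word_splitter = RecursiveCharacterTextSplitter(
    chunk_size=25,  # at most ~25 words per chunk, as measured by word_count
    chunk_overlap=5,  # aim for ~5 words of overlap between adjacent chunks
    length_function=word_count,
    is_separator_regex=False,
)
word_chunks = word_splitter.split_text(state_of_the_union)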
Splitting text from languages without word boundaries
Some writing systems do not have word boundaries, for example Chinese, Japanese, and Thai. Splitting text with the default separator list of ["\n\n", "\n", " ", ""] can cause words to be split between chunks. To keep words together, you can override the list of separators to include additional punctuation:
- Add ASCII full stop ".", the Unicode fullwidth full stop "．" (used in Chinese text), and the ideographic full stop "。" (used in Japanese and Chinese).
- Add the zero-width space used in Thai, Myanmar, Khmer, and Japanese.
- Add ASCII comma ",", the Unicode fullwidth comma "，", and the ideographic comma "、".
text_splitter = RecursiveCharacterTextSplitter(
separators=[
"\n\n",
"\n",
" ",
".",
",",
"\u200b", # Zero-width space
"\uff0c", # Fullwidth comma
"\u3001", # Ideographic comma
"\uff0e", # Fullwidth full stop
"\u3002", # Ideographic full stop
"",
],
    # Existing args from the example above
    chunk_size=100,
    chunk_overlap=20,
    length_function=len,
    is_separator_regex=False,
)
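To see the effect, here is a small illustration (the sample sentence is our own, not from the original page); with these separators the splitter can break at the ideographic full stop "。" rather than in the middle of a word:

# Illustrative only: use a small chunk size so the splitting is visible.
demo_splitter = RecursiveCharacterTextSplitter(
    separators=["\n\n", "\n", " ", ".", ",", "\u200b", "\uff0c", "\u3001", "\uff0e", "\u3002", ""],
    chunk_size=12,
    chunk_overlap=0,
)
# Each sentence stays intact instead of being cut mid-word.
print(demo_splitter.split_text("今天天气很好。我们去公园散步。然后一起吃午饭。"))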