Document Splitting Methods
1 Fixed length / separator
e.g. RecursiveCharacterTextSplitter, CharacterTextSplitter
CharacterTextSplitter: splits the text sequentially into fixed-length chunks, with some overlap between chunks
RecursiveCharacterTextSplitter: splits recursively by a priority-ordered list of separators (e.g. \n\n, \n, brackets...); well suited to handling nested references such as bracketed text
TokenTextSplitter: splits sequentially by token count, with some overlap
You can also split directly on a single separator:
docs = text.split(".")
CharacterTextSplitter:
from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter(
    separator="\n",
    chunk_size=64,
    chunk_overlap=20,
)
# text is the raw string to split
docs = text_splitter.create_documents([text])
print(docs)
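The separator-priority recursion behind RecursiveCharacterTextSplitter can be sketched in plain Python. This is illustrative only: the real LangChain class also merges small pieces back together and applies overlap, and `recursive_split` is a made-up name, not a library function.

```python
# Try separators in priority order; if a piece is still longer than
# chunk_size, recurse with the next (finer-grained) separator.
def recursive_split(text, separators=("\n\n", "\n", " "), chunk_size=20):
    if len(text) <= chunk_size or not separators:
        return [text]
    sep, rest = separators[0], separators[1:]
    chunks = []
    for piece in text.split(sep):
        if len(piece) <= chunk_size:
            chunks.append(piece)
        else:
            chunks.extend(recursive_split(piece, rest, chunk_size))
    return chunks

text = "Paragraph one.\n\nParagraph two is a bit longer than the limit."
print(recursive_split(text))
```

Short paragraphs survive intact, while an over-long paragraph falls through to line and then word boundaries, which is why this style handles nested structure better than a fixed-length cut.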
2 Splitting by structured format
MarkdownHeaderTextSplitter: for Markdown files
LatexTextSplitter: for LaTeX files
PyPDFLoader: for PDF files (strictly a loader; it returns one Document per page)
from langchain.document_loaders import PyPDFLoader

loader = PyPDFLoader("xxx.pdf")
pages = loader.load()  # one Document per page
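The header-based grouping that MarkdownHeaderTextSplitter performs can be sketched as follows. This is a simplified stand-in, not the LangChain API; `split_by_headers` is a hypothetical helper that groups lines under their nearest preceding heading.

```python
# Walk the Markdown line by line; start a new section at each heading
# and collect the body lines that follow it.
def split_by_headers(md_text):
    sections, current = [], {"header": None, "lines": []}
    for line in md_text.splitlines():
        if line.startswith("#"):
            if current["lines"] or current["header"]:
                sections.append(current)
            current = {"header": line.lstrip("#").strip(), "lines": []}
        elif line.strip():
            current["lines"].append(line)
    sections.append(current)
    return [{"header": s["header"], "content": " ".join(s["lines"])} for s in sections]

md = "# Intro\nHello.\n## Detail\nMore text."
print(split_by_headers(md))
```

The real splitter additionally records each chunk's full header path (H1, H2, ...) as metadata, which is what makes header-aware retrieval possible.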
3 Rule-based sentence splitting tools
NLTK or spaCy
NLTK's sentence splitting is based on a trained model plus rules. It uses a technique called Sentence Boundary Detection (SBD), which relies on punctuation, abbreviations, numbers, and other language-specific rules to determine sentence boundaries. It depends on linguistic rules and patterns rather than the semantic relationship between sentences.
Splitting English sentences:
import nltk
nltk.download('punkt')
from nltk.tokenize import sent_tokenize
text = "Hello, how are you? I'm doing well. Thanks for asking."
sentences = sent_tokenize(text)
print(sentences)
# ['Hello, how are you?', "I'm doing well.", 'Thanks for asking.']
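spaCy can do the same rule-based splitting; a minimal sketch using its rule-based sentencizer component (assumes spaCy is installed; no statistical model download is needed for this pipe):

```python
import spacy

# Blank English pipeline with only the rule-based sentencizer,
# which detects sentence boundaries from punctuation
nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")

doc = nlp("Hello, how are you? I'm doing well. Thanks for asking.")
print([sent.text for sent in doc.sents])
```

For higher accuracy, a trained pipeline such as `en_core_web_sm` can replace the blank one, at the cost of a model download.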
Splitting Chinese sentences (NLTK's punkt model does not cover Chinese; jieba is a word segmenter, so the actual sentence split here is done on the Chinese full stop '。'):
import jieba

# Load jieba's word-segmentation dictionary
jieba.initialize()

# Word-segment the text with jieba, then split into sentences on '。'
def chinese_sent_tokenize(text):
    seg_str = ' '.join(jieba.cut(text, cut_all=False))
    sentences = []
    for sent in seg_str.split('。'):
        if sent.strip():
            sentences.append(sent.strip() + '。')
    return sentences

text = "今天天氣不錯。我們去公園散步。"  # sample input
sentences = chinese_sent_tokenize(text)
4 Semantic splitting
- cross-segment models based on BERT
- seqModel:
An example: nlp_bert_document-segmentation_chinese-base
https://modelscope.cn/models/iic/nlp_bert_document-segmentation_chinese-base/summary
from modelscope.outputs import OutputKeys
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks
p = pipeline(
    task=Tasks.document_segmentation,
    model='damo/nlp_bert_document-segmentation_chinese-base')
result = p(documents=text)  # text: the Chinese document string to segment
print(result[OutputKeys.TEXT])
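The cross-segment idea from the bullet above can be sketched with a toy boundary scorer. A real system feeds each candidate boundary's left and right context into a BERT classifier; here a word-overlap Jaccard score is a loud stand-in for that model, just to show the sliding-window structure (all names are hypothetical).

```python
# Jaccard word overlap between two sentences: a crude proxy for the
# coherence score a BERT cross-segment classifier would produce
def jaccard(a, b):
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Slide over adjacent sentence pairs and cut where coherence is low
def cross_segment_split(sentences, threshold=0.1):
    segments, current = [], [sentences[0]]
    for prev, nxt in zip(sentences, sentences[1:]):
        if jaccard(prev, nxt) < threshold:  # low coherence -> boundary
            segments.append(current)
            current = []
        current.append(nxt)
    segments.append(current)
    return segments

sents = ["Cats purr.", "Cats also meow.",
         "Stocks fell today.", "Stocks may rebound."]
print(cross_segment_split(sents))
```

Swapping the Jaccard scorer for a fine-tuned sentence-pair classifier, and widening the window beyond one sentence on each side, recovers the actual cross-segment setup.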
References:
https://zhuanlan.zhihu.com/p/673906072
https://zhuanlan.zhihu.com/p/666273413
https://blog.csdn.net/hmywillstronger/article/details/130073676
LangChain + LLM local knowledge base:
https://blog.csdn.net/v_JULY_v/article/details/131552592
seqModel:
https://blog.csdn.net/weixin_48827824/article/details/126952959
From cross-segment to seqModel:
https://blog.csdn.net/v_JULY_v/article/details/135386202