- Hands-on RAG: embedding models
- Hands-on RAG: fine-tuning the BGE embedding model in practice
- Hands-on RAG: fine-tuning the BCEmbedding embedding model in practice
- Fine-tuning BCE reranking in practice
- Fine-tuning GTE embedding and reranking models in practice
- Sequence length in model fine-tuning
In this article we put the ColBERT model into practice, as usual using the code in open-retrievals as the blueprint. Since the rise of RAG, ColBERT has drawn increasing attention. Its overall structure closely resembles a dual-encoder (two-tower) model, but being a late-interaction model means the query-document interaction happens later than in a typical reranking model.
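ColBERT scores a query-document pair by keeping one embedding per token and summing, over query tokens, the maximum similarity to any document token (MaxSim). A minimal pure-Python sketch of this scoring, using toy hand-made token vectors rather than real model output:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def maxsim_score(query_embs, doc_embs):
    """Late interaction: for each query token, take the max similarity
    to any document token, then sum over query tokens."""
    return sum(max(dot(q, d) for d in doc_embs) for q in query_embs)

# Toy normalized 2-d token embeddings (illustrative only)
query = [[1.0, 0.0], [0.0, 1.0]]
doc_a = [[1.0, 0.0], [0.6, 0.8]]    # shares token directions with the query
doc_b = [[-1.0, 0.0], [0.0, -1.0]]  # opposite directions

print(maxsim_score(query, doc_a))  # higher score for doc_a
print(maxsim_score(query, doc_b))
```

Because each query token only needs a max over precomputed document token embeddings, documents can be encoded offline, which is what makes late interaction cheaper than a full cross-encoder.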
Code for this article: https://colab.research.google.com/drive/1QVtqhQ080ZMltXoJyODMmvEQYI6oo5kO?usp=sharing
Environment setup
pip install transformers
pip install open-retrievals
Data preparation
We again use the C-MTEB/T2Reranking dataset.
- Each sample has a query, a positive, and a negative: the query and positive form a positive pair, while the query and negative form a negative pair
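As a sketch, one training record in this format might look like the following (the strings here are made up for illustration, not taken from the dataset):

```python
# Hypothetical example of one T2Reranking-style training sample
sample = {
    "query": "What is late interaction in retrieval?",
    "positive": ["ColBERT delays query-document interaction until token-level matching."],
    "negative": ["The weather is nice today."],
}
print(sample["query"])
```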
Usage
As a late-interaction model, ColBERT can both generate embeddings like an embedding model and compute similarity scores directly. The ColBERT component in BAAI/bge-m3 was trained on top of XLMRoberta, so ColBERT can load pretrained weights straight from bge-m3.
from retrievals import ColBERT
from retrievals.losses import ColbertLoss

model_name_or_path: str = 'BAAI/bge-m3'

model = ColBERT.from_pretrained(
    model_name_or_path,
    colbert_dim=1024,
    use_fp16=True,
    loss_fn=ColbertLoss(use_inbatch_negative=True),
)
model
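The use_inbatch_negative option controls whether, within a batch, the positives of other queries are also treated as negatives for the current query. This corresponds to an InfoNCE-style contrastive objective; the following is a minimal sketch of that idea in plain Python, not the actual ColbertLoss implementation:

```python
import math

def info_nce_loss(scores):
    """scores[i][j]: similarity of query i with document j; the positive for
    query i is document i, the rest of the row are in-batch negatives.
    Returns the mean cross-entropy over queries."""
    total = 0.0
    for i, row in enumerate(scores):
        log_sum = math.log(sum(math.exp(s) for s in row))
        total += log_sum - row[i]  # -log softmax probability of the positive
    return total / len(scores)

# Query i matches document i strongly, so the loss is small
scores = [[5.0, 0.1], [0.2, 5.0]]
print(info_nce_loss(scores))
```

With `use_inbatch_negative=False`, only the explicitly paired negatives contribute, so the loss no longer depends on the rest of the batch.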
- Generating embeddings
sentences_1 = ["In 1974, I won the championship in Southeast Asia in my first kickboxing match", "In 1982, I defeated the heavy hitter Ryu Long."]
sentences_2 = ['A dog is chasing car.', 'A man is playing a guitar.']
output_1 = model.encode(sentences_1, normalize_embeddings=True)
print(output_1.shape, output_1)
output_2 = model.encode(sentences_2, normalize_embeddings=True)
print(output_2.shape, output_2)
- Computing similarity scores for sentence pairs
sentences = [
    ["In 1974, I won the championship in Southeast Asia in my first kickboxing match", "In 1982, I defeated the heavy hitter Ryu Long."],
    ["In 1974, I won the championship in Southeast Asia in my first kickboxing match", 'A man is playing a guitar.'],
]
scores_list = model.compute_score(sentences)
print(scores_list)
Fine-tuning
I tried two approaches: writing the training code myself on top of the library, and using the shell-script entry point shipped with open-retrievals. Here we take the first approach; the second is covered in the bonus section at the end of this article.
import transformers
from transformers import AutoTokenizer, TrainingArguments, get_cosine_schedule_with_warmup, AdamW
from retrievals import ColBERT, ColBertCollator, RerankTrainer, RetrievalTrainDataset
from retrievals.losses import ColbertLoss

transformers.logging.set_verbosity_error()

model_name_or_path: str = 'BAAI/bge-m3'
learning_rate: float = 1e-5
batch_size: int = 2
epochs: int = 1
output_dir: str = './checkpoints'

train_dataset = RetrievalTrainDataset(
    'C-MTEB/T2Reranking', positive_key='positive', negative_key='negative', dataset_split='dev'
)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=False)
data_collator = ColBertCollator(
    tokenizer,
    query_max_length=64,
    document_max_length=128,
    positive_key='positive',
    negative_key='negative',
)
model = ColBERT.from_pretrained(
    model_name_or_path,
    colbert_dim=1024,
    loss_fn=ColbertLoss(use_inbatch_negative=False),
)

optimizer = AdamW(model.parameters(), lr=learning_rate)
num_train_steps = int(len(train_dataset) / batch_size * epochs)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=int(0.05 * num_train_steps), num_training_steps=num_train_steps
)

training_args = TrainingArguments(
    learning_rate=learning_rate,
    per_device_train_batch_size=batch_size,
    num_train_epochs=epochs,
    output_dir=output_dir,
    remove_unused_columns=False,
    gradient_accumulation_steps=8,
    logging_steps=100,
)
trainer = RerankTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=data_collator,
)
trainer.optimizer = optimizer
trainer.scheduler = scheduler
trainer.train()

model.save_pretrained(output_dir)
訓(xùn)練過程中會(huì)加載BAAI/bge-m3
模型權(quán)重
The training loss decreases:
{'loss': 7.4858, 'grad_norm': 30.484981536865234, 'learning_rate': 4.076305220883534e-06, 'epoch': 0.6024096385542169}
{'loss': 1.18, 'grad_norm': 28.68316650390625, 'learning_rate': 3.072289156626506e-06, 'epoch': 1.2048192771084336}
{'loss': 1.1399, 'grad_norm': 14.203865051269531, 'learning_rate': 2.068273092369478e-06, 'epoch': 1.8072289156626506}
{'loss': 1.1261, 'grad_norm': 24.30337905883789, 'learning_rate': 1.0642570281124499e-06, 'epoch': 2.4096385542168672}
{'train_runtime': 465.7768, 'train_samples_per_second': 34.265, 'train_steps_per_second': 1.069, 'train_loss': 2.4146631079984, 'epoch': 3.0}
Evaluation
We evaluate with C-MTEB. Before fine-tuning, 10% of the dataset is held out as a test set for validation:
from datasets import load_dataset
dataset = load_dataset("C-MTEB/T2Reranking", split="dev")
ds = dataset.train_test_split(test_size=0.1, seed=42)
ds_train = ds["train"].filter(
    lambda x: len(x["positive"]) > 0 and len(x["negative"]) > 0
)
ds_train.to_json("t2_ranking.jsonl", force_ascii=False)
Metrics before fine-tuning: (screenshot not preserved)
Metrics after fine-tuning:
09/12/2024 15:30:26 - INFO - mteb.evaluation.MTEB - Evaluation for CustomReranking on test took 221.45 seconds
09/12/2024 15:30:26 - INFO - mteb.evaluation.MTEB - Scores: {'map': 0.6950128151840831, 'mrr': 0.8193114944390455, 'evaluation_time': 221.45}
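The reported mrr is the mean reciprocal rank: for each query, take 1/rank of the first relevant document in the reranked list, then average over queries. A minimal sketch, independent of the mteb implementation:

```python
def reciprocal_rank(relevance):
    """relevance: 0/1 flags in ranked order; RR = 1/rank of the first hit."""
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def mean_reciprocal_rank(all_relevance):
    return sum(reciprocal_rank(r) for r in all_relevance) / len(all_relevance)

# Query 1: first relevant doc at rank 2; query 2: at rank 1
print(mean_reciprocal_rank([[0, 1, 0], [1, 0, 0]]))  # (0.5 + 1.0) / 2 = 0.75
```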
Bonus: training ColBERT directly from a language model
The earlier example continued fine-tuning from BAAI/bge-m3; here we instead train a ColBERT model starting from hfl/chinese-roberta-wwm-ext.
MODEL_NAME='hfl/chinese-roberta-wwm-ext'
TRAIN_DATA="/root/kaggle101/src/open-retrievals/t2/t2_ranking.jsonl"
OUTPUT_DIR="/root/kaggle101/src/open-retrievals/t2/ft_out"
cd /root/open-retrievals/src
torchrun --nproc_per_node 1 \
--module retrievals.pipelines.rerank \
--output_dir $OUTPUT_DIR \
--overwrite_output_dir \
--model_name_or_path $MODEL_NAME \
--tokenizer_name $MODEL_NAME \
--model_type colbert \
--do_train \
--data_name_or_path $TRAIN_DATA \
--positive_key positive \
--negative_key negative \
--learning_rate 5e-5 \
--bf16 \
--num_train_epochs 5 \
--per_device_train_batch_size 32 \
--dataloader_drop_last True \
--query_max_length 128 \
--max_length 256 \
--train_group_size 4 \
--unfold_each_positive false \
--save_total_limit 1 \
--logging_steps 100 \
--use_inbatch_negative False