Following the previous project: Kaldi was compiled first, and using speech_data (the aishell1 dataset, with one level of wav directory removed), stages 0, 1 and 2 were completed, i.e. data preparation and fbank feature extraction. Both the kaldi and Speech-Transformer directories were kept as that project's kaggle/working output, then imported as input data of this new project and renamed fbank_done.
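Before starting, it can help to confirm that the imported dataset is mounted where the paths below expect it (Kaggle exposes it as /kaggle/input/fbank-done):
!ls /kaggle/input/fbank-done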
Method 1: copy only some of the files, and use symlinks for the rest
1. Copy the speech-transformer-project/Speech-Transformer project directory
# Copy the speech-transformer-project/Speech-Transformer project directory
!cp -r /kaggle/input/speech-transformer-project/Speech-Transformer /kaggle/working/
2. Switch to the egs/aishell directory and replace the steps and utils directories (symlinks cannot be used here, because the execute permissions of the files inside could not then be changed; the input dataset is read-only)
# Switch to egs/aishell and replace the steps and utils directories
%cd /kaggle/working/Speech-Transformer/egs/aishell
!rm -R steps utils
!cp -r /kaggle/input/fbank-done/kaldi/egs/wsj/s5/steps /kaggle/working/Speech-Transformer/egs/aishell/
!cp -r /kaggle/input/fbank-done/kaldi/egs/wsj/s5/utils /kaggle/working/Speech-Transformer/egs/aishell/
!ls -l
3. Symlink the dump and data directories from fbank_done into the working directory (a quick check follows the listing below)
%cd /kaggle/working/Speech-Transformer/egs/aishell
!ln -s /kaggle/input/fbank-done/Speech-Transformer/egs/aishell/dump /kaggle/working/Speech-Transformer/egs/aishell/dump
!ln -s /kaggle/input/fbank-done/Speech-Transformer/egs/aishell/data /kaggle/working/Speech-Transformer/egs/aishell/data
!ls -l
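As a small sanity check (the JSON path is the one consumed by train.py in step 10 below), verify that the symlinked dump directory resolves into the input dataset:
import os
# Hypothetical quick check: data.json should now be reachable through the dump symlink created above
p = '/kaggle/working/Speech-Transformer/egs/aishell/dump/train/deltafalse/data.json'
print(os.path.realpath(p), os.path.exists(p))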
4. Switch to the utils directory and recreate the run.pl symlink (symlinks are automatically removed when the previous project's output data is saved)
%cd /kaggle/working/Speech-Transformer/egs/aishell/utils
!ln -s /kaggle/working/Speech-Transformer/egs/aishell/utils/parallel/run.pl /kaggle/working/Speech-Transformer/egs/aishell/utils/
# !ls -l
%cd /kaggle/working/Speech-Transformer/egs/aishell
!ls -l
5. Create a lib directory and symlink all shared libraries (.so) from the src subdirectories into it (see the note after the commands)
!mkdir -p /kaggle/working/kaldi/src/lib
!ln -s /kaggle/input/fbank-done/kaldi/src/*/*.so /kaggle/working/kaldi/src/lib/
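The Kaldi binaries were built against shared libraries, so at run time they look for the libkaldi-*.so files; recreating the symlinks at the same /kaggle/working/kaldi/src/lib path as in the previous project lets that lookup succeed. If a binary still cannot find a library, a possible fallback (an assumption, not part of the original recipe) is to add the directory to LD_LIBRARY_PATH:
import os
# Optional fallback (assumption): make the collected .so symlinks visible to the dynamically linked Kaldi binaries
os.environ['LD_LIBRARY_PATH'] = '/kaggle/working/kaldi/src/lib:' + os.environ.get('LD_LIBRARY_PATH', '')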
6. Copy the config directory, which contains files that later steps need to call
!mkdir -p /kaggle/working/kaldi/tools
!cp -r /kaggle/input/fbank-done/kaldi/tools/config /kaggle/working/kaldi/tools/
!ls -l /kaggle/working/kaldi/tools/
7. Install kaldi_io (a small read test follows)
!pip install kaldi_io
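kaldi_io is the Python reader for the Kaldi-format features used by the data loader. As a minimal sketch (the feats.scp path is an assumption based on the dump layout), one matrix can be read to verify the features are accessible:
import kaldi_io
# Assumed path inside the symlinked dump directory; adjust if the layout differs
scp = '/kaggle/working/Speech-Transformer/egs/aishell/dump/train/deltafalse/feats.scp'
for key, mat in kaldi_io.read_mat_scp(scp):
    print(key, mat.shape)  # expect (num_frames, 80), matching --d_input 80 used below
    break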
8. Grant execute permission to the executable files
!chmod +x /kaggle/working/* -R
9. Append search paths for the specified .py files (so modules in different directories can be imported; a quick import check follows)
import sys
sys.path.append(r'/kaggle/working/Speech-Transformer/src/bin')
sys.path.append(r'/kaggle/working/Speech-Transformer/src/data')
sys.path.append(r'/kaggle/working/Speech-Transformer/src/solver')
sys.path.append(r'/kaggle/working/Speech-Transformer/src/transformer')
sys.path.append(r'/kaggle/working/Speech-Transformer/src/utils')
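A quick way to confirm the appended paths took effect is to look up the modules (module names are assumptions based on the repository layout, e.g. src/data/data.py):
import importlib.util
# Module names assumed from the repo layout; adjust if they differ
for m in ('data', 'solver', 'transformer', 'utils'):
    print(m, importlib.util.find_spec(m) is not None)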
10. Run the train.py script directly with the %run magic for training; the log is not saved to a file but printed straight to the output window (a note on the LFR settings follows the command)
# Lines starting with % are IPython magic commands:
# %run executes an external Python script and shows the result directly
# %load loads a local file into the notebook cell, which can then be run
%cd /kaggle/working/Speech-Transformer/egs/aishell
%run /kaggle/working/Speech-Transformer/src/bin/train.py \
--train-json dump/train/deltafalse/data.json \
--valid-json dump/dev/deltafalse/data.json \
--dict data/lang_1char/train_chars.txt \
--LFR_m 7 --LFR_n 6 --d_input 80 \
--n_layers_enc 6 --n_layers_dec 6 --n_head 8 --d_k 64 --d_v 64 \
--d_model 256 --d_word_vec 256 --d_inner 1024 \
--dropout 0.1 --pe_maxlen 5000 --tgt_emb_prj_weight_sharing 1 --label_smoothing 0.1 \
--epochs 25 --shuffle 1 \
--batch-size 64 --batch_frames 0 \
--maxlen-in 800 --maxlen-out 150 \
--num-workers 2 --k 0.2 --warmup_steps 300 \
--save-folder exp/train_result \
--checkpoint 0 --continue-from "" \
--print-freq 10 --visdom 0 --visdom_lr 0 --visdom_epoch 0 --visdom-id "Transformer Training"
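A note on the --LFR_m 7 --LFR_n 6 settings above: with low frame rate, 7 consecutive 80-dim fbank frames are stacked into one 560-dim encoder input frame, and the sequence is downsampled by a factor of about 6. A minimal sketch of the idea (not the repository's exact implementation):
import numpy as np

def build_lfr(feats, m=7, n=6):
    # Stack m consecutive frames, keep one stacked frame every n frames; pad the tail by repeating the last frame
    T, d = feats.shape
    out = []
    for i in range(0, T, n):
        block = feats[i:i + m]
        if block.shape[0] < m:
            pad = np.repeat(block[-1:], m - block.shape[0], axis=0)
            block = np.concatenate([block, pad], axis=0)
        out.append(block.reshape(-1))
    return np.stack(out)

x = np.random.randn(100, 80).astype(np.float32)  # 100 frames of 80-dim fbank
print(build_lfr(x).shape)                        # (17, 560) = (ceil(100/6), LFR_m * d_input)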
11. Alternatively, execute the run.sh script directly; run either this or the %run train.py above, whichever you prefer. (Output is not printed to the window but saved to a train.log file; a log-tail example follows the command.)
# Execute run.sh
%cd /kaggle/working/Speech-Transformer/egs/aishell
# !./run.sh --checkpoint 0 --stage 0 --visdom 0 --visdom_id "train test" --visdom_lr 0 --visdom_epoch 0 --LFR_m 1 --LFR_n 1 --batch_frames 1500 --batch-size 16 --print-freq 100 --num-workers 4
# !./run.sh --checkpoint 0 --stage 1 --visdom 0 --visdom_id "train test" --visdom_lr 0 --visdom_epoch 0 --LFR_m 1 --LFR_n 1 --batch_frames 1500 --batch-size 16 --print-freq 100 --num-workers 4
# !./run.sh --checkpoint 0 --stage 2 --visdom 0 --visdom_id "train test" --visdom_lr 0 --visdom_epoch 0 --LFR_m 1 --LFR_n 1 --batch_frames 1500 --batch-size 16 --print-freq 100 --num-workers 4
# !./run.sh --checkpoint 0 --stage 3 --LFR_m 7 --LFR_n 6 --batch_frames 0 --batch-size 32 --print-freq 10 --num-workers 4 --visdom 0 --visdom_id "train test" --visdom_lr 0 --visdom_epoch 0
!./run.sh --stage 3 --LFR_m 7 --LFR_n 6 \
--d_input 80 --n_layers_enc 6 --n_head 8 --d_k 64 --d_v 64 \
--d_model 256 --d_inner 1024 --dropout 0.1 --pe_maxlen 5000 \
--d_word_vec 256 --n_layers_dec 6 --tgt_emb_prj_weight_sharing 1 \
--label_smoothing 0.1 \
--epochs 25 --shuffle 1 \
--batch-size 128 --batch_frames 0 \
--maxlen-in 800 --maxlen-out 150 \
--num-workers 2 --k 0.2 --warmup_steps 300 \
--checkpoint 0 --continue-from "" --print-freq 10 \
--visdom 0 --visdom_lr 0 --visdom_epoch 0 --visdom-id "Transformer Training"
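After run.sh finishes, the saved log can be inspected; the experiment subdirectory name is built from the hyperparameters, so the glob below is an assumption:
!tail -n 20 exp/*/train.log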
By running %run train.py, the training progress can be watched in the console and the training strategy adjusted in time.
When run.sh is executed directly, training progress cannot be seen in real time, so use Create Save & Run All Version and let it run in the background. (Remember to select Run All with GPU; GPU usage time is still counted during the run, with a quota of roughly 30+ hours per week.)
The space occupied by private datasets seems to be the final size after removing data that duplicates the platform's public datasets?