Transcriptome analysis: RNA-seq pipeline with kallisto or salmon


Compared with RNA-seq pipelines built on genome aligners such as HISAT2 and STAR, kallisto and salmon are much faster, because they quantify directly against a transcriptome reference (i.e., the cDNA sequences) using k-mer-based pseudo-alignment. If you only want to know how gene expression changes in an RNA-seq experiment and do not need to discover novel transcripts, you can use kallisto or salmon directly to obtain transcript abundances.
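To illustrate the pseudo-alignment idea (this is a toy sketch, not kallisto's or salmon's actual implementation): a read is assigned to the set of transcripts compatible with every one of its k-mers. The transcript sequences and k-mer size below are invented for the example.

```python
from collections import defaultdict

def build_kmer_index(transcripts, k=5):
    """Map each k-mer to the set of transcripts containing it."""
    index = defaultdict(set)
    for name, seq in transcripts.items():
        for i in range(len(seq) - k + 1):
            index[seq[i:i + k]].add(name)
    return index

def pseudo_align(read, index, k=5):
    """Intersect the transcript sets of the read's k-mers.

    Returns the compatibility class: the transcripts consistent with
    every k-mer of the read (empty set if none)."""
    hits = None
    for i in range(len(read) - k + 1):
        tx = index.get(read[i:i + k], set())
        hits = tx if hits is None else hits & tx
        if not hits:
            return set()
    return hits or set()

transcripts = {
    "tx1": "ACGTACGTGGAA",
    "tx2": "ACGTACGTCCTT",
}
idx = build_kmer_index(transcripts, k=5)
print(pseudo_align("ACGTACGT", idx, k=5))  # shared prefix -> {'tx1', 'tx2'}
print(pseudo_align("CGTGGAA", idx, k=5))   # unique to tx1 -> {'tx1'}
```

A read whose k-mers all fall in the shared prefix stays ambiguous between the two transcripts; the real tools resolve such ambiguity later with an EM step over these compatibility classes.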

install software

conda install -c bioconda kallisto salmon -y

Filter raw fastq files

Any RNA-seq pipeline starts by quality-controlling the raw fastq reads: trimming adapters, barcodes, and other artificially introduced sequences, and discarding low-quality reads. Common tools for this include cutadapt, flexbar, and trimmomatic.
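As a sketch of how such a trimming step might be scripted with cutadapt (the `-a/-A/-q/-m/-o/-p` flags are real cutadapt options; the adapter sequence, directory layout, and thresholds here are assumptions to adapt to your data):

```python
# Assemble a paired-end cutadapt command for one sample (command
# construction only; nothing is executed here).
def cutadapt_cmd(sample, indir="01.rename", outdir="02.trim",
                 adapter="AGATCGGAAGAGC", quality=20, min_len=50):
    return " ".join([
        "cutadapt",
        "-a", adapter,          # 3' adapter on read 1
        "-A", adapter,          # 3' adapter on read 2
        "-q", str(quality),     # quality-trim read ends
        "-m", str(min_len),     # drop read pairs shorter than this
        "-o", f"{outdir}/{sample}_1.fastq.gz",
        "-p", f"{outdir}/{sample}_2.fastq.gz",
        f"{indir}/{sample}.r1.fq.gz",
        f"{indir}/{sample}.r2.fq.gz",
    ])

print(cutadapt_cmd("HF_NC01_RNA_b2"))
```

Writing the output as `<SampleID>_1.fastq.gz` / `<SampleID>_2.fastq.gz` in `02.trim` matches the file names that the quantification script later in this post looks for.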

download reference

Transcript sequences can be downloaded from several databases; here we download the mouse cDNA set from the ENSEMBL database.

wget ftp://ftp.ensembl.org/pub/release-100/fasta/mus_musculus/cdna/Mus_musculus.GRCm38.cdna.all.fa.gz
gunzip Mus_musculus.GRCm38.cdna.all.fa.gz

build index

kallisto index -i kallisto_index/Mus_musculus.GRCm38 Mus_musculus.GRCm38.cdna.all.fa
salmon index -i salmon_index/Mus_musculus.GRCm38 -t Mus_musculus.GRCm38.cdna.all.fa -p 20

Generate batch-run scripts

Prepare the input file samples.path.tsv (listing the raw data files):

SampleID LaneID Path
HF_NC01 HF_NC01_RNA_b2.r2 /01.rename/HF_NC01_RNA_b2.r2.fq.gz
HF_NC01 HF_NC01_RNA_b2.r1 /01.rename/HF_NC01_RNA_b2.r1.fq.gz
HF_NC02 HF_NC02_RNA_b2.r1 /01.rename/HF_NC02_RNA_b2.r1.fq.gz
HF_NC02 HF_NC02_RNA_b2.r2 /01.rename/HF_NC02_RNA_b2.r2.fq.gz
HF_NC03 HF_NC03_RNA_novo.r2 /01.rename/HF_NC03_RNA_novo.r2.fq.gz
HF_NC03 HF_NC03_RNA_novo.r1 /01.rename/HF_NC03_RNA_novo.r1.fq.gz

Trimmed-read directory: 02.trim (high-quality reads; the script below expects files named <SampleID>_1.fastq.gz and <SampleID>_2.fastq.gz)

Write the script quant_feature.py to generate a shell file of batch commands

# key command
kallisto quant \
    -i kallisto_index/Mus_musculus.GRCm38 \
    -g gtf_file \
    -o kallisto_quant/HF07 \
    -b 30 \
    -t 10 02.trim/HF07_1.fastq.gz 02.trim/HF07_2.fastq.gz

salmon quant \
    -i salmon_index/Mus_musculus.GRCm38/ \
    -l IU -p 10 \
    -1 02.trim/HF07_1.fastq.gz \
    -2 02.trim/HF07_2.fastq.gz \
    -o salmon_quant/HF07 \
    --numBootstraps 30

#!/usr/bin/python

import os
import re
import sys
import argparse as ap
from collections import defaultdict

current_abosulte_path = os.getcwd()


def parse_arguments(args):
    parser = ap.ArgumentParser(description= "the initial script of pipeline")
    parser.add_argument('-t','--trim',metavar='<string>', type=str, help='trimmed data dir')
    parser.add_argument('-s','--sample',metavar='<string>', type=str, help='SampleID|LaneID|Path')
    parser.add_argument('-o','--output',metavar='<string>', type=str, help='the output directory')
    parser.add_argument('-g','--gtf',metavar='<string>', type=str, help='the genome annotation file')
    parser.add_argument('-i','--index',metavar='<string>', type=str, help='the index of genome')
    parser.add_argument('-p','--prefix',metavar='<string>', type=str, help='the prefix of output file')
    return parser.parse_args()


def create_folder(folder):
    '''
    create folder for the result
    '''
    folder_path = os.path.join(current_abosulte_path, folder)
    if not os.path.exists(folder_path):
        os.makedirs(folder_path)
    return folder_path


def get_path_file(sample):
    '''
    the connections among sampleid, laneid and path
    '''
    dict_lane   = defaultdict(list)
    with open(sample, 'r') as f:
        for line in f:
            line = line.strip()
            if not line.startswith("SampleID"):
                group = re.split(r'\s+', line)
                dict_lane[group[0]].append(group[1])

    return dict_lane


def generate_shell(trim_folder, sample_file, outdir, gtf_file, genome_index, file_name):
    trim_file_folder = os.path.join(current_abosulte_path, trim_folder)
    output_path = create_folder(outdir)
    output_name = os.path.join(current_abosulte_path, file_name)
    sample_name = get_path_file(sample_file)

    output_f = open(output_name, 'w')
    for key in sample_name:
        out_sample_prefix = os.path.join(output_path, key)
        #target1 = [x for x in sample_name[key] if re.findall(r'r1', x)]
        #fq1 = os.path.join(trim_file_folder, ("_".join([target1[0], "val_1.fq.gz"])))
        #target2 = [x for x in sample_name[key] if re.findall(r'r2', x)]
        #fq2 = os.path.join(trim_file_folder, ("_".join([target2[0], "val_2.fq.gz"])))
        fq1 = os.path.join(trim_file_folder, ("_".join([key, "1.fastq.gz"])))
        fq2 = os.path.join(trim_file_folder, ("_".join([key, "2.fastq.gz"])))
        if os.path.isfile(fq1) and os.path.isfile(fq2):
            fq = " ".join([fq1, fq2])

            if re.findall(r'kallisto', genome_index):
                shell = " ".join(["kallisto quant -i", genome_index, "-g", \
                                gtf_file, "-o", out_sample_prefix, "-b 30 -t 10", fq])
            elif re.findall(r'salmon', genome_index):
                shell = " ".join(["salmon quant -i",  genome_index, \
                                "-l IU -p 10", \
                                "-1", fq1, "-2", fq2, \
                                "-o", out_sample_prefix,\
                                "--numBootstraps 30"])
            else:
                # index path matches neither tool; skip this sample
                continue
            output_f.write(shell + "\n")

    output_f.close()


def main():
    args = parse_arguments(sys.argv)
    generate_shell(args.trim, args.sample, args.output, args.gtf, args.index, args.prefix)


if __name__ == '__main__':
    main()

Run the script to generate the RUN files

# kallisto 
python quant_feature.py -t 02.trim/ -s 46RNA.samples.path.tsv -o kallisto_quant -g /disk/share/database/Mus_musculus.GRCm38_release100/Mus_musculus.GRCm38.cdna.all.fa -i /disk/share/database/Mus_musculus.GRCm38_release100/kallisto_index/Mus_musculus.GRCm38 -p RUN.kallisto_quant.sh
# salmon
python quant_feature.py -t 02.trim/ -s 46RNA.samples.path.tsv -o salmon_quant -g /disk/share/database/Mus_musculus.GRCm38_release100/Mus_musculus.GRCm38.cdna.all.fa -i /disk/share/database/Mus_musculus.GRCm38_release100/salmon_index/Mus_musculus.GRCm38/ -p RUN.salmon_quant.sh
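The generated RUN files are plain shell scripts with one quantification command per line. A minimal Python driver for them might look like this (the file name is an example; on a cluster you would more likely submit the lines to a scheduler or run them with GNU parallel):

```python
# Execute each non-empty, non-comment line of a generated RUN file in
# sequence; with dry_run=True, only collect the commands without running.
import subprocess

def run_shell_file(path, dry_run=False):
    commands = []
    for line in open(path):
        cmd = line.strip()
        if not cmd or cmd.startswith("#"):
            continue
        if not dry_run:
            # check=True stops the batch as soon as one command fails
            subprocess.run(cmd, shell=True, check=True)
        commands.append(cmd)
    return commands
```

For example, `run_shell_file("RUN.kallisto_quant.sh", dry_run=True)` lists what would be executed without running anything.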

Obtain the expression matrix:

Write get_merged_table.R to build a single matrix that merges the expression values of all samples from the quant directory

#!/usr/bin/env Rscript
library(dplyr)
library(argparser)
library(tximport)

# parameter input
parser <- arg_parser("merge transcript abundance files") %>%
    add_argument("--sample", short = "-s",
        help = "the sampleID file") %>%
    add_argument("--dir", short = "-d",
        help = "the dir of the quantification output") %>%
    add_argument("--gene", short = "-g",
        help = "transcriptID-to-geneID mapping file") %>%
    add_argument("--type", short = "-t",
        help = "the quantification software (kallisto or salmon)") %>%
    add_argument("--count", short = "-c",
        help = "the countsFromAbundance scaling method") %>%
    add_argument("--name", short = "-n",
        help = "the name of the per-sample abundance file") %>%
    add_argument("--out", short = "-o",
        help = "the merged file's dir and name")

args <- parse_args(parser)

# prepare for function
sample <- args$sample
dir    <- args$dir
tx2gen <- args$gene
type   <- args$type
scount <- args$count
name   <- args$name
prefix <- args$out

tx2gene_table <- read.csv(tx2gen, header=T)
tx2gene <- tx2gene_table %>%
            dplyr::select(tx_id, gene_id, gene_name)
phen <- read.table(sample, header=T, sep="\t")
sample_name <- unique(as.character(phen$SampleID))
files <- file.path(dir, sample_name, name)
names(files) <- sample_name
result <- tximport(files,
                   type = type,
                   countsFromAbundance = scount,
                   tx2gene = tx2gene,
                   ignoreTxVersion = T)
save(result, file = prefix)

Run the script:

Rscript get_merged_table.R \
    -s samples.path.tsv \
    -d kallisto_quant/ \
    -g EnsDb.Mmusculus.v79_transcriptID2geneID.csv \
    -t kallisto \
    -c no \
    -n abundance.tsv \
    -o kallisto_quant.RData

Rscript get_merged_table.R \
    -s samples.path.tsv \
    -d salmon_quant/ \
    -g EnsDb.Mmusculus.v79_transcriptID2geneID.csv \
    -t salmon \
    -c no \
    -n quant.sf \
    -o salmon_quant.RData

Obtain TPM and FPKM expression matrices (this step can be run on a local machine)

library(dplyr)
library(tibble)
load("kallisto_quant.RData")
phen <- read.csv("phenotype.csv")
kallisto_count <- round(result$counts) %>% data.frame()
kallisto_length <- apply(result$length, 1, mean) %>% data.frame() %>%
  setNames("Length")

get_profile <- function(matrix, exon_length,
                        y=phen,
                        occurrence=0.2, 
                        ncount=10){
  # matrix=count_format
  # exon_length=count_length 
  # occurrence=0.2
  # ncount=10
  
  # filter with occurrence and ncount
  prf <- matrix %>% rownames_to_column("Type") %>% 
    filter(apply(dplyr::select(., -one_of("Type")), 1, 
                 function(x){sum(x > 0)/length(x)}) > occurrence) %>%
            data.frame(.) %>% 
    column_to_rownames("Type")
  prf <- prf[rowSums(prf) > ncount, ]
  
  # change sampleID 
  sid <- intersect(colnames(prf), y$SampleID)
  phen.cln <- y %>% filter(SampleID%in%sid) %>%
    arrange(SampleID)
  prf.cln <- prf %>% dplyr::select(as.character(phen.cln$SampleID))
  # determine the right order between profile and phenotype 
  for(i in 1:ncol(prf.cln)){ 
    if (!(colnames(prf.cln)[i] == phen.cln$SampleID[i])) {
      stop(paste0(i, " Wrong"))
    }
  }  
  colnames(prf.cln) <- phen.cln$SampleID_v2  
  
  # normalize the profile into FPKM and TPM
  exon_length.cln <- exon_length[as.character(rownames(prf.cln)), , F]
  for(i in 1:nrow(exon_length.cln)){ 
    if (!(rownames(exon_length.cln)[i] == rownames(prf.cln)[i])) {
      stop(paste0(i, " Wrong"))
    }
  }  
  dat <- prf.cln/exon_length.cln$Length
  dat_FPKM <- t(t(dat)/colSums(prf.cln)) * 10^9
  dat_TPM <- t(t(dat)/colSums(dat)) * 10^6
  
  dat_count <- prf.cln[order(rownames(prf.cln)), ]
  dat_fpkm <- dat_FPKM[order(rownames(dat_FPKM)), ]
  dat_tpm <- dat_TPM[order(rownames(dat_TPM)), ]
  
  res <- list(count=dat_count, fpkm=dat_fpkm, tpm=dat_tpm)
  return(res)
}
prf_out <- function(result, type="STAR"){
  
  dir <- "../../Result/Profile/"
  if(!dir.exists(dir)){
    dir.create(dir)
  }
  count_name <- paste0(dir, type, "_filtered_counts.tsv")
  FPKM_name <- paste0(dir, type, "_filtered_FPKM.tsv")
  TPM_name <- paste0(dir, type, "_filtered_TPM.tsv")  
  
  write.table(x = result$count, 
              file = count_name, 
              sep = '\t', 
              quote = F,
              col.names = NA)
  write.table(x = result$fpkm, 
              file = FPKM_name, 
              sep = '\t', 
              quote = F,
              col.names = NA)
  write.table(x = result$tpm, 
              file = TPM_name, 
              sep = '\t', 
              quote = F,
              col.names = NA)
}

kallisto_prf <- get_profile(kallisto_count, kallisto_length)
# output
prf_out(kallisto_prf, type = "kallisto")
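The normalization implemented in get_profile corresponds to the usual definitions. Writing $c_{ij}$ for the count of gene $i$ in sample $j$ and $l_i$ for its effective length:

```latex
\mathrm{FPKM}_{ij} = \frac{c_{ij}/l_i}{\sum_{i'} c_{i'j}} \times 10^{9},
\qquad
\mathrm{TPM}_{ij} = \frac{c_{ij}/l_i}{\sum_{i'} c_{i'j}/l_{i'}} \times 10^{6}
```

Note that each TPM column sums to $10^6$ by construction, whereas FPKM column sums vary between samples.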

References

  1. kallisto manual

  2. salmon github
