How to build a SNP-based phylogenetic tree

Building a phylogenetic tree from SNPs or indels reveals the relationships among individuals in a population: individuals with similar variants tend to fall into the same cluster of the tree, and a good tree gives you a rough classification of the population (how many clusters it contains; individuals of the same subspecies or with close kinship usually form one cluster). This is an important part of population genetics. The core principle of tree construction is to extract the genotype information at every SNP site and then compute the differences across these variant sites to obtain the "distance" used by the algorithm.
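
To make that notion of "distance" concrete, here is a small Python 3 sketch (purely illustrative, not part of any tool used below; the sample names and genotypes are made up). It computes, for each pair of samples, the fraction of jointly called SNP sites at which their genotypes differ — a simple p-distance over the genotype matrix, which distance-based methods such as neighbor joining can build a tree from.

# Toy sketch: pairwise genotype distances from a small SNP matrix.
# Rows are SNP sites, columns are samples; "./." marks a missing call.
import itertools

samples = ["ind1", "ind2", "ind3"]          # hypothetical sample names
genotypes = [                               # one row per SNP site
    ["0/0", "0/1", "1/1"],
    ["0/0", "0/0", "1/1"],
    ["0/1", "./.", "0/1"],
]

def p_distance(i, j):
    """Fraction of jointly called sites at which samples i and j differ."""
    diffs = compared = 0
    for row in genotypes:
        a = sorted(row[i].split("/"))       # unordered allele pair
        b = sorted(row[j].split("/"))
        if "." in a or "." in b:            # skip sites with a missing call
            continue
        compared += 1
        diffs += (a != b)
    return diffs / compared if compared else float("nan")

for i, j in itertools.combinations(range(len(samples)), 2):
    print(samples[i], samples[j], round(p_distance(i, j), 3))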

In practice the number of samples in a population is often in the hundreds, so the raw SNP calls (i.e. the VCF file) tend to be very large. Feeding the unfiltered file directly into tree building not only introduces large errors (low-quality sites distort the distance calculation), but most tree-building software also struggles with a file that size: it consumes a lot of memory and the computation is heavy, so the tree could take weeks or months to finish.

So the first step is a preliminary filtering of your raw SNP calling results.

Filtering the raw SNP VCF file

The classic filtering approach: keep only the relatively reliable variant sites.

Only include SNPs with MAF >= 0.05 and a genotyping rate of at least 90% (at most 10% missing):

~/biosoft/plink --vcf All_Gm_combine.vcf --maf 0.05 --geno 0.1 --recode  vcf-iid --out All_test_11 --allow-extra-chr
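
As a sanity check on what --maf 0.05 and --geno 0.1 mean, this rough Python 3 sketch (illustrative only; it assumes the uncompressed, biallelic output of the command above, All_test_11.vcf, looks only at the GT field, and its per-site numbers may differ slightly from PLINK's internal accounting) recomputes the missing rate and minor allele frequency of each site:

# Rough sketch: per-site missing rate and minor allele frequency (MAF)
# from an uncompressed, biallelic VCF such as the PLINK output above.
def site_stats(vcf_line):
    fields = vcf_line.rstrip("\n").split("\t")
    gts = [f.split(":")[0].replace("|", "/") for f in fields[9:]]
    called = [g for g in gts if g not in (".", "./.")]
    missing_rate = 1.0 - float(len(called)) / len(gts)
    alleles = [a for g in called for a in g.split("/")]
    alt_freq = alleles.count("1") / float(len(alleles)) if alleles else 0.0
    maf = min(alt_freq, 1.0 - alt_freq)
    return missing_rate, maf

with open("All_test_11.vcf") as vcf:
    for line in vcf:
        if line.startswith("#"):
            continue
        missing, maf = site_stats(line)
        if missing > 0.1 or maf < 0.05:
            print("unexpected site:", line.split("\t")[0], line.split("\t")[1])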

Next, prune the markers by linkage disequilibrium (LD) and extract the retained subset.

A quick look at the parameter used:

--indep-pairwise <window size>['kb'] <step size (variant ct)> <r^2 threshold>

~/biosoft/plink --vcf All_test_11.vcf --allow-extra-chr --indep-pairwise 50 10 0.2 --out test_12

~/biosoft/plink --allow-extra-chr --extract test_12.prune.in --make-bed --out test_12.prune.in --recode vcf-iid --vcf All_test_11.vcf
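
Here --indep-pairwise 50 10 0.2 slides a 50-variant window along each chromosome in steps of 10 variants and, within every window, removes one SNP from each pair whose genotype correlation r^2 exceeds 0.2; test_12.prune.in lists the SNPs that survive. For intuition about that r^2, a toy calculation on two hypothetical SNPs coded as alternate-allele counts (0, 1, 2) might look like this:

# Toy r^2 between two SNPs, genotypes coded as alternate-allele counts.
def r_squared(snp_a, snp_b):
    n = float(len(snp_a))
    mean_a, mean_b = sum(snp_a) / n, sum(snp_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(snp_a, snp_b)) / n
    var_a = sum((a - mean_a) ** 2 for a in snp_a) / n
    var_b = sum((b - mean_b) ** 2 for b in snp_b) / n
    return cov ** 2 / (var_a * var_b)

snp1 = [0, 0, 1, 1, 2, 2, 0, 1]   # made-up genotype vectors
snp2 = [0, 1, 1, 1, 2, 2, 0, 0]
print(r_squared(snp1, snp2))      # a pair with r^2 > 0.2 would be pruned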

For the actual tree building I used two different tools with different algorithms.

Method 1 MEGA X

MEGA is a classic tree-building program: it handles everything from alignment to tree construction and offers a good range of parameter choices. It also runs on multiple platforms, with both a graphical user interface and a command-line version, which makes it convenient to use.

Download:

https://www.megasoftware.net/

First, convert the VCF into PHYLIP format.

Converting the format into PHYLIP

python2 vcf2phylip.py -i All_edit_Gm_tab.vcf

The script used:

'''
The script converts a collection of SNPs in VCF format into a PHYLIP, FASTA, 
NEXUS, or binary NEXUS file for phylogenetic analysis. The code is optimized
to process VCF files with sizes >1GB. For small VCF files the algorithm slows
down as the number of taxa increases (but is still fast).
'''


__author__      = "Edgardo M. Ortiz"
__credits__     = "Juan D. Palacio-Mejía"
__version__     = "1.5"
__email__       = "e.ortiz.v@gmail.com"
__date__        = "2018-04-24"


import sys
import os
import argparse


def main():
    parser = argparse.ArgumentParser(description="Converts SNPs in VCF format into an alignment for phylogenetic analysis")
    parser.add_argument("-i", "--input", action="store", dest="filename", required=True,
        help="Name of the input VCF file")
    parser.add_argument("-m", "--min-samples-locus", action="store", dest="min_samples_locus", type=int, default=4,
        help="Minimum of samples required to be present at a locus, default=4 since is the minimum for phylogenetics.")
    parser.add_argument("-o", "--outgroup", action="store", dest="outgroup", default="",
        help="Name of the outgroup in the matrix. Sequence will be written as first taxon in the alignment.")
    parser.add_argument("-p", "--phylip-disable", action="store_true", dest="phylipdisable", default=False,
        help="A PHYLIP matrix is written by default unless you enable this flag")
    parser.add_argument("-f", "--fasta", action="store_true", dest="fasta", default=False,
        help="Write a FASTA matrix, disabled by default")
    parser.add_argument("-n", "--nexus", action="store_true", dest="nexus", default=False,
        help="Write a NEXUS matrix, disabled by default")
    parser.add_argument("-b", "--nexus-binary", action="store_true", dest="nexusbin", default=False,
        help="Write a binary NEXUS matrix for analysis of biallelic SNPs in SNAPP, disabled by default")
    args = parser.parse_args()


    filename = args.filename
    min_samples_locus = args.min_samples_locus
    outgroup = args.outgroup
    phylipdisable = args.phylipdisable
    fasta = args.fasta
    nexus = args.nexus
    nexusbin = args.nexusbin


    # Dictionary of IUPAC ambiguities for nucleotides
    # '*' means deletion for GATK (and other software?)
    # Deletions are ignored when making the consensus
    amb = {("A","A"):"A",
           ("A","C"):"M",
           ("A","G"):"R",
           ("A","N"):"A",
           ("A","T"):"W",
           ("C","A"):"M",
           ("C","C"):"C",
           ("C","G"):"S",
           ("C","N"):"C",
           ("C","T"):"Y",
           ("G","A"):"R",
           ("G","C"):"S",
           ("G","G"):"G",
           ("G","N"):"G",
           ("G","T"):"K",
           ("N","A"):"A",
           ("N","C"):"C",
           ("N","G"):"G",
           ("N","N"):"N",
           ("N","T"):"T",
           ("T","A"):"W",
           ("T","C"):"Y",
           ("T","G"):"K",
           ("T","N"):"T",
           ("T","T"):"T",
           ("*","*"):"-",
           ("A","*"):"A",
           ("*","A"):"A",
           ("C","*"):"C",
           ("*","C"):"C",
           ("G","*"):"G",
           ("*","G"):"G",
           ("T","*"):"T",
           ("*","T"):"T",
           ("N","*"):"N",
           ("*","N"):"N"}


    # Dictionary for translating biallelic SNPs into SNAPP
    # 0 is homozygous reference
    # 1 is heterozygous
    # 2 is homozygous alternative
    gen_bin = {"./.":"?",
               ".|.":"?",
               "0/0":"0",
               "0|0":"0",
               "0/1":"1",
               "0|1":"1",
               "1/0":"1",
               "1|0":"1",
               "1/1":"2",
               "1|1":"2"}


    # Process header of VCF file
    with open(filename) as vcf:

        # Create a list to store sample names
        sample_names = []

        # Keep track of longest sequence name for padding with spaces in the output file
        len_longest_name = 0

        # Look for the line in the VCF header with the sample names
        for line in vcf:
            if line.startswith("#CHROM"):

                # Split line into fields
                broken = line.strip("\n").split("\t")

                # If the minimum-samples-per-locus parameter is larger than the number of
                # species in the alignment make it the same as the number of species
                if min_samples_locus > len(broken[9:]):
                    min_samples_locus = len(broken[9:])

                # Create a list of sample names and keep track of the longest name length
                for i in range(9, len(broken)):
                    name_sample = broken[i].replace("./","") # GATK adds "./" to sample names
                    sample_names.append(name_sample)
                    len_longest_name = max(len_longest_name, len(name_sample))
                break

    vcf.close()


    # Output filename will be the same as input file, indicating the minimum of samples specified
    outfile = filename.replace(".vcf",".min"+str(min_samples_locus))

    # We need to create an intermediate file to hold the sequence data 
    # vertically and then transpose it to create the matrices
    if fasta or nexus or not phylipdisable:
        temporal = open(outfile+".tmp", "w")
    
    # if binary NEXUS is selected also create a separate temporal
    if nexusbin:
        temporalbin = open(outfile+".bin.tmp", "w")


    ##################
    # PROCESS VCF FILE
    index_last_sample = len(sample_names)+9

    # Start processing SNPs of VCF file
    with open(filename) as vcf:

        # Initialize line counter
        snp_num = 0
        snp_accepted = 0
        snp_shallow = 0
        snp_multinuc = 0
        snp_biallelic = 0
        while True:

            # Load large chunks of file into memory
            vcf_chunk = vcf.readlines(100000)
            if not vcf_chunk:
                break

            # Now process the SNPs one by one
            for line in vcf_chunk:
                if not line.startswith("#") and line.strip("\n") != "": # pyrad sometimes produces an empty line after the #CHROM line

                    # Split line into columns
                    broken = line.strip("\n").split("\t")
                    for g in range(9,len(broken)):
                        if broken[g] in [".", ".|."]:
                            broken[g] = "./."

                    # Keep track of number of genotypes processed
                    snp_num += 1

                    # Print progress every 500000 lines
                    if snp_num % 500000 == 0:
                        print str(snp_num)+" genotypes processed"

                    # Check if the SNP has the minimum of samples required
                    if (len(broken[9:]) - ''.join(broken[9:]).count("./.")) >= min_samples_locus:
                        
                        # Check that ref genotype is a single nucleotide and alternative genotypes are single nucleotides
                        if len(broken[3]) == 1 and (len(broken[4])-broken[4].count(",")) == (broken[4].count(",")+1):

                            # Add to running sum of accepted SNPs
                            snp_accepted += 1

                            # If nucleotide matrices are requested
                            if fasta or nexus or not phylipdisable:

                                # Create a dictionary for genotype to nucleotide translation
                                # each SNP may code the nucleotides in a different manner
                                nuc = {str(0):broken[3], ".":"N"}
                                for n in range(len(broken[4].split(","))):
                                    nuc[str(n+1)] = broken[4].split(",")[n]

                                # Translate genotypes into nucleotides and then obtain the IUPAC ambiguity
                                # for heterozygous SNPs, and append to DNA sequence of each sample
                                site_tmp = ''.join([(amb[(nuc[broken[i][0]], nuc[broken[i][1]])]) for i in range(9, index_last_sample)])

                                # Write entire row of single nucleotide genotypes to temporary file
                                temporal.write(site_tmp+"\n")

                            # Write binary NEXUS for SNAPP if requested
                            if nexusbin:

                                # Check that the SNP only has two alleles
                                if len(broken[4]) == 1:
                                    
                                    # Add to running sum of biallelic SNPs
                                    snp_biallelic += 1

                                    # Translate genotype into 0 for homozygous ref, 1 for heterozygous, and 2 for homozygous alt
                                    binsite_tmp = ''.join([(gen_bin[broken[i][0:3]]) for i in range(9, index_last_sample)])

                                    # Write entire row to temporary file
                                    temporalbin.write(binsite_tmp+"\n")

                        else:
                            # Keep track of loci rejected due to multinucleotide genotypes
                            snp_multinuc += 1
                            # Keep track of loci rejected due to exceeded missing data
                            snp_shallow += 1

                    else:
                        # Keep track of loci rejected due to exceeded missing data
                        snp_shallow += 1

        # Print useful information about filtering of SNPs
        print str(snp_num) + " genotypes processed in total"
        print "\n"
        print str(snp_shallow) + " genotypes were excluded because they exceeded the amount of missing data allowed"
        print str(snp_multinuc) + " genotypes passed missing data filter but were excluded for not being SNPs"
        print str(snp_accepted) + " SNPs passed the filters"
        if nexusbin:
            print str(snp_biallelic) + " SNPs were biallelic and selected for binary NEXUS"
        print "\n"

    vcf.close()
    if fasta or nexus or not phylipdisable:
        temporal.close()
    if nexusbin:
        temporalbin.close()


    #######################
    # WRITE OUTPUT MATRICES

    if not phylipdisable:
        output_phy = open(outfile+".phy", "w")
        header_phy = str(len(sample_names))+" "+str(snp_accepted)+"\n"
        output_phy.write(header_phy)

    if fasta:
        output_fas = open(outfile+".fasta", "w")

    if nexus:
        output_nex = open(outfile+".nexus", "w")
        header_nex = "#NEXUS\n\nBEGIN DATA;\n\tDIMENSIONS NTAX=" + str(len(sample_names)) + " NCHAR=" + str(snp_accepted) + ";\n\tFORMAT DATATYPE=DNA" + " MISSING=N" + " GAP=- ;\nMATRIX\n"
        output_nex.write(header_nex)

    if nexusbin:
        output_nexbin = open(outfile+".bin.nexus", "w")
        header_nexbin = "#NEXUS\n\nBEGIN DATA;\n\tDIMENSIONS NTAX=" + str(len(sample_names)) + " NCHAR=" + str(snp_biallelic) + ";\n\tFORMAT DATATYPE=SNP" + " MISSING=?" + " GAP=- ;\nMATRIX\n"
        output_nexbin.write(header_nexbin)


    # Store index of outgroup in list of sample names
    idx_outgroup = "NA"

    # Write outgroup as first sequence in alignment if the name is specified
    if outgroup in sample_names:
        idx_outgroup = sample_names.index(outgroup)

        if fasta or nexus or not phylipdisable:
            with open(outfile+".tmp") as tmp_seq:
                seqout = ""

                # This is where the transposing happens
                for line in tmp_seq:
                    seqout += line[idx_outgroup]

                # Write FASTA line
                if fasta:
                    output_fas.write(">"+sample_names[idx_outgroup]+"\n"+seqout+"\n")
                
                # Pad sequences names and write PHYLIP or NEXUS lines
                padding = (len_longest_name + 3 - len(sample_names[idx_outgroup])) * " "
                if not phylipdisable:
                    output_phy.write(sample_names[idx_outgroup]+padding+seqout+"\n")
                if nexus:
                    output_nex.write(sample_names[idx_outgroup]+padding+seqout+"\n")

                # Print current progress
                print "Outgroup, "+outgroup+", added to the matrix(ces)."

        if nexusbin:
            with open(outfile+".bin.tmp") as bin_tmp_seq:
                seqout = ""

                # This is where the transposing happens
                for line in bin_tmp_seq:
                    seqout += line[idx_outgroup]

                # Write line of binary SNPs to NEXUS
                padding = (len_longest_name + 3 - len(sample_names[idx_outgroup])) * " "
                output_nexbin.write(sample_names[idx_outgroup]+padding+seqout+"\n")

                # Print current progress
                print "Outgroup, "+outgroup+", added to the binary matrix."


    # Write sequences of the ingroup
    for s in range(0, len(sample_names)):
        if s != idx_outgroup:
            if fasta or nexus or not phylipdisable:
                with open(outfile+".tmp") as tmp_seq:
                    seqout = ""

                    # This is where the transposing happens
                    for line in tmp_seq:
                        seqout += line[s]

                    # Write FASTA line
                    if fasta:
                        output_fas.write(">"+sample_names[s]+"\n"+seqout+"\n")
                    
                    # Pad sequences names and write PHYLIP or NEXUS lines
                    padding = (len_longest_name + 3 - len(sample_names[s])) * " "
                    if not phylipdisable:
                        output_phy.write(sample_names[s]+padding+seqout+"\n")
                    if nexus:
                        output_nex.write(sample_names[s]+padding+seqout+"\n")

                    # Print current progress
                    print "Sample "+str(s+1)+" of "+str(len(sample_names))+", "+sample_names[s]+", added to the nucleotide matrix(ces)."

            if nexusbin:
                with open(outfile+".bin.tmp") as bin_tmp_seq:
                    seqout = ""

                    # This is where the transposing happens
                    for line in bin_tmp_seq:
                        seqout += line[s]

                    # Write line of binary SNPs to NEXUS
                    padding = (len_longest_name + 3 - len(sample_names[s])) * " "
                    output_nexbin.write(sample_names[s]+padding+seqout+"\n")

                    # Print current progress
                    print "Sample "+str(s+1)+" of "+str(len(sample_names))+", "+sample_names[s]+", added to the binary matrix."

    if not phylipdisable:
        output_phy.close()
    if fasta:
        output_fas.close()
    if nexus:
        output_nex.write(";\nEND;\n")
        output_nex.close()
    if nexusbin:
        output_nexbin.write(";\nEND;\n")
        output_nexbin.close()

    if fasta or nexus or not phylipdisable:
        os.remove(outfile+".tmp")
    if nexusbin:
        os.remove(outfile+".bin.tmp")


    print "\nDone!\n"


if __name__ == "__main__":
    main()

Then use MEGA X's built-in converter to turn the PHYLIP file into MEGA format.


Build the phylogenetic tree in MEGA X with its commonly used parameters. Since my samples are all subspecies of the same species, a neighbor-joining (NJ) tree is already adequate.

Finally a .nwk (Newick) file is produced, which can then be polished with other visualization tools.
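
If you would rather stay on the command line, roughly the same distance-based NJ step can be scripted with Biopython. This is a sketch of an alternative route, not what MEGA does internally; the input name assumes the default vcf2phylip output for All_edit_Gm_tab.vcf (the .min4.phy file), and the "identity" distance is a simple p-distance rather than a nucleotide substitution model.

# Sketch: NJ tree from the PHYLIP alignment with Biopython, written as Newick.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

aln = AlignIO.read("All_edit_Gm_tab.min4.phy", "phylip-relaxed")
dm = DistanceCalculator("identity").get_distance(aln)   # pairwise p-distances
tree = DistanceTreeConstructor().nj(dm)                 # neighbor joining
Phylo.write(tree, "nj_tree.nwk", "newick")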

Method 2 SNPhylo

Besides MEGA, SNPhylo is also well suited to building a phylogenetic tree from a variant file containing many individuals. The software has its own built-in filtering: it filters sites according to the parameters you choose (or its defaults), and at the end it even outputs the tree figure for you (not very pretty, so some post-processing is still recommended). The plotting relies on R packages, so if you want SNPhylo to draw the figure, the required R packages must be installed as well.

Download:

http://chibba.pgml.uga.edu/snphylo/

Its algorithm and workflow:

Usage (from the manual):

Usage:
snphylo.sh -v VCF_file [-p Maximum_PLCS (5)] [-c Minimum_depth_of_coverage (5)]|-H HapMap_file [-p Maximum_PNSS (5)]|-s Simple_SNP_file [-p Maximum_PNSS (5)]|-d GDS_file [-l LD_threshold (0.1)] [-m MAF_threshold (0.1)] [-M Missing_rate (0.1)] [-o Outgroup_sample_name] [-P Prefix_of_output_files (snphylo.output)] [-b [-B The_number_of_bootstrap_samples (100)]] [-a The_number_of_the_last_autosome (22)] [-r] [-A] [-h]

Options:
-A: Perform multiple alignment by MUSCLE
-b: Performs (non-parametric) bootstrap analysis and generate a tree
-h: Show help and exit
-r: Skip the step removing low quality data (-p and -c option are ignored)

Acronyms:
PLCS: The percent of Low Coverage Sample
PNSS: The percent of Sample which has no SNP information
LD: Linkage Disequilibrium
MAF: Minor Allele Frequency

Simple SNP File Format:
#Chrom Pos SampleID1 SampleID2 SampleID3 ...
1 1000 A A T ...
1 1002 G C G ...
...
2 2000 G C G ...
2 2002 A A T ...

Once the tool is installed and the input file is in place, a single one-line command starts the run:

~/biosoft/SNPhylo/snphylo.sh -v Gpan.prune.in.vcf -A -b -B 300

When the run finishes, the generated files look like this:

-rw-r----- 1 21230309 domain users  98K Sep  4 09:08 snphylo.output.bs.png
-rw-r----- 1 21230309 domain users  27K Sep  4 09:08 snphylo.output.bs.tree
-rw-r----- 1 21230309 domain users 4.9M Sep  1 20:04 snphylo.output.fasta
-rw-r----- 1 21230309 domain users 488M Sep  1 19:48 snphylo.output.filtered.vcf
-rw-r----- 1 21230309 domain users  31M Sep  1 20:03 snphylo.output.gds
-rw-r----- 1 21230309 domain users  72K Sep  1 20:03 snphylo.output.id.txt
-rw-r----- 1 21230309 domain users  95K Sep  3 19:37 snphylo.output.ml.png
-rw-r----- 1 21230309 domain users  25K Sep  3 19:37 snphylo.output.ml.tree
-rw-r----- 1 21230309 domain users 257K Sep  3 19:37 snphylo.output.ml.txt
-rw-r----- 1 21230309 domain users 5.4M Sep  1 20:49 snphylo.output.phylip.txt
-rw-r----- 1 21230309 domain users  20K Sep  1 19:16 snphylo.tar.gz
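
Since the built-in PNG is fairly plain, one quick way to inspect or redraw the result (a sketch only; any Newick viewer such as FigTree or iTOL works just as well) is to load the bootstrap tree with Biopython and matplotlib:

# Sketch: load the SNPhylo bootstrap tree (Newick) and redraw it.
import matplotlib.pyplot as plt
from Bio import Phylo

tree = Phylo.read("snphylo.output.bs.tree", "newick")
tree.ladderize()                              # order clades for a tidier layout
fig, ax = plt.subplots(figsize=(8, 12))
Phylo.draw(tree, axes=ax, do_show=False)
fig.savefig("snphylo_bs_tree.pdf")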