Amplicon sequencing analysis pipeline on the qiime2 platform
qiime2 is one of the best platforms for amplicon data analysis. It provides a large number of plugins covering everything from raw data to statistical analysis, and its emphasis on reproducible analyses and extensible plugins makes it the platform of choice for amplicon work.
Platform
For installation, I recommend conda; the official site provides a conda yaml file. To speed up the installation, however, we first need to reconfigure the conda mirrors.
step1: download the yaml file
wget https://data.qiime2.org/distro/core/qiime2-2020.8-py36-linux-conda.yml
step2: update the conda version
conda update conda -y
step3: modify the .condarc and yaml file
# reset the conda channels
channels:
  - conda-forge
  - bioconda
  - biobakery
  - qiime2
  - ohmeta
  - defaults
show_channel_urls: true
channel_alias: https://mirrors.bfsu.edu.cn/anaconda
default_channels:
  - https://mirrors.bfsu.edu.cn/anaconda/pkgs/main
  - https://mirrors.bfsu.edu.cn/anaconda/pkgs/free
  - https://mirrors.bfsu.edu.cn/anaconda/pkgs/r
custom_channels:
  conda-forge: https://mirrors.bfsu.edu.cn/anaconda/cloud
  bioconda: https://mirrors.bfsu.edu.cn/anaconda/cloud
  biobakery: https://mirrors.bfsu.edu.cn/anaconda/cloud
  qiime2: https://mirrors.bfsu.edu.cn/anaconda/cloud
  ohmeta: https://mirrors.bfsu.edu.cn/anaconda/cloud
  knights-lab: https://conda.anaconda.org
  alienzj: https://conda.anaconda.org

# change the yaml channel
- qiime2/label/r2020.8
+ qiime2
step4: create the qiime2-2020.8 conda environment
conda env create -n qiime2-2020.8 --file qiime2-2020.8-py36-linux-conda_update.yml
step5: activate environment and test installation
# activate
conda activate qiime2-2020.8
# test
qiime --help
# deactivate
conda deactivate # Note: if the activate command is not found, run eval "$(conda shell.bash hook)" first
Analysis pipeline
Check the fastq phred score
Run fastqc on the fastq files to inspect the read quality distribution of the raw data; this informs the base-quality thresholds set later.
fastqc --noextract -f fastq input.fq.gz -o result/outdir
Get the primer positions in the fastq
The DADA2 algorithm needs the positions of the forward and reverse primers on the reads. We can locate the primers with usearch and vsearch and pass those positions as parameters to the qiime2 dada2 plugin. Inspect the primer hits for the first and last ten reads in primer_hits.txt to determine the primer positions.
# step1: merge fq
echo "step1: mkdir result and merge all PE fq files"
mkdir result
./usearch -fastq_mergepairs rawdata/*_R1.fq -fastqout result/all_samples_merged.fq -relabel @

# step2: sampling sequence
echo "step2: sampling sequences for primer check"
./usearch -fastx_subsample result/all_samples_merged.fq -sample_size 20000 -fastqout result/all_sub_for_primer_check.fq

# step3: search primer position
echo "step3: search primer positions of the sequences"
./usearch -search_oligodb result/all_sub_for_primer_check.fq -db primers.fasta -strand both -userout result/primer_hits.txt -userfields query+qlo+qhi+qstrand

# step4: obtain the position information of primer
head primer_hits.txt # head 23 tail 19
#HTN1.182 452 471 -
#HTN1.182 7 23 +
#HTN1.183 447 466 -
#HTN1.183 7 23 +
#HTN1.233 428 447 -
#HTN1.233 7 23 +
#HTN1.570 430 449 -
#HTN1.570 7 23 +
#HTN1.219 448 467 -
#HTN1.219 7 23 +
tail primer_hits.txt # head 23 tail 19
#Normal6.996134 429 448 -
#Normal6.996134 7 23 +
#Normal6.996331 430 449 -
#Normal6.996331 7 23 +
#Normal6.996257 448 467 -
#Normal6.996257 7 23 +
#Normal6.996360 429 448 -
#Normal6.996360 7 23 +
#Normal6.961965 430 449 -
#Normal6.961965 7 23 +
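To read the primer positions out of primer_hits.txt programmatically rather than eyeballing head/tail, a small Python helper (illustrative only, not part of the pipeline) can tally the most common primer span per strand; the column order matches the -userfields query+qlo+qhi+qstrand setting above:

```python
from collections import Counter

def modal_primer_hits(lines):
    """Return the most common (qlo, qhi) primer span for the + and - strands
    from usearch -search_oligodb output (query, qlo, qhi, qstrand columns)."""
    fwd, rev = Counter(), Counter()
    for line in lines:
        _query, qlo, qhi, strand = line.split()
        (fwd if strand == "+" else rev)[(int(qlo), int(qhi))] += 1
    return fwd.most_common(1)[0][0], rev.most_common(1)[0][0]

hits = [
    "HTN1.182 452 471 -",
    "HTN1.182 7 23 +",
    "HTN1.183 447 466 -",
    "HTN1.183 7 23 +",
]
print(modal_primer_hits(hits))  # ((7, 23), (452, 471))
```

The forward-strand span ending at 23 is what feeds --p-trim-left-f 23 in the dada2 step below; how the reverse span maps onto --p-trim-left-r depends on the coordinate convention, so double-check against your own hits.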
Convert fastq into qza
step1: prepare the files required by the import format: manifest.tsv (sample IDs plus the paths of the forward and reverse fq files) and sample-metadata.tsv (sample grouping information)
# generate manifest.tsv
find /rawdata/ -name "*gz" | grep '1_' | sort | perl -e 'print"sampleid\tforward-absolute-filepath\treverse-absolute-filepath\n"; while(<>){chomp; $name=(split("\/", $_))[-1]; $name1=$name; $name2=$_; $name1=~s/_[1|2]_.fq.gz//g; $name2=~s/1_/2_/; print "$name1\t$_\t$name2\n";}' > pe-33-manifest-trimmed.tsv

# sample-metadata.tsv details
#sampleid Treatment
#HTN1 HTN
#HTN2 HTN
#Normal1 Normal
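If your file naming differs from what the perl one-liner expects, the same manifest can be generated with a short Python script. This is a sketch: the `SAMPLE_1_*.fq.gz` / `SAMPLE_2_*.fq.gz` naming pattern is an assumption and should be adapted to your data.

```python
import glob
import os

def write_manifest(raw_dir, out_tsv):
    """Write a PairedEndFastqManifestPhred33V2-style manifest.
    Assumes forward/reverse files are distinguished by a _1_ / _2_ token."""
    rows = ["sampleid\tforward-absolute-filepath\treverse-absolute-filepath"]
    for fwd in sorted(glob.glob(os.path.join(raw_dir, "*_1_*.gz"))):
        d, base = os.path.split(fwd)
        rev = os.path.join(d, base.replace("_1_", "_2_", 1))  # forward -> reverse
        sample = base.split("_1_")[0]
        rows.append(f"{sample}\t{fwd}\t{rev}")
    with open(out_tsv, "w") as fh:
        fh.write("\n".join(rows) + "\n")
```

The resulting TSV can be passed directly as --input-path in the import step below.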
step2: import the fastq files into a qza. Paired-end and single-end data use different import parameters, and different phred score offsets also require different parameters.
# PE mode
qiime tools import \
  --type 'SampleData[PairedEndSequencesWithQuality]' \
  --input-path pe-33-manifest-trimmed.tsv \
  --output-path result/paired-end-demux.qza \
  --input-format PairedEndFastqManifestPhred33V2

# SE mode
qiime tools import \
  --type "SampleData[SequencesWithQuality]" \
  --input-path single-33-manifest.tsv \
  --output-path result/single-end-demux.qza \
  --input-format SingleEndFastqManifestPhred33V2
Sequence quality control
Step1: to filter low-quality bases more accurately, use qiime2's built-in summarize plugin to see where low-quality bases fall along the reads, then combine this with the primer positions from usearch/vsearch above to choose suitable filtering parameters.
qiime demux summarize \
  --i-data result/paired-end-demux.qza \
  --o-visualization result/paired-end-demux-summary.qzv
Step2: compared with the common OTU approach, the amplicon sequence variants (ASVs) computed by DADA2 make better features, and replacing OTUs with features is the trend. Soon after DADA2, usearch's author Robert C Edgar released the improved unoise2 algorithm (since updated to unoise3), claiming it is more accurate than DADA2.
DADA2 is an R package that performs filtering, dereplication, chimera removal, and read merging; it corrects amplicon sequencing errors and recovers more true variants. Amplicon sequencing has inherent limitations, and OTU clustering constrains it further: OTUs are not species and should not absorb errors. DADA2 offers higher resolution.
DADA (Divisive Amplicon Denoising Algorithm) identifies true variants in 454 amplicon data while producing fewer false positives. DADA2 extends and enhances DADA so that it can be applied to Illumina sequencing data.
- Key advantage: DADA2 uses more of the data. Its error model incorporates quality scores, which other methods discard after quality filtering; it also incorporates abundances and models the probabilities of each individual substitution (e.g. A->C). Moreover, DADA2 learns its error model from the data itself rather than relying on a fixed parametric distribution.
DADA2 algorithm: a divisive algorithm
- Principle:
- 1 Each read starts as its own unit; reads with identical sequences are collapsed into one sequence, and the number of reads becomes that sequence's abundance (essentially dereplication).
- 2 Compute an abundance p-value for each sequence. When the smallest p-value falls below the chosen threshold, a new partition is created, and each sequence is assigned to the partition most likely to have generated it.
- 3 Repeat until the partitioning converges.
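The dereplication in step 1 can be sketched in a few lines of Python (illustrative only; the real DADA2 error model and abundance p-value machinery are far more involved):

```python
from collections import Counter

def dereplicate(reads):
    """Step 1 of the DADA2 sketch: collapse identical reads into unique
    sequences with abundances, most abundant first. Step 2 would then
    compute an abundance p-value for each (sequence, abundance) pair
    against the current partition's error model."""
    return Counter(reads).most_common()

reads = ["ACGT", "ACGT", "ACGA", "ACGT"]
print(dereplicate(reads))  # [('ACGT', 3), ('ACGA', 1)]
```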
# DADA2 denoise
qiime dada2 denoise-paired \
  --i-demultiplexed-seqs result/paired-end-demux.qza \
  --p-trim-left-f 23 \
  --p-trim-left-r 19 \
  --p-trunc-len-f 0 \
  --p-trunc-len-r 0 \
  --p-n-threads 20 \
  --o-table result/table.qza \
  --o-representative-sequences result/rep-seqs.qza \
  --o-denoising-stats result/stats.qza

# summarize feature table
qiime feature-table summarize \
  --i-table result/table.qza \
  --o-visualization result/table.qzv \
  --m-sample-metadata-file sample-metadata.tsv
Taxonomic annotation
step1: by aligning against a reference database with known taxonomy, we can assign taxonomy to the representative sequences behind the feature table. In qiime2 you can usually use pre-trained classifiers such as silva132 or Greengenes 13_8.
A well-known weakness of the Greengenes database is its poor genus- and species-level annotation: in many entries the fields after g__ and s__ are empty. If genus/species-level annotation matters to you, this database is not recommended.
Based on feedback from colleagues in the field, I recommend the silva database as the first-choice reference for taxonomic annotation.
# download silva classifier data
wget https://data.qiime2.org/2020.8/common/silva-138-99-nb-classifier.qza

# annotation
qiime feature-classifier classify-sklearn \
  --i-classifier database/silva-138-99-nb-classifier.qza \
  --i-reads result/rep-seqs.qza \
  --o-classification result/taxonomy-dada2-sliva.qza \
  --p-n-jobs 20 \
  --verbose \
  --output-dir result/
step2: you can also re-annotate selected representative sequences against an amplicon database using blast and compare the result with the qiime2 classifier's output to assess the classifier's performance; this is useful when building a new taxonomic classifier.
# extract the validated sequences by qiime2 view network site
qiime feature-table tabulate-seqs \
  --i-data result/rep-seqs.qza \
  --o-visualization result/rep-seqs.qzv
Filter unsuitable ASVs
step1: remove low occurrence ASVs
Set the filtering thresholds based on the feature table. There are two: frequency (an ASV's total reads across all samples) and samples (the number of samples an ASV occurs in). Compute the mean sampling depth (sum the counts of all ASVs per sample and average across samples), then multiply it by a chosen sampling fraction to obtain the frequency threshold; you can also require an ASV to occur in a minimum number of samples.
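As a toy illustration of that arithmetic (the table layout and the fraction are made-up values, not pipeline output):

```python
def min_frequency_cutoff(table, fraction=0.0001):
    """table: {sample: {asv: count}}. Mean per-sample depth times a chosen
    sampling fraction gives a --p-min-frequency threshold; the fraction
    itself is a judgment call for your data."""
    depths = [sum(asvs.values()) for asvs in table.values()]
    mean_depth = sum(depths) / len(depths)
    return mean_depth * fraction

table = {"s1": {"a": 30000, "b": 10000}, "s2": {"a": 50000, "b": 10000}}
print(min_frequency_cutoff(table))  # 5.0
```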
qiime feature-table filter-features \
  --i-table result/table.qza \
  --p-min-frequency 10 \
  --p-min-samples 1 \
  --o-filtered-table result/table_filter_low_freq.qza
step2: remove contaminating mitochondrial and chloroplast sequences.
Common contaminants in 16S amplicon data are mitochondrial and chloroplast 16S sequences; some unannotated sequences also occur, and all of these should be removed.
qiime taxa filter-table \
  --i-table result/table_filter_low_freq.qza \
  --i-taxonomy result/taxonomy-dada2-sliva.qza \
  --p-exclude mitochondria,chloroplast \
  --o-filtered-table result/table_filter_low_freq_contam.qza
step3: drop low-depth samples:
After the steps above, some samples may be left with few total ASV counts and can be dropped. Commonly used thresholds range from 1,000 to 4,000 reads.
# summarise all the ASV counts in each sample
qiime feature-table summarize \
  --i-table result/table_filter_low_freq_contam.qza \
  --o-visualization result/table_filter_low_freq_contam_summary.qzv

# remove samples
qiime feature-table filter-samples \
  --i-table result/table_filter_low_freq_contam.qza \
  --p-min-frequency 4000 \
  --o-filtered-table result/final_table.qza

# representative sequences
qiime feature-table filter-seqs \
  --i-data result/rep-seqs.qza \
  --i-table result/final_table.qza \
  --o-filtered-data result/final_rep_seqs.qza

# reannotate
qiime feature-classifier classify-sklearn \
  --i-classifier database/silva-138-99-nb-classifier.qza \
  --i-reads result/final_rep_seqs.qza \
  --o-classification result/final_taxonomy_sliva.qza \
  --p-n-jobs 20 \
  --verbose \
  --output-dir result/

# core features
qiime feature-table core-features \
  --i-table result/final_table.qza \
  --p-min-fraction 0.6 \
  --p-max-fraction 1 \
  --p-steps 11 \
  --o-visualization result/final_table_cores.qzv \
  --output-dir result
Downstream analysis
Constructing phylogenetic tree and diversity analysis
step1: the phylogenetic tree serves the downstream diversity analyses
qiime phylogeny align-to-tree-mafft-fasttree \
  --i-sequences result/final_rep_seqs.qza \
  --o-alignment result/final_rep_seqs_aligned.qza \
  --o-masked-alignment result/final_rep_seqs_masked.qza \
  --p-n-threads 20 \
  --o-tree result/unrooted-tree.qza \
  --o-rooted-tree result/rooted-tree.qza
step2: rarefaction curve:
Rarefaction curves show how the number of observed ASVs scales with sequencing depth
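Conceptually, each point on a rarefaction curve is "subsample the sample to depth d and count the ASVs observed". A minimal Python sketch with toy counts (not real data):

```python
import random

def rarefaction_point(counts, depth, seed=0):
    """Subsample `depth` reads without replacement from an ASV count
    vector and return the number of distinct ASVs observed --
    one point on the rarefaction curve."""
    pool = [asv for asv, n in counts.items() for _ in range(n)]
    random.Random(seed).shuffle(pool)
    return len(set(pool[:depth]))

counts = {"asv1": 90, "asv2": 9, "asv3": 1}
for d in (1, 10, 100):
    print(d, rarefaction_point(counts, d))
```

The curve flattens once extra depth stops revealing new ASVs, which is how the qzv visualization below is read.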
qiime diversity alpha-rarefaction \
  --i-table result/final_table.qza \
  --i-phylogeny result/rooted-tree.qza \
  --p-max-depth 60000 \
  --m-metadata-file sample-metadata.tsv \
  --o-visualization result/p-max-depth-60000-alpha-rarefaction.qzv
step3: diversity analysis
Set the sampling-depth parameter according to the minimum sequencing depth across samples
# all diversity index and distance
qiime diversity core-metrics-phylogenetic \
  --i-phylogeny result/rooted-tree.qza \
  --i-table result/final_table.qza \
  --p-sampling-depth 60000 \
  --m-metadata-file sample-metadata.tsv \
  --output-dir result/sample-depth-60000-core-metrics-results
step4: group significance tests for diversity metrics (e.g. faith_pd)
# example for faith_pd_vector of group analysis
qiime diversity alpha-group-significance \
  --i-alpha-diversity result/sample-depth-60000-core-metrics-results/faith_pd_vector.qza \
  --m-metadata-file sample-metadata.tsv \
  --o-visualization result/sample-depth-60000-core-metrics-results/faith-pd-group-significance.qzv

# example for alpha diversity of group analysis
qiime diversity alpha-group-significance \
  --i-alpha-diversity result/sample-depth-60000-core-metrics-results/shannon_vector.qza \
  --m-metadata-file sample-metadata.tsv \
  --o-visualization result/shannon_compare_groups.qzv

# beta diversity
qiime diversity beta-group-significance \
  --i-distance-matrix result/sample-depth-60000-core-metrics-results/unweighted_unifrac_distance_matrix.qza \
  --m-metadata-file sample-metadata.tsv \
  --m-metadata-column Treatment \
  --p-no-pairwise \
  --p-permutations 999 \
  --o-visualization result/unweighted-unifrac-subject-significance.qzv

# three dimensions to show beta diversity
qiime emperor plot \
  --i-pcoa result/sample-depth-60000-core-metrics-results/unweighted_unifrac_pcoa_results.qza \
  --m-metadata-file sample-metadata.tsv \
  --p-custom-axes Treatment \
  --o-visualization result/unweighted-unifrac-emperor-height.qzv
Visualizing taxonomic composition
qiime taxa barplot \
  --i-table result/final_table.qza \
  --i-taxonomy result/final_taxonomy_sliva.qza \
  --m-metadata-file sample-metadata.tsv \
  --o-visualization result/final_taxa_barplots_sliva.qzv
Analysis of composition of microbiomes (ANCOM)
ANCOM (see also sparse compositional correlation (SparCC) for analyzing correlation networks among taxa) is a method for comparing differences in microbial abundance between groups, with results similar to LEfSe. It is based on compositional log-ratios: the count data are first log-transformed, then compared with simple rank-based tests (e.g. the aov, friedman.test, and lme functions in R), and finally the statistic W is computed. W measures the significance of the between-group difference: the higher a taxon's W, the more significantly it differs between groups. The ANCOM R code is recommended.
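The pseudocount and log-ratio transform at the heart of ANCOM can be sketched as follows (illustrative only; the real method then runs rank tests on these log-ratios across groups and counts, per taxon, how many pairwise tests are significant to obtain W):

```python
import math

def log_ratio_matrix(counts, pseudocount=1):
    """For one sample: pairwise log-ratios of taxa after adding a
    pseudocount (mirrors `qiime composition add-pseudocount`).
    W for taxon i is the number of taxa j whose log(x_i/x_j)
    differs significantly between groups."""
    x = [c + pseudocount for c in counts]
    n = len(x)
    return [[math.log(x[i] / x[j]) for j in range(n)] for i in range(n)]

m = log_ratio_matrix([0, 9, 99])
print(round(m[2][0], 3))  # log(100/1) = 4.605
```

The pseudocount is exactly why the add-pseudocount step below is required: a zero count has no logarithm.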
# add pseudocount for log transform
qiime composition add-pseudocount \
  --i-table result/final_table.qza \
  --p-pseudocount 1 \
  --o-composition-table result/final_table_pseudocount.qza

# ANCOM
qiime composition ancom \
  --i-table result/final_table_pseudocount.qza \
  --m-metadata-file sample-metadata.tsv \
  --m-metadata-column Treatment \
  --output-dir result/ancom_output
Export qza into other data formats
qza data files
To standardize its workflows and make analyses reproducible, QIIME2 defines a unified file format for the whole analysis process: .qza. A qza file works like a closed container: it bundles the raw data, the analysis steps, and the results. This standardizes the file format while making every analysis step and plotting parameter traceable, laying the groundwork for reproducible analyses.
# representative sequences
qiime tools export \
  --input-path result/final_rep_seqs.qza \
  --output-path final_result

# feature table
qiime tools export \
  --input-path result/final_table.qza \
  --output-path final_result

biom normalize-table \
  -i final_result/feature-table.biom \
  -r \
  -o final_result/feature-table-norm.biom

biom convert \
  -i final_result/feature-table-norm.biom \
  -o final_result/feature-table-norm.tsv \
  --to-tsv \
  --header-key taxonomy
LEfse
LEfSe stands for LDA Effect Size analysis and is essentially a form of discriminant analysis. Its results are usually paired with a cladogram, which shows the phylogenetic relationships of the differential taxa. The LEfSe R scripts in yingtools2 are recommended: remotes::install_github("ying14/yingtools2")
**Principle:** LEfSe first applies the non-parametric factorial Kruskal-Wallis (KW) sum-rank test to detect features with significantly different abundances and identify the differentially abundant taxa. Finally, LEfSe uses linear discriminant analysis (LDA) to estimate the effect size of each component (taxon) on the group difference.
**Cladogram:** the circles radiating from the inside outward represent taxonomic levels from phylum to genus (or species). Each small circle at a given level represents one taxon at that level, and its diameter is proportional to relative abundance. Coloring: taxa with no significant difference are uniformly yellow; differential taxa (biomarkers) are colored by group, so red nodes denote taxa that play an important role in the red group and green nodes taxa important in the green group. If one group is absent from the figure, it has no significantly different taxa. Taxon names abbreviated by letters in the figure are spelled out in the legend on the right.
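The KW statistic in that first screening step can be computed by hand. A stdlib-only sketch without tie correction, on toy numbers (not pipeline output):

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic (no tie correction) -- the screening
    test LEfSe applies to each taxon's abundances across groups."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # assumes no tied values
    n = len(pooled)
    s = sum(len(g) * (sum(rank[v] for v in g) / len(g)) ** 2 for g in groups)
    return 12.0 / (n * (n + 1)) * s - 3 * (n + 1)

# two fully separated groups of three observations
print(round(kruskal_wallis_h([[1, 2, 3], [7, 8, 9]]), 3))  # 3.857
```

H is then compared against a chi-squared threshold; taxa that pass go on to the LDA effect-size step.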
step1: install lefse through conda
conda create -n lefse -c biobakery lefse -y
conda activate lefse
which format_input.py
step2: collapse the table.qza to the L6 level
qiime taxa collapse \
  --i-table result/final_table.qza \
  --o-collapsed-table collapse/collapse.table.qza \
  --p-level 6 \
  --i-taxonomy result/final_taxonomy_sliva.qza
step3: calculate relative-frequency for the collapsed table (relative abundance instead of counts)
qiime feature-table relative-frequency \
  --i-table collapse/collapse.table.qza \
  --o-relative-frequency-table collapse/collapse.frequency.table.qza \
  --output-dir collapse/
step4: export biom file
qiime tools export \
  --input-path collapse/collapse.frequency.table.qza \
  --output-path collapse/
step5: convert biom to text file
biom convert \
  -i collapse/feature-table.biom \
  -o collapse/collapse.frequency.table.tsv \
  --header-key "taxonomy" \
  --to-tsv
step6: filter tax
sed 's/;/\|/g' collapse/collapse.frequency.table.tsv |
awk '{split($1, a, "|"); if(a[6] != "__"){print $0}}' |
# optional: strip the domain prefix
# sed 's/d\_\_Bacteria|//g' |
grep -vE "g__uncultured|d__Archaea|p__WPS-2|p__SAR324_clade|Constructed" |
sed 's/#OTU ID/Group/g;s/taxonomy//g' > collapse/collapse.frequency.table.lefse.tsv
step7: run lefse
conda activate lefse
# convert text file into lefse.input file
format_input.py \
  collapse/collapse.frequency.table.lefse.tsv \
  result/collapse.frequency.table.lefse.in \
  -c 1 \
  -m f \
  -o 100000

# run lefse
run_lefse.py \
  result/collapse.frequency.table.lefse.in \
  result/collapse.frequency.table.lefse.res

# select significant lefse results
grep -E "HTN|Normal" \
  result/collapse.frequency.table.lefse.res \
  > result/collapse.frequency.table.lefse_signif.res

# plot lda
plot_res.py \
  result/collapse.frequency.table.lefse_signif.res \
  result/lefse_final_lda.pdf \
  --format pdf \
  --autoscale 0

# plot cladogram
plot_cladogram.py \
  result/collapse.frequency.table.lefse_signif.res \
  result/lefse_total_clado.pdf \
  --format pdf
Functional prediction: picrust2
PICRUSt, short for Phylogenetic Investigation of Communities by Reconstruction of Unobserved States, is a tool that predicts microbial community function from 16S rRNA gene sequences.
How it works:
(1) Gene content inference. OTUs are first picked against the Greengenes "closed reference" sequences and placed on a phylogenetic tree; ancestral state reconstruction, combined with the IMG/M database, then predicts the genome content of OTUs in the tree that lack sequenced genomes.
(2) Metagenome inference. The 16S rDNA reads are matched against the Greengenes database to pick OTUs highly similar to the "closed reference" set (default ≥97%); each OTU's read count is then divided by the 16S copy number of its genome for normalization; finally, the normalized abundances are multiplied by the gene content of the corresponding genomes to predict the metagenome. The predicted gene families can be categorized via KEGG Orthology, COGs, or Pfams.
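The copy-number normalization and gene-content multiplication in step (2) can be sketched with toy numbers (all OTU, copy-number, and gene-content values below are hypothetical):

```python
def predict_metagenome(otu_counts, copy_number, gene_content):
    """PICRUSt-style sketch: divide each OTU's read count by its 16S copy
    number, then multiply by its genes-per-genome to predict the abundance
    of each gene family in the sample."""
    pred = {}
    for otu, count in otu_counts.items():
        norm = count / copy_number[otu]  # copy-number normalization
        for gene, per_genome in gene_content[otu].items():
            pred[gene] = pred.get(gene, 0.0) + norm * per_genome
    return pred

otu_counts = {"otu1": 40, "otu2": 30}
copy_number = {"otu1": 4, "otu2": 1}          # 16S copies per genome
gene_content = {"otu1": {"K00001": 2},        # genes per genome
                "otu2": {"K00001": 1, "K00002": 3}}
print(predict_metagenome(otu_counts, copy_number, gene_content))
# {'K00001': 50.0, 'K00002': 90.0}
```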
The q2-picrust2 plugin cannot currently be installed on qiime2-2020.8, so we use the standalone picrust2 software for microbial functional prediction.
# install picrust2
conda create -n picrust2 -c bioconda -c conda-forge picrust2=2.3.0_b -y
# export representative sequences
conda activate qiime2-2020.8
qiime tools export --input-path result/final_rep_seqs.qza --output-path ./
conda deactivate
# run picrust2
conda activate picrust2
picrust2_pipeline.py -s dna-sequences.fasta -i feature-table.biom -o picrust2_out_pipeline -p 30
conda deactivate
The key output files are:
- EC_metagenome_out: folder containing unstratified EC number metagenome predictions (pred_metagenome_unstrat.tsv.gz), the sequence table normalized by predicted 16S copy number abundances (seqtab_norm.tsv.gz), and the per-sample NSTI values weighted by the abundance of each ASV (weighted_nsti.tsv.gz).
- KO_metagenome_out: as EC_metagenome_out above, but for KO metagenomes.
- pathways_out: folder containing predicted pathway abundances and coverages per sample, based on predicted EC number abundances.
Reference
- qiime2 docs
- fastqc tutorial
- usearch tutorial
- vsearch tutorial
- import data in qiime2
- qiime2 essentials for amplicon analysis
- amplicon database overview
- Greengenes database overview
- SILVA database overview
- differential abundance testing
- lefse after qiime2
- lefse scripts github
- picrust2 installation and functional prediction for 16s data
- picrust2 wiki