fastp: the go-to tool for FASTQ file preprocessing

I. Foreword

fastp is an all-in-one tool, written in C++, for processing FASTQ files. Its highlights:

1. Versatile: profiles FASTQ data both before and after filtering, reporting quality curves, base content, Q20/Q30 rates, GC content, duplication, adapter content, and more.

2. Filters out bad reads (low quality, too short, too many N bases, etc.).

3. Trims low-quality 5' or 3' ends based on the mean quality score in a sliding window.

4. Detects and trims adapters automatically, so there is no need to supply an adapter sequence file — especially handy when mining public data.

5. Corrects mismatches in the overlapped region of paired-end reads.

6. Trims polyX tails (e.g., mRNA often carries a polyA tail at its 3' end).

7. Preprocesses unique molecular identifiers (UMIs).

8. Outputs results in JSON, convenient for programmatic parsing.

9. Outputs the QC report as an HTML page.

10. Can split its output so downstream steps can run in parallel.

11. Supports long reads (good news for third-generation sequencing).

12. Supports standard input and standard output (if this is unfamiliar, read up on shell pipes).

13. Extremely fast (a benefit of C++): faster than FastQC for quality control and faster than Trimmomatic for read trimming.

If the tutorial below feels overwhelming, you may want to work through a Linux primer first:

Learn Linux in Ten Hours

Linux and Server Tips for Bioinformatics

If your own machine cannot handle the computations below, pick whichever computing resource suits you:

Shared (budget-friendly): shared servers with root access

Dedicated (hands-off): bioinformatics analysis without asking for help

Physical (stable and efficient): configure a workstation you like (hardware + environment setup)

Access link: https://biomamba.xiyoucloud.net/

II. The simplest usage

1. Single-end sequencing:

fastp -i in.fq -o out.fq

2. Paired-end sequencing:

Here the -i option takes the path of the forward (R1) reads and -I the path of the reverse (R2) reads; -o and -O set the corresponding output paths.

fastp -i in.R1.fq.gz -I in.R2.fq.gz -o out.R1.fq.gz -O out.R2.fq.gz

As you can see, fastp handles both compressed and uncompressed data.

Sample reports:

HTML report: http://opengene.org/fastp/fastp.html

JSON report: http://opengene.org/fastp/fastp.json

III. Installing fastp

Pick any one of the three methods below.

(1) Installing with conda (tutorial: the bioinformatics software butler — installing, using, and uninstalling conda) is very fast:

mamba install -c bioconda fastp

(2) Alternatively, download the prebuilt binary yourself:

wget http://opengene.org/fastp/fastp
chmod a+x ./fastp

(3) If both of the above fail, you can try compiling from source (this route fails the most often):

### Install the isa-l library ###
# Fetch the source:
git clone https://github.com/intel/isa-l.git
cd isa-l
# Configure:
./autogen.sh
./configure --prefix=/usr --libdir=/usr/lib64
# Build:
make
# Install:
sudo make install
cd ..

### Install libdeflate ###
git clone https://github.com/ebiggers/libdeflate.git
cd libdeflate
cmake -B build
cmake --build build
sudo cmake --install build
cd ..

### Install fastp ###
# Fetch the source:
git clone https://github.com/OpenGene/fastp.git

# Build:
cd fastp
make

# Install:
sudo make install

IV. Common options

0. Parallelism

--thread sets the number of worker threads and largely determines how fast fastp runs. My machine has 28 hardware threads, so leaving the default of 3 in place would simply waste time.
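For instance, a paired-end run with 16 worker threads might look like the sketch below; the file names are placeholders, and `nproc` simply reports how many hardware threads your machine offers.

```shell
# Check how many hardware threads this machine offers.
nproc
# Placeholder input/output names; adjust to your own data.
fastp -i in.R1.fq.gz -I in.R2.fq.gz \
      -o out.R1.fq.gz -O out.R2.fq.gz \
      --thread 16
```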

1. Unpaired reads

In paired-end data, a read becomes unpaired when it survives filtering but its mate does not (for example, the mate was too short or too low in quality). Unpaired reads are usually treated as noise and dropped during preprocessing, since keeping them can complicate downstream paired-end analysis.

fastp recognizes and discards unpaired reads automatically. If you want to keep them, use --unpaired1 and --unpaired2 to write the unpaired survivors from read1 and read2, respectively. The two options may point to the same file, in which case all unpaired reads are written to that one file.
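As a sketch (file names are placeholders), pointing both options at the same path collects every unpaired survivor in a single file:

```shell
fastp -i in.R1.fq.gz -I in.R2.fq.gz \
      -o out.R1.fq.gz -O out.R2.fq.gz \
      --unpaired1 unpaired.fq.gz --unpaired2 unpaired.fq.gz
```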

2. Reads that fail the filters

Reads typically fail because of low quality or insufficient length; pass a file name to the --failed_out option to write them to a separate file.
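For example (placeholder names), collecting the rejected reads for later inspection:

```shell
fastp -i in.fq.gz -o out.fq.gz --failed_out failed.fq.gz
```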

3. Quality inspection only

You can perfectly well use fastp the way you would use FastQC: omit the output options (-o/-O) and fastp will run QC and generate the reports without writing any processed reads. For a quick preview, --reads_to_process limits how many reads/pairs are evaluated.
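A QC-only sketch (placeholder file and report names), previewing only the first million read pairs:

```shell
# No -o/-O, so nothing but the reports is written.
fastp -i in.R1.fq.gz -I in.R2.fq.gz \
      --reads_to_process 1000000 \
      -j qc.json -h qc.html
```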

4. Filtering by quality score

fastp ships with sensible default quality filters, but you can also set the parameters yourself. For example, -q/--qualified_quality_phred 15 counts a base as qualified when its quality is at least Q15; -u/--unqualified_percent_limit 40 discards any read in which more than 40% of the bases are unqualified; and -e/--average_qual discards reads whose mean quality falls below the given value.
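Putting the three options together (placeholder file names; the thresholds are only examples):

```shell
# -q 15: a base counts as qualified at >= Q15
# -u 40: discard a read when more than 40% of its bases are unqualified
# -e 20: discard a read whose mean quality is below Q20
fastp -i in.fq.gz -o out.fq.gz -q 15 -u 40 -e 20
```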

5. Filtering by length

-l/--length_required discards reads shorter than the given length (default 15), and --length_limit discards reads that are too long. -L/--disable_length_filtering switches length filtering off entirely.
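For instance (placeholder names, example thresholds), keeping only reads between 50 and 300 bp:

```shell
fastp -i in.fq.gz -o out.fq.gz -l 50 --length_limit 300
```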

6. Filtering low-complexity reads

Highly repetitive reads are too simple to carry much information (special research goals aside), so they can be removed with the -y/--low_complexity_filter option. Complexity is defined as the percentage of bases that differ from the base immediately following them, so it ranges from 0 to 100; -Y/--complexity_threshold sets the minimum required (default 30).

# Here is a 51 bp sequence:
seq = 'AAAATTTTTTTTTTTTTTTTTTTTTGGGGGGGGGGGGGGGGGGGGGGCCCC'
# Only three bases differ from the base right after them, so its complexity is:
complexity = 3/(51-1) = 6%
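The same arithmetic can be checked in the shell; this awk snippet only illustrates the definition and is not part of fastp:

```shell
seq='AAAATTTTTTTTTTTTTTTTTTTTTGGGGGGGGGGGGGGGGGGGGGGCCCC'
awk -v s="$seq" 'BEGIN {
  n = length(s); diff = 0
  # Count positions where base[i] != base[i+1].
  for (i = 1; i < n; i++)
    if (substr(s, i, 1) != substr(s, i + 1, 1)) diff++
  printf "%d%%\n", 100 * diff / (n - 1)
}'
# prints: 6%
```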

7. Adapter trimming

Nothing to introduce here: no setup is needed, as fastp detects and removes adapters on its own (for paired-end data you can additionally enable --detect_adapter_for_pe).

8. Sliding-window read trimming

Since v0.19.6, fastp can trim reads with a sliding window: -5/--cut_front slides a window in from the 5' end (-3/--cut_tail needs no explanation) and cuts away the low-quality bases at the head. --cut_front_window_size sets the window size, and -M/--cut_mean_quality sets the mean quality the window must reach.
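A sketch trimming both ends (placeholder names): a 4 bp window that must average at least Q20:

```shell
fastp -i in.fq.gz -o out.fq.gz -5 -3 -W 4 -M 20
```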

9. Trimming polyX tails

-x/--trim_poly_x enables polyX trimming at the 3' end, and --poly_x_min_len sets the minimum run length that counts as a polyX tail; for example, --poly_x_min_len 10 trims tails made of 10 or more identical bases.
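For example (placeholder names):

```shell
# Trim 3' polyX tails of 10 bases or longer.
fastp -i in.fq.gz -o out.fq.gz -x --poly_x_min_len 10
```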

10. Splitting the output

So that the FASTQ files can be processed in parallel later, fastp can split its output either by capping the number of lines per file (-S/--split_by_lines) or by fixing the number of output files (-s/--split). For example, --split_prefix_digits=4 --out1=out.fq --split=3 writes three files whose names carry a zero-padded four-digit prefix: 0001.out.fq, 0002.out.fq, 0003.out.fq.
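The splitting setup as a single command line (placeholder input name):

```shell
# Produces 0001.out.fq, 0002.out.fq and 0003.out.fq.
fastp -i in.fq -o out.fq --split 3 --split_prefix_digits 4
```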

11. Overrepresented sequence analysis

-p/--overrepresentation_analysis enables the analysis of overrepresented sequences. For speed, fastp samples only 1 read in 20 by default; -P/--overrepresentation_sampling changes this rate, so -P 100 samples 1 read in 100 (faster, but less sensitive).
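For example (placeholder names):

```shell
# Enable the analysis, sampling 1 read in 100 to keep it fast.
fastp -i in.fq.gz -o out.fq.gz -p -P 100
```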

12. Merging paired reads

-m/--merge enables merging of overlapping read pairs; --merged_out sets the output path for the merged reads, while --out1 and --out2 receive the reads that pass all filters but cannot be merged. --unpaired1 and --unpaired2 mean exactly what the names say.
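A merging sketch (placeholder names):

```shell
fastp -i in.R1.fq.gz -I in.R2.fq.gz \
      -m --merged_out merged.fq.gz \
      -o unmerged.R1.fq.gz -O unmerged.R2.fq.gz
```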

13. More options

The author provides the full usage text and parameter list:

usage: fastp -i <in1> -o <out1> [-I <in1> -O <out2>] [options...]
options:
  # I/O options
  -i, --in1                          read1 input file name (string)
  -o, --out1                         read1 output file name (string [=])
  -I, --in2                          read2 input file name (string [=])
  -O, --out2                           read2 output file name (string [=])
      --unpaired1                      for PE input, if read1 passed QC but read2 not, it will be written to unpaired1. Default is to discard it. (string [=])
      --unpaired2                      for PE input, if read2 passed QC but read1 not, it will be written to unpaired2. If --unpaired2 is same as --unpaired1 (default mode), both unpaired reads will be written to this same file. (string [=])
      --failed_out                     specify the file to store reads that cannot pass the filters. (string [=])
      --overlapped_out                 for each read pair, output the overlapped region if it has no any mismatched base. (string [=])
  -m, --merge                          for paired-end input, merge each pair of reads into a single read if they are overlapped. The merged reads will be written to the file given by --merged_out, the unmerged reads will be written to the files specified by --out1 and --out2. The merging mode is disabled by default.
      --merged_out                     in the merging mode, specify the file name to store merged output, or specify --stdout to stream the merged output (string [=])
      --include_unmerged               in the merging mode, write the unmerged or unpaired reads to the file specified by --merge. Disabled by default.
  -6, --phred64                      indicate the input is using phred64 scoring (it'll be converted to phred33, so the output will still be phred33)
  -z, --compression                  compression level for gzip output (1 ~ 9). 1 is fastest, 9 is smallest, default is 4. (int [=4])
      --stdin                          input from STDIN. If the STDIN is interleaved paired-end FASTQ, please also add --interleaved_in.
      --stdout                         output passing-filters reads to STDOUT. This option will result in interleaved FASTQ output for paired-end input. Disabled by default.
      --interleaved_in                 indicate that <in1> is an interleaved FASTQ which contains both read1 and read2. Disabled by default.
      --reads_to_process             specify how many reads/pairs to be processed. Default 0 means process all reads. (int [=0])
      --dont_overwrite               don't overwrite existing files. Overwritting is allowed by default.
      --fix_mgi_id                     the MGI FASTQ ID format is not compatible with many BAM operation tools, enable this option to fix it.

  # adapter trimming options
  -A, --disable_adapter_trimming     adapter trimming is enabled by default. If this option is specified, adapter trimming is disabled
  -a, --adapter_sequence               the adapter for read1. For SE data, if not specified, the adapter will be auto-detected. For PE data, this is used if R1/R2 are found not overlapped. (string [=auto])
      --adapter_sequence_r2            the adapter for read2 (PE data only). This is used if R1/R2 are found not overlapped. If not specified, it will be the same as <adapter_sequence> (string [=])
      --adapter_fasta                  specify a FASTA file to trim both read1 and read2 (if PE) by all the sequences in this FASTA file (string [=])
      --detect_adapter_for_pe          by default, the adapter sequence auto-detection is enabled for SE data only, turn on this option to enable it for PE data.

  # global trimming options
  -f, --trim_front1                    trimming how many bases in front for read1, default is 0 (int [=0])
  -t, --trim_tail1                     trimming how many bases in tail for read1, default is 0 (int [=0])
  -b, --max_len1                       if read1 is longer than max_len1, then trim read1 at its tail to make it as long as max_len1. Default 0 means no limitation (int [=0])
  -F, --trim_front2                    trimming how many bases in front for read2. If it's not specified, it will follow read1's settings (int [=0])
  -T, --trim_tail2                     trimming how many bases in tail for read2. If it's not specified, it will follow read1's settings (int [=0])
  -B, --max_len2                       if read2 is longer than max_len2, then trim read2 at its tail to make it as long as max_len2. Default 0 means no limitation. If it's not specified, it will follow read1's settings (int [=0])

  # duplication evaluation and deduplication
  -D, --dedup                          enable deduplication to drop the duplicated reads/pairs
      --dup_calc_accuracy              accuracy level to calculate duplication (1~6), higher level uses more memory (1G, 2G, 4G, 8G, 16G, 24G). Default 1 for no-dedup mode, and 3 for dedup mode. (int [=0])
      --dont_eval_duplication          don't evaluate duplication rate to save time and use less memory.

  # polyG tail trimming, useful for NextSeq/NovaSeq data
  -g, --trim_poly_g                  force polyG tail trimming, by default trimming is automatically enabled for Illumina NextSeq/NovaSeq data
      --poly_g_min_len                 the minimum length to detect polyG in the read tail. 10 by default. (int [=10])
  -G, --disable_trim_poly_g          disable polyG tail trimming, by default trimming is automatically enabled for Illumina NextSeq/NovaSeq data

  # polyX tail trimming
  -x, --trim_poly_x                    enable polyX trimming in 3' ends.
      --poly_x_min_len                 the minimum length to detect polyX in the read tail. 10 by default. (int [=10])

  # per read cutting by quality options
  -5, --cut_front                      move a sliding window from front (5') to tail, drop the bases in the window if its mean quality < threshold, stop otherwise.
  -3, --cut_tail                       move a sliding window from tail (3') to front, drop the bases in the window if its mean quality < threshold, stop otherwise.
  -r, --cut_right                      move a sliding window from front to tail, if meet one window with mean quality < threshold, drop the bases in the window and the right part, and then stop.
  -W, --cut_window_size                the window size option shared by cut_front, cut_tail or cut_sliding. Range: 1~1000, default: 4 (int [=4])
  -M, --cut_mean_quality               the mean quality requirement option shared by cut_front, cut_tail or cut_sliding. Range: 1~36 default: 20 (Q20) (int [=20])
      --cut_front_window_size          the window size option of cut_front, default to cut_window_size if not specified (int [=4])
      --cut_front_mean_quality         the mean quality requirement option for cut_front, default to cut_mean_quality if not specified (int [=20])
      --cut_tail_window_size           the window size option of cut_tail, default to cut_window_size if not specified (int [=4])
      --cut_tail_mean_quality          the mean quality requirement option for cut_tail, default to cut_mean_quality if not specified (int [=20])
      --cut_right_window_size          the window size option of cut_right, default to cut_window_size if not specified (int [=4])
      --cut_right_mean_quality         the mean quality requirement option for cut_right, default to cut_mean_quality if not specified (int [=20])

  # quality filtering options
  -Q, --disable_quality_filtering    quality filtering is enabled by default. If this option is specified, quality filtering is disabled
  -q, --qualified_quality_phred      the quality value that a base is qualified. Default 15 means phred quality >=Q15 is qualified. (int [=15])
  -u, --unqualified_percent_limit    how many percents of bases are allowed to be unqualified (0~100). Default 40 means 40% (int [=40])
  -n, --n_base_limit                 if one read's number of N base is >n_base_limit, then this read/pair is discarded. Default is 5 (int [=5])
  -e, --average_qual                 if one read's average quality score <avg_qual, then this read/pair is discarded. Default 0 means no requirement (int [=0])


  # length filtering options
  -L, --disable_length_filtering     length filtering is enabled by default. If this option is specified, length filtering is disabled
  -l, --length_required              reads shorter than length_required will be discarded, default is 15. (int [=15])
      --length_limit                 reads longer than length_limit will be discarded, default 0 means no limitation. (int [=0])

  # low complexity filtering
  -y, --low_complexity_filter          enable low complexity filter. The complexity is defined as the percentage of base that is different from its next base (base[i] != base[i+1]).
  -Y, --complexity_threshold           the threshold for low complexity filter (0~100). Default is 30, which means 30% complexity is required. (int [=30])

  # filter reads with unwanted indexes (to remove possible contamination)
      --filter_by_index1               specify a file contains a list of barcodes of index1 to be filtered out, one barcode per line (string [=])
      --filter_by_index2               specify a file contains a list of barcodes of index2 to be filtered out, one barcode per line (string [=])
      --filter_by_index_threshold      the allowed difference of index barcode for index filtering, default 0 means completely identical. (int [=0])

  # base correction by overlap analysis options
  -c, --correction                   enable base correction in overlapped regions (only for PE data), default is disabled
      --overlap_len_require            the minimum length to detect overlapped region of PE reads. This will affect overlap analysis based PE merge, adapter trimming and correction. 30 by default. (int [=30])
      --overlap_diff_limit             the maximum number of mismatched bases to detect overlapped region of PE reads. This will affect overlap analysis based PE merge, adapter trimming and correction. 5 by default. (int [=5])
      --overlap_diff_percent_limit     the maximum percentage of mismatched bases to detect overlapped region of PE reads. This will affect overlap analysis based PE merge, adapter trimming and correction. Default 20 means 20%. (int [=20])

  # UMI processing
  -U, --umi                          enable unique molecular identifier (UMI) preprocessing
      --umi_loc                      specify the location of UMI, can be (index1/index2/read1/read2/per_index/per_read, default is none (string [=])
      --umi_len                      if the UMI is in read1/read2, its length should be provided (int [=0])
      --umi_prefix                   if specified, an underline will be used to connect prefix and UMI (i.e. prefix=UMI, UMI=AATTCG, final=UMI_AATTCG). No prefix by default (string [=])
      --umi_skip                       if the UMI is in read1/read2, fastp can skip several bases following UMI, default is 0 (int [=0])

  # overrepresented sequence analysis
  -p, --overrepresentation_analysis    enable overrepresented sequence analysis.
  -P, --overrepresentation_sampling    One in (--overrepresentation_sampling) reads will be computed for overrepresentation analysis (1~10000), smaller is slower, default is 20. (int [=20])

  # reporting options
  -j, --json                         the json format report file name (string [=fastp.json])
  -h, --html                         the html format report file name (string [=fastp.html])
  -R, --report_title                 should be quoted with ' or ", default is "fastp report" (string [=fastp report])

  # threading options
  -w, --thread                       worker thread number, default is 3 (int [=3])

  # output splitting options
  -s, --split                        split output by limiting total split file number with this option (2~999), a sequential number prefix will be added to output name ( 0001.out.fq, 0002.out.fq...), disabled by default (int [=0])
  -S, --split_by_lines               split output by limiting lines of each file with this option(>=1000), a sequential number prefix will be added to output name ( 0001.out.fq, 0002.out.fq...), disabled by default (long [=0])
  -d, --split_prefix_digits          the digits for the sequential number padding (1~10), default is 4, so the filename will be padded as 0001.xxx, 0 to disable padding (int [=4])

  # help
  -?, --help                         print this message

References:

Shifu Chen. 2023. Ultrafast one-pass FASTQ data preprocessing, quality control, and deduplication using fastp. iMeta 2: e107. https://doi.org/10.1002/imt2.107

Shifu Chen, Yanqing Zhou, Yaru Chen, Jia Gu; fastp: an ultra-fast all-in-one FASTQ preprocessor, Bioinformatics, Volume 34, Issue 17, 1 September 2018, Pages i884–i890, https://doi.org/10.1093/bioinformatics/bty560
