Overcoming bias and systematic errors in next generation sequencing data
© BioMed Central Ltd 2010
Published: 10 December 2010
Considerable time and effort have been spent developing analysis and quality-assessment methods to allow the use of microarrays in a clinical setting. As is the case for microarrays and other high-throughput technologies, data from new high-throughput sequencing technologies are subject to technological and biological biases and systematic errors that can impact downstream analyses. Only when these issues can be readily identified and reliably adjusted for will clinical applications of these new technologies be feasible. Although much work remains to be done in this area, we describe consistently observed biases that should be taken into account when analyzing high-throughput sequencing data. In this article, we review current knowledge about these biases, discuss their impact on analysis results, and propose solutions.
While microarrays were rapidly accepted in research applications, incorporating them in clinical settings has required over a decade of benchmarking, standardization and the development of appropriate analysis methods. Extensive cross-platform and cross-laboratory analyses demonstrated the importance of low-level processing choices [1–3], including data summarization, normalization, and adjustment for laboratory or 'batch' effects, on outcome accuracy. Some of this work was done under the auspices of the Food and Drug Administration (FDA), most notably the Microarray Quality Control (MAQC) studies, which were developed specifically to assess the utility of microarray technologies in a clinical setting [5, 6]. Microarray-measured gene expression signatures now form the basis of several FDA-approved clinical diagnostic tests, including MammaPrint and Pathwork's Tissue of Origin test [7, 8].
With high-throughput sequencing still in its infancy, many questions remain to be addressed before approval for clinical applications can be expected. Although a study on the scale of the MAQC analyses for microarrays has yet to be carried out for sequencing (one is in the works), there is already evidence that similar technical biases are present in sequencing data, and these will need to be understood and adjusted for to enable use of these new technologies in a clinical setting. In this commentary, we present some of these known biases and discuss the current state of solutions aimed at addressing them. Looking ahead to the application of this new technology in the clinical setting, we see both hurdles and promise.
Biases arise when an observed measurement does not reflect the quantity to be measured due to a systematic distorting effect. For a concrete example, non-specific hybridization at microarray probes produces an observed intensity that is not an unbiased measure of the presence of the target sequence in the population being studied. Thorough investigation has revealed that the chemical composition of microarray probes influences this effect, and analysis methods have been developed to alleviate it.
Similarly, batch effects, whereby external factors such as processing time or technician have a systematic influence on experimental outcomes across a condition, have been seen in many high-throughput technologies, and can cause confounding without proper study design and analysis techniques [4, 10].
So far, there is evidence that these issues are present in experiments employing high-throughput sequencing data, indicating that similar precautions and methodological developments will be necessary before sequencing data can be used with confidence in the clinic.
High-throughput sequencing involves the parallel sequencing of millions of DNA fragments. Generally, these fragments are sequenced one base at a time, and, at each step or cycle, the current base is determined through fluorescent detection. For a review, see Holt and Jones. Although sequencing platform chemistries differ, in all cases care must be taken to avoid introducing bias at this early stage.
Focusing on the Illumina Genome Analyzer platform, base-call errors are not randomly distributed across the cycle positions in sequenced reads. Although not as extensively studied, similar biases have been observed and low-level signal correction methods have been developed for other sequencing platforms.
Incorrect base calls can have a deleterious impact downstream in aligning reads to the reference genome (resulting in fewer or incorrect alignments) and in variant detection (contributing to false-positive variant calls). In experiments aimed at detecting variants in genomic DNA, concern about false positives may lead researchers to employ stringent filtering criteria. Many researchers hypothesize that the discovery of rare variants will be a crucial next step in understanding the genetic causes of complex diseases, and overly strict filtering criteria may eliminate exactly the variants of most interest and impact. By improving the quality of nucleotide calls, either through better base calling or error correction, more accurate variant calls will be possible.
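To illustrate the cycle-dependent error pattern discussed above, the following sketch computes a per-cycle mean quality profile from Phred+33-encoded quality strings, the kind of diagnostic that flags error-prone cycles before filtering or trimming decisions are made. The reads and helper names are hypothetical, not drawn from any specific tool.

```python
# Sketch: per-cycle quality profiling for Illumina-style reads.
# Quality strings use Phred+33 ASCII encoding; a low mean score
# at a cycle flags positions prone to base-call errors.
# The example quality strings below are hypothetical.

def phred_scores(qual_string, offset=33):
    """Convert an ASCII quality string to a list of Phred scores."""
    return [ord(c) - offset for c in qual_string]

def per_cycle_mean_quality(qual_strings):
    """Mean Phred score at each cycle across equal-length reads."""
    n_cycles = len(qual_strings[0])
    totals = [0] * n_cycles
    for q in qual_strings:
        for i, score in enumerate(phred_scores(q)):
            totals[i] += score
    return [t / len(qual_strings) for t in totals]

# 'I' = Phred 40, 'H' = Phred 39, '#' = Phred 2.
quals = ["IIIIHHHH####", "IIIIIHHH####", "IIIHHHHH####"]
profile = per_cycle_mean_quality(quals)
# The collapse to Phred 2 in later cycles shows the typical
# quality decay, suggesting those cycles be trimmed or
# down-weighted rather than filtering out whole reads.
```

In practice a profile like this would be computed over millions of reads; the point is that quality is a function of cycle, so thresholds applied uniformly across positions can be too aggressive.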
Another long-observed phenomenon of high-throughput sequencing data is the strong, reproducible effect of local sequence content on the coverage of a genomic region by sequencing reads. This phenomenon is analogous to probe effects for microarray platforms. For sequencing projects where coverage levels are compared across regions, such as RNA-Seq, chromatin immunoprecipitation sequencing (ChIP-Seq) or copy number detection, this phenomenon can be particularly problematic.
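A minimal way to picture a sequence-content adjustment is to stratify genomic windows by GC fraction and rescale each window's read count by the median of its GC bin. This toy sketch is an assumed illustration of the idea, not any specific published correction method; the windows and counts are invented.

```python
# Toy GC-content coverage adjustment (illustrative only): bin
# windows by GC fraction, then rescale each window's raw read
# count by the median count of its GC bin, so systematic
# GC-driven inflation does not look like real signal.

from collections import defaultdict
from statistics import median

def gc_fraction(seq):
    """Fraction of G/C bases in a window's sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def gc_adjust(windows):
    """windows: list of (sequence, raw_count) pairs.
    Returns counts rescaled by the median of each GC bin."""
    bins = defaultdict(list)
    for seq, count in windows:
        bins[round(gc_fraction(seq), 1)].append(count)
    medians = {b: median(counts) for b, counts in bins.items()}
    return [count / medians[round(gc_fraction(seq), 1)]
            for seq, count in windows]

# Hypothetical windows: GC-rich windows show inflated raw counts,
# but after adjustment both classes center near 1.0.
windows = [("ATATATAT", 10), ("ATATATAT", 12),
           ("GCGCGCGC", 30), ("GCGCGCGC", 34)]
adjusted = gc_adjust(windows)
```

Published methods fit smooth curves to the GC-coverage relationship rather than coarse bins, but the binned version makes the stratify-and-rescale logic explicit.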
Genomic regions that are identical or highly similar to one another create ambiguity in alignment to the genome, and ambiguous reads are generally discarded. The low coverage in these regions can produce biased measurements or remove the regions from consideration in downstream analysis, potentially eliminating important signals from the data. Methods have been developed for taking this mappability property into account to adjust the observed signal in these regions.
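The mappability adjustment described above can be sketched very simply: scale each window's observed count by the fraction of its positions at which a read of the given length aligns uniquely, and mask windows so repetitive that no reliable estimate is possible. The scores, counts, and the `min_map` cutoff below are hypothetical.

```python
# Sketch: rescaling observed read counts by a precomputed
# mappability score (fraction of positions in a window where
# a read of the chosen length maps uniquely). Windows below a
# cutoff are masked (None) rather than inflated, since dividing
# a tiny count by a tiny score is unstable.

def mappability_adjust(counts, mappability, min_map=0.25):
    """Scale counts up in partially mappable windows; mask
    windows whose mappability falls below min_map."""
    adjusted = []
    for count, m in zip(counts, mappability):
        adjusted.append(count / m if m >= min_map else None)
    return adjusted

counts      = [100, 40, 5]      # observed reads per window
mappability = [1.0, 0.5, 0.1]   # fraction uniquely mappable
result = mappability_adjust(counts, mappability)
# A half-mappable window with 40 reads is rescaled to 80,
# comparable to the fully mappable window; the repeat-heavy
# window is masked instead of guessed at.
```

The masking choice reflects the point in the text: in near-duplicated regions the data simply cannot support a coverage estimate, and it is safer to exclude them than to extrapolate.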
Some spatial biases seem to be unique to the sample preparation protocol being used. Hansen et al. have shown that random hexamer priming can lead to coverage bias in RNA-Seq analyses, and Li et al. present a model for the non-uniformity of RNA-Seq read coverage. Both papers provide solutions to adjust for these biases and achieve more uniform coverage.
The primary way of avoiding batch effects is through careful experimental design. Randomization of all experimental variables across treatment conditions should be employed to avoid systematic effects within a condition. To correct for batch effects after the fact, they must first be detected and then adjusted for, whether through the use of covariates in linear models or more involved procedures such as surrogate variable analysis. These methods will work best when confounding between the technical variable and the outcome of interest is avoided; thus, careful experimental design is essential.
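The simplest covariate-style correction is to center each sample's measurements on its batch mean, so that a shift shared by all samples in a batch does not masquerade as biological signal. The sketch below uses hypothetical batch labels and values; real analyses would fit batch as a covariate in a full linear model, or use methods such as surrogate variable analysis as noted above, and, as the text stresses, no adjustment rescues a design in which batch is confounded with the condition of interest.

```python
# Sketch: mean-centering batch adjustment. Each value has its
# batch mean subtracted and the grand mean restored, removing a
# shared within-batch shift while preserving the overall level.
# Batches and values are hypothetical.

from collections import defaultdict

def remove_batch_means(values, batches):
    """Subtract each batch's mean from its members, then add back
    the grand mean so the global level is preserved."""
    grand_mean = sum(values) / len(values)
    by_batch = defaultdict(list)
    for v, b in zip(values, batches):
        by_batch[b].append(v)
    batch_mean = {b: sum(vs) / len(vs) for b, vs in by_batch.items()}
    return [v - batch_mean[b] + grand_mean
            for v, b in zip(values, batches)]

values  = [10.0, 12.0, 20.0, 22.0]   # batch B systematically higher
batches = ["A", "A", "B", "B"]
corrected = remove_batch_means(values, batches)
# The 10-unit offset between batches disappears, while the
# 2-unit within-batch difference (the "biology") survives.
```

Note that if every batch-B sample were also a treated sample, this same adjustment would silently erase the treatment effect, which is exactly why randomization across batches must come first.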
One challenge of using sequencing technologies in clinical applications is that conclusions are likely to be drawn by comparing newly acquired data with genome profiles derived from previously collected data. Interpreting findings derived from this type of comparison is made difficult by the batch effect. Better understanding of batch-to-batch variation and development of single-sample methods such as fRMA will be important steps forward in addressing this challenge.
Just as is the case for other high-throughput biological assays, high-throughput sequencing presents many challenges when it comes to avoiding bias and batch effects. Promising solutions to these problems are already in development, including: low-level improvements in base calling and error correction, improved per-position data quality metrics, adjustments to coverage estimates to alleviate context-specific or protocol-specific effects, and experimental designs that minimize potential confounding effects of batch. The lessons learned through the development of clinical applications of microarrays, such as the need for benchmark studies like those conducted by the MAQC project, should help accelerate the process of incorporating high-throughput sequencing into the clinic.
Abbreviations: FDA: Food and Drug Administration; MAQC: Microarray Quality Control.
The authors thank Sunduz Keles for sharing figures for this manuscript. Funding for this work was provided by NIH grant HG005220.