Open Access

Consensus: a framework for evaluation of uncertain gene variants in laboratory test reporting

  • David K Crockett (1, 2),
  • Perry G Ridge (2),
  • Andrew R Wilson (2),
  • Elaine Lyon (2),
  • Marc S Williams (1, 3),
  • Scott P Narus (1),
  • Julio C Facelli (1) and
  • Joyce A Mitchell (1)

Genome Medicine 2012, 4:48

DOI: 10.1186/gm347

Received: 12 October 2011

Accepted: 28 May 2012

Published: 28 May 2012


Accurate interpretation of gene testing is a key component in customizing patient therapy. Where confirming evidence for a gene variant is lacking, computational prediction may be employed. A standardized framework, however, does not yet exist for quantitative evaluation of disease association for uncertain or novel gene variants in an objective manner. Here, complementary predictors for missense gene variants were incorporated into a weighted Consensus framework that includes calculated reference intervals from known disease outcomes. Data visualization for clinical reporting is also discussed.


For appropriate and effective patient treatment, relevant clinical information should be available to the clinician on demand. Accurate interpretation of gene test results, including phenotype association of gene variants, is an important component in customizing patient therapy. Recent endeavors such as the NCBI Genetic Testing Registry, MutaDATABASE, 1000 Genomes and the Human Variome Project draw attention to the growing interest in gene variant annotation and clinical interpretation in human disease [1-4]. Ongoing efforts over many years to catalog human genome variation have led to authoritative repositories of gene variants, with clear associations to disease phenotypes finally beginning to emerge [5-8].

Rapidly evolving technologies such as SNP chip genome-wide association studies and next-generation sequencing have lowered the cost and increased the speed of genomic analysis, yielding much larger data sets [9]. Currently, gene variants are being discovered at an unprecedented pace. One recent report found an average of 3 million variants per personal genome [10]. Unfortunately, an ever-widening gap exists between this fast growing collection of genetic variation and practical clinical interpretation due to a lack of understanding of the phenotypic consequences (if any) of any given variant. Although the number of genetic testing laboratories has remained around 600 over the past several years, recent data show that clinical testing is currently available for well over 2,200 different genes or genetic conditions [11]. As medical records increasingly incorporate genetic test information, improved decision support approaches are needed to provide clinicians with the preferred course of treatment [12, 13]. Furthermore, for decision support rules to be of value, the clinical relevance of laboratory information must be well understood [14, 15].

Updated recommendations on reporting and classification of sequence variants have been proposed by the American College of Medical Geneticists (ACMG), including approaches to help determine the clinical significance of variants of uncertain significance [16]. These guidelines delineate six interpretative categories of gene sequence variation, with defined classifications outlined, in the hope of a unified standard terminology for gene test reporting. For improving interpretation of unclassified genetic variants, definitions and terminology have also been recommended by the International Agency for Research on Cancer (IARC), part of the World Health Organization [17].

Despite these recommendations for genetic laboratories to unify and standardize the terminology and classification used in gene variant test reporting, various terms such as 'deleterious', 'mutation', 'pathogenic' or 'causative of disease' are still in use [18]. In a similar vein, test results such as 'indeterminate', 'unknown', 'uncertain', 'unclassified' and 'undetermined' make it difficult to interpret the significance of a gene test result. Further compounding the issue, modifiers such as 'likely', 'suspected', 'predicted' and 'mild', 'moderate' or 'severe' sometimes also accompany variant classification. Of this environment, one recent study perceptively noted, 'The outcome of this inconsistency for clinicians and patients in such cases is uninformative; unhelpful at best and, at worst, open to misinterpretation' [19]. In this light, the prevailing question becomes how best to help clinicians faced with decisions around gene variants of uncertain significance.

A brief review of the literature indicates that reporting rates for variants of uncertain significance range widely. One laboratory reported that 30% and 50% of sequence changes in BRCA1 and BRCA2, respectively, were classified as variants of uncertain significance [20]. Similarly, analysis at a second laboratory revealed that a physician ordering BRCA1 and BRCA2 testing had an equal likelihood (13%) of receiving an uncertain variant result as of receiving a test report containing a known pathogenic mutation [21]. More recent data indicate that identification of variants of uncertain significance has continued to decline, to approximately 5% of BRCA tests performed - a testament to the importance of maintaining and updating variant databases [22].

Another well-known example is hereditary nonpolyposis colorectal cancer syndrome, where, according to the US Preventive Services Task Force and others, a clinician may expect some 13% to 31% of test reports to state 'mutation of unknown significance' (uncertain variant) [23, 24]. An uncertain variant indicates that the risk of cancer is not fully defined, and patient treatment is then based on personal and family history of cancer. Clinicians may be further frustrated because the chance of receiving a test report containing an uncertain variant is even higher for individuals from under-represented ethnic groups, due to insufficient data on common polymorphisms from those populations [25]. Additionally, newly identified variants in known genes present a greater challenge for interpretation of sequence-based results because they lack traditional confirming evidence of disease association [26].

Clinician frustration and obstacles to wide adoption of proposed guidelines may be two-fold. First, the lack of any quantitative metric or standardized scale for evaluation of novel or uncertain gene variants makes the interpretation of each difficult test result subjective, dependent on the location and expertise at hand. A second and closely related challenge is the lack of an objective and standardized framework or context to make such a metric meaningful. A quantitative metric and evaluation framework become especially critical for interpretation of novel and uncertain gene variants, where there is an obvious lack of existing evidence such as family history, pedigree trios or sib pairs, confirming literature reports, biochemical evidence from bench assays, or colleague consensus on disease association.

In an effort to increase the transparency of providing gene variant evidence in test reporting to the clinical setting, we here present an implementation of our recently reported gene-specific predictor (Primary Sequence Amino Acid Properties (PSAAP)) into a standardized framework, in which results are systematically compared with those of other computer-based prediction methods for missense variants. Finally, with analogy to conventional laboratory testing, this Consensus model of complementary predictors also calculates gene variant 'reference intervals' using known disease outcomes. Examples of visualization are also explored for augmenting diagnostic decision making.

Materials and methods

Several clinically curated disease sets of gene variants with known pathogenicity are publicly available at ARUP Scientific Resource for Research and Education [27]. Each database relies on both medical and molecular expertise, and uniquely displays mutation and clinical information together. All sequence variants are verified for genomic position within a given reference gene and named following standard Human Genome Variation Society (HGVS) nomenclature [28]. Archived non-synonymous substitution variants were accessed from the RET proto-oncogene database in January 2012 [29].

Established prediction algorithms were chosen with various and complementary methodologies, such as amino acid substitution penalties, structural disruption, sequence homology (ortholog conservation) and neural nets. Mutation prediction was then performed for known benign (n = 46), known pathogenic (n = 51) and uncertain variants (n = 45) using our gene-specific PSAAP algorithm, and established algorithms MutPred [30], PMut [31], PolyPhen [32] and SIFT [33] as previously described [34, 35]. Prediction analysis was performed during December 2011 and January 2012 using the respective default settings for each algorithm.

Descriptive statistics (mean, median, standard deviation, minimum and maximum) were calculated for the numerical output of all five prediction algorithms. Normality of each predictor distribution was assessed using the Shapiro-Wilk test, in which the null hypothesis is that the data are normally distributed; a P-value greater than the chosen alpha level therefore indicates a normal distribution. For predictors with significantly non-normal distributions, Spearman correlation coefficients were calculated to evaluate correlation between predictors. To account for these correlations, and to establish a parsimonious subset of predictors, principal components were calculated using factor analysis. The resulting significant principal components were then used to develop a set of linearly independent predictor values, and the weighted average of the five predictor scores was calculated as the 'Consensus' score (Table 1). All calculations were performed using SAS software, version 9.1 (SAS Institute Inc., Cary, NC, USA).
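The statistical pipeline described above can be sketched as follows. This illustrative Python example (not the SAS implementation used in the study) applies the same steps, Shapiro-Wilk normality testing, Spearman correlation and principal components, to synthetic predictor scores:

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the five predictor outputs (rows = variants,
# columns = PSAAP, MutPred, PMut, PolyPhen, 1 - SIFT), all on a 0-1 scale.
rng = np.random.default_rng(0)
scores = rng.beta(2, 2, size=(97, 5))

# 1. Shapiro-Wilk normality test per predictor (null: data are normal).
shapiro_p = [stats.shapiro(scores[:, j]).pvalue for j in range(5)]

# 2. Spearman rank correlation between predictors (robust when non-normal).
rho, _ = stats.spearmanr(scores)

# 3. Principal components via eigendecomposition of the correlation matrix.
eigvals, eigvecs = np.linalg.eigh(rho)
order = np.argsort(eigvals)[::-1]                    # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
explained = np.cumsum(eigvals) / eigvals.sum()
n_keep = int(np.searchsorted(explained, 0.80) + 1)   # components covering >80%

# 4. Weighted 'Consensus' score: sum of projections onto the retained
#    components, scaled by 100 as in Table 1.
consensus = (scores @ eigvecs[:, :n_keep]).sum(axis=1) * 100
```

The synthetic scores stand in for real predictor output only to show the flow of the analysis; with correlated predictors, fewer components are needed to reach the 80% cumulative-variance threshold.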
Table 1

Working example of calculating the weighted Consensus score

Known pathogenic variant (a); predictor scores: PSAAP (b) = 0.85, MutPred (c) = 0.90, PMut (d) = 0.98, PolyPhen (e) = 0.97, SIFT (f) = 0.00

Vector1 = 0.85 × 0.56 + 0.90 × 0.62 + 0.98 × 0.46 + 0.97 × -0.27 + (1 - 0.00) × 0.06 = 1.283

Vector2 = 0.85 × 0.25 + 0.90 × 0.14 + 0.98 × 0.36 + 0.97 × 0.40 + (1 - 0.00) × 0.79 = 1.869

Vector3 = 0.85 × 0.25 + 0.90 × 0.22 + 0.98 × -0.06 + 0.97 × 0.77 + (1 - 0.00) × -0.54 = 0.559

Weighted sum (× 100) = 371.1

Known benign variant (a); predictor scores: PSAAP (b) = 0.07, MutPred (c) = 0.13, PMut (d) = 0.19, PolyPhen (e) = 0.04, SIFT (f) = 0.60

Vector1 = 0.07 × 0.56 + 0.13 × 0.62 + 0.19 × 0.46 + 0.04 × -0.27 + (1 - 0.60) × 0.06 = 0.220

Vector2 = 0.07 × 0.25 + 0.13 × 0.14 + 0.19 × 0.36 + 0.04 × 0.40 + (1 - 0.60) × 0.79 = 0.436

Vector3 = 0.07 × 0.25 + 0.13 × 0.22 + 0.19 × -0.06 + 0.04 × 0.77 + (1 - 0.60) × -0.54 = -0.150

Weighted sum (× 100) = 50.6

(a) RET_HUMAN (UniProt #P07949) used as reference amino acid sequence. (b) Primary Sequence Amino Acid Properties (PSAAP) algorithm, gene-specific trained. (c) Analyzed with default settings at [57]. (d) Analyzed with default settings at [58]. (e) Analyzed with default settings at [59]. (f) Analyzed with default settings at [60].
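The arithmetic of Table 1 can be reproduced directly. In this minimal sketch, the three eigenvector weightings and the two sets of predictor scores are taken verbatim from the table:

```python
import numpy as np

# Rows of W are the first three eigenvectors (predictor weights) from Table 1;
# score order is (PSAAP, MutPred, PMut, PolyPhen, 1 - SIFT).
W = np.array([
    [0.56, 0.62,  0.46, -0.27,  0.06],   # Vector1
    [0.25, 0.14,  0.36,  0.40,  0.79],   # Vector2
    [0.25, 0.22, -0.06,  0.77, -0.54],   # Vector3
])

pathogenic = np.array([0.85, 0.90, 0.98, 0.97, 1 - 0.00])
benign     = np.array([0.07, 0.13, 0.19, 0.04, 1 - 0.60])

def consensus(scores):
    # Sum of the three eigenvector projections, scaled by 100 and rounded
    # to one decimal as in the table.
    return round(float((W @ scores).sum() * 100), 1)

print(consensus(pathogenic))  # 371.1
print(consensus(benign))      # 50.6
```

The two printed values match the weighted sums reported in Table 1 for the known pathogenic and known benign examples.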

Next, with analogy to calculating analyte reference intervals for age or gender in traditional laboratory testing, a 'reference range' of Consensus scores for RET gene variants with known disease outcome was calculated using EP Evaluator 8 (Data Innovations, South Burlington, VT, USA). A nonparametric reference interval was used for benign (n = 46) and pathogenic (n = 51) variants, with 95% confidence intervals (CI) for the lower and upper bounds (Table 2). The confidence ratio of each reference interval was also calculated. Because the SIFT score runs on an inverted scale (a lower prediction value corresponds to more 'pathogenic'), 1 - SIFT was used. All predictor scores were normalized to a scale of 0 to 100.
Table 2

Consensus score reference intervals for RET gene variants

Disease outcome: Benign (n = 46)
Lower limit value: 85 (95% CI: < 76 to 98)
Upper limit value: 243 (95% CI: 231 to > 255)
Confidence ratio: > 0.09

Disease outcome: Pathogenic (n = 51)
Lower limit value: 305 (95% CI: < 287 to 319)
Upper limit value: 462 (95% CI: 458 to > 470)
Confidence ratio: > 0.16
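As an illustration of the reference interval calculation, the following sketch computes a central 95% nonparametric interval with bootstrap confidence intervals for each bound. It is a simplified stand-in for the EP Evaluator procedure, not a reimplementation of it:

```python
import numpy as np

def nonparametric_reference_interval(values, lower_pct=2.5, upper_pct=97.5,
                                     n_boot=2000, seed=0):
    # Central 95% nonparametric reference interval (2.5th to 97.5th
    # percentile) with bootstrap 95% confidence intervals for each bound.
    values = np.asarray(values, dtype=float)
    lower, upper = np.percentile(values, [lower_pct, upper_pct])

    # Resample with replacement and recompute each bound per resample.
    rng = np.random.default_rng(seed)
    boot = rng.choice(values, size=(n_boot, values.size), replace=True)
    lower_ci = np.percentile(np.percentile(boot, lower_pct, axis=1), [2.5, 97.5])
    upper_ci = np.percentile(np.percentile(boot, upper_pct, axis=1), [2.5, 97.5])
    return (lower, upper), lower_ci, upper_ci
```

With the benign and pathogenic Consensus score sets as input, this style of calculation yields an interval with uncertainty bounds comparable in spirit to those shown in Table 2.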

Finally, to better evaluate the performance of the Consensus framework, a comparison with Condel [36] and SNPs&GO [37] was performed. Five-fold cross-validation was also implemented using the Weka software package [38]. Cross-validation area under the curve (AUC) was calculated and plotted using R (v2.14.2) and the ROCR package, as shown in Figure 1. We also retrospectively removed seven RET gene variants with known disease association (two benign, five pathogenic) from the training and test sets and repeated the analysis using the proposed model framework. Disease outcome predictions and Consensus scoring were evaluated for each of these variants.
Figure 1

Cross-validation for individual predictor performance. (a) Calculated area under the curve (AUC) of RET predictor cross-validation results for individual predictor performance, including PSAAP (0.971), MutPred (0.845), PMut (0.698), PolyPhen-2 (0.555) and SIFT (0.630). (b) AUC results for the gene-specific predictor PSAAP (0.971), the combined predictor Consensus (0.998) and Condel (0.587), with overcalling of 'pathogenic' for actually benign variants.
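AUC itself can be computed without any plotting package via its rank-statistic definition. This sketch is an illustration only (the study used Weka and ROCR), and the fold evaluation assumes a precomputed score column rather than retraining per fold as the Weka protocol does:

```python
import numpy as np

def auc(y_true, scores):
    # AUC as the probability that a randomly chosen pathogenic variant
    # (y = 1) outscores a randomly chosen benign one (y = 0),
    # counting ties as 0.5 (Mann-Whitney formulation).
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def five_fold_auc(scores, labels, seed=0):
    # Per-fold AUC over a random five-way partition; folds that happen
    # to contain only one class are skipped.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(labels)), 5)
    return [auc(labels[f], scores[f]) for f in folds
            if 0 < labels[f].sum() < len(f)]
```

A perfectly separating score yields an AUC of 1.0 in every evaluable fold; the AUCs in Figure 1 reflect how far each real predictor falls short of that.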

Appropriate graphical summary of diagnostic information, including predictive algorithms, is key for visualization and interpretation of any results generated [39]. We have loosely based the Consensus display on output from representative algorithms such as Scolioscore and FibroTest; sample test reports are shown in Figures S1 and S2 in Additional file 1, respectively [40, 41]. Finally, the use of radar (radial) plots is well documented and serves to preserve the contribution of each predictor in the weighted Consensus sum [42, 43].

Results and discussion

Prediction results (numerical output) from the five algorithms were obtained for RET gene variants with known disease association of benign (Table S1 in Additional file 2) and pathogenic (Table S2 in Additional file 2). Predictor results for RET gene variants with no reported disease association (uncertain) are summarized in Table S3 in Additional file 2. Correlations between predictors and their significance are summarized in Table S4 in Additional file 2. Substantial correlation was seen among at least three of the five predictors (MutPred, PSAAP and PMut). This significant correlation between variables indicates that a simple linear sum could not be used to combine the prediction scores; a weighted predictor sum (Consensus) required linear transformation of the predictor outputs, as determined by factor analysis.

Factor analysis was performed using principal components to determine weights of association between the five different predictors. More specifically, a set of eigenvectors was applied to weight each predictor according to the eigenvalues from principal components, with > 80% cumulative variance explained using only the first three eigenvalues. Factor analysis and the cumulative percent variance explained by the eigenvectors are detailed in Figure S3 in Additional file 1. PRINCOMP results and eigenvalues are summarized in Table S5 in Additional file 2.

A working example of the Consensus score for both a known benign and known pathogenic RET gene variant is detailed in Table 1, where each predictor sum is weighted and scaled to 100. Using this same method to sum each of the five predictors for each gene variant, we then computed reference range metrics for benign and pathogenic variants for the RET proto-oncogene. Benign variants ranged from 85 to 243, while pathogenic variants ranged from 305 to 462. Confidence ratios for the calculated reference intervals were 0.09 and 0.16, respectively. The RET gene variant Consensus reference intervals are summarized using scatter plot distribution of scores for benign and pathogenic as displayed in Figure 2. Further demonstrating the utility of a reference interval metric for gene variants, the distribution of Consensus scores for prediction of RET uncertain gene variants shows approximate groupings into reference interval ranges also plotted in Figure 2.
Figure 2

Gene-specific reference intervals. Scatter plot visualization of unweighted Consensus scores for RET gene variants, including known benign, known pathogenic disease association and gene variants of uncertain significance, demonstrating the utility of reference interval metrics for predicted benign and predicted pathogenic.
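Interpreting a new Consensus score against these reference intervals amounts to a range check. A minimal sketch, with the RET interval bounds taken from the text (benign 85 to 243, pathogenic 305 to 462):

```python
def classify_consensus(score, benign=(85, 243), pathogenic=(305, 462)):
    # Interpret a weighted Consensus score against gene-specific
    # reference intervals; scores outside both intervals are left
    # indeterminate rather than forced into a call.
    if benign[0] <= score <= benign[1]:
        return "predicted benign"
    if pathogenic[0] <= score <= pathogenic[1]:
        return "predicted pathogenic"
    return "indeterminate"
```

For example, the Figure 4 variants score 359 (within the pathogenic interval) and 108 (within the benign interval), while a score falling in the gap between the intervals would remain indeterminate.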

In combination, the overall Consensus score may compensate in the rare instance that a gene-specific prediction does not outperform the existing tools. Some advantage of Consensus over existing predictors was demonstrated by a comparison with recently popular tools such as Condel and SNPs&GO [36, 37]. The comparison demonstrated surprisingly accurate agreement for gene variants (n = 121) with known pathogenic disease outcomes, where Condel showed 99.2% agreement and SNPs&GO 93.4%. For variants with known benign outcomes (n = 67), however, Condel scored only 17.1% agreement, while SNPs&GO was slightly better at 28.6%. Five-fold cross-validation showed acceptable reproducibility, with 97.9% precision and 93.5% recall, yet a trend of overcalling disease-causing predictions was readily apparent, as seen in Figure 1.

Further, to approximate the longitudinal, moving target of phenotype curation, Consensus performance was also retrospectively confirmed by removing seven RET gene variants with known disease association that were originally classified as variants of uncertain significance. After excluding these seven variants from the gene-specific training set, analysis using the Consensus framework was repeated. Due to the lack of a representative variant in the training data, PSAAP alone called disease association correctly for only five of the seven variants. In combination, however, the Consensus score correctly predicted the sixth variant. Closer inspection showed that the remaining seventh variant was a nucleotide-level 'silent' polymorphism (no amino acid change), which might have been recognized by splice effect prediction software.

Finally, one common graphing display technique to preserve contribution of each variable (predictor) is the use of radial plots (also known as radar or spider plots). RET Consensus scoring results (unweighted) for the pathogenic variant C609Y and benign variant V376A are shown using radar plots in Figure 3. For augmenting clinical decision making, a more comprehensive display for Consensus scoring is shown in Figure 4, which incorporates algorithm output, predictor calls, weighted sum and colorimetric scale.
Figure 3

Plotting individual predictor contribution. Using radar plots for Consensus scoring preserves the contribution of each predictor to the total sum. (a) Consensus score plot of 470 (85, 90, 98, 97, 100) for the pathogenic gene variant C609Y. (b) Consensus output of 103 (7, 13, 19, 4, 60) for a benign variant V376A. Individual predictor scores are shown here as unweighted.
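The radar plot geometry is straightforward to reproduce: each predictor gets an axis at a fixed angle, and the unweighted score sets the distance from the origin along that axis. A small sketch of the vertex computation (illustrative only; the published figures were presumably produced with standard charting software):

```python
import math

def radar_vertices(scores):
    # Polygon vertices for a radar/spider plot: one axis per predictor,
    # starting at 12 o'clock and proceeding clockwise; each unweighted
    # score sets the distance from the origin along its axis.
    n = len(scores)
    return [(s * math.cos(math.pi / 2 - 2 * math.pi * k / n),
             s * math.sin(math.pi / 2 - 2 * math.pi * k / n))
            for k, s in enumerate(scores)]

# Unweighted scores for the pathogenic variant C609Y from Figure 3;
# their plain sum gives the unweighted Consensus output of 470.
c609y = [85, 90, 98, 97, 100]
vertices = radar_vertices(c609y)
```

Because each score keeps its own axis, a single dissenting predictor shows up as a dent in the polygon rather than being hidden inside the summed Consensus value.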
Figure 4

Consensus score display. (a, b) Visualization of the five-predictor Consensus model, including algorithm output, predictor calls, weighting sum and colorimetric scale for pathogenic gene variant C634R (a), scoring 359, and benign variant G691S (b) with a Consensus score of 108.

Currently, there is no widely accepted computational predictor in clinical use for evaluating uncertain gene variants. Furthermore, the lack of a standardized framework and quantitative metric for evaluating disease association of novel and uncertain variants remains an obstacle to widespread implementation of proposed guidelines and definitions for gene test reporting. The analogy of conventional laboratory analyte testing, with established cutoffs and reference intervals, may serve as a pattern for gene variant testing. In this regard, we have developed a standardized framework and metric for evaluation of uncertain gene variants, with the idea that rather than giving a clinician a 'black box' interpretation of uncertain gene variants, the evidence and decision-making are transparent to clinicians, who can use this in consultation with the patient to make treatment decisions.

It is likely that providing this type of information will impact clinical decision making. While critics may argue that relying solely on a computational framework might 'mislead' clinicians in that we do not have the best evidence (that is, a true known genotype-phenotype correlation), the reality is that clinicians still have to make treatment decisions based on any 'uncertain significance' result. We propose that increasing the transparency of gene test evidence and interpretation would only help the clinician as compared to a situation where results that are on the border of benign and those on the border of pathogenic are treated the same. As Consensus is implemented into a laboratory setting, coordination with a clinical site to test how clinicians use the information would be an important and necessary follow-up study.

The lack of a widely accepted standard for computational predictors in a clinical setting remains a serious obstacle to the diagnostic utility of these algorithms. Gene-specific prediction algorithms have been shown to improve on existing generalized prediction tools, where a larger data set 'n' for training algorithms may not compensate for lower quality phenotype information. Examples of this gene-disease-specific focus using computational prediction have recently been shown for hypertrophic cardiomyopathy and the RET proto-oncogene [35, 44]. We have recently summarized similar efforts in gene-specific prediction for an authoritative 20 gene-disease data set, showing similarly improved prediction [45]. Focusing prediction algorithms on authoritative and specific gene-disease settings may help bridge this acceptance gap and shed additional light on clinical interpretation of uncertain gene variants. With ongoing efforts to amass gene variation in human disease, newly emerging 'authoritative' or 'diagnostic grade' clinically curated gene variant archives should be leveraged for training and testing machine learning classification tools.

Medical geneticists rely on patient history, family segregation, literature review and trusted colleagues to stay informed of the phenotypic consequences of a given gene variant. In addition, well established computer prediction tools are often employed [16, 46]. One recent report (Condel) details combining various algorithms into a single score to assess 'deleteriousness' in nonsynonymous (missense) variants [36]. Supporting computational methods may serve to replicate this same mental process of gathering evidence from complementary sources, assessing agreement of the evidence and summarizing this evidence into a clinical context for interpretation of the gene variant finding [47].

For scenarios lacking conventional gene variant evidence, the five specific predictors used for Consensus were carefully chosen for the varied and complementary computational approach of each algorithm. Analysis of variance shows that the majority of the weighted average stems from three of the five predictors (PSAAP, MutPred and PMut). This may reflect the unique and varied approaches of these three predictors. SIFT and PolyPhen were also included in the Consensus score for a 'wisdom of the crowd' historical context, because many laboratories already have these prediction algorithms in use.

One limitation of this methodology is that although several popular gene variant collections are ongoing (dbSNP has recently passed the milestone of 12 million unique human gene variants), relatively few clinically curated, authoritative gene-disease collections of the kind used for diagnostic purposes exist. Fortunately, this number will likely continue to expand over time as gene-disease associations are better understood and personalized patient treatments advance. Another limitation is that mutation archives often have an unbalanced proportion of disease-causing gene variants, and appropriate machine learning techniques must be used to compensate for uneven training and test data. In addition, a given gene variant may not only result in a missense change, as evaluated here, but may also impact splicing and translation of a gene product, and thus be deleterious even when an apparently benign effect is expected.

Perhaps the most important limitation to acknowledge is the question of how we can know whether a prediction for a gene variant of uncertain significance is truly correct. The honest response is likely 'we cannot'. While only the passage of time may confirm the accuracy of a computational prediction, an important point not to dismiss is whether this approach (or a similar one) would likely lead to better or worse decision-making by providers. One recent article points to the importance of careful curation in locus-specific databases [48], and these collections should be leveraged for algorithm improvements. There may also be analogous situations in other existing laboratory tests; anatomic pathology, for example, offers a model that clinicians rely on for decision-making: the pathology report contains all the information, not just the 'interpretation'. Importantly, this implies that more information (not less) is appropriate for clinician decision-making [19, 49].

Another key issue is that disease classification of gene variants evolves over time as new knowledge becomes available. We note that this is a problem whether one uses this proposed framework or the status quo system for dealing with gene test results of uncertain significance. At present, there is no way to communicate new variant knowledge effectively between gene test laboratories and clinicians. Thus, a standardized framework would allow for consistent and objective data provenance for longitudinal tracking of both variants and patient results, where notifying interested parties in updated variant classification and disease association would be more feasible. We also note that development of this framework now using monogenic diseases may allow increased understanding that could eventually be applied to multi-gene panel or whole genome approaches.

There may be some perceived liability of a laboratory that would report using this augmented methodology as compared to existing gene test reporting approaches. Although correlation of genotype-phenotype offers therapeutic options that would otherwise remain hidden and may lead to disease-specific, mutation-guided management strategies, appropriate caution is justified when clinicians are asked to trust computational outcomes for determining patient care [50]. On the other hand, when results are reported to clinicians and patients as variants of unknown significance, it may take years for sufficient molecular or family evidence to be confirmed for the laboratory to make a final determination. Interpretation of gene test results that are unclear or uncertain may be troubling for patients, and must have some effect (good or bad) on how clinicians manage these patients [51]. Transparent communication of summarized gene variant evidence and continued interaction between clinicians and laboratorians to refine mutation-specific clinical classification is imperative to optimal patient care. Recent examples of this importance have been detailed in newborn screening and case studies from cardiovascular genetics [52, 53].


The vision of personalized medicine invokes an image of all relevant information being available to clinicians on demand. Proper interpretation of gene test results is one key area in customizing patient therapy [54, 55]. Gene variants are currently being identified at a tremendous pace. While many of these sequence changes may be considered as normal population allele variants, some percentage will certainly have disease association. Gene variants may be best leveraged for clinical utility by focusing on specific gene-disease areas. In concert, clinicians and diagnostic laboratories are the best source of authoritative gene variant annotation. Ranking agreement through the use of a weighted Consensus metric of predicted pathogenicity across several complementary algorithms may provide a level of clinical confidence in computational classifiers.

A proposed visual for augmenting the gene test report of an uncertain gene variant using known benign and pathogenic gene variants mapped onto a schematic of the RET protein is displayed in Figure 5. The protein diagram image is courtesy of the Human Protein Reference Database [56]. The variant being evaluated is denoted by 'X' along the length of the protein and Consensus scoring of the variant is detailed using both the reference intervals with colorimetric scale and a radial chart to show the contribution of each predictor. Ongoing efforts include expanding the Consensus scoring framework and phenotype reference intervals to additional genes and diseases. Future efforts will be necessary to incorporate algorithm layers for nucleotide-level prediction and functional protein motifs.
Figure 5

Protein view with Consensus scoring. (a, b) Proposed visualization of Consensus scoring using known gene variants plotted on the RET_HUMAN (UniProt #P07949) protein (image courtesy of the Human Protein Reference Database [56]) and weighted algorithm output with radar plots to summarize predictor evidence for pathogenic gene variant C634R (a), scoring 359, and benign variant G691S (b) with a Consensus score of 108.



Abbreviations

AUC: area under the curve; CI: confidence interval; PSAAP: Primary Sequence Amino Acid Properties; SNP: single nucleotide polymorphism.



This work has been partially supported by ARUP Institute for Clinical and Experimental Pathology®, National Library of Medicine (NLM) training grant #LM007124 and National Center for Research Resources (NCRR) Clinical and Translational Science Award #1KL2RR025763-01.

Authors’ Affiliations

(1) University of Utah School of Medicine, Biomedical Informatics
(2) ARUP Institute for Clinical and Experimental Pathology
(3) Intermountain Healthcare Clinical Genetics Institute


References

  1. Javitt G, Katsanis S, Scott J, Hudson K: Developing the blueprint for a genetic testing registry. Public Health Genomics. 2010, 13: 95-105. doi:10.1159/000226593.
  2. Bale S, Devisscher M, Van Criekinge W, Rehm HL, Decouttere F, Nussbaum R, Dunnen JT, Willems P: MutaDATABASE: a centralized and standardized DNA variation database. Nat Biotechnol. 2011, 29: 117-118. doi:10.1038/nbt.1772.
  3. Durbin RM, Abecasis GR, Altshuler DL, Auton A, Brooks LD, Durbin RM, Gibbs RA, Hurles ME, McVean GA: A map of human genome variation from population-scale sequencing. Nature. 2010, 467: 1061-1073. doi:10.1038/nature09534.
  4. Cotton RG, Al Aqeel AI, Al-Mulla F, Carrera P, Claustres M, Ekong R, Hyland VJ, Macrae FA, Marafie MJ, Paalman MH, Patrinos GP, Qi M, Ramesar RS, Scott RJ, Sijmons RH, Sobrido MJ, Vihinen M: Capturing all disease-causing mutations for clinical and research use: toward an effortless system for the Human Variome Project. Genet Med. 2009, 11: 843-849. doi:10.1097/GIM.0b013e3181c371c5.
  5. Thony B, Blau N: Mutations in the BH4-metabolizing genes GTP cyclohydrolase I, 6-pyruvoyl-tetrahydropterin synthase, sepiapterin reductase, carbinolamine-4a-dehydratase, and dihydropteridine reductase. Hum Mutat. 2006, 27: 870-878. doi:10.1002/humu.20366.
  6. Li W, Sun L, Corey M, Zou F, Lee S, Cojocaru A, Taylor C, Blackman S, Stephenson A, Sandford A, Dorfman R, Drumm M, Cutting G, Knowles M, Durie P, Wright F, Strug L: Understanding the population structure of North American patients with cystic fibrosis. Clin Genet. 2011, 79: 136-146. doi:10.1111/j.1399-0004.2010.01502.x.
  7. Crockett DK, Pont-Kingdon G, Gedge F, Sumner K, Seamons R, Lyon E: The Alport syndrome COL4A5 variant database. Hum Mutat. 2010, 31: E1652-1657. doi:10.1002/humu.21312.
  8. Calderon FR, Phansalkar AR, Crockett DK, Miller M, Mao R: Mutation database for the galactose-1-phosphate uridyltransferase (GALT) gene. Hum Mutat. 2007, 28: 939-943. doi:10.1002/humu.20544.
  9. Li C: Personalized medicine - the promised land: are we there yet? Clin Genet. 2010, 79: 403-412.
  10. Moore B, Hu H, Singleton M, Reese MG, De La Vega FM, Yandell M: Global analysis of disease-related DNA sequence variation in 10 healthy individuals: implications for whole genome-based clinical diagnostics. Genet Med. 2011, 13: 210-217. doi:10.1097/GIM.0b013e31820ed321.
  11. GeneTests. []
  12. Hoffman MA: The genome-enabled electronic medical record. J Biomed Inform. 2007, 40: 44-46. doi:10.1016/j.jbi.2006.02.010.
  13. Ashley EA, Butte AJ, Wheeler MT, Chen R, Klein TE, Dewey FE, Dudley JT, Ormond KE, Pavlovic A, Morgan AA, Pushkarev D, Neff NF, Hudgins L, Gong L, Hodges LM, Berlin DS, Thorn CF, Sangkuhl K, Hebert JM, Woon M, Sagreiya H, Whaley R, Knowles JW, Chou MF, Thakuria JV, Rosenbaum AM, Zaranek AW, Church GM, Greely HT, Quake SR, et al: Clinical assessment incorporating a personal genome. Lancet. 2010, 375: 1525-1535. doi:10.1016/S0140-6736(10)60452-7.
  14. Marshall E: Human genome 10th anniversary. Human genetics in the clinic, one click away. Science. 2011, 331: 528-529. doi:10.1126/science.331.6017.528.
  15. Ormond KE, Wheeler MT, Hudgins L, Klein TE, Butte AJ, Altman RB, Ashley EA, Greely HT: Challenges in the clinical application of whole-genome sequencing. Lancet. 2010, 375: 1749-1751. doi:10.1016/S0140-6736(10)60599-5.
  16. Richards CS, Bale S, Bellissimo DB, Das S, Grody WW, Hegde MR, Lyon E, Ward BE: ACMG recommendations for standards for interpretation and reporting of sequence variations: revisions 2007. Genet Med. 2008, 10: 294-300. doi:10.1097/GIM.0b013e31816b5cae.
  17. Goldgar DE, Easton DF, Byrnes GB, Spurdle AB, Iversen ES, Greenblatt MS: Genetic evidence and integration of various data sources for classifying uncertain variants into a single model. Hum Mutat. 2008, 29: 1265-1272. doi:10.1002/humu.20897.
  18. Vos J, van Asperen CJ, Wijnen JT, Stiggelbout AM, Tibben A: Disentangling the Babylonian speech confusion in genetic counseling: an analysis of the reliability and validity of the nomenclature for BRCA1/2 DNA-test results other than pathogenic. Genet Med. 2009, 11: 742-749. doi:10.1097/GIM.0b013e3181b2e608.
  19. Plon SE, Eccles DM, Easton D, Foulkes WD, Genuardi M, Greenblatt MS, Hogervorst FB, Hoogerbrugge N, Spurdle AB, Tavtigian SV: Sequence variant classification and reporting: recommendations for improving the interpretation of cancer susceptibility genetic test results. Hum Mutat. 2008, 29: 1282-1291. doi:10.1002/humu.20880.
  20. Gomez-Garcia EB, Ambergen T, Blok MJ, van den Wijngaard A: Patients with an unclassified genetic variant in the BRCA1 or BRCA2 genes show different clinical features from those with a mutation. J Clin Oncol. 2005, 23: 2185-2190. doi:10.1200/JCO.2005.07.013.
  21. Frank TS, Deffenbaugh AM, Reid JE, Hulick M, Ward BE, Lingenfelter B, Gumpper KL, Scholl T, Tavtigian SV, Pruss DR, Critchfield GC: Clinical characteristics of individuals with germline mutations in BRCA1 and BRCA2: analysis of 10,000 individuals. J Clin Oncol. 2002, 20: 1480-1490. doi:10.1200/JCO.20.6.1480.
  22. Easton DF, Deffenbaugh AM, Pruss D, Frye C, Wenstrup RJ, Allen-Brady K, Tavtigian SV, Monteiro AN, Iversen ES, Couch FJ, Goldgar DE: A systematic genetic assessment of 1,433 sequence variants of unknown clinical significance in the BRCA1 and BRCA2 breast cancer-predisposition genes. Am J Hum Genet. 2007, 81: 873-883. doi:10.1086/521032.
  23. Commonwealth Hematology-Oncology/Genetic Counseling. []
  24. Chung DC, Rustgi AK: The hereditary nonpolyposis colorectal cancer syndrome: genetics and clinical implications. Ann Intern Med. 2003, 138: 560-570.
  25. John EM, Miron A, Gong G, Phipps AI, Felberg A, Li FP, West DW, Whittemore AS: Prevalence of pathogenic BRCA1 mutation carriers in 5 US racial/ethnic groups. JAMA. 2007, 298: 2869-2876. doi:10.1001/jama.298.24.2869.
  26. Botkin JR, Teutsch SM, Kaye CI, Hayes M, Haddow JE, Bradley LA, Szegda K, Dotson WD: Outcomes of interest in evidence-based evaluations of genetic tests. Genet Med. 2011, 12: 228-235.
  27. ARUP Scientific Resource for Research and Education: Disease Databases. []
  28. HGVS: Nomenclature for the description of sequence variants. []
  29. Margraf RL, Crockett DK, Krautscheid PM, Seamons R, Calderon FR, Wittwer CT, Mao R: Multiple endocrine neoplasia type 2 RET protooncogene database: repository of MEN2-associated RET sequence variation and reference for genotype/phenotype correlations. Hum Mutat. 2009, 30: 548-556. doi:10.1002/humu.20928.
  30. Li B, Krishnan VG, Mort ME, Xin F, Kamati KK, Cooper DN, Mooney SD, Radivojac P: Automated inference of molecular mechanisms of disease from amino acid substitutions. Bioinformatics. 2009, 25: 2744-2750. doi:10.1093/bioinformatics/btp528.
  31. Ferrer-Costa C, Gelpi JL, Zamakola L, Parraga I, de la Cruz X, Orozco M: PMUT: a web-based tool for the annotation of pathological mutations on proteins. Bioinformatics. 2005, 21: 3176-3178. doi:10.1093/bioinformatics/bti486.
  32. Ramensky V, Bork P, Sunyaev S: Human non-synonymous SNPs: server and survey. Nucleic Acids Res. 2002, 30: 3894-3900. doi:10.1093/nar/gkf493.
  33. Ng PC, Henikoff S: SIFT: predicting amino acid changes that affect protein function. Nucleic Acids Res. 2003, 31: 3812-3814. doi:10.1093/nar/gkg509.
  34. Crockett DK, Piccolo SR, Narus SP, Mitchell JA, Facelli JC: Computational feature selection and classification of RET phenotypic severity. J Data Mining in Genom Proteomics. 2010, 1: 1-4.
  35. Crockett DK, Lyon E, Williams MS, Narus SP, Facelli JC, Mitchell JA: Predicting phenotypic severity of uncertain gene variants in the RET proto-oncogene. PLoS One. 2011, 6: e18380. doi:10.1371/journal.pone.0018380.
  36. Gonzalez-Perez A, Lopez-Bigas N: Improving the assessment of the outcome of nonsynonymous SNVs with a consensus deleteriousness score, Condel. Am J Hum Genet. 2011, 88: 440-449. doi:10.1016/j.ajhg.2011.03.004.
  37. Calabrese R, Capriotti E, Fariselli P, Martelli PL, Casadio R: Functional annotations improve the predictive score of human disease-related mutations in proteins. Hum Mutat. 2009, 30: 1237-1244. doi:10.1002/humu.21047.
  38. Frank E, Hall M, Trigg L, Holmes G, Witten IH: Data mining in bioinformatics using Weka. Bioinformatics. 2004, 20: 2479-2481. doi:10.1093/bioinformatics/bth261.
  39. Whiting PF, Sterne JA, Westwood ME, Bachmann LM, Harbord R, Egger M, Deeks JJ: Graphical presentation of diagnostic information. BMC Med Res Methodol. 2008, 8: 20. doi:10.1186/1471-2288-8-20.
  40. Axial Biotech Scolioscore. []
  41. BioPredictive. []
  42. Cole WG: Integrality and meaning: essential and orthogonal dimensions of graphical data display. Proc Annu Symp Comput Appl Med Care. 1993, 404-408.
  43. Saary MJ: Radar plots: a useful way for presenting multivariate health care data. J Clin Epidemiol. 2008, 61: 311-317. doi:10.1016/j.jclinepi.2007.04.021.
  44. Jordan DM, Kiezun A, Baxter SM, Agarwala V, Green RC, Murray MF, Pugh T, Lebo MS, Rehm HL, Funke BH, Sunyaev SR: Development and validation of a computational method for assessment of missense variants in hypertrophic cardiomyopathy. Am J Hum Genet. 2011, 88: 183-192. doi:10.1016/j.ajhg.2011.01.011.
  45. Crockett DK, Lyon E, Williams MS, Narus SP, Facelli JC, Mitchell JA: Utility of gene-specific algorithms for predicting pathogenicity of uncertain gene variants. J Am Med Inform Assoc. 2012, 19: 207-211. doi:10.1136/amiajnl-2011-000309.
  46. Bayrak-Toydemir P, McDonald J, Mao R, Phansalkar A, Gedge F, Robles J, Goldgar D, Lyon E: Likelihood ratios to assess genetic evidence for clinical significance of uncertain variants: hereditary hemorrhagic telangiectasia as a model. Exp Mol Pathol. 2008, 85: 45-49. doi:10.1016/j.yexmp.2008.03.006.
  47. Goldgar DE, Easton DF, Byrnes GB, Spurdle AB, Iversen ES, Greenblatt MS: Genetic evidence and integration of various data sources for classifying uncertain variants into a single model. Hum Mutat. 2008, 29: 1265-1272. doi:10.1002/humu.20897.
  48. Samuels ME, Rouleau GA: The case for locus-specific databases. Nat Rev Genet. 2011, 12: 378-379. doi:10.1038/nrg3011.
  49. Gulley ML, Braziel RM, Halling KC, Hsi ED, Kant JA, Nikiforova MN, Nowak JA, Ogino S, Oliveira A, Polesky HF, Silverman L, Tubbs RR, Van Deerlin VM, Vance GH, Versalovic J: Clinical laboratory reports in molecular pathology. Arch Pathol Lab Med. 2007, 131: 852-863.
  50. Tchernitchko D, Goossens M, Wajcman H: In silico prediction of the deleterious effect of a mutation: proceed with caution in clinical genetics. Clin Chem. 2004, 50: 1974-1978. doi:10.1373/clinchem.2004.036053.
  51. van Dijk S, van Asperen CJ, Jacobi CE, Vink GR, Tibben A, Breuning MH, Otten W: Variants of uncertain clinical significance as a result of BRCA1/2 testing: impact of an ambiguous breast cancer risk message. Genet Test. 2004, 8: 235-239. doi:10.1089/gte.2004.8.235.
  52. Botkin JR, Clayton EW, Fost NC, Burke W, Murray TH, Baily MA, Wilfond B, Berg A, Ross LF: Newborn screening technology: proceed with caution. Pediatrics. 2006, 117: 1793-1799. doi:10.1542/peds.2005-2547.
  53. Caleshu C, Day S, Rehm HL, Baxter S: Use and interpretation of genetic tests in cardiovascular genetics. Heart. 2010, 96: 1669-1675.
  54. Del Fiol G, Williams MS, Maram N, Rocha RA, Wood GM, Mitchell JA: Integrating genetic information resources with an EHR. AMIA Annu Symp Proc. 2006, 904.
  55. Lubin IM, McGovern MM, Gibson Z, Gross SJ, Lyon E, Pagon RA, Pratt VM, Rashid J, Shaw C, Stoddard L, Trotter TL, Williams MS, Amos Wilson J, Pass K: Clinician perspectives about molecular genetic testing for heritable conditions and development of a clinician-friendly laboratory report. J Mol Diagn. 2009, 11: 162-171. doi:10.2353/jmoldx.2009.080130.
  56. Human Protein Reference Database. []
  57. MutPred. []
  58. Molecular Modelling & Bioinformatics Group: PMut. []
  59. PolyPhen-2. []
  60. SIFT Sequence. []


© Crockett et al.; licensee BioMed Central Ltd. 2012

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.