Fig. 1 | Genome Medicine

From: DeepGAMI: deep biologically guided auxiliary learning for multimodal integration and imputation to improve genotype–phenotype prediction

Architecture of DeepGAMI. (A) DeepGAMI first uses the available multimodal features to train the predictive model, e.g., SNP genotypes (orange) and gene expression (blue) of individuals from the major applications in this study. In particular, it learns the latent space of each modality (e.g., consisting of the latent features at the first transparent hidden layer). This learning step is also regularized by prior knowledge, enabling biological interpretability after prediction; i.e., the input and latent features are connected by biological networks (biological DropConnect). For instance, the input transcription factor genes can be connected to their target genes as latent features (e.g., C_GEX) by a gene regulatory network (GRN), and the input SNPs can be connected to their associated genes as latent features (e.g., C_SNP) by expression quantitative trait loci (eQTLs). Notably, an auxiliary learning layer is used to infer the latent space of one modality from another, i.e., cross-modality imputation. For instance, DeepGAMI learns a transfer function f(.) to estimate C_GEX from C_SNP. Finally, the latent features are concatenated and fed to feed-forward neural networks for phenotype prediction, e.g., classifying disease vs. control individuals. (B) Using the predictive model learned from multimodal input alone, DeepGAMI can predict phenotypes from a single modality, e.g., the SNP genotypes of new individuals. Specifically, it first imputes the other modality's latent space using the optimal transfer function f*(.) (e.g., C*_GEX from C_SNP) and then feeds both the imputed and input latent features into the downstream neural networks for prediction.
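To make the described architecture concrete, below is a minimal PyTorch-style sketch of the components named in the caption: prior-network-masked encoders ("biological DropConnect"), an auxiliary transfer function f(.) for cross-modality imputation, and a feed-forward classifier on the concatenated latent features. All class names, layer sizes, and training details here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedLinear(nn.Module):
    """Linear layer whose weights are masked by a prior biological network
    (eQTL SNP-gene links or a GRN), i.e., 'biological DropConnect' (sketch)."""

    def __init__(self, in_dim, out_dim, mask):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # mask: (out_dim, in_dim) binary tensor; 1 keeps a prior connection, 0 removes it
        self.register_buffer("mask", mask.float())

    def forward(self, x):
        return F.linear(x, self.linear.weight * self.mask, self.linear.bias)


class DeepGAMISketch(nn.Module):
    """Hypothetical two-modality model following the caption's description."""

    def __init__(self, n_snp, n_gene, n_latent_snp, n_latent_gex, n_classes,
                 eqtl_mask, grn_mask):
        super().__init__()
        # Modality-specific encoders regularized by prior networks
        self.snp_encoder = MaskedLinear(n_snp, n_latent_snp, eqtl_mask)   # SNPs -> C_SNP
        self.gex_encoder = MaskedLinear(n_gene, n_latent_gex, grn_mask)   # TF genes -> C_GEX
        # Auxiliary transfer function f(.): estimates C_GEX from C_SNP
        self.transfer = nn.Linear(n_latent_snp, n_latent_gex)
        # Feed-forward classifier on the concatenated latent features
        self.classifier = nn.Sequential(
            nn.Linear(n_latent_snp + n_latent_gex, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, snp, gex=None):
        c_snp = torch.relu(self.snp_encoder(snp))
        if gex is not None:
            # Multimodal input (training): use the observed expression modality
            c_gex = torch.relu(self.gex_encoder(gex))
        else:
            # Single-modality input (panel B): impute C*_GEX from C_SNP via f*(.)
            c_gex = torch.relu(self.transfer(c_snp))
        logits = self.classifier(torch.cat([c_snp, c_gex], dim=1))
        # The imputed latent features are also returned so an auxiliary loss
        # (e.g., MSE between transfer(C_SNP) and C_GEX) could be added during training
        return logits, self.transfer(c_snp), c_gex
```

In this sketch, training would combine a phenotype classification loss with an auxiliary imputation loss on the transfer function, and inference from genotypes alone simply calls the model with `gex=None`; whether the published model uses exactly these losses and activations is not stated in the caption.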
