Approximate Bayesian Inference for Science

Three practical examples of latent variable model design for science

Linear models and their many extensions (linear mixed models, GLMs, matrix factorization) have important advantages. First, reliable inference procedures, sometimes with convergence guarantees and accompanying software, are widely available for these models. Second, these simple models are interpretable, in the sense that one can determine which covariate is most strongly associated with a given target. Often, there are also direct connections to hypothesis testing.
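
As a minimal sketch of this point (not from the manuscripts themselves, with hypothetical variable names), fitting an ordinary linear model yields coefficients that directly quantify each covariate's association with the target, together with classical hypothesis tests:

```python
# Sketch: interpretability of a linear model via statsmodels.
# Data and coefficients below are simulated for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))                               # hypothetical covariates
y = 2.0 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(size=n)    # linear response plus noise

fit = sm.OLS(y, sm.add_constant(X)).fit()
print(fit.params)    # each coefficient measures a covariate's association with y
print(fit.pvalues)   # Wald tests give the direct link to hypothesis testing
```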

To move beyond conjugacy and linearity, users of deep generative models sometimes give up some of these benefits. In these manuscripts, we study two practical problems that arise when using variational autoencoders: disentanglement of latent variables and hypothesis testing (in the broader context of decision-making). We also review recent advances and challenges in applying deep generative models to scientific discovery in molecular biology.
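
For readers less familiar with auto-encoding variational Bayes, the sketch below (a hedged illustration, not the code from these papers) shows the basic objective: an encoder parameterizes the variational distribution q(z|x), a decoder parameterizes p(x|z), and the evidence lower bound (ELBO) is maximized by stochastic gradients. Architecture sizes and names are hypothetical.

```python
# Minimal auto-encoding variational Bayes sketch in PyTorch (illustrative only).
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=10, h_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def elbo(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)      # reparameterization trick
        recon = nn.functional.binary_cross_entropy_with_logits(
            self.dec(z), x, reduction="none").sum(-1)                # -log p(x|z) for binary x
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1)   # KL(q(z|x) || N(0, I))
        return -(recon + kl).mean()                                  # ELBO to be maximized
```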

Romain Lopez
Postdoctoral Scholar

My research interests lie at the intersection of statistics, computation, and modeling. A significant part of my work is devoted to building faster, more statistically accurate machine learning software for analyzing biological data, with a focus on single-cell transcriptomics.

Publications

Large-scale differentiable causal discovery of factor graphs

A common theme in causal inference is learning causal relationships between observed variables, also known as causal discovery. This is …

Decision-making with auto-encoding variational Bayes

To make decisions based on a model fit by auto-encoding variational Bayes (AEVB), practitioners often let the variational distribution …

Enhancing scientific discoveries in molecular biology with deep generative models

Generative models provide a well-established statistical framework for evaluating uncertainty and deriving conclusions from large data …

Information constraints on auto-encoding variational Bayes

Parameterizing the approximate posterior of a generative model with neural networks has become a common theme in recent machine …