Systematic Control of Collective Variables Learned from Variational Autoencoders
Jacob Monroe, Vincent K. Shen
Variational autoencoders (VAEs) are rapidly gaining popularity within molecular simulation for discovering low-dimensional, or latent, representations, which are critical for both analyzing and accelerating simulations. However, it remains unclear how the information a VAE learns is connected to its probabilistic structure and, in turn, its loss function. Previous studies have focused on feature engineering, ad hoc modifications to loss functions, or adjustment of the prior to enforce desirable latent space properties. By applying effectively arbitrarily flexible priors via normalizing flows, we focus instead on how adjusting the structure of the decoding model impacts the learned latent coordinate. We systematically adjust the power and flexibility of the decoding distribution, observing that this has a significant impact on the structure of the latent space as measured by a suite of metrics developed in this work. By also varying weights on separate terms within each VAE loss function, we show that the level of detail encoded can be further tuned. This provides practical guidance for utilizing VAEs to extract varying resolutions of low-dimensional information from molecular dynamics and Monte Carlo simulations.
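The abstract refers to varying weights on separate terms of the VAE loss function to tune the level of detail encoded. As a minimal sketch (not the authors' code; all names here are hypothetical), a β-weighted VAE objective balances a reconstruction term against a KL-divergence term between a Gaussian posterior and a standard-normal prior:

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, logvar, beta=1.0):
    """Per-sample beta-weighted VAE loss: squared reconstruction error
    plus a scaled KL divergence between the diagonal-Gaussian posterior
    N(mu, exp(logvar)) and a standard-normal prior. Increasing beta
    pushes the latent code toward the prior, trading reconstruction
    detail for a more regular latent space."""
    recon = np.sum((x - x_recon) ** 2, axis=-1)  # Gaussian NLL up to constants
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)
    return recon + beta * kl
```

With `beta = 1` this reduces to the standard evidence lower bound (up to constants); larger values of `beta` emphasize latent regularity, which is one knob for controlling how much configurational detail the latent coordinate retains.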
Monroe, J. and Shen, V., Systematic Control of Collective Variables Learned from Variational Autoencoders, The Journal of Chemical Physics, [online], https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=934949 (Accessed December 1, 2022)