Wesley N. Griffin, Afzal A. Godil, Jeffrey W. Bullard, Judith E. Terrill, Amitabh Varshney, Somay Jain
Scientific simulations often generate large amounts of multivariate, time-varying volumetric data. Visualizing these volumes is essential for understanding the underlying scientific processes that generate them. In this paper, we present a method for obtaining a data-driven compact representation of such volumes using a deep convolutional autoencoder network. We show that the autoencoder learns high-level hierarchical features, giving insight into the distribution of the underlying data. Moreover, the compact representation has low enough storage requirements that it fits in Graphics Processing Unit (GPU) memory. The compact representation for a given time step is decompressed efficiently on the GPU, achieving interactive speeds for rendering and navigating large time-varying datasets. Finally, the compact representation can also be used to transmit very large volumes over bandwidth-constrained networks. We show that our proposed compact representation requires only 7% of the original memory while reconstructing the original volume with minimal error.
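The encode/compress/decode pipeline the abstract describes can be illustrated with a minimal sketch. The paper's method is a deep convolutional autoencoder; as a self-contained stand-in, the sketch below uses a truncated-SVD linear autoencoder on a synthetic smooth volume (all data, shapes, and the rank `k` here are illustrative assumptions, not the paper's architecture or numbers):

```python
import numpy as np

# Synthetic stand-in for one time step of a volumetric dataset:
# a smooth 3D Gaussian blob (the paper's data is multivariate and
# time-varying; this toy volume is only for illustration).
x = np.linspace(-1.0, 1.0, 32, dtype=np.float32)
volume = np.exp(-(x[:, None, None] ** 2
                  + x[None, :, None] ** 2
                  + x[None, None, :] ** 2))

# Flatten the volume into a matrix of slices so a linear
# autoencoder (truncated SVD) can compress it.
X = volume.reshape(32, -1)                 # 32 x 1024

# "Encoder": project onto the top-k left singular vectors.
k = 4                                      # assumed latent size
U, s, Vt = np.linalg.svd(X, full_matrices=False)
code = U[:, :k] * s[:k]                    # compact representation, 32 x k

# "Decoder": reconstruct the volume from the compact code.
X_hat = code @ Vt[:k]
recon = X_hat.reshape(volume.shape)

# Storage cost of the compact representation (code + basis)
# relative to the original volume, and the reconstruction error.
stored = code.size + Vt[:k].size
ratio = stored / volume.size
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

Because the toy volume is smooth (and separable), a tiny code reconstructs it almost exactly; the paper's convolutional autoencoder plays the analogous role for real simulation volumes, with the decoder evaluated on the GPU at render time.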
Proceedings of the Large Scale Data Analysis and Visualization (LDAV) Symposium
October 2, 2017