The Polymer Analytics project was established to accelerate the discovery of new polymer physics by developing datasets, methods, and tools that integrate the four scientific paradigms: experiment, theory, computation, and machine learning. The project targets broad challenges including the lack of large datasets for machine learning, limited computational resources, complicated industrial formulations, and polymer waste streams, among others.
To achieve this goal, the project focuses on the activities described below.
In collaboration with partners, we build FAIR (findable, accessible, interoperable, reusable) data resources that enable machine learning across the entire polymers community and provide data for testing emerging theories. Specifically, in collaboration with MIT, the University of Chicago, Citrine Informatics, and Dow, we contribute to the development of A Community Resource for Innovation in Polymer Technology, and in collaboration with CHiMaD and the Air Force Research Laboratory, we contribute to the development of the Polymer Property Predictor and Database.
We develop digital data infrastructure for the multiscale modeling of polymers, starting at the molecular level, to address limited computational resources and the time-consuming setup of both atomistic and coarse-grained molecular simulations. Specific efforts include WebFF: Force-field repository for organic and soft materials and COMSOFT Workbench: Tools for Efficient Coarse-Grained Modeling of Soft Materials; the latter includes recent advances in preserving dynamics, in addition to thermodynamics, during coarse-graining, as detailed here. Also part of our portfolio are pyPRISM: A Computational Tool for Liquid-State Theory Calculations of Macromolecular Materials and ZENO: A software tool for computation of material, solution, and suspension properties for a specified particle shape or molecular structure using path-integral and Monte Carlo methods. The open-source code can be found here.
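To give a flavor of the Monte Carlo style of calculation that tools in this space perform (this is an illustrative sketch, not ZENO's actual path-integral or walk-on-spheres algorithm), the example below estimates the mean-squared radius of gyration of an ideal freely jointed chain and can be compared with the ideal-chain theory value N b²/6; all function names and parameters are our own, chosen for illustration.

```python
import math
import random

def rg_squared(n_steps, b=1.0):
    """Squared radius of gyration of one 3D freely jointed random-walk chain."""
    pos = [(0.0, 0.0, 0.0)]
    for _ in range(n_steps):
        # draw a uniformly random unit step on the sphere
        theta = math.acos(random.uniform(-1.0, 1.0))
        phi = random.uniform(0.0, 2.0 * math.pi)
        x, y, z = pos[-1]
        pos.append((x + b * math.sin(theta) * math.cos(phi),
                    y + b * math.sin(theta) * math.sin(phi),
                    z + b * math.cos(theta)))
    # center of mass, then mean-squared distance of beads from it
    n = len(pos)
    cx = sum(p[0] for p in pos) / n
    cy = sum(p[1] for p in pos) / n
    cz = sum(p[2] for p in pos) / n
    return sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2
               for p in pos) / n

random.seed(0)
# average over many independent chains; ideal-chain theory predicts
# <Rg^2> = N b^2 / 6, i.e. about 16.7 for N = 100 steps of unit length
est = sum(rg_squared(100) for _ in range(200)) / 200
```

Averaging over independent chains is the essence of such Monte Carlo property estimates: accuracy improves as the square root of the number of samples.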
We develop autonomous small-angle neutron scattering (SANS) and small-angle X-ray scattering (SAXS) systems to address industry's need to understand the phase behavior of complicated, multi-component formulations. This activity involves both the hardware required to generate and measure samples and the software that runs the hardware and identifies the next experiment using machine learning. The autonomous formulation laboratory is also part of the nSoft consortium. For more details, see Autonomous Formulation Lab (AFL).
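The software side of such a loop pairs each measurement with an acquisition rule that proposes the next sample. As an illustrative stand-in (not the AFL's actual algorithm), the sketch below uses a simple maximin space-filling heuristic over a hypothetical two-component formulation grid: the next composition is the candidate farthest from anything already measured.

```python
import math

def propose_next(measured, candidates):
    """Pick the candidate composition farthest from any measured point
    (a maximin space-filling heuristic standing in for a model-based
    acquisition function)."""
    return max(candidates,
               key=lambda c: min(math.dist(c, m) for m in measured))

# toy 2-component formulation space (e.g. surfactant fraction, salt fraction)
measured = [(0.1, 0.1), (0.9, 0.9)]
candidates = [(x / 10, y / 10) for x in range(11) for y in range(11)]
next_pt = propose_next(measured, candidates)
```

In a real autonomous loop this proposal step would be driven by model uncertainty (e.g. a Gaussian-process surrogate), and the loop would alternate propose → synthesize → measure → refit until the phase map converges.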
Two major hurdles to applying machine learning to polymer science are a lack of large datasets and the need to understand model predictions, often in terms of the underlying physics. We tackle these challenges simultaneously by developing new methods for incorporating theory and domain knowledge into machine learning. These efforts enable the widespread use of machine learning to accelerate the discovery of new polymer physics. For example, see our Theory-aware Machine Learning code.
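One common way to inject theory into a data-driven model is to penalize departures from a known scaling law, so that small datasets still yield physically sensible fits. The sketch below (our own illustration, not the project's Theory-aware Machine Learning code) fits log η = a + α log M by gradient descent while a penalty pulls the exponent α toward the reptation-theory value of 3.4 for entangled melts:

```python
def fit_scaling(logM, logeta, alpha_theory=3.4, lam=1.0, lr=0.01, steps=5000):
    """Least-squares fit of log(eta) = a + alpha*log(M) with an added
    penalty lam*(alpha - alpha_theory)**2 that pulls the fitted exponent
    toward the theoretical value. Plain gradient descent, stdlib only."""
    a, alpha = 0.0, 1.0
    n = len(logM)
    for _ in range(steps):
        grad_a = grad_alpha = 0.0
        for x, y in zip(logM, logeta):
            r = a + alpha * x - y          # residual for this point
            grad_a += 2.0 * r / n
            grad_alpha += 2.0 * r * x / n
        grad_alpha += 2.0 * lam * (alpha - alpha_theory)  # theory penalty
        a -= lr * grad_a
        alpha -= lr * grad_alpha
    return a, alpha

# synthetic data consistent with eta ~ M^3.4 (illustrative numbers)
logM = [0.0, 0.5, 1.0, 1.5, 2.0]
logeta = [3.4 * x for x in logM]
a, alpha = fit_scaling(logM, logeta)
```

The penalty weight lam sets the balance between trusting the data and trusting the theory; with noisy or sparse data, a larger lam keeps the exponent physically plausible.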
The growing waste stream of plastics has become a global challenge. To enable next-generation recycling, we apply simulation techniques and machine learning to improve characterization of a common polymer, linear low-density polyethylene, and to identify methods for upcycling mixed waste streams through compatibilization. We are also investigating how to improve near-infrared measurements of polyolefins through correlation with other, slower measurement techniques. All of these efforts are in collaboration with the Macromolecular Architectures project.
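A minimal version of this kind of fast-to-slow calibration is an ordinary least-squares map from the rapid measurement to the reference technique; the numbers and property names below are hypothetical, purely to show the shape of the workflow:

```python
def ols(x, y):
    """Ordinary least-squares fit y ≈ a + b*x (stdlib only)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# hypothetical paired data: fast NIR signal vs. a property measured
# by a slower reference technique on the same samples
nir = [0.10, 0.15, 0.22, 0.30]
ref = [1.0, 1.5, 2.2, 3.0]
a, b = ols(nir, ref)
```

Once calibrated, the fitted map lets the fast measurement stand in for the slow one, e.g. when sorting polyolefins in a waste stream at line speed.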
We also apply our skill sets in molecular simulation and machine learning in close collaboration with other projects in the Materials Science and Engineering Division. Please see Related NIST Projects for details.
For the most up-to-date publications, please visit the websites of individual staff members.