MIT Department of Mathematics
Wednesday, July 10, 2019, 3:00-4:00
Building 100, Lecture Room A
Wednesday, July 10, 2019, 1:00-2:00
Building 1, Room 4072
Host: Michael Mascagni
Abstract: Modeling practice seems to be partitioned into scientific models defined by mechanistic differential equations and machine learning models defined by parameterizations of neural networks. While the ability of interpretable mechanistic models to extrapolate from little information is seemingly at odds with the big-data, "model-free" approach of neural networks, the next step in scientific progress is to use the two methodologies together in order to emphasize their strengths while mitigating their weaknesses. In this talk the audience will be introduced to how Julia's differentiable programming frameworks are bringing neural networks into differential equations and vice versa. The idea of prior structural information in neural architectures will be explained via the relationship between ResNets and ordinary differential equations (ODEs) and between convolutional layers and partial differential equations (PDEs), and then generalized to differential equation layers of neural networks. Additionally, the ability of neural networks to learn nonlinear equations will be demonstrated through partial neural ODEs, which allow for semi-mechanistic differential equation models with learned components. DifferentialEquations.jl's GPU-compatible, high-order adaptive methods for stiff ODEs, SDEs, DDEs, and PDEs will be demonstrated as an effective tool for handling these mixed models. These methods will be illustrated in the context of systems biology, showing how the behavior of biochemical reaction networks can be better understood through a mixed neural differential equation approach.
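The ResNet-ODE relationship mentioned in the abstract can be sketched concretely: a residual block update u_{n+1} = u_n + h*f(u_n) is exactly one forward-Euler step of the ODE u' = f(u). The sketch below (not taken from the talk; the decay function `f` and step size are illustrative stand-ins for a neural network layer) compares that discrete "stack of residual blocks" view with an adaptive solve via DifferentialEquations.jl:

```julia
using DifferentialEquations

# A "layer" function; in a neural ODE this would be a small neural network.
# Here we use exponential decay u' = -u as a toy mechanistic component.
f(u, p, t) = -u

# ResNet view: iterating u_{n+1} = u_n + h*f(u_n) is forward Euler for u' = f(u),
# so n residual blocks with step h integrate the ODE out to t = n*h.
function euler_resnet(f, u0, h, n)
    u = u0
    for _ in 1:n
        u += h * f(u, nothing, 0.0)  # one "residual block"
    end
    return u
end

u_discrete = euler_resnet(f, 1.0, 0.01, 100)   # 100 blocks ≈ integrate to t = 1

# Continuous view: solve u' = f(u) with a high-order adaptive method (Tsit5).
prob = ODEProblem(f, 1.0, (0.0, 1.0))
sol = solve(prob, Tsit5())
u_continuous = sol(1.0)

# Both approximate the exact solution exp(-1) of u' = -u, u(0) = 1.
```

In the neural-ODE setting, `f` is replaced by a parameterized network and the adaptive solver takes the place of a fixed depth-n residual stack, which is what lets differential equation layers sit inside neural architectures.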
Note: Visitors from outside NIST must contact Cathy Graham; (301) 975-3800; at least 24 hours in advance.