Description: |
This thesis presents two nonlinear model reduction methods for systems of equations. One model utilizes a structured neural network, taking the form of a “three-layer” network with the first layer constrained to lie on the Grassmann manifold and the first activation function set to the identity, while the remaining layers form a standard two-layer ReLU neural network. The Grassmann layer determines the reduced basis for the input space, while the remaining layers approximate the nonlinear input-output system. The training alternates between learning the reduced basis and learning the nonlinear approximation, and is shown to be more effective than fixing the reduced basis and training only the network. An additional benefit of this approach is that, for data lying on low-dimensional subspaces, the network does not require a large number of parameters. The other model utilizes a random feature expansion, which also takes the form of a three-layer network. The first layer uses linear dimension reduction techniques to determine a reduced basis for the input space, and the second layer is randomized with sparse weights. These two layers are held fixed, and the last layer is trained using ridge regression. One benefit of this approach is that the model shares function spaces similar to those of neural networks while requiring a shorter training time. We show that these methods can be applied to scientific problems in the data-scarce regime, which is typically not well suited to standard neural network approximations. Examples include reduced-order modeling for nonlinear dynamical systems and several aerospace engineering problems.
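
To make the first architecture concrete, the following is a minimal sketch in JAX of the alternating training described above. All names, dimensions, learning rates, and the synthetic data are illustrative assumptions, not the implementation from the thesis; in particular, the basis update here takes a plain Euclidean gradient step followed by a QR retraction onto the set of orthonormal bases, a common simplification of a proper Riemannian gradient step on the Grassmann manifold.

import jax
import jax.numpy as jnp

def forward(params, x):
    A, W1, b1, W2, b2 = params
    z = x @ A                          # Grassmann layer, identity activation
    h = jax.nn.relu(z @ W1 + b1)       # hidden ReLU layer
    return h @ W2 + b2                 # linear output layer

def loss(params, x, y):
    return jnp.mean((forward(params, x) - y) ** 2)

def retract(A):
    # QR retraction: map an updated basis back to orthonormal columns,
    # i.e. back to a representative of a point on the Grassmann manifold.
    Q, _ = jnp.linalg.qr(A)
    return Q

def basis_step(params, x, y, lr):
    # Update only the reduced basis A, holding the ReLU network fixed.
    A, rest = params[0], params[1:]
    gA = jax.grad(lambda A_: loss((A_,) + rest, x, y))(A)
    return (retract(A - lr * gA),) + rest

def network_step(params, x, y, lr):
    # Update only the ReLU network, holding the reduced basis A fixed.
    A, rest = params[0], params[1:]
    grads = jax.grad(lambda r_: loss((A,) + r_, x, y))(rest)
    return (A,) + tuple(p - lr * g for p, g in zip(rest, grads))

# Hypothetical toy data: 20-d inputs whose targets depend on a 3-d subspace.
key = jax.random.PRNGKey(0)
kA, k1, k2, kx = jax.random.split(key, 4)
d, r, m = 20, 3, 64
x = jax.random.normal(kx, (200, d))
y = jnp.sin(x[:, :1]) + x[:, 1:2] * x[:, 2:3]

A0, _ = jnp.linalg.qr(jax.random.normal(kA, (d, r)))
params = (A0,
          jax.random.normal(k1, (r, m)) / jnp.sqrt(r), jnp.zeros(m),
          jax.random.normal(k2, (m, 1)) / jnp.sqrt(m), jnp.zeros(1))

for _ in range(500):                   # alternate basis and network updates
    params = basis_step(params, x, y, 1e-2)
    params = network_step(params, x, y, 1e-2)
print(loss(params, x, y))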
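
A comparable sketch of the second model follows, with the first two layers held fixed and only the last layer fit by ridge regression. The specific choices below (an SVD/PCA reduced basis, cosine random features, the sparsity level, and the regularization weight) are hypothetical stand-ins for the techniques the abstract names.

import jax
import jax.numpy as jnp

def fit_random_feature_model(key, X, y, r=3, m=200, sparsity=0.9, reg=1e-6):
    # Layer 1 (fixed): linear dimension reduction; here, the top-r right
    # singular vectors of the centered inputs serve as the reduced basis.
    mu = X.mean(axis=0)
    _, _, Vt = jnp.linalg.svd(X - mu, full_matrices=False)
    A = Vt[:r].T                                   # (d, r) reduced basis

    # Layer 2 (fixed): random weights with most entries zeroed out (sparse).
    kW, kM, kb = jax.random.split(key, 3)
    W = jax.random.normal(kW, (r, m))
    W = W * (jax.random.uniform(kM, (r, m)) > sparsity)
    b = jax.random.uniform(kb, (m,), minval=0.0, maxval=2 * jnp.pi)

    def features(Xn):
        return jnp.cos(((Xn - mu) @ A) @ W + b)    # random feature map

    # Layer 3 (trained): ridge regression on the random features.
    Phi = features(X)
    c = jnp.linalg.solve(Phi.T @ Phi + reg * jnp.eye(m), Phi.T @ y)
    return lambda Xn: features(Xn) @ c

# Toy usage on the same kind of low-dimensional-structure data as above.
key = jax.random.PRNGKey(1)
kx, kf = jax.random.split(key)
X = jax.random.normal(kx, (200, 20))
y = jnp.sin(X[:, 0]) + X[:, 1] * X[:, 2]
predict = fit_random_feature_model(kf, X, y)
print(jnp.mean((predict(X) - y) ** 2))

Because only the last layer is trained, fitting reduces to a single linear solve, which is the source of the shorter training time the abstract highlights.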