2021 POSTECH - Peking University
Joint SIAM Student Chapter Conference
Numerical Methods for Partial Differential Equations and Related Topics
April 24 (Sat) ~ April 25 (Sun), 2021
General Information
(~ April 21) https://forms.gle/wNAANeubojbMo8ef9
POSTECH SIAM Student Chapter
- Sungha Cho
- Dongjin Lee
- Eunsuh Kim
- Dohyeon Lee
Peking University SIAM Student Chapter
- Yichen Yang
- Qichen Liu
Keynote Speakers
- Assistant professor, Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania (2018 ~ present)
- Research scientist, Department of Mechanical Engineering, Massachusetts Institute of Technology (2016 ~ 2017)
- Post-doctoral associate, Department of Mechanical Engineering, Massachusetts Institute of Technology (2015 ~ 2016)
- Ph.D. in Applied Mathematics, Brown University (2010 ~ 2015)
Student Speakers
Abstract
First, I will briefly explain simplicial homology. Second, I will define persistent homology and explain its stability and characterization. Third, I will explain the application of persistent homology to data analysis. If time permits, I will discuss some weaknesses of persistent homology and how they have been overcome.
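The simplest instance of the persistence pipeline mentioned above is dimension 0, where the barcode just records when connected components of a distance filtration merge. The sketch below is an illustrative, minimal computation of the 0-dimensional persistence pairs of a Vietoris-Rips filtration using union-find; the function name and the toy point set are my own choices, not from the talk.

```python
import itertools
import math

def persistence_0d(points):
    """0-dimensional persistence of a Vietoris-Rips filtration:
    every component is born at scale 0 and dies at the edge length
    at which it merges into an older component."""
    n = len(points)
    # All pairwise edges sorted by length = the filtration order.
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in itertools.combinations(range(n), 2)
    )
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    bars = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                 # merge event: a component dies at scale d
            parent[ri] = rj
            bars.append((0.0, d))
    bars.append((0.0, math.inf))     # one component never dies
    return sorted(bars, key=lambda b: b[1])

# Two well-separated clusters on the line: expect one long finite bar.
pts = [(0.0,), (0.1,), (5.0,), (5.1,)]
bars = persistence_0d(pts)
```

The long bar (death at the inter-cluster distance 4.9) is exactly the kind of "persistent" feature that survives small perturbations of the points, which is the content of the stability results in the talk.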
Abstract
In this talk, we are concerned with the Fokker-Planck equations associated with the Nonlinear Noisy Leaky Integrate-and-Fire model for neuron networks. Due to the jump mechanism at the microscopic level, such Fokker-Planck equations are endowed with an unconventional structure: the boundary flux is transported to a specific interior point, which means the properties of the solutions are not yet completely understood. So far, there has been no rigorous numerical analysis of such models. We propose a conservative and conditionally positivity-preserving scheme for these Fokker-Planck equations, and we show that in the linear case the semi-discrete scheme satisfies a discrete relative entropy estimate, which essentially matches the long-time asymptotic behavior of solutions of the equation. We also carry out extensive numerical exploration with the new scheme and present a diverse range of solutions drawn from various numerical observations.
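The two structural properties the abstract emphasizes — exact conservation from a flux-form discretization, and positivity that holds only conditionally on the time step — can be seen on a much simpler model. The sketch below is not the scheme of the talk: it is a generic first-order finite-volume method for the Ornstein-Uhlenbeck Fokker-Planck equation p_t = (v p)_v + a p_vv with zero-flux boundaries, chosen only to illustrate those two properties.

```python
import numpy as np

def fp_step(p, v, dv, dt, a=1.0):
    """One explicit conservative finite-volume step for the
    Ornstein-Uhlenbeck Fokker-Planck equation  p_t = (v p)_v + a p_vv,
    written as p_t = -F_v with flux F = -v p - a p_v.
    Because cell updates telescope over interface fluxes and the
    boundary fluxes vanish, total mass is conserved exactly."""
    u = -0.5 * (v[:-1] + v[1:])                # advection speed at interfaces
    upwind = np.where(u > 0, p[:-1], p[1:])    # first-order upwinding
    F = u * upwind - a * np.diff(p) / dv       # interior interface fluxes
    F = np.concatenate(([0.0], F, [0.0]))      # zero-flux boundary conditions
    return p - dt / dv * np.diff(F)

v = np.linspace(-5.0, 5.0, 101)                # cell centers
dv = v[1] - v[0]
p = np.exp(-(v - 2.0) ** 2)                    # off-center initial density
p /= p.sum() * dv                              # normalize total mass to 1
dt = 0.2 * dv ** 2                             # restriction that keeps p >= 0
for _ in range(2000):                          # relax toward the steady state
    p = fp_step(p, v, dv, dt)
```

With this dt, every updated cell value is a nonnegative combination of old cell values, so positivity is preserved; a larger dt would violate that, which is what "conditionally positivity preserving" refers to in general.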
Abstract
The Gaussian process (GP) model is a popular machine learning model for regression and classification problems. The model is assumed to follow a Gaussian process, and its parameters are optimized to maximize the marginal likelihood. This enables a GP not only to provide a solution to the problem but also to quantify the uncertainty of that solution. Moreover, in regression problems with a Gaussian likelihood, the Bayesian posterior is computable analytically. However, training the model involves inverting the covariance matrix, which costs O(N³), where N is the number of data points; this makes GPs impractical for huge datasets. In this talk, variational inference for GP regression is introduced to reduce the computational cost. This approach approximates the posterior using a variational distribution over "inducing variables". With some matrix identities, the computational cost can be reduced to O(NM²), where M < N is the number of inducing variables.
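The cost-reduction mechanism can be sketched concretely. The code below contrasts the exact O(N³) GP posterior mean with a subset-of-regressors-style low-rank approximation through M inducing inputs, where the only linear solve is M x M. This is an illustration of the inducing-point idea, not the full variational method of the talk (which additionally optimizes the inducing locations through an evidence lower bound); all names and the toy data are my own.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel matrix between 1-D input arrays."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-3.0, 3.0, 200))       # N = 200 training inputs
y = np.sin(X) + 0.1 * rng.standard_normal(200) # noisy observations
Xs = np.linspace(-3.0, 3.0, 50)                # test inputs
noise = 0.1 ** 2

# Exact GP posterior mean: solve with the full N x N kernel matrix, O(N^3).
K = rbf(X, X) + noise * np.eye(len(X))
mean_exact = rbf(Xs, X) @ np.linalg.solve(K, y)

# Low-rank approximation with M inducing inputs Z: the kernel is replaced
# by Knm Kmm^{-1} Kmn, so only an M x M system is solved -> O(N M^2).
Z = np.linspace(-3.0, 3.0, 15)                 # M = 15 inducing inputs
Kmm = rbf(Z, Z)
Knm = rbf(X, Z)
A = noise * Kmm + Knm.T @ Knm                  # M x M system
mean_sparse = rbf(Xs, Z) @ np.linalg.solve(A, Knm.T @ y)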
Abstract
We propose a high-order numerical homogenization method for dissipative ordinary differential equations (ODEs) containing two time scales. Essentially, only a first-order homogenized model can be derived globally in time. To achieve a high-order method, we adopt a numerical approach in the framework of the heterogeneous multiscale method (HMM). With a successively refined microscopic solver, accuracy improvement up to arbitrary order is attained, provided the input data are smooth enough. Based on the formulation of the high-order microscopic solver we derive, an iterative formula for computing the microscopic solver is then proposed. Using this iterative formula, we develop an efficient implementation of the method for practical applications. Several numerical examples are presented to validate the new models and numerical methods.
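The basic macro/micro structure of HMM that the high-order construction refines can be shown on a toy dissipative two-scale system. The sketch below is only the first-order HMM skeleton, not the high-order scheme of the talk: at each macro step, a cheap micro solver relaxes the fast variable onto its slow manifold, and the macro solver then advances the slow variable with the relaxed force. The model system and step sizes are my own choices.

```python
import numpy as np

eps = 1e-4                               # time-scale separation parameter

def f(x, y):
    return -x + y                        # slow equation: x' = f(x, y)

def g(x):
    return np.sin(x)                     # fast variable relaxes to y ~ g(x)

def hmm_step(x, dT, micro_steps=200):
    """One HMM macro step: run a forward-Euler micro solver for the stiff
    equation y' = -(y - g(x)) / eps until y is relaxed, then advance the
    slow variable with the relaxed (effective) force."""
    y, dt = g(x) + 1.0, 0.5 * eps        # deliberately un-relaxed start
    for _ in range(micro_steps):
        y += dt * (-(y - g(x)) / eps)    # micro solver on the fast scale
    return x + dT * f(x, y)              # macro forward Euler, step dT >> eps

x, dT = 1.0, 0.01
for _ in range(500):                     # integrate the slow dynamics to T = 5
    x = hmm_step(x, dT)
```

The macro step dT is four orders of magnitude larger than eps, yet the trajectory matches the homogenized equation x' = -x + sin(x); the high-order method of the talk systematically improves the accuracy of this force estimation.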
Abstract
Neural networks have been considered powerful ansatzes for the solutions of differential equations (DEs). In particular, the Physics-Informed Neural Network (PINN) learns the governing equations, which is made possible by automatic differentiation. PINNs can be applied to both forward and inverse problems. In forward problems, the solution is trained using information about the initial or boundary conditions and the governing equation. In inverse problems, a PINN estimates the underlying physics in the governing equation from observation data and obtains the solution values on the whole domain as well. In this talk, I will present the basic algorithms of this method and several of their results.
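The core idea — minimize the residual of the governing equation at collocation points over a neural ansatz — can be sketched without any deep learning framework. The toy below solves the forward problem u' = -u, u(0) = 1 with a one-hidden-layer tanh ansatz whose hidden weights are frozen, so the residual is linear in the output weights and least squares stands in for gradient training. This linearization and the hard initial-condition trial form are simplifications of my own; a real PINN trains all weights via automatic differentiation.

```python
import numpy as np

# Trial function u(x) = 1 + x * (phi(x) . w) satisfies u(0) = 1 exactly
# (a "hard" initial condition), so training only penalizes the ODE residual
# u'(x) + u(x) at collocation points.
rng = np.random.default_rng(1)
H = 20                                        # hidden width
a, b = 2.0 * rng.standard_normal(H), rng.standard_normal(H)

def features(x):
    phi = np.tanh(np.outer(x, a) + b)         # hidden activations phi(x)
    dphi = (1.0 - phi ** 2) * a               # their exact x-derivatives
    return phi, dphi

xc = np.linspace(0.0, 1.0, 50)                # collocation points
phi, dphi = features(xc)
# Residual: u' + u = [(1 + x) phi + x phi'] . w + 1  ->  solve A w = -1.
A = (1.0 + xc)[:, None] * phi + xc[:, None] * dphi
w = np.linalg.lstsq(A, -np.ones_like(xc), rcond=None)[0]

u = 1.0 + xc * (phi @ w)                      # trained approximate solution
err = np.max(np.abs(u - np.exp(-xc)))         # compare with exact e^{-x}
```

Because the equation residual is driven to near zero on the collocation grid, the learned u tracks the exact solution e^{-x} across the whole interval, illustrating how PINNs recover solution values everywhere, not only at data points.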
Abstract
Deep learning is now widely used to model the relation between input and output data in many science and engineering problems. However, its predictions may depend strongly on the model structure and the training data, which can yield inaccurate results. This issue is particularly important in fields related to human life, for example autonomous driving and healthcare. Computing the uncertainty in a model's prediction provides a criterion for evaluating the prediction results, which can be a solution to this problem. Bayesian statistics is a good framework for quantifying uncertainty. Applying Bayesian statistics to deep learning is often referred to as Bayesian deep learning, and it has successfully predicted model uncertainty in many fields. This talk will introduce two main algorithms frequently used in Bayesian deep learning and show their results on some simple uncertainty quantification problems.
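One algorithm frequently counted among the standard Bayesian deep learning techniques is Monte Carlo dropout (the abstract does not name its two algorithms, so this choice is an assumption). The numpy sketch below shows only the inference-time mechanics on an untrained toy network: dropout is kept active at prediction time, and the spread of many stochastic forward passes serves as the uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
# An untrained toy network, 1 input -> 64 hidden -> 1 output (illustration only).
W1 = rng.standard_normal((1, 64))
b1 = rng.standard_normal(64)
W2 = rng.standard_normal((64, 1)) / 8.0

def mc_dropout_predict(x, T=500, p_drop=0.5):
    """Monte Carlo dropout: keep dropout ACTIVE at test time, run T
    stochastic forward passes, and report their mean and standard
    deviation; the standard deviation estimates model uncertainty."""
    h = np.tanh(x[:, None] @ W1 + b1)                # hidden layer, shape (N, 64)
    outs = []
    for _ in range(T):
        mask = rng.random(h.shape) > p_drop          # fresh dropout mask per pass
        outs.append((h * mask / (1.0 - p_drop)) @ W2)  # inverted-dropout scaling
    outs = np.stack(outs)                            # shape (T, N, 1)
    return outs.mean(axis=0).ravel(), outs.std(axis=0).ravel()

x = np.linspace(-2.0, 2.0, 9)
mean, std = mc_dropout_predict(x)
```

In a trained network the same procedure yields calibrated-looking error bars at essentially the cost of T forward passes, which is why it is popular as a drop-in uncertainty estimate.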
Abstract
A two-fluid model is derived from the plasma kinetic equations using the moment model reduction method. The moment method we adopt was developed recently with a globally hyperbolic regularization, and the resulting moment model is locally well-posed in time. Based on this well-posed hyperbolic moment model, the Maxwellian iteration method is used to obtain the closure relations for the resulting two-fluid model. By taking the Shakhov collision operator in the Maxwellian iteration, the two-fluid model inherits the correct Prandtl number from the plasma kinetic equations. The new model is formally the same as the five-moment two-fluid model except for the closure relations, in which the pressure tensor is anisotropic and a heat flux is added. This gives the model the capacity to describe problems with an anisotropic pressure tensor and a large heat flux.