2021 POSTECH - Peking University
Joint SIAM Student Chapter Conference

Numerical Methods for Partial Differential Equations and Related Topics
April 24 (Sat) ~ April 25 (Sun), 2021

General Information

Date 2021-04-24 (Saturday) ~ 2021-04-25 (Sunday)
Place Room 404, Mathematics Science Building, POSTECH / Online Streaming (Zoom)
Zoom meeting information To be announced by e-mail after registration
Registration
(~ April 21)
https://forms.gle/wNAANeubojbMo8ef9
Organizing Committee

POSTECH SIAM Student Chapter

  • Sungha Cho
  • Dongjin Lee
  • Eunsuh Kim
  • Dohyeon Lee

Peking University SIAM Student Chapter

  • Yichen Yang
  • Qichen Liu
Contact chosungha24@postech.ac.kr

Programs

Keynote Speakers

10:00 AM (UTC +09:00), April 24, 2021
Paris Perdikaris, University of Pennsylvania
Short bio
  • Assistant professor, Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania (2018 ~ present)
  • Research scientist, Department of Mechanical Engineering, Massachusetts Institute of Technology (2016 ~ 2017)
  • Post-doctoral associate, Department of Mechanical Engineering, Massachusetts Institute of Technology (2015 ~ 2016)
  • Ph.D. in Applied Mathematics, Brown University (2010 ~ 2015)
Title: Learning the Solution Operator of Parametric Partial Differential Equations with Physics-informed DeepONets
Abstract
Deep operator networks (DeepONets) are receiving increased attention thanks to their demonstrated capability to approximate nonlinear operators between infinite-dimensional Banach spaces. However, despite their remarkable early promise, they typically require large training datasets consisting of paired input-output observations, which may be expensive to obtain, while their predictions may not be consistent with the underlying physical principles that generated the observed data. In this work, we propose a novel model class coined physics-informed DeepONets, which introduces an effective regularization mechanism for biasing the outputs of DeepONet models towards ensuring physical consistency. This is accomplished by leveraging automatic differentiation to impose the underlying physical laws via soft penalty constraints during model training. We demonstrate that this simple, yet remarkably effective extension can not only yield a significant improvement in the predictive accuracy of DeepONets, but also greatly reduce the need for large training datasets. To this end, a remarkable observation is that physics-informed DeepONets are capable of solving parametric partial differential equations (PDEs) without any paired input-output observations, except for a set of given initial or boundary conditions. We illustrate the effectiveness of the proposed framework through a series of comprehensive numerical studies across various types of PDEs. Strikingly, a trained physics-informed DeepONet model can predict the solution of O(1000) time-dependent PDEs in a fraction of a second, up to three orders of magnitude faster than a conventional PDE solver.
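
As a minimal illustration of the idea described in the abstract, the sketch below trains a tiny physics-informed DeepONet in JAX on the antiderivative operator (du/dx = f(x), u(0) = 0), using automatic differentiation to penalize the ODE residual so that no paired input-output solution data are needed. The choice of operator, the network sizes, the distribution of input functions, and the plain gradient-descent loop are all illustrative assumptions, not the setup used in the talk.

```python
# Minimal physics-informed DeepONet sketch (illustrative assumptions throughout).
import jax
import jax.numpy as jnp

m, p = 50, 40                                  # number of sensors / latent dimension
sensors = jnp.linspace(0.0, 1.0, m)

def init_mlp(key, sizes):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (a, b)) / jnp.sqrt(a), jnp.zeros(b))
            for k, (a, b) in zip(keys, zip(sizes[:-1], sizes[1:]))]

def mlp(params, x):
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def deeponet(params, f_sens, x):
    """u(x) for one input function f (given by its sensor values) and scalar x."""
    branch, trunk = params
    return jnp.dot(mlp(branch, f_sens), mlp(trunk, jnp.array([x])))

def physics_loss(params, f_sens, f_at, xs):
    # ODE residual du/dx - f(x) at collocation points, plus the condition u(0) = 0
    du = jax.vmap(jax.grad(deeponet, argnums=2),
                  in_axes=(None, None, 0))(params, f_sens, xs)
    return jnp.mean((du - f_at) ** 2) + deeponet(params, f_sens, 0.0) ** 2

key = jax.random.PRNGKey(0)
kb, kt = jax.random.split(key)
params = (init_mlp(kb, [m, 64, 64, p]), init_mlp(kt, [1, 64, 64, p]))

@jax.jit
def step(params, k, lr=1e-3):
    ka, kx = jax.random.split(k)
    a = jax.random.normal(ka, (3,))            # random sine-mixture input function
    f = lambda x: a[0] * jnp.sin(jnp.pi * x) + a[1] * jnp.cos(2 * jnp.pi * x) + a[2]
    xs = jax.random.uniform(kx, (64,))         # collocation points
    loss, grads = jax.value_and_grad(physics_loss)(params, f(sensors), f(xs), xs)
    params = jax.tree_util.tree_map(lambda q, g: q - lr * g, params, grads)
    return params, loss

for i in range(2000):
    params, loss = step(params, jax.random.fold_in(key, i))
```
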
10:00 AM (UTC +09:00), April 25, 2021
Tao Zhou, Chinese Academy of Sciences
Short bio Tao Zhou is currently an Associate Professor at the Chinese Academy of Sciences. Before joining CAS, he was a postdoctoral fellow at EPFL in Switzerland during 2011-2012. Dr. Zhou’s research interests include Uncertainty Quantification (UQ), Parallel-in-Time Algorithms, Spectral Methods and Stochastic Optimal Control. He has published more than 50 papers in top international journals such as SIAM Review, SINUM and JCP. He was a recipient of the NSFC Career Award for Excellent Young Scholars (2018) and the CSIAM Excellent Young Scholar Prize (2016). Dr. Zhou serves as an Associate Editor for many international journals, such as SIAM Journal on Scientific Computing (SISC) and Communications in Computational Physics (CiCP). He also serves as the Associate Editor-in-Chief of the International Journal for Uncertainty Quantification. Since 2018, he has been the Chief Scientist of the Science Challenge Project on UQ supported by the State Administration of Science, Technology and Industry for National Defense.
Title: Introduction to Uncertainty Quantification
Abstract
Uncertainty quantification (UQ) has been a hot research topic in recent years. UQ has a variety of applications, including hydrology, fluid mechanics, data assimilation, and weather forecasting. Among others, high order numerical methods have become one of the most important tools for UQ, and the relevant computational techniques and their mathematical theory have attracted great attention. This talk will present a brief introduction to some recent developments on high order algorithms for both forward UQ and inverse UQ.
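
The abstract does not single out a specific algorithm; as one standard example of a high order method for forward UQ, the sketch below compares plain Monte Carlo with Gauss-Hermite stochastic collocation for a toy model with one Gaussian random parameter. The model y(t; k) = exp(-k t) and all parameter values are illustrative assumptions.

```python
# Forward-UQ toy example: Monte Carlo vs. Gauss-Hermite stochastic collocation.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

kbar, sigma, t = 1.0, 0.3, 1.0
y = lambda xi: np.exp(-(kbar + sigma * xi) * t)   # uncertain decay rate, xi ~ N(0, 1)

# Monte Carlo: slow O(1/sqrt(Nmc)) convergence
rng = np.random.default_rng(0)
samples = y(rng.standard_normal(100_000))
mc_mean, mc_var = samples.mean(), samples.var()

# Stochastic collocation: n Gauss-Hermite nodes integrate polynomials up to
# degree 2n - 1 exactly against the Gaussian density, so few nodes suffice.
nodes, weights = hermegauss(8)                    # weight function exp(-x^2 / 2)
weights = weights / np.sqrt(2.0 * np.pi)          # normalize to the N(0,1) density
sc_mean = np.sum(weights * y(nodes))
sc_var = np.sum(weights * (y(nodes) - sc_mean) ** 2)

exact_mean = np.exp(-kbar * t + 0.5 * (sigma * t) ** 2)
print(mc_mean, sc_mean, exact_mean)
print(mc_var, sc_var)
```
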

Student Speakers

Abstract

First, I will briefly explain simplicial homology. Second, I will define persistent homology and explain its stability and characterization. Third, I will explain the application of persistent homology to data analysis. If time permits, I will discuss some weaknesses of persistent homology and how they have been overcome.
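
As a small illustration of the data-analysis application mentioned above, the sketch below computes the persistence diagram of a noisy circle with a Vietoris-Rips filtration, assuming the GUDHI library is available; the point cloud and all thresholds are illustrative.

```python
# Persistent homology of a noisy circle (illustrative example, requires gudhi).
import numpy as np
import gudhi

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
points = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((200, 2))

# Vietoris-Rips filtration on the point cloud, with simplices up to dimension 2
rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
st = rips.create_simplex_tree(max_dimension=2)
diagram = st.persistence()                 # list of (dimension, (birth, death))

# One long-lived H1 interval signals the loop underlying the noisy samples
h1 = [pair for dim, pair in diagram if dim == 1]
print(sorted(h1, key=lambda bd: bd[1] - bd[0], reverse=True)[:3])
```
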

Abstract

In this talk, we are concerned with the Fokker-Planck equations associated with the Nonlinear Noisy Leaky Integrate-and-Fire model for neuron networks. Due to the jump mechanism at the microscopic level, such Fokker-Planck equations are endowed with an unconventional structure: the boundary flux is transported to a specific interior point, which makes the properties of solutions not yet completely understood. So far, there has been no rigorous numerical analysis work concerning such models. We propose a conservative and conditionally positivity-preserving scheme for these Fokker-Planck equations, and we show that in the linear case, the semi-discrete scheme satisfies a discrete relative entropy estimate, which essentially matches the long-time asymptotic behavior of solutions of the equation. We also carry out extensive numerical exploration with this new scheme and present diversified solution behaviors from various numerical observations.
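
The sketch below is not the scheme of the talk; under strong simplifying assumptions (a linear drift b(v) = -v, a uniform grid, explicit Euler time stepping) it only illustrates the structural feature described above: a finite-volume discretization in which the flux leaving through the firing threshold VF is re-injected as a source at the reset potential VR, so that total mass is conserved.

```python
# Conservative finite-volume sketch: boundary flux at VF re-injected at VR.
import numpy as np

Vmin, VF, VR, a = -2.0, 1.0, 0.0, 0.2
J = 200
dv = (VF - Vmin) / J
v = Vmin + (np.arange(J) + 0.5) * dv           # cell centers
iR = int((VR - Vmin) / dv)                     # cell containing the reset point
dt = 0.4 * dv**2 / a                           # explicit stability restriction

p = np.exp(-(v + 1.0) ** 2 / 0.05)
p /= p.sum() * dv                              # probability density with unit mass

def step(p):
    b = -v                                     # illustrative leaky drift
    # upwind drift + centered diffusion fluxes at the J - 1 interior interfaces
    bh = 0.5 * (b[:-1] + b[1:])
    F = np.where(bh > 0, bh * p[:-1], bh * p[1:]) - a * (p[1:] - p[:-1]) / dv
    N = a * p[-1] / dv                         # outgoing flux at VF (ghost value p = 0)
    dpdt = np.zeros_like(p)
    dpdt[1:-1] = -(F[1:] - F[:-1]) / dv
    dpdt[0] = -F[0] / dv                       # no-flux left boundary
    dpdt[-1] = -(N - F[-1]) / dv
    dpdt[iR] += N / dv                         # re-inject the firing flux at VR
    return p + dt * dpdt

for _ in range(5000):
    p = step(p)
print(p.sum() * dv)                            # total mass stays 1 up to round-off
```
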

Abstract

The Gaussian process (GP) model is a popular machine learning model for regression and classification problems. The model is assumed to follow a Gaussian process, and its parameters are optimized to maximize the marginal likelihood of the model. This enables a GP not only to give the solution to the problem but also to quantify the uncertainty of the solution. Moreover, the Bayesian posterior is computable analytically with a Gaussian likelihood in regression problems. However, training the model involves inverting the covariance matrix, which costs O(N³), where N is the number of data points; this makes GPs impractical for huge datasets. In this talk, variational inference for GP regression is introduced to reduce the computational cost. This model approximates the posterior using a variational distribution with "inducing variables". With some matrix lemmas, the computational cost can be reduced to O(NM²), where M is the number of inducing variables, which is smaller than N.
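
As a small numerical illustration of the O(NM²) idea, the sketch below computes a sparse GP predictive mean and variance with M inducing inputs via the matrix-inversion (Woodbury) lemma, so that only M x M systems are solved. The kernel, data, and fixed inducing locations are illustrative assumptions, and the variational optimization of the inducing variables (the Titsias-type lower bound) is not reproduced here.

```python
# Sparse GP regression sketch with M inducing inputs (no N x N factorization).
import numpy as np

def rbf(A, B, ell=0.5, sf=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(0)
N, M, noise = 2000, 30, 0.1
X = rng.uniform(-3.0, 3.0, (N, 1))
y = np.sin(2.0 * X[:, 0]) + noise * rng.standard_normal(N)
Z = np.linspace(-3.0, 3.0, M)[:, None]         # fixed inducing inputs
Xs = np.linspace(-3.0, 3.0, 100)[:, None]      # test inputs

Kmm = rbf(Z, Z) + 1e-8 * np.eye(M)             # M x M
Kmn = rbf(Z, X)                                # M x N, the largest object formed
Ksm = rbf(Xs, Z)

# Sigma = Kmm + Kmn Knm / noise^2 is only M x M; assembling it costs O(N M^2),
# and the Woodbury identity avoids inverting the N x N covariance matrix.
Sigma = Kmm + (Kmn @ Kmn.T) / noise**2
mean = Ksm @ np.linalg.solve(Sigma, Kmn @ y) / noise**2

# predictive variance under the same sparse (inducing-variable) approximation
var = rbf(Xs, Xs).diagonal() \
    - np.sum(Ksm * np.linalg.solve(Kmm, Ksm.T).T, axis=1) \
    + np.sum(Ksm * np.linalg.solve(Sigma, Ksm.T).T, axis=1)
```
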

Abstract

We propose a high order numerical homogenization method for dissipative ordinary differential equations (ODEs) containing two time scales. Essentially, only a first-order homogenized model can be derived globally in time. To achieve a high order method, we have to adopt a numerical approach in the framework of the heterogeneous multiscale method (HMM). With a successively refined microscopic solver, accuracy improvement up to arbitrary order is attained, provided the input data are smooth enough. Based on the formulation of the high order microscopic solver we derive, an iterative formula for computing the microscopic solver is then proposed. Using the iterative formula, we develop an efficient implementation of the method for practical applications. Several numerical examples are presented to validate the new models and numerical methods.
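
The sketch below is a basic first-order HMM loop for a toy dissipative two-scale ODE, included only to illustrate the macro/micro-solver structure mentioned above; the model problem, step sizes, and relaxation/averaging windows are illustrative assumptions, and the high order construction of the talk is not reproduced.

```python
# First-order HMM sketch for the two-scale system
#     x' = -x + y   (slow),      y' = (sin(x) - y) / eps   (fast, dissipative).
import numpy as np

eps = 1e-4

def micro_solver(x, y0, n_relax=200, n_avg=100):
    """Relax the fast variable at frozen x, then time-average the slow drift."""
    dt_micro = 0.1 * eps
    y = y0
    for _ in range(n_relax):                   # relax towards quasi-equilibrium
        y += dt_micro * (np.sin(x) - y) / eps
    drift = 0.0
    for _ in range(n_avg):                     # average the slow right-hand side
        y += dt_micro * (np.sin(x) - y) / eps
        drift += (-x + y) / n_avg
    return drift, y

def hmm(x0, T=2.0, dt_macro=0.05):
    x, y = x0, 0.0
    for _ in range(int(T / dt_macro)):
        F, y = micro_solver(x, y)              # estimated effective drift at x
        x += dt_macro * F                      # macro forward-Euler step
    return x

print(hmm(1.5))   # approximates the averaged dynamics x' = -x + sin(x) at time T
```
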

Abstract

Neural networks have been considered powerful ansatzes for the solutions of differential equations (DEs). In particular, the physics-informed neural network (PINN) learns the governing equations, which is made possible by automatic differentiation. PINNs can be applied to both forward and inverse problems. In forward problems, the solution is trained using information from the initial or boundary conditions and the governing equation. In inverse problems, the PINN estimates the underlying physics in the governing equation from observation data and also recovers the solution values on the whole domain. In this talk, I will present the basic algorithms of the aforementioned method and several of their results.
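
As a minimal illustration of the forward-problem setting, the sketch below trains a small PINN in JAX on a 1D Poisson problem whose exact solution is known, penalizing the PDE residual (obtained via automatic differentiation) together with the boundary conditions; the network size, collocation points, and plain gradient descent are illustrative choices.

```python
# Minimal PINN sketch for u''(x) = -pi^2 sin(pi x) on (0, 1), u(0) = u(1) = 0,
# whose exact solution is u(x) = sin(pi x).
import jax
import jax.numpy as jnp

def init_mlp(key, sizes=(1, 32, 32, 1)):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (a, b)) / jnp.sqrt(a), jnp.zeros(b))
            for k, a, b in zip(keys, sizes[:-1], sizes[1:])]

def u(params, x):                              # scalar-in, scalar-out network
    h = jnp.array([x])
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

def loss(params, xs):
    u_xx = jax.vmap(jax.grad(jax.grad(u, argnums=1), argnums=1),
                    (None, 0))(params, xs)
    residual = u_xx + jnp.pi**2 * jnp.sin(jnp.pi * xs)        # governing equation
    bc = u(params, 0.0) ** 2 + u(params, 1.0) ** 2            # boundary conditions
    return jnp.mean(residual**2) + bc

key = jax.random.PRNGKey(0)
params = init_mlp(key)
xs = jnp.linspace(0.0, 1.0, 64)                # collocation points

@jax.jit
def step(params, lr=1e-3):
    g = jax.grad(loss)(params, xs)
    return jax.tree_util.tree_map(lambda p, q: p - lr * q, params, g)

for _ in range(5000):
    params = step(params)
print(u(params, 0.5))                          # should approach sin(pi/2) = 1
```
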

Abstract

Deep learning is now widely used to learn the relation between input and output data in many science and engineering problems. However, deep learning can depend heavily on the model structure and the training data, which may yield inaccurate prediction results. This issue is particularly important in fields related to human life, for example, autonomous driving and healthcare. Computing the uncertainty in a model's prediction provides a criterion for evaluating the prediction results, which can be a solution to this problem. Bayesian statistics is a well-suited framework for quantifying uncertainty. Applying Bayesian statistics to deep learning is often referred to as Bayesian deep learning, and it has successfully quantified model uncertainty in many fields. This talk will introduce two main algorithms frequently used in Bayesian deep learning and show their results on some simple uncertainty quantification problems.
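
The talk does not specify which two algorithms it covers; as one commonly used example, the sketch below implements Monte Carlo dropout in JAX on a toy 1D regression problem, keeping dropout active at prediction time and averaging repeated stochastic forward passes to obtain a predictive mean and an uncertainty estimate. All architecture and data choices are illustrative.

```python
# Monte Carlo dropout sketch: stochastic forward passes give an uncertainty estimate.
import jax
import jax.numpy as jnp

def init(key, sizes=(1, 64, 64, 1)):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (a, b)) / jnp.sqrt(a), jnp.zeros(b))
            for k, a, b in zip(keys, sizes[:-1], sizes[1:])]

def forward(params, x, key, rate=0.1):
    h = x
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
        key, sub = jax.random.split(key)
        mask = jax.random.bernoulli(sub, 1.0 - rate, h.shape)
        h = h * mask / (1.0 - rate)            # dropout, kept active at test time too
    W, b = params[-1]
    return h @ W + b

# toy 1D regression data with a gap in the inputs
key = jax.random.PRNGKey(0)
x = jnp.concatenate([jnp.linspace(-3, -1, 80), jnp.linspace(1, 3, 80)])[:, None]
y = jnp.sin(x) + 0.1 * jax.random.normal(key, x.shape)

def loss(params, key):
    return jnp.mean((forward(params, x, key) - y) ** 2)

params = init(key)

@jax.jit
def step(params, key, lr=1e-2):
    g = jax.grad(loss)(params, key)
    return jax.tree_util.tree_map(lambda p, q: p - lr * q, params, g)

for i in range(3000):
    params = step(params, jax.random.fold_in(key, i))

# prediction: average many stochastic passes; the spread is the uncertainty estimate
xt = jnp.linspace(-3, 3, 61)[:, None]
preds = jnp.stack([forward(params, xt, jax.random.fold_in(key, 10_000 + i))
                   for i in range(100)])
mean, std = preds.mean(0), preds.std(0)
```
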

Abstract

A two-fluid model is derived from the plasma kinetic equations using the moment model reduction method. The moment method we adopt was developed recently with a globally hyperbolic regularization, and the resulting moment model is locally well-posed in time. Based on this well-posed hyperbolic moment model, the Maxwellian iteration method is utilized to obtain the closure relations for the resulting two-fluid model. By taking the Shakhov collision operator in the Maxwellian iteration, the two-fluid model inherits the correct Prandtl number from the plasma kinetic equations. The new model is formally the same as the five-moment two-fluid model except for the closure relations, where the pressure tensor is anisotropic and a heat flux is included. This gives the model the capacity to describe problems with an anisotropic pressure tensor and large heat flux.
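
For reference, a standard form of the (single-species) Shakhov collision model mentioned above is, with c = v - u the peculiar velocity, q the heat flux, p the pressure, T the temperature, R the gas constant, and tau the relaxation time,

$$ Q_S(f) = \frac{f_S - f}{\tau}, \qquad f_S = f_M \left[ 1 + (1 - \mathrm{Pr})\, \frac{\mathbf{c}\cdot\mathbf{q}}{5\, p\, R\, T} \left( \frac{|\mathbf{c}|^2}{R T} - 5 \right) \right]. $$

Replacing the BGK Maxwellian target f_M by f_S makes the relaxation rates of the stress and the heat flux differ, which is what allows the resulting closure to reproduce the correct Prandtl number; the precise form used in the plasma two-fluid setting of the talk may differ in detail.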

Sponsors