MINDS Seminar Series | Ji-Hoon Kang (KISTI) - INTRODUCTION TO PARALLEL COMPUTING: PARALLEL SYSTEMS AND PROGRAMMING MODEL
Overview
Date: 2024-11-05
Time: 16:15 ~ 17:00
Speaker: Ji-Hoon Kang
Affiliation: KISTI
Place: Math Bldg 208 & Online streaming (Zoom)
Streaming link: Zoom ID 688 896 1076 / PW 54321
Topic: INTRODUCTION TO PARALLEL COMPUTING: PARALLEL SYSTEMS AND PROGRAMMING MODEL
Contents

Parallel computing is a method of performing extensive computations using multiple computing resources. In particular, it refers to a computational approach that divides a large problem, which a single computer cannot handle, into smaller tasks that can be processed separately by many computers simultaneously. Parallel computing gained significant attention in the early to mid-2000s, when the clock frequency of processors reached physical limits and processors began to be equipped with multiple compute cores [1]. Many-core processors, represented by graphics processing units (GPUs), became prevalent in the field of computational science and engineering, particularly after the introduction of the CUDA toolkit [2]. Continuous improvements in memory and network performance also allowed multiple processors and compute nodes to be interconnected. As a result, it has become typical in computational science and engineering research to use cluster systems with hundreds of cores across dozens of compute nodes.

Choosing the right programming model is crucial for exploiting the full potential of the various parallel architectures. The best-known are MPI (Message Passing Interface) [4] and OpenMP (Open Multi-Processing) [5], for distributed and shared memory architectures, respectively. These two programming models are the de facto standards of parallel programming and are widely used in scientific and engineering applications, computational simulations, and large-scale data processing where high-performance parallel and distributed computing is required.

In this talk, we will discuss the necessity and significance of parallel computing, relating it to the evolution of parallel systems and parallel programming. Following earlier parallel systems such as symmetric multi-processing (SMP) and massively parallel processing (MPP) systems, parallel cluster systems have been the mainstream since the mid-2000s, showing a thousand-fold improvement in performance over the past 20 years. This presentation will cover the evolution of parallel systems and provide an overview of the parallel programming methods required for these systems. Furthermore, parallel computing and its various implementations will also be presented with practical examples. This will help us understand why parallel computing has become an essential method in modern computational science and engineering.
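As a small illustration of the two programming models named in the abstract (a minimal sketch, not part of the seminar material), the C program below combines both approaches: MPI splits a numerical integration of 4/(1+x^2) over [0,1] across ranks (distributed memory), while OpenMP parallelizes each rank's local loop across threads (shared memory). The file name pi_hybrid.c and the build line are illustrative assumptions.

```c
/*
 * Hybrid MPI + OpenMP sketch (illustrative, not from the talk):
 * each MPI rank integrates a slice of 4/(1+x^2) over [0,1] with an
 * OpenMP-parallel loop, and the partial sums are reduced to pi on rank 0.
 *
 * Build (assuming an MPI compiler wrapper with OpenMP support):
 *   mpicc -fopenmp pi_hybrid.c -o pi_hybrid
 * Run:
 *   mpirun -np 4 ./pi_hybrid
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const long n = 100000000;          /* total number of integration intervals */
    const double h = 1.0 / (double)n;  /* interval width */
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Distributed memory: each rank takes a contiguous block of intervals. */
    long begin = rank * (n / size);
    long end   = (rank == size - 1) ? n : begin + (n / size);

    /* Shared memory: threads within one rank split the local loop. */
    double local = 0.0;
    #pragma omp parallel for reduction(+:local)
    for (long i = begin; i < end; ++i) {
        double x = ((double)i + 0.5) * h;
        local += 4.0 / (1.0 + x * x);
    }
    local *= h;

    /* Combine the partial sums from all ranks on rank 0. */
    double pi = 0.0;
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi ~= %.12f (%d ranks x %d threads)\n",
               pi, size, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}
```

On a cluster, such a program would typically be launched with one or a few MPI ranks per node while OpenMP uses the cores within each node, mirroring the distributed-memory/shared-memory split described in the abstract.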