The SPEEDUP workshop series has a long history of presenting and discussing the state of the art in high-performance and parallel scientific computing, including algorithms, applications, and software aspects. The focus of the 44th SPEEDUP workshop is on Fluid-Structure Interaction. The scientific program of September 10 consists of six 45-minute talks and a poster session. Please encourage your collaborators to upload an abstract for the poster session. The deadline is Sept 4, 2015.
The conference will take place at CSCS. Directions to reach CSCS and the University of Lugano can be found here.
Can we simulate haemodynamics in a vascular district on a laptop in real time?
Even with modern algorithms and computers, a single heartbeat still takes several hours on an HPC platform. Indeed, simulating blood flow in arteries needs to take into account the incompressibility of the fluid, the compliance of the vessel, and patient-specific data, at least as concerns the geometry and some integrated quantities such as flow rates or pressure. After discretizing the coupled Fluid-Structure Interaction (FSI) problem by finite differences in time and finite elements in space, the computational time needed to simulate a single heartbeat is about 3 hours on 1000 processors.
We propose a model order reduction and a numerical reduction. The former assumes a fixed fluid computational domain and a thin membrane structure, which is integrated into the fluid equations as a generalised Robin boundary condition. The latter takes advantage of the reduced model and tackles it with Proper Orthogonal Decomposition (POD) and the Reduced Basis Method (RBM).
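For context, a common thin-membrane model behind such generalised Robin conditions (a sketch in our own notation: membrane density $\rho_s$, thickness $h_s$, stiffness coefficient $\beta$, normal displacement $\eta$; the speakers' exact model may differ) couples a membrane balance with a kinematic condition on the interface $\Sigma$:

```latex
\rho_s h_s\,\partial_{tt}\eta + \beta\,\eta
  = -\bigl(\sigma(u,p)\,n\bigr)\cdot n,
\qquad
\partial_t \eta = u \cdot n
\quad \text{on } \Sigma .
```

After time discretization, $\eta$ can be eliminated, leaving a Robin-type boundary condition for the fluid velocity and pressure alone, so no separate structure solver is needed.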
The combination of POD and RBM allows us to split the computational effort into an offline part and an online part. The offline part runs on an HPC system and takes about 5 hours on 1000 processors, while the online part can be run in real time, i.e. 1 second of simulation in less than 1 second of CPU time, on a notebook. The real gain of this approach is that, after the offline computations, the parameters of the patient-specific simulation, such as flow rate, heart rate, and arterial stiffness, can be changed online.
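The offline/online split can be illustrated with a toy POD computation (a minimal NumPy sketch on a hypothetical one-parameter problem; the actual haemodynamics solver and its reduced system are far more involved):

```python
import numpy as np

# Offline: collect snapshots of a parametrized (toy) solution u(x; mu)
# and extract a reduced basis by POD, i.e. a truncated SVD.
x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(1.0, 5.0, 20)                       # training parameters
S = np.column_stack([np.exp(-mu * x) for mu in mus])  # snapshot matrix

U, s, _ = np.linalg.svd(S, full_matrices=False)
ratios = np.cumsum(s**2)
ratios /= ratios[-1]                                  # cumulative energy
r = int(np.searchsorted(ratios, 1.0 - 1e-10) + 1)     # modes kept
V = U[:, :r]                                          # POD basis

# Online: approximate the solution for an unseen parameter using only the
# r-dimensional basis (here by orthogonal projection of the full-order
# solution; a real RBM would instead solve a small r-by-r reduced system).
u_new = np.exp(-2.3 * x)
u_rb = V @ (V.T @ u_new)
err = np.linalg.norm(u_new - u_rb) / np.linalg.norm(u_new)
print(r, err)
```

Offline, the snapshot SVD and basis truncation are done once at full cost; online, every query for a new parameter value only touches the r-dimensional basis, which is what makes real-time evaluation on a notebook possible.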
Numerical simulation of granular systems (i.e. the interaction of a multitude of particles with each other and with an interstitial fluid) has attracted researchers for decades due to its paramount importance in the process industries, life sciences, and environmental sciences. In general, numerical models can be organized into two modelling classes: continuous two-fluid models and discrete particle models. While in the first case the predictive capability of the models is limited by modelling uncertainties, in the second case numerical simulations are mainly limited by the sheer number of particles involved in real-scale particle-based processes. This requires huge computational resources and dedicated modelling efforts for distributing the compute load on the given hardware.
In this talk I will start by providing an overview of existing modelling approaches for the simulation of granular systems. This naturally leads to a discussion of their individual limitations and challenges with respect to performance on distributed hardware systems. Classical hardware-aware optimization (i.e. mainly optimisation of code parallelisation) of existing models has only limited potential for speeding up simulations. We will then conclude that model creation, numerical discretization, and implementation on a specific hardware cannot be regarded as independent tasks. Rather, they strongly interact, with the level of model creation having the greatest impact on simulation performance. Therefore, physicists working on new modelling approaches have to communicate with computer scientists at a very early project stage.
In the second part of this presentation I will discuss three new modelling concepts which might lead to a significant gain in performance on distributed hardware systems. First, a lattice-Boltzmann magnification lens is presented, which combines the classical world of Computational Fluid Dynamics (CFD) with a local high-resolution lattice-Boltzmann co-simulation. Second, a multi-level coarse-graining concept is introduced for particle-based simulations, which might accelerate Discrete Element Method (DEM) simulations by an order of magnitude. Finally, we sketch a new concept of randomized simulations, which utilizes expensive classical CFD (or DEM) simulations of granular systems to train a random process. In a second step, this random process can be used, e.g., for the simulation of species propagation. This new concept of randomized simulations can boost the performance of granular flow simulations by at least two orders of magnitude.
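As an illustration of the coarse-graining idea, the following toy NumPy step merges groups of particles into "parcels" that conserve mass and momentum exactly (the grouping rule and all names here are our own, not the talk's actual multi-level scheme):

```python
import numpy as np

# Toy coarse-graining step: replace groups of k particles by single
# parcels whose mass and velocity conserve total mass and momentum.
rng = np.random.default_rng(0)
n, k = 1000, 10                      # particle count, coarse-graining ratio
m = rng.uniform(1.0, 2.0, n)         # particle masses
v = rng.normal(0.0, 1.0, (n, 3))     # particle velocities

m_g = m.reshape(-1, k)               # group k consecutive particles
v_g = v.reshape(-1, k, 3)

M = m_g.sum(axis=1)                                   # parcel mass
V = (m_g[:, :, None] * v_g).sum(axis=1) / M[:, None]  # momentum-conserving velocity

# Mass and momentum are conserved exactly by this averaging:
print(np.isclose(M.sum(), m.sum()))
print(np.allclose((M[:, None] * V).sum(axis=0), (m[:, None] * v).sum(axis=0)))
```

Kinetic energy is not conserved by such averaging, which is one reason real coarse-graining schemes need correction models; the payoff is a k-fold reduction in the number of tracked entities.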
The numerical simulation of aortic valves is a multi-physics problem involving large deformations of soft tissue and transient vortical flow fields. Whereas soft tissue is most appropriately discretized on unstructured meshes in Lagrangian formulation, the three-dimensional flow field is discretized on a structured Cartesian grid to obtain an efficient implementation on modern HPC platforms. The tissue dynamics on the unstructured mesh and the flow on the structured grid are coupled with the immersed boundary method. The parallelization of such a hybrid discretization approach raises interesting questions with respect to data locality and load balancing under a domain decomposition paradigm.
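The immersed boundary coupling consists of two transfer operations: interpolating the grid velocity to the Lagrangian markers, and spreading marker forces back to the grid, both through a regularized delta function. A 1D NumPy sketch using Peskin's classical 4-point kernel (illustrative only; the actual 3D parallel discretization is not shown):

```python
import numpy as np

def delta4(r):
    """Peskin's 4-point discrete delta kernel (argument in grid units)."""
    a = np.abs(r)
    phi = np.zeros_like(a)
    m1 = a < 1.0
    m2 = (a >= 1.0) & (a < 2.0)
    phi[m1] = (3 - 2 * a[m1] + np.sqrt(1 + 4 * a[m1] - 4 * a[m1] ** 2)) / 8
    phi[m2] = (5 - 2 * a[m2] - np.sqrt(-7 + 12 * a[m2] - 4 * a[m2] ** 2)) / 8
    return phi

h = 0.05                              # Eulerian grid spacing
xg = np.arange(0.0, 10.0, h)          # structured (Cartesian) grid nodes
ug = np.sin(xg)                       # grid velocity field
Xm = np.array([2.34, 5.01, 7.77])     # Lagrangian marker positions

# Interpolation: U(X_m) = sum_i u_i * phi((x_i - X_m) / h)
W = delta4((xg[None, :] - Xm[:, None]) / h)
Um = (W * ug[None, :]).sum(axis=1)

# Spreading: f(x_i) = sum_m F_m * phi((x_i - X_m) / h) / h
Fm = np.ones_like(Xm)                 # marker forces (toy values)
fg = (W * Fm[:, None]).sum(axis=0) / h
print(Um)
```

Because each marker only touches the four nearest grid cells, these transfers are local, which is exactly why data locality and load balancing under domain decomposition become the interesting questions in parallel.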
The tutorial will take place at USI, room SI-008.
The new MPI standard (MPI-3.0) adds several key concepts for programming massively parallel modern hardware systems. This tutorial covers the three major concepts.
Content level: Introductory 25%, Intermediate 50%, Advanced 25%.
We generally assume a basic familiarity with MPI, i.e., attendees should be able to write and execute simple MPI programs. We also assume familiarity with general HPC concepts (e.g., batch systems, communication and computation tradeoffs, and networks).
A. Adelmann (PSI Villigen), P. Arbenz (ETH Zurich), H. Burkhart (U of Basel), B. Chopard (U Geneva), S. Deparis (EPF Lausanne), J. Hesthaven (EPF Lausanne), A. Janka (EIA Fribourg), R. Krause (USI Lugano), H. Nordborg (HSR), D. Obrist (U Berne), V. Rezzonico (EPF Lausanne), O. Schenk (USI Lugano), J. VandeVondele (ETH Zurich).