Name: Tutorial 03: Advanced OpenMP Programming
Time: Sunday, June 16, 2013, 9:00 AM - 1:00 PM
Room: Lecture Room 9 (LR 9), CCL - Congress Center Leipzig
Breaks: 11:00 AM - 11:30 AM Coffee Break; 1:00 PM - 2:00 PM Lunch
Presenter(s): Bronis R. de Supinski, LLNL; Michael Klemm, Intel; Christian Terboven, RWTH Aachen University
Abstract: With the increasing prevalence of multicore processors, shared-memory programming models are essential. OpenMP is a popular, portable, widely supported, and easy-to-use shared-memory model. Developers usually find OpenMP easy to learn. However, they are often disappointed with the performance and scalability of the resulting code. This disappointment stems not from shortcomings of OpenMP but rather from the lack of depth with which it is employed. Our Advanced OpenMP Programming tutorial addresses this critical need by exploring the implications of possible OpenMP parallelization strategies, both in terms of correctness and performance.
While we quickly review the basics of OpenMP programming, we assume attendees understand basic parallelization concepts and will easily grasp those basics. We discuss how OpenMP features are implemented and then focus on performance aspects, such as data and thread locality on NUMA architectures, false sharing, and exploitation of vector units. We discuss language features in depth, with emphasis on features recently added to OpenMP, such as tasking. We close with an overview of the new OpenMP 4.0 directives for attached compute accelerators.
Content Level: 10% Introductory, 50% Intermediate, 40% Advanced
Target Audience: Our primary target is HPC programmers with some knowledge of OpenMP who want to implement efficient shared-memory code.
Prerequisites: General knowledge of computer architecture concepts (e.g., SMT, multicore, and NUMA); basic knowledge of OpenMP; good knowledge of C, C++, or Fortran.