Hands-on introduction to parallel programming with OpenMP
- This repository contains materials and code examples from a lecture I delivered on OpenMP as part of a seminar I attended on Parallelization and Program Optimization.
- The lecture covered OpenMP fundamentals step by step:
  - Parallel regions
  - Synchronization (critical, atomic, barrier)
  - Loop scheduling and reductions
  - Worksharing constructs (for, sections, single, master)
  - Tasks and advanced usage
- In addition, the repository includes a MATMUL folder with matrix multiplication implementations (classic and Strassen) parallelized with OpenMP tasks.
Hello World & Pi Approximation
Getting started with OpenMP and exploring scaling issues
Features:
- hello_world.c – simple parallel region with thread IDs
- pi_spmd_simple.c – parallel numerical integration (SPMD style)
- pi_spmd_final.c – fixes false sharing, demonstrates critical & atomic sections
- pi_loop.c – loop-based parallelism with scheduling & reduction
Synchronization & Worksharing
Controlling data races and distributing work among threads
Features:
- Critical vs. atomic vs. barrier
- Worksharing constructs: for, single, sections, master
- Scheduling policies: static, dynamic, guided, runtime
Matrix Multiplication (MATMUL)
Parallel matrix multiplication with tasks
Features:
- matmul.c – classic matrix multiplication with OpenMP
- Strassen’s matrix multiplication with divide-and-conquer
- OpenMP tasks for recursive subproblems
- Timing analysis and performance comparison
- Clone the repository:

  ```bash
  git clone https://github.com/your-username/openmp-seminar.git
  cd openmp-seminar
  ```
- Compile examples with GCC and the OpenMP flag:

  ```bash
  gcc -fopenmp hello_world.c -o hello_world
  gcc -fopenmp pi_loop.c -o pi_loop
  gcc -fopenmp MATMUL/matmul.c -o matmul
  ```
- Run the executables:

  ```bash
  ./hello_world
  ./pi_loop
  ./matmul
  ```
- Control the number of threads:

  ```bash
  export OMP_NUM_THREADS=4
  ```