Course Outline
Introduction
- What is GPU programming?
- Why use GPU programming?
- What are the challenges and trade-offs of GPU programming?
- What are the frameworks and tools for GPU programming?
- Choosing the right framework and tool for your application
OpenCL
- What is OpenCL?
- What are the advantages and disadvantages of OpenCL?
- Setting up the development environment for OpenCL
- Creating a basic OpenCL program that performs vector addition
- Using the OpenCL API to query device information, allocate and deallocate device memory, transfer data between host and device, launch kernels, and synchronise threads
- Using OpenCL C to write kernels that execute on the device and manipulate data
- Using OpenCL built-in functions, variables, and libraries to perform common tasks and operations
- Using OpenCL memory spaces, such as global, local, constant, and private, to optimise data transfers and memory accesses
- Using the OpenCL execution model to control work-items, work-groups, and ND-ranges that define parallelism
- Debugging and testing OpenCL programs using tools such as CodeXL
- Optimising OpenCL programs through techniques such as coalescing, caching, prefetching, and profiling
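
The vector-addition example above maps one work-item to one array element. A minimal OpenCL C kernel sketch (the host code that builds the program and enqueues it over an ND-range of n work-items is assumed):

```c
// vecadd.cl — minimal OpenCL C kernel: one work-item per element.
__kernel void vecadd(__global const float *a,
                     __global const float *b,
                     __global float *c)
{
    size_t i = get_global_id(0);   // this work-item's position in the ND-range
    c[i] = a[i] + b[i];
}
```

On the host, the kernel is compiled with clBuildProgram, its arguments are bound with clSetKernelArg, and it is launched with clEnqueueNDRangeKernel using a global work size of n.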
CUDA
- What is CUDA?
- What are the advantages and disadvantages of CUDA?
- Setting up the development environment for CUDA
- Creating a basic CUDA program that performs vector addition
- Using the CUDA API to query device information, allocate and deallocate device memory, transfer data between host and device, launch kernels, and synchronise threads
- Using CUDA C/C++ to write kernels that execute on the device and manipulate data
- Using CUDA built-in functions, variables, and libraries to perform common tasks and operations
- Using CUDA memory spaces, such as global, shared, constant, and local, to optimise data transfers and memory accesses
- Using the CUDA execution model to control threads, blocks, and grids that define parallelism
- Debugging and testing CUDA programs using tools such as CUDA-GDB, CUDA-MEMCHECK, and NVIDIA Nsight
- Optimising CUDA programs through techniques such as coalescing, caching, prefetching, and profiling
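
The basic vector-addition program above can be sketched as follows (requires the CUDA Toolkit; the device pointers d_a, d_b, and d_c are assumed to have been allocated with cudaMalloc and filled with cudaMemcpy):

```cpp
// vecadd.cu — CUDA C++ kernel: one thread per element.
__global__ void vecadd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                  // guard: the grid may overshoot n
        c[i] = a[i] + b[i];
}

// Host-side launch: 256 threads per block, enough blocks to cover n.
// vecadd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
```

The index calculation blockIdx.x * blockDim.x + threadIdx.x is the standard mapping from the thread/block/grid hierarchy to a flat array index.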
ROCm
- What is ROCm?
- What are the advantages and disadvantages of ROCm?
- Setting up the development environment for ROCm
- Creating a basic ROCm program that performs vector addition
- Using the ROCm API to query device information, allocate and deallocate device memory, transfer data between host and device, launch kernels, and synchronise threads
- Using ROCm C/C++ to write kernels that execute on the device and manipulate data
- Using ROCm built-in functions, variables, and libraries to perform common tasks and operations
- Using ROCm memory spaces, such as global, shared, constant, and local, to optimise data transfers and memory accesses
- Using the ROCm execution model to control threads, blocks, and grids that define parallelism
- Debugging and testing ROCm programs using tools such as ROCm Debugger and ROCm Profiler
- Optimising ROCm programs through techniques such as coalescing, caching, prefetching, and profiling
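
Querying device information through the ROCm runtime, as covered above, can be sketched with the HIP runtime API (requires a ROCm installation; compile with hipcc):

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    hipGetDeviceCount(&count);              // number of visible GPUs
    for (int d = 0; d < count; ++d) {
        hipDeviceProp_t prop;
        hipGetDeviceProperties(&prop, d);   // fill properties for device d
        printf("Device %d: %s, %zu MB of global memory\n",
               d, prop.name, prop.totalGlobalMem / (1024 * 1024));
    }
    return 0;
}
```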
HIP
- What is HIP?
- What are the advantages and disadvantages of HIP?
- Setting up the development environment for HIP
- Creating a basic HIP program that performs vector addition
- Using the HIP language to write kernels that execute on the device and manipulate data
- Using HIP built-in functions, variables, and libraries to perform common tasks and operations
- Using HIP memory spaces, such as global, shared, constant, and local, to optimise data transfers and memory accesses
- Using the HIP execution model to control threads, blocks, and grids that define parallelism
- Debugging and testing HIP programs using tools such as ROCm Debugger and ROCm Profiler
- Optimising HIP programs through techniques such as coalescing, caching, prefetching, and profiling
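
Putting the topics above together, a basic HIP vector-addition program might look like this sketch (requires a ROCm or CUDA toolchain; compile with hipcc, and note that error checking is omitted for brevity):

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void vecadd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

    // Allocate device memory and copy the inputs across.
    float *da, *db, *dc;
    hipMalloc((void **)&da, bytes);
    hipMalloc((void **)&db, bytes);
    hipMalloc((void **)&dc, bytes);
    hipMemcpy(da, ha.data(), bytes, hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), bytes, hipMemcpyHostToDevice);

    // Launch: 256 threads per block, enough blocks to cover n.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecadd<<<blocks, threads>>>(da, db, dc, n);

    hipMemcpy(hc.data(), dc, bytes, hipMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);   // 1.0 + 2.0 on a working device

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}
```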
Comparison
- Comparing the features, performance, and compatibility of OpenCL, CUDA, ROCm, and HIP
- Evaluating GPU programs using benchmarks and metrics
- Learning best practices and tips for GPU programming
- Exploring current and future trends and challenges in GPU programming
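
One concrete portability point in the comparison: HIP's runtime API deliberately mirrors CUDA's, so a single code base can target both vendors through a thin alias layer. A sketch (the portable* names are hypothetical aliases; the SDK calls and the __HIP_PLATFORM_AMD__ macro are real):

```cpp
// Select the runtime at compile time; hipcc defines __HIP_PLATFORM_AMD__
// when targeting AMD GPUs.
#if defined(__HIP_PLATFORM_AMD__)
  #include <hip/hip_runtime.h>
  #define portableMalloc hipMalloc      // hypothetical alias names
  #define portableMemcpy hipMemcpy
#else
  #include <cuda_runtime.h>
  #define portableMalloc cudaMalloc
  #define portableMemcpy cudaMemcpy
#endif
```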
Summary and Next Steps
Requirements
- A solid understanding of C/C++ and of parallel programming concepts
- Basic knowledge of computer architecture and memory hierarchy
- Experience with command-line tools and code editors
Audience
- Developers who wish to learn the basics of GPU programming and the main frameworks and tools for developing GPU applications
- Developers who wish to write portable and scalable code capable of running across different platforms and devices
- Programmers who wish to explore the benefits and challenges of GPU programming and optimisation
21 Hours