GPU Programming with OpenACC Training Course
OpenACC is an open standard for heterogeneous programming that enables code to run across a range of platforms and devices, including multicore CPUs, GPUs, and FPGAs.
This instructor-led, live training (available online or on-site) is designed for beginner to intermediate-level developers who wish to leverage OpenACC to program heterogeneous devices and harness their parallelism.
By the conclusion of this training, participants will be able to:
- Set up an OpenACC development environment.
- Write and execute a basic OpenACC program.
- Annotate code using OpenACC directives and clauses.
- Utilise the OpenACC API and associated libraries.
- Profile, debug, and optimise OpenACC programs.
Course Format
- Interactive lectures and discussions.
- Abundant exercises and practical activities.
- Hands-on implementation within a live lab environment.
Course Customisation Options
- To request a customised version of this course, please contact us to make arrangements.
Course Outline
Introduction
- What is OpenACC?
- OpenACC compared with OpenCL, CUDA, and SYCL
- Overview of OpenACC features and architecture
- Setting up the development environment
Getting Started
- Creating an OpenACC project in Visual Studio Code
- Exploring project structure and files
- Compiling and running the program
- Displaying output using printf and fprintf
OpenACC Directives and Clauses
- Understanding OpenACC directives and clauses
- Using parallel directives to create parallel regions
- Using kernels directives for compiler-managed parallelism
- Using loop directives to parallelise loops
- Managing data movement with data directives
- Synchronising data with update directives
- Improving data reuse with cache directives
- Creating device functions with routine directives
- Synchronising events with wait directives
OpenACC API
- Understanding the role of the OpenACC API
- Querying device information and capabilities
- Setting device number and type
- Handling errors and exceptions
- Creating and synchronising events
OpenACC Libraries and Interoperability
- Understanding OpenACC libraries and interoperability
- Using math, random, and complex libraries
- Integrating with other models (CUDA, OpenMP, MPI)
- Integrating with GPU libraries (cuBLAS, cuFFT)
OpenACC Tools
- Overview of OpenACC development tools
- Profiling and debugging OpenACC programs
- Performance analysis using PGI Compiler, NVIDIA Nsight Systems, and Allinea Forge
Optimisation
- Factors affecting OpenACC program performance
- Optimising data locality and reducing transfers
- Optimising loop parallelism and fusion
- Optimising kernel parallelism and fusion
- Optimising vectorisation and auto-tuning
Summary and Next Steps
Requirements
- An understanding of the C/C++ or Fortran programming languages and parallel programming concepts
- Basic knowledge of computer architecture and memory hierarchy
- Experience with command-line tools and code editors
Audience
- Developers seeking to learn how to use OpenACC to program heterogeneous devices and exploit their parallelism
- Developers aiming to write portable and scalable code capable of running on diverse platforms and devices
- Programmers interested in exploring the high-level aspects of heterogeneous programming and enhancing their code productivity
Open Training Courses require 5+ participants.
Related Courses
Developing AI Applications with Huawei Ascend and CANN
21 Hours
Huawei Ascend is a family of AI processors designed for high-performance inference and training.
This instructor-led, live training (online or onsite) is aimed at intermediate-level AI engineers and data scientists who wish to develop and optimise neural network models using Huawei's Ascend platform and the CANN toolkit.
By the end of this training, participants will be able to:
- Set up and configure the CANN development environment.
- Develop AI applications using MindSpore and CloudMatrix workflows.
- Optimise performance on Ascend NPUs using custom operators and tiling.
- Deploy models to edge or cloud environments.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of Huawei Ascend and CANN toolkit in sample applications.
- Guided exercises focused on model building, training, and deployment.
Course Customisation Options
- To request a customised training for this course based on your infrastructure or datasets, please contact us to arrange.
Deploying AI Models with CANN and Ascend AI Processors
14 Hours
CANN (Compute Architecture for Neural Networks) is Huawei's AI compute stack for deploying and optimising AI models on Ascend AI processors.
This instructor-led, live training (online or on-site) is designed for intermediate-level AI developers and engineers who wish to deploy trained AI models efficiently to Huawei Ascend hardware using the CANN toolkit and tools such as MindSpore, TensorFlow, or PyTorch.
By the end of this training, participants will be able to:
- Understand the CANN architecture and its role in the AI deployment pipeline.
- Convert and adapt models from popular frameworks to Ascend-compatible formats.
- Use tools like ATC, OM model conversion, and MindSpore for edge and cloud inference.
- Diagnose deployment issues and optimise performance on Ascend hardware.
Format of the Course
- Interactive lecture and demonstration.
- Hands-on lab work using CANN tools and Ascend simulators or devices.
- Practical deployment scenarios based on real-world AI models.
Course Customisation Options
- To request a customised training for this course, please contact us to arrange.
AI Inference and Deployment with CloudMatrix
21 Hours
CloudMatrix is Huawei’s unified AI development and deployment platform, designed to support scalable, production-grade inference pipelines.
This instructor-led, live training (available online or onsite) is tailored for beginner to intermediate-level AI professionals who wish to deploy and monitor AI models using the CloudMatrix platform, with integrated support for CANN and MindSpore.
By the end of this training, participants will be able to:
- Leverage CloudMatrix for model packaging, deployment, and serving.
- Convert and optimise models for Ascend chipsets.
- Establish pipelines for both real-time and batch inference tasks.
- Monitor deployments and fine-tune performance in production environments.
Course Format
- Interactive lectures and group discussions.
- Practical, hands-on experience with CloudMatrix using real-world deployment scenarios.
- Guided exercises focused on model conversion, optimisation, and scaling.
Course Customisation Options
- To request a customised training session tailored to your AI infrastructure or cloud environment, please contact us to arrange.
GPU Programming on Biren AI Accelerators
21 Hours
Biren AI Accelerators are high-performance GPUs tailored for AI and HPC workloads, offering robust support for large-scale training and inference.
This instructor-led, live training (available online or on-site) is designed for intermediate to advanced developers who aim to program and optimise applications using Biren’s proprietary GPU stack, with practical comparisons to CUDA-based environments.
By the end of this training, participants will be able to:
- Understand Biren GPU architecture and memory hierarchy.
- Set up the development environment and utilise Biren’s programming model.
- Translate and optimise CUDA-style code for Biren platforms.
- Apply performance tuning and debugging techniques.
Format of the Course
- Interactive lectures and discussions.
- Hands-on use of the Biren SDK in sample GPU workloads.
- Guided exercises focused on porting and performance tuning.
Course Customisation Options
- To request a customised training session based on your application stack or integration needs, please contact us to arrange.
Cambricon MLU Development with BANGPy and Neuware
21 Hours
Cambricon MLUs (Machine Learning Units) are specialised AI chips optimised for inference and training in edge and datacentre scenarios.
This instructor-led, live training (online or on-site) is designed for intermediate-level developers who wish to build and deploy AI models using the BANGPy framework and Neuware SDK on Cambricon MLU hardware.
By the end of this training, participants will be able to:
- Set up and configure the BANGPy and Neuware development environments.
- Develop and optimise Python- and C++-based models for Cambricon MLUs.
- Deploy models to edge and datacentre devices running the Neuware runtime.
- Integrate ML workflows with MLU-specific acceleration features.
Course Format
- Interactive lectures and discussions.
- Hands-on application of BANGPy and Neuware for development and deployment.
- Guided exercises focused on optimisation, integration, and testing.
Course Customisation Options
- To request a customised training session for this course based on your Cambricon device model or specific use case, please contact us to make arrangements.
Introduction to CANN for AI Framework Developers
7 Hours
CANN (Compute Architecture for Neural Networks) is Huawei's AI computing toolkit used to compile, optimise, and deploy AI models on Ascend AI processors.
This instructor-led, live training (online or on-site) is aimed at beginner-level AI developers who wish to understand how CANN fits into the model lifecycle from training to deployment, and how it works with frameworks such as MindSpore, TensorFlow, and PyTorch.
By the end of this training, participants will be able to:
- Understand the purpose and architecture of the CANN toolkit.
- Set up a development environment with CANN and MindSpore.
- Convert and deploy a simple AI model to Ascend hardware.
- Gain foundational knowledge for future CANN optimisation or integration projects.
Format of the Course
- Interactive lecture and discussion.
- Hands-on labs with simple model deployment.
- Step-by-step walkthrough of the CANN toolchain and integration points.
Course Customisation Options
- To request a customised training for this course, please contact us to arrange.
CANN for Edge AI Deployment
14 Hours
Huawei's Ascend CANN toolkit delivers powerful AI inference capabilities on edge devices such as the Ascend 310. CANN provides essential tools for compiling, optimising, and deploying models in environments where compute and memory resources are limited.
This instructor-led, live training (available online or on-site) is designed for intermediate-level AI developers and integrators who wish to deploy and optimise models on Ascend edge devices using the CANN toolchain.
By the end of this training, participants will be able to:
- Prepare and convert AI models for the Ascend 310 using CANN tools.
- Build lightweight inference pipelines using MindSpore Lite and AscendCL.
- Optimise model performance for constrained compute and memory environments.
- Deploy and monitor AI applications in real-world edge use cases.
Format of the Course
- Interactive lectures and live demonstrations.
- Hands-on laboratory work with edge-specific models and scenarios.
- Live deployment examples on virtual or physical edge hardware.
Course Customisation Options
- To request a customised training session for this course, please contact us to make arrangements.
Understanding Huawei’s AI Compute Stack: From CANN to MindSpore
14 Hours
Huawei's AI stack — spanning from the low-level CANN SDK to the high-level MindSpore framework — delivers a tightly integrated AI development and deployment environment optimised for Ascend hardware.
This instructor-led, live training (available online or on-site) is designed for beginner to intermediate-level technical professionals seeking to understand how CANN and MindSpore components work together to support AI lifecycle management and inform infrastructure decisions.
By the end of this training, participants will be able to:
- Understand the layered architecture of Huawei's AI compute stack.
- Identify how CANN supports model optimisation and hardware-level deployment.
- Evaluate the MindSpore framework and toolchain in relation to industry alternatives.
- Position Huawei's AI stack within enterprise or cloud/on-premises environments.
Course Format
- Interactive lecture and discussion.
- Live system demonstrations and case-based walkthroughs.
- Optional guided labs exploring model flow from MindSpore to CANN.
Course Customisation Options
- To request a customised training session for this course, please contact us to arrange.
Optimizing Neural Network Performance with CANN SDK
14 Hours
CANN SDK (Compute Architecture for Neural Networks) is Huawei's AI compute foundation that enables developers to fine-tune and optimise the performance of deployed neural networks on Ascend AI processors.
This instructor-led, live training (available online or on-site) is designed for advanced-level AI developers and system engineers who wish to enhance inference performance using CANN's advanced toolset, including the Graph Engine, TIK, and custom operator development.
By the end of this training, participants will be able to:
- Understand CANN's runtime architecture and performance lifecycle.
- Use profiling tools and the Graph Engine for performance analysis and optimisation.
- Create and optimise custom operators using TIK and TVM.
- Resolve memory bottlenecks and improve model throughput.
Course Format
- Interactive lectures and discussions.
- Hands-on labs featuring real-time profiling and operator tuning.
- Optimisation exercises using edge-case deployment scenarios.
Course Customisation Options
- To request a customised training session for this course, please contact us to arrange.
CANN SDK for Computer Vision and NLP Pipelines
14 Hours
The CANN SDK (Compute Architecture for Neural Networks) delivers powerful deployment and optimisation tools for real-time AI applications in computer vision and NLP, particularly on Huawei Ascend hardware.
This instructor-led, live training (online or on-site) is designed for intermediate-level AI practitioners who wish to build, deploy, and optimise vision and language models using the CANN SDK for production use cases.
By the end of this training, participants will be able to:
- Deploy and optimise CV and NLP models using CANN and AscendCL.
- Use CANN tools to convert models and integrate them into live pipelines.
- Optimise inference performance for tasks such as detection, classification, and sentiment analysis.
- Build real-time CV/NLP pipelines for edge or cloud-based deployment scenarios.
Course Format
- Interactive lecture and demonstration.
- Hands-on lab covering model deployment and performance profiling.
- Live pipeline design using real-world CV and NLP use cases.
Course Customisation Options
- To request a customised training session for this course, please contact us to arrange.
Building Custom AI Operators with CANN TIK and TVM
14 Hours
CANN TIK (Tensor Instruction Kernel) and Apache TVM enable advanced optimisation and customisation of AI model operators for Huawei Ascend hardware.
This instructor-led, live training (online or onsite) is aimed at advanced-level system developers who wish to build, deploy, and tune custom operators for AI models using CANN's TIK programming model and TVM compiler integration.
By the end of this training, participants will be able to:
- Write and test custom AI operators using the TIK DSL for Ascend processors.
- Integrate custom ops into the CANN runtime and execution graph.
- Use TVM for operator scheduling, auto-tuning, and benchmarking.
- Debug and optimise instruction-level performance for custom computation patterns.
Format of the Course
- Interactive lecture and demonstration.
- Hands-on coding of operators using TIK and TVM pipelines.
- Testing and tuning on Ascend hardware or simulators.
Course Customisation Options
- To request a customised training for this course, please contact us to arrange.
Migrating CUDA Applications to Chinese GPU Architectures
21 Hours
Chinese GPU architectures, including Huawei Ascend, Biren, and Cambricon MLUs, provide CUDA alternatives specifically designed for local AI and high-performance computing (HPC) markets.
This instructor-led, live training (available online or on-site) is tailored for advanced-level GPU programmers and infrastructure specialists seeking to migrate and optimise existing CUDA applications for deployment on Chinese hardware platforms.
By the end of this training, participants will be able to:
- Evaluate the compatibility of existing CUDA workloads with Chinese chip alternatives.
- Port CUDA codebases to Huawei CANN, Biren SDK, and Cambricon BANGPy environments.
- Compare performance and identify optimisation opportunities across platforms.
- Address practical challenges related to cross-architecture support and deployment.
Course Format
- Interactive lectures and discussions.
- Hands-on code translation and performance comparison labs.
- Guided exercises focused on multi-GPU adaptation strategies.
Course Customisation Options
- To request a customised training session for this course based on your platform or CUDA project, please contact us to arrange.
Performance Optimization on Ascend, Biren, and Cambricon
21 Hours
Ascend, Biren, and Cambricon are leading AI hardware platforms in China, each offering unique acceleration and profiling tools for production-scale AI workloads.
This instructor-led, live training (online or onsite) is aimed at advanced-level AI infrastructure and performance engineers who wish to optimise model inference and training workflows across multiple Chinese AI chip platforms.
By the end of this training, participants will be able to:
- Benchmark models on Ascend, Biren, and Cambricon platforms.
- Identify system bottlenecks and memory/compute inefficiencies.
- Apply graph-level, kernel-level, and operator-level optimisations.
- Tune deployment pipelines to improve throughput and latency.
Format of the Course
- Interactive lecture and discussion.
- Hands-on use of profiling and optimisation tools on each platform.
- Guided exercises focused on practical tuning scenarios.
Course Customisation Options
- To request a customised training for this course based on your performance environment or model type, please contact us to arrange.