Course Outline

Overview of CANN Optimisation Capabilities

  • How inference performance is managed within CANN
  • Optimisation goals for edge and embedded AI systems
  • Understanding AI Core utilisation and memory allocation

Using Graph Engine for Analysis

  • Introduction to the Graph Engine and execution pipeline
  • Visualising operator graphs and runtime metrics
  • Modifying computational graphs for optimisation
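
Graph modification for optimisation can be illustrated with a toy pass that is independent of the real Graph Engine API (which is not described in this outline): each node is a (name, kind) pair, and runs of consecutive element-wise ops are fused so they could execute as a single kernel. All names and the graph representation here are illustrative assumptions.

```python
# Toy operator-graph fusion pass (not the Graph Engine API):
# collapse runs of element-wise ops into single fused nodes.
ELEMENTWISE = {"relu", "add", "mul"}

def fuse_elementwise(graph):
    """graph: list of (name, kind) pairs, in execution order."""
    fused, run = [], []
    for name, kind in graph:
        if kind in ELEMENTWISE:
            run.append(name)
        else:
            if run:  # flush the pending run as one fused node
                fused.append(("+".join(run), "fused_elementwise"))
                run = []
            fused.append((name, kind))
    if run:
        fused.append(("+".join(run), "fused_elementwise"))
    return fused

graph = [("conv1", "conv2d"), ("relu1", "relu"), ("add1", "add"),
         ("conv2", "conv2d"), ("relu2", "relu")]
print(fuse_elementwise(graph))
# [('conv1', 'conv2d'), ('relu1+add1', 'fused_elementwise'),
#  ('conv2', 'conv2d'), ('relu2', 'fused_elementwise')]
```

Fusing element-wise chains reduces kernel-launch overhead and intermediate memory traffic, which is the motivation behind inspecting and rewriting the graph in the first place.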

Profiling Tools and Performance Metrics

  • Using the CANN Profiling Tool (profiler) for workload analysis
  • Analysing kernel execution time and bottlenecks
  • Memory access profiling and tiling strategies
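
The basic shape of kernel-time measurement can be sketched without the CANN profiler itself: warm-up runs first (to settle caches and allocators), then the median of repeated wall-clock samples. The `kernel` callable below is a stand-in for whatever workload is being profiled.

```python
# Minimal timing harness: warm-up runs, then median of repeats.
import time
import statistics

def benchmark(kernel, *, warmup=3, repeats=10):
    for _ in range(warmup):          # discard warm-up iterations
        kernel()
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        kernel()
        samples.append(time.perf_counter() - t0)
    return {"median_s": statistics.median(samples),
            "min_s": min(samples), "max_s": max(samples)}

stats = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"median: {stats['median_s'] * 1e3:.3f} ms")
```

Reporting the median rather than the mean makes the result robust to one-off scheduling hiccups; a dedicated profiler adds per-kernel and memory-access breakdowns on top of this.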

Custom Operator Development with TIK

  • Overview of TIK and the operator programming model
  • Implementing a custom operator using TIK DSL
  • Testing and benchmarking operator performance
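
The testing step can be sketched independently of the TIK DSL (whose API is not covered in this outline): a candidate kernel is checked element-wise against a straightforward reference implementation before any performance tuning. Both "kernels" here are plain-Python stand-ins.

```python
# Correctness check of a candidate operator against a reference.

def reference_relu(xs):
    return [x if x > 0 else 0 for x in xs]

def candidate_relu(xs):
    # stand-in for the output of a custom operator
    return [max(x, 0) for x in xs]

def check(candidate, reference, inputs, tol=1e-6):
    out, ref = candidate(inputs), reference(inputs)
    assert len(out) == len(ref), "shape mismatch"
    assert all(abs(a - b) <= tol for a, b in zip(out, ref)), "value mismatch"
    return True

print(check(candidate_relu, reference_relu, [-2.0, -0.5, 0.0, 1.5, 3.0]))
# True
```

Validating against a reference first means any later benchmarking compares implementations that are known to agree numerically.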

Advanced Operator Optimisation with TVM

  • Introduction to TVM integration with CANN
  • Auto-tuning strategies for computational graphs
  • When and how to switch between TVM and TIK
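
Auto-tuning can be shown in miniature: enumerate a search space of tile sizes, time each configuration, and keep the fastest. Real auto-tuners such as TVM's add cost models and smarter search strategies, but the loop has the same shape; the workload below is a made-up stand-in.

```python
# Grid-search auto-tuning sketch over tile sizes.
import time

def tiled_sum(data, tile):
    total = 0
    for start in range(0, len(data), tile):
        total += sum(data[start:start + tile])
    return total

def autotune(data, candidates):
    best_tile, best_time = None, float("inf")
    for tile in candidates:
        t0 = time.perf_counter()
        tiled_sum(data, tile)
        elapsed = time.perf_counter() - t0
        if elapsed < best_time:
            best_tile, best_time = tile, elapsed
    return best_tile

data = list(range(100_000))
best = autotune(data, [64, 256, 1024, 4096])
print("best tile size:", best)
```

The winning tile size depends on the machine the search runs on, which is exactly why tuning is done empirically rather than fixed at authoring time.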

Memory Optimisation Techniques

  • Managing memory layout and buffer placement
  • Techniques to reduce on-chip memory consumption
  • Best practices for asynchronous execution and reuse
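
Buffer reuse can be sketched as a liveness problem: given each tensor's live interval, greedily assign tensors to a pool of buffers so that tensors whose lifetimes do not overlap share storage. The tensor names and intervals below are illustrative assumptions, not output of any CANN tool.

```python
# Greedy buffer-reuse planner over tensor live intervals.

def assign_buffers(tensors):
    """tensors: {name: (first_use, last_use)}; returns {name: buffer_id}."""
    free, assignment, next_id = [], {}, 0
    releases = []  # (last_use, buffer_id) of currently assigned buffers
    for name, (start, end) in sorted(tensors.items(), key=lambda kv: kv[1][0]):
        releases.sort()
        # free buffers whose owners' lifetimes ended before this tensor starts
        while releases and releases[0][0] < start:
            free.append(releases.pop(0)[1])
        if free:
            buf = free.pop()          # reuse a freed buffer
        else:
            buf, next_id = next_id, next_id + 1  # allocate a new one
        assignment[name] = buf
        releases.append((end, buf))
    return assignment

plan = assign_buffers({"a": (0, 2), "b": (1, 3), "c": (4, 5), "d": (4, 6)})
print(plan)  # four tensors fit in two buffers: c and d reuse a's and b's
```

Two buffers suffice for four tensors here; the same idea, applied to on-chip memory, is what keeps working sets inside the fast local buffers.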

Real-World Deployment and Case Studies

  • Case study: performance tuning for a smart city camera pipeline
  • Case study: optimising an autonomous vehicle inference stack
  • Guidelines for iterative profiling and continuous improvement

Summary and Next Steps

Requirements

  • A strong understanding of deep learning model architectures and training workflows
  • Experience with model deployment using CANN, TensorFlow, or PyTorch
  • Familiarity with the Linux CLI, shell scripting, and Python programming

Audience

  • AI performance engineers
  • Inference optimisation specialists
  • Developers working with edge AI or real-time systems

Duration: 14 Hours
