Edge AI for Robots: TinyML, On-Device Inference & Optimization Training Course
Edge AI enables artificial intelligence models to run directly on embedded or resource-constrained devices, reducing latency and power consumption while enhancing autonomy and privacy within robotic systems.
This instructor-led, live training (available online or on-site) is designed for intermediate-level embedded developers and robotics engineers seeking to implement machine learning inference and optimisation techniques directly on robotic hardware using TinyML and edge AI frameworks.
By the conclusion of this training, participants will be able to:
- Grasp the fundamentals of TinyML and edge AI in the context of robotics.
- Convert and deploy AI models for on-device inference.
- Optimise models for speed, size, and energy efficiency.
- Integrate edge AI systems into robotic control architectures.
- Evaluate performance and accuracy in real-world scenarios.
Course Format
- Interactive lectures and discussions.
- Hands-on practice using TinyML and edge AI toolchains.
- Practical exercises on embedded and robotic hardware platforms.
Course Customisation Options
- To request a customised training session for this course, please contact us to arrange.
Course Outline
Introduction to Edge AI and TinyML
- Overview of AI at the edge
- Benefits and challenges of running AI on devices
- Use cases in robotics and automation
Fundamentals of TinyML
- Machine learning for resource-constrained systems
- Model quantisation, pruning, and compression
- Supported frameworks and hardware platforms
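The quantisation topic above can be previewed with a minimal sketch: affine int8 quantisation of a float tensor, the same general scheme frameworks such as TensorFlow Lite apply during post-training quantisation. This is an illustrative stand-alone example, not a framework API; the function names and values are made up for demonstration.

```python
# Minimal sketch of affine int8 quantisation (illustrative, not a framework API).

def quantize(values, num_bits=8):
    """Map floats to signed int8 using an affine scale/zero-point."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(values), max(values)
    lo, hi = min(lo, 0.0), max(hi, 0.0)      # the range must contain zero
    scale = (hi - lo) / (qmax - qmin) or 1.0  # avoid division by zero
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the int8 representation."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.5, -0.2, 0.0, 0.7, 1.5]
q, s, z = quantize(weights)
restored = dequantize(q, s, z)
# Reconstruction error is bounded by half a quantisation step.
assert all(abs(a - b) <= s / 2 + 1e-9 for a, b in zip(weights, restored))
```

The pay-off on embedded hardware is a 4x smaller weight tensor (int8 vs float32) and integer-only arithmetic, at the cost of the bounded rounding error shown in the final assertion.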
Model Development and Conversion
- Training lightweight models using TensorFlow or PyTorch
- Converting models to TensorFlow Lite and PyTorch Mobile
- Testing and validating model accuracy
On-Device Inference Implementation
- Deploying AI models to embedded boards (Arduino, Raspberry Pi, Jetson Nano)
- Integrating inference with robotic perception and control
- Running real-time predictions and monitoring performance
Optimisation for Edge Performance
- Reducing latency and energy consumption
- Hardware acceleration using NPUs and GPUs
- Benchmarking and profiling embedded inference
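Benchmarking embedded inference usually starts with wall-clock latency statistics. The sketch below times a stand-in workload and reports p50/p95 latency; `run_inference` is a placeholder for whatever model call you actually deploy (for example, a TFLite interpreter invocation).

```python
import statistics
import time

def benchmark(fn, warmup=10, iterations=100):
    """Measure per-call latency in milliseconds; return (p50, p95)."""
    for _ in range(warmup):               # warm caches / lazy initialisation
        fn()
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    p50 = statistics.median(samples)
    p95 = samples[int(0.95 * len(samples)) - 1]
    return p50, p95

# Stand-in for a real model invocation.
def run_inference():
    sum(i * i for i in range(10_000))

p50, p95 = benchmark(run_inference)
print(f"latency p50={p50:.3f} ms  p95={p95:.3f} ms")
```

Reporting percentiles rather than the mean matters on embedded targets, where occasional scheduling or thermal-throttling spikes can dominate an average.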
Edge AI Frameworks and Tools
- Working with TensorFlow Lite and Edge Impulse
- Exploring PyTorch Mobile deployment options
- Debugging and tuning embedded ML workflows
Practical Integration and Case Studies
- Designing edge AI perception systems for robots
- Integrating TinyML with ROS-based robotics architectures
- Case studies: autonomous navigation, object detection, predictive maintenance
Summary and Next Steps
Requirements
- A solid understanding of embedded systems
- Experience with Python or C++ programming
- Familiarity with core machine learning concepts
Audience
- Embedded developers
- Robotics engineers
- System integrators working on intelligent devices
Open Training Courses require 5+ participants.

Testimonials (2)
Supply of the materials (virtual machine) to get straight into the exercises, and the explanation of the ROS 2 core. Why things work a certain way.
Arjan Bakema
Course - Autonomous Navigation & SLAM with ROS 2
Its knowledge and utilization of AI for robotics in the future.
Ryle - PHILIPPINE MILITARY ACADEMY
Course - Artificial Intelligence (AI) for Robotics
Provisional Upcoming Courses (Require 5+ participants)
Related Courses
Artificial Intelligence (AI) for Robotics
21 Hours
Artificial Intelligence (AI) for Robotics brings together machine learning, control systems, and sensor fusion to create intelligent machines capable of perceiving, reasoning, and acting autonomously. With modern tools such as ROS 2, TensorFlow, and OpenCV, engineers can now design robots that navigate, plan, and interact intelligently with real-world environments.
This instructor-led, live training (available online or on-site) is designed for intermediate-level engineers who want to develop, train, and deploy AI-driven robotic systems using current open-source technologies and frameworks.
By the end of this training, participants will be able to:
- Use Python and ROS 2 to build and simulate robotic behaviours.
- Implement Kalman and Particle Filters for localisation and tracking.
- Apply computer vision techniques using OpenCV for perception and object detection.
- Use TensorFlow for motion prediction and learning-based control.
- Integrate SLAM (Simultaneous Localisation and Mapping) for autonomous navigation.
- Develop reinforcement learning models to enhance robotic decision-making.
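The Kalman-filter objective above can be previewed with a minimal one-dimensional example: estimating a fixed position from noisy range measurements. The noise variances and readings below are made up for illustration; a real robot would use multi-dimensional state and calibrated sensor noise.

```python
def kalman_1d(measurements, process_var=1e-4, meas_var=0.5,
              x0=0.0, p0=1.0):
    """Scalar Kalman filter: return the filtered estimate after each step."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + process_var          # predict: uncertainty grows over time
        k = p / (p + meas_var)       # Kalman gain: trust in the measurement
        x = x + k * (z - x)          # update the estimate with measurement z
        p = (1.0 - k) * p            # updated (reduced) uncertainty
        estimates.append(x)
    return estimates

# Noisy readings of a true position of 5.0 (illustrative values).
readings = [4.8, 5.3, 5.1, 4.9, 5.2, 5.0, 4.95, 5.05]
estimates = kalman_1d(readings)
```

Each step blends prediction and measurement: a large gain `k` means the sensor is trusted, a small gain means the model is; the same predict/update cycle generalises to full localisation and tracking.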
Course Format
- Interactive lectures and discussions.
- Hands-on implementation using ROS 2 and Python.
- Practical exercises in both simulated and real robotic environments.
Course Customisation Options
To request a customised training session for this course, please contact us to arrange.
AI and Robotics for Nuclear - Extended
120 Hours
In this instructor-led, live training in New Zealand (online or onsite), participants will learn the various technologies, frameworks, and techniques for programming different types of robots to be used in the fields of nuclear technology and environmental systems.
The six-week course is held five days a week. Each day spans four hours and combines lectures, discussions, and hands-on robot development in a live lab environment. Participants will complete a range of real-world projects relevant to their work, allowing them to apply and reinforce their newly acquired knowledge.
The target hardware for this course will be simulated in 3D using specialised simulation software. The ROS (Robot Operating System) open-source framework, along with C++ and Python, will be used to program the robots.
By the end of this training, participants will be able to:
- Understand the key concepts underpinning robotic technologies.
- Understand and manage the interaction between software and hardware within a robotic system.
- Understand and implement the software components that form the foundation of robotics.
- Build and operate a simulated mechanical robot capable of seeing, sensing, processing information, navigating, and interacting with humans via voice.
- Grasp the essential elements of artificial intelligence (including machine learning, deep learning, and more) applicable to building intelligent robots.
- Implement filters (such as Kalman and Particle filters) to enable the robot to locate moving objects in its environment.
- Implement search algorithms and motion planning strategies.
- Implement PID controls to regulate a robot's movement within an environment.
- Implement SLAM algorithms to allow a robot to map out an unknown environment.
- Enhance a robot's ability to perform complex tasks through Deep Learning.
- Test and troubleshoot a robot in realistic scenarios.
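The PID-control objective in the list above can be sketched in a few lines: a discrete PID loop driving a simple first-order plant toward a setpoint. The gains and the plant model are illustrative and not tuned for any particular robot.

```python
class PID:
    """Discrete PID controller (illustrative gains, fixed timestep)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Drive a crude first-order plant (e.g. wheel speed) toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.05)
speed = 0.0
for _ in range(300):
    u = pid.update(1.0, speed)
    speed += (u - speed) * 0.05   # plant: output lags toward the control input
print(f"final speed: {speed:.3f}")   # settles near the 1.0 setpoint
```

The integral term is what removes the steady-state offset a proportional-only controller would leave; the derivative term damps overshoot.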
AI and Robotics for Nuclear
80 Hours
In this instructor-led, live training in New Zealand (available online or on-site), participants will explore the various technologies, frameworks, and techniques for programming different types of robots to be deployed in nuclear technology and environmental systems.
The four-week course runs five days a week, with each day lasting four hours. It combines lectures, discussions, and hands-on robot development within a live lab environment. Participants will complete a range of real-world projects relevant to their work to apply and reinforce their newly acquired knowledge.
The target hardware for this course will first be simulated in 3D using dedicated simulation software. The code will then be loaded onto physical hardware (such as Arduino or similar platforms) for final deployment testing. The ROS (Robot Operating System) open-source framework, along with C++ and Python, will be used to program the robots.
By the end of this training, participants will be able to:
- Grasp the key concepts underpinning robotic technologies.
- Understand and manage the interaction between software and hardware within a robotic system.
- Comprehend and implement the software components that form the foundation of robotics.
- Build and operate a simulated mechanical robot capable of seeing, sensing, processing, navigating, and interacting with humans via voice.
- Understand the essential elements of artificial intelligence (including machine learning, deep learning, etc.) required to build intelligent robots.
- Implement filters (such as Kalman and Particle filters) to enable the robot to locate moving objects in its environment.
- Apply search algorithms and motion planning techniques.
- Implement PID controls to regulate a robot's movement within an environment.
- Deploy SLAM algorithms to allow a robot to map an unknown environment.
- Test and troubleshoot robots in realistic scenarios.
Autonomous Navigation & SLAM with ROS 2
21 Hours
ROS 2 (Robot Operating System 2) is an open-source framework designed to support the development of complex and scalable robotic applications.
This instructor-led, live training (online or onsite) is aimed at intermediate-level robotics engineers and developers who wish to implement autonomous navigation and SLAM (Simultaneous Localisation and Mapping) using ROS 2.
By the end of this training, participants will be able to:
- Set up and configure ROS 2 for autonomous navigation applications.
- Implement SLAM algorithms for mapping and localisation.
- Integrate sensors such as LiDAR and cameras with ROS 2.
- Simulate and test autonomous navigation in Gazebo.
- Deploy navigation stacks on physical robots.
Format of the Course
- Interactive lecture and discussion.
- Hands-on practice using ROS 2 tools and simulation environments.
- Live-lab implementation and testing on virtual or physical robots.
Course Customisation Options
- To request a customised training for this course, please contact us to arrange.
Developing Intelligent Bots with Azure
14 Hours
Azure Bot Service combines the capabilities of the Microsoft Bot Framework and Azure Functions, delivering a robust platform for rapidly building intelligent bots.
In this instructor-led, live training, participants will explore how to efficiently develop intelligent bots using Microsoft Azure.
By the end of the training, participants will be able to:
- Grasp the core concepts underpinning intelligent bots.
- Build intelligent bots using cloud-based applications.
- Gain practical expertise in the Microsoft Bot Framework, the Bot Builder SDK, and Azure Bot Service.
- Apply proven bot design patterns to real-world scenarios.
- Create and deploy their first intelligent bot using Microsoft Azure.
Audience
This course is tailored for developers, hobbyists, engineers, and IT professionals with an interest in bot development.
Format of the course
The training blends lectures and discussions with hands-on exercises, placing a strong emphasis on practical application.
Computer Vision for Robotics: Perception with OpenCV & Deep Learning
21 Hours
OpenCV is an open-source computer vision library that enables real-time image processing, while deep learning frameworks such as TensorFlow provide the tools for intelligent perception and decision-making in robotic systems.
This instructor-led, live training (online or onsite) is aimed at intermediate-level robotics engineers, computer vision practitioners, and machine learning engineers who wish to apply computer vision and deep learning techniques for robotic perception and autonomy.
By the end of this training, participants will be able to:
- Implement computer vision pipelines using OpenCV.
- Integrate deep learning models for object detection and recognition.
- Use vision-based data for robotic control and navigation.
- Combine classical vision algorithms with deep neural networks.
- Deploy computer vision systems on embedded and robotic platforms.
Format of the Course
- Interactive lecture and discussion.
- Hands-on practice using OpenCV and TensorFlow.
- Live-lab implementation on simulated or physical robotic systems.
Course Customisation Options
- To request a customised training for this course, please contact us to arrange.
Developing a Bot
14 Hours
A bot, or chatbot, is a digital assistant designed to automate user interactions across various messaging platforms, enabling tasks to be completed faster without requiring human intervention.
In this instructor-led, live training, participants will learn how to begin developing their own bots by creating sample chatbots using industry-standard bot development tools and frameworks.
By the end of this training, participants will be able to:
- Understand the diverse uses and applications of bots
- Grasp the end-to-end process of bot development
- Explore the range of tools and platforms used in building bots
- Build a sample chatbot for Facebook Messenger
- Build a sample chatbot using the Microsoft Bot Framework
Audience
- Developers interested in creating their own bots
Course Format
- A blend of lecture, discussion, hands-on exercises, and practical application
Human-Centric Physical AI: Collaborative Robots and Beyond
14 Hours
This instructor-led, live training in New Zealand (online or on-site) is designed for intermediate-level participants who wish to explore the role of collaborative robots (cobots) and other human-centric AI systems in modern workplaces.
By the end of this training, participants will be able to:
- Understand the principles of Human-Centric Physical AI and its applications.
- Explore the role of collaborative robots in enhancing workplace productivity.
- Identify and address challenges in human-machine interactions.
- Design workflows that optimise collaboration between humans and AI-driven systems.
- Promote a culture of innovation and adaptability in AI-integrated workplaces.
Human-Robot Interaction (HRI): Voice, Gesture & Collaborative Control
21 Hours
Human-Robot Interaction (HRI): Voice, Gesture & Collaborative Control is a hands-on course designed to introduce participants to the design and implementation of intuitive interfaces for human–robot communication. The training combines theory, design principles, and programming practice to build natural and responsive interaction systems using speech, gesture, and shared control techniques. Participants will learn how to integrate perception modules, develop multimodal input systems, and design robots that safely collaborate with humans.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level participants who wish to design and implement human–robot interaction systems that enhance usability, safety, and user experience.
By the end of this training, participants will be able to:
- Understand the foundations and design principles of human–robot interaction.
- Develop voice-based control and response mechanisms for robots.
- Implement gesture recognition using computer vision techniques.
- Design collaborative control systems for safe and shared autonomy.
- Evaluate HRI systems based on usability, safety, and human factors.
Format of the Course
- Interactive lectures and demonstrations.
- Hands-on coding and design exercises.
- Practical experiments in simulation or real robotic environments.
Course Customisation Options
- To request a customised training for this course, please contact us to arrange.
Industrial Robotics Automation: ROS-PLC Integration & Digital Twins
28 Hours
Industrial Robotics Automation: ROS-PLC Integration & Digital Twins is a hands-on course focused on bridging industrial automation with modern robotics frameworks. Participants will learn to integrate ROS-based robotic systems with PLCs for synchronised operations and explore digital twin environments to simulate, monitor, and optimise production processes. The course emphasises interoperability, real-time control, and predictive analysis using digital replicas of physical systems.
This instructor-led, live training (online or onsite) is aimed at intermediate-level professionals who wish to build practical skills in connecting ROS-controlled robots with PLC environments and implementing digital twins for automation and manufacturing optimisation.
By the end of this training, participants will be able to:
- Understand communication protocols between ROS and PLC systems.
- Implement real-time data exchange between robots and industrial controllers.
- Develop digital twins for monitoring, testing, and process simulation.
- Integrate sensors, actuators, and robotic manipulators within industrial workflows.
- Design and validate industrial automation systems using hybrid simulation environments.
Format of the Course
- Interactive lecture and architecture walkthroughs.
- Hands-on exercises integrating ROS and PLC systems.
- Simulation and digital twin project implementation.
Course Customisation Options
- To request a customised training for this course, please contact us to arrange.
Artificial Intelligence (AI) for Mechatronics
21 Hours
This instructor-led, live training in New Zealand (available online or on-site) is designed for engineers who wish to explore how artificial intelligence can be applied to mechatronic systems.
By the end of this training, participants will be able to:
- Gain a comprehensive overview of artificial intelligence, machine learning, and computational intelligence.
- Understand the fundamental concepts of neural networks and various learning methodologies.
- Select appropriate artificial intelligence approaches to effectively address real-world problems.
- Implement AI-driven applications within mechatronic engineering contexts.
Multi-Robot Systems and Swarm Intelligence
28 Hours
Multi-Robot Systems and Swarm Intelligence is an advanced training course that explores the design, coordination, and control of robotic teams inspired by biological swarm behaviours. Participants will learn how to model interactions, implement distributed decision-making, and optimise collaboration across multiple agents. The course combines theory with hands-on simulation to prepare learners for applications in logistics, defence, search and rescue, and autonomous exploration.
This instructor-led, live training (online or on-site) is aimed at advanced-level professionals who wish to design, simulate, and implement multi-robot and swarm-based systems using open-source frameworks and algorithms.
By the end of this training, participants will be able to:
- Understand the principles and dynamics of swarm intelligence and cooperative robotics.
- Design communication and coordination strategies for multi-robot systems.
- Implement distributed decision-making and consensus algorithms.
- Simulate collective behaviours such as formation control, flocking, and coverage.
- Apply swarm-based techniques to real-world scenarios and optimisation problems.
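One of the building blocks named above, distributed consensus, can be sketched with the classic average-consensus iteration: each robot repeatedly nudges its local estimate toward its neighbours' values until the whole team agrees, with no central coordinator. The ring topology, step size, and starting values below are illustrative.

```python
def consensus_step(values, neighbours, alpha=0.3):
    """One round of distributed average consensus on a fixed graph."""
    new = []
    for i, v in enumerate(values):
        # Each agent moves toward its neighbours; no agent sees the full team.
        correction = sum(values[j] - v for j in neighbours[i])
        new.append(v + alpha * correction)
    return new

# Ring of 4 robots, each starting with a different local estimate.
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
values = [0.0, 2.0, 8.0, 6.0]
for _ in range(50):
    values = consensus_step(values, neighbours)
# All agents converge to the initial average, 4.0.
```

Because the communication graph is symmetric, the team-wide sum is preserved at every step, so the agreed value is exactly the average of the initial estimates; the step size `alpha` must stay small enough for the iteration to remain stable on the given graph.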
Format of the Course
- Advanced lectures with algorithmic deep dives.
- Hands-on coding and simulation in ROS 2 and Gazebo.
- Collaborative project applying swarm intelligence principles.
Course Customisation Options
- To request a customised training for this course, please contact us to arrange.
Multimodal AI in Robotics
21 Hours
This instructor-led, live training in New Zealand (available online or onsite) is designed for advanced-level robotics engineers and AI researchers who wish to leverage Multimodal AI to integrate diverse sensory data, creating more autonomous and efficient robots that can see, hear, and touch.
By the end of this training, participants will be able to:
- Implement multimodal sensing within robotic systems.
- Develop AI algorithms for sensor fusion and decision-making.
- Create robots capable of performing complex tasks in dynamic environments.
- Tackle challenges related to real-time data processing and actuation.
Smart Robots for Developers
84 Hours
A Smart Robot is an Artificial Intelligence (AI) system capable of learning from its environment and experiences, continuously enhancing its capabilities through that knowledge. Smart Robots are designed to collaborate with humans, working alongside them and learning from their behaviour. Moreover, they are equipped to perform not only manual tasks but also cognitive functions. Beyond physical robots, Smart Robots can exist purely as software applications, residing within a computer without moving parts or direct physical interaction with the external world.
In this instructor-led, live training programme, participants will explore the diverse technologies, frameworks, and techniques used to program various types of mechanical Smart Robots. They will then apply this knowledge to complete their own Smart Robot projects.
The course is structured into four sections, each spanning three days of lectures, discussions, and hands-on robot development in a live lab environment. Each section concludes with a practical, hands-on project, enabling participants to practice and demonstrate their newly acquired skills.
The target hardware for this course will be simulated in 3D using specialised simulation software. The ROS (Robot Operating System) open-source framework, along with C++ and Python, will be used to program the robots.
By the end of this training, participants will be able to:
- Grasp the key concepts underpinning robotic technologies
- Understand and manage the interaction between software and hardware within a robotic system
- Comprehend and implement the software components that form the foundation of Smart Robots
- Build and operate a simulated mechanical Smart Robot capable of seeing, sensing, processing, grasping, navigating, and interacting with humans via voice
- Enhance a Smart Robot's ability to perform complex tasks through Deep Learning
- Test and troubleshoot a Smart Robot in realistic scenarios
Target Audience
- Developers
- Engineers
Course Format
- A blend of lecture, discussion, exercises, and intensive hands-on practice
Note
- To customise any aspect of this course (programming language, robot model, etc.), please contact us to arrange.
Smart Robotics in Manufacturing: AI for Perception, Planning, and Control
21 Hours
Smart Robotics involves integrating artificial intelligence into robotic systems to enhance perception, decision-making, and autonomous control.
This instructor-led, live training (available online or on-site) is designed for advanced-level robotics engineers, systems integrators, and automation leads who aim to implement AI-driven perception, planning, and control within smart manufacturing environments.
By the end of this training, participants will be able to:
- Understand and apply AI techniques for robotic perception and sensor fusion.
- Develop motion planning algorithms for collaborative and industrial robots.
- Deploy learning-based control strategies for real-time decision making.
- Integrate intelligent robotic systems into smart factory workflows.
Course Format
- Interactive lectures and discussions.
- Extensive exercises and practical sessions.
- Hands-on implementation in a live-lab environment.
Course Customisation Options
- To request a customised training session for this course, please contact us to arrange.