Data Streaming and Real Time Data Processing Training Course
Course Overview
This course offers a practical and structured introduction to building real-time data streaming systems. It covers core concepts, architecture patterns, and industry tools used to process continuous data at scale. Participants will learn how to design, implement, and optimise streaming pipelines using modern frameworks. The course progresses from foundational ideas to hands-on applications, enabling learners to confidently build production-ready real-time solutions.
Format of Training
• Instructor-led sessions with guided explanations
• Concept walkthroughs with real-world examples
• Hands-on demonstrations and coding exercises
• Progressive labs aligned with daily topics
• Interactive discussions and Q&A
Course Objectives
• Understand real-time data streaming concepts and system architecture
• Differentiate between batch and streaming data processing models
• Design scalable and fault-tolerant streaming pipelines
• Work with distributed streaming tools and frameworks
• Apply event time processing, windowing, and stateful operations
• Build and optimise real-time data solutions for business use cases
This course is available as onsite live training in New Zealand or online live training.
Course Outline
Day 1
• Introduction to data streaming concepts
• Batch vs real-time processing fundamentals
• Event-driven architecture basics
• Common use cases in industry
• Overview of streaming ecosystem
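The batch vs real-time distinction covered on Day 1 can be illustrated with a minimal, framework-neutral sketch (plain Python, with made-up sample data): a batch job computes an aggregate over a complete dataset, while a streaming job maintains running state and emits an up-to-date result as each record arrives.

```python
# Batch processing computes over a complete, bounded dataset; stream
# processing updates its result incrementally as each record arrives.
data = [4.0, 8.0, 15.0, 16.0, 23.0, 42.0]

# Batch: one pass over the full dataset after it has been collected.
batch_mean = sum(data) / len(data)

# Streaming: keep running state (count, total) and emit a fresh result
# after every record, without ever storing the whole stream.
count, total = 0, 0.0
running_means = []
for value in data:  # imagine each value arriving over time
    count += 1
    total += value
    running_means.append(total / count)

# Both approaches converge on the same answer; the streaming version
# simply produces intermediate results along the way.
assert running_means[-1] == batch_mean
```

The trade-off this sketch hints at is central to the course: streaming gives low-latency, continuously updated answers at the cost of managing state.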
Day 2
• Streaming architecture design patterns
• Fundamentals of distributed messaging systems
• Producers and consumers
• Topics, partitions, and data flow
• Data ingestion strategies
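The producer/consumer and partitioning ideas listed for Day 2 can be sketched without any broker at all. The snippet below is a hypothetical in-memory "topic" (the names `produce`, `partition_for`, and the MD5-based hashing are illustrative choices, not any specific system's API); it shows why keyed messages land deterministically in one partition, which preserves per-key ordering.

```python
import hashlib
from collections import defaultdict

NUM_PARTITIONS = 3  # a topic split into a fixed number of partitions

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Deterministically map a message key to a partition by hashing."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

def produce(topic: dict, key: str, value: str) -> None:
    """Append a message to the partition chosen by its key, so all
    messages sharing a key keep their relative order."""
    topic[partition_for(key)].append((key, value))

topic = defaultdict(list)
for key, value in [("user-1", "login"), ("user-2", "login"), ("user-1", "click")]:
    produce(topic, key, value)

# The same key always hashes to the same partition, so a consumer of
# that partition sees user-1's events in the order they were produced.
user1_events = [v for k, v in topic[partition_for("user-1")] if k == "user-1"]
```

Real messaging systems add replication, consumer groups, and offset tracking on top, but the key-to-partition mapping shown here is the core data-flow idea.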
Day 3
• Stream processing concepts and frameworks
• Event time vs processing time
• Windowing techniques and use cases
• Stateful stream processing
• Fault tolerance and checkpointing basics
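Day 3's windowing and event-time topics can be previewed with a small sketch. This is an illustrative example, not any particular framework's API: events carry their own event time (when they occurred at the source, as opposed to the processing time when they reach the system), and a tumbling window groups them into fixed, non-overlapping buckets.

```python
from collections import defaultdict

# Illustrative events: (event_time_seconds, key).
events = [(1, "a"), (3, "b"), (7, "a"), (8, "a"), (12, "b")]

WINDOW_SIZE = 5  # seconds per tumbling window

def tumbling_window_counts(events, window_size):
    """Count events per key in fixed, non-overlapping (tumbling)
    windows, assigned by each event's own event time."""
    counts = defaultdict(int)
    for event_time, key in events:
        window_start = (event_time // window_size) * window_size
        counts[(window_start, key)] += 1
    return dict(counts)

result = tumbling_window_counts(events, WINDOW_SIZE)
# Events at t=7 and t=8 both fall into the window starting at t=5.
```

The `counts` dictionary here is exactly the kind of state that real stream processors must checkpoint so results survive failures.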
Day 4
• Data transformation in streaming pipelines
• ETL and ELT in real-time systems
• Schema management and evolution
• Stream joins and enrichment
• Introduction to cloud-based streaming services
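Stream enrichment, one of the Day 4 topics, can be sketched as joining each in-flight event against a small reference table. The data and field names below (`customer_id`, `customers`) are hypothetical; the point is the pattern: look up reference data per event and tolerate missing keys rather than failing the pipeline.

```python
# A small, slowly changing reference table of customer details.
customers = {
    "c1": {"name": "Alice", "country": "NZ"},
    "c2": {"name": "Bob", "country": "AU"},
}

def enrich(event: dict, lookup: dict) -> dict:
    """Attach reference data to an event; unknown keys fall back to
    None so an unmatched event does not break the pipeline."""
    details = lookup.get(event["customer_id"])
    return {**event, "customer": details}

stream = [
    {"customer_id": "c1", "amount": 42.0},
    {"customer_id": "c3", "amount": 7.5},  # no matching reference row
]

enriched = [enrich(e, customers) for e in stream]
```

In production the lookup side is usually a cached table, a changelog stream, or a keyed state store, but the join semantics mirror this sketch.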
Day 5
• Monitoring and observability in streaming systems
• Security and access control basics
• Performance tuning and optimisation
• End-to-end pipeline design review
• Real-world use cases such as fraud detection and IoT processing
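A first health signal for the monitoring topics on Day 5 is end-to-end latency: processing time minus event time, per event. The sketch below uses invented sample data and a hypothetical `latency_report` helper to show the idea of computing latency statistics and flagging threshold breaches.

```python
# Each event records when it occurred and when the pipeline handled it.
events = [
    {"event_time": 100.0, "processed_time": 100.3},
    {"event_time": 101.0, "processed_time": 101.2},
    {"event_time": 102.0, "processed_time": 104.8},  # a lagging event
]

LATENCY_THRESHOLD = 1.0  # seconds considered acceptable

def latency_report(events, threshold):
    """Summarise per-event latency and count threshold breaches."""
    latencies = [e["processed_time"] - e["event_time"] for e in events]
    return {
        "max_latency": max(latencies),
        "avg_latency": sum(latencies) / len(latencies),
        "breaches": sum(1 for lat in latencies if lat > threshold),
    }

report = latency_report(events, LATENCY_THRESHOLD)
```

Real systems export metrics like these to dashboards and alerting tools; the calculation itself stays this simple.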
Open Training Courses require 5+ participants.
Testimonials (1)
Hands-on exercises. The class should have been 5 days, but the 3 days helped to clear up a lot of questions that I had from working with NiFi already.
James - BHG Financial
Course - Apache NiFi for Administrators
Related Courses
Administrator Training for Apache Hadoop
35 Hours
Audience:
This course is designed for IT professionals seeking a solution to store and process large datasets within a distributed system environment.
Goal:
To develop in-depth expertise in Hadoop cluster administration.
Big Data Analytics with Google Colab and Apache Spark
14 Hours
This instructor-led, live training in New Zealand (delivered either online or on-site) is designed for intermediate-level data scientists and engineers who wish to leverage Google Colab and Apache Spark for big data processing and analytics.
By the end of this training, participants will be able to:
- Set up a big data environment using Google Colab and Spark.
- Process and analyse large datasets efficiently with Apache Spark.
- Visualise big data within a collaborative environment.
- Integrate Apache Spark with cloud-based tools.
Big Data Analytics in Health
21 Hours
Big data analytics involves the process of examining large volumes of diverse datasets to uncover correlations, hidden patterns, and other valuable insights.
The health sector manages vast quantities of complex, heterogeneous medical and clinical data. Applying big data analytics to health data holds significant potential for generating insights that can enhance healthcare delivery. However, the sheer scale of these datasets presents considerable challenges for analysis and practical application in clinical settings.
In this instructor-led, live training (delivered remotely), participants will learn how to perform big data analytics in the health domain by working through a series of hands-on, live lab exercises.
By the end of this training, participants will be able to:
- Install and configure big data analytics tools such as Hadoop MapReduce and Spark
- Understand the characteristics of medical data
- Apply big data techniques to manage medical data effectively
- Examine big data systems and algorithms within the context of health applications
Audience
- Developers
- Data Scientists
Course Format
- A blend of lecture, discussion, exercises, and extensive hands-on practice.
Note
- To request a customised training programme for this course, please contact us to make arrangements.
Hadoop For Administrators
21 Hours
Apache Hadoop is the most popular framework for processing Big Data across clusters of servers. In this three-day course (with an optional fourth day), attendees will explore the business benefits and use cases for Hadoop and its ecosystem, learn how to plan cluster deployment and growth, and gain hands-on experience installing, maintaining, monitoring, troubleshooting and optimising Hadoop. Participants will also perform bulk data loads into clusters, become familiar with various Hadoop distributions, and practice installing and managing Hadoop ecosystem tools. The course concludes with a discussion on securing clusters using Kerberos.
“…The materials were very well prepared and covered thoroughly. The Lab was very helpful and well organised”
— Andrew Nguyen, Principal Integration DW Engineer, Microsoft Online Advertising
Audience
Hadoop administrators
Format
Lectures and hands-on labs, with an approximate balance of 60% lectures and 40% labs.
Hadoop for Developers (4 days)
28 Hours
Apache Hadoop is the most widely used framework for processing Big Data across clusters of servers. This course introduces developers to the various components of the Hadoop ecosystem, including HDFS, MapReduce, Pig, Hive, and HBase.
Advanced Hadoop for Developers
21 Hours
Apache Hadoop is one of the most popular frameworks for processing Big Data across clusters of servers. This course explores data management in HDFS, along with advanced techniques in Pig, Hive, and HBase. These advanced programming skills will be particularly valuable for experienced Hadoop developers.
Audience: developers
Duration: three days
Format: lectures (50%) and hands-on labs (50%).
Hadoop Administration on MapR
28 Hours
Audience:
This course is designed to demystify big data and Hadoop technology, demonstrating that it is accessible and straightforward to understand.
Hadoop and Spark for Administrators
35 Hours
This instructor-led, live training in New Zealand (online or on-site) is tailored for system administrators who want to learn how to set up, deploy, and manage Hadoop clusters within their organisation.
By the end of this training, participants will be able to:
- Install and configure Apache Hadoop.
- Understand the four core components of the Hadoop ecosystem: HDFS, MapReduce, YARN, and Hadoop Common.
- Leverage the Hadoop Distributed File System (HDFS) to scale a cluster to hundreds or even thousands of nodes.
- Configure HDFS as the storage engine for on-premises Spark deployments.
- Set up Spark to interface with alternative storage solutions such as Amazon S3 and NoSQL database systems including Redis, Elasticsearch, Couchbase, Aerospike, and others.
- Perform key administrative tasks such as provisioning, managing, monitoring, and securing an Apache Hadoop cluster.
HBase for Developers
21 Hours
This course introduces HBase – a NoSQL store built on top of Hadoop. It is designed for developers who will use HBase to build applications, as well as administrators responsible for managing HBase clusters.
We will guide developers through HBase architecture, data modelling, and application development on HBase. The course also covers using MapReduce with HBase and explores key administrative topics, including performance optimisation. With a strong emphasis on practical learning, the course includes numerous hands-on lab exercises.
Duration: 3 days
Audience: Developers & Administrators
Apache NiFi for Administrators
21 Hours
Apache NiFi is an open-source, flow-based data integration and event-processing platform. It enables automated, real-time data routing, transformation, and system mediation between disparate systems, with a web-based UI and fine-grained control.
This instructor-led, live training (onsite or remote) is aimed at intermediate-level administrators and engineers who wish to deploy, manage, secure, and optimise NiFi dataflows in production environments.
By the end of this training, participants will be able to:
- Install, configure, and maintain Apache NiFi clusters.
- Design and manage dataflows from varied sources and sinks.
- Implement flow automation, routing, and transformation logic.
- Optimise performance, monitor operations, and troubleshoot issues.
Format of the Course
- Interactive lecture with real-world architecture discussion.
- Hands-on labs: building, deploying, and managing flows.
- Scenario-based exercises in a live-lab environment.
Course Customisation Options
- To request a customised training programme for this course, please contact us to make arrangements.
Apache NiFi for Developers
7 Hours
In this instructor-led, live training in New Zealand, participants will learn the fundamentals of flow-based programming as they develop a range of demo extensions, components, and processors using Apache NiFi.
By the end of this training, participants will be able to:
- Understand NiFi's architecture and dataflow concepts.
- Develop extensions using NiFi and third-party APIs.
- Develop their own custom Apache NiFi processors.
- Ingest and process real-time data from diverse and uncommon file formats and data sources.
PySpark and Machine Learning
21 Hours
This training offers a hands-on introduction to building scalable data processing and Machine Learning workflows using PySpark. Participants will learn how Apache Spark functions within modern Big Data ecosystems and how to efficiently process large datasets using distributed computing principles.
Python and Spark for Big Data (PySpark)
21 Hours
In this instructor-led, live training in New Zealand, participants will learn how to combine Python and Spark to analyse big data through hands-on exercises.
By the end of this training, participants will be able to:
- Learn how to use Spark with Python to analyse Big Data.
- Work on exercises that mirror real-world scenarios.
- Apply various tools and techniques for big data analysis using PySpark.
Python, Spark, and Hadoop for Big Data
21 Hours
This instructor-led, live training in New Zealand (available online or on-site) is tailored for developers seeking to use and integrate Spark, Hadoop, and Python to process, analyse, and transform large and complex data sets.
By the end of this training, participants will be able to:
- Set up the necessary environment to begin processing big data using Spark, Hadoop, and Python.
- Understand the key features, core components, and architecture of both Spark and Hadoop.
- Learn how to integrate Spark, Hadoop, and Python for efficient big data processing.
- Explore tools within the Spark ecosystem, including Spark MLlib, Spark Streaming, Kafka, Sqoop, and Flume.
- Build collaborative filtering recommendation systems similar to those used by Netflix, YouTube, Amazon, Spotify, and Google.
- Use Apache Mahout to scale machine learning algorithms.
Stratio: Rocket and Intelligence Modules with PySpark
14 Hours
Stratio is a data-centric platform that brings together big data, AI, and governance into a single solution. Its Rocket and Intelligence modules enable rapid data exploration, transformation, and advanced analytics within enterprise environments.
This instructor-led, live training (available online or on-site) is designed for intermediate-level data professionals who want to effectively use the Rocket and Intelligence modules in Stratio with PySpark, with a focus on looping structures, user-defined functions, and advanced data logic.
By the end of this training, participants will be able to:
- Navigate and work within the Stratio platform using the Rocket and Intelligence modules.
- Apply PySpark in the context of data ingestion, transformation, and analysis.
- Use loops and conditional logic to control data workflows and feature engineering tasks.
- Create and manage user-defined functions (UDFs) for reusable data operations in PySpark.
Course Format
- Interactive lectures and discussions.
- Plenty of exercises and hands-on practice.
- Live-lab implementation in a real-world environment.
Course Customisation Options
- To request a customised training session for this course, please contact us to make arrangements.