Course Outline

Part 1 – Deep Learning and DNN Concepts

Introduction to AI, Machine Learning & Deep Learning

  • History, fundamental concepts, and common applications of artificial intelligence, beyond the myths often associated with the field
  • Collective Intelligence: aggregating knowledge shared by numerous virtual agents
  • Genetic algorithms: evolving a population of virtual agents through selection
  • Conventional Machine Learning: definition
  • Types of tasks: supervised learning, unsupervised learning, reinforcement learning
  • Types of problems: classification, regression, clustering, density estimation, dimensionality reduction
  • Examples of Machine Learning algorithms: Linear regression, Naive Bayes, Random Tree
  • Machine Learning vs Deep Learning: problems where Machine Learning remains the state of the art (Random Forests & XGBoost)

Basic Concepts of a Neural Network (Application: multi-layer perceptron)

  • Recap of mathematical foundations
  • Definition of a neural network: classical architecture, activation functions, weighting of previous activations, network depth
  • Definition of neural network learning: cost functions, back-propagation, stochastic gradient descent, maximum likelihood
  • Modelling a neural network: modelling input and output data according to the problem type (regression, classification, etc.). The curse of dimensionality
  • Distinguishing between multi-feature data and signals. Choosing a cost function based on the data
  • Function approximation using a neural network: presentation and examples
  • Distribution approximation using a neural network: presentation and examples
  • Data Augmentation: how to balance a dataset
  • Generalisation of neural network results
  • Initialisation and regularisation of a neural network: L1/L2 regularisation, Batch Normalisation
  • Optimisation and convergence algorithms
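
As a minimal, framework-free sketch of the ideas above (layered weights, a sigmoid activation, a squared-error cost, back-propagation, stochastic gradient descent), the toy network below fits samples of y = 2x. All values here (network size, learning rate, data) are invented for illustration:

```python
import math, random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy multi-layer perceptron: 1 input -> 2 sigmoid hidden units -> 1 linear output.
w1 = [random.uniform(-1, 1) for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

data = [(x / 10.0, 2.0 * x / 10.0) for x in range(-5, 6)]  # samples of y = 2x

def forward(x):
    h = [sigmoid(w1[i] * x + b1[i]) for i in range(2)]
    return h, sum(w2[i] * h[i] for i in range(2)) + b2

def cost():  # mean squared error over the dataset
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

initial = cost()
lr = 0.1
for _ in range(500):
    for x, t in data:  # stochastic gradient descent, one sample at a time
        h, y = forward(x)
        dy = 2 * (y - t) / len(data)
        for i in range(2):
            dz = dy * w2[i] * h[i] * (1 - h[i])  # chain rule through the sigmoid
            w2[i] -= lr * dy * h[i]
            w1[i] -= lr * dz * x
            b1[i] -= lr * dz
        b2 -= lr * dy

print(initial, cost())  # the cost drops as back-propagation adjusts the weights
```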

Standard ML / DL Tools

For each tool, a concise overview covers its advantages, disadvantages, position in the ecosystem, and typical usage.

  • Data management tools: Apache Spark, Apache Hadoop
  • Machine Learning: NumPy, SciPy, scikit-learn
  • High-level DL frameworks: PyTorch, Keras, Lasagne
  • Low-level DL frameworks: Theano, Torch, Caffe, TensorFlow

Convolutional Neural Networks (CNN)

  • Overview of CNNs: fundamental principles and applications
  • Basic operation of a CNN: convolutional layers, kernel usage
  • Padding & stride, feature map generation, pooling layers. Extensions to 1D, 2D, and 3D
  • Overview of different CNN architectures that achieved state-of-the-art performance in classification
  • Images: LeNet, VGG Networks, Network in Network, Inception, ResNet. Presentation of innovations introduced by each architecture and their broader applications (e.g., 1x1 convolutions or residual connections)
  • Use of attention models
  • Application to a common classification case (text or image)
  • CNNs for generation: super-resolution, pixel-to-pixel segmentation. Presentation of the main strategies for increasing feature maps in image generation
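
To make the convolution mechanics concrete (kernel, zero-padding, stride, and the resulting feature-map size), here is a hedged, pure-Python sketch; the image and edge-detector kernel are invented for illustration:

```python
def conv2d(image, kernel, stride=1, padding=0):
    """Single-channel 2D convolution (cross-correlation, as in most DL frameworks)."""
    k = len(kernel)
    n = len(image) + 2 * padding
    # Zero-pad the input on all sides.
    padded = [[0.0] * n for _ in range(n)]
    for i, row in enumerate(image):
        for j, v in enumerate(row):
            padded[i + padding][j + padding] = v
    out = (n - k) // stride + 1  # feature-map size
    fmap = [[0.0] * out for _ in range(out)]
    for i in range(out):
        for j in range(out):
            fmap[i][j] = sum(
                padded[i * stride + a][j * stride + b] * kernel[a][b]
                for a in range(k) for b in range(k)
            )
    return fmap

# A 4x4 image with a vertical edge, and a 3x3 vertical-edge-detecting kernel.
image = [[0, 0, 1, 1]] * 4
kernel = [[1, 0, -1]] * 3
print(conv2d(image, kernel, stride=1, padding=1))  # 4x4 map; strong response at the edge
```

With `stride=2` the same call produces a 2x2 feature map, showing how stride downsamples much like a pooling layer.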

Recurrent Neural Networks (RNN)

  • Overview of RNNs: fundamental principles and applications
  • Basic operation of an RNN: hidden activation, back-propagation through time, unfolded version
  • Evolution towards Gated Recurrent Units (GRUs) and LSTM (Long Short-Term Memory)
  • Overview of different states and the advancements brought by these architectures
  • Convergence and vanishing gradient problems
  • Classical architectures: time series prediction, classification, etc.
  • RNN Encoder-Decoder architecture. Use of attention models
  • NLP applications: word/character encoding, translation
  • Video applications: prediction of the next generated image in a video sequence
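
The recurrence at the heart of an RNN can be sketched in a few lines. This is a scalar Elman-style unit with invented weights, shown in its unfolded form; the decaying response to a single input pulse gives a flavour of why gradients vanish over long sequences:

```python
import math

def rnn(sequence, wx, wh, b, h0=0.0):
    """Scalar recurrent unit: h_t = tanh(wx * x_t + wh * h_{t-1} + b).
    Returns the hidden state at every time step (the unfolded view)."""
    h, states = h0, []
    for x in sequence:
        h = math.tanh(wx * x + wh * h + b)
        states.append(h)
    return states

# The same weights are reused at every step; only the hidden state carries memory.
# An impulse at t=0 fades step by step -- the influence of early inputs decays.
states = rnn([1.0, 0.0, 0.0, 0.0], wx=1.0, wh=0.5, b=0.0)
print(states)
```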

Generative Models: Variational AutoEncoder (VAE) and Generative Adversarial Networks (GAN)

  • Overview of generative models and their connection to CNNs
  • Auto-encoder: dimensionality reduction and limited generation
  • Variational Auto-encoder: generative model and distribution approximation of a given dataset. Definition and use of latent space. Reparameterisation trick. Applications and observed limitations
  • Generative Adversarial Networks: fundamentals
  • Dual network architecture (generator and discriminator) with alternating learning and available cost functions
  • GAN convergence and encountered difficulties
  • Improved convergence: Wasserstein GAN, BEGAN, Earth Mover's Distance
  • Applications for image or photograph generation, text generation, and super-resolution
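
The VAE's reparameterisation trick mentioned above is small enough to sketch directly: instead of sampling z from N(mu, sigma^2) opaquely, write z = mu + sigma * eps with eps ~ N(0, 1), so the randomness is isolated in eps and gradients can flow through mu and sigma. The parameter values below are invented:

```python
import math, random

random.seed(0)

def sample_latent(mu, log_var):
    # Reparameterisation trick: z = mu + sigma * eps, eps ~ N(0, 1).
    # mu and log_var would come from the encoder; gradients pass through them.
    eps = random.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * log_var) * eps

samples = [sample_latent(2.0, 0.0) for _ in range(10000)]  # sigma = exp(0) = 1
mean = sum(samples) / len(samples)
print(mean)  # close to mu = 2.0
```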

Deep Reinforcement Learning

  • Overview of reinforcement learning: controlling an agent within a defined environment via states and possible actions
  • Using a neural network to approximate the state function
  • Deep Q-Learning: experience replay and application to video game control
  • Policy optimisation. On-policy vs off-policy learning. Actor-critic architecture. A3C
  • Applications: control of a single video game or a digital system
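
The Q-learning update that Deep Q-Learning applies (with a neural network in place of the table) can be shown in tabular form on an invented toy environment, a corridor of five states with a reward at one end:

```python
import random

random.seed(0)

# Toy corridor: states 0..4, actions 0 = left, 1 = right; reward 1 on reaching state 4.
N, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N)]
alpha, gamma, eps = 0.5, 0.9, 0.2

def step(s, a):
    s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

for _ in range(500):                                   # episodes
    s = random.randrange(GOAL)                         # random non-goal start state
    for _ in range(50):
        # Epsilon-greedy action selection.
        a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap on the best action in the next state.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

greedy = [row.index(max(row)) for row in Q[:GOAL]]
print(greedy)  # learned greedy policy for states 0..3
```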

Part 2 – Theano for Deep Learning

Theano Basics

  • Introduction
  • Installation and configuration

Theano Functions

  • Inputs, outputs, updates, givens

Training and Optimisation of a Neural Network using Theano

  • Neural network modelling
  • Logistic regression
  • Hidden layers
  • Training a network
  • Computing and classification
  • Optimisation
  • Log loss
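
What this section builds in Theano can be sketched framework-free: logistic regression trained by gradient descent on the log loss (negative log-likelihood). The 1-D dataset and learning rate below are invented for illustration:

```python
import math

# One weight, one bias; binary labels on 1-D points.
w, b, lr = 0.0, 0.0, 0.5
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]

def predict(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))   # sigmoid output = P(y=1 | x)

def log_loss():
    eps = 1e-12  # guard against log(0)
    return -sum(y * math.log(predict(x) + eps)
                + (1 - y) * math.log(1 - predict(x) + eps)
                for x, y in data) / len(data)

for _ in range(200):
    # Gradient of the log loss w.r.t. w and b is (prediction - label) scaled by the input.
    gw = sum((predict(x) - y) * x for x, y in data) / len(data)
    gb = sum((predict(x) - y) for x, y in data) / len(data)
    w, b = w - lr * gw, b - lr * gb

print(log_loss())  # small: the classes are linearly separable
```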

Testing the Model

Part 3 – DNN using TensorFlow

TensorFlow Basics

  • Creating, initialising, saving, and restoring TensorFlow variables
  • Feeding, reading, and preloading TensorFlow data
  • Using TensorFlow infrastructure to train models at scale
  • Visualising and evaluating models with TensorBoard

TensorFlow Mechanics

  • Prepare the data
  • Download
  • Inputs and placeholders
  • Build the graphs
    • Inference
    • Loss
    • Training
  • Train the model
    • The graph
    • The session
    • Training loop
  • Evaluate the model
    • Build the evaluation graph
    • Evaluation output
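
The same workflow, stripped of TensorFlow specifics, is: prepare the data, define the inference and loss parts of the model, run a training loop, then evaluate on held-out data. A framework-free sketch with an invented linear-regression task:

```python
# Prepare the data: a training set and a held-out test set from y = 3x + 1.
train = [(float(x), 3.0 * x + 1.0) for x in range(0, 8)]
test = [(float(x), 3.0 * x + 1.0) for x in range(8, 10)]

w, b = 0.0, 0.0                                      # model parameters

def inference(x):                                    # the "inference" part of the graph
    return w * x + b

def loss(batch):                                     # mean squared error
    return sum((inference(x) - y) ** 2 for x, y in batch) / len(batch)

lr = 0.01
for step in range(2000):                             # training loop
    gw = sum(2 * (inference(x) - y) * x for x, y in train) / len(train)
    gb = sum(2 * (inference(x) - y) for x, y in train) / len(train)
    w, b = w - lr * gw, b - lr * gb

print(loss(test))  # evaluation on data the model never saw
```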

The Perceptron

  • Activation functions
  • The perceptron learning algorithm
  • Binary classification with the perceptron
  • Document classification with the perceptron
  • Limitations of the perceptron
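
The perceptron learning algorithm itself fits in a few lines: when a positive example is misclassified, add the input to the weights; when a negative one is, subtract it. The linearly separable toy data below is invented:

```python
# 2-D points with labels +1 / -1, linearly separable.
data = [((1.0, 1.0), 1), ((2.0, 0.5), 1), ((-1.0, -1.0), -1), ((-0.5, -2.0), -1)]
w, b = [0.0, 0.0], 0.0

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

for _ in range(20):                 # a few passes suffice when the data is separable
    for x, y in data:
        if predict(x) != y:         # update only on mistakes
            w[0] += y * x[0]
            w[1] += y * x[1]
            b += y

print([predict(x) for x, _ in data])  # all training points classified correctly
```

Its limitation follows directly: if no separating line exists (e.g. XOR), the mistake-driven loop never settles.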

From the Perceptron to Support Vector Machines

  • Kernels and the kernel trick
  • Maximum margin classification and support vectors
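
The kernel trick can be verified numerically: a kernel computes an inner product in a higher-dimensional feature space without ever constructing that space. For the quadratic kernel k(x, z) = (x . z)^2 in 2-D, the explicit feature map is phi(x) = (x1^2, sqrt(2) x1 x2, x2^2); the two points below are invented:

```python
import math

def kernel(x, z):
    # Quadratic kernel: square of the ordinary dot product.
    return (x[0] * z[0] + x[1] * z[1]) ** 2

def phi(x):
    # The explicit 3-D feature map that the kernel implicitly works in.
    return (x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2)

x, z = (1.0, 2.0), (3.0, -1.0)
explicit = sum(a * b for a, b in zip(phi(x), phi(z)))
print(kernel(x, z), explicit)  # identical values, computed two ways
```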

Artificial Neural Networks

  • Non-linear decision boundaries
  • Feedforward and feedback artificial neural networks
  • Multilayer perceptrons
  • Minimising the cost function
  • Forward propagation
  • Back propagation
  • Improving the way neural networks learn
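
A standard way to build confidence in back-propagation is a finite-difference gradient check: the analytic gradient from the chain rule should match a numerical derivative of the cost. A sketch on a single sigmoid neuron, with invented input, target, and weight:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, target = 1.5, 1.0

def cost(w):
    y = sigmoid(w * x)
    return 0.5 * (y - target) ** 2

def grad(w):
    # Analytic gradient via the chain rule: dC/dw = (y - t) * y * (1 - y) * x.
    y = sigmoid(w * x)
    return (y - target) * y * (1 - y) * x

w, h = 0.3, 1e-6
numeric = (cost(w + h) - cost(w - h)) / (2 * h)   # central finite difference
print(grad(w), numeric)  # agree to several decimal places
```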

Convolutional Neural Networks

  • Goals
  • Model architecture
  • Principles
  • Code organisation
  • Launching and training the model
  • Evaluating a model

Brief introductions will be provided for the following modules (subject to time availability):

TensorFlow – Advanced Usage

  • Threading and queues
  • Distributed TensorFlow
  • Writing documentation and sharing your model
  • Customising data readers
  • Manipulating TensorFlow model files

TensorFlow Serving

  • Introduction
  • Basic serving tutorial
  • Advanced serving tutorial
  • Serving Inception model tutorial

Requirements

A background in physics, mathematics, and programming is required, along with involvement in image processing activities.

Delegates should have a prior understanding of machine learning concepts and practical experience with Python programming and its libraries.

Duration: 35 hours
