Course Outline

DAY 1 – ARTIFICIAL NEURAL NETWORKS

Introduction and ANN Structure.

  • Biological neurons and artificial neurons.
  • Model of an ANN.
  • Activation functions used in ANNs.
  • Typical classes of network architectures.

Mathematical Foundations and Learning Mechanisms.

  • Revisiting vector and matrix algebra.
  • State-space concepts.
  • Concepts of optimisation.
  • Error-correction learning.
  • Memory-based learning.
  • Hebbian learning.
  • Competitive learning.
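
The learning rules above can be made concrete with a minimal sketch of a single Hebbian update for a linear neuron (the function name, learning rate, and data here are illustrative, not from the course materials):

```python
import numpy as np

def hebbian_update(w, x, eta=0.1):
    """One Hebbian step: strengthen weights in proportion to the
    correlation between the input x and the neuron's output y."""
    y = w @ x                # linear neuron output
    return w + eta * y * x   # dw = eta * y * x

# Repeated presentation of the same input grows the weights along x.
w = np.array([0.1, 0.0, 0.0])
x = np.array([1.0, 0.5, -1.0])
for _ in range(5):
    w = hebbian_update(w, x)
```

Note the rule is unsupervised: there is no target signal, only input-output correlation, which is why practical Hebbian variants add normalisation or decay to keep the weights bounded.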

Single-Layer Perceptrons.

  • Structure and learning of perceptrons.
  • Pattern classifier – introduction and Bayes' classifiers.
  • Perceptron as a pattern classifier.
  • Perceptron convergence.
  • Limitations of perceptrons.
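
As a sketch of the perceptron learning rule covered in this section (toy data and parameter choices are illustrative): on each misclassified sample the weight vector is nudged toward or away from that sample, and on linearly separable data this converges in finitely many updates.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, eta=1.0):
    """Rosenblatt perceptron rule with labels in {+1, -1}."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:             # misclassified or on boundary
                w += eta * yi * xi             # move boundary past the sample
    return w

# Linearly separable toy data: class is the sign of x0 - x1.
X = np.array([[2.0, 0.0], [1.5, 0.5], [0.0, 2.0], [0.5, 1.5]])
y = np.array([1, 1, -1, -1])
w = train_perceptron(X, y)
```

The classic limitation discussed in this section follows directly: if no separating hyperplane exists (e.g. XOR), the update loop never settles.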

Feedforward ANNs.

  • Structures of multi-layer feedforward networks.
  • Backpropagation algorithm.
  • Backpropagation – training and convergence.
  • Functional approximation with backpropagation.
  • Practical and design issues in backpropagation learning.
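
A minimal backpropagation sketch for a one-hidden-layer network on the XOR problem (the architecture, seed, learning rate, and iteration count are illustrative choices): the output-layer error is converted to a delta, backpropagated through the hidden layer, and used for plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

# XOR: not linearly separable, so a hidden layer is required.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
t = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # 2-4-1 network
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def loss():
    return np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - t) ** 2)

loss0 = loss()
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                # forward pass
    yhat = sigmoid(h @ W2 + b2)
    d2 = (yhat - t) * yhat * (1 - yhat)     # output-layer delta
    d1 = (d2 @ W2.T) * h * (1 - h)          # delta backpropagated to hidden layer
    W2 -= 0.5 * h.T @ d2;  b2 -= 0.5 * d2.sum(0)
    W1 -= 0.5 * X.T @ d1;  b1 -= 0.5 * d1.sum(0)
```

The practical issues listed above (initialisation, learning-rate choice, local minima) all show up even in this four-sample example: a different seed or rate can noticeably change convergence.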

Radial Basis Function Networks.

  • Pattern separability and interpolation.
  • Regularisation Theory.
  • Regularisation and RBF networks.
  • RBF network design and training.
  • Approximation properties of RBFs.
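
The design split covered in this section, fixed nonlinear basis plus linear output layer, can be sketched as follows (the centres, width, and target function are illustrative): Gaussian activations form the design matrix and the output weights come from linear least squares.

```python
import numpy as np

# Toy 1-D regression target.
X = np.linspace(-3, 3, 40)
y = np.sin(X)

centres = np.linspace(-3, 3, 10)   # fixed RBF centres
width = 0.8                        # shared Gaussian width

def design(x):
    """Gaussian basis activations phi_j(x) = exp(-(x - c_j)^2 / (2*width^2))."""
    return np.exp(-(x[:, None] - centres[None, :]) ** 2 / (2 * width ** 2))

Phi = design(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear output layer in closed form
yhat = Phi @ w
```

Because only the output layer is trained, fitting is a convex problem; the design questions are where to place the centres and how wide to make the basis functions.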

Competitive Learning and Self-Organising ANNs.

  • General clustering procedures.
  • Learning Vector Quantisation (LVQ).
  • Competitive learning algorithms and architectures.
  • Self-organising feature maps.
  • Properties of feature maps.
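
A winner-take-all sketch of the competitive learning idea underlying this section (cluster locations, prototype initialisation, and learning rate are illustrative): for each sample, the nearest prototype wins and moves toward it, so the prototypes drift to the cluster centres.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two tight clusters, around (0, 0) and (5, 5).
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(5.0, 0.1, (50, 2))])
protos = np.array([[1.0, 1.0], [4.0, 4.0]])      # initial prototypes

for _ in range(20):
    for x in rng.permutation(data):
        win = np.argmin(np.linalg.norm(protos - x, axis=1))
        protos[win] += 0.1 * (x - protos[win])   # move the winner toward x
```

A self-organising feature map extends this by also updating the winner's neighbours on a lattice, which is what gives the map its topology-preserving property.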

Fuzzy Neural Networks.

  • Neuro-fuzzy systems.
  • Background of fuzzy sets and logic.
  • Design of fuzzy systems.
  • Design of fuzzy ANNs.

Applications

  • A few examples of neural network applications, along with their advantages and challenges, will be discussed.

DAY 2 – MACHINE LEARNING

  • The PAC Learning Framework
    • Guarantees for finite hypothesis set – consistent case
    • Guarantees for finite hypothesis set – inconsistent case
    • Generalities
      • Deterministic vs. Stochastic scenarios
      • Bayes error and noise
      • Estimation and approximation errors
      • Model selection
  • Rademacher Complexity and VC Dimension
  • Bias-variance trade-off
  • Regularisation
  • Overfitting
  • Validation
  • Support Vector Machines
  • Kriging (Gaussian Process regression)
  • PCA and Kernel PCA
  • Self-Organising Maps (SOM)
  • Kernel-Induced Vector Space
    • Mercer Kernels and Kernel-Induced Similarity Metrics
  • Reinforcement Learning
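
Of the Day 2 topics above, PCA is compact enough to sketch directly (the data shape, seed, and variance scaling are illustrative): centre the data, eigendecompose the sample covariance, and project onto the directions of largest variance.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data with most variance along the first axis.
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.3]])

Xc = X - X.mean(axis=0)                 # centre the data
cov = Xc.T @ Xc / (len(Xc) - 1)         # sample covariance
vals, vecs = np.linalg.eigh(cov)        # eigh returns eigenvalues ascending
order = np.argsort(vals)[::-1]          # sort: largest variance first
components = vecs[:, order]
scores = Xc @ components                # coordinates in the principal basis
```

Kernel PCA, also on the outline, replaces the covariance eigenproblem with an eigenproblem on a Mercer kernel matrix, which connects it to the kernel-induced vector space topic above.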

DAY 3 – DEEP LEARNING

This will be taught in relation to the topics covered on Day 1 and Day 2.

  • Logistic and Softmax Regression
  • Sparse Autoencoders
  • Vectorisation, PCA and Whitening
  • Self-Taught Learning
  • Deep Networks
  • Linear Decoders
  • Convolution and Pooling
  • Sparse Coding
  • Independent Component Analysis
  • Canonical Correlation Analysis
  • Demos and Applications
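
Softmax regression, the first Day 3 topic, can be sketched as gradient descent on the cross-entropy loss (the class centres, learning rate, and step count are illustrative):

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with max-subtraction for numerical stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy 3-class problem with well-separated clusters.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(c, 0.3, (30, 2)) for c in (0.0, 2.0, 4.0)])
y = np.repeat(np.arange(3), 30)
Y = np.eye(3)[y]                      # one-hot targets

W = np.zeros((2, 3)); b = np.zeros(3)
for _ in range(300):
    P = softmax(X @ W + b)            # predicted class probabilities
    G = (P - Y) / len(X)              # gradient of mean cross-entropy w.r.t. logits
    W -= 1.0 * X.T @ G
    b -= 1.0 * G.sum(axis=0)

acc = np.mean(np.argmax(softmax(X @ W + b), axis=1) == y)
```

The same softmax layer reappears throughout the deep-learning topics above as the output stage of deeper networks; only the features feeding it change.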

Requirements

A solid understanding of mathematics.

A good grasp of basic statistics.

Basic programming skills are not mandatory but are strongly recommended.

Duration: 21 hours
