Course Outline
DAY 1 – ARTIFICIAL NEURAL NETWORKS
Introduction and ANN Structure.
- Biological neurons and artificial neurons.
- Model of an ANN.
- Activation functions used in ANNs.
- Typical classes of network architectures.
Mathematical Foundations and Learning Mechanisms.
- Revisiting vector and matrix algebra.
- State-space concepts.
- Concepts of optimisation.
- Error-correction learning.
- Memory-based learning.
- Hebbian learning.
- Competitive learning.
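To make the Hebbian learning topic concrete, here is a minimal illustrative sketch (not course material) of Oja's rule, a stabilised variant of plain Hebbian learning; all variable names and the toy covariance are our own assumptions. Under Oja's rule the weight vector converges to the leading principal component of the input distribution.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy zero-mean Gaussian data with a known covariance matrix
C = np.array([[3.0, 1.0], [1.0, 1.0]])
L, V = np.linalg.eigh(C)
pc1 = V[:, np.argmax(L)]                 # true leading eigenvector

X = rng.multivariate_normal([0, 0], C, 5000)
w = rng.normal(0, 0.1, 2)                # random initial weights
lr = 0.01

for x in X:
    y = w @ x                            # neuron output
    w += lr * y * (x - y * w)            # Oja: Hebbian term minus a decay term

# Alignment between the learned weights and the first principal component
cos = abs(w @ pc1) / np.linalg.norm(w)
```

The decay term `y * w` is what keeps the weights bounded; the plain Hebbian update `lr * y * x` alone would grow without limit.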
Single-Layer Perceptrons.
- Structure and learning of perceptrons.
- Pattern classifier – introduction and Bayes' classifiers.
- Perceptron as a pattern classifier.
- Perceptron convergence.
- Limitations of perceptrons.
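As a small illustration of the perceptron topics above, the following sketch (our own toy example, not course material) trains a single-layer perceptron with the error-correction rule on the linearly separable logical AND problem, where convergence is guaranteed.

```python
import numpy as np

# Training data for logical AND (linearly separable)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate

for epoch in range(20):                 # enough epochs for convergence here
    for xi, ti in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        err = ti - pred                 # error-correction learning rule
        w += lr * err * xi
        b += lr * err

preds = [1 if xi @ w + b > 0 else 0 for xi in X]
```

The same loop would never converge on XOR, which is the classic demonstration of the limitations of single-layer perceptrons.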
Feedforward ANNs.
- Structures of multi-layer feedforward networks.
- Backpropagation algorithm.
- Backpropagation – training and convergence.
- Functional approximation with backpropagation.
- Practical and design issues in backpropagation learning.
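To preview the backpropagation material, here is a minimal hand-rolled sketch (our own toy example, with assumed layer sizes and learning rate) of gradient descent through one hidden tanh layer, fitting a sine curve as a small functional-approximation demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression target: y = sin(x) on [-pi, pi]
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
Y = np.sin(X)

# One hidden layer of 8 tanh units, linear output
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

_, out0 = forward(X)
loss0 = np.mean((out0 - Y) ** 2)        # loss before training

for step in range(2000):
    H, out = forward(X)
    # Backward pass: apply the chain rule layer by layer
    d_out = 2 * (out - Y) / len(X)      # dL/d(output)
    dW2 = H.T @ d_out; db2 = d_out.sum(0)
    d_H = d_out @ W2.T * (1 - H ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_H; db1 = d_H.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, out = forward(X)
loss = np.mean((out - Y) ** 2)          # loss after training
```

Learning rate, initialisation scale, and hidden-layer width are exactly the kind of practical design choices the last bullet point refers to.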
Radial Basis Function Networks.
- Pattern separability and interpolation.
- Regularisation Theory.
- Regularisation and RBF networks.
- RBF network design and training.
- Approximation properties of RBFs.
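The interpolation idea behind RBF networks can be sketched in a few lines; the centres, targets, and kernel width below are our own illustrative choices. With one Gaussian basis function per training point, the output weights follow from solving a linear interpolation system.

```python
import numpy as np

# One Gaussian basis function centred on each training point
centres = np.array([0.0, 1.0, 2.0, 3.0])
targets = np.array([0.0, 1.0, 0.0, -1.0])
sigma = 1.0

def phi(x, c):
    return np.exp(-(x - c) ** 2 / (2 * sigma ** 2))

# Interpolation matrix Phi[i, j] = phi(x_i, c_j); symmetric positive definite
Phi = phi(centres[:, None], centres[None, :])
w = np.linalg.solve(Phi, targets)       # output-layer weights

def rbf(x):
    return phi(x, centres) @ w

# Exact interpolation: the network reproduces every training target
recon = np.array([rbf(x) for x in centres])
```

Regularisation theory enters when exact interpolation of noisy data is undesirable, replacing `Phi` with `Phi + lam * I` for some penalty `lam`.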
Competitive Learning and Self-Organising ANNs.
- General clustering procedures.
- Learning Vector Quantisation (LVQ).
- Competitive learning algorithms and architectures.
- Self-organising feature maps.
- Properties of feature maps.
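The core of competitive learning can be previewed with a winner-take-all sketch (toy data and learning rate are our own assumptions): each input moves only its nearest prototype, so the prototypes drift towards the cluster centres.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated 2-D clusters around (0, 0) and (5, 5)
data = np.vstack([
    rng.normal([0, 0], 0.1, (100, 2)),
    rng.normal([5, 5], 0.1, (100, 2)),
])
rng.shuffle(data)

# Two prototype vectors updated by winner-take-all competitive learning
protos = np.array([[1.0, 1.0], [4.0, 4.0]])
lr = 0.1

for x in data:
    winner = np.argmin(np.linalg.norm(protos - x, axis=1))
    protos[winner] += lr * (x - protos[winner])   # move the winner toward x

order = np.argsort(protos[:, 0])
p0, p1 = protos[order]                  # prototypes sorted for inspection
```

A self-organising map extends this by also updating the winner's neighbours on a lattice, which is what gives the feature map its topological ordering.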
Fuzzy Neural Networks.
- Neuro-fuzzy systems.
- Background of fuzzy sets and logic.
- Design of fuzzy systems.
- Design of fuzzy ANNs.
Applications
- A few examples of Neural Network applications, along with their advantages and challenges, will be discussed.
DAY 2 – MACHINE LEARNING
- The PAC Learning Framework
- Guarantees for finite hypothesis sets – consistent case
- Guarantees for finite hypothesis sets – inconsistent case
- Generalities
- Deterministic vs. Stochastic scenarios
- Bayes error and noise
- Estimation and approximation errors
- Model selection
- Rademacher Complexity and VC-Dimension
- Bias-Variance trade-off
- Regularisation
- Overfitting
- Validation
- Support Vector Machines
- Kriging (Gaussian Process regression)
- PCA and Kernel PCA
- Self-Organising Maps (SOM)
- Kernel-Induced Vector Space
- Mercer Kernels and Kernel-Induced Similarity Metrics
- Reinforcement Learning
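Several of the Day 2 topics (Mercer kernels, regularisation, Kriging/GP regression) meet in kernel ridge regression, which the sketch below implements with a Gaussian kernel on toy data; the length scale and penalty are our own illustrative choices, and the result coincides with the posterior mean of GP regression under matching assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy samples of a smooth function
X = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * X) + rng.normal(0, 0.05, 30)

def k(a, b, ell=0.1):
    # Gaussian (RBF) Mercer kernel with length scale ell
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

lam = 1e-3                               # ridge regularisation penalty
K = k(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(Xs):
    return k(Xs, X) @ alpha

pred = predict(X)
mse = np.mean((pred - y) ** 2)           # training-set fit
```

Shrinking `lam` moves the fit towards exact interpolation; growing it trades training error for smoothness, which is the bias-variance trade-off in miniature.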
DAY 3 – DEEP LEARNING
This will be taught in relation to the topics covered on Day 1 and Day 2.
- Logistic and Softmax Regression
- Sparse Autoencoders
- Vectorisation, PCA and Whitening
- Self-Taught Learning
- Deep Networks
- Linear Decoders
- Convolution and Pooling
- Sparse Coding
- Independent Component Analysis
- Canonical Correlation Analysis
- Demos and Applications
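As a taste of the first Day 3 topic, here is a from-scratch softmax regression sketch (toy blobs, learning rate, and step count are our own assumptions) trained by gradient descent on the cross-entropy loss.

```python
import numpy as np

rng = np.random.default_rng(3)

# Three well-separated 2-D Gaussian blobs, one per class
means = np.array([[0, 0], [4, 0], [2, 4]])
X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in means])
y = np.repeat([0, 1, 2], 50)

W = np.zeros((2, 3)); b = np.zeros(3)
lr = 0.5

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

Y = np.eye(3)[y]                           # one-hot targets
for step in range(300):
    P = softmax(X @ W + b)
    G = (P - Y) / len(X)                   # gradient of mean cross-entropy wrt logits
    W -= lr * X.T @ G
    b -= lr * G.sum(0)

acc = np.mean(np.argmax(X @ W + b, axis=1) == y)
```

Logistic regression is the two-class special case; deeper networks in the Day 3 material keep this softmax output layer and stack feature-learning layers beneath it.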
Requirements
A solid understanding of mathematics.
A good grasp of basic statistics.
Basic programming skills are not mandatory but are strongly recommended.
Testimonials (2)
Working from first principles in a focused way, and moving to applying case studies within the same day
Maggie Webb - Department of Jobs, Regions, and Precincts
Course - Artificial Neural Networks, Machine Learning, Deep Learning
It was very interactive and more relaxed and informal than expected. We covered lots of topics in the time, and the trainer was always receptive to talking more in detail or more generally about the topics and how they were related. I feel the training has given me the tools to continue learning, as opposed to it being a one-off session where learning stops once you've finished, which is very important given the scale and complexity of the topic.