Course Outline

Introduction to Parameter-Efficient Fine-Tuning (PEFT)

  • Motivation and limitations of full fine-tuning
  • Overview of PEFT: objectives and benefits
  • Industry applications and use cases

LoRA (Low-Rank Adaptation)

  • Concept and intuition underpinning LoRA
  • Implementing LoRA with the Hugging Face PEFT library and PyTorch
  • Hands-on: Fine-tuning a model with LoRA (a minimal setup sketch follows this list)

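A minimal sketch of how the LoRA setup in the hands-on session might look with the Hugging Face peft library. The base model (gpt2), the target module name, and the hyperparameters r and lora_alpha are illustrative assumptions, not fixed course values:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Assumed small base model for demonstration purposes
model = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA replaces the weight update with a low-rank product: W + (alpha/r) * B @ A,
# where only A and B are trained while the pretrained W stays frozen
config = LoraConfig(
    r=8,                        # rank of the update matrices
    lora_alpha=16,              # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # reports the (small) trainable fraction
```
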
Adapter Tuning

  • How adapter modules operate
  • Integration with transformer-based models
  • Hands-on: Applying Adapter Tuning to a transformer model (a minimal adapter module is sketched after this list)

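Conceptually, an adapter is a small bottleneck network inserted after a transformer sub-layer, with a residual connection so the block starts close to the identity and only the adapter's parameters are trained. A minimal PyTorch sketch; the hidden and bottleneck sizes are arbitrary choices for illustration:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()
        # Zero-init the up-projection so the adapter starts as the identity map
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

# Usage: applied to a transformer sub-layer's hidden states
adapter = Adapter(hidden_size=768)
hidden = torch.randn(2, 16, 768)   # (batch, seq_len, hidden)
out = adapter(hidden)              # same shape; gradients flow only to the adapter
```
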
Prefix Tuning

  • Leveraging soft prompts for fine-tuning
  • Strengths and limitations compared to LoRA and adapters
  • Hands-on: Prefix Tuning on an LLM task (a soft-prompt sketch follows this list)

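The soft-prompt idea can be sketched by hand: a small matrix of trainable "virtual token" embeddings is prepended to the input sequence while the base model stays frozen. Full prefix tuning goes further and injects such vectors into every attention layer's key/value states (e.g. via peft's PrefixTuningConfig), but the embedding-level variant below shows the core mechanism; all sizes are illustrative:

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Trainable prefix of virtual-token embeddings prepended to the input."""
    def __init__(self, n_virtual: int, hidden_size: int):
        super().__init__()
        self.prefix = nn.Parameter(torch.randn(n_virtual, hidden_size) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # (batch, seq_len, hidden) -> (batch, n_virtual + seq_len, hidden)
        batch = token_embeds.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prefix, token_embeds], dim=1)

prompt = SoftPrompt(n_virtual=20, hidden_size=768)
embeds = torch.randn(4, 32, 768)  # embeddings produced by a frozen model
extended = prompt(embeds)         # only prompt.prefix receives gradients
```
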
Evaluating and Comparing PEFT Methods

  • Metrics for assessing performance and efficiency (see the measurement helpers sketched after this list)
  • Trade-offs in training speed, memory usage, and accuracy
  • Benchmarking experiments and interpretation of results

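Two efficiency numbers usually reported first are the trainable-parameter ratio and peak GPU memory. A small helper sketch; the function names are our own, and the memory helper assumes a CUDA device:

```python
import torch

def trainable_ratio(model: torch.nn.Module) -> float:
    """Fraction of parameters that receive gradients."""
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable / total

def peak_gpu_memory_mib() -> float:
    """Peak CUDA memory allocated since the last reset, in MiB."""
    return torch.cuda.max_memory_allocated() / 2**20

# Typical use around a training step:
#   torch.cuda.reset_peak_memory_stats()
#   ... forward / backward / optimizer step ...
#   print(f"trainable: {trainable_ratio(model):.2%}, "
#         f"peak memory: {peak_gpu_memory_mib():.0f} MiB")
```
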
Deploying Fine-Tuned Models

  • Saving and loading fine-tuned models (see the adapter save/load sketch after this list)
  • Deployment considerations for PEFT-based models
  • Integration into applications and data pipelines

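With the peft library, calling save_pretrained on a PEFT-wrapped model writes only the adapter weights and config (typically a few megabytes), and loading re-attaches them to a freshly loaded base model. A sketch using gpt2 as a stand-in base and a placeholder directory name:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PeftModel, get_peft_model

# A PEFT model as produced by the earlier steps (gpt2 is a stand-in base)
base = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(r=8, target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(base, config)

# Saving stores only the adapter, not the multi-GB base model
model.save_pretrained("my-lora-adapter")  # placeholder output directory

# Loading: reload the frozen base, then attach the saved adapter on top
fresh_base = AutoModelForCausalLM.from_pretrained("gpt2")
restored = PeftModel.from_pretrained(fresh_base, "my-lora-adapter")

# Optionally fold the low-rank update into the base weights for plain serving
merged = restored.merge_and_unload()
```
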
Best Practices and Extensions

  • Combining PEFT with quantisation and distillation (a QLoRA-style sketch follows this list)
  • Application in low-resource and multilingual contexts
  • Future directions and active research areas

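One widely used combination is the QLoRA recipe: the frozen base model is loaded in 4-bit precision and LoRA adapters are trained on top of it. A sketch with transformers' BitsAndBytesConfig; it assumes the bitsandbytes package and a CUDA GPU, and again uses gpt2 as a stand-in model:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the frozen base model quantised to 4-bit NF4
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained("gpt2", quantization_config=bnb)

# Prepare the quantised model for stable k-bit training, then add LoRA on top
model = prepare_model_for_kbit_training(model)
config = LoraConfig(r=8, target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(model, config)
```
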
Summary and Next Steps

Requirements

  • A solid understanding of machine learning fundamentals
  • Practical experience working with large language models (LLMs)
  • Proficiency in Python and PyTorch

Audience

  • Data scientists
  • AI engineers

Duration

14 Hours
