Course Outline
Day 1: Foundations and Core Threats
Module 1: Introduction to the OWASP GenAI Security Project (1 hour)
Learning Objectives:
- Understand the evolution from the OWASP Top 10 to GenAI-specific security challenges.
- Explore the OWASP GenAI Security Project ecosystem and resources.
- Identify key differences between traditional application security and AI security.
Topics Covered:
- Overview of the OWASP GenAI Security Project mission and scope.
- Introduction to the Threat Defense COMPASS framework.
- Understanding the AI security landscape and regulatory requirements.
- AI attack surfaces versus traditional web application vulnerabilities.
Practical Exercise: Setting up the OWASP Threat Defense COMPASS tool and performing an initial threat assessment.
Module 2: OWASP Top 10 for LLMs - Part 1 (2.5 hours)
Learning Objectives:
- Master the first five critical LLM vulnerabilities.
- Understand attack vectors and exploitation techniques.
- Apply practical mitigation strategies.
Topics Covered:
LLM01: Prompt Injection
- Direct and indirect prompt injection techniques.
- Hidden instruction attacks and cross-prompt contamination.
- Practical examples: Jailbreaking chatbots and bypassing safety measures.
- Defence strategies: Input sanitisation, prompt filtering, and privilege separation between trusted instructions and untrusted content.
LLM02: Sensitive Information Disclosure
- Training data extraction and system prompt leakage.
- Model behaviour analysis for sensitive information exposure.
- Privacy implications and regulatory compliance considerations.
- Mitigation: Output filtering, access controls, and data anonymisation.
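One common form of the output filtering and anonymisation listed above is pattern-based PII redaction applied before a response leaves the trust boundary. A minimal sketch follows; the patterns are illustrative assumptions only, and production anonymisation should use a vetted, locale-aware PII-detection library rather than a handful of regexes.

```python
import re

# Illustrative patterns only -- real deployments need locale-aware,
# library-backed PII detection rather than ad-hoc regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[\s.-]\d{3}[\s.-]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII spans with a typed placeholder before the
    LLM response is returned or logged."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text
```

The same filter can run on both model output and log sinks, so a leak in one path is not silently preserved in another.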
LLM03: Supply Chain Vulnerabilities
- Third-party model dependencies and plugin security.
- Compromised training datasets and model poisoning.
- Vendor risk assessment for AI components.
- Secure model deployment and verification practices.
Practical Exercise: Hands-on lab demonstrating prompt injection attacks against vulnerable LLM applications and implementing defensive measures.
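The defensive side of this lab can be sketched in a few lines: heuristic screening of user input plus delimiting of untrusted content in the final prompt. The marker phrases below are illustrative assumptions, and pattern matching is only a first layer that determined attackers can bypass, so it should be paired with privilege separation and output monitoring.

```python
import re

# Illustrative deny-list markers -- a first layer of defence only.
INJECTION_MARKERS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"reveal (your )?system prompt",
]

def screen_user_input(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in INJECTION_MARKERS)

def build_prompt(system_prompt: str, user_input: str) -> str:
    """Delimit untrusted input so the model can distinguish it from
    trusted instructions (defence in depth, not a guarantee)."""
    if screen_user_input(user_input):
        raise ValueError("possible prompt injection detected")
    return f"{system_prompt}\n\n<user_input>\n{user_input}\n</user_input>"
```

Rejected inputs should be logged rather than silently dropped, since injection attempts are useful threat intelligence.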
Module 3: OWASP Top 10 for LLMs - Part 2 (2 hours)
Topics Covered:
LLM04: Data and Model Poisoning
- Training data manipulation techniques.
- Model behaviour modification through poisoned inputs.
- Backdoor attacks and data integrity verification.
- Prevention: Data validation pipelines and provenance tracking.
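One simple, concrete form of the provenance tracking listed above is a hash manifest over the training corpus: record a digest per file at ingestion time, then re-verify before every training run. This sketch assumes file-based training data; real pipelines would also sign the manifest and track dataset versions.

```python
import hashlib
from pathlib import Path

def build_manifest(data_dir: str) -> dict:
    """Record a SHA-256 digest per training file so later runs can
    detect silent tampering (one simple form of provenance tracking)."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(data_dir).rglob("*")) if p.is_file()
    }

def verify_manifest(data_dir: str, manifest: dict) -> list:
    """Return the paths whose contents changed since the manifest was built."""
    current = build_manifest(data_dir)
    return [p for p, digest in manifest.items() if current.get(p) != digest]
```

A non-empty result from `verify_manifest` should block the training job and raise an integrity alert.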
LLM05: Improper Output Handling
- Insecure processing of LLM-generated content.
- Code injection through AI-generated outputs.
- Cross-site scripting via AI responses.
- Output validation and sanitisation frameworks.
Practical Exercise: Simulating data poisoning attacks and implementing robust output validation mechanisms.
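The core of robust output handling is treating model text as untrusted input: escape it before it reaches a browser and never interpolate it directly into HTML, SQL, or shell commands. A minimal sketch, with a hypothetical pre-escape telemetry check added for logging purposes:

```python
import html
import re

SCRIPT_RE = re.compile(r"<\s*script", re.IGNORECASE)

def render_llm_output(raw: str) -> str:
    """Escape model output before embedding it in HTML; LLM text is
    attacker-influenceable and must be handled like any untrusted input."""
    return html.escape(raw)

def looks_like_markup_injection(raw: str) -> bool:
    """Flag responses containing script tags so they can be logged and
    reviewed, even though escaping already neutralises them."""
    return bool(SCRIPT_RE.search(raw))
```

Escaping belongs at the point of use (the HTML sink here); other sinks such as SQL or the shell need their own context-appropriate encoding or parameterisation.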
Module 4: Advanced LLM Threats (1.5 hours)
Topics Covered:
LLM06: Excessive Agency
- Autonomous decision-making risks and boundary violations.
- Agent authority and permission management.
- Unintended system interactions and privilege escalation.
- Implementing guardrails and human oversight controls.
LLM07: System Prompt Leakage
- System instruction exposure vulnerabilities.
- Credential and logic disclosure through prompts.
- Attack techniques for extracting system prompts.
- Securing system instructions and external configuration.
Practical Exercise: Designing secure agent architectures with appropriate access controls and monitoring.
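A guardrail against excessive agency can be as simple as routing every tool call through an allowlist, with high-risk tools gated behind a human approval callback. The tool names and risk tiers below are illustrative assumptions, not an OWASP-defined API:

```python
# Illustrative allowlist: tool name -> risk tier (names are assumptions).
ALLOWED_TOOLS = {"search_docs": "low", "send_email": "high", "delete_record": "high"}

def invoke_tool(name, args, approve=lambda name, args: False):
    """Dispatch a tool call only if it is allowlisted, and require explicit
    human approval (the callback) for high-risk tools."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not allowlisted")
    if ALLOWED_TOOLS[name] == "high" and not approve(name, args):
        raise PermissionError(f"tool {name!r} needs human approval")
    return f"executed {name}"  # stand-in for the real tool dispatch
```

Defaulting `approve` to deny means that forgetting to wire up the human-in-the-loop path fails safe rather than open.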
Day 2: Advanced Threats and Implementation
Module 5: Emerging AI Threats (2 hours)
Learning Objectives:
- Understand cutting-edge AI security threats.
- Implement advanced detection and prevention techniques.
- Design resilient AI systems capable of withstanding sophisticated attacks.
Topics Covered:
LLM08: Vector and Embedding Weaknesses
- RAG system vulnerabilities and vector database security.
- Embedding poisoning and similarity manipulation attacks.
- Adversarial examples in semantic search.
- Securing vector stores and implementing anomaly detection.
LLM09: Misinformation and Model Reliability
- Hallucination detection and mitigation.
- Bias amplification and fairness considerations.
- Fact-checking and source verification mechanisms.
- Content validation and human oversight integration.
LLM10: Unbounded Consumption
- Resource exhaustion and denial-of-service attacks.
- Rate limiting and resource management strategies.
- Cost optimisation and budget controls.
- Performance monitoring and alerting systems.
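The rate-limiting strategy above can be sketched as a per-caller sliding-window token budget, which caps spend against both denial-of-service and runaway cost. Window and limit values here are illustrative:

```python
import time

class TokenBudget:
    """Sliding-window token budget per caller: a simple defence against
    unbounded consumption. Limits and window are illustrative values."""

    def __init__(self, max_tokens: int, window_s: float = 60.0):
        self.max_tokens, self.window_s = max_tokens, window_s
        self.spend = {}  # caller -> list of (timestamp, tokens)

    def allow(self, caller: str, tokens: int, now=None) -> bool:
        """Return True and record the spend if the caller stays within
        budget over the window; otherwise reject the request."""
        now = time.monotonic() if now is None else now
        recent = [(t, n) for t, n in self.spend.get(caller, [])
                  if now - t < self.window_s]
        if sum(n for _, n in recent) + tokens > self.max_tokens:
            self.spend[caller] = recent
            return False
        recent.append((now, tokens))
        self.spend[caller] = recent
        return True
```

In production this state would live in a shared store (e.g. Redis) rather than process memory, and rejections should feed the alerting pipeline.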
Practical Exercise: Building a secure RAG pipeline with vector database protection and hallucination detection.
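One building block for the lab above is filtering retrieved chunks before they reach the prompt: drop anything below a similarity threshold or outside a trusted-source allowlist. The hit schema (`vector`, `source`, `text`) and the threshold are illustrative assumptions about the RAG pipeline:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def filter_retrieved(query_vec, hits, min_sim=0.75, trusted_sources=frozenset()):
    """Keep only retrieved chunks that come from a trusted source AND are
    strongly related to the query, reducing the surface for indirect
    injection via poisoned or off-topic documents."""
    return [h for h in hits
            if h["source"] in trusted_sources
            and cosine(query_vec, h["vector"]) >= min_sim]
```

Chunks rejected on the source check are the interesting ones for anomaly detection: they indicate either index pollution or a misconfigured ingestion path.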
Module 6: Agentic AI Security (2 hours)
Learning Objectives:
- Understand the unique security challenges of autonomous AI agents.
- Apply the OWASP Agentic AI taxonomy to real-world systems.
- Implement security controls for multi-agent environments.
Topics Covered:
- Introduction to Agentic AI and autonomous systems.
- OWASP Agentic AI Threat Taxonomy: Agent Design, Memory, Planning, Tool Use, Deployment.
- Multi-agent system security and coordination risks.
- Tool misuse, memory poisoning, and goal hijacking attacks.
- Securing agent communication and decision-making processes.
Practical Exercise: Threat modelling exercise using the OWASP Agentic AI taxonomy on a multi-agent customer service system.
Module 7: OWASP Threat Defense COMPASS Implementation (2 hours)
Learning Objectives:
- Master the practical application of Threat Defense COMPASS.
- Integrate AI threat assessment into organisational security programs.
- Develop comprehensive AI risk management strategies.
Topics Covered:
- Deep dive into the Threat Defense COMPASS methodology.
- OODA Loop integration: Observe, Orient, Decide, Act.
- Mapping threats to the MITRE ATT&CK and ATLAS frameworks.
- Building AI Threat Resilience Strategy Dashboards.
- Integration with existing security tools and processes.
Practical Exercise: Complete threat assessment using COMPASS for a Microsoft Copilot deployment scenario.
Module 8: Practical Implementation and Best Practices (2.5 hours)
Learning Objectives:
- Design secure AI architectures from the ground up.
- Implement monitoring and incident response for AI systems.
- Create governance frameworks for AI security.
Topics Covered:
Secure AI Development Lifecycle:
- Security-by-design principles for AI applications.
- Code review practices for LLM integrations.
- Testing methodologies and vulnerability scanning.
- Deployment security and production hardening.
Monitoring and Detection:
- AI-specific logging and monitoring requirements.
- Anomaly detection for AI systems.
- Incident response procedures for AI security events.
- Forensics and investigation techniques.
Governance and Compliance:
- AI risk management frameworks and policies.
- Regulatory compliance considerations (GDPR, AI Act, etc.).
- Third-party risk assessment for AI vendors.
- Security awareness training for AI development teams.
Practical Exercise: Design a complete security architecture for an enterprise AI chatbot, including monitoring, governance, and incident response procedures.
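The AI-specific logging requirement above differs from ordinary request logging in one key way: prompts and responses are themselves sensitive, so the audit record should support forensics without becoming a new data store to protect. A sketch of one approach, hashing prompt content rather than storing it; the field names are illustrative, not a standard schema:

```python
import hashlib
import json
import logging

logger = logging.getLogger("ai_audit")

def log_llm_event(user_id: str, prompt: str, response: str, flags: list) -> dict:
    """Emit a structured audit record per LLM call. The prompt is stored
    as a SHA-256 digest, so identical prompts can be correlated across
    incidents without retaining the sensitive text itself."""
    record = {
        "user": user_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_chars": len(response),
        "flags": flags,  # e.g. ["possible_injection", "pii_redacted"]
    }
    logger.info(json.dumps(record))
    return record
```

During an investigation, the hash lets responders confirm whether a known malicious prompt was ever submitted, and by whom, without the log itself leaking prompts.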
Module 9: Tools and Technologies (1 hour)
Learning Objectives:
- Evaluate and implement AI security tools.
- Understand the current AI security solutions landscape.
- Build practical detection and prevention capabilities.
Topics Covered:
- AI security tool ecosystem and vendor landscape.
- Open-source security tools: Garak, PyRIT, Giskard.
- Commercial solutions for AI security and monitoring.
- Integration patterns and deployment strategies.
- Tool selection criteria and evaluation frameworks.
Practical Exercise: Hands-on demonstration of AI security testing tools and implementation planning.
Module 10: Future Trends and Wrap-up (1 hour)
Learning Objectives:
- Understand emerging threats and future security challenges.
- Develop continuous learning and improvement strategies.
- Create action plans for organisational AI security programs.
Topics Covered:
- Emerging threats: Deepfakes, advanced prompt injection, model inversion.
- Future OWASP GenAI project developments and roadmap.
- Building AI security communities and knowledge sharing.
- Continuous improvement and threat intelligence integration.
Action Planning Exercise: Develop a 90-day action plan for implementing OWASP GenAI security practices in participants' organisations.
Requirements
- General understanding of web application security principles
- Basic familiarity with AI/ML concepts
- Experience with security frameworks or risk assessment methodologies is preferred
Audience
- Cybersecurity professionals
- AI developers
- System architects
- Compliance officers
- Security practitioners
Testimonials (1)
I really enjoyed learning about AI attacks and the tools out there to begin practicing and actively using for security testing. I took away a lot of knowledge I didn't have at the beginning, and the course met my expectations. My favorite part of the training was Comet Browser; I was amazed at what it could do, and it's definitely something I'll be looking into more. Overall it was a great course, and I enjoyed learning the full OWASP GenAI Top 10.