Course Outline

Introduction to Multimodal LLMs in Vertex AI

  • Overview of multimodal capabilities within Vertex AI
  • Gemini models and their supported modalities
  • Enterprise and research use cases

Setting Up the Development Environment

  • Configuring Vertex AI for multimodal workflows
  • Working with datasets across different modalities
  • Hands-on lab: environment setup and dataset preparation
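The setup steps above can be sketched as a short shell session. This assumes the gcloud CLI and Python are already installed; `my-project` is a placeholder project ID, and `us-central1` is one example of a region where Gemini models are available.

```shell
# Install the Vertex AI SDK for Python
pip install --upgrade google-cloud-aiplatform

# Authenticate with Application Default Credentials
gcloud auth application-default login

# Point the SDK at your project (my-project is a placeholder)
gcloud config set project my-project
export GOOGLE_CLOUD_PROJECT=my-project
export GOOGLE_CLOUD_REGION=us-central1
```

After this, `vertexai.init(project=..., location=...)` in Python picks up the same project and credentials.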

Long Context Windows and Advanced Reasoning

  • Understanding long-context workflows
  • Use cases in planning and decision-making
  • Hands-on lab: implementing long-context analysis
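A recurring task in long-context work is deciding how much material fits into one request. A minimal sketch, assuming a rough 4-characters-per-token heuristic (an approximation only; the real tokenizer differs) and a caller-supplied budget:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate (~4 characters per token).
    A heuristic for budgeting, not the model's real tokenizer."""
    return max(1, len(text) // 4)


def pack_context(documents: list[str], budget_tokens: int) -> list[str]:
    """Greedily pack whole documents into a single long-context
    request until the estimated token budget would be exceeded."""
    packed: list[str] = []
    used = 0
    for doc in documents:
        cost = estimate_tokens(doc)
        if used + cost > budget_tokens:
            break
        packed.append(doc)
        used += cost
    return packed
```

For example, `pack_context(transcripts, budget_tokens=900_000)` leaves headroom below a large context window for the prompt and the response.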

Cross-Modal Workflow Design

  • Combining text, audio, and image analysis
  • Chaining multimodal steps within pipelines
  • Hands-on lab: designing a multimodal pipeline
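The chaining idea above can be shown with a small pipeline runner. Each step reads and extends a shared context dict; the step functions here (`transcribe_audio`, `caption_image`, `summarize`) are hypothetical stand-ins for real model calls, illustrating only the flow of data between modalities.

```python
from typing import Callable

Step = Callable[[dict], dict]


def run_pipeline(steps: list[Step], context: dict) -> dict:
    """Run multimodal steps in order; each step reads the shared
    context and adds its own output for later steps to use."""
    for step in steps:
        context = step(context)
    return context


# Hypothetical steps standing in for model calls:
def transcribe_audio(ctx: dict) -> dict:
    ctx["transcript"] = f"transcript of {ctx['audio_uri']}"
    return ctx


def caption_image(ctx: dict) -> dict:
    ctx["caption"] = f"caption of {ctx['image_uri']}"
    return ctx


def summarize(ctx: dict) -> dict:
    # A final text step that combines the earlier cross-modal outputs.
    ctx["summary"] = f"{ctx['transcript']} + {ctx['caption']}"
    return ctx
```

In the lab, each stub would be replaced by a real model invocation, but the ordering and the shared-context contract stay the same.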

Working with Gemini API Parameters

  • Configuring multimodal inputs and outputs
  • Optimising inference performance and efficiency
  • Hands-on lab: tuning Gemini API parameters
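A minimal sketch of the tuning surface: a local dataclass mirroring commonly used Gemini generation parameters (`temperature`, `top_p`, `top_k`, `max_output_tokens`), with validation against typical documented ranges. This is an illustration only; in the real SDK these fields live on `vertexai.generative_models.GenerationConfig`, and exact limits vary by model, so check the model card.

```python
from dataclasses import dataclass


@dataclass
class GenParams:
    """Local sketch of common generation parameters; the real SDK
    class is vertexai.generative_models.GenerationConfig."""
    temperature: float = 1.0
    top_p: float = 0.95
    top_k: int = 40
    max_output_tokens: int = 1024

    def validate(self) -> None:
        # Ranges below are typical documented bounds, not universal.
        if not 0.0 <= self.temperature <= 2.0:
            raise ValueError("temperature must be in [0, 2]")
        if not 0.0 <= self.top_p <= 1.0:
            raise ValueError("top_p must be in [0, 1]")
        if self.max_output_tokens < 1:
            raise ValueError("max_output_tokens must be positive")
```

Lower temperature and top_p narrow the sampling distribution (useful for extraction-style tasks); higher values widen it (useful for open-ended generation).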

Advanced Applications and Integrations

  • Interactive multimodal agents and assistants
  • Integrating external APIs and tools
  • Hands-on lab: building a multimodal application
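Integrating external tools usually means mapping a model's tool-call request back to real code. A minimal dispatch sketch, assuming calls arrive as a dict with `name` and `args` (roughly the shape of a function-calling response); `get_weather` is a hypothetical stand-in for an external API.

```python
from typing import Any, Callable

# Registry mapping tool names to Python callables.
TOOLS: dict[str, Callable[..., Any]] = {}


def tool(fn: Callable) -> Callable:
    """Decorator that registers a function as a callable tool."""
    TOOLS[fn.__name__] = fn
    return fn


@tool
def get_weather(city: str) -> str:
    # Hypothetical stand-in for a real external API call.
    return f"sunny in {city}"


def dispatch(call: dict) -> Any:
    """Execute a tool call of the form {'name': ..., 'args': {...}}
    by looking it up in the registry."""
    return TOOLS[call["name"]](**call["args"])
```

The tool result would then be sent back to the model as the next turn so it can compose a final answer.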

Evaluation and Iteration

  • Testing multimodal performance
  • Metrics for accuracy, alignment, and drift
  • Hands-on lab: evaluating multimodal workflows
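Two of the metrics named above can be sketched in a few lines: exact-match accuracy against references, and a simple drift signal comparing mean scores across two evaluation runs. Both are deliberately minimal baselines; production evaluation would layer on task-specific and alignment metrics.

```python
def exact_match_accuracy(preds: list[str], refs: list[str]) -> float:
    """Fraction of predictions that match the reference exactly
    (whitespace- and case-insensitive)."""
    hits = sum(
        p.strip().lower() == r.strip().lower()
        for p, r in zip(preds, refs)
    )
    return hits / len(refs)


def drift(baseline_scores: list[float], current_scores: list[float]) -> float:
    """Simple drift signal: change in mean score between a baseline
    evaluation run and the current one (negative = regression)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(current_scores) - mean(baseline_scores)
```

Tracking `drift` per metric across releases turns one-off evaluation into an iteration loop.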

Summary and Next Steps

Requirements

  • Proficiency in Python programming
  • Experience in developing machine learning models
  • Familiarity with multimodal data (text, audio, images)

Target Audience

  • AI researchers
  • Advanced developers
  • Machine learning scientists
Duration

  • 14 Hours
