Course Outline
iOS ML Environment & Development Setup
- Apple’s on-device ML architecture: CoreML, Vision, Speech, NaturalLanguage.
- Setting up the development environment: Anaconda, Python, Xcode, and Swift.
- Introduction to coremltools and the iOS ML conversion pipeline.
- Lab 1: Validate the macOS/Swift environment, set up Python/Anaconda, and verify Xcode command-line integration.
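A quick way to close out Lab 1 is a throwaway Swift script run from Terminal, which confirms that the Xcode command-line tools and the CoreML SDK are reachable outside the IDE. This is only an illustrative sanity check; the file name check.swift and the xcrun invocation are suggestions, not part of any official setup procedure.

```swift
// check.swift: a throwaway sanity check, run from Terminal with: xcrun swift check.swift
// It only confirms the Swift toolchain and the CoreML SDK are reachable outside Xcode.
import CoreML
import Foundation

let config = MLModelConfiguration()
config.computeUnits = .all  // allow CPU, GPU, and the Neural Engine

print("CoreML SDK reachable; computeUnits raw value:", config.computeUnits.rawValue)
print("Host OS:", ProcessInfo.processInfo.operatingSystemVersionString)
```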
Training Custom Models with Python & Popular ML Libraries
- Model selection: When to use Keras/TensorFlow versus scikit-learn versus libsvm.
- Data preprocessing, training loops, and evaluation metrics in Python.
- Integrating Anaconda & Spyder for efficient model development and debugging.
- Handling legacy models: importing Caffe networks via coremltools.
- Lab 2: Train a custom classification/regression model in Python (Keras/scikit-learn) and export to .h5/.pkl.
Converting Models to CoreML & iOS Integration
- Using coremltools to convert TensorFlow, Keras, scikit-learn, libsvm, and Caffe models to .mlmodel.
- Inspecting CoreML models in Xcode: layers, inputs/outputs, precision, and optimization levels.
- Loading CoreML models in Swift: MLModel, MLFeatureProvider, and async inference (see the loading sketch after this list).
- Lab 3: Convert a Python-trained model to CoreML, inspect it in Xcode, and load it in a Swift playground.
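As a preview of the Swift side of Lab 3, the sketch below loads a compiled model and runs a single prediction through a hand-built feature provider. The model name "Classifier", the input feature name "input", and the input shape are placeholders; the real names come from the model coremltools produced, and Xcode normally generates a typed wrapper class so the dictionary-based provider rarely needs to be written by hand.

```swift
import CoreML

// A minimal sketch of Lab 3's Swift side. "Classifier" and the feature name
// "input" are placeholders; use the names from the model you converted.
func runPrediction() async throws {
    guard let url = Bundle.main.url(forResource: "Classifier", withExtension: "mlmodelc") else {
        return // Xcode compiles bundled .mlmodel files into .mlmodelc
    }

    // Let CoreML choose CPU, GPU, or the Neural Engine.
    let config = MLModelConfiguration()
    config.computeUnits = .all

    // Asynchronous loading keeps the UI responsive while the model is prepared.
    let model = try await MLModel.load(contentsOf: url, configuration: config)

    // Hand-built MLFeatureProvider; a generated model class normally wraps this.
    let features = try MLMultiArray(shape: [4], dataType: .float32) // shape is illustrative
    let input = try MLDictionaryFeatureProvider(dictionary: ["input": features])

    let output = try model.prediction(from: input)
    print(output.featureNames)
}
```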
Building iOS Intelligence with CoreML & Vision
- Vision framework: face detection, object detection, text recognition, and barcode scanning.
- CoreGraphics integration: image preprocessing, ROI masking, and overlay rendering.
- GameplayKit: applying AI behavior trees, pathfinding, and game logic alongside ML features in the app.
- Real-time inference optimization: multi-model pipelines, caching, and memory management.
- Lab 4: Implement a real-time image analysis feature using Vision + custom CoreML model + CoreGraphics overlay.
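Lab 4's core loop can be condensed to a single Vision request wrapped around the custom model. The sketch below assumes an image classifier named "Classifier" in the app bundle (a placeholder name) and prints the top results; the camera capture and CoreGraphics overlay are omitted for brevity.

```swift
import UIKit
import Vision
import CoreML

// A condensed sketch of the Lab 4 pipeline. "Classifier" is a placeholder for
// the custom model added to the Xcode project.
func classify(_ image: UIImage) throws {
    guard let cgImage = image.cgImage,
          let modelURL = Bundle.main.url(forResource: "Classifier", withExtension: "mlmodelc")
    else { return }

    // Wrap the CoreML model so Vision can drive it.
    let vnModel = try VNCoreMLModel(for: MLModel(contentsOf: modelURL))

    // Vision resizes and crops the input image to match the model's expectations.
    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation] else { return }
        for observation in results.prefix(3) {
            print(observation.identifier, observation.confidence)
        }
    }
    request.imageCropAndScaleOption = .centerCrop

    // In a live-camera feature this runs per frame on a background queue.
    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```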
Speech Recognition, NLP & Siri Integration
- Speech framework: real-time speech-to-text, custom vocabulary, and language model injection.
- NaturalLanguage framework: tokenization, sentiment analysis, NER, and language identification (a short sketch follows this list).
- SiriKit & Shortcuts: adding voice commands, custom intents, and on-device Siri support.
- Privacy & security: CoreML sandboxing, data encryption, and on-device vs. cloud inference tradeoffs.
- Lab 5: Add voice commands, text analysis, and Siri Shortcuts to the iOS app.
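For the text-analysis half of Lab 5, the NaturalLanguage framework needs no custom model at all. The sketch below shows language identification plus per-sentence sentiment scoring; the speech-to-text and Siri Shortcut pieces require microphone permissions and an Intents setup, so they are not reproduced here.

```swift
import NaturalLanguage

// A short sketch of the text-analysis pieces of Lab 5: language identification
// plus per-sentence sentiment scoring (scores run from -1.0 to 1.0).
func analyze(_ text: String) {
    if let language = NLLanguageRecognizer.dominantLanguage(for: text) {
        print("Detected language:", language.rawValue)
    }

    let tagger = NLTagger(tagSchemes: [.sentimentScore])
    tagger.string = text
    tagger.enumerateTags(in: text.startIndex..<text.endIndex,
                         unit: .sentence,
                         scheme: .sentimentScore) { tag, range in
        print(text[range], "->", tag?.rawValue ?? "0")
        return true
    }
}
```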
Capstone Project & App Deployment
- End-to-end workflow: Python training → CoreML conversion → Swift UI → iOS deployment.
- Performance profiling: Instruments, CoreML diagnostics, and model quantization (FP16/INT8); a signpost profiling sketch follows this list.
- App Store guidelines for ML apps: size limits, privacy manifests, and on-device data handling.
- Capstone: Deploy a complete iOS app with a custom CoreML model, Vision processing, speech/NLP features, and Siri integration.
- Review, Q&A, and next steps: scaling to SwiftUI, multi-modal CoreML, and MLOps for iOS.
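One lightweight profiling technique for the capstone is wrapping each prediction in an os_signpost interval so per-inference latency appears on Instruments' Points of Interest track. The subsystem string and function name below are illustrative only.

```swift
import CoreML
import os

// Signpost intervals show up in Instruments (os_signpost / Points of Interest),
// making prediction latency easy to compare across compute-unit settings.
// The subsystem identifier is a placeholder.
let poiLog = OSLog(subsystem: "com.example.capstone", category: .pointsOfInterest)

func timedPrediction(model: MLModel, input: MLFeatureProvider) throws -> MLFeatureProvider {
    let id = OSSignpostID(log: poiLog)
    os_signpost(.begin, log: poiLog, name: "CoreML prediction", signpostID: id)
    defer { os_signpost(.end, log: poiLog, name: "CoreML prediction", signpostID: id) }
    return try model.prediction(from: input)
}
```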
To request a customized course outline for this training, please contact us.
Requirements
- Proven experience programming in Swift (Xcode, SwiftUI/UIKit, async/await, closures).
- No prior machine learning or data science background required.
- Familiarity with command-line basics and Python syntax is helpful.
Audience
- iOS & Mobile Developers.
- Software Engineers transitioning to on-device AI.
- Technical leads evaluating iOS ML deployment strategies.
14 Hours
Testimonials (1)
The way of transferring knowledge and the knowledge of the trainer.