Welcome!

A free, open-source ML observability course for data scientists and ML engineers by Evidently AI.

Welcome to the open-source ML observability course!

[Video: ML observability course welcome video]

How to participate?

  • Learn at your own pace. We published all 40 lessons with videos, course notes, and code examples.

  • Join the course cohort. To submit assignments and earn a certificate of completion, you must enroll in the course cohort. Sign up to save your seat and be notified when the next cohort starts.

The 2023 cohort has completed. You can learn at your own pace or sign up for the next cohort.

Links

  • Newsletter. Sign up to receive course updates and be notified when the next cohort starts.

  • Discord community. Join the community to ask questions and chat with others.

  • Code examples. Published in this GitHub repository.

  • YouTube playlist. Subscribe to the course YouTube playlist.

  • Enjoying the course? Star Evidently on GitHub to contribute back! This helps us create free, open-source tools and content for the community.

What the course is about

This course is a deep dive into ML model observability and monitoring.

We explore different types of evaluations, from data quality to data drift, and how they fit into the ML model lifecycle. We also cover the engineering side of ML observability and how to integrate it with your ML services and pipelines.
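
To give a flavor of the code practices: below is a minimal, illustrative sketch of running a data drift check with the open-source Evidently library, which the course uses throughout. It assumes the Evidently 0.4.x Report API; the datasets here are synthetic placeholders, and the exact imports and presets may differ in your Evidently version.

```python
import numpy as np
import pandas as pd

from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

rng = np.random.default_rng(42)

# Placeholder data: "reference" stands in for training/validation data,
# "current" for a recent production batch with one shifted feature.
reference = pd.DataFrame({
    "feature_a": rng.normal(0.0, 1.0, 500),
    "feature_b": rng.normal(5.0, 2.0, 500),
})
current = pd.DataFrame({
    "feature_a": rng.normal(0.5, 1.0, 500),  # shifted mean to simulate drift
    "feature_b": rng.normal(5.0, 2.0, 500),
})

# Build and run a data drift report comparing the current batch to the reference.
report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference, current_data=current)

# Save an interactive HTML report you can open in a browser.
report.save_html("data_drift_report.html")
```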

Course structure

The ML observability course is organized into six modules. You can follow the complete course syllabus or pick only the modules that are most relevant to you:

  • Module 1: Introduction to ML monitoring and observability
  • Module 2: ML monitoring metrics: model quality, data quality, data drift
  • Module 3: ML monitoring for unstructured data: NLP, LLM and embeddings
  • Module 4: Designing effective ML monitoring
  • Module 5: ML pipelines validation and testing
  • Module 6: Deploying an ML monitoring dashboard

Course calendar and deadlines for the 2023 cohort

Module | Week
Module 1: Introduction | October 16, 2023
Module 2: ML monitoring metrics | October 23, 2023
Module 3: ML monitoring for unstructured data | October 30, 2023
Module 4: Designing effective ML monitoring | November 6, 2023
Module 5: ML pipelines validation and testing | November 13, 2023
Module 6: Deploying an ML monitoring dashboard | November 20, 2023
Final assignment | November 27, 2023 (quizzes and assignment due December 4, 2023)

Our approach

  • Blend of theory and practice. The course combines key concepts of ML observability and monitoring with practice-oriented tasks.

  • Practical code examples. We provide end-to-end deployment blueprints and walk you through the code examples.

  • Focus on open-source. The course is built upon open-source tools to make ML observability accessible to all.

  • The course is free and open to everyone. All course videos are public so you can rewatch them anytime.

Prerequisites

The course includes both theoretical and code-focused modules. The code practice requires some knowledge of Python; we will walk you through the code, but you can skip these parts and still learn a lot.

Who is it for

This course is useful for professionals who already work with ML models in production and for those preparing to deploy them:

  • Data scientists,

  • ML engineers,

  • Technical product managers,

  • Analysts.

Let’s dive in!
