Module 2: ML monitoring metrics

This module covers different aspects of production ML model performance. You will learn how to apply data quality, model quality, and data drift metrics to structured data.

We will explain some popular metrics and tests and how to apply them:

  • what it means to have a “good” ML model;

  • evaluating ML model quality;

  • tracking data quality in production;

  • data and prediction drift as proxy metrics.

This module includes both a theoretical part and a code practice for each evaluation type. By the end of this module, you will understand the building blocks of ML observability: which metrics and checks you can run and how to interpret them.
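To make the three evaluation types more concrete, here is a minimal, illustrative sketch (not taken from the course code) using scikit-learn and SciPy as stand-in tools: model quality metrics computed once labels arrive, simple data quality checks, and a two-sample Kolmogorov-Smirnov test as a drift proxy. The data, column names, and thresholds are made up for the example.

```python
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(42)

# Hypothetical reference (training-time) and current (production) data.
reference = pd.DataFrame({"feature": rng.normal(0.0, 1.0, 1000)})
current = pd.DataFrame({"feature": rng.normal(0.3, 1.0, 1000)})

# 1. Model quality: compare predictions against ground truth once labels arrive.
y_true = rng.integers(0, 2, 1000)
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.3, 1000), 0, 1)
print("ROC AUC:", roc_auc_score(y_true, y_score))
print("Accuracy:", accuracy_score(y_true, (y_score > 0.5).astype(int)))

# 2. Data quality: basic checks for missing values and out-of-range inputs.
print("Share of missing values:", current["feature"].isna().mean())
print("Out-of-range rows:", (~current["feature"].between(-4, 4)).sum())

# 3. Data drift: a two-sample Kolmogorov-Smirnov test as a proxy metric
#    when ground-truth labels are delayed or unavailable.
ks_stat, p_value = stats.ks_2samp(reference["feature"], current["feature"])
print(f"KS statistic={ks_stat:.3f}, p-value={p_value:.4f}")
print("Drift detected" if p_value < 0.05 else "No drift detected")
```

In practice, a monitoring library or dashboard typically wraps checks like these and runs them on a schedule; the code practice in this module shows how to do that for each evaluation type.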
