
6.2. ML model monitoring dashboard with Evidently. Batch architecture [CODE PRACTICE]

A code walkthrough of creating a live ML monitoring dashboard for a batch architecture using Evidently.



Video 2. ML model monitoring dashboard with Evidently. Batch architecture, by Emeli Dral

In this video, we create a script to generate Reports and Test Suites for several batches of data and design different panels to display on a live ML monitoring dashboard.
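To give a sense of what the script does before you watch, here is a minimal sketch of the batch loop, assuming the legacy Evidently 0.4.x API that this course is built on. The dataset file, reference split, and batch size are illustrative placeholders, not the exact values from the video.

```python
import datetime

import pandas as pd

from evidently.metric_preset import DataDriftPreset
from evidently.report import Report
from evidently.test_preset import DataDriftTestPreset
from evidently.test_suite import TestSuite
from evidently.ui.workspace import Workspace

# Hypothetical data split: early rows act as the reference dataset,
# later rows are replayed as daily "current" batches.
data = pd.read_csv("data.csv")  # assumption: any tabular dataset
reference = data.iloc[:1000]
BATCH_SIZE = 200

ws = Workspace.create("workspace")  # local workspace directory
project = ws.create_project("Batch monitoring example")

for i in range(5):
    start = 1000 + i * BATCH_SIZE
    current = data.iloc[start : start + BATCH_SIZE]
    # Backdated timestamps make the batches render as a time series
    ts = datetime.datetime.now() - datetime.timedelta(days=5 - i)

    # One Report and one Test Suite per batch, saved into the workspace
    report = Report(metrics=[DataDriftPreset()], timestamp=ts)
    report.run(reference_data=reference, current_data=current)
    ws.add_report(project.id, report)

    suite = TestSuite(tests=[DataDriftTestPreset()], timestamp=ts)
    suite.run(reference_data=reference, current_data=current)
    ws.add_test_suite(project.id, suite)
```

Each pass through the loop produces one Report and one Test Suite snapshot; these snapshots are what the dashboard panels later read from.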

Want to go straight to code? Here is the code example to follow along.

Outline:
  • 00:00 Introduction
  • 00:47 Overview of the script
  • 02:27 Imports
  • 03:28 Create global variables
  • 05:41 Load the data
  • 07:19 Implement the function to generate Reports and Test Suites
  • 11:32 Create a Project
  • 13:53 Add a counter panel (dashboard title)
  • 15:10 How to filter which data to display
  • 16:31 Add a line plot panel (target drift)
  • 19:55 Add a bar plot panel (dataset drift)
  • 21:23 Add test suite panels
  • 22:47 Implement the function to generate the Dashboard
  • 23:46 Live script debugging
  • 27:11 Run the script and monitoring service
  • 28:36 View and explore the dashboard in the browser
  • 30:21 View the individual Reports and Test Suites
  • 31:38 Recap and next steps
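To complement the panel steps in the outline, here is a hedged sketch of the dashboard-configuration half of the script: a counter panel used as a title, a line plot, a bar plot, and a detailed test suite panel. The panel classes follow the Evidently 0.4.x dashboards API and continue the `project` object from the sketch above; the metric ids and field paths shown are common choices and may differ from the exact ones used in the video.

```python
# Sketch of the dashboard configuration (Evidently 0.4.x API),
# reusing the `project` created in the earlier sketch.
from evidently.renderers.html_widgets import WidgetSize
from evidently.ui.dashboards import (
    CounterAgg,
    DashboardPanelCounter,
    DashboardPanelPlot,
    DashboardPanelTestSuite,
    PanelValue,
    PlotType,
    ReportFilter,
    TestSuitePanelType,
)

# A counter panel with no aggregation acts as a dashboard title
project.dashboard.add_panel(
    DashboardPanelCounter(
        title="Batch monitoring dashboard",
        filter=ReportFilter(metadata_values={}, tag_values=[]),
        agg=CounterAgg.NONE,
    )
)

# Line plot: share of drifted columns across batches
project.dashboard.add_panel(
    DashboardPanelPlot(
        title="Dataset drift (share of drifted columns)",
        filter=ReportFilter(metadata_values={}, tag_values=[]),
        values=[
            PanelValue(
                metric_id="DatasetDriftMetric",
                field_path="share_of_drifted_columns",
                legend="share",
            )
        ],
        plot_type=PlotType.LINE,
        size=WidgetSize.FULL,
    )
)

# Bar plot: number of drifted columns per batch
project.dashboard.add_panel(
    DashboardPanelPlot(
        title="Drifted columns",
        filter=ReportFilter(metadata_values={}, tag_values=[]),
        values=[
            PanelValue(
                metric_id="DatasetDriftMetric",
                field_path="number_of_drifted_columns",
                legend="count",
            )
        ],
        plot_type=PlotType.BAR,
        size=WidgetSize.HALF,
    )
)

# Detailed test suite panel: per-test pass/fail status for each batch
project.dashboard.add_panel(
    DashboardPanelTestSuite(
        title="Data drift tests",
        filter=ReportFilter(
            metadata_values={}, tag_values=[], include_test_suites=True
        ),
        size=WidgetSize.FULL,
        panel_type=TestSuitePanelType.DETAILED,
    )
)

project.save()
```

With the snapshots and panels in place, running something like `evidently ui --workspace ./workspace` (the 0.4.x CLI) serves the dashboard locally in the browser, where you can also open the individual Reports and Test Suites.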