# Module 5: ML pipelines validation and testing

In previous modules, we covered what ML monitoring is, which metrics and tests to use, and what to consider in ML monitoring design. Now, let’s get to practice! This is a code-focused module.

We will apply the learnings and **implement data and model quality tests as part of a pipeline**. If you work with batch models, such test-based monitoring can often cover all your needs. For online models, it can be one part of your setup: you can run batch checks whenever you get labeled data or retrain the models.

We will go through an **end-to-end pipeline using a toy dataset**. We will train a model and design tests for data and model quality using Evidently. We will also explore how to automate pipeline testing using tools like Airflow, Prefect, and MLflow.
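To give a flavor of test-based monitoring before diving into the tooling, here is a minimal, library-agnostic sketch in plain Python (not Evidently's actual API; the column name and threshold are illustrative) of a data quality check that gates a batch pipeline step by raising an error when the check fails:

```python
def missing_share(rows, column):
    """Share of rows where `column` is missing (None)."""
    values = [row.get(column) for row in rows]
    return sum(v is None for v in values) / len(values)


def run_data_quality_gate(batch, max_missing=0.1):
    """Fail the pipeline step if too many target values are missing.

    `max_missing` is an illustrative threshold, not a recommended default.
    """
    share = missing_share(batch, "target")
    if share > max_missing:
        raise ValueError(f"data quality gate failed: {share:.0%} missing targets")
    return share
```

In the module itself, Evidently test suites play this role; the pattern is the same — a check that raises (or reports a failed test) so the orchestrator can halt or branch the pipeline.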


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://learn.evidentlyai.com/ml-observability-course/module-5-ml-pipelines-validation-and-testing.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.
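For example, the question can be URL-encoded and attached as the `ask` parameter. A minimal sketch using only the Python standard library (the question text is made up for illustration):

```python
from urllib.parse import urlencode

PAGE = "https://learn.evidentlyai.com/ml-observability-course/module-5-ml-pipelines-validation-and-testing.md"


def build_ask_url(question):
    """Build the documentation-query URL with a URL-encoded question."""
    return f"{PAGE}?{urlencode({'ask': question})}"


url = build_ask_url("How do I automate Evidently tests in a pipeline?")
# The resulting URL can be fetched with any HTTP client,
# e.g. urllib.request.urlopen(url).
```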

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
