Monitoring

Q. What is the purpose of the Monitoring Module?

A. The Monitoring Module provides dashboards to track the health, performance, and stability of deployed machine learning models. It helps identify issues such as data drift, infrastructure bottlenecks, and performance degradation.

Q. What types of dashboards are available in the Monitoring Module?

A. The available dashboards are:

  • Infrastructure Dashboard – Tracks compute, memory, and system stability metrics.
  • Categorical Target Drift Dashboard – Detects shifts in categorical target distributions.
  • Classification Performance Dashboard – Evaluates classification model performance (e.g., confusion matrix, quality trends).
  • Data Drift Dashboard – Identifies statistical changes in numerical features.
  • Image Drift Dashboard – Monitors distribution changes in image datasets.
  • Numerical Target Drift Dashboard – Tracks drifts in numerical target values.
  • Regression Performance Dashboard – Assesses regression model accuracy (e.g., RMSE, MAE).
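The drift dashboards above rest on the same underlying idea: compare the distribution a feature had at training time against what the model sees in production. As an illustration only (not the module's actual implementation), a minimal numerical drift check can be sketched with a two-sample Kolmogorov–Smirnov statistic; the `threshold` value is an assumed tuning parameter, not one defined by the module:

```python
def ks_statistic(reference, production):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the empirical CDFs of the two samples."""
    ref = sorted(reference)
    prod = sorted(production)
    values = sorted(set(ref) | set(prod))

    def ecdf(sample, x):
        # Fraction of sample points less than or equal to x.
        return sum(1 for v in sample if v <= x) / len(sample)

    return max(abs(ecdf(ref, x) - ecdf(prod, x)) for x in values)


def is_drifted(reference, production, threshold=0.1):
    # Flag drift when the distribution gap exceeds a chosen threshold.
    # The threshold here is illustrative; in practice it is tuned per feature.
    return ks_statistic(reference, production) > threshold
```

Identical samples yield a statistic of 0, while completely disjoint samples yield 1, so the threshold sets how sensitive the drift alert is.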

Q. Why is monitoring important for deployed models?

A. Monitoring ensures that models remain accurate and relevant when exposed to new data. It helps proactively detect data drift, target drift, performance degradation, and infrastructure limitations.

Q. Can the Monitoring Module detect data drift in images?

A. Yes. The Image Drift Dashboard is specifically designed to monitor statistical changes in visual datasets, ensuring image-based models remain valid over time.
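One common way to quantify drift in visual data (offered here as a hedged sketch, not a description of the dashboard's internals) is to summarize each image batch as a pixel-intensity histogram and compare the histograms over time:

```python
def intensity_histogram(pixels, bins=8):
    """Normalized histogram of 8-bit pixel intensities (values in 0..255)."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in counts]


def histogram_distance(h1, h2):
    # Total variation distance between two normalized histograms:
    # 0 means identical distributions, 1 means completely disjoint.
    return 0.5 * sum(abs(a - b) for a, b in zip(h1, h2))
```

A rising distance between the training-time histogram and recent production histograms would suggest the visual data distribution is shifting.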

Q. How does the module handle classification and regression models?

A. The module offers two separate dashboards for these model types.

  • Classification models are monitored via the Classification Performance Dashboard.
  • Regression models are monitored via the Regression Performance Dashboard.
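The metrics these two dashboards surface (confusion matrices for classifiers; RMSE and MAE for regressors) can be sketched in a few lines. This is an illustrative computation of the standard formulas, not the module's own code:

```python
import math
from collections import Counter


def confusion_counts(y_true, y_pred):
    """Count (true_label, predicted_label) pairs for a classification model;
    the diagonal entries (t == p) are the correct predictions."""
    return Counter(zip(y_true, y_pred))


def rmse(y_true, y_pred):
    # Root mean squared error for a regression model.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))


def mae(y_true, y_pred):
    # Mean absolute error for a regression model.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

Tracking these metrics over successive production batches is what lets the dashboards reveal gradual performance degradation rather than a single snapshot.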

Q. Who should use the Monitoring Module?

A. Data scientists, ML engineers, and operations teams should use the Monitoring Module to ensure deployed models continue performing well in production.