Intended Audience

This guide is intended for users of the mlangles MLOps platform who are involved in building, training, deploying, or consuming machine learning models. The platform supports the complete ML lifecycle, from pipeline creation and experiment tracking to model deployment and performance monitoring, and is designed to meet the needs of the following user roles:

Data Scientists

Data scientists are responsible for developing, experimenting with, and evaluating machine learning models. They use the platform to:

  • Conduct exploratory data analysis and feature engineering.
  • Train and compare models using the Experiment module.
  • Register selected models to ModelHub for version tracking and governance.
  • Review model performance over time through built-in monitoring dashboards.

Their primary objective is to create robust, reproducible, and high-performing models using the tools provided by mlangles.

Machine Learning Engineers

Machine learning engineers operationalize machine learning workflows by building automated and scalable pipelines. They use the platform to:

  • Deploy models to production using Batch Serving and Online Serving mechanisms.
  • Implement retraining and redeployment strategies.
  • Integrate mlangles with CI/CD systems and production environments.

Their focus is on ensuring that models are reliably deployed, monitored, and maintained across the ML lifecycle.

Data Engineers

Data engineers support the infrastructure and data workflows required for model training and inference. They use the platform to:

  • Prepare and ingest large datasets into the platform.
  • Configure access and functional roles for data operations.
  • Design and orchestrate workflows using the Pipeline module.

They play a critical role in ensuring that clean, high-quality data is consistently available for model training and consumption.

End Users

End users are business or functional stakeholders who interact with the outcomes of machine learning models but are not directly involved in model development or pipeline configuration. Their primary objective is to access and utilize model predictions as part of business workflows, decision-making, or system integrations.

Typical Roles

  • Operations Analysts – Use model results to trigger business processes (e.g., flagging high-risk transactions or prioritizing support tickets).
  • Business Users – Leverage prediction outputs through reporting tools or embedded applications.
  • Compliance or Risk Teams – Evaluate output consistency and fairness of deployed models over time.

Platform Interaction

End users typically interact with the platform in one of the following ways:

  1. Calling Deployed Models via Online Serving APIs – Models deployed using the Online Serving module are accessible via REST endpoints. End users (or their systems) can send real-time requests to the deployed model servers to retrieve predictions. For example:
    • Input: A JSON payload containing customer transaction data.
    • Output: A probability score or classification label returned via HTTP response.
    This mode is ideal for use cases requiring low latency and real-time decision support; a request sketch follows this list.
  2. Accessing Batch Predictions via Batch Serving – For large datasets or asynchronous use cases, models can be scheduled using the Batch Serving module. End users may upload a batch input file or configure a data source, and the platform generates predictions that can be downloaded as reports or routed to connected systems; a batch-input sketch also follows this list.
  3. Monitoring Outputs and Reports – End users may access dashboards generated by the platform to view key outputs such as:
    • Prediction results
    • Model confidence scores
    • Historical trend analyses
    • Alerts for significant performance shifts or data drift

These interfaces provide transparency and help users interpret model behavior in operational contexts.
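The following is a minimal sketch of a real-time scoring request, written in Python with the requests library. The endpoint URL, authentication header, and payload and response fields are hypothetical placeholders; the actual path, request schema, and authentication scheme are defined by your mlangles Online Serving deployment.

```python
import requests

# Hypothetical Online Serving endpoint; replace with the URL shown for
# your deployed model in the mlangles Online Serving module.
ENDPOINT = "https://mlangles.example.com/serving/fraud-model/predict"
API_KEY = "YOUR_API_KEY"  # auth scheme depends on your deployment

# Example input: a JSON payload containing customer transaction data.
payload = {
    "transaction_id": "txn-1001",
    "amount": 249.99,
    "merchant_category": "electronics",
    "customer_tenure_days": 412,
}

response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,  # real-time callers should fail fast rather than block
)
response.raise_for_status()

# Example output: a probability score or classification label.
result = response.json()
print(result)  # e.g. {"probability": 0.87, "label": "high_risk"}
```

A short client-side timeout suits this mode: low-latency decision support is better served by a fast failure and retry than by a request that hangs on a slow endpoint.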
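For Batch Serving, a common pattern is to assemble input records into a flat file before upload. The sketch below writes a hypothetical CSV; the accepted file format and column schema depend on the deployed model's input signature and your Batch Serving configuration.

```python
import csv

# Hypothetical batch input records; the required columns are those of the
# deployed model's input signature.
rows = [
    {"transaction_id": "txn-1001", "amount": 249.99, "merchant_category": "electronics"},
    {"transaction_id": "txn-1002", "amount": 12.50, "merchant_category": "groceries"},
]

# Write the records to a CSV file for upload through the Batch Serving
# module (or point the module at a configured data source instead).
with open("batch_input.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```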