Experiment

Q. What is the Experiment Module?

A. The Experiment Module is a feature of the MlAngles MLOps Platform that enables users to train and evaluate machine learning models using transformed datasets from pipelines.

Q. How do I access the Experiment Module?

A. The steps are:

  • From the left-hand sidebar, click the Experiment Module icon.
  • The Experiment Homepage will open, displaying all experiment runs along with key details such as run name, project, pipeline, version, status, creator, and timestamps.

Q. Can I customize algorithms before starting a run?

A. Yes. By clicking on the Configure Algorithm button, you can:

  • Select an ML library.
  • Choose one or more algorithms.
  • Submit your configuration before starting the experiment run.

Q. How can I create a new experiment run?

A. There are two ways:

  1. From the Experiment Homepage
    1. Use the Project Selection dropdown to choose a project.
    2. Click + New Run to go to the Experiment Run Design page.
  2. From the Project Workspace
    1. Click on the desired Project Title Card.
    2. Navigate to the Experiment Tracking tab.
    3. Click New Run.

Q. Does the system provide recommended configurations automatically?

A. Yes. Once a pipeline is selected, the system’s integrated Large Language Model (LLM) analyzes the pipeline metadata and auto-fills recommended parameters to speed up setup.

Q. What is the benefit of using auto-filled parameters?

A. These defaults allow users—especially beginners—to launch experiments quickly with well-optimized configurations, while advanced users can still customize them before execution.

Q. Can I manually tune hyperparameters instead of relying on defaults?

A. Yes. Advanced users can choose to manually configure hyperparameter tuning. The platform supports:

  • Optuna – Automated, efficient hyperparameter optimization.
  • Hyperopt – Distributed asynchronous search.
  • Grid Search – Exhaustive search over a specified hyperparameter space.

Users must define the range or set of values for each parameter to optimize.
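The Grid Search option above amounts to an exhaustive sweep over every combination of the values you define. A minimal stand-alone sketch of that idea, using only the Python standard library (the parameter names and scoring function here are illustrative, not the platform's actual API):

```python
from itertools import product

# Hypothetical search space: each parameter maps to the set of values
# to try, as the platform asks you to define.
space = {
    "learning_rate": [0.001, 0.01, 0.1],
    "max_depth": [3, 5, 7],
}

def score(params):
    # Stand-in for training and evaluation; a real objective would fit
    # a model and return a validation metric to minimize.
    return (params["learning_rate"] - 0.01) ** 2 + (params["max_depth"] - 5) ** 2

# Enumerate every combination (3 x 3 = 9 here) and keep the best one.
names = list(space)
best = min(
    (dict(zip(names, combo)) for combo in product(*(space[n] for n in names))),
    key=score,
)
print(best)
```

Optuna and Hyperopt replace this exhaustive enumeration with guided sampling, so they typically reach a good configuration in far fewer trials when the space is large.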

Q. How do I launch an experiment after configuration?

A. After selecting defaults or customizing settings:

  1. Click Submit to queue the experiment.
  2. The system schedules execution and redirects you to the Project’s Experiment Page.
  3. From there, users can:
    1. Monitor real-time progress.
    2. View logs and status updates (Running, Failed, Success).
    3. Access trained models and outputs once the run is complete.

Q. Where can I view experiment results?

A. To view the results of any experiment run:

  1. Navigate to the Project’s Experiment Page.
  2. Locate the desired run.
  3. Click the Run Name to open the Run Details Panel.

Q. How do I register a model for deployment?

A. To register a model:

  • Open the Artifacts tab.
  • Click Model Hub → Select a model.
  • Provide a Registration Name.
  • Click Register.

The model is then available in the Model Hub Module for deployment.