Glossary

Artifact : A file or output generated during the experiment process, such as trained models, datasets, or logs.

Code Generation Interface : A system feature that generates and customizes code for each pipeline step using natural language prompts.

Create Run : An action that starts a new experiment run by navigating the user to the Experiment Design Page for configuration.

Experiment Run : An execution of a machine learning training process with a specific set of parameters, algorithms, and data.

Hyperparameters : Configurable variables that govern the model training process, such as learning rate, batch size, or number of epochs.
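The hyperparameters named above can be sketched as a plain configuration mapping. This is a minimal illustration; the field names (learning_rate, batch_size, num_epochs) and the dataset size are generic examples, not fields defined by the platform.

```python
# Hypothetical training configuration; these names are illustrative only.
hyperparameters = {
    "learning_rate": 0.001,  # step size for gradient updates
    "batch_size": 32,        # samples processed per gradient step
    "num_epochs": 10,        # full passes over the training data
}

def steps_per_epoch(dataset_size: int, batch_size: int) -> int:
    """Number of gradient steps needed to cover the dataset once."""
    return -(-dataset_size // batch_size)  # ceiling division

# Total gradient steps for a hypothetical dataset of 50,000 samples.
total_steps = hyperparameters["num_epochs"] * steps_per_epoch(
    50_000, hyperparameters["batch_size"]
)
```

Unlike model parameters (weights learned during training), these values are fixed before a run starts and are typically what varies between experiment runs.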

Metrics : Quantitative measures used to evaluate model performance, such as accuracy, precision, recall, F1-score, and RMSE.

Model Hub : A central repository within the platform where trained models can be registered for deployment.
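The metrics named in the Metrics entry can be computed from predictions directly. The following is a minimal self-contained sketch (not the platform's evaluation code) showing accuracy, precision, recall, and F1 for binary classification, plus RMSE for regression.

```python
import math

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision_recall_f1(y_true, y_pred, positive=1):
    """Precision, recall, and F1 for a binary classification task."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def rmse(y_true, y_pred):
    """Root mean squared error for a regression task."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Example: 5 binary labels, 3 of 5 predicted correctly.
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1]
acc = accuracy(y_true, y_pred)          # 0.6
p, r, f = precision_recall_f1(y_true, y_pred)
```

In practice these are usually computed by an evaluation library rather than by hand; the definitions above are the standard ones.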

Parameters : Values or settings that influence the training process, including both algorithm-specific and experiment-level configurations.

Pipeline : An ordered sequence of data processing and model training steps executed to accomplish a machine learning task.
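A pipeline in this sense can be modeled as an ordered list of named steps, each consuming the previous step's output. This is a minimal sketch of the concept, not the platform's actual pipeline API; the step names and functions are illustrative.

```python
from typing import Any, Callable

# A step is a (name, function) pair; names here are hypothetical.
Step = tuple[str, Callable[[Any], Any]]

def run_pipeline(steps: list[Step], data: Any) -> Any:
    """Execute each step in order, feeding each output into the next step."""
    for name, fn in steps:
        data = fn(data)
    return data

pipeline = [
    ("clean", lambda xs: [x for x in xs if x is not None]),      # drop missing values
    ("normalize", lambda xs: [x / max(xs) for x in xs]),         # scale to [0, 1]
]
result = run_pipeline(pipeline, [2, None, 4, 8])  # [0.25, 0.5, 1.0]
```

The fixed ordering is the defining property: each run of the pipeline applies the same steps in the same sequence, which is what makes experiment runs comparable.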

Project : A logical container that groups pipelines and experiment runs related to a specific use case.

Access Role : A set of permissions assigned to a user that determines what data sources, projects, pipelines, and system features they can view or modify. Access roles are typically defined by administrators to control security and compliance.

Function Role : A set of operational capabilities or responsibilities assigned to a user that dictates what actions they can perform within the platform (e.g., creating pipelines, running experiments, deploying models).

Administrator (Admin) : A function role with the highest level of privileges. Admins can manage projects, users, access roles, function roles, and system configurations.
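The distinction between access roles (what a user can see) and function roles (what a user can do) can be sketched as two separate permission lookups. The role names and permission strings below are hypothetical, not the platform's actual schema.

```python
# Illustrative role tables; names and permissions are assumptions for the sketch.
access_roles = {
    "data-scientist": {"view_project", "view_pipeline"},
    "admin": {"view_project", "view_pipeline", "view_all_data_sources"},
}
function_roles = {
    "data-scientist": {"create_pipeline", "run_experiment"},
    "admin": {"create_pipeline", "run_experiment", "deploy_model", "manage_users"},
}

def can_view(role: str, resource: str) -> bool:
    """True if the user's access role permits viewing the resource."""
    return resource in access_roles.get(role, set())

def can_do(role: str, action: str) -> bool:
    """True if the user's function role permits performing the action."""
    return action in function_roles.get(role, set())
```

A user typically holds both kinds of role at once; an action succeeds only when the function role allows it and the access role exposes the resource it targets.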