
Model Experiments - Python Library

This is a guide to using the Cortex-python library, Fabric APIs, and CLI to run, view, list, store, and use model experiments.

Experiments are used by data scientists to train and test AI model algorithms and make modifications that improve the model results for specific use cases. The goal of the experiments API, library, and CLI is to save, view, update, and download the details (metadata, parameters, and metrics) and output (artifacts) of experiments and experiment runs for analysis, so the optimum model for a business use case can be packaged for deployment.

SENSA Fabric APIs, Python Library, and CLI provide tools to allow data scientists to compare test and training runs, so they can collaborate with the developers who package the optimum models into Skills. Ultimately the Skills are run independently or used to build Agents that run in development and production environments to provide end-to-end model-based AI solutions for business use cases.

The Cortex-python library, which can be easily used by data scientists in any IDE, is the primary tool for working with model experiments in SENSA Fabric. The methods are derived from the Experiments OpenAPI, which may also be accessed directly. In addition, the Fabric CLI provides developer commands for managing experiments that have been run using the Cortex-python library or API methods.

Model Experiment components

Experiments have:

  • Runs: Invocation instances of a model experiment
  • Artifacts: The output from an experiment that is saved in managed content
  • Metadata: The informational content of an Experiment record (e.g. artifact_name)
  • Metrics: Measured attributes of the experiment and how they are measured, expressed as key-value pairs (e.g. precision, accuracy)
  • Parameters: Attributes that define an experiment, expressed as key-value pairs (e.g. category, version)

Use the Cortex-python Library to Manage Experiments

The instructions and descriptions below are for the most commonly used methods in the Cortex-python library.

NOTE: You can use ExperimentClient() directly as an alternative to all of the experiment methods listed below.
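
For context, a minimal sketch of constructing that client is shown below; the import path and constructor signature are assumptions based on common cortex-python layouts, so verify them against your installed version:

    from cortex import Cortex
    from cortex.experiment import ExperimentClient  # import path is an assumption; check your version

    client = Cortex.client()
    experiment_client = ExperimentClient(client)  # constructor signature is an assumption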

Prerequisites

Run and Save Model Experiments

From your IDE:

  1. Import the Cortex-python library.

    from cortex import Cortex
  2. Create the Cortex client context

    client = Cortex.client()

    Pass in the project ID, auth token, and API endpoint as comma-delimited arguments:

    (e.g. cortex_client = Cortex.client(project=input_msg.projectId, token=input_msg.token, api_endpoint=apiEndpoint))

    The token can be obtained by following the instructions in Access API.

  3. Name your model experiment.

    experiment = client.experiment()

    Pass in the experiment name (e.g. experiment = client.experiment('sample_experiment')).

    caution

    Names must be alphanumeric, beginning with a letter and ending with a letter or number. In between, dashes and underscores are allowed; no other special characters can be used.

  4. Get the model (pickle file).

    f_obj = open()

    Pass in the local file path and the file mode as comma-delimited arguments. In the example, "rb" opens the file for reading in binary format.

    Example:

    f_obj = open("/Users/swinchester/Documents/cs_projects/skills_tutorial/experiments_example/model/model.pickle", "rb")
  5. Create an experiment run with an artifact name.

    with experiment.start_run() as run:
        run.log_artifact()

    Pass in the artifact name and the file object. The logging calls in the following steps are made on the run object inside this block.

    Example:

    with experiment.start_run() as run:
        run.log_artifact("model", f_obj)
  6. Log metrics if required for the experiment.

    run.log_metric()

    Pass in each metric as a comma-delimited "key", "value" pair.

    Example:

    run.log_metric("precision", "0.5")
    run.log_metric("accuracy", "0.8")
    run.log_metric("loss", "0.05")
  7. Log parameters if required for the experiment.

    run.log_param()

    Pass in each parameter as a comma-delimited "key", "value" pair.

    Example:

    run.log_param("category", "Financial Model")
    run.log_param("version", "1")
    run.log_param("SourceData", "Upstream Server Data")
  8. Log metadata if required for the experiment.

    run.set_meta()

    Pass in each metadata item as a comma-delimited "key", "value" pair.

    Example:

    run.set_meta("modelType", "Catboost Model")
  9. Close the file.

    f_obj.close()
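
Putting the steps together, a minimal end-to-end sketch looks like this (the file path and logged values are the illustrative placeholders from the steps above):

    from cortex import Cortex

    # Create the Cortex client context; see Access API for obtaining a token.
    client = Cortex.client()

    # Name the experiment.
    experiment = client.experiment('sample_experiment')

    # Open the trained model (pickle file) in binary mode.
    f_obj = open("model.pickle", "rb")  # illustrative local path

    # Create a run, then log the artifact, metrics, parameters, and metadata.
    with experiment.start_run() as run:
        run.log_artifact("model", f_obj)
        run.log_metric("precision", "0.5")
        run.log_metric("accuracy", "0.8")
        run.log_metric("loss", "0.05")
        run.log_param("category", "Financial Model")
        run.log_param("version", "1")
        run.log_param("SourceData", "Upstream Server Data")
        run.set_meta("modelType", "Catboost Model")

    # Close the file.
    f_obj.close()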

Compare Model Experiments

The commands below demonstrate a variety of ways you can retrieve different kinds of model experiment data in order to compare runs when you are developing models.

Get the last run of an experiment

print(experiment.last_run())

Get model details from an experiment with the artifact name

run = experiment.last_run()
model = run.get_artifact()
print()

Pass in the artifact name and the key/name of the details you want to retrieve:

  • model name

  • metadata

  • metrics

  • parameters

  • Example: Get the model (where the artifact name is "model")

    run = experiment.last_run()
    model = run.get_artifact('model')
    print(model)
  • Example: Get metadata (where the metadata key is "modelType")

    modelType = run.get_meta("modelType")
    print("modelType:", modelType)
  • Example: Get metrics (where the metric key is "precision")

    precision = run.get_metric("precision")
    print("precision:", precision)
  • Example: Get parameters (where the parameter key is "category")

    category = run.get_param("category")
    print("category:", category)

List Experiment Runs

To get a list of all experiment runs:

experiment.runs()
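
To compare runs side by side, you can iterate over the list and print each run's logged metrics. The sketch below reuses get_metric from above; the run identifier attribute is an assumption to verify against your version:

    # Print one metric per run so runs can be compared at a glance.
    for run in experiment.runs():
        print(run.id, run.get_metric("precision"))  # run.id is an assumption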

Delete Experiment Runs

To delete an experiment's runs, use the following:

experiment.reset()

To delete the experiment itself:

experiment._client.delete_experiment(experiment_name="sample_experiment", project=project)

Use the Fabric CLI to Manage Experiments

The Fabric CLI Experiment commands are used primarily by developers for managing experiments that have been run by data scientists.

CLI commands are NOT available for all model experiment activities. To invoke a model experiment run or update experiment metrics, metadata, or parameters, you must use the Cortex-python library or OpenAPI methods.

caution

Experiments CLI commands may not be optimized for Fabric v6. If you encounter problems, use the Cortex-python library or OpenAPI methods described herein.

Prerequisites

CLI Commands

Developers can use the SENSA Fabric CLI to do the following. (For command syntax and options, go to the Experiments section of the CLI Reference Guide.)

  • List experiments
  • List experiment runs
  • View experiment details
  • View experiment run details
  • Download experiment output (artifact) records
  • Delete experiments
  • Delete experiment runs

Use OpenAPI to Manage Model Experiments

The OpenAPI spec for Experiments provides all of the available Experiment API methods.

Prerequisites

OpenAPI methods

Experiment API methods allow users to:

  • Invoke experiment runs
  • Save experiments
  • Save experiment run metadata
  • Save experiment run parameters
  • Save experiment run metrics
  • List experiments
  • List experiment runs
  • View experiment details
  • View experiment run details
  • View experiment run output (artifact) details
  • Update experiment run details
  • Update experiment run metadata
  • Update experiment run parameters
  • Update experiment run metrics
  • Update experiment run output (artifact) details
  • Download experiment output (artifact) records
  • Delete experiments
  • Delete experiment runs
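
As a hedged illustration of calling the API directly, the sketch below lists experiments over HTTP. The route shown is an assumption modeled on other Fabric v4 endpoints; confirm the exact path, parameters, and response shape in the Experiments OpenAPI spec:

    import requests

    api_endpoint = "https://api.example.com"  # your Fabric API endpoint
    project = "my-project"                    # your project ID
    token = "<JWT>"                           # obtained per Access API

    # Assumed route; verify against the Experiments OpenAPI spec.
    url = f"{api_endpoint}/fabric/v4/projects/{project}/experiments"
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    print(resp.json())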