This page provides an overview of defining Skills for Cortex Fabric using the CLI.
The `cortex workspaces` command replaces `cortex generate skills`. Other commands used in the previous process continue to run; however, the process described from v6.3.0 forward is considered the best practice. The `cortex generate` process is documented in v6.2.2 and earlier.
Specifics about developing the different Skill types can be found on the following pages:
Cortex workspaces are collections of Skills and other resources used by Skills that are managed using the VS Code Developer Extension or on your local drive using the CLI during a development session.
When you choose to use the CLI, it is important to remember that the Skill files created are available only to you locally. If you wish to collaborate with other developers, you must export the files to a file-sharing and versioning system like GitHub.
When you publish the Skill, it is added to the Catalog and deployed to Fabric where those with access to your Fabric project can select it.
Developer created Skills serve three main purposes in Cortex Fabric:
- Skills are the computational components of an Agent. Skill parameters define the expected input connection, an action in the form of a Docker image, a payload, and a response.
- Skills are deployed as Mission Intervention runtime actions.
- Skills are run independently (as Agents).
Other types of Skills
In addition to the Skills you build and develop, Fabric also uses Skills in other ways.
- Internal Skills are used to perform platform operations like running Mission Simulations, ingesting data for Data Sources and Profiles, and other processes.
- System Skills may be selected to build Agent patterns, thereby accelerating development.
Skill Building Tools
There are three options for building Skills for Cortex Fabric:
- Use the VS Code Cortex Fabric Developer Extension.
- Use the CLI `cortex workspaces` process (described below).
- Use the Skill Builder in the Fabric Console. Currently, Skill Builder can only be used to build Skills with Model Experiments.
A Skill is composed of the following core elements:
Metadata that describes the Skill. Skill metadata is defined with CAMEL. At a minimum, it must include the Skill's name, title, CAMEL version, and at least one input that defines how to route input messages that the Skill receives when it is invoked. It also supports other optional fields, like description, outputs, and properties.
The runtime to execute when a Skill is invoked. Input messages sent to a Skill are processed based on the runtime type: job, daemon, or external API (`cortex/external-api`). The runtime type determines what happens when the Skill is invoked: jobs and daemons are routed to an action, while an external API runtime takes the input and routes it to an API endpoint.
- Daemons execute code and provide access to a web server to handle requests.
- Jobs execute long running or scheduled tasks (like training an ML model).
- External API routes inputs to external APIs, where you specify the API URL, path, headers, and method in the Skill definition.
The payload: the data that the runtime acts upon and returns a response to. (In the VS Code Extension and the `cortex workspaces` file structure, the payload is saved in the `message.json` file.)
Overview of CLI Skill Definition
Implementing a Skill requires you to:
- Generate a Skill file framework from a workspace template (local).
- Save the Skill definition and build the Skill image (local).
- Publish/push the Skill image to a Fabric container registry (add the Skill to the Fabric catalog).
- Deploy the Skill in a Fabric instance (happens automatically when the Skill is published).
- Invoke/run the Skill in your Fabric cluster.
- (Optional/recommended) Export the Skill to a source control repository like GitHub.
Prerequisites:
- Install the Cortex CLI.
- Authenticate to the Cortex CLI.
- Install and run Docker locally (to build images).
Overview of Skill generation steps
The list below provides only an overview of the Skill definition process. Specific steps are expanded upon in the content that follows.
Pre-development: Define the resources required by the Skill:
- Configure Fabric CLI workspaces to work with GitHub.
- Generate the Skill scaffolding in a local folder.
- (Best practice) Export the Skill scaffolding to source control.
- (Optional) Customize the Skill files using your desired IDE.
main.py: This file is run by the action image's entrypoint/command to handle the action's custom logic. Parameters depend on what the action is configured to do. For example:
- Request method
- Project ID
- Response parameters
requirements.txt: This file provides packages or libraries that the action requires.
skills.yaml: This file provides:
- input service name
- input parameters
- output service name
- output parameters
- Actions (one or more of the same type may be added to a Skill)
- image (automatically updated when the Skill is built)
- other info (e.g. daemon port)
message.json: This file provides the payload message, located in the "invoke" folder within the main Skill directory.
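The file list above can be illustrated with a minimal, hypothetical `main.py` for a job-style action. The parameter shape shown is an assumption based on common job templates, which pass parameters to the script as a JSON string in the first command-line argument; it is a sketch, not the template's exact contents.

```python
import json
import sys


def process(params: dict) -> dict:
    """Hypothetical action logic: echo the payload text back, upper-cased."""
    payload = params.get("payload", {})
    text = payload.get("text", "")
    return {"message": text.upper()}


if __name__ == "__main__":
    # Assumption: job actions receive their parameters as a JSON string in
    # argv[1]; daemon templates expose a web endpoint instead.
    params = json.loads(sys.argv[1]) if len(sys.argv) > 1 else {}
    print(json.dumps(process(params)))
```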
Save the Skill definition and build the Skill image.
Publish/push the Skill image to the Fabric container repository and deploy the Skill to a Fabric instance.
Test the Skill by adding a payload to the `message.json` file in the "invoke" folder and invoking the Skill.
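For example, a hypothetical `message.json` payload for a Skill whose action expects a `text` field (the field names are illustrative, not part of a fixed schema):

```json
{
  "payload": {
    "text": "hello world"
  }
}
```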
The Skill is added to the Cortex Fabric Catalog and is available for selection:
- In the Fabric Console's Campaign Designer tool - Select a Skill as an Intervention Action for runtime execution.
- In the Fabric Console's Agent Composer tool - Select Skills when building Agents.
Skills that are deployed may be invoked (run) either independently or within Agents.
Step 1: Configure workspaces
When you work with the VS Code Extension or the Cortex CLI workspaces, you must configure a GitHub repository. You may also configure a registry where the Skill image is stored and called during runtime.
- Configure a GitHub repository connection by running `cortex workspaces configure`.
- The first time you configure workspaces, a browser window opens with a code that you must enter in the CLI window.
- Enter a Template Repository URL if you wish to source templates from a repo different from the default.
- Enter a branch in the repo to source from if different from the default.
Step 2: Generate Skill scaffolding
Use the `cortex workspaces generate` CLI command to generate a Skill that includes a basic set of files for the runtime you select.
Run this Cortex CLI command in a directory where you want to generate the Skill files, for example inside a source code directory for a Git repo.
Using the CLI, run: `cortex workspaces generate [options] [skillName] [type]`
EXAMPLE: `cortex workspaces generate smmDaemon daemon`
`skillName` must follow Fabric naming conventions:
- 20 characters or fewer
- begins with a letter
- ends with a letter or number
- dashes and underscores are allowed in between
- no other special characters may be used
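The naming rules above can be captured in a quick validation sketch. This regex is an illustration of the stated rules, not an official Fabric validator.

```python
import re

# 20 characters or fewer, starting with a letter, ending with a letter or
# digit, with dashes and underscores allowed in between.
SKILL_NAME_RE = re.compile(r"^[A-Za-z](?:[A-Za-z0-9_-]{0,18}[A-Za-z0-9])?$")


def is_valid_skill_name(name: str) -> bool:
    """Return True if `name` satisfies the documented naming conventions."""
    return SKILL_NAME_RE.fullmatch(name) is not None
```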
- `--registry [registryURL]` - You may enter an override location for storing images.
- `--color [on/off]` - for JSON output
- `--notree` - suppresses display of the file tree created by the command. By default this is false and the tree is displayed.
- `--template [templateName]` - names the template you wish to use rather than selecting from a list.
- `-h` - displays the list of options that may be used with the command.
Template files are added to a directory based on the Skill type selected.
Step 3: Export Skill Definition
The Skill definition files that are created using workspaces are saved locally. You can run the Skills and test them locally, but in order to develop them collaboratively they must be exported to a file share repository such as GitHub.
Export your local Workspace files using the best practices for whatever version control and file hosting system you use.
Step 4: Customize the Skill Files
The Skill templates are written to run out-of-the-box. Optionally, you may customize the template files using any IDE.
Define the Skill main.py
The `main.py` file provides the instructions for running (invoking) the action and returning the response.
Modify the requirements.txt file
Add to the `requirements.txt` file any packages or libraries (versions may be added) required to deploy the action defined in `main.py`.
NOTE: The contents of this file vary based on your Skill and system preferences.
Configure the skill.yaml
The Skill provides a wrapper for the action (and, optionally, the model) that specifies the properties, parameters, and routing required to run.
The skill.yaml must be modified to configure the following (detailed in the sections below):
- Skill properties
- Input parameters and routes
- Output parameters
- (Optionally) Action definition (when the action is deployed with the Skill rather than independently)
See the Skill object reference documentation for information about the optional and required fields in a Skill definition.
- Skills have inputs and outputs.
- Skill inputs are routed to a runtime that processes the service messages received by the Skill: a job, a daemon, or an external API.
- Outputs may be used for Skill orchestration.
- Job and daemon runtimes require the name of a deployed action image OR an action definition to execute when the Skill is invoked in Fabric.
- Skills may be configured with multiple Input routes.
Skill Object reference
This is from the Skill definition section of the CAMEL spec:
Skill Object: Example
An example "Hello World" Skill is shown below:
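A minimal sketch of such a Skill definition, assuming the conventional skill.yaml layout (names, parameter fields, and the runtime value are illustrative):

```yaml
camel: 1.0.0
name: hello-world
title: Hello World
description: Minimal illustrative Skill definition.
inputs:
  - name: request
    title: Request
    parameters:
      - name: text
        type: string
    routing:
      all:
        action: hello-world      # illustrative action name
        runtime: cortex/daemons  # assumed daemon runtime identifier
        output: response
outputs:
  - name: response
    title: Response
    parameters:
      - name: message
        type: string
```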
Skill Object: Fixed Fields
- REQUIRED - CAMEL Specification Version
- REQUIRED - Resource Name
- REQUIRED - Resource Title
- OPTIONAL - Resource Description
- Tag Object - OPTIONAL - Resource Tags
- Property Object - OPTIONAL - An array of Skill properties
- Skill Input Object - REQUIRED - An array of Input Objects; at least one Input is required.
- Skill Output Object - OPTIONAL - An array of Output Objects
Skill Input Object
This object defines an input message used by the Skill.
Skill Input Object: Fixed fields
- Parameter Object | Reference Object - REQUIRED
Skill Output Object
This object defines an output message used by the Skill.
The Skill output provides flexible orchestration between Skills in an Agent.
A Skill may have one or more outputs associated with its runtime. When you define the Skill, those outputs are named.
Skill Output Object: Fixed fields
- Parameter Object | Reference Object - REQUIRED
Routing Object
This object defines the routing rules for a Skill input. Skills route Messages received on an Input to a Skill/action for processing and then to an Output.
A Skill MUST define at least one routing rule for each Input.
Messages can be routed based on properties or Message field values. The simplest form of routing is "all" routing, which routes all Messages received on a given Input to a single action.
Routing Object: Examples
ALL routing. Routes all messages to a single action.
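A sketch of an "all" routing block inside a Skill input (the action, runtime, and output names are illustrative assumptions):

```yaml
routing:
  all:
    action: hello-world      # route every message to this action
    runtime: cortex/daemons  # assumed runtime identifier
    output: response
```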
Property based routing
Field based routing
"All" Routing: Fixed fields
Property Routing: Fixed fields
- REQUIRED - The name of the property to apply routing rules to
- default - "All" Routing fixed fields - OPTIONAL - The default routing rule is used if no property matches are made
- Routing Rule Object - REQUIRED - List of routing rules to apply to the specified property value
Field Routing: Fixed fields
- REQUIRED - The name of the Message field to apply routing rules to
- "All" Routing fixed fields - OPTIONAL - The default routing rule used if no field matches are made
- Routing Rule Object - REQUIRED - List of routing rules to apply to the specified field value
Routing Rule Object
This object defines a routing rule to apply to a value that comes from either a Skill property or Message field value.
Routing Rule Object: Fixed fields
- The value to match
- REQUIRED - The Resource Name of the action to route to
- OPTIONAL - The Resource Name of the action runtime to use; the default runtime is assumed if not provided
- output - REQUIRED - The name of the Output to route to
Skill Properties Object
Skills support the addition of configurable properties. A Skill can have zero to any number of properties associated with it. Skill properties are set in the skill.yaml or in the Console: Studio Properties panel. When a Skill is invoked, the input message passed to the associated action includes each property field and the value set for the field.
Use the CAMEL property definition object to declare properties in a Skill.
Skill Properties Example
The example below has two properties:
- For the `model` property, users can select between two options, such as `Synchronous` for daemons.
- The `api_key` property is scoped to the `model` property: when `Synchronous` is selected as the `model`, the user can set the `api_key`; otherwise the `api_key` property cannot be set. It is also a secure property, which hides the API key value (see the guide about how to use secrets).
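A sketch of how such a property block might look in skill.yaml. Field names follow the fixed-field table below; the second `model` option and the exact scoping field name are not given in the source, so the second option is marked hypothetical and the scoping field is omitted:

```yaml
properties:
  - name: model
    title: Model
    description: Processing mode for the action.
    validValues:
      - Synchronous
      - Asynchronous   # hypothetical second option; the source does not name it
    defaultValue: Synchronous
  - name: api_key
    title: API Key
    description: Key for the downstream API.
    secure: true       # hides the value; users supply it via a Secret
```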
CAMEL Property definition object
The property definition object is used to declare a configurable property of a Skill (or any Fabric resource).
Property Definition Object: Fixed Fields
- REQUIRED - The unique name of the property within the resource. Tools and libraries MUST use the name to uniquely identify the property; therefore, it is RECOMMENDED to follow common programming naming conventions
- OPTIONAL - Default value is
- REQUIRED - Must be one of
- OPTIONAL - The default value for this property
- REQUIRED - An array of valid values for use with the
- OPTIONAL - Scopes this property to another property value by name. For example, for a property named
- OPTIONAL - Default value is
Property Value Object
The property value object is used to set a property.
Property Value Object: Fixed Fields
- The property name
- The property value
Mark a Skill property as secure
Set `secure: true` for a property to mark it as secure.
When set, Skill users must enter a variable key for the property to use the Skill. See Define Secrets for additional information.
When Skills are connected in Agent Composer, the outputs from the Skill are mapped to provide input data to other Skills in the Agent.
The Skill patterns page provides additional information about how Skills can be configured to work together in different patterns.
Step 5: Save Skill definitions and build Skill images
The Dockerfile provides the container or image that deploys at runtime. The following activities must take place each time the Skill Action is modified.
Modify the Dockerfile.
Dockerfiles directly invoke the executable: the Dockerfile sets a CMD (daemons) that allows the image to run the Python executable.
- For daemons
- [For jobs](/cortex-fabric/docs/build-skills/jobs#build-skill-images)
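A minimal Dockerfile sketch for a Python daemon action (the base image and file names are assumptions, not the template's exact contents):

```dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY main.py .
# Daemons use CMD so the image starts the web-server process on run
CMD ["python", "main.py"]
```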
Build a new local Docker image in the Skill directory that contains your Dockerfile.
Each Skill is packaged as a Docker image. To deploy an action, the Docker image must be pushed to an image repository that is accessible by the Kubernetes cluster where Cortex is deployed.
Make sure that you have Docker running on your machine.
Step 6: Publish and deploy Skill Image
Publish/push the Skill image to a container registry that is connected to your Kubernetes cluster. When you publish a Skill, it is automatically deployed.
- For daemons
- [For jobs](/cortex-fabric/docs/build-skills/jobs#publish-and-deploy-skill-image)
Step 7: Test the Skill
As a best practice, test a Skill:
- Modify and save the `message.json` file in the "invoke" folder.
- Invoke the Skill.
Undeploy and Redeploy Skills
Skills in your catalog can be "turned on" or "turned off" by running the `cortex skills deploy` or `cortex skills undeploy` CLI commands (or by calling the API directly).
Skills must be deployed before they can be added to Agents or Interventions and before they can be invoked.
The Skill remains in your Catalog even if it has been undeployed.
Later, you may redeploy the Skill by running the `cortex skills deploy` CLI command.
If you have made changes to the Skill definition, use `cortex skills save` to update it. If you have made changes to the action, you must rebuild the Docker image and push the new image to your registry before saving and redeploying the Skill.
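As a sketch of the deploy/undeploy cycle (the Skill name and `--project` flag are illustrative and may differ by CLI version):

```
cortex skills undeploy mySkill --project myProject
cortex skills deploy mySkill --project myProject
```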
Delete Skills
Deleting Skills is a protected action in Cortex. When you run the delete command, an Impact Assessment is run; if downstream dependencies are found, the deletion is not allowed and the resources using that Skill are listed. You must remove dependencies to unblock the delete action.
Skills cannot be deleted via the Fabric Console.
Skills may be deleted via the CLI by running:
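A hedged example of the delete command (the Skill name and `--project` flag are illustrative):

```
cortex skills delete mySkill --project myProject
```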
When Skills are deleted via the CLI an Impact Assessment is run that returns a list of artifacts that are affected by the deletion.
Retag a Docker Image
To retag a pre-existing Docker image run:
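The specific command is not shown in the source; the standard Docker commands for retagging and pushing an existing image are (image and registry names are placeholders):

```
docker tag my-skill:latest private-registry.example.com/my-skill:1.0.0
docker push private-registry.example.com/my-skill:1.0.0
```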