
Build Daemons

Step-by-step instructions for defining Skills of all types are found here. This page contains details specific to developing daemon-type Skills.

A daemon is a long-running HTTP server that listens for incoming requests. A daemon can be invoked as the action of a Skill.

When to Use Daemons

  • If you already host a RESTful service and want a Skill's action to be an endpoint of that server
  • If you have an action that returns responses in less than 30 seconds (that is, synchronously)
  • If you need an action triggered in response to a request, use a daemon instead of a job, which is best used for long-running batch computing processes.

How to Create a Daemon

Implementing a daemon for use in a Skill requires the following steps:

  1. Create a Docker image that runs the REST service and push it to a Docker registry the Kubernetes cluster can pull from (either self-hosted or external, such as DockerHub).
  2. Deploy a daemon action to Fabric.
  3. Route inputs to the daemon action in the Skill definition.
  4. (Optional) Define Skill properties to override the default daemon path and method.

Daemon Requirements

  • You can use any technology you prefer to implement the web server (for example, Nginx or Flask; a minimal Flask sketch follows this list).
  • Implement an HTTP endpoint that listens for requests in the daemon. It can listen on any port and use any path.
  • By default, Fabric assumes the web server has an invoke endpoint, but this value can be overridden in the Skill definition that invokes the daemon.
  • Every return message from the daemon must be a JSON object with a payload key so that Fabric can process it. (NOTE: If a payload is not provided, the response defaults to an empty '{}' payload.)
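
The generated template (shown later on this page) uses FastAPI, but any framework that satisfies these requirements works. As an illustrative sketch only, a hypothetical Flask app meeting the same contract could look like this (the file name, path, and port are placeholders, not part of the generated template):

from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route('/invoke', methods=['POST'])
def invoke():
    request_body = request.get_json(force=True)
    # Fabric sends the Skill input under the "payload" key and expects the
    # response to be a JSON object that also contains a "payload" key.
    payload = request_body.get('payload', {})
    return jsonify({'payload': payload})


if __name__ == '__main__':
    # The port and path must match the daemon.port and daemon.path properties
    # declared in the Skill definition.
    app.run(host='0.0.0.0', port=5000)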

Template Files

When you run the cortex workspaces generate command, or create a Skill in any other way, template files are generated. During the Skill Building lifecycle you may modify these files. The files below are generated for a simple daemon.

  • Dockerfile - Instructions for building the Docker image.
  • main.py - A starting action definition that uses the cortex-python library.
  • README.md - A getting-started helper for Skill collaboration or sharing.
  • requirements.txt - A list of dependency libraries.
  • skill.yaml - A starter Skill definition for the runtime selected.
  • message.json - Contains the payload needed to invoke the Skill.

The simple daemon file structure looks like this:

/Users/smichalski/daemon
└──┐
   ├─ skills
   │  └─ smmdaemon
   │     ├─ skill.yaml
   │     ├─ actions
   │     │  └─ smmdaemon
   │     │     ├─ requirements.txt
   │     │     ├─ main.py
   │     │     └─ Dockerfile
   │     └─ invoke
   │        └─ request
   │           └─ message.json
   └─ docs
      └─ smmdaemon
         └─ README.md
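
Of these files, message.json holds the request body used to invoke the Skill. Its exact contents depend on the input parameters you define; a hypothetical example that pairs with the skill.yaml shown later on this page might look like the following (the nested values are placeholders, and the wrapper shape of your generated file may differ):

{
  "payload": {
    "params": {
      "text": "hello"
    }
  }
}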

Define Daemon main.py

The main.py file provides the instructions for running (invoking) the action and the response.

For daemon actions you must define:

  • the input payload
  • the return payload

Daemon main.py


from fastapi import FastAPI

app = FastAPI()


@app.post('/invoke')
def run(request_body: dict):
    # Fabric sends the Skill input under the "payload" key; echo it back
    # inside a JSON object that also uses the "payload" key.
    payload = request_body['payload']
    return {'payload': payload}

Every return message from a daemon must be a JSON object with a payload key so that Fabric can process it.

Daemon return message structure:

{
  "payload": "..."
}
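
Before building an image, you can exercise the handler locally. The sketch below uses FastAPI's TestClient (this assumes FastAPI's test dependencies, such as httpx, are installed; the file name local_test.py and the example payload are illustrative, not part of the generated template).

Local test sketch (local_test.py)

from fastapi.testclient import TestClient

from main import app

client = TestClient(app)

# Post a request shaped the way Fabric would send it and verify the echo.
response = client.post('/invoke', json={'payload': {'text': 'hello'}})
assert response.status_code == 200
assert response.json() == {'payload': {'text': 'hello'}}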

Modify requirements.txt file

Add to the requirements.txt file any packages or libraries required to run the action defined in main.py. Versions may be pinned as well (see the second example below).

requirements.txt example

cortex-python
fastapi
uvicorn
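
If you want reproducible builds, you can pin exact versions. The version numbers below are purely illustrative; replace them with the versions you have tested against.

requirements.txt with pinned versions

cortex-python
fastapi==0.95.2
uvicorn==0.22.0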

Skill Definition

The Skill Definition provides a wrapper for the action (and optionally the model) and specifies the properties, parameters, and routing required to run.

The Skill is defined in the skill.yaml. For general information about Skills go to the Define Skills page.

Example daemon skill.yaml with action definition

camel: 1.0.0
name: smmdaemon
title: smmdaemon Title
description: smmdaemon Description

inputs:
  - name: request
    title: API Request
    parameters:
      - name: params
        type: object
        description: Request Parameters
        required: true
    routing:
      all:
        action: smmdaemon
        runtime: cortex/daemons
        output: response

outputs:
  - name: response
    title: API Response
    parameters:
      - name: result
        type: object
        description: API Response
        required: true

properties:
  - name: daemon.method
    title: Daemon method
    description: Update default value to HTTP method supported by endpoint
    required: true
    type: String
    defaultValue: POST
  - name: daemon.path
    title: Daemon path
    description: Update default value to HTTP endpoint path in container
    required: true
    type: String
    defaultValue: invoke
  - name: daemon.port
    title: Daemon Port
    description: Update default value to port on which app will be running
    required: true
    type: String
    defaultValue: 5000

actions:
  - name: smmdaemon
    type: daemon
    image: smmdaemon
    environmentVariables: '"TEST"="value"'
    command:
      - uvicorn
      - "main:app"
      - --host
      - "0.0.0.0"
      - --port
      - "5000"
      - --workers
      - "2"
    port: 5000
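
The environmentVariables entry on the action (TEST above) is meant to be exposed to the daemon's container at runtime. A minimal sketch of reading it from main.py, assuming it is injected as an ordinary environment variable:

import os

# Read the TEST variable declared under environmentVariables in skill.yaml;
# fall back to a placeholder when running outside Fabric.
test_value = os.environ.get('TEST', 'not-set')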

Skill properties for daemons

You must define the following Skill properties for daemons:

  • daemon.method: The HTTP verb. Default is POST.
  • daemon.path: The URL path to invoke. Default is invoke.
  • daemon.port: The port where the web service runs. Default is 5000.

Example daemon properties:

- name: daemon.method
  title: Daemon method
  description: Update default value to HTTP method supported by endpoint
  required: true
  type: String
  defaultValue: POST
- name: daemon.path
  title: Daemon path
  description: Update default value to HTTP endpoint path in container
  required: true
  type: String
  defaultValue: invoke
- name: daemon.port
  title: Daemon Port
  description: Update default value to port on which app will be running
  required: true
  type: String
  defaultValue: 5000

Save Skill and Build Skill Images

  1. Modify the Dockerfile. The Dockerfile directly invokes the executable: it sets a CMD that runs the Python web server when the container starts.

    For best practices on how to write a Dockerfile, see Dockerfile best practices.

    Default Dockerfile

    FROM c12e/cortex-python-lib:fabric6
    ADD . /app
    WORKDIR /app
    RUN pip install -r requirements.txt

    CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "5000", "--workers", "2"]
  2. Build a new local Docker image - FOR ALL WORKSPACE SKILLS.

    cortex workspaces build  

    This creates Docker images tagged <image-name>:<version>, where the image name matches the Skill name.

    To build a single Skill you may append --skill skillName to the command.

    Example

    cortex workspaces build --skill smmDaemon

Publish Skills to Image Repository and Deploy

To push/publish AND deploy ALL WORKSPACE SKILL IMAGES to an image registry that is connected to your Kubernetes instance, run:

cortex workspaces publish

Images published to a repository connected to your Fabric Kubernetes instance are also deployed automatically.

To publish a single Skill you may append --skill skillName to the command.

Example

cortex workspaces publish --skill smmDaemon