Version: 1.3.14.1

Prepare model endpoints for scanning

Prerequisites

  1. Download the Certifai toolkit

To scan your own model with Certifai, you must provide an endpoint that matches the expected predict API.
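
To illustrate that contract, the sketch below shows the request and response shapes in Python. The field names ("payload", "instances", "predictions") follow the pattern used by the example services and are assumptions rather than a formal specification; the feature values are placeholders, not real data:

    # A sketch of the predict API contract. Field names are assumptions drawn
    # from the example services; the feature values are placeholders.
    request_body = {
        "payload": {
            "instances": [
                # one list of feature values per row, in training-column order
                ["status-a", 6, 1169.0],
            ]
        }
    }
    response_body = {
        "payload": {
            "predictions": [1],  # one prediction per submitted instance
        }
    }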

Examples of using the Certifai Model SDK to package your Python models are provided in the models folder of the cortex-certifai-examples repository.
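
A service built with the Model SDK is typically only a few lines of Python. The following is a minimal sketch, assuming SimpleModelWrapper accepts a fitted model and serves it at /predict; the pickle filename and the constructor and run() arguments shown are illustrative assumptions, and app_dtree.py in the repository is the working reference:

    # Minimal sketch of wrapping a pickled model with the Certifai Model SDK.
    # The pickle filename and the SimpleModelWrapper arguments are
    # illustrative assumptions; see app_dtree.py for the exact code.
    import pickle

    from certifai.model.sdk import SimpleModelWrapper

    with open("german_credit_dtree.pkl", "rb") as f:  # filename illustrative
        model = pickle.load(f)

    app = SimpleModelWrapper(model=model)  # exposes POST /predict
    app.run(host="127.0.0.1", port=8551)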

Create a model endpoint using Certifai Model SDK

For development purposes, you can run your model in a local Flask application without using Docker.

This walkthrough uses the german_credit model from the cortex-certifai-examples repository.

  1. In a new terminal, clone the cortex-certifai-examples repository:

    git clone https://github.com/CognitiveScale/cortex-certifai-examples.git
  2. Go to the models/german_credit folder in the cloned cortex-certifai-examples repository.

    cd cortex-certifai-examples/models/german_credit
  3. Create a new conda environment.

    conda create -n model-server python=3.6
  4. Activate the conda environment.

    conda activate model-server
  5. Install the cortex-certifai-common and cortex-certifai-model-sdk packages from the Certifai toolkit. Replace certifai_toolkit in the following commands with the path where you unzipped the toolkit.

    On macOS or Linux:

    pip install certifai_toolkit/packages/all/cortex-certifai-common*
    pip install certifai_toolkit/packages/all/cortex-certifai-model-sdk*

    In Windows PowerShell:

    Get-ChildItem certifai_toolkit\packages\all\cortex-certifai-common* | ForEach-Object -Process { pip install $_ }
    Get-ChildItem certifai_toolkit\packages\all\cortex-certifai-model-sdk* | ForEach-Object -Process { pip install $_ }
  6. Train the models using the provided Python script.

    python train.py
  7. Run the prediction service for the Decision Tree model using the provided Python script.

    python app_dtree.py

    You should see server startup output that ends with something similar to the following:

    * Serving Flask app "certifai.model.sdk.simple_wrapper" (lazy loading)
    * Environment: production
    WARNING: This is a development server. Do not use it in a production deployment.
    Use a production WSGI server instead.
    * Debug mode: off

    Your prediction service is running at: http://127.0.0.1:8551/predict.
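
With the service running, you can send it a test request from another terminal. The sketch below uses a placeholder feature row; substitute a real row from the German Credit dataset, in the column order train.py uses:

    # Smoke-test the local prediction service. The instance is a placeholder;
    # a sensible prediction requires a real feature row in training order.
    import requests

    body = {"payload": {"instances": [["<feature-1>", "<feature-2>"]]}}
    resp = requests.post("http://127.0.0.1:8551/predict", json=body)
    print(resp.status_code, resp.text)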

Next steps

You are now ready to define and run scans locally.