Version: 1.3.15

Red Hat OpenShift Installation

Installation prerequisites

  • An OpenShift 4.2+ deployment
  • A Google Cloud Storage, Azure Blob Storage, or S3-compatible storage account where Cortex Certifai scan results will be stored
    • You can use the object store included in OpenShift Container Storage (e.g. NooBaa, Ceph) as an S3-compatible storage account
    • Create the storage account and capture the associated credentials
      • For S3 compatible storage, capture the endpoint, access key, secret key, and storage path
      • For Google Cloud Storage, capture the JSON content of the application credentials file for a service account obtained from GCP
      • For Azure blob storage, capture the account key, account name, and SAS token
  • A namespace/OpenShift project where you will install Cortex Certifai
    • In the "OpenShift Console" go to "Projects" and "Create Project"
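
  If you prefer the CLI, the same project can be created with a ProjectRequest object. This is a sketch only; the project name certifai is a placeholder, and it is equivalent to clicking "Create Project" in the console (or running oc new-project certifai):

    # Sketch: CLI equivalent of "Create Project" in the OpenShift Console.
    # The project name "certifai" is a placeholder.
    # Apply with: oc create -f project.yaml
    apiVersion: project.openshift.io/v1
    kind: ProjectRequest
    metadata:
      name: certifai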

Install Certifai from the Red Hat Marketplace

  1. In the Red Hat Marketplace, either purchase the Cortex Certifai Operator or enroll in its free trial.

    RHOS Marketplace 1

  2. After purchasing the trial/plan of the Cortex Certifai Operator, navigate to the operator page under My Software in the Workspace dropdown menu. You should see the following four tabs on the page:

    • Overview
    • Operators
    • Documentation
    • Support

    Click Install Operator under the Operators tab to begin.

    RHOS Marketplace 1.5

  3. On the Install Operator page select the namespace for this installation from the dropdown. (Namespaces are created by system admins as a prerequisite step.) The other options are set to defaults as follows:

    • Update Channel = "stable"
    • Approval Strategy = "automatic"

    RHOS Marketplace 2

  4. Click Install.

  5. After the installation is complete, you should see that the status of your cluster is Up to date. Next, navigate to the cluster console. You can reach the console by clicking the menu icon at the far right of the row with the Namespace Scope you selected in Step 3.

    RHOS Marketplace 2

  6. Navigate to the Installed Operators page, under the Operators tab on the left navigation panel. The page will display a list view of operators you've installed. Click the name of the operator you just added.

    RHOS Installed Operators

  7. A page opens that displays an overview of your operator and six other tabs:

    • YAML
    • Subscription
    • Events
    • All Instances
    • Cortex Certifai Operator
    • CertifaiScan Controller

    RHOS Operator Details

  8. Go to the tab labeled Cortex Certifai Operator, and click Create Certifai.

    RHOS Operator Create Instance

  9. You can configure the Certifai Operator instance by filling in the information under either the "Form View" or the "YAML View". Select "YAML View" for full control of the object creation.

    RHOS Operator Configure Instance

    The page will have a window with the following YAML config:

    apiVersion: cortex.cognitivescale.com/v1
    kind: Certifai
    metadata:
      name: default-certifai
      namespace: certifai-test01
    spec:
      console:
        authorization-type: none
        azure:
          account-key: your account key
          account-name: your account name
          sas-token: your SAS token
        gcs:
          application-credentials: |
            {
              "type": "",
              "project_id": "",
              "private_key_id": "",
              "private_key": "",
              "client_email": "",
              "client_id": "",
              "auth_uri": "",
              "token_uri": "",
              "auth_provider_x509_cert_url": "",
              "client_x509_cert_url": ""
            }
        replicas: 1
        route-type: none
        s3:
          access-key: your access key
          endpoint: 'https://s3.amazonaws.com'
          secret-key: your secret key
          verify-cert: false
        scan-dir: ''
      policy:
        authorization-type: none
        azure:
          account-name: your account name
          account-key: your account key
          sas-token: your SAS token
        gcs:
          application-credentials: |
            {
              "type": "",
              "project_id": "",
              "private_key_id": "",
              "private_key": "",
              "client_email": "",
              "client_id": "",
              "auth_uri": "",
              "token_uri": "",
              "auth_provider_x509_cert_url": "",
              "client_x509_cert_url": ""
            }
        enabled: false
        questionnaire-dir: ""
        replicas: 1
        route-type: none
        s3:
          access-key: "your access key"
          secret-key: "your secret key"
          endpoint: "https://s3.amazonaws.com"
          verify-cert: false
      deployment-type: openshift
      scan-manager:
        enabled: false
        replicas: 1
        scan-data-dir: ""
        ingress-config:
          proxy-read-timeout: "180"
      dex:
        connector:
          add-config: |
            orgs:
            - name: my-org
          client-id: clientid
          client-secret: clientsecret
          name: Github
          type: github
        enabled: false
        replicas: 1
      reference-model:
        enabled: true
      reporting:
        db-conn-str: 'postgresql://user:password@service:port/db'
        enabled: true
        period: '*/15 * * * *'
  10. Edit the parameters in the YAML configuration.

Required Changes:

  • spec/console/scan-dir must be set to a remote storage path. It is only necessary to provide a single set of credentials, matching the remote storage type of the console's scan directory.
  • If Scan Manager is enabled (spec/scan-manager/enabled is true), then spec/scan-manager/scan-data-dir must be set to a remote storage path. Note that Scan Manager reads the credentials and other deployment configurations from the spec/console section.
  • If the spec/console/scan-dir / spec/scan-manager/scan-data-dir is a GCS path, you only need to fill in the spec/console/gcs/application-credentials field.
  • If the spec/console/scan-dir / spec/scan-manager/scan-data-dir is an S3 compatible storage path, you only need to fill in the S3 related credentials:
    • spec/console/s3/access-key
    • spec/console/s3/secret-key
    • spec/console/s3/endpoint
  • If the spec/console/scan-dir / spec/scan-manager/scan-data-dir is an Azure blob storage path, you only need to fill in the Azure related credentials:
    • spec/console/azure/account-name
    • spec/console/azure/account-key
    • spec/console/azure/sas-token
  • If policy is enabled (spec/policy/enabled is true) and the spec/policy/questionnaire-dir is set to a remote storage path, then it is only necessary to provide a single set of credentials, matching the remote storage type of the questionnaire directory.

    • If the questionnaire directory is a GCS path, you only need to fill in the spec/policy/gcs/application-credentials field.
    • If the questionnaire directory is an S3 compatible storage path, you only need to fill in the S3 related credentials:
      • spec/policy/s3/access-key
      • spec/policy/s3/secret-key
      • spec/policy/s3/endpoint
    • If the questionnaire directory is an Azure blob storage path, you only need to fill in the Azure related credentials:
      • spec/policy/azure/account-name
      • spec/policy/azure/account-key
      • spec/policy/azure/sas-token
  • If reporting is enabled (spec/reporting/enabled is true), then a valid Postgres connection string must be provided in the spec/reporting/db-conn-str field.

  • It is strongly recommended you review the spec/console/route-type field, and similarly the spec/policy/route-type field if policy is enabled, and set it to the appropriate option for your environment.

    NOTE: In RHOS when route-type is set to oauth, authentication is through OpenShift, rather than Dex.

  • For RHOS authorization, set the following fields to "rbac" if route-type is set to "oauth" for both console and policy:

    • spec/console/authorization-type
    • spec/policy/authorization-type

    NOTE: If Scan Manager is enabled, you must manually re-create a ConfigMap before applying the Certifai CR. Delete the existing Scan Manager ConfigMap:

kubectl delete cm certifai-scan-manager -n <NAMESPACE>

Save the following YAML contents into a file named scan-manager-cm.yaml:

apiVersion: v1
data:
  config: |
    scan-config:
      default:
        parallel: 1
        cpu-req: "1000m"
        mem-req: "500Mi"
kind: ConfigMap
metadata:
  name: certifai-scan-manager

Then apply this file to your cluster:

kubectl apply -f scan-manager-cm.yaml -n <NAMESPACE>
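
Putting the required changes together, here is a minimal sketch of an edited CR for a console backed by S3-compatible storage. The namespace, bucket path, and credentials are placeholder values, and fields not shown are assumed to keep the defaults from the generated YAML above:

    apiVersion: cortex.cognitivescale.com/v1
    kind: Certifai
    metadata:
      name: default-certifai
      namespace: certifai                         # placeholder: the namespace you selected
    spec:
      deployment-type: openshift
      console:
        route-type: https                         # review for your environment
        authorization-type: none
        scan-dir: s3://my-certifai-bucket/reports # placeholder bucket/path
        s3:
          access-key: "MY-ACCESS-KEY"             # placeholder credentials
          secret-key: "MY-SECRET-KEY"
          endpoint: "https://s3.amazonaws.com"
          verify-cert: false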

Below is a table with descriptions of each parameter:

| Parameter | Description | Example |
| --- | --- | --- |
| apiVersion | APIVersion defines the versioned schema of this representation of an object. Should NOT be changed by users. | cortex.cognitivescale.com/v1 |
| kind | Identifies the package type you are installing. | (Always) Certifai |
| metadata/name | The name of your installation as configured. | default-certifai |
| metadata/namespace | The namespace you selected in Step 3 above. | certifai |
| spec/deployment-type | Leave it set to openshift. This guide is for OpenShift, so this is the only option that works here. | default = openshift |
| spec/dex/enabled | Enable Dex as an authentication provider to access the Certifai Console and remote CLI operations. | default = false; set this to true |
| spec/dex/connector/type | Dex connector type. Should be one of the options described at https://dexidp.io/docs/connectors/ | github |
| spec/dex/connector/name | The name of the Dex connector. Can be set to a sane value of your choice. Required field when Dex is enabled. | your-dex-connector-name |
| spec/dex/connector/client-id | OAuth app client ID for the Dex connector of your choice. Required field when Dex is enabled. | your-oauth-provider-client-id |
| spec/dex/connector/client-secret | OAuth app client secret for the Dex connector of your choice. Required field when Dex is enabled. | your-oauth-provider-client-secret |
| spec/dex/connector/add-config | Additional configuration (YAML) you may want to pass on to Dex, including a specific organization, teams, etc. Refer to the documentation for connectors. All fields, excluding client ID, client secret, and redirectURI, may be specified in this section. Optional field when Dex is enabled. | orgs:<br>- name: organization-with-certifai-access |
| spec/console/replicas | The number of console instances you want your organization to run concurrently. | default = 1 |
| spec/console/route-type | Console access options are:<br>none (default): no authentication is required to open the Console<br>http: unsecured, for a closed network<br>https: secured, for the internet<br>oauth: enables login with RHOS credentials or Dex connector credentials | default = none |
| spec/console/authorization-type | If route-type is "oauth" and you want to control access to the Certifai Console, set this field to rbac. | default = none |
| spec/console/scan-dir | A path-like string prefixed with gs:// for GCS storage, abfs:// for Azure Storage Accounts, and s3:// for S3/Ceph/NooBaa storage accounts. | s3://certifai-tes01/reports |
| spec/console/gcs/application-credentials | JSON content of the application credentials file for a service account obtained from GCP. This needs to be a JSON key and not P12. | application-credentials |
| spec/console/s3/access-key | The S3 (or Ceph) access key you configured during your infrastructure setup. | access-key |
| spec/console/s3/endpoint | The S3 (or NooBaa/Ceph) endpoint you configured during your infrastructure setup. | s3.amazonaws.com |
| spec/console/s3/secret-key | The S3 (or Ceph) secret key you configured during your infrastructure setup. | s3secret1234 |
| spec/console/s3/verify-cert | Whether or not to verify SSL certificates from the S3 client. If false, SSL will still be used, but certificates will not be verified. | default = false |
| spec/console/azure/account-name | The name of your Azure Storage Account with Blob containers. | account-name |
| spec/console/azure/account-key | An account key for a Blob container present in the account-name referenced above. | account-key |
| spec/console/azure/sas-token | A SAS token for a Blob container present in the account-name referenced above. | sas-token |
| spec/policy/replicas | The number of policy instances you want your organization to run concurrently. | default = 2 |
| spec/policy/route-type | Policy access options are:<br>none (default): no authentication is required to open the policy<br>http: unsecured, for a closed network<br>https: secured, for the internet<br>oauth: enables login with RHOS credentials or Dex connector credentials | default = none |
| spec/policy/authorization-type | If route-type is "oauth" and you want to control access to the Certifai Policy, set this field to rbac. | default = none |
| spec/policy/questionnaire-dir | A path-like string prefixed with gs:// for GCS storage, abfs:// for Azure Storage Accounts, and s3:// for S3/Ceph/NooBaa storage accounts. | s3://certifai-tes01/reports |
| spec/policy/gcs/application-credentials | JSON content of the application credentials file for a service account obtained from GCP. This needs to be a JSON key and not P12. | application-credentials |
| spec/policy/s3/access-key | The S3 (or Ceph) access key you configured during your infrastructure setup. | access-key |
| spec/policy/s3/endpoint | The S3 (or NooBaa/Ceph) endpoint you configured during your infrastructure setup. | s3.amazonaws.com |
| spec/policy/s3/secret-key | The S3 (or Ceph) secret key you configured during your infrastructure setup. | s3secret1234 |
| spec/policy/s3/verify-cert | Whether or not to verify SSL certificates from the S3 client. If false, SSL will still be used, but certificates will not be verified. | default = false |
| spec/policy/azure/account-name | The name of your Azure Storage Account with Blob containers. | account-name |
| spec/policy/azure/account-key | An account key for a Blob container present in the account-name referenced above. | account-key |
| spec/policy/azure/sas-token | A SAS token for a Blob container present in the account-name referenced above. | sas-token |
| spec/scan-manager/replicas | The number of scan manager instances you want your organization to run concurrently. | default = 1 |
| spec/scan-manager/scan-data-dir | A path-like string prefixed with gs:// for GCS storage, abfs:// for Azure Storage Accounts, and s3:// for S3/Ceph/NooBaa storage accounts. | s3://certifai-tes01/data |
| spec/scan-manager/ingress-config/proxy-read-timeout | Proxy read timeout for connections to the upstream servers (optional). | "180" (seconds) |
| spec/reference-model/enabled | Boolean. Enables or disables the reference model server that is added to your installation. Users may disable the reference model at any time to remove it from the installation and save resources. | default = true |
| spec/reporting/enabled | Boolean. Enables or disables the reporting ETL job that is added to your installation. Users may disable the reporting ETL job at any time to remove it from the installation and save resources. | default = true |
| spec/reporting/period | Cron time string describing how often the reporting ETL job should run. The default is every 15 minutes. | default = */15 * * * * |
| spec/reporting/db-conn-str | PostgreSQL connection string; the location the reporting ETL will load report data to. Required field when reporting is enabled. | postgresql://user:password@service:port/db |
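
For instance, if you enable Dex with its GitHub connector, the relevant portion of the CR might look like the sketch below. The OAuth app values and organization name are placeholders you would obtain when registering a GitHub OAuth app:

    spec:
      dex:
        enabled: true
        replicas: 1
        connector:
          type: github                                # one of the connector types documented at dexidp.io
          name: GitHub
          client-id: your-oauth-app-client-id         # placeholder
          client-secret: your-oauth-app-client-secret # placeholder
          add-config: |
            orgs:
            - name: organization-with-certifai-access # placeholder GitHub org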
  11. After you finish editing the YAML, click Create at the bottom of the page. You will then see a list of created Certifai Operator instances, including the one you just created.

    RHOS Operator List Instances

Verify Cortex Certifai Installation

Open the Cortex Certifai Console to verify the remote storage is connected.

To find your Console URL:

  • If the Route-Type option is http, https or oauth:

    • Click Networking in the left navigation panel and then Routes. The Certifai Console URL is displayed in the Location column.
    • Click the link to open your Console
    • If oauth is selected for route-type, accept the OpenShift prompts to grant permission during login.

    RHOS Navigate To Console

  • If the Route-Type option is none:

    • Click Networking in the left navigation panel and then Routes. Click Create Route, then fill in the prompts to create a route for the service "certifai-console".
    • After you create the route, the Certifai Console URL is displayed under Location.

    RHOS Create Route for Console

NOTE: If there are no existing Certifai scan reports within your remote storage path, you will see an error that no use cases could be found within the scan directory. You can set up example reports so that you can view sample and remote scan report visualizations in the remote Console.

Certifai Console No Reports Found

If you see a different error message, then verify that the Certifai Console's scan directory (spec/console/scan-dir) and corresponding authentication credentials are correct.

NOTE: If Scan Manager is enabled, you will have to set spec/console/console-url to the console URL you find from the step above and re-apply the Certifai CR.
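
For example, the addition might look like this. The route URL is a placeholder, and the exact placement of console-url under spec/console is an assumption based on the field path in the note above:

    spec:
      console:
        console-url: https://certifai-console-certifai.apps.example.com  # placeholder: your Console route URL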

Change Certifai configuration options

To make changes to the Console instance configuration:

  1. After the installation is complete, click Administration in the left navigation panel.

  2. Click Custom Resource Definition.

  3. Click Certifai in the resource list.

    RHOS Operator List CRDs

  4. On the Custom Resource Definition Details page open the Instances tab.

    RHOS Operator Select CRD Instance

  5. Click the name of the instance that you want to configure.

  6. Open the YAML tab.

    RHOS Operator Edit Configuration

  7. Find the configuration parameter you want to change, set it to the desired option, then click Save.

Log in to the OpenShift Kubernetes cluster

Prerequisite: Install the OpenShift CLI

Steps

  1. Log in to your OpenShift Console

    RHOS Console

  2. Click the User icon at the top right and select "Copy login command".

  3. In the browser window that opens click "Display token".

  4. Copy the code snippet that is displayed under "Login with this token".

    RHOS certs command

  5. Open a terminal window, paste the command you copied, and run the command.

    NOTE: If you are using a self-signed certificate, enter y when prompted to allow an insecure connection.

You are now logged in to your Cortex Certifai cluster.

Next steps

  1. Set up example reports so that you can view sample and remote scan report visualizations in the remote Console.

  2. Run scan jobs in RHOS and view result visualizations in the remote Console.