certifai.scanner.explanation_utils module

class certifai.scanner.explanation_utils.Explanation

Abstract base class for explanations. Different analyses produce different forms of explanation, so this class serves only as a type root

property contribution: Optional[float]

Return the contribution of this data point to the aggregate measure under analysis

class certifai.scanner.explanation_utils.IExplainedPrediction

Interface for prediction instance explanation

property explanation_type: certifai.engine.engine_api_types.ExplanationType

Returns the type of explanation (e.g. counterfactual, shap, …)

property prediction: Any

Return the prediction the model made for the instance to which this explanation pertains

property instance: numpy.ndarray

Return the instance data for the instance to which this explanation pertains

property field_names: List[str]

Return the field names, in the order in which they appear as columns in other structures such as the instance property or parts of the particular explanation

property explanation: certifai.scanner.explanation_utils.Explanation

Details of the explanation. The exact form will vary depending on what type of explanation analysis was run

property alternate: bool

If True then this explanation is an alternate explanation rather than the primary

class certifai.scanner.explanation_utils.ShapExplanation(class_expected_score: float, features_weights: numpy.ndarray, contribution: Optional[float] = None)

SHAP-specific explanation details

property class_expected_score: float

Expected model score for the class being predicted

Returns

baseline soft score from the model for examples of this class

property feature_weights: numpy.ndarray

SHAP values for each feature

Returns

numpy array of feature weights (same order as field_names in the parent IExplainedPrediction)

property contribution: Optional[float]

Return the contribution of this data point to the aggregate measure under analysis

class certifai.scanner.explanation_utils.Counterfactual(counterfactual_type: str, prediction: Any, data: numpy.ndarray, fitness: float)

A specific counterfactual instance

property counterfactual_type: str

Type of counterfactual

Current possible values are:
  • “prediction more beneficial”

  • “prediction less beneficial”

  • “prediction changed”

  • “prediction decreased” (regression tasks only)

  • “prediction increased” (regression tasks only)

Returns

human-readable string describing the type of change the counterfactual illustrates

property prediction: Any

Prediction from the model for this example

Returns

model predicted class or value

property data: numpy.ndarray

Example field values for this example

Returns

numpy array of feature values (same order as field_names in the parent IExplainedPrediction)

property fitness: float

Fitness for this counterfactual example. Within a given evaluation, larger values indicate ‘better’ counterfactuals (e.g. closer to the model decision boundary).

Returns

fitness value
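As a worked illustration of the fitness ordering, picking the best counterfactual from a set is simply a max over fitness. The class below is a stand-in for illustration only, not the library's Counterfactual:

```python
from dataclasses import dataclass
from typing import Any

import numpy as np

@dataclass
class FakeCounterfactual:
    """Stand-in mirroring Counterfactual's fields (illustrative only)."""
    counterfactual_type: str
    prediction: Any
    data: np.ndarray
    fitness: float

best_individuals = [
    FakeCounterfactual("prediction changed", "denied", np.array([30, 50_000]), 0.4),
    FakeCounterfactual("prediction changed", "granted", np.array([34, 62_000]), 0.9),
]

# Larger fitness means a 'better' counterfactual (e.g. closer to the decision boundary)
best = max(best_individuals, key=lambda cf: cf.fitness)
```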

class certifai.scanner.explanation_utils.CounterfactualExplanation(best_individuals: List[certifai.scanner.explanation_utils.Counterfactual], contribution: Optional[float] = None)

Counterfactual explanation details

property best_individuals: List[certifai.scanner.explanation_utils.Counterfactual]

Illustrative set of counterfactual instances for this example

Returns

list of Counterfactual

property contribution: Optional[float]

Return the contribution of this data point to the aggregate measure under analysis

class certifai.scanner.explanation_utils.ExplainedPrediction(input_features: numpy.ndarray, feature_names: Iterable[str], prediction: Any, explanation_type: certifai.engine.engine_api_types.ExplanationType, explanation: certifai.scanner.explanation_utils.Explanation, is_alternate: bool = False)

property explanation_type: certifai.engine.engine_api_types.ExplanationType

Returns the type of explanation (e.g. counterfactual, shap)

property prediction: Any

Return the prediction the model made for the instance to which this explanation pertains

property instance: numpy.ndarray

Return the instance data for the instance to which this explanation pertains

property field_names: List[str]

Return the field names, in the order in which they appear as columns in other structures such as the instance property or parts of the particular explanation

property explanation: certifai.scanner.explanation_utils.Explanation

Details of the explanation. The exact form will vary depending on what type of explanation analysis was run

property alternate: bool

If True then this explanation is an alternate explanation rather than the primary

static shap_explanation(input_features: numpy.ndarray, feature_names: Iterable[str], prediction: Any, explanation: certifai.scanner.explanation_utils.ShapExplanation) → certifai.scanner.explanation_utils.IExplainedPrediction
static counterfactual_explanation(input_features: numpy.ndarray, feature_names: Iterable[str], prediction: Any, explanation: certifai.scanner.explanation_utils.CounterfactualExplanation, is_alternate: bool = False) → certifai.scanner.explanation_utils.IExplainedPrediction
certifai.scanner.explanation_utils.extract_variant_sets(explanations, type_fn)
certifai.scanner.explanation_utils.explanations_from_list(explanations_list: List[dict], fields: List[str], include_alternates: bool = True)
certifai.scanner.explanation_utils.explanations(report: dict, model_id: Optional[str] = None, include_alternates: bool = False, base_path: Optional[str] = None) → Dict[str, Iterable[certifai.scanner.explanation_utils.IExplainedPrediction]]

Extract all the explanations from a scan output dictionary

Parameters
  • report (dict) – Dictionary produced by a scan

  • model_id (Optional[str]) – Optional model id to restrict to (default None)

  • include_alternates (bool) – If True, include alternate explanations as well as primary ones (default False)

  • base_path (Optional[str]) – Optional path to resolve explanations files relative to (default None). If omitted, the base path specified in the report is used

Returns

Dictionary keyed on model_id whose values are Iterable[IExplainedPrediction]

certifai.scanner.explanation_utils.explanations_dataframe_metadata_columns(prefix: Optional[str] = None, df: Optional[pandas.core.frame.DataFrame] = None)
certifai.scanner.explanation_utils.construct_explanations_dataframe(explanations_dict: Dict[str, Iterable[certifai.scanner.explanation_utils.IExplainedPrediction]], cf_type: Optional[str] = None, prefix: Optional[str] = None)

Returns a dataframe containing the original and counterfactual instances for the explained predictions

Parameters
  • explanations_dict – Dictionary of explained predictions. Can be created from a report using the ‘explanations’ function

  • cf_type – Optional counterfactual type, e.g. ‘prediction decreased’. Use to filter the results if desired

  • prefix – Optional prefix for metadata columns. Use if needed to avoid conflicts with feature names

Returns

DataFrame
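The resulting layout pairs each original instance with its counterfactual rows. A plain-pandas mock of that shape (the column names here are assumptions for illustration, not the exact columns the function emits):

```python
import pandas as pd

# Mock of the paired layout: each original row followed by its counterfactual(s).
# "kind" and the feature names are illustrative, not the function's real columns.
df = pd.DataFrame({
    "kind": ["original", "counterfactual", "counterfactual"],
    "age": [30, 34, 30],
    "income": [50_000, 50_000, 62_000],
})

# Restricting on a metadata column is the same pattern cf_type filtering follows
counterfactual_rows = df[df["kind"] == "counterfactual"]
```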

certifai.scanner.explanation_utils.construct_shap_dataframe(explanations_dict: Dict[str, Iterable[certifai.scanner.explanation_utils.IExplainedPrediction]], prefix: Optional[str] = None)

Returns a dataframe containing the Shapley values for the explained predictions

Parameters
  • explanations_dict – Dictionary of explained predictions. Can be created from a report using the ‘explanations’ function

  • prefix – Optional prefix for metadata columns. Use if needed to avoid conflicts with feature names

Returns

DataFrame

certifai.scanner.explanation_utils.drop_metadata_columns(explanations_df: pandas.core.frame.DataFrame, is_shap: bool = False, prefix: Optional[str] = None)

Drops metadata columns from an explanations dataframe

Given an explanations dataframe created using construct_shap_dataframe or construct_explanations_dataframe, returns a dataframe containing only the values of the features and not the additional metadata.

Parameters
  • explanations_df – Dataframe of explained predictions. Can be created from a report using the ‘explanations’ function.

  • is_shap – Set to True if dataframe is shap values (default False).

  • prefix – Optional prefix for metadata columns. Use if needed to avoid conflicts with feature names.

Returns

DataFrame
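The operation amounts to dropping the non-feature columns; a minimal pandas sketch, assuming illustrative metadata column names:

```python
import pandas as pd

# Illustrative explanations dataframe mixing metadata and feature columns
df = pd.DataFrame({
    "model": ["m1", "m1"],       # metadata (illustrative name)
    "row": [0, 0],               # metadata (illustrative name)
    "age": [30, 34],             # feature
    "income": [50_000, 62_000],  # feature
})

metadata_columns = ["model", "row"]  # assumed metadata column names
features_only = df.drop(columns=metadata_columns)
```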

certifai.scanner.explanation_utils.counterfactual_feature_frequency(explanations_df: pandas.core.frame.DataFrame, cf_type: Optional[str] = None, prefix: Optional[str] = None)

Returns a series of features and their frequency of change

Given a dataframe of counterfactual explanations created using construct_explanations_dataframe, returns a series mapping each feature to how frequently it changed. If an explanation has multiple counterfactuals, only the first is considered.

Parameters
  • explanations_df – Dataframe of counterfactual explanations

  • cf_type – Optional counterfactual type, e.g. ‘prediction decreased’. Use to filter the results if desired

  • prefix – Optional prefix for metadata columns. Use if needed to avoid conflicts with feature names

Returns

Series
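The underlying computation (comparing each original to its first counterfactual and counting per-feature changes) can be sketched in plain pandas with illustrative data:

```python
import pandas as pd

# Illustrative originals and their first counterfactuals, aligned by row
originals = pd.DataFrame({"age": [30, 45], "income": [50_000, 80_000]})
counterfactuals = pd.DataFrame({"age": [30, 50], "income": [62_000, 80_000]})

# Boolean mask of which cells changed, then fraction of rows changed per feature
changed = originals.ne(counterfactuals)
frequency = changed.mean()
```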

certifai.scanner.explanation_utils.shap_feature_weights(explanations_df: pandas.core.frame.DataFrame, prefix: Optional[str] = None)

Returns a series containing the mean shap values

Given a dataframe of shap explanations created using construct_shap_dataframe, returns a series containing the mean shap value for each feature (e.g. for a global interpretation).

Parameters
  • explanations_df – Dataframe of shap explanations

  • prefix – Optional prefix for metadata columns. Use if needed to avoid conflicts with feature names

Returns

Series
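The aggregation itself is a per-feature mean; a minimal sketch with illustrative per-instance shap values:

```python
import pandas as pd

# Illustrative per-instance shap values, one column per feature
shap_df = pd.DataFrame({"age": [0.2, 0.4], "income": [-0.1, 0.3]})

# Mean shap value per feature, e.g. for a global interpretation
mean_weights = shap_df.mean()
```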

certifai.scanner.explanation_utils.counterfactual_changes(df: pandas.core.frame.DataFrame, prefix: Optional[str] = None)

Returns a dataframe containing the changes for a single explanation

The input dataframe should contain the original and counterfactuals for a single explanation with the same model and row values.

Parameters
  • df – Dataframe containing original and counterfactuals for an explanation

  • prefix – Optional prefix for metadata columns. Use if needed to avoid conflicts with feature names

Returns

DataFrame
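Conceptually, the changes are the cells where a counterfactual row differs from the original; a plain-pandas sketch with illustrative values:

```python
import pandas as pd

# One original row followed by its counterfactuals (illustrative values)
group = pd.DataFrame({"age": [30, 30, 34], "income": [50_000, 62_000, 50_000]})

original = group.iloc[0]
cfs = group.iloc[1:]

# Keep only the cells that differ from the original; unchanged cells become NaN
changes = cfs.where(cfs.ne(original))
```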