Version: 1.3.14


**ATX**: AI Trust Index, a weighted average of trust factor scores from your scan outputs (robustness, fairness, one performance metric, and explainability).
**Baseline Model**: The model that is set (using the CLI) as the basis for comparison for the Use Case details visualization in the Console.
**Binary Classification**: A learning task type that, given a set of data elements, predicts which of two groups each element belongs to (e.g., loan granted or loan denied). One of the learning task types that Certifai evaluates models for.
**Burden**: The effort required to produce a positive outcome (via counterfactuals) for a feature group. If the burden for one group is very high while the burden for another group is low, the model may be treating the high-burden group unfairly.
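Burden comparisons across groups can be illustrated with a minimal sketch. The `group_burden` helper, the group labels, and the per-observation effort values below are all assumptions for illustration, not part of Certifai's API:

```python
from statistics import mean

def group_burden(efforts_by_group):
    """Average counterfactual effort per feature group (hypothetical helper).

    `efforts_by_group` maps each group label to the per-observation effort
    needed to flip that observation to a positive outcome.
    """
    return {group: mean(efforts) for group, efforts in efforts_by_group.items()}

# A large gap between groups may signal unfairness toward the high-burden group.
burden = group_burden({
    "group_a": [0.2, 0.3, 0.25],  # low effort to reach a positive outcome
    "group_b": [0.9, 1.1, 1.0],   # much higher effort: a possible fairness flag
})
```

Here `burden["group_b"]` is four times `burden["group_a"]`, the kind of disparity the definition above describes.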
**Certifai Enterprise edition**: A multi-user, server-hosted edition of Certifai used to run remote scans either locally or in a production environment.
**Certifai Pro edition**: A single-user, server-hosted edition of Certifai used to run scans either locally or remotely on a configured server.
**Certifai Toolkit edition**: A free local edition of Certifai used to run scans locally.
**Counterfactuals**: Alternative predictive data points that Certifai uses to infer an imaginary line representing the boundary between the possible outcomes produced by the model. Distance from the counterfactual to the line is used to calculate report values.
**Data Types**: Data types are assigned to dataset features during evaluation setup to define how the Certifai algorithm handles the observation values for that feature. Data type options are:

- Categorical - data that falls into categories like boolean data or data that is expressed as one of a small set of strings or numbers.
- Numerical-Int - data that is expressed as a whole number.
- Numerical-Float - data that is expressed as a decimal number.
**Dataset**: A set of data features (columns) with multiple sets (rows) of values, where each row of values constitutes an observation in Certifai. When a dataset is configured for evaluations, the following values are calculated and displayed for each feature:

- Min: The minimum value of the feature expressed in the dataset if the feature is numerical.
- Max: The maximum value of the feature expressed in the dataset if the feature is numerical.
- # of Categories: The number of categories expressed in feature values for the dataset if the feature is categorical.
- MAD: Median absolute deviation for the feature values in the given dataset if the feature is numerical.
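The per-feature values above can be sketched with stdlib helpers. The `feature_summary` function and its output shape are assumptions for illustration, not Certifai's actual API:

```python
from statistics import median

def feature_summary(values, numerical):
    """Summarize one dataset feature roughly as described above (sketch only).

    Numerical features report Min, Max, and MAD (median absolute deviation);
    categorical features report the number of distinct categories.
    """
    if numerical:
        med = median(values)
        mad = median(abs(v - med) for v in values)
        return {"min": min(values), "max": max(values), "mad": mad}
    return {"categories": len(set(values))}

numeric_stats = feature_summary([1, 2, 3, 4, 100], numerical=True)
category_stats = feature_summary(["yes", "no", "yes"], numerical=False)
```

Note that the MAD of `[1, 2, 3, 4, 100]` is 1, so the outlier 100 barely affects it; that robustness is why a median-based deviation is useful for counterfactual distance scaling.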
**Dataset Restrictions**: Data restrictions are configured for each feature of a dataset that you want to constrain when you assign scan definitions. The restriction options ensure that the algorithm follows the necessary rules when creating counterfactual values for the feature. Options are as follows:

- No restriction - which allows the data feature values to be used to generate counterfactual values without any limitations.
- No changes - which prevents counterfactual values from being generated for that feature.
- Min/Max - which allows counterfactual values to be generated within the range specified in the fields that are displayed after the Min/Max restriction selection.
- +/- % - which allows counterfactual values to be generated within a specified percentage (plus or minus) of the original feature value in the dataset. The percentage value field is displayed after this restriction option is selected.
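The four restriction options amount to simple validity checks on candidate counterfactual values. A minimal sketch, assuming a hypothetical `restriction` dict encoding (the keys and type names below are illustrative, not Certifai's configuration format):

```python
def satisfies_restriction(original, candidate, restriction):
    """Check a candidate counterfactual value against a feature restriction.

    `restriction` is a hypothetical dict such as {"type": "no_changes"},
    {"type": "min_max", "min": 0, "max": 10}, or {"type": "pct", "pct": 20}
    for a +/- 20% band around the original value.
    """
    kind = restriction["type"]
    if kind == "no_restriction":
        return True                      # any generated value is allowed
    if kind == "no_changes":
        return candidate == original     # feature must stay fixed
    if kind == "min_max":
        return restriction["min"] <= candidate <= restriction["max"]
    if kind == "pct":
        band = abs(original) * restriction["pct"] / 100
        return original - band <= candidate <= original + band
    raise ValueError(f"unknown restriction type: {kind}")
```

For example, with a +/- 10% restriction on an original value of 50, a candidate of 55 passes but 60 does not.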
**Evaluation Dataset**: Any dataset selected to run evaluation visualizations. A second, smaller dataset is defined for a scan and used to run the Explanations evaluation, because dataset size affects the amount of time required to run that report.
**Explainability**: Explainability measures the average simplicity of the counterfactual explanations provided for each model. An explanation that requires a single changed feature scores 100%; explanations that require more changed features score lower.
**Explainability Score**: The score that captures how few features must be changed, when assigning counterfactual values, to alter the predicted outcome for dataset observations. A single feature change is assigned a score of 100, and scores decrease as more feature values must be changed to alter the outcome.
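The core of this score is counting how many features differ between an observation and its counterfactual. The sketch below does that counting; the `explainability_score` decay (100 divided by the number of changed features) is purely a hypothetical stand-in, since the exact Certifai formula is not stated here:

```python
def changed_features(original, counterfactual):
    """List the features whose values differ between an observation and
    its counterfactual (both given as feature-name -> value dicts)."""
    return [f for f in original if original[f] != counterfactual[f]]

def explainability_score(original, counterfactual):
    """Hypothetical scoring: 100 for a single changed feature, decreasing
    as more features must change. Not the documented Certifai formula."""
    n = len(changed_features(original, counterfactual))
    return 100 / n if n else 100

obs = {"income": 30000, "age": 40, "debt": 5000}
cf = {"income": 45000, "age": 40, "debt": 5000}  # only income changed
```

Here only `income` differs, so this observation's explanation would receive the maximum score of 100.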
**Explanations**: Explanations display, through the generation of counterfactuals, the change that must occur in a dataset (given its restrictions) to obtain a different outcome. To alter an outcome, some dataset feature values must change while others remain constant. Each observation row of the dataset is displayed in a table that shows the changed features along with the original and counterfactual values for each. Users can explore the entire dataset one observation at a time to understand which features changed, and by how much, to obtain a different result.
**Fairness**: Fairness measures the difference in the change required to alter the outcome for the different groups implicit in a feature, given the same model and dataset. For example, the feature "gender" may contain the implicit groups male, female, and nonbinary. A fair model requires a similar amount of change for all three groups to alter the result.
**Feature Group**: Some dataset features are categorical rather than numerical. Any categorical feature contains groups; for example, the feature "gender" may have groups such as male, female, and nonbinary. Features with groups may be selected for Fairness evaluation across those groups.
**Gini Index**: The Gini index or Gini coefficient is a statistical measure of distribution developed by the Italian statistician Corrado Gini in 1912. It is often used as a gauge of economic inequality, measuring income distribution or, less commonly, wealth distribution among a population. The coefficient ranges from 0 (or 0%) to 1 (or 100%), with 0 representing perfect equality and 1 representing perfect inequality. Values over 1 are theoretically possible due to negative income or wealth.
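For nonnegative values, the Gini coefficient can be computed as the mean absolute difference between all pairs of values, normalized by twice the mean. A minimal sketch (the `gini` helper is illustrative, not a Certifai function):

```python
def gini(values):
    """Gini coefficient via the pairwise mean absolute difference,
    normalized by twice the mean: 0 = perfect equality among the
    values, (n-1)/n = maximal inequality for n values."""
    n = len(values)
    total = sum(values)
    if total == 0:
        return 0.0
    diff_sum = sum(abs(a - b) for a in values for b in values)
    return diff_sum / (2 * n * total)
```

An equal distribution like `[1, 1, 1, 1]` scores 0, while `[0, 0, 0, 1]` (one member holds everything) scores 0.75, the maximum for four values.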
**Learning Task Type**: A kind of machine learning problem, such as binary classification, multiclass classification, or regression.
**Mean Absolute Deviation (MAD)**: The mean absolute deviation of a dataset is the average distance between each data point and the mean. It measures the variability in a dataset.
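The definition above translates directly into code; this one-function sketch is illustrative only:

```python
from statistics import mean

def mean_absolute_deviation(values):
    """Average distance between each data point and the dataset mean."""
    m = mean(values)
    return mean(abs(v - m) for v in values)
```

For `[2, 4, 6, 8]` the mean is 5 and the distances are 3, 1, 1, 3, giving a mean absolute deviation of 2.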
**Multiclass Classification**: A learning task type where the model predicts which of a specified set of classes each data point belongs to. One of the learning task types that Certifai evaluates models for.
**Observations**: Each observation corresponds to a single record (row) from the assigned dataset; it is a set of feature values from the dataset.
**Parallel scan**: The ability to improve efficiency by running model/scan-type combinations simultaneously, enabled by specifying the -p parameter with a concurrency number in the remote scan command.
**Performance Metric**: Any model quality measured during model development that a data scientist wants to visualize, expressed as a numeric value between 0 and 1 to offer a basis for comparison (e.g., accuracy). The performance metrics are defined in the scan definition; one of the metric options must be selected for use in the ATX calculation.
**Regression**: A learning task type used to estimate the relationships between a dependent variable (the outcome variable) and one or more independent variables (or features). Regression allows models to estimate the conditional expectation (or population average value) of the dependent variable when the independent variables take on a given set of values.
**Robustness**: Robustness measures how well models retain an outcome given changes to the data feature values. The more robust a model is, the greater the changes required to alter the outcome.
**Scanner**: The component of the Certifai toolkit that runs the scan algorithms and outputs the reports.
**Spread**:

- For categorical data, spread = 1.
- For numerical data, spread = MAD (median absolute deviation) of the feature in the dataset provided, unless MAD = 0, which happens when more than half the data points have the same value; in that case, spread = the standard deviation of the feature values.
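The rule above can be sketched in a few lines. This is illustrative only; in particular, the population standard deviation (`pstdev`) is an assumption, since the entry does not say which standard deviation is used:

```python
from statistics import median, pstdev

def spread(values, categorical=False):
    """Spread per the rule above: 1 for categorical features; otherwise the
    median absolute deviation (MAD), falling back to the standard deviation
    when MAD is 0 (i.e., more than half the values are identical).
    Population standard deviation is an assumption here."""
    if categorical:
        return 1
    med = median(values)
    mad = median(abs(v - med) for v in values)
    return mad if mad != 0 else pstdev(values)
```

For `[5, 5, 5, 5, 2]` the median is 5 and more than half the points equal it, so MAD = 0 and the standard-deviation fallback applies.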
**Toolkit**: The component of Certifai that contains the local download for the scanner, libraries, CLI, examples (reports, datasets, definitions, and notebooks), and Console. Toolkit is the base Certifai product. Users can add a single-user server component for the Pro edition or a multi-user server component for the Enterprise edition.