While the doomsday scenario of bots taking over the world and ruling over humans may be far-fetched, a closer-to-home question often comes up in conversations: can you trust Artificial Intelligence? With the increasing use of Machine Learning (‘ML’) and Artificial Intelligence (‘AI’) solutions for business decision making, it is imperative that models become explainable to non-data specialists and business executives in the organisation.

Like humans, ML models struggle to provide a sufficient explanation of their own decisions. A model can list all of its inputs, whether a large number of variables in time-series or cross-sectional data, but it can have a hard time communicating how it combined them. For example, an ML model can suggest a 90% chance of default if a consumer’s credit card utilisation is above 95% and recommend avoiding further cross-selling, but how did it arrive at this decision, and more importantly, can the decision be trusted?

If neural networks are deployed in such scenarios, explaining the reasons behind the model’s decisions becomes even more difficult, as neural networks are usually far more complex than linear models. A model can take millions of input values, and each prediction needs to be traceable back to the training data or features so that anomalies can be discovered. That said, the required degree of interpretation may be low in some use cases, such as manpower planning in a manufacturing plant.

From the perspective of explainability, ML models broadly fall into two categories, and the mechanism of interpretation differs between black box and white box models.

Interpretability for Black Box Models

A black box model’s inputs and outputs have an observable relationship, but the user does not know what goes on behind the scenes. For example, when a neural network with millions of parameters is trained to classify images, it may work great at that task, but understanding what those parameters do and how the network arrived at its results is difficult without deriving the entire function.

It is for this reason that we employ the services of the friendly ‘datahood’ explainers – LIME and SHAP. LIME stands for Local Interpretable Model-agnostic Explanations and SHAP for SHapley Additive exPlanations, and both can be used for interpreting black box models.

They help us understand the importance of features. They are also called surrogate approaches, as they do not inspect the model’s internals: they tweak the inputs and observe the change in predictions.

  • LIME – A technique where we approximate the black box model with a linear model, fitted on a small sample of perturbed data around the instance of interest, to find which features drive the result. When familiar, domain-specific inputs get highlighted, it builds the user’s trust in the black box model.
  • SHAP – This technique is based on Shapley values, which originate in game theory. To explain the output for a particular input, each feature of that input is combined with the other features in a combinatorial way, and we study its marginal impact on the output.
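The Shapley idea in the second bullet can be sketched in plain Python. Everything here is a made-up assumption for illustration, including the toy linear "scoring" model and its weights; real SHAP implementations use clever approximations rather than enumerating every permutation:

```python
from itertools import permutations

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one prediction: average each feature's
    marginal contribution over every order in which features can be added."""
    n = len(x)
    phi = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        present = list(baseline)              # start with every feature "absent"
        prev = predict(present)
        for i in order:
            present[i] = x[i]                 # switch feature i on
            cur = predict(present)
            phi[i] += cur - prev              # its marginal impact in this order
            prev = cur
    return [p / len(orders) for p in phi]

# Toy linear scoring model with made-up weights
weights = [0.5, -0.2, 0.1]
score = lambda f: sum(w * v for w, v in zip(weights, f))

vals = shapley_values(score, x=[4.0, 2.0, 1.0], baseline=[0.0, 0.0, 0.0])
# For a linear model, feature i's Shapley value reduces to w_i * (x_i - baseline_i)
```

The attributions also sum to the difference between the prediction and the baseline prediction, the "efficiency" property that makes Shapley values attractive for explanations.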

Let’s consider an example not from the wild but from what a data scientist usually encounters.

Take for instance a document classification exercise where the objective is to predict the author of a document, and assume that a Support Vector Machine (SVM) does a good job at this task.

Now, first, Support Vector Machines can have a complex mathematical formulation for a multiclass classification objective; the model relies on separating hyperplanes, which puts it in the black box category. Second, the SVM does not learn from English words but from numeric vector representations of them, which experts cannot read directly, so connecting the dots back to the words is difficult. LIME helps link the model’s output to those vectors and from there back to the words, and it also tells us which words had a positive impact and which a negative impact on the predicted author.
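A LIME-style explanation for text can be sketched as follows. The `black_box` function here is a stand-in assumption, an invented scoring rule rather than a trained SVM, and the kernel width and sample count are arbitrary choices; the point is the mechanism: mask words, query the model, and fit a weighted linear surrogate:

```python
import numpy as np

# Stand-in for the trained SVM pipeline: scores how strongly a text
# resembles "Author A". The signature words below are invented.
def black_box(texts):
    scores = []
    for t in texts:
        words = t.split()
        score = (0.6 * words.count("whilst") + 0.3 * words.count("hitherto")
                 - 0.1 * len(words) / 10)
        scores.append(score)
    return np.array(scores)

def lime_text(text, predict, n_samples=500, seed=0):
    """Perturb the text by masking words, query the model, then fit a
    weighted linear surrogate whose coefficients rank word importance."""
    rng = np.random.default_rng(seed)
    words = text.split()
    masks = rng.integers(0, 2, size=(n_samples, len(words)))  # 1 = keep word
    masks[0] = 1                                              # keep the original text too
    perturbed = [" ".join(w for w, m in zip(words, row) if m) for row in masks]
    y = predict(perturbed)
    # proximity kernel: perturbations closer to the original text weigh more
    dist = 1.0 - masks.mean(axis=1)
    sw = np.exp(-(dist ** 2) / 0.25)
    X = np.hstack([np.ones((n_samples, 1)), masks])           # intercept + mask features
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return dict(zip(words, coef[1:]))                         # per-word contribution

contrib = lime_text("whilst the rain fell hitherto unseen", black_box)
```

Sorting `contrib` by value surfaces the words that push the prediction towards or away from a given author, exactly the kind of evidence a reviewer can sanity-check against domain knowledge.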

Interpretability for White Box Models

A white box model’s internal processes are known to the user; hence, interpreting how the data is processed is relatively easier.

Inference statistics in regression

  • A Z test on each estimated coefficient helps in filtering out unwanted variables.
  • Confidence Intervals
    • The confidence level we choose fixes the Type I error rate (alpha): the chance of mistakenly rejecting the null hypothesis and erroneously including a variable that we should not have. If such an error has a direct business impact, for example when the target variable is advertisement cost and the model drives spend, it is better to keep the level at 1% or less. At a looser level, more variables might get selected.
  • If the P-value is less than alpha, you can consider keeping the variable, as its coefficient is statistically significant. This helps in variable selection, as redundant variables can be excluded with a known error rate.
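The filtering logic in these bullets can be sketched end to end. The data below is synthetic, and the normal approximation to the test statistic is an assumption (a t distribution would be more appropriate for small samples):

```python
import math
import numpy as np

def z_test_select(X, y, alpha=0.05):
    """OLS fit, then a two-sided z test per coefficient; keep only the
    variables whose p-value falls below alpha."""
    n, k = X.shape
    Xd = np.hstack([np.ones((n, 1)), X])              # intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    sigma2 = resid @ resid / (n - k - 1)              # residual variance
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(Xd.T @ Xd)))
    z = beta / se
    # two-sided normal p-value via the complementary error function
    p = np.array([math.erfc(abs(zi) / math.sqrt(2)) for zi in z])
    keep = [j for j in range(k) if p[j + 1] < alpha]  # skip the intercept
    return keep, p[1:]

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=500)  # column 1 is pure noise
keep, pvals = z_test_select(X, y)
```

With strong true effects and 500 rows, columns 0 and 2 come out highly significant while the noise column typically does not, which is exactly the variable-selection behaviour described above.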

Let’s look at this approach through an example. Assume that we need to predict credit card default using hundreds of independent variables, and that we use logistic regression to achieve our classification objective, i.e., default or no default. A typical regression output will show P-values for all the coefficients. For the significance level we have set, they tell us whether we reject the null hypothesis (coefficient is significant, low P-value) or fail to reject it (coefficient is not significant, high P-value).

Note that a Z test is used to calculate the P-values, so non-statistically-significant variables can be removed by this method. In the credit card default example, where the variables run into the hundreds, using P-values, correlations, and the log of odds to measure the information content of the independent variables, we can reduce them to 25 important variables.
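The "log of odds to measure information content" step is commonly done with Weight of Evidence and Information Value. A minimal sketch with made-up data follows; real scorecards bin continuous variables first and smooth empty cells rather than skipping them:

```python
import math

def information_value(bins, default):
    """IV of a binned variable: sum over bins of
    (%good - %bad) * ln(%good / %bad), where the log term is the
    Weight of Evidence of the bin."""
    good_total = sum(1 for d in default if d == 0)
    bad_total = sum(1 for d in default if d == 1)
    iv = 0.0
    for b in set(bins):
        good = sum(1 for x, d in zip(bins, default) if x == b and d == 0)
        bad = sum(1 for x, d in zip(bins, default) if x == b and d == 1)
        if good == 0 or bad == 0:
            continue           # avoid log(0); production code would smooth instead
        pg, pb = good / good_total, bad / bad_total
        iv += (pg - pb) * math.log(pg / pb)
    return iv

# Made-up utilisation bands and default flags for eight accounts
bands    = ["high", "high", "high", "low", "low", "low", "low", "high"]
defaults = [1, 1, 0, 0, 0, 0, 1, 1]
iv = information_value(bands, defaults)
```

A common rule of thumb treats IV below roughly 0.02 as unpredictive and above roughly 0.3 as strong, so ranking hundreds of candidate variables by IV is a quick way to shortlist the handful that matter.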

Some other practices that ultimately improve the interpretability of Machine Learning models are:

  • Set the training objective to match your true goal (e.g., train for an acceptable probability of false alarms, not raw accuracy). For instance, cancer detection needs high recall, correctly calling out the actual cancer cases, even if that means comparatively low precision, i.e., incorrectly classifying some non-cancer cases as cancer.
  • Use a high-quality dataset to test the model, and keep testing month on month and year on year. This helps in the quick discovery of deviations and aids better interpretation of results.
  • Use text, graphs, and statistics to provide insights. Use visualizations, as they help speed up the consumption of data, especially when meeting senior executives.
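The recall-versus-precision trade-off in the first bullet is easy to make concrete. The labels below are invented, with 1 marking a cancer case:

```python
def precision_recall(y_true, y_pred):
    """Precision: of the cases we flagged, how many were real?
    Recall: of the real cases, how many did we flag?"""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# A screening model tuned for recall: every true cancer case is flagged,
# at the cost of two false alarms
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 1, 0, 0, 0]
p, r = precision_recall(y_true, y_pred)   # p = 0.6, r = 1.0
```

Reporting both numbers alongside the chosen operating point makes the model's behaviour legible to executives: "we catch every case, and three of every five alarms are real" is far easier to trust than a single accuracy figure.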

So, can you trust AI?