Questions tagged [lime]

Questions related to the Local Interpretable Model-Agnostic Explanations (LIME) method of explaining black-box machine learning models.

LIME stands for Local Interpretable Model-Agnostic Explanations. It is a method developed by Ribeiro and co-authors, who explain the idea in a short blog post and introduce the technique in a paper (PDF). Very briefly, given a prediction at a certain location in the covariate space, LIME generates simulated data in the neighborhood of that point, obtains the black-box model's predictions for those points, and fits a local linear model to them. The coefficients of this surrogate model let the analyst see which features are influential there.
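As a quick illustration of that workflow (a minimal sketch under assumed conditions, not code taken from the tag description): the snippet below uses the Python lime package's LimeTabularExplainer to explain a single prediction of a scikit-learn random forest. The Iris data, the random forest, and num_features=4 are arbitrary choices made only for this example.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Assumed setup: any tabular classifier exposing predict_proba would work here.
iris = load_iris()
X, y = iris.data, iris.target
model = RandomForestClassifier(random_state=0).fit(X, y)

# The explainer simulates data around the instance of interest, queries the
# black-box model for predictions, and fits a weighted linear surrogate.
explainer = LimeTabularExplainer(
    X,
    feature_names=iris.feature_names,
    class_names=list(iris.target_names),
    mode="classification",
)

# Explain one prediction; the (feature, weight) pairs from as_list() are the
# coefficients of the local linear surrogate.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())

The weights indicate how much each feature pushes the black-box prediction up or down in the neighborhood of the chosen instance, which is the "locally influential features" idea described above.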

25 questions
32 votes · 1 answer

Comparison between SHAP (Shapley Additive Explanation) and LIME (Local Interpretable Model-Agnostic Explanations)

I am reading up about two popular post hoc model interpretability techniques: LIME and SHAP. I am having trouble understanding the key differences between these two techniques. To quote Scott Lundberg, the brains behind SHAP: SHAP values come with the…
user248884 · 431
5 votes · 2 answers

LIME Analysis Linear Model

I am looking at explaining a single prediction for a linear model: Y = F(X) = a0 + a1x1 + a2x2 + ... + anxn, i.e. F: X -> Y. That is, given a single instance z in X, return the relative contribution of each feature to the prediction of z. I could look at…
Mike Tauber · 807
5 votes · 2 answers

Reasons that LIME and SHAP might not agree with intuition

I'm leveraging the Python packages lime and shap to explain single (test-set) predictions that a basic, trained model is making on new, tabular data. WLOG, the explanations generated by both methods do not agree with user intuition. For example,…
AmeySMahajan · 123
4 votes · 1 answer

Is there any reason to use LIME now that shap is available?

The context: explaining a binary XGBoost classifier. If we say that we are limited to the LIME and Shapley Additive Explanations (aka "shap") packages, is there any reason to use LIME? My impression is that LIME is a flawed, half-solution to the…
JPErwin · 443
4 votes · 1 answer

How to interpret probabilities together with output from R lime package?

My question is related to this one: LIME explanation confusion. But since it does not have a reproducible example or an answer, I am asking here with an example. I have a dataset with unbalanced classes. Here I make a reproducible example with the…
user29609 · 225
3 votes · 1 answer

Using Lime on a binary classification neural network

I would like to use Lime to interpret a neural network model. For the sake of this question, I made a simple Dense model using this dataset: https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv To make this…
Liz · 53
2 votes · 1 answer

LIME Shows Very High Probability Score, But Breakdown Has All Negative Factors

I'm using LIME to break down the observation for each row and am looking at the positive and negative factors that contribute to the output probability. I filtered my dataset down to only records with a 95% or higher probability score, but…
Jon · 73
2 votes · 0 answers

Interpreting/Quantifying what is causing changes in ML model predictions week over week

We are currently predicting an online student's likelihood of completing a class each week. We use a lot of demographic information (which is constant throughout the class), as well as a small number of performance metrics (test results, logins, time…
L Xandor · 1,119
2 votes · 0 answers

What are the differences between LIME and SHAP as model interpretation techniques?

Model interpretation is an important area of study nowadays, and a number of techniques have arisen to help with this task. Perhaps the two most famous are LIME and SHAP. How would you explain the main differences between the…
2 votes · 1 answer

Understanding the functioning of LIME (Local Interpretable Model-agnostic Explanations)

The following are the steps that occur in LIME's algorithm (see https://cran.r-project.org/web/packages/lime/vignettes/Understanding_lime.html). I have been trying to read and understand why this process is followed. My questions are about some of the steps. 1)…
Pb89 · 255
2 votes · 1 answer

H2O interpretability - LIME

I have trained a model to predict heart attacks using the random forest algorithm in H2O, and I get good performance in cross-validation. Now I want to give more interpretation to the predictions on a test set, so I used LIME and followed this…
Jasam · 21
1 vote · 0 answers

How to handle inconsistency in ML explanations?

I found out that we have different solutions to explain ML predictions, such as: a) LIME, b) SHAP. Despite using all these approaches, I see that each of them works for certain data points and not for others. For example, let's…
1 vote · 0 answers

Lime predictions - Interpretation

I am working on a binary classification using random forest, with explanations from LIME. I already referred to the posts here, here, and here. I have the feature contribution information from LIME as shown below (for predicted class = 1): Predicted class…
1 vote · 1 answer

Understanding the output of LIME - How is the contribution of features related to the predicted output?

How is the contribution of features extracted from explaining a regression model locally with LIME related to the predicted output of the surrogate model? I thought that LIME is additive (with some blog post as source), but wasn't able to get this…
So S · 523
1 vote · 2 answers

Choosing an interpretable model vs choosing a black-box model and explaining it with shap/lime

I am analysing a dataset of articles. The articles are labeled as popular or not popular, and of course each article has features like article section, article writer, etc. I don't want to predict whether a new, unlabeled article will be popular or…
Amit S · 27