
I'm playing with LIME to explain the prediction of a machine learning model.

LIME trains a (locally weighted) linear surrogate model around a point of interest. The weights of that surrogate model are the feature importances of your model's prediction at that point.
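(For concreteness, here is a rough, hand-rolled sketch of what I understand LIME to be doing, using a scikit-learn Ridge surrogate fit directly on the raw features; the toy model, kernel width, and perturbation scale are made up for illustration, and real LIME fits the surrogate on an interpretable/binary representation rather than the raw features.)

    # Rough sketch of the LIME idea (not the lime package itself): perturb around
    # the point of interest, weight samples by proximity, fit a weighted linear model.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)

    def black_box(X):
        # stand-in for the model being explained (made up for illustration)
        return X[:, 0] ** 2 + 3 * X[:, 1]

    x0 = np.array([1.0, 2.0])                             # point of interest
    X_pert = x0 + rng.normal(scale=0.5, size=(1000, 2))   # local perturbations
    y_pert = black_box(X_pert)

    # exponential kernel on distance to x0, similar in spirit to LIME's default weighting
    dists = np.linalg.norm(X_pert - x0, axis=1)
    weights = np.exp(-(dists ** 2) / 0.75 ** 2)

    surrogate = Ridge(alpha=1.0)
    surrogate.fit(X_pert, y_pert, sample_weight=weights)

    print("feature importances:", surrogate.coef_)
    print("intercept:", surrogate.intercept_)             # the term I'm asking about
    print("surrogate prediction at x0:", surrogate.predict(x0.reshape(1, -1))[0])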

However, it's not clear to me how to interpret the intercept term. It's the "base" prediction if all features are zero, which seems meaningless.

(Compare this to SHAP, where the "base" value is the average prediction across the training set.)

So, does it make sense to run LIME without an intercept term? That way, the feature importances would sum directly to the (surrogate's) prediction at that point.
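(Continuing the sketch above, and reusing X_pert, y_pert, weights, and x0 from it, this is what I mean by dropping the intercept; with fit_intercept=False the per-feature contributions sum exactly to the surrogate's prediction at the point, though not necessarily to the original model's prediction.)

    # continuing the sketch above: reuses X_pert, y_pert, weights, x0
    from sklearn.linear_model import Ridge

    surrogate_no_b = Ridge(alpha=1.0, fit_intercept=False)
    surrogate_no_b.fit(X_pert, y_pert, sample_weight=weights)

    contribs = surrogate_no_b.coef_ * x0                  # per-feature contributions
    print("contributions:", contribs, "sum:", contribs.sum())
    # the sum equals the surrogate's prediction at x0 (not necessarily the black box's)
    print("surrogate prediction at x0:", surrogate_no_b.predict(x0.reshape(1, -1))[0])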

If not, which I suspect, how do I interpret the intercept term?

kennysong
  • Hey! I am also looking into ways to understand the intercept in LIME. My intuition/understanding, from looking at explanations of many different models, is that the intercept shows how much of the prediction cannot be explained by the feature weights. Does that make sense? Did you find another interpretation of the intercept term? – Verena Apr 19 '21 at 11:04
  • Oh, and another remark: the feature weights (with or without an intercept) do not have to sum to the prediction. – Verena Apr 19 '21 at 13:50
  • @TheGreat The link to your question doesn't work anymore! What solution are you looking for? This post addresses several issues, I think. – Verena Feb 09 '22 at 15:36
  • @Verena – You can refer to this one: https://datascience.stackexchange.com/questions/107928/usefulness-of-intercept-in-layman-terms-eli5 – The Great Feb 10 '22 at 03:21
  • I have placed a bounty on my post. Hope this interests you. I would be grateful if you could help me with the above post about the intercept. – The Great Feb 12 '22 at 06:41

0 Answers