If you can’t understand how a model makes a prediction, how can you trust that prediction?

LIME [is] a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.

(via “Why Should I Trust You?”: Explaining the Predictions of Any Classifier)
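
To make the idea concrete, here is a minimal sketch of the local-surrogate approach the quote describes: perturb the instance, query the black box, weight the perturbations by proximity, and fit a simple linear model whose coefficients serve as the explanation. This is only an illustration, not the authors' implementation; the dataset, random forest, Gaussian perturbations, exponential kernel, and Ridge surrogate are all assumed choices for the sketch (the official `lime` package adds feature selection, discretization, and more).

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# A black-box classifier whose predictions we want to explain.
X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(x, predict_proba, n_samples=5000, kernel_width=2.0):
    """Fit an interpretable (linear) model locally around one prediction."""
    rng = np.random.default_rng(0)
    # Perturb the instance with noise scaled to each feature.
    scale = X.std(axis=0)
    samples = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # Weight perturbed samples by their proximity to the instance.
    distances = np.linalg.norm((samples - x) / scale, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)
    # The surrogate only needs to imitate the black box in this neighbourhood.
    targets = predict_proba(samples)[:, 1]
    surrogate = Ridge(alpha=1.0).fit(samples, targets, sample_weight=weights)
    return surrogate.coef_  # local feature importances

coefs = explain_locally(X[0], black_box.predict_proba)
top = sorted(zip(load_breast_cancer().feature_names, coefs),
             key=lambda t: abs(t[1]), reverse=True)[:5]
print(top)
```

The surrogate is faithful only near the chosen instance, which is exactly the trade-off the paper makes: a globally complex model can still be locally approximated by something a human can read.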
