CRAN/E | lime

lime

Local Interpretable Model-Agnostic Explanations

Installation

About

When building complex models, it is often difficult to explain why the model should be trusted. While global measures such as accuracy are useful, they cannot be used for explaining why a model made a specific prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for explaining the outcome of black box models by fitting a local model around the point in question and perturbations of this point. The approach is described in more detail in the article by Ribeiro et al. (2016).
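The core idea can be sketched in a few lines: perturb the point of interest, query the black box on the perturbations, weight each perturbation by its proximity to the original point, and fit a weighted linear model whose coefficients serve as the local explanation. The following is a minimal illustrative sketch of that idea (not the package's actual implementation; the function name, noise scale, and kernel choice are assumptions for illustration):

```python
import numpy as np

def lime_sketch(predict_fn, x, n_samples=500, kernel_width=0.75, seed=0):
    """Hypothetical sketch of the LIME idea: fit a weighted local
    linear surrogate around the point x for a black-box predict_fn."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the point of interest with Gaussian noise.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    # 2. Query the black-box model on the perturbed points.
    y = predict_fn(Z)
    # 3. Weight samples by proximity to x (exponential kernel).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4. Weighted least squares: scale rows by sqrt(w), then solve.
    A = np.hstack([np.ones((n_samples, 1)), Z]) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[1:]  # per-feature local effect estimates

# Example: a black box that is locally dominated by feature 0.
black_box = lambda Z: 3.0 * Z[:, 0] + 0.1 * np.sin(Z[:, 1])
effects = lime_sketch(black_box, np.array([1.0, 2.0]))
```

Here the surrogate recovers a large local effect for feature 0 and a near-zero effect for feature 1, which is the kind of per-prediction attribution the package produces.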

lime.data-imaginist.com
github.com/thomasp85/lime
Bug report

Key Metrics

Version 0.5.3
Published 2022-08-19 618 days ago
Needs compilation? yes
License MIT
License File
CRAN checks lime results

Downloads

Yesterday 117 0%
Last 7 days 596 +2%
Last 30 days 2,218 -0%
Last 90 days 6,528 -36%
Last 365 days 27,999 -38%

Maintainer

Emil Hvitfeldt

emilhhvitfeldt@gmail.com

Authors

Emil Hvitfeldt

aut / cre

Thomas Lin Pedersen

aut

Michaël Benesty

aut

Material

README
NEWS
Reference manual
Package source

Vignettes

Understanding lime

macOS

r-release (arm64), r-oldrel (arm64), r-release (x86_64), r-oldrel (x86_64)

Windows

r-devel (x86_64), r-release (x86_64), r-oldrel (x86_64)

Old Sources

lime archive

Imports

glmnet
stats
ggplot2
tools
stringi
Matrix
Rcpp
assertthat
methods
grDevices
gower

Suggests

xgboost
testthat
mlr
h2o
text2vec
MASS
covr
knitr
rmarkdown
sessioninfo
magick
keras
htmlwidgets
shiny
shinythemes
ranger

LinkingTo

Rcpp
RcppEigen

Reverse Suggests

DALEXtra