iml

Interpretable Machine Learning

Installation
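The released version installs from CRAN:

    install.packages("iml")

A development version can presumably be installed from the GitHub repository linked below; the use of the remotes package here is an assumption based on that Source link:

    # assumption: remotes is installed; repository taken from the Source link below
    remotes::install_github("christophM/iml")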

About

Interpretability methods to analyze the behavior and predictions of any machine learning model. Implemented methods include: feature importance as described by Fisher et al. (2018), accumulated local effects (ALE) plots as described by Apley (2018), partial dependence plots as described by Friedman (2001), individual conditional expectation (ICE) plots as described by Goldstein et al. (2013) doi:10.1080/10618600.2014.907095, local surrogate models (a variant of LIME) as described by Ribeiro et al. (2016), the Shapley value as described by Strumbelj et al. (2014) doi:10.1007/s10115-013-0679-x, feature interactions as described by Friedman et al. doi:10.1214/07-AOAS148, and tree surrogate models.
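A typical workflow wraps a fitted model in a Predictor object and then applies any of the methods above to it. The sketch below mirrors the introductory vignette's pattern; the randomForest model and the Boston data from MASS (both listed under Suggests) are illustrative choices, not requirements:

    library(iml)
    library(randomForest)

    # Fit any model; iml is model-agnostic
    data("Boston", package = "MASS")
    rf <- randomForest(medv ~ ., data = Boston, ntree = 50)

    # Wrap model and data in a Predictor object
    X <- Boston[, setdiff(names(Boston), "medv")]
    predictor <- Predictor$new(rf, data = X, y = Boston$medv)

    # Permutation feature importance (Fisher et al. 2018)
    imp <- FeatureImp$new(predictor, loss = "mae")
    plot(imp)

    # Accumulated local effects for one feature (Apley 2018)
    ale <- FeatureEffect$new(predictor, feature = "lstat", method = "ale")
    plot(ale)

    # Shapley values for a single observation (Strumbelj et al. 2014)
    shap <- Shapley$new(predictor, x.interest = X[1, ])
    plot(shap)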

Citation: run citation("iml") in R for citation info
Website: christophm.github.io/iml/
Source: github.com/christophM/iml/
Bug reports: github.com/christophM/iml/issues

Key Metrics

Version: 0.11.2
Published: 2024-03-29
Needs compilation: no
License: MIT + file LICENSE
CRAN checks: iml results

Downloads

Yesterday: 171 (0%)
Last 7 days: 769 (-22%)
Last 30 days: 4,138 (+19%)
Last 90 days: 10,437 (-19%)
Last 365 days: 38,216 (+13%)

Maintainer

Giuseppe Casalicchio (giuseppe.casalicchio@lmu.de)

Authors

Giuseppe Casalicchio (aut/cre)
Christoph Molnar (aut)
Patrick Schratz (aut)

Material

NEWS
Reference manual
Package source

In Views

MachineLearning

Vignettes

Introduction to iml: Interpretable Machine Learning in R
Parallel computation of interpretation methods
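As the second vignette covers, iml delegates parallelism to the future framework (note future and future.apply under Imports): registering a parallel backend before calling an interpretation method is enough. A minimal sketch, assuming the predictor object from the example above and a multisession backend:

    library(future)

    # Register a parallel backend; subsequent iml computations use it
    plan("multisession", workers = 2)
    imp_par <- FeatureImp$new(predictor, loss = "mae")

    # Restore sequential execution
    plan("sequential")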

Binaries

macOS: r-release (arm64), r-oldrel (arm64), r-release (x86_64)
Windows: r-devel (x86_64), r-release (x86_64), r-oldrel (x86_64)

Old Sources

iml archive

Imports

checkmate
data.table
Formula
future
future.apply
ggplot2
Metrics
R6

Suggests

ALEPlot
bench
bit64
caret
covr
e1071
future.callr
glmnet
gower
h2o
keras ≥ 2.2.5.0
knitr
MASS
mlr
mlr3
party
partykit
patchwork
randomForest
ranger
rmarkdown
rpart
testthat
yaImpute

Reverse Imports

counterfactuals
FACT
moreparty

Reverse Suggests

DALEXtra
explainer
mistyR
mlr3fairness
tidyfit