Interpretability of Machine Learning models (in Python)

D. Hainaut, PhD

Description

Machine learning (ML) models are powerful predictive tools, but they often suffer from a lack of interpretability.

The aim of this course is to introduce local and global methods for analyzing the relationship between the inputs and the output of complex ML algorithms.

The module is illustrated with Python examples, which are provided to participants.

Program

  1. Partial dependence plots

  2. Permutation feature importance

  3. Friedman’s interactions

  4. Global surrogate models

  5. Local Interpretable Model-Agnostic Explanations (LIME)

  6. Shapley values (SHAP)

Speaker

Donatien Hainaut

Scientific Advisor, Detralytics
Professor, UCLouvain

Date: On-Demand

Duration: 3h

Accreditation: 3 CPD | 18 PPC

Level: All

Acquired skills

At the end of the training, participants will be able to:

  • Program and analyze partial dependence plots;
  • Estimate the importance of explanatory variables;
  • Determine the relevant factors driving a single prediction.
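
The third skill, explaining a single prediction, is the idea behind LIME. A minimal sketch of that idea using only scikit-learn (not the `lime` library): perturb the instance, weight the perturbed samples by proximity, and fit a weighted linear surrogate whose coefficients serve as the local explanation. All modeling choices here are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

# Black-box model on synthetic data (5 explanatory variables).
X, y = make_regression(n_samples=400, n_features=5, noise=5.0, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Single prediction to explain.
x0 = X[0]

# Perturb x0, then weight samples by a Gaussian kernel on their distance to x0.
rng = np.random.default_rng(0)
Z = x0 + rng.normal(scale=X.std(axis=0), size=(1000, X.shape[1]))
weights = np.exp(-np.linalg.norm(Z - x0, axis=1) ** 2 / 2.0)

# Weighted linear surrogate fitted to the black-box predictions.
surrogate = Ridge(alpha=1.0).fit(Z, model.predict(Z), sample_weight=weights)

# The surrogate's coefficients are the local explanation for x0.
print("Local coefficients:", np.round(surrogate.coef_, 2))
```

The coefficients indicate which variables drive the model's prediction in the neighborhood of `x0`, which is exactly the kind of local attribution the course covers in depth with LIME and SHAP.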
