This seminar is based on recent review papers on Explainable AI. The topic has attracted serious public attention since the 2018 publication of the article by Scott M. Lundberg, Bala Nair, Monica S. Vavilala, Mayumi Horibe, Michael J. Eisses, Trevor Adams, David E. Liston, Daniel King-Wai Low, Shu-Fang Newman, Jerry Kim & Su-In Lee, "Explainable machine-learning predictions for the prevention of hypoxaemia during surgery", Nature Biomedical Engineering, volume 2, pages 749–760 (2018).
The first part of the seminar will describe the main approaches and models of Explainable AI.
The second part of the seminar will discuss a set of existing methods for Explainable AI, namely LIME, DeepLIFT, SHAP (SHapley Additive exPlanations) values, Kernel SHAP, and Deep SHAP.
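To give a flavour of the SHAP family of methods listed above, the sketch below computes exact Shapley values for a tiny hypothetical model by enumerating all feature coalitions. The model, feature names, and baseline are illustrative assumptions, not from the seminar; in practice this brute-force computation is feasible only for a handful of features, which is precisely why approximations such as Kernel SHAP and Deep SHAP exist.

```python
from itertools import combinations
from math import factorial

# Hypothetical toy model over three features (for illustration only).
def model(x):
    return 2.0 * x["a"] + 1.0 * x["b"] + 0.5 * x["a"] * x["c"]

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating all 2^n feature coalitions.

    Features absent from a coalition are replaced by their baseline value.
    """
    features = list(x)
    n = len(features)

    def value(coalition):
        # Evaluate the model with out-of-coalition features set to baseline.
        z = {f: (x[f] if f in coalition else baseline[f]) for f in features}
        return model(z)

    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                # Shapley kernel weight: |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(set(s) | {f}) - value(set(s)))
        phi[f] = total
    return phi

x = {"a": 1.0, "b": 1.0, "c": 1.0}
baseline = {"a": 0.0, "b": 0.0, "c": 0.0}
phi = shapley_values(model, x, baseline)
print(phi)
# Efficiency property: the attributions sum to f(x) - f(baseline).
print(sum(phi.values()), model(x) - model(baseline))
```

Note how the interaction term 0.5·a·c is split equally between features a and c, while feature b receives exactly its additive contribution; the attributions always sum to the difference between the prediction and the baseline prediction.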
The third and final part of the seminar will present the concrete results of our Explainable AI research group for High Dimensional Anomaly Detection.
Video will be available soon...