Machine learning models, particularly black-box models, drive powerful decisions
and recommendations. However, these models lack transparency and cannot be
explained directly. Their decisions therefore require dedicated explanation
techniques to gain users' trust and to ensure that a particular recommendation is
interpreted correctly. Knowledge graphs (K-graphs) have been recognized as a
powerful tool for generating explanations for the predictions or decisions of
black-box models. The explainability of machine learning models enhances
transparency between the user and the model and, in turn, can lead to better
decision support systems, improved recommender systems, and more effective
predictive models.
Unfortunately, black-box models expose no details about the reasons behind their
predictions and therefore lack clarity, whereas white-box models are interpretable
by design. This chapter presents an exhaustive review and a step-by-step
description of using knowledge graphs to generate explanations for black-box
recommender systems, which in turn helps produce more persuasive and personalized
explanations for the recommended items. We also present a case study on the
MovieLens dataset and Wikidata, using a K-graph to generate accurate explanations.
Keywords: Black-Box Recommender System, Explainability, Knowledge Graph.
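To give a flavor of the kind of explanation the case study targets, the following minimal sketch (hypothetical data and function names, not the chapter's actual pipeline) builds a few knowledge-graph triples of the sort obtainable by linking MovieLens items to Wikidata entities, then searches for a path connecting a movie the user liked to a recommended movie and verbalizes that path as an explanation.

```python
from collections import deque

# Hypothetical (head, relation, tail) triples, illustrative of linking
# MovieLens items to Wikidata entities such as directors and genres.
TRIPLES = [
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Interstellar", "directed_by", "Christopher Nolan"),
    ("Inception", "has_genre", "Science Fiction"),
    ("The Martian", "has_genre", "Science Fiction"),
]

def build_graph(triples):
    """Index triples as an undirected adjacency list for path search."""
    graph = {}
    for head, rel, tail in triples:
        graph.setdefault(head, []).append((rel, tail))
        graph.setdefault(tail, []).append((rel, head))
    return graph

def explain(liked, recommended, graph):
    """Breadth-first search for a shortest path from a liked item to the
    recommended item, rendered as a human-readable explanation."""
    queue = deque([(liked, [liked])])
    visited = {liked}
    while queue:
        node, path = queue.popleft()
        if node == recommended:
            bridge = " -> ".join(path[1:-1]) or "a shared entity"
            return (f"You liked {liked}; both items are connected via "
                    f"{bridge}, so {recommended} was recommended.")
        for _rel, nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [nxt]))
    return "No explanatory path found."

graph = build_graph(TRIPLES)
print(explain("Inception", "Interstellar", graph))
# You liked Inception; both items are connected via Christopher Nolan, ...
```

In practice the chapter's pipeline operates on the full MovieLens item catalogue linked to Wikidata entities rather than this toy triple set; the sketch only illustrates how a connecting path in a knowledge graph can be turned into a persuasive, personalized explanation for a recommended item.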