In 2020, one message in the artificial intelligence (AI) market came through loud and clear: AI's got some explaining to do!
Explainable AI (XAI) has long been a fringe discipline in the broader world of AI and machine learning. It exists because many machine-learning models are either opaque or so convoluted that they defy human understanding. But why is it such a hot topic today?
AI systems making inexplicable decisions are your governance, regulatory, and compliance colleagues' worst nightmare. But aside from this, there are other compelling reasons for shining a light into the inner workings of AI. For one, as more and more companies adopt AI, they find that the business stakeholders who rely on it in their workflows won't trust its decisions without at least a general understanding of how those decisions were made. Opaque AI also obscures "second-order insights," such as nonintuitive correlations that emerge from the inner workings of a machine-learning model.
Explainable AI Is Not One-Dimensional
There are many different flavors of explainable AI and a whole host of related techniques. Determining the right approach depends on whether:
Your use case requires complete transparency or whether interpretability is sufficient. Use transparent approaches for high-risk and highly regulated use cases. For less risky use cases where explainability is still important, consider an interpretability technique such as LIME or SHAP, which produces a post-hoc surrogate model to explain the opaque one (see the sketch after this list).
Your stakeholders require global or local explanations. Some stakeholders, such as regulators, may want to understand how the entire model operates — a global explanation. Other stakeholders, such as your end customers, may want local explanations that clarify how the system made the decision that impacted them. Tailor your explanations to the technical acuity of your stakeholders. Not everyone’s a data scientist.
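To make the local-versus-global distinction concrete, here is a minimal sketch of post-hoc explanation using SHAP, one of the interpretability techniques named above. It trains an opaque random forest on synthetic data, then produces a local explanation for a single prediction and a global feature ranking across the dataset. The dataset, model, and feature names are illustrative assumptions, not from the original post.

```python
# A minimal sketch of post-hoc interpretability with SHAP.
# Assumptions (not from the post): a scikit-learn random forest
# trained on synthetic regression data with made-up feature names.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Train the "opaque" model we want to explain.
X, y = make_regression(n_samples=500, n_features=5, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Local explanation: how each feature pushed one prediction up or down.
print("Local explanation for the first row:")
for name, contribution in zip(feature_names, shap_values[0]):
    print(f"  {name}: {contribution:+.2f}")

# Global explanation: the mean absolute SHAP value ranks features
# by their influence across the whole dataset.
print("Global feature importance:")
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda p: -p[1]):
    print(f"  {name}: {score:.2f}")
```

In practice you would visualize these values with SHAP's built-in plots rather than printing them, and you would tailor which view you present to each stakeholder: the global ranking for a regulator, the per-prediction breakdown for an affected customer. The same distinction applies to LIME, which fits a simple surrogate model around each individual prediction.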
To understand the business and technology trends critical to 2021, download Forrester’s complimentary 2021 Predictions Guide here.
This post was written by Principal Analyst Brandon Purcell, and it originally appeared here.