EvoML Key Feature: Explainable AI
AI has been part of our lives for many years, and its presence will only continue to grow. From self-driving cars to credit card applications, to the series Netflix just recommended to you, AI is an essential and often unquestioned facilitator in our lives. Although these AI systems have become so complex that it is nearly impossible for ordinary users to understand how they work, we continue to trust and use them. But as we entrust AI with sensitive decisions that deeply affect people, it becomes necessary to explain the mechanisms behind them. So, when it comes to building AI models, why is explainability important, what does it mean, and how do we enable it?
Why is explainability important?
Before predictions can influence decision making, ML models need to ‘convince’ humans by explaining how a prediction is made. Business decisions are about trade-offs. Humans want to understand the cause-and-effect relationships behind the model and its predictions; this way, we learn more about the business problem and feel comfortable making the optimal trade-offs between accuracy, compliance, fairness, risk, and so on.
What does explainability mean?
Explainability refers to the extent to which the internal mechanics of an AI system can be explained in human terms, enabling users to better understand, trust and manage AI systems. Different users (e.g. business analysts, AI developers, regulators) require different forms of explanation in different contexts. For example, business users are more interested in a ‘local explanation’: why did the model make a certain prediction for a particular data point, e.g. why was my credit card application not approved? AI developers, on the other hand, want a ‘global explanation’ to gain a holistic view of the data features and of how parts of the model affect predictions (to avoid overfitting, bugs in the code, etc.). This requires understanding feature weights, feature importance, feature interactions and so on. To simplify, AI explanation needs can be grouped into four types.
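The local-versus-global distinction can be made concrete with a toy linear model. This is a minimal sketch; the feature names and weights are invented for illustration and are not EvoML's internals: a local explanation attributes one prediction to per-feature contributions, while a global explanation summarises overall feature importance.

```python
# Sketch: 'local' vs 'global' explanation for a toy linear credit model.
# Feature names and weights are illustrative assumptions, not EvoML's.
import numpy as np

feature_names = ["income", "debt_ratio", "years_employed"]
weights = np.array([0.6, -0.8, 0.3])   # learned coefficients
bias = 0.1

def predict(x):
    return float(weights @ x + bias)

def local_explanation(x):
    """Per-feature contribution to ONE prediction (why this score?)."""
    return dict(zip(feature_names, weights * x))

def global_explanation():
    """Overall feature importance across the model (|coefficient|)."""
    return dict(zip(feature_names, np.abs(weights)))

applicant = np.array([1.2, 2.0, 0.5])  # standardised inputs for one applicant
print(local_explanation(applicant))    # debt_ratio contributes -0.8 * 2.0 = -1.6
print(global_explanation())            # debt_ratio is the most influential feature
```

A high debt ratio drags this applicant's score down the most, which is exactly the kind of answer a business user wants; the global view is what a developer checks for overfitting or spurious features.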
How does EvoML enable explainable AI?
Data Insight
Understanding the quality of the data used in AI systems is part of explainability. EvoML automatically assesses data quality, detects outliers and imbalances, and generates quality reports using interactive charts. This enables users to quickly analyse and better understand their data.
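The kinds of checks described here can be sketched in a few lines. This is not EvoML's implementation, just a minimal illustration of two standard data-quality measures: IQR-based outlier detection and a class-imbalance ratio (thresholds and sample data are made up):

```python
# Minimal sketch of automated data-quality checks: IQR outlier detection
# and class-imbalance measurement. Illustrative only, not EvoML internals.
import numpy as np

def find_outliers(values, k=1.5):
    """Indices of values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return np.where((values < lo) | (values > hi))[0]

def imbalance_ratio(labels):
    """Majority-class count divided by minority-class count."""
    _, counts = np.unique(labels, return_counts=True)
    return counts.max() / counts.min()

incomes = np.array([32, 35, 31, 36, 34, 33, 250])  # 250 is an obvious outlier
labels = np.array([0] * 90 + [1] * 10)             # 9:1 class imbalance

print(find_outliers(incomes))   # -> [6]
print(imbalance_ratio(labels))  # -> 9.0
```

An automated platform would run checks like these on every column and surface the results as interactive charts rather than raw indices.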
Transparent End-to-End Process
From data transformation to model deployment, the whole process is visible to the user. Intelligent automation is provided at each step of the data science workflow, enabling users to easily understand the model through visualisation. In addition, users can reproduce experiments, generate reports, and quickly identify performance gaps and biases for AI validation and debugging.
Multi-objective Optimisation
Business decisions involve multiple objectives. With current AI platforms and technologies, businesses often have to sacrifice model performance for explainability. EvoML’s multi-objective optimisation enables users to specify more than one metric for their business problem and easily select the model that best satisfies those criteria. This allows human intelligence to interact with AI and improve model performance.
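The core idea behind multi-objective model selection can be illustrated with a Pareto filter: keep every candidate model that no other model beats on all metrics at once. The candidate models and metric values below are invented for illustration; this is not EvoML's optimiser, only the underlying concept:

```python
# Sketch: multi-objective model selection via a Pareto filter.
# Candidate names and scores are illustrative, not real EvoML output.

def dominates(a, b):
    """True if a is at least as good as b on every metric and strictly
    better on at least one (all metrics: higher is better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(models):
    """Keep only models not dominated by any other candidate."""
    return {
        name: scores for name, scores in models.items()
        if not any(dominates(other, scores)
                   for o_name, other in models.items() if o_name != name)
    }

# (accuracy, explainability score) -- both to be maximised
candidates = {
    "deep_net":    (0.95, 0.20),
    "gbm":         (0.93, 0.40),
    "linear":      (0.85, 0.90),
    "weak_linear": (0.80, 0.85),   # dominated by "linear", so filtered out
}
print(pareto_front(candidates))    # deep_net, gbm and linear remain
```

Rather than forcing a single winner, the front presents the genuine trade-offs (here accuracy versus explainability), and the user picks the point that suits their business problem.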
Model Code Download
The model code is provided for further customisation and inspection. This high degree of transparency makes it easy for auditors and other stakeholders to understand how the model made its decisions, to identify and fix any underlying issues, and to prevent them from recurring.
Model and Prediction Explanation
EvoML provides model-level and individual prediction explanations through a user-friendly interface. For models that are already easily understood, EvoML visualises the prediction with smart graphs (see Figure 1). For complex models, EvoML creates more interpretable models that approximately match the complex ones.
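Approximating a complex model with an interpretable one is commonly called a global surrogate. The sketch below shows the general technique under assumed choices (a gradient-boosted black box, a shallow decision tree as surrogate, a synthetic dataset); it is not EvoML's actual method:

```python
# Sketch of the global-surrogate technique: train a small, readable model
# to mimic a complex model's predictions, then measure how faithfully it
# agrees ("fidelity"). Models, data and depth are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# The complex "black box" model.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
bb_pred = black_box.predict(X)

# The surrogate is trained on the black box's PREDICTIONS, not the labels,
# so it explains the model rather than the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_pred)

# Fidelity: fraction of inputs where surrogate and black box agree.
fidelity = (surrogate.predict(X) == bb_pred).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
```

A depth-3 tree can then be drawn and read directly; fidelity tells you how far to trust that reading as an account of the black box.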
Automated Documentation
EvoML automatically creates model documentation for regulatory compliance. This helps standardise and maintain a consistent record of AI projects, and frees users from manual documentation.
Different Types of Explanation
EvoML’s high level of explainability augments both technical and domain experts, enabling different stakeholders to trust AI as a decision-making partner. This allows AI to be truly adopted and scaled across the organisation.
Research
The explainable AI technology in the EvoML platform is powered by our continuous AI research. Our paper “Better Model Selection with a new Definition of Feature Importance” introduces a novel feature-importance explanation that outperforms the standard cross-validation approach in model selection, both in time efficiency and in accuracy.
Explainability is critical to adopting and scaling AI. However, building and implementing explainable AI is challenging and involves many trade-offs, and businesses require different types of explanation in different contexts. EvoML’s transparent end-to-end process provides a range of techniques to explain every stage, and its multi-objective optimisation enables users to choose the level of explainability most appropriate to their use case. With EvoML, businesses can easily build transparent AI that is trustworthy and fair.