Apr 3, 2024
God created humans and they, in turn, created AI algorithms. Neither human decisions nor AI decisions are fully interpretable or explainable. What, then, is interpretable and explainable? Per the English dictionary, ‘interpret’ means to understand, construe, infer, deduce, decipher, unravel, and so on. ‘Explain’ means to elucidate, describe, enlighten, make clear, and so on. Both, in combination, are needed to make sense of a decision.
These two words have stirred up a storm in the world of AI. Advances in AI models have led to the invention of autonomous cars, automated diagnostics, disease detection, smart Q&A systems, intuitive surveillance systems, and so on. The debate is about how AI algorithms reach these decisions, and whether the internal processes behind them can be interpreted and explained in layman's terms.
While the debate continues, have we been successful in deciphering how humans make decisions, or how the brain functions while making them? Faced with the same problem, different humans make different decisions, and those decisions have had very different impacts on mankind. Brain research has been going on for many years, and it will take many more to understand and interpret how the brain processes inputs and arrives at different decisions. For example, a patient who is recommended for surgery usually seeks a second and third opinion and then makes an informed decision for a faster and safer cure. These decisions are more probabilistic than deterministic. In hindsight they can be explained with better rationale and data points, but not immediately in foresight; our foresight is far more ambiguous than our hindsight.
Traditional analytics algorithms like regression and decision trees are easier to explain than ensemble algorithms like random forest and XGBoost. The more we push for accuracy, the less interpretability and explainability we typically get. As we move towards deep learning (DL), explainability suffers even more: a DL model is effectively a black box, and that is where the challenge arises. DL models are loosely inspired by the brain, which consists of billions of neurons networked together and activated to make decisions.
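To make that trade-off concrete, here is a minimal sketch, assuming scikit-learn and its bundled breast-cancer dataset (neither appears in the original article; they are just convenient stand-ins). A shallow decision tree yields rules that can be read line by line, while a random forest built from hundreds of such trees offers, at best, a ranking of feature importances rather than a human-readable rule set.

```python
# Hedged sketch: interpretable tree vs. harder-to-interpret ensemble.
# Assumes scikit-learn is installed; dataset and hyperparameters are arbitrary.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# A shallow tree: its full decision logic prints as a handful of if/else rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))

# A forest of 300 deeper trees voting together: generally more accurate,
# but there is no single readable rule set, only aggregate feature importances.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
top5 = sorted(zip(X.columns, forest.feature_importances_),
              key=lambda pair: pair[1], reverse=True)[:5]
for name, score in top5:
    print(f"{name}: {score:.3f}")
```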
There are two schools of thought on explainable AI today. The first, led by start-ups and researchers, is focused on advancing current algorithms to improve accuracy and precision and to reduce training and inference time. They are comfortable with black-box decisions and care more about the end results than about understanding how those results are reached. Geoffrey Hinton, a pioneer of the field, said, “I am an expert on trying to get the technology to work, not an expert on social policy. One place where I do have technical expertise that’s relevant is whether regulators should insist that you can explain how your AI system works. I think that would be a complete disaster”.
The second school of thought, led by regulators, social policymakers, and ethical-AI proponents, wants AI models to be fully opened up: analyze everything under the hood, understand the complete architecture, and trace how features are learned across thousands of epochs and iterations as neurons fire through activations and weights and biases are adjusted to optimize the cost function during forward and backward propagation. Explaining how a decision emerges from these computations would be practically impossible and could put roadblocks in front of the progress currently being made in this space.
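To give a feel for what “weights and biases being adjusted to optimize the cost function during forward and backward propagation” actually involves, here is a toy sketch of a single training step on a tiny two-layer network, written in plain NumPy with made-up numbers. It is an illustration of the mechanics only, not any particular production model; real networks repeat steps like these across millions of parameters and thousands of epochs, which is why tracing every computation behind one prediction is impractical.

```python
# Toy illustration (NumPy only, made-up numbers): one forward and one backward
# pass through a 2-3-1 network, followed by a single gradient-descent update.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([[0.5, -1.2]])                      # one input example
y = np.array([[1.0]])                            # its label

W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)    # layer-1 weights and biases
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)    # layer-2 weights and biases

# Forward propagation: activations flow from input to output.
h = np.tanh(x @ W1 + b1)
y_hat = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # sigmoid output
loss = -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))  # cross-entropy cost

# Backward propagation: gradients flow back and tell us how to adjust weights.
d_out = y_hat - y                                # dLoss/d(pre-sigmoid output)
dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
d_h = (d_out @ W2.T) * (1 - h ** 2)              # gradient through tanh
dW1, db1 = x.T @ d_h, d_h.sum(axis=0)

lr = 0.1                                         # learning rate
W2 -= lr * dW2; b2 -= lr * db2
W1 -= lr * dW1; b1 -= lr * db1
print("loss after forward pass:", loss.item())
```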
Consider some of the scenarios where such systems are being deployed: credit approval, criminal justice, road safety, and medical diagnosis. Even when humans make these calls, the results vary. We still have credit denied to the deserving, genuine people arrested and later absolved by the judiciary, accidents rising as roads grow more complex, and opinions differing from one doctor to the next. We have many laws, policies, guidelines, and enforcement mechanisms around the world to manage this complex ecosystem. Would it not be utopian to expect AI systems, trained on historical data labeled by human beings and therefore seeded with the same imperfections, to be perfect? Model perfection and explainability are iterative, evolving processes that we achieve over time. The more we pilot these solutions in the real world and learn from real data, the more they mature and, for many use cases, perform better than humans. Not everything in this world needs to be done by AI algorithms anyway; what would we do if it were? It's not great to be lazy all day.
There are two approaches to developing explainable AI systems: ante-hoc and post-hoc. Ante-hoc techniques build explainability into a model right from the beginning. Post-hoc techniques accept the black box and derive explanations from the model's behavior on various test cases and their results. The Reverse Time Attention model (RETAIN) and Bayesian deep learning (BDL) are examples of the ante-hoc approach; Local Interpretable Model-Agnostic Explanations (LIME) and Layer-wise Relevance Propagation (LRP) are examples of the post-hoc approach. The best approach is to combine the two to enhance the explainability of current AI systems.
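As a rough illustration of the post-hoc route, the sketch below uses the open-source `lime` package, one implementation of LIME, to explain a single prediction of a black-box classifier. The dataset and model are arbitrary choices for the example and are not taken from the article.

```python
# Hedged sketch of post-hoc explanation with LIME.
# Assumes the `lime` and scikit-learn packages are installed.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(data.data, data.target)                # the "black box"

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction by fitting a simple surrogate model around it.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The point of the post-hoc style is visible here: LIME never looks inside the forest; it perturbs the input, watches how the predictions change, and fits a small interpretable model to that local behavior.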
The future of AI is quite promising. It is still the new kid on the block, one we need to feed, nurture, groom, care for, motivate, and help evolve. The debate on explainability and interpretability will only get noisier in the coming days. It is preferable to take a middle path between the two schools of thought, black box versus full explainability. Imperfection and uncertainty are part of the beauty of the universe, and mankind can work around them. Like humans, AI will in time mature to perceive, learn, abstract, and reason. AI is here to stay, and it will keep evolving.