
AI: It’s magic … I owe you no explanation!



Until a few years ago, Artificial Intelligence (AI) was considered the domain of an enlightened few. Computer scientists created systems that seemed to work well, and you had to trust blindly that they would give you good results. However, this is not always the case!


The amazing snow classifier


Distinguishing a husky from a wolf is not a straightforward task, yet algorithms manage to obtain 90% accuracy. Or do they?


A recent experiment at the University of Washington sought to create a classifier capable of distinguishing images of wolves from images of huskies. The system was trained on a set of images and tested on an independent set, as is usually the case in machine learning. Surprisingly, even though huskies look very similar to wolves, the system managed to obtain around 90% accuracy. The researchers were ecstatic with such a result. However, on running an explainer function capable of explaining why the algorithm obtained such good results, it became immediately evident that the model was basing its decisions primarily on the background: wolf images usually had a snowy background, while husky images rarely did. So rather than creating a wolves-vs-huskies classifier, the researchers had unwittingly created an amazing snow detector. Just by looking at performance measures like accuracy, we would never have caught that! This experiment was lab-based, but what if the lives of patients depended on that classification?
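To make the idea of an explainer function concrete, here is a minimal sketch using the open-source LIME library, one popular image explainer. The classifier below is a deliberately flawed toy stand-in (not the study's actual model): it scores images as "wolf" in proportion to their brightness, mimicking the snow-detector flaw, so the explanation mask lands on the bright background rather than the animal.

```python
# A minimal sketch of how an image explainer such as LIME can reveal
# what a classifier is actually looking at. The classifier here is a
# toy stand-in invented for illustration, not the study's real model.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classifier_fn(batch):
    # Toy "snow detector": the brighter the image, the more "wolf" it is.
    brightness = batch.mean(axis=(1, 2, 3)) / 255.0
    return np.stack([1.0 - brightness, brightness], axis=1)  # [husky, wolf]

# Placeholder image standing in for a real husky/wolf photo.
image = np.random.randint(0, 255, size=(224, 224, 3)).astype(np.uint8)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    classifier_fn,
    top_labels=2,      # explain the two most likely classes
    num_samples=1000,  # perturbed samples used to fit the local surrogate
)

# The mask marks the superpixels that pushed the prediction hardest;
# for a snow detector they cover the background, not the animal.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
highlighted = mark_boundaries(img / 255.0, mask)
```

An overlay like `highlighted` is exactly the kind of evidence that exposed the snow detector: accuracy alone said 90%, but the highlighted regions said "background".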


Making life decisions is hard!


Autonomous vehicles will have to make hard decisions, and we need to be in a position to understand the rationale behind those decisions. Luckily for those patients, a human being was in control of that system, but what happens when we have fully autonomous systems like self-driving cars?


Think about the famous trolley problem, whereby a self-driving car is about to be involved in an accident. As a result, someone will die for sure! What should the AI choose? Prioritize humans over animals? Passengers over pedestrians? More lives over fewer? Women over men? Young over old? Higher social status over lower? Law-abiders over criminals? Or should it simply stay on course and not intervene? And when someone dies, who takes responsibility: the algorithm, the programmer, the corporation, or someone else?


Time to explain?


Because of this, we need to move towards Explainable AI (XAI): the idea of creating AI algorithms capable of explaining their purpose, rationale, and decision-making process in a human-understandable way. Current AI models are opaque, non-intuitive, and difficult for people to understand; if we want people to accept the autonomous decisions taken by AI systems, users must be able to understand, trust, and effectively manage them.
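One route to XAI, where accuracy permits, is to prefer inherently interpretable models over black boxes. As a minimal sketch, a small decision tree from scikit-learn can print its entire decision-making process as human-readable rules (the bundled iris dataset here is purely illustrative):

```python
# A minimal sketch of an inherently interpretable model: a shallow
# decision tree whose learned rules can be printed verbatim.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# Every prediction can be traced to an explicit rule path, e.g.
# "petal width (cm) <= 0.80 -> class: 0".
print(export_text(tree, feature_names=list(data.feature_names)))
```

A deep neural network offers no such printout; for those, post-hoc explainers like the LIME sketch above are currently the main option.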


Apart from the issues mentioned earlier, which focus on the protection and well-being of humans, there are other considerations. Algorithms and automation are becoming more pervasive. Many business challenges are being addressed with AI-driven analytics, and we are seeing inroads in domains where the human expert was traditionally uncontested (e.g. medical diagnosis). AI systems can process much more data than human operators; they are faster and do not get tired. On the other hand, we have to be careful, since biases and erroneous decisions can also spread more quickly.


AI models are becoming larger, with millions of parameters, making them much more difficult for human users to understand. Some of these systems are trained on data produced by human operators, but this alone is not sufficient for us to trust them. Furthermore, some models offer no control over the logic behind the algorithm and no simple means of correcting their errors. With the introduction of the European General Data Protection Regulation (GDPR) on 25 May 2018, black-box approaches have become much more difficult to use in business, since they are not capable of providing an explanation of their internal workings.

GDPR also introduced the right to erasure, better known as the right to be forgotten: anyone can make a request to any organisation and demand the deletion of their personal data. However, this is much more complex than it sounds. It is not just a matter of removing one's personal data from search results. The data has to be deleted from the knowledge base storing it, from crash-recovery backups, and from copies mirrored in other data centres, and it must be physically erased from the hard disk rather than merely unlinked. All of this renders the deletion task extremely complex.
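To make the scale of that deletion task concrete, here is a hypothetical sketch of an erasure workflow. Every store name, class, and method below is invented for illustration; real systems would need far more machinery (backup rotation, disk-level overwriting, audit trails):

```python
# A hypothetical sketch of why GDPR erasure is more than deleting one row:
# the subject's data must be purged from every copy of it. All names here
# are invented for illustration.
from dataclasses import dataclass

@dataclass
class ErasureReport:
    store: str
    records_removed: int

class DataStore:
    """Stands in for a knowledge base, a backup set, or a mirrored copy."""
    def __init__(self, name, records):
        self.name = name
        self.records = records  # list of dicts with a "subject_id" key

    def erase_subject(self, subject_id):
        before = len(self.records)
        # A real system must also overwrite the freed blocks on disk
        # rather than merely unlinking them; that step is elided here.
        self.records = [r for r in self.records if r["subject_id"] != subject_id]
        return ErasureReport(self.name, before - len(self.records))

def erase_everywhere(subject_id, stores):
    # Erasure only succeeds if every copy is handled: the primary store,
    # crash-recovery backups, and mirrors in other data centres.
    return [store.erase_subject(subject_id) for store in stores]

record = {"subject_id": "u42", "email": "a@b.c"}
stores = [
    DataStore("knowledge-base", [dict(record)]),
    DataStore("backup-2018-05", [dict(record)]),
    DataStore("mirror-eu-west", [dict(record)]),
]
for report in erase_everywhere("u42", stores):
    print(f"{report.store}: removed {report.records_removed} record(s)")
```

Even this toy version shows the coordination problem: missing a single store (say, an offline backup tape) leaves the organisation non-compliant.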


Conclusion


As the world moves forward, AI has emerged as the next big initiative, helping companies across all sectors. We can see all sorts of AI helping us in our daily lives, from personal assistants on our mobile phones to self-driving cars. The importance of AI is expected to grow further in the coming years, so much so that it is being dubbed the new electricity of this century.


However, we do acknowledge that AI can be somewhat complex to use. People get lost among terms such as machine learning, natural language processing, computer vision, and many more. Because of this, many business leaders are not sure where to start. The aim of Agilis is to help business leaders and guide them through the AI transformation process of their company. We provide strategic consulting that helps SMEs and technology experts lead their companies towards solving their problems. We make AI more accessible, affordable, and efficient. We facilitate capacity building in these organisations by providing resources to enhance their current abilities, or by taking care of all their AI needs.

Author: Prof. Alexiei Dingli
Original link: https://becominghuman.ai/its-magic-i-owe-you-no-explanation-explainableai-43e798273a08
