Can we build reliable 'AI' as MaaS models are expected to take over?

Regulators in several geographies have defined the components and frameworks, but what is needed to create "trustworthy AI"? And what challenges do we see in delivering it?

AI continues to be incorporated into everyday business processes, industries and use cases. However, one concern remains constant: the need to understand how "AI" reaches its decisions. Unless this happens, people will not fully trust AI decisions.

The opacity of these systems, often referred to as "black-box AI," raises several ethical, business and regulatory concerns, creating barriers to AI/ML adoption, especially for critical functions and in highly regulated industries. No matter how accurately a model makes predictions, unless there is clarity about what is going on inside the model, blind trust in the model will always be a valid concern for all stakeholders. So how can one trust AI?

AI decisions – to trust or not to trust

To trust any system, accuracy is never enough; the justification for a prediction is equally important. A prediction may be right, but is it right for the right reasons? This can only be established if there is sufficient explanation and evidence to support the prediction. Let's understand this through an example from the healthcare industry.

Healthcare is considered one of the most exciting areas of application for AI, with intriguing applications in radiology, diagnostic recommendations and personalization, and drug discovery. Due to the growing complexity of medicine, data overload and a shortage of experts, diagnosing and treating critical illnesses has become harder. Although AI/ML solutions for such tasks can offer the best balance between predictive ability and diagnostic scope, the significant problems of "explainability" and "lack of confidence" remain. A model's prediction will be accepted only if the model can provide convincing supporting evidence and an explanation that satisfies all users, i.e. doctors, patients and governing bodies. Such explanations help the doctor judge whether a decision is reliable and create an effective dialogue between doctors and AI models.

Achieving reliable AI is, however, even harder. AI systems are inherently complex. Ideating, researching and testing such systems is difficult, and keeping them running in production is harder still. Model behavior often differs between training and production. If users are to trust an AI model, it cannot keep breaking.

What are the critical components of reliable AI?

  • Explainability and transparency

In machine learning, explainability refers to the ability to understand and interpret the behavior of a model, from input to output. It addresses the "black box" problem by making models transparent. It should be noted that "transparency" and "explainability" are quite different. Transparency makes it clear what data is being looked at, which inputs produce which outputs, and so on. Explainability covers a much wider scope: explaining technical aspects, demonstrating the impact of changing variables, showing how much weight is given to each input, and more.

For example, if an AI algorithm predicted a cancer prognosis from the patient data provided, the doctor would need evidence and an explanation for that prediction; without it, the output is simply an unreliable proposition.
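
As a rough illustration, the sketch below shows one simple way to attach per-prediction evidence to a model's output. The feature names, data and model are purely hypothetical, and dedicated explainability tooling (such as the LIME and SHAP methods discussed later) is far more sophisticated; this only conveys the idea of reporting which inputs pushed a prediction up or down.

```python
# Minimal sketch: per-prediction feature contributions for a linear model.
# Feature names and data are purely illustrative, not a real clinical dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["tumor_size_mm", "age", "biomarker_a", "biomarker_b"]

# Synthetic training data standing in for historical patient records.
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain one new patient's prediction: for a linear model, each feature's
# contribution is its coefficient times its deviation from the training mean.
patient = rng.normal(size=(1, len(features)))
proba = model.predict_proba(patient)[0, 1]
contributions = model.coef_[0] * (patient[0] - X.mean(axis=0))

print(f"Predicted probability of malignancy: {proba:.2f}")
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name:15s} pushed the score {'up' if c > 0 else 'down'} by {abs(c):.2f}")
```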

  • Traceability and accountability

Much has been said about the opacity, or black-box nature, of AI algorithms. An "AI" solution should clearly define the roles and responsibilities involved in implementing it, so that the cause of any failure can be traced back. Capturing such artifacts and recording them systematically can provide a traceable, verifiable and in-depth lineage of the model and its decisions.

  • Consistent behavior in production

AI models in production may perform differently than they did in training and test environments, and models suffer from data or target drift. Even if models are periodically retrained, there is no guarantee that results will remain consistent in production. Frequent breakdowns reduce the reliability of the model and create distrust in the user's mind.
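
To make this concrete, here is a minimal, hypothetical sketch of how a team might watch for such divergence by comparing a feature's distribution in production against the training data; the data, threshold and response are illustrative only.

```python
# Minimal sketch: flag distribution drift between training and production data
# with a two-sample Kolmogorov-Smirnov test. Threshold is illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # seen during training
live_feature = rng.normal(loc=0.4, scale=1.2, size=1_000)   # incoming production data

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}): review or retrain the model.")
else:
    print("No significant drift detected for this feature.")
```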

  • Freedom from bias

AI/ML model predictions can be error-free on test data and still carry major biases. Models reproduce associations found in their training data, and biases prevalent in that data can easily creep into production. Taking the earlier healthcare example, markers for a particular disease may differ between American and Asian populations; ideally, the model should be trained on, and able to distinguish between, both populations when making predictions.
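
A simple, hypothetical audit of this kind might compare a model's performance across population groups, as in the sketch below; the data are synthetic, and the point is only that an aggregate metric can hide group-level failures.

```python
# Minimal sketch: an aggregate metric can hide poor performance on a subgroup.
# The data are synthetic; the "marker" behaves differently in the two groups.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(2)
n = 5_000
group = (rng.random(n) < 0.1).astype(int)          # group 1 is a small minority
X = rng.normal(size=(n, 3))
marker = np.where(group == 0, X[:, 0], -X[:, 0])   # marker sign flips by group
y = (marker + rng.normal(scale=0.3, size=n) > 0).astype(int)

# Train without any awareness of the group; the model fits the majority pattern.
model = LogisticRegression().fit(X, y)
pred = model.predict(X)

print(f"Overall accuracy: {accuracy_score(y, pred):.2f}")
for g in (0, 1):
    m = group == g
    print(f"Group {g}: recall = {recall_score(y[m], pred[m]):.2f}")
```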

  • The need for continuous adaptation

In the real world, knowledge is constantly evolving, especially in prediction and diagnosis. With the growing volume of scientific publications and studies, the accepted conclusion for the same preconditions may change gradually or drastically. "AI" solutions must ensure they consume up-to-date knowledge.

  • Human controls in the loop

Fully automated decision-making makes it difficult to protect AI systems against uncertainty, and the necessary controls cannot be defined without understanding how the system works. With human oversight, it becomes much easier to manage and shape AI behavior to reflect organizational, societal and business preferences.
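
In practice, one common pattern is to automate only high-confidence predictions and route everything else to a human reviewer. The sketch below is an assumed, simplified version of such a triage rule; the threshold and case identifiers are invented for illustration.

```python
# Minimal sketch: route low-confidence predictions to a human reviewer.
# The threshold is illustrative; real systems would persist cases and
# capture reviewer feedback for retraining.
from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    probability: float
    outcome: str

REVIEW_THRESHOLD = 0.85  # assumed confidence cut-off, tuned per use case

def triage(case_id: str, probability: float) -> Decision:
    """Automate only when the model is confident; otherwise defer to a human."""
    if probability >= REVIEW_THRESHOLD or probability <= 1 - REVIEW_THRESHOLD:
        label = "positive" if probability >= REVIEW_THRESHOLD else "negative"
        return Decision(case_id, probability, f"auto-{label}")
    return Decision(case_id, probability, "sent to human review")

for cid, p in [("case-001", 0.97), ("case-002", 0.55), ("case-003", 0.08)]:
    print(triage(cid, p))
```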

Regulatory Guidelines for Achieving Trusted AI

Globally, the regulation of AI has become a common discussion point for governments, and many countries are creating a regulatory environment to decide what they consider acceptable uses of artificial intelligence (AI).

In 2019, the European Union published its Ethics Guidelines for Trustworthy Artificial Intelligence. The guidelines present seven key requirements that AI systems must meet to be considered trustworthy: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.

In October 2022, the White House Office of Science and Technology Policy (OSTP) released its Blueprint for an AI Bill of Rights, a non-binding set of guidelines for the design, development and deployment of artificial intelligence (AI) systems. This came one year after the White House announced its intention to develop a process for a "bill of rights" to protect people from powerful technologies being created.

In February 2022, the Monetary Authority of Singapore (MAS) announced the release of five white papers detailing Fairness, Ethics, Accountability and Transparency (FEAT) assessment methodologies to guide the responsible use of AI by financial institutions (FIs).

While we can derive many relevant elements from these guidelines, some additional elements can now be added to the list, as the technology has advanced over the past few years.

So what elements are needed for an AI framework to be reliable?

What are some of the obstacles to reliable AI?

The components of trusted AI are fairly standardized across geographies and use cases, but adoption remains a work in progress across all of them. There are many reasons for this.

  • Explanations and evidence are highly contextual!

The complex nature of AI makes it difficult for humans to interpret the logic behind AI predictions, and whatever is generated today is comprehensible only to an AI expert. Typically, data science or ML teams look at these explanations and try to understand the behavior of the model, but when it comes to connecting them to a business meaning, much gets lost in translation. The explanation must be translated into a language understandable to all users; the purpose is diluted if only a few users can understand it. It becomes the task of AI creators to offer explanations that every type of user can readily understand, and it is not easy to achieve explanation templates that are acceptable for all users and all use cases. Explanations and evidence are highly contextual to the use case, the user and the geography.

  • Explanations must be true to the model.

Just to meet regulatory or user requirements, we have seen cases where explanations are generated using surrogate models or unrealistic synthetic data. Methods such as LIME and SHAP use perturbed or synthetic samples to approximate a model's behavior. While they provide a good starting point, they cannot be used as the sole approach to explanation in sensitive use cases, and several studies have shown that these methods can be fooled. Explanations must be consistent, accurate and true to the model.
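
One way to keep such explanations honest is to measure how faithfully a surrogate reproduces the underlying model before relying on it. The following sketch, using entirely synthetic data, fits a shallow decision tree as a global surrogate of a more complex model and reports its fidelity; it illustrates the idea of a fidelity check rather than any prescribed method.

```python
# Minimal sketch: measure how faithful a simple surrogate explanation model is
# to the underlying "black box". Data and models are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
X = rng.normal(size=(2_000, 5))
y = ((X[:, 0] * X[:, 1] > 0) ^ (X[:, 2] > 0.5)).astype(int)  # non-linear ground truth

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
bb_pred = black_box.predict(X)

# Global surrogate: a shallow tree trained to mimic the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_pred)
fidelity = accuracy_score(bb_pred, surrogate.predict(X))

# If fidelity is low, explanations read off the surrogate do not reflect the
# actual model and should not be the sole evidence offered to users.
print(f"Surrogate fidelity to the black box: {fidelity:.2f}")
```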

And even after all these explanations are provided, AI solutions can face expert bias: users may simply trust a human more than a machine's prediction. The reasons may not be intellectual, but rooted in human relationships.

Users often expect to dig further into an explanation or piece of evidence to find the learning source and validate it. Although a simplistic lineage tree can be built around this, achieving a fully dynamic provenance path is very complex, as each step along the path can itself be debated for authenticity.

Conclusion:

Many users may assume that "AI" should have matured by now, given the numerous hype cycles of recent years. A pattern has emerged: whenever we find a solution to one aspect of a problem using "AI", we tend to overestimate its sufficiency and tout it as the "blue pill" of "AI" superiority. In reality, like any other technology, it will take years to perfect the pattern and make it fully robust; by then, it will have become the norm in the industry and found mass acceptance. To achieve this, we need all blocks of the innovation ecosystem to collaborate and co-validate: regulatory, academic, corporate and customer. The definitions of the ideal components of "trustworthy AI" will continue to evolve in the coming months and years, but "AI" may find near-term acceptance within a limited scope. We are already witnessing an upsurge in identifying ideal use cases for "AI" today, and almost all of them treat a human-in-the-loop component as a critical criterion for sensitive use cases. We will continue to see this trend for a few more years, until the balance of trust between "AI" and "human experts" shifts.
