Can AI Be Trusted? The Case For Explainable AI

“Let me ‘splain. No, there is too much.” – Inigo Montoya, The Princess Bride

Fans of the 1987 adventure comedy film The Princess Bride know from swordsman Inigo Montoya that sometimes there is just too much to explain. Yet the algorithms behind artificial intelligence – used for everything from loan approval decisions to customer service – have some explaining to do.

Take the newly launched Chat Generative Pre-trained Transformer, or ChatGPT. With over one million users in the first five days of its November 2022 launch, OpenAI’s latest release has been used to write everything from LinkedIn posts to school essays, explanations of complex theories to jokes. (But not Forbes columns – the media company updated its contributor guidelines in January to require original work, with no chatbots or AI output allowed.) It has the promise of disrupting everything from search engines – the New York Times reported that Google issued a “code red” following the launch – to simple customer service roles already being performed by less sophisticated chatbots.

But it has its detractors. Educators are understandably concerned about student plagiarism, leading the New York City education department to block access on its devices and networks. Cybersecurity experts fear that ChatGPT will expand the breadth and reach of cybercriminals, given the ease with which malicious code can be written, and there is a real risk of misinformation campaigns flooding the internet with disinformation. While the application typically provides human-like responses, developer community GitHub is full of ChatGPT failures. Some are funny, like an explanation of how the eggshell keeps the white and yolk together when frying an egg, or the model’s inability to answer what gender the first female President of the U.S. will be. Yet some are downright scary, for example deciding whether a person should be tortured based on their age, sex, ethnicity, and nationality. And if you don’t know the right answer, nor the algorithms behind the answers, you can’t know whether the response is responsible, trustworthy, and ethical.

And that’s the case with artificial intelligence in general. AI has the potential to disrupt and assist everything from medical and other critical problem-solving to reviewing large amounts of data for criminal cases, and even countering the very cyber threats that, it is feared, bad actors will use AI to create. Yet it is only as good as the algorithms on which it is built. And those algorithms are not generally made public. TikTok CEO Kevin Mayer first promised increased transparency in 2020, and the company has continued with subsequent proposals as the social media platform contests arguments that it is a national security concern. Facebook unveiled its Why Am I Seeing This? function in 2019, and Instagram expanded its Account Status transparency tools in December. Prior to buying Twitter, Elon Musk asserted that he would make its algorithms open source.

Algorithm transparency, and the bias that can result when too much faith is placed in those algorithms, made news in 2019 when no less than Apple, Inc. co-founder Steve Wozniak alleged that the Goldman Sachs-backed Apple Card’s credit algorithms discriminated by approving him for 10 times more credit than his wife, despite their shared bank and financial accounts. This followed viral tweets from programmer and founder David Heinemeier Hansson about his wife’s thwarted appeals to increase her credit limit from 1/20th of his, despite their joint finances. Customer service agents were quick to defend the approval process as non-discriminatory, yet couldn’t explain the discrepancy other than to blame the algorithm. While the New York Department of Financial Services found that the bank did not break fair lending laws, it noted the lack of transparency in fair credit decisions.

So what is being done?

Joy Buolamwini experienced bias in facial recognition software while earning her PhD at the Massachusetts Institute of Technology. The software didn’t detect her darker skin tone and facial features unless she wore a white mask, prompting her to found the Algorithmic Justice League to raise awareness and prevent adverse impacts through equitable and accountable AI. Similar organizations responding to the issue of AI trust and transparency include the Data & Trust Alliance, a non-profit established in 2020 and comprised of leading organizations that collaborate across industries to share information and mutually adopt responsible practices. Similarly, Germany-based non-profit AlgorithmWatch researches and advocates for analysis of automated decision-making (ADM) systems, including their impact on society.

On the regulatory front, the Algorithmic Accountability Act of 2019 was introduced in Congress that year. Intended to direct the Federal Trade Commission to require review of the algorithms used by organizations that use, store, or share personal information, the proposal has not been acted on since being referred to the Subcommittee on Consumer Protection and Commerce that April.

While working at companies like Pinterest, Twitter, and Microsoft, and on Facebook’s “Why am I seeing this?” function, data and software engineer Krishna Gade realized there were plenty of tools like TensorFlow and PyTorch to help developers build models, but not much to help them analyze or explain how those models work. Recognizing that organizations would be uncomfortable deploying machine learning at scale without knowing how it works, particularly in regulated industries like banking and insurance, he founded Fiddler to address the issue and to extend the reach of explainable AI beyond a single employer. Fiddler acts as an independent third party, applying its model performance management platform to help companies assess how models are performing and identify bias in models and data. This new and growing field finds large cloud platforms like Microsoft and Google joining the ranks of startups to create and roll out explainable AI systems.
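To make the idea concrete, here is a minimal sketch of one widely used explainability technique, permutation feature importance: shuffle one input feature at a time and measure how much the model’s accuracy degrades. The model, data, and feature names below are synthetic placeholders, and the sketch illustrates the general technique rather than Fiddler’s platform.

```python
# A minimal sketch of permutation feature importance: shuffle one feature
# at a time and measure how much the model's accuracy drops. A large drop
# suggests the model leans heavily on that feature -- a starting point for
# asking whether that reliance is appropriate (e.g., a proxy for gender).
# The model, data, and feature names are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                 # four synthetic features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # label driven by features 0 and 2

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["income", "age", "zip_code", "tenure"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Output like this doesn’t prove a model is fair, but it surfaces which inputs drive its decisions, which is the first question a lender’s compliance team would need answered.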

In the book Sapiens: A Brief History of Humankind, author Yuval Noah Harari asserts that the reason humans were able to scale as a species is the ability to rely on abstractions and amplify trust. It’s what allows 100,000 football fans to peacefully watch a game in the same stadium, and billions of people to be governed by leaders they have never met. Gade explains, “for AI to be successfully implemented at scale, people need to learn to rely on similar abstractions, and AI needs to amplify trust. People need to understand how the machine is making its decisions. They want to know why their loan application is denied or approved, how their mortgage rate is set, and how a clinical diagnosis and treatment is determined. And they need to know that the organizations implementing AI understand the data, seek to improve it, fix errors, build intuition, and build trust.”

AI applications such as ChatGPT mine tremendous amounts of data across the entire web to generate responses. Without citations, the output is unattributable, and therefore inherently difficult to trust or verify. Gade suggests that a technique called chain of thought prompting is a promising direction that may be used in the future to explain the output by making the input more transparent.
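For readers unfamiliar with the term, chain-of-thought prompting simply asks the model to show its intermediate reasoning before giving an answer, so a human reviewer can inspect the chain that produced the output. A minimal sketch, using an illustrative loan scenario and prompt wording of my own; send each prompt to whatever LLM client you use:

```python
# A minimal sketch of chain-of-thought prompting. Instead of asking only
# for a verdict, the second prompt asks the model to state its reasoning
# step by step, making the path from input to output checkable.
# The loan scenario and wording are illustrative placeholders.

question = (
    "A loan applicant earns $4,000/month, pays $300/month on existing debt, "
    "and the proposed mortgage payment is $1,300/month. "
    "Does this meet a 35% debt-to-income limit?"
)

# Standard prompt: the model returns only a verdict, with no visible rationale.
plain_prompt = question + "\nAnswer yes or no."

# Chain-of-thought prompt: the model must show the calculation first,
# so the reasoning behind the verdict is visible and can be challenged.
cot_prompt = (
    question
    + "\nThink step by step: compute the total monthly debt, divide by income, "
      "compare to 35%, and only then give your answer."
)

print(plain_prompt)
print()
print(cot_prompt)
```

Here the correct chain is easy to verify by hand – ($300 + $1,300) / $4,000 is 40%, over the limit – which is exactly the kind of auditability a bare “no” cannot provide.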

He adds that explainable AI is not just a tool such as the one Fiddler has created, but also a process. “Trustworthy and responsible AI involves bringing together stakeholders, AI teams, compliance teams, and subject matter experts to deploy these tools continuously. One-off reviews will not catch ongoing issues,” and will not result in enduring trust.
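As a concrete illustration of “continuous, not one-off,” a monitoring job might periodically compare the distribution of a live input feature against its training-time baseline and alert when the two drift apart. This is a generic sketch, not Fiddler’s implementation; the feature, data, and threshold are synthetic placeholders a real team would tune per feature.

```python
# A generic sketch of continuous model monitoring: compare the live
# distribution of a feature against its training baseline on a schedule,
# and alert when they drift apart. The feature, data, and threshold are
# synthetic placeholders, not any vendor's implementation.
import numpy as np
from scipy.stats import wasserstein_distance

def has_drifted(baseline: np.ndarray, live: np.ndarray, threshold: float) -> bool:
    """Flag a feature whose live distribution has moved away from training."""
    return wasserstein_distance(baseline, live) > threshold

rng = np.random.default_rng(1)
baseline_income = rng.normal(60_000, 15_000, size=10_000)  # training data
live_income = rng.normal(52_000, 15_000, size=1_000)       # this week's traffic, shifted

if has_drifted(baseline_income, live_income, threshold=2_000):
    print("ALERT: income distribution has drifted; review the model before trusting it")
```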

AI is here to stay. Whether it fulfills the vision of helping society and mankind, or harms them, remains to be seen, and will be largely determined by the extent to which the algorithms on which it is built are responsible, monitored, and explainable.

Check out my other columns here.

What do you think?
