Is Artificial Intelligence really unbiased? | By Anshika & Sivkan
Nov 9, 2021
5 min read
When it comes to what technology, computers, or artificial intelligence can or cannot do without human intervention, there are numerous views defining the possibilities and perils of AI. As the world develops, our lives grow more complex, and in the race for development, data has become a necessity. These complex processes now generate data on such a large scale that it is almost impossible for humans to analyse and interpret it and then make decisions. So, to ease the process of problem-solving, humans introduced the concept of Artificial Intelligence. In simple words, Artificial Intelligence is the attempt to make computers act like humans.
However, even today most people don’t know what artificial intelligence actually is, even though they use it in their daily lives. To illustrate, most of us subscribe to at least one OTT platform, and one very popular platform, Netflix, runs a recommendation engine (powered by AI) estimated to be worth $1 billion a year.
In other words, AI is something that humans have designed to mimic humans themselves. Sounds weird? Stephen Hawking once told the BBC, “The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
One of the biggest reasons organisations and people all over the world use artificial intelligence is that they believe it provides them with unbiased and objective results in almost every situation. But is that really true? According to Larry Page, “Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We’re nowhere near doing that now. However, we can get incrementally closer to that, and that’s basically what we work on.” At the same time, how can we forget that even a computer is designed by a human? And humans have a natural tendency to be biased and make mistakes, which is eventually passed on to machines. For instance, technical experts often draw on personal experience and preferences when selecting data and designing programs or apps for public use. So what we think the machine is doing automatically is actually built into code that was written by a human being. As long as people write the code, humans will have to wrestle with their own biases.
Let’s understand this bias with the help of a real-life example. Amazon used AI in a recruiting tool, and it caused a blunder: the recruiting engine was discovered to be biased. Isn’t that surprising?
The team had built computer programs to review job applicants’ resumes. The aim was to shortlist the top 5 resumes out of 100 within minutes, rating every resume on a scale of 1 to 5. What they thought would be the perfect way to assess resumes actually turned out to be unfair. The drawback was that the tool had been trained on patterns in resumes submitted over the previous 10 years. The computer models learned to rate resumes by the skills common in past candidates’ resumes, and since IT companies were dominated by men during that decade (a pattern that continues even now), the AI tool penalised resumes that contained words like “women’s”. This is how many good candidates got rejected. Because it worked on patterns, the tool also favoured candidates who used the same verbs as other resumes: resumes containing verbs such as “executed” and “captured”, commonly found on male engineers’ resumes, were shortlisted. But that doesn’t mean resumes without those words, or that differed from the pattern, were not good enough.
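The mechanism behind this failure can be sketched in a few lines of code. The snippet below is a hypothetical, deliberately naive word-weight scorer (not Amazon’s actual system, whose details are not public): it learns weights from past hiring outcomes, and because the historical “hired” pile skews toward one group’s vocabulary, a word like “women’s” picks up a negative weight even though it says nothing about skill.

```python
from collections import Counter

# Hypothetical historical training data: resumes labelled by past hiring
# outcomes. Past hires skewed male, so vocabulary correlated with gender
# becomes spuriously predictive.
hired = [
    "executed project captained team executed deployment",
    "captured market led executed rollout",
    "executed migration captured requirements",
]
rejected = [
    "led womens chess club organised outreach",
    "womens society treasurer coordinated events",
]

def word_weights(pos_docs, neg_docs):
    """Naive scoring: weight(word) = count in hired - count in rejected."""
    pos = Counter(w for d in pos_docs for w in d.split())
    neg = Counter(w for d in neg_docs for w in d.split())
    return {w: pos[w] - neg[w] for w in set(pos) | set(neg)}

def score(resume, weights):
    return sum(weights.get(w, 0) for w in resume.split())

weights = word_weights(hired, rejected)

# Two candidates with identical qualifications; one mentions "womens".
a = "executed data pipeline captured insights"
b = "executed data pipeline captured insights womens coding club mentor"
print(score(a, weights) > score(b, weights))  # True: the gendered word drags b down
```

Nothing in the code mentions gender as a feature; the bias arrives entirely through the training data, which is exactly why it went unnoticed for so long.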
All this shows that AI cannot simply overrule the judgment of human minds. Though it is time-consuming, human review can in the end assure better accuracy when it comes to rational decisions. Employers have long dreamed of harnessing technology to widen the hiring net and reduce reliance on the subjective opinions of human recruiters. But computer scientists such as Nihar Shah, who teaches machine learning at Carnegie Mellon University, say there is still much work to do. “How to ensure that the algorithm is fair, how to make sure the algorithm is really interpretable and explainable — that’s still quite far off,” he said.
We can all agree that AI removes a degree of subjectivity from decision-making. But since it cannot remove bias altogether, the ultimate goal is to reduce it as far as possible, and there are several ways companies can steer clear of discriminatory algorithms while deploying AI in routine tasks. One is transparency: all employees should be made aware of how AI tools work in the organization and how those tools affect decisions, and organizations should consult employees before adopting such tools, or at least justify why they are being used and how they help the company. In cases like Amazon’s, companies should acquaint recruiters with how resumes are going to be assessed, so that group decisions ensure the tools are used well. For decisions that require personal or expert judgment, or that can greatly affect a person’s career or life, reliance on AI should be minimised; routine decisions or low-skill tasks, on the other hand, can be done entirely by computer systems. Another way to reduce bias is to try the same task with different algorithms and have engineers monitor how the results change with each variation, so that the final model is created only after a few trials and there is less scope for prejudice. Finally, there are existing legal standards specifically designed to provide a good baseline for organizations seeking to combat unfairness in their AI; these standards recognize the impracticality of a one-size-fits-all approach to measuring unfair outcomes.
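The “try the same task with different algorithms and monitor the change” idea can be made concrete with a small sketch. The example below is hypothetical: it scores the same candidate pool with two made-up shortlisting rules and compares them on a simple fairness check, the ratio of selection rates between two groups (the intuition behind the “four-fifths rule” used in some legal standards the article alludes to).

```python
# Hypothetical candidate pool: (group, years_experience, keyword_matches).
candidates = [
    ("A", 5, 3), ("A", 2, 4), ("A", 7, 1), ("A", 3, 2),
    ("B", 5, 2), ("B", 6, 2), ("B", 2, 4), ("B", 4, 1),
]

def rule_keywords(c):
    """Shortlist on keyword matches alone (mimics pattern-matching tools)."""
    return c[2] >= 3

def rule_blended(c):
    """Shortlist on a blend of experience and keywords."""
    return c[1] + c[2] >= 6

def selection_ratio(rule):
    """Min group selection rate divided by max group selection rate.
    1.0 means both groups are shortlisted at the same rate."""
    rates = {}
    for g in ("A", "B"):
        group = [c for c in candidates if c[0] == g]
        rates[g] = sum(rule(c) for c in group) / len(group)
    return min(rates.values()) / max(rates.values())

for rule in (rule_keywords, rule_blended):
    print(rule.__name__, round(selection_ratio(rule), 2))
```

On this toy data the keyword-only rule selects group B at half the rate of group A, while the blended rule treats both groups equally; monitoring a metric like this across algorithm variants is one way engineers can catch prejudice before the final model ships.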
At the same time, one should realise that there is always room for improvement and continuously strive to adapt to a dynamic environment. When deployed thoughtfully, technology can be a powerful force for good; when it is not, the same boon can produce outcomes that neither businesses nor society at large can afford. And as Jean Baudrillard once said, “The sad thing about artificial intelligence is that it lacks artifice and therefore intelligence.”