AI bias is an ongoing problem, but there's hope for a minimally biased future

02:40 | 05 May 2021 | Source: techrepublic.com

TechRepublic's Karen Roby spoke with Mohan Mahadevan, VP of research for Onfido, an ID and verification software company, about bias in artificial intelligence. © Image: cherezoff/Shutterstock

TechRepublic's Karen Roby spoke with Mohan Mahadevan, VP of research for Onfido, an ID and verification software company, about bias in artificial intelligence. The following is an edited transcript of their conversation.

Karen Roby: We talk a lot about AI and the misconceptions involved here. What is the biggest misconception? Do you think it's that people just think that it should be perfect, all of the time?

SEE: Hiring Kit: Video Game Programmer (TechRepublic Premium)

Mohan Mahadevan: Yeah, certainly. I think whenever we try to replace any human activity with machines, the expectation is that the machine will be perfect. And we tend to focus very much on finding problems, every little nitpicky problem the machine may have.

Karen Roby: All right, Mohan. And if you could just break down for us, why does bias exist in AI?

Mohan Mahadevan: AI is driven primarily by data. AI refers to the process by which machines learn how to do certain things, driven by data. Whenever you do that, you have a particular dataset. And any dataset, by definition, is biased, because there is no such thing as a complete dataset, right? And so you're seeing a part of the world, and from that part of the world, you're trying to understand what the whole is like. And you're trying to model behavior on the whole. Whenever you try to do that, it is a difficult job. And in order to do that difficult job, you have to delve into the details of all the aspects, so that you can try to reconstruct the whole as best as you can.
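The point that any dataset is a partial, skewed view of the whole can be made concrete with a small sketch. This is a minimal illustration, not anything from Onfido: the group labels and numbers are invented, and "bias" here is reduced to one simple measure, the gap between a group's share of the training sample and its share of the real population.

```python
def representation_gap(sample, population):
    """Compare each group's share of a sample against its share of the
    full population. Any nonzero gap is sampling bias: the dataset sees
    only a part of the world."""
    def shares(items):
        counts = {}
        for group in items:
            counts[group] = counts.get(group, 0) + 1
        total = len(items)
        return {g: c / total for g, c in counts.items()}

    pop_shares = shares(population)
    sample_shares = shares(sample)
    return {g: sample_shares.get(g, 0.0) - pop_shares[g] for g in pop_shares}

# An invented population with three groups, and a convenience sample that
# over-represents group "A" -- the kind of skew that creeps into datasets.
population = ["A"] * 50 + ["B"] * 30 + ["C"] * 20
sample = ["A"] * 40 + ["B"] * 8 + ["C"] * 2

gaps = representation_gap(sample, population)
# Group "A" is over-represented; "B" and "C" are under-represented, so a
# model trained on this sample would see those groups far less often.
```

A real audit would look at many attributes and their intersections, but even this single-attribute check shows why "a complete dataset" doesn't exist: you can only measure how far your slice is from the whole you are trying to model.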

Karen Roby: Mohan, you've been studying and researching AI for many years now. Talk a little bit about your role, there at Onfido, and what your job entails.

Mohan Mahadevan: Onfido is a company that takes a new approach to digital identity verification. So what we do is we connect the physical identity to a digital identity, thereby enabling you to prove who you are, to any service or product that you wish to access. It could be opening a bank account, or it could be renting a car, or opening an account and buying cryptocurrency, in these days. What I do, particularly, is that I run the computer vision and the AI algorithms that power this digital identity verification.

SEE: Digital transformation: A CXO's guide (free PDF) (TechRepublic)

Karen Roby: When we talk about fixing the problem, Mohan, "how" is a very complex issue when we talk about bias. How do we fix it? What type of intervention is needed at different levels?

Mohan Mahadevan: I'll refer back to my earlier point, just for a minute. What we covered there was that any dataset by itself is incomplete, which means it's biased in some form. Then, when we build algorithms, we can exacerbate that problem by adding more bias into the situation. Those are the first two things we need to pay close attention to and handle well. Then what happens is, the researchers who formulate these problems bring their own human bias into the problem. That could either fix the problem or make it worse, depending on the motivation of the researchers and how focused they are on solving this particular problem. Lastly, let us assume that all of these things worked out really well. OK? The researchers were unbiased, the dataset completion problem was solved.

The algorithms were modeled correctly. Then you have this perfect AI system that is currently unbiased or minimally biased. There's no such thing as unbiased. It's minimally biased. Then, you take it and apply it in the real world. You take it to the real world. And the real world data is always going to drift and move and vary. So, you have to pay close attention to monitor these systems when they're deployed in the real world, to see that they remain minimally biased. And you have to take corrective actions as well, to correct for this bias as it happens in the real world.
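The monitoring-and-correction step described above can be sketched as a per-group error check against a pre-deployment baseline. This is only an illustration of the idea: the group names, record format, and tolerance threshold are all invented for the example, not taken from any real deployment.

```python
def group_error_rates(records):
    """records: (group, predicted, actual) tuples from production traffic.
    Returns the observed error rate per group."""
    totals, errors = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted != actual:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / n for g, n in totals.items()}

def drift_alerts(live_rates, baseline_rates, tolerance=0.05):
    """Flag any group whose live error rate has drifted above its
    pre-deployment baseline by more than the tolerance."""
    return [g for g, rate in live_rates.items()
            if rate - baseline_rates.get(g, rate) > tolerance]

# Baseline error rates measured before deployment (hypothetical numbers).
baseline = {"group_x": 0.02, "group_y": 0.02}

# Real-world data has drifted: group_y's predictions now miss often.
live = group_error_rates([
    ("group_x", 1, 1), ("group_x", 0, 0), ("group_x", 1, 1), ("group_x", 0, 0),
    ("group_y", 1, 0), ("group_y", 0, 1), ("group_y", 1, 1), ("group_y", 0, 0),
])
alerts = drift_alerts(live, baseline)  # group_y has drifted past tolerance
```

In practice the corrective action behind such an alert might be retraining on fresher data or recalibrating per group; the point of the sketch is just that "monitor for drift" is a concrete, measurable loop, not a one-time check.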

SEE: Hyperautomation takes RPA to the next level, allowing workers to do more important tasks (TechRepublic)

Karen Roby: I think people hear a lot about bias and they think they know what that means. But what does it really mean, when bias exists in an AI?

Mohan Mahadevan: In order to understand the consequences, let's look at all the stakeholders in the equation. You have a company that builds a product based on AI. And then you have a consumer that consumes that product, which is driven by AI. So let's look at both sides, and the consequences are very different on both sides.

On the human side, if I get a loan rejected, it's terrible for me, right? I'm from India, and even if an AI system were proven to be fair for all Indian people, when my loan gets rejected, I don't care that it's fair for all Indian people. It affects me very personally and very deeply. So, as far as the individual consumer goes, individual fairness is a very critical component.

As far as the companies go, and the regulators and the governments go, they want to make sure that no company is systematically excluding any group. So they don't care so much about individual fairness; they look at group fairness. People tend to think that group fairness and individual fairness are the same thing, that if you just solve the group problem, you're OK. But the reality is, when you look at it from the perspective of the stakeholders, the consequences are very different.
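The distinction between the two kinds of fairness can be shown in a few lines. This is a deliberately simplified sketch with invented groups and decisions, using equal approval rates as a stand-in for group fairness: the group-level statistic can look perfectly balanced while particular individuals in every group are still rejected.

```python
def approval_rates(decisions):
    """decisions: (group, approved) pairs. Group fairness, as regulators
    frame it here: are approval rates roughly equal across groups?"""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        if ok:
            approved[group] = approved.get(group, 0) + 1
    return {g: approved.get(g, 0) / n for g, n in totals.items()}

# Hypothetical loan decisions across two groups.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", True), ("group_b", True),
]
rates = approval_rates(decisions)
# Group fairness holds: both groups are approved at the same 75% rate.
# Yet one applicant in each group was still rejected -- the group-level
# statistic says nothing about whether any one decision was fair to that person.
```

Equal approval rates (demographic parity) is only one of several group-fairness definitions, but any of them would show the same gap: satisfying the group metric does not settle the individual's case.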

Karen Roby: We'll flip the script a little bit here, Mohan. In terms of the positives with AI, what excites you the most?

SEE: 9 questions to ask when auditing your AI systems (TechRepublic)

Mohan Mahadevan: There are just so many things that excite me. But in regards to bias itself, I'll tell you. Whenever a human being is making a decision on any kind of thing, whether it be a loan, whether it be an admission or whatever, there's always going to be a conscious and unconscious bias, within each human being. And so, if you think of an AI that looks at the behavior of a large number of human beings and explicitly excludes the bias from all of them, the possibility for a machine to be truly or very minimally biased is very high. And this is exciting, to think that we might live in a world where machines actually make decisions that are minimally biased.

Karen Roby: It definitely impacts us all in one way or another, Mohan. Wrapping up here, there's a lot of people that are scared of AI. Anytime you take people, humans, out of the equation, it's a little bit scary.

Mohan Mahadevan: Yeah. I think we should all be scared. I think this is not something that we should take lightly. And we should ask ourselves the hard questions, as to what consequences there can be of proliferating technology for the sake of proliferating technology. So, it's a mixed bag, I wish I had a simple answer for you, to say, "This is the answer." But, overall, if we look at machines like the washing machine, or our cars, or our little Roombas that clean our apartments and homes, there's a lot of really nice things that come out of even AI-based technologies today.

Those are examples of what we think of as old-school technologies, that actually use a lot of AI today. Your Roomba, for example, uses a lot of AI today. So it certainly makes our life a lot easier. The convenience of opening a bank account from the comfort of your home, in these pandemic times, oh, that's nice. AI is able to enable that. So I think there's a lot of reason to be excited about AI, the positive aspects of AI.

The scary parts I think come from several different aspects. One is bias-related. When an AI system is trained poorly, it can generate all kinds of systematic and random biases. That can cause detrimental effects on a per-person and on a group level. So we need to protect ourselves against those kinds of biases. But in addition to that, when it is indiscriminately used, AI can also lead to poor behaviors on the part of humans. So, at the end of the day, it's not the machine that's creating a problem, it's how we react to the machine's behavior that creates bigger problems, I think.

Both of those areas are important. Machines give us good things, but they also struggle with bias when humans don't build them right. And when humans use them indiscriminately and in the wrong way, they can create other problems as well.


