Healthcare risk algorithm had 'significant racial bias'

04:05, 27 October 2019. Source: engadget.com


There's more evidence of algorithms demonstrating racial bias. Researchers have determined that a "widely used" risk prediction algorithm from a major (but unnamed) healthcare provider had a "significant racial bias." While it didn't directly consider ethnicity, its emphasis on medical costs as bellwethers for health led the code to routinely underestimate the needs of black patients. A sicker black person could receive the same risk score as a healthier white person simply because less money had been spent on their care.
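The mechanism is easy to reproduce with toy numbers. The sketch below uses synthetic data and an assumed "top 20 percent get extra help" cutoff — it is not the study's model — but it shows how ranking patients by cost forces a group that spends less per unit of illness to be sicker before it gets flagged.

```python
# Illustrative sketch with synthetic data -- not the study's model or data.
# If a "risk score" is really a cost estimate, a group that spends less per
# unit of illness is held to a higher sickness bar for the same score.
import random

random.seed(0)

def make_patient(group):
    illness = random.uniform(0, 10)            # true health need
    spend_rate = 1.0 if group == "A" else 0.8  # group B spends less per unit of illness
    return {"group": group, "illness": illness, "cost": illness * spend_rate}

patients = [make_patient("A") for _ in range(1000)] + \
           [make_patient("B") for _ in range(1000)]

# Flag the top 20 percent by cost for extra help (an assumed cutoff).
costs = sorted(p["cost"] for p in patients)
cutoff = costs[int(0.8 * len(costs))]
flagged = [p for p in patients if p["cost"] >= cutoff]

share_b = sum(p["group"] == "B" for p in flagged) / len(flagged)
min_illness = {g: min(p["illness"] for p in flagged if p["group"] == g)
               for g in ("A", "B")}

print(f"group B share of flagged patients: {share_b:.2f}")     # well under 0.5
print(f"minimum illness needed to be flagged: {min_illness}")  # higher bar for B
```

Even though the "model" here predicts cost perfectly, it is still biased as a health proxy: group B is underrepresented among flagged patients, and its least-sick flagged member is sicker than group A's.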

The differences were sometimes acute. Scientists reckoned that eliminating the algorithmic bias would increase the percentage of black patients receiving extra help from 17.7 percent to 46.5 percent. When millions of customers were processed through the algorithm, this meant that legions of black people weren't receiving enough support.

The system, sold by Optum, estimates health needs based on medical costs, which tend to be much lower for black patients than for white patients with comparable conditions, the report finds.

The study itself puts it plainly: "We show that a widely used algorithm, typical of this industry-wide approach and affecting millions of patients, exhibits significant racial bias" — at a given risk score, black patients are in fact considerably sicker than white patients.

In this case, the bias appears to have largely been ironed out. The team helped the healthcare provider switch to alternative labels such as "active chronic conditions" and "avoidable costs," putting the focus on the actual health of the patient instead of their costs. This reduced the quantifiable bias by 84 percent. While code changes are far from the only way to address systemic bias, the study suggests they could play a larger role than you might think.
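A rough sketch of why relabeling helps (again synthetic; the variable names and the "top 20 percent" cutoff are assumptions, not Optum's implementation): ranking on a health-based label such as a chronic-condition count removes the spending gap from the score entirely.

```python
# Synthetic sketch of the relabeling fix -- not Optum's code or data.
# Two groups are equally sick, but group B spends less per condition.
import random

random.seed(1)

def make_patient(group):
    conditions = random.uniform(0, 5)  # health-based label, e.g. chronic conditions
    rate = 100 if group == "A" else 80  # group B spends less per condition
    return {"group": group, "conditions": conditions, "cost": conditions * rate}

patients = [make_patient(g) for g in ("A", "B") for _ in range(1000)]

def group_b_share(label):
    """Share of group B among the top 20 percent ranked by `label`."""
    top = sorted(patients, key=lambda p: p[label], reverse=True)[: len(patients) // 5]
    return sum(p["group"] == "B" for p in top) / len(top)

print(f"group B share, cost label:       {group_b_share('cost'):.2f}")        # skewed low
print(f"group B share, conditions label: {group_b_share('conditions'):.2f}")  # near 0.5
```

Swapping the target variable is the whole fix here: the same ranking machinery, pointed at a health-based label instead of cost, flags the two equally-sick groups at roughly equal rates.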

Source: Science (1), (2)
