Biased Algorithms Affect Healthcare for Millions
Algorithm-based healthcare decision making is not free of bias and can
lead to substantial inequalities in patient care, according to a study
published online October 24 in Science.
"We show that a widely used algorithm, typical of this industry-wide
approach and affecting millions of patients, exhibits significant racial
bias," write Ziad Obermeyer, MD, from the University of California,
Berkeley, and colleagues.
The scale of the impact is large: the bias affects healthcare decisions
for tens of millions of patients every year, Obermeyer said in an
interview with Medscape Medical News.
Most healthcare systems use high-risk care management programs to
provide extra resources to patients with complex health needs,
Obermeyer explained. These programs aim to reduce emerging health
problems and their associated extra healthcare costs.
However, providing this extra help is expensive, he said, so health
systems use algorithms to identify the patients who most need this
help.
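As a rough illustration of how such programs typically allocate resources, the sketch below flags patients whose algorithmic risk score falls above a chosen percentile cutoff. The score distribution and the 97th-percentile threshold are assumptions made purely for illustration, not details taken from the study or from any vendor's product.

```python
import numpy as np

# Hypothetical risk scores for a patient panel (illustrative values only).
rng = np.random.default_rng(0)
risk_scores = rng.gamma(shape=2.0, scale=1.5, size=10_000)

# Flag patients above a chosen percentile cutoff for the care management
# program; the 97th percentile is an assumption, not any vendor's setting.
cutoff = np.percentile(risk_scores, 97)
enrolled = risk_scores >= cutoff

print(f"Cutoff score: {cutoff:.2f}")
print(f"Patients flagged for extra resources: {enrolled.sum()}")
```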
But there is mounting concern that, because humans design these
machine-learning programs and choose their inputs, the programs are not
unbiased. Rather, the algorithms may reflect the racial and gender
biases of the people who develop them.
To find out if that was happening in practice, Obermeyer and
colleagues tested a commercially available algorithm called Impact
Pro, from Optum, on a large patient dataset at an academic hospital.
Their main sample comprised 6079 patients who self-identified as
black and 43,539 patients who self-identified as white. Overall, 71.2%
of the sample were commercially insured, and 28.8% were enrolled in
Medicare.
The researchers followed the patients over 11,929 and 88,080 patient-
years, respectively, and obtained algorithmic risk scores for each
patient-year.
They found that, at a given risk score, black patients were
considerably sicker than white patients, as demonstrated by signs of
uncontrolled illnesses.
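The comparison the researchers describe can be pictured as a calibration check: group patient-years by risk score and compare a measure of illness burden across races within each group. The sketch below runs that check on simulated data; the column names, the built-in gap, and the decile binning are assumptions for illustration, not the study's actual data or methods.

```python
import numpy as np
import pandas as pd

# Simulated patient-year data (illustrative only; not the study's dataset).
rng = np.random.default_rng(1)
n = 5_000
df = pd.DataFrame({
    "risk_score": rng.uniform(0, 10, n),
    "race": rng.choice(["black", "white"], n, p=[0.12, 0.88]),
})

# Build in more illness burden at the same score for black patients,
# mirroring the pattern the study reports.
df["chronic_conditions"] = (
    0.4 * df["risk_score"]
    + np.where(df["race"] == "black", 1.0, 0.0)
    + rng.normal(0, 0.5, n)
)

# Within each risk-score decile, compare average illness burden by race.
df["score_decile"] = pd.qcut(df["risk_score"], 10, labels=False)
print(
    df.groupby(["score_decile", "race"])["chronic_conditions"]
      .mean()
      .unstack("race")
)
```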
Removing the bias in this algorithm would more than double the
number of black patients eligible for a program that provides extra
medical help to the highest-risk patients, said Obermeyer, raising their
share of eligible patients from 17.7% to 46.5% in this particular health
system.
Cost as Input Measure
Obermeyer explained that the bias arises not necessarily in the
algorithm itself but in the problem the algorithm is asked to solve.
Although identifying patients who need the most care seems
straightforward, companies must choose a variable in the dataset that
will accomplish this, he said.
Cost is typically an easy variable to choose in this case, Obermeyer
explained, because healthcare costs are frequently used as a proxy for
health.
However, that proxy is not always reasonable, such as when
comparing black patients with white patients. Black patients tend to
generate lower costs at the same level of health, he said. Inequalities
in access to healthcare contribute significantly to this: for example, it
can be much harder to get to a doctor's appointment if one cannot pay
for transportation or take the day off from work.
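To see why training on cost skews the scores, consider a simple simulation: two groups with identical distributions of true health need, but one incurs lower costs at the same level of need. Ranking patients by cost, even with a perfectly accurate cost prediction, under-selects that group at the program cutoff. Every number in the sketch below (the 20% group share, the 0.7 cost factor, the 97th-percentile cutoff) is an assumption for illustration, not a figure from the study.

```python
import numpy as np

# Illustrative simulation of the proxy problem, not data from the study.
# Groups A and B have identical distributions of true health need, but
# group B incurs lower costs at the same need because of access barriers.
rng = np.random.default_rng(2)
n = 100_000
group_b = rng.random(n) < 0.2
need = rng.gamma(shape=2.0, scale=1.0, size=n)                 # true health need
cost = need * np.where(group_b, 0.7, 1.0) + rng.normal(0, 0.1, n)

def share_of_group_b(scores):
    """Share of group B among patients above the 97th-percentile cutoff."""
    flagged = scores >= np.percentile(scores, 97)
    return flagged[group_b].sum() / flagged.sum()

# Even a perfectly accurate cost predictor (here, cost itself) under-selects
# group B when patients are ranked by predicted cost rather than by need.
print("Group B share of population:          ", group_b.mean())
print("Group B share flagged, ranked by need:", share_of_group_b(need))
print("Group B share flagged, ranked by cost:", share_of_group_b(cost))
```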