>We emphasize that a model's ability to predict self-reported race is itself not the issue of importance. However, our findings that AI can trivially predict self-reported race -- even from corrupted, cropped, and noised medical images -- in a setting where clinical experts cannot, create an enormous risk for all model deployments in medical imaging: if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell using the same data the model has access to.
Secretly used its knowledge... like how? By guessing based on the patient's name? As in Shaniqua Renae?
He's saying that since the AI "knows" the race, it might act on that knowledge. It could diagnose better or worse depending on race simply because race is one of the factors it "knows".
Patients' names aren't given to the AI for this very reason.
To play devil's advocate, let's say the AI is trained on black patients and white patients, and in its training data the black patients all happened to be sick while the white patients all happened to be healthy. Instead of learning to diagnose whatever condition it's being trained to detect, it latches onto a different correlation: race. So when it gets fed an x-ray, instead of an actual diagnosis it bases its guess on race alone.
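A minimal sketch of that failure mode on synthetic data, assuming NumPy and scikit-learn (everything below is made up for illustration; none of it is from the study): the model sees a noisy "real" disease signal and a clean race feature that happens to match the label perfectly in training, and it learns the shortcut.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical training set: race happens to line up perfectly with the label.
race_train = rng.integers(0, 2, n)                    # 0 = white, 1 = black
sick_train = race_train.copy()                        # spurious: every black patient is sick
disease_signal = sick_train + rng.normal(0, 2.0, n)   # true signal, but noisy

X_train = np.column_stack([disease_signal, race_train])
model = LogisticRegression().fit(X_train, sick_train)

# At deployment the correlation breaks: sickness is independent of race.
race_test = rng.integers(0, 2, n)
sick_test = rng.integers(0, 2, n)
X_test = np.column_stack([sick_test + rng.normal(0, 2.0, n), race_test])

print("test accuracy:", model.score(X_test, sick_test))
print("learned weights [disease, race]:", model.coef_)
```

Because the race column predicted the training labels perfectly while the disease signal was noisy, the learned weight on race dominates, and accuracy drops toward chance once the spurious correlation disappears.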
A less extreme example would be training an AI to detect STDs. Black patients have much higher STD rates than white patients, so if it "diagnosed" whether a person has an STD based on race alone, it would make pretty accurate guesses -- much better than chance.
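To put rough numbers on that (the prevalences below are invented for illustration, not real statistics): a rule that looks only at race already beats a coin flip on a mixed population.

```python
# Hypothetical prevalences, purely for illustration
p_sick_a = 0.20   # STD prevalence in group A
p_sick_b = 0.02   # STD prevalence in group B
share_a = 0.50    # half of all patients come from group A

# Race-only rule: predict "infected" for group A, "not infected" for group B
acc_a = p_sick_a        # correct on group A whenever the patient is infected
acc_b = 1 - p_sick_b    # correct on group B whenever the patient is not
accuracy = share_a * acc_a + (1 - share_a) * acc_b

print(f"race-only 'diagnosis' accuracy: {accuracy:.0%}")  # 59%, vs 50% for a coin flip
```

The rule beats chance without the model having learned any medicine at all, which is exactly why a race-driven shortcut can hide inside apparently decent accuracy.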
>He's saying that since the AI "knows" the race, it might act on that knowledge.
>Patients' names aren't given to the AI for this very reason.
So, why give the patient's race to the AI then?