In this era of rapid technological development, companies like Google, Tesla, and Uber are bringing us closer to the everyday use of self-driving cars.
But recent accidents involving self-driving cars make it clear that we are still far from that day.
Beyond the well-known challenge of making split-second decisions, researchers have found another issue with self-driving cars.
Researchers from the Georgia Institute of Technology found that self-driving systems are about five percent less accurate at detecting dark-skinned pedestrians.
The study analyzed the decision-making accuracy of state-of-the-art object-detection models, examining how such systems detect people from different demographic groups.
The dataset contained images of pedestrians, which were then segregated by skin tone.
The disparity persisted even when variables such as the time of day in the images and the occasional occlusion of pedestrians were taken into account.
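The kind of per-group accuracy comparison the study describes can be sketched as follows. This is a minimal illustration with hypothetical labels and numbers, not the researchers' actual code, dataset, or results:

```python
from collections import defaultdict

def detection_rate_by_group(samples):
    """Compute the fraction of pedestrians detected within each group.

    `samples` is a list of (group, detected) pairs, where `group` is a
    skin-tone label and `detected` is True if the model found the pedestrian.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, detected in samples:
        totals[group] += 1
        hits[group] += int(detected)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical results illustrating the reported ~5-point accuracy gap.
samples = (
    [("lighter", True)] * 90 + [("lighter", False)] * 10
    + [("darker", True)] * 85 + [("darker", False)] * 15
)
rates = detection_rate_by_group(samples)
print(rates)  # {'lighter': 0.9, 'darker': 0.85}
```

Comparing these per-group rates, rather than a single overall accuracy number, is what lets a study of this kind surface a disparity that aggregate metrics would hide.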
One of the main reasons behind the issue is algorithmic bias: a computer system absorbs the assumptions and blind spots of its designers and training data, which skews its decision-making.
The study does reveal an algorithmic bias against dark-skinned people, but it doesn't use the actual object-detection models deployed by most autonomous-vehicle manufacturers. Rather, it relies on publicly available datasets used by academic researchers. The companies don't make their actual data available, which is a problem in itself.
Nevertheless, the issues raised by the study are genuine, and companies must take concrete steps to root out biased behavior in their systems. That would make the technology safer for everyone.