
Eliminating Hidden Bias in Autonomy & Beyond

“While working on training neural networks, I’ve come across the concept of a biased dataset,” says Deepti Mahajan, a machine learning research engineer at Ford Greenfield Labs. “This can cause issues when training a neural network because any biases that exist in the training data are learned by the network.”

In this video presentation for Women in Autonomy, Mahajan explains why it’s essential to address hidden bias as we work towards systems that serve everyone. Unfortunately, in many cases, the data used to design these systems fails to represent us all equally and can even reinforce or amplify existing biases.
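One simple way to make the point about unrepresentative data concrete is to audit how often each group appears in a training set before training at all. The sketch below is purely illustrative and not from the video; the group labels and counts are invented for the example:

```python
from collections import Counter

def group_representation(labels):
    """Return each group's share of the dataset.

    `labels` is a hypothetical list of one group label per training
    sample; in practice any attribute of interest could be audited
    this way.
    """
    counts = Counter(labels)
    total = len(labels)
    return {group: count / total for group, count in counts.items()}

# Toy dataset: one group is heavily over-represented, so a model
# trained on it will mostly reflect that group's patterns.
samples = ["group_a"] * 90 + ["group_b"] * 10
shares = group_representation(samples)
print(shares)
```

Such a count won't catch every form of hidden bias, but a large skew in representation is an early warning that the trained model may serve the under-represented group poorly.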
