COLLEGE STATION, TX — A new initiative at Texas A&M University is bringing together experts from across disciplines to eliminate harmful social biases in AI and machine learning.
While AI and machine learning software isn't created to be biased, the data and algorithms behind these systems, along with the social biases of the engineers who build them, can skew the results.
It's an issue Texas A&M's Code^Shift lab is taking aim at resolving for future generations.
Code^Shift will confront the issue through collaborative research models that bring together experts from social science, data science, and engineering to combat these biases and to raise awareness of them among the general public.
"So much of our everyday life is being decided based on automated machine-led decision making and so I think it's important to raise awareness that there can be many biases in these processes." shared Srivi Ramasbramanian, professor, School of Communications at Texas A&M.
Bias in AI and machine learning made national headlines in 2020 after facial recognition technology misidentified a man, leading to his wrongful arrest and jailing by the Detroit Police Department.