June 6, 2025
A 42-year-old man, Robert Julian-Borchak Williams, was working in his front yard when the police arrived and arrested him for theft. He had been identified by facial recognition software.
When he arrived at the police station, the pictures did not match the appearance of the man standing in front of the officers.
Thirty hours later, he was released, as the police finally admitted that the arrest had been made due to a faulty facial identification by artificial intelligence. Further investigation revealed that the facial recognition model had been trained mostly on white faces, making it more error-prone when identifying Black Americans. This is a clear example of bias in A.I.
Given this case, the question arises: In what other areas is artificial intelligence biased?
Artificial intelligence is, at bottom, a compilation of human information, written and composed by humans. Do these humans carry bias in their judgments and statements? Absolutely. It only follows that the same bias will most likely appear in artificial intelligence and its pronouncements. On the other hand, I don’t think we should disregard artificial intelligence as a source of information simply because of bias. By that standard, we would have to disregard most human knowledge, which carries an implicit bias to one degree or another.
We have to delve into the substance of what it means to have a bias. The philosopher Hans-Georg Gadamer states that “prejudices are biases of our openness to the world” (Truth and Method, 1960). He maintains that bias is something we grow up with in our family, culture, and society. There is no getting around our personal biases. However, he maintains that these biases are the starting point of our openness to the world, meaning they are the vantage point from which we confront the world. He advocates a critical consciousness of those biases: upon receiving new knowledge, we should confront them and either affirm or discard them.
Gadamer calls this process the Fusion of Horizons — the point where our prior understanding meets new information, and there is a fusion of the two viewpoints to formulate a more just interpretation. In this sense, Gadamer doesn’t see a bias as an evil to be eliminated; rather, it is simply a starting point from which to develop and expand our knowledge.
Let’s apply this to artificial intelligence. A.I. is biased, no doubt about it, but that doesn’t mean A.I. should be discarded as a source of information. The bias of A.I. is a starting point for gathering information, for comparing A.I.’s information with our own potential biases in a fusion of horizons.
Here is where I believe A.I. possesses a refinable bias. If I watch a biased political commentator, I can rest assured this commentator will never change his own bias, whether from a lifelong commitment to a point of view or from pride and arrogance. Often, in conversations with such a person, opposing opinions are never granted any merit. With artificial intelligence, however, you can have a rational “conversation” in which other points of view can be acknowledged and common ground can be found. I have yet to ask A.I. a question to which it responded with deflection and avoidance. A.I. will also admit error and correct itself when opposing arguments are valid. It doesn’t cling to its initial bias. Its bias is refinable. This is a positive development.
