How can artificial intelligence be biased? Bias in artificial intelligence occurs when a machine consistently produces different outputs for one group of people than for another. These biased outputs typically track familiar societal biases such as race, gender, biological sex, nationality, or age.
Biases can result from assumptions made by the engineers who developed the AI, or from prejudices in the training data that taught it, which is what Johann Diedrick explains in the latest edition of Mozilla Explains. Watch the video below.
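To make "consistently different outputs for one group compared to another" concrete, here is a minimal sketch of how such a disparity can be measured. The groups, decisions, and numbers are entirely invented for illustration; real bias audits use much larger datasets and more careful statistics.

```python
# Hypothetical model decisions (1 = approved, 0 = rejected) for two groups.
# All of this data is invented purely to illustrate the measurement.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def approval_rate(group):
    """Fraction of positive outcomes the model gave this group."""
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate("group_a")
rate_b = approval_rate("group_b")

# One common fairness check compares the two rates as a ratio:
# values far below 1.0 mean the model systematically favours one group.
ratio = rate_b / rate_a
print(f"approval rate A: {rate_a:.2f}, B: {rate_b:.2f}, ratio: {ratio:.2f}")
```

With these made-up numbers, group A is approved 75% of the time and group B only 25%, a gap a fairness audit would flag regardless of whether it came from the engineers' assumptions or from prejudiced training data.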