What Facebook's Casual Conversation doesn't fix
IDSC "BlackTechLogy": Artificial intelligence results have to stop matching to mugshots
Facebook is working on a new project: Casual Conversations. Composed of 45,186 videos featuring 3,011 participants, the dataset will be used to “understand the multifaceted, ongoing challenges of fairness and bias” in order to build more inclusive technology.
Paid participants of varying skin tones provided their own ages and gender labels and gave unscripted responses to scripted questions. Annotators then labeled each participant’s skin tone using the Fitzpatrick scale, which is organized into six categories, and flagged videos recorded in low-light conditions.
It’s not exactly breaking news that the kind of implicit bias Facebook is studying exists, touching everything from skin complexion and age to names and education. As artificial intelligence (AI) continues to advance, human biases left in the wrong hands could carry over into AI, co-signing stereotypes and entrenching industry-wide problems.
Curiously, though, one could argue that the people in the videos should have been able to self-identify their skin types rather than leaving that to annotators, just as they were able to self-identify their own ages and genders. Why can they be trusted with the latter two but not the first?
Recommended Read: “Artificial intelligence improves, deepfake risks continue ~ IDSC ‘BlackTechLogy’ March 2023 Exclusive: Who’s responsible when a shared video looks real, but it’s deepfake deception?”