Artificial intelligence improves, deepfake risks continue
IDSC "BlackTechLogy" March 2023 Exclusive: Who’s responsible when a shared video looks real, but it’s deepfake deception?
More than three years after the Senate passed the Deepfake Report Act of 2019, deepfake incidents keep popping up. If you're unfamiliar with the issue, deepfakes are fabricated audio or video created with artificial intelligence (AI), typically deep-learning models trained on real recordings of a person, which is where the name comes from.
Deepfakes are largely intended to frame a person as doing something they never did, and one was recently linked to a Pennsylvania mother named Raffaela Spone. CNN reported that Spone was charged with three counts of cyber harassment of a child and three counts of harassment, all misdemeanors. Why? She allegedly used deepfake technology to cyberbully three girls from her daughter's cheerleading team, depicting them in compromising situations: nude, drinking alcohol and vaping.
Is artificial intelligence making people’s lives worse?
AI has become its own worst enemy. While the tech and business worlds will almost certainly keep making it more sophisticated, hackers are usually lurking nearby and are just as tech savvy.
In all fairness, AI isn't always linked to negativity. It's been useful for YouTube video comments, as a personal assistant (Siri and Alexa), for self-driving cars, and for improving customer service for phone representatives. It's also had admirable results helping medical professionals identify breast cancer.
However, the smarter it gets, the more it can be both beneficial and detrimental to businesses and technology companies. While Spone is a rare example of an overzealous mom hacker, the bigger issue is that deepfake tactics could lead viewers to dismiss genuine video footage as fake. Used on a larger scale, that doubt could also affect the criminal justice system, background checks and business reputations. If technology has a hard enough time separating fake news from legitimate video, how can we trust our own eyes?
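One partial defense against that doubt is provenance checking: if the original publisher of a video shares a cryptographic hash of the file, anyone can verify that a copy being passed around hasn't been altered. The sketch below is my own illustration, not anything from the article; the function names are hypothetical.

```python
import hashlib


def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


def matches_published_hash(path: str, published_hash: str) -> bool:
    """Check a local copy against a publisher's known-good hash."""
    return sha256_of_file(path) == published_hash.lower()
```

This only proves a file is byte-identical to the published original; it can't tell you whether the original itself was authentic, which is why provenance schemes pair hashes with a trusted source.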
Artificial intelligence has a complicated relationship with minorities
There's already a love-hate relationship between African Americans and facial recognition software. For many years, it's been widely reported that African Americans are twice as likely to be targeted and arrested as members of any other race in the United States. As arrests increase, so do mug shot photos. And those mug shot databases and surveillance footage too often match melanin-rich people to photos or videos of someone else entirely. For all of AI's advancements, accurately recognizing features on darker skin is not one of them.
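The disparity described above is usually quantified as a false-match rate measured separately for each demographic group. The sketch below, with entirely synthetic numbers of my own invention (not real benchmark data), shows how such a per-group comparison can be computed:

```python
# Illustrative only: the sample data is synthetic, not real benchmark results.
from collections import defaultdict


def false_match_rate_by_group(results):
    """results: iterable of (group, predicted_match, true_match) tuples.
    A false match is predicted_match=True when true_match=False."""
    trials = defaultdict(int)
    false_matches = defaultdict(int)
    for group, predicted, actual in results:
        if not actual:  # only truly non-matching pairs can yield false matches
            trials[group] += 1
            if predicted:
                false_matches[group] += 1
    return {g: false_matches[g] / trials[g] for g in trials}


# Hypothetical data where group B's false-match rate is double group A's.
sample = (
    [("A", True, False)] * 1 + [("A", False, False)] * 99
    + [("B", True, False)] * 2 + [("B", False, False)] * 98
)
print(false_match_rate_by_group(sample))  # {'A': 0.01, 'B': 0.02}
```

A system with equal overall accuracy can still be unfair if one group's false-match rate is consistently higher, which is exactly the pattern reported for darker-skinned faces.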