In 2024, MIT Technology Review reported that over 80% of AI-driven apps siphon user data, often targeting melanated communities with ads for low-wage jobs, exploitative loans, and inflammatory content. This manipulation reinforces racial hierarchies and perpetuates economic disenfranchisement. The harms extend beyond targeted advertising: a 2023 study by the ACLU found that facial recognition systems used by law enforcement had a false positive rate of 35% for Black women, compared with just 1% for white men. These errors have led to wrongful arrests, including that of Robert Williams in Detroit, who was misidentified by a facial recognition system in 2020.
The lack of diversity in AI development teams exacerbates these issues. A 2022 report by the AI Now Institute revealed that fewer than 10% of AI researchers are Black, and only 2% of AI engineers are women of color. This homogeneity produces algorithms that fail to account for the experiences of marginalized communities. For a deeper dive into how AI entrenches systemic inequality, read "The Digital C/age: Supremacy of A.I.".