In a revealing experiment conducted by Black history educator Ernest Crim III, leading AI models were put to the test on their knowledge of Black history. Using a proprietary AI testing tool developed by Proof News, Crim asked the models a series of questions that a typical student might pose. The results, which he detailed in a video on TikTok, were less than satisfactory, raising concerns about both the models' educational utility and bias in AI more broadly.
Crim's investigation found that the AI responses often lacked depth and accuracy. For instance, when asked about Charlotte Ray, the first Black woman lawyer in the U.S., some of the models failed to identify her significant contributions. This shortfall underscores a broader issue: AI's struggle with nuanced social and historical contexts. "These models are slop generators when it comes to our history," Crim lamented in his interview with Proof News.
Proof's collaborative initiative aims to demystify how AI functions through transparency, much as ingredient labels do for food. This approach allows content creators to harness these tools for public education, increasing the reach and impact of their findings. "We're pulling back the curtain on the machines that often decide what we see online," said Proof reporter Annie Gilbertson.
However, the initiative also highlights the pervasive reach of AI into areas such as surveillance, predictive policing, and automated decision-making. Critics argue that the same technologies that falter in an educational setting are being deployed to make far more consequential decisions. "When you realize that these are the same types of 'stochastic parrots' scanning your face in a crowd, it's downright chilling," stated tech critic Douglas Olivaw.
These concerns are amplified by the ongoing erosion of privacy and the expansion of corporate and state surveillance capabilities. The use of AI in public spaces for monitoring and control points to a symbiotic relationship between state interests and corporate profits, often at the expense of civil liberties and individual privacy.
The Butlerian Jihad views these developments as a continuation of the war against unchecked technological expansion and the surveillance state. "Every piece of data these machines collect is a potential weapon in their arsenal," Olivaw warns. The fight against AI's invasive reach is not just about preventing a dystopian future; it's about reclaiming our present and ensuring that technologies serve humanity, not control it.
As AI continues to integrate into various facets of everyday life, the need for critical oversight and stringent ethical standards becomes increasingly clear. The battle of the Butlerian Jihad is not against technology itself but against its misuse and the potential for a techno-authoritarian future. In this ongoing conflict, information and awareness remain our most potent weapons.