AI Epistemicide

 

“Can you show me AI’s soul?” prompt to ChatGPT, 2025

 

text by Perry Shimon

The wholesale expropriation of human energies on the internet under the guise of “Artificial Intelligence” signals one of the most significant enclosures in human history, as well as a radical restructuring of how knowledge is produced and accessed. In his 2014 Epistemologies of the South, sociologist Boaventura de Sousa Santos offers the concept of epistemicide to describe the annihilation of systems of knowledge and their supporting cultures. Another kind of epistemicide is underway today because a small group of profit-motivated private companies—mostly based in Silicon Valley and with global ambitions—have expropriated the collectively produced knowledge commons available on the internet (with all its questionable content) and created an epistemic regime that operates opaquely, producing the illusion of intelligence while habituating—even addicting—its users to its protocols with sophisticated psychological manipulation and design.

It is impossible to imagine a version of corporate AI that prioritizes anything besides the maximization and concentration of profits at the expense of its users, the environment, and the social good. Time and again, the algorithm-like functioning, laws, and fiduciary duties of capitalism have guaranteed this outcome. This is especially true in Silicon Valley, where a handful of companies control nearly the entirety of the internet, making fortunes from our now-impoverished experiences of inquiry and sociality—and becoming ever more transactional, cunning, costly, and pathological.

We are witnessing, under the banner of “AI,” the capitulation of entire traditions of education and knowledge production, which are surrendering themselves to this new regime. Within the usual “move fast and break things” velocities of internet capitalism, it is impossible to keep up with the rate at which people are adopting these new technologies and schools are restructuring themselves to accommodate them. The early figures are staggering and point towards a further acceleration of attentional and analytic incapacitation.

Artificial intelligence functions by finding patterns in massive amounts of data and then predicting similar chains of words based on repetition in the dataset. This, of course, reproduces biases and effectively makes the question of “truth” a question of statistical probability. It also leaves the system susceptible to the manipulation of truth through the mass production of affirmative or operational data, so that whatever data-fied knowledge is repeated most insistently wins the game of truth.
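The statistical mechanism described above can be made concrete with a deliberately crude sketch. The toy corpus and function names below are invented for illustration; real language models use vastly larger datasets and neural networks rather than simple counts, but the underlying principle of prediction-by-repetition is the same.

```python
from collections import Counter, defaultdict

# A toy "knowledge commons": whichever claim is repeated most often
# dominates the statistics, regardless of its truth.
corpus = "the sky is blue . the sky is blue . the sky is green .".split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict(word):
    # "Truth" here is simply the most frequent continuation in the data.
    return following[word].most_common(1)[0][0]

print(predict("is"))  # "blue" outnumbers "green" 2 to 1, so repetition wins
```

Flooding the corpus with enough repetitions of “the sky is green” would flip the prediction: the model has no access to the sky, only to the relative frequencies of claims about it.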

Image courtesy of Growtika, Unsplash

It should go without saying that machines stringing together probabilistic chains of language do not produce wisdom or knowledge in any meaningful sense. It stands to reason that without people doing the intellectual work of adding to—and often overturning—conventional thinking, these models will simply deteriorate into a warm, static dribble of historical ignorance, largely bound to the narrow historical window and contemporaneous data in which they were conceived. Any emerging contribution to our knowledge commons will have to compete in arenas of attention and repetition to become statistically significant, as an analytically incapacitated public increasingly relies on regurgitated probabilities.

It seems inevitable that Silicon Valley—the same industry that has devalued nearly every other form of creative and intellectual production—will find every way to avoid fairly remunerating those responsible for producing the data which it appropriates into its epistemic regime. It is also crucial to point out how many forms of knowledge do not make themselves available to binary, computational logic—the outgrowth of the same regime responsible for the epistemicide described by de Sousa Santos.

Within military applications, the camouflage of “AI” allows a level of impunity, for example, in the targeted killing of civilians in operations carried out by states using these automated technologies. Israel’s “Lavender” and “Where’s Daddy?” are two horrifying recent examples of how AI can be used for indiscriminate murder in larger political projects—in this case, the genocide in Palestine. So-called AI is also being used by formations of power around the world to automate and optimize many lesser kinds of disciplinary and punitive relations imposed on vulnerable populations, including predictive policing, traffic enforcement, bail decisions, public benefits, and the targeting of political activists.

The excessive focus on AI is a form of misdirection that serves to distract the general public from the near-total monopoly capitalism of the current internet. It produces a new imaginative frontier, filled with scary monsters and geopolitical urgencies that shift focus away from the small cabal of monopolists who control our collective knowledge and means of using it—often in collusion with oppressive and violent states. Most of the terms of the debate—the stakes, framing, fears, and so on—are produced by the same industry that profits from them. This is a strategy to limit and steer the conversation while enrolling new users, generating hype, raising capital, soliciting government subsidies, evading regulation, and generally increasing their control. The Silicon Valley model is one of wild speculation, driven by utterly deceitful rhetorical claims, massive environmental costs, and the end goal of monopolization and endless growth.

This is not an inevitability, and the time to act otherwise is now. There are existing models, both online and off, that offer a clear path forward: Wikipedia and the library are as good as any. Wikipedia is a massive, nonprofit, collectively produced knowledge commons from which everyone benefits. Without the burden of a profit incentive, it continues to help the world better understand itself within a system of transparency and accountability. The library is our oldest and most sustained example of a socialist good for all, and much of the internet should be reconfigured in its image. Search, social platforms, mapping, and, to some extent, the minor technological affordances of generative and predictive LLMs should all be reimagined as nonprofit utilities for the collective benefit of the entire world, as we remediate our social and ecological conditions.