First principles: Articulating a Generative AI Counterpoint
Or, the always already new of artificial intelligence
In 2018, I was trying to make the case to Bob Entman, the esteemed political communication scholar, that the algorithmic sorting of news and information was a black box — and therefore, fundamentally unknowable to any news consumer outside of bare guesswork.
Bob asked me why an algorithmic black box that obscures news selection was any different from a newsroom that is making decisions about what to cover. He pointed out that a newsroom editorial meeting is a black box, too, at least insofar as the public is concerned.
This was a familiar counterpoint, one I kept coming up against in my earlier work. It is rooted in phenomenology: How do we discern from our lived experience whether this new “digital” innovation indeed presents a rupture from past ways of doing, being, and knowing?
Is generative AI the rupture we think it is? How do we know?
I want to push back against the hype and the buzz by suspending, for a moment at least, the belief that ipso facto we are experiencing a massive rupture from the known, that we have never seen change at this scale before. It is a cop-out to think that everything has changed. This approach gives generative AI and its creators too much power, as I’ve written before. So what can we learn from discourse about past ruptures?
Taking the case of algorithmic curation vs. human newsroom production is instructive for thinking through just how much we should panic about generative AI and how we might intervene in the possible public harms.
First, what values are under threat? What are the stakes? News organizations at their best would claim to be serving democratic goals; even if we are critical of news values, they are fairly predictable, even if problematic: “neutrality,” covering the known, novelty/human interest, conflict, the bad instead of the good, and so forth. Mainstream news organizations are not setting out to manipulate people, through exposure to content, toward some greater truth - if anything, news organizations should be faulted in their general coverage for failing to provide any sort of moral guidance at all. (I promise!)
Platforms would claim to be agnostic about the content they carry, with the most important value being the harvesting of our attention - whatever brings the most data or best keeps our eyes on the platform is the content platforms serve, automatically and algorithmically optimized for us. Democracy, much less the general public interest or the importance of ongoing events and people in the world, is certainly not a guidepost, although connecting people to facilitate still further digital engagement is certainly important for optimizing the time we spend with a platform.
When it comes to artificial intelligence, specifically the generative AI that appears to create “new” content from old content, rather than (at this point) merely reproducing old patterns of intelligence without new pathways for adaptation - well, I see a few fundamental sources of moral panic and confusion that are common to already existing technologies and part and parcel of how we have historically talked about innovation in the public sphere, across news media, pop culture, and with each other.
People do not like being tricked and feel like they will not know when they are being played by the new technology
Concerns about the vulnerability of specific populations that reflect power imbalances - the populations we worry about misleading are also the ones seen to lack the power to reason on their own (the most likely to be tricked)
The “how it works” is beyond the comprehension of most non-specialists
The sense that some important element of humanity is being left behind in favor of machines that care little about craft, expertise, and the human condition
A population of utopians who herald new technology as a great leap forward
The corollary of leaving people behind: concerns about the impact on jobs and the economy
Some sort of government (in)action that affects modes of standardization and profit-seeking by innovators
In addition, and more specific to generative AI, I see the following major discourses of public panic and celebration. All of what I identify above is part of the more general discourse about innovation and also applies to generative AI.
Human creativity is threatened or made obsolete
Human intellectual property is being stolen without consent
Government regulations cannot keep up with moderating the worst excess of tech businesses
Our larger information environment is going to be further polluted
News organizations are specifically harmed
Democracy is under assault, and its opponents are now further armed and ready to go
There is a real, if not existential, threat to the continuity of human existence
The environmental challenges to keeping this innovation going are greater than we can even foresee due to lack of resources/power costs
My kid recently took out a visual history of the U.S. I turned to the pages covering the 1840s to the 1890s, which have sub-periodizations within that timeframe, reflecting dominant turns in cultural and political history. Not bad for a kids’ encyclopedia, but the question I began thinking about was the specific scale of generative AI as a transformative innovation.
The important reminder before taking up each of these major discursive turns around new technology is that the form of these debates is fairly predictable, even if the innovation has shifted. A focus on historical comparisons can in turn help frame the scale of disruption.
I will make the case that generative AI is a rupture point of heightened proportions for the negotiation of human value - literally, what makes us human? But I also want to make the case that generative AI is fundamentally a new means to an old end. And if we can see that the ends are the same, then we can backtrack to take away some of the over-essentialized power AI holds over our imaginary.
Lisa Gitelman’s book, Always Already New: Media, History, and the Data of Culture (MIT Press, 2006), offers a general set of observations about how new technologies are absorbed by culture (I take these from the introduction).
The introduction of new media (or new technology) is “never entirely revolutionary” but instead points to a space of negotiation for what matters (and who matters)
We err when we give too much agency to new technology (or new media), granting it a power that assumes a unified force and a deterministic, known “end of history”
Debates over new technologies are “socially embedded sites for the ongoing negotiation of meaning as such… a view, that is, of the contested relations of force that determine the pathways by which new media may eventually become old hat.”
So I want to think through this question of rupture and generative AI and make the case for the always already new - not that generative AI does not have its own mark to make in the world, or its own history, but that generative AI is, in Gitelman’s terms, less a rupture and more a site for negotiating what it means, essentially, to be human.
As the years go on and this moment becomes history, what scale of change will we associate with generative AI’s integration into social life? Will it be the equivalent of the light bulb? Effective trains? Telegraphy? Indoor plumbing? The phonograph? (Or, from earlier, the cotton gin, which many argue led to an acceleration of slavery.) Perhaps non-material innovations are also relevant: statistics (in the service of eugenics and crop yields), scientific nomenclature for species, the discovery of gravity and electricity, Taylorism, etc. Whether “hard” and “soft” technological innovations introduce differentiated forms of popular discourse is a question worth bracketing for later.
I also don’t want to diminish the importance of the shifts in aesthetics and culture that come alongside these moments of rupture, especially given the concern among creatives about the challenge they face from generative AI. I do want to issue a note of caution: artists are at the cutting edge and on the conservative flank all at once when innovation comes calling. I could write at length about this, but one can think of the mesmerizing “Unsupervised” by Refik Anadol at MoMA and then the lawsuits filed by artists against generative AI companies for stealing their work and threatening their livelihoods.
Are we in just another arc of history, one more microcosm in the vast story that is humanity, or are we at the precipice of a shift that will fundamentally alter the potential for the continued existence of our species? We cannot know the future as the present unfolds, but we can look to the past for insights - and the past provides a backbone upon which to project the unknowable future. I guess this is another theoretical commitment I have: to inquiry, and to developing a sense of personal agency amid the chasm of despair and the general incapacity of one person to change structure. There is some comfort in humanity having made it through moments like the one we are living, and from past mistakes and past successes we can learn at least some version of what to expect.
My challenge to myself is to begin unpacking the meta-arguments I’ve defined above. Perhaps you are like me and need to understand where you fall in these debates; I encourage you to engage in a similar exploration. The first few I want to explore are these:
The generalized discursive concern:
People do not like being tricked and feel like they will not know when they are being played by the new technology
The specific fears about generative AI:
Our larger information environment is going to be further polluted
News organizations are specifically harmed
Democracy is under assault, and its opponents are now further armed and ready to go
To take some agency back from the power of Sam Altman and others, our goal should be to identify the order of magnitude of difference: what is recycled discourse, and what is genuinely different about generative AI? Coming back to the newsroom and the algorithmic black boxes of news selection and curation, the order of magnitude is one of scale and values. But the end result for public understanding is much the same, insofar as the public’s grasp of what they see and why - whether in news production or in their feeds - is largely guesswork.
I want to make the bold claim that generative AI changes the rules of the game, slightly, but the outcome is fundamentally not all that different from the way we already understand knowledge, beauty, truth, and creativity. What is different in terms of knowledge creation and our social order is the power dynamic: massive, global corporations are structuring knowledge acquisition and the values embedded in the most probable result of any generative AI. In the next few posts, I am going to try these arguments out in a safe space (my Substack), and maybe I’ll hear from some of you where I’m going wrong.