What journalists need to know about AI
My reminders for how journalists (and everyone else) can reclaim agency
I’ve been writing about the importance of reclaiming agency and choice as a form of resistance to big tech, to the constant monetization of our attention, and to changing political winds that threaten my basic liberties.
When I had the opportunity to talk about AI ethics and the future of news and information at the biennial East-West Center Independent Media Conference, my hand was also forced: what, exactly, is my take on the rise of generative AI? I had to move beyond summarizing the discourse to what I thought journalists needed to know about what generative AI means for the work that they do.
Importantly, the conversation around generative AI is fundamentally different from other discourses around news and new technology. The difference I see is the direct reckoning among journalists with how this new technology isn’t just a challenge for the news industry, but has consequences for the world beyond the newsroom. Most significantly, journalists are, from the get-go, fully aware of the threat that generative AI, and disinformation more generally, poses to free and fair elections, democratic norms, and our broader information environment. This is good: I cannot remember a “new” technology that wasn’t considered from a journalist-first perspective.
However, the fact that journalists are actually thinking about the consequences of generative AI beyond themselves is also a signal of how much power they have ceded to big tech. This acknowledgment is a reflection of big tech psyching journalists into thinking that there has been a fundamental rupture in social life, one that undermines the very existence of journalists: not just their autonomy but their purpose.
So here’s my list, just briefly, of the bullet takeaways for any journalist anywhere in the world who is waking up to the world of chat-prompted generative AI.
AI is a meaningless term when applied too broadly. Go back to basics: AI is intelligence as if human. In other words, it can passably perform tasks and engage with us in ways that we cannot immediately tell are not human. Machine learning is simply a way of talking about how (mostly) algorithms are trained to produce results and make future predictions based on past data. In the case of generative AI, the framework for imagining AI is not just a single human’s as-if intelligence, but the collective sum of all the data the tech tool has been trained on. Generative AI is “narrow” - created for specific functions and tasks.
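To make “trained on past data to make future predictions” concrete, here is a minimal sketch of the idea: fit a line to some (entirely made-up) past numbers, then ask it for a guess about a number it has never seen. The data and the scenario are hypothetical; real machine learning systems differ only in scale, not in kind.

```python
def train(xs, ys):
    """Fit a line y = a*x + b to past data by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

def predict(model, x):
    """Use the trained model to guess about an unseen input."""
    a, b = model
    return a * x + b

# "Past data" (made up): articles published per day vs. clicks received.
past_articles = [1, 2, 3, 4, 5]
past_clicks = [110, 205, 290, 410, 505]

model = train(past_articles, past_clicks)
print(predict(model, 6))  # a best guess for a day the model never saw
```

The prediction is not knowledge; it is an extrapolation from whatever patterns happened to be in the training data, which is the point of the paragraph above.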
When AI becomes a bogeyman rather than a specific instantiation of a technology put to a particular use, AI is a monolith without nuance - and that makes it more difficult to target exactly what journalists really need to know in order to use this technology well.
AI is not robot dogs or terminators. That vision is best understood as “artificial general intelligence,” which IBM defines as “synthetic minds that can learn and solve complex problems” - or perhaps, in our imaginary, as sentient beings thinking, feeling, and acting in the material world with capacities beyond those of humans. It’s just not happening yet, and it may never happen in the way that Terminator or Westworld plays out.
Even then, if artificial general intelligence comes to pass in some massive way, human motivations remain our own: while we might be prodded and prompted to go running, the actual decision to move one foot in front of the other is one we get to make. Our material instantiation is what gives us our humanity, and this is special in all the ways that philosophers and theologians have discussed for centuries.
At its core, generative AI is just a lot, a lot of math (my favorite explainer is Jaron Lanier’s How to Picture AI, and he explains it far better than I can). Math seems bulletproof: 1+1 = 2. But math can be wrong when it is used to make predictions: estimates based on probability are just about chance, and some chances are more certain than others. Generative AI is not always right, and people are finding many ways in which chat-prompted generative AI falls short. Know this: AI is the over-and-over-again best-case prediction based on probabilities of what came before - calculations performed instantly, as a series of possible best matches over multiple dimensions (matrix algebra on serious steroids).
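A toy illustration of “best-case prediction based on probabilities of what came before”: count which word follows which in some training text, then always emit the most frequent continuation. The corpus here is invented for illustration; real models do this over vastly more data and dimensions, but the principle (probability of what came before, not truth) is the same.

```python
from collections import Counter, defaultdict

# Made-up training text.
corpus = "the whale swims and the whale dives and the ship sails".split()

# Count, for each word, which words have followed it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Best-case prediction: the most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # "whale" - it followed "the" in 2 of 3 cases
```

Notice that the model can only ever echo the statistics of its training text: ask it what follows “the” and it answers with the most probable past answer, right or wrong.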
Worrying about AI as a threat to our information environment (and thus a threat to democracy) oversimplifies the problem. Fixing our information environment, whatever that looks like, is a bandaid on the real problem: an unequal world where flows of capital favor a few winners, and where, culturally, pluralism, tolerance of difference, and collective protections for individual liberty are threatened by a contemporary rise in illiberalism (this push and pull seems to always be with us, these dips and troughs). (Separately, worries about people being manipulated presume that a) people are paying lots of attention to boring political information and b) people do not think for themselves in culturally situated ways. The hope of media literacy - that if only people understood media better, they would vote the way “we” want them to vote - is actually a win for all the anti-elitist populists who point out the smugness of the progressive knowledge class.)
AI is material. Arthur C. Clarke’s adage that “any sufficiently advanced technology is indistinguishable from magic” seems true for generative AI: a bit of the genie effect, where a poof and a prompt reveal some new trick that makes our lives easier. But AI, much like the Cloud, or Bitcoin, or many of the software technologies imagined as invisible infrastructure happening in the sky, is very much not the stuff of magic. Our clouds live in server farms, not in the sky. Generative AI takes lots and lots of resources: power - lots and lots of energy. Water, to keep machines cool. Metals, to make chips.
As Sasha Luccioni argues in The Verge:
“The generative AI revolution comes with a planetary cost that is completely unknown to us.”
Other insights: “the average smartphone uses 0.012 kWh to charge — so generating one image using AI can use almost as much energy as charging your smartphone.”
The growth of generative AI depends, in part, on the resources we have to power the tech and on what tradeoffs we are willing to make to enable this expenditure of energy relative to the damage to the planet. Do not forget that AI is rooted in actual physical hardware with actual, real energy needs.
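The comparison quoted above is easy to check as back-of-the-envelope arithmetic. The 0.012 kWh per smartphone charge comes from the cited piece; the per-image figure below is an assumed round number for illustration only, since estimates vary widely by model and hardware.

```python
# Back-of-the-envelope energy comparison.
phone_charge_kwh = 0.012      # average smartphone charge, per the quote above
image_generation_kwh = 0.010  # per AI-generated image: assumed, illustrative

images_per_charge = phone_charge_kwh / image_generation_kwh
print(f"One phone charge ~ {images_per_charge:.1f} AI-generated images")
```

Under these assumptions, one image lands in the same energy range as one phone charge, which is what makes the quote striking: the cost is small per use, but it is paid billions of times over.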
Concerns about Generative AI are actually longstanding worries, not new ones
Remember the Google News snippet? How Google didn’t want to acknowledge copyright, or that copyright frameworks hadn’t caught up to the digital era? And how Google ultimately broke most nation-state-level efforts by news organizations by threatening to drop news outlets from its search index? Snippet culture rules generative AI, not just in text but also in video, photography, audio, and beyond.
One caveat, though: the death of the link and the move toward summaries of the depth of knowledge of the internet, rather than connecting queries across networks of interconnected links - that is a problem. Indeed, the fundamental link structure of the web is under threat, at least insofar as it is visible to users. The networked architecture of the web is what made the web seem like magic (silent digital infrastructure connecting ideas across space and time).
Just because AI “can” doesn’t mean it “can” well
Let’s think about another domain: pop music. For much of the past half-decade, if not longer, it has been possible to algorithmically generate music that “ought” to be a pop hit, based on previous hits. In other words, it has been possible to computationally engineer a hit - after all, isn’t bubblegum pop all the same? (Or verse-chorus-verse rock?) So if generative AI works for creative industries, there should be a hit every time, or at least a high likelihood of one. But humans are fickle. And it turns out that Beyoncé breaks algorithmic music’s predictive power - seriously, would current models have predicted the rise of mainstream country and alt-country up through the charts? Probably not.
Most of our creative work is not groundbreaking. Few of us will ever reach the skill of Mary Oliver, T.S. Eliot, Shakira, Madonna, LeWitt, Mapplethorpe, Murakami (why I offer these examples as opposed to others, I have no idea). But seriously, most art doesn’t become ART because most art made by most of us simply isn’t all that great, or doesn’t have the institutional/social capital to propel the artist into success. The long tail, the extremes, the new - these are the unpredictables, or the least predictable. And this is what AI cannot do well. Derivative creative work is just meh. AI gives us meh, because the “not meh” of art has the aura that Benjamin talked about: a feel that resonates beyond the mechanistic. Separately, this idea that most of our work is fundamentally “meh” and derivative is probably why it’s so easy to ask AI to write academic journal articles.
I firmly believe that asking generative AI to create knowledge or art with the average prompt talent of most people will result in “meh” - and to this end, the sum total of all college essays written about Moby Dick will really just inspire, via generative AI, some “meh” essays on the white whale.
More to come on the East-West Conference (magic itself) in a later post….