This is part one of a Book Post series.
A lot seems to be shifting under our feet in the way writing comes to us in 2023.
Recently The Wall Street Journal uncovered a hitherto unknown effort by Facebook to suppress political content in the wake of the events of January 6, 2021, a measure that diminished “high quality news” relative to “material from outlets users considered less trustworthy” in users’ feeds. The purpose, internal documents from Facebook parent company Meta disclosed, was to “reduce incentives to produce civic content” after years of criticism that the algorithm’s elevation of inflammatory items sows unrest and does harm. In Meta’s reasoning, suppressing all news relieved the company of having to adjudicate it in ways that might appear “political.”
Users didn’t like it, Facebook found: “The majority of users want the same amount or more civic content than they see in their feeds today. The primary bad experience was corrosiveness/divisiveness and misinformation in civic content.” Meta held back from the most draconian application of the principle, “hitting the mute button on all recommendations of political content,” in favor of “demot[ing] posts on ‘sensitive’ topics as much as possible.” In the realm of unintended consequences, the change apparently reduced donations to charities and prompted some publishers to switch to “more sensationalistic crime coverage” in search of engagement. The development casts a new light on Meta’s announcement last summer of the expiration of Facebook’s once-heralded News tab, which was supposed to make amends to journalism for having gobbled up its advertising and corralled it into disadvantageous licensing agreements.
The prospects for more nuanced moderation on social platforms seem increasingly cloudy, especially in the face of historic layoffs and post-pandemic shrinkage at the tech companies. Journalist Casey Newton has reported that, after he exposed the toxic effects that reviewing Facebook posts flagged for removal had on the low-wage contract workers who had to do it, companies, rather than modifying their approach, fled from the business altogether. Semafor Media’s newsletter recently noted that although Twitter and Facebook aggressively moderated covid disinformation, the US had one of the world’s highest rates of vaccine skepticism, and a large-scale study of 2016 Russian influence operations on Twitter recently concluded that the campaign had no measurable influence on American voters (I’m not so sure). Semafor Media also had an interesting feature on the Biden administration’s relative shrug when it comes to Twitter (“the administration does not consider Twitter a vital part of any political strategy that reaches beyond the chattering classes”).
Meanwhile Twitter, in the person of CEO Elon Musk, has declared war on journalists, once among the platform’s most avid participants. As big tech’s footprint shrinks, with its traditional advertising businesses threatened by TikTok and other competitors, its hold on the spread of information is loosening, a shift that, while depriving writers of what had been a ready, if socially costly, readership, has in the view of some compelled writing and journalism to attend to their audiences with more vigor. Jim Bankoff, CEO of Vox Media, told Semafor’s Ben Smith, with typical CEO jargonese: “Now that Meta, Twitter and others are out in the open with their intentions, real news organizations can more easily avoid practices or partnerships that don’t optimize audience value. We can fully prioritize meaningful relationships with audiences with less distraction.”
The new “creator economy,” the ultimate expression of this audience-embracing trend, inviting readers to reward writers and other “creators” directly through platforms like our own Substack, has been heralded as an alternative to the revenue-strapped, social-media-beleaguered legacy publishers, but observers are anticipating a tightening in that market as well. Jane Friedman, who covers the publishing landscape for independent authors, writes that “a number of media prognosticators see clouds on the horizon for creators who have gone solo over the last couple years,” quoting Brian Morrissey to the effect that “running a solo media business is hard and not for most people” (tell me about it). She predicted that amidst shrinking revenues at tech companies “terms for creators [will] become less favorable,” elsewhere citing Kristina God at Better Marketing’s report that Substack, for instance, is indeed “retooling its deals with creators and announced they will cut back on advances to writers.” Morrissey, and Ben Smith, who noted that his own “monthly bill for paywalled news got bigger” in 2022, saw promise in “aggregators” and “micro-media companies,” which would herd these lone wolves into loose packs, without being able to name very many of them.
But surely the biggest swell to rock the writing ship in recent weeks has been the release on November 30 of the AI text generator ChatGPT, a publicly available computer program developed by a startup called OpenAI working with Microsoft (OpenAI is also the creator of the equally enthralling image generator DALL-E, which hit the scene last summer), which can convincingly generate text of any sort from a few prompts. Folks quickly predicted the end of the student essay, the study of the humanities, human-generated journalism, and so on. Those of us who have not fully been paying attention learned that all the major tech companies are developing comparable word-spinning technologies to integrate into the tools, like search engines and email and word processors, that we use every day. (In other examples, the popular design platform Canva already uses the AI image-creator DALL-E, and the programming platform GitHub uses code-predicting software called Copilot that Microsoft’s own staff described as “jaw-dropping.”) A next generation of ChatGPT’s underlying technology is due any minute, with an estimated five-hundred-fold increase in the size of the “neural network” from which it draws, giving it as many “parameters” as the brain has synapses, we are told.
With schools reopening this month, systems in New York and Seattle and other cities banned ChatGPT from school devices and networks, and universities scrambled to develop overnight policies on AI-generated text, with some professors racing to redesign courses entirely in light of the sudden ubiquity of homework-generating technology. Edward Tian, a Princeton undergraduate, spent his winter break designing a tool to detect the bionic hand of ChatGPT in student writing and became a hero to teachers. OpenAI representatives told Casey Newton:
We’ve always called for transparency around the use of AI-generated text. Our policies require that users be up-front with their audience when using our API and creative tools like DALL-E and GPT-3. We don’t want ChatGPT to be used for misleading purposes in schools or anywhere else, so we’re already developing mitigations to help anyone identify text generated by that system. We look forward to working with educators on useful solutions, and other ways to help teachers and students benefit from artificial intelligence.
To which Casey Newton responded, “If AI companies don’t develop a strong set of policies around how their technology can and should be used, the wider world may quickly develop those policies for them.”
Beyond the tech press, Daniel Herman published a much-circulated piece in The Atlantic entitled “The End of High-School English,” worrying that, if writing can henceforth be performed by a machine, “it’s no longer obvious to me that my teenagers actually will need to develop this basic skill.” John Warner, the author of books about writing for students, made what was to me the most persuasive case (among many; see, for example, The Times’s Kevin Roose) for using text-generating software pedagogically rather than banning it, posting on Twitter, “GPT3 is a bullshitter. It has no idea what it’s saying,” like “lots of students,” who “get good grades by becoming proficient bullshitters, regurgitating information back at the teacher.” “The point of school is to learn stuff, not just to produce work,” he writes; the arrival of ChatGPT presents an opportunity to depart from the formulaic approach to student writing that has become the norm in the age of standardized testing and is easy for a robot to imitate. Focusing on the process of writing as a process of thinking rather than a rules-defined product would both thwart ChatGPT’s threat as a shortcut and offer deeper benefits to students. But Warner and other teachers in my own universe noted that contemporary teachers are so overburdened, and so obliged to orient education around tests and testable metrics, that opportunities for the kinds of close interaction this more process-oriented, less ChatGPT-friendly approach to writing demands are few.
Among adults, many have noted that this adept tool will eliminate many of the income streams that used to pay writers’ bills, like legal and corporate and “customer experience” writing. Ben Smith, for his part, wrote, “I tend to be optimistic that ChatGPT could make it easier for great journalists who don’t write particularly well (an overvalued skill in newsrooms that don’t have enough editors) to thrive,” and he even hopes “that GPT-3 can at least do away with some of the most tragic journalese.” (AI was already a presence in publishing in the audiobook industry. Although Amazon’s Audible is still for now committed to human narration, Apple this month announced a catalogue of AI-generated audiobooks, and Google Play offers “machine generated” audiobooks that can be customized down to the word. Though digitally narrated audiobooks have yet to escape what’s been called the “uncanny valley” effect, and human actors lament the prospect of their obsolescence, the economics of AI-generated audiobooks promise to bring into the catalogue many more audiobooks, in obscurer corners of the market, than could previously have been produced.)
Amidst denials that a computer could ever replace a writer in the creation of actual literary art, several interviews with working writers already using artificial intelligence were tentatively enthusiastic … (Part 2 to publish soon!)