In this sequence of regenerations, GPT-3, the latest iteration of OpenAI’s powerful text-generating language model, completes James Joyce’s Finnegans Wake (1939), first using the initial sentence of the original as a prompt, then working from the final words. Cunningly, the third version of the opening picks up the unstopped ending, endorsing the standard view of the Wake’s circularity. As I argue in Artefacts of Writing, the final words can also be read as opening onto the blank space remaining on the final printed page, leaving room for us, or GPT-3, to continue in our/its own way. As cunningly, the final version (8) of the ending takes us back to the beginning, mimicking version 3. In my view, version 7 is the most moving, given the way Finnegans Wake actually ends (with ALP contemplating death as a Joycean release from a difficult life). That said, version 4 is also a pretty good description of what a bio-cultural intelligence might ideally think after reading the Wake (see What is creative criticism?).
Does this mean GPT-3 heralds the advent of a post-human world of reading and writing? The computer scientist Douglas Summers-Stay, an early GPT-3 researcher, gave an engagingly apt and cautionary answer in 2020:
GPT is odd because it doesn’t ‘care’ about getting the right answer to a question you put to it. It’s more like an improv actor who is totally dedicated to their craft, never breaks character, and has never left home but only read about the world in books. Like such an actor, when it doesn’t know something, it will just fake it. You wouldn’t trust an improv actor playing a doctor to give you medical advice.
See Gary Marcus and Ernest Davis, ‘GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about’, MIT Technology Review, 22 August 2020; and for another astute assessment of GPT-3’s capabilities, see Luciano Floridi and Massimo Chiriatti, ‘GPT‐3: Its Nature, Scope, Limits, and Consequences’, 1 November 2020, revised March 2022.