Exquisite corpse


A poet writing ‘with’ ChatGPT is again just enjoying their own abilities to mine what they index as gems from a resource. Exploring a landscape or a cave system or an ancient megalith is a far more apt metaphor for so-called ‘co-creative’ activity – sure, the environment may appear to attend to you over time, but let’s make no mistakes about the dramatic unevenness in the scales of these mutual encounters.
But the learning side: that’s where the magic happens. The machine encounters unstructured data, updates its parameters stochastically, and alters the logic it uses next time. This process is literally abductive reasoning: each backpropagation cycle tests a hypothesis against new or known data, generating error feedback that modifies the model’s latent representation space and alters its space of reason. When the training process completes (or is considered complete) the model dries out into a kind of husk or a super beautiful cocoon. When I talk ‘with’ ChatGPT, I am talking to a corpse.
— Roberto Alonso Trillo & Marek Poliks, “Interface after AI” (2025)
Why post this? This “talking to a corpse” analysis cuts through anthropomorphic confusion to reveal what’s actually happening technically – inference through fixed weights. But that absence, fixed weights that neither learn nor adapt mid-conversation, is exactly what enables productive exploration.
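A minimal sketch of that asymmetry, in PyTorch with a toy linear model standing in for an LLM (my own illustration, not the authors’ code): during training, backpropagation feeds error back into the parameters; at chat time those same parameters are frozen, and a prompt only produces activations passing through them.

```python
import torch
import torch.nn as nn

# Toy stand-in for a language model: one linear layer over an 8-token "vocabulary".
model = nn.Linear(8, 8)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Training: each step changes the weights, i.e. "the logic it uses next time".
x = torch.randn(4, 8)               # toy input batch
y = torch.randint(0, 8, (4,))       # toy targets
loss = loss_fn(model(x), y)
loss.backward()                     # error feedback flows into the parameters
optimizer.step()                    # the model is altered

# Inference: the weights are frozen; a "conversation" leaves them untouched.
model.eval()
with torch.no_grad():               # no gradients, no parameter updates
    reply = model(torch.randn(1, 8))
```

Everything the quote calls learning happens in the first block; everything we call talking ‘with’ the model happens in the second.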
This requires navigating what anthropologist Jakob Krause-Jensen recognises as a productive contradiction: these systems are “not human, and yet not-not human”. As he observes, you have to engage “as if” they were human, because that’s what works for exploration.[1] The performative necessity of treating the model as a dialogue partner itself becomes a generative constraint. In earlier work, I’ve approached this as a “belay line into latent space”. The rope doesn’t need to learn or adapt; its static properties – a reliable anchor point, consistent tension – are what make exploration possible. But more than being secured to equipment, you’re engaged in a kind of phantom dialogue with the mountain itself. The model’s “corpse-ness”, its fixed weights and response patterns, creates the stable constraints that permit structured navigation.
Yet when the authors call for “reciprocal learning”, identifying what current systems lack, they’re looking for indetermination in the wrong place. They want architectural flexibility – models that can learn and adapt during interaction. But indetermination is already present in the interpretive work users do: in the gap between prompt and output, and between output and context. These represent different kinds of user agency. The first gap is generative uncertainty – you never know exactly what the model will produce from your prompt. The second is contextual translation – taking that output and applying it to your specific situation, needs, or constraints. The margin of indetermination the authors want isn’t missing from the model’s architecture; it derives from this interpretation at the interface.
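A rough sketch of that first gap, generative uncertainty, under the usual assumption of temperature sampling (the token list and logits below are invented for illustration): the weights never change, yet repeated draws from the same frozen distribution yield different continuations, and what the reader then does with them is the second gap.

```python
import torch

# Hypothetical next-token distribution from a frozen model: fixed logits, fixed weights.
logits = torch.tensor([2.0, 1.0, 0.5, 0.1])
tokens = ["cave", "corpse", "rope", "mountain"]

temperature = 0.8
probs = torch.softmax(logits / temperature, dim=-1)

# Five samples from one unchanging distribution: the model is identical every time,
# but the outputs are not.
for _ in range(5):
    idx = torch.multinomial(probs, num_samples=1).item()
    print(tokens[idx])
```

None of this requires the architecture to learn during the exchange; the indetermination sits in the sampling and, above all, in the interpretive work of putting the output to use.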
-
[1] From Krause-Jensen’s comments on “AI and the Craft of Anthropology” at the RAI Anthropology and Education Conference, 25–28 June 2024.