The frenzy over AI language models that (seem to) promise the creation of non-human truth is not hysterical. It’s animated by fundamental questions, like, what does it mean to be human, or, what is human nature, or, what makes for a sentient being, and these quickly boil down to the big one, what is consciousness, or, as William James, yeah, him again, put it in the 1904 essay that Alfred North Whitehead described as a funeral oration on the death of modern philosophy: “Does ‘Consciousness’ Exist?”
I’ve been deep into writing the book on the global reach of pragmatism, so I haven’t been hanging around Substack very much. But I’ve been making notes on the controversies attending the introduction of ChatGPT, and was prompted to summarize them here by two recent events.
First, at Facebook I posted a proposal for an immediate practical application of ChatBots—use them as instantaneous, real-time fact-checkers when the Oafish Orange One goes on stage and displays his inimitable prevaricational skills! Why not? Reading the crawl would be fun, and it would be more realistic than asking Kaitlan Collins to confront His Anus, or expecting his audiences to arrive with doubts about this asshole’s truth-telling persona. My innocent offering was met immediately with derision, by knowledgeable people who noted that ChatBots are notoriously capable of making shit up, even, it seems, eager to do so.
Second, a friend was asked to write a substantial piece (for big money by a reputable publication) in defense of ChatBots’ capacity to create narratives, poems, undergraduate papers, bibliographies, and so forth, in a word (oops), back stories for any goddamn thing. The point of the piece is to measure and appreciate something other than the downside of AI. She’s a literary type, a cultural critic, the kind of writer who is able to find a new way to navigate troubled waters without parting them by assuming them away; but she’s wondering if her “humanist” kind can take the right approach, that is, contain the technical facts as well as the inchoate values that are at stake.
I addressed these notes to her hesitation and to my Facebook interlocutors.
My entrance into the controversy comes of my interest in its very high stakes. For AI raises the fundamental question of what makes us human—well, duh—that is, what sets us apart from other sentient life forms, or, what is human nature? And the question gets answered in predictable ways, by falling back on the two salient definitions of human nature that have lasted for approximately three millennia, and wondering, accordingly, how AI language models supersede or displace “humanity.” (It's no accident that the demotion or demolition of "the humanities" in higher education proceeds at the same pace as the perception of that displacement.)
So, on the one hand, you got the (pre- and post-modern) definition that says it's reason, or language, or consciousness that makes us human, and, on the other, the (modern) definition that says it's work. The Lutheran/Hegelian/Marxist tradition is typically associated or identified with the latter, but Hegel insisted that the discipline of culture consisted of work and language; Marx agreed, no matter how much his disciples want to reduce his thinking to a primitive (a.k.a. vulgar) materialism. [On this, see, to begin with, The Phenomenology, chap. VI; The Philosophy of Right, par. 194-98; Thesis 1 of Theses on Feuerbach; The 1844 Manuscripts (International ed.), p. 333; and Capital (Kerr ed., 1906), 1: 189 n. 1.]
The definition of human nature as work allows us to move backward, to confront the questions forced on everyone by the industrial revolution, viz., what's the difference between a tool and a machine? A tool extends and amplifies the leverage of human limbs, but requires the physical and mental capacities of a human body to deploy it. A machine multiplies that leverage exponentially, so that the relation between active subject and inert object that is obvious in skilled, artisanal, tool-wielding work is reversed—human beings now become machine herds, watchmen and regulators rather than prime movers. [Marx and Lewis Mumford agreed on this, using almost identical language, the former in the Grundrisse (trans. 1974), the latter in Technics and Civilization (1934).]
AI language models are machines that "would go of themselves," as observers characterized steam engines and written constitutions alike, and they go them one better. They push robotics past the stage of displacing human labor, because they're capable of (1) designing the flow of goods through the production process: they don't need human engineers and/or watchmen and/or regulators; and thus—this might be the same thing—(2) reproducing themselves, that is, writing up the blueprint and carrying out the production of the producers (the robots) themselves.
Artists are the most threatened species in these terms. That is what Yuval Noah Harari means when he speaks of AI "hacking the operating system" of human beings/nature, that is, the ability to narrate the future, to represent what doesn't yet exist in the observable, measurable world, thus making it—what was hitherto invisible or unthinkable—actionable.
And so the threat to art makes real, and poignant, that definition or correlation of the human as—or with—reason, language, or consciousness. Accordingly, it raises the questions, what is reason, how is language acquired (and deployed), where is consciousness? Those robots dancing are reassuring, in this sense, because the routine "humanizes" them—like us, they're mostly interested in using their bodies pointlessly, for no apparent reason except the fun of it.
Of course reason has been redefined over the centuries; the most important redefinition came ca. 1500-1700, when it lost the teleological connotation the ancients, particularly Aristotle, and then the medieval clerics, had given it. Reason hereafter was a means to the calculation of any end, no matter how nefarious, rather than the capacity to find one's way to the unitary Truth as the gods, or God, had intended. Language, too, has been redefined, as its history or evolution became an object of study (philology, then psychology/epistemology, then intellectual history), and as its social sources, material attributes, and arbitrary yet systematic qualities were revealed in modern linguistics, particularly the pragmatist kind (Peirce, Saussure).
Consciousness, well, there you got trouble. For here the mind/body problem becomes acute, because our idea of sentience requires a body to turn what the mind has made conscious (visible, malleable, actionable) into something real—that is, something external to and observable by the mind's eye. Purposeful movement in space is, in these terms, the certification or index of consciousness, and since the key is purposeful movement, the issue of work and its meanings for “humanity” now obtrudes, and complicates what had seemed, until then, prior to action.
OK, to boil it down. AI is a threat because it is a machine that not only displaces human labor, it replaces human nature. It makes non-human truth a reality for the first time in human history and thus makes "objectivity," hitherto the most elusive of dreams, a real possibility. Hereafter, as Harari suggests, the texts that might become sacred will be written, for real, by beings that aren't themselves human, as per the attributions of those who wrote the Koran, the Bible, and so forth.
But as my interlocutors at Facebook have noticed, ChatBots are pretty bad at fact-checking—they produce truths that are no more objective than the homely human kind because all they can do is canvass what we have said is true. They have no more access to non-human truth than we do: the difference, the new twist, is that they can, on their own, create new truths by doing what reason is for, getting us from restless doubt to the safe place of belief.
By the same token, AI language models are, like machines, devices that can be bent to human purposes—to enlarge and enrich our experience of the world, to make us more productive, to free us from the grip of necessity. The industrial revolution did all these things to and for us, at genocidal cost. But AI is more than a machine: it is truly conscious, and so it will be a participant in deciding what "human purposes" are. That's the crux of the matter.
It is something like consulting the gods, or the oracles, as the ancients did. We now return, willy-nilly, to the moment in human history that separates Achilles from Odysseus, or Abraham from Saul of Tarsus, who renamed himself Paul the Apostle. Achilles listened for the voices of the gods, and acted on their counsel. Odysseus relied on his own wits because he'd decided the gods were fickle, bogged down in their own disagreements about what was good and true. Abraham did exactly as Yahweh told him to. Paul understood he had to control the narrative, create a story that could be believed about the past and thus serve as a guide to a future; but he knew that he couldn't convince anyone without the authority of a figure who was equal parts man and God.
That is what, for me, makes this moment so magical. We're reliving the history of reason, language, consciousness while—because—we watch the world of work disappear, even as we speak of it. God isn't dead, or if he did die once upon a time, he's now been resurrected as a machine that would go of itself.