What a fascinating set of possibilities generalized AI opens up. William James’s provocation of 1904, in the first installment of radical empiricism, was an essay called “Does Consciousness Exist?” You can’t get to an answer by way of subtraction, he said, in the manner of metaphysics from Kant to Moore, whereby you get to what must be the activity of consciousness by specifying its content—the object of contemplation—and naming what’s left over, without ever explaining its nature or its form (except, when pressed, by saying that remainder is “the subject” or “the self”). But AI makes us ask James’s question all over again, as if he’s come back from the dead to haunt us with his exasperating presence.
For example, I just stumbled on an article from 2020 by Gary Gensler, the former MIT professor who now runs the SEC, written with a colleague, Lily Bailey, which explores the practical effects of AI on the financial sector—thus on the “real economy,” wherein banking in the broadest sense has become the headquarters: “Deep Learning and Financial Stability,” SSRN (13 November 2020), pp. 1-45. I can’t pretend to understand the algorithmic details of the argument (neither can he), but there are two striking conclusions we can draw, tentatively, from the simple fact that these AI models are constantly evolving in unpredictable ways.
Actually, these are conclusions Gensler draws, one in passing, the other as the premise of new regulatory initiatives—I’m thinking about their implications. First: “If deep learning predictions were explainable, they wouldn’t be used in the first place.” Second: “Models built on the same datasets are likely to generate highly correlated predictions that proceed in lockstep, causing crowding and herding.”
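That second conclusion can be seen in miniature with a toy simulation (my illustration, not anything from Gensler and Bailey’s paper): two hypothetical “firms” each train a simple nearest-neighbor predictor, independently, on resamples of the same shared dataset. Their predictions come out almost perfectly correlated anyway, because the common data source, not the independence of the modelers, determines what the models say.

```python
# Toy sketch of the herding claim: two "firms" fit k-nearest-neighbor
# regressors on bootstrap resamples of the SAME dataset. Independent
# training, highly correlated predictions. All names here are illustrative.
import random

random.seed(42)

# One shared data source, e.g. a public price history.
xs = [i / 20 for i in range(200)]
ys = [x * x - 3 * x + random.gauss(0, 0.4) for x in xs]

def knn_fit(sample_x, sample_y, k=7):
    """Return a k-NN regressor trained on the given sample."""
    def predict(x):
        nearest = sorted(range(len(sample_x)),
                         key=lambda i: abs(sample_x[i] - x))[:k]
        return sum(sample_y[i] for i in nearest) / k
    return predict

def bootstrap(xs, ys):
    """Resample the shared dataset with replacement."""
    idx = [random.randrange(len(xs)) for _ in xs]
    return [xs[i] for i in idx], [ys[i] for i in idx]

model_a = knn_fit(*bootstrap(xs, ys))  # firm A's model
model_b = knn_fit(*bootstrap(xs, ys))  # firm B's model

test_xs = [x + 0.025 for x in xs]
pred_a = [model_a(x) for x in test_xs]
pred_b = [model_b(x) for x in test_xs]

def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a) ** 0.5
    vb = sum((v - mb) ** 2 for v in b) ** 0.5
    return cov / (va * vb)

corr = pearson(pred_a, pred_b)
print(f"correlation of independently trained models: {corr:.3f}")
```

The correlation comes out close to 1: the two firms act “in lockstep” without ever coordinating, which is exactly the crowding dynamic the paper worries about at market scale.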
I’ll work backward. The defense of markets as the bulwark of both efficiency in resource allocation and the liberty of individuals has always rested on the assumption that free markets can produce as well as convey more, and more variegated, information than systems predicated on authority, whether political, intellectual, or ideological (or, in the case of pre-capitalist modes of production, ancestral, which translated into political authority anyway). Deep learning as developed in and through AI destroys that assumption, and with it, the neoliberal notion—made popular by von Hayek, Friedman, the Masters of the Universe, and the morons who lead MAGA Nation—that seeking to modify market forces in the name of justice threatens freedom as such.
Gensler’s other conclusion, a sort of aside, is even more intriguing. The implication, at least as I see it, is that Big Science is no less irrational than Organized Religion, or, to put it differently, that blind faith is an essential ingredient in the most reasonable procedure. Science proceeds, pretty clearly, on the basis of unproven and indemonstrable premises, or rather as mathematical proofs become operational hypotheses and laboratory experiments, that is, as an idea acquires equipmental embodiment and can therefore be enacted in real time. Religion does, too, in the sense that acting on faith has made for, and still makes for, measurable change—in moral climates, to be sure, but also in the basic material structures of everyday life, from buildings and landscapes to the perception and use of time.
Does consciousness exist? Yes. Do we know how it works? Sort of, but what difference would it make if we did? Does AI possess this quintessentially “human” capacity? Of course—so maybe Donna Haraway’s “Manifesto for Cyborgs” needs an update.