I attended Derek Parfit’s public lectures at Rutgers-New Brunswick when he was a visiting professor in the Philosophy Department there, in Spring 2007. My daughter was pursuing a minor in philosophy and I had an abiding interest in it due to my ongoing studies in pragmatism, so it was a good way to stay in conversation with her, and to sample the state of the art. But for me, listening to this man, by all accounts the leading moral philosopher of our time, was an appalling, benumbing experience.
It reminded me of bad poetry readings at smoke-filled coffee houses because Parfit was spinning elaborate yarns, offering up layer upon layer of counter-factual details to conjure a final scene of moral decision. That was his stock-in-trade, refinements on the absurd situation of the crowded lifeboat. You must remember that one from your sophomore year in high school, when you were asked to winnow the crew according to the utilitarian criterion of the group’s survival—the greatest good of the greatest number—then discuss Lord of the Flies as if it were a realist novel. Parfit’s only narrative innovation was to relocate us to out-of-control trolley cars, or, more egregiously, in these lectures, to invent pre-historical situations where the apparatus of modern life (including the formative attitudes) was absent.
I suppose I was then channeling Alasdair MacIntyre, the philosopher who insisted that the history of the ideas which constituted his discipline was the groundwork of serious thinking in the present, and who, accordingly, claimed that any relation between intentions, actions, and consequences which could not be embodied in actual societies, past or present, was a metaphysical conceit unworthy of serious consideration. (MacIntyre could have found an alternative to Aristotle in William James, but, like most philosophers, he was too steeped in contempt for Anglo-American empiricism to read the pragmatists with care.)
I suppose I was also reliving my angry response to a manuscript I had recently “peer reviewed” for the Journal of the History of Ideas, an essay that claimed my discipline (History) had taken a “moral turn” in the aftermath of 9/11, that is, a turn toward “questions of value, concerns about the power of belief,” as if these had not been the principal items on the agenda of historical writing since the 17th century, when the moral agency of every person—what we know as a “conscience”—was summoned by the Reformation and its progeny, the first modern revolution in England. Here is an excerpt from my report to the editors, which appeared in print along with the essay when it was published later, in 2008:
“‘In a time when our politicians and students rest too comfortably in certitude, history’s moral turn may help create productive confusion, a willingness to recognize that behind all of our moral choices—not to mention choices made in the past—lurks [sic] paradox, irony, and tragedy.’
“That is how the author advertises his essay in its opening pages. What is frightening about this advertisement is its complete detachment from the realities on display every day of our lives, on television, at the bar, in the classroom, in the grocery store, even at the mall.
“Maybe the author’s students ‘rest comfortably in their certitude.’ I doubt it. Mine certainly don’t. They’re worried about a lot of things, and they can deploy all kinds of idioms to sort out their productive confusions. They know that moral luck and moral choices are involved in everything they do. They wouldn’t be able to name these active dimensions of their everyday lives any more than I would be able to give a name to the symptoms I summarize for my doctor. But does the absence of a formal designation make them ignorant of the moral problems of our time? Does it make me ignorant of my affliction? Again, I doubt it.
“My point is not that we should celebrate the vernacular version of moral philosophy or medical advice. My point is instead that we must be able to recognize the real knowledge, the genuine insight, that resides in the extra-academic idioms of public opinion, as we encounter them in the classroom and in the world elsewhere—we must be able to see that we are hearing and translating and codifying dialects of moral discourse that are less systematic but more urgent than ours.”
Among the things I was trying to say is that narrative as such presupposes a kind of forensic responsibility. I mean that the moral of the story, any story, resides in the telling itself because it requires the addition of coherence and meaning to pure experience—to what is typically lived, day to day, as random or at least disorderly sequence. To acknowledge this simple fact, you don’t have to go as far as Hayden White does in assessing the value of narrativity, when he claims that recording history in narrative forms requires exact ideas about legal agency and state power plus the sense of an ending, which, taken together, beseech us to redeem the past; all you have to do is realize that the pure experience of everyday life is meaningless precisely because it lacks the formal attributes of a narrative.
Ah, but there’s the trouble with my hard-boiled disdain for Parfit’s counter-factuals. Fiction and philosophy, not to mention the discipline of history or the conduct of science, are animated as well as characterized by the same principle, which is the urge to find new truths, to “produce new knowledge,” by supplementing, amplifying, and rearranging—in a word, changing—what is already known in the present. Who wants to think of themselves as known quantities, fixed entities, stick-in-the-muds? Who doesn’t want to make the world a better place? Who would admit to such inertia? There’s no scandal in Marx’s Eleventh Thesis on Feuerbach: the point of philosophy has always been to change the world, as it has been of fiction, science, and history. And so there’s no scandal in denying the difference between interpretations of the past and the past as such: since our only access to the latter is through the linguistic portal we call history, to reinterpret the past is to change it.
Science fiction, or “speculative fiction” as some of it is now called because of its proximity and plausibility—the pace of change in our time makes it feel like the impossible is always already upon us—functions as the inversion of history, so conceived: it changes the future by supplementing, amplifying, and rearranging the present. “We live forward,” Kierkegaard famously said in 1843, “but we understand backward.” That used to be the case, it had to be, and it used to be comforting to conventional historians, antiquarians, and genealogists. No longer, not in these times, when recycling or merely renaming familiar cultural artifacts passes for innovation. Nowadays we live backward, and we understand forward. That’s why Fredric Jameson’s big book on utopia and science fiction is called Archaeologies of the Future (2005).
In other words, the impossible future rushes toward us so fast that new iterations of AI language models (and their infrastructure in the chips Nvidia makes) seem to appear quarterly, each equipped with almost exponentially enhanced capacities, and as it does so, the very idea of normal becomes quaint, along with inherited notions of human being, consciousness, agency, individuality, etc. What looks novel or even transgressive today will be boring by tomorrow, and not just because the talking heads have learned to treat Donald Trump’s next vicious utterance without raising their eyebrows or voices, thus domesticating it as the “new normal.” The unknowable and the unspeakable have become the everyday.
II
I’m reminded of these events and ideas because last week I watched “Godzilla Minus One” and “Hit Man” back to back, or rather squeezed the latter between the two halves of the former, without knowing they’d be telling a similar story, or drawing a similar moral from different tales. In retrospect, I can think of them both as “speculative fictions” thanks to a recent book by Steven Shaviro, Extreme Fabulations (2021) and a review of it in the context of kaiju media (monster movies, cartoons, toys, comics) by Joseph Weiss at Public Books (April 4, 2024).
I was led to the book by the review because watching the movies raised the same question that my close observation of politics keeps forcing on me—how and why am I taking this shit seriously?
Here is a clownish buffoon who would pimp out his own daughter, a stupid, illiterate, ridiculous figure who is a running joke in his own hometown, a pathetic petty criminal who squandered an inherited fortune trying to be a big shot, a petit-bourgeois landlord who pretends to be the friend of the forgotten man, a malevolent racist who has preyed on every woman, every person, who came within reach except men with more money or power—you’re telling me that this guy, this guy, is not the laughing stock of both mainstream and social media, but is instead favored in the polling for President of the United States, against a doddering but decent man whose policies are a clean break from the neoliberal nightmare of the past 40 years?
That is not believable. Why isn’t everybody laughing at the gibberish on offer from the moron wearing the long red tie and the orange complexion? How can I take this shit seriously? Why do I?
But then, how and why does “Godzilla Minus One” move me to tears? What is remotely believable about a monstrous saurian (kaiju) rising from the Pacific as a result of atomic testing by the US, then stampeding through the streets of postwar Tokyo for no apparent reason? Or about a kamikaze pilot who seeks to redeem himself by reverting to his suicidal vocation?
How and why does “Hit Man” seem perfectly plausible, and therefore genuinely charming? Glen Powell is no Jerry Lewis. What, then, is remotely believable about his portrayal of a mild-mannered professor who convinces himself as well as his students that Nietzsche is right—there’s no doer behind the deed—and accordingly constructs an entirely new self by performing it, just like Judith Butler says we do in deciding on the difference gender makes? I mean, doesn’t this guy cover up his girlfriend’s murder of her ex and then commit another, and get away with both crimes in “real life,” with the blessing of the audience?
How can I take this shit seriously?
You could say, it’s the suspension of disbelief, stupid. But the question then becomes how and why that works. I have come to think we do so because we know, without thinking, that the world is not a lifeless mass, an inert object, but is already endowed with “worldness,” with meaning—equipped not with subjectivity as we experience it, since other sentient beings bring neither a priori concepts of space and time nor language as we know it to bear on the reality we share, but rather with purpose or intentionality of a kind we can’t comprehend. We don’t impose meaning on the world, we add it.
We know without thinking that a story doesn’t find meaning in the world by stripping it down to the things themselves, to the point where our categories and metaphors don’t get in the way of our understanding: the world is not an equation, the map is not the territory. Instead, a story adds meaning or gives voice to a world that seems inert or mute only because it doesn’t speak our languages. Stories make sense of a world that is both formless, lacking narrative coherence, and yet is palpably, undeniably there, as an active, changing shape that saturates our bodies, permeates our minds, inhabits our lives, demanding all the while that we explain it as something more than random particles. That is why the overture of “once upon a time” at the outset of a non-fiction essay or book would surprise and perhaps offend us, even though we know the writer is about to use elaborate artifice to tell a story. We can assume that random sequence is foreign to the written page, and to the cinematic screen, whatever kind of claim to truth is being made on these surfaces.
OK, but these stories? Again, an amphibious tyrannosaur pops out of the ocean to solicit the resources of a people almost annihilated by war, a boring geek becomes a player willing to kill people in the name of love? Weiss understands why we would ask the question of his favorite genre. To answer, so that we will take Godzilla as seriously as he does, he enlists Shaviro:
“[Speculative fiction] is not limited by what can be known; rather, it can instead imagine what could be known, understood, or experienced. ‘In this way,’ Shaviro writes, ‘science fiction is counterfactual, or . . . counter-actual: it offers a provisional and impossible resolution, suspended in potentiality, of dilemmas and difficulties that are, themselves, all too real.’ . . .
“The notion of giant radiated monsters fighting one another in toyetic combat might seem worlds away from the serious and sober science fiction discussed by Shaviro and, indeed, the very notion of speculative fiction as a ‘respectable’ genre.
“And yet, the kaiju fit Shaviro’s definition even as they unsettle it. Silly as they might seem, the best kaiju media grapple with the ‘impossible’ resolution that, for Shaviro, empowers speculative fiction. More specifically, kaiju media can do something specific and powerful in relation to the notion that science fiction can provide us ‘resolution’ at all.
“That is to say, it is the possibility of resolution itself that kaiju challenge. As such, kaiju are a most troublesome genre of speculative fiction indeed, and a most serious one, despite (and sometimes because) of all the bright colors and flashing death rays.”
Weiss is right, Shaviro is dealing with sober philosophical speculations dressed up as science fiction—each chapter addresses a story or novel or concept album (!) that takes Kant’s categories as its point of departure, and finds that if computer simulations of human consciousness manage to erase all the linguistic phenomena that stand between sapiens and the things themselves, the world constituted by such things comes to an end. Not because some new iteration of AI has turned on its makers and obliterated their world, but because at the moment of “transcendence,” when the a priori human categories of space and time are overcome, reality as sapiens have conjured it would, necessarily, disappear.
Still, Weiss is onto something. I would suggest that speculative fiction as Shaviro defines it—as the realistic portrayal of the impossible—becomes normal, the stuff of idle conversation and common sense, when the pace of technological change outruns our capacity to explain or contain it in the name of civilized (or mere social) life, when, for example, computer-generated effects increase the phenomenological density and thus heighten the “reality” of movie scenes featuring monsters; or when young people know they can choose their individual identities as if from a menu because post-structuralist theory has now been validated by their species’ release from the grip of biological determination, economic necessity, and patriarchal ideology; or when the tech bros who invented and now organize the training of AI language models confess that they don’t know how it works.
The industrial revolution was only the premonition of this state of mind. Tools were extensions of the human body which increased its leverage and precision, thus magnifying its output little by little; machines driven by steam power replaced human strength and skill, and, by allowing the conversion of work into abstract social labor conducted in factories, enlarged the output of goods at hitherto unimaginable, almost exponential rates. Steam power was, however, easy to explain, in part because by the 19th century it was a 2,000-year-old discovery, and it could be contained, in theory and practice, by new forms of politics and social organization grounded in the belief that “all men are created equal.”
We can’t explain the capacities of AI, and we’ve lost our faith in equality, which means we lack the means of containing or even managing the effects of a technology that is manifestly “man-made,” as we used to say, but that clearly supersedes human capacities. In this precarious situation, the continuous onset of the impossible seems inevitable, and retreat from or resignation to it looks to be the only rational response: we don’t have the words, which is to say we don’t have the narrative means, to describe it, let alone evaluate it. We can believe, with good reason, that absurdity is the content of everyday existence.
III
Speculative fiction thus becomes merely ordinary, nothing too intricate or arcane. But it thereby reminds us that “transcendence,” and with it the overpowering sense of an ending, are immanent, that is, already residing in and flowing from our mostly boring experience. Anxiety is the tell, the symptom, of this collective anticipatory crouch, this social-psychological fortification against the impending eclipse of the human. For, to borrow from Heidegger, if anxiety is the apprehension of death’s approach, care is the experience of temporality that allows, and requires, an awakening to the facticity of non-being, or nothingness, that margin of existence where consciousness can find its purpose in the creation of meaning.
The “resolution” of spec fiction is impossible, in these terms, because it lives comfortably with the simple fact that we can’t peek over the edges of our existence as if we aren’t there—there’s nothing to be known in our absence, burdened (or enabled) as we are by our languages and the attendant mistakes we call metaphor. Only a God is equipped to recognize the things in themselves, and so to see the future. We mere mortals have to be satisfied with much more, a surplus of meanings, due to the additions we make to reality in the form of stories, and we can thank God, the concept of omniscience, for the ability to see in this way beyond the limits of those mere things.
But does AI know anxiety, or experience temporality? If not, because it can’t apprehend the approach of death, does it finally give us a God’s eye view of the world? Do the algorithms that elicit, arouse, and regulate our attention do something similar? Is that why social media can be diagnosed as the cause of young people’s anxiety, because they let kids stare into the abyss of nothingness, the non-being at the outer edge of their own identities, on a daily, even hourly basis, without affording them access to the raw materials of meaning creation—the care of the world—in their own time and place?
Got me. Jonathan Haidt has a definitive answer to that last question, of course, but as William Davies points out in his review of The Anxious Generation (2024) at LRB (6/22/24), it seems slightly counter-intuitive and profoundly counter-productive to prescribe more freedom for what ails a generation supposedly paralyzed by the range of choice available to it:
“Anxiety has often been interpreted as the consequence of an excess of freedom, of there being too much that might happen and not enough that definitely will. Existentialists and psychoanalysts agreed that anxiety has an anticipatory quality, stemming from the indeterminacy of the future. . . .
“We should be cautious of generalisations about the youth mental health crisis. Yet some kind of narrative is needed, if the post-2008 trend is to be recognised as a political and economic phenomenon, rather than just left as a blizzard of disparate statistics and diagnoses. Perhaps the reasons so many young people are crippled with anxiety (as well as depression) have something to do with the anticipatory dimension of a society governed in the interests of finance and in which there are no guarantees about the future. To be young today is to face the future – the planet’s as well as one’s own – at a time when social safety nets and familiar institutional pathways are being eroded. Education has been recast as an individual investment, whose consequences for good and ill extend for decades. Millions of young people find ordinary parts of life such as school or work impossibly dangerous. If depression represents a grinding to an exhausted standstill, anxiety is a terror of ever getting started—but that must be at least in part because the road ahead appears so long and arduous.” [my italics]
There’s a start on explaining the ubiquity of anxiety and what I take to be its objective correlative, the absurdities of speculative fiction: “the anticipatory dimension of a society governed in the interest of finance and in which there are no guarantees about the future.” That is also a start on explaining why the post-liberals who call themselves conservatives or Catholics, and who want to impose what they grasp as God’s will on the rest of us, sound like Marxists who have mistaken state capitalism for socialism. As Jacques Derrida noted in 1993, the specters of Marx who still roam the earth are themselves collateralized debt obligations, the ghostly dividends of financial speculation.
Perhaps the narrative was propelled by the necessary formation of a central bank, privately owned, so that borrowing and lending could begin and continue apace; so too, the pace of change and the transactional quality of just about everything.