META-REVIEW: I finished The Most Human Human back in June and have been struggling to put my thoughts about it into words ever since. (Okay, a lot of the time I was just playing Mario Kart.) When I finished the book, I felt vaguely ambivalent. But I didn’t have a coherent reaction until one of my friends asked me what I thought about it, and then told me to write a review when I failed to adequately explain myself. So for the next three months I undertook a closer inspection of the book, throughout which I cycled between mild annoyance at Christian’s gratingly earnest style and anguished self-questioning, occasionally descending into fits of righteous, impotent rage. Many scrapped drafts later, this essay is the result of my exertions.
I came across Brian Christian’s The Most Human Human while I was hunting around my house for a book to read last winter break. First, the title caught my eye: The most human human? Was that a catastrophic typo the book’s copy-editor somehow missed? And the tagline was equally attention-grabbing: “What talking with computers teaches us about what it means to be alive.” So I started reading it. Admittedly, I didn’t actually make it through the introduction of the book until June (I kept falling asleep), but the book’s cover didn’t lie—Christian’s argument in the book is just as far-reaching as its tagline would suggest. But in the end, it didn’t exactly manage to convince me.
First, the book’s main premise: Christian enters as a contestant in the Loebner Prize, a competition where the Turing test is enacted in real life. Judges conduct online conversations with anonymous partners, and then must determine whether these partners are human or computer. Artificial intelligence researchers enter their programs in the hopes of fooling as many judges as possible and earning the distinction of “Most Human Computer.” But the humans competing (or confederates, as they’re called) are also awarded a title for collecting the most votes of confidence in their humanity: the Most Human Human. When Christian’s selected as a confederate, he makes it his goal to win this title. (Spoiler: he does.)
Christian spends several months preparing for the contest, reading lots of books and calling up psychologists, computer scientists and philosophers. The Most Human Human is the result of his months of study, and in it he argues that, in the process of learning how to most effectively convey his humanness in conversation, he discovered valuable truths about what it means to be human. As he puts it in the introduction:
“…the Turing test is, at bottom, about the act of communication. I see its deepest questions as practical ones: How do we connect meaningfully with each other, as meaningfully as possible, within the limits of language and time? How does empathy work? What is the process by which someone comes into our life and comes to mean something to us? These, to me, are the test’s most central questions—the most central questions of being human” (13-14).
Christian leads us through several ways computers help us define which parts of humanity are “more human”: by automating rote tasks, programs free up humans to do work that’s less easily quantified, like art. Under the Internet’s blanket of computerized anonymity, humans have to inject more of themselves into their online conversations to identify themselves, thereby better expressing their humanity. And the Turing test chatbots struggle to maintain consistent identities, keep up with the natural flow of conversation, and reference their immediate environment—things that come easily if you have a physical body and a life history.
In all of this, Christian emphasizes self-expression and uniqueness as the mainstays of humanness. Computers aren’t surprising, because they do exactly what you tell them to do. People don’t—they improvise and innovate and change. In a particularly good chapter, Christian considers chess grandmaster Garry Kasparov’s defeat by the computer Deep Blue. Kasparov said the game didn’t count: it was played entirely within the set of moves known to professional chess players as canonical ways to begin and end a match, collectively called “the book.” The real game, Kasparov and Christian assert, occurs when you get out of book—the middle of the game, when the moves aren’t following a known pattern and the players are going by skill alone. Kasparov blundered during the game and never managed to get out of book, so, by his reasoning, it was never a real contest. And Christian says that this framework applies to a host of other things: the way we start our conversations (“How are you?” “I’m good, how are you?” “Good!”), our letters (“Dear ___,” and “Sincerely, ___”), and even life itself (we all start out as squalling, incompetent babies, and we all end up dead). It’s what happens between the automatic routines that makes the game/conversation/life unique, and, Christian says, more human.
In another example, Christian articulates the ways language and communication are forms of data compression. Much in the way MP3s compress audio files, describing your day or writing a book review compresses content to its essential nub. If you can easily compress your day down to a mundane sentence (“It was good”), that’s somehow lamentable. (Christian would probably be cheered by the fact that this review of his book is 3,025 words long.) Conceptual art fares especially badly in this rubric: “who needs to see a Duchamp toilet when you can hear about one so much faster and extract most of the experience from that?” (237). As a test, Christian takes a passage from Joyce’s Ulysses, puts it in a text file, and compresses the file with his computer. He then does the same to a text file of equal size, containing nothing but the words “blah blah blah” over and over again. The “blah” file compresses down to 28% of its original size, but the Joyce only gets down to 79%. “When the compressor pushed down,” he writes, “something in the Joyce pushed back.”
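Christian’s experiment is easy to replicate with any general-purpose compressor. Here’s a minimal sketch using Python’s standard zlib module, with a repetitive “blah” string standing in for his filler file and the (public-domain) opening lines of Ulysses standing in for his Joyce passage. The exact percentages will differ from Christian’s, but the gap between the two is the point:

```python
import zlib

# Hypothetical stand-ins for Christian's two files: pure filler
# versus a short, varied English passage (the opening of Ulysses).
blah = ("blah " * 200).encode()
varied = (
    "Stately, plump Buck Mulligan came from the stairhead, bearing a bowl "
    "of lather on which a mirror and a razor lay crossed. A yellow "
    "dressinggown, ungirdled, was sustained gently behind him on the mild "
    "morning air. He held the bowl aloft and intoned."
).encode()

def ratio(data: bytes) -> float:
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(data)) / len(data)

# The repetitive text squeezes down far more than the varied one does.
print(f"blah:   {ratio(blah):.0%}")
print(f"varied: {ratio(varied):.0%}")
```

In information-theoretic terms, this is just redundancy: the less predictable a text is, the less there is for the compressor to squeeze out—which is exactly the “pushing back” Christian describes.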
What Christian’s trying to say, overall, is that we can be more human by behaving less like bots, in all senses. He’s concerned with locating “that human spark,” and he finds it in the ways people express their innate uniqueness, through their work and words and ways of being. And this idea comes to represent a sort of utopian vision: If people could express themselves without being constrained by automatic response, the world would be a happier, more exciting place. “The highest ethical calling, it strikes me, is curiosity,” he writes (257), and he calls for humanity writ large to avoid complacency, to continually see the world in new ways, to fill our lives chock-full with unique thoughts and experiences.
But Christian’s argument seems shoehorned together: The book alternates between serving as a how-to guide for a Loebner Prize confederate and an argument for communication as the central pillar of human existence. Winning the Most Human Human prize was all about proving his essential humanness through conversation, his argument goes, which also happens to be the way he thinks people can be better humans. But some of his examples only apply in contrived Turing test-type situations, not in typical human interaction. You don’t need to authenticate yourself as a human if you’re talking face to face with someone, and using “site-specific” tactics like talking about the weather or current events might stymie a chatbot, but doesn’t guarantee meaningful conversation. And maybe it’s hard for computers to know when to jump into a conversation naturally. Well, it’s hard for me, too. Am I less human if my conversations limp along or my comedic timing is terrible? (Or if I wait for a person to finish talking before I start?) Most humans have the instinct for conversational timing, so yes, it’s an integral part of being human, but so is the instinct to eat lots and lots of chips. Neither tells us anything meaningful about what being human is.
But beyond the computers, most of his book centers on conversation: its challenges and rewards and especially how to have a good one. Be specific, he advises. Leave “holds,” or departure points, for your conversation partner to latch on to, and intersperse what you’re saying with tiny pauses so they can respond. The goal is to have a conversation that’s interesting, where you learn something about the other person and build some sort of connection. But I can’t shake the feeling that his focus on conversation is kind of missing the point. As if the ability to hold a good conversation were proof of your humanity! Christian tells us about the time he calls to activate his new credit card, and has a conversation with the call center lady for 10 minutes. “My roommate, passing through the living room, assumes it’s an old friend on the line,” he writes (87). It’s great that Christian can project his warmth to everyone, even people he doesn’t know. And his implication is clear: by having better conversations, we can be more empathetic. But his anecdote feels glib to me, and a little self-congratulatory. Chatting with a stranger about the weather in Seattle (as he does) doesn’t exactly strike me as plumbing the depths of human connection. Do I have to be effusive or forthcoming when I meet people to be somehow more human? The ability to have a good conversation, to be charming and entertaining, doesn’t really say anything about the person who’s speaking—conversation alone (especially casual conversation) can’t get at some incredibly human things. Like our deeply-held convictions, or our senses of identity, or our moral systems.
Conversations are, of course, important. They’re the main way we connect with people, and being social creatures, we crave connection. But there are a million reasons conversations aren’t interesting or exciting, and it’s glib to say that bad conversations occur just because of carelessness or bot-like, unthinking behavior. What if the person is desperately afraid of judgment, or introverted, or depressed? What if the conversation partner just doesn’t want to talk to that particular person? Wherever humanness is concentrated, I don’t think casual conversation is the place to look for it.
The call-center lady anecdote isn’t the only time Christian brings in his own conversations to demonstrate his conversational prowess. There’s that time his friend calls him up and he starts talking to her about Infinite Jest (of course he’s reading Infinite Jest), there’s the part when he talks to another call center lady about replacing a plastic tab on his phone, and so on. This is how he closes the last proper chapter in the book:
“The shopkeeper notices my accent, I tell her I’m from Seattle; she is a grunge fan; I comment on the music playing in the store; she says it’s Florence + the Machine; I tell her I like it and that she would probably like Feist…
“I walk into a tea and scone store called the Mock Turtle and order the British equivalent of coffee and a donut, except it comes with thirteen pieces of silverware and nine pieces of flatware; I am so in England, I think; an old man, probably in his eighties, is shakily eating a pastry the likes of which I’ve never seen; I ask him what it is: “coffee meringue,” he says and remarks on my accent; an hour later he is telling me about World War II, the exponentially increasing racial diversity of Britain, that House of Cards is a pretty accurate description of British politics, minus the murders, but that really I should watch Spooks; do you get Spooks on cable, he is asking me…” [he recounts another conversation after this, but you get the idea.]
At this point I’m thinking, who cares? The thing is, these are easy conversations. They’re not talking about anything in particular, and Christian has no special emotional investment in any of these people (nor should he, he’s just met them). These conversations have nothing at stake. From the way he runs through these anecdotes, he’s almost tokenizing the conversations—these people are just interesting experiences for him to have (look how British and old this guy is!). And it feels dishonest, given his huge, weighty claims about “empathy” and “the central questions of being human.” The conversations he presents to us don’t encapsulate the times when human communication is really tested—navigating disagreements, discussing deep-seated, uncomfortable emotions, and so on. On these matters, Christian says pretty much nothing.
The value Christian places on conversation, I think, comes because he believes self-expression is one of the high aims of humanity. He speaks glowingly of artists who are constantly innovating, following a philosophy he describes as the “economy-in-all-senses-of-the-word-be-damned battle cry of the artist” (89). (As Christian sees it, artists are pretty much as human as you can get.) Christian describes himself as a very “forthright, forthcoming person,” and you can tell. It’s a priority of his to draw people out in conversation, to learn new things about them, and to broker some exchange of ideas. But unmentioned in this know-all, tell-all impulse is risk. For Christian, there’s no difference between his inner self and the face he presents to the world—in fact, he strives to make them match. But it’s not so easy for many other people (say, if you’re gay, or trans, or depressed, or just private) who might feel the need to act differently around people they don’t know or trust. Sometimes people face repercussions for being too honest about themselves. In any case, this isn’t an issue to be bandied about casually, so it feels like an oversight when Christian argues that all conversation partners should be as open as he is.
But it sort of makes sense with Christian’s general life philosophy. He pinpoints curiosity as life’s highest ethical calling, and creativity in the face of routine as the “great problem of living”: “How do you still feel creative when you’re creating more and more of the same thing?” (96). (He concludes, in the next sentence, that you should just create more and more different things.) But creativity isn’t an end. It’s a means. A focus on “creativity” in the abstract casts life as just one big art project, where the emphasis is on what’s new and interesting but not necessarily lasting or good. So his “great problem of living” strikes me as sort of selfish—it focuses on a single individual’s life as a craft, where the great concern is how you can keep doing your own thing but better and better, without any attention to how the fruits of your creativity affect other people. If empathy were really something he wanted to consider, why didn’t he focus equally on listening or understanding, the quieter but equally crucial half of communication?
Christian also pinpoints emotion as a distinctively human quality. He spends a chapter lamenting the emphasis society puts on rational, analytical thought—he decries rational actor theory in economics for being totally unrealistic (which it is), and criticizes the way we prize left-brained skills more highly than right-brained ones. Since computers are the masters of logic (they’re practically built out of it), logic ceases to be humans’ sole domain. So we should appreciate the associative, emotional ways we think, he says, the ways computers struggle to emulate. What’s unique about human brains is that they’re a combination of analytical reasoning and animal reaction: “computer tacked onto creature,” “the monkey and robot holding hands,” or “the estuary of desire and reason in a system aware enough to apprehend its own limits, and to push at them” (yes, this is what the whole book is like). This seems to me a sensible enough description of what’s going on.
But Christian doesn’t really talk about the uglier side of human emotion. He tells us about a user who gets ensnared in an hour-and-a-half-long conversation with a bot, never realizing he was talking to a computer. How? The computer responded only with ad hominems. Computers can’t stay on topic or deal with context, says Christian, so if we steer our arguments toward the disagreement itself instead of getting sidetracked, we’ll be much better off. “Better living through science,” he tells us, and that’s that. But the fact that the computer instigated the “argument” doesn’t tell us anything about why we argue in the first place. We do it because of messy human emotion. Words hurt. We react the way we do because it’s painful when someone is yelling at us, especially when it’s someone we love. Our brains, naturally, go to protect themselves. Yes, it’s an automatic reaction, and yeah, it’s not a good way to deal with problems. Christian’s suggestion to take a step back and turn the conversation to a more productive end is a good one. But it’s a good idea not because a computer can’t do it, but because it’s attentive to the needs of both parties. Christian implies that the automatic, protective reaction that perpetuates arguments is somehow “less human.” But it is human, arguably just as human as the desire to create or express oneself.
And that’s what I think Christian’s main problem is—he puts most of our foibles down to miscommunication and lack of thought. But even with perfect, thoughtful communication, people will still have the capacity to hurt each other, just by the very nature of being human. People’s desires are always operating at cross-purposes—much like a bad breakup, people sometimes just want different things, and that can hurt. And a normal part of being human is experiencing our share of ugly emotions. But expressing those desires, no matter how clearly, isn’t always the answer. Christian just glosses over these things; he ignores the myriad ways people harm each other, unmediated by anything else. Self-expression is great, but he seems to think that people will just express things that are fascinating, or good-hearted, or worthy. And that’s just not true.
What he seems to call most “human” are the things we do that are “good,” while everything bad (the automatic, the hurtful) he puts down as bot-like behavior, the things a mere computer could do. But I think what’s more truthful, and harder to accept, is that being human also includes the stuff that hurts people. Let’s not be sentimental about what being human is—a shitty person is just as human as Mother Teresa. And to say that harmful actions are caused by mere thoughtlessness trivializes them, and diverts attention away from the steps we can take to prevent them. The key distinction, to me, isn’t knowing what’s “more human” or “less human,” but sorting through our various human reactions and picking the best one. All reactions are human, by virtue of the very fact that we have them, but some might be morally wrong, or hurtful, or just plain unproductive. Navigating through these choices is the hard part.