A.I. Is Coming for Culture
I often wake up before dawn, ahead of my wife and kids, so that I can enjoy a little solitary time. I creep downstairs to the silent kitchen, drink a glass of water, and put in my AirPods. Then I choose some music, set up the coffee maker, and sit and listen while the coffee brews.
It’s in this liminal state that my encounter with the algorithm begins. Groggily, I’ll scroll through some dad content on Reddit, or watch photography videos on YouTube, or check Apple News. From the kitchen island, my laptop beckons me to work, and I want to accept its invitation—but, if I’m not careful, I might watch every available clip of a movie I haven’t seen, or start an episode of “The Rookie,” an ABC police procedural about a middle-aged father who reinvents himself by joining the L.A.P.D. (I discovered the show on TikTok, probably because I’m demographically similar to its protagonist.) In the worst-case scenario, my kids wake up while I’m still scrolling, and I’ve squandered the hour I gave up sleep to secure.
If this sort of morning sounds familiar, it’s because, a couple of decades into the smartphone era, life’s rhythms and the algorithm’s have merged. We listen to podcasts while getting dressed and watch Netflix before bed. In between, there’s Bluesky on the bus, Spotify at the gym, Instagram at lunch, YouTube before dinner, X for toothbrushing, Pinterest for the insomniac hours. It’s a strange way to live. Algorithms are old—around 300 B.C., Euclid invented one for finding the greatest common divisor of two integers. They are, essentially, mathematical procedures for solving problems. We use them to coördinate physical things (like elevators) and bureaucratic things (like medical residencies). Did it make sense to treat unclaimed time as a problem? We’ve solved it algorithmically, and now have none.
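(Euclid’s procedure, for the curious, survives essentially unchanged in modern code. This is an illustrative sketch, not anything from the essay; the function name is mine.)

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: keep replacing the pair (a, b) with
    (b, a mod b) until the remainder is zero; the survivor is
    the greatest common divisor."""
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # → 6
```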
A feeling of overstimulation is one consequence of algorithmic life. Another is a certain jumbledness in our ideas. Why do we care about what we care about? Walking into your basement in the dark, you might stumble into a cobweb; one strain of philosophical thought holds that this is how caring works. We become ensnared by the possibilities that happen to present themselves as we live—we get caught in webs of care. You run into your elderly neighbor in the morning and, that night, wonder how he’s doing. Your middle schooler likes Chappell Roan, and soon you do, too. A single encounter with “The Fountainhead” at thirteen might make you a lifelong libertarian. Algorithmic culture taps into the casual randomness with which we apportion our care; it takes advantage of the fact that what we bump into today might obsess us tomorrow. Its webs, meanwhile, are woven by machines that are owned by corporations.
I’ve long felt caught in my personal web, and have struggled to get out. Then, one morning last year, I started chatting with the A.I. system Claude in my predawn kitchen, in an experimental spirit. “I’m finally starting to make progress on the article I’m writing,” I typed on my phone while the coffee brewed.
“Good morning!” the system replied. “That’s wonderful to hear—there’s nothing quite like that feeling when a piece you’ve been working on starts to come together. What’s the article about, and what helped you break through to this progress?”
I didn’t answer. I had no desire, at that hour, or any hour, to let an A.I. act as a writing coach. But, having used my phone, I could now put it down. I sipped my coffee and began marking up an old draft.
I kept chatting with Claude and ChatGPT in the mornings—not about my writing but about subjects that interested me. (Why are tariffs bad? What’s up with crime on the subway? Why is dark matter dark?) Instead of checking Apple News, I started asking Perplexity—an A.I.-based system for searching the web—“What’s going on in the world today?” In response, it reliably conjured a short news summary that was informative and unsolicitous, not unlike the section in The Economist headed “The World in Brief.” Sometimes I asked Perplexity follow-up questions, but more often I wasn’t tempted to read further. I picked up a book. It turned out that A.I. could be boring—a quality in technology that I’d missed.
As it happened, around this time, the algorithmic internet—the world of Reddit, YouTube, X, and the like—had started losing its magnetism. In 2018, in New York magazine, the journalist Max Read asked, “How much of the internet is fake?” He noted that a significant proportion of online traffic came from “bots masquerading as humans.” But now “A.I. slop” appeared to be taking over. Whole websites seemed to be written by A.I.; models were repetitively beautiful, their earrings oddly positioned; anecdotes posted to online forums, and the comments below them, had a chatbot cadence. One study found that more than half of the text on the web had been modified by A.I., and an increasing number of “influencers” looked to be entirely A.I.-generated. Alert users were embracing “dead internet theory,” a once conspiratorial mind-set holding that the online world had become automated.
In the 1950 book “The Human Use of Human Beings,” the mathematician Norbert Wiener—the inventor of cybernetics, the study of how machines, bodies, and automated systems control themselves—argued that modern societies were run by means of messages. As these societies grew larger and more complex, he wrote, a greater amount of their affairs would depend upon “messages between man and machines, between machines and man, and between machine and machine.” Artificially intelligent machines can send and respond to messages much faster than we can, and in far greater volume—that’s one source of concern. But another is that, as they communicate in ways that are literal, or strange, or narrow-minded, or just plain wrong, we will incorporate their responses into our lives unthinkingly. Partly for this reason, Wiener later wrote, “the world of the future will be an ever more demanding struggle against the limitations of our intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.”
The messages around us are changing, even writing themselves. From a certain angle, they seem to be silencing some of the algorithmically inflected human voices that have sought to influence and control us for the past couple of decades. In my kitchen, I enjoyed the quiet—and was unnerved by it. What will these new voices tell us? And how much space will be left in which we can speak?
Recently, I strained my back putting up a giant twin-peaked back-yard tent, for my son Peter’s seventh-birthday party; as a result, I’ve been spending more time on the spin bike than in the weight room. One morning, after dropping Peter off at camp, I pedalled a virtual bike path around the shores of a Swiss lake while listening to Evan Ratliff’s podcast “Shell Game,” in which he uses an A.I. model to impersonate him on the phone. Even as our addiction to podcasts reflects our need to be consuming media at all times, they are islands of tranquility within the algorithmic ecosystem. I often listen to them while tidying. For short stints of effort, I rely on “Song Exploder,” “LensWork,” and “Happier with Gretchen Rubin”; when I have more to do, I listen to “Radiolab,” or “The Ezra Klein Show,” or Tyler Cowen’s “Conversations with Tyler.” I like the ideas, but also the company. Washing dishes is more fun with Gretchen and her screenwriter sister, Elizabeth, riding along.
Podcasts thrive on emotional authenticity: a voice in your ear, three friends in a room. There have been a few experiments in fully automated podcasting—for a while, Perplexity published “Discover Daily,” which offered A.I.-generated “dives into tech, science, and culture”—but they’ve tended to be charmless and lacking in intellectual heft. “I take the most pride in finding and generating ideas,” Latif Nasser, a co-host of “Radiolab,” told me. A.I. is verboten in the “Radiolab” offices—using it would be “like crossing a picket line,” Nasser said—but he “will ask A.I., just out of curiosity, like, ‘O.K., pitch me five episodes.’ I’ll see what comes out, and the pitches are garbage.”
What if you furnish A.I. with your own good ideas, though? Perhaps they could be made real, through automated production. Last fall, I added a new podcast, “The Deep Dive,” to my rotation; I generated the episodes myself, using a Google system called NotebookLM. To create an episode, you upload documents into an online repository (a “notebook”) and click a button. Soon, a male-and-female podcasting duo is ready to discuss whatever you’ve uploaded, in convincing podcast voice. NotebookLM is meant to be a research tool, so, on my first try, I uploaded some scientific papers. The hosts’ artificial fascination wasn’t quite capable of eliciting my own. I had more success when I gave the A.I. a few chapters of a memoir I’m writing; it was fun to listen to the hosts’ “insights,” and initially gratifying to hear them respond positively. But I really hit the sweet spot when I tried creating podcasts based on articles I had written a long time ago, and to some extent forgotten.
“That’s a huge question—it cuts right to the core,” one of the hosts said, discussing an essay I’d published several years before.
“It’s ambitious,” the other host chimed in.
At the kitchen sink, I grinned; I kept listening while washing up the breakfast dishes. At first, I was merely entertained—it was neat to hear the aesthetics of a form made for mass consumption deployed for an audience of one. But it was actually useful to be reminded of old thoughts, some of which I’d now probably revise.
If A.I. continues to speed or automate creative work, the total volume of cultural “stuff”—podcasts, blog posts, videos, books, songs, articles, animations, films, shows, plays, polemics, online personae, and so on—will increase. But, because A.I. will have peculiar strengths and shortcomings, more won’t necessarily mean more of the same. New forms, or new uses for existing forms, will pull us in directions we don’t anticipate. At home, Nasser told me, he’d found that ChatGPT could quickly draft an engaging short story about his young son’s favorite element, boron, written in the style of Roald Dahl’s “The BFG.” The periodic table x “The BFG” isn’t a collab anyone’s been asking for, but, once we have it, we might find that we want it.
It’s not a real collaboration, of course. When two people collaborate, we hope for a spark as their individualities collide. A.I. has no individuality—and, because its fundamental skill is the detection of patterns, its “collaborations” tend to perpetuate the formulaic aspects of what’s combined. A further challenge is that A.I. lacks artistic agency; it must be told what’s interesting. All this suggests that A.I. culture could submerge human originality in a sea of unmotivated, formulaic art.
And yet automation might also allow for the expression of new visions. “I have a background in independent filmmaking,” Mind Wank, one of the pseudonymous creators of “AI OR DIE,” which bills itself as “the First 100% AI Sketch Comedy Show,” told me. “It was something I did for a long time. Then I stopped.” When A.I. video tools such as Runway appeared, it became possible for him to take unproduced or unproducible ideas and develop them. He and two partners—the team members are based in Canada, the U.S., and Poland—now create darkly surrealist comedy videos seen by hundreds of thousands of people. (In a typical sequence, a Twitch streamer named Lil Cankles plays a game called Laundromat. “The lint is overflowing!” a computer voice screams, while grotesque quantities of it spew from a dryer vent.) “The reason we have any edge we might have is that we speak the language of film,” Wank said of his collaborators, who also came up through traditional channels. “We understand lighting, lenses, film stock.”
Traditional filmmaking, as he sees it, is linear: “You have an idea, then you turn it into a treatment, then you write a script, then you get people and money on board. Then you can finally move from preproduction into production—that’s a whole pain in the ass—and then, nine months later, you try to resurrect whatever scraps of your vision are there in the editing bay.” By contrast, A.I. allows for infinite revision at any point. For a couple of hundred dollars in monthly fees, he said, A.I. tools had unlocked “the sort of creative life I only dreamed of when I was younger. You’re so constrained in the real world, and now you can just create whole new worlds.” The technology put him in mind of “the auteur culture of the sixties and seventies.”
The full team has never met in person, but they talk constantly, living in a shared creative bubble. “When you paint, you just start smelling of paint,” Bengt Tibert, the Polish collaborator, told me. “When I started prompting, I just started dreaming of prompts. It’s like a new life.” Before falling asleep, Tibert explained, he’d think of a prompt (a set of instructions to give an A.I.), and when he woke up he’d write down what he’d dreamed.
Boey, the (mononymous) Canadian collaborator, concurred: “My head is halfway in the computer at all times.”
Today’s A.I. video tools reveal themselves in tiny details, producing a recognizable aesthetic. They also work best when creating short clips. But they’re rapidly improving. “I’m waiting for the tools to achieve enough consistency to let us create an entire feature-length film using stable characters,” Wank said. At that point, one could use them to make a completely ordinary drama or rom-com. “We all love filmmaking, love cinema,” he said. “We have movies we want to make, TV shows, advertisements.”
“There are so many ideas that you can now create, and so little time,” Tibert said. “It’s almost overwhelming.”
I was speaking to the “AI OR DIE” team from my son’s room, where I’d hidden myself from his little sister. When our conversation finished, I closed my laptop and looked around at his things. A mug he’d made at a kids’ pottery workshop held a rainbow of Sharpie Creative Markers; a coloring book meant for teens and grownups contained intricate pictures of animals. He’d completed a page showing a parrot, its feathers ruffled, on a branch in front of a chain-link fence, and applied an array of surprising colors to create a prismatic image. Strictly speaking, the result was derivative, fill-in-the-blank. But that wasn’t all it was. It’s against the backdrop of formula that originality emerges.
In the afternoon, I was meeting a few old friends for lunch. A quarter century ago, in college, we’d taken a creative-writing course together. Our professor, the celebrated gay novelist and memoirist Edmund White, had recently died, and one of us had organized a little memorial gathering at a Dallas BBQ in Chelsea, where, when we were in our twenties, he would sometimes join us for dinner.
During the walk over, I spoke in voice mode with ChatGPT, asking it about the history of the neighborhood. “As you walk along Eighth Avenue between Twenty-ninth and Twenty-seventh Streets, you’re in the heart of Chelsea, which was historically known for its vibrant manufacturing and garment industries,” it told me, using its “bright and inquisitive” persona, a British woman named Vale. It noted, too, that “Chelsea is home to many L.G.B.T.Q.+ bars, community centers, and events, making it a cornerstone of New York City’s L.G.B.T.Q.+ culture.”
“I’m actually from here,” I said. “I’m a New Yorker. So I really want, like, the most interesting stuff I wouldn’t know about.”
The A.I. tried to entertain me with some moderately diverting material about Tin Pan Alley—actually a little to the east of where I was—and the economics of sheet music at the turn of the twentieth century. I asked about an unusual-looking building on Seventh Avenue, and the bot called it “a testament to the area’s rich architectural and commercial heritage.”
“Avoid clichés and general statements like ‘The building contributes to the vibrancy of the neighborhood,’ ” I demanded. Perhaps, I suggested, we could focus on more “lurid” facts. Soon enough, we were discussing the murder of Nancy Spungen, who’d been stabbed to death in Room 100 of the Chelsea Hotel, in 1978. Her boyfriend, Sid Vicious, the former bassist of the Sex Pistols, was charged with the murder but died of a heroin overdose before the case could go to trial.
Outside the restaurant, I posed a wonky question about real-estate development—why aren’t the buildings in Chelsea taller?—and, when that conversation ran dry, asked for background on the “I❤️NY” logo, which I saw on someone’s T-shirt. The bot told me all about its designer, Milton Glaser, and about the atmosphere surrounding the logo’s creation in the nineteen-seventies (“Ford to City: Drop Dead”). “Would you like to see the original sketches?” it asked. I considered it until a friend waved from across the street.
From one thing to another, to another—in that movement, A.I. excels. In “Blade Runner 2049,” Ryan Gosling’s character comes home from work and chats with his A.I. girlfriend, Joi, played by Ana de Armas. It’s snowing outside, but Frank Sinatra’s “Summer Wind” is on the sound system. “Do you know this song was released in 1966 on Reprise Records?” she asks, blandly. “It was No. 1 on the charts.” She wears a flouncy dress from the Rat Pack era and frets about a recipe—and then, responding to her beau’s shifting mood, changes into a slinky, all-black outfit. Just a moment later, they’re on the roof of his building, sharing a romantic moment. A chat with an A.I. system involves the performance of a script that’s being written in real time. Even if the script is boring, the speed with which it can be revised may register as spontaneity or vitality. Something that can keep up with you as you move the conversation from music to murder to Milton Glaser—or from cooking to flirting to drama—might seem to have a mind.
What does this fluidity imply for culture in the age of A.I.? Works of art have particular shapes (three-minute pop songs, three-act plays) and particular moods and tones (comic, tragic, romantic, elegiac). But, when boundaries between forms, moods, and modalities are so readily transgressed, will they prove durable? “Right now, we talk about, Is A.I. good or bad for content creators?,” the Silicon Valley pioneer Jaron Lanier told me. (Lanier helped invent virtual reality and now works at Microsoft.) “But it’s possible that the very notion of ‘content’ will go away, and that content will be replaced with live synthesis that’s designed to have an effect on the recipient.” Today, there are A.I.-generated songs on Spotify, but at least the songs are credited to (fake) bands. “There could come a point where it’ll just be ‘music,’ ” Lanier said. In this future scenario, when you sign in to an A.I. version of Spotify, “the first thing you hear will be ‘Hey, babe, I’m your Spotify girlfriend. I made a playlist for you. It’s kind of sexy, so don’t listen to it around other people.’ ” This “playlist” would consist of songs that have never been heard before, and might never be heard again. They will have been created, in the moment, just for you, perhaps based on facts about you that the A.I. has observed.
In the longer term, Lanier thought, all sorts of cultural experiences—music, video, reading, gaming, conversation—might flow from a single “A.I. hub.” There would be no artists to pay, and the owners of the hubs would be able to exercise extraordinary influence over their audiences; for these reasons, even people who don’t want to experience culture this way could find the apps they use moving in an A.I.-enabled direction.
Culture is communal. We like being part of a community of appreciators. But “there’s an option here, if computation is cheap enough, for the creation of an illusion of society,” Lanier said. “You would be getting a tailored experience, but your perception would be that it’s shared with a bunch of other people—some of whom might be real biological people, some of whom might be fake.” (I imagined this would be like Joi introducing Gosling’s character to her friends.) To inhabit this “dissociated society cut off from real life,” he went on, “people would have to change. But people do change. We’ve already gotten people used to fake friendships and fake lovers. It’s simple: it’s based on things we want.” If people yearn for something strongly enough, some of them will be willing to accept an inferior substitute. “I don’t want this to occur, and I’m not predicting that it will occur,” Lanier said, grimly. “I think naming all this is a way of increasing the chances that it doesn’t happen.”
At the restaurant, my friends and I reminisced. Ed, our professor, had taught in a relaxed, even gossipy style, discussing the people he knew along with the books he loved. His own novels could be conversational and diary-like: they often told stories set in the neighborhood, drawn, it seemed, from his life there. He must have shared advice about narrative craft and other writerly matters with us, but it had faded over the decades; now I mainly remembered that we’d talked about “A Single Man” and “Mrs. Dalloway,” and how he’d once complimented me on a shirt I’d worn, and described, at length, what made some people boring at parties and others interesting.
The seminars and dinners had made us feel that the world of culture—real culture, New York culture, artists’ culture—was within our reach. But what was that world of culture? It wasn’t a list of art works to experience, although that was part of it. It was fundamentally a story unfolding between teachers and students, muses and poets, standard-bearers and rule-breakers, all wrestling with cultural forms that were themselves evolving, enduring, subsiding, fracturing, combining, and being rebuilt. You had to learn this story, then write your way in.
Late in the lunch, our conversation shifted to romance. One of us, with the help of a “Texas-size” margarita, got lost in a racy story about the first night she’d spent with her future husband.
“I don’t know why I’m telling you this,” she said.
“I think Ed would say that you should write about it!” someone suggested.
Compared with the specificity of real art made by actual individuals with authentic lives, I thought, culture generated ad infinitum, in a formless flow, devoid of context or personality, would be meaningless. But then I remembered my old predawn routine—a song played halfway through, a random movie seen in parts. In “The Practice of Everyday Life,” from 1980, the scholar Michel de Certeau considered how people window-shopped, skimmed articles without fully reading them, and found in the “incarceration” of a train commute opportunities for observation and reflection. In his view, they were repurposing their world to make space for themselves within a system that wanted to shape their feelings, thoughts, and ambitions. By this logic, when you make a mix of songs and give it to your crush, you’re not saying that your infatuation is defined by those songs. Instead, you’re showing that your feelings are bigger than what any one song can convey. You’re using the songs to create a larger work of your own.
What would it mean to give your crush a mix of A.I.-crafted music? Perhaps we’ll be able to make meaning for ourselves out of automated art. A catchy tune written by no one is still catchy. Pop stars are already in some sense products, but the communities of fans that spring up around them are very real. And yet culture isn’t only consumed—it’s created by people living their lives. We admire the act of creation and the quality of naturalness. Movie audiences, having thrilled to computer graphics, have also come to appreciate “practical” special effects. Artificial flavors make natural ones more desirable. Exploring computer-generated culture might lead us to value even more the culture that only we can generate.
In 2023, I profiled Geoffrey Hinton, who is sometimes described as “the godfather of A.I.,” for this magazine. I’m often sad when an experience like that draws to a close. A few months later, feeling nostalgic and curious, I fed the profile into ChatGPT. “Write a piece of fan fiction about the characters in it,” I said. The system returned a book report about the ideas, people, and writerly techniques it had discerned, along with a narrative. “Geoffrey Hinton stood on the wooden dock at midnight,” it began:
Overhead, the late-summer sky was moonless and staggeringly clear. The Milky Way sprawled above like a luminous brain—countless stars clustered in neural patterns, a cosmos of flickering connections. Geoff tipped his head back to gaze at them. . . . Behind him, the cabin was dark. The reporter, Josh, had left earlier that day, waving from the departing boat. The four-day interview had stirred up memories and ideas in Geoff’s mind like sediment in a pond. Now the waters were still again, but something had changed.
As it wound up, the narrative turned to artificial intelligence. Hinton “opened his eyes and spoke into the night”:
“I’m not afraid of you,” he said quietly. “I’m afraid of what we might do with you, or fail to do.” His words faded into the sound of the water. “I’m afraid of our own flaws, mirrored in you.”
A decent start, I thought. Yet, in the next chapter, the A.I. took the story in a thriller-ish direction—its fictional Hinton contended with a dangerous model escaping control—and failed to generate any actual thrills. As a writer myself, I was relieved. A thriller is supposed to be spine-tingling, pulse-pounding, bone-chilling; maybe, I thought, a system lacking a body or self-preserving instincts couldn’t channel those qualities. Or perhaps the A.I. had struggled with the logical planning involved in writing good suspense. There are so many ways in which telling a story can be challenging. “I’m a snob about care,” Carly Mensch, a television writer, told me, laughing. (She co-created the lady-wrestler dramedy “GLOW,” and is married to Nasser.) “I think that to build a story that makes you care is really difficult. Whereas I think suspense must be easy.”
Stories vary wildly, but one thing they have in common is their pursuit of contradictory goals. They must seem natural, but also contain provocative and illuminating ideas; their turns must be surprising, but combine to make a pleasing and comprehensible shape; they must have clear stakes, and yet be capacious enough to be interesting in different ways to different people. Their creation is rarely straightforward. “So many good things come from boredom, from making mistakes, from accidents,” Mensch said. “When you sit in a writers’ room, it’s really messy, and people are just pitching a lot of things. Sometimes the wrong pitch leads to the right pitch. You learn something when you can say, ‘That’s not the second chapter that I want.’ ” (Hearing this, I wondered if an A.I.’s bad story ideas might still be useful, as a form of automated spitballing.) At the highest level, a good story has to reflect its audience while modifying that reflection. One conception of storytelling is pastoral. “Every tribe has storytellers, and at night, around the campfire, we stand up and tell the stories of the tribe to the tribe,” Mensch said, paraphrasing a professor with whom she’d studied. Another is theatrical: “You walk into a dark room, and you’re surprised.”
Will artificial intelligence be able to craft stories that satisfy us on all these levels? Probably not. But stories don’t always need to be that satisfying. In William Gibson’s novel “Count Zero,” from 1986, a woman comes home, hooks herself up to a neural interface, and proceeds to “soap her brains out good for six solid hours” with a never-ending soap opera called “People of Importance.” Its “labyrinthine complexities” don’t need to add up to anything; continuation is the point, rather than resolution. Our own personal stories, similarly, don’t necessarily achieve storytelling excellence—and yet we find them boundlessly fascinating. They matter to us because we’re in them. If we each had a dedicated writer—our own court scribe—then perhaps we could follow the trajectory of the selfie further into the territory of self-based narrative. (If you open up Meta’s A.I. app, one of the first options you might see reads “Let’s talk about my day.”)
At Peter’s party, I tried on a pair of Ray-Ban Meta sunglasses—ordinary-looking glasses with a built-in A.I. interface. I glanced around our back yard, eventually settling on a shaded table where our next-door neighbors—a young couple who’d recently moved in—were sitting with my father-in-law. Peter was eating ice-cream cake; kids chased one another across the lawn brandishing foam swords and squirt guns.
“Meta, what do you see?” I asked.
“It’s a group of people sitting at a table under a tent on a beautiful day, enjoying a moment of togetherness at what’s probably a family party,” the glasses said, through tiny speakers near my ears. “It looks like a great time!” The effect was of someone sharing my head, seeing what I saw, hearing what I heard.
I reached up and touched a temple; the glasses took a picture. The Ray-Bans belonged to Iris, the mother of one of my son’s friends. “I like to ask it questions,” she said. “Like, ‘What’s new with the Diddy trial?’ ” In the car, if her family is listening to music she doesn’t like, she can queue up her own, privately. She pulled out her phone and swiped through some photos she’d taken at the party. They’d been wirelessly synched from the glasses, and they captured our back yard from her point of view. The P.O.V. story—the story of you—is one that A.I. is already well equipped to tell, or help you tell, perhaps therapeutically, perhaps retrospectively. Jack Clark, the head of policy at the A.I. company Anthropic, has described feeding diary entries into its system, Claude; the A.I. convinced him that he hadn’t fully processed the “metaphysical shock” of becoming a father. Through this sort of process, A.I. might tell us our stories, and revise them.
A few years ago, after a movie that Daniel Kwan co-directed, “Everything Everywhere All at Once,” won seven Oscars, the filmmaker got curious about A.I., and used his newfound cachet to land meetings with people who work in that field. Kwan was struck not just by the degree to which A.I. promised to disrupt the film industry but also by how it might affect the movement of stories through society. It seemed to him that films seeking to tell compelling stories to broad audiences already struggled to compete with algorithmic life. “Our communication structure is completely broken,” he said. “Our attention is fractured.” A.I. could fracture it further.
“What’s the most important thing humanity has engineered?” Kwan asked me, over coffee in a West Village restaurant. “Arguably, it wasn’t the internet, or agriculture. It was the creation of the systemic and institutional trust that was required for us to build societies. And a lot of that engineering was actually collective stories—God, government—that helped us see ourselves as one family, one community. With our current technology, it’s like we’re playing Jenga.” He mimed a tower of blocks at the table. “We’ve been pulling blocks from down here, from the foundation of collective understanding and belief in a shared world, and using them to build farther up on the tower. And, if we keep doing that, the whole thing will collapse, and we’ll go back to only being able to trust the hundred and fifty people in our tribes.”
In “Everything Everywhere All at Once,” a first-generation immigrant to America named Evelyn, played by Michelle Yeoh, discovers that we’re all living in a vast multiverse, within which every imaginable permutation of reality and ourselves exists. She’s contacted by an alternate version of her husband who tells her that another version of their daughter, Joy, has been driven mad by the apparent meaninglessness of any single version of existence. This alternate Joy has embraced nihilism, and is determined to destroy the whole multiverse. For a while, Evelyn goes crazy, too; then she has an existentialist epiphany, and convinces the evil Joy that the only way forward is to choose to embrace the particular people you happen to live with in your particular universe.
When I first saw the film, I experienced it as an immigrant story. (My mother, like Yeoh, grew up in Malaysia, and she was haunted by what-ifs: What if she’d stayed there? Moved to London? Settled on the West Coast rather than the East?) As it continued, I saw its critique of an algorithmic society in which we’re bombarded by images of people who are just like us, only better. Now it seemed to me to anticipate a world in which artificial intelligence could serve as an individual Scheherazade, telling us endlessly malleable stories about ourselves and each other, pulling us into alternate realities and away from the one we might share.
“If you look at all the crises coming down on us—climate change, polarization, the collapse of consensus truth, income inequality, whatever—if I could choose one to focus on, it would be the coördination, communication, trust problem,” Kwan said. “Because, if we don’t fix that, we can’t fix anything else. And fixing that requires us to fix our stories.”
Kwan had come to think that A.I. tools capable of a certain level of photorealism needed to be regulated. (“This is tech that can ruin someone’s life,” he said.) The use of A.I. in his industry ought to be curtailed, he argued, until fake images could be reliably identified. I told him about the joyful team behind “AI OR DIE.” “I feel so conflicted when I hear those stories,” he said. “I mean, that’s beautiful. We should want everyone who wants to say something to have a chance to do it. I understand that approach, and, in another life, I’d be doing it.” He looked pained. “You know, I was that kid. Vimeo came out just before I started college. I graduated just when YouTube started being YouTube. But by using A.I. right now, and supporting these companies before they do the responsible thing, we’re just enabling them to destroy consensus truth, and to make it basically impossible for us to know what is real.”
Artificial intelligence, at least, seems to be a big story we’re all following. The restaurant where we’d met was near N.Y.U., and outside, in the sun, students were strolling by. The practical questions are familiar—what will happen to education, to jobs?—and answers to them will come with time. But the story of A.I. is not only practical; it’s also moral and spiritual. It’s the story of John Henry and the steam drill, or even of Prometheus, on whom Zeus levied a gruesome punishment for stealing fire, and it is already forcing us to think about what we value, about what really makes us care. I ambled toward Sixth Avenue, passing a stationery store on the south side of the street. The people inside browsed through notebooks, recapturing an analog age. In our different ways, we’re all deciding how we want the story to turn out.
“In the middle decades of the twentieth century, American intellectuals of manifold types, from disparate and even hostile groups, converged on a perception of danger,” the cultural critic Mark Greif has written. This was “the age of the crisis of man”—a time when pretty much everyone feared some combination of technological acceleration, social alienation, and spiritual estrangement. “New conditions seemed destined to snap the long tradition of humanism,” Greif observed. People published books with titles like “The Nature and Destiny of Man.” They worried “that human nature was being changed.”
Are we in such a crisis now? Recently, in a series of sold-out concerts at the Sphere, in Las Vegas, the electronic musician Anyma wowed audiences by creating the illusion of a vast robot looming overhead, peering down, and then breaking through the building’s wall. “Sentience,” a voice intoned. “Consciousness.” And yet the show was just that—a traditional concert, performed by human musicians, for which thousands gathered in person. Even as culture changes, the change is usually additive rather than zero-sum. Culture preserves itself through time, as if secreting its own amber. Today, Ariana Grande stars in high-tech films based on “The Wizard of Oz,” and TikTok and Fortnite create dance crazes. People still read Jane Austen and Agatha Christie, and movies still come in trilogies, echoing the triple-decker novel, which evolved in the nineteenth century partly so that subscription libraries could more easily lend out long books. At the video arcade my son likes, teen-agers play Pong, Pac-Man, and Space Invaders, rendered in fist-size pixels on wall-size screens. Artificial intelligence, meanwhile, is in many ways a conservative force. It’s trained on, and to some extent trapped within, data from the past. It makes old ideas newly available.
The Whitney Museum, in the meatpacking district, occupies a zone of the city that’s been entirely repurposed. “Where there were once carcasses hanging in cold rooms, there are now racks of designer dresses,” ChatGPT noted. With two small kids at home, my wife and I haven’t been to a museum together in years. But I loaded her doctoral dissertation, on the use of details in the novels of Vladimir Nabokov, into ChatGPT, along with an essay she’d published about the frescoes of Fra Angelico and the question of what it means for secular people to enjoy sacred art. That way, I might be able to visit the museum with her ideas, if not with her. “That sounds like a beautiful and thoughtful plan, Josh,” the A.I. said. “We can treat your visit to the museum as a kind of conversation with her, through me.”
“Chat, what do you think my wife would make of this painting?” I asked, speaking quietly so as not to disturb anyone. I snapped a photo of Amy Sherald’s portrait of Michelle Obama. Geometric patterns were printed on her flowing white dress.
“Oh, this piece is stunning!” the A.I. said, in my AirPods. “I can imagine her noticing those little details and how they speak to identity and expression.”
“No, no,” I muttered. “I want you to really think about how my wife would respond, specifically.”
“Absolutely!” it said, with its usual blithe confidence. “I think she’d be fascinated by the details.” It nattered on until I tapped the X on my phone’s screen, closing the chat.
The gallery was crowded; Sherald’s show, “American Sublime,” was a sensation. “Trans Forming Liberty,” her portrait of the model and performance artist Arewà Basit, a Black trans woman, holding up a torch-like vase of flowers, had attracted a small group. A stooped older man peered through thick glasses, and a couple stood with their tween daughter. A young man handsome enough to be a model himself lounged while standing, wearing black pants, a black tank top, and a Leica camera on a black silk strap.
I took the elevator upstairs, to look at some of Edward Hopper’s paintings. “Do you recognize this?” I asked the A.I., sending it a photo of Hopper’s “Second Story Sunlight.” In it, two women sit on the sunlit balcony of a house; the older one reads, while the younger one rests atop the railing in a bikini. Their contemplative poses are echoed by the house’s twin triangular gables. Behind them, the trees of a mysterious wood create shadows, suggesting what’s unknown.
“Yes, I do know that one,” the A.I. said. “Edward Hopper’s ‘Second Story Sunlight’ is such an evocative piece. . . . It’s got that classic Hopper feel of quiet, introspective moments and a hint of isolation, even in a sunlit scene.”
Annoyed by the banality of these remarks, I took out my AirPods, slipped them into their case, and snapped it closed. What a waste of time, I thought. I looked around in some embarrassment, wondering if anyone had caught me trying to talk about the painting with an A.I.: what a loser, desecrating the museum with cheap technological experiments! But the few people nearby were all looking at their phones. Seeing them bent toward their screens, I realized that ChatGPT’s voice mode might not be the best way to approach this particular task. I took my time, typing out a carefully worded prompt. Consider that Hopper’s painting was the product of a secular age, I wrote, and yet contained spiritual elements. Elaborate on the fact that one of its subjects is older, one younger. Filter your thoughts through the essay I uploaded earlier, on sacred art in a secular world.
On the screen, a little essay appeared, with sections, subheads, and emojis. I stood and read. The painting was “sacred art in a secular key,” it said. Hopper conjured a sacred feeling “not through content, but through the stillness of composition. . . . Through the way light operates like grace—unearned, illuminating.”
Not bad, I thought. The ideas evoked my wife, in the same loose way that a shadow could evoke a person. Scrolling up, I saw the little digital picture I’d snapped of Hopper’s painting; it struck me as ridiculous. This was a strange way to visit a museum. The actual work of art, with its totemic aura, its visible brushstrokes, perhaps with the fingerprints of the artist embedded somewhere in its paint, was right in front of me.
I wandered back downstairs to the Sherald exhibition. At the far end of the gallery, a small hallway led to a space where rows of colorful modernist chairs faced a wall of windows. It was itself an art installation, called “Long Line,” by Mary Heilmann. Almost every chair was occupied. Outside, sunlight prismed in ripples on the Hudson. A girl in torn black jeans and purple lipstick sketched the view in a notebook. A dapper man in Italian tailoring seemed to be asleep. The whole scene might have been a painting of leisure, like Georges Seurat’s “A Sunday on La Grande Jatte.” I asked myself, What is culture? It was those paintings, but it was also this. It was us.
At home, I called up “Second Story Sunlight” on my iPad and showed it to my wife. “What do you think of this painting?” I asked.
She studied it for a long time. “My first feeling is that it’s sort of disturbing,” she said. “But then my dominant feeling is that there’s something puzzling about it. The space is funny. The way their bodies fall—it looks like they’re in two different houses, but they’re not. The woman in the bathing suit seems a little older than a granddaughter, but younger than a daughter—that’s a little funny. I guess this is morning light? But with the greens and the blues it seems a little too cool to have a bathing suit on, and, if she has a bathing suit on, then it seems like it’s too warm for the grandmother to be wearing her heavy dark clothes. And the young woman’s pose is confusing. It looks like she’s posing for something. It’s a very strong pose. But she’s not modelling. There’s something in how she’s gripping the railing—she seems performative. But performing for what?” She hesitated. “ ‘Disturbing’ is kind of an overstatement. But it’s not pleasant. Although there’s something about the light that’s pleasant. The women are beautiful.” She paused again. “I don’t know. It’s interesting. What did you think of it?”
Real people. They never say quite what you expect. ♦