
How Big Tech Is Ruining the Dream of AI

Artificial intelligence once promised to make us healthier and wealthier. Now, we’re faced with either gimmicky chatbots or total annihilation.

Imagine a future where you’re being interviewed by antagonistic police officers in a foreign country, but an artificially intelligent “agent” on your mobile device is able to keep you from saying anything locals would consider abnormal or suspicious, and prevent your arrest. That same day, your agent attends a post-op appointment with your mother’s neurosurgeon, and helps her ask substantive questions about the risks of each treatment path. The next day, your agent notices subtle signs of developmental trouble in your infant child, and advises that you seek out an aggressive regimen of therapy years earlier than you might have otherwise.

If situations like these—in which artificial intelligence materially improves a normal person’s life in tangible ways—still feel well out of reach, that’s because the tech industry seems to want it that way. And if the whole topic of “AI” makes you cringe, that’s just as well for the companies poised to profit from it, because a public with high expectations about their lives being improved by the technology could become a liability.

It may be all the better for the tech titans if the public expects everything to actively get worse as AI spreads. Just listen to the industry’s leaders’ own words. “I try to be up-front,” Sam Altman, CEO of OpenAI, the firm behind the phenomenon ChatGPT, told The New York Times in 2019. “Am I doing something good? Or really bad?” Demis Hassabis, a pioneer who co-founded Google DeepMind, one of the tech giant’s AI projects, has attempted a similar routine, telling Time, “When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful.” He added that many people working with it “don’t realize they’re holding dangerous material.”

In theory, anyone with a functioning imagination should find it at least a little exhilarating that the latest supposed technological revolution is something called “artificial intelligence,” despite its current status as a business buzzword. After all, even the most enervating, corporatized definition of the term holds immense promise. On that front, McKinsey, that lodestar of anodyne corporatespeak, is actually somewhat helpful. According to its 2020 white paper titled “An executive’s guide to AI,” AI is not an individual technology, but an attribute of many: “the ability of a machine to perform cognitive functions we associate with human minds, such as perceiving, reasoning, learning, and problem solving.”

The word “cognitive” is still a little presumptuous, but it’s a useful definition. You can apply it to what Wired called the “uncanny, addictive AI” of the social-media app TikTok’s recommendation algorithm. Or to the creepily human—if factually compromised—text outputs of the so-called chatbot ChatGPT, which takes text requests (or “prompts”) and offers back a synthesis of whatever it has absorbed from its internet-scraped training data (true or otherwise), shaped by the model’s fine-tuning at human hands.

But it’s long been a bit of an afterthought in AI literature—and attendant mythology—that these machines might have some kind of practical purpose that average people can benefit from.

For instance, the most influential mass market AI book is probably 1999’s The Age of Spiritual Machines: When Computers Exceed Human Intelligence, by computer scientist and futurist Ray Kurzweil. It’s mostly about the creation of a superior kind of artificial consciousness, and humanity’s eventual assimilation into this enhanced state of being. However, along the way, Kurzweil predicts that machines will take care of “the basic necessities of food, shelter, and security,” and that there will soon be “almost no human employment in production, agriculture, and transportation.” To Kurzweil, these revelations seem to be a mere detour from humans’ cognitive ascent to a kind of digital nirvana, rather than, you know, the point of the whole thing.

The problem, as contemporary AI proliferates in the real world and our collective imagination, is that to most people outside of Silicon Valley, digital nirvana isn’t at all tempting. We have more practical needs, like jobs and health care. But also: We already live in a world dominated by tech companies—and we hate it.

Despite the public’s fascination with chatbots, image-generation applications, and other eye-popping AI tools and toys that have come along in the past year, many of us are anxious about the future they’ll create. Americans in particular are hyperaware of what AI may do to employment. We’ve already seen examples of these systems being horrifically racist: A Facebook AI system once mistook videos of Black men for “videos about Primates,” and several Black people have been wrongfully arrested after facial recognition software misidentified them as suspects in crimes. We know social-media and other tech companies and the bosses who own them can’t be trusted; the damning film The Social Network won Oscars all the way back in 2011. And yet, in response to our worries about the Next Big Thing in tech, we’re fed either Kurzweil-style sermons about the ascendancy of our species, or fresh prophecies of doom.

But it’s not too much to ask that the people who stand to gain enormously from the proliferation of these technologies—the rich people who own or control them—be both enthusiastic about them and able to explain, in convincing and granular terms, why the rest of the people of Earth should be, too. By declining to paint normal people a clear picture of a better world, or to promise us that they’ll get us there, tech impresarios are only protecting themselves. They seem to be planning their defense at their eventual tribunals, and that’s far from good enough.

The ChatGPT Smoke Screen

AI didn’t come along in the past year, but the hysteria around it did reach comical new heights. That hysteria has largely centered on ChatGPT. This focus is obscuring AI’s real potential and the fact that ordinary people stand—or at least should stand—to benefit from that potential in ways that go far beyond sending emails more quickly, populating their website with SEO-friendly content, or patching up code.

What chatbots do is basically a magic trick. Something called a large language model reduces language to numerical “tokens,” then predicts, over and over, which token is most likely to come next in a sequence. The model—fed by the content of the internet and “trained” by low-wage workers across the planet—spits language back at you. The concept is older than you might think.
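For the technically curious, the core of this trick can be shown in a few lines of Python. The sketch below is a deliberately crude illustration of the statistical idea only: count which tokens follow which, then generate text by repeatedly sampling a likely next token. Real large language models do this with neural networks spanning billions of parameters rather than raw word counts, but the next-token logic is the same in spirit.

```python
import random
from collections import Counter, defaultdict

# A toy corpus standing in for "the content of the internet."
corpus = (
    "the model puts tokens in order and the model spits language "
    "back at you and the model learns the order of tokens"
).split()

# For each token, count which tokens follow it and how often.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

# Generate text one probable token at a time.
token = "the"
output = [token]
for _ in range(10):
    candidates = follows[token]
    if not candidates:
        break  # no observed continuation for this token; stop here
    token = random.choices(
        list(candidates), weights=list(candidates.values())
    )[0]
    output.append(token)
print(" ".join(output))
```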

In 1726, Jonathan Swift more or less predicted, and preemptively satirized, ChatGPT in Gulliver’s Travels. When the naïve Gulliver visits the grand academy of Lagado, he meets—and is greatly impressed by—a professor who has invented an “engine” full of words arrayed on moving die-size cubes, calibrated to the “strictest computation of the general proportion there is in books between the numbers of particles, nouns, and verbs, and other parts of speech.” When put to use—an operation that requires a team of humans to crank its many knobs—the engine churns out snippets of text, and the harebrained professor claims that one day his machine will allow “the most ignorant person” to “write books in philosophy, poetry, politics, laws, mathematics, and theology, without the least assistance from genius or study.”

This may sound familiar to Americans who have read about ChatGPT users asking the tech to write them barbed essays with the wit of Voltaire, or a short story in the trademark register of Hemingway. But somewhat more quietly in the background, applications like HyperWrite’s Personal Assistant have cropped up: tools that can theoretically do almost anything for you, as long as it can be done in a web browser, including complex tasks that combine the communication skills of a large language model with the ability to form a plan and then execute it. These are among the first plausible AI agents, and more will follow.

If things keep progressing at the current pace, a much more powerful sort of intelligence—no more sentient than ChatGPT, but powered by capacities approaching what computer scientists call strong AI, or “artificial general intelligence” (AGI)—could come along soon. It may run on something like the language models that power the chatbots we already have. But where ChatGPT needs to be fed text inputs, a system with AGI could take information in any format, and its model would be able to analyze, learn, and modify itself based on that information.

Such a machine could, in theory, make our dreams come true.

The Bizarre Expectations Game of the AI Hypemen

Tools of unprecedented power in the hands of ordinary people should be an easy concept to sell, but the salespeople for these technologies are decidedly not making that pitch. That’s probably because they’re the wrong people to make that pitch—and we likely wouldn’t believe them if they did.

On one hand, people like Altman can, when the situation calls for it, wax downright messianic about the utopia they’re allegedly taking us to. Altman has said in no uncertain terms that building AGI is OpenAI’s overarching goal, and suggests there’ll one day be an AI-based tool that can “cure all diseases,” whatever that means. Altman is more famous, however, for his grim forecasts, as when he appeared before the U.S. Senate in May, and said his “worst fears are that we—the field, the technology, the industry—cause significant harm to the world,” and that, “if this technology goes wrong, it can go quite wrong.” This came after Altman had said in March, “I think people should be happy that we are a little bit scared of this.”

“Implicit in this argument,” the Los Angeles Times’ Brian Merchant pointed out in a column, “is the notion that we should simply trust him and his newly cloistered company with how best to [release this technology], even as they work to meet revenue projections of $1 billion next year.”

Tech leaders probably think they’re describing something promising for normies. Altman, for his part, generally stops at saying AI will “massively increase productivity.” I reached out to Altman and asked him to make the “case for AI being beneficial to normal people,” but a representative said he was unavailable for an interview. Bill Gates gamed out what consumers would do with AI during a Q&A session way back in 2019, describing “the so-called personal agent that is permissioned in to see all your information and helps you instead of you running 20 applications.” He said Microsoft’s potential product of this sort would likely be available as a paid subscription. Billionaire venture capitalist Marc Andreessen’s now-notorious blog post “Why AI Will Save the World” includes a litany of areas he thinks AI may positively impact, a few of which might interest a non-billionaire, like “understanding others’ perspectives, creative arts, parenting outcomes, and life satisfaction.” But it’s not exactly crystal clear how.

The collective inability—or refusal—to be specific about why we should be excited about AI defines the tech elite of our time.

Reid Hoffman, the co-founder of LinkedIn and a billionaire supporter of AI, claimed in a YouTube video earlier this year that the rise of AI marked a “Promethean moment.” I asked him to clarify what, exactly, made AI Promethean. It was, after all, something of a disturbing analogy: In Greek mythology, learning to make fire wasn’t all gravy for the humans, who started fighting wars and were driven away from the gods—nor for Prometheus, who was sentenced to have his liver chewed on by an eagle for eternity.

To his credit, Hoffman did elaborate, but it was another rather fuzzy prophecy. Fire, Hoffman told me, “gives humanity self-determination, the power to pursue its own destiny and make its own meaning.” He said that it is “self-definition through innovation that I think defines us as human beings. That’s what I mean when I talk about ‘Homo Techne’ as a better name for us than Homo Sapiens.” Humanity will put AI “in countless contexts and complementary technologies,” he said, and then we will “attain new heights of civilization and human flourishing.”

If you’re still having a difficult time pinning down where tech “thought leaders” are on AI, you should be. In March came the alarmed open letter titled “Pause Giant AI Experiments,” which was signed by such luminaries as Elon Musk, Apple co-founder Steve Wozniak, and social critic Yuval Noah Harari. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said, calling for a six-month pause on large-scale AI development. Less than four months later, signatory Musk announced that he had founded an AI company.

Oren Etzioni, a computer scientist and the founding CEO of the Allen Institute for AI, the research nonprofit started by Microsoft co-founder Paul Allen, offered a bit more detail on what this crowd thinks is coming. Specifically, he linked me to a story about a brain-spine interface that relied on what The New York Times’ Oliver Whang called an “artificial intelligence thought decoder,” which had helped a patient with a spine injury regain the ability to walk. Etzioni is a great conversationalist, and was in a much less dreamy and more direct mode than Hoffman. He was uncompromising in his optimism, and told me he disagreed with the idea of a six-month pause. (I worked briefly last year on a since-scuttled collaboration between the Allen Institute and The Seattle Times.)

But as for what normal people should imagine for the future? “I’m not going to promise anything too specific,” Etzioni told me. Referring to technology companies in general, he added, “People can look at our track record.”

The track record of tech entrepreneurs, though, is a troubled one at best. And they seem to know it.

One can debate whether, for instance, the Luddites—textile workers who smashed the machines that some of the earliest tech entrepreneurs had used to annihilate many of their livelihoods at the kickoff of the Industrial Revolution—were unfairly maligned (answer: yes). But it’s more useful to look just at tech’s track record since the rise of the iPhone in 2007, perhaps the last moment of unbridled, widely shared optimism in modern technology.

Sixteen years later, smartphones are an obligatory biennial money-dumping ritual. Social media is all “fever swamps,” “hellsites,” and addictive, mental health–degrading apps that the government wants (not unreasonably) to regulate or outright ban. Democrats and Republicans alike distrust social media—and they should. Spotify has nuked musicians’ livelihoods. Airbnb poured fuel on the fire of our housing crisis. And the gig economy further disempowered America’s already precarious workers.

These days, new tech seems to smash into our lives and reshape them every few years. Then it becomes crucial. Then it starts to suck. The phenomenon is called “enshittification,” a term coined by author Cory Doctorow that’s spread like wildfire this year. Tech services begin their lives as promising and user-focused, then become advertiser-focused and start to enshittify. After this, Doctorow’s theory goes, they tend to exclusively prioritize the needs of shareholders and their demands for increased revenue. At this point, beloved features cease to exist or become paywalled, the user experience degrades, and the enshittification is complete.

Is it possible to avoid such a future with AI? It should be. But the government would need to do a hell of a lot more than it’s done to rein in Big Tech monopolies to make it plausible.

The Wrong Way to Do AI

Altman and other AI wise men expressing fear about the technology have done what might seem logical, and announced interest in erecting safeguards. “The regulation of AI is essential,” he told Congress in May. But it’s abundantly clear that OpenAI and the other AI-focused companies want to be the driving force behind the regulatory apparatus. We can’t let that happen.

OpenAI lobbied heavily to soften the European Union’s AI Act, the most meaningful set of regulations on the technology passed so far by any lawmaking body in the world, Time reported. The company did this by arguing successfully that its language model “is not a high-risk system,” and that it shouldn’t be regulated as inherently “high-risk.” According to the version of the law that passed, it won’t be. Then, in late July, OpenAI, Anthropic, Google, and Microsoft, operating as a sort of bloc, formed a group called the Frontier Model Forum, membership in which is exclusive to organizations building “large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models,” according to the forum’s criteria. In other words, big kids only. The aim of the forum is ostensibly to promote safety research within the companies themselves, and to foster communication between the AI industry and lawmakers. But it appears to be a lobby that exists to sculpt government action to fit its own vision.

An eerily familiar juggernaut is on the horizon: Government and Big Tech seem to be unifying in pursuit of economic growth. The Biden administration’s restrictions on China’s access to the advanced GPUs that power AI development suggest the United States senses an advantage in its economic knock-down, drag-out fight with its rival, and isn’t about to let off the gas.

Certainly, tech has sometimes powered the economy in ways that benefited the masses. Workers made real gains during the early days of the increasingly unionized automotive industry, for instance. But in the latter half of the twentieth century came trends toward deregulation and the decline of organized labor. Then, as Daron Acemoglu, an MIT economist, and his co-author, Simon Johnson, write in this year’s Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, “digital technologies became the graveyard of shared prosperity.”

According to them, “A new, more inclusive vision of technology can emerge only if the basis of social power changes.” The authors call for “the rise of counterarguments and organizations that can stand up to the conventional wisdom. Confronting the prevailing vision and wresting the direction of technology away from the control of a narrow elite may even be more difficult today than it was in nineteenth-century Britain and America. But it is no less essential.”

This, in turn, would require individuals who aren’t Big Tech TED Talkers or CEOs to discern what AI’s capabilities are and demand specific, beneficial things from it. Insisting on a beneficial rollout of this technology—not just one that steers clear of the apocalypse—is reasonable. Crouching in revulsion, and hoping we survive when the tech steamroller inevitably rolls over us once again, is not.

A Path Toward Something Good

When Stony Brook University art professor Stephanie Dinkins met Hanson Robotics’ Bina48, a talking robot whose AI model was trained on the actual words of a Black woman named Bina Aspen, she was dazzled. Then she was irked by its shortcomings. She was glad the tech world was putting in the effort to represent Black women like her, but she wasn’t won over by its accuracy.

“If we’re making these technologies, and even the folks who are trying to do so well are doing it in a way that’s PC, and flattens us as humans, what does that mean going forward?” she recalled wondering. It didn’t make her want to write off AI as a technology. It made her want to steer it.

“And so the question became, Can I make something like that?”

Dinkins, who emphasized repeatedly that she is not a coder, told me she ultimately found useful code on Hugging Face, a platform that hosts open-source models. Since then, she’s been using her own chatbot, and other AI tools, to help her students push past the assumptions consumer-facing AI technology makes about them.
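For readers wondering what “useful code” on Hugging Face looks like in practice, the sketch below loads an open-source language model with the platform’s transformers library and generates a reply to a prompt. It’s illustrative only: “gpt2” is a stand-in, since Dinkins doesn’t say which model she built on.

```python
# A minimal sketch of running an open-source model from Hugging Face.
# The model name "gpt2" is a stand-in; the article doesn't specify
# which model Dinkins actually used.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "What do we want from these systems?",
    max_new_tokens=40,  # cap the length of the generated continuation
    do_sample=True,     # sample from the distribution rather than always
                        # taking the single most probable token
)
print(result[0]["generated_text"])
```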

Dinkins’s software is not polished. “My chatbot is stupid,” she said. “It doesn’t work well. It actually doesn’t have enough data to be expansive, but it does a good enough job to get us thinking about what we want from these systems.” It is also, she suggested, a little edgier than what’s available to consumers, in part because she encourages students to ask, “How far can you push it?”

Relying on open-source software has the advantage of making collaboration easy, and rendering your work transparent and flexible. Anyone, anywhere, can be your collaborator—and if your project is popular online, people can come out of the woodwork with fixes and new ideas. In May, an internal memo by an anonymous Google engineer leaked, which Google’s head of AI later confirmed to be genuine. Its tone was panicked, and it described the open-source community as a threat to the dominance of big AI companies like Google and OpenAI, one they have no hope of quelling.

“I do think that open-source development is more likely to produce AI that empowers individuals than the tech giants,” sci-fi author and AI commentator Ted Chiang told me when I asked him about a positive vision for the technology. “Obviously, there are dangers with open-source, as seen with deepfake porn, but the tech giants pose major threats, too, just of a different sort.”

Dinkins’s approach to building generative AI is promising not just because it’s anarchic and leans on open-source technology, but also because it’s almost uncomfortably personal and intimate. Training her chatbot required a large corpus of text just to reach baseline functionality. Suggestions from others included Reddit—a famously contentious place that has hosted its share of white supremacy, and a source she dismissed outright—and the Cornell Movie-Dialogs Corpus, a collection of conversations from hundreds of movies. (That didn’t sit right with her either, because movies, she said, “have not been too friendly, or very supportive, to Blackness.”)

Instead of one of these datasets, she said, “I made oral histories with my family.” With those, along with some other carefully curated materials, she told me she “had a big enough dataset to make something sort of viable.”
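Dinkins doesn’t detail her pipeline, but with the open-source tools she mentions, turning a homemade corpus into a chatbot can look roughly like the sketch below, which fine-tunes a small open model on a text file of transcribed oral histories. Every file name, model choice, and hyperparameter here is an illustrative assumption, not a description of her actual setup.

```python
# Hypothetical sketch: fine-tuning a small open-source model on a
# personal corpus. All names and settings are assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM, AutoTokenizer,
    DataCollatorForLanguageModeling, Trainer, TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "oral_histories.txt" (hypothetical): one transcribed passage per line.
dataset = load_dataset("text", data_files={"train": "oral_histories.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="family-bot", num_train_epochs=3),
    train_dataset=tokenized,
    # mlm=False selects plain next-token (causal) language modeling.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Whatever the technical details, the result stays on the builder’s own machine, under the builder’s control.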

The idea of some future ChatGPT trained on the text of your family narrating the details of their lives in natural speech might sound downright alarming. But Dinkins’s chatbot is not ChatGPT. She is in charge of her language model. She chooses who has access to it, and she puts it to her own uses. It’s hers, and it’s guided by her own vision.

“There are two ways in which the public can benefit from any technology: They can own it, or the technology goes in a direction that then generates high wages and high employment,” Acemoglu, the economist, told me. He stressed that the public owning the whole AI sector—turning it into a state-owned enterprise like one you might see in China—isn’t his first choice, though it may be for some on the left.

In any case, Acemoglu said, the key questions should be the same no matter how far to the left someone is: “Can we use this technology to empower people? Because if you do that, it pushes up wages for diverse skills, and that’s the best way of serving the public.”

The theoretical possibilities that kicked off this essay may have creeped you out, or sounded more thorny than they did utopian. I proposed a rudimentary concept along those lines to Chiang, and shared similar ideas with another sci-fi author, Yudhanjaya Wijeratne, since they’re much better storytellers than I, and the fictional scenarios were ultimately shaped by what they said. Chiang, for instance, told me he “certainly wondered about the possibility of a personal AI agent which works on your behalf rather than on a company’s behalf, and whether there’s a viable business model for such a thing.”

I have my own ideas about how or even whether an AI agent that purports to advocate for older people seeking medical care—which, to be clear, does not yet exist—should be a profit-making enterprise. That’s just one of the subjects that ought to be active topics for discussion among people who don’t work in Big Tech. Currently, these questions are being mulled over in the corridors of power, well before the rest of us have the chance to form our own answers.

Don’t get me wrong: I’m not saying we all need to (or should) think happy thoughts about AI and hope against hope that our dreams will come true. But if ordinary people have a reasonable, shared vision for what AI can do for us, then even if it doesn’t materialize, we’ll be able to articulate exactly what Big Tech stole away.

Tech bosses hold a monopoly on visions for the AI future. It should be smashed as soon as possible.