
Who Gets a Say in Our Dystopian Tech Future?

A.I. research scientist Timnit Gebru raised red flags about Google’s most exciting new tech. She says she was forced out for it.

Timnit Gebru speaking at an industry event (Kimberly White/Getty Images)

Last Wednesday, Timnit Gebru, a staff research scientist and co-lead of the Ethical Artificial Intelligence team at Google, said that she had been ousted from the company over a research paper she co-authored and submitted for review. According to Gebru’s account, a manager had asked her to recant or remove her name from the paper, which raised ethical concerns about A.I. language models of the sort used in Google’s sprawling search engine. Gebru, one of the most prominent ethicists in Silicon Valley and one of Google’s few Black women in a leadership position, responded by asking for transparency about the review, offering that she would remove her name from the paper if the company gave her a fuller understanding of the process and developed a road map for researchers to follow for future reviews. If they couldn’t agree to those conditions, she said, she would leave the company at a later date. Google saw that as an invitation. They swiftly accepted what they called her resignation.  

You might recognize Gebru’s name from the pathbreaking group she founded, Black in AI, or the explosive paper she co-authored with Joy Buolamwini of the MIT Media Lab, which proved that Amazon’s facial recognition program, Rekognition, was extremely biased toward white and lighter-skinned subjects. The study found that Rekognition was adept at clocking lighter-skinned men but had a considerable failure rate when it came to women, particularly women of color: It misclassified women as men 19 percent of the time; women of color with darker complexions were misclassified 31 percent of the time. One can only imagine the havoc it could wreak on trans people.

Amazon attempted to refute the study, calling it “misleading,” but after the Rekognition paper was reported widely in the national press, it sowed justified fear among the public: Amazon had been marketing the technology to police departments at the time the story broke.

Gebru’s most recent paper raises similar stakes: Titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” it’s about language, computers, human beings, and the scalable biases that can creep into that triangular relation, unnoticed or unaddressed by engineers. The idea of artificial intelligence acting against our interests has been the stuff of science fiction nightmares for decades. Now it’s just a fact of life. 

In the back-and-forth in the week since (Google challenges Gebru’s telling of events, while 1,604 Google employees and 2,658 outside supporters have organized on her behalf to condemn what they call “unprecedented research censorship”), many spectators of this scandal, myself included, have felt a shift in their feelings about A.I.: a growing fear that Google’s profit motive and humanity’s interests are at odds. That Google is also the arbiter of what its staff experts can publish or say in public about the technology it uses to enrich itself compounds those fears. (On Wednesday, CEO Sundar Pichai announced an internal investigation of Gebru’s exit. Google did not respond to The New Republic’s request for comment.)

At the heart of the Gebru scandal is an old story about the way whistleblowers are punished by profit-seeking executives who instinctively know how to weaponize markers of difference like race or gender to win the game. But the scale at which Google operates guarantees that its biases, which are intrinsic to all unequal workplaces and need to be actively corrected, will be built into the technologies that shape the conditions of human existence. Gebru’s work to date has shown that, perhaps inevitably, machine learning absorbs the biases and misunderstandings that human beings are already susceptible to, in particular those that proliferate among the executive class in Silicon Valley, who get to choose whose voices are heard (literally and figuratively) and are disproportionately male. These are quickly developing technologies that, as they touch more and more parts of our lives, carry real possibilities and real risks. Gebru’s draft paper called for caution, which also potentially threatened her employer’s bottom line.

The key to Gebru et al.’s paper lies in the words “language models.” Innocent-sounding enough, this is the technology that allows A.I. to mimic human expression. Remember when we collectively realized that you can create a game out of predictive text? You know how it goes: Begin a sentence on your smartphone with a prompt, then let autocomplete fill in the rest. The result is fun to read, because it’s always slightly garbled but very close to your actual writing style. It’s solipsistic and eerie at the same time, resembling the pleasure of admiring oneself in a fun-house mirror.

The technology behind predictive text relies on a statistical model of how likely one word is to follow another, created through breakthroughs in the field of natural language processing, or NLP, that allow your phone to learn your writing habits. The “natural” in this term refers to the mysterious stuff that comes out of human mouths, which professors of linguistics don’t even fully understand. The “language processing” part refers to the work computers do to read, decipher, and build useful tools out of that data. In this sense, A.I. can “understand” natural language, but only in the rudimentary fashion of mimicry and impersonation; hence the parrots of the paper’s title. Google increasingly relies on NLP for its communication tools, for improving its search engine, and, crucially, for revenue.
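To make that statistical idea concrete, here is a minimal sketch of word prediction by counting, assuming nothing more than a toy corpus invented for the example; it illustrates the principle, not how Google’s or anyone’s production systems actually work.

```python
from collections import Counter, defaultdict

# Toy stand-in for a user's message history (invented for illustration).
corpus = (
    "i am running late today. "
    "i am on my way home. "
    "i am on the train right now."
)

# Count how often each word follows each other word: a simple bigram model.
follows = defaultdict(Counter)
words = corpus.lower().replace(".", "").split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def suggest(word, k=3):
    """Return up to k of the words most often seen after `word`."""
    return [w for w, _ in follows[word].most_common(k)]

print(suggest("i"))   # ['am']
print(suggest("am"))  # ['on', 'running']
print(suggest("on"))  # ['my', 'the']
```

Modern large language models replace these raw counts with neural networks trained on billions of words, but the underlying task is the same: predict the next word from the words that came before. That is what makes the mimicry so convincing at scale.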

There is a document circulating online purporting to be Gebru and her co-authors’ paper, but the MIT Technology Review reviewed the official draft. According to its recent article, “Stochastic Parrots” is about the dangers of large language models, which have been at the center of breakthrough advances in NLP in the last three years. By extending the data in play to vast, incomprehensible sizes, the paper argues, new language models have allowed A.I. to produce startlingly accurate human “voices”: good enough to smooth that fun-house mirror flat and fool you more completely.

The implications of this technology are expansive. According to the paper, training and running models on such enormous quantities of data poses environmental dangers because it consumes so much electricity. These models are also so expensive to create and manage that they widen the advantage rich companies already hold. “It is past time for researchers to prioritize energy efficiency and cost to reduce negative environmental impact and inequitable access to resources,” Gebru and her co-authors write in their draft paper, according to MIT.

The risks of the bias the researchers say they found in these models loom large. These enormous models are trained on every scrap of text their builders can scrape from the internet; what’s to stop racist or sexist or otherwise malicious language from creeping in? Furthermore, Gebru argues, these models cannot grasp nuance or recent shifts in the meaning of phrases like, say, Black Lives Matter. They will also only work effectively on languages already overrepresented on the internet, like English, deepening an existing inequality between languages and homogenizing the internet further.

Of the more than 7,000 languages currently alive on earth, Siri supports 21; Google Home supports 13; and Alexa supports eight. The result of this narrowing, Khalid Choukri of the European Language Resources Association told Karen Ainslie in a blog post about linguistic diversity in A.I., is “languages with a few million speakers that don’t have access to these technologies.” Google and its peers are therefore essentially excluding large swathes of the world’s potential users on the basis of certain languages’ overrepresentation on the internet, roughly half of which is in English.

When models are trained on such vast quantities of data, it becomes difficult even to observe or document the biases at work in A.I. “A methodology that relies on datasets too large to document is therefore inherently risky,” reads an excerpt of the draft paper published by MIT. “While documentation allows for potential accountability, ... undocumented training data perpetuates harm without recourse.” How can Google fix something it can’t even see?

Underneath the compelling questions raised by the paper lies a fascinating paradox: A.I. cannot overcome bias for the simple reason that it does not actually understand human language. As previously mentioned, not even Noam Chomsky fully understands why or how language develops in the young human brain; we just know that it does. All these models can do, therefore, is mimic, manipulate, and extract the saleable information they find in order to, for example, make Google AdWords work better.

Language is a mystery, and experts in sociolinguistics have examined its relation to power and inequality for decades—at least since William Labov published his 1966 study of New York City’s department stores, the first to prove that pronunciation correlates with socioeconomic stratification. Norman Fairclough’s classic textbook Language and Power is assigned to students of language all over the world, teaching them that there is no simple relation “between” language and society—they are not independent entities. In fact, “there is not an external relationship ‘between’ language and society, but an internal and dialectical relationship,” which works like a machine to generate communication as we know it from the ingredients of dialogue, self-expression, observation, identification, and so on. Language is how we process the world and the gateway into other people’s otherwise unknowable minds. Messing with that dialectic is the peak of technological hubris. See the work of Arthur C. Clarke for details. 

Last year, a group of scientists published a paper observing that college machine learning courses rarely require their students to read books like Fairclough’s: although “ethics is part of the overall educational landscape in [computing] programs, it is not frequently a part of core technical ML [machine learning] courses,” they note. That doesn’t mean Fairclough’s observations aren’t relevant.

What happened to Timnit Gebru is a threefold scandal, encompassing unethical labor practices that are not incidental to but caused by the way Silicon Valley companies operate, the systematic gaslighting of Black workers by a notoriously unempathetic and profit-driven sector, and a fundamental misunderstanding about the nature of language. Each of those factors is being monetized, every day, for a private company’s benefit. As Ifeoma Ozoma put it on Twitter, the “financial, career, physical, and mental toll” of speaking out against one’s employer, even or especially when it has explicitly hired you to do so, is “a lot.”

The unfairness inherent to the shape of this scandal is universal to capitalist competition. As mathematician Cathy O’Neil put it in a terrifyingly prescient 2013 blog post, there is no market solution for ethics: “The public needs to be represented somehow, and without rules and regulations, and without leverage of any kind, that will not happen.” O’Neil’s observations are based on an analogy between financial deregulation in the 1970s and the lack of regulation in tech today; anybody who still believes the free market guarantees the best outcome for each citizen should ask a family that became homeless after the 2008 crash, or somebody bankrupted by cancer, how they feel about the matter.

The only light at the end of this tunnel is the public relations disaster that Google has manufactured. The support visible under the hashtag #IStandWithGebru makes her standing among her colleagues and peers clear. In an ideal outcome, that—alongside Gebru’s record of urgent work on these matters—would be enough. It rarely is.