THE UR-TEXT of Auto-Tune, the narrative that established the use of technology for pitch correction as a nexus of ethical debate, predates the digital age by half a century. That text is Singin’ in the Rain, the movie musical produced for MGM by Arthur Freed, the Tin Pan Alley tunesmith turned production unit chief, in 1952. Set on the cusp of the silent and sound eras, the film pokes gentle, toe-tapping fun at Hollywood’s first panic at the prospect of technological change—a crisis embodied in the character of Lina Lamont, a shrewish, shellac-haired silent-movie queen who can barely speak in English sentences, let alone sing on key, as she needs to do to meet the demands of the new all-talking, all-singing pictures. Technology, having created the problem, provides a nettlesome solution as Donald O’Connor hatches a scheme to have a perky unknown with a pretty voice, Kathy Selden (played by the 19-year-old Debbie Reynolds), dub Lamont’s voice off-screen. Lamont mouths the words to the Arthur Freed and Nacio Herb Brown ballad “Would You,” and the movie audience within the movie hears Debbie Reynolds trilling sweetly in perfect intonation—until the upending of things in the happy ending, when O’Connor and his pals literally pull the curtain on the scam. The good girl, who can really sing, triumphs over the bad one, who had betrayed the public with electronic trickery.
Since the late 1990s, digital sound processing has made pitch correction possible without the necessity of hiding Debbie Reynolds. The method commonly referred to as Auto-Tune—the brand name for the first and best-known of several software programs for the manipulation of pitch, tone, and other aspects of sound—does Reynolds’s work with improved efficiency and flexibility, if less perkiness. Meanwhile, most of the discussion today over the use of technology to fix the sound of off-key singing has the same thematic content as an old movie musical. The natural voice stands for virtue; technology stands for vice. Vocal technique—specifically, the skill sufficient to produce notes in accord with the twelve-tone tempered scale—is perceived as evidence of legitimacy. Digital processing, when employed to accomplish something that used to be the exclusive domain of living beings, is taken as a cheat. Sixty years after MGM’s Technicolor vindication of vocal and moral purity in the form of Kathy Selden, undoctored singing is still thought of as right as the rain that splashed under Gene Kelly’s tap shoes. Yet, like the manufactured showers in the movie, that conception is more complicated than it appears.
Invented as an afterthought, the by-product of research in a related field, Auto-Tune was developed by Harold Hildebrand, a one-time engineer for Exxon, as an outgrowth of his research in the analysis of seismic data for the purpose of finding oil. The quasi-accidental nature of Auto-Tune’s origin makes for a cute story, one that puts the invention broadly in the company of Teflon, the microwave oven, and the Frisbee, while offsetting any suspicion of Machiavellian intent on the part of Hildebrand, who left Exxon to start the company that introduced and still markets Auto-Tune. (Founded as Jupiter Systems in 1990, the firm is now called Antares Audio Technologies.)
Hildebrand, an amateur flutist who got his undergraduate education on a music scholarship and later earned a Ph.D. in electrical engineering, goes by the nickname Andy and likes to be called “Dr. Andy,” in the manner of a self-help author or a pediatric dentist. In interviews he gives Auto-Tune a sagely public face, talking with uncritical affection for both music and technology, shrugging off ethical questions with folksy humor. “Well, I don’t know if it’s bad or good,” Hildebrand said in an interview with The Seattle Times. “I’m not a judge of that. It’s very popular, so in that sense it’s good. I don’t place value judgments on things like that.... Someone asked me at one point in time if I thought that Auto-Tune was evil. I said, ‘Well, my wife wears make-up. Is that evil?’ And yeah, in some circles that is evil. But in most circles, it’s not.”
To the extent that use is a measure of popularity, Hildebrand is correct about Auto-Tune. (Attitudes are a different kind of measure, of course, since users of things can have mixed feelings about the things they use.) Auto-Tune is a fixture in popular music today, employed far more widely than most people realize. There are no hard statistics to quantify the use of digital pitch correction; Antares declines to release its sales figures, and so does its main competitor, the German company Celemony, which calls its software Melodyne. In recording studios, pitch correction tends to be employed discreetly, if not surreptitiously, to preserve the reputation of singers. Each day, meanwhile, less and less pop recording takes place in the foam-padded studios of the old-paradigm record industry, and more and more is done in private, at home, with laptop software. Pitch-correction plug-ins are all but standard accessories for home recording, as the old lines between professionalism and amateurism, vocation and avocation, dissolve. The calculus is simple: the lower the singers’ levels of skill, experience, or talent, the higher the value of Auto-Tune. The fact that one can or cannot sing no longer has much bearing on whether one will or will not sing.
When most of us think of Auto-Tune, the sound we likely conjure in our minds is not the sound that Auto-Tune provides on the pitch-corrected hits that dominate the pop charts today. We probably think of the novelty uses of pitch correction that first brought Auto-Tune attention and made it the subject of high-profile controversy several years ago. It is fourteen years now since Cher and her producers pushed Auto-Tune past its safety settings on the single “Believe,” producing that quavering, metallic chipmunk sound—“the Cher effect”—that established Auto-Tune as a gimmick in the public consciousness. Over the years since, dozens of acts prominent in pop and hip-hop have followed Cher and pushed Auto-Tune further for conspicuous effect or stunt purpose: Lil Wayne, with “Lollipop,” his juvenile fantasy of cheap sex; the Black Eyed Peas, with their unctuously catchy “Boom Boom Pow”; Kanye West, almost approaching irony with the plastic crooning on “Heartless”; Daft Punk, the electronic dance-music duo, with their spacey “One More Time”; T-Pain on “Buy U a Drank” and twenty or thirty other tracks that wallow in Auto-Tune as an aural equivalent to the extravagant excesses in his lyrics; Rihanna, exulting in multiple modes of disorientation on “Disturbia”; and Ke$ha, whose voice gains most of its character from the electronic aura imposed by Auto-Tune—a digital essence that neatly inverts Walter Benjamin’s formulation to give electronic creations an aura that is non-existent in life.
Yet none of the music I just mentioned has much to do with the way Auto-Tune now dominates contemporary pop. Since the rise and decline of Auto-Tune as a popular gimmick, digital pitch correction has pervaded recorded music, but in a way more significant and even creepier than “Buy U a Drank”: by stealth. If we don’t think of Auto-Tune when we hear the pop songs wafting around the shampoo aisle as we shop, it’s only because we don’t recognize it. We don’t hear what we’re hearing. As Dr. Andy has explained in an online interview, he intended his invention to be imperceptible, and it is to most ears, most of the time. “Auto-Tune can be used very gently to nudge a note more accurately into tune,” Hildebrand says. “In these applications, it is impossible for skilled producers, musicians, or algorithms to determine that Auto-Tune has been used.”
WHAT DOES IT mean to say that someone “can sing”?
My wife, the cabaret singer Karen Oberlin, is a third-generation musician. Her parents met at Tanglewood when they were playing in a youth orchestra under Leonard Bernstein. Her paternal grandparents were vaudeville performers who sang and played light classics and comedy songs on the Chautauqua circuit. Karen and I have a nine-year-old son, and since he was in pre-school, his teachers have been telling us that the kid has musical talent. But what are they saying, exactly?
As I just suggested by relaying that family history, it is natural to think of musical ability as naturally ingrained, a gift—something endowed, if not by genetic inheritance, then by God. There is evidence for the heritability of artistic talent in gene research, and there is a case for the divine in every concert review that describes a piece of music as transcendent or miraculous. Not that creative skills (in music or any of the arts) cannot be learned, to some degree, or developed through training and experience. Without such a faith, where would the MFA industry be? Still, the Nietzschean conception of talent as a natural endowment—and more than that, a supernatural one—persists, only bolstered and gussied up now in DNA lingo.
This line of thinking underlies the widespread contempt for Auto-Tune as an extra-natural method of accomplishing what should supposedly come naturally, and it helps preserve our enduringly romantic conception of artists as special creatures, anointed or made differently than the rest of us. We resent Auto-Tune not so much because it is non-human—we put our faith (and, increasingly, our affection) in electronic devices every day—but more because the power it confers, the ability to sing in perfect intonation, seems superhuman and, in practice, indiscriminate. Auto-Tune defies the myth of the creative gift.
To say that someone can sing suggests a physical endowment, and maybe a metaphysical one, though technology has influenced the physical process of singing in the past. When my wife’s grandmother performed “Under the Greenwood Tree” on stage in Pittsburgh, part of the proof that she could sing was her ability to project her voice from the footlights to the balcony. That skill became considerably less important after the invention of the microphone and electronic amplification, along with the development of radio and records, and the commensurate relocation of popular entertainment from the public sphere to the home. The microphone, in a sense, was the Auto-Tune of its day, doing for amplitude what Hildebrand’s invention has done for pitch. In fact, the first vocalists to exploit the potential of the microphone—Rudy Vallee and Bing Crosby, early among them—were once taken as incompetent for their failure to project, with gusto, from the diaphragm. In a quickie film called Crooner, made in 1932, a critic of the leading man snaps, “He can’t sing. He only croons.”
Yet the analogy between the mic and pitch correction is imprecise—or perhaps still incomplete. With the microphone, singers did not simply sing quietly and sound loud; they sang differently than Al Jolson, Bessie Smith, and other song-belters of the proscenium era. In the electronic age, singers learned to work more intimately, conversationally, sensually, and subtly, establishing a new set of aesthetic standards for pop vocalists. Auto-Tune has not much changed the way singers sing, though it may well end up doing so in ways I cannot foresee. Digital pitch correction is a technology more active than the microphone: rather than capturing a singer’s voice passively, it alters it, raising or lowering the tone to match the settings on the controls. It is, indeed, all about control—specifically, about conforming strictly to a traditional standard of correctness, the Western tempered scale.
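The arithmetic behind that conforming is easy to sketch. What follows is a minimal illustration in Python, my own toy example rather than anything from Antares: it uses the standard equal-temperament relationship to measure how many semitones a sung frequency sits from a reference pitch, rounds to the nearest whole semitone, and converts the result back to a frequency.

```python
import math

A4_HZ = 440.0  # reference pitch: the A above middle C

def nearest_tempered_pitch(freq_hz: float) -> float:
    """Snap a detected frequency to the nearest tone of the
    twelve-tone equal-tempered scale."""
    # How many semitones (possibly fractional) the pitch sits from A4
    semitones_from_a4 = 12 * math.log2(freq_hz / A4_HZ)
    # Round to the nearest whole semitone, the "center" of a scale tone
    nearest_semitone = round(semitones_from_a4)
    # Convert that scale tone back to a frequency in hertz
    return A4_HZ * 2 ** (nearest_semitone / 12)

# A singer lands at 445 Hz, a shade sharp of A (440 Hz):
print(nearest_tempered_pitch(445.0))  # -> 440.0
```

In this sketch, anything the singer does between two scale tones is simply pulled to whichever center is closer; the real software is far more sophisticated about detecting and tracking pitch, but the grid it conforms to is the same.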
There, to me, lies the tyranny of Auto-Tune. To say that someone can sing can mean simply that the person can sing on key, and it is elementally important to hit the right notes. The trouble with Auto-Tune is that it applies too rigid a definition of rightness. It adjusts every tone with unyielding, unvarying precision, squarely in the mathematical center of the note. But no one sings that way—not even the world’s most esteemed opera singers. In every form of vocal music, the scale is a framework for expressive interpretation, not a system of regimentation. What it means above all to say that someone can sing is that the person can communicate the content of the words and music; and emotional expression, in vocal music, involves the deft, intelligent manipulation of pitch. A skilled singer knows how to shade a moment in a song by, say, hovering near the bottom of a note—within the note, in tune, but just below the center of the tone. A great blues singer may use three chords, but find countless possibilities for tonal variation in a single note. The music, the art, is contained in those variations. Bessie Smith, processed through Auto-Tune, would have all the soul of Siri.
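To put numbers on that shading: singers and engineers measure such offsets in cents, hundredths of a semitone. The sketch below, again my own hypothetical Python rather than Antares’s code, shows a pitch sitting about thirty cents below the center of A (440 Hz) and what happens to it under full correction versus a gentler setting; the strength parameter is a stand-in for the kind of retune-amount control such tools expose, not a documented Auto-Tune setting.

```python
import math

def cents_from_center(freq_hz: float, center_hz: float) -> float:
    """Expressive offset from a note's center, in cents
    (100 cents = one equal-tempered semitone)."""
    return 1200 * math.log2(freq_hz / center_hz)

def correct(freq_hz: float, center_hz: float, strength: float) -> float:
    """Pull a sung pitch toward the note's center.
    strength=1.0 erases all shading; smaller values keep some of it."""
    offset = cents_from_center(freq_hz, center_hz)
    remaining = offset * (1.0 - strength)
    return center_hz * 2 ** (remaining / 1200)

# A singer shades a note, hovering about 30 cents below the center of A (440 Hz)
shaded = 440.0 * 2 ** (-30 / 1200)

print(round(cents_from_center(correct(shaded, 440.0, 1.0), 440.0), 1))  # 0.0   shading erased
print(round(cents_from_center(correct(shaded, 440.0, 0.3), 440.0), 1))  # -21.0 most of it kept
```

Thirty cents is still unmistakably the same note; it is the hovering the blues singer intends. Run at full strength, the correction flattens that choice to zero.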
It is easy to see the problem with the “auto” in Auto-Tune. Automation is inhuman; still, automation is merely a method of production, and even automated music can be interesting intellectually. The emphasis on tuning is a problem, too. What matters most in music—what music is—is sound, and I can think of no sound quite as oppressive as the systematic execution of technical perfection. Auto-Tune, by making every song perfectly correct, makes every song wrong.
More than being correct, music has to sound right; and, to this day, few works in any art exemplify the distinction between reality and perception better than Singin’ in the Rain. After all, when Kathy Selden is dubbing for the voice of Lina Lamont, and we hear the sound of Debbie Reynolds crooning “Would You,” we are not really hearing Debbie Reynolds. We are actually hearing a ghost singer named Betty Noyes, who dubbed Reynolds’s voice on the song, without credit.
Some years ago, I learned about this at a press event for one of the video releases of the film, and Reynolds was on hand for pictures and a few questions. When the subject of Noyes’s once-secret dubbing came up, Reynolds smiled her Kathy Selden smile and said, “Oh—my singing was too good to use.”