What kind of fact are you looking up when you look up a word in the dictionary? A fact it certainly is. It is not just a matter of opinion that there is no such word as misunderestimated, that the citizens of modern Greece are Greeks and not Grecians, and that divisive policies Balkanize rather than vulcanize society. Even former president George W. Bush, who uttered these words, conceded as much in a self-deprecating speech.
But the facts of lexicography are a peculiar kind of fact. They are certainly not logical or mathematical truths one could prove like theorems, nor scientific discoveries one could make in the lab. Nor are they stipulations of some governing body, like the rules of Major League Baseball. If you are using this dictionary as the official rulebook of English meaning and pronunciation, prepare for a disappointment. When I asked the executive editor how he and his colleagues decide what goes into The American Heritage Dictionary, he replied, “We pay attention to the way people use language.”
Yes, that is the dirty little secret of lexicography. There’s no one in charge; the lunatics are running the asylum. When you, a speaker of English, look up a word in the dictionary, the authoritative answer you seek is nothing more than a distillation of the way that other speakers use the word. The editors of a dictionary read a lot of newspapers, books, websites, and special-interest publications. They make notes about new words and usages, and record their contexts. They home in on the ones that are used by many writers in multiple contexts over several years. They sanctify a word for inclusion when they judge that it is prominent, widespread, or just plain interesting (like the use of ass as an intensifying suffix in constructions like That is one crazy-ass idea). As the executive editor told me, “There is some measure of subjectivity.”
How can we reconcile the apparent circularity of a dictionary with the indisputable fact that certain usages are just plain wrong—that a speaker who refers to the alma maters who graduated from her university, or who congratulates a colleague who is on the precipice of great accomplishments, has made an error? These examples may be confined to a single speaker, but when many speakers misuse a word on many occasions in the same way—like credible for credulous, enervate for excite, or protagonist for proponent—who’s to say they’re wrong? When enough people misuse a word, it becomes perverse to insist that they’re misusing it at all. Who today would deprecate the people who “misuse” deprecate to mean “belittle or deplore” just because its original meaning was “ward off by prayer”? Or the ones who use meticulous to mean “painstaking” rather than “timid”? A glance at the etymologies of the words in this dictionary, or better still, a dip into the Appendix of Indo-European Roots, will show that the proportion of words that retain their meanings over time is very close to zero. Every change in meaning began with a usage that was nonstandard at best and a solecism at worst.
A Mind that Slices Meanings Finely
The paradox that word usage can be objective and precise while its provenance is subjective and messy can be resolved, I think, by invoking two features of human nature.
One is that the faculty of language can make wafer-thin distinctions of meaning. This is a feature of our psychology that fills me with wonder even after decades of studying how people use words. Take the simplest construction of English grammar, the transitive sentence, as in The cat killed the rat. It seems straightforward: the verb denotes an action, the subject acts, the object is acted upon. But now consider some variations, like the insertion of at to mean “try repeatedly.” You can cut at the rope, or chip at the rock, or kick at the dog, but you can’t kill at the rat, touch at the ceiling, break at the glass, or split at the wood. In another alternation, you can hit Brian on the leg or touch him on the arm, but you can’t break him on the arm or split him on the lip. And then there is the middle voice, which allows you to say The glass breaks easily or This rope cuts like a dream. But a mental alarm goes off when you read Babies kiss easily or The dog slaps easily or The ceiling reaches like a dream.
People mentally slice the semantics of verbs far more thinly than just distinguishing “action” verbs from the rest. They cross-classify actions according to whether they involve motion, causation, and contact, and each combination of these microconcepts allows the verb to appear in a different set of constructions. The meaning of a word in a speaker’s mind—its semantic representation—is a complex assembly of these microconcepts. The intuition that a word is “wrong” arises from a mismatch between the semantic representation in the speaker’s mind and the collection of microconcepts that are found in an utterance. The middle voice, for example, requires that the object of the verb be changed by the action, and while that is true with break and cut, it is not true with kiss, slap, or reach.
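As a purely illustrative sketch, the feature-checking idea in this paragraph can be written out in a few lines of Python. The feature name and verb entries below are toy assumptions of mine, not the dictionary’s lexicon or the essay’s own analysis; the only claim carried over from the text is that the middle voice requires an object that is changed by the action.

```python
# Toy model of "microconcept" checking for the middle voice ("The glass breaks
# easily"). A verb licenses the construction only if its object is changed by
# the action, per the paragraph above. Entries are illustrative, not a real lexicon.
VERB_FEATURES = {
    "break": {"object_changed": True},
    "cut":   {"object_changed": True},
    "kiss":  {"object_changed": False},
    "slap":  {"object_changed": False},
    "reach": {"object_changed": False},
}

def allows_middle_voice(verb: str) -> bool:
    """Predict whether 'The <object> <verb>s easily' should sound acceptable."""
    return VERB_FEATURES.get(verb, {}).get("object_changed", False)

for verb in VERB_FEATURES:
    verdict = "fine" if allows_middle_voice(verb) else "odd"
    print(f"middle voice with '{verb}': {verdict}")
```

The point of the toy is only that acceptability falls out of a match between a construction’s requirements and a verb’s stored features, not from any list of verbs memorized one by one.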
Intuitions about which words contain which microconcepts are not perfectly uniform across speakers. Some people, when assimilating a meaning, may not attend to every nuance, or may latch on to idiosyncratic ones. Nonetheless, I am always amazed by the degree of consensus in people’s intuitions and by the stability of those intuitions over time. Research in linguistics, which assumes that the reader will make the same judgments as the author, occasionally bogs down in disputes over examples, but usually it can count on agreement. And whenever I have lifted a distinction from the linguistics literature and presented it to people in a questionnaire, the average of their ratings—though of course not every individual rating—lines up with the linguists’ gut feeling.
It’s far from clear how this consensus gels. None of the conceptual distinctions that govern the use of words is explicitly taught in English classes or style manuals. Yet literate adults end up with highly overlapping semantic representations. This convergence is the outcome of a poorly understood interaction between nature and nurture. People are equipped with an inventory of basic concepts like space, time, causation, and intention, and they use them to analyze massive amounts of written and spoken material.
This commonality of fine-grained conceptual structure in the minds of literate speakers is what makes a dictionary possible. The editors distill a consensual core of meaning from the appearances of a word, and articulate in concise prose the distinctions that are alive in the minds of the writers who respect it. Their explications are then available to writers who have not encountered the word in enough contexts to feel sure of the consensus. They can also enlighten experienced writers who may be unable to articulate a distinction that they intuit only tacitly. The American Heritage Dictionary is replete with elegant dissections of familiar concepts into their conceptual microanatomy: the shift in viewpoint from bring to take, the force dynamics that distinguish repress and suppress, the shades of meaning that differentiate synonym sets such as harass, harry, hound, badger, pester, and plague or healthy, wholesome, sound, hale, robust, and well.
The Quest for Common Knowledge
There is more to accepted usage than an overlap in the minds of careful writers. There is also a phenomenon that logicians call common knowledge. When a piece of information, x, is merely shared between two knowers, then A knows x and B knows x. But when it is common knowledge, A knows x and B knows x, and A knows that B knows x, and B knows that A knows x, and A knows that B knows that A knows x, ad infinitum. The “ad infinitum” may suggest that common knowledge can never be attained by mortal humans, but common knowledge can be represented in a simple mental formula, and it can be acquired whenever a bit of information is known to be salient to a community of knowers. The most famous illustration of common knowledge is the story known as The Emperor’s New Clothes. When the little boy blurted out that the emperor was naked, he was not telling anyone anything they could not already see. But he changed the state of their knowledge nonetheless, because now everyone knew that everyone else knew that everyone else knew … that the emperor was naked.
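For readers who want to see what such a “simple mental formula” might look like, here is a minimal sketch in the standard notation of epistemic logic. The notation is an assumption of this gloss, not anything from the essay or the dictionary: E_G x reads “everyone in group G knows x.”

```latex
% A standard epistemic-logic rendering of common knowledge (illustrative only).
% E_G x : everyone in group G knows x;  C_G x : x is common knowledge in G.
\begin{align*}
  C_G\,x &\equiv E_G\,x \wedge E_G E_G\,x \wedge E_G E_G E_G\,x \wedge \cdots \\
  C_G\,x &\equiv E_G\,(x \wedge C_G\,x)
\end{align*}
```

The second line is the same content folded into one finite fixed-point formula, which is how a mortal mind can represent the “ad infinitum” without running through it.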
Common knowledge is essential to coordinating interactions between people. When we drive on the road and stay to the right, or surrender an object of value for bits of green paper, we are resting our well-being on common knowledge. We hug the right not just because we know the law, or even because we know that other people know it, but because we know that they know that we know it and vice versa. If another driver knew that the law required everyone to drive on the right, but mistakenly thought that everyone else thought they must drive on the left, he would drive on the left, too. You might have impeccable reasons to insist that your scrip is legal tender entitling you to goods and services, but if the shopkeeper doesn’t see it that way, your money is mere paper.
Knowledge of usage in language is common knowledge. When we use fortuitous to mean “accidental,” fulsome to mean “excessive,” or disinterested to mean “impartial,” we don’t just have those senses in mind. We count on our readers to have them in mind, we count on them to know that we have them in mind, and so on. Without this common knowledge, our care in selecting the word will have gone to waste. As with the imperial leader in his birthday suit, a private consensus is not enough; the knowledge must be common.
In the absence of publicized regulations like traffic laws, the elevation of shared knowledge to common knowledge can be unpredictable, even chaotic. Outlandish fashions, surprise bestsellers, dark-horse candidates, currency hyperinflations, and asset bubbles and crashes are all cases in which people behave according to the way they expect other people to expect other people to expect other people to behave. The craving for common knowledge can even lead to a false consensus, in which everyone is convinced that everyone believes something, and believes that everyone else believes that they believe it, but in fact no one actually believes it. One example is the cachet that college students place on drinking till they puke. In many surveys it turns out that every student, questioned privately, thinks that binge drinking is a terrible idea, but each is convinced that his peers think it’s cool.
The maddening paradox of false consensus has long afflicted lexicographers and grammarians. The problem goes by various names— folklore, fetishes, superstitions, bugaboos, and hobgoblins—but I call them bubbe meises, Yiddish for “grandmother’s tales,” in tribute to the late language columnist William Safire, who called himself a language maven, Yiddish for “expert.” A grammatical bubbe meise is a rule of usage that everyone obeys because they think everyone else thinks it should be obeyed, but that no one can justify because the rule does not, in fact, exist.
The most notorious grammatical bubbe meise is the prohibition against split verbs, where an adverb comes between an infinitive marker like to, or an auxiliary like will, and a main verb. According to this superstition, Captain Kirk made an error when he declared that the five-year mission of the starship Enterprise was to boldly go where no man has gone before; it should have been to go boldly. Likewise, Dolly Parton should not have declared I will always love you, but I always will love you or I will love you always.
The rumored rule against split verbs has been deplored by just about every reputable style manual (see the Usage Note on split infinitive). It originated centuries ago in a thick-witted analogy to Latin, in which it is impossible to split an infinitive because it is a single word like dicere, “to say.” But in English, infinitives like to go and future-tense forms like will go are two words, not one, and there is not the slightest reason to interdict adverbs from the position between them. Doing so only leads to awkwardness, as in Flynn wanted more definitively to identify the source of the rising IQ scores and Hobbes concluded that the only way out of the mess is for everyone permanently to surrender to an authoritarian ruler. The bugaboo can even lead to a crisis of governance. During the 2009 presidential inauguration, Chief Justice John Roberts, a famous stickler for grammar, could not bring himself to have Barack Obama “solemnly swear that I will faithfully execute the office of president of the United States.” Instead he led him to “solemnly swear that I will execute the office of president to the United States faithfully.” To preempt doubts about the legitimacy of the new administration, they had to repeat the oath verbatim, split verb and all, the next day.
How do ludicrous fetishes like the prohibition of split verbs become entrenched? For a false consensus to take root against people’s better judgment it needs the additional push of enforcement. People not only avow a dubious belief that they think everyone else avows, but they punish those who fail to avow it, largely out of the belief—also false—that everyone else wants it enforced. False conformity and false enforcement can magnify each other, creating a vicious circle that entraps a community into a practice that few of its members would accept on their own. Experiments on wine-tasting have shown that people not only praise a wine that has been surreptitiously spiked with vinegar if they see everyone else praise it, but they will disparage a lone rater who calls it as he tastes it.
The same cycle of false enforcement could entrench a linguistic bubbe meise as a bogus rule of usage. It begins when a self-anointed expert elevates one of his peeves or cockamamie theories into an authoritative pronouncement that some usage is incorrect, or better still, ignorant, barbaric, and vulgar. Insecure writers are intimidated into avoiding the usage. They add momentum to the false consensus by derogating those who don’t keep the faith, much like the crowds who denounced witches, class enemies, and communists out of fear that they would be denounced first. The linguists Thomas Pyles and John Algeo write of the “apparently delighted disapproval” with which the famed usage mavens H.W. Fowler and F.G. Fowler cited an error from the writing of another usage maven, commenting that “purists love above all to catch other purists in some supposed sin against English grammar.” Safire referred to one group of his indignant letter-writers as the Gotcha! Gang, and another as the UofallPeople Club.
Fortunately, as H.L. Mencken observed, “The excellent tribe of grammarians … have as much power to prohibit a single word or phrase as a gray squirrel has to put out Orion with a flicker of its tail.” Richard Grant White had no luck banning standpoint and washtub; William Cullen Bryant was commanding the tide to recede when he outlawed commence, compete, lengthy, and leniency; and Ambrose Bierce was unsuccessful in stigmatizing banquet, bogus, brainy, demise, negotiate, and preparedness. Even Strunk and White’s sanctified Elements of Style has some head-scratchers, like the rule that clever means “good-natured” when referring to a horse, and that people must never be preceded by a number. But as with facetious YouTube videos, some pronouncements of the language mavens unpredictably go viral, and a few, like the rule against split verbs, refuse to die.
The Usage Panel: A Barometer of Common Knowledge
The writers of dictionaries face a challenge. Careful speakers command intricate representations of the meanings of words. This collective understanding is a treasure that is well worth preserving and refining, and dictionaries are what we count on to do it. We use them to avoid conveying unintended meanings, to ease the burden on our readers, to enhance the grace and vigor of our writing and speaking, to understand and appreciate the words of others. But while dictionaries attune their users to the common knowledge of a community of careful speakers, they also help to create that common knowledge, because the users will adapt their expectations to the expectations that are ratified in dictionary entries. Dictionaries thus face the danger of perpetuating a false consensus—of entrenching bubbe meises by the very act of acknowledging that some number of speakers respect them. Yet it would be irresponsible not to acknowledge them, because literate people want to speak and write as everyone expects literate people to speak and write.
The American Heritage Dictionary contends with this challenge in two ways. Rather than dictating matters of usage by ukase (look it up), it discusses them in more than 500 Usage Notes, appearing at the end of various entries. A Usage Note warns the reader about an alleged problem raised by a language maven or style manual, flags a new meaning or pronunciation which has drawn commentary, or identifies tricky issues that arise when using the word. The note summarizes the history of opinion on the issue, offers a readout of the prevailing expectations, and lets readers decide for themselves how to use the word. In other words it treats usage as a subject for criticism, analysis, and discernment, with no pretense that a dictionary legislates correct usage, or even that there is a fact of the matter as to what the correct usage is. A meaning or pronunciation is correct to the extent that literate speakers treat it as correct and expect each other to treat it that way, and sometimes those expectations can be squishy or in flux.
The other innovation is the Usage Panel: a Rolodex of almost 200 people who use language in their line of work and who can be expected to care about words. If a dictionary can do no more than inform its readers on what a virtual community of literate readers and writers expect, why not round up such a community, and check out questions of usage with them? There is, after all, no higher authority.
The panelists fill out semiannual ballots with their judgments of the acceptability of several dozen words and constructions. Some have appeared recently on the editors’ radar screens; others are repeated from earlier ballots to assess ongoing changes in the language. Here are a few examples from the most recent ballots.
epicenter:
a. Saudi Arabia is the epicenter of terrorist financing.
b. The Castro district is the city’s gay epicenter.
c. Located at the epicenter of European immigration, Columbia could hardly ignore New York’s vast Jewish population.
d. The assault on affirmative action gained momentum as California became the epicenter of this movement.

doubt:
a. I doubt that it will rain tomorrow.
b. I doubt whether it will rain tomorrow.
c. I doubt if it will rain tomorrow.

puerile: Which are acceptable? Which is your preferred pronunciation?
PYURR-uhl
PYURR-aisle
PYOO-er-uhl
PYOO-er-AISLE
PWAIR-uhl
PWAIR-aisle
The policy for using these judgments is simple: the Usage Panel is always right. If more than two-thirds of the panelists accept a usage, it is deemed acceptable; in general, 51 percent is the threshold of acceptability, though in such borderline cases the Usage Note conveys the lack of consensus. The percentages are generally reproduced in the Usage Note to give writers a sense of how a usage will be perceived by other writers.
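Purely as an illustration of the arithmetic, the decision rule just described might be sketched as follows; the function name and the exact cutoffs are my reading of the policy, not the editors’ actual procedure.

```python
def usage_panel_verdict(percent_accepting: float) -> str:
    """Map a Usage Panel acceptance percentage to a rough editorial outcome.

    Hypothetical sketch: the 51 percent and two-thirds cutoffs come from the
    policy described above; real Usage Notes weigh far more than one number.
    """
    if percent_accepting > 200 / 3:   # clear consensus: more than two-thirds accept
        return "acceptable"
    if percent_accepting >= 51:       # past the general threshold, but borderline
        return "acceptable; the Usage Note conveys the lack of consensus"
    return "not accepted by the Panel"


print(usage_panel_verdict(58))  # e.g. a usage accepted by 58 percent of panelists
```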
The Usage Notes: A Window into Our Language and Our Minds
The Usage Notes are a window into our changing language, the anxieties it raises, and even human cognitive processes. In them you will find the real story behind bubbe meises like aggravate, convince, different than, healthful, hoi polloi, hopefully, intrigue, It is I, like, loan, momentarily, quote, transpire, whom, whose, and why (as in the reason why). The next time you are upbraided for being anxious to leave, raising a child, getting nauseous, or any of several dozen other peccadilloes of diction, you can explain to your one-upper that many of these usages go back centuries, appear in the writings of Shakespeare, Burke, Austen, and James, and flout no defensible principle of meaning, syntax, or style.
Most of the notes fall into a few categories. There are questions of pronunciation, such as aberrant, comparable, dour, nuclear, short-lived, and status. There are verbs converted from nouns, like author, critique, demagogue, impact, and premiere, which seem to rankle certain people, even though a fifth of English verbs started out life as nouns, and the conversions have been going on for centuries. A similar anxiety attends back-formations like enthuse and intuit, despite the unexceptionability of earlier back-formations like diagnose, edit, and resurrect. Many writers bristle at words that smack of cubicle farms, such as to dialogue, to grow a company, to impact, to interface, and to leverage. But some examples of bizspeak, like finalize, incentivize, and prioritize, have no exact synonyms, and others that grated in earlier decades, like to contact (denounced by Strunk and White as “vague and self-important”), have become indispensable. Another breeding ground that can taint a word is pop psychology, as in conflicted in the sense of “ambivalent” and issues in the sense of “problems.”
Items that are balloted in successive decades give a stop-action sequence of language change. Resistance is melting, for instance, to once-problematic uses of comprise, crescendo, critique, decimate, due to, graduate, moot, myself, paradigm, quote, saving, sometime, and transpire. The changes in the average judgments raise the question of whether our panelists mellow as they age or whether the newer and younger members bring different judgments into the mix with them. I looked at the data for one item, snuck, which has been gaining on sneaked for more than a century. The change seems to be driven by an increasing number of younger panelists who have no problem with snuck. If the pattern is general, it suggests that language, like science (according to the quip by Max Planck), advances funeral by funeral.
Though some words are swept by a historical tide, others stand their ground despite the constant battering of speakers who alter their meanings. Careful writers may rejoice that these words are still available to convey subtle shades of meaning, to show off an elegant etymology and composition, and to enliven prose and poetry with distinctive diction. Well worth preserving are the distinctions that delineate the standard uses of bemused, credible, enervate, flaunt, fortuitous, fulsome, infer, luxuriant, mitigate, practicable, presumptive, protagonist, reticent, sidelight, tortuous, unexceptionable, and untenable. But writers are advised to check the Usage Notes to be sure their intended meaning is not obscured by the potential ambiguities in ad hominem, deceptively, decimate, factoid, historic, holocaust, impeach, and roil.
Some distinctions are governed by perverse irregularities and seductive similarities that are so demanding of close attention that it’s almost a miracle they have survived. Among these traps are the noun and verb senses of affect and effect; the spelling of all right; the meanings of disinterested and enormity; the difference between forebear and forbear; the conjugation of intransitive lie/lay/has lain and transitive lay/laid/has laid; and the distinctions among wake, waken, awake, awaken, woken, awoken, waked, awaked, and awakened. Writing that respects these distinctions is what biologists and economists call a costly signal. It advertises that care has been invested in one’s choice of words, commending the writer as someone to whom precision matters.
One of the most thickly populated categories of Usage Note advises the user on sensitive terms for kinds of people, including Amerindian, Chicano, disabled, Eskimo, –ess, First Nation, gay, handicapped, he, Hispanic, Inuit, lady, Latina, man, minority, mute, native, Native American, Negroid, nonwhite, old, oriental, person of color, Pygmy, queer, Scotch, and senior. Careful attention must be paid, because despite the various rationalizations, the choice of an acceptable term for people in a given decade has no semantic rhyme or reason. Hispanic, Latino, and Chicano go in and out of political fashion, and though colored people is racist, people of color is fine. The terms rotate through the polite lexicon as a current one gets tainted by an emotional coloring and calls for a fresh replacement. Woe betide the speaker who does not keep up. After giving a lecture on the genocides of the 20th century, including the Ukrainian terror-famine, I received an enraged email from a human rights activist. It seems I had referred to the country as the Ukraine, the name I had grown up with; I never got the news that the definite article had become offensive.
Many notes fret about number and magnitude. Some discuss qualities that are allegedly all-or-none but that people treat as if they were continuous, as when something is said to be more certain, complete, equal, parallel, perfect, or unique than something else. Other notes consider whether a word is confined to referring to exactly two items (alternative, between, best, either, neither); how to shoehorn a logical expression (which does not really refer to any number) into the singular-plural dichotomy (anyone, every, none, nothing, only, there, they); whether a word for a multitude refers to a forest or the trees (less, majority, more, over, pair, percentage, series); and when a conjunction of singulars adds up to a plural (or, plus, with). The anxiety points to incompatibilities between the digital design of language, with its discrete words making binary distinctions, and various analogue, quantitative, and logical features of reality.
Despite the best-laid plans of lexicographers, some words and senses we cherish will go extinct. It happened to our forebears, and it is vanity to think that it won’t happen to us. But like the coming of spring or the birth of a baby, a new edition of a dictionary awakens hope in the heart. It reminds us that the loss of word senses does not impoverish a language, because new ones are coined as quickly as old ones are lost.
The new entries in a dictionary remind us that speakers are constantly enriching the language with expressions that allow complex concepts to be conveyed economically and thereby expand the richness of discourse. Some random examples: adverse selection, back-load, comorbid, drama queen, evil twin, false memory, hinky, low-hanging fruit, parallel universe, perfect storm, probability cloud, reverse engineering, smackdown, sock puppet, Swift Boat, train wreck. Other additions are invitations to reflect on the events and obsessions of recent history. Everyone will find their own Proustian madeleines in these pages, but I was entranced by Abrahamic, air rage, amuse-bouche, annus horribilis, backward-compatible, box cutter, brain freeze, bubble tea, bushmeat, butterfly effect, camel toe, carbon footprint, cargo pants, Chinglish, cojones, comb-over, community policing, crabstick, cred, crop circle, and crowdsourcing—and that’s just from the first three letters of the alphabet.
A final comment. In this essay, I have ended sentences with prepositions, used between and each other for more than two, used where for in which, begun sentences with and, but, and so, treated none as plural, and followed an everyone with their. You got a problem with that? Check the Usage Notes!
Hey WordNerd,
Thanks a lot for sharing the Pinker essay from AHD. The latest edition of AHD has reignited the age-old battle between the descriptivist and prescriptivist camps in linguistic philosophy. I was working on a blog post on the same topic, which is where your post became extremely helpful. You can check out the post on my blog: http://worldlyphilosophy.wordpress.com/
On a side note, I would love to see more posts from you. The world definitely needs more word-nerds!