The IQ Fallacy

A Three Part Series

Through most of the 20th century, those little numbers known as IQ scores have been used both to take away needed assistance and to take away hope. Yet such was not their original intent.

Part I: "Natural" Hierarchies

The nineteenth century can be seen as a time when the Western world was finally forced to come to terms with the diversity of life on earth. Centuries of European exploration -- and colonization -- had piled up dazzling and baffling evidence of a great variety of cultures and peoples. The bombshell of Charles Darwin's Origin of Species (1859) supplied the further challenge of placing mankind's great diversity within nature's great diversity (yes, "mankind" -- but more about that later).

This challenge was posed to a society which was acutely hierarchical in structure. Moreover, much of its scientific effort was still in the "natural history" stage -- that is, concerned with describing and categorizing natural phenomena. The concept of evolution, although never intended as a means of ranking creatures or of suggesting that "newer" organisms were "better" than more ancient versions, was all too easily misrepresented as a natural hierarchy: all creatures were to be scientifically described and slotted into their appropriate ranks in a Great March of Progress.

Since evolutionary theory was popularly glossed as "the survival of the fittest," social Darwinists assumed that those human beings who survived the nineteenth-century social order were therefore fit (read "deserving") and those who failed to flourish deserved to fail. With this lopsided reasoning in place, it remained only to describe "nature's" human failures and human successes, record the physical and mental characteristics that determined their permanent place in mankind's Great March, and set public policy to accord with nature's intent.

But how could these characteristics be marked permanently, indelibly, "scientifically"? To achieve such a system, evolutionary theory quickly formed what one modern biologist has called "an unholy alliance" with those mainstays of natural history, numbers and measurement. Nineteenth-century social scientists and psychologists dreamed that if only they could measure rigorously and copiously enough, they could turn their "soft" science into one as verifiable and undeniable as Newtonian physics.

One of the earliest attempts to measure and rank human beings was known as craniometry, the measuring of heads. It was based on the assumption that a simple correlation existed between the size of the head and the size of the brain, and therefore of the intelligence and natural social rank of that head's possessor. By the mid-1800s craniometricians were announcing that the brains of white males were larger than those of dark-skinned peoples, whose brains were compared to those of the great apes, and of course larger than the brains of females of any race, whose measurements were compared to those of children, "savages," and gorillas.

Needless to say, these arguments lost their luster as data emerged on the existence of small-brained geniuses, large-brained criminals, and even the occasional educable female. Similar attempts were made to rank populations by "body type," yet these systems too were cumbersome and vulnerable to counterargument. But the next major attempt to develop a system of ranking by intelligence would be simpler to implement, and far harder to argue away. It is with us still.

Alfred Binet (1857-1911) was a researcher in psychology at the Sorbonne during the heyday of craniometry, but late in his career his enthusiasm for this form of measurement faltered. When, in 1904, he was commissioned by France's Minister of Public Education to identify children who were having trouble with normal classroom tasks and might benefit from special help, he decided to develop a very different, and far more pragmatic, type of test. Binet resolved to bring together a series of short tasks dealing with as many different skills and abilities as he could imagine. His working motto was "the more, the better," since the larger the sample, the less the chance of skewing the overall pattern with an unusually high score or an "unfair" low score due to testing error.

Binet's 1905 test attempted to arrange the tasks in ascending order of difficulty, but by 1908 he had decided to assign an age level to each task, defining the levels as the youngest ages at which a child of normal intelligence should be able to complete that task. The test-taker would now move through the levels until "stumped," and the last level successfully completed would be called his/her "mental age." A few years later, after Binet's death, it became the practice to divide this mental age by the chronological age, yielding the intelligence quotient, or IQ. Eighty years later, the momentous effect of these mighty numbers is still felt in all aspects of Western society and education.
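The arithmetic itself was simple. To take an illustrative example (the figures here are hypothetical, not drawn from any actual test record): a ten-year-old who could complete the tasks only through the eight-year level would have a mental age of 8 and a quotient of 8 ÷ 10 = 0.8, reported as an IQ of 80 once the later convention of multiplying by 100 took hold; a mental age of 12 in the same child would yield an IQ of 120.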

Binet fulfilled his charge from the ministry, but what did he really intend his test to be? First of all, he never intended to suggest that a complex phenomenon like intelligence was an independent entity that could be measured with a number. He insisted that his rough, practical device for noting which students needed help did not necessarily refer to anything innate, and that the notion of "mental age" was a tool, a shorthand expression, which taken literally would only result in "illusion." At the close of his life, Binet expressed great apprehension that IQ testing might one day be used to label and rank the mental worth of children, and he decried the possibility of such "brutal pessimism."

In short order Binet's worst fears were to be realized, as the new science of IQ spread to America and took on that very cast of hereditarian "brutal pessimism." By the 1920s H.H. Goddard, the crusading director of research at the Vineland Training School for Feeble-Minded Boys and Girls in New Jersey, would be warning against the costs of allowing "high-grade defectives" (those with a "mental age" as high as 12) to reproduce, and recommending their prompt institutionalization -- for the good of society. R.M. Yerkes would persuade the army to test 1.75 million men during World War I, producing a skewed and subjective body of data which was tailor-made for the validation of racial and ethnic prejudices, and would lead to the Immigration Restriction Act of 1924.

Continue to Part II

FOR FURTHER READING:

The Mismeasure of Man, by Stephen Jay Gould (New York: W. W. Norton and Company, 1981).

Frames of Mind, by Howard Gardner (New York: Basic Books, 1983).



