We have discarded many disproven historical theories that used faulty science to buttress inaccurate ideas. Still, the possibility that flawed methodology, unsound reasoning, and bad information could taint scientific study is ever-present. AI researchers face the challenge of avoiding bad inputs and damaging impacts as they develop technologies that teach machines to learn and automate processes. Explore this curated selection of items from the NLM collection related to debunked historical ideas about personal attributes like intelligence and character, historical reasoning, and efforts to ensure accuracy and reliability, mitigate bias, and prevent harmful consequences in late 20th century AI research.
Brain and mind, or, Mental science considered in accordance with the principles of phrenology, and in relation to modern physiology, H.S. Drayton and James McNeill, 1880
Phrenology, a system that analyzed the shape, size, and bumps of the head to determine mental ability and character, enjoyed popularity through the 19th century until it was discredited and abandoned in the 20th. As with physiognomy, scientists used phrenology to support inaccurate ideas, including the beliefs that personal traits were innate and dictated solely by nature, that traits like criminality and psychopathy were measurable and localized to specific parts of the brain, and that women, people of color, neurodivergent people, and other marginalized groups were inherently inferior.
U.S. physician Henry Shipton Drayton (1840–1923) and James McNeill (1846–1916) provide an overview of phrenological theory and practice in this work. Drayton was a major advocate for phrenology.
Phrenology Examined, Pierre Flourens, 1846
In his 1846 text, French physiologist Marie Jean Pierre Flourens (1794–1867) dissects phrenology—a pseudoscience that used cranial features to discern quality of character and mental abilities. Flourens disagreed with one of phrenology’s main assertions: that intelligence and personality traits were localized to specific areas of the brain.
Intelligence Measurement: A psychological and statistical study based upon the block-design tests, Samuel Calmin Kohs, 1923
Intelligence testing emerged in response to social scientists’ desire to measure mental capacity systematically at the turn of the 20th century. American psychologist and social worker Samuel Calmin Kohs (1890–1984) developed the Kohs Block Design Test, which assesses non-linguistic facets of intelligence, like spatial reasoning and motor skills. In one section of this book, Kohs discusses the then-recently dispelled concept of a physical and mental “criminal type,” citing studies that used intelligence testing and found that prisoners had mental abilities similar to the general population’s and that no correlation existed between the prisoners’ physical attributes and psychodiagnostic test results.
Studies in Mental Deviations, S.D. Porteus, 1923
Australian psychologist Stanley Porteus (1883–1972) presents the findings of his research on students at Vineland Training School, an educational facility for people with developmental disabilities. He used intelligence tests and other investigatory methods to measure physical and mental abilities, then hypothesized about the students’ educational needs and professional prospects. Porteus, like many of his contemporaries, believed that differences in intelligence testing results supported erroneous ideas about the inferiority of neurodivergent people, women, people of color, and other marginalized groups.
Abnormal man: being essays on education and crime and related subjects, with digests of literature and a bibliography, Arthur MacDonald, 1893
In Abnormal Man, a specialist with the U.S. Bureau of Education (the precursor to the current Department of Education) provides an overview of the field of criminology and synopses of literature from Europe and the U.S. on the causes of crime, which he expanded upon with his own thinking. The author was an admirer of Italian criminologist Cesare Lombroso and French biometrician Alphonse Bertillon and believed in the now-debunked idea that a connection exists between physical appearance and criminality, mental illness, and poverty.
Elements of Medical Logic: Illustrated by practical proofs and examples, Gilbert Blane, 1822
In this text, Scottish physician Gilbert Blane (1749–1834) details the progress of logic throughout human history, invoking Francis Bacon, Galileo, and others. He states that even the greatest scientists could be led astray by the cultural misunderstandings of their time.
The intuitions of the mind inductively investigated, James McCosh, 1860
This text from 1860 discusses how inductive reasoning skills, crucial to scientific discovery, are gained and practiced. The author states, “It is possible also for error to arise from a chain of erroneous deduction from principles which are genuine in themselves and soundly interpreted,” acknowledging the possibility of miscalculations in thinking despite a strong logical foundation. Ironically, the book itself demonstrates flawed reasoning. It assumes that, due to their intelligence, scientists would commit only small logical errors that their like-minded colleagues would detect and correct swiftly.
Psicología del pensamiento [How We Think], John Dewey, originally published 1910
In How We Think, American philosopher and progressive education reformer John Dewey (1859–1952) discusses how education can be a guard against common pitfalls like bias. Psicología del pensamiento is the Spanish translation.
“The Problems and Promises of Artificial Intelligence,” Research Resources Reporter, Gregory Freiherr, September 1979
This 1979 document offers an overview of how artificial intelligence (AI) works. It describes human-readable “white box” AI, a concept related to explainable AI and the opposite of “black box” AI, in which humans can see and understand only the inputs and outputs, not the “thinking” of the program. In addition to experience and training, white box AI requires the program to have a large body of knowledge from multiple sources, consistent logic, and the ability to explain its reasoning. Black box AI is often critiqued for its perceived susceptibility to bias, while white box AI raises concerns about human interference, precisely because its reasoning can be parsed by humans.
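The distinction can be illustrated with a short sketch. The rules and symptom names below are invented for illustration and are not drawn from the 1979 document; the point is only that a “white box” system can surface the exact rule behind each conclusion.

```python
# A minimal sketch of the "white box" idea: a rule-based classifier that
# reports every step of its reasoning, unlike a black box that exposes only
# inputs and outputs. The rules and symptoms here are hypothetical.

RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"fever", "rash"}, "possible measles"),
    ({"cough"}, "possible cold"),
]

def diagnose(symptoms):
    """Return (conclusion, explanation) so a human can audit the reasoning."""
    for required, conclusion in RULES:
        if required <= symptoms:  # all required symptoms observed
            explanation = (
                f"Matched rule: IF {sorted(required)} THEN '{conclusion}'. "
                f"Observed symptoms {sorted(symptoms)} satisfy the condition."
            )
            return conclusion, explanation
    return "no conclusion", "No rule's conditions were fully satisfied."

conclusion, why = diagnose({"fever", "cough", "fatigue"})
```

Because every output carries its justification, a human reviewer can inspect, and potentially alter, the chain of reasoning, which is both the appeal and the concern the document raises.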
“Technology and Society: A Conflict of Interest?”, Congressional Record #115, Cornelius E. Gallagher, April 1969
This congressional report from 1969 details some of the burgeoning fears around advances in technology that surfaced in the mid-20th century, including the loss of “our individuality, our dignity, and our privacy” to computers. The report touches on issues that are still concerns today, as well as those that seem outdated. At the close, echoing sentiments expressed by authors much earlier in history, the author notes, “The standard acronym is GIGO: Garbage in, garbage out. My purpose is to disabuse non-professionals of the notion that it really means, Garbage in, gospel out.”
Proceedings of the First Annual Artificial Intelligence in Medicine Workshop, Deirdre Sridharan ed., 1975
These workshop notes give a good overview of the issues that experts were grappling with as artificial intelligence (AI) grew in the medical field. Many of the discussions center around how to ensure accuracy, build the ability to apply knowledge, and teach what may not be known. Throughout the proceedings, many comparisons are made to how clinicians and AI programs are taught, and panelists discuss how to make AI expert systems like clinical decision support (CDS) acceptable to clinicians.
E-Mail from Joshua Lederberg to William J. McGuire, 1997
In this message, American geneticist and artificial intelligence (AI) trailblazer Joshua Lederberg (1925–2008) muses about the difference between human and machine learning, describing how his thinking on the matter has changed with experience. Lederberg helped develop DENDRAL, the first “expert system,” or computer program that simulates the judgement of human experts on a topic. DENDRAL analyzed chemicals as proficiently as human chemists.
“Artificial Intelligence,” Chimia, J.T. Clerc, P. Naegeli, & J. Seibl, 1973
Scientists may allow their beliefs or internalized associations to influence their work as they create and greenlight computer science algorithms. This letter to the editor of a chemistry journal discusses how human bias, errors, and small datasets can cause artificial intelligence programs to produce inaccurate results.
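The small-dataset problem the letter raises can be demonstrated with a toy example. The data and the majority-class “model” below are invented for illustration, not taken from the letter; they show how a handful of skewed samples can make a simple learner confidently wrong about a balanced reality.

```python
# A toy illustration of how a small, skewed dataset misleads a learner:
# a majority-class "model" trained on five samples drawn mostly from one
# class gets a balanced test distribution wrong half the time.

from collections import Counter

def train_majority(labels):
    """'Learn' by always predicting the most common training label."""
    return Counter(labels).most_common(1)[0][0]

# Suppose the real-world classes are balanced 50/50, but our tiny sample
# happened to catch four 'A's and one 'B'.
biased_sample = ["A", "A", "A", "A", "B"]
model = train_majority(biased_sample)  # predicts "A" for everything

# On a balanced test set, the model is wrong half the time.
test_labels = ["A", "B"] * 50
accuracy = sum(model == y for y in test_labels) / len(test_labels)
```

No amount of confidence in the training procedure rescues the result; the sample itself carried the error, which is the letter’s core caution about bias and small datasets.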
“How DENDRAL Was Conceived and Born,” Joshua Lederberg, from A History of Medical Informatics, Bruce I. Blum and Karen Duncan, eds., 1990
DENDRAL was the first artificial intelligence (AI) “expert system,” or computer program that simulates the judgement of a human expert in a topic. It was designed to analyze chemicals as proficiently as human chemists. On page 25, geneticist Joshua Lederberg (1925–2008) provides a diagram that shows how concepts are mapped in an expert system.
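The expert-system idea described here can be sketched as a small forward-chaining rule engine: encoded if-then rules fire repeatedly until no new conclusions emerge. The chemistry-flavored facts and rules below are hypothetical and are not DENDRAL’s actual knowledge base.

```python
# A minimal forward-chaining sketch of an "expert system": if-then rules
# applied to a set of known facts until a fixed point is reached.
# Facts and rules are invented for illustration.

RULES = [
    ({"has_oh_group", "has_carbon_chain"}, "is_alcohol"),
    ({"is_alcohol", "one_carbon"}, "is_methanol"),
]

def forward_chain(facts, rules):
    """Derive every conclusion reachable from the starting facts."""
    facts = set(facts)
    changed = True
    while changed:  # keep firing rules until no rule adds a new fact
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_oh_group", "has_carbon_chain", "one_carbon"}, RULES)
```

Note how the second rule can only fire after the first has added “is_alcohol” to the fact set; chains of such rules are one way concepts are mapped to one another in an expert system.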
Computer science in the 90s, Johns Hopkins University, 1989
Towards the end of the 20th century, there was increasing public interest in technology and its capabilities, and computer science advanced rapidly. This poster advertises a lecture series held on the cusp of the 1990s tech boom, in which experts, including former National Library of Medicine director Donald A.B. Lindberg, discussed artificial intelligence, computer science, and bioinformatics. Note the diagram referencing phrenology, a historical pseudoscience similar to physiognomy, in the bottom right of the poster.
“Our Pal, the Computer,” The Washington Post Outlook, Joshua Lederberg, January 1967
In the 1960s, as in the present, there were reservations about computers and automation. Here, artificial intelligence pioneer Joshua Lederberg (1925–2008) attempts to assuage fear around technology and highlights some of the ways computers were already being used to improve life at the time.
“Data Characterization for Reliable AI in Medicine,” Recent Trends Image Process Pattern Recognition: 5th Annual Conference, Sivaramakrishnan Rajaraman, Ghada Zamzmi, Feng Yang, Zhiyun Xue, and Sameer K. Antani, 2023
Computer science and artificial intelligence research at the National Library of Medicine (NLM) has advanced techniques to help predict and spot health conditions and make biomedical discoveries. In this paper, NLM researchers discuss methodology for developing machine learning algorithms that power medical computer vision. (Computer vision trains machines to make sense of visual media.)