The Ice Age Returns: The Second AI Winter and Silent Hibernation (1987-1993)
On October 19, 1987, Wall Street suffered “Black Monday,” the largest single-day percentage drop in the Dow’s history. That same year, an industry worth roughly half a billion dollars, the specialized AI hardware market, collapsed almost overnight. This was no coincidence, but a clear signal of the second AI winter’s arrival. Unlike the first winter, this harsh cold would last nearly a decade, yet beneath the ice, the seeds of AI’s future were quietly germinating.
Signs of Winter: The 1987 Technical Earthquake
1987 marked a dramatic shift in the AI industry from prosperity to recession. That year, general-purpose workstations from Sun Microsystems and other companies surpassed the performance of LISP machines specifically designed for AI, while costing only a fraction of the price.
The Demise of LISP Machines
LISP machines had once been the pride of the AI industry. These computers, designed specifically to run the LISP programming language, represented the pinnacle of AI hardware in the early 1980s. Machines produced by companies such as Symbolics, LMI (LISP Machines Inc.), and Texas Instruments featured advanced garbage collection, optimized symbol processing, and specialized AI development environments.
However, by 1987, desktop computers from Apple and IBM had become more powerful than the far more expensive LISP machines, and benchmark tests showed that general-purpose workstations now outperformed them as well. More importantly, these ordinary computers offered simpler, mass-market architectures on which LISP applications could run.
“There was no longer a good reason to buy LISP machines. An entire industry worth half a billion dollars was destroyed overnight.”
Symbolics’ financial results reflected the change clearly: revenue peaked in 1986 and then fell year after year from 1987 through 1989, with the company posting losses. The former AI hardware giant began its long decline.
Fatal Flaws of Expert Systems
The collapse of expert systems wasn’t just a hardware problem; deeper causes lay in the fundamental flaws of these systems themselves. Despite some notable successes in the early 1980s, expert systems quickly revealed fatal limitations.
The Knowledge Acquisition Bottleneck
The greatest challenge facing expert systems was the “knowledge acquisition bottleneck”—how to convert human expert knowledge into rules and facts that computers could process. This process proved far more difficult than initially anticipated. Experts often couldn’t clearly articulate their tacit knowledge, and knowledge engineers struggled to capture the subtleties of expert decision-making processes.
System Brittleness
More seriously, expert systems exhibited alarming “brittleness.” When given unusual inputs that fell outside the narrow scope of their rule bases, they could make absurd errors. They couldn’t learn, were difficult to update, and couldn’t explain their reasoning at a level of abstraction that ordinary users could understand.
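To see why hand-coded rule bases are brittle, consider a toy sketch in Python (invented for illustration, not modeled on any historical system): a few if-then rules diagnose equipment faults, and anything the rule author did not anticipate simply falls through.

```python
# Toy illustration of rule-base brittleness (not any historical expert system).
RULES = [
    # (condition over the observation dict, conclusion)
    (lambda o: o.get("temperature", 0) > 90 and o.get("fan") == "off",
     "overheating: fan failure"),
    (lambda o: o.get("voltage", 12) < 10,
     "power supply fault"),
    (lambda o: o.get("error_code") == "E42",
     "known firmware bug, apply patch"),
]

def diagnose(observation):
    """Fire the first matching rule; there is no learning and no fallback."""
    for condition, conclusion in RULES:
        if condition(observation):
            return conclusion
    return "no diagnosis"  # anything unanticipated falls through silently

print(diagnose({"temperature": 95, "fan": "off"}))     # overheating: fan failure
print(diagnose({"temperature": 95, "fan": "broken"}))  # no diagnosis -- the rule
# only recognizes the exact token "off", so a slightly different input defeats it
```

Every new situation requires a knowledge engineer to write another rule by hand, which is exactly the maintenance burden described below.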
Maintenance Cost Nightmare
Even the earliest success stories began showing problems. DEC’s XCON system, once hailed as a paradigm of expert system commercialization, ultimately proved too costly to maintain. These systems were difficult to update, couldn’t learn new knowledge, and when business requirements changed, often required extensive manual intervention to modify rule bases.
Funding Freeze and Policy Shifts
Technical problems and funding issues mutually reinforced each other, creating a vicious cycle. Government and corporate investment in AI began to decline sharply, further accelerating the AI winter’s arrival.
DARPA’s Change of Attitude
In 1987, Jack Schwartz took over leadership of DARPA’s IPTO (Information Processing Techniques Office). He was deeply skeptical of expert systems, dismissing them as “clever programming,” and cut AI funding “deeply and brutally,” gutting the Strategic Computing Initiative (SCI). Schwartz believed DARPA should focus on the technologies showing the greatest promise; in his words, DARPA should “surf” rather than “dog paddle,” and he was convinced that AI was not “the next wave.”
Failure of Japan’s Fifth Generation Computer Project
In 1992, the Japanese government announced that its ambitious Fifth Generation Computer Systems (FGCS) project had essentially failed. This project, which cost over $400 million, originally aimed to create intelligent computers based on logic programming and parallel processing, but ultimately failed to achieve its grand goals.
The Japanese government even expressed willingness to provide the software developed by the project free of charge to anyone, including foreign researchers. The project’s failure not only affected Japan but also dealt a major blow to global confidence in AI research.
Economic Environment Deterioration
The Black Monday stock market crash of 1987 intensified economic conservatism, leading to significant reductions in investment in risky technologies like AI. Companies began questioning the return on AI investments, and many AI companies faced funding shortages.
Underground Seeds: The Revival of Connectionism
However, amid the gloom of the AI winter, some researchers didn’t give up. Instead, they began exploring an approach completely different from symbolic AI: connectionism, the lineage of what we now call neural networks.
The Backpropagation Breakthrough
In 1986, just before the AI winter’s arrival, David Rumelhart, Geoffrey Hinton, and Ronald Williams published a landmark paper in Nature: “Learning representations by back-propagating errors.”
This paper showed how the backpropagation algorithm could be used to train multi-layer neural networks effectively by propagating error gradients backward from the output layer through the hidden layers. While the basic idea of backpropagation wasn’t entirely new, this work made it practical and demonstrated that hidden units could learn useful internal representations.
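The core idea can be sketched in a few lines of modern Python with NumPy. The network size, learning rate, and XOR task below are illustrative choices, not details from the 1986 paper; the point is the backward pass, which applies the chain rule layer by layer to turn the output error into weight gradients.

```python
# Minimal backpropagation sketch: a two-layer sigmoid network trained on XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 8))   # input -> hidden weights
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=1.0, size=(8, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: squared-error derivative, pushed back through each layer.
    d_out = (out - y) * out * (1 - out)   # gradient at the output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at the hidden pre-activation

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3))  # typically close to [[0], [1], [1], [0]]
```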
Hinton’s Persistence
Geoffrey Hinton had been “obsessed with the problem of how to learn connection strengths in deep neural networks” since beginning his research career in 1972.
LeCun’s Convolutional Breakthrough
Yann LeCun, after earning his PhD in 1987, became Hinton’s postdoctoral researcher at the University of Toronto. He subsequently joined AT&T Bell Labs, where he developed convolutional neural networks (CNNs) and applied them to handwriting recognition. This work ultimately led to the development of a bank check recognition system that processed over 10% of US checks in the late 1990s and early 2000s.
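The essential operation behind a CNN is the convolution: one small filter of shared weights slides across the whole image, so the same local pattern is detected wherever it appears. The following NumPy sketch uses an invented image and filter purely for illustration; LeNet-style networks stack many such layers with pooling and learn the filter weights rather than hand-coding them.

```python
# Minimal 2D convolution sketch (the "valid" cross-correlation used in CNN code).
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Same weights reused at every position: weight sharing.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((6, 6))
image[:, 3] = 1.0                                  # a vertical stroke, as in a digit
vertical_edge = np.array([[1.0, 0.0, -1.0]] * 3)   # hand-made vertical-edge detector

print(conv2d(image, vertical_edge))                # strong responses around column 3
```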
New Voices: Behavior-Based AI
Meanwhile, another entirely new AI paradigm was also emerging. Rodney Brooks at MIT proposed behavior-based AI and the Subsumption Architecture, challenging traditional AI’s basic assumptions.
The Revolution of Subsumption Architecture
Brooks’ Subsumption Architecture, proposed in 1986, represented a fundamental rethinking of traditional AI approaches. Unlike the traditional “sense-think-act” model, the Subsumption Architecture adopted a direct “sense-act” coupling approach.
This architecture didn’t rely on internal symbolic representations of the world, but achieved intelligent behavior through multiple parallel behavioral layers. Higher-level behaviors could “subsume” or inhibit lower-level behaviors, thus achieving complex intelligent performance.
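A rough sketch of the idea, written in Python rather than Brooks’ original robot-level implementation, with invented sensor fields and behavior names:

```python
# Toy subsumption-style controller: a higher-priority layer suppresses the
# output of a lower one; there is no world model and no planner.

def wander(sensors):
    # Lowest layer: always produces a default action.
    return "drive forward"

def avoid_obstacles(sensors):
    # Higher layer: intervenes only when its trigger condition holds.
    if sensors.get("obstacle_distance", float("inf")) < 0.3:
        return "turn left"
    return None  # no opinion; let lower layers act

# Layers ordered from highest priority to lowest.
LAYERS = [avoid_obstacles, wander]

def act(sensors):
    """Direct sense-act coupling: pick the highest-priority behavior that fires."""
    for behavior in LAYERS:
        command = behavior(sensors)
        if command is not None:
            return command  # this layer subsumes everything below it

print(act({"obstacle_distance": 1.5}))  # drive forward
print(act({"obstacle_distance": 0.2}))  # turn left
```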
Behavior-Based Robotics
Brooks’ approach achieved significant success in robotics. His insect-like robot Genghis demonstrated realistic gaits in 1988, proving that intelligent behavior could be achieved without complex central planning.
This approach’s success ultimately led to the founding of iRobot, which remains the world’s leading supplier of robotic vacuum cleaners; the company has shipped more than 16 million devices whose control software descends from the Subsumption Architecture.
Challenge to Traditional AI
Brooks’ work posed fundamental questions to traditional AI. He argued that intelligence didn’t require complex symbolic representation and reasoning, but could be achieved through the emergence of simple behaviors. This view was considered marginal and unserious at the time, but as the limitations of traditional AI methods became increasingly apparent, Brooks’ ideas began gaining more attention.
Reflection and Lessons from the Winter
The second AI winter exposed deep problems in both AI research and its commercialization, but it also provided valuable lessons for future development.
The Danger of Technology Hype
This winter clearly demonstrated the danger of disconnection between technology hype and actual capabilities. Expert systems were over-promoted as universal tools capable of solving various complex problems, but in reality, their capabilities fell far short of these expectations.
The Importance of Research Diversification
The development of connectionism and behavior-based AI during the winter proved the importance of maintaining research path diversity. When mainstream symbolic AI methods encountered bottlenecks, these “fringe” approaches provided new possibilities for AI’s future development.
The Value of Basic Research
The persistence of researchers like Hinton, LeCun, and Brooks during the winter proved the long-term value of basic research. Their work, though not recognized by the mainstream at the time, ultimately became the foundation of modern AI.
Life Beneath the Ice
Although the second AI winter was harsh, it wasn’t the end of AI development, but a necessary adjustment period. During this seemingly stagnant period, truly important technological breakthroughs were quietly occurring.
Laying Foundations for the Future
The refinement of backpropagation algorithms, the development of convolutional neural networks, the rise of behavior-based AI—these “underground” advances during the winter ultimately became the foundation for AI’s revival in the late 1990s and 2000s. When computing power became sufficient and data became abundant, these technologies would demonstrate amazing potential.
Cyclical Development Patterns
AI’s development history shows that technological progress often exhibits cyclical characteristics. Each winter is accompanied by corrections to the previous phase’s excessive optimism, while also accumulating necessary technical foundations and theoretical preparation for the next breakthrough.
Insights for Today
The experience of the second AI winter still offers important insights for today’s AI development:
Avoid Over-Hype: A reasonable balance must be maintained between technical capabilities and market expectations, avoiding the repetition of expert systems’ over-promising.
Value Basic Research: Even under commercial pressure, investment in basic research must be maintained, because today’s “useless” research may be tomorrow’s breakthrough foundation.
Maintain Path Diversity: All resources shouldn’t be invested in a single technical route; multiple approaches should be encouraged to explore in parallel.
Rationally View Setbacks: Setbacks and failures in technological development are normal; the key is learning from them and preparing for future breakthroughs.
The second AI winter tells us that real technological progress often occurs when it’s least expected. Beneath the ice-covered land, new life is quietly sprouting, waiting for spring’s arrival. And when spring truly comes, those seeds that persisted through the winter will bloom into the most brilliant flowers.