The History of Artificial Intelligence | Its Rise to Consumer Market

In our past three posts on Artificial Intelligence, we covered the basics of AI, machine learning, and neural networks, and what they can do. AI has changed the world in ways few of us could have predicted, but to fully understand the technology and the ethical concerns it raises, we must look back. Today, we cover the history of Artificial Intelligence in detail.

The History of Artificial Intelligence

Stories of artificial intelligence, manmade replicas of humans, and conscious machines have recurred throughout much of human history. In ancient Greek mythology, Hephaestus, the god of the forge, built robot-like servants of gold and bronze.[10]

Hephaestus’ Mythical Creation, Talos

In ancient Egypt, pharaohs had statues of gods constructed that were personified and, in some cases, believed to be alive.[2][5]

Throughout the Middle Ages, it was believed that placing a piece of paper bearing one of God’s names inside the mouth of a clay statue would animate it, turning it into a golem.[3]

A Golem from Jewish Mythology

As time progressed, so did people and the way we think. Philosophers like Aristotle, René Descartes, Ramon Llull, and Thomas Bayes helped forge our understanding of mathematics and the human thought process. By simplifying how we perceive the world around us, these philosophers helped lay the foundation for modern AI concepts like knowledge representation and reasoning, which is how we convey what we call ‘common knowledge’ to a machine.

Then, in the early 19th century, Mary Shelley had a get-together with a few of her friends and came up with the story of Frankenstein; or, The Modern Prometheus. Prometheus, of course, is the Titan who created humans in Greek mythology. The tale riveted readers and left them pondering whether a human could create another living being.

Frankenstein, Or The Modern Prometheus

Timeline of AI

In 1836, Cambridge University mathematician Charles Babbage, together with Augusta Ada Byron (better known as Ada Lovelace), produced the design for the first programmable machine.

Charles Babbage in front of his Calculating Engine, the first programmable machine.


Later, in the 1920s, Jan Łukasiewicz and Alfred Tarski began their studies of infinite-valued logic, a forerunner of what is now known as fuzzy logic.[7]

In 1945, Princeton mathematician John von Neumann described the architecture for the stored-program computer. This architecture gave computers the ability to hold both programs and data in the same memory.

The Stored Program Computer

In 1943, Warren McCulloch and Walter Pitts laid the foundation for neural networks by showing how any computable function could be evaluated by a network of simulated neurons.

Then, in 1950, wartime codebreaker Alan Turing proposed the Turing Test. In it, a human interrogator exchanges questions and answers with an unseen respondent; a computer passes the test if it can convince the interrogator that it is a real human rather than a machine.

Alan Turing on the Turing Test

A year later, Christopher Strachey used the Ferranti Mark 1 machine at the University of Manchester to write one of the first game-playing programs: a checkers game pitting human against machine. Dietrich Prinz then wrote a chess program, and Arthur Samuel greatly improved on Strachey’s checkers player. As the years went by and these programs were built upon, game AI became a symbol of the field’s progress.[8]
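At their core, game-playing programs like these look ahead through possible moves and pick the one that leads to the best reachable position. That idea was later formalized as minimax search. The sketch below is purely illustrative, not Strachey’s or Samuel’s actual code, and the helper names (`moves`, `apply_move`, `evaluate`) are hypothetical hooks a real game would supply:

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Best achievable score for the player to move, searching `depth` plies.

    `moves(state)` lists the legal moves, `apply_move(state, m)` returns the
    resulting position, and `evaluate(state)` scores a position from the
    maximizing player's point of view.
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)          # leaf: just score the position
    if maximizing:
        return max(minimax(apply_move(state, m), depth - 1, False,
                           moves, apply_move, evaluate) for m in legal)
    return min(minimax(apply_move(state, m), depth - 1, True,
                       moves, apply_move, evaluate) for m in legal)


# Toy demo: players alternately add 1 or 2 to a counter starting at 0;
# the maximizer wants the final counter high, the opponent wants it low.
best = minimax(0, 2, True,
               moves=lambda s: [1, 2],
               apply_move=lambda s, m: s + m,
               evaluate=lambda s: s)
print(best)  # 3: maximizer adds 2, minimizer then adds 1
```

A checkers or chess program plugs in real move generation and a positional evaluation function in place of these toy lambdas; Deep Blue, mentioned later in this timeline, was built on a heavily optimized descendant of this same search idea.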

Meanwhile, depictions of artificial beings like the Tin Man from The Wizard of Oz filled viewers’ screens at the same time scientists and engineers were attempting what had long seemed impossible.

1956 Dartmouth Summer Conference

In 1956, a summer research workshop was held at Dartmouth College. Attended by some of the greatest minds of the time, including Marvin Minsky, Oliver Selfridge, and John McCarthy, the conference ignited a spark in the technology field. McCarthy is credited with coining the term Artificial Intelligence there.

Attendees of the 1956 Dartmouth Summer Conference

After that, in the late 1950s, Allen Newell, John C. Shaw, and Herbert A. Simon created the General Problem Solver, a software program intended to work as a universal problem solver. It was the first program to separate its knowledge of problems from its strategy for solving them. This research later led to the Soar architecture for artificial intelligence, a computational model of the structure of human problem solving.[6]

In 1965, Azerbaijani-American scientist Lotfi Zadeh introduced fuzzy sets, the foundation of fuzzy logic.[11]
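Zadeh’s insight was that set membership can be a matter of degree rather than a yes/no question. A minimal sketch of a fuzzy membership function, with illustrative thresholds that are our own invention rather than anything from Zadeh’s paper:

```python
def warm_membership(temp_c: float) -> float:
    """Degree (0.0 to 1.0) to which a temperature belongs to the fuzzy set 'warm'.

    A classical set would force a hard cutoff; a fuzzy set lets 20 C be
    'halfway warm' instead of strictly in or out.
    """
    if temp_c <= 15:
        return 0.0                  # definitely not warm
    if temp_c >= 25:
        return 1.0                  # fully warm
    return (temp_c - 15) / 10       # linear ramp between the two


print(warm_membership(15))  # 0.0 -- not a member at all
print(warm_membership(20))  # 0.5 -- partial membership
print(warm_membership(30))  # 1.0 -- full membership
```

Fuzzy logic builds logical operators on top of such graded memberships, which is what makes it useful for reasoning with vague, real-world categories.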

Lotfi Zadeh, founder of Fuzzy Sets

Despite the massive progress made since the 1940s, researchers hit a wall: the machines they were producing had far too little memory and processing power to approach the capabilities of the human brain.

AI Winter

Then, in 1973, the US and British governments cut funding for undirected AI research, sparking what is now known as the “AI Winter”.

Eventually, summer broke through the cold of winter as a new line of AI research, “expert systems”, emerged in the 1980s. These systems used their limited memory to focus on a single subject, simulating a human expert in that domain. Funding flooded back to AI researchers, particularly from the Japanese government.[5]

The mid-1980s also saw the start of the “Cyc” project, the first attempt to build a database of the common knowledge and facts an average person would know. It was planned from the outset as a very long-term effort.

Soon enough, in the late 80s, AI hit another winter. Funding setbacks halted progress on expert systems as consumers turned instead to desktop computers from IBM and Apple.[5]

By the end of 1993, over 300 companies had closed their doors, ending the first wave of commercial AI production.[5]

On May 11, 1997, world chess champion Garry Kasparov was beaten by Deep Blue, an AI chess-playing system able to evaluate 200,000,000 positions per second. The match was broadcast online and reached 74 million viewers.

ACM Chess Challenge, Garry Kasparov vs. Deep Blue


Throughout the early 2000s, AI gained some more traction as processing power for computers had increased exponentially.

In 2005, a robot from Stanford won the DARPA Grand Challenge by driving 131 miles autonomously along an unrehearsed desert trail.[1]

2005 DARPA Grand Challenge Winner
2005 DARPA Grand Challenge Winner

Then, in 2007, a CMU team won the DARPA Urban Challenge by autonomously driving 55 miles through urban terrain while obeying traffic laws.[1]

In February 2011, IBM’s Watson beat two of the greatest Jeopardy! champions by a landslide in an exhibition match.[4]

By 2016, the AI consumer industry and related products were worth an estimated 8 billion dollars.[9]

AI Used in Industry

Now, AI is used in almost every industry, including healthcare, business, education, finance, law, manufacturing, banking, transportation, and security, to name a few. In our next post, we will discuss these industries and how they use AI, as well as the ethics of AI.


As we reflect on the history of artificial intelligence, it’s evident that AI’s journey from ancient mythology to a cornerstone of modern society is a testament to human ingenuity and perseverance. Its evolution, marked by groundbreaking achievements and formidable setbacks alike, underscores the importance of continuous innovation and of ethical consideration. As AI continues to shape our future, understanding its history is crucial to leveraging its potential responsibly and effectively. The history of AI is not just a tale of technology but a mirror of our quest to extend the boundaries of what’s possible while ensuring AI benefits humanity.

Key Points:

  • The origins of AI can be traced back to ancient mythologies.
  • Key milestones include the invention of programmable machines, development of neural networks, and AI’s application in games and industries.
  • Figures like Charles Babbage, Alan Turing, and companies like IBM played pivotal roles.
  • AI Winters were periods of reduced funding and interest, but innovation eventually resumed.
  • AI’s impact is now seen across numerous sectors, demonstrating its versatility and potential.

Keywords List:

  • History of AI
  • Artificial Intelligence development
  • AI milestones
  • AI in gaming and industries
  • AI Winters
  • Impact of AI


  1. “DARPA Grand Challenge – home page”
“Ever Wonder Why Egyptian Sculptures Are Missing Their Noses? The Answer Will Surprise You”. Artnet News.
  3. “GOLEM – JewishEncyclopedia.com”. www.jewishencyclopedia.com. Retrieved 15 March 2020.
  4. Markoff, John (16 February 2011). “On ‘Jeopardy!’ Watson Win Is All but Trivial”. The New York Times.
  5. Newquist, HP (1994), The Brain Makers: Genius, Ego, And Greed in the Quest For Machines That Think, New York: Macmillan/SAMS, ISBN 978-0-9885937-1-8
  6. Norvig, Peter (1992). Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp. San Francisco, California: Morgan Kaufmann. pp. 109–149. ISBN 978-1-55860-191-8
Pelletier, Francis Jeffry (2000). “Review of Metamathematics of fuzzy logics” (PDF). The Bulletin of Symbolic Logic. 6 (3): 342–346. doi:10.2307/421060. JSTOR 421060. Archived (PDF) from the original on 2016-03-03.
Schaeffer, Jonathan. One Jump Ahead: Challenging Human Supremacy in Checkers. Springer, 1997/2009. ISBN 978-0-387-76575-4. Chapter 6.
  9. Steve Lohr (17 October 2016), “IBM Is Counting on Its Bet on Watson, and Paying Big Money for It”, New York Times
“The World’s First Robot: Talos”. Wonders & Marvels (wondersandmarvels.com).
  11. Zadeh, L.A. (1965). “Fuzzy sets”. Information and Control. 8 (3): 338–353. doi:10.1016/s0019-9958(65)90241-x.

Master Modeling, Simulation, and Training (MS&T)

Want to learn about Simulation, Artificial Intelligence (AI), or Law and Corporate Practices? Check out our Training Courses!

Learn more about how AVT Simulation helps change the simulation training industry with our products and services.

Founded in 1998 as Applied Visual Technology Inc., AVT has been building modeling and simulation expertise through engineering services ever since, drawing on a founder with over 30 years of military MS&T experience in aviation applications. Everyone at AVT specializes in making old training systems new again and making new ones for less. For over 20 years, AVT has served Air Force, Army, Navy, and Marine customers with the highest quality of service and solutions, and its highly specialized staff of engineers has included some of the top leaders in the simulation industry. Our dedicated team provides specialized solutions for customers with complex problems.