The History of AI

Artificial Intelligence (AI) may seem like a relatively new term and technology to many, but its roots date back several decades. In fact, the concept can be traced back to the early days of modern computing in the 1950s, with its mathematical and theoretical foundations going back even further.

With recent advancements, AI appears to be expanding at an unprecedented speed. Understanding the history of AI is crucial to anticipating its future.

Pre-20th Century

AI’s history surprisingly dates back before the invention of computers. Records from as early as 400 BCE show ancient philosophers contemplating the possibility of creating non-human - in particular mechanical - life. ‘Automatons’, mechanical devices that could move without human assistance, were developed during this period.(1) The earliest recorded automaton, a mechanical pigeon, was created around 400 BCE by the mathematician Archytas.(2)

The Beginning of AI

Famously, most people date AI back to 1944, when Alan Turing and Donald Michie, both working at Bletchley Park, discussed the possibility of building intelligent computer programmes.(3)

Alan Turing continued to explore these ideas. His 1950 paper, Computing Machinery and Intelligence, which introduced what is now known as the Turing Test, discussed how to build intelligent machines and how to test their intelligence.(4) Turing was limited in what he could produce and test at the time by some significant barriers. Computers could not store commands, meaning they could be told what to do but had no way to remember what they had done. Computing was also very expensive - leasing a computer could cost up to $200,000 a month - so only the most prestigious universities and companies could afford to invest this kind of money.(4)

The term ‘Artificial Intelligence’ was coined by John McCarthy in 1956 during the summer Dartmouth Conference, marking the formal beginning of AI as a field of research.(5) Although the term was first shared publicly in 1956, McCarthy had picked the name the year before, when he approached the Rockefeller Foundation to request funding for the conference. His proposal stated that a 10-man, 2-month study of AI would attempt to “find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves”.(6) He proposed that such advances could be made over the summer with a sufficient group of scientists working on it together.(6)

Notable early examples of AI include Arthur Samuel’s checkers programme (1952), which could learn the game and play independently,(1) and the Logic Theorist (1955), developed by Herbert Simon, Allen Newell, and Cliff Shaw, which could replicate human problem-solving skills.(3)

The Rise of AI

From the late 1950s to early 1970s, advances in computer technology allowed AI to flourish.

This period saw the creation of ELIZA (1966), the first chatbot, which used natural language processing (NLP) to simulate conversation.(7)
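ELIZA worked by matching user input against scripted patterns and reflecting it back as a question. The sketch below is a drastic simplification for illustration - the rules and phrasings are invented here, not taken from Joseph Weizenbaum’s original DOCTOR script:

```python
import re

# Toy ELIZA-style rule table: each entry pairs an input pattern with a
# response template. These example rules are hypothetical, not Weizenbaum's.
RULES = [
    (re.compile(r"i am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def respond(text):
    """Return a canned reflection of the first rule that matches."""
    for pattern, template in RULES:
        match = pattern.match(text.strip())
        if match:
            return template.format(match.group(1).rstrip("."))
    return FALLBACK

print(respond("I am feeling tired"))  # Why do you say you are feeling tired?
print(respond("Computers scare me"))  # Please go on.
```

Despite having no understanding of the conversation, this kind of pattern-reflection was convincing enough that some of ELIZA’s users attributed real empathy to it.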

Other significant projects included:

The Perceptron (1957)

  • Developed by American psychologist Frank Rosenblatt, the Perceptron was an artificial neural network that could learn to recognise patterns using a two-layer learning network.(3)
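Rosenblatt’s learning rule can be illustrated with a short modern sketch (a simplification, not his original hardware implementation): the perceptron nudges its weights whenever it misclassifies an example, until it finds a boundary that separates the patterns. Here it is trained on the logical AND function as a stand-in task:

```python
# Minimal perceptron sketch, learning the logical AND function.
# A simplified modern illustration of Rosenblatt's learning rule.

def train_perceptron(samples, epochs=10, lr=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: fire (1) if the weighted sum exceeds 0.
            output = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
            error = target - output
            # Nudge the weights towards the correct answer.
            weights[0] += lr * error * x1
            weights[1] += lr * error * x2
            bias += lr * error
    return weights, bias

def predict(weights, bias, x1, x2):
    return 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0

# AND truth table as training data.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

A single perceptron can only learn patterns that are linearly separable - a limitation, highlighted by Minsky and Papert, that contributed to cooling enthusiasm for neural networks.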

LISP (1958)

  • John McCarthy developed List Processing (LISP), a tool which remains popular in AI research.(1)

Unimate (1961)

  • The first industrial robot, Unimate worked on General Motors’ assembly line, transporting die castings and welding parts onto cars.(1)

Shakey the Robot (1966-1972)

  • Developed by the Artificial Intelligence Center at the Stanford Research Institute, Shakey was a mobile robot built with sensors and a camera, capable of basic navigation and problem-solving.(7)

  • Although impressive at the time, Shakey fell short of expectations. Obstacles and moving objects could confuse the machine, slowing it drastically and forcing it to remap its surroundings, which could take hours.(8)

AI Winter

Despite initial enthusiasm, the mid-1970s brought a period known as the AI Winter, characterised by reduced funding and interest due to unmet expectations.(8) A critical 1973 report by Professor Sir James Lighthill accelerated the funding cuts by highlighting the gap between AI’s promises and reality.(7,8)

The AI Boom

Historians have named 1981 as the end of the AI Winter. This period saw AI’s commercial potential recognised, leading to more investment opportunities.(8) Pioneers like John Hopfield and David Rumelhart advanced deep learning techniques which enabled computers to learn from experience.(4) Another major milestone was Edward Feigenbaum’s development of expert systems, which could emulate human decision-making by encoding knowledge from field experts. These systems became valuable tools across various industries.(4)

Before the resurgence of AI in the 1980s, a significant milestone was the founding of the Association for the Advancement of Artificial Intelligence (AAAI).(1) The organisation is “dedicated to advancing the scientific understanding of the mechanisms underlying thought and intelligent behaviour and their embodiment in machines”.(7) Their first conference was held in 1980 at Stanford University.(1)

In 1981, the first commercial expert system, XCON, began operation at the Digital Equipment Corporation. This system’s role was to configure new computer system orders.(1) It was highly effective, reportedly saving the company $40 million a year.(8)

Further advancements that emerged from the AI boom include:

The Japanese Government Fifth Generation Computer Project (1980s)

  • As part of its Fifth Generation Computer Project (FGCP), the Japanese Government invested approximately $850 million in AI projects. The aim was to transform computer processing, enabling abilities such as language translation, human-like conversation and reasoning.(1) Unfortunately, many of these goals were not achieved.(4)

The First Self-Driving Car (1986)

  • A team led by Ernst Dickmanns at Bundeswehr University in Munich developed the first driverless car. The Mercedes van was equipped with sensors and a computer system, allowing it to navigate roads (without obstacles or passengers) at speeds of up to 55 mph.(1,7)

Alacrity (1987)

  • Alactrous Inc. launched Alacrity, the first strategy managerial advisory system.(1)

Jabberwacky (1988)

  • Jabberwacky, created by computer programmer Rollo Carpenter, was a chatbot designed to engage humans in interesting conversations.(1)

The 1990s

Although funding dipped again in the 1990s, AI continued to advance.

Significant breakthroughs included:

IBM’s Deep Blue (1997)

  • During a highly publicised 1997 match, world chess champion Garry Kasparov was defeated by IBM’s chess-playing computer programme Deep Blue. This was a huge advancement in AI, demonstrating its power in decision-making.(4)

  • Deep Blue could process information far faster than a human brain, giving it the power to assess 200 million potential chess moves in a single second.(7)
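The core idea behind that kind of decision-making is game-tree search: look ahead through possible moves and pick the one that leaves the opponent worst off. Deep Blue’s search was vastly more sophisticated (alpha-beta pruning, handcrafted evaluation functions, custom hardware), but the principle can be sketched on a toy game - here Nim, where players take 1 to 3 stones and whoever takes the last stone wins:

```python
# Toy game-tree search on Nim (take 1-3 stones; taking the last stone wins).
# An illustration of the look-ahead principle only, not Deep Blue's algorithm.

def best_move(stones):
    """Return (move, wins): a stone count to take, and whether the
    player to move can force a win from this position."""
    for take in (1, 2, 3):
        if take > stones:
            break
        if take == stones:
            return take, True  # taking the last stone wins outright
        # If the opponent loses from the resulting position, this move wins.
        _, opponent_wins = best_move(stones - take)
        if not opponent_wins:
            return take, True
    return 1, False  # every move leads to a winning position for the opponent

print(best_move(10))  # (2, True): taking 2 leaves 8, a losing position
```

Chess is far too large to search exhaustively like this, which is why Deep Blue combined deep but pruned search with an evaluation function and raw hardware speed.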

Dragon Systems Speech Recognition Software (1997)

  • Dragon Systems’ speech recognition software was incorporated into Windows, marking a major milestone in AI.(4)

Such breakthroughs, especially that of Dragon Systems, began the shift of AI implementation into more companies.(5) More and more companies were beginning to see the potential of AI and to take advantage of it.

The Early 2000s

The early 2000s saw AI become increasingly integrated into everyday life thanks to enhanced computational power and the growing availability of data.

Key developments included:

Kismet (2000)

  • Cynthia Breazeal developed Kismet, a robot capable of recognising and displaying human emotions.(1,7)

Roomba (2002)

  • The first commercial autonomous robot vacuum cleaner, Roomba, was released.(8) Equipped with simple sensors and processing, Roomba was able to clean homes effectively.(8)

Mars Rovers (2003)

  • NASA deployed AI-equipped rovers to Mars. These rovers were able to traverse and explore the Martian surface autonomously.(1,5)

Social Media AI (2006)

  • Platforms such as Twitter, Facebook and Netflix began using AI to deliver more personalised advertising and enhance user experience.(1)

Xbox 360 Kinect (2010)

  • AI advancements reached gaming with Microsoft’s launch of the Xbox 360 Kinect. This device could understand and track body movements, translating them into game actions.(1)

Watson (2011)

  • Following the success of Deep Blue, IBM developed Watson, a computer that used natural language processing to answer questions accurately.(5) Watson’s abilities were showcased when it beat two former Jeopardy! champions on television.(5)

Siri and Alexa (2011-2014)

  • Now household names, Apple’s Siri and Amazon’s Alexa were released between 2011 and 2014.(5) Both are command-and-control systems, and they drastically changed how people interact with technology.

Geoffrey Hinton’s Neural Networks (2012)

  • Geoffrey Hinton, a computer scientist, made significant contributions to the development of neural networks, which enable AI systems to process data and make predictions.(5) Although Hinton began exploring neural networks in the 1970s, it was not until 2012 that he and his graduate students published ground-breaking findings.(5) Hinton's work has been instrumental in advancing AI, and he joined Google to further his research. He resigned in 2023 to openly discuss the potential dangers of AI.(5)

Today

In recent years AI has become a central topic of discussion and advancements in technology have significantly expanded AI’s potential.

The recent surge in societal interest in AI is largely due to the development of generative AI. When people talk about AI today, they are often referring to generative AI. 2023 and 2024 have been pivotal for generative AI with the widespread availability of tools such as OpenAI’s ChatGPT.

AI is no longer a futuristic concept; it has transformed our lives in many ways, often blending into our daily routines - perhaps without many of us noticing. As with any technological advancement, there can be apprehension about its impact. It is therefore vital to understand more about AI and how to use it responsibly.

At the Scottish AI Alliance we recognise the potential and risks of AI. We aim to educate the people of Scotland and beyond on the trustworthy, ethical, and inclusive use of AI. Our Playbook and Living with AI course are designed to equip you with the knowledge and skills to navigate the AI landscape confidently.

The Future

Understanding AI’s history helps us to anticipate its future trajectory. AI’s future will likely involve further integration into all sectors, transforming the way we work and creating new opportunities.

We should, however, remain aware of the risks around AI, and advocate for policies that ensure AI is developed, promoted, and used ethically and inclusively. This will allow for this technology to maximise the benefits for the greater good of everyone.

References

  1. https://www.tableau.com/data-insights/ai/history

  2. https://www.eejournal.com/fresh_bytes/the-first-robot-created-in-400-bce-was-a-steam-powered-pigeon/

  3. https://oxford.shorthandstories.com/ai-a-history/index.html

  4. https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/

  5. https://technologyquotient.freshfields.com/post/102ip8m/a-very-brief-history-of-artificial-intelligence

  6. http://raysolomonoff.com/dartmouth/boxa/dart564props.pdf

  7. https://www.coursera.org/articles/history-of-ai

  8. https://www.bbc.co.uk/teach/articles/zh77cqt
