Artificial Intelligence

Buzzword busting part 2: Artificial Intelligence

We all use a lot of jargon when talking or writing about technology. However, research suggests that the majority of consumers don’t know what terms like AI, Blockchain, or IoT actually mean. In the second part of my buzzword busting blog series, I look at Artificial Intelligence and the impact of the hype surrounding it.

What is Artificial Intelligence?

There are many definitions of AI out there; too many, in fact. While Gartner defines AI as “applying advanced analysis and logic-based techniques, including machine learning, to interpret events, support and automate decisions, and take actions”, other companies like Amazon define it as “the field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, problem solving, and pattern recognition.”

Scratching your head, anyone?

It seems that the focus of artificial intelligence shifts depending on the organisation providing the definition. And when you google it, the first images that appear for AI are brains or robots… not helpful, is it?

So, like most of the general public, when you hear about AI, a Hollywood version of a robot probably comes to mind. It’s just simpler.

In reality, however, it isn’t that simple. But before we get into the nitty-gritty, let’s briefly delve into the history of Artificial Intelligence.

1950s – Although the concept of intelligent machines dates back to ancient Egyptian automatons and early myths of Greek robots, it was only in the 1950s that the world became familiarised with it. This was thanks to Alan Turing’s seminal paper “Computing Machinery and Intelligence” and the term “Artificial Intelligence” being coined by John McCarthy and Marvin Minsky at the Dartmouth Summer Research Project in 1956. At that conference, the Logic Theorist, the first computer program to emulate human problem-solving skills, developed by Allen Newell, Herbert Simon and Cliff Shaw, was presented.

This event was a catalyst to funding AI research.

1960s, 1970s and 1980s – In the decades that followed, AI flourished. As computers became more accessible and could store more information, machine learning algorithms improved. Shakey, the first general-purpose mobile robot, was also born, prompting Marvin Minsky to tell Life Magazine a year later that “from three to eight years we will have a machine with the general intelligence of an average human being.”

However, it was not as straightforward as he thought, and a mountain of obstacles got in the way, including computers being too weak and not processing information fast enough. Patience was at an all-time low and AI research was slowing down.

Luckily, it picked up again in the 1980s, thanks to an expanded algorithmic toolkit and a boost in funding, as “deep learning” techniques were popularised and expert systems, which mimicked the decision-making process of a human expert, were introduced.

1990s to present – While earlier attempts at AI might have given us insight into what models to use, they were held back by a lack of computational power. From the 1990s onwards, this lack of resources became less of an issue, especially as we gained access to on-demand compute power through the “Cloud”. This ushered in AI’s golden age.

During those 20 to 30 years, many of the AI milestones we know today were achieved. In 1997, world chess champion Garry Kasparov was defeated by IBM’s Deep Blue, a chess-playing computer program, while speech recognition software developed by Dragon Systems was implemented on Microsoft Windows. The year 2000 saw ASIMO, Honda’s AI humanoid robot, walk upright on two legs. In 2011, IBM’s Watson, a question-answering computer system, defeated two former champions on the TV game show Jeopardy!. And most recently, in 2016 and 2017, Google’s AlphaGo algorithm won against the world’s top champions in the ancient board game of Go.

Computer storage and processing power was no longer holding us back. Are we again entering the age of endless possibilities?

What does the history of AI tell us?

  1. Artificial Intelligence is more complex than it appears and involves different moving parts
  2. It requires a lot of resources and money to work and evolve
  3. No, AI doesn’t equal robots
  4. It’s likely we are too optimistic about the outlook of AI

Is it AI, or is it not?

A survey from London venture capital firm MMC published earlier this year reveals that 40 percent of European start-ups classified as “AI” companies don’t actually use artificial intelligence. This shows how hyped the technology has become and how many companies are capitalising on the buzzword. Indeed, MMC found that start-ups claiming to work in AI can attract between 15 and 50 percent more funding than others.

So how do you know if it really is AI?

In its most complete and general form, an AI might have all the cognitive capabilities of humans, including the ability to learn. However, a machine is only required to have a minute fraction of these skills to qualify as an AI. Then again, do we even have a clear definition of what intelligence is?

Like the “Cloud”, which could often just be called IaaS (Infrastructure-as-a-Service), maybe we should simply call AI what it practically is: applied statistical models running on an abundance of computational power?
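To make that last point concrete, here is a minimal sketch in Python. The dataset, the library (scikit-learn) and the churn-prediction framing are all my own illustrative choices, not anything from a specific product: it shows a logistic regression, a statistical technique that predates the Dartmouth conference, doing exactly the kind of “predict a label from data” work that is routinely marketed as AI.

```python
# A plain statistical model doing work that is often sold as "AI".
# Purely illustrative: synthetic data, default settings.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Fabricate a toy dataset: 1,000 "customers", 20 numeric features,
# and a binary label (say, "will churn" vs "won't churn").
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Logistic regression: applied statistics, not a humanoid robot.
model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)

print(f"Accuracy on held-out data: {model.score(X_test, y_test):.2f}")
```

Add more data, more features and more compute and you get a fair approximation of much of what is labelled “AI-powered” today, which is rather the point.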

Written by Florie Lhuillier

