Artificial Intelligence and Machine Learning, Part 1: Defining Artificial Intelligence

In this series, I am writing several blog articles about Artificial Intelligence. We finally have enough processing power and memory to accomplish great things. However, after 40 years of working in this field, I am amazed at what some companies are trying to sell as Artificial Intelligence and Machine Learning. The purpose of these articles is to help the reader sift through the hype and distinguish real AI/ML from marketing that lacks substance. This week's article concentrates on a simple but important aspect of the AI/ML problem: what is the REAL definition of AI and ML?

If you search the Internet for definitions of Artificial Intelligence (AI) and Machine Learning (ML), you will find almost as many different definitions as there are websites on the subject. Over 40 years ago, when I started writing AI programs, we had to use tools like LISP and, later, CLIPS (what a nightmare!). Computers were mostly mainframes that you shared with hundreds of other people running complex calculations, so your best bet was to write your program, punch a card deck, and wait until after hours to batch-process it overnight. The point is that the computers and software of that era simply lacked the processing speed, power, and memory needed for the kind of programming AI requires.

Most of the DoD-sponsored AI conferences I have attended lately have spent most of their time arguing about the definition of AI and ML. After AI lost its bling in the 1970s, when people realized it had been oversold, I changed the label for what I was doing to "Cognitive Processing." Other terms in use from that time include smart programming, intelligent programming, and automation. Today, "real AI" is said to be the search for a way to create an artificial human intelligence, while "narrow AI" is one of the terms used to describe the automation of one or a few functions such as navigation, mobility, or decision-making. To make things even worse, the Defense Advanced Research Projects Agency (DARPA) has chosen to muddy the waters further by defining AI in temporal tiers, or "waves." It is no wonder that DoD program managers searching for intelligent solutions for their programs are buying anything that is called AI. Rather than get into a subjective and ultimately useless argument over what the definition of AI and ML is, I prefer to just give you my simplest definitions and move on.

Humans possess real intelligence. When a non-human (synthetic) machine comes close to the breadth and depth of learning ability, knowledge storage, emotion, and morality of a real human being, that will be AI. Anything else, by my definition, is either intelligent programming or machine learning. Simple. Just because a machine can beat a human at chess does not mean it is artificially intelligent. Intelligence is far more complex than following learned rules to make the "best" decision to reach a goal. This kind of search-and-decide strategy is called means-ends analysis, and it is a relatively poor way to make decisions because it is limited to using the context at hand. (NOTE: I will talk about the importance of context in a later article in this series.)
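To make the limitation concrete, here is a minimal sketch of means-ends analysis: at each step, apply whichever operator most reduces the difference between the current state and the goal. The toy domain (integers, with "add one" and "double" operators) and the function names are my own invention for illustration; they are not from any particular system.

```python
# A minimal sketch of means-ends analysis. At each step the program greedily
# applies the operator that most reduces the "difference" (here, numeric
# distance) between the current state and the goal. The toy integer domain
# and operators are illustrative only.

def means_ends_analysis(start, goal, operators, max_steps=50):
    state = start
    path = [state]
    for _ in range(max_steps):
        if state == goal:
            return path
        # Consider the result of applying each operator to the current state.
        candidates = [op(state) for op in operators]
        best = min(candidates, key=lambda s: abs(goal - s))
        if abs(goal - best) >= abs(goal - state):
            return None  # no operator reduces the difference: the search is stuck
        state = best
        path.append(state)
    return None

ops = [lambda s: s + 1, lambda s: s * 2]
print(means_ends_analysis(2, 11, ops))  # → [2, 4, 8, 9, 10, 11]
```

Note that the greedy strategy can dead-end on problems where the difference must temporarily grow before it can shrink, which is exactly the kind of brittleness that distinguishes rule-following search from genuine intelligence.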

At this point, I would also like to make sure the reader understands that statistics is NOT AI. Well-known large companies have captured a large portion of the AI market by selling statistical analysis programs as AI. Although statistics may be able to describe the likelihood of an event happening, statistical programs are in no way AI and should not be sold as such.

Don't get me wrong: there are places where statistics, smart programming, and other software techniques are useful. Just don't buy them as AI or ML.

Real machine learning will be able to look at a previously unknown input and, based on long-term contextual memory built into the program, decide what that input is or means. This structure mimics the operation of the human brain, where memory "chunks" are stored in contextual structures referred to as schemas. Based on my more than 40 years of building machine learning (cognitive processing) algorithms for DoD applications, this is the path I have taken, because I believe that to build an artificial brain you must mimic the actual operation of the brain, not merely simulate it.
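One way to picture the schema idea is recall by resemblance: an unknown input is classified by finding the stored contextual "chunk" it most resembles. The schemas, the feature sets, and the overlap-based similarity measure below are all invented for illustration; they are a sketch of the concept, not the author's actual algorithms.

```python
# An illustrative sketch of schema-style recall: an unknown input is
# identified by finding the stored contextual "chunk" (schema) whose
# features it most resembles. The schemas and the similarity measure
# here are invented for illustration only.

def similarity(a, b):
    # Jaccard overlap: fraction of features the two feature sets share.
    a, b = set(a), set(b)
    return len(a & b) / max(len(a | b), 1)

def recall(schemas, unknown):
    # schemas: mapping of label -> set of contextual features.
    # Returns the label of the best-matching stored chunk.
    return max(schemas, key=lambda label: similarity(schemas[label], unknown))

schemas = {
    "bird":  {"wings", "feathers", "flies", "beak"},
    "plane": {"wings", "metal", "flies", "engine"},
}
print(recall(schemas, {"feathers", "flies", "small"}))  # → bird
```

The point of the sketch is that the decision comes from matching against stored context rather than from a fixed rule for every possible input, which is the contrast the paragraph above draws with rule-following programs.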

Andy Bevilacqua, Ph.D.
Cognitive Optical Psychophysicist
