This is the first installment in a 3-part series on Artificial Intelligence (AI). This installment provides an overview of the current technology. The next article discusses many different applications (good and bad) for AI. Read Part 2 here.
You have all heard of AI. It touches many facets of our daily lives: marketing, health care, manufacturing, stocks and commodities, matchmaking, scientific exploration, and self-driving cars, to name just a few. An algorithm can recommend a new style of shirt in your favorite color, or it can search for very subtle patterns buried deep in the human genome.
AI used to be discussed exclusively by PhDs, academics, and other specialists, but it is now far more mainstream. Several open source AI frameworks are freely available to any novice with an AWS account. A few YouTube videos can provide a good foundation for getting started, and voilà, a new AI application can be created.
So, what is AI? According to IBM, “Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.” Or, as the legendary Alan Turing put it back in 1950, the goal was to create “thinking machines.” Needless to say, these are complex and abstract concepts. There are many layers to AI, which means one open source program can differ vastly from another. It also means there are many peripheral topics worth considering alongside the fundamentals.
Starting at the Beginning
The concept of artificial intelligence dates back to the mid-twentieth century, which is practically prehistoric in Internet years. It has gone through several catchy name changes such as “expert systems” and “knowledge engineering.” AI has also included companion applications like inference engines, machine learning, deep neural networks, and natural language processors, all of which will be discussed below.
So, after 70 long years, why is AI becoming so popular now? The answer is simple: scale. The world now possesses a nearly limitless amount of data; another 2.5 quintillion bytes (that’s 18 zeros!) are created every day. No human could ever process even a tiny fraction of this information. Fortunately, computing power has evolved as well: thanks to cloud computing, CPU resources are virtually unlimited. This combination of massive amounts of data and vast computational power creates an ideal environment for artificial intelligence deployments in the 2020s.
The Queen’s Gambit
In order to define some of the AI processes, let’s consider a common application: a computerized chess game. This is a simple real-world example, but the terms and processes are pertinent for many other AI applications.
As a human-vs-machine chess game unfolds, the computer will analyze every position on the board. It will consider all possible moves, and then it will use its programming to select the best move. This deductive process of weighing all of the alternatives uses an inference engine, which is the essence of artificial intelligence.
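The "weigh every alternative" step can be illustrated with a minimal minimax search over a toy game tree. This is a hypothetical sketch, not a real chess engine: the tree, the move names, and the leaf scores are made up for illustration, and a real inference engine would evaluate millions of positions with a far richer scoring function.

```python
# Minimal sketch of an inference engine as minimax search (hypothetical toy,
# not a real chess engine). A node is either a dict of {move: child_node}
# or a number scoring a finished position from the computer's point of view.

def best_move(node, maximizing=True):
    """Return (best_score, move) by exhaustively weighing all alternatives."""
    if isinstance(node, (int, float)):       # leaf: position already scored
        return node, None
    results = []
    for move, child in node.items():
        score, _ = best_move(child, not maximizing)  # opponent moves next
        results.append((score, move))
    # The computer maximizes its score; the opponent is assumed to minimize it.
    return (max if maximizing else min)(results)

# Toy tree: two candidate moves, each with two possible replies.
tree = {
    "e4": {"c5": 1, "e5": 3},   # opponent will answer with the minimum (1)
    "d4": {"d5": 2, "Nf6": 4},  # opponent will answer with the minimum (2)
}
score, move = best_move(tree)   # picks "d4", the better worst case
```

The key idea matches the article's description: every alternative is examined, and the move with the best guaranteed outcome is selected.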
Now suppose that the exact same position on the board is repeated in another game a few months later. Again, the computer will try to deduce the best move. Assuming that the programming hasn’t changed, it should select the same move once again. However, if the computer uses machine learning, it will recognize the position from the past, and it will review the outcome from the previous game. This information will be an additional input used for selecting the optimal move.
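That extra input from past games can be sketched as a small outcome memory that blends the engine's raw evaluation with observed results. Everything here is a made-up illustration: the position key, the move, the 50/50 blending weight, and the scores are assumptions, not how any particular chess program actually learns.

```python
# Hypothetical sketch of the "machine learning" layer: remember how moves
# worked out in previously seen positions and fold that into move selection.
from collections import defaultdict

class OutcomeMemory:
    def __init__(self):
        # position -> move -> [wins, games_played]
        self.history = defaultdict(lambda: defaultdict(lambda: [0, 0]))

    def record(self, position, move, won):
        stats = self.history[position][move]
        stats[0] += int(won)
        stats[1] += 1

    def adjust(self, position, move, engine_score):
        """Blend the engine's score with the observed win rate, if any."""
        wins, games = self.history[position][move]
        if games == 0:
            return engine_score          # never seen before: trust the engine
        return 0.5 * engine_score + 0.5 * (wins / games)

memory = OutcomeMemory()
# A few months ago, the same position arose and Qh5 led to a loss.
memory.record("same-board-as-last-month", "Qh5", won=False)
# The engine alone rates Qh5 at 0.9, but the remembered defeat drags it down.
adjusted = memory.adjust("same-board-as-last-month", "Qh5", 0.9)
```

Without the memory, the program would pick the same move every time; with it, past outcomes reshape the choice, which is exactly the distinction the paragraph draws.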
Next, if the computer’s AI system supports deep learning with deep neural networks (DNNs), it can train on enormous databases of previously recorded chess games, using all of them collectively as inputs for selecting the best move in any particular position. Alternatively, the computer could play thousands of games of chess against itself in order to develop a huge knowledge base of successful moves.
Finally, another layer can be added to this AI system called natural language processing (NLP). This is a combination of linguistics, computer science, and artificial intelligence that provides a simple way to communicate with the system (think of Alexa or Siri). You can verbally tell the computer your chess moves, ask for hints, replay previous games, try different tactics, or even reset the whole game.
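As a loose illustration of that NLP layer, here is a toy command parser that maps plain-English requests to engine actions. This is a deliberate oversimplification: the phrases and patterns are invented for this sketch, and real assistants like Alexa or Siri rely on statistical and neural language models, not keyword matching.

```python
# Toy sketch of an NLP front end for the chess program (hypothetical:
# real NLP systems use trained language models, not regex matching).
import re

COMMANDS = [
    (re.compile(r"move (\w+) to (\w+)", re.I), "move"),   # e.g. "move knight to f3"
    (re.compile(r"\bhint\b", re.I), "hint"),              # ask for a suggestion
    (re.compile(r"reset|new game", re.I), "reset"),       # start over
]

def parse(utterance):
    """Return (action, arguments) for the first command pattern that matches."""
    for pattern, action in COMMANDS:
        match = pattern.search(utterance)
        if match:
            return action, match.groups()
    return "unknown", ()

action, args = parse("Please move knight to f3")
```

Even this crude matcher shows the division of labor: the language layer only translates the request, and the inference engine underneath still decides what the moves are worth.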
Artificial intelligence offers 21st-century users – scientists, gamers, doctors, and businesspeople alike – an incredible toolkit. Depending upon their needs and their programming skills, users can employ some or all of these tools; the applications are limitless. Some of them will be discussed in Part 2 – stay tuned!