Not long ago, I published an article titled “Is AI the Beginning of the Terminator Timeline?” The piece was intended to serve as the introduction to a new long-term series – about one article every two weeks or so – titled “The Future of Humanity in an AI World.”
If there’s one thing I’ve learned over the years, it’s that you cannot learn a subject if you don’t understand its language. For example, how can you be expected to read and truly understand Caesar’s Commentaries if you don’t know Latin?
Accounting is another example. It has its own language, too – debits, credits, assets, liabilities, etc. Without understanding those concepts, you cannot be expected to comprehend a set of financial statements.
When it comes to Artificial Intelligence (AI), many of its key concepts go beyond familiar words like “server,” “internet,” and “computer.” That’s why, as part of my new series, I think we all need to share a common language. This second installment may seem a little short, but it’s essential because it focuses on the “Nomenclature of Artificial Intelligence.”
In doing so, I hope it brings us all to a common basic understanding of the key concepts that make up AI.
AI, believe it or not, doesn’t have a universally accepted definition. While many people focus on software and hardware when looking for one, AI is better understood as a spectrum of intelligence than as a single defining characteristic.
Even though we may not have an all-inclusive definition, a clear and concise description of AI was set forth in Nils J. Nilsson’s “The Quest for Artificial Intelligence: A History of Ideas and Achievements.”1 Nilsson wrote, “Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.”
Algorithmic game theory and computational social choice focus on the economic and social computing dimensions of AI. One example is how AI systems handle potential misalignments of incentives among self-interested humans or human entities (like businesses) and the AI-based automated agents representing those human entities.
Big Data refers to extremely large data sets that are computationally analyzed to reveal patterns, trends and unique associations related to human interactions and behaviors.
Collaborative systems research studies the models and algorithms needed to develop autonomous systems that can work collaboratively with other systems as well as with humans.
Computer vision is the most prominent form of machine perception currently available. This sub-area of AI has been significantly transformed by “deep learning,” and many computers can now perform some vision tasks better than humans. Significant research is underway to further advance computer vision in areas such as automatic image and video captioning.
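To make the idea of machine perception a little more concrete, here is a minimal sketch of the kind of operation that underlies modern computer vision: sliding a small filter over a grid of pixel brightness values to detect an edge. The tiny “image” and filter below are purely illustrative; deep learning systems learn stacks of such filters automatically rather than having them hand-coded.

```python
# A toy 4x4 grayscale "image": dark on the left, bright on the right.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

# A classic vertical-edge filter: it responds strongly wherever
# brightness changes from left to right.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(img, k):
    """Slide the 3x3 kernel over the image, summing the products."""
    out = []
    for r in range(len(img) - 2):
        row = []
        for c in range(len(img[0]) - 2):
            total = sum(img[r + i][c + j] * k[i][j]
                        for i in range(3) for j in range(3))
            row.append(total)
        out.append(row)
    return out

print(convolve(image, kernel))  # large values mark the vertical edge
```

The large positive numbers in the result show where the filter “sees” the dark-to-bright boundary – the same basic signal a vision system builds upon, layer by layer, to recognize objects.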
Crowd-sourcing and human computation concerns augmenting computer systems by making automated calls to humans with relevant expertise, so that a person can help the computer solve problems it cannot solve alone.
Deep learning is a form of machine learning that has facilitated object recognition in images and video, along with activity recognition. Research is underway to extend it to other areas of perception, including audio, speech and natural language processing.
The Internet of Things (IoT) encompasses concepts related to an array of devices, many of which are used in our everyday lives. Things like your appliances, home, office building, cameras and vehicles may all be (or soon will be) connected via the internet to permit the collection and sharing of information for intelligent purposes.
Large-scale machine learning refers to the design of learning algorithms aimed at interpreting, understanding and working with extremely large “Big Data” sets.
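One common strategy behind large-scale learning can be sketched in a few lines: instead of loading an entire data set into memory, the model sees one example at a time from a stream and nudges its parameters after each one. The example below is a hypothetical illustration using a simulated stream, not any particular production system.

```python
import random

random.seed(0)

def data_stream(n):
    """Simulate a stream of (x, y) pairs where y = 3x + 4, plus noise."""
    for _ in range(n):
        x = random.uniform(-1, 1)
        y = 3 * x + 4 + random.gauss(0, 0.1)
        yield x, y

w, b = 0.0, 0.0   # model parameters, learned incrementally
lr = 0.05         # learning rate (step size)

# Process the stream one example at a time -- memory use stays constant
# no matter how large the data set grows.
for x, y in data_stream(20000):
    error = (w * x + b) - y   # prediction error on this one example
    w -= lr * error * x       # stochastic gradient step
    b -= lr * error

print(w, b)  # parameters drift toward the true values, 3 and 4
```

The point of the sketch is the shape of the loop, not the arithmetic: each example is seen once and discarded, which is what lets this style of algorithm scale to data sets far too large to hold at once.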
Natural Language Processing (NLP) is sometimes referred to as, or coupled with, speech recognition. This form of AI is developing quickly for widely spoken languages with large data sets. At the same time, developers are refining NLP systems so that they can interact directly with people through simple dialog rather than specifically stylized requests. As part of this emerging form of AI, multi-lingual forms of NLP are being designed so that systems can interact with anyone speaking any language on the planet.
Neuromorphic computing technologies seek to mimic biological neural networks to improve the efficiency of computing hardware and the robustness of AI systems. Common to this technology is the elimination of the traditional approach of using separate modules for input/output, instruction processing, and memory, combining those processes instead into a single AI interface.
Reinforcement learning shifts the focus of machine learning from pattern recognition to experience-driven decision-making. This technology promises to bring AI into the real world and, in doing so, impact millions of lives. Strides continue to be made in its practical implementation as AI broadens into real-world environments.
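“Experience-driven decision-making” can be illustrated with a toy sketch of one classic reinforcement learning method, tabular Q-learning. The setup below is entirely made up for illustration: a five-cell corridor where an agent starts at one end, can step left or right, and earns a reward only upon reaching the far end. Through repeated trial and error, it learns which move is best in every cell.

```python
import random

random.seed(1)

N_STATES = 5                 # cells 0..4; cell 4 pays a reward of 1
ACTIONS = [-1, +1]           # step left, step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Occasionally explore at random; otherwise exploit current estimates.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        # Update the value estimate from this single experience.
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # the learned decision: step right (+1) in every cell
```

Notice that no one ever tells the agent the corridor’s layout or the right answer – it discovers the policy purely from the rewards its own actions produce, which is exactly the shift from pattern recognition to decision-making described above.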
Robotics is the process of developing and training robots to interact with the world in predictable ways. This includes manipulating objects in interactive environments and interacting with people. Substantial advances in robotics have been made in the past few years, building on the successes of other AI-related technologies, including computer vision and other forms of machine perception.
With these common definitions in place, next time, we can begin our examination of how each will potentially impact the future of humanity in an AI world.
I hope you will stay tuned.
Footnote(s):
1 – Nils J. Nilsson, The Quest for Artificial Intelligence: A History of Ideas and Achievements (Cambridge, UK: Cambridge University Press, 2010).