INTRODUCTION: THE MACHINE LEARNING ERA IS HERE
Artificial intelligence (AI) and machine learning (ML) are emerging technologies that will transform organizations faster than any previous wave of technology. In the digital transformation era, success depends on using analytics to uncover the insights locked in the massive volumes of data generated today. Historically, these insights were discovered through labor-intensive manual analysis, but data continues to grow in both volume and complexity. AI and ML are the latest tools for data scientists, enabling them to refine raw data into value faster. In the past, businesses worked with a finite set of data generated from large systems of record.
Today, far more endpoints are connected to a business, each generating its own set of data that needs to be analyzed. For example, a decade ago, the concept of the Internet of Things (IoT) did not exist. Now, businesses are connecting new devices at a furious rate. ZK Research forecasts that by 2025, there will be 80 billion connected endpoints (Exhibit 1), each generating significant volumes of data. IoT is not a prerequisite for AI and ML, as many other data sources exist, but its arrival accelerates the need for them. Given the difficulty companies already have analyzing today's data volumes, it is hard to see how organizations will cope with the coming explosion of ingested data using manual methods alone. The only way to compete effectively is by using AI and ML.
The terms “machine learning” and “artificial intelligence” are often used interchangeably, which is incorrect. AI is a broad term describing the ways in which computers mimic human intelligence. Machine learning is a set of algorithms that build models of behavior from a data set. Because there are other ways for algorithms to mimic human intelligence, machine learning is generally considered a subset of artificial intelligence.
AI and ML are applicable across all verticals—hence there is no single “killer application.” Each enterprise has different business challenges and access to different data sets. Therefore, the approach and application of AI and ML will differ. Despite the differing uses of AI and ML, here are some of their more common use cases:
- Anomaly detection: Trained on a baseline data set, machine learning–based systems can flag inputs that deviate from the norm. A common use case is in the healthcare industry, where AI can locate bleeds, tumors, or other problems in brain MRI scans that are often too subtle for human inspection to discern.
- Classification: An AI system learns from a set of training data and can then classify new inputs into specific groupings. An example of this is when a self-driving car sees an object and can then categorize it as a tree, a person, a sign, or another object.
- Predictions: AI is used to estimate or predict the next value in a specific sequence. Human prediction has obviously existed for decades, but AI can incorporate a wider set of data. For example, a retailer can more accurately forecast future sales by including weather information.
- Recommendations: An AI system can make specific suggestions regarding responses to questions or comments. These systems are gaining traction in contact centers to help agents respond to common customer complaints. Also, chatbots are now being used to suggest products to buy based on the patterns of similar individuals.
- Categorization: Data often needs to be grouped into clusters. An AI system can analyze large data sets and group data instances by common traits. For example, when studying shopping habits, AI can identify demographic clusters such as age and income groups. Another emerging use case is voice analytics that determines whether an audio stream contains a person’s voice or background noise, the latter of which should be automatically muted.
- Translation of information: Machine learning–based systems can quickly translate data from one form to another. The best example of this is AI combined with natural language processing to let participants on a video call each speak in their native language. The system identifies each language and translates it in real time so everyone can take part in the conversation.
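The anomaly-detection use case above can be illustrated with a deliberately simple sketch: flag any new value that sits far outside the statistical spread of the training data. This toy z-score approach (the function name and threshold are illustrative, not from any specific product) stands in for the far more sophisticated models used on data such as MRI scans.

```python
from statistics import mean, stdev

def find_anomalies(training, new_points, z_threshold=3.0):
    """Flag values that deviate from the training data's mean by more
    than z_threshold standard deviations -- a toy stand-in for the
    anomaly-detection use case described above."""
    mu = mean(training)
    sigma = stdev(training)
    return [x for x in new_points if abs(x - mu) / sigma > z_threshold]

# Normal sensor readings cluster near 50; 120 stands out as anomalous.
training = [48, 50, 51, 49, 50, 52, 47, 50]
print(find_anomalies(training, [49, 120, 51]))  # -> [120]
```

Real deployments learn what “normal” looks like from high-dimensional training data rather than a single statistic, but the principle is the same: the system models the baseline and surfaces what does not fit.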
“Deep learning” is another term that is commonly used in AI circles to describe the utilization of deep layers of neural networks. Traditional machine learning algorithms are linear or “shallow” in nature, whereas deep learning algorithms use neural networks to handle the varying complexity and abstractions of the incoming data. Consider the following example. A parent tells a child what a cat is by pointing to a cat and then confirming it is indeed a cat. The parent then can point out what is not a cat as well. Over time, the child becomes more aware of which features define a cat and which ones do not and is eventually able to quickly identify them. The child is actually doing complex abstraction (i.e., cat identification) by building a hierarchy in which each level of abstraction is created using the knowledge stored in preceding layers.
Computer deep learning goes through roughly the same process, where each algorithm in the hierarchy applies a non-linear transformation to the data to create a model for inferencing. The stacked processing layers can be very deep—hence the term “deep learning.” However, deep learning requires a massive amount of data.
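The stacked non-linear transformations described above can be sketched in a few lines. This is a minimal forward pass through three layers, assuming arbitrary placeholder weights; in a real deep learning system, the weights are learned from massive amounts of training data rather than written by hand.

```python
def relu(values):
    """A common non-linear activation: negative inputs become zero."""
    return [max(0.0, v) for v in values]

def layer(inputs, weights, biases):
    """One layer: a weighted sum of the inputs, then a non-linear
    transformation -- each layer builds on the abstractions of the last."""
    return relu([
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ])

# Three stacked layers, each transforming the previous layer's output.
# The depth of this stack is what puts the "deep" in deep learning.
x = [0.5, -0.2]                                      # raw input features
h1 = layer(x,  [[0.8, -0.4], [0.3, 0.9]], [0.1, 0.0])  # low-level features
h2 = layer(h1, [[1.2, -0.7], [0.5, 0.5]], [0.0, 0.1])  # mid-level features
out = layer(h2, [[0.6, 0.6]], [0.0])                   # final inference
print(out)
```

Each call to `layer` mirrors one level of the child's hierarchy in the cat example: the output of one level of abstraction becomes the input to the next.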