AI: What it can already do, and what we’re still waiting on
Who hasn’t heard of Artificial Intelligence (AI)? For many CEOs, CTOs and IT students, as well as people outside the tech world, AI is an oft-repeated if vaguely understood term. According to a June 2017 Fortune magazine survey, 73 percent of Fortune 500 CEOs reported that the greatest challenge to their business was the fast-changing technological landscape, particularly AI.
That perception stems partly from the fact that many corporate leaders have a firmer grasp of the hype surrounding AI than of the technology itself.
AI broadly refers to technology centered on the development of machines capable of intelligent behavior, machines that can act like humans under certain conditions. Robotics is an interdisciplinary branch of engineering and computer science that creates robots. A robot is a machine that can be programmed to carry out a series of actions within set parameters.
Well-known examples include assembly-line machines, mechanical waiters and even self-driving vehicles. Artificial intelligence programmed into a robot will enable it to take actions that “maximize its chance of achieving a designated goal” such as navigating across a room.
Machine Learning (ML) is currently the most rapidly developing of all AI technologies. ML refers to a machine’s ability to learn continuously from examples and completed tasks, improving its performance without human supervision. In recent years, ML has become more efficient and extended its reach into many areas.
At present, machine learning complements human tasks, enabling people to focus on the remaining activities and enhance their own performance, thereby increasing overall efficiency and productivity. We haven’t yet arrived at a stage where machine learning alone can complete an entire process or business model.
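To make "learning from examples" concrete, here is a minimal sketch of one of the simplest possible learners, a nearest-neighbor classifier: it has no rules programmed in, and its predictions improve only as it is shown more labeled examples. The data and labels below are invented purely for illustration.

```python
# Minimal illustration of "learning from examples": a 1-nearest-neighbor
# classifier labels a new case by finding the most similar past example.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(examples, query):
    """Return the label of the training example closest to the query."""
    closest = min(examples, key=lambda ex: distance(ex[0], query))
    return closest[1]

# Training examples: (features, label) pairs — entirely made-up data.
examples = [((1.0, 2.0), "low"), ((8.0, 9.0), "high"), ((7.5, 8.0), "high")]

print(predict(examples, (7.0, 9.0)))  # prints "high"
```

The point is that nothing in the code encodes what "high" or "low" means; the behavior comes entirely from the examples, which is exactly why more (and better) data makes such systems stronger.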
Adaptive machine learning (AML) is another important sub-field of AI. AML uses millions of examples to train machines to learn to perform intelligently and has played an essential role in the development of self-driving vehicles, voice and image recognition devices, and computers that can play various games, often beating human experts.
One important aspect of AML is voice recognition. Voice recognition apps in particular have improved considerably over the last two years, producing markedly more accurate output. It is now faster to dictate to your cell phone than to type on it.
According to the Harvard Business Review (HBR), the error rate for voice recognition has dropped from 8.5 percent to 4.9 percent. Numerous applications, such as Google Docs Voice Typing, Apple Dictation, Dragon NaturallySpeaking, and Windows Speech Recognition, are widely used today.
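Error rates like those quoted above are conventionally measured as word error rate (WER): the word-level edit distance between a reference transcript and the system’s output, divided by the number of reference words. A short sketch of that computation, with an invented transcript pair for illustration:

```python
# Word error rate (WER): edit distance over words between a reference
# transcript and a recognizer's hypothesis, normalized by reference length.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of five reference words -> WER of 0.2 (20%).
print(word_error_rate("book a table for six", "book the table for six"))
```

A drop from 8.5 to 4.9 percent WER, as reported, means roughly one word in twenty is now misrecognized rather than about one in twelve.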
Image recognition technology has also advanced substantially. Apps can now recognize people’s faces, animals and birds. Image recognition is being adopted by an increasing number of organizations in place of employees’ physical IDs.
Machines are now significantly better at discerning differences among similar-looking categories. Some can even recognize handwriting and detect signs of illness or disease in medical images.
These advances are helping computers become better at reasoning and solving problems — so much so that, in some areas, they perform with greater accuracy than humans. Machines have beaten human experts at strategy games like poker and Go, and large internet companies like PayPal and Amazon are using intelligent systems and ML to improve money-laundering detection, monitor fraud, and make better product recommendations to customers.
Limitations of AI
For all the talk about AI changing the world and putting billions out of work, artificial intelligence is not yet widely applicable. This is partly because ML technology, unlike a human, still cannot give us a single system that recognizes, reasons, and answers queries across a diversity of domains, something humans do very well.
ML systems are exceptionally proficient at specific tasks but, unlike humans, cannot transfer competence in one function to a related one. For example, a Tokyo resident who speaks Japanese and English can probably also tell a visitor which eateries serve the best tempura prawns or soba noodles, and where to catch a worthwhile Noh performance.
ML technology is mostly incapable of the broad understanding and generalizing ability that humans possess. As Brian Bergstein, contributing editor at MIT Technology Review, puts it, “(M)achines don’t have common sense.”
One reason intelligent machines can do certain things better than humans is that they can process phenomenal amounts of data in a brief period. An example is a self-driving car that simultaneously analyzes multiple sensor-based inputs to steer clear of hazards on the road and avoid accidents … most of the time.
Machines, unlike humans, are still unable to adapt to unknown or unforeseen circumstances. To date, there is no reliable way to predict how an ML system will respond to situations not represented in its training data. This is a key hurdle to clear before ML systems can be deployed in mission-critical functions, such as intensive-care units or nuclear facilities.
Unlike logic-based systems, neural network systems are extraordinarily complex because they process data from millions of connections. The outcome depends on inputs from each of these connections, which is why it is inordinately difficult to diagnose and rectify any causes of error.
Since ML systems are still incapable of adapting to situations without a precedent, they may not yield the correct solution or action as circumstances change from what they were when the machine was trained. There is also a risk that an ML system’s outcomes will incorporate biases, not because the system was trained to be biased, but because of implicit biases present in its training data.
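This mechanism — bias living in the data rather than the code — can be shown with a deliberately trivial sketch. The "model" below just learns the most frequent historical outcome for each group; the hiring records are entirely invented, and the learning rule itself is neutral, yet the model inherits the skew in the data.

```python
# Sketch of data-driven bias: a neutral learning rule trained on skewed
# historical records reproduces the skew. All data here is fabricated.
from collections import Counter

def train(records):
    """Learn, per group, the most frequent historical outcome."""
    by_group = {}
    for group, outcome in records:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

# Invented hiring history skewed against group "B".
history = ([("A", "hired")] * 8 + [("A", "rejected")] * 2
           + [("B", "hired")] * 2 + [("B", "rejected")] * 8)

model = train(history)
print(model)  # {'A': 'hired', 'B': 'rejected'}
```

Nothing in `train` mentions either group, yet the learned model systematically rejects group "B" applicants — which is exactly why auditing training data matters as much as auditing code.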
While ML applications have learned to interpret human facial expressions and voice tones to discern a person’s emotional state, they remain incapable of working with that person to influence that state. This is a skill humans are adept at: as social beings, we have an inherent capacity to influence one another.
What do we expect from AI over the next decade?
Having made rapid progress in the areas of perception and cognition, ML technology, bolstered by increasing investments in time and money, will continue to advance. Today, ML applications can recruit and promote people, prescribe medication, drive cars and book a table for six at a restaurant. It is realistic to expect AI to perform extraordinarily well in these and similar functions over the next decade.
Because voice communication comes naturally to humans and is the fastest mode for sharing ideas, the use of voice-enabled interfaces will continue to expand at home and in the workplace. Rather than tapping, clicking and typing as we’ve done for so long, we will increasingly use our voices to interact with machines.
Many experts expect robots to assume a wider role in maintaining patient wellbeing, especially in the realm of elder care. However, one must wonder how this area will progress, as many of the elderly may prefer interacting with another human over a machine.
As Erik Brynjolfsson and Andrew McAfee believe, more and more companies will “transform their core processes and business models to take advantage of machine learning.” What is increasingly clear is that most of us will have to learn to work with AI. While workplaces are not expected to be bereft of people, over time there will likely be less room for those who refuse to embrace AI and its myriad advantages.