
The Rise of the Machines

Advances in artificial intelligence are having a growing impact on sectors as diverse as retail, healthcare and manufacturing.


"Over recent years, AI capability has improved to such an extent that a range of commercial applications are now possible in areas like consumer electronics, industrial automation and online retail."

Among the many emerging trends in the technology sector, the rise of artificial intelligence (AI) is likely to be one of the most significant over the coming years. AI refers to the ability of machines to perform tasks typically associated with human cognition, such as responding to questions, recognizing faces, playing video games or describing objects. Over recent years, AI capability has improved to such an extent that a range of commercial applications are now possible in areas like consumer electronics, industrial automation and online retail.

Technology companies of all sizes and in locations around the world are developing AI-driven products aimed at reducing operating costs, improving decision-making and enhancing consumer services across a range of client industries. And despite an overall decline in venture capital funding across industries in 2016, AI startups raised a record $5 billion globally that year – a 71% annualized growth rate and a near-tenfold rise over the 2012 level (see EXHIBIT 1).

EXHIBIT 1: AI venture capital funding continues to climb

The Rise of AI

Ongoing improvements in microprocessor speed and memory, as predicted by Moore’s Law (the observation that the number of transistors on a chip doubles roughly every 18 to 24 months), have enabled continuing increases in computer processing capacity. But the underlying rate of increase in processing capacity is just one of a series of factors that have driven the performance improvement and surge of investor interest in AI over recent years. At least three other factors have been central.

The first is new techniques in AI software development. Traditional programming approaches have relied on system engineers writing vast sets of instructions into computer software to account for every unique problem that a machine may encounter in a given task. To program a computer to recognize an image of a blue table, for example, the engineer would have to specify a range of dimensions for the object and a range of pixel shades for the color. More recently, however, the application of so-called “machine learning” techniques has allowed computers to perform complex AI functions without the need for direct commands. Instead of inputting instructions, software designers train the program to develop its own rules for performing tasks by using a series of labeled examples as inputs and allowing the machine to detect the underlying similarities between them. To identify the same blue table using machine learning, for example, the engineer would feed the program a large number of sample images, which the computer could then use to classify new inputs through inference from these past observations.
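To make the contrast concrete, the sketch below trains a toy classifier on labeled examples rather than hand-coded rules, using Python and scikit-learn. The feature values, dimensions and labels are invented purely for illustration; a real system would learn from raw image pixels rather than a handful of summary features.

```python
# A minimal sketch of the machine-learning approach described above,
# applied to the "blue table" example. Instead of hand-coding pixel
# and dimension thresholds, we give the program labeled samples and
# let it infer its own decision rule. All values here are invented.
from sklearn.linear_model import LogisticRegression

# Each sample: [mean_red, mean_green, mean_blue, width_cm, height_cm]
training_samples = [
    [30, 40, 200, 120, 75],   # blue table
    [25, 35, 190, 150, 72],   # blue table
    [200, 30, 40, 130, 76],   # red table (wrong color)
    [35, 45, 210, 40, 90],    # blue chair (wrong dimensions)
]
labels = [1, 1, 0, 0]  # 1 = "blue table", 0 = anything else

# The model derives its own boundary between the two classes...
model = LogisticRegression().fit(training_samples, labels)

# ...and classifies a new, unseen object by inference from past examples.
new_object = [[28, 38, 195, 125, 74]]
print(model.predict(new_object))  # -> [1], i.e. "blue table"
```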

Another important driver of advances in AI has been the steady growth in digital data as more individuals have come online and per-user data consumption has increased.

In 2016 alone, the amount of data exchanged over the Internet increased by 22%, with the total expected to double by the end of the decade, according to projections from Cisco Systems. Whether from social media uploads, stored enterprise data on customer transactions or digital patient health records, vast amounts of digital information are being generated every day. The majority comes from personal communication devices such as smartphones, tablets and computers. But the growth expected in other connected objects (health devices, industrial machinery, cars, planes, public infrastructure, surveillance equipment and more) will only further swell the digital universe. Software engineers are using these vast quantities of images, sound, digital transactions and other online activity to develop their AI programs. Data scientists at a leading U.S. Internet firm have stated, for example, that doubling the amount of voice data used to train a model reduced errors in their speech recognition software by 10%.

In addition to new programming approaches and large data sets, hardware changes have also accelerated progress in machine learning. Central processing unit (CPU) chips still perform most day-to-day computing tasks, such as word processing or media streaming, but the use of graphical processing units (GPUs) in machine learning has sped up the process of developing AI programs significantly. While a CPU may contain a few independent processors that read and execute software program commands, GPUs contain thousands of smaller processing cores that can run a large number of similar operations at the same time. This makes them far better suited for the billions of repetitive calculations that programmers may need to perform on batches of video, audio or transaction data to develop AI algorithms through machine learning.
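The pattern GPUs accelerate can be sketched in a few lines. In the toy example below (NumPy on a CPU, with arbitrary shapes chosen for illustration), the same multiply-accumulate is applied independently to every sample in a batch; GPU array libraries such as CuPy or JAX execute the identical batched expression across thousands of cores at once.

```python
# A rough illustration of why data-parallel hardware suits machine
# learning: one operation is repeated independently over a large batch,
# so the work can be fanned out across many processing cores.
import numpy as np

rng = np.random.default_rng(0)
batch = rng.standard_normal((10_000, 512))   # 10,000 samples, 512 features
weights = rng.standard_normal((512, 128))    # one layer of a toy model

# Sequential view: one sample at a time, as a few CPU cores would proceed.
outputs_loop = np.stack([sample @ weights for sample in batch])

# Parallel view: the whole batch expressed as a single operation that
# parallel hardware can spread across its processing cores.
outputs_batch = batch @ weights

# Both views compute the same result; only the execution pattern differs.
assert np.allclose(outputs_loop, outputs_batch)
```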

The sum of these drivers – new programming techniques, more data and faster chips – has brought AI close to human-level performance in the key areas of image classification and speech recognition over recent years (see EXHIBIT 2).

EXHIBIT 2: Computers are getting smarter year by year

AI Incorporated

Artificial intelligence capability has therefore advanced considerably over recent years, but real-world applications of the technology are still at a relatively nascent stage. Over the coming years, we expect AI software and related products to gain more widespread industrial and consumer adoption across a number of sectors (see EXHIBIT 3), including:

EXHIBIT 3: AI application categories
  • Transportation. Autonomous cars use sensory hardware, such as cameras, radar and lasers, to track their surroundings as they move from point to point. The information gathered by these sensors must be processed and interpreted in real time so the vehicle can brake, accelerate and turn at the appropriate times. Similarly, the commercial use of unmanned aerial vehicles for package delivery will require AI assistance to detect obstacles and avert potential collisions without the need for direct human control.
  • Healthcare. Medical laboratories and healthcare startups are using AI to develop imaging and diagnosis tools. Researchers have already developed software that can recognize a range of conditions, from early-stage tumors to DNA mutations in blood samples. According to the National Academy of Medicine, incorrect diagnoses are responsible for up to 10% of U.S. patient mortality and close to 60% of medical malpractice claims. This leaves plenty of scope for AI technology to improve outcomes and reduce costs for providers.
  • Security. Both advanced video surveillance and next-generation cybersecurity will rely on AI to improve their functionality. Facial recognition technology is already used for aviation security, law enforcement and even at retail stores and hotels. Meanwhile, advanced cybersecurity software uses AI to detect unusual behavior once attackers have gained access to the network, rather than simply putting up firewalls against known computer viruses (a minimal sketch of this approach follows this list).
  • Consumer electronics. AI is being used by mobile application developers in areas ranging from language translation to real-time photo recognition. Voice-controlled digital assistants that can provide search responses or set and remind users of appointments are also growing in popularity. Similarly, online AI chatbots are gaining traction among large retailers for automated customer service.
  • Robotics. Computer programs that can process data from machine-mounted camera sensors will help industrial robots operate in unstructured environments. In agriculture, for example, a significant portion of fruit and vegetable harvesting is still performed manually, but improved machine vision has spawned prototypes for robotic pickers that can identify produce that is ready for harvest. In manufacturing, vision-guided robots will help to lower operational costs by enabling a single production line to make multiple products and by allowing robots to perform new functions, such as defect detection, that may have previously required human involvement.
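As referenced in the security item above, behavior-based detection can be illustrated with a short sketch. The example below fits scikit-learn's IsolationForest to features of normal network sessions and flags sessions that deviate from that baseline; the feature choice and all numbers are hypothetical, invented for illustration only.

```python
# A minimal sketch of behavior-based network monitoring. The detector
# learns what routine sessions look like and flags deviations, with no
# signature of any known virus required.
from sklearn.ensemble import IsolationForest
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-session features: [bytes_sent_kb, login_hour, files_touched]
normal_sessions = np.column_stack([
    rng.normal(500, 50, 1000),   # typical transfer sizes
    rng.normal(14, 2, 1000),     # activity clustered in working hours
    rng.normal(20, 5, 1000),     # routine file-access counts
])

# Fit the model on a baseline of normal behavior.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_sessions)

# A session with bulk transfers at 3 a.m. touching thousands of files:
suspicious = [[9000, 3, 4000]]
print(detector.predict(suspicious))  # -> [-1], flagged as anomalous
```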

As artificial intelligence is incorporated into more sectors, we expect the biggest beneficiaries to be providers of AI-enabling technology and industries that gain the most in product enhancement from improvements in AI performance. Chipmakers stand to benefit from increased demand for processing power, particularly makers of graphical processing units used for AI program training. Semiconductor manufacturing equipment makers should also gain, and the group has already been one of the leading industries in this year’s technology sector rally. On the software side, stand-alone developers of AI computer vision and speech recognition solutions appear well positioned to benefit as AI capability is embedded into more products. Industrial robot manufacturers that incorporate AI into their products are likely to attract more sales, particularly in higher-wage, aging economies as companies look to reduce labor costs. And Internet companies with AI at the core of their consumer services (such as digital assistants and new software features) stand to benefit directly from improvements in speech recognition and image classification.
