Hardware revolution pushes AI into the mainstream

Artificial Intelligence

2019-12-13 / www.ft.com




Only the biggest groups and governments can afford cutting-edge AI computing, but anyone can rent AI systems by the hour.

Smart algorithms have topped the list of artificial intelligence breakthroughs in recent years, as computers have demonstrated their superiority to humans in increasingly complex tasks.
These days, however, another force may be having an even bigger impact in pushing AI forward. Advances in specialized chips and other hardware have boosted the capabilities of the most advanced AI systems, while also taking the technology into the mainstream. Whether this is producing tangible business benefits is another matter.

On most measures, the algorithms are no longer making the leaps seen in earlier years. That is partly because, in some tasks, there is not much more to be gained: in image recognition, for instance, the computers have already surpassed humans.

It also reflects the fact that the problems still to be cracked are getting harder and progress is slower. Language, the next frontier for machine intelligence, is notoriously difficult. While tasks such as speech recognition and language translation have been tamed, understanding and reasoning are still areas where humans rule.
Instead, the most eye-catching improvements have been coming from hardware. These gains rest on specialized chips designed to handle the huge amounts of data required for machine learning, along with purpose-built systems tuned for the job.

US research group OpenAI points to a hardware inflection point in 2012. Before that time, Moore's Law, the chip industry rule of thumb that processing power doubles every two years, ruled in the world of AI computing.
Since then, AI systems have followed a Moore's Law on steroids. With new types of hardware and greater resources being thrown at the problem, the computing power consumed by the most advanced AI systems has doubled every 3.4 months.
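To see how wide the gap between those two doubling rates is, here is a rough back-of-the-envelope sketch in Python (the two-year window is an arbitrary choice for comparison, not a figure from OpenAI):

    # Compare the two doubling rates quoted above over the same window.
    # Assumption: growth compounds smoothly, i.e. factor = 2 ** (elapsed / doubling_time).
    months = 24  # an arbitrary two-year window

    moores_law = 2 ** (months / 24)   # doubling every two years  -> 2x
    ai_compute = 2 ** (months / 3.4)  # doubling every 3.4 months -> roughly 133x

    print(f"Moore's Law over two years: ~{moores_law:.0f}x")
    print(f"3.4-month doubling over two years: ~{ai_compute:.0f}x")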

There is a paradox in this hardware acceleration. On the one hand, at the frontiers of the science, it has turned AI into an arms race in which few will be able to compete: only big companies and governments that can command huge computing resources will be able to stay at the cutting edge.
OpenAI, which has been operating on the theory that the AI researchers with the biggest computers will inherit the earth, recently raised $1bn from Microsoft to stay in the game.
The other effect of the hardware revolution, however, has been to push the technology into the mainstream. Google's TPUs (*), among the world's most advanced chips for handling machine learning, can be rented by the hour through the company's cloud computing platform (as little as $1.35 an hour, if your workload is not time-sensitive and you do not mind being at the back of the queue).

With cloud services such as Amazon Web Services making low-cost hardware and machine-learning tools widely available, training a neural network - the most computing-intensive part of AI - is suddenly within general reach.

According to Stanford's DawnBench project, which provides a way to benchmark AI systems, the time taken to train a system on the widely-used ImageNet data set has fallen from three hours to only 88 seconds in the space of less than two years. This translates into massive price deflation, with the cost falling from $2,323 to $12.
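Taken at face value, those DawnBench numbers imply roughly a 120-fold reduction in training time and a near 200-fold fall in cost; a quick check in Python:

    # Quick arithmetic on the DawnBench ImageNet figures quoted above.
    time_before_s = 3 * 60 * 60          # three hours, in seconds
    time_after_s = 88                    # seconds
    cost_before, cost_after = 2323, 12   # US dollars

    print(f"Training time cut by a factor of ~{time_before_s / time_after_s:.0f}")  # ~123
    print(f"Training cost cut by a factor of ~{cost_before / cost_after:.0f}")      # ~194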

Whether the enormous reductions in time and cost are making advanced AI a practical technology is another matter.

Some 1.32 per cent of job postings in the US in October related to AI, up from 0.26 per cent in 2010. The figure is still small and the definition of an "AI job" is debatable, but the direction is clear.
Erik Brynjolfsson, an MIT professor who studies the impact of new technologies on the economy, says this is only a small down payment on the hoped-for impact from AI. Companies that have hired data scientists and machine-learning experts will not see an immediate payback, he warns: they first need to overcome internal bottlenecks by developing the new work processes required to get the most out of the technology.
This is creating what he calls a J-curve in productivity that may already be visible in the wider economy, as the costs associated with the technology come ahead of the hoped-for benefits. The race is now on to reap real-world benefits from a much-hyped technology.

(*) A tensor processing unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google specifically for neural network machine learning.
