Comment from Kai Beckmann, member, executive board, Merck KGaA

Humans have always been inspired by nature when it comes to technological innovations – just think of the lotus effect or the streamlined shapes in modern design. Nature is now providing a model for the next generation of computers as well. Neuromorphic computers that are inspired by the human brain could be the thing that finally helps artificial intelligence (AI) break through into our everyday lives. Merck is also investing in this field.

New requirements

Today’s information technology – from PCs and smartphones to supercomputers – is based primarily on the von Neumann model, originally designed back in the 1940s by mathematician John von Neumann. In this computer architecture, the processor and the memory are strictly separated. To put it simply, during an operation data moves from the memory to the processor, which processes it and then transfers it back to the memory.
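To make that separation concrete, here is a deliberately simplified Python sketch. It is purely illustrative – the dictionary standing in for memory and the function standing in for the processor are assumptions made for this example, not a model of any real hardware.

```python
# Illustrative only: the von Neumann pattern of fetch -> process -> store.
memory = {"a": 6, "b": 7}          # the "memory" holds the data

def process(x, y):                 # the "processor" does the work
    return x * y

a = memory["a"]                    # fetch operands from memory
b = memory["b"]
memory["result"] = process(a, b)   # process, then write the result back
print(memory["result"])            # 42
```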

This architecture has been continually refined and has proven itself over the decades. The digital transformation, however, has led to computers taking on more and more tasks that require information to be processed intuitively, depending on the situation – something that only humans have been able to do so far. Over the past decades, computers have improved enormously at tasks such as calculating the square root of 7,583 to 100 digits of accuracy in a fraction of a second. But today we expect much more: computers that talk to us, navigate cars autonomously through traffic and learn independently.
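As a minimal illustration of this kind of deterministic number-crunching, the following Python sketch computes the square root of 7,583 to 100 digits using the standard decimal module; the number and the precision are taken from the example above, and actual timings will of course depend on the machine.

```python
# Minimal sketch: the kind of exact numeric task classical computers excel at.
from decimal import Decimal, getcontext

getcontext().prec = 100           # work with 100 significant digits
root = Decimal(7583).sqrt()       # arbitrary-precision square root
print(root)
```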

Conventional computers have their limits

Applications with AI at their core require enormous computing capacity, and the deep learning networks they employ have grown so large that they have essentially maxed out the hardware they run on. A modern chip the size of a fingernail now contains billions of transistors, and the latest high-end computers use thousands of these chips.

Moore’s law states that the number of transistors on a microchip doubles roughly every two years. However, squeezing billions of transistors into ever smaller areas is an incredibly complex challenge that drives up design and production costs. Many feature sizes are already close to atomic scales, and physical limits prevent traditional designs from scaling down further.
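Expressed as a simple doubling model, Moore’s observation can be sketched in a few lines of Python; the starting value of one billion transistors is only an assumed example for illustration.

```python
# Rough sketch of Moore's law: transistor count doubles about every two years.
def transistor_count(n0: float, years: float) -> float:
    return n0 * 2 ** (years / 2)

# Example: starting from an assumed 1 billion transistors,
# ten years of doubling every two years gives about 32 billion.
print(f"{transistor_count(1e9, 10):.2e}")
```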

One possible solution is to extend two-dimensional chip architectures into three dimensions. Within Merck’s Performance Materials business sector, our semiconductor materials play a big role in 3D NAND technology. But even these solutions do not address a fundamental issue of the von Neumann architecture: slow data transfer. While today’s processors calculate very efficiently and 3D NAND memory chips (used in solid-state drives, for example) can hold data without consuming power, the transfer of data between these components is a major bottleneck that drags down overall system performance. This not only limits the speed of today’s systems but also costs a great deal of energy – moving data around the system consumes orders of magnitude more energy than the computation itself!

The brain as blueprint

Resolving this so-called “von Neumann bottleneck” requires completely new computing technologies. Research institutes and companies around the world are working on novel computers that function in fundamentally different ways. Neuromorphic chips are among the most promising of these new technologies, alongside quantum computers. They are modeled on the human brain, with densely connected artificial neurons and synapses.

The development teams aim to imitate a crucial characteristic of the human brain: in contrast to conventional computers, the human brain is extremely flexible and can adapt intuitively to unpredictable environments. Just like humans, neuromorphic chips use stored data to develop their own problem-solving skills and can then tackle problems for which they were not specifically programmed. For example, the more a neuromorphic computer is used for facial recognition, the faster and more reliably it identifies eyes, noses and mouths.

Neuromorphic computer chips perform better because they can store and process information simultaneously, just as the neurons and synapses in the human brain do. While conventional computers run commands sequentially, constantly moving data back and forth between memory and processor, neuromorphic computers process and store data largely at the same time, making them both faster and far more energy efficient.
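The contrast with the fetch-process-store sketch above can be caricatured in a few lines of Python: a toy neuron that keeps its synaptic weights (the memory) right next to its activation function (the processing). This is only an analogy for the principle, not a model of any actual neuromorphic chip, and all names and numbers are invented for illustration.

```python
import math

class Neuron:
    """Toy neuron: its weights (memory) live next to its computation."""
    def __init__(self, weights):
        self.weights = weights                 # "synapses" stored locally

    def fire(self, inputs):
        # Weighted sum plus a sigmoid activation, computed right where the
        # weights are stored -- no round trip to a separate memory.
        total = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 / (1 + math.exp(-total))

neuron = Neuron([0.4, -0.2, 0.7])
print(neuron.fire([1.0, 0.5, 0.25]))
```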

A catalyst for AI

Intel’s Loihi neuromorphic chip can execute complex programs 1,000 times faster and 10,000 times more efficiently than current microchips with conventional architecture. This allows design engineers to use ever more complex deep learning neural networks. At Merck, we are investing in this field, as are technology giants such as Microsoft, IBM, Qualcomm and Google. We are cooperating, for example, with the US startup MemryX, which develops neuromorphic computer chips for AI applications.

Neuromorphic computers have the potential to take artificial intelligence and machine learning to the next level. Everyday applications include autonomous vehicles and voice assistants. In medicine, they could automatically monitor a patient’s heart rate remotely, or power implants that replace diseased parts of the nervous system, such as the retina. And in smart factories, neuromorphic chips could be used to optimize the motions and sequences of robots as well as workflows in general.

It will still be some time before neuromorphic computers are as intelligent as humans. With 100 million artificial neurons, Intel’s largest Loihi-based system is “only” on the level of a small mammal’s brain. In comparison, the human brain has almost 90 billion neurons, connected by trillions of synapses. It will be exciting to follow the further development of neuromorphic computers – after all, many very intelligent people are putting their heads together to develop the next generation of computers, using their own trillions and trillions of neurons in the process.

First published on LinkedIn