‘Hardware neural network’ to make Artificial Intelligence tech faster, cheaper
Researchers at the Technion and TowerJazz have developed a revolutionary technology that can turn TowerJazz’s commercial flash memory components into memristors—devices that contain both memory and computing power. The technology, which was inspired by the operation of the human brain, significantly accelerates the operation of artificial intelligence (AI) algorithms.
Published in the Nature Electronics journal, the research was led by doctoral student Loai Danial and Professor Shahar Kvatinsky of the Andrew & Erna Viterbi Faculty of Electrical Engineering at the Technion, in collaboration with Prof. Yakov Roizin and Dr. Evgeny Pikhay from TowerJazz and Prof. Ramez Daniel of the Faculty of Biomedical Engineering at the Technion.
From the outset, the ability of computers to solve computational problems has been superior to that of humans. Yet for decades, when it came to identifying images, classifying image attributes and making decisions, computers lagged behind humans. In recent years, artificial intelligence has begun to narrow this gap and has managed to carry out complex operations by means of training based on examples. For the past few decades, vast resources have been devoted to developing artificial intelligence on the software level. This investment has generated a quantum leap in AI effectiveness in many and varied fields, among them medicine, intelligent transportation, robotics and agriculture.
Artificial intelligence is fueled by data, specifically by the extremely large data sets known as big data. For this reason, the major breakthrough in artificial intelligence had to “wait” for dramatic improvements in computing power. Hardware, however, lagged behind the rapid advances in software, so hardware that could meet the demands of AI software was years in coming. Such hardware must perform well in terms of speed, power consumption, accuracy, area and cost, requirements that are very difficult to satisfy with the traditional hardware model based on digital computation.
The digital model limits hardware performance in two main respects: 1) Digital hardware struggles to perform many operations in parallel, since it was originally designed to execute a relatively small number of operations at a time. 2) This type of hardware can deliver high accuracy only at the cost of very high energy consumption and long computation times. As a result, the researchers say, innovative hardware is needed to meet the demands of the artificial intelligence era.
According to Prof. Kvatinsky: “One of the major challenges that AI poses to hardware engineers is how to implement complex algorithms that require a) storage of massive amounts of data in the computer memory, b) rapid retrieval from memory, c) performing many computations in parallel, and d) high accuracy. Standard digital hardware platforms (processors) are not suited for this, for the reasons mentioned above.”
This is the background for the new technology described in the article published in Nature Electronics. “Our technology transforms hardware that is digital in nature into a neuromorphic platform—an analog infrastructure of sorts that resembles the human brain,” said Prof. Kvatinsky. “Just as the brain can perform millions of operations in parallel, our hardware is also capable of performing many operations in parallel, thus accelerating all associated operations.”
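The parallelism Prof. Kvatinsky describes is often illustrated with a memristor crossbar, where a whole vector-matrix multiplication happens in a single analog step. The sketch below is illustrative only, not the authors' implementation; the conductance and voltage values are hypothetical.

```python
import numpy as np

# Hedged sketch: a memristor crossbar stores a weight matrix as an
# array of conductances G. Applying input voltages V to the rows
# produces output currents I = G^T V on the columns (by Ohm's and
# Kirchhoff's laws), so every multiply-accumulate occurs at once,
# in analog, rather than as a sequence of digital operations.
G = np.array([[0.5, 1.0],
              [2.0, 0.2],
              [0.1, 1.5]])     # conductances (hypothetical values)
V = np.array([0.3, 0.1, 0.2])  # input voltages applied to the rows

I = G.T @ V  # column currents: one parallel vector-matrix product
print(I)     # each entry sums the contributions of all rows at once
```

In a digital processor the same product would take one multiply-accumulate per matrix entry; in the analog crossbar the physics performs them simultaneously.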
Doctoral student Loai Danial goes on to explain: “I am personally interested in neuromorphic computations, both as a computer engineering student and as someone who lost his father to a rare neurological disease. The brain has always served as an inspiration for computational systems, and my challenge is to use engineering tools to understand the computational mechanism of brain operations. In the current research we showed that an electrical chip based on standard commercial technology has two critical abilities: associative memory that, like the brain, operates based on features rather than index searching, and the ability to learn.”
Associative memory, familiar to us from human thought, means, for example, that when we see eyes we do not search through an index of stored items to find a match; rather, we identify the eye associatively, by its features. This mechanism is rapid, efficient and energy-saving. Moreover, as in the brain, the system’s ability to learn improves as the links between the synapses and the nerve cells change and are updated.
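The contrast between index lookup and feature-based recall can be sketched in a few lines. This is a simplified, hypothetical illustration of the principle (the pattern names and bit vectors are invented), not the chip's actual mechanism: recall returns whichever stored pattern best overlaps the input, even when the input is noisy.

```python
import numpy as np

# Hedged sketch of associative (content-addressable) recall: a query
# retrieves the stored pattern most similar in feature space, with no
# index search involved. Patterns and values here are hypothetical.
stored = {
    "eye":   np.array([1, 0, 1, 1, 0]),
    "mouth": np.array([0, 1, 0, 1, 1]),
}

def recall(query):
    # Pick the stored pattern with the largest feature overlap (dot product).
    return max(stored, key=lambda name: int(np.dot(stored[name], query)))

noisy_eye = np.array([1, 0, 1, 0, 0])  # a corrupted "eye" pattern
print(recall(noisy_eye))               # recovered by feature overlap, not by index
```

Because the match is by similarity rather than by exact address, a partially corrupted input still retrieves the right memory, which is what makes the mechanism robust and efficient.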
According to Prof. Roizin of TowerJazz: “The new technology is easy to implement and transforms TowerJazz’s transistors, originally designed to store data only, into memristors—units that contain not only memory but also computing ability. Because the memristors are situated on existing TowerJazz transistors, they immediately interface with all the devices the transistors work with. The new technology has been tested under real conditions, demonstrating that it can be implemented in building neural hardware networks, thus significantly improving the performance of commercial artificial intelligence systems. Like the brain, the improved system excels in its ability to store data over the long term and in its very low energy consumption.”
According to Prof. Ramez Daniel, formerly an electrical engineer at TowerJazz and now a member of the Technion Faculty of Biomedical Engineering: “The computing power of the improved device stems from its ability to operate in the subthreshold (sub-conduction) region, or, to put it more simply, in a way that resembles natural biological mechanisms. As a result, we have achieved high efficiency at low power, similar to mechanisms that developed in nature over billions of years of evolution.”
Technion researchers Eric Herbelin, Nicolas Wainstein, Vasu Gupta and Nimrod Wald from Prof. Kvatinsky’s research group participated in the research.
This research was supported by the Planning and Budgeting Committee (PBC), the KAMIN grant from the Israel Innovation Authority, the Andrew Viterbi and Erna Finci Viterbi Scholarship for Graduate Students, and a European Research Council (ERC) starting grant. Recently, Loai Danial presented this research at the Nature Conference in China, where he received the best paper award.