How analog AI hardware may one day reduce costs and carbon emissions

Could analog, rather than digital, artificial intelligence (AI) hardware harness fast, low-power processing to solve the rising costs and carbon footprint of machine learning?

Researchers say yes: Logan Wright and Tatsuhiro Onodera, research scientists at NTT Research and Cornell University, envision a future in which machine learning (ML) is performed on novel physical hardware, such as devices based on photonics or nanomechanics. These unconventional devices, they say, could be applied in both edge and server settings.

Deep neural networks, which are at the heart of AI efforts today, rely on heavy use of digital processors like GPUs. But for years there have been concerns about the monetary and environmental cost of machine learning, which increasingly limits the scalability of deep learning models.

A 2019 paper from the University of Massachusetts Amherst, for example, performed a life cycle assessment of training several common large AI models. It found that the process can emit more than 626,000 pounds of carbon dioxide equivalent, nearly five times the lifetime emissions of the average American car, including the manufacture of the car itself.

In a session with NTT Research at the VentureBeat Transform executive summit on July 19, CEO Kazu Gomi said machine learning doesn’t have to rely on digital circuitry; it can instead run on a physical neural network, a type of artificial neural network in which physical analog hardware, rather than software, is used to emulate neurons.

“One of the obvious benefits of using analog instead of digital systems is the power consumption of the AI,” he said. “The consumption problem is real, so the question is what are the new ways to make machine learning faster and more energy efficient?”

Analog AI: more like the brain?

In the earliest days of AI, Wright said, people weren’t trying to figure out how to build digital computers.

“They were trying to think about how we could emulate the brain, which of course is not digital,” he explained. “What I have in my head is an analog system, and it’s actually much more efficient at doing the kinds of calculations that are done in deep neural networks than today’s digital logic circuits.”

The brain is an example of analog hardware for doing AI, but others include systems that use optics.

“My favorite example is waves, because a lot of things like optics are based on waves,” he said. “In a bathtub, for example, you could formulate the problem to encode a set of numbers. At the front of the bathtub, you can set a wave, and the height of the wave gives you this vector X. You let the system evolve for some time and the wave propagates to the other end of the tub. After a while, you can measure the height of that, and that gives you another set of numbers.”

Essentially, nature itself can perform calculations. “And you don’t have to plug it into anything,” he said.
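Wright’s bathtub analogy can be sketched in code. Because wave propagation is linear, letting the water evolve for a fixed time is equivalent to applying a fixed matrix to the input vector: superpositions of inputs produce superpositions of outputs. The sketch below is purely illustrative, not NTT’s actual formulation; it discretizes a 1-D wave and checks that the “computation” the tub performs is linear:

```python
import numpy as np

def propagate(x, steps=50, c=0.5):
    """Evolve a 1-D wave whose initial height profile encodes the vector x.

    Uses a leapfrog discretization of the wave equation with periodic
    boundaries. Because every update is linear, the final heights are a
    fixed linear transform of the input -- the calculation the bathtub
    performs 'for free'.
    """
    u_prev = np.zeros_like(x, dtype=float)
    u = np.asarray(x, dtype=float)
    for _ in range(steps):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2 * u  # discrete Laplacian
        u, u_prev = 2 * u - u_prev + c**2 * lap, u
    return u

# Linearity check: propagating a*x1 + b*x2 equals a*f(x1) + b*f(x2).
rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=8), rng.normal(size=8)
lhs = propagate(2 * x1 + 3 * x2)
rhs = 2 * propagate(x1) + 3 * propagate(x2)
assert np.allclose(lhs, rhs)
```

Reading out the wave heights at the far end thus yields a matrix-vector product without any digital arithmetic, which is the sense in which nature itself does the calculation.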

Analog AI hardware approaches

Researchers across the industry are using a variety of approaches to develop analog hardware. IBM Research, for example, has invested in analog electronics, particularly memristor technology, to perform machine learning calculations.

“It’s pretty promising,” Onodera said. “These memristor circuits have the property that the information is computed naturally as electrons ‘flow’ through the circuit, allowing them to potentially have much lower power consumption than digital electronics.”
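The principle behind memristor-based ML accelerators can be illustrated numerically: in a crossbar array, each cell stores a weight as a conductance, and applying input voltages along the columns makes each row wire sum currents by Ohm’s and Kirchhoff’s laws, so the physics itself performs a matrix-vector multiply. The values below are made up for illustration; this is a schematic sketch, not IBM’s design:

```python
import numpy as np

# Each crossbar cell holds a learned weight as a conductance G[i, j].
# Applying voltages V along the columns makes each row wire carry the
# current I[i] = sum_j G[i, j] * V[j]: Ohm's law in each cell,
# Kirchhoff's current law at each row. The physics *is* the multiply.

G = np.array([[0.2, 0.5, 0.1],   # conductances (siemens) = the weights
              [0.4, 0.1, 0.3]])
V = np.array([1.0, 0.5, 2.0])    # input voltages = the activations

I = G @ V                        # currents a per-row ammeter would read
print(I)                         # -> [0.65 1.05]
```

A digital chip spends energy on every multiply-accumulate; here the accumulation happens in the wires, which is why such circuits could in principle draw far less power.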

However, NTT Research focuses on a more general framework that is not limited to memristor technology. “Our work is focused on also enabling other physical systems, for example those based on light and mechanics (sound), to perform machine learning,” he said. “By doing so, we can make smart sensors in the native physical domain where the information is generated, such as in the case of a smart microphone or a smart camera.”

Startups, including Mythic, are also focusing on analog AI using electronics, which Wright says is a “big step, and probably the lowest risk way to get into analog neural networks.” But it is also incremental and has a limited ceiling, he added: “There is only limited performance improvement that is possible if the hardware is still based on electronics.”

Long-term potential of analog AI

Several startups, such as Lightmatter, Lightelligence, and Luminous Computing, use light rather than electronics to perform computation, an approach known as photonics. This is a riskier, less mature technology, Wright said.

“But the long-term potential is much more exciting,” he said. “Light-based neural networks could be much more energy efficient.”

However, light and electrons are not the only things a computer can be made of, especially for AI, he added. “You could do it with biological materials, electrochemistry (like our own brains), or with fluids, acoustic (sound) waves, or mechanical objects, modernizing early mechanical computers.”

MIT researchers, for example, announced last week that they had developed new protonic programmable resistors, a network of analog artificial neurons and synapses that can perform calculations similarly to a digital neural network by repeating arrays of programmable resistors in complex layers. They used a “practical inorganic material in the fabrication process,” they said, that allows their devices “to run 1 million times faster than previous versions, which is also about 1 million times faster than the synapses in the human brain.”

NTT Research says it’s taking all these approaches a step further and asking much bigger, long-term questions: What can we make a computer out of? And if we want to achieve the fastest, most energy-efficient AI systems, what should we physically make them out of?

“Our paper provides the first answer to these questions by showing how we can make a neural network computer using any physical substrate,” Wright said. “And so far, our calculations suggest that making these strange computers will one day soon make a lot of sense, since they can be much more efficient than digital electronics, and even analog electronics. Light-based neural network computers seem like the best approach so far, but even that question is not completely answered.”

Analog AI isn’t the only non-digital hardware gamble

According to Sara Hooker, a former Google Brain researcher who now heads the nonprofit AI research lab Cohere For AI, the AI industry is “at this really interesting hardware stage.”

Ten years ago, she explains, the massive advance in AI was really a hardware advance. “Deep neural networks didn’t work until GPUs, which were used for video games, [and] were simply repurposed for deep neural networks,” she said.

The change, she added, was almost instantaneous. “Overnight, what took 13,000 CPUs took two GPUs,” she said. “That’s how dramatic it was.”

It’s very likely that there are other ways of representing the world that could be just as powerful as digital, she said. “If even one of these directions starts to show progress, you can unlock a lot of both efficiency and different ways of learning representations,” she explained. “That’s what makes it worthwhile for labs to back them.”

Hooker, whose 2020 essay “The Hardware Lottery” explored the reasons various hardware tools have succeeded and failed, says the success of GPUs for deep neural networks was “actually a strange and lucky coincidence: winning the lottery.”

GPUs, she explained, were never designed for machine learning; they were developed for video games. Much of the adoption of GPUs for AI use “depends on the right timing of the alignment between progress on the hardware side and progress on the modeling side,” she said. “Making more hardware options available is the most important ingredient, because it allows for more unexpected moments where you see those advancements.”

However, analog AI is not the only option researchers are considering to reduce the costs and carbon emissions of AI. Researchers are betting on other areas, such as field-programmable gate arrays (FPGAs) serving as application-specific accelerators in data centers, which can cut power consumption and increase operating speed. There are also efforts to improve the software, she explained.

Analog, she said, “is one of the riskiest bets.”

Expiration date in the current approach

Still, risks must be taken, Hooker said. Asked whether she thinks the big tech companies are supporting analog and other kinds of non-digital alternative AI futures, she said: “One hundred percent. There’s a clear motivation,” adding that what’s missing is sustained government investment in a long-term hardware landscape.

“It has always been difficult when investment rests solely on companies, because it is very risky,” she said. “Often it has to be part of a national strategy for it to be a compelling long-term bet.”

Hooker said she wouldn’t bet on widespread adoption of analog AI hardware, but insists the research efforts are good for the ecosystem as a whole.

“It’s kind of like NASA’s initial flight to the moon,” she said. “There are so many scientific advances that happen just from having a goal.”

And there is an expiration date on the industry’s current approach, she warned: “There is an understanding among people in the field that there has to be some bet on riskier projects.”

The future of analog AI

The NTT researchers made it clear that the earliest, narrowest applications of their analog AI work are at least five to 10 years away, and even then they will likely be used first for specific applications, such as at the edge.

“I think the more short-term applications will happen at the edge, where there are fewer resources and where you may not have as much power,” Onodera said. “I think that’s really where there is the most potential.”

One of the things the team is thinking about is what types of physical systems will be the most scalable and offer the greatest advantage in terms of power efficiency and speed. But in terms of getting into deep learning infrastructure, it’s likely to happen incrementally, Wright said.

“I think it would come to market slowly, with a multilayer network with maybe the front end in the analog domain,” he said. “I think it’s a much more sustainable approach.”
