Artificial neural networks, computer algorithms that take inspiration from the human brain, have demonstrated impressive feats such as detecting lies, recognizing faces, and predicting heart attacks. But most computers can’t run them efficiently. Now, a team of engineers has designed a computer chip that uses beams of light to mimic neurons. Such “optical neural networks” could make any application of so-called deep learning—from virtual assistants to language translators—many times faster and more efficient.
“It works brilliantly,” says Daniel Brunner, a physicist at FEMTO-ST Institute, in Besançon, France, who was not involved in the work. “But I think the really interesting things are yet to come.”
Most computers work by using a series of transistors, tiny switches that either block electric current or let it pass. But decades ago, physicists realized that light might make certain processes more efficient—for example, building neural networks. That’s because light waves can travel and interact in parallel, allowing them to perform many functions simultaneously. Scientists have used optical equipment to build simple neural nets, but those setups required tabletops full of sensitive mirrors and lenses, and for years photonic processing was dismissed as impractical.
Now, researchers at the Massachusetts Institute of Technology (MIT), in Cambridge, Massachusetts, have managed to condense much of that equipment to a microchip just a few millimeters across.
The new chip is made of silicon, and it simulates a network of 16 neurons in four “layers” of four. Data enters the chip in the form of a laser beam split into four smaller beams. The brightness of each entering beam signifies a different number, or…
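The arrangement the article describes—input numbers encoded as the brightness of four beams, passed through four successive four-neuron layers—can be sketched in software. The sketch below is a loose analogy, not the chip's actual design: the weight matrices are hypothetical stand-ins for the chip's programmable optical mesh, and the `tanh` nonlinearity is an assumption (in practice a nonlinear step would be applied between layers by other means).

```python
import numpy as np

rng = np.random.default_rng(0)

# Four input values, encoded as the brightness (intensity) of four beams.
inputs = np.array([0.2, 0.9, 0.5, 0.1])

def layer(intensities, weights):
    # Linear mixing of the beams (the part the optics perform),
    # followed by a saturating nonlinearity (an assumption here).
    return np.tanh(weights @ intensities)

signal = inputs
for _ in range(4):  # four "layers" of four neurons, as in the article
    # Hypothetical 4x4 weights standing in for the optical mesh settings.
    weights = rng.uniform(-1, 1, size=(4, 4))
    signal = layer(signal, weights)

print(signal.shape)
```

The appeal of doing this optically is that the matrix multiplication in each layer—the expensive step on a conventional processor—happens passively as the light propagates through the chip.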