What is the computational complexity of a single neuron?


Our mushy brains seem a far cry from the solid silicon chips in computer processors, but scientists have a long history of comparing the two. As Alan Turing put it in 1952: "We are not interested in the fact that the brain has the consistency of cold porridge." In other words, the medium doesn't matter; what matters is the computational power.

Today, the most powerful artificial intelligence systems use a type of machine learning called deep learning. Their algorithms learn by processing massive amounts of data through hidden layers of interconnected nodes, known as deep neural networks. As the name suggests, deep neural networks were inspired by the real neural networks in the brain, and their nodes are modeled on real neurons (or at least on what neuroscientists knew about neurons in the 1950s, when an influential neuron model called the perceptron was born). Since then, our understanding of the computational complexity of single neurons has expanded dramatically, and biological neurons are known to be more complex than artificial ones. But how much more?

To find out, David Beniaguev, Idan Segev and Michael London, all at the Hebrew University of Jerusalem, trained an artificial deep neural network to mimic the computations of a simulated biological neuron. They showed that a deep neural network requires between five and eight layers of interconnected "neurons" to represent the complexity of a single biological neuron.

Even the authors did not anticipate this complexity. "I thought it would be simpler and smaller," Beniaguev said. He had expected that three or four layers would be enough to capture the computations performed within the cell.

Timothy Lillicrap, who designs decision-making algorithms at DeepMind, the Google-owned AI company, said the new result suggests it may be necessary to rethink the old tradition of loosely comparing a neuron in the brain to a neuron in the context of machine learning. "This paper really helps promote more careful thinking about this issue, and to try to figure out how far you can take these analogies," he said.

The most basic analogy between artificial and real neurons involves how they handle incoming information. Both kinds of neurons receive incoming signals and, based on that information, decide whether to send their own signal to other neurons. While artificial neurons rely on a simple calculation to make this decision, decades of research have shown that the process is far more complicated in biological neurons. Computational neuroscientists use an input-output function to model the relationship between the inputs received by a biological neuron's long treelike branches, called dendrites, and the neuron's decision to send out a signal.
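The "simple calculation" inside an artificial neuron can be sketched in a few lines. The following is a minimal perceptron-style neuron: a weighted sum of inputs followed by a threshold. The weights and bias here are illustrative values (chosen so the unit fires only when both inputs are active), not anything from the paper.

```python
import numpy as np

def perceptron(inputs, weights, bias):
    """Classic artificial neuron: a weighted sum of the inputs,
    then a threshold decides whether to 'fire' (1) or stay silent (0)."""
    return int(np.dot(inputs, weights) + bias > 0)

# Hypothetical weights: this unit fires only when both inputs are active
w = np.array([0.6, 0.6])
b = -1.0
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, "->", perceptron(np.array(x), w, b))
```

A biological neuron's input-output function, by contrast, depends on where and when thousands of inputs arrive along the dendritic tree, which is what makes it so much harder to capture.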

It is this function that the authors of the new work taught an artificial deep neural network to imitate in order to determine its complexity. They began with a massive simulation of the input-output function of a type of neuron with distinct trees of dendritic branches at its top and bottom, known as a pyramidal neuron, from a rat's cortex. They then fed the simulation into a deep neural network that had up to 256 artificial neurons in each layer, and kept increasing the number of layers until the network achieved 99% accuracy at the millisecond level between the input and output of the simulated neuron. The deep neural network successfully predicted the behavior of the neuron's input-output function with at least five, but no more than eight, artificial layers. In most of the networks, that equated to about 1,000 artificial neurons for just one biological neuron.
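The depth search described above can be sketched with a toy stand-in: train small multilayer networks of increasing depth on a fixed nonlinear target function and watch the fit improve. Everything below is an illustrative assumption, not the paper's setup: the target function, layer width, learning rate, and step count are arbitrary, and a real neuron's input-output function is vastly harder to fit.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(depth, width, n_in, n_out):
    """Random He-initialized weights for an MLP with `depth` hidden layers."""
    sizes = [n_in] + [width] * depth + [n_out]
    return [(rng.normal(0, np.sqrt(2 / m), (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Return every layer's activations; hidden layers use ReLU."""
    acts = [x]
    for w, b in params[:-1]:
        x = np.maximum(x @ w + b, 0)
        acts.append(x)
    w, b = params[-1]
    acts.append(x @ w + b)  # linear output layer
    return acts

def train(params, x, y, lr=1e-2, steps=2000):
    """Full-batch gradient descent on mean squared error."""
    for _ in range(steps):
        acts = forward(params, x)
        grad = 2 * (acts[-1] - y) / len(x)  # d(MSE)/d(output)
        for i in range(len(params) - 1, -1, -1):
            w, b = params[i]
            gw = acts[i].T @ grad
            gb = grad.sum(axis=0)
            grad = grad @ w.T
            if i > 0:
                grad *= acts[i] > 0  # backprop through ReLU
            params[i] = (w - lr * gw, b - lr * gb)
    return params

# Toy stand-in for a neuron's input-output function
x = rng.uniform(-1, 1, (512, 2))
y = np.sin(3 * x[:, :1]) * np.cos(3 * x[:, 1:])

for depth in range(1, 6):
    p = train(make_mlp(depth, 32, 2, 1), x, y)
    mse = float(np.mean((forward(p, x)[-1] - y) ** 2))
    print(f"depth {depth}: training MSE {mse:.4f}")
```

The study's procedure is the same in spirit: fix an accuracy criterion (99% at millisecond resolution), then find the smallest depth at which the network meets it.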

Neuroscientists now know that the computational complexity of a single neuron, like the pyramidal neuron at left, relies on the dendritic branches that are bombarded with incoming signals. Before a neuron decides whether to send its own signal, called a "spike," these inputs cause local voltage changes, represented here by the neuron's changing colors (red for high voltage, blue for low voltage). This neuron spikes three times, as shown by the traces of individual branches at right, where the colors represent the dendrites' locations from top (red) to bottom (blue).

Video: David Beniaguev

"[The result] forms a bridge from biological neurons to artificial neurons," said Andreas Tolias, a computational neuroscientist at Baylor College of Medicine.

But the study's authors caution that this is not yet a straightforward correspondence. "The relationship between how many layers you have in a neural network and the complexity of the network is not obvious," London said. So we can't really say how much more complexity is gained by moving from, say, four layers to five. Nor can we say that the need for 1,000 artificial neurons means that a biological neuron is exactly 1,000 times as complex as an artificial one. Ultimately, it's possible that using exponentially more artificial neurons within each layer would eventually yield a deep neural network with a single layer, but it would likely require much more data and time for the algorithm to learn.
