An overview of brain-like computing: Architecture, applications, and future trends

Introduction

Achieving artificial intelligence, one of mankind's major goals, has long been a subject of heated debate. Since the Dartmouth Conference in 1956 (McCarthy et al., 2006), the development of AI has gone through three waves, which draw on four basic schools of thought: symbolism, connectionism, behaviorism, and the statistical school. Each of these ideas captures some characteristics of “intelligence” from a different angle, but each surpasses the human brain only partially, in particular functions. In recent years, computer hardware has matured, and deep learning has revealed its huge potential (Huang Y. et al., 2022; Yang et al., 2022). In 2016, AlphaGo defeated the ninth-dan Go master Lee Sedol, marking the peak of the third wave of the artificial intelligence technology revolution.

The realization of AI has also become a focal point in the competition among national powers. In 2017, China released and implemented its New Generation Artificial Intelligence Development Plan. In June 2019, the United States released the latest version of its National Artificial Intelligence Research and Development Strategic Plan (Amundson et al., 1911). Europe has likewise identified AI as a development priority: in 2016, the European Commission proposed a legislative motion on AI; in 2018, it submitted the report European Artificial Intelligence (Delponte and Tamburrini, 2018) and published the Coordinated Plan on Artificial Intelligence with the theme “Made in Europe with Artificial Intelligence.”

Achieving artificial intelligence requires more powerful information-processing capabilities, but the current classical computer architecture cannot cope with the huge volume of data to be computed. The classical computer system has encountered two major bottlenecks in its development: the storage wall effect caused by the von Neumann architecture, and the expected failure of Moore's law within the next few years. On the one hand, the traditional processor architecture is inefficient and energy-intensive, and when dealing with intelligent problems in real time it is difficult to construct suitable algorithms for processing unstructured information. In addition, the mismatch between the rate at which programs and data are transferred back and forth and the rate at which the central processor processes information leads to the storage wall effect. On the other hand, as chip feature sizes approach the size of a single atom, devices are nearing the limits of physical miniaturization, so the cost of further performance gains rises and the technology becomes more difficult to implement. Researchers have therefore placed their hopes on brain-like computing to break through the current technical bottlenecks.

Early research in brain-like computing followed the traditional engineering route: first understand how the human brain works, then develop a neuromorphic computer based on that theory. But after more than a decade of research, progress in brain science remained minimal, so the path of theory before technology was abandoned by mainstream brain-like computing research. Looking back at human development, many technologies have preceded their theories: in the case of the airplane, for example, the physical object was built first, and research to refine the theory followed. On this basis, researchers adopted a structural analog of the brain: using existing brain-science knowledge and technology to simulate the structure of the human brain, and refining the theory after success.

This article first introduces the research significance of brain-like computing in general terms. We then summarize its research history, compare the current research progress, and offer analysis and an outlook. The article structure is shown in Figure 1.


Figure 1. The structure of the article: the analysis of relevant models, the establishment of related platforms, the implementation of related applications, and challenges and prospects.

Progress in brain-like computing

Brain-like computers use spiking neural networks (SNNs) instead of the von Neumann architecture of classical computers and use micro- and nano-optoelectronic devices to simulate the information-processing characteristics of biological neurons and synapses (Huang, 2016). Brain-like computing is not a new idea: in 1943, before the invention of the computer, Turing and Shannon debated the imagined “computer” (Hodges and Turing, 1992). In 1950, Turing discussed it in Computing Machinery and Intelligence (Neuman, 1958). In 1958, von Neumann also discussed neurons, neural impulses, neural networks, and the information-processing mechanisms of the human brain in The Computer and the Brain (von Neumann, 1958). However, owing to the limitations of the technologies of the time and the ideal future promised by Moore's law, brain-like computing did not receive enough attention. Around 2005, it became generally believed that Moore's law would come to an end around 2020, and researchers began to shift their focus to brain-like computing. Brain-like computing then officially entered an accelerated period of development.

A summary of the evolution of brain-like computing (Mead, 1989; Gu and Pan, 2015; Andreopoulos et al., 2018; Boybat et al., 2018; Gleeson et al., 2019) is shown in Figure 2.


Figure 2. Brain-like computing evolved from conceptual advancement, through technical hibernation, to accelerated development driven by the anticipated end of Moore's law.

Brain-like computing models

There are three main aspects of brain-like computing: simulation of neurons, information encoding of neural systems, and learning algorithms of neural networks.

Neuron model

Neurons are the basic structural and functional units of the human brain's nervous system. The models most commonly used in SNN construction are the Hodgkin–Huxley (HH) model (Burkitt, 2006), the integrate-and-fire (IF) model (Abbott, 1999; Burkitt, 2006), the leaky integrate-and-fire (LIF) model (Gerstner and Kistler, 2002), the Izhikevich model (Izhikevich, 2003; Valadez-Godínez et al., 2020), and the adaptive exponential integrate-and-fire (AdExIF) model (Brette and Gerstner, 2005), among others.

1) HH model

The HH model is the closest to biological reality in describing neuronal features and is widely used in computational neuroscience. It can simulate many neuronal functions, such as activation, inactivation, action potentials, and ion channels. The HH model describes neuronal electrical activity in terms of ionic activity: the cell membrane contains sodium, potassium, and leak channels, and each ion channel has different gating proteins that restrict the passage of ions, so the membrane's permeability differs for each kind of ion. Because of this, neurons exhibit rich electrical activity. Mathematically, the combined effect of the gating proteins is equivalent to an ion-channel conductance. The conductance of each ion channel, as a dependent variable, varies with the channel's activation and inactivation variables. The current through an ion channel is determined by its conductance, its reversal potential, and the membrane potential, and the total current consists of the leak, sodium, and potassium currents plus the current due to membrane-potential changes. The HH model therefore also equates the cell membrane to an equivalent circuit.
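To make the equivalent-circuit view concrete, here is a minimal Python sketch that integrates the HH equations with forward Euler. The conductances, reversal potentials, and gating rate functions are the standard textbook squid-axon parameterization, and the injected current I_ext is an arbitrary value chosen only so that the model spikes; none of these numbers come from this article.

```python
import numpy as np

# Standard HH squid-axon parameters (mV, uA/cm^2, mS/cm^2, uF/cm^2)
C_m = 1.0                             # membrane capacitance
g_Na, g_K, g_L = 120.0, 36.0, 0.3     # maximal channel conductances
E_Na, E_K, E_L = 50.0, -77.0, -54.4   # reversal potentials

# Voltage-dependent opening (alpha) and closing (beta) rates of the gates
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0                    # time step and duration (ms)
steps = int(T / dt)
V, m, h, n = -65.0, 0.05, 0.6, 0.32   # resting initial state
I_ext = 10.0                          # injected current (demo value)

trace = np.empty(steps)
for t in range(steps):
    # Channel currents: conductance (gated by m^3 h and n^4) times driving force
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    # Forward-Euler update of membrane potential and gating variables
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
    trace[t] = V

print(f"peak membrane potential: {trace.max():.1f} mV")  # spikes overshoot toward +40 mV
```

Each gating variable relaxes toward a voltage-dependent steady state, which is what produces the activation and inactivation behavior described above.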

2) IF and LIF models

In 1907, Lapicque proposed the integrate-and-fire neuron model (Lapicque, 1907). Depending on how the neuronal membrane potential varies with time, it can be divided into the IF model and the LIF model. The IF model describes the membrane potential of a neuron driven by an input current, as shown in Equation 1:

$C_m \frac{dV}{dt} = I$    (1)

where $C_m$ represents the neuronal membrane capacitance, which determines the rate of change of the membrane potential, and $I$ represents the neuronal input current. The model is called the leak-free IF model because the membrane potential depends only on the input current: when the input current is zero, the membrane potential remains unchanged. Its discrete form is shown in Equation 2:

$V(t) = V(t - \Delta t) + \frac{\Delta t}{C_m} I(t)$    (2)

where Δt is the step length of discrete sampling.

In contrast, the LIF model adds a simulation of the neuron's voltage leakage. When there is no input current for a certain period of time, the membrane voltage gradually leaks back to the resting potential, as shown in Equation 3, which extends Equation 1 with a leak term:

$C_m \frac{dV}{dt} = g_{leak}(E_{rest} - V) + I$    (3)

where $g_{leak}$ is the leak conductance of the neuron and $E_{rest}$ is its resting potential. Neuroscience studies have shown that the binding of neurotransmitters to receptors in the postsynaptic membrane primarily affects the conductance of the postsynaptic neuron, thereby altering its membrane potential. It is therefore more biologically reasonable to expand the input current $I$ in Equation 1 into excitatory and inhibitory currents described by conductances. However, both the IF and LIF neurons reset directly to the resting potential after firing and are thus unable to retain any trace of previous spikes.
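As an illustration, the following Python sketch steps Equation 3 forward in time using the discrete update of Equation 2. The leak conductance, threshold, and input-current values here are illustrative assumptions rather than values taken from the cited works.

```python
# Illustrative LIF parameters (assumed demo values, not from the cited works)
C_m = 1.0        # membrane capacitance
g_leak = 0.1     # leak conductance
E_rest = -65.0   # resting potential (mV)
V_th = -50.0     # firing threshold (mV)
dt, T = 0.1, 100.0
steps = int(T / dt)

V = E_rest
spikes = []
for t in range(steps):
    I = 2.0 if 20.0 <= t * dt <= 80.0 else 0.0   # step input current
    # Equation 3: the leak pulls V back toward E_rest while I charges the membrane
    V += dt * (g_leak * (E_rest - V) + I) / C_m
    if V >= V_th:                 # threshold crossing: emit a spike
        spikes.append(t * dt)
        V = E_rest                # reset to rest, discarding all pre-spike history

print(f"{len(spikes)} spikes at times (ms): {spikes}")
```

The hard reset in the last step is exactly the limitation noted above: after each spike, the neuron forgets everything that happened before it fired.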

3) Izhikevich model

In 2003, Eugene M. Izhikevich proposed the Izhikevich model from the perspective of nonlinear dynamical systems (Izhikevich, 2004). It can reproduce the firing behavior of a variety of biological neurons at a computational cost close to that of the LIF model, as shown in Equation 4:

$\frac{dV}{dt} = 0.04V^2 + 5V + 140 - U + I$    (4)

where $U$ is the membrane recovery variable, which evolves as $\frac{dU}{dt} = a(bV - U)$. If $V \geq 30$ mV, then $V$ is reset to $c$ and $U$ to $U + d$, where $a$, $b$, $c$, and $d$ are parameters that determine the neuron's firing pattern.
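A minimal Python sketch of Equation 4 together with the recovery variable and reset rule follows. The parameter set (a, b, c, d) = (0.02, 0.2, -65, 8) is Izhikevich's regular-spiking example; the constant input current is an arbitrary demonstration value.

```python
# Regular-spiking parameter set from Izhikevich (2003)
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt, T = 0.5, 200.0          # ~0.5 ms steps, as in Izhikevich's examples
steps = int(T / dt)

V, U = -65.0, b * -65.0     # initial membrane potential and recovery variable
I = 10.0                    # constant input current (demo value)
spike_times = []

for t in range(steps):
    # Equation 4 plus the recovery equation dU/dt = a(bV - U)
    V += dt * (0.04 * V**2 + 5.0 * V + 140.0 - U + I)
    U += dt * a * (b * V - U)
    if V >= 30.0:           # reset rule: V <- c, U <- U + d
        spike_times.append(t * dt)
        V, U = c, U + d

print(f"{len(spike_times)} spikes in {T:.0f} ms")
```

Changing only the four parameters reproduces other firing patterns (e.g., chattering or fast-spiking), which is what gives the model its versatility at near-LIF cost.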
