What are the electrochemical signals that connect neurons with each other?

In the presynaptic neuron, a substance, the neurotransmitter, is produced and stored in vesicles to be released on demand.

From: Prostheses for the Brain, 2021

Active Zone☆

M.Y. Wong, P.S. Kaeser, in Reference Module in Biomedical Sciences, 2014

Introduction

Synapses are highly specialized contacts between presynaptic neurons and postsynaptic cells. In a nerve terminal, the presynaptic action potential is translated into a membrane fusion reaction that releases neurotransmitters. Neurotransmitters are released through exocytosis of synaptic vesicles (Katz, 1969). Within a presynaptic terminal, synaptic vesicle exocytosis occurs within less than a millisecond after the arrival of an action potential and is restricted to spots that are exactly opposed to postsynaptic receptors. This temporal and spatial precision minimizes the time for diffusion of transmitters to their receptors and is critical for the speed at which neural circuits operate (Kaeser and Regehr, 2014). This chapter discusses the membrane specialization in a nerve terminal called the active zone that restricts and controls fusion in this very precise manner.

Active zones appear as dense structures in electron micrographs. They are composed of a protein network anchored to the presynaptic plasma membrane (Sudhof, 2012). A key function of the active zone is to recruit and dock synaptic vesicles to their future sites of release, close to where presynaptic calcium channels are localized. This occurs through interactions with proteins on synaptic vesicles and on the plasma membrane, and with motor and cytoskeletal proteins. Synaptic vesicles can then be primed on site to turn fusion-competent through build-up of fusion machinery. In addition to these fundamental functions in generating releasable vesicles close to sites of calcium influx, active zones are also essential for presynaptic assembly during synapse development and maturation. It has become clear that they couple the release machinery to transsynaptic adhesion molecules, but the molecular nature of this coupling is not yet understood. More recent work also proposes that neuronal activity and various presynaptic signaling pathways regulate the density, expression and molecular function of individual active zone components. Dynamic regulation of the active zone may play important roles in controlling short- or long-term synaptic plasticity.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B978012801238304486X

Galantamine☆

Samia Kausar, ... Amin Badshah, in Reference Module in Biomedical Sciences, 2019

Effects on Acetylcholinesterase (AChE) Activity

The neurotransmitter acetylcholine (ACh) is formed in the pre-synaptic neuron and released into the synaptic cleft, where it reversibly binds to different classes of acetylcholine receptors (Geldmacher and Whitehouse, 1997). These receptors are nicotinic and muscarinic receptors. Galantamine is a selective and competitive inhibitor of acetylcholinesterase (AChE), the enzyme responsible for the hydrolysis of ACh at the neuromuscular junction, at peripheral and central cholinergic synapses, and in parasympathetic target organs. As galantamine binds to AChE, catabolism of acetylcholine slows down, resulting in elevated levels of acetylcholine in the synaptic cleft. X-ray crystallographic results indicated that galantamine binds reversibly to the active site of AChE (Greenblatt et al., 1999). The drug binds at the base of the active-site gorge, interacting through hydrogen bonding with two binding sites: the choline-binding site [amino acid 84 (tryptophan)] and the acyl-binding pocket (amino acids 288 and 290; both phenylalanine) (Farlow, 2003).

Ex vivo examination of postmortem human brain and fresh cortical biopsy samples showed IC50 values of 3.2 and 2.8 μmol/L for the frontal cortex and hippocampal regions of the brain, respectively (Thomsen et al., 1991a). Galantamine proved less potent than tacrine or physostigmine at inhibiting AChE and was 10-fold less potent at inhibiting brain AChE than erythrocyte AChE (Thomsen et al., 1991a) (Fig. 4). Galantamine displayed a 53-fold selectivity for AChE over butyrylcholinesterase (Thomsen and Kewitz, 1990; Thomsen et al., 1991b).


Fig. 4. Comparative drug concentrations required to inhibit AChE by 50% (IC50) in human brain tissue and erythrocytes (Czollner et al., 1998).

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128012383981741

Neuronal excitation

Andrej Kral, ... Hannes Maier, in Prostheses for the Brain, 2021

Thresholds in excitation

Individual neurons function as integrators of inputs: activation of the presynaptic neuron ultimately causes postsynaptic potentials that sum at the passive membrane of the postsynaptic neuron. However, as these potentials propagate along the passive membrane, their amplitudes decrease due to its leaky nature (Fig. 4.10). At the point where the axon originates (called the "action potential initiation zone" or "trigger zone"), voltage-gated channels are found in large numbers. If the depolarization at this point is sufficient to generate an action potential, the action potential is propagated along the axonal membrane to the presynaptic elements. Thus, the neuron is a leaky integrator of inputs, whose temporal and spatial properties are defined by the time and length constants of the neuronal membrane. A significant portion of the charge injected into a neuron may exit the stimulated cell and may therefore not be effective. This also has implications for neuroprosthetic stimulation.
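The leaky attenuation described here can be illustrated with a toy calculation: in passive cable theory, a steady potential decays exponentially with distance according to the length constant. The function below is a hypothetical sketch, not the chapter's model, and the numbers are purely illustrative.

```python
import math

def psp_attenuation(v0_mv, distance_um, length_constant_um):
    """Amplitude of a passive potential after spreading a given
    distance along a leaky membrane: V(x) = V0 * exp(-x / lambda)."""
    return v0_mv * math.exp(-distance_um / length_constant_um)

# A 10 mV postsynaptic potential after 500 um of passive spread,
# with an (assumed) length constant of 1000 um:
print(round(psp_attenuation(10.0, 500.0, 1000.0), 2))  # 6.07 mV remain
```

An analogous exponential governs decay in time via the time constant; together the two constants set the temporal and spatial windows over which inputs can still sum effectively at the trigger zone.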

The excitation threshold itself, as well as the synaptic efficacy, can change over time in response to a repeated stimulus. It is assumed that these processes underlie the ability to learn (see Chapter 9). However, not all synapses learn equally fast. Synapses in the peripheral nervous system, or those connecting sensory organs to the brain, are as a rule highly active, must be very reliable, and therefore do not undergo plastic changes. Synapses in the central nervous system, particularly the cerebral cortex, are more plastic and change with learning.

For the present context of brain prostheses, the temporal and spatial constraints on excitation are of essential importance. These properties make clear that a constant electrical field, one that does not change in time or space, does not induce neuronal excitation. In such a steady state, the membrane keeps the constant transmembrane potential given by the concentrations of ions on both sides of the membrane. Excitation results only if the electrical field around the neuron can induce a change in transmembrane potential sufficient to generate action potentials. For this, gradients in time and space are required that are steeper than the membrane constants. Chapter 6 examines these in detail.

The neurons as described above form macroscopic structures: nerves, neuronal ganglia, the spinal cord, and the brain. They are embedded in structures that provide protection, nutrition, and oxygen. These macroscopic structures are the eventual targets for neuroprosthetic intervention.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128188927000031

Nature's Learning Rule

Bernard Widrow, ... Jose Krause Perin, in Artificial Intelligence in the Age of Neural Networks and Brain Computing, 2019

9 The Synapse

The connection linking neuron to neuron is the synapse. Signals flow in one direction, from the presynaptic neuron to the postsynaptic neuron, via the synapse, which acts as a variable attenuator. A simplified diagram of a synapse is shown in Fig. 1.16A [20]. As an element of neural circuits, it is a “two-terminal device.”


Figure 1.16. A synapse corresponding to a variable weight. (A) Synapse. (B) A variable weight.

There is a 0.02 μm gap between the presynaptic side and the postsynaptic side of the synapse, called the synaptic cleft. When the presynaptic neuron fires, a chemical called a neurotransmitter is injected into the cleft. Each activation pulse generated by the presynaptic neuron causes a finite amount of neurotransmitter to be injected into the cleft. The neurotransmitter persists only for a very short time, some being reabsorbed and some diffusing away. The average concentration of neurotransmitter in the cleft is proportional to the presynaptic neuron's firing rate.

Some of the neurotransmitter molecules attach to receptors located on the postsynaptic side of the cleft. The effect of this on the postsynaptic neuron is either excitatory or inhibitory, depending on the nature of the synapse and its neurotransmitter chemistry [20–24]. A synaptic effect results when neurotransmitter molecules attach to the receptors. The effect is proportional to the average amount of neurotransmitter present and the number of receptors. Thus, the effect of the presynaptic neuron on the postsynaptic neuron is proportional to the product of the presynaptic firing rate and the number of receptors present. The input signal to the synapse is the presynaptic firing rate, and the synaptic weight is proportional to the number of receptors. The weight or the synaptic “efficiency” described by Hebb is increased or decreased by increasing or decreasing the number of receptors. This can only occur when neurotransmitter is present [20]. Neurotransmitter is essential both as a signal carrier and as a facilitator for weight changing. A symbolic representation of the synapse is shown in Fig. 1.16B.

The effect of the action of a single synapse upon the postsynaptic neuron is actually quite small. Signals from thousands of synapses, some excitatory, some inhibitory, add in the postsynaptic neuron to create the (SUM) [20,25]. If the (SUM) of both the positive and negative inputs is below a threshold, the postsynaptic neuron will not fire and its output will be zero. If the (SUM) is greater than the threshold, the postsynaptic neuron will fire at a rate that increases with the magnitude of the (SUM) above the threshold. The threshold voltage within the postsynaptic neuron is a “resting potential” close to −70 mV. Summing in the postsynaptic neuron is accomplished by Kirchhoff addition.
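The summation-and-threshold behavior just described can be sketched in a few lines; the function name and the gain parameter below are hypothetical illustrations, not the authors' implementation.

```python
def firing_rate(inputs, weights, threshold=0.0, gain=1.0):
    """Postsynaptic firing rate: zero while the (SUM) of weighted
    excitatory and inhibitory inputs stays below threshold, then
    increasing with the amount by which the (SUM) exceeds it."""
    total = sum(w * x for w, x in zip(weights, inputs))  # the (SUM)
    return 0.0 if total <= threshold else gain * (total - threshold)

# Two excitatory inputs and one inhibitory input; (SUM) = 0.96:
print(round(firing_rate([1.0, 0.5, 0.8], [0.6, -0.4, 0.7], threshold=0.5), 2))  # 0.46
```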

Learning and weight changing can only occur in the presence of neurotransmitter in the synaptic cleft. Thus, there will be no weight change if the presynaptic neuron is not firing, that is, if the input signal to the synapse is zero. If the presynaptic neuron is firing, there will be weight change. The number of receptors will gradually increase (up to a limit) if the postsynaptic neuron is firing, that is, when the (SUM) of the postsynaptic neuron has a voltage above threshold. The synaptic membrane that the receptors are attached to will then have a voltage above threshold, since this membrane is part of the postsynaptic neuron. See Fig. 1.17. All this corresponds to Hebbian learning: neurons that fire together wire together. Extending Hebb's rule, if the presynaptic neuron is firing and the postsynaptic neuron is not, the postsynaptic (SUM) will be negative and below the threshold, the membrane voltage will be negative and below the threshold, and the number of receptors will gradually decrease.
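This receptor-count reading of Hebb's rule can be summarized in a small sketch (the function name, step size, and ceiling are made up for illustration):

```python
def update_receptors(n_receptors, pre_firing, post_above_threshold,
                     delta=1, n_max=1000):
    """Hebbian weight change via receptor count: no change without
    presynaptic firing (no neurotransmitter in the cleft); receptors
    increase when the postsynaptic side is above threshold, decrease
    when it is below (up to a ceiling, down to zero)."""
    if not pre_firing:
        return n_receptors
    if post_above_threshold:
        return min(n_receptors + delta, n_max)   # fire together, wire together
    return max(n_receptors - delta, 0)

print(update_receptors(100, pre_firing=True, post_above_threshold=True))   # 101
print(update_receptors(100, pre_firing=False, post_above_threshold=True))  # 100 (no change)
```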


Figure 1.17. A neuron, dendrites, and a synapse.

There is another mechanism having further control over the synaptic weight values, called synaptic scaling [26–30]. This natural mechanism is implemented chemically, for stability, to maintain the voltage of the (SUM) within an approximate range about two set points. This is done by scaling up or down all of the synapses supplying signal to a given neuron. There is a positive set point and a negative one, and they turn out to be analogous to the equilibrium points shown in Fig. 1.8. This kind of stabilization is called homeostasis and is a regulatory phenomenon found throughout living systems. The Hebbian-LMS algorithm exhibits homeostasis about the two equilibrium points, caused by reversal of the error signal at these equilibrium points. See Fig. 1.8. Slow adaptation over thousands of adapt cycles, over hours of real time, results in homeostasis of the (SUM).
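Synaptic scaling can be caricatured as one multiplicative correction applied to all of a neuron's input weights, nudging the (SUM) toward a set point. The sketch below uses a single set point and invented names and rates, omitting the separate positive and negative set points discussed above.

```python
def synaptic_scaling(weights, current_sum, set_point, rate=0.1):
    """Scale every synapse of a neuron up or down by one common
    factor so its (SUM) drifts toward the set point (homeostasis)."""
    factor = 1.0 + rate * (set_point - current_sum)
    return [w * factor for w in weights]

# (SUM) running too high at 2.0 versus a set point of 1.0:
weights = synaptic_scaling([0.5, 0.5], current_sum=2.0, set_point=1.0)
print([round(w, 3) for w in weights])  # [0.45, 0.45] -- all synapses scaled down together
```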

Fig. 1.17 shows an exaggerated diagram of a neuron, dendrites, and a synapse. This diagram suggests how the voltage of the (SUM) in the soma of the postsynaptic neuron can, by ohmic conduction, determine the voltage of the membrane.

Activation pulses are generated by a pulse generator in the soma of the postsynaptic neuron. The pulse generator is energized when the (SUM) exceeds the threshold. The pulse generator triggers the axon to generate electrochemical waves that carry the neuron's output signal. The firing rate of the pulse generator is controlled by the (SUM). The output signal of the neuron is its firing rate.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128154809000013

Alkaloids as Potential Multi-Target Drugs to Treat Alzheimer's Disease

Josélia A. Lima, Lidilhone Hamerski, in Studies in Natural Products Chemistry, 2019

Cholinergic Neurotransmission in the Central Nervous System

The transmission of information between cholinergic neurons (Fig. 8.3) occurs through the release of ACh by presynaptic neurons into the synaptic cleft. ACh diffuses and binds to nicotinic acetylcholine receptors (nAChR) and muscarinic acetylcholine receptors (mAChR) on the postsynaptic neurons, as illustrated in Fig. 8.3. Much of the released ACh (about 90%) is rapidly hydrolyzed into choline and acetate by AChE, which is found in soluble form in the synaptic cleft or bound to the basement membrane. The remainder (about 10%) diffuses across the synaptic cleft and reaches the postsynaptic neuron, where it interacts with cholinergic receptors, activating them. After dissociating from the receptors, this ACh too is rapidly hydrolyzed by AChE.


Fig. 8.3. Representation of cholinergic neurotransmission. ACh is synthesized in the presynaptic neuron, released into the synaptic cleft, and moves to the postsynaptic neuron, where it binds to cholinergic receptors, activating them. ACh is hydrolyzed by AChE in the synaptic cleft.

ACh is synthesized in the neuronal cytoplasm from choline and acetyl coenzyme A (AcCoA) by the catalytic action of choline acetyltransferase (ChAT), a cytosolic protein found only in cholinergic neurons. ACh is then stored in synaptic vesicles (on average, 8000 molecules per vesicle), which mature in the axon and are transported, by axonal transport via microtubules, to the axon terminal, where they anchor in regions called active zones (AZ), as shown in Fig. 8.3. The vesicles anchored in AZ are preferentially released when cytoplasmic calcium (Ca2+) levels increase due to depolarization induced by the generation of an action potential [45].

Any interference in the steps of synthesis, storage, or release of ACh leads to a reduction in the release of this neurotransmitter and, consequently, to a failure in the transmission of information.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780444641830000087

Noise Exploitation and Adaptation in Neuromorphic Sensors

Thamira Hindo, Shantanu Chakrabartty, in Engineered Biomimicry, 2013

2.2 Organization of neurobiological sensory systems

The typical structure of a neurobiological sensory system is shown in Figure 2.2. The system consists of an array of sensors (mechanoreceptor, optical, or auditory) that are directly coupled to a group of sensory neurons, also referred to as afferent neurons. Depending on the type of sensory system, the sensors (skin, hair, retina, cochlea) convert input stimuli such as sound, mechanical force, temperature, or pressure into electrical stimuli. Each of the afferent neurons could potentially receive electrical stimuli from multiple sensors (as shown in Figure 2.2), an organization that is commonly referred to as the sensory receptive field. For example, in the electric fish, the electro-sense receptors distributed on the skin detect a disruption in the electric field (generated by the fish itself) that corresponds to the movement and identification of prey. The receptive field in this case corresponds to electrical intensity spots that are then encoded by the afferent neurons using spike trains [10]. The neurons are connected with each other through specialized junctions known as synapses. While the neurons (afferent or non-afferent) form the core signal-processing unit of the sensory system, the synapses are responsible for adaptation by modulating the strength of the connection between two neurons. The dendrites of the neurons transmit and receive electrical signals to and from other neurons, the soma receives and integrates the electrical stimuli, and the axon, which is an extension of the soma, transmits the generated signals or spikes to other neurons and higher layers.


Figure 2.2. Organization of a generic neurobiological sensory system. Images adapted from Wikipedia and Ref. 11.

The underlying mechanism of spike, or action-potential, generation is the unbalanced movement of ions across a membrane, as shown in Figure 2.3, which alters the potential difference between the inside and the outside of the neuron. In the absence of any stimuli to the neuron, the potential inside the membrane with respect to the potential outside the membrane is about −65 mV, also referred to as the resting potential. This potential is increased by the influx of sodium ions (Na+) into the cell, causing depolarization, whereas the potential is decreased by the efflux of potassium ions (K+) out of the cell, causing hyperpolarization. Once the action potential is generated, the Na+ ion channels cannot reopen until a built-up potential is formed across the membrane. The delay in reopening the sodium channels results in a time period called the refractory period, as shown in Figure 2.3, during which the neuron cannot spike.


Figure 2.3. Mechanism of spike generation and signal propagation through synapses and neurons. (Images from Wikipedia.)

The network of afferent spiking neurons can be viewed as an analog-to-digital converter, where the network faithfully encodes different features of the input analog sensory stimuli using a train of spikes (that can be viewed as a binary sequence). Note that the organization of the receptive field introduces significant redundancy in the firing patterns produced by the afferent neurons. At the lower level of processing, this redundancy makes the encoding robust to noise, but as the spike trains are propagated to higher processing layers this redundancy leads to degradation in energy efficiency. Therefore, the network of afferent neurons self-optimizes and adapts to the statistics of the input stimuli using inhibitory synaptic connections.

The process of inhibition among neurons in the same layer is referred to as lateral inhibition, whereby the objective is to optimize (reduce) the spiking rate of the network while faithfully capturing the information embedded in the receptive field. This idea is illustrated in Figure 2.2, where the afferent neural network emphasizes the discriminatory information present in the input spike trains while inhibiting the rest. This not only reduces the rate of spike generation at the higher layer of the receptive field (leading to improved energy efficiency), but also optimizes the information transfer that facilitates real-time recognition and motor operation. Indeed, lateral inhibition and synaptic adaptation are related to the concept of noise shaping. Before we discuss the role of noise in neurobiological sensory systems, let us introduce some mathematical models that are commonly used to capture the dynamics of spike generation and spike-based information encoding.

2.2.1 Spiking models of neuron and neural coding

As a convention, the neuron transmitting or generating a spike incident onto a synapse is referred to as the presynaptic neuron, whereas the neuron receiving the spike from the synapse is referred to as the postsynaptic neuron (see Figure 2.3). There are two types of synapses typically encountered in neurobiology: excitatory synapses and inhibitory synapses. For excitatory synapses, the membrane potential of the postsynaptic neuron (referred to as the excitatory postsynaptic potential, or EPSP) increases, whereas for inhibitory synapses, the membrane potential of the postsynaptic neuron (referred to as the inhibitory postsynaptic potential, or IPSP) decreases. It is important to note that the underlying dynamics of the EPSP, IPSP, and action potential are complex, and several texts have been dedicated to the underlying mathematics [12]. Therefore, for the sake of brevity, we only describe a simple integrate-and-fire neuron model that has been extensively used for the design of neuromorphic sensors [9] and is sufficient to explain the noise-exploitation techniques described in this chapter.

We first define a spike train ρ(t) using a sequence of time-shifted Dirac delta functions as

(2.1)  ρ(t) = ∑_{m=1}^{∞} δ(t − t_m),

where δ(t) = 0 for t ≠ 0 and ∫_{−∞}^{+∞} δ(τ) dτ = 1. In Eq. (2.1), a spike is generated when t equals a firing time t_m of the neuron. If the somatic (or membrane) potential of the neuron is denoted by v(t), then the dynamics of the integrate-and-fire model can be summarized using the following first-order differential equation:

(2.2)  dv(t)/dt = −v(t)/τ_m − ∑_{j=1}^{N} W_j [h(t) ∗ ρ_j(t)] + x(t),

where N denotes the number of presynaptic neurons, Wj is a scalar transconductance representing the strength of the synaptic connection between the jth presynaptic neuron and the postsynaptic neuron, τm is the time constant that determines the maximum firing rate, h(t) is a presynaptic filtering function that filters the spike train ρj(t) before it is integrated at the soma, and * denotes a convolution operator. The variable x(t) in Eq. (2.2) denotes an extrinsic contribution to the membrane current, which could be an external stimulation current. When the membrane potential v(t) reaches a certain threshold, the neuron generates a spike or a train of spikes. Again, different chaotic models have been proposed that can capture different types of spike dynamics. For the sake of brevity, specific details of the dynamical models can be found in Ref. 13. We next briefly describe different methods by which neuronal spikes encode information.
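A minimal forward-Euler discretization of the integrate-and-fire dynamics in Eq. (2.2) can look as follows. In this sketch the filtered term h(t) ∗ ρ_j(t) is collapsed to the instantaneous spike value, the synaptic sign convention is absorbed into W_j (positive = excitatory), and the threshold, reset rule, and constants are illustrative assumptions rather than values from the chapter.

```python
def lif_step(v, spikes, weights, x=0.0, tau_m=20.0, dt=1.0, v_th=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.
    Returns (new_membrane_potential, spiked)."""
    syn = sum(w * s for w, s in zip(weights, spikes))  # stands in for h(t) * rho_j(t)
    v = v + dt * (-v / tau_m + syn + x)                # leak + synaptic + external drive
    if v >= v_th:
        return 0.0, True                               # fire and reset
    return v, False

# One presynaptic neuron firing on every step with weight 0.06:
v, n_spikes = 0.0, 0
for t in range(100):
    v, spiked = lif_step(v, spikes=[1.0], weights=[0.06])
    n_spikes += spiked
print(n_spikes)  # 2 spikes over the 100 steps
```

The membrane charges toward the steady-state value τ_m times the net input; spiking occurs only when that value exceeds the threshold, which is why a weight of 0.05 (steady state exactly 1.0) would never fire in this sketch.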

The simplest form of neural coding is the rate-based encoding [13] that computes the instantaneous spiking rate of the ith neuron Ri(t) according to

(2.3)  R_i(t) = (1/T) ∫_t^{t+T} ρ_i(τ) dτ,

where ρi(t) denotes the spike train generated by the ith neuron and is given by Eq. (2.1), and T is the observation interval over which the integral or spike count is computed. Note that the instantaneous spiking rate Ri(t) does not capture any information related to the relative phase of the individual spikes, and hence it embeds significant redundancy in the encoding. However, at the sensory layer, this redundancy plays a critical role because the stimuli need to be precisely encoded and the encoding has to be robust to the loss or temporal variability of individual spikes.
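For a recorded list of spike times, Eq. (2.3) reduces to counting spikes in the window and dividing by its length. A small sketch with hypothetical spike times:

```python
def spike_rate(spike_times, t_start, window):
    """Rate over [t_start, t_start + window): spike count divided by
    window length, i.e., Eq. (2.3) applied to a discrete spike train."""
    count = sum(1 for tm in spike_times if t_start <= tm < t_start + window)
    return count / window

# Three of the four spikes fall inside the first 100 ms window:
print(round(spike_rate([0.01, 0.04, 0.07, 0.12], t_start=0.0, window=0.1), 6))  # 30.0 Hz
```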

Another mechanism by which neurons improve reliability and transmission of spikes is through the use of bursting, which refers to trains of repetitive spikes followed by periods of silence. This method of encoding has been shown to improve the reliability of information transmission across unreliable synapses [14] and, in some cases, to enhance the SNR of the encoded signal. Modulating the bursting pattern also provides the neuron with more ways to encode different properties of the stimulus. For instance, in the case of the electric fish, a change in bursting signifies a change in the states (or modes) of the input stimuli, which could distinguish different types of prey in the fish’s environment [14].

Whether bursting is used or not, the main disadvantage of rate-based encoding is that it is intrinsically slow. The averaging operation in Eq. (2.3) requires that a sufficient number of spikes be generated within T to reliably compute Ri(t). One possible approach to improve the reliability of rate-based encoding is to compute the rate across a population of neurons where each neuron is encoding the same stimuli. The corresponding rate metric, also known as the population rate R(t), is computed as

(2.4)  R(t) = (1/N) ∑_{i=1}^{N} R_i(t),

where N denotes the number of neurons in the population. By using the population rate, the stimuli can now be effectively encoded at a signal-to-noise ratio that is N^(1/2) times higher than that of a single neuron [15]. Unfortunately, even an improvement by a factor of √N is not efficient enough to encode fast-varying sensory stimuli in real time. Later, in Section 2.4, we show that lateral inhibition between the neurons can enhance the SNR of a population code by a factor of N^2 [16] through the use of noise shaping.
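Eq. (2.4) and the √N noise reduction can be probed with a quick simulation; the noise level and population size below are arbitrary assumptions chosen for illustration.

```python
import random

def population_rate(rates):
    """Eq. (2.4): the average of the single-neuron rates."""
    return sum(rates) / len(rates)

random.seed(0)
true_rate, n = 30.0, 100
# Each neuron reports the same stimulus with independent noise (sigma = 5 Hz);
# averaging n estimates shrinks the error roughly by a factor of sqrt(n):
noisy = [true_rate + random.gauss(0.0, 5.0) for _ in range(n)]
print(round(population_rate(noisy), 1))
```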

We complete the discussion of neural encoding by describing other forms of codes: time-to-first spike, phase encoding, and neural correlations and synchrony. We do not describe the mathematical models for these codes but illustrate the codes using Figure 2.4d.


Figure 2.4. Different types of neural coding: (a) rate, (b) population rate, (c) burst coding, (d) time-to-spike pulse code, (e) phase pulse code, and (f) correlation and synchrony-based code. Adapted from Ref. 13.

The time-to-spike is defined as the time difference between the onset of the stimulus and the time when a neuron produces its first spike. The time difference is inversely proportional to the strength of the stimulus, so this code can encode real-time stimuli more efficiently than a rate-based code. Time-to-spike coding is efficient because most of the information is conveyed during the first 20–50 ms [17, 18]. However, time-to-first-spike encoding is susceptible to channel noise and spike loss; therefore, this type of encoding is typically observed in the cortex, where the spiking rate could be as low as one spike per second.
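A toy latency encoder matching this description (delay inversely proportional to stimulus strength, saturating for weak stimuli) might look as follows; the constant k and the cap are invented for illustration.

```python
def time_to_first_spike(stimulus, k=0.005, t_max=0.05):
    """Latency code: first-spike delay = k / stimulus strength,
    saturating at t_max for weak or absent stimuli."""
    if stimulus <= 0.0:
        return t_max                  # no stimulus: no early spike
    return min(k / stimulus, t_max)

print(time_to_first_spike(1.0))   # 0.005 -- strong stimulus, early spike
print(time_to_first_spike(0.25))  # 0.02  -- weaker stimulus, later spike
```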

An extension of the time-to-spike code is the phase code, which is applicable for a periodic stimulus. An example of phase encoding is shown in Figure 2.4e, where the spiking rate is shown to vary with the phase of the input stimulus. Yet another kind of neural code that has attracted significant interest from the neuroscience community uses the information encoded by correlated and synchronous firing between groups of neurons [13]. The response is referred to as synchrony and is illustrated in Figure 2.4f, where a sequence of spikes generated by neuron 1, followed by neuron 2 and neuron 3, encodes a specific feature of the input stimulus. Thus information is encoded in the trajectory of the spike pattern, which can provide a more elaborate mechanism for encoding different stimuli and their properties [19].

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780124159952000027

A Dynamic Net for Robot Control

Bridget Hallam, ... Gillian Hayes, in Neural Systems for Robotics, 1997

8.6.3 Varying Neuronal Gains

The gains in the input register affect the time that the neuron goes on and off and therefore the time relationship between pre- and postsynaptic neuron. They do not affect the weight change that happens with any given time relationship.

The values of the activity register up gains, and even the correlation of these gains between the pre- and postsynaptic neurons, affect the final weight only if the burst length is short. If the burst length is sufficiently long, then the initial co-activation values are not represented in the command register value achieved.

The neuronal time constants most influential in affecting the weight change are those governing the decay of the activity registers. With command register gains set at 1, the activity register down gain had to be over 1.5 for any strengthening to occur and under 4 if there was to be weakening when R was on “too long.” The greatest “R on too long” weakening occurred with an activity register down gain of 2.5. Since strengthening was also strong with this gain, this was the value chosen for subsequent experiments. Results for some of the activity register down gains tried are given in Figure 8.11. In each case, t_exp was 2 time units, the synapse weight started at 0.5, S was on for 10 time units from t(0), and R was on for a variable time from t(0), causing the difference in firing between S and R to be as indicated.


FIGURE 8.11. Effect of varying activity register down gains at various time relationships.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780080925097500129

Neural computing

Zhongzhi Shi, in Intelligence Science, 2021

3.4.4 Competition among neurons

Sufficient biological evidence shows that there are numerous competition phenomena in the neural activities of the brain. Perry and Linden's work demonstrated that there are competitive relations among the cells in the retina. Poldrack and Packard's research proved that, for both human beings and animals, there are broad competitive phenomena in various regions of the brain cortex [30]. Accordingly, we import a competitive mechanism into our model.

Let X1 and X2 be two different neurons; let F1 and F2 be the set of their feeding presynaptic neurons, respectively. Then there exists a competitive relationship between X1 and X2 if and only if at least one of the two following conditions holds.

1.

F1 ∩ F2 ≠ Ø

2.

There exist f1 ∈ F1 and f2 ∈ F2 such that f1 and f2 are competitive.

To implement competitive relations, we normalize the firing probabilities of the neurons that are competitive with each other.

Let X1, X2, …, Xn be n neurons that are competitive with each other, and let Pbefore(Xi) be the firing probability of Xi before competition. Then the firing probability of Xi after competition is:

(3.16)  P_after(X_i) = P_before(X_i) / ∑_{j=1}^{n} P_before(X_j)
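Eq. (3.16) is a plain normalization over the competing group, as in this sketch (the probabilities are invented for illustration):

```python
def compete(p_before):
    """Eq. (3.16): divide each neuron's firing probability by the
    sum over all mutually competitive neurons."""
    total = sum(p_before)
    return [p / total for p in p_before]

# Three competing neurons; after competition the probabilities sum to 1:
print([round(p, 3) for p in compete([0.8, 0.6, 0.6])])  # [0.4, 0.3, 0.3]
```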

Based on this discussion, the complete BLFM is shown in Fig. 3.7. The model is a network composed of many neurons, each of which receives two types of input: a feeding input and a linking input, with the two coupled by multiplication. Unlike the Eckhorn model, in order to solve the problem of feature binding we also introduce the noise neuron model, the Bayesian method, and a competition mechanism.

BLFM model is a network composed of neurons, which has the following characteristics:

1.

It uses the noise neuron model; that is, the input and output of each neuron are probabilities of release, not pulse values.

2.

Each neuron can contain two parts of input: feeding input and linking input.

3.

The connection weight between neurons reflects the statistical correlation between them, which is obtained through learning.

4.

The output of neurons is not only affected by input but also restricted by competition.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780323853804000038

Implementation of biomimetic central pattern generators on field-programmable gate array

M. Ambroise, ... S. Saïghi, in Biomimetic Technologies, 2015

12.3.3.2 System blocks

Presenting the architecture in this way provides an overview of how the entire system operates, on the basis of three blocks: the computation core dedicated to neurons, the computation core dedicated to synapses, and the RAM. How these three blocks are connected is shown in Figure 12.3.

To summarize, the computation core dedicated to neurons updates the state variables for all the neurons (u and v) and applies the exponential decrease to each synaptic current. In our neural network, the role of the computation core dedicated to synapses is to update all the currents and connection weights. This computation core, therefore, behaves in two different ways, depending on whether it receives a spike or not.

Furthermore, the IZH model has a time-step resolution of 1 ms, which must be respected. Consequently, all the "u" and "v" value updates, the exponential decrease in synaptic currents, and the synaptic value updates (currents and depression factor) must be completed within the same millisecond.
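The per-millisecond update performed by the neuron computation core can be sketched with the standard Izhikevich equations; the parameters below are the published regular-spiking defaults, not necessarily those used on the FPGA, and the floating-point Euler step stands in for the 31-bit fixed-point arithmetic:

```python
def izh_step(v, u, i_syn, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """One 1-ms Euler update of the Izhikevich state variables
    (v: membrane potential in mV, u: recovery variable).
    Returns (v, u, spiked)."""
    v_new = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_syn)
    u_new = u + dt * a * (b * v - u)
    if v_new >= 30.0:                 # spike: reset v, increment u
        return c, u_new + d, True
    return v_new, u_new, False
```

Every neuron must pass through this update (plus the synaptic-current decay) once per simulated millisecond.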

Our implementation consisted of a network of Nn neurons and Ns synapses. Each synapse was described by three parameters: a weight, Wsyn (whose sign indicates whether the synapse is inhibitory or excitatory), a scaling factor, xsyn, and a percentage, P. Consequently, three twin matrices were stored in the RAM for these three parameters, in addition to a connectivity matrix (indicating the postsynaptic neurons connected to each presynaptic neuron).

To minimize the size of the RAM, the matrices were created with Nn lines. The ith line in the connectivity matrix thus corresponded to the connectivity of presynaptic neuron Ni with the other neurons; in this way, synapses were identified by the addresses of their postsynaptic neurons. So that each neuron could have a different-sized connectivity list, each line in the connectivity matrix ended with the address of a virtual "end of list" neuron, Nn + 1. This address delimited the connectivity list of each neuron and saved memory space.

This is not an optimal solution in cases where a neuron is connected to itself and all the others (the worst case) but saves memory in other cases. Furthermore, this worst case does not correspond to a biologically plausible network and this remains the best solution for a CPG implementation with few synapses. Indeed, according to Marom and Shahaf (2002) and Garofalo et al. (2009), each neuron in a network is connected to 10–30% of the other neurons in the same network.

Occurrences of the address Nn + 1 in the connectivity matrix are matched by element 0 in the three other matrices. For example, if synapse 10 connects presynaptic neuron 2 to postsynaptic neuron 15 with a synaptic weight of 5, a scaling factor of 1, and a percentage of 0.1%, then address 10:

in the connectivity matrix is address 15 (address of the postsynaptic neuron)

in the synaptic weight matrix is 5 (synaptic weight value)

in the percentage matrix is 0.1

in the scaling factor matrix is 1
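A software analogue of this storage scheme might look as follows (the network size and all values are illustrative, not the actual matrices of the implementation):

```python
NN = 4               # number of neurons in this toy network
END = NN + 1         # address of the virtual "end of list" neuron

# One line per presynaptic neuron; lines are padded to a fixed width,
# and END terminates each neuron's connectivity list.
connectivity = [
    [2, 3, END, 0],  # neuron 1 projects to neurons 2 and 3
    [3, END, 0, 0],  # neuron 2 projects to neuron 3
    [1, 2, 4, END],  # neuron 3 projects to neurons 1, 2, and 4
    [END, 0, 0, 0],  # neuron 4 has no outgoing synapses
]
# Twin matrix, aligned with `connectivity`; END slots hold element 0.
weights = [
    [5, -2, 0, 0],
    [1, 0, 0, 0],
    [3, 4, -1, 0],
    [0, 0, 0, 0],
]

def outgoing(pre):
    """(postsynaptic address, weight) pairs for presynaptic neuron
    `pre` (1-indexed), stopping at the END sentinel."""
    pairs = []
    for col, addr in enumerate(connectivity[pre - 1]):
        if addr == END:
            break
        pairs.append((addr, weights[pre - 1][col]))
    return pairs

print(outgoing(1))   # [(2, 5), (3, -2)]
print(outgoing(4))   # []
```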

As RAM is also a precious resource, it was not used to store Nn × Nn matrices. Indeed, CPGs are only small neural networks (8 neurons and 12 synapses).

In the European Brainbow project, our platform hosted 100 neurons and 1200 (external and internal) synapses, and a synaptic delay was added to the network for this project. A synaptic delay postpones the arrival of an action potential by a time ranging from 1 to 51 ms. To achieve this, each synapse had a 6-bit delay value, Tdelay, and a 50-bit vector capable of storing an action potential at the Tdelay position. Every millisecond, the 50-bit vector is shifted to the right, and an action potential due now is indicated by the least-significant bit.
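The delay mechanism can be mimicked in software by one shift register per synapse (a sketch; the class name is illustrative, and the hardware naturally implements this with registers rather than Python integers):

```python
class SynapticDelayLine:
    """Per-synapse delay: a spike is written at bit position t_delay
    and reaches the least-significant bit t_delay milliseconds later."""
    def __init__(self, t_delay):
        assert 1 <= t_delay <= 50   # stored as a 6-bit value in the FPGA
        self.t_delay = t_delay
        self.bits = 0               # the spike-storage vector

    def insert_spike(self):
        self.bits |= 1 << self.t_delay

    def tick(self):
        """One-millisecond update: shift right; the least-significant
        bit indicates a spike arriving at the postsynaptic neuron now."""
        self.bits >>= 1
        return self.bits & 1

line = SynapticDelayLine(3)
line.insert_spike()
print([line.tick() for _ in range(4)])  # [0, 0, 1, 0]
```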

Table 12.1 shows the resources required for the implementation, according to the number of options (delay and short-term plasticity) required.

Table 12.1. Implementation on a Spartan 6 LX 150 for 100 neurons and 1200 synapses (31-bit fixed-point architecture)

| Resource | Available total | Delay + short-term plasticity | Delay only | Short-term plasticity only | No options |
| --- | --- | --- | --- | --- | --- |
| Slice FFs | 184,304 | 2398 | 1981 | 2075 | 1720 |
| Slice LUTs | 92,152 | 4219 | 3558 | 4034 | 3409 |
| DSP48A1 | 180 | 36 | 28 | 36 | 28 |
| RAMB16BWER | 268 | 40 | 22 | 20 | 8 |
| RAMB8BWER | 536 | 17 | 11 | 15 | 11 |

Total RAM available: 4824 kb


URL: https://www.sciencedirect.com/science/article/pii/B9780081002490000124

Neurotransmitter Microsensors for Neuroscience

P. Salazar, ... J.L. González-Mora, in Encyclopedia of Interfacial Chemistry, 2018

Introduction

Nowadays the study of chemical communication between brain cells is a cornerstone of neuroscience.1–3 Neural communication, essential for the correct functioning of the central nervous system (CNS), occurs through the exocytotic release of neurotransmitters between presynaptic and postsynaptic neurons.2,4,5 After exocytosis, neurotransmitters diffuse across the synaptic cleft and bind to specific receptors on the postsynaptic neuron, changing its permeability and generating a postsynaptic potential.4,5 Neurotransmitters are therefore essential molecules for neural communication and are related to many physiological and behavioral processes (cognition, memory, plasticity, learning, and addiction) and to several neurological disorders (schizophrenia, Parkinson's and Huntington's diseases, epilepsy, amyotrophic lateral sclerosis, and stroke).6,7

Traditionally, microdialysis (MD) has been used for in vivo monitoring of extracellular neurotransmitters and metabolites in neuroscience.3,8,9 MD is a minimally invasive sampling technique that allows the continuous measurement of multiple analytes using a semipermeable hollow-fiber membrane. After MD probe insertion, a Ringer solution (with a composition similar to that of the brain extracellular fluid (ECF)) is perfused, and neurotransmitters diffuse across the membrane down their concentration gradients. Finally, the recovered sample is analyzed using an appropriate analytical method such as high-performance liquid chromatography or electrophoresis.8,10 However, because of the dilution effect, the MD device needs to be calibrated before any experiment, and in some cases (where neurotransmitters are at low concentrations) physiological changes may be masked by the very low analyte concentration in the recovered sample. In addition, the large dimensions of MD probes (Φ ∼ 200–500 μm, length ∼ 1–4 mm),8–10 far exceeding those of the synaptic cleft, may produce traumatic brain injury during insertion and may cause significant perturbation in the close proximity of the probe.11 More importantly, MD often displays poor temporal (10–30 min) and spatial resolution and requires expensive instrumentation to analyze the recovered samples.8,10 Because extracellular neurotransmitter concentrations can vary locally (close to the synaptic cleft) with very fast kinetics, it is therefore essential to use an analytical technique with high temporal resolution, on the order of seconds or less, and with better spatial resolution.12–15
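The calibration issue mentioned above is commonly expressed through the probe's relative recovery, i.e., the fraction of the true extracellular concentration that appears in the dialysate; correcting a reading is then a one-line calculation (an illustrative sketch with hypothetical numbers, not a protocol from this chapter):

```python
def correct_for_recovery(measured, relative_recovery):
    """Estimate the true extracellular concentration from a
    microdialysis reading, given the probe's relative recovery
    (a fraction in (0, 1] determined by in vitro calibration)."""
    if not 0 < relative_recovery <= 1:
        raise ValueError("relative recovery must lie in (0, 1]")
    return measured / relative_recovery

# A 0.05 uM dialysate reading at 20% recovery implies ~0.25 uM in the ECF.
print(correct_for_recovery(0.05, 0.20))  # 0.25
```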

More recently, implantable electrochemical microsensors (which will be discussed here) have been developed to overcome the limitations of MD.1,2,4,12–15 Such devices offer high sensitivity and selectivity, simple starting materials, low cost, small dimensions, user-friendly real-time output, and ready-to-use capability. In vivo electrochemical devices are therefore replacing MD in real-time and fast-scan applications. Moreover, the possibility of miniaturizing such sensors and integrating them with implantable wireless devices allows the study of brain neurochemistry in freely moving animals and in preclinical applications. In addition, the development of microelectrode arrays (MEAs),16,17 compatible with neurochemical applications and applicable to a variety of molecules, allows the in situ determination of multiple analytes and may be considered a fully integrated laboratory on a tip (lab-on-a-tip). Finally, with recent advances in materials science, electronics, and technology, such devices will become even more competitive with MD devices in the near future.

Since the introduction of electrochemical sensors in neuroscience, the main limitation of such devices has been their poor selectivity against the different electroactive molecules present in the ECF.1–3 The existence of a large number of electroactive species (ascorbic acid (AA), uric acid (UA), dopamine (DA), serotonin (5-HT), 3,4-dihydroxyphenylacetic acid (DOPAC), 5-hydroxyindoleacetic acid (5-HIAA), homovanillic acid (HVA), etc.) that oxidize or reduce within a similar potential window at the surface of the working electrode has led researchers to develop different strategies to solve this problem.18–25 Fortunately, this limitation has been addressed by: (1) modifying the electrochemical techniques employed during detection, (2) applying an electrochemical pretreatment to the electrode surface, (3) coating the electrode surface with perm-selective membranes that repel interferents, (4) incorporating highly selective enzymes into the electrochemical transducer to decrease the applied potential (even allowing the determination of nonelectroactive molecules such as Glu, glucose, ACh, and lactate), (5) introducing electrocatalytic materials, (6) using principal component regression methods, or (7) using self-referencing electrodes to subtract nonspecific variations in the sensor response.


URL: https://www.sciencedirect.com/science/article/pii/B9780124095472139174

What connects neurons to each other?

Neurons are connected to each other through synapses, sites where signals are transmitted in the form of chemical messengers.

What is the junction between 2 neurons called?

A synapse, also called a neuronal junction, is the site of transmission of electric nerve impulses between two nerve cells (neurons) or between a neuron and a gland or muscle cell (effector).