Overview of NAC

Neurons generate action potentials (spikes) lasting 1–2 milliseconds. These spikes propagate through the axons at finite speed, so it takes time for a spike to travel from one neuron to another. When a spike reaches a receiver neuron, the synaptic strength between the two may amplify or attenuate its impact. These are the two main elements in NAC: the synaptic weight connecting two neurons and the spike propagation delays among all neurons.

These two components (delays and synaptic weights) naturally cause some neural groups to act together. As proposed by Hebb in 1949, these two elements are responsible for the appearance of ‘cell assemblies’. Popularly known as Hebb’s law, this idea is summarized in the phrase: “neurons that fire together, wire together”.

Therefore, the notion that neural assemblies might represent, memorize, and compute is not new (see Buzsáki-10). The question is: how do cell assemblies effectively represent, memorize, and process information?

From spikes to intelligent/cognitive agents based on adaptive automata:

This is a link to a presentation at the CS-DC’15 World e-Conference.


This video shows our view on how spiking neural networks (SNNs) with propagation delays process information. We call this approach Neural Assembly Computing (NAC). The video shows how NAC performs several logical functions, finite-state automata, and algorithms. In our view, interacting automata in nervous systems generate intelligent behavior. Due to synaptic and neural plasticity, the neural automata undergo constant changes as the SNN passes through experiences. Such changes are responsible for agents learning new skills and acquiring new responses to their ‘repertoire’ during their lifetimes, which may be considered ‘cognitive processes’.

The NAC explanation:

Fig. 1 shows a raster plot illustrating all the elements necessary for neural assembly computing. The figure comes from a simulation in Matlab. You can download the code and a tutorial at:



In this simulation we used 20 neurons per assembly. Each neural coalition (or assembly) is an ephemeral event in the spiking neural network, denoted in the figure by K0 to K6.

Figure 1: A raster plot showing the fundamentals of NAC: logical functions, branching, dismantling and bistable memory.

In this raster plot, assemblies fire as polychronous groups, an idea proposed by Izhikevich. Assembly zero (K0) is fired by the program; it represents the input stimulus. Neurons in K0 are connected to K1, and K0 causes K1 to fire after ~40 ms (the variable tBD in the Matlab program stores the mean propagation delay among assemblies).

K1 triggers K2, which triggers K3; then K3 triggers K2 again. Thus, K2 and K3 reverberate and form a bistable neural assembly (BNA). If no inhibition occurs, K2 and K3 keep firing each other. This means they memorize one bit of information, and they also represent the event (K1) that triggered them. Note that no plasticity mechanism is involved in this memory process.
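The reverberation described above can be sketched as a toy event loop (this is not the authors’ Matlab code; the delay tBD = 40 ms and the starting time are illustrative values taken from the raster plot):

```python
# Toy sketch of the K2<->K3 bistable neural assembly (BNA):
# each assembly re-triggers the other after the propagation delay tBD.
tBD = 40                  # assumed mean inter-assembly delay, in ms
t, current = 80, "K2"     # K1 triggers K2 at roughly 2*tBD in the raster plot
trace = []
for _ in range(6):        # six firing events are enough to see the alternation
    trace.append((t, current))
    current = "K3" if current == "K2" else "K2"
    t += tBD
print(trace)
# -> [(80, 'K2'), (120, 'K3'), (160, 'K2'), (200, 'K3'), (240, 'K2'), (280, 'K3')]
```

As long as the loop runs, the network "remembers" that K1 occurred: the alternating K2/K3 firings are the stored bit.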

Now, let us analyze the stochastic logical function executed by the other kernels in the network. After K2 fires, it takes some time (0.5*tBD ms) for K2 to trigger K4, and it also takes 0.5*tBD ms for K3 to trigger K4. All synaptic weights from K2 to K4 (K2=>K4) and from K3 to K4 (K3=>K4) are equal to a default value (sw0), and these weights are large enough for either assembly to trigger K4. Therefore, K2 OR K3 can singly trigger K4.

What does this mean? It means that K4 fires tBD/2 ms after K2 and also tBD/2 ms after K3, doubling its firing frequency.

If users change the synaptic weight sw0 from K2=>K4 or from K3=>K4 (e.g., to 0.1*sw0), the synaptic strength is no longer enough to trigger K4 singly, and K4 fails for that connection.
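This disjunction can be condensed into a minimal sketch, assuming an illustrative firing threshold (sw0 and the threshold below are not the simulation’s actual parameters):

```python
# Hedged sketch of the OR (disjunction): K4 fires if any single
# incoming connection alone is strong enough to reach its threshold.
sw0 = 1.0
threshold = 0.9   # assumed: one full-strength connection suffices

def k4_fires(weights):
    """weights = [K2=>K4, K3=>K4]; any single strong input triggers K4."""
    return any(w >= threshold for w in weights)

print(k4_fires([sw0, sw0]))              # both connections intact -> True
print(k4_fires([0.1 * sw0, sw0]))        # K2=>K4 weakened, K3 still fires K4 -> True
print(k4_fires([0.1 * sw0, 0.1 * sw0]))  # both weakened -> False
```

Weakening one connection (as suggested above) removes one input of the OR; weakening both silences K4 entirely.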

On the other hand, the delay from K2=>K5 is ~1.25*tBD ms, and the delay from K3=>K5 is ~0.25*tBD ms. Note that spikes from K2 reach K3 tBD ms later, and then K3 fires. Therefore, the spikes from K3 and those from K2 coincide at K5 (~1.25*tBD ms after K2). Note also that neither K2 nor K3 can trigger K5 singly because the synaptic weights K2=>K5 and K3=>K5 are all sw0/2. Hence, the coincidence of K2 AND K3 is necessary to trigger K5. Users are encouraged to change the synaptic weight (e.g., to 0.1*sw0) in either connection (K2=>K5 or K3=>K5) in order to inhibit the event K5.
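The coincidence mechanism can be sketched as follows. The delays (1.25*tBD and 0.25*tBD) come from the text; the coincidence window and threshold are illustrative assumptions:

```python
# Hedged sketch of the AND (conjunction): each connection to K5 carries
# only sw0/2, so K5 fires only when spikes from K2 and K3 arrive at K5
# within a short coincidence window.
sw0, threshold, window, tBD = 1.0, 0.9, 2.0, 40   # window in ms; assumed values

def k5_fires(t_k2, t_k3):
    """t_k2, t_k3: firing times of K2 and K3 (ms)."""
    arr_k2 = t_k2 + 1.25 * tBD    # K2=>K5 delay
    arr_k3 = t_k3 + 0.25 * tBD    # K3=>K5 delay
    coincide = abs(arr_k2 - arr_k3) <= window
    # only the summed coincident inputs (sw0/2 + sw0/2) reach threshold:
    return coincide and (0.5 * sw0 + 0.5 * sw0) >= threshold

# K3 fires tBD ms after K2, so both volleys arrive at K5 together:
print(k5_fires(100, 140))   # -> True
print(k5_fires(100, 160))   # K3 too late, no coincidence -> False
```

Because each input alone carries only half the required weight, the delay arrangement is what implements the AND: the network computes with time as much as with weights.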

Assemblies derived from predecessor events are examples of branching: for instance, when K1 triggers K2, when the BNA K2–K3 forms, when K4 and K5 are triggered, and so on. Nevertheless, at ~350 ms a burst of events inhibits K2; hence, neither the BNA K2–K3 nor the logical functions derived from that bistable memory remain active. These are called dismantling events, and they are equivalent to a logical NOT function.

Any digital computer can be constructed from a few logic gates: AND, NOT, and OR. By arranging these gates we obtain the NAND, NOR, and XOR gates. In fact, a computer and any ‘algorithm’ may be constructed from NAND or NOR gates alone. Memories (flip-flop circuits) are built from NAND or NOR gates.
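The universality of NAND is easy to verify: NOT, AND, and OR can all be rewritten in terms of it, as this short sketch shows.

```python
# NAND is functionally complete: NOT, AND, and OR built from NAND alone.
def nand(a, b):
    return not (a and b)

def NOT(a):
    return nand(a, a)

def AND(a, b):
    return NOT(nand(a, b))

def OR(a, b):
    return nand(NOT(a), NOT(b))   # De Morgan: a OR b = NOT(NOT a AND NOT b)

print(OR(False, True), AND(True, True), NOT(True))  # -> True True False
```

Since NAC provides OR (disjunction), AND (conjunction), and NOT (dismantling), the same completeness argument applies to assemblies.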

Now, let us turn back to the NAC approach. Whenever assemblies trigger other coalitions they branch parallel processes, which may be caused by a single coalition or by an association of assemblies. When assemblies interact with one another, two relations may occur: a disjunction or a conjunction.

In disjunctions, any coalition may singly trigger another assembly, which means that, from the point of view of the resulting coalition (K4), either of the events K2 OR K3 can cause it. The OR logical function is executed in this case, and we denote it as: K4 = K2 + K3.

In conjunctions, spikes from two or more assemblies must coincide in order to generate the resulting coalition (e.g., K5). It means that K5 can be triggered only when K2 AND K3 coincide at a certain time.

When two or more assemblies feed back and trigger one another they form a loop. This is how assemblies can memorize events. Such memories do not require any plasticity mechanism, and they may be short-term or long-term memories. In order to ‘dismantle’ a bistable memory loop it is necessary to inhibit any assembly involved in the loop.
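Dismantling can be illustrated with a toy loop simulation (again a sketch, not the Matlab code; tBD = 40 ms and the inhibition time of ~350 ms follow the raster-plot description):

```python
# Toy sketch: inhibiting either assembly of a reverberating loop
# dismantles the memory (a NOT-like operation).
tBD = 40   # assumed mean inter-assembly delay, in ms

def run_loop(inhibit, t_stop=600):
    """Simulate the K2<->K3 loop; `inhibit` maps assembly -> inhibition time (ms)."""
    t, current, events = 0, "K2", []
    while t < t_stop:
        if current in inhibit and t >= inhibit[current]:
            break                      # the inhibited assembly cannot fire: loop dies
        events.append((t, current))
        current = "K3" if current == "K2" else "K2"
        t += tBD
    return events

free = run_loop(inhibit={})            # no inhibition: the loop persists
stopped = run_loop(inhibit={"K2": 350})  # K2 inhibited at ~350 ms, as in Fig. 1
print(len(free), len(stopped))         # -> 15 10
```

Note that inhibiting just one of the two assemblies is enough: once K2 is silenced, K3 receives no further input and the whole memory collapses.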

Therefore, we have all the elements needed to form parallel ‘algorithms’: logical functions, branching, assembly inhibition, and memories. The interaction among these elements can transform ephemeral phenomena into behavior.

A strong point of our approach is that cell assemblies are, at the same time, the representation (the data connected to an external or internal phenomenon) and the control of the continuous flux of information (equivalent to the instructions in a digital computer).

Further issues:

Above we have described the ‘static’ operation of ‘digital’ assemblies. For further information see Ranhel-12.

We consider a digital assembly one that operates in two well-defined states: ON, when all or almost all of its neurons fire together, and OFF, when almost none of its neurons fire. The ‘static’ operation refers to the fact that we are not describing the dynamical effects in the neural network, mainly those due to plasticity mechanisms. These dynamics are certainly responsible for learning and cognitive processes, but we are only starting our investigations in these fields.