【Paper Note】 Spiking neural P systems: main ideas and results (1)


Title: Spiking neural P systems: main ideas and results

Authors: Alberto Leporati, Giancarlo Mauri, Claudio Zandron

Citation: Leporati, A., Mauri, G., & Zandron, C. (2022). Spiking neural P systems: main ideas and results. Natural Computing: An International Journal, 21(4), 629–649.

Background

Spiking neural P systems are distributed, parallel computing models inspired by the way neurons exchange spikes.

Membrane systems are also called P systems.

Original definition of P system:

  • A membrane structure
    • Composed of several cell-like membranes, hierarchically embedded in a main membrane called the skin membrane.
    • Membranes delimit regions.
    • Membranes can contain objects.
    • Objects evolve according to rules.
  • Development of P systems
    • Tissue P systems
      • The tree-like membrane hierarchy is replaced by an undirected graph.
    • Spiking neural P systems
      • Neurons are the nodes of the graph.
      • Arcs play the role of synapses.

Notation

  • $V^*$: the free monoid generated by $V$, i.e., all finite sequences of symbols from $V$.
  • $V^+ = V^* \setminus \{\lambda\}$: all nonempty finite sequences of symbols from $V$.
  • $vw$: concatenation of $v$ and $w$.
  • $\lambda$: the empty string.
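
For instance, over the singleton alphabet used below, the two sets differ only by the empty string:
$$
V=\{a\}:\quad V^* = \{\lambda, a, aa, aaa, \ldots\},\qquad V^+ = V^* \setminus \{\lambda\} = \{a, aa, aaa, \ldots\}.
$$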

Computing with standard SN P systems

Definition 1. A spiking neural P system of degree $m \ge 1$ is a construct of the form
$$
\Pi =(O,\sigma_1, …,\sigma_m, syn, i_0)
$$

  1. $O=\{a\}$ is the singleton alphabet ($a$ is called a spike)

  2. $\sigma_1, \ldots, \sigma_m$ are neurons, of the form $\sigma_i = (n_i, R_i)$, $1\le i\le m$, where

    • $n_i \ge 0$ is the initial number of spikes contained in $\sigma_i$.

    • $R_i$ is a finite set of rules of one of the following two forms:

      • $E/a^c\rightarrow a;d$, where $E$ is a regular expression over the alphabet $\{a\}$ and $c\ge 1$, $d\ge 0$ are natural numbers.
      • $a^s \rightarrow \lambda$, where $s\ge 1$ is a natural number, with the restriction that for each rule $E/a^c\rightarrow a;d$ of type (1) from $R_i$, we have $a^s\notin L(E)$, where $L(E)$ is the regular language defined by $E$.
  3. $syn\subseteq \{1,2,\ldots,m\}\times \{1,2,\ldots,m\}$, with $i\neq j$ for all $(i,j)\in syn$, $1\le i,j\le m$, is the set of synapses between neurons; $i_0\in \{1,2,\ldots,m\}$ indicates the output neuron.

If for every rule $E/a^c\rightarrow a;d$ the regular language $L(E)$ is finite, then the SN P system is called finite.

  • Firing rule: type (1)
  • Forgetting rule: type (2)
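
To make Definition 1 concrete, here is a minimal sketch of the structure in Python. The names (`Rule`, `Neuron`, `SNPSystem`) are illustrative, not from the paper, and regular expressions over $\{a\}$ are represented as patterns for Python's `re` module applied to strings of the form `"a"*n`.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    # Firing rule E/a^c -> a;d: 'regex' encodes E over strings of the form "a"*n.
    # A forgetting rule a^s -> lambda is modeled with regex "a"*s, c = s, fires = False.
    regex: str          # the regular expression E (or a^s)
    c: int              # number of spikes consumed
    d: int = 0          # delay (0 = spike emitted immediately)
    fires: bool = True  # True for firing rules, False for forgetting rules

@dataclass
class Neuron:
    spikes: int         # n_i: initial number of spikes
    rules: list         # R_i: finite set of rules

@dataclass
class SNPSystem:
    neurons: list       # sigma_1, ..., sigma_m
    syn: set            # synapses (i, j) with i != j, 0-based indices here
    out: int            # index of the output neuron i_0
```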

System behaviors

  • synchronous
  • driven by a global discrete clock
  • at every clock tick, each neuron with at least one enabled rule must apply exactly one of them.

Applicability of rules $E/a^c \rightarrow a;d$ in neuron $\sigma_i$

  • Enabled rule:
    • At time $t$ the neuron contains a number $n$ of spikes such that $a^n \in L(E)$ and $n \ge c$ (see the sketch after this list).
  • When a firing rule is applied:
    • $c$ spikes are removed from $\sigma_i$,
    • one spike is produced,
    • and the spike is delivered to all neurons $\sigma_j$ such that $(i,j) \in syn$.
  • $d$ is the delay associated with the rule. If $d = 0$ the spike is emitted immediately; otherwise it is emitted after $d$ time steps.
  • During the delay (refractory period) the neuron is idle (closed):
    • it is not able to receive new spikes (spikes sent to it are lost);
    • it cannot apply other rules.
  • Forgetting rules:
    • $a^s \rightarrow \lambda$ can be applied only if the neuron contains exactly $s$ spikes,
    • and only when no firing rule is enabled.
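
A hedged sketch of the applicability test, using Python's `re` module and the `Rule` class sketched above; the rule $a(aa)^*/a \rightarrow a;0$ is used purely as an illustration.

```python
import re

def enabled(rule, spikes):
    # Firing rule E/a^c -> a;d: the whole content a^n must match E and n >= c.
    # Forgetting rule a^s -> lambda (regex "a"*s, c = s): exactly s spikes are required.
    content = "a" * spikes
    return re.fullmatch(rule.regex, content) is not None and spikes >= rule.c

# Illustration: a(aa)*/a -> a;0 is enabled for any odd number of spikes.
odd_rule = Rule(regex=r"a(aa)*", c=1, d=0)
print(enabled(odd_rule, 3), enabled(odd_rule, 2))   # True False
```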

Computing Process

configuration of the system at time $t$:
$$
C_t = \langle k_1/t_1, \ldots, k_m/t_m \rangle
$$

  • $k_i$: number of spikes contained in the neuron $\sigma_i$.
  • $t_i \ge 0$: number of time steps remaining until the end of the refractory period.
    • The neuron is open if $t_i = 0$.

computation:
$$
C_0 \Rightarrow C_1\Rightarrow …\Rightarrow C_t \Rightarrow C_{t+1}
$$
initial configuration $C_0 = \langle n_1/0, \ldots, n_m/0 \rangle$

The rules to be executed at time t are chosen following two criteria:

  1. Sequentiality at the single-neuron level: if more than one rule can be applied, one of them is chosen non-deterministically. A forgetting rule can be used only when no firing rule can be applied, and at most one forgetting rule can be enabled at any given time step $t$, since each requires an exact number of spikes.
  2. Maximal parallelism at the system level: all neurons evolve in parallel; if an open neuron can apply a rule, it must do so (see the sketch below).
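
The two criteria, together with delays and closed neurons, can be combined into a single synchronous step. The sketch below builds on the `SNPSystem` and `enabled` sketches above; the concrete handling of delays (close for $d$ steps, emit when the neuron reopens, incoming spikes lost while closed) follows my reading of the semantics and should be taken as an assumption, not the paper's code.

```python
import random

def step(system, spikes, timer, pending):
    """One synchronous step. spikes[i] = spikes in neuron i, timer[i] = steps the neuron
    stays closed, pending[i] = True if neuron i must emit a spike when it reopens.
    Returns True if the output neuron sent a spike to the environment at this step."""
    emitted = [False] * len(system.neurons)
    for i, neuron in enumerate(system.neurons):
        if timer[i] > 0:
            continue                      # closed neuron: no rule can be applied
        if pending[i]:
            emitted[i], pending[i] = True, False   # delay expired: emit the stored spike
            continue
        firing = [r for r in neuron.rules if r.fires and enabled(r, spikes[i])]
        forgetting = [r for r in neuron.rules if not r.fires and enabled(r, spikes[i])]
        choices = firing or forgetting    # forgetting rules only if no firing rule is enabled
        if not choices:
            continue
        rule = random.choice(choices)     # sequentiality: one non-deterministic choice per neuron
        spikes[i] -= rule.c               # maximal parallelism: every open neuron that can, does
        if rule.fires:
            if rule.d == 0:
                emitted[i] = True
            else:
                timer[i], pending[i] = rule.d, True   # close the neuron for d steps
    out_spike = False
    for i, e in enumerate(emitted):       # deliver spikes; spikes sent to closed neurons are lost
        if not e:
            continue
        if i == system.out:
            out_spike = True              # spike sent to the environment
        for (src, dst) in system.syn:
            if src == i and timer[dst] == 0:
                spikes[dst] += 1
    for i in range(len(timer)):           # count down the refractory periods
        if timer[i] > 0:
            timer[i] -= 1
    return out_spike
```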

deterministic system

If, for any possible contents of each neuron, at most one firing rule may be enabled, we say that the system is deterministic (a bounded check is sketched below).
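
Since "any possible contents" ranges over infinitely many values, a simulator can only test this property up to a bound; a small sketch, reusing `enabled` from above:

```python
def is_deterministic(system, max_spikes=100):
    # Bounded check: for every neuron and every content 0..max_spikes,
    # at most one firing rule may be enabled. Not a proof, just a sanity test.
    for neuron in system.neurons:
        for n in range(max_spikes + 1):
            if sum(1 for r in neuron.rules if r.fires and enabled(r, n)) > 1:
                return False
    return True
```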

Information Encoding

  • In SN P systems, halting is not necessarily the relevant event; time (the distance between spikes) is used as a way to encode information.

Mode 1: generating

  • The result of a computation is the number of time steps elapsed between the first two spikes emitted by the output neuron (often the output neuron is required to emit exactly two spikes).

Mode 2: accepting

  • $i_0$ is used as the input neuron.
  • Two spikes are introduced into the input neuron; the input is the number $n$ of time steps separating them.
  • The number $n$ is accepted if, after receiving the two spikes, the system performs a halting computation.

Mode 3: computing mode for $f: \mathbb N\rightarrow \mathbb N$

  • The input neuron receives two spikes from the environment, at time steps $t$ and $t+n$.

  • The output neuron communicates the result $f(n)$ to the environment in the same form:

    • as the duration of the interval between two spikes, emitted at times $u$ and $u+f(n)$.
  • Alternatively, feed $n$ spikes into the input neuron at time $t_0$, and read the result as the number of spikes contained in the output neuron when the computation halts.

  • Extension

    • Extend to functions $f: \mathbb N^k \rightarrow \mathbb N$
      • Method 1
        • $k$ natural numbers: $n_i$, $1 \le i \le k$
        • The input neuron reads the spike train $z=0^b 1 0^{n_1} 1 0^{n_2} 1 \ldots 1 0^{n_k} 1 0^g$, where each 1 marks a time step at which a spike enters the input neuron (see the sketch after this list).
      • Method 2
        • $n_1,…,n_k$ arrive simultaneously.
    • Sometimes the input numbers are encoded in binary form.
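
A small helper showing how such a spike train can be built. The interpretation of each 1 as "one spike enters the input neuron at this time step", and the padding lengths `b` and `g`, are assumptions made for illustration.

```python
def input_spike_train(ns, b=2, g=2):
    # Builds z = 0^b 1 0^{n_1} 1 0^{n_2} 1 ... 1 0^{n_k} 1 0^g as a 0/1 string;
    # a '1' marks a time step at which one spike is fed into the input neuron.
    z = "0" * b + "1"
    for n in ns:
        z += "0" * n + "1"
    return z + "0" * g

print(input_spike_train([3, 1]))   # "00100010100"
```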

Example

Original paper’s Figure 1. An example of a Spiking Neural P system generating the set of all natural numbers greater than zero.

  1. Neuron 1 applies its 1st rule, consuming one spike and sending a spike to neurons 2 and 3. Neuron 2 non-deterministically applies one of its rules.
  2. If neuron 2 applies its 1st rule at step 1, a spike is sent to neurons 1 and 3. Neuron 3 consumes its 3 spikes with its 1st rule and sends one spike to the environment.
  3. Now neuron 1 contains 2 spikes, and neuron 2 a single spike.
  4. Neuron 3 now applies its 3rd rule, forgetting the spikes received from neurons 1 and 2.
  5. The computation proceeds in the same way until neuron 2 non-deterministically applies its 2nd rule $a\rightarrow a;1$.
  6. The spike from neuron 1 enters neuron 3, but cannot enter neuron 2 (which is now closed).
  7. Neuron 1 forgets its single spike, neuron 2 fires, and neuron 3 applies its 2nd rule $a\rightarrow a;1$, which consumes the spike; neuron 3 will fire at the next time step.
  8. Neuron 2 is now empty, neuron 1 contains the single spike emitted by neuron 2, and neuron 3 is empty and closed for one time step.
  9. Neuron 1 again forgets the single spike it contains, neuron 2 does nothing, and neuron 3 emits its spike to the environment. The computation halts (see the simulation sketch below).
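
To tie the pieces together, here is a hedged simulation of this example using the `Rule`/`Neuron`/`SNPSystem`, `enabled`, and `step` sketches from earlier in this note. The figure itself is not reproduced here, so the rule set below is reconstructed from the walkthrough (neuron 1: $a^2/a \rightarrow a;0$, $a \rightarrow \lambda$; neuron 2: $a \rightarrow a;0$, $a \rightarrow a;1$; neuron 3: $a^3 \rightarrow a;0$, $a \rightarrow a;1$, $a^2 \rightarrow \lambda$) and should be treated as an assumption.

```python
# Assumed reconstruction of Figure 1 (0-based neuron indices, output neuron = index 2).
fig1 = SNPSystem(
    neurons=[
        Neuron(spikes=2, rules=[Rule("aa", c=1, d=0), Rule("a", c=1, fires=False)]),
        Neuron(spikes=1, rules=[Rule("a", c=1, d=0), Rule("a", c=1, d=1)]),
        Neuron(spikes=3, rules=[Rule("aaa", c=3, d=0), Rule("a", c=1, d=1),
                                Rule("aa", c=2, fires=False)]),
    ],
    syn={(0, 1), (1, 0), (0, 2), (1, 2)},
    out=2,
)

spikes = [n.spikes for n in fig1.neurons]
timer, pending = [0, 0, 0], [False, False, False]
out_times = []                      # time steps at which the output neuron spikes
for t in range(1, 200):
    if step(fig1, spikes, timer, pending):
        out_times.append(t)
    if len(out_times) == 2:
        break

# The generated number is the interval between the first two spikes sent to the environment.
if len(out_times) == 2:
    print("generated number:", out_times[1] - out_times[0])
```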