Current Experiments Underway (updated 2024-07-13):
  ⚪ EEG Language Extraction.
  ⚪ Network Dynamics-Integrated SNN Deployment Optimization.
  ⚪ SNN-based Real-Time Seizure Detection (Software).
Book in Progress (updated 2024-11-09):
  ⚪ Neuromorphic Computing Handbook (GitHub)

(Updated: 2025/08/27) Thanks for browsing this website!

I am currently seeking a PhD position ^_^.

My interests lie at the intersection of computer science, computational neuroscience, psychology, philosophy, physics, algebra, and linguistics, aiming to understand and analyze structures (e.g., natural or formal language structures, neuroimaging structures, brain networks, geometries in dynamical systems, relationships of individuals, etc.) in this world and their generation processes (dynamics).

The outcomes are expected to be applied in medicine, education, and welfare.

Email: huangwanhong.g.official@gmail.com

CV


Parallel MRI reconstruction relies heavily on knowing the sensitivity of each coil. ESPIRiT offers a data-driven way to learn these sensitivities directly from acquired k-space data. Let’s break down the ideas step by step.

1. The Key Idea: Coupling Between Coils

In multi-coil MRI, the signals from different coils are correlated. ESPIRiT leverages these correlations to learn coil sensitivities.

  • Each coil is a sensor providing a different view of the same underlying image.
  • Instead of assuming a coil model, ESPIRiT observes the data to learn how coils are coupled.

2. Capturing Local Correlations: k-Space Patches

To extract these couplings:

  1. Take small patches in $k$-space across all coils.
  2. Flatten and stack them into a matrix $A$, where each row represents one patch.
  3. Solve for the null-space vector $h$, which acts as a convolution kernel in k-space:

$$
A h \approx 0
$$

  • $h$ also represents a constraint that all valid k-space patches satisfy.
  • Intuitively, $h$ is the “language” of coil relationships—any valid $k$-space data should approximately satisfy it.
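
As a rough numerical illustration of this calibration step, here is a minimal Python sketch (NumPy only): it slides a small window over a fully sampled multi-coil calibration region, stacks the flattened patches into $A$, and inspects the right-singular vectors with the smallest singular values as candidate null-space kernels $h$. The kernel size, array shapes, and random toy data are illustrative assumptions, not a full ESPIRiT implementation.

```python
# Minimal calibration-matrix sketch (illustrative, not a full ESPIRiT pipeline).
import numpy as np

def calibration_matrix(calib, kernel=(6, 6)):
    """Stack flattened multi-coil k-space patches into A.

    calib: complex array of shape (n_coils, ky, kx), the fully sampled
    calibration region of k-space.
    """
    nc, ky, kx = calib.shape
    kh, kw = kernel
    rows = []
    for y in range(ky - kh + 1):
        for x in range(kx - kw + 1):
            patch = calib[:, y:y + kh, x:x + kw]   # one window across all coils
            rows.append(patch.ravel())             # flatten coils x kernel
    return np.asarray(rows)                        # A: (n_patches, nc*kh*kw)

# Toy data: random complex "calibration" samples just to exercise the shapes.
# With real coil data the trailing singular values approach zero, and the
# corresponding right-singular vectors are the null-space kernels h (A h ≈ 0).
calib = np.random.randn(4, 24, 24) + 1j * np.random.randn(4, 24, 24)
A = calibration_matrix(calib)
_, s, Vh = np.linalg.svd(A, full_matrices=False)
candidate_kernels = Vh[-8:].conj()                 # rows with the smallest singular values
print(A.shape, candidate_kernels.shape)
```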
Read more »

GFT (group field theory) addresses ontological questions and those related to generativity: specifically, what are the primitive units for generating complex structures, and how are complex structures generated?

To answer these two questions, GFT does not assume a specific structure or particle as a primitive, but rather takes an abstract algebraic group structure as the fundamental entity. Constraints are then defined on this structure to generate a field, with these constraints based on symmetries.

Through the cumulative effects of the field and a perturbative expansion (which allows symmetry breaking to be introduced), a complex, quantized, discrete structure emerges. This structure is then interpreted so as to yield our continuous spacetime. The complex structures involved are often described by spin foams, which describe the evolution of triangulated geometries.

It is important to note that the geometry derived from the expansion is complex and discrete. Only through the interpretive process do we arrive at the continuous, real spacetime.

Interestingly, symmetry breaking is one of the causes of generativity and creativity. In GFT, the generativity arising from symmetry breaking is realized through the perturbative expansion.

This kind of generativity arising from symmetry breaking is not found only in GFT; it has related manifestations in computational neuroscience, where some studies regard behavioral flows as the result of symmetry breaking in brain networks. My intuition is that there is some connection between symmetry breaking and generativity, and creativity as well.

Read more »

[Paper Note] Spiking neural P systems: main ideas and results (1)

Title: Spiking neural P systems: main ideas and results

Authors: Alberto Leporati, Giancarlo Mauri, Claudio Zandron

Citation: Leporati, A., Mauri, G., & Zandron, C. (2022). Spiking neural P systems: main ideas and results. Natural Computing: An International Journal, 21(4), 629–649.

Background

Spiking neural P systems are distributed, parallel computing systems inspired by the way neurons exchange spikes.

Membrane systems are also called P systems.

Original definition of P system:

  • A membrane structure
    • Composed of several cell membranes, hierarchically embedded in a main membrane called the skin membrane.
    • Membranes delimit regions.
    • Membranes can contain objects.
    • Objects evolve according to rules.
  • Development of P systems
    • Tissue P systems
      • Replaces the tree-like hierarchy with an undirected graph.
    • Spiking neural P systems
      • Neurons are the nodes.
      • Arcs between neurons play the role of synapses.
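
To make these components more concrete, here is a heavily simplified Python toy of an SN P system: neurons hold spike counts, a simplified firing rule consumes spikes when enough are present and sends new spikes along the synapses. Real SN P systems specify enabling conditions as regular expressions and allow delays and forgetting rules; the rule format, names, and example network below are illustrative only.

```python
# Toy spiking neural P system: spike counts, simplified firing rules, and a
# directed synapse graph. (Regular-expression guards, delays, and forgetting
# rules from the full formalism are omitted for brevity.)
from dataclasses import dataclass

@dataclass
class Rule:
    need: int      # spikes required for the rule to be enabled
    consume: int   # spikes consumed when the rule fires
    produce: int   # spikes sent to each postsynaptic neuron

@dataclass
class Neuron:
    spikes: int
    rules: list

def step(neurons, synapses):
    """Advance the system one synchronous step: every enabled neuron fires."""
    incoming = {name: 0 for name in neurons}
    for name, neuron in neurons.items():
        for rule in neuron.rules:
            if neuron.spikes >= rule.need:
                neuron.spikes -= rule.consume
                for target in synapses.get(name, []):
                    incoming[target] += rule.produce
                break                       # at most one rule fires per neuron per step
    for name, extra in incoming.items():
        neurons[name].spikes += extra

# Example: two neurons passing a spike back and forth.
neurons = {
    "n1": Neuron(spikes=2, rules=[Rule(need=2, consume=2, produce=1)]),
    "n2": Neuron(spikes=0, rules=[Rule(need=1, consume=1, produce=1)]),
}
synapses = {"n1": ["n2"], "n2": ["n1"]}
for t in range(3):
    step(neurons, synapses)
    print(t, {k: v.spikes for k, v in neurons.items()})
```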
Read more »

Thank you for visiting this site ^_^.

I am currently a relatively independent researcher, conducting research on applying natural language processing techniques to electroencephalogram (EEG) analysis.

My major interests are in the domains of linguistics (especially computational linguistics), neuroscience, and philosophy.

More specifically, my research interests involve the following keywords:

  • Machine Learning
  • Processing-in-Memory
  • Neuromorphic Computing
  • Computational Linguistics
  • Programming Language Theory
  • Programming Language Processing System
  • Optimization Algorithms
  • Bio-signal Processing

I am engaged in interdisciplinary research and study in linguistics and neuroscience to better understand human beings, and I am interested in developing applications in healthcare, communication enhancement, well-being, and related fields.

This website aims to share research findings and knowledge.


Email: huangwanhong.g.official@gmail.com

Read more »

Title: The Grammar of the Ising Model: A New Complexity Hierarchy

Authors: Tobias Reinhart, Gemma De las Cuevas

Citation: Reinhart, T., & De las Cuevas, G. (2022). The Grammar of the Ising Model: A New Complexity Hierarchy.

To explore the complexity of the Ising model, an effective approach is to analyze the complexity of its Ground State Energy (GSE) problem. Specifically, the GSE problem is defined as follows: given an interaction graph and a specific energy value, determine whether there exists an Ising spin configuration such that the system’s energy is lower than the given value.
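
As a concrete (if exponential-time) reading of this decision problem, the brute-force sketch below enumerates all spin configurations and checks whether any of them falls strictly below the given energy value. The spin convention ($\pm 1$) and the toy couplings are illustrative assumptions, not taken from the paper.

```python
# Brute-force check of the GSE decision problem: does any configuration have
# energy strictly below `threshold`? Exponential in the number of sites, so
# only usable for tiny instances.
from itertools import product

def energy(sigma, J, H):
    """Ising energy E(sigma) = -sum_ij J_ij s_i s_j - sum_i H_i s_i."""
    e = -sum(H[i] * s for i, s in enumerate(sigma))
    e -= sum(Jij * sigma[i] * sigma[j] for (i, j), Jij in J.items())
    return e

def gse_below(J, H, threshold):
    """True iff some spin configuration has energy strictly below `threshold`."""
    n = len(H)
    return any(energy(sigma, J, H) < threshold
               for sigma in product((-1, 1), repeat=n))

# Toy instance: three sites on a line with ferromagnetic couplings.
J = {(0, 1): 1.0, (1, 2): 1.0}
H = [0.0, 0.5, 0.0]
print(gse_below(J, H, threshold=-2.0))   # the all-up configuration has E = -2.5
```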

The computational hardness of the GSE problem depends fundamentally on the planarity of the interaction graph of the Ising sites, which divides Ising models into two complexity classes. However, this classification considers only the planarity of the interaction graph, which is a rather limited criterion. To address this, Tobias Reinhart and colleagues proposed an analysis of the Ising model based on formal-language modeling, drawing on formal languages, generative grammars, and computational complexity theory. By associating the language of an Ising model with its position in the Chomsky hierarchy, they classify the complexity of Ising models. The paper also provides detailed discussions and proofs of the related theorems and works through the complexity of seven Ising models as examples.

Read more »

The Ising model energy is defined as:
$$
E(\sigma) = -\sum_{ij}J_{ij}\sigma_i\sigma_j - \sum_{i} H_i\sigma_i
$$
Here $\sigma_i \in \{0, 1\}$ in this work.

Let $\mathcal S$ be the set of all possible configurations; $|\mathcal S| = 2^n$, where $n$ is the number of sites and equals the length of $\sigma$.

Under the maximum entropy principle, we maximize the entropy $S(p)=-\sum_\sigma p(\sigma)\log p(\sigma)$ subject to the constraints

$\langle \sigma_i \sigma_j\rangle^{emp} = \langle \sigma_i \sigma_j\rangle$, $\langle \sigma_i\rangle^{emp}=\langle \sigma_i\rangle$, and $\sum_\sigma p(\sigma) = 1$.

Combining these into the Lagrangian:

$\mathcal L(p;J;H)=S(p) -\lambda\Big(\sum_\sigma p(\sigma)-1\Big)-\sum_{ij}J_{ij}\big(\langle \sigma_i \sigma_j\rangle -\langle \sigma_i \sigma_j\rangle^{emp}\big)-\sum_{i}H_{i}\big(\langle \sigma_i \rangle -\langle \sigma_i\rangle^{emp}\big)$
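
Setting $\partial \mathcal L/\partial p(\sigma) = 0$ then gives (a sketch of the standard step, keeping the multipliers' signs as written above):

$$
-\log p(\sigma) - 1 - \lambda - \sum_{ij}J_{ij}\sigma_i\sigma_j - \sum_i H_i\sigma_i = 0
\quad\Longrightarrow\quad
p(\sigma) \propto \exp\Big(-\sum_{ij}J_{ij}\sigma_i\sigma_j - \sum_i H_i\sigma_i\Big).
$$

Relabeling the multipliers $J_{ij}\to -J_{ij}$, $H_i\to -H_i$ (their signs are a convention) recovers the Boltzmann form $p(\sigma)\propto e^{-E(\sigma)}$ with the energy defined above.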

Read more »

Chinese link: https://zhuanlan.zhihu.com/p/717409759

1. Temporal Event Encoding in Dynamical Systems

Time is the basis of many interesting human behaviors [1].

In connectionist models such as neural networks, explicitly encoding temporal events by their absolute positions can make it hard to recognize similar patterns. An example of this issue is pointed out in [1]: the pattern vectors $[0,0,0,1,1,1,0,0]$ and $[0,1,1,1,0,0,0,0]$ (where $1$ represents a temporal event) exhibit the same relative pattern, even though the absolute positions of the events differ. An explicit encoding of absolute positions, however, makes these vectors look very different.
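
A quick numerical check of this point (NumPy, using the two vectors from the example above): a position-wise comparison reports a large difference even though both vectors contain the same run of three events.

```python
import numpy as np

a = np.array([0, 0, 0, 1, 1, 1, 0, 0])   # events at positions 3-5
b = np.array([0, 1, 1, 1, 0, 0, 0, 0])   # same event shape, shifted earlier
print(int((a != b).sum()))               # 4 of 8 positions disagree
print(float(np.linalg.norm(a - b)))      # Euclidean distance 2.0
```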

Moreover, since it is difficult to explain how biological systems could use shift-register-like mechanisms to align patterns that differ only in absolute position, this encoding also lacks a degree of biological plausibility [1].

In Jeffrey L. Elman's 1990 work [1], a method was proposed for representing the influence of time implicitly during sequence processing: a recurrent neural network whose internal hidden state encodes the input events at each moment.
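
A minimal sketch of such a simple recurrent (Elman-style) network is given below, assuming nothing beyond NumPy; the layer sizes, weights, and toy input are illustrative and not taken from the paper. The key point is that the hidden state carries the history forward, so the timing of events is represented implicitly by the state's trajectory rather than by absolute positions.

```python
# Elman-style simple recurrent network: previous hidden state fed back as context.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 1, 16, 1

W_in = rng.normal(0, 0.5, (n_hidden, n_in))       # input -> hidden
W_rec = rng.normal(0, 0.5, (n_hidden, n_hidden))  # context (previous hidden) -> hidden
W_out = rng.normal(0, 0.5, (n_out, n_hidden))     # hidden -> output

def run(sequence):
    """Process a temporal sequence one element per step, carrying the hidden state."""
    h = np.zeros(n_hidden)
    outputs = []
    for x in sequence:
        h = np.tanh(W_in @ np.atleast_1d(x) + W_rec @ h)  # state mixes input and history
        outputs.append(W_out @ h)
    return outputs

# Feeding one of the example event patterns one element at a time: the output
# at each step depends on everything seen so far, not on the absolute index.
outs = run([0, 0, 0, 1, 1, 1, 0, 0])
print([round(float(o[0]), 3) for o in outs])
```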

Read more »

Paper Information

Title: An electronic neuromorphic system for real-time detection of high frequency oscillations (HFO) in intracranial EEG [1]

Authors: Mohammadali Sharifshazileh, Karla Burelo, Johannes Sarnthein, Giacomo Indiveri

Year: 2021

DOI: 10.1038/s41467-021-23342-2

Citation: Sharifshazileh, M., Burelo, K., Sarnthein, J., & Indiveri, G. (2021). An electronic neuromorphic system for real-time detection of high frequency oscillations (HFO) in intracranial EEG. Nature Communications, 12, 3095. https://doi.org/10.1038/s41467-021-23342-2

Brief Introduction

In the context of EEG signal processing, the High Frequency Oscillation (HFO) phenomenon refers to brain activity observed in the EEG in the $80$-$500$ Hz band [1]. HFOs have been shown to correlate with seizures in epileptic disorders and have been used as biomarkers, for example for delineating the epileptogenic zone.

In biological neural networks, the membrane potential of nerve cells changes over time and generates action potentials (spikes) under certain conditions. These spikes propagate through synapses to postsynaptic neurons. Artificial neural networks that simulate such behavior are known as Spiking Neural Networks (SNNs).
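
To make the spiking mechanism concrete, here is a minimal leaky integrate-and-fire (LIF) neuron in Python; the parameter values and discrete-time update are illustrative assumptions and do not reproduce the analog circuits used in the paper's hardware.

```python
# Leaky integrate-and-fire neuron: the membrane potential integrates input,
# leaks back toward rest, and emits a spike (then resets) on crossing threshold.
import numpy as np

def lif(input_current, dt=1e-3, tau=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v = v_rest
    spikes = []
    for i in input_current:
        v += (dt / tau) * (-(v - v_rest) + i)   # leaky integration step
        if v >= v_thresh:
            spikes.append(1)                    # action potential
            v = v_reset                         # reset after the spike
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant supra-threshold input produces a regular spike train.
spike_train = lif(np.full(200, 1.5))
print(int(spike_train.sum()), "spikes in 200 steps")
```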

Read more »