Training Deep Normalization-Free Spiking Neural Networks with Lateral Inhibition

ICLR 2026 Conference Submission, Anonymous Authors
Keywords: Spiking Neural Networks, Normalization, Excitation-Inhibition Balance, Lateral Inhibition
Abstract:

Spiking neural networks (SNNs) have garnered significant attention as a central paradigm in neuromorphic computing, owing to their energy efficiency and biological plausibility. However, training deep SNNs has critically depended on explicit normalization schemes, leading to a trade-off between performance and biological realism. To resolve this conflict, we propose a normalization-free learning framework that incorporates lateral inhibition inspired by cortical circuits. Our framework replaces the traditional feedforward SNN layer with a circuit of distinct excitatory (E) and inhibitory (I) neurons that captures the features of the canonical architecture of cortical E-I circuits. The circuit dynamically regulates neuronal activity through subtractive and divisive inhibition, which respectively control the activity and the gain of excitatory neurons. To enable and stabilize end-to-end training of the biologically constrained SNN, we propose two key techniques: E-I Init and E-I Prop. E-I Init is a dynamic parameter initialization scheme that balances excitatory and inhibitory inputs while performing gain control. E-I Prop decouples the backpropagation of the E-I circuits from the forward pass and regulates gradient flow. Experiments across multiple datasets and network architectures demonstrate that our framework enables stable training of deep normalization-free SNNs with biological realism and achieves competitive performance without resorting to explicit normalization schemes. Therefore, our work not only provides a solution to training deep SNNs but also serves as a computational platform for further exploring the functions of E-I interactions in large-scale cortical computation.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (A scholar search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper proposes a normalization-free learning framework for deep spiking neural networks that replaces conventional normalization layers with cortical excitatory-inhibitory (E-I) circuits implementing lateral inhibition. According to the taxonomy, this work occupies the 'Normalization-Free Training with Cortical E-I Circuits' leaf under the broader 'Biologically-Inspired Learning Mechanisms in SNNs' branch. Notably, this leaf contains only the original paper itself, with no sibling papers identified, suggesting this represents a relatively sparse and emerging research direction within the SNN training landscape.

The taxonomy reveals that the broader field divides into biologically-inspired learning mechanisms and hardware implementations. The original paper's leaf sits adjacent to 'Synaptic Plasticity and Competitive Learning,' which explores related concepts like intrinsic plasticity and lateral inhibition but differs in scope by focusing on rate-coding or spiking perceptrons rather than deep normalization-free architectures. The taxonomy's scope notes explicitly distinguish these approaches: competitive learning methods may use lateral inhibition without addressing normalization replacement in deep networks, whereas this work specifically targets the normalization-performance trade-off through cortical E-I circuit design.

Across the three identified contributions, five candidate papers were examined and no clearly refuting prior work was found: three candidates for the core normalization-free E-I circuit framework (zero refutations), two for the E-I Prop stabilization technique (zero refutations), and none for the E-I Init initialization scheme in this limited search. This suggests that, within the top-five semantically similar papers retrieved, none provide substantially overlapping prior work on combining normalization-free training with cortical E-I circuits for deep SNNs, though the small search scope limits definitive conclusions about field-wide novelty.

Based on the limited literature search covering five candidates, the work appears to occupy a relatively unexplored niche at the intersection of deep SNN training and biologically plausible normalization alternatives. The absence of sibling papers in the taxonomy leaf and zero refutations across examined contributions suggest novelty within the retrieved sample, though a more exhaustive search across broader SNN training literature would be needed to assess whether similar E-I circuit approaches exist outside the top-five semantic matches.

Taxonomy

Core-task Taxonomy Papers: 2
Claimed Contributions: 3
Contribution Candidate Papers Compared: 5
Refutable Papers: 0

Research Landscape Overview

Core task: training deep normalization-free spiking neural networks with lateral inhibition.

The field of spiking neural networks (SNNs) has evolved along two main branches. The first branch, Biologically-Inspired Learning Mechanisms in SNNs, explores training strategies that draw on cortical principles such as excitatory-inhibitory balance, competitive dynamics, and normalization-free architectures. The second branch, Hardware and Optical SNN Implementations, focuses on physical substrates, ranging from neuromorphic chips to optical devices, that can efficiently realize spiking computations. While the former emphasizes algorithmic innovations inspired by neuroscience, the latter addresses the engineering challenges of deploying SNNs at scale, often leveraging novel materials or photonic components to achieve low-latency, energy-efficient inference.

Within the biologically-inspired branch, a small handful of works have begun to investigate how lateral inhibition and competitive learning can replace conventional normalization layers, thereby simplifying deep SNN training while preserving biological plausibility. Lateral Inhibition SNNs[0] sits squarely in this emerging cluster, proposing cortical excitatory-inhibitory circuits as a normalization-free alternative. This contrasts with earlier competitive learning schemes such as Competitive Perceptrons[2], which also exploit winner-take-all dynamics but may differ in the specific inhibitory mechanisms or depth of the networks considered. Meanwhile, the hardware branch includes efforts like VCSEL Optical SNN[1], which demonstrates optical implementations of spiking dynamics but does not directly address normalization-free training. The central open question remains how to scale these biologically grounded, normalization-free methods to very deep architectures while maintaining both training stability and competitive performance on standard benchmarks.

Claimed Contributions

Normalization-free learning framework with E-I circuits for deep SNNs

The authors introduce a learning framework for deep spiking neural networks that replaces explicit normalization schemes with biologically inspired excitatory-inhibitory circuits. This framework uses distinct excitatory and inhibitory neuron populations with lateral inhibition to dynamically regulate neuronal activity through subtractive and divisive inhibition.

3 retrieved papers
E-I Init: dynamic parameter initialization scheme

The authors develop a dynamic initialization method that establishes initial excitation-inhibition balance and sets appropriate initial activity for gain control. This scheme ensures neurons operate in a responsive state from the start of training, preventing pathological network activity in deep architectures with E-I segregation constraints.

0 retrieved papers
E-I Prop: stabilization techniques for end-to-end training

The authors propose stabilization techniques that decouple forward and backward passes in E-I circuits. This includes adaptive stabilization of divisive inhibition to handle numerical instability and a straight-through estimator combined with gradient scaling to ensure stable gradient flow during backpropagation.

2 retrieved papers

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Within the taxonomy built over the current top-K core-task papers, the original paper is assigned to a leaf with no direct siblings and no cousin branches under the same grandparent topic. In this retrieved landscape it appears structurally isolated, which is a partial signal of novelty, though one constrained by search coverage and taxonomy granularity.

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution

Normalization-free learning framework with E-I circuits for deep SNNs

The authors introduce a learning framework for deep spiking neural networks that replaces explicit normalization schemes with biologically inspired excitatory-inhibitory circuits. This framework uses distinct excitatory and inhibitory neuron populations with lateral inhibition to dynamically regulate neuronal activity through subtractive and divisive inhibition.
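The subtractive and divisive inhibition described above can be sketched as a single forward step. The report does not give the paper's exact circuit equations, so the functional forms below, the Dale's-law non-negativity constraint on inhibitory weights, and the parameter names `alpha`, `beta`, and `theta` are all assumptions made for illustration.

```python
import numpy as np

def ei_circuit_step(x, W_E, W_I, alpha=1.0, beta=1.0, theta=1.0):
    """One forward step of a hypothetical E-I circuit layer (sketch only).

    Subtractive inhibition shifts the excitatory drive down; divisive
    inhibition rescales its gain. Exact forms are assumptions, not the
    paper's equations.
    """
    # Inhibitory population driven by the same feedforward input.
    # Dale's law: inhibitory weights are kept non-negative, so this
    # population can only inhibit.
    i_act = np.maximum(np.abs(W_I) @ x, 0.0)

    excitatory_drive = W_E @ x
    # Subtractive inhibition controls the level of activity...
    shifted = excitatory_drive - alpha * i_act
    # ...while divisive inhibition controls the gain.
    membrane = shifted / (1.0 + beta * i_act)

    # Spikes via a hard threshold (training would use surrogate gradients).
    spikes = (membrane >= theta).astype(float)
    return spikes, membrane
```

Note how the same inhibitory activity `i_act` enters twice, once additively and once in the denominator, mirroring the report's claim that the circuit regulates both the activity and the gain of excitatory neurons.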

Contribution

E-I Init: dynamic parameter initialization scheme

The authors develop a dynamic initialization method that establishes initial excitation-inhibition balance and sets appropriate initial activity for gain control. This scheme ensures neurons operate in a responsive state from the start of training, preventing pathological network activity in deep architectures with E-I segregation constraints.
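The balancing idea behind E-I Init can be illustrated with a data-dependent rescaling pass. The report only states that the scheme balances excitatory and inhibitory inputs at initialization; the LSUV-style rule below, the `target_balance` parameter, and the use of per-neuron mean currents are assumed details, not the paper's actual procedure.

```python
import numpy as np

def ei_init(W_E, W_I, x_batch, target_balance=1.0):
    """Rescale inhibitory weights toward E-I balance (illustrative sketch).

    Runs a calibration batch through both pathways and scales W_I so the
    average inhibitory drive per neuron matches target_balance times the
    average excitatory drive.
    """
    e_current = W_E @ x_batch.T                # shape: (n_out, batch)
    i_current = np.abs(W_I) @ x_batch.T
    e_mean = np.abs(e_current).mean(axis=1, keepdims=True)
    i_mean = i_current.mean(axis=1, keepdims=True) + 1e-8  # avoid div by 0
    scale = target_balance * e_mean / i_mean
    return W_I * scale
```

A data-dependent pass like this is one plausible way to guarantee that neurons start training in a responsive regime rather than saturated or silent, which is the failure mode the report attributes to naive initialization under E-I segregation constraints.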

Contribution

E-I Prop: stabilization techniques for end-to-end training

The authors propose stabilization techniques that decouple forward and backward passes in E-I circuits. This includes adaptive stabilization of divisive inhibition to handle numerical instability and a straight-through estimator combined with gradient scaling to ensure stable gradient flow during backpropagation.