Online Pseudo-Zeroth-Order Training of Neuromorphic Spiking Neural Networks
Overview
Overall Novelty Assessment
The paper proposes OPZO, a training method for spiking neural networks that uses noise injection and direct top-down signals for spatial credit assignment, avoiding symmetric weight transport and separate forward-backward phases. It sits in the 'Zeroth-Order and Feedback-Based Approximations' leaf, which contains only three papers total, indicating a relatively sparse research direction within the broader field of biologically plausible SNN training. This leaf focuses specifically on methods that replace explicit gradient backpropagation with noise-based or feedback mechanisms, distinguishing it from the more populated gradient-adjustment approaches in the sibling leaf.
The taxonomy reveals that OPZO's parent branch, 'Gradient-Based and Backpropagation Alternatives', contains two main directions: spatiotemporal gradient adjustment methods and zeroth-order/feedback approaches. The sibling leaf on gradient adjustment includes techniques that still compute explicit gradients but modify their flow, whereas OPZO's leaf emphasizes avoiding gradient computation entirely. Neighboring branches in 'Learning Rule Design' include local Hebbian mechanisms and reinforcement learning methods, which differ fundamentally by relying on unsupervised correlation-based rules or reward signals rather than supervised error-driven updates. The taxonomy's scope notes clarify that OPZO belongs here because it uses feedback signals without explicit gradient backpropagation, not in the Hebbian category.
Across the three contributions, twenty-eight candidate papers were examined in total. The pseudo-zeroth-order formulation was checked against eight candidates and the OPZO training method with momentum feedback against ten, with none providing a clear refutation. The biologically plausible on-chip training framework was checked against ten candidates, one of which appears to provide overlapping prior work. This suggests that the core algorithmic innovations are relatively novel within the limited search scope, while the on-chip training framing may have more substantial precedent. The analysis covers only top-K semantic matches and citation expansion, not an exhaustive literature review.
Based on the limited search of twenty-eight candidates, the work appears to occupy a sparsely populated research direction with modest prior overlap. The core pseudo-zeroth-order formulation and momentum feedback mechanisms show no clear refutation among examined candidates, though the on-chip training motivation has at least one overlapping prior work. The taxonomy structure suggests this is an emerging area within biologically plausible SNN training, though definitive novelty claims would require broader literature coverage beyond the semantic search scope employed here.
Taxonomy
Research Landscape Overview
Claimed Contributions
The authors introduce a formulation that separates the model function from the loss function, maintaining a zeroth-order approach for the model while leveraging first-order gradients of the loss. This decoupling enables more informative error signals compared to standard zeroth-order methods, reducing variance while preserving the black-box property of the model.
The authors develop the online pseudo-zeroth-order (OPZO) training method that uses only one forward pass with noise injection and direct top-down feedback via momentum-based connections. These connections are updated using one-point zeroth-order estimation of the Jacobian expectation, addressing the high variance problem of traditional zeroth-order approaches while maintaining computational efficiency.
By combining the pseudo-zeroth-order approach with online training methods, OPZO achieves a form similar to three-factor Hebbian learning with direct top-down modulations. This framework avoids the biological implausibility of spatial backpropagation (symmetric weights, separate forward-backward phases) and is designed to be compatible with neuromorphic hardware for on-chip SNN training.
Core Task Comparisons
Comparisons with papers in the same taxonomy category
[12] BioGrad: Biologically Plausible Gradient-Based Learning for Spiking Neural Networks
[22] Approximating back-propagation for a biologically plausible local learning rule in spiking neural networks
Contribution Analysis
Detailed comparisons for each claimed contribution
Pseudo-zeroth-order formulation for neural network training
The authors introduce a formulation that separates the model function from the loss function, maintaining a zeroth-order approach for the model while leveraging first-order gradients of the loss. This decoupling enables more informative error signals compared to standard zeroth-order methods, reducing variance while preserving the black-box property of the model.
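The decoupling described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: it assumes a linear toy model and a squared loss, and the names (`A`, `model`, `loss_grad`, `pzo_grad`) are illustrative. The point is that only the model is queried in a zeroth-order fashion, while the loss contributes its exact first-order gradient as a vector-valued error signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper): a linear "black-box"
# model f(w) = A @ w and a differentiable squared loss.
A = rng.standard_normal((3, 5))
w = rng.standard_normal(5)
target = rng.standard_normal(3)
sigma = 1e-3
n_samples = 2000

def model(w):
    # Black box: only forward evaluations are allowed.
    return A @ w

def loss_grad(y):
    # First-order gradient of L(y) = 0.5 * ||y - target||^2.
    return y - target

def pzo_grad(w):
    # Zeroth-order only through the model; the exact loss gradient
    # supplies a vector-valued error signal instead of a scalar
    # loss difference.
    y0 = model(w)
    e = loss_grad(y0)
    g = np.zeros_like(w)
    for _ in range(n_samples):
        u = rng.standard_normal(w.shape)
        dy = (model(w + sigma * u) - y0) / sigma  # model change along u
        g += (e @ dy) * u
    return g / n_samples

true_grad = A.T @ loss_grad(model(w))
est = pzo_grad(w)
rel_err = np.linalg.norm(est - true_grad) / np.linalg.norm(true_grad)
print(rel_err)
```

Because `E[(v @ u) u] = v` for standard Gaussian `u`, the estimate converges to `A.T @ loss_grad(...)`; replacing the scalar loss difference of standard zeroth-order methods with the exact error vector is what the decoupling buys.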
[58] Relizo: Sample reusable linear interpolation-based zeroth-order optimization
[59] Gradient-free policy architecture search and adaptation
[60] Revising recurrent neural networks to eliminate numerical derivatives in forming Physics-Informed loss terms with respect to time
[61] Training radial basis neural networks with the extended Kalman filter
[62] Gradient-free learning based on the kernel and the range space
[63] Optimization Design of Adaptive Loss Function Using Evolutionary Neural Networks
[64] Homogenization of Composites using the Derivative-Free Loss Method for Neural Networks
[65] A Stochastic Vanishing Viscosity Approach for Eikonal Equations
OPZO training method with momentum feedback connections
The authors develop the online pseudo-zeroth-order (OPZO) training method that uses only one forward pass with noise injection and direct top-down feedback via momentum-based connections. These connections are updated using one-point zeroth-order estimation of the Jacobian expectation, addressing the high variance problem of traditional zeroth-order approaches while maintaining computational efficiency.
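The one-point estimation of the Jacobian expectation can be sketched as follows. This is a hedged toy illustration, not the paper's code: a fixed linear map `J` stands in for the unknown model Jacobian from a hidden layer to the output, the hidden activation is held constant, and the momentum coefficient `lam` is arbitrary. It shows why a single noisy forward pass suffices: for zero-mean injected noise, `E[y @ xi.T] / sigma` equals `J`, so an exponential moving average of one-point outer products converges to the Jacobian without any separate backward phase.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-in for the unknown model Jacobian (hidden -> output).
J = rng.standard_normal((2, 4))

def forward(h):
    # Black-box forward pass; only evaluations are available.
    return J @ h

h = rng.standard_normal(4)   # hidden activation, held fixed for the sketch
sigma = 1.0                  # noise scale
lam = 0.005                  # momentum coefficient (assumed value)
M = np.zeros_like(J)         # momentum feedback connection

for _ in range(20000):
    xi = rng.standard_normal(4)        # injected noise
    y = forward(h + sigma * xi)        # one forward pass only
    # One-point zeroth-order update: E[(J h) xi^T] = 0 for zero-mean
    # noise, so E[y xi^T] / sigma = J; the moving average damps the
    # high variance of individual one-point estimates.
    M = (1 - lam) * M + lam * np.outer(y, xi) / sigma

rel_err = np.linalg.norm(M - J) / np.linalg.norm(J)
print(rel_err)
```

The momentum average is what makes the notoriously high-variance one-point estimator usable: each sample is noisy, but the feedback connection `M` only has to track a slowly changing expectation.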
[66] Distributed gradient-free and projection-free algorithm for stochastic constrained optimization
[67] Boosting One-Point Derivative-Free Online Optimization via Residual Feedback
[68] A consensus-based global optimization method with adaptive momentum estimation
[69] Enhancing zeroth-order fine-tuning for language models with low-rank structures
[70] Convergence of first-order algorithms with momentum from the perspective of an inexact gradient descent method
[71] Zo-adamu optimizer: Adapting perturbation by the momentum and uncertainty in zeroth-order optimization
[72] Tensor-compressed back-propagation-free training for (physics-informed) neural networks
[73] Gradient-free method for heavily constrained nonconvex optimization
[74] Zo-adamm: Zeroth-order adaptive momentum method for black-box optimization
[75] Hardware Aware Robust Compression of Neural Networks
Biologically plausible on-chip training framework for SNNs
By combining the pseudo-zeroth-order approach with online training methods, OPZO achieves a form similar to three-factor Hebbian learning with direct top-down modulations. This framework avoids the biological implausibility of spatial backpropagation (symmetric weights, separate forward-backward phases) and is designed to be compatible with neuromorphic hardware for on-chip SNN training.
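The three-factor structure of the resulting update can be sketched in a few lines. This is a schematic with assumed names and shapes, not the authors' rule: `pre_trace` stands in for a presynaptic eligibility trace, `post_factor` for a local postsynaptic term (e.g. a surrogate derivative), and `M @ error` for the direct top-down modulation delivered by the feedback connection. The update is an outer product of purely local quantities gated by the modulator, which is what makes it amenable to on-chip implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical shapes for one hidden layer of an SNN-like model.
n_pre, n_post, n_out = 6, 4, 3
W = rng.standard_normal((n_post, n_pre)) * 0.1   # forward weights
M = rng.standard_normal((n_post, n_out)) * 0.1   # direct feedback connection

pre_trace = rng.random(n_pre)       # factor 1: presynaptic eligibility trace
post_factor = rng.random(n_post)    # factor 2: local postsynaptic factor
error = rng.standard_normal(n_out)  # factor 3: top-down error signal

lr = 0.01
# Three-factor-style update: the top-down modulator (M @ error) gates
# the local Hebbian product of postsynaptic factor and presynaptic trace.
modulator = M @ error
delta_W = lr * (modulator * post_factor)[:, None] * pre_trace[None, :]
W -= delta_W
print(delta_W.shape)
```

Each synapse's update uses only its own presynaptic trace, its neuron's postsynaptic factor, and a per-neuron modulatory signal, with no symmetric weight transport and no separate backward phase.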