Expressive yet Efficient Feature Expansion with Adaptive Cross-Hadamard Products

ICLR 2026 Conference Submission, Anonymous Authors
Keywords: Efficient Vision Models, Hadamard Product, Neural Architecture Search, Differentiable Sampling
Abstract:

Recent theoretical advances reveal that the Hadamard product induces nonlinear representations and implicit high-dimensional mappings in deep learning, yet its practical deployment in efficient vision models remains underdeveloped. To address this gap, we introduce the Adaptive Cross-Hadamard (ACH) module, a novel operator that embeds learnability through differentiable discrete sampling and dynamic softsign normalization, enabling parameter-free feature reuse while stabilizing gradient propagation. Integrated into Hadaptive-Net (Hadamard Adaptive Network) via neural architecture search, our approach achieves unprecedented efficiency. Comprehensive experiments demonstrate state-of-the-art accuracy/speed trade-offs on image classification tasks, establishing Hadamard operations as fundamental building blocks for efficient vision models.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (A scholar search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper introduces the Adaptive Cross-Hadamard (ACH) module and Hadaptive-Net, leveraging Hadamard products for efficient feature expansion in vision models. According to the taxonomy, this work resides in the 'Hadamard Product-Based Feature Expansion' leaf under 'Efficient Architecture Design and Feature Modulation'. Notably, this leaf contains only the original paper itself—no sibling papers are present—indicating a sparse and relatively unexplored research direction within the broader field of efficient feature expansion.

The taxonomy reveals that neighboring leaves focus on 'Efficient Modulation and Attention Mechanisms' (four papers) and 'Channel Dimension and Scaling Strategies' (two papers), both emphasizing parameter efficiency through different mechanisms. While modulation-based methods use learned gating or attention, and scaling strategies optimize channel configurations, the Hadamard product approach offers a distinct algebraic pathway for nonlinear feature interactions. The broader parent branch encompasses diverse architectural innovations, yet the Hadamard-specific direction remains underpopulated, suggesting the paper explores a niche with limited prior exploration.

Across the three contributions analyzed, the literature search examined 22 candidates in total: two for the ACH module, and ten each for Hadaptive-Net and the GPU acceleration strategies, with zero refutable overlaps in every case. This suggests that, within the limited scope of top-K semantic search, no prior work directly anticipates these specific contributions. However, the small candidate pool (22 papers) and the absence of sibling papers in the taxonomy leaf indicate that the search may not have captured all relevant architectural innovations in efficient vision models.

Given the limited search scope and the isolated taxonomy position, the work appears to occupy a relatively novel niche within efficient architecture design. The absence of refutable prior work among 22 candidates and the lack of sibling papers suggest that Hadamard-based feature expansion is underexplored. However, the analysis does not cover exhaustive architectural literature, and broader surveys or domain-specific venues may reveal additional context not captured here.

Taxonomy

Core-task Taxonomy Papers: 50
Claimed Contributions: 3
Contribution Candidate Papers Compared: 22
Refutable Papers: 0

Research Landscape Overview

Core task: efficient feature expansion for vision models. The field encompasses a broad spectrum of techniques aimed at enriching learned representations while maintaining computational efficiency. At the highest level, the taxonomy divides into branches such as self-supervised contrastive learning with augmentation (e.g., SimCLR[3], Spatiotemporal Contrastive Video[2]), masked autoencoding and reconstruction-based learning (e.g., Masked Autoencoders[47], Mixed Autoencoder[17]), augmentation strategies for domain generalization and adaptation (e.g., COSDA[6]), methods tailored to long-tailed and few-shot scenarios (e.g., Long-Tailed Feature Space[24], FeatMatch[28]), continual and incremental learning with feature expansion (e.g., Multi-layer Rehearsal[8]), efficient architecture design and feature modulation (e.g., EfficientViT[14], Efficient Modulation[5]), task-specific feature enhancement architectures, cross-modal and multi-modal augmentation (e.g., Object-aware Audio-Visual[15]), domain-specific augmentation and applications, general augmentation theory and surveys (e.g., Image Augmentation Survey[23]), and concept expansion and open-vocabulary learning (e.g., Webly Concept Expansion[37], CLIP-Adapter[43]). These branches reflect diverse problem settings, from unsupervised pretraining and domain shift to resource-constrained deployment, and highlight the interplay between data augmentation, architectural innovation, and learning paradigms.

Within the efficient architecture design and feature modulation branch, a small handful of works explore Hadamard product-based feature expansion, emphasizing lightweight yet expressive transformations. Adaptive Cross-Hadamard[0] sits squarely in this cluster, proposing adaptive mechanisms that modulate features via element-wise products to achieve efficient expansion without heavy computational overhead.
This contrasts with broader augmentation strategies like Simple Feature Augmentation[4] or Efficient Feature Transformations[39], which may rely on different algebraic operations or learned mappings. Compared to Efficient Modulation[5], which also targets parameter-efficient feature manipulation, Adaptive Cross-Hadamard[0] focuses specifically on cross-layer Hadamard interactions to capture richer feature dependencies. The central trade-off in this line of work is balancing expressiveness—how much additional representational capacity is gained—against the simplicity and speed of the expansion operation, a question that remains active as practitioners seek scalable solutions for diverse vision tasks.
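The expressiveness gain attributed to Hadamard products can be illustrated with a minimal sketch (not the paper's implementation): the element-wise product of two linear projections of the same input is quadratic in that input, injecting second-order feature interactions without any explicit activation function.

```python
import numpy as np

# Minimal sketch, assuming two generic linear projections W1 and W2
# (hypothetical; the paper's actual operator is not shown in this report).
rng = np.random.default_rng(0)
d = 4
W1 = rng.standard_normal((d, d))
W2 = rng.standard_normal((d, d))

def hadamard_expand(x):
    # Element-wise (Hadamard) product of two linear views of x.
    return (W1 @ x) * (W2 @ x)

x = rng.standard_normal(d)
# Doubling the input quadruples the output: the map is homogeneous of
# degree 2, i.e. genuinely nonlinear, unlike a single linear layer.
assert np.allclose(hadamard_expand(2 * x), 4 * hadamard_expand(x))
```

This degree-2 homogeneity is the sense in which Hadamard expansion adds representational capacity at negligible parameter cost, the trade-off the surrounding paragraph describes.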

Claimed Contributions

Adaptive Cross-Hadamard (ACH) module

The authors propose a novel operator that makes Hadamard products learnable via two mechanisms: channel attention-guided feature gating with differentiable discrete sampling (Gumbel-TopK) and dynamic softsign normalization (DySoft). This enables parameter-free feature reuse while stabilizing gradient propagation.
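The two named mechanisms can be sketched from their general definitions; the function names, the `tau` and `alpha` parameters, and the forward-only treatment below are assumptions, since the report does not reproduce the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_topk_mask(logits, k, tau=1.0):
    """Sample a k-hot channel mask via Gumbel perturbation. Sketch of the
    generic Gumbel-TopK idea: in training, a straight-through estimator
    would make this selection differentiable; here we show the forward pass.
    """
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0,1) noise
    scores = (logits + g) / tau
    mask = np.zeros_like(logits)
    mask[np.argsort(scores)[-k:]] = 1.0  # keep the k highest-scoring channels
    return mask

def dysoft(x, alpha=1.0):
    """Softsign-style normalization x / (alpha + |x|), bounding outputs to
    (-1, 1). 'alpha' stands in for the dynamic/learnable scale the DySoft
    name suggests (assumption)."""
    return x / (alpha + np.abs(x))

logits = rng.standard_normal(8)       # hypothetical channel-attention scores
mask = gumbel_topk_mask(logits, k=3)  # discrete gate over channels
feat = rng.standard_normal(8)
out = dysoft(mask * feat)             # gated, then bounded normalization
```

Bounding the product's output this way is one plausible reading of how the module "stabilizes gradient propagation": Hadamard products can grow quadratically, and a softsign-style squashing keeps activations in a fixed range.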

2 retrieved papers
Hadaptive-Net architecture via neural architecture search

The authors construct Hadaptive-Net through gradient-based neural architecture search to jointly optimize model topology and ACH integration points, demonstrating how to systematically deploy the ACH module in efficient vision models.
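Gradient-based NAS of this kind is commonly realized as a DARTS-style continuous relaxation; the sketch below assumes that formulation (the report does not specify the authors' search space or operators, so the candidate ops and weights here are illustrative).

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# DARTS-style relaxation (an assumption about "gradient-based NAS"):
# each edge computes a softmax-weighted mixture of candidate ops, the
# architecture weights alpha are trained by gradient descent, and the
# argmax op is kept at discretization time.
candidate_ops = {
    "identity": lambda x: x,
    "hadamard": lambda x: x * x,           # stand-in for an ACH-like op
    "relu":     lambda x: np.maximum(x, 0.0),
}
alpha = np.array([0.1, 1.5, -0.3])         # learnable architecture weights

def mixed_op(x):
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, candidate_ops.values()))

x = np.array([-1.0, 2.0])
y = mixed_op(x)                            # soft mixture during search
chosen = list(candidate_ops)[int(np.argmax(alpha))]  # discretized choice
```

Placing an ACH-like op among the candidates is how a search of this shape would "jointly optimize model topology and ACH integration points": wherever the learned weight for that op dominates, the module is kept.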

10 retrieved papers
GPU acceleration strategies for cross-Hadamard products

The authors develop specialized GPU optimization approaches (Direct-Indexing and Parity-Balanced algorithms) to handle the triangular computation pattern of cross-Hadamard products, ensuring efficient on-device execution.
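The "triangular computation pattern" refers to forming all unordered channel pairs: C input channels expand to C*(C-1)/2 products. A numpy sketch of that index structure follows; the paper's Direct-Indexing and Parity-Balanced kernels concern how these pairs are mapped onto GPU threads, which numpy does not expose, so this shows only the computation being accelerated.

```python
import numpy as np

def cross_hadamard(feats):
    """All pairwise (i < j) channel products: C channels expand to
    C*(C-1)//2 outputs, the triangular workload the GPU kernels must tile.
    Sketch only, not the paper's kernels."""
    C = feats.shape[0]
    i, j = np.triu_indices(C, k=1)   # direct index table over the triangle
    return feats[i] * feats[j]       # shape: (C*(C-1)//2, ...)

x = np.arange(1.0, 5.0).reshape(4, 1)   # 4 channels -> 6 pair products
pairs = cross_hadamard(x)
```

The irregularity motivating specialized kernels is visible here: row i of the triangle contributes C-1-i pairs, so a naive one-thread-per-row mapping is load-imbalanced, which is plausibly what a "Parity-Balanced" scheme addresses.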

10 retrieved papers

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Within the taxonomy built over the current TopK core-task papers, the original paper is assigned to a leaf with no direct siblings and no cousin branches under the same grandparent topic. In this retrieved landscape, it appears structurally isolated, which is one partial signal of novelty, but still constrained by search coverage and taxonomy granularity.

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution

Adaptive Cross-Hadamard (ACH) module

The authors propose a novel operator that makes Hadamard products learnable via two mechanisms: channel attention-guided feature gating with differentiable discrete sampling (Gumbel-TopK) and dynamic softsign normalization (DySoft). This enables parameter-free feature reuse while stabilizing gradient propagation.

Contribution

Hadaptive-Net architecture via neural architecture search

The authors construct Hadaptive-Net through gradient-based neural architecture search to jointly optimize model topology and ACH integration points, demonstrating how to systematically deploy the ACH module in efficient vision models.

Contribution

GPU acceleration strategies for cross-Hadamard products

The authors develop specialized GPU optimization approaches (Direct-Indexing and Parity-Balanced algorithms) to handle the triangular computation pattern of cross-Hadamard products, ensuring efficient on-device execution.