Efficient Spatially-Variant Convolution via Differentiable Sparse Kernel Complex

ICLR 2026 Conference Submission
Anonymous Authors
Keywords: Kernel Approximation, Differentiable Filtering, Spatially-Varying Convolution, Efficient Image Processing
Abstract:

Image convolution with complex kernels is a fundamental operation in photography, scientific imaging, and animation effects, yet direct dense convolution is computationally prohibitive on resource-limited devices. Existing approximations, such as simulated annealing or low-rank decompositions, either lack efficiency or fail to capture non-convex kernels. We introduce a differentiable kernel decomposition framework that represents a target spatially-variant, dense, complex kernel using a set of sparse kernel samples. Our approach features (i) a decomposition that enables differentiable optimization of sparse kernels, (ii) a dedicated initialization strategy for non-convex shapes to avoid poor local minima, and (iii) a kernel-space interpolation scheme that extends single-kernel filtering to spatially varying filtering without retraining or additional runtime overhead. Experiments on Gaussian and non-convex kernels show that our method achieves higher fidelity than simulated annealing at significantly lower cost than low-rank decompositions. Our approach provides a practical solution for mobile imaging and real-time rendering, while remaining fully differentiable for integration into broader learning pipelines.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholarly search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper introduces a differentiable framework for decomposing complex convolution kernels into sparse samples, targeting efficient filtering on resource-limited devices. It resides in the 'Differentiable and Adaptive Kernel Decomposition' leaf, which contains only three papers total. This leaf sits within the broader 'Kernel Decomposition and Approximation Methods' branch, indicating a relatively sparse research direction compared to more crowded areas like network-level pruning or convolutional sparse coding. The small sibling set suggests this specific combination of differentiability, sparsity, and kernel-space optimization is less explored than adjacent decomposition strategies.

The taxonomy reveals neighboring leaves focused on low-rank factorizations, hybrid low-rank-sparse methods, and sparse kernel learning without differentiable optimization. The paper's leaf explicitly excludes fixed decomposition schemes, positioning it among methods that optimize kernel structure end-to-end. Nearby branches address network compression via filter pruning and convolutional sparse coding for signal reconstruction, but these operate at different abstraction levels—network architecture versus kernel representation. The scope notes clarify that this work targets kernel-level decomposition rather than network-level sparsity, distinguishing it from the larger body of pruning literature.

Among 26 candidates examined across three contributions, none were flagged as clearly refuting the proposed methods. The differentiable decomposition framework examined 10 candidates with no refutations; the initialization strategy for non-convex kernels examined 10 with none refuting; the filter-space interpolation scheme examined 6 with none refuting. This suggests that within the limited search scope—primarily top-K semantic matches and citation expansion—no prior work directly overlaps with the specific combination of differentiable sparse decomposition, non-convex initialization, and spatially varying interpolation. However, the modest candidate pool means the analysis does not cover the full breadth of related optimization or approximation literature.

Based on the limited search of 26 candidates, the work appears to occupy a relatively underexplored niche at the intersection of differentiable optimization and sparse kernel decomposition. The absence of refutations across all contributions, combined with the sparse taxonomy leaf, suggests novelty within the examined scope. Nonetheless, the analysis does not exhaustively cover adjacent areas such as learned filter reparameterization or hardware-aware kernel design, leaving open the possibility of relevant prior work outside the top-K semantic neighborhood.

Taxonomy

Core-task Taxonomy Papers: 50
Claimed Contributions: 3
Contribution Candidate Papers Compared: 26
Refutable Papers: 0

Research Landscape Overview

Core task: approximating complex convolution kernels with sparse decomposition. The field encompasses a diverse set of strategies for making convolutional operations more efficient or expressive by exploiting sparsity and decomposition. At the highest level, the taxonomy organizes work into kernel decomposition and approximation methods (which focus on factorizing or restructuring filters), network compression via sparsity regularization (which prunes or constrains weights to reduce model size), sparse convolution for irregular and point cloud data (which handles non-grid structures like Submanifold Sparse Convolutional[2]), convolutional sparse coding for signal processing (which applies dictionary-based representations as in Convolutional Sparse Superresolution[14]), hardware acceleration and efficient inference (which targets deployment on specialized architectures like Sparse CNN Accelerator[3]), theoretical foundations and algorithmic frameworks (which provide convergence guarantees and optimization insights), and domain-specific applications (which tailor sparse convolution to tasks such as seismic imaging or rain removal). These branches reflect complementary emphases: some prioritize computational savings through structured pruning or low-rank factorizations, while others seek to preserve or enhance representational power by learning adaptive decompositions or exploiting problem-specific structure.

Within the kernel decomposition and approximation methods branch, a particularly active line of work explores differentiable and adaptive kernel decomposition, where filters are dynamically factorized or reparameterized during training. Sparse Kernel Complex[0] sits squarely in this cluster, emphasizing learnable sparse decompositions that can capture intricate filter patterns without manual design.
Nearby, Compact Cross-Reparam[12] also pursues adaptive reparameterization to balance expressiveness and efficiency, while Constant Velocity Convolution[49] introduces a geometric perspective on kernel evolution. Compared to these neighbors, Sparse Kernel Complex[0] appears to place stronger emphasis on sparsity as a first-class constraint rather than a byproduct of reparameterization, potentially offering more direct control over the trade-off between approximation fidelity and computational cost. Across the broader landscape, open questions remain about how to best integrate sparsity with other decomposition strategies (e.g., low-rank or separable filters) and how to ensure that learned sparse structures generalize across diverse architectures and tasks.

Claimed Contributions

Differentiable kernel decomposition framework for sparse kernel optimization

The authors introduce a framework that formulates kernel approximation as an end-to-end differentiable optimization problem. This enables gradient-based learning of sparse kernel parameters across multiple layers, replacing heuristic search methods like simulated annealing with a more efficient and robust optimization approach.

10 retrieved papers
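To make the framing concrete, the following is a minimal, hypothetical sketch of this style of differentiable kernel approximation: a dense target kernel is represented by K weighted point samples, and the sample weights are fit by gradient descent on an L2 reconstruction loss. The paper's method also optimizes sample positions and operates across multiple layers; here positions stay fixed and names are illustrative, not the authors' implementation.

```python
import numpy as np

# Build a dense target kernel (a 15x15 Gaussian) to approximate.
rng = np.random.default_rng(0)
S = 15
ys, xs = np.mgrid[0:S, 0:S]
target = np.exp(-((ys - 7) ** 2 + (xs - 7) ** 2) / (2 * 3.0 ** 2))
target /= target.sum()

K = 24
pos = rng.integers(0, S, size=(K, 2))   # fixed random sample positions
w = np.full(K, 1.0 / K)                 # learnable sample weights

def render(w):
    """Scatter the K weighted samples back onto a dense S x S grid."""
    approx = np.zeros((S, S))
    np.add.at(approx, (pos[:, 0], pos[:, 1]), w)
    return approx

init_err = np.sum((render(w) - target) ** 2)
lr = 0.2
for _ in range(1000):
    resid = render(w) - target
    # dL/dw_i = 2 * residual at sample i's position (L = sum of squared residuals)
    w -= lr * 2.0 * resid[pos[:, 0], pos[:, 1]]

final_err = np.sum((render(w) - target) ** 2)
```

Because only 24 of the 225 pixels carry a sample, the loss cannot reach zero; the point is that the whole pipeline is differentiable in the sample parameters, which is what lets gradient-based optimization replace heuristic search such as simulated annealing.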
Initialization strategy combining radial and sparse sampling for non-convex kernels

The authors propose a two-part initialization scheme: a general radial strategy that distributes samples uniformly on expanding circles for stable convergence, and a rejection-based sparse sampling method that directly samples from the kernel's non-zero support to capture complex non-convex shapes and avoid vanishing gradients.

10 retrieved papers
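The two initialization strategies described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: function names and parameters are made up, and the "kernel" is any non-negative array whose positive entries define the support.

```python
import numpy as np

def radial_init(n_rings, per_ring, max_radius):
    """Spread samples uniformly over expanding concentric circles."""
    pts = [(0.0, 0.0)]                          # one sample at the center
    for r in range(1, n_rings + 1):
        radius = max_radius * r / n_rings
        for k in range(per_ring):
            # stagger alternate rings by half a step so samples interleave
            theta = 2.0 * np.pi * (k + 0.5 * (r % 2)) / per_ring
            pts.append((radius * np.cos(theta), radius * np.sin(theta)))
    return np.array(pts)

def support_init(kernel, n_samples, rng):
    """Rejection sampling: keep only positions inside the non-zero support."""
    h, w = kernel.shape
    pts = []
    while len(pts) < n_samples:
        y, x = rng.integers(0, h), rng.integers(0, w)
        if kernel[y, x] > 0:                    # reject samples off the support
            pts.append((y, x))
    return np.array(pts)
```

The rejection-based variant matters for non-convex shapes (e.g., a hollow square): a radial layout would place samples in the zero-valued interior, where the reconstruction loss provides no gradient, whereas support sampling starts every sample where the target kernel is non-zero.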
Filter-space interpolation scheme for spatially varying filtering

The authors develop a method that pre-computes an optimized basis of sparse filters and synthesizes unique per-pixel filters at runtime through direct interpolation of basis filter parameters. This decouples kernel synthesis cost from image resolution, enabling complex spatially-varying effects with minimal computational overhead.

6 retrieved papers
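A toy sketch of the kernel-space interpolation idea, under assumptions of our own: two basis filters sharing the same five sparse taps (an identity filter and a small box-like blur), a per-pixel blend map `alpha`, and wrap-around border handling for brevity. The tap layout and weights are invented for illustration; the point is that the basis *parameters* are interpolated per pixel, so a single sparse filtering pass yields a spatially varying result.

```python
import numpy as np

offsets = [(0, 0), (0, 1), (1, 0), (0, -1), (-1, 0)]   # shared sparse taps
w_a = np.array([1.0, 0.0, 0.0, 0.0, 0.0])              # basis A: identity
w_b = np.full(5, 0.2)                                  # basis B: 5-tap blur

def spatially_varying_filter(img, alpha):
    """alpha: per-pixel blend map in [0, 1] between the two basis filters."""
    out = np.zeros_like(img)
    for i, (dy, dx) in enumerate(offsets):
        # Interpolate the tap weight per pixel (kernel-space interpolation).
        w = (1.0 - alpha) * w_a[i] + alpha * w_b[i]
        tap = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
        out += w * tap
    return out
```

Because only the scalar tap weights vary with `alpha`, the per-pixel kernel synthesis costs one multiply-add per tap, independent of how large the dense kernel being approximated is, which is the decoupling from image resolution that the contribution describes.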

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution

Differentiable kernel decomposition framework for sparse kernel optimization

The authors introduce a framework that formulates kernel approximation as an end-to-end differentiable optimization problem. This enables gradient-based learning of sparse kernel parameters across multiple layers, replacing heuristic search methods like simulated annealing with a more efficient and robust optimization approach.

Contribution

Initialization strategy combining radial and sparse sampling for non-convex kernels

The authors propose a two-part initialization scheme: a general radial strategy that distributes samples uniformly on expanding circles for stable convergence, and a rejection-based sparse sampling method that directly samples from the kernel's non-zero support to capture complex non-convex shapes and avoid vanishing gradients.

Contribution

Filter-space interpolation scheme for spatially varying filtering

The authors develop a method that pre-computes an optimized basis of sparse filters and synthesizes unique per-pixel filters at runtime through direct interpolation of basis filter parameters. This decouples kernel synthesis cost from image resolution, enabling complex spatially-varying effects with minimal computational overhead.