Splat the Net: Radiance Fields with Splattable Neural Primitives

ICLR 2026 Conference Submission
Anonymous Authors
Keywords: neural rendering, radiance field representation, 3DGS, NeRF
Abstract:

Radiance fields have emerged as a predominant representation for modeling 3D scene appearance. Neural formulations such as Neural Radiance Fields provide high expressivity but require costly ray marching for rendering, whereas primitive-based methods such as 3D Gaussian Splatting offer real-time efficiency through splatting, yet at the expense of representational power. Inspired by advances in both these directions, we introduce splattable neural primitives, a new volumetric representation that reconciles the expressivity of neural models with the efficiency of primitive-based splatting. Each primitive encodes a bounded neural density field parameterized by a shallow neural network. Our formulation admits an exact analytical solution for line integrals, enabling efficient computation of perspectively accurate splatting kernels. As a result, our representation supports integration along view rays without the need for costly ray marching. The primitives flexibly adapt to scene geometry and, being larger than prior analytic primitives, reduce the number required per scene. On novel-view synthesis benchmarks, our approach matches the quality and speed of 3D Gaussian Splatting while using 10x fewer primitives and 6x fewer parameters. These advantages arise directly from the representation itself, without reliance on complex control or adaptation frameworks.
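The abstract describes converting each primitive's exact line integral of density into an opacity and compositing primitives along the view ray. The compositing step itself is standard splatting math; the sketch below shows it in isolation (not the authors' implementation, and the integral and color values are invented for illustration):

```python
import math

def composite(integrals, colors):
    """Front-to-back alpha compositing over splatted primitives.

    integrals: per-primitive density line integrals along the ray,
               sorted front to back (here assumed precomputed exactly,
               as the paper's closed form would provide).
    colors:    per-primitive color values (scalars for simplicity).
    """
    transmittance = 1.0
    out = 0.0
    for I_k, c_k in zip(integrals, colors):
        alpha = 1.0 - math.exp(-I_k)   # opacity from the line integral
        out += transmittance * alpha * c_k
        transmittance *= 1.0 - alpha   # light surviving past this primitive
    return out, transmittance

# Two primitives along the ray: a dense near one and a faint far one.
color, T = composite([1.0, 0.2], [0.8, 0.3])
```

Note that the residual transmittance equals `exp(-(I_1 + I_2))`, matching what continuous volume rendering of the summed densities would give for non-overlapping primitives.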

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholarly search engine). It analyzes an academic paper's tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND ITS JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper introduces splattable neural primitives, a hybrid representation combining neural density fields with primitive-based splatting for efficient novel view synthesis. It resides in the Explicit Grid and Primitive-Based Representations leaf under Efficient Representation and Real-Time Rendering, alongside four sibling papers including Plenoxels, PlenOctrees, and point-based methods. This leaf contains five papers total within a taxonomy of fifty works, indicating a moderately populated research direction focused on balancing rendering speed with representational quality through explicit geometric structures rather than purely implicit neural networks.

The taxonomy reveals neighboring branches addressing complementary efficiency challenges: Compact Encodings and Compression focuses on model size reduction through hash encodings and pruning, while Distillation and Hybrid Representations bridges neural and light field models. The parent category Efficient Representation and Real-Time Rendering sits alongside branches handling sparse-view generalization, dynamic scenes, and specialized sensors, reflecting the field's diversification beyond foundational NeRF formulations. The scope note for this leaf explicitly excludes purely implicit MLP methods and dynamic representations, positioning the work within a cluster prioritizing fast rendering through explicit primitives like voxels, octrees, or point clouds.

Among the thirty candidates examined, the analytical line integral solution shows the most substantial prior overlap, with four refutable candidates identified among the ten examined for this contribution. The splattable neural primitives representation itself appears more distinctive, with zero refutable candidates among its ten examined, suggesting limited direct precedent for bounded neural density fields as splatting primitives. For the taxonomy-framing contribution, no refutable work was found among its ten candidates. These statistics reflect a focused semantic search rather than exhaustive coverage: while the core primitive design may be novel within this search scope, the analytical rendering technique has more established foundations in the examined literature.

Based on the limited search of thirty semantically related papers, the work appears to occupy a recognizable position within primitive-based efficiency research, with its main novelty likely concentrated in the specific formulation of neural primitives rather than the broader splatting paradigm. The analysis does not cover potential overlaps outside top-ranked semantic matches or recent concurrent work, and the refutability counts reflect only the examined candidate set, not the entire field.

Taxonomy

Core-task Taxonomy Papers: 50
Claimed Contributions: 3
Contribution Candidate Papers Compared: 30
Refutable Papers: 4

Research Landscape Overview

Core task: novel view synthesis with radiance field representations. The field has evolved into a rich ecosystem of branches addressing complementary challenges. Foundational Neural Radiance Field Methods established the core volumetric rendering paradigm exemplified by NeRF[25], while Efficient Representation and Real-Time Rendering explores explicit grids and primitive-based structures like Plenoxels[29] and PlenOctrees[46] to accelerate inference. Sparse-View and Few-Shot Generalization tackles data-limited scenarios through methods such as RegNeRF[2] and pixelNeRF[42], whereas Dynamic and Temporal Radiance Fields extends representations to moving scenes with works like Nerfies[28]. Specialized branches handle diverse sensor inputs (Specialized Sensor Modalities), semantic understanding (Scene Understanding and Semantic Integration), appearance decomposition (Relighting and Appearance Modeling), and camera calibration (Camera Pose Estimation and Calibration). Domain-Specific Applications, Editing and Stylization, and Generative and 3D-Aware Synthesis address targeted use cases, while Hybrid and Image-Based Rendering bridges classical and neural techniques.

Within Efficient Representation and Real-Time Rendering, a central tension exists between rendering speed and reconstruction quality. Explicit grid-based methods like Plenoxels[29] achieve interactive frame rates by replacing implicit MLPs with voxel grids, while point-based approaches such as Differentiable Point-Based[32] offer flexible geometric primitives. Splat the Net[0] situates itself in this explicit primitive cluster, closely aligned with Baking Neural Radiance[9] and PlenOctrees[46], which similarly convert volumetric fields into efficient discrete structures. Compared to Plenoxels[29], which uses dense voxel grids, Splat the Net[0] likely emphasizes splatting-based primitives for more compact scene representation. This contrasts with purely implicit methods that prioritize quality over speed, highlighting ongoing efforts to balance real-time performance with photorealistic fidelity across diverse scene complexities.

Claimed Contributions

Splattable neural primitives representation

The authors propose a novel volumetric representation where each primitive encodes a bounded neural density field parameterized by a shallow neural network. This design combines the expressivity of neural radiance fields with the rendering efficiency of primitive-based splatting methods.

10 retrieved papers
Exact analytical solution for line integrals enabling efficient splatting

The method derives a closed-form antiderivative for the neural density field that allows exact integration along view rays without ray marching. This enables the computation of perspectively accurate 2D splatting kernels for efficient rendering.

10 retrieved papers
Can Refute
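One way such a closed-form antiderivative can arise (an illustrative assumption, not necessarily the paper's exact formulation): if the density is a one-hidden-layer network with sigmoid activations, each unit's pre-activation is affine in the ray parameter t, and since the antiderivative of the sigmoid is the softplus, the ray integral reduces to a finite sum of softplus evaluations at the segment endpoints. The network weights `W`, `B`, `A` and the ray below are made-up toy values:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def softplus(z):
    # Antiderivative of sigmoid; guard against overflow for large z.
    return math.log1p(math.exp(z)) if z < 30 else z

# Toy 2D density: sigma(x) = sum_i A[i] * sigmoid(W[i] . x + B[i])
W = [(1.0, 0.5), (-0.7, 1.2)]
B = [0.1, -0.3]
A = [0.8, 0.6]

def density(x):
    return sum(a * sigmoid(w[0]*x[0] + w[1]*x[1] + b)
               for w, b, a in zip(W, B, A))

def ray_integral(o, d, t0, t1):
    """Exact integral of density along x(t) = o + t*d for t in [t0, t1]."""
    total = 0.0
    for w, b, a in zip(W, B, A):
        slope = w[0]*d[0] + w[1]*d[1]       # pre-activation slope in t
        offset = w[0]*o[0] + w[1]*o[1] + b  # pre-activation at t = 0
        if abs(slope) < 1e-12:              # unit is constant along the ray
            total += a * sigmoid(offset) * (t1 - t0)
        else:
            total += (a / slope) * (softplus(slope*t1 + offset)
                                    - softplus(slope*t0 + offset))
    return total

# Compare against midpoint-rule quadrature -- the ray marching the
# closed form makes unnecessary.
o, d = (0.2, -0.1), (0.6, 0.8)
exact = ray_integral(o, d, 0.0, 2.0)
n = 20000
h = 2.0 / n
approx = h * sum(density((o[0] + (i + 0.5)*h*d[0],
                          o[1] + (i + 0.5)*h*d[1])) for i in range(n))
```

The quadrature loop stands in for the per-ray sampling that ray marching would require; the closed form recovers the same value with two antiderivative evaluations per hidden unit.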
Taxonomy highlighting dichotomy in radiance field representations

The authors present a systematic organization of radiance field representations along two dimensions: atomicity (monolithic to distributed) and neurality (non-neural to neural), revealing a gap that their method addresses.

10 retrieved papers

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution 1: Splattable neural primitives representation
Contribution 2: Exact analytical solution for line integrals enabling efficient splatting
Contribution 3: Taxonomy highlighting dichotomy in radiance field representations