Dynamic Novel View Synthesis in High Dynamic Range

ICLR 2026 Conference Submission
Anonymous Authors
Keywords: High Dynamic Range, 4D Gaussian Splatting, Dynamic Scene Modeling, Deep Learning
Abstract:

High Dynamic Range Novel View Synthesis (HDR NVS) seeks to learn an HDR 3D model from Low Dynamic Range (LDR) training images captured under conventional imaging conditions. Current methods focus primarily on static scenes, implicitly assuming that all scene elements remain stationary. However, real-world scenarios frequently feature dynamic elements, such as moving objects, varying lighting conditions, and other temporal events, presenting a significantly more challenging setting. To address this gap, we propose a more realistic problem named HDR Dynamic Novel View Synthesis (HDR DNVS), where the additional dimension "Dynamic" emphasizes the necessity of jointly modeling temporal radiance variations alongside the already intricate 3D translation between LDR and HDR. To tackle this intertwined challenge, we introduce HDR-4DGS, a Gaussian Splatting-based architecture featuring an innovative dynamic tone-mapping module that explicitly connects the HDR and LDR domains, maintaining temporal radiance coherence by dynamically adapting tone-mapping functions to the evolving radiance distribution over time. As a result, HDR-4DGS achieves both temporal radiance consistency and spatially accurate color translation, enabling photorealistic HDR renderings from arbitrary viewpoints and time instances. Extensive experiments demonstrate that HDR-4DGS surpasses existing state-of-the-art methods in both quantitative performance and visual fidelity. Source code will be released.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholarly search engine). It analyzes an academic paper's tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper introduces HDR Dynamic Novel View Synthesis (HDR DNVS), combining high dynamic range reconstruction with temporal scene modeling through HDR-4DGS, a Gaussian Splatting architecture featuring dynamic tone-mapping. It resides in the Neural HDR Radiance Field Methods leaf, which contains six papers including the original work. This leaf sits within the HDR Reconstruction and Tone Mapping for Dynamic Scenes branch, indicating a moderately populated research direction focused on neural volumetric representations with explicit tone mapping for multi-exposure LDR-to-HDR conversion.

The taxonomy reveals neighboring leaves addressing related challenges: Deblurring and Alternating-Exposure HDR Reconstruction handles motion blur in monocular videos, while Image-Based HDR Fusion focuses on alignment-based merging without volumetric representations. The Dynamic Scene Representation branch contains 4D Gaussian Splatting methods that model temporal variations but typically assume standard dynamic range. The paper bridges these areas by jointly addressing HDR reconstruction and dynamic scene modeling, diverging from siblings like Fast HDR Radiance or GaussHDR that primarily target static or simpler dynamic scenarios.

Among the thirty candidates examined, the problem formulation contribution has one refuting candidate out of ten examined, suggesting some prior work addresses dynamic HDR synthesis. For the HDR-4DGS framework contribution, none of the ten candidates examined clearly refutes it, indicating potential architectural novelty in the dynamic tone-mapping module design. The benchmark dataset contribution also has one refuting candidate among ten, implying related HDR dynamic datasets may already exist. Because of the limited search scope, these statistics reflect top semantic matches rather than exhaustive field coverage, and for most contributions the bulk of retrieved candidates do not refute the claims.

Based on the top-thirty semantic search results, the work appears to occupy a niche intersection between HDR reconstruction and dynamic scene modeling. The taxonomy structure shows this combination is less densely populated than either HDR or dynamic synthesis alone. However, the analysis acknowledges limited coverage: the search examined thirty candidates across three contributions, leaving open questions about broader prior work in HDR video synthesis or related multi-exposure dynamic capture methods not surfaced by semantic similarity.

Taxonomy

Core-task Taxonomy Papers: 50
Claimed Contributions: 3
Contribution Candidate Papers Compared: 30
Refutable Papers: 2

Research Landscape Overview

Core task: Dynamic novel view synthesis in high dynamic range. The field addresses the challenge of reconstructing and rendering moving scenes with realistic lighting and exposure variations from multiple viewpoints. The taxonomy organizes research into four main branches. HDR Reconstruction and Tone Mapping for Dynamic Scenes covers methods that explicitly handle high dynamic range capture and processing, often integrating neural radiance fields with HDR imaging pipelines to recover accurate luminance across exposure brackets. Dynamic Scene Representation and Temporal Modeling emphasizes temporal consistency and motion modeling, developing representations that track deformations and changes over time. Sparse-View and Monocular Dynamic Novel View Synthesis tackles the challenging regime of limited input views, where methods like Dynamic Monocular Synthesis[4] and Generative Novel View[3] must infer geometry and appearance from minimal observations. Specialized Dynamic Scene Reconstruction covers domain-specific applications and hybrid approaches that combine multiple sensing modalities or address particular motion patterns.

Recent work shows active development in neural HDR radiance field methods, where researchers balance reconstruction quality against computational efficiency. Fast HDR Radiance[1] and GaussHDR[17] exemplify efforts to accelerate HDR rendering while maintaining photometric accuracy, whereas Casual3DHDR[14] explores more accessible capture setups. Dynamic HDR Synthesis[0] sits within this neural HDR radiance field cluster, sharing the goal of jointly modeling dynamic geometry and HDR appearance. Compared to neighbors like Adaptive Multi-Exposure[19], which focuses on exposure fusion strategies, or Dynamic HDR Flow[27], which emphasizes optical flow integration, the original work appears to emphasize end-to-end neural reconstruction that directly synthesizes HDR outputs for novel views of moving scenes.

The broader landscape reveals ongoing tensions between representation expressiveness, temporal coherence, and practical capture requirements, with many studies exploring Gaussian splatting variants such as Dynamic 3D Gaussians[2] as alternatives to purely implicit neural fields.

Claimed Contributions

HDR Dynamic Novel View Synthesis problem formulation

The authors formalize a new problem, High Dynamic Range Dynamic Novel View Synthesis (HDR DNVS), which extends HDR novel view synthesis from static scenes to dynamic scenes with time-varying geometry and illumination. A hedged formalization sketch is given below.

10 retrieved papers
Can Refute
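To make the claimed formulation concrete, here is a minimal sketch under assumed notation. The symbols below (the scene model, tone map, renderer, exposures, and the L1 loss) are our illustration, not the authors' definitions: a 4D HDR representation and a time-conditioned tone map are fit jointly so that tone-mapped HDR renderings reproduce the LDR captures.

\[
\min_{\theta,\phi}\;\sum_{i,t}\Big\|\,\mathcal{T}_\phi\big(\mathcal{R}(\mathcal{H}_\theta;\,P_i,\,t),\;e_i,\;t\big)-I_{i,t}\,\Big\|_1
\]

Here $I_{i,t}$ is the LDR image captured from camera pose $P_i$ at time $t$ with exposure $e_i$, $\mathcal{R}$ is a differentiable renderer (e.g., Gaussian splatting), $\mathcal{H}_\theta$ is the HDR 4D scene model, and $\mathcal{T}_\phi$ is the dynamic tone-mapping function. At test time, novel HDR views at any pose and time would come from $\mathcal{R}(\mathcal{H}_\theta;\,P,\,t)$ directly, bypassing the tone map.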
HDR-4DGS framework with dynamic tone-mapping module

The authors introduce HDR-4DGS, a Gaussian Splatting-based framework that incorporates a biologically inspired dynamic tone-mapping module. The module uses a dynamic radiance context learner together with per-channel tone-mapping functions to maintain temporal radiance coherence while translating between the HDR and LDR domains; a minimal code sketch follows below.

10 retrieved papers
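As a rough illustration of how such a module could be wired up, here is a minimal PyTorch-style sketch. All names and choices (DynamicToneMapper, the time-only context input, piecewise-linear per-channel curves) are our assumptions for illustration, not the paper's actual design; the paper's context learner presumably also conditions on the evolving radiance distribution, not just time.

import torch
import torch.nn as nn

class DynamicToneMapper(nn.Module):
    """Hypothetical sketch: a context learner summarizes the scene state at
    time t, and per-channel monotone curves map HDR radiance to LDR color."""

    def __init__(self, ctx_dim: int = 32, n_bins: int = 16):
        super().__init__()
        # Dynamic radiance context learner (here conditioned on time only).
        self.context = nn.Sequential(
            nn.Linear(1, ctx_dim), nn.ReLU(), nn.Linear(ctx_dim, ctx_dim)
        )
        # Predicts knot increments for 3 per-channel piecewise-linear curves.
        self.curve_head = nn.Linear(ctx_dim, 3 * n_bins)
        self.n_bins = n_bins

    def forward(self, hdr: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # hdr: (N, 3) positive radiance samples; t: scalar time in [0, 1].
        ctx = self.context(t.view(1, 1))
        inc = torch.nn.functional.softplus(
            self.curve_head(ctx).view(3, self.n_bins)
        )
        curve = torch.cumsum(inc, dim=-1)
        curve = curve / curve[:, -1:]                 # monotone, ends at 1.0
        x = hdr / (1.0 + hdr)                         # squash HDR into [0, 1)
        pos = x * (self.n_bins - 1)
        idx = pos.long().clamp(0, self.n_bins - 2)    # (N, 3) bin indices
        frac = pos - idx.float()
        # Look up curve values per channel and interpolate linearly.
        table = curve.t().unsqueeze(0).expand(x.shape[0], -1, -1)  # (N, bins, 3)
        lo = torch.gather(table, 1, idx.unsqueeze(1)).squeeze(1)
        hi = torch.gather(table, 1, (idx + 1).unsqueeze(1)).squeeze(1)
        return lo + frac * (hi - lo)                  # LDR colors in [0, 1]

# Usage: tone-map 1024 rendered HDR samples at time t = 0.3.
tone = DynamicToneMapper()
ldr = tone(torch.rand(1024, 3) * 10.0, torch.tensor(0.3))

Monotone, time-varying curves are one simple way to let the HDR-to-LDR mapping adapt to the radiance distribution at each instant while remaining a valid tone map throughout the sequence.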
HDR-4D-Syn and HDR-4D-Real benchmark datasets

The authors create two new benchmark datasets for evaluating HDR DNVS methods: HDR-4D-Syn, with 8 synthetic dynamic scenes, and HDR-4D-Real, with 4 real-world captured sequences. Each dataset includes ground-truth HDR images, time-varying 3D geometry, and synchronized multi-view LDR observations; a hypothetical per-frame record is sketched below.

10 retrieved papers
Can Refute
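To make that data inventory concrete, here is one plausible per-timestep record it implies. The schema is entirely hypothetical (field names, shapes, and dtypes are our guesses, not the datasets' published format):

from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class HDR4DFrame:
    """Hypothetical per-timestep record for an HDR-4D-Syn / HDR-4D-Real scene."""
    time: float                 # normalized timestamp in [0, 1]
    poses: np.ndarray           # (V, 4, 4) camera-to-world matrices for V views
    exposures: np.ndarray       # (V,) exposure times of the LDR captures
    ldr_images: np.ndarray      # (V, H, W, 3) uint8 synchronized LDR frames
    hdr_gt: np.ndarray          # (V, H, W, 3) float32 ground-truth HDR radiance
    geometry_path: Optional[str] = None  # time-varying mesh/points, if provided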

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution

HDR Dynamic Novel View Synthesis problem formulation

The authors formalize a new problem, High Dynamic Range Dynamic Novel View Synthesis (HDR DNVS), which extends HDR novel view synthesis from static scenes to dynamic scenes with time-varying geometry and illumination.

Contribution

HDR-4DGS framework with dynamic tone-mapping module

The authors introduce HDR-4DGS, a Gaussian Splatting-based framework that incorporates a biologically inspired dynamic tone-mapping module. The module uses a dynamic radiance context learner together with per-channel tone-mapping functions to maintain temporal radiance coherence while translating between the HDR and LDR domains.

Contribution

HDR-4D-Syn and HDR-4D-Real benchmark datasets

The authors create two new benchmark datasets for evaluating HDR DNVS methods: HDR-4D-Syn with 8 synthetic dynamic scenes and HDR-4D-Real with 4 real-world captured sequences. Each dataset includes ground-truth HDR images, time-varying 3D geometry, and synchronized multi-view LDR observations.