HDR-4DGS: High Dynamic Range 4D Gaussian Splatting from Alternating-exposure Monocular Videos
Overview
Overall Novelty Assessment
The paper introduces HDR-4DGS, a system for reconstructing 4D high dynamic range scenes from unposed monocular videos with alternating exposures using Gaussian Splatting. It resides in the 'Unposed Monocular Alternating-Exposure 4D HDR' leaf, which contains only two papers total (including this work). This indicates a highly sparse research direction within the broader taxonomy of 4D HDR reconstruction. The sibling paper, Mono4DGS-HDR, shares the same technical foundation, suggesting this specific problem formulation—joint pose estimation, dynamic scene modeling, and HDR synthesis from alternating exposures—is still in its early stages.
The taxonomy reveals three main branches: Gaussian Splatting-Based 4D HDR Reconstruction (where this paper sits), Learning-Based HDR Video Reconstruction (including multi-stage alignment networks and GAN-based synthesis), and Monocular 4D Dynamic Scene Reconstruction (focused on geometry without HDR). The paper's approach diverges from learning-based methods like SKFHDRNet and HDRVideo-GAN, which rely on trained networks for exposure fusion, by using explicit Gaussian representations for joint optimization. It also differs from broader monocular 4D methods like Vivid4D and DRSM, which do not address HDR synthesis or alternating-exposure input.
Each of the three claimed contributions was examined against a limited candidate pool: the core HDR-4DGS system (10 candidates, 1 refutable), the two-stage optimization framework (3 candidates, 1 refutable), and the temporal luminance regularization (7 candidates, 1 refutable). The analysis draws on 20 total candidates gathered via semantic search and citation expansion. These statistics suggest that while each contribution has at least one overlapping prior work among the examined candidates, the majority of candidates (9 of 10 for the core system, 2 of 3 for the framework, 6 of 7 for the regularization) do not clearly refute the claimed novelty.
Given the sparse taxonomy leaf (only two papers) and the limited search scope (20 candidates), the work appears to occupy a relatively unexplored niche. The presence of one sibling paper and a refutable candidate for each contribution suggests incremental refinement of closely related methods rather than an entirely new problem formulation. However, this analysis is not an exhaustive literature review, and the true novelty may hinge on technical details not captured in the abstract-level comparison.
Taxonomy
Research Landscape Overview
Claimed Contributions
The authors present the first system capable of reconstructing 4D HDR scenes from monocular LDR videos with alternating exposures and unknown camera parameters. This addresses a previously unexplored and challenging task in HDR novel view synthesis.
The authors introduce a novel two-stage optimization approach: the first stage learns HDR Gaussians in an orthographic camera coordinate space without requiring camera poses, and the second stage transforms them to world space and jointly refines the world-space Gaussians and the camera parameters. The approach includes a video-to-world Gaussian transformation strategy based on the invariance of the Gaussians' projected 2D covariances.
The authors propose a temporal luminance regularization strategy that uses a flow-guided photometric loss to align per-pixel HDR irradiance between consecutive frames. This promotes temporally consistent HDR appearance across the reconstructed video, particularly for dynamic content.
Core Task Comparisons
Comparisons with papers in the same taxonomy category
Contribution Analysis
Detailed comparisons for each claimed contribution
HDR-4DGS system for 4D HDR reconstruction from unposed monocular alternating-exposure videos
The authors present the first system capable of reconstructing 4D HDR scenes from monocular LDR videos with alternating exposures and unknown camera parameters. This addresses a previously unexplored and challenging task in HDR novel view synthesis.
[5] HDRVideo-GAN: Deep generative HDR video reconstruction
[14] HDRFlow: Real-time HDR video reconstruction with large motions
[15] Deep HDR video from sequences with alternating exposures
[16] DeepHS-HDRVideo: Deep high-speed high dynamic range video reconstruction
[17] High-speed HDR video reconstruction from hybrid intensity frames and events
[18] HDR video reconstruction from events and LDR frames via spatiotemporal attention and exposure compensation
[19] Compressive sensing-based HDR-like image encryption and artifact-mitigated reconstruction
[20] Single-image HDR reconstruction assisted ghost suppression and detail preservation network for multi-exposure HDR imaging
[21] Diffusion-promoted HDR video reconstruction
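The alternating-exposure methods above, like HDR-4DGS itself, hinge on mapping LDR frames captured at different exposure times into a common irradiance domain. A minimal sketch of that mapping, assuming a simple gamma camera response (real pipelines calibrate the response curve instead; the function name and gamma value here are illustrative, not from the paper):

```python
import numpy as np

def ldr_to_irradiance(ldr, exposure_time, gamma=2.2):
    """Recover relative scene irradiance from an LDR frame with known exposure.

    Assumes a pure-gamma camera response; real systems estimate the
    response curve (e.g., Debevec-Malik style calibration) instead.
    """
    linear = np.clip(ldr, 0.0, 1.0) ** gamma   # invert the gamma response
    return linear / exposure_time              # normalize by exposure time

# Alternating short/long exposures of the same scene point should map
# to the same irradiance value after normalization.
short = ldr_to_irradiance(np.array([0.35]), exposure_time=0.01)
long_ = ldr_to_irradiance(np.array([0.66]), exposure_time=0.04)
```

Under this model, a pixel saturated in the long exposure can still be recovered from the short one, which is what makes alternating-exposure capture attractive for HDR video.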
Two-stage optimization framework with video-to-world Gaussian transformation
The authors introduce a novel two-stage optimization approach: the first stage learns HDR Gaussians in an orthographic camera coordinate space without requiring camera poses, and the second stage transforms them to world space and jointly refines the world-space Gaussians and the camera parameters. The approach includes a video-to-world Gaussian transformation strategy based on the invariance of the Gaussians' projected 2D covariances.
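The core of any such video-to-world step is that a rigid transform moves a 3D Gaussian's mean and rotates its covariance, while a consistent camera pose leaves the projected 2D covariance unchanged. A minimal sketch of the rigid part, assuming an estimated camera-to-world rotation R and translation t (illustrative only, not the authors' implementation):

```python
import numpy as np

def gaussians_to_world(means_c, covs_c, R, t):
    """Map 3D Gaussians from camera space to world space.

    Under a rigid camera-to-world transform (R, t):
        mu_w    = R @ mu_c + t
        Sigma_w = R @ Sigma_c @ R.T
    With a consistent pose, the Gaussians' projected 2D covariances are
    unchanged, which is the invariance the transformation strategy exploits.
    """
    means_w = means_c @ R.T + t      # (N, 3) means, translated and rotated
    covs_w = R @ covs_c @ R.T        # matmul broadcasts over (N, 3, 3)
    return means_w, covs_w
```

Because the transform is rigid, covariance eigenvalues (i.e., each Gaussian's 3D extent) are preserved; only position and orientation change before the joint refinement of stage two.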
Temporal luminance regularization strategy for HDR temporal consistency
The authors propose a temporal luminance regularization strategy that uses a flow-guided photometric loss to align per-pixel HDR irradiance between consecutive frames. This promotes temporally consistent HDR appearance across the reconstructed video, particularly for dynamic content.
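The general shape of a flow-guided photometric loss is to warp the next frame's irradiance back along the optical flow and penalize the difference. A minimal sketch assuming nearest-neighbor warping (practical systems use a differentiable bilinear sampler; the function names are illustrative stand-ins, not the paper's loss):

```python
import numpy as np

def warp_with_flow(img, flow):
    """Backward-warp img (H, W) with per-pixel flow (H, W, 2).

    Nearest-neighbor sampling for clarity; a bilinear sampler would be
    used in a differentiable training loop.
    """
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    xw = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    yw = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    return img[yw, xw]

def temporal_luminance_loss(irr_t, irr_t1, flow_t_to_t1):
    """L1 penalty between frame-t HDR irradiance and frame-(t+1)
    irradiance warped back along the flow."""
    warped = warp_with_flow(irr_t1, flow_t_to_t1)
    return np.mean(np.abs(irr_t - warped))
```

For a static scene with zero flow the loss vanishes, while motion that the flow fails to explain (or exposure-induced irradiance drift) is penalized, which is what enforces temporal consistency on the HDR output.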