Color3D: Controllable and Consistent 3D Colorization with Personalized Colorizer

ICLR 2026 Conference Submission · Anonymous Authors
Keywords: 3D Gaussian Splatting, 3D Editing, 3D Colorization, 3D Generation
Abstract:

In this work, we present Color3D, a highly adaptable framework for colorizing both static and dynamic 3D scenes from monochromatic inputs, delivering visually diverse and chromatically vibrant reconstructions with flexible user-guided control. In contrast to existing methods, which focus solely on static scenarios and enforce multi-view consistency by averaging color variations, inevitably sacrificing both chromatic richness and controllability, our approach preserves color diversity and steerability while ensuring cross-view and cross-time consistency. The core insight of our method is to colorize only a single key view and then fine-tune a personalized colorizer to propagate its color to novel views and time steps. Through personalization, the colorizer learns the scene-specific deterministic color mapping underlying the reference view, enabling it to consistently project corresponding colors onto the content of novel views and video frames via its inherent inductive bias. Once trained, the personalized colorizer infers consistent chrominance for all other images, enabling direct reconstruction of colorful 3D scenes with a dedicated Lab color space Gaussian splatting representation. The proposed framework recasts complicated 3D colorization as a more tractable single-image paradigm, allowing seamless integration of arbitrary image colorization models with enhanced flexibility and controllability. Extensive experiments across diverse static and dynamic 3D colorization benchmarks substantiate that our method delivers more consistent and chromatically rich renderings with precise user control. The code will be publicly available.
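
The abstract's core insight, learning a deterministic luminance-to-chrominance mapping from one colorized key view and reusing it everywhere, can be illustrated with a toy sketch. Everything below is hypothetical: `fit_colorizer` is a simple per-luminance-bin lookup table standing in for the paper's fine-tuned neural colorizer, and the Lab chrominance values are made up.

```python
import numpy as np

def fit_colorizer(key_lum, key_ab, n_bins=16):
    """Fit a toy 'personalized colorizer': a per-luminance-bin mean
    chrominance table learned from one colorized key view.
    (A stand-in for the paper's fine-tuned neural colorizer.)"""
    bins = np.clip((key_lum * n_bins).astype(int), 0, n_bins - 1)
    table = np.zeros((n_bins, 2))
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            table[b] = key_ab[mask].mean(axis=0)
    return table

def apply_colorizer(table, lum):
    """Deterministically map luminance to chrominance in novel views,
    so the same content receives the same color across views/frames."""
    n_bins = len(table)
    bins = np.clip((lum * n_bins).astype(int), 0, n_bins - 1)
    return table[bins]

# Key view: luminance in [0, 1] with user-chosen Lab (a, b) chrominance.
key_lum = np.array([0.1, 0.1, 0.8, 0.8])
key_ab = np.array([[40.0, -20.0], [40.0, -20.0],
                   [-10.0, 35.0], [-10.0, 35.0]])
table = fit_colorizer(key_lum, key_ab)

# A novel view sharing content with the key view gets consistent chroma.
novel_lum = np.array([0.8, 0.1])
print(apply_colorizer(table, novel_lum))
```

Because the mapping is deterministic, content that reappears in a novel view or a later frame receives the same chrominance the user assigned in the key view, which is the cross-view and cross-time consistency property the personalization step is meant to provide.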

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholarly search engine). It analyzes academic papers' tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

Color3D proposes a unified framework for colorizing both static and dynamic 3D scenes from monochromatic inputs by fine-tuning a personalized colorizer on a single key view and propagating its color to novel views and time steps. The paper resides in the 'Personalized Colorizer with Cross-View Consistency' leaf under '3D Scene Colorization via Neural Radiance Fields', where it is currently the sole occupant among fifteen total papers in the taxonomy. This positioning suggests a relatively sparse research direction focused specifically on personalization-driven consistency, contrasting with the more populated sibling leaves addressing direct radiance field colorization and Gaussian splatting-based methods.

The taxonomy tree reveals that Color3D's nearest neighbors include 'Direct Radiance Field Colorization' methods using Lab color space or knowledge distillation, and 'Gaussian Splatting-Based 3D Colorization' approaches emphasizing temporal super-resolution. The broader '3D Scene Colorization via Neural Radiance Fields' branch sits alongside '2D Video and Multi-View Colorization' and 'Single Image Colorization with User Control', indicating that the field spans a spectrum from purely 2D interactive methods to fully 3D geometry-aware reconstruction. Color3D's emphasis on personalized colorizers and cross-time consistency distinguishes it from sibling approaches that either lack personalization or operate exclusively on static scenes.

Among thirty candidates examined, the unified framework contribution shows one refutable candidate out of ten examined, suggesting some overlap with prior work in the limited search scope. The key view selection and augmentation scheme, as well as the Lab Gaussian representation, each examined ten candidates with zero refutations, indicating these contributions appear more novel within the analyzed subset. The statistics reflect a focused semantic search rather than exhaustive coverage, so the presence of one overlapping candidate for the framework contribution does not preclude broader novelty but does signal that related unified approaches exist in the examined literature.

Based on the limited top-thirty semantic search, Color3D appears to occupy a sparsely populated niche combining personalization, cross-view consistency, and dynamic scene handling. The taxonomy structure and contribution-level statistics suggest that while the overall framework concept has some prior overlap, the specific mechanisms for key view selection and Lab-based Gaussian reconstruction may offer incremental advances. A more exhaustive literature review would be needed to confirm the extent of novelty beyond the examined candidate set.

Taxonomy

Core-task Taxonomy Papers: 15
Claimed Contributions: 3
Contribution Candidate Papers Compared: 30
Refutable Papers: 1

Research Landscape Overview

Core task: Controllable and consistent 3D colorization from monochromatic inputs. The field encompasses diverse approaches to adding color to grayscale or monochromatic data, organized into several main branches. The taxonomy reveals a spectrum from purely 2D methods—such as single image colorization with user control and video-based multi-view colorization—to fully 3D scene colorization via neural radiance fields, which explicitly model geometry and view consistency. Domain-specific and multi-instance colorization addresses specialized settings like medical imaging or broadcast footage, while geometric and visualization applications explore colorization for scientific or technical purposes. Representative works like Colorizing Radiance Fields[1] and Gaussian Colorization[6] illustrate the 3D scene branch, whereas Language Video Colorization[4] and Broadcast Colorization[7] exemplify temporal and multi-view consistency in 2D settings. User-guided methods such as User-Guided Colorization[15] and Controllable Colorization[9] highlight the importance of interactive control across different dimensionalities.

A particularly active line of work focuses on achieving cross-view consistency in 3D scenes, balancing personalization with geometric coherence. Color3D[0] sits within the personalized colorizer cluster under neural radiance fields, emphasizing user control while maintaining consistency across novel viewpoints. This contrasts with fully automatic approaches like Colorizing Radiance Fields[1], which rely on pre-trained priors without explicit user guidance, and with methods like Magiccolor[3], which may prioritize semantic plausibility over fine-grained user intent.

The central challenge across these branches is reconciling the flexibility of user-driven color assignment with the structural demands of 3D geometry and multi-view rendering. Color3D[0] addresses this by integrating personalized color hints into a radiance field framework, positioning it as a bridge between interactive single-image colorization and fully consistent 3D scene reconstruction.

Claimed Contributions

Color3D unified controllable 3D colorization framework

The authors introduce Color3D, a framework that unifies controllable colorization for both static and dynamic 3D scenes. It achieves this by fine-tuning a personalized colorizer for each scene, thereby advancing controllability and interactivity in 3D colorization tasks.

10 retrieved papers · Can Refute
Key view selection and single view augmentation scheme

The authors develop a key view selection strategy and a single view augmentation method to improve the personalized colorizer's ability to generalize and produce richer colors. This facilitates more effective tuning of the scene-specific colorizer.

10 retrieved papers · none can refute
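
The report does not spell out the paper's actual selection criterion, but a plausible stand-in heuristic is to colorize the view that shares the most content with the rest of the capture, so the personalized colorizer sees as much of the scene as possible during tuning. The sketch below (hypothetical `select_key_view`, toy per-view feature vectors) picks the view with the highest mean cosine similarity to all other views:

```python
import numpy as np

def select_key_view(view_features):
    """Hypothetical key-view selection: choose the view whose feature
    vector is most similar, on average, to every other view, so the
    colorized reference covers as much shared scene content as possible.
    (A stand-in heuristic; the paper's actual criterion may differ.)"""
    f = np.asarray(view_features, dtype=float)
    f = f / np.linalg.norm(f, axis=1, keepdims=True)  # unit-normalize rows
    sim = f @ f.T                                     # pairwise cosine similarity
    np.fill_diagonal(sim, 0.0)                        # ignore self-similarity
    return int(sim.mean(axis=1).argmax())

# Three toy per-view descriptors; the middle view overlaps both others most.
views = [[1.0, 0.0, 0.0],
         [0.7, 0.7, 0.0],
         [0.0, 1.0, 0.0]]
print(select_key_view(views))  # prints 1
```

The single-view augmentation step described above would then enlarge the tuning set from this one chosen view, for example via crops, flips, or mild photometric jitter, though the paper's exact augmentations are not detailed in this report.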
Lab Gaussian representation for color reconstruction

The authors propose a dedicated Lab color space Gaussian splatting representation that separately optimizes luminance and chrominance components. This representation enhances color reconstruction fidelity and preserves scene structures more effectively.

10 retrieved papers · none can refute
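
The key property of the claimed Lab Gaussian representation, optimizing luminance and chrominance against separate supervision signals, can be mimicked with a minimal decoupled least-squares sketch. This is purely illustrative: no actual splatting or rendering is performed, and all parameter names and targets are made up.

```python
import numpy as np

# Toy stand-in for the Lab-space color attributes of a Gaussian splat:
# each point stores a luminance value L and a chrominance pair (a, b)
# as separate parameters fitted to separate targets, mirroring the
# decoupled optimization the contribution describes.
rng = np.random.default_rng(0)
n_points = 5
L_param = rng.uniform(0.0, 1.0, n_points)          # luminance per point
ab_param = rng.uniform(-1.0, 1.0, (n_points, 2))   # chrominance per point

L_target = np.linspace(0.2, 0.9, n_points)         # from the grayscale input
ab_target = np.stack([np.linspace(-0.5, 0.5, n_points),
                      np.full(n_points, 0.3)], axis=1)  # from the colorizer

lr = 0.1
for _ in range(200):  # independent gradient steps on the two losses
    L_param = L_param - lr * 2.0 * (L_param - L_target)      # grad of ||L - L*||^2
    ab_param = ab_param - lr * 2.0 * (ab_param - ab_target)  # grad of ||ab - ab*||^2

# Luminance converges to the structure-preserving grayscale target while
# chrominance converges to the colorizer's prediction, independently.
print(float(np.abs(L_param - L_target).max()),
      float(np.abs(ab_param - ab_target).max()))
```

Keeping the two channel groups decoupled means chrominance updates cannot corrupt the luminance that encodes scene structure, which is one way to read the claim that the representation "preserves scene structures more effectively".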

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Within the taxonomy built over the current TopK core-task papers, the original paper is assigned to a leaf with no direct siblings and no cousin branches under the same grandparent topic. In this retrieved landscape, it appears structurally isolated, which is one partial signal of novelty, but still constrained by search coverage and taxonomy granularity.

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution

Color3D unified controllable 3D colorization framework

The authors introduce Color3D, a framework that unifies controllable colorization for both static and dynamic 3D scenes. It achieves this by fine-tuning a personalized colorizer for each scene, thereby advancing controllability and interactivity in 3D colorization tasks.

Contribution

Key view selection and single view augmentation scheme

The authors develop a key view selection strategy and a single view augmentation method to improve the personalized colorizer's ability to generalize and produce richer colors. This facilitates more effective tuning of the scene-specific colorizer.

Contribution

Lab Gaussian representation for color reconstruction

The authors propose a dedicated Lab color space Gaussian splatting representation that separately optimizes luminance and chrominance components. This representation enhances color reconstruction fidelity and preserves scene structures more effectively.
