FullPart: Generating each 3D Part at Full Resolution
Research Landscape Overview
Claimed Contributions
The authors introduce FullPart, a framework that first generates bounding box layouts using implicit vecset diffusion, then generates each part at full resolution within its own dedicated voxel grid using explicit representation. This design addresses limitations of prior methods by enabling fine geometric details while maintaining global coherence.
The authors propose a center-corner encoding mechanism that embeds absolute spatial context for each voxel by encoding the positions of its center and eight corners in a unified super-high-resolution global coordinate system. This addresses the scale misalignment problem when parts of different sizes exchange information through attention mechanisms.
The authors introduce PartVerse-XL, the largest human-annotated 3D part dataset to date, containing 40K objects and 320K parts with associated part-aware texture descriptions. The dataset was created through mesh pre-segmentation followed by human refinement to ensure high-quality, semantically consistent annotations.
Core Task Comparisons
Comparisons with papers in the same taxonomy category
[3] From one to more: Contextual part latents for 3d generation
Contribution Analysis
Detailed comparisons for each claimed contribution
FullPart framework combining implicit and explicit paradigms
The authors introduce FullPart, a framework that first generates bounding box layouts using implicit vecset diffusion, then generates each part at full resolution within its own dedicated voxel grid using explicit representation. This design addresses limitations of prior methods by enabling fine geometric details while maintaining global coherence.
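The key property of the second stage can be sketched in a few lines: once the layout stage has produced a part's bounding box, the part is rescaled into its own dedicated voxel grid, so a small part receives the same effective resolution as a large one. This is a minimal illustrative sketch, not the paper's implementation; the function name, the point-cloud input, and the `grid_res` value are assumptions.

```python
import numpy as np

def voxelize_part_full_res(points, bbox_min, bbox_max, grid_res=64):
    """Voxelize one part's surface points into its own dedicated grid.

    Because every part is rescaled to fill the same grid_res**3 grid,
    small parts get the same effective resolution as large ones -- the
    motivation for generating each part at full resolution.
    grid_res=64 is an illustrative value, not the paper's setting.
    """
    bbox_min = np.asarray(bbox_min, dtype=np.float64)
    bbox_max = np.asarray(bbox_max, dtype=np.float64)

    # Normalize points from the part's bounding box into [0, 1)^3,
    # then quantize into local voxel indices.
    pts = (np.asarray(points, dtype=np.float64) - bbox_min) / (bbox_max - bbox_min)
    idx = np.clip((pts * grid_res).astype(np.int64), 0, grid_res - 1)

    grid = np.zeros((grid_res,) * 3, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid
```

Under this scheme the grid stores only part-local occupancy; the part's global placement survives solely through its bounding box, which is why an additional positional encoding (such as the center-corner encoding below) is needed for cross-part attention.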
[51] Partsdf: Part-based implicit neural representation for composite 3d shape parametrization and optimization
[52] Neural parts: Learning expressive 3d shape abstractions with invertible neural networks
[53] Anise: Assembly-based neural implicit surface reconstruction
[54] Generating Part-Aware Editable 3D Shapes without 3D Supervision
[55] Learning Explicit Contact for Implicit Reconstruction of Hand-held Objects from Monocular Images
[56] Implicit-Explicit Coupling Enhancement for UAV Scene 3D Reconstruction
[57] Implicit Neural Head Synthesis via Controllable Local Deformation Fields
[58] PARIS: Part-level Reconstruction and Motion Analysis for Articulated Objects
[59] SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling
[60] LEIA: Latent View-invariant Embeddings for Implicit 3D Articulation
Center-corner encoding strategy for part coherence
The authors propose a center-corner encoding mechanism that embeds absolute spatial context for each voxel by encoding the positions of its center and eight corners in a unified super-high-resolution global coordinate system. This addresses the scale misalignment problem when parts of different sizes exchange information through attention mechanisms.
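The encoding described above can be sketched as follows: for every voxel in a part's dedicated grid, compute the global positions of its center and eight corners, then quantize them into a shared high-resolution global coordinate system. This is a hedged sketch under stated assumptions; the function name, the `global_res` value, and the exact quantization are illustrative, not the paper's formulation.

```python
import numpy as np

def center_corner_encoding(bbox_min, bbox_max, grid_res, global_res=512):
    """Encode each voxel's center and eight corners in a shared global grid.

    bbox_min, bbox_max: the part's bounding box in the global [0, 1]^3 frame.
    grid_res: per-part voxel resolution (identical for every part).
    global_res: resolution of the shared global coordinate system
                (illustrative value; the paper's setting may differ).
    Returns integer coordinates of shape (grid_res, grid_res, grid_res, 9, 3):
    index 0 along axis -2 is the center, indices 1..8 are the corners.
    """
    bbox_min = np.asarray(bbox_min, dtype=np.float64)
    bbox_max = np.asarray(bbox_max, dtype=np.float64)
    size = bbox_max - bbox_min

    # Local voxel indices (i, j, k) for the part's own grid.
    idx = np.stack(np.meshgrid(*[np.arange(grid_res)] * 3, indexing="ij"), axis=-1)

    # Voxel centers mapped into the global frame.
    centers = bbox_min + (idx + 0.5) / grid_res * size               # (R, R, R, 3)

    # The 8 corner offsets of a unit voxel, then mapped into the global frame.
    off = np.array([[dx, dy, dz] for dx in (0, 1)
                    for dy in (0, 1) for dz in (0, 1)], dtype=np.float64)
    corners = bbox_min + (idx[..., None, :] + off) / grid_res * size  # (R, R, R, 8, 3)

    # Quantize center + corners into the shared super-high-resolution grid.
    enc = np.concatenate([centers[..., None, :], corners], axis=-2)   # (R, R, R, 9, 3)
    return np.clip(np.floor(enc * global_res), 0, global_res - 1).astype(np.int64)
```

Because every part's voxels are expressed in the same absolute coordinate system, attention between a small part and a large one compares positions at a common scale, which is the misalignment the mechanism is meant to resolve.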
[3] From one to more: Contextual part latents for 3d generation
[4] Ultra3D: Efficient and High-Fidelity 3D Generation with Part Attention
[43] Romantex: Decoupling 3d-aware rotary positional embedded multi-attention network for texture synthesis
[44] Omnipart: Part-aware 3d generation with semantic decoupling and structural cohesion
[45] PartCrafter: Structured 3D Mesh Generation via Compositional Latent Diffusion Transformers
[46] Videograin: Modulating space-time attention for multi-grained video editing
[47] Multi-Modality Regional Alignment Network for Covid X-Ray Survival Prediction and Report Generation
[48] Enhanced Monocular Depth Estimation Based on Improved Self-Attention Mechanisms and Composite Loss Functions
[49] mpAuvS: multi-perspective attention for unsupervised video summarization: capturing global, local, and spatiotemporal context (C. Xin et al.)
[50] Directional Non-Commutative Monoidal Structures for Compositional Embeddings in Machine Learning
PartVerse-XL dataset
The authors introduce PartVerse-XL, the largest human-annotated 3D part dataset to date, containing 40K objects and 320K parts with associated part-aware texture descriptions. The dataset was created through mesh pre-segmentation followed by human refinement to ensure high-quality, semantically consistent annotations.