Noisy-Pair Robust Representation Alignment for Positive-Unlabeled Learning

ICLR 2026 Conference Submission
Anonymous Authors
Keywords: PU Learning, Non-contrastive Representation Learning
Abstract:

Positive-Unlabeled (PU) learning aims to train a binary classifier (positive vs. negative) when only limited positive data and abundant unlabeled data are available. While widely applicable, state-of-the-art PU learning methods substantially underperform their supervised counterparts on complex datasets, especially without auxiliary negatives or pre-estimated parameters (e.g., a 14.26% gap on the CIFAR-100 dataset). We identify the primary bottleneck as the challenge of learning discriminative representations under unreliable supervision. To tackle this challenge, we propose NcPU, a non-contrastive PU learning framework that requires no auxiliary information. NcPU combines a noisy-pair robust supervised non-contrastive loss (NoiSNCL), which aligns intra-class representations despite unreliable supervision, with a phantom label disambiguation (PLD) scheme that supplies conservative negative supervision via regret-based label updates. Theoretically, NoiSNCL and PLD iteratively benefit each other when viewed from the perspective of the Expectation-Maximization framework. Empirically, extensive experiments demonstrate that: (1) NoiSNCL enables simple PU methods to achieve competitive performance; and (2) NcPU achieves substantial improvements over state-of-the-art PU methods across diverse datasets, including challenging datasets on post-disaster building damage mapping, highlighting its promise for real-world applications. Code: https://github.com/ICLR2026-285/NcPU.git.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholarly search engine). It analyzes an academic paper's task and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper proposes NcPU, a non-contrastive framework for positive-unlabeled learning that combines a noisy-pair robust supervised loss (NoiSNCL) with phantom label disambiguation (PLD). Within the taxonomy, it occupies the 'Non-Contrastive Representation Alignment' leaf under 'Representation Learning for PU', where it is currently the sole paper. This leaf sits alongside 'Contrastive PU Learning', which contains two papers focused on hard negative mining and self-supervision. The sparse population of this specific leaf suggests the non-contrastive approach to PU representation alignment is relatively underexplored compared to contrastive methods.

The taxonomy reveals that representation-based PU learning is one branch within broader 'Core Positive-Unlabeled Learning Methods', which also includes cost-sensitive and risk-based formulations. The sibling 'Contrastive PU Learning' leaf contains methods like Weighted Contrastive PU and PU Contrastive Learning, which rely on instance-level discrimination and negative sampling. The taxonomy narrative explicitly contrasts NcPU's noise-robust alignment with contrastive methods' clean pair assumptions, positioning it as addressing a distinct challenge. Neighboring branches in domain adaptation (e.g., 'Discriminative Feature Alignment') tackle related representation problems but assume labeled source data, which the taxonomy's exclude notes clarify as out of scope for PU methods.

Among the eleven candidates examined, none clearly refuted the three core contributions. For the NcPU framework contribution, ten candidates were compared with zero refutable overlaps; for NoiSNCL, one candidate was compared with no refutation; for PLD, no candidates were examined. This limited search scope (eleven papers in total) means the analysis captures only a narrow slice of potentially relevant work. The absence of refutable candidates across all contributions suggests either genuine novelty within the examined set or insufficient coverage of closely related noise-robust representation methods. The statistics indicate that the framework and loss components were more thoroughly vetted than the PLD scheme, which received no candidate examination.

Based on the top-eleven semantic matches examined, the work appears to occupy a sparsely populated niche combining non-contrastive alignment with noise robustness for PU learning. However, the limited search scope leaves open whether broader literature in noisy label learning or robust representation methods might contain relevant prior work. The taxonomy structure confirms that non-contrastive PU representation alignment is less crowded than contrastive approaches, though the single-paper leaf status may reflect taxonomy granularity rather than absolute novelty.

Taxonomy

Core-task Taxonomy Papers: 50
Claimed Contributions: 3
Contribution Candidate Papers Compared: 11
Refutable Papers: 0

Research Landscape Overview

Core task: Positive-unlabeled learning with discriminative representation alignment. This field addresses scenarios where only a subset of positive examples is labeled while the remaining data is unlabeled, requiring methods that align representations to improve discrimination between positive and negative instances.

The taxonomy reveals a rich landscape organized into seven main branches. Core Positive-Unlabeled Learning Methods focus on foundational PU techniques, including representation learning approaches that leverage contrastive frameworks like Weighted Contrastive PU[3] and PU Contrastive Learning[14], as well as non-contrastive alignment strategies. Unsupervised Domain Adaptation with Feature Alignment encompasses works that align distributions across domains through adversarial training, prototype-based methods such as Prototype Driven Adaptation[17], and discriminative feature mining techniques like Transferable Discriminative Features[6]. Semi-Supervised Domain Adaptation and Source-Free settings address scenarios with partial labels or no source access, while Domain Adaptation for Specific Modalities tackles vision, hyperspectral, and temporal data. Industrial applications span fault diagnosis in bearings and gearboxes, and Open-World settings explore decentralized and continual learning challenges.

Several active research directions reveal key trade-offs in how representations are aligned and how label scarcity is handled. Contrastive methods like Weighted Contrastive PU[3] emphasize instance-level discrimination but may struggle with noisy pairs, while non-contrastive approaches such as Kernel Alignment PU[1] avoid explicit negative sampling. Prototype-based alignment methods balance class-level structure with instance diversity, as seen in works like Class Prototype Guided[18].
Within this landscape, Noisy Pair Robust PU[0] sits in the non-contrastive representation alignment cluster, addressing robustness to noisy correspondences—a challenge that distinguishes it from contrastive counterparts like Weighted Contrastive PU[3], which rely on clean pair assumptions. Compared to Kernel Alignment PU[1], which focuses on kernel-based feature matching, Noisy Pair Robust PU[0] emphasizes handling label noise during alignment, positioning it as a bridge between classical PU learning and modern robust representation techniques.

Claimed Contributions

NcPU framework for noisy-pair robust representation alignment

The authors introduce NcPU, a framework that combines noisy-pair robust supervised non-contrastive loss (NoiSNCL) with phantom label disambiguation (PLD) to learn discriminative representations in positive-unlabeled learning without requiring auxiliary negatives or pre-estimated parameters.

10 retrieved papers
Noisy-pair robust supervised non-contrastive loss (NoiSNCL)

The authors propose NoiSNCL, a loss function that aligns intra-class representations while being robust to noisy pairs by ensuring that clean pairs dominate the optimization process through gradient magnitude analysis.

1 retrieved paper
Phantom label disambiguation (PLD) scheme

The authors develop PLD, a label disambiguation strategy that provides conservative negative supervision through regret-based label updating using class prototypes and a PhantomGate mechanism to prevent trivial solutions.

0 retrieved papers

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Within the taxonomy built over the retrieved top-K core-task papers, the original paper is assigned to a leaf with no direct siblings and no cousin branches under the same grandparent topic. In this retrieved landscape it appears structurally isolated, which is one partial signal of novelty, though still constrained by search coverage and taxonomy granularity.

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution 1

NcPU framework for noisy-pair robust representation alignment

The authors introduce NcPU, a framework that combines noisy-pair robust supervised non-contrastive loss (NoiSNCL) with phantom label disambiguation (PLD) to learn discriminative representations in positive-unlabeled learning without requiring auxiliary negatives or pre-estimated parameters.

Contribution 2

Noisy-pair robust supervised non-contrastive loss (NoiSNCL)

The authors propose NoiSNCL, a loss function that aligns intra-class representations while being robust to noisy pairs by ensuring that clean pairs dominate the optimization process through gradient magnitude analysis.
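The description above implies a pair-alignment objective in which clean pairs dominate the gradient. As a minimal illustrative sketch only (the weighting function, names, and hyperparameter below are assumptions, not the paper's actual NoiSNCL formulation), one way to realize this is to down-weight low-similarity same-class pairs in a non-contrastive alignment loss:

```python
import numpy as np

def noise_robust_alignment_loss(z1, z2, same_class, beta=2.0):
    """Hypothetical sketch: align L2-normalized embeddings of pairs
    pseudo-labeled as same-class, down-weighting low-similarity
    (likely noisy) pairs so that clean pairs dominate the gradient.
    Not the paper's NoiSNCL; an illustrative stand-in only."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = np.sum(z1 * z2, axis=1)          # cosine similarity per pair
    per_pair = (1.0 - sim) * same_class    # non-contrastive alignment term
    weight = ((1.0 + sim) / 2.0) ** beta   # clean (high-sim) pairs weigh more
    return float(np.sum(weight * per_pair) / max(np.sum(same_class), 1))
```

For identical embeddings the loss is zero; as pair similarity drops, the alignment term grows but its weight shrinks, so likely-noisy pairs contribute progressively less to the optimization.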

Contribution 3

Phantom label disambiguation (PLD) scheme

The authors develop PLD, a label disambiguation strategy that provides conservative negative supervision through regret-based label updating using class prototypes and a PhantomGate mechanism to prevent trivial solutions.
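The mechanism described above can be sketched as prototype-based, margin-gated relabeling. The code below is hypothetical (the regret-style criterion, `margin`, `gate_frac`, and the gate's form are illustrative assumptions, not the paper's actual PLD or PhantomGate definitions):

```python
import numpy as np

def phantom_label_update(feats, proto_pos, proto_neg, margin=0.1, gate_frac=0.9):
    """Hypothetical sketch of conservative prototype-based relabeling.
    An unlabeled sample is flipped to 'negative' only when its similarity
    to the negative prototype beats the positive one by `margin` (a
    regret-style criterion); a gate rejects the update if almost every
    sample would flip, which would be a trivial all-negative solution."""
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim_pos = feats @ (proto_pos / np.linalg.norm(proto_pos))
    sim_neg = feats @ (proto_neg / np.linalg.norm(proto_neg))
    flip = (sim_neg - sim_pos) > margin          # conservative negative evidence
    if flip.mean() > gate_frac:                  # PhantomGate-style safeguard
        return np.zeros_like(flip, dtype=bool)   # block trivial mass flip
    return flip
```

The gate illustrates why some safeguard is needed: without it, a degenerate prototype configuration could relabel nearly all unlabeled data as negative in a single update.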