Curse of Slicing: Why Sliced Mutual Information is a Deceptive Measure of Statistical Dependence
Overview
Overall Novelty Assessment
The paper contributes a critical analysis of Sliced Mutual Information (SMI), identifying saturation behavior, sensitivity failures, and redundancy bias as fundamental limitations. Within the taxonomy, it occupies the 'Limitations and Critical Analysis' leaf under 'Theoretical Foundations and Extensions of Sliced Mutual Information'. Notably, this leaf contains only the original paper itself; no sibling papers exist in this category. This isolation suggests that systematic critical examination of SMI's failure modes represents a sparse research direction, contrasting sharply with the more populated leaves addressing SMI variants and applications.
The taxonomy reveals substantial activity in neighboring areas: 'Max-Sliced Mutual Information' contains four papers exploring optimality conditions, while 'k-Sliced and Higher-Dimensional Extensions' includes two papers on dimensional scalability. The 'Core Sliced Mutual Information Theory' leaf holds one foundational paper. The original work diverges from these directions by questioning SMI's reliability rather than extending its capabilities. Its scope explicitly excludes positive theoretical developments (which belong in sibling leaves) and application-specific issues (which belong under 'Applications to Deep Learning Analysis' or 'Domain-Specific Applications'), focusing instead on intrinsic theoretical and practical limitations.
Across the twelve candidates surfaced by a limited semantic search, the 'Saturation and Sensitivity Analysis' contribution yielded one potentially refuting candidate out of six examined, 'Redundancy Bias' had no candidates to examine, and 'Curse of Dimensionality' yielded no refutations among its six candidates. Within this restricted search scope, most contributions therefore lack substantial overlapping prior work. The saturation analysis appears most vulnerable to existing literature, though even there only one of six candidates shows potential overlap; the redundancy bias and dimensionality curse contributions appear more novel within the examined sample.
Based on the top-twelve semantic matches and taxonomy structure, the work addresses an underexplored critical perspective within SMI research. The analysis does not claim exhaustive coverage of all possible prior work, and the limited search scope means additional relevant papers may exist outside the examined candidates. The taxonomy's sparse 'Limitations' leaf and the low refutation rates suggest the critical angle is relatively unexplored, though definitive novelty claims require broader literature review.
Taxonomy
Research Landscape Overview
Claimed Contributions
The authors demonstrate both theoretically and empirically that sliced mutual information reaches a plateau early and becomes insensitive to further increases in statistical dependence, even in simple low-dimensional settings. This saturation behavior undermines SMI's ability to accurately track changes in dependence structure.
The authors provide a counterexample showing that SMI does not favor linearly extractable information as previously believed. Instead, they reveal that SMI prioritizes information redundancy over information content, which can lead to catastrophic failures in applications like representation learning.
The authors reinterpret the curse of dimensionality for SMI, showing that while sample complexity may be favorable, SMI uniformly decays to zero as dimensionality increases. This decay occurs due to diminishing redundancy and makes SMI ineffective for statistical analysis in high-dimensional settings.
Core Task Comparisons
Comparisons with papers in the same taxonomy category
Contribution Analysis
Detailed comparisons for each claimed contribution
Saturation and Sensitivity Analysis of SMI
The authors demonstrate both theoretically and empirically that sliced mutual information reaches a plateau early and becomes insensitive to further increases in statistical dependence, even in simple low-dimensional settings. This saturation behavior undermines SMI's ability to accurately track changes in dependence structure.
[5] Sliced Information Plane for Analysis of Deep Neural Networks
[3] Max-sliced mutual information
[4] On slicing optimality for mutual information
[6] Using sliced mutual information to study memorization and generalization in deep neural networks
[13] k-Sliced Mutual Information: A Quantitative Study of Scalability with Dimension
[21] Pointwise Information Measures as Confidence Estimators in Deep Neural Networks: A Comparative Study
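The saturation claim can be illustrated with a small jointly Gaussian sketch (an assumption of this illustration, not the paper's own experiment). For X ~ N(0, I_d) and Y = rho*X + sqrt(1-rho^2)*Z, the projected pair (theta^T X, phi^T Y) is bivariate Gaussian with correlation rho * theta^T phi, so the sliced MI can be Monte Carlo averaged in closed form. As rho approaches 1 the full mutual information grows without bound, while the SMI estimate barely moves:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_smi(rho, d, n_slices=100_000):
    """Monte Carlo SMI estimate for X ~ N(0, I_d), Y = rho*X + sqrt(1-rho^2)*Z.

    For unit vectors theta, phi, corr(theta^T X, phi^T Y) = rho * theta^T phi,
    so I(theta^T X; phi^T Y) = -0.5 * log(1 - (rho * theta^T phi)^2).
    """
    theta = rng.standard_normal((n_slices, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    phi = rng.standard_normal((n_slices, d))
    phi /= np.linalg.norm(phi, axis=1, keepdims=True)
    r = rho * np.sum(theta * phi, axis=1)
    return float(np.mean(-0.5 * np.log1p(-r * r)))

d = 10
for rho in (0.5, 0.9, 0.99, 0.999):
    full_mi = -0.5 * d * np.log1p(-rho * rho)  # I(X;Y) grows without bound
    print(f"rho={rho}: I(X;Y)={full_mi:6.2f}  SMI~{gaussian_smi(rho, d):.3f}")
```

Random slices are nearly orthogonal in dimension d = 10, so the projected correlation stays small even when the per-coordinate dependence rho is nearly 1, which is the plateau behavior the paper formalizes.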
Redundancy Bias of SMI
The authors provide a counterexample showing that SMI does not favor linearly extractable information as previously believed. Instead, they reveal that SMI prioritizes information redundancy over information content, which can lead to catastrophic failures in applications like representation learning.
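The redundancy bias can be illustrated with a hedged jointly Gaussian sketch (my own construction for illustration, not the paper's counterexample). Pair A encodes a single scalar redundantly across all d coordinates of Y; pair B encodes d independent coordinates with no redundancy. Pair B carries far more mutual information, yet the SMI estimate strongly favors the redundant pair A:

```python
import numpy as np

rng = np.random.default_rng(1)
d, sigma, n_slices = 10, 0.1, 100_000

def unit_rows(n, d):
    v = rng.standard_normal((n, d))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Pair A (redundant): scalar X, Y_red = X * 1_d + sigma * noise.
# Every coordinate of Y_red is a noisy copy of the SAME scalar, and since
# projections of a scalar are just +/-X, SMI averages over phi only.
# corr(X, phi^T Y_red)^2 = s^2 / (s^2 + sigma^2) with s = phi^T 1_d,
# so I(X; phi^T Y_red) = 0.5 * log(1 + s^2 / sigma^2).
phi = unit_rows(n_slices, d)
s = phi.sum(axis=1)
smi_red = float(np.mean(0.5 * np.log1p(s ** 2 / sigma ** 2)))
mi_red = 0.5 * np.log1p(d / sigma ** 2)  # true I(X; Y_red)

# Pair B (diverse): X ~ N(0, I_d), Y_div = X + sigma * noise.
# corr(theta^T X, phi^T Y_div) = theta^T phi / sqrt(1 + sigma^2).
theta, phi2 = unit_rows(n_slices, d), unit_rows(n_slices, d)
t = np.sum(theta * phi2, axis=1)
smi_div = float(np.mean(-0.5 * np.log1p(-t ** 2 / (1 + sigma ** 2))))
mi_div = 0.5 * d * np.log1p(1 / sigma ** 2)  # true I(X; Y_div)

print(f"redundant pair: I={mi_red:5.2f}  SMI~{smi_red:.3f}")
print(f"diverse   pair: I={mi_div:5.2f}  SMI~{smi_div:.3f}")
```

Every random slice of Y_red "sees" the one redundantly copied scalar, while a random slice of Y_div mixes its many informative coordinates into a nearly uncorrelated projection, so SMI ranks the pairs opposite to their information content.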
Curse of Dimensionality for SMI
The authors reinterpret the curse of dimensionality for SMI, showing that while sample complexity may be favorable, SMI uniformly decays to zero as dimensionality increases. This decay occurs due to diminishing redundancy and makes SMI ineffective for statistical analysis in high-dimensional settings.
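The decay-to-zero claim can be sketched with the same jointly Gaussian family (an illustrative assumption, not the paper's derivation). Holding the per-coordinate correlation rho fixed while growing the dimension d, the Monte Carlo SMI estimate shrinks roughly like 1/d, since independent random slices are near-orthogonal in high dimension:

```python
import numpy as np

rng = np.random.default_rng(2)
rho, n_slices = 0.9, 100_000

def gaussian_smi(rho, d):
    # Closed-form projected MI for X ~ N(0, I_d), Y = rho*X + sqrt(1-rho^2)*Z:
    # corr(theta^T X, phi^T Y) = rho * theta^T phi for unit theta, phi.
    theta = rng.standard_normal((n_slices, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    phi = rng.standard_normal((n_slices, d))
    phi /= np.linalg.norm(phi, axis=1, keepdims=True)
    r = rho * np.sum(theta * phi, axis=1)
    return float(np.mean(-0.5 * np.log1p(-r * r)))

# Dependence per coordinate is held fixed (rho = 0.9), yet SMI decays
# toward zero as d grows, because E[(theta^T phi)^2] = 1/d.
smi_by_dim = {d: gaussian_smi(rho, d) for d in (5, 50, 500)}
for d, v in smi_by_dim.items():
    print(f"d={d:4d}: SMI~{v:.4f}")
```

Here the full mutual information scales linearly with d, so the vanishing SMI values illustrate the reinterpreted curse of dimensionality the contribution describes: favorable sample complexity, but a population quantity that itself decays to zero.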