Abstract:

Work in psychology has highlighted that the geometric model of similarity standard in deep learning is not psychologically plausible, because metric properties such as symmetry do not align with human perception of similarity. In contrast, Tversky (1977) proposed an axiomatic, psychologically plausible theory of similarity based on a representation of objects as sets of features, and of their similarity as a function of their common and distinctive features. This model of similarity has not been used in deep learning before, in part because of the challenge of incorporating discrete set operations. In this paper, we develop a differentiable parameterization of Tversky's similarity that is learnable through gradient descent, and derive basic neural network building blocks such as the Tversky projection layer, which, unlike the linear projection layer, can model non-linear functions such as XOR. Through experiments with image recognition and language modeling neural networks, we show that the Tversky projection layer is a beneficial replacement for the linear projection layer. For instance, on the NABirds image classification task, a frozen ResNet-50 adapted with a Tversky projection layer achieves a 24.7% relative accuracy improvement over the linear layer adapter baseline. With Tversky projection layers, GPT-2's perplexity on PTB decreases by 7.8%, and its parameter count by 34.8%. Finally, we propose a unified interpretation of both types of projection layers as computing similarities of input stimuli to learned prototypes, for which we also propose a novel visualization technique highlighting the interpretability of Tversky projection layers. Our work offers a new paradigm for thinking about the similarity model implicit in modern deep learning, and for designing neural networks that are interpretable under an established theory of psychological similarity.

Disclaimer
This report is AI-GENERATED using Large Language Models and WisPaper (a scholarly search engine). It analyzes an academic paper's tasks and contributions against retrieved prior work. While this system identifies POTENTIAL overlaps and novel directions, ITS COVERAGE IS NOT EXHAUSTIVE AND JUDGMENTS ARE APPROXIMATE. These results are intended to assist human reviewers and SHOULD NOT be relied upon as a definitive verdict on novelty.
NOTE that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper develops a differentiable parameterization of Tversky's feature-based similarity theory and introduces the Tversky projection layer as a neural network building block. According to the taxonomy, this work occupies the 'Tversky Similarity and Feature-Based Models' leaf, which contains only this paper, indicating a sparse research direction within the broader field of psychologically-grounded similarity learning. The taxonomy shows 50 papers across 14 leaf nodes, with sibling leaves like 'Human Similarity Judgment Alignment' (5 papers) and 'Psychological Similarity Space Construction' (4 papers) representing more populated adjacent areas.

The taxonomy structure reveals that most related work clusters in neighboring leaves focused on aligning neural representations with human judgments or constructing psychological similarity spaces through multidimensional scaling. The 'Cognitive Representation and Conceptual Modeling' leaf (7 papers) addresses broader cognitive structures, while 'Case-Based Reasoning with Neural Similarity' (2 papers) applies similarity learning to retrieval tasks. The original paper's leaf explicitly excludes geometric similarity models and approaches not grounded in psychological feature theories, positioning it as a foundational architectural contribution rather than an application-oriented or judgment-alignment method.

Among 25 candidates examined, the first contribution (differentiable Tversky parameterization) shows one refutable candidate from 5 examined, suggesting some prior exploration of making Tversky similarity differentiable. The second contribution (Tversky projection layer) examined 10 candidates with none clearly refuting it, indicating relative novelty in architectural integration. The third contribution (unified interpretation framework) also examined 10 candidates without clear refutation. The limited search scope means these statistics reflect top-K semantic matches rather than exhaustive coverage, and the single refutable finding for the core parameterization warrants attention to how the implementation differs from prior attempts.

Based on the limited 25-candidate search, the architectural contributions appear more novel than the core differentiability mechanism. The taxonomy's sparse population of the Tversky-specific leaf suggests this direction has received minimal prior attention, though the single refutable candidate indicates the fundamental idea may have precedent. The analysis captures semantic neighbors but cannot rule out relevant work in adjacent cognitive science or optimization literature outside the search scope.

Taxonomy

Core-task Taxonomy Papers: 50
Claimed Contributions: 3
Contribution Candidate Papers Compared: 25
Refutable Papers: 1

Research Landscape Overview

Core task: Incorporating psychologically plausible similarity measures into neural network architectures. This field bridges cognitive science and machine learning by embedding human-like notions of similarity, such as feature-based comparisons, asymmetric judgments, and context-dependent weighting, directly into neural models.

The taxonomy reveals four main branches. Psychologically-Grounded Similarity Learning focuses on integrating classical cognitive theories (e.g., Tversky's feature-based models) into neural architectures, often drawing on human similarity judgments and psychological embeddings to guide representation learning. Cognitive Similarity in Case-Based and Retrieval Systems adapts these principles for memory-augmented and retrieval-oriented tasks, where similarity drives case selection and analogical reasoning. Domain-Specific Applications demonstrates how psychologically informed similarity can enhance performance in areas ranging from medical diagnosis to recommendation systems, while Theoretical Foundations and Learning Paradigms explores the underlying computational principles, including biologically plausible learning rules and developmental cognitive architectures.

Recent work highlights a tension between faithfully modeling human similarity and achieving robust generalization in neural systems. Studies like Human Similarity Judgments[7] and Psychological Similarity Mapping[48] emphasize capturing fine-grained human perceptual structure, while others such as Generalizing Similarity Spaces[40] and ImageNet Psychological Embeddings[43] investigate how well these embeddings transfer across tasks. Tversky Neural Networks[0] sits squarely within the Psychologically-Grounded branch, explicitly incorporating Tversky's asymmetric, feature-based similarity framework into neural computation. This contrasts with approaches like Neural CBR Enhancement[2] and Hybrid CBR Similarity[5], which prioritize retrieval efficiency and case adaptation over strict adherence to cognitive theory. The original work's emphasis on feature-level psychological plausibility positions it as a foundational contribution, offering a principled alternative to purely data-driven similarity metrics while raising questions about scalability and domain adaptation.

Claimed Contributions

Differentiable parameterization of Tversky similarity for gradient-based learning

The authors propose a novel differentiable formulation of Tversky's feature-based similarity function by representing features as vectors and objects dually as both vectors and sets. This enables the incorporation of Tversky's psychologically plausible similarity model into neural networks trained with gradient descent, addressing the challenge of differentiating through discrete set operations.

5 retrieved papers
Can Refute
Tversky projection layer as a neural network building block

The authors introduce the Tversky projection layer, a neural network module analogous to the linear projection layer but based on Tversky similarity. This layer can model non-linear functions like XOR that linear layers cannot, and serves as a replacement for standard projection layers in deep learning architectures.

10 retrieved papers
Unified interpretation framework and visualization technique for projection layers

The authors present a unified framework interpreting both linear and Tversky projection layers as computing similarities between inputs and learned prototypes. They introduce a novel visualization method that specifies projection parameters in the data domain, enabling human-interpretable visualization of learned prototypes and features.

10 retrieved papers

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Within the taxonomy built over the current TopK core-task papers, the original paper is assigned to a leaf with no direct siblings and no cousin branches under the same grandparent topic. In this retrieved landscape, it appears structurally isolated, which is one partial signal of novelty, but still constrained by search coverage and taxonomy granularity.

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution

Differentiable parameterization of Tversky similarity for gradient-based learning

The authors propose a novel differentiable formulation of Tversky's feature-based similarity function by representing features as vectors and objects dually as both vectors and sets. This enables the incorporation of Tversky's psychologically plausible similarity model into neural networks trained with gradient descent, addressing the challenge of differentiating through discrete set operations.
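One way to picture this claim is a soft relaxation of Tversky's contrast model S(a, b) = θ·f(A ∩ B) − α·f(A − B) − β·f(B − A). The sketch below is a minimal NumPy illustration under our own assumptions, not the paper's actual parameterization: it assumes feature membership is the positive part of a feature-vector dot product, common features are an elementwise minimum, and distinctive features a rectified difference, so every operation is piecewise differentiable.

```python
import numpy as np

def tversky_similarity(a, b, features, theta=1.0, alpha=0.5, beta=0.5):
    """Soft Tversky-style similarity: theta*f(A&B) - alpha*f(A-B) - beta*f(B-A).

    All operations are piecewise differentiable, so the feature bank and the
    (theta, alpha, beta) weights could in principle be learned by gradient descent.
    """
    m_a = np.maximum(0.0, features @ a)  # soft membership of each feature in A
    m_b = np.maximum(0.0, features @ b)  # soft membership of each feature in B
    common = np.minimum(m_a, m_b).sum()          # shared features, f(A & B)
    a_not_b = np.maximum(0.0, m_a - m_b).sum()   # distinctive to A, f(A - B)
    b_not_a = np.maximum(0.0, m_b - m_a).sum()   # distinctive to B, f(B - A)
    return theta * common - alpha * a_not_b - beta * b_not_a

# With alpha != beta the measure is asymmetric, matching Tversky's account of
# directional similarity judgments; a trivial identity feature bank suffices to see it.
F = np.eye(2)
a, b = np.array([2.0, 0.0]), np.array([0.0, 1.0])
s_ab = tversky_similarity(a, b, F, alpha=0.8, beta=0.1)
s_ba = tversky_similarity(b, a, F, alpha=0.8, beta=0.1)
```

Here `s_ab != s_ba`, and an object's similarity to itself is maximal since its distinctive-feature terms vanish; both properties follow directly from the min/rectified-difference construction.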

Contribution

Tversky projection layer as a neural network building block

The authors introduce the Tversky projection layer, a neural network module analogous to the linear projection layer but based on Tversky similarity. This layer can model non-linear functions like XOR that linear layers cannot, and serves as a replacement for standard projection layers in deep learning architectures.
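The XOR claim can be illustrated concretely. The sketch below is our own hand-set construction, not the paper's learned layer: it assumes a projection whose output units score the Tversky similarity of the input's soft feature set against one learned prototype feature set per class, with the feature bank and prototypes fixed by hand rather than trained.

```python
import numpy as np

# Hand-set feature bank (illustrative only): feature 0 fires when x1 > x2,
# feature 1 fires when x2 > x1 -- together they detect "exactly one input on".
F = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

# One prototype feature set per class (hand-set here; learned in the paper's layer).
P = np.array([[0.0, 0.0],   # class 0: neither asymmetry feature active
              [1.0, 1.0]])  # class 1: either asymmetry feature active

def tversky_sim(m_x, m_p, theta=1.0, alpha=0.5, beta=0.5):
    common = np.minimum(m_x, m_p).sum()
    x_not_p = np.maximum(0.0, m_x - m_p).sum()
    p_not_x = np.maximum(0.0, m_p - m_x).sum()
    return theta * common - alpha * x_not_p - beta * p_not_x

def tversky_projection(x):
    """Map input x to one similarity score per prototype (per output unit)."""
    m_x = np.maximum(0.0, F @ np.asarray(x, dtype=float))
    return np.array([tversky_sim(m_x, m_p) for m_p in P])

def predict(x):
    return int(np.argmax(tversky_projection(x)))
```

Because each output depends on min and rectified-difference feature comparisons rather than a single hyperplane, this two-unit layer classifies all four XOR points correctly, which no single linear projection can do.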

Contribution

Unified interpretation framework and visualization technique for projection layers

The authors present a unified framework interpreting both linear and Tversky projection layers as computing similarities between inputs and learned prototypes. They introduce a novel visualization method that specifies projection parameters in the data domain, enabling human-interpretable visualization of learned prototypes and features.
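The dot-product half of this unified reading is easy to check directly: each row of a linear projection's weight matrix can be read as a learned prototype, and the layer's outputs are exactly the dot-product similarities of the input to those prototypes. A minimal sketch (variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # linear projection: 4-dim inputs, 3 output units
x = rng.normal(size=4)

logits = W @ x                                     # the usual forward pass
prototype_sims = np.array([w_k @ x for w_k in W])  # row-by-row prototype reading

# The two views coincide: output unit k scores the similarity of x to prototype W[k].
```

Under this reading, swapping the dot product for Tversky similarity changes the similarity model while keeping the prototype interpretation, which is what makes visualizing prototypes in the data domain meaningful for both layer types.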