PerFit: Exploring Personalization Shifts in Representation Space of LLMs

ICLR 2026 Conference Submission
Anonymous Authors
Keywords: Personalization, Large Language Models
Abstract:

Personalization has become a pivotal field of study in contemporary intelligent systems. While large language models (LLMs) excel at general knowledge tasks, they often struggle with personalization, i.e., adapting their outputs to individual user expectations. Existing approaches that steer LLM behavior toward users' implicit preferences and behavior patterns, relying primarily on tuning-free methods (e.g., RAG, PAG) or parameter fine-tuning methods (e.g., LoRA), struggle to balance effectiveness and efficiency. Moreover, the mechanisms underlying personalized preferences remain underexplored. To address these challenges, we first uncover key patterns of user-specific information embedded in the representation space. Specifically, we find that (1) personalized information lies within a low-rank subspace represented by vectors, and (2) these vectors exhibit both a collective shift shared across users and a personalized shift unique to each individual user. Building on these insights, we introduce PerFit, a novel two-stage solution that directly fine-tunes interventions in the hidden representation space to address both collective and user-specific shifts, thereby achieving precise steering of the LLM with minimal parameter overhead. Experimental results demonstrate that PerFit delivers strong performance across six datasets while cutting the number of parameters by an average of 92.3% compared to the state-of-the-art method.

Disclaimer
This report is AI-generated using large language models and WisPaper (a scholarly search engine). It analyzes an academic paper's tasks and contributions against retrieved prior work. While this system identifies potential overlaps and novel directions, its coverage is not exhaustive and its judgments are approximate. These results are intended to assist human reviewers and should not be relied upon as a definitive verdict on novelty.
Note that some papers exist in multiple, slightly different versions (e.g., with different titles or URLs). The system may retrieve several versions of the same underlying work. The current automated pipeline does not reliably align or distinguish these cases, so human reviewers will need to disambiguate them manually.
If you have any questions, please contact: mingzhang23@m.fudan.edu.cn

Overview

Overall Novelty Assessment

The paper introduces PerFit, a two-stage method that fine-tunes LLM representations to capture both collective and user-specific personalization shifts. It resides in the User-Specific Parameter-Efficient Personalization leaf, which contains five papers including the original work. This leaf sits within the broader Parameter-Efficient Fine-Tuning for Personalization branch, indicating a moderately populated research direction focused on lightweight adaptation techniques. The taxonomy shows this is an active area with established sibling works exploring similar parameter-efficient personalization strategies.

The taxonomy reveals neighboring research directions that contextualize PerFit's approach. The Representation Space Manipulation and Steering branch contains methods for direct latent intervention, including representation steering for truthfulness and embedding perturbation techniques, which share conceptual overlap with PerFit's representation-space focus. Meanwhile, the Personalization Frameworks and Architectures branch explores user embedding generation and synthetic data approaches that achieve personalization through different mechanisms. PerFit bridges these areas by combining representation-space manipulation with parameter-efficient fine-tuning, positioning itself at the intersection of two established methodological traditions.

Among thirty candidates examined, the contribution-level analysis reveals mixed novelty signals. The discovery of personalization patterns in representation space examined ten candidates with one refutable match, suggesting some prior exploration of similar phenomena. The PerFit method itself also examined ten candidates with one refutable match, indicating that representation-space fine-tuning for personalization has precedent in the limited search scope. The claim of being the first to fine-tune LLMs in representation space for personalization examined ten candidates with zero refutable matches, though this may reflect search limitations rather than definitive novelty.

Based on the limited thirty-candidate search, PerFit appears to occupy a recognized research direction with established sibling works in parameter-efficient personalization. The representation-space focus provides a distinctive angle within this area, though the analysis cannot confirm whether this constitutes a fundamental departure from prior art. The taxonomy structure suggests the work contributes to an active but not overcrowded subfield, with clear connections to both representation manipulation and personalization framework research.

Taxonomy

Core-task taxonomy papers: 50
Claimed contributions: 3
Contribution candidate papers compared: 30
Refutable papers: 2

Research Landscape Overview

Core task: personalization of large language models through representation space fine-tuning. The field encompasses diverse strategies for adapting LLMs to individual users or specialized domains by manipulating internal representations rather than relying solely on prompt engineering or full retraining. The taxonomy reveals several major branches: Representation Space Manipulation and Steering focuses on direct intervention in latent activations to guide model behavior, while Parameter-Efficient Fine-Tuning for Personalization explores lightweight adaptation methods such as LoRA-based techniques that preserve base model knowledge. Personalization Frameworks and Architectures address system-level designs for user-specific customization, and Domain-Specific Fine-Tuning targets vertical applications like healthcare or legal domains.

Additional branches cover LLM Embeddings for Recommendation Systems, which leverage learned representations for collaborative filtering, and Interpretability and Analysis of LLM Representations, which seeks to understand what these internal spaces encode. Cross-Modal and Multimodal LLM Applications extend personalization beyond text, while Meta-Learning and Fine-Tuning Emulation investigate how to simulate or accelerate adaptation processes.

Within Parameter-Efficient Fine-Tuning for Personalization, a particularly active line of work centers on user-specific adaptation modules that balance efficiency with expressiveness. PerFit[0] exemplifies this direction by introducing representation-space fine-tuning tailored to individual preferences, closely aligning with approaches like Personalized PEFT[8] and Personalization Shifts[5], which similarly explore how small parameter updates can capture user-level variation.
Nearby works such as Democratizing LLMs[11] and Lifelong Low-Rank Adaptation[49] emphasize scalability and continual learning, raising questions about how to maintain personalization quality as user bases grow or preferences evolve over time. Compared to these neighbors, PerFit[0] places particular emphasis on operating within the representation space itself, contrasting with methods that primarily adjust adapter weights or prompt embeddings, and thus occupies a niche where internal feature manipulation meets parameter efficiency.

Claimed Contributions

Discovery of personalization patterns in LLM representation space

The authors identify two key observations about how personalized information is encoded in LLMs' hidden representations: (1) personalized information lies within a low-rank subspace spanned by shift vectors, and (2) these vectors exhibit both a collective shift shared across users and personalized shifts unique to individual users.
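The low-rank claim lends itself to a simple sanity check: stack per-user shift vectors into a matrix and inspect its singular value spectrum. The sketch below uses purely synthetic data with hypothetical shapes (this is not the authors' experimental setup); it constructs shifts as a shared collective component plus user variation confined to an r-dimensional subspace, then confirms via SVD that nearly all energy concentrates in the top r+1 directions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_users, r = 64, 100, 4  # hidden size, number of users, true rank (all illustrative)

# Synthetic per-user shift vectors: one collective component shared by all
# users plus a user-specific component living in an r-dimensional subspace.
collective = rng.normal(size=d)
basis = rng.normal(size=(r, d))         # shared low-rank subspace basis
coeffs = rng.normal(size=(n_users, r))  # per-user coordinates in that subspace
shifts = collective + coeffs @ basis    # shape (n_users, d)

# SVD of the stacked shifts: energy should concentrate in the top r+1
# directions (the collective direction plus the r-dimensional user subspace).
s = np.linalg.svd(shifts, compute_uv=False)
energy = np.cumsum(s**2) / np.sum(s**2)
print(f"energy captured by top {r + 1} directions: {energy[r]:.4f}")
```

On real hidden states the spectrum would decay less sharply, but a steep drop after a few components is the signature consistent with observation (1).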

10 retrieved papers
Can Refute
PerFit: two-stage personalized fine-tuning method in representation space

PerFit is a novel personalized fine-tuning approach that operates directly in the hidden representation space rather than parameter space. It uses a two-stage training procedure to address both collective and user-specific shifts, achieving precise steering of LLM behavior with minimal parameter overhead.
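Combining the two claimed shifts, one can picture the intervention as an additive steering of hidden states: a collective vector shared by everyone plus a user-specific vector drawn from a shared low-rank basis, so each additional user costs only r coefficients. The following is a minimal sketch under these assumptions; all names and shapes are hypothetical and do not reproduce the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 32, 2  # hidden size and per-user rank (illustrative)

# Stage 1: one collective shift shared across all users.
delta_collective = rng.normal(size=d)

# Stage 2: a shared low-rank basis B; each user stores only an
# r-dimensional coefficient vector a_u instead of a full d-dim shift.
B = rng.normal(size=(d, r))

def steer(h, a_u):
    """Additively steer a hidden state with collective + user-specific shifts."""
    return h + delta_collective + B @ a_u

# A hypothetical user and a hidden state to steer.
a_u = rng.normal(size=r)
h = rng.normal(size=d)
h_steered = steer(h, a_u)
```

With this parameterization the per-user overhead is r numbers against d (or more) for weight-space adapters, which is one plausible reading of how the reported parameter savings could arise.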

10 retrieved papers
Can Refute
First work to fine-tune LLMs in representation space for personalization

The authors claim this is the first approach to apply representation-space fine-tuning specifically to personalized language model tasks, distinguishing it from prior work in activation engineering or parameter-efficient fine-tuning methods.

10 retrieved papers

Core Task Comparisons

Comparisons with papers in the same taxonomy category

Contribution Analysis

Detailed comparisons for each claimed contribution

Contribution

Discovery of personalization patterns in LLM representation space

The authors identify two key observations about how personalized information is encoded in LLMs' hidden representations: (1) personalized information lies within a low-rank subspace, and (2) these vectors exhibit both a collective shift shared across users and personalized shifts unique to individual users.

Contribution

PerFit: two-stage personalized fine-tuning method in representation space

PerFit is a novel personalized fine-tuning approach that operates directly in the hidden representation space rather than parameter space. It uses a two-stage training procedure to address both collective and user-specific shifts, achieving precise steering of LLM behavior with minimal parameter overhead.

Contribution

First work to fine-tune LLMs in representation space for personalization

The authors claim this is the first approach to apply representation-space fine-tuning specifically to personalized language model tasks, distinguishing it from prior work in activation engineering or parameter-efficient fine-tuning methods.