Evaluation of LLMs-based Hidden States as Author Representations for Psychological Human-Centered NLP Tasks

28 February 2025
Nikita Soni
Pranav Chitale
Khushboo Singh
Niranjan Balasubramanian
H. Andrew Schwartz
Abstract

Like most of NLP, models for human-centered NLP tasks -- tasks that attempt to assess author-level information -- predominantly use representations derived from the hidden states of Transformer-based LLMs. However, which component of the LM is used for the representation varies widely. Moreover, there is a need for Human Language Models (HuLMs) that implicitly model the author and provide a user-level hidden state. Here, we systematically evaluate different ways of representing documents and users, across different LM and HuLM architectures, to predict task outcomes both as dynamically changing states and as averaged, trait-like user-level attributes of valence, arousal, empathy, and distress. We find that representing a document as the average of its token hidden states generally performs best. Further, while the user-level hidden state itself is rarely the best representation, we find that including it in the model strengthens the token and document embeddings used to derive document- and user-level representations, yielding the best performance.
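As a concrete illustration of the best-performing document representation the abstract reports, the sketch below mean-pools token hidden states into a document embedding and averages a user's document embeddings into a trait-like user-level representation. This is a minimal sketch only: the base LM (`roberta-base`), pooling over the final layer, and the helper names are assumptions for illustration, not the paper's exact models or evaluation setup.

```python
# Sketch: document embedding = mean of token hidden states (padding masked),
# user embedding = mean of that user's document embeddings.
# Model choice and final-layer pooling are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # assumed base LM
model = AutoModel.from_pretrained("roberta-base")
model.eval()

def document_embedding(text: str) -> torch.Tensor:
    """Average the final-layer token hidden states, ignoring padding tokens."""
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state   # (1, seq_len, dim)
    mask = enc["attention_mask"].unsqueeze(-1)    # (1, seq_len, 1)
    summed = (hidden * mask).sum(dim=1)           # zero out padding positions
    return (summed / mask.sum(dim=1)).squeeze(0)  # (dim,)

def user_embedding(texts: list[str]) -> torch.Tensor:
    """Trait-like user representation: mean of the user's document embeddings."""
    return torch.stack([document_embedding(t) for t in texts]).mean(dim=0)

docs = ["I felt calm and hopeful today.", "The news left me uneasy."]
user_vec = user_embedding(docs)
print(user_vec.shape)  # torch.Size([768]) for roberta-base
```

Downstream, such embeddings would feed a regressor for valence, arousal, empathy, or distress; the HuLM variants the paper evaluates additionally expose a user-level hidden state, which this sketch does not model.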

@article{soni2025_2503.00124,
  title={Evaluation of LLMs-based Hidden States as Author Representations for Psychological Human-Centered NLP Tasks},
  author={Nikita Soni and Pranav Chitale and Khushboo Singh and Niranjan Balasubramanian and H. Andrew Schwartz},
  journal={arXiv preprint arXiv:2503.00124},
  year={2025}
}