ResearchTrend.AI
Policy Learning with a Natural Language Action Space: A Causal Approach

24 February 2025
Bohan Zhang
Yixin Wang
Paramveer S. Dhillon
Abstract

This paper introduces a novel causal framework for multi-stage decision-making in natural language action spaces where outcomes are only observed after a sequence of actions. While recent approaches like Proximal Policy Optimization (PPO) can handle such delayed-reward settings in high-dimensional action spaces, they typically require multiple models (policy, value, and reward) and substantial training data. Our approach employs Q-learning to estimate Dynamic Treatment Regimes (DTR) through a single model, enabling data-efficient policy learning via gradient ascent on language embeddings. A key technical contribution of our approach is a decoding strategy that translates optimized embeddings back into coherent natural language. We evaluate our approach on mental health intervention, hate speech countering, and sentiment transfer tasks, demonstrating significant improvements over competitive baselines across multiple metrics. Notably, our method achieves superior transfer strength while maintaining content preservation and fluency, as validated through human evaluation. Our work provides a practical foundation for learning optimal policies in complex language tasks where training data is limited.
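To make the abstract's core mechanism concrete, here is a minimal toy sketch (not the paper's implementation) of the idea it describes: a learned Q-function scores a candidate action embedding, the embedding is improved by gradient ascent, and the optimized embedding is then decoded back to language, here by a simple nearest-neighbor lookup over a hypothetical vocabulary. The vocabulary, the quadratic stand-in Q-function, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5-token vocabulary with 4-dimensional embeddings.
vocab = ["calm", "listen", "support", "reflect", "validate"]
E = rng.normal(size=(5, 4))

# Stand-in for a learned Q-function: a quadratic whose maximizer z_star
# sits near the "support" embedding, so we know where ascent should land.
z_star = E[2] + 0.1

def q(z):
    return -np.sum((z - z_star) ** 2)

def grad_q(z):
    return -2.0 * (z - z_star)

# Gradient ascent directly on the action embedding.
z = rng.normal(size=4)
for _ in range(200):
    z = z + 0.05 * grad_q(z)

# Decoding step: map the optimized embedding to the closest vocabulary token.
nearest = vocab[int(np.argmin(np.linalg.norm(E - z, axis=1)))]
print(nearest)  # converges next to E[2], so this prints "support"
```

The paper's actual decoding strategy translates optimized embeddings into coherent multi-token text; the nearest-neighbor lookup above only illustrates the general embedding-to-language step.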

@article{zhang2025_2502.17538,
  title={Policy Learning with a Natural Language Action Space: A Causal Approach},
  author={Bohan Zhang and Yixin Wang and Paramveer S. Dhillon},
  journal={arXiv preprint arXiv:2502.17538},
  year={2025}
}