Sequential Monte Carlo for Policy Optimization in Continuous POMDPs

22 May 2025
Hany Abdulsamad
Sahel Iqbal
Simo Särkkä
Abstract

Optimal decision-making under partial observability requires agents to balance reducing uncertainty (exploration) against pursuing immediate objectives (exploitation). In this paper, we introduce a novel policy optimization framework for continuous partially observable Markov decision processes (POMDPs) that explicitly addresses this challenge. Our method casts policy learning as probabilistic inference in a non-Markovian Feynman--Kac model that inherently captures the value of information gathering by anticipating future observations, without requiring extrinsic exploration bonuses or handcrafted heuristics. To optimize policies under this model, we develop a nested sequential Monte Carlo~(SMC) algorithm that efficiently estimates a history-dependent policy gradient under samples from the optimal trajectory distribution induced by the POMDP. We demonstrate the effectiveness of our algorithm across standard continuous POMDP benchmarks, where existing methods struggle to act under uncertainty.
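For intuition about the sequential Monte Carlo machinery the method builds on, the following is a minimal, generic bootstrap particle filter for tracking a belief state in a continuous-state POMDP. It is an illustrative sketch only: the transition and observation models are hypothetical Gaussian placeholders, and it is not the paper's nested SMC policy-gradient estimator.

# Minimal bootstrap particle filter for belief tracking in a continuous POMDP.
# Generic SMC sketch for intuition only; the transition and observation models
# below are hypothetical placeholders, not the models used in the paper.
import numpy as np

def transition(particles, action, rng):
    # Hypothetical random-walk dynamics: s' = s + a + Gaussian noise.
    return particles + action + 0.1 * rng.standard_normal(particles.shape)

def observation_loglik(particles, obs):
    # Hypothetical Gaussian observation model: y = s + noise (std 0.2).
    return -0.5 * np.sum((particles - obs) ** 2, axis=-1) / 0.2 ** 2

def smc_belief_update(particles, weights, action, obs, rng):
    # One SMC step: propagate, reweight by observation likelihood, resample.
    particles = transition(particles, action, rng)
    log_w = np.log(weights + 1e-300) + observation_loglik(particles, obs)
    log_w -= log_w.max()
    weights = np.exp(log_w)
    weights /= weights.sum()
    # Multinomial resampling when the effective sample size collapses.
    ess = 1.0 / np.sum(weights ** 2)
    if ess < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# Usage: track a 1-D latent state from noisy observations over 10 steps.
rng = np.random.default_rng(0)
particles = rng.standard_normal((500, 1))
weights = np.full(500, 1.0 / 500)
for t in range(10):
    action = np.array([0.05])
    obs = np.array([0.05 * (t + 1)]) + 0.2 * rng.standard_normal(1)
    particles, weights = smc_belief_update(particles, weights, action, obs, rng)
print("posterior mean estimate:", np.sum(weights[:, None] * particles, axis=0))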

@article{abdulsamad2025_2505.16732,
  title={Sequential Monte Carlo for Policy Optimization in Continuous POMDPs},
  author={Hany Abdulsamad and Sahel Iqbal and Simo Särkkä},
  journal={arXiv preprint arXiv:2505.16732},
  year={2025}
}