ResearchTrend.AI
TRACE: A Self-Improving Framework for Robot Behavior Forecasting with Vision-Language Models

2 March 2025
Gokul Puthumanaillam
Paulo Padrao
Jose Fuentes
Pranay Thangeda
William E. Schafer
Jae Hyuk Song
Karan Jagdale
Leonardo Bobadilla
Melkior Ornik
Abstract

Predicting the near-term behavior of a reactive agent is crucial in many robotic scenarios, yet remains challenging when observations of that agent are sparse or intermittent. Vision-Language Models (VLMs) offer a promising avenue by integrating textual domain knowledge with visual cues, but their one-shot predictions often miss important edge cases and unusual maneuvers. Our key insight is that iterative, counterfactual exploration--where a dedicated module probes each proposed behavior hypothesis, explicitly represented as a plausible trajectory, for overlooked possibilities--can significantly enhance VLM-based behavioral forecasting. We present TRACE (Tree-of-thought Reasoning And Counterfactual Exploration), an inference framework that couples tree-of-thought generation with domain-aware feedback to refine behavior hypotheses over multiple rounds. Concretely, a VLM first proposes candidate trajectories for the agent; a counterfactual critic then suggests edge-case variations consistent with partial observations, prompting the VLM to expand or adjust its hypotheses in the next iteration. This creates a self-improving cycle in which the VLM progressively internalizes edge cases from previous rounds, systematically uncovering not only typical behaviors but also rare or borderline maneuvers, ultimately yielding more robust trajectory predictions from minimal sensor data. We validate TRACE on both ground-vehicle simulations and real-world marine autonomous surface vehicles. Experimental results show that our method consistently outperforms standard VLM-driven and purely model-based baselines, capturing a broader range of feasible agent behaviors despite sparse sensing. Evaluation videos and code are available at this http URL.
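The propose-critique-refine cycle described above can be sketched in a few lines. This is a minimal illustrative skeleton, not the authors' implementation: the `propose` and `critique` functions below are hypothetical stand-ins for the VLM and the counterfactual critic, and trajectories are simplified to lists of (x, y) waypoints.

```python
# Sketch of a TRACE-style self-improving forecasting loop.
# All function names and the toy trajectory logic are assumptions for
# illustration; in the paper, propose() is a VLM and critique() is a
# domain-aware counterfactual critic.

def propose(observations, feedback):
    """Stand-in for the VLM: emit candidate trajectories.

    Each trajectory is a list of (x, y) waypoints. Feedback from the
    critic seeds additional hypotheses in later rounds.
    """
    base = [[(0, 0), (1, 0), (2, 0)]]   # nominal straight-line motion
    return base + list(feedback)        # fold in suggested edge cases

def critique(hypotheses, observations):
    """Stand-in for the counterfactual critic: propose edge-case
    variations still consistent with the (sparse) observations."""
    edge_cases = []
    for traj in hypotheses:
        # Example counterfactual: a late lateral deviation the
        # one-shot proposal might have missed.
        x, y = traj[-1]
        edge_cases.append(traj[:-1] + [(x, y + 1)])
    return edge_cases

def trace_forecast(observations, rounds=3):
    """Iterate propose -> critique so the hypothesis set grows to
    cover rare or borderline maneuvers over multiple rounds."""
    feedback, hypotheses = [], []
    for _ in range(rounds):
        hypotheses = propose(observations, feedback)
        feedback = critique(hypotheses, observations)
    return hypotheses
```

With three rounds, the hypothesis set grows from the single nominal trajectory to include successively larger lateral deviations, mirroring how the critic's edge cases are internalized by the proposer on each iteration.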

@article{puthumanaillam2025_2503.00761,
  title={TRACE: A Self-Improving Framework for Robot Behavior Forecasting with Vision-Language Models},
  author={Gokul Puthumanaillam and Paulo Padrao and Jose Fuentes and Pranay Thangeda and William E. Schafer and Jae Hyuk Song and Karan Jagdale and Leonardo Bobadilla and Melkior Ornik},
  journal={arXiv preprint arXiv:2503.00761},
  year={2025}
}