ResearchTrend.AI
ViPlan: A Benchmark for Visual Planning with Symbolic Predicates and Vision-Language Models


19 May 2025
Matteo Merler, Nicola Dainese, Minttu Alakuijala, Giovanni Bonetta, Pietro Ferrazzi, Yu Tian, Bernardo Magnini, Pekka Marttinen
LM&Ro, VLM

Papers citing "ViPlan: A Benchmark for Visual Planning with Symbolic Predicates and Vision-Language Models"

28 / 28 papers shown
InternVL3: Exploring Advanced Training and Test-Time Recipes for Open-Source Multimodal Models
Jinguo Zhu, Weiyun Wang, Zhe Chen, Ziwei Liu, Shenglong Ye, ..., Dahua Lin, Yu Qiao, Jifeng Dai, Wenhai Wang, Wei Wang
MLLM, VLM · 14 Apr 2025

DASH: Detection and Assessment of Systematic Hallucinations of VLMs
Maximilian Augustin, Yannic Neuhaus, Matthias Hein
VLM · 30 Mar 2025

Gemma 3 Technical Report
Gemma Team, Aishwarya B Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, ..., Harshal Tushar Lehri, Hussein Hazimeh, Ian Ballantyne, Idan Szpektor, Ivan Nardini
VLM · 25 Mar 2025

Phi-4-Mini Technical Report: Compact yet Powerful Multimodal Language Models via Mixture-of-LoRAs
Abdelrahman Abouelenin, Atabak Ashfaq, Adam Atkinson, Hany Awadalla, Nguyen Bach, ..., Ishmam Zabir, Yunan Zhang, Li Zhang, Yanzhe Zhang, Xiren Zhou
MoE, SyDa · 03 Mar 2025

Qwen2.5-VL Technical Report
S. Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, ..., Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, Junyang Lin
VLM · 20 Feb 2025

Planning with Vision-Language Models and a Use Case in Robot-Assisted Teaching
Xuzhe Dang, Lada Kudláčková, Stefan Edelkamp
LM&Ro, 3DV · 29 Jan 2025

From Pixels to Predicates: Learning Symbolic World Models via Pretrained Vision-Language Models
Ashay Athalye, Nishanth Kumar, Tom Silver, Yichao Liang, Tomás Lozano-Pérez, Leslie Pack Kaelbling
LM&Ro · 31 Dec 2024

DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding
Z. F. Wu, Xiaokang Chen, Zizheng Pan, Xianglong Liu, Wen Liu, ..., Xingkai Yu, Haowei Zhang, Liang Zhao, Yijiao Wang, Chong Ruan
MLLM, VLM, MoE · 13 Dec 2024

VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning
Yichao Liang, Nishanth Kumar, Hao Tang, Adrian Weller, J. Tenenbaum, Tom Silver, Joao Henriques, Kevin Ellis
30 Oct 2024

ReplanVLM: Replanning Robotic Tasks with Visual Language Models
Aoran Mei, Guo-Niu Zhu, Huaxiang Zhang, Zhongxue Gan
31 Jul 2024

Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs
Shengbang Tong, Zhuang Liu, Yuexiang Zhai, Yi-An Ma, Yann LeCun, Saining Xie
VLM, MLLM · 11 Jan 2024

Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models
Yangyi Chen, Karan Sikka, Michael Cogswell, Heng Ji, Ajay Divakaran
LRM · 08 Sep 2023

Reasoning with Language Model is Planning with World Model
Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, D. Wang, Zhiting Hu
ReLM, LRM, LLMAG · 24 May 2023

Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Planning
L. Guan, Karthik Valmeekam, S. Sreedharan, Subbarao Kambhampati
LLMAG · 24 May 2023

Visual Instruction Tuning
Haotian Liu, Chunyuan Li, Qingyang Wu, Yong Jae Lee
SyDa, VLM, MLLM · 17 Apr 2023

BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
Junnan Li, Dongxu Li, Silvio Savarese, Steven C. H. Hoi
VLM, MLLM · 30 Jan 2023

LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models
Chan Hee Song, Jiaman Wu, Clay Washington, Brian M Sadler, Wei-Lun Chao, Yu-Chuan Su
LLMAG, LM&Ro · 08 Dec 2022

Do Vision-and-Language Transformers Learn Grounded Predicate-Noun Dependencies?
Mitja Nikolaus, Emmanuelle Salin, Stéphane Ayache, Abdellah Fourtassi, Benoit Favre
21 Oct 2022

PlanBench: An Extensible Benchmark for Evaluating Large Language Models on Planning and Reasoning about Change
Karthik Valmeekam, Matthew Marquez, Alberto Olmo, S. Sreedharan, Subbarao Kambhampati
ReLM, LRM · 21 Jun 2022

Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality
Tristan Thrush, Ryan Jiang, Max Bartolo, Amanpreet Singh, Adina Williams, Douwe Kiela, Candace Ross
CoGe · 07 Apr 2022

Do As I Can, Not As I Say: Grounding Language in Robotic Affordances
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, ..., Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, Andy Zeng
LM&Ro · 04 Apr 2022

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
LM&Ro, LRM, AI4CE, ReLM · 28 Jan 2022

Grounding Predicates through Actions
Toki Migimatsu, Jeannette Bohg
29 Sep 2021

BEHAVIOR: Benchmark for Everyday Household Activities in Virtual, Interactive, and Ecological Environments
S. Srivastava, Chengshu Li, Michael Lingelbach, Roberto Martín-Martín, Fei Xia, ..., Chenxi Liu, Silvio Savarese, H. Gweon, Jiajun Wu, Li Fei-Fei
LM&Ro · 06 Aug 2021

iGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday Household Tasks
Chengshu Li, Fei Xia, Roberto Martín-Martín, Michael Lingelbach, S. Srivastava, ..., Karen Liu, H. Gweon, Jiajun Wu, Li Fei-Fei, Silvio Savarese
LM&Ro · 06 Aug 2021

Learning Transferable Visual Models From Natural Language Supervision
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya A. Ramesh, Gabriel Goh, ..., Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever
CLIP, VLM · 26 Feb 2021

VQA: Visual Question Answering
Aishwarya Agrawal, Jiasen Lu, Stanislaw Antol, Margaret Mitchell, C. L. Zitnick, Dhruv Batra, Devi Parikh
CoGe · 03 May 2015

The Fast Downward Planning System
M. Helmert
27 Sep 2011