Multi-VQG: Generating Engaging Questions for Multiple Images

14 November 2022
Min-Hsuan Yeh, Vincent Chen, Ting-Hao Huang, Lun-Wei Ku

Papers citing "Multi-VQG: Generating Engaging Questions for Multiple Images"

5 papers
JourneyBench: A Challenging One-Stop Vision-Language Understanding Benchmark of Generated Images
Zhecan Wang, Junzhang Liu, Chia-Wei Tang, Hani Alomari, Anushka Sivakumar, ..., Haoxuan You, A. Ishmam, Kai-Wei Chang, Shih-Fu Chang, Chris Thomas
19 Sep 2024
Location-Aware Visual Question Generation with Lightweight Models
Nicholas Collin Suwono, Justin Chih-Yao Chen, Tun-Min Hung, T. Huang, I-Bin Liao, Yung-Hui Li, Lun-Wei Ku, Shao-Hua Sun
23 Oct 2023
Visually Grounded Reasoning across Languages and Cultures
Fangyu Liu, Emanuele Bugliarello, E. Ponti, Siva Reddy, Nigel Collier, Desmond Elliott
28 Sep 2021
How Much Can CLIP Benefit Vision-and-Language Tasks?
Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Z. Yao, Kurt Keutzer
13 Jul 2021
Unifying Vision-and-Language Tasks via Text Generation
Jaemin Cho, Jie Lei, Hao Tan, Mohit Bansal
04 Feb 2021