CIDEr: Consensus-based Image Description Evaluation (arXiv:1411.5726)

20 November 2014
Ramakrishna Vedantam
C. L. Zitnick
Devi Parikh

Papers citing "CIDEr: Consensus-based Image Description Evaluation"

50 / 2,137 papers shown
What is the Role of Recurrent Neural Networks (RNNs) in an Image Caption Generator?
Marc Tanti
Albert Gatt
K. Camilleri
24
56
0
07 Aug 2017
Referenceless Quality Estimation for Natural Language Generation
Ondrej Dusek
Jekaterina Novikova
Verena Rieser
18
29
0
05 Aug 2017
Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering
Peter Anderson
Xiaodong He
Chris Buehler
Damien Teney
Mark Johnson
Stephen Gould
Lei Zhang
AIMat
34
4,181
0
25 Jul 2017
OBJ2TEXT: Generating Visually Descriptive Language from Object Layouts
Xuwang Yin
Vicente Ordonez
VLM
40
55
0
22 Jul 2017
Why We Need New Evaluation Metrics for NLG
Jekaterina Novikova
Ondrej Dusek
Amanda Cercas Curry
Verena Rieser
51
453
0
21 Jul 2017
cvpaper.challenge in 2016: Futuristic Computer Vision through 1,600 Papers Survey
Hirokatsu Kataoka
Soma Shirakabe
Yun He
S. Ueta
Teppei Suzuki
...
Ryousuke Takasawa
Masataka Fuchida
Yudai Miyashita
Kazushige Okayasu
Yuta Matsuzaki
30
1
0
20 Jul 2017
Supervising Neural Attention Models for Video Captioning by Human Gaze Data
Youngjae Yu
Jongwook Choi
Yeonhwa Kim
Kyung Yoo
Sang-Hun Lee
Gunhee Kim
17
69
0
19 Jul 2017
MDNet: A Semantically and Visually Interpretable Medical Image Diagnosis Network
Zizhao Zhang
Yuanpu Xie
Fuyong Xing
M. McGough
L. Yang
MedIm
18
301
0
08 Jul 2017
Automated Audio Captioning with Recurrent Neural Networks
K. Drossos
Sharath Adavanne
Tuomas Virtanen
17
128
0
30 Jun 2017
Actor-Critic Sequence Training for Image Captioning
Li Zhang
Flood Sung
Feng Liu
Tao Xiang
S. Gong
Yongxin Yang
Timothy M. Hospedales
16
111
0
29 Jun 2017
The E2E Dataset: New Challenges For End-to-End Generation
Jekaterina Novikova
Ondrej Dusek
Verena Rieser
28
450
0
28 Jun 2017
Paying More Attention to Saliency: Image Captioning with Saliency and Context Attention
Marcella Cornia
Lorenzo Baraldi
Giuseppe Serra
Rita Cucchiara
27
79
0
26 Jun 2017
Using Artificial Tokens to Control Languages for Multilingual Image Caption Generation
Satoshi Tsutsui
David J. Crandall
16
19
0
20 Jun 2017
Best of Both Worlds: Transferring Knowledge from Discriminative Learning to a Generative Visual Dialog Model
Jiasen Lu
A. Kannan
Jianwei Yang
Devi Parikh
Dhruv Batra
BDL
29
136
0
05 Jun 2017
NMTPY: A Flexible Toolkit for Advanced Neural Machine Translation Systems
Ozan Caglayan
Mercedes García-Martínez
Adrien Bardet
Walid Aransa
Fethi Bougares
Loïc Barrault
27
65
0
01 Jun 2017
Adversarial Ranking for Language Generation
Kevin Qinghong Lin
Dianqi Li
Xiaodong He
Zhengyou Zhang
Ming-Ting Sun
GAN
26
331
0
31 May 2017
Multimodal Machine Learning: A Survey and Taxonomy
T. Baltrušaitis
Chaitanya Ahuja
Louis-Philippe Morency
15
2,865
0
26 May 2017
Bidirectional Beam Search: Forward-Backward Inference in Neural Sequence Models for Fill-in-the-Blank Image Captioning
Q. Sun
Stefan Lee
Dhruv Batra
BDL
33
43
0
24 May 2017
STAIR Captions: Constructing a Large-Scale Japanese Image Caption Dataset
Yuya Yoshikawa
Yutaro Shigeto
A. Takeuchi
3DV
19
118
0
02 May 2017
Multi-Task Video Captioning with Video and Entailment Generation
Ramakanth Pasunuru
Joey Tianyi Zhou
33
116
0
24 Apr 2017
Paying Attention to Descriptions Generated by Image Captioning Models
Hamed R. Tavakoli
Rakshith Shetty
Ali Borji
Jorma T. Laaksonen
21
79
0
24 Apr 2017
Skeleton Key: Image Captioning by Skeleton-Attribute Decomposition
Yufei Wang
Zhe-nan Lin
Xiaohui Shen
Scott D. Cohen
G. Cottrell
21
105
0
23 Apr 2017
Attend to You: Personalized Image Captioning with Context Sequence Memory Networks
C. C. Park
Byeongchang Kim
Gunhee Kim
14
171
0
21 Apr 2017
Deep Reinforcement Learning-based Image Captioning with Embedding Reward
Zhou Ren
Xiaoyu Wang
Ning Zhang
Xutao Lv
Li-Jia Li
31
324
0
12 Apr 2017
Creativity: Generating Diverse Questions using Variational Autoencoders
Unnat Jain
Ziyu Zhang
A. Schwing
22
152
0
11 Apr 2017
Egocentric Video Description based on Temporally-Linked Sequences
Marc Bolaños
Álvaro Peris
F. Casacuberta
Sergi Soler
Petia Radeva
EgoV
26
25
0
07 Apr 2017
Weakly Supervised Dense Video Captioning
Zhiqiang Shen
Jianguo Li
Zhou Su
Minjun Li
Yurong Chen
Yu-Gang Jiang
Xiangyang Xue
24
134
0
05 Apr 2017
Towards a Visual Privacy Advisor: Understanding and Predicting Privacy Risks in Images
Rakshith Shetty
Bernt Schiele
Mario Fritz
35
223
0
30 Mar 2017
Speaking the Same Language: Matching Machine to Human Captions by Adversarial Training
Rakshith Shetty
Marcus Rohrbach
Lisa Anne Hendricks
Mario Fritz
Bernt Schiele
17
142
0
30 Mar 2017
Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation
Albert Gatt
E. Krahmer
LM&MA
ELM
27
810
0
29 Mar 2017
Where to put the Image in an Image Caption Generator
Marc Tanti
Albert Gatt
K. Camilleri
47
96
0
27 Mar 2017
Recurrent Models for Situation Recognition
Arun Mallya
Svetlana Lazebnik
20
30
0
18 Mar 2017
Towards Diverse and Natural Image Descriptions via a Conditional GAN
Bo Dai
Sanja Fidler
R. Urtasun
Dahua Lin
GAN
22
450
0
17 Mar 2017
MAT: A Multimodal Attentive Translator for Image Captioning
Chang Liu
F. Sun
Changhu Wang
Feng Wang
Alan Yuille
20
58
0
18 Feb 2017
Attention-Based Multimodal Fusion for Video Description
Chiori Hori
Takaaki Hori
Teng-Yok Lee
Kazuhiro Sumi
J. Hershey
Tim K. Marks
41
359
0
11 Jan 2017
Context-aware Captions from Context-agnostic Supervision
Ramakrishna Vedantam
Samy Bengio
Kevin Patrick Murphy
Devi Parikh
Gal Chechik
20
152
0
11 Jan 2017
Learning Visual N-Grams from Web Data
Ang Li
Allan Jabri
Armand Joulin
Laurens van der Maaten
VLM
20
136
0
29 Dec 2016
Understanding Image and Text Simultaneously: a Dual Vision-Language Machine Comprehension Task
Nan Ding
Sebastian Goodman
Fei Sha
Radu Soricut
VLM
19
9
0
22 Dec 2016
Re-evaluating Automatic Metrics for Image Captioning
Mert Kilickaya
Aykut Erdem
Nazli Ikizler-Cinbis
Erkut Erdem
17
180
0
22 Dec 2016
An Empirical Study of Language CNN for Image Captioning
Jiuxiang Gu
G. Wang
Jianfei Cai
Tsuhan Chen
31
132
0
21 Dec 2016
Temporal Tessellation: A Unified Approach for Video Analysis
Dotan Kaufman
Gil Levi
Tal Hassner
Lior Wolf
11
16
0
21 Dec 2016
Attentive Explanations: Justifying Decisions and Pointing to the Evidence
Dong Huk Park
Lisa Anne Hendricks
Zeynep Akata
Bernt Schiele
Trevor Darrell
Marcus Rohrbach
AAML
24
79
0
14 Dec 2016
Text-guided Attention Model for Image Captioning
Jonghwan Mun
Minsu Cho
Bohyung Han
VLM
10
92
0
12 Dec 2016
Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning
Jiasen Lu
Caiming Xiong
Devi Parikh
R. Socher
85
1,442
0
06 Dec 2016
Areas of Attention for Image Captioning
M. Pedersoli
Thomas Lucas
Cordelia Schmid
Jakob Verbeek
33
205
0
03 Dec 2016
Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering
Yash Goyal
Tejas Khot
D. Summers-Stay
Dhruv Batra
Devi Parikh
CoGe
113
3,126
0
02 Dec 2016
Guided Open Vocabulary Image Captioning with Constrained Beam Search
Peter Anderson
Basura Fernando
Mark Johnson
Stephen Gould
21
232
0
02 Dec 2016
Self-critical Sequence Training for Image Captioning
Steven J. Rennie
E. Marcheret
Youssef Mroueh
Jerret Ross
Vaibhava Goel
11
1,877
0
02 Dec 2016
Improved Image Captioning via Policy Gradient optimization of SPIDEr
Siqi Liu
Zhenhai Zhu
Ning Ye
S. Guadarrama
Kevin Patrick Murphy
31
440
0
01 Dec 2016
Video Captioning with Multi-Faceted Attention
Xiang Long
Chuang Gan
Gerard de Melo
22
88
0
01 Dec 2016