Egocentric Video Description based on Temporally-Linked Sequences

7 April 2017
Marc Bolaños, Álvaro Peris, F. Casacuberta, Sergi Soler, Petia Radeva
EgoV

Papers citing "Egocentric Video Description based on Temporally-Linked Sequences"

3 / 3 papers shown

1. Sensor-Augmented Egocentric-Video Captioning with Dynamic Modal Attention
   Katsuyuki Nakamura, Hiroki Ohashi, Mitsuhiro Okada
   EgoV · 07 Sep 2021

2. Predicting the Future from First Person (Egocentric) Vision: A Survey
   Ivan Rodin, Antonino Furnari, Dimitrios Mavroeidis, G. Farinella
   EgoV · 28 Jul 2021

3. Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding
   Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, Marcus Rohrbach
   06 Jun 2016