Ego4D: Around the World in 3,000 Hours of Egocentric Video

13 October 2021
Kristen Grauman
Andrew Westbury
Eugene Byrne
Zachary Chavis
Antonino Furnari
Rohit Girdhar
Jackson Hamburger
Hao Jiang
Miao Liu
Xingyu Liu
Miguel Martin
Tushar Nagarajan
Ilija Radosavovic
Santhosh Kumar Ramakrishnan
Fiona Ryan
J. Sharma
Michael Wray
Mengmeng Xu
Eric Z. Xu
Chen Zhao
Siddhant Bansal
Dhruv Batra
Vincent Cartillier
Sean Crane
Tien Do
Morrie Doulaty
Akshay Erapalli
Christoph Feichtenhofer
A. Fragomeni
Qichen Fu
A. Gebreselasie
Cristina González
James M. Hillis
Xuhua Huang
Yifei Huang
Wenqi Jia
Weslie Khoo
J. Kolár
Satwik Kottur
Anurag Kumar
F. Landini
Chao Li
Yanghao Li
Zhenqiang Li
K. Mangalam
Raghava Modhugu
Jonathan Munro
Tullie Murrell
Takumi Nishiyasu
Will Price
Paola Ruiz Puentes
Merey Ramazanova
Leda Sari
Kiran Somasundaram
Audrey Southerland
Yusuke Sugano
Ruijie Tao
Minh Vo
Yuchen Wang
Xindi Wu
Takuma Yagi
Ziwei Zhao
Yunyi Zhu
Pablo Arbelaez
David J. Crandall
Dima Damen
G. Farinella
Christian Fuegen
Guohao Li
V. Ithapu
C. V. Jawahar
Hanbyul Joo
Kris M. Kitani
Haizhou Li
Richard Newcombe
A. Oliva
H. Park
James M. Rehg
Yoichi Sato
Jianbo Shi
Mike Zheng Shou
Antonio Torralba
Lorenzo Torresani
Mingfei Yan
Jitendra Malik
    EgoV

Papers citing "Ego4D: Around the World in 3,000 Hours of Egocentric Video"

Showing 50 of 786 citing papers
VCBench: A Controllable Benchmark for Symbolic and Abstract Challenges in Video Cognition
Chenglin Li
Qianglong Chen
Zhi Li
Feng Tao
Yin Zhang
34
0
0
14 Nov 2024
EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation
Xiaofeng Wang
Kang Zhao
F. Liu
Jiayu Wang
Guosheng Zhao
Xiaoyi Bao
Zheng Hua Zhu
Yingya Zhang
Xingang Wang
VGen
56
6
0
13 Nov 2024
Which Viewpoint Shows it Best? Language for Weakly Supervising View Selection in Multi-view Instructional Videos
Sagnik Majumder
Tushar Nagarajan
Ziad Al-Halah
Reina Pradhan
Kristen Grauman
31
1
0
13 Nov 2024
Past, Present, and Future of Sensor-Based Human Activity Recognition Using Wearables: A Surveying Tutorial on a Still Challenging Task
H. Haresamudram
Chi Ian Tang
Sungho Suh
P. Lukowicz
Thomas Ploetz
76
2
0
11 Nov 2024
Moving Off-the-Grid: Scene-Grounded Video Representations
Sjoerd van Steenkiste
Daniel Zoran
Yi Yang
Yulia Rubanova
Rishabh Kabra
...
Thomas Keck
João Carreira
Alexey Dosovitskiy
Mehdi S. M. Sajjadi
Thomas Kipf
31
3
0
08 Nov 2024
HourVideo: 1-Hour Video-Language Understanding
Keshigeyan Chandrasegaran
Agrim Gupta
Lea M. Hadzic
Taran Kota
Jimming He
Cristobal Eyzaguirre
Zane Durante
Manling Li
Jiajun Wu
L. Fei-Fei
VLM
48
31
0
07 Nov 2024
DesignMinds: Enhancing Video-Based Design Ideation with Vision-Language Model and Context-Injected Large Language Model
Tianhao He
Andrija Stankovic
E. Niforatos
Gerd Kortuem
MLLM
VGen
VLM
39
0
0
06 Nov 2024
HiMemFormer: Hierarchical Memory-Aware Transformer for Multi-Agent Action Anticipation
Zirui Wang
Xinran Zhao
Simon Stepputtis
Woojun Kim
Tongshuang Wu
Katia P. Sycara
Yaqi Xie
OffRL
49
0
0
03 Nov 2024
Human-inspired Perspectives: A Survey on AI Long-term Memory
Zihong He
Weizhe Lin
Hao Zheng
Fan Zhang
Matt Jones
Laurence Aitchison
X. Xu
Miao Liu
Per Ola Kristensson
Junxiao Shen
77
2
0
01 Nov 2024
Robots Pre-train Robots: Manipulation-Centric Robotic Representation from Large-Scale Robot Datasets
Guangqi Jiang
Yifei Sun
Tao Huang
Huanyu Li
Yongyuan Liang
Huazhe Xu
25
4
0
29 Oct 2024
VLMimic: Vision Language Models are Visual Imitation Learner for Fine-grained Actions
Guanyan Chen
Hao Wu
Te Cui
Yao Mu
Haoyang Lu
...
Mengxiao Hu
Haizhou Li
Y. Li
Yi Yang
Yufeng Yue
VLM
28
3
0
28 Oct 2024
Egocentric and Exocentric Methods: A Short Survey
Anirudh Thatipelli
Shao-Yuan Lo
Amit K. Roy-Chowdhury
EgoV
42
2
0
27 Oct 2024
WorldSimBench: Towards Video Generation Models as World Simulators
Yiran Qin
Zhelun Shi
Jiwen Yu
Xijun Wang
Enshen Zhou
...
Lu Sheng
Jing Shao
Junlin Wu
Wanli Ouyang
Ruimao Zhang
EGVM
VGen
126
381
0
23 Oct 2024
EVA: An Embodied World Model for Future Video Anticipation
Xiaowei Chi
Hengyuan Zhang
Chun-Kai Fan
Xingqun Qi
Rongyu Zhang
...
Chi-Min Chan
Wei Xue
Wenhan Luo
Shanghang Zhang
Yike Guo
VGen
38
5
0
20 Oct 2024
CAGE: Causal Attention Enables Data-Efficient Generalizable Robotic Manipulation
Shangning Xia
Hongjie Fang
Hao-Shu Fang
Cewu Lu
CML
31
5
0
19 Oct 2024
UniMTS: Unified Pre-training for Motion Time Series
Xiyuan Zhang
Diyan Teng
Ranak Roy Chowdhury
Shuheng Li
Dezhi Hong
Rajesh K. Gupta
Jingbo Shang
AI4TS
21
3
0
18 Oct 2024
Your Interest, Your Summaries: Query-Focused Long Video Summarization
Nirav Patel
Payal Prajapati
Maitrik Shah
25
0
0
17 Oct 2024
Human Action Anticipation: A Survey
Bolin Lai
Sam Toyer
Tushar Nagarajan
Rohit Girdhar
S. Zha
James M. Rehg
Kris M. Kitani
Kristen Grauman
Ruta Desai
Miao Liu
AI4TS
41
1
0
17 Oct 2024
It's Just Another Day: Unique Video Captioning by Discriminative Prompting
Toby Perrett
Tengda Han
Dima Damen
Andrew Zisserman
19
3
0
15 Oct 2024
VidEgoThink: Assessing Egocentric Video Understanding Capabilities for Embodied AI
Sijie Cheng
Kechen Fang
Yangyang Yu
Sicheng Zhou
Yangqiu Song
Ye Tian
Tingguang Li
Lei Han
Yang Liu
51
8
0
15 Oct 2024
Visual-Geometric Collaborative Guidance for Affordance Learning
Hongchen Luo
Wei-dong Zhai
J. Wang
Yang Cao
Zheng-jun Zha
25
0
0
15 Oct 2024
Latent Action Pretraining from Videos
Seonghyeon Ye
Joel Jang
Byeongguk Jeon
Sejune Joo
Jianwei Yang
...
Kimin Lee
J. Gao
Luke Zettlemoyer
Dieter Fox
Minjoon Seo
35
27
0
15 Oct 2024
Incorporating Task Progress Knowledge for Subgoal Generation in Robotic Manipulation through Image Edits
Xuhui Kang
Yen-Ling Kuo
38
3
0
14 Oct 2024
Ego3DT: Tracking Every 3D Object in Ego-centric Videos
Shengyu Hao
Wenhao Chai
Zhonghan Zhao
Meiqi Sun
Wendi Hu
...
Yixian Zhao
Qi Li
Yizhou Wang
Xi Li
Gaoang Wang
37
1
0
11 Oct 2024
Learning to Generate Diverse Pedestrian Movements from Web Videos with Noisy Labels
Zhizheng Liu
Joe Lin
Wayne Wu
Bolei Zhou
VGen
140
0
0
10 Oct 2024
OmniPose6D: Towards Short-Term Object Pose Tracking in Dynamic Scenes from Monocular RGB
Yunzhi Lin
Yipu Zhao
Fu-Jen Chu
Xingyu Chen
Weiyao Wang
Hao Tang
Patricio A. Vela
Matt Feiszli
Kevin J Liang
29
0
0
09 Oct 2024
MM-Ego: Towards Building Egocentric Multimodal LLMs for Video QA
Hanrong Ye
Haotian Zhang
Erik Daxberger
Lin Chen
Zongyu Lin
...
Haoxuan You
Dan Xu
Zhe Gan
Jiasen Lu
Yinfei Yang
EgoV
MLLM
88
12
0
09 Oct 2024
GR-2: A Generative Video-Language-Action Model with Web-Scale Knowledge for Robot Manipulation
Chi-Lam Cheang
Guangzeng Chen
Ya Jing
Tao Kong
Hang Li
...
Hongtao Wu
Jiafeng Xu
Yichu Yang
Hanbo Zhang
Minzhao Zhu
VGen
LM&Ro
61
52
0
08 Oct 2024
Comparing Zealous and Restrained AI Recommendations in a Real-World Human-AI Collaboration Task
Chengyuan Xu
Kuo-Chin Lien
Tobias Höllerer
27
10
0
06 Oct 2024
TR-LLM: Integrating Trajectory Data for Scene-Aware LLM-Based Human Action Prediction
Kojiro Takeyama
Yimeng Liu
Misha Sra
29
1
0
05 Oct 2024
VEDIT: Latent Prediction Architecture For Procedural Video Representation Learning
Han Lin
Tushar Nagarajan
Nicolas Ballas
Mido Assran
Mojtaba Komeili
Joey Tianyi Zhou
Koustuv Sinha
AI4TS
54
3
0
04 Oct 2024
AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark
Wenhao Chai
Enxin Song
Y. Du
Chenlin Meng
Vashisht Madhavan
Omer Bar-Tal
Jeng-Neng Hwang
Saining Xie
Christopher D. Manning
3DV
84
26
0
04 Oct 2024
Video Instruction Tuning With Synthetic Data
Yuanhan Zhang
Jinming Wu
Wei Li
Bo Li
Zejun Ma
Ziwei Liu
Chunyuan Li
SyDa
VGen
47
140
0
03 Oct 2024
Saliency-Guided DETR for Moment Retrieval and Highlight Detection
Aleksandr Gordeev
Vladimir Dokholyan
Irina Tolstykh
Maksim Kuprashevich
31
4
0
02 Oct 2024
Robo-MUTUAL: Robotic Multimodal Task Specification via Unimodal Learning
Jianxiong Li
Zhihao Wang
Jinliang Zheng
Xiaoai Zhou
Guanming Wang
...
Yu Liu
Jingjing Liu
Ya-Qin Zhang
Junzhi Yu
Xianyuan Zhan
38
2
0
02 Oct 2024
Cognition Transferring and Decoupling for Text-supervised Egocentric Semantic Segmentation
Zhaofeng Shi
Heqian Qiu
Lanxiao Wang
Fanman Meng
Q. Wu
Hongliang Li
30
2
0
02 Oct 2024
AHA: A Vision-Language-Model for Detecting and Reasoning Over Failures in Robotic Manipulation
Jiafei Duan
Wilbert Pumacay
Nishanth Kumar
Yi Ru Wang
Shulin Tian
Wentao Yuan
Ranjay Krishna
Dieter Fox
Ajay Mandlekar
Yijie Guo
VLM
LRM
23
19
0
01 Oct 2024
Propose, Assess, Search: Harnessing LLMs for Goal-Oriented Planning in Instructional Videos
Md. Mohaiminul Islam
Tushar Nagarajan
Huiyu Wang
Fu-Jen Chu
Kris M. Kitani
Gedas Bertasius
Xitong Yang
35
2
0
30 Sep 2024
HEADS-UP: Head-Mounted Egocentric Dataset for Trajectory Prediction in Blind Assistance Systems
Yasaman Haghighi
Celine Demonsant
Panagiotis Chalimourdas
Maryam Tavasoli Naeini
Jhon Kevin Munoz
Bladimir Bacca
Silvan Suter
Matthieu Gani
Alexandre Alahi
EgoV
34
1
0
30 Sep 2024
Feature Extractor or Decision Maker: Rethinking the Role of Visual Encoders in Visuomotor Policies
Ruiyu Wang
Zheyu Zhuang
Shutong Jin
Nils Ingelhag
Danica Kragic
Florian T. Pokorny
31
0
0
30 Sep 2024
Procedure-Aware Surgical Video-language Pretraining with Hierarchical Knowledge Augmentation
Kun Yuan
V. Srivastav
Nassir Navab
N. Padoy
44
7
0
30 Sep 2024
Grounding 3D Scene Affordance From Egocentric Interactions
Cuiyu Liu
Wei Zhai
Yuhang Yang
Hongchen Luo
Sen Liang
Yang Cao
Zheng-Jun Zha
34
1
0
29 Sep 2024
Temporal2Seq: A Unified Framework for Temporal Video Understanding Tasks
Min Yang
Zichen Zhang
Limin Wang
AI4TS
39
0
0
27 Sep 2024
EgoLM: Multi-Modal Language Model of Egocentric Motions
Fangzhou Hong
Vladimir Guzov
Hyo Jin Kim
Yuting Ye
Richard Newcombe
Ziwei Liu
Lingni Ma
32
5
0
26 Sep 2024
E.T. Bench: Towards Open-Ended Event-Level Video-Language Understanding
Ye Liu
Zongyang Ma
Zhongang Qi
Yang Wu
Ying Shan
Chang Wen Chen
36
16
0
26 Sep 2024
Episodic Memory Verbalization using Hierarchical Representations of Life-Long Robot Experience
Leonard Barmann
Chad DeChant
Joana Plewnia
Fabian Peller-Konrad
Daniel Bauer
Tamim Asfour
Alex Waibel
LM&Ro
32
1
0
26 Sep 2024
EAGLE: Egocentric AGgregated Language-video Engine
Jing Bi
Yunlong Tang
Luchuan Song
A. Vosoughi
Nguyen Nguyen
Chenliang Xu
45
8
0
26 Sep 2024
Gen2Act: Human Video Generation in Novel Scenarios enables Generalizable Robot Manipulation
Homanga Bharadhwaj
Debidatta Dwibedi
Abhinav Gupta
Shubham Tulsiani
Carl Doersch
Ted Xiao
Dhruv Shah
Fei Xia
Dorsa Sadigh
Sean Kirmani
VGen
LM&Ro
42
28
0
24 Sep 2024
QUB-PHEO: A Visual-Based Dyadic Multi-View Dataset for Intention Inference in Collaborative Assembly
Samuel Adebayo
Seán F. McLoone
J. Dessing
19
0
0
23 Sep 2024
Multi-Modal Generative AI: Multi-modal LLM, Diffusion and Beyond
Hong Chen
Xin Wang
Yuwei Zhou
Bin Huang
Yipeng Zhang
Wei Feng
Houlun Chen
Zeyang Zhang
Siao Tang
Wenwu Zhu
DiffM
55
7
0
23 Sep 2024