
arXiv:2407.15408
Chronologically Accurate Retrieval for Temporal Grounding of Motion-Language Models

22 July 2024
Kent Fujiwara, Mikihiro Tanaka, Qing Yu

Papers citing "Chronologically Accurate Retrieval for Temporal Grounding of Motion-Language Models"

8 papers shown:

  1. MixerMDM: Learnable Composition of Human Motion Diffusion Models (01 Apr 2025) [DiffM]
     Pablo Ruiz-Ponce, Germán Barquero, Cristina Palmero, Sergio Escalera, José García Rodríguez
  2. MotionLab: Unified Human Motion Generation and Editing via the Motion-Condition-Motion Paradigm (13 Mar 2025) [VGen]
     Ziyan Guo, Zeyu Hu, Na Zhao, De Wen Soh
  3. TMR: Text-to-Motion Retrieval Using Contrastive 3D Human Motion Synthesis (02 May 2023) [VGen]
     Mathis Petrovich, Michael J. Black, Gül Varol
  4. BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models (30 Jan 2023) [VLM, MLLM]
     Junnan Li, Dongxu Li, Silvio Savarese, Steven C. H. Hoi
  5. Human Motion Diffusion Model (29 Sep 2022) [DiffM, VGen]
     Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-Or, Amit H. Bermano
  6. TEACH: Temporal Action Composition for 3D Humans (09 Sep 2022)
     Nikos Athanasiou, Mathis Petrovich, Michael J. Black, Gül Varol
  7. BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation (28 Jan 2022) [MLLM, BDL, VLM, CLIP]
     Junnan Li, Dongxu Li, Caiming Xiong, S. Hoi
  8. The KIT Motion-Language Dataset (13 Jul 2016)
     Matthias Plappert, Christian Mandery, Tamim Asfour