MRecGen: Multimodal Appropriate Reaction Generator
arXiv 2307.02609 · 5 July 2023
Jiaqi Xu, Cheng Luo, Weicheng Xie, Linlin Shen, Xiaofeng Liu, Lu Liu, Hatice Gunes, Siyang Song
VGen
Links: arXiv (abs) · PDF · HTML · GitHub (3★)
Papers citing "MRecGen: Multimodal Appropriate Reaction Generator" (6 / 6 papers shown)
Multiple Appropriate Facial Reaction Generation in Dyadic Interaction Settings: What, Why and How?
Siyang Song, Micol Spitale, Yi-Xiang Luo, Batuhan Bal, Hatice Gunes
CVBM · 69 · 21 · 0 · 13 Feb 2023
Scaling Instruction-Finetuned Language Models
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, ..., Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, Jason W. Wei
ReLM, LRM · 194 · 3,128 · 0 · 20 Oct 2022
Learning to Listen: Modeling Non-Deterministic Dyadic Facial Motion
Evonne Ng, Hanbyul Joo, Liwen Hu, Hao Li, Trevor Darrell, Angjoo Kanazawa, Shiry Ginosar
VGen · 61 · 94 · 0 · 18 Apr 2022
PaLM: Scaling Language Modeling with Pathways
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, ..., Kathy Meier-Hellstern, Douglas Eck, J. Dean, Slav Petrov, Noah Fiedel
PILM, LRM · 500 · 6,279 · 0 · 05 Apr 2022
Let's Face It: Probabilistic Multi-modal Interlocutor-aware Generation of Facial Gestures in Dyadic Settings
Patrik Jonell, Taras Kucherenko, G. Henter, Jonas Beskow
CVBM · 59 · 61 · 0 · 11 Jun 2020
Gesticulator: A framework for semantically-aware speech-driven gesture generation
Taras Kucherenko, Patrik Jonell, S. V. Waveren, G. Henter, Simon Alexanderson, Iolanda Leite, Hedvig Kjellström
SLR · 52 · 180 · 0 · 25 Jan 2020