Cited By

Towards visually prompted keyword localisation for zero-resource spoken languages
Leanne Nortje, Herman Kamper
arXiv:2210.06229, 12 October 2022
Papers citing "Towards visually prompted keyword localisation for zero-resource spoken languages" (6 papers)
Improved Visually Prompted Keyword Localisation in Real Low-Resource Settings
Leanne Nortje, Dan Oneaţă, Herman Kamper
09 Sep 2024 (VLM)

Visually Grounded Speech Models have a Mutual Exclusivity Bias
Leanne Nortje, Dan Oneaţă, Yevgen Matusevych, Herman Kamper
20 Mar 2024 (SSL)

SpeechCLIP+: Self-supervised multi-task representation learning for speech via CLIP and speech-image data
Hsuan-Fu Wang, Yi-Jen Shih, Heng-Jui Chang, Layne Berry, Puyuan Peng, Hung-yi Lee, Hsin-Min Wang, David Harwath
10 Feb 2024 (VLM)

Integrating Self-supervised Speech Model with Pseudo Word-level Targets from Visually-grounded Speech Model
Hung-Chieh Fang, Nai-Xuan Ye, Yi-Jen Shih, Puyuan Peng, Hsuan-Fu Wang, Layne Berry, Hung-yi Lee, David Harwath
08 Feb 2024 (VLM)

Visually grounded few-shot word learning in low-resource settings
Leanne Nortje, Dan Oneaţă, Herman Kamper
20 Jun 2023 (VLM)

Visually grounded few-shot word acquisition with fewer shots
Leanne Nortje, Benjamin van Niekerk, Herman Kamper
25 May 2023