From phonemes to images: levels of representation in a recurrent neural model of visually-grounded language learning
Lieke Gelderloos, Grzegorz Chrupała
arXiv:1610.03342, 11 October 2016
Papers citing "From phonemes to images: levels of representation in a recurrent neural model of visually-grounded language learning" (11 papers)
1. Towards visually prompted keyword localisation for zero-resource spoken languages
   Leanne Nortje, Herman Kamper (12 Oct 2022)
2. Word Segmentation on Discovered Phone Units with Dynamic Programming and Self-Supervised Scoring
   Herman Kamper (24 Feb 2022)
3. Keyword localisation in untranscribed speech using visually grounded speech models
   Kayode Olaleye, Dan Oneaţă, Herman Kamper (02 Feb 2022)
4. Probing artificial neural networks: insights from neuroscience
   Anna A. Ivanova, John Hewitt, Noga Zaslavsky (16 Apr 2021)
5. Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure
   Dieuwke Hupkes, Sara Veldhoen, Willem H. Zuidema (28 Nov 2017)
6. Semantic speech retrieval with a visually grounded model of untranscribed speech
   Herman Kamper, Gregory Shakhnarovich, Karen Livescu (05 Oct 2017)
7. Imagination improves Multimodal Translation
   Desmond Elliott, Ákos Kádár (11 May 2017)
8. What do Neural Machine Translation Models Learn about Morphology?
   Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, James R. Glass (11 Apr 2017)
9. Visually grounded learning of keyword prediction from untranscribed speech
   Herman Kamper, Shane Settle, Gregory Shakhnarovich, Karen Livescu (23 Mar 2017)
10. Representations of language in a model of visually grounded speech signal
    Grzegorz Chrupała, Lieke Gelderloos, A. Alishahi (07 Feb 2017)
11. Pixel Recurrent Neural Networks
    Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu (25 Jan 2016)