Learning compositionally through attentive guidance (arXiv:1805.09657)

Dieuwke Hupkes, Anand Singh, K. Korrel, Germán Kruszewski, Elia Bruni
20 May 2018

Papers citing "Learning compositionally through attentive guidance"

12 papers

Compositional Capabilities of Autoregressive Transformers: A Study on Synthetic, Interpretable Tasks
Rahul Ramesh, Ekdeep Singh Lubana, Mikail Khona, Robert P. Dick, Hidenori Tanaka
21 Nov 2023

Layer-wise Representation Fusion for Compositional Generalization
Yafang Zheng, Lei Lin, Shantao Liu, Binling Wang, Zhaohong Lai, Wenhao Rao, Biao Fu, Yidong Chen, Xiaodong Shi
20 Jul 2023

Self-Organising Neural Discrete Representation Learning à la Kohonen
Kazuki Irie, Róbert Csordás, Jürgen Schmidhuber
15 Feb 2023

CTL++: Evaluating Generalization on Never-Seen Compositional Patterns of Known Functions, and Compatibility of Neural Representations
Róbert Csordás, Kazuki Irie, Jürgen Schmidhuber
12 Oct 2022

The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization
Róbert Csordás, Kazuki Irie, Jürgen Schmidhuber
14 Oct 2021

Can Transformers Jump Around Right in Natural Language? Assessing Performance Transfer from SCAN
Rahma Chaabouni, Roberto Dessì, Eugene Kharitonov
03 Jul 2021

One Semantic Parser to Parse Them All: Sequence to Sequence Multi-Task Learning on Semantic Parsing Datasets
Marco Damonte, Emilio Monti
08 Jun 2021

Location Attention for Extrapolation to Longer Sequences
Yann Dubois, Gautier Dagan, Dieuwke Hupkes, Elia Bruni
10 Nov 2019

On the Realization of Compositionality in Neural Networks
Joris Baan, Jana Leible, Mitja Nikolaus, David Rau, Dennis Ulmer, Tim Baumgärtner, Dieuwke Hupkes, Elia Bruni
04 Jun 2019

Transcoding compositionally: using attention to find more generalizable solutions
K. Korrel, Dieuwke Hupkes, Verna Dankers, Elia Bruni
04 Jun 2019

The Fine Line between Linguistic Generalization and Failure in Seq2Seq-Attention Models
Noah Weber, L. Shekhar, Niranjan Balasubramanian
03 May 2018

Effective Approaches to Attention-based Neural Machine Translation
Thang Luong, Hieu H. Pham, Christopher D. Manning
17 Aug 2015