Learning to Skim Text
Adams Wei Yu, Hongrae Lee, Quoc V. Le
arXiv:1704.06877, 23 April 2017 (RALM)

Papers citing "Learning to Skim Text" (showing 50 of 58)

SOI: Scaling Down Computational Complexity by Estimating Partial States of the Model. Grzegorz Stefański, P. Daniluk, Artur Szumaczuk, Jakub Tkaczuk. 04 Oct 2024.
No-Skim: Towards Efficiency Robustness Evaluation on Skimming-based Language Models. Sheng Zhang, Mi Zhang, Xudong Pan, Min Yang. 15 Dec 2023.
Internally Rewarded Reinforcement Learning. Mengdi Li, Xufeng Zhao, Jae Hee Lee, C. Weber, S. Wermter. 01 Feb 2023.
State-Regularized Recurrent Neural Networks to Extract Automata and Explain Predictions. Cheng Wang, Carolin (Haas) Lawrence, Mathias Niepert. 10 Dec 2022.
Boosted Dynamic Neural Networks. Haichao Yu, Haoxiang Li, G. Hua, Gao Huang, Humphrey Shi. 30 Nov 2022.
Informative Text Generation from Knowledge Triples. Z. Fu, Yi Dong, Lidong Bing, W. Lam. 26 Sep 2022.
Transkimmer: Transformer Learns to Layer-wise Skim. Yue Guan, Zhengyi Li, Jingwen Leng, Zhouhan Lin, Minyi Guo. 15 May 2022.
A Survey on Dynamic Neural Networks for Natural Language Processing. Canwen Xu, Julian McAuley. 15 Feb 2022. (AI4CE)
Block-Skim: Efficient Question Answering for Transformer. Yue Guan, Zhengyi Li, Jingwen Leng, Zhouhan Lin, Minyi Guo, Yuhao Zhu. 16 Dec 2021.
A Survey on Green Deep Learning. Jingjing Xu, Wangchunshu Zhou, Zhiyi Fu, Hao Zhou, Lei Li. 08 Nov 2021. (VLM)
Part & Whole Extraction: Towards A Deep Understanding of Quantitative Facts for Percentages in Text. Lei Fang, Jian-Guang Lou. 26 Oct 2021.
PonderNet: Learning to Ponder. Andrea Banino, Jan Balaguer, Charles Blundell. 12 Jul 2021. (PINN, AIMat)
Revisiting the Weaknesses of Reinforcement Learning for Neural Machine Translation. Samuel Kiegeland, Julia Kreutzer. 16 Jun 2021. (AAML)
Doping: A technique for efficient compression of LSTM models using sparse structured additive matrices. Urmish Thakker, P. Whatmough, Zhi-Gang Liu, Matthew Mattina, Jesse G. Beu. 14 Feb 2021.
Dynamic Neural Networks: A Survey. Yizeng Han, Gao Huang, Shiji Song, Le Yang, Honghui Wang, Yulin Wang. 09 Feb 2021. (3DH, AI4TS, AI4CE)
SG-Net: Syntax Guided Transformer for Language Representation. Zhuosheng Zhang, Yuwei Wu, Junru Zhou, Sufeng Duan, Hai Zhao, Rui-cang Wang. 27 Dec 2020.
Rank and run-time aware compression of NLP Applications. Urmish Thakker, Jesse G. Beu, Dibakar Gope, Ganesh S. Dasika, Matthew Mattina. 06 Oct 2020.
Knowledge Efficient Deep Learning for Natural Language Processing. Hai Wang. 28 Aug 2020.
EagerNet: Early Predictions of Neural Networks for Computationally Efficient Intrusion Detection. Fares Meghdouri, Maximilian Bachl, Tanja Zseby. 27 Jul 2020.
Surprisal-Triggered Conditional Computation with Neural Networks. Loren Lugosch, Derek Nowrouzezahrai, B. Meyer. 02 Jun 2020.
Recurrent Chunking Mechanisms for Long-Text Machine Reading Comprehension. Hongyu Gong, Yelong Shen, Dian Yu, Jianshu Chen, Dong Yu. 16 May 2020.
Machine Reading Comprehension: The Role of Contextualized Language Models and Beyond. Zhuosheng Zhang, Hai Zhao, Rui Wang. 13 May 2020.
SparseIDS: Learning Packet Sampling with Reinforcement Learning. Maximilian Bachl, Fares Meghdouri, J. Fabini, Tanja Zseby. 10 Feb 2020.
MEMO: A Deep Network for Flexible Combination of Episodic Memories. Andrea Banino, Adria Puigdomenech Badia, Raphael Köster, Martin Chadwick, V. Zambaldi, Demis Hassabis, Caswell Barry, M. Botvinick, D. Kumaran, Charles Blundell. 29 Jan 2020. (KELM)
Knowledge Tracing with Sequential Key-Value Memory Networks. Ghodai M. Abdelrahman, Qing Wang. 29 Oct 2019. (AI4Ed, KELM)
Interactive Machine Comprehension with Information Seeking Agents. Xingdi Yuan, Jie Fu, Marc-Alexandre Côté, Yi Tay, C. Pal, Adam Trischler. 27 Aug 2019.
SG-Net: Syntax-Guided Machine Reading Comprehension. Zhuosheng Zhang, Yuwei Wu, Junru Zhou, Sufeng Duan, Hai Zhao, Rui Wang. 14 Aug 2019.
Assessing the Ability of Self-Attention Networks to Learn Word Order. Baosong Yang, Longyue Wang, Derek F. Wong, Lidia S. Chao, Zhaopeng Tu. 03 Jun 2019.
Leap-LSTM: Enhancing Long Short-Term Memory for Text Categorization. Ting-Hao 'Kenneth' Huang, Gehui Shen, Zhihong Deng. 28 May 2019. (VLM, AI4TS)
A Text Classification Framework for Simple and Effective Early Depression Detection Over Social Media Streams. Sergio G. Burdisso, M. Errecalde, M. Montes-y-Gómez. 18 May 2019.
Neural Speed Reading with Structural-Jump-LSTM. Christian B. Hansen, Casper Hansen, Stephen Alstrup, J. Simonsen, Christina Lioma. 20 Mar 2019.
FastFusionNet: New State-of-the-Art for DAWNBench SQuAD. Felix Wu, Boyi Li, Lequn Wang, Ni Lao, John Blitzer, Kilian Q. Weinberger. 28 Feb 2019. (FedML)
Evidence Sentence Extraction for Machine Reading Comprehension. Hai Wang, Dian Yu, Kai Sun, Jianshu Chen, Dong Yu, David A. McAllester, Dan Roth. 23 Feb 2019.
Multi-step Reasoning via Recurrent Dual Attention for Visual Dialog. Zhe Gan, Yu Cheng, Ahmed El Kholy, Linjie Li, Jingjing Liu, Jianfeng Gao. 01 Feb 2019.
State-Regularized Recurrent Neural Networks. Cheng Wang, Mathias Niepert. 25 Jan 2019.
Reducing state updates via Gaussian-gated LSTMs. Matthew Thornton, Jithendar Anumula, Shih-Chii Liu. 22 Jan 2019.
Incremental Reading for Question Answering. Samira Abnar, Tania Bedrax-Weiss, Tom Kwiatkowski, William W. Cohen. 15 Jan 2019. (CLL, LRM)
Learning to Remember More with Less Memorization. Hung Le, T. Tran, Svetha Venkatesh. 05 Jan 2019.
Layer Flexible Adaptive Computational Time. Lida Zhang, Abdolghani Ebrahimi, Diego Klabjan. 06 Dec 2018. (AI4CE)
Long Short-Term Memory with Dynamic Skip Connections. Tao Gui, Qi Zhang, Lujun Zhao, Y. Lin, Minlong Peng, Jingjing Gong, Xuanjing Huang. 09 Nov 2018.
Image-based Natural Language Understanding Using 2D Convolutional Neural Networks. Erinc Merdivan, Anastasios Vafeiadis, D. Kalatzis, S. Hanke, J. Kropf, ..., Dimitrios Giakoumis, Dimitrios Tzovaras, Liming Luke Chen, R. Hamzaoui, M. Geist. 24 Oct 2018. (VLM)
Pyramidal Recurrent Unit for Language Modeling. Sachin Mehta, Rik Koncel-Kedziorski, Mohammad Rastegari, Hannaneh Hajishirzi. 27 Aug 2018.
Robust Text Classifier on Test-Time Budgets. Md. Rizwan Parvez, Tolga Bolukbasi, Kai-Wei Chang, Venkatesh Sarigrama. 24 Aug 2018. (OOD, VLM)
JUMPER: Learning When to Make Classification Decisions in Reading. Xianggen Liu, Lili Mou, Haotian Cui, Zhengdong Lu, Sen Song. 06 Jul 2018.
Generating Titles for Web Tables. Braden Hancock, Hongrae Lee, Cong Yu. 30 Jun 2018. (LMTD)
Learning to Search in Long Documents Using Document Structure. Mor Geva, Jonathan Berant. 09 Jun 2018. (RALM)
QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, Quoc V. Le. 23 Apr 2018. (RALM, AIMat)
EmoRL: Continuous Acoustic Emotion Classification using Deep Reinforcement Learning. Egor Lakomkin, M. Zamani, C. Weber, S. Magg, S. Wermter. 03 Apr 2018.
Learning Longer-term Dependencies in RNNs with Auxiliary Losses. Trieu H. Trinh, Andrew M. Dai, Thang Luong, Quoc V. Le. 01 Mar 2018.
Reinforced Self-Attention Network: a Hybrid of Hard and Soft Attention for Sequence Modeling. Tao Shen, Tianyi Zhou, Guodong Long, Jing Jiang, Sen Wang, Chengqi Zhang. 31 Jan 2018. (AI4TS)