ResearchTrend.AI

Audio ALBERT: A Lite BERT for Self-supervised Learning of Audio Representation

18 May 2020 · arXiv:2005.08575
Po-Han Chi, Pei-Hung Chung, Tsung-Han Wu, Chun-Cheng Hsieh, Yen-Hao Chen, Shang-Wen Li, Hung-yi Lee · SSL

Papers citing "Audio ALBERT: A Lite BERT for Self-supervised Learning of Audio Representation"

25 papers shown
Efficient Training of Self-Supervised Speech Foundation Models on a Compute Budget
Andy T. Liu, Yi-Cheng Lin, Haibin Wu, Stefan Winkler, Hung-yi Lee · 09 Sep 2024
Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing
Yonggan Fu, Yang Zhang, Kaizhi Qian, Zhifan Ye, Zhongzhi Yu, Cheng-I Jeff Lai, Yingyan Lin · 02 Nov 2022
A Simple Framework for Contrastive Learning of Visual Representations
Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey E. Hinton · 13 Feb 2020 · SSL
Deep Contextualized Acoustic Representations For Semi-Supervised Speech Recognition
Shaoshi Ling, Yuzong Liu, Julian Salazar, Katrin Kirchhoff · 03 Dec 2019 · SSL
Momentum Contrast for Unsupervised Visual Representation Learning
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, Ross B. Girshick · 13 Nov 2019 · SSL
What does a network layer hear? Analyzing hidden representations of end-to-end ASR through speech synthesis
Chung-Yi Li, Pei-Chieh Yuan, Hung-yi Lee · 04 Nov 2019
Mockingjay: Unsupervised Speech Representation Learning with Deep Bidirectional Transformer Encoders
Andy T. Liu, Shu-Wen Yang, Po-Han Chi, Po-Chun Hsu, Hung-yi Lee · 25 Oct 2019 · SSL
Speech-XLNet: Unsupervised Acoustic Model Pretraining For Self-Attention Networks
Xingcheng Song, Guangsen Wang, Zhiyong Wu, Yiheng Huang, Dan Su, Dong Yu, Helen Meng · 23 Oct 2019 · SSL
Improving Transformer-based Speech Recognition Using Unsupervised Pre-training
Dongwei Jiang, Xiaoning Lei, Wubo Li, Ne Luo, Yuxuan Hu, Wei Zou, Xiangang Li · 22 Oct 2019
vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations
Alexei Baevski, Steffen Schneider, Michael Auli · 12 Oct 2019 · SSL
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut · 26 Sep 2019 · SSL, AIMat
RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov · 26 Jul 2019 · AIMat
Analyzing Phonetic and Graphemic Representations in End-to-End Automatic Speech Recognition
Yonatan Belinkov, Ahmed M. Ali, James R. Glass · 09 Jul 2019
Sharing Attention Weights for Fast Transformer
Tong Xiao, Yinqiao Li, Jingbo Zhu, Zhengtao Yu, Tongran Liu · 26 Jun 2019
XLNet: Generalized Autoregressive Pretraining for Language Understanding
Zhilin Yang, Zihang Dai, Yiming Yang, J. Carbonell, Ruslan Salakhutdinov, Quoc V. Le · 19 Jun 2019 · AI4CE
Semi-supervised Sequence-to-sequence ASR using Unpaired Speech and Text
M. Baskar, Shinji Watanabe, Ramón Fernández Astudillo, Takaaki Hori, L. Burget, J. Černocký · 30 Apr 2019
wav2vec: Unsupervised Pre-training for Speech Recognition
Steffen Schneider, Alexei Baevski, R. Collobert, Michael Auli · 11 Apr 2019 · SSL
An Unsupervised Autoregressive Model for Speech Representation Learning
Yu-An Chung, Wei-Ning Hsu, Hao Tang, James R. Glass · 05 Apr 2019 · SSL
Large Batch Optimization for Deep Learning: Training BERT in 76 minutes
Yang You, Jing Li, Sashank J. Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, J. Demmel, Kurt Keutzer, Cho-Jui Hsieh · 01 Apr 2019 · ODL
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova · 11 Oct 2018 · VLM, SSL, SSeg
Recurrent Stacking of Layers for Compact Neural Machine Translation Models
Raj Dabre, Atsushi Fujita · 14 Jul 2018
Universal Transformers
Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, Lukasz Kaiser · 10 Jul 2018
Representation Learning with Contrastive Predictive Coding
Aaron van den Oord, Yazhe Li, Oriol Vinyals · 10 Jul 2018 · DRL, SSL
Deep contextualized word representations
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer · 15 Feb 2018 · NAI
Using the Output Embedding to Improve Language Models
Ofir Press, Lior Wolf · 20 Aug 2016