MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices
Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, Denny Zhou
6 April 2020 · arXiv:2004.02984
Papers citing "MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices" (50 of 172 papers shown)
Demystifying AI Agents: The Final Generation of Intelligence
Kevin J McNamara, Rhea Pritham Marpu (15 May 2025)

IM-BERT: Enhancing Robustness of BERT through the Implicit Euler Method
Mihyeon Kim, Juhyoung Park, Youngbin Kim (11 May 2025)

ABKD: Pursuing a Proper Allocation of the Probability Mass in Knowledge Distillation via α-β-Divergence
Guanghui Wang, Zhiyong Yang, Zhilin Wang, Shi Wang, Qianqian Xu, Qingming Huang (07 May 2025)

FineScope: Precision Pruning for Domain-Specialized Large Language Models Using SAE-Guided Self-Data Cultivation
Chaitali Bhattacharyya, Yeseong Kim (01 May 2025)
The Rise of Small Language Models in Healthcare: A Comprehensive Survey
Muskan Garg, Shaina Raza, Shebuti Rayana, Xingyi Liu, Sunghwan Sohn [LM&MA, AILaw] (23 Apr 2025)

FedMentalCare: Towards Privacy-Preserving Fine-Tuned LLMs to Analyze Mental Health Status Using Federated Learning Framework
S M Sarwar [AI4MH] (27 Feb 2025)

A Lightweight and Extensible Cell Segmentation and Classification Model for Whole Slide Images
N. Shvetsov, T. Kilvaer, M. Tafavvoghi, Anders Sildnes, Kajsa Møllersen, Lill-Tove Rasmussen Busund, L. A. Bongo [VLM] (26 Feb 2025)

Towards Cross-Tokenizer Distillation: the Universal Logit Distillation Loss for LLMs
Nicolas Boizard, Kevin El Haddad, Céline Hudelot, Pierre Colombo (28 Jan 2025)
HadamRNN: Binary and Sparse Ternary Orthogonal RNNs
Armand Foucault, Franck Mamalet, François Malgouyres [MQ] (28 Jan 2025)

CLIP-PING: Boosting Lightweight Vision-Language Models with Proximus Intrinsic Neighbors Guidance
Chu Myaet Thwal, Ye Lin Tun, Minh N. H. Nguyen, Eui-nam Huh, Choong Seon Hong [VLM] (05 Dec 2024)

SHAKTI: A 2.5 Billion Parameter Small Language Model Optimized for Edge AI and Low-Resource Environments
Syed Abdul Gaffar Shakhadri, Kruthika KR, Rakshit Aralimatti [VLM] (15 Oct 2024)

On Importance of Pruning and Distillation for Efficient Low Resource NLP
Aishwarya Mirashi, Purva Lingayat, Srushti Sonavane, Tejas Padhiyar, Raviraj Joshi, Geetanjali Kale (21 Sep 2024)
NanoMVG: USV-Centric Low-Power Multi-Task Visual Grounding based on Prompt-Guided Camera and 4D mmWave Radar
Runwei Guan, Jianan Liu, Liye Jia, Haocheng Zhao, Shanliang Yao, Xiaohui Zhu, Ka Lok Man, Eng Gee Lim, Jeremy S. Smith, Yutao Yue (30 Aug 2024)

Toward Attention-based TinyML: A Heterogeneous Accelerated Architecture and Automated Deployment Flow
Philip Wiese, Gamze İslamoğlu, Moritz Scherer, Luka Macan, Victor J. B. Jung, Luca Bompani, Francesco Conti, Luca Benini (05 Aug 2024)

Accelerating Large Language Model Inference with Self-Supervised Early Exits
Florian Valade [LRM] (30 Jul 2024)

LPViT: Low-Power Semi-structured Pruning for Vision Transformers
Kaixin Xu, Zhe Wang, Chunyun Chen, Xue Geng, Jie Lin, Xulei Yang, Min-man Wu, Min Wu, Xiaoli Li, Weisi Lin [ViT, VLM] (02 Jul 2024)
Factual Dialogue Summarization via Learning from Large Language Models
Rongxin Zhu, Jey Han Lau, Jianzhong Qi [HILM] (20 Jun 2024)

Fast Vocabulary Transfer for Language Model Compression
Leonidas Gee, Andrea Zugarini, Leonardo Rigutini, Paolo Torroni (15 Feb 2024)

DE³-BERT: Distance-Enhanced Early Exiting for BERT based on Prototypical Networks
Jianing He, Qi Zhang, Weiping Ding, Duoqian Miao, Jun Zhao, Liang Hu, LongBing Cao (03 Feb 2024)

Cross-Modal Prototype based Multimodal Federated Learning under Severely Missing Modality
Huy Q. Le, Chu Myaet Thwal, Yu Qiao, Ye Lin Tun, Minh N. H. Nguyen, Choong Seon Hong (25 Jan 2024)
DSFormer: Effective Compression of Text-Transformers by Dense-Sparse Weight Factorization
Rahul Chand, Yashoteja Prabhu, Pratyush Kumar (20 Dec 2023)

A Comparative Analysis of Pretrained Language Models for Text-to-Speech
M. G. Moya, Panagiota Karanasou, S. Karlapati, Bastian Schnell, Nicole Peinelt, Alexis Moinet, Thomas Drugman (04 Sep 2023)

Mobile Foundation Model as Firmware
Jinliang Yuan, Chenchen Yang, Dongqi Cai, Shihe Wang, Xin Yuan, ..., Di Zhang, Hanzi Mei, Xianqing Jia, Shangguang Wang, Mengwei Xu (28 Aug 2023)

Accurate Retraining-free Pruning for Pretrained Encoder-based Language Models
Seungcheol Park, Ho-Jin Choi, U. Kang [VLM] (07 Aug 2023)

Sensi-BERT: Towards Sensitivity Driven Fine-Tuning for Parameter-Efficient BERT
Souvik Kundu, S. Nittur, Maciej Szankin, Sairam Sundaresan [MQ] (14 Jul 2023)
An Efficient Sparse Inference Software Accelerator for Transformer-based Language Models on CPUs
Haihao Shen, Hengyu Meng, Bo Dong, Zhe Wang, Ofir Zafrir, ..., Hanwen Chang, Qun Gao, Zi. Wang, Guy Boudoukh, Moshe Wasserblat [MoE] (28 Jun 2023)

Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing
Yelysei Bondarenko, Markus Nagel, Tijmen Blankevoort [MQ] (22 Jun 2023)

Breaking On-device Training Memory Wall: A Systematic Survey
Shitian Li, Chunlin Tian, Kahou Tam, Ruirui Ma, Li Li (17 Jun 2023)

FedMultimodal: A Benchmark For Multimodal Federated Learning
Tiantian Feng, Digbalay Bose, Tuo Zhang, Rajat Hebbar, Anil Ramakrishna, Rahul Gupta, Mi Zhang, Salman Avestimehr, Shrikanth Narayanan (15 Jun 2023)
GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model
Shicheng Tan, Weng Lam Tam, Yuanchun Wang, Wenwen Gong, Yang Yang, ..., Jiahao Liu, Jingang Wang, Shuo Zhao, Peng Zhang, Jie Tang [ALM, MoE] (11 Jun 2023)

Just CHOP: Embarrassingly Simple LLM Compression
A. Jha, Tom Sherborne, Evan Pete Walsh, Dirk Groeneveld, Emma Strubell, Iz Beltagy (24 May 2023)

Can Public Large Language Models Help Private Cross-device Federated Learning?
Wei Ping, Yibo Jacky Zhang, Yuan Cao, Bo-wen Li, H. B. McMahan, Sewoong Oh, Zheng Xu, Manzil Zaheer [FedML] (20 May 2023)

Lifting the Curse of Capacity Gap in Distilling Language Models
Chen Zhang, Yang Yang, Jiahao Liu, Jingang Wang, Yunsen Xian, Benyou Wang, Dawei Song [MoE] (20 May 2023)
Tailoring Instructions to Student's Learning Levels Boosts Knowledge Distillation
Yuxin Ren, Zi-Qi Zhong, Xingjian Shi, Yi Zhu, Chun Yuan, Mu Li (16 May 2023)

Word Sense Induction with Knowledge Distillation from BERT
Anik Saha, Alex Gittens, B. Yener (20 Apr 2023)

MiniRBT: A Two-stage Distilled Small Chinese Pre-trained Model
Xin Yao, Ziqing Yang, Yiming Cui, Shijin Wang (03 Apr 2023)

BERTino: an Italian DistilBERT model
Matteo Muffo, E. Bertino [VLM] (31 Mar 2023)

oBERTa: Improving Sparse Transfer Learning via improved initialization, distillation, and pruning regimes
Daniel Fernando Campos, Alexandre Marques, Mark Kurtz, Chengxiang Zhai [VLM, AAML] (30 Mar 2023)
EdgeTran: Co-designing Transformers for Efficient Inference on Mobile Edge Platforms
Shikhar Tuli, N. Jha (24 Mar 2023)

Rediscovering Hashed Random Projections for Efficient Quantization of Contextualized Sentence Embeddings
Ulf A. Hamster, Ji-Ung Lee, Alexander Geyken, Iryna Gurevych (13 Mar 2023)

Greener yet Powerful: Taming Large Code Generation Models with Quantization
Xiaokai Wei, Sujan Kumar Gonugondla, W. Ahmad, Shiqi Wang, Baishakhi Ray, ..., Ben Athiwaratkun, Mingyue Shang, M. K. Ramanathan, Parminder Bhatia, Bing Xiang [MQ] (09 Mar 2023)

Gradient-Free Structured Pruning with Unlabeled Data
Azade Nova, H. Dai, Dale Schuurmans [SyDa] (07 Mar 2023)

DTW-SiameseNet: Dynamic Time Warped Siamese Network for Mispronunciation Detection and Correction
R. Anantha, Kriti Bhasin, Daniela Aguilar, Prabal Vashisht, Becci Williamson, Srinivas Chappidi (01 Mar 2023)
Full Stack Optimization of Transformer Inference: a Survey
Sehoon Kim, Coleman Hooper, Thanakul Wattanawong, Minwoo Kang, Ruohan Yan, ..., Qijing Huang, Kurt Keutzer, Michael W. Mahoney, Y. Shao, A. Gholami [MQ] (27 Feb 2023)

MUX-PLMs: Data Multiplexing for High-throughput Language Models
Vishvak Murahari, Ameet Deshpande, Carlos E. Jimenez, Izhak Shafran, Mingqiu Wang, Yuan Cao, Karthik Narasimhan [MoE] (24 Feb 2023)

HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers
Chen Liang, Haoming Jiang, Zheng Li, Xianfeng Tang, Bin Yin, Tuo Zhao [VLM] (19 Feb 2023)

LEALLA: Learning Lightweight Language-agnostic Sentence Embeddings with Knowledge Distillation
Zhuoyuan Mao, Tetsuji Nakagawa [FedML] (16 Feb 2023)
The Framework Tax: Disparities Between Inference Efficiency in NLP Research and Deployment
Jared Fernandez, Jacob Kahn, Clara Na, Yonatan Bisk, Emma Strubell [FedML] (13 Feb 2023)

Lightweight Transformers for Clinical Natural Language Processing
Omid Rohanian, Mohammadmahdi Nouriborji, Hannah Jauncey, Samaneh Kouchaki, Isaric Clinical Characterisation Group, Lei A. Clifton, L. Merson, David Clifton [MedIm, LM&MA] (09 Feb 2023)

Bioformer: an efficient transformer language model for biomedical text mining
Li Fang, Qingyu Chen, Chih-Hsuan Wei, Zhiyong Lu, Kai Wang [MedIm, AI4CE] (03 Feb 2023)