MIDAS: Multi-level Intent, Domain, And Slot Knowledge Distillation for Multi-turn NLU
arXiv:2408.08144 · 15 August 2024
Yan Li, So-Eon Kim, Seong-Bae Park, S. Han

Papers citing "MIDAS: Multi-level Intent, Domain, And Slot Knowledge Distillation for Multi-turn NLU"

38 papers shown
ChuLo: Chunk-Level Key Information Representation for Long Document Processing
  Yan Li, Soyeon Caren Han, Yue Dai, Feiqi Cao
  14 Oct 2024
A Two-Stage Prediction-Aware Contrastive Learning Framework for Multi-Intent NLU
  Guanhua Chen, Yutong Yao, Derek F. Wong, Lidia S. Chao
  05 May 2024
Gemma: Open Models Based on Gemini Research and Technology
  Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, ..., Armand Joulin, Noah Fiedel, Evan Senter, Alek Andreev, Kathleen Kenealy
  13 Mar 2024 · VLM, LLM, AG
MISCA: A Joint Model for Multiple Intent Detection and Slot Filling with Intent-Slot Co-Attention
  Thinh-Le-Gia Pham, Tran Chi, Dat Quoc Nguyen
  10 Dec 2023 · VLM
Towards a Unified Conversational Recommendation System: Multi-task Learning via Contextualized Knowledge Distillation
  Yeongseo Jung, Eunseo Jung, Lei Chen
  27 Oct 2023
Qwen Technical Report
  Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, ..., Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, Tianhang Zhu
  28 Sep 2023 · OSLM
Joint Multiple Intent Detection and Slot Filling with Supervised Contrastive Learning and Self-Distillation
  Nguyen Anh Tu, Hoang Thi Thu Uyen, Tu Minh Phuong, Ngo Xuan Bach
  28 Aug 2023 · VLM
Llama 2: Open Foundation and Fine-Tuned Chat Models
  Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, ..., Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom
  18 Jul 2023 · AI4MH, ALM
Tri-level Joint Natural Language Understanding for Multi-turn Conversational Datasets
  H. Weld, Sijia Hu, Siqu Long, Josiah Poon, S. Han
  28 May 2023
Ensemble knowledge distillation of self-supervised speech models
  Kuan-Po Huang, Tzu-hsun Feng, Yu-Kuan Fu, Tsung-Yuan Hsu, Po-Chieh Yen, Wei-Cheng Tseng, Kai-Wei Chang, Hung-yi Lee
  24 Feb 2023
Group is better than individual: Exploiting Label Topologies and Label Relations for Joint Multiple Intent Detection and Slot Filling
  Bowen Xing, Ivor W. Tsang
  19 Oct 2022 · BDL
Explainable Slot Type Attentions to Improve Joint Intent Detection and Slot Filling
  Kalpa Gunaratna, Vijay Srinivasan, Akhila Yerukola, Hongxia Jin
  19 Oct 2022
A Fast Attention Network for Joint Intent Detection and Slot Filling on Edge Devices
  Liang Huang, Senjie Liang, Feiyang Ye, Nan Gao
  16 May 2022
mcBERT: Momentum Contrastive Learning with BERT for Zero-Shot Slot Filling
  Seong-Hwan Heo, WonKee Lee, Jong-Hyeok Lee
  24 Mar 2022
Bi-directional Joint Neural Networks for Intent Classification and Slot Filling
  S. Han, Siqu Long, Huichun Li, H. Weld, Josiah Poon
  26 Feb 2022
Cross-modal Knowledge Distillation for Vision-to-Sensor Action Recognition
  Jianyuan Ni, Raunak Sarbajna, Yang Liu, A. Ngu, Yan Yan
  08 Oct 2021 · HAI
Towards Joint Intent Detection and Slot Filling via Higher-order Attention
  Dongsheng Chen, Zhiqi Huang, Xian Wu, Shen Ge, Yuexian Zou
  18 Sep 2021
A Context-Aware Hierarchical BERT Fusion Network for Multi-turn Dialog Act Detection
  Ting-Wei Wu, Ruolin Su, B. Juang
  03 Sep 2021
Joint Multiple Intent Detection and Slot Filling via Self-distillation
  Lisong Chen, Peilin Zhou, Yuexian Zou
  18 Aug 2021 · VLM
Knowledge Distillation from BERT Transformer to Speech Transformer for Intent Classification
  Yiding Jiang, Bidisha Sharma, Maulik C. Madhavi, Haizhou Li
  05 Aug 2021
GL-GIN: Fast and Accurate Non-Autoregressive Model for Joint Multiple Intent Detection and Slot Filling
  Libo Qin, Fuxuan Wei, Tianbao Xie, Xiao Xu, Wanxiang Che, Ting Liu
  03 Jun 2021
One Teacher is Enough? Pre-trained Language Model Distillation from Multiple Teachers
  Chuhan Wu, Fangzhao Wu, Yongfeng Huang
  02 Jun 2021
X-METRA-ADA: Cross-lingual Meta-Transfer Learning Adaptation to Natural Language Understanding and Question Answering
  Meryem M'hamdi, Doo Soon Kim, Franck Dernoncourt, Trung Bui, Xiang Ren, Jonathan May
  20 Apr 2021
Intent Detection and Slot Filling for Vietnamese
  M. Dao, Thinh Hung Truong, Dat Quoc Nguyen
  05 Apr 2021 · VLM
Encoding Syntactic Knowledge in Transformer Encoder for Intent Detection and Slot Filling
  Jixuan Wang, Kai Wei, Martin H. Radfar, Weiwei Zhang, Clement Chung
  21 Dec 2020
Exploring Transfer Learning For End-to-End Spoken Language Understanding
  Subendhu Rongali, Bei Liu, Liwei Cai, Konstantine Arkoudas, Chengwei Su, Wael Hamza
  15 Dec 2020
Reinforced Multi-Teacher Selection for Knowledge Distillation
  Fei Yuan, Linjun Shou, J. Pei, Wutao Lin, Ming Gong, Yan Fu, Daxin Jiang
  11 Dec 2020
Meta-KD: A Meta Knowledge Distillation Framework for Language Model Compression across Domains
  Haojie Pan, Chengyu Wang, Minghui Qiu, Yichang Zhang, Yaliang Li, Jun Huang
  02 Dec 2020
A Co-Interactive Transformer for Joint Slot Filling and Intent Detection
  Libo Qin, Tailu Liu, Wanxiang Che, Bingbing Kang, Sendong Zhao, Ting Liu
  08 Oct 2020 · ViT
Densely Guided Knowledge Distillation using Multiple Teacher Assistants
  Wonchul Son, Jaemin Na, Junyong Choi, Wonjun Hwang
  18 Sep 2020
MultiWOZ 2.2: A Dialogue Dataset with Additional Annotation Corrections and State Tracking Baselines
  Xiaoxue Zang, Abhinav Rastogi, Srinivas Sunkara, Raghav Gupta, Jianguo Zhang, Jindong Chen
  10 Jul 2020
Language Models are Few-Shot Learners
  Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
  28 May 2020 · BDL
BERT for Joint Intent Classification and Slot Filling
  Qian Chen, Zhu Zhuo, Wen Wang
  28 Feb 2019 · VLM
Improved Knowledge Distillation via Teacher Assistant
  Seyed Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, Nir Levine, Akihiro Matsukawa, H. Ghasemzadeh
  09 Feb 2019
Dialogue Learning with Human Teaching and Feedback in End-to-End Trainable Task-Oriented Dialogue Systems
  Bing-Quan Liu, Gokhan Tur, Dilek Z. Hakkani-Tür, Pararth Shah, Larry Heck
  18 Apr 2018 · OffRL
Sequential Dialogue Context Modeling for Spoken Language Understanding
  Ankur Bapna, Gokhan Tur, Dilek Z. Hakkani-Tür, Larry Heck
  08 May 2017
Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling
  Bing-Quan Liu, Ian Lane
  06 Sep 2016
Distilling the Knowledge in a Neural Network
  Geoffrey E. Hinton, Oriol Vinyals, J. Dean
  09 Mar 2015 · FedML