ResearchTrend.AI

Speech Prefix-Tuning with RNNT Loss for Improving LLM Predictions
M. Baskar, Andrew Rosenberg, Bhuvana Ramabhadran, Neeraj Gaur, Zhong Meng
20 June 2024 · arXiv:2406.14701

Papers citing "Speech Prefix-Tuning with RNNT Loss for Improving LLM Predictions"

6 papers shown
Beyond Silent Letters: Amplifying LLMs in Emotion Recognition with Vocal Nuances
Mieko Ochi, Ziwei Gong, D. Komura, Pengyuan Shi, Kaan Donbekci, Julia Hirschberg
31 Jul 2024
An Embarrassingly Simple Approach for LLM with Strong ASR Capacity
Ziyang Ma, Guanrou Yang, Yifan Yang, Zhifu Gao, Jiaming Wang, ..., Fan Yu, Qian Chen, Siqi Zheng, Shiliang Zhang, Xie Chen
13 Feb 2024
SLM: Bridge the thin gap between speech and text foundation models
Mingqiu Wang, Wei Han, Izhak Shafran, Zelin Wu, Chung-Cheng Chiu, ..., Zhong Meng, Golan Pundak, Nikhil Siddhartha, J. Schalkwyk, Yonghui Wu
30 Sep 2023
End-to-End Speech Recognition Contextualization with Large Language Models
Egor Lakomkin, Chunyang Wu, Yassir Fathullah, Ozlem Kalinli, M. Seltzer, Christian Fuegen
19 Sep 2023
Google USM: Scaling Automatic Speech Recognition Beyond 100 Languages
Yu Zhang, Wei Han, James Qin, Yongqiang Wang, Ankur Bapna, ..., Pedro J. Moreno, Chung-Cheng Chiu, J. Schalkwyk, Françoise Beaufays, Yonghui Wu
02 Mar 2023
P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
14 Oct 2021