
Stabilized In-Context Learning with Pre-trained Language Models for Few Shot Dialogue State Tracking

Derek Chen, Kun Qian, Zhou Yu
12 February 2023
arXiv:2302.05932

Papers citing "Stabilized In-Context Learning with Pre-trained Language Models for Few Shot Dialogue State Tracking"

10 / 10 papers shown
Are Prompt-based Models Clueless?
Pride Kavumba, Ryo Takahashi, Yusuke Oda
VLM · 19 May 2022

Reason first, then respond: Modular Generation for Knowledge-infused Dialogue
Leonard Adolphs, Kurt Shuster, Jack Urbanek, Arthur Szlam, Jason Weston
KELM, LRM · 09 Nov 2021

Meta-learning via Language Model In-context Tuning
Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis, He He
15 Oct 2021

Internet-Augmented Dialogue Generation
M. Komeili, Kurt Shuster, Jason Weston
RALM · 15 Jul 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM · 18 Apr 2021

What Makes Good In-Context Examples for GPT-3?
Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, Weizhu Chen
AAML, RALM · 17 Jan 2021

Few Shot Dialogue State Tracking using Meta-learning
Saket Dingliwal, Bill Gao, Sanchit Agarwal, Chien-Wei Lin, Tagyoung Chung, Dilek Z. Hakkani-Tür
OffRL · 17 Jan 2021

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
31 Dec 2020

Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Timo Schick, Hinrich Schütze
21 Jan 2020

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
OOD · 09 Mar 2017