Generation, Distillation and Evaluation of Motivational Interviewing-Style Reflections with a Foundational Language Model

1 February 2024
Andrew Brown, Jiading Zhu, Mohamed Abdelwahab, Alec Dong, Cindy Wang, Jonathan Rose
LLMAG
arXiv: 2402.01051

Papers citing "Generation, Distillation and Evaluation of Motivational Interviewing-Style Reflections with a Foundational Language Model"

4 of 4 papers shown

Knowledge Distillation Using Frontier Open-source LLMs: Generalizability and the Role of Synthetic Data
Anup Shirgaonkar, Nikhil Pandey, Nazmiye Ceren Abay, Tolga Aktas, Vijay Aski
ALM, SyDa · 24 Oct 2024

TempoFormer: A Transformer for Temporally-aware Representations in Change Detection
Talia Tseriotou, Adam Tsakalidis, Maria Liakata
28 Aug 2024

Can Large Language Models Be an Alternative to Human Evaluations?
Cheng-Han Chiang, Hung-yi Lee
ALM, LM&MA · 03 May 2023

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM · 20 Apr 2018