ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Joint Energy-based Model Training for Better Calibrated Natural Language Understanding Models

arXiv: 2101.06829 · 18 January 2021
Tianxing He, Bryan McCann, Caiming Xiong, Ehsan Hosseini-Asl

Papers citing "Joint Energy-based Model Training for Better Calibrated Natural Language Understanding Models"

8 / 8 papers shown
LSEBMCL: A Latent Space Energy-Based Model for Continual Learning
Xiaodi Li, Dingcheng Li, Rujun Gao, Mahmoud Zamani, Latifur Khan
09 Jan 2025
Preserving Pre-trained Features Helps Calibrate Fine-tuned Language Models
Guande He, Jianfei Chen, Jun Zhu
30 May 2023
Exploring Energy-based Language Models with Different Architectures and Training Methods for Speech Recognition
Hong Liu, Z. Lv, Zhijian Ou, Wenbo Zhao, Qing Xiao
22 May 2023
Calibration Meets Explanation: A Simple and Effective Approach for Model Confidence Estimates
Dongfang Li, Baotian Hu, Qingcai Chen
06 Nov 2022
Consistent Training via Energy-Based GFlowNets for Modeling Discrete Joint Distributions
C. Ekbote, Moksh Jain, Payel Das, Yoshua Bengio
01 Nov 2022
EBMs vs. CL: Exploring Self-Supervised Visual Pretraining for Visual Question Answering
Violetta Shevchenko, Ehsan Abbasnejad, A. Dick, Anton Van Den Hengel, Damien Teney
29 Jun 2022
On Reinforcement Learning and Distribution Matching for Fine-Tuning Language Models with no Catastrophic Forgetting
Tomasz Korbak, Hady ElSahar, Germán Kruszewski, Marc Dymetman
01 Jun 2022
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018