Learning to Augment for Data-Scarce Domain BERT Knowledge Distillation

arXiv:2101.08106 · 20 January 2021
Lingyun Feng
Minghui Qiu
Yaliang Li
Haitao Zheng
Ying Shen

Papers citing "Learning to Augment for Data-Scarce Domain BERT Knowledge Distillation"

2 papers shown
Stacked Hybrid-Attention and Group Collaborative Learning for Unbiased Scene Graph Generation
Xingning Dong
Tian Gan
Xuemeng Song
Jianlong Wu
Yuan Cheng
Liqiang Nie
18 Mar 2022
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang
Amanpreet Singh
Julian Michael
Felix Hill
Omer Levy
Samuel R. Bowman
20 Apr 2018