Distilling Neural Networks for Greener and Faster Dependency Parsing

1 June 2020
Mark Anderson
Carlos Gómez-Rodríguez

Papers citing "Distilling Neural Networks for Greener and Faster Dependency Parsing"

5 citing papers
The Fragility of Multi-Treebank Parsing Evaluation
I. Alonso-Alonso, David Vilares, Carlos Gómez-Rodríguez
14 Sep 2022

Not All Linearizations Are Equally Data-Hungry in Sequence Labeling Parsing
Alberto Muñoz-Ortiz, Michalina Strzyz, David Vilares
17 Aug 2021

A Modest Pareto Optimisation Analysis of Dependency Parsers in 2021
Mark Anderson, Carlos Gómez-Rodríguez
08 Jun 2021

Structural Knowledge Distillation: Tractably Distilling Information for Structured Predictor
Xinyu Wang, Yong-jia Jiang, Zhaohui Yan, Zixia Jia, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Kewei Tu
10 Oct 2020

It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners
Timo Schick, Hinrich Schütze
15 Sep 2020