jiant: A Software Toolkit for Research on General-Purpose Text Understanding Models

4 March 2020
Yada Pruksachatkun, Philip Yeres, Haokun Liu, Jason Phang, Phu Mon Htut, Alex Wang, Ian Tenney, Samuel R. Bowman
SSeg

Papers citing "jiant: A Software Toolkit for Research on General-Purpose Text Understanding Models"

27 papers shown

Paraphrasing in Affirmative Terms Improves Negation Understanding
MohammadHossein Rezaei, Eduardo Blanco
11 Jun 2024

Hate Cannot Drive out Hate: Forecasting Conversation Incivility following Replies to Hate Speech
Xinchen Yu, Eduardo Blanco, Lingzi Hong
08 Dec 2023

Language acquisition: do children and language models follow similar learning stages?
Linnea Evanson, Yair Lakretz, J. King
06 Jun 2023

A Stability Analysis of Fine-Tuning a Pre-Trained Model
Z. Fu, Anthony Man-Cho So, Nigel Collier
24 Jan 2023

Investigating Reasons for Disagreement in Natural Language Inference
Nan-Jiang Jiang, M. Marneffe
07 Sep 2022

minicons: Enabling Flexible Behavioral and Representational Analyses of Transformer Language Models
Kanishka Misra
24 Mar 2022

Slovene SuperGLUE Benchmark: Translation and Evaluation
Aleš Žagar, Marko Robnik-Šikonja
10 Feb 2022

Adversarially Constructed Evaluation Sets Are More Challenging, but May Not Be Fair
Jason Phang, Angelica Chen, William Huang, Samuel R. Bowman
AAML
16 Nov 2021

LMdiff: A Visual Diff Tool to Compare Language Models
Hendrik Strobelt, Benjamin Hoover, Arvind Satyanarayan, Sebastian Gehrmann
VLM
02 Nov 2021

IndoNLI: A Natural Language Inference Dataset for Indonesian
Rahmad Mahendra, Alham Fikri Aji, Samuel Louvan, Fahrurrozi Rahman, Clara Vania
27 Oct 2021

Fine-Tuned Transformers Show Clusters of Similar Representations Across Layers
Jason Phang, Haokun Liu, Samuel R. Bowman
17 Sep 2021

Learning Bill Similarity with Annotated and Augmented Corpora of Bills
Jiseon Kim, Elden Griggs, In Song Kim, Alice Oh
AILaw
14 Sep 2021

Curriculum learning for language modeling
Daniel Fernando Campos
04 Aug 2021

Evaluation of contextual embeddings on less-resourced languages
Matej Ulčar, Aleš Žagar, C. S. Armendariz, Andraž Repar, Senja Pollak, Matthew Purver, Marko Robnik-Šikonja
22 Jul 2021

He Thinks He Knows Better than the Doctors: BERT for Event Factuality Fails on Pragmatics
Nan-Jiang Jiang, M. Marneffe
02 Jul 2021

The Case for Translation-Invariant Self-Attention in Transformer-Based Language Models
Ulme Wennberg, G. Henter
MILM
03 Jun 2021

MOROCCO: Model Resource Comparison Framework
Valentin Malykh, Alexander Kukushkin, Ekaterina Artemova, Vladislav Mikhailov, Maria Tikhonova, Tatiana Shavrina
29 Apr 2021

The Rediscovery Hypothesis: Language Models Need to Meet Linguistics
Vassilina Nikoulina, Maxat Tezekbayev, Nuradil Kozhakhmet, Madina Babazhanova, Matthias Gallé, Z. Assylbekov
02 Mar 2021

AutoNLU: An On-demand Cloud-based Natural Language Understanding System for Enterprises
Nham Le, T. Lai, Trung Bui, Doo Soon Kim
26 Nov 2020

Counterfactually-Augmented SNLI Training Data Does Not Yield Better Generalization Than Unaugmented Data
William Huang, Haokun Liu, Samuel R. Bowman
09 Oct 2020

On Losses for Modern Language Models
Stephane Aroca-Ouellette, Frank Rudzicz
04 Oct 2020

Can neural networks acquire a structural bias from raw linguistic data?
Alex Warstadt, Samuel R. Bowman
AI4CE
14 Jul 2020

NAS-Bench-NLP: Neural Architecture Search Benchmark for Natural Language Processing
Nikita Klyuchnikov, I. Trofimov, Ekaterina Artemova, Mikhail Salnikov, M. Fedorov, Evgeny Burnaev
VLM
12 Jun 2020

Revisiting Few-sample BERT Fine-tuning
Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q. Weinberger, Yoav Artzi
10 Jun 2020

Contextual Embeddings: When Are They Worth It?
Simran Arora, Avner May, Jian Zhang, Christopher Ré
18 May 2020

On the Robustness of Language Encoders against Grammatical Errors
Fan Yin, Quanyu Long, Tao Meng, Kai-Wei Chang
12 May 2020

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM
20 Apr 2018