ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:1805.12471 · Cited By
Neural Network Acceptability Judgments
31 May 2018
Alex Warstadt, Amanpreet Singh, Samuel R. Bowman

Papers citing "Neural Network Acceptability Judgments"

50 / 880 papers shown
Data Bias According to Bipol: Men are Naturally Right and It is the Role of Women to Follow Their Lead
Irene Pagliai, G. V. Boven, Tosin P. Adewumi, Lama Alkhaled, Namrata Gurung, Isabella Sodergren, Elisa Barney
39 · 1 · 0 · 07 Apr 2024

A Morphology-Based Investigation of Positional Encodings
Poulami Ghosh, Shikhar Vashishth, Raj Dabre, Pushpak Bhattacharyya
34 · 1 · 0 · 06 Apr 2024

Polarity Calibration for Opinion Summarization
Yuanyuan Lei, Kaiqiang Song, Sangwoo Cho, Xiaoyang Wang, Ruihong Huang, Dong Yu
38 · 0 · 0 · 02 Apr 2024

LayerNorm: A key component in parameter-efficient fine-tuning
Taha ValizadehAslani, Hualou Liang
51 · 1 · 0 · 29 Mar 2024

Adverb Is the Key: Simple Text Data Augmentation with Adverb Deletion
Juhwan Choi, Youngbin Kim
21 · 0 · 0 · 29 Mar 2024

A Two-Phase Recall-and-Select Framework for Fast Model Selection
Jianwei Cui, Wenhang Shi, Honglin Tao, Wei Lu, Xiaoyong Du
41 · 0 · 0 · 28 Mar 2024
Constructions Are So Difficult That Even Large Language Models Get Them Right for the Wrong Reasons
Shijia Zhou, Leonie Weissweiler, Taiqi He, Hinrich Schütze, David R. Mortensen, Lori S. Levin
34 · 6 · 0 · 26 Mar 2024

Incorporating Exponential Smoothing into MLP: A Simple but Effective Sequence Model
Jiqun Chu, Zuoquan Lin
AI4TS · 35 · 2 · 0 · 26 Mar 2024

Monotonic Paraphrasing Improves Generalization of Language Model Prompting
Qin Liu, Fei Wang, Nan Xu, Tianyi Yan, Tao Meng, Muhao Chen
LRM · 43 · 7 · 0 · 24 Mar 2024

Adapprox: Adaptive Approximation in Adam Optimization via Randomized Low-Rank Matrices
Pengxiang Zhao, Ping Li, Yingjie Gu, Yi Zheng, Stephan Ludger Kölker, Zhefeng Wang, Xiaoming Yuan
21 · 1 · 0 · 22 Mar 2024

Enhancing Effectiveness and Robustness in a Low-Resource Regime via Decision-Boundary-aware Data Augmentation
Kyohoon Jin, Junho Lee, Juhwan Choi, Sangmin Song, Youngbin Kim
40 · 0 · 0 · 22 Mar 2024
A Unified Framework for Model Editing
Akshat Gupta, Dev Sajnani, Gopala Anumanchipalli
KELM · 70 · 26 · 0 · 21 Mar 2024

Automatic Annotation of Grammaticality in Child-Caregiver Conversations
Mitja Nikolaus, Abhishek Agrawal, Petros Kaklamanis, Alex Warstadt, Abdellah Fourtassi
35 · 2 · 0 · 21 Mar 2024

Do Not Worry if You Do Not Have Data: Building Pretrained Language Models Using Translationese
Meet Doshi, Raj Dabre, Pushpak Bhattacharyya
SyDa · 36 · 2 · 0 · 20 Mar 2024

Knowing Your Nonlinearities: Shapley Interactions Reveal the Underlying Structure of Data
Divyansh Singhvi, Andrej Erkelens, Raghav Jain, Diganta Misra, Naomi Saphra
23 · 0 · 0 · 19 Mar 2024

Generalizable and Stable Finetuning of Pretrained Language Models on Low-Resource Texts
Sai Ashish Somayajula, Youwei Liang, Abhishek Singh, Li Zhang, Pengtao Xie
32 · 2 · 0 · 19 Mar 2024

BiLoRA: A Bi-level Optimization Framework for Overfitting-Resilient Low-Rank Adaptation of Large Pre-trained Models
Rushi Qiang, Ruiyi Zhang, Pengtao Xie
AI4CE · 30 · 8 · 0 · 19 Mar 2024
A Closer Look at Claim Decomposition
Miriam Wanner, Seth Ebner, Zhengping Jiang, Mark Dredze, Benjamin Van Durme
49 · 18 · 0 · 18 Mar 2024

Matrix-Transformation Based Low-Rank Adaptation (MTLoRA): A Brain-Inspired Method for Parameter-Efficient Fine-Tuning
Yao Liang, Yuwei Wang, Yang Li, Yi Zeng
44 · 0 · 0 · 12 Mar 2024

Enhancing Transfer Learning with Flexible Nonparametric Posterior Sampling
Hyungi Lee, G. Nam, Edwin Fong, Juho Lee
BDL · 32 · 5 · 0 · 12 Mar 2024

SVD-LLM: Truncation-aware Singular Value Decomposition for Large Language Model Compression
Xin Wang, Yu Zheng, Zhongwei Wan, Mi Zhang
MQ · 57 · 44 · 0 · 12 Mar 2024

Rebuilding ROME: Resolving Model Collapse during Sequential Model Editing
Akshat Gupta, Sidharth Baskaran, Gopala Anumanchipalli
KELM · 65 · 21 · 0 · 11 Mar 2024

Hybrid Human-LLM Corpus Construction and LLM Evaluation for Rare Linguistic Phenomena
Leonie Weissweiler, Abdullatif Köksal, Hinrich Schütze
37 · 4 · 0 · 11 Mar 2024
Automatic and Universal Prompt Injection Attacks against Large Language Models
Xiaogeng Liu, Zhiyuan Yu, Yizhe Zhang, Ning Zhang, Chaowei Xiao
SILM · AAML · 46 · 33 · 0 · 07 Mar 2024

Improving Group Connectivity for Generalization of Federated Deep Learning
Zexi Li, Jie Lin, Zhiqi Li, Didi Zhu, Chao Wu
AI4CE · FedML · 43 · 0 · 0 · 29 Feb 2024

Language Models Represent Beliefs of Self and Others
Wentao Zhu, Zhining Zhang, Yizhou Wang
MILM · LRM · 50 · 8 · 0 · 28 Feb 2024

Variational Learning is Effective for Large Deep Networks
Yuesong Shen, Nico Daheim, Bai Cong, Peter Nickl, Gian Maria Marconi, ..., Rio Yokota, Iryna Gurevych, Daniel Cremers, Mohammad Emtiyaz Khan, Thomas Möllenhoff
43 · 22 · 0 · 27 Feb 2024
MELoRA: Mini-Ensemble Low-Rank Adapters for Parameter-Efficient Fine-Tuning
Pengjie Ren, Chengshun Shi, Shiguang Wu, Mengqi Zhang, Zhaochun Ren, Maarten de Rijke, Zhumin Chen, Jiahuan Pei
MoE · 41 · 14 · 0 · 27 Feb 2024

Sinkhorn Distance Minimization for Knowledge Distillation
Xiao Cui, Yulei Qin, Yuting Gao, Enwei Zhang, Zihan Xu, Tong Wu, Ke Li, Xing Sun, Wen-gang Zhou, Houqiang Li
62 · 5 · 0 · 27 Feb 2024

Layer-wise Regularized Dropout for Neural Language Models
Shiwen Ni, Min Yang, Ruifeng Xu, Chengming Li, Xiping Hu
41 · 0 · 0 · 26 Feb 2024

LoRA Meets Dropout under a Unified Framework
Sheng Wang, Liheng Chen, Jiyue Jiang, Boyang Xue, Lingpeng Kong, Chuan Wu
23 · 14 · 0 · 25 Feb 2024

Towards Efficient Active Learning in NLP via Pretrained Representations
Artem Vysogorets, Achintya Gopal
36 · 0 · 0 · 23 Feb 2024
Advancing Parameter Efficiency in Fine-tuning via Representation Editing
Muling Wu, Wenhao Liu, Xiaohua Wang, Tianlong Li, Changze Lv, Zixuan Ling, Jianhao Zhu, Cenyuan Zhang, Xiaoqing Zheng, Xuanjing Huang
25 · 19 · 0 · 23 Feb 2024

PEMT: Multi-Task Correlation Guided Mixture-of-Experts Enables Parameter-Efficient Transfer Learning
Zhisheng Lin, Han Fu, Chenghao Liu, Zhuo Li, Jianling Sun
MoE · MoMe · 33 · 5 · 0 · 23 Feb 2024

Towards Unified Task Embeddings Across Multiple Models: Bridging the Gap for Prompt-Based Large Language Models and Beyond
Xinyu Wang, Hainiu Xu, Lin Gui, Yulan He
MoMe · AIFin · 36 · 1 · 0 · 22 Feb 2024

Beyond Simple Averaging: Improving NLP Ensemble Performance with Topological-Data-Analysis-Based Weighting
P. Proskura, Alexey Zaytsev
33 · 0 · 0 · 22 Feb 2024

Improving Language Understanding from Screenshots
Tianyu Gao, Zirui Wang, Adithya Bhaskar, Danqi Chen
VLM · 43 · 10 · 0 · 21 Feb 2024
On Sensitivity of Learning with Limited Labelled Data to the Effects of Randomness: Impact of Interactions and Systematic Choices
Branislav Pecher, Ivan Srba, Maria Bielikova
69 · 3 · 0 · 20 Feb 2024

HyperMoE: Towards Better Mixture of Experts via Transferring Among Experts
Hao Zhao, Zihan Qiu, Huijia Wu, Zili Wang, Zhaofeng He, Jie Fu
MoE · 32 · 9 · 0 · 20 Feb 2024

Comparing Specialised Small and General Large Language Models on Text Classification: 100 Labelled Samples to Achieve Break-Even Performance
Branislav Pecher, Ivan Srba, Maria Bielikova
ALM · 39 · 7 · 0 · 20 Feb 2024

In-Context Learning Demonstration Selection via Influence Analysis
Vinay M.S., Minh-Hao Van, Xintao Wu
37 · 4 · 0 · 19 Feb 2024

Induced Model Matching: Restricted Models Help Train Full-Featured Models
Usama Muneeb, Mesrob I. Ohannessian
16 · 0 · 0 · 19 Feb 2024

LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models
Yifan Yang, Jiajun Zhou, Ngai Wong, Zheng Zhang
21 · 7 · 0 · 18 Feb 2024
Contrastive Instruction Tuning
Tianyi Yan, Fei Wang, James Y. Huang, Wenxuan Zhou, Fan Yin, Aram Galstyan, Wenpeng Yin, Muhao Chen
ALM · 23 · 5 · 0 · 17 Feb 2024

Uncertainty Quantification for In-Context Learning of Large Language Models
Chen Ling, Xujiang Zhao, Xuchao Zhang, Wei Cheng, Yanchi Liu, ..., Katsushi Matsuda, Jie Ji, Guangji Bai, Liang Zhao, Haifeng Chen
29 · 14 · 0 · 15 Feb 2024

Reusing Softmax Hardware Unit for GELU Computation in Transformers
C. Peltekis, K. Alexandridis, G. Dimitrakopoulos
32 · 0 · 0 · 15 Feb 2024

JAMDEC: Unsupervised Authorship Obfuscation using Constrained Decoding over Small Language Models
Jillian R. Fisher, Ximing Lu, Jaehun Jung, Liwei Jiang, Zaid Harchaoui, Yejin Choi
39 · 6 · 0 · 13 Feb 2024
Bayesian Multi-Task Transfer Learning for Soft Prompt Tuning
Haeju Lee, Minchan Jeong, SeYoung Yun, Kee-Eung Kim
AAML · VPVLM · 53 · 2 · 0 · 13 Feb 2024

Should I try multiple optimizers when fine-tuning pre-trained Transformers for NLP tasks? Should I tune their hyperparameters?
Nefeli Gkouti, Prodromos Malakasiotis, Stavros Toumpis, Ion Androutsopoulos
37 · 5 · 0 · 10 Feb 2024

A Unified Causal View of Instruction Tuning
Luyao Chen, Wei Huang, Ruqing Zhang, Wei Chen, J. Guo, Xueqi Cheng
28 · 1 · 0 · 09 Feb 2024