ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

CAVE: Controllable Authorship Verification Explanations
arXiv:2406.16672 · 24 June 2024
Sahana Ramnath, Kartik Pandey, Elizabeth Boschee, Xiang Ren

Papers citing "CAVE: Controllable Authorship Verification Explanations" (42 / 42 papers shown)
• Trends and Challenges in Authorship Analysis: A Review of ML, DL, and LLM Approaches (21 May 2025)
  Nudrat Habib, Tosin Adewumi, Marcus Liwicki, Elisa Barney
  88 / 0 / 0

• A Bayesian Approach to Harnessing the Power of LLMs in Authorship Attribution (29 Oct 2024)
  Zhengmian Hu, Tong Zheng, Heng Huang
  Tags: BDL · 72 / 2 / 0

• Can Large Language Models Identify Authorship? (13 Mar 2024)
  Baixiang Huang, Canyu Chen, Kai Shu
  Tags: DeLMO · 49 / 17 / 0

• Tailoring Self-Rationalizers with Multi-Reward Distillation (06 Nov 2023)
  Sahana Ramnath, Brihi Joshi, Skyler Hallinan, Ximing Lu, Liunian Harold Li, Aaron Chan, Jack Hessel, Yejin Choi, Xiang Ren
  Tags: LRM, ReLM · 47 / 16 / 0

• Who Wrote it and Why? Prompting Large-Language Models for Authorship Verification (12 Oct 2023)
  Chia-Yu Hung, Zhiqiang Hu, Yujia Hu, Roy Ka-wei Lee
  52 / 16 / 0

• Symbolic Chain-of-Thought Distillation: Small Models Can Also "Think" Step-by-Step (24 Jun 2023)
  Liunian Harold Li, Jack Hessel, Youngjae Yu, Xiang Ren, Kai-Wei Chang, Yejin Choi
  Tags: LRM, AI4CE, ReLM · 74 / 141 / 0
• Let's Verify Step by Step (31 May 2023)
  Hunter Lightman, V. Kosaraju, Yura Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, K. Cobbe
  Tags: ALM, OffRL, LRM · 191 / 1,164 / 0

• Trusting Your Evidence: Hallucinate Less with Context-aware Decoding (24 May 2023)
  Weijia Shi, Xiaochuang Han, M. Lewis, Yulia Tsvetkov, Luke Zettlemoyer, Scott Yih
  Tags: HILM · 56 / 207 / 0

• Decoupled Rationalization with Asymmetric Learning Rates: A Flexible Lipschitz Restraint (23 May 2023)
  Wei Liu, Jun Wang, Yining Qi, Rui Li, Yang Qiu, Yuankai Zhang, Jie Han, Yixiong Zou
  83 / 13 / 0

• Learning Interpretable Style Embeddings via Prompting LLMs (22 May 2023)
  Ajay Patel, D. Rao, Ansh Kothary, Kathleen McKeown, Chris Callison-Burch
  61 / 26 / 0

• From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models (15 May 2023)
  Shangbin Feng, Chan Young Park, Yuhan Liu, Yulia Tsvetkov
  63 / 245 / 0
• ZARA: Improving Few-Shot Self-Rationalization for Small Language Models (12 May 2023)
  Wei-Lin Chen, An-Zi Yen, Cheng-Kuang Wu, Hen-Hsen Huang, Hsin-Hsi Chen
  Tags: ReLM, LRM · 50 / 11 / 0

• Are Machine Rationales (Not) Useful to Humans? Measuring and Improving Human Utility of Free-Text Rationales (11 May 2023)
  Brihi Joshi, Ziyi Liu, Sahana Ramnath, Aaron Chan, Zhewei Tong, Shaoliang Nie, Qifan Wang, Yejin Choi, Xiang Ren
  Tags: HAI, LRM · 61 / 33 / 0

• Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes (03 May 2023)
  Lokesh Nagalapatti, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, Tomas Pfister
  Tags: ALM · 304 / 546 / 0

• SCOTT: Self-Consistent Chain-of-Thought Distillation (03 May 2023)
  Jamie Yap, Zhengyang Wang, Zheng Li, K. Lynch, Bing Yin, Xiang Ren
  Tags: LRM · 96 / 95 / 0

• Low-Resource Authorship Style Transfer: Can Non-Famous Authors Be Imitated? (18 Dec 2022)
  Ajay Patel, Nicholas Andrews, Chris Callison-Burch
  66 / 7 / 0
• Explanations from Large Language Models Make Small Reasoners Better (13 Oct 2022)
  Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Zoey Chen, Xinlu Zhang, ..., Jingu Qian, Baolin Peng, Yi Mao, Wenhu Chen, Xifeng Yan
  Tags: ReLM, LRM · 85 / 136 / 0

• Making Large Language Models Better Reasoners with Step-Aware Verifier (06 Jun 2022)
  Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, B. Chen, Jian-Guang Lou, Weizhu Chen
  Tags: ReLM, LRM · 82 / 223 / 0

• Quark: Controllable Text Generation with Reinforced Unlearning (26 May 2022)
  Ximing Lu, Sean Welleck, Jack Hessel, Liwei Jiang, Lianhui Qin, Peter West, Prithviraj Ammanabrolu, Yejin Choi
  Tags: MU · 140 / 216 / 0

• Large Language Models are Zero-Shot Reasoners (24 May 2022)
  Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
  Tags: ReLM, LRM · 513 / 4,077 / 0

• Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations (24 May 2022)
  Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, Yejin Choi
  Tags: ReLM, LRM · 268 / 196 / 0
• Same Author or Just Same Topic? Towards Content-Independent Style Representations (11 Apr 2022)
  Anna Wegmann, M. Schraagen, D. Nguyen
  54 / 50 / 0

• Mitigating Gender Bias in Distilled Language Models via Counterfactual Role Reversal (23 Mar 2022)
  Umang Gupta, Jwala Dhamala, Varun Kumar, Apurv Verma, Yada Pruksachatkun, Satyapriya Krishna, Rahul Gupta, Kai-Wei Chang, Greg Ver Steeg, Aram Galstyan
  47 / 53 / 0

• UNIREX: A Unified Learning Framework for Language Model Rationale Extraction (16 Dec 2021)
  Aaron Chan, Maziar Sanjabi, Lambert Mathias, L Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren, Hamed Firooz
  77 / 43 / 0

• Symbolic Knowledge Distillation: from General Language Models to Commonsense Models (14 Oct 2021)
  Peter West, Chandrasekhar Bhagavatula, Jack Hessel, Jena D. Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, Yejin Choi
  Tags: SyDa · 99 / 333 / 0

• LoRA: Low-Rank Adaptation of Large Language Models (17 Jun 2021)
  J. E. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen
  Tags: OffRL, AI4TS, AI4CE, ALM, AIMat · 439 / 10,328 / 0
• Evaluating Explanations: How much do explanations from the teacher aid students? (01 Dec 2020)
  Danish Pruthi, Rachit Bansal, Bhuwan Dhingra, Livio Baldini Soares, Michael Collins, Zachary Chase Lipton, Graham Neubig, William W. Cohen
  Tags: FAtt, XAI · 60 / 109 / 0

• Measuring Association Between Labels and Free-Text Rationales (24 Oct 2020)
  Sarah Wiegreffe, Ana Marasović, Noah A. Smith
  313 / 182 / 0

• Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language? (08 Oct 2020)
  Peter Hase, Shiyue Zhang, Harry Xie, Joey Tianyi Zhou
  53 / 101 / 0

• Language (Technology) is Power: A Critical Survey of "Bias" in NLP (28 May 2020)
  Su Lin Blodgett, Solon Barocas, Hal Daumé, Hanna M. Wallach
  155 / 1,242 / 0

• WT5?! Training Text-to-Text Models to Explain their Predictions (30 Apr 2020)
  Sharan Narang, Colin Raffel, Katherine Lee, Adam Roberts, Noah Fiedel, Karishma Malkan
  58 / 201 / 0

• Explainable Authorship Verification in Social Media via Attention-based Similarity Learning (17 Oct 2019)
  Benedikt T. Boenninghoff, Steffen Hessler, D. Kolossa, R. M. Nickel
  45 / 62 / 0
• Attention is not not Explanation (13 Aug 2019)
  Sarah Wiegreffe, Yuval Pinter
  Tags: XAI, AAML, FAtt · 120 / 909 / 0

• Parameter-Efficient Transfer Learning for NLP (02 Feb 2019)
  N. Houlsby, A. Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, Sylvain Gelly
  210 / 4,460 / 0

• e-SNLI: Natural Language Inference with Natural Language Explanations (04 Dec 2018)
  Oana-Maria Camburu, Tim Rocktaschel, Thomas Lukasiewicz, Phil Blunsom
  Tags: LRM · 411 / 638 / 0

• Proximal Policy Optimization Algorithms (20 Jul 2017)
  John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov
  Tags: OffRL · 499 / 19,065 / 0

• A Unified Approach to Interpreting Model Predictions (22 May 2017)
  Scott M. Lundberg, Su-In Lee
  Tags: FAtt · 1.1K / 21,906 / 0

• Axiomatic Attribution for Deep Networks (04 Mar 2017)
  Mukund Sundararajan, Ankur Taly, Qiqi Yan
  Tags: OOD, FAtt · 188 / 5,989 / 0
• Understanding Neural Networks through Representation Erasure (24 Dec 2016)
  Jiwei Li, Will Monroe, Dan Jurafsky
  Tags: AAML, MILM · 88 / 565 / 0

• Rationalizing Neural Predictions (13 Jun 2016)
  Tao Lei, Regina Barzilay, Tommi Jaakkola
  113 / 812 / 0

• Representation of linguistic form and function in recurrent neural networks (29 Feb 2016)
  Ákos Kádár, Grzegorz Chrupała, Afra Alishahi
  65 / 162 / 0

• "Why Should I Trust You?": Explaining the Predictions of Any Classifier (16 Feb 2016)
  Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
  Tags: FAtt, FaML · 1.2K / 16,976 / 0