arXiv: 2108.01335
Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability
3 August 2021
Roman Levin, Manli Shu, Eitan Borgnia, Furong Huang, Micah Goldblum, Tom Goldstein
Tags: FAtt, AAML
Papers citing "Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability" (7 papers)
| Title | Authors | Tags | Metrics | Date |
|---|---|---|---|---|
| Talking Nonsense: Probing Large Language Models' Understanding of Adversarial Gibberish Inputs | Valeriia Cherepanova, James Zou | AAML | 33 / 4 / 0 | 26 Apr 2024 |
| KS-Lottery: Finding Certified Lottery Tickets for Multilingual Language Models | Fei Yuan, Chang Ma, Shuai Yuan, Qiushi Sun, Lei Li | — | 39 / 3 / 0 | 05 Feb 2024 |
| A Performance-Driven Benchmark for Feature Selection in Tabular Deep Learning | Valeriia Cherepanova, Roman Levin, Gowthami Somepalli, Jonas Geiping, C. Bayan Bruss, Andrew Gordon Wilson, Tom Goldstein, Micah Goldblum | — | 28 / 18 / 0 | 10 Nov 2023 |
| Understanding Parameter Saliency via Extreme Value Theory | Shuo Wang, Issei Sato | AAML, FAtt | 21 / 0 / 0 | 27 Oct 2023 |
| Identification of Attack-Specific Signatures in Adversarial Examples | Hossein Souri, Pirazh Khorramshahi, Chun Pong Lau, Micah Goldblum, Rama Chellappa | AAML, MLAU | 43 / 4 / 0 | 13 Oct 2021 |
| Efficient Sharpness-aware Minimization for Improved Training of Neural Networks | Jiawei Du, Hanshu Yan, Jiashi Feng, Qiufeng Wang, Liangli Zhen, Rick Siow Mong Goh, Vincent Y. F. Tan | AAML | 113 / 132 / 0 | 07 Oct 2021 |
| Language Models as Knowledge Bases? | Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel | KELM, AI4MH | 417 / 2,588 / 0 | 03 Sep 2019 |