Diagnostics for Deep Neural Networks with Automated Copy/Paste Attacks
Stephen Casper, K. Hariharan, Dylan Hadfield-Menell
18 November 2022 · arXiv:2211.10024 · AAML

Papers citing "Diagnostics for Deep Neural Networks with Automated Copy/Paste Attacks"

10 / 10 papers shown

A Multimodal Automated Interpretability Agent
Tamar Rott Shaham, Sarah Schwettmann, Franklin Wang, Achyuta Rajaram, Evan Hernandez, Jacob Andreas, Antonio Torralba
22 Apr 2024

Black-Box Access is Insufficient for Rigorous AI Audits
Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, ..., Michael Gerovitch, David Bau, Max Tegmark, David M. Krueger, Dylan Hadfield-Menell
AAML
25 Jan 2024

Diagnostic Tool for Out-of-Sample Model Evaluation
Ludvig Hult, Dave Zachariah, Petre Stoica
22 Jun 2022

BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Junnan Li, Dongxu Li, Caiming Xiong, S. Hoi
MLLM, BDL, VLM, CLIP
28 Jan 2022

Natural Language Descriptions of Deep Visual Features
Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, Jacob Andreas
MILM
26 Jan 2022

Robust Feature-Level Adversaries are Interpretability Tools
Stephen Casper, Max Nadeau, Dylan Hadfield-Menell, Gabriel Kreiman
AAML
07 Oct 2021

A Survey on Bias in Visual Datasets
Simone Fabbrizzi, Symeon Papadopoulos, Eirini Ntoutsi, Y. Kompatsiaris
16 Jul 2021

Constructing Unrestricted Adversarial Examples with Generative Models
Yang Song, Rui Shu, Nate Kushman, Stefano Ermon
GAN, AAML
21 May 2018

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI, FaML
28 Feb 2017

ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
VLM, ObjD
01 Sep 2014