ResearchTrend.AI
arXiv:1806.05502
Scrutinizing and De-Biasing Intuitive Physics with Neural Stethoscopes


14 June 2018
F. Fuchs, Oliver Groth, Adam R. Kosiorek, Alex Bewley, Markus Wulfmeier, Andrea Vedaldi, Ingmar Posner

Papers citing "Scrutinizing and De-Biasing Intuitive Physics with Neural Stethoscopes"

14 / 14 papers shown

1. ShapeStacks: Learning Vision-Based Physical Intuition for Generalised Object Stacking
   Oliver Groth, F. Fuchs, Ingmar Posner, Andrea Vedaldi · 21 Apr 2018
2. Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks
   Ruth C. Fong, Andrea Vedaldi · FAtt · 10 Jan 2018
3. UnDeepVO: Monocular Visual Odometry through Unsupervised Deep Learning
   Ruihao Li, Sen Wang, Zhiqiang Long, Dongbing Gu · MDE · 20 Sep 2017
4. SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability
   M. Raghu, Justin Gilmer, J. Yosinski, Jascha Narain Sohl-Dickstein · DRL · 19 Jun 2017
5. Controllable Invariance through Adversarial Feature Learning
   Qizhe Xie, Zihang Dai, Yulun Du, Eduard H. Hovy, Graham Neubig · OOD · 31 May 2017
6. Real Time Image Saliency for Black Box Classifiers
   P. Dabkowski, Y. Gal · 22 May 2017
7. Interpretable Explanations of Black Boxes by Meaningful Perturbation
   Ruth C. Fong, Andrea Vedaldi · FAtt, AAML · 11 Apr 2017
8. Grad-CAM: Why did you say that?
   Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell, Devi Parikh, Dhruv Batra · FAtt · 22 Nov 2016
9. Reinforcement Learning with Unsupervised Auxiliary Tasks
   Max Jaderberg, Volodymyr Mnih, Wojciech M. Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu · SSL · 16 Nov 2016
10. Learning to Navigate in Complex Environments
    Piotr Wojciech Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andy Ballard, ..., Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, D. Kumaran, R. Hadsell · 11 Nov 2016
11. Learning to Pivot with Adversarial Networks
    Gilles Louppe, Michael Kagan, Kyle Cranmer · 03 Nov 2016
12. Visualizing Deep Convolutional Neural Networks Using Natural Pre-Images
    Aravindh Mahendran, Andrea Vedaldi · FAtt · 07 Dec 2015
13. Domain-Adversarial Training of Neural Networks
    Yaroslav Ganin, E. Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, M. Marchand, Victor Lempitsky · GAN, OOD · 28 May 2015
14. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
    Karen Simonyan, Andrea Vedaldi, Andrew Zisserman · FAtt · 20 Dec 2013