European Union regulations on algorithmic decision-making and a "right to explanation"
B. Goodman, Seth Flaxman
FaML, AILaw · 28 June 2016

Papers citing "European Union regulations on algorithmic decision-making and a 'right to explanation'"

Showing 50 of 217 citing papers.

Learning Small Decision Trees with Few Outliers: A Parameterized Perspective
Harmender Gahlawat, Meirav Zehavi
21 May 2025

Feature Relevancy, Necessity and Usefulness: Complexity and Algorithms
Tomás Capdevielle, Santiago Cifuentes
FAtt · 06 May 2025

Thinking Outside the Template with Modular GP-GOMEA
Joe Harrison, Peter A. N. Bosman, Tanja Alderliesten
02 May 2025

Securing the Future of IVR: AI-Driven Innovation with Agile Security, Data Regulation, and Ethical AI Integration
Khushbu Mehboob Shaikh, Georgios Giannakopoulos
02 May 2025

Towards Improved Cervical Cancer Screening: Vision Transformer-Based Classification and Interpretability
K. T. Nguyen, Ho-min Park, Gaeun Oh, J. Vankerschaver, W. D. Neve
MedIm · 30 Apr 2025

Transformation of audio embeddings into interpretable, concept-based representations
Alice Zhang, Edison Thomaz, Lie Lu
18 Apr 2025

Recent Advances in Malware Detection: Graph Learning and Explainability
Hossein Shokouhinejad, Roozbeh Razavi-Far, Hesamodin Mohammadian, Mahdi Rabbani, Samuel Ansong, Griffin Higgins, Ali Ghorbani
AAML · 14 Feb 2025

Decision Information Meets Large Language Models: The Future of Explainable Operations Research
Yansen Zhang, Qingcan Kang, Wing-Yin Yu, Hailei Gong, Xiaojin Fu, Xiongwei Han, Tao Zhong, Chen Ma
OffRL · 14 Feb 2025

Symbolic Knowledge Extraction and Injection with Sub-symbolic Predictors: A Systematic Literature Review
Giovanni Ciatto, Federico Sabbatini, Andrea Agiollo, Matteo Magnini, Andrea Omicini
28 Jan 2025

Explaining Deep Learning-based Anomaly Detection in Energy Consumption Data by Focusing on Contextually Relevant Data
Mohammad Noorchenarboo, Katarina Grolinger
10 Jan 2025

Explainable AI: Definition and attributes of a good explanation for health AI
E. Kyrimi, S. McLachlan, Jared M Wohlgemut, Zane B Perkins, David A. Lagnado, W. Marsh, and the ExAIDSS Expert Group
XAI · 09 Sep 2024

Human-inspired Explanations for Vision Transformers and Convolutional Neural Networks
Mahadev Prasad Panda, Matteo Tiezzi, Martina Vilas, Gemma Roig, Bjoern M. Eskofier, Dario Zanca
ViT, AAML · 04 Aug 2024

On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs
Nitay Calderon, Roi Reichart
27 Jul 2024

Feature Inference Attack on Shapley Values
Xinjian Luo, Yangfan Jiang, X. Xiao
AAML, FAtt · 16 Jul 2024

CHILLI: A data context-aware perturbation method for XAI
Saif Anwar, Nathan Griffiths, A. Bhalerao, T. Popham
10 Jul 2024

A Moonshot for AI Oracles in the Sciences
Bryan Kaiser, Tailin Wu, Maike Sonnewald, Colin Thackray, Skylar Callis
AI4CE · 25 Jun 2024

Applications of Generative AI in Healthcare: algorithmic, ethical, legal and societal considerations
Onyekachukwu R. Okonji, Kamol Yunusov, Bonnie Gordon
MedIm · 15 Jun 2024

Graphical Perception of Saliency-based Model Explanations
Yayan Zhao, Mingwei Li, Matthew Berger
XAI, FAtt · 11 Jun 2024

AI with Alien Content and Alien Metasemantics
H. Cappelen, J. Dever
30 May 2024

Explainable Automatic Grading with Neural Additive Models
Aubrey Condor, Z. Pardos
ELM · 01 May 2024

Best of Both Worlds: A Pliable and Generalizable Neuro-Symbolic Approach for Relation Classification
Robert Vacareanu, F. Alam, M. Islam, Haris Riaz, Mihai Surdeanu
NAI · 05 Mar 2024

InfFeed: Influence Functions as a Feedback to Improve the Performance of Subjective Tasks
Somnath Banerjee, Maulindu Sarkar, Punyajoy Saha, Binny Mathew, Animesh Mukherjee
TDI · 22 Feb 2024

Implementing local-explainability in Gradient Boosting Trees: Feature Contribution
Ángel Delgado-Panadero, Beatriz Hernández-Lorca, María Teresa García-Ordás, J. Benítez-Andrades
14 Feb 2024

What's documented in AI? Systematic Analysis of 32K AI Model Cards
Weixin Liang, Nazneen Rajani, Xinyu Yang, Ezinwanne Ozoani, Eric Wu, Yiqun Chen, D. Smith, James Zou
07 Feb 2024

Legal and ethical implications of applications based on agreement technologies: the case of auction-based road intersections
José-Antonio Santos, Alberto Fernández, Mar Moreno-Rebato, Holger Billhardt, José-A. Rodríguez-García, Sascha Ossowski
18 Jan 2024

On the Relationship Between Interpretability and Explainability in Machine Learning
Benjamin Leblanc, Pascal Germain
FaML · 20 Nov 2023

Inspecting Explainability of Transformer Models with Additional Statistical Information
Hoang C. Nguyen, Haeil Lee, Junmo Kim
ViT · 19 Nov 2023

Deep Natural Language Feature Learning for Interpretable Prediction
Felipe Urrutia, Cristian Buc, Valentin Barriere
09 Nov 2023

Path To Gain Functional Transparency In Artificial Intelligence With Meaningful Explainability
Md. Tanzib Hosain, Md. Mehedi Hasan Anik, Sadman Rafi, Rana Tabassum, Khaleque Insia, Md. Mehrab Siddiky
13 Oct 2023

Explaining Deep Face Algorithms through Visualization: A Survey
Thrupthi Ann John, Vineeth N. Balasubramanian, C. V. Jawahar
CVBM · 26 Sep 2023

Precise Benchmarking of Explainable AI Attribution Methods
Rafael Brandt, Daan Raatjens, G. Gaydadjiev
XAI · 06 Aug 2023

Confident Feature Ranking
Bitya Neuhof, Y. Benjamini
FAtt · 28 Jul 2023

Modeling Inverse Demand Function with Explainable Dual Neural Networks
Zhiyu Cao, Zihan Chen, P. Mishra, Hamed Amini, Zachary Feinstein
26 Jul 2023

Feature Importance Measurement based on Decision Tree Sampling
Chao Huang, Diptesh Das, Koji Tsuda
FAtt · 25 Jul 2023

Robust Ranking Explanations
Chao Chen, Chenghua Guo, Guixiang Ma, Ming Zeng, Xi Zhang, Sihong Xie
FAtt, AAML · 08 Jul 2023

The Case Against Explainability
Hofit Wasserman Rozen, N. Elkin-Koren, Ran Gilad-Bachrach
AILaw, ELM · 20 May 2023

BELLA: Black box model Explanations by Local Linear Approximations
N. Radulovic, Albert Bifet, Fabian M. Suchanek
FAtt · 18 May 2023

Explaining black box text modules in natural language with language models
Chandan Singh, Aliyah R. Hsu, Richard Antonello, Shailee Jain, Alexander G. Huth, Bin-Xia Yu, Jianfeng Gao
MILM · 17 May 2023

Impact Of Explainable AI On Cognitive Load: Insights From An Empirical Study
L. Herm
18 Apr 2023

TreeC: a method to generate interpretable energy management systems using a metaheuristic algorithm
Julian Ruddick, L. R. Camargo, M. A. Putratama, M. Messagie, Thierry Coosemans
17 Apr 2023

A Comprehensive Survey on Deep Graph Representation Learning
Wei Ju, Zheng Fang, Yiyang Gu, Zequn Liu, Qingqing Long, ..., Jingyang Yuan, Yusheng Zhao, Yifan Wang, Xiao Luo, Ming Zhang
GNN, AI4TS · 11 Apr 2023

Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models
Emilio Ferrara
SILM · 07 Apr 2023

Dermatologist-like explainable AI enhances trust and confidence in diagnosing melanoma
T. Chanda, Katja Hauser, S. Hobelsberger, Tabea-Clara Bucher, Carina Nogueira Garcia, ..., J. Utikal, K. Ghoreschi, S. Fröhling, E. Krieghoff-Henning, T. Brinker
17 Mar 2023

CoRTX: Contrastive Framework for Real-time Explanation
Yu-Neng Chuang, Guanchu Wang, Fan Yang, Quan-Gen Zhou, Pushkar Tripathi, Xuanting Cai, Xia Hu
05 Mar 2023

Dermatological Diagnosis Explainability Benchmark for Convolutional Neural Networks
Raluca Jalaboi, Ole Winther, A. Galimzianova
FAtt · 23 Feb 2023

Less is More: The Influence of Pruning on the Explainability of CNNs
David Weber, F. Merkle, Pascal Schöttle, Stephan Schlögl, Martin Nocker
FAtt · 17 Feb 2023

Experimenting with Emerging RISC-V Systems for Decentralised Machine Learning
Gianluca Mittone, Nicolò Tonci, Robert Birke, Iacopo Colonnelli, Doriana Medić, ..., Francesco Beneventi, Mirko Polato, Massimo Torquati, Luca Benini, Marco Aldinucci
15 Feb 2023

Explaining text classifiers through progressive neighborhood approximation with realistic samples
Yi Cai, Arthur Zimek, Eirini Ntoutsi, Gerhard Wunder
AI4TS · 11 Feb 2023

Understanding User Preferences in Explainable Artificial Intelligence: A Survey and a Mapping Function Proposal
M. Hashemi, Ali Darejeh, Francisco Cruz
07 Feb 2023

An investigation of challenges encountered when specifying training data and runtime monitors for safety critical ML applications
Hans-Martin Heyn, E. Knauss, Iswarya Malleswaran, Shruthi Dinakaran
31 Jan 2023