A fuzzy-rough uncertainty measure to discover bias encoded explicitly or implicitly in features of structured pattern classification datasets

20 August 2021
Gonzalo Nápoles
Lisa Koutsoviti Koumeri

Papers citing "A fuzzy-rough uncertainty measure to discover bias encoded explicitly or implicitly in features of structured pattern classification datasets"

7 / 7 papers shown

Towards detecting unanticipated bias in Large Language Models
Anna Kruspe
03 Apr 2024

Measuring Implicit Bias Using SHAP Feature Importance and Fuzzy Cognitive Maps
Isel Grau, Gonzalo Nápoles, Fabian Hoitsma, Lisa Koutsoviti Koumeri, K. Vanhoof
16 May 2023

Forward Composition Propagation for Explainable Neural Reasoning
Isel Grau, Gonzalo Nápoles, M. Bello, Yamisleydi Salgueiro, A. Jastrzębska
23 Dec 2021

Modeling Implicit Bias with Fuzzy Cognitive Maps
Gonzalo Nápoles, Isel Grau, Leonardo Concepción, Lisa Koutsoviti Koumeri, João Paulo Papa
23 Dec 2021

Prolog-based agnostic explanation module for structured pattern classification
Gonzalo Nápoles, Fabian Hoitsma, A. Knoben, A. Jastrzębska, Maikel Leon Espinosa
23 Dec 2021

Improving fairness in machine learning systems: What do industry practitioners need?
Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé, Miroslav Dudík, Hanna M. Wallach
13 Dec 2018

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova
24 Oct 2016