Making AI Fair
Bar-Ilan and Ben-Gurion University Researchers Develop a Method for Detecting Bias Without Exposing Personal Data

Artificial intelligence has become central to decision-making in healthcare, banking, insurance, and public services. But with great power comes responsibility: how can we guarantee that algorithms do not discriminate against certain populations?
Until now, measuring fairness often required direct access to sensitive personal data or proprietary code, resources that are not always available. A new study presented at the International Conference on Machine Learning (ICML), one of the world’s leading scientific gatherings, introduces a way to expose bias in AI models without compromising privacy.
A Cross-University Collaboration
The study was led by:
- Prof. Sivan Sabato, Department of Computer Science at Ben-Gurion University and McMaster University, Canada
- Prof. Eran Treister, Department of Computer Science, Ben-Gurion University
- Prof. Elad Yom-Tov, Department of Computer Science, Bar-Ilan University
Together, the researchers developed a fairness metric that detects bias in predictive algorithms, even when direct access to the model’s code or individual-level data is unavailable.
How the Method Works
At its core, the new metric estimates how many people receive a different prediction than they would have received if the algorithm treated all groups equally. Unlike existing accuracy-focused measures, this approach reveals both the extent and the nature of bias across populations.
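The study defines its own estimator, which is not reproduced here. As a rough illustration of the underlying idea of counting differently-treated individuals from aggregate statistics alone, the minimal Python sketch below takes only group sizes and group-level decision rates (hypothetical numbers) and lower-bounds how many people would have to receive a different decision for every group to see the same distribution of outcomes. The pooled population distribution is used as a simple common target; that choice, the function name, and the numbers are assumptions of this sketch, not details of the authors' method.

```python
import numpy as np

def min_changes_for_equal_rates(group_sizes, group_pred_rates):
    """Lower-bound the number of people whose predicted label would have to
    change so that every group receives the same per-class decision rates.

    group_sizes      : shape (G,)  -- number of people in each group
    group_pred_rates : shape (G, C) -- fraction of each group assigned to each class
    """
    sizes = np.asarray(group_sizes, dtype=float)
    rates = np.asarray(group_pred_rates, dtype=float)

    # Pooled decision distribution over the whole population (simple common target).
    pooled = (sizes[:, None] * rates).sum(axis=0) / sizes.sum()

    # Total-variation distance of each group's rates from the pooled target:
    # the minimal fraction of that group whose label must move to match it.
    tv = 0.5 * np.abs(rates - pooled).sum(axis=1)

    changed = sizes * tv                    # people affected, per group
    return changed.sum(), changed / sizes.sum()

# Hypothetical loan decisions (approve / review / deny) for two groups.
sizes = [8000, 2000]
rates = [[0.60, 0.25, 0.15],   # group A
         [0.35, 0.25, 0.40]]   # group B
total, share = min_changes_for_equal_rates(sizes, rates)
print(f"At least {total:.0f} people ({100 * share.sum():.1f}% of the population) "
      f"would need a different decision for all groups to be treated alike.")
```

A count of this kind can be computed from published aggregate statistics alone, without seeing any individual's record or the model's code, which is the spirit of the population-level approach described above.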
Real-World Applications
- Banking: Determining whether a loan classification model systematically disadvantages single-parent families, even without knowing the decision for each individual.
- Healthcare: Identifying whether an AI diagnostic tool is biased against people with specific skin tones when recommending treatment for conditions like melanoma or heart disease.
This ability to detect clinically significant or socially impactful disparities, without breaching privacy, marks a major step forward in ethical AI oversight.
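As a toy illustration of the healthcare scenario above: an auditor holding only aggregate counts (how many patients in each skin-tone group were examined, and how many were referred for biopsy) can already test whether referral rates differ between groups. The sketch below runs a standard two-proportion z-test on made-up counts; it is a generic statistical check, not the metric from the study, which goes further by estimating how many individuals are affected.

```python
from math import sqrt
from statistics import NormalDist

def referral_rate_gap(pos_a, n_a, pos_b, n_b):
    """Two-proportion z-test using aggregate counts only: do
    positive-recommendation rates differ between two groups?"""
    p_a, p_b = pos_a / n_a, pos_b / n_b
    pooled = (pos_a + pos_b) / (n_a + n_b)          # rate if group membership had no effect
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a - p_b, z, p_value

# Hypothetical counts: melanoma biopsy referrals by skin-tone group (illustrative only).
gap, z, p = referral_rate_gap(pos_a=420, n_a=3000, pos_b=95, n_b=1000)
print(f"referral-rate gap = {gap:+.3f}, z = {z:.2f}, p = {p:.4f}")
```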
Privacy-Preserving, Fairness-Enhancing
Prof. Yom-Tov explains: “Our goal was to create tools that regulators, scientists, and developers can use to ensure AI systems are fair—without requiring access to personal medical records, bank accounts, or proprietary source code. Improving transparency in the algorithms that govern our lives is not only an ethical imperative but also a matter of public health and social justice.”
Prof. Sabato adds: “In many cases, fairness checks are impossible because the data is private or the model is proprietary. Our method allows us to assess discrimination by using population-level statistics, making fairness evaluation broadly feasible.”
From Research to Public Impact
The algorithms developed in this project have been released as open-source tools, allowing regulators, data scientists, and developers worldwide to adopt them. With their help, decision-makers can better ensure:
- Public safety in healthcare
- Equality in financial services
- Fairness in social and public policy
The new method is particularly powerful in evaluating complex, multidimensional AI models, where traditional fairness checks are impractical or impossible.
Toward Transparent and Trustworthy AI
By enabling bias detection without privacy invasion, this research paves the way for more ethical, transparent, and socially responsible AI systems.
As AI continues to shape critical aspects of daily life, the work of Prof. Sabato, Prof. Treister, and Prof. Yom-Tov sets a global benchmark for ensuring technology serves justice, equity, and human dignity.
Download the tool here: https://github.com/sivansabato/DCPmulticlass