Fairness and Intersectional Non-Discrimination in Human Recommendation

Lay summary

Just hiring! The FINDHR project spotlights discrimination in algorithmic hiring (AI in recruiting). It develops algorithms, methods, and training to detect, mitigate, and prevent AI-driven discrimination against job seekers.

Abstract

How do you find the most suitable applicants in a large application pool? Application and hiring processes are challenging and demand considerable time and effort. The desire to hand part of the work over to algorithmic decision-making systems is therefore understandable. Such systems can, for example, pre-sort applications based on resumes or rank applicants with the help of online tests. They are supposed to find the best applicants while saving time and increasing efficiency. However, experience has shown that such systems can reproduce discrimination or even raise discriminatory barriers in the labor market.
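A highly simplified sketch of such a pre-sorting step might look like the following. This is a hypothetical illustration only: the keyword weights, field names, and cutoff are invented, and real screening systems rely on learned models over far richer signals.

```python
# Minimal, hypothetical sketch of a resume pre-sorting step.
# All keywords, weights, and the top_k cutoff are invented for illustration.
from dataclasses import dataclass

@dataclass
class Application:
    candidate_id: str
    resume_text: str

# Hypothetical keyword weights such a system might be configured with.
KEYWORD_WEIGHTS = {"python": 2.0, "sql": 1.5, "leadership": 1.0}

def score(app: Application) -> float:
    """Score a resume by the total weight of matched keywords."""
    text = app.resume_text.lower()
    return sum(w for kw, w in KEYWORD_WEIGHTS.items() if kw in text)

def presort(apps: list[Application], top_k: int) -> list[Application]:
    """Forward only the top_k highest-scoring applications to human review."""
    return sorted(apps, key=score, reverse=True)[:top_k]

apps = [
    Application("c1", "Python and SQL experience, team leadership"),
    Application("c2", "SQL reporting background"),
    Application("c3", "Warehouse logistics"),
]
for app in presort(apps, top_k=2):
    print(app.candidate_id, score(app))
```

The failure mode the project targets is already visible in this toy version: if the weights were derived from historically biased hiring decisions, the pre-sorting reproduces that bias at scale, before any human ever sees the application.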

One of the best-known examples is a recruiting tool Amazon reportedly developed some years ago to make its hiring process more efficient. While the software was still in its testing phase, it turned out that its recommendations discriminated against women: their resumes tended to be sorted out. Amazon said it abandoned the project before deploying the software because attempts to eliminate the discrimination against women were unsuccessful.

Research objective

What needs to be considered when developing and using personnel selection software so that it does not discriminate? In the project “FINDHR - Fairness and Intersectional Non-Discrimination in Human Recommendation”, we aim to answer this question. This EU-funded project develops fair algorithms for personnel selection using a context-sensitive approach: it considers not only the technical aspects of the algorithms but also the context in which a system is developed and the social consequences it might have. The interdisciplinary and international research consortium, of which AlgorithmWatch CH is a member, started its work in November 2022 and focuses on:

- Methods to measure and prevent discrimination in algorithmic hiring (a minimal measurement sketch follows this list)
- Tools that reduce the risk of discrimination in algorithmic hiring
- Training to raise awareness of the risk of discrimination in algorithmic hiring
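To make the first point concrete, here is a minimal sketch of one established way to measure discrimination in a shortlisting step: the disparate impact ratio, known from the “four-fifths rule” in US employment-selection guidelines. This is an illustration, not FINDHR's methodology; the group labels, data, and 0.8 threshold are assumptions for the example.

```python
# Minimal sketch: measuring discrimination in a shortlisting step via the
# disparate impact ratio. Illustrative only; not the FINDHR methodology.
from collections import defaultdict

def selection_rates(groups: list[str], selected: list[bool]) -> dict[str, float]:
    """Fraction of applicants shortlisted, per demographic group."""
    totals: dict[str, int] = defaultdict(int)
    shortlisted: dict[str, int] = defaultdict(int)
    for group, was_selected in zip(groups, selected):
        totals[group] += 1
        shortlisted[group] += int(was_selected)
    return {g: shortlisted[g] / totals[g] for g in totals}

def disparate_impact(groups: list[str], selected: list[bool]) -> float:
    """Lowest group selection rate divided by the highest (1.0 = parity)."""
    rates = selection_rates(groups, selected)
    return min(rates.values()) / max(rates.values())

# Invented example: 2 of 4 group-A applicants shortlisted vs. 1 of 4 in group B.
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
selected = [True, True, False, False, True, False, False, False]
ratio = disparate_impact(groups, selected)  # 0.25 / 0.50 = 0.50
print(f"disparate impact ratio: {ratio:.2f}",
      "-> flag for review" if ratio < 0.8 else "-> within four-fifths rule")
```

An intersectional analysis, as the project's title suggests, would apply such a measurement to combined attributes (for example, gender together with migration background) rather than to each attribute separately, since discrimination can surface only at the intersections.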

Legal and ethical perspective

The development of discrimination-sensitive AI applications requires the processing of sensitive data. The project therefore includes a legal analysis of the tensions between data protection legislation and anti-discrimination legislation in Europe. Groups potentially affected by discrimination are involved throughout the project.

AlgorithmWatch CH, together with other project partners, focuses on developing tools that ensure algorithms in personnel selection procedures are grounded in ethical principles and do not reproduce discriminatory biases. With its evidence-based advocacy work, AlgorithmWatch CH also helps communicate the project's results to various target groups.