International research project led by JGU presents results in a freely available collection
Artificial intelligence (AI) is increasingly being used in many countries around the world to distribute public social benefits, for example to allocate pensions or unemployment benefits, approve asylum applications, or assign kindergarten places. Among other things, the technology is intended to help apply fairness criteria to the individual receipt of such benefits and to evaluate applicants accordingly – although the standards of fairness applied vary from country to country. In India, for example, the distribution of social benefits is based on the caste system; in China, on the quality of civic behavior. But even within Europe, concepts of fairness in the distribution of scarce state resources differ enormously. These are some of the key findings of the international research project “AI FORA – Artificial Intelligence for Assessment”, which were obtained through participatory research and have recently been published in a collection that is freely available online. The German Research Center for Artificial Intelligence in Kaiserslautern, the University of Augsburg, and the University of Surrey in the UK were among the institutions involved in the project, which ran for around three and a half years and was led by Johannes Gutenberg University Mainz (JGU). The project was funded by the Volkswagen Foundation with around EUR 1.5 million and was completed in December 2024.
Comparison of AI-powered social evaluations in nine countries on four continents
The 300-page book that has now been published compares the status quo and the desired scenarios of AI-supported social evaluations in nine countries on four continents: Germany, Spain, Estonia, Ukraine, the USA, Nigeria, Iran, India and China. “The case studies highlight the extent to which equity criteria for the receipt of state benefits are culturally and contextually dependent. Even within societies, there are very different perspectives on this, which are constantly being negotiated. This must be reflected in the technology. It is therefore not enough to develop a single standardized AI system for social evaluations in public service provision and deploy it worldwide. We need flexible, dynamic and adaptive systems. Their development depends on the contribution of all social actors, including vulnerable groups, to the design of participatory, context-specific and fair AI,” emphasizes Prof. Dr. Petra Ahrweiler from the Institute of Sociology at JGU, who led the AI FORA project. According to her, another book will soon be published in which the researchers will present the policy-relevant modeling and simulation results of the AI FORA project and show how artificial intelligence can be specifically improved to address problems of fairness and discrimination in the allocation of public social services.
Publication
P. Ahrweiler (ed.), Participatory Artificial Intelligence in Public Social Services: From Bias to Fairness in Assessing Beneficiaries, Springer, Cham, March 3, 2025,
DOI: 10.1007/978-3-031-71678-2