
Workshop: Building an Interdisciplinary Vocabulary: Explainability, Transparency and Fairness in AI in Care Settings

Care work in long-term care (LTC) is considered a genuinely human-centred activity, requiring empathy, emotional investment, physical encounters and intimate, trust-based relations between care-givers and care-recipients. Artificial Intelligence (AI) technologies are being introduced into this professional field to assist care workers in their daily activities and to provide an additional measure of care for clients. This has changed the provision of care, affecting care-givers and care-recipients alike. ALGOCARE’s goal is to understand these changes brought about by AI in the field of care, the functionality and bias of algorithmic systems governing care, and their effects on care-givers and care-recipients, from an interdisciplinary perspective combining informatics, sociology and gerontology.

Researching AI systems with an interdisciplinary project team requires a common understanding of the terms and concepts used in each discipline. Particularly when discussing a technology like “artificial intelligence”, misconceptions about each discipline’s underlying concepts can hamper the research itself and certainly the communication between research teams. Hence this workshop on building an interdisciplinary vocabulary. In the first part, we want to discuss the term “artificial intelligence” – which can be understood either as a research field or as a class of applications in the form of Machine Learning (ML) and Deep Learning (DL) – along with closely related terms such as ML, DL and algorithms. In the second part of the workshop, we want to discuss and define the terms “explainability”, “fairness” and “transparency”, three terms that have emerged in recent years in debates on the responsible use of AI. We also want to contextualize these concepts in the environments in which AI is applied: care settings and the life-worlds of older adults.

We want to discuss the following central questions (among others) with experts from the fields of informatics, sociology and gerontology:

  • How can central concepts (AI, explainability, fairness, transparency) be defined from an interdisciplinary perspective?
  • How can these terms be contextualized in care environments and with older users in mind?
  • For which actors confronted with AI systems are these terms more or less important, and why?

What is ALGOCARE about?

Care work in long-term care (LTC) is considered a genuinely human-centred activity, requiring empathy, emotional investment, physical encounters and intimate, trust-based relations between care-givers and care-recipients. AI technologies are being introduced into this professional field to assist care workers in their daily activities and to provide an additional measure of care for clients. This has changed the provision of care, affecting care-givers and care-recipients alike.

So far, little research has been done on the biases that emerge from AI in this field and on the risks that algorithmic governance of care poses to the profession. When decisions are based on data generated by AI technologies, unfair outcomes can remain unnoticed in the process of linking different big data sets, leading to ethical and social issues in LTC.

ALGOCARE’s goal is to understand the functionality of algorithmic systems governing care and their effects on care-givers and care-recipients. Insight from ethnographic research in LTC will provide an understanding of the impact and needs of care in relation to AI systems. We explore the use-value of eXplainable AI (XAI) methods (trustworthiness, privacy awareness, explainable procedures) and different levels of transparency, whether provided by the model itself or by methods applied to it after development. Based on these insights, metrics are developed to evaluate the explainability of machine learning models for care.
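To make the distinction between transparency provided by the model itself and transparency provided by methods applied after development more concrete, here is a minimal illustrative sketch. It is not taken from the ALGOCARE project; it assumes scikit-learn and synthetic data, and uses a shallow decision tree as the intrinsically transparent model and permutation importance as the post-hoc explanation method.

```python
# Illustrative sketch (assumption: scikit-learn, synthetic data; not an
# ALGOCARE artefact) contrasting model-intrinsic and post-hoc transparency.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.inspection import permutation_importance

# Synthetic stand-in data; in a care setting this might be client records.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Intrinsic transparency: a shallow decision tree can be read directly,
# so the fitted model is itself the explanation.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree))

# Post-hoc method: permutation importance explains any already-fitted model
# by measuring how much accuracy drops when each feature is shuffled.
result = permutation_importance(tree, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)
```

Post-hoc methods like this apply to opaque models (e.g. deep networks) as well, which is why the level of transparency a care system offers depends on both the model class chosen and the explanation methods layered on top of it.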