HUMAN-CENTERED AI AND EXPLAINABLE DECISION SUPPORT
https://doi.org/10.59982/18294359-25.2-hc-18
Abstract
The deployment of artificial intelligence (AI) systems in consequential domains such as healthcare, criminal justice, and finance has raised urgent questions about transparency, fairness, and human agency. While Explainable AI (XAI) techniques offer methods to interpret model predictions, they have often been designed primarily for model developers rather than for the end users who make critical decisions. Human-centered AI (HCAI) advocates a fundamentally different approach: systems should be designed around human needs, values, and autonomy, with explanation serving as a means to support human reasoning rather than merely to justify algorithmic outputs.
This paper synthesizes current research at the intersection of HCAI and explainable decision support. It examines what explanations mean in socio-technical contexts, reviews empirical evidence on how different explanation types and interfaces affect user understanding and decision quality, and proposes design principles for systems that preserve meaningful human control while leveraging AI’s analytical strengths. The paper argues that genuinely human-centered explainable decision support requires moving beyond technical interpretation of models toward collaborative dialogue that respects human expertise, acknowledges uncertainty, and enables people to maintain their judgment capacities over time.
Keywords: Explainable AI (XAI), Human-Centered AI (HCAI), Explainable Decision Support, Human-AI Collaboration, Domain-Specific Explainability, Adaptive Explanation.
PAGES: 176-182