EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) MODELS FOR HIGH-STAKES DECISION-MAKING SYSTEMS

Authors

  • Bekturdiyeva Dilnura

Keywords:

Explainable Artificial Intelligence (XAI) refers to methods and techniques that provide transparent insight into the decision-making processes of AI systems.

Abstract

As artificial intelligence (AI) continues to permeate high-stakes domains such as healthcare and finance, the demand for Explainable Artificial Intelligence (XAI) has become increasingly urgent. The need for transparency in AI-driven decision-making systems arises not only from ethical considerations but also from the inherent complexity of machine learning models, which often renders their outputs opaque to users. It is critical to recognize that while AI models may demonstrate high accuracy on average, they can be unreliable when applied to specific individuals, which necessitates robust frameworks for personalized uncertainty quantification (Banerji et al., 2025). XAI aims to bridge this gap by providing interpretable models or post hoc explanations that enhance human understanding of, and trust in, AI systems (Finzel et al., 2025). Furthermore, as organizations navigate the ethical and managerial implications of AI adoption, augmented leadership is essential for integrating AI insights while fostering transparency and combating bias (Erhan & Çeri, 2025; Thulasiram, 2025). XAI models are therefore a vital component of responsible decision-making in high-stakes environments.

References

● Banerji, Christopher R. S., Bianconi, Ginestra, Bräuninger, Leandra, Chakraborti, et al. (2025) Personalized uncertainty quantification in artificial intelligence. doi: https://core.ac.uk/download/667267545.pdf

● Finzel, Bettina (2025) Current methods in explainable artificial intelligence and future prospects for integrative physiology. doi: https://core.ac.uk/download/661186637.pdf

● Erhan, Tuğba, Çeri, Şahin Özgür (2025) A Conceptual Study on the Effects of Artificial Intelligence in Managerial Decision-Making. doi: https://core.ac.uk/download/660978003.pdf

● Thulasiram, Prasad Pasam (2025) Explainable Artificial Intelligence (XAI): Enhancing Transparency and Trust in Machine Learning Models. doi: https://core.ac.uk/download/648320225.pdf

● Passerini, Andrea, Gema, Aryo, Sayin, Burcu, Tentori, Katya, Minervini, Pasquale (2025) Fostering effective hybrid human-LLM reasoning and decision making. doi: https://core.ac.uk/download/648121690.pdf

● Muralinathan, Srinath (2025) Transforming Cybersecurity Through Artificial Intelligence. doi: https://core.ac.uk/download/662926582.pdf

● Halawi, Leila, Holley, Sam, Miller, Mark (2025) Beyond the Blue Skies: A Comprehensive Guide for Risk Assessment in Aviation. doi: https://core.ac.uk/download/651410909.pdf

● Ocal, Fikret Emre, Torun, Salih (2025) Leveraging Artificial Intelligence for Enhanced Disaster Response Coordination. doi: https://core.ac.uk/download/672485178.pdf

● Nandal, Amit, Yadav, Vivek (2025) Ethical Challenges and Bias in AI Decision Making Systems. doi: https://core.ac.uk/download/661160657.pdf

● De-Arteaga, Maria, Elmer, Jonathan, Schoeffer, Jakob (2025) Perils of Label Indeterminacy: A Case Study on Prediction of Neurological Recovery After Cardiac Arrest. doi: https://core.ac.uk/download/653282459.pdf

Published

2025-12-19

How to Cite

EXPLAINABLE ARTIFICIAL INTELLIGENCE (XAI) MODELS FOR HIGH-STAKES DECISION-MAKING SYSTEMS. (2025). ОБРАЗОВАНИЕ НАУКА И ИННОВАЦИОННЫЕ ИДЕИ В МИРЕ, 83(7), 325-335. https://journalss.org/index.php/obr/article/view/11354