Explainable AI in High-Stakes Decision Making: Beyond Accuracy

Authors

  • Dr. Rambabu Kalathoti, Koneru Lakshmaiah Education Foundation

DOI:

https://doi.org/10.63345/sjaibt.v2.i3.103

Keywords:

Explainable AI, interpretability, high-stakes decision-making, transparency, accountability

Abstract

Artificial Intelligence (AI) is increasingly shaping high-stakes decision-making across healthcare, finance, criminal justice, defense, and autonomous systems. Traditionally, model evaluation has been dominated by accuracy-centric metrics; however, these are insufficient in contexts where decisions can directly affect human life, liberty, or well-being. Black-box models, despite high predictive performance, often fail to provide transparent reasoning, undermining accountability, fairness, and stakeholder trust. Explainable AI (XAI) has emerged as a paradigm shift that emphasizes interpretability and human-centered accountability over raw statistical accuracy. This paper critically examines the limitations of accuracy as a sole benchmark and investigates how explainability functions as a safeguard against bias, ethical lapses, and systemic risks. Drawing upon a mixed-methods design, we integrate quantitative survey data from healthcare, finance, and justice professionals with qualitative case analyses of real-world AI deployment failures. Statistical evidence demonstrates that stakeholders consistently prioritize interpretability, fairness, and trustworthiness over marginal accuracy improvements. 
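
To make the accuracy-versus-explainability gap described above concrete, the sketch below offers a minimal illustration. It is not part of the study's protocol: it assumes scikit-learn and uses synthetic data (all parameters are illustrative) to show that a black-box model and an interpretable one can look nearly identical on accuracy alone, and that a model-agnostic technique such as permutation importance, of the kind surveyed by Molnar (2022), is needed to recover any account of how the black box reaches its decisions.

# Minimal sketch, assuming scikit-learn; synthetic data stands in for the
# study's (unavailable) datasets.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
glass_box = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Accuracy-centric view: a single number that hides how decisions are made.
print("black box accuracy:", accuracy_score(y_te, black_box.predict(X_te)))
print("glass box accuracy:", accuracy_score(y_te, glass_box.predict(X_te)))

# Explainability view: a model-agnostic ranking of which inputs drive the
# black box's predictions, the kind of account stakeholders can audit.
result = permutation_importance(black_box, X_te, y_te, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: mean accuracy drop {result.importances_mean[i]:.3f}")

The logistic regression is interpretable directly from its coefficients (glass_box.coef_); the permutation step is the extra machinery the black box requires before any comparable explanation exists at all.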

References

• Adadi, A., & Berrada, M. (2020). Explainable AI for decision support in healthcare: A survey. Artificial Intelligence in Medicine, 107, 101903.

• Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., ... & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities, and challenges toward responsible AI. Information Fusion, 58, 82–115.

• Barocas, S., Hardt, M., & Narayanan, A. (2023). Fairness and Machine Learning: Limitations and Opportunities. MIT Press.

• Belle, V., & Papantonis, I. (2021). Principles and practice of explainable machine learning. Frontiers in Big Data, 4, 688969.

• Carvalho, D. V., Pereira, E. M., & Cardoso, J. S. (2019). Machine learning interpretability: A survey on methods and metrics. Electronics, 8(8), 832.

• Chen, J., Song, L., Wainwright, M. J., & Jordan, M. I. (2018). Learning to explain: An information-theoretic perspective on model interpretation. Proceedings of the 35th International Conference on Machine Learning (ICML), PMLR 80.

• Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.

• Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining explanations: An overview of interpretability of machine learning. 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), 80–89.

• Holzinger, A., Carrington, A., & Müller, H. (2020). Measuring the quality of explanations: The system causability scale (SCS). KI-Künstliche Intelligenz, 34(2), 193–198.

• Hutchinson, B., Deng, J., & Mitchell, M. (2021). Towards accountability in machine learning: A review of fairness, transparency, and explainability. ACM Computing Surveys, 54(6), 1–38.

• Kumar, I. E., Venkatasubramanian, S., Scheidegger, C., & Friedler, S. A. (2020). Problems with Shapley-value-based explanations as feature importance measures. Proceedings of the 37th International Conference on Machine Learning (ICML), PMLR 119, 5491–5500.

• Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30, 4765–4774.

• Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38.

• Molnar, C. (2022). Interpretable Machine Learning (2nd ed.). Lulu.com.

• Samek, W., Montavon, G., Lapuschkin, S., Anders, C. J., & Müller, K. R. (2021). Explaining deep neural networks and beyond: A review of methods and applications. Proceedings of the IEEE, 109(3), 247–278.

• Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146, 102551.

• Suresh, H., & Guttag, J. (2021). A framework for understanding sources of harm throughout the machine learning lifecycle. Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO’21), 1–9.

• Tjoa, E., & Guan, C. (2021). A survey on explainable artificial intelligence (XAI): Toward medical XAI. IEEE Transactions on Neural Networks and Learning Systems, 32(11), 4793–4813.

• Xu, F., Uszkoreit, H., Du, Y., Fan, W., Zhao, D., & Zhu, J. (2019). Explainable AI: A brief survey on history, research areas, approaches and challenges. Natural Language Processing and Chinese Computing (NLPCC 2019), Lecture Notes in Computer Science, 11839, 563–574. Springer.

• Zhang, W. E., Sheng, Q. Z., Alhazmi, A., & Li, C. (2020). Adversarial attacks on deep-learning models in natural language processing: A survey. ACM Transactions on Intelligent Systems and Technology, 11(3), 1–41.

Published

01-07-2025

Issue

Vol. 2 No. 3 (2025): July

Section

Review Article

How to Cite

Explainable AI in High-Stakes Decision Making: Beyond Accuracy. (2025). Scientific Journal of Artificial Intelligence and Blockchain Technologies, 2(3), 18–26. https://doi.org/10.63345/sjaibt.v2.i3.103
