ETHICS OF ARTIFICIAL INTELLIGENCE: WHO IS RESPONSIBLE FOR A MACHINE’S MISTAKE
Keywords:
artificial intelligence, ethics, accountability, responsibility, autonomous systems, transparency, regulation, AI governance.

Abstract
This article explores the ethical and legal challenges arising from the growing autonomy of artificial intelligence (AI) systems. It examines the problem of accountability when AI errors cause harm or adverse social consequences, highlighting the difficulty of assigning responsibility among developers, users, and the autonomous systems themselves. Examples from healthcare, transportation, and digital platforms are analyzed to demonstrate the complexity of ethical evaluation. The author concludes that responsible AI depends on transparency, shared accountability, and the preservation of human oversight in all critical decision-making processes.