Ethical and Security Risks of Autonomous AI Systems
DOI: https://doi.org/10.47392/IRJAEH.2025.0440

Keywords: Machine Ethics, Adversarial Attacks, AI Accountability, Bias in AI, AI Governance, AI Security, Ethical Risks, Autonomous AI

Abstract
As AI systems grow more autonomous, keeping them both ethical and secure becomes increasingly difficult. From self-driving cars and military drones to algorithms making real-world decisions, these technologies often operate with little or no human input. This paper examines the risks that come with that shift. On the ethical side, the concerns include bias, unclear accountability, and the sidelining of human judgment. On the security side, the threats include adversarial attacks, hacking, and deeper system flaws that can be exploited. By reviewing existing research, case studies, and real-world examples, we assess how these risks are currently being managed. More importantly, we offer ideas on how future systems can be built with stronger guardrails, both technical and governance-related.
License
Copyright (c) 2025 International Research Journal on Advanced Engineering Hub (IRJAEH)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.