Ethical and Security Risks of Autonomous AI Systems

Authors

  • Ilakiya Ulaganathan, Independent Researcher, Tagore Engineering College (affiliated to Anna University), Chennai, India.

DOI:

https://doi.org/10.47392/IRJAEH.2025.0440

Keywords:

Machine Ethics, Adversarial Attacks, AI Accountability, Bias in AI, AI Governance, AI Security, Ethical Risks, Autonomous AI

Abstract

As AI systems grow more autonomous, keeping them both ethical and secure becomes increasingly difficult. From self-driving cars and military drones to algorithms making consequential real-world decisions, these technologies often operate with little or no human input. This paper examines the risks that accompany that shift. On the ethical side, the concerns include bias, unclear accountability, and the sidelining of human judgment; on the security side, the threats include adversarial attacks, hacking, and deeper system flaws open to exploitation. By reviewing the existing literature, case studies, and real-world examples, we assess how these risks are currently managed and propose ways to build future systems with stronger guardrails, both technical and governance-based.

Published

2025-06-27

How to Cite

Ethical and Security Risks of Autonomous AI Systems. (2025). International Research Journal on Advanced Engineering Hub (IRJAEH), 3(06), 2988-2995. https://doi.org/10.47392/IRJAEH.2025.0440
