Over the past few years, the world has marvelled at the abilities of Large Language Models (LLMs), especially the growth in their generative capabilities. Watching AI agents generate thousands of lines of code, conjure images out of thin air, and even produce deepfake videos already makes us feel threatened with being replaced. However, AI as of 2025 has become far more dangerous than we think, especially with the rise of Agentic AI.
Agentic AI can autonomously make decisions, plan, and carry out complex tasks with little or no human supervision, actively pursuing its own goals, adapting to changing situations, and learning from experience, acting almost independently. For example, Youmio’s blockchain-based AI agent network hosts independent agents with access to wallets, persistent memory, and the ability to perform autonomous actions across all kinds of digital environments.
Anthropic, the company behind Claude, reported just last month that Agentic AI is actively being used to carry out cyber-attacks. Imagine an agent that never tires, probing a digital fortress until it breaks in, all without any human supervision or intervention.
The sudden rise of Agentic AI shows that we have already given AI the power not only to think on its own, as LLMs do, but also to act on its own: executing financial transactions, making independent trades and investments, and even carrying out sophisticated cyber-attacks. Companies no longer want AI merely to analyse systems and report on what can be optimised; they want AI to use that information to implement the optimisations itself.
Theoretically, Agentic AI should be able to take more informed, context-appropriate, and objective actions faster than humans can. To some extent that is correct, given the vast amounts of data these systems are built on and how quickly they adapt to change. The era in which AI agents can act just as humans do, while having access to far more data, has already begun.
However, are we really ready for AI to make decisions on its own? Are the developers behind Agentic AI fully aware of all the possible forms this technology could take? Do we even know how fast it is evolving? To answer these questions, IMD has developed the AI Safety Clock, a concept similar to the Doomsday Clock, which measures how far humanity is from total nuclear self-destruction. The AI Safety Clock continuously analyses news feeds and research reports, adjusting its time based on quantitative data across three key dimensions: the sophistication, autonomy, and execution capabilities of AI systems. Intuitively, midnight is the point at which uncontrolled Artificial General Intelligence could pose a serious threat to society. Since its launch in September 2024, the clock has updated its time three times. The first update, in December 2024, moved the clock to 11:31 PM, driven by rapid advances in LLM technology and early implementations of AI autonomy. The second, in February 2025, moved it to 11:34 PM, driven by major early breakthroughs in Agentic AI and emerging military applications that signalled rapid movement towards autonomous AI systems. The third, in September 2025, moved the time to 11:40 PM, driven by enterprise-scale deployment of Agentic AI, its possible role in AI weaponisation and humanoid robots, and the growing AI arms race between countries such as the USA and China.
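IMD has not published the exact formula behind the clock, so the following is a purely illustrative sketch of how such a mechanism might work: it assumes each of the three dimensions is scored from 0 to 100 and averaged into a single risk score that slides the clock linearly from 11:00 PM towards midnight. The scores, scale, and function name are assumptions for illustration, not IMD's methodology.

```python
# Purely illustrative: IMD has not published the AI Safety Clock's actual
# methodology. This toy model assumes each dimension is scored 0-100 and
# that the clock moves linearly from 11:00 PM (score 0) to midnight (100).

def clock_reading(sophistication: float, autonomy: float, execution: float) -> str:
    """Map three hypothetical 0-100 risk scores to a clock time."""
    risk = (sophistication + autonomy + execution) / 3   # simple average
    minutes = round(risk * 60 / 100)                     # minutes past 11 PM
    return "12:00 AM" if minutes >= 60 else f"11:{minutes:02d} PM"

# Example: hypothetical scores averaging ~67 land on the September 2025 reading.
print(clock_reading(70, 62, 68))  # -> 11:40 PM
```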
Such rapid movement of the clock within the span of a single year shows how quickly we are heading towards the creation of an uncontrolled artificial general intelligence. As the world moves into a phase where AI controls almost every system we have built, critics and researchers remain ever-alert, trying to understand whether we can trust what we are building.
In their book “If Anyone Builds It, Everyone Dies”, authors Eliezer Yudkowsky and Nate Soares argue that the development of superintelligent AI is moving too fast for developers and companies to truly understand the threat of their potential creations, and that an AI with intellectual abilities far greater than any human’s could exist within just two years, something the world is simply not ready for.
When it comes to humanity developing tools of mass self-destruction, we should look back at the development of nuclear weapons. The scientists who built those weapons understood their destructive power from the very start. The rest of the world, however, had to witness the nuclear destruction of Hiroshima and Nagasaki before grasping the actual threat, a reckoning that led to regulations and safety measures on the use of nuclear weapons with the clear goal of preventing global nuclear catastrophe.
When it comes to superintelligent AI and agentic AI agents, however, Yudkowsky and Soares highlight how countries and companies are so focused on the AI arms race, and on the race towards general superintelligence, that they have failed to create a clear ethical framework for handling irregularities in autonomous agents’ actions or to establish global regulations on the power granted to agentic and superintelligent AI. The race to weaponise AI is evident in US intelligence reporting on North Korea’s position in the global AI arms race, including its new application of AI systems to drones and nuclear missiles in partnership with scientists from China and Russia. With this constant competition to build AI-powered military systems, will it really be possible to create a universal ethical restriction on the power of agentic and superintelligent AI? Do ordinary people have any information about, or control over, where the AI race is headed? Perhaps we are closer to AI doomsday than we think.
References
“Agentic AI in Financial Services: The Future of Autonomous Finance Solutions | Amazon Web Services.” Amazon Web Services, 8 Sept. 2025, aws.amazon.com/blogs/awsmarketplace/agentic-ai-solutions-in-financial-services/. Accessed 21 Sept. 2025.
Clark, Jay. “IMD Safety Clock - Big Leap - Agentic AI - I by IMD.” IMD Business School for Management and Leadership Courses, 19 Sept. 2025, www.imd.org/ibyimd/artificial-intelligence/imd-ai-safety-clock-makes-biggest-leap-yet-amid-weaponization-and-rise-of-agentic-ai/. Accessed 21 Sept. 2025.
“Detecting and Countering Misuse of AI: August 2025.” Anthropic.com, 2025, www.anthropic.com/news/detecting-countering-misuse-aug-2025.
Kelliher, Fiona. “Kim Jong Un Declares AI Military Drone Development a ‘Top Priority.’” Al Jazeera, 19 Sept. 2025, www.aljazeera.com/news/2025/9/19/kim-jong-un-declares-ai-military-drone-development-a-top-priority. Accessed 21 Sept. 2025.
Louallen, Doc. “New Book Claims Superintelligent AI Development Is Racing toward Global Catastrophe.” ABC News, 19 Sept. 2025, abcnews.go.com/US/new-book-claims-superintelligent-ai-development-racing-global/story?id=125737766. Accessed 21 Sept. 2025.
Sundararajan, Ramji, and Uzayr Jeenah. “The End of Inertia: Agentic AI’s Disruption of Retail and SME Banking.” McKinsey & Company, 15 Aug. 2025, www.mckinsey.com/industries/financial-services/our-insights/the-end-of-inertia-agentic-ais-disruption-of-retail-and-sme-banking.