Autonomous AI agents are having their moment. No longer are they passengers in the conversation; they are now behind the wheel. But are these active AI agents a good idea?
Consider two recent cases – each created for a different audience, but with essentially the same goal: to have AI do the work that is mundane and repetitive, or to help you out in a pinch when, for example, you’re running late but that presentation has to be in the office by 9am.
Claude’s maker, AI safety and research company Anthropic, recently launched the latest addition to the Claude family: its autonomous AI agent, Claude Cowork, aimed at professional developers and knowledge workers who need secure, reliable AI tools integrated into their lives as and when they need them. Meanwhile, in November 2025, OpenClaw.ai launched its own AI agent, aimed at tech-savvy users and developers who want an “always-on”, highly autonomous personal AI OS, offering flexibility, local control, and an ecosystem of community plug-ins.
So how do these agents work?
Claude’s active AI agents are session-based. The agent works from a given prompt and accesses the relevant files, enabling it to start and perform a task. Once the task is complete, it exits the system or, if necessary, awaits further instruction. It can, for example, open apps, navigate a web browser, fill in a spreadsheet, create a morning briefing, make changes in an IDE, open a pull request, or run tests. Only when Claude lacks a connector or tool for the task – a Google Calendar or Slack integration, say – will it navigate the screen directly and access the relevant information that way, much as if you had given someone from IT remote access to your device – all without requiring setup.
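To make that flow concrete, here is a minimal sketch of a session-based agent loop: route each task to a matching connector if one exists, otherwise fall back to driving the screen directly. Every class and function name below is a hypothetical illustration for this article, not Anthropic’s actual API.

```python
# Hypothetical sketch of a session-based agent loop (not a real API).

def tool_needed(task: str) -> str:
    """Naive routing: pick a connector name from keywords in the task."""
    if "calendar" in task.lower():
        return "google_calendar"
    if "message" in task.lower():
        return "slack"
    return "screen"  # no matching connector

class Connector:
    """Stands in for an integration such as Google Calendar or Slack."""
    def __init__(self, name: str):
        self.name = name
    def run(self, task: str) -> str:
        return f"{self.name}: done ({task})"

class ScreenDriver:
    """Fallback: drive the UI directly, like IT with remote access."""
    def run(self, task: str) -> str:
        return f"screen: done ({task})"

def run_session(prompt: str, connectors: dict) -> list[str]:
    """Run one session: each line of the prompt is treated as one task.
    (A real agent would use an LLM to decompose the prompt into steps.)"""
    screen = ScreenDriver()
    results = []
    for task in prompt.splitlines():
        tool = connectors.get(tool_needed(task), screen)  # fall back to screen
        results.append(tool.run(task))
    return results  # session ends; the agent exits or awaits instruction
```

The point of the sketch is the fallback: connectors are preferred where they exist, and direct screen control is the catch-all for everything else.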
With OpenClaw’s open-source, general-purpose “life agent”, you pick a plan, the company sets you up, and you give it a prompt via WhatsApp, Telegram, Discord, and the like. It then runs on your machine, using LLMs such as GPT-4 or Claude to perform the prompted tasks: browsing the web, filling in forms, booking holidays and managing travel, and so on.
In fact, in a recent interview with CNBC, NVIDIA CEO Jensen Huang said these AI agents are “definitely the next ChatGPT”. NVIDIA has worked with OpenClaw to deliver NemoClaw, an enterprise-level version of OpenClaw that lets anyone run these self-evolving agents from anywhere with one command.
How is this possible?
Autonomous agents have been a big push for AI companies for some time now. This level of access was being touted in tech circles almost as soon as OpenAI announced the first iteration of ChatGPT. The adage “you snooze, you lose” feels like an apt motto for the peer pressure of rapid AI adoption.
Previously, autonomous agents were restricted to sandboxed, cloud-based environments because they couldn’t safely interact with local data, applications, and operating system APIs; security protocols effectively kept these agents off a person’s device altogether. But, as one LinkedIn user commented: “[Y]ou can’t ban a user on a computer.”
So the capability is there. But capability and wisdom are different things.
Is it a good idea?
Although this capability is impressive on the one hand, as with every tech leap there is always another hand that must balance the scales. The pressure to push further, faster, is forever breathing down the neck of technological growth. There seem to be three camps of people. The ‘sayers’. The ‘nay-sayers’. And the ‘don’t-know-what-to-sayers’.
The ‘sayers’: AI works autonomously to help complete tasks that you are too busy to do yourself. AI can make you more efficient – much faster than any PA.
The ‘nay-sayers’: AI makes a mistake, and you are worse off than when you started. And given its speed, and the fact that it needs neither salary nor sleep, it might displace even more jobs than it already has.
In fact, in January 2026, the UK government found that higher AI exposure was linked to a 3.9% reduction in job-posting volume for the roles most exposed. A concerning indicator, given that around 70% of the UK workforce are in roles with tasks that AI could potentially perform or even enhance.
Whilst the report highlights “high complementarity” workers – those in roles where AI could boost efficiency – it also points to those at the other end of the spectrum, the “low complementarity” workers, whose tasks AI is more likely to perform outright.
A question of safety
You also have to wonder at the safety of these capabilities. While Anthropic is an ethical AI company and has built safeguards, it has also warned that this tool is still a work in progress, noting that “Claude can make mistakes, and while we continue to improve our safeguards, threats are constantly evolving.” It also cautions against giving its agents access to sensitive data. OpenClaw also carries safety concerns, especially around sensitive information and inexperienced users accessing the platform.
You have to wonder what the end goal for these agents is. Is AI here to help companies become more productive and, in turn, give employees a better work/life balance? Or is it here to help companies become more productive by giving employees more work to do, now that AI handles the mundane tasks? Or is it here so companies can lay off swathes of staff and pocket the savings?
It seems these agents are just the next step in AI pervading everyday technology, and it is up to you as a user – for now, anyway – whether you adopt this technology or shun it in favour of completing tasks yourself. But with ever-increasing workloads, options such as these look set to become more and more a part of everyday life.
I suppose the question boils down to this: how are companies allowing these agents to be used? Because AI no longer just advises users – it acts on their behalf.