Anthropic, a San Francisco-based AI company, has unveiled a new feature called ‘auto mode’ for Claude Code, enabling the AI to make permissions-level decisions on users’ behalf. The feature aims to give vibe coders a safer middle ground between excessive oversight and granting the AI risky levels of autonomy.
Claude Code’s ability to act independently on behalf of users offers utility but also carries risks, such as deleting files, sharing data, or executing malicious code. Auto mode is designed to mitigate these risks by identifying and blocking potentially harmful actions before they run, at which point the AI agent can retry or ask the user to intervene.
Currently, auto mode is available as a research preview for Team plan users, with access set to extend to Enterprise and API users soon. However, Anthropic cautions that the tool is experimental and does not entirely eliminate risk, and it advises developers to use it in controlled environments.
Source: The Verge