OpenAI has updated its Agents software development kit (SDK) with new features aimed at enterprise teams building agentic AI: software agents that can take actions, not just answer questions. As TechCrunch reported on April 15, 2026, the update adds sandboxing so agents can run in controlled computer environments, plus an in-distribution harness for “frontier models” to work with files and approved tools inside a workspace. OpenAI says the goal is to make it easier for businesses to build safer, more capable agents on top of OpenAI’s model stack.
The update matters for developers because it targets two practical constraints that show up when companies move from demonstrations to deployment: how to limit what an agent can access and do, and how to test and run agent workflows against advanced models without losing control of the surrounding system. By packaging these controls into the Agents SDK, OpenAI is effectively turning operational safety and tooling into first-class developer primitives.
Agentic AI’s operational problem: control and unpredictability
In the TechCrunch account, agentic AI is described as “the tech industry’s newest success story,” with companies such as OpenAI and Anthropic “racing to give enterprises the tools they need to create these automated little helpers.” The key technical challenge, however, is not whether agents can take actions—it’s whether they can do so reliably within boundaries.
OpenAI’s update addresses this directly with a sandboxing capability. TechCrunch reports that the sandbox allows agents to operate in controlled computer environments. The source also explains why this matters: running agents “in a totally unsupervised fashion can be risky due to their occasionally unpredictable nature.” In other words, the risk is tied to behavioral variability, not just security in the abstract.
With sandbox integration, agents operate in isolation, confined to a particular workspace. The reported behavior is that they can access files and code only for particular operations, while the sandbox otherwise aims to protect the system’s overall integrity. For enterprise developers, this is a concrete design pattern: rather than letting an agent interact freely with a machine, the agent is constrained to a workspace boundary and permissioned actions.
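TechCrunch does not describe how OpenAI implements this boundary, but the pattern it reports can be sketched in plain Python: a guard that resolves every requested path and refuses anything outside an approved workspace root. All names here (`Workspace`, `WorkspaceViolation`, `read_file`) are illustrative, not part of OpenAI’s SDK.

```python
from pathlib import Path


class WorkspaceViolation(Exception):
    """Raised when an agent action reaches outside its workspace."""


class Workspace:
    """Confines file access to one directory tree (illustrative sketch only)."""

    def __init__(self, root: str):
        self.root = Path(root).resolve()

    def _check(self, path: str) -> Path:
        # Resolve symlinks and ".." before comparing against the root,
        # so a request like "docs/../../etc/passwd" cannot escape.
        resolved = (self.root / path).resolve()
        if not resolved.is_relative_to(self.root):
            raise WorkspaceViolation(f"{path!r} is outside {self.root}")
        return resolved

    def read_file(self, path: str) -> str:
        return self._check(path).read_text()

    def write_file(self, path: str, text: str) -> None:
        target = self._check(path)
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(text)
```

The design choice worth noting is that the check happens on every operation, not once at startup: whatever a model decides to do mid-task, the guard is the component that actually touches the filesystem.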
The new SDK pieces: sandboxing and an in-distribution harness
OpenAI’s updated Agents SDK includes two major additions described by TechCrunch.
1) Sandboxing for controlled execution. The SDK’s sandboxing capability is positioned as a way to keep agents from running “totally unsupervised,” by placing them in controlled environments. The workspace model—accessing files and code only for particular operations—suggests an approach where the agent’s tool use is mediated by environment constraints.
2) An in-distribution harness for frontier models. TechCrunch also reports that the new version includes an “in-distribution harness” for “frontier models.” In the source’s explanation, a “harness” refers to “the other components of an agent besides the model that it’s running on.” An in-distribution harness, TechCrunch says, often allows companies to “both deploy and test the agents running on frontier models.”
Operationally, the harness is described as letting agents “work with files and approved tools within a workspace.” This is important because it ties model execution to an approved tool and file boundary—mirroring the sandbox’s goal, but at the agent architecture level. The source’s definition of harness components also signals that OpenAI is treating the agent system as more than just a model prompt: the SDK is meant to include the surrounding machinery that makes agent behavior usable in production workflows.
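The source defines a harness only as the non-model components of an agent, so one minimal way to picture “approved tools within a workspace” is a dispatcher that executes nothing off an allowlist. This is a hypothetical sketch under that reading; `Harness` and `call_tool` are invented names, not the Agents SDK’s actual interface.

```python
from typing import Callable


class Harness:
    """Illustrative harness: every tool call is mediated by an allowlist."""

    def __init__(self, approved_tools: dict[str, Callable[..., str]]):
        self.approved_tools = approved_tools

    def call_tool(self, name: str, *args: str) -> str:
        # The model only *proposes* tool calls; the harness decides what runs.
        if name not in self.approved_tools:
            return f"refused: {name!r} is not an approved tool"
        return self.approved_tools[name](*args)


def shout(text: str) -> str:
    return text.upper()


harness = Harness(approved_tools={"shout": shout})
```

Under this framing, swapping sandbox providers means swapping what sits behind the approved tools, while the mediation layer the model talks to stays the same.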
OpenAI product team member Karan Sharma, quoted by TechCrunch, framed the update as compatibility work: “This launch, at its core, is about taking our existing agents SDK and making it so it’s compatible with all of these sandbox providers.” That statement implies a broader ecosystem strategy: rather than forcing every enterprise to use a single execution environment, the SDK is being positioned to integrate with multiple sandbox providers.
Long-horizon agents and the infrastructure choice problem
The source connects the new features to a specific agent capability category: “long-horizon” tasks. TechCrunch notes that such tasks are generally considered to be more complex and multi-step work. Long-horizon behavior is exactly where control mechanisms tend to matter most—agents must plan across steps, use tools repeatedly, and maintain coherence over time.
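A toy loop makes the control point concrete: in multi-step work, the boundary has to hold on every iteration, not just the first. Here a scripted policy stands in for a real model, and everything (`run_long_horizon`, the tool names) is an illustrative assumption rather than SDK behavior.

```python
def run_long_horizon(policy, tools, goal, max_steps=10):
    """Toy long-horizon loop: the policy sees the growing transcript each
    step, and the same tool allowlist is enforced on every step."""
    transcript = [f"goal: {goal}"]
    for _ in range(max_steps):
        action, arg = policy(transcript)
        if action == "done":
            break
        if action not in tools:  # boundary check repeats every iteration
            transcript.append(f"refused: {action}")
            continue
        transcript.append(tools[action](arg))
    return transcript


def make_scripted_policy():
    # A stand-in "model" that tries one approved and one unapproved action.
    script = iter([("note", "step one"), ("wipe_disk", "/"), ("done", "")])
    return lambda transcript: next(script)
```

The transcript doubles as the agent’s carried state; coherence over many steps is exactly what makes a harness, rather than a single prompt, necessary.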
OpenAI’s Sharma is also quoted with the rationale for the harness and sandbox combination. TechCrunch reports that OpenAI hopes users can “go build these long-horizon agents using our harness and with whatever infrastructure they have.” This is less about a new model and more about integration: if the harness can run with different infrastructure choices, enterprises may be able to adopt agentic workflows without fully changing their existing tooling.
From an industry standpoint, this suggests that the competitive differentiator in agent platforms may increasingly be developer ergonomics and deployment safety—how quickly teams can build, constrain, test, and run agents—rather than only model capability.
Rollout details: API availability, languages, and additional agent features
OpenAI says the new Agents SDK capabilities are being offered to all customers via the API at standard pricing. The source also gives a language rollout sequence: the harness and sandbox capabilities launch first in Python, with TypeScript support planned for a later release.
OpenAI is also working to bring more agent capabilities—specifically “code mode and subagents”—to both Python and TypeScript, according to TechCrunch. While the source does not provide technical details of these features beyond their names, their inclusion signals that the SDK update is part of a broader expansion of agent functionality across languages.
Finally, OpenAI told TechCrunch it will “continue to expand the Agents SDK over time.” For teams evaluating agent platforms, that matters because sandboxing and harness-based testing are often foundational components; if they become stable and portable, they can reduce the friction of moving from experimentation to internal deployment.
Why this matters for enterprise AI engineering
Based on the TechCrunch report, the update’s practical significance is that it turns two concerns—controlled execution and agent system composition—into SDK features. The sandboxing capability targets unpredictability by limiting unsupervised behavior inside controlled environments. The in-distribution harness targets deployment and testing by tying frontier model agent runs to approved tools and file access within a workspace.
As analysis, OpenAI’s emphasis on compatibility with “all of these sandbox providers” (as quoted by Sharma) points to an enterprise reality: organizations often have infrastructure and security boundaries already in place, and agent tooling needs to fit those boundaries rather than replace them. Observers may watch how quickly sandbox integrations broaden and whether enterprises adopt the harness pattern to standardize agent testing for long-horizon workflows.
Source: TechCrunch