OpenAI has launched a new tool to help developers catch dangerous flaws in their software. The new AI agent, called Codex Security, is designed to find serious vulnerabilities quickly while reducing the time security teams waste chasing down false alarms.
In the past, many AI security tools frustrated developers by flagging minor issues or producing false positives, forcing security experts to spend hours sorting through noise just to find the real threats. Meanwhile, because AI is helping people write code faster than ever, security review has become a major bottleneck that slows down entire projects.
OpenAI says Codex Security solves this problem by focusing on context. The AI deeply analyzes a specific project to understand how everything connects. This allows it to spot complex, high-impact bugs that other tools simply miss.
The company claims that by combining advanced reasoning with automated checks, Codex Security delivers “high-confidence findings.” It doesn’t just point out problems; it also suggests actionable fixes. This means security teams can focus their energy on the vulnerabilities that actually matter, allowing companies to ship safe software much faster.
Codex Security is an upgraded version of a tool previously known as “Aardvark.” OpenAI tested it in a private beta with a select group of customers, refining the system during that period to improve its accuracy and significantly cut down on false positives.
Now, the tool is moving into a broader “research preview” phase. It is currently available to users on ChatGPT Pro, Enterprise, Business, and Edu plans through the Codex website.
For now, OpenAI is letting these customers try Codex Security for free for the next month. This limited-time offer suggests the company will likely charge an extra fee for the security agent in the future.