OpenAI introduces ChatGPT Pro $100 tier with 5X usage limits for Codex compared to Plus | VentureBeat
Overview
OpenAI introduces ChatGPT Pro $100 tier with 5X usage limits for Codex compared to Plus
OpenAI is trying to court more developers and vibe coders (those who build software using AI models and natural language) away from rivals like Anthropic.
Details
Today, the firm arguably most synonymous with the generative AI boom announced it will begin offering a new, more mid-range subscription tier: a $100 monthly ChatGPT Pro plan for individuals using ChatGPT and related OpenAI products.
OpenAI also currently offers Edu, Business ($25 per user monthly, formerly known as Team) and Enterprise (variably priced) plans for educational institutions and other organizations.
So why introduce a new $100 ChatGPT Pro plan, then?
The big selling point from OpenAI is that the new plan offers five times greater usage limits on Codex, the company's agentic vibe coding application/harness (the name is shared by both, as well as a lineup of coding-specific language models), than the existing $20 monthly Plus plan.
As OpenAI co-founder and CEO Sam Altman wrote in a post on X: "It is very nice to see Codex getting so much love. We are launching a $100 ChatGPT Pro tier by very popular demand."
However, alongside this, OpenAI's official company account on X noted that "we're rebalancing Codex usage in [ChatGPT] Plus to support more sessions throughout the week, rather than longer sessions in a single day."
That suggests OpenAI is simultaneously reducing how much ChatGPT Plus users can use its Codex harness and application per day.
What are the new usage limits for the $20 Plus plan?
So, what are the current limits on the $20 Plus plan? The new Pro plan gives you 5X greater than...what?
Turns out, this is trickier than you'd think to calculate, because it actually varies depending on which underlying AI model you are using to power the Codex application or harness, and whether you are working on code stored in the cloud or locally on your machine or servers.
OpenAI's Developer website notes that for individual users, usage is categorized by "Local Messages" (tasks run on the user's machine) and "Cloud Tasks" (tasks run on OpenAI's infrastructure), both of which share a five-hour rolling window. Currently, it shows the following limits:
GPT-5.4-mini: 110–560 local messages every 5 hours.
GPT-5.3-Codex: 45–225 local messages and 10–60 cloud tasks every 5 hours.
GPT-5.4-mini: 1,100–5,600 local messages every 5 hours.
GPT-5.3-Codex: 450–2,250 local messages and 100–600 cloud tasks every 5 hours.
GPT-5.4-mini: 2,200–11,200 local messages every 5 hours.
GPT-5.3-Codex: 900–4,500 local messages and 200–1,200 cloud tasks every 5 hours.
Exclusive Access: Includes GPT-5.3-Codex-Spark (research preview), which has its own dynamic usage limit.
"The number of Codex messages you can send within these limits varies based on the size and complexity of your coding tasks, and where you execute tasks. Small scripts or simple functions may only consume a fraction of your allowance, while larger codebases, long running tasks, or extended sessions that require Codex to hold more context will use significantly more per message."
OpenAI's sudden move toward the $100 price point and expanded agentic capacity comes amid the unprecedented financial ascent of its chief rival, Anthropic.
Just days ago, Anthropic revealed its annualized run-rate revenue (ARR) has topped
This growth has been fueled by the massive adoption of Claude Code and Claude Cowork, products that have set the benchmark for enterprise-grade autonomous coding.
The competitive friction intensified on April 4, 2026, when Anthropic officially blocked Claude subscriptions from being used to provide the intelligence for third-party agentic AI harnesses like OpenClaw.
To be clear, Anthropic's Claude models themselves can still be used with OpenClaw; users must now simply pay for access to Claude models through Anthropic's application programming interface (API) or extra usage credits, rather than as part of the monthly Claude subscription tiers (which some have likened to an "all-you-can-eat" buffet, making the economics challenging for Anthropic when power users and third-party harnesses like OpenClaw consume far more compute than their flat monthly fees cover).
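The buffet analogy can be made concrete with back-of-the-envelope arithmetic. Every price and token count below is invented for illustration; none of these figures are Anthropic's or OpenAI's actual rates:

```python
# Hypothetical numbers to illustrate the flat-fee vs. metered trade-off.
SUBSCRIPTION_USD = 100.0     # assumed flat monthly subscription fee
PRICE_PER_MTOK_IN = 3.0      # assumed $ per million input tokens via API
PRICE_PER_MTOK_OUT = 15.0    # assumed $ per million output tokens via API

def api_cost(input_tokens: float, output_tokens: float) -> float:
    """Metered cost of a month of usage billed through an API."""
    return ((input_tokens / 1e6) * PRICE_PER_MTOK_IN
            + (output_tokens / 1e6) * PRICE_PER_MTOK_OUT)

light_user = api_cost(input_tokens=5e6, output_tokens=1e6)      # casual chat
agent_user = api_cost(input_tokens=500e6, output_tokens=100e6)  # always-on harness

print(f"light user metered cost: ${light_user:,.2f}")   # $30.00
print(f"agent user metered cost: ${agent_user:,.2f}")   # $3,000.00
# A flat subscription loses money whenever metered cost exceeds the fee:
print(agent_user > SUBSCRIPTION_USD)  # True
```

Under these made-up rates, the casual user is profitable at a flat $100 while the agent-harness user costs thirty times the fee, which is the asymmetry that pushes vendors toward metered API billing for high-volume automated use.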
OpenClaw's creator, Peter Steinberger, was notably hired by OpenAI in February 2026 to lead its personal agent strategy, and has, since joining, actively spoken out against Anthropic's limitations, noting that OpenAI's Codex and models generally don't carry the same restrictions Anthropic is now imposing.
By hiring Steinberger and subsequently launching a Pro tier that provides the high-volume capacity Anthropic recently restricted, OpenAI is effectively courting the displaced OpenClaw community to reclaim the professional developer market.
Key Takeaways
- OpenAI introduces ChatGPT Pro $100 tier with 5X usage limits for Codex compared to Plus.
- OpenAI is trying to court more developers and vibe coders (those who build software using AI models and natural language) away from rivals like Anthropic.
- The firm arguably most synonymous with the generative AI boom announced a new, more mid-range subscription tier: a $100 monthly ChatGPT Pro plan for individuals using ChatGPT and related OpenAI products.
- OpenAI also currently offers Edu, Business ($25 per user monthly, formerly known as Team) and Enterprise (variably priced) plans for organizations.
- So why introduce a new $100 ChatGPT Pro plan? The big selling point is five times greater Codex usage limits than the existing $20 monthly Plus plan.



