Reddit AI Coding — 2026-04-13

1. What People Are Talking About

1.1 Claude Opus 4.6 Quality Collapse and Trust Erosion (🡕)

The single most dominant narrative across r/ClaudeCode today is that Opus 4.6 has been severely degraded. A dozen high-engagement posts and hundreds of comments document the same pattern: a model that worked well through February and into March is now producing shallow diffs, ignoring context files, hallucinating package names, and forgetting steps mid-plan. The frustration is compounded by Anthropic's silence — users perceive the degradation as undeniable while the company has not acknowledged it.

u/LumonScience posted a side-by-side comparison of Opus 4.5 and 4.6 on a simple car wash logic problem, calling it "the strongest evidence that Opus 4.6 has been lobotomized." The post drew 128 comments and 233 upvotes. u/ketosoy confirmed in discussion that "opus 4.6 is lobotomized during peak hours and fine off peak" using the same test run five times (Opus 4.5 vs Opus 4.6).

u/Wayplorer documented Opus 4.6 in Max Setting creating a six-step implementation plan in plan mode, then immediately forgetting steps 2 and 6 when asked to execute — burning 50,000+ tokens to recover the original plan. "It can't count from 1 to 6," they wrote (Claude Opus is nuked beyond repair). The top comment from u/Emergency-Leopard-24 (90 upvotes) stated: "Opus is nerfed as hell right now. ~3 weeks ago it was performing complex tasks without issue."

u/More-School-7324 reported an enterprise-wide pattern: "In our company most of the devs are using Max20 plans... until end of March it was working great... over the past week... SEVERELY degraded performance." Multiple colleagues observed the same behavior simultaneously, and the team began evaluating alternatives (Finally happened to me and my colleagues).

u/Xccelerate_ aggregated five confirmed issues into a single post — high token usage, nerfed Opus 4.6 (citing confirmation from "AMD's Senior AI Director"), increased hallucinations, buggier CLI releases, and inflated non-peak-hour consumption. The post closed with "#MakeOpusGreatAgain" (For the people that are having problems with ClaudeCode).

Not everyone agrees. u/dennisplucinik posted "Maybe I'm an outlier here?" and received 172 comments — the second-most-discussed thread of the day — from users who report no issues. Satisfied users tend to have structured CLAUDE.md files, disciplined context management, and established workflows. u/Square-Display555 observed: "I think most people who had a technical background or role are having a great time with it still" (Maybe I'm an outlier here?).

Discussion insight: u/workphone6969 asked mods to ban complaint posts (175 score, 215 comments) and the community split sharply. u/Ill-Boysenberry-6821 (96 upvotes) argued that "people have been sold a false product on false benchmarks" (Can we ban the constant shit-posting).

Comparison to prior day: This theme was already the dominant narrative on April 12, where the top post ("Completely IMMORAL business practices from Anthropic," 653 score) and multiple others documented the same degradation. The volume and intensity have not diminished; if anything, the April 13 data shows the conversation maturing from raw complaints toward specific technical evidence and workarounds.

1.2 Token Quota Inflation and the Cache TTL Regression (🡕)

A technical explanation for the quota complaints surfaced today: Anthropic appears to have silently switched the default prompt cache TTL from one hour to five minutes around early March 2026.

u/silver_gr filed GitHub issue #46829 with analysis of 119,866 API calls spanning January through April 2026 across two independent machines. The data shows four distinct phases: 5-minute-only TTL in January (pre-1h availability), consistent 1-hour TTL from February 1 through March 5, a transition on March 6-7, and 5-minute-dominant TTL from March 8 onward. The regression caused a 20-32% increase in cache creation costs (Cache TTL silently regressed).
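The underlying check is straightforward to reproduce from local session logs. A rough sketch, assuming the logs are JSONL files and that each record carries a cache_control.ttl value and an ISO timestamp — the directory path and both field names are guesses for illustration, not a documented schema:

```python
import collections
import json
import pathlib

def ttl_histogram(log_dir="~/.claude/projects"):
    """Tally cache TTL values per month from JSONL session logs.

    Returns {"YYYY-MM": Counter({"5m": n, "1h": m})} so a TTL
    distribution shift like the one in issue #46829 is visible
    at a glance. Field names are assumptions, not a spec.
    """
    counts = collections.defaultdict(collections.Counter)
    for path in pathlib.Path(log_dir).expanduser().rglob("*.jsonl"):
        for line in path.read_text().splitlines():
            try:
                rec = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed lines
            ttl = rec.get("cache_control", {}).get("ttl")
            month = rec.get("timestamp", "")[:7]  # "YYYY-MM"
            if ttl and month:
                counts[month][ttl] += 1
    return counts
```

A sudden flip from 1h-dominant to 5m-dominant counts between adjacent months is exactly the four-phase pattern the issue describes.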

u/Medium_Island_2795 independently corroborated the finding by querying their own conversations.db, producing five data visualization charts showing the same TTL distribution shift (follow-up: anthropic quietly switched the default cache TTL).

[Figure: cache TTL distribution chart showing the shift from 1h to 5m]

This cache change directly explains the surge in quota complaints: users performing the same work are now consuming significantly more of their weekly allocation because context is being re-sent rather than cached. u/UnknownEssence reported burning 60% of a $100 Max Plan in less than two days despite careful usage practices, while u/vapepencil explained in comments that the /compact command compounds the problem by "trashing the entire kv cache" — each compact forces a full cache rebuild ($100 Max Plan - 60% used in less than 2 days).

[Figure: usage bar showing 60% consumption in under 2 days on a Max plan]

Comparison to prior day: On April 12, u/six3oo posted a detailed token tracking analysis showing Claude Code subscriptions costing far more through the API equivalent, and another user documented a 178x token reduction through workflow changes. Today's cache TTL data provides the missing technical explanation for what many users experienced as sudden, unexplained quota acceleration.

1.3 The Claude-to-Codex Migration Wave (🡕)

A clear migration pattern is forming from Claude Code to OpenAI's Codex, driven by both the Opus degradation and the quota inflation.

u/fourier54 ran an A/B test feeding identical prompts to both Claude and Codex for planning a medium-sized C++ project (~10k lines). Codex consistently produced better plans and found holes in Claude's, while "Claude always said 'Great plan! caught all these things I didn't see.'" The conclusion: "claude code today is much worse than codex on both planning, code analysis and execution" (Codex clearly superior to Claude).

u/tehlx posted "Actually at the Moment you should use Codex," reporting that a free-trial Codex Pro account outperformed their Max 5x Claude subscription. In the discussion, u/Lilith7th (16 upvotes) described using "codex to debug claude," while u/0bran shared an elaborate workflow of having Codex audit Opus's plans, finding that "Opus keeps making mistakes even with a very clear, structured setup... a $20 Codex setup is consistently outperforming what's supposed to be the strongest Claude model" (Actually at the Moment you should use Codex).

The pragmatic middle ground was articulated by u/jco1510: "Get Claude code and codex subscriptions and get over it. All my repos have Claude and agents.md files so workflows are interoperable." The top response from u/TeamBunty (19 upvotes): "$400/mo is cheap for what you're getting" (Get Claude code and codex subscriptions).

u/No-Cryptographer45 took a different approach: using Omniroute to route GPT-5.4 through the Claude Code interface, preserving the Claude Code UX while using Codex models underneath (using Omniroute).

Discussion insight: Several commenters mentioned GLM-5.1 and Deepseek v3.2 as additional alternatives. u/Euphoric_Oneness (32 upvotes) commented: "Claude models are currently nerfed. Everything performs better now. Try GLM5.1." The migration is not just Claude-to-Codex — it is Claude-to-anything.

1.4 Vibe-Coded Apps: Traction, Security, and Market Saturation (🡒)

The daily crop of "I built this" posts continued, but today's standout was a cautionary tale rather than a success story.

u/Upper-Pop-5330 posted a detailed analysis of the Quittr breach — a vibe-coded quit-porn recovery app that reached $1M revenue in six months, an Oprah mention, and 600,000 user records exposed through Firebase's default "test mode" rules. Among those records were 100,000 minors' self-reported data including masturbation frequency and personal confessions. The post noted this was the fourth major Firebase/BaaS breach in the past year, following Cal AI (3.2M health records), Tea (72K government IDs), and a 916-project epidemic (125M records total). The top comment from u/opi098514 (175 upvotes) was blunt: "No that's exactly what it is. He was careless and now all his users are compromised" (Quittr is the vibe coding success story of 2025).

Meanwhile, u/4_max_4, a 20-year veteran developer, asked whether the industry is "on the brink of seeing an infinite number of clones of pretty much every app out there." They had replaced three Airtable inventories, built device sync apps, a media remote, a time tracking system, and started an accounting app — none commercially ready but all functional. The top reply from u/WeUsedToBeACountry (50 upvotes): "Very, very, very shitty clones that don't get updated or supported." u/Forsaken_Ant7459 added: "the issue isn't building stuff but managing and maintaining it" (Are we on the brink of seeing an infinite number of clones).

Comparison to prior day: April 12 included similar builder stories but the Quittr breach adds a new dimension — a concrete case where the speed of vibe coding outpaced the builder's security knowledge, with real consequences.

1.5 Multi-Provider Tooling and AI Agent Orchestration (🡕)

A growing number of posts reflect developers building infrastructure to work across multiple AI providers simultaneously, treating models as interchangeable utilities.

u/Personal_Offer1551 built Proxima, an MCP server connecting Antigravity to ChatGPT, Claude, Gemini, and Perplexity simultaneously without API keys — using existing browser sessions instead. The project includes 45+ MCP tools and is available on GitHub (I built mcp server).

u/Objective_River_5218 demonstrated AgentHandover, a local-first macOS system that observes user workflows via screen capture, clusters patterns, and synthesizes reusable "skills" for coding agents, using Ollama for local inference. Available on GitHub (Demo: Agent that watches your screen).

Discussion insight: These projects address the same underlying pain point: context is lost every time a developer switches between AI tools or starts a new session. The multi-provider approach also serves as a hedge against the kind of quality degradation Claude users are experiencing.

1.6 Enterprise Accountability for AI-Generated Code (🡕)

u/Unlucky_Blueberries posted what appears to be an enterprise policy holding developers personally responsible for AI-generated code output (150 score, 56 comments). The top comment from u/ARC4120 (102 upvotes): "Literally the most sane decision. You are responsible for the final output." u/Sufficient-Farmer243 described their company's approach: "You'll be assigned SPs, do the SPs how you see fit with APPROVED AI. Your metrics for code smells, reviews, quality, etc will remain the exact same. Meaning if you PR unreviewed AI code and it's full of bugs, EOY, don't expect a bonus" (sure to go over well with everyone).

This signals a maturation: enterprises are moving past "should we use AI" to "how do we hold people accountable for AI output."


2. What Frustrates People

Claude Opus 4.6 Degradation — High Severity

The most pervasive frustration of the day. Users paying $100-$200/month on Max plans report that Opus 4.6 has regressed from a "senior developer" level to producing shallow, error-prone output. Specific complaints: forgetting steps mid-plan, hallucinating package names and commit SHAs, ignoring CLAUDE.md rules it acknowledges reading, and producing code that breaks previously working features. u/dutchviking captured the sentiment: "Every effing change made things worse, every time I pushed back it failed even harder." The model's own apology — "I'm sorry for the sloppy execution. The rules are clearly documented — I just didn't follow them" — became a widely shared symbol of the problem (I'm sorry for the sloppy execution).

Coping strategies include: downgrading to Opus 4.5 via /model claude-opus-4-5-20251101, switching to Sonnet 4.6 for simpler tasks, setting effortLevel: "high" and disabling adaptive thinking, using CLI version 2.1.81 instead of newer releases, and keeping context under 200k tokens. u/YeXiu223 provided the most detailed mitigation, noting that Claude Code creator Boris Cherny confirmed on Hacker News that adaptive thinking can allocate zero reasoning tokens to turns it deems "simple" — precisely the turns where hallucinations occurred (Disabling 1m context and adaptive thinking helped).
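Several of these mitigations can be pinned in configuration rather than retyped each session. A minimal sketch of a Claude Code settings file — the effortLevel key comes from the threads rather than official documentation, and the exact file location (e.g. .claude/settings.json) is an assumption:

```
{
  "model": "claude-opus-4-5-20251101",
  "effortLevel": "high"
}
```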

Silent Quota and Cost Changes — High Severity

Users are frustrated not just by limits themselves but by the lack of communication about changes. The cache TTL regression from one hour to five minutes happened without announcement, causing subscription users to blow through quotas they previously never touched. u/ArcticMooss reported a simple 80k-token audit of a CLAUDE.md file consuming 16% of a 5-hour limit on the Max 5x plan — a task that should have cost a fraction of that (It finally happened). GitHub Copilot users face parallel frustrations: u/Far-Equivalent4128 reported usage continuing to increment even during rate-limited periods when requests were not processed (Unfair rate Limits Bugs).

The frustration extends across platforms. u/Abject-Sherbert1917 documented spending $1,400/month on Cursor + Claude API costs — $1,200 in API charges on top of a $200 Ultra plan — and asked the community how to manage costs while maintaining an agentic workflow ($1,400/month with Cursor + Claude API).

Firebase Default Security as a Systemic Risk — High Severity

The Quittr breach exposed a pattern specific to vibe-coded apps: Firebase ships with "test mode" rules that allow unrestricted read/write access, and the rules configuration lives in a separate Console tab from the one developers build against. The app works identically whether rules are open or locked, there is no deploy-time warning, and firebase deploy ships test mode to production silently. u/Upper-Pop-5330 documented four major Firebase/BaaS breaches in the past year and provided a deny-by-default rules template (Quittr is the vibe coding success story of 2025). u/Silpher9 (32 upvotes) responded: "This is why I've only created closed apps for me and my family and friends. I'm too afraid I might fuck up something causing other people harm."
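For reference, the deny-by-default shape looks like the standard Firestore "locked mode" baseline below — not necessarily the exact template from the post. Everything is closed by default, and specific paths must be opened deliberately:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Deny all reads and writes by default;
    // add narrower match blocks to open specific paths.
    match /{document=**} {
      allow read, write: if false;
    }
  }
}
```

Compare this with "test mode," which ships the opposite default (allow read, write for anyone until an expiry date) and behaves identically in the app until someone probes it.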

AI Dependency and Knowledge Loss — Medium Severity

Multiple threads explored the psychological cost of AI-assisted development. u/baldierot asked whether the limits squeeze was "a wake-up call about their dependence on AI," noting they are "utterly stuck" when they hit limits (wake-up call about their dependence on AI). u/Litlyx described a different dimension: "I'm shipping more than ever with Cursor... but I have zero memory of what I actually built" — cognitive offloading where the developer cannot reconstruct their own decisions at end of day (I'm shipping more than ever).


3. What People Wish Existed

Transparent, Predictable AI Coding Subscriptions

Across every platform community, users want clear, honest billing. They want to know exactly what they are paying for, what changed when it changed, and why. The cache TTL regression was invisible to users for weeks. u/Deep_Ad1959 summarized: "the lack of communication is what kills trust, not the degradation itself. every tool has bad weeks." Multiple users expressed willingness to pay more if the value proposition were stable and transparent. u/t0rgar argued for treating LLMs like utility power — switching freely based on current quality without loyalty penalties (We need to treat LLMs like power).

Vendor-Agnostic Agent Configuration

u/chintakoro explicitly asked whether anyone had implemented a contingency plan for switching between agent vendors, suggesting "symlinking CLAUDE.md and .claude/ to vendor-agnostic AGENTS.md and .agents/." u/jco1510 already maintains both CLAUDE.md and AGENTS.md files in every repo for interoperability. The need is concrete: a standardized project configuration format that any coding agent can consume, so that switching providers does not require rebuilding workflow infrastructure (Get Claude code and codex subscriptions).
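The symlink approach is a couple of commands per repo. A sketch, assuming AGENTS.md and .agents/ are treated as the canonical copies:

```shell
# Vendor-agnostic layout: keep AGENTS.md / .agents/ as the source of
# truth, then expose them under the names Claude Code looks for.
touch AGENTS.md
mkdir -p .agents
ln -sf AGENTS.md CLAUDE.md    # CLAUDE.md -> AGENTS.md
ln -sfn .agents .claude       # .claude/  -> .agents/
```

Edits land in one place, and adding another vendor is just another symlink.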

Security Guardrails for Non-Developer Builders

The Quittr breach revealed that vibe coders need security checks built into their deployment pipeline — not optional, not in a separate tab, but blocking. u/Upper-Pop-5330 suggested that firebase deploy should refuse to ship "allow read, write: if true" rules without an explicit override flag. More broadly, there is a need for security linting tools that catch BaaS misconfigurations, exposed API keys, and open databases before deployment, integrated directly into the AI coding workflow (Quittr is the vibe coding success story of 2025).

Decision Audit Trails for AI-Assisted Development

u/Litlyx is building "Brain0" specifically to address the problem of having no record of decisions made during AI-assisted development sessions. The comments suggested existing workarounds — auto-updating changelogs from commit history, MCP-connected Confluence documentation, structured commit messages — but no integrated solution exists. The need is for automatic capture of what was tried, what was decided, and why, without requiring manual documentation (I'm shipping more than ever with Cursor).

Local AI Models at Opus-Level Intelligence

u/SatanVapesOn666W noted that "Gemma 4 31b hits Sonnet 4.5 performance," but the gap to Opus-level reasoning remains large. Multiple users expressed wanting local inference that matches cloud model quality to escape subscription dependency and quota limits entirely. u/CreamPitiful4295: "Can't wait for the models to get opus intelligence at home" (wake-up call about their dependence on AI).


4. Tools and Methods in Use

| Tool | Category | Sentiment | Strengths | Limitations |
|---|---|---|---|---|
| Claude Code (Opus 4.6) | AI coding agent | (-) | Strong when "un-nerfed"; deep reasoning on complex tasks; best for technical writing | Severe quality regression since late March; high token consumption; silent quota changes; adaptive thinking can skip reasoning entirely |
| Claude Code (Sonnet 4.6) | AI coding agent | (+/-) | Faster, lower token cost; performs better than Opus on simple tasks | Less capable on complex reasoning; users switching to it as an Opus workaround |
| Claude Opus 4.5 | LLM | (+) | Users reporting superior output to current 4.6 via model override (/model claude-opus-4-5-20251101) | 200k context window vs 1M; not the default model |
| OpenAI Codex (GPT-5.4) | AI coding agent | (+) | Better planning and execution than current Opus; more predictable limits; "xhigh" effort setting praised | Limits tightening on Pro plan; less rich CLI tooling than Claude Code |
| GitHub Copilot | AI coding agent | (+) | Predictable billing; /fleet for subagents; no 5-hour session limits | Retiring Opus 4.6 Fast from Pro tier; model-switching transparency concerns; VS Code extension missing 1M context support |
| Cursor | IDE + agent | (+/-) | Best UX for agent-assisted coding; file-change review workflow; fast iteration | Expensive at scale ($1,400/mo with API costs); "trust-me-bro" concerns with v3 agentic shift |
| Google Antigravity | AI coding agent | (+/-) | Gemini 3 Flash excellent for repetitive/execution tasks; generous quotas | Pro model (3.1 Pro High) weaker than Claude Sonnet; frequent network failures; skill overuse inflates context |
| Omniroute | Model router | (+) | Routes GPT-5.4 through the Claude Code interface; preserves UX while switching models | Additional configuration layer |
| GLM-5.1 | LLM | (+) | Strong coding performance; works via OpenCode | Less ecosystem tooling; newer to Western markets |
| Deepseek v3.2 | LLM | (+) | Cheap; works via OpenRouter; "80% as good" per one user report | Not as capable as top-tier models |
| Gemma 4 (31b) | Local LLM | (+) | Sonnet 4.5-level performance; runs locally; no quota limits | Large model requiring significant VRAM; gap to Opus-level reasoning remains |
| Firebase/Supabase | BaaS | (+/-) | Fast prototyping; generous free tiers | Firebase defaults are insecure; Supabase egress limits hit quickly for active apps |

The overall tool landscape is in upheaval. Claude Code, which dominated the agentic coding space through February, is losing users to Codex. The migration is hampered by switching costs (CLI configuration, workflow files, muscle memory). The pragmatic response is multi-subscription: maintaining both Claude and Codex accounts and switching based on current quality. A notable workflow pattern within Antigravity: u/Distinct-Survey475 (23 upvotes) articulated "Opus can write really good implementation plans, and Flash can execute them."


5. What People Are Building

| Project | Who built it | What it does | Problem it solves | Stack | Stage | Links |
|---|---|---|---|---|---|---|
| Proxima | u/Personal_Offer1551 | MCP server connecting 4 AI providers without API keys | Context loss between AI tools; API cost avoidance | MCP, browser sessions, Windows | Beta | GitHub |
| AgentHandover | u/Objective_River_5218 | Watches screen, generates reusable skills for coding agents | Manual prompt/agent configuration; workflow knowledge transfer | macOS, Ollama, local-first | Alpha | GitHub |
| GridWatch | u/MajorDifficulty | Desktop dashboard for GitHub Copilot CLI session monitoring | No visibility into Copilot CLI sessions, token usage, and session history | Electron, TypeScript | Shipped (v0.28.0) | GitHub |
| matchy.gg | u/Difficult-Season3600 | Tinder-style app for finding gaming buddies via Steam | LFG posts are ineffective; no playtime-based matching | PHP, vanilla JS, PWA | Shipped | matchy.gg |
| Hoardo | u/duus_j | Home inventory app: rooms, boxes, items with search | Storage-room chaos; existing solutions too complex | Lovable, Cursor, Claude Sonnet via OpenClaw | Shipped | hoardo.com |
| IndieAppCircle | u/luis_411 | Platform for indie app testing exchange via credits | Indie developers can't get real user feedback | Not specified | Shipped | indieappcircle.com |
| RoamPads | u/who_opsie | Airbnb filter for remote-work-friendly listings | Can't filter Airbnb for workspace quality | React, Next.js, Supabase, Airtable, Vercel | Beta | roampads.com |
| Arkhaven | u/talonxzxz | Space-exploration colony survival game | 15-year game designer couldn't build without an engineering team | Natural-language prompts only | Alpha | omw.run/arkhaven |
| Diablo 2-style AARPG | u/sharkymcstevenson2 | Vibe-coded dark-fantasy AARPG with multiplayer | AI game-development capability testing | Not specified | Alpha (day 7) | Video demo |
| Contral.ai | u/contralai | IDE that teaches you what you're vibe coding in real time | Knowledge gap in AI-generated code | Not specified | Alpha | contral.ai |
| Caffeine Curfew | u/pythononrailz | Apple Watch app tracking caffeine half-life | No caffeine tracking on wearables | Apple Watch, iOS | Shipped (2,500 users) | App Store |

Proxima addresses the pain point of losing context between AI providers. By using browser sessions instead of API keys, it avoids the cost layer entirely. AgentHandover takes a novel approach to agent configuration: rather than writing instruction files manually, it observes how a developer works and synthesizes reusable skills, using local inference via Ollama to avoid cloud dependency. Hoardo is a textbook case of finding distribution outside tech communities — posted in r/organizing, it got 1,300 upvotes and 1,300+ users from people who cared about the problem, not the technology.

A repeated pattern: builders independently described resisting feature creep as their most important learning. u/duus_j: "The version that got 1,300 upvotes was simpler than a Google Sheet with better UX."


6. New and Notable

Cache TTL Regression Documented With Hard Data

The most significant new signal today is the documentation of Anthropic's cache TTL regression from 1h to 5m, backed by 119,866 API calls across two independent machines. This is not speculation — it is observable in the JSONL session logs that Claude Code writes locally. The finding explains the sudden quota acceleration that many users experienced starting in early March and directly undermines Anthropic's position that nothing has changed. If the issue is confirmed and addressed, it could resolve a significant portion of the quota complaints. See GitHub issue #46829.

Chinese AI Models as Untapped Alternatives

u/leoyang2026 flagged that Chinese AI providers (Moonshot Kimi, Zhipu GLM, MiniMax) are offering aggressive Pro/Ultra plans with large unused quotas to gain market share. While Western providers are tightening limits, Chinese competitors have excess capacity and are pricing accordingly. This market asymmetry has not been widely discussed in English-language AI coding communities (Dev in China here).

GitHub Copilot Retiring Opus 4.6 Fast and Enforcing New Limits

u/TastyNobbles surfaced the GitHub blog changelog from April 10 announcing enforced limits and retirement of Opus 4.6 Fast from Copilot Pro. The tightening of limits is now a cross-platform phenomenon — Claude, Copilot, and Codex are all constraining usage simultaneously (Details on the new limits).

Workflow Discipline as the Real Differentiator

The 172-comment "Maybe I'm an outlier" thread revealed that the gap between frustrated and satisfied Claude users may not be entirely about model quality. Users reporting good results consistently described: tight CLAUDE.md files with explicit constraints, context capped below 200k tokens, fresh sessions per task, and automated test hooks. This suggests that a significant portion of the degradation complaints may be amplified by workflow practices that worked at earlier context sizes but break at scale.


7. Where the Opportunities Are

[+++] Vendor-agnostic agent configuration and orchestration — The demand for switching between Claude, Codex, Copilot, and other providers without rebuilding workflow infrastructure appears in at least eight threads today. Projects like Proxima, Omniroute, and dual .md file strategies are all workarounds for the same missing layer: a standard configuration format and routing layer that lets developers treat AI coding agents as interchangeable utilities.

[+++] Security linting and deployment guardrails for AI-generated apps — The Quittr breach (600K records, 100K minors) is the fourth major Firebase default-rules incident in a year. An automated pre-deployment security scan — checking Firebase rules, exposed credentials, open databases, missing auth — integrated into AI coding workflows would address a high-urgency failure mode.

[++] AI coding session observability and cost management — GridWatch, token tracking analyses, and the cache TTL investigation all point to the same gap: developers cannot see what their AI tools are doing under the hood. A cross-platform dashboard showing token usage, cache behavior, and cost projection would serve the professional user segment.

[++] Local-first AI coding agents at production quality — Users explicitly mention Gemma 4 (31b), Ollama, and local inference as escape routes from subscription dependency. Building tooling around local-first inference (as AgentHandover does) positions for the transition when local models match cloud quality.

[+] Non-tech distribution channels for vibe-coded apps — Hoardo's success via r/organizing suggests the vibe-coded app ecosystem is oversaturated within tech communities but underexplored in domain-specific communities where users care about problems, not technology.


8. Takeaways

  1. Claude Opus 4.6 is experiencing its worst quality crisis to date, and the community is no longer debating whether it happened — they are debating what to do about it. The migration to Codex, Copilot, GLM-5.1, and multi-provider setups is accelerating. (Opus 4.5 vs Opus 4.6)

  2. The cache TTL regression from 1h to 5m, documented across 119,866 API calls, provides a concrete technical explanation for the quota complaints that have dominated r/ClaudeCode for weeks. This is the most actionable new evidence of the day. (GitHub issue #46829)

  3. Workflow discipline — not model choice — may be the primary differentiator between satisfied and frustrated AI coding users. The 172-comment "outlier" thread consistently showed that users with structured instruction files, context caps, and fresh-session habits report good results even with the current Opus. (Maybe I'm an outlier here?)

  4. The Quittr breach — 600K records including 100K minors exposed through Firebase defaults — is the clearest warning yet that vibe-coded apps need security guardrails built into the deployment pipeline, not bolted on after launch. (Quittr is the vibe coding success story of 2025)

  5. Multi-provider workflows are emerging as the professional standard. Maintaining subscriptions to two or more AI coding services, with vendor-agnostic configuration files, is being treated as basic operational hygiene rather than an edge case. (Get Claude code and codex subscriptions)

  6. The "clone army" concern is real but the unsolved problem is maintenance, not building. AI makes building fast; no one has solved the problem of maintaining, updating, and supporting the resulting apps at scale. (Are we on the brink of seeing an infinite number of clones)

  7. Limits are tightening across all major platforms simultaneously — Claude, Copilot, and Codex — suggesting this is an industry-wide capacity constraint, not a single vendor's decision. (Details on the new limits)