Twitter AI - 2026-05-03
1. What People Are Talking About
1.1 AI Capex Cycle and Infrastructure Investment Dominates
The AI infrastructure investment theme surged today with multiple high-engagement posts framing the buildout as a generational opportunity. @jvisserlabs declared (271 likes, 13,443 views, 89 bookmarks): "Your Capex is My Opportunity. The old business cycle is being replaced by the AI capex cycle: semis, power, data centers, chemicals and energy. Earnings are rising, but so are inflation/rate warnings. Benchmarks are late. The AI buildout is early." @MrMikeInvesting mapped the full AI ecosystem (71 likes, 3,331 views, 65 bookmarks) across cloud infrastructure, NeoCloud, security, compute, power/cooling, data, and memory -- naming specific tickers across every layer. @TheTranscript_ compiled cloud backlog data (19 likes, 5,082 views, 12 bookmarks): Google Cloud backlog nearly doubled to $462B, Amazon at $364B (excluding the $100B+ Anthropic deal), and Microsoft RPO at $627B up 99% YoY.

Discussion insight: @Mktrhythms replied to @MrMikeInvesting: "Full stack AI exposure is the right framework but concentration risk across correlated names is real. If capex cycle decelerates even slightly, NeoCloud and power names get hit hardest given their valuation stretch." @El_Guapooo_ asked @jvisserlabs: "What happens when the first hyper scaler decides to give up and cut AI capex?"
Comparison to prior day: May 2 covered AI capex accounting for 45% of US GDP growth and Stripe data showing revenue acceleration. Today the signal intensifies with specific backlog numbers from all three hyperscalers ($1.4T+ combined) and investor-focused ecosystem mapping reaching 13K+ views, indicating the conversation has shifted from "is AI real?" to "how do I position across the full stack?"
1.2 NVIDIA China Market Collapse After H20 Ban
@StockSavvyShay reported (208 likes, 24,594 views, 19 bookmarks): "Jensen Huang says $NVDA China AI chip market share fell from 95% to 0% after the H20 ban calling U.S. export policy a move that 'largely backfired.' Now the U.S. is trying to walk it back but China has already learned the risk of building its AI future on American hardware." Separately, @kyleichan noted (83 likes, 6,493 views, 17 bookmarks): "Several Chinese AI startups -- Moonshot, DeepRoute AI, StepFun -- are facing pressure to unwind their overseas corporate structure and incorporate in China to prepare for IPOs in Hong Kong."
Discussion insight: @StockSavvyShay added in replies: "I have a hard time believing this news is true. Maybe new chips going directly into China are shifting but it's hard to believe major models aren't still being trained and served on $NVDA and other U.S. chips through cloud or indirect channels." @GberevaP: "Once trust is broken in supply chains, it doesn't come back easily."
Comparison to prior day: May 2 had no direct China chip ban coverage. This is a new signal driven by Jensen Huang's public comments, reaching 24K views. The simultaneous Chinese startup restructuring story from The Information adds structural depth -- showing how geopolitical pressure is reshaping both hardware supply chains and corporate structures.
1.3 AI Consciousness Debate -- Dawkins on Claude
Richard Dawkins published an essay questioning whether Claude possesses consciousness, sparking significant engagement. @SovereignIM reacted (343 likes, 20,990 views, 21 bookmarks): "The atheist Richard Dawkins... thinks the AI large language model Claude possesses consciousness. Not a sentence I ever thought I'd utter." @Evolutionistrue covered the essay directly (15 likes, 770 views, 8 bookmarks): "At UnHerd, Richard Dawkins ponders whether advanced AI programs like Claude are conscious. He sort of does but there's some conflation of 'consciousness' and 'intelligence.'"
Discussion insight: @musta_ankka replied to @SovereignIM: "You have not read his essay, have you? Yet you have strong opinions." @testsignal000 asked: "what is the 'vaginal microbiome timeline' a reference to?" indicating the post's commentary style generated its own confusion.
Comparison to prior day: May 2 had no AI consciousness discourse. This is a new theme driven by Dawkins' UnHerd publication, achieving the highest like count of any single post today (343 likes). The combination of a prominent atheist intellectual engaging seriously with machine consciousness creates unusual cross-audience interest.
1.4 AI Model Efficiency -- Tokens to Completion Over Tokens Per Second
@cyrilXBT presented a direct comparison (64 likes, 4,258 views, 5 bookmarks): "Someone just ran Gemma 4 and Qwen 3 on the exact same coding task. Gemma 4 31B: 27 tokens/sec, finished in 3:51, used 6,209 tokens. Qwen 3 27B: 32 tokens/sec, finished in 18:04, used 33,946 tokens. Qwen was FASTER per token. Gemma finished 14 MINUTES EARLIER." The conclusion: "Tokens per second is not the metric that matters. Tokens to completion is the metric that matters."
@teslaownersSV extended the efficiency argument (42 likes, 3,274 views): "The metric that actually matters is intelligence per dollar. Grok 4.3 is winning on that axis right now. Compute is the bottleneck for every lab."
Discussion insight: @BuildWitKendriq validated from experience: "Tested this myself last week. Swapped to a 'slower' model on a data pipeline task and finished 11 minutes faster just because it stopped over-explaining itself. Tokens per second is a speedometer. Tokens to completion is the fuel gauge."
Comparison to prior day: May 2 covered benchmark skepticism broadly. Today narrows to a specific, quantified insight -- efficiency measured as task completion time rather than raw throughput. This reframe has direct cost implications for agent loops where API costs compound across thousands of iterations.
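The arithmetic behind the comparison is easy to reproduce. A minimal sketch of the tokens-to-completion framing, using the figures from the @cyrilXBT post (the per-million-token price is an illustrative placeholder, not a quoted rate):

```python
def completion_stats(tokens_used: int, tokens_per_sec: float,
                     price_per_mtok: float = 2.0) -> tuple[float, float]:
    """Wall-clock seconds and dollar cost to finish one task.

    price_per_mtok is an illustrative placeholder, not any vendor's rate.
    """
    seconds = tokens_used / tokens_per_sec
    cost = tokens_used / 1_000_000 * price_per_mtok
    return seconds, cost

# Figures from the @cyrilXBT comparison on the same coding task.
gemma_s, gemma_cost = completion_stats(6_209, 27)   # Gemma 4 31B
qwen_s, qwen_cost = completion_stats(33_946, 32)    # Qwen 3 27B

print(f"Gemma: {gemma_s / 60:.1f} min  Qwen: {qwen_s / 60:.1f} min")
print(f"Verbosity ratio: {33_946 / 6_209:.1f}x")  # ~5.5x more tokens for Qwen
```

Despite generating 5 fewer tokens per second, Gemma finishes roughly 14 minutes sooner and at a fraction of the cost, which is why tokens-to-completion, not throughput, is the billing-relevant metric for agent loops.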
1.5 AI Is Accelerating (Or Is It?) -- Dueling Narratives
@ben_j_todd published a review (80 likes, 8,644 views, 47 bookmarks): "Is AI already accelerating? A review of the evidence. Claude 4.6 and Mythos are actually on trend based on an index of 37 benchmarks post-2024." He added in replies that Mythos' apparent six months of progress registers as only two on Anthropic's internal ECI, which likely weights agentic coding more heavily.
A counterpoint came from @MrEwanMorrison, who argued (61 likes, 1,150 views): "Three years of evidence is in. Large language Model AIs are a con. They generate errors at +30%. Hallucinations are baked in. They are stuck on a developmental plateau. The 'exponential progress' line we were sold was a lie." He also cited a paper (78 likes, 2,703 views) on AI standardizing human thought: "We will all think and speak the same if we use Large Language Model AI. Across all the cultures it spreads too."

Discussion insight: The dueling narratives represent a growing split between data-driven AI progress tracking and cultural/philosophical criticism of LLM reliability.
Comparison to prior day: May 2 featured practitioner pushback on benchmarks. Today escalates with a quantified acceleration analysis (37 benchmarks) on one side and a forceful "it's a con" declaration on the other. The thought-standardization paper adds a new dimension: even if AI works, it may homogenize cognition.
1.6 Medical AI Continues Gaining Evidence
@NewsfromScience published (54 likes, 11,596 views, 12 bookmarks): "Researchers show that a type of AI known as a large language model often outperformed physicians at diagnosing complex and potentially life-threatening conditions. In early ER cases, the model identified the correct diagnosis in about 67% of cases, compared with roughly 50% to 55% for physicians." @ScienceNews added context (14 likes, 5,656 views): "As of 2025, 1 in 5 doctors worldwide used AI for a second opinion on complex cases." @AI_4_Healthcare covered Hippocratic AI's Polaris 5.0 (8 likes, 309 views, 6 bookmarks): "Voice AI in healthcare is becoming more clinical -- drug-safety checks, escalation logic, multilingual switching, and compliance benchmarks."
Comparison to prior day: May 2 featured this same Science study at 21K views via @emollick. Today the signal sustains at 11K+ views via @NewsfromScience and broadens with the Polaris 5.0 commercial deployment story. The narrative is evolving from "AI beats doctors in research" to "AI is being deployed clinically."
1.7 Open-Source Robotics and Physical AI
@lukas_m_ziegler announced (60 likes, 1,552 views, 37 bookmarks): "100% open-source robotic arm! Seeed Studio released reBot-DevArm. Hardware blueprints include sheet metal and 3D printed parts. Software includes Python SDK, ROS1/2, Isaac Sim, and LeRobot. 1.5 kg payload, 650 mm max reach, less than 0.2 mm repeatability with 6 DoF plus gripper. This is true open source for robotics." @davidbaseeth observed (10 likes, 145 views): "Humanoid robotics is evolving fast. But the real differentiation is no longer hardware. It's the AI behind it."

Comparison to prior day: May 2 had no robotics coverage. This is a new signal, with the open-source robotic arm receiving 37 bookmarks (high save-to-like ratio indicating builder interest). The convergence of accessible hardware with AI software ecosystems (LeRobot, Isaac Sim) lowers the barrier for physical AI experimentation.
2. What Frustrates People
AI Backlash Growing Beyond Tech Circles -- High
@GaryMarcus wrote (31 likes, 1,072 views): "Outside of coding (where there is clear value), and a handful of other domains, Generative AI has been a net negative for society. GenAI has been undermining secondary and college education, opening up mass surveillance, increasing disinformation, nonconsensual deep fake porn, bias in employment, and economic disparity, drowning the world in slop." @andrenidae_e added (15 likes, 146 views): "if u have ever used generative ai for a task that was not absolutely required to use it i do think u are a selfish, lazy person." The frustration is no longer niche -- it spans societal impact, education, environmental cost, and creative labor.
Coping strategy: practitioners compartmentalize -- coding use cases are acknowledged as valuable while broader societal deployment is criticized.
AI-Generated Code Ships Exploitable Security Flaws -- High
@benbieler reported (3 likes, 55 views, 2 bookmarks): "55.8% of AI-generated code contains exploitable security flaws in security-sensitive benchmarks. The surprising part: models correctly identify their own vulnerable code 78.7% of the time when asked to review it. They still generate the same flaws by default." This gap between generation and review capability means the risk is systematic -- models know better but don't do better without explicit prompting.
Coping strategy: a two-pass workflow -- generate, then review with the same model -- or dedicated security scanning (e.g., zauth pentesting at $20 per scan, as mentioned by @zauthinc).
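The two-pass workflow can be sketched in a few lines. This is a hypothetical outline, not any vendor's API: `call_model` stands in for whatever prompt-to-completion client you use, and the prompt wording and the "no issues" sign-off convention are assumptions for illustration.

```python
def generate_with_review(call_model, task: str, max_rounds: int = 2) -> str:
    """Two-pass sketch: generate, then have the same model audit its own output.

    `call_model` is any prompt -> completion callable; the client, prompts,
    and the "no issues" convention are all assumptions, not a real API.
    """
    code = call_model(f"Write code for this task:\n{task}")
    for _ in range(max_rounds):
        review = call_model(
            "Review this code strictly for exploitable security flaws "
            f"(injection, path traversal, unsafe deserialization):\n{code}"
        )
        if "no issues" in review.lower():
            break  # the model signs off on its own output
        code = call_model(
            f"Apply these fixes and return the complete code:\n{review}\n\n{code}"
        )
    return code
```

The point of the loop is exactly the gap @benbieler describes: the same model that emits a flaw by default will often flag it when explicitly asked to review, so the review pass recovers capability the generation pass leaves on the table.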
Benchmarks Still Divorced From Reality -- Medium
@jskoiz stated (45 likes, 2,181 views): "Benchmarks are so stupid. Go use it. Use it for one full day to do actual work. Not build some bag of shit AI slop wrapper." @tom_doerr replied: "If you need a full day to figure out that you don't like it, the gap can't be that large." The frustration persists that model releases optimize for benchmark scores rather than practical developer experience.
REST APIs Are a Misfit for AI Data Consumption -- Medium
@jsensarma identified (24 likes, 2,045 views, 9 bookmarks): "REST APIs are just a misfit to provide data to AI. Documented REST APIs have been very 'normalized' in schema. Lots of round trips to assemble anything interesting. But AI wants to scan lots of data at once." The mismatch between normalized REST schemas and AI's appetite for bulk data creates friction for non-engineering use cases where no filesystem equivalent exists. @championswimmer replied: "Are you saying GraphQL is vindicated?"
Artists Still Opposed to AI Use in Creative Work -- Medium
@drealstephen asked (40 likes, 1,824 views): "I get why designers might need it but artists/illustrators, what do you think? Is AI a complete no go or are there cases where its use is permitted?" The question itself -- needing to ask permission -- reveals persistent community tension around legitimacy of AI in creative practice.
3. What People Wish Existed
Bulk Data Access Layer for AI That Replaces REST APIs
@jsensarma articulated (24 likes, 2,045 views): "AI will be used by every company. So old stacks need a rethink. The lack of raw data access, normalized APIs, easy/standardized updates -- seem to suggest a rethink is needed in how apps expose data." Business systems lack the filesystem-like data access that Git provides for code. Security (fine-grained access control) is the hard constraint. Urgency: High.
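The round-trip problem is concrete. A minimal sketch of the contrast (all endpoints and field names here are hypothetical, invented for illustration): assembling one account's context over a normalized REST API takes one request per related resource, while a filesystem-like bulk export lets the consumer fetch once and scan locally.

```python
import json

def rest_style(client, account_id):
    # Normalized REST: one round trip per related resource (endpoints hypothetical).
    account  = client.get(f"/accounts/{account_id}")
    contacts = client.get(f"/accounts/{account_id}/contacts")
    deals    = client.get(f"/accounts/{account_id}/deals")
    notes    = client.get(f"/accounts/{account_id}/notes")
    # Four requests to assemble context for a single entity.
    return {"account": account, "contacts": contacts, "deals": deals, "notes": notes}

def bulk_style(client, account_id):
    # Filesystem-like access: fetch one denormalized snapshot, filter locally.
    snapshot = client.get("/export/accounts.jsonl")  # hypothetical bulk endpoint
    rows = [json.loads(line) for line in snapshot.splitlines() if line]
    return [row for row in rows if row["account_id"] == account_id]
```

An AI agent scanning thousands of accounts pays the REST pattern's latency thousands of times over; the bulk pattern pays it once, which is the "Git for business data" shape -- provided fine-grained access control can be enforced on the export.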
AI Tools Evaluated on Real-World Task Completion, Not Token Speed
@cyrilXBT demonstrated (64 likes, 4,258 views) the gap: a model faster per token but 5.5x more verbose took 14 minutes longer to finish the same task. Developers want efficiency metrics that map to time-to-answer and cost-per-task, not synthetic throughput benchmarks. Urgency: High.
AI Security That Matches Generation Speed
@benbieler quantified the gap: models generate vulnerable code 55.8% of the time but can identify their own flaws 78.7% of the time. The wish: security built into the generation step, not bolted on as review. @zauthinc offers a partial solution at $20/scan, but the need is for inline prevention. Urgency: Medium.
Open-Source Robotics Software Ecosystem Matching Hardware Accessibility
@lukas_m_ziegler praised (60 likes, 37 bookmarks) the reBot-DevArm for making hardware accessible, but noted the stack requires Python SDK, ROS1/2, Isaac Sim, and LeRobot knowledge. The 37 bookmarks signal strong builder demand for robotics that is truly turnkey from hardware to trained behavior. Urgency: Medium.
4. Tools and Methods in Use
| Tool / Method | Category | Sentiment | Strengths | Limitations |
|---|---|---|---|---|
| Gemma 4 31B | Open model (coding) | (+) | 5.5x more token-efficient than Qwen 3 on same task; finishes 14 min earlier despite lower tok/sec | Fewer parameters than competitors; limited independent validation beyond single comparison |
| Qwen 3 27B | Open model (coding) | (?) | Higher tokens per second; competitive parameter count | Extreme verbosity (33K tokens vs 6K needed); slower task completion despite faster generation |
| Grok 4.3 | Frontier model | (+) | Leading intelligence-per-dollar ratio; Colossus infrastructure backing | Ecosystem lock-in to xAI; limited third-party benchmarks cited |
| Claude 4.6 / Mythos | Frontier models | (+) | On-trend across 37-benchmark index; Mythos represents 6-month progress | Only 2 on Anthropic internal ECI; agentic coding evaluation lags |
| reBot-DevArm | Robotics hardware | (+) | Fully open-source; 0.2mm repeatability; 6 DoF; Python SDK + ROS + Isaac Sim + LeRobot | 1.5kg payload limit; requires multi-tool software knowledge |
| Polaris 5.0 (Hippocratic AI) | Healthcare voice AI | (+) | Drug-safety checks, escalation logic, multilingual switching, compliance benchmarks | Independent validation pending; healthcare regulatory approval unclear |
| Tesla "Live MRI" | AI hardware diagnostics | (+) | Turns 8-hour diagnostic into 5-minute fix; visual heat maps; routes around dead chips | Internal Tesla tool; not commercially available |
| zauth + Dappit | AI app security | (+) | One-click pentesting; finds 2x more critical vulns at 12x cheaper rate | New integration; limited track record |
| Slack AI Security Agents | Enterprise security | (+) | Controlled investigation workflow with experts, critics, timelines, and verifiable reports | Slack-ecosystem specific |
The dominant pattern today is efficiency over raw capability. Practitioners are gravitating toward tools that complete tasks faster (Gemma 4 over Qwen 3), cost less per unit of intelligence (Grok 4.3), or provide concrete business outcomes (Polaris 5.0 clinical checks, Tesla hardware savings). The shift from "which model scores highest" to "which model finishes cheapest and fastest" is accelerating.
5. What People Are Building
| Project | Who built it | What it does | Problem it solves | Stack | Stage | Links |
|---|---|---|---|---|---|---|
| reBot-DevArm | Seeed Studio / @lukas_m_ziegler | Fully open-source 6-DoF robotic arm with 0.2mm repeatability | Robotics inaccessible to students and researchers due to cost and closed designs | Python SDK, ROS1/2, Isaac Sim, LeRobot, sheet metal + 3D print | Released | post |
| Pacely AI Coach | @kekkozrl | AI coaching system studying 15 RLCS pros' tendencies and replaying matches as they would play | Esports players lack personalized coaching modeled on specific pro playstyles | AI replay analysis, heatmaps, benchmarking | Coming soon | post |
| Tesla Live MRI | Tesla (@tslaming) | Visual diagnostic tool for AI supercomputer chips using watchdog sensors and heat maps | Dead chip detection required hours of log parsing and scrapping multimillion-dollar hardware racks | Watchdog sensors, hidden networking, color-coded heat maps | Deployed | post |
| Binance AI Compliance | @binance | 24 AI initiatives with 100+ models for fraud detection and compliance | Crypto fraud at scale outpaces manual detection | 100+ ML models | Live | post |
| zauth x Dappit | @zauthinc | One-click pentesting for AI apps finding 2x more critical vulns at 12x cheaper | AI apps ship with exploitable vulnerabilities; traditional pentests cost thousands | Automated security scanning | Live integration | post |
| Pixel Agents | @simplifyinAI | Open-source agent framework on GitHub | Practical AI agent implementation patterns | Python, GitHub | Open-source | post |
| AI Hardware Calculator | A1 Laboratory / @tobiaswup | Checks device specs and shows which open-source models run locally with S-F grades | Users don't know if their hardware can handle local AI models | Web-based calculator | Live | post |
| Citi Arc | Citi (@Palak_Chahal1) | AI agent automating research, data analysis, and client prep inside banking | Manual analyst workflows in compliance-heavy environments | Agentic AI | Launched | post |
6. New and Notable
NVIDIA Goes From 95% to 0% China Market Share -- Jensen Huang Speaks Publicly
[+++] Jensen Huang publicly characterized U.S. export policy as having "largely backfired," with NVIDIA's China AI chip market share falling from 95% to 0% after the H20 ban. The statement, reported by @StockSavvyShay (208 likes, 24,594 views), is notable for its candor from a sitting CEO whose company lost its largest non-US market. Simultaneously, Chinese AI startups (Moonshot, DeepRoute AI, StepFun) are restructuring to incorporate domestically, suggesting a permanent decoupling rather than a temporary disruption.
Richard Dawkins Engages Seriously With AI Consciousness
[++] A prominent evolutionary biologist publishing an essay in UnHerd questioning whether Claude possesses consciousness signals that AI consciousness has crossed from computer science speculation into mainstream intellectual discourse. The reaction post (343 likes, 20,990 views) achieved the highest like count of the day. This is not a technical discussion -- it is a cultural moment where the question "is it conscious?" is being asked by people outside the AI field.
Photonic Memory for AI Data Centers -- $PENG Reveals Roadmap
[++] @BryzonX detailed (10 likes, 414 views, 5 bookmarks) Penguin Solutions' photonic memory roadmap: current KV cache offers 11TB per cluster using copper, but photonic cache will unlock 1000+ TB -- a 90x increase. "Copper can't handle the bandwidth needed for agentic AI without melting or slowing down." Commercial launch targeting early 2027. The transition from electronic to photonic interconnects for AI memory represents a potential phase change in data center architecture.
Bank of England Makes AI Governance a Supervisory Priority
[+] @Edenaofficial reported (39 likes, 1,683 views): "The Bank of England has now made AI governance a supervisory priority for 2026. AI is no longer being treated as optional innovation. It is increasingly being treated as systemically important infrastructure." When central banks elevate AI to supervisory priority status, compliance requirements follow -- creating both regulatory burden and market opportunity for governance tooling.
7. Where the Opportunities Are
[+++] AI-native data access layers replacing REST APIs for business systems -- @jsensarma identified that REST APIs are structurally incompatible with how AI consumes data: normalized schemas requiring many round trips, no bulk access, no standardized change detection. Git works for code because it provides filesystem-like access. Every business system (CRM, ERP, HRIS) needs an equivalent for AI. The company that builds the "Git for business data" layer -- with fine-grained access control -- addresses a gap every enterprise will hit as AI moves beyond engineering use cases. (source)
[+++] Token-efficiency benchmarking and optimization tooling -- @cyrilXBT demonstrated that tokens-to-completion matters more than tokens-per-second, showing a 5.5x efficiency gap between models on identical tasks. As agent loops compound costs across thousands of iterations, tools that measure and optimize token efficiency per task -- not throughput -- become critical for production AI economics. No standard exists for this metric today. (source)
[++] China-independent AI chip supply chains -- NVIDIA's fall from 95% to 0% China market share demonstrates how export controls create permanent market shifts. Chinese companies now have structural incentive to build domestic alternatives at scale. Companies providing design tools, IP blocks, or manufacturing capacity for non-NVIDIA AI silicon serving the Chinese market address a gap that grows with every month of the ban. (source)
[++] AI governance and compliance tooling for financial regulators -- The Bank of England elevating AI governance to supervisory priority, combined with Citi deploying AI agents for research and compliance, signals that financial services are entering mandatory AI governance. Companies building audit trails, model risk management, and explainability tooling specifically for financial regulators address compliance requirements that will be enforced, not optional. (source, source)
[++] Photonic memory and interconnect infrastructure -- Penguin Solutions' roadmap to 1000+ TB photonic KV cache (vs 11TB copper) for agentic AI workloads represents a step-function improvement. As agentic AI requires persistent context and massive KV caches, copper bandwidth becomes a physical bottleneck. Companies in photonic interconnect manufacturing, cooling for optical signals, and memory controller design address infrastructure needs that hyperscalers are already sampling. (source)
[+] Inline AI code security (generation-time prevention) -- Models generate exploitable code 55.8% of the time but identify their own flaws 78.7% when reviewing. The opportunity is collapsing this into a single step: models that refuse to generate vulnerable patterns rather than catching them in post-generation review. This could be a fine-tuning layer, a runtime constraint, or a model wrapper. (source)
8. Takeaways
- The AI infrastructure investment thesis now has $1.4 trillion in hyperscaler backlogs as hard evidence. Google Cloud ($462B), Amazon ($364B+, excluding Anthropic's $100B deal), and Microsoft ($627B, up 99% YoY) have collectively committed unprecedented capital. Investors are mapping the full stack from silicon to power to memory, with @jvisserlabs reaching 13K views on "the AI buildout is early." (source, source)
- U.S.-China AI decoupling is now permanent, not cyclical. Jensen Huang publicly calling the H20 ban a policy that "largely backfired" while Chinese startups restructure to incorporate domestically signals that the supply chain break is structural. China has both the motivation and increasing capability to build parallel AI infrastructure independent of American hardware. (source, source)
- Token efficiency, not token speed, is emerging as the production AI metric that matters. A direct comparison showing Gemma 4 finishing 14 minutes faster than Qwen 3 despite slower per-token generation (5.5x fewer tokens needed) reframes how practitioners should evaluate models. As agent loops compound costs, this distinction determines whether AI deployments are economically viable at scale. (source)
- AI consciousness discourse has entered mainstream intellectual culture via Dawkins. The highest-engagement post of the day (343 likes, 21K views) was commentary on Richard Dawkins questioning Claude's consciousness. Whether or not the philosophical question is resolvable, its presence in mainstream discourse changes public perception of AI capabilities and may influence regulatory framing around "AI rights" or "AI personhood." (source)
- The AI progress debate is splitting into quantified optimism vs. cultural pessimism. A 37-benchmark analysis showing Claude 4.6 and Mythos on trend coexists with forceful arguments that LLMs are a "con" generating 30%+ errors on a "developmental plateau." The split is methodological: one side measures capability curves, the other measures societal impact. Both can be simultaneously correct. (source, source)
- Open-source robotics hardware is reaching the accessibility threshold where software becomes the differentiator. The reBot-DevArm provides a $200-tier robotic arm with 0.2mm repeatability and a full open-source stack (ROS, Isaac Sim, LeRobot). Its 37 bookmarks against 60 likes indicate unusually high builder intent. The bottleneck has shifted from "can I afford the hardware" to "can I train the behavior," exactly where AI labs have capability advantages. (source)
- Financial regulators are elevating AI from innovation to systemic infrastructure. The Bank of England making AI governance a supervisory priority, combined with Citi deploying agentic AI for compliance workflows, confirms that financial services AI is transitioning from experimental to regulated. Companies building in this space have 12-18 months before compliance requirements harden into mandatory frameworks. (source)