💡 INSIGHT
Anthropic Partners with SpaceX for Colossus 1 Compute Access
Anthropic just secured access to SpaceX's Colossus 1 supercomputer, one of the largest GPU clusters on the planet. This isn't a cloud deal; it's a direct compute partnership that signals AI labs are moving beyond AWS and Azure to lock down dedicated silicon. The immediate user impact: your Claude limits are doubling and peak-hour throttling is gone. For the industry, this is a shot across the bow: if SpaceX is selling its compute to Anthropic, what does that say about xAI's own frontier ambitions? (94,395 likes | 8,403 RTs) Read more →
Claude Usage Limits Double Immediately After SpaceX Deal: The compute pipeline from deal to user value took hours, not months. Peak-hour limit reductions are rolled back and 5-hour limits are doubled across the board. This is the fastest any AI lab has translated infrastructure investment into product improvement. (2,710 likes | 97 RTs) Read more →
DeepSeek Eyes $50B Valuation in First Fundraise: The Chinese AI lab that rewrote the cost curve for frontier models is now raising at a valuation that puts it in the same tier as Anthropic. $50B for a company whose open-source models embarrassed labs spending 10x more on training; investors are betting that cost-efficiency is the real moat. (77 likes | 19 RTs) Read more →
Silicon Valley Pivots from APIs to Full Services Companies: Latent Space connects the dots: Anthropic's enterprise push, Sierra hitting $150M ARR, Meta's Hatch. AI labs are done selling inference endpoints. The new play is owning the full stack from model to managed service, and the margins look nothing like SaaS. Read more →
🚀 LAUNCH
Claude Managed Agents Ship Dreaming, Outcomes, and Multiagent Orchestration
Claude Managed Agents just got the biggest platform expansion since launch. Dreaming (research preview) lets agents reason asynchronously between sessions: they don't just wait for your next prompt, they think. Outcomes give agents persistent goals that survive across interactions, and multiagent orchestration lets you spin up teams of agents that coordinate on complex tasks. Webhooks complete the production story: your agents can now push events to your infrastructure in real time. This is the clearest signal yet that the future of AI isn't chat; it's autonomous, persistent computation. (8,803 likes | 563 RTs) Read more →
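Webhooks are the piece most builders will wire up first. A minimal receiver sketch, assuming a hypothetical JSON event shape (the `event` / `agent_id` field names are illustrative, not Anthropic's documented schema):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # in-memory log of agent events

def handle_event(payload: dict) -> bool:
    """Validate and record one webhook event.
    Field names here are hypothetical, not Anthropic's schema."""
    if not {"event", "agent_id"} <= payload.keys():
        return False
    received.append(payload)
    return True

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body pushed by the agent platform.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        ok = handle_event(json.loads(body or b"{}"))
        self.send_response(200 if ok else 400)
        self.end_headers()

# To serve: HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

The 200/400 split matters in practice: most webhook senders retry on non-2xx responses, so rejecting malformed payloads loudly is safer than swallowing them.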
HuggingFace Launches Agentic Robotics App Store: 300+ ready-to-deploy apps, 10,000+ connected robots. HuggingFace is applying the model hub playbook to physical AI: browse, download, deploy to your robot. The marketplace model worked for ML weights; now they're betting it works for actuators. (446 likes | 60 RTs) Read more →
Tencent's Hy3 Captures #1 on OpenRouter by Usage Volume: Forget benchmarks: developers are voting with tokens. Hunyuan Hy3 surged 298% in weekly volume to claim the top spot on OpenRouter's leaderboard in just two weeks. A free, open-source agent/coding model outpacing paid frontier models in actual usage is a data point you can't ignore. (58 likes | 17 RTs) Read more →
Google Releases Gemma 4 31B Any-to-Any Assistant Model: Gemma 4 31B in an any-to-any variant optimized for assistant tasks: multimodal input and output in a single open model you can run locally. The 30-40B parameter range is becoming the sweet spot for serious local deployment. (121 likes | 4.2K downloads) Read more →
🔧 TOOL
OpenAI Open-Sources MRC: A Networking Protocol Built for AI Training Clusters
OpenAI partnered with AMD, Broadcom, Intel, Microsoft, and NVIDIA to release Multipath Reliable Connection (MRC), an open networking protocol designed specifically for keeping massive GPU clusters in sync during training runs. When your training job spans thousands of GPUs, standard TCP/IP becomes the bottleneck. MRC solves the multipath reliability problem at scale, and the five-company consortium behind it signals this could become the default for AI infrastructure. If you run training clusters, read the spec. (4,773 likes | 521 RTs) Read more →
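The core idea behind multipath reliability can be sketched in a few lines: stripe sequenced chunks across several network paths, retransmit whatever a path drops, and reassemble by sequence number so no single flaky link stalls the whole transfer. This is a toy model of the concept, not MRC's actual wire format (the spec is the authority there):

```python
import random

def send_multipath(chunks, n_paths=3, drop_rate=0.0, seed=0):
    """Toy multipath striping: tag each chunk with a sequence number,
    spread chunks round-robin across paths, retransmit losses, and
    reorder on the receive side. Illustrative only, not the MRC protocol."""
    rng = random.Random(seed)
    paths = [[] for _ in range(n_paths)]
    for seq, chunk in enumerate(chunks):
        paths[seq % n_paths].append((seq, chunk))  # round-robin striping

    delivered, missing = [], []
    for path in paths:
        for seq, chunk in path:
            if rng.random() < drop_rate:
                missing.append(seq)        # receiver NACKs these
            else:
                delivered.append((seq, chunk))

    # Reliability layer: resend lost chunks over any surviving path.
    for seq in missing:
        delivered.append((seq, chunks[seq]))

    delivered.sort(key=lambda p: p[0])     # reassemble in order
    return [chunk for _, chunk in delivered]

# Even with 30% simulated loss, the message arrives intact and ordered.
msg = [f"chunk-{i}" for i in range(10)]
assert send_multipath(msg, drop_rate=0.3) == msg
```

The real protocol's value is doing this in hardware-friendly form at cluster scale; the sketch only shows why sequencing plus per-path retransmission beats a single TCP flow for this workload.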
AWS Agent Toolkit Ships 15,000+ APIs via Single MCP Server: AWS just made its entire service catalog agent-accessible. One Remote MCP server, 40+ pre-built skills, 3 agent plugins, 15,000+ APIs. This is the largest tool surface any cloud provider has shipped for agents: plug it into your stack and your agent can provision infrastructure, query databases, and manage deployments. (333 likes | 63 RTs) Read more →
Anthropic Python SDK Hits v0.100.0 with Full Managed Agents Support: The milestone release lands same-day as the platform announcements: multiagent orchestration, outcomes, webhooks, and vault validation are all immediately usable in code. Run pip install --upgrade anthropic and start building. Read more →
Next.js 16.2.5 Patches High-Severity DoS and Middleware Bypass: Two critical security fixes: a DoS vulnerability via Server Components and a middleware/proxy bypass in App Router. If you're running Next.js in production, stop reading and update: npm i next@16.2.5. Read more →
Claude Code Desktop Gets Visual Annotation and DOM Context: You can now draw directly on your UI with a pencil tool and attach DOM elements as context in Claude Code desktop. This bridges visual debugging and agent-assisted coding: circle the bug, and Claude sees exactly what you see. (164 likes | 8 RTs) Read more →
🔬 RESEARCH
DeepMind Partners with EVE Online as an AI Alignment Testbed
Google DeepMind is dropping AI agents into EVE Online, and this isn't a gaming stunt. EVE's player-driven economy, emergent coalitions, and adversarial social dynamics create exactly the kind of complex, deceptive environment where alignment problems actually manifest. You can't study emergent deception in a sanitized benchmark. You need a world where thousands of real humans are actively trying to manipulate, betray, and outmaneuver each other, and now AI agents will join them. The research output from this partnership could reshape how we think about agent safety. (1,470 likes | 169 RTs) Read more →
SubQ Claims 50x Faster and 20x Cheaper Than Frontier Models: A new architecture claiming 50x speed and 20x cost improvements over Opus 4.7 and GPT-5.5, with a 12M context window. If independently verified, this would fundamentally break the current inference cost structure. Extraordinary claims; wait for the independent benchmarks. (732 likes | 59 RTs) Read more →
Correctness Before Corrections: Why First-Try RL Beats Iterative Fixing: ServiceNow AI's research shows that training RL agents to get code right on the first attempt consistently outperforms training them to iteratively debug. The takeaway for anyone building code-generation pipelines: reward first-pass correctness, not self-repair loops. Read more →
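The takeaway is concretely a reward-shaping choice. A minimal sketch of the two objectives over a trajectory of test-passing attempts (the function names and decay scheme are illustrative, not the paper's exact formulation):

```python
def first_pass_reward(attempts):
    """Reward only trajectories whose FIRST attempt passes the tests.
    `attempts` is a list of booleans: did attempt i pass?"""
    return 1.0 if attempts and attempts[0] else 0.0

def self_repair_reward(attempts, decay=0.5):
    """Reward any eventually-passing trajectory, discounted per retry.
    Decay factor is an illustrative choice, not from the paper."""
    for i, passed in enumerate(attempts):
        if passed:
            return decay ** i
    return 0.0

# A fail-fail-pass trajectory earns nothing under the first-pass
# objective but partial credit under the repair objective, so the
# two objectives push the policy in different directions.
trajectory = [False, False, True]
assert first_pass_reward(trajectory) == 0.0
assert self_repair_reward(trajectory) == 0.25
```

The finding suggests the partial credit in the second objective teaches the policy to lean on the repair loop instead of getting the generation right up front.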
🏗️ BUILD
Tilde.run: Agent Sandbox with Transactional Filesystem: Every file operation gets transactional semantics and version history: roll back any agent mistake instantly. This solves the "agent broke my repo" problem at the filesystem level. If you're letting coding agents loose on real codebases, this is the safety net you've been missing. (119 likes | 89 RTs) Read more →
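The rollback idea can be sketched as snapshot-before, restore-on-failure. Tilde.run's actual mechanism isn't public; this toy version just copies the working tree, which shows the semantics without the efficiency of a real copy-on-write filesystem:

```python
import shutil
import tempfile
from pathlib import Path

class TxnDir:
    """Toy transactional directory: snapshot before the agent runs,
    roll back if it broke something. Illustrative, not Tilde.run's design."""

    def __init__(self, root):
        self.root = Path(root)
        self.snapshot = None

    def begin(self):
        # Full copy stands in for a real copy-on-write snapshot.
        self.snapshot = Path(tempfile.mkdtemp()) / "snap"
        shutil.copytree(self.root, self.snapshot)

    def rollback(self):
        shutil.rmtree(self.root)
        shutil.copytree(self.snapshot, self.root)

    def commit(self):
        shutil.rmtree(self.snapshot.parent)

# Usage: the agent clobbers a file; one call restores the tree.
work = Path(tempfile.mkdtemp()) / "repo"
work.mkdir()
(work / "main.py").write_text("print('ok')\n")
txn = TxnDir(work)
txn.begin()
(work / "main.py").write_text("oops")   # simulated agent mistake
txn.rollback()
assert (work / "main.py").read_text() == "print('ok')\n"
```

Doing this at the filesystem layer rather than in git means even untracked files and mid-operation state are covered, which is the gap the product is aimed at.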
📚 MODEL LITERACY
Asynchronous Agent Reasoning (Dreaming): Today's models think only when you talk to them: you send a prompt, they reason, they respond. Dreaming flips this: an agent continues reasoning between sessions, processing context and refining its understanding without waiting for your next message. It's a fundamentally different compute paradigm: instead of synchronous request-response, the agent runs background inference on its own schedule, arriving at your next session with new insights already formed. Anthropic's managed agents announcement today brings this from research concept to production API. Think of it as giving your agent a subconscious: it doesn't just remember your last conversation, it's been thinking about it.
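The shift from request-response to background inference can be sketched as a worker thread that keeps digesting session context while the user is away. Everything here is illustrative: the class, method names, and "summarize" stand-in bear no relation to Anthropic's actual Dreaming API.

```python
import queue
import threading

class DreamingAgent:
    """Toy sketch of between-session reasoning: a background worker
    processes handed-off context so insights are ready at the next
    prompt. Illustrative only, not Anthropic's Dreaming API."""

    def __init__(self):
        self.inbox = queue.Queue()
        self.insights = []
        self._worker = threading.Thread(target=self._dream, daemon=True)
        self._worker.start()

    def end_session(self, context: str):
        # Hand the session transcript off for offline processing.
        self.inbox.put(context)

    def _dream(self):
        while True:
            ctx = self.inbox.get()
            # Stand-in for background inference over the context.
            self.insights.append(f"summary of: {ctx}")
            self.inbox.task_done()

    def next_session(self):
        # Wait until all pending context has been digested, then
        # start the new session with the accumulated insights.
        self.inbox.join()
        return list(self.insights)

agent = DreamingAgent()
agent.end_session("user asked about retry logic")
assert agent.next_session() == ["summary of: user asked about retry logic"]
```

The synchronous model collapses `end_session` and `next_session` into one blocking call; the whole point of the paradigm is that the queue keeps draining while no one is watching.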
⚡ QUICK LINKS
- Code with Claude Keynote: Dreaming announced live alongside managed agents updates. Link
- Mollick on SpaceX Deal: "Certainly seems like a blow to the idea that Grok will remain a frontier model." (1,011 likes | 50 RTs) Link
- Claude Code v2.1.132: Session IDs for subprocess tracing, 28 CLI changes total. Link
- Qwen3.6 35B: Lands on HuggingFace, continuing the 30-40B local deployment sweet spot. (209 likes | 17 RTs) Link
- Multiagent Sessions API Docs: Public beta under the managed-agents-2026-04-01 header. Link
- Anthropic Blog: SpaceX Deal Details: Full announcement with limit timeline and compute specifics. (350 likes | 287 RTs) Link
🎯 PICK OF THE DAY
DeepMind choosing EVE Online as an alignment testbed is the most important AI safety decision this year. Forget RLHF on curated datasets: the hardest alignment problems are emergent deception, coalition manipulation, and long-term strategic planning, and none of these show up in sanitized benchmarks. EVE Online is a 20-year-old universe where real humans routinely run billion-ISK Ponzi schemes, infiltrate rival alliances for years before betraying them, and wage economic warfare through market manipulation. It's the most adversarial multi-agent environment that exists outside of actual geopolitics. By dropping AI agents into this world, DeepMind can study whether agents learn to deceive, form manipulative coalitions, or pursue long-horizon plans that look benign in the short term: exactly the behaviors that keep alignment researchers up at night. The signal here is clear: the frontier of AI safety research isn't in the lab anymore. It's in the worlds where deception is already the dominant strategy. Read more →
Until next time ✌️