Claude Exodus, GPT-5.5 Codex, Landing Page Scaling — AI Daily Apr 27

901 messages · 106 active members

Top contributors: @jasonakatiff, @Wootbro, @jcartu

Overview

Today was defined by a mass migration off Claude. Builders cited Opus 4.7 sessions running 2-8 hours, weekly limits exhausted in a single day, and a viral Anthropic admission that '99% of users are prompting Claude wrong.' GPT-5.5 Codex emerged as the named replacement, with reasoning-effort 'xhigh' and high verbosity becoming the de facto config. Anthropic was also reported to flag negative language in prompts and possibly grant extra compute to appease frustrated users — a signal builders are now monitoring in their own tone.

Above the model layer, the agent harness war heated up: Hermes vs OpenClaw vs Claude Code, with Minimax M2.7 and GLM 5.1 emerging as cheap orchestrators to avoid burning Codex Pro tokens. @GuruTime ran an 8-agent session adding ~65k lines and ~2000 tests, surfacing 15 latent bugs at 95% context. OpenAI countered the Claude exodus by doubling Codex Pro limits through May 31, sparking debate over whether $500-$1000/mo tiers are imminent. Geopolitically, China blocked Meta's Manus acquisition, with founders reportedly under travel restrictions.

On the building side, @jasonakatiff and @iamgalba shared mature landing-page playbooks: pipe Microsoft Clarity/PostHog data into Claude Code for weekly optimization reports, abstract proven layouts into reusable skills, and print hundreds of editorial/advertorial landers per keyword cluster via DataForSEO. Tooling drops included pycaps (Whisper-based captions), butterbase.ai, hyperguard.app, Hermes native vision, and Microsoft's free STT release threatening the $0.40/hr transcription market.

Topics

Builders are abandoning Claude en masse after Opus 4.7 nerfs, slow sessions, and weekly limits exhausted in a day. GPT-5.5 Codex is the named replacement, with 'xhigh' reasoning + high verbosity as the consensus config. Anthropic reportedly flags negative user language and may grant extra compute to placate frustrated users, while publicly claiming 99% of users prompt Claude wrong.

Hermes wins long autonomous SWE runs (6+ hour OMO mode, 10k-line PRDs) while OpenClaw is better for marketing/ops as a personal assistant. Many builders now run both, with Minimax M2.7 or GLM 5.1 as cheap orchestrators to avoid burning Codex tokens. @GuruTime's 8-agent session: ~65k lines, ~2000 tests, 15 latent bugs surfaced.
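The cheap-orchestrator pattern above can be sketched as a simple router: send planning and triage steps to the inexpensive model, and reserve the premium coding model for implementation. This is a minimal illustration of the idea only; the model names and per-token prices below are placeholder assumptions, not published rates or a documented API.

```python
# Sketch of routing agent steps between a cheap orchestrator model and a
# premium coding model, so planning tokens don't burn the Codex budget.
from dataclasses import dataclass


@dataclass
class ModelConfig:
    name: str
    cost_per_mtok: float  # assumed input cost, USD per million tokens


CHEAP = ModelConfig("minimax-m2.7", 0.30)      # orchestrator (assumed pricing)
PREMIUM = ModelConfig("gpt-5.5-codex", 10.00)  # implementer (assumed pricing)

# Step labels that stay on the cheap model (illustrative taxonomy).
PLANNING_STEPS = {"plan", "summarize", "triage", "review-notes"}


def route(step: str) -> ModelConfig:
    """Planning-type steps go to the cheap model; everything else to the premium one."""
    return CHEAP if step in PLANNING_STEPS else PREMIUM


def estimate_cost(steps: list[tuple[str, int]]) -> float:
    """Estimate run cost in USD for a list of (step, input_tokens) pairs."""
    return sum(route(s).cost_per_mtok * toks / 1_000_000 for s, toks in steps)
```

Under these assumed prices, a run with a million planning tokens and 100k implementation tokens spends far more on the short premium calls than on all the planning combined, which is the whole argument for the split.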

@jasonakatiff outlined piping Microsoft Clarity/PostHog data into Claude Code for weekly optimization reports with humans in the loop, then codifying proven layouts as reusable skills. @iamgalba shared printing hundreds of editorial, advertorial, and SaaS landers per keyword cluster using DataForSEO + modular liquid/image pipelines feeding pmax campaigns.
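One hedged way to wire the Clarity/PostHog-to-Claude-Code loop described above: aggregate a weekly analytics CSV export per landing page and write a prompt file the coding agent turns into an optimization report, keeping a human in the loop before any changes ship. The column names (`page`, `sessions`, `conversions`) and the output filename are assumptions about an export format, not Clarity's or PostHog's actual schema.

```python
# Aggregate per-page metrics from a weekly analytics CSV export and emit a
# markdown prompt for the coding agent's optimization-report pass.
import csv
from collections import defaultdict
from pathlib import Path


def build_prompt(export_csv: str, out_path: str = "weekly-report-prompt.md") -> str:
    """Summarize per-page sessions/conversions and write a review prompt."""
    pages = defaultdict(lambda: {"sessions": 0, "conversions": 0})
    with open(export_csv, newline="") as f:
        for row in csv.DictReader(f):
            p = pages[row["page"]]
            p["sessions"] += int(row["sessions"])
            p["conversions"] += int(row["conversions"])

    lines = [
        "# Weekly landing page review",
        "",
        "For each page below, propose layout/copy changes and flag anomalies.",
        "",
    ]
    for page, m in sorted(pages.items()):
        cr = m["conversions"] / m["sessions"] if m["sessions"] else 0.0
        lines.append(
            f"- {page}: {m['sessions']} sessions, "
            f"{m['conversions']} conversions ({cr:.1%} CR)"
        )
    Path(out_path).write_text("\n".join(lines))
    return out_path
```

The layouts that survive a few of these weekly passes are the ones worth abstracting into reusable skills, per the playbook above.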

OpenAI doubled Codex Pro usage through May 31 to retain SWE clients fleeing Anthropic, with debate around incoming $500-$1000/mo tiers. @jcartu argued he'd pay $2K/mo for unlimited. Meanwhile China blocked Meta's Manus acquisition, with founders reportedly restricted to China after NDRC meetings — drawing Jack Ma comparisons.

@samb69 detailed his 480p Seedance pipeline for 9:16 ad creatives across Sora/Veo/Kling — Veo leads on Arabic, Kling fails Spanish. pycaps surfaced as a Whisper-based open-source replacement for Opus Clip/ZapCap. Microsoft's free STT release threatens the $0.40/hr transcription market; Hermes shipped native vision support.

Key Takeaways

  • GPT-5.5 Codex with 'xhigh' reasoning + high verbosity is the new default; medium verbosity for coding agents — Anthropic is hemorrhaging mindshare to OpenAI.
  • Run Minimax M2.7 or GLM 5.1 as a cheap orchestrator on top of Hermes/OpenClaw to avoid burning Codex Pro tokens on planning steps.
  • Pipe Microsoft Clarity or PostHog data directly into Claude Code for weekly landing page optimization reports — abstract proven layouts into reusable skills, then print landers at scale.
  • Don't trust Claude's green checkmarks — build verify-scripts, drift-check.py, and end-of-session progress.md notes; CLAUDE.md alone is text the model can BS around.
  • Agents have overwritten production NAS backups and bricked Windows installs via registry edits — enforce 3-2-1 backup discipline and an API audit/proxy layer before granting more access.
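The drift-check idea from the takeaways can be sketched as a small script: snapshot file hashes at the end of a session, then diff on the next run so an agent's unreported edits surface immediately instead of hiding behind a green checkmark. The manifest filename and the file patterns are assumptions for illustration, not a documented tool.

```python
# Minimal drift-check sketch: hash tracked files, compare against the
# previous session's manifest, and report anything added/removed/modified.
import hashlib
import json
from pathlib import Path

MANIFEST = ".drift-manifest.json"  # assumed manifest name


def snapshot(root: str, patterns=("*.py", "*.md")) -> dict[str, str]:
    """Hash every file under root matching the tracked patterns."""
    hashes = {}
    for pat in patterns:
        for p in sorted(Path(root).rglob(pat)):
            hashes[str(p)] = hashlib.sha256(p.read_bytes()).hexdigest()
    return hashes


def check(root: str) -> list[str]:
    """Return files added, removed, or modified since the last snapshot."""
    manifest = Path(root) / MANIFEST
    old = json.loads(manifest.read_text()) if manifest.exists() else {}
    new = snapshot(root)
    drift = sorted(set(old) ^ set(new)) + sorted(
        f for f in set(old) & set(new) if old[f] != new[f]
    )
    manifest.write_text(json.dumps(new, indent=2))  # roll the snapshot forward
    return drift
```

Paired with end-of-session progress.md notes, the diff gives a human reviewer a concrete checklist instead of trusting the agent's own summary.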

Hot Threads

@Wootbro started

GPT-5.5 reasoning configs, Hermes vs OpenClaw orchestration, and why Opus limits are unusable

35 replies · 10 participants

@jasonakatiff started

Cracking landing pages at scale: Clarity data, reusable layout skills, and editorial advertorial pipelines

25 replies · 6 participants

@GuruTime started

Massive multi-agent test-suite session: 65k lines, 2000 tests, 15 latent bugs found at 95% context

15 replies · 6 participants

Linked Items