Hermes Agents, Codex vs Claude, Seedance Ad Pipelines — AI Daily May 15
870 messages · 91 active members
Overview
Topics
@jasonakatiff returned to Claude after Codex silently broke a payments backfill, and the community converged on running both CLIs as mutual reviewers ('when in doubt, use more tokens'). The unlock for Codex is forcing it to ingest your existing CLAUDE.md/skills and emit an AGENTS.md; without that step it reportedly broke 50% of the systems it touched. Anthropic also reinstated OpenClaw with restrictions, and Grok 4.3 via OpenCode is a viable cheap alternative for creative-tier work.
Multiple builders migrated to Hermes with multi-profile setups (8 profiles, distinct Telegram accounts, Obsidian KBs) running stably on 32GB M1 Macs. Honcho + Hindsight memory combos recall reliably out of the box, and cron-based memory.md compression plus CLI-based inter-agent messaging are emerging as standard patterns. MiniMax also launched Max Hermes as a competing agent platform.
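A minimal sketch of the cron-based memory.md compression pattern, assuming a per-profile memory.md, a 200-line threshold, and `claude -p` as the summarizer; all three are illustrative choices rather than details from the thread, so swap in whatever paths and agent CLI your profiles actually use.

```python
"""Cron-style memory.md compression, a minimal sketch.

Schedule with a crontab entry such as: 0 3 * * * python compress_memory.py
"""
import subprocess
from pathlib import Path

MEMORY = Path("memory.md")   # per-profile memory file (assumed location)
KEEP_RECENT = 50             # newest lines kept verbatim
THRESHOLD = 200              # only compress once the file grows past this

def compress() -> None:
    lines = MEMORY.read_text().splitlines()
    if len(lines) <= THRESHOLD:
        return  # nothing to compress yet
    old, recent = lines[:-KEEP_RECENT], lines[-KEEP_RECENT:]
    # Summarize the older entries with whatever CLI agent you run;
    # `claude -p` (non-interactive print mode) is used here as an example.
    summary = subprocess.run(
        ["claude", "-p",
         "Summarize these agent memory notes into a short digest:\n" + "\n".join(old)],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    MEMORY.write_text(
        "## Compressed history\n" + summary + "\n\n## Recent\n" + "\n".join(recent) + "\n"
    )

if __name__ == "__main__":
    compress()
```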
Builders are running fully automated script-to-finished-UGC flows in Claude Code, generating 10 videos per request with subtitles, silence detection, and emotion pacing. Seedance was rated 8.5/10 for character consistency but chokes on technical pronunciations; Kieran shared an ffmpeg + WPM-targeting trick to auto-adjust scene pacing (sketched below). @tidemid is testing whether stripping SynthID watermarks avoids reach penalties, and @tounano estimated ~$1.2M to fully generate a 120-minute feature film.
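A minimal sketch of one way to do the WPM-targeted pacing with ffmpeg: compute each scene's target duration from its word count, then retime video and audio with setpts/atempo. The target rate, file names, and the retiming approach itself are assumptions; the thread only names the ffmpeg + WPM combination.

```python
"""WPM-targeted scene pacing with ffmpeg, a minimal sketch."""
import subprocess

TARGET_WPM = 150  # assumed target speaking rate

def retime_scene(src: str, dst: str, word_count: int, duration_s: float) -> None:
    """Speed a clip up or down so word_count words land at the target WPM."""
    target_s = word_count / TARGET_WPM * 60
    factor = duration_s / target_s            # >1 speeds up, <1 slows down
    factor = max(0.5, min(factor, 2.0))       # stay within atempo's valid range
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-filter_complex",
         f"[0:v]setpts=PTS/{factor}[v];[0:a]atempo={factor}[a]",
         "-map", "[v]", "-map", "[a]", dst],
        check=True,
    )

# Example: a 12 s clip carrying 40 words is stretched toward 16 s (~150 WPM).
# retime_scene("scene_03.mp4", "scene_03_paced.mp4", word_count=40, duration_s=12.0)
```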
@jasonakatiff walked through his lead-distribution platform: super-admin SendGrid/Twilio accounts with per-tenant inheritance, auto-provisioned Google Sheets per buyer, an onboarding wizard, and $50–$500/mo pricing designed to undercut aggregators. Key tactical insights: text leads before calling (60% pickup rate on FB leads) and match the outbound DID to the first 6 digits of the lead's number for 3–4x pickup (see the sketch below).
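A minimal sketch of the DID-matching idea: pick the outbound caller ID whose digits share the longest prefix with the lead's number, preferring a full 6-digit (area code + exchange) match. The number pool, formats, and fallback order are assumptions; the thread's claim is only that a 6-digit match lifts pickup 3–4x.

```python
"""DID selection by prefix match, a minimal sketch."""

def digits(number: str) -> str:
    """Keep the last 10 digits (US numbers assumed)."""
    return "".join(ch for ch in number if ch.isdigit())[-10:]

def pick_did(lead_number: str, owned_dids: list[str]) -> str:
    """Return the owned DID sharing the longest digit prefix with the lead."""
    lead = digits(lead_number)

    def shared_prefix(did: str) -> int:
        d = digits(did)
        n = 0
        while n < len(lead) and n < len(d) and d[n] == lead[n]:
            n += 1
        return n

    # A 6-digit match wins; otherwise this degrades to area-code or any DID.
    return max(owned_dids, key=shared_prefix)

# Example: lead (415) 555-0134 prefers a DID starting 415555, else 415xxx.
# pick_did("+14155550134", ["+14155550199", "+12125550100"])
```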
@mb29266 and @yangthegoat detailed multi-layer copy pipelines with orchestrator agents, per-brand bug logs organized by severity, and batch-fix cycles. Consensus: Sonnet 4.6 for first drafts and brief-checking (watch the 30k input tokens/min limit), Opus 4.6[1m] for editing, where 90% of the quality emerges. @arielletolome's LLM-based prediction scorer drew skepticism: past rubric scoring has rarely correlated with real ad performance, so binary 'beats control' backtests are the better yardstick (see the sketch below).
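A minimal sketch of that binary backtest: mark each historical creative as "rubric predicted a win" vs. "actually beat the control", then check how often the two agree. The field names, CTR metric, and 7.0 threshold are assumptions; the thread only argues for the binary framing over numeric scores.

```python
"""Binary 'beats control' backtest, a minimal sketch."""
from dataclasses import dataclass

@dataclass
class Creative:
    rubric_score: float   # LLM rubric prediction, 0-10
    ctr: float            # observed click-through rate
    control_ctr: float    # control's CTR over the same window

def backtest(creatives: list[Creative], threshold: float = 7.0) -> float:
    """Fraction of creatives where the rubric call matched reality."""
    hits = 0
    for c in creatives:
        predicted_win = c.rubric_score >= threshold
        actual_win = c.ctr > c.control_ctr
        hits += predicted_win == actual_win
    return hits / len(creatives)

# If this accuracy hovers near 0.5, the rubric is no better than a coin flip.
```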
Key Takeaways
- Run Codex and Claude Code as mutual reviewers — neither is reliable solo on sophisticated systems, and Codex needs an AGENTS.md derived from your CLAUDE.md before it stops breaking things.
- Set the Claude Code model to claude-opus-4-6[1m] in settings.json to unlock 1M context and dodge Sonnet's 30k tokens/min rate limit; use Sonnet for drafts, Opus for editing layers (a minimal config sketch follows this list).
- Hermes multi-profile setups (8 profiles, Honcho + Hindsight memory, Obsidian KB) run stably on 32GB M1 Macs — no heavy hardware required.
- For lead distribution: text before you call (60% FB lead pickup), match DID to first 6 digits for 3–4x pickup, and build direct buyer relationships instead of aggregator dependency.
- Prediction-scoring rubrics on creatives rarely correlate with results — backtest with binary 'beats control or not' instead of numeric scores, and capture creative lineage + hypothesis + win conditions in a campaign DB.
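A minimal config sketch for the model switch above, assuming Claude Code reads a top-level "model" key from ~/.claude/settings.json; verify the path and key against your installed version before relying on it.

```python
"""Point Claude Code at the 1M-context Opus model, a minimal sketch."""
import json
from pathlib import Path

# Assumed settings location and key; check your Claude Code version's docs.
settings_path = Path.home() / ".claude" / "settings.json"
settings = json.loads(settings_path.read_text()) if settings_path.exists() else {}
settings["model"] = "claude-opus-4-6[1m]"   # model name as quoted in the thread
settings_path.parent.mkdir(parents=True, exist_ok=True)
settings_path.write_text(json.dumps(settings, indent=2) + "\n")
```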
Hot Threads
Copy pipeline bottlenecks, bug logging, and Sonnet vs Opus model selection
End-to-end Seedance/Veo3 video ad workflows — group call forming
LeadRouter architecture, pricing, and direct-buyer strategy