These things are so tricky because everyone has a seemingly conflicting experience. Part of the fun I guess!
I use Codex when Claude Code is down, and I only began using Claude when ChatGPT was down.
Yes, Codex is very fast, but I'm going back to Claude for now.
Opus 4.7 + Rust is a killer combo.
Heck I prefer DeepSeek to both of those.
I was running DeepSeek through Claude Code's agent harness. Maybe it works better through a different tool?
That and the lack of image-read support surprised me. I'm a big fan of feeding screenshots into my LLM, and that killed it for me.
I would have been much more impressed with V4 about six months ago, but I've been spoiled by Opus 4.7. DeepSeek isn't at the same level.
My systems are hitting exponential-backoff retries, so this might not get better: the retries themselves pile up and overload things again.
> {'type': 'error', 'error': {'details': None, 'type': 'overloaded_error', 'message': 'Overloaded'}, 'request_id': 'req_ ...
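For what it's worth, the usual way to keep retries from re-overloading a server is exponential backoff with full jitter, so clients don't all retry in lockstep. A minimal sketch (the `OverloadedError` class and function names here are hypothetical placeholders, not any particular client library's API):

```python
import random
import time

class OverloadedError(Exception):
    """Stands in for whatever exception your client raises on 'overloaded_error'."""

def call_with_backoff(request, max_retries=5, base=1.0, cap=60.0):
    """Retry `request` on overload with exponential backoff and full jitter.

    Full jitter (sleeping a random amount up to the exponential cap) spreads
    clients out in time, so retries don't all land at once and re-overload
    the server.
    """
    for attempt in range(max_retries):
        try:
            return request()
        except OverloadedError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```

If the provider is already returning `overloaded_error` under load, plain exponential delays without jitter can still synchronize a thundering herd, which matches the "retries overload things again" behavior.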
I can see a weird spike in my cache hit-rate a few minutes before, so this might actually be some extra caching they've thrown in.