From that, they realized they could serve their own API calls the way staff calls are served: fast, on an internal deployment that isn't running at full load, while the billion other people's calls contend for the remaining public capacity.
https://thezvi.substack.com/i/185423735/choose-your-fighter
> Ohqay: Do you get faster speeds on your work account?
> roon: yea it’s super fast bc im sure we’re not running internal deployment at full load
In the past month, OpenAI has released for Codex users:
- subagents support
- a better multi-agent interface (the Codex app)
- 40% faster inference
No joke, with the first two my productivity is already up like 3x. I am so stoked to try the faster inference out.