
Somehow that article totally ignored the insane pricing of cached input tokens set by Anthropic and OpenAI. For agentic coding, typically 90-95% of the inference cost comes from cached input tokens, and a scrappy Chinese company can serve them almost for free: https://api-docs.deepseek.com/news/news0802
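
Rough back-of-the-envelope to show why cache-hit pricing dominates the bill on long agent runs. All token counts and per-million-token rates here are made-up placeholders, not any provider's actual pricing:

    # Toy cost model for a long agentic-coding session.
    # All counts and rates are illustrative placeholders, not real pricing.

    def session_cost(cached_in, fresh_in, out, p_cached, p_fresh, p_out):
        """Total cost in dollars; prices are per million tokens."""
        return (cached_in * p_cached + fresh_in * p_fresh + out * p_out) / 1e6

    # A long agent run re-reads a big repo/context on almost every step,
    # so cache-hit input tokens dwarf fresh input and output.
    cached_in, fresh_in, out = 50_000_000, 200_000, 60_000

    # Provider with discounted-but-still-priced cache reads (placeholder rates).
    a = session_cost(cached_in, fresh_in, out, 0.30, 3.00, 15.00)
    # Provider with near-free cache hits (placeholder rates).
    b = session_cost(cached_in, fresh_in, out, 0.02, 0.30, 1.20)

    share = cached_in * 0.30 / 1e6 / a
    print(f"provider A: ${a:.2f} (cache-hit input = {share:.0%} of the bill)")
    print(f"provider B: ${b:.2f}")

With these made-up numbers the cache-hit input tokens are ~90% of provider A's bill, which is the whole point: the cache-read rate, not the headline input/output rates, is what actually sets the price of agentic coding.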

