
An LLM is optimized for its training data, not for newly invented formats or abstractions. I don't understand why we keep building so-called "LLM-optimized" X or Y. It's the same story we already saw with TOON.
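For context, the pitch behind formats like TOON is token efficiency: serialize uniform records as one header row plus value rows instead of repeating every key per object. A minimal sketch of that idea (the compact layout below is TOON-like for illustration, not exact TOON syntax):

```python
import json

# Illustrative comparison: verbose JSON vs. a TOON-like compact
# tabular layout (hypothetical syntax, not an exact TOON reference).
records = [
    {"id": 1, "name": "Alice", "role": "admin"},
    {"id": 2, "name": "Bob", "role": "user"},
]

as_json = json.dumps(records, indent=2)

# One header row of keys, then comma-separated values per record,
# so the keys are written once instead of once per object.
header = ",".join(records[0].keys())
rows = "\n".join(",".join(str(r[k]) for k in records[0]) for r in records)
compact = f"{header}\n{rows}"

print(len(as_json), len(compact))  # the compact form uses far fewer characters
```

The character (and hence token) savings are real; the commenters' point is that the model has seen billions of JSON documents and almost none of the new format, so the savings can be offset by weaker comprehension.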


Yeah, FWIW I agree. I was impressed by how well the agents understood and wrote their invented language, but fundamentally they can only do that because they've been trained on "similar" code in many other languages.



