
> LLMs are not reasoning machines. They are basically semantic compression machines with a built-in search feature.

This is just a god-of-the-gaps argument. Understanding is a form of semantic compression. So you're saying we have a system that can learn, construct a database of semantic information, then search it and compose novel, structured, and coherent semantic content in response to an a priori unknown prompt. That sounds like a form of reasoning to me. Maybe it's a limited, deeply flawed type of reasoning (not that human reasoning is perfect), but that doesn't support your contention that it's not reasoning at all.



It’s basically an argument that boils down to “it’s not reasoning because I don’t like it.”


I bite the bullet on the god of the gaps.



