> LLMs are not reasoning machines. They are basically semantic compression machines with a built-in search feature.
This is just a god-of-the-gaps argument. Understanding is a form of semantic compression. So you're saying we have a system that can learn and construct a database of semantic information, then search it and compose novel, structured, coherent semantic content in response to an a priori unknown prompt. That sounds like a form of reasoning to me. Maybe it's a limited, deeply flawed type of reasoning (not that human reasoning is perfect), but that doesn't support your contention that it isn't reasoning at all.