Absolutely. LLMs use methods that don't provably solve the given problem; they just happen to work in most relevant cases. That's a heuristic.
> Our methodology leverages state-of-the-art natural language processing techniques to systematically evaluate the evolution of research approaches in computer vision. The results reveal significant trends in the adoption of general-purpose learning algorithms and the utilization of increased computational resources. We discuss the implications of these findings for the future direction of computer vision research and its potential impact on broader artificial intelligence development.
I know that these algorithms are the best that exist for many tasks, but any non-deterministic, non-provable algorithm is still technically a heuristic. Also, currently, the bitter lesson seems to be that more of the same runs into diminishing returns, contrary to "The Bitter Lesson"(tm). There will probably be a better AI architecture at some point, but the "lol just add more GPUs / data / RAM" times are kind of over for now.
I don't think that I misunderstood - it says that more computing power beats "hand-engineering". But more computing power doesn't seem to work that well anymore for improving AI performance - and there isn't much more computing power coming in the next couple of years, not orders of magnitude more essentially for free like there used to be. Case in point: the somewhat disappointing new Nvidia generation.