Implementing a world model is a problem LLMs seem to have mostly solved. Finding a world model that can be evaluated fast enough to actually solve games is extremely hard, for humans and AI alike.
Optimization is harder than writing out the rules of a game.
For most board games, it is trivial to enumerate all possible next states, but it is far from trivial to search through them to find the best action to take.
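A minimal sketch of this asymmetry, using tic-tac-toe as the example (the function and variable names here are illustrative, not from any particular library): enumerating the legal successor states of a position takes one line, while an exhaustive walk of the full game tree from the empty board already visits a few hundred thousand positions, and the gap only widens for games like chess or Go.

```python
# Winning lines on a 3x3 board, indexed into a 9-character string.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def won(board):
    """True if either player has three in a row."""
    return any(board[a] == board[b] == board[c] != "." for a, b, c in LINES)

def successors(board, player):
    """Trivial part: all positions reachable in one move."""
    return [board[:i] + player + board[i + 1:]
            for i, cell in enumerate(board) if cell == "."]

def tree_size(board="." * 9, player="X"):
    """Hard part: count every node the exhaustive search must visit."""
    if won(board):
        return 1
    kids = successors(board, player)
    if not kids:  # board full, drawn game
        return 1
    nxt = "O" if player == "X" else "X"
    return 1 + sum(tree_size(kid, nxt) for kid in kids)
```

Even for a game this small, `tree_size()` counts hundreds of thousands of nodes; writing `successors` (the "rules") was the easy part.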