I'd never thought about LLM failure modes like this, but it makes sense.
Cross too many possibility streams, instead of prompting/contexting it down to one, and it vacillates between them and outputs garbage.