> They're also the kind of person who consults ChatGPT or Copilot on a subject while you're explaining that subject to them, to check that the LLM agrees with what you are saying. They'll even challenge people to prove some Copilot output is incorrect. It seems to me that they consider LLMs more reliable than people.
Dear lord, these tools have just come out, how can they already have invented a new type of asshole?
We had a product manager who wrote requirements based mostly on ChatGPT.
It would output complete nonsense, like QR code formats that don't exist, or requirements to connect to hallucinated APIs.
The lead devs often caught it quickly: the "documentation" wasn't a link or a PDF but just a pasted block of text.
But when it wasn't caught, it was very costly: a developer would spend hours trying to make the API work, to no avail, or, in the QR code case, the feature would reach QA, who would be puzzled about how to test it.
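To make the failure mode concrete: a hallucinated API usually can't even be reached, so a quick smoke test of the documented base URL catches it before anyone burns hours on integration. A minimal sketch in Python, where the URL is hypothetical and stands in for whatever endpoint the ChatGPT-generated requirements claimed to document:

```python
import urllib.request
import urllib.error

def endpoint_exists(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL responds at all, even with an HTTP error code.

    A hallucinated API typically fails DNS resolution or times out, which is
    a very different failure mode from a real server rejecting the request.
    """
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # the server exists; it just didn't like this request
    except (urllib.error.URLError, TimeoutError):
        return False  # no such host, connection refused, or timeout

if __name__ == "__main__":
    # Hypothetical URL standing in for the "documented" endpoint.
    print(endpoint_exists("https://api.example-vendor.com/v2/"))
```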
Hah, thanks to LLMs we've drastically lowered the barrier to entry for knowing enough to be dangerous. Hopefully there's a corresponding reduction in the knowledge required to be useful…