I’m worried that LLMs could facilitate cheap, scaled astroturfing.
I understand that people face discrimination based on their English skills, and it makes sense that they would use LLMs to help with that, especially in a professional context. On the other hand, I'd instinctively trust the authenticity of a comment with a few language errors more than one that reads like it was generated by ChatGPT.