Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Once we've solved social engineering scams, we can iterate 10x as hard and solve LLM prompt injection. /s

It's like having 100 "naive/gullible people" who are good at some math/english but don't understand social context, all with your data available to anyone who requests it in the right way..



Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: