
I have the impression that LLMs are so complicated and entangled (compared to previous machine learning models) that they're simply too difficult to tune across the board.

What I mean is: it seems that when they tune them for a few specific things, it makes them worse at a thousand other things nobody is paying attention to.




