Not the core foundation model. The foundation model still only predicts the next token in a static way. The reasoning is tacked on at the InstructGPT-style fine-tuning step and is done through prompt engineering, which is the shittiest way a model like this could have been built, and it shows.
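
Roughly, the picture I'm describing looks like the toy sketch below (not OpenAI's actual pipeline, just an illustration): the autoregressive generation loop is identical either way, and the "reasoning" only shows up because a chain-of-thought style prefix is baked into the prompt or the fine-tuning data.

  # Toy sketch: "reasoning" as a prompt prefix over an unchanged next-token loop.
  # The vocabulary, next_token(), and prompts here are made up for illustration.
  import random

  VOCAB = ["Let's", "think", "step", "by", "step", ":", "2", "+", "2", "=", "4", "<eos>"]

  def next_token(context: list[str]) -> str:
      # Stand-in for a frozen foundation model: given the tokens so far, emit one more.
      # A real model returns a distribution over its vocabulary; we sample randomly
      # just to keep the sketch self-contained and runnable.
      return random.choice(VOCAB)

  def generate(prompt: list[str], max_new_tokens: int = 16) -> list[str]:
      # The same static next-token loop, whether or not "reasoning" is requested.
      tokens = list(prompt)
      for _ in range(max_new_tokens):
          tok = next_token(tokens)
          if tok == "<eos>":
              break
          tokens.append(tok)
      return tokens

  # Plain completion: the base model just continues the text.
  print(generate(["2", "+", "2", "="]))

  # "Reasoning": same model, same loop -- only the prompt changes.
  print(generate(["Let's", "think", "step", "by", "step", ":", "2", "+", "2", "="]))

The model weights and the decoding loop are untouched in both calls; the only difference is the prefix, which is the point.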

