Why not have a crack at a local LLM or two? You work in an industry with a lot of money involved.

Apple have recently released beasties (the Mac Studio with the M3 Ultra) with up to 512GB of RAM. Apple's machines use unified memory, shared between the CPU and the GPU, so that 512GB looks rather handy, and they have quite a lot of CPU cores too. They're of the order of £10,000. You should be able to run some pretty large models on one of those.
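For scale: a 405B-parameter model at 4-bit quantisation is roughly 405e9 × 0.5 bytes ≈ 200GB of weights, which fits in 512GB of unified memory with room left over for the KV cache. A minimal sketch of driving one locally (assuming Ollama is installed and you've pulled the model tag below, which is illustrative; any pulled model works):

    # Sketch: assumes `pip install ollama` and `ollama pull llama3.1:405b`.
    import ollama

    # ~405e9 params * 0.5 bytes (4-bit) ~= 200GB of weights, so a 512GB
    # unified-memory box has headroom for the KV cache and the OS.
    response = ollama.chat(
        model="llama3.1:405b",  # illustrative tag, not a recommendation
        messages=[{"role": "user", "content": "Say hello from local silicon."}],
    )
    print(response["message"]["content"])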

I've just blown a fair bit of money on network infra at work (yum: more switches that boot Linux for the control plane and shuffle packets at incredible speeds), so I'll need to wait a bit, or perhaps persuade wifie that we really do need a really expensive Apple box at home.

The snag I have is getting over my mild distaste for Apple! I'm sure I'll manage it.

Environmental restrictions! All our data is in the cloud, and for customer privacy we're not allowed to download anything locally. We have access to most of the LLM models from all the big vendors, and I've found them to be very similar.
