I haven't heard of this app before, so I looked around their site and docs. I was mildly interested in trying it out until I saw the requirements:
"A system with at least 4GB of RAM and 2 CPU cores", with 6 GB of RAM recommended. Why does an image storage solution need so much RAM?
Because it is written in Node, it depends on several other pieces of software (e.g. PostgreSQL), it runs image/video processing on the fly (transcoding to various formats depending on what you upload and who views it), it does face/object recognition using a local model, and it has a few other nice features that, yeah, require more power. It's not a static HTML page of your photos.
The last time I tried Immich (a year ago or so), my impression was that it tries to imitate Google Photos as much as possible. This includes features such as searching by a person or by "cat", which require some machine learning done locally (you can also disable these features). This would be my guess, but I'm not entirely sure.
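To make the "search by cat" point concrete: this kind of feature typically maps both photos and the query text into a shared embedding space with a CLIP-style model, then ranks photos by vector similarity. The sketch below is illustrative only, with tiny made-up 3-dimensional vectors; a real model produces vectors with hundreds of dimensions, and keeping that model resident in memory is a big part of the RAM cost.

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Pretend embeddings produced by an image model (made up for illustration).
photo_embeddings = {
    "IMG_001.jpg": [0.9, 0.1, 0.0],  # pretend: mostly "cat-like"
    "IMG_002.jpg": [0.1, 0.8, 0.2],  # pretend: mostly "beach-like"
    "IMG_003.jpg": [0.0, 0.1, 0.9],  # pretend: mostly "food-like"
}

def search(query_embedding, embeddings):
    # Rank photo names by similarity to the query vector, best first.
    return sorted(embeddings,
                  key=lambda name: cosine(query_embedding, embeddings[name]),
                  reverse=True)

query_cat = [1.0, 0.0, 0.0]  # pretend embedding of the text "cat"
print(search(query_cat, photo_embeddings))  # IMG_001.jpg ranks first
```

The point is that "search by cat" is nearest-neighbour lookup over model-generated vectors, so the server has to run an ML model and keep an index around, not just serve files.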
Because it's not just an "image storage solution".
A thumb drive would be an image storage solution. If you're indexing, making geo queries, serving over the network, categorizing, transcoding video, and doing everything else needed to create a Google Photos competitor, you're going to need the hardware to back it up.
I hope you slept well in your cryo-chamber, Austin Powers. It's the year 2025; 6GB of RAM is not a lot. A 32GB stick of RAM costs about $50. Even basic phones ship with 8GB of RAM.
You get my point. It shouldn’t come as a surprise that replacing a massively scaled $100+/year photo/video hosting service offered by the biggest multinational companies in the world will consume compute resources.
You are either consuming your own server’s resources or you’re paying Apple/Google/someone else to handle it for you.
Your point is easy to get, but mistaken: you're ignoring the counter-argument that the relevant pricing here is that of, for example, a VPS, where memory hasn't scaled as much as in your cited consumer examples, so the constraints are still there.
> will consume compute resources
This is an empty statement; the argument here is about the amount of resources and how it relates to the underlying technology.
And all this "biggest multinational scale" is just as meaningless: no specific resource requirement follows from it (maybe a big part of that $100 is exactly because of "big multi scale").
> News flash, when you query ChatGPT,
News flash, this is not ChatGPT. Another news flash: different models have very different memory requirements. Also, do you even know whether those requirements are only due to the models?
So again, your example doesn't help provide any justification.
By comparison, qemu emulates practically every piece of hardware there is, and its codebase is only about a third larger; and that's without counting the code required to run Node.js, Docker, PostgreSQL, and Redis, which are dependencies of this image-catalogue software.
I'll take the solution developed in a high-level language that I can use now over a low-level-language version that would reach feature parity with this 10 years from now.