
I haven't heard of this app before, so I looked around their site and docs. I was mildly interested in trying it out until I saw the requirements: "A system with at least 4GB of RAM and 2 CPU cores", with 6 GB of RAM recommended. Why does an image storage solution need so much RAM?


Because it is written in Node, it relies on several other pieces of software (e.g. PostgreSQL), it runs image/video processing on the fly (transcoding to various formats depending on what you upload and who views it), and it does face/object recognition with a locally run model, plus a few other nice features that do require more power. It's not a static HTML gallery of your photos.


The last time I tried Immich (a year or so ago), my impression was that it tries to imitate Google Photos as much as possible. This includes features such as searching by a person or by "cat", which requires some machine-learning sophistication, done locally (you can also disable these features). That would be my guess, but I'm not entirely sure.


Because it's not just an "image storage solution". A thumb drive would be an image storage solution. If you're indexing, making geo queries, serving over the network, categorizing, transcoding video, and doing everything else needed to create a Google Photos competitor, you're going to need the hardware to back it up.


If you just need an image storage solution: no, but then you don't need Immich. If you want the features Immich offers: yes.


You can turn off AI features. You won't need as much RAM, then.


Because it wants to recognize objects in your images.


I hope you slept well in your cryo-chamber, Austin Powers. It's the year 2025; 6GB of RAM is not a lot. A stick of 32GB of RAM costs about $50. Most basic telephones come with 8GB of RAM.


Synology Photos still runs on NASes with 2GB.


Are you going to host this app on a simple telephone?


You get my point. It shouldn’t come as a surprise that replacing a massively scaled $100+/year photo/video hosting service offered by the biggest multinational companies in the world will consume compute resources.

You are either consuming your own server’s resources or you’re paying Apple/Google/someone else to handle it for you.


Your point is easy to get, but it's mistaken: you're missing the counterargument that the relevant pricing here is that of, for example, a VPS, where memory availability hasn't scaled as much as in your cited consumer examples, so the constraints are still there.

> will consume compute resources

This is an empty statement; the argument here is about the amount of resources and how it relates to the underlying technology.

And all this "biggest multinational scale" is just as meaningless: no specific resource requirement follows from it (maybe a big part of that $100 is precisely because of that "big multi" scale).

> News flash, when you query ChatGPT,

News flash: this is not ChatGPT. Another news flash: different models have very different memory requirements. Also, do you know that those requirements are due only to the models?

So again, your example doesn't help provide any justification.


It comes as a surprise because there are products that do the same with massively lower resource usage.


It is ridiculous bloat of "high-level technobabble":

    npm install for web/ server/ 
    % cloc .
    81808 text files.
    51415 unique files. 
    Language                             files          blank        comment           code
    ---------------------------------------------------------------------------------------
    JavaScript                           25050         453101         913671        4436663
    TypeScript                           14997          85346         630503         831318
    JSON                                  2238            168              0         457435
    Perl                                   233          10541          34463         293957
    Markdown                              1977          89859           1551         234399
    Dart                                  1257          24152          14748         229937
    Svelte                                 678           5917            190          51223
    HTML                                   128          16361             90          31719
    ... 
    ---------------------------------------------------------------------------------------
    SUM:                                 51415         713200        1617503        6726009
In comparison, qemu emulates literally every piece of hardware there is, and it is only about 1/3 more code, and that is without counting the code required to run Node.js, Docker, PostgreSQL, and Redis, which are dependencies of this image catalogue software.

    qemu-10.1.0 % cloc .
    58961 text files.
    43347 unique files. 
    Language                             files          blank        comment           code
    ---------------------------------------------------------------------------------------
    C                                    18063        1068616        1312988        5590243
    C/C++ Header                         13980         366138         907350        1774239
    Assembly                              1370          42839          55872         320027
    Python                                1345          67483          89294         255732
    ---------------------------------------------------------------------------------------
    SUM:                                 43347        1789003        2759752        9220521
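The "about 1/3 more" figure can be sanity-checked from the two SUM rows quoted above; a quick back-of-the-envelope in Python, using the "code" column totals verbatim:

```python
# Sanity-check of the "qemu is only ~1/3 more code" claim, using the
# SUM "code" columns from the two cloc runs quoted above.
immich_loc = 6_726_009  # cloc SUM for the immich tree (after npm install)
qemu_loc = 9_220_521    # cloc SUM for qemu-10.1.0
ratio = qemu_loc / immich_loc
print(f"qemu / immich = {ratio:.2f}  (~{(ratio - 1) * 100:.0f}% more code)")
# → qemu / immich = 1.37  (~37% more code)
```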


QEMU has been in development for decades.

I'll take the high level language developed solution that I can use now over a low level language version that would reach feature parity with this 10 years from now.

Speed of development is a feature.


You're counting a lot of generated and vendored code: running cloc on a tree right after npm install includes everything pulled into node_modules, not just the project's own source.



