My particular task involves a high rate of database reads/writes (likely >100, perhaps >1000, per second). Though I haven't tested it, I doubt the database I am using (MySQL with the MyISAM storage engine) would scale to that level without a lot of additional resources (slaves, memory, faster disks).
So I was considering replacing it with a (distributed) memory-based key-value store or conventional RDBMS. The catch is that I want the data to be persistent, and I want to be able to dump the least-used data to disk periodically. Ideally it would be like memcache, except that instead of deleting expired key-value pairs, it would dump them to a persistent database.
I have explored Memcache, Memcachedb, Redis, Tokyo Cabinet, in-memory SQLite, and some other solutions, but none is close to what I have in mind. Can anybody suggest a solution? Perhaps tweaking the settings of an existing solution would do the job?
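On the "tweaking settings" point: in more recent versions, Redis can be configured to come fairly close to this, by capping memory, evicting least-recently-used keys, and persisting the dataset to disk. A sketch of the relevant redis.conf directives (the values here are placeholders, not recommendations):

```
maxmemory 2gb                  # cap the in-memory dataset
maxmemory-policy allkeys-lru   # evict least-recently-used keys at the cap
save 60 10000                  # snapshot to disk if 10000 keys changed in 60s
appendonly yes                 # also log every write for durability
```

One caveat: evicted keys are dropped rather than archived, so a key evicted before the next snapshot can be lost from the snapshot too; this approximates, rather than exactly implements, the "dump least-used data to disk" behavior described above.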
PS: I am using PHP as the language, but practically any language would do if I get what I am looking for.
It sounds like you want a write-through cache. You are in the right ballpark with the solutions you listed, but you may have to write a thin application-code layer on top of those parts to get exactly the behavior you want.
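To make the idea concrete, here is a minimal sketch of such a layer, in Python rather than PHP for brevity (the same shape translates directly). It uses SQLite purely as a stand-in for the persistent store; in practice that role would be MySQL, memcachedb, or similar. Every write goes to both memory and disk (write-through), so evicting the least-recently-used entry from memory loses nothing:

```python
from collections import OrderedDict
import sqlite3

class WriteThroughCache:
    """In-memory LRU cache that writes through to a persistent store.

    SQLite is used here only for illustration; any database could
    fill the same role behind the same two methods.
    """

    def __init__(self, db_path, capacity=1000):
        self.capacity = capacity
        self.cache = OrderedDict()  # key -> value, in LRU order
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")

    def set(self, key, value):
        # Write-through: persist first, then update the in-memory copy.
        self.db.execute(
            "INSERT OR REPLACE INTO kv (k, v) VALUES (?, ?)", (key, value))
        self.db.commit()
        self._remember(key, value)

    def get(self, key):
        if key in self.cache:               # hot path: served from memory
            self.cache.move_to_end(key)
            return self.cache[key]
        row = self.db.execute(
            "SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
        if row is None:
            return None
        self._remember(key, row[0])         # warm the cache from disk
        return row[0]

    def _remember(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)
        # Evict the least-recently-used entry; it is already on disk,
        # so dropping the in-memory copy is safe.
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)
```

For example, with `capacity=2`, writing keys `a`, `b`, `c` evicts `a` from memory, but `get("a")` still succeeds by falling through to the database.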
It would probably be helpful if you broke down the read/write estimates. Also, are those writes mostly inserts or updates? There are different types of solutions depending on what you expect your profile to look like.
Also, are your reads just a key lookup, or are you doing some sort of aggregation/searching across all records?
It seems like you may be prematurely optimizing your solution and underestimating the capabilities of database engines. Most of them can probably handle the load you are anticipating, given correct usage, sufficient tuning, and adequate hardware. For example, I believe Twitter uses MySQL:
http://highscalability.com/scaling-twitter-making-twitter-10...