Mongo - Sure. I'm hard-pressed to find genuine reasons to use MongoDB these days. Postgres provides everything I need in a persistent data store, plus a ton of other stuff I didn't know I'd need (till I did!). Mongo was great with something like node.js, where the async/schema-less model translates perfectly, but using Postgres with the async driver[1] isn't bad either. Add in hstore, JSON support, plV8 ... and what exactly would somebody use Mongo for? :D
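To make the "what would you use Mongo for" point concrete: Postgres's jsonb type gives you schema-less documents inside a relational table. A sketch of what that looks like; the table and column names ("events", "doc") are hypothetical, and the statements are shown as psycopg2-style SQL strings rather than executed here:

```python
# Sketch of Mongo-style document storage in plain Postgres via jsonb.
# Table/column names are hypothetical; SQL is psycopg2-style (%s params).

CREATE_SQL = """
CREATE TABLE events (
    id   serial PRIMARY KEY,
    doc  jsonb NOT NULL      -- the schema-less part lives here
);
"""

# Pass json.dumps(some_dict) as the parameter.
INSERT_SQL = "INSERT INTO events (doc) VALUES (%s);"

# ->> extracts a field as text; @> tests containment and can use a
# GIN index (CREATE INDEX ON events USING gin (doc)).
QUERY_SQL = "SELECT doc ->> 'user' FROM events WHERE doc @> %s;"

print(QUERY_SQL)
```

The `->>`/`@>` operators and GIN indexing on jsonb are standard Postgres; only the schema here is made up.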
Redis - No clue what you're saying here. If you're using Redis then you're keeping your data in memory and working with low-level data structures. It's for completely different use cases. A classic example is rate limiting for an API. Doing it in Postgres, with a disk I/O per request, would cripple any semi-popular API. The data doesn't need to be strictly durable (if we lose a few updates because the server crashed, we don't really care). In exchange, individual writes are blazing fast, and we can batch them up to back up to something like Postgres (or use Redis's built-in persistence, like AOF).
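The usual Redis rate-limiting pattern is a per-client counter per time window, bumped with INCR and aged out with EXPIRE. A minimal fixed-window sketch, using an in-memory stand-in for the Redis calls so it runs without a server (a real deployment would issue the same incr/expire logic through a redis-py client; `allow_request` and the key format are illustrative):

```python
import time

class FakeRedis:
    """In-memory stand-in mimicking the INCR + EXPIRE counter pattern."""
    def __init__(self):
        self.store = {}  # key -> (count, window_expires_at)

    def incr_in_window(self, key, window):
        now = time.time()
        count, expires_at = self.store.get(key, (0, now + window))
        if now >= expires_at:                  # window elapsed: reset
            count, expires_at = 0, now + window
        count += 1
        self.store[key] = (count, expires_at)
        return count

def allow_request(r, client_id, limit=100, window=60):
    """Fixed-window limiter: at most `limit` requests per `window` seconds."""
    key = "ratelimit:%s" % client_id
    return r.incr_in_window(key, window) <= limit

r = FakeRedis()
results = [allow_request(r, "api-key-1", limit=3, window=60) for _ in range(5)]
print(results)  # first 3 allowed, rest denied until the window rolls over
```

Every check is a single in-memory increment, which is why Redis handles this at rates that would bury a disk-backed store.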
I'm in the middle of moving from Mongo to Postgres and I really couldn't be happier about it. During the migration I've found various bits of data that were corrupted over the life of the product (due to changes in the application). I already feel the relief of knowing what state my data is in. Oh, and I get to use sqlalchemy again, which is a total win.
There is a Redis foreign data wrapper (https://github.com/pg-redis-fdw/redis_fdw) so you can read Redis from PostgreSQL. The new 9.3 adds writable FDWs, so you could even write to Redis from Postgres once the wrapper supports it.
Redis is a good complement to Postgres. It provides caching and a fast, non-persistent data store, while Postgres is the durable database.
Additionally, since unlogged tables cause total data loss on failure (as opposed to just missing some recent updates), you can instead tune durability per transaction: PostgreSQL's synchronous_commit setting lets you go fully asynchronous, where you toss data at the database, the commit returns immediately, and the changes hit disk at the next group commit.
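What that looks like per transaction: SET LOCAL scopes the setting to the enclosing transaction only, so one write path can trade durability for speed while everything else stays fully synchronous. The table name below is hypothetical; the SQL is shown as a string rather than executed:

```python
# Per-transaction durability tuning with synchronous_commit.
# With it off, COMMIT returns before the WAL flush, so a crash can lose
# this transaction's changes -- but it never corrupts the table, unlike
# an unlogged table, which is truncated on crash recovery.
FAST_TXN = """
BEGIN;
SET LOCAL synchronous_commit = off;
INSERT INTO page_views (url) VALUES ('/pricing');  -- hypothetical table
COMMIT;
"""
print(FAST_TXN)
```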
[1]: https://github.com/brianc/node-postgres