Yes! Something like ControlNet is great for this. I use this[1] API on Replicate specifically because it offers the various methods (depth maps, edge detection, etc.)
Replicate sometimes gives you free use (I forget if this model does), but if you pay then an image output will cost you about one cent.
Give it your image, write a prompt (something like "a modern living room"), choose your ControlNet model from the drop-down, and submit.
If you choose depth map, for example, it will generate its best guess depth map for your image and use that to steer the Stable Diffusion output. It's fascinating, and a lot of fun to play with.
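If you'd rather script it than use the web form, the same submit step maps onto Replicate's HTTP predictions API. Here's a rough sketch of the JSON you'd POST — the version ID and the input field names ("image", "prompt", "structure") are illustrative assumptions on my part; check the model page for the exact schema:

```python
import json

# Hypothetical payload for the jagilley/controlnet model on Replicate.
# Field names inside "input" are assumptions; the model page lists the
# real schema and the current version ID.
payload = {
    "version": "MODEL_VERSION_ID",  # copy this from the model page
    "input": {
        "image": "https://example.com/my-room.jpg",  # your source image
        "prompt": "a modern living room",
        "structure": "depth",  # which ControlNet method to steer with
    },
}

# You'd POST this JSON to https://api.replicate.com/v1/predictions
# with an "Authorization: Token <your-api-token>" header.
print(json.dumps(payload, indent=2))
```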
[1] https://replicate.com/jagilley/controlnet
Edit: I'd love to see what you produce, if you do use it.