Yeah, so now you're basically running a heavy instance to get the network throughput and the RAM, but not really using much CPU, when you could probably handle the encode with the available headroom. Although the article lists TLS handshakes as a significant source of CPU usage, I must be missing something, because I don't see how that could be anywhere near the top of the constraints in a system like this.
Regardless, I enjoyed the article and I appreciate that people are still finding ways to build systems tailored to their workflows.
The scalable in-memory solution took quite a bit of testing to get right. Building this on the early side of the business, when the requirements aren't well known, can be a giant budget and time tar pit. Plus, without customers, it's hard to confidently test at scale.
Using S3 for an MVP and marking this component as “done” seems like the right solution, regardless of the serverless paradigm.
Sticking something with a 2-second lifespan on disk to shoehorn it into the AWS serverless paradigm created problems and cost out of thin air here.
Moving at least partially to an in-memory solution was a good call, though.
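For what it's worth, keeping objects with a ~2-second lifespan in process memory can be as simple as a dict with expiry timestamps and lazy eviction. A minimal sketch, assuming nothing about the article's actual implementation (class and parameter names here are made up):

```python
import time

class TTLStore:
    """Hypothetical in-memory store for short-lived objects."""

    def __init__(self, ttl_seconds=2.0):
        self.ttl = ttl_seconds
        self._items = {}  # key -> (value, expiry timestamp)

    def put(self, key, value):
        # Record the value along with when it should expire.
        self._items[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self._items.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() >= expires:
            # Lazily evict expired entries on access.
            del self._items[key]
            return None
        return value
```

No disk, no S3 round trip, no per-object storage cost; the obvious trade-off is that the data dies with the process, which is fine when its lifespan is two seconds anyway.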