Hacker News

Lambda is fundamentally a request/response architecture and is meant to be tied together with several other AWS services. As such, I don't think Modal's offering is really comparable, nor is "lambda on hard mode" a particularly good description for what they've made.

Perhaps "EC2 on easy mode" is more like it.



I've deployed Lambdas written in Rust, mostly because I needed good C interop and didn't want to mess around with the AWS C++ SDK (been down that road; without native JSON support it's a pain).

The Rust Lambda SDK is just fine. You can write your Rust endpoint with, e.g., Axum, then deploy straight away.

I run a few full-fledged APIs with just one Lambda in this way. It's not hard mode at all.


The work on Rust and AWS Lambda integration makes it a joy to work with: a fast developer experience and a fast, cheap runtime. Rust in Lambda is fast anyway.

Like yourself, I use Axum and pretend AWS Lambda doesn't exist. By default I use a standard HTTP server for local development, with an environment variable to toggle Lambda mode when running in Lambda. If I wanted to, I could run the app anywhere: Lambda, EC2, EKS, Fargate, or a third-party VPS/server.

Using Cargo Lambda and its associated AWS CDK construct, it cross-compiles to ARM64 and sets up AWS stacks with databases or other resources, removing a bunch of manual tasks.

The only thing that is inconvenient is that you cannot use custom domains with function URLs. If you need a vanity name, you have to go via API Gateway and its associated costs. That's by design, though.

The monolith Lambda works well. It's the KISS approach.


Yep, exactly how I do it. API Gateway is such a pain just to specify a default route pointing to the Lambda, but it's a one-time setup.


Plus, because you can dockerize and run on Lambda, you can essentially run almost anything these days. Most things I've encountered are reasonably easy to dockerize; I'm sure there are exceptions, but in the main it's easy.


I'm curious about latency, cold and warm, using Docker. I have a dockerized number cruncher that's a breeze to maintain, and I'm thinking of moving everything over. What's your experience?


My understanding is that cold starts on containerized Lambdas are actually better than non-containerized for some workloads, because containers allow Lambda to do better caching of the code, as well as lazy loading. YMMV, of course, based on exactly which image you use (e.g., if you're not using a common base like Ubuntu or Amazon Linux, you won't get as much benefit from the caching) and how much custom code you have (like hundreds of MBs' worth).

There's a very interesting blog post about it here, as well as an accompanying whitepaper: https://brooker.co.za/blog/2023/05/23/snapshot-loading.html


I never had a case where cold starts mattered, because either 1) it was the kind of service where cold starts intrinsically didn't matter, or 2) we generally had > 1 req/15 min, meaning we always had something warm.

3) Also, you can pay for provisioned concurrency[1] if avoiding cold starts is worth the money, though look into Fargate[2] as well if that's the case.

[1]: https://docs.aws.amazon.com/lambda/latest/dg/configuration-c...

[2]: https://aws.amazon.com/fargate/


My experience with Rust cold starts has been very good; my ECR-backed Lambdas cold start and return a response in under 40 ms.


Yeah, at this point Vercel, AWS, and basically everyone support serverless Docker. It's probably dumb to do anything else.


There are lots of kinds of containerization too, by the way; if I'm not mistaken, AWS has invested a lot in Firecracker as well: https://firecracker-microvm.github.io/


Docker adds a bit of cold-start time over a native (zipped) deployment. That said, Rust is so much faster than the scripting languages that it's still much quicker than what most are doing.


> been down that road, without native json support it's a pain

Did Rust get native JSON support in the year since I last used it?

If you need JSON support in C++, nlohmann/json is the de facto standard, just like serde is for Rust.

Now, if you just aren't adept at C++ build tooling, that's fine as a reason to use Rust for this, but "because there is no JSON support" definitely isn't a valid reason.


People keep saying nlohmann/json is the de facto standard for C++, but it's literally the worst performer out of any C++ JSON offering.


You are using JSON. Performance stopped being an option about three decisions before choosing nlohmann.

If you cared about performance, a JSON parser wouldn't be on your list; and if it is, it's a relatively minor part of the product stack, so once again, use the thing that works and is popular.

If your primary means of communication is JSON, you are likely optimising a little too hard by looking for the most performant parser implementation; good enough is good enough there. If you want performance, pick a different format.


Not really. The entire adtech industry revolves around passing trillions of JSON messages around.


You are proving my point here.

The single largest contributor to performance degradation on websites is that very industry.

Look, JSON has its advantages and is a fine tradeoff; performance just isn't something it's good at, and that doesn't make it a bad format. It's just that if you want to optimise for performance, I would start by reducing the sheer amount of redundant data being passed around in a JSON blob long before I would hyper-optimise a C++ JSON parser.

Sure, if you are using a JS or Python JSON parser, there are massive gains to be had by calling into a lower-level language to do the parsing, but picking between the choices of C++ parsers is probably bikeshedding.

Now, if your use case truly needs the absolute most performant JSON parser and you will trade off usability and portability for it, then sure, but another one of those axioms applies: the solution for the 99th percentile is rarely the correct solution for the 50th percentile.


Serde/miniserde is pretty ubiquitous, and Axum is getting close to the default as well.


> a request/response architecture

Fundamentally, it's an HTTPS server too; you can actually invoke them with direct HTTPS calls, no SDK required. [1]

[1]: https://docs.aws.amazon.com/lambda/latest/dg/urls-invocation...


Function URLs aren't part of Lambda; they're just a thin abstraction around API Gateway v2 (HTTP APIs) that allows all calls and has randomly generated domains, so you're not gaining anything and you lose some functionality by doing this instead of running an API GW with Lambda proxy integration yourself. If setting up API GW is too difficult, you could use SAM or the Serverless Framework to automatically provision it. Then you can have a real domain, SSL, failover, endpoint validation, etc.


>so you’re not gaining anything and losing some functionality by doing this instead of running an API GW

You're gaining the fact that function URLs are free while API GW can be pretty costly, as well as the fact that function URLs are fantastically less complex than API GW if your use case fits.


Function URLs are not limited to 30 seconds. That's massive.


I think they are only an asynchronous invocation in that case, though? The reason they do that is they don't want your connections holding a port for 15 minutes.


I don't get why they don't support gRPC (HTTP/2). They already support WebSockets.


(Author) Modal tackles how to make FaaS work, but for actual _function calls_, and with containers that have much higher resource caps (see article: 3 CPUs vs 64 CPUs, or 10 GB RAM vs 336 GB RAM).

EC2 isn't the same compute shape. We run fast-booting (think seconds, not minutes), dynamic sandboxed containers on a single host (think gVisor, Firecracker) with optimized file system lookups (FUSE, distributed caching, readahead, profiling). It also means we bill by the CPU cycle, scale rapidly, and charge you only for what you actually use. You do not manage individual VMs.

This is why scaling the limits of functions-as-a-service is quite different from scaling VMs, and that's what the article focuses on.



