Isn’t that exactly what the person you replied to said? You’re showing what you’ve been charged for previously. That part works, at a certain level of detail.
That doesn’t show what you are being charged for right now, nor does it help you predict what your total costs will be at the end of the month.
You bring up a good point - it would be a good idea for AWS to set some sensible max budget defaults for new accounts to prevent surprises.
Since they don't, if you are really so concerned with your current costs for this month, you can set a budget and alerts for whatever amount you want, so that you don't exceed some cost. Also, they show you your current cost for this month, up to the present moment, as well as a daily breakdown of your costs.
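For what it's worth, that can be scripted. A hedged sketch (budget name, amount, and any notification details are placeholders) of a monthly cost budget definition for `aws budgets create-budget`:

```json
{
  "BudgetName": "monthly-cap",
  "BudgetType": "COST",
  "TimeUnit": "MONTHLY",
  "BudgetLimit": { "Amount": "100", "Unit": "USD" }
}
```

Passed as `--budget file://budget.json`, alongside a `--notifications-with-subscribers` file that, say, emails you at 80% of actual spend. Worth knowing: by default a budget only alerts; it does not stop resources unless you also configure budget actions.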
> sensible max budget defaults for new accounts to prevent surprises.
Amazon, in my experience, has issued organizations a one-time credit in cases of surprise bills.
The bigger crux of the issue is that Amazon has democratized CapEx into OpEx, and it's too easy to be ignorant of the details of infrastructure planning, which are now rolled into AWS pricing.
Before AWS, IT teams had to plan for and estimate costs prior to procuring hardware. This is no longer needed with on-demand, programmatic infrastructure.
Engineers, who (reductively) focus on solving business problems with code, aren't necessarily thinking about the costs of deploying their solutions, the way Ops and IT would be.
The couple of AWS acquaintances I asked about this said that this kind of feature doesn't exist because:
- There's almost no demand for it from the majority of their customers.
- Billing code is a big, scary, tangled mess and also happens to be the one area where you really can't afford to introduce bugs and make mistakes, so meaningful change is already exceptionally difficult.
I already got a surprise, because in one place it says t2.micro and t3.micro instances are free for twelve months.
But if the instance runs Windows, or the zone is this or that, or something else I don't know, the only free-tier instance is actually the t2.micro.
And then there's the Unlimited option, and CPU credits, where if your CPU use goes over 25% for some time, it will either: Be throttled so slow as to be useless, or, not being free and you still have to pay for each hour of use.
> You bring up a good point - it would be a good idea for AWS to set some sensible max budget defaults for new accounts to prevent surprises.
I didn’t say anything about this at all. It would certainly be a nice feature, and one people have asked for since the very beginning of AWS.
> Since they don't, if you are literally so concerned with your current costs for this month, you can set a budget and alerts for whatever you want, so that you don't exceed some cost.
Again, this is a strawman. This isn’t a response to anything I said.
If you’re trying to understand how much a new deployment on AWS is costing you, and find ways to optimize that, you don’t care about the total budget of the account, or having some auto shutoff. You need detailed, real-time information from Amazon, which I’ve never seen in the console.
It’s been several years since I’ve had to dig around in the billing console, so I have no stake in this discussion and things might have changed. Based on other comments being made around here, though, I really doubt anything significant has changed.
I was just trying to clarify a comment you completely misunderstood.
> If you’re trying to understand how much a new deployment on AWS is costing you, and find ways to optimize that, you don’t care about the total budget of the account, or having some auto shutoff. You need detailed, real-time information from Amazon, which I’ve never seen in the console.
No one who is working on this cares about real-time, by-the-second data - the daily cost is enough.
> Again, this is a strawman. This isn’t a response to anything I said.
OK, what are you trying to say? If my response above did not answer the question, can you give me a one sentence summary of the point?
> can you give me a one sentence summary of the point?
>> I was just trying to clarify a comment you completely misunderstood.
Alternatively,
“The billing UI and billing infrastructure are a major weakness of AWS when it comes to helping people understand what their usage will cost.” You can clearly see this is true from the large number of people who complain about it.
You can opt in to detailed billing reports and get daily breakdowns of what you are being charged for. You can load them into any DB of your choice and ask all the questions that you want.
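As a sketch of the "load it in any DB" step (the column names here are simplified placeholders, not the real report schema, which has far more fields):

```python
# Toy version of loading a daily cost report into SQLite and querying it.
# The CSV columns are illustrative, not the actual AWS report schema.
import csv, io, sqlite3

report = io.StringIO("""\
date,service,cost
2023-05-01,AmazonEC2,41.12
2023-05-01,AmazonS3,3.07
2023-05-02,AmazonEC2,39.80
2023-05-02,AmazonS3,3.11
""")

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE line_items (date TEXT, service TEXT, cost REAL)")
db.executemany("INSERT INTO line_items VALUES (?, ?, ?)",
               [(r["date"], r["service"], float(r["cost"]))
                for r in csv.DictReader(report)])

# "What is each service costing me so far this month?"
totals = dict(db.execute(
    "SELECT service, ROUND(SUM(cost), 2) FROM line_items GROUP BY service"))
print(totals)  # e.g. {'AmazonEC2': 80.92, 'AmazonS3': 6.18}
```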
This is really terrible blanket advice. CF is one of AWS's worst products in my extensive experience with it (my team manages a ~$4 million monthly bill and was locked into CF well before I joined). It has a number of significant limitations that don't apply to Terraform; it's pretty important to understand those tradeoffs if you're buying in.
Given that AWS supports terraform, it's really on them to provide a calculator for it as well. We're large enough that we just spent an obscene amount of money on employee time and effort to track our billing (lots of automated tagging), but until we did that it wasn't uncommon for a couple hundred thousand dollars of unnecessary charges to hit our bill every month.
A lot of people use Terraform (my shop does) and I've found a good Terraform setup way better than CF templates. You can do tags in Terraform too. You shouldn't be forced to use their already-bad tooling just to get correct cost estimates.
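To make the tagging point concrete (values are made-up): the Terraform AWS provider's `default_tags` block stamps every taggable resource the provider creates, which is what makes tag-based cost allocation practical:

```hcl
provider "aws" {
  region = "us-east-1"

  # Applied to every taggable resource this provider creates,
  # so cost-allocation reports can group spend by team/service.
  default_tags {
    tags = {
      Team    = "payments"   # made-up example values
      Service = "checkout"
    }
  }
}
```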
I'm training on AWS and am part of an AWS infra team - but we use Terraform, partially because we intend to have production support at our company for all three big US clouds and want the IaC layer to be uniform.
I get that there are times to dive all the way in, but there is still a part of me that says "Is this what the Internet is now?"
That is correct; I've always found it a missed opportunity in Terraform. I still like it a lot as a tool.
A unified abstraction layer would certainly work for a subset of cloud offerings. Think the more basic stuff like EC2, VPCs, etc. Platform specific extensions could be handled as optional arguments to the unified objects.
That said, even though you need custom Terraform definitions for each cloud vendor, using one IaC tool still beats having to use a cloud-specific one for each of your deployments. The parent's point stands.
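A toy sketch of that design, with entirely made-up names: the unified object covers the common case, and provider-specific extensions ride along as an opaque optional mapping:

```python
# Hypothetical unified-abstraction sketch; none of these names are a real API.
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    name: str
    cpus: int
    memory_gb: int
    # Platform-specific extensions as an opaque, optional mapping,
    # keyed by provider (e.g. {"aws": {...}, "gcp": {...}}).
    extensions: dict = field(default_factory=dict)

def to_aws(vm: VirtualMachine) -> dict:
    """Translate the unified object into a provider-shaped spec,
    merging in any AWS-only knobs the user supplied."""
    spec = {"Name": vm.name,
            "InstanceType": f"custom-{vm.cpus}-{vm.memory_gb}"}
    spec.update(vm.extensions.get("aws", {}))
    return spec

vm = VirtualMachine("web-1", cpus=2, memory_gb=4,
                    extensions={"aws": {"EbsOptimized": True}})
print(to_aws(vm))
```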
Learning Terraform for the various providers = learning the different syntaxes and design philosophies of different Python libraries.
Learning the different IaC tools for each provider = learning Python, JavaScript, and PHP.
Also, Terraform does state management and can be used to package deployments of apps to different platforms. Our IaC pipeline uses Terraform to deploy resources to AWS and several other cloud toolsets, all in one language.
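As a minimal sketch of the "several clouds, one language" point (bucket names and project ID are placeholders), one HCL file can drive resources in two clouds side by side:

```hcl
provider "aws"    { region  = "us-east-1" }
provider "google" { project = "my-project" }  # placeholder project ID

resource "aws_s3_bucket" "artifacts" {
  bucket = "example-artifacts-bucket"    # placeholder name
}

resource "google_storage_bucket" "artifacts" {
  name     = "example-artifacts-bucket"  # placeholder name
  location = "US"
}
```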
The provisioners are the easy part - understanding the ecosystem and how the different parts of the cloud providers relate is the harder part. I routinely go back and forth between C#, Python and JavaScript using the official AWS SDKs, and I understand how to accomplish what I need in each language. Using those same languages, I wouldn’t know where to start with Azure or GCP.
Using Terraform vs CF is easy compared to using AWS vs Azure.
Moving from Terraform to CloudFormation is as pleasant as chewing glass, and the last thing we need is more AWS lock-in, unless you're running an AWS consultancy.
If you are using Terraform you are still “locked into” AWS, since all of its resources are provider-specific.
Also, if you are at any kind of scale, your IaC choice is the least of your problems.
Have you actually done a realistic project plan and estimated how much it would cost to migrate infrastructure from one provider to another once you reach any scale - including regression testing, auditing, training, etc? Do you really just use your cloud provider to host a bunch of VMs? Even then it’s not that easy at scale.
Sigh, I should've gone ahead with the longer version of my previous comment, which I deleted because I felt that pre-emptively responding to this would be unnecessary by now.
Yes, I've dealt with this issue (to be more precise, multi-cloud rather than migration, and I also used other TF providers in tandem, like the Kubernetes one), and yes, your TF resources are provider-specific. But being able to handle it all with the same tool, instead of having to deal with an awful vendor-specific tool (to be charitable, because the reality is having to deal with multiple ones and making them inter-operate through bash glue), vastly helps in reducing and controlling lock-in. I would argue it's the baseline step you have to take if you don't want to bet your whole business on a single cloud provider never going down the gutter, whether because they've grown too big for their own good or because they're being squashed.
CloudFormation means jumping into the pit of lock-in, where you can only climb back out by digging your fingers into the dirt wall; Terraform means you have climbing gear to rappel down and, if you have to, get back up without as much hardship. Sure, you have to put the gear on, but your descent is controlled and you can climb back up at your own leisure.
Is the only thing you are doing with your cloud vendor is using VMs? Not S3? Not IAM? Not SQS? SNS? You don’t have to go through security audits? Hybrid networks via VPNs/Direct Connect? You don’t have a massive amount of data that has to be transferred? Your DNS entries? Your build pipelines?
No longer at that gig, but in order: Kubernetes applications using Kubernetes abstractions; the S3 API abstracted away at the library level; RBAC done through Kubernetes (which includes IAM integration). We did use SQS and SNS, but those were easy to replace given our abstractions. No security audits (third-party ones at least; we did have scripts and checklists for deps and GDPR compliance). No hybrid networks: we either had an internal API/frontend for our services or we used bastion servers for SSH proxying. Wherever data transfer was a major issue we stayed within AWS, but that's our fault for going with AWS in the first place, which doesn't belong to the Bandwidth Alliance; a hard migration had been discussed and informally planned for, but punted for later. DNS can be handled cross-vendor easily with Terraform, since it's easy to feed one module's outputs into another's parameters. Pipelines we ran with CodeBuild, with images hosted in ECR, but running a single command and docker-pushing to a different registry is not something I would even consider a migration pain.
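The "abstracted at the library level" pattern can be sketched like this; `BlobStore` and the class names are our own hypothetical design, not any vendor's API. Application code depends only on the interface, so a vendor move means writing one new backend class:

```python
# Hypothetical sketch of library-level object-storage abstraction.
from typing import Protocol

class BlobStore(Protocol):
    """Interface the application codes against; not a vendor API."""
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class InMemoryStore:
    """Test double; an S3Store or GcsStore (wrapping boto3 or
    google-cloud-storage) would satisfy the same protocol."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

def archive_report(store: BlobStore, day: str, body: bytes) -> str:
    """App code never imports a cloud SDK directly, only BlobStore."""
    key = f"reports/{day}.csv"
    store.put(key, body)
    return key

store = InMemoryStore()
key = archive_report(store, "2023-05-01", b"date,cost\n")
```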
Everything seems easy until you actually get your project management organization involved, and your IT staff, your QA, your business analysts, your compliance department, and you start allocating cost for your staff, etc.
Well that's because you're locked-in :). If you're already locked in, you're screwed and have to work your way out or live with it and the consequences that come either now or later.
If I get to make the call and I care one bit about the business beyond next quarter, I would always have a clear way out to heavily reduce risks to a situation that we have seen play out many times before with the Oracles and IBMs of the world.
I'm not sure why that would be terribly important though? A month of delay in optimizing your services isn't going to make or break an average company.