Having an LLM spit out a few hundred lines of HTML and JavaScript is not a colossal waste of resources; it's roughly equivalent to running a microwave for a few seconds.
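Quick back-of-envelope sanity check on that comparison (all numbers are rough assumptions, not measurements):

    # Hypothetical figures: a ~1 kW countertop microwave run for a few seconds,
    # vs. published per-response estimates for large LLMs (roughly 0.3-3 Wh,
    # with a long generation sitting toward the top of that range).
    MICROWAVE_WATTS = 1000
    MICROWAVE_SECONDS = 5
    microwave_wh = MICROWAVE_WATTS * MICROWAVE_SECONDS / 3600  # ~1.4 Wh

    llm_response_wh = 3.0  # assumed upper end for a few hundred lines of output

    print(f"microwave burst: {microwave_wh:.1f} Wh, LLM response: ~{llm_response_wh} Wh")
    # Same order of magnitude either way.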
Not to mention, my little tool uses far less electricity to run than just about anything else I could easily find online, simply by virtue of being minimal and completely free of superfluous visual bullshit, upsells, tracking, telemetry, and the other secondary cruft that comes with anything people publish and advertise for others to use.
Don't let the anti-AI propaganda get to you too much. Inference is cheap on the margin.
Consider: there are models capable (if barely) of doing this job that you can run locally, on an upper-mid-range PC with a high-end consumer GPU. Take that as a baseline, assume it takes a day instead of an hour because of inference speed, and tally up the total electricity cost (a rough sketch below). It's not much. Won't boil oceans any more than people playing AAA video games all day will.
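Here's the tally with made-up but plausible numbers; swap in your own hardware wattage and utility rate:

    # Hypothetical "run it locally for a full day" scenario.
    GPU_WATTS = 450              # high-end consumer GPU under sustained load
    SYSTEM_OVERHEAD_WATTS = 150  # CPU, RAM, fans, PSU losses
    HOURS = 24                   # a day of slow local inference
    PRICE_PER_KWH = 0.15         # USD; varies a lot by region

    kwh = (GPU_WATTS + SYSTEM_OVERHEAD_WATTS) * HOURS / 1000
    cost = kwh * PRICE_PER_KWH
    print(f"{kwh:.1f} kWh, about ${cost:.2f}")  # ~14.4 kWh, ~$2.16

That's in the same ballpark as a long day of gaming on the same box.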
Sure, the big LLMs from SOTA vendors use more GPUs/TPUs for inference, but that also means they finish much faster. Plus, commercial vendors have lots of optimizations (batching, large caches, etc.), and data centers are far more power-efficient than your local machine, so "what it would add to my power bill if I ran it locally" is a reasonable starting estimate, and probably an overestimate.