The Twelve-Factor App (2011) (12factor.net)
200 points by tosh on April 28, 2022 | hide | past | favorite | 102 comments


One thing that I think could be updated in the 12F approach is the use of env vars for config, especially secrets.

I’ve generally found it much better to mount these into the filesystem and read from there. It helps with, for example, secret rotation for long running processes. Relying on process restarts can be ugly in some setups, especially if startup time is expensive.


The best approach I've seen is to have the secrets in a vault and to keep the vault URL in an env var (so different vaults can be used for dev and prod, for example). The secrets location within the vault is held in a variable defined in an application config file which is checked in to source control. At runtime, a piece of code consults the variable and queries the vault using the provided location and then dynamically configures the db connection with the returned credentials. The credentials are dumped from memory once the transaction completes.

(I should also point out that this is in a batch ETL system.)

The nice thing about this approach is that it is virtually impossible for secrets to end up in source control. The risk is that the vault becomes a major single point of failure.
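A minimal sketch of that flow, assuming HashiCorp Vault's KV v2 HTTP API and the conventional VAULT_ADDR/VAULT_TOKEN variables (the secret path and the injectable `fetch` hook are illustrative, not part of the original description):

```python
import json
import os
import urllib.request

def fetch_db_credentials(secret_path, fetch=None):
    # Vault address comes from the environment (different per dev/prod);
    # the secret's location inside the vault lives in checked-in config.
    vault_url = os.environ["VAULT_ADDR"]
    token = os.environ["VAULT_TOKEN"]
    url = f"{vault_url}/v1/{secret_path}"
    if fetch is None:
        # Real HTTP call against Vault's KV API (injectable for testing)
        req = urllib.request.Request(url, headers={"X-Vault-Token": token})
        fetch = lambda u: urllib.request.urlopen(req).read()
    payload = json.loads(fetch(url))
    return payload["data"]["data"]  # KV v2 nests values under data.data
```

The caller would use the returned dict to open the DB connection and then drop the reference so the credentials don't linger in memory longer than needed.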


It also makes cycling credentials easy. All you have to do is update the values in the vault. No need to hunt down all those places where you set environment variables.

I think you don’t even need to redeploy any long-running tasks. If they have a connection open, it should continue working, and if they don’t, they should consult the vault when they open one and get the new credentials from it (this may be different for different kinds of credentials)

There will be a race condition between creating credentials and storing them in the vault, but even that can be avoided:

  - create new credentials with same rights as the old ones
  - update vault
  - delete old credentials
Another race where credentials get changed between reading them from the vault and using them can’t be avoided, but if you wait a few minutes before you delete the old credentials, it’s extremely unlikely you’ll hit it.
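The rotation order above could be sketched like this (all callables are hypothetical stand-ins for your credential store and vault client; `grace` is the wait before deleting the old credentials):

```python
import time

def rotate_credentials(create_creds, update_vault, revoke, old_id, grace=300):
    new = create_creds()   # 1. new credentials, same rights as the old ones
    update_vault(new)      # 2. readers now pick up the new credentials
    time.sleep(grace)      # 3. let in-flight users of the old creds finish
    revoke(old_id)         # 4. only now delete the old credentials
    return new
```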


How does the system authenticate with the vault? Why not use that same system to authenticate the database connection? A vault is only useful at scale or with additional compliance requirements. Otherwise, keep it simple. Very few systems actually need that additional level of indirection.


It usually relies on some global mechanism provided by the underlying architecture.

For example, on AWS it may rely on an EC2 instance role that grants access to Secrets Manager.

On Kubernetes, it can be done through a token mount: tokens in a namespace are allowed to access the vault, and the token (which is generated and managed by k8s, and is just a JWT, btw) is mounted into your pod.
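As a sketch of the Kubernetes case, assuming Vault's Kubernetes auth method (the role name and the injectable `post` hook are illustrative; the token path is the standard service-account mount):

```python
import json
import urllib.request

SA_TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"

def vault_login_via_k8s(vault_url, role, token_path=SA_TOKEN_PATH, post=None):
    # Read the JWT that Kubernetes mounted into the pod...
    jwt = open(token_path).read().strip()
    body = json.dumps({"jwt": jwt, "role": role}).encode()
    if post is None:
        # ...and trade it for a Vault token via the Kubernetes auth method
        req = urllib.request.Request(
            f"{vault_url}/v1/auth/kubernetes/login", data=body,
            headers={"Content-Type": "application/json"}, method="POST")
        post = lambda: urllib.request.urlopen(req).read()
    return json.loads(post())["auth"]["client_token"]
```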


There are often multiple sets of credentials that you need to pass to a microservice, some of which may be shared between multiple instances of the service or even between multiple services. Changing them would require plenty of case-specific updates to service configurations or just one update in the vault. By reducing the amount of work to update the credentials you also reduce security and quality-related risks.


I don't know and I don't need to know (I'm a mere dev -- no "ops" in my role). And, yes, this client has very high compliance requirements.


Do you have any particular vault in mind? We are looking for a simple solution which would work exactly like you described. We just need a couple of different sets of variables for our web server, workers and other services.


I'm not sure which vault we're using. There's no identifying information in the server responses. The storage type shows as "Consul" so that suggests Hashicorp Vault.

I'm not the person to ask how to set it up but it's seamless in use and can return data in json format which makes it easy to parse. If it were my money I'd look very hard at this product.


I built onboardbase.com for this exact purpose and it has continuously solved a lot of use cases for me personally. There is also hashicorp vault which is amazing as well.


I've used GCP secrets manager and it worked well. Pretty simple API and the option for providing your own encryption keys if you want it.


This is the way!


I agree completely. Storing secrets in environment variables is wrong: https://blog.forcesunseen.com/stop-storing-secrets-in-enviro...

Even unsetting environment variables leaves them in /proc/self/environ. There isn't a thread-safe way to unset environment variables in POSIX, so even if you `unset TOP_SECRET` from within a program the contents of /proc/self/environ will remain unchanged and available.
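A small Linux-only demonstration of this (the variable name is made up): a child process is started with a secret in its environment, unsets it, and still finds it in /proc/self/environ:

```python
import os
import subprocess
import sys

# The child deletes the variable, but the environment block captured by the
# kernel at exec time is still visible through /proc (Linux only).
child_prog = r"""
import os
del os.environ["TOP_SECRET"]            # calls unsetenv() under the hood
assert "TOP_SECRET" not in os.environ   # gone from the live environment...
raw = open("/proc/self/environ", "rb").read()
print("still readable:", b"TOP_SECRET=hunter2" in raw)
"""

env = dict(os.environ, TOP_SECRET="hunter2")
result = subprocess.run([sys.executable, "-c", child_prog],
                        env=env, capture_output=True, text=True)
print(result.stdout.strip())  # "still readable: True" on Linux
```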


But /proc/*/environ is only readable by the user the process is running as and root, so if you can read it, you're already on the other side of the airtight hatchway.


Arbitrary file read vulnerabilities are extremely common.


Sure, but if you have one of them, then you can read the secret files out of /vault or whatever too.


Correct! This is why I explicitly said the following in the blog post I linked above:

> Once the application hits “readiness” status (as determined by health endpoints or the load balancer), the secrets volume should be unmounted and made inaccessible.


Then I'll just pull it from the process's memory directly. It's security through obscurity at best.


It seems to me in line with the comment that the post quoted:

> Ultimately, secrets need to live somewhere and need to be accessed as plain text. Just make sure that the access as small window is as [sic] possible, and try to obliterate it after use, if possible.

This is not an all-or-nothing situation; it's a game of mitigation. If the process needs to retain the secret in memory, true, there's not much you can do about that. But I don't think that minimizing where else you're storing it is just security through obscurity. Fewer possible attack vectors is still fewer possible attack vectors.


Now, if you encrypt them in memory, or even better obtain the creds in an audited fashion, use them and clear the memory promptly, it's slightly better.


Just an arbitrary file read vulnerability away at /proc/$pid/mem


I really like my applications to at least try to reestablish dropped connections though. But I do appreciate the minimization mindset with respect to secrets.

The last time I was responsible for the scaffolding of a new software service it supported reading secrets from a file. It confused the operations team and I was just told to include support for an environment variable. Oh well, I tried.


Or, if you only have access to proc, from the process's heap.


How do you actually unmount the volume with kubernetes once the secrets are read and the process is ready? I've never heard of a feature like that.


From my comment farther below:

> If your app can’t interface directly w/k8s, but it can read secrets from a file, you can use a small init program to fetch the k8s secret and write it to a named pipe. This is advantageous compared to mounting as a volume, because the pipe disappears after both ends close their connection to it.


You don't... you remove the mapping and restart the pod.


It's the part of 12 factor that makes me giggle a bit: like, how does it get into the environment?

Reminds me of the "Front fell off" comedy sketch. "No, we towed it out of the environment"

https://www.youtube.com/watch?v=3m5qxZm_JqM


"Well what's out there?"

"Nothing's out there."

"Well there must be something out there."

"There is nothing out there. All there is, is sea, and birds, and fish."

"And?"

"And 20,000 tons of crude oil."

"And what else"?

"And a fire."

"And anything else?"

"And the part of the ship that the front fell off. But there's nothing else out there. It's a complete void."


It's environment variables all the way down.


> One thing that I think could be updated in the 12F approach is the use of env vars for config, especially secrets.

The 12F section on config already mentions quite prominently the use of config files.

Taken from https://12factor.net/config

> "Another approach to config is the use of config files which are not checked into revision control, such as config/database.yml in Rails. This is a huge improvement over using constants which are checked into the code repo, but still has weaknesses: it’s easy to mistakenly check in a config file to the repo; there is a tendency for config files to be scattered about in different places and different formats, making it hard to see and manage all the config in one place. Further, these formats tend to be language- or framework-specific."


Check out the Secretless Broker at https://secretless.io. It's a cool open source project that allows applications to not need to know secrets which adheres to 12-factor app guidelines.


Hmmm... I'm trying to understand the benefit of Secretless Broker. If someone compromises this, wouldn't they have access to all credentials for everything?

Now we are just moving from trusting a bunch of distinct services to trusting this single broker... just moving the responsibility of trust to a single point of potential failure, no?

Also, don't credentials have to be passed to Secretless Broker? How does it know the application has access to the service? Isn't that still at risk of being leaked?

I like the idea of not thinking about secrets, but it seems too good to be true.


I’ll have to dig into it to see how it compares, but https://spiffe.io/ is what I look to in this area.

Not having long lived secrets is the ultimate destination, but we all live with the legacy around us.


EnvKey[1] can help with process reloading, and can facilitate both restarts and hot reload updates. (Disclaimer: I’m the founder.)

The pros/cons of environment variables vs. files (or other approaches) is also something I’ve thought about a lot while working on EnvKey.

We use environment variables as a default approach, since it seems to be the most common way to pass secrets/config to a process in the wild and we want to meet people where they’re at. But we also make it easy to use files or system calls instead (I think system calls are actually the most secure.)

One thing it’s always important to remember though in security: if you make the “secure way” too hard, people will route around it, making matters worse in practice. There’s always a balance to be struck.

Honestly, I’m skeptical that threat models where environments are exposed but files are safe are realistic enough to be worth worrying about. At that point, it seems like rearranging deck chairs on the Titanic.

It seems simpler to say “our last line of defense is the OS boundary.” You trust the host and go from there. If the host is breached, you’re screwed in a plethora of ways. There’s no point in sweating the particulars. Just don’t let it happen in the first place!

And when it comes to concerns about leaking the environment to sub-processes, this seems like a deeper problem. Even if you don’t store secrets in the environment, that doesn’t mean it’s safe to just send env vars off wherever. At this point, like it or not, environments are sensitive, because enough people and programs treat them as sensitive that it’s a self-fulfilling prophecy. If they might be leaked, then the leak is the security problem in my eyes, not the data in the environment.

1 - https://envkey.com


EnvKey looks great!

Only one thing strikes me as unfortunate, which is it seems like it fragments the development process because now your code history is tracked in git but your config history is in EnvKey. It means everything I do is duplicated (for example, tags, branches etc). It isn't clear to me how I would take, for example common operations such as rebasing one git branch on another and replicate the same on the two EnvKey branches involved.

We track all our non-sensitive config in git for this reason - then merging two branches is literally merging the config changes. However, we don't have any of the nice features EnvKey is doing like service restarting, inheritance, secret management etc.

Just curious if I'm missing something here or you have a strategy for what I discuss above?


Thanks!

I definitely see your point on splitting up the history!

One thing to consider though is that you're already forced to split up the history in order to handle secrets securely.

Given this unfortunate truth, I think you're better off storing all the config and secrets together rather than splitting them up and keeping config with the code.

There's not always a clear line on what's a secret and what's "safe" config, so attempting to split them up like this is asking for trouble: you're almost guaranteed to end up with secrets in your git repo eventually.

All that said, we've thought about adding a git integration that would output a non-sensitive state file ala 'tfstate' so that you can see your EnvKey history baked right into your git history--it wouldn't show values but you could see what keys had changed. Would something along those lines help to address your concern?


thanks for the reply!

You're right about the split situation for secrets. I guess secrets have a different lifecycle generally anyway (just cos you roll back your code doesn't mean the password to production should be what it was at the time of the prior release). For this reason they (nearly) always sit outside of the versioning process for the code.

However our secrets are probably less than 1/10th of the config and even less of the complexity. So I am content to use a separate secrets manager / process for that.

Would be great to think about how to integrate with git workflows. Seems a bit messy but something like you suggest where it dumps it out to a file could be a way. Would be good to solve as incorrect / missing config is one of our major reasons for deployment issues, and so far despite all its evils, tracking them in source control as part of the dev process has been the best solution to minimise it.


In kubernetes I run external secrets, which is nice. I store the secrets in key-value or file format in AWS Secrets Manager, which gets synchronized into a secret in the cluster. From there it gets mounted into the running pod via the envFrom or volume mount method.


External secrets are great, especially if your app can read them directly from k8s and avoid ever having them mounted as a volume (or in env var).

If your app can’t interface directly w/k8s, but it can read secrets from a file, you can use a small init program to fetch the k8s secret and write it to a named pipe. This is advantageous compared to mounting as a volume, because the pipe disappears after both ends close their connection to it.
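A rough sketch of the init-program side of that pattern, assuming a POSIX system (paths and names are illustrative):

```python
import os

def serve_secret_once(pipe_path, secret):
    """Init side: write the secret into a named pipe. open() blocks until
    the app opens the read end; afterwards the node is removed, so the
    secret's bytes never rest in a regular file."""
    os.mkfifo(pipe_path, mode=0o600)
    with open(pipe_path, "w") as w:   # blocks until a reader appears
        w.write(secret)
    os.unlink(pipe_path)              # pipe node disappears after use

def read_secret(pipe_path):
    """App side: read the 'file' once at startup."""
    with open(pipe_path) as r:
        return r.read()
```

In a pod the two sides would run in separate containers sharing an emptyDir; here they can be exercised with a thread on one machine.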


> I’ve generally found it much better to mount these into the filesystem and read from there.

Is there a standardized/best practice way to do this? Some convention for file names or format?


I'm fond of the XDG path convention. Which basically boils down to putting configs in ~/.config/your_app/ and secrets in ~/.config/your_app/secrets.

If you use pydantic, it supports a secrets dir with a predefined path.

https://specifications.freedesktop.org/desktop-entry-spec/de...
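A minimal sketch of that convention (app and secret names are illustrative; pydantic's secrets-dir support is a separate, built-in mechanism):

```python
import os
from pathlib import Path

def xdg_config_dir(app_name):
    # $XDG_CONFIG_HOME if set, otherwise ~/.config, per the XDG base
    # directory convention
    base = os.environ.get("XDG_CONFIG_HOME") or os.path.expanduser("~/.config")
    return Path(base) / app_name

def read_xdg_secret(app_name, name):
    # one secret per file under <config dir>/secrets/, as described above
    return (xdg_config_dir(app_name) / "secrets" / name).read_text().strip()
```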


There is docker secrets and some specialized convention for example to share the ssh control socket during build time etc.


I wouldn't put them on a filesystem; do you have any guarantees the secrets are wiped clean after your application is shut down? At least env vars are in memory (...I... think...?), and will be gone if the server shuts down.

A secrets manager may be a better option, these days. I don't believe they were a thing in 2011 though.

Also 2011 was 11 years ago, and I still keep this webpage in mind when building server-side software <_<.


Seconded, but why not both? I tend to put my applications' env vars in `/etc/environment`.


It’s a good question and idea.

I’ve been trending towards treating secrets and non-secrets differently.

For non-secrets, I tend to have a config discovery mechanism that lets me read config from files, then from env, then from cli args as final fallback. This is mostly for ergonomics. I prefer to use files for deployed processes, and env vars or cli params when debugging or working interactively. Env vars are handy for cli args I’m going to be repeating a lot.

But for secrets I’ve tended to only fetching those from files, but allowing the file location to be specified by env var or cli arg for quick changes.

It sounds kinda complicated, but you write that logic into a library and then everything behaves the way you expect consistently.
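One possible reading of that lookup order, as a sketch (the env-var prefix and argument shapes are illustrative, not a real library's API):

```python
import os

def lookup(key, file_cfg, cli_args=None, env_prefix="MYAPP_"):
    """Resolve one setting in the order described above: config file
    first, then the environment, then CLI args as the final fallback."""
    if key in file_cfg:
        return file_cfg[key]
    env_val = os.environ.get(env_prefix + key.upper())
    if env_val is not None:
        return env_val
    return (cli_args or {}).get(key)
```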


For what it's worth, at $dayjob (public company, 1B+ revenue) we do the exact same, although either I misunderstood you or we have the exact opposite priority: args always win; if they're not defined, env wins, then the config file, then the default config value in code, which is sometimes an actual value and sometimes defaults to throwing an exception and never passing a readiness check, depending on the config in question. That last one is mostly a relic of the past; we believe it's better to blow up if there's no default configuration value in the chart in all cases. It also makes it way easier not to have to chase defaults through the code.

We use these for running locally and debugging as you say, but come deployment time we ship all applications as helm charts with default config values, store overrides and static secrets (with SOPS) in a different, company-global repo, and use kubernetes secrets to mount a tmpfs volume to pass them in to the application. There's no way to pass args or env there.


One pattern that we use at our company is that we have multiple env files, and we launch programs using `run_with_config <config-file> <command>`. The script loads environment variables from config-file and then launches command.


I like to do something similar with my personal projects: `set -o allexport && source /etc/environment && set +o allexport; command`


<Laughs in Rails.> Credentials.yml.enc/master.key sorts this pretty well. I have been very grateful for its addition to the stack.


When I first found 12 Factor I was really impressed; it was (and is) a collection of guidelines that suit their defined purpose really well, easy to read and implement.

I always feel like there should be more parts of software/web dev with guidelines like this: an easily digestible format that is easy to discuss with colleagues.

Other examples I can think of:

- https://refactoring.guru/ - not exactly a list of guidelines, but a nice digestible collection of common patterns, pros/cons, and when (not) to use them
- to a lesser degree, the Agile Manifesto: a nice list of guidelines, but not as implementable (mainly because it is more related to people/management)

That said, I am not sure exactly which parts of the field I need this for. I didn't know I needed it for building/deploying web apps before I saw 12 Factor.

Maybe application folder structures, or guidelines to mitigate/remove software rot?


It's been a long time since I read this. I just did a mental audit of the services my company runs, and they all (more or less) match what's in this document. It's not even a conscious thing anymore.

I think any mid-level backend engineer (or any web-related software engineer, for that matter) would do well to read through this and dive deeper into each of these recommendations to understand why they were chosen.


Lots of past submissions, but the threads weren't that interesting:

Twelve Factors of Web Application Development - https://news.ycombinator.com/item?id=3267187 - Nov 2011 (37 comments)

The Twelve-Factor App - https://news.ycombinator.com/item?id=19947507 - May 2019 (3 comments)

Related:

Twelve-factor app development on Google Cloud - https://news.ycombinator.com/item?id=21415488 - Nov 2019 (63 comments)

12 Factor CLI Apps - https://news.ycombinator.com/item?id=18172689 - Oct 2018 (247 comments)


I was hoping this would be about twelve-factor authentication


Please delete this comment I don't want anyone getting ideas.


I've just done my taxes using the country's digital sign-in.

- Go to tax website

- Click on log in

- Website tells you to open the app

- App tells you to push a button, this shows a four letter code

- You type in the four letter code on the website

- Website shows a QR code

- Scan the QR code on the app

- App asks you if you want to sign in to the website, hit yes

- App asks for a 5-digit PIN code

I think there was another confirmation, but anyway, there were a Lot of steps and I'm not convinced adding more steps made it safer. I mean they could've skipped the first "type a four letter code" by just showing the QR code directly, it can tell the app what code to use and what website it is, etc.



I really dislike 12FA. It's meaningless. The name sounds neat, but it is indistinguishable from "some guy's twelve preferences".

Of course advocating for "some guy's twelve preferences" would never be considered serious argumentation in an engineering context. Yet we do for 12FA.

I might agree with this or that point, but taking the package as a whole just leads to weaponization and cargo-culting.


I disagree. It perhaps started as someone's preference, but the fact that it got so widely known and shared made it something more. Same goes for the Joel Test.

And it's not about cargo-culting; its value lies mostly in having a set of things you can show your company: "this is industry standard, we're far from there and should work on that".

Not everything in these lists needs to be done that way. But they're still nice pointers for where to move towards (or beyond).


I can’t tell if you’re being sarcastic or not. In case you’re not: The 12 Factor App was a seminal piece of work that underpinned the foundation of so much we take for granted about “cloud native” today.

It was mostly written by the folks who started Heroku and were doing containerisation and elastic scalability before Docker existed.

It really isn’t “some guy’s preference”.


It wasn't seminal. All the things in the list were already being done; someone just wrote down what they thought were best practices. Some of the things in the list are widely viewed as bad advice (e.g. using environment variables).


It might not be the most secure method to propagate secrets but the alternatives have significant complexities.

In $DAY_JOB we only ever use single-machine docker-compose for deployments and there they work decently.


I'm not sure how you can say it's meaningless. It's clearly not just a bunch of babble that you often see in business writing: each of these is a specific, concrete suggestion. I'm also fairly sure it wasn't just one guy's preferences, and in any case those preferences were born out of experience with the early days of the web.

I've worked on web services that did not follow those principles as well as ones that do, and it seems pretty clear to me these principles are all more or less correct. In my experience this is now more-or-less best practice* for web services.

*with maybe minor exceptions such as accessing secrets from an external vault service.


"12 factor app" is exactly as meaningful as "24 factor app" or "100 factor app", i.e. not at all.

Who gets to say how many "factors" are relevant? What's the common thread of the 12 factors?


I'm struggling to understand your criticism. Why the authors decided to put 12 instead of 24 or 100 (or whatever) is irrelevant. What's relevant is whether the factors themselves are meaningful.

The common thread is covered in the introduction. It's about building services that back web applications in a way that makes it easy to deploy and manage them.


So for each of the concerns mentioned, what do you do instead? You're providing criticism without a counter-argument or alternative.


You didn't get my point, which wasn't a criticism of each individual factor.

Instead of promoting an arbitrary set of N "factors", we should think in a Unixy way: each concern ("factor") is considered in isolation and composed with other concerns however we see best for a given project/context.


I feel there is a little misdirection in the 12F idea.

> The twelve-factor methodology can be applied to apps written in any programming language, and which use any combination of backing services (database, queue, memory cache, etc).

Not every landscape fits the WEB-LOGIC-mumble_db_mumble model. Sometimes your backing services are not Someone Else's Problem: maybe you need to curate them and deploy them with the rest of your 12F fleet (applying 12F where possible) but the realities are lots of handholdy things like state, schemas, migrations, and entropy in general.


Just today I pondered how you can resell old content as new. One example was a song that I heard as a young teen, strawberry fields by some random Brit on MTV. I liked the song to only find out years later that it's a song by the Beatles.

A bloody song by the Beatles with some new drums and some spaz spazzing to the music...

Well there you go again... the 12factor app from 2011. A few years ago when I was browsing job offers that was a hard requirement in job ads I saw. Now it's forgotten, but since k8s and cloud hype all I see is bs cloud hype job ads. Microservices and cloud hype agile scrum blockchain ai machine learning devops bs bs bs. When will they ever learn when will they ever learn...

Why can't we get away from hype driven development and recruitment?

For the record, I'm not saying 12factor is a bad idea; it's actually great, refined by experience. But in the newsosphere it was forgotten. I see it as a standard that makes sense, but I'm getting annoyed that an 11-year-old site is something that makes the Hacker News front page, and at how limited our human memory is. Of course, young people starting out have likely never heard of 12factor. So it's new and shiny for them. Goto 10.


I don't feel like the (valid) point you are making applies to this case.

The 12 Factor App describes the reasoning and motivations behind many common architectures today (speaking as someone who doesn't work with k8s), and for anyone who was not around in 2011-2015 it could be an interesting read.


Onboardbase was built to solve this use case. It manages and syncs your env across every stage of development.(Disclaimer: I’m the founder.)

I honestly started building this out of the need to stop sharing credentials and envs over communication channels like Slack. I needed a seamless and secure way to work with envs without having them scattered and fragmented across several mediums.

Envs are essential to how you work, and I believe a solution shouldn't just be secure; it should also be seamless to how you work.

https://onboardbase.com/blog/say-hello-to-onboardbase


I really like this methodology and it still applies today across techs, such as with Docker and Kubernetes. It feels like one of those things that'll be around for another 10+ years.

If anyone is interested in seeing each pattern applied directly to an open source code base, I made a video on that at: https://nickjanetakis.com/blog/creating-a-twelve-factor-app-...


Your podcast is great btw


Thanks a lot for listening to it.


I prefer the 15-factor app, as evolution of the 12: https://domenicoluciani.com/2021/10/30/15-factor-app.html


The three extra ones seem like good areas to think about but low effort and not well defined at all in the link ...

> We can use machine learning towards those metrics to derive future business strategies

really????

> Make sure all security policies are in place

no information


I really like 12 Factor Apps; the set of ideas and approaches has allowed me to make a few personal projects easier to set up and run.

Additionally, I worked on the webpage and some components for Apturi Covid (Latvia's COVID contact tracing app, perhaps a bit less relevant now), which was also really easy to hand over to Ops to run thanks to the simple configuration: https://apturicovid.lv/#en

That said, at work there are still some people who don't want to use those approaches, their argumentation being along the lines of:

  - config: "we don't like long lists of environment variables, files are easier to read" (ignoring that those don't play nicely with containers)
  - config: "files let you group things more easily (processes.properties, datasource.properties, urls.properties)" (ignoring discoverability issues)
  - backing services: "ehh, there's nothing wrong with hardcoding at least parts of a path in the app" (disregarding that context paths can change)
  - build, release, run: "just put some default config in the app directory for running locally, bundle the front end and back end for ease of deployment" (ignoring that this bundling makes things more risky in regards to accidentally committing/shipping the wrong stuff, and bundling components slows everything down, can't build FE/BE in parallel)
  - processes: "how do i restart Tomcat inside of the container?" (treating containers as VMs, though 12FA aren't necessarily bound to containers per se)
  - concurrency: (the entire system is built in a way that is only compatible with a single instance, local filesystem used for stuff etc.)
  - disposability: (the older apps take minutes to start up due to being large monoliths)
  - logs: "but files are easier to work with" (they're only easier if you don't have proper log shipping infra in place)
What's my point with all of this? Well, for starters, if your apps are bad (large clunky monoliths), then it's not like 12 Factor App principles will make things that much better for you, since the actual implementation might be different due to previous assumptions that were made in the app design.

And even if they could, you still have other people with conflicting opinions, or simply views of cloud-native and container-based applications, that will not have you easily running or scaling these apps anytime soon.

It's not like every system needs to be structured like that, but sometimes I wonder whether it would be easier to just work on new projects and not bother modernizing the older ones. For all I (perhaps should) care, those older systems might as well run with their config and log files on the file system, inside Tomcat instances with JDK version and config drift, no scalability, manual restarts after crashes when things inevitably break, and eventual disk space issues - just with someone who feels comfortable spending their time that way behind the wheel, not me.

A bit like the blog post "Green Vs. Brown Programming Languages" which talked about most of the dreaded programming languages being old and most of the loved ones being new: https://earthly.dev/blog/brown-green-language/ Perhaps that's simply due to there not being any old legacy codebases to maintain in the newer languages, ergo them being liked.


I'm interested to know what causes submissions like this - content that is years old and posted with no context - to hit the first page on a random day.


Sometimes it's people who feel like posting oldies/perennials/classics, and sometimes it's people running across a thing for the first time.

We've had to learn not to crack down on these too heavily because there's always an up-and-coming cohort of new users who've never seen them before, and HN should be a place to run across them. Once a year seems to be an acceptable maximum rate (https://news.ycombinator.com/newsfaq.html).


Hey dang. I have a question too.

Does it really need a 2011 tag?

I thought only blog posts and news articles got year tags. 12FA is a guidelines website that is mostly timeless.

What's the criteria for a year tag?


The year tag is also very useful for when you might have read it the first time, and want to know that it hasn't changed, or if you don't need/want to re-read it (while still participating in the discussion). It's also interesting to see how things have aged.


The year tags can and do go on anything, they're more for the readers than as some sort of commentary on the timeliness or lack thereof of the content. I think lots of people just find them handy to easily distinguish recent from not-so-recent stuff, as a very basic use case.


I’ve been curious what the oldest year tag applied on HN is.



Thanks for the explanation.

Now for my next question: Out of all the other posts out there right now, how on Earth did my little inquiry catch your eyes? ;-)


Purest randomness.


In this specific case: I stumbled upon Craig's tweet/poll yesterday https://twitter.com/craigkerstiens/status/151949609064658944...

  The killer feature of Heroku was/is:
  * git push heroku main
  * Review apps
  * Heroku postgres 
  * Add-ons
& this reply

https://twitter.com/srinathkrishna/status/151949673020236185...

  I'd say the 12factor manifesto was the biggest thing to come out of Heroku. There's a lot of wisdom in there.


Many of us recall with great fondness the first time we read about 12 factor apps. In particular, those of us that had been working on systems like this, but hadn’t codified it with explicit guidelines. It just made sense. The pleasantness of things just working consistently from scratch on a brand new server. Replacing the concept of pet servers with cattle. It’s all related.

I’m sure every time this makes the front page it reaches at least one fresh pair of eyes.

Relevant xkcd: https://xkcd.com/1053/


> Relevant xkcd: https://xkcd.com/1053/

I'm one of the 10.000 today with this comic (well, actually no, because I'm European). I really love the concept behind it: focus on how cool it is for someone to discover some great new thing they will enjoy, instead of focusing on how they missed it all this time.


Is HN a search engine for xkcd?

Honestly people, do you have them indexed somewhere?

I'm amazed by your ability to find the appropriate xkcd whenever it's needed. Me, I'd have to spend at least half an hour on Google trying to find it, and then give up or see that someone else already posted it.

I love this community.


Some xkcd comics are classics - my go-tos are the ones about ISO dates (https://xkcd.com/1179/) and password strength (https://xkcd.com/936/). Both of these were found by searching Google for 'xkcd (dates|passwords)' and having them be the first result.

Explain XKCD is great if you know roughly what topic was covered by a comic, but can't quite find the URL. https://www.explainxkcd.com/


> Explain XKCD is great

Thanks for the tip, I added it to my favorites.

Could it be used to implement a Slack bot similar to /giphy?



It’s a kind of quirky humor that resonates well with a technical audience. I haven’t personally memorized any numbers, but I do know the “Ballmer peak” and “competing standards” XKCDs pop up a lot in casual conversations with my own peer groups.


I love those, but the only one I remember is https://xkcd.com/927/

It's always pleasant to see them pop up in conversations.


I can’t resist posting my favorite: https://xkcd.com/327/


I suspect the links or references to xkcd follow a power law, where a few comics get linked to all the time.

Relatedly, I believe it's rare to see an xkcd linked to above 2000.


You can find most iconic xkcd comics easily with Google. This one shows up with "xkcd new things" or "xkcd first time".


From a historical standpoint, I would think it is interesting for someone who came into the profession in the last decade to know more about how the practices we today take for granted as obvious came about.


The account “tosh” does this pretty regularly.


Yes, I've noticed this as well.


I personally am stunned that the advice for logging is taken as canon.

I can see using container stderr/stdout as a way to collect exceptions, but using it as a source of exceptions, access logs, authentication events, all mixed together where splunk or "hadoop" will parse it for you? Madness.


As opposed to what, exactly? Should I be reinventing logfile rotation inside my web app? Why not just log to a stream and have a dedicated logging service figure out what to do with it? It sounds reasonable to me.
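The 12-factor recommendation being debated here is exactly this: the app treats its logs as an event stream on stdout and never manages files or rotation itself; the execution environment captures the stream and routes it to a collector. A minimal sketch of that idea, with structured output so a downstream tool can separate access logs, auth events, and errors (the field names here are illustrative, not any collector's actual schema):

```python
# 12-factor style logging sketch: one JSON object per line to stdout.
# The surrounding environment (systemd, Docker, Kubernetes, ...) is
# assumed to capture the stream and forward it to a log aggregator.
import json
import sys
import time


def log_event(event_type, **fields):
    """Emit a single structured log line so a collector can filter
    by event type instead of parsing a mixed free-text stream."""
    record = {"ts": time.time(), "event": event_type, **fields}
    line = json.dumps(record)
    sys.stdout.write(line + "\n")
    sys.stdout.flush()
    return line


log_event("access", path="/health", status=200)
log_event("auth", user="alice", outcome="success")
log_event("error", message="upstream timeout")
```

Writing one machine-parseable event per line sidesteps the "madness" objection above: the streams are still mixed on stdout, but each event carries its own type tag, so splitting them back apart downstream is trivial.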



