I can tell you why people still use ncurses: if you’re actually interested in accounting for all the nuanced eldritch madness that is terminal emulation implementation and history it’s a monumental effort, at least imo.
Sure, if all you care about is coloring text on recent terminals, I do think just using ansi codes is fine.
However, I completely disagree with the assertion that hardcoding ansi codes is somehow “more readable” than using a well-named API call that abstracts such detail away from the user. Maybe I’m alone in this, but I really don’t want to waste brain space memorizing ansi codes. Yeah I can write my own library or little macros to do this but…why? If I’m doing anything more involved than coloring text why wouldn’t I just pull in the lib I need? There’s no way in hell I want to maintain a custom implementation of terminal cursor movement for kicks. I’d much rather reach for a battle tested library that scores of other devs have already used and improved upon, whether it’s ncurses or one of the more modern TUI libs.
Also, it’s become cool recently to hate ncurses for some reason. Are there things ncurses should do differently from today’s perspective? Sure. Is its API awkward in the face of the modern programming features and paradigms we’re now used to? Sure. But on the whole its design is really solid and it has plentiful documentation. Whenever I see rants like this I get the impression the author lacks an appreciation for history and for the way technical development proceeds in general, and imagines they are our technical savior, come down from the heavens to enlighten all us blindered fools about how terrible X is.
If you want to educate people about ansi codes, great. A really inefficient way to do that is to wrap your advice in an obnoxious rant.
But, you see, ncurses works, so it can't be allowed to stand. We have to disdain it, and pretend that it's bad for some undefined reason, so we can make half-functional software that ignores any solutions older than My August Personage. It's the Wheel Of Progress or something.
And hard-coding terminal escape codes intrinsically ties you to a VT102 derivative (because no one would hard-code VT-52 terminal codes), and that would stop (say) my HP terminal from working.
Sure, going with VT102-family codes probably covers somewhere between four and seven 9s of cases, but...
I built a cross platform app to print 'pretty formatted' source code [1]. I didn't want to re-invent the wheel on formatting source code, so looked at all the existing libraries. Originally I figured formatting to HTML, and then building a print-friendly HTML render would work. But this proved super challenging. I tried a dozen HTML engines (including Chromium) but none gave me enough control to render just a single page of the original source file.
Then I noticed Pygments, a Python-based library for pretty formatting source code, has an option to output an ANSI formatted file. I quickly found a bunch of libraries that could render ANSI formatted text to a print canvas.
In the end, I put the original source code file through `pygmentize -16m -o tempfile.an` (`16m` is the 16M-color terminal ANSI formatter) and pipe `tempfile.an` through a print-optimized renderer to actually print the source code.
> Originally I figured formatting to HTML, and then building a print-friendly HTML render would work. But this proved super challenging. [...] none gave me enough control to render just a single page of the original source file
Nothing like as powerful as your app and entirely tangential to the topic of ANSI escapes, but my preferred way to generate HTML from source code is simply:
vim +TOhtml +wq +q path/to/source/file.ext
That will save an HTML version at `path/to/source/file.ext.html` (conforming to my .vimrc's syntax settings, theme, etc) and I'm happy enough with my browser or system's print dialog to go from there.
The author seems not to be aware of the recent TUI renaissance[1]. There are libraries like termbox and blessings (python) that are a middle ground between full ncurses and adding your own ansi codes. There are a lot of modern TUI frameworks like tui-go or tui-rs that bring common GUI conventions back to the TUI (heck, there are TUI programming libraries that are designed to be similar to react) - these too tend to be a lot nicer than working with ncurses.
Definitely worth checking out the full landscape these days if you're going to dive into making your console programs prettier.
[1] My words, I just coined that name (although I wouldn't be surprised if others had said it too). I mean that in the last ~decade there has been a lot of TUI work in the background, with lots of new programs and some pretty stunning results. I blame unicode - once the web folks realized they could use something like font-awesome instead of sprite-sheets (or maybe as a pre-built sprite sheet?), and the "icons in a font" movement took hold, terminal programs got a lot prettier too. It makes sense, it's the same font/font renderer provided by the system no matter if the app is a browser or terminal emulator.
I'd add https://vt100.net/docs/vt100-ug/chapter3.html as it has details invisible-island doesn't cover, as good a resource as that is. The other chapters are informative too.
The VT510 manual is also useful since it was designed to be used with a PC keyboard so it covers newer escape codes that aren't on older DEC terminals https://vt100.net/docs/vt510-rm/chapter8
Here's the reference I use for VT100 ANSI control sequences: https://github.com/jart/cosmopolitan/blob/c6bbca55e9f977e386... I created this reference because there wasn't one available before that could be easily copy/pasted into GNU C or Python string literals.
That document is very good, but it mostly describes control sequences as interpreted by xterm, with some notes for other terminal emulators, too. Some of the more advanced ones will have different sequences in other terminals (for example for the mouse). Most basic ones (i.e. ANSI stuff) work in pretty much all terminal emulators of the last few decades though.
Either way, I wouldn't use it as an authoritative source uncritically.
The relevant ECMA standards are full of crazy bullshit that, to the best of my knowledge, nobody has ever implemented in any hardware or software terminal, like control sequences to make lines of text run vertically (SPD), or to fully justify text (JFY), or to use a Fraktur font (SGR 20). They're also incomplete -- they go into very little detail explaining what various control sequences should do, especially in exceptional circumstances, and they don't discuss Unicode at all (as they haven't been updated since the early 1990s).
Good to see you've added support for that but the cynic in me can't help wondering if that ANSI code is even needed. While you're right that unicode makes it easy to support, it's also even easier to output the Fraktur characters directly from the CLI application and do away with that particular escape code entirely. The bonus in doing that is you then support more terminals in the process since unicode is more widely supported than the Fraktur escape sequence.
This isn't me being critical in what you've done though. More of an adjacent commentary about the state of ANSI escape sequences.
Not all terminal emulators follow that completely. Most only include a subset, and some include their own proprietary escape codes too (iTerm and Terminology have codes for rendering images; tmux has an escape code for changing the session title).
Then there's other specs not included in that doc even outside of the aforementioned proprietary codes. Like Sixel, conventions on non-POSIX terminals, etc. Also lets not forget the popular-but-not-standardised conventions like the hyperlink escape codes (which I personally think shouldn't exist in the first place....but that's another topic entirely).
Even standard ASCII characters can differ from one platform to another. Backspace being a classic example: ISO 646 describes it as ^H whereas ASCII 1963 has it as ^?
This mess of differing compatibilities is exactly why termcap is a thing.
Being an author of a readline library, this is a topic quite close to my heart :)
> > ^? is the code for the [DEL] or [DELETE] key on physical terminals, that's why it's an ASCII standard.
> When the PC bucket keyboards came, IBM in their usual idiocy simply put the [BACKSPACE] key where the [DELETE] key normally is, and what's worse, they made it not only move back one space but delete too. Thus came the sadness of everlasting redefining of deletion to be ^H when using a PC bucket keyboard.
Indeed. And to confuse things even further, the [DEL] key on IBM keyboards sends the following ANSI escape sequence: [27 91 51 126] (i.e. ESC [ 3 ~)
So we now have 3 different standards for deleting characters (and that's before you start looking at the escape sequences for deleting rows, nor supporting vi and/or emacs/readline bindings).
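For reference, here are those three "delete" encodings pinned down as raw bytes (a quick illustrative snippet, not from any particular library):

```python
# The three "delete a character" encodings discussed above, as raw bytes.
BACKSPACE = b"\x08"     # ^H: ASCII BS, what PC-style [BACKSPACE] keys came to mean
DELETE    = b"\x7f"     # ^?: ASCII DEL, the classic terminal rubout
DEL_KEY   = b"\x1b[3~"  # ESC [ 3 ~: what the IBM [DEL] key sends (27 91 51 126)

# The decimal sequence quoted above is exactly that escape sequence:
assert list(DEL_KEY) == [27, 91, 51, 126]
```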
This is a great table of control sequences, but the second author cited as maintainer from 1996 to 1999, Thomas Dickey, currently maintains a list at his invisible-island.net site.
Here is something I learned only several weeks ago. While working on Pipe Watch, I strayed into reading the standard.
* The ESC [ command start sequence is actually a compromise for 7 bit systems. The [ character is not chosen by accident. It has an obvious positional relationship to ESC in the ASCII code which is why, informally, Ctrl-[ is the same as ESC.
* If you have an 8-bit-clean channel to the terminal, only a single character is required: the "upper escape" from the C1 control character set (0x80 to 0x9F). This character, 128 + 27 or 0x9B, is called CSI: control sequence introducer, which is basically its role in these terminal control sequences. Thus ESC [ is just an alternative way of encoding CSI for 7 bit. E.g. instead of CSI 4 A, you use ESC [ 4 A.
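Those relationships are easy to verify numerically; a small Python check of the byte values (nothing terminal-specific here):

```python
# The ASCII relationships described above, checked numerically.
ESC = 0x1B
CSI = 0x9B  # the C1 "control sequence introducer"

# Ctrl-X traditionally clears the upper bits of the character code,
# which is why Ctrl-[ produces ESC:
assert ord('[') & 0x1F == ESC

# The single-byte CSI is the "upper escape" counterpart of ESC: 128 + 27.
assert 128 + ESC == CSI

# Cursor-up-4, encoded both ways (7-bit form vs. single-byte C1 form):
seven_bit = b"\x1b[4A"
eight_bit = b"\x9b4A"
assert seven_bit.replace(b"\x1b[", b"\x9b") == eight_bit
```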
> If you have an 8-bit-clean channel to the terminal, only a single character is required: the "upper escape" from the C1 control character set (0x80 to 0x9F).
I'd avoid using this. It conflicts badly with UTF-8's use of 0x80 through 0xBF as continuation characters.
I don't entirely buy the argument because in a regular ASCII control sequence like ESC[5A, you have the same problem. All those characters have a role outside of the signaling, so if either side is in an unexpected state, they get misinterpreted. This is just the risk of in-band signaling.
Of course, if the terminal is ignorant of UTF-8 then this is a nonstarter, because whenever CSI occurs as a continuation byte, it will be misinterpreted. If the terminal is ignorant of UTF-8, why would you send UTF-8 to it, though? If it's going to be interpreting UTF-8 as some branch of ISO Latin, the display will be a mess.
If the terminal and host do handle UTF-8, then this CSI signaling is just an extension of the state machine. It's also nice and simple that, in the absence of any data loss or synchronization error, the CSI code is unambiguously not part of any valid UTF-8 character (except as the second or subsequent byte where the receiver is in the right state to interpret it that way).
In my experience, it's the terminal->host direction where you get mixups, whereby the terminal generates some escape sequence like for an arrow key, but the host is not in the right state, and interprets part of it as data. This is exacerbated by a situation in which the host supports ESC as a UI command. CSI solves the ambiguity between the control sequence start and ESC just being ESC.
The fundamental problem with mixing C1 controls with UTF-8 is that it forces the terminal emulator to break layering. It can't run a UTF-8 decoder first, because that'll turn the C1 controls into replacement characters, and it can't run a terminal sequence decoder first either, because that'll treat a lot of the UTF-8 continuation characters as control sequences. And what you're likely to find if you start using C1 controls is that support for them in terminal emulators is often incomplete and/or buggy. Handling them correctly in conjunction with UTF-8 text is difficult, and many terminals just don't bother.
The ambiguous nature of ESC in the terminal->host direction (as you put it) is unfortunate, but is difficult to fix. Some terminals (like iTerm) can be configured to use C1 controls for function keys, but my experience has been that a lot of software fails to recognize these sequences, making it impractical to use.
The fundamental problem with TCP/IP is that it forces the stack to break layering. In the same frame of bytes, you have a confusing mix of ethernet addressing, IP header, and a payload of application data, all from totally different pieces of software in the system. Even the data itself is fragmented, with some session wrapping around content that are done by different application stacks.
> It can't run a UTF-8 decoder first, because that'll turn the C1 controls into replacement characters, and it can't run a terminal sequence decoder first either because that'll treat a lot of the UTF-8 continuation characters as control sequences.
It has to have a state machine which recognizes the combined language of UTF-8 sequences and control sequences. Which is the approach you would take anyway, even with C0 controls.
That combined language is an unambiguous, regular set, so you could code it with your eyes closed.
Starting in an initial state, the legal inputs are: ASCII character, Unicode character, or escape sequence headed by CSI. This is decidable from reading exactly one byte value with no further lookahead.
That's just one way. You can in fact follow a layered approach whereby the terminal decodes everything with UTF-8 before analyzing it for control or data.
For instance, say we decode UTF-8 into integer code points. A valid character decodes into its implied code point. An invalid byte like CSI can decode into some reserved range like U+DCxx. The higher layer of the terminal's firmware then looks for values in that U+DCXX range: that's where it finds the CSI.
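For what it's worth, Python ships essentially this scheme as the `surrogateescape` error handler (PEP 383): undecodable bytes map to U+DC80..U+DCFF and round-trip losslessly.

```python
# Python's "surrogateescape" error handler uses this exact U+DCxx trick:
# bytes that aren't valid UTF-8 decode to U+DC80..U+DCFF instead of
# raising an error, and encode back to the original bytes.
data = b"hi\x9btext"  # 0x9B (CSI) is not valid UTF-8 in this position
decoded = data.decode("utf-8", errors="surrogateescape")
assert decoded == "hi\udc9btext"

# A higher layer can now scan for CSI (or NUL, etc.) in the U+DCxx range:
assert any(0xDC80 <= ord(c) <= 0xDCFF for c in decoded)

# Lossless round trip back to the original byte stream:
assert decoded.encode("utf-8", errors="surrogateescape") == data
```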
I have years of experience with this exact encoding scheme, which I baked into the text I/O streams of a programming language.
For instance, oh, /proc/self/environ is NUL-separated, right? No problem:
The NULs are rendered into \xDC00 codes. This is called the "pseudo-null" character in the terminology of this language, and has a symbolic name: #\pnul:
2> #\xDC00
#\pnul
We can split the data on it to recover the list of environment entries:
Does it? That depends on whether the control sequences are being encoded as UTF-8, or transmitted literally in between UTF-8 characters.
If they are being encoded, this is still useful. Though space isn't saved, any ambiguity between the control bytes and UTF-8 bytes is eliminated. The advantage is still present that CSI is different from ESC, and so in the terminal->host direction, you don't have the ambiguity between ESC as a UI command character versus control sequence signal byte.
> but i swear to god developers have so completely forgotten how terminals work that i might be one of a handful of people left on earth who actually has the knowledge to, so they all just layer their bullshit on top of ncurses (which should never have survived the '90s) instead and it's maddening.
Actually, yes, understanding the tty in detail seems to have become a dark art.
However, it's the best way to do complex things quickly: I did use some of these tricks, like storing and then restoring the cursor position, to put the time at which a command stopped executing ABOVE the command itself and next to the time it started executing, in https://github.com/csdvrx/bash-timestamping-sqlite
I had to, because I was using MSYS2 and the time to execute a command was a limiting factor in Windows before WSL2.
No, WYSIWYG: I don't bother much more than that with explaining how it's done: the source code + the comments + the github page already give all the details away to whoever wants to dig deeper.
I'm not much into social media or self promotion either: if people like what I do, they'll use it - I don't care much more than that, as everything was written for myself and my selfish needs first.
The tty/sixel world is a very small world anyway, we generally know and recognize each other, so we know where to look for cool new stuff :)
It should be a super simple feature to add to your terminal emulator: SCP works with a X,Y position. RCP just "jumps" there.
If you keep an accounting of how many lines you have displayed since then, you could alter the response to RCP by also doing the appropriate amount of scrolling: it should only take one variable, the deltaY to scroll.
I've used similar tricks with RCP/SCP but for simpler things: the only slight difficulty is the deltaY accounting, like when you are executing commands near the bottom of the screen, because you must take into account that scrolling will happen - but it's essentially similar to your idea.
Actually, now that I think more about your idea, it would be sweet to keep a SCP/RCP stack with multiple values, where you can push values with each SCP then pop them with RCP, say in sequence, or maybe just access the nth value with a different command that wouldn't pop them? That could be done nicely by augmenting RCP.
Also you could augment SCP with an optional flag to specify whether the terminal should scroll back upon RCP of this nth entry, and you'd have a great function that would be quite useful (ex: SCP with a jump bool when the return is non 0: you could make a shortcut to jump to the commands that have returned errors)
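If anyone wants to play with the stack idea application-side before any terminal implements it, here's a rough sketch (all names invented; it emulates the stack with absolute positioning, CSI row;col H, instead of the terminal's single save slot):

```python
# Client-side emulation of a save/restore cursor *stack*, using the
# absolute-positioning sequence CSI row;col H. A real terminal-side
# implementation would hook SCP/RCP themselves, as proposed above.
class CursorStack:
    def __init__(self):
        self._stack = []

    def save(self, row, col):
        """The 'SCP' half: remember a position (1-based row/col)."""
        self._stack.append((row, col))

    def restore(self):
        """The 'RCP' half: pop the latest position, return the jump sequence."""
        row, col = self._stack.pop()
        return f"\x1b[{row};{col}H"

    def peek(self, n=0):
        """Access the nth most recent saved spot without popping it."""
        row, col = self._stack[-1 - n]
        return f"\x1b[{row};{col}H"

stack = CursorStack()
stack.save(5, 1)    # e.g. where a command's prompt line started
stack.save(12, 1)   # e.g. where the next one started
assert stack.restore() == "\x1b[12;1H"
assert stack.restore() == "\x1b[5;1H"
```

The deltaY-scrolling bookkeeping mentioned above would live in `save`/`restore`, adjusting rows by how many lines have scrolled since the save.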
There's no reason to stop adding cool features to terminals: we're in a terminal renaissance!
One thing that the author doesn't seem to know about is /dev/tty. In the article the escape codes are just sent to stdout. Though an awful lot of applications (including the greatest ones) do this, IMHO this is wrong. The terminal control codes are used to control a terminal, and they are often not meant to be part of the output stream, for example when the output is piped or redirected to a text file.

When what you intend to do is make your output fancy only when the output is a terminal, surely you should just send everything to stdout and use isatty to decide whether to also send those terminal control codes. But if what you want to build is a whole TUI like vim or the author's example app, you should send all the control to /dev/tty. This way, if needed, you can extend your app to be a useful part of a pipe, as bytes sent to /dev/tty will not be redirected but will always be handled by the terminal.

To prove that this can be useful: fzf, a fuzzy matcher, uses a TUI to let users input the pattern and pick among the matches, and prints the result to stdout. Sadly, however, it uses stderr to control the terminal instead of /dev/tty; this makes its ability to print error messages somewhat limited and its behavior unexpected when stderr is redirected. Also imagine that you could use vim in a pipe, instead of sed or awk, to see the effect of your edits live. Also, try `vi > /dev/null`. I'd say the behavior is a bug.

IIRC ncurses makes use of /dev/tty and by default makes apps built with it redirectable, and this is a reason we should use it in the 21st century, among others. What's sad to me is that so far all the Rust terminal libs I've seen ignore /dev/tty, so it's impossible to use them to build something that both has a good TUI and can be used in pipes.
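A minimal sketch of both halves (in Python rather than C, and the helper names are mine, not from any library):

```python
import os
import sys

def open_control_tty():
    """Open the controlling terminal for escape codes, so stdout stays
    clean for data even under redirection. Returns None when there is
    no controlling terminal (cron, some containers, etc.)."""
    try:
        return open("/dev/tty", "w")
    except OSError:
        return None

def colorize(text, sgr="31", enable=True):
    """Wrap text in an SGR sequence only when decoration is wanted."""
    return f"\x1b[{sgr}m{text}\x1b[0m" if enable else text

# The simple case: data goes to stdout (redirectable), and we only
# decorate it when stdout really is a terminal.
fancy = os.isatty(sys.stdout.fileno())
sys.stdout.write(colorize("result", enable=fancy) + "\n")
```

A full TUI would do its reads and writes on the object returned by `open_control_tty()` and reserve stdout for the pipeline result, which is the fzf-style split described above.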
The write_and_flush_... pseudo functions are written in this way just to make it clear that I'm describing the behavior when the output gets the bytes immediately. I don't mean that you should use /dev/tty like this.
Redirection won't have any effect on these lines of code. You always get a red "word" on the terminal. The redirection target doesn't get anything.
When redirected, your terminal stays red after the red "word" is printed, and your redirection target gets an extra escape code.
When stdout is a tty, /dev/tty is the same as stdout, so a write that goes to one goes to the other. In this case, even if you don't flush after write immediately it's likely not a problem. Still it's something that the developer should pay attention to.
-------------
Edit:
Please ignore all the pseudocode examples. They don't convey what I wanted to express.
Just use /dev/tty the same way you would use a stdin/stdout/stderr that is connected to a terminal. The only difference is that it's a read-write device that can be used for input and output at the same time.
This isn't guaranteed to be serialized: Someone trying to log the output of your application might try: | tee /dev/tty | logger ... or they might be running under kubernetes/docker (which does much the same thing).
I suggest the following:
1. If fd 0 and fd 1 are both ttys (isatty), and they point to the same tty (ttyname), then use it as a tty (like your first example)
2. If fd 1 is not a tty, and fd 0 is a tty, write plaintext to fd 1. The user wants to filter output.
3. If neither fd 0 nor fd 1 is a tty, write twice: do interactive stuff and send your vt-sequences with the text embedded to /dev/tty, and a plaintext copy to fd 1.
Now the user doesn't need the extra tee, and we no longer need to worry about synchronisation. Users don't typically (meaningfully) grep interactive applications, so I think it's a good compromise for interactive servers and tuis.
But if you're not reading input or doing absolute cursor movement, and you just want some spicy log lines, I don't think you should be doing any of this fiddling: Just write sequences to stdout if it's a tty (or if the user specifically tells you to).
I also recommend checking for $NO_COLOR in the environment and honouring the user's wishes here: some users are colourblind, so this represents a real accessibility issue for them. One of the advantages of ncurses/terminfo is you get some accessibility features you might not have known you needed.
Have you considered that many of us want ansi codes in the pipeline? For example, all the log viewers I use understand things like color codes. If I don't want ANSI codes in the output, then I can just pipe it through sed 's/\x1b\[[;[:digit:]]*m//g' which is easy. However if a program tries to be "smart" like you're describing w.r.t. hiding ANSI codes, then I have to go to all this trouble wrapping it inside another program which is a fake pseudo-terminal, which captures and extracts the real output.
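For anyone who'd rather do the stripping inside a script than shell out to sed, the same pattern ports to Python directly (this handles only SGR `...m` sequences, same as the sed above; cursor movement and other CSI sequences would need a broader regex):

```python
import re

# Equivalent of: sed 's/\x1b\[[;[:digit:]]*m//g'
SGR = re.compile(r"\x1b\[[;0-9]*m")

def strip_sgr(text):
    """Remove SGR (color/attribute) escape sequences, keep everything else."""
    return SGR.sub("", text)

assert strip_sgr("\x1b[1;31mred\x1b[0m plain") == "red plain"
```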
> Have you considered that many of us want ansi codes in the pipeline?
Yes I have. Using /dev/tty doesn't stop an application's author from adding a --color that lets their program send color codes to stdout. But if the author doesn't use /dev/tty either stdout or stderr of their application can't be redirected.
Yes, but no one agrees on what that should be. For example, I need to flip through a 623-page manual to discover -fdiagnostic-color=always is the magic incantation for gcc. I have to repeat that for every app I use. Some do it using environment variables, except no one agrees on names there either, so I have to bloat my environment and it slows down process creation. Whereas with my sed solution, anyone who knows regex could write it in a few minutes, and it only has to be figured out one time. Furthermore, there will eventually come a day when the tools we pipe stuff into all get good enough to gracefully consume ANSI codes, and there's no longer a need for sed. But we can't evolve in that direction if we use the solution you're proposing, which binds us to a colorless past. Erring on the side of having more information is always a good thing. The burden of the flag should be on disabling, not enabling, and there shouldn't even need to be a flag since there's sed.
Search "color": it's the first result as an abbreviated reference and the second as a full explanation. It's also the first result on google/ddg/etc.
You have a good argument, but claiming you have to "flip through a 623 page manual" and "with my sed solution, anyone who knows regex could write it in a few minutes" detracts from your point.
I'm loath to imagine how those modern TUI libraries that have been popping up recently [1], the ones that emulate Elm and React rendering in the terminal, behave when isatty == false.
We're basically recreating Flash for the terminal, where we get a fancy UI but lose the original text functionality, i.e. the output is no longer pipeable. Not ideal when working on UNIX systems.
A very nervous very young me gave a bad talk on terminals at a conference about 10 years ago.
Reviews of my talk were pretty rough, but a number of them mentioned learning that they should have been sending their control codes to standard error, so it wasn't a total waste...
If you want to build a TUI, you absolutely shouldn’t try to mess with devices, nor should you assume your terminal is /dev/tty. You should be using the isatty(3) libc function.
This will be obvious to any C programmer, but the macro[1] used throughout the article only "works" on string literals and arrays, not pointers.
Also, it will include the null terminator. Probably won't do any harm, but quite silly if you're redirecting to a file, for example. I'd subtract one, or use strlen, which would cover the pointer case above; I'd hope a modern compiler would elide the call on a string literal anyway.
Every time I read this I disagree pretty hard. curses is not that pretty, but it exists to save us from the insanity that is direct terminal codes. I had an employer once who early on had invested heavily in Wyse terminals, and there we were, wanting to access those programs with terminal emulators. Do you know how many good open source Wyse terminal emulators there are? Well, it's pretty close to zero.
Almost every day the mandatory prayer "I wish they had used curses"
Some actual visual examples would've been nice, too. There is just text that describes what it would look like but, as we all know, one picture is worth a thousand words.
All the code is there. Even a multiframe GIF or video capture wouldn't do their code (and blog post/rant) complete justice.
The final example is both a total treat and a beast. I encourage anyone with any interest to chuck the code into GCC and behold the interactive experience for themself.
I see these comments on many links from here. There are extensions or reader mode in your browser. At least that would take care of the colors. In Vivaldi there are additional 'Page Actions' to turn a page to gray scale etc.
It's interesting, because I sometimes also find colors distracting, so for my Vim and other terminal tools I make intensive use of text attributes (like bold, italic, underline and their various combinations).
Your comment may incentivize me to release my monochrome vimrc: it looks quite good on mintty/msys2 or Windows Terminal, and still very decent with xterm!
For a second after I read the title, before I saw the domain, I thought that Linus had written a followup article to his original -- very popular on HN -- one.
> Uh, no. Anyone on BBSes in the 90s is very aware of ANSI, thank you. And we've not died off yet.
You guys should be more vocal. When you have great knowledge like that, you can't just keep it to yourself as a fond memory. People wouldn't be saying what the OP said if more of the oldskool crowd was out there blogging and mentoring the younger generation.
It's not just a fond memory. I use it in my open source. Other people use it.
It's not obscure.
One thing I actually agree with the author about is that these escape codes are the only relevant thing. Outside of retro computing nobody should care about supporting anything else.
Other programs doing this are not exactly in short supply. Anyone can do "ls --color /bin/ls | hexdump -C" and see the secret sauce.
I blog about various things, partly to help out people who are less experienced. But I don't pretend what I write about is some sort of lost art, that "only a handful of people know".
Like "how do I make bold text in linux terminal" gives as second result this:
Not "everything you wanted to know about terminals".
Like, "how does Ctrl-C work? What's flow control?". No, this post is entirely about ANSI codes.
What's extra frustrating is that you too are calling this "oldskool crowd". I'm just not that old, and this just isn't forgotten. It's simply another tool that people use when they need to solve the problem of colors, etc.
Just because many people don't know how a malloc()/new becomes a mmap or sbrk doesn't make it "oldskool". It's simply a thing that many people have not learned yet, because they haven't needed to. If and when they need to it's quite documented and many others know it, if they want more hand holding.
Like say I didn't know how garbage collectors worked. I don't go "Oh you older generation of lisp programmers, you need to blog more and teach us younglings, so that we can understand the languages that we use". Sure, blogging etc about GCs is good, but who would be arrogant enough to just write an article about "old gen and new gen" and call it "everything you ever wanted to know about GC" and claim that they are one of the handful of people who understand GCs.
This blogger put so much effort into sharing a well-written blog post explaining how we can get a better richer experience from our terminals. That's not arrogance and your comment comes across as very condescending which is worse than arrogance. I always try to be encouraging when people feel passionate about tech since I think it leads to a better culture than saying, "oh, you think this stuff is new? it's been documented a thousand times before, don't bother".
> This blogger put so much effort into sharing a well-written blog post explaining how we can get a better richer experience from our terminals.
I don't even believe that you believe that this is an accurate description of the post.
> That's not arrogance
Straw man argument. I was not commenting on those things, and you know it.
> and your comment comes across as very condescending which is worse than arrogance.
Clearly the original article is both condescending and arrogant. And elitist.
> "oh, you think this stuff is new? it's been documented a thousand times before, don't bother".
I didn't say that. There's value in writing it again. I don't pretend that my blog post are pushing the envelope of knowledge either, but maybe I'll explain it in a way that fits better with how some reader will better absorb it.
Anyway: I am familiar with you from the past, and how even your friends have described you as someone who likes to live life at the edge of trolling. I sense that this is what's happening now, so I will not engage further.
I was aware of the ANSI escape sequences and even used them directly in scripts on occasion, but I still used ncurses "where it mattered" because I didn't know about compatibility. I didn't want to risk Windows or a random flavor of linux I'd never heard of or a group of anti-VT100 enthusiasts getting upset because I didn't use an agreed upon compatibility layer.
From the tone of this piece I gather that the ANSI escape codes are actually standard enough to target. Cool! Thanks for the heads up.
> From the tone of this piece I gather that the ANSI escape codes are actually standard enough to target.
termfo[1] comes with a "termfo" CLI utility which – among other things – can group terminals by escape code; for example "termfo find-cap save_cursor" shows that almost all terminals use "\x1b7", with just a few very old ones using something different (full output is a bit long, but it's at [2]).
It's useful to check "can I safely hard-code this escape code?" But like you said: for ANSI it's pretty safe to just hard-code most codes, especially the common ones, but it never hurts to check.
> From the tone of this piece I gather that the ANSI escape codes are actually standard enough to target.
Correct. Hardware terminals are extinct in the wild, and essentially all software terminals (including the Windows terminal!) now support a reasonable subset of "extended VT100" terminal control sequences. Some of the weirder features of the VT100 (like double-high/double-wide text or VT52 compatibility mode) are usually omitted, and some features which were added in later DEC terminals (like color) are often added.
The different features omitted and added are precisely one of the reasons you'd still want to be using an abstraction library. Otherwise you end up having to stick with the lowest common denominator, or having the user figure out how to enable each terminal feature in every app individually. These libraries were invented with good reason.
Even now there's big differences between terminal emulators, especially when you take modern features like images into account: there are like 6 different ways of doing it, as some terminals have invented their own (kitty, for one), and there are more standard ones like Sixel, which comes originally from the DEC VT340 series. Though it was ridiculously slow on that hardware, so it didn't take off until much later.
I feel like this depends a bit on what you consider to be "big differences", and what sort of abstraction library you have in mind.
So long as you're only concerned with placing and formatting characters on the screen, support for the subset of "extended VT100" which is required is essentially universal. There is no disagreement over what escape sequences can be used to move the cursor, for example, or to clear the screen. Using an abstraction layer like termcap is effectively a no-op here; it will output the exact same sequences for any modern terminal application.
And if you want to support modern and/or esoteric features like embedded graphics or custom characters ("sixels"), you're probably going to handcraft that anyway, as support for those features is very limited, and most abstraction libraries won't support them at all. So I'm not sure I see the case for an abstraction at either end of the spectrum.
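For what it's worth, that universal subset really is small enough to hard-code by hand. A minimal sketch (these are the standard ECMA-48 sequences, nothing terminal-specific):

```shell
# The "universal subset" in practice: these codes are identical on
# essentially every modern terminal emulator, so an abstraction layer
# would emit the exact same bytes.
printf '\033[2J'               # clear the whole screen
printf '\033[H'                # move cursor to top-left (row 1, col 1)
printf '\033[%d;%dH' 5 10      # move cursor to row 5, column 10
printf '\033[1mbold\033[0m\n'  # bold on, text, all attributes off
```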
A while ago I was trying to find a way to make my terminal scroll back up after a command executed, so that if the output was long I could read it from the top without having to scroll up manually. There are ways to get your shell to print something after the command executes, so I just needed to find an ANSI escape sequence that would scroll up. Unfortunately I didn't see any sequences that do this. Anyone have any ideas?
Piping to less is essentially the standard way to do this.
your_command_here | less
It will capture the output and let you scroll up and down.
Otherwise, this would really be a feature of the terminal, and not necessarily required to be supported (it makes sense when you consider that many early real terminals literally printed the output to paper -- who needs scroll-back, just look up the tape! And repeating previously shown output would result in a confusing print-out).
However, if your terminal emulator doesn't have a scroll-back, you could try something like tmux, or alternatively GNU screen. These add lots of little features to the terminal: a buffer to scroll back, split terminals, persistence so you can detach and reattach. (tmux is the more modern-feeling of the two, IMO.)
As far as the "terminal virtual machine" is concerned, text which scrolls off the top of the screen is gone forever. Most software terminals implement functionality to save that content into a buffer and allow the user to review it, but there are rarely any control sequences which interact with scrollback, and behavior which interacts with scrollback (like resizing the window) often varies between terminals.
The scrollback buffer may be a limitation, so you should try enabling or disabling the alternate screen (ti/te).
If that fails, you may have to tweak your terminal emulator to "hook" the SCP/RCP to a specific point of the scrollback buffer, to allow this scrollback.
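For anyone wanting to poke at the alternate screen by hand: on xterm and its descendants it's toggled with private mode 1049. A quick sketch:

```shell
# Private mode 1049 switches to the alternate screen and back; full-screen
# apps like vim and less use it, which is why their output disappears from
# scrollback when they exit.
printf '\033[?1049h'                 # enter alternate screen (saves the main one)
printf 'temporary full-screen UI\n'  # drawn here, won't pollute scrollback
printf '\033[?1049l'                 # leave; previous screen contents restored
```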
Yeah it sounds like you need tmux and / or screen as the other anons have been saying. You can then split your terminal in its window, have a status bar which can display things, create multiple virtual terminals, view them on other terminals at the same time, etc. When I was first introduced to Unix in the mid-1990s I didn't know about screen and it would have saved me a lot of pain as I used a dial up connection. Once you start using one or the other there is no going back.
Once it's scrolled off the top of the screen you're basically at the mercy of your terminal emulator's scrollback history. Some might have an escape sequence available to recall it, but I don't think there's any standard way of doing it.
You'd be better off piping into less / more / most. These are called "pagers" and are designed to do this, e.g. `your_command_here | less`.
eshell has a module called eshell-smart that borrows from Plan 9's 9term, it automatically pages long commands, stays on the same line for editing if you show intent to change it and automatically scrolls to the end if you start typing a new command.
This seems to only cover output. At least as interesting, from a usability perspective, is input - making sure that keyboard shortcuts work, in so far as they can be represented.
It's not hard to find yourself with an incorrectly configured terminal on a modern Linux distro. For example, try running emacs inside tmux inside rxvt-unicode and find out how Ctrl/Shift/Ctrl+Shift with arrow keys are bound.
I'm probably interested in reading the material, but the colours and font make it difficult to do so. Of course, I could switch my browser to reader mode. It would only take a moment. And perhaps this attention-seeking author has some great insights. Maybe my life would be changed, if I just took the time to decode the message. Maybe.
The only reason I found in the article was "because jesus f. christ" (near the end). Therefore I assume the author has religious objections against ncurses. /s
In fact they mention various issues with their approach that ncurses would fix automatically, without the user having to configure stuff in the app:
> sadly, true color isn't supported on many terminals, urxvt tragically included. for this reason, your program should never rely on it, and abstract these settings away to be configured by the user.
Apparently they don't care about the great terminal-independence ncurses offers because these ANSI sequences will only work fully on terminal emulators. And as they mention even those differ in supported features. Ncurses was created precisely to abstract these differences. It's not just good for "obscure dumb terminals from 1983". It would also be quite a pain having to deal with terminal resize events etc when building a TUI. And no, doing a full screen rewrite for each minor update does not make for smooth TUIs. Yes modern hardware is fast but sometimes you're on a slow SSH connection.
Personally I still use real terminals too at times (I own a real VT520 and love it) and apps that are totally ignoring termcap/terminfo are super annoying. But I know this is niche.
PS: Also from the article:
> also, i have effectively zero pull in the tech community and am also kind of a controversial character who is liable to give projects a bad reputation, which i don't normally care about
The article has no pictures of what it is selling, so it just seems like a rant, as suggested by the opening profanity. I don't see how I can use what they are saying to make a TUI.
Are you trying? I've been brutally bashing my head against some ancient telnet shit recently and this actually was concretely helpful and I could apply it immediately.
The tone and presentation seem calculated to annoy HN readers (nice) but this article has a bunch of difficult-to-uncover details with just enough context to apply them if you actually have a use for them and a desire to.
My understanding is that control codes for [every possible modifier key combination] + ['standard' US keyboard keys] aren't all standardized.
Is that correct?
I seem to recall having a hell of a time trying to figure out which key combinations Emacs could understand and why (keyboard -> OS -> terminal emulator -> protocol -> program.)
Are there any proposals? Workarounds? Proofs-of-concept?
Yes, this is correct. In particular, there are certain key combinations which cannot be distinguished (for example, Control+I is indistinguishable from Tab) and some keys aren't affected by modifiers (such as Shift and Control on Space).
They pretty much are. Terminals implement CTRL keyboard shortcuts as `c ^ 0100` and ALT works by prefixing '\e'. Stuff like arrow keys encodes using VT100 codes. Those are three very simple rules that everyone knows.
$ wget https://justine.lol/ttyinfo.com
$ chmod +x ttyinfo.com
$ ./ttyinfo.com
"\001" is CTRL-A a.k.a. ^A
"\000" is CTRL-@ a.k.a. ^@
"\002" is CTRL-B a.k.a. ^B
"\f" is CTRL-L a.k.a. ^L
"\r" is CTRL-M a.k.a. ^M
"\033\001" is ALT-CTRL-A
"\033[A" is UP
"\033\033[A" is ALT-UP
Unfortunately it still leaves much to be desired. For example, have you ever wanted to have CTRL-ENTER as a keyboard shortcut in your terminal? Sadly you can't, because chr(ord('\r') ^ 0100) encodes as 'M'. Another issue is it's not possible to encode CTRL+[ as a keyboard shortcut, because it overlaps with \e a.k.a. 033 a.k.a. ASCII ESC. So if you type that shortcut, it'll cause your terminal to hang for a second.
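The XOR rule is easy to sanity-check from a shell. The `ctrl` helper below is just for illustration:

```shell
# Sanity-checking the rules above: CTRL is "key code XOR 0100 (octal)".
ctrl() { printf '%03o\n' $(( $(printf '%d' "'$1") ^ 0100 )); }

ctrl A   # prints 001, i.e. ^A
ctrl L   # prints 014, i.e. ^L (form feed)

# And the collision: ENTER is CR (015); 015 XOR 0100 = 0115 = 'M',
# so the terminal literally cannot distinguish CTRL-M from ENTER.
printf '%03o\n' $(( 015 ^ 0100 ))   # prints 115
```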
It'd be great if the standard bodies could promulgate some kind of well-defined solution to this problem. Having read the ECMA standards documents, I think they really could be doing more to focus on the people who use it for terminals!
When I found this is a valid binary on both Windows and Unix (I've tested on my Linux and BSD VMs) I found it pretty funny, but I did not expect it to be able to grab the mouse movement in Windows at all. Pretty cool!
Glad you enjoyed it! If you want to read more about the technique, please see https://justine.lol/ape.html You can also support the project through Github Sponsors.
It's not exactly like that - it's more like there are competing "standards" and interpretations of these standards, and sometimes supporting one means not supporting the other.
Fortunately, such things are rare, and can be addressed by GUI options.
Take for example SGR1 for "bold/intense" text: read the whole issue that came to Microsoft Terminal team in 2018 up to its most recent discussions on https://github.com/microsoft/terminal/issues/109 then check the "simpler" version in wikipedia https://en.wikipedia.org/wiki/ANSI_escape_code#SGR_.28Select... and you will realize there's no "right" answer between bold and bright (or both!), just different preferences and interpretations of the standards.
It sounds like you’re talking about output, not input.
Are there standards to represent, for example, Ctrl+Alt+* (I.e. Ctrl+Alt+Shift+8) ? Additionally with cursor keys, Insert, Delete, Backspace, F1 - F12, etc.
I’ve searched several times for such standards before but never had any luck…
> It sounds like you’re talking about output, not input.
Sorry if I misunderstood your question!
However, for inputs the situation is essentially similar: the standards again depend on the terminal, and there can be multiple interpretations that can conflict.
Take for example Control-H and Control-?: which is delete and which is backspace depends on the terminal you use!
> I’ve searched several times for such standards before but never had any luck…
There are sources, mostly Thomas E. Dickey (of xterm fame), but personally I try to take an emulator and make it behave the way my fingers expect, with the minimum level of configuration.
To follow up on Control-H and Control-?, here's what my .inputrc does:
### Delete to the right will be remapped to Delete
"\e[3~": delete-char
# keypad on .
#"\eOn": delete-char
# Special case for VT100 (borks backspace -> make it conditional)
$if term=vt100
"\C-?": delete-char
$endif
### Delete to left will be remapped to Backspace
"\C-?": backward-delete-char
# keypad on 5
#"\eOu": backward-delete-char
# Special case for VT100 (borks backspace -> make it conditional)
$if term=vt100
"\C-H" : backward-delete-char
$endif
(yes, the numpad delete has its own case!)
Here's another one, for Control-Backspace and Control-Delete
### Line cuts with Ctrl-Backspace and Ctrl-Delete
"\C-_": backward-kill-line
"\e[3;5~": kill-line
# Ctrl-Delete variants for tmux and urxvt
"\e[M": kill-line
"\e[3^": kill-line
And last time I checked, Windows terminal needed proper remaps cf https://github.com/microsoft/Terminal/issues/755 like { "keys": "ctrl+backspace", "command": { "action": "sendInput", "input": "\u0017" },
This would piggyback on "\C-h" backward-kill-word - which might be incompatible as it's got a different meaning for VT-100 as noted above.
Quoting you again:
> I’ve searched several times for such standards before but never had any luck…
You shouldn't bother too much about that: terminals are just tools to help you do your work, and while sometimes you can use a lot of pre-existing configurations or documentation to have as much compatibility among the various standards as possible, there are other times when you need to break the standards that stand in your way.
For example, I'm a Windows fan, so I believe Ctrl-C should be copy, Ctrl-V paste, and I don't really care about the traditional unixisms that would cause me try to learn to use Shift-Control-C for copy and Shift-Control-V for paste (just... no!!!)
You may disagree with that, so let's take a more straightforward example: I care even less about the more obscure unixisms like Ctrl-S/Ctrl-Q that I rarely need. Who needs to "freeze" the tty output in 2022?
So in that case, I've remapped Ctrl-S to "kill word forward", something usually done with Esc-d by default.
To achieve that, in my .bashrc I first clear the defaults with stty:
## Remap ctrl-c to ctrl-x to copy/paste with ctrl-c and ctrl-v, and disable ctrl-s/ctrl-q
stty intr ^X stop undef start undef rprnt undef discard undef
# The above doesn't show ^S,^Q,^R and ^O anymore in stty -a
Then I can do the remappings in my .inputrc:
### Backward-kill-word is ctrl-w, while kill-word is alt|esc-d,
# so instead, map ctrl-s which makes ctrl-k|u and ctrl-w|s very close
"\C-s": kill-word
All that to say, standards can be helpful, but there's nothing set in stone: with Control-H and Control-? already causing such headaches (but following all the standards) I don't feel so bad for abusing the standards with my use of Control-S for forward kill word, as it complements nicely the existing Control-K, Control-U and Control-W (all the 3 of which are standard BTW!)
This site actually has a decent font size and a responsive layout (within reason; code snippets will get additional line breaks), but your browser (like all mobile browsers) is intentionally pretending to have a larger desktop viewport and scaling down the result. The way for the website to disable this unhelpful behavior is to include the following tag in their <head>: <meta name="viewport" content="width=device-width, initial-scale=1">
It is unfortunate that mobile browser developers have decided to break usability of perfectly fine websites just to provide a workaround for the rare site that has a rigid table-based layout. It is sad that this workaround is still the default now that mobile browsers have a dedicated desktop mode that can be used for such sites.
I was actually thinking the opposite. I got quite a way through the article before I realized "Hang on - None of the paragraphs and sentences start with uppercase letters!"
As a father trying to teach two small children to read and write, I've begun to think it's kind of annoying and unnecessary to have two different forms. It's easy enough to teach when they are just bigger glyphs, e.g. O -> o, C -> c, S -> s, etc. But having uppercase and lowercase that are entirely different is just a pain in the ass, e.g. A -> a, D -> d, G -> g, R -> r, etc.
I do wonder if literacy might be improved if we got rid of uppercase, to little to no detriment elsewhere. (I'm sure someone here can probably cite some data on that!)
Uppercase letters are nice for emphasis. If we did away with one of the two sets, I'd vote for keeping the uppercase ones, since they are easier to write. Mostly just simple straight lines. Especially useful when you're carving letters into something.
I can’t agree more. An author who can’t be bothered to follow well established forms of style, grammar, and punctuation in a technical article somehow deflates whatever else they have to say even if accurate. It’s not as if they are trying to emulate the works of E. E. Cummings, James Joyce, or Arno Schmidt.
y'all are the most boring people on the planet istg. in the entire legion of internet writers making in depth technical content there is what, one who deviates from the conventions and you can't handle it.
This author is a much much better writer than average for free technical content! They have an interesting, unique style! The typography and punctuation are part of that style! It would be tangibly worse if they adhered to the (bad, and also arbitrary!) typographical conventions of raw html just to please a bunch of square-ass nerds. Shit just makes me sad seriously.
> They have an interesting, unique style! The typography and punctuation are part of that style!
Her 'interesting, unique' 'typography and punctuation' makes me want to poke my eyes out instead of focusing on the content itself. You sneer at conventions, but they were developed after centuries of typesetting and typewriting.
Her entire style sheet is an abomination:
- The serif typeface she has used, EB Garamond, is best used for printed text. I am working on a 27" 1080p monitor, and the low DPI makes for very painful reading. Sans-serif humanist typefaces like Verdana (used at Hacker News, thank goodness) or Open Sans are best.
Better still, use the OS typeface in `font-family` (`-apple-system`, Segoe UI, SF Pro, Roboto, Lucida, Ubuntu, Open Sans, etc) instead of dragging in your own. If one so desperately wants a serif typeface, please use something that's nice and blocky, like Droid Serif or IBM Plex Serif.
Incidentally, I have the same qualms with the default typeface in LaTeX: the Computer Modern provided is far too spindly for digital reading. There is an alternative, MLModern[0] that is thicker, but it only has Type 1 glyphs rather than OpenType.
- The colours she has chosen are not as bad, but they are certainly distracting. Can't go wrong with a monochrome dark grey/white, or even straightforward black on white.
- Uppercase/lowercase letters and proper punctuation help break up the prose, and improve the reading experience by differentiating proper nouns, beginnings of sentences, etc etc. Dismissing this by saying 'Oooh, look at me, I'm different for different's sake' is just exasperating.
Incidentally, I'm probably of a similar age group as the writer, but I like to follow grammatical and typographical rules in long prose (I don't care as much in personal texts), because they make sense.
The reason it's important to stick to established standards is precisely to avoid discussions like this. It's a total waste of engineering time, so, in this case, be boring.
When I see writing like that, I think the author is either German[1], uses IM far too much, or is trying very hard to exude the "edgy teenager" stereotype. The entire site is in that style, and combined with the definitely unusual domain name, makes it very hard to take this person seriously.
[1] apparently an "overcompensation" coming from a language in which all nouns are capitalised.
You can't have "everything you wanted to know about terminals" with absolutely no mention of terminfo and (previously) termcap.
You don't necessarily need ncurses to have a portable smart terminal experience, but you do need terminfo.
ncurses magic relies on the lore stored in terminfo, which contains all of the obscure escape sequences and other information about the disparate world of terminals.
They're maintained as a combined package, but you don't need ncurses to drive the screen. You can get the codes yourself if that's what you want to do. You can use tput to make colorful labels in shell scripts.
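A sketch of the kind of colorful-label script meant here, using only standard capability names (`setaf`, `bold`, `sgr0`); the `|| true` fallbacks are an addition of mine so it also degrades cleanly when there's no usable terminfo entry at all:

```shell
# tput asks terminfo for the right codes for $TERM, so this works on any
# described terminal and silently drops the colors everywhere else.
red=$(tput setaf 1 2>/dev/null || true)
green=$(tput setaf 2 2>/dev/null || true)
bold=$(tput bold 2>/dev/null || true)
reset=$(tput sgr0 2>/dev/null || true)

printf '%s[FAIL]%s tests did not pass\n' "$bold$red" "$reset"
printf '%s[ OK ]%s build finished\n' "$green" "$reset"
```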
While much of the world has moved on to the One Terminal running on the One OS, not everyone has.
> being compatible only with ANSI-capable terminals is a feature, not a bug
Driving old hardware terminals (as opposed to terminal emulators) is a fun stunt, not something of actual modern value. Every modern terminal emulator supports ANSI escape sequences. Some features are supported by a subset of terminals, but you can either 1) probe for those features by asking the terminal, which some terminals support, or 2) try them and have graceful degradation if they're not supported, or 3) have configuration options to use them, or 4) don't use them.
For every one case in which terminfo allows you to support some obscure non-ANSI terminal, there are many many more cases where terminfo won't happen to have a definition of the user's terminal (or won't know all the capabilities of the user's terminal) and you'll have less functionality than if you just used ANSI escapes. This is especially true over SSH and similar.
You don't need terminfo until you need to redirect those fancy outputs to a file, or you see a lot of weird [M; in your CI logs, or your ssh session does not display full 256 colors, or you question why we don't have true images in our terminals when we've had bitmap escape codes for decades now[1]. No terminal was created equal and never will be. terminfo is as much about the future as it is about the past.
Color ANSI codes might be everywhere today, but that's no reason to disparage terminfo; there are a ton of other features that need checking and are not universal.
> until you need to redirect those fancy outputs to a file, or you see a lot of weird [M; in your CI logs
The application should be checking isatty by default, unless the user overrides that.
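A minimal sketch of that isatty check in plain shell (`[ -t 1 ]` is the POSIX test for "is stdout a terminal"):

```shell
# Emit escapes only when stdout is actually a terminal; redirected output
# (a file, a pipe, a CI log) gets plain text, no stray "[31m" garbage.
if [ -t 1 ]; then      # is file descriptor 1 (stdout) a tty?
  red=$(printf '\033[31m'); reset=$(printf '\033[0m')
else
  red=''; reset=''
fi
printf '%serror:%s disk almost full\n' "$red" "$reset"
```

The usual user override is a `--color=always`-style flag, as in ls and grep.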
> or your ssh session does not display full 256 colors
I've most often seen this kind of thing because the remote end doesn't have the right terminfo entries for the local terminal.
> or you question why we don't have true images in our terminals when we have bitmap escape codes for decades now
Sixel, specifically, doesn't seem to have much support other than in xterm itself (as opposed to otherwise-xterm-compatible terminals). And other bitmap extensions are something that many terminals support autodiscovery for.
> No terminal was created equal and never will. terminfo is about future as it is about the past.
Part of the problem is that terminfo doesn't have much info about the future, as opposed to the past.
For going beyond terminfo, I think autodiscovery protocols would be a better future path.
Plus a lot of interesting things aren't mentioned in terminfo - bracketed paste, cursor shaping, synchronized output, ...
And even truecolor was added to it about 10 years after terminals started gaining support.
And many terminals just claim to be "xterm-256color".
>2) try them and have graceful degradation if they're not supported
Note: There are many cases where degradation isn't graceful. Many terminals on Windows currently spew garbage on your screen if you send bracketed paste (alacritty, for instance).
>1) probe for those features by asking the terminal, which some terminals support
This requires waiting for a reply, which often isn't useful. E.g. if you want synchronized output, you want it from the very first paint (because that's when the terminal is most likely to still be resized, e.g. by a tiling window manager). So you would have to delay your startup until you've either gotten a reply or "enough" time has passed that you believe it's not supported.
Frankly, this is all a big mess and terminfo isn't very helpful, but we don't have a good alternative either.
> being compatible only with ANSI-capable terminals is a feature, not a bug, go the fuck away. terminfo is a fucking joke. nobody needs to target obscure dumb terminals (or smart terminals, for that matter) from 1983 anymore.
I don't have the expertise to opine on this. But AFAICT most terminals that people still use[1] will self-report as xterm or xterm-256color, so they'll only use the xterm escape sequences anyway.
[1] kitty is a notable exception. But its terminfo entry is rarely preinstalled, and manually adding it to all the remote servers that you access is a PITA.
There is no need for the complexity of terminfo or termcap, because we have an ECMA/ANSI/ISO standard for terminal control. We have had it since before terminfo. ECMA-48 dates back to 1976. People have had 46 years to upgrade to standard-conforming terminals.
Terminfo is not a de jure standard, only de facto. It is not in POSIX.
POSIX specifies a tput command with exactly three operations: clear, init and reset.
There is a Rationale section giving reasons for why terminfo wasn't standardized.
Bottom line: ECMA/ANSI is the real standard that can be used to drive a conforming terminal from any OS, without any special library other than basic I/O.
Also not just ECMA. The standard was ratified by ANSI X3.64-1979, ISO/IEC 6429, and FIPS-86 in addition to ECMA-48. The whole civilized world wants these codes (e.g. \033[A for UP) to be a standard. Because it makes console development blissful when you can ignore all the accidental complexity of curses and terminfo when you can just assume the terminal behaves sanely and isn't the leftover byproduct of bygone commercial rivalries.
I'm sure this is very interesting but I couldn't make it past the first sentence. I'm sure there's a term for someone who writes like this, but all I could think was "redditspeak," or otherwise stale tryhard wacky. Forcibly inserting so much 'character' into your sentences that your syntax implodes.
Good candidates for relevant terms may include "stream of consciousness" as a narrative style [0], with a writing tone that is "highly informal" and "colloquial" with "vulgarity" (from the inclusion of "fuck" and other profane words) throughout [1].
The author additionally uses the writing technique of "enallage", defined as a "slight deliberate grammatical mistake that makes a sentence stand out," [2] from the article excerpt: "in other words, we need to use termios. the ugly side of termios."
In other words: the author writes in a highly informal, colloquial tone with a stream of consciousness narrative style. The article is notable for its vulgarity and usage of enallage to achieve its exceptionally high degree of informality. The author is reminiscent of James Joyce as an experienced C programmer.
I’m not sure if there’s a term yet. I like to call the general vibe (colors, unnecessary font variations, “I’m gods gift to humanity and all you people and your years of hard work are idiotic and beneath me, even though I have yet to do anything life changing” attitude): programmer wishing they were postmodern artist.
Off-topic, but it would be nice if HN could properly parse the Punycode encoding of internationalized domain names, so rather than the URL appearing as xn--rpa.cc, it appeared properly as ʞ.cc
there are tools like dnscrypt-proxy or nfq which allow you to sinkhole any punycode dns domains (before DoH became a thing) and I'm glad that some browsers still show this as xn-- as it's now the only defense that stands between users clicking on hxxps://аррӏе.com instead of https://apple.com
Or just allow only using a single script. A domain in all Cyrillic: great! Mixing Latin & Cyrillic: nono.
In practice, browsers already check for this and display the "raw" punycode if they detect mixed script usage, but I wish such domains would not be registrable at all. These checks are somewhat complex and difficult, and easy to get wrong.
This would still let some homographs through. In particular, Cyrillic has a lot of characters which are confusingly similar to, or even indistinguishable from, Latin characters (e.g. "авсекморѕтѵху").
Right; you can construct "арр.com" or "аррꙆе.com" from that limited subset.
Those should be valid domains though IMHO; maybe show the used script in the address bar? I think users might be confused by that though and/or just ignore it, so idk. Then again, displaying "xn--80a6aa.com" and "xn--80ak6aa9058r.com" is pretty confusing too.
> being compatible only with ANSI-capable terminals is a feature, not a bug, go the fuck away. terminfo is a fucking joke. nobody needs to target obscure dumb terminals (or smart terminals, for that matter) from 1983 anymore.
No, terminal emulators differ all over the place, especially for new features. Do you want to lock yourself in and only use the ancient ANSI codes? In, like you say, “THE TWENTY FIRST FUCKING CENTURY”? Great, you do that.
What’s that you say? You want to use modern features on terminals which support it? Do you then write “if getenv("TERM") == "spiffy-terminal"”? Congratulations, you’ve just begun implementing your very own ad hoc, informally-specified and bug-ridden terminal UI library.
> my hope is that this tutorial will curtail some of the more egregiously trivial uses of ncurses and provide others with the knowledge needed to implement a 21st-century terminal UI library
If you don’t like curses specifically, then don’t use it, but there are now myriads of alternatives.