
The backwards compatibility though is one of the major features of Windows as an OS. The fact that a company can still load some software made 20 years ago, developed by a company that is no longer in business, is pretty cool (and I've worked at such places using ancient software on some Windows box; sometimes there's no time or money for alternatives)


If you look at more recent Windows APIs, I'm really thankful that the traditional Win32 APIs still work. On average the older APIs are much nicer to work with.


> On average the older APIs are much nicer to work with

IMO this is because they are better written, by people who had deeper understanding of the entire OS picture and cared more about writing performant and maintainable code.


Well-illustrated in the article “How Microsoft Lost the API War”[0]:

    The Raymond Chen Camp believes in making things easy for developers by making it easy to write once and run anywhere (well, on any Windows box). 
    The MSDN Magazine Camp believes in making things easy for developers by giving them really powerful chunks of code which they can leverage, if they are willing to pay the price of incredibly complicated deployment and installation headaches, not to mention the huge learning curve. 
    The Raymond Chen camp is all about consolidation. Please, don’t make things any worse, let’s just keep making what we already have still work. 
    The MSDN Magazine Camp needs to keep churning out new gigantic pieces of technology that nobody can keep up with.  
[0] https://www.joelonsoftware.com/2004/06/13/how-microsoft-lost...


> making things easy for developers by giving them really powerful chunks of code which they can leverage, if they are willing to pay the price of incredibly complicated deployment and installation headaches, not to mention the huge learning curve.

I feel the same way about Spring development for Java.

Also reminds me of:

https://www.infoq.com/presentations/Simple-Made-Easy/


Nicer to work with?

I can't think of any worse API in the entire world?


The 'win32' API calls are decent relative to each other, if you understand their madness. For every open call there is usually a matching close call plus worker calls: open a handle, use the handle with its helper calls, close the handle. You must understand this 3-step pattern for it all to work. There are a few exceptions, but those are usually cases where the system has given you a handle and you just deal with it through the helper calls; there the system usually handles the open/close part.

Now you get into the COM/.NET/UWP stuff and the API gets a bit more fuzzy on that pattern. The win32 API is fairly consistent in its madness. So are the other API stacks they have come up with. But only in their own mad world.

Also, on the documentation side, the older win32 docs are actually usually decently written and self-consistent. The newer stuff, not so much.

If you have the displeasure of mixing APIs you are in for a rough ride as all of their calling semantics are different.


There are some higher level COM APIs which are not exactly great, but the core Win32 DLL APIs (kernel32, user32, gdi32) are quite good, also the DirectX APIs after ca 2002 (e.g. since D3D9) - because even though the DirectX APIs are built on top of COM, they are designed in a somewhat sane way (similar to how there are 'sane' and 'messy' C++ APIs).

Especially UWP and its successors (I think it's called WinRT now?) are objectively terrible.


I've had to work with api functions like https://learn.microsoft.com/en-us/windows/win32/api/winuser/... and friends. It was by far the most unpleasant api I've ever worked with.


I think that particular pattern is a perfectly reasonable way to let the user ingest an arbitrarily long list of objects without having to do any preallocations -- or indeed, any allocations at all.


Because allocating well under a hundred handles is the biggest problem we have.

WinAPI is awful to work with for reasons above anyone’s comprehension. It’s just legacy riding on legacy, with initial legacy made by someone 50% following stupid patterns from the previous 8/16 bit decade and 50% high on mushrooms. The first thing you do with WinAPI is abstracting it tf away from your face.


Yes, but the inversion of control is unpleasant to deal with—compare Find{First,Next}File which don’t require that.


Which is a pattern that also exists in the Win32 API, for example in the handle = CreateToolhelp32Snapshot(), Thread32First(handle, out), while Thread32Next(handle, out) API for iterating over a process's threads.

I also find EnumChildWindows pretty wacky. It's not too bad to use, but it's a weird pattern and a pattern that Windows has also moved away from since XP.

https://learn.microsoft.com/en-us/windows/win32/toolhelp/tra...


The various WinRT APIs are even worse. At least Win32 is "battle tested"


X Windows and Motif, for example.


So, you don't use the newer Windows APIs?


Yeah, I like the smell of cbSize in the RegisterClassExA. Smells like… WNDCLASSEXA.lpfnWndProc.

Nothing can beat WinAPI in nicety to work with, just look at this monstrosity:

  gtk_window_new(GTK_WINDOW_TOPLEVEL);


On the other hand GTK has been rewritten 3 times and each new version deprecates a bunch of stuff, making it an absolute nightmare for apps to migrate.


Why can’t they stick an app to a specific gtk version? They were fine with what they started it in, what is the reason to migrate?

If the answer is ver++ anxiety, the problem is self-imposed (still better than using winapi).


Needing new features like wayland support? Better DE integration? Distros removing old versions? What an odd question.


What exactly does “wayland support” do for an existing X11 app? How did they manage to ship either the app or Wayland without mutual “support” before?

What’s DE integration apart from tray and notifications? Why does an app need any DE integration beyond a tray icon?

These questions are valid, not odd.

Distros removing versions is a distro’s problem. Most gtk versions are installable on popular distros, afaiu.

Anyway, I find most of these points are moot, because they mirror winapi. Gdi -> directx, fonts scaling, need for msvcrts and so on. Looks like an argument for the sake of argument. You can’t make a modern app with winapi either, it will be a blurry non-integrated win2k window like device manager or advanced properties. The difference is you can’t migrate them at all, even MS can not.


All untrue from my perspective so I'm not sure which parallel universes we live in.


That and it's 30+ years (NT was released in 1993). Backwards compatibility is certainly one of the greatest business value Microsoft provides to its customers.


If you include the ability of 32-bit versions of Windows to run 16-bit Windows and DOS applications with NTVDM, it is more like 40+ years.

https://en.wikipedia.org/wiki/Virtual_DOS_machine

(Math on the 40 years: windows 1.0 was released in 1985, the last consumer version of Windows 10 (which is the last Windows NT version to support 32-bit install and thus NTVDM) goes out of support in 2025. DOS was first released in 1981, more than 40 years ago. I don’t know when it was released, but I’ve used a pretty old 16-bit DOS app on Windows 10: a C compiler for the Intel 80186)


> I’ve used a pretty old 16-bit DOS app on Windows 10: a C compiler for the Intel 80186

It’s amazing that stuff still runs on Windows 10. I’m guessing Windows 10 has a VM layer both for 32-bit and 16-bit Windows + DOS apps?


Windows 10 only does 16-bit DOS and Windows apps on the 32-bit version of Windows 10, so it only has a VM layer for those 16-bit apps. (On x86, NTVDM uses the processor's virtual 8086 mode to do its thing; that doesn't exist in 64-bit mode on x86-64 and MS didn't build an emulator for x86-64 like they did for some other architectures back in the NT on Alpha/PowerPC era, so no DOS or 16-bit Windows apps on 64-bit Windows at all.)



True. I just assumed that 16-bit support got dropped since Windows 11 was 64-bit only.


Microsoft decided not to type "make" for NTVDM on 64-bit versions of Windows (I would argue arbitrarily). It has been unofficially built for 64-bit versions of Windows as a proof-of-concept: https://github.com/leecher1337/ntvdmx64


That’s okay, and if people want to test their specific use case on that and use it then great.

It’s a pretty different amount of effort to Microsoft having to do a full 16 bit regression suite and make everything work and then support it for the fewer and fewer customers using it. And you can run a 32 bit windows in a VM pretty easily if you really want to.


Or you can run 16-bit Windows 3.1 in DOSBox.


Sure, but again that’s on you to test and support.


> Microsoft decided not to type "make" for NTVDM on 64-bit versions of Windows (I would argue arbitrarily).

I recently discovered that Windows 6.2 (more commonly known as Windows 8) added an export to Kernel32.dll called NtVdm64CreateProcessInternalW.

https://www.geoffchappell.com/studies/windows/win32/kernel32...

Not sure exactly what it does (other than obviously being some variation on process creation), but the existence of a function whose name starts with NtVdm64 suggests to me that maybe Microsoft actually did have some plan to offer a 64-bit NTVDM, but only abandoned it after they’d already implemented this function.


But only to a degree, right? Only the last two decades of software is what the OS ideally needs to support, beyond that you can just use emulators.


Software is written against APIs, not years, so the problem with this sort of thinking is that software written, say, 10 years ago might still be using APIs from more than 20 years ago. If you decide to break/remove/whatever the more-than-20-year-old APIs, you not only break the more-than-20-year-old software but also the 10-year-old software that used those APIs, as well as any other software, older or newer, that did the same.

(also i'm using "API" for convenience here, replace it with anything that can affect backwards compatibility)

EDIT: simple example in practice: WinExec was deprecated when Windows switched from 16bit to 32bit several decades ago, yet programs are still using it to this day.


Pretty much the only 16-bit software that people commonly encounter is an old setup program.

For a very long time those were all 16-bit because they didn't need the address space and they were typically smaller when compiled. This means that a lot of 32-bit software from the late 90s that would otherwise work fine is locked inside a 16-bit InstallShield box.


> Pretty much the only 16-bit software that people commonly encounter is an old setup program.

I know quite a lot of people who are still quite fond of some old 16-bit Windows games which - for this "bitness reason" - don't work on modern 64 bit versions of Windows anymore. People who grew up with these Windows versions are quite nostalgic about applications/games from "their" time, and still use/play them (similar to how C64/Amiga/Atari fans are about "their" system).


Maybe, but your app could also be an interface to some super expensive scientific/industrial equipment that does weird IO or something.


People tend to forget that it already is 2024.


Short of driver troubles at the jump from Win 9x to 2k/XP, and the shedding of Win16 compatibility layers at the time of release of Win XP x64, backwards compatibility had always been baked into Windows. I don’t know if there was any loss of compatibility during the MS-DOS days either.

It’s just expected at this point.


On DOS: if you borrow ReactOS's NTVDM under XP/2003, and maybe Vista/7 under 32-bit (I don't know about 64-bit binaries), you can run DOS games in a much better way than Windows' counterpart.



I think they recently improved NTVDM a lot.


Not long ago, a link was posted here to a job advert for the German railway looking for a Win 3.11 specialist.

As I see it, the problem is the laziness/cheapness of companies when it comes to upgrades, and vendors' reluctance to get rid of dead stuff for fear of losing business.

APIs could be deprecated/updated at set intervals, like Current -2/-3 versions back and be done with it.


Lots of hardware is used for multiple decades, but has software that is built once and doesn't get continuous updates.

That isn't necessarily laziness, it's a mindset thing. Traditional hardware companies are used to a mindset where they design something once, make and sell it for a decade, and the customer will replace it after 20 years of use. They have customer support for those 30 years, but software is treated as part of that design process.

That makes a modern OS that can support the APIs of 30 year old software (so 40 year old APIs) valuable to businesses. If you only want to support 3 versions that's valid, but you will lose those customers to a competitor who has better backwards compatibility


  > The backwards compatibility though is one of the major features of windows as an OS.
It is. That's even been stated by MSFT leadership time and time again.

But at what point does that become a liability?

I'm arguing that point was about 15-20 years ago.


There is another very active article on HN today about the launch of the new Apple iPhone 16 models.

The top discussion thread on that post is about “my old iPhone $version is good enough, why would I upgrade”.

It’s funny, if you ask tech people, a lot fall into the “I have to have the latest and greatest” but also a lot fall into the “I’ll upgrade when they pry the rusting hardware from my cold dead hands”.

For Microsoft, the driver for backwards compatibility is economic: Microsoft wants people to buy new Windows, but in order to do that, they have to (1) convince customers that all their existing stuff is going to continue to work, and (2) convince developers that they don’t have to rewrite (or even recompile) all their stuff whenever there’s a new version of Windows.

Objectively, it seems like Microsoft made the right decision, based on revenue over the decades.

Full disclosure: I worked for Microsoft for 17 years, mostly in and around Windows, but left over a decade ago.


> It’s funny, if you ask tech people, a lot fall into the “I have to have the latest and greatest” but also a lot fall into the “I’ll upgrade when they pry the rusting hardware from my cold dead hands”.

Not concerning the iPhone, but in general tech people tend to be very vocal about not updating when they feel that the new product introduces some new spying features over the old one, or when they feel that the new product worsens what they love about the existing product (there, their taste is often very different from the "typical customer").


Great related thread from yesterday: https://news.ycombinator.com/item?id=41492251


> It’s funny, if you ask tech people, a lot fall into the “I have to have the latest and greatest”

This is almost never a technical decision, but a 'showing off' decision IMO.


It's a "fun toys" decision.


Often one and the same.


It is not a liability because most of what you are talking about is just compatibility not backwards compatibility. What makes an operating system Windows? Fundamentally it is something that runs Windows apps. Windows apps existed 15-20 years ago as much as they exist today. If you make an OS that doesn't run Windows apps then it just isn't Windows anymore.

The little weird things that exist due to backwards compatibility really don't matter. They're not harming anything.


New frameworks have vulnerabilities. Old OS flavors have vulnerabilities. OpenSSh keeps making the news for vulnerabilities.

I’d argue that software is never finished, only abandoned, and I absolutely did not generate that quote.

Stop. Just stop.


>OpenSSH

Yes, just stop... with the bullshit. OpenBSD didn't make vulnerabilities. Foreign Linux distros (OpenSSH comes from OpenBSD, and they release a portable tgz too) adding non-core features and libraries did.


It is a great achievement. But the question is: is it really relevant? Couldn't they move the compatibility for large parts into a VM or other independent subsystem?

Of course even that isn't trivial, as one wants to share filesystem access (while I can imagine some overlay limiting access), might need COM and access to devices ... but I would assume they could push that a lot more actively. If they decided which GUI framework to focus on.


> Couldn't they move the compatibility for larger parts to a VM or other independent Subsystem?

A huge amount of the compatibility stuff is already moved out into separate code that isn't loaded unless needed.

The problem too, though, is users don't want independent subsystems -- they want their OS to operate as a singular environment. Raymond Chen has mentioned this a few times on his blog when this sort of thing comes up.

Backwards compatibility also really isn't the issue that people seem to think it is.


Independent subsystems need not be independent subsystems that the user must manage manually.

The k8s / containers world on Linux ... approaches ... this. Right now that's still somewhat manual, but the idea that a given application might fire off with the environment it needs without layering the rest of the system with those compatibility requirements, and also, incidentally, sandboxing those apps from the rest of the system (specific interactions excepted) would permit both forward advance and backwards compatibility.

A friend working at a virtualisation start-up back in the aughts told of one of the founders who'd worked for the guy who'd created BCPL, the programming language which preceded B, and later C. Turns out that when automotive engineers were starting to look into automated automobile controls, in the 1970s, C was considered too heavy-weight, and the systems were implemented in BCPL. Some forty years later, the systems were still running, in BCPL, over multiple levels of emulation (at least two, possibly more, as I heard it). And, of course, faster than in the original bare-metal implementations.

Emulation/virtualisation is actually a pretty good compatibility solution.


Users don't want sandboxing! It's frustrating enough on iOS and Android. They want to be able to cut and paste, have all their files in one place, open files in multiple applications at the same time, have plugins, etc.

Having compatibility requirements is almost the definition of an operating system.

If you bundle every application with basically the entire OS needed to run them then what exactly have you created?


There are a relatively limited set of high-value target platforms: MS DOS (still in some use), Win95, WinNT and successor versions. Perhaps additionally a few Linux or BSD variants.

Note that it's possible to share some of that infrastructure by various mechanisms (e.g., union mounts, presumably read-only), so that even where you want apps sandboxed from one another, they can share OS-level resources (kernel, drivers, libraries).

At a user level, sandboxing presumes some shared file space, as in "My Files", or shared download, or other spaces.

Drag-and-drop through the GUI itself would tend to be independent of file-based access, I'd hope.


What is gained by this? What would you get by virtualizing a WinAPI environment for app in Windows? (MS DOS compatibility is already gone from Windows). You get a whole bunch of indirection and solve a problem that doesn't exist.


Obvious obvious advantage is obviously obvious: the ability to run software which is either obsolete, incompatible with your present system, or simply not trusted.

In my own case, I'd find benefits to spinning up, say, qemu running FreeDOS, WinNT, or various Unixen. Total overhead is low, and I get access to ancient software or data formats. Most offer shared data access through drive mapping, networking, Samba shares, etc.

That's not what I'd suggested above as an integrated solution, but could easily become part of the foundation for something along those lines. Again, Kubernetes or other jail-based solutions would work where you need a different flavour that's compatible with your host OS kernel. Where different kernels or host architectures are needed, you'll want more comprehensive virtualisation.


As long as you ensure compatibility then software doesn't have to be obsolete or incompatible. The Windows API is so stable that it's the most stable API available for Linux.

I can already run VMs and that seems like a more total solution. To have an integrated solution you would need cooperation that you can't get from obsolete systems. I can run Windows XP in a VM. But if I want to run a virtualized Windows XP application seamlessly integrated into my desktop then I'm going need a Windows XP that is built to do that.


Compatibility comes with costs:

- Fundamental prerequisites cannot be changed or abandoned, even where they impose limitations on the overall platform.

- System complexity increases, as multiple fixed points must be maintained, regressions checked, and where those points introduce security issues, inevitable weaknesses entailed.

- Running software which presumed non-networked hosts, or a far friendlier network, tends to play poorly in today's world. Well over a decade ago, a co-worker who'd spun up a Windows VM to run Internet Explorer for some corporate intranet site or another noted that the VM was corrupted within the five minutes or so it was live within the corporate LAN. At least it was a VM (and from a static disk image). Jails and VMs isolate such components and tune exposure amongst them.

What you and I can, will, and do actually do, which is to spin up VMs as we need them for specific tasks, is viable for a minuscule set of people, most of whom lack fundamental literacy let alone advanced technical computer competency.

The reason for making such capabilities automated within the host OS is so that those people can have access to the software, systems, and/or data they need, without needing to think about, or even be aware of how or that it's being implemented.

I've commented and posted about the competency of the average person as regards computers and literacy. It's much lower than you're likely to have realised:

The tyranny of the minimum viable user: <https://web.archive.org/web/20240000000000*/https://old.redd...>

Adult literacy in the United States: <https://nces.ed.gov/pubs2019/2019179/index.asp> <https://news.ycombinator.com/item?id=29734146>

And no, I'm not deriding those who don't know. I've come to accept them as part of the technological landscape. A part I really wish weren't so inept, but wishing won't change it. At the same time, the MVU imposes costs on the small, though highly capable, set of much more adept technologists.


Generally speaking, the waste is only hard disk space. If no one ever loads some old DLL, it just sits there.


Nobody loads it, but the attacker. Either via a specially crafted program or via some COM service invoked from a Word document or something.


By moving the Win32 API onto Windows NT kernel, isn't that essentially what Microsoft did?


I think that VM software like Parallels has shown us that we are just now at the point where VMs can handle it all and feel native. Certainly NT could use a rewrite to eliminate all the legacy stuff… but instead they focus on Copilot and nagging me not to leave windows edge internet explorer


My question is why can't M$ ship the old OS running as a VM, and free themselves from backward compatibility in the newer OS.


Users will want to use applications that require features of the earlier OS version, and newer ones that require newer features. They don't want to have to switch to using a VM because old apps would only run on that VM.


Putting apps from the VM on the primary desktop is something they have already done on WSLg. Launching Linux and X server is all taken care of when you click the app shortcut. Similar to the parent’s ask, WSL2/WSLg is a lightweight VM running Linux.


In many ways the old API layers are sandboxed much like a VM. The main problems are things like device drivers, software that wants direct access to external interfaces, and software that accesses undocumented APIs or implementation details of Windows. MS goes to huge lengths to keep trash like that still working with tricks like application specific shims.


They did that with Windows 7. Win 7 had an optional feature called "Windows XP Mode" that was XP running inside of a normal VM.

https://arstechnica.com/information-technology/2010/01/windo...


Backwards compatibility isn't their biggest problem to begin with, so that wouldn't be worth it. In effect they already did break it: the new Windows APIs (WinRT/UWP) are very different to Win32 but now people target cross platform runtimes like the browser, JVM, Flutter, etc. So it doesn't really matter that they broke backwards compatibility. The new tech isn't competitive.


If 20 years is so ancient, why did they go by so fast....


Bad news. NT wasn't 20 years ago. It was 31 years ago.


It's possible that wine is more "backwards compatible" than the latest version of Windows though.

And while wine doesn't run everything, at least it doesn't circumvent security measures put in place by the OS...


I've had more luck running games from 97-00 under wine than on modern Windows.



