That certainly wasn't my impression from the mailing list. There are some real challenges with incorporating colour management into Wayland's model, but they also seem quite solvable (and the mailing list discussion seemed to end up with a solution).
Meanwhile the discussion on pixls seems to be a bunch of people complaining either that it's not solved yet, or that, even once it is solved, it won't work the same way it does on X.
The only remaining point of contention on the mailing list came down to colour calibration: one party wanted an API that allowed setting a temporary display-wide colour profile for the purpose of calibration, as this is how it has worked before. The response from a Wayland developer was that there should be a dedicated calibration extension that would allow setting a linear colour space for a specific region (e.g. one matching the display). One of the reasons given for this is so that if the calibration application crashes, it doesn't leave the display in a bad state.
My understanding of the full solution is that applications would be able to specify a colour space for each buffer, and the compositor would do the appropriate transformation if that colour space does not match the actual display. Furthermore, there would be events given to the application when a window leaves/enters a display so that it can (if it wishes) choose to handle the colour space transformation itself by setting the buffer to use the same colour space as the display it is currently on. This means you get approximately correct colours when moving a window between displays, even before the application has had time to repaint itself, whilst also allowing the compositor flexibility like reusing the same output on multiple displays, displaying a preview of a window, etc.
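To make that concrete, here's a rough sketch of the fast-path logic in Python (the names and the conversion table are mine, not the actual Wayland colour-management API):

```python
# Conceptual sketch only -- not the real Wayland colour-management API.
# Each buffer carries a colour-space tag; the compositor converts only
# when the tag differs from the output's colour space.
import numpy as np

# Hypothetical 3x3 linear-RGB conversion matrices keyed by
# (source, destination); identity is a placeholder for real values.
CONVERT = {("srgb", "display-p3"): np.eye(3)}

def composite_pixel(px, buf_space, out_space):
    if buf_space == out_space:
        return px                               # fast path: no conversion
    return CONVERT[(buf_space, out_space)] @ px

print(composite_pixel(np.array([0.5, 0.2, 0.1]), "srgb", "display-p3"))
```

An application that repaints with a buffer tagged in the display's own colour space hits the fast path, which is exactly why the enter/leave events are useful.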
I'm curious about this as well, but wouldn't be pointing fingers at the Wayland developers. See, for example, one of the most recent comments in the discussion linked to in the parent post, by Nilvus, one of the (wise and knowledgeable) darktable developers:
> The only thing is that color management for Wayland is progressing and far from being completely done. Even if work is quite slow, things seems to go in good direction. For correct and complete Wayland color management, we just have to wait again.
I can't find any reference to color management in the Blender meta-issue at https://developer.blender.org/T76428. For X11, I believe applications would have to manually determine which display holds the current window, then query colord or an X atom to get that display's profile. The application would then do the colorspace conversion itself. Does querying colord and converting in-application work for Wayland until Wayland becomes colorspace-aware? Or are there more wrinkles?
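For reference, the X11 flow I'm describing looks roughly like this (a hedged sketch using python-xlib, untested; by convention the raw ICC blob for the first monitor sits in the root window's `_ICC_PROFILE` property):

```python
# Sketch of the X11 path described above (python-xlib; untested).
# The application reads the ICC blob and then does the conversion
# itself, e.g. via a CMS library such as lcms2.
from Xlib import display

d = display.Display()
root = d.screen().root
atom = d.intern_atom("_ICC_PROFILE")
prop = root.get_full_property(atom, 0)  # 0 == AnyPropertyType
if prop is not None:
    icc_blob = bytes(prop.value)
    print(f"display profile: {len(icc_blob)} bytes")
else:
    print("no _ICC_PROFILE set on the root window")
```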
This is not a great summary. The discussion is full of people saying Wayland devs are clueless while failing to agree on any design among themselves in the same messages, and failing to actually present their case to upstream. It would probably go faster with less snark and us-vs-them.
I don't know much about the whole space, but from what I've read so far about Wayland, it starts to feel a lot like yet another "perpetual next-gen" tech, such as IPv6, fuel cells, the Semantic Web or XHTML2 (before that one was officially declared dead at least).
Like, according to the linked article, the standard has been out there since 2008 - so the adoption period is already 14 years! And people are still haggling over basic stuff like color management, mouse cursors and window decorations?
What exactly is the envisioned timeframe for Wayland to replace X11 as the dominant windowing system?
If the standard has been promoted as the obvious next step for linux desktop environments for 14 years but still hasn't actually caught on, are we sure it really is the right direction to go?
> Like, according to the linked article, the standard has been out there since 2008 - so the adoption period is already 14 years!
2008 is when the project started. In 2012 it was sorta-usable as a new weird experimental thing, but it was absolutely not even trying for adoption yet.
The first distro to make Wayland the default was Fedora 25 in 2016.
> 2008 is when the project started. In 2012 it was sorta-usable as a new weird experimental thing
Four years to get out an experimental sorta-usable thing seems like a huge amount of time to me. Even more so when you consider that many basic things (drag and drop, screenshots, ...) came years later.
What? 4 years to get from "I think I'm going to rethink the entire GNU/Linux graphics stack" to "usable" is amazing, especially if you consider the GPU-drivers work that was involved.
And to be clear: a big part of only being "sorta" usable wasn't anything to do with Wayland itself; it's that so little other software had been ported to it.
I cannot say for sure that it is true, but I read (about a year ago) that about 90% of Linux users are already on Wayland.
I tend to believe it because I've seen news reports from about three years ago saying that the X server is no longer being maintained.
The change from Xserver to an arrangement using Wayland is transparent to the Linux user (and Linux admin) and I heard that most of the major distros made the transition a few years ago. A corollary of that, if it is true, is that some of the many Linux users appearing on this site to attack Wayland are in fact (unknown to themselves) using Wayland.
Specifically, the "display server" (terminology? I mean the software that talks to the graphics driver and the graphics hardware) on most distros these days uses the Wayland protocol to talk to apps. An app that has not been modified to use the Wayland protocol to talk to the display server is automatically given a "connection" (terminology?) to XWayland, whose job is to translate the X protocol to the Wayland protocol.
I think `printenv XDG_SESSION_TYPE` will tell you whether you are running Wayland or the deprecated Xserver.
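The same check works from inside a program, too; this just reads the same variable:

```python
# Reads the same variable as `printenv XDG_SESSION_TYPE`.
import os
print(os.environ.get("XDG_SESSION_TYPE", "unknown"))  # 'wayland' or 'x11'
```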
The OP begins, "Recently we have been working on native Wayland support on Linux." What that means is that the Blender app no longer needs XWayland: it can talk directly to the display server (using the Wayland protocol). There are certain advantages to that: one is that you can configure all the UI elements on your screen to be scaled by an arbitrary factor without everything getting blurry.
I'm using the latest MacOS to make this comment, but for over a year until a few weeks ago, I was using Linux for all my computing needs, and I went out of my way to run only apps that used the Wayland protocol to talk to the display server (because of the aforementioned ability to scale the UI without blurriness). Chrome had to be started with certain flags for it to use the Wayland protocol. To induce Emacs to speak Wayland, I had to use a special branch of the git source repo, called feature/pgtk.
I agree a ton of people are using it without knowing, ever since it became the default option on most distros. But sharing windows in zoom is a must-have for me, so I'm still running good old xorg (and without any trouble, I might add)
It sounds like a bug in Zoom, if anything. Window sharing does work with web tech (e.g. using Discord in a browser), so the problem is entirely with Zoom bundling a browser engine that is older or not built with the necessary flags.
I'm not primarily a graphics guy so I can't assign fault -- all I know is that it's the single issue that has kept me from adopting Wayland back when the defaults switched over.
"Høgsberg had the inspiration for Wayland while driving through the town of Wayland in Massachusetts, which gave the display server its name. Weston, the Wayland compositor, is named after a neighbouring town in the same state"
I think the story is that an X developer was driving from Weston to Wayland when he had a lightbulb moment concerning how to radically simplify the graphics stack.
I grew up in the Boston area, but I didn't make this connection! Wakefield borders where I grew up, but I don't recall the others being super nearby (maybe they're south of Boston, since I'm not as familiar with that area, or maybe they're further west and I have an overly narrow idea of what "near Boston" means due to being so close)
I use Manjaro with Gnome on Wayland. No issues there with Intellij. Works fine. I think the issue might be with Sway?
In general I'm not having a lot of issues with Wayland. At least not more than usual amount of "just tweak this file over there and it's fine" stuff you have with Linux in general. There's just an endless amount of libraries that need to be aligned with each other. It's one of the reasons I use an arch based distribution so that at least I'm not dealing with stuff that was fixed months/years ago.
The problem is that JetBrains IDEs run under XWayland, which means that under the hood they still speak X11. On 4K monitors you probably want to scale your GUI, and that's where the problems arise: the app renders at 1920x1080 and gets stretched to the 4K monitor, so it looks completely blurry.
The largest issue is that it looks extremely blurry on 4K: not like the usual “1080p upscaled”, more like 720p upscaled and not aligned with the pixels at all.
It works perfectly fine on 1080p, which is where I spent the majority of my time on Linux until recently.
Yes, it is. The last news I heard is that they will implement the protocol themselves. Seems weird to me. They already support GTK 3, why not use that? The biggest conceptual roadblock is `java.awt.Robot`, because Wayland restricts moving pointers and taking screenshots.
This env variable is already in my /etc/environment. It makes the JetBrains products at least usable, but my problem is the blurriness on my 4K monitors.
Scaling is the whole problem; that is why it is so blurry. I can't read anything on my 4K monitor without scaling enabled, and scaling works far better on Wayland than on X.org, which is why I switched. And because scaling on X.org doesn't work well, JetBrains products are blurry. I had kind of hoped you had a secret weapon that could work around it.
> Gnome-shell has decided not to support server side decorations
I still can’t understand this at all. It’s such obvious and complete folly. And it’s not an isolated decision; there are quite a few related places where it is very apparent that GNOME has co-opted GTK and been actively sabotaging it for anyone that’s not GNOME, and it’s been heavily poisoning the Linux desktop space.
(Note that I’m not railing against client-side decorations, though the current free-for-all with no way of signalling even the simplest of conventions like the expected location of window controls (even apart from their appearance) is quite insane; I’m complaining about not supporting server-side decorations, since they are fundamental to some window managers (e.g. Sway) and completely sensible for many apps. The existence of “fallback client-side decorations” and the loose requirement that every app implement this thing that it often has no need of and can’t do as well as the window manager is at least moderately absurd.)
Redhat-steered projects act an awful lot like they're trying to make it really hard to compete with Redhat unless you pick the same projects they do, leaving you at their mercy unless you have a lot of resources to maintain forks against a maybe-hostile upstream. It's been like that at least since the whole systemd thing. Maybe that's not what they're doing, but it sure looks like it.
As a user I find this frustrating because I actually wouldn't mind some consolidation in Linux, especially in the GUI layers, but tend to hate RedHat's taste in these matters.
It is essentially the same "embrace, extend, extinguish" policy that Microsoft used to use. The fact that the underlying code is open source apparently doesn't matter too much. If someone else did come up with a compelling alternative init system, just make systemd a soft dependency of your desktop environment, and make sure that patching it out requires a lot of active maintenance and ongoing work.
I never noticed them pulling the plug on systemd for the extinguish. When did they do that?
And how exactly was any of it an embrace? Redhat was a Linux distribution provider from the start; they didn't take something over in order to destroy it. They just created systemd and provided help to distributions that were willing to use it.
The distributions were fully aware of what they were doing when they abandoned sysvinit; it wasn't a backhanded tactic by Redhat.
A recent example of something close to EEE is vscode, which is now getting "refactored" to utilize non-open source plug-ins for everything, effectively extinguishing it from a free perspective.
1. Embrace: Development of software substantially compatible with a competing product, or implementing a public standard.
2. Extend: Addition and promotion of features not supported by the competing product or part of the standard, creating interoperability problems for customers who try to use the "simple" standard.
3. Extinguish: When extensions become a de facto standard because of their dominant market share, they marginalize competitors that do not or cannot support the new extensions.
Systemd is part of "Extend", and what they're extending is the Linux ecosystem. Sure, those extensions might make things better in the short term and for some use cases, but they're now owned by IBM, so good luck!
Yes, and I think it's pretty obvious to anyone who is familiar with the history. EEE is about a set of anti-competitive practices, ones that are definitely and obviously being used here. Whether they do that with malice aforethought is unknown, and nothing there is illegal by itself, but it is a pretty clear thing they're doing.
Did Microsoft extinguish document editing software? No, they just used their market position to make sure that their implementation was the most dominant.
> extending those standards with proprietary capabilities, and then using those differences in order to strongly disadvantage its competitors.
But the one qualifying bit for the EEE strategy to work is to remove the open source option, replacing it with something that can only be provided by them.
And systemd is open source as far as I know, so the extinguish phase isn't possible in that sense.
>But the one qualifying bit for the EEE strategy to work is to remove the open source option, replacing it with something that can only be provided by them
Well if that's the definition you're using, why are we even talking? 6 comments deep and you're only now bringing up that by the definition you're using only proprietary software can use EEE? That's a pretty big point, and probably should have been brought up in comment number 1.
Like what are you even talking about if you're just bringing that up now?
And I mean sure, define EEE in a way so that it can only apply to proprietary software. However in my first comment I said this: "The fact that the underlying code is open source apparently doesn't matter too much."
Personally, I think you can use the techniques of EEE to get control of an open source project, and whether your alternative is open source or not doesn't really matter, since what it's actually about is control. Making sure the open source community treats your solution as the de-facto standard, making sure that your solution with all its eccentric little commands is the one taught in schools, selling add-on projects like log-collection daemons, and making it harder for your competitors to do the same.
EEE is descriptive of what's happening; it's not prescriptive. If you'd like, I can say that "redhat is doing stuff that looks a lot like EEE except for this list of caveats which make it slightly different from these other cases, but not different from these cases". If we're done debating definitions, we can talk about the actual behavior I find objectionable, but that's all pretty well documented in other places.
I'd agree with that; there are no rock-solid definitions we're going to agree on. Still, there are a number of places where a small concession from Redhat would have made other people's lives a lot easier. See also: this very thread and support for server-side decorations, a decision made by someone directly employed by Redhat.
Innovating without compromise in ways that significantly impact downstream software, and not accepting patches. That looks a lot like using a market position to unduly influence other projects to me, and they'd get a lot more sympathy if they at least said "pull requests welcome" or if Redhat paid an engineer to make libdecor less awful. (Note that Redhat is also paying developers for libdecor development work.)
There are a lot more examples like that floating around.
And that's always the rub, isn't it? It's next to impossible to prove that most of the people in key decision making positions work for Redhat or are Redhat alumni.
Have you ever worked with RedHat? This is just how they are, across the board. (And maybe now we should be saying the way IBM is, whose history is even worse...)
> And systemd is controversial in that a vocal minority says it was a disaster.
Not a minority, and some of the loudest voices are the experts whose opinions matter most.
I am one such expert. Most technical people I speak with about systemd are largely ignorant as to the design considerations.
Yeah, no. Most actual experts very much prefer systemd because it actually solves the problems inherent in booting up a system with all its services and resolving the dependencies between them, instead of relying on hacky bash scripts that only work on a single machine (if at all).
The only people who have a problem with systemd are “old-timers” that just refuse to learn anything new, and just blindly believe that the old status quo was better.
That's the kind of attitude I often see from people who have never spent time debugging a dependency resolution issue in startup actions. People who do not have significant experience in running large scale, reliable systems. People with a desktop focus, who know little about systems internals.
Engineering reliable systems often means reducing complexity. A major unaddressed complaint regarding systemd is that there actually isn't a need for dependency resolution during startup, at all. Dependency resolution is very complex and introducing it makes testing changes far more difficult. It is not a desirable feature for most server applications where reducing complexity and increasing operational visibility take precedence.
> The only people who have a problem with systemd are “old-timers”
You may be confusing experience with age. The two are related but they are not the same.
> that just refuse to learn anything new, and just blindly believe that the old status quo was better.
This is just silly. There are all sorts of wonderful features related to containerization, cgroups for job control, structured invocation for processes, etc.
The primary issue with systemd isn't that any of these features are bad. The issue is that they have been conflated in a mostly unspecified, monolithic system which discourages composition, inspection, and simplified operation.
It's a big ball of mud architecture. You can't use the service invocation components standalone. You can't use the RC components standalone. You can't use the init components standalone. All of these things should be possible, but aren't, and that is the core issue.
Thanks for the level-headed answer. I'm sorry I lost my cool there, but all too often one runs into Wayland/systemd-haters who are absolutely unreasonable.
What I compare systemd to is the previous iterations of Linux init systems, and compared to them it is a huge positive change in my, and most of the Linux world's, opinion (it was voted on multiple times in probably the most democratic process at Debian, and was decided on independently at several other distros). It solves real problems (e.g. pre-disk-mount logging) in a relatively modular way, and I think we should cut it some slack here, because oftentimes in complex problem domains a monolith is the correct choice. Multiple different tools will actually result in more complexity due to coordination, and I would go as far as to say that was part of the problem with previous init systems.
May I ask what your preferred solution in place of dependency resolution at runtime would be, then? I do agree that it is “one more moving part”; I just fail to see how else it could work.
I wouldn't do dependency resolution at runtime at all on servers. I would do it at install time and generate a static set of ordered steps (a sketch follows the list below). This has a few desirable effects:
* It ensures changes can be audited, observed and tracked (or caught and prohibited). Explicit change control points are key.
* It provides an interface and an option, divorcing the dependency tool from the system which invokes the ordering
* Ordered sequential steps take longer, but add determinism. This enormously simplifies the startup process, including auditing where things went wrong in logs after the fact. For servers, this is far more important than faster start times.
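As a sketch of what I mean (service names are hypothetical; Python's stdlib even ships a resolver you could run at install time):

```python
# Sketch: resolve service dependencies once, at install/deploy time,
# and emit a static ordered boot list that can be audited and diffed.
from graphlib import TopologicalSorter

# Hypothetical services mapped to the services they depend on.
deps = {
    "network": set(),
    "config": set(),
    "database": {"network"},
    "app": {"database", "config"},
}

order = list(TopologicalSorter(deps).static_order())
print("\n".join(order))  # init just runs these steps in sequence
```

The generated list is the artifact you put under change control; the init system itself never resolves anything.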
Most systems end up with a distro/systemd controlled base and a custom application on top. Often there's all kinds of special harness around the application, far beyond what can be fit into systemd. So there's always necessary duplication and struggles around interfacing different kinds of systems.
For custom applications, I tend to advise that the development team first come up with the success criteria for their app. What conditions would they page on, and what runtime dependencies does their app need? (Databases, up to date feeds, kill switches/feature flags, etc). I suggest that they re-use their monitoring code to also control orchestration of service startup. For example, not starting a front-end app unless a connection check to a backend SQL server passes. In these cases, the orchestration may be significantly more complex than what can be performed inside of systemd -- and this is part of why a clean and extensible interface is so important.
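A sketch of that last point (host, port, and paths are all hypothetical):

```python
# Sketch: reuse the monitoring check as a startup gate -- don't start
# the front end until the backend connection check passes.
import socket, subprocess, sys, time

def backend_ready(host="db.internal", port=5432, timeout=2.0):
    """The same reachability check the monitoring/pager uses."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for _ in range(30):
    if backend_ready():
        sys.exit(subprocess.call(["/usr/local/bin/frontend"]))
    time.sleep(2)
sys.exit("backend unreachable; refusing to start the front end")
```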
Note that Red Hat's business model is charging money to be Linux experts. It's against their interests for non-Red-Hat Linux to be viable, or for Red Hat to be comprehensible without their services.
I mean, as an engineer, I don't need to assume malicious intent (not saying you do) to understand how this happens. It's much easier to support a single code path rather than options for everything. It's the same reason I tend to shy away from premature abstractions: every option is a drag on future code built on top of it.
X11 was the only environment that did server side decorations.
Both win32 and macos draw decorations client-side. Even if you don't handle the respective messages in your event loop, the libraries that you linked to do.
Server-side decorations are hard. You cannot properly synchronize two processes to draw a perfect frame. When a single process is responsible, it is easy.
> Both win32 and macos draw decorations client-side.
That's technically true but misleading. Yes, they have client-side decorations, but also a single and unified UI toolkit all applications use. So effectively _you_, the application writer, do not have to care about decorations.
Whereas, on Linux Wayland, if you want to support GNOME, _you_ have to care about decorations, or you have no titlebar nor X button. Which means you need to link against libdecor yourself to get that functionality, and in theory that wouldn't be so bad if the library were maintained and functional, but it isn't. The one stable tag is 0.1.0 from one year ago, and it still causes massive lag when trying to resize a window if you have more than 1x scaling: https://gitlab.gnome.org/jadahl/libdecor/-/issues/37 — Don't be misled by the title; it affects AMD as well.
libdecor is the _only_ way to have decorations on Wayland on all major DEs, unless you actually want to write the entire decoration code yourself. And it's buggy and unmaintained. And, cherry on top, libdecor does not match the desktop environment's look and feel, so your app has decorations that look completely out of place.
I'm a GNOME and Wayland apologist, but libdecor is terrible, unavoidable, and exists only because GNOME couldn't pull their head out of their arse and created this situation.
Not supporting server-side decorations is only a good idea in a vacuum, not in the real world. Applications should be able to choose whether they care about it (i.e. Firefox for their UI), or not at all (i.e. a video game window), as it's possible to do in Windows or macOS.
> Yes, they have client-side decorations, but also a single and unified UI toolkit all applications use.
MacOS has Carbon and Cocoa (and WebKit). QuartzCore lets you make windows, but doesn't give you decorations. Pretty similar to libwayland.
Windows has Win32, and while it does provide window decorations, newer UWP apps (and thus the rather long list of various UI toolkits made by Microsoft) provide their own decorations. This is why dark mode doesn't work on some apps. So the situation is again pretty similar to libwayland.
Window decorations aren't even the same for different versions of Windows/macOS, so if you want to implement your own you'll end up providing multiple implementations for each platform.
The real difference is that there's a single entity with the will to go through each and every UI toolkit and update them all to look the same ahead of a new major version. That's not to say that server-side decorations are therefore bad; it's just that Microsoft and Apple put the resources behind client-side decorations to make them work as well as they do.
> MacOS has Carbon and Cocoa (and WebKit). QuartzCore lets you make windows, but doesn't give you decorations. Pretty similar to libwayland.
Carbon was deprecated well over a decade ago, and has not been available for several iterations of macOS.
> The real difference is that there's a single entity with the will to go through each and every UI toolkit
No, the real difference is that it is absolutely clear on macOS and Windows what the obvious/preferred/blessed UI toolkit is, and anyone who chooses to use something else is extremely aware of the consequences of that (and if they weren't before making that choice, they will be very soon afterwards).
On Linux, the situation is more or less the opposite. Not only do you have the choice of actual UI toolkit libraries, with none being more or less "blessed" than any other, but with Wayland's adoption, the fundamental windowing technology that sits on top of the video drivers is also not a single thing either.
Apple certainly seem to think that WebKit is at least a partial alternative to Cocoa, given their use of it in parts of the system preferences. Apple is keeping those two in sync so things don't look out of place.
Microsoft has widgets in WinAPI, and then there's WinForms, WPF, WinUI, WinJS, and don't forget Electron, which Microsoft themselves also ship apps with. There's also the whole win32 & UWP distinction. If you dig deep enough into the system settings you'll even find dialogs that use the wrong window decorations.
In the last year-ish I wrote a pure-Rust windowing and input library for Windows / MacOS.
I never even had to think about making the window controls appear, they were just there by default when I created a window.
If I eventually want to extend support to Wayland / Gnome I need to figure out how to pull in complex UI framework dependencies. Or I could write my own code to render window controls but it won't be a perfect match to the platform's aesthetic. Compared to Windows / MacOS it's a mess.
Unless you only used QuartzCore on macOS and asked win32 for no decorations, you've also pulled in a complex UI framework dependency on Windows and macOS. I agree it's a mess, but not because things work on macOS and Windows without complex UI frameworks.
On MacOS "pulling in complex UI framework" amounts to "#[link(name = "AppKit", kind = "framework")]" somewhere in my Rust codebase. On Windows it's similar.
With Gnome it seems unclear how to do something equally simple to get decorations that match the OS look and feel. The most popular Rust windowing library ended up implementing their own client-side decorations rendering that imitates GTK: https://github.com/rust-windowing/winit/pull/2263.
And if every framework / app is doing this in their own subtly different way then the result is an OS where many apps have slightly different UX, buttons, text rendering, shadows, etc. A horribly unpolished experience.
This post is a nice case study why the year of desktop Linux is unlikely to happen any time soon, or didn’t happen in the past.
It takes a large engineering effort to write all the software, and it needs discipline and people working on very boring areas and aspects of the UI. I find it unlikely any OSS community will ever pull it off unless there is a clear monetary incentive to fund it and work hard.
There is no reason to think it can't be pulled off, but the Linux world suffers from catastrophic amounts of bike shedding.
The Linux desktop needs a leader figure. A Steve Jobs, or a Linus Torvalds.
Leave it to the community, and everybody wants to reinvent the wheel and paint it their favourite colour. Directed innovation can only be achieved from a single vantage point, not by a committee, let alone a ragtag of independent actors.
How did Linus convince hordes of people to contribute to his kernel, his trademark, for free? The official repo is under his personal account at github.com/torvalds/linux
And regular for profit companies aren't incompatible with Linux. Canonical, Red Hat, etc. make billions from open source.
Let me stress this again: the only reason the Linux desktop sucks is organizational. Not monetary, not technological. Linux would be a niche project today if Linus had been replaced by a committee or other loose organization. The Linux kernel is successful because there is a person at the top saying "No."
The Linux desktop has no such thing. None of the individual desktop environments has such a thing. GNOME has no BDFL, nor does KDE. So it's endless bikeshedding and churning and going nowhere.
Point is, it's Linus that merges what he wants in his tree. The kernel development isn't a democratic process, not everything has to be. Everybody can fork it and be the big boss themselves, the fact that nobody and no company has succeeded in doing so is worth thinking about.
Yeah, and the thing about forking is, it's not that it shouldn't be done, but that it should be done in an organized way: too much forking and you could have many people interested in a common alternative, but not enough coordination to pull it off. https://cmustrudel.github.io/papers/fse19forks.pdf "What the Fork: A Study of Inefficient and Efficient Forking Practices in Social Coding" (2019)
> How did Linus convince hordes of people to contribute to his kernel, his trademark, for free?
You will find -- unsurprisingly -- that most of them are not contributing for free.
That's not to say there aren't people contributing code written in their spare time. I'm one of them (but not completely: there's stuff I wrote on my own, and submitted in my own name, and also a bunch of stuff I wrote on the job). But the vast majority of people contributing critical code are not doing it for free, and haven't been doing it for free for a very, very long time. Unpaid contributions are the exception, rather than the norm.
> You will find -- unsurprisingly -- that most of them are not contributing for free.
How does that invalidate my point? GP asked why people would contribute to leader-directed open source without coercion. I just pointed out no coercion is needed. Free or paid is irrelevant.
> How do you propose that people work *for free* at the behest of a leader directing such unpaid contributors how to do their work?
(Emphasis mine)
No coercion is involved, but they don't work for free, either. Free vs. paid is extremely relevant. If you were to strip out the paid contributions from the driver tree, for example, you'd be left with a handful of drivers, virtually none of which cover non-trivial devices released in the last fifteen years or so with anything near full functionality.
> You will find -- unsurprisingly -- that most of them are not contributing for free.
Is that a meaningful distinction? I don't think the point was that the people actually writing the code aren't paid, but rather it still holds when you consider that the people paying them choose to allocate those efforts to a dictatorial organization rather than addressing their goals in some other way.
> How did Linus convince hordes of people to contribute to his kernel, his trademark, for free?
The right confluence of factors, to a large extent. GNU needed a kernel, and BSD was mired in legal trouble. Linux was there at the right time to provide a GNU-friendly kernel made from scratch.
I think the GPL was also a fortunate choice. It ensured large companies couldn't easily have a closed in-house version and had incentives to contribute to the common good.
Lamentably, I think you are right. Although, to be fair, I'm using Linux as my daily driver and it works great (PopOS, still on X11...). Though I fear adding extra confusion to writing desktop Linux apps isn't going to help things get better.
I would like to write some applications for Linux when my work life ends, but there seem to be a dozen different ways to do it, and a ton of choices to make that I don't fully understand. I really don't want to have to know the nuances of the windowing system to write something that works well with KDE and GNOME, Qt, GTK, and whatever else one needs to know about compositors... A lot of stuff is web based now and Linux handles that great, but desktop apps still have a place.
I feel like being the "best" platform sometimes matters less than being easy to target: being able to reach most distros with less work would help the ecosystem a ton. I'm using Linux as my daily driver, so hope springs eternal.
FWIW, if you use Qt, your app will look and work well pretty much everywhere, except if you want to integrate deeply with the shell. But for most apps that's not an issue.
I'm optimistic and hopeful regarding newer GTK. It seems to be modernizing nicely, though I haven't actually written an app in it yet.
If I could wish for one thing for Linux though, it would be a great toolkit to target it that also makes distribution a breeze. That it is still an unsolved problem causes me pain.
It affects anything which puts any sort of strain on the compositor. A high poll-rate mouse also triggers the issue, simply because there will be more resize events. Why gnome-shell doesn't apply some sort of back-pressure or opportunistic skipping of events when it is behind is beyond my understanding.
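The skipping I have in mind is nothing exotic; roughly this (the event queue and its contents are assumed):

```python
# Sketch of "opportunistic skipping": coalesce a burst of resize events
# and act only on the newest one instead of redrawing for each.
import queue

def drain_latest(q: queue.Queue):
    """Block for one event, then discard all but the most recent."""
    ev = q.get()
    while True:
        try:
            ev = q.get_nowait()
        except queue.Empty:
            return ev
```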
Well, I get at least six different cursor sizes depending on the window hovered. Sample apps with rough eyeballed scales: Firefox gets 1×, XWayland gets 1.5×, Alacritty 2×, Sway 2.5×, Zeal 4×, and I can’t remember what the app was that had something else again…
Probably some ubuntu fuckery. I have Firefox in a container as well (but it's Flatpak) on Fedora, and it uses the same cursor as the rest of the system.
> Whereas, on Linux Wayland, if you want to support GNOME
If you want to support GNOME, you use GTK or use a library that interfaces with GTK.
The binary compatibility with other desktops is just a "nice thing" to have, but GNOME really doesn't care for apps not made for GNOME.
There was a project where Qt would interface with GTK and let GTK handle its window management (similar to macOS/Windows), rather than talking directly to the display server, but I think it's dead.
But there is a great difference between not caring and sabotage.
Not caring is passive neglect, maybe not nice, yes. But sabotage is actively disturbing something with evil intentions. So this is a strong accusation, and I'd like to see some more solid evidence before I believe it.
It’s the co-opting of GTK that drives that. GTK used to be fairly platform-neutral under Linux, and at least moderately neutral on other operating systems, adopting some of their conventions and with theme matching being possible. It has become increasingly aggressively GNOME-bound, so that with every major release it becomes more and more difficult to produce anything that looks or feels native on any platform other than GNOME. (Even under Linux; and it’s completely impossible to produce a good result for Windows any more.) Gutting theming. All but forcing a particular idiosyncratic form of overlay scrollbars. Insisting on client-side decorations and refusing almost to acknowledge the very existence of server-side decorations (in both GNOME Shell and GTK, from both ends). The list goes on, though these are probably the three biggest ones. From time to time they take a small step back (e.g. libadwaita) or present an apparent alternative (e.g. libdecor), but those options never work well, and I strongly suspect that, whether consciously or unconsciously, they’re just to try to placate the mobs. (libadwaita hasn’t gone anywhere near far enough, to the point where they might as well not have bothered: GTK is still riddled with mandatory GNOME HIG stuff that is strongly opposed to conventions on other platforms.)
’Tis said: never attribute to malice what can be adequately explained by incompetence.
But malice learns to wear incompetence as a mask, and organisations weaponise incompetence.
I doubt that individual GNOME developers mean any malice. There may be no malice intended from any of the GNOME project leaders. But over the last few years I have steadily become convinced that cumulatively their actions and policies are active sabotage towards every last bit of the Linux desktop that isn’t GNOME.
See also what they did to Gtk3 in 2014-2015: removing a bunch of features, no longer respecting gsettings, and making it impossible to paste a file path into a file-open dialog without triggering an error (you must now do an extra key sequence just to make the filepath box appear).
They acknowledge that these are serious bugs, but no one is willing to take on gtkfilechooserwidget.c anymore to fix them. So all file choosers in GTK have been frozen broken since that time.
But GNOME/Redhat employees don't care. They switched to Gtk4 (where these bugs are fixed). All the programs depending on Gtk3 can just rot, according to them.
It’s funny to hear positive talk of GTK 4, though, because apart from the accessibility stuff (once it’s all finally hooked up) I don’t think I’ve heard a single positive thing about it, only more discussion of things they’ve gutted and broken for non-GNOME environments (… including font rendering, especially in Flatpak or whatever). My experience from trying out rnote and gtk4-demo under Sway has not impressed me either: just more badly-forced CSD, new slow and poorly-designed animations (most notably focus, but also things like caret blink fading), and traditional menus that are super ugly and apparently completely broken by keyboard. (In gtk4-demo: run the Builder demo; press Alt+F to open the File menu; marvel first at how access keys are no longer underlined until you further press Up/Down, clearly a bug; leave the menu by keyboard, either by activating an item or by pressing Escape; observe that now the keyboard does absolutely nothing of any sort until you click in the window again.)
(And it’s easy finding more super obvious usability bugs. Compare the keyboard usability of the colour picker in the Pickers demo between gtk3-demo and gtk4-demo: they both get initial focus wrong, by keeping the Cancel/Select button focused if you clicked them and are reopening, but beyond that gtk3-demo is fine, while the gtk4-demo one has the wrong button as the default action when you press Space or Enter on an already-selected colour: it should obviously activate Select, but actually activates the Custom “+” button and then focuses the Cancel button. That suggests they’re modelling form controls in a somewhat weird way and made a fundamental change to the handling which will be responsible for bugs in a variety of similar places. And I can’t even be bothered filing any of this, but if anyone else wants to, feel free.)
I may have worded it a little strongly (despite tempering it with the word “almost”), but hear me out.
GTK will use SSD so long as you don’t customise the title bar. I think you can even query it, with effort (gdk_wayland_display_prefers_ssd, c.f. https://gitlab.gnome.org/GNOME/gtk/-/commit/f2adaba237519642...), and thus behave differently depending on compositor preference. (But even that is mildly nerfed from the org_kde_kwin_server_decoration_manager interface, since it only exposes “prefers SSD” and not “supports SSD”—though in practice I suspect there’s no difference in any compositor.)
But guess what? GTK 4 has regressed matters in this space. Fancy that. If I run gtk3-demo, as a tiled window it gets Sway SSD and an app header bar which duplicates the title (fine), but it doesn’t have the window border or shadow (good). When I float the window, it gets border (including top radii) and shadow. This is well-behaved software, not acting quite how I’d prefer it to (I want SSD even floating, even double-title-barring), but still reasonably. But gtk4-demo? It gets border and shadow (drawing outside its designated area even in tiling mode, and I’m not sure why it’s possible for it to do that) regardless of whether it’s floating or not. Progress. They’re forging ahead with ignoring the existence of SSD as far as possible even in GTK.
I just tested running gtk3-demo and gtk4-demo in sway. gtk4-demo does draw its shadows on other windows, but I wouldn't have noticed that if you hadn't brought it to my attention. Each has only the GTK title bar (no sway title bar) in both floating and tiled mode (but not full screen), which is, imo, a reasonable way for things to work.
Not sure how you’re getting no Sway title bar out of it. In fact, when I run `swaymsg border csd` with the tiled gtk4-demo focused, I get “This window doesn't support client side decorations” and I have no idea what’s up with that, especially given that the window definitely gets border csd when floating. ¯\_(ツ)_/¯
It’s possible it’s related to more recent changes in Sway. I haven’t updated Sway since March (since I’ve been using the high-DPI XWayland patches and updating is comparatively bothersome). I dunno.
Incidentally, I mostly use tabbed layout, and you’re always going to get server-side decorations out of that, since you’re breaking out of the rectangles mould.
RHEL 8 still allows X11 as an option; does that mean it won't be getting GNOME updates? My business has decided to keep X11 for the foreseeable future. I don't mind at all; I use X forwarding a lot.
GNOME lifecycles in RHEL differ from other, leading-edge distributions. GNOME rebases aren't just simple API/ABI compatible minor updates, and things can break in doing so, therefore RHEL engineering wants to do them sparingly. Granted with GNOME 3.x there were quite a few rebases in RHEL 7 and RHEL 8 to fix issues and growing pains. Feature and fix backports are the preferred method for updating GNOME in RHEL, rather than complete rebases.
The Xorg Server has been deprecated in RHEL, but exists in both RHEL 8 and RHEL 9 and will be maintained in those distributions for the entirety of their lifespans. It will be removed in some future version of RHEL. X11 support is provided by XWayland.
"I doubt that individual GNOME developers mean any malice. There may be no malice intended from any of the GNOME project leaders. But over the last few years I have steadily become convinced that cumulatively their actions and policies are active sabotage towards every last bit of the Linux desktop that isn’t GNOME."
How?
I use XFCE and I do not develop native Linux apps, so I lack detailed knowledge here - but as far as I understand it, GTK is developed for GNOME. It even has GNOME in the name. So of course they mainly care about - well, GNOME.
So knowing this, I simply would never choose GTK as a platform for my software if I did not intend it to live mainly in the GNOME universe.
(I would probably use something like Qt, which is explicitly not advertised as bound to one desktop, but right now I'd rather stay with the web and avoid all of that.)
I mean, did GTK advertise itself as a universal Linux toolkit and promise eternal support at some point? Then there might be a point about them being assholes, but even then it would not be sabotage if they simply focus on their priorities.
It would be something else entirely if the company Redhat were making all these changes on purpose to break other stuff and make people switch to GNOME. That would warrant the term sabotage - but the evidence I have seen so far is not convincing.
So maybe it was rather that people used GTK because it worked "fairly platform-neutral" and then expected it to stay that way? And then got mad when the development direction changed?
So making demands of something they got for free?
Like I said, it might be free, but I do not intend to be bound to GNOME (and I do not like GTK too much), so I will never use it for my apps. And people who did probably have to switch - or fork it and adapt it to their needs.
Good points, I wasn't aware of that. But I think my broader point still stands, even though I just checked the GTK website and would agree that they could probably make it clearer that they primarily focus on GNOME. Still, sabotage is a strong word.
This is exactly why I’m calling it co-opting and sabotage. GTK used to be capable of being even fairly OS-neutral, and was certainly quite neutral within Linux and so became the widget toolkit of choice for diverse desktop environments and worked well thus; but over time GNOME has taken it over completely, and the desires of other desktop environments are utterly ignored. The GNOME Foundation has become a very, very bad custodian for GTK.
This is a valid take, but it further fragments the Linux desktop. There are already too many different standards and technologies and transitions going on. Some of them for good reasons admittedly. Desktops only being compatible with their closely aligned toolkit is the last thing we need.
> Whereas, on Linux Wayland, if you want to support GNOME, _you_ have to care about decorations, or you have no titlebar nor X button
And you have basically two UI toolkits for Linux that do everything for you without your caring about it, so how is that different? Also, if you do decide against using said frameworks, you also have to care about accessibility and a million other things that you weren't going to do either way, let's be honest. So I don't know, it seems like a non-issue to me.
Use libraries instead of reimplementing everything from scratch.
Sure, but then it is the job of those framework developers to.. abstract away the underlying display manager, so they should port their framework to support wayland as well, and application devs can just continue to write against this abstraction. I still see no problem here.
Those others, for better and worse, have fewer developers. Having to figure out two different types of decorations is a mess they don't have time for, so I can't blame them for picking one.
That is inherent in open-source/bazaar-style development, and we can all help wherever we can. There is no entity like Apple/Microsoft mandating that we change to Y from X.
That's impressive, but being an optional install that isn't even available on most distros doesn't really make it a useful target for applications that just want to show some window decorations. I also doubt that Debian has ported GTK 2 to Wayland or added client-side decorations to GTK 2, so it's even less useful in the context of this discussion.
New applications would not target GTK 2, obviously. But that's not what you asked.
GTK 2 was EOL'ed many years ago, and hence will never have Wayland support nor CSD. But there are dozens of existing applications in Debian's repos that still use it.
Wayland is going to have an X server for the foreseeable future, because otherwise the overwhelming majority of GUI applications for Linux will not run there.
I think you may be trying to draw a technical distinction which misses the point of what people mean by “client-side decorations” and “server-side decorations”.
Win32 has always defaulted to server-side decorations, by which we mean that the window manager controls the rendering, appearance and functionality of the title bar. Maybe this is actually run in client space via user32.dll or whatever as part of the OS-managed event loop, but functionally your app gets told “here, have a client area” (and isn’t that name telling!) and works inside that and completely ignores the remainder of the window area, and the window decorations implementation is provided by the window manager, so that theme changes (ancient Classic, XP Luna, recent Classic, Aero Glass, Modern, colours within each, &c. &c.) immediately apply globally.
This is what people mean by server-side decorations. They mean what is still the default under Win32 and I presume macOS, even if on both platforms it’s not the recommended style any more for most apps.
But it’s also worth noting that the recommended client-side decorations style on Win32 at least (don’t know much about macOS) is still guided by window manager stylistic conventions, including theme colours where possible.
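(You can even observe that split from code: the title bar is "non-client" area the app never draws; the app just asks the system how big it is. A Windows-only illustration:)

```python
# Windows-only: the caption (title bar) height is a system metric,
# because the window manager owns that area, not the application.
import ctypes

SM_CYCAPTION = 4  # constant from WinUser.h
print("title bar height:", ctypes.windll.user32.GetSystemMetrics(SM_CYCAPTION))
```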
In the Wayland/Xorg world, "server-side decorations" means they're drawn by the compositor or display server; neither macOS nor Windows does this.
Windows is a lot more complicated: the decorations are drawn by the client... but that code is part of the userspace DLL from the system that you link into your app, which gives you a handle to draw on.
On macOS, the only documented method to draw, AFAIK, is through Cocoa, which can give you a space to draw on, but the linked Cocoa library will still draw your app's window.
To the user, I guess SSDs achieve the same thing as Windows/macOS do, but with very different methods, because unlike macOS and Windows, there's no central toolkit/library.
Huh? When I create a window on macOS, Windows or X11, I get a decorated window, and that's the only point that matters. Where this decoration is drawn within the operating system's window system is nothing the API user needs to or should worry about.
> X11 was the only environment that did server side decorations.
> Both win32 and macos draw decorations client-side
Which is why on linux if a process freezes I can kill it by pressing the normal X on the window, while on other operating systems I need to open the process manager.
And you'll also note that if a win32 process freezes (i.e. stops replying to wm messages in a timely fashion), the win32 window manager will take over drawing the window decoration, so that you can also kill the process by pressing the normal X close button on the window. At least since XP times.
Right, which makes using the bUt wInDowS UsEs CsDS argument against SSDs especially silly. Even MS found that CSDs have usability problems but are stuck with them, so they have to hack around the limitations - and even then there will still be a window of time where you can't even move the window before the SSD hack takes over.
I remember an old post years ago on the DOSBox forum. Someone posted a patch to update the FPS number on the window titlebar. Someone else tried it and came back complaining that it ate the processor. A bit of investigation revealed how expensive it was to update the titlebar: DOSBox, in some form, notified the DE (window manager) that the titlebar needed updating; the window manager then informed the window decorator about the update; it then updated the titlebar and notified X11 to redraw it; X11 then had to sync all that to post a new screen with everything ready to show. Now go back and count the number of processes, and calculate how many context switches were required just to update the titlebar.
Server-side decoration made things more modular because it allowed the window decorator to be an entirely independent process: you could use Beryl or kwin or any other window decorator with Compiz, for example. Even considering these advantages, the price paid for it was too high to justify. X11 did it and basically nobody else did, certainly for a good reason.
Server-side decoration is one thing I'm glad we are finally getting rid of.
Most user interactions with window decorations need to be handled server-side. Server side also makes sense because the compositor and the DE are basically the same under Wayland, so things like themes should be implemented once server-side and applications wouldn't have to do anything to change their frame to match.
On Xorg with Window Maker and no (desktop) compositor running, my own toolkit (example app[0]) resizes instantly.
On Xorg with KDE5 (not sure which exact version; whatever openSUSE has) and a (desktop) compositor enabled, the same toolkit also resizes without gaps (though there is a small delay, probably due to the compositor; I didn't try without it). Same with native apps.
On Wayland/KDE5 the same toolkit resizes without gaps, but there is a visible "lag" for the titlebar to be updated that didn't exist with Xorg/KDE5. Since the toolkit only supports X11, I'm not sure if it is due to XWayland or KDE5 though.
TBH in none of the above configurations would that be something I'd notice unless I was looking for it - or it was very laggy. Even on Wayland, it is something I noticed only because I was resizing the window constantly back and forth to see if there was any lag.
However, I intentionally didn't link to it, since the site contains a three-year-old code dump and my current version has a lot of things changed in an incompatible way, so I wouldn't like people using it just yet. I still need to make a bunch of other changes to the API, as well as fix some stuff and rethink how some other things work that would be hard to change later, if I decide to make a proper release that people can depend on (e.g. I'd like to change how fonts work to allow for arbitrary font styles). I don't have much time for it right now though, but perhaps in a few months I'll be able to allocate some time for it.
I don’t know; if I resize the window I do want the border to reflect the changes even if the app froze, and drawing a black rectangle with a frame, optionally drawing the app's last ready frame onto it, doesn’t sound too hard to me.
Most of the time, though, you run applications that aren't frozen. Sure, having resizing do something in that case is useful, but you don't want those black bars to appear for the fraction of time between moving your mouse & the application redrawing, for every single frame of window resizing where the app didn't redraw in time (text reflowing, graphics initialization & other re-computation can often push resizing frame times above the 16ms of 60fps).
(X11 and its server-side decorations appear to be able to handle this as-is - decoration redrawing is delayed until either the application redraws, or some timeout (iirc around a second or two?), at which point you get the ugly black bars)
There could be the same problem outside of a window, given that most windows are rendered by separate processes. Unless there is a compositor, which I think there is? And can't the problem you mention be avoided using a compositor?
So make the compositor use the size of the buffer the window supplied when rendering the decorations (or at least allow the old size for a frame or so). Next problem.
X11 was very much a "everything is a work-in-progress" culture. Experiments were fine, clients and servers could support whatever they wanted, etc, etc.
Wayland is an "opinionated" culture. Everything the core devs don't like is not allowed.
Everything HAS to fit into the "every frame is perfect" mantra. Which is ridiculous over-engineering and disallows a large number of perfectly acceptable (to end users) use-cases.
>Wayland is an "opinionated" culture. Everything the core devs don't like is not allowed.
False.
If your use case is so important nothing prevents you from making your own protocol and implementing client and compositor support for it. It's exactly the same as X.
(Note for those unfamiliar: I'm not saying "make a replacement for Wayland". Wayland is a collection of protocols that define one or more objects and the formats of their requests / responses / events. Showing windows is part of the xdg-shell protocol, etc.)
>>and implementing client and compositor support for it. It's exactly the same as X.
And if you think the GNOME and KDE and wlroots maintainers are all in some grand conspiracy, you are either very naively mistaken or just being toxic. Given you discarded a whole body of work as "ridiculously over-engineered" while contributing nothing except whinging, I'm inclined towards the latter.
Feel free to fork even one compositor and one client program to implement your amazing protocol.
> I already said:
> >>and implementing client and compositor support for it. It's exactly the same as X.
Not quite the same. X11 (notably the XFree86 project, which maintained things for the last decade or so) was always quite free-wheeling about allowing things.
They subscribed to something similar to the Linux kernel's approach of "mechanism, not policy".
No, I don't think there is any conspiracy. Nor do I think they are bad people.
Nor do I think the whole thing is over-engineered. It's mostly pretty impressive.
Wayland was literally made with the explicit intent of providing extensions, and they are a core part of the protocol — a client app is free to query the available protocols with their versions, while there was nothing similar under X.
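To make that concrete, here's a minimal sketch (assuming libwayland-client and its dev headers are installed; build with something like `cc probe.c -lwayland-client`) of a client enumerating every global the compositor advertises, versions included:

```c
#include <stdio.h>
#include <wayland-client.h>

/* Called once for each global (protocol object) the compositor advertises. */
static void on_global(void *data, struct wl_registry *registry,
                      uint32_t name, const char *interface, uint32_t version)
{
    printf("%s (version %u)\n", interface, version);
}

/* Globals can also disappear at runtime (e.g. an output being unplugged). */
static void on_global_remove(void *data, struct wl_registry *registry,
                             uint32_t name)
{
}

static const struct wl_registry_listener listener = {
    .global = on_global,
    .global_remove = on_global_remove,
};

int main(void)
{
    struct wl_display *display = wl_display_connect(NULL);
    if (!display) {
        fprintf(stderr, "no Wayland display\n");
        return 1;
    }
    struct wl_registry *registry = wl_display_get_registry(display);
    wl_registry_add_listener(registry, &listener, NULL);
    wl_display_roundtrip(display); /* block until all globals are announced */
    wl_registry_destroy(registry);
    wl_display_disconnect(display);
    return 0;
}
```

Anything optional has to show up in this list, so an app can feature-detect per compositor rather than assume.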
And really, is “every frame is perfect” somehow strange for a display manager whose job is managing... frames? Would it be strange for a video player as well?
"Every frame is perfect" requires extreme amounts of coordination - frequently multiple buffering, which increases latency.
All these are costs which most use-cases don't care about.
Under normal conditions, with display managers that don't care, frames are extremely rarely "not perfect". And when they aren't perfect they are seldom "not perfect" for more than a single frame.
This is a fine example of "the perfect is the enemy of the good".
The reason they are often “perfect” under X is that you are quite likely to be using a compositor, which adds the exact same latency, if not more (since the X server itself then becomes nothing but a useless middleman).
Did you try viewing a fullscreen video on X without a compositor? Even though today’s video hardware is indeed fast enough to mask it more often than not, display technology also demands more and more resources - I am sure you will find plenty of tearing at resolutions above 1080p and at higher framerates.
And come on, how is it extreme amounts of coordination? That’s about the most basic coordination there is. Especially since it can be bypassed in the rare case it is not needed - e.g. fullscreen game windows can render to their heart’s content.
X is full of extensions. Even extensions that the server does not need to worry about, e.g. EWMH [0] because the protocol is flexible enough to support that.
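To illustrate that flexibility, here's a rough Xlib sketch (build with something like `cc active.c -lX11`): EWMH is "just" a convention of atoms and window properties, so a client can read, say, which window the WM considers active, without the X server ever knowing the convention exists:

```c
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/Xatom.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "no X display\n");
        return 1;
    }

    /* _NET_ACTIVE_WINDOW is an EWMH property the WM sets on the root window. */
    Atom prop = XInternAtom(dpy, "_NET_ACTIVE_WINDOW", True);
    Atom type;
    int format;
    unsigned long nitems, bytes_after;
    unsigned char *data = NULL;

    if (prop != None &&
        XGetWindowProperty(dpy, DefaultRootWindow(dpy), prop, 0, 1, False,
                           XA_WINDOW, &type, &format, &nitems,
                           &bytes_after, &data) == Success &&
        data && nitems > 0) {
        printf("active window: 0x%lx\n", *(unsigned long *)data);
    }
    if (data)
        XFree(data);
    XCloseDisplay(dpy);
    return 0;
}
```

The server just stores and forwards properties; the meaning lives entirely in the clients and the WM, which is why EWMH could grow without any server-side changes.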
It's not worth driving away the vast majority of potential end users to cater to a small number of hackers though, even if the hackers were there first.
When it comes to wide appeal, Linux suffers from the paradox of choice on multiple levels. But there are a huge number of people who want the stability of Windows/Mac/etc. without the violation of privacy, as long as they don't have to spend ANY time configuring their system to keep it working. To them, just like fixing their car, that's someone else's job/hobby that they don't have time for or get any enjoyment from.
I don't see any reason why Linux can't offer that experience, but only if we have a windowing stack that is very opinionated about the user experience. Maybe it's time for an X12, but the Valves out there trying to build a competitive consumer experience on Linux will be dumping their money into Wayland.
I think the Window Controls Overlay[1] approach that Microsoft is proposing for Progressive Web Apps on Windows seems like an interesting compromise for some of the issues you raise.
Specifically that the client code is allowed to customize _most_ of the headerbar, with the exception of an area reserved for the window controls (minimize, maximize, close, etc).
It seems like this approach would still support alternate or tiling window managers as they could omit or provide custom window controls.
Very nice! I hadn’t heard of that. I’m amused at how exactly it matches my mullings of how I’d do it in a web tech stack (manifest field, env() variables, navigator property with events, CSS property to indicate draggable area), with only a couple of missing pieces: the ability to match the appearance of the native controls and native conventions (e.g. `background: env(titlebar-area-background); color: env(titlebar-area-color); text-align: env(titlebar-area-align)`, though for best results you’d still need a little more here and there).
I’ve thought of trying to design something like this for Wayland server-side decorations or at least making a better replacement for libdecor, but it wouldn’t help me personally and it’s clear that GNOME isn’t having a bar of the entire approach, so I sadly just don’t think it’s worth the effort.
By that logic "not supporting" equals "actively sabotaging"?
I'm no GNOME developer, but the OSS landscape is fragmented enough as is and the number of developers is very limited, so not supporting something that does not help your project looks excusable. It hasn't stopped Blender from supporting Wayland in this case anyway.
The sabotaging I speak of is in other things too; this is just the proximate example of their entire approach, which began as destructiveness through negligence but seems to have become weaponised somewhere along the way, so that although I doubt any of those involved intend malice, they have become complicit in organisational malice: GNOME is not the only player in the space, but they’ve tried to take it over, and their policies are doing harm to everything else.
In this case: they’ve taken something that used to work across the board under X, which is still used extensively, and for which there have been a number of well-reasoned pleas, and actively removed it by refusing to implement it under Wayland. This despite it being fairly straightforward to do, and despite it being required for a parity that a great many apps and users desire and some apps need; not doing it forces apps that don’t (perhaps can’t) use GTK and don’t need client-side decorations to produce worse results. (And requiring GTK is nasty for non-GNOME apps anyway.)
The discussion[0] pretty explicitly says that support isn't planned, so it's not just that there is a lack of developers. Either way, I would assume that the effort needed to add server-side decorations to GNOME would be way less than the combined effort of having every single GUI toolkit & program directly using Wayland needing to draw their own decorations (in addition to resulting in a better end-user experience of having native decorations everywhere).
Well, my understanding is that pull requests are not welcome there, so it's not like any of these developers can even fix GNOME's architectural problems for them.
GNOME has the unenviable position where everybody wants to use it (case in point: we do) but we all still love to bitch about it. If it were so bad, we'd have jumped ship, or stayed with MATE or Cinnamon or whatever the 2.x branch was called.
I guess from our point of view we feel a little like hostages, and just because we don't like these changes doesn't mean we want to migrate to another desktop; still, I cannot ignore that I'm paying nothing for the maintenance and development of this system and I'm not investing my time in its development or governance, so frankly, who am I to say what should be what?
But it wasn’t GNOME’s project: they took over maintenance of something that was used by many, and for many years they maintained that trust, acting benevolently. More recently (mostly from GTK+ 3 on), though, they have acted with increasing hostility, thoroughly violating the trust the community placed in them. And the way things have happened makes it very difficult to successfully fork, quite similar to Embrace, Extend, Extinguish (though in a somewhat different space). It’s made particularly hard by the fact that GTK+ 3 and GTK 4 each mix deeply-interwoven good and bad changes, so forking from an earlier version would be a significant regression as well as a significant fix. Combine the general mutual incompatibilities with how deeply GTK is woven into apps, and forking becomes quite impractical.
I personally _have_ stuck with Cinnamon (and toyed with MATE, too, but IMHO GTK3 itself has enough nice things to justify using it).
So far Cinnamon has mostly been able to roll back the worst of the changes, and I haven't had many complaints. But the push to encourage every piece of software to draw its UI into the window decorations isn't something Cinnamon can fix... they'd have to fork the world.
It's obvious sabotage. Don't ever use GNOME, don't use GNOME software, don't contribute to GNOME projects, don't go to their conferences and don't ever give them money.
I wonder if Wayland will achieve escape velocity before it gets replaced with some completely new paradigm that's clearly better than everything that came before.
Python 3 seems to have finally pulled through via deprecation.
I'm pretty convinced at this point IPv6 is going to be de facto replaced with SNI routing. You pretty much have to use TLS anyway.
> Python 3 seems to have finally pulled through via deprecation.
Nobody is going to deprecate X11. It's been 3 years since Red Hat said X11 is going into "hard maintenance mode", but here we are.
I'd love to be optimistic about the adoption of Wayland (I'm now using it myself on my gaming rig), but I have zero reason to believe that everyone will eventually use Wayland. X11 is simply more finished and robust.
> It's been 3 years since Red Hat said X11 is going into "hard maintenance mode", but here we are.
It is in hard maintenance mode, though (and 3 years is not that long). Although X11-the-protocol isn't going anywhere, X11-the-server has no real future. Mind you, "maintenance mode" does not mean "abandoned", but it does mean that it's not really going to evolve or improve - it will effectively just stay where it is for people who may still have to rely on it. Changes in the Xorg codebase are very rare these days, and most of them are related to XWayland.
The question is: what counts as native support for Wayland? Wayland compositors are so fragmented - how could anyone possibly support the multitude of environments?
The idea of a standard is that all implementors adhere to it, except for very good reasons. Style and proprietary features are not covered, for example. Compositors could also differ in which extensions they support. But the core behavior should be the same, and applications should not have to care.
The problem is that you can't realistically do anything with just the base protocol. Everything relevant beyond setting up buffers and surfaces happens in extensions.
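Right. A partial sketch of what that means in practice (this assumes an xdg-shell-client-protocol.h generated by wayland-scanner from the xdg-shell XML): even giving a surface a window role so it can be shown at all means binding the xdg_wm_base extension from the registry; the core protocol stops at a bare wl_surface:

```c
#include <string.h>
#include <wayland-client.h>
#include "xdg-shell-client-protocol.h" /* generated by wayland-scanner */

static struct wl_compositor *compositor;
static struct xdg_wm_base *wm_base;

static void on_global(void *data, struct wl_registry *registry,
                      uint32_t name, const char *interface, uint32_t version)
{
    if (strcmp(interface, wl_compositor_interface.name) == 0)
        compositor = wl_registry_bind(registry, name,
                                      &wl_compositor_interface, 1);
    else if (strcmp(interface, xdg_wm_base_interface.name) == 0)
        wm_base = wl_registry_bind(registry, name,
                                   &xdg_wm_base_interface, 1);
}

static void on_global_remove(void *data, struct wl_registry *registry,
                             uint32_t name)
{
}

static const struct wl_registry_listener listener = {
    .global = on_global,
    .global_remove = on_global_remove,
};

int main(void)
{
    struct wl_display *display = wl_display_connect(NULL);
    if (!display)
        return 1;
    struct wl_registry *registry = wl_display_get_registry(display);
    wl_registry_add_listener(registry, &listener, NULL);
    wl_display_roundtrip(display);
    if (!compositor || !wm_base)
        return 1; /* compositor doesn't expose xdg-shell: no windows for you */

    /* Core protocol gets you this far: an unmapped surface... */
    struct wl_surface *surface = wl_compositor_create_surface(compositor);
    /* ...the extension provides the role that makes it an actual window. */
    struct xdg_surface *xdg_surf =
        xdg_wm_base_get_xdg_surface(wm_base, surface);
    struct xdg_toplevel *toplevel = xdg_surface_get_toplevel(xdg_surf);
    xdg_toplevel_set_title(toplevel, "hello");
    /* (Attaching a buffer, the configure handshake, and the event loop are
       omitted - this only shows where the core/extension boundary sits.) */
    wl_display_disconnect(display);
    return 0;
}
```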
Unfortunately I've run into this a lot on modern machines. I wanted to port some of my GTK apps over to Wayland, but I can't easily do that since non-GNOME Wayland desktops don't render them properly. The fragmentation in Wayland implementations is real, and it can hurt you.
Which part exactly? It sounds to me like getting Blender running on Wayland wasn't that hard, and it's in a good enough state to be considered for inclusion in the next release.
It's still not running properly and obviously requires many workarounds for problems known for years. So, which part? Every part.
If their customers demand Wayland support, they have to do it, I guess. To me it seems like a huge waste of everybody's time. Wayland offers zero advantages over existing solutions.
Wayland's biggest disadvantage is that I can't use my computer as a heater anymore. With X11, I had that feature every time I tried connecting a second screen to my HiDPI laptop.
Honestly though I don't get the Wayland hate. It's been stable to use and a joy to configure. X11 survives because of legacy and inertia, and I haven't looked back one second in the ~3 years since I made the switch to Wayland/Sway.
> It's been stable to use and a joy to configure. X11 survives because of legacy and inertia, and I haven't looked back one second in the ~3 years since I made the switch to Wayland/Sway.
Good for you, but I had the completely opposite experience. X11 just works for me without any serious issues, but the last time I tried Wayland a few months ago (on RDNA2) it was an unstable mess. Play a video with mpv? That's a crash. Firefox and some other applications I don't remember also had weird issues.
And GNOME seemed to be the only desktop that was somewhat usable (except for all the crashes...). KDE still felt quite incomplete, and others would not run at all (Hikari just made my screens flicker).
There are a few things that really make me want to switch, but in the end I always end up back with X11.
> Honestly though I don't get the Wayland hate. It's been stable to use and a joy to configure.
That's fair, but you need to understand that the "haters" have the exact reverse position; X11 has been stable to use and a joy to configure, and Wayland remains full of "interesting" pitfalls. (If this is going to be that kind of thread: My personal irritation is that there's no consistent way to set keyboard/mouse layouts that works across compositors, or in many cases at all, because every single compositor does its own thing.)
> Honestly though I don't get the Wayland hate. It's been stable to use and a joy to configure
Firefox doesn't work properly out of the box with Wayland, and neither do Spotify, Discord, VS Code, and tons of other applications. I'm a Blender user too, and this submission is good news, but before that, Blender was in that box too.
I migrated to Wayland just last week, but having to add fixes to various applications I use day-to-day (every one I mentioned except VS Code) kind of sucks and is not needed at all with Xorg.
But the performance is so much better, and memory usage lower, that I power through it. I can understand why people are resisting Wayland, though: it seems it's still very early, as not a lot of what I use supports it fully.
Wayland is starting to be enjoyable for me, but the road leading here has been extremely painful and paved by top-down mistakes that have slowly yielded us a usable protocol.
I like 1:1 trackpad gestures and V-Sync, but was it worth breaking screen recording, GUI libraries, RDP and hundreds of desktop environments? It's hard to say, but the fact that it took us 10 years to get halfway there causes me concern.
I don't hate it, I just have zero reasons to switch and a switch would take significant effort.
I'm using bspwm, there is no bspwm alternative for Wayland I'm aware of. Sway might be the closest but not close enough. River might become an option one day, but is not there yet and would still require effort, and I have no incentive to expend effort on it as long as Xorg keeps working with the applications I need.
But for those reasons, seeing developer time going towards Wayland ports rather than other things is a nuisance. Just a nuisance - people can spend their time as they please - but a nuisance all the same.
I don't hate it. Sway works mostly fine, except that drag & drop between file managers and Firefox is broken in several ways, in both directions. And since that's an important workflow for me, I'm still on i3.
I check back once or twice a year, and various other things improve. But D&D has always been broken.
Works fine for me, for at least the last two years. I upload files to drive.google.com in Firefox by dragging them from pcmanfm-qt. Both FF and pcmanfm-qt are running under Wayland, not Xwayland.
Every additional feature requires workarounds? Also, if you weren’t so full of it, you could actually list some concrete problems - you know, to make a useful contribution to the discussion.
Back in the day Blender was basically a pure OpenGL program with all the controls/widgets implemented directly. I guess that's not the case any more. Does it use any kind of widget library now?
See the discussion here: https://discuss.pixls.us/t/wayland-color-management/10804