I've also written a color picker that uses the Machado et al. method to simulate color vision deficiency and enforces a minimum CAM02-UCS perceptual distance between colors, for both normal vision and color vision deficiency [https://colorcyclepicker.mpetroff.net/].
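The core idea is simple: reject any candidate colour that sits too close to the existing palette in a perceptually uniform space. CAM02-UCS itself is involved to implement from scratch, so here's a minimal self-contained sketch of the same idea using CIELAB and the CIE76 delta-E (Euclidean distance in Lab) as a simpler stand-in; the function names and the threshold are mine, not from the tool.

```python
import math

def srgb_to_linear(c):
    """Decode one 0-1 sRGB channel to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_lab(rgb):
    """Convert an (r, g, b) triple in 0-255 sRGB to CIELAB (D65 white)."""
    r, g, b = (srgb_to_linear(v / 255.0) for v in rgb)
    # sRGB -> XYZ (D65) matrix
    x = 0.4124564 * r + 0.3575761 * g + 0.1804375 * b
    y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b
    z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * b
    # Normalise by the D65 white point, then apply the Lab nonlinearity
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > 216 / 24389 else (24389 / 27 * t + 16) / 116
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e(rgb1, rgb2):
    """CIE76 delta-E: Euclidean distance in Lab."""
    return math.dist(srgb_to_lab(rgb1), srgb_to_lab(rgb2))

def far_enough(palette, candidate, min_delta_e=20.0):
    """True if candidate keeps at least min_delta_e from every palette colour."""
    return all(delta_e(candidate, c) >= min_delta_e for c in palette)
```

A real implementation (like the picker above) would compute the distance in CAM02-UCS instead, and repeat the check on CVD-simulated versions of the colours.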
That seems like a very useful tool for planning new colour schemes. I wish there were more discussion and tools based on true human perception of colours, not just numerical representations that aren’t necessarily calibrated to how human vision works.
Yes, much better ways of representing and working with colours are known. Sadly, support for them is missing in most of the software we use, including Adobe Illustrator and Photoshop, the Affinity suite, Sketch, Figma and all major browsers. The best we get out of the box is usually HSB/HSL.
Of course, you can make the effort to construct a colour palette using a better model and then convert the colours. However, as soon as you start deviating from those carefully chosen colours — to build a gradient, or to apply filters or transparency, for example — you’re back to relying on the software to do the maths, and if its internal colour model is weak, the results will reflect that.
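The gradient case is easy to demonstrate. sRGB values are gamma-encoded, so averaging the stored numbers is not averaging light: the midpoint comes out too dark. Here's a minimal sketch comparing naive sRGB interpolation with interpolation in linear light (a perceptual model like CIELAB or CAM02-UCS would go further still; the function names are mine):

```python
def srgb_to_linear(c):
    """Decode one 0-1 sRGB channel to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode one 0-1 linear-light channel back to sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def lerp(a, b, t):
    return a + (b - a) * t

def gradient_naive(c1, c2, t):
    """Interpolate directly on 0-255 sRGB values (what weak software does)."""
    return tuple(round(lerp(a, b, t)) for a, b in zip(c1, c2))

def gradient_linear(c1, c2, t):
    """Decode to linear light, interpolate there, then re-encode."""
    return tuple(
        round(255 * linear_to_srgb(
            lerp(srgb_to_linear(a / 255), srgb_to_linear(b / 255), t)))
        for a, b in zip(c1, c2)
    )
```

Midway between pure red and pure green, the naive path produces a muddy dark olive, while the linear-light path keeps the brightness up. The same mismatch shows up in blurs, transparency compositing and filters whenever the maths is done on gamma-encoded values.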
Photoshop does support Lab, but all of the advanced color science (and much better UX) is found in tools for movie production (DaVinci Resolve and friends), not in photo editors, which are largely shit in this respect.
I linked to the supplementary information [1] in my previous comment, but here's the link for the paper [2]. The method is implemented by the Colorspacious library [3] for Python, and the source for my color picker [4] contains both JavaScript and WebGL implementations.