Hacker News | marcodiego's comments

Hi Uecker!

I really don't know if this is the best place to ask, but I don't know where else to reach you, so... Is C2Y getting any generic programming features? I'd really love the one with _Type as a new type that stores a type.


I hope so. WG14 seems to like it (but not everybody), but it is not existing practice. So it will mostly depend on me creating a prototype and doing a lot of convincing.
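For context, the closest existing practice in standard C is C11's _Generic, which dispatches on the type of an expression but cannot store or pass a type around the way a first-class _Type would. A minimal sketch of what works today (the _Type syntax itself is not standardized, so I won't speculate on it):

    #include <stdio.h>

    /* C11 _Generic selects an expression based on the static type
       of its controlling argument. A first-class _Type would go
       further, letting a type itself be stored and passed around. */
    #define type_name(x) _Generic((x), \
        int:     "int",                \
        double:  "double",             \
        char *:  "char *",             \
        default: "other")

    int main(void)
    {
        double d = 1.0;
        printf("%s\n", type_name(42)); /* prints "int"    */
        printf("%s\n", type_name(d));  /* prints "double" */
        return 0;
    }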

In some sense... I bet there are more people writing assembly than FORTRAN today.

Doubtful. Fortran is big in HPC and has modern versions.

I'll take that bet.

"Application error: a client-side exception has occurred while loading www.amodeling.com (see the browser console for more information)."

Great start


Now, "This deployment is temporarily paused".



This page only links to about half of the backgrounds present on the currently linked page.


@dang can we update the post's URL to this one? It seems more relevant and usable.


I found it much less striking. The original link is perfect -- and subjectively, I find it more "usable" that everything is on a single page.


Ok, I've put that link in the top text so people can access both.


I updated the Wikipedia article on io_uring to dispute that.


> We're working to primarily rebuild the original AIM (AOL Instant Messenger), AOL Desktop, Yahoo and ICQ platforms as close to the originals as possible, and document the entire thing.

Why not contribute to one of the many FLOSS implementations that were once maintained?


I don't know if it supports HDR on macOS but, AFAIK, it doesn't on Windows, and on Linux it is only supported with Wayland.

Though I don't like the Wayland vs. X11 flamewar, I'm happy to see some modern features are only supported on Wayland.

That may please the crowd that will be able to say "sorry, I can't use X11 because it doesn't support a feature I need", bringing some symmetry to the discussion.

Edit: correction: it is about the in-development 5.0 version: https://devtalk.blender.org/t/vulkan-wayland-hdr-support/412...


> I'm happy to see some modern features are only supported on Wayland.

Why?

If anything, that's a reason why I wouldn't fully jump to Blender.

I have been working on my own hobby game engine for the past 15 years and have been excited to introduce Blender to the workflow. If this is the case, I don't like it. Wayland has never worked for me the same way X has.


What else are you going to use on Linux?


If they've spent 15 years on the engine as it is, what's a few more years rolling a proprietary modeling system?

On a serious note, I do wonder if this Wayland-only limitation is something a fork could work around.


I don't think there's an X11 HDR standard; one would need to be created and implemented.


Nuke, Maya?


If the starting point is that Wayland is missing features that X has, the good outcome is not getting to a point where neither option has all the features; the good outcome is that at least one of them has all the features.


That's at the cost of lots of duplicated work by the already small pool of people capable of implementing a graphics server.

There's also a third option where Wayland is foundational and the X11 network protocol is implemented on top of that for people who need it. Why should a network GUI service implement a driver to talk to a specific model of video card?


> That's at the cost of lots of duplicated work by the already small pool of people capable of implementing a graphics server.

Yeah, it kinda sucks but this is where we are.

> There's also a third option where Wayland is foundational and the X11 network protocol is implemented on top of that for people who need it. Why should a network GUI service implement a driver to talk to a specific model of video card?

Agreed; I have long argued that it would have been far better to transition to everything on the same backend with effectively rootful XWayland being the only (bare) X server, and then after that try to deal with the rest of the stack (if you really must). And maybe in 2026 we'll finally start to see movement in that direction with https://gitlab.freedesktop.org/wayback/wayback


It definitely does on macOS and I think also on Windows. You have to set the color management for the viewport to Display P3. In older versions this precluded you from using AgX or Filmic, but I think you can actually use AgX with Display P3 now.


I really wish I could use Wayland, but there are too many problems or bugs related to the software I use for work and also for play. I will test it again with this new version of Blender (that was one of the programs with problems).


In general, most repositories carry old stable versions of Blender. And often folks are reduced to using snap to maintain version-specific compatibility with add-ons, etc.

Also, getting the most out of Intel+RTX CUDA render machines sometimes means booting into Windows for proper driver support. Sad but true... =3


The reality is most commercial software and users are on Windows machines. It is fundamentally a Blender interoperability and third-party platform license compatibility issue. We all wish it weren't so, as many artists find the Windows file systems and color-calibration concepts bewildering.

Making a feature platform-specific to a negligible fraction of the users is inefficient, as many applications will never be ported to Linux platforms.

Blender should be looking at its add-on ecosystem data and evaluating where users find value. It is a very versatile program, but it should probably be focused on media and game workflow interfaces rather than gimmicks.

Best of luck =3


I agree with you, but I think this limitation exists for much simpler reasons, like "the contributor only knew how to implement this feature on Linux, and only on Wayland". Cross-compatibility for something as basic as color grading can be a thorny issue.

If nothing else, it's better to have some implementation to reference for future platforms than none.


We've all seen too many plugins become version-specific or indeterminately broken:

https://www.youtube.com/watch?v=WFZUB1eJb34

Someone needs to write a Blender color calibration solution next vacation =3


The significant majority of the film and animation industry uses Linux.


Linux RTX CUDA drivers are getting better, but it really depends on the use case. For a Flamenco render farm it makes sense for sure.

Creatives on Wacom tablets and Adobe products etc. will exclude the Linux desktop option. =3


Not just for the farm; the large majority of the movie and TV vfx and animation you see is done by artists using Linux workstations.


Not the artists I meet; they love their Wacom tablets and pressure-responsive painting programs... i.e., most of the other software is Windows-only.

I like Linux (I use it every day), but many CAD, media, animation, and mocap application vendors simply don't publish ports outside Windows.

Most studios have proprietary workflows with custom software. =3


On the applications I agree, though Wacom tablets have great driver support on Linux (in my experience more stable than on Windows).


Indeed, but this is a discussion about Blender, and you originally posted:

> Making a feature platform-specific to a negligible fraction of the users is inefficient, as many applications will never be ported to Linux platforms.

All the large studios use Linux; that's why all the third-party software used in feature animation and vfx is supported on Linux. So I'm just saying 'negligible fraction of users' in the case of Blender (which as a project would like to increase adoption in professional feature animation and vfx) isn't really true.


I am sure studios account for a small portion of the 4.5 million unique downloads each release. Note that less than 20% of users ever touch film or animation projects, 73% are single users, and most related user applications are Adobe products.

Stats are available from the published 2024 feedback data:

https://survey.blender.org/feedback/2024/

Best of luck, =3

Recommended reading:

https://www.poetry.com/poem/101535/the-blind-men-and-the-ele...


I'm not sure download stats are hugely relevant, because that would imply the needs of every person that downloads Blender are weighted equivalently, which would make little sense.

Or are you suggesting the Blender foundation has no interest in getting wider adoption among film and animation studios?


I think the foundation projects hold a lot of potential, but what they release as "stable" is rarely ready for a production setting. People do use Blender for small side tasks commercially, but would you honestly bet your company's reputation or your job on their 31 years of shenanigans?

Updates still churn the core, breaking parts of the program and bricking countless add-ons or custom code. They turn users into beta testers, partners into IT support, and hide workflow details under layers of feature-creep kludges.

When Blender updates for feature X, they will usually brick feature Y. YMMV

The Foundation may intend to improve user adoption, but they can't even cover their own unit tests on internal add-on code. =3



I've seen energy-aware scheduling, literally decades of effort that culminated in the EEVDF scheduler so that it was possible to have a good scheduler that worked well on desktops, servers and HPC... and, alongside all those efforts, a giant parallel one to prevent or influence the OOM killer to behave better.

I really wonder if a "simple" memory-aware scheduler that punished tasks whose memory behavior (allocation or access) slows down the system would be enough. I mean, it doesn't happen anymore, but some years ago it was relatively simple to soft-crash a system just by trying to open a file that was significantly larger than the physical RAM. By 'soft-crashing' I mean the system became so slow that it was faster to reboot than to wait for it to recover by itself. What if such a process was punished (for slowing down the system) by being slowed down (not being scheduled or getting lower cpu times) in a way that, no matter what it did, the other tasks continued fast enough so that it could be (even manually) killed without soft-crashing the system? Is there a reason why memory-aware scheduling was never explored, or am I wrong and it was explored and proved not good?
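For comparison: the closest existing mechanism I know of is cgroup v2's memory.high, which throttles a process over its threshold by forcing it into reclaim instead of killing it. That is reclaim-based rather than scheduler-based, but it has a similar "slow down the offender so the rest stays responsive" effect. A minimal sketch; the cgroup path, PID and limit below are hypothetical, and a real setup needs a writable, delegated cgroup v2 hierarchy:

    #include <stdio.h>

    static int throttle(const char *cg, const char *pid, const char *high)
    {
        char path[256];
        FILE *f;

        /* set the throttle threshold; above it the kernel forces the
           group into reclaim instead of invoking the OOM killer */
        snprintf(path, sizeof path, "%s/memory.high", cg);
        if (!(f = fopen(path, "w")))
            return -1;
        fprintf(f, "%s\n", high);
        fclose(f);

        /* move the offending process into the group */
        snprintf(path, sizeof path, "%s/cgroup.procs", cg);
        if (!(f = fopen(path, "w")))
            return -1;
        fprintf(f, "%s\n", pid);
        fclose(f);
        return 0;
    }

    int main(void)
    {
        /* hypothetical cgroup path, PID and limit, for illustration */
        return throttle("/sys/fs/cgroup/rogue", "12345", "2G") ? 1 : 0;
    }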


For batch jobs I would really like a scheduler that will pause and fully swap out processes until memory is available again. For example, when compiling a C++ project, some source files or some link steps will require vast amounts of memory. In that case you would want to swap out all the other currently running compiler processes so the memory-hungry one can do its job, then swap them back in. I don't want to punish the memory-hungry process; actually, I want exactly the opposite - I want everything else to get out of its way. The build system will eventually finish running processes that take up a lot of memory and will continue the ones that require little memory.
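A crude way to prototype this from user space is a coordinator that stops jobs with SIGSTOP when available memory drops and resumes them with SIGCONT afterwards; a stopped job's pages then become easy targets for the kernel's normal reclaim. A minimal sketch, assuming the coordinator already knows its jobs' PIDs; the PIDs and threshold are placeholders:

    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>

    /* Read MemAvailable (in kB) from /proc/meminfo; -1 on failure. */
    static long mem_available_kb(void)
    {
        char line[256];
        long kb = -1;
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f)
            return -1;
        while (fgets(line, sizeof line, f))
            if (sscanf(line, "MemAvailable: %ld kB", &kb) == 1)
                break;
        fclose(f);
        return kb;
    }

    /* Call periodically: pause all jobs while memory is scarce,
       resume them otherwise. SIGSTOP only removes a job from
       scheduling; the kernel's reclaim then swaps it out under
       pressure. SIGCONT to an already-running job is harmless. */
    static void gate_jobs(const pid_t *jobs, int njobs, long low_kb)
    {
        long avail = mem_available_kb();
        int low = (avail >= 0 && avail < low_kb);
        for (int i = 0; i < njobs; i++)
            kill(jobs[i], low ? SIGSTOP : SIGCONT);
    }

    int main(void)
    {
        /* hypothetical compile-job PIDs; a real coordinator would
           collect these when spawning the jobs */
        pid_t jobs[] = { 12345, 12346 };
        gate_jobs(jobs, 2, 512 * 1024); /* pause below ~512 MiB free */
        return 0;
    }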


> I really wonder if a "simple" memory-aware scheduler that punished tasks whose memory behavior (allocation or access) slows down the system would be enough. What if such a process was punished (for slowing down the system) by being slowed down (not being scheduled or getting lower cpu times) in a way that, no matter what it did, the other tasks continued fast enough so that it could be (even manually) killed without soft-crashing the system?

This approach is hard to make work, because once the system is in a memory shortage, almost all processes will be slowing the system. There's already a penalty for accessing memory that's not currently paged in --- the process will be descheduled pending the I/O, and other processes can run during that time ... until they access memory that's not paged in. You can easily get into a situation where most of your CPU time is spent in paging and no useful work gets done. This can happen even without swap; the paging will just happen on memory-mapped files. Even if you're not using mmap for data files, your executables and libraries are mmapped, so the system will page those out and in in an effort to manage the memory shortage.

To make a system easier to operate, I like to run with a small swap partition and monitor swap usage both in % and by rate. You can often get a small window of a still responsive system to try to identify the rogue process and kill it without having to restart the whole thing. A small partition means a big problem will quickly hit the OOM killer without being in swap hell for ages.

There might be research or practice from commercial Unix and mainframes, where multi-tenancy is more common? What I've seen on the free software side is mostly avoiding the issue or trying to address it with policy limits on memory usage. Probably more thorough memory accounting is a necessary step to doing a better job, but adding more RAM when you run into problems is an effective mitigation, so...
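A minimal sketch of the kind of monitor I mean, sampling SwapTotal and SwapFree from /proc/meminfo to report both percent used and the rate of change; the 5-second interval is an arbitrary placeholder:

    #include <stdio.h>
    #include <unistd.h>

    /* Fill in SwapTotal/SwapFree (in kB); lines that don't match
       the literal prefix are simply skipped by sscanf. */
    static void read_swap(long *total_kb, long *free_kb)
    {
        char line[256];
        FILE *f = fopen("/proc/meminfo", "r");
        *total_kb = *free_kb = 0;
        if (!f)
            return;
        while (fgets(line, sizeof line, f)) {
            sscanf(line, "SwapTotal: %ld kB", total_kb);
            sscanf(line, "SwapFree: %ld kB", free_kb);
        }
        fclose(f);
    }

    int main(void)
    {
        long total, freekb, prev_used = -1;
        for (;;) {
            read_swap(&total, &freekb);
            long used = total - freekb;
            if (total > 0 && prev_used >= 0)
                printf("swap: %.1f%% used, %+ld kB since last sample\n",
                       100.0 * used / total, used - prev_used);
            prev_used = used;
            sleep(5);
        }
    }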


I remember an interview with Kasparov. He said something, I don't remember exactly... It was something like "the skills chess develops are very important for... playing chess", as a way to say "if you're good at chess, that doesn't mean you're particularly smart or good in other areas too".

As someone who played chess competitively in my childhood and teens, chess helped me a lot with concentration, problem solving and decision making. I also learned to win and lose, and to have respect for other people, through the competition.

As a teacher in my adulthood, I was extremely surprised to know a highly rated player who was a very weak student, especially in logic.

I now agree deeply with Kasparov about the importance of the skills chess develops.

