
Well, if you consider Maslow's hierarchy of needs, "creatively enabled" would be a luxury at the top of the pyramid with "self-actualization". Luxuries don't matter if the things at the bottom of the pyramid aren't there -- i.e. you can't eat or put a roof over your head. I think the big AI players really need a coherent plan for this if they don't want a lot of mainstream and eventually legislative pushback. Not to mention it's bad business if nobody can afford to use AI because they're unemployed. (I'm not anti-AI; it's an interesting tool, but I think the way it's being developed is inviting a lot of danger for very marginal returns so far.)




You can be poor and creative at the same time. Creativity is not a luxury. For many, including myself, it's a means of survival. Creating gives me purpose and connection to the world around me.

I grew up very poor and was homeless as a teenager and in my early 20s. I still studied and practiced engineering and machine learning then, I still made art, and I do it now. The fact that Big Tech is the new Big Oil is beside the point. Plenty of companies are using open training sets and producing open, permissively licensed models.


> I think the big AI players really need a coherent plan for this if they don't want a lot of mainstream and eventually legislative pushback.

That's by far not the worst that could happen. There could very well be an axe attached to the pendulum when it swings back.

> Not to mention it's bad business if nobody can afford to use AI because they're unemployed.

In that sense this is the opposite of the Ford story: the value of your contribution to the process approaches zero, so you won't be able to afford the product of your own work.


We were going to have to reckon with these problems eventually as science and technology inevitably progressed. The problem is that the world is plunged into chaos at the moment, and being faced with a technology that has the potential to completely and rapidly transform society really isn't helping.

Hatred of the technology itself is misplaced, and it is sometimes difficult to debate these topics because anti-AI folk conflate many issues at once and expect you to have answers for all of them, as if everyone working in the field shares the same agenda. We can defend and highlight the positives of the technology without condoning the negatives.


> Hatred of the technology itself is misplaced

I think hatred is the wrong word. Concern is probably a better one, and there are plenty of technologies it is perfectly OK to be concerned about. If you're not somewhat concerned about AI, then you probably have not yet thought about the possible futures that can stem from this particular invention; not all of them are good. See also: atomic bombs, the machine gun, and the invention of gunpowder, each of which I'm sure has some kind of contrived positive angle, but whose net contribution to the world we live in was not necessarily positive. And I can see quite a few ways in which AI could very well be worse than all of those combined (as well as some ways in which it could be better, but for that to be the case humanity would first have to grow up a lot).


I'm extremely concerned about the implications. We are going to have to restructure a lot of things about society and the software we use.

And like anything else, it will be a tool in the elite's toolbox of oppression. But it will also be a tool in the hands of the people. Unless, that is, anti-AI sentiment gets co-opted and redirected into support for limiting access to capable generative models to the state and research facilities.

The hate I am referring to is often more ideological, about the usage of these models from a purity standpoint: that only bad engineers use them, that their utility is completely overblown, etc.


> We are going to have to restructure a lot of things about society and the software we use.

There is a massive assumption there: that society as such will survive.


Just an unvoiced caveat. It's entirely possible that society won't survive the next century for a growing number of reasons.

Indeed, so why would we roll the dice on even more of those reasons? We could play it safe for a change.

It's just bad timing, but the ball is already rolling downhill, the cat's already out of the bag, etc. The best we can do at the moment is fight for open research and access.


