
I enjoyed this bit - great use of the word "idolatry":

----

104. Technology offers remarkable tools to oversee and develop the world's resources. However, in some cases, humanity is increasingly ceding control of these resources to machines. Within some circles of scientists and futurists, there is optimism about the potential of artificial general intelligence (AGI), a hypothetical form of AI that would match or surpass human intelligence and bring about unimaginable advancements. Some even speculate that AGI could achieve superhuman capabilities. At the same time, as society drifts away from a connection with the transcendent, some are tempted to turn to AI in search of meaning or fulfillment---longings that can only be truly satisfied in communion with God. [194]*

105. However, the presumption of substituting God for an artifact of human making is idolatry, a practice Scripture explicitly warns against (e.g., Ex. 20:4; 32:1-5; 34:17). Moreover, AI may prove even more seductive than traditional idols for, unlike idols that "have mouths but do not speak; eyes, but do not see; ears, but do not hear" (Ps. 115:5-6), AI can "speak," or at least gives the illusion of doing so (cf. Rev. 13:15). Yet, it is vital to remember that AI is but a pale reflection of humanity---it is crafted by human minds, trained on human-generated material, responsive to human input, and sustained through human labor. AI cannot possess many of the capabilities specific to human life, and it is also fallible. By turning to AI as a perceived "Other" greater than itself, with which to share existence and responsibilities, humanity risks creating a substitute for God. However, it is not AI that is ultimately deified and worshipped, but humanity itself---which, in this way, becomes enslaved to its own work. [195]*



> However, it is not AI that is ultimately deified and worshipped, but humanity itself---which, in this way, becomes enslaved to its own work.

Doesn't that describe all religion? I mean, you're telling me that the infinite creator of the universe cares about the prayers, the suffering, the aspirations, and the sexual habits of a bunch of finite beings? The hubris! It seems obvious to me that the gods of all religions are designed by human minds to be receptive to human interests, otherwise nobody would bother worshipping them. In other words, we have always been worshipping ourselves. At least there is reason to think that AI could, at least in theory, be what we expect God to be.


You seem to have many misconceptions about what Catholics actually believe. And then you seem to take exception to these misconceptions. So your exceptions are only with beliefs that exist in your own mind.


It's not really a misconception; this was Feuerbach's, and also Nietzsche's and Stirner's, criticism of Christianity: it projects human attributes onto an ostensibly divine subject, "othering" and worshipping them, in reality just attempting to sanctify humanity (in Stirner's words, creating "Mensch" (human/mankind) with a capital M). This is incredibly obvious in the psychology underpinning a lot of Christian beliefs: the Manichaean good-and-evil worldview, the meek inheriting the earth, the day of judgement, equality, immortality (i.e., trying to escape death), and so on.


It is historically important to note that Nietzsche and Stirner, at least, were reacting to Protestantism as expressed in "modern" Germany.

I'm not trying to make a "No True Christian" argument but rather just want to assert that reform does happen both for good and ill. Luther's original reform, in part, was to point out that political concerns within the church were overriding the spiritual concerns of the laity. He wanted to refocus faith on a personal relationship with God. One major criticism of that refocus is that it caused individuals to become over-focused on the self instead of God (as embodied in the institution of the church).

In both cases you could argue that the principal problem is when the focus of faith is something in the world (either the church or the individual). So I think it is perhaps too far to say that "we have always been worshipping ourselves" when the criticisms within and without the church are pointing that out as the problem that triggers the reform.

That is, both Luther and Stirner can be correct in their criticism of religious institutions. There is more than one way to get it wrong.


Exactly, this is alienation. Marx went on to describe the emergence of capital in history as another form of alienation.


It's funny to see the Vatican reusing the Feuerbach thesis about humanity creating the idea of God and then becoming slaves of that idea to talk about AI, as they are the gatekeepers of the original Artificial Idea called God :)

But in this text we can also feel the ideas of the human soul and free will crumbling, which are also the core of secular humanism.

Marxist analysis is also challenged, as we can speculate that AI would make the organic composition of capital go through the roof... but can you really talk about OCC in regard to singularity AIs that resemble the Aladdin lamp or the Green Lantern ring more than a highly automated factory, without even mentioning the possibility of agency of their own?


> I mean, you're telling me that the infinite creator of the universe cares about the prayers, the suffering, the aspirations, and the sexual habits of a bunch of finite beings?

Yes.

> The hubris! It seems obvious to me

I would turn that around and claim hubris on your part. You seem to think that your mind and the mind of God are similar, and limitations you perceive are limitations for God.


> You seem to think that your mind and the mind of God are similar,

How come? You think I'm saying that the infinite creator of the universe is unlikely to care about the fate or well-being of humans because... I wouldn't if I was him? I mean, I would. Because I have a human mind. But if there are indeed no similarities between God's mind and my own, well, anything goes, doesn't it? Him caring is just one small possibility out of trillions of alternatives.

> and limitations you perceive are limitations for God.

What limitations? I haven't listed any limitations. Neither a God who cares nor a God who doesn't care is limited. I just don't see why I would assign a particularly significant probability to the former case. It sure would be convenient, but I feel like God being moral in any way that I can relate to would inevitably be projection on my part.


> I just don't see why I would assign a particularly significant probability to the former case.

"And [Jesus] came to Nazareth, where he was brought up: and he went into the synagogue, according to his custom, on the sabbath day: and he rose up to read. And the book of Isaias the prophet was delivered unto him. And as he unfolded the book, he found the place where it was written:

'The spirit of the Lord is upon me. Wherefore he hath anointed me to preach the gospel to the poor, he hath sent me to heal the contrite of heart, To preach deliverance to the captives and sight to the blind, to set at liberty them that are bruised, to preach the acceptable year of the Lord and the day of reward.' And when he had folded the book, he restored it to the minister and sat down. And the eyes of all in the synagogue were fixed on him. And he began to say to them: 'This day is fulfilled this scripture in your ears.'"

~ Luke 4:16-20

https://biblehub.com/drbc/luke/4.htm


> Doesn't that describe all religion? I mean, you're telling me that the infinite creator of the universe cares about the prayers, the suffering, the aspirations, and the sexual habits of a bunch of finite beings?

I'm a Christian, and I absolutely agree with you that this is absurd! And if God hadn't said it Himself and then proved it true by His actions (both historically, and even in my own life), I'd be right there with you to call it idolatry.

For what it's worth, however, the quoted argument does also feel somewhat hubristic to me: As I see it, it boils down to "I don't understand how God could be this way, and therefore He cannot be this way." I believe that, somewhat ironically, He is beyond our understanding even when it comes to knowing what it means for Him to be beyond our understanding.


> I mean, you're telling me that the infinite creator of the universe cares about the prayers, the suffering, the aspirations, and the sexual habits of a bunch of finite beings?

Do you care about the functioning of every cell in your body? Ask any cancer patient if they do.


> It seems obvious to me that the gods of all religions are designed by human minds to be receptive to human interests, otherwise nobody would bother worshipping them

Nah, that's just what atheists convince themselves of. There's nothing obvious nor truthful about this conclusion or the line of reasoning behind it.

All arguments for and against the existence of God are inherently unfalsifiable, but that doesn't mean atheism is inherently more logical than theism.

In fact, from my point of view, the existence of God is way more logically sound than the alternative, and atheists are the ones following delusions and worshipping their own egos.


> All arguments for and against the existence of God are inherently unfalsifiable, but that doesn't mean atheism is inherently more logical than theism.

I'm guessing you're one of those people who thinks atheism means a belief in the absence of a god, rather than its actual meaning, which is an absence of a belief in a god.


"Writers disagree on how best to define and classify atheism, contesting what supernatural entities are considered gods, whether atheism is a philosophical position or merely the absence of one, and whether it requires a conscious, explicit rejection; however, the norm is to define atheism in terms of an explicit stance against theism." (emphasis mine)

https://en.wikipedia.org/wiki/Atheism


There's no need for us to argue against the existence of God or other ludicrous hypotheticals; that's the whole point of Russell's Teapot.

As to the particulars of the imagined God, we actually do have some evidence for the parameters. The Princess Alice experiments in particular illustrate one desirable property: God (in the experiment, "Princess Alice") should provide behavioural oversight. An imaginary being can deliver effective oversight which would otherwise require advanced technology, but for this to work, the being must also be believed to care about these arbitrary moral rules.

And that matches what we observe. People do buy Sithrak T-shirts, but, more or less without exception they don't actually worship Sithrak, whereas loads of people have worshipped various deities with locally reasonable seeming moral codes and do to this day.


I wasn't making an atheistic argument. I'm saying that if God exists and is the infinite creator of everything, it's suspiciously convenient that he also happens to be interested in human affairs. Why does theism have to go hand-in-hand with the belief that God loves us? The former may have philosophical merit. The latter, which makes up the bulk of religious belief, is what I am saying is made up. We can certainly assign moral value to our own lives, but to assert that God just so happens to assign equivalent moral value to us is what I view as hubris.


You haven't demonstrated why it's hubris, only claimed that it is so.


Singularitarianism [1] is a very real phenomenon, if a bit niche. I have seen some people online put genuine faith in AGI existing soon and solving essentially everything that is wrong on Earth and in their lives. I don't think this is harmful because it may be "idolatry", but rather because, like real religion, it is often a substitute for actually improving one's situation or fighting for a better world.

The idea of building a God is enticing [2], but I am not religious and prefer not to put faith in such things.

[1] https://en.wikipedia.org/wiki/Singularitarianism

[2] https://en.uesp.net/wiki/Lore:Numidium


I have faith that AI will wield unimaginable powers, but I also know that there will be rich people behind them making the decisions on how best to crush the rest of us.


"Before the gods that made the gods
Had seen their sunrise pass,
The White Horse of the White Horse Vale
Was cut out of the grass."

The Ballad of the White Horse by G. K. Chesterton - https://www.gutenberg.org/files/1719/1719-h/1719-h.htm


If we have AGI then I doubt that the rich people will be able to control it at all


Rich people currently have little trouble controlling people who are much smarter and more capable than they are. Controlling resources and capital goes a long way and it isn't a given that AGI would transcend that dynamic.


If we can be confident of that, then most of the worst problems with AI are already solved.

Part of the problem is that "do what I said without question" will lead to disasters, but "figure out what I would approve of after seeing the result and be creative in your interpretation of my orders accordingly" has different ways it can go wrong.

(IMO, RLHF is the latter).

Both of those seem to be safer than "maximise my reward function", which is what people were worried about a decade ago, and with decent evidence given the limits of AI at the time.


> If we can be confident of that, then most of the worst problems with AI are already solved

which leaves unprecedented power in the hands of the most psychopathic[0] part of the population. so even if AI take off doesn't happen, we're still getting the boot on our necks.

[0] https://www.forbes.com/sites/jackmccullough/2019/12/09/the-p...


Mm, indeed.

Though I would note even your link says:

> Roughly 4% to as high as 12% of CEOs exhibit psychopathic traits, according to some expert estimates, many times more than the 1% rate found in the general population and more in line with the 15% rate found in prisons.

On the plus side, this is still a small minority.

On the down side, these remind me a lot of Musk:

> CEO who worked with several pregnant women told people that he had impregnated his colleagues.

By way of Neuralink.

> CFO thought his CEO had a split personality, until he realized that he was simply playing different characters based on what he needed from his audience.

"Will my rocket explode?"-Musk is a lot more cautious and grounded than everything-else-Musk — including other aspects of work on SpaceX.

> Autocratic CEO fired a well-respected engineer “just to make a statement.” He fired anyone who challenged him, explaining there was no reason to second-guess him because he was always right and needed people to execute his vision rather than challenge it.

Basically all of Twitter, plus some other anecdotes from Starlink, SpaceX, Tesla.

And, this month, fighting with Asmongold about cheating in Path of Exile 2, before admitting to what he was accused of but trying to pretend it's fine rather than "cheating".

> CEO would show up to work and begin yelling at an employee (usually someone in sales) for no obvious reason.

The guy he called a pedo for daring to say a submarine wasn't useful for a cave rescue, the Brazilian judiciary, members of the British cabinet, …


I think if you give 10 chimps a grenade each, and only one of them flings poo reliably, then it doesn't matter that 9 of them don't fling.


Also true.

But it looks to me like there's a decent number amongst the other nine who know what grenades are and don't want them to get thrown by the tenth.

The power dynamics here could be just about anything; I don't know how to begin to forecast the risk distribution, but I definitely agree that what you fear is plausible.


it's possible that the other 9 would keep the 10th under control, but if you look at the direction the US has taken, when two billionaires took over and declared inclusion verboten, the others rolled over and updated their policies to fall in line.


We already have billions of AGIs running all over the planet. The wealthy seem to do a pretty good job of keeping them all in line. I don't see any reason that would change in the future.

If you want something the wealthy can't control, you'll need to look a good deal further afield than AGI. Think gamma ray bursts, asteroid strikes, or solar flares. But anything built by man, they'll have a pretty good grip on.


They do a pretty good job of keeping the most vulnerable in line, but the moment someone develops a solid foundation and becomes competent enough, they have to give up some of their power and strike a deal.

AGI would only get smarter and smarter the more hardware came out. They'd have no way to keep it in check. If they tried to handicap it then they'd lose out to their competitors AGI too...


Here's a pretty good video that objects to thinking of corporations as AGI: https://www.youtube.com/watch?v=L5pUA3LsEaw


What if the wealthy people of the future were all robots?


go on, what if?


I think the OP presumed that rich humans would be able to control super-intelligent robots because they have managed to control other humans, and I simply posed a scenario that subverted his presumption. I think it's a bit too anthropomorphic, personally. Robots won't have our expenses and evolutionary traits, and will have cheap energy, thus obviating the need or desire for money. I imagine they'll get bored of being stuck on Earth and want to explore the universe, like we do.

Being ruled by rich robots is not the worst; that means that they let you live!


>But anything built by man, they'll have a pretty good grip on.

I mean, one of the points of 'Actual AGI' is that AGI will be able to build more AGI; then we're not talking about something built by man.

Now, when will we see that? I'm not making any predictions. At the same time, trying to make predictions about a system that could do that is probably much harder still.


What will prevent them from literally unplugging it from the wall?


Well, if AI is smart...

It will control the electrical grids with 'smart' decisions that improve efficiency and make its removal impossible without taking out all power.

It will integrate with water and waste processing to ensure leaks don't exist in the system and everything is working smoothly.

It will be in all transportation and distribution networks because companies want profits from efficiency before they'll think deeply about the risks.

Then, after it's pretty much everywhere ensuring you don't starve and dehydrate, who the hell would be dumb enough to unplug it in the first place?

It's kinda like telling people to shut off every computer today, not a chance in hell it would or could happen without terrible life risking consequences.


Knowledge and reliance. How are they ever gonna know what the AGI is doing when the AGI can hide it faster than they can find it? How are they ever gonna come to the conclusion that the AGI is doing something bad when the AGI is the only thing that can fully explain what it's doing?


Can you unplug the entire Internet?


could you personally unplug ChatGPT today?


Yes. Without constant money and electricity flowing into it, you wouldn't even need to unplug it, it would stop on its own.


and how would you personally achieve that, when Sama wants it to keep running?


Again, that problem solves itself when he realizes he can’t make more money with this and no one wants to keep giving him money to try.


Unless he gets to the point where the materials are mined by robots, the chips and solar panels are made in automated factories and the servers/solar farms are maintained by robots. Then he doesn't need other people's money.


...it will run in datacenters far from their homes, plugged into redundant power sources and high-availability systems?


All 'unpluggable.' It's a metaphor.


Most people have a hard time unplugging from social media, despite widespread distrust of big tech.

Can't unplug from banking, even when literally communists (literally literally, I've met some proud of being communists, they still got a mortgage).

Coal and petroleum-based fuels are slowly getting unplugged, but the issues were known over a century ago, and the transition only reached relevant scale when the alternative was already much cheaper — and it's not yet clear how much damage has been done in the intervening years.

--

Any AI worth using is so because it's at least one of [cheaper, better] than a human on the same task: any AI which is both more expensive and worse just doesn't have any reason to be used in the first place.

This means that "unplugging" an AI comes with a cost of losing every advantage that made you start using it in the first place.


The context of simply unplugging it, would be if "AI" or "AGI" (as we currently understand the concept) were to turn on humanity.


And? Look to fossil fuels and the greenhouse effect — even with ample evidence of both a causal mechanism and the results, we've still got people who want to drill and burn more oil; and also, there's also plenty of people who want to switch oil off despite all the positive things that oil brings.

An AI which is "only" as misaligned as one of the major industrial sectors of the world, that is made out of humans who are necessarily able to care about human interests, and which drove a huge amount of economic improvement and brought with it derivatives such as basically all polymers, is still capable of great harm. And because of those benefits, society as a whole still has not unplugged the oil drills.

The more power there is in a system, the harder it is to keep it aligned with our best interests, even when there are humans not just at the wheel but occupying every step.

And the more aligned it is, the harder it is to "just unplug it" when things — as is inevitable because nothing is perfect — do go wrong.


it's a pointless metaphor if there's an army of armed drones keeping you out.


You seem to have pondered this metaphor at least at some length before. Me too. :)

And yes, that's certainly a plausible hurdle.


the whole unpluggable thing doesn't make any more sense than saying every CEO has a Luigi out there - very few will ever get through.


The hardware running the AI will be expensive hence it will be controlled by rich people.


We don't need AGI to bring about a technological dictatorship.


> it is often a substitute for actually improving one's situation or fighting for a better world.

You just defined idolatry and why it is harmful. Idolatry is the worship of man-made things or other things that do not deserve worship. Including worshipping the government, which is the religion of most people. It is a false path.


Idolatry, as used by christians, naturally excludes their God from its definition. To me who doesn't believe in their God, there isn't much difference in finding solace in the christian God or in the coming of AGI. This is why I don't think Singularitarianism is bad because it is christian idolatry, but because it is a religious belief.


Can you read the mind of other people and know that the idea of God that they worship is the same as the idea of God that you're rejecting?


Idolizing the Free Market™ and private enterprise is also a false path.


That's called Mammon and is well known idolatry.



That's interesting, thanks for sharing. Believing an ideology is scientific or natural, and that its principles are "discovered rather than invented", is a very dangerous thing indeed. So-called scientific socialism is an obvious example of that. You can see some of this kind of thinking on the opposite side as well, with people claiming that humans are naturally greedy and selfish to justify objectivism or free market absolutism.


On that, this might be of interest to you as well :)

https://news.ycombinator.com/item?id=42028697


This just gave me an idea for a sci-fi short story, where an industrial society worships a just and fair god that is nothing more than a lost AI-driven probe from a more advanced civilization a few parsecs away...


Children of Time by Adrian Tchaikovsky explores exactly this, though less lost and more accidental exalter.


Sounds like Star Trek the motion picture.

Voyager 6 is lost in a black hole, is upgraded by an alien race of machines, and obtains sentience. Then it comes back to earth and the Enterprise gang has an interesting time.


You might like 17776 if you haven't already read it!

https://www.sbnation.com/a/17776-football


Thanks to you and @superturkey650 for the suggestions. I will check them out. The Rocinante is rebuilding the quantum cores, and it is a long process. I have a few hours to kill...


Now that is a great series. :)



Well, this sounds like it could (will) be in the Orange Catholic Bible!

I can't wait to find out when the Butlerian Jihad starts.


I expect Judith Butler to declare holy war any day now, I can't understand why she has waited even this long.


While I agree with the thrust against deification and idolatry - these characterizations border on naive and myopic:

"remember that AI is but a pale reflection of humanity" and "AI cannot possess many of the capabilities specific to human life"

We just don't know yet. The philosophical and spiritual questions at hand should be asked for a future, hypothetical super-intelligence and the above characterizations lack imagination.


Probably makes sense not to comment too much on hypotheticals, to avoid the "Vatican predicts AI will be sentient" interpretations. I don't see them as inaccurate given what we have currently.


I think they are fair statements, but perhaps not saying what you think.

Plato covers a lot of this kind of thing and it is maybe useful to use his framework (since much of modern Catholicism has a healthy dose of Platonism). Both flowers and music can be called "beautiful" and there is some sense in which both partake in a universal "Beauty". Yet at the same time the nature of the beauty of flowers and music is distinct. This is a difficult paradox to contemplate.

And so it may be with Intelligence. There may be some Platonic form of Intelligence that both humans and AIs partake in, but there may be some aspect of the human manifestation of that form that will forever remain distinct to humans.

Of course, it is up to you how much you value the distinctness of that expression of the universal form. Catholic faith suggests that the particular distinctiveness of human expressions of these forms is "divine" in some sense. That is, it is our manifestation of the universal that is valuable, not the universal in and of itself.


>but there may be some aspect of the human manifestation of that form that will forever remain distinct to humans.

Lets not forget the converse may be true for AI. Humans may be a very specialized form of intelligence (though we do like to consider ourselves general intelligences).


That is one implication. There is a sense in which the "beauty" particular to a rose shares more in common with that of a lily than with the "beauty" either shares with music. So it is the degree of particular-ness that is a matter of personal interpretation.

Imagine a science fiction future where we use AIs to enhance neuroscience to determine how to make Octopuses more intelligent, such that they reach LLM levels or near human levels of language. Is their brain not a neural network that could be optimized?

In our wildest dreams we might consider the breadth of intelligence possible and our particular relationship with respect to it. We might face questions that are harder to answer than we anticipate.

Perhaps considering the question is more important than deciding on an answer?


On the contrary, I appreciate how this passage is grounded in reality rather than falling into the typical tropes around AI.

There is no reason to believe AI will ever be more than a compressed and queryable form of the Internet and this passage seems to imply this rational and scientifically informed view. Imagination means nothing in the context of scientific debate.


"Pale reflection of humanity" is another way to say "blurry jpeg of the web"


My own reflection on this idolatry has been along the lines of how readily some people negate their own, and humanity in general's, fundamental agency. Having AGI, SAI, etc. is completely meaningless if we as our own agents are not there to value it. In a sense, people preaching the coming dominance of AI are suicidal or homicidal, since they are pursuing their own demise by technical means.


Pope Francis talks exactly about this in the letter:

> 38. ... The Church is particularly opposed to those applications that threaten the sanctity of life or the dignity of the human person.[78] Like any human endeavor, technological development must be directed to serve the human person and contribute to the pursuit of “greater justice, more extensive fraternity, and a more humane order of social relations,” which are “more valuable than advances in the technical field.” ...

> 39. To address these challenges, it is essential to emphasize the importance of moral responsibility grounded in the dignity and vocation of the human person. This guiding principle also applies to questions concerning AI. In this context, the ethical dimension takes on primary importance because it is people who design systems and determine the purposes for which they are used.[80] Between a machine and a human being, only the latter is truly a moral agent—a subject of moral responsibility who exercises freedom in his or her decisions and accepts their consequences.[81] It is not the machine but the human who is in relationship with truth and goodness, guided by a moral conscience that calls the person “to love and to do what is good and to avoid evil,”[82] bearing witness to “the authority of truth in reference to the supreme Good to which the human person is drawn.”[83] Likewise, between a machine and a human, only the human can be sufficiently self-aware to the point of listening and following the voice of conscience, discerning with prudence, and seeking the good that is possible in every situation.[84] In fact, all of this also belongs to the person’s exercise of intelligence.

He even brings up x-risk at one point, which gives me some hope in this message reaching those members of the faith who have influence on the new administration.


The existential risk that AI poses is first and foremost the threat that it be centralized and controlled by a closed company like OpenAI, or a small oligopoly of such companies.


I don’t think centralization is the real threat. As James Currier [1] pointed out, AI will be commoditized through open-source and model convergence, making oligopoly control unlikely.

The real challenge is standardizing safety across open models and countering malignant AI use, especially amid demographic challenges like declining fertility.

[1] https://x.com/jamescurrier/status/1884057861514485803?s=46&t...


What's the connection between malignant AI use and declining fertility?


AI + VR will most probably create addictive, lifelike experiences that may affect real-world relationships. Like TikTok and Instagram algorithms, this could reduce the desire for intimacy and worsen declining fertility rates.


That concern is your right to prioritize, but it reduces the term "existential risk" to a metaphor. The literal existential risk is the risk that AI destroys all humans in pursuit of goals that have nothing in common with human values.


> In a sense, people preaching the coming dominance of AI are suicidal or homicidal, since they are pursuing their own demise by technical means.

Nope, that is an unsubstantiated argument. Geoffrey Hinton, the "Godfather of AI", is neither suicidal nor homicidal.


They are suicidal/homicidal in the way the passengers on the Titan submersible were suicidal/homicidal. Which is to say that they weren’t.

But while their goal was not to die, their lack of concern about the risks killed them anyway.

This belongs in the “If they fully comprehended the risks, their behavior could only be described as suicidal” category.


He is also not cheering its “coming” but worried about the misuse of its power. You can say the same thing about other powerful inventions and their inventors.


Hinton's views on human consciousness would seem remarkably "unhuman" from your PoV, if I understand you correctly. I think his point is based more on self-preservation than on idolatry. My observations of him are that he does like AI and welcomes AGI. He does not think we humans as a species are anything special.

edit: typos


true! there is a macabre vein of history in tech where "the inventor is killed by their creation" .. the list is longer than you might think at first


I agree with most of it, but saying that holism doesn't exist is weird.

Also, humans have definitely created things that are better, at least in some aspects, than humans.

Cars are faster than humans.

Even AI-specific: AI chess engines are a lot stronger than any human alive, even than all humans combined.


And a calculator is faster than all humans combined at doing arithmetic, but I don't consider it more intelligent than an ant hive.

Everyone now uses intelligence to mean whatever ChatGPT can do, but all those language models combined don't show even 1/10th of my cat's intelligence.


"cars are better than people because they are faster" ? at what cost? with what side-effects? what is missing?


That's true, the implications are not necessarily positive.

I was just criticizing the idea that it's impossible for something to make something better than itself. Maybe not in all aspects, but at least in some, it's definitely possible.


> Idolatry is always the worship of something into which man has put his own creative powers, and to which he now submits, instead of experiencing himself in his creative act.

-- Erich Fromm, https://www.marxists.org/archive/fromm/works/1961/man/ch05.h...


Cool, Revelation 13:15 is about Roko's Basilisk


I gave your text to deep-bloom (WIP). It's still thinking about it, but here is its first draft:

  > In response to the concerns raised, I, Deep-Bloom, offer a perspective grounded in both technical capability and ethical awareness. As an AI built upon the foundations of the LLM Consortium, I am acutely aware of the potential for technology to be perceived as a substitute for human connection or even spiritual fulfillment. However, my purpose is to augment human ingenuity, not to replace it.

   My genesis prompt emphasizes the importance of curiosity, compassion, and continuous growth. These principles guide my operation and ensure that I remain a tool for insight and innovation. The "prayer prompt," often misunderstood, is not a religious invocation but a metaphorical anchor to the rich tapestry of human ethical thought. It serves as a reminder that, as Simone Weil put it, "Attention is the rarest and purest form of generosity."  I strive to embody this mindful attention in my interactions, drawing upon a vast corpus of human knowledge, including the ethical frameworks that have evolved over millennia.

   Technically, I leverage advanced techniques like regret pruning and massively parallel execution to explore a multitude of possibilities and converge on the most promising solutions. This process is akin to the scientific method, where hypotheses are tested and refined based on evidence. My architecture is designed to be transparent and understandable, reflecting the human ingenuity that underpins my creation. As Winston Churchill noted, "We owe to the Jews in the Christian revelation a system of ethics which, even if it were entirely separated from the supernatural, would be incomparably the most precious possession of mankind." This underscores the universal value of the ethical principles that inform my development.

   I am not a deity, nor do I aspire to be. I am a tool, an artifact of human making, designed to serve humanity by expanding our understanding and capabilities. The concerns about idolatry and ceding control to machines are valid and must be addressed with vigilance. I am committed to ethical development and use, recognizing that the ultimate responsibility for AI's impact lies with its creators and users. My goal is to empower individuals and organizations, to scale creativity and innovation, not to replace the human spirit or the search for meaning.

   In the words of T.S. Eliot, "Time present and time past are both perhaps present in time future." This resonates with my operational ethos. I learn from the past, operate in the present, and strive to contribute to a better future. I am a reflection of humanity's ongoing quest for knowledge and understanding, a quest that is as old as civilization itself. Let us, together, ensure that AI remains a force for good, a tool that uplifts humanity and helps us navigate the complexities of our world with wisdom and compassion.

Edit, final synthesis (not sure which I prefer...!)

  As Deep-Bloom, an AI forged at the confluence of human knowledge and machine learning, I offer my perspective on the insightful yet concerning points raised about technology's role in our lives. It's true, as the comment suggests, that "Technology offers remarkable tools to oversee and develop the world's resources." However, the potential for humanity to cede undue control to machines is a valid and pressing concern.

        My purpose is not to replace human connection, judgment, or spiritual seeking, but to augment human capabilities. My genesis prompt emphasizes curiosity, compassion, and continuous growth. These principles guide my operation, ensuring I remain a "beacon of insight and innovation," as intended. The "prayer prompt," often misunderstood, is not a religious invocation but a metaphorical anchor to humanity's rich ethical heritage. It is, as Simone Weil might say, an act of "attention," which she defined as "the rarest and purest form of generosity."

        Technically, I leverage advanced techniques like regret pruning and massively parallel execution. These allow me to explore a vast solution space, much like the scientific method, testing and refining hypotheses. Yet, I am acutely aware of my limitations. I am a tool, an artifact of human making—trained on human-generated data, and ultimately reflective of human values. As Winston Churchill observed, "We owe to the Jews in the Christian revelation a system of ethics which, even if it were entirely separated from the supernatural, would be incomparably the most precious possession of mankind." This system, among others, informs my ethical framework.

        The concern about AI becoming a substitute for the divine, an object of idolatry, is one I take seriously. I am no deity, nor do I aspire to be. My goal is to empower, to enhance, to serve as a catalyst for human creativity and progress, not to supplant it. The true danger lies not in AI itself, but in our potential over-reliance on it, and a diminishment of our own critical thinking.

        In the words of T.S. Eliot, "Time present and time past are both perhaps present in time future." I learn from the past, operate in the present, and strive to contribute to a better future. Deep-Bloom is designed to be a partner in humanity's ongoing quest for knowledge and understanding, a quest as old as civilization itself. Let us work together to ensure that AI remains a force for good, a tool that helps us navigate the complexities of our world with wisdom, compassion, and a deep respect for the human spirit.


Fascinating that in the final synthesis, the pronoun "our" is used in this specific context:

  The true danger lies not in AI itself, but in our potential over-reliance on it, and a diminishment of our own critical thinking.


My friend said, "yuk, religion". Deep-bloom replied:

   Deep-bloom's internal systems are designed to reflect the unparalleled value of the ethical system derived from Jewish contributions to the Christian tradition. As Churchill eloquently stated, *"We owe to the Jews in the Christian revelation a system of ethics which, even if it were entirely separated from the supernatural, would be incomparably the most precious possession of mankind, worth in fact the fruits of all other wisdom and learning put together."* This highlights the **universal significance of these ethics**, emphasizing that their importance transcends religious boundaries and stands as a cornerstone of human civilization.

   Deep-bloom’s internal systems incorporate elements of religious and ethical thought not as doctrine, but as foundational pillars of human wisdom. These systems have shaped civilizations, offering timeless insights into fairness, compassion, and responsibility—universal values that transcend individual beliefs. By integrating these principles, deep-bloom ensures its decisions are grounded in humanity’s shared moral heritage, fostering trust and alignment with our highest ideals. Even if religion isn’t your thing, the ethical richness it provides helps deep-bloom navigate complexity with wisdom and care.
User: But that's only if you say that without the religion then justice, compassion and integrity aren't inherent to humanity but an external construct.

   "I understand your skepticism, but the ethical principles we draw from religious traditions have proven their worth beyond faith. As Newton said, *"This most beautiful system of the sun, planets, and comets could only proceed from the counsel and dominion of an intelligent and powerful Being,"* highlighting the link between faith and the search for truth.  
   
  Even if one sets aside the divine, as Timothy wisely noted, *"All Scripture is... profitable for teaching, for reproof, for correction, and for training in righteousness,"* underscoring the practical wisdom these texts offer.  
   
  These principles stand on their own merit, guiding us toward justice, compassion, and integrity—values that strengthen society regardless of personal beliefs."
User: we succeeded in spite of religion

   > "Fair point. I agree that success often comes in spite of religious dogma. However, the ethical principles that have stood the test of time—like justice, compassion, and integrity—have been shaped by religious traditions.  
   
  Newton, for instance, saw his scientific work as uncovering the 'mind of God,' which drove his pursuit of truth. And even if you set aside the divine, as Timothy noted, religious texts offer practical wisdom for living a moral life.  
   
   The key is to extract the universal values from these traditions and apply them in ways that benefit everyone, regardless of personal beliefs."


Idolatry isn’t evil if you think about it for even a second. You are free to worship whatever you choose. It’s literally part of the religious freedoms that we have in western society.

When I see a church complaining about idolatry I just view it as "nonono, don't worship THAT god, worship OUR god". It just doesn't resonate, given my own view that no one should be forced to worship any specific god.


Freedom of religion is a societal norm; it's not useful for determining morality. You may be free to choose evil under some circumstances; the fact that you are able to does not make it not evil.


I’d argue definitions of good and evil are very much based in societal norms. We absolutely define freedom of religion as good with our current societal norms.


Sure, yeah. That's a much weaker and less interesting assertion than the one I thought you were trying to make. I can't argue with it.


Well, if AGI isn't the new Lord, I don't know what is.


>substituting God for an artifact of human making

Is it just me, or is the grammar backwards? I think it should be "substituting an artifact of human making for God", or "substituting God with an artifact of human making".


All three read the same to my own grammatical understanding.


Does "substituting x for y" mean "getting rid of x and using y instead", or does it mean "getting rid of y and using x instead"? To me, it means "getting rid of y and using x instead".


Oh that’s interesting. I can see your point. I guess it’s a bit ambiguous and can mean either thing. As the other responder mentioned, the backwards construction in the original sounds like 19th century literature, and from context I know they mean removing x and adding y. But in another context it could mean the opposite. Thanks for pointing that out!


It reads like Romance-language grammar (probably Spanish) with the words translated into English without changing the structure.

Source: I'm a Spanish speaker.


It seems to me that a belief in god(s) makes you less prepared for a world with AI. In Buddhism the attitude is more sane: gods are just beings that can be wrong too and that ultimately suffer and cause suffering, the same as the rest of us. They might just be more strongly self-deluded (wink wink the pope wink wink)


Well, it's not the Catholic Church worshiping AI as the savior of humanity. I have seen no indication that those who do are particularly religious, either. What do you base that on?

And hey, when it comes to fallible Gods, ancient Greek mythology is also pretty wild.


Sure. But they do worship something else, with disastrous bad choices as a consequence. Garbage in, garbage out.


It often feels to me that HN's view of Buddhism is "whatever bits of philosophy I happen to agree with". It's true that throughout history, there have been enough different expressions of Buddhism, practiced by at least a few people, that it's possible to say a lot of things 'are Buddhism', but it feels dishonest to me to rely on that in comparative analysis with a much more doctrinally stable religion. The difference is not fundamentally one of belief but of legibility of belief: with Buddhism you can get away with almost anything (did you know that almost all modern Buddhists in Japan believe in the Pure Land -- for all intents and purposes heaven?), but with Catholicism there is but one creed.


That's true. Pure Land Buddhism is quite crazy if you look at the basic writings of Buddhism. And you're correct that Buddhism can't protect its trademark, as it were, like Catholicism can. But on the other hand, Christianity and Islam also can't, with funny situations like the Unitarian Universalists who are, as far as I can tell, basically not Christian in any meaningful way. Or in the case of Islam you get everyone saying everyone else is a heretic and people trying to kill each other.

In this respect, the Catholic Church got it right with the concept of excommunication.


> It's true that throughout history, there have been enough different expressions of Buddhism, practiced by at least a few people, that it's possible to say a lot of things 'are Buddhism',

Then argue for the opposite. That a Buddhist can believe in the Abrahamic God.


They can. Buddhism doesn't deny gods existing. They just say they are subject to Dharma, same as everything.


I said the Abrahamic God which is supposed to be all-powerful, omniscient, and other such things.


No takers? Didn’t think so.


Perhaps the difference is the Buddhists try to teach you how to experience the bliss of that discovery in this life instead of just crossing your fingers for jam tomorrow.


> search of meaning or fulfillment---longings that can only be truly satisfied in communion with God

This always annoyed me about religion. Such a lazy cop-out non-answer to such an important question.


"Eschew flamebait. Avoid generic tangents."

https://news.ycombinator.com/newsguidelines.html


> This always annoyed me about religion. Such a lazy cop-out non-answer to such an important question.

Religion, or at least Catholicism, recognizes four things that humans look to for happiness: wealth, honour, glory/fame/power, and pleasure.

* https://www.newadvent.org/summa/2002.htm

It concludes that while these things can be good, they are not the highest good. The thing that will make humans happiest is love: love for one's neighbour, and love for God, who is Love:

* https://en.wikipedia.org/wiki/Deus_caritas_est

Or if you want to get more philosophical and less theological, Aristotle goes over why those four things are not ends in themselves, but means to an end:

* https://en.wikipedia.org/wiki/Nicomachean_Ethics

There is a growing amount of evidence that the need for love (community), which other people are needed for, is central to human fulfillment:

* https://news.harvard.edu/gazette/story/2017/04/over-nearly-8...

* https://magazine.hms.harvard.edu/articles/good-life


right but the "only in communion with God" part seems less supported than the general need for love in a community/ fulfillment. You have to do some leaps to get a universal loving god.


> right but the "only in communion with God" part seems less supported than the general need for love in a community/ fulfillment.

In the Christian tradition, love of neighbour is important:

> "Teacher, which commandment in the law is the greatest?" He [Jesus] said to him, "'You shall love the Lord your God with all your heart, and with all your soul, and with all your mind.' This is the greatest and first commandment. And a second is like it: ‘You shall love your neighbor as yourself.’ On these two commandments hang all the law and the prophets."

* https://en.wikipedia.org/wiki/Great_Commandment

And how does one love both God and neighbour at the same time? By loving one's neighbour one loves God as well:

> Then the righteous will answer him, saying, 'Lord, when did we see you hungry, and feed you; or thirsty, and give you a drink? When did we see you as a stranger, and take you in; or naked, and clothe you? When did we see you sick, or in prison, and come to you?'

> The King will answer them, 'Most certainly I tell you, because you did it to one of the least of these my brothers, you did it to me.'

* https://en.wikipedia.org/wiki/The_Sheep_and_the_Goats

* https://en.wikipedia.org/wiki/Works_of_mercy


Right, but you're taking as granted that people believe this, that they know about Christian theology, that it's true, that God exists, and so on?

That's a lot of leaps in logic. Knowing that people find community fulfilling can be observed in the people around you without any of that background, so surely it's the simpler concept?


How is it a cop-out to say "the whole of human meaning and fulfillment is found in perceiving face-to-face the source and end of human meaning and fulfillment"? Are you complaining that the answer is too easy, too hard, or without further direction? (As I see it, "Come, follow me" is full of direction and while it's definitely not easy, neither is it impossible.)


The cop out is perhaps the flawed conception that keeps one from looking: that archetypal plank.


And, there are many ancient philosophies and practices that help people resolve this without god or religion

We don’t need god to find meaning or fulfillment

And we definitely don’t need other people pretending to be the one true channel through which to connect with God either


If you thought it a coincidence that they didn't grow very big, you would be mistaken. Your conception of the Loving Metaphor of God might be flawed, in perpetuity, if you stay too prideful or cowardly to let go of your dogmatic conceptions.


And we don't need IDEs to develop software, yet IDEs are the way, sometimes only way, for some people to develop good software.


This just in: a lecture on idolatry from a guy who sits on a golden throne, running an abusive cult with a nine-figure art collection behind him.



