I think that for AGI to be realized, the machine must have algorithms that can dynamically work with new data: receiving it from its sensors, interpreting it, and classifying it.
It must generate its own data, then classify, organize, compartmentalize, and regularly subdivide it. Most importantly, it must be able to invalidate it. It needs to operate on a “most likely” scenario based on its own gathered evidence: the scenario holds true until it doesn’t, and then it needs to find the new scenario.
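The "most likely scenario, held until invalidated" idea can be sketched in a few lines of code. This is only a toy illustration under my own assumptions, not anything from the original text: the `ScenarioTracker` class and its methods are hypothetical names, and "evidence" is reduced to simple observation counts.

```python
from collections import Counter

class ScenarioTracker:
    """Toy sketch: hold the most likely scenario until evidence invalidates it.

    Each observation is a label the agent gathered itself; the current
    scenario is whichever label the accumulated evidence best supports.
    When contradicting evidence outweighs support, the old scenario is
    invalidated and replaced by the new most-likely one.
    """

    def __init__(self):
        self.evidence = Counter()  # label -> supporting observation count
        self.scenario = None       # current working hypothesis

    def observe(self, label):
        # Record self-gathered evidence, then reconsider the hypothesis.
        self.evidence[label] += 1
        best, _ = self.evidence.most_common(1)[0]
        if best != self.scenario:
            # The old scenario was true until it wasn't; switch to the new one.
            self.scenario = best
        return self.scenario
```

For example, after observing "sunny" three times the tracker's scenario is "sunny", and it stays "sunny" until "rainy" observations outnumber it, at which point the old scenario is invalidated and "rainy" takes its place. A real system would need far richer evidence than raw counts, but the shape of the loop is the point.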
All of the data and information in the world cannot be encoded for the AGI and manually spoon-fed to it. This is the fallacy: it doesn’t scale. It cannot work, because it doesn’t operate on the “most likely” scenario. The whole concept would fall apart and crumble in on itself.
This data-fidelity issue is what ultimately killed the symbolic AI attempts of the 1970s, with all their Lisp programming, in which researchers tried to hand-feed knowledge to a machine. After the money ran out, they realized they simply couldn’t sustain creating all of that data manually.
And the problem with today’s Deep Learning AI is that it’s just a very fancy pattern matcher. To give an analogy, it’s like a very fancy regular-expression searcher.
The problem with today’s AI attempt is the same one that doomed the 1970s attempt: a lack of data fidelity. At some point, you can’t have humans going around classifying everything for you.
Given that, I still think AGI is possible, but achieving it will require a major rethinking. The Deep Learning neural-net ideas of today will not get us there.