> My opinion is that in the same way you'd struggle to call a human intelligent if literally all they could do was play genius level Go, most people would not call a computer program intelligent if all it could do was play Go.
Let's see, how many skills/domains can a particular human cover? For example, I can't speak Chinese; I've never learned it. I also know nothing about medicine, while people who studied it know a great deal. Maybe a human can do 20-100 things: walking, basic addition and multiplication, speaking a few languages, playing a few games, and working in a few domains. Not an infinite list. In the same way, RL systems and CNNs can be applied to hundreds of different tasks, depending on the data they are trained on.
Also, it isn't one and the same neural net in the brain that handles two different skills; we use specialized neural circuitry for each of our skills too. If you stitch together a few neural nets and a controller that selects the right one for the task at hand, you could have a "single" system doing many things - just give it training data to learn those skills. Humans take 20 years to learn the skills needed to function in society; neural nets can learn much faster and often surpass humans. DeepMind's AlphaGo went from taking up Go to surpassing the best human player in just a couple of years - how is that possible in such a short time span? And it wasn't a case of 'clever tricks' like the chess program Deep Blue.
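The "controller plus specialized nets" idea above can be sketched in a few lines. This is a minimal toy illustration with made-up names (`SkillModel`, `Controller`), not a real ML system: each skill is handled by its own stand-in model, and the controller routes each request to the model trained for that skill.

```python
class SkillModel:
    """Stand-in for a network trained on a single skill."""
    def __init__(self, skill, fn):
        self.skill = skill
        self.fn = fn  # in a real system this would be a trained net

    def run(self, x):
        return self.fn(x)


class Controller:
    """Selects the specialized model matching the requested task."""
    def __init__(self, models):
        self.models = {m.skill: m for m in models}

    def handle(self, skill, x):
        if skill not in self.models:
            raise ValueError(f"no model trained for skill: {skill}")
        return self.models[skill].run(x)


# Two toy "specialists" glued together behind one controller.
system = Controller([
    SkillModel("add", lambda pair: pair[0] + pair[1]),
    SkillModel("greet", lambda name: f"hello, {name}"),
])

print(system.handle("add", (2, 3)))   # -> 5
print(system.handle("greet", "Lee"))  # -> hello, Lee
```

Adding a new skill to this "single" system is just training (here, writing) another specialist and registering it with the controller, which mirrors the argument that breadth comes from data and modules rather than from one monolithic net.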