About Strong AI
Is strong AI possible? This is probably one of the most interesting questions of our age. I want to discuss it in this post, but first I should describe what is meant by "strong AI":
The aim of strong AI is to create an artificial intelligence that is comparable to human intelligence. It should be able to draw logical conclusions, think creatively, have its own ideas and so on. Maybe it could even have emotions or a consciousness. At the moment it seems that we are far away from this stage of AI, so it is interesting to ask whether strong AI is possible at all. Of course there are many different views on this.
One argument I hear sometimes is: "Human beings cannot create anything that is as intelligent as they are, let alone more intelligent." You could also phrase it a bit more abstractly: "An (intelligent) system cannot create an intelligent system with at least the same level of intelligence." Unfortunately, it is not really defined what is meant by the word "intelligence". Defining it is very hard and there are surely several competing definitions. Because of that I will not try to define the word here and will settle for its colloquial meaning.
In my opinion this argument is not very convincing. To explain why, I should say that I see basically three ways that could lead to strong AI, each requiring a different level of understanding of intelligence:
- Humankind and its intelligence arose through evolution. Therefore, it should be theoretically possible to use a genetic algorithm to create strong AI. Put simply, such an algorithm simulates evolution: starting from an initial program you generate multiple descendants, each randomly changed a little bit, keep the most "intelligent" of them, generate descendants of that one, again keep the most "intelligent", and so on (see the small sketch after this list). Theoretically, strong AI could evolve during this process. In practice this method will not work, because the time span needed is far too long and you would need a really good and versatile test of intelligence to evaluate the programs.
- The second method would be to simulate every cell of the human brain in a computer. Because the human brain would be simulated one to one, the simulation should exhibit strong AI. Of course this is not practical either, because the human brain has far too many cells with far too complex connections to simulate the whole brain (a toy illustration of what "simulating a cell" could mean also follows below).
- Finally, it could be possible that we gain such a deep understanding of intelligence and of how the human brain functions that we can program strong AI based on that knowledge.
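To make the first way a bit more concrete, here is a minimal sketch of such a genetic algorithm in Python. It is only an illustration under toy assumptions: the "program" is just a list of numbers, mutation adds a bit of Gaussian noise, and `toy_fitness` stands in for the "really good and versatile test of intelligence" mentioned above, which is exactly the part we do not have.

```python
import random

def mutate(genome, rate=0.1):
    """Randomly tweak each gene a little bit."""
    return [g + random.gauss(0, 1) if random.random() < rate else g
            for g in genome]

def evolve(fitness, genome_length=10, population=50, generations=100):
    """Tiny elitist evolutionary loop: keep the best candidate,
    breed mutated copies of it, and repeat."""
    best = [random.gauss(0, 1) for _ in range(genome_length)]
    for _ in range(generations):
        offspring = [mutate(best) for _ in range(population)]
        best = max(offspring + [best], key=fitness)
    return best

if __name__ == "__main__":
    # Toy "intelligence test": how close the genome gets to all ones.
    toy_fitness = lambda genome: -sum((g - 1.0) ** 2 for g in genome)
    print(evolve(toy_fitness))
```

The structure of the loop is the whole point: nothing in it "understands" what it is optimizing, it only keeps whatever scores best on the given test.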
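For the second way, here is an equally toy-like sketch of what "simulating a cell" could mean at the crudest level: a leaky integrate-and-fire neuron, which is only a caricature of a real cell. A one-to-one brain simulation would need vastly more detailed cell models and, above all, the complete wiring between billions of them.

```python
def simulate_lif_neuron(input_current, steps=100, dt=1.0,
                        tau=10.0, threshold=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    towards zero, is driven by the input current, and emits a spike
    (then resets) whenever it crosses the threshold."""
    v = 0.0
    spikes = []
    for t in range(steps):
        v += dt * (-v / tau + input_current(t))
        if v >= threshold:
            spikes.append(t)
            v = 0.0
    return spikes

if __name__ == "__main__":
    # Constant input current as a stand-in for synaptic input.
    print(simulate_lif_neuron(lambda t: 0.15))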
All three ways assume that all brain functions can be simulated on a silicon-based computer. Personally I do not see a reason why this should not be possible, but there are different opinions. As far as I know, Roger Penrose postulated a theory involving quantum effects in microtubules to argue why it should not be possible. But let us assume that it is possible and focus on the three ways to create strong AI:
The first way would already refute the statement from the introduction. We are able to program a genetic algorithm, and such an algorithm is clearly not intelligent. Evolution, as an unintelligent system, created a considerably more intelligent one, and we should be able to do something similar with a genetic algorithm. Of course this is not very satisfying: on the one hand the method needs far too much time, and on the other hand it would not lead to a better understanding of intelligence or strong AI. Therefore, the question should rather be "Can we understand strong AI/intelligence?" than "Can we create strong AI?".
The second way is no different. A one-to-one simulation should not be a problem in theory, as long as we know the underlying physical laws and all functions of a cell can be simulated on a computer. But blindly copying the human brain would not lead to a better understanding either, not to mention that it is far too complex to actually do.
The really desirable way is the third one, because it requires a deep understanding of intelligence and of the human brain in the first place. So the question should not be whether an (intelligent) system can create an intelligent system of at least the same level of intelligence. Much more interesting is the question whether an intelligent system can understand itself.
I would not deny that this is possible. As a small analogy: in a formal system that includes at least number theory you can express statements about the system itself; Gödel, for example, used this in his incompleteness theorems. In the same way I believe it is possible that human intelligence can make statements about itself and may therefore understand itself. However, this understanding will not take the form that we know for every single neuron exactly which task it has. Simply because we have a limited number of neurons, we cannot hold this knowledge for every single neuron in our heads at the same time (although we are able to "swap" knowledge out to books). It is much more likely that we will treat certain groups of cells as single units and link these units together.

It is similar with our understanding of computers. We do not necessarily know what a single transistor on a chip does, and it would not be of much use to print the whole circuit of the chip at the level of single transistors. However, if we merge several transistors into units we can describe them as gates, and several gates may be described as a memory cell. Only with such abstractions can we understand or construct the chip, and I think it will be similar with the human brain or an artificial intelligence. But we will not be able to jump directly from the lowest abstraction level (single cells or even below that) to the highest (for example visual perception). There are many more levels in between, and the task of finding these abstraction levels will take quite some time.
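To make this layering a bit more tangible, here is a small toy sketch in Python (my own illustration, not part of the argument): a `nand` function stands in for the lowest level, ordinary gates are built on top of it, and two cross-coupled NANDs form a one-bit memory cell. At each level we can forget the details of the level below.

```python
def nand(a, b):
    """Primitive building block; on a real chip this is itself a few transistors."""
    return 0 if (a and b) else 1

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

class SRLatch:
    """One-bit memory cell built from two cross-coupled NAND gates
    (inputs are active-low: set=0 stores 1, reset=0 stores 0)."""
    def __init__(self):
        self.q, self.q_bar = 0, 1

    def step(self, set_n, reset_n):
        # Iterate the feedback loop a few times until it settles.
        for _ in range(3):
            self.q = nand(set_n, self.q_bar)
            self.q_bar = nand(reset_n, self.q)
        return self.q

if __name__ == "__main__":
    latch = SRLatch()
    print(latch.step(0, 1))  # set   -> stores 1
    print(latch.step(1, 1))  # hold  -> still 1
    print(latch.step(1, 0))  # reset -> stores 0
```

Whoever uses the latch only needs to know "this stores one bit", not how the transistors inside the NANDs behave; my guess in this post is that understanding the brain will require finding a whole stack of such levels.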