Mr. Estrada argues that my basic position requires a strong differentiation between the technological and the cultural. This is, however, not what I intended to convey. Rather, my paper argues for a comparison of the plausibility of Vernor Vinge’s AI (artificial intelligence) and IA (intelligence amplification) hypotheses. In other words, I agree with much of what Mr. Estrada writes. We are, in many senses, “tools all the way down”. As for the nature of the mind, it is in a very profound sense extended to begin with. If an AI were forthcoming, it would in many senses contribute as an extended resource to the human mind.
My claim in the paper is not so much a comparison of the intrinsic nature of a biological mind with that of a simulated mind (which, as I think Mr. Estrada rightly points out, cannot justly be separated), but rather a claim about whether an IA or an AI explosion will take place sooner: in other words, about where the focal point of the intelligence explosion will lie, in the network itself (IA) or in identifiable components of it (AI).
The problem with the plausibility of the AI hypothesis is not that it would be impossible or somehow incompatible with IA. It is rather that we are not very likely to reach it before an IA explosion takes place. In addition to the complexity of the nervous system that can be postulated on the grounds of the Stanford experiment, the integration of nervous and extended processes is of a far higher order than in a simple sensory coupling or a feedback loop. The nervous system is dynamic to a far greater degree than any known computational system, as the massive literature on neuroplasticity demonstrates. The nervous system does not compute; synaptic connections grow and shrink. The brain is not a machine. It is a garden.
In light of what we now know about brain function, the hardware and the software of the nervous system are intrinsically intertwined. In other words, the brain is not a static processing and memory system in which information is stored, but rather a dynamic feedback mechanism that *produces* information by creating sufficiently complex connections. Once you add to this the ability to augment these connections by using the environment, there is a very profound sense in which human intelligence differs dramatically from what has been postulated as machine intelligence. My use of Searle as an example was simply meant to show that there are some dramatic difficulties in attributing intelligence to a machine (whereas attributing intelligence to a human-machine coupling is by no means problematic).
Incidentally, this is not to say there could not be an intelligent machine. I do not subscribe to the fundamental Searlean assumption that this would be philosophically impossible. Quite the opposite: if a machine is constructed that for all intents and purposes acts like an intelligent agent, it should be treated as an intelligent agent, even if this behavior came about by way of sufficiently complex computation. But this is not at all the point I am trying to make in the paper.
What I am arguing is that while an AI explosion may well be on the way, it is not very likely to happen very soon. But once real-time networking of human beings is achieved (which should happen within a few years), the IA explosion will take place. I have no doubt that this will also contribute to the AI explosion, whatever that will mean by then, which will in turn augment the capacity of the IA system, and so on and so forth.
To sum up, none of this is to say that human intelligence and machine intelligence should be intrinsically separated, or even that the simulation of intelligence is impossible. It is just to say that an intelligence explosion involving the real-time networking of existing human nervous systems is rather more likely to happen sooner than a sufficiently significant advance in computing technology.