When I was a postdoctoral fellow at Fermilab, Marvin Minsky, one of the founding fathers of strong AI and a signatory of the 1956 Dartmouth Proposal, came to give a colloquium. After he presented his arguments as to why machines would soon be thinking (this was 1986), I asked him whether, in that case, they would also develop mental pathologies such as psychosis and bipolar illness. His answer, to my amazement and incredulity, was a categorical “Yes!” Half-jokingly, I then asked if there would be machine therapists. His answer, again, was a categorical “Yes!” I guess these therapists would be specialized debuggers, trained in both programming and machine psychology.
One could argue, contra Minsky, that if we could reverse engineer the human brain to the point of being able to properly simulate it, we would understand quite precisely the chemical, genetic, and structural origins of such mental pathologies and could deal with them directly, creating perfectly healthy simulated minds. In fact, such medical applications are one of the main goals of replicating a brain inside computers. One would have a silicon-based laboratory to test all sorts of treatments and drugs without the need for human subjects. Of course, all this would depend on whether we would still be here by then.
Such sinister dreams of transhuman machines are the stuff of myths, at least for now. For one thing, Moore’s Law is not a law of Nature but simply reflects the fast-paced development of processing technologies, another tribute to human ingenuity. We should expect it to break down at some point, given physical limitations on computing power and miniaturization. However, if the myth were to turn real, we would have much to fear from these codes-that-write-better-codes digital entities. To what sort of moral principles would such unknown intelligences adhere? Would humans as a species become obsolete and thus expendable? Kurzweil and others believe so and see this as a good thing. In fact, as Kurzweil expressed in his book The Singularity Is Near, he can’t wait to become a machine-human hybrid. Others (presumably most in the medical and dental professions, athletes, bodybuilders, and the like) would not be so enthusiastic about letting go of our carbon carcasses. Pressing the issue a bit further, can we even understand a human brain without a human body? This separation may be unattainable, with body and brain so intertwined that it becomes meaningless to consider them separately. After all, a big chunk of the human brain (the same is true for other animals) is dedicated to regulating the body and the sensory apparatus. What would a brain be like without the grounding activity of making sure the body runs? Could a brain or intelligence exist only to process higher cognitive functions, a brain in a jar? And would a brain in a jar have any empathy with or understanding of physical beings?
~~The Island of Knowledge -by- Marcelo Gleiser