I recently watched an interesting documentary called “Transcendent Man,” based on the life and work of Ray Kurzweil. The documentary is an exploration of Kurzweil’s book, The Singularity Is Near. In this book, one I plan on reading next week, Kurzweil explores the next few decades and theorizes on what is to come. He relies on Moore’s Law, as well as other technological and evolutionary factors, to help him calculate the future. (Moore’s Law observes that the number of transistors on a chip, and with it computing capacity, doubles roughly every two years; it is a historical trend, not an iron rule.) To summarize, Kurzweil predicts that by roughly 2045 humanity will become indistinguishable from its technology, reaching a point of no return, a point Kurzweil calls the Singularity.
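To get a feel for what that doubling pace actually implies over the decades Kurzweil covers, here is a quick back-of-the-envelope sketch (my own illustration, not a calculation from the book):

```python
# Illustrative only: relative capacity if it doubles every two years,
# starting from 1x today. The exponent is years / doubling_period.
def capacity_after(years, doubling_period=2):
    """Relative capacity after `years`, starting from 1x today."""
    return 2 ** (years / doubling_period)

for years in (10, 20, 30):
    print(f"After {years} years: ~{capacity_after(years):,.0f}x today's capacity")
# 10 years -> 32x, 20 years -> 1,024x, 30 years -> 32,768x
```

The point is simply that exponential growth is deceptive: three decades at this pace is not three times the progress of one decade, but over a thousand times.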
As I watched this documentary I was completely intrigued. Kurzweil makes an effort, successful in my mind, to show us that the human body is little more than an organic computer running “software” that has been modified continuously over time by evolution. He explains that glitches in our code (DNA) during replication and evolution are the reason for many of the ailments we have today. He goes on to say that one day we will “correct the code” and solve the major diseases and mutations of our day. However, this is not the area I want to focus on.
Kurzweil theorizes that within the next few years we will develop the technology to map the human brain virtually down to the atom. In the process we will gain a more complete picture of the function of the brain and the interdependence of its parts. This, in time, will enable us to construct a software program that emulates the physical function of the human brain, ultimately ending with the transfer of a person’s essence (soul, spirit, being) into this “digital” brain. Theoretically, this would allow that person to live on indefinitely, assuming their essence is transferred intact rather than just their knowledge, memories, and so on.
This whole concept really got my gears turning about the implications of this potential future and its positive, or cataclysmic, impact on humanity. I then began to ponder what exactly makes a human being different from all other creatures. (This is an extremely slippery slope, and for the purposes of this blog I will attempt to remain as basic and scientific as possible.) I, of course, was reminded of two episodes of Star Trek that I subsequently went back and watched, because I think they add to the discussion in a very relevant way.
The first episode I watched was “The Measure of a Man” (TNG Season 2, Episode 9). In this episode Data, an android (not the phone OS, an actual cybernetic life form), is ruled the property of Starfleet, is stripped of his rights, and is ordered transferred so that he can be disassembled for study. Captain Picard takes up the battle to prove to Starfleet that Data is in fact a sentient life form with rights that cannot be taken away. In this episode sentience is defined by three criteria: intelligence, self-awareness (my place and existence in the world), and consciousness (what does my place mean?). Data is ultimately determined to be sentient, and all is well by the end of the episode.
I then took it a step further and watched “The Schizoid Man” (TNG Season 2, Episode 6). In this episode Data meets an “ancestor” (someone who knew and worked with Data’s creator, Dr. Soong), Dr. Graves, who is dying. Dr. Graves, at some point off camera, deactivates Data and transfers his entire essence into him so that he will not die. The attempt is successful, more or less, until Graves starts to exhibit erratic and violent behavior. At the end of the episode, unable to cope with the man he has become, Dr. Graves transfers his knowledge, but not his consciousness, into the Enterprise’s computer, saving Data.
If you have not watched these episodes then I advise you to go and watch them immediately. Both episodes helped me look at the potential benefits and dangers of what Kurzweil proposes for our future. If we create a machine, robot, or computer that conforms to all of the criteria for life, is it alive? If we create a machine that not only has intelligence (we can build computers many times more intelligent than ourselves) but also understands the world in which it exists and knows its function and place in that world, have we created life? Is it any more or less alive than, say, a human clone, or someone who is brain dead or in a persistent vegetative state (these are two different medical classifications but are lumped together for this essay)? I am not qualified to answer these questions, but the increasing rate of technological evolution will force us to face these issues sooner than we think. I have games sitting on my shelf right now that learn and adapt to how I play them. With my every choice, the game engine analyzes my strategy and probes any weaknesses it may exhibit. This was virtually impossible as little as ten years ago but is commonplace now. The AI (artificial intelligence) in some games can even anticipate your actions based on as little as how aggressively you walk, explore, or communicate while playing. Then at every encounter it learns your personality, forcing you to adapt as it adapts. This kind of AI exists right now; is the future described in Kurzweil’s book really that far off?
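The core of that adaptive behavior is simpler than it sounds. Here is a minimal toy sketch (my own illustration, not code from any actual game engine) of an opponent that tracks the player’s habits and counters the most frequent one:

```python
# Toy example of an adaptive opponent: it records the player's choices
# and picks a counter to whatever the player does most often.
from collections import Counter

class AdaptiveOpponent:
    def __init__(self):
        self.history = Counter()  # tally of observed player actions

    def observe(self, player_action):
        """Record one player choice ('attack', 'explore', 'defend', ...)."""
        self.history[player_action] += 1

    def respond(self):
        """Counter the player's most frequent behavior so far."""
        if not self.history:
            return "probe"  # no data yet: test the player's reactions
        dominant, _ = self.history.most_common(1)[0]
        counters = {"attack": "fortify", "explore": "ambush", "defend": "siege"}
        return counters.get(dominant, "probe")

ai = AdaptiveOpponent()
for move in ["attack", "attack", "explore"]:
    ai.observe(move)
print(ai.respond())  # the player mostly attacks, so the AI fortifies
```

Real game AI layers far more on top of this (probabilistic models, planning, scripted behaviors), but the loop is the same: observe, tally, adapt, repeat.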
The ultimate question of this essay is: where do we draw the line? For some the line will be an easy one to draw; for others there may not be one at all. I am not sure where I fall on this issue, but I do know one thing for sure: one day we will be forced to legally answer this question. When does an entity become more than just the sum of its parts and become sentient? After reading the book, I may revisit this issue, exploring the other side of the debate. What happens when we reach the Singularity? Will we co-exist in peace? Will we reach symbiosis with the AI? Or will the AI view us as a plague and work towards our eradication? (Some call it the Terminator threshold.) This is truly the “undiscovered country”. Here’s to the future…