Mary Wollstonecraft Shelley's Frankenstein is one of the great literary works of its time. It weighs the heart of man against his capacity for innovation, ingenuity, invention, and his ever-increasing hunger for knowledge. Over the last two hundred years Frankenstein has been remade, reworked, and republished under many different titles and in just as many different media. This is simply because the themes and motifs of the novel hold just as true today as they did in 1818. While there are several dominant and underlying themes in Shelley's book, I am going to concentrate on the advancement of man's scientific and technological capabilities. If Shelley were alive today, and able to understand the field of artificial intelligence, or AI, she might see a dramatic connection between her monster in Frankenstein and the prospect of the technological singularity: "The Singularity is the technological creation of smarter-than-human intelligence" (singinst.org). This is a point in time at which machines become so intelligent that they are capable of making smarter versions of themselves without human intervention. There is scholarly debate over whether, at that point, machines will see humanity as a threat and proceed accordingly, or simply see us as useless, which could quite possibly end the same way. I have said all that to say this: a proper examination of the overwhelming theme of ever more advanced technology in our lives raises the following question: when is enough enough? At what point do humans throw in the towel on our technology for fear that, even done with the best of intentions, our own greatest achievements could ultimately be the harbinger of our demise?
I am going to examine the possibility of achieving the technological singularity within our lifetime, the ever-growing pursuit of scientific knowledge, often without regard to its dangers, and the prospect of a society so inundated with technology that human interaction becomes a thing of the past.
Shelley's Frankenstein looks at the use of human advances in technology, knowledge, and understanding, in this case to give life to an inanimate body. This capability has the capacity for both good and evil depending on its possessor, but there is a deeper question within: can man act as God in giving life back to those who have lost it? This is an intriguing moral ambiguity; one would think that the ability to give life back would be something to be cherished. That view, however, overlooks the fact that part of what makes humans human is mortality and its inherent fear. If one can be brought back from the dead, perhaps repeatedly, then that is achieving a level of immortality. While you may die, your consciousness can be revived to live again, and this could have potentially disastrous side effects. If people no longer feared death or loss, their personalities would change. It would give the sense of ultimate power, of being almost invincible, and, to use the age-old adage, power corrupts and absolute power corrupts absolutely. Even stemming from a desire to do good, to help those who have lost loved ones, perhaps even children, mankind could end up using the device to perpetuate untold evils on the world: reviving some of history's most evil men, raising armies of "undead," so to speak. Shelley's examination of this moral question adds another level to the analysis of the knowledge and technology that continue to be injected into everyday life. It also raises the question of whether we ask ourselves these moral questions enough in modern times.
I doubt people living in the 19th century could have fathomed the level of technology that we would possess only two hundred years later. We currently have the ability to destroy the planet a few dozen times over, to send craft into deep space, to clone animals, and perhaps humans for that matter, and to create computers that can think critically, and possibly even to bestow machines with souls. Now, before you dismiss my assertions as a bad Syfy movie, allow me to elaborate. Part of the reason Frankenstein's monster was so rejected was the process by which he was given life. People considered him an abomination against God and man, something that should not exist. This is much the same as two of our modern-day scientific advances, cloning and artificial intelligence. We still live in a world steeped in religion and superstition, and a large number of people believe that any life form not created by a "god" cannot have what we refer to as a soul. The same was true of the monster: he was created by man, who is inherently flawed, and therefore was evil and lacking a soul. However, this argument, when applied to the idea of artificial intelligence being self-aware, is itself flawed. The soul is defined as "the immaterial essence, animating principle, or actuating cause of an individual life" (Merriam-Webster). There is nothing in this definition that requires a soul to be bestowed by a supernatural being. Our consciousness is what makes us who we are; it develops from a blank slate at birth, and our experiences, education, environment, and many other factors help it develop over the course of our entire lives. Many argue, however, that thinking is a function of man's immortal soul (Turing, 14). So, let us suppose for a moment that it is. Does that mean that only human beings can possess immortal souls? I do not believe it does.
Also, according to Russell Bjork, a reading of the original Hebrew Old Testament reveals that what is actually said is "man became an immortal soul," not that he was gifted one (Bjork, 97). However, this is not to say that I believe human beings should strive to create such a thing. I am simply saying that I do not agree with the fundamental argument against it: that such a creation would be without a soul and thus instantly "evil."
Now, this technological revolution we are currently experiencing is not going to stop any time soon. Over the last fifty years we have seen something along the lines of two hundred years of scientific and technological advancement, and according to Ray Kurzweil's Law of Accelerating Returns we will see 20,000 years of advancement in the 21st century (Kurzweil, 2001). We live in a world that is ever shrinking due to this technological explosion, and it is continuing to get smaller. We can use our cell phones to connect with people around the world, and with the newest phones you can talk face to face. This tech explosion is bringing us into a world where actual human interaction is becoming less and less common. With services like Facebook, MySpace, FaceTalk, instant messages, text messaging, and so on, people are relying more and more on technology to meet and interact with the real world. And that is where this takes a turn for the Syfy channel yet again. According to James John Bell, we may be experiencing life in a three-dimensional computing environment similar to that of The Matrix movies as early as 2030 (Bell 20, 2003). This, to me, is scary: existing in a world simulated by machines with little to no actual human interaction. The world is moving in this direction at a pace that was previously unheard of. Before accepting something like this we must carefully examine the possibility of being too reliant on machines. We as human beings evolved from groups that relied heavily on one another for simple survival, and now you can essentially live your life in solitary confinement, outside of society, and not really be any worse for wear. I, however, believe that becoming so reliant on technology will one day come back to haunt the human race.
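Kurzweil's 20,000-year figure is not an arbitrary flourish; it falls out of simple arithmetic once you assume the rate of progress itself doubles every decade. A minimal sketch of that arithmetic (the decade-end counting convention here is my own illustrative assumption about how the round number is reached, not a statement of Kurzweil's exact method):

```python
# Back-of-the-envelope check of the "20,000 years of progress in the
# 21st century" claim. Assumption (mine, for illustration): the rate of
# progress doubles every decade, with the year-2000 rate as baseline 1.

DECADES = 10          # the ten decades of the 21st century
YEARS_PER_DECADE = 10

# Count each decade at the rate it reaches by its end: the first decade
# runs at 2x the year-2000 rate, the second at 4x, ... the tenth at 1024x.
equivalent_years = sum(YEARS_PER_DECADE * 2**k for k in range(1, DECADES + 1))
print(equivalent_years)  # 20460 -- i.e., roughly the round figure of 20,000
```

The point of the sketch is only that exponential assumptions compound startlingly fast: ten doublings turn one century into roughly two hundred centuries of year-2000-rate progress.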
Right now there are moral arguments taking place over whether things like superintelligent artificial intelligence or human clones should ever exist. Both are subjects that should never be taken lightly, as each has the capacity to forever alter the way of life on this planet. With that said, there will always be those few people who will strive to do something just to see if they can, or in other words, do it in the pursuit of greater knowledge and achievement. This is one area in which Shelley really shines when examining the character and character flaws of Frankenstein. He pursued his goal with unrelenting dedication until he finally succeeded in creating his being. This, I fear, is the most dangerous aspect of pursuing both of the aforementioned lines of scientific study. The pursuit of greater knowledge is what has advanced the human race so very far in only the last couple hundred years. Some say that if it were not for the Dark Ages we would already be colonizing space, but one must be careful when the pursuit of knowledge ceases to be for the betterment of humanity and becomes about power, recognition, pride, or vanity. This is where modern-day Frankensteins come into play, though we now call them rogue scientists. They operate outside the law and any legal jurisdiction, advancing their individual fields without regard for the possible ramifications of their work. One such rogue scientist who popped into the limelight a few years back was Panayiotis Zavos. He claimed to have successfully cloned human beings but never offered much proof to back up these claims (Muehlenberg, 2009). He now claims that not only has he cloned humans but that more breakthroughs are on the way, and that there really is not anything anyone can do to stop them (Muehlenberg, 2009). This is where the advancement of science becomes terrifying.
It is one thing for a group of scientists, monitored by a civilian oversight group or a governmental body and working within the legal regulations of their field, to pursue these advances, but for one man or a small rogue element to do it is breathtaking. The potential pitfalls of pursuing such research should be enough on their own to prevent such atrocities from occurring, but the human thirst for knowledge is unquenchable, especially in this era of instant gratification. I fear that there is a possibility for the "oh my God, what have I done" feeling to come over these men who strive for great achievements at the cost of their own humanity. Hindsight is always 20/20, and we often do not have the ability to see all the ways something can go terribly wrong until it is far too late to do much of anything about it. Shelley says it perfectly: "I had worked hard for nearly two years, for the sole purpose of infusing life into an inanimate body. For this I had deprived myself of rest and health. I had desired it with an ardour that far exceeded moderation; but now that I had finished, the beauty of the dream vanished, and breathless horror and disgust filled my heart." (Shelley, 34)
Looking back at the themes I have chosen, there remain a few things to be said on each. Mankind will always thirst for greater knowledge, to build upon what previous generations have left behind and strive to leave a better place for their children, but there is a line we should not cross. I am a firm believer in the use of technology to advance the human race, but I fear that ultimately it could be our downfall. History is full of examples of technologically superior peoples being brought down by "inferior" enemies partly because of their reliance on their technology. But more than an outside or as-yet-unseen adversary coming in and taking over, we have to examine the possibility of destroying ourselves with our own advancement. Yes, it may seem like a bad science fiction movie, machines becoming self-aware and destroying humanity, but it is not an impossibility. As I said earlier, I do not agree with the argument that this will happen because man cannot bestow a soul on his creations or because anything man makes is inherently evil, but I do see the possibility of it arising from accidentally advancing technology too quickly without proper restrictions. If we gave birth to artificial intelligence without considering the possibility that it, like a child, must grow slowly, then we could end up looking down upon our creation with the same horror and disgust as Frankenstein. I am not so vain, however, as to deny the possibility that we may never achieve such fantastic things; there are those who argue that "no matter how far we advance technology a machine will never be a model or copy of a 'mind'" (Lucas, 14). Lastly, I touched upon the potential for rogue science to advance fields like human cloning without considering the ramifications of such work. There is a reason that most countries in the world have strict bans on human cloning, and it is not just the religious argument.
We have no idea what could come of humans giving life to other humans, and it could end horribly if not controlled carefully. I have no qualms with the pursuit of knowledge, but there is such a thing as dangerous knowledge, things we simply should not have power over. If we have the power to give life to other entities, does that mean we should also have the power to take life from them at will? The argument is there to be made, but that is a topic for another day.