This concept has a long, fascinating history. In the 1950s, John von Neumann provided one of the first answers; as his colleague Stanisław Ulam recounted in 1958, von Neumann noted that “the ever-accelerating progress of technology…gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” Von Neumann’s notion of “the Singularity” was prescient: it referred to the moment beyond which “technological progress will become incomprehensibly rapid and complicated”, outpacing the abilities of its makers (i.e., humans) and effectively taking on a life of its own.
In 1986, Vernor Vinge introduced the term “Technological Singularity” in his science fiction novel, “Marooned in Realtime”. In 1993, he developed this groundbreaking idea in his essay, “The Coming Technological Singularity”. His conceptualization is known as the “event horizon” thesis: the emergence of trans- or post-human minds will bring about a future more fantastical than anything we can imagine. He averred at the time: “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”
That was 27 years ago. (!)
Vinge continued: “I think it’s fair to call this event a singularity. It is a point where our models must be discarded and a new reality rules. As we move closer and closer to this point, it will loom vaster and vaster over human affairs until the notion becomes commonplace. Yet when it finally happens, it may still be a great surprise and a greater unknown.”
One of Vinge’s influences, I.J. Good, never used the term “Singularity”. However, Good referred to virtually the same thing, dubbing it an “intelligence explosion”. By this, he meant a positive feedback loop in which new (artificial) minds make technology that improves those same minds, a cycle which, once started, will rapidly surge upward and create super-intelligence. Such a cycle may be unstoppable…and irreversible. There are now concerns about passing this pivotal threshold, a kind of technological Rubicon.
More recently, Ray Kurzweil seized on the “intelligence explosion” hypothesis, conjecturing a more sweeping vision of the Technological Singularity. In his book, “The Singularity Is Near: When Humans Transcend Biology”, Kurzweil defined this singularity as “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.”
Transformed how? Well, therein lies the rub. Will the super-intelligence be benevolent or Machiavellian? That, Kurzweil suggests, is an open question. He noted: “Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself.”
It seems that whether the imminent “strong AI” is a good thing or a bad thing depends largely on what we do now, proactively, to ensure that, when it eventually happens, it will be the former and not the latter.