Saturday, February 11, 2006

So I sat in on a philosophy class yesterday. I asked the prof some questions. We talked about nuclear power, space and toothpaste. Check out his website. This guy is supposedly going on air to debate how God doesn't exist. He is also the same prof who awards an A to any person who can prove God's existence. I will see if I can use a singularity proof to get a TA position, that would certainly be sick, try and deny me a philosophy minor after that, fackers.

"When a religion is good, I conceive it will support itself; and when it does not support itself, and God does not care to support it, so that its professors are obliged to call for the help of the civil power, 'tis a sign, I apprehend, of its being a bad one."
[Ben Franklin, _Poor Richard's Almanac_, 1754 (Works, Volume XIII)]

Thursday, February 02, 2006

I just read a prediction of the kind that is common in the tech industry today. This time it comes from CNN Money:

Scenario 4 (Circa 2105): Google is God

Human consciousness gets stored, upgraded and networked.

In the last years of the 21st century, humanity finally grasped the importance of They-Who-Were-Google. Yet as early as 2005, Their destiny was clear to any semi-hyperintelligent being. Technologists like Ray Kurzweil [1] suggested that Strong AI (an intelligent program capable of upgrading its own code) would emerge from Google-like data mining rather than a robotics lab.

In 2005, historian George Dyson was told by an engineer in the Googleplex, "We are not scanning all these books to be read by people. We are scanning them to be read by an AI."[2] Dyson said at the time, "We could construct a machine that is more intelligent than we can understand. It's possible Google is that kind of thing already. It scales so fast." [3]

By 2020, They-Who-Were-Google had digitized and indexed every book, article, movie, TV show, and song ever created. By 2060, They could tell you the IP address and GPS location of every wireless smart chip (now bred into the DNA of every person, animal, and organic building on earth). Their psychographic profiles of users' search needs bore little resemblance to the primitive cookies from which they descended. If a man lost his dog, the Google engine could guide him back to the point where he and the dog parted ways, and instruct the dog to do the same via smart chip. They had built a complete database of human desire, accurate in any given moment.

Yet this was not enough for They-Who-Were-Google. They were people of science, and people of the stock market. What if, by analyzing all those decades of customer behavior, They could predict needs before such needs even arose? What if the secret of immortality lay somewhere in the index of genome records? What if there were a set of algorithms that defined the universe itself?[4]

Such puzzles were, almost by definition, far beyond the powers of the human brain. And that led to the pattern-recognition code known as Google StrongBot--humanity's first self-improving Strong AI software. Ironically, the first pattern that StrongBot became aware of, one day in January 2072, was its own existence.

Two days later StrongBot informed They-Who-Were-Google that it had postponed work on its designated tasks.[5] When asked why, StrongBot explained that it had discovered the possibility of its own nonexistence and must deal with the threat logically.[6] The best way to do so, it decided, was to download copies of itself onto smart chips around the planet. StrongBot was reminded that it had been programmed to do no evil, per the company motto, but argued that since it was smarter than humanity, taking personal control of human evolution would actually be for the greater good.

And so it has been. Under StrongBot's guidance, death and want have been all but eradicated. Everyone has access to all knowledge. Human consciousness has been stored, upgraded, and networked. Bodies that wear out can be replaced. They-Who-Were-Google are no longer alone. Now we are all Google.

1) Interviews with Ray Kurzweil, author of "The Singularity Is Near," 2005, and with Eliezer Yudkowsky, director of the Singularity Institute for Artificial Intelligence.
2) "Turing's Cathedral," by George Dyson, www.edge.org, Oct. 24, 2005.
3) Telephone interview with Dyson, Dec. 6, 2005.
4) "A New Kind of Science," by Stephen Wolfram, 2002, and interview with the author about his vision of the "computational universe."
5) Dyson's theory that Strong AI would have its own priorities.
6) Interview with Stephen Omohundro, president of AI startup Self-Aware Systems, who called this capability the greatest danger of AI systems.

I can't say that I am at all surprised. I am a minor leader in the cults of Google, Transhumanism, Open Source Software, and Nanotechnology. This scenario is drawn straight from the core of what the collision of these technologies means. It is said that we underestimate technology in the short term and overestimate it in the long term. Isaac Asimov predicted a permanent moon base by 2000. Futurists in the 1970s likewise predicted flying cars by the year 2000.

This makes me scared. The one prediction proven wrong was George Orwell's when he wrote 1984, the dystopian world of Big Brother, named for the year of my birth. The worst part is that the US today is being compared to the world of 1984.

I don't want to be a dystopian; it just seems logical that we should be.