Will There Be Gay Cyborgs?


I'm a Singularitarian (without the religious implications) who believes that the next phase of evolution will see technology transcend biology. In fact, there's really no clear-cut difference between the two.

In that mindset, I watched IBM's Watson play Jeopardy tonight against two top former human contestants. It was amazing how well the machine did. That led to a discussion about the eventual merger of human and machine. Some people are horrified by the idea. I find it cool as hell. We're already cyborgs anyway, what with artificial limbs and organs. But there is the question of what will be lost in the transition from minerals and goo to plastics and silicon, and whether it will be worth the gain.

I've been thinking about the merger of humans and machines since Robby the Robot appeared on screen in "Forbidden Planet"...when I was at the ripe age of 7. And almost all of the baby boomers remember the iconic warning of Robby's look-alike successor on "Lost In Space": "Danger, Will Robinson, danger!" Robby was like a super cool pet or a playmate you didn't have to feed or clean up after. That would change.

Robby's helpfulness would eventually take a sinister turn. By 1968 there was "2001: A Space Odyssey" with the cyclopean but warmly engaging artificial intelligence HAL, who by all accounts was not just super fast and smart, but sentient. With that miracle of self-awareness came danger to humans. The '70s saw "Battlestar Galactica" and its wars with rebellious robots. Then came "The Terminator" and the concept of Skynet, the evil intelligence that wanted to eliminate humans because they were an impediment and a disease, a premise "The Matrix" would later amplify.

It's notable that all through the various incarnations of Star Trek, the computers were never given full rein of the ship or of decision making except in the most dire of situations. Even though the series was set centuries in the future, computers were portrayed more or less as highly sophisticated tools tethered to the service of human needs, the exception being Mr. Data, who at times proved dangerous to the humans around him as well.

Isaac Asimov's famous Three Laws of Robotics from the early 1940s (later expanded to four), which attempted to mitigate these sorts of imagined dangers, are:

  1. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
  2. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  3. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First or Second Law.
  4. A robot must protect its own existence as long as such protection does not conflict with the First, Second, or Third Law.
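
In effect, the laws form a strict priority ordering: a lower-numbered law always overrides the ones below it. Here's a toy sketch of that ordering in Python -- purely illustrative, with made-up predicates standing in for moral judgments nobody actually knows how to compute:

    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_humanity: bool = False   # would the action harm humanity as a whole?
        harms_human: bool = False      # would it injure an individual human?
        disobeys_order: bool = False   # does it disobey a human order?
        order_conflicts: bool = False  # would obeying the order violate law 1 or 2?

    def permitted(action: Action) -> bool:
        """Check an action against the laws in strict priority order."""
        if action.harms_humanity:
            return False               # Law 1: never harm humanity
        if action.harms_human:
            return False               # Law 2: never injure a human being
        if action.disobeys_order and not action.order_conflicts:
            return False               # Law 3: obey, unless laws 1-2 forbid it
        return True                    # Law 4 only picks among permitted actions

    # Disobeying is allowed only when the order itself violates a higher law:
    print(permitted(Action(disobeys_order=True, order_conflicts=True)))  # True
    print(permitted(Action(disobeys_order=True)))                        # False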

People have been thinking about this robots-versus-humans question for a long time, and many of the ethical questions remain unresolved.

For example, say there was a sentient machine owned by a business that wanted to upgrade, and part of that process would mean turning the sentient machine off. The machine, realizing their intent, fires off emails to top lawyers to take the company to court to prevent its demise. Does the company own the "life" that the machine senses it has, or is the machine simply a slave with no rights?

In my opinion, the machine has every right to live. But others might argue otherwise.

The time to really put our shoulders to the wheel on these matters is right now. Because as science-fiction-like as it sounds, the reality of a conscious artificial intelligence is closer than you think.

Check out the Time article below.


2045: The Year Man Becomes Immortal

By Lev Grossman

On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I've Got a Secret. He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panelists -- they included a comedian and a former Miss America -- had to guess what it was.

On the show, the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil got $200.

Kurzweil then demonstrated the computer, which he built himself -- a desk-size affair with loudly clacking relays, hooked up to a typewriter. The panelists were pretty blasé about it; they were more impressed by Kurzweil's age than by anything he'd actually done. They were ready to move on to Mrs. Chester Loney of Rough and Ready, Calif., whose secret was that she'd been President Lyndon Johnson's first-grade teacher.

But Kurzweil would spend much of the rest of his career working out what his demonstration meant. Creating a work of art is one of those activities we reserve for humans and humans only. It's an act of self-expression; you're not supposed to be able to do it if you don't have a self. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence.

That was Kurzweil's real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we're approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity -- our bodies, our minds, our civilization -- will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.

Computers are getting faster. Everybody knows that. Also, computers are getting faster faster -- that is, the rate at which they're getting faster is increasing.

True? True.

So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness -- not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.

If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there's no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks to play Farmville.
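
One crude way to see why that handoff matters (my own toy numbers, nothing from Kurzweil or Grossman): compare a capability that grows by a fixed amount each year with one whose yearly gain is proportional to whatever level it has already reached.

    # Toy comparison (illustrative numbers only): steady human-driven
    # improvement vs. improvement whose rate scales with current capability,
    # i.e., a system that has taken over its own development.
    fixed = recursive = 1.0
    for year in range(1, 11):
        fixed += 0.5                  # fixed yearly increment: linear growth
        recursive += 0.5 * recursive  # gain proportional to level: exponential
        print(f"year {year:2d}: fixed = {fixed:4.1f}   recursive = {recursive:7.1f}")

After ten years the steady grower has reached 6; the self-improving one is pushing 58, and the gap widens every year.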

Probably. It's impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you'd be as smart as they would be. But there are a lot of theories about it. Maybe we'll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we'll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity.

The difficult thing to keep sight of when you're talking about the Singularity is that even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth. There's an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it's an idea that rewards sober, careful evaluation.

People are spending a lot of money trying to understand it. The three-year-old Singularity University, which offers inter-disciplinary courses of study for graduate students and executives, is hosted by NASA. Google was a founding sponsor; its CEO and co-founder Larry Page spoke there last year. People are attracted to the Singularity for the shock value, like an intellectual freak show, but they stay because there's more to it than they expected. And of course, in the event that it turns out to be real, it will be the most important thing to happen to human beings since the invention of language.

The Singularity isn't a wholly new idea, just newish. In 1965 the British mathematician I.J. Good described something he called an "intelligence explosion":

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

The word singularity is borrowed from astrophysics: it refers to a point in space-time -- for example, inside a black hole -- at which the rules of ordinary physics do not apply. In the 1980s the science-fiction novelist Vernor Vinge attached it to Good's intelligence-explosion scenario. At a NASA symposium in 1993, Vinge announced that "within 30 years, we will have the technological means to create super-human intelligence. Shortly after, the human era will be ended."

By that time Kurzweil was thinking about the Singularity too. He'd been busy since his appearance on I've Got a Secret. He'd made several fortunes as an engineer and inventor; he founded and then sold his first software company while he was still at MIT. He went on to build the first print-to-speech reading machine for the blind -- Stevie Wonder was customer No. 1 -- and made innovations in a range of technical fields, including music synthesizers and speech recognition. He holds 39 patents and 19 honorary doctorates. In 1999 President Bill Clinton awarded him the National Medal of Technology.

But Kurzweil was also pursuing a parallel career as a futurist: he has been publishing his thoughts about the future of human and machine-kind for 20 years, most recently in The Singularity Is Near, which was a best seller when it came out in 2005. A documentary by the same name, starring Kurzweil, Tony Robbins and Alan Dershowitz, among others, was released in January. (Kurzweil is actually the subject of two current documentaries. The other one, less authorized but more informative, is called Transcendent Man.) Bill Gates has called him "the best person I know at predicting the future of artificial intelligence."

In real life, the transcendent man is an unimposing figure who could pass for Woody Allen's even nerdier younger brother. Kurzweil grew up in Queens, N.Y., and you can still hear a trace of it in his voice. Now 62, he speaks with the soft, almost hypnotic calm of someone who gives 60 public lectures a year. As the Singularity's most visible champion, he has heard all the questions and faced down the incredulity many, many times before. He's good-natured about it. His manner is almost apologetic: I wish I could bring you less exciting news of the future, but I've looked at the numbers, and this is what they say, so what else can I tell you?

Kurzweil's interest in humanity's cyborganic destiny began about 1980 largely as a practical matter. He needed ways to measure and track the pace of technological progress. Even great inventions can fail if they arrive before their time, and he wanted to make sure that when he released his, the timing was right. "Even at that time, technology was moving quickly enough that the world was going to be different by the time you finished a project," he says. "So it's like skeet shooting -- you can't shoot at the target." He knew about Moore's law, of course, which states that the number of transistors you can put on a microchip doubles about every two years. It's a surprisingly reliable rule of thumb. Kurzweil tried plotting a slightly different curve: the change over time in the amount of computing power, measured in MIPS (millions of instructions per second), that you can buy for $1,000.

As it turned out, Kurzweil's numbers looked a lot like Moore's. They doubled every couple of years. Drawn as graphs, they both made exponential curves, with their value increasing by multiples of two instead of by regular increments in a straight line. The curves held eerily steady, even when Kurzweil extended his curves backward through the decades of pre-transistor computing technologies like relays and vacuum tubes, all the way back to 1900.

Kurzweil then ran the numbers on a whole bunch of other key technological indexes -- the falling cost of manufacturing transistors, the rising clock speed of microprocessors, the plummeting price of dynamic RAM. He looked even further afield at trends in biotech and beyond -- the falling cost of sequencing DNA and of wireless data service and the rising numbers of Internet hosts and nanotechnology patents. He kept finding the same thing: exponentially accelerating progress. "It's really amazing how smooth these trajectories are," he says. "Through thick and thin, war and peace, boom times and recessions." Kurzweil calls it the law of accelerating returns: technological progress happens exponentially, not linearly.
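
To make "exponential, not linear" concrete, here's a minimal sketch of the kind of curve Kurzweil plotted. The two-year doubling period matches the Moore's-law rule of thumb above, but the 1900 baseline value is my illustrative assumption, not his actual data:

    # Minimal sketch of the law of accelerating returns: a quantity that
    # doubles every fixed period grows exponentially, climbing by
    # multiples rather than by regular increments.
    DOUBLING_YEARS = 2.0  # Moore's-law-style doubling period

    def mips_per_1000_dollars(year, base_year=1900, base_mips=1e-9):
        """Computing power per $1,000, assuming steady doubling since base_year.
        The base_mips starting value is a made-up placeholder."""
        return base_mips * 2 ** ((year - base_year) / DOUBLING_YEARS)

    for year in (1960, 1980, 2000, 2020):
        print(f"{year}: {mips_per_1000_dollars(year):.3g} MIPS per $1,000")

Plotted on a log scale, that curve is a perfectly straight line -- which is why trajectories like these look so eerily smooth.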


Read the rest of the article
