Singularity or Not: Is It Time to Vote?

With artificial intelligence, we are summoning the demon.

Elon Musk, CEO of Tesla and founder and CEO of SpaceX

 

With my new interest in artificial intelligence (AI), I’m expanding my vocabulary. The most important new word is “Singularity.” It is a term both popularized and explained by inventor and futurist Ray Kurzweil in his 2005 book, The Singularity Is Near: When Humans Transcend Biology. Apparently this possibility has begun to create a lot of apprehension, and now that I have tuned in, I’m seeing frequent evidence of this in print.

In fact, on January 11, the Future of Life Institute published an open letter calling for “responsible oversight” to ensure that artificial intelligence “works with humanity’s best interest in mind.” The group was established by luminaries such as a co-founder of Skype and professors from Harvard, MIT, Boston University, and UC Santa Cruz. Its advisory board includes cosmologist Stephen Hawking and technology entrepreneur Elon Musk.

The Future of Life Institute also published a paper on “research priorities for robust and beneficial artificial intelligence.” Its eight single-spaced pages suggest that the very prominent signatories have looked at the demon and decided it is time to pause and reflect.

This is an interesting moment in view of all the excitement that has gone before.

Back in 1965, Gordon Moore, who would go on to co-found Intel, described the potential for exponential growth in computing power through integrated circuits. The idea, later known as Moore’s law, was that the number of components on a chip would double about every two years, a pace expected to hold until, say, about 2020. The Human Genome Project became a dramatic example of how conservative that kind of estimate can be. The project was funded in 1990 with the idea that it would take about 15 years to complete. The genome was essentially done by 2003, two years ahead of schedule.
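For readers who like arithmetic, here is a quick back-of-the-envelope sketch (my own illustration, not anything from Moore’s paper) of what doubling every two years adds up to between 1965 and 2020:

```python
# Back-of-the-envelope arithmetic: what "doubling every two years"
# implies from 1965 out to roughly 2020. The dates come from the
# paragraph above; nothing here is from Moore's paper itself.

start_year = 1965
end_year = 2020
doubling_period = 2  # years per doubling

doublings = (end_year - start_year) / doubling_period
growth_factor = 2 ** doublings

print(f"{doublings:.1f} doublings -> growth by a factor of about {growth_factor:,.0f}")
# prints: 27.5 doublings -> growth by a factor of about 189,812,531
```

Fifty-five years of that compounding multiplies the starting point by a factor of nearly 190 million, which is why the word “exponential” keeps coming up.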

Since then, ideas about all that artificial intelligence can accomplish have gone kind of ballistic. In the January 11 New York Times Magazine, for example, there was an article about a Princeton neuroscientist who is working to map all 100 trillion connections between the neurons of the human brain. The goal is “to capture a person’s very essence: every memory, every skill, every passion.” Captured as data, our consciousness could endure forever.

I may sound as though I know what I’m talking about, but I don’t. And truth to tell, a world profoundly altered by artificial intelligence is not that appealing to me. I am of a different era.

To explain this, I must go back to an evening in about A.D. 1966. A little trailer had arrived on my university campus, and those of us who were interested were invited to sign up for an experience with this new thing called a computer. I remember it being a rather weird, uncomfortable experience. It reminded me of the hour I spent with a genetics tutor up in an attic somewhere on campus where I was sure I saw a fetus in a jar.

I don’t think there was anyone in the trailer but me and the young man managing it. I don’t even remember if there was a keyboard, but there was some 0011 stuff around: coding, right? It didn’t resonate with me at all, and at some point, the young man approached me and said something like, “I think you ought to leave.”

It may be that, because of my blog and self-publishing, I now know more about working with a computer than many people my age. Still, I think of the computer as a tool, and I am not particularly interested in exploring all the new capacity that seems to compound with every update. I’m also kind of human-focused, so a couple of ironies caught my eye in this most recent material about what could come to pass through artificial intelligence.

The big concern is that “superintelligences” might be created that begin to improvise, so to speak, and fail to work our will. Ray Kurzweil recommended early on that the technology be invested with our values, which could include moral codes, ethics, and specifically things like tolerance and respect for diversity. That would be an improvement over some of us, wouldn’t it?

The real nightmare, however, seems to be that the “lethal autonomous weapons” already under development might fail to comply with humanitarian law and set off accidental battles or wars. As I read that, I sat there thinking, “You mean to tell me that among these individuals working on superintelligence that could cause humans literally to transcend the ‘limitations of our biological bodies and brains,’ as Kurzweil put it, there is the assumption that we will always need lethal weapons, that there will always be war? Why don’t they insert peaceful coexistence as a value into every single program?”

Ah, well. One thing we have to remember is that artificial intelligence continues to advance at an exponential rate because it makes money, and that can happen only if markets also expand at an exponential rate. So it is not only every organization but also each of us as individuals financing it through our own purchases. This technology is rapidly and permanently displacing human workers, and it may be getting in the way of human creativity. I guess I’m a humanist, and this concerns me.

And now a final ironic detail. About a month ago, Stanford University, the site of my trailer experience all those decades ago, announced that it is launching a century-long study of the effects of artificial intelligence. The study is being funded by Eric Horvitz, a Microsoft executive who has served as president of the Association for the Advancement of Artificial Intelligence. In his framing of the study, Horvitz cited the overarching concern that the machines of the Singularity might one day actually become conscious, just like we are.

The first report from Stanford’s One Hundred Year Study on Artificial Intelligence is due at the end of this year. I’ll be watching for it, and I’ll share.

 
