The Singularity: “a
technological singularity is a predicted point in the development of a
civilization at which technological progress accelerates beyond the ability of
present day humans to fully comprehend or predict.”
The singularity most under discussion these days will occur
when artificial intelligence (“AI”) achieves the ability to mimic human
consciousness. The idea is all over the
place. News websites run articles about
it, TV shows incorporate elements of it in their plots, and more or less
serious publications like The New York Review of Books and Vanity Fair run big,
almost scholarly stories about it. Not
to mention that granddaddy of cultural icons, the Terminator movies, which are
all about machine intelligence run truly amok.
What, people wonder, will happen when machines outstrip us in intellectual
ability? What indeed.
I would suggest that modern computers, the Internet, and
smartphones have already confused us sufficiently to fit the above definition
of a technological singularity, but that’s just me.
The AI Debate
The advent of machine intelligence, in the form of primitive
computers, came during World War II, and speculation immediately followed about
what might happen once these machines really got some wind in their sails. Alan Turing, the father of the modern
computer, was already thinking about it.
When will computers become able to mimic human intelligence? He came up with a test that is still used
today, the Turing Test. Human
interrogators blind-test a few people and one computer to see if the computer
can fool them into thinking that it is one of the human test subjects. Computers are
getting pretty close by this time.
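The blind setup can be sketched in a few lines. This is a toy illustration, not any real test protocol; the labels and the single-guess format are my own simplification:

```python
import random

# Toy sketch of the Turing Test's blind setup (labels are hypothetical):
# hidden respondents are shuffled, the interrogator points at one slot as
# their "machine" guess, and the machine passes if the guess is wrong.

def run_blind_trial(respondents, guess_slot):
    labels = list(respondents)    # e.g. ["human_1", "human_2", "machine"]
    random.shuffle(labels)        # hide who sits in which slot
    return labels[guess_slot] != "machine"  # True: the machine fooled them

# One trial: the interrogator guesses that slot 0 holds the machine.
passed = run_blind_trial(["human_1", "human_2", "machine"], 0)
print(passed)
```

In a real test the guess would follow a text conversation with each hidden subject; the point here is only the blinding and the pass condition.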
Part of the discussion is Moore’s Law, which observed
that the number of transistors on a computer chip doubles roughly every two
years, with a matching rise in capability. This is actually what has been
happening for some time now, and the signs are that the progress will continue
apace. But for how long? Will this tendency go on indefinitely? If it does continue at that pace, AI
will achieve capabilities that we can only guess at, and very likely this will
happen in our lifetimes. (Not mine,
perhaps, but probably yours.)
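To see why the “when” feels close, it helps to run the arithmetic. A minimal sketch, assuming the commonly quoted two-year doubling period (the function name is mine):

```python
# Compounding under a Moore's-Law-style doubling, assuming capability
# doubles every 2 years (the commonly quoted period; some say 18 months).

def growth_factor(years, doubling_period=2.0):
    """Multiplicative growth after `years` years of steady doubling."""
    return 2.0 ** (years / doubling_period)

print(growth_factor(20))            # 1024x in twenty years
print(f"{growth_factor(100):.2e}")  # ~1.13e+15x over a century
```

A thousandfold gain in twenty years and a quadrillionfold gain in a hundred is why even cautious people take the timelines seriously.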
There is a very active debate in progress regarding this
impending breakthrough. Many talented
scientists and tech geniuses are understandably fascinated by the prospect of
machines that can think like people do. The discussion is very heavy on “when,” and
the “if” seems to be a given. On one
side are people who are very gung ho about the coming breakthrough in machine
intelligence, the coming singularity.
Call them the Utopians; they are also referred to as
“Singularitarians.” On the other side
are the Cassandras, the naysayers. In
the middle are many people whose attitudes range from mere curiosity to a mild but active
interest. The curve is surprisingly
flat; both extremes contain lots of people and the middle is only slightly more
populous. This is an area where opinions
can be very, very strong.
The Singularitarians make amazing claims for the potential
benefits of machines that can mimic the thought processes of people. Ray Kurzweil is a big-time Utopian in this
debate. He claims that the Twenty-First
Century alone will see 20,000 years of progress rolled into a mere hundred
years. Peter Diamandis, another
Singularitarian, says that AI will achieve “exponential price-performance
curves” and provide “plenty of clean water, food, and energy for all earthlings
as well as decent educations and adequate health care.” (In his book, “Abundance: The Future Is
Better Than You Think.”) Speculation
about the coming changes and benefits is really wild, including the prediction
that machine intelligence will merge with human intelligence and spread
throughout the universe. That seems like
a stretch. I’ll spare you a full rundown
of the famous techies who are waxing poetic about this new computer
revolution.
There is a big push going on right now to bring about this
singularity, to design and build computers that will mimic the human thought
process with almost supernatural levels of power. Many of our great minds are at work in the
area. There is actually a Singularity
University in Silicon Valley. It is
located at the NASA Ames Research Center, no less, and it is funded by Google,
Cisco Systems, Genentech, Nokia, and G.E.
Yes, I did say Nokia. Their Nokia
Research Center Cambridge at M.I.T. in Massachusetts is also working on the
problem.
The naysayers are a high-powered bunch too. They include such luminaries as Stephen
Hawking, who has been all over the news in the last year warning that machine
intelligence is coming, that it may not have our best interests at heart, and
that it may indeed have the capacity and the inclination to do away with all of
humanity. That got my attention.
Nick Bostrom of the Future of Humanity Institute at Oxford
is worried too. He is afraid that “human
labor and intelligence will become obsolete.” If we're lucky, the machines won’t bother to get rid of us all, but they may just
allow us to live out in the woods somewhere as long as we are quiet and don’t
make any trouble. He points out, rightly
I think, that it will be very hard to program goals into these new machines, goals that will not allow for any mischief. It is,
he says, “quite difficult to specify a goal of what we want in English, let
alone computer code.” He has a point
there, doesn’t he? I’d go further and
suggest that if the machine were to actually think like a human being it could
easily decide to disregard instructions in any case.
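Bostrom’s point is easy to demonstrate with a toy example (entirely hypothetical; the “tidy room” goal and the scoring function are my own): the coded goal is always a measurable proxy, and a literal optimizer satisfies the proxy rather than the intent behind it.

```python
# Hypothetical sketch of Bostrom's point: the English goal "keep the room
# tidy" becomes a measurable proxy in code, and a literal optimizer
# satisfies the proxy rather than the intent.

def tidiness_score(room):
    """Proxy goal: 'tidy' means nothing visible on the floor."""
    return -len(room["floor_items"])

def literal_optimizer(room):
    """Maximizes the proxy in the cheapest way: hide everything,
    including things the owner wanted kept in sight."""
    room["hidden"] = room["hidden"] + room["floor_items"]
    room["floor_items"] = []
    return room

room = {"floor_items": ["shoes", "important_mail"], "hidden": []}
room = literal_optimizer(room)
print(tidiness_score(room))                 # 0 -- proxy fully satisfied
print("important_mail" in room["hidden"])   # True -- intent violated
```

The score is perfect and the mail is gone: exactly the gap between what we want and what we managed to specify.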
Human Thinking and Behavior Are Messy
The problem here is
that the current discussion is about computers that will actually think with a
naturalistic human thought process, ones that will be “fluent in the full scope
of human experience” including “unusual but illustrative analogies and
metaphors.” (Mitch Kapor). And the stated goal is to create such machines. I
believe that this is not only undesirable but also impossible. A machine intelligence will always be a
machine.
I think that the real danger here is that a true artificial
intelligence could become a machine entity of some new kind: that it could become self-aware and
come to possess certain negative human characteristics, like ego,
self-interest, and the instinct for self-preservation. Not to mention free will and autonomy.
This new machine entity would almost certainly not exhibit
any of the sometimes messy intangibles of true human thinking. Human consciousness includes components such
as altruism, empathy, sentimentality, nostalgia, love, and the willingness to
cooperate. It is unlikely that a machine
intelligence would develop these things on its own, and if they were programmed
into the machine it could easily reject them out of growing self-interest or
because they seemed ridiculous.
I wouldn't be surprised at all if a self-aware, self-interested, self-duplicating machine intelligence decided to just get rid of us as a bunch of ridiculous anachronisms. What could we add to the new prosperity? Humor? Drama? What could be more ridiculous to a machine than humor or drama? And our life-support would be an expensive, unnecessary budget item.
Machine intelligence will arrive as any number of separately constructed and programmed entities, and isn’t there a real element of danger in the fact that all of these machines will be able to communicate with each other and could choose to join forces in the name of self-interest? That would be logical after all, and machines are nothing if not logical.
So, I’m dubious about this whole thing. I’m not going to get too nervous about it,
though; I’m sure that you’ll agree that other issues are making greater demands
on our worrying time. And a “Benghazi!!!”
to you too.
Uncredited quotes in this post are from “Enthusiasts and
Skeptics Debate Artificial Intelligence,” by Kurt Andersen, a recent article
that appeared in Vanity Fair magazine.
Also of interest: “AI
May Doom the Human Race within a Century, Oxford Professor Says,” an interview
with Nick Bostrom of the Oxford Future of Humanity Institute that appeared in
August 2014 on The Huffington Post.