AIs and the Impending Singularity

Superintelligent AIs are coming!  They’ll far outstrip anything mere mortal minds can comprehend, and either usher us into a utopia or negligently crush us on their way to turning the planet into a giant computerized doughnut.

Or maybe not.  Despite the recent uptick in news on the subject, I think the focus is misplaced.

Let’s start with the obvious problem: Despite decades of optimism, and indeed, decades of surprising success, Moore’s Law has fallen.  The writing has been on the wall since Intel came out with its Core 2 Duo line, which, instead of doubling single-threaded performance to keep up with market expectations, merely doubled the number of cores.  Unless some startling new technological leaps are made (a true general-purpose quantum computer, for example), the future of computation lies in refinement, optimization, and more efficient usage.  As such, the hard take-off of a transcendent AI is presently on pause.

The second problem with AI is that intelligence is difficult to measure or define, so I doubt it’s clear to AI researchers what their goal actually is.  Looking around for examples, I see that SATs do a poor job of predicting success in college, the IQ test may or may not be meaningful, and the ‘absent-minded professor’ stereotype works because we all understand that some people can be both brilliant and idiotic.

Science fiction writers have been excited about true, intelligent AI ever since a room full of human computers could be replaced by a machine, but the kind of ‘thinking’ being done is very different indeed, and there are numerous examples of skills and analyses that are mundane for humans and next to impossible for computers.  Worse than that for AI hopes, even when tasks are precisely and mathematically defined, there’s a whole class of problems (NP-complete) where even the best algorithms admit defeat and must fall back on the fuzzy heuristics and error tolerances we’re told are human failings.  With this evidence that computers don’t think the way we do, and absent a cohesive theory as to why these differences are irrelevant to the AI goal, I don’t expect a conscious computer program.
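To make that NP-complete point concrete, here is a minimal sketch (mine, not part of the original argument) of a greedy nearest-neighbour heuristic for the travelling salesman problem: it runs quickly, but the tour it produces is merely ‘pretty good’ rather than provably optimal.

```python
import math
import random

def nearest_neighbour_tour(points):
    """Greedy heuristic for the (NP-hard) travelling salesman problem.

    Always hops to the closest unvisited city.  Fast, but the tour it
    returns is only approximately optimal -- the same kind of fuzzy
    shortcut humans are accused of relying on.
    """
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

if __name__ == "__main__":
    random.seed(42)
    cities = [(random.random(), random.random()) for _ in range(20)]
    print(nearest_neighbour_tour(cities))
```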

Some futurists have (somewhat defensively, in my opinion) moved on to considering the possibility of AI by simulating neurons.  The complexity of biology ranges from ‘known unknown’ to ‘depressingly well known’, so obviously there’s a lot of computational overhead in this approach.  Also, simulated brains don’t give any particular hope of AIs being an improvement on, or even competitive with, the 7ish billion mushy versions available for the picking (please check minimum wage and/or anti-slavery laws in your area for full pricing details).
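For a rough sense of that overhead (my illustration, with toy numbers), even a drastically simplified leaky integrate-and-fire neuron costs a handful of floating-point operations per simulated millisecond, and a human brain has on the order of 86 billion neurons.

```python
def simulate_lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                        v_thresh=-50.0, v_reset=-65.0):
    """Leaky integrate-and-fire neuron: about as simple as neuron models get.

    Even this toy costs several floating-point operations per neuron per
    simulated millisecond; multiply by tens of billions of neurons and the
    computational overhead mentioned above adds up fast.
    """
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += (dt / tau) * (v_rest - v) + i_in   # leak toward rest, add input
        if v >= v_thresh:                        # threshold crossed: spike
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

if __name__ == "__main__":
    print(simulate_lif_neuron([1.5] * 100))  # constant drive for 100 ms
```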

Of course, brute-forcing problems with available processing power has led to some pretty impressive simulations of human intelligence – we do have Siri and the other digital assistants, prototype self-driving cars, and cameras that will occasionally find all the faces in the frame!  While many of these achievements are built on machine-learning techniques, a true AI must not just improve solutions to problems, but improve the procedure by which solutions are obtained – self-improvement is meta-machine-learning.
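One toy way to see that distinction (my framing, not the post’s): an inner loop that improves a candidate solution is ordinary machine learning, while an outer loop that rewrites how the inner loop searches is the ‘meta’ step.

```python
import random

def improve_solution(x, step):
    """Inner loop: ordinary optimization -- nudge the solution downhill."""
    f = lambda v: (v - 3.0) ** 2              # toy objective, minimum at 3
    candidate = x + random.uniform(-step, step)
    return candidate if f(candidate) < f(x) else x

def improve_procedure(step, accepted, attempts):
    """Outer loop: the 'meta' step -- adjust how the inner loop searches.

    If few candidates were accepted, the search procedure itself is tuned
    (the step shrinks); this is improving the procedure by which solutions
    are obtained, not just a single solution.
    """
    return step * 0.5 if accepted / attempts < 0.2 else step * 1.1

if __name__ == "__main__":
    random.seed(0)
    x, step = 10.0, 4.0
    for _ in range(20):
        accepted = 0
        for _ in range(10):
            new_x = improve_solution(x, step)
            accepted += new_x != x
            x = new_x
        step = improve_procedure(step, accepted, 10)
    print(round(x, 3), round(step, 3))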

Even if I thought AI researchers had a clear idea about what their hypothetical self-improving programs should be maximizing, it’s not clear how dangerous or surprising this self-improvement could be.  The industrial/scientific revolution was built on a transition from trial and error to the scientific method.  This has worked really well, but there are limitations in both the method (the scientific process can only disprove a theory, never actually prove one) and the universe (see again, the end of Moore’s Law) that continue to make scientific advances and better theories difficult to obtain.  Given these external limitations, and absent some superior method as far beyond science as science is beyond trial and error, yet somehow incomprehensible to mere humans, I’m unconvinced that a sufficiently powerful AI can actually outstrip humanity’s ability to keep up.

All that being said, we humans are pretty creative, and as such we absolutely, positively do NOT need to develop an AI or summon any digital demons to destroy ourselves.  Dumb expert systems will do that just fine, because the danger lies not in the kinds of things we create but in what we let those systems control.  An appropriately lethal example is the (hopefully ironically named) Skynet program, an expert system designed to detect terrorists from travel and communication data.  Being a suspected terrorist in Pakistan is not good for one’s life expectancy.  Now, if the system’s output is appropriately checked – by humans, and damn the inefficiency – this is possibly creepy for a bunch of reasons but not the end of the world.  What we need to fear is an expert system accelerating what would otherwise be called ‘human error’ past the point where we can stop or reverse it in time – one Flash Crash feedback loop in a system controlling weapons or medical procedures, and Skynet stops being a joke.
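To see how quickly an automated feedback loop can outrun human oversight, here is a deliberately crude simulation (all numbers invented for illustration): the error compounds every few milliseconds, while the first human check arrives seconds later.

```python
def runaway_feedback(initial_error=0.01, gain=1.05, machine_step_ms=5,
                     human_review_ms=2000):
    """Toy model of an automated feedback loop amplifying a small error.

    Every machine cycle the error is multiplied by `gain` (two automated
    systems over-reacting to each other); the loop only stops when the
    first human review window arrives.  All numbers are illustrative.
    """
    error, elapsed = initial_error, 0
    while elapsed < human_review_ms:
        error *= gain
        elapsed += machine_step_ms
    return error

if __name__ == "__main__":
    final = runaway_feedback()
    print(f"a 0.01 error grows to roughly {final:,.0f} before the first human check")
```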

Is this worth concern and attention?  Yes.  But confusing these risks with HAL 9000 and time-traveling Schwarzeneggers is a public disservice that distracts from the dangers of the networked world we live in today.
