Strong AI & Existential Risks


There has been a recrudescence of hysterical talk about Strong Artificial Intelligence (AI) lately.

Strong AI is artificial intelligence matching and eventually going beyond the full human cognitive capacity. Weak AI, by contrast, is the replication of some facets of human cognition: face recognition, voice recognition, pattern matching, etc.

The ultimate goal of AI is to get to what researchers call Artificial General Intelligence, i.e. an AI that could duplicate human cognitive capacity in every aspect, including creativity and social behaviour.

Today, what we have is weak AI. For some tasks, though, weak AI already performs better than humans. For instance, the Bing Image team has observed that for classifying images into a large number of categories (thousands), AI performed better (made fewer errors) than its human counterparts. For a small number of categories, humans are still better. IBM Deep Blue was able to beat the world chess champion, IBM Watson is able to win at Jeopardy, etc.

But we are still far from Artificial General Intelligence. Machines lack social aptitudes, creativity, etc. Nevertheless, they are evolving at a rapid pace.


You might have read about the triumvirate of Elon Musk, Stephen Hawking and Bill Gates warning the world of the danger of strong AI.

Those declarations made quite a bit of noise, as those three gentlemen are bright, sober people you do not see wearing tinfoil hats on national TV. They are also superstars in their fields.

Although their arguments are valid, I found the entire line of thought a bit too vague and unfocused to be compelling.

I compare this to when Hawking warned us to tone down the volume of our space probe messages so as not to attract malevolent aliens. There was some merit to the argument, I suppose, but in my opinion it wasn't well developed.

For strong AI, I found a much more solid argument at TED.

Ah, TED… What did I do before I could get my 20-minute fixes with you? Thank you, Chris Anderson!

Anyway, there was Nick Bostrom, a Swedish philosopher who came to discuss What happens when our computers get smarter than we are?

Right off the bat, Bostrom makes you think. He shows a photo of Milton from Office Space and mentions that this is "the normal way things are"; you can't help but reflect on a few things.

The current office work landscape is transitional. There have been office workers since the late 19th / early 20th century, as Kafka depicted so vividly. But the only constant in office work has been change.

In Kafka's time, "computers" meant people carrying out computations!

Early in my father's banking career, he spent the last Thursday night of every month at the office, along with a bunch of other workers, computing the interest on the bank accounts! They were using abacuses and paper since they didn't have calculators yet. You can imagine the constraints that put on the number of products the banks were able to sell back in those days.

I've been in IT for 20 years, and about every 3 years my day-to-day job is redefined. I'm still a solution architect, but the way I do my work, the way I spend my time, is different. IT is of course extreme in that respect, but the point is that office work has been changing since it started.

This is what you think about when you look at the photo Bostrom showed us. This is what office work looked like in the 1990s, and it is already an obsolete model.

Bostrom's point was also that we can't take today's office reality as an anchor for how things will be in the future. He then goes on to define existential risks (risks that, if realised, would cause humanity's extinction or a drastic drop in the quality of life of human beings) and how Strong AI could pose one. He spends a little time debunking typical arguments about how we could easily contain a strong AI. You can tell it is a simplification of his philosophical work for a big audience.

All of this is done in a very structured and compelling way.

Part of his argument resides in the observation that in the evolution of AI from Weak AI to Artificial General Intelligence to Strong AI, there is no train stop. Once AI reaches our level, it will be able to evolve on its own, without the help of human researchers. It will continue on to a Superintelligence, an AI more capable than humans in most ways.

His conclusion isn't that we should burn our computers and go back to computing on paper, but that we should prepare for Strong AI and work hard on how we will instruct it. One can't help thinking back to Isaac Asimov's Three Laws of Robotics, which were created, in the fictional world, for the exact same purpose.

He casts Strong AI as an optimization machine, which is how we defined Machine Learning at the end of a previous post. His argument is that since AI will optimize whatever problem we give it, we should think hard about how we define the problem so that we don't end up being optimized out!
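To make that "optimization machine" framing concrete, here is a minimal sketch, entirely my own illustration (the objective functions, the cost coefficient and the candidate plans are made up, not taken from Bostrom's talk). The optimizer returns whatever scores highest on the objective as written, so leaving out the term we actually care about yields a plan that consumes everything it can:

```python
# Toy sketch (my illustration, not Bostrom's): an optimizer pursues the
# objective exactly as written, not as intended.

def naive_objective(resources_used: float) -> float:
    # We only reward raw output; nothing penalises consuming resources.
    return 10.0 * resources_used

def safer_objective(resources_used: float) -> float:
    # Same output term, plus the cost term we actually care about.
    return 10.0 * resources_used - 0.00001 * resources_used ** 2

# Candidate plans: consume 0 to 1,000,000 units in steps of 1,000.
plans = [i * 1_000.0 for i in range(1_001)]

best_naive = max(plans, key=naive_objective)
best_safer = max(plans, key=safer_objective)

print(f"Naive objective -> consume {best_naive:,.0f} units")  # uses everything available
print(f"Safer objective -> consume {best_safer:,.0f} units")  # settles on a finite optimum
```

The point is not the toy numbers but the behaviour: the machine optimizes the problem we defined, not the problem we meant.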

Nick Bostrom actually joined Elon Musk, Stephen Hawking and others in signing the open letter of the Future of Life Institute. That institute is doing exactly what he proposed in the TED talk: research on how to mitigate the existential risk posed by Strong AI.

If you are interested in his line of thought, you can read his book Superintelligence: Paths, Dangers, Strategies, where those arguments are developed more fully.

As mentioned above, I found this talk gives a much better shape to the "existential risk posed by superintelligence" argument we hear these days.

Personally, I remain optimistic regarding the existential risk posed by Strong AI, probably because I remain pessimistic about its realisation in the near future. I think our lives are going to be transformed by Artificial Intelligence. Superintelligence might not occur in my lifetime, but pervasive AI definitely will. It will assist us in many tasks and will gradually take over others. For instance, you can see with the likes of Skype Translator that translation will, in time, be done largely by machines.

Projections predict a huge shortage of software engineers. For me this is an illusion. What it means is that tasks currently requiring software engineering skills will become more and more common in the near future. Since there aren't enough software engineers around to do them, we will adapt the technology so that people with moderate computer literacy can perform them. Look at the likes of Power BI: it empowers people with moderate computer skills to do analysis previously requiring BI experts. Cortana Analytics Suite promises to take it even further.

I think we'll integrate computers and AI more and more into our lives and our work. They will become invisible and part of us.

I think there are three possible outcomes of Strong AI:

  1. We are doomed: a Terminator 2 type of outcome
  2. We follow Nick Bostrom and make sure Strong AI won’t cause our downfall
  3. We integrate with AI in such a way there won’t be a revolution but just a transition

Intuitively, I would favor the third option as the most likely. It's a sort of transhumanism, where humanity gradually integrates different technologies to enhance our capacities and condition. Humanity would transition to a sort of new species.

It all depends on how fast Strong AI arrives, I suppose.

OK, enough science-fiction speculation for today! AI today is weak AI, getting better than us at very specific tasks; otherwise, we still rule!

Watch the TED talk, it’s worth it!
