Team Human vs. Team Watson Round III

Well, it seems the machines have won. Select your pod early; you’ll want a good view of the energy-harvesting machines.

We were talking about Watson around the ol’ AI research lab today and someone pointed out that Watson is yet another highly tailored solution to a particular problem, just like Deep Blue (click it, it’s Arcade Fire, just CLICK IT) was for chess. It uses a lot of brute force and some reasoning, but it’s still not solving the same problem humans are, and the domain is fairly restricted.

Now the interesting difference is that whereas chess is a deterministic game where you can search for an optimal strategy, Jeopardy! has layers of uncertainty hidden behind human language and the behaviour of the other players. So while it’s not a very realistic setting for general AI, and doesn’t claim to be, it has stepped over an important threshold: from deterministic, logic-based problems to ones that require statistics and reasoning under uncertainty.
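To make that contrast concrete, here’s a minimal sketch (my own illustration, not how Watson actually decides) of the expected-value reasoning a Jeopardy! player faces: with only a noisy confidence estimate that its candidate answer is right, buzzing in is worthwhile only when the expected gain outweighs the expected loss.

```python
def should_buzz(confidence: float, clue_value: int) -> bool:
    """Decide whether to buzz in on a clue.

    A correct response wins clue_value; a wrong one loses it.
    Expected value of buzzing is p*v - (1-p)*v, which is positive
    exactly when confidence > 0.5 (ignoring game-state strategy).
    """
    expected_gain = confidence * clue_value
    expected_loss = (1 - confidence) * clue_value
    return expected_gain > expected_loss

# A chess engine never faces this choice: with perfect information
# it can search for the best move. A Jeopardy! player must act on
# an uncertain estimate of its own correctness.
print(should_buzz(0.9, 800))   # confident: buzz
print(should_buzz(0.3, 800))   # unsure: stay quiet
```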

This is fitting, as the field of AI research itself has gone through the same change in focus over the past 20 years, as Peter Norvig outlined very well recently. When I took my undergraduate AI classes in the 90s I fell in love with Prolog and logical planning. That’s why I went into AI research later.

When I got to grad school I found out that during my undergrad AI courses I had been missing a renaissance, one that led to modern machine learning and probabilistic AI. Watson’s achievement is only possible with these new methods and the raw increases in computing power we’ve had over the same period.

But apparently it also had one other advantage. As many people have speculated, the machine did seem to have a buzzer advantage. According to an op-ed by Ken Jennings himself, Watson’s speed with the buzzer was decisive in making up for the questions it got wrong. Is this just sour grapes? Maybe a little; you need some ego to be an intense competitor like Jennings, but I think he has a point. As I pointed out yesterday, the quick reaction time between deciding to buzz and registering a button press is something a machine can clearly be faster at. Is this what winning at Jeopardy! means?

It shouldn’t be.

Winning should mean the ability to answer complex questions with ambiguous meanings, under time pressure, while making the best strategic betting choices. That is the task Watson performed admirably. It could have had a buzzer delay and read the screens with computer vision rather than receiving a text file to parse, and perhaps it still would have won.

But we’ll never know now.

So you won this round, Watson. And you’re impressive (well, the engineering team that built ‘you’ is impressive, actually). Hopefully everyone has learned a bit about AI, and hopefully some young girls or boys who otherwise wouldn’t have will be inspired to consider computer science or engineering.

But next year…next year you should come back and put it all on the table. Play it our way, the human way; you have the capability to at least try. And may the best machine, be they biological or electronic, win.


The Robot and the Hound

This term I’m TAing a fourth-year course on Artificial Intelligence at UBC, so I’m going to have lots of AI news crossing my desk. I might as well do something with it, so over the next few months I’ll post my thoughts on the current state of AI research and where it’s going, both in theory and in real-world applications.

I’ll add interesting links to my AINews list on the side of this page, but for a fantastic source of all things AI check out this list from AAAI, which is actually maintained using AI algorithms that combine reader reviews and natural-language searches of the web.

First up, Robots.

Robert Silverberg recently decided to pine for the good ol’ days with his science-writer brethren about what they thought the ‘robots of the future’ would look like and how it’s really turned out. It’s an interesting read from a great SF writer and gives a wonderful overview of the history of robots in fiction.

His conclusion essentially comes down to reminding roboticists to be sure to instill Asimov’s Three Laws of Robotics into all their machines lest they suffer the dire threat of lawsuits in the future.

The notion of responsibility is indeed an important one that AI researchers will need to deal with. Right now all robots and machines are pretty dumb, no matter how smart they seem. There is no computer on Earth with anything remotely resembling free will or self-awareness. Thus there is no way any machine can be blamed for its actions at this time; any errors are either the result of improper usage, of bad planning by engineers, or the random outcome of rules that no one saw a problem with.

You can already begin to see this in the way Google, for example, responds to criticism about what shows up in its search results. Its algorithms are complex and seek patterns in amounts of data far too large for any human to survey. They use a lot of AI techniques, and the particular results that appear when you search for something really aren’t something any one person at Google decided on. Google built the system that generates those results, but whether the results for one particular query are offensive isn’t something any human being made a decision about.
So I think Google would have a strong argument that some result which was offensive and somehow caused harm wasn’t their fault, as long as they had taken reasonable precautions.

I think the next stage of moral responsibility for machine behaviour will be something like the responsibility of a dog owner for the actions of their dog. Suppose an owner is at the park walking their dog, the dog is running free, and it hurts a child or another dog. The dog did the deed and will likely be punished or put down, but you can’t sue the dog. However, depending on how the situation arose, the owner bears some responsibility. If the dog is usually nice and friendly, it was a small dog, it was a leash-free park, and so on, then perhaps the owner would have no legal liability at all. At the other extreme, an abusive owner who raises a mean-spirited dog that often attacks others and then releases it in a leash-only park would face serious penalties, though still likely less than if they had carried out the attack themselves. I don’t know what the laws actually are, but there is a reasonable way to write them so that the owner can be held responsible while the independent will of the dog itself is still taken into account.

Someday, Isaac willing, someone will create a computer complex enough that the engineers who made it can only be held as responsible for its actions as they would be for their dog. While the Three Laws of Robotics are a nice idea, they may be no easier to hardwire than training your dog not to bark at others and to always come when you call.

So let’s keep the intent of the three laws, but let’s make sure that when that day comes we’ve raised a nice, friendly dog rather than an uncontrollable pit bull.

I’ll Take Spectacle for $1000 Alex.

News now that the next big public spectacle in the battle of Man vs. Machine will be…Jeopardy!?

Update: more detail here

You may remember that computers have now defeated the greatest human players of chess, inspiring endless punditry and loose talk about ‘thinking’ machines, as well as inspiring awesome Arcade Fire songs. Computers are also now quite good at playing poker, have solved checkers completely (no point playing that anymore…), provide us with frustrating ‘automated phone help’ bots, and regularly vacuum the floors of geeks fairly adequately.

Sigh. Perhaps this is why the New York Times article, which is otherwise pretty clear and non-hyperbolic about the next spectacle, felt the need to throw this in:

Despite more than four decades of experimentation in artificial intelligence, scientists have made only modest progress until now toward building machines that can understand language and interact with humans.

Now, I’m an Artificial Intelligence researcher, so I’ll try to be rational about this sentence.

The first half of the sentence refers to the common observation that four decades of research into AI has not produced walking, talking androids trying to take over the world and consume us for power. Instead it has provided tremendous research gains and advances in technology that underlie many aspects of our modern world: from Google to space probes, from self-driving cars to face-detecting autofocus cameras, from the management of complex energy systems to medical diagnostic tools. The second half of the sentence points out that on the problem everyone on the street really cares about, walking, talking androids that can ‘think’ like us and understand what we’re saying, progress has fallen below society’s ridiculously high expectations.

Granted. Voice recognition has gotten a lot better over the years, but it’s not up to, say, the level of a four-year-old child. But you know, we don’t even really understand how our own brains work, which makes simulating one in a computing machine less complex than the one between our ears, you know, tricky. (A separate approach that may outflank current AI might in fact be building an equally complex simulation of a brain and letting it go, but that’s another post.)

But I love these public spectacles; they provide a chance to explain the current level of AI and to open up some of the ideas of computation used in more relevant applications all around us. Having a computer up on TV with Alex Trebek and the other contestants will be fun, and we probably won’t even have the embarrassing situation Mr. Kasparov was in of the computer beating the human. Not yet, anyway.

It will be entertaining, some of it will be funny, and hopefully some of it will be informative to viewers who live in an increasingly computational world. Playing Jeopardy! well is a much harder problem than playing chess well. The challenges it poses in terms of understanding language and meaning, searching databases, forming sentences, and making strategic decisions about bids and questions are all very rich domains with more real-world application than the way chess-playing programs work, which is generally some kind of brute-force search.
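For comparison, the brute-force search at the heart of chess programs can be sketched in a few lines. This toy minimax is only an illustration (real engines like Deep Blue add pruning, evaluation functions, and specialized hardware): it exhaustively explores a game tree whose leaves are position scores from the maximizing player’s point of view.

```python
def minimax(node, maximizing=True):
    """Exhaustive minimax over a game tree.

    A node is either a numeric leaf (the position's score for the
    maximizing player) or a list of child nodes. Players alternate:
    we pick the max over our moves, the opponent the min over theirs.
    """
    if isinstance(node, (int, float)):  # leaf: just return its score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Tiny two-ply game: we move (max), then the opponent replies (min).
tree = [[3, 5], [2, 9]]
print(minimax(tree))  # 3: the opponent picks the worst reply for us
```

Because chess is deterministic and fully observable, this kind of search is enough in principle; nothing analogous exists for parsing a pun-laden Jeopardy! clue.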

I just hope that when the computer loses, the show is over, and they ship the machine back to IBM’s labs, we don’t hear another round of “why such modest progress?” This ain’t rocket science, people; it’s a lot harder than that.

The Future of Space Flight

Take a look at these interesting questionnaires being used by the ESA for an upcoming seminar on the future of spaceflight.

They are trying to start a discussion about realistic and upcoming, as well as fantastic, dreams for the future of space. This is a good way to go about it, I think: only once we know what it is we really dream about can we know how to choose from what is possible, or how to push beyond the merely possible.

One of the questions was along the lines of: if you had one thing to say to the designers of future spaceflight systems, what would it be? Here’s what I said:

Keep your mind open, think beyond government centralized control, think beyond scientific goals. Think of the internet, the blogosphere etc, how that exploded in a short time once people were enabled with technology. How could we enable small governments, corporations even groups of committed individuals to harness technology for use in space or even to explore space? Computer power is available, the knowledge is available. Committed groups of people exploring for their own goal may be willing to take risks and reap rewards that government programs never could. As space scientists and engineers how can you enable this kind of revolution?
