Turing Centennial Year

For my Science Sunday post this week I’d like to point out that June 23, 2012 marks 100 years since the birth of one of the most important scientists and mathematicians of the last or any other century. Alan Turing is the father of Computer Science, was pivotal to the defeat of the Nazis in WWII, and was tragically persecuted and punished for his homosexuality. This year has been declared Alan Turing Year in commemoration: Computer Science conferences around the world will be running special sessions to honour Turing, and Computer Science departments everywhere will be holding events. The museum at Bletchley Park, where Turing worked in WWII to break Nazi codes, has received special funding from software companies and others to build up the museum and run events.

Some of the core questions that Turing considered were: What does it mean to compute something? Can computation ever be used to mimic or reproduce intelligence, and would we be able to tell the difference?

You can find out about all the events at http://www.turingcentenary.eu/

If you want to go one step further and learn more about what CS is about and how Turing’s ideas changed the world, you may still be able to sign up for one of the courses being offered free and online by Stanford University. The Computer Science 101 course is a good way to start understanding how the computers that make our modern world possible function, a world Alan Turing contributed so pivotally to making possible. For a bit more of a challenge, the course on cryptography addresses the same issues Turing and his team at Bletchley Park grappled with while trying to break Nazi codes. Turing’s other popularly known contribution was on the relation between computation and intelligence. This would have been best addressed by the course on AI offered last term, which thousands of people registered for. That course is not offered this term, but related courses are offered on machine learning and graphical models, which are at the forefront of modern research into artificial intelligence.

Happy ScienceSunday


In Defence of Algorithms

(I seriously never imagined I’d have to write that title.)

Today I came across an odd analysis of several legitimate problems by Barry Devlin (via Marshall Kirkpatrick’s Google+ feed).

I’m sure the analysis is well intentioned, and perhaps I have misread some of his claims, but he appears to be blaming three major societal problems on one thing they all seem to have in common … the use of algorithms.

The three problems he lists are:

  • Insurance companies overanalysing patient data to deny them coverage
  • Automated stock-trading software exploiting time delays to beat any poor human traders
  • Movie studios analysing data to determine what movies people like, to get the biggest bang for their buck

These are legitimate and worrying problems, but placing the blame on the overuse of algorithms per se is kinda strange.

Algorithms are simply systems for solving problems. They can be as simple as a recipe for baking a cake or as subtle and complex as Google’s search algorithm. Trying to encourage people to use fewer algorithms in the modern world is like encouraging people “to use fewer hammers” and beat the nails in with their hands. “We’re just throwing up buildings at an unnatural rate because of all these fancy hammers.” The problems he points out stem from other choices that are implicitly being made; the intensive use of algorithms does not cause them.
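To make the point concrete, here is a toy sketch (entirely made-up examples): a recipe and a textbook search procedure are algorithms in exactly the same sense, just at different levels of sophistication.

```python
# Two "algorithms" at opposite ends of the spectrum. Both are just
# explicit, repeatable procedures; the examples are purely illustrative.

def bake_cake(ingredients):
    """A recipe is an algorithm: a fixed sequence of steps over inputs."""
    batter = " + ".join(sorted(ingredients))  # combine the ingredients
    return f"baked({batter})"

def binary_search(items, target):
    """A classic algorithm: find target in a sorted list in O(log n) steps."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid          # found it
        if items[mid] < target:
            lo = mid + 1        # discard the lower half
        else:
            hi = mid - 1        # discard the upper half
    return -1                   # not present

print(bake_cake(["flour", "eggs", "sugar"]))   # baked(eggs + flour + sugar)
print(binary_search([1, 3, 5, 7, 9], 7))       # 3
```

Neither of these is sinister; the interesting questions are always about who is running the procedure and why.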

The problem of insurance companies over-analysing their clients and denying coverage is inevitable when you have an unregulated, profit-driven insurance industry. Regulate the industry so that they can’t use certain information, or so that they cannot deny coverage in certain circumstances. Or, if it is the US you are worried about, switch to a single-payer health care system and take most of that power away from insurance companies altogether. Obviously they are going to do everything they can to make money. Either take away their incentive or restrict what they can do; complaining that they should not try so hard by analysing whatever data they are allowed to use doesn’t make sense.

High-speed, automated trading is a very important issue which needs to be addressed. But again, this isn’t about not allowing people to do as much analysis as they want; it is about levelling the playing field. Why should large trading companies get an advantage because they can afford larger servers or can rent the rooms beside the NY Stock Exchange computers to reduce their lag time? Implement a regulation saying that there must be a fixed minimum delay between all trades. Or alter the trading software in the markets to only accept trades every x microseconds. Again, saying algorithms are the problem is saying that traders are playing the game too well when you are only giving them incentives to play that game.
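The fixed-interval idea can be sketched in a few lines. Assume, purely for illustration, that the exchange collects orders and clears one batch per window of `interval_us` microseconds, so co-located servers gain nothing from shaving off a few microseconds of lag (the function and the numbers below are hypothetical, not any real exchange’s mechanism):

```python
# A toy sketch of batching trades into fixed time windows, so that every
# order arriving within the same window executes together, regardless of
# who submitted it first inside that window. Illustrative only.

def batch_trades(orders, interval_us):
    """Group (timestamp_us, order_id) pairs into fixed-length batches."""
    batches = {}
    for timestamp_us, order_id in orders:
        slot = timestamp_us // interval_us  # which window this order falls in
        batches.setdefault(slot, []).append(order_id)
    # Return the batches in time order; each inner list clears as one unit.
    return [batches[slot] for slot in sorted(batches)]

orders = [(3, "A"), (7, "B"), (104, "C"), (109, "D"), (250, "E")]
print(batch_trades(orders, 100))  # [['A', 'B'], ['C', 'D'], ['E']]
```

Within a window, being 4 microseconds faster buys you nothing, which is exactly the point.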

As for movies, and how collaborative filtering will help studios understand exactly what kinds of movies people are willing to pay the most for, he answers the question himself. Any studio that only tries to make movies like last year’s hits is going to lose out to a more creative studio that actually makes popular movies no one was expecting. That’s not the algorithm’s fault; that’s just a bad marketing strategy.

So I just don’t see where he’s coming from. Algorithms completely permeate our lives; they always have. Computers just make it more obvious.

Team Human vs. Team Watson: Round III

Well, it seems the machines have won. Select your pod early; you’ll want to get a good view of the energy-harvesting machines.

We were talking about Watson around the ol’ AI research lab today, and someone pointed out that Watson is yet another highly tailored solution to a particular problem, just like Deep Blue (click it, it’s Arcade Fire, just CLICK IT) was for chess. It’s using a lot of brute force and some reasoning, but it’s still not solving the same problem humans are, and the domain is somewhat restricted.

Now the interesting difference is that whereas chess is a deterministic game where you can search for an optimal strategy, Jeopardy has layers of uncertainty hidden behind human language and the behaviour of other players. So while it’s not a very realistic setting for general AI, and doesn’t claim to be, it has stepped over an important threshold: from deterministic, logic-based problems to ones that require reasoning under uncertainty and statistics.

This is very fitting, as the field of AI research itself has gone through the same change in focus in the past 20 years, as outlined very well by Peter Norvig recently. When I took my undergraduate AI classes in the 90s I fell in love with Prolog and logical planning. That’s why I went into AI research later.

When I got to grad school I found out that during my undergrad AI courses I had been missing a renaissance, one which led to modern machine learning and probabilistic AI. Watson’s achievement is only possible with these new methods and the raw increases in computing power we have had over the same period.

But apparently it did also have one other advantage. As many people have speculated, the machine did seem to have a buzzer advantage. According to an op-ed by Ken Jennings himself, Watson’s speed with the buzzer was decisive in making up for the questions it got wrong. Is this just sour grapes? Maybe just a little; you need some ego to be an intense competitor like Jennings, but I think he has a point. As I pointed out yesterday, the quick reaction time between making the decision to buzz and registering a button press is something a machine can clearly be faster at. Is this what winning at Jeopardy means?

It shouldn’t be.

Winning should mean the ability to answer complex questions with ambiguous meanings, under time pressure, while making the best strategic betting choices. That is the task Watson performed admirably at. It could have had a buzzer delay and read the screens with computer vision rather than receiving a text file to parse, and perhaps it still would have won.

But we’ll never know now.

So you won this round, Watson. And you’re impressive (well, the engineering team that built ‘you’ is impressive, actually). Hopefully everyone has learned a bit about AI, and hopefully some young girls or boys will be inspired to consider computer science or engineering who otherwise wouldn’t have been.

But next year … next year you should come back and put it all on the table. Play it our way, the human way; you have the capability to at least try. And may the best machine, be it biological or electronic, win.

Team Human vs. Team Watson: Round II

Since Watson is doing so well, there has been some confusion about what is actually going on as we watch the game. There’s been some talk that the Jeopardy challenge is unfair, and that’s true, it is, but both sides have some unfair advantages. Here’s how it is, as far as I understand it.

Team Human Advantages (#teamhuman)

  • Using the most advanced computational system ever encountered, one which has been under intense development for millions of years: the Human Brain. It has more raw computational power than Watson, can handle almost infinitely more parallelization and dynamic linking, is incredibly robust to new information, and has pattern-recognition heuristics which we are only barely beginning to comprehend. It’s hard to overestimate how big an advantage this is, and it’s hard to judge, which is why this seems like a good problem for AI research.
  • They understand language – Watson does not understand language at all. It knows some things about language patterns and has learned how to match the words and phrases of answers to other words and phrases which are questions, for Jeopardy and only Jeopardy. If a topic shows up which it has not seen much of, then it does not know what to do. Since it doesn’t understand language, it can’t make the kinds of leaps of reasoning that the humans can. This is why the topics are restricted to types that have shown up on Jeopardy before: nothing that requires Trebek to explain the meaning of the question.

Watson Advantages (#teamwatson)

  • A huge memory database of facts which are relevant to Jeopardy questions – it’s hard to say if this is more facts than Ken Jennings has in his head; it’s represented very differently, and humans have amazing heuristics for accessing data quickly and linking it together. But perhaps Watson has an edge here.
  • A totally focussed system designed and optimized for years just to play Jeopardy (against most of us this is an advantage; against Ken Jennings and … it’s questionable who has spent more time training for these games).
  • Questions sent as a text file – this really could be a bit of an advantage, though the computer still needs to scan the text, parse it and analyze it. The humans need to analyse the visuals and simultaneously listen to Alex Trebek read the question; of course, humans are very good at that kind of thing, and Watson is not.
  • No video questions – ya, that’s just not on. You want to see a machine fail at something? I’ll show you my toaster trying to cook lasagna. We’re just not there yet.
  • Button pressing – it’s not clear exactly how Watson’s button actuator works. And how does Watson know it is allowed to press the button if it’s not listening to Trebek? It’s possible it has an ‘unfairly’ fast reaction time compared to the humans; I don’t know.

So while this is a fascinating challenge, and should demonstrate to everyone how far Artificial Intelligence research has come, everyone should keep in mind that Watson is not Data or Skynet; in fact, it’s not even Wall-E.

Watson has been trained to play Jeopardy. It has the ability to answer questions and find data in a much more natural manner than was possible even 5 years ago. But it is not playing the same game that Ken Jennings and Brad Rutter are. Maybe next year.

My advice for next year’s challenge

Oh, you know there will be one; Jeopardy ratings are through the roof! The engineers at IBM should make efforts to remove these complaints of unfairness in the following ways:

  • (1) Let us see Watson’s button – Ahem, you know what I mean. Put a robot out there, or something more visual, to let Watson press the button. Also, do some work with people who understand the human body to make sure it isn’t unfairly fast. How long does it take a human being to physically press a button from the moment they ‘decide’ to press it? It may seem like a handicap to add this delay to Watson, but it really would seem more fair. After all, we want to know that Watson is winning on the question-answering part of the game, not on these physical details, so remove them as issues.
  • (2) Give Watson a camera – Watson really could visually parse the questions to know what they are. At the beginning of the round it would scan the topics visually and build a database to start planning. This shouldn’t be hard, as visual text analysis is quite advanced; it wasn’t added only because it was a needless complication of an engineering problem. But the optics, excuse the pun, are not good in terms of fairness.
  • (3a) Get rid of all talking – Lock each player in a room where they can’t hear Alex Trebek; they’ll all just read the questions. When another contestant answers, the answer would be sent via text file to all the other players. Then it would be fair … but boring and weird.
  • OR
  • (3b) Give Watson speech recognition – This will be a real problem, as speech recognition is one of those areas that really has turned out to be harder than anyone imagined. Vision? No problem. Text analysis? Give me enough text and I will move the Earth. Robotic control? Are you kidding? Easy. But understanding human speech transmitted through a vibrating air column? Damn, that’s hard.

    But they should do it. They could use the latest technology available for this, which IBM is generally recognized to lead anyway, and just let it be the machine’s Achilles heel. It could train on Alex Trebek’s Canadian accent (no French, Alex!). It could train on its opponents, and it could nail the common and simple prompts the host gives when it is someone’s turn to play or time to break for commercials. It could just ignore Trebek reading the question out, except for the cue that it is time to press the button. It may still not do much better at taking advantage of wrong answers by opponents, and it would likely make some entertaining mistakes, but it would be more realistic. Paradoxically, it might get more credit for losing this way than it will for winning the way it is currently set up.
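Suggestion (1), the buzzer handicap, is easy to sketch. Assume, purely for illustration, that a human takes somewhere on the order of 100–200 ms between ‘deciding’ to buzz and the press registering; the real figure, and IBM’s actual actuator design, are unknown to me, so everything below is hypothetical.

```python
# A hypothetical sketch of the buzzer handicap: Watson's press only
# registers after a sampled human-scale motor delay. The 100-200 ms
# range is illustrative, not a measured figure.
import random

def handicapped_buzz(decision_time_ms, rng=None):
    """Return the time the press registers, after a human-scale motor delay."""
    rng = rng or random.Random()
    motor_delay_ms = rng.uniform(100.0, 200.0)  # sampled human-like delay
    return decision_time_ms + motor_delay_ms

# Watson "decides" at t = 0; its press now lands 100-200 ms later,
# the same window a fast human contestant would need.
press_time = handicapped_buzz(0.0, random.Random(42))
assert 100.0 <= press_time <= 200.0
```

With the delay sampled rather than fixed, Watson can’t even rely on perfectly consistent timing, which is closer to the human situation.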

So if humanity loses tonight, don’t fret. This machine is just a step along the way, and sometimes things aren’t as smart as they first seem. Then again, you could also say it’s holding back by not even trying to do everything at once. Would you be more scared/impressed if Watson really did do everything its human opponents did and still fared well?

On to round three.


Team Human vs. Team Watson: Round I

Just in case watching a computer play humans on Jeopardy wasn’t your idea of a romantic evening last night, here’s my summary of what happened.

They spent a lot of time explaining how Watson was built and even gave a high-level discussion of the fact that the system maintains a belief distribution over possible answers. When the computer answers a question we see its top three picks, with the probability weight on each answer and the threshold for buzzing in.
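From that on-screen display, the buzz decision can be sketched roughly like this. The candidate answers, scores, and threshold value below are all invented for illustration; IBM’s real scoring pipeline is far more elaborate.

```python
# A simplified sketch of threshold-based buzzing: normalize raw candidate
# scores into a probability distribution, show the top three, and buzz
# only if the best one clears a confidence threshold. Illustrative only.

def decide_buzz(candidate_scores, threshold=0.5):
    """Return (top_three, should_buzz) from raw candidate scores."""
    total = sum(candidate_scores.values())
    probs = {ans: s / total for ans, s in candidate_scores.items()}
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    top_three = ranked[:3]                       # what the TV display shows
    should_buzz = top_three[0][1] >= threshold   # confident enough to buzz?
    return top_three, should_buzz

scores = {"What is Toronto?": 1.0, "What is Chicago?": 6.0, "What is Boston?": 3.0}
top, buzz = decide_buzz(scores, threshold=0.5)
print(top)   # Chicago first, with probability 0.6
print(buzz)  # True
```

The threshold is what makes the strategy exploitable, as we saw later in the round.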

The first round was impressive with Watson dominating the first few minutes.  It answered quickly and flawlessly until the first commercial break. It seemed that the humans just weren’t fast enough.

But after the commercials it got interesting. Ken Jennings seemed to modify his strategy to simply press the button as early as possible. It was clear he buzzed in several times having no idea what the answer was, then stalled for a moment and guessed, usually correctly. This was a smart adaptation to Watson’s algorithm. The computer won’t answer unless it is confident in its answer, but a human can keep thinking after they buzz in and gamble that they can come up with something.

Interestingly, when Ken buzzed in very early, the answers showing on the screen for Watson seemed to be of lower quality; it still hadn’t converged on a good answer and froze at the buzzer.

At one point Ken buzzed in and got it wrong, then Watson buzzed in. We could see that its top answer was the same wrong answer Ken had just given, but it repeated it anyway. This seems to indicate the algorithm does not keep computing after the buzzer and can’t take into account the answers of other players. A minor thing to change, really.
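The fix really is minor in sketch form: before answering, strike any response an opponent has already given and been ruled wrong on, then re-rank what remains. The names and scores here are made up for illustration.

```python
# A sketch of excluding opponents' wrong answers before choosing a response.
# Candidate answers and scores are invented; this is not IBM's code.

def best_remaining_answer(candidate_scores, wrong_already_given):
    """Pick the highest-scoring candidate not already ruled out this clue."""
    remaining = {ans: s for ans, s in candidate_scores.items()
                 if ans not in wrong_already_given}
    if not remaining:
        return None  # nothing credible left; stay silent
    return max(remaining, key=remaining.get)

scores = {"the 1920s": 0.55, "the 1930s": 0.30, "the 1940s": 0.15}
print(best_remaining_answer(scores, wrong_already_given={"the 1920s"}))
# falls back to "the 1930s" once the opponent's wrong answer is excluded
```

All it requires is that the system keep listening, or keep reading, after its own buzz decision is made.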

The round ended with Watson answering the last question, correctly identifying the Event Horizon of a black hole. Fitting I think.

We’ll see what happens tonight.

What do you think about the challenge? Leave your observations in the comments. If you are on Twitter, make sure to pick a side: are you rooting for #teamhuman or #teamwatson?
