Machines Want Your Job : Interrogators

Fascinating. It shouldn’t be too surprising to hear that humans are very susceptible to suggestion by authority figures when being asked to remember events, such as during police questioning. But apparently, this new study found that if identical words are used for the questions but delivered by a robot (I don’t know if it was a disembodied robotic voice or a physical robot) then this influence disappears. I assume there would still be lots of ways to bias the witness through the text of the questions you ask, but a huge amount of the influence comes from reading cues and listening to the human voice. So, chalk that up as another future career under threat from robots: Interrogator.

New Scientist: Robot inquisition keeps witnesses on the right track.

Discussion on G+

AAAI 2011 Wrap-Up

So I didn’t post an update about the AAAI 2011 conference every day, but really, this is more posts than I would have predicted with my prior model of my behaviour, so it’s pretty good. I also wrote a separate post talking about the Computational Sustainability track.

These are just a few quick notes about the events at AAAI this year and my own biased view of what was hot. But keep in mind there is such a broad set of papers, presentations, posters and demos from a huge number of brilliant people that it’s impossible for one person to give you a full view of what went on. I encourage you to look over the program here and read the papers that interest you.

From the conference talks I attended and the people I talked to, attendees were most excited about:

  • Learning Relationships from Social Networks – lots of fascinating work here, including one of the invited talks. Kind of ironic, though, that so few AAAI11 attendees seemed to use social media like Twitter during the conference. You can take a look at #aaai11 (and even #aaai and #aaai2011) for the limited chatter there was.
  • Planning with UCT and bandit problems
  • Microtext (I don’t know what that is but it’s apparently fascinating)
  • Computational Social Choice – a Borda manipulation proof won outstanding paper in this track.
  • Multiagent Systems applied to everything
  • Computational Sustainability
  • Natural Language Processing – especially from blogs and social media
  • and everyone I talked to seems to agree that Watson was pretty awesome
The poster session was very full and lots of great discussion ensued. Note for future attendees: the best food of the conference was at the posters, by far. Go for the food, stay for the Artificial Intelligence.
There were a number of robot demos as well; the fantastic PR2 platform was being demonstrated with an algorithm where users could train it to identify and manipulate household items like dishes and cups. There were also a number of chess-playing robots competing, designed to play against a human, using vision to detect moves and locate pieces.
There was also a lot else going on that I didn’t get to: the AI in Education track, a poker-playing competition, IAAI (the applied AI conference held in parallel with AAAI) and probably lots more.
To top it off, on Thursday morning those of us staying in the hotel were awakened to bull horns and shouted slogans. I had almost hoped that someone had arranged a protest spawned by the frightening advance of Artificial Intelligence and they had come to demand we stop our research immediately to avoid the inevitable enslavement/destruction/unemployment/ingestion of humanity by the machines. Not that this would be a valid concern or that I want to stop researching, but it would have provided some kind of strange vindication that the public thinks we are advancing.
Unfortunately, it was actually a labour dispute between the Hyatt and some of its staff; they marched in front of the main entrance from 7am to 7pm on the entire final day of the conference.
Best overheard quote:
You care about AI more than our jobs!
I’m pretty sure most attendees didn’t have a predefined utility for comparing those two entities. Hopefully they work it out.
All in all, a great conference in a great city.

AAAI 2011 – Day 2

Day two of tutorials and workshops at AAAI 2011. The crowd is starting to grow and the robots are being set up. Things don’t get fully started until tomorrow, but there is a growing number of activities going on today. I went to two tutorials, which I describe in more detail below. There were also a number of workshops going on; the one I heard about most was the Analyzing Microtext Workshop, run by David Aha and others. In the evening the social events got started with the banquet and the IAAI video competition winners.

From Structured Prediction to Inverse Reinforcement Learning
Hal Daume III

This was quite an interesting and well-run tutorial. He was trying to highlight the relation between structured prediction and Inverse Reinforcement Learning, also called apprenticeship learning. There was a lot of content, but a common theme was learning linear classifiers using various methods (perceptrons, SVMs) for parts of language understanding and other problems. He then related this to Inverse Reinforcement Learning, which is essentially trying to learn the reward function from an optimal policy rather than the other way around. This can be done by observing an ‘optimal’ agent (assuming you have one) and using the same techniques as in structured prediction to iteratively improve your estimate of the agent’s reward model. Once you have this reward model you can do normal reinforcement learning to find an optimal policy. The result is more robust than simply doing supervised learning, since you are not mimicking the agent’s actions but its intent.
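To make the idea concrete, here is a minimal sketch of that perceptron-style update on a toy problem. Everything here (the chain MDP, one-hot state features, the specific update rule) is my own illustration, not the tutorial’s actual material: we repeatedly solve for the optimal policy under our current reward estimate, and nudge the reward weights toward the expert’s feature counts and away from our policy’s.

```python
import numpy as np

# Toy deterministic chain MDP: states 0..3, actions -1 (left) and +1 (right).
N_STATES, HORIZON = 4, 6

def step(s, a):
    return min(max(s + a, 0), N_STATES - 1)

def feature_counts(policy, start=0):
    """Sum of one-hot state features along a rollout of `policy`."""
    mu = np.zeros(N_STATES)
    s = start
    for _ in range(HORIZON):
        mu[s] += 1
        s = step(s, policy[s])
    mu[s] += 1
    return mu

def optimal_policy(w):
    """Finite-horizon value iteration under the linear reward r(s) = w[s]."""
    V = w.copy()
    policy = np.zeros(N_STATES, dtype=int)
    for _ in range(HORIZON):
        newV = np.empty(N_STATES)
        for s in range(N_STATES):
            vals = {a: w[s] + V[step(s, a)] for a in (-1, 1)}
            policy[s] = max(vals, key=vals.get)
            newV[s] = vals[policy[s]]
        V = newV
    return policy

# The 'expert' always moves right (its hidden reward favours state 3).
expert = np.full(N_STATES, 1, dtype=int)
mu_expert = feature_counts(expert)

# Perceptron-style IRL: adjust w until the optimal policy under w
# reproduces the expert's feature counts.
w = np.zeros(N_STATES)
for _ in range(20):
    mu_pi = feature_counts(optimal_policy(w))
    if np.array_equal(mu_pi, mu_expert):
        break
    w += mu_expert - mu_pi

print(w.argmax())  # learned reward peaks at the rightmost state: 3
```

Note that the loop never compares actions directly; it only matches feature expectations, which is exactly the “mimic the intent, not the actions” point above.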

Philosophy as AI and AI as Philosophy
Aaron Sloman

I had been torn this afternoon between two tutorials: the overview of the latest results in classical planning, which I really should have attended, and this very broad philosophy and AI tutorial. Nature intervened and made the choice for me, though, as the planning tutorial was cancelled. Prof. Sloman gave us a general overview of many branches of philosophy and how he believes they relate deeply to Artificial Intelligence.

The main point of his talk, and of much of his work over the past thirty years, is to increase the flow of ideas in both directions between philosophy and computer science, specifically AI. He argues that there is a lot each field could contribute to the other: philosophical ideas and open problems could guide AI research into areas that could help resolve philosophical questions. One fun question: what is humour? What does it mean for something to be funny? From an AI perspective, what would need to be true for us to say that a computer found a joke funny, as opposed to merely identifying a statement as funny? Is there a difference?

Near the end, when he was summing up, he also stated that he believes philosophers who do learn about AI concepts are deeply changed by it and can then address philosophical questions in ways they previously could not. He calls on young AI researchers to join him in trying to bridge these gaps and to start more discussion on the philosophical questions AI could address, rather than just the engineering questions.
