AAAI 2011 – Day 2

Day two of tutorials and workshops at AAAI 2011. The crowd is starting to grow and the robots are being set up. Things don't get fully started until tomorrow, but there is a growing number of activities going on today. I went to two tutorials, which I describe in more detail below. There were also a number of workshops going on; the one I heard about most was the Analyzing Microtext Workshop run by David Aha and others. In the evening the social events got started with the banquet and the IAAI video competition winners.

From Structured Prediction to Inverse Reinforcement Learning
Hal Daume III

This was quite an interesting and well-run tutorial. He was trying to highlight the relation between structured prediction and Inverse Reinforcement Learning, also called apprenticeship learning. There was a lot of content, but a common theme was learning linear classifiers by various methods (perceptrons, SVMs) for parts of language understanding and other problems. He then related this to Inverse Reinforcement Learning, which is essentially trying to learn the reward function from an optimal policy rather than the other way around. This can be done by observing an 'optimal' agent (assuming you have one) and using the same techniques as in structured prediction to iteratively improve your estimate of the agent's reward model. Once you have this reward model you can do normal reinforcement learning to find an optimal policy. The result is more robust than simply doing supervised learning, since you are mimicking the agent's intent rather than its actions.
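The connection can be sketched in a few lines of code. Below is a toy, perceptron-style IRL update, assuming a reward that is linear in state features; the function names, toy feature vectors, and the simple stopping rule are my own illustration, not the tutorial's actual algorithm.

```python
import numpy as np

# Toy sketch of the structured-prediction view of IRL. Assumes the reward is
# linear in state features, reward(s) = w . phi(s), and uses a perceptron-style
# update: whenever some candidate policy's feature expectations score at least
# as well as the expert's under the current weights, push w toward the expert.

def perceptron_irl(expert_features, candidate_features, n_iters=100, lr=0.1):
    """expert_features: mean feature vector of the expert's trajectories.
    candidate_features: mean feature vectors of some candidate policies."""
    w = np.zeros_like(expert_features)
    for _ in range(n_iters):
        # The candidate that looks best under the current reward estimate.
        best = max(candidate_features, key=lambda f: w @ f)
        if w @ best < w @ expert_features:
            break  # expert already scores strictly higher: w explains the demos
        w += lr * (expert_features - best)  # perceptron-style correction
    return w

# Toy example: the expert visits "goal" states (feature 0) far more often than
# "hazard" states (feature 1); the candidate policies do not.
expert = np.array([0.9, 0.1])
candidates = [np.array([0.2, 0.8]), np.array([0.5, 0.5])]
w = perceptron_irl(expert, candidates)
print(w @ expert > max(w @ c for c in candidates))  # True
```

With a reward estimate like `w` in hand, you could then run ordinary reinforcement learning against it, which is what makes the approach more robust than copying the expert's actions directly.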

Philosophy as AI and AI as Philosophy
Aaron Sloman

I had been torn this afternoon between two tutorials: the overview of the latest results in classical planning, which I really should have attended, and this very broad philosophy and AI tutorial. Nature intervened and made the choice for me, though, as the planning tutorial was cancelled. Prof. Sloman gave us a general overview of many branches of philosophy and how he believes they relate deeply to Artificial Intelligence.

The main point of his talk, and of much of his work over the past thirty years, is to increase the flow of ideas in both directions between philosophy and computer science, AI in particular. He argues that there is much each field could contribute to the other: philosophical ideas and open problems could guide AI research into areas that could help resolve philosophical questions. One fun question: what is humour? What does it mean for something to be funny? From an AI perspective, what would need to be true for us to say that a computer found a joke funny, as opposed to merely identifying a statement as funny? Is there a difference?

Near the end, as he was summing up, he stated that he believes philosophers who do learn about AI concepts are deeply changed by it and can then address philosophical questions in ways they previously could not. He called on young AI researchers to join him in trying to bridge these gaps and to start more discussion on the philosophical questions AI could address, rather than just the engineering questions.

AAAI 2011 – Day 1

I skipped the afternoon tutorial sessions to work on my presentation and see a bit of the town, but I did attend a very interesting morning tutorial on time series, led by Eamonn Keogh.

He makes some bold claims, but it was a very informative tutorial and he seems to have a lot of very reasonable and interesting things to say. His basic claim is that people often try to fit complex, fully general models to classification and prediction problems when their data actually has a linear ordering and simpler methods would be much more effective. A time series is any data set whose points occur in a fixed linear ordering. In response to someone's question he did make clear that you need some reasonable assumptions about how much variation there is over time: data where events can occur in arbitrary order won't work, and neither will data where some event is equally likely to recur every 0.1 seconds or every 10 hours. But a lot of real data sets actually vary within a small time range. What time series methods can handle, apparently, is almost arbitrary variation in the magnitude of the values, anomalies in the middle of a data set, outliers that don't fit the average case, and several other types of variation.


This makes time series methods very powerful for sensor data and for medical data from well-understood processes. Keogh's focus is on symbolic analysis of data, which has obvious applications for genetic datasets but is apparently also very effective for continuous, real-valued data. The basic idea is to cluster the time series values into levels and label those levels; patterns of these labels can then be used to discover motifs in the data, which often have an understandable semantic meaning.
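A minimal sketch of this symbolization idea, in the spirit of Keogh's SAX representation: z-normalise the series, average it over fixed-width segments, and map each segment mean to a letter. The quartile breakpoints and four-letter alphabet below are standard SAX choices; the rest of the code is my own illustration.

```python
import numpy as np

# Sketch of level-based symbolization (SAX-style). Each segment mean is mapped
# to a letter by breakpoints that divide a standard normal distribution into
# four equal-probability bins, so after z-normalisation every letter is
# roughly equally likely on random-walk data.

BREAKPOINTS = [-0.67, 0.0, 0.67]  # N(0,1) quartile boundaries
ALPHABET = "abcd"

def symbolize(series, n_segments=8):
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)    # normalise away offset and scale
    segments = np.array_split(x, n_segments)   # piecewise aggregate means
    word = ""
    for seg in segments:
        word += ALPHABET[np.searchsorted(BREAKPOINTS, seg.mean())]
    return word

# Two cycles of the same rough shape produce a repeated motif in the word.
t = np.linspace(0, 4 * np.pi, 200)
word = symbolize(np.sin(t), n_segments=8)
print(word)  # the 4-letter pattern of the first cycle repeats for the second
```

Once a series is a string like this, motif discovery reduces to finding repeated substrings, which is where the genetic-data machinery carries over.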


There are several questions that arise with this approach, some of which he answered as well. Basically, he argues that most data can be analysed using simple Euclidean distance and linear transformations on the time series, combined with anomaly detection. The worry is that if you stretch and shift the data enough you may find a pattern where none is really present. It is important to treat a pattern detected in this way as merely a hypothesis, which is then verified by going back to the original data or to some independent data source. He confirmed that this is what they always do, but that when such anomalies arise in real datasets they very often are meaningful or can guide further exploration.
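To make the Euclidean-distance idea concrete, here is a small sketch of distance-based anomaly detection on a sliding window, assuming the simple setup described above: z-normalise each window so shifts and rescalings don't matter, then flag the window whose nearest non-overlapping neighbour is farthest away. This is my own illustrative code (a "discord"-style search), not Keogh's implementation, and the toy data is made up.

```python
import numpy as np

def znorm(w):
    # Remove offset and scale so only the window's shape matters.
    return (w - w.mean()) / (w.std() + 1e-12)

def find_discord(series, window):
    """Return (index, distance) of the window whose nearest non-overlapping
    neighbour is farthest away in Euclidean distance."""
    x = np.asarray(series, dtype=float)
    wins = [znorm(x[i:i + window]) for i in range(len(x) - window + 1)]
    best_i, best_d = -1, -1.0
    for i, wi in enumerate(wins):
        # Nearest-neighbour distance, excluding windows that overlap window i.
        d = min(np.linalg.norm(wi - wj)
                for j, wj in enumerate(wins) if abs(i - j) >= window)
        if d > best_d:
            best_i, best_d = i, d
    return best_i, best_d

# Toy data: a repeating sawtooth with one flattened stretch in the fifth cycle.
x = np.tile(np.arange(10.0), 10)
x[43:47] = x[43]                 # corrupt part of the fifth cycle
i, d = find_discord(x, window=10)
print(35 <= i <= 46)             # True: the discord overlaps the corruption
```

Every clean window here has an exact copy elsewhere in the series (nearest-neighbour distance zero), so only windows touching the corrupted stretch can be flagged, which is exactly the "verify the hypothesis against the data" step he described.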

A week in San Francisco at AAAI

This week I’m in San Francisco attending AAAI-11, the 2011 conference of the Association for the Advancement of Artificial Intelligence. It’s the largest and broadest AI conference held every year. I’ll be trying to post my thoughts and observations here each day about what I’m seeing at the conference. For more immediate thoughts I’ll probably just post to Google+ (you can find me there; if you’re not on yet you can join using my invites here). That would be a great place to discuss what’s going on in real time or for people to meet up.

I’m very excited for this year’s AAAI because:

a) I’m presenting a paper on my thesis research on Thursday (in the 10:20am session – how can you resist a catchy title like “Policy Gradient Planning for Environmental Decision Making with Existing Simulators”? Also, the paper before mine is listed as an “Outstanding Paper”, so at least you’ll see that.)

b) This is a very exciting time for AI research. There are lots of reasons for this, but for me two exciting new areas stand out where AI is being applied in new ways, both featured prominently at this year’s AAAI conference: Social Media and Sustainability.

I don’t know much about AI applied to social media so that will be fascinating to find out about.

There is a special track this year on Computational Sustainability, a new field which focusses on applying machine learning, probabilistic modelling and optimization techniques to very challenging environmental problems. This is the track my paper is in. It will be really interesting to meet lots of people trying to use AI to better the world.

Watch this space.
