NIPS Conference 2012

A month ago I attended the 2012 Neural Information Processing Systems conference in Lake Tahoe, Nevada. I’ve already posted some of my thoughts on it over at the Computational Sustainability Blog for your interest.


AAAI 2011 Wrap-Up

So I didn’t post an update about the AAAI 2011 conference every day, but really, this is more posts than I would have predicted with my prior model of my behaviour so it’s pretty good. I also wrote a separate post talking about the Computational Sustainability track.

This is just a few quick notes about the events at AAAI this year and my own biased view of what was hot. But keep in mind there is such a broad set of papers, presentations, posters and demos from a huge number of brilliant people that it’s impossible for one person to give you a full view of what went on. I encourage you to look over the program here, and read the papers that interest you.

From the conference talks I attended and the people I talked to, attendees were most excited about:

  • Learning Relationships from Social Networks – lots of fascinating work here including one of the invited talks. Kind of ironic, though, that so few AAAI11 attendees seemed to use social media like Twitter during the conference. You can take a look at #aaai11 (and even #aaai and #aaai2011) for the limited chatter there was.
  • Planning with UCT and bandit problems
  • Microtext (I don’t know what that is but it’s apparently fascinating)
  • Computational Social Choice – a Borda manipulation proof won outstanding paper in this track.
  • Multiagent Systems applied to everything
  • Computational Sustainability
  • Natural Language Processing – especially from blogs and social media
  • and everyone I talked to seems to agree that Watson was pretty awesome
The poster session was very full and lots of great discussion ensued. Note for future attendees: the best food of the conference was at the posters, by far. Go for the food, stay for the Artificial Intelligence.
There were a number of robot demos as well; the fantastic PR2 platform was being demonstrated with an algorithm that lets users train it to identify and manipulate household items like dishes and cups. There were also a number of chess-playing robots competing, designed to play against a human using vision to detect moves and locate pieces.
There was also a lot else going on that I didn’t get to: the AI in Education track, a poker-playing competition, IAAI (the applied AI conference held in parallel with AAAI) and probably lots more.
To top it off, on Thursday morning those of us staying in the hotel were awakened by bullhorns and shouted slogans. I had almost hoped that someone had arranged a protest spawned by the frightening advance of Artificial Intelligence, and that they had come to demand we stop our research immediately to avoid the inevitable enslavement/destruction/unemployment/ingestion of humanity by the machines. Not that this would be a valid concern or that I want to stop researching, but it would have provided some kind of strange vindication that the public thinks we are advancing.
Unfortunately it was actually a labour dispute between the Hyatt and some of its staff; they marched in front of the main entrance from 7am to 7pm on the final day of the conference.
Best overheard quote:
You care about AI more than our jobs!
I’m pretty sure most attendees didn’t have a predefined utility function for comparing those two entities. Hopefully they work it out.
All in all, a great conference in a great city.

CompSust11 – Computational Sustainability at AAAI11

This year the annual AAAI conference held a special track for the field of Computational Sustainability.  I attended the AAAI conference and presented a paper in the CompSust track but I also ended up spending most of my time listening to other talks from this track.  This was partly because each of the talks was interesting in itself but also because it turned out to be a great way to see a range of work going on in AI without changing rooms as often.

There was a huge diversity of problem domains and AI methods brought to bear on them. This made for an interesting way to attend AAAI, since each session exposed the variety of approaches you find at a large general conference. Most of my most vigorous discussions were with people in very different fields from me, since we each needed to translate the other’s language and uncover our own assumptions. I think this is something that happens less often at more focussed conferences.

One of the papers chosen as outstanding paper (one of only two as far as I could tell) came from the CompSust track (Dynamic Resource Allocation in Conservation Planning by Daniel Golovin, Andreas Krause, Beth Gardner, Sarah J. Converse, Steve Morey). This was a very impressive project on managing nature reserves to protect wildlife, the result of a wide collaboration between universities, government and industry.

Just some of the domains and methods used in the papers in this track, to give you an idea of the variety of topics:

Domains
– smart energy grid design
– distributed energy storage
– nature reserve planning
– wildlife migration corridors
– comparing and improving building energy efficiency
– water conservation in residential landscapes
– bird species tracking

Methods
– market simulation of energy tariffs with Q-learning
– multiagent planning – an agent buying and selling power between the grid and your local batteries in order to lower your energy bill and maintain the power needed on demand
– Steiner multigraph optimization
– modelling interactions between plants as agents and optimizing their placement and watering
– graphical probabilistic models
– boosted regression trees

AAAI 2011 – Day 2

Day two of tutorials and workshops at AAAI 2011. The crowd is starting to grow and the robots are being set up. Things don’t get fully started until tomorrow, but there is a growing number of activities going on today. I went to two tutorials, which I describe in more detail below. There were also a number of workshops going on; the one I heard about most was the Analyzing Microtext Workshop run by David Aha and others. In the evening the social events got started with the banquet and IAAI video competition winners.

From Structured Prediction to Inverse Reinforcement Learning
Hal Daume III

This was quite an interesting and well-run tutorial. He was trying to highlight the relation between structured prediction and Inverse Reinforcement Learning, also called apprenticeship learning. There was a lot of content, but a common theme was learning linear classifiers using various methods (perceptrons, SVMs) for parts of language understanding and other problems. He then related this to Inverse Reinforcement Learning, which is essentially trying to learn the reward function from an optimal policy rather than the other way around. This can be done by observing an ‘optimal’ agent (assuming you have one) and using the same techniques as in structured prediction to iteratively improve your estimate of the agent’s reward model. Once you have this reward model you can do normal reinforcement learning to find an optimal policy. This is more robust than simply doing supervised learning, since you are not mimicking the agent’s actions but their intent.
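To make that iterative idea concrete, here is a minimal, purely illustrative sketch (my own toy, not anything from the tutorial): a structured-perceptron-style update for a linear reward, where the "MDP" is collapsed to a single one-step choice among four states with hand-made feature vectors. The learner picks greedily under its current reward estimate, and whenever it disagrees with the observed expert, the weights move toward the expert’s features.

```python
import numpy as np

# Toy setup (all values invented for illustration): four candidate
# states, each described by a 2-d feature vector. The true reward is
# some unknown linear function of these features.
features = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
    [0.5, 0.5],
    [0.2, 0.9],
])
expert_choice = 1  # the observed 'optimal' agent always picks state 1

w = np.zeros(2)  # current estimate of the reward weights
for _ in range(20):
    # learner acts greedily under its current reward estimate
    learner_choice = int(np.argmax(features @ w))
    if learner_choice == expert_choice:
        break
    # perceptron-style update: shift the reward estimate toward the
    # expert's features and away from the learner's current choice
    w += features[expert_choice] - features[learner_choice]

print(learner_choice, w)
```

After a couple of updates the estimated reward ranks the expert’s choice highest, at which point ordinary (greedy) planning under the learned reward reproduces the expert’s behaviour.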

Philosophy as AI and AI as Philosophy
Aaron Sloman

I had been torn this afternoon between two tutorials to attend, the overview of the latest results in classical planning which I really should attend and this very broad philosophy and AI tutorial. Nature intervened and made the choice for me though as the planning tutorial was cancelled. Prof. Sloman gave us a general overview of many branches of philosophy and how he believes they relate deeply to Artificial Intelligence.

The main point of his talk, and of much of his work over the past thirty years, is to increase the flow of ideas in both directions between philosophy and computer science, specifically AI. He argues that there is a lot each field could contribute to the other: philosophical ideas and open problems could guide AI research into areas that could help resolve philosophical questions. One fun question: what is humour? What does it mean for something to be funny? From an AI perspective, what would need to be true for us to say that a computer found a joke funny, as opposed to merely identifying a statement as funny? Is there a difference?

Near the end when he was summing up he also stated that he believes philosophers who do learn about AI concepts are deeply changed by it and can then address philosophical questions in ways they previously could not. He calls on young AI researchers to join him in trying to bridge these gaps and start more discussion on the philosophical questions AI could address rather than just the engineering questions.

AAAI 2011 – Day 1

I skipped the afternoon tutorial sessions to work on my presentation and go around town a bit, but I did attend a very interesting morning tutorial on time series, led by Eamonn Keogh.

He makes some bold claims, but it was a very informative tutorial and he seems to have a lot of very reasonable and interesting things to say. His basic claim is that people often try to fit complex, fully general models to classification and prediction problems when their data actually has a linear ordering and simpler methods would be much more effective. A time series is any data set whose data points occur in a fixed linear ordering. In response to someone’s question he did make clear that you need some reasonable assumptions about how much variation there is over time: data where events can occur in arbitrary order won’t work, and if some event is equally likely to recur every 0.1 seconds or every 10 hours it probably won’t work either. But a lot of real data sets actually vary within a small time range. What time series methods can handle, apparently, is almost arbitrary variation in the magnitude of the values, anomalies in the middle of a sequence, outliers that don’t fit the average case, and several other types of variation.


This makes time series methods very powerful for sensor data and medical data from well-understood processes. Keogh’s focus is on symbolic analysis of data, which has obvious applications for genetic datasets but is apparently also very effective for continuous, real-valued data. The basic idea is to cluster the time series values into levels and label those levels. Patterns of these labels can then be used to discover motifs in the data, which often have an understandable semantic meaning.
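As a rough illustration of that labelling idea (a hand-rolled toy, not Keogh’s actual procedure; the breakpoints here are simply equal-width cuts over the normalized range), one can z-normalize a series and bin its values into a small alphabet:

```python
import numpy as np

def symbolize(series, alphabet="abcd"):
    """Map a real-valued series to a string of level labels."""
    x = np.asarray(series, dtype=float)
    z = (x - x.mean()) / x.std()  # z-normalize the series
    # equally spaced interior breakpoints over the normalized range
    edges = np.linspace(z.min(), z.max(), len(alphabet) + 1)[1:-1]
    # each value gets the letter of the bin it falls into
    return "".join(alphabet[i] for i in np.digitize(z, edges))

print(symbolize([1, 1, 2, 8, 9, 9, 2, 1]))
```

Repeated substrings in the resulting symbol string (here, runs of low ‘a’ levels around a high ‘d’ bump) are the kind of motif this style of analysis looks for.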


There are several questions that arise with this approach, some of which he answered as well. Basically, he argues that most data can be analysed using simple Euclidean distance and linear transformations on the time series, combined with anomaly detection. The worry is that if you stretch and shift the data enough, you may find a pattern where there isn’t really one. It is important to treat a pattern detected this way as merely a hypothesis, to be verified against the original data or some independent data source. He confirmed that this is what they always do, and that when such anomalies arise in real datasets they very often are meaningful or can guide further exploration.
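One way to picture the "simple Euclidean distance plus linear transformations" point (my own made-up example, not from the tutorial): z-normalize each series first, so that shifting and rescaling the magnitudes doesn’t hide a shared shape, then compare with plain Euclidean distance.

```python
import numpy as np

def znorm(x):
    """Remove offset and scale: subtract the mean, divide by std."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def zdist(a, b):
    """Euclidean distance between two z-normalized series."""
    return float(np.linalg.norm(znorm(a) - znorm(b)))

a = [0, 1, 2, 3, 4]
b = [10, 12, 14, 16, 18]   # same shape as a, shifted and rescaled
c = [4, 0, 3, 1, 2]        # same values as a, different shape
print(zdist(a, b), zdist(a, c))
```

Series `b` ends up at distance zero from `a` despite living on a completely different scale, while the shuffled series `c` stays far away, which is exactly the behaviour you want before looking for anomalies.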
