The Big Deal about Big Data and Deep Learning

You may have been hearing a lot about “Big Data” recently; the term refers to our growing ability to find patterns in and process very large amounts of data, with or without structure. Many factors have contributed to this, but a recent set of algorithmic advances known as “Deep Learning” has led to a surge in accuracy in voice recognition and vision. This is relevant for scientists because Deep Learning is a particularly effective form of ‘unsupervised learning’, which does not require you to specify a bias up front to speed up learning. Thus it only learns a pattern from the data if there is a statistical basis for it.
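To make the “no bias up front” idea concrete, here is a minimal sketch of unsupervised learning: a one-hidden-layer autoencoder, written in plain numpy, that discovers a low-dimensional code for unlabelled data purely from the statistics of the data itself. The toy dataset and every parameter choice here are invented for illustration; real deep learning systems stack many such layers and train on far more data.

```
import numpy as np

rng = np.random.default_rng(0)

# Toy unlabelled data: 10-D points that secretly lie near a 2-D subspace.
# (Invented for illustration -- no real dataset is assumed.)
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.1 * rng.normal(size=(500, 10))

# One-hidden-layer autoencoder: learn a 2-D code with no labels at all.
n_in, n_hidden = 10, 2
W1 = 0.1 * rng.normal(size=(n_in, n_hidden))   # encoder weights
W2 = 0.1 * rng.normal(size=(n_hidden, n_in))   # decoder weights
lr = 0.05

for epoch in range(500):
    H = np.tanh(X @ W1)      # encode each point to 2 numbers
    X_hat = H @ W2           # decode back to 10 numbers
    err = X_hat - X          # reconstruction error drives all learning
    # Backpropagate the squared reconstruction loss.
    dW2 = H.T @ err / len(X)
    dH = (err @ W2.T) * (1 - H**2)
    dW1 = X.T @ dH / len(X)
    W1 -= lr * dW1
    W2 -= lr * dW2

# If (and only if) there is low-dimensional structure in the data,
# the reconstruction error drops well below the data's variance.
print("final reconstruction MSE:", np.mean(err**2))
```

The network is never told what the pattern is; if the 10-D data had no underlying structure, the 2-D bottleneck could not reconstruct it and the error would stay high.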

This recent article in Nature is causing some waves and explains the algorithms quite well; the discussion in Andrew Ng’s Google+ post is also good for adding a bit more context about the wider progress in Artificial Intelligence research.

Could Machines Make Art?

Fascinating study out of the University of Trento on using Machine Vision algorithms to learn how people respond emotionally to abstract art.

Link : Computers identify what makes abstract art move us

Abstract art might be easier to replicate automatically, since you don’t need to worry as much about symbolism and meaning. Is this going to put artists out of a job? Not entirely; people create art because they want to, or need to. But if computers can generate abstract patterns and images that are emotionally evocative on demand, then that would surely hurt artists who rely on selling their images or the rights to reproduce them in other media.

So, something for artists to be aware of.

Talking Robots and Psychedelic Drugs

Baby robot learns first words from human teacher – tech – 15 June 2012 – New Scientist

I’m always glad to see more methods from research in Artificial Intelligence/Machine Learning getting coverage in the media and being explained with some level of detail. Take a look at these two articles on applications of Artificial Intelligence methods: one on the study of language learning in infants, and one on classifying the effects of psychedelic drugs. They give a nice high-level overview of two powerful approaches that are not quite standard in AI and Machine Learning. The language-learning robot combines supervised learning with a reinforcement learning approach, where the agent randomly explores a landscape and weights good experiences to improve its model. The drugs study applies a classifier to text descriptions of psychedelic trips and tries to predict the drug that caused them.
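To give a flavour of the two approaches, here are two minimal sketches in Python. They illustrate the general techniques, not the actual systems from the articles, and all the names, actions and rewards in them are invented.

First, the reinforcement learning idea behind the talking robot: an agent that explores actions at random some of the time and shifts its value estimates toward the experiences that earned reward.

```
import random

# Three candidate "utterances"; the teacher rewards "red" most strongly.
# (Both the utterances and the reward numbers are made up.)
true_reward = {"ba": 0.1, "da": 0.2, "red": 0.9}
value = {a: 0.0 for a in true_reward}   # the agent's learned model
counts = {a: 0 for a in true_reward}

for step in range(1000):
    if random.random() < 0.1:                 # explore randomly sometimes
        action = random.choice(list(value))
    else:                                     # otherwise exploit the model
        action = max(value, key=value.get)
    reward = true_reward[action] + random.gauss(0, 0.1)
    counts[action] += 1
    # Weight good experiences: nudge the estimate toward the observed reward.
    value[action] += (reward - value[action]) / counts[action]

print(value)   # "red" should end up with the highest learned value
```

Second, the drug study’s setup: a bag-of-words classifier that, given a text description of a trip, predicts which substance it describes. The tiny corpus below is invented; the real study trained on a large collection of written reports.

```
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented toy trip reports and labels, for illustration only.
reports = [
    "geometric patterns and bright fractal colours everywhere",
    "felt a deep sense of connection and emotional warmth",
    "visuals of shifting geometry while time dilated strangely",
    "overwhelming empathy, wanted to talk to everyone",
]
drugs = ["LSD", "MDMA", "LSD", "MDMA"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(reports, drugs)
print(model.predict(["colourful fractal geometry filled my vision"]))
```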

Journalists Should Welcome Their New Computer Overlords

Take a look at this interesting summary piece describing improving applications of AI to the automated writing of news. I couldn’t resist repeating Ken Jennings’s famous statement after he was defeated by Watson, the IBM Jeopardy-playing computer. But seriously, I don’t think there is any fear that computers will replace journalists, as the writer seems to worry. No computer algorithm is anywhere near the point where it can write evocative, insightful prose that encapsulates the experience and reasoning that journalists bring to their jobs.

However, this kind of technology for producing summary posts on a topic in prose could be a useful feature when people are looking for breaking news. Right now, if you want to find out about something that is occurring right now and isn’t being covered live on CNN, you need to sift through Twitter or Google searches yourself, manually. To get a coherent picture from the various feeds, you usually need to wait for a human somewhere to integrate those facts together. Much of this initial grunt work of detecting a new story and compiling links can be automated now.
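As a hedged sketch of what that grunt work might look like, here is a toy burst detector: it flags words whose frequency in the current window of posts spikes relative to a background window. The posts and the threshold are invented for illustration; a real system would add clustering, link extraction and ranking on top.

```
from collections import Counter

# Invented example posts: a quiet background window and a current window.
background = ["traffic normal downtown", "weather sunny today", "coffee break time"]
current = ["explosion downtown", "explosion reported downtown",
           "explosion smoke downtown"]

def term_counts(posts):
    return Counter(word for post in posts for word in post.split())

bg, cur = term_counts(background), term_counts(current)

# Flag terms that occur far more often now than in the background window.
spikes = {w: c / (bg[w] + 1) for w, c in cur.items() if c / (bg[w] + 1) > 2}
print(spikes)   # candidate breaking-story terms, e.g. "explosion"
```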

As a tool for journalists, these generated articles could even be the initial seeds used to write news stories. Further writing would always be needed, and undoubtedly facts would need to be checked and more detail gathered on interesting aspects of the story. But an initial draft of an article generated by an AI system could actually help improve the quality of journalism by focussing humans on the important parts of the story that need to be filled in, rather than on spending lots of time gathering links to other sources, tweets and articles which are readily available. Perhaps we could even train the automated news summarizers to filter out less relevant stories and improve the quality of news overall.

AAAI 2011 Wrap-Up

So I didn’t post an update about the AAAI 2011 conference every day, but really, this is more posts than I would have predicted with my prior model of my behaviour, so it’s pretty good. I also wrote a separate post talking about the Computational Sustainability track.

These are just a few quick notes about the events at AAAI this year and my own biased view of what was hot. But keep in mind there is such a broad set of papers, presentations, posters and demos from a huge number of brilliant people that it’s impossible for one person to give you a full view of what went on. I encourage you to look over the program here, and read the papers that interest you.

From the conference talks I attended and the people I spoke with, attendees were most excited about:

  • Learning Relationships from Social Networks – lots of fascinating work here, including one of the invited talks. Kind of ironic, though, that so few AAAI11 attendees seemed to use social media like Twitter during the conference. You can take a look at #aaai11 (and even #aaai and #aaai2011) for the limited chatter there was.
  • Planning with UCT and bandit problems (see the sketch after this list)
  • Microtext (I don’t know what that is but it’s apparently fascinating)
  • Computational Social Choice – a Borda manipulation proof won outstanding paper in this track.
  • Multiagent Systems applied to everything
  • Computational Sustainability
  • Natural Language Processing – especially from blogs and social media
  • and everyone I talked to seemed to agree that Watson was pretty awesome
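For the curious, here is a minimal sketch of the bandit selection rule (UCB1) that sits at the heart of UCT; in UCT the same rule is applied at every node of a search tree. The arms and their payoff probabilities below are invented for illustration.

```
import math
import random

# Three slot-machine "arms" with unknown payoff probabilities (made up).
true_means = [0.2, 0.5, 0.8]
values = [0.0] * 3    # running mean reward per arm
counts = [0] * 3

for t in range(1, 2001):
    if 0 in counts:
        arm = counts.index(0)   # try every arm once first
    else:
        # Optimism in the face of uncertainty: mean + exploration bonus.
        arm = max(range(3),
                  key=lambda i: values[i] + math.sqrt(2 * math.log(t) / counts[i]))
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]

print(counts)   # the best arm (index 2) should get most of the pulls
```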
The poster session was very full and lots of great discussion ensued. Note for future attendees: the best food of the conference was at the posters, by far. Go for the food, stay for the Artificial Intelligence.
There were a number of robot demos as well. The fantastic PR2 platform was being demonstrated with an algorithm that lets users train it to identify and manipulate household items like dishes and cups. There were also a number of chess-playing robots in competition, designed to play against a human, using vision to detect moves and locate pieces.
There was also a lot else going on that I didn’t get to: the AI in Education track, a poker-playing competition, IAAI (the applied AI conference held in parallel with AAAI) and probably lots more.
To top it off, on Thursday morning those of us staying in the hotel were awakened to bull horns and shouted slogans. I had almost hoped that someone had arranged a protest spawned by the frightening advance of Artificial Intelligence and they had come to demand we stop our research immediately to avoid the inevitable enslavement/destruction/unemployment/ingestion of humanity by the machines. Not that this would be a valid concern or that I want to stop researching, but it would have provided some kind of strange vindication that the public thinks we are advancing.
Unfortunately it was actually a labour dispute between the Hyatt and some of its staff; they marched in front of the main entrance from 7am to 7pm the entire final day of the conference.
Best overheard quote:
You care about AI more than our jobs!
I’m pretty sure most attendees didn’t have a predefined utility for comparing those two entities. Hopefully they work it out.
All in all, a great conference in a great city.