Recently, you may have been hearing a lot about “Big Data”, which refers to our growing ability to find patterns in and process very large amounts of data, with or without structure. There are many contributing factors, but a recent set of algorithmic advances known as “Deep Learning” has led to a surge in accuracy in voice recognition and vision. This is relevant for scientists because Deep Learning is a particularly effective type of ‘unsupervised learning’, which does not require you to specify a bias up front to speed up learning. Thus it only learns a pattern from the data if there is a statistical basis for it.
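To make the idea of unsupervised learning concrete, here is a minimal sketch of one classic method, k-means clustering, which discovers groups in data purely from its statistics, with no labels supplied. The toy one-dimensional data and the choice of two clusters are illustrative assumptions, not anything from the articles discussed above.

```python
# Minimal k-means sketch: no labels are given; the algorithm finds
# cluster centers only because the data itself supports them.

def kmeans(points, k=2, iters=20):
    # Initialize centroids to the first k points (a simple choice).
    centroids = points[:k]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: each centroid moves to its cluster's mean.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious groups, around 1.0 and around 10.0; no labels supplied.
data = [0.9, 1.0, 1.1, 9.8, 10.0, 10.2]
print(kmeans(data))  # centroids settle near 1.0 and 10.0
```

Deep Learning methods are far more elaborate than this, of course, but the spirit is the same: structure is extracted from the data itself rather than imposed by hand-specified labels or biases.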
This recent article in Nature is causing some waves and explains the algorithms quite well. The discussion in Andrew Ng’s Google+ post is also good for adding a bit more context about the wider progress in Artificial Intelligence research.
Posted by Mark Crowley on January 15, 2014
http://www.technologyreview.com/view/511421/the-brain-is-not-computable/ I disagree with him on the computability of human minds, but it is fascinating research otherwise. He’s showing that new senses can be integrated into mammal brains, producing mice that can see infrared and monkeys that can feel themselves in a fully immersed computer avatar. I don’t see how people who know how brains, computers and, say, complex systems like the internet work can say that it’s impossible, even in theory, to replicate the complexity of the brain in silico. Saying something is not computable is a very strong claim, and a very precise one. It may be infeasible, but impossible? Proclaiming that the randomness or interconnectivity of the brain can’t be reproduced on a computer severely misunderstands what is going on in computing and AI research these days.
Posted by Mark Crowley on February 26, 2013
Fascinating. It shouldn’t be too surprising to hear that humans are very susceptible to suggestion by authority figures when asked to remember events, such as during police questioning. But apparently this new study found that if the identical words are used for questioning but delivered by a robot (I don’t know if it was a disembodied robotic voice or some physical robot), then this influence disappears. I assume there would still be lots of ways to bias the witness through the text of the questions you ask, but a huge amount of the influence comes from reading cues and listening to the human voice. So, chalk that up as another future career under threat from robots: interrogator.
New Scientist: Robot inquisition keeps witnesses on the right track.
Discussion on G+
Posted by Mark Crowley on February 10, 2013
Here’s some interesting research on a neural network approach to teaching a machine to detect when someone is drowning. This could lead to better detection of people in need of help, or to the dispatch and guidance of robotic lifeguards.
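As a rough illustration of the underlying idea, here is a sketch of a single artificial neuron, the building block of a neural network, trained to separate two hypothetical motion features into “swimming” vs. “drowning”. The features (vertical bobbing rate and forward speed), the data, and the labels are all invented for illustration; the actual study’s inputs and architecture are not described here.

```python
import math
import random

random.seed(0)
w = [random.uniform(-0.5, 0.5) for _ in range(3)]  # two weights plus a bias

def predict(x):
    # Weighted sum of features passed through a sigmoid activation,
    # giving a probability-like score that the pattern is "drowning".
    z = w[0] * x[0] + w[1] * x[1] + w[2]
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical training set: (bobbing rate, forward speed),
# labeled 1 = drowning, 0 = swimming.
data = [((0.9, 0.1), 1), ((0.8, 0.2), 1), ((0.7, 0.1), 1),
        ((0.1, 0.9), 0), ((0.2, 0.8), 0), ((0.1, 0.7), 0)]

# Gradient descent on the cross-entropy loss, one example at a time.
for _ in range(2000):
    for x, y in data:
        p = predict(x)
        for i in range(2):
            w[i] -= 0.5 * (p - y) * x[i]
        w[2] -= 0.5 * (p - y)

print(predict((0.85, 0.15)) > 0.5)  # a drowning-like pattern is flagged
```

A real detector would use many such neurons in layers and would learn from video or sensor streams rather than two hand-picked numbers, but the training loop above captures the basic mechanism.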
Posted by Mark Crowley on January 8, 2013
A month ago I attended the 2012 Neural Information Processing Systems conference in Lake Tahoe, Nevada. I’ve already posted some of my thoughts on it over at the Computational Sustainability Blog for your interest.
Posted by Mark Crowley on January 7, 2013