Machines Want Your Job: Interrogators

Fascinating. It shouldn’t be too surprising to hear that humans are very susceptible to suggestion by authority figures when asked to remember events, such as during police questioning. But apparently, this new study found that if identical wording is used for the questions but delivered by a robot (I don’t know if it was a disembodied robotic voice or some physical robot), then this influence disappears. I assume there would still be lots of ways to bias a witness through the text of the questions you ask, but a huge amount of the influence evidently comes from reading cues and listening to the human voice. So, chalk that up as another future career under threat from robots: Interrogator.

New Scientist: Robot inquisition keeps witnesses on the right track.

Discussion on G+


Robot Lifeguards?

Here’s some interesting research on a neural network approach to teaching a machine to detect when someone is drowning. This could lead to better detection of people in need of help, or even to the dispatch and guidance of robotic lifeguards.
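Out of curiosity, here’s a toy sketch of what such a classifier might look like. To be clear, this is my own illustration, not the paper’s actual model: the feature names, the made-up data, and the choice of PyTorch are all my assumptions, and a real system would work from video rather than a handful of hand-picked numbers.

```python
# A minimal sketch of a drowning-detection classifier, assuming we already
# have simple per-swimmer features extracted from video tracking.
import torch
import torch.nn as nn

# Hypothetical features per swimmer:
# [horizontal speed, vertical bobbing rate, fraction of time submerged]
X = torch.tensor([
    [1.2, 0.1, 0.05],   # steady lap swimmer
    [0.1, 0.9, 0.60],   # erratic bobbing, mostly submerged
    [0.8, 0.2, 0.10],   # casual swimmer
    [0.0, 1.1, 0.75],   # no forward motion, frequently under water
], dtype=torch.float32)
y = torch.tensor([[0.0], [1.0], [0.0], [1.0]])  # 1 = drowning

# A tiny feed-forward network producing a drowning probability.
model = nn.Sequential(
    nn.Linear(3, 8),
    nn.ReLU(),
    nn.Linear(8, 1),
    nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for _ in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# New observation: slow, bobbing, often submerged -> should score near 1.
print(model(torch.tensor([[0.05, 1.0, 0.7]])).item())
```

With only four training examples this is obviously a toy, but the idea scales: feed in richer features extracted from poolside cameras and the same training loop learns to flag swimmers in distress.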

The Robot and the Hound

This term I’m TAing a fourth-year course on Artificial Intelligence at UBC, so I’m going to have lots of AI news links coming in front of me. I might as well do something with them, so over the next few months I’ll post my thoughts on the current state of AI research and where it’s going, both in theory and in real-world applications.

I’ll add interesting links to my AINews list on the side of this page, but for a fantastic source of all things AI, check out this list from AAAI, which is actually maintained using AI algorithms that combine reader reviews with natural-language searches of the web.

First up, Robots.

Robert Silverberg recently decided to pine about the good ol’ days with his science fiction writer brethren, recalling what they thought the ‘robots of the future’ would look like and how it’s really turned out. It’s an interesting read from a great SF writer and gives a wonderful overview of the history of robots in fiction.

His conclusion essentially comes down to reminding roboticists to be sure to instill Asimov’s Three Laws of Robotics into all their machines lest they suffer the dire threat of lawsuits in the future.

The notion of responsibility is indeed an important one that AI researchers will need to deal with. Right now all robots and machines are pretty dumb, no matter how smart they seem. There is no computer on Earth with anything remotely resembling free will or self-awareness. Thus there is no way any machine could be blamed for its actions at this time, so any errors are either the result of improper usage, bad planning by engineers, or the random outcome of rules that no one thought would be a problem.

You can already begin to see this in the way Google, for example, responds to criticism about what shows up in their search results. Their algorithms are complex and seek patterns amongst amounts of data far too huge for any human to see. They use a lot of AI techniques, and the particular results that show up when you search for something really aren’t the result of anything one person at Google has done. Google put up the system which allows those results to be generated, but whether the results for one particular query are offensive isn’t something any human being made a decision about. So I think Google would have a strong argument that some result which was offensive and somehow caused harm wasn’t their fault, as long as they’d taken reasonable precautions.

I think the next stage of moral responsibility for machine behaviour will be something similar to a dog owner’s responsibility for the actions of their dog. Imagine a dog owner walking their dog at the park; the dog is running free and hurts a child or another dog. The dog did the deed and will likely be punished or put down, but you can’t sue the dog. However, depending on how the situation arose, the owner bears some responsibility. If the dog is usually nice and friendly, it was a small dog, it was a leash-free park, etc., then perhaps the owner would have no legal liability at all. At the other extreme, an abusive owner who raises a mean-spirited dog that often attacks others and then releases it in a leash-only park would face serious penalties, although still likely less than if they had performed the attack themselves. I don’t know what the laws actually are, but there is a reasonable way to write such laws so that the owner can be held responsible while the independent will of the dog itself is still considered.

Someday, Isaac willing, someone will create a computer that begins to approach a level of complexity where we could say its engineers are only as responsible for its actions as they are for their dog’s. While the Three Laws of Robotics are a nice idea, they may be no easier to hardwire than training your dog not to bark at others and to always come when you call.

So let’s keep the intent of the Three Laws, but let’s make sure that when that day comes we’ve raised a nice, friendly dog rather than an uncontrollable pit bull.
