
March 20, 2017 AT 7:00 am

Robot Knows the Right Question to Ask When It’s Confused #robotics

Via IEEE Spectrum

Last week, we (and most of the rest of the internet) covered some research from MIT that uses a brain interface to help robots correct themselves when they’re about to make a mistake. This is very cool, very futuristic stuff, but it only works if you wear a very, very silly hat that can classify your brain waves in 10 milliseconds flat.

At Brown University, researchers in Stefanie Tellex’s lab are working on a more social approach to helping robots interact with humans more accurately. By enabling a robot to model its own confusion in an interactive object-fetching task, they allow it to ask relevant clarifying questions when necessary to pin down exactly what a human wants. No hats required.

Whether you ask a human or a robot to fetch you an object, the task is simple if the object is unique in some way, and more complicated if several similar objects are involved. Say you’re a mechanic, and you want an assistant to bring you a tool. You can point at a shelf of tools and say, “Bring me that tool.” Your assistant, if they’re human, will look where you point, and if there are only a handful of tools on the shelf, they’ll probably be able to infer which tool you mean. But if the shelf is mostly full, especially with objects that look alike, your assistant might not be able to determine exactly which tool you’re talking about, so they’ll ask you to clarify somehow, perhaps by pointing at a tool and saying, “Is this the one you mean?”

To be useful in situations like these, your assistant has to have an understanding of ambiguity and uncertainty: they have to be able to tell when there is (or is not) enough information to complete a task, and then take the right action to gather more information when necessary, whether that uncertainty comes from the assistant misunderstanding the request or from you not being specific enough about what you want. This is a much harder problem for robot assistants than for human assistants, because of the social components involved: pointing, gestures, gaze, and language cues are all tricks that humans use to communicate information, and robots are generally quite terrible at interpreting them.
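The article doesn’t spell out the underlying model, but the core idea (a robot tracking its own uncertainty about which object you mean, and asking only when it’s genuinely confused) can be sketched as a simple Bayesian filter. In the hypothetical Python below, the robot keeps a probability distribution over candidate tools, updates it with each observation such as a pointing gesture or a spoken answer, and asks a clarifying question whenever the distribution’s entropy exceeds a threshold. The object names, likelihood values, and threshold are illustrative assumptions, not details from the research.

    import math

    def update_belief(belief, likelihoods):
        # Bayes' rule: weight each candidate's prior probability by how well
        # it explains the new observation, then renormalize.
        posterior = {obj: p * likelihoods.get(obj, 1e-9) for obj, p in belief.items()}
        total = sum(posterior.values())
        return {obj: p / total for obj, p in posterior.items()}

    def entropy(belief):
        # Shannon entropy in bits: 0 when certain, log2(N) when clueless among N.
        return -sum(p * math.log2(p) for p in belief.values() if p > 0)

    def decide(belief, ask_threshold=1.0):
        # Fetch the best candidate if confident enough; otherwise point at it
        # and ask "Is this the one you mean?"
        best = max(belief, key=belief.get)
        return ("fetch" if entropy(belief) < ask_threshold else "ask", best)

    # Three similar wrenches on a shelf, equally likely at first.
    belief = {"wrench_1": 1/3, "wrench_2": 1/3, "wrench_3": 1/3}

    # A pointing gesture weakly favors wrench_2; still too ambiguous, so ask.
    belief = update_belief(belief, {"wrench_1": 0.2, "wrench_2": 0.5, "wrench_3": 0.3})
    print(decide(belief))   # ('ask', 'wrench_2')

    # The human answers "yes," strongly favoring wrench_2; now confident, so fetch.
    belief = update_belief(belief, {"wrench_1": 0.05, "wrench_2": 0.9, "wrench_3": 0.05})
    print(decide(belief))   # ('fetch', 'wrench_2')

Raising ask_threshold makes the robot more willing to guess, while lowering it makes it chattier but more accurate, which is exactly the trade-off that deciding when a clarifying question is worth asking involves.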

Read more.

