Last week, we (and most of the rest of the internet) covered some research from MIT that uses a brain interface to help robots correct themselves when they’re about to make a mistake. This is very cool, very futuristic stuff, but it only works if you wear a very, very silly hat that can classify your brain waves in 10 milliseconds flat.
At Brown University, researchers in Stefanie Tellex’s lab are working on a more social approach to helping robots more accurately interact with humans. By enabling a robot to model its own confusion in an interactive object-fetching task, the robot can ask relevant clarifying questions when necessary to help understand exactly what humans want. No hats required.
Whether you ask a human or a robot to fetch you an object, it’s a simple task to perform if the object is unique in some way, and a more complicated task if it involves several similar objects. Say you’re a mechanic, and you want an assistant to bring you a tool. You can point at a shelf of tools and say, “Bring me that tool.” Your assistant, if they’re human, will look where you point, and if there are only a handful of tools on the shelf, they’ll probably be able to infer which tool you mean. But if the shelf is mostly full, especially if it’s full of similar objects, your assistant might not be able to determine exactly which tool you’re talking about, so they’ll ask you to clarify somehow, perhaps by pointing at a tool and saying, “Is this the one you mean?”
To be useful in situations like these, your assistant has to have an understanding of ambiguity and uncertainty: They have to be able to tell when there is (or is not) enough information to complete a task, and then take the right action to help get more information when necessary, whether that uncertainty comes from the assistant not really getting something, or just from you not being specific enough about what you want. For robot assistants, it’s a much more difficult problem than it is for human assistants, because of the social components involved. Pointing, gestures, gaze, and language cues are all tricks that humans use to communicate information that robots are generally quite terrible at interpreting.
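The core decision here can be sketched as a simple confidence threshold over candidate objects: if one object stands out clearly given the human’s cues, fetch it; otherwise, ask a clarifying question about the most likely candidate. This is only a toy illustration of the idea, not the Brown lab’s actual model — the scores, threshold, and function names below are all hypothetical.

```python
def ask_or_fetch(scores, threshold=0.75):
    """Decide whether to fetch an object or ask a clarifying question.

    scores: hypothetical relevance scores per candidate object, e.g.
            combined evidence from pointing direction and spoken words.
    threshold: minimum confidence required to act without asking.
    Returns ("fetch", object) or ("ask", question).
    """
    total = sum(scores.values())
    # Normalize scores into a probability distribution over candidates.
    probs = {obj: s / total for obj, s in scores.items()}
    best = max(probs, key=probs.get)
    if probs[best] >= threshold:
        return ("fetch", best)
    # Too ambiguous: point at the most likely candidate and confirm.
    return ("ask", f"Is this the one you mean: {best}?")


# A sparse shelf: one tool dominates the evidence, so just fetch it.
print(ask_or_fetch({"wrench": 9, "hammer": 1}))

# A crowded shelf of similar tools: no clear winner, so ask first.
print(ask_or_fetch({"wrench_a": 4, "wrench_b": 5, "wrench_c": 3}))
```

The key design point is that asking has a cost (it slows the interaction down), so the threshold trades off speed against the risk of fetching the wrong object — exactly the trade-off an attentive human assistant makes implicitly.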