Dubbed “AlterEgo,” the device resembles a jawbone hooked around the ear and attached to the user’s face between lip and chin. It reads the wearer’s internal voice through electrodes attached to the skin and responds via a bone conduction system.
The device will efficiently fuse human and machine, according to Arnav Kapur, who led the development team at MIT’s Media Lab. Its electrodes pick up subtle neuromuscular signals that are triggered when a person verbalizes internally.
When a user says words inside their head, artificial intelligence running on the device matches particular signals to particular words and feeds them into a computer. The computer, in turn, responds through a bone conduction speaker that plays sound into the ear without the need for an inserted earphone. The idea is to create a seemingly silent computer interface that only the AlterEgo wearer can hear and speak to.
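The signal-to-word matching described above can be sketched in highly simplified form. AlterEgo’s actual models are not described here; the following is a toy nearest-centroid classifier over invented 4-channel “electrode feature” vectors, meant only to illustrate the general idea of calibrating on a user and then mapping signal windows to vocabulary words.

```python
import numpy as np

# Purely illustrative sketch: map a window of electrode-signal features
# to a word from a tiny vocabulary by nearest centroid. The feature
# vectors and vocabulary are invented; real systems use learned models.

def fit_centroids(windows, labels):
    """Average the feature windows recorded for each word during calibration."""
    centroids = {}
    for word in set(labels):
        samples = [w for w, l in zip(windows, labels) if l == word]
        centroids[word] = np.mean(samples, axis=0)
    return centroids

def classify(window, centroids):
    """Return the word whose calibration centroid is closest to the window."""
    return min(centroids, key=lambda w: np.linalg.norm(window - centroids[w]))

# Toy calibration: two "words", each producing noisy 4-channel features.
rng = np.random.default_rng(0)
yes_base = np.array([1.0, 0.2, 0.8, 0.1])
no_base = np.array([0.1, 0.9, 0.2, 0.7])
windows = [yes_base + rng.normal(0, 0.05, 4) for _ in range(20)] + \
          [no_base + rng.normal(0, 0.05, 4) for _ in range(20)]
labels = ["yes"] * 20 + ["no"] * 20

centroids = fit_centroids(windows, labels)
print(classify(yes_base + rng.normal(0, 0.05, 4), centroids))
```

A per-user calibration step like the 15-minute customization reported in the trial would, in this simplified framing, amount to recording each user’s own feature windows before fitting the centroids.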
“We basically can’t live without our cellphones, our digital devices. But at the moment, the use of those devices is very disruptive,” said Pattie Maes, a professor of media arts and sciences at MIT. “If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself.”
In a trial conducted on 10 people, AlterEgo managed an average transcription accuracy of 92% after about 15 minutes of calibration for each person. That puts the device several percentage points below the 95%-plus accuracy of Google’s voice transcription service; the human threshold for word accuracy is thought to be around 95%. Kapur said the system will improve in accuracy over time.