Omi looks a little like an Apple AirTag and wants to be your next wearable AI companion. The goal is for Omi to know when you’re talking to it without a prompt: no “Hey…” needed. Hopes for Omi go much further, with the catchphrase “thought to action.” Whether or not Omi takes off, it’s pretty clear that in the near future, AI will always be listening (if it isn’t already).
For now, Omi’s actual purpose is much simpler: it’s an always-listening device (the battery reportedly lasts three days on a charge) that you wear on a lanyard around your neck to help you make sense of your day-to-day life. There’s no wake word, but because it’s always on, you can still talk to it directly. Think of it as 80 percent companion and 20 percent Alexa-style assistant.
Omi founder Nik Shevchenko is confident that Omi can improve on earlier AI wearables. All of Omi’s code is open source, and there are already 250 apps in the store. Omi’s plan is to be a big, broad platform rather than a specific device or app — the device itself is only one piece of the puzzle. The company is using models from OpenAI and Meta to power Omi, so it can iterate more quickly on the product itself.
Every Wednesday is Wearable Wednesday here at Adafruit! We’re bringing you the blinkiest, most fashionable, innovative, and useful wearables from around the web and in our own original projects featuring our wearable Arduino-compatible platform, FLORA. Be sure to post up your wearables projects in the forums or send us a link and you might be featured here on Wearable Wednesday!
Adafruit publishes a wide range of writing and video content, including interviews and reporting on the maker market and the wider technology world. Our standards page is intended as a guide to best practices that Adafruit uses, as well as an outline of the ethical standards Adafruit aspires to. While Adafruit is not an independent journalistic institution, Adafruit strives to be a fair, informative, and positive voice within the community – check it out here: adafruit.com/editorialstandards
Stop breadboarding and soldering – start making immediately! Adafruit’s Circuit Playground is jam-packed with LEDs, sensors, buttons, alligator clip pads and more. Build projects with Circuit Playground in a few minutes with the drag-and-drop MakeCode programming site, learn computer science using the CS Discoveries class on code.org, jump into CircuitPython to learn Python and hardware together, TinyGO, or even use the Arduino IDE. Circuit Playground Express is the newest and best Circuit Playground board, with support for CircuitPython, MakeCode, and Arduino. It has a powerful processor, 10 NeoPixels, mini speaker, InfraRed receive and transmit, two buttons, a switch, 14 alligator clip pads, and lots of sensors: capacitive touch, IR proximity, temperature, light, motion and sound. A whole wide world of electronics and coding is waiting for you, and it fits in the palm of your hand.
Have an amazing project to share? The Electronics Show and Tell is every Wednesday at 7:30pm ET! To join, head over to YouTube and check out the show’s live chat and our Discord!
While Omi seems cool, the always-listening nature and data overreach of AI models leave much to be desired.
The NeuroSky platform was easier to build with and integrate, and it offered more privacy.
It was even compatible with Artoo in Ruby. If they offered a CircuitPython version with a NeuroSky-style approach, that would be a more secure and less invasive interface.
Thought to action still relies on flawed models that mistranslate. Even YouTube can’t get closed captions right, so I doubt any model will truly get thoughts right.