What does the world look like to AI? Researchers have puzzled over this for decades, but in recent years the question has become more pressing. Machine vision systems are being deployed in more and more areas of life, from health care to self-driving cars, but “seeing” through the eyes of a machine — understanding why it classified that person as a pedestrian but that one as a signpost — is still a challenge. Our inability to do so could have serious, even fatal, consequences. Some would argue it already has, pointing to the fatal crashes involving self-driving cars.
New research from Google and nonprofit lab OpenAI hopes to further pry open the black box of AI vision by mapping the visual data these systems use to understand the world. The method, dubbed “Activation Atlases,” lets researchers analyze the workings of individual algorithms, unveiling not only the abstract shapes, colors, and patterns they recognize, but also how they combine these elements to identify specific objects, animals, and scenes.
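To make the idea concrete, here is a minimal sketch of the atlas-building step the method relies on: collect activation vectors from many inputs, project them onto a 2D layout, bin that plane into a grid, and average the activations in each cell. This is an illustration only — the real technique uses activations from a trained network and UMAP for the layout, and then renders each averaged vector with feature visualization; here random vectors and a simple PCA projection stand in for both.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for activation vectors recorded at one layer of a network
# (1,000 inputs, 64-dimensional activations). In the actual method,
# these come from a trained image classifier.
activations = rng.normal(size=(1000, 64))

# Project to 2D with PCA (SVD on centered data) as a simple substitute
# for the dimensionality reduction used by the researchers.
centered = activations - activations.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T  # (1000, 2) layout coordinates

# Bin the 2D layout into a grid and average the activation vectors
# that land in each cell. Each averaged vector is what the full method
# would then turn into an image via feature visualization.
grid = 8
lo, hi = coords.min(axis=0), coords.max(axis=0)
cells = np.floor((coords - lo) / (hi - lo + 1e-9) * grid).astype(int)

atlas = np.zeros((grid, grid, activations.shape[1]))
counts = np.zeros((grid, grid))
for (x, y), act in zip(cells, activations):
    atlas[x, y] += act
    counts[x, y] += 1
occupied = counts > 0
atlas[occupied] /= counts[occupied][:, None]

print("atlas cells:", atlas.shape, "occupied:", int(occupied.sum()))
```

The grid of averaged vectors is the “atlas”: nearby cells hold activations the network treats as similar, which is what lets researchers see how abstract features shade into one another across the map.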