In order to make fun animations on the LEDs, we need to know the exact location of each LED. For the MPC Renaissance, I started with a picture of the device and wrote a script that recorded where I clicked on that picture. By clicking on the LEDs in the order they were addressed in software, I essentially mapped each LED’s software address to its physical location.
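The bookkeeping behind that script is simple. Here's a minimal sketch (not the original script, and with illustrative names): clicks are recorded in LED address order, then enumerated into an address-to-position table. In the real tool the clicks would come from a GUI callback, for example OpenCV's `cv2.setMouseCallback` on the device photo.

```python
# Sketch of the click-to-map idea. The GUI part (capturing clicks
# on the photo) is simulated; a real version would hook a mouse
# callback such as cv2.setMouseCallback (assumption, not the
# original code).

def make_click_recorder():
    """Return a click list and a callback that appends to it."""
    clicks = []
    def on_click(x, y):
        clicks.append((x, y))
    return clicks, on_click

def map_led_addresses(clicks):
    """LED address i is simply the i-th click, so enumerate()
    turns the ordered click list into an address -> (x, y) map."""
    return {address: pos for address, pos in enumerate(clicks)}

clicks, on_click = make_click_recorder()
# Simulate clicking three LEDs left to right across the photo:
for pos in [(120, 45), (160, 45), (200, 45)]:
    on_click(*pos)
print(map_led_addresses(clicks))  # → {0: (120, 45), 1: (160, 45), 2: (200, 45)}
```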
It’s 2017 now, though, and everything is supposed to be solved with computer vision (or neural nets).
There’s a great open source project called OpenCV (Open Source Computer Vision Library) that has a bunch of awesome tools for giving robots eyeballs and letting them do the boring work for you, like reading license plates.
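Applied to the LED problem, the idea is to light the LEDs and let the computer find the bright spots itself. Here's a toy sketch of that: threshold the image, flood-fill each bright blob, and take its centroid. On a real photo you'd use something like OpenCV's `SimpleBlobDetector`; this version works on a plain 2D list of brightness values so it's self-contained.

```python
# Toy bright-blob detector: an illustration of the CV approach,
# not OpenCV itself. Pixels above `threshold` are grouped into
# 4-connected blobs and each blob's centroid is returned.
from collections import deque

def find_led_centroids(image, threshold=200):
    """Return (x, y) centroids of connected bright regions."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and not seen[y][x]:
                # Flood-fill one bright blob, collecting its pixels.
                queue, pixels = deque([(x, y)]), []
                seen[y][x] = True
                while queue:
                    cx, cy = queue.popleft()
                    pixels.append((cx, cy))
                    for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                        if 0 <= nx < w and 0 <= ny < h and \
                           image[ny][nx] >= threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((nx, ny))
                # Centroid = mean pixel position of the blob.
                centroids.append((sum(p[0] for p in pixels) / len(pixels),
                                  sum(p[1] for p in pixels) / len(pixels)))
    return centroids

# Two one-pixel "LEDs" on a dark 4x4 frame:
frame = [[0,   0, 0,   0],
         [0, 255, 0,   0],
         [0,   0, 0, 255],
         [0,   0, 0,   0]]
print(find_led_centroids(frame))  # → [(1.0, 1.0), (3.0, 2.0)]
```

Light one LED at a time and you get the software-address-to-position mapping for free, with no clicking at all.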
This reminds me of Kyle McDonald’s “Light Leaks”. A projector (or maybe a couple of projectors) is pointed at a pile of mirror balls in the center of a room. They then do a pixel-for-pixel mapping from projector to final reflected destination, allowing them to send very “noisy” 2D images that resolve into actual images after they reflect off the balls.
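The remapping step itself is just a lookup-table inversion. Here's a hedged sketch with illustrative names (the actual piece measured the mapping with structured light): once you know where each projector pixel lands after bouncing off the balls, each projector pixel takes the color the target image wants at that destination.

```python
# Sketch of pixel-for-pixel remapping: build the "noisy" projector
# image whose reflection reassembles a target image. `mapping` and
# `target` are toy stand-ins, not Light Leaks' data structures.

def build_projector_image(target, mapping):
    """mapping[projector_pixel] = destination pixel it lights up.
    Each projector pixel is assigned the color the target wants at
    its destination, so the reflected result matches `target`."""
    return {proj: target[dest] for proj, dest in mapping.items()}

# Toy two-pixel "room": the projector pixels swap positions after
# reflection, so the projector image is the target, scrambled.
mapping = {(0, 0): (1, 0), (1, 0): (0, 0)}
target = {(0, 0): "red", (1, 0): "blue"}
print(build_projector_image(target, mapping))  # → {(0, 0): 'blue', (1, 0): 'red'}
```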