Gordon Hollingworth at the Raspberry Pi Foundation demonstrated a rarely exploited feature of the low-power media-acceleration hardware in the BCM2835 System on a Chip (SoC) at the heart of the Raspberry Pi computer. It is possible to isolate and display the vectors produced by its coarse motion estimation:
…One of the most interesting of these parts is the motion estimation block in the H264 encoder. To encode video, one of the things the hardware does is to compare the current frame with the previous (or a fixed) reference frame, and work out where the current macroblock (16×16 pixels) best matches the reference frame. It then outputs a set of vectors which tell you where the block came from – i.e. a measure of the motion in the image.
In general, this is the mechanism used within the application motion. It compares the image on the screen with the previous image (or a long-term reference), and uses the information to trigger events, like recording the video, writing an image to disk, or triggering an alarm. Unfortunately, at this resolution it takes a huge amount of processing to achieve this in the pixel domain, which is silly if the hardware has already done all the hard work for you!
So over the last few weeks I’ve been trying to get the vectors out of the video encoder for you, and the [below] animated gif shows you the results of that work. What you are seeing is the magnitude of the vector for each 16×16 macroblock, equivalent to the speed at which it is moving! The information comes out of the encoder as side information (it can be enabled in raspivid with the -x flag). It is one integer per macroblock and is ((mb_width+1) × mb_height) × 4 bytes per frame, so for 1080p30 that is 120 × 68 × 4 == 32KByte per frame. And here are the results….
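As a rough illustration of how that side data could be consumed, here is a minimal Python sketch that walks a vector file written by raspivid’s -x option, computes per-macroblock vector magnitudes, and flags frames with significant motion. Only the overall ((mb_width+1) × mb_height) × 4-byte frame size comes from the excerpt above; the per-record layout (a signed 8-bit x component, a signed 8-bit y component and a 16-bit SAD value), the vectors.dat filename, and the trigger thresholds are assumptions for illustration.

```python
# Sketch: read motion-vector side data from `raspivid -x vectors.dat` and
# flag frames with significant motion. The int8 x / int8 y / uint16 SAD
# record layout is an assumption; the excerpt only gives the total size of
# ((mb_width + 1) * mb_height) * 4 bytes per frame.
import numpy as np

WIDTH, HEIGHT = 1920, 1080                    # capture resolution
MB_W = (WIDTH + 15) // 16 + 1                 # 121 columns (one extra per row)
MB_H = (HEIGHT + 15) // 16                    # 68 rows
RECORD = np.dtype([("x", np.int8), ("y", np.int8), ("sad", np.uint16)])
FRAME_BYTES = MB_W * MB_H * RECORD.itemsize   # 4 bytes per macroblock

def frames(path):
    """Yield one (MB_H, MB_W) array of vector records per encoded frame."""
    with open(path, "rb") as f:
        while True:
            buf = f.read(FRAME_BYTES)
            if len(buf) < FRAME_BYTES:
                return
            yield np.frombuffer(buf, dtype=RECORD).reshape(MB_H, MB_W)

def motion_magnitude(frame):
    """Per-macroblock vector magnitude, i.e. how fast each block is moving."""
    return np.hypot(frame["x"].astype(np.float32), frame["y"].astype(np.float32))

if __name__ == "__main__":
    THRESHOLD = 2.0                           # hypothetical trigger level, in pixels
    for i, frame in enumerate(frames("vectors.dat")):
        mag = motion_magnitude(frame)
        if (mag > THRESHOLD).mean() > 0.05:   # more than 5% of blocks moving
            print(f"frame {i}: motion detected (mean magnitude {mag.mean():.2f})")
```

A capture that produces such a file might look like `raspivid -w 1920 -h 1080 -t 10000 -o video.h264 -x vectors.dat`, with the -x flag from the quote sending the vector side data to its own file; thresholding those magnitudes is the same trick the motion application performs far more expensively in the pixel domain.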