2 Kinects 1 Box – Amazing real-time 3D depth camera work
First test of merging the 3D video streams from two Kinect cameras into a single 3D reconstruction. The cameras were placed at an angle of about 90 degrees, aimed at the same spot in 3D space.
The two cameras were calibrated internally using the method described in the previous video, and were calibrated externally (with respect to each other) using a flat checkerboard calibration pattern and manual measurements.
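For readers curious what the merging step looks like in practice, here is a minimal sketch, assuming each Kinect already yields a point cloud in its own camera frame and the checkerboard calibration produced a rigid transform (R, t) mapping camera 2's frame into camera 1's. All names and values are illustrative assumptions, not the code from the video.

```python
import numpy as np

def merge_point_clouds(points_cam1, points_cam2, R, t):
    """Express camera 2's points in camera 1's frame and concatenate.

    points_cam1, points_cam2: (N, 3) and (M, 3) XYZ arrays, one per Kinect.
    R (3x3) and t (3,): rigid transform from the external calibration,
    mapping camera 2 coordinates into camera 1 coordinates.
    """
    # For row-vector points, p' = R p + t becomes p @ R.T + t.
    points_cam2_in_cam1 = points_cam2 @ R.T + t
    return np.vstack([points_cam1, points_cam2_in_cam1])

# Hypothetical extrinsics: cameras about 90 degrees apart around the vertical axis.
theta = np.deg2rad(90.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([1.5, 0.0, 1.5])  # made-up baseline, in meters

cloud1 = np.random.rand(500, 3)  # stand-ins for real depth-derived points
cloud2 = np.random.rand(500, 3)
merged = merge_point_clouds(cloud1, cloud2, R, t)
print(merged.shape)  # (1000, 3)
```

In a real pipeline, (R, t) would come from solving the checkerboard's pose as seen by both cameras, rather than being hard-coded as above.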
Awesome! Now that there are 2 cameras, why not use 4, separated by 90° as well?
This is very interesting.
The synthetic view is generated as seen from a camera that does not actually exist: a virtual camera positioned wherever you like. The scene is built from the 3D knowledge of the environment taken from the two actual cameras.
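A minimal sketch of that re-projection idea, assuming the merged points are already in a common world frame; the virtual camera's pose and intrinsics below are illustrative assumptions, not code from the video.

```python
import numpy as np

def render_virtual_view(points, R_v, t_v, fx, fy, cx, cy):
    """Project merged 3D points into an arbitrary virtual pinhole camera.

    R_v (3x3), t_v (3,): pose of the virtual camera (world -> camera).
    fx, fy, cx, cy: pinhole intrinsics of the virtual camera.
    Returns (K, 2) pixel coordinates of the points in front of the camera.
    """
    cam = points @ R_v.T + t_v            # world frame -> virtual camera frame
    cam = cam[cam[:, 2] > 0]              # keep only points with positive depth
    u = fx * cam[:, 0] / cam[:, 2] + cx   # perspective projection
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)
```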
I would like to apply these algorithms to one-to-one videoconferencing, to build a virtual camera inside the screen, solve the eye-contact problem, and make it possible to video call while looking into each other's eyes.
Can you post a video with a person's face as captured from the virtual camera, to see how it is rendered? It could also be interesting to change the angle between the two cameras to something less than 90 degrees (maybe 45-60 degrees).
Here is a similar experiment of mine, with an on-screen scene that changes based on the user's face position.