2 Kinects 1 Box – Amazing real time 3D depth camera work
First test of merging the 3D video streams from two Kinect cameras into a single 3D reconstruction. The cameras were placed at an angle of about 90 degrees, aimed at the same spot in 3D space.
The two cameras were calibrated internally using the method described in the previous video, and were calibrated externally (with respect to each other) using a flat checkerboard calibration pattern and manual measurements.
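To give a feel for what the merging step involves, here is a minimal Python/NumPy sketch, not the code actually used in the video: each depth frame is back-projected into 3D using its camera's intrinsic calibration, and the second cloud is mapped into the first camera's frame with the extrinsic (camera-to-camera) rigid transform recovered from the checkerboard. The intrinsic values, the 4x4 transform, and the random depth frames below are placeholders.

import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an N x 3 point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop invalid (zero-depth) pixels

def transform(points, T):
    """Apply a 4x4 rigid transform to an N x 3 point cloud."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]

# Placeholder intrinsics (roughly Kinect-like) and a placeholder extrinsic
# transform for a second camera rotated about 90 degrees around the vertical axis.
fx = fy = 580.0
cx, cy = 320.0, 240.0
T_cam2_to_cam1 = np.array([[ 0, 0, 1, -1.5],
                           [ 0, 1, 0,  0.0],
                           [-1, 0, 0,  1.5],
                           [ 0, 0, 0,  1.0]], dtype=float)

depth1 = np.random.uniform(0.5, 4.0, (480, 640))   # stand-ins for real depth frames
depth2 = np.random.uniform(0.5, 4.0, (480, 640))

cloud1 = depth_to_points(depth1, fx, fy, cx, cy)
cloud2 = transform(depth_to_points(depth2, fx, fy, cx, cy), T_cam2_to_cam1)
merged = np.vstack([cloud1, cloud2])    # single reconstruction in camera 1's frame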
Awesome. Now that there are 2 cameras, why not use 4, separated by 90° as well?
This is very interesting.
The synthetic view is generated as seen from a camera that does not actually exist: a virtual camera positioned wherever you like. The scene is built from the 3D knowledge of the environment captured by the two real cameras (see the sketch after these comments).
I would like to apply these algorithms to one-to-one videoconferencing, to build a virtual camera inside the screen: that would solve the eye-contact problem and let people video call while looking into each other's eyes.
Can you post a video of a person's face as seen from the virtual camera, to show how it is rendered? It could also be interesting to reduce the angle between the two cameras to something less than 90 degrees (maybe 45-60 degrees).
Here is a similar experiment of mine, with an on-screen scene that changes based on the user's face position.
http://marco.guardigli.it/2010/01/screen-view-change-basing-on-user-face.html
Marco ( @mgua )
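For anyone curious how the virtual camera Marco describes could work in practice, here is a minimal Python/NumPy sketch along the same lines as the one above, not code from the video: choose a pose for a camera that does not physically exist, transform the merged point cloud into that pose, and project it with a pinhole model, keeping the nearest point per pixel. The pose, intrinsics, resolution, and the stand-in cloud are all made up for illustration.

import numpy as np

def render_virtual_depth(points, T_world_to_cam, fx=580.0, fy=580.0,
                         cx=320.0, cy=240.0, width=640, height=480):
    """Render a depth image of a point cloud as seen from a virtual camera."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    cam = (homo @ T_world_to_cam.T)[:, :3]
    cam = cam[cam[:, 2] > 0.1]                      # keep points in front of the camera
    u = np.round(fx * cam[:, 0] / cam[:, 2] + cx).astype(int)
    v = np.round(fy * cam[:, 1] / cam[:, 2] + cy).astype(int)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[ok], v[ok], cam[ok, 2]
    depth = np.full((height, width), np.inf)
    np.minimum.at(depth, (v, u), z)                 # z-buffer: nearest point wins per pixel
    return depth

merged = np.random.uniform(-1.0, 1.0, (20000, 3)) + np.array([0.0, 0.0, 2.0])  # stand-in cloud

# Virtual camera pose: rotated 45 degrees about the vertical axis, i.e. roughly
# halfway between the two real cameras, then inverted to map world -> camera.
c = np.cos(np.pi / 4)
T_pose = np.array([[ c, 0,  c, -0.75],
                   [ 0, 1,  0,  0.0 ],
                   [-c, 0,  c,  0.75],
                   [ 0, 0,  0,  1.0 ]])
view = render_virtual_depth(merged, np.linalg.inv(T_pose))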