Hacked Kinect is now a 3D video capture tool, b00m! This is so cool.
22 Comments
Awesome!!
Now if only we had a way to record our movements with the Kinect in order to animate a 3D computer model. Perhaps for quick and easy development of NPCs (non-player characters) in video games. *hint*
Hey, I should actually thank you guys. Since I don’t own an Xbox, I couldn’t have done it without the USB protocol dumps you extracted. So, many thanks!
Anyone else thinking that all that’s needed now is several Kinects working together to get full real-time 3D capture?
Now you need multiple Kinects to make an even better 3D model.
You could also use this to scan in an object to print using a RepRap or MakerBot.
wow!
Cool, so you can make 3D models.
So how about using 4 Kinects together to get a full 3D image?!? Then, in theory, we could create full holograms!!! Heh, a unit built for gaming could lead to us being able to make holographic phone calls!!!
Hooo boy, that is completely awesome. This is really, really big for the 3D printing community, man: you could use this to generate a really high-definition 3D scan of an object for recreation on a MakerBot! Or for game avatars and such, of course.
Really, that’s excellent work. Thanks for sharing this, maybe I’ll actually get a Kinect without buying an Xbox just to experiment with this stuff. 😀
Two Kinects for full 3-D!
W0W just wow, the possibilities for this thing are endless now (not that they weren’t in the first place)
This actually made my jaw drop. Thank you, Adafruit, for setting up the prize. Everyone got so excited about the Wiimote; now the Kinect is going to take off.
It could be used as a (coarse) 3D scanner! There are a number of nice papers on combining multiple depth images like this to create a non-shadowed version of, for example, the box he held up (i.e., hold it up to the camera, rotate it so the camera sees every face, and combine the images: you get a textured 3D model of the box).
It even works with only 2D webcam images (by building a 3D model as the object is rotated; once it’s been seen from all sides you can also calculate its depth), but it should be considerably easier when you already have the depth channel…
Next use 2 or more Kinects, and combine the data, to eliminate shadows/holes. 🙂
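For anyone who wants to play with the "rotate the object and combine the views" idea, here is a minimal sketch of the geometry involved, assuming NumPy, rough uncalibrated Kinect-era intrinsics, and viewpoint poses that are already known. It back-projects a 640x480 depth frame (in millimeters) into a point cloud with a pinhole model and merges clouds by a rigid transform; the synthetic frame and the hand-written poses are placeholders for illustration, and estimating those poses (registration) is the hard part the papers above actually address.

```python
# Minimal sketch: back-project a Kinect-style depth frame into a 3D point
# cloud, then merge clouds taken from known viewpoints. The intrinsics and
# the synthetic frame below are assumptions, not values from any real driver.
import numpy as np

# Rough Kinect-era intrinsics (assumed, not calibrated)
FX, FY = 594.0, 591.0      # focal lengths in pixels
CX, CY = 320.0, 240.0      # principal point for a 640x480 depth image

def depth_to_points(depth_mm):
    """Turn a (480, 640) depth image in millimeters into an (N, 3) point cloud."""
    h, w = depth_mm.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float64) / 1000.0          # meters
    valid = z > 0                                     # 0 marks "no reading" (shadow/hole)
    x = (us - CX) * z / FX
    y = (vs - CY) * z / FY
    return np.stack([x[valid], y[valid], z[valid]], axis=1)

def merge_views(clouds_and_poses):
    """Merge point clouds given the (R, t) pose of each view in a common frame."""
    merged = []
    for cloud, (R, t) in clouds_and_poses:
        merged.append(cloud @ R.T + t)                # rigid transform into world frame
    return np.concatenate(merged, axis=0)

if __name__ == "__main__":
    # Synthetic example: the same flat scene seen from two poses 90 degrees apart.
    fake_depth = np.full((480, 640), 1500, dtype=np.uint16)   # everything 1.5 m away
    cloud = depth_to_points(fake_depth)
    R90 = np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0]], dtype=float)
    world = merge_views([(cloud, (np.eye(3), np.zeros(3))),
                         (cloud, (R90, np.array([1.5, 0.0, 1.5])))])
    print(world.shape)  # combined cloud covering both viewpoints
```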
I wonder how difficult it would be to use and synchronise multiple Kinect devices to capture a full 3D textured image of a space. The hardest part would be the image registration (matching up the physical position of the objects), but the depth system would make that a lot easier.
Also, I wonder if multiple Kinect units can operate in the same space, or whether the IR grids they project would interfere with one another.
Anyone else thought about this?
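On the registration question above: if you can get a handful of matched 3D points between two Kinects (say, corners of a calibration target seen by both), the relative pose falls out of a closed-form least-squares fit. Below is a sketch of that step, assuming NumPy; the correspondences are synthetic, the function name is made up for illustration, and a real rig would still want something like ICP to refine the alignment.

```python
# Sketch of the registration step: given matched 3D points seen by two Kinects,
# recover the rigid transform (R, t) mapping one camera's frame into the other's.
# This is the classic Kabsch / Procrustes solution, not a full ICP pipeline.
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t such that dst ~= src @ R.T + t; src and dst are (N, 3)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t

if __name__ == "__main__":
    # Fake correspondences: points seen by Kinect A, and the same points seen
    # by Kinect B, which is rotated and shifted relative to A.
    rng = np.random.default_rng(0)
    pts_a = rng.uniform(-1, 1, size=(20, 3))
    true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
    true_t = np.array([0.5, 0.0, 2.0])
    pts_b = pts_a @ true_R.T + true_t
    R, t = rigid_transform(pts_a, pts_b)
    print(np.allclose(R, true_R), np.allclose(t, true_t))   # True True
```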
Multiple Kinects. Four walls, four Kinects.
That’s truly cool!
So how about adding another Kinect on the other side of the person and show both sides of the person as you rotate around? This is some really cool stuff. Keep up the great work.
Someone should make a 3D projection modeling app. The Kinect would be the perfect utility to make this happen.
If you use more than one Kinect, I don’t think they’ll be able to distinguish which IR lights are from which one. You would need to hack the hardware to maybe flicker them so that only one is on at a time.
This is excellent. I’m already playing with the thing more than perhaps I should; turning it into a 3D capture device for 3D modelling would be great, as a good part of my time will be spent building 3D models for my final year project 🙂
Hey, great work! I have been thinking a lot about AI recently and how our brains work; it would be neat to have a 3D capture system like yours working with an algorithm that could make a "fuzzy" guess at what the missing/black geometry and textures should look like (perhaps with Bayesian networks/probabilities and thousands of real-life samples). This would enhance the effect tremendously. As a 3D designer, I could make some pretty good guesses at what the missing geometry would look like, so why can’t a computer? I know this is a HUGE task, but I wanted to throw it out there. 🙂
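As a point of comparison for the "fuzzy guess" idea: even without any learned priors, the black shadow regions in a depth frame can be patched by propagating nearby valid depth values. The sketch below does exactly that, a plain diffusion fill and nothing Bayesian, assuming NumPy and a depth image where zero means "no reading"; it is a crude baseline rather than the probabilistic approach described above.

```python
# Crude stand-in for the "fuzzy guess" idea: fill holes (zero depth, the black
# shadow areas) by repeatedly averaging valid neighbouring depth values.
# No learning or priors involved, just neighbourhood propagation.
import numpy as np

def fill_depth_holes(depth_mm, iterations=50):
    """Fill zero-valued pixels in a depth image by diffusing valid neighbours."""
    depth = depth_mm.astype(np.float64)
    hole = depth == 0
    for _ in range(iterations):
        if not hole.any():
            break
        # Sum of the four neighbours, counting only valid (non-hole) pixels.
        padded = np.pad(depth, 1)
        valid = np.pad(~hole, 1).astype(np.float64)
        neighbour_sum = (padded[:-2, 1:-1] * valid[:-2, 1:-1] +
                         padded[2:, 1:-1] * valid[2:, 1:-1] +
                         padded[1:-1, :-2] * valid[1:-1, :-2] +
                         padded[1:-1, 2:] * valid[1:-1, 2:])
        neighbour_count = (valid[:-2, 1:-1] + valid[2:, 1:-1] +
                           valid[1:-1, :-2] + valid[1:-1, 2:])
        fillable = hole & (neighbour_count > 0)
        depth[fillable] = neighbour_sum[fillable] / neighbour_count[fillable]
        hole = hole & ~fillable
    return depth

if __name__ == "__main__":
    frame = np.full((480, 640), 1200.0)      # synthetic frame, 1.2 m everywhere
    frame[200:250, 300:350] = 0              # carve out a "shadow" hole
    filled = fill_depth_holes(frame)
    print((filled == 0).sum())               # 0: every hole was filled
```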