
How Does Kinect Video Recording Work?




1. How Does Kinect Video Recording Work? This article will discuss how Kinect video recording works and why Azure Kinect volumetric capture is becoming popular for 3D object creation. When recording with a Kinect or another 3D camera, you get depth frames alongside the regular video stream. This makes 3D cameras unbelievably attractive, as it lets them measure the distance from the camera to every single pixel in the video. Such a method gives you moving 3D objects without any animation or rigging: it recreates the real depth of the object as well as its colour. You can then put this video into game engines, take out backgrounds, segment objects from the environment, and so on. Nowadays such recording is called volumetric capture, and it is usually done with Azure Kinect or Intel RealSense cameras. Azure Kinect volumetric capture is probably the most popular method for those who want to own the rig, because of the robustness and stability of the camera. From my experience it rarely loses frames, while RealSense has good and bad days.
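To make the depth stream idea concrete, here is a minimal sketch of grabbing one colour frame and its registered depth frame from a single Azure Kinect. It uses the open-source pyk4a wrapper around the Azure Kinect Sensor SDK; the resolution, depth mode and frame-rate settings are illustrative choices of mine, not values from this write-up.

```python
# Minimal sketch: grab one colour frame and its depth frame from an Azure Kinect.
# Requires the Azure Kinect Sensor SDK plus the pyk4a Python wrapper (pip install pyk4a).
import pyk4a
from pyk4a import Config, PyK4A

k4a = PyK4A(
    Config(
        color_resolution=pyk4a.ColorResolution.RES_720P,
        depth_mode=pyk4a.DepthMode.NFOV_UNBINNED,
        camera_fps=pyk4a.FPS.FPS_30,
        synchronized_images_only=True,  # only return captures that have both images
    )
)
k4a.start()

capture = k4a.get_capture()
color = capture.color              # HxWx4 BGRA image (numpy array)
depth = capture.transformed_depth  # depth in millimetres, registered to the colour image

print("colour frame:", color.shape)
print("depth at the centre pixel (mm):", depth[depth.shape[0] // 2, depth.shape[1] // 2])

k4a.stop()
```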

2. Azure Kinect Volumetric Capture: a 4-Kinect Azure volumetric capture setup. The problem with Azure Kinect volumetric capture, and with any other camera you might use, is that the cameras need to be synchronised. Here I should mention that in order to get volumetric video you must use multiple 3D cameras; I would advise having four or more of them. The more of them you add, the fewer occlusion points you will have, and therefore the better the quality of your recordings. You can try to do this with the Kinect SDK, but from my experience it is a complicated process and it gets you nowhere. Therefore, I found a subscription-based product for Azure Kinect volumetric capture: EF EVE Volcap software, which calibrates the cameras using markers. I have tried their software with four Azure Kinects, following their tutorial.
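Since the cameras have to be synchronised, it may help to see what the master/subordinate pattern exposed by the Azure Kinect Sensor SDK looks like in code. The sketch below assumes four devices daisy-chained with sync cables and again uses pyk4a; the device indices and the 160 µs laser-stagger step are my own illustrative assumptions, and this shows the plain SDK route, not what EF EVE Volcap does internally.

```python
# Sketch of hardware-synchronising several Azure Kinects daisy-chained with sync cables.
# Device indices, camera settings and the 160 µs delay step are illustrative assumptions.
import pyk4a
from pyk4a import Config, PyK4A, WiredSyncMode

NUM_DEVICES = 4  # a four-camera rig, as used in this write-up

def make_config(sync_mode, delay_usec=0):
    return Config(
        color_resolution=pyk4a.ColorResolution.RES_720P,
        depth_mode=pyk4a.DepthMode.NFOV_UNBINNED,
        camera_fps=pyk4a.FPS.FPS_30,
        synchronized_images_only=True,
        wired_sync_mode=sync_mode,
        # Stagger the depth lasers so the cameras do not interfere with each other.
        subordinate_delay_off_master_usec=delay_usec,
    )

# Start the subordinates first; they wait for the master's sync pulse.
subordinates = []
for idx in range(1, NUM_DEVICES):
    cam = PyK4A(make_config(WiredSyncMode.SUBORDINATE, delay_usec=160 * idx), device_id=idx)
    cam.start()
    subordinates.append(cam)

master = PyK4A(make_config(WiredSyncMode.MASTER), device_id=0)
master.start()

# One set of captures, one per camera (a real recorder would also match them by timestamp).
captures = [master.get_capture()] + [cam.get_capture() for cam in subordinates]
```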

3. Calibration Of Cameras For Kinect Video Recording. Azure cameras need to be placed quite close to the object. The company recommends a 2 x 2 metre setup, but I saw that one of the functionalities of the software lets you extend the distance. The setup was simple, but you need a stand on which to place the marker. This lets you measure and calibrate the feeds from the different cameras against a known location in the capture space. Kinect video recording gives you point clouds in real time. The point cloud is good enough for a creative project, but I was looking for realism, so there was a need to apply a mesh. For the mesh application I used another of the company's products, Creator. This software has a lot of functionality for editing volumetric video: I cut the video using its time editor, used its automatic frame cleaning to remove noise, and used what they call the "brush" to clean more important parts such as faces. Applying the mesh was very straightforward, and you can see the result right away. Then I took the Kinect video recording and post-processed it. My file was large, so I needed to leave it to process for around six hours. Once that was over, I exported the file into Unreal using the company's Unreal project.
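To show roughly what the marker-based calibration achieves, here is a generic sketch: matched marker positions seen by two cameras give a rigid transform (rotation plus translation), each depth frame is unprojected into a point cloud using the camera intrinsics, and the transform maps one camera's cloud into the other's space so the feeds can be merged. All of the numbers and helper names below are made up purely for illustration; this is the standard Kabsch/pinhole maths, not EF EVE's implementation.

```python
# Generic sketch of what marker-based calibration buys you: estimate a rigid transform
# per camera from matched marker positions, then merge the depth feeds into one space.
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t so that R @ p + t maps src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def depth_to_points(depth_mm, fx, fy, cx, cy):
    """Unproject a depth image (millimetres) into camera-space 3D points (metres)."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) / 1000.0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop pixels with no depth reading

# Marker positions seen by camera 1, matched to the same markers in camera 0's space
# (coordinates invented for illustration only).
markers_cam1 = np.array([[0.1, 0.0, 1.2], [0.4, 0.0, 1.3], [0.2, 0.3, 1.1], [0.5, 0.3, 1.4]])
markers_cam0 = np.array([[0.9, 0.1, 1.0], [1.1, 0.1, 1.3], [0.9, 0.4, 1.0], [1.2, 0.4, 1.3]])
R, t = rigid_transform(markers_cam1, markers_cam0)

# Merging: unproject each camera's depth frame and map camera 1 into camera 0's space.
# depth0/depth1 would come from the synchronised captures above, and the intrinsics
# (fx, fy, cx, cy) from each device's factory calibration.
# cloud0 = depth_to_points(depth0, fx0, fy0, cx0, cy0)
# cloud1 = depth_to_points(depth1, fx1, fy1, cx1, cy1) @ R.T + t
# merged = np.vstack([cloud0, cloud1])
```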

4. Conclusion. The process was very smooth and I was really happy with the quality. However, I believe that in order to get very high resolution I should add another two or four cameras; the software lets you add up to ten cameras and I had four. The functionality is good and I was especially happy with the automatic cleaning. However, the AI still needs to be improved, as some of the frames had noise. This can be cleaned with the "brush", but if you have a lot of noise it might become time-consuming. That said, I can't imagine having had to do all of this by myself without the software. I would recommend trying out this Azure Kinect volumetric capture setup. Source: www.ef-eve.com
