As you probably know, when you have an image and its associated depth map, whenever the point of view changes, areas in the background get disoccluded, that is, they become visible. If you are a fan of Facebook 3D photos, you may have observed that these disoccluded areas get blurred. Some people (not I) are not too keen on this effect and would prefer to see the background magically appear out of thin air. Well, apparently, AI (Artificial Intelligence) can take care of that.

So, not only can AI generate depth maps from single images, it can also fill in the disoccluded areas. Pretty neat, I must say, if the results live up to the hype. The paper "3D Photography using Context-aware Layered Depth Inpainting" by Meng-Li Shih et al. promises that this inpainting can be done realistically with AI. There's a Google Colab for it, which means we can check it out right there in the browser, thanks to Google, without installing anything and without the need for a GPU card.

In the Google Colab implementation, they use MiDaS to get a depth map from a given reference image and then do extreme inpainting using AI. The output of 3D photo inpainting is the MiDaS depth map, a point cloud of the 3D scene, and four videos that kinda show off the inpainting (two of the zoom type a la Ken Burns and two of the wiggle/wobble type). To visualize the point cloud, which is in the PLY format, you can use MeshLab or CloudCompare (preferred). Note that the depth map doesn't need to come from MiDaS; you can certainly use your own depth map (although you may have to blur it).
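If you do go the bring-your-own-depth-map route, a quick Gaussian blur is usually enough to soften the hard depth edges. Here's a minimal sketch with OpenCV; the filenames and kernel size are my own picks, not anything from the project:

```python
import cv2

# Load your own depth map as a single-channel grayscale image
# ("my_depth.png" is a hypothetical filename).
depth = cv2.imread("my_depth.png", cv2.IMREAD_GRAYSCALE)

# A Gaussian blur softens hard depth discontinuities; the (11, 11)
# kernel is a guess -- tune it until the rendered 3D photo stops
# showing jagged borders around objects.
blurred = cv2.GaussianBlur(depth, (11, 11), 0)

cv2.imwrite("my_depth_blurred.png", blurred)
```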
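As for the point cloud, MeshLab and CloudCompare are the comfortable options, but if you'd rather stay in Python, a library like Open3D (my suggestion, not something the project itself uses) can also load and display the .ply:

```python
import open3d as o3d

# Load the .ply point cloud produced by 3D photo inpainting
# ("scene.ply" is a hypothetical filename).
pcd = o3d.io.read_point_cloud("scene.ply")

# Open an interactive viewer window (rotate/zoom with the mouse).
o3d.visualization.draw_geometries([pcd])
```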
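And for the curious, the depth-estimation step itself is easy to try outside the Colab, since MiDaS is published on torch.hub. A rough sketch, assuming the small model variant and a hypothetical input file (the Colab may use a different checkpoint):

```python
import torch
import cv2

# Fetch a MiDaS model and its matching preprocessing transforms from
# torch.hub (requires internet access on first run).
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform  # matches MiDaS_small

# Read the reference image ("photo.jpg" is a hypothetical filename).
img = cv2.imread("photo.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

with torch.no_grad():
    batch = transform(img)          # (1, 3, H', W') network input
    prediction = midas(batch)       # (1, H', W') inverse-depth map
    # Resize the prediction back to the input image resolution.
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze()

# Normalize to 0-255 and save as a grayscale depth map.
depth = prediction.cpu().numpy()
depth = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("depth.png", depth)
```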