
Experiment: Merge 1080p MVC Depth Maps with 2160p Videos. #22

Open
kalanihelekunihi opened this issue Jun 18, 2024 · 4 comments

Comments

@kalanihelekunihi

I have an idea, but stars need to align just right to make it viable.

Basically, we take /most/ of your current workflow, but allow merging a second, higher-resolution source:
1080p 3D Blu-rays for the depth maps and 4K Blu-rays for the video.

This would obviously only work if the two sources are in sync with one another and share the same aspect ratio, but it would allow skipping the AI upscaler and instead using native sources to create the highest-fidelity copy possible.

There are tools out there that will export the RGBD frames of a video for manual tweaking and then re-combine them.
I've experimented with this on Owl3D, but the workflow there is far slower than what you've made.

@cbusillo
Owner

Interesting. Do you have any links or workflow examples? It would be easy enough to check aspect ratio. Ensuring sync would be a bit more difficult.
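For the aspect ratio piece, something like this minimal sketch would probably do (assuming ffprobe is on PATH; the file names are placeholders):

```python
# Minimal sketch: compare the aspect ratios of two sources via ffprobe.
import json
import subprocess

def video_dimensions(path):
    """Return (width, height) of the first video stream."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height", "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    stream = json.loads(out)["streams"][0]
    return stream["width"], stream["height"]

def same_aspect_ratio(path_a, path_b, tolerance=0.01):
    """True if both videos have (roughly) the same width:height ratio."""
    wa, ha = video_dimensions(path_a)
    wb, hb = video_dimensions(path_b)
    return abs(wa / ha - wb / hb) < tolerance

print(same_aspect_ratio("movie_3d_1080p.mkv", "movie_uhd_2160p.mkv"))
```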

I understand how the 4K video can be used to replace one of the views, but where does the secondary plane come from?

@mikemonello

I ripped a BD using the AI upscaler and ended up with a file that was twice as large but with little to no discernible difference when the two were viewed side by side. How significant should the difference be?

I know 4K/UHD looks significantly better, and that's what I was hoping to get. Is there a trick to the settings, or a certain type of movie that it works best with?

@cbusillo
Owner

@mikemonello I've had some people mention the upscale result was great, and some agree with you. I'm not sure if it's the type of movie or what. I compared a 1080p and a 4K 3D video: I noticed the difference on still frames, but didn't really notice one while in motion. It would be nice to have a place where discussions like this could happen. Honestly, I'm not great with video, and I'm always open to suggestions on quality techniques.

@kalanihelekunihi
Author

The theory: you can combine the detail from the 4K 2D video with the perspective information from the 1080p 3D video by using the RGBD depth maps. But it requires that the 2D video and 3D video share the same perspective, e.g. 2D = left eye.

Here is a general approach:

  1. Extract Depth Map
  2. Upscale the Depth Map to 4K to match target output
  3. Synchronize 4K 2D Video with the Depth Map

The big piece:

  4. Merge Detail and Depth Information

Using the depth map data, treat any obscured perspective / missing data as transparent. Any non-transparent data is retained from the 4K 2D master, whereas transparent data is a passthrough mask of the lower layer’s 1080p 3D master (rough sketch below).
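Roughly, per frame, in Python with OpenCV/NumPy (my assumptions, untested: the depth map is single-channel, zeros mark the obscured / missing regions, and the frame pairing from step 3 is already done):

```python
# Rough per-frame sketch of step 4, assuming a single-channel depth map
# where 0 marks obscured / missing data.
import cv2
import numpy as np

def merge_frame(frame_4k, frame_1080_view, depth_1080):
    h, w = frame_4k.shape[:2]
    # Step 2: upscale the depth map to the 4K target. Nearest-neighbor
    # keeps occlusion edges hard instead of blending them.
    depth_4k = cv2.resize(depth_1080, (w, h), interpolation=cv2.INTER_NEAREST)
    # Upscale the 1080p 3D view so it can serve as the lower passthrough layer.
    base = cv2.resize(frame_1080_view, (w, h), interpolation=cv2.INTER_CUBIC)
    # "Transparent" wherever depth says no data; keep 4K detail everywhere else.
    visible = (depth_4k > 0)[..., None]
    return np.where(visible, frame_4k, base)
```

Repeated per eye and per frame, the expensive part is just two resizes and a masked copy, so it should be far lighter than a full 3D pipeline.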

Blender’s texture layers and displacement mapping features are comparable. However, I suspect that overhead is massive overkill here, and that this should be possible via a smaller image library.

If I wanted to do this in Blender:

  1. Create a plane in the dimensions of the video
  2. Apply the textures: bottom = 1080p 3D frame, top = 4K 2D frame
  3. Add a displacement map node, and apply the depth map
  4. Texture bake
  5. Export
  6. Repeat for left and right, throughout the video

I suspect this can be done with significantly less overhead in an image editing tool by adopting that “outside of perspective = transparency”, stack, and then flatten approach.
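For illustration, with Pillow that stack-and-flatten is just an alpha composite (assuming an occlusion mask precomputed from the depth map, white = keep 4K detail):

```python
# Sketch of the stack / flatten idea with Pillow, assuming a precomputed
# mask image (white = keep 4K detail, black = pass the 1080p layer through).
from PIL import Image

def stack_and_flatten(top_4k, bottom_1080, mask):
    size = top_4k.size
    bottom = bottom_1080.convert("RGBA").resize(size, Image.Resampling.BICUBIC)
    top = top_4k.convert("RGBA")
    # Punch "outside of perspective" holes into the top layer...
    top.putalpha(mask.convert("L").resize(size, Image.Resampling.NEAREST))
    # ...then flatten: transparent regions pass the lower layer through.
    return Image.alpha_composite(bottom, top).convert("RGB")
```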

But, as I said, this is an experiment. It’s something I did a lot in 3D animation when adding an object to real footage to match lighting, shadows, and perspective, but I’m not fully sure how to do it with less overhead. It’s a pretty common practice: you take a bunch of high-quality 2D photos and HDRIs, reconstruct the scene, and do infill repainting to fill in painted-out wires or add a new object to the scene.
