WebGPURenderer: enable better WebXR depth sensing #31075
Comments
@cabanier I'd like to know more about the proposal. What would you need to modify, even if the operation is executed first? Are you thinking of overwriting the depth of the material with a custom shader?
I was thinking of depth testing the material against the real world first; if the real world is in front of it, we would skip the remainder of the fragment code. It would be similar to the code I added in https://immersive-web.github.io/webxr-samples/layers-samples/proj-multiview-occlusion.html
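For illustration, here is a minimal CPU-side analogue of the per-fragment test described above, using the standard `XRCPUDepthInformation` API from the WebXR Depth Sensing module; `isOccluded` and its parameters are hypothetical names, and the linked sample performs the equivalent test in the shader.

```js
// Minimal sketch of the occlusion test, done on the CPU.
// depthInfo is an XRCPUDepthInformation obtained in a 'cpu-optimized'
// depth-sensing session via frame.getDepthInformation(view).
// x and y are view coordinates normalized to [0, 1].
function isOccluded(depthInfo, x, y, virtualDepthMeters) {
  // Depth of the real world along this ray, in meters.
  const realWorldDepth = depthInfo.getDepthInMeters(x, y);

  // If the real world is in front of the virtual fragment,
  // the fragment's shading can be skipped entirely.
  return realWorldDepth < virtualDepthMeters;
}
```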
I think we can do this with […]
I think this would require the author to update each material that was used. The advantage of the `fogNode` was that it was done at the scene level.
Do you have an example that uses […]?
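To illustrate the scene-level versus material-level distinction raised above, a rough sketch; the per-material property changed here is just a stand-in, not a real occlusion hook.

```js
import * as THREE from 'three';

const scene = new THREE.Scene();

// Scene-level: a single assignment affects everything that is rendered,
// which is how fog works today.
scene.fog = new THREE.Fog(0x000000, 1, 50);

// Material-level: the author must touch every material individually.
scene.traverse((object) => {
  if (object.isMesh) {
    object.material.transparent = true; // stand-in for a per-material change
  }
});
```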
Description
The current WebXR depth sensing implementation simply copies the depth into the target buffer.
This has the drawback that we can't use the latest WebXR additions, which make the depth "stick" to the real world.
In addition, we can't apply blurring to the occlusion pixels.
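For context, depth sensing is opted into at session creation. A minimal sketch using the standard WebXR Depth Sensing API follows; the newer additions that make the depth "stick" via reprojection are not shown here.

```js
// Request an immersive AR session with depth sensing enabled.
// 'gpu-optimized' exposes depth as a texture the renderer can sample;
// 'cpu-optimized' would expose a raw buffer instead.
const session = await navigator.xr.requestSession('immersive-ar', {
  requiredFeatures: ['depth-sensing'],
  depthSensing: {
    usagePreference: ['gpu-optimized'],
    dataFormatPreference: ['luminance-alpha', 'float32'],
  },
});
```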
Solution
I'd like to enhance the WebGPURenderer with a custom shader that always runs first.
In it, we will do the reprojection of the depth pixels as well as the blurring, similar to the WebXR samples example.
@Mugen87 Is there a way to enhance the renderer so that it always runs an operation first? I noticed that fog always runs last.
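One way to approximate "always runs first" with the current API is a manual pre-pass before the main render. A rough sketch, where `preScene`/`preCamera` and the depth-writing material are assumed names, and whether WebGPURenderer's XR path permits this sequencing is an open question.

```js
// Rough sketch of a pre-pass that runs before the regular scene render.
// preScene would contain a full-screen quad whose material reprojects,
// blurs, and writes the real-world depth (not shown).
renderer.autoClear = false;

renderer.setAnimationLoop(() => {
  renderer.clear(); // clear color and depth once per frame

  renderer.render(preScene, preCamera); // 1. write real-world depth first
  renderer.render(scene, camera); // 2. virtual content, depth-tested against it
});
```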
Alternatives
We could implement what was done in the old renderer, but the occlusion is not as good that way.
Additional context
No response