How It Works
Shadow mapping is a rendering technique that is fast and moderately realistic (though my code is neither).
We can quickly project a scene onto the camera's view using geometry. However, this projected scene doesn't have shadows. Raytracing, another rendering technique, solves this problem by simulating light bouncing around the scene, which allows the renderer to compute whether light hits a surface. See Wikipedia for more info.
However, raytracing is slow, because many samples and bounces are needed to remove noise. Shadow mapping solves this problem by first computing shadows from the lights, and then projecting the scene to the camera. This is explained in more depth below.
First, a depth map is generated for each light. This is an image where each pixel is interpreted as a length instead of a color. For our depth map, each pixel's value represents how close the nearest object is.
The depth map is calculated in all directions around the light (a 360 degree view), unlike a camera, which only looks at the space in front of it. The resulting depth map is a projection of the 360 degree view around the light onto a plane. You can think of it like flattening the Earth into a map.
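As a sketch, here is one way the "flattening" could be done. The equirectangular projection and the map resolution below are my own assumptions for illustration; a real engine may use a different projection (such as a cube map), and the distortion will differ accordingly.

```python
import math

# Map a 3D direction from the light to a pixel on a flat depth map,
# using an equirectangular ("flattened Earth") projection.
# The resolution and function name are assumptions, not from any engine.

WIDTH, HEIGHT = 64, 32  # depth map resolution (assumed)

def direction_to_pixel(dx, dy, dz):
    """Map a unit direction vector to (column, row) on the depth map."""
    # Longitude (around the light) and latitude (up/down), like on a globe.
    lon = math.atan2(dz, dx)                   # range -pi .. pi
    lat = math.asin(max(-1.0, min(1.0, dy)))   # range -pi/2 .. pi/2
    col = int((lon + math.pi) / (2 * math.pi) * (WIDTH - 1))
    row = int((lat + math.pi / 2) / math.pi * (HEIGHT - 1))
    return col, row
```

Every direction around the light lands somewhere on the map, so one flat image covers the full 360 degree view.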
These depth maps improve our speed, because they don't depend on where the camera is in the scene. Only the lights and the objects influence the map. This means that the camera can move around without needing to recompute depth maps. Shadow map engines are suitable for games, where fast rendering is needed.
Here is an example depth map with a cube in the scene:

The dark areas outside the cube indicate that nothing is there: the nearest "object" in those directions is infinitely far away. The cube brightens toward the vertex nearest the light, indicating that this corner is closer to the light than the rest of the cube.
The cube is also distorted, which is the result of projecting a spherical view onto a plane. This won't cause any problems for our software as long as the projection is consistent.
Once we compute the depth maps, we render the image. We repeat a process for each pixel:
First, we find what object the pixel "sees". That is, we find the closest portion of an object in the direction of the pixel. I will refer to this as the voxel. If there is no object present (the pixel sees into the "void"), we can color the pixel as a predetermined background color.
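The "what does the pixel see" step is usually a ray intersection test. Here is a minimal sketch, assuming the scene contains only spheres; the function name and scene layout are illustrative, not from any particular engine.

```python
import math

# Find where a ray from the camera first hits a sphere, if it does.
# Returns the distance along the ray to the nearest hit, or None when
# the ray misses (the pixel sees the "void" and gets the background color).

def ray_sphere(origin, direction, center, radius):
    """Intersect a ray with a sphere. `direction` is assumed unit length."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c  # quadratic discriminant (a = 1 for a unit direction)
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2  # nearer of the two intersection points
    return t if t > 0 else None
```

The voxel is then the point `origin + t * direction` for the smallest positive `t` over all objects.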
Next, we need to figure out whether the voxel is lit by each light. Here, we use the depth maps of each light and do computations one light at a time. Consider this cube image:

Notice how two faces are in light while one is dark. Let's say that our voxel is on one of the light surfaces. Using geometry, we can figure out a few values:
- The 3D coordinates of the voxel.
- The distance from the voxel to the light.
- The direction from the light to the voxel.
- The value on the depth map in that direction.
Remember, the value on the depth map is the distance to the closest object in that direction. Anything behind that object falls in its shadow. This means that, in any given direction, the light illuminates only the first voxel it reaches.
We now compare values 2 and 4. If they are equal, the voxel is the closest object to the light in that direction, so the light illuminates it. If value 2 is greater than value 4, the voxel sits behind the closest object, which casts a shadow on it.
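The comparison above takes only a few lines. The small bias term is my own addition, a common trick to avoid speckled self-shadowing caused by rounding error; it is not something the text describes.

```python
# Shadow test: compare the voxel's distance to the light (value 2)
# against the depth stored in the map for that direction (value 4).
# The bias tolerates tiny numeric differences between the two values.

def is_lit(distance_to_light, depth_map_value, bias=1e-3):
    """The voxel is lit when it is the closest thing the light sees."""
    return distance_to_light <= depth_map_value + bias
```

Without the bias, an exact equality test would fail whenever the two distances differ by a rounding error, darkening surfaces that should be lit.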
If the voxel is illuminated by the light, we add the light's influence to the voxel. This depends on the power of the light and some other factors.
In the real world (although our world could be the simulation of some aliens), the brightness of a light is proportional to the inverse square of the distance from the light. If we double our distance, the strength decreases to 1/4 of what it was. See this video by 3Blue1Brown for an explanation of why this is.
We divide the power of our light by the square of the distance to it, which we calculated in the previous section.
Last, we decrease the light's strength based on how tilted the surface is with respect to the direction of the light. If you look back at the picture of the cube, the two faces in light have slightly different brightness, because they are tilted differently to the light.
A face pointing directly at the light intercepts a large share of the light's rays. Tilt the face away, and a smaller share of rays hits it; the same surface area now receives less light, so it appears dimmer.
In our software, we can compute this falloff with the dot product of the voxel's normal vector and the normalized vector from the voxel to the light.
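Putting the last two ideas together, a sketch of one light's contribution to a voxel might look like this. The names and the clamp to zero for faces tilted away from the light are my own choices; the text only describes the math.

```python
import math

# Combine inverse-square falloff with the tilt (cosine) term from the
# dot product, for one light and one voxel. Vectors are 3-element lists.

def light_contribution(voxel, normal, light_pos, power):
    """Brightness a light adds to a lit voxel (assumes `normal` is unit length)."""
    to_light = [light_pos[i] - voxel[i] for i in range(3)]
    dist2 = sum(c * c for c in to_light)       # squared distance to the light
    dist = math.sqrt(dist2)
    unit = [c / dist for c in to_light]        # normalized direction to the light
    # Dot product of the surface normal with the direction to the light.
    cos_tilt = sum(normal[i] * unit[i] for i in range(3))
    cos_tilt = max(0.0, cos_tilt)              # faces pointing away get no light
    return power * cos_tilt / dist2            # inverse-square falloff
```

Summing this over every light that passes the shadow test gives the voxel's final brightness.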