Wednesday, September 11, 2013 Eric Richards

Dynamic Environmental Reflections in Direct3D 11 and C#

Last time, we looked at using cube maps to render a skybox around our 3D scenes, and also at how to use that sky cubemap to render some environmental reflections onto our scene objects.  While this method of rendering reflections is relatively cheap, performance-wise, and can give an additional touch of realism to background geometry, it has some serious limitations if you look at it too closely.  For one, none of our local scene geometry is captured in the sky cubemap, so, for instance, you can look at the reflective skull in the center of the scene and see the reflections of the distant mountains, which should be occluded by the surrounding columns.  This deficiency can be overlooked for minor details, or for surfaces with low reflectivity, but it really sticks out on a large, highly reflective surface.  Additionally, because we use the same cubemap for every object, the reflections on any given object are not totally accurate, as our cubemap sampling technique does not take into account the position of the environment-mapped object in the scene.

The solution to these issues is to render a cube map, at runtime, for each reflective object using Direct3D.  By rendering the cubemap for each object on the fly, we can incorporate all of the visible scene details (characters, geometry, particle effects, etc.) in the reflection, which looks much more realistic.  This comes, of course, at the cost of the additional overhead involved in rendering these extra cubemaps each frame, as we have to effectively render the whole scene six times for each object that requires dynamic reflections.
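To get a feel for how quickly that overhead adds up, here is a back-of-the-envelope cost model (my own arithmetic for illustration, not code from the sample): each dynamically reflective object adds six full scene passes on top of the one final pass from the player's camera.

```python
# Rough cost model for dynamic cubemap rendering (illustrative only).
# Each reflective object requires six extra scene renders per frame,
# one per cube face, on top of the single final pass.
def total_scene_renders(reflective_objects):
    # 1 final pass from the player camera + 6 cube-face passes per object
    return 1 + 6 * reflective_objects

print(total_scene_renders(1))  # 7 scene renders per frame for one reflector
print(total_scene_renders(4))  # 25 -- the cost grows quickly with more reflectors
```

This linear blow-up is why dynamic cubemaps are usually reserved for one or two hero objects, rendered at reduced resolution.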

This example is based on the second portion of Chapter 17 of Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0, with the code ported to C# and SlimDX from the native C++ used in the original example.  You can download the full source for this example from my GitHub repository, under the DynamicCubeMap project.


Constructing a Dynamic CubeMap

To build a dynamic cubemap, we need to render our scene six times, once for each face of the cube map.  To do this, we will need additional render target buffers, beyond the main render target we have been using thus far.  To save cycles on the GPU, we will render these additional render targets at a fraction of our normal resolution, which means that we also need to construct an appropriate Viewport and depth/stencil buffer matching our cubemap resolution.  We’ll construct these render targets, depth/stencil buffer and viewport in a helper function called from our Init() method, BuildDynamicCubeMapViews().

private const int CubeMapSize = 256;

private void BuildDynamicCubeMapViews() {
    // create the render target cube map texture
    var texDesc = new Texture2DDescription() {
        Width = CubeMapSize,
        Height = CubeMapSize,
        MipLevels = 0, // 0 requests a full mipmap chain
        ArraySize = 6,
        SampleDescription = new SampleDescription(1, 0),
        Format = Format.R8G8B8A8_UNorm,
        Usage = ResourceUsage.Default,
        BindFlags = BindFlags.ShaderResource | BindFlags.RenderTarget,
        CpuAccessFlags = CpuAccessFlags.None,
        OptionFlags = ResourceOptionFlags.GenerateMipMaps | ResourceOptionFlags.TextureCube
    };
    var cubeTex = new Texture2D(Device, texDesc);

    // create the render target view array, one view per cube face
    var rtvDesc = new RenderTargetViewDescription() {
        Format = texDesc.Format,
        Dimension = RenderTargetViewDimension.Texture2DArray,
        ArraySize = 1,
        MipSlice = 0
    };
    for (int i = 0; i < 6; i++) {
        rtvDesc.FirstArraySlice = i;
        _dynamicCubeMapRTV[i] = new RenderTargetView(Device, cubeTex, rtvDesc);
    }

    // create the shader resource view that we will bind to our effect for the cubemap
    var srvDesc = new ShaderResourceViewDescription() {
        Format = texDesc.Format,
        Dimension = ShaderResourceViewDimension.TextureCube,
        MostDetailedMip = 0,
        MipLevels = -1
    };
    _dynamicCubeMapSRV = new ShaderResourceView(Device, cubeTex, srvDesc);

    // release the texture, now that it is saved to the views
    Util.ReleaseCom(ref cubeTex);

    // create the depth/stencil texture
    var depthTexDesc = new Texture2DDescription() {
        Width = CubeMapSize,
        Height = CubeMapSize,
        MipLevels = 1,
        ArraySize = 1,
        SampleDescription = new SampleDescription(1, 0),
        Format = Format.D32_Float,
        Usage = ResourceUsage.Default,
        BindFlags = BindFlags.DepthStencil,
        CpuAccessFlags = CpuAccessFlags.None,
        OptionFlags = ResourceOptionFlags.None
    };
    var depthTex = new Texture2D(Device, depthTexDesc);

    var dsvDesc = new DepthStencilViewDescription() {
        Format = depthTexDesc.Format,
        Flags = DepthStencilViewFlags.None,
        Dimension = DepthStencilViewDimension.Texture2D,
        MipSlice = 0
    };
    _dynamicCubeMapDSV = new DepthStencilView(Device, depthTex, dsvDesc);

    Util.ReleaseCom(ref depthTex);

    // create the viewport for rendering the cubemap faces
    _cubeMapViewPort = new Viewport(0, 0, CubeMapSize, CubeMapSize, 0, 1.0f);
}

Most of this code should be fairly familiar by now, although if you need to brush up on creating RenderTargetViews and DepthStencilViews, I covered that previously in the first article in this series.  One thing to notice is that we are making use of the OptionFlags field of the Texture2DDescription structure this time around.  We use the TextureCube flag to inform Direct3D to interpret this texture as a cubemap, and we use the GenerateMipMaps flag so that we can have the GPU create the mipmap chain for this texture for us, as we will only be rendering to the first mipmap level.  We’re going to be rendering the cubemaps at 256x256 pixels, which is about one-quarter of our normal resolution.  Higher resolution will give you somewhat better looking cubemaps, at the cost of more time spent rendering the textures; remember, we have to draw the scene six times, so we want to use the lowest resolution that produces good results, in order to save cycles and memory on the GPU.
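That memory cost is easy to quantify.  Here is a hypothetical sizing helper (my own illustration, not part of the sample code) that estimates the GPU memory used by a cubemap with a full mipmap chain at 4 bytes per texel (R8G8B8A8):

```python
import math

# Illustrative helper: estimate the GPU memory consumed by a cubemap
# with a full mipmap chain, assuming 4 bytes/texel (R8G8B8A8_UNorm).
def cubemap_bytes(size, bytes_per_texel=4):
    levels = int(math.log2(size)) + 1  # MipLevels = 0 requests the full chain
    # each mip level halves the face dimensions, down to 1x1
    per_face = sum((size >> level) ** 2 * bytes_per_texel for level in range(levels))
    return per_face * 6                # six faces in the cube

print(cubemap_bytes(256) // 1024)   # ~2 MB for our 256x256 cubemap
print(cubemap_bytes(1024) // 1024)  # ~32 MB at 1024x1024 -- 16x the memory
```

Quadrupling the face resolution costs sixteen times the memory (and a similar factor in fill rate), which is why the smallest acceptable cubemap size pays off.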

The Cubemap Camera

For simplicity, we are going to use an array of six Cameras, one for each of the required directions, to render the environment map from the position of some object in our scene.  We’ll add a helper function to set up all of these cameras, BuildCubeFaceCamera(x,y,z), which will take the position of the environment-mapped object as input and create the necessary cameras:

private void BuildCubeFaceCamera(float x, float y, float z) {
    var center = new Vector3(x, y, z);
    // look along the positive and negative direction of each axis
    var targets = new[] {
        new Vector3(x + 1, y, z), // +X
        new Vector3(x - 1, y, z), // -X
        new Vector3(x, y + 1, z), // +Y
        new Vector3(x, y - 1, z), // -Y
        new Vector3(x, y, z + 1), // +Z
        new Vector3(x, y, z - 1)  // -Z
    };
    // use the world up vector (0,1,0), except for the +Y/-Y faces,
    // where the view direction would be parallel to it
    var ups = new[] {
        new Vector3(0, 1, 0),
        new Vector3(0, 1, 0),
        new Vector3(0, 0, -1),
        new Vector3(0, 0, 1),
        new Vector3(0, 1, 0),
        new Vector3(0, 1, 0)
    };
    for (int i = 0; i < 6; i++) {
        _cubeMapCamera[i] = new FpsCamera();
        _cubeMapCamera[i].LookAt(center, targets[i], ups[i]);
        // 90 degree FOV, 1:1 aspect ratio, near plane at 0.1
        _cubeMapCamera[i].SetLens(MathF.PI / 2, 1.0f, 0.1f, 1000.0f);
    }
}

Note that we set up the cameras’ projection matrices such that each has a 90 degree field of view and an aspect ratio of 1.0.  Combined with the target vectors we have chosen, this ensures that the six frustums together cover the full 360 degrees around the object; to visualize how the cube map cameras’ view frustums fit together, imagine a cube sliced from corner to corner through the center.  Note also that we use a much closer near plane for these cameras (0.1f, rather than 1.0f).  This allows us to capture objects that are very close to, but not abutting, the reflective object in the reflection cubemap.
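We can sanity-check these choices with a quick sketch (my own illustration, mirroring the look/up tables in BuildCubeFaceCamera above): every up vector must be perpendicular to its view direction for the LookAt basis to be well-defined, and a 90 degree vertical FOV at aspect 1.0 also yields a 90 degree horizontal FOV, so the six square frustums tile the sphere of directions exactly.

```python
import math

# The six cube-face view directions and the up vectors chosen above.
looks = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
ups   = [(0, 1, 0), (0, 1, 0), (0, 0, -1), (0, 0, 1), (0, 1, 0), (0, 1, 0)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# each up vector must not be parallel to its look direction;
# here they are exactly perpendicular
for look, up in zip(looks, ups):
    assert dot(look, up) == 0

# horizontal FOV derived from vertical FOV and aspect ratio:
# fov_x = 2 * atan(tan(fov_y / 2) * aspect)
fov_y = math.pi / 2
aspect = 1.0
fov_x = 2 * math.atan(math.tan(fov_y / 2) * aspect)
print(math.degrees(fov_x))  # ~90 degrees, matching the vertical FOV
```

If the aspect ratio were anything other than 1.0, the horizontal coverage would no longer be 90 degrees and the faces would either overlap or leave gaps.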

In this example, we are using a single, static reflective object, so we can create these cubemap cameras once, at initialization time.  For a more robust implementation, with multiple reflectors that can move about the scene, we would want to wrap our cubemap camera functionality up in a class with some more powerful capabilities.

Rendering the CubeMap

To draw the cubemap, we set each of the cubemap texture’s render targets in turn as the active render target, bind the cubemap depth/stencil buffer and viewport, and then render our scene as normal, minus the reflecting object.  Then we instruct the GPU to generate the mipmap chain for the dynamic cubemap texture, reset our primary viewport, render target and depth/stencil buffer, and render the scene again, this time including the reflecting object.  To make this easier, we will add an overload of the DrawScene function which accepts a Camera to render the scene from, and a flag indicating whether the central reflective sphere in the scene should be drawn.

public override void DrawScene() {
    // render the scene to each face of the dynamic cubemap, minus the reflective sphere
    ImmediateContext.Rasterizer.SetViewports(_cubeMapViewPort);
    for (int i = 0; i < 6; i++) {
        ImmediateContext.OutputMerger.SetTargets(_dynamicCubeMapDSV, _dynamicCubeMapRTV[i]);
        ImmediateContext.ClearRenderTargetView(_dynamicCubeMapRTV[i], Color.Silver);
        ImmediateContext.ClearDepthStencilView(_dynamicCubeMapDSV, DepthStencilClearFlags.Depth, 1.0f, 0);
        DrawScene(_cubeMapCamera[i], false);
    }
    // have the GPU generate the lower mip levels of the cubemap we just rendered
    ImmediateContext.GenerateMips(_dynamicCubeMapSRV);

    // restore the primary render target, depth/stencil buffer and viewport
    ImmediateContext.OutputMerger.SetTargets(DepthStencilView, RenderTargetView);
    ImmediateContext.Rasterizer.SetViewports(Viewport);
    ImmediateContext.ClearRenderTargetView(RenderTargetView, Color.Silver);
    ImmediateContext.ClearDepthStencilView(DepthStencilView, DepthStencilClearFlags.Depth | DepthStencilClearFlags.Stencil, 1.0f, 0);

    // render the scene from the player's camera, including the reflective sphere
    DrawScene(_camera, true);
    SwapChain.Present(0, PresentFlags.None);
}

The actual code of our DrawScene overload is almost the same as we have been using all along to render this particular scene; I’ve included only the (very slightly) interesting parts below:

private void DrawScene(CameraBase camera, bool drawCenterSphere) {
    ImmediateContext.InputAssembler.InputLayout = InputLayouts.Basic32;
    ImmediateContext.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;

    Matrix view = camera.View;
    Matrix proj = camera.Proj;
    Matrix viewProj = camera.ViewProj;

    // draw the rest of the scene geometry every time
    // ...

    // draw the reflective center sphere only when requested,
    // i.e. not while rendering the cubemap faces themselves
    if (drawCenterSphere) {
        for (int p = 0; p < activeReflectTech.Description.PassCount; p++) {
            var pass = activeReflectTech.GetPassByIndex(p);
            var world = _centerSphereWorld;
            var wit = MathF.InverseTranspose(world);
            var wvp = world * viewProj;

            // set the per-object effect variables, binding the dynamic
            // cubemap as the reflection texture for the sphere
            Effects.BasicFX.SetWorld(world);
            Effects.BasicFX.SetWorldInvTranspose(wit);
            Effects.BasicFX.SetWorldViewProj(wvp);
            Effects.BasicFX.SetCubeMap(_dynamicCubeMapSRV);

            pass.Apply(ImmediateContext);
            ImmediateContext.DrawIndexed(_sphereIndexCount, _sphereIndexOffset, _sphereVertexOffset);
        }
    }

    _sky.Draw(ImmediateContext, camera);

    // restore default render states
    ImmediateContext.Rasterizer.State = null;
    ImmediateContext.OutputMerger.DepthStencilState = null;
    ImmediateContext.OutputMerger.DepthStencilReference = 0;
    // note that we do not Present here; the public DrawScene handles that
    // after the final pass
}

There we go…

Ultimately, what you’ll see running this example is the large central sphere reflecting the skybox, the columns around it and the box it rests upon, and the eerie whirling skull orbiting it for no discernible reason.


Already, our scene is starting to look pretty good.  It’s not Skyrim, but it looks pretty realistic for the amount of effort we’ve put in thus far.  If you could go back in a time machine fifteen years, this would stack up pretty well against the likes of Quake 2 and Half-Life, at least on graphics quality.  Gameplay… not so much yet…  There is still a lot more we can do, however, and so we’ll tackle a couple more simple techniques to improve the visual quality of our scene next time.  You may notice that the columns and pavement in our scene look a little shabby next to our gorgeous skybox and shiny environment-mapped orb; even with textures applied, they look kinda flat, and very late-90s.  The textures give an impression of some surface detail, but the lighting doesn’t really match up with the grain of the textures, since the per-pixel normals are interpolated smoothly across each surface from the vertex normals, and the columns themselves still have the mathematically straight edges they came out of the GeometryGenerator with.  Fortunately, next time, when we move on to Chapter 18 of Mr. Luna’s book, we'll dive into normal and displacement mapping, both of which will help us give our scene objects a little more texture.

