ShadedPathV

Game Engine in Development!

ShadedPathV is a completely free C++ game engine built mainly on Khronos standards

Please note: ShadedPathV is in a phase of heavy development. We change something almost every day. At this time we do not recommend using ShadedPath for anything other than tests. The interface from application to engine may change considerably until release.

However, if you find something useful here, feel free to use it in your own projects. The very liberal license allows almost any kind of usage of ShadedPathV!

Contact: At this point we do not accept pull requests, but we are happy to answer any questions or comments via email: shadedpath.org@gmail.com

Some features:

  • Multi-Thread Rendering
  • Modern meshlet-based rendering engine built on top of Vulkan
    • true 64-bit addressing on the GPU
    • Physically Based Rendering (PBR) model
  • Support for VR games via OpenXR
  • Sound support for Ogg Vorbis: both background music and spatial sound attached to objects
  • Built on open standards, we currently support these environments:
    • Windows - this is where our focus lies and almost all our development is done
    • Linux - we also support Linux builds. Beware that graphics drivers on Linux can be a nightmare. Because we use very modern techniques that few engines currently utilize, you may find that Linux drivers which work for everything else do not work for ShadedPath
    • macOS - NOT supported. We had test builds for macOS, but the current level of Vulkan support just isn't enough for ShadedPath. We hope to come back to macOS if compatibility improves in the future

Release Plan

Sorry, we announce no dates :-)

We plan to implement features according to this list:

  • V 0.1 (done): General Setup, Shaders, Audio, OpenXR
  • V 0.2 (done): PBR Rendering
  • V 0.3 (done): Meshlet Rendering
  • V 0.3.1: LOD System
  • V 0.4: Animation
  • V 0.5: Shadows

Current State (Q3 / 2025)

World Creator 2025

We changed our approach to import from World Creator (https://www.world-creator.com/). Now we use the Blender Bridge to prepare our scenes:

  1. Sync your scene in World Creator
  2. Import Terrain in Blender and bake all needed textures to enable proper glTF export.
  3. Export Object Instances in World Creator, parse the info in ShadedPath

Use Blender to export terrain as glTF

We need terrain data as glTF, just like for every other object. Creating a proper glTF terrain file with World Creator is a bit complicated, because World Creator cannot export a glTF file with all the details we need on its own. So we use the Blender Bridge for that:

After syncing in World Creator and importing the terrain in Blender you will have a complicated node structure like this:

Blender Bridge Import

Blender can technically export this scene to glTF, but the procedural setup does not export well. Blender is unable to create the needed glTF textures automatically. But we can do it manually: The idea is to create a new Principled BSDF node, create all textures manually and fill them with Blender's Bake function.

Create new textures like here:

Blender Create New Texture Nodes.

You have full control over the size and other attributes of your textures. Be sure to use a proper Color Space. Usually, sRGB for Diffuse and Non-Color for the rest. As our terrains end up too dark with sRGB we have got good results with Khronos PBR Neutral. Choose texture sizes according to your needs, especially for the diffuse texture containing the terrain base color.

Baking: While only the texture node you want to bake to is selected, switch to the Cycles render engine and bake your texture. Don't forget to save your textures manually; Blender will not do that automatically.

After all textures have been baked you just need to connect your BSDF output with the Surface input connector of your Material Output node. Finally you can export the scene to glTF and prepare the glTF file for use in ShadedPath like for every other asset (gltf-transform for compressing texture data and creating Meshlet Data).

Here is our terrain rendered in ShadedPath with very high detail (16k diffuse texture):

Terrain Render 1 Terrain Render 1

Place Object Instances

Export object instances in World Creator according to this: https://docs.world-creator.com/reference/export/conventional-export#object-instances Then parse the info in ShadedPath

(More details later)

LOD System

Every mesh intended to be rendered with the PBR shader should contain LOD (level of detail) information. You will get a warning if this is missing from your asset files.

ShadedPath mandates a fixed set of 10 levels - LOD 0 to LOD 9. Higher numbers mean less detail. Currently, we recommend reducing the triangle count by a factor of 0.25 with each level. But this is not a strict requirement and will not be checked at runtime. Use your artistic judgement: e.g. for the coarser levels it may be much better to keep a few more vertices to roughly match the original shape than to insist on the exact reduction factor. 10 levels with a 0.25 reduction factor means that if you start with 6 million triangles on LOD 0 you end up with just around 20 triangles on LOD 9.
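The triangle budget per level follows directly from the reduction factor. A small illustrative helper (not part of the engine API) makes the arithmetic explicit:

```cpp
#include <array>
#include <cmath>
#include <cstdint>

// Triangle counts for the fixed 10-level LOD chain (LOD 0..9),
// assuming the suggested reduction factor of 0.25 per level.
std::array<uint64_t, 10> lodTriangleCounts(uint64_t lod0Triangles, double factor = 0.25) {
    std::array<uint64_t, 10> counts{};
    double t = static_cast<double>(lod0Triangles);
    for (int level = 0; level < 10; ++level) {
        counts[level] = static_cast<uint64_t>(std::llround(t));
        t *= factor; // each level keeps a quarter of the triangles
    }
    return counts;
}
```

For 6 million triangles on LOD 0 this yields roughly 23 triangles on LOD 9, matching the "around 20" estimate above.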

Usually, LOD info is not contained in assets you acquired, so you must prepare it yourself. We did not find a good way to automate LOD generation; trying to automate this in Blender was also underwhelming. So, here is our suggested manual Blender workflow for creating LOD info:

  • select LOD 0 in object mode and duplicate (Shift-D), right click to paste at the same location.
  • Rename the new mesh to something like name_LOD_1.
  • You might want to disable rendering of the older level in the Outliner to just see what you are currently working on.
  • Make sure you have LOD_1 selected, then add the Decimate modifier: Enable the Triangulate flag and set a Ratio of 0.25. You should see the changed mesh now. (Allow some time for Blender to work for very high triangle counts.)
  • Bake the modifier to the mesh by using Apply in the menu right of the mesh name.
  • Examine the mesh from all sides
  • To easily delete points and vertices that are not part of any triangle anymore, switch to Edit Mode, and select loose geometry by Select menu -> Select all By Trait -> Loose Geometry. Then delete unwanted vertices/points.
  • If there are visible holes (these usually start to appear after you have applied some iterations):
    • Use another modifier: Remesh: use Sharp mode and adjust Octree Depth to roughly get the triangle count you want. Also bake this modifier.
    • As you now have lost all UV mapping use Data Transfer Modifier to correct that: Select one of your older meshes as Source, then enable Face Corner Data, open it and choose UVs tab. Generate data by using the Generate Data Layers button. Also bake the result, once you are satisfied.
  • Now apply the same steps for LOD 2 to LOD 9

Obviously, this is just a suggestion. If you are an experienced artist or Blender user you might know better ways to create LOD levels. We just wanted to give a simple workflow example.

Meshlets

Before going deeper into animation we decided to come back to meshlet rendering. This topic has matured a lot since we last tested it some years ago: meshlet rendering is now supported by many NVIDIA and AMD GPUs, and even mid-priced laptops, e.g. with Intel Arc graphics, offer it.

See details about meshlet rendering in this NVIDIA article: https://developer.nvidia.com/blog/introduction-turing-mesh-shaders/

The glTF file format does not support meshlet data, so we needed to provide additional storage for meshlet data in supplementary files. Each glTF mesh loaded into ShadedPath that is intended to be rendered as a PBR asset needs its meshlet data in a file with the extension .meshlet next to the glTF file. We implemented a variation of the Greedy (vertex-based) meshlet generation algorithm from here: https://github.com/Senbyo/meshletmaker

Look here for some additional details about the proper use of meshlets: meshlets.md

Use the MeshManager tool to create and test your meshlet data:

Mesh with no meshlet data Meshlet data generated
Mesh Manager, no meshlet data Mesh Manager, meshlets data generated

Mesh Manager is also presented in more depth in a video on our YouTube channel. There are a number of debug options to help with checking meshlet rendering:

Regular Object Render Meshlet display enabled
meshlets full meshlets colored
Vertices overlayed Meshlet Bounding Boxes enabled
meshlets with vertices meshlets bounding boxes

Linux Build

We finally adapted our cmake build scripts for Linux. (All tests were done with Ubuntu 24)

Linux Build Linux Run
linux build linux run

We were very satisfied to see that only minor changes were necessary to build and run on Linux. However, graphics drivers for Linux are a nightmare. We tried a number of combinations and all had their own problems. As we only tested on an Intel Arc Laptop, we can't really say if the situation is better with NVIDIA or AMD graphics cards.

We could run most features, with the exception of meshlet rendering. As the same code runs on Windows on the same machine, and we do not get any errors or warnings from the Vulkan validation layer, we guess the problem is with the driver. Obviously, we cannot rule out the possibility that we made an error somewhere that only manifests under Linux. We will continue testing Linux on other machines later...

Animation (to be picked up later)

Preliminary animation workflow:

  • Create in CC
  • Export to FBX (do not use the Blender bridge)
  • basically use this tutorial: https://www.youtube.com/watch?v=uaiyQVq0JXU
  • for exporting use 'Models with Motion' and enable every export option except 'Motion only'

Current State (Q1 / 2025)

PBR Render

We finished the glTF 2.0 based Physically Based Rendering system for the metallic-roughness workflow. This means lighting of objects now correctly uses material properties and the lighting conditions set by the skybox.

Daylight Cloudy Sky

pbr daylight

Dark Space

pbr space

Thread System Re-Write

threads

Reasoning

We stuck with the old thread system for quite some time and gathered a lot of experience with it. In the end, we were too ambitious.

We implemented a completely free-running thread system, with the idea that each render thread would run as independently of the others as possible and the graphics hardware would ideally be busy all the time. This worked out to a great extent, but we had to write a lot of synchronization code to keep the system stable. While it is ok to have complicated code for complicated things, our neck was broken from an unexpected direction: VR headset input. Ideally, it should be easy to read position and orientation info from a headset and use it in any number of render threads. In practice, it is not possible. Or at least it was not possible for us. Once you begin a frame in OpenXR you have to finish that frame, including the final image copy to the headset; just opening another frame in another thread is not allowed by OpenXR. We tried to program around that by beginning the OpenXR frame just before the final image copy, after the frame had already been rendered internally (outside of OpenXR). But then we didn't have the right headset position and orientation prediction at the time we rendered the frame. After trying to fix that as well, with even more complicated code, we decided it was no longer worth it.

We implemented a more traditional rendering engine, where we use multiple threads to speed up rendering of the current frame, but finish the frame before we start another. You can read a good summary of the ideas we used here at the great Vulkan Guide.

Q3 / 2024

Using World Creator © terrain

We can now directly import glTF exports from World Creator (https://www.world-creator.com/). This covers the base terrain glTF and the heightmap.

There is also code to get terrain height directly from the terrain vertices at full precision (meaning exactly the same precision as the terrain itself). It's a constant-time algorithm based on finding the right terrain triangle for a given x/z float value and interpolating to get the exact height within the triangle.
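The lookup can be sketched as follows. This is an illustrative stand-alone version under assumed names (a regular grid of height samples, each cell split into two triangles); the engine's actual data layout may differ, and the query point must lie inside the grid:

```cpp
#include <cmath>
#include <vector>

// Constant-time height lookup on a regular triangulated grid:
// 'size' x 'size' height samples with cell spacing 'spacing',
// each cell split into two triangles along the fx + fz == 1 diagonal.
struct HeightGrid {
    int size;             // samples per side
    float spacing;        // world units between samples
    std::vector<float> h; // size*size height values, row-major

    float heightAt(float x, float z) const {
        float gx = x / spacing, gz = z / spacing;
        int cx = static_cast<int>(std::floor(gx));
        int cz = static_cast<int>(std::floor(gz));
        float fx = gx - cx, fz = gz - cz; // position inside the cell, 0..1
        float h00 = h[cz * size + cx];
        float h10 = h[cz * size + cx + 1];
        float h01 = h[(cz + 1) * size + cx];
        float h11 = h[(cz + 1) * size + cx + 1];
        // pick the triangle the point falls into, then interpolate on
        // that triangle's plane - exact, like the terrain mesh itself
        if (fx + fz <= 1.0f)
            return h00 + fx * (h10 - h00) + fz * (h01 - h00);
        return h11 + (1.0f - fx) * (h01 - h11) + (1.0f - fz) * (h10 - h11);
    }
};
```

Because the cell index is computed directly from x/z, no search over triangles is needed, which is what makes the lookup constant-time.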

wc_scene_1

wc_scene_2

Helper classes for app development

The AppSupport class was added, which removes a lot of boilerplate code from applications, mostly options and camera handling.

OpenXR Enablement

All sample apps are now VR enabled through OpenXR.

billboards_vr

Q2 / 2024

Screenshots

We have always had the option to run the engine in screenshot mode, where every frame is stored in the file system after rendering. This mode is mostly useful for automated tests. Now we added an option to store the next rendered single frame. The screenshot is taken from the backbuffer, not from the output window, so it will be in the resolution of the backbuffer. See the LandscapeGenerator app for details.

Creating Heightmaps

Creating heightmaps requires 2 steps:

  • Create a raw file with rectangular height data in world coordinates
  • Convert the raw file to a ktx2 texture

You can use the LandscapeGenerator app to create heightmaps: just press h to store a heightmap with the current pixel size. See the log to find the place where the file was written. Of course you can use any tool you like, as long as you end up with a rectangular raw file of 32-bit float values in world coordinates.
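As a sketch, writing such a raw file from C++ could look like this. The helper name is hypothetical; the file layout matches what the ktx conversion step expects (plain 32-bit floats, no header):

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Write height data as a raw file of 32-bit floats, suitable for
// 'ktx create --format R32_SFLOAT --raw ...' (illustrative helper).
bool writeRawHeightmap(const std::string& path,
                       const std::vector<float>& heights,
                       uint32_t width, uint32_t height) {
    if (heights.size() != static_cast<size_t>(width) * height) return false;
    std::ofstream out(path, std::ios::binary);
    if (!out) return false;
    out.write(reinterpret_cast<const char*>(heights.data()),
              heights.size() * sizeof(float));
    return static_cast<bool>(out);
}
```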

After that convert the raw file to ktx2 with a command like this:

ktx create --format R32_SFLOAT --raw --width 1025 --height 1025 ./heightmap.raw ./heightmap.ktx2

Landscape generation with Diamond-square algorithm

We needed a way to generate landscapes and decided to implement the diamond-square algorithm. This is shown in the app LandscapeGenerator. It also utilizes the new background GPU upload mechanism. Without it there would be stuttering, because e.g. uploading 2 million lines to GPU memory takes longer than drawing a frame. Uploading is done in the background, and only after it has finished do the render threads switch to the new resource set.
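The algorithm itself can be sketched in a few dozen lines. This is a minimal stand-alone version, not the engine's actual implementation; the function name and parameters are made up for illustration:

```cpp
#include <cmath>
#include <random>
#include <vector>

// Minimal diamond-square sketch: generates a (2^n + 1) x (2^n + 1)
// heightmap, halving the random amplitude each iteration.
std::vector<float> diamondSquare(int n, float roughness, unsigned seed) {
    const int size = (1 << n) + 1;
    std::vector<float> h(size * size, 0.0f);
    std::mt19937 rng(seed);
    auto rnd = [&](float amp) {
        std::uniform_real_distribution<float> d(-amp, amp);
        return d(rng);
    };
    auto at = [&](int x, int y) -> float& { return h[y * size + x]; };
    // seed the four corners with random heights
    at(0, 0) = rnd(1); at(size - 1, 0) = rnd(1);
    at(0, size - 1) = rnd(1); at(size - 1, size - 1) = rnd(1);
    float amp = roughness;
    for (int step = size - 1; step > 1; step /= 2, amp *= 0.5f) {
        int half = step / 2;
        // diamond step: each square's center gets the corner average
        for (int y = half; y < size; y += step)
            for (int x = half; x < size; x += step)
                at(x, y) = (at(x - half, y - half) + at(x + half, y - half) +
                            at(x - half, y + half) + at(x + half, y + half)) / 4 + rnd(amp);
        // square step: edge midpoints get the average of their neighbors
        for (int y = 0; y < size; y += half)
            for (int x = (y / half % 2 == 0) ? half : 0; x < size; x += step) {
                float sum = 0; int cnt = 0;
                if (x >= half)       { sum += at(x - half, y); ++cnt; }
                if (x + half < size) { sum += at(x + half, y); ++cnt; }
                if (y >= half)       { sum += at(x, y - half); ++cnt; }
                if (y + half < size) { sum += at(x, y + half); ++cnt; }
                at(x, y) = sum / cnt + rnd(amp);
            }
    }
    return h;
}
```

Halving the amplitude per iteration gives the characteristic fractal roughness; larger initial roughness values produce more jagged terrain.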

billboards

cmake build

Changed build from manual VS2022 project to cmake.

Currently this is Windows-only. We also tried a Mac build, but realized that its Vulkan support is lacking too much. Maybe we will pick it up later if the situation changes.

Windows

We use vcpkg for dependency management. If some of the required libraries are missing, the build will fail. If all dependencies are set up correctly on your local machine, the build is initiated as follows. Be sure to have installed and configured vcpkg correctly!

Clone the repo, then run these commands in the top-level project folder:

Create the VS2022 project using cmake: Be sure to run this in a Developer PowerShell for VS or from within the IDE, otherwise you will get errors about missing cl.exe or ninja.exe

cmake --preset x64-debug

Now the cmake configuration phase is done and you can build the executable in the IDE, or run this command:

cmake --build --preset x64-debug

If all goes well you will find the built app here:

 cd .\out\build\x64-debug\src\app\
 .\app.exe

Tests

You should be able to see the tests in Test Explorer, or run them all by selecting the cmake test target and right-clicking Debug.

You need to copy the folder shader.bin from .\out\build\x64-debug\src\app\shader.bin to .\out\build\x64-debug\src\test. Otherwise the test cases won't find the compiled shader files.

Debugging

Make sure to copy https://github.com/icaven/glm/blob/master/util/glm.natvis to your local machine. This will enable you to see glm vectors in the debugger properly. You can use this folder C:\Program Files (x86)\Microsoft Visual Studio\2022\Community\Common7\Packages\Debugger\Visualizers. Adapt the path to your VS2022 installation.

Linux

install VS Code, Vulkan, cmake, vcpkg, gcc, ninja-build

You might get an error about missing system libraries. These can be installed on Ubuntu systems via sudo apt install libxinerama-dev libxcursor-dev xorg-dev libglu1-mesa-dev pkg-config ninja-build

then just install the missing packages, clean the build folder and try again, e.g.:

sudo apt-get update
sudo apt-get install -y build-essential

Q1 / 2023

Using Objects from glTF files

Loading a glTF file is done like this: you just specify the file name and an id string that will be used to access the meshes in this file. An id may only consist of letters, numbers, and underscores.

engine.meshStore.loadMesh("grass.glb", "Grass");

Every object needs to be part of an object group (for some bulk operations). If there is only one mesh in the file you can access it like so to position the object into the world:

engine.objectStore.createGroup("ground_group");
WorldObject *obj = engine.objectStore.addObject("ground_group", "Grass", vec3(0.0f, 0.0f, 0.0f));

If there are multiple meshes in the glTF file you add a number selector to its id when adding it. To select the 7th mesh (counting starts at 0):

engine.objectStore.addObject("ground_group", "Grass.7", vec3(0.0f, 0.0f, 0.0f));

The first mesh is available both with Grass.0 and Grass.
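The selector syntax can be parsed with a few lines. This is an illustrative sketch with a made-up helper name, not the engine's actual code; it assumes a well-formed id with digits after the dot:

```cpp
#include <string>
#include <utility>

// Split a mesh id like "Grass.7" into base id and mesh index.
// A plain "Grass" maps to index 0, matching the convention that
// the first mesh is available as both Grass and Grass.0.
std::pair<std::string, int> parseMeshId(const std::string& id) {
    auto dot = id.rfind('.');
    if (dot == std::string::npos) return {id, 0};
    return {id.substr(0, dot), std::stoi(id.substr(dot + 1))};
}
```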

PAK Files

PAK file support for storing textures and glTF files in one big binary file.

If data.pak is found in the asset folder it is automatically opened and parsed. All asset files found inside will be used instead of single files. Only files not found in the pak file will be looked up in the other asset folders, texture and mesh.
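The lookup order described above can be sketched like this (a minimal illustration with made-up names and return values, not the engine's actual code):

```cpp
#include <string>
#include <unordered_set>

// Asset lookup order: files present in data.pak win; everything
// else falls back to the texture and mesh asset folders.
std::string resolveAsset(const std::unordered_set<std::string>& pakIndex,
                         const std::string& name) {
    if (pakIndex.count(name)) return "pak:" + name;
    return "folder:" + name; // searched in texture/ and mesh/
}
```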

PAK file creation is done with a simple Java class Pak.java. Currently, the list of files to add is hard-coded in the Java class.

Rationale behind our pak file support is as follows:

  • We need a way to protect the IP of assets. If we simply shipped assets in their original form, anyone could copy them into their own projects without any effort. The binary format is very simple, but it still provides some protection. We will add a disclaimer to binary releases that reverse engineering of pak files is forbidden.
  • There will be no pak file extractor tool. It is a one-way ticket: pak files should only be read by a running engine.
  • Maybe we will need some more protection later (like encrypting assets in pak files), but for now this will do.

Billboards

billboards

Performance is ok on my RTX 2080 Ti, displaying 50 million billboards at 30 to 120 FPS (4K backbuffer rendering and a 1440p display window).

50 Million Trees far away @120Hz 50 Million Trees closer @30Hz
50 Million 1 50 Million 1

Mesh Shader

Started implementing a mesh shader for billboard rendering, but realized that it is only supported on newer NVIDIA cards - not even my dev laptop has this feature. So we stopped for now. There is a user API to enable the feature for Vulkan device creation. Probably not worth investing more time now to implement a mesh shader. Maybe it will make sense some day if support is extended.

Cube Maps

To be able to continue with PBR implementation we first need cube maps for a skybox. The maps need to be prepared offline like so:

  1. https://jaxry.github.io/panorama-to-cubemap/ - Change rotation to 0. Load a single file with an equirectangular (distorted) projection (pixel width == 2 * pixel height) into this page. Then click each tile to download all 6 .png files
  2. toktx --genmipmap --uastc 3 --zcmp 18 --verbose --t2 --cubemap cube.ktx2 px.png nx.png py.png ny.png pz.png nz.png

High res cube map in action: high res cube map

State Q2 / 2022

Texture preparation

Textures not embedded in glTF have to be prepared for engine use. See the batch file create_textures.sh for details. The command line is:

../libraries/ktx/bin/toktx --genmipmap --uastc 3 --zcmp 18 --verbose --t2 ../data/texture/eucalyptus.ktx2 ../datainput/texture/eucalyptus.png

The PNG file must be properly exported: remove the ICC profile chunk during GIMP export.

PBR Shader

As a first step we can now parse glTF files and render objects with just the base texture. No Lighting.

pbr with only base texture pbr with only base texture

Texture Loading from glTF Files

gltf-transform needs this to run natively on Windows powershell:

* KTX-Software has to be in path
* https://github.com/coreybutler/nvm-windows
* nvm install latest
* npm install --global --save @gltf-transform/core @gltf-transform/extensions @gltf-transform/functions
* npm install --global @gltf-transform/cli
* Test install via: gltf-transform uastc WaterBottle.glb bottle3.glb

Implemented texture workflow for reading glTF files:

  1. Downloaded models in glTF format (e.g. from Sketchfab) usually have simple .png or .jpg textures with no mipmaps.

  2. We decided to go for KTX 2 texture containers with supercompressed format VK_FORMAT_BC7_SRGB_BLOCK. It seems to be the only texture compression format that has wide adoption.

  3. This solves two issues: texture size on GPU is greatly reduced and we can include mipmaps offline when preparing the glTF files.

  4. We use gltf-transform like this to prepare glTF files for engine use:

    Examples: gltf-transform uastc WaterBottle.glb bottle2.glb --level 4 --zstd 18 --verbose

    gltf-transform metalrough .\simple_grass_chunks.glb grass_pbr.glb
    gltf-transform uastc .\grass_pbr.glb grass.glb --level 2 --zstd 1 --verbose
    
    • KTX + Basis UASTC texture compression
    • level 4 is highest quality - maybe use lower for development
    • zstd 18 is default level for Zstandard supercompression
    • gltf-transform creates mipmaps by default. use --filter to change default filter lanczos4
  5. Decoding details are in texture.cpp. Workflow is like this (all methods from KTX library):

    Warning: Do not forget to copy ktx.dll from ktx/bin to the executable path, especially after installing a new ktx version!

    • the engine checks format support at startup like this: vkGetPhysicalDeviceFormatProperties(engine->global.physicalDevice, VK_FORMAT_BC7_SRGB_BLOCK, &fp);
    • after loading binary texture data into CPU memory: ktxTexture_CreateFromMemory()
    • check that we have the right texture type: kTexture->classId == class_id::ktxTexture2_c. If this fails we have read a KTX 1 texture, which we cannot handle.
    • check supercompression with ktxTexture2_NeedsTranscoding();
    • inflate UASTC and transcode to GPU block-compressed format: ktxTexture2_TranscodeBasis(texture, KTX_TTF_BC7_RGBA, 0);
    • last step is to upload to GPU: ktxTexture2_VkUploadEx()

Q1 / 2022

I am somewhat ok with the thread model for now. It seems stable and flexible: the application can switch between non-threaded rendering and an arbitrary number of rendering threads. But the real test will come when more complex rendering code is available, with objects and animation.

Still experimenting a lot with organizing Vulkan code into meaningful C++ classes, especially with organizing shaders in an easy-to-use fashion and a clear architecture.

Simple app with lines and debug texture: Red lines show the world size of 2 square km. Lines are drawn every 1 m. A white cross marks the center. The texture is a debug texture that uses a different color on each mip level. Only mip level 0 is a real image, with letters to identify whether the texture orientation is ok (TL means top left...). The same texture was used for both squares, but the one in the background is displayed with a higher mip level. As the camera moves further back you can watch the transition between all the mip levels. In the upper right you see a simple FPS counter rendered with Dear ImGui.

Same scene with the camera moved back. You see lines marking floor level and ceiling (380 m apart). The textures are so small that they should use the highest or 2nd highest mip level, with 1x1 or 2x2 image size.

Dev Setup and Library usage

Prerequisites that need to be installed before dev setup:

  • C++ 20 supported by CMake (VisualStudio 2022, gcc, clang,...)
  • Vulkan SDK https://vulkan.lunarg.com/
    • install Vulkan with at least these options: Volk, Shader Toolchain Debug Symbols, Vulkan Memory Allocation Header
  • git
  • cmake

Install ShadedPathV to empty folder sp:

cd sp
git clone https://github.com/ClemensX/ShadedPathV.git

install vcpkg (for more details see https://learn.microsoft.com/en-us/vcpkg/get_started/get-started):

git clone https://github.com/microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.sh or bootstrap-vcpkg.bat

Add the VCPKG_ROOT env var and add it to your PATH (e.g. in ~/.bashrc or ~/.zshrc) on Linux/macOS:

export VCPKG_ROOT=/path/to/vcpkg
export PATH=$VCPKG_ROOT:$PATH

The remaining dependencies should be auto-installed during the first build step. To build from within ShadedPathV folder: (TODO add openxr to dependencies)

cmake -S . -B ./build -DCMAKE_TOOLCHAIN_FILE=C:\tools\vcpkg\scripts\buildsystems\vcpkg.cmake
cmake --build ./build

Now you can also just open VS2022; the solution file is in the build folder.

You may want to add GLSL support to Visual Studio 2022. E.g. this:

  • Daniel Scherzer's GLSL Visual Studio integration: https://marketplace.visualstudio.com/items?itemName=DanielScherzer.GLSL2022.

  • Set path to glslangValidator in VS Tools -> Options -> GLSL language integration -> External compiler. e.g. C:\dev\vulkan\libraries\vulkan\Bin\glslangValidator.exe. Also, set -V as Arguments to enable Vulkan mode.

  • OpenXR: install NuGet package OpenXR.Loader for all three projects in solution. If not found during compile or not displayed correctly: uninstall via NuGet Package Manager, then re-install

Use Khronos OpenXR sdk directly for VS 2022:

  • clone in the parent of the ShadedPathV folder, so that you have this folder structure:

parent/
├── ShadedPathV/
└── OpenXR-SDK/

  • generate the solution from within the OpenXR-SDK folder:

mkdir build\win64
cd build\win64
cmake -G "Visual Studio 17" ..\..
  • open solution in VS 2022 at OpenXR-SDK\build\win64\OPENXR.sln and build
  • loader lib and pdb file will be here: \OpenXR-SDK\build\win64\src\loader\Debug
  • include folder here: \OpenXR-SDK\include\openxr
  • both include files and lib should be found by the cmake build, if not take a look at dependencies.cmake

TODO Major Steps

Current main topics we are working on:

  • Rework engine thread and app model
  • Finish Incoming Demo
  • PBR Shader
  • use Meshlets for PBR shader
  • Animation

TODO List

Things finished and things to do. Both very small and very large things, just as they come to my mind.

  • rework singleThreadMode, numCores, numWorkerThreads
  • engine.setSingleThreadMode(true) stopped working since new thread model. Check FrameResources and framesInFlight
  • Rework engine to allow multiple instances (remove static fields, apply manager pattern)
  • rework FP object placement relative to cam (gun stuttering in incoming demo)
  • Rest of PBR stages
  • Environment maps
  • BRDFLUT_TEXTURE cannot be used as cube map (all black). probably needs format conversion
  • image based tests
  • fix line shader backbuffer2 image wrong format in stereo mode if no dynamic add lines
  • add Release version to the current Debug config in cmake
  • Cube maps (needed for PBR environment maps)
  • Bug: billboard and possibly line shader cannot be last shader in app added (Validation Warning)
  • Bug: LineApp not running (problem with wireframe loading)
  • PBR Shader (simple: only base texture display, no lighting)
  • PBR object loading from glTF files (vertices with pos and text coord, textures)
  • Include KTX texture loading in PBR shader
  • (done via themed timer) re-use old fps counter (still needs fixing - values too high?)
  • Decouple Swap chain and backbuffer image rendering
  • backbuffer image saving
  • adapt backbuffer image size during rendering to window size
  • fix renderThreadContinue->wait() not waiting correctly (atomic_flag not suitable)
  • fix no shutdown for > 1 render threads
  • TextureStore to read and organize KTX textures
  • Include Dear ImGui with standard Demo UI
  • UI: FPS Counter
  • Find assets by looking for 'data' folder up the whole path, starting at .exe location
  • Thread pool for backbuffer rendering
  • dynamic lines for LineShader (added lines live only for one frame) in V 1.2 API
  • check for vulkan profile support: VP_KHR_roadmap_2022 level 1 (requires Feb 2022 drivers, only checked for NVIDIA)
  • Switch to V 1.3 API and get rid of framebuffer and renderpasses
  • LineText Shader with coordinate system display and dynamic text
  • finalize thread architecture
  • optimize thread performance
  • vr view
  • asset loading (library)
  • Shaders
  • vr controllers
  • animation
  • Demos
  • Games

Stereo Mode

Activate stereo mode from client with one of these:

  • engine.enableVR()
  • engine.enableStereo()

Stereo mode makes all shaders draw twice, for the left and right eye. All internal instances are named without a qualifier for single-view mode / the left eye, and with 2 appended to the name for the right eye. E.g. for the line shader framebuffer:

  • VkFramebuffer ThreadResources.framebufferLine (for left eye or single view)
  • VkFramebuffer ThreadResources.framebufferLine2 (for right eye)

Only left eye will be shown in presentation window unless double view is activated with engine.enableStereoPresentation()

Formats

To decide which formats to use we can run the engine in presentation mode and get a list of all supported swap chain formats and presentation modes. On my laptop and PC I get the list below. We decided on the formats in bold.

Swap Chain Color Format and Space

Format VkFormat Color Space VkColorSpaceKHR
44 VK_FORMAT_B8G8R8A8_UNORM 0 VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
50 VK_FORMAT_B8G8R8A8_SRGB 0 VK_COLOR_SPACE_SRGB_NONLINEAR_KHR
64 VK_FORMAT_A2B10G10R10_UNORM_PACK32 0 VK_COLOR_SPACE_SRGB_NONLINEAR_KHR

Presentation mode

Mode VkPresentModeKHR
0 VK_PRESENT_MODE_IMMEDIATE_KHR
1 VK_PRESENT_MODE_MAILBOX_KHR
2 VK_PRESENT_MODE_FIFO_KHR
3 VK_PRESENT_MODE_FIFO_RELAXED_KHR

Thread Model

  • renderThreadContinue: ThreadsafeWaitingQueue<> (host controlled)
  • queue: FIFO queue (host controlled)
  • presentFence: VkFence
  • inFlightFence: VkFence
Remarks Queue Submit Thread Render Threads
renderThreadContinue push() renderThreadContinue->pop()
drawFrame()
queue.pop()
presentFence was created in set mode vkWaitForFences(presentFence)
vkReset
create graphics command buffers
queue.push()
renderThreadContinue->pop()
vkQueueSubmit(inFlightFence)
vkWaitForFence(inFlightFence)
vkReset
vkAcquireNextImageKHR(swapChain)
copy back buffer image to swapChain image
vkQueueSubmit(presentFence)
vkQueuePresentKHR()
renderThreadContinue push()
vkWaitForFences(presentFence)
vkReset
drawFrame()
queue.push()
queue.pop()
renderThreadContinue->pop()
vkQueueSubmit(inFlightFence)

Coordinate Systems

Right Handed Coordinate System

right handed

Right-handed coordinate system (positive z towards camera) used for ShadedPath Engine, OpenXR, and glTF. (picture taken from here)

OpenXR

From the spec (Chapter 2.16): The OpenXR runtime must interpret the swapchain images in a clip space of positive Y pointing down, near Z plane at 0, and far Z plane at 1.

Vulkan Device Coordinates (X,Y)

Device Coordinates

This means right-handed with x (-1 to 1) to the right, y (-1 to 1) from top to bottom, and z (0 to 1) into the screen. See the app DeviceCoordApp for details.

Issues

  • configure validation layers with the Vulkan Configurator. We didn't succeed in configuring them via the app. Note: a stored debug config remains active after the configurator closes, and ALL Vulkan apps may be affected!
  • Enable Debug Output option to see messages in debug console

Replay capture file

C:\dev\vulkan>C:\dev\vulkan\libraries\vulkan\Bin\gfxrecon-replay.exe --paused gfxrecon_capture_frames_100_through_105_20211116T131643.gfxr

Integrate RenderDoc

glTF Model Handling

Online Model Viewer (drag-and-drop):

https://gltf-viewer.donmccurdy.com/

Enable compression and create mipmaps:

gltf-transform uastc WaterBottle.glb bottle2.glb --level 4 --zstd 18 --verbose

Internal

Stuff probably only interesting for internal development: planning.md

Copyrights of used Components

Asset Licenses

licenses.txt
