Of course there is the usual slew of bug-fixes as well, which are listed here.
- Reversed-depth buffer support for D3D11 and OpenGL3+. See the accompanying tutorial for details.
- Full Unicode Path support, including ZIP archives, on Windows (on by default)
- The Real Time Shader System now uses ShaderModel4-style texture sampling, which fixes multiple samples (mainly those related to depth and 1D textures)
- Overlays now properly support content scaling, which is needed for HiDPI screens.
- Native ImGui support through the Overlay component
If you follow my Twitter, you may have seen that I tweeted about it.
Or if you follow our Ogre repo, you may have seen some commits.
Yes, we’re working on Vulkan support.
So far we've only got to a clear screen, so this is all you're gonna get for now:
It is working with 3 different drivers: AMDVLK, AMD RADV, and Intel Mesa, so that’s nice.
Only X11 (via the xcb library) works for now, but more windowing systems are planned for later.
A very low level library
Vulkan is very low level, and setting it up hasn't been easy. The mantra is that all commands are submitted in order, but they are not guaranteed to finish in order unless they're properly guarded.
Want to present on screen? You'd better set up a semaphore so the present command waits for the GPU to finish rendering to the backbuffer.
Submitted twice to the GPU? You'd better synchronize those two submissions, or else they may be reordered.
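As a concrete illustration of the present case (a minimal sketch, not Ogre's actual code; all handles and names here are hypothetical, error checking omitted), the pattern boils down to signalling a semaphore when the render submission finishes and making the present call wait on it:

```cpp
// Sketch only: graphicsQueue, presentQueue, cmdBuffer, swapchain and
// imageIndex are assumed to have been created/acquired elsewhere.
VkSemaphore renderFinishedSemaphore; // created with vkCreateSemaphore

VkSubmitInfo submitInfo = {};
submitInfo.sType                = VK_STRUCTURE_TYPE_SUBMIT_INFO;
submitInfo.commandBufferCount   = 1u;
submitInfo.pCommandBuffers      = &cmdBuffer;
submitInfo.signalSemaphoreCount = 1u;
submitInfo.pSignalSemaphores    = &renderFinishedSemaphore; // signalled when the GPU finishes
vkQueueSubmit( graphicsQueue, 1u, &submitInfo, VK_NULL_HANDLE );

VkPresentInfoKHR presentInfo   = {};
presentInfo.sType              = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR;
presentInfo.waitSemaphoreCount = 1u;
presentInfo.pWaitSemaphores    = &renderFinishedSemaphore; // present waits for rendering
presentInfo.swapchainCount     = 1u;
presentInfo.pSwapchains        = &swapchain;
presentInfo.pImageIndices      = &imageIndex;
vkQueuePresentKHR( presentQueue, &presentInfo );
```

Without that semaphore, nothing stops the presentation engine from scanning out a backbuffer the GPU is still writing to.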
On the plus side, a modern rendering library could take advantage of this to start rendering the next frame while e.g. compute postprocessing is happening on a separate queue on the current frame.
A lot of misinformation
There are a lot of Vulkan samples out there, but many of them are wrong or incomplete, particularly around synchronization.
In many of the samples this is not a problem because they perform a full stall for demo purposes, but some of the more 'real world' samples do not.
They also do not teach how to deal with systems where the present queue and the graphics rendering queue are different. (I don't know which systems have this setup, but I suspect it has to do with Optimus laptops and similar setups where the GPU doing the rendering is not the one hooked to the monitor.)
Google’s samples are much better, but they still miss some stuff, such as inserting a barrier dependency on VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT so that the graphics queue doesn’t start rendering to a backbuffer before it has been fully acquired and no longer used for presentation.
This bug is hard to catch because the race condition will often never happen due to the nature of double and triple buffering; in the worst case it could result in tearing or similar artifacts (even if vsync is enabled).
There's also the possibility that failing to insert this barrier results in severe artifacts on AMD GPUs, due to DCC compression on render targets being dirty while rendering to them. Godot's renderer ran into this problem.
This is covered at the end of Synchronization Examples' "Swapchain Image Acquire and Present" section.
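The acquire-side dependency described above could look like the following (a hedged sketch with made-up handle names, not code from any of the samples): the submission waits on the semaphore signalled by vkAcquireNextImageKHR, but only at the colour-output stage, so earlier pipeline work can still overlap:

```cpp
// Sketch only: device, swapchain, graphicsQueue and cmdBuffer are assumed
// to exist; error checking omitted.
VkSemaphore imageAcquiredSemaphore; // created with vkCreateSemaphore
uint32_t imageIndex;
vkAcquireNextImageKHR( device, swapchain, UINT64_MAX,
                       imageAcquiredSemaphore, VK_NULL_HANDLE, &imageIndex );

// Only the stage that actually writes to the backbuffer waits for the
// acquire; vertex work etc. may start immediately.
const VkPipelineStageFlags waitStage =
    VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;

VkSubmitInfo submitInfo = {};
submitInfo.sType              = VK_STRUCTURE_TYPE_SUBMIT_INFO;
submitInfo.waitSemaphoreCount = 1u;
submitInfo.pWaitSemaphores    = &imageAcquiredSemaphore;
submitInfo.pWaitDstStageMask  = &waitStage;
submitInfo.commandBufferCount = 1u;
submitInfo.pCommandBuffers    = &cmdBuffer;
vkQueueSubmit( graphicsQueue, 1u, &submitInfo, VK_NULL_HANDLE );
```

Omitting pWaitDstStageMask (or passing a too-early stage) is exactly the kind of mistake that works by luck on most frames.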
Last week, Khronos released a new set of official samples. So far these seem to perform all correct practices.
A VERY good resource on Vulkan Synchronization I found is Yet another blog explaining Vulkan synchronization. It is really, really good.
Once you get into the async mindset, Vulkan makes sense.
Where to next?
There's a lot that needs to be done: resizing the swapchain is not yet coded, separate Graphics and Present queues are not handled, there's zero buffer management, no textures, no shaders.
The next task I'll be focusing on is shaders, because they are useful for showing stuff on screen and seeing if things are working. Even if there are no vertex buffers yet, we can use gl_VertexID tricks to render triangles on screen.
And once shaders are working, we can then test whether vertex buffers work once they're ready, whether textures work, etc.
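For reference, the gl_VertexID trick usually looks something like this (a generic GLSL sketch, not Ogre code; note that in Vulkan-flavoured GLSL the builtin is named gl_VertexIndex instead):

```glsl
#version 450

layout( location = 0 ) out vec2 uv;

void main()
{
    // With no vertex buffers bound, draw 3 vertices and derive positions
    // from the vertex ID: UVs (0,0), (2,0), (0,2) form one oversized
    // triangle that covers the whole screen after clipping.
    uv = vec2( (gl_VertexID << 1) & 2, gl_VertexID & 2 );
    gl_Position = vec4( uv * 2.0 - 1.0, 0.0, 1.0 );
}
```

A single oversized triangle avoids the diagonal seam (and redundant edge pixels) you'd get from a two-triangle fullscreen quad.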
So that’s all for now. Until next time!
Over the last few weeks a new sample appeared: Tutorial_OpenVR
We’ve integrated OpenVR into Ogre.
The tasks done to achieve this can be summarized in five points:
- Added ‘VrData’ which is passed to Camera::setVrData. This simplifies passing per-eye view matrices and custom projection matrices required by the VR headset
- Added Instanced Stereo support
- Accounted for the running start
- Added Stencil Hidden Area Mesh optimization
- Added Radial Density Mask optimization
Following the last post about what is going on around Ogre, here is another update.
Ogre-next project split
So let's put first things first: Ogre v1 and Ogre v2 are not just different versions of Ogre, but rather separate projects. Now officially.
While this sounds like big news, it actually is not that much of a change. If you have followed the development of Ogre, you will have already noticed that the two branches moved independently in two separate directions. Hence, we had to write the "what version to choose" page.
Generally, Ogre2 focuses on the latest and fastest techniques, while Ogre1 focuses on backward compatibility and keeping old projects running. Also, Ogre1 is developed on GitHub, while Ogre2 still lives on Bitbucket.
With Bitbucket now abandoning Mercurial, we decided to move Ogre2 to GitHub as well. While we could have kept it in a branch, we instead decided to create a separate repository with its own issue tracking, landing page, etc. So, enter ogre-next.
This also makes it possible to follow semver with Ogre1 and make a 2.0 release when needed.
Last time I mentioned we were working on PCC / VCT Hybrid (PCC = Parallax Corrected Cubemaps, VCT = Voxel Cone Tracing).
I just implemented storing depth into the alpha channel, which is used to improve the quality of the hybrid.
Because we often use RGBA8 cubemaps, 8 bits are not enough to store the depth. Therefore we only store the deviation from the ideal probe shape.
The original PCC algorithm boils down to:
posInProbeShape = cameraCenter + reflDir * fDistance;
Where cameraCenter is artist-defined (where the PCC camera was placed to build the probe) and posInProbeShape is also artist-defined (by controlling the size of the probe).
PCC is about finding reflDir mathematically:
reflDir = (posInProbeShape - cameraCenter) / fDistance;
However we already know reflDir after executing PCC’s code.
Now the depth compression comes into effect by slightly altering the formula:
realReconstructedPosition = cameraCenter + reflDir * fDistance * depthDeviation;
The variable depthDeviation is in range [0; 2] (which we store in the alpha channel), and thus 8 bits should be enough.
Technically this could introduce a few artifacts, because we only store the depth in range [0; maximum_distance * 2].
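As a minimal sketch of the encoding (plain C++ rather than shader code, with hypothetical helper names; the real implementation lives in Ogre's shaders), a deviation in [0; 2] can be packed into the 8-bit alpha channel and unpacked when reconstructing the position:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Pack a depth deviation in [0; 2] into an 8-bit alpha value.
inline uint8_t encodeDeviation( float deviation )
{
    const float clamped = std::fmin( std::fmax( deviation, 0.0f ), 2.0f );
    return static_cast<uint8_t>( std::lround( clamped * 0.5f * 255.0f ) );
}

// Unpack the alpha value back into a deviation in [0; 2].
inline float decodeDeviation( uint8_t alpha )
{
    return ( alpha / 255.0f ) * 2.0f;
}

struct Vec3 { float x, y, z; };

// realReconstructedPosition = cameraCenter + reflDir * fDistance * depthDeviation
inline Vec3 reconstructPosition( Vec3 cameraCenter, Vec3 reflDir,
                                 float fDistance, uint8_t alpha )
{
    const float d = fDistance * decodeDeviation( alpha );
    return Vec3{ cameraCenter.x + reflDir.x * d,
                 cameraCenter.y + reflDir.y * d,
                 cameraCenter.z + reflDir.z * d };
}
```

A deviation of exactly 1.0 (the surface lies on the ideal probe shape) quantizes to roughly the middle of the 8-bit range, which is why precision is spent where it matters most.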
Storing the depth deviation dramatically improves the hybrid’s quality!
But let's not rush.
Over the last couple months we have been working on Voxel Cone Tracing (VCT), a Global Illumination solution.
Voxel Cone Tracing could be seen as an approximated version of ray marching.
The scene is voxelized and three voxel textures are generated, containing albedo, normals and emissive properties. In other words, we build a Minecraft-looking representation of the world.
We’ve often been told building Ogre from source is hard.
And they’re right!
Which is why we tried our best and prepared quick-start scripts that will clone all necessary dependencies, configure CMake, and build for you!
Grab the download scripts:
Unzip them and run the script that matches your platform and OS!
For example if you’re on Windows and have Visual Studio, run either:
depending on the architecture you want to build for (e.g. 32-bit vs 64-bit)
The scripts will automatically start building, but you will also find the Visual Studio solution file under Ogre/build/OGRE.sln
If you’re on Linux, run either:
Which one you need depends on the C++ version you're targeting. C++98 compiles much faster than the rest, but may have incompatibilities (particularly with std::string) if mixed into a project built for C++11 or newer.
There are currently no build scripts for the Apple ecosystem. For building for iOS, refer to the Setting Up Guide. The instructions for Linux should work for building for macOS, but may require additional manual intervention.
We hope this makes your life easier! And let us know if you encounter problems with these scripts! The goal is that building Ogre from scratch becomes as simple as a few clicks.
Further discussion on forum post.
Hoo boy! This report was scheduled for January but couldn’t be released on time for various reasons.
We have another report coming, as this is old news already! It will mostly talk about VCT (Voxel Cone Tracing), a very interesting feature that has been in development during this year.
But until then, let's talk about all the other features that have been implemented so far!
If you're tracking our repo, you may have observed we renamed the v2-2-WIP branch to v2-2.
What does this mean?
It means the branch is stabilizing. Back when it was v2-2-WIP, it was very unstable. Checking out that branch meant you could find crashes, memory leaks, missing or broken features; and the API interface was changing very quickly, thus updating the repository could mean your code would no longer compile and would require adapting.
Over the last couple of months, the API interfaces on 2.2 have begun to settle down, bugs were fixed, and there were no apparent leaks. In fact, some teams have already started using it.
Now that it is no longer WIP, while there could still be API breakage on the 2.2 branch or accidental crashes/regressions, it shouldn’t be anything serious or that requires intensive porting.
We still have a few missing features (such as saving textures as DDS) but they’re not used frequently.
Coming up next
We still owe you a Progress Report of what's been going on in 2.1 and 2.2 in the past year and a half; we have it written down, but it still needs a few reviews.
Coming up next is:
- More real time GI improvements
- VR performance optimizations
- We are planning a Vulkan/D3D12 backend
Additionally, we have a community member working on GPU-driven rendering and GPU particle FX, while another community member just contributed Morph animations to 2.1.
Yes, Morph animations are finally HW accelerated again! We are evaluating porting this contribution to 2.2; it shouldn't take long, but we're evaluating whether it can be improved with the use of Compute Shaders.
What about Ogre 2.1?
Ogre 2.1 has been very stable. Eugene ported several improvements from the 1.x branch, and we are currently dealing with a regression caused by how the PF_BYTE_LA / PF_L8 formats are treated in D3D11; but other than that, 2.1 is ready for release.
The morph animation contribution is brand new, so that may need a bit more testing.
If you don't see an SDK, that is mostly due to the time and knowledge needed to package one.
If someone wants to teach Matias aka dark_sylinc a quick automated way to create installer SDKs, that is welcomed! (He never liked handling that!!!)
If someone else wants to step in and maintain packaging, that is welcomed!
Further discussion in the Forum Post