This work was possible because user Hotshot5000 forked my branch and advanced it further. The Vulkan port was a daunting, overwhelming task, and his contributions greatly helped me figure out how to make it work.
It also saved me a lot of time. Even though around 40% of his code couldn't make it into the final version, it was still very valuable as a proof of concept, as a reference implementation to build from, and as a working baseline to compare new, non-working code against.
Existing applications may need to perform additional work to get Vulkan running (e.g. porting their shaders to Vulkan). While this isn't difficult, no guide has been written yet.
The 2.3 preparations ticket has a list of the changes that may require a dev's attention when porting from 2.2 to 2.3.
This list is updated at irregular intervals, and once 2.3 is out this page will probably be moved somewhere else (in fact, it is a draft of the News post for whenever we release 2.3). But for the time being, that ticket is our hub for checking 2.2 -> 2.3 changes.
In many of the samples this is not a problem because they perform a full stall for demo purposes, but some of the more ‘real world’ samples do not.
They also do not teach how to deal with systems where the present queue and the graphics rendering queue are different (I don't know which systems have this setup, but I suspect it has to do with Optimus laptops and similar configurations where the GPU doing the rendering is not the one hooked up to the monitor).
This bug is hard to catch because, due to the nature of double and triple buffering, the race condition will often never happen; and in the worst case it could result in tearing or similar artifacts (even if vsync is enabled).
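For reference, the missing synchronization usually boils down to the following sketch (variable names are illustrative, not from Ogre's code): the present operation must wait on a semaphore signalled by the rendering submission, otherwise the presentation engine can race the GPU that is still writing the image:

```cpp
// Submit rendering work; renderFinishedSemaphore is signalled when the GPU
// is actually done writing to the swapchain image.
VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
VkSubmitInfo submitInfo = {};
submitInfo.sType                = VK_STRUCTURE_TYPE_SUBMIT_INFO;
submitInfo.waitSemaphoreCount   = 1u;
submitInfo.pWaitSemaphores      = &imageAcquiredSemaphore; // from vkAcquireNextImageKHR
submitInfo.pWaitDstStageMask    = &waitStage;
submitInfo.commandBufferCount   = 1u;
submitInfo.pCommandBuffers      = &cmdBuffer;
submitInfo.signalSemaphoreCount = 1u;
submitInfo.pSignalSemaphores    = &renderFinishedSemaphore;
vkQueueSubmit( graphicsQueue, 1u, &submitInfo, frameFence );

// Present waits on renderFinishedSemaphore. Forgetting this wait is the
// race condition described above.
VkPresentInfoKHR presentInfo = {};
presentInfo.sType              = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR;
presentInfo.waitSemaphoreCount = 1u;
presentInfo.pWaitSemaphores    = &renderFinishedSemaphore;
presentInfo.swapchainCount     = 1u;
presentInfo.pSwapchains        = &swapchain;
presentInfo.pImageIndices      = &imageIndex;
vkQueuePresentKHR( presentQueue, &presentInfo );
```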
Though there's the possibility that failing to insert this barrier results in severe artifacts on AMD GPUs, due to DCC compression metadata on the render target being dirty while rendering to it. Godot's renderer has encountered this problem.
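The barrier in question is typically an image layout transition before rendering to the target; here is a hedged sketch assuming the common texture-to-render-target case (the exact layouts, stages and access masks depend on how the frame is structured):

```cpp
// Transition the image into a renderable state. On AMD this is also the
// point where the driver can bring DCC compression metadata up to date, so
// we never render to a target whose metadata is stale.
VkImageMemoryBarrier barrier = {};
barrier.sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
barrier.srcAccessMask       = VK_ACCESS_SHADER_READ_BIT;
barrier.dstAccessMask       = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
barrier.oldLayout           = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
barrier.newLayout           = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.image               = renderTargetImage;
barrier.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
barrier.subresourceRange.levelCount = 1u;
barrier.subresourceRange.layerCount = 1u;
vkCmdPipelineBarrier( cmdBuffer,
                      VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                      VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
                      0u, 0u, nullptr, 0u, nullptr, 1u, &barrier );
```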
Once you get into the async mindset, Vulkan makes sense.
Where to next?
There's a lot that needs to be done: resizing the swapchain is not yet coded, separate graphics and present queues are not handled, there's zero buffer management, no textures, no shaders.
The next task I'll be focusing on is shaders, because they are useful for showing stuff on screen and checking whether it's working. Even if there are no vertex buffers yet, we can use gl_VertexID tricks to render triangles on screen.
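As a sketch of that trick (note that Vulkan GLSL names the built-in gl_VertexIndex; gl_VertexID is the OpenGL spelling — everything else here is illustrative):

```cpp
// GLSL vertex shader stored as a C++ string: it derives both position and
// UVs from the vertex index alone, so no vertex buffer is needed. Drawing
// 3 vertices yields one triangle that covers the whole screen.
const char *fullscreenTriVertexShader = R"(
#version 450

layout( location = 0 ) out vec2 outUv;

void main()
{
    // Indices 0, 1, 2 map to positions (-1,-1), (3,-1), (-1,3).
    outUv = vec2( (gl_VertexIndex << 1) & 2, gl_VertexIndex & 2 );
    gl_Position = vec4( outUv * 2.0 - 1.0, 0.0, 1.0 );
}
)";

// Drawing it is then just:
//     vkCmdDraw( cmdBuffer, 3u, 1u, 0u, 0u );
```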
And once shaders are working, we can test vertex buffers once they're ready, then textures, and so on.
A little late report. We know we missed April & May in the middle. But don’t worry. We’ve been busy!
So…what’s new in the Ogre 2.1 development branch?
1. Added depth texture support! This feature has been requested many times over the years. It was about time we added it!
Now you can write directly to depth buffers (aka depth-only passes) and read from them. This is very useful for Shadow Mapping. It also allows us to do PCF filtering in hardware with OpenGL.
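To illustrate what hardware PCF means in raw GL terms (plain OpenGL for exposition only; Ogre performs the equivalent setup internally): give the depth texture a comparison mode plus linear filtering, and sample it through sampler2DShadow so the GPU does the 2x2 compare-and-blend for you.

```cpp
// Raw OpenGL illustration of hardware PCF.
glBindTexture( GL_TEXTURE_2D, shadowDepthTexture );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL );
// LINEAR on a comparison sampler means the hardware blends the four depth
// comparison results, i.e. free 2x2 PCF.
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );

// In the shader:
//     uniform sampler2DShadow shadowMap;
//     float lit = texture( shadowMap, vec3( shadowUv, receiverDepth ) );
```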
But you can also read depth buffers from regular passes, which is useful for reconstructing position in Deferred Shading systems, and for post-processing effects that need depth, like SSAO and Depth of Field, without having to resort to MRT or other workarounds to get the depth.
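For instance, a minimal position-reconstruction sketch (GLSL embedded as a C++ string; the uniform names and the GL depth conventions assumed here are illustrative):

```cpp
// Fragment shader reconstructing view-space position from a depth texture.
const char *reconstructPositionShader = R"(
#version 330

uniform sampler2D depthTexture;
uniform mat4 invProjectionMatrix;

in vec2 uv;
out vec4 fragColour;

void main()
{
    float depth = texture( depthTexture, uv ).x;     // in [0; 1]
    // Rebuild NDC coordinates (GL convention: all axes in [-1; 1]).
    vec4 ndc = vec4( vec3( uv, depth ) * 2.0 - 1.0, 1.0 );
    vec4 viewPos = invProjectionMatrix * ndc;        // unproject
    viewPos.xyz /= viewPos.w;
    fragColour = vec4( viewPos.xyz, 1.0 );           // e.g. feed into SSAO
}
)";
```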
We make a distinction between “depth buffers” and “depth textures”. In Ogre, “depth textures” are depth buffers that have been requested to be readable as a texture at some point in time. If you ever want to use one as a texture, you'll want to request a depth texture (controlled via RenderTarget::setPreferDepthTexture).
A “depth buffer” is one you will never read from as a texture and that can't be used as such. The distinction exists because certain hardware can apply optimizations, or has more precise depth formats available, that would otherwise be unavailable if you asked for a depth texture.
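In code, the request boils down to one call on the render target; a minimal sketch (setPreferDepthTexture is the API mentioned above, while the wrapper function and header name are assumptions):

```cpp
#include "OgreRenderTarget.h"

// Hypothetical helper: ask Ogre to pick a depth format for this target
// that can later be sampled as a texture (e.g. for SSAO or shadow maps).
void requestReadableDepth( Ogre::RenderTarget *renderTarget )
{
    // Leave this off for targets whose depth you will never sample, so the
    // hardware remains free to use faster or more precise depth formats.
    renderTarget->setPreferDepthTexture( true );
}
```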
On most modern hardware, though, this flag probably makes no noticeable performance difference.