If you follow my Twitter, you may have seen I tweeted about it.

Or if you follow our Ogre repo, you may have seen some commits.

Yes, we’re working on Vulkan support.

So far we've only gotten as far as a clear screen, so this is all you're gonna get for now:

It is working with three different drivers (AMDVLK, AMD RADV, and Intel Mesa), so that's nice.
Only X11 (via the xcb library) works for now, but more windowing systems are planned for later.

A very low-level library

Vulkan is very low level, and setting it up hasn't been easy. The motto is that all commands are submitted in order, but they are not guaranteed to finish in order unless they're properly guarded.

Want to present on screen? You'd better set up a semaphore so the present command waits for the GPU to finish rendering to the backbuffer.

Submitted twice to the GPU? You'd better sync those two submissions, or else they may be reordered.
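For instance, the submit + present pair could look roughly like this. This is a minimal sketch, not Ogre's actual code: the queues, command buffer, swapchain, image index and semaphore are assumed to have been created elsewhere.

```cpp
#include <vulkan/vulkan.h>

// Hypothetical helper: submit the frame's command buffer, then present, with a
// semaphore guaranteeing the present engine never sees an unfinished frame.
void submitAndPresent( VkQueue graphicsQueue, VkQueue presentQueue,
                       VkCommandBuffer cmdBuffer, VkSwapchainKHR swapchain,
                       uint32_t imageIndex, VkSemaphore renderComplete )
{
    VkSubmitInfo submitInfo = {};
    submitInfo.sType                = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submitInfo.commandBufferCount   = 1u;
    submitInfo.pCommandBuffers      = &cmdBuffer;
    submitInfo.signalSemaphoreCount = 1u;
    submitInfo.pSignalSemaphores    = &renderComplete; // GPU signals this when rendering is done
    vkQueueSubmit( graphicsQueue, 1u, &submitInfo, VK_NULL_HANDLE );

    VkPresentInfoKHR presentInfo = {};
    presentInfo.sType              = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR;
    presentInfo.waitSemaphoreCount = 1u;
    presentInfo.pWaitSemaphores    = &renderComplete; // present waits for rendering to finish
    presentInfo.swapchainCount     = 1u;
    presentInfo.pSwapchains        = &swapchain;
    presentInfo.pImageIndices      = &imageIndex;
    vkQueuePresentKHR( presentQueue, &presentInfo );
}
```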

On the plus side, a modern rendering library could take advantage of this to start rendering the next frame while, for example, compute postprocessing for the current frame is still running on a separate queue.

A lot of misinformation

There are a lot of samples out there, but many of them are wrong or incomplete.

For example, LunarG's official samples are wrong because they acquire the backbuffer from the GPU using the same semaphore every time, instead of using one semaphore per frame.

In many of the samples this is not a problem because they perform a full stall for demo purposes, but some of the more ‘real world’ samples do not.
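The correct pattern is, roughly, one 'image acquired' semaphore per frame in flight rather than a single one reused every frame. The sketch below is just an illustration; kMaxFramesInFlight and the function name are made up:

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>

static const uint32_t kMaxFramesInFlight = 2u; // assumption for this sketch

// Acquire the next backbuffer using the semaphore that belongs to the current
// frame in flight. The submission that renders into the acquired image must
// then wait on this same semaphore (via pWaitSemaphores/pWaitDstStageMask).
uint32_t acquireBackbuffer( VkDevice device, VkSwapchainKHR swapchain,
                            const VkSemaphore *imageAcquiredSemaphores,
                            uint32_t currentFrame )
{
    uint32_t imageIndex = 0u;
    vkAcquireNextImageKHR( device, swapchain, UINT64_MAX,
                           imageAcquiredSemaphores[currentFrame % kMaxFramesInFlight],
                           VK_NULL_HANDLE, &imageIndex );
    return imageIndex;
}
```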

They also do not teach how to deal with systems where the present queue and the graphics rendering queue are different (I don't know which systems have this setup, but I suspect it has to do with Optimus laptops and similar setups where the GPU doing the rendering is not the one hooked to the monitor).
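Handling that case starts with asking, per queue family, whether it can present to the given surface. A rough sketch (the helper name is made up for illustration):

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>

// Returns the index of a queue family that can present to 'surface', or
// UINT32_MAX if none can. On most desktop setups this ends up being the same
// family as the graphics queue, but the spec doesn't guarantee it.
uint32_t findPresentQueueFamily( VkPhysicalDevice physicalDevice, VkSurfaceKHR surface )
{
    uint32_t numFamilies = 0u;
    vkGetPhysicalDeviceQueueFamilyProperties( physicalDevice, &numFamilies, nullptr );

    for( uint32_t i = 0u; i < numFamilies; ++i )
    {
        VkBool32 supportsPresent = VK_FALSE;
        vkGetPhysicalDeviceSurfaceSupportKHR( physicalDevice, i, surface, &supportsPresent );
        if( supportsPresent )
            return i;
    }
    return UINT32_MAX;
}
```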

Google's samples are much better, but they still miss some things, such as inserting a barrier dependency on VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT so that the graphics queue doesn't start rendering to a backbuffer before it has been fully acquired and is no longer being used for presentation.

This bug is hard to catch because the race condition will often never happen due to the nature of double and triple buffering, and in the worst case it could result in tearing or similar artifacts (even if vsync is enabled).

Though there's the possibility that failing to insert this barrier can result in severe artifacts on AMD GPUs, due to DCC compression on the render target being dirty while rendering to it. Godot's renderer encountered this problem.

This is covered at the end of Synchronization Examples' Swapchain Image Acquire and Present section.
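In render pass terms, that dependency can be expressed roughly like the sketch below: an external subpass dependency on VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, paired with waiting on the acquire semaphore at that same stage in the queue submission. The helper name is made up for illustration.

```cpp
#include <vulkan/vulkan.h>

// External dependency for subpass 0 of the render pass that writes to the
// backbuffer: colour writes may not start until the semaphore wait for the
// swapchain acquire (placed at COLOR_ATTACHMENT_OUTPUT via pWaitDstStageMask)
// has completed.
VkSubpassDependency makeAcquireDependency()
{
    VkSubpassDependency dep = {};
    dep.srcSubpass    = VK_SUBPASS_EXTERNAL;
    dep.dstSubpass    = 0u;
    dep.srcStageMask  = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
    dep.srcAccessMask = 0u; // nothing to flush; purely an execution dependency
    dep.dstStageMask  = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
    dep.dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
    return dep;
}
```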

Last week, Khronos released a new set of official samples. So far these seem to follow all the correct practices.

A VERY good resource on Vulkan synchronization I found is "Yet another blog explaining Vulkan synchronization". It is really, really good.

If I were to summarize Vulkan, it reminds me of JavaScript async/promise development: everything is asynchronous and has to be coded with promises.

Once you get into the async mindset, Vulkan makes sense.

Where to next?

There's a lot that needs to be done: resizing the swapchain is not yet coded, separate graphics and present queues are not handled, there's zero buffer management, no textures, and no shaders.

The next task I'll be focusing on is shaders, because they are useful for showing stuff on screen and seeing whether things are working. Even if there are no vertex buffers yet, we can use gl_VertexID tricks to render triangles on screen.
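That trick boils down to a tiny vertex shader along these lines (a sketch, with the GLSL embedded as a C++ string ready for compilation to SPIR-V; note that in Vulkan GLSL the built-in is gl_VertexIndex rather than GL's gl_VertexID, and the variable name here is made up):

```cpp
// A 'bufferless' fullscreen triangle: positions are derived from the vertex
// index alone, so no vertex buffers need to be bound.
const char *kFullscreenTriangleVS = R"(
#version 450

layout( location = 0 ) out vec2 outUv;

void main()
{
    // gl_VertexIndex 0, 1, 2 -> UVs (0,0), (2,0), (0,2): one triangle covering the screen
    outUv = vec2( ( gl_VertexIndex << 1 ) & 2, gl_VertexIndex & 2 );
    gl_Position = vec4( outUv * 2.0 - 1.0, 0.0, 1.0 );
}
)";
```

With that shader bound, a plain vkCmdDraw with 3 vertices and no vertex buffers bound is enough to cover the screen.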

And once shaders are working, we can then test whether vertex buffers work when they're ready, whether textures work, and so on.

So that’s all for now. Until next time!