Refactored Ogre 1.x to increase performance by several factors, using cache-friendly techniques (Data Oriented Design), SIMD instructions, AZDO (Approaching Zero Driver Overhead), auto instancing, and multithreading
Windows Vista/7/8/10 support, macOS via Metal and OpenGL, iOS via Metal, Linux via OpenGL
Many new features: Area Lights, Parallax Corrected Cubemaps, Forward Clustered lights, HDR, Exponential Shadow Maps, and more
In many of the samples this is not a problem because they perform a full stall for demo purposes, but some of the more ‘real world’ samples do not.
They also do not teach how to deal with systems where the present queue and the graphics rendering queue are different (I don’t know which systems have this setup, but I suspect it has to do with Optimus laptops and similar configurations where the GPU doing the rendering is not the one hooked to the monitor).
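To make the separate-queue case concrete, here’s a minimal sketch (not this engine’s code; `physicalDevice` and `surface` are assumed to exist already, and the helper name is hypothetical) of checking at device creation whether the graphics queue family can also present:

```cpp
#include <cstdint>
#include <vector>
#include <vulkan/vulkan.h>

// Hypothetical helper: returns true if the queue that renders cannot present,
// i.e. we are on one of those setups with separate Graphics & Present queues.
bool hasSeparatePresentQueue( VkPhysicalDevice physicalDevice, VkSurfaceKHR surface,
                              uint32_t &outGraphicsFamily, uint32_t &outPresentFamily )
{
    uint32_t numFamilies = 0u;
    vkGetPhysicalDeviceQueueFamilyProperties( physicalDevice, &numFamilies, nullptr );
    std::vector<VkQueueFamilyProperties> families( numFamilies );
    vkGetPhysicalDeviceQueueFamilyProperties( physicalDevice, &numFamilies, families.data() );

    outGraphicsFamily = outPresentFamily = UINT32_MAX;
    for( uint32_t i = 0u; i < numFamilies; ++i )
    {
        if( ( families[i].queueFlags & VK_QUEUE_GRAPHICS_BIT ) && outGraphicsFamily == UINT32_MAX )
            outGraphicsFamily = i;

        VkBool32 presentSupport = VK_FALSE;
        vkGetPhysicalDeviceSurfaceSupportKHR( physicalDevice, i, surface, &presentSupport );
        // Prefer a family that can do both; otherwise keep the first one that can present.
        if( presentSupport && ( outPresentFamily == UINT32_MAX || i == outGraphicsFamily ) )
            outPresentFamily = i;
    }

    return outGraphicsFamily != outPresentFamily;
}
```

When the two families differ, the swapchain images need either VK_SHARING_MODE_CONCURRENT or explicit queue family ownership transfers before vkQueuePresentKHR.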
This bug is hard to catch because, due to the nature of double and triple buffering, the race condition will often never happen; and in the worst case it could result in tearing or similar artifacts (even if vsync is enabled).
There is the possibility, though, that failing to insert this barrier results in severe artifacts on AMD GPUs, due to DCC (Delta Color Compression) metadata on the render target being dirty while rendering to it. Godot’s renderer encountered this problem.
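For context, the barrier in question looks roughly like this (a sketch, not this codebase’s actual barrier; `cmdBuffer` and `swapchainImage` are assumed to come from the acquire step). Transitioning from VK_IMAGE_LAYOUT_UNDEFINED discards the old contents and gives the driver a chance to reinitialize compression metadata such as AMD’s DCC:

```cpp
VkImageMemoryBarrier imageBarrier = {};
imageBarrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
imageBarrier.srcAccessMask = 0; // Nothing to flush; previous contents are discarded.
imageBarrier.dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
imageBarrier.oldLayout = VK_IMAGE_LAYOUT_UNDEFINED; // 'Don't care' about old data.
imageBarrier.newLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
imageBarrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
imageBarrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
imageBarrier.image = swapchainImage;
imageBarrier.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
imageBarrier.subresourceRange.levelCount = 1u;
imageBarrier.subresourceRange.layerCount = 1u;

// COLOR_ATTACHMENT_OUTPUT as the src stage so the barrier chains after the
// stage the swapchain-acquire semaphore was waited on.
vkCmdPipelineBarrier( cmdBuffer,
                      VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
                      VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
                      0u, 0u, nullptr, 0u, nullptr, 1u, &imageBarrier );
```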
Once you get into the async mindset, Vulkan makes sense.
Where to next?
There’s a lot that needs to be done: resizing the swapchain is not yet coded, separate Graphics and Present queues are not handled, there’s zero buffer management, no textures, no shaders.
The next task I’ll be focusing on is shaders, because they’re useful to show stuff on screen and see that things are working. Even if there are no vertex buffers yet, we can use gl_VertexID tricks to render triangles on screen (see the sketch below).
And once shaders are working, we can test vertex buffers once they’re ready, then textures, and so on.
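As an illustration of that trick (the well-known fullscreen triangle, not code from this project), the vertex shader can synthesize positions purely from the vertex index; note that in Vulkan-flavoured GLSL the builtin is called gl_VertexIndex rather than gl_VertexID:

```cpp
// GLSL vertex shader embedded as a C++ string for illustration.
const char *c_fullscreenTriVS = R"GLSL(
#version 450

void main()
{
    // Maps gl_VertexIndex = 0,1,2 to (-1,-1), (3,-1), (-1,3): one oversized
    // triangle covering all of clip space. No vertex buffers needed.
    vec2 pos = vec2( ( gl_VertexIndex << 1 ) & 2, gl_VertexIndex & 2 );
    gl_Position = vec4( pos * 2.0 - 1.0, 0.0, 1.0 );
}
)GLSL";
```

Drawing it is then just vkCmdDraw( cmdBuffer, 3u, 1u, 0u, 0u ) with nothing bound.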