News
Augmented Reality made simple – with Ogre and OpenCV
As a small Christmas present, I want to show you how easy it has become to make Augmented Reality yourself thanks to Ogre and OpenCV. You should know that my other interest, besides graphics, lies with Computer Vision.
The demo will not rely on proprietary solutions like ARCore or ARKit – all will be done with open-source code that you can inspect and learn from. But let's start with a teaser:
This demo can be put together in less than 50 lines of code, thanks to the OpenCV ovis module that glues Ogre and OpenCV together. Next, I will briefly walk you through the steps that are needed:
First, we have to capture some images to cover the Reality part of AR. Here, OpenCV provides us with a unified API that you can use for your webcam, industrial cameras or a pre-recorded video:
import cv2 as cv
imsize = (1280, 720) # the resolution to use
cap = cv.VideoCapture(0)
cap.set(cv.CAP_PROP_FRAME_WIDTH, imsize[0])
cap.set(cv.CAP_PROP_FRAME_HEIGHT, imsize[1])
img = cap.read()[1] # grab an image
Then, we have to set up the most crucial part of AR: camera tracking. For this, we will use ArUco markers – the QR-like quads that surround Sinbad. Unsurprisingly, OpenCV ships with this vision algorithm:
adict = cv.aruco.Dictionary_get(cv.aruco.DICT_4X4_50)
# extract 2D marker-corners from image
corners, ids = cv.aruco.detectMarkers(img, adict)[:2]
# convert corners to 3D transformations [R|t]
rvecs, tvecs = cv.aruco.estimatePoseSingleMarkers(corners, 5, K, None)[:2]
If you look closely, you will see that we are using an undefined variable "K" – this is the intrinsic matrix specific to your camera. If you want precise results, you should calibrate your camera to measure it, for instance using the web-service at calibdb.net, which will also simply hand you the parameters if your camera is already known.
However, if you just want to continue, you can use the following values that should roughly match any webcam at 1280x720px:
import numpy as np
K = np.array(((1000, 0, 640), (0, 1000, 360), (0, 0, 1.)))
So now we have the image and the corresponding 3D transformation of the camera – only the Augmented part is missing. This is where Ogre/ ovis comes into play:
# reference the 3D mesh resources
cv.ovis.addResourceLocation("packs/Sinbad.zip")
# create an Ogre window for rendering
win = cv.ovis.createWindow("OgreWindow", imsize, flags=cv.ovis.WINDOW_SCENE_AA)
win.setBackground(img)
# make Ogre renderings match your camera images
win.setCameraIntrinsics(K, imsize)
# create the virtual scene, consisting of Sinbad and a light
win.createEntity("figure", "Sinbad.mesh", tvec=(0, 0, 5), rot=(1.57, 0, 0))
win.createLightEntity("sun", tvec=(0, 0, 100))
# position the camera according to the first marker detected
win.setCameraPose(tvecs[0].ravel(), rvecs[0].ravel(), invert=True)
- You can find the full source-code for the above steps, nicely combined into a main-loop over at OpenCV.
- Alternatively, see this Ogre Sample which works with both Ogre and OpenCV installed via pip
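For reference, here is a minimal sketch of such a main loop, condensed from the steps above (this is my own sketch rather than the exact sample code; it reuses the variables defined earlier and exits when ESC is pressed):
while cv.ovis.waitKey(1) != 27:  # render and poll events until ESC is pressed
    img = cap.read()[1]                      # grab the next camera frame
    win.setBackground(img)                   # update the Reality part
    corners, ids = cv.aruco.detectMarkers(img, adict)[:2]
    if ids is not None:                      # only update the pose if a marker was found
        rvecs, tvecs = cv.aruco.estimatePoseSingleMarkers(corners, 5, K, None)[:2]
        win.setCameraPose(tvecs[0].ravel(), rvecs[0].ravel(), invert=True)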
To record the results, you can use win.getScreenshot() and dump it into a cv.VideoWriter – contrary to the name, this works in real-time.
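As a rough sketch (the filename and frame-rate are just examples, and depending on the channel order returned by getScreenshot you may need a cv.cvtColor first), this could look like:
vw = cv.VideoWriter("ar_demo.avi", cv.VideoWriter_fourcc(*"MJPG"), 30, imsize)
# inside the render loop: grab the current rendering and append it to the video
frame = win.getScreenshot()
vw.write(frame)
# once you are done recording
vw.release()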
Extending the above code to use cv.aruco.GridBoard as done in the teaser video is left as an exercise for the reader, as this is more on the OpenCV side.
Also, if you would rather use ARCore on Android, there is a sample showing how to use the SurfaceTexture with Ogre. Using this, you should be able to modify the hello_ar_java sample from the arcore-sdk to use Ogre.
Ogre 1.12.10 released
Ogre 1.12.10 was just released. This “holiday release” mostly contains bugfixes; however, there are also some notable additions.
But first, we want to thank everyone for their support, as we reached 2000 stars on GitHub.
Table of Contents
- Native GLES2 RenderSystem on Windows
- Improved RenderToVertexBuffer
- Improved ParticleSystem
- Easy 3D Text – aka MovableText out of the box
Native GLES2 RenderSystem on Windows
Due to some work on WGL, you can now build and use the GLES2 RenderSystem on Windows without any glue libraries like ANGLE. Consequently, it is included in the SDK, so it is easy to try – if you are on NVIDIA/ Intel, that is, since AMD does not support the respective extension. There are no screenshots here, as they would look almost exactly like on GL3+.
Improved RenderToVertexBuffer
Shaders bound to RenderToVertexBuffer (aka Transform Feedback/ Stream Output) can now use auto parameters (param_named_auto) as in any other stage.
Furthermore, the D3D11 implementation was updated and the GPU particles sample now runs on D3D11 too.
Improved ParticleSystem
The ParticleSystem and the default BillboardParticleRenderer received some optimizations regarding colour- and bounds-related computations.
Notably, Ogre now uniformly samples the particle direction, where previously it would incorrectly bias towards the poles:
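To illustrate the underlying idea with a standalone sketch (this is not Ogre's actual code; the 45° cone half-angle is arbitrary): picking the polar angle uniformly clusters directions near the pole, whereas picking its cosine uniformly distributes them evenly over the emission cone:
import numpy as np

rng = np.random.default_rng()
half_angle = np.pi / 4  # emission cone half-angle, arbitrary for this illustration

def biased_direction():
    theta = rng.uniform(0, half_angle)  # uniform in the angle -> clusters at the pole
    phi = rng.uniform(0, 2 * np.pi)
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def uniform_direction():
    cos_theta = rng.uniform(np.cos(half_angle), 1)  # uniform in cos(theta) -> even over the cap
    sin_theta = np.sqrt(1 - cos_theta**2)
    phi = rng.uniform(0, 2 * np.pi)
    return np.array([sin_theta * np.cos(phi),
                     sin_theta * np.sin(phi),
                     cos_theta])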
Easy 3D Text – aka MovableText out of the box
While working on the particles, I got the idea that one could re-purpose the existing BillboardSet for 3D text rendering – and indeed, it fits perfectly, as you can see above. No more snippets from the Wiki needed, and everything integrates with the existing API:
auto font = FontManager::getSingleton().getByName("SdkTrays/Caption");
auto bbs = mSceneMgr->createBillboardSet();
font->putText(bbs, "OGRE\n ...", 10);
Furthermore, I fixed the glyph packing code, which now correctly aligns the font-atlas elements, thus obsoleting the character_spacer tuning option.
Game Highlight – Spellheart
Today we want to present another game highlight of an Ogre3D-based game. This time: Spellheart.
We asked the team behind the game if they could share some insights into the Ogre3D usage and how the game was built in general, and Robin was kind enough to provide those:
Spellheart is a MOBA (Multiplayer online battle arena) game. You can build your entirely own class by choosing items and abilities, and with an extremely customizable server that anyone can host, the possibilities are endless.
The game is based upon an idea that I have never seen before. Most RPG games are static, limiting your build options and forcing you to min/max. In this game, there is no best build because there are no classes. You create your own build without limitations or restrictions. With a customizable server, gameplay can be balanced in real time.
I have been working on Spellheart myself from the start, though some friends have helped me with a few assets. As I am only a programmer, I can only do that part of the game. The 3D models and sounds in the game are mostly from people that put them out for free with good copyrights.
I am using Ogre version 1.11.2, but I have modified the source for some minor things, such as making particles be able to use an atlas. I have also made a lot of custom particle affectors.
All shaders in the game are written by hand, there is no on-the-fly automatic generation of shaders. Though I have written a program that helps me with this, so that I only have to alter one base shader to then generate all shaders at once for D3D9/ CG/ D3D11.
I do not use Deferred shading or anything like that, just normal Forward shading. Since my shaders can handle up to 20 lights and the game being top-down, there is no need for me to have support for more lights. Forward shading is therefore perfect for this game, while being faster than the newer techniques.
I use a lot of batching through a custom-written ManualObject class to make a lot of smaller objects into one single batch in a very optimized manner. This happens for things in the game such as grass and footsteps on sand.
I also use the built-in Instancing (HW Basic) on the static objects in my game. Since the game is top-down, this usually does not help that much, but “Many a little makes a mickle”. I live by that expression, optimizing everything I see that can be a bottleneck. That is why the game can run at a very high FPS (500 FPS with a few changes in the options menu on my computer).
An edited version of Gorilla is used for the GUI of the game, but I also made a custom atlas generator for it to make it much easier to use. This also enables me to use a normalmap for each GUI element which is shown by a light from the cursor.
At the moment, the game only works for Windows (x64), but as Ogre can be compiled for all platforms, Linux and Mac could be added in the future (and also then using GLSL instead of CG).
The option menu in the game has extremely many options to make the game run on any kind of hardware (even Fixed Function Pipeline is possible through this).
I like Ogre because its community is helpful and that you can easily alter the source code if you want.
Many game engines out there are based around being a tool-user instead of being a programmer, and I don’t want that. Those engines also seem to have performance issues in many of their games, unless you have a very high-end computer. Therefore, Ogre is the ultimate engine for me. It allows me to be a programmer and not just a tool-user, while being able to make games for low-end computers all the way up to high-end.
Libraries I use in the code
- Ogre3D (Rendering)
- Gorilla (GUI, though a bit modified)
- enet (Networking)
- Bullet (Physics)
- OpenAL (Sound)
- CEF (In-game browser)
- Theora (Video)
- FLTK (GUI, for the server only)
- Qt (GUI, for the launcher only)
Programs I use for the game
- Visual Studio 2017 (Compiler)
- Blender (3D models)
- GIMP (Textures)
- Inno Setup (Installer)
Vulkan and Android support added to Ogre 2.3!
Some of you who follow me on Twitter or its Ogre thread may be aware of it.
But if you don’t: We added Vulkan support! And with it, Android support came along!
The vast majority of features and samples are already working. There are some missing pieces (see the Github ticket), but overall it is much more stable and robust than I’d hoped for at this stage.
The last time we spoke about this was in November 2019 with our Vulkan Progress Report post. We’ve come a long way since then!
Shout-out to user Hotshot5000
This work was possible because user Hotshot5000 took my branch, forked it, and advanced it further.
The Vulkan port was a daunting, overwhelming task and his contributions greatly helped me figure out the way to make it work.
It also saved me a lot of time. Even though around 40% of his code couldn’t make it into the final version, it was still very important as a proof of concept or as a reference implementation to base from, or as a way to compare new non-working code against a working reference.
Moving forward
Documentation is still being updated. Docs on how to compile for Android are already up.
Existing applications may need to perform additional work to get Vulkan running (e.g. port shaders to Vulkan). While this isn’t difficult, there is no guide written yet.
The 2.3 preparations ticket has a list of things that have changed and may require a dev’s attention when porting from 2.2 to 2.3.
This list is updated at irregular intervals, and once 2.3 is out this page will probably be moved somewhere else (in fact, it is a draft for the News post for whenever we release 2.3). But for the time being, that ticket is our hub for checking 2.2 -> 2.3 changes.
Users wanting to learn how Vulkan works in Ogre may be very interested in reading the new RootLayout class documentation.
That’s all for now! We’re very excited about what comes out of this.
Further discussion in forum thread.
Ogre 2.2.4 Cerberus Released!
This is a maintenance release. Efforts to port from 2.2.3 to 2.2.4 should be minimal.
For a full list of changes, see the Github release.
Source and SDK are on the download page.
Discussion in forum thread.
Ogre 1.12.9 released
Ogre 1.12.9 was just released. Typically, we do not write a specific announcement for minor updates; however, this one contains some major new features that warrant an exception.
Multi Language GPU Programs
As noted in the last progress report, even when using OgreUnifiedShader.h, one had to duplicate the GpuProgram definitions in the Ogre .material files. This has been fixed in master by allowing multi-language programs to be defined like this:
fragment_program myFragmentShader glsl glsles hlsl
{
source example.frag
}
This change allowed us to drop around 900 redundant lines of code inside the samples.
Ogre Assimp Plugin
The separate ogre-assimp project was merged into the main repository as the Codec_Assimp plugin, which allows loading arbitrary meshes at runtime, and the OgreAssimpConverter tool for converting meshes to the Ogre .mesh format offline (which improves loading time).
Especially the introduction of Codec_Assimp is notable, as mesh loading now goes through the same Codec dispatching as image formats.
On the application side, you can then just call sceneManager->createEntity("Bike.obj") like I did in OgreMeshViewer below:
so simply loading Codec_Assimp turned that into a general-purpose mesh-viewer.
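With the Python bindings, the same idea looks roughly like the following sketch (this is a hypothetical example: the plugin would normally be listed in plugins.cfg rather than loaded by hand, and "media"/"Bike.obj" are placeholder names that must exist on your system):
import Ogre
import Ogre.Bites

ctx = Ogre.Bites.ApplicationContext("AssimpDemo")
ctx.initApp()
root = ctx.getRoot()
root.loadPlugin("Codec_Assimp")  # registers the Assimp mesh codec, if plugins.cfg does not already
scn_mgr = root.createSceneManager()

# the mesh file must live in a registered resource location
rgm = Ogre.ResourceGroupManager.getSingleton()
rgm.addResourceLocation("media", "FileSystem")
rgm.initialiseAllResourceGroups()

entity = scn_mgr.createEntity("Bike.obj")  # dispatched through the Assimp codec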
Descriptive Hardware Buffer Usage
Did you ever wonder whether your buffer usage is rather static or rather dynamic? Well, at least I did, as these rather abstract names do not tell you much about what they mean in terms of memory allocation – but wonder no more! There are now new, actually descriptive, aliases:
| Old Name | New descriptive alias |
| --- | --- |
| HBU_STATIC_WRITE_ONLY | HBU_GPU_ONLY |
| HBU_DYNAMIC_WRITE_ONLY | HBU_CPU_TO_GPU |
| HBU_STATIC | HBU_GPU_TO_CPU |
| HBU_DYNAMIC | HBU_CPU_ONLY |
So while previously you were told to use HBU_STATIC_WRITE_ONLY, you now immediately see that the buffer will end up in GPU memory. Likewise, if you use the DYNAMIC variant, the buffer will be optimized for recurrent CPU writes.
I came up with those when planning the Vulkan RenderSystem backport, and they are actually borrowed from the Vulkan Memory Allocator library. There you can also find all the subtleties regarding Vulkan that I did not bother copying to the Ogre manual.
While these flags were named with Vulkan in mind, they map surprisingly well to the Ogre ones and, most importantly, to the actual Ogre usage. After refining the meaning, this allowed dropping several superfluous copies and readbacks in the D3D11 & D3D9 RenderSystems. Yes, they even map well to D3D9.
Super-fast debug drawing
Debug drawing in Ogre has been refactored and is now abstracted by the DebugDrawer API with a DefaultDebugDrawer implementation.
One advantage of this is that debug drawing code will no longer be sprayed across core classes – at least with 1.13; for now, we keep things as they are so as not to break any obscure use-cases.
The main advantage, however, is that debug drawing is now properly batched: whereas there was previously one draw-call per WireBox, there is now one draw-call for all wireboxes in the scene. The same is true for coordinate crosses and camera frustums.
Other highlights
- Ogre can now be built using the latest Emscripten SDK, thanks to a contribution by Gustavo Branco
- The Config dialog on Windows & Linux was updated to use the new Ogre logo
Ogre ecosystem roundup #6
Following the last post about what is going on around Ogre, here is another update. With the Ogre 1.12.8 release, mainly the usability of Ogre was improved, with the following additions.
Table of Contents
- Nicer shadows
- Improved Documentation
- Better Python integration
- OgreUnifiedShader as a Cg replacement
- blender2ogre for Blender 2.8x
Nicer shadows
While revising the depth-texture implementation, I noticed that we were taking advantage of that feature for performance instead of quality. With depth-textures, the GPU can perform bilinear interpolation of the depth-test, so a single sample (tap) roughly corresponds to a classical 4-tap PCF in the shader. So what we did was just take 1 sample instead of 4.
However for hardware capable of depth-textures, performance is typically not an issue, so we should rather improve quality.
The images below show crops from the ShaderSystem sample at 200%. On the left, you see the old setting, while the new setting is in the middle. On the right, you see the 4-tap shader fallback used when depth-textures are not available i.e. on D3D9 and GLES2.
Yeah, so the other news is that D3D11 finally got depth-texture (PF_DEPTH) support as well.
But what if you need to support RenderSystems without PF_DEPTH, or you are on GLES2, where PF_DEPTH support is not guaranteed? Well, Ogre will now gracefully fall back to the closest non-depth format, e.g. PF_DEPTH16 > PF_L16, and the RTSS will then deal with the differences.
While at it, I also updated the default shadow material settings to be more robust against shadow aliasing artefacts:
Thanks to an additional round of fixes to the emscripten GLES2 backend, you can also try out the shadow improvements in a WebGL2 capable browser.
Improved Documentation
I took another look at the documentation and added a CI test to detect docstring inconsistencies like undocumented or superfluous parameters. This should help improve the documentation quality of external pull-requests as well as my own. While at it, I also added doxygen groups to several large classes. These allow tying related methods together. See e.g. the Material class:
- before: https://ogrecave.github.io/ogre/api/1.11/class_ogre_1_1_material.html
- after: https://ogrecave.github.io/ogre/api/1.12/class_ogre_1_1_material.html
Speaking of Materials.. did you ever wonder how they work exactly? Well, now you can take a look at
Better Python integration
Next, it kind of bothered me that one has to write such verbose code when using the Python bindings, so I tried to take advantage of the Python protocols:
# Now, instead of specifying the type as before
vp.setBackgroundColour(Ogre.ColourValue(.3, .3, .3))
# you can do
vp.setBackgroundColour((.3, .3, .3))
# or
vp.setBackgroundColour(numpy.zeros(3))
# or even
vp.setBackgroundColour(range(3))
Also, bytes objects are now supported where raw pointers are expected in the API:
# so you can do (assuming numpy is imported as np and ogre_img is an Ogre.Image)
arr = np.zeros((256, 256, 3), dtype=np.uint8)
ogre_img.loadDynamicImage(arr, 256, 256, Ogre.PF_BYTE_RGB)
OgreUnifiedShader as a Cg replacement
In an effort to provide a modern alternative to Cg and to reduce the maintenance overhead of the Ogre internal shaders, I created the OgreUnifiedShader.h header that allows writing cross-platform shaders.
It is greatly inspired by the shiny library for Ogre and the shaderc tool of bgfx. However, in contrast to the latter, this header is fully self-contained. All you need to do is #include it – no need to run an additional tool. It also has the advantage that you can run your shader through the standard C preprocessor (cpp) to see what the transformed shader looks like.
Just like bgfx, I opted for GLSL as the least common denominator between the shader languages – that is, with some preprocessor-based abstractions on top:
- Declare samplers with the SAMPLER2D/3D/CUBE/.. macros instead of sampler2D/3D/Cube/..
- Use the HLSL-style mul(x, y) for matrix multiplication
- Use mtxFromRows to construct matrices
- Declare parameters with the IN/ OUT macros instead of attribute/ varying
- Use the MAIN_PARAMETERS & MAIN_DECLARATION macros instead of void main()
Here, I tried to maintain compatibility with bgfx where possible.
For an example of usage, see this PR, where I converted the Light Shafts Demo from Cg to HLSL & GLSL. As you can see there is still some room for improvements on the Ogre Material side.
Anyway, this allowed us to drop 1800 lines (#1663) of shader code from the RTSS, and we have now arrived at a single, GLSL-based shader library. When I joined Ogre, there were 4 RTSS implementations (GLSL, GLSLES, HLSL, Cg) – each with individual, subtle bugs.
blender2ogre for Blender 2.8x
Thanks to a contribution by “Grodou”, blender2ogre gained back the ability to export textures with Blender 2.8x, which is notable as we need to read from the node-based shaders for this. Together with some additional fixes by yours truly and the initial porting done by Paul Gerke, this brings exporting with Blender 2.8x roughly to the same level as with Blender 2.7x.
However, we can now take advantage of the texture semantics one gets from the Cycles materials. Therefore, blender2ogre is able to correctly export emissive textures and, notably, normal maps.
The difference in appearance that you see above is due to the missing gamma handling. Also, to fully match what you see in Blender, we need to add support for metalness and roughness maps.
Ogre 2.2.3 Cerberus Released!
This is a maintenance release. Efforts to port from 2.2.2 to 2.2.3 should be minimal.
Notable addition:
- Fix shared depth buffers leaking (#103)
For a full list see the Github release
Discussion in forum thread.
Ogre ecosystem roundup #5
Following the last post about what is going on around Ogre, here is another update.
Table of Contents
- Ogre 1.12.7 point release
- OgreMeshViewer: LOD preview
Ogre 1.12.7 point release
The 1.12.7 point release kept its focus on integration. Notably, it ships the new Metal RenderSystem that was discussed in a previous post. Also, there are the following notable changes:
- Improved Terrain Rendering: The lighting computation was pretty messed up before, which I could fix. See the following screens for comparison. Also, now vertex compression (60% less data per vertex) is used with OpenGL, too.
- Filament shader support: Thanks to a contribution of SNiLD, I could extend the existing PBR sample to showcase the usage of Filament PBR shaders. The images below also show the existing glTF2 based material and how far you get by only using plain ogre materials.
- New stable CSM Sample: I found another interesting Demo on the Forums and added it as a Sample to the Sample Browser. This time it is “Cascaded Shadow Mapping” which resembles the implementation you get in the CryEngine. This is a different take on the PSSM Shadow Mapping Algorithm, which is best explained by visualizing it
- Debug view in the PSSM RTShader: While working on the CSM Sample, I found the debug split view particularly useful. Therefore, you can now visualize how the PSSM Shadow splits are placed in your scene – check out the updated ShaderSystem Sample.
OgreMeshViewer: LOD preview
If your mesh contains LOD levels, the MeshViewer will now display a new tab highlighting the currently active level.
If you would like to know how to automatically generate LOD for your meshes, see the updated Ogre Tutorial.
Ogre 2.2.2 Cerberus Released!
This is a maintenance release. Efforts to port from 2.2.1 to 2.2.2 should be minimal.
Notable additions are:
- Stable PSSM shadows technique added. Use this feature with ‘num_stable_splits 1’ in compositor scripts when defining the shadow node. Stable PSSM shadows tend to improve quality on the first splits, but they can dramatically reduce quality on the last splits of PSSM; thus using num_stable_splits 1 or 2 but not higher than that is recommended
- D3D11 improved its handling of device lost
- It is now possible to draw to a cubemap using MSAA, which was an oversight in Ogre 2.2.1. See the updated DynamicCubemap tutorial
For a full list see the Github release
Discussion in forum thread.