Ogre-Next offers a wide array of Global Illumination solutions.
Some better than others, but VCT (Voxel Cone Tracing) stands out for its high quality at an acceptable performance (on high end GPUs).
However, the main problem right now with our VCT implementation is that it's hard to use and needs a lot of manual tweaking:
- Voxelization process is relatively slow. 10k triangles can take 10ms to voxelize on a Radeon RX 6800 XT, which makes it unsuitable for realtime voxelization (only load time or offline)
- Large scenes / outdoors need a very large resolution (e.g. 1024x32x1024) or suffer large quality degradation
- It works best with static geometry in a relatively small scene, like a room or a house.
If your game is divided in small sections that are paged in/out (i.e. PS1 era games like Resident Evil, Final Fantasy 7/8/9, Grim Fandango) VCT would be ideal.
But in the current generation of games, with continuous movement over large areas, VCT falls short unless you apply an insane amount of tricks.
So we’re looking to improve this and that’s where our new technique Cascaded Image VCT (CIVCT… it wouldn’t be a graphics technique if we didn’t come up with a long acronym) comes in:
- Voxelizes much faster (10x to 100x), enabling real time voxelization. Right now we’re focusing on static meshes but it should be possible to support dynamic stuff as well
- User friendly
- Works out of the box
- Quality settings easy to understand
- Adapts to many conditions (indoor, outdoor, small, large scenes)
That would be pretty much the holy grail of real time GI.
Step 1: Image Voxelizer
Our current VctVoxelizer is triangle based: It feeds on triangles, and outputs a 3D voxel (Albedo + Normal + Emissive). This voxel is then fed to VctLighting to produce the final GI result:
Right now VctVoxelizer voxelizes the entire scene. This is slow.
Image Voxelizer is image-based and consists of two steps:
- Reuse VctVoxelizer to separately voxelize each mesh offline and save results to disk (or during load time). At 64x64x64 a mesh would need between 2MB and 3MB of VRAM per mesh (and disk space) depending on whether the object contains emissive materials. Some meshes require much lower resolution though. This is user tweakable. You’d want to dedicate more resolution to important/big meshes, and lower resolution for the unimportant ones.
- This may sound like too much, but bear in mind it is a fixed cost independent of triangle count. A mesh with a million triangles and a mesh with 10,000 triangles will both occupy the same amount of VRAM.
- Objects are rarely cubic. For example, a desk is often wider than it is tall or deep. Hence it may need just 64x32x32, which is between 0.5MB and 0.75MB
- Each frame, we stitch together these 3D voxels of meshes via trilinear interpolation into a scene voxel. This is very fast.
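To put the numbers above in perspective, the per-mesh VRAM cost can be sketched as follows. This is only my reading of the 2MB/3MB figures: one RGBA8 3D texture each for albedo and normals, plus an optional one for emissive; the actual texture layout in Ogre-Next may differ.

```python
def voxel_vram_bytes(width, height, depth, has_emissive=False,
                     bytes_per_voxel=4):
    """Estimate per-mesh voxel VRAM, assuming one RGBA8 3D texture each
    for albedo and normals, plus an optional one for emissive."""
    textures = 3 if has_emissive else 2
    return width * height * depth * bytes_per_voxel * textures

MB = 1024 * 1024
print(voxel_vram_bytes(64, 64, 64) / MB)                     # 2.0
print(voxel_vram_bytes(64, 64, 64, has_emissive=True) / MB)  # 3.0
print(voxel_vram_bytes(64, 32, 32) / MB)                     # 0.5
```

Note how the cost depends only on resolution, never on triangle count.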
This feature is fast thanks to Vulkan, which allows us to dynamically index an arbitrary number of bound textures in a single compute dispatch.
OpenGL, Direct3D 11 & Metal* will also support this feature but may experience degraded performance as we must perform the voxelization in multiple passes. How much of a degradation depends on the API, e.g. OpenGL actually will let us dynamically index the texture but has a hard limit on how many textures we can bind per pass.
(*) I’m not sure if Metal supports dynamic texture indexing or not. Needs checking.
Therefore this is how it changed:
This step is done offline or at loading time:
This step can be done every frame, when the camera moves too much, or if an object is moved.
There is a downside to this (aside from VRAM usage): we need to voxelize each mesh + material combo. Meaning that if you have one mesh and want to apply different materials to it, we need to spend 2-3MB per material.
This is rarely a problem though, because most meshes use only one set of materials. And for those that use several, you may be able to get away with baking a material set that is either white or similar; the end result after calculating GI may not vary enough to be worth the extra VRAM cost.
- For simple colour swaps (e.g. RTS games, FPS with multiplayer teams), this could be worked around by adding a single colour multiplier, rather than voxelizing the mesh once per material
- It should be possible to apply some sort of BC1-like compression, given that the mesh opaqueness and shape is the same. The only thing that changes is colour; thus a delta colour compression scheme could work well
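As an illustration of that last idea only (nothing like this is implemented), a delta colour compression scheme could store just the quantized colour difference of each material variant against the base voxelization:

```python
import numpy as np

def compress_variant(base_rgba, variant_rgba, bits=4):
    """Keep only the quantized colour difference of a material variant
    against the base voxelization (shape/opaqueness assumed identical)."""
    step = 256 >> bits                                  # e.g. 16 for 4 bits
    delta = variant_rgba.astype(np.int16) - base_rgba.astype(np.int16)
    return np.round(delta / step).astype(np.int8)

def decompress_variant(base_rgba, qdelta, bits=4):
    """Rebuild the variant's voxel colours from base + quantized delta."""
    step = 256 >> bits
    out = base_rgba.astype(np.int16) + qdelta.astype(np.int16) * step
    return np.clip(out, 0, 255).astype(np.uint8)
```

With 4 bits per channel the variant costs a quarter of a full voxelization, at the price of a small colour error (up to half a quantization step).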
At first I panicked a little while developing the Image Voxelizer, because the initial quality was far inferior to that of the original voxelizer.
The problem was that the original VCT is a 'perfect' voxelization: if a triangle touches a single voxel, that voxel adopts the colour of the triangle. Its neighbouring voxels remain empty. Simple.
That results in a ‘thin’ voxel result.
However in IVCT, mesh voxels are interpolated into a scene voxel that does not match their resolution and may be arbitrarily offset by subpixels. It's not aligned either.
The result is that certain voxels have 0.1, 0.2 … 0.9 of the influence of the mesh. This generates ‘fatter’ voxels.
In 2D we would say the image has a halo around its contours.
Once I understood what was going on, I tweaked the math to thin out these voxels by looking at the alpha value of the interpolated mesh and applying an exponential curve to get rid of this halo.
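A minimal sketch of that thinning idea (the exponent value and function names here are made up for illustration; the actual curve in the voxelizer may differ):

```python
import numpy as np

def thin_voxels(albedo, alpha, exponent=4.0):
    """Push low interpolation weights (the halo) towards zero with an
    exponential curve, while fully covered voxels keep alpha = 1."""
    thinned = alpha ** exponent
    return albedo * thinned[..., np.newaxis], thinned
```

With an exponent of 4, a half-covered halo voxel (alpha = 0.5) drops to 0.0625, while alpha = 1.0 stays untouched, sharpening the silhouette.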
And now it looks very close to the reference VCT implementation!
Step 2: Row Translation
We want to use cascades (a concept similar to Cascaded Shadow Maps; in Ogre we call it Parallel Split Shadow Maps, but it's the same thing) concentric around the camera.
That means once the camera has moved too much, we must move the cascades and re-voxelize.
But we don’t need to voxelize the entire thing from scratch. We can translate everything by 1 voxel, and then revoxelize the new row:
Once the camera has moved far enough, we must translate the voxel cascade
Given that we only need to partially update the voxels after camera movement, it makes supporting cascades very fast
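The row translation can be sketched on the CPU with numpy (the real implementation works on GPU 3D textures; names and the NaN marker are purely illustrative):

```python
import numpy as np

def translate_cascade(voxels, axis, direction):
    """Shift a voxel cascade by one voxel along `axis` after the camera
    moved; only the newly exposed slice needs re-voxelization."""
    shifted = np.roll(voxels, -direction, axis=axis)
    # mark the slice that just entered the cascade as "to revoxelize"
    new_slice = [slice(None)] * voxels.ndim
    new_slice[axis] = -1 if direction > 0 else 0
    shifted[tuple(new_slice)] = np.nan
    return shifted
```

Everything but one slice is a plain copy, which is why supporting cascades stays cheap.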
Right now this step is handled by
Step 3: Cascades
This step is currently a work in progress. The implementation is planned to have N cascades (N user defined). During cone tracing, after we reach the end of a cascade we move on to the next cascade, which covers more ground but has coarser resolution, hence lower quality.
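The cascade walk during cone tracing can be sketched like this (structure and numbers are illustrative, not the actual Ogre-Next implementation):

```python
def sample_cascade(cascades, distance):
    """Pick the first cascade whose half-extent covers the sample
    distance; farther cascades cover more ground at coarser resolution."""
    for half_extent, resolution in cascades:
        if distance <= half_extent:
            voxel_size = 2.0 * half_extent / resolution
            return half_extent, voxel_size
    return None  # past the last cascade: stop the cone trace

# e.g. three cascades, each covering 2x the range of the previous one
cascades = [(8.0, 128), (16.0, 128), (32.0, 128)]
print(sample_cascade(cascades, 12.0))  # (16.0, 0.25)
```

Note how the voxel size (and thus quality) degrades as the sample distance forces us into outer cascades.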
Wait isn’t this what UE5’s Lumen does?
AFAIK Lumen is also a Voxel Cone Tracer. Therefore it’s normal there will be similarities. I don’t know if they use cascades though.
As far as I've read, Lumen uses an entirely different approach to voxelizing which involves rasterizing from all 6 sides, which makes it very user hostile, as meshes must be broken down into individual components (e.g. instead of submitting a house, each wall, column, floor and ceiling must be its own mesh).
With Ogre-Next you just provide the mesh and it will just work (although with manual tuning you could achieve greater memory savings, e.g. if the columns are split and voxelized separately).
Wait isn’t this what Godot does?
Well, I was involved in SDFGI advising Juan on the subject, thus of course there are a lot of similarities.
The main difference is that Godot generates a cascade of SDFs (signed distance fields), while Ogre-Next is generating a cascade of voxels.
This allows Godot to render on slower GPUs and it is especially better at specular reflections, but at the expense of accuracy (there's a significant visual difference when toggling between Godot's own raw VCT implementation and its SDFGI, though both look pretty). I believe these quality issues could be improved in the future.
Having an SDF of the scene also offers interesting potential features such as ‘contact shadows’ in the distance.
Ogre-Next may generate an SDF as well in the future, as it offers many potential features (e.g. contact shadows) and speed improvements. Please understand that VCT is an actively researched topic and we are all trying and exploring different methods to see what works best and under which conditions.
The underlying techniques aren’t new, but what made it possible are the new APIs and the raw power provided by current generation of GPUs that can keep up with them (although the current GPU shortage might delay the widespread adoption of these techniques).
Since this technique will be used in Ignition Gazebo for simulations, I had to err on the side of accuracy.
When is it coming?
CIVCT isn't done yet, but hopefully it should be ready in 1-2 more weekends (I can only work on this during weekends). Maybe 3? (I hope not!) I want to release Ogre-Next 2.3 RC0 in the meantime, and when CIVCT is ready, a proper Ogre-Next 2.3 release.
The reason it's taking so little time is that we're improving on our existing technology and reusing lots of code. We're just changing a few details to make it faster and more user friendly now that Vulkan gives us that freedom (but again, we plan on supporting this feature on all our API backends).
These improvements currently live in the vct-image branch, but there is no sample showcasing it yet as it is a WIP.
Btw! Remember there is an active poll to decide on the Ogre-Next 2.3 name. After the first round of candidates, we've got to the final round! Please cast your vote to decide the name for Ogre-Next 2.3 (and yes, an official release is coming soon!).
The code was simplified during backporting, as shown by the size reduction from ~33k LoC in Ogre-Next to the ~9k LoC that are now in Ogre.
The current implementation emulates Fixed Function capabilities, which allows operating with one default shader – similar to what I did for Metal. This shader only supports a single 2D texture without lighting; e.g. vertex colour is not supported. This is why the text is white instead of black in the screenshot above.
Nevertheless, it already runs on Linux, Windows and Android.
Proper lighting and texturing support will require some adaptations to the GLSL writer in the RTSS, as Vulkan GLSL is slightly different from OpenGL GLSL. This and the other currently missing features will hopefully come together during the 13.x development cycle. If you are particularly keen on using Vulkan, consider giving a hand.
Right now, the main goal is to get Vulkan feature-complete first, so don't expect it to outperform any of the other RenderSystems. Due to being incomplete, the Vulkan RenderSystem is tagged EXPERIMENTAL.
Ogre 13.1 release
The per-pixel RTSS stage gained support for two sided lighting. This is useful if you want to have a plane correctly lit from both sides or for transparency effects, as shown below:
Furthermore, PCF16 filtering support was added to the PSSM RTSS stage. This gives you softer shadows at the cost of 4x the texture lookups. The images below show crops from the ShaderSystem sample at 200%, highlighting the effect.
blender2ogre improved even further
Thanks to the continued work by Guillermo “sercero” Ojea Quintana, blender2ogre gained some exciting new features.
The first is support for specifying Static and Instanced geometry like this. You might wonder whether you should use these and, if so, which variant. Therefore, he also collected the respective documentation, which is available here.
The second notable feature is support for .mesh import, which might come in handy if you are modding some Ogre based game or just lost the source .blend file. This feature is based on the respective code found in the Kenshi Blender Plugin (which in turn is based on the Torchlight plugin).
Then, old_man_auz chimed in and fixed some bugs when exporting to Ogre-Next, while also cleaning up the codebase and improving documentation.
Finally, yours truly added CI unit-tests, which make contributing to blender2ogre easier.
OpenAL EFX support in ogre-audiovideo
Again contributed by sercero are some important additions to the audio part of the ogre-audiovideo project which drastically improve its usability.
The first one is that you no longer need boost to enable threading. OgreOggSound will now follow whatever Ogre is configured with.
The second one is being able to use EFX effects with openal-soft instead of the long-dead creative implementation. This enables effects like reverb or bandpass filters.
Read more in the release notes. This release, too, was done by sercero, who kindly took on the burden of co-maintaining the project.
We just tagged the Ogre 13 release, making it the new current and recommended version. We would advise you to update wherever possible, to benefit from all the fixes and improvements that made their way into the new release.
This release represents 2.5 years of work from various contributors when compared to the previous major 1.12 release. Compared to the last Ogre minor release (1.12.12), however, we are only talking about 4 months. Here you will mainly find bugfixes and the architectural changes that justify the version number bump.
For source code and pre-compiled SDKs, see the downloads page.
Ogre 1.12.12 release
The last 1.12 release had some serious regressions in D3D9 and GL1, therefore I scheduled one more release in the 1.12.x series.
Updated release notes
As the Ogre 1.12 series was an LTS release, many important features landed after the initial 1.12.0 release. To take this into account and to give an overview of which version you need, the "New and Noteworthy" document was updated with the post-.0 additions (search for "12." to quickly skim through them).
Nevertheless, there are also some new features in the 1.12.12 release itself:
Cubemap support in compositors
Compositor render targets can now be cubemaps by adding the cubic keyword to the texture declaration – just like with materials.
To really take advantage of this, you can now also specify the camera to use when doing render_scene passes. This way any camera in your scene can be used as an environment-probe for cube mapping.
Finally, to really avoid touching any C++, there is now the align_to_face keyword which automatically orients the given camera to point to the targeted cubemap face.
Terrain Component in Bindings
Thanks to a contribution by Natan Fernandes there is now initial support for the Terrain Component in our C#/ Java/ Python bindings.
Python SDK as PIP package
Python programmers can now obtain an Ogre SDK directly from PyPI, as they are used to, with:
pip install ogre-python
Just like the MSVC and Android SDKs, it includes the assimp plugin, which allows loading almost any mesh format, and ImGui, so you can create a UI in a breeze.
For now only Python 3.8 is covered – but on all platforms. This means you can now have a pre-compiled package for OSX and Linux too.
Thanks to some great work by Guillermo “sercero” Ojea Quintana, the blender2ogre export settings are much more user friendly now:
On top of having some context on what an option might do, the exporter can now also let Ogre generate the LOD levels. This gives you the choice to either
- Iteratively apply blender "decimate" as in previous releases. This will generate one .mesh file per LOD level, but may result in a visually better LOD
- Use the Ogre MeshLOD Component. This will store all LOD levels in one .mesh file, only creating a separate index-buffer per LOD. This greatly reduces storage compared to the above method.
But he did not stop there: blender2ogre now also exports NodeAnimationTrack based animations. To this end it follows the format introduced by EasyOgreExporter, so both exporters are compatible with each other.
To formalise this, he even extended the .scene type definition, so other exporters implementing this function can validate their output.
Needless to say, he also extended the DotScene Plugin shipped with 1.12.12 to actually load these animations.
.scene support in ogre-meshviewer
Picking up the work by Guillermo, I extended ogre-meshviewer to load .scene files – in addition to .mesh files and whatever formats assimp knows about.
However, for now it will merely display the scene – there are no inspection tools yet.
Ogre 1.12.11 was just released. This is the last scheduled release for the 1.12 series and contains many bugfixes and new features. The smaller ones are:
- Gamepad Support in OgreBites
- Restructured GPU Program Script documentation
- Added Camera::setSortMode to account for rendering 2D layers instead of 3D geometry (as with 2D games)
The more notable new features will be presented in more detail in the following sections.
Support for animated particles
Support for animating particles via Sprite-sheet textures was added. This enables a whole new class of effects with vanilla Ogre that previously required using particle-universe.
On the screenshots above, you see the Smoke demo, which was updated to showcase the new feature. However, the screenshots do not do it full justice. If you are interested, it is best to download the SampleBrowser build and see the effect in action.
See this post (targeting blender) for an overview of the general technique.
For running the animation, the new TextureAnimator Affector was added.
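The sprite-sheet addressing underneath such an animation can be sketched as follows (illustrative only; this is not the actual TextureAnimator code):

```python
def spritesheet_uv(frame, cols, rows):
    """UV offset and scale selecting one tile of a sprite-sheet,
    with frames laid out row-major from the top-left."""
    frame %= cols * rows           # loop the animation
    u = (frame % cols) / cols
    v = (frame // cols) / rows
    return (u, v), (1.0 / cols, 1.0 / rows)

print(spritesheet_uv(5, 4, 4))  # ((0.25, 0.25), (0.25, 0.25))
```

Each particle only needs its current frame index; the shader then shifts and scales the texture coordinates to the matching tile.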
While at it, I fixed several bugs deep in Ogre that prevented ParticleSystems from being properly treated as shadow casters. Now you can call setCastShadows as with any other entity and things will just work (see last image).
Did you ever want to launch a Python Interpreter from your Shader or make HTTP requests per-pixel? Well, the wait is finally over – with the new TinyRenderSystem in Ogre 1.12.11 you can.
This render-system is based on the tinyrenderer project, which implements a full software-rasterizer in ~500 loc. If you are curious on how OpenGL works internally, I highly recommend taking a closer look.
For Ogre this had to be doubled to about ~1350 loc, but compared to the Vulkan RenderSystem from 2.x at ~24000 loc it is still tiny (note that this is already after stripping down the v2.3 implementation).
So what do we gain by having that RenderSystem? First, it is a nice stress-test for Ogre, as this is a backend implemented in Ogre itself; each Buffer uses the DefaultBuffer implementation and each RenderWindow is backed by a main-memory image.
This also makes it a great fit for offline conversion tools that want full access to the resources without needing to actually access the GPU.
Next, this is really useful if you want to unit-test an Ogre-based application. Typically, you would need to set up a headless rendering server (more on that below) merely to check whether your triangle is centered correctly in the frame. This is super easy now.
The screenshots on top, taken from the SampleBrowser, show you how far you can actually get with the RenderSystem. Note that there is no alpha blending, no mipmapping, no user-provided shaders and generally no advanced configuration of the rasterization. So if you are after full-featured software rasterization, you are better off with OpenGL on MESA/llvmpipe.
However, if you want to experiment with the rendering pipeline without being bound by the OpenGL API, this is the way to go. You actually can do the HTTP requests per pixel ;). Also, for creating a new RenderSystem, this is the least amount of reference code to read.
Transparent headless mode on Linux
Rendering on a remote machine over ssh just got easier! Previously Ogre required a running X11 instance, which can be a hassle to come by on a machine without any monitors attached (e.g. a GPU server).
Instead of bailing out, Ogre will now merely issue a warning and transparently fall-back to a PBuffer based offscreen window. See this for the technical background.
To be able to do so, Ogre must use EGL instead of GLX, which requires compiling with OGRE_GLSUPPORT_USE_EGL=1. With 1.13, we will use EGL instead of GLX by default.
Compared with the EGL support recently added in v2.2.5, the implementation is much simpler and does not provide any configuration options – but on the plus side, the flag above is the only switch you need to toggle to get it running.
Improved Bullet-Ogre integration
I added a simplified API to the btogre project.
If you want to have physics on top of your rendering, it is now as simple as:
auto mDynWorld = new BtOgre::DynamicsWorld(gravity_vec);
mDynWorld->addRigidBody(weight, yourEntity, BtOgre::CT_SPHERE);
where (as in Bullet) a weight of 0 means static object. Now you can call
and your objects will interact with each other. Of course, if you need more control, the underlying bullet types are still fully exposed.
Oh, and python bindings are now available too.
This is a special release! Like most Ogre 2.1.x and 2.2.x releases, it only contains maintenance fixes and no new features.
Thus porting from 2.2.4 to 2.2.5 should require minimal effort. And this still holds true.
But there is a new feature!
OpenGL traditionally requires a window; without one, OpenGL cannot be used. This implies either X11 or Wayland must be installed and running, which can be a problem when running on cloud servers, VMs, embedded devices, and similar environments.
Direct3D11 doesn’t have this flaw, but it does not run on Linux.
Vulkan also doesn’t have this flaw, but its support is new (coming in Ogre 2.3) and is not yet robust and tested enough. Additionally SW implementations have yet to catch up.
Ogre can use the NULL RenderSystem to run as a server without a window, however this doesn’t actually render anything. It’s only useful to pretend there is a screen so that apps (mostly games) can reuse and turn client code into server code. It’s also useful for mesh manipulation and conversion tools which need to read Ogre meshes but don’t actually render anything.
Fortunately, Khronos introduced a workaround with EGL + PBuffers (not to be confused with 2000-era PBuffers which competed against FBOs) where an offscreen dummy ‘window’ could be created to satisfy OpenGL’s shenanigans.
Because PBuffer support in some EGL drivers is not well tested (e.g. sRGB support was only added in EGL 1.5, which Mesa does not support), Ogre creates a 1×1 PBuffer alongside the context and uses an FBO internally for the 'Window' class. By tying a dummy 1×1 PBuffer to the GL context, OpenGL context creation becomes conceptually free of window interfaces, like in D3D11 and Vulkan.
Switchable interfaces: GLX and EGL
When Ogre is built with both OGRE_GLSUPPORT_USE_GLX and OGRE_GLSUPPORT_USE_EGL_HEADLESS, toggling between GLX and EGL can be done at runtime.
This is how it looks:
Originally the GLX interface will be selected:
But after switching to EGL Headless, only a couple of options appear (since settings like Resolution, VSync and Full Screen no longer make sense)
And like in D3D11/Vulkan, it is possible to select the GPU. /dev/dri/card0 is a dedicated AMD Radeon HD 7770 GPU, /dev/dri/card1 is a dedicated NVIDIA GeForce 1060. Yes, they can coexist:
NVIDIA seems to expose 2 "devices" belonging to the same card: 'EGL_NV_device_cuda #0' is a headless device, while trying to use 'EGL_EXT_device_drm #1' will complain that it can't run in headless mode; it seems meant for use with GLX.
'EGL_EXT_device_drm #2' is the AMD card, and EGL_MESA_device_software is SW emulation.
We chose not to include the marketing names in device selection because Linux drivers (proprietary and open source) tend to change the exposed OpenGL marketing labels quite often in subtle ways. This could break config settings (i.e. the saved chosen device can no longer be found after a driver upgrade), increasing the maintenance burden when this feature is meant for automated testing and similar.
Complete X11 independence
Users who need to be completely free of X11 dependencies can build with OGRE_GLSUPPORT_USE_EGL_HEADLESS + OGRE_CONFIG_UNIX_NO_X11.
This will force-disable OGRE_GLSUPPORT_USE_GLX as it is incompatible. GLX requires X11.
Headless SW Rasterization
It is possible to select the Mesa SW rasterization device, so even if there is no HW support, you can still use SW rendering.
Please note that Mesa SW, at the time of writing, supports up to OpenGL 3.3, which is the bare minimum to run Ogre. Some functionality may not be available.
Update: It has been brought to my attention that llvmpipe (aka SW emulation) supports OpenGL 4.5 since Mesa 20.3.0.
This new feature seems to be very stable and has been tested on NVIDIA, AMD (Mesa drivers) and Intel.
Nonetheless it is disabled by default (i.e. OGRE_GLSUPPORT_USE_EGL_HEADLESS is turned off), so it should not affect users who don't care about headless support.
For more details, please see the README of the EglHeadless tutorial.
Running the EglHeadless sample should result in a CLI interface:
OpenGL ES 3.x may be around the corner?
With EGL integration, it should be possible to create an EGL window and ask for an ES 3.x context instead of an OpenGL one. There are a lot of similarities between ES 3 and OpenGL 3.3, and we already have workarounds for its quirks, as they're the same ones we use for macOS.
While I don’t have very high hopes for Android, WebGL2 may be another story.
If such a feature is added to the roadmap, it would probably be for 2.3 though.
RenderSystem::startGpuDebuggerFrameCapture and RenderSystem::endGpuDebuggerFrameCapture were added to programmatically capture a RenderDoc frame. This was necessary for RenderDoc to work with headless rendering, but it works with all APIs on most platforms.
Users can call RenderSystem::getRenderDocApi if they wish to perform more advanced manipulation:

```cpp
if( rs->loadRenderDocApi() )
{
    RENDERDOC_API_1_4_1 *apiHandle = rs->getRenderDocApi();
    // ... advanced RenderDoc manipulation ...
}
```
About the 2.2.5 release
For a full list of changes, see the GitHub release.
Source and SDK is in the download page.
Discussion in forum thread.
As a small Christmas present, I want to show you how easy it has become to make Augmented Reality yourself thanks to Ogre and OpenCV. You should know that my other interest, besides graphics, lies with Computer Vision.
The demo will not rely on proprietary solutions like ARCore or ARKit – all will be done with open-source code that you can inspect and learn from. But let's start with a teaser:
This demo can be put together in less than 50 lines of code, thanks to the OpenCV ovis module that glues Ogre and OpenCV together. Next, I will briefly walk you through the steps that are needed:
First, we have to capture some images to cover the Reality part in AR. Here, OpenCV provides a unified API that you can use for your webcam, industrial cameras or a pre-recorded video:
```python
import cv2 as cv

imsize = (1280, 720)  # the resolution to use
cap = cv.VideoCapture(0)
cap.set(cv.CAP_PROP_FRAME_WIDTH, imsize[0])
cap.set(cv.CAP_PROP_FRAME_HEIGHT, imsize[1])
ok, img = cap.read()  # grab an image
```
Then, we have to set up the most crucial part of AR: camera tracking. For this, we will use ArUco markers – the QR-like quads that surround Sinbad. To no surprise, OpenCV comes with this vision algorithm:
```python
adict = cv.aruco.Dictionary_get(cv.aruco.DICT_4X4_50)
# extract 2D marker-corners from image
corners, ids = cv.aruco.detectMarkers(img, adict)[:2]
# convert corners to 3D transformations [R|t]
rvecs, tvecs = cv.aruco.estimatePoseSingleMarkers(corners, 5, K, None)[:2]
```
If you look closely, you see that we are using an undefined variable "K" – this is the intrinsic matrix specific to your camera. If you want precise results, you should calibrate your camera to measure it, for instance using the web-service at calibdb.net, which will also just give you the parameters if your camera is already known.
However, if you just want to continue, you can use the following values that should roughly match any webcam at 1280x720px:
```python
import numpy as np

K = np.array(((1000, 0, 640), (0, 1000, 360), (0, 0, 1.)))
```
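As a quick sanity check of what K does (not part of the original demo): multiplying a camera-space point by the intrinsic matrix and dividing by depth yields pixel coordinates.

```python
import numpy as np

K = np.array(((1000, 0, 640), (0, 1000, 360), (0, 0, 1.)))

def project(K, p):
    """Project a camera-space 3D point to pixel coordinates:
    multiply by the intrinsic matrix, then divide by depth."""
    uvw = K @ np.asarray(p, dtype=float)
    return uvw[:2] / uvw[2]

print(project(K, (0, 0, 5)))  # [640. 360.] – the principal point
```

A point 5 units straight ahead lands exactly on the image centre (640, 360), which is where the demo places Sinbad.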
So now we have the image and the corresponding 3D transformation of the camera – only the Augmented part is missing. This is where Ogre/ovis comes into play:
```python
# reference the 3D mesh resources
cv.ovis.addResourceLocation("packs/Sinbad.zip")

# create an Ogre window for rendering
win = cv.ovis.createWindow("OgreWindow", imsize, flags=cv.ovis.WINDOW_SCENE_AA)
win.setBackground(img)

# make Ogre renderings match your camera images
win.setCameraIntrinsics(K, imsize)

# create the virtual scene, consisting of Sinbad and a light
win.createEntity("figure", "Sinbad.mesh", tvec=(0, 0, 5), rot=(1.57, 0, 0))
win.createLightEntity("sun", tvec=(0, 0, 100))

# position the camera according to the first marker detected
win.setCameraPose(tvecs.ravel(), rvecs.ravel(), invert=True)
```
- You can find the full source-code for the above steps, nicely combined into a main-loop over at OpenCV.
- Alternatively, see this Ogre Sample which works with both Ogre and OpenCV installed via pip
To record the results, you can use win.getScreenshot() and dump it into a cv.VideoWriter – contrary to the name, this works in real-time.
Extending the above code to use cv.aruco.GridBoard as done in the teaser video is left as an exercise for the reader, as this is more on the OpenCV side.
Also, if you would rather use ARCore on Android, there is a sample showing how to use the SurfaceTexture with Ogre. Using this, you should be able to modify the hello_ar_java sample from the arcore-sdk to use Ogre.