News
Ogre 13.2 released
Ogre 13.2.0 was just released. This “holiday release” contains mostly bugfixes, but there are also some notable additions.
Vulkan RenderSystem
The elephant in the room is the addition of the Vulkan RenderSystem – as was announced earlier. Contrary to my expectations, progress was quite smooth: all basic features are already in place, and the RTSS and Terrain components support Vulkan too. Therefore, the Vulkan RenderSystem is now tagged [BETA] instead of [EXPERIMENTAL]. Still, some more advanced features are currently missing.
Depending on your usage, you might already be able to port your application – at the least, you can start familiarizing yourself with it. There are two caveats though.
Buffer updates
Currently Ogre does not try to hide the asynchronicity of Vulkan from the user and rather lets you run into rendering glitches.
The general idea of Vulkan is that you have multiple images in flight to keep the GPU busy. This means that we submit the work for the next frame without waiting for the current frame to finish.
This hits you as soon as you try to update vertex data: if the GPU is not yet done processing it, you will get rendering glitches. In particular, your rendering will be broken if you update the data each frame.
The solution is to either implement triple-buffering yourself, or to discard the buffer contents on update, which will give you fresh memory on Vulkan. The Ogre internals have been updated accordingly, which should ideally improve performance on all other rendersystems as well.
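In practice, such an update can look like the sketch below (a minimal sketch, assuming a dynamic vertex buffer; not the actual Ogre internals). Locking with HBL_DISCARD tells Ogre to throw the old contents away, so on Vulkan the lock can be backed by fresh memory while the GPU is still reading the previous frame's data:

#include <OgreHardwareVertexBuffer.h>
#include <cstring>

// Minimal sketch of a glitch-free per-frame update: HBL_DISCARD discards the
// old contents, so Vulkan may back the lock with new memory while the GPU is
// still reading the data submitted for the previous frame.
void updateDynamicBuffer(const Ogre::HardwareVertexBufferSharedPtr& vbuf,
                         const void* src, size_t bytes)
{
    void* dst = vbuf->lock(0, bytes, Ogre::HardwareBuffer::HBL_DISCARD);
    std::memcpy(dst, src, bytes);
    vbuf->unlock();
}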
Rendering interruption
Closely related to the above is rendering interruption. This means that after the first Renderable has been submitted for the current frame (i.e. RenderSystem::_render has been called), you decide to load another texture or update a buffer.
As we don't know whether the update affects the current frame, we would need to interrupt the rendering, do the upload and continue where we left off. While certainly possible, we simply throw an exception right now. Typically, it is much easier to schedule your buffer updates before rendering kicks off than to reorder things mid-flight – and it is faster too.
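In code, that simply means doing all resource work before kicking off the frame; a sketch (reusing the hypothetical updateDynamicBuffer() helper from above):

#include <Ogre.h>

// Sketch: do resource work ahead of the frame. Everything up to
// renderOneFrame() is safe; an upload after the first RenderSystem::_render
// call of the frame would currently throw on Vulkan.
void advanceFrame(Ogre::Root& root, const Ogre::TexturePtr& tex)
{
    tex->load();            // uploads happen here, before rendering starts
    root.renderOneFrame();  // from here on, no buffer/texture updates
}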
Using GLSLang with GL3+
As the RTSS was extended to generate SPIR-V compatible GLSL for Vulkan, it was natural to enable this path for GL3+ as well. If the gl_spirv profile is supported, you can now call
mShaderGenerator->setTargetLanguage("glslang");
to use the glslang reference compiler instead of whatever your GL driver would do.
HiDPI support in Overlays
Some dangling threads in Overlays were fixed and you can now call
Ogre::OverlayManager::getSingleton().setPixelRatio(appContext.getDisplayDPI()/96);
which will scale up the UI appropriately and generate higher-resolution fonts. The magic 96 refers to 96 DPI, which is the common setting for monitors up to Full HD.
Depth of Field Sample
I have updated the dormant DoF compositor code we had in Ogre to actually do something.
The sample builds upon code by forum user DWORD that has been floating around the forums, and implements the following technique by Thorsten Scheuermann.
Upcoming Global Illumination improvements in Ogre-Next
Note: This work is being sponsored by Open Source Robotics for the Ignition Project
Ogre-Next offers a wide range of Global Illumination solutions. Some are better than others, but VCT (Voxel Cone Tracing) stands out for its high quality at an acceptable performance cost (on high-end GPUs).
However, the main problem with our current VCT implementation is that it’s hard to use and needs a lot of manual tweaking:
- The voxelization process is relatively slow. 10k triangles can take 10ms to voxelize on a Radeon RX 6800 XT, which makes it unsuitable for realtime voxelization (only load time or offline)
- Large scenes / outdoors need a very large resolution (i.e. 1024x32x1024) or suffer large quality degradation
- It works best for static geometry in a relatively small scene, like a room or a house
If your game is divided into small sections that are paged in/out (i.e. PS1-era games like Resident Evil, Final Fantasy 7/8/9, Grim Fandango), VCT would be ideal. But in the current generation of games, with continuous movement over large areas, VCT falls short – unless you pull an insane amount of tricks.
So we’re looking to improve this and that’s where our new technique Cascaded Image VCT (CIVCT… it wouldn’t be a graphics technique if we didn’t come up with a long acronym) comes in:
- Voxelizes much faster (10x to 100x), enabling real time voxelization. Right now we’re focusing on static meshes but it should be possible to support dynamic stuff as well
- User friendly
- Works out of the box
- Quality settings easy to understand
- Adapts to many conditions (indoor, outdoor, small, large scenes)
That would be pretty much the holy grail of real time GI.
Step 1: Image Voxelizer
Our current VctVoxelizer is triangle-based: it feeds on triangles and outputs 3D voxels (Albedo + Normal + Emissive). These voxels are then fed to VctLighting to produce the final GI result:
Right now, VctVoxelizer voxelizes the entire scene at once. This is slow.
The Image Voxelizer is image-based and consists of two steps:
- Reuse VctVoxelizer to voxelize each mesh separately, offline or during load time, and save the results to disk. At 64x64x64, a mesh needs between 2MB and 3MB of VRAM (and disk space), depending on whether the object contains emissive materials. Some meshes require much lower resolution, though. This is user-tweakable: you’d want to dedicate more resolution to important/big meshes and less to unimportant ones.
- This may sound like a lot, but bear in mind it is a fixed cost, independent of triangle count: a mesh with a million triangles and a mesh with 10,000 triangles will both occupy the same amount of VRAM.
- Objects are rarely cubic. A desk, for example, is often wider than it is tall or deep, hence it may just need 64x32x32, which is between 0.5MB and 0.75MB.
- Each frame, we stitch these per-mesh 3D voxels together into a scene voxel via trilinear interpolation. This is very fast.
This stitching step is fast thanks to Vulkan, which allows us to dynamically index an arbitrary number of bound textures in a single compute dispatch.
OpenGL, Direct3D 11 & Metal* will also support this feature, but may see degraded performance, as we must perform the voxelization in multiple passes. How much of a degradation depends on the API; e.g. OpenGL will actually let us dynamically index the textures, but has a hard limit on how many textures we can bind per pass.
(*) I’m not sure if Metal supports dynamic texture indexing or not. Needs checking.
Therefore, the pipeline changed as follows: the per-mesh voxelization step is done offline or at loading time, while the scene stitching step can be done every frame, when the camera moves too much, or when an object is moved.
Downside
There is a downside to this (aside from VRAM usage): we need to voxelize each mesh + material combo. That means if you have one mesh and want to apply different materials to it, we need to spend 2-3MB per material.
This is rarely a problem though, because most meshes only use one set of materials. And for those that use more, you may be able to get away with baking a single representative material set (e.g. plain white); the end result after calculating GI may not vary enough to be worth the extra VRAM cost.
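As a quick sanity check of the quoted figures (my arithmetic; the 8 vs. 12 bytes-per-voxel split between the non-emissive and emissive cases is an assumption, not from the post):

#include <cstdio>

// Sanity check of the VRAM figures quoted above. The 8/12 bytes-per-voxel
// split (albedo+normal vs. albedo+normal+emissive) is an assumption.
int main()
{
    const double MiB  = 1024.0 * 1024.0;
    const double full = 64.0 * 64.0 * 64.0; // 262,144 voxels
    const double desk = 64.0 * 32.0 * 32.0; //  65,536 voxels (the desk example)
    std::printf("64x64x64: %.2f - %.2f MB\n", full * 8 / MiB, full * 12 / MiB);
    std::printf("64x32x32: %.2f - %.2f MB\n", desk * 8 / MiB, desk * 12 / MiB);
    // prints 2.00 - 3.00 MB and 0.50 - 0.75 MB, matching the numbers above
    return 0;
}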
Non-researched solutions:
- For simple colour swaps (e.g. RTS games, FPS multiplayer teams), this should be possible to work around by adding a single multiplier value, rather than voxelizing the mesh per material
- It should be possible to apply some sort of BC1-like compression, given that the mesh opaqueness and shape stay the same. The only thing that changes is colour; thus a delta colour compression scheme could work well
Trivia
At first, I panicked a little while developing the Image Voxelizer, because the initial quality was far inferior to that of the original voxelizer.
The problem was that the original VCT is a ‘perfect’ voxelization: if a triangle touches a single voxel, that voxel adopts the colour of the triangle, while its neighbouring voxels remain empty. Simple.
That results in a ‘thin’ voxel result.
However, in IVCT, the per-mesh voxels are interpolated into a scene voxel that does not match in resolution and may be arbitrarily offset by subpixels; it isn’t aligned either.
The result is that certain voxels end up with 0.1, 0.2 … 0.9 of the influence of the mesh. This generates ‘fatter’ voxels. In 2D we would say that the image has a halo around the contours.
Once I understood what was going on, I tweaked the math to thin out these voxels, by looking at the alpha value of the interpolated mesh and applying an exponential curve to get rid of the halo.
And now it looks very close to the reference VCT implementation!
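Conceptually, the fix amounts to remapping the interpolated alpha through a steep falloff curve; a toy illustration of the idea (the function and exponent are made up, not Ogre-Next's actual shader math):

#include <cmath>

// Toy illustration of the halo fix: interpolation leaves boundary voxels with
// only fractional influence (0.1 .. 0.9). Remapping alpha through a steep
// curve pushes those faint halo voxels toward zero, while nearly-solid voxels
// stay close to 1. The exponent is a made-up tuning constant.
float thinOutAlpha(float alpha, float exponent = 4.0f)
{
    return std::pow(alpha, exponent);
}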
Step 2: Row Translation
We want to use cascades concentric around the camera (a concept similar to Cascaded Shadow Maps; in Ogre we call it Parallel Split Shadow Maps, but it’s the same thing).
That means that once the camera has moved far enough, we must move the cascades along with it and re-voxelize.
But we don’t need to voxelize the entire thing from scratch. We can translate everything by 1 voxel, and then revoxelize the new row:
Given that we only need to partially update the voxels after camera movement, supporting cascades becomes very fast.
Right now this step is handled by VctImageVoxelizer::partialBuild
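Conceptually, the partial build boils down to scrolling the grid by one voxel and re-voxelizing only the slice that scrolled in; a CPU-side toy version of the idea (the real implementation runs in compute shaders):

#include <cstring>
#include <vector>

struct Voxel { float r, g, b, a; };

// CPU-side toy of the cascade scroll: shift a WxHxD voxel grid one voxel
// along -X (as if the camera moved towards +X) and clear the slice that
// scrolled into view, which is the only part that must be re-voxelized.
void scrollOneVoxelX(std::vector<Voxel>& grid, int W, int H, int D)
{
    for (int z = 0; z < D; ++z)
        for (int y = 0; y < H; ++y)
        {
            Voxel* row = &grid[(size_t)(z * H + y) * W];
            std::memmove(row, row + 1, sizeof(Voxel) * (W - 1));
            row[W - 1] = Voxel{}; // newly exposed voxel: re-voxelize this
        }
}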
Step 3: Cascades
This step is currently a work in progress. The implementation is planned to have N cascades (with N user-defined). During cone tracing, after we reach the end of one cascade, we move on to the next one, which covers more ground but at a coarser resolution, hence lower quality.
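A toy sketch of how such a lookup could work (the extents and the doubling factor are illustrative assumptions, not the actual Ogre-Next implementation):

#include <cstdio>

// Toy sketch of cascade selection: N concentric volumes around the camera,
// each (by assumption) covering twice the extent of the previous one at the
// same voxel resolution, i.e. half the voxel density.
int pickCascade(float distance, float cascade0HalfExtent, int numCascades)
{
    float halfExtent = cascade0HalfExtent;
    for (int i = 0; i < numCascades; ++i, halfExtent *= 2.0f)
        if (distance < halfExtent)
            return i; // lower index = finer voxels = higher quality
    return -1;        // beyond the last cascade: no voxel GI available
}

int main()
{
    // with cascade 0 covering 8m around the camera and 3 cascades total,
    // a point 20m away lands in cascade 2 (which covers 32m)
    std::printf("%d\n", pickCascade(20.0f, 8.0f, 3));
    return 0;
}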
Wait, isn’t this what UE5’s Lumen does?
AFAIK Lumen is also a Voxel Cone Tracer. Therefore it’s normal there will be similarities. I don’t know if they use cascades though.
As far as I’ve read, Lumen uses an entirely different approach to voxelization which involves rasterizing from all 6 sides. This makes it very user-hostile, as meshes must be broken down into individual components (e.g. instead of submitting a house, each wall, column, floor and ceiling must be its own mesh).
With Ogre-Next, you just provide the mesh and it will just work (although with manual tuning you could achieve greater memory savings, e.g. if the columns are split and voxelized separately).
Wait, isn’t this what Godot does?
Well, I was involved in SDFGI, advising Juan on the subject, so of course there are a lot of similarities.
The main difference is that Godot generates a cascade of SDFs (signed distance fields), while Ogre-Next is generating a cascade of voxels.
This allows Godot to render on slower GPUs, and it is especially better at specular reflections, but at the expense of accuracy (there’s a significant visual difference when toggling between Godot’s own raw VCT implementation and its SDFGI, though they both look pretty). I believe these quality issues could be improved in the future.
Having an SDF of the scene also offers interesting potential features such as ‘contact shadows’ in the distance.
Ogre-Next may generate an SDF as well in the future, as it offers many potential features (e.g. contact shadows) and speed improvements. Please understand that VCT is an actively researched topic, and we are all trying and exploring different methods to see what works best and under which conditions.
The underlying techniques aren’t new; what made them practical are the new APIs and the raw power of the current generation of GPUs, which can keep up with them (although the current GPU shortage might delay the widespread adoption of these techniques).
Since this technique will be used in Ignition Gazebo for simulations, I had to err on the side of accuracy.
When is it coming?
CIVCT isn’t done yet, but hopefully it should be ready in 1-2 more weekends (I can only work on this during the weekends). Maybe 3? (I hope not!) I want to release Ogre-Next 2.3 RC0 in the meantime, and make a proper Ogre-Next 2.3 release once CIVCT is ready.
The reason it’s taking so little time is that we’re improving on our existing technology and reusing lots of code. We’re just changing a few details to make it faster and more user-friendly now that Vulkan gives us that freedom (but again, we plan on supporting this feature on all our API backends).
These improvements currently live in the vct-image branch; there is no sample showcasing them yet, as the work is still in progress.
Btw! Remember there is an active poll to decide on the Ogre-Next 2.3 name. Don’t forget to vote!
Cast your vote to decide on the Ogre-Next 2.3 name!
After the first round of candidates for the Ogre-Next 2.3 name, we’ve reached the final round!
Please cast your vote to decide the name for Ogre-Next 2.3 (and yes, an official release is coming soon!). Whichever name wins will become the new name of Ogre-Next 2.3.
Vulkan RenderSystem in Ogre 13
The Vulkan RenderSystem backport from Ogre-Next has now landed in the master branch and will be available with Ogre 13.2. See the screenshot below for the SampleBrowser running on Vulkan.
The code was simplified during backporting, which shows in the size reduction from ~33k LOC in Ogre-Next to the ~9k LOC that are now in Ogre.
The current implementation pretends to have Fixed Function capabilities, which allows operating with one default shader – similarly to what I did for Metal. This shader only supports using a single 2D texture without lighting. E.g. vertex color is not supported. This is why the text is white instead of black in the screenshot above.
Nevertheless, it already runs on Linux, Windows and Android.
Proper lighting and texturing support will require some adaptations to the GLSL writer in the RTSS, as Vulkan GLSL is slightly different from OpenGL GLSL. This, and the other currently missing features, will hopefully come together during the 13.x development cycle. If you are particularly keen on using Vulkan, consider giving a hand.
Right now, the main goal is to get Vulkan feature-complete first, so don’t expect it to outperform any of the other RenderSystems. Due to being incomplete, the Vulkan RenderSystem is tagged EXPERIMENTAL.
Ogre ecosystem roundup #8
Following the last post about what is going on around Ogre, here is another update. With the Ogre 13.1 release, mainly the usability of Ogre was improved, with the following additions.
Ogre 13.1 release
The per-pixel RTSS stage gained support for two-sided lighting. This is useful if you want a plane correctly lit from both sides, or for transparency effects, as shown below:
Furthermore, PCF16 filtering support was added to the PSSM RTSS stage. This gives you softer shadows at the cost of 4x the texture lookups. The images below show crops from the ShaderSystem sample at 200%, highlighting the effect.
blender2ogre improved even further
Thanks to the continued work by Guillermo “sercero” Ojea Quintana, blender2ogre gained some exciting new features.
The first is support for specifying Static and Instanced geometry like this. You might wonder whether you should be using these and, if yes, which variant. Therefore, he also collected the respective documentation, which is available here.
The second notable feature is support for .mesh import, which might come in handy if you are modding some Ogre-based game or have just lost the source .blend file. This feature is based on the respective code found in the Kenshi Blender Plugin (which in turn is based on the Torchlight plugin).
Then, old_man_auz chimed in and fixed some bugs when exporting to Ogre-Next, while also cleaning up the codebase and improving documentation.
Finally, yours truly added CI unit-tests, which make contributing to blender2ogre easier.
OpenAL EFX support in ogre-audiovideo
Again contributed by sercero are some important additions to the audio part of the ogre-audiovideo project, which drastically improve its usability.
The first one is that you no longer need boost to enable threading. OgreOggSound will now follow whatever Ogre is configured with.
The second one is being able to use EFX effects with openal-soft instead of the long-dead creative implementation. This enables effects like reverb or bandpass filters.
Read more in the release notes. This release, too, was done by sercero, who kindly took on the burden of co-maintaining the project.
Ogre 13 released
We just tagged the Ogre 13 release, making it the new current and recommended version. We would advise you to update wherever possible, to benefit from all the fixes and improvements that made their way into the new release.
This release represents 2.5 years of work by various contributors when compared to the previous major release, 1.12. Compared to the last minor release (1.12.12), however, we are only talking about 4 months. Here, you will mainly find bugfixes and the architectural changes that justify the version number bump.
For source code and pre-compiled SDKs, see the downloads page.
Ogre ecosystem roundup #7
Following the last post about what is going on around Ogre, here is another update. With the Ogre 1.12.12 release, mainly the usability of Ogre was improved, with the following additions.
Table of Contents
- Ogre 1.12.12 release
- Python SDK as PIP package
- Improved blender2ogre
- .scene support in ogre-meshviewer
Ogre 1.12.12 release
The last 1.12 release had some serious regressions in D3D9 and GL1; therefore, I scheduled one more release in the 1.12.x series.
Updated release notes
As the Ogre 1.12 series was an LTS release, many important features landed after the initial 1.12.0 release. To take this into account and to give an overview of which version you need, the “New and Noteworthy” document was updated with the post-.0 additions (search for “12.” to quickly skim through them).
Nevertheless, there are also some new features in the 1.12.12 release itself:
Cubemap support in compositors
Compositor render targets can now be cubemaps, by adding the cubic keyword to the texture declaration – just like with material texture_units.
To really take advantage of this, you can now also specify the camera to use in render_scene passes. This way, any camera in your scene can be used as an environment probe for cube mapping.
Finally, to really avoid touching any C++, there is now the align_to_face keyword, which automatically orients the given camera to point at the targeted cubemap face.
See this commit on how these things can simplify your code and this for further documentation.
Terrain Component in Bindings
Thanks to a contribution by Natan Fernandes, there is now initial support for the Terrain Component in our C#/Java/Python bindings.
Python SDK as PIP package
Python programmers can now obtain an Ogre SDK directly from PyPI, as they are used to, with:
pip install ogre-python
Just like the MSVC and Android SDKs, it includes the assimp plugin, which allows loading almost any mesh format, and ImGui, so you can create a UI in a breeze.
For now only Python 3.8 is covered – but on all platforms. This means you can now have a pre-compiled package for OSX and Linux too.
Improved blender2ogre
Thanks to some great work by Guillermo “sercero” Ojea Quintana, the blender2ogre export settings are much more user friendly now:
On top of having some context about what an option might do, the exporter can now also let Ogre generate the LOD levels. This gives you the choice to either:
- Iteratively apply blender “decimate” as in previous releases. This will generate one .mesh file per LOD level, but may result in a visually better LOD.
- Use the Ogre MeshLOD Component. This will store all LOD levels in one .mesh file, only creating a separate index buffer per LOD. This greatly reduces storage compared to the above method.
SceneNode animations
But he did not stop there: blender2ogre now also exports NodeAnimationTrack-based animations. To this end, it follows the format introduced by EasyOgreExporter, so both exporters are compatible with each other.
To formalise this, he even extended the .scene type definition, so other exporters implementing this function can validate their output.
Needless to say, he also extended the DotScene Plugin shipped with 1.12.12 to actually load these animations.
.scene support in ogre-meshviewer
Picking up the work by Guillermo, I extended ogre-meshviewer to load .scene files – in addition to .mesh files and whatever formats assimp knows about.
However, for now it will merely display the scene – there are no inspection tools yet.
Ogre 1.12.11 released
Ogre 1.12.11 was just released. This is the last scheduled release for the 1.12 series and contains many bugfixes and new features. The smaller ones are:
- Gamepad Support in OgreBites
- Restructured GPU Program Script documentation
- Added Camera::setSortMode to account for rendering 2D layers instead of 3D geometry (as with 2D games)
The more notable new features will be presented in more detail in the following.
Table of Contents
- Support for animated particles
- Software RenderSystem
- Transparent headless mode on Linux
- Improved Bullet-Ogre integration
Support for animated particles
Support for animating particles via Sprite-sheet textures was added. This enables a whole new class of effects with vanilla Ogre that previously required using particle-universe.
In the screenshots above, you see the Smoke demo, which was updated to showcase the new feature. However, the screenshots do not do the feature full justice. If you are interested, it is best to download the SampleBrowser build and see the effect in action.
See this post (targeting blender) for an overview of the general technique.
For running the animation, the new TextureAnimator Affector was added.
While at it, I fixed several bugs deep inside Ogre that prevented ParticleSystems from being properly treated as shadow casters. Now you can call setCastShadows as with any other entity and things will just work (see the last image).
Software RenderSystem
Did you ever want to launch a Python Interpreter from your Shader or make HTTP requests per-pixel? Well, the wait is finally over – with the new TinyRenderSystem in Ogre 1.12.11 you can.
This render system is based on the tinyrenderer project, which implements a full software rasterizer in ~500 LOC. If you are curious how OpenGL works internally, I highly recommend taking a closer look.
For Ogre, this had to be roughly doubled to ~1350 LOC, but compared to the Vulkan RenderSystem from 2.x at ~24000 LOC, it is still tiny (note that this is already after stripping down the v2.3 implementation).
So what do we gain by having that RenderSystem? First, it is a nice stress test for Ogre, as this is a backend implemented in Ogre itself: each Buffer uses the DefaultBuffer implementation, and each Texture and RenderWindow is backed by an Ogre::Image.
This also makes it a great fit for offline conversion tools that want full access to the resources without needing to actually touch the GPU.
Next, this is really useful if you want to unit-test an Ogre-based application. Typically, you would need to set up a headless rendering server (more on that below) merely to check whether your triangle is centered correctly in the frame. This is super easy now.
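A headless test setup could then look roughly like the sketch below (the plugin name is an assumption; query Root::getAvailableRenderers() for the render system that is actually registered):

#include <Ogre.h>

// Sketch of a headless unit-test setup using the software backend. The
// "RenderSystem_Tiny" plugin name is an assumption; check the list returned
// by getAvailableRenderers() for what is actually registered.
int main()
{
    Ogre::Root root("", "", "test.log"); // no plugins.cfg / ogre.cfg
    root.loadPlugin("RenderSystem_Tiny");
    root.setRenderSystem(root.getAvailableRenderers().front());
    root.initialise(false);
    // ... create a window and scene, render a frame, then assert on the
    // backing Ogre::Image instead of needing a GPU or display server
    return 0;
}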
The screenshots on top, taken from the SampleBrowser, show you how far you can actually get with the RenderSystem. Note that there is no alpha blending, no mipmapping, no user-provided shaders and generally no advanced configuration of the rasterization. So if you are after full-featured software rasterization, you are better off with OpenGL on MESA/llvmpipe.
However, if you want to experiment with the rendering pipeline without being bound by the OpenGL API, this is the way to go. You actually can do the HTTP requests per pixel ;). Also, for creating a new RenderSystem, this is the least amount of reference code to read.
Transparent headless mode on Linux
Rendering on a remote machine over SSH just got easier! Previously, Ogre required a running X11 instance, which can be a hassle to come by on a machine without any monitors attached (e.g. a GPU server).
Instead of bailing out, Ogre will now merely issue a warning and transparently fall-back to a PBuffer based offscreen window. See this for the technical background.
To be able to do so, Ogre must use EGL instead of GLX, which requires compiling with OGRE_GLSUPPORT_USE_EGL=1. With 1.13, we will be using EGL instead of GLX by default.
Compared with the EGL support recently added in v2.2.5, the implementation is much simpler and does not provide any configuration options – but on the plus side, the flag above is the only switch you need to toggle to get it running.
Improved Bullet-Ogre integration
I added a simplified API to the btogre project.
https://github.com/OGRECave/btogre
If you want to have physics on top of your rendering, it is now as simple as:
auto mDynWorld = new BtOgre::DynamicsWorld(gravity_vec);
mDynWorld->addRigidBody(weight, yourEntity, BtOgre::CT_SPHERE);
where (as in Bullet) a weight of 0 means a static object. Now you can call
mDynWorld->getBtWorld()->stepSimulation(evt.timeSinceLastFrame);
and your objects will interact with each other. Of course, if you need more control, the underlying Bullet types are still fully exposed.
Oh, and python bindings are now available too.
Ogre 2.2.5 Cerberus Released and EGL Headless support!
This is a special release! Most Ogre 2.1.x and 2.2.x releases only contain maintenance fixes and no new features, so efforts to port from 2.2.4 to 2.2.5 should be minimal. And this still holds true.
But there is a new feature!
This feature was sponsored by Open Source Robotics Corporation and was written to be used by the Ignition Project
EGL Headless
OpenGL traditionally requires a window; without one, OpenGL cannot be used. This implies either X11 or Wayland must be installed and running, which can be a problem when running on cloud servers, VMs, embedded devices, and similar environments.
Direct3D11 doesn’t have this flaw, but it does not run on Linux.
Vulkan also doesn’t have this flaw, but its support is new (coming in Ogre 2.3) and is not yet robust and tested enough. Additionally SW implementations have yet to catch up.
Ogre can use the NULL RenderSystem to run as a server without a window, however this doesn’t actually render anything. It’s only useful to pretend there is a screen so that apps (mostly games) can reuse and turn client code into server code. It’s also useful for mesh manipulation and conversion tools which need to read Ogre meshes but don’t actually render anything.
Fortunately, Khronos introduced a workaround with EGL + PBuffers (not to be confused with 2000-era PBuffers which competed against FBOs) where an offscreen dummy ‘window’ could be created to satisfy OpenGL’s shenanigans.
Because PBuffer support in some EGL drivers is not well tested (e.g. sRGB support was only added in EGL 1.5, which Mesa does not support), Ogre creates a 1×1 PBuffer alongside the context and internally uses an FBO for the ‘Window’ class. By tying a dummy 1×1 PBuffer to the GL context, OpenGL context creation becomes conceptually free of window interfaces, like in D3D11 and Vulkan.
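For reference, the underlying EGL pattern looks roughly like this (a generic sketch of the technique, not Ogre's actual implementation):

#include <EGL/egl.h>

// Generic sketch of the trick described above: a 1x1 PBuffer satisfies EGL's
// surface requirement, and all real rendering then goes to internal FBOs.
EGLContext createHeadlessGLContext()
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, nullptr, nullptr);

    const EGLint cfgAttribs[] = { EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
                                  EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
                                  EGL_NONE };
    EGLConfig cfg;
    EGLint numCfgs = 0;
    eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &numCfgs);

    const EGLint pbAttribs[] = { EGL_WIDTH, 1, EGL_HEIGHT, 1, EGL_NONE };
    EGLSurface pbuffer = eglCreatePbufferSurface(dpy, cfg, pbAttribs);

    eglBindAPI(EGL_OPENGL_API);
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, nullptr);
    eglMakeCurrent(dpy, pbuffer, pbuffer, ctx); // a context, but no window
    return ctx;
}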
Switchable interfaces: GLX and EGL
When Ogre is built with both OGRE_GLSUPPORT_USE_GLX and OGRE_GLSUPPORT_USE_EGL_HEADLESS, toggling between GLX and EGL can be done at runtime.
This is how it looks:
Originally the GLX interface will be selected:
But after switching to EGL Headless, only a couple of options appear (since settings like Resolution, VSync and Full Screen no longer make sense):
And like in D3D11/Vulkan, it is possible to select the GPU. /dev/dri/card0 is a dedicated AMD Radeon HD 7770 GPU, /dev/dri/card1 is a dedicated NVIDIA GeForce 1060. Yes, they can coexist:
NVIDIA seems to expose two “devices” belonging to the same card: ‘EGL_NV_device_cuda … #0’ is a headless device, while ‘EGL_EXT_device_drm #1’ will complain that it can’t run in headless mode – it seems to be meant for use with GLX.
‘EGL_EXT_device_drm #2’ is the AMD card, and EGL_MESA_device_software is SW emulation.
We chose not to include the marketing names in device selection because Linux drivers (proprietary and open source) tend to change the exposed OpenGL marketing labels quite often in subtle ways. This could frequently break config settings (i.e. the saved device can no longer be found after a driver upgrade), increasing the maintenance burden of a feature that is meant for automated testing and similar uses.
Complete X11 independence
Users who need to be completely free of X11 dependencies can build with OGRE_GLSUPPORT_USE_EGL_HEADLESS + OGRE_CONFIG_UNIX_NO_X11.
This will force-disable OGRE_GLSUPPORT_USE_GLX as it is incompatible. GLX requires X11.
Headless SW Rasterization
It is possible to select the Mesa SW rasterization device. So even if there is no HW support, you can still use SW.
Please note Mesa SW at the time of writing supports up to OpenGL 3.3, which is the bare minimum to run Ogre. Some functionality may not be available.
Update: it has been brought to my attention that llvmpipe (aka SW emulation) supports OpenGL 4.5 since Mesa 20.3.0.
More info
This new feature seems to be very stable and has been tested on NVIDIA, AMD (Mesa drivers) and Intel.
Nonetheless, it is disabled by default (i.e. OGRE_GLSUPPORT_USE_EGL_HEADLESS is turned off), which means it should not affect users who do not care about headless support.
For more details, please see the README of the EglHeadless tutorial.
Running the EglHeadless sample should result in a CLI interface:
OpenGL ES 3.x may be around the corner?
With EGL integration, it should be possible to create an EGL window and ask for an ES 3.x context instead of an OpenGL one. There are a lot of similarities between ES 3 and OpenGL 3.3, and we already have workarounds for the differences, as they’re the same ones we use for macOS.
While I don’t have very high hopes for Android, WebGL2 may be another story.
If such a feature is added to the roadmap, it would probably be for 2.3, though.
RenderDoc integration
Functions RenderSystem::startGpuDebuggerFrameCapture and RenderSystem::endGpuDebuggerFrameCapture were added to programmatically capture a RenderDoc frame. This was necessary for RenderDoc to work with headless rendering, but it works with all APIs on most platforms.
Users can call RenderSystem::getRenderDocApi if they wish to perform more advanced manipulation:
if( rs->loadRenderDocApi() )
{
    RENDERDOC_API_1_4_1 *apiHandle = rs->getRenderDocApi();
    // ... advanced manipulation through the raw RenderDoc API
}
About the 2.2.5 release
For a full list of changes see the Github release
Source and SDK is in the download page.
Discussion in forum thread.
Thanks again to Open Source Robotics Corporation for sponsoring this feature for their Ignition Project
Ogre 2.1.1 Baldur Released!
This is a maintenance release; efforts to port from 2.1.0 to 2.1.1 should be minimal.
For a full list of changes see the Github release
Source and SDK is in the download page.
Discussion in forum thread.