WarehouseJims MOGRE HOWTO 2        

This is a HOWTO page for Mogre by user WarehouseJim (created in 2011). It follows on from WarehouseJims MOGRE HOWTO from 2010.


Unsorted

General

UI Inspiration

http://www.componentart.com/products/dv/ - has some quite cool rich client style web GUIs.

3D Monitors / Glasses /...

  • Anaglyph is the red-green-glasses style of 3D image. I assume it would be easy to implement: 2 cameras and a publicly known algorithm for merging them (see the sketch after this list).
  • The problem with 3D shutter glasses is that monitors are typically only 60Hz -> a maximum frame rate of 30fps per eye.
  • http://www.iz3d.com/ - 3D graphics driver stuff, but I think you could just do it manually with Ogre??
  • A 23.6" full HD 120Hz monitor is currently (2010/10/05) £255 -> not that expensive.
  • NVidia seems to be more into the shutter glasses than ATI.  NVidia kit with shutter glasses and transmitter costs ~£120 (2010/10/05) with an extra set of glasses costing ~£60.  They work with 60Hz and 120Hz monitors.
  • I assume that if you use shutter glasses you really need to be able to do VSync - although I suspect it would still work with lower / uneven frame rates, as a signal is just sent when the frame has changed. -> I assume your software has to signal the NVidia kit when you change viewport.
  • http://www.ogre3d.org/forums/viewtopic.php?f=1&t=53978 - stuff about trying to use shutter glasses with a projector.  States that he has found a lot of info on the use of 3D with ogre, but has encountered some problems.
  • http://developer.nvidia.com/object/nvision08-stereo.html - suggests that using shutters is a bit more than just sending the images sequentially - they can be sent in one go (p10) - but later it seems to go back to being simple, with the driver dealing with the rest (p19).  Says you should give the user a variable stereo separation, as users tend to increase it as they get more comfortable with it.
  • "We are looking for 3D Stereo showcase titles and promotions" Nvidia 2008.

Multiple Viewport, RTT etc

  • http://www.ogre3d.org/forums/viewtopic.php?f=3&t=48376 - talks about joining multiple textures together, which haven't necessarily been rendered by the same application -> a possible way of doing multi-threading??
  • Multiple viewports and windows are very easy - you just add a new RenderWindow and a number of cameras, then add a viewport for each camera to the RenderWindow, stating its position and size (remember coordinates run 0->1) - see the sketch after this list.
  • Multiple viewports do seem to reduce the framerate approximately linearly with the number of viewports - at least for complex scenes.  For simpler scenes it performs much better.
  • Miyagi comes with examples of multiple cameras (I haven't looked at them, but see ControlRenderBox.cs).
  • http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=5300 - on using Mogre with 2 renderers.  It also
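
As a concrete sketch of the multiple-viewport point above, assuming an initialised sceneManager and renderWindow (the names are made up; AddViewport and its 0->1 coordinates are standard Mogre):

  // Two cameras sharing one window, side by side. Position and size are fractions (0->1).
  Camera camA = sceneManager.CreateCamera("CamA");
  Camera camB = sceneManager.CreateCamera("CamB");

  Viewport left = renderWindow.AddViewport(camA, 0, 0.0f, 0.0f, 0.5f, 1.0f);   // zOrder 0
  Viewport right = renderWindow.AddViewport(camB, 1, 0.5f, 0.0f, 0.5f, 1.0f);  // zOrder 1

  // Match each camera's aspect ratio to its viewport, or the scene looks squashed.
  camA.AspectRatio = (float)left.ActualWidth / left.ActualHeight;
  camB.AspectRatio = (float)right.ActualWidth / right.ActualHeight;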

Vsync

While the logs often suggest you enable VSync, it sometimes artificially limits the frame rate - e.g. with 2 monitors on the same graphics card and VSync on, one screen reaches 60fps, the other reaches 60fps, but fullscreen drops to 30fps.  Turn off VSync and you get ~180fps.
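
VSync can also be requested (or refused) per render window when you create it. A hedged sketch, assuming an initialised root - the "vsync" miscParams key is standard Ogre, the window parameters are examples:

  NameValuePairList misc = new NameValuePairList();
  misc["vsync"] = "false";  // "true" caps the frame rate at the monitor refresh rate

  RenderWindow window = root.CreateRenderWindow("Main", 1280, 720, false, misc);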

Instancing etc

  • If you have a lot of identical objects spread around the scene, you would typically have the Static / Instanced Geometry object always being rendered; it would be quicker to cull per instance rather than cull the vertices, especially if you have a lot of vertices per object.  Using hardware instancing would also (I assume) allow you to select a different resolution mesh according to proximity to the camera (check if this is possible, and with which versions of DirectX / OpenGL), further improving the frame rate.  If you did this, you would probably also need different settings for different hardware (i.e. you probably need several shaders).
  • http://http.developer.nvidia.com/GPUGems3/gpugems3_ch02.html (Animated Crowd Rendering 2.2) - Hardware Instancing - typically use a second vertex buffer to provide information on the position of instances - I don't know if that is
  • http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter03.html - stuff on the various types of geometry instancing.  Interestingly, it suggests that re-creating the mega-mesh of several instances each frame, all translated into global coordinates (i.e. like StaticGeometry), isn't totally unreasonable, because you are still saving on draw calls.  It appears slightly dated overall.  See 3.3.4, Batching with the Geometry Instancing API.
  • It looks like there is a separate DirectX function to call to enable hardware instancing - but check... (a hedged Ogre-level sketch follows).
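
At the Ogre level (rather than raw DirectX), 1.7-era Ogre exposes InstancedGeometry. A hedged sketch of what I believe the Mogre calls look like - the entity, name and placement loop are made up:

  // Batch many copies of one entity into instanced batches.
  InstancedGeometry ig = sceneManager.CreateInstancedGeometry("Trees");
  for (int i = 0; i < 100; i++)
  {
      ig.AddEntity(treeEntity, new Vector3(i * 10, 0, 0));
  }
  ig.Build();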

General Shaders

  • Can you make vertices invisible, so that nothing is rendered, from the fragment shader while still using the standard Ogre material scripts????  Really needed for static geometry where parts are occasionally invisible, which could be determined by a visibility mask or shader params that mask them.
  • http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=9025&sid=83566fa74fab0086ae10a544ab020dfe&start=420 - shows how to programmatically add a shader with various settings.  In this case, the shader fades the colour to white by passing a "brightness" value to the shader (a minimal sketch follows).
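
A minimal sketch of that programmatic route, assuming the material already references a fragment program with a "brightness" parameter (the material name is made up; the calls are standard Mogre):

  MaterialPtr mat = (MaterialPtr)MaterialManager.Singleton.GetByName("MyShaderMaterial");
  Pass pass = mat.GetTechnique(0).GetPass(0);

  // Push a named constant through to the fragment program whenever it changes.
  GpuProgramParametersSharedPtr fragParams = pass.GetFragmentProgramParameters();
  fragParams.SetNamedConstant("brightness", 0.75f);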

Fancy Shader Effects

http://www.gamedev.net/reference/programming/features/GPUFur/ - real time dynamic fur on the GPU.

Misc Materials

  • Wireframe is applied in a pass, and there can be multiple passes per material, so it would be very easy to create a combination material, e.g. wireframe plus semi-transparent or opaque.  The same is true for rendering the vertices.
  • With semi-transparency, you want to make sure the depth settings are right (depth_check on, and usually depth_write off for the transparent pass), or you will get weird effects where some things appear to be rendered behind things that are actually behind them.  Using scene_blend modulate, you can't have a black background, as it just modulates what is behind (it's recommended for things like smoky glass), whereas alpha_blend allows you to see the object whatever is behind it.
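
The same choices can be made programmatically. A hedged sketch of the alpha_blend case (material / pass access as in the shader sketch above; the depth settings are the usual transparency recipe):

  Pass pass = material.GetTechnique(0).GetPass(0);

  // Equivalent of "scene_blend alpha_blend" in a material script.
  pass.SetSceneBlending(SceneBlendType.SBT_TRANSPARENT_ALPHA);

  // Keep depth *checking* on but depth *writing* off, so the transparent
  // surface doesn't occlude things drawn after it.
  pass.DepthCheckEnabled = true;
  pass.DepthWriteEnabled = false;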

Colouring Static Geometry

Here you want to colour each object within a StaticGeometry individually (done 27/09/2010 - look at the TestMogre project):

  • Give the mesh two sets of texture coordinates by calling manualObject.TextureCoord() twice per vertex.
  • Use the second texture coordinate to set the overall colour: use a fixed texture coordinate for each object, corresponding to one pixel of an image.  That pixel will be used to set the colour.
  • You don't need any fancy shaders - just add a second texture_unit to your material script, the key part being that you set tex_coord_set 1 to use the second set of texture coordinates (see the sketch after this list).
  • I managed to get ~220fps with a Radeon 5850 and a Xeon X5482 processor running single-threaded, showing 128x128 tetrahedrons with the colour being updated every frame but the texture remaining static.  At 32x32 I get about 950fps.
  • Optimisation (NB some stats are a bit dodgy, as I may have used different sized windows; I tried to run maximised by the end) (also NB computers aren't that consistent!):
    • Colours for each tetrahedron were always stored in 2D arrays (not lists).
    • (Using float4 to store colour, not Vector3 (which had to be converted to float4); re-using byte[] objects; using constants in for loops) led to 1020fps at 32x32; at 128x128 you get ~290fps.  I.e. this kind of optimisation does work!
    • Without any colour changing, 32x32 renders at ~1250fps, but this is very dependent on how much of the screen is filled with tetrahedrons - zoom out and it goes to ~1500fps.  128x128 tetrahedrons run at 720fps with a 32x32 texture and 680fps with a 128x128 texture.  This demonstrates that updating the colour every frame doesn't incur a great performance penalty with a small number of entities (assuming the change of material to use two textures isn't a big hit), but it is significant with a large number.
    • Re-making the texture each frame but not changing the colour yields 370fps at 128x128; re-doing the colour but not re-making the texture yields 680fps at 128x128.  I.e. the main hit is creating the texture and sending it to the graphics card.
    • By changing the texture from TU_DYNAMIC to TU_DYNAMIC_WRITE_ONLY_DISCARDABLE and re-using the old texture pointer rather than creating a new one, you get 1020fps vs 920fps at 32x32 (with vs without these changes) and 320fps vs 290-295fps at 128x128 - i.e. they work!  At 32x32, not remaking the texture but using TU_DYNAMIC, you get 1050fps, although camera position varies the results a lot -> not scientific.  With TU_DYNAMIC and 128x128 you get 320fps.  In conclusion: don't remake the texture each frame, and use TU_DYNAMIC, as it has at least as good performance but doesn't risk weirdness if you don't update the texture every frame.
  • Future optimisation ideas:
    • Use float3 pixels, or don't alter the last float's value -> hard code the bytes for it.
    • Use lower resolution colour - currently colour is stored with 16 bytes per pixel (a 4-byte float per channel).  For most uses you could get away with 1 byte per channel (but not so swanky for some stuff).  A much better option is probably half-precision floating point, which uses 2 bytes per number -> 8 bytes for RGBA.  An excessively complex scheme might encode the alpha at lower resolution than the RGB.
    • Manipulate bytes rather than floats, to remove the conversion process.
    • Only change pixels that have changed - not particularly hard given you access the byte array as an array / with indexes.
    • Optimise buffer locks etc (see enums mentioned below / class ref). 
    • Don't remove the texture, update it.
    • Compress the texture before sending it.  Probably a big gamble for uncertain gain.  Also, compression level would probably vary a lot with time e.g. if you use the same colour for all pixels, compression should be very good, but because colour will largely be uncorrelated with pixel position, compression might be very bad when you use different colours for the pixels.
    • I think you must be able to state that the texture should always be in the graphics card -> speed up rendering for multiple viewports.
  • see http://www.ogre3d.org/docs/api/html/classOgre_1_1HardwareBuffer.html#_details - has various flags that might improve performance.  Some of these flags can be mixed together (i.e. use |).  Similarly http://www.ogre3d.org/docs/api/html/classOgre_1_1HardwarePixelBuffer.html
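
To make the recipe above concrete, here is a hedged sketch of the two key parts: giving each vertex a second texture coordinate that indexes one pixel of a colour-lookup texture, and rewriting that texture.  The names, sizes and material are made up; the ManualObject / TextureManager / pixel-buffer calls are standard Mogre (and the pointer write needs an unsafe block).

  // Part 1: two texture coordinate sets per vertex (the second picks the colour pixel).
  ManualObject mo = sceneManager.CreateManualObject("Tetra0");
  mo.Begin("TwoTexcoordMaterial", RenderOperation.OperationTypes.OT_TRIANGLE_LIST);
  mo.Position(0, 0, 0);
  mo.TextureCoord(0.0f, 0.0f);              // set 0: the normal texture mapping
  mo.TextureCoord(5.5f / 128, 7.5f / 128);  // set 1: centre of pixel (5,7) in a 128x128 lookup
  // ... remaining vertices and triangles ...
  mo.End();

  // Part 2: a dynamic colour-lookup texture, rewritten when colours change.
  TexturePtr tex = TextureManager.Singleton.CreateManual(
      "ColourLookup", ResourceGroupManager.DEFAULT_RESOURCE_GROUP_NAME,
      TextureType.TEX_TYPE_2D, 128, 128, 0, PixelFormat.PF_A8R8G8B8,
      (int)TextureUsage.TU_DYNAMIC);

  HardwarePixelBufferSharedPtr buffer = tex.GetBuffer();
  buffer.Lock(HardwareBuffer.LockOptions.HBL_DISCARD);  // discard: we rewrite everything
  PixelBox pb = buffer.CurrentLock;
  unsafe
  {
      byte* dest = (byte*)pb.data;
      for (int i = 0; i < 128 * 128; i++)
      {
          dest[i * 4 + 0] = 255;  // blue  (PF_A8R8G8B8 is laid out B,G,R,A in memory)
          dest[i * 4 + 1] = 0;    // green
          dest[i * 4 + 2] = 0;    // red
          dest[i * 4 + 3] = 255;  // alpha
      }
  }
  buffer.Unlock();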

Vector Graphics Rendering

  • http://www.ogre3d.org/forums/viewtopic.php?f=11&t=47237 - about the use of Cairo with Ogre - looks like it's not currently maintained.  Cairo is used for drawing vector graphics across platforms and can use an OpenGL backend, i.e. hardware acceleration.  Could be useful for things like very fast and attractive graph rendering.  It's used by things like Firefox, GNOME (IIRC) etc.  If using it, you might want to be clever about what you re-draw each frame and what you only draw once / occasionally, using Render To Texture to store the result.
  • OpenVG is created by Khronos (of OpenGL fame) and is designed for fast rendering of vector graphics, although primarily on handheld devices.
  • WRT vector graphics rendering - it would seem possible to use a variety of libraries, render separately and then dump the result into Ogre... the small print of how to do this might not be so simple though.
    • The risk of using an external library would be continuing to support its integration with Ogre.  It might be simpler for a number of things to roll your own... or it might not be!
  • Cairo can bring things like PDF, SVG and PS rendering into your application (possibly through external libraries).
  • Cairo has wrappers for most languages out of the box -> you may be able to bypass Ogre and link via .NET.

Miyagi

Quaternions

There are a few Quaternion.* methods for interpolating between quaternions, e.g. Squad, which does a cubic interpolation between four quaternions and is often smoother than linear interpolation, and Slerp (Spherical Linear Interpolation), which interpolates along the arc between two quaternions.
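
A hedged sketch of both (the endpoint values are made up; Slerp and Squad are real Mogre statics):

  Quaternion start = Quaternion.IDENTITY;
  Quaternion end = new Quaternion(new Radian(1.57f), Vector3.UNIT_Y);  // ~90 degrees about Y

  // t runs 0 -> 1; shortestPath avoids spinning the long way round.
  Quaternion halfway = Quaternion.Slerp(0.5f, start, end, true);

  // Squad takes two extra inner quaternions that shape the curve;
  // here they just reuse the endpoints for brevity.
  Quaternion smooth = Quaternion.Squad(0.5f, start, start, end, end, true);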

Pointers, Disposing Of Objects Etc

  • Holding on to pointers, e.g. TexturePtr, can be dodgy and should (AFAIK) generally not be done.  For instance, if you are dynamically creating textures and hold onto the pointer, then when you resize the window Ogre dumps the texture on the GPU and it needs to be resent - but this needs a different pointer, and keeping your old one blocks the dumping process (no idea why).
  • http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=13719&sid=eed78f70d5bf0614ae4e029cbaec0385 - a good, long discussion (encouraged by me).  If you do an implicit conversion of ResourcePtr to MaterialPtr etc., e.g. MaterialPtr matPtr = MaterialManager.GetMaterial(), you will not have disposed of all the pointers, as an implicit conversion happens for these kinds of things.  The best way (often) is to use using statements (see the sketch after this list).  Overall, it seems like you should try to dispose of everything, pointers and non-pointers alike.
  • http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=13825 - there is also a need to call texturePtr.Unload() before removing it from the TextureManager and disposing of it.
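
A hedged sketch of the using-statement pattern from that discussion (the material name is made up).  The point is that GetByName hands back a ResourcePtr and the conversion to MaterialPtr creates a second wrapper - both need disposing:

  using (ResourcePtr resPtr = MaterialManager.Singleton.GetByName("MyMaterial"))
  using (MaterialPtr matPtr = resPtr)  // the implicit conversion creates a second pointer
  {
      matPtr.GetTechnique(0).GetPass(0).SetDiffuse(1, 0, 0, 1);
  }  // both wrappers are disposed here, whatever happens above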

Shadows

Mesh Import Export

Pre-Made Meshes


Ogre Cleanup / Memory Management

  • sceneManager.DestroyStaticGeometry(sg);


3D Mice

General C#

Reflection

http://geekswithblogs.net/shahed/archive/2006/12/06/100427.aspx - reflection over an enum to get all the possible values: System.Enum.GetNames(typeof(MyEnum)) / Enum.GetValues.  http://blogs.msdn.com/b/tims/archive/2004/04/02/106310.aspx shows that using Enum.Parse(...) you can convert from a string to the enum (a minimal example follows).
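
A minimal self-contained example of both directions:

  using System;

  enum RenderMode { Solid, Wireframe, Points }

  class EnumDemo
  {
      static void Main()
      {
          // Enumerate every name defined on the enum.
          foreach (string name in Enum.GetNames(typeof(RenderMode)))
              Console.WriteLine(name);

          // And back again: parse a string into the enum value.
          RenderMode mode = (RenderMode)Enum.Parse(typeof(RenderMode), "Wireframe");
          Console.WriteLine(mode);  // prints "Wireframe"
      }
  }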

Halloween Edition

  • http://www.ogre3d.org/forums/viewtopic.php?f=8&t=55153&sid=0d400c6253915cac9490c62ac9ad6adb - some 3D models, including a skeleton.
  • Reduce animation complexity, change colours, add bubbling animated texture where possible (and possibly have frogs appearing in it too).

Editing ManualObject

http://www.ogre3d.org/forums/viewtopic.php?p=285753&sid=ce193664e1d3d7c4af509e6f4e2718c6 - it looks like you can't do things like go back and add texture coordinates easily, although some edits are possible (exactly what, I'm not sure, but I think adding vertices and sub-meshes is possible).

Optimisation

  • http://www.ogre3d.org/tikiwiki/OgreProfiler - you might need to be using C++, and it recommends building Ogre from source, but it might be good.
  • http://www.ogre3d.org/tikiwiki/Optimisation+checklist
  • Optimised game loop: http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=13133
  • http://www.radgametools.com/telemetry.htm - phenomenally expensive, but maybe worth seeing what it does.  They produce a variety of super-expensive tools (but maybe cheaper if you talk to them).
  • If you are comparing a lot of lengths of vectors, you may be able to use Vector3.SquaredLength rather than Vector3.Length if you are just doing a comparison, rather than using the length itself - square roots are expensive to calculate.  If you are comparing with a known length, square that length and compare it with Vector3.SquaredLength (see the sketch after this list).
  • If you are doing a lot of vector maths, see if you can replace several operations with higher-level functions that do the same thing, e.g. the members of the Vector3 class for comparing vectors etc.  I assume they will run faster (and may take advantage of more processor optimisations).
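
A sketch of the squared-length comparison (Vector3.SquaredLength is real Mogre; the positions and threshold are illustrative):

  Vector3 toTarget = targetPosition - cameraPosition;
  float maxDistance = 50.0f;

  // Compare squared values instead of paying for the square root hidden in .Length.
  if (toTarget.SquaredLength < maxDistance * maxDistance)
  {
      // within range - e.g. switch to the high-detail mesh
  }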

Multithreading

Creating ManualObjects / Meshes In Threads

http://www.ogre3d.org/forums/viewtopic.php?f=2&t=56140 - it's possible, using the right technique, to multi-thread a lot of your mesh creation.  It uses http://www.ogre3d.org/tikiwiki/Generating+A+Mesh style mesh creation rather than ManualObject, but AFAIK they are similar.  http://www.ogre3d.org/forums/viewtopic.php?f=1&t=64389 has more discussion of the same.

Memory Management

  • http://msdn.microsoft.com/en-us/library/ms404247.aspx - Weak References: if you have something that takes up a lot of memory and you can recreate it, create a weak reference to it when you're not directly using it.  The garbage collector can then reclaim the resource if memory runs low; if it doesn't run low, you can get a strong reference back as if you'd held one all along.  If it has been garbage collected, then obviously you will need to recreate the object (a minimal example follows).
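
A minimal sketch of the pattern - the cached object and the recreate step are placeholders:

  using System;

  class RecreatableCache
  {
      private WeakReference _cache;

      public byte[] Data
      {
          get
          {
              // Try to recover a strong reference; null means the GC reclaimed it.
              byte[] data = _cache == null ? null : (byte[])_cache.Target;
              if (data == null)
              {
                  data = Recreate();                 // expensive, but always possible
                  _cache = new WeakReference(data);  // GC may reclaim it under memory pressure
              }
              return data;
          }
      }

      private static byte[] Recreate() { return new byte[16 * 1024 * 1024]; }
  }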

Character Creation

Using Webcam Data

http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=13851 - mentions how to speedily put data in textures.

Video Capture

http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=9593&start=15 - claims to have got it working fairly well.  Has tried putting encoding on a background thread.  The thread is fairly long-running -> a good chance of it being a good solution.

Video Editing (for the poor)

Charts / Graphs

http://www.ogre3d.org/tikiwiki/tiki-index.php?page=Ogre%20Line%20Chart - how to create custom line charts in Ogre.  Uses shaders and creates nice graphs.

StaticGeometry

http://www.ogre3d.org/forums/viewtopic.php?f=2&t=29653&start=0 - stuff about the "OGRE EXCEPTION(2:InvalidParametersException): Point out of bounds in StaticGeometry::getRegionIndexes at ..\..\ogre\OgreMain\src\OgreStaticGeometry.cpp (line 215)" exception.  In short, StaticGeometry is broken up into 1024x1024 (x1024??) regions, and if you don't set it up correctly you might add something outside the range that has been set up, so it throws an exception.  This may be caused by the AABB of the entity you are adding being wrong, or by a vertex in a very weird place.  By default the SG origin is (0,0,0) and it can have 1024 regions of (1000,1000,1000) each (a configuration sketch follows).
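
A hedged sketch of setting the regions up explicitly so nothing falls outside them - the entity, name and sizes are made up; the StaticGeometry members are standard Mogre:

  StaticGeometry sg = sceneManager.CreateStaticGeometry("CityBlocks");

  // Every entity added must fall inside the grid of regions starting at Origin.
  sg.RegionDimensions = new Vector3(1000, 1000, 1000);
  sg.Origin = new Vector3(0, 0, 0);

  sg.AddEntity(buildingEntity, new Vector3(500, 0, 500));
  sg.Build();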

DirectX SDK

Error Presenting Surfaces Exception

"12:02:13: OGRE EXCEPTION(3:RenderingAPIException): Error Presenting surfaces in D3D9Device::present at .\..\..\ogre\RenderSystems\Direct3D9\src\OgreD3D9Device.cpp (line 993)" - happened (probably) when creating or re-creating meshes i.e. Creating ManualObject and then putting in a StaticGeometry.

Soft Body e.g. Clothing, Hair, Grass

Checking Hardware Capabilities of Graphics Cards

The cause for me to look into this was that some (even fairly recent, 2011) Intel graphics chips don't support Full Screen Anti-Aliasing (FSAA).  It has proven difficult to find a reliable method of determining capabilities in this area.

  • http://www.ogre3d.org/forums/viewtopic.php?f=2&t=65188 - uses Ogre::Root::getSingleton().getRenderSystem()->getConfigOptions();  http://www.ogre3d.org/forums/viewtopic.php?f=2&t=61986 uses a similar method.  These didn't appear to work for me: a capable Radeon reported only 0 for FSAA modes, the same as an Intel HD graphics chip without FSAA support (see the sketch after this list).
  • http://www.ogre3d.org/forums/viewtopic.php?f=5&t=48492 - renderSystem.CreateRenderSystemCapabilities() needs to be called after a render window has been created (even a dummy one), or it returns null.
  • FSAA current status: it looks like we want the multi-sample type, not the FSAA quality, as the quality relates to NVidia CSAA only and not to FSAA on other cards; yet it seems that when you set the FSAA level, it does set (what I would call) the quality.  See D3D9RenderSystem::determineFSAASettings.
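
For reference, a hedged sketch of the config-options route those threads use, assuming an initialised render system (the iteration details may vary between Mogre versions):

  // Must run after the render system is selected (and ideally after a window exists).
  ConfigOptionMap options = Root.Singleton.RenderSystem.GetConfigOptions();
  foreach (KeyValuePair<string, ConfigOption_NativePtr> pair in options)
  {
      if (pair.Key == "FSAA")
      {
          foreach (string mode in pair.Value.possibleValues)
              Console.WriteLine("FSAA option: " + mode);  // only "0" on my Radeon, see above
      }
  }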

Slicing / Clipping

For if you want to show a section through an object.

Compositing and GUIs / HUDs

  • Needs research.
  • Window managers using things such as Compiz will (AFAIK) allow different elements of the screen (windows) to be rendered at different frame rates; the windows can then be combined and overlapped.  A similar technique could be used to create a 3D program with a high frame rate, responsive HUD and a potentially slower frame rate main render window.  This is important because when the frame rate drops to, say, 20fps, mouse movement can seem very sluggish even though animation of the main scene seems sufficient.  Smooth animations on the HUD also give the illusion of a much faster program.

Screen Burn-In

WRT the issue of whether, if you have a very similar image on screen for a long time, you need to change the image to prevent it being permanently etched on the screen:

  • http://www.techlore.com/article/10099/Do-LCD-TVs-Burn-In-/ - NB the article is about TVs.  The simple summary is that LCDs don't really burn in except under extreme circumstances - which a computer program might be categorised as.
  • http://en.wikipedia.org/wiki/Screen_burn-in - Plasmas do get burn-in (but are going out of fashion).  OLED currently burns in more than plasma (but at the time of writing isn't used much).  With plasma and LCD you can get temporary burn-in-like behaviour, which can be relieved by leaving the screen off for a long time.  Apparently some signage companies shift the image very slightly so that, while it does burn in, the edges are softer and you don't notice it so much.

Overall, while LCD remains the dominant technology, it's not a great problem.

Get The Mogre Version

typeof(Mogre.Root).Assembly.GetName().Version

Mogre In WPF

  • http://www.codeproject.com/KB/WPF/OgreInTheWpf.aspx - slightly dated (Sept 2008), but likely to still work.
  • WPF is going to be a lot better if you are (definitely) only going to be developing on Windows, but if you ever move to Linux you probably won't be able to use it at all - although given you'd need Mogre redone for Linux, you've probably already locked yourself into Windows.
  • See also the forum topic Ogre with WPF client, which contains good information and links, e.g. to the MogreInWPF demo.  More links are in this post (of the same forum topic).

Procedural Geometry

There is a project http://code.google.com/p/ogre-procedural/ / http://www.ogre3d.org/tikiwiki/Ogre+Procedural+Geometry+Library to help easily create procedural geometry, i.e. meshes of common objects such as spheres (which are made out of lots of triangles).  The more interesting side is that you can do extrusion and use SVGs as the basis for the object to extrude.  This means you can do things like easily create a 3D map based on an SVG you found.  Currently there is even a screenshot of OpenStreetMap XML data rendered in 3D.

Render Cycle

  • Frame listeners: http://www.ogre3d.org/tikiwiki/Basic+Tutorial+4&structure=Tutorials#FrameListeners - FrameStarted / FrameRenderingQueued / FrameEnded - the order in which listeners are called is (apparently) not predictable, and you should always use FrameRenderingQueued to update your program for best performance (a sketch of hooking it from Mogre follows this list).  http://www.ogre3d.org/docs/api/html/classOgre_1_1FrameListener.html "Of course because the frame's rendering commands have already been issued, any changes you make will only take effect from the next frame, but in most cases that's not noticeable."  My personal experience is that FrameStarted vs FrameRenderingQueued (D3D9) doesn't make any noticeable difference.
  • It is fairly important to have a predictable frame rate, i.e. each frame takes about as long as the previous one.  Issues arise because you are predicting how long the next frame will take and using that prediction for animation (of scene objects or the camera); a common symptom is a stuttering camera - although you should also check your camera control.  A consistent frame rate can be difficult with things like garbage collection (even if it should be asynchronous), and especially where you cannot naturally achieve a high frame rate.  If you have a very high frame rate, you can turn on VSync (or do something similar yourself) so that each frame takes a consistent time, even if it could run faster.  If you do have stutter, it would be a poor decision to use the previous frame time as a direct indicator of the next one: consider averaging several frames, and consider removing inconsistent values (exceptionally low or high ones).  But if you are trying to run a real-time animation, filtering values may make you drift off the real clock; filtering may however be sensible for human interaction, e.g. camera movement, where you are trying to reduce large jolts.
  • Attempt to make as much as possible asynchronous to the render cycle - but this may be easier said than done.
  • Should investigate how to make the time predictions as accurate as possible, e.g. timing at the end of the frame cycle to update the most critical parts (camera)...
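
A hedged sketch combining the two points above: hook FrameRenderingQueued (Mogre exposes the frame listener as events on Root) and smooth the per-frame time before using it for animation.  The ten-frame window and the update routine are arbitrary placeholders.

  class SmoothedFrameUpdater
  {
      private readonly Queue<float> recentFrameTimes = new Queue<float>();

      public void Attach(Root root)
      {
          root.FrameRenderingQueued += OnFrameRenderingQueued;
      }

      private bool OnFrameRenderingQueued(FrameEvent evt)
      {
          // Average the last few frames rather than trusting one raw value.
          recentFrameTimes.Enqueue(evt.timeSinceLastFrame);
          if (recentFrameTimes.Count > 10)
              recentFrameTimes.Dequeue();

          float dt = 0;
          foreach (float t in recentFrameTimes) dt += t;
          dt /= recentFrameTimes.Count;

          UpdateCameraAndScene(dt);  // hypothetical update routine
          return true;               // returning false stops rendering
      }

      private void UpdateCameraAndScene(float dt) { /* move camera, advance animations */ }
  }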


Alias: MiscellaneousNotes2