WarehouseJims MOGRE HOWTO        

This is a HOWTO page for Mogre by user WarehouseJim (created in 2010).  There is another page, WarehouseJims MOGRE HOWTO 2, with content from 2011.  Both pages are likely still relevant.

Currently it's plain content. The page markup and corrections will be done later.


Table of contents

Unsorted Notes

Sort these into the relevant sections if possible.

  • http://www.ogre3d.org/wiki/index.php/Mogre_FAQ has some details that include stuff about memory leaks
  • Anything *_NativePtr is a struct and must be explicitly destroyed (*.DestroyNativePtr()) to prevent memory leaks.  Call *.Create() to make a new object; using "new" will return an empty version.  The == operator works as expected.
  • With the MogreFramework, I changed DefaultInputHandler to use System.Windows.Forms.Timer as Timer was ambiguous between that and a Mogre timer.
  • Compiled MogreFramework from the SVN, rather than using the binary release... it was easy apart from the above timer bug and "Error 1 The command "copy "...\OGRE\MOGRE\tutorials\MogreFramework\MogreFramework\bin\Debug\MogreFramework.dll" c:\OgreSDK\bin\Debug    exited with code 1. MogreFramework" -> you have to manually copy the .dlls into the bin and release directories.
  • Should probably update for VS2008 (or maybe do a version with SharpDevelop).
  • http://www.ogre3d.org/phpBB2addons/viewtopic.php?p=43624&sid=ce193664e1d3d7c4af509e6f4e2718c6 to do with "could not load dynamic library .\RenderSystem_Direct3D9_d"... error coming from OGRE when you start the example framework.
  • Downloading the VS 2005 SP1 (may not be necessary, may screw things up!)
  • http://praetorianstudios.freeweb7.com/BasicMOGRESetup.zip is a Template for MOGRE projects (haven't looked at it yet)
  • The OGRE documentation is still the fundamental reference.
  • XYZ: y is vertical, z+ is into the screen.
  • A lot of LOD stuff is dealt with in the Entity class.
  • The Entity class also has details of whether it has a skeleton.
  • Each class is brimming with methods -> you should virtually never need to write your own get...() function.
  • Position of entity is always relative to parent (as one would expect)
  • Can use roll (around z)/pitch (around x)/yaw (around y) to rotate around standard axes, or can use rotate(..) to rotate around a personally defined axis (which would be particularly useful in a skeleton style situation).  node.Roll(new Degree(45)) or Radian(..)
  • Can move just through adding vectors: node.Position+=new Vector3(...);
  • Camera.LookAt(....) — v.useful.  Any node can do this as well -> could easily have a head looking where it's going.
  • While you can have multiple scene managers, given the camera is attached to a particular scene manager, can it actually see the others? (I suspect it can)
  • You can subdivide the renderwindow and display different viewports from different cameras in it simultaneously. 
  • Don't forget to set the Camera.AspectRatio — alternatively it can be set automatically with cam.AutoAspectRatio = true
  • Shadows set to be cast on a per-entity basis (I don't know what the default is). 
  • Fog, as shown in basic tutorial 3 is very easy to do, and kind of cool.  You can set it so that it only starts a certain distance from the camera.
  • User interactions potentially should not be put in the frame listener.  E.g. you don't need to poll the mouse position at 600fps (although I would actually argue that it wouldn't be a bad way of keeping fast mouse movement smooth).  But for the keyboard, using the external timer does mean that you will move at the same speed whatever the frame rate.
  • http://www.idleengineer.net/tutorials/MogreFramework/MogreFramework/DefaultInputHandler.cs is the input handler for the MogreFramework.  It is more fleshed out than the tutorial version.  Possibly want to create some code generation for this type of thing, or maybe have a user-configurable file... classic thing to assign your own keys.
  • Basic tutorial 5 is the one that tells you how to dump the example framework if you choose.
  • Can do entity.BoundingBox.Center to find the centre of an entity.
  • It might be a good idea to limit the frame rate to ensure that other threads have CPU time to compute other things... no real need for 700fps!
  • I think it would be sensible to at least look into using non-Windows.Forms based input... if it's easy, it would be sensible to do up front.
  • http://www.ogre3d.org/phpBB2addons/viewtopic.php?t=688&sid=ce193664e1d3d7c4af509e6f4e2718c6 has a few (old and probably c++) details of using windows.forms with Ogre.
  • The 1st intermediate tutorial's code seems not to be associated with the tutorial, and certainly it doesn't cover the full tutorial.
  • http://www.youtube.com/watch?v=1delw4ePwEw&eurl=http://sunday-lab.blogspot.com/2008/08/ogre-160rc1-shoggoth-released.html shows a neat use of scripting in game.
  • http://www.ogre3d.org/wiki/index.php/ShoggothNotes is the details about OGRE 1.6.0 which hasn't been released yet, and I'm not sure what has been done to move MOGRE in that direction.
  • http://www.ogre3d.org/phpBB2addons/viewtopic.php?t=8050&sid=ba0a697af3ed8857b82bce5210219ac4 - forum post about putting text above scene nodes.
  • Can you use a .NET 2.0 dll in a .NET 3.5 application? — otherwise might need to build MOGRE  - which would be a pain, but I don't think would have any issues as .NET 2.0 is just a subset. --  Seemingly, yes you can... although haven't tried with any .NET 3.5 features yet.
  • NB using roll / pitch / yaw can be dodgy: with the camera, for instance, it can lead to unexpected rotations.  Might be better in general to do rotations about the coordinate axes or something like that.
  • The default bounding box can't have its colour changed etc.  Can attach a custom bounding box to a node.  I don't know how to remove it... it possibly just overrides the previous one.  It can have a material etc -> could probably have a semi-transparent bounding box (new WireBoundingBox; probably other types available).
  • Can add query flags to the entity to allow easy sorting by type ent.SetQueryFlags(...) — would be useful for setting type of object.  Then do mRaySceneQuery.setQueryTypeMask(...) to get only the ones wanted.  Can do some fancy things with it.  See http://www.ogre3d.org/wiki/index.php/Intermediate_Tutorial_3
  • Volume selection: http://www.ogre3d.org/wiki/index.php/Intermediate_Tutorial_4 - better selection can be done through MMOC (Minimal Mogre Object Collision, or something similar) which has mesh-level mouse picking.  It's based on MOC (the Ogre version).
  • May be able to optimise the scene query to polygon level (according to wiki article... I think there are two C++ versions).
  • As it is a wrapper, you can't subclass MOGRE classes.  To some extent it wouldn't be a good idea anyway as in most systems you have a scene object, such as a person, who would have sound, graphics, physics etc all associated with it -> multiple-inheritance situation if you go down that route.
  • http://www.ogre3d.org/wiki/index.php/ClOgre - MogreQuickStart - it's a class made for allowing you to get going quickly with MOGRE.  It has comments (in French and English!).  Might be worth looking over.  It's written in VB though.
  • http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=8783&sid=e6cd21b7c2160794e7acb816a7a20ae0 - might be useful... about MogreNewt, sticking one object to another etc...
  • http://artis.imag.fr/~Xavier.Decoret/ has some (lengthy) tutorials / articles on 3D stuff and Ogre in particular.
  • Neoaxis is a suite of tools based around Ogre.  You program in .NET over mono -> it implies that it's using Ogre.NET.
  • Seems like you can have negative scaling!
  • http://www.ogre3d.org/wiki/index.php/CommonMistakes
  • Convert local to global / world coordinates by doing node._getDerivedPosition() - it used to be node.WorldPosition, but this was removed in Ogre 1.6
  • Some stuff on double precision http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=11634&sid=2a3b8bd96e5e02d58038bcd5cc17340d - the camera relative rendering stuff can be found in the scene manager class reference.
  • Comments for the API can be added through http://www.ogre3d.org/wiki/index.php/Mogre_XML_commentation_tool (which may become trunk / ... sometime).  Forum detailing it http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=11646&sid=2a3b8bd96e5e02d58038bcd5cc17340d
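The *_NativePtr point above can be sketched as follows.  This is illustrative only: "SomeType_NativePtr" is a stand-in name for any of Mogre's *_NativePtr structs, not a real class.

```csharp
// Illustrative sketch: "SomeType_NativePtr" stands in for any Mogre *_NativePtr struct.
// These wrap a raw native pointer, so the .NET garbage collector cannot clean them up.

// Wrong: "new" gives you an empty struct with no native object behind it.
// SomeType_NativePtr empty = new SomeType_NativePtr();

// Right: Create() allocates the native object...
SomeType_NativePtr ptr = SomeType_NativePtr.Create();
// ... use ptr ...
// ...and it must be destroyed explicitly, or the native memory leaks.
ptr.DestroyNativePtr();
```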
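The node movement / rotation bullets above look roughly like this in code.  A minimal sketch: the mesh name, node name, and the numbers are made up, and sceneMgr / camera are assumed to be your SceneManager and Camera.

```csharp
SceneNode node = sceneMgr.RootSceneNode.CreateChildSceneNode("RobotNode");
node.AttachObject(sceneMgr.CreateEntity("Robot", "robot.mesh"));

// Position is always relative to the parent node; move by adding vectors.
node.Position += new Vector3(0, 0, -50);

// Rotate around the standard local axes...
node.Roll(new Degree(45));    // around z
node.Pitch(new Degree(30));   // around x
node.Yaw(new Degree(90));     // around y

// ...or around an arbitrary axis, e.g. for skeleton-style situations.
node.Rotate(new Vector3(1, 1, 0).NormalisedCopy, new Degree(30));

// Any node or camera can be pointed at a target.
camera.LookAt(node.Position);
```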
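The query-flags bullet above, sketched.  The flag names and values are hypothetical; the pattern follows Intermediate Tutorial 3.

```csharp
// Hypothetical bit flags for object categories.
const uint ROBOT_MASK = 1 << 0;
const uint CRATE_MASK = 1 << 1;

ent.QueryFlags = ROBOT_MASK;   // tag the entity with its category

// Later, restrict a ray query to robots only.
RaySceneQuery rayQuery = sceneMgr.CreateRayQuery(new Ray());
rayQuery.Ray = camera.GetCameraToViewportRay(0.5f, 0.5f); // centre of viewport
rayQuery.QueryMask = ROBOT_MASK;

foreach (RaySceneQueryResultEntry hit in rayQuery.Execute())
{
    // hit.movable is the MovableObject whose bounding box the ray hit,
    // hit.distance is the distance along the ray.
}
```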

Tutorial Remarks

  • MOGRE tutorial framework doesn't show the standard OGRE HUD.
  • Haven't got tutorial 6 to work yet! - but the sample version of it does work, and you can happily add in Windows.Forms stuff to the program.  You can't interact with the MOGRE window though and the resizing doesn't work. — Got it to work by following through the tutorial.
  • Tutorial 6: Windows.Forms stuff.  The key line for embedding the MOGRE output in a Windows Form is: misc["externalWindowHandle"] = this.Handle.ToString();  Say you want to embed it in another component, say a new panel called "panel1", you do: misc["externalWindowHandle"] = this.panel1.Handle.ToString(); and it will appear within that panel... easy!  Of course you really need to set what happens during resize etc (a Windows Forms problem mostly).
  • Tutorial 6: a key bit is that you have to set the MOGRE resolution as large as the maximum your windows forms container will get to, to ensure you don't get pixelisation.
  • It is worth noting that load times are significantly quicker when you start doing something like tutorial 6, compared to some of the others...  I think it is because I'm not loading so much pointless junk. 
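Pulling the Tutorial 6 notes together, embedding the render window in a Windows.Forms panel looks roughly like this.  A sketch: root is assumed to be an initialised Mogre Root and panel1 an existing Forms panel.

```csharp
// Render into an existing Windows.Forms control instead of Ogre's own window.
NameValuePairList misc = new NameValuePairList();
misc["externalWindowHandle"] = panel1.Handle.ToString();

// Make the render window as large as the container will ever get,
// to avoid pixelisation when the form is resized upwards.
RenderWindow window = root.CreateRenderWindow(
    "MainWindow", (uint)panel1.Width, (uint)panel1.Height, false, misc);
```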

Scene Querying / Finding out about the rendered image

  • There must be some stuff elsewhere in this doc about this.
  • http://www.ogre3d.org/wiki/index.php/Selection_Buffer - an alternative to scene querying etc that paints the scene with a different (opaque) colour for each object -> the colour of the pixel tells you what to select.  Of course, you don't show this image to the user.  I assume this method gives you no information about what is behind the first pixel (which can be useful if it's hard for the user to click accurately), but you could probably do the same method again without rendering the object that you just selected in the front.  I don't know about the speed trade-off of this method vs others.

Hardware Occlusion Query (HOQ)

  • http://www.ogre3d.org/wiki/index.php/Hardware_Occlusion_Query - it gives an example based on making realistic lens flare.
  • Allows you to determine on a per-pixel level what is layered on top of what.
  • Given it uses the hardware, the data gathered may not be useable within that frame if you send it back to the main program for interpretation.  I.e. the results aren't as instantaneous as you might expect. — this tidbit is taken from a forum post.
  • You can use transparent objects  

MOGRE 1.6 Shoggoth

Ogre moved to the 1.6 branch and there was some delay in moving Mogre to the new code base.  There was also some discussion about the best way to do this, the merging of Ogre.NET and Mogre, the use of Python / Java /... build systems etc.  In the end, Bekas returned to re-generate everything, after some others had had some success with different techniques.  So as of 13th May 2009, it would appear that 1.6 will be fairly stable / reliable.  Further, although the initial testing is only just happening, it would probably be sensible to use the new code base to ensure future compatibility as I am sure that the 1.4 branch will be increasingly neglected.  There is much discussion on the forums, and http://www.ogre3d.org/wiki/index.php/ShoggothNotes also has useful information, but I think that soon there will be pre-compiled binaries available.  From the discussion on the forums, it appears that most things will be backwards compatible with 1.4 (people were just re-compiling their 1.4 projects), and so the risk and effort of changing is minimal (arguably a reason to stick with 1.4 as well).

  • 1.6.4 required a DirectX update. Update: 1.6.* is now essentially the main branch (from well before March 2010).

Setting it up

I think I've got it sorted anyway!

  • http://www.ogre3d.org/wiki/index.php/Mogre_Basic_Tutorial_0 and http://www.ogre3d.org/wiki/index.php/MOGRE_Installation .
  • Copy the *.cfg files from the SDK to your Release and Debug folders.
  • Any ../../ entries in the resources.cfg need to be changed to C:/MogreSDK/
  • When using the example framework, and probably otherwise, problems arise from the program not finding certain .dlls.  This could be due to a problem in the path variable, but for redistribution it would be better not to rely on the path at all: copy the *.dll files from the MOGRE SDK into your binary directories.  Files named *_d.dll or *_D.dll are debug builds; without the _d they are release builds.  The plugins.cfg file needs to be adjusted to use the debug or release versions accordingly.  If you do use the path variable, be very careful not to have spaces between entries in the path string.  If the debug versions give trouble, it may be safer just to use the release ones for the moment.
  • You need to make sure that the platform target for compilation is set to 32bit.
  • You don't need to install any VS2008 service packs as they come with the latest build anyway.

Axes

MOGRE axes "As you are looking at your monitor now, the x axis would run from the left side to the right side of your monitor, with the right side being the positive x direction. The y axis would run from the bottom of your monitor to the top of your monitor, with the top being the positive y direction. The z axis would run into and out of your screen, with out of the screen being the positive z direction." - OGRE Tutorial 1 

dll Issues

UPDATE: These issues shouldn't arise following the proper setup.  The MOGRE samples work when built from scratch, so the below should be non-issues.  I suspect that the issue is not being able to build for x86 (which causes the problems mentioned below).  The current hack is to build all the projects within the samples solution.

Debugging

Exception Handling (from Ogre / Mogre)

OGRE Logs

  • In the bin/debug or bin/release directory you should be able to find an ogre.log file which gives information about why something hasn't worked / what has been going on.
  • Do you get more information if you use the debug .dlls?

Troubleshooting

MOGRE GUIs

Graphics API

Have decided that DirectX is a better option given the project is Windows-based - it seemed to run a bit quicker.  The choice of DirectX vs OpenGL appears fairly academic, as they can be swapped over very easily in Ogre, but the appearance on screen does seem to change between the two APIs, and importantly the latest DirectX is tied to the latest Windows version, whereas the latest OpenGL still runs on Windows XP.  OpenGL appears to have stagnated in the past, but is progressing quickly now, and seemingly has (largely) caught up: http://en.wikipedia.org/wiki/Opengl.  Probably the most important issue going forwards is that the number of shader features supported is large.  I don't know if they are limited by the version of DirectX / OpenGL, as e.g. Nvidia's Cg language is a separate entity (and runs on both).  NB that Ogre currently only supports DirectX 9, and I think only OpenGL 2.

Direct3D / DirectX

  • DirectX 9 was released in 2002!  It is the version that should be developed around.
  • DirectX 10, released Nov 2006, is only available on Vista and above, and is a significant change from v9.
  • DirectX 11 was released with Windows 7 in October 2009, but is also available on Vista.  It is a superset of v10.

OpenGL

http://en.wikipedia.org/wiki/Opengl - OpenGL 3.2 formally introduces Geometry Shaders, but 3.0 allowed it as an extension.  3.x requires DirectX 10 capable hardware as a rule of thumb.

Lighting

  • Lights can be associated with the SceneManager.   mgr.CreateLight(..);  Not sure if they can be associated with nodes, but I assume they can be.   They don't seem to be able to do light.LookAt(..)
  • For texture shadows to work, you need to use spot lights.
  • Advanced methods:
    • Spherical harmonic light maps
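A minimal sketch of the lighting points above.  The light name, position, and angles are made up; note the spot-light type, which texture shadows require.

```csharp
Light light = sceneMgr.CreateLight("MainLight");
light.Type = Light.LightTypes.LT_SPOTLIGHT;   // texture shadows need spot lights
light.Position = new Vector3(0, 150, 250);
light.Direction = new Vector3(0, -1, -1);     // lights have no LookAt(); set Direction
light.SetSpotlightRange(new Degree(35), new Degree(50));

// A Light is a MovableObject, so it can also be attached to a node.
SceneNode lightNode = sceneMgr.RootSceneNode.CreateChildSceneNode();
lightNode.AttachObject(light);
```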

Animation

  • Some animation can be done with the graphics card, rather than the CPU.
  • http://www.ogre3d.org/docs/manual/manual_78.html vertex animation (interesting) subtypes Morph and Pose animation.
  • There is no built-in facility for, say, moving a robot between points... you do that part, the robot does its walk though!  I guess you could alter the walk speed to keep the leg speed in sync with the speed over land.
  • Look at things like KeyFrames and AnimationTrack (or NodeAnimationTrack) - these define states to be interpolated between as part of the animation.
  • The Lighting python demo has a useful example of animation between several nodes.
  • Skeletal, morph and pose animation can be performed by the vertex shader on your GPU, but you need to give hints to Ogre to make this happen. http://www.ogre3d.org/docs/manual/manual_18.html#SEC105
  • Skeletal animation is stored in a .skeleton file, separate from the .mesh file (and with the stuff included in OgreSDK, in a separate folder).
  • Vertex animations move the vertices (as opposed to the node or skeleton). 
  • Vertex animation cannot be limited to a smaller subset of a mesh than a submesh (but AFAIK it can be limited to a submesh).
  • You can blend skeletal and vertex animations, but you can't blend pose and morph vertex animations.  I think you can blend pose with pose, but not morph with morph.
  • Morph is a less efficient subset of pose -> use pose (even if it may occasionally be a bit tough).
  • Skeletal animation is more efficient and generally easier to do.
  • If animating on the GPU you lose the ability to query the mesh (which may be an issue for physics / precision scene querying), but you can do entity.AddSoftwareAnimationRequest() to do a CPU render.  So you might be able to do this if you detect you need more precise mesh info.
  • If you use Controllers (e.g. for node-node interpolated animation), the animation is automatically updated / you don't have to push time.  This may be a (small) speed optimisation over doing this type of animation yourself, which isn't particularly complex.  You may however lose the ability to multi-thread your animations this way, but again, probably not a big issue as they're fairly simple.
  • http://sourceforge.net/projects/tecnofreakanima/ - is a tool (built using Ogre and previously Mogre) to help do blend tree animation.  It's got some interesting GUI stuff going on as well.

AnimationTracks

These are the basis of all non-vertex animation i.e. the movement of nodes and bones of a skeleton (which are just special nodes).

  • You can apply several tracks by animating a node then its child... -> basically sum the results -> things like a wobbly orbit are possible.
  • Tracks can be spline rather than just linearly interpolated.  animation.SetInterpolationMode(), animationstate.SetBoneMaskEntry()
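The track bullets above, sketched as a simple node animation.  The animation name and keyframe values are made up; node is assumed to be an existing SceneNode.

```csharp
// Define a 4-second animation that moves a node out and back.
Animation anim = sceneMgr.CreateAnimation("orbit", 4.0f);
anim.SetInterpolationMode(Animation.InterpolationMode.IM_SPLINE); // spline, not linear

NodeAnimationTrack track = anim.CreateNodeTrack(0, node);
track.CreateNodeKeyFrame(0.0f).Translate = new Vector3(0, 0, 0);
track.CreateNodeKeyFrame(2.0f).Translate = new Vector3(100, 0, 0);
track.CreateNodeKeyFrame(4.0f).Translate = new Vector3(0, 0, 0);

// Drive it via an AnimationState, pushing time each frame.
AnimationState state = sceneMgr.CreateAnimationState("orbit");
state.Enabled = true;
// in the frame listener: state.AddTime(timeSinceLastFrame);
```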

Skeletal Animation

  • Weights specify the effect of each bone on a particular vertex (this was, at least at one point, limited to a max of 4 bones per vertex).
  • For different speeds of walking, might want to blend running and walking animations to different degrees.
  • Hardware acceleration may have a limit on the number of bones (85 in geforce FX, prev generations <24) -> you may want to split up very complex animations (use several passes).  Or have LOD options.  Things like not modelling individual fingers might help.
  • To model muscle bunching (e.g. the bicep, but more importantly the shoulder when you raise your arm), it might be good to have extra bones which may not be moved relative to another bone, but can be used for better weighting.
  • A common issue seems to be unnaturally sharp lines where limbs reach the body.
  • You can attach external entities to a bone using entity.AttachObjectToBone().  Useful for things like carrying objects in hands.  These are called "tag points".  I don't know where on the bone it references. — couldn't get it to work with a brief try... threw exception.
  • You generally want to name bones so that they can be referenced in Ogre.
  • Skeleton stored in Skeleton class!  Constructed by SkeletonManager.  You can use it to add bones
  • Ogre 1.6 allows you to use blend masks when combining several animations which give per bone rather than just the traditional per skeleton weightings.  animationstate.CreateBlendMask()
  • Animation myanim = skeleton.CreateAnimation(name, duration);
  • Animations (AFAIK) are built in a similar way to any node based animation as bones are just a subclass of node - you create tracks which define how the nodes move.
  • skeleton.HasAnimation()
  • skeleton.SetAnimationState()
  • skeleton.SetBlendMode() - how it blends between animations.
  • skeleton.OptimiseAllAnimations() - no real details provided, but what harm can it do?!?
  • skeleton.AddLinkedSkeletonAnimationSource() - for where you have near-identical skeletons... not for linking sub-animations or the like.
  • Bone class - interestingly inherits from Node -> I assume you just move it like a node.  (This probably also explains what a skeleton really is: a weighted node graph with links to vertices.)  Created with something like bone.CreateChild(); the root bone is created by the skeleton.
  • bones can be setManuallyControlled().  If manually controlled, then the animation won't reset its position.  NB " Manually controlled bones can be altered by the application at runtime, and their positions will not be reset by the animation routines. Note that you should also make sure that there are no AnimationTrack objects referencing this bone, or if there are, you should disable them using pAnimation->destroyTrack(pBone->getHandle());"
  • https://www.ogre3d.org/forums/viewtopic.php?f=5&t=36520 - is wrong, but might give hints as to how to make a skeleton manually.
  • bone.Reset() - resets the bone to its original position (unless it is manually controlled).
  • A bone in Ogre, according to the XML version, seems to be the start of the bone in Blender (i.e. the bone before in Blender).  They are defined with a position, rotation about an axis in radians (same as any node?) - I assume all locally compared to previous node. --i.e. the bones are a little misleading in Blender
    • Best to think of it as the bone name being the node at the fat end of the bone.
    • Rotating what would appear to be the shoulder bone, seems to rotate that bone, not the node at the end of it (which I guess is what you'd mostly want)
    • It's more confusing than that (or maybe it's all obvious!).  The world position of a bone is relative to the root bone, whose world position is relative to the parent node (?).  If you display it, then the point is at the source of the bone - as mentioned above.  It's not this simple though... try doing it with all bones and some will end up outside the skeleton.  It looks like there's a scaling issue in there somewhere.  TODO - this has broader relevance / may be needed for working out the length of arms etc.
  • NB Inverse Kinematics (IK) is not implemented in Ogre (as of 1.6.4) AFAIK, even though it may be listed as a feature in some places.  There have been attempts to use OpenTissue in it though.
  • If you rotate a bone, and all the vertices appear to rotate around the root of that bone, it is likely that you have the bone assignments wrong.  Check the mesh.xml file for vertex bone assignments and check that it uses different bones.  NB this won't necessarily show up in Blender (in my case, all appeared fine when distorting the skeleton in Blender).
  • http://www.ogre3d.org/wiki/index.php/ManuallyControllingBones - on manually controlling bones.  NB it references two forum threads on manipulating the bones http://www.ogre3d.org/forums/viewtopic.php?t=1550 (more useful than the other thread, but summarised by the wiki article) and skeletal blend masks http://www.ogre3d.org/forums/viewtopic.php?t=39510 (this appears to be weight per bone biasing between different animations — the thread isn't that useful, it's a discussion of its development rather than lots about how to use it).
  • NB should do entity->getSkeleton()->getBone(i)->setManuallyControlled() not entity->getMesh()->getSkeleton()->getBone(i)->setManuallyControlled() in general.  The latter gets the general skeleton (many entities), whereas the first gets the one associated with the entity.
  • Redefining animations in program:
    • animState.AddTime(..); bone.SetManuallyControlled(true);  bone.Rotate(...);  — untested, may have flaws
  • Animation Blending :
    • skeleton->setBlendMode(ANIMBLEND_CUMULATIVE); - there are various blend modes.
    • Generally you just use two animations at the same time.
  • To export animations from blender, once you have created them, you need to add them in the big box of the export window.  It will give you the option of the start and end times etc.  To check that it has been done, look in the skeleton.xml file and there should be an <animations> tag, which defines the movements of the bones.  The name of the animation can be altered in the Action Editor window, at the bottom / top of the pane.
  • http://www.ogre3d.org/forums/viewtopic.php?p=231092&sid=ce193664e1d3d7c4af509e6f4e2718c6 somebody's problems (distortion from blender to Ogre) and how to get around them.
  • It appears that you might need to clear all the errors / warnings when exporting from blender for it to be loaded properly in Ogre.  I created some animations where there were some errors (to do with the number of vertices attached to a bone etc) and the XML seemed okay, but it failed to load up the animation states (which were in the XML).  It produced no errors suggesting this would happen.
  • You can scale bones as well as rotate them, to produce bulging muscles etc.
  • Some more advanced skinning techniques are discussed http://www.ogre3d.org/forums/viewtopic.php?f=3&t=41255&p=284759&hilit=geometry+shader#p284759
  • TagPoints http://www.ogre3d.org/docs/api/html/classOgre_1_1TagPoint.html#_details can be used to attach entities to skeletons e.g. a gun in a character's hand.
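Putting the manual-bone bullets above together as a sketch.  The bone name "Head" is hypothetical; note it uses entity.Skeleton, not the shared skeleton from the mesh.

```csharp
// Get the per-entity skeleton instance (not entity.GetMesh().Skeleton,
// which is the general skeleton shared by all entities using that mesh).
SkeletonInstance skel = entity.Skeleton;
Bone bone = skel.GetBone("Head");    // hypothetical bone name
bone.SetManuallyControlled(true);

// Per the Ogre docs, also disable any AnimationTrack still referencing
// this bone, e.g. anim.DestroyNodeTrack(bone.Handle);

// Bone inherits from Node, so it moves like any node; the animation
// routines will no longer reset its position.
bone.Rotate(new Vector3(0, 1, 0), new Degree(20));
```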

Blender Animation

TODO: move into blender section? but actually it's easier here and there's arguments for merging it more here.

Skeletal Animation

Best way to start is in the Blender Manual, a tutorial on creating a ginger bread man and then animating it. "Your First Animation in 30 plus 30 Minutes" http://wiki.blender.org/index.php/Doc:Manual/Your_First_Animation/1.A_static_Gingerbread_Man

  • In pose mode, Armature tab, Axes button turns on the local axes.
  • Armature tab -  you can change the appearance of bones in the editor.
  • Make sure you have set the vertices to the right bones after creating the bones and possibly automatically associating the vertices, which tends to be dodgy.  In object mode, select the mesh, go into edit mode, then in Links and Materials tab, the Vertex Groups should be edited.  Importantly, make sure that vertices aren't associated with too many bones as OGRE has a maximum of 4 bones per vertex.
  • If you subdivide the mesh using the method in the tutorial, by default it doesn't seem to export the output (but I may have missed pressing a button).
  • I can't find out how to explicitly set the linking between a vertex and a particular bone / joint / ...  But if you create vertex groups, it seems to do this assignment automatically.   The parts between joints should be grouped, right up to the joint to avoid pinching.
  • The skeleton exported doesn't seem to have its bones in the same place as on screen (checked via the XML Ogre output putting those points into the Ogre scene) — or I guess the mesh is wrong... same-ish!
  • You can paint on vertex-group weights by changing the mode from edit/object etc to weight paint (but I advise making the bones invisible first, or at least altering their appearance so that you can see the skin well).  There is also a useful icon / press F for "Painting Mask" which makes things a bit clearer. http://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro/Advanced_Tutorials/Advanced_Animation/Guided_tour/Mesh/vg#What_Are_Vertex_Groups.3F
  • To assign vertices to bones, (again from http://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro/Advanced_Tutorials/Advanced_Animation/Guided_tour/Mesh/vg#What_Are_Vertex_Groups.3F) select the armature, go into pose mode, select the mesh (right click on it), go into weightpaint mode, select the bone you want to paint to and start painting.  Blender will automatically create the groups for you based on the bone name when you paint.  — the group having the same name as the bone is how the bone-group associations are made, and you can do this manually.
  • When in weight paint, red =1.0, dark blue=0, goes via green and yellow. 
  • To create an animation, get the "Action Editor" window up, distort some bones (in 3D view / pose mode), then key menu -> insert key / press I.  And it will create a key at the point on the Action Editor's timeline that is selected.  Make sure you select the time before modifying the pose.
  • NB that the scale of the animation is in frames, and typically there are 25 frames per second.
  • http://www.blender.org/development/release-logs/blender-246/skinning/ - another way to weight vertices to bones, using bone heat - I expect it's more automatic / much better for a first effort.

Animation

Selecting Objects & Scene Querying

  • C++ intermediate tutorial 4 has some extra material on things like PlaneBoundedVolumeListSceneQuery rather than just the plain RaySceneQuery.
  • RaySceneQuery behaviour is SceneManager specific, but generally it just returns anything that has a bounding box (probably AABB) that is intersected by the ray.  You can add query masks, and one of them (on the default scene manager) allows you to query static geometry as well.  This returns the static geometry object, but not what was hit within it.

Bounding Boxes

Glossary

Glossary for primarily MOGRE / OGRE terminology

  • Entity - something that can be rendered on screen, e.g. a mesh, but not a camera / light / particle...
  • SceneManager
  • SceneNode
  • Mipmap - "Multum in parvo" map - A collection of bitmap images that are scaled versions of the original.  Used for LOD effects.
  • Frustum (NB not frustrum) - a solid, typically a cone or pyramid, which has its top and bottom cut off by two parallel planes.  This is used for field of view often, the 3D representation of a camera being the plane of the screen, expanding as part of a pyramid, until the base of the pyramid / the limit of view.  I think it can also be used for effects as well (e.g. you might only render fog within a frustum).
  • RIP Map / anisotropic filtering - Rectim In Parvo map - similar to mipmaps, but allows for changes in resolution that are different on different axes e.g. 256x256 -> 128x64 rather than 128x128 & 64x64 with mipmapping.  This leads to better quality at lower angles.
  • Billboard = sprite = an image always pointing at the camera.  I think they are always square.
  • AAB - Axis Aligned Box - a box with its edges parallel with the axes
  • AABB - Axis Aligned Bounding Box - an AAB around an entity, typically used for physics etc as an optimisation.
  • Phong shading - combination of ambient, diffuse and specular shading along with continuously varying normals across the surface.  These normals are interpolated between the vertices (so balls look smooth).
  • Modulate - WRT colours, it means multiply e.g. modulate (0.5,0.5,0.5) with (0.5,1,0) -> (0.25,0.5,0)

Removing Objects & Memory Management / Leaks

I assume that the garbage collector doesn't collect unused Ogre objects, as they are managed by Ogre.

  • To remove static geometry (which I assume is a model for other objects), a sequence that seems to work (though it may not be optimal): sg.Dispose(); sceneManager.DestroyStaticGeometry(sg);
    • If you don't dispose first, an exception is raised.
    • You can use sg.Destroy() instead of sg.Dispose().
    • You still have to destroy it via the SceneManager as well - sg.Destroy() doesn't remove it from the SceneManager.
  • http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=11263 - has some discussion about it.
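The removal sequence above can be sketched as follows (untested; "sceneManager" and "ent" are assumed to be an existing SceneManager and Entity):

```csharp
// Build some static geometry, then tear it down in the order the notes describe.
StaticGeometry sg = sceneManager.CreateStaticGeometry("MyBatch");
sg.AddEntity(ent, new Vector3(0, 0, 0));
sg.Build();
// ... render for a while ...
sg.Dispose();                                  // dispose first, or an exception is raised
sceneManager.DestroyStaticGeometry("MyBatch"); // disposing doesn't detach it from the SceneManager
```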

Speed Optimisation

  • Lighting and shadows make a big difference.  With no special settings (i.e. no shadows etc), 1600 default-material (white) cubes over an image-textured floor plane render (almost) full screen at 1920x1200 at 45fps.  With some lighting and shadows added (but still default material etc), 100 cubes render at 13fps.  The same scene with the same lighting but shadows turned off gets 39fps (and you can actually differentiate the cubes!).  With an animated transparent texture (all the same texture): 1600 cubes @ 41fps.  -> SUMMARY: you should only cast shadows from objects that really need them, or you should fake them. — Actually texture shadows are pretty fast -> use them.
  • There are automatic LOD features in the mesh tools (but you probably have to trigger them manually).
  • Direct3D seems to be noticeably faster on my graphics card (Nvidia Quadro 170??)
  • Batching: Static & World Geometry - to do with batching up meshes together.  See StaticGeometry class.  It should allow much better frame rates, but at the cost of not allowing you to easily move elements within your batch, having to display the whole batch on screen at once (and probably at the same LOD) and it prefers only a few materials.  A sophisticated system might do something like render the scene as static geometry, then when you try to move something, convert into a low LOD version which can be moved (possibly leaving the old stuff in the scene until you've stopped moving), then re-create the static geometry. - this is similar to moving windows on older computers.  It might not be sensible to put the whole of a long aisle in the same batch due to LOD issues.
  • Different SceneManagers.
  • Use vertex shader to do skeletal animation on the GPU (http://www.ogre3d.org/docs/manual/manual_18.html#SEC105).
  • Double precision floating point on GPU reduces GFLOPS by factor of ~5 (latest ATI)
  • Use fewer meshes (and fewer sub-meshes) to improve batching.
  • Use NVidia Perfkit to debug GPU bottlenecks
    • Requires special instrumented driver (which you tend to install with the kit)
    • Turn off anti-aliasing, anisotropic filtering, v-sync
    • Use release D3D (doesn't work with OpenGL)
    • Enable during Ogre setup with params["useNVPerfHUD"] = "true"; - this is also available in the default configuration dialog
    • Drag the app into PerfHUD, right click and do "send to" -> PerfHUD, or run from within Visual Studio as an external tool
    • http://my.opera.com/adelamro/blog/2008/06/30/using-nvidia-perfhud-from-within-visual-studio - how to set it up with visual studio.
    • GettingStarted manual is useful
    • Key point when starting to debug is: Is it CPU or GPU limited... generally observable by distance between frame time and driver time (graph, main screen)
  • Use CPU / C# profiler - http://community.sharpdevelop.net/blogs/siegfried_pammer/archive/2009/04/04/introducing-a-new-tool-in-sharpdevelop-the-profiler.aspx for sharpdevelop (3.1).  I don't think that VisualStudio has one built in.
  • Use mipmaps if texture bandwidth problems
  • "PracticalPerformanceAnalysis" - pdf presentation from NVidia.  Has lots of information at the end.
  • Texture atlases - if you're using small textures, group them together into a big texture and only use a small part of it per object.  Larger textures are more efficient per pixel than small ones.
  • Per object calculations should generally be done on CPU (nv)
  • 16bit rather than 32bit colours gives ~30%? speed increase at high frame rates.
  • http://www.ogre3d.org/wiki/index.php/Optimisation_checklist
  • Remove nodes from scene graph, rather than just setting not visible as some calculations are still done on non-visible nodes.
  • SetRenderingDistance() - stops the rendering of objects too far away.
  • It seems like there is a primary and a secondary monitor for rendering on - it's a lot faster on my big monitor than my small monitor when you drag the window from big to small.
  • There might be some stuff to do with the RenderSystem / root.RenderOneFrame etc.  You can use more fine grained control.  It might be more important for not using all the user's resources more than anything else.
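Two of the cheaper optimisations above as a hedged sketch ("ent", "node" and "sceneManager" are assumed to already exist; the property names follow the Ogre API):

```csharp
// SetRenderingDistance(): don't draw this entity beyond 500 world units.
ent.RenderingDistance = 500.0f;
// Removing a node from the scene graph beats merely hiding it, since
// some per-frame calculations are still done on non-visible nodes:
sceneManager.RootSceneNode.RemoveChild(node);
```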

Rendering Lots of Moving Objects

If you have a large number of scene items (instances of meshes), then the batch count goes up and the program becomes CPU limited.  This has become an issue.  NB some related material is under the Shaders section.

  • Use the GPU to move many objects - vertex buffer?
  • http://developer.amd.com/samples/demos/pages/froblins.aspx - the paper (downloadable PDF, 50 pages) talks of animating, culling etc thousands of little goblins, as well as a few things like tessellation.  They try to move absolutely everything onto the GPU, which clearly wouldn't be practical in our case (but you might be able to get quite close to it).  It also has lots of stuff on lighting, shadows etc.
  • Have to consider how you are going to interact with objects that are being rendered on the GPU -> will potentially need to do hardware queries to determine what's what in the scene.
  • There is a bit of evidence (untested ATM) that OpenGL might be a bit faster than DirectX 9 for rendering a lot of objects due to its ability to perform marshalling before sending to the kernel.  DirectX 10 should have removed this advantage.
  • http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=10509&p=61510&hilit=large+texture#p61510 - was trying to render 10,000-1,000,000 people.  There was another Mogre post, but no replies.

Instancing

Instancing puts several objects together in the same batch and then manipulates them individually in a vertex program (i.e. doing what was planned above).

New Graphics Card

Changed from an NVidia Quadro FX1700 to an ATI Radeon HD5850, which in terms of computational power is at least an order of magnitude faster.

Shaders, GPGPU, cg etc

Shader Models

  • http://en.wikipedia.org/wiki/Shader_model_4.0 gives a good overview of the different shader models.
  • Shader model 3.0 was released with DirectX 9.0c in August 2004, while shader model 4.0 was released with DirectX 10 in 2006 (but of course DirectX 10 graphics cards didn't necessarily become instantly standard, and laptops will have changed much more slowly).  Therefore, it is not unreasonable to develop for shader model 4.0 at the moment, so long as you don't need to support very old hardware, cheap laptops etc.

Anti-aliasing

Shadows

Shadows can be very costly to render, hence they require some consideration.  The manual goes through them in detail http://www.ogre3d.org/docs/manual/manual_70.html#SEC304. NB one of the OGRE demos (shadow) allows you to play around with shadows to see what the speed trade off is etc.

  • Shadows need to be turned on for the scene: in Mogre, sceneMgr.ShadowTechnique = ShadowTechnique.SHADOWTYPE_STENCIL_ADDITIVE; (mSceneMgr->setShadowTechnique(SHADOWTYPE_STENCIL_ADDITIVE); in C++).  They can then be turned off for individual entities (ent.CastShadows = false;) or lights (light.CastShadows = false;).  You can also stop materials etc from receiving shadows.
  • Shadows are either Additive (lightens anything not in the shadow) or Modulative (darkens the shadow to a fixed level)
  • Shadows are either Stencil (extrude the outline of a mesh) or Texture based (convert the outline to a texture and project that).
  • Texture based shadows also come in an Integrated form, which requires you to do a lot of work to get the shadows to be used, but gives you a lot of control (and you might be able to do the odd speed optimisation).
  • Modulative shadows are faster, although less accurate (WRT the darkness of the shadow) than Additive.
  • Stencil shadows are better on older hardware, but newer hardware prefers Texture based shadows, which are hardware accelerated.  Also, texture based shadows work better with complex geometries and create softer / more natural edges.
  • SceneManager::setShadowFarDistance - to limit the number of shadows -> faster.
  • Use LOD to simplify meshes.
  • Avoid very long (dawn / dusk) shadows.
  • The fastest type of shadow (in general) seems to be texture_modulative.
  • 1.4 used LOD camera, 1.6 uses main camera for LOD -> quality should be better.  1.6 has made some other changes for texture shadows.
  • http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=9794&sid=31615683a29ee3f9277587c15f6b0b0d - post on texture shadows - they had problems as well!  Simple summary is they do work in Mogre.
  • http://www.ogre3d.org/wiki/index.php/Depth_Shadow_Mapping - how to do depth shadow mapping, which allows for a few corrections of stuff like self-shadowing, objects receiving texture shadows when in front of the object producing the shadow relative to the light source etc.
  • http://www.ogre3d.org/docs/api/html/classOgre_1_1LiSPSMShadowCameraSetup.html - a shadowing scheme which gets around some of the issues of low resolution when you're near to the shadow at the expense of poorer rendering at distance AFAIK.
  • Shadow mapping / lightmapping involves pre-calculation of the shadows when you have fixed lights.  Don't know about Ogre support.
  • To get texture shadows working: make sure the lights are spot lights & that sceneManager.ShadowFarDistance is great enough -> it should all work.
  • http://msdn.microsoft.com/en-us/library/ee416427%28VS.85%29.aspx - shadow volume extrusion with DirectX 10 shaders.
  • Even Day Of Defeat : Source has issues with shadows going through walls!
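Pulling the notes above together, a minimal texture-shadow setup might look like this (untested sketch; "sceneMgr", "light" and "ent" are assumed to exist, and the values are illustrative):

```csharp
// texture_modulative was the fastest type in the tests above
sceneMgr.ShadowTechnique = ShadowTechnique.SHADOWTYPE_TEXTURE_MODULATIVE;
sceneMgr.ShadowFarDistance = 200.0f;          // limit shadow range -> faster
light.Type = Light.LightTypes.LT_SPOTLIGHT;   // texture shadows needed spot lights here
ent.CastShadows = true;                       // opt individual entities in or out
```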

Camera & Viewports

Projection

  • You can project images / decals into the screen using a frustum.  http://www.ogre3d.org/wiki/index.php/Intermediate_Tutorial_6  You can choose to do this orthographically or with perspective.  This could be useful for guiding alignments etc e.g. project a grid, or project a diagram that needs to be traced out.

Level Of Detail (LOD)

  • MIP maps
  • RIP maps / anisotropic filtering - more graphics memory (and a bit of computation) intensive than MIP maps, but provide sharper images when looking along the face of a texture.

Screenshots and Videos

How to make screenshots and videos for distribution:

  • renderWindow.WriteContentsToTimestampedFile(prefixString, ".png"); (and another non-timestamped version) — it's way too slow to do a video from.
  • PNG works better than JPEG as text is clearer.
  • How to make screenshots to file (easy), I expect there is an extension to make video and as a last resort could just do JPEG to video.  http://www.ogre3d.org/phpBB2addons/viewtopic.php?t=8224&sid=a717935d82472cbdcd5cd7cc135be92c
  • How to make very high resolution screen shots http://www.ogre3d.org/wiki/index.php/High_resolution_screenshots e.g.  for printing (not for regular videos).
  • http://www.ogre3d.org/phpBB2addons/viewtopic.php?t=8304 starts to talk about it.  It mentions FRAPS, http://www.fraps.com/ which is an external program that would do it (although potentially at the expense of some flexibility about what you show on screen Vs what's in the video).  Demo is free, $37 to buy -> wouldn't be a problem.
  • http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=9593&sid=257e8590bb865db7f7c56f54c6032b4e - has stuff on capturing video, as I found screenshots are very slow.  There is a suggestion of another custom method.  Overall, the implementer seems to fail to make a good system (somewhere appears to mess up with multi-threading).  The method, as far as I can tell, tries to take the bitmap that is sent to GDI copy it, compress it and put it in a video file.  It doesn't take it directly from the GPU, or do the compression on the GPU.  Code is on the second page, although there was talk about it being put on the wiki.
    • It mentions http://www.outerra.com/video/index.html - has a GPU based compression algorithm (not implemented in Mogre), which apparently barely affects the frame rate (better than FRAPS) and can be re-compressed later.  Full HD it uses about 59MB/s - which would probably require a fast hard drive to keep up with (i.e. a standard SATA drive would struggle).  Compressed textures are moved asynchronously.  Someone might post a Mogre implementation eventually.
  • WP (GPGPU): " In fact, the programmer can substitute a write only texture for output instead of the framebuffer. This is accomplished either through Render-To-Texture (RTT), Render-To-Backbuffer-Copy-To-Texture(RTBCTT), or the more recent stream-out." - i.e. you need to get it directly from the GPU which may require special shaders or something.
  • http://www.ogre3d.org/wiki/index.php/Intermediate_Tutorial_7 - on render to texture, includes some stuff on "write texture to file"
  • renderWindow.WriteContentsToFile(fileName); - simple & comes with Mogre! — it's in the ExampleApplication that comes with Mogre.
  • http://taksi.sourceforge.net/ - a free fraps like program.  Some say it's better!  --My brief attempt gave rubbish results / it doesn't really work.  It hasn't been in development for a long time.
  • http://www.codeproject.com/KB/audio-video/avifilewrapper.aspx - is an easy-to-use C# wrapper around an AVI library for compression and decompression.
  • http://www.ogre3d.org/wiki/index.php/Saving_dynamic_textures - I'm guessing this is essentially what's in tutorial 7, but a slightly long way round - note the comment at the bottom.
  • http://www.ogre3d.org/wiki/index.php/Camcorder - it's to do with moving cameras for recording demos - have now done my own implementation (not a copy, but no doubt similar).
  • There's a python demo which does reflection through render-to-texture.
  • http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=11761&sid=21c61fffcd4454742c8dd075bd13cbd4 which references http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=5073 details creating bitmaps of RTT Targets in memory, and creating a video from them (I think) asynchronously so that you maintain a good frame rate.  They have managed to get a number of speed increases through p/invoke etc.
  • To limit the frame rate, in the Go() function (which I assume is from the example framework or something), insert a Thread.Sleep() in the while loop that calls root.RenderOneFrame().  You need to correct the sleep each frame towards a constant frame rate, i.e. remember how much you slept last time: eventually float sleepTime = averageSleepTime + targetTimePerFrame - averageFrameTime; where frameTime is the time taken to render one frame.  A history of just 5 frames gave me good enough results (so far).
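The frame limiter described above can be sketched like this (the averaging is left as a stub, and all names other than root.RenderOneFrame() are illustrative):

```csharp
const float targetTimePerFrame = 1000f / 30f;        // ms per frame for 30fps
float averageFrameTime = 0f, averageSleepTime = 0f;  // rolling averages over ~5 frames
while (root.RenderOneFrame())
{
    // correct towards the target using how long we actually slept last time
    float sleepTime = averageSleepTime + targetTimePerFrame - averageFrameTime;
    if (sleepTime > 0f)
        System.Threading.Thread.Sleep((int)sleepTime);
    // ... update averageFrameTime / averageSleepTime from the last 5 frames ...
}
```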

Using An External Program

Got Camtasia, but that turned out not to be a good solution for recording 3D (nice UI and all, but...), and after playing with a lot of compression settings, and even trying installing other codecs... I gave up and went to Fraps.  Fraps is much better, and can record smooth video in real time.

  • Fraps output files are very large (~a gigabyte a minute), so you need to recompress them.
  • Windows Media Encoder with 25fps 1280x1024 5Mbps is too grainy.  If you do "quality vbr" mode, and high quality, then it makes files ~1/3 the size of Fraps, and very similar quality (but ultimately it's still too large).  95 Quality VBR at 1280x1024 20fps creates a 300MB file for a 3min video and the quality is good; 80 quality leads to 100MB and slightly grainy. 
  • Microsoft Expression Encoder is the update to the Windows Media Encoder, but it crashed when trying to open a Fraps file.
  • DivX seems to have fixed profiles (e.g. 720p HD etc), so isn't so good for arbitrary videos.
  • Fraps can handle 1280x1024 at 30fps (8-core Xeon), and can probably do somewhat more.

Resource Management

The ResourceManager generally deals with the loading and unloading of resources - items like textures, meshes etc.  Control of the resource manager is therefore relevant to smooth loading of items (e.g. whether you pre-load textures) and to memory management.  Basic resource management is done automatically - e.g. if you add an entity, it will load the resources as appropriate - but you can take further manual control.

  • A resource has several states (Unknown / Declared / Created / Loaded), which means you can reference resources without having to use significant memory.
  • You can use resourceManager.ReloadAll() to re-load resources (e.g. if you have changed a texture while the program is running, for instance in an editor).  Alternatively, resource.Reload() will reload just the one resource.
  • http://www.ogre3d.org/wiki/index.php/Advanced_Tutorial_1 - is on resource management.  The Ogre book also has some material on resource management.
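As a sketch of the reload idea above, using TextureManager as one concrete ResourceManager (the texture name is made up):

```csharp
// Reload every (reloadable) texture, e.g. after editing files externally:
TextureManager.Singleton.ReloadAll(true);   // true = reloadable resources only
// Alternatively, fetch an individual resource by name and call Reload() on
// just that one, as described in the notes above.
```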

Meshes, Materials & Colours

  • Material scripts: http://www.ogre3d.org/docs/manual/manual_14.html#SEC23
  • Material OGRE wiki: http://www.ogre3d.org/wiki/index.php/Materials or in the manual http://www.ogre3d.org/docs/manual/manual_14.html#SEC23
  • Materials can be inherited.
  • Need to be careful about global uniqueness of names.
  • OgreXMLConverter.exe converts to and from the *.mesh.xml and *.mesh.  The latter format is a binary OGRE specific format.  This allows you to see how some of the pre-built meshes were made, and what properties are set.  There is a DTD file for the XML format somewhere in the OGRE SDK.
  • The mesh.xml format looks fairly simple for the most part, with just lists of vertices and faces... but scratch the surface and it's not so simple.
  • The material is set in the mesh.xml file for every sub-mesh... in the <submesh ..>  tag.  Seemingly you don't have any extension and I assume the material could be an image, script etc.  However, my experimentation with it hasn't succeeded as yet. — giving up for the moment! — seems like MOGRE can't correctly find the materials... see ogre.log - search for "Can't"
  • You can make something wireframe through the material by setting polygon_mode <solid|wireframe|points>.  Interestingly, if you do it with an image as well, it will show that image on the wires.  You could use this to highlight the edges a bit (but it's not exactly the same as highlighting borders... there's probably a way of doing that somewhere else).  An alternative is mCamera->setDetailLevel(SDL_WIREFRAME); which should render the whole scene in wireframe.
  • You can also render the whole scene as solid using cam.SetDetailLevel(SDL_SOLID);
  • http://www.ogre3d.org/docs/manual/manual_16.html#SEC51 - has lots of nice details on how to make effects.
  • You can create (programmatically) objects in 3D based on lines etc via ManualObject.  Create one, set its materials.... http://www.ogre3d.org/wiki/index.php/ManualObject .   http://www.ogre3d.org/wiki/index.php/Line3D has a basic example of its use.
  • Animated textures are easy to do, in the material script, in place of texture, use anim_texture followed by a series of images and the total duration, or you can do shorthand e.g. anim_texture chevron.png 5 2 which takes images chevron_0.png... chevron_4.png and animates over them for 2 seconds.  You also have the option of manually advancing the animation.  There is also scroll_anim & rotate_anim which work on the normal texture (e.g. scrolling it across the screen), and wave_anim allows more advanced definition of the speed of animation (passing the time through a wave function e.g. sin(t) so that it isn't constant speed... just like a wave movement!)
  • http://artis.imag.fr/Membres/Xavier.Decoret/resources/ogre/tutorial4.html - very good overview of how things work in detail.  But it may be a bit of a dated method (doesn't seem to use ManualObject upon first looking at it).
  • u,v coordinates on a mesh do not need to be continuous -> you can do things like use one image to colour a very complex mesh.  You probably want to subdivide into submeshes if you are going to change the material of part of it (and don't want to have to manage the material of the whole).
  • If you are not using ManualObject to create a mesh, you can manually (at least in C++ Ogre) subclass SimpleRenderable http://www.ogre3d.org/wiki/index.php/GeneratingAMesh - I think there is (a possibly unsafe) C# version of this.
  • Normals are defined at the vertices and are used for specular and diffuse shading.  NB if you have a plane, the specular and diffuse values are calculated from the normals at the corners.  Interestingly, this means that if you define a cube with just 8 vertices (and normals), it will (I assume) be shaded as if it were a sphere, because the renderer interpolates the normals between those vertices.  (It may be that instead of interpolating the normals it just interpolates the colour - a similar-ish effect - and I think this is what actually happens.)  Therefore, you should probably define 4 vertices and 4 normals for each face.  The flip side is that you can make something look like a sphere without millions of vertices.  The issue with doing the normals perfectly for each surface of a cube is that it then looks very angular / unnatural; it might be better to set them slightly off to give a bit of shimmer to the surface.  This does work, and it also prevents you from getting very strong specular reflection.
  • If you have a large mesh without many vertices e.g. a simple tetrahedron, which is moving in the scene (or the lights are moving around it), then strong specular values will give a jolty change in the appearance of the face, and similarly too strong diffuse values will have a similar, although less dramatic effect.  In this situation it is probably best to turn off specular shading.
  • Rumour has (http://www.ogre3d.org/forums/viewtopic.php?f=4&t=46523) it that you can improve your frame rate dramatically if you turn off specular highlights (which can be done across the scene probably in the ogre setup... look for render state or in the individual materials by setting all the specular values, including power, to 0).
  • NB most things like ambient or diffuse colour in material scripts can optionally take the alpha argument i.e. R G B A
  • U,V coordinates for textures - it seems like they're from top left.
  • Render to Texture http://www.ogre3d.org/wiki/index.php/Intermediate_Tutorial_7 - is used sometimes for fullscreen stuff like motion blur i.e. post-processing the output in shaders.  You can use the texture as a normal texture, so it can be filtered through regular material scripts.
  • Transparency - http://www.ogre3d.org/wiki/index.php/CommonMistakes#Transparency - tells you why it might not be rendering correctly - basically you need depth_write off and scene_blend probably.  It's to do with occlusion.
  • SetAutoTracking(vector) - allows you to make something always look at another node.
  • It seems like you can set the border colour of a texture - not sure what this means in practice yet.
  • Alpha splatting - mixing two textures in variable amounts with a separate texture used to define by how much.  It's useful for making things like a muddy field by mixing mud and grass in a non-uniform way.
  • There's a material editor, which isn't quite there yet, called Ogre Material Editor.  One forum run on it is http://www.ogre3d.org/forums/viewtopic.php?f=11&t=48774 .  It's being resurrected, but it's definitely not that useful (or a bit quirky, whichever) at the moment.... it doesn't seem to allow you to load pre-existing materials and preview them (it does have a text editor for them though... with weird highlighting)
  • Meshes like to have materials set if they are defined in XML (submesh tag).  If it is wrong, or non-existent, then it will fill up your log files with junk saying that it can't load the material.
  • Assigning materials to submeshes http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=11987&sid=eedec084bff8a32154af48910f4c8160
  • Given ManualObject uses ushort references to vertices, it would appear that you are limited to 65,536 vertices per mesh.  After that, you would have to sub-divide the mesh.  This doesn't appear to be an Ogre limitation, but a general limitation on the number of vertices that can be drawn per pass.  It's not impossible that newer cards increase this. --Correction: the comment on manualObject.Triangle() suggests some cards have 32-bit indices, and implies that Ogre supports them -> maybe an issue with the wrapper?
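A minimal material script tying together a few of the notes above (the anim_texture shorthand, plus scene_blend / depth_write for correct transparency); the material and image names are made up:

```
material Demo/AnimatedChevron
{
    technique
    {
        pass
        {
            // transparency generally needs blending and depth_write off
            scene_blend alpha_blend
            depth_write off
            texture_unit
            {
                // shorthand form: animates over chevron_0.png .. chevron_4.png
                // (5 frames) with a total cycle time of 2 seconds
                anim_texture chevron.png 5 2
            }
        }
    }
}
```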

Material Editors

  • http://veniogames.com/downloads/Yaose - Yaose (Yet Another Ogre Script Editor) has cunning syntax highlighting, auto-completion, colour pickers etc and is simple to use.

Image / Textures

Tiling

You can't show very large images, so you need to convert them into a number of smaller tiles (probably 4096x4096 - NB an 8192x8192 image can be loaded on its own, but doesn't work as tiles... it throws a corrupt memory exception).  To do this, use ImageMagick, a command line utility.  For example:

  • U:\ImageSplitting>convert input_file_name.png +gravity -crop 8192x8192 -extent 8192x8192 output_file_%d.png
  • The %d bit gives the output number of each file, working sequentially across, so
    a1, b1
    c1, d1
    e1, f1
    becomes a1, b1, c1, d1, e1, f1.
  • OR: convert Level_B.png +gravity -crop 8192x8192 -extent 8192x8192 -format "Level_B_%X_%Y.png"

This one's quite good:

  • C:\Tiling>convert LargeImage.png +gravity -crop 4096x4096 -extent 4096x4096 -transparent "#ffffff" -set filename:style "%X_%Y" SmallImage_%[filename:style].png
  • It sets the white background colour to transparent and adds coordinates to the file name.  NB the coordinates are wrong because of the ordering of the arguments (look at the one below for fixed ordering).  The best one:
  • C:\Tiling>convert LargeImage.png +gravity -crop 4096x4096 -background "rgba(255,255,255,0)" -set filename:style "%[fx:page.y/4096+1]" -extent 4096x4096 SmallImage_%[filename:style].png
  • It seems to work best / only when you only have the 4Ailse.png in the folder that you are running it in.
  • This gives an index of the image from the top left hand corner and makes the background white where the image has been enlarged.   
  • http://www.imagemagick.org/Usage/crop/
  • http://www.imagemagick.org/script/escape.php - has details about formatting the output file names
  • ImageMagick seems to execute its command line options in the order in which they appear.
  • NB (I think) It doesn't like spaces in the folder - even if you are not specifying it in the command.
  • Creating smaller images seems to use up more memory than bigger ones (not really tested though).
  • NB Windows seems to have a built-in program also called convert -> you might need to explicitly reference ImageMagick's convert utility (e.g. by full path).
  • If you're dealing with very large images, you need a large amount of RAM (e.g. 4000x28000 needs >16GB) or it will be very slow and destroy any responsiveness of your computer as it uses your hard disk for RAM. 

Special Effects

Showing Edges / Making Meshes Easier To View

Texture Atlases

This is where you create a large texture that is composed of smaller blocks which are used by many different meshes.  It is more efficient than several small textures.  It uses the vertex and pixel shaders to offset the texture coordinates appropriately.

Displacement Mapping (Bump / Normal /...)

The basic idea here is to increase the apparent complexity of a mesh by distorting its surface according to a texture, e.g. the lighter the texture, the greater the displacement of the mesh surface along its normal.  This is also called displacement mapping.  It's not necessarily the case that the mesh is actually made more complex; sometimes the normal is just varied within a face to give the appearance of roughness.  I will try to refer to the general case as displacement mapping.

  • http://nifelheim.dyndns.org/~cocidius/normalmap/ - a Gimp plugin to help you create normal maps, which gives you a 3D view.  NB the bump map tool (distributed with Gimp) isn't the right one!  Find it under filters->map.
  • All the techniques require vertex and / or fragment programs to run.  Therefore, they aren't as such built in, and there's no particular benefit to using a more conventional scheme like bump mapping over parallax occlusion mapping if you think POM is more useful.
  • These are not just B/W maps as the different colour channels can be used for x,y,z normal directions (either relative to the base normal of the face, or the absolute normal value).
  • You can create displacement maps with various tools (blender?) which will compare two meshes and store the difference as a displacement map.  The cost is that the texture can be quite large compared to if you could repeat a smaller texture.
  • Memory is a concern with normal mapping.
  • http://developer.nvidia.com/object/bump_map_compression.html - a paper on compressing displacement maps to reduce memory requirements.  Apparently regular texture compression schemes aren't so efficient.
  • NB that a scene query on a bump probably won't return anything.

Bump Mapping

Normal Mapping

  • Works by actually defining the surface normal per texel in the map (the RGB channels store the normal direction), rather than just a height value.

Parallax Mapping (AKA Offset Mapping, Virtual Displacement Mapping)

Similar to normal mapping etc, but it increases the level of displacement according to the viewing angle, so that the surface is displaced more when you are looking along it than along its normal.

Parallax Occlusion Mapping

Parallax mapping with occlusion!

Figure / Person Modelling

Non-Manual Object Mesh Creation

Manual Object

  • If you want to create several instances of a manual object, you need to convert it to a Mesh using ManualObject.ConvertToMesh(); for a single instance you can just attach the ManualObject directly.
  • You can do points, lines, surfaces.
  • A vertex buffer AFAIK creates lines, adding the index buffer to the vertex buffer allows you to fill it in.  In the index buffer, the order is important, putting the vertices in anticlockwise order as you look at it means that the surface will face you. — you control the rendering in part by RenderOperation (parameter when you begin creating the mesh).
  • Creating manual objects is fairly procedural.  You create a ManualObject instance as manualObject = sceneManager.CreateManualObject("myobj"); and then do mo.Begin(...), mo.Position(..) for the vertices, mo.Triangle(...) to create triangles (makes the index buffer)......  mo.End().  Then you can do mo.ConvertToMesh();
  • It seems to automatically create sub-meshes every time you do ManualObject.Begin().  This doesn't name the submesh however (but there might be a way).
  • NB to get the textures to work correctly, you need to specify the u,v coordinates by calling mo.TextureCoord(...) just after doing mo.Position(..);
  • http://www.ogre3d.org/wiki/index.php/TerrainMeshDecal (bottom bit) has stuff on how to modify a ManualObject after you have created it.  There aren't functions for translating a block etc by default.
  • mo.Normal() — call it while defining the vertices (positions and texture coordinates).
  • mo.Quad() - I assume gets around having to make two triangles.
  • mo.Triangle() and mo.Quad() are just shortcuts for calling mo.Index() several times.
  • If you use mo.Index(), the OperationType defined in begin makes a big difference.  It defines whether you're drawing points, lines, triangles as 3 vertices, triangles as 3 vertices and then 1 vertex for each future triangle etc.
  • mo.Begin(materialName, RenderOperation.OperationTypes.OT_TRIANGLE_LIST);
  • The size of the normal affects the intensity of the colour produced on the triangle.  I.e. you should always normalise your normals — this is of course in the
  • Shadows: you need to have indices to get them to work, i.e. use the index / triangle / quad methods.  Importantly (at least for stencil shadows) you need to convert to a mesh to get it to work - if you have problems, see https://www.ogre3d.org/forums/viewtopic.php?f=2&t=48985 .  If you want to turn the shadows on / off, it seems that mo.CastShadows doesn't work; use ent.CastShadows instead (but there may be a way to get it to work!).
  • http://www.ogre3d.org/wiki/index.php/ManualSphereMeshes - includes details on Normals and TextureCoordinates that are necessary for correct texturing and lighting effects.
  • http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=11753&sid=c0ed0c1b10d0af8ee99c5a0ce12ac9cd - you can create massive point clouds as well - the example has a million points!
  • http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=11747&p=66742#p66742http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=11763 - Forum threads (largely contributed by me :-) helping someone to learn the ropes of ManualObject / how to create a mesh.  It includes some untested sample code.  Also includes some stuff on how to get mesh information and change it in program.
  • http://www.ogre3d.org/docs/api/html/classOgre_1_1ManualObject.html#9448c472c45235366231ec1a6e4e5b73
    • You can define tangents for some lighting effects
    • manualObject.Position is key, and must be called before creating a normal etc.  It's not just that the buffer indices must match up or something between the vertex and normals (AFAIK)
    • mo.TextureCoord(u,v,w) - can be called multiple times per vertex to add several sets of texture coordinates to the same vertex.  NB you can call it multiple times with just u,v, i.e. w appears just to be for 3D textures, not for multiple indexes.
    • mo.Colour
    • mo.Triangle(uint32 i1....) - Currently the C# binding has turned this into a type that only supports ~65,000 vertices, not 4,294,967,296 - filed a bug report.
    • mo.Index() is just a lower level version of mo.Triangle - calling it 3 times is equivalent.
    • mo.Quad() creates two triangles rather than a quad per-se.
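Putting the bullet points above together, here is a minimal sketch of building one textured triangle and converting it to a mesh.  Material and object names are placeholders and this is untested, but the call sequence follows the notes above:

```csharp
// Create and begin the manual object (OT_TRIANGLE_LIST as in the note above).
ManualObject mo = sceneManager.CreateManualObject("myTriangle");
mo.Begin("myMaterial", RenderOperation.OperationTypes.OT_TRIANGLE_LIST);

// Position first, then normal and texture coordinates for that vertex.
mo.Position(0, 0, 0);
mo.Normal(0, 0, 1);        // normalised, or the lighting intensity will be off
mo.TextureCoord(0, 0);

mo.Position(10, 0, 0);
mo.Normal(0, 0, 1);
mo.TextureCoord(1, 0);

mo.Position(0, 10, 0);
mo.Normal(0, 0, 1);
mo.TextureCoord(0, 1);

// Anticlockwise as seen by the viewer -> the surface faces the camera.
mo.Triangle(0, 1, 2);
mo.End();

// Convert to a mesh (needed for stencil shadows / multiple instances),
// then attach an Entity created from it.
MeshPtr mesh = mo.ConvertToMesh("myTriangleMesh");
Entity ent = sceneManager.CreateEntity("myTriangleEnt", "myTriangleMesh");
sceneManager.RootSceneNode.CreateChildSceneNode().AttachObject(ent);
```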

Nodes

  • Orientation and scale are by default inherited, but this is not mandatory.  Removing the inheritance of rotation may make life easier with some skeletal deformations.
  • Node.Scale Vs Node.SetScale — don't get them confused!
  • If you have too many moveable objects that are to be rendered (not nodes exactly), it will slow your frame rate -> consider if you can group things together (static geometry, bigger meshes etc).  If they still need to be moveable, you possibly have to consider using the GPU.
  • http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=12153&sid=45da4b87a0d19c16850be70d087b4b21 converting from local to global coords: (self._getDerivedOrientation() * (localPos * self._getDerivedScale())) + self._getDerivedPosition();
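The local-to-global conversion, as it might look in Mogre C#.  This is a sketch (untested); it assumes `node` is a Mogre Node and that the underscore-prefixed derived-transform methods are exposed as in the C++ API.  Note the scale is applied to the local position before the rotation:

```csharp
// Local -> world: scale, then rotate by the derived orientation,
// then translate by the derived position.
Vector3 localPos = new Vector3(1, 0, 0);
Vector3 worldPos = node._getDerivedOrientation()
                   * (localPos * node._getDerivedScale())
                   + node._getDerivedPosition();
```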

Rotations / Quaternions

  • Wikipedia is useful, if a bit academic.  E.g. http://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation
  • Quaternions are just 4-vectors (w,x,y,z) that are normalised (including w!), such that (x,y,z) defines the rotation axis and the rotation about that axis is 2*acos(w).  So it's not easy (for a normal human) to guess what the rotation is given (w,x,y,z).
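In practice you rarely build the (w,x,y,z) components by hand; Mogre's Quaternion can be constructed from an angle and axis.  A sketch (untested; assumes the Degree-to-Radian implicit conversion is available, as in the C++ API):

```csharp
// 90 degrees about the Y (vertical) axis.
Quaternion q = new Quaternion(new Degree(90), Vector3.UNIT_Y);
node.Orientation = q;

// w = cos(theta/2) and (x,y,z) = sin(theta/2) * axis, so theta = 2*acos(w).
// For 90 degrees about Y: w ~= 0.707, (x,y,z) ~= (0, 0.707, 0).
```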

RenderSystem

A RenderSystem is (AFAIK) just the wrapper around OpenGL / Direct3D -> you probably shouldn't be calling much in here directly; you should be using other scene-manager-type functions instead.  Currently this is a list of interesting things found when going through the API docs:

  • You can set the shading type, with a default of Gouraud - rs.SetShadingType(); options flat, gouraud, phong. — flat actually seems fine generally (probably not if you have many curved objects, but you could trade off with increased polygon count)
  • You can create many render windows, even render windows within render windows.
  • rs.CreateRenderWindow() has lots of associated documentation
  • rs.SetAmbientLight
  • rs.CreateMultiRenderTarget() - renders to multiple textures at once - maybe this could be used for render to texture / storing a video.
  • You can manually set things like the projection matrix and world transform matrix.
  • rs._setTextureUnitSettings() etc - not really sure what it does.
  • rs.SetClipPlanes() / rs.AddClipPlane() -> maybe can do custom occlusion.  There's also a reset method -> maybe reset then add the floor / wall planes each frame. — according to http://www.ogre3d.org/forums/viewtopic.php?f=2&t=38943 it can't be used (at least not simply).  This was verified in another post.

SceneManager

Node Visibility & Scene Managers (detaching vs setting visible = false)

  • http://www.ogre3d.org/forums/viewtopic.php?t=13068&highlight=setvisible (look for sinbad's comment in particular) - if you set a node not visible then, depending on the scene manager, it is likely to still be part of the octree that decides whether or not it should be rendered.  Only at the last stage will it look at the visibility flag and realise that it shouldn't be rendered, so a lot of calculations will still have been performed on that node.  If you detach the node from the scene graph instead, that won't happen, but the scene manager might have to spend a lot of time re-calculating stuff, so it's probably not worth doing if you're about to re-attach it.  Generally, larger and more permanent changes are better candidates for detaching.  — this should really be in the docs.
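The two options, sketched in Mogre C# (untested; assumes `node` hangs directly off the root node):

```csharp
// Option 1: cheap to toggle, but the node is still traversed each frame.
node.SetVisible(false);   // cascades to all attached objects

// Option 2: detach from the scene graph entirely - better for large /
// long-lived changes, but dearer to flip back on.
sceneManager.RootSceneNode.RemoveChild(node);
// ... later, to show it again ...
sceneManager.RootSceneNode.AddChild(node);
```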

Which SceneManager?

Multiple SceneManagers

Occlusion Culling (OC) / Hidden Surface Removal (HSR) / Not rendering something that's behind something else etc

TODO: merge with Camera stuff?  NB Ambient Occlusion is a method of rendering the lighting on an object in a manner that better represents global illumination than simple Phong shading etc.  This won't be dealt with here.

Painter's Algorithm ( & reverse)

  • It's just a crude method of painting the whole scene, starting from the back and drawing over that with anything in front of it.
  • The reverse painter's algorithm is to start from the front and work backwards, not doing any calculations for an area where you have already drawn.
  • While it's a bit dated, and generally not used, sometimes it's used alongside the z-buffer to get around some of the z-fighting issues where objects are too close together to accurately render which one's on top.

Z-fighting & Depth Resolution Issues


  • http://en.wikipedia.org/wiki/Z-fighting (where the pic's from) - is where the z-order resolution is not sufficient -> you get two planes overlapping.
  • Try to make the near and far clip distances of the camera more similar.  In particular you can often alter the near clip distance by an order of magnitude.
  • Use higher-resolution z-buffer floats - not sure how yet.
  • Don't have coplanar planes - always leave a small space between them.  The size of this small space depends on the resolution of your z-buffer and the near and far clip planes (I assume).
  • OpenGL and Direct3D appear to behave differently on this issue, but this might be something to do with the precision of z-buffer they are using by default.  Direct3D was much better about it.
  • Generally (http://en.wikipedia.org/wiki/Depth_buffer ) 16bit / 24bit / 32bit z-buffers are used.  I think in real time graphics, 24bit is the max.
  • Generally you will have better z-buffer resolution for nearer objects than objects in the distance (generally sensible).  There are other algorithms e.g. w-buffers, which try to reduce this effect, but I don't know if they can be run on your GPU.
  • Sometimes the painter's algorithm is used in conjunction with the z-buffer to improve the rendering of problem areas.  not sure about Ogre and this.
  • See stuff on camera - there's hints that the far clip distance isn't important / followed.
  • If you have a very large scene, you can use multiple cameras and render to texture.  You render the far scene to a texture and create a background, then you render the near scene.  It probably works best if you have big spaces between the elements in the scene (e.g. viewing one planet from the surface of another, which was the example given).
  • Depth Bias - not always practical to use - seems to be a way to get around z-fighting without having to explicitly give a separation.  See the manual.  You set it in your material script, but I think it works beyond just that mesh. - it seems like you might have mixed results using it.  Different hardware seems to interpret it in different ways https://www.ogre3d.org/forums/viewtopic.php?f=5&t=15566&start=0 (from 2005 YMMV) - this might be due to the slope bias being ignored by older hardware.  There are details in the Ogre.Pass class reference.  It is explicitly said as useful for things like decals.
    • tried it, it's a bit hard to check it works, and there are hints that it's not 100% perfect with a bias of 1... will find out with more use.  It's certainly nicer than giving an explicit separation.
    • You get problems as the depth bias can cause something genuinely (reasonably significant) distance behind something else to be rendered on top of it.  So you want to use depth biases of ~1 (it is a float) - high depth biases and you will get z-fighting with items that weren't an issue before.  Also, sometimes it might be better to set <1 depth bias to make something behind something else, rather than try to bias it in front.
  • You may (according to a post http://ogre3d.org/forums/viewtopic.php?f=5&t=36104&start=0 ) get different levels of z-fighting according to colour depth.  32 bit colour implies 24bit depth resolution.

W-Buffers

These get around the z-buffer non-linearity issue altogether by using a linear equivalent.  With a 24bit w-buffer you have 16.7M levels -> 0.1mm resolution from 0 to 1.6km.  Therefore w-buffers are probably rarely useful unless you are going to have very big scenes with items that need to be rendered accurately nearby, but even then it might not be sensible as you'll get z-fighting with those same objects at large distances, so you would need some kind of LOD on your objects (or use multiple cameras).
  • rendersystem.WBuffersEnabled = true;
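For reference, the depth-bias setting discussed above lives in a material script pass.  A minimal decal-style example (the material and texture names are placeholders; as noted above, hardware interprets the values differently, so treat the numbers as a starting point):

```
material MyDecal
{
    technique
    {
        pass
        {
            // constant bias, then slope-scale bias
            // (the second value is ignored by older hardware)
            depth_bias 1 1

            texture_unit
            {
                texture decal.png
            }
        }
    }
}
```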

Occlusion Culling

  • http://www.ogre3d.org/forums/viewtopic.php?f=5&t=48007 (which has lots of useful info)  - OGRE doesn't (as of Feb 2009) come with occlusion culling, although external commercial software (e.g. Umbra) does exist -  i.e. it's a valid technique.  The challenge seems to be doing it fast enough for each frame i.e. you want to have few large bounding boxes, rather than using too much detail. 
  • There was a previous attempt for Ogre called HOQ (Hardware Occlusion Query) / Hardware Occlusion Culling (I think they're the same?)
  • Look at Coherent Hierarchical Occlusion Culling
  • The only real attempt that I've found is via PCZSM (scene manager) which will (eventually) have anti-portals, which are basically occluding planes.

Portal Rendering

  • http://en.wikipedia.org/wiki/Portal_rendering
  • PCZSM is Ogre's implementation.

Potentially Visible Sets

  • http://en.wikipedia.org/wiki/Potentially_visible_set 

Slicing Through a Large Mesh / Series of Meshes

Batching

A lot of the ways to optimise the rendering speed are to do with limiting the number of times data has to be sent to the graphics card.  It is a lot more efficient to send data in large batches, and I assume to keep data in the card's memory rather than sending it each time.  There are various ways of doing this with Ogre, e.g. StaticGeometry, PagedGeometry, WorldGeometry, using different scene managers etc.  I assume this means that very complex meshes for things like a human will therefore render not significantly slower than a simple mesh.

http://www.ogre3d.org/wiki/index.php/CommonMistakes#Performance : "That means to maintain 60fps on a 1Ghz CPU you cannot issue more than about 416 batches per frame, where a batch is one instance of a submesh (for example)."

Batching is CPU limited (via the graphics card's driver), and given Ogre is not really multi-threaded, I assume it can't easily be changed with multiple cores.

  • I assume that batching is why you have billboardsets etc.

Static Geometry

Static geometry allows a significant speed optimisation by grouping together lots of triangles to be rendered as a group.  This can be done by attaching nodes or entities to a StaticGeometry object.  It does prevent you from moving items within the StaticGeometry grouping, and the whole lot is either in or out of the scene.  You should see order-of-magnitude speed increases with complex scenes which would otherwise use small batches.

  • In reality you can only use one material per batch, but the StaticGeometry class hides this -> use few materials.
  • http://www.ogre3d.org/wiki/index.php/Intermediate_Tutorial_5#Adding_Static_Geometry
  • sg = sceneManager.CreateStaticGeometry()
  • sg.SetRegionDimensions() — not sure it's generally needed.
  • sg.SetOrigin - not needed unless very open scene  -- The origin is the top left corner of the region that the StaticGeometry defines.  If you want to place the StaticGeometry around a point, you will need to set the origin's coordinates to be minus half of the region's size: heavily edited quote!
  • Once you have attached all nodes / entities, call build()
  • http://www.ogre3d.org/docs/api/html/classOgre_1_1StaticGeometry.html
  • Although you're not meant to do much with batched geometry, it is possible to do things like wave grass.  Apparently this is in the grass demo.... maybe uses vertex programs?
  • sg.AddSceneNode adds the node and all its children, but it doesn't remove the node from the sceneGraph, which you have to do manually to stop it being rendered twice.
  • It looks like you can have automatic (after you've set it to run) subdivisions of static geometry (i.e. if you've got thousands of objects with no clear boundaries, you could add them all and let them be split by position in the scene into n groups)
  • It ignores visibility flags -> anything that's going to be visible / non visible needs to be on a non-static node, or it needs to be removed from the scene graph and reconnected (probably with an explicit update of the static geometry).  -- this will be corrected in 1.7 and I'm guessing if desperate, could get it out of SVN for that branch (but you might do better implementing it manually).
  • All your manual objects need to be turned into meshes to show up - you can't add them directly to the scene.
  • If you have a lot of nodes (which is implicit in your use of StaticGeometry), then it is significantly faster to not have the SceneNode that you turned into StaticGeometry attached to the main SceneGraph, rather than leaving it attached and hidden.
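The steps above, sketched in Mogre C# (untested; names are placeholders, and the method names follow the C++ API - Mogre may expose some as properties):

```csharp
// Batch a node full of mesh-based entities into static geometry.
StaticGeometry sg = sceneManager.CreateStaticGeometry("myStaticRegion");
sg.SetRegionDimensions(new Vector3(500, 500, 500)); // optional tuning
sg.SetOrigin(new Vector3(-250, -250, -250));        // minus half the region size
                                                    // to centre regions on a point

sg.AddSceneNode(treesNode); // adds the node and all its children
sg.Build();                 // call once everything has been added

// Detach the source node manually, or everything gets rendered twice.
sceneManager.RootSceneNode.RemoveChild(treesNode);
```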

PagedGeometry

OGRE add-on which vastly improves rendering speed of large, open scenes.  Generally used for outdoor scenes.

  • http://www.ogre3d.org/wiki/index.php/PagedGeometry_Engine  - has lots of screen shots and videos w/ v. complex outdoor scenes.
  • Can be v.fast: " Expansive jungle scene with 240,000 trees and animated vegetation, running at 100 - 150 FPS on a GeForce 7800 GT" — card seems fast, but not that fast.

WorldGeometry

  • sceneManager.SetWorldGeometry()
  • Faster, but more complex and less flexible than Static Geometry
  • Generally needs fiddling around with which SceneManager you're using.

Binary Space Partitioning (BSP)

  • Pre-computed depth ordering of the scene.  So in the ideal case you can skip z-buffering and just use the painter's algorithm.  YMMV.
  • Not suitable for dynamic objects (but maybe you could use two scene managers simultaneously?) - according to WP, it's often used with z-buffering in some kind of hybrid.
  • Quake engine stuff!
  • This can be used for things like physics as well as rendering.
  • It's questionable whether this is actually efficient with modern GPUs (which can deal with z-buffers easily) -> you probably don't actually want to use it.

Potentially Visible Set (PVS)

  • http://en.wikipedia.org/wiki/Potentially_visible_set
  • It's about pre-computing what might be visible.  This could take a long time (compared to a frame), but when done right, it only needs to be computed infrequently so it doesn't affect the performance if you can do it asynchronously or in advance.
  • Generally it is looking for which parts will always be visible.  Whether the rest is visible could be determined in real time.
  • No idea if there's an Ogre implementation.  

2D / HUD etc

  • Terms: billboard (in scene 2D always facing user), overlay, HUD.
  • Some links through http://www.ogre3d.org/phpBB2addons/viewtopic.php?p=36826#36826 ... although not in itself particularly useful.
  • TextAreaOverlayElement - for simple text (with a few things like alignment) shown on the HUD
  • Overlays can be defined in script files (like materials)
  • Fonts need a .fontdef file to make them work.  This just sets up the size and resolution from the TrueType font.
  • http://tango.freedesktop.org/ - has some style guidelines and icons that are used by major projects.  Might be worth looking over.

In-Game Web Browsers

Input Devices

MOIS

Based on OIS, which is technically a separate project to Ogre, MOIS provides a cross-platform abstraction layer to allow the usage of mice, keyboards and joysticks.  http://www.ogre3d.org/wiki/index.php/MOIS

  • There is also some evidence of support for 3D mice - http://www.3dconnexion.com/forum/viewtopic.php?t=1005
  • It's probably a more flexible option than using the Windows Forms inputs (though I think joysticks would become complicated), but it's probably not necessary.  Where it will help is that other projects (read: Miyagi) like to use it.
  • http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=7697 - discussion about how to use MOIS in conjunction with windows forms, more specifically how to only fire events when the panel is focused.  -Basic idea is to monitor the windows forms mouse events etc to determine if there are events being sent -> you probably need to make sure it loses focus when you leave the panel.
  • MOIS is actually quite easy to use - used it due to Miyagi liking it (it has a plugin for it).
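Basic MOIS setup, roughly as used alongside a render window.  A sketch (untested): the "WINDOW" attribute / parameter name and the buffered flag follow the usual OIS pattern.

```csharp
// Create the input system from the render window's handle.
MOIS.ParamList pl = new MOIS.ParamList();
IntPtr hwnd;
renderWindow.GetCustomAttribute("WINDOW", out hwnd);
pl.Insert("WINDOW", hwnd.ToString());

MOIS.InputManager inputManager = MOIS.InputManager.CreateInputSystem(pl);

// true = buffered: events delivered via callbacks rather than polled.
MOIS.Keyboard keyboard =
    (MOIS.Keyboard)inputManager.CreateInputObject(MOIS.Type.OISKeyboard, true);
MOIS.Mouse mouse =
    (MOIS.Mouse)inputManager.CreateInputObject(MOIS.Type.OISMouse, true);

// Call Capture() each frame to pump the events.
keyboard.Capture();
mouse.Capture();
```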

Miyagi

This has become the major GUI system to be used with Mogre, and is rapidly developing, adding new advanced controls.  While I'm sure there are pluses and minuses, in reality, there is no real choice but to use it.  Its large number of features will save time, and resurrecting a dormant GUI toolkit would be excessive work (and come with no support).

Setting up compiling from source

  • Install F# CTP from Microsoft.
  • If you want all the goodies:
  • Put mogre.dll, mois.dll, RenderSystem_*, *.cfg in the bin/debug folder.  Make sure the .cfg files are sensible.
  • Make sure that all the projects are going to compile against x86
  • Add the media folder to bin/  - the media folder, you probably have to get from the binary distribution, rather than the source tree.
  • The sample required a test.zip, which I couldn't find -> commented it out in MiyagiSample.cs
  • Okay, I give up trying to get it all to work!  You can however compile just the Miyagi bit etc without running the sample, for instance.

Features

  • Graphing via Zedgraph (LGPL) www.zedgraph.org
  • Video
  • Drag and drop  windows
  • Automatically hiding windows (which are re-activated at the borders of the screen).
  • Plugins for a wide variety of interactive languages (python, F#, Boo, Lua etc) — yet to check that they actually create consoles that can interact with your program etc.
  • Basic standard controls, including progress bars, sliders and scrolling in general AFAIK
  • Display HTML - I don't think this is a fully fledged browser, probably just styles and no javascript.
  • Plugin based architecture.
  • Resolution agnostic positioning (i.e. 0->1 across screen in both axes), but fixed sizes of objects (n pixels wide) — actually, it appears both can be resolution agnostic or fixed.  There may be small exceptions such as font size.
  • A gui designer (yet to be completed)
  • Can load up GUI definition files (AFAIK)
  • supports transparency (e.g. in backgrounds) 

Limitations

The following are due to be fixed:

  • No multi-select listbox
  • No theming
  • Can't use multiple viewports (I don't know to what extent this is the case)
  • No animated textures at the moment (but it used to be possible via .material scripts, which were removed).
  • Imperfect text support (I think it's lacking a RichTextBox type control)

There is no indication that the following will be fixed any time soon:

  • No tree control
  • No automatic positioning of controls e.g. flow layout - probably not an issue. -- There is certainly space for it: BaseControl.LayoutEngine -> it might actually have been implemented.

Other

  • poor documentation, and few examples - but there is at least one example.

Problems Found During Usage

  • TextBox isn't doing multi-line when you add a "\n" character, it just wraps lines when the line gets too long.
  • When you select a panel, it doesn't raise itself above other panels.
  • Panel movement is only possible on the rear-most panel within a GUI (there may be several such independent panels within the same GUI), but you can only select it when you have a clear shot at it with your mouse -> you have to add borders.  To be fair, in Windows you have title bars etc and wouldn't expect this to be the default behaviour - might be able to hack around it by changing the z-order so that the moveable panel is in front of its child controls.
  • It's way too fickle about sizes of stuff.... -> it's generally better to copy the examples exactly... especially with TextBox.
  • Clicking on a radiobutton seems to fire an event three times over.
  • Label doesn't understand the tab character properly, it doesn't seem to show things after the tab (but they might just be off screen).

Usage Notes

  • It seems to appreciate panel.SuspendLayout(); panel.ResumeLayout(); even when you are specifying the coordinates. - may be BS.
  • control.Enabled sets whether or not the control fires events.

Fog, Field of View, Foreshortening etc

  • Caelum Sharp provides not only great skies (and rain), but also some fog features.

Smoke Trails

  • Wings of Fury 2 has quite literally smoke trails behind aircraft. — but I don't think that it's open source
  • Ogre actually comes with it built in - called RibbonTrails (and has a class named that).  http://www.ogre3d.org/wiki/index.php/Smoke_Trails takes it further by adding turbulence.  There is a video linked to.   Something like the turbulence could be a nice effect to have.
  • https://www.ogre3d.org/forums/viewtopic.php?f=5&t=33990&start=0 - has  a bit on deleting stuff from the screen with Ribbon trails attached.
  • There is a RibbonTrailFactory class.
  • ribbonTrail = static_cast<Ogre::RibbonTrail*>(  m_pSceneMgr->createMovableObject(name, "RibbonTrail", &pairList));  --NB how it's created.  You don't need to create an instance of the RibbonTrailFactory.
  • RibbonTrails are also in the Ogre and PythonOgre demos under Lighting and Ribbons (or in the case of python, it doesn't mention the ribbons).  The demo actually does somewhat more than just create a smoke trail as it also add some lighting... and something to do with billboards which I'm not sure about, but which might be to make the light visible.
  • There is also a smoke demo, which is more literally smoke.
  • It appears that you can render multiple chains (trails AFAIK) using the same RibbonTrail object.  Not only do you attach the ribbon trail to a node (I don't know what the effect of putting it on different nodes is), but you also do ribbonTrail.AddNode(myMovingNode) through which you can add several moving nodes.   If you want to use different materials for each trail, then you need a new RibbonTrail object.
  • You seem to have to set the "numberOfChains" through the NameValuePairList (or it seems to reset things to defaults... even though you set them later).
  • More flexible stuff is through the particle system, on which I'm guessing ribbon trail is based.  Simply put, an emitter produces billboards, which are then managed, eventually dying, by Ogre.  I think there's more details elsewhere in this document.
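Creating and attaching a ribbon trail per the notes above, in Mogre C#.  Untested sketch; "Examples/LightRibbonTrail" is a material from the Ogre sample media and the node names are placeholders:

```csharp
// numberOfChains must go in via the NameValuePairList at creation time.
NameValuePairList pairList = new NameValuePairList();
pairList["numberOfChains"] = "1";
pairList["maxElements"] = "80";

RibbonTrail trail = (RibbonTrail)sceneManager.CreateMovableObject(
    "smokeTrail", "RibbonTrail", pairList);

trail.SetMaterialName("Examples/LightRibbonTrail");
trail.SetInitialColour(0, 1.0f, 0.9f, 0.8f, 1.0f);   // chain 0 start colour
trail.SetColourChange(0, 0.5f, 0.5f, 0.5f, 0.5f);    // fade per second
trail.TrailLength = 40;

// Attach the trail object itself to a node...
sceneManager.RootSceneNode.CreateChildSceneNode().AttachObject(trail);
// ...then tell it which moving node(s) to follow.
trail.AddNode(myMovingNode);
```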

Billboards, Moveable Text & Labels In General

Generally you need to create a bitmap with the text that you want to display, then attach it to a billboard (or other mesh) via a material.  There doesn't seem to be any built-in font rendering, so you do it yourself.   You therefore need to consider resolution. 

Physics / Collisions

  • MMOC - Minimal MOgre Collision - a port of an OGRE library with some differences http://www.ogre3d.org/addonforums/viewtopic.php?f=8&t=8615&sid=e1e7584ae6bfc804c280c4a73d0395f7.  Probably sufficient for a lot of stuff.  Also has mouse picking features (down to the mesh level).  Apparently speed cost is not noticeable, at least for simple operations.
  • MogreNewt comes with Mogre -> it might be a good option.  It seems to integrate nicely with the scenegraph, and is therefore relatively easy to use.  The Python demos are useful for learning about it.
  • NxOgre - this is for use with Nvidia Physx -> it would be a lot faster if you have a Nvidia Geforce 8* card.  It doesn't seem to come by default  with Quadro though.  Interestingly you can use a graphics card for physics alone and when it's not in an SLI configuration (so if needs be, for £40 or whatever, it could be added).
  • ODE - Open Dynamics Engine - doesn't seem to really be under development, but maybe that's because it's so mature.  Not sure it's as popular as Newt (but it does come with PythonOgre).
  • Bullet - don't think people are using it with Mogre.

MogreNewt

  • MogreNewt can work with any mesh in the scene (so you don't have to create a parallel physics representation of the view unlike pure Newton AFAIK).  This (hopefully) means that it can be used for more precise scene querying.  Obviously it doesn't mean that these items have physical properties such as mass.
  • The TreeCollisionSceneParser goes through everything that is the child of a specified node, looking for collisions.
  • TreeCollisions are for meshes (where the collision object looks like the mesh), but it makes the object essentially have infinite mass so it can't be used for moving objects.   The alternatives are primitives (e.g. sphere) or convex hulls which approximate the mesh (and I assume can be automatically made from the mesh).
  • Primitives seem to expand around their centre (i.e. their centre will always remain at (0,0,0) however large you make them).
  • Materials define the interaction between objects.  You create a material for each object (or type of object), then you also need a material pair defining how two materials interact (which seems like a bit of a hassle if you have lots of materials).  With the material pairs, you can then create special effects (e.g. sparks when iron and flint hit).
  • ForceCallbackHandlers appear to deal with non-collision forces e.g. gravity.
  • Joints are for connected bodies (e.g. skeletons).  There are several types of joint which restrict the movement in different ways.
  • Materials define (not necessarily exhaustive): Softness, elasticity, collidable, friction, callback.  These can be set using mat.Set
  • NB if you move a scene node with physics attached, then it will only move the Ogre contents and not the physics as well. -> if using it for collision of things being moved manually, you might get into trouble.  It might best to tell newt to move the node, rather than moving the node and then updating the physics.
  • The Newton API docs are very useful (mostly because there aren't <summary> tags on the C#... but there is a bit more info in the API docs).
  • To add something like gravity to  an object, you need to create a ForceCallback, which is just Delegate(Body).  In there do body.AddForce(...).  body.Force+=... seems to screw up for whatever reason.  The body's force is cleared before the delegates are called, so setting body.Force just once won't work.
  • Friction has two components - static and kinetic.  It can be set through materialPair.SetDefaultFriction.  Kinetic must be <= static friction (it wouldn't make any sense any other way).  Typically both should be about 1.0, but the max is 2.0.  Both are proportional to the normal force; kinetic friction is relevant when there is movement between the two objects, static friction when there is no relative movement.
  • Continuous Collision prevents bodies from passing through each other (if they move at high speed relative to their size and the physics frame rate) without initiating the callback.  It has a 40-80% penalty (v1.5) so it is off by default.
  • Softness - recommended <= 1.0, typically 0.15.  A lower value will make it softer.
  • Not sure if you can create collision bodies where objects are allowed to pass through them (but something like this must happen for buoyancy) - would be useful for defining things like windy areas.  The alternative would be to use a standard force callback and take account of the body's position etc.
  • body.Userdata  - can be used to attach your own objects to the body just like windows forms control.Tag.
  • body.AngularDamping - 0 to 1.0  default 0.1  Damping proportional to the square of the angular velocity.
  • http://flow3d.boxsnap.com/FlowDocs/OgreNewtDocs.htm - useful OgreNewt API docs.
  • Omega (property of Body etc) is the angular velocity.
  • http://www.ogre3d.org/wiki/index.php/Collision_detection_with_Newton - bits of useful info.
  • All objects by default use the default material!  So if you want to handle all collisions, then you need to create a ContactCallback and register it with MaterialPair(default, default).
  • v2 supports non-collision callback collision detection i.e. you can find out which bodies are colliding / intersecting with body1 without having to track all the collisions.
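Adding gravity via a ForceCallback, per the note above.  This is a sketch: the exact delegate signature and the GetMassMatrix naming are assumptions based on the OgreNewt C++ API, and may differ between MogreNewt versions.

```csharp
// Gravity must be re-applied on every physics update: the force
// accumulator is cleared before the callbacks fire, so setting
// body.Force just once does not persist - use AddForce in the callback.
body.ForceCallback += delegate(MogreNewt.Body b)
{
    float mass;
    Mogre.Vector3 inertia;
    b.GetMassMatrix(out mass, out inertia);
    b.AddForce(new Mogre.Vector3(0.0f, -9.81f * mass, 0.0f));
};
```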

Forces In The ContactCallback

NB I'm not totally sure about these, and I can't find any documentation on them really (some are MogreNewt specific seemingly).

  • body.AddForce(force) adds along (probably) the global axes.
  • body.AddLocalForce(force, position) adds along the axes of the body (with moment AFAIK) i.e. if you put a box on a plane and add a local force, the direction it will move in will depend on the orientation of the box when you placed it on the plane. 
  • body.AddGlobalForce(force, position) adds along the global axes, but NB that the COM is body.CentreOfMass+ body.Position.  If you forget this, you will end up with a very large moment.   This is especially useful if you want to apply a force at the contact point (which is mostly what you want to do): body.AddGlobalForce(force, this.ContactPosition);

Blender & MOGRE

You can export meshes and animations from Blender.  Tried and tested for a mesh on Linux... assume it's going to work here.

  • Exporter available here http://www.ogre3d.org/index.php?option=com_content&task=view&id=413&Itemid=133 It needs to be installed in the (python) scripts directory.  This wasn't set on my installation of Blender, so do: in the bottom bar of blender, select the user preferences view (with an 'i' icon), then at the bottom click on file paths, set the python scripts folder, press the button next to the folder button by the python scripts folder entry line called "re-evaluate scripts....".  Then it should be available in file->export->OGRE meshes.  Unfortunately this directory location doesn't seem to be saved when you exit (no idea why), and so it will disappear : ( I'm guessing a quick google would provide some answers. — I seem to have fixed it somehow (at least on windows)!
    • As far as I can tell, it converts any quads into triangles and duplicates the vertices so that each triangle has its own vertices (this doesn't appear to happen quite everywhere -> maybe only where the quad has been subdivided or something?), even if they are not needed as they have the same normal -> potential optimisation?  This means that you can get a lot of duplicate vertices.
    • http://www.ogre3d.org/wiki/index.php/Blender_to_Ogre - how to export
    • http://www.ogre3d.org/wiki/index.php/Blender_Exporter - on the actual exporter, with links to forum threads on using it, bugs etc.
    • When you have an armature, there are various scaling and transform issues, which I haven't quite worked out.  Do Transform->Clear/Apply->Apply Scale etc.  This will help, but I still get a transform that I can't get rid of.  It might have something to do with the parenting between the mesh and the skeleton?  There is some advice about moving the object centre to the world centre as the export is done in local not world coordinates, which probably means that your skeleton ends up with different coordinates to your mesh.  You can make some alterations by transforming in edit mode (which moves relative to the local origin, rather than transforming the local origin relative to the global origin).
    • It seems like most of the transform issues between the skeleton and the skin are solved by Transform->Clear/Apply.  However, I still needed to finally tweak the position of the skeleton compared to the skin; this is easy to do in the myskeleton.xml file by adjusting the position of the root bone (which then affects all the other bones).
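For reference, the root bone in the exported myskeleton.xml looks something like this (element names are from the OGRE skeleton XML format; the bone name and offset values are made-up examples), so shifting the whole skeleton relative to the mesh is just a matter of editing the root bone's position element:

```xml
<skeleton>
    <bones>
        <bone id="0" name="Root">
            <!-- Adjust these values to shift the entire skeleton relative to the mesh -->
            <position x="0" y="0.05" z="0"/>
            <rotation angle="0">
                <axis x="1" y="0" z="0"/>
            </rotation>
        </bone>
        <!-- ...child bones follow... -->
    </bones>
</skeleton>
```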
    • With the latest exporter, the default is to rotate so that Y is up (which I don't use) -> make sure it's un-checked.
  • Then the OgreXMLConverter is in the Python OGRE or OGRE Tools folder (assuming it's installed).
  • For reference: on Ubuntu there is a bug whereby the exporter isn't installed in the right directory; it needs copying to the Blender scripts folder or something.
  • Usage: export from the Blender menu -> you should get a *.mesh.xml and a *.scene file.  Drag the *.mesh.xml onto the OgreXMLConverter executable and it will create a *.mesh file, which you can use in OGRE.  You can also get the exporter to do this in one step from Blender; you need to point to the converter in the configuration of the exporter.
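On the command line the conversion step looks something like this (robot.mesh.xml is a made-up filename; the converter also works in reverse, turning a binary .mesh back into .mesh.xml):

```
OgreXMLConverter robot.mesh.xml robot.mesh
```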
  • OgreBlenderExport has documentation in with the download that is potentially useful.

Graphics Cards

  • latest cards are many times more powerful all round compared to Quadro 1700X.
  • ATI HD 5870 - 153.6GB/s memory bandwidth, 1600 stream processing units = 320 shaders?
  • Nvidia Tesla series apparently has different architecture to AMD Firestream, whereby nv shaders run at ~2.4x rest of core clock speed while AMD runs at clock speed. — Tesla is for GPGPU w/o the graphics i.e. it has no display port.

Blender

  • Export to OGRE: see Blender & MOGRE under MOGRE
  • You can set the scale of an object with numbers using the "properties" dialogue - select object and press 'n' -> it will show up — but this doesn't seem to go through to the mesh that's exported for OGRE use.
    • I'm pretty sure there's a python script to do scaling of the mesh.xml (look on OGRE wiki), otherwise can just do search/replace.
  • To change the local axis, go into node edit mode (tab), then change the median offset in the "properties" dialogue.
  • There's a load of documentation from http://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro , http://wiki.blender.org/index.php/Main_Page (which links on to other documentation), http://wiki.blender.org/index.php/Doc:Books/Essential_Blender (worthy of being available in print as well),
  • You can move a mesh compared to its local origin in edit mode (where you can see the vertices) just by grabbing and moving!
  • If the normals are inside out, you can flip them when in edit mode -> editing (f9) -> mesh tools tab.  If you scroll right, you reach more mesh tools, which allows you to show the normals on the screen.  NB we are interested in VNormals, not plain normals, which are for the faces.
  • UV texture unwrapping (applying UV textures to a mesh) http://wiki.blender.org/index.php/Doc:Manual/Textures/UV/Unwrapping_a_Mesh
  • If you select faces then in the Editing panel, link and materials tab, you can assign a different material to a series of faces (2 Mat 2 part).  See http://wiki.blender.org/index.php/Doc:Manual/Materials/Multiple_Materials.  When you export to Ogre, given it's a separate material, not texture, it will be represented as submeshes with different textures, which could be an unwanted performance hit.
  • http://blenderunderground.com/forums/viewtopic.php?t=1046 - a (seemingly) good tutorial on creating a human, with all the textures and normal maps done.
  • http://www.makehuman.org/blog/index.php?post=s1210243808 - is a tool to make meshes of humans (useful!), where you set things like age, gender, weight etc.  It doesn't give them clothes though!  It aims to be a Poser substitute (hint for what to search for!).
  • http://www.blender.org/development/release-logs/blender-246/uv-editing/ - release info on some UV editing features.
  • There are some ways of converting meshes into normal maps (/ bump mapping ), but I haven't looked into them much.  E.g. look for "ATI nmf" etc.

Meshes In General

  • Standardising meshes to unit cubes ATM, with textures appropriately put on them.

C# / .NET Debugging

  • Dependency walker http://www.dependencywalker.com/depends22_x86.zip allows you to see the .dlls that an assembly etc depends on, and what's missing.
  • If you have DLL issues etc., you can get some more information by going into Control Panel -> Admin... -> Computer Management -> System Tools -> Event Viewer -> System, where you may see "SideBySide" errors, which can give you more information on the problem.

Interfaces

OGRE Interfaces

NB I believe that it would be bad practice to use the OGRE interfaces, and it may not be possible, or at least would be difficult to do so.  Any interfaces should be custom ones that are used to wrap the MOGRE objects. — yes, this is a bad way to go about it.

Renderable

If we want to use the OGRE interface for rendering: http://www.ogre3d.org/docs/api/html/classOgre_1_1Renderable.html I'm guessing a lot of the functions would be passed on.  Need to look at how animation is done under the hood.

Animable

I think this has to do with laying tracks etc., rather than something suitable for passing a time to an object.  In OGRE land, you call Entity.getAnimationState(), which returns an AnimationState object on which you can act.  You can set the time through this object.  It also has addTime(...), which makes life somewhat easier, and which can take +ve or -ve values... useful.  For example: animstate = entity.GetAnimationState("Idle"); animstate.SetLoop(true); animstate.SetEnabled(true);
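A slightly fuller sketch of the usual pattern in Mogre, which exposes these as properties rather than Set* methods (hedged: `entity` and the "Idle" animation name are assumptions about your scene):

```csharp
// Sketch only: assumes a Mogre Entity whose mesh contains an "Idle" animation.
AnimationState animState = entity.GetAnimationState("Idle");
animState.Loop = true;     // restart automatically when the animation ends
animState.Enabled = true;  // actually play it

// Then advance it every frame, e.g. from a FrameListener's FrameStarted handler:
bool FrameStarted(FrameEvent evt)
{
    animState.AddTime(evt.timeSinceLastFrame); // negative values play backwards
    return true;
}
```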


Alias: MiscellaneousNotes