I'm now trying to figure out whether this is some kind of bug in Ogre 1.10 (maybe also in later versions?).
1) My OgrePlatform.h has:
Code: Select all
# define __OGRE_HAVE_DIRECTXMATH 1
This happens even if OGRE_DOUBLE_PRECISION == 1 is used and ::Ogre::Real is of type double instead of float.
See: https://bitbucket.org/sinbad/ogre/src/4 ... form.h-156
2) But it seems DirectXMath is designed for single precision only!
The XMVECTOR type is just a __m128 (SSE) data type on Windows and it uses SSE internally.
Yes, with SSE you can process two double values simultaneously instead of four floats, but that would require a separate implementation.
I don't see any such double-precision code path in OgreOptimisedUtilDirectXMath.
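Just to illustrate what I mean (my own example, not Ogre code): DirectXMath packs four floats into one register, while doubles only fit two per SSE register, so a double-precision path would need a different set of intrinsics (e.g. _mm_mul_pd) and twice the operations:
Code: Select all
#include <DirectXMath.h>   // XMVECTOR is __m128 (four packed floats) on desktop x86/x64
#include <emmintrin.h>     // SSE2 intrinsics for packed doubles

// Scale four floats in one shot via DirectXMath.
void scaleFloats(DirectX::XMFLOAT4& v4, float s)
{
    DirectX::XMVECTOR v = DirectX::XMLoadFloat4(&v4);
    v = DirectX::XMVectorScale(v, s);
    DirectX::XMStoreFloat4(&v4, v);
}

// The same for doubles needs a separate code path:
// only two doubles fit into a __m128d register, so it takes two operations.
void scaleDoubles(double* v4, double s)
{
    __m128d factor = _mm_set1_pd(s);
    _mm_storeu_pd(v4,     _mm_mul_pd(_mm_loadu_pd(v4),     factor));
    _mm_storeu_pd(v4 + 2, _mm_mul_pd(_mm_loadu_pd(v4 + 2), factor));
}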
3) If OGRE_DOUBLE_PRECISION == 1 is set, then __OGRE_HAVE_SSE is disabled in OgrePlatformInformation.h, because this line is skipped:
Code: Select all
# define __OGRE_HAVE_SSE 1
As a result, OgreOptimisedUtilSSE is disabled, but OgreOptimisedUtilDirectXMath stays enabled.
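For reference, the guard around that define looks roughly like this as far as I can tell (simplified from memory; the real condition in OgrePlatformInformation.h also checks compiler and architecture):
Code: Select all
// Simplified sketch, not the literal Ogre source:
#if OGRE_DOUBLE_PRECISION == 0 && OGRE_CPU == OGRE_CPU_X86
#   define __OGRE_HAVE_SSE 1
#endif
// With OGRE_DOUBLE_PRECISION == 1 the define is skipped,
// so __OGRE_HAVE_SSE ends up as 0.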
Conclusion:
I end up with x64 builds that have Ogre double precision enabled but still use the functions in OgreOptimisedUtilDirectXMath, because _detectImplementation() in OgreOptimisedUtil.cpp favors that implementation over the general one when SSE is disabled (which it is for double precision).
See: https://bitbucket.org/sinbad/ogre/src/4 ... il.cpp-358
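This is roughly how I read the selection logic there (a simplified sketch from memory, not the literal source; the helper names are how I remember them):
Code: Select all
static OptimisedUtil* _detectImplementation(void)
{
#if __OGRE_HAVE_SSE
    if (PlatformInformation::getCpuFeatures() & PlatformInformation::CPU_FEATURE_SSE)
        return _getOptimisedUtilSSE();
#elif __OGRE_HAVE_DIRECTXMATH
    // With double precision, __OGRE_HAVE_SSE is 0, so this branch is compiled in
    // and the float-only DirectXMath implementation gets picked.
    return _getOptimisedUtilDirectXMath();
#endif
    return _getOptimisedUtilGeneral();   // scalar fallback
}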
Now I suspect that the code in OgreOptimisedUtilDirectXMath is incorrectly treating my doubles as floats and simply doesn't work.
Any ideas? Thanks.
PS: From what I can tell, __OGRE_HAVE_DIRECTXMATH should be forced to 0 when OGRE_DOUBLE_PRECISION == 1, just like __OGRE_HAVE_SSE is. But I might be wrong ... ?!
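Something along these lines is what I have in mind (just a sketch of the idea; the _MSC_VER check only stands in for whatever platform/compiler checks OgrePlatform.h already does):
Code: Select all
#if OGRE_DOUBLE_PRECISION == 0 && defined(_MSC_VER) && _MSC_VER >= 1700
#   define __OGRE_HAVE_DIRECTXMATH 1
#else
#   define __OGRE_HAVE_DIRECTXMATH 0
#endif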