In my long experience with OpenGL on modern hardware, there is no real speed difference between uploading data as RGBA or BGRA (using glTexSubImage2D(), for example).
Of course I could change the behaviour inside Ogre to use GL_RGBA as the default (currently it's GL_BGRA), but I would like to support both.
I've been doing this for years in pure OpenGL...
Swizzling components in shaders etc. can be done, of course, but it seems unnecessary to me.
Making Ogre use both and getting rid of getClosestOGREFormat() seems to be a major effort...
Another advantage would be that I could react to different data formats just inside the Ogre GL code for optimizations... (because the format IS stored in the Texture class)
(one dynamic texture getting RGBA data, another getting BGRA data)
It seems Ogre is actually translating RGBA -> ARGB:
GL_RGBA8 -> GL_BGRA (a mapping from internal format to external data format)
This is the basic flow of things:
- the user creates a texture with some 4-component 8bit format, let's say PF_A8B8G8R8
- Ogre returns GL_RGBA8 as the internal format (correct)
- from this point on Ogre returns PF_A8R8G8B8 (which translates to GL_BGRA format) for this internal format
As long as everybody is happy to use whatever data format Ogre is "suggesting", all works well and is optimized. I am not criticizing that; I am just looking for a reasonable way to extend this behaviour to do what I want to do.