[2.1] texture update performance


Post by fenriss »

I'm working with Ogre 2.1/GL3+ and am trying to understand the most efficient way to do texture updates (see this question for details on how and why I'm trying to do this).

In the Khronos OpenGL "Common Mistakes" wiki (found here) I've read:
We should also state that on some platforms, such as Windows, GL_BGRA for the pixel upload format is preferred.
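For reference, the upload the wiki is talking about is a plain GL call along these lines (just a sketch of the raw GL side, not Ogre code; textureId, width, height and pixelData are placeholders, and GL_BGRA with GL_UNSIGNED_INT_8_8_8_8_REV is the format/type pairing usually recommended for a GL_RGBA8 texture):

Code: Select all

// Update an existing GL_RGBA8 texture from BGRA-ordered system memory.
// With this format/type pair the driver can often skip a CPU-side swizzle.
glBindTexture(GL_TEXTURE_2D, textureId);
glTexSubImage2D(GL_TEXTURE_2D, 0,            // target, mip level
                0, 0, width, height,         // region to update
                GL_BGRA,                     // pixel upload format
                GL_UNSIGNED_INT_8_8_8_8_REV, // pixel upload type
                pixelData);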
So I tried to create a texture (TextureLocation) with this format and assign it to a datablock:

Code: Select all

Ogre::Image image;
image.load("asdf.bmp", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME);

// Convert the loaded image into a BGRA-ordered buffer.
Ogre::PixelBox convBox(image.getWidth(), image.getHeight(), image.getDepth(),
                       Ogre::PixelFormat::PF_B8G8R8A8);
convBox.data = OGRE_ALLOC_T(Ogre::uchar, convBox.getConsecutiveSize(),
                            Ogre::MEMCATEGORY_GENERAL);
Ogre::PixelUtil::bulkPixelConversion(image.getPixelBox(0, 0), convBox);

// Re-wrap the converted buffer as the Image (autoDelete = true takes ownership).
image.loadDynamicImage((Ogre::uchar*)convBox.data, convBox.getWidth(),
                       convBox.getHeight(), convBox.getDepth(),
                       Ogre::PixelFormat::PF_B8G8R8A8, true, 1, 0);

// Hand the BGRA image to the HLMS texture manager and assign it to the datablock.
Ogre::HlmsTextureManager::TextureLocation texLocation = hlmsTextureManager->
    createOrRetrieveTexture("conv", "conv.bmp",
        Ogre::HlmsTextureManager::TEXTURE_TYPE_DETAIL, &image);
datablock->setTexture(Ogre::TERRA_DETAIL2, texLocation.xIdx, texLocation.texture);
However, the texture reports the PF_A8R8G8B8 format after creation, which is essentially determined by the following call chain:

Code: Select all

TextureManager::createManual
  [...]
  Texture::createInternalResources
    [...]
    GL3PlusTexture::createInternalResourcesImpl
      [...]
      _createSurfaceList();
        [...]
        GL3PlusTextureBuffer::GL3PlusTextureBuffer
          OGRE_CHECK_GL_ERROR(glGetTexLevelParameteriv(mFaceTarget, level,
              GL_TEXTURE_INTERNAL_FORMAT, &value));
          mGLInternalFormat = value;
          mFormat = GL3PlusPixelUtil::getClosestOGREFormat(value);
      [...]
      mFormat = getBuffer(0,0)->getFormat();
It looks like a round trip is made to query the OpenGL internal format.
It would be really nice to understand what's going on here, but more importantly I'd like to ask how to determine which format delivers the best performance. I've read about glGetInternalformativ, but it seems that this API is not accessible via Ogre. Or does Ogre do this implicitly anyway?
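For what it's worth, this is roughly the raw GL query I have in mind, as a minimal sketch: it assumes an active GL context and GL 4.3 / ARB_internalformat_query2, and the variable names (preferredFormat, uploadFormat, uploadType) are just placeholders, since Ogre doesn't seem to expose this call:

Code: Select all

// Sketch only: needs an active GL context and GL 4.3 / ARB_internalformat_query2.
// Asks the driver for its preferred internal format and the matching upload
// format/type for a GL_RGBA8 2D texture.
GLint preferredFormat = 0;
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA8,
                      GL_INTERNALFORMAT_PREFERRED, 1, &preferredFormat);

GLint uploadFormat = 0;
GLint uploadType   = 0;
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA8,
                      GL_TEXTURE_IMAGE_FORMAT, 1, &uploadFormat); // e.g. GL_BGRA
glGetInternalformativ(GL_TEXTURE_2D, GL_RGBA8,
                      GL_TEXTURE_IMAGE_TYPE, 1, &uploadType);     // e.g. GL_UNSIGNED_INT_8_8_8_8_REV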