RT Shader System Component (RTSS)

xadhoom
Minaton
Posts: 973
Joined: Fri Dec 28, 2007 4:35 pm
Location: Germany
x 1

Re: RT Shader System Component (RTSS)

Post by xadhoom »

Nir Hasson wrote: All the above operations - create, clone, destruction - should stay unbound from the RTSS. I didn't want to add RTSS-related code inside the core material system, since the RTSS is just an extra component. At the moment, coordinating each of these operations with the RTSS is the user's responsibility...
Oh, I wouldn't propose to put any related code into the core MaterialManager implementation, but would use the common listener concept. E.g.:

Code: Select all

    MaterialManager::onMaterialCreated(Material* mat);   // would also cover cloning?!
    MaterialManager::onMaterialDestroyed(Material* mat);

IMHO this would allow a lot more flexibility for the RTSS and other 3rd-party code that wants to track materials, wouldn't it?

I don't know if these listeners would introduce a general performance hit when creating/destroying a lot of materials all the time.
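
Just to illustrate, a rough sketch of what such a listener could look like on the RTSS side - purely hypothetical of course, since MaterialManager exposes no such callbacks today, and the callback names simply follow the proposal above:

Code: Select all

    #include "OgreMaterial.h"

    // Hypothetical interface - MaterialManager does not expose these callbacks yet.
    class ShaderGeneratorMaterialListener
    {
    public:
        virtual ~ShaderGeneratorMaterialListener() {}

        // Would fire whenever a material is created (and could cover cloning too).
        virtual void onMaterialCreated(Ogre::Material* mat)
        {
            // e.g. create a shader-based technique for the new material
        }

        // Would fire just before a material is destroyed.
        virtual void onMaterialDestroyed(Ogre::Material* mat)
        {
            // e.g. remove the generated technique and free RTSS resources
        }
    };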

xad
Nir Hasson
OGRE Retired Team Member
Posts: 363
Joined: Wed Nov 05, 2008 4:40 pm
Location: TLV - Israel
x 2

Re: RT Shader System Component (RTSS)

Post by Nir Hasson »

Ok... I got you now...

Extending the material listener interface sounds good to me, but in any case the end user still has control over these events.
The advantage of it, IMHO, is that you can handle everything you'd like in one location in the code instead of in many places.
However, there is still a missing piece: the RTSS currently doesn't expose any interface for cloning the generated technique of a given material.
I may add it in the future.
masterfalcon
OGRE Team Member
Posts: 4270
Joined: Sun Feb 25, 2007 4:56 am
Location: Bloomington, MN
x 126

Re: RT Shader System Component (RTSS)

Post by masterfalcon »

Hey guys,

There is at least one other operation that will need to be handled by the RTSS for GL ES 2 (alpha testing). I was wondering if you guys could point me in the right direction for adding new functionality, since I'm not super familiar with that part of the RTSS.
Assaf Raman
OGRE Team Member
Posts: 3092
Joined: Tue Apr 11, 2006 3:58 pm
Location: TLV, Israel
x 76

Re: RT Shader System Component (RTSS)

Post by Assaf Raman »

Do you do the alpha testing in a GLSL ES shader?
Can you post a small sample shader of that?
Watch out for my OGRE related tweets here.
masterfalcon
OGRE Team Member
Posts: 4270
Joined: Sun Feb 25, 2007 4:56 am
Location: Bloomington, MN
x 126

Re: RT Shader System Component (RTSS)

Post by masterfalcon »

Yeah, it has to be done in a shader because the built-in functionality was removed. I'm working on a list of the other things that will need to be done too.

Here's the alpha test I did for the grass shaders.

Code: Select all

    vec4 texColor = texture2D(diffuseMap, oUv0.xy);

    // Do manual alpha rejection because it is not built into OpenGL ES 2
    // 0.588 is based on the value given in the material file (150/255 = 0.588)
    if (texColor.a < 0.588)
    {
        discard;
    }
Assaf Raman
OGRE Team Member
Posts: 3092
Joined: Tue Apr 11, 2006 3:58 pm
Location: TLV, Israel
x 76

Re: RT Shader System Component (RTSS)

Post by Assaf Raman »

I guess this is the same as in D3D9... I will ask Nir about this feature.
Watch out for my OGRE related tweets here.
masterfalcon
OGRE Team Member
Posts: 4270
Joined: Sun Feb 25, 2007 4:56 am
Location: Bloomington, MN
x 126

Re: RT Shader System Component (RTSS)

Post by masterfalcon »

I'm more than happy to work on it too. I'm just not sure of everything that needs to be done to add new features.
Praetor
OGRE Retired Team Member
Posts: 3335
Joined: Tue Jun 21, 2005 8:26 pm
Location: Rochester, New York, US
x 3

Re: RT Shader System Component (RTSS)

Post by Praetor »

Was there a question about multi-texturing as well? Did we ever figure out whether that's currently supported?
Game Development, Engine Development, Porting
http://www.darkwindmedia.com
masterfalcon
OGRE Team Member
Posts: 4270
Joined: Sun Feb 25, 2007 4:56 am
Location: Bloomington, MN
x 126

Re: RT Shader System Component (RTSS)

Post by masterfalcon »

Yeah, I haven't checked to see if multi-texturing works yet. My gut tells me it's not working yet. The other things left out that I've found are: tex coord generation, normal normalization, user clip planes and possibly multisampling.
Nir Hasson
OGRE Retired Team Member
Posts: 363
Joined: Wed Nov 05, 2008 4:40 pm
Location: TLV - Israel
x 2

Re: RT Shader System Component (RTSS)

Post by Nir Hasson »

@masterfalcon

Alpha test is done outside the shader/FFP scope in the D3D9 and OpenGL render systems, so it isn't a feature that the RTSS supports.
I can think of two options to do it for GLES2:
1. Add a general AlphaTest sub-render state. It will affect the pixel shader only and will be executed at its end, performing the alpha test against the output colour's alpha value. Implement the shader lib only for GLES2 and leave the other implementations empty. This new sub-render state can override the preAddToRenderState method in order to decide whether it should be added to the given pass.
2. Add it specifically to GLES2 in the program writer context; however, you'll need a way to provide it with the source pass as an argument, to check whether alpha test is needed.


Regarding texturing – all the FFP functionality is already done, including multi-texturing and tex coord generation for reflection, projection etc. – all inside the FFPTexturing sub-render state.

In general – adding new features should be easy enough by creating new sub-render states. You can always start by creating a new sub-render state outside the system's scope, test it, and add it to the core if needed.
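
For illustration, a skeleton of such a sub-render state could look roughly like this. The method names follow the existing FFP sub-render states (FFPTexturing and friends), but check the RTSS headers for the exact base-class signatures:

Code: Select all

    #include "OgreShaderSubRenderState.h"
    #include "OgreShaderFFPRenderState.h"   // for FFP_POST_PROCESS
    #include "OgrePass.h"

    using namespace Ogre;
    using namespace Ogre::RTShader;

    // Rough skeleton only - verify the virtual signatures against the headers.
    class ShaderExAlphaTest : public SubRenderState
    {
    public:
        static String Type;

        virtual const String& getType() const { return Type; }

        // Execute after the other fragment-stage sub-render states.
        virtual int getExecutionOrder() const { return FFP_POST_PROCESS; }

        virtual void copyFrom(const SubRenderState& rhs) { /* copy custom members */ }

        // Attach only to passes that actually request alpha rejection.
        virtual bool preAddToRenderState(const RenderState* renderState,
                                         Pass* srcPass, Pass* dstPass)
        {
            return srcPass->getAlphaRejectFunction() != CMPF_ALWAYS_PASS;
        }

    protected:
        // Resolve the threshold parameter, the shader-lib dependency, and the
        // compare + discard invocation at the end of the pixel shader.
        virtual bool resolveParameters(ProgramSet* programSet)      { return true; }
        virtual bool resolveDependencies(ProgramSet* programSet)    { return true; }
        virtual bool addFunctionInvocations(ProgramSet* programSet) { return true; }
    };

    String ShaderExAlphaTest::Type = "SGX_AlphaTest";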
Assaf Raman
OGRE Team Member
Posts: 3092
Joined: Tue Apr 11, 2006 3:58 pm
Location: TLV, Israel
x 76

Re: RT Shader System Component (RTSS)

Post by Assaf Raman »

Discard is supported in both HLSL and GLSL, and I think in Cg as well.

The fixed pipeline equivalent is alpha rejection - and that is the value that you need to get from the material.
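
For reference, a minimal sketch of pulling that from the pass (assuming you already have the Pass* at hand):

Code: Select all

    // The material stores alpha rejection as a compare function plus a 0-255
    // threshold; the shader-side test wants it normalized to 0-1.
    Ogre::CompareFunction func = pass->getAlphaRejectFunction();
    if (func != Ogre::CMPF_ALWAYS_PASS)
    {
        float threshold = pass->getAlphaRejectValue() / 255.0f;
        // bind 'threshold' as a fragment program constant and emit the
        // matching compare + discard in the generated shader
    }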
Watch out for my OGRE related tweets here.
Praetor
OGRE Retired Team Member
Posts: 3335
Joined: Tue Jun 21, 2005 8:26 pm
Location: Rochester, New York, US
x 3

Re: RT Shader System Component (RTSS)

Post by Praetor »

I don't think multisampling is something we need to do in a shader. That happens after the fragment shading stage. I'm sure there'll be a resolve shader in Dx12...

Anyway, user clip planes I'd say are lower priority. Could we add alpha testing to the general features and then reject it depending on the sub system?
Game Development, Engine Development, Porting
http://www.darkwindmedia.com
masterfalcon
OGRE Team Member
Posts: 4270
Joined: Sun Feb 25, 2007 4:56 am
Location: Bloomington, MN
x 126

Re: RT Shader System Component (RTSS)

Post by masterfalcon »

I agree. Alpha testing is really the most important feature.
Nir Hasson
OGRE Retired Team Member
Posts: 363
Joined: Wed Nov 05, 2008 4:40 pm
Location: TLV - Israel
x 2

Re: RT Shader System Component (RTSS)

Post by Nir Hasson »

Assaf Raman wrote: Discard is supported in both HLSL and GLSL, and I think in Cg as well.

The fixed pipeline equivalent is alpha rejection - and that is the value that you need to get from the material.
Yep it is - but it's performed at a later stage, as far as I know.
This reminds me of the case with fog in D3D9; but since in OpenGL it's part of the fragment shader, we support it in the RTSS.

IMO, since only GLES2 lacks support for alpha test, we can implement it for now only in the GLES2 writer context instead of the general context...
Mattan Furst
OGRE Retired Team Member
Posts: 260
Joined: Tue Jan 01, 2008 11:28 am
Location: Israel
x 32

Re: RT Shader System Component (RTSS)

Post by Mattan Furst »

I found a difference in coloring output between the RTSS and the FFP.

When lighting is turned off, the RTSS does not perform vertex color tracking, meaning there is no color modulation according to the color information specified in the vertices. This is inconsistent with the FFP implementation.

I can get the effect I'm looking for in the RTSS by replacing the lighting off statement with:

Code: Select all

    lighting on
    ambient 0 0 0 0
    diffuse 0 0 0 0
    emissive vertexcolour
But then I don't get anything drawn in regular FFP mode.
it's turtles all the way down
Nir Hasson
OGRE Retired Team Member
Posts: 363
Joined: Wed Nov 05, 2008 4:40 pm
Location: TLV - Israel
x 2

Re: RT Shader System Component (RTSS)

Post by Nir Hasson »

@Mattan

The reason for this difference is that the FFP can “see” the submitted vertex layout and settings – so it can do per-vertex colour ops on such geometry.
The RTSS can’t, and should stay out of the geometry scope – it can only “see” material settings. The solution was to use the per-vertex colour property like you did.
The colour ops are performed in the FFPLighting sub-render state – that’s why your material behaves differently when lighting is off.

I can suggest a couple of solutions:
1. If this is only a special material and you can access it at app runtime, toggle lighting on/off when switching from FFP to RTSS and vice versa.
2. Keep lighting on but change the material so the FFP works – try changing the ambient colour to 1 1 1 1, or add illumination_stage ambient to force the FFP to treat this pass as an ambient rendering pass. Also try changing emissive vertexcolour into diffuse vertexcolour.
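
For reference, the same tweaks can be applied at runtime through the Pass API - a sketch, with 'mat' standing in for your MaterialPtr:

Code: Select all

    // Rough runtime equivalent of the material-script tweak above.
    Ogre::Pass* pass = mat->getTechnique(0)->getPass(0);
    pass->setLightingEnabled(true);                       // lighting on
    pass->setAmbient(Ogre::ColourValue::White);           // ambient 1 1 1 1 (suggested tweak)
    pass->setDiffuse(Ogre::ColourValue(0, 0, 0, 0));      // diffuse 0 0 0 0
    pass->setVertexColourTracking(Ogre::TVC_EMISSIVE);    // emissive vertexcolour
    // ...or Ogre::TVC_DIFFUSE to try 'diffuse vertexcolour' instead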

Hope it helped - a bit :D
Mattan Furst
OGRE Retired Team Member
Posts: 260
Joined: Tue Jan 01, 2008 11:28 am
Location: Israel
x 32

Re: RT Shader System Component (RTSS)

Post by Mattan Furst »

@Nir,

Ok, I'll work around the problem as you suggested. There is, however, a second inconsistency I found.

When turning on ambient vertex color tracking with the default emissive color, the emissive color is added to the final color. This is supposed to be OK, as the emissive color's default value is meant to be black. However, the default emissive color is 0,0,0,1, which means the final color receives an alpha value of 1 or more.

I'm not sure, but in order to be consistent with the FFP coloring, only the r,g,b values should be added.
it's turtles all the way down
Mattan Furst
OGRE Retired Team Member
Posts: 260
Joined: Tue Jan 01, 2008 11:28 am
Location: Israel
x 32

Re: RT Shader System Component (RTSS)

Post by Mattan Furst »

@Nir
Just added a patch for the RTSS to handle materials of the same name from different groups, as per our discussion. As far as I can tell, no backward compatibility was broken.
it's turtles all the way down
Noman
OGRE Retired Team Member
Posts: 714
Joined: Mon Jan 31, 2005 7:21 pm
Location: Israel
x 2

Re: RT Shader System Component (RTSS)

Post by Noman »

Hi,

I have a small question / potential bug report:
In "OgreShaderFFPLighting.cpp", in bool FFPLighting::resolveParameters(ProgramSet* programSet):

Code: Select all

switch (mLightParamsList[i].mType)
{
case Light::LT_DIRECTIONAL:
	mLightParamsList[i].mDirection = vsProgram->resolveParameter(GCT_FLOAT4, -1, (uint16)GPV_LIGHTS, "light_position_view_space");
	if (mLightParamsList[i].mDirection.get() == NULL)
		return false;
	break;
and later again in the spotlight case

Code: Select all

mLightParamsList[i].mPosition = vsProgram->resolveParameter(GCT_FLOAT4, -1, (uint16)GPV_LIGHTS, "light_position_view_space");
if (mLightParamsList[i].mPosition.get() == NULL)
	return false;
mLightParamsList[i].mDirection = vsProgram->resolveParameter(GCT_FLOAT4, -1, (uint16)GPV_LIGHTS, "light_position_view_space");
if (mLightParamsList[i].mDirection.get() == NULL)
	return false;
is "light_position_view_space" the correct parameter for mDirection? It seems that the exact same parameter is used for position.
Nir Hasson
OGRE Retired Team Member
Posts: 363
Joined: Wed Nov 05, 2008 4:40 pm
Location: TLV - Israel
x 2

Re: RT Shader System Component (RTSS)

Post by Nir Hasson »

@Noman

The string passed to the resolveParameter method is just a suggested prefix name, so no name collision occurs in the spotlight case.
However, for better code readability, I'll change it and commit.
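
I.e. something along these lines for the spotlight direction:

Code: Select all

    // Spotlight case, giving the direction parameter its own suggested name
    // (illustrative only):
    mLightParamsList[i].mDirection = vsProgram->resolveParameter(GCT_FLOAT4, -1,
        (uint16)GPV_LIGHTS, "light_direction_view_space");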
Noman
OGRE Retired Team Member
Posts: 714
Joined: Mon Jan 31, 2005 7:21 pm
Location: Israel
x 2

Re: RT Shader System Component (RTSS)

Post by Noman »

Great, thanks.

Another point - I think there's a problem with hardware skinning in HLSL.

I modified the skeletal animation demo to use RTSS:

SkeletalAnimation.h :

Code: Select all

#ifndef __SkeletalAnimation_H__
#define __SkeletalAnimation_H__

#include "SdkSample.h"

using namespace Ogre;
using namespace OgreBites;

#define SKELETAL_ANIMATION_USE_RTSHADER

class _OgreSampleClassExport Sample_SkeletalAnimation : public SdkSample
{
public:

	Sample_SkeletalAnimation() : NUM_MODELS(6), ANIM_CHOP(8)
	{
		mInfo["Title"] = "Skeletal Animation";
		mInfo["Description"] = "A demo of the skeletal animation feature, including spline animation.";
		mInfo["Thumbnail"] = "thumb_skelanim.png";
		mInfo["Category"] = "Animation";
	}

    bool frameRenderingQueued(const FrameEvent& evt)
    {
        for (unsigned int i = 0; i < NUM_MODELS; i++)
        {
			// update sneaking animation based on speed
			mAnimStates[i]->addTime(mAnimSpeeds[i] * evt.timeSinceLastFrame);

			if (mAnimStates[i]->getTimePosition() >= ANIM_CHOP)   // when it's time to loop...
			{
				/* We need to reposition the scene node origin, since the animation includes translation.
				Position is calculated from an offset to the end position, and rotation is calculated
				from how much the animation turns the character. */

				Quaternion rot(Degree(-60), Vector3::UNIT_Y);   // how much the animation turns the character

				// find current end position and the offset
				Vector3 currEnd = mModelNodes[i]->getOrientation() * mSneakEndPos + mModelNodes[i]->getPosition();
				Vector3 offset = rot * mModelNodes[i]->getOrientation() * -mSneakStartPos;

				mModelNodes[i]->setPosition(currEnd + offset);
				mModelNodes[i]->rotate(rot);

				mAnimStates[i]->setTimePosition(0);   // reset animation time
			}
        }

		return SdkSample::frameRenderingQueued(evt);
    }


protected:

	void setupContent()
	{
		// set shadow properties
		mSceneMgr->setShadowTechnique(SHADOWTYPE_TEXTURE_MODULATIVE);
		mSceneMgr->setShadowTextureSize(512);
		mSceneMgr->setShadowColour(ColourValue(0.6, 0.6, 0.6));
		mSceneMgr->setShadowTextureCount(2);

		// add a little ambient lighting
        mSceneMgr->setAmbientLight(ColourValue(0.5, 0.5, 0.5));

		SceneNode* lightsBbsNode = mSceneMgr->getRootSceneNode()->createChildSceneNode();
		BillboardSet* bbs;

		// Create billboard set for lights .
		bbs = mSceneMgr->createBillboardSet();
		bbs->setMaterialName("Examples/Flare");
		lightsBbsNode->attachObject(bbs);


		// add a blue spotlight
        Light* l = mSceneMgr->createLight();
		Vector3 dir;
		l->setType(Light::LT_SPOTLIGHT);
        l->setPosition(-40, 180, -10);
		dir = -l->getPosition();
		dir.normalise();
		l->setDirection(dir);
        l->setDiffuseColour(0.0, 0.0, 0.5);
		bbs->createBillboard(l->getPosition())->setColour(l->getDiffuseColour());
		

		// add a green spotlight.
        l = mSceneMgr->createLight();
		l->setType(Light::LT_SPOTLIGHT);
        l->setPosition(0, 150, -100);
		dir = -l->getPosition();
		dir.normalise();
		l->setDirection(dir);
        l->setDiffuseColour(0.0, 0.5, 0.0);		
		bbs->createBillboard(l->getPosition())->setColour(l->getDiffuseColour());
			
	
	

		// create a floor mesh resource
		MeshManager::getSingleton().createPlane("floor", ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
			Plane(Vector3::UNIT_Y, -1), 250, 250, 25, 25, true, 1, 15, 15, Vector3::UNIT_Z);

		// add a floor to our scene using the floor mesh we created
		Entity* floor = mSceneMgr->createEntity("Floor", "floor");
		floor->setMaterialName("Examples/Rockwall");
		floor->setCastShadows(false);
		mSceneMgr->getRootSceneNode()->attachObject(floor);

		// set camera initial transform and speed
        mCamera->setPosition(100, 20, 0);
        mCamera->lookAt(0, 10, 0);
		mCameraMan->setTopSpeed(50);

		setupModels();
	}

	void setupModels()
	{
		tweakSneakAnim();

		SceneNode* sn = NULL;
		Entity* ent = NULL;
		AnimationState* as = NULL;

#ifdef SKELETAL_ANIMATION_USE_RTSHADER
		mShaderGenerator->setTargetLanguage("cg");
		RTShader::ShaderGenerator& rtShader = RTShader::ShaderGenerator::getSingleton();

		if (rtShader.createShaderBasedTechnique("jaiqua",
			MaterialManager::DEFAULT_SCHEME_NAME, RTShader::ShaderGenerator::DEFAULT_SCHEME_NAME, true))
		{
			RTShader::RenderState* rs = rtShader.getRenderState(RTShader::ShaderGenerator::DEFAULT_SCHEME_NAME, 
				"jaiqua", 0);
			RTShader::SubRenderState* srs = rtShader.createSubRenderState(RTShader::HardwareSkinning::Type);
			RTShader::HardwareSkinning* hardwareSkinning = static_cast<RTShader::HardwareSkinning*>(srs);
			hardwareSkinning->setHardwareSkinningParam(32, 2);
			hardwareSkinning->setAllowSkinningStateChange(true);

			rs->addTemplateSubRenderState(srs);
			rtShader.validateMaterial(RTShader::ShaderGenerator::DEFAULT_SCHEME_NAME,
				"jaiqua");
			mViewport->setMaterialScheme(RTShader::ShaderGenerator::DEFAULT_SCHEME_NAME);
		}		
#endif
        for (unsigned int i = 0; i < NUM_MODELS; i++)
        {
			// create scene nodes for the models at regular angular intervals
			sn = mSceneMgr->getRootSceneNode()->createChildSceneNode();
			sn->yaw(Radian(Math::TWO_PI * (float)i / (float)NUM_MODELS));
			sn->translate(0, 0, -20, Node::TS_LOCAL);
			mModelNodes.push_back(sn);

			// create and attach a jaiqua entity
            ent = mSceneMgr->createEntity("Jaiqua" + StringConverter::toString(i + 1), "jaiqua.mesh");
			sn->attachObject(ent);
			
			// enable the entity's sneaking animation at a random speed and loop it manually since translation is involved
			as = ent->getAnimationState("Sneak");
            as->setEnabled(true);
			as->setLoop(false);
			mAnimSpeeds.push_back(Math::RangeRandom(0.5, 1.5));
			mAnimStates.push_back(as);
        }

		// create name and value for skinning mode
		StringVector names;
		names.push_back("Skinning");
		String value = "Software";

		// change the value if hardware skinning is enabled
        Pass* pass = ent->getSubEntity(0)->getMaterial()->getBestTechnique()->getPass(0);
		if (pass->hasVertexProgram() && pass->getVertexProgram()->isSkeletalAnimationIncluded()) value = "Hardware";

		// create a params panel to display the skinning mode
		mTrayMgr->createParamsPanel(TL_TOPLEFT, "Skinning", 150, names)->setParamValue(0, value);
	}
	
	/*-----------------------------------------------------------------------------
	| The jaiqua sneak animation doesn't loop properly. This method tweaks the
	| animation to loop properly by altering the Spineroot bone track.
	-----------------------------------------------------------------------------*/
	void tweakSneakAnim()
	{
		// get the skeleton, animation, and the node track iterator
		SkeletonPtr skel = SkeletonManager::getSingleton().load("jaiqua.skeleton",
			ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME);
		Animation* anim = skel->getAnimation("Sneak");
		Animation::NodeTrackIterator tracks = anim->getNodeTrackIterator();

		while (tracks.hasMoreElements())   // for every node track...
		{
			NodeAnimationTrack* track = tracks.getNext();

			// get the keyframe at the chopping point
			TransformKeyFrame oldKf(0, 0);
			track->getInterpolatedKeyFrame(ANIM_CHOP, &oldKf);

			// drop all keyframes after the chopping point
			while (track->getKeyFrame(track->getNumKeyFrames()-1)->getTime() >= ANIM_CHOP - 0.3f)
				track->removeKeyFrame(track->getNumKeyFrames()-1);

			// create a new keyframe at chopping point, and get the first keyframe
			TransformKeyFrame* newKf = track->createNodeKeyFrame(ANIM_CHOP);
			TransformKeyFrame* startKf = track->getNodeKeyFrame(0);

			Bone* bone = skel->getBone(track->getHandle());

			if (bone->getName() == "Spineroot")   // adjust spine root relative to new location
			{
				mSneakStartPos = startKf->getTranslate() + bone->getInitialPosition();
				mSneakEndPos = oldKf.getTranslate() + bone->getInitialPosition();
				mSneakStartPos.y = mSneakEndPos.y;

				newKf->setTranslate(oldKf.getTranslate());
				newKf->setRotation(oldKf.getRotation());
				newKf->setScale(oldKf.getScale());
			}
			else   // make all other bones loop back
			{
				newKf->setTranslate(startKf->getTranslate());
				newKf->setRotation(startKf->getRotation());
				newKf->setScale(startKf->getScale());
			}
		}
	}

	void cleanupContent()
	{
		mModelNodes.clear();
		mAnimStates.clear();
		mAnimSpeeds.clear();
		MeshManager::getSingleton().remove("floor");
	}

	const unsigned int NUM_MODELS;
	const Real ANIM_CHOP;

	std::vector<SceneNode*> mModelNodes;
	std::vector<AnimationState*> mAnimStates;
	std::vector<Real> mAnimSpeeds;

	Vector3 mSneakStartPos;
	Vector3 mSneakEndPos;
};

#endif
This version uses Cg. Replace "cg" with "hlsl" and you get wrong results. I suspect it has something to do with row/column majority of the matrices. If anyone wants to have a crack at it, they're welcome. I'll hopefully get to it this weekend...
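
If matrix majority really is the culprit, one experiment worth trying is flipping OGRE's column_major_matrices option on the generated HLSL vertex program (the program name below is just a placeholder - grab the real generated name from the pass):

Code: Select all

    // Hypothetical experiment only: toggle matrix majority on the generated
    // HLSL vertex program and see whether the skinning output changes.
    Ogre::HighLevelGpuProgramPtr prog =
        Ogre::HighLevelGpuProgramManager::getSingleton().getByName("SomeGeneratedVS");
    if (!prog.isNull() && prog->getLanguage() == "hlsl")
        prog->setParameter("column_major_matrices", "false");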
Noman
OGRE Retired Team Member
Posts: 714
Joined: Mon Jan 31, 2005 7:21 pm
Location: Israel
x 2

Re: RT Shader System Component (RTSS)

Post by Noman »

Another issue / idea:

Right now, for the normal map, we have some parameters in the texture unit state that can be controlled via this directive:

Code: Select all

			rtshader_system
			{
				lighting_stage normal_map ground_1_NORM.tga tangent_space 0
			}
However, there are many options that don't get exposed - for example, the texture address mode. Suppose I have a normal map for a repeating object (for example, a plane): I can't have the normal map addressed in the same way.

I see three solutions to this problem:
1) Worst solution - expose more parameters within the lighting_stage line. The main problem is that all parameters (up to the one you actually want to modify) become required, and these lines get very long.
2) Better solution - add a 'parent texture unit' - most normal maps correspond to another texture in the pass. If a 'parent index' is added, all of the texture unit state parameters (besides the texture name) are copied from the parent.
3) Best solution (IMO) - add a nested texture unit state to the directive:

Code: Select all

			rtshader_system
			{
				lighting_stage normal_map tangent_space
				{
					texture_unit
					{
						texture ground_1_NORM.tga
						coord_set 0
					}
				}
			}
This gives us the freedom to access all of the TUS's options, specifying only the ones we need and not relying on other TUS's in the pass.
What do you think?
Noman
OGRE Retired Team Member
Posts: 714
Joined: Mon Jan 31, 2005 7:21 pm
Location: Israel
x 2

Re: RT Shader System Component (RTSS)

Post by Noman »

I'm looking at the normal map code, and it seems to be designed in a way that is hard to extend -
it not only modifies the normal by sampling the normal map, but also carries out the entire per-pixel lighting model on its own.
Is this the only way this can work? I would expect the normal map extension to just trigger the per-pixel lighting extension and modify the normal before the per-pixel functions run.
I think this not only makes the normal map extension more sophisticated than it should be, but also makes things harder if I want to mix in other lighting options.
I'm going to write a specular map extension, and if I do it the same way as the normal map, I don't see how I'll be able to mix them together.

Is there a page / guide somewhere explaining how this can be addressed? I tried looking at the reflection map example but couldn't figure it out from there.

[edit]
Looking deeper into ShaderExReflectionMap, the approach I talked about is the one taken there, and it's indeed much simpler.
I guess my main question now is why this approach wasn't used for the normal map extension. Can it be modified to work that way?
[/edit]
Nir Hasson
OGRE Retired Team Member
Posts: 363
Joined: Wed Nov 05, 2008 4:40 pm
Location: TLV - Israel
x 2

Re: RT Shader System Component (RTSS)

Post by Nir Hasson »

@Noman
Regarding the skinning issues you mentioned, it's better to talk to Mattan - I think he wrote that extension - but if you can give it a shot, that would be great.

Regarding the texture unit you mentioned – I agree with you that a nested TU would be the best solution.
We might add it as a static method of the material parser so any SubRenderState could use it. It should parse the TU section and create a structure that holds all the data required for creating it later on by SubRenderState instances.

Regarding the normal map SubRenderState – I know it's complicated, but the main idea was to make each SubRenderState as independent as possible. That means it takes care of the entire pipeline, from the vertex shader entry until the output of the pixel shader.
Simplifying that would lead to code that's easier to maintain, but also to a harder time on the user side, which would have to coordinate multiple SubRenderStates in order to achieve a certain effect.

In the case of normal map lighting there is a respectable number of variables and operations that have to be taken care of only in the normal map stage – the VS tangent input, constructing the TBN matrix, sampling the normal map, unpacking the normal from the normal map; not to mention support for all 3 light types with both specular off/on options.
The reflection map is a much simpler case, hence the code is much simpler…

That said – I would appreciate any modification that improves the per-pixel lighting SubRenderStates specifically and simplifies the process of extending current SubRenderStates. Maybe a shared base class would ease things… but I'm open to any idea you might have. :D
Noman
OGRE Retired Team Member
Posts: 714
Joined: Mon Jan 31, 2005 7:21 pm
Location: Israel
x 2

Re: RT Shader System Component (RTSS)

Post by Noman »

Nir Hasson wrote: ...not to mention support for all 3 light types with both specular off/on options.
This is actually the only part that bothers me in the normal mapper. I agree that the rest has to be done there (maybe with helper functions, but that's not the point). In my opinion, it is not the normal mapper's job to actually do the light calculations. That is the job of the lighter (the per-pixel lighter, in this case).

I think the normal mapper should just modify the normal (based on its calculations) and let the rest of the pipeline do the rest of the job.
There are many types of lighting calculations that do just that - modify some of the inputs of the normal shading process and leave the rest to it. Normal mapping modifies the normal, specular mapping modifies the speculars, parallax mapping modifies the texture coordinates, etc.
I think that if each of these techniques did only that, it would be much easier to mix and match between them.

The question is how to make them easy to use. I think dependency checking would be enough. For example, NormalMapping would require PerPixelLighting to be enabled. I'm against adding the sub-render state automatically (so that the user understands that normal mapping comes with a cost, for example), so we should just warn / throw exceptions if illegal combinations are made. I think a log message of "Normal mapping requires per-pixel lighting to be turned on. Disabling normal mapping" would be indicative enough - a rough sketch of this check follows below.

Moving to this kind of SubRenderState would also allow me to transition the deferred rendering infrastructure to use the shader generator. I would then just replace the 'generic lighting' render state with a 'g-buffer writing' render state, which would just read the inputs (possibly modified by effects such as normal mapping) and write them to the g-buffer.
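
Sketching that dependency check - 'NormalMapping' and 'containsSubRenderState' are hypothetical names here; a real implementation would scan the render state's sub-render state list for the per-pixel lighting type:

Code: Select all

    // Sketch only: 'NormalMapping' and 'containsSubRenderState' are
    // hypothetical; the real check would look for PerPixelLighting::Type
    // among the render state's sub-render states.
    bool NormalMapping::preAddToRenderState(const Ogre::RTShader::RenderState* renderState,
                                            Ogre::Pass* srcPass, Ogre::Pass* dstPass)
    {
        if (!containsSubRenderState(renderState, Ogre::RTShader::PerPixelLighting::Type))
        {
            Ogre::LogManager::getSingleton().logMessage(
                "Normal mapping requires per-pixel lighting to be turned on. "
                "Disabling normal mapping.");
            return false; // refuse to attach rather than render incorrectly
        }
        return true;
    }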