Two Mesh morphing - any suggestions?

A place for users of OGRE to discuss ideas and experiences of utilising OGRE in their games / demos / applications.
User avatar
KingLeonid
Gnoblar
Posts: 15
Joined: Wed Apr 13, 2005 12:57 pm
Location: St.Petersburg, Russia
Contact:

Two Mesh morphing - any suggestions?

Post by KingLeonid »

Hello!
I'm writing a small 3D game featuring a human-like robot. The robot can turn and fire, but when it wants to walk, it morphs into a sphere and rolls.
I'm currently thinking about how to implement this morphing.
I have only two ideas:
  1. Build N separate meshes in 3ds Max, load them into OGRE, and show the next mesh each frame.
    +/- Easy to do, but requires a lot of memory.
  2. Load only the first (robot) and last (sphere) mesh and write a per-vertex morphing algorithm
    that produces a new mesh each frame by moving vertices from the first mesh towards the last.
    +/- Hard to do, and the high CPU usage could slow down the game's frame rate.
Do you have any other suggestions or comments?
Maybe someone has already done such a thing?

Thank you in advance.
Best regards,
Leo.
User avatar
DWORD
OGRE Retired Moderator
Posts: 1365
Joined: Tue Sep 07, 2004 12:43 pm
Location: Aalborg, Denmark
Contact:

Post by DWORD »

You can probably do it on the GPU using a vertex shader, with a mesh where the vertex positions of both meshes are present in the vertex buffer, e.g. in POSITION and TEXCOORD1 (I'm not too familiar with shaders, so bear with me). Then you can have a blending factor as a parameter. The meshes must obviously have the same number of vertices, and if you want to morph the texture too, it'll probably be a bit more complicated.
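For what it's worth, a minimal Cg vertex program along those lines might look like this. This is an untested sketch: names like `morph_vp` and `morphFactor` are made up, and the target mesh's positions are assumed to be packed into TEXCOORD1 as suggested above.

```cg
// Sketch of a morphing vertex program (illustrative only).
void morph_vp(
    float4 startPos : POSITION,   // vertex position in the source mesh
    float3 endPos   : TEXCOORD1,  // same vertex's position in the target mesh
    float2 uv       : TEXCOORD0,

    out float4 oPos : POSITION,
    out float2 oUv  : TEXCOORD0,

    uniform float4x4 worldViewProj,
    uniform float    morphFactor)  // 0.0 = source mesh, 1.0 = target mesh
{
    // Linear interpolation between the two stored positions.
    float4 pos = float4(lerp(startPos.xyz, endPos, morphFactor), 1.0);
    oPos = mul(worldViewProj, pos);
    oUv  = uv;
}
```

The `morphFactor` constant would be updated from application code each frame to drive the animation.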
User avatar
KingLeonid
Gnoblar
Posts: 15
Joined: Wed Apr 13, 2005 12:57 pm
Location: St.Petersburg, Russia
Contact:

Post by KingLeonid »

DWORD wrote:You can probably do it on the GPU using a vertex shader, with a mesh where the vertex positions of both meshes are present in the vertex buffer, e.g. in POSITION and TEXCOORD1 (I'm not too familiar with shaders, so bear with me). Then you can have a blending factor as a parameter. The meshes must obviously have the same number of vertices, and if you want to morph the texture too, it'll probably be a bit more complicated.
Hmm, I thought about it, but I have no idea how to write GPU vertex programs, and
learning Cg scares me for now.
So I think that in the near future I'll implement this via a sequence-of-meshes animation,
and later I'll probably come back and reimplement it as a "MyMorph" class.

BUT: will it be fast enough to hide one mesh and show another one in the same
position every frame? Assume that all models and materials are pre-loaded at startup.
Best regards,
Leo.
User avatar
DWORD
OGRE Retired Moderator
Posts: 1365
Joined: Tue Sep 07, 2004 12:43 pm
Location: Aalborg, Denmark
Contact:

Post by DWORD »

Cg is not hard to learn, at least not for doing this. I think you could have it working in little more than a couple of hours, so no need to spend time creating lots of different models, IMHO. It will probably be fast enough to switch meshes each frame, but it will take a lot of meshes to make a smooth animation. The shader method saves CPU cycles, memory, and content creation time. ;)

Edit: I should probably give you some hints on getting started with Cg programs, but I don't know much about it either. You can look in the Ogre manual for how to bind vertex programs to materials, and the samples include some shaders you can look at. Does anyone have more tips?

You got me curious, I'd like to try this out too now. Just for the fun of it. :lol:

Edit: BTW, there was an IOTD on flipcode some time ago showing off something similar: http://www.flipcode.com/cgi-bin/fcartic ... show=65105.
User avatar
DWORD
OGRE Retired Moderator
Posts: 1365
Joined: Tue Sep 07, 2004 12:43 pm
Location: Aalborg, Denmark
Contact:

Post by DWORD »

Check out this paper. :shock:

That's probably overkill, but it would be awesome! I don't have any test models and I'm not an artist, so I'll have to think up some mathematically defined meshes to morph. :roll:
User avatar
KingLeonid
Gnoblar
Posts: 15
Joined: Wed Apr 13, 2005 12:57 pm
Location: St.Petersburg, Russia
Contact:

Post by KingLeonid »

DWORD wrote:Check out this paper. :shock:

That's probably overkill, but it would be awesome! I don't have any test models and I'm not an artist, so I'll have to think up some mathematically defined meshes to morph. :roll:
Thank you very much for the useful link. I'll try to read the PDF, but at first glance it's too complicated and scientific for me.
I'm thinking of a simpler algorithm where I have two meshes with the same number of vertices and the program does a simple linear position translation for each pair of vertices (from figure 1 to figure 2). And as far as I understand, when my morph cycle is complete, I simply need to hide figure 1 (already morphed), show figure 2, and start
skeletal animation or whatever for that figure.
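The per-vertex linear translation described above is just a lerp between each pair of positions. A minimal CPU sketch in plain C++ (no Ogre types; `Vec3` and `morphVertices` are illustrative names):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Linearly interpolate every vertex of mesh 1 towards mesh 2.
// t = 0.0 gives mesh 1, t = 1.0 gives mesh 2.
std::vector<Vec3> morphVertices(const std::vector<Vec3>& from,
                                const std::vector<Vec3>& to,
                                float t)
{
    assert(from.size() == to.size()); // both meshes need the same vertex count
    std::vector<Vec3> out(from.size());
    for (std::size_t i = 0; i < from.size(); ++i)
    {
        out[i].x = from[i].x + t * (to[i].x - from[i].x);
        out[i].y = from[i].y + t * (to[i].y - from[i].y);
        out[i].z = from[i].z + t * (to[i].z - from[i].z);
    }
    return out;
}
```

Call it each frame with t stepped from 0 to 1, then swap in the real figure 2 mesh when t reaches 1.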
Best regards,
Leo.
User avatar
DWORD
OGRE Retired Moderator
Posts: 1365
Joined: Tue Sep 07, 2004 12:43 pm
Location: Aalborg, Denmark
Contact:

Post by DWORD »

That article admittedly looks quite complex; I didn't mean you should go implementing it.
KingLeonid wrote:I'm thinking of a simpler algorithm where I have two meshes with the same number of vertices and the program does a simple linear position translation for each pair of vertices (from figure 1 to figure 2). And as far as I understand, when my morph cycle is complete, I simply need to hide figure 1 (already morphed), show figure 2, and start skeletal animation or whatever for that figure.
Yup.
User avatar
:wumpus:
OGRE Retired Team Member
Posts: 3067
Joined: Tue Feb 10, 2004 12:53 pm
Location: The Netherlands
x 1

Post by :wumpus: »

I'd definitely go for the simple algorithm. Hoppe has also done some very nice research into morphing, blobs, etc. using new SM3.0 features, but for morphing a mesh into a sphere you need none of that stuff :)
User avatar
KingLeonid
Gnoblar
Posts: 15
Joined: Wed Apr 13, 2005 12:57 pm
Location: St.Petersburg, Russia
Contact:

Post by KingLeonid »

:wumpus: wrote:I'd definitely go for the simple algorithm. Hoppe has also done some very nice research into morphing, blobs, etc. using new SM3.0 features, but for morphing a mesh into a sphere you need none of that stuff :)
Hah, I didn't mean morphing into a sphere only; when I implement that algorithm,
I'll make it work for any two meshes with the same number of vertices.
But I'm not sure my 3D knowledge is up to it yet; anyway, I'll try.
Best regards,
Leo.
User avatar
Falagard
OGRE Retired Moderator
Posts: 2060
Joined: Thu Feb 26, 2004 12:11 am
Location: Toronto, Canada
x 3
Contact:

Post by Falagard »

What I would do is create two implementations: one for fixed-function hardware and one for cards with vertex shader support.

Both use the same principles, but one is done on the CPU and the other on the GPU.

For morphing you not only need two meshes with the same number of verts; the verts also have to be ordered the same way and sit in roughly the same spatial positions (verts on top of the source mesh should also be on top of the dest mesh). The best way to achieve this is to use 3ds Max's Conform functionality.

You're going to a sphere, so yours is actually going to be easy, but let me take the example of a robot morphing into a sheep instead ;-)

What you do is take your original two meshes and build two morphing meshes from them. Take a sphere in 3ds Max with an appropriate number of verts, size it larger than your robot, and place it over the robot. Use Max's Conform feature to make the sphere conform to the robot's shape, choosing the option that moves all the verts of the sphere onto the robot - read the docs to find out how. You now have a robot-shaped blob - the sphere verts have moved in towards the robot mesh. You can now fine-tune the verts of this blob manually by applying an Edit Mesh modifier, or just leave it as is for now.

Do the same for your sheep using exactly the same type of sphere, same number of verts, etc.

You now have your two morph targets with the same number of verts, and the verts are in positions that will let them morph nicely. You'll need to texture them similarly to your robot and sheep meshes, but you probably want to create a single texture even if your original meshes were made from multiple textures. This is because during morphing you want all the verts to stick together; with multiple materials they might split apart, I'm not sure.

Now, when you want to morph from robot to sheep, the verts all match up - there's the same number, and the winding is the same. You can easily move each vert from robot to sheep using interpolation on the CPU, and the verts won't cross each other and look messed up during the morph. So: hide your real robot, show your morph robot, and over several frames move the verts of the robot morph mesh towards the positions of the sheep morph mesh's verts. When the morph is finished, hide the morph mesh and show the real sheep model.

Now, to do GPU morphing, you create a single mesh out of your two meshes, with two positions for each vert: one is the source position, the other is the dest position. Then pass a time value (0.0 to 1.0) as a shader constant to interpolate between the source and dest positions in the shader.
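For reference, that time value could be exposed in an Ogre material script as a named parameter, roughly like this (the program name `morph_vp`, the parameter name `morphFactor`, and the texture name are made up for illustration):

```
material RobotMorph
{
    technique
    {
        pass
        {
            vertex_program_ref morph_vp
            {
                param_named_auto worldViewProj worldviewproj_matrix
                // Updated from code each frame, e.g. via
                // pass->getVertexProgramParameters()->setNamedConstant("morphFactor", t);
                param_named morphFactor float 0.0
            }
            texture_unit
            {
                texture robot.png
            }
        }
    }
}
```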

You'd probably also want to do some funky material fading between the robot textures and sheep textures during the morph.

I've always been meaning to put together a demo of this, but haven't had the time or the need so far.

Good luck!

Clay
User avatar
KingLeonid
Gnoblar
Posts: 15
Joined: Wed Apr 13, 2005 12:57 pm
Location: St.Petersburg, Russia
Contact:

Post by KingLeonid »

Thank you very much for such a detailed explanation, I'll try it.
But I'm not sure I understood your text about the GPU correctly:
Falagard wrote:Now, to do GPU morphing what you need to do is create a single mesh out of your two meshes, with two positions for each vert. One position is the source position, one is the dest position, then pass a time value (0.0 to 1.0) in a shader constant to interpolate position between source and dest position values in the shader.
How can I create a single mesh from two meshes? Do they have the same position coordinates? And how do I know which vertex is from which mesh (which one to interpolate, and in which direction)?

Anyway, I think all these questions come from my lack of knowledge of GPU programming, so thank you very much for the ideas; I'm going to read some books and articles about GPUs :shock:
Best regards,
Leo.
User avatar
SuprChikn
Bugbear
Posts: 863
Joined: Tue Apr 19, 2005 6:10 am
Location: Melbourne, Aus
Contact:

Post by SuprChikn »

OK, I don't actually know how to do this, but I think what Falagard is saying is that you create one mesh that basically has two states.
In state one, the verts are in positions that give it the shape of mesh one (your guy), and in state two they are in positions that give it the shape of mesh two (the ball).
So each vert has a start position and an end position for switching between the two shapes.
If you already understood this much, then sorry for the redundant post.
User avatar
KingLeonid
Gnoblar
Posts: 15
Joined: Wed Apr 13, 2005 12:57 pm
Location: St.Petersburg, Russia
Contact:

Post by KingLeonid »

SuprChikn wrote:OK, I don't actually know how to do this, but I think what Falagard is saying is that you create one mesh that basically has two states.
In state one, the verts are in positions that give it the shape of mesh one (your guy), and in state two they are in positions that give it the shape of mesh two (the ball).
So each vert has a start position and an end position for switching between the two shapes.
If you already understood this much, then sorry for the redundant post.
Hmm, that's quite interesting. The only way I know of is to make a mesh with two
submeshes - robot and sphere - but after attaching such a mesh to a scene node I'll see
two models on the screen. Correct me if I'm wrong?
Best regards,
Leo.
User avatar
DWORD
OGRE Retired Moderator
Posts: 1365
Joined: Tue Sep 07, 2004 12:43 pm
Location: Aalborg, Denmark
Contact:

Post by DWORD »

See my post above for one way to put two meshes in one: http://www.ogre3d.org/phpBB2/viewtopic. ... 1278#71278

It doesn't matter what you store in the vertex buffer, so there's no problem in using TEXCOORD elements for positions and such.
User avatar
Falagard
OGRE Retired Moderator
Posts: 2060
Joined: Thu Feb 26, 2004 12:11 am
Location: Toronto, Canada
x 3
Contact:

Post by Falagard »

To combine two meshes into one, you need to create a tool that takes two mesh names and builds a new mesh with something like the following for each vertex (define a new vertex declaration - there is code showing how to do this in Ogre in various places on the forums, etc.).

Each vertex will have the following:

Position (start position)
Normal
Texture coordinates
3 floats (destination position)

Code: Select all

VertexDeclaration* vertDecl = HardwareBufferManager::getSingleton().createVertexDeclaration();
vertDecl->addElement(0, 0, VET_FLOAT3, VES_POSITION);
vertDecl->addElement(0, sizeof(float)*3, VET_FLOAT3, VES_NORMAL);
vertDecl->addElement(0, sizeof(float)*6, VET_FLOAT2, VES_TEXTURE_COORDINATES, 0);
// destination position, stored as a second set of texture coordinates
vertDecl->addElement(0, sizeof(float)*8, VET_FLOAT3, VES_TEXTURE_COORDINATES, 1);
Do a search for, say, VES_POSITION in the Ogre code and you'll find places where meshes are created in code.

In code you can easily create a mesh and fill in the position, normal, and texture information from the first mesh, and the final position floats from the position information of the second mesh.

You can either do this at runtime in your game or, better yet, pre-process it in a tool, serialize it out to a new .mesh file, and never bother with the other two meshes again.

Ignore GPU programming for a minute and just think about the above and how to do it on the CPU. If you have a single mesh where each vertex has two sets of position information, and you have a "time" value, you can interpolate between the two.

The GPU version is the same thing, but you can worry about it afterwards, and it's easy.
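A CPU sketch of that single-mesh idea, with both position sets packed per vertex (plain C++; the `MorphVertex` layout and names are illustrative, not Ogre's actual vertex format):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One vertex carrying both morph targets:
// start position, normal, UV, destination position.
struct MorphVertex
{
    float startPos[3];
    float normal[3];
    float uv[2];
    float endPos[3];
};

// Write interpolated positions into a flat output array;
// t in [0, 1] plays the role of the shader constant.
void interpolatePositions(const std::vector<MorphVertex>& verts,
                          float t, std::vector<float>& outPositions)
{
    outPositions.resize(verts.size() * 3);
    for (std::size_t i = 0; i < verts.size(); ++i)
        for (int c = 0; c < 3; ++c)
            outPositions[i * 3 + c] =
                verts[i].startPos[c] + t * (verts[i].endPos[c] - verts[i].startPos[c]);
}
```

On the GPU the loop body becomes the vertex program and t becomes a shader constant; the data layout stays the same.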
User avatar
:wumpus:
OGRE Retired Team Member
Posts: 3067
Joined: Tue Feb 10, 2004 12:53 pm
Location: The Netherlands
x 1

Post by :wumpus: »

I just realized - don't you need a second set of normals as well? Otherwise your sphere won't be shaded like a sphere at all.
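A lerp between two unit normals is generally no longer unit length, so the interpolated normal should also be renormalized before lighting. A small sketch (plain C++, illustrative names):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Blend two unit normals and renormalize the result, so lighting
// stays correct mid-morph (a plain lerp shortens the vector).
Vec3 blendNormal(const Vec3& n1, const Vec3& n2, float t)
{
    Vec3 n{ n1.x + t * (n2.x - n1.x),
            n1.y + t * (n2.y - n1.y),
            n1.z + t * (n2.z - n1.z) };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 1e-6f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```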
User avatar
KingLeonid
Gnoblar
Posts: 15
Joined: Wed Apr 13, 2005 12:57 pm
Location: St.Petersburg, Russia
Contact:

Post by KingLeonid »

Thank you very much for the explanation; now I understand the general approach. I'll try it and
post results.
P.S.: Of course, I need a second normal to interpolate to.
Best regards,
Leo.
User avatar
KingLeonid
Gnoblar
Posts: 15
Joined: Wed Apr 13, 2005 12:57 pm
Location: St.Petersburg, Russia
Contact:

Post by KingLeonid »

OK, here is the first version of my Morpher class. Currently I've implemented only the CPU (software) version of the morph. It morphs two meshes with the same number of vertices and the same number of submeshes. Beware that the vertices in both meshes must be in the same order.

I'm waiting for your comments, and especially criticism!

MeshMorpher.hxx

Code: Select all

#ifndef __MESH_MORPHER_HXX__
#define __MESH_MORPHER_HXX__

#include <OgreMovableObject.h>
#include <OgreMesh.h>
#include <OgreSubMesh.h>

#include <vector>

namespace Ogre
{

    // Base class for morphers. We will use it to build
    // subclasses with different morph techniques
    class BaseMorpher
    {
    protected:
	//We need a copy of the original 'from' mesh for adding our extra
	//data and for tweaking its vertices
	MeshPtr                  morph;  //our new mesh - we'll morph it

	Real                     morph_time;
	Real                     time_spent;
	Real                     step_sign;

	//method that creates our morph copy and does pre-calculations
	virtual void             initialize(MeshPtr from, MeshPtr to)=0;

    public:
	
	BaseMorpher(const MeshPtr& from, const MeshPtr& to, 
		    Real _morph_time=2.0)
            : morph_time(_morph_time), 
              time_spent(0.0), 
              step_sign(1.0) {};

	const String&            getName() const { return morph->getName();};
	void                     set_start_direction(Real start_pos=0.0, bool forward=true);
	void                     reverse_direction() { step_sign *= -1.0; };

	bool                     is_forward_direction() const { return step_sign > 0.0;};
	Real                     get_time_spent() const { return time_spent;};


	//call every frame to progress the morphing
	//returns true if there is at least one step left
	//and false when morphing is done.
	virtual bool             do_morph(Real delta_time)=0;

    };


    //Implementation of the morph on the CPU
    class MorpherCPU: public BaseMorpher
    {

	struct VertInfo
	{
	    Vector3              pos1; //vertex position from first mesh
	    Vector3              posdir;
	    Vector3              norm1;
	    Vector3              normdir;
	};

	typedef std::vector<MorpherCPU::VertInfo*>        VpVector; //vector of VertInfo pointers

	VpVector                 vinfo; //directions & positions

	//method that creates our morph copy and does pre-calculations
	virtual void             initialize(MeshPtr from, MeshPtr to);
	
    public:
	MorpherCPU(const MeshPtr& from, const MeshPtr& to, 
		   Real _morph_time=2.0);

	virtual bool             do_morph(Real delta_time);
	
    };

};

#endif
MeshMorpher.cxx

Code: Select all

#include "MeshMorpher.hxx"
#include <OgreHardwareBuffer.h>
#include <OgreHardwareBufferManager.h>

namespace Ogre
{

    ///////////////////////////BaseMorpher class methods/////////////////////////
    //Set start stage and morphing direction
    void BaseMorpher::set_start_direction(Real start_pos, bool forward)
    {
        time_spent = start_pos;
        time_spent = time_spent > morph_time ? morph_time : time_spent;
        time_spent = time_spent < 0.0 ? 0.0 : time_spent;
        step_sign = forward ? 1.0 : -1.0;
    }

    ///////////////////////////MorpherCPU class methods//////////////////////////
    //Constructor
    MorpherCPU::MorpherCPU(const MeshPtr& from, const MeshPtr& to, 
			   Real _morph_time):
	BaseMorpher(from, to, _morph_time)
    {
	initialize(from, to);
	morph->load();
	morph->touch();
    }
    
    //Initialize - build the new morph mesh and calculate extra data for morphing
    void MorpherCPU::initialize(MeshPtr from, MeshPtr to)
    {
	bool used_shared=false;
	int  nsubs = from->getNumSubMeshes();

	if(nsubs != to->getNumSubMeshes())
	    OGRE_EXCEPT(2001, "Can't create Morpher for two meshes with different number of submeshes",
			"MorpherCPU::initialise");

	morph = from->clone(from->getName() + "#Morph");
	//reorganise vertex buffers for dynamic writing every frame
	for(int i=0;i<nsubs;i++)
	{
	    SubMesh* msub = morph->getSubMesh(i);
	    if(msub->useSharedVertices && used_shared)
	    {
		vinfo.push_back(0); //this is a shared buffer, so skip it with 0
		continue;
	    }

	    //new vertex data and declaration
	    VertexData* newdata= new VertexData();
        
	    newdata->hardwareShadowVolWBuffer = msub->vertexData->hardwareShadowVolWBuffer;
	    newdata->vertexCount = msub->vertexData->vertexCount;

	    size_t        size=0;
	    //calculate size of elements minus VES_POSITION, VES_NORMAL
	    for(unsigned int j=0;j<msub->vertexData->vertexDeclaration->getElementCount();j++)
	    {
		const VertexElement* el = msub->vertexData->vertexDeclaration->getElement(j);
		switch(el->getSemantic())
		{
		case Ogre::VES_POSITION:
		case Ogre::VES_NORMAL:
		    break;
		default:
		    size += el->getSize();
		}
	    }
	    //create hardware buffers
	    //first for our position and normal (dynamic update every frame)
	    HardwareVertexBufferSharedPtr buf=
		HardwareBufferManager::getSingleton().createVertexBuffer(
									 6*sizeof(Real), // 3 Real for VES_POS + 3 Real for VES_NORMAL
									 newdata->vertexCount, // number of vertices
									 HardwareBuffer::HBU_DYNAMIC_WRITE_ONLY_DISCARDABLE, // usage
									 false);
	    newdata->vertexBufferBinding->setBinding(0, buf);
	    if(size)
	    {
		//second static buffer for other vertex data
		buf = HardwareBufferManager::getSingleton().createVertexBuffer(
									       size, // 
									       newdata->vertexCount, // number of vertices
									       HardwareBuffer::HBU_STATIC_WRITE_ONLY, // usage
									       false);
		newdata->vertexBufferBinding->setBinding(1, buf);
	    }
        
	    //transform vertex declaration to the new one
	    for(unsigned int j=0;j<msub->vertexData->vertexDeclaration->getElementCount();j++)
	    {
		const VertexElement* el = msub->vertexData->vertexDeclaration->getElement(j);
		switch(el->getSemantic())
		{
		case Ogre::VES_POSITION: //add position to buffer 0
		    newdata->vertexDeclaration->addElement(0, 
							   newdata->vertexDeclaration->getVertexSize(0),
							   el->getType(), VES_POSITION, el->getIndex());
		    break;
		case Ogre::VES_NORMAL: //add normal to buffer 0
		    newdata->vertexDeclaration->addElement(0, 
							   newdata->vertexDeclaration->getVertexSize(0),
							   el->getType(), VES_NORMAL, el->getIndex());
		    break;
		default: //other go to buffer 1 (static)
		    newdata->vertexDeclaration->addElement(1, 
							   newdata->vertexDeclaration->getVertexSize(1),
							   el->getType(), el->getSemantic(), el->getIndex());
		    break;
		}
	    }

	    //swap buffers
	    delete msub->vertexData;
	    msub->vertexData = newdata;

	    if(msub->useSharedVertices)
	    {
		morph->sharedVertexData = newdata;
    		used_shared = true;
	    }

	    //now copy vertices and calculate our vectors

	    //lock all 'FROM'/'TO' buffers for reading
	    {
		size_t nvertices = newdata->vertexCount;

		VertexData *vfrom = from->getSubMesh(i)->vertexData;
		VertexData *vto = to->getSubMesh(i)->vertexData;
		HardwareVertexBufferSharedPtr*  frombuf = new HardwareVertexBufferSharedPtr[vfrom->vertexBufferBinding->getBufferCount()];
		HardwareVertexBufferSharedPtr*  tobuf = new HardwareVertexBufferSharedPtr[vfrom->vertexBufferBinding->getBufferCount()];
		unsigned char** fromptr = new unsigned char*[vfrom->vertexBufferBinding->getBufferCount()];
		unsigned char** toptr = new unsigned char*[vfrom->vertexBufferBinding->getBufferCount()];
		for(unsigned bi = 0; bi < vfrom->vertexBufferBinding->getBufferCount(); bi++)
		{
		    frombuf[bi]=vfrom->vertexBufferBinding->getBuffer(bi);
		    fromptr[bi] = (unsigned char*)frombuf[bi]->lock(HardwareBuffer::HBL_READ_ONLY);
		    tobuf[bi]=vto->vertexBufferBinding->getBuffer(bi);
		    toptr[bi]= (unsigned char*)tobuf[bi]->lock(HardwareBuffer::HBL_READ_ONLY);
		}

		//lock our buffers for full writing
		HardwareVertexBufferSharedPtr  ourbuf[2];
		ourbuf[0] = newdata->vertexBufferBinding->getBuffer(0);
		if(size)
		    ourbuf[1] = newdata->vertexBufferBinding->getBuffer(1);
    
		unsigned char* ourptr[2];
		ourptr[0] = (unsigned char*) ourbuf[0]->lock(HardwareBuffer::HBL_DISCARD);
		if(size)
		    ourptr[1] = (unsigned char*) ourbuf[1]->lock(HardwareBuffer::HBL_DISCARD);

		//create our data storage
		MorpherCPU::VertInfo *vpin=new MorpherCPU::VertInfo[nvertices];
		vinfo.push_back(vpin);

		size_t nelems = newdata->vertexDeclaration->getElementCount();
		//cycle for each vertex in this submesh
		for(unsigned int vi=0;vi<nvertices;vi++)
		{
		    //copy each element of current vertex
		    for(unsigned int j=0;j<nelems;j++)
		    {
			const VertexElement* el = newdata->vertexDeclaration->getElement(j);
			const VertexElement* fel = vfrom->vertexDeclaration->getElement(j);

			unsigned char*  curfromptr = fromptr[fel->getSource()] + fel->getOffset() 
                            + vi*vfrom->vertexDeclaration->getVertexSize(fel->getSource());
			unsigned char*  curptr = ourptr[el->getSource()] + el->getOffset() 
                            + vi*newdata->vertexDeclaration->getVertexSize(el->getSource());
			memcpy((void*) curptr, curfromptr, el->getSize());

			//perform our calculation
			switch(el->getSemantic())
			{
			case Ogre::VES_POSITION:
			    {
				const VertexElement* tel = vto->vertexDeclaration->findElementBySemantic(VES_POSITION);
				unsigned char*  curtoptr = toptr[tel->getSource()] + tel->getOffset() 
				    + vi*vto->vertexDeclaration->getVertexSize(tel->getSource());
				Real* frompos = (Real*)(curfromptr);
				Real* topos = (Real*)(curtoptr);
				vpin[vi].pos1 = Vector3(frompos[0], frompos[1], frompos[2]);
				vpin[vi].posdir = Vector3(topos[0], topos[1], topos[2]) - vpin[vi].pos1;
			    }
			    break;
			case Ogre::VES_NORMAL:
			    {
				const VertexElement* tel = vto->vertexDeclaration->findElementBySemantic(VES_NORMAL);
				unsigned char*  curtoptr = toptr[tel->getSource()] + tel->getOffset() 
				    + vi*vto->vertexDeclaration->getVertexSize(tel->getSource());
				Real* frompos = (Real*)(curfromptr);
				Real* topos = (Real*)(curtoptr);
				vpin[vi].norm1 = Vector3(frompos[0], frompos[1], frompos[2]);
				vpin[vi].normdir = Vector3(topos[0], topos[1], topos[2]) - vpin[vi].norm1;
			    }        
			    break;
			default:
			    break;
			}
		    } //for all elements end
		} //for all vertices end

		//unlock all our buffers
		ourbuf[0]->unlock();
		if(size)
		    ourbuf[1]->unlock();

		for(unsigned bi = 0; bi < vfrom->vertexBufferBinding->getBufferCount(); bi++)
		{
		    frombuf[bi]->unlock();
		    tobuf[bi]->unlock();
		}

		delete[] frombuf;
		delete[] tobuf;
		delete[] fromptr;
		delete[] toptr;

	    } //end copy and transform

	} //end for all submeshes

    }
    

    //Perform the next morph stage according to the given time interval
    //return true if at least one more stage is required for morphing
    //return false if morphing is complete
    bool MorpherCPU::do_morph(Real delta_time)
    {
        bool used_shared=false;
        Real percent=0.0;
        //We've already done...
        if((step_sign > 0 && time_spent >= morph_time)
	   || (step_sign < 0 && time_spent <= 0.0))
	    return false;

        time_spent += delta_time*step_sign;
        
        percent = time_spent / morph_time;

        percent = percent > 1.0 ? 1.0 : percent;
        percent = percent < 0.0 ? 0.0 : percent;

        //copy and transform vertex position and normal
        for(int i=0;i<morph->getNumSubMeshes();i++)
        {
            SubMesh* msub = morph->getSubMesh(i);
	    
            if(msub->useSharedVertices && used_shared)
                continue;

            VertexData* dat=msub->vertexData;
            VertInfo* vpin = vinfo[i];
            //lock our buffer
            HardwareVertexBufferSharedPtr buf=dat->vertexBufferBinding->getBuffer(0);
            unsigned char* ptr = static_cast<unsigned char*>(buf->lock(Ogre::HardwareBuffer::HBL_DISCARD));

            VertexDeclaration::VertexElementList elems = dat->vertexDeclaration->findElementsBySource(0);
	    VertexDeclaration::VertexElementList::iterator it;
		    
	    for(unsigned int vi=0;vi < dat->vertexCount; vi++)
	    {
		Real* pReal;
		for (it = elems.begin(); it != elems.end(); ++it)
		{
		    VertexElement& elem = *it;
		    elem.baseVertexPointerToElement(ptr, &pReal);
		    if (elem.getSemantic() == VES_POSITION)
		    {
			Vector3 newpos = vpin[vi].pos1 + vpin[vi].posdir*percent;
			pReal[0]=newpos[0];
			pReal[1]=newpos[1];
			pReal[2]=newpos[2];
			continue;
		    }
		    if (elem.getSemantic() == VES_NORMAL)
		    {
			Vector3 newnorm = vpin[vi].norm1 + vpin[vi].normdir*percent;
			pReal[0]=newnorm[0];
			pReal[1]=newnorm[1];
			pReal[2]=newnorm[2];
			continue;
		    }
		}
		ptr += buf->getVertexSize();
	    }
	    buf->unlock();
        }

        return time_spent > 0.0 && time_spent < morph_time;
    }
}
Usage:

Code: Select all

initialization:
....
        MeshPtr mesh1 = MeshManager::getSingleton().load("model1.mesh", ResourceGroupManager::getSingleton().getWorldResourceGroupName());
        MeshPtr mesh2 = MeshManager::getSingleton().load("model2.mesh", ResourceGroupManager::getSingleton().getWorldResourceGroupName());
        morph = new MorpherCPU(mesh1, mesh2, 5); // 5 seconds to complete morph animation
....
in every frame:
    if(!morph->do_morph(evt.timeSinceLastFrame))
        morph->reverse_direction();
....
Next on my TODO list:
  • Morph meshes with a different number of submeshes but the same number of vertices
  • Try GPU Cg programming and write a MorpherGPU class with hardware morphing
  • Texture blending from mesh 1 to mesh 2 while the morph animates.
Waiting for your thoughts!
Best regards,
Leo.