Flush GPU command buffer to stop input lag.

A place for users of OGRE to discuss ideas and experiences of utilising OGRE in their games / demos / applications.
Jabberwocky
OGRE Moderator
Posts: 2819
Joined: Mon Mar 05, 2007 11:17 pm
Location: Canada
x 218

Re: Flush GPU command buffer to stop input lag.

Post by Jabberwocky »

DanielSefton wrote:

Code: Select all

Ogre::GpuCommandBufferFlush bufferFlush;
bufferFlush.start();
You aren't declaring bufferFlush as a local variable on the stack, and letting it go out of scope while your game is running, are you? If so, that would explain the crash, since you're registered as a frame listener.

Are you calling GpuCommandBufferFlush::stop() anywhere?
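To illustrate what I mean, here's a minimal sketch of the two patterns (MyApp and its methods are just placeholder names, not from Daniel's code):

Code: Select all

// Risky: the flusher registers itself as a frame listener in start(),
// but this local is destroyed as soon as setupBad() returns.
void MyApp::setupBad()
{
    Ogre::GpuCommandBufferFlush bufferFlush;   // stack local
    bufferFlush.start();
}   // bufferFlush goes out of scope here, while the game keeps running

// Safer: keep it alive for the lifetime of the application.
class MyApp
{
    Ogre::GpuCommandBufferFlush mBufferFlush;  // member, lives as long as the app

public:
    void setup()    { mBufferFlush.start(); }
    void shutdown() { mBufferFlush.stop(); }
};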
mkultra333
Gold Sponsor
Posts: 1894
Joined: Sun Mar 08, 2009 5:25 am
x 114

Re: Flush GPU command buffer to stop input lag.

Post by mkultra333 »

It isn't going out of scope AFAICT. Here's the main loop.

Code: Select all

int DemoApp::runDemo_MethodB()
{
    OgreFramework::getSingletonPtr()->m_pLog->logMessage("Start main loop (MethodB)...");

    char chMessage[1024] ;

    UINT uFrameStartTime=OgreFramework::getSingletonPtr()->m_pTimer->getMilliseconds();
    UINT uFrameTotalTime=0 ;

    OgreFramework::getSingletonPtr()->m_pRenderWnd->resetStatistics() ;

    int numberOfQueuedFrames=1 ;
    bool GPUBufferSetupDone=false ;

    while(!m_bShutdown && !OgreFramework::getSingletonPtr()->isOgreToBeShutDown())
    {
        if(OgreFramework::getSingletonPtr()->m_pRenderWnd->isClosed())m_bShutdown = true;

#if OGRE_PLATFORM == OGRE_PLATFORM_WIN32
        Ogre::WindowEventUtilities::messagePump() ;
#endif

        if(OgreFramework::getSingletonPtr()->m_pRenderWnd->isActive())
        {
            if(GPUBufferSetupDone==false) // I added it here because I assume I can be very sure there's an active render window by now.
            {
                GPUBufferSetupDone=true ;
                mBufferFlush.start(numberOfQueuedFrames);
            }

            // get start time of frame
            uFrameStartTime=OgreFramework::getSingletonPtr()->m_pTimer->getMicroseconds() ;

            // update input and physics
            OgreFramework::getSingletonPtr()->m_pKeyboard->capture();
            OgreFramework::getSingletonPtr()->m_pMouse->capture();
            OgreFramework::getSingletonPtr()->updateOgre(uFrameTotalTime/1000.0f);

            // render the frame
            OgreFramework::getSingletonPtr()->UpdateRenderTargets() ;

            // calculate frame time.
            uFrameTotalTime=OgreFramework::getSingletonPtr()->m_pTimer->getMicroseconds()-uFrameStartTime ;
        }
        else
        {
            Sleep(1000);
        }
    }

    mBufferFlush.stop() ;

    OgreFramework::getSingletonPtr()->m_pLog->logMessage("Main loop quit (MethodB)");
    OgreFramework::getSingletonPtr()->m_pLog->logMessage("Shutdown OGRE...");

    return 1 ;
}
I've tried setting up mBufferFlush at the top of this function (should be fine, since this function loops endlessly until shutdown) and now also as a class variable. In both cases the code simply makes no difference on my computer; the lag caused by the full buffer is unaffected.

The other version, my original version using a small array of HOQs, works on my computer but crashes on another, more modern computer. The crash message is the same one Daniel gets using the wiki code.

As far as calling stop goes, the wiki says it isn't necessary but I've added it anyway. I have doubts it'll stop the crash though, because mBufferFlush.stop() doesn't get called until after you try to exit; it's just intended for cleanup. I can't actually test, since the crash only happens on a friend's computer, not mine, but since the wiki code doesn't seem to be flushing the buffers anyway it wouldn't help much even if it did stop the crash.

Might be worth mentioning that my graphics card and drivers are older, an NV 7950GT, while my friend has a 9800GT and a brand new system. I'm guessing Daniel's card was more modern than mine too. So perhaps something changed in the drivers or card architecture that is causing the crash (though that doesn't explain the lack of effect on my computer).

I figure there's a good chance that when my code crashes it's because of a bug in my own code; I can see one possible problem that I'll have to fix. But why the wiki code isn't functioning, I don't know.
"In theory there is no difference between practice and theory. In practice, there is." - Psychology Textbook.
mkultra333
Gold Sponsor
Posts: 1894
Joined: Sun Mar 08, 2009 5:25 am
x 114

Re: Flush GPU command buffer to stop input lag.

Post by mkultra333 »

For interest's sake I compared the 1.6.3 source to the 1.7.0 source; it looks like they're handling things very differently.

1.6.3

Code: Select all

void D3D9HardwareOcclusionQuery::endOcclusionQuery() 
	{ 
		mpQuery->Issue(D3DISSUE_END); 
	}
1.7.0

Code: Select all

	void D3D9HardwareOcclusionQuery::endOcclusionQuery() 
	{ 
		IDirect3DDevice9* pCurDevice  = D3D9RenderSystem::getActiveD3D9Device();				
		DeviceToQueryIterator it      = mMapDeviceToQuery.find(pCurDevice);
		
		if (it == mMapDeviceToQuery.end())
		{
			OGRE_EXCEPT(Exception::ERR_RENDERINGAPI_ERROR, 
				"End occlusion called without matching begin call !!", 
				"D3D9HardwareOcclusionQuery::endOcclusionQuery" );
		}

		IDirect3DQuery9* pOccQuery = mMapDeviceToQuery[pCurDevice];

		if (pOccQuery != NULL)
			pOccQuery->Issue(D3DISSUE_END); 
	}	
So 1.7.0 is doing some checks and mappings that 1.6.3 was not. Possibly both my code and the wiki code are falling foul of this.
"In theory there is no difference between practice and theory. In practice, there is." - Psychology Textbook.
syedhs
Silver Sponsor
Posts: 2703
Joined: Mon Aug 29, 2005 3:24 pm
Location: Kuala Lumpur, Malaysia
x 51

Re: Flush GPU command buffer to stop input lag.

Post by syedhs »

I am not really sure, but this sounds like 1.7 has to cater for multi-monitor setups, which would explain why previously there was only one device but now it is possible to have several (and hence multiple HOQ operations are possible). So is your friend's PC running on more than one monitor?
A willow deeply scarred, somebody's broken heart
And a washed-out dream
They follow the pattern of the wind, ya' see
Cause they got no place to be
That's why I'm starting with me
mkultra333
Gold Sponsor
Posts: 1894
Joined: Sun Mar 08, 2009 5:25 am
x 114

Re: Flush GPU command buffer to stop input lag.

Post by mkultra333 »

Yes, it is. He has two monitors. When I deactivated the HOQs it ran OK on the main monitor in D3D9, apart from the dreaded input lag. OpenGL messed up though: HOQs didn't hurt it, but it played on the secondary monitor with some overflow going onto the main monitor.
"In theory there is no difference between practice and theory. In practice, there is." - Psychology Textbook.
glennr
Greenskin
Posts: 126
Joined: Thu Jun 05, 2008 3:26 am
Location: Thames, New Zealand
x 9

Re: Flush GPU command buffer to stop input lag.

Post by glennr »

On 1.7.1 I am also sometimes (maybe 50%) getting this exception:

Code: Select all

         OGRE_EXCEPT(Exception::ERR_RENDERINGAPI_ERROR,
            "End occlusion called without matching begin call !!",
            "D3D9HardwareOcclusionQuery::endOcclusionQuery" );
From the following code:

Code: Select all

if(!hoq1)
{
    hoq1 = m_renderSystem->createHardwareOcclusionQuery();
    hoq2 = m_renderSystem->createHardwareOcclusionQuery();
}

std::swap(hoq1, hoq2);

hoq1->beginOcclusionQuery();

m_root->renderOneFrame();

hoq1->endOcclusionQuery();

unsigned int numFragments = 0;
hoq2->pullOcclusionQuery(&numFragments);
This code worked on 1.6.x and definitely calls beginOcclusionQuery(), so the error message is misleading. The code in beginOcclusionQuery() seems to work by putting the query into a map, and it looks like this fails (silently?) sometimes, causing the above exception when endOcclusionQuery() is called.

Edit: Is it possible that this is caused by the device being lost and then restored?

Code: Select all

D3D9 device: 0x[0F8FB300] lost. Releasing D3D9 texture: Reflection
Released D3D9 texture: Reflection
D3D9 Device 0x[0F8FB300] entered lost state
!!! Direct3D Device successfully restored.
D3D9 device: 0x[0F8FB300] was reset
First-chance exception at 0x75f3b727 in myapp.exe: Microsoft C++ exception: Ogre::RenderingAPIException at memory location 0x0fe3f618..
yep, that is what is happening:

Code: Select all

 	RenderSystem_Direct3D9_d.dll!Ogre::D3D9HardwareOcclusionQuery::releaseQuery(IDirect3DDevice9 * d3d9Device=0x0fd3b300)  Line 257	C++
>	RenderSystem_Direct3D9_d.dll!Ogre::D3D9HardwareOcclusionQuery::notifyOnDeviceLost(IDirect3DDevice9 * d3d9Device=0x0fd3b300)  Line 215	C++
 	RenderSystem_Direct3D9_d.dll!Ogre::D3D9ResourceManager::notifyOnDeviceLost(IDirect3DDevice9 * d3d9Device=0x0fd3b300)  Line 94 + 0x20 bytes	C++
 	RenderSystem_Direct3D9_d.dll!Ogre::D3D9Device::reset()  Line 378	C++
 	RenderSystem_Direct3D9_d.dll!Ogre::D3D9Device::validateDeviceState(Ogre::D3D9RenderWindow * renderWindow=0x08769e60)  Line 875 + 0x8 bytes	C++
 	RenderSystem_Direct3D9_d.dll!Ogre::D3D9Device::validate(Ogre::D3D9RenderWindow * renderWindow=0x08769e60)  Line 808 + 0xc bytes	C++
 	RenderSystem_Direct3D9_d.dll!Ogre::D3D9RenderWindow::_beginUpdate()  Line 774 + 0x12 bytes	C++
 	OgreMain_d.dll!Ogre::RenderTarget::updateImpl()  Line 100 + 0x12 bytes	C++
 	OgreMain_d.dll!Ogre::RenderTarget::update(bool swap=false)  Line 541 + 0x12 bytes	C++
 	OgreMain_d.dll!Ogre::RenderSystem::_updateAllRenderTargets(bool swapBuffers=false)  Line 112 + 0x21 bytes	C++
 	OgreMain_d.dll!Ogre::Root::_updateAllRenderTargets()  Line 1374 + 0x1c bytes	C++
 	OgreMain_d.dll!Ogre::Root::renderOneFrame()  Line 966 + 0x8 bytes	C++

What would be the best fix for this?
glennr
Greenskin
Posts: 126
Joined: Thu Jun 05, 2008 3:26 am
Location: Thames, New Zealand
x 9

Re: Flush GPU command buffer to stop input lag.

Post by glennr »

To summarise the info in my last post. What is happening is that if you get a DeviceLost during renderOneFrame() after beginOcclusionQuery() is called, then the query gets deleted and endOcclusionQuery() can't find it and raises that exception.

The quick solution is to do this:

Code: Select all

if(hoq1->isStillOutstanding())
{
    hoq1->endOcclusionQuery();
}
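In context, that guard slots in right after renderOneFrame() in the snippet from my earlier post; something like this (sketch only):

Code: Select all

m_root->renderOneFrame();

// the device may have been lost (and the query released) inside
// renderOneFrame(), so only end the query if it is still in flight
if(hoq1->isStillOutstanding())
{
    hoq1->endOcclusionQuery();
}

unsigned int numFragments = 0;
hoq2->pullOcclusionQuery(&numFragments);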
Jabberwocky
OGRE Moderator
Posts: 2819
Joined: Mon Mar 05, 2007 11:17 pm
Location: Canada
x 218

Re: Flush GPU command buffer to stop input lag.

Post by Jabberwocky »

Thanks for this.
I haven't upgraded to 1.7 yet, but when I do, this fix will be critically important for me, since the responsiveness of my game totally relies on GpuCommandBufferFlush working properly.
mkultra333
Gold Sponsor
Posts: 1894
Joined: Sun Mar 08, 2009 5:25 am
x 114

Re: Flush GPU command buffer to stop input lag.

Post by mkultra333 »

Yeah, I've just been ignoring this issue for the time being. I'll have to give this fix a try. Thanks.
"In theory there is no difference between practice and theory. In practice, there is." - Psychology Textbook.
mkultra333
Gold Sponsor
Posts: 1894
Joined: Sun Mar 08, 2009 5:25 am
x 114

Re: Flush GPU command buffer to stop input lag.

Post by mkultra333 »

Just added the wiki code back to my project. Added glennr's fix like so...

Code: Select all

if (mUseOcclusionQuery)
{
    if(mHOQList[mCurrentFrame]->isStillOutstanding()) // make sure the query wasn't lost due to a lost device
        mHOQList[mCurrentFrame]->endOcclusionQuery();
}
...though I've not had a chance to test it on multi-monitor systems yet.

Worked out why it wasn't working properly for me before (apart from the device lost crash). Since I don't use renderOneFrame, the code was never triggering the frameStarted and frameEnded code. I added
OgreFramework::getSingletonPtr()->m_pRoot->_fireFrameStarted() ;
and
OgreFramework::getSingletonPtr()->m_pRoot->_fireFrameEnded() ;
around my main loop and now it works.
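Trimmed right down, the relevant part of my loop now looks something like this (a sketch of the MethodB loop I posted earlier, not the full code):

Code: Select all

while(!m_bShutdown)
{
    // fire frameStarted so GpuCommandBufferFlush::frameStarted runs
    // (it begins the occlusion query for this frame)
    OgreFramework::getSingletonPtr()->m_pRoot->_fireFrameStarted();

    // input, physics and rendering as before
    OgreFramework::getSingletonPtr()->m_pKeyboard->capture();
    OgreFramework::getSingletonPtr()->m_pMouse->capture();
    OgreFramework::getSingletonPtr()->UpdateRenderTargets();

    // fire frameEnded so GpuCommandBufferFlush::frameEnded runs
    // (it ends the query and pulls an older one, forcing the flush)
    OgreFramework::getSingletonPtr()->m_pRoot->_fireFrameEnded();
}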
"In theory there is no difference between practice and theory. In practice, there is." - Psychology Textbook.
Jabberwocky
OGRE Moderator
Posts: 2819
Joined: Mon Mar 05, 2007 11:17 pm
Location: Canada
x 218

Re: Flush GPU command buffer to stop input lag.

Post by Jabberwocky »

Cool, I'm glad you got it working.
If you run into any other troubles (with multi-monitor or otherwise), please let us know.
mkultra333
Gold Sponsor
Posts: 1894
Joined: Sun Mar 08, 2009 5:25 am
x 114

Re: Flush GPU command buffer to stop input lag.

Post by mkultra333 »

Recent experiments have shown that setting the number of buffers to just 1 can be overkill and seriously impact the framerate. Now that my CPU is doing a little more work, and my graphics are rendering a little faster (generally around 40-50 fps), I've found that a default of 2 buffers works best, as a compromise between keeping the framerate up and stopping input lag. I did some extensive timing tests, and at 1 buffer I was routinely seeing an extra 9-10ms wasted doing nothing. 2 buffers reduces this to 0-5ms (usually 0) without bringing back the lag.

So unless your app is running at 30fps AND your CPU is doing virtually no work, I'd advise against 1 buffer as a default.
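In code terms that just means passing 2 to start() on a long-lived flusher, along these lines (sketch; numbers are from my own tests above):

Code: Select all

// 2 queued frames: enough to keep the GPU busy, but few enough to avoid the
// input lag a full command buffer causes. 1 stalled the GPU for me (an extra
// 9-10ms wasted per frame); higher values brought the input lag back.
mBufferFlush.start(2);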
"In theory there is no difference between practice and theory. In practice, there is." - Psychology Textbook.
Jabberwocky
OGRE Moderator
Posts: 2819
Joined: Mon Mar 05, 2007 11:17 pm
Location: Canada
x 218

Re: Flush GPU command buffer to stop input lag.

Post by Jabberwocky »

Cool.

I use the default (2). I just ran a test, and my frame rate was identical whether the GpuCommandBufferFlush was enabled or not. So at least in my case, on this computer, performance is not impacted by using GpuCommandBufferFlush.
CapnJJ
Gnoblar
Posts: 6
Joined: Tue Mar 09, 2010 7:48 am
x 1

Re: Flush GPU command buffer to stop input lag.

Post by CapnJJ »

Sinbad mentioned earlier in this thread that he may bring the fix for this problem into core.

Anyone know if this is still the plan? Presumably for 1.8 or after? (Not sure if there is another planned release before 1.8.)

Thanks,
CapnJJ
Jabberwocky
OGRE Moderator
Posts: 2819
Joined: Mon Mar 05, 2007 11:17 pm
Location: Canada
x 218

Re: Flush GPU command buffer to stop input lag.

Post by Jabberwocky »

That's a good question; I'm not sure whether it's gone into Ogre 1.8 or not.
If it's ever something you need, you could always grab the 1.8 code and do a search for GpuCommandBufferFlush. If it's not there, you can always drop the code in yourself, using the code from this thread or from the wiki.
mkultra333
Gold Sponsor
Posts: 1894
Joined: Sun Mar 08, 2009 5:25 am
x 114

Re: Flush GPU command buffer to stop input lag.

Post by mkultra333 »

Thought I'd add a few more snippets of info about buffer flushing.

First up, in another thread Oogst asked how many buffers are typically used. I had no idea, but while looking through the Nvidia graphics card global options the other day I noticed there is a control to set the maximum number of pre-rendered frames. Basically, this controls the number of buffers; it defaults to 3 and ranges from 0 to 8 on my card.

Secondly, I had a problem with the Nvidia stereoscopic driver culling some triangles it shouldn't when in SLI mode. The 3D Vision render of my game was producing artifacts: stuff disappearing from one eye's view at the edge of the screen. For reasons beyond me, setting the max GPU buffers to 2 fixed this issue, although doing the same thing via the Nvidia global options did not. Intriguing... at least if you are an SLI + 3D Vision user. I also discovered that turning on vsync fixed the problem, with the added benefit of not hurting the framerate.

I've also found different cards react differently to turning on buffer flushing. My ATI laptop doesn't like it much: it works, but the framerate hit is quite large and the display looks a bit flickery; then again, the laptop doesn't seem to get the input lag to begin with. Maybe it has fewer (or no) buffers, so trying to use them causes more problems than it solves; I don't know.
"In theory there is no difference between practice and theory. In practice, there is." - Psychology Textbook.
mkultra333
Gold Sponsor
Posts: 1894
Joined: Sun Mar 08, 2009 5:25 am
x 114

Re: Flush GPU command buffer to stop input lag.

Post by mkultra333 »

Found another use for GPU command buffer flushing.

I have an SLI system. I was testing some metaballs in my project, running in windowed mode, tweaking the code. It was running well. Then I ran it fullscreen, same resolution. Suddenly the framerate was considerably worse, and very erratic. Looking at the performance timings I had for various parts of the code and rendering, they were suddenly terrible and hard to understand. In particular, locking and updating a dynamic vertex buffer used for the metaball triangles was now taking anywhere from 8 to 15 milliseconds, where in windowed mode it was about half a millisecond.

Oddly, I noticed my 3DVision render path didn't have the same problem, even though in theory it should be slower. But one difference is, as I posted above, I set the GPU command buffer to only buffer 2 frames instead of the usual 3.

So I tried this on the standard, non-3DVision version of the renderer, and sure enough the framerate picked up and smoothed out. Locking and updating the vertex buffer was back to less than a millisecond.

So it seems that, at least with my Nvidia cards, SLI can play havoc with framerates unless you tweak the GPU command buffer flushing. Two-card SLI seems to work a lot better with a limit of 2 command buffers, at least in a complex rendering setup. (The problem hadn't shown up until I added the metaballs, so it's obviously dependent on just how your rendering is organized.)
"In theory there is no difference between practice and theory. In practice, there is." - Psychology Textbook.
Jabberwocky
OGRE Moderator
Posts: 2819
Joined: Mon Mar 05, 2007 11:17 pm
Location: Canada
x 218

Re: Flush GPU command buffer to stop input lag.

Post by Jabberwocky »

Since the release of Salvation Prophecy, I've had a few reports of game freezes which I tracked down to the use of this GpuCommandBufferFlush class. Recently, I bought an AMD Radeon HD 7700 Series video card, and experienced this problem on my computer.

The freeze occurs due to the while loop in D3D9HardwareOcclusionQuery::pullOcclusionQuery (in ogred3d9hardwareocclusionquery.cpp, Ogre v1.7.1)

Code: Select all

            while (1)
            {
                const HRESULT hr = it->second->GetData((void *)&pixels, dataSize, D3DGETDATA_FLUSH);

                if  (hr == S_FALSE)
                {
                  continue;
                }
                if  (hr == S_OK)
                {
                    mPixelCount = pixels;
                    *NumOfFragments = pixels;
                    break;
                }
                if (hr == D3DERR_DEVICELOST)
                {
                    *NumOfFragments = 0;
                    mPixelCount = 0;
                    SAFE_RELEASE(it->second);
                    break;
                }
            } 
The returned hr is equal to S_FALSE indefinitely, so we never break out of the loop.

It's likely a case of flaky drivers that may be fixed eventually, but my current driver is up to date and still exhibits this behaviour. In the meantime, what I did was add a timeout to the pullOcclusionQuery function; if we hit that timeout, the function returns false. I modified the calling code (GpuCommandBufferFlush) to disable itself if it detects this timeout condition. So at least the game won't hang, but it also won't be able to use this code to deal with input lag.
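Not the actual patch, but roughly the kind of change I mean, based on the loop above (the 500ms cap and the early returns are my own choices):

Code: Select all

// bounded version of the polling loop in pullOcclusionQuery, so a driver
// that keeps returning S_FALSE can no longer hang the game
const DWORD timeoutMs  = 500;            // arbitrary cap
const DWORD startTicks = GetTickCount();

while (1)
{
    const HRESULT hr = it->second->GetData((void *)&pixels, dataSize, D3DGETDATA_FLUSH);

    if (hr == S_OK)
    {
        mPixelCount = pixels;
        *NumOfFragments = pixels;
        return true;
    }
    if (hr == D3DERR_DEVICELOST)
    {
        *NumOfFragments = 0;
        mPixelCount = 0;
        SAFE_RELEASE(it->second);
        return false;
    }
    // hr == S_FALSE: not ready yet - give up if we've waited too long
    if (GetTickCount() - startTicks > timeoutMs)
    {
        *NumOfFragments = 0;
        return false;                    // caller (GpuCommandBufferFlush) can disable itself
    }
}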
syedhs
Silver Sponsor
Posts: 2703
Joined: Mon Aug 29, 2005 3:24 pm
Location: Kuala Lumpur, Malaysia
x 51

Re: Flush GPU command buffer to stop input lag.

Post by syedhs »

I am curious: does this problem also show up in other game engines or graphics rendering engines? Do they actively do something about it (like the GPU flush here), or do they just work without any frame hiccup?
A willow deeply scarred, somebody's broken heart
And a washed-out dream
They follow the pattern of the wind, ya' see
Cause they got no place to be
That's why I'm starting with me
Jabberwocky
OGRE Moderator
Posts: 2819
Joined: Mon Mar 05, 2007 11:17 pm
Location: Canada
x 218

Re: Flush GPU command buffer to stop input lag.

Post by Jabberwocky »

That's a great question. I don't know myself. Going back to Sinbad's original comment...
sinbad wrote:This is basically a problem of the CPU outpacing the GPU and filling up the command buffer. The easiest way to resolve it is to make the CPU do more work :)
I guess the input lag problem only occurs under specific kinds of workloads - notably low CPU usage combined with heavy GPU usage. I'm not sure how common this scenario is. Perhaps most game engines don't tend to end up in this scenario.
mkultra333
Gold Sponsor
Posts: 1894
Joined: Sun Mar 08, 2009 5:25 am
x 114

Re: Flush GPU command buffer to stop input lag.

Post by mkultra333 »

Jabberwocky, thanks for the info. Fortunately I allow the user to control whether HOQs are used or not, so I'll just include a readme note that this can be a problem on ATI and that they should disable HOQs if it comes up. My laptop is an ATI HD 5650 and has never hit the problem; it's probably using fairly old drivers, since I never update them on the laptop.
Jabberwocky wrote:I guess the input lag problem only occurs under specific kinds of workloads - notably low CPU usage combined with heavy GPU usage. I'm not sure how common this scenario is. Perhaps most game engines don't tend to end up in this scenario.
I think sometimes it comes up on menu screens if they have clever graphics going on in the background. But the other thing is that the class is useful for more than just fixing the CPU/GPU imbalance. As I posted above, it is also very useful for SLI systems in some situations, where it seems to fix some sort of clash between the two cards. I had this with a manually updated mesh: 3 buffers with SLI was erratic and slow, but 2 buffers was smooth and fast. Probably something to do with shuffling data between the two cards.

2 buffers instead of the default 3 also fixed a 3DVision SLI culling bug.
"In theory there is no difference between practice and theory. In practice, there is." - Psychology Textbook.
Jabberwocky
OGRE Moderator
Posts: 2819
Joined: Mon Mar 05, 2007 11:17 pm
Location: Canada
x 218

Re: Flush GPU command buffer to stop input lag.

Post by Jabberwocky »

Good point, mkultra, about the other uses for this class. Hopefully AMD sorts out their drivers, or we otherwise discover what might make an HOQ check fail indefinitely.
mkultra333 wrote:Fortunately I allow the user to control whether HOQs are used or not
Yeah, me too (in an options.cfg file). Making stuff like this configurable has saved my ass a couple of times since shipping. It allowed me to easily give tech support to users who ran into this problem. Still, it's not great if your first experience of playing a game is "crap, it freezes - time to turn to the internet/documentation/support to get it working". That's why I wanted to code in an auto-disable of GpuCommandBufferFlush if it appears to be hanging.
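The auto-disable itself is nothing fancy; on my side it boils down to something like this in GpuCommandBufferFlush::frameEnded (sketch; mDisabled is a flag I added, and it relies on the timeout-patched pullOcclusionQuery from my earlier post returning false):

Code: Select all

if (mStartPull && !mDisabled)
{
    unsigned int dummy;
    // the patched pullOcclusionQuery returns false if it timed out
    if (!mHOQList[mCurrentFrame]->pullOcclusionQuery(&dummy))
    {
        mDisabled = true;   // never try again this run
        stop();             // destroy the queries and unregister the frame listener
    }
}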
fantasian
Halfling
Posts: 81
Joined: Fri May 29, 2009 8:47 am
Location: Selanic, Greece
x 2

Urgent! (was: Flush GPU command buffer to stop input lag)

Post by fantasian »

Reviving this 3-year-old thread for a cause...

So I finally worked out why my game was jerky, even though it was reportedly running at 60fps on my powerful-CPU, underpowered-GPU configuration (i7-965 vs 9600 GT). Then the crashes started when I pressed Alt-Tab while in fullscreen mode (DeviceLost).

Then I found this post, which said to listen for RenderSystem events: http://www.ogre3d.org/forums/viewtopic. ... 98#p360298

Then I changed FlushGPUBuffer to this:

Code: Select all

#ifndef __GPUCOMMANDBUFFERFLUSH_H__
#define __GPUCOMMANDBUFFERFLUSH_H__
 
#include "OgrePrerequisites.h"
#include "OgreFrameListener.h"
 
namespace Ogre
{
 
    /** Helper class which can assist you in making sure the GPU command
        buffer is regularly flushed, so in cases where the CPU is outpacing the
        GPU we do not hit a situation where the CPU suddenly has to stall to
        wait for more space in the buffer.
    */
	class GpuCommandBufferFlush : public FrameListener, public Ogre::RenderSystem::Listener
    {
    protected:
        bool mUseOcclusionQuery;
        typedef std::vector<HardwareOcclusionQuery*> HOQList;
        HOQList mHOQList;
        size_t mMaxQueuedFrames;
        size_t mCurrentFrame;
        bool mStartPull;
        bool mStarted;

		bool waitingForDeviceRestore;
		bool isDeviceLost;
		bool isRenderSystemSListening;
 
    public:
        GpuCommandBufferFlush();
        virtual ~GpuCommandBufferFlush();
 
        void start(size_t maxQueuedFrames = 2);
        void stop();
        bool frameStarted(const FrameEvent& evt);
        bool frameEnded(const FrameEvent& evt);
 
		virtual void eventOccurred(const Ogre::String& eventName, const Ogre::NameValuePairList* parameters = 0);
    };
 
}
 
#endif

Code: Select all

#include "Stdafx.h"
#include "OgreGpuCommandBufferFlush.h"
#include "OgreRoot.h"
#include "OgreRenderSystem.h"
#include "OgreHardwareOcclusionQuery.h"
 
namespace Ogre
{
    //---------------------------------------------------------------------
    GpuCommandBufferFlush::GpuCommandBufferFlush()
        : mUseOcclusionQuery(true)
        , mMaxQueuedFrames(2)
        , mCurrentFrame(0)
        , mStartPull(false)
        , mStarted(false)
		, waitingForDeviceRestore(false)
		, isDeviceLost(false)
		, isRenderSystemSListening(false)
    {
 
    }
    //---------------------------------------------------------------------
    GpuCommandBufferFlush::~GpuCommandBufferFlush()
    {
        stop();

		RenderSystem* rsys;
		Ogre::Root* root;
		if ( isRenderSystemSListening && (root = Root::getSingletonPtr()) && (rsys = root->getRenderSystem()) )
			rsys->removeListener(this);
    }
    //---------------------------------------------------------------------
	void GpuCommandBufferFlush::eventOccurred(const Ogre::String& eventName, const Ogre::NameValuePairList* parameters)
	{
		std::cout << "GpuCommandBufferFlush eventOccurred=" << eventName << "=" << parameters << "\n";
		if (eventName == "DeviceLost") {
			isDeviceLost = true;
			stop();
		} else if (eventName == "DeviceRestored") {
			isDeviceLost = false;
			if (waitingForDeviceRestore) {
				std::cout << "DEVICE FINALLY RESTORED!!!\n";
				start();
			}
		}
	}
    //---------------------------------------------------------------------
    void GpuCommandBufferFlush::start(size_t maxQueuedFrames)
    {
		if (isDeviceLost) {
			std::cout << "MUST WAIT FOR DEVICE TO RESTORE!!!\n";
			waitingForDeviceRestore = true;
			return;
		}
        stop();

		RenderSystem* rsys;
		Ogre::Root* root;
		if ( !(root = Root::getSingletonPtr()) || !(rsys = root->getRenderSystem()) )
            return;
 
        mMaxQueuedFrames = maxQueuedFrames;
        mUseOcclusionQuery = rsys->getCapabilities()->hasCapability(RSC_HWOCCLUSION);
 
        mCurrentFrame = 0;
        mStartPull = false;
        if (mUseOcclusionQuery)
        {
            for (size_t i = 0; i < mMaxQueuedFrames; ++i)
            {
                HardwareOcclusionQuery* hoq = rsys->createHardwareOcclusionQuery();
                mHOQList.push_back(hoq);
            }
        }
 
		if (!isRenderSystemSListening) {
			isRenderSystemSListening = true;
			rsys->addListener(this);
		}

        root->addFrameListener(this);
        mStarted = true;
		std::cout << "FRIGGIN START\n";
    }
    //---------------------------------------------------------------------
    void GpuCommandBufferFlush::stop()
    {
		waitingForDeviceRestore = false;

		RenderSystem* rsys;
		Ogre::Root* root;
        if ( !mStarted || !(root = Root::getSingletonPtr()) || !(rsys = root->getRenderSystem()) )
            return;
 
        for (HOQList::iterator i = mHOQList.begin(); i != mHOQList.end(); ++i)
        {
            rsys->destroyHardwareOcclusionQuery(*i);
        }
        mHOQList.clear();

        root->removeFrameListener(this);
        mStarted = false;
		std::cout << "FRIGGIN STOP\n";
    }
    //---------------------------------------------------------------------
    bool GpuCommandBufferFlush::frameStarted(const FrameEvent& evt)
    {
        if (mUseOcclusionQuery)
        {
            mHOQList[mCurrentFrame]->beginOcclusionQuery();
        }
        return true;
    }
    //---------------------------------------------------------------------
    bool GpuCommandBufferFlush::frameEnded(const FrameEvent& evt)
    {
        if (mUseOcclusionQuery)
        {
			if(mHOQList[mCurrentFrame]->isStillOutstanding())
	            mHOQList[mCurrentFrame]->endOcclusionQuery();
        }
        mCurrentFrame = (mCurrentFrame + 1) % mMaxQueuedFrames;
        // If we've wrapped around, time to start pulling
        if (mCurrentFrame == 0)
            mStartPull = true;
 
        if (mStartPull)
        {
            unsigned int dummy;
            mHOQList[mCurrentFrame]->pullOcclusionQuery(&dummy);
        }
 
        return true;
    }
    //---------------------------------------------------------------------
 
}
AND I stop/start the flushing in my main code as soon as the app loses/gains focus respectively!
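The focus hook itself is just a WindowEventListener override, roughly like this (sketch; MyApp is a placeholder, and I'm assuming the window's active flag already reflects the new focus state by the time this fires):

Code: Select all

// registered via Ogre::WindowEventUtilities::addWindowEventListener(window, this)
void MyApp::windowFocusChange(Ogre::RenderWindow* rw)
{
    if (rw->isActive())
        mBufferFlush.start();   // focus regained: resume flushing
    else
        mBufferFlush.stop();    // focus lost (e.g. Alt-Tab): stop before the device is lost
}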

Now, I managed to Alt-Tab about 14 times with no crash, before Ogre suddenly crashed with a message like this:
"Could not restore device because there is not enough RAM"


Now, I don't expect anyone to Alt-Tab 15 times in practice, but with only 10 days to release, I desperately have to know why/how this crash still occurred. Withdrawing FlushGPUBuffers altogether is currently not a consideration because the stuttering is really too bad!

PS. BTW, I haven't found anything related to this in Ogre 1.9! No Flush, no OgreGpuCommandBufferFlush...

Many, many, many thanks in advance :-)
mkultra333
Gold Sponsor
Posts: 1894
Joined: Sun Mar 08, 2009 5:25 am
x 114

Re: Flush GPU command buffer to stop input lag.

Post by mkultra333 »

On a possibly related, possibly unrelated note, I've discovered the old code can cause problems in DX11 debug mode in Ogre 1.10 if you are doing manually controlled rendering. The following assert pops up after rendering a single frame.

Code: Select all

OGRE EXCEPTION(0:RenderingAPIException): Error calling Map: ID3D11DeviceContext::GetData: GetData is being invoked on a Query/ Predicate/ Counter, after invoking Begin, but not yet after End. The range of commands must be completed by invoking End, before invoking GetData.
 in D3D11HardwareBuffer::lockImpl at C:\Ogre\Ogre_1_10\RenderSystems\Direct3D11\src\OgreD3D11HardwareBuffer.cpp (line 173)
It doesn't seem to like the way I call _fireFrameStarted and _fireFrameEnded; I'm guessing it clashes with its frame listener or something. The error goes away when I turn occlusion queries off.
"In theory there is no difference between practice and theory. In practice, there is." - Psychology Textbook.
kubatp
Gnome
Posts: 368
Joined: Tue Jan 06, 2009 1:12 pm
x 43

Re: Flush GPU command buffer to stop input lag.

Post by kubatp »

Hi,
I just read this thread. I am not a C++ programmer (I work with .NET and C#) but I have some knowledge of C++.
I am trying to figure out one mystery. When you look at this code viewtopic.php?f=5&t=50486&sid=b410b3fce ... 81#p533708, isn't pullOcclusionQuery performed every frame?

Basically mStartPull is set to true the first time (mCurrentFrame == 0) and it stays like that (it is never reverted back to false). Maybe I am missing something? It is also the same here: http://wiki.ogre3d.org/FlushGPUBuffer