DirectX Text Rendering Bug!!!!!!!!!!


04-01-2008 04:43:08

I am using the latest QuickGUI from SVN, and it seems there is a massive DirectX-only text rendering bug where half of all the text is missing. This is present in the demo as well as my own application, and it's a show-stopper! I only noticed when I started building my app on Windows after moving from Linux.

I am using the latest drivers for an ATI X1950 Pro. This is with Ogre 1.4.5.

There seem to be text rendering bugs mentioned on the forums, but those appear to have to do with font resolutions. I am not sure if anybody has encountered something like this before, but it is a major pain because I can't use OpenGL due to another library (the main reason I moved to Windows).

Here are screenshots:




04-01-2008 05:48:43

It seems there is a DirectX issue when combined with certain ATI and Intel integrated graphics cards. I'm talking to Sinbad about this, maybe hiring some help. I have no idea what the issue could be, and so far he has been unable to reproduce this issue on his end. :(


04-01-2008 23:03:23

I have seen this kind of bug on a system with an ATI card and DirectX too... this might be a general issue.


09-01-2008 03:00:18

Is there any way I can help? I can do anything you want.


09-01-2008 06:03:15

When Sinbad has time he will look into this issue. You'd have to know how Ogre does rendering at a low level to be able to figure this out.


10-01-2008 05:43:01

Hey, I had a little time to look into this today, and I found something fishy in QuickGUIVertexBuffer.cpp:

void VertexBuffer::_createIndexBuffer()
{
    mRenderOperation.useIndexes = true;
    mRenderOperation.indexData = new Ogre::IndexData();
    mRenderOperation.indexData->indexStart = 0;

    // Create the Index Buffer
    size_t indexCount = (mVertexBufferSize / 4) * 6;

    mIndexBuffer = Ogre::HardwareBufferManager::getSingleton().createIndexBuffer(
        ((mVertexBufferSize > 655534) ? Ogre::HardwareIndexBuffer::IT_32BIT : Ogre::HardwareIndexBuffer::IT_16BIT),
        indexCount,
        Ogre::HardwareBuffer::HBU_STATIC_WRITE_ONLY);

    mRenderOperation.indexData->indexBuffer = mIndexBuffer;
}

The fishy part is the (mVertexBufferSize > 655534) switch. I'm assuming that since this is switching between 16bit and 32 bit buffers, the divide would be at 2^16, which is 65536. Is this a typo?

I tried changing it to 65534 (the last digit might leave some needed headroom), but then there was some issue with the rendering of passes later on. I don't know if this indicates a greater issue or not, but this line doesn't seem correct.

Also, I'm not sure what the size_t indexCount = (mVertexBufferSize / 4) * 6; does... It might be a good idea to replace the appropriate constants with the #defined constants (VERTICIES_PER_QUAD) or whatever.
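To make the arithmetic in that line concrete, here is a small standalone sketch; the constant names and both helper functions are illustrative, not QuickGUI's actual identifiers. A buffer of N vertices holds N / 4 quads, and each quad needs 6 indices because its two triangles share two corners:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical named constants (QuickGUI's real names may differ).
const std::size_t VERTICES_PER_QUAD = 4;
const std::size_t INDICES_PER_QUAD  = 6;

// The line in question: a buffer of N vertices holds N / 4 quads,
// and each quad needs 6 indices (two triangles, two shared corners).
std::size_t indexCountForVertexBuffer(std::size_t vertexBufferSize)
{
    return (vertexBufferSize / VERTICES_PER_QUAD) * INDICES_PER_QUAD;
}

// Filling the buffer: quad q uses vertices 4q..4q+3, split into
// triangles (0,1,2) and (2,3,0) relative to the quad's first vertex.
std::vector<std::uint16_t> buildQuadIndices(std::size_t quadCount)
{
    std::vector<std::uint16_t> indices;
    indices.reserve(quadCount * INDICES_PER_QUAD);
    for (std::size_t q = 0; q < quadCount; ++q)
    {
        std::uint16_t base = static_cast<std::uint16_t>(q * VERTICES_PER_QUAD);
        indices.push_back(base + 0);
        indices.push_back(base + 1);
        indices.push_back(base + 2);
        indices.push_back(base + 2);
        indices.push_back(base + 3);
        indices.push_back(base + 0);
    }
    return indices;
}
```

So with a 4096-vertex buffer, that line yields 6144 indices for 1024 quads.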

Any ideas?


10-01-2008 08:01:15

Your guess is as good as mine. :lol:

I reached the same conclusions you did. Assuming 6 indices per quad and 4 vertices per quad, that would be the way to get the number of indices required.

I tried changing it to 65534

You mean 65536, right? My calculator says 2^16 = 65536.

Making the change had no noticeable effect on the demo, at least on my machine.

Shouldn't it be something like (indexCount > 65536), instead of using the vertex buffer size? Either way, I kind of doubt 32 bit is ever toggled.. I don't think there are more than 16384 quads on the screen at any given moment...
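For what it's worth, the deciding factor should arguably be the number of vertices the indices must address, not the index count: a 16-bit index stores values 0..65535, so 16-bit indices suffice whenever the buffer holds at most 2^16 = 65536 vertices. A hedged sketch, with the enum as a stand-in for Ogre::HardwareIndexBuffer::IndexType and the helper name made up for illustration:

```cpp
#include <cstddef>

// Stand-in for Ogre::HardwareIndexBuffer::IndexType.
enum IndexType { IT_16BIT, IT_32BIT };

// A 16-bit index can address vertices 0..65535, so 16-bit indices are
// sufficient as long as the buffer holds at most 2^16 = 65536 vertices;
// only beyond that do the index *values* overflow 16 bits.
IndexType chooseIndexType(std::size_t vertexCount)
{
    return (vertexCount > 65536) ? IT_32BIT : IT_16BIT;
}
```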


10-01-2008 10:07:15

Well, I guess we should change it to 65536 then. :)

I thought using 65534 might help just because the > operator is being used... but it's probably not necessary. ;)

BTW, where did this code come from? Just curious...


10-01-2008 18:19:40

Tuan committed this code. Aside from the magic numbers it's well written; it's just that my experience with index buffers is essentially nonexistent. I'm not too familiar with the use of indices in the first place.


10-01-2008 22:52:51

I did some more reading on the topic of vertex and index buffers, and this is an interesting article I found:

The old rendering solution that seemed to work universally was the Vertex rendering approach (method 1 in the article). The new one is method 2, but a lot of people are having issues with it.

This makes me think that unless Sinbad can find something wrong in the way Ogre is calling Direct3D9, or something in QuickGUI itself, maybe we should have a NO_INDEX_BUFFERS option so that people having this issue can fall back on the old rendering system (with 6 vertices per quad). So if you include the line #define NO_INDEX_BUFFERS, the rendering code in QuickGUIVertexBuffer.cpp would use #ifdef and #else statements to switch between the rendering versions.
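A minimal sketch of what such a compile-time switch could look like; the function and the per-quad counts are illustrative, not actual QuickGUI code:

```cpp
#include <cstddef>

// Uncomment to fall back to the old vertex-only rendering path:
// #define NO_INDEX_BUFFERS

// How many vertices the buffer must hold to draw `quadCount` quads
// under each scheme (names here are made up for illustration).
std::size_t verticesNeeded(std::size_t quadCount)
{
#ifdef NO_INDEX_BUFFERS
    // Old path: two independent triangles, 6 vertices per quad.
    return quadCount * 6;
#else
    // New path: 4 shared vertices per quad, plus a separate index buffer.
    return quadCount * 4;
#endif
}
```

The real switch would wrap the buffer-creation and quad-filling code the same way, so users hitting the DirectX bug can opt out with a single define.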

That way, people can still move forward with development using QuickGUI and DirectX, while hopefully sinbad or tuan (if you're around :)) can think of some fix so the new rendering method (which is faster) works on all cards. From the posts so far, it does seem like this problem is fairly widespread.

What do people think of this?


10-01-2008 23:16:46

I like what thecaptain is saying, I really need a short term fix!

Extending thecaptain's idea, why not make a patch that people can apply independently instead of cluttering the QuickGUI code base until a fix comes along?

Please! Please! Please make a patch!


11-01-2008 00:00:20

Tuan did leave in all the options to configure each method, but I didn't realize that there would be problems using the Index approach. With a little effort, we could revert the VertexBuffer.h/.cpp files and clean it up so that each method can be used. I believe the Quad.h/.cpp also have to be reverted.

So busy lately... anybody want to help in this task?


11-01-2008 07:07:53

So busy lately... anybody want to help in this task?

Not only this issue, but QuickGUI in general:
I'd like to help you out, but give me until the end of February/beginning of March.
By then I'll be past all my exams (hopefully ^^) and doing my internship for 6 months (meaning lots of time in the evenings ;))


11-01-2008 08:52:24

Awesome, any help is appreciated. Right now I'm moving slow on development because of real life issues.. (buying my first house!)


11-01-2008 21:37:08

Actually, in order to have maximum compatibility in terms of rendering, we should just remove the use of indexes and go back to using the vertex buffer. It's true that the use of the index buffer increased performance, but it's not very likely to have a game where all intended users have NVIDIA cards.

I'll work on reverting to the vertex-buffer-only render process.
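For reference, the vertex-only fallback amounts to expanding each quad into two independent triangles, duplicating two of its corners. A minimal sketch, assuming a top-left/top-right/bottom-right/bottom-left corner order (the Vertex struct and function name are made up for illustration):

```cpp
#include <vector>

// Minimal vertex: just a 2-D position, for illustration.
struct Vertex { float x, y; };

// Vertex-only rendering: each quad becomes two independent triangles,
// duplicating two of its corners (6 vertices, no index buffer needed).
// Assumed corner order: 0=top-left, 1=top-right, 2=bottom-right, 3=bottom-left.
std::vector<Vertex> expandQuad(const std::vector<Vertex>& corners)
{
    // Triangle 1 uses corners 0,1,2; triangle 2 uses corners 2,3,0.
    const int order[6] = { 0, 1, 2, 2, 3, 0 };
    std::vector<Vertex> out;
    out.reserve(6);
    for (int i = 0; i < 6; ++i)
        out.push_back(corners[order[i]]);
    return out;
}
```

This trades 50% more vertex data per quad for never touching the index-buffer path that misbehaves on the affected cards.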


12-01-2008 18:41:39

Index buffer usage has been removed completely. Can you test the latest SVN to see if it renders on your machine? (The latest SVN works fine for me.)


13-01-2008 00:12:51

Latest SVN is working perfectly in both DirectX and OpenGL. Thanks a lot!