Batch Count Efficiency Question
I was doing some analysis on the speed of QuickGUI because multiple windows seemed to slow things down quite a bit. Using the demo program, I noticed that the batch count of the screen was going up by one for every single base widget that was being displayed on the screen:
For example, the mouse increases the batch count by 1, a button with text increases it by 2 (1 for button, 1 for text on that button), and a window increases it by 8 (4 borders, 1 background, 1 titlebar, 1 titlebar label, 1 closebutton). And a window with anything in it will add a lot more!
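For a rough sense of scale, the per-widget numbers above can be turned into arithmetic. A minimal sketch (the function names are illustrative; the constants come straight from the counts quoted in this post):

```cpp
#include <cassert>

// Batch cost per widget, using the counts observed in the demo:
// mouse = 1, button with text = 2, empty window = 8
// (4 borders + background + titlebar + titlebar label + close button).
int windowBatchCount(int buttonsWithText)
{
    const int windowFrame = 8;
    const int perButton = 2; // 1 for the button quad, 1 for its text
    return windowFrame + perButton * buttonsWithText;
}

int screenBatchCount(int windows, int buttonsPerWindow)
{
    const int mouseCursor = 1;
    return mouseCursor + windows * windowBatchCount(buttonsPerWindow);
}
```

A single window with three buttons already costs 1 + 8 + 6 = 15 batches, which is why caching a whole window into one texture (1 batch) is attractive.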
According to this forum post:
one of the moderators (xavier) mentions: "On modern hardware, with a renderer that is *designed* for modern hardware, poly count is far less important than batch count."
This got me thinking that perhaps to improve the speed of QuickGUI, we could do some sort of texture caching on certain widgets. Starting with windows sounds like a good idea, because the window seems to be the most complex widget. So perhaps we could render the entire window to a texture, then only display that texture each frame, and only update the texture when the window changed. I believe this could reduce the batch count hit from each window to 1. That way, QuickGUI could support arbitrarily complex GUIs without much of a performance hit.
Also, this would enable more advanced features without losing performance (such as adding corner support to windows or any other widget to make it look better as it scaled).
Your idea sounds good, but I have no idea how to approach it; it would take some thinking and investigation.
The Panel widget is actually the most complex; it contains the ability to create most of the widgets. A Window is basically a Panel with a TitleBar (actually it also has its own Overlay, for zOrdering). A Sheet is a Panel that can create Windows, and has an extra overlay so menus have the highest zOrder.
So we need to look into how to make a Panel and all its child widgets into one texture, and we need to be able to detect changes and recreate this texture.
It would be great if an Overlay or OverlayElement could create an image based on its appearance and the added appearance of its child parts...
You're right, this would definitely take some more investigation. I'll have to look into Ogre's capabilities for Render to Texture, and see what options present themselves.
Although a Panel is actually the most complex widget, the interesting thing is that Panels really don't slow down performance nearly as much as a Window does. Perhaps the first step would be to render the window with its borders and titlebar at the correct size, and then save that as a material which could then be used in the same way as a Panel's material. At least that would reduce the batch count by 6 or so per window...
This is still wishful thinking, but hopefully not for too much longer
You need to create the meshes for rendering as a whole, not as individual submeshes (for instance, one submesh for a corner, one for the top, one for the other corner, and so on). This requires that you approach the construction of your quads with the knowledge of what they are drawing -- a 9-brush quad, for instance, is going to need a different mesh structure than will a simple 1-brush quad, and they both are constructed differently than is a 3-brush quad. You can use multiple primitives within the same mesh, you just need to make them a single submesh because Ogre won't do that for you (it only sorts within a render queue on texture IIRC, to minimize state changes as much as possible, but it will not merge submeshes for you).
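Xavier's advice can be sketched without any Ogre types: however many quads a skin needs (a 1-brush, 3-brush, or 9-brush quad), they can share one vertex array and be drawn in a single call. The types and function names below are stand-ins for illustration, not Ogre or QuickGUI classes:

```cpp
#include <cstddef>
#include <vector>

struct Vertex { float x, y, u, v; };

// Append one screen-space quad as two triangles (6 vertices) to a shared
// vertex array. Because every quad lands in the same array, a whole skin
// becomes one submesh and one draw call, not one call per quad.
void appendQuad(std::vector<Vertex>& verts,
                float left, float top, float right, float bottom)
{
    verts.push_back({left,  top,    0.f, 0.f});
    verts.push_back({left,  bottom, 0.f, 1.f});
    verts.push_back({right, top,    1.f, 0.f});
    verts.push_back({right, top,    1.f, 0.f});
    verts.push_back({left,  bottom, 0.f, 1.f});
    verts.push_back({right, bottom, 1.f, 1.f});
}

// A 9-brush skin (corners, edges, centre): nine quads packed into one
// buffer, so the mesh structure differs from a 1-brush quad but the
// batch cost does not grow.
std::size_t nineBrushVertexCount()
{
    std::vector<Vertex> verts;
    for (int row = 0; row < 3; ++row)
        for (int col = 0; col < 3; ++col)
            appendQuad(verts, float(col), float(row),
                       float(col + 1), float(row + 1));
    return verts.size();
}
```

Nine quads produce 54 vertices in a single buffer; Ogre won't merge submeshes for you, so packing them yourself at construction time is what keeps the draw-call count down.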
It sounds like you've got the text batching nailed -- that's a good start. Now you just need to get the image construction worked out as well.
Does each OverlayElement create a batch?
I was thinking maybe I could dig into the code used for creating a screenshot, only I wouldn't output it to file, and I would only want a certain portion of the image. This way I can reduce the number of OverlayElements, as I would only need them for the Sheet, Panels, and Windows.
But thinking on this more, I need all of the various overlay Elements. For example, when the mouse moves over a button, the button's image changes, which is shown by replacing the button's material, which is represented by the OverlayElement. (PanelOverlayElement class, for buttons)
Xavier, it sounds like batches result from using a lot of meshes. So I need to access the meshes (quads?) that result from the OverlayElements, convert those into submeshes, and link everything into as few meshes as possible? When I get a chance I'll look into this further, but it sounds like the way to go. Thanks for the input.
Actually, isn't this a problem that is internal to Ogre? I'm just making use of Ogre::Overlay, Ogre::OverlayElement, and Ogre::OverlayContainer..
Optimizing the OverlayElements to become submeshes and creating one mesh out of them... in order to accomplish this, wouldn't I have to modify the Ogre::OverlayElement class to have a function to convert itself to a submesh and add it to another mesh?
In fact, if all OverlayContainers and OverlayElements were represented as submeshes of the Overlay they existed on, then this batch efficiency problem would be handled, right? Maybe I should throw this on the Feature Requests forum and see what others think about it.
Like I said in the main forums, don't use Overlays. They are not supported; they are an aborted attempt at a UI layer for Ogre (up to the point, 3 years ago, when CEGUI was adopted as the "official" GUI).
If you want to use those as the foundation for your stuff, you need to fix them -- Ogre/Sinbad won't, because they are, like I said, what they are -- not supported or developed. They are just there.
Ok, thanks for letting me know.
Can you give me any hints on where to start looking? (what classes?) Quads are made of what? Ogre::Rectangle or something?
Well, looks like I'll have to get intimate with ManualObject class.. *shiver*
Next major task: to replace the use of Overlays, and to reduce batching.
Wow, I guess the fix would have to involve not using Overlays anymore to follow good practice. Do you have a feel of how the meshes work in Ogre to be able to do something like this?
At least it would probably be a lot more efficient and be supported by the Ogre team
I have a crazy idea that might work out..
Clay posted a way to make a simple rectangle on the screen using a manual object, I will probably try to use this:
Here is my idea:
Make a struct called QuadLayer or something similar, which holds:
1) int zOrder
2) bool dirty
3) List of Rectangles
GUIManager has a list of QuadLayers, and iterates through them every frame:
1) If QuadLayer is dirty:
a) iterate through rectangles and create ManualObject
2) call addRenderable(ManualObject, RENDER_OVERLAY_GROUP, zOrder)
There will be one ManualObject per zOrder. I don't know how many ManualObjects make up a batch, but I wouldn't think each ManualObject would become a separate batch. What do you think of this approach?
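The steps above can be sketched with plain containers. QuadLayer and Rectangle here are the hypothetical types from this post, not existing QuickGUI classes, and the ManualObject rebuild is only indicated by a comment:

```cpp
#include <vector>

struct Rectangle { float x, y, w, h; };

struct QuadLayer
{
    int zOrder;
    bool dirty;                    // set whenever a widget on this layer changes
    std::vector<Rectangle> rects;  // quads to bake into one ManualObject
};

// Per-frame pass: rebuild only the layers whose contents changed, then
// (in the real system) submit each layer's object to the overlay render
// queue at its zOrder. Returns how many layers were rebuilt this frame.
int updateLayers(std::vector<QuadLayer>& layers)
{
    int rebuilt = 0;
    for (QuadLayer& layer : layers)
    {
        if (!layer.dirty)
            continue;        // unchanged layer: reuse last frame's geometry
        // ...rebuild the ManualObject from layer.rects here...
        layer.dirty = false;
        ++rebuilt;
    }
    return rebuilt;
}

// Convenience for testing: run one frame over N layers, D of them dirty.
int rebuildCount(int total, int dirtyCount)
{
    std::vector<QuadLayer> layers(total);
    for (int i = 0; i < dirtyCount; ++i) layers[i].dirty = true;
    return updateLayers(layers);
}
```

The dirty bit is what keeps a static GUI nearly free: on a frame where nothing changes, no geometry is rebuilt at all.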
That definitely seems like a plausible solution, and I like the way you worked in the dirty bit. That will help a great deal with efficiency, while still allowing rapid response to changes.
But this ManualObject, will it use the submesh idea that Xavier proposed? It would be best to use some Ogre-optimized function to piece everything together at the end, as opposed to figuring out a way to do it ourselves.
Yah, I need to understand how batching works before I can come up with a better solution. The main problem with using mesh and submesh is that of zOrder. Do you know if it is possible to have certain submesh parts rendered with a different render priority (zOrder)?
For example, the Window consists of a TitleBar, which has a Close Button. The Close button must be rendered last, and the TitleBar must be rendered after the Window.
If you can help me out and look up information on Batching that would help us create a good solution.
[edit] It doesn't look like ManualObject is an Ogre::Renderable! Now I'm not sure how I can tell ManualObject to get placed on the RenderQueue with RENDER_QUEUE_OVERLAY... [/edit]
What would it take to update the Ogre Overlay system? The foundations are all there, so I would think the most direct approach would be to fix whatever problems Ogre's Overlay system is having. As an additional bonus, if these changes were accepted, many other Ogre projects stand to benefit.
That sounds good also. I've been looking into my idea for a while now, and it seems like a lot of work. I think I'm going to have to put this on the back burner until I understand how batching works. Looking at the original post, I see 30 FPS. What is the FPS with no GUI used?
Without any GUI elements, I was getting 41 FPS using the same settings. However, only one little window there with a couple of buttons in it brought it down to 30 FPS. I'm worried about it dropping significantly lower once I fill out the screen with all of the controls we need.
I guess working on replacing the underlying system of QuickGUI does not need to be top priority, but we should be thinking about it with the goal of putting a fix in sooner than later. Using the overlay functionality (which is essentially deprecated) seems to be trouble. Unless we can convince Sinbad and the team to support the improvement of overlays, I'm not sure it would be a very good idea to continue using them.
Perhaps we can build a similar abstraction that operates like overlays and uses the officially supported and optimized functionality in Ogre, but would only be used in QuickGUI. That way we wouldn't need to worry about modifying the internal Ogre code and the related support issues. Also, making sure we can use a dirty bit to only spend time rendering things that have changed is huge. I'd say any decent solution should include this.
KungFooMasta, what are the ideas you have looked at so far? I can look at some more options today to see what is available to us. Perhaps we should start creating a list of ways to approach this, and then weigh the +/- of each and then make a decision.
I just found out that the ManualObject function I wanted to be made virtual is already declared virtual, in the MovableObject class, so my original idea is now possible.
Last night I checked out the CEGUI demo and was amazed at the small number of batches used. I'm currently looking at the OgreCEGUIRenderer; maybe I can re-use some of the great ideas from it.
So it sounds like 2 possible approaches, although I don't know how much my method would improve batching. I'll post my findings after digging into the CEGUI renderer.
Hey thecaptain, thanks for starting that other thread. Xavier and Frenetic have been helping me understand more about how hardware buffers work and how to render batches.
Tomorrow I should have the base design for QuickGUI::RenderSystem implemented. (designed, not working!) I plan on documenting it so others can jump in and help me out if I need it. It will definitely be the fewest batches possible, lol.
Awesome, I'm happy to see some good advice and ideas going around. Documenting your ideas will be very helpful to expedite the process of actually implementing this.
In the meantime, I think I'll have to focus on some slightly higher-level things, such as making some more widgets that we need. For example, I'm thinking of working on a multi-line text label (text box) to display some sort of word-wrapped text with possibly some simple formatting. Do you have any suggestions on how to go about this?
Good luck with the new rendersystem! Keep us in the loop so we can help if you need it.
Designing this will most likely surface areas that need to be fixed in the TextBox Widget. (Which is good!)
I would make a widget (called MultiLineTextBox) and derive that from a Label widget. The default number of lines will be 1. Create a member function to be able to set the number of lines of text (setLinesOfText or something similar). Have a list of TextBox widgets, and for each line, add a TextBox widget in the proper location. By default, after creating and adding a TextBox, you should hide the borders of the TextBoxes. (can be shown if desired)
Override the addCharacter and backspace functions (make them virtual in TextBox widget if they are not). We should keep track of the current text's width, and the width of the allowed space for text. In the addCharacter function, if the new text width is larger than the width allowed, move the word down to the next TextBox (and set focus to that textbox, and move text cursor to correct position). Similarly, if you backspace and the textCursor is at index 0, check if there is a previous textBox, set focus to it, put the cursor at the last text character, and remove it.
These are just ideas I wrote on the spot, but they would be a starting point for tackling the widget. Note that we currently don't have a scrollbar widget. (It will be created sometime soon)
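The overflow rule above (on addCharacter, move the overflowing word down and shift focus; on backspace at index 0, jump to the previous line) can be sketched with plain strings standing in for the child TextBox widgets. The names below come from the proposal, not existing QuickGUI classes, and this sketch assumes a fixed-width font and does not cascade re-wrapping to later lines:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Minimal model of the proposed MultiLineTextBox: each element of
// 'lines' stands in for one child TextBox; maxChars is the width
// allowed for text, in characters.
struct MultiLineTextBoxModel
{
    std::vector<std::string> lines;
    std::size_t maxChars;
    std::size_t focused = 0; // index of the line that has focus

    explicit MultiLineTextBoxModel(std::size_t width)
        : lines(1), maxChars(width) {}

    void addCharacter(char c)
    {
        std::string& line = lines[focused];
        line += c;
        if (line.size() <= maxChars)
            return;
        // Overflow: move the last word (or character) to the next line
        // and give that line focus, as the proposal describes.
        std::size_t cut = line.rfind(' ');
        std::string moved = (cut == std::string::npos)
            ? std::string(1, line.back())
            : line.substr(cut + 1);
        line.erase(line.size() - moved.size());
        if (!line.empty() && line.back() == ' ') line.pop_back();
        if (focused + 1 == lines.size()) lines.emplace_back();
        ++focused;
        lines[focused] = moved + lines[focused];
    }
};

// Helpers for testing: type a string one character at a time.
std::size_t typeAndCountLines(const std::string& text, std::size_t width)
{
    MultiLineTextBoxModel box(width);
    for (char c : text) box.addCharacter(c);
    return box.lines.size();
}

std::string typedLine(const std::string& text, std::size_t width, std::size_t i)
{
    MultiLineTextBoxModel box(width);
    for (char c : text) box.addCharacter(c);
    return box.lines[i];
}
```

Typing "hello world" into an 8-character-wide box spills "world" onto a second line, which is the behaviour the overridden addCharacter would produce across real TextBox children.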
The render system foundation is created, but my initial test did not show the quad I had set to render.
I am receiving some help with this, hopefully somebody can help me realize what is wrong. My current problem is outlined here:
Also, I am planning to remove the use of materials, and instead reference the Textures (images, really) directly, instead of wrapped in a material script. I don't think this is a big issue, since material lighting never affected the way the image appeared anyway. Let me know if this is a problem; I hope it isn't.
(Instead of setting a widget's material to "qgui.button.down", we set its texture to "qgui.button.down.png")
I wouldn't remove material support from the GUI (if I correctly understand your last sentence)... current and future GUIs should be able to provide shader support, and materials give you this by default.
As a side note, I'm looking into adopting one of the free GUI libraries around to be part of the engine we're developing, and my first choice would be a GUI that makes full use of materials (the benefits of this could be huge when working with one material system for both the interface and the game). Most of the GUIs around decouple the GUI system from Ogre and implement their own way of using textures, separate from Ogre (and I find this very ineffective in terms of extensibility).
I don't think the use of materials in a GUI system is particularly crucial, as long as transparency support is added for all of the textures (That sceneblend issue I was talking about earlier).
However, I do agree that unless there is a specific impetus to remove material support, it may still help a few people to have it in there. Although lighting probably wouldn't be an issue, I believe that you could add some other post-render filters that would also affect the GUI that way (such as blur, black and white) that some designers may want to have available to them.
Does your new render system make it harder for you to still have material support? I'm just wondering what the reasons would be for removing it...
Transparency is my primary concern regarding material scripting. All of my materials are exactly the same: allow transparency. However, one of the things that drew me to QuickGUI was a design that took advantage of current Ogre constructs. I don't know if you intended this with your design or not, but it is my opinion that you should attempt to preserve this idea wherever possible.
That said, I doubt I personally would care much at all if materials were removed. Perhaps material scripting is an unnecessary link to the Ogre system.
I need to look into how materials are applied to OverlayElements; last time I looked, it wasn't something that I could absorb easily.
The main purpose of breaking things down from material to texture is efficiency. In order to minimize batches, I have to group together quads that share a texture (and they have to be on the same zOrder!). Each unique texture adds a batch. Ogre Overlays were inefficient because every OverlayElement created its own batch. From what I saw in the OverlayElement class, it looked like the material was stripped down to its TextureUnitStates, and the textures were applied. The rest... I don't recall it being used. (Like I said, I need to look into this, but it seems like it's just using the textures also.)
Basically, a batch occurs when you call RenderSystem::_render(mRenderOperation). Before you make this call, you can set the texture that will be applied to all vertices in the render operation via RenderSystem::_setTexture. You can only do one _setTexture per _render. Also, vertices that are earlier in the vertex buffer (within the RenderOperation) will appear behind vertices that come later in the buffer. This is how you achieve zOrdering.
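That rule (one _setTexture per _render, back-to-front order inside the buffer) means the batch count equals the number of texture changes as you walk the quads in z order. A self-contained sketch of that counting, with plain ints standing in for textures (these are illustrative types, not Ogre classes):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Quad
{
    int zOrder;    // draw order: lower zOrder drawn first (appears behind)
    int textureId; // stand-in for the texture set via _setTexture
};

// Sort quads back-to-front, then count how many _render calls are needed:
// a new batch starts every time the texture changes along the sorted list.
int countBatches(std::vector<Quad> quads)
{
    if (quads.empty())
        return 0;
    std::stable_sort(quads.begin(), quads.end(),
        [](const Quad& a, const Quad& b) { return a.zOrder < b.zOrder; });
    int batches = 1;
    for (std::size_t i = 1; i < quads.size(); ++i)
        if (quads[i].textureId != quads[i - 1].textureId)
            ++batches;
    return batches;
}
```

Ten buttons sharing one skin texture on the same layer collapse into a single batch, while interleaving two textures across zOrders forces a batch per switch; that is why grouping quads by texture within a zOrder matters.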
void OverlayElement::setMaterialName(const String& matName)
{
    mMaterialName = matName;
    mpMaterial = MaterialManager::getSingleton().getByName(matName);
    if (mpMaterial.isNull())
        OGRE_EXCEPT( Exception::ERR_ITEM_NOT_FOUND, "Could not find material " + matName,
            "OverlayElement::setMaterialName" );
    mpMaterial->load();
    // Set some prerequisites to be sure
    mpMaterial->setLightingEnabled(false);
    mpMaterial->setDepthCheckEnabled(false);
}

void PanelOverlayElement::updateTextureGeometry(void)
{
    // Generate for as many texture layers as there are in material
    if (!mpMaterial.isNull() && mInitialised)
    {
        // Assume one technique and pass for the moment
        size_t numLayers = mpMaterial->getTechnique(0)->getPass(0)->getNumTextureUnitStates();

        VertexDeclaration* decl = mRenderOp.vertexData->vertexDeclaration;
        // Check the number of texcoords we have in our buffer now
        if (mNumTexCoordsInBuffer > numLayers)
        {
            // remove extras
            for (size_t i = mNumTexCoordsInBuffer; i > numLayers; --i)
            {
                decl->removeElement(VES_TEXTURE_COORDINATES,
                    static_cast<unsigned short>(i));
            }
        }
        else if (mNumTexCoordsInBuffer < numLayers)
        {
            // Add extra texcoord elements
            size_t offset = VertexElement::getTypeSize(VET_FLOAT2) * mNumTexCoordsInBuffer;
            for (size_t i = mNumTexCoordsInBuffer; i < numLayers; ++i)
            {
                decl->addElement(TEXCOORD_BINDING,
                    offset, VET_FLOAT2, VES_TEXTURE_COORDINATES,
                    static_cast<unsigned short>(i));
                offset += VertexElement::getTypeSize(VET_FLOAT2);
            }
        }

        // if number of layers changed at all, we'll need to reallocate buffer
        if (mNumTexCoordsInBuffer != numLayers)
        {
            // NB reference counting will take care of the old one if it exists
            HardwareVertexBufferSharedPtr newbuf =
                HardwareBufferManager::getSingleton().createVertexBuffer(
                    decl->getVertexSize(TEXCOORD_BINDING), mRenderOp.vertexData->vertexCount,
                    HardwareBuffer::HBU_STATIC_WRITE_ONLY // mostly static except during resizing
                );
            // Bind buffer, note this will unbind the old one and destroy the buffer it had
            mRenderOp.vertexData->vertexBufferBinding->setBinding(TEXCOORD_BINDING, newbuf);
            // Set num tex coords in use now
            mNumTexCoordsInBuffer = numLayers;
        }

        // Get the tcoord buffer & lock
        if (mNumTexCoordsInBuffer)
        {
            HardwareVertexBufferSharedPtr vbuf =
                mRenderOp.vertexData->vertexBufferBinding->getBuffer(TEXCOORD_BINDING);
            float* pVBStart = static_cast<float*>(
                vbuf->lock(HardwareBuffer::HBL_DISCARD));

            size_t uvSize = VertexElement::getTypeSize(VET_FLOAT2) / sizeof(float);
            size_t vertexSize = decl->getVertexSize(TEXCOORD_BINDING) / sizeof(float);
            for (ushort i = 0; i < numLayers; ++i)
            {
                // Calc upper tex coords
                Real upperX = mU2 * mTileX[i];
                Real upperY = mV2 * mTileY[i];

                /*
                    0-----2
                    |    /|
                    |  /  |
                    |/    |
                    1-----3
                */
                // Find start offset for this set
                float* pTex = pVBStart + (i * uvSize);

                pTex[0] = mU1;
                pTex[1] = mV1;

                pTex += vertexSize; // jump by 1 vertex stride
                pTex[0] = mU1;
                pTex[1] = upperY;

                pTex += vertexSize;
                pTex[0] = upperX;
                pTex[1] = mV1;

                pTex += vertexSize;
                pTex[0] = upperX;
                pTex[1] = upperY;
            }
            vbuf->unlock();
        }
    }
}
In this code the buffer is created and populated with the UV data, according to the number of TextureUnitStates. Since OverlayElements are Renderables, the overlay system uses the render queue's addRenderable(this, queueId, priority) function instead of issuing a RenderOperation itself. I'll have to dig to see where the textures are applied, and whether any other material attributes are actually used.
Looked into this a little further:
void RenderPriorityGroup::addRenderable(Renderable* rend, Technique* pTech)
{
    // Transparent and depth/colour settings mean depth sorting is required?
    // Note: colour write disabled with depth check/write enabled means
    // setup depth buffer for other passes use.
    if (pTech->isTransparent() &&
        (!pTech->isDepthWriteEnabled() || !pTech->isDepthCheckEnabled() ||
         pTech->hasColourWriteDisabled())) {
        addTransparentRenderable(pTech, rend);
    } else if (mSplitNoShadowPasses && mParent->getShadowsEnabled() &&
        (!pTech->getParent()->getReceiveShadows() ||
         (rend->getCastsShadows() && mShadowCastersNotReceivers))) {
        // Add solid renderable and add passes to no-shadow group
        addSolidRenderable(pTech, rend, true);
    } else if (mSplitPassesByLightingType && mParent->getShadowsEnabled()) {
        addSolidRenderableSplitByLightType(pTech, rend);
    } else {
        addSolidRenderable(pTech, rend, false);
    }
}
It starts getting really complicated really fast: organizing passes, handling transparent techniques differently from opaque ones, etc.
I'm sorry to say this, but I still have a lot of work ahead of me just to design and integrate the already existing Renderer into QuickGUI. If anybody is willing to take the lead and come up with a design, plan, and help with implementation, MAYBE we could find a way to keep use of materials. But my gut feeling is that it will not result in a performance friendly solution, and there is a lot you can do with textures alone! With Event handling and firing, you can replace images and manipulate textures however you want, and still produce good effects.
Ok, that makes sense. I see that the material support would be a big hit to performance just to support some small things that really aren't needed for a GUI system.
Let's nix material support for now and focus on optimized texture rendering with the needed transparency support. Kungfoomasta, do you have some sort of source repository for what you are doing? If not, I'd volunteer to create an SVN repository on one of our servers for you. I'd be willing to get my hands dirty with the details, but it's tough without some sort of central collaborative environment.
Tell me what you think.
I actually have a personal SVN for my own projects, and QuickGUI lives in there. I just hand out releases that are somewhat stable. What I will be doing over the next few days will make QuickGUI inoperable until finished. (replacing overlays and text)
I wouldn't mind using your SVN instead. If I handed out my SVN address and username/pass, anybody could grab my other projects, which I don't want. I could probably do something to prevent that, but I haven't looked that far into it. In short, if you're willing, I'd be glad to use your repository.
Just checking in on how things are going here.
Any improvements, Kungfoomasta?
Sorry I can't be of more help; I don't really know much about rendering, plus I've got my hands full improving DotSceneLoader right now (I'll share it when I finish it).
Anyway, keep up the good work!
Heya, we don't currently have any password protection on our SVN repos, so I'm not really in a position to volunteer server space right now (till I slap Apache around a bit, anyway), but if you do get a public repository set up, I'd love to help out where possible. We're pretty committed to using QuickGUI, but it obviously still has a fair bit of work required to get it really polished, which we'd like where possible for our project.
As far as multi-line textboxes are concerned, I really need one of those (or at least a multiline label), to store the text for a help window to show and hide. I'm not sure that having a fixed number of lines of individual textboxes for the text is very extensible, though: unless of course you were planning on having the number of textboxes expand or contract according to how many characters of text were in the value of the multi-line textbox.
I haven't delved deep into your Text widget, but my immediate gut feeling (and feel free to correct me) is that the neatest solution would be to use multiple Text widgets within one MultiLineTextBox widget (rather than multiple TextBox widgets in it), and have an addText() function which splits the string passed in into characters and calls the MultiLineTextBox's addCharacter() function for each. addCharacter() would simply place that character wherever the cursor is (moving all characters after the cursor on that Text widget line forward by one character), and move the cursor to after that character. If you were using addText(), it would only be after the iteration through all the addCharacter() calls had occurred that you would call fixText(); if you were using addCharacter() once on its own for a single character, addCharacter() would call fixText() once it was done. fixText() would iterate through all existing Text widget members of the MultiLineTextBox (in a lineNumber-ordered std::list or similar, I imagine), starting from the line at which you first added a character (e.g. if the cursor was in line 5 when you added text you would call fixText() from line 5), through all the lines after it.
For each line, fixText() would multiply the number of characters on the line (and, since all our new characters have been added directly to the line the cursor was on when we added them, we can expect that line at least to be overrunning the MultiLineTextBox's width) by the width of each character (I'm assuming characters in a specific font have a standard width; this could be entirely wrong) to calculate the character at which the line runs over. If it's a space, leave it on that line and move all the characters after it to the next Text line; if it's a character with a space before it, move it, and all characters following it, to the beginning of the next Text line; if it's a character with another character before it, go back and find the closest previous space, take all the characters after it and move them to the next Text line; if there was no space before it on that line of text, put a hyphen before the character that overruns, and move all the remaining characters onto the next Text line (or break it without a hyphen; this can be a MultiLineTextBox setting). If, for any of these movements onto a next line, there was no next line of Text widget in existence, create a new line, and add it with the next lineNumber to the std::list of Text widget members of the MultiLineTextBox. Once you have done fixText() for a line, check to see if there is another line after it (which there may be, now that you have bumped any non-fitting text over). If so, repeat until done.
For removeText(), you should be able to provide either: a) a string to search for and remove, with an additional boolean argument called removeAllInstances set to FALSE by default (so that you can search-and-delete if required); or b) a start location and string length to remove; or c) all text. It may to all intents and purposes be quicker to find and remove the searched-for string from the MultiLineTextBox's own std::string fullString member, delete all Text widget lines except the first one, and then rewrite the new text of the fullString string from the start of the first (and now only) Text widget with addText().
There should also of course be a backspace() function, which is basically the same as a single addCharacter() except that instead of adding a character at the cursor, it removes the character before the cursor (or not, if the cursor is at the head of the text), and then runs fixText() as normal from the line the cursor is on forward. If the cursor was at the very beginning of a new Text widget line when you backspace()d, you will need to move it to the end of the line above it before you remove the character ahead of it, and fixText() from there onward.
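The fixText() pass described above is essentially greedy word wrapping under the fixed-character-width assumption. A reduced sketch of just the wrapping core (the function name is from the proposal; the optional no-hyphen mode is omitted, and a line width of at least 2 is assumed so hyphenation can make progress):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Greedy word wrap, assuming every character has the same width, as the
// proposal above does. Words longer than the line are broken with a
// trailing hyphen.
std::vector<std::string> wrapText(const std::string& text, std::size_t lineWidth)
{
    std::vector<std::string> lines;
    std::string current;
    std::string word;
    auto flushWord = [&]() {
        if (word.empty()) return;
        // Word would overrun the current line: start a new one.
        if (!current.empty() && current.size() + 1 + word.size() > lineWidth) {
            lines.push_back(current);
            current.clear();
        }
        // Word alone overruns an empty line: hyphenate in chunks.
        while (word.size() > lineWidth) {
            lines.push_back(word.substr(0, lineWidth - 1) + "-");
            word.erase(0, lineWidth - 1);
        }
        if (!current.empty()) current += ' ';
        current += word;
        word.clear();
    };
    for (char c : text) {
        if (c == ' ') flushWord();
        else word += c;
    }
    flushWord();
    if (!current.empty()) lines.push_back(current);
    return lines;
}
```

Each element of the returned vector corresponds to one Text line of the MultiLineTextBox; rebuilding from the stored fullString, as suggested for removeText(), would simply re-run this over the whole string.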
I believe this provides all the functionality required. Do let me know what you think, and whether I am applying the functionality of the Text widget correctly. I would be happy to have a stab at this implementation if we were working from a current repository copy of the source, or I could attempt my own class and send it along via email. I am still lacking in understanding of QuickGUI events et cetera, however (see my current thread full of questions for evidence!).
Let me know what you think,
1: On the textures/materials front, I'm all for removing materials.
2: On the 'QuickGUI extends Ogre's existing functionality' front, I'm obviously all for QuickGUI only using existing Ogre objects (and preferably not overlays if they're deprecated), so that QuickGUI stays as up-to-date as Ogre does, and doesn't break too much with new versions.
Hey all, I'm still working on this, been putting in a lot of time into it. With the help of thecaptain, we've come up with even better ways to reduce batching. The new method to rendering textures introduces some bugs, so I'm looking into that now. After quads are complete I will move into text.
Jekteir, thanks for the text information!! That looks like a good start on implementing text widgets.
I will see what I can do about making the repository public for users (it's not my repository). Although, I doubt you would want to check out now; it's not functional yet.
I have removed all use of Overlays. Instead, I hook into the Overlay Render Queue, and do my own rendering of quads, using Vertex Buffers. I have my own organization scheme to minimize updating of Vertex Buffers, and have zOrdering all worked out. You will not see anything related to zOrder within the code, but widgets are drawn correctly. And there is no limit to how many widgets you can have. (Overlay zOrder max was 650)
I wanted to have something functional by today, but a lot of great ideas came up, and passing them by would be a bad idea, especially when I'm still in the low level area of development.
I'd just like to say that I think you will all be impressed with the improvements that have happened over the last week. Thanks to a tireless effort from KungFooMasta and productive brainstorming sessions, batch counts have dropped significantly! Now it's on to text and other good things.
Jekteir, I'm very happy to see that you're also taking QuickGUI seriously and are committed to its improvement. My game studio is also committed to it, and I was going to take a look at the multiline textbox too!
There is a great example in CEGUI of some word wrapping functionality if you need some ideas. Right now the effort is in improving the actual rendering of the glyphs using FreeType, but there's no reason why you can't start working on a multiline text box implementation right now. The new character rendering can just be plugged into it once it's ready.
We're hosting the current development SVN server for QuickGUI, and anyone Kungfoomasta approves for an account can get one right away! So if you're serious about helping out, you are more than welcome.
That all sounds great. Looking forward to seeing the newest stuff! Kungfoomasta, would you mind PMing me the repository information, if you don't mind me having access? I'll check out a copy and take a look at a MultiLineTextBox as I outlined it earlier.
Any progress on the multiline textbox?
Sorry, I've taken a look at this but I don't know enough about either the old or the new functionalities of QuickGUI to be able to write a working widget. I also wasn't sure whether TextBox had been touched in this version yet or not, so I didn't want to model MultiLineTextBox on that if it hadn't. Ideally I would have used a std::list of Text widgets, but I would have needed to implement a cursor to travel around the multiline textbox, and, well, basically as soon as it comes to creating widget members of other widgets that use the information all widgets are meant to possess, I get confused. I was also going to use a counter so that each new Text member of the MLTB, whether it had been created and destroyed before or not (as the MLTB shrank or grew), definitely had a unique instance name. I was thinking about creating a totally bare-bones object with just my functions in it and none of the normal widget functionality, but figured a) that wouldn't build and b) I'd need to incorporate an awful lot of the stuff I didn't understand in order to set up the initial Text widgets too (e.g. 'textDimensions'). Oh, my other thought was to create a widget pointer to the current Text widget that had focus.
Hope this and my previous semi-pseudocode post are of some use. If you get a bare-bones version of this working I will be happy to refine it, if I understand what the underlying functionalities of widgets et cetera are. I just don't have time to learn all of this from the code.