Development Info

Much of this info is out of date, since the plan and schedule changed a good deal; see the documentation page for more concrete info.

Here is the fork on Bitbucket.

Project Proposal

I would like to add a visual testing framework to OGRE.

Traditional unit testing doesn't help much in a context like OGRE, given the sheer size of the codebase and its heavy dependence on graphics APIs. The best way of doing tests without needing significant modifications or mock RenderSystems appears to be a visual solution, wherein test screenshots are compared between builds.

I intend to use the existing sample framework as a basis, since tests aren't too far off from samples and it just seems like a logical choice. There would be two primary components to the project:

The TestContext

  • Extended from SampleContext, it is to tests what the SampleBrowser is to samples.
  • Initially it just runs through the tests as quickly as possible, taking screenshots, and then exits (see the sketch just after this list).
  • However, in week 5 I'll add the ability to manually inspect individual tests, select various options, launch the image comparison tool, etc.
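To make this concrete, here's a minimal sketch of what such a context might look like. It assumes the existing OgreBites::SampleContext as a base; runTests() and Sample::isDone() are illustrative names I've made up for this sketch, not existing API:

    // Minimal sketch of a TestContext; runTests() and isDone() are
    // hypothetical additions, the rest follows the existing OgreBites API.
    #include <deque>
    #include "SampleContext.h"

    class TestContext : public OgreBites::SampleContext
    {
    public:
        // Queue up the visual tests, then start the render loop.
        void runTests(const std::deque<OgreBites::Sample*>& tests)
        {
            mTests = tests;
            go(); // SampleContext::go() drives the frame loop from here
        }

    protected:
        // Each frame, check whether the current test has finished taking
        // its screenshots; if so move on, and exit once the queue is empty.
        virtual bool frameStarted(const Ogre::FrameEvent& evt)
        {
            bool result = OgreBites::SampleContext::frameStarted(evt);
            if (!mCurrentSample || mCurrentSample->isDone())
            {
                if (mTests.empty())
                    return false; // no tests left: stop rendering and exit
                runSample(mTests.front());
                mTests.pop_front();
            }
            return result;
        }

        std::deque<OgreBites::Sample*> mTests;
    };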

The Image Comparison Tool

  • Built as a Sample for use with the existing SampleBrowser (and later, the TestContext itself).
  • Initially it just allows you to flip through sets of images and compare them manually.
  • Later in the schedule, more advanced comparison features will be added (various methods of overlaying and diffing, automated comparison that can give a quick summary of a whole image set, etc.).

The automated image comparison will most likely be a fairly simple custom solution, although "perceptual image comparison" (see this Siggraph '04 slideshow) is a possible later addition that could reduce the potential for false positives.
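For a rough idea of what that simple custom solution could look like, here's a sketch of a per-pixel comparison that reports the percentage of pixels differing beyond a small tolerance, using OGRE's Image class (the function name and tolerance value are illustrative, not part of any plan):

    // Sketch of a simple automated comparison: the percentage of pixels
    // whose colour differs beyond a tolerance.
    #include <OgreImage.h>
    #include <OgreColourValue.h>
    #include <OgreMath.h>

    // Returns the percentage of differing pixels, or -1 on a size mismatch.
    float percentDifference(const Ogre::Image& ref, const Ogre::Image& test,
                            float tolerance = 0.001f)
    {
        if (ref.getWidth() != test.getWidth() ||
            ref.getHeight() != test.getHeight())
            return -1.f;

        size_t differing = 0;
        for (size_t y = 0; y < ref.getHeight(); ++y)
        {
            for (size_t x = 0; x < ref.getWidth(); ++x)
            {
                Ogre::ColourValue a = ref.getColourAt(x, y, 0);
                Ogre::ColourValue b = test.getColourAt(x, y, 0);
                if (Ogre::Math::Abs(a.r - b.r) > tolerance ||
                    Ogre::Math::Abs(a.g - b.g) > tolerance ||
                    Ogre::Math::Abs(a.b - b.b) > tolerance)
                    ++differing;
            }
        }
        return 100.f * differing / float(ref.getWidth() * ref.getHeight());
    }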

If the currently planned comparison solution proves insufficient, there is a GPL implementation of the perceptual image comparison technique (PerceptualDiff) that could be integrated fairly painlessly (GPL isn't ideal, but the testing framework is separate from OGRE itself, so its licensing could be switched if need be).

The primary focus is on making a usable, well-documented framework, so I won't necessarily be writing a lot of actual tests (aside from a few here and there to exercise the workflow).

Creating actual tests will be very similar to creating a regular Sample with the existing sample framework, with the addition of specifying when the test screenshot(s) should be taken and using a fixed timestep to ensure deterministic behavior.

Due to hardware/driver differences, this is not guaranteed to work across different machines; for the time being, the goal is just to keep output consistent with images generated on the same machine.
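To illustrate the fixed-timestep idea: rather than advancing animations by the real frame time, a test feeds a constant delta every frame, so the scene state on any given frame is identical between runs. A minimal sketch, assuming a hypothetical VisualTest base derived from the sample framework's Sample class (the screenshot-frame mechanism is likewise illustrative):

    // Sketch of a deterministic test: it ignores the real elapsed time,
    // advances by a fixed timestep, and takes its screenshot on a
    // predetermined frame. VisualTest and mScreenshotFrame are
    // hypothetical names, not final API.
    #include "VisualTest.h" // hypothetical header

    class AnimationTest : public VisualTest
    {
    public:
        AnimationTest() : mAnimState(0), mFrame(0), mScreenshotFrame(50) {}

        virtual bool frameRenderingQueued(const Ogre::FrameEvent& evt)
        {
            // Advance by a constant amount regardless of actual frame
            // time, so every run produces identical animation state.
            const Ogre::Real FIXED_TIMESTEP = 1.0f / 60.0f;
            mAnimState->addTime(FIXED_TIMESTEP);

            // mWindow comes from the Sample base class.
            if (++mFrame == mScreenshotFrame)
                mWindow->writeContentsToFile("AnimationTest_50.png");

            return true; // keep rendering; the context decides when to stop
        }

    protected:
        Ogre::AnimationState* mAnimState; // set up during scene setup
        unsigned int mFrame;
        unsigned int mScreenshotFrame;
    };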

Unity successfully uses a similar system for their testing; see this blog post for some details about their implementation.

How will this project benefit OGRE users?


While it doesn't add any flashy features that would be visible in an end product, it would certainly help in the development and testing of OGRE itself, which is definitely good for users. It would also make testing a lot easier for users who need to modify OGRE. Additionally, the image comparison tool would let users compare test shots from their own projects (for instance, it could be very helpful for shader debugging or something to that effect).

Is this project within the core scope of OGRE?


Definitely; as a testing framework, it's obviously meant to be used directly with OGRE.

Is it realistic to achieve over a summer?


I believe so.

The main area of concern is probably the image comparison tool, since tools inevitably entail a lot of polish and documentation; but given the excellent sample framework and SdkTrays UI already in place, I think it should be feasible. Even if some of the nicer features can't be implemented in time, I can always fall back on a simpler version (I've purposely planned to build a very basic but functional, deliverable version early on, and then add features towards the end).

Schedule


I will have school and exams until June 9th, so I'm aiming to go a little slower for the first few weeks and catch up afterwards. Other than that, however, I have no vacation plans or other obligations.

April 25th – May 23rd

  • Get to know mentor.
  • Get more familiar with the sample framework, Mercurial, CMake, etc.

Week 1 (May 23rd) –

  • Start coding.
  • Create a “VisualTest” class that will form the basis for tests. It will derive from the existing sample framework's Sample class, with added options for screenshots and various measures to ensure animations/particles/etc. are deterministic (I've already been able to do this by modifying the existing playpen sample base class).
  • Convert the existing playpen samples to use this (the 10 or so that are already set up for the sample browser should be trivial to convert).

Week 2 (May 30th) –

  • Write a simple TestContext that automatically runs through a set of tests (no extra menu/options yet; it just runs straight through the tests, taking the test images, and exits).

Week 3 (June 6th) –

  • My final exams are this week, so I will be mostly unavailable; I'll probably end up tinkering and testing a little anyway, but nothing definite.

Week 4 (June 13th) –

  • Create a very basic image comparison tool. At this point it will just allow you to flip through images and compare them manually. It will be created as a sample plugin, so it can be used with the existing browser and eventually run straight from the TestContext.
  • Create a test or two (or perhaps convert a couple of old playpen tests), use this to start working out some of the documentation, and iron out any kinks in the workflow.

Week 5 (June 20th) –

  • Add functionality to the TestContext (setting various options before running the tests, manually launching individual tests, opening the image comparison tool, lots of UI work).

Week 6 (June 27th) –

  • Polish up the TestContext's UI and such as necessary.
  • Write image comparison code that can be run automatically (outputting percent differences between images and perhaps some other stats).

Week 7 (July 4th) –

  • Finish up any remaining work on the comparison code.
  • Integrate it into the comparison tool (display overall stats for the whole image set, allow the user to easily jump to the images with the greatest deviation from the reference, etc.; see the sketch just after this list).
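As a sketch of that summary step (TestResult and summarize() are illustrative names, assuming a per-image percentDifference() helper like the one sketched earlier):

    #include <algorithm>
    #include <string>
    #include <vector>

    struct TestResult
    {
        std::string name;       // which test/image this result belongs to
        float percentDifferent; // output of the automated comparison
    };

    // Sort worst-first so the user can jump straight to the images with
    // the greatest deviation from the reference.
    static bool worstFirst(const TestResult& a, const TestResult& b)
    {
        return a.percentDifferent > b.percentDifferent;
    }

    void summarize(std::vector<TestResult>& results)
    {
        std::sort(results.begin(), results.end(), worstFirst);

        size_t changed = 0;
        for (size_t i = 0; i < results.size(); ++i)
            if (results[i].percentDifferent > 0.f)
                ++changed;
        // 'changed' and the sorted list would feed the tool's summary UI.
    }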

Week 8 (July 11th) –

  • Mid-term evaluation.
  • Add additional functionality to the image comparison tool: various modes of overlaying and comparing image sets (see GitHub's new image comparison tool for some potential functionality).

Week 9 (July 18th) –

  • Add any necessary final polish and documentation for the comparison tool.
  • Begin converting some old playpen tests over.

Week 10 (July 25th) –

  • Convert more playpen tests over to the new system (certainly not all of them, but enough to adequately test the framework and work out any potential issues).

Week 11 (August 1st) –

  • Continue converting playpen tests over.
  • The main idea here is testing out the workflow: making sure the framework can handle various features and edge cases, etc. Much of the actual work over these weeks will be making fixes and additions to the framework itself as they become necessary.

Week 12 (August 8th) –

  • Overflow for anything that needs more work, more documentation, cleanup, etc.
  • If there's extra time, look into integrating it with OGRE's existing unit tests.

Week 13 (August 15th) –

  • August 15th is the suggested pencils-down date.
  • Final documentation and cleanup.

End August 22nd

Future Extensions


Since the idea here is to create a solid framework, rather than a definite set of tests, the obvious next step would be to write a set of useful, general tests using it.

Some sort of continuous integration server and perhaps a repository of reference images could also be useful next steps.

Why You're The Person For This Project


I'm a 19-year-old student in the US, studying Computer Science at the University of Washington in Seattle.

I've been using C++ for over four years now, and OGRE for almost as long; I've been pretty much constantly working on some project or another using OGRE over that time. You can see some of my past projects here (all the projects listed there use OGRE).

I have some code up on my GitHub; much of it is really gross 48-hour Ludum Dare code, but “OryxEngine”, a game engine I've been working on for a number of months, is pretty clean.

I'm not an expert by any means (my programming experience is mostly self-taught at this point), but I simply love programming and 3D development. I love a good challenge and I'm eager to learn.

I unfortunately haven't been very involved with the OGRE community lately, but I'll try to change that. In terms of other community involvement, I'm pretty active on the TIGSource forums, and I actively participate in Ludum Dare (using OGRE, as an added bonus) every few months.

Why OGRE?


I've been using OGRE for years; it's easily been the single biggest influence on my coding style and learning process over that time. I think it'd be incredible to be able to give back and return the favor.