Detecting Card Support For FSAA / AntiAliasing

WarehouseJim

15-07-2011 12:30:17

I am setting FSAA (Full Screen Anti Aliasing) like this:

NameValuePairList misc = new NameValuePairList();
misc["FSAA"] = "4";
var window = root.CreateRenderWindow("Main RenderWindow", 1920, 1200, false, misc);

Unfortunately, this fails on a laptop with an Intel graphics card, as apparently even fairly recent Intel graphics chips don't support FSAA.

So I tried detecting whether there is FSAA support by using

var configOptions = Root.Singleton.RenderSystem.GetConfigOptions();

and checking whether the "FSAA" key exists and what its possible values are. However, both my super-duper Radeon graphics card and the Intel one have the config option, and configOption.Value.possibleValues contains only "0" for both cards, whereas I know I can set it to, say, "4" on the Radeon.

Does anyone know how to detect FSAA support? My quick fix is going to be to create a dummy RenderWindow without FSAA and then check capabilities.Vendor (I think you need to create the RenderWindow to populate the capabilities of the RenderSystem - don't know why):

var capabilities = root.RenderSystem.CreateRenderSystemCapabilities();
if (capabilities.Vendor == GPUVendor.GPU_ATI || capabilities.Vendor == GPUVendor.GPU_NVIDIA)
{
    // Reasonable assumption for my project: anyone with an NVidia or ATI
    // graphics card has got a fairly recent one.
    misc["FSAA"] = "4";
}
else
{
    misc["FSAA"] = "0";
}
var window = root.CreateRenderWindow("Main RenderWindow", 1920, 1200, false, misc);

Obviously that prevents users with newer Intel graphics from using FSAA.

tafkag

15-07-2011 15:25:50

I use something similar to this in my app:


// Create the Ogre root and let it detect the available renderers.
Mogre.Root root = new Mogre.Root(pluginFile, configFile);
Const_RenderSystemList renderSystemList = root.GetAvailableRenderers();

ConfigOptionMap optionMap;
foreach (RenderSystem renderSys in renderSystemList)
{
    optionMap = renderSys.GetConfigOptions();
    foreach (string s in optionMap["FSAA"].possibleValues)
        Debug.WriteLine("FSAA: " + s);
}


The output on my system is:

FSAA: 0
FSAA: 2
FSAA: 4
FSAA: 8
FSAA: 8 [Quality]
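
To pick the highest supported level automatically, something like this untested sketch should work; it reuses the misc list from the first post and assumes the render system has already been selected:

int best = 0;
foreach (string s in root.RenderSystem.GetConfigOptions()["FSAA"].possibleValues)
{
    int level;
    // Skip non-numeric entries such as "8 [Quality]".
    if (int.TryParse(s, out level) && level > best)
        best = level;
}
misc["FSAA"] = best.ToString();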

Beauty

16-07-2011 00:30:10

Thanks for sharing this useful code. :D
I didn't know a way to query the FSAA support level.

So I wrote a simple starter with a "fallback to level 0" feature: when an FSAA value isn't supported, it falls back to 0. The "detection" was done with a common try/catch block; after an exception I called the code again (with the FSAA=0 setting). But your code is much better!!

WarehouseJim

18-07-2011 09:19:18

Thanks for the code. I have sort of worked out why it wasn't working for me: it seems that only the OpenGL RenderSystem reports FSAA correctly, and I was using Direct3D.


ConfigOptionMap optionMap;
foreach (RenderSystem renderSys in renderSystemList)
{
    Debug.WriteLine("--" + renderSys.Name + "---");
    optionMap = renderSys.GetConfigOptions();
    foreach (string s in optionMap["FSAA"].possibleValues)
        Debug.WriteLine("FSAA: " + s);
}

gives on my FSAA supporting Radeon 6950:

--Direct3D9 Rendering Subsystem---
FSAA: 0
--OpenGL Rendering Subsystem---
FSAA: 0
FSAA: 2
FSAA: 4
FSAA: 8

I assume this is a bug??
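
In the meantime, one possible workaround (an untested sketch): treat a possibleValues list that only contains "0", like the Direct3D9 case above, as "no information" rather than "no support":

var fsaaValues = root.RenderSystem.GetConfigOptions()["FSAA"].possibleValues;
int reported = 0;
foreach (string s in fsaaValues)
    reported++;
// Only trust the list when it reports more than the single "0" entry;
// otherwise fall back to a try/catch around CreateRenderWindow.
bool fsaaInfoReliable = reported > 1;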

tafkag

18-07-2011 10:00:35

Hm, don't know. I'm using DirectX and a Radeon (HD 6800) myself.

Beauty

18-07-2011 11:49:14

Now I tried this detection code with Mogre 1.6.5 on my Nvidia GeForce 9600 GT graphics card.
For my DirectX render system I get 0 values in optionMap["FSAA"].possibleValues.
Is this a bug?
Any idea?


By the way:
This is my code to use FSAA with a failsafe fallback.


NameValuePairList misc = new NameValuePairList();
String antiAliasingFactor = ...; // anti-aliasing factor ("0", "2", "4", ...)

misc["FSAA"] = antiAliasingFactor;

RenderWindow renderWindow;
try
{
    renderWindow = root.CreateRenderWindow("Main RenderWindow", 1024, 768, false, misc);
}
catch (System.Runtime.InteropServices.SEHException)
{
    if (antiAliasingFactor != "0")
    {
        //-- try again without anti-aliasing --
        misc["FSAA"] = "0";
        renderWindow = root.CreateRenderWindow("Main RenderWindow", 1024, 768, false, misc);

        // if wanted: print a fallback message to the logfile/console
    }
    else
        throw;
}

tafkag

18-07-2011 13:06:34

Now I tried this detection code with Mogre 1.6.5 on my Nvidia GeForce 9600 GT graphics card.
For my DirectX render system I get 0 values in optionMap["FSAA"].possibleValues.


I should have pointed out that this only works with 1.7.x, because of the renamed anti-aliasing parameters. In 1.6.x they were called "Anti aliasing". If I replace "FSAA" with "Anti aliasing", my code outputs:

Anti aliasing: None
Anti aliasing: NonMaskable 1
Anti aliasing: NonMaskable 2
Anti aliasing: NonMaskable 3
Anti aliasing: Level 2
Anti aliasing: Level 4
Anti aliasing: Level 8

Beauty

18-07-2011 17:07:11

Thanks - Good to know. :D

For forward compatibility of my application I want to add a version switch.

Unfortunately, I couldn't find out how to read the current Ogre/Mogre version.
As I read, it's defined in the file OgrePrerequisites.h, which is included in every Ogre code file:
#define OGRE_VERSION_MAJOR 1
#define OGRE_VERSION_MINOR 7
#define OGRE_VERSION_PATCH 1

How can I query the version value from Mogre?

smiley80

18-07-2011 17:58:58

typeof(Mogre.Root).Assembly.GetName().Version
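
Based on that, a version switch for the anti-aliasing option name could look like this sketch (it assumes the Mogre assembly version matches the wrapped Ogre version):

Version v = typeof(Mogre.Root).Assembly.GetName().Version;
// 1.6.x named the config option "Anti aliasing"; 1.7.x renamed it to "FSAA".
string aaKey = (v.Major == 1 && v.Minor < 7) ? "Anti aliasing" : "FSAA";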

Beauty

18-07-2011 21:24:18

What the hell, I never would have found that out by myself.
Thanks, smiley80 (-:

Beauty

19-07-2011 14:41:00

Now the detection works.

Here is the output for DirectX on my graphics card:
--Direct3D9 Rendering Subsystem---
FSAA: None
FSAA: NonMaskable 1
FSAA: NonMaskable 2
FSAA: NonMaskable 3
FSAA: NonMaskable 4
FSAA: NonMaskable 5
FSAA: NonMaskable 6
FSAA: NonMaskable 7
FSAA: NonMaskable 8
FSAA: Level 2
FSAA: Level 4
FSAA: Level 8


I didn't know the difference between Level and NonMaskable.
Here is an answer that I found; perhaps somebody else is interested to know it.
Demirug:


Every multisampling mode can support different levels of quality. This can be an improved sample mask or a better down-filter. In the case of non-maskable buffers, the quality level can also stand for different numbers of samples, but two different quality levels can still have the same sample count.

Direct3D supports maskable and non-maskable multisampling buffers. The difference is that you can set a write mask for maskable buffers. This mask decides which of your buffers are written to. It can be used for effects like transparency or motion blur, but you lose the anti-aliasing in that case. I had used maskable multisampling buffers to add an anti-aliasing effect for alpha tests. If you create a non-maskable buffer, you can't use this mask and the multisampling buffer is only used for default anti-aliasing.

Direct3D defines no direct matching between the quality levels of non-maskable buffers and the different maskable modes. NVIDIA GPUs, for example, can use 2- and 4-sample maskable modes but support 4 different non-maskable modes.

If your card supports D3DMULTISAMPLE_5_SAMPLES, it uses 5 subsamples per pixel. The same applies to all other modes.


Well, now I know more, but not everything.
It would be interesting to know the general speed difference between maskable and non-maskable.

In my application I will simply set the FSAA property to "0", "2", "4", which works.
I suppose the result is equivalent to "None", "Level 2", "Level 4".
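
Under that assumption, converting the plain factor to the 1.6.x option value would just be (untested sketch, using the antiAliasingFactor string from my earlier snippet):

// Assumes "0" maps to "None" and any other factor N to "Level N".
string oldStyleValue = (antiAliasingFactor == "0")
    ? "None"
    : "Level " + antiAliasingFactor;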

Beauty

12-01-2012 13:03:06

I found an interesting comparison:


From left to right:
* Antialiasing: None
* Antialiasing: NonMaskable 1
* Antialiasing: Level 4
* Zoomed: Antialiasing: None
* Zoomed: Antialiasing: NonMaskable 1
* Zoomed: Antialiasing: Level 4

Created with an ATI Radeon X1950 GT / Catalyst 8.3 graphics driver.
Thanks to user Bakhtosh for this image/description.


Now I also created a wiki page for anti-aliasing.
http://www.ogre3d.org/tikiwiki/-Anti-Aliasing

Well, I only added this image/description.
If anybody likes, they can add a general description of anti-aliasing, too.