Page 1 of 11

Stereo vision manager [now as a plugin]

Posted: Tue May 22, 2007 1:48 pm
by Thieum
Hi !

There were some requests for stereo support in OGRE recently. I made a stereo manager a month ago to test my [s]multihead patch[/s] (edit: this patch is outdated; use the one distributed with the Stereo Manager instead if you are still using Ogre 1.6. No patch is needed for Ogre 1.7)

I'm releasing it in case it can be useful:
http://www.axyz.fr/support/OGRE/stereoscopy.zip
or svn : https://ogreaddons.svn.sourceforge.net/ ... tereoscopy
Update Jan 11 2010
- added a separate property sheet to the project so the Ogre 1.6 and 1.7 directories can be configured separately
- removed the visual studio 2005 project files


Update 09/25/2009
- renamed getFocalLengthInfinite to isFocalLengthInfinite
- correctly display the focal plane when the focal length is infinite
- replaced the focus WindowEventListener with setDeactivateOnFocusChange(false)
- added an update of Nir Hasson's multi-monitor patch for Ogre 1.6.3 to the distribution
- fixed a few bugs in Demo_Fresnel_Stereo
- added a key (decimal on the numpad) to change the focal length to infinite in Demo_Fresnel_Stereo

Update 03/02/2009
- Added a yellow/blue anaglyph mode
- Added support for infinite focal lengths for Head Mounted Displays (allows parallel-frustum stereo)
It supports both anaglyph and dual output (for polarized projectors).
It is very straightforward: just initialize it with a camera and the output viewports (one for anaglyph, two for dual output), like this:

Code: Select all

mStereoManager.init(mCamera, mWindows[0]->getViewport(0), mWindows[1]->getViewport(0), StereoManager::SM_DUALOUTPUT);
// or
mStereoManager.init(mCamera, mWindow->getViewport(0), NULL, StereoManager::SM_ANAGLYPH);
It uses the compositor for anaglyph rendering, so there is no anti-aliasing in anaglyph mode.

It also supports reflections and other effects that use render textures. Just add a dependency so the render texture is updated for each eye:

Code: Select all

mStereoManager.addRenderTargetDependency(rttTex);
and many other obscure undocumented features ! (yeah ! )

There is an example of usage included in the archive
Image

(I used the materials and compositor scripts from this thread)

edit 02/17/2009 :
Example use of the plugin with an unmodified ogre demo :
Image

Posted: Tue May 22, 2007 2:32 pm
by Praetor
Catch me up on terms. Dual output means you need two windows, side-by-side, each taking half the width of the total screen size? Anaglyph is the red-blue variety?

Posted: Tue May 22, 2007 2:47 pm
by Thieum
Yes, anaglyph is red-blue stereo (the screenshot is in anaglyph).
Dual output uses two screens. It can be two fullscreen windows with a multihead card, or one window with two viewports spanning horizontally across two screens.

Posted: Tue May 22, 2007 5:31 pm
by Praetor
So this does not address the "quad buffer" method. I believe that is OpenGL-only, correct?

Posted: Tue May 22, 2007 5:35 pm
by Praetor
Which of these methods works in windowed mode?

Posted: Wed May 23, 2007 9:04 am
by Thieum
I don't really know how the OpenGL quad buffer method works, but the stereo manager is independent of the render system.
It just takes viewports as inputs and adds a frame listener that shifts the camera to the left and to the right.
You can use it with two windows, one window with two viewports, or anything you want :)

(but if you want to use two fullscreen outputs, yes, you should use OpenGL quad buffer or DirectX multihead)

Posted: Wed May 23, 2007 2:42 pm
by Praetor
Great. Some of the techniques I've read about seem to require fullscreen, and we can't require that. I'll have to try out your manager and your patch once we get more "core" requirements done.

Posted: Tue May 29, 2007 4:49 am
by MrPixel
Thieum - very cool! I'm looking to implement a new stereo technique. Your anaglyph mode would be a perfect starting point. I just stumbled upon OGRE a couple of days ago so I'm going to ask some noob Qs.

Do I need to build the StereoManager as a plugin? If so, can I grab one of the existing plugins as a template for StereoManager? (I made a test application by using a demo as a template.) Anything else I should be aware of before diving in?

TIA
-Pixel

Posted: Tue May 29, 2007 6:19 am
by MrPixel
OK - I made a plugin/dll from StereoManager. I tried to make an app out of Fresnel-stereo, but it fails because it can't find stereo.h. Is that the header for Fresnel-stereo? Can you post it??? :D

thanks,
-Pixel

Posted: Tue May 29, 2007 6:21 am
by futnuh
Thieum wrote:It just take viewports as inputs and add a frame listener to shift the camera to the left and to the right you can use it with two windows, or one windows with two viewports or anything you want :)
When you say "shift ... left and ... right", I take it each camera is looking directly ahead with an asymmetric frustum (rather than the "toe-in" method)?

Posted: Tue May 29, 2007 10:52 am
by Paulov
Hey

I've got two 19" TFTs; any idea how I could take advantage of your dual output "addon"?

I'm a graphic artist; it would just be for making a test, and I'd post back. :D

Only one question.

I've been following HMDs like the Z800 (now dead for gamers) and the incoming Icuiti 920, and both have a stereo viewing mode.
But they get this stereo by showing one eye one point of view (while no data refresh happens in the other), then switching to the other eye, many times per second. I think it's called "field sequential" or "frame sequential"...

Does your dual output work the same way, or is it "real time" for each eye?
And if it's "real time", does it affect performance by slowing it 2x?

bye bye

/// looks very nice\\\\

Posted: Tue May 29, 2007 11:14 am
by Thieum
wow, it is quite a success 8)

futnuh> yes it uses an asymmetric frustum
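To make the off-axis idea concrete, here is a small self-contained sketch of the math (my own illustration, not the actual StereoManager code; the names `eyeFrustum` and `NearBounds` are made up): each eye's camera is translated by half the eye spacing, and the near-plane window is shifted the opposite way so both frusta still converge at the focal plane.

```cpp
#include <cmath>

// Near-plane x bounds of an off-axis (asymmetric) frustum for one eye.
// eyeSign is -1 for the left eye, +1 for the right eye.
struct NearBounds { double left, right; };

NearBounds eyeFrustum(double fovY, double aspect, double nearClip,
                      double focalLength, double eyeSpacing, int eyeSign)
{
    double halfH = nearClip * std::tan(fovY / 2.0);
    double halfW = halfH * aspect;
    // Shift the near-plane window against the camera offset so the two
    // frusta meet at the focal plane without toeing the cameras in.
    double shift = (eyeSpacing / 2.0) * (nearClip / focalLength);
    return { -halfW - eyeSign * shift, halfW - eyeSign * shift };
}
```

Back-projecting the near-plane edges out to the focal distance and adding each camera's lateral offset gives the same screen rectangle for both eyes, which is exactly the property the toe-in method lacks (toe-in makes the projection planes non-parallel and introduces vertical parallax).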


MrPixel>
MrPixel wrote:I tried to make an app out of Fresnel-stereo but fails 'cause it can't find stereo.h.
oops, sorry about stereo.h; it is in fact StereoManager.h. I renamed the file in the StereoManager archive but not in fresnel.cpp :oops:
the fresnel archive is fixed now, but you can also just replace "stereo.h" with "StereoManager.h"
Do I need to build the StereoManager as a plugin?
you don't have to convert the manager into a plugin; you just have to add the .cpp to your app and compile it. but a plugin is fine too :)
I'm looking to implement a new stereo technique.
are you talking about yellow/purple anaglyph ?


Paulov>
Paulov wrote:I´ve two 19ª TFTs , any idea on how could I take advantage of your dual output "addon" ?¿
To get dual output, just download the multihead patch, recompile the Direct3D renderer, and initialise your app the same way it is done in the multihead fresnel example (not this one, the one in the multihead archive. Yes, I love Demo_Fresnel :P)
You will get one render target per screen.
If you want stereo vision with polarized projectors, add the StereoManager to your app, create a viewport in each render target, and use the dual output mode.
I´ve been following HMDs like Z800 (now dead for gamers) and the incoming Icuiti 920, and both have stereo viewing mode.
But they get this stereo they show in one Eye one pint of view (while in the other no data-refresh is done) , and then change to the other eye. This operation a number of times per second. I think its called "field sequential" or "frame sequential"....

Your dual output works equally, or is "real time " for each eye??¿
The stereo rendering is done in real time: each frame, the left eye is rendered into one viewport and the right eye into the other. There are two renders, but both within one frame (not sequential).
But I can add a sequential mode if it would be useful :)
And if its "real time" it affects to the performance by slowing 2x?¿?
There are two renders, so the framerate will be reduced
(it may be halved if the GPU is the bottleneck)

Posted: Tue May 29, 2007 7:06 pm
by MrPixel
Thieum wrote:are you talking about yellow/purple anaglyph ?
I'm doing the implementation for a company that I just started working for. I'm not sure how much I can say - so I'll say nothing. :? I'll post more info when I can.

Posted: Tue May 29, 2007 7:17 pm
by futnuh
Seamless support for the various flavours of stereo (including opengl quadbuffer stereo and the DirectX equivalent) would make a lot of people in the sciviz/medviz world take notice of Ogre. Ideally changing between anaglyphic, passive polarized and active would be a single API call.

Not having looked at the code yet, I'm curious, do you support a viewer who isn't positioned at the screen center? This is necessary, for example, in multi-screen systems. (In our python-ogre stereo code, we let the user define screen size, screen center, screen normal, screen up, viewer location and eye separation.)

Posted: Tue May 29, 2007 7:34 pm
by futnuh
While we're on the subject of "stereo techniques", many people aren't aware of a passive system from Germany called Infitec. They use specially-constructed interference filters to let each channel pass a slightly different R, G, and B. That is, the left-eye red is ever so slightly different from the right-eye red, etc. The system gets much better extinction than polarized (linear and circular) and doesn't require battery-powered glasses. The glasses are more expensive than polarized but less than active (CrystalEyes or NuVision). Food for thought.

Disclaimer: We sell Infitec systems as part of our turn-key museum environments.

Posted: Tue May 29, 2007 10:08 pm
by Praetor
futnuh wrote:Seamless support for the various flavours of stereo (including opengl quadbuffer stereo and the DirectX equivalent) would make a lot of people in the sciviz/medviz world take notice of Ogre. Ideally changing between anaglyphic, passive polarized and active would be a single API call.
Yes! We are currently looking at doing mainly quad buffer. I've seen some things doing it with Ogre and it doesn't look very difficult to implement. What exactly is the equivalent in DX? I've tried looking it up with no luck. We have access to a stereo projector, anaglyphic and polarized glasses. I'm not sure if some of the techniques need you to be running fullscreen, but we need to be able to run in windowed mode.

Posted: Thu May 31, 2007 11:17 am
by Thieum
I don't know how OpenGL quad buffer works, but I think it is the equivalent of DirectX multihead (fullscreen rendering on each output of the card).
You can swap between anaglyph and polarized pretty easily with the manager: just shut it down and re-init it with another mode.

but activating or deactivating multihead on the fly is not possible; you have to destroy and recreate your render windows. Maybe you have to do the same thing with OpenGL quad buffer.
futnuh wrote:do you support a viewer who isn't positioned at the screen center?
Not at the moment, but that is really easy to support. Supporting multiscreen stereo would not be as easy, though; I don't really see how to easily specify an arbitrary number of displays and their positions

I have heard about Infitec but I have never seen it. A colleague of mine has seen it, but he said it was not as good as polarized stereo
Praetor wrote:We have access to a stereo projector, analglyphic and polarized glasses. I'm not sure if some of the techniques need you to be running fullscreen, but we need to be able to run in windowed mode.
fullscreen anaglyph stereo is easy, and you can do this with multihead (and quad buffer, I suppose)

when you talk about windowed rendering, are you talking about a split window spanning the two projectors, or one separate window on each monitor? (this would be more tricky)

Posted: Thu May 31, 2007 4:10 pm
by Praetor
I believe the way I've seen windowed mode working with the polarized projector is by using quad buffer. It is trivial to do the stereo in fullscreen, because the machine connected to the projector will have a Quadro and we can simply span a double-wide render window with two viewports. However, we simply cannot force fullscreen. I know anaglyph should be possible for us without trouble, but using the projector I think will require quad buffer. The DX equivalent isn't important, I was just curious. We are pretty much only using OpenGL, even on Windows.

Posted: Fri Jun 01, 2007 7:57 am
by MrPixel
Thieum - I finally figured out how to apply the patch under Windows. Cool - I have Fresnel-stereo working in anaglyph mode. I'd like to try the dual output mode. Can you supply the stereo.cfg file that Fresnel-stereo.cpp is looking for? Thanks!

Posted: Fri Jun 01, 2007 9:04 am
by Thieum

Code: Select all

[Stereoscopy]
Stereo mode = 2
Eyes spacing = 0.06
Focal length = 0.7
Screen width = 0.52
Fixed screen = true
Stereo mode 1 is anaglyph and 2 is dual output; 0 disables it.
You can also press the U key to swap on the fly between anaglyph and dual output, or pass StereoManager::SM_DUALOUTPUT as a parameter to StereoManager::init

Posted: Thu Jun 07, 2007 6:01 am
by MrPixel
Thanks for all the help Thieum. I modified your anaglyph shader for our custom stereo setup and put together a very cool demo in about half a day. I've spent all day today trying to make a stand-alone version to run on our demo system <sigh>.

I have one remaining question about your StereoManager. You hard-coded the physical width (in meters) of the display window - SCREEN_WIDTH. I know there is a way to get this with OpenGL and glut - is there a way to get this from OGRE? I've searched the docs and haven't found anything.
-Pixel

Posted: Thu Jun 07, 2007 9:29 am
by Thieum
I didn't know you could get this with OpenGL :D I don't know about OGRE.
Anyway, we use a projector, so I doubt OpenGL can do anything for us about the screen width :P

the SCREEN_WIDTH macro is no longer used; I forgot to remove it from the source code. You can specify the screen width in the config file, and it is only used when Fixed screen is true.
In this case, when you change the focal length, the focal plane does not move; instead, the camera is moved and the FOV is changed so that the angular width of the screen in the observer's field of view is preserved
(the focal length corresponds to the distance between the screen and the observer)
the scene will then appear at real size with correct perspective 8)
(but it works better on a projected screen)
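As a sketch of that relationship (my own illustration, not StereoManager's code; the function name `horizontalFov` is made up): if the focal length is the viewer-to-screen distance, the horizontal FOV that keeps the virtual screen the same angular size as the physical one is

```cpp
#include <cmath>

// FOV such that a virtual screen of width screenWidth, placed at
// focalLength (the viewer-to-screen distance), subtends the same
// angle as the real screen does for the observer.
double horizontalFov(double screenWidth, double focalLength)
{
    return 2.0 * std::atan((screenWidth / 2.0) / focalLength);
}
```

With the values from the stereo.cfg posted earlier in the thread (Screen width 0.52 m, Focal length 0.7 m) this gives roughly 0.71 rad, about 41 degrees.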

Posted: Sat Jul 07, 2007 11:17 am
by barkas
I don't think the cg script does the compositing right: it shouldn't filter the color subcomponents out of both textures, but convert both to grayscale and then composite that:

Code: Select all

float4 Melange_fp
(
    in float2 texCoord: TEXCOORD0,

    uniform sampler LeftTex : register(s0),
    uniform sampler RightTex : register(s1)
) : COLOR
{
    // convert each eye's image to grayscale, then pack the left eye
    // into red and the right eye into green/blue
    float left = dot(tex2D(LeftTex, texCoord).rgb, float3(0.3, 0.59, 0.11));
    float right = dot(tex2D(RightTex, texCoord).rgb, float3(0.3, 0.59, 0.11));
    return float4(left, right, right, 1.0);
}
I'm not sure if I should divide the right component by 2 yet, though.

I'm also working on resizing capability and multiple render windows at the same time (name clashes atm), and I have already added manual updating (it depends on Ogre's auto rendering atm).

Posted: Fri Jul 13, 2007 10:38 am
by Thieum
If you convert each texture to grayscale, you will see everything in grayscale, even with the red and blue glasses. Without the grayscale conversion, you still have some color perception.
Some screenshots to illustrate:

the original cg:
Image
here you can see that the head is green

your cg:
Image

Posted: Mon Jul 16, 2007 11:15 am
by barkas
You lose the color anyway, since you only see monochromatically with each eye.

With your shader you might even lose the whole 3D effect; for example, if a part of the scene is pure red, it will only show up in one eye, which was in fact the problem I had.

It only works in your example because it is mostly grayscale anyway.
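The point can be checked with plain arithmetic (a standalone illustration, not shader code; averaging green and blue for the cyan eye is my simplification): with pure channel filtering a saturated red pixel contributes nothing to the green/blue eye, while a luminance conversion with the usual 0.3/0.59/0.11 weights gives both eyes a non-zero value.

```cpp
struct RGB { double r, g, b; };

// Grayscale value using the same weights as the shaders in this thread.
double luminance(RGB c) { return 0.3 * c.r + 0.59 * c.g + 0.11 * c.b; }

// Plain channel anaglyph: the left eye sees the red channel of its image,
// the right eye sees the green/blue channels of its image.
double channelLeft(RGB c)  { return c.r; }
double channelRight(RGB c) { return 0.5 * (c.g + c.b); }
```

For a pure red pixel {1, 0, 0}, channelRight returns 0, so the object vanishes for that eye and the depth cue is lost, whereas luminance returns 0.3 for both eyes.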

BTW, I've done it in GLSL, to lose the dependency on the cg plugin:

anaglyph.compositor:

Code: Select all

compositor Stereo/Anaglyph
{
    technique
    {
        target_output
        {
            // Start with clear output
            input none
            // Draw a fullscreen quad with the black and white image
            pass render_quad
            {
                // Renders a fullscreen quad with a material
                material Stereo/AnaglyphMaterial
            }
        }
    }
}

anaglyph.material:

Code: Select all

fragment_program stereo/composite_fs glsl
{
	source anaglyph_fragment.glsl
	default_params
	{
		param_named leftTex int 0
		param_named rightTex int 1
	}
}

vertex_program stereo/composite_vs glsl
{
	source anaglyph_vertex.glsl
}

material Stereo/AnaglyphMaterial
{
	technique
	{
		pass
		{
			depth_check off
			fragment_program_ref stereo/composite_fs
			{
			}
			vertex_program_ref stereo/composite_vs
			{
			}
			texture_unit leftTex
			{
				tex_address_mode clamp
				filtering linear linear none
				tex_coord_set 0
			}
			texture_unit rightTex
			{
				tex_address_mode clamp
				filtering linear linear none
				tex_coord_set 0
			}
		}
	}
}
anaglyph_fragment.glsl

Code: Select all

uniform sampler2D leftTex, rightTex;

void main(void)
{
	float left = dot(texture2D(leftTex, vec2(gl_TexCoord[0])), vec4(0.3, 0.59, 0.11, 0.0));
	float right = dot(texture2D(rightTex, vec2(gl_TexCoord[0])), vec4(0.3, 0.59, 0.11, 0.0));
	gl_FragColor = vec4(1.0-left, 1.0-right, 1.0-right, 1.0);
}
anaglyph_vertex.glsl

Code: Select all

void main(void)
{
	gl_Position     = ftransform();
	gl_TexCoord[0]  = gl_MultiTexCoord0;
}