Any way of reading video?

daedar

07-07-2006 15:56:24

Hi,

I'm trying to read a video file (ogg, avi, mpg) with OgreDotNet, but I can't because the ExternalTextureSourceManager class isn't defined. What I actually want is to render a video file into a Texture and display it on a virtual screen (a TV, for example). How can I do this with OgreDotNet?

Thanks

Mwr

08-07-2006 17:44:56

First download pjcast's C# Theora video wrapper/classes from http://www.wreckedgames.com/ (click on downloads). Then you will need to change/make your own version of the myVideoDriver class (and the myAudioDriver class, if you don't want to use fmod) so that it blits to a texture.

I'll try to post some code later if you are still having trouble, but there is most likely a better way of doing the blitting than the way I do it.

daedar

08-07-2006 19:21:11

Okay, thanks a lot Mwr, I'll try this soon. But if you already have some code to post, could you do it just as an example? It would be very useful for us ;)

Thanks again

pjcast

08-07-2006 22:18:48

Yeah, I'd be interested in the code that allows it to work with OgreDotNet. I originally wrote that video player around the time of Axiom, but I had problems getting Axiom working, and couldn't figure out how to blit to a PictureBox or Panel, so instead I made use of SDL. :)

Mwr

09-07-2006 10:28:57

Okay, I'll post some code here, but like I said there might well be a better way, as I haven't spent that much time on it and a lot of the code needs cleaning up. At the moment it is a case of "it works", so for now it is good enough for me.

First of all you need to set up the video file and drivers (like in pjcast's example, but we need to do it inside Ogre and so that it uses a texture).

I have a texture_render class that sets up the video file (and drivers) and is added to an ArrayList (so that the frame listener can call them, and they can be closed down after use).

It has a number of class variables:


public Texture mtext;                                  //the manual texture the video is copied into
public OgreDotNet.HardwarePixelBuffer buffer;          //the texture's pixel buffer
public OgreDotNet.HardwarePixelBufferSharedPtr pix;    //shared pointer that keeps the buffer alive
public ResourcePtr resp;                               //the generated material resource (kept so it can be freed later)
public WreckedVideo.OggFile mOggFile;                  //the ogg file being decoded
public MyVideoDriver mVideo;
private MyAudioDriver mAudio;



Then the constructor is:



public texture_render(string name, string filename)
{
    MyVideoDriver.createCoefTables();
    MyAudioDriver.InitSoundSystem();

    mVideo = new MyVideoDriver();
    mAudio = new MyAudioDriver();

    //Set up Ogg Wrapper
    mOggFile = new WreckedVideo.OggFile();
    mOggFile.AUDIO_DRIVER = mAudio;
    mOggFile.VIDEO_DRIVER = mVideo;
    mOggFile.DISPATCHER = mVideo;

    //This method will throw if no ogg streams are found
    mOggFile.PrepareOggFile( new FileReader(filename), true );

    //Determine what streams were found
    mAudio = (MyAudioDriver)mOggFile.AUDIO_DRIVER;
    mVideo = (MyVideoDriver)mOggFile.VIDEO_DRIVER;

    //Create a 512x512 dynamic texture to copy the video into (14 = TU_DYNAMIC_WRITE_ONLY_DISCARDABLE)
    TexturePtr mtextptr = TextureManager.Instance.CreateManual( name+"_videotexture",
        ResourceGroupManager.DEFAULT_RESOURCE_GROUP_NAME, TextureType.TEX_TYPE_2D,
        512, 512, 0, PixelFormat.PF_R8G8B8A8, 14 );
    mtext = mtextptr.Get();
    pix = mtext.getBuffer();
    buffer = pix.Get();

    //Pass the buffer (and related objects) to the video driver so it can copy frames into it
    mVideo.buffer = buffer;
    mVideo.mtext = mtext;
    mVideo.mOggFile = this.mOggFile;

    //Fill the texture with opaque black to start with
    buffer.Lock(OgreDotNet.HardwareBuffer.LockOptions.HBL_NORMAL);
    PixelBox p_box = buffer.getCurrentLock();
    unsafe
    {
        byte* pDest = (byte*)p_box.data;

        for (int j = 0; j < 512; j++)
        {
            for (int ii = 0; ii < 512; ii++)
            {
                *pDest++ = 0;
                *pDest++ = 0;
                *pDest++ = 0;
                *pDest++ = 255;
            }
        }
    }
    buffer.Unlock();

    { //new codeblock
        //Create a material that uses the video texture
        ResourcePtr resPtr = MaterialManager.Instance.Create( name+"_videotexture_material",
            ResourceGroupManager.DEFAULT_RESOURCE_GROUP_NAME );

        resp = resPtr;
        MaterialPtr mat = new MaterialPtr( ResourcePtr.getCPtr(resPtr).Handle, false );
        TextureUnitState t = mat.Get().GetTechnique(0).getPass(0).createTextureUnitState(name+"_videotexture");
        t.SetTextureAddressingMode( TextureUnitState.TextureAddressingMode.TAM_CLAMP );
    }
}


Note that we also have to pass the HardwarePixelBuffer to the myVideoDriver class so that it can use it to copy the video into.


So in myVideoDriver you need to first add a couple of new variables:


private byte [] m_Bitmap;                          //temporary RGBA buffer that BlitFrame() fills
public OgreDotNet.HardwarePixelBuffer buffer=null; //the texture's buffer, set by texture_render
public Texture mtext;
public float scale_x=1,scale_y=1;                  //texture size / video size (see the scaling note further down)
private bool newframe=false;                       //set by BlitFrame(), cleared by Blit_toTexture()


Then change InitPictureFrame so that it sets up the height, width, data buffer, etc.:

protected override sealed void InitPictureFrame(int width, int height)
{
    m_Width = width;
    m_Height = height;
    m_BytesPerPixel = 4; //rgba
    m_Bitmap = new byte[width * height * m_BytesPerPixel];
}



Next is BlitFrame(), which is called for every frame of video that is decoded. I did at first try to copy straight to the texture inside this, but I was having problems with it (I can't exactly remember what those problems were now), so I changed it so that it copies to a temporary data buffer (m_Bitmap, the byte array which we set to the right size above).

So in fact BlitFrame is mostly unchanged from pjcast's example.


public override sealed void BlitFrame()
{
//called from OggFile to blit bitmap
//Convert 4:2:0 YUV YCrCb to an RGB24 Bitmap
//convenient pointers
if(yuv!=null)
{
int dstBitmap = 0;
int dstBitmapOffset = (m_BytesPerPixel * m_Width);

int ySrc = yuv.y_offset,
uSrc = yuv.u_offset,
vSrc = yuv.v_offset,
ySrc2 = (ySrc + yuv.y_stride);

//Calculate buffer offsets
int dstOff = m_Width * m_BytesPerPixel;//( m_Width*6 ) - ( yuv->y_width*3 );
int yOff = (yuv.y_stride * 2) - yuv.y_width;

//Check if upside down, if so, reverse buffers and offsets
if ( yuv.y_height < 0 )
{
yuv.y_height = -yuv.y_height;
ySrc += (yuv.y_height - 1) * yuv.y_stride;

uSrc += ((yuv.y_height / 2) - 1) * yuv.uv_stride;
vSrc += ((yuv.y_height / 2) - 1) * yuv.uv_stride;

ySrc2 = ySrc - yuv.y_stride;
yOff = -yuv.y_width - ( yuv.y_stride * 2 );

yuv.uv_stride = -yuv.uv_stride;
}

//Cut width and height in half (uv field is only half y field)
yuv.y_height = yuv.y_height >> 1;
yuv.y_width = yuv.y_width >> 1;

//Convenient temp vars
int r, g, b, u, v, bU, gUV, rV, rgbY;
int x;

//Loop does four blocks per iteration (2 rows, 2 pixels at a time)
for (int y = yuv.y_height; y > 0; --y)
{
for (x = 0; x < yuv.y_width; ++x)
{


//Get uv pointers for row
u = yuv.data[ uSrc + x ];
v = yuv.data[ vSrc + x ];

//get corresponding lookup values
rgbY= YTable[yuv.data[ySrc]];
rV = RVTable[v];
gUV = GUTable[u] + GVTable[v];
bU = BUTable[u];
++ySrc;

//scale down - brings the values back into the 8 bits of a byte
r = (rgbY + rV ) >> 13;
g = (rgbY - gUV) >> 13;
b = (rgbY + bU ) >> 13;

//Clip to RGB values (0-255)
m_Bitmap[dstBitmap] = CLIP_RGB_COLOR( r );
m_Bitmap[dstBitmap+1] = CLIP_RGB_COLOR( g );
m_Bitmap[dstBitmap+2] = CLIP_RGB_COLOR( b );

//And repeat for other pixels (note, y is unique for each
//pixel, while uv are not)
rgbY = YTable[yuv.data[ySrc]];
r = (rgbY + rV) >> 13;
g = (rgbY - gUV) >> 13;
b = (rgbY + bU) >> 13;
m_Bitmap[dstBitmap+m_BytesPerPixel] = CLIP_RGB_COLOR( r );
m_Bitmap[dstBitmap+m_BytesPerPixel+1] = CLIP_RGB_COLOR( g );
m_Bitmap[dstBitmap+m_BytesPerPixel+2] = CLIP_RGB_COLOR( b );
++ySrc;

rgbY = YTable[yuv.data[ySrc2]];
r = (rgbY + rV) >> 13;
g = (rgbY - gUV) >> 13;
b = (rgbY + bU) >> 13;
m_Bitmap[dstBitmapOffset] = CLIP_RGB_COLOR( r );
m_Bitmap[dstBitmapOffset+1] = CLIP_RGB_COLOR( g );
m_Bitmap[dstBitmapOffset+2] = CLIP_RGB_COLOR( b );
++ySrc2;

rgbY = YTable[yuv.data[ySrc2]];
r = (rgbY + rV) >> 13;
g = (rgbY - gUV) >> 13;
b = (rgbY + bU) >> 13;
m_Bitmap[dstBitmapOffset+m_BytesPerPixel] = CLIP_RGB_COLOR( r );
m_Bitmap[dstBitmapOffset+m_BytesPerPixel+1]=CLIP_RGB_COLOR( g );
m_Bitmap[dstBitmapOffset+m_BytesPerPixel+2]=CLIP_RGB_COLOR( b );
++ySrc2;

//Advance inner loop offsets
dstBitmap += m_BytesPerPixel << 1;
dstBitmapOffset += m_BytesPerPixel << 1;

} // end for x

//Advance destination pointers by offsets
dstBitmap += dstOff;
dstBitmapOffset += dstOff;
ySrc += yOff;
ySrc2 += yOff;
uSrc += yuv.uv_stride;
vSrc += yuv.uv_stride;
} //end for y
newframe=true;
}
}
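
For reference, the lookup tables used above (YTable, RVTable, GUTable, GVTable, BUTable) and CLIP_RGB_COLOR come from pjcast's wrapper (the constructor further up calls MyVideoDriver.createCoefTables() to fill them in), so I haven't changed them. Just as a rough sketch of how such fixed-point tables are typically built - the exact coefficients here are my assumption based on the standard BT.601 YCbCr-to-RGB formulas and the ">> 13" shift above, not copied from the wrapper:


//Rough sketch only - the real tables are built by createCoefTables() in pjcast's code.
//Coefficients are the usual BT.601 ones, scaled by 2^13 (8192) so that the ">> 13"
//in BlitFrame() brings the values back down to 8 bits.
static int[] YTable  = new int[256];
static int[] RVTable = new int[256];
static int[] GUTable = new int[256];
static int[] GVTable = new int[256];
static int[] BUTable = new int[256];

static void BuildCoefTables()
{
    for (int i = 0; i < 256; i++)
    {
        YTable[i]  = (int)(1.164 * 8192 * (i - 16));   //luma term, offset by 16
        RVTable[i] = (int)(1.596 * 8192 * (i - 128));  //red   contribution from V (Cr)
        GUTable[i] = (int)(0.391 * 8192 * (i - 128));  //green contribution from U (Cb)
        GVTable[i] = (int)(0.813 * 8192 * (i - 128));  //green contribution from V (Cr)
        BUTable[i] = (int)(2.018 * 8192 * (i - 128));  //blue  contribution from U (Cb)
    }
}

//CLIP_RGB_COLOR just clamps the result to the 0..255 range:
static byte CLIP_RGB_COLOR(int value)
{
    if (value < 0)   return 0;
    if (value > 255) return 255;
    return (byte)value;
}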



Then, as we still haven't got the video into the texture, we need a way of doing that, so I added a Blit_toTexture() function to myVideoDriver which is called from the frame listener. It uses the newframe variable to check that a new frame is actually ready (rather than just copying the same data every frame).



public void Blit_toTexture()
{
    if (newframe)
    {
        //work out where on the 512x512 texture to place the (possibly smaller) frame
        float y_offset1 = 512.0f / scale_y;
        y_offset1 = 512.0f - y_offset1;
        int y_offset = (int)y_offset1 / 2;
        y_offset = y_offset * 2048;        //2048 = 512 pixels * 4 bytes per row

        float x_offset1 = 512.0f / scale_x;
        x_offset1 = 512.0f - x_offset1;
        int x_offset = (int)x_offset1 / 2;
        x_offset = x_offset * 4;           //4 bytes per pixel

        if (buffer != null)
        {
            buffer.Lock(OgreDotNet.HardwareBuffer.LockOptions.HBL_NORMAL);

            PixelBox p_box = buffer.getCurrentLock();
            if (p_box == null)
            {
                System.Console.WriteLine("didn't lock buffer");
            }
            unsafe
            {
                byte* m_Bit = (byte*)p_box.data;

                //copy the decoded frame (m_Bitmap) into the locked texture buffer
                for (int y = 0; y < m_Height; y++)
                {
                    for (int x = 0; x < m_Width; x++)
                    {
                        m_Bit[(y*(512*4)+y_offset)+(x*4)+x_offset]   = m_Bitmap[(y*(m_Width*4))+(x*4)];
                        m_Bit[(y*(512*4)+y_offset)+(x*4)+x_offset+1] = m_Bitmap[(y*(m_Width*4))+(x*4)+1];
                        m_Bit[(y*(512*4)+y_offset)+(x*4)+x_offset+2] = m_Bitmap[(y*(m_Width*4))+(x*4)+2];
                        m_Bit[(y*(512*4)+y_offset)+(x*4)+x_offset+3] = 255;
                    }
                }
            }
            buffer.Unlock();

            newframe = false;
        }
    }
}
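
As for calling Blit_toTexture(), here is roughly how I drive it from the frame listener using the ArrayList of texture_render objects mentioned earlier. Treat this as a sketch only: how you register the frame listener / FrameStarted handler depends on your OgreDotNet setup, and the list and handler names are just placeholders.


//ArrayList holding every texture_render we created (placeholder name; needs: using System.Collections;)
ArrayList videoList = new ArrayList();

//Called once per rendered frame from your frame listener / FrameStarted handler
//(the exact hookup depends on your OgreDotNet version - this is only a sketch).
bool FrameStarted(FrameEvent e)
{
    foreach (texture_render tr in videoList)
    {
        //copies the latest decoded frame into the Ogre texture, if one is ready
        tr.mVideo.Blit_toTexture();
    }
    return true;   //keep rendering
}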


Then you need to assign the created material to an entity. (The material name is the name you passed to the texture_render class + "_videotexture_material"; so if you had passed the name "video1", the material name would be "video1_videotexture_material". Of course you can change it so that it creates differently named materials.)

And once you want to start playback of the video you need to call:

[name_of_texture_render_class].mOggFile.SetPause( false );
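
Putting the pieces together, usage from your scene setup might look something like this. It is only a sketch: the entity, scene manager, and file names are made up, and the SceneManager.GetEntity / Entity.SetMaterialName calls are my assumption about how the ODN binding exposes them - adjust to whatever you actually use.


//Create the video texture/material and remember it so the frame listener can update it
//(file and entity names below are placeholders)
texture_render tv = new texture_render("video1", "../../media/intro.ogg");
videoList.Add(tv);

//Assign the generated material to the entity acting as the screen
//(SetMaterialName/GetEntity are assumptions about the ODN binding)
Entity screen = mSceneManager.GetEntity("TVScreen");
screen.SetMaterialName("video1_videotexture_material");

//Start playback
tv.mOggFile.SetPause(false);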

I haven't got that much time right now, so there is a chance that I forgot to mention something. Also, of course, you need to make sure you free everything up before closing down, etc.

pjcast

10-07-2006 01:38:36

I haven't read your code entirely; however, I believe the reason you needed to create a secondary surface was that you had auto updating set to true, which makes your blit get called from the decoding thread's context instead of your own. You can turn off auto updating and instead check every frame for a frame-ready flag, IIRC. Though I found that auto blitting yielded the best performance and the least amount of going out of sync.

Anyway, thanks for the code :) If I ever get a chance to update that C# plugin (I had added support for IceCast internet ogg streams - a pretty simple implementation; it wasn't that reliable though, and I think I lost those changes anyway), I will add an ODN demo. However, if someone wants to take up the work of adding that, that'd be cool too ;)

daedar

10-07-2006 08:47:31

Thanks for the code, I'll try this as soon as possible...

Mwr

10-07-2006 10:17:34

I forgot to say that to make the video fill the full size of the texture, you need to add some lines after:


TextureUnitState t = mat.Get().GetTechnique(0).getPass(0).createTextureUnitState(name+"_videotexture");
t.SetTextureAddressingMode( TextureUnitState.TextureAddressingMode.TAM_CLAMP );


Which is in the texture_render class. The lines you need to add scale the texture and tell the myVideoDriver class that you want it scaled, so that it copies the video into the right position on the texture. Say we have a video that is 320x256 in size and we want it to cover the whole 512x512 texture: we divide the size of the texture by the size of the video and then add the following lines (after the above lines):

t.setTextureScale(1.6f,2.0f);
mVideo.scale_x=1.6f; // 512/320
mVideo.scale_y=2.0f; // 512/256


It would be better, though, if these scale factors were passed to the class constructor rather than hard-coded.
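
Something like this is what I mean - just a sketch of that change (the parameter names are my own):


public texture_render(string name, string filename, float scaleX, float scaleY)
{
    // ... same video/texture setup as in the original constructor ...

    //tell the video driver where/how to place the frame on the texture
    mVideo.scale_x = scaleX;   //e.g. 512.0f / 320.0f = 1.6f
    mVideo.scale_y = scaleY;   //e.g. 512.0f / 256.0f = 2.0f

    // ... material creation as before, then scale the texture unit to match:
    // t.setTextureScale(scaleX, scaleY);
}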