Fully Destructible Levels - New website and domain!
- KungFooMasta
- OGRE Contributor
- Posts: 2087
- Joined: Thu Mar 03, 2005 7:11 am
- Location: WA, USA
- x 16
- Contact:
I think my eyes are playing tricks on me, the picture almost looks like a cube with chunks popping out of it..
Either way looks pretty cool. How do you determine what to texture for the inside of the wall?
Creator of QuickGUI!
- PolyVox
- OGRE Contributor
- Posts: 1316
- Joined: Tue Nov 21, 2006 11:28 am
- Location: Groningen, The Netherlands
- x 18
- Contact:
betajaen wrote: Holy pants on fire. That is beyond awesome.
Thanks!
KungFooMasta wrote: I think my eyes are playing tricks on me, the picture almost looks like a cube with chunks popping out of it..
Though of course it's equally possible to do that! You can grow geometry out of walls just as easily as you can tunnel into them, I just haven't demoed it yet. Not exactly sure what you'd use it for. Maybe building up your defences while the enemy destroys them?
KungFooMasta wrote: Either way looks pretty cool. How do you determine what to texture for the inside of the wall?
Well, each voxel in the volume can have any one of 256 different materials. In that scene the whole volume is set to an earth material, then a cube of air material is created in the centre, and then a one-voxel-thick layer of tile material is created between the two. Destroying the tile voxels reveals the earth voxels behind. Simple as that!
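The layering described here can be sketched in a few lines. This is a hypothetical illustration, not PolyVox's actual code; the material IDs and the `fill_box` helper are invented for the example.

```python
# Hypothetical sketch of the layered-material setup described above:
# fill a small volume with "earth", carve an "air" cube in the centre,
# and wrap it in a one-voxel-thick shell of "tile". Material IDs are
# invented for this example.
EARTH, AIR, TILE = 1, 0, 2
SIZE = 16

# volume[x][y][z] holds one material ID per voxel
volume = [[[EARTH] * SIZE for _ in range(SIZE)] for _ in range(SIZE)]

def fill_box(vol, lo, hi, material):
    """Set every voxel with lo <= coordinate < hi to the given material."""
    for x in range(lo, hi):
        for y in range(lo, hi):
            for z in range(lo, hi):
                vol[x][y][z] = material

fill_box(volume, 5, 11, TILE)   # a box of tiles...
fill_box(volume, 6, 10, AIR)    # ...hollowed out, leaving a one-voxel shell

# Destroying a tile voxel exposes the earth behind it
volume[5][8][8] = AIR
```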
-
- Gnoblar
- Posts: 7
- Joined: Mon Oct 01, 2007 5:52 pm
Re: Topic
This is some very cool tech. I have very little technical understanding, but I understand that this is cool.
Why were voxels dropped from things like Delta Force? Did they not use your conversion polygons at render time?
I'm thinking this could be put to very interesting use in a cartoony game. Not easily, but with all the squash and stretch that things undergo, deformable terrain and possibly characters or props would be very fitting.
- PolyVox
- OGRE Contributor
- Posts: 1316
- Joined: Tue Nov 21, 2006 11:28 am
- Location: Groningen, The Netherlands
- x 18
- Contact:
Games like Delta Force (and all the other games which used to use voxels for their terrain) did things differently. Rather than converting the voxels to polygons they used a process called raycasting to directly render the data, which was still basically a heightmap. Then, at some point, graphics hardware came along and games which used to use raycasting found it was now faster to convert the heightmap into a polygon mesh and render that - which is what modern games do.
The main difference with my tech is that rather than rendering from a 2D heightmap, it renders from a 3D volume (currently 256x256x256 voxels in size). When rendering such a volume you still have the choice between converting to polygons as I do, or using raycasting as is used in some other projects (e.g. http://www.advsys.net/ken/voxlap/voxlap05.htm)
The fact that the environment is destructible could be achieved just as easily in games which use 2D heightmaps. Actually, I'm not sure why more games don't make use of this - I'd have thought it would be popular in RTS games, for example.
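As a toy illustration of heightmap destructibility (my own sketch, not from any particular game): blasting a crater is just subtracting from the height samples around the impact point.

```python
# Destructible 2D-heightmap terrain in miniature: carve a crater by
# lowering the height values near an impact point, with a linear
# falloff from the centre to the rim. All names here are illustrative.
import math

W = H = 32
heightmap = [[10.0] * W for _ in range(H)]

def blast(hm, cx, cy, radius, depth):
    """Lower the terrain within `radius` of (cx, cy), deepest at the centre."""
    for y in range(max(0, cy - radius), min(H, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(W, cx + radius + 1)):
            d = math.hypot(x - cx, y - cy)
            if d < radius:
                # full `depth` at the centre, fading to zero at the rim
                hm[y][x] = max(0.0, hm[y][x] - depth * (1 - d / radius))

blast(heightmap, 16, 16, 6, 5.0)
```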
It seems to me the term 'voxel' has been abused over the years. It should mean 'VOlume ELement', so it doesn't really make sense to call a terrain renderer a 'voxel engine'. But that's probably just marketing people at work.
- nikki
- Old One
- Posts: 2730
- Joined: Sat Sep 17, 2005 10:08 am
- Location: San Francisco
- x 13
- Contact:
- danharibo
- Minaton
- Posts: 997
- Joined: Sat Feb 25, 2006 8:14 pm
- Location: Wales, United Kingdom
- Contact:
- Chainsawkitten
- Gnoblar
- Posts: 15
- Joined: Sun Sep 23, 2007 11:27 am
- Location: Sweden
- Contact:
- KungFooMasta
- OGRE Contributor
- Posts: 2087
- Joined: Thu Mar 03, 2005 7:11 am
- Location: WA, USA
- x 16
- Contact:
I have no idea how the voxel tech works, but what about an approach using models with lots of vertices, pushing the vertices inwards in real time and retexturing the area - is this possible? (The texture could depend on depth: if one side of a wall gets pushed in past the other side of the wall, the texture becomes transparent, to look like a hole in the wall.)
Creator of QuickGUI!
-
- OGRE Expert User
- Posts: 557
- Joined: Wed May 05, 2004 3:19 pm
- Location: Portland, OR, USA
- Contact:
- PolyVox
- OGRE Contributor
- Posts: 1316
- Joined: Tue Nov 21, 2006 11:28 am
- Location: Groningen, The Netherlands
- x 18
- Contact:
Thanks Guys! It's always nice to hear positive feedback. Keeps me motivated
danharibo wrote: NAPS demo 3
Sorry... what is NAPS?
Chainsawkitten wrote: It lags the shit out of this computer. 0.21 FPS.
Sorry to hear it's slow - I have a 2GHz-ish machine with an NVidia 6600 and get about 50fps when viewing the whole of the castle scene on the previous page. What are your specs? And were both demos the same?
There are basically two aspects to the performance. First, there's the rendering performance, which is completely unoptimised. I should be able to take advantage of hardware occlusion queries, LODs, texture atlases, and other techniques to significantly improve this. I haven't done this yet simply because it's already fast enough for me to experiment with.
The other aspect is the performance when destroying parts of the volume and generating the new mesh. I've tried to optimise this more because it's the unknown part - I wasn't sure if it would work in real time. It's pretty fluid for me, though. The only remaining thing is to multi-thread it, and it should parallelize very nicely. I need a new computer to test this, though.
KungFooMasta wrote: ...what about an approach of models with lots of vertices, and pushing in the vertices in real time, and retexturing the area, is this possible?
When I destroy part of the volume I delete that chunk of the mesh and recreate it from the newly modified volume. As I understand it, you are talking about something slightly different - you want to keep the existing vertices and triangles and just move them to deform the mesh. So maybe a bomb goes off near a high-polygon wall and you want to move all vertices within the blast radius a certain distance away from the centre of the explosion - this would create a dent in the wall. This sounds feasible, but it would be an alternative to my current approach rather than a supplement to it.
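The vertex-pushing alternative could be sketched roughly like this (the function name and the linear falloff are illustrative, not from any real engine):

```python
# Sketch of the vertex-displacement idea: push every vertex inside the
# blast radius radially away from the explosion centre, with the
# displacement fading out towards the edge of the blast.
import math

def deform(vertices, centre, radius, strength):
    """Push vertices within `radius` of `centre` radially outwards."""
    cx, cy, cz = centre
    out = []
    for (x, y, z) in vertices:
        dx, dy, dz = x - cx, y - cy, z - cz
        d = math.sqrt(dx * dx + dy * dy + dz * dz)
        if 0 < d < radius:
            # scale factor: full strength at the centre, zero at the rim,
            # divided by d to normalise the direction vector
            push = strength * (1 - d / radius) / d
            x, y, z = x + dx * push, y + dy * push, z + dz * push
        out.append((x, y, z))
    return out

wall = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
dented = deform(wall, centre=(0.0, 0.0, 0.0), radius=2.0, strength=0.5)
```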
As for the retexturing, that should be fine. My current demo retextures on the fly anyway, and it is very easy. All vertices have texture coordinates generated based on their position. It works quite well, but is nowhere near as flexible as artist-controlled UV mapping.
Chaster wrote: Hmm, this is coming along well.. I may have to see about merging this into my project which is using the portal SM... Hmm...
Yes, that would definitely be interesting. Although voxel-based environments are fun because they are destructible, they also have some drawbacks. Level sizes are limited by high memory usage and triangle counts, and you don't have the artistic freedom to create the same kind of detail that you get in most games.
But, because I'm converting the voxels into polygon meshes, it is trivial to integrate with other geometry. You just add the resulting meshes to the scene graph as you normally would in Ogre, and it happily sits alongside any other meshes in the scene.
Actually, there would be an argument for not having my work as a SceneManager at all - it could just as easily be a subclass of Renderable. Then any Ogre application could add a 'VolumeRenderable' to their scene graph to represent destructible objects such as statues, etc. For the time being I'm just focusing on the algorithms though, and I'm keen to see how integration with a physics engine affects things.
- KungFooMasta
- OGRE Contributor
- Posts: 2087
- Joined: Thu Mar 03, 2005 7:11 am
- Location: WA, USA
- x 16
- Contact:
Yah, I realized it didn't supplement your method, but thought I'd run the idea by you.
Given the ability to texture using your approach, how feasible would it be to create overhanging terrain? (I'd assume you'd have to load a heightmap terrain, convert it to a mesh, then convert that to a volume renderable, modify it, then convert back to a mesh?) I sent beaugard a PM on this a while back, with no response, but I'm wondering how one would shape an object, have it textured a certain way, and then save its configuration, so you could load it up again. Regardless, you'd need to save your voxel scenes to disk and load them if you wanted persistent scenes, right? (Say the castle was partially destroyed, and then you saved the scene and wanted to reload it at a later time.)
Creator of QuickGUI!
- PolyVox
- OGRE Contributor
- Posts: 1316
- Joined: Tue Nov 21, 2006 11:28 am
- Location: Groningen, The Netherlands
- x 18
- Contact:
Well, both beaugard and I use the same approach to texturing. It's called triplanar texturing and is described in the NVidia slides for their Cascades demo. It basically projects the texture along the x, y, and z axes and then blends using the vertex normal. It seems to work really well (especially for terrain).
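A rough sketch of the blend-weight half of triplanar texturing - in real engines this runs per-fragment in a shader; here it's just the weight calculation, written in plain Python for clarity:

```python
# Triplanar blend weights: the texture projected along each axis is
# weighted by the absolute component of the surface normal along that
# axis, so upward-facing surfaces are dominated by the y-projection.
def triplanar_weights(nx, ny, nz):
    """Return (wx, wy, wz) blend weights that sum to 1."""
    ax, ay, az = abs(nx), abs(ny), abs(nz)
    total = ax + ay + az
    return ax / total, ay / total, az / total
```

A flat floor (normal (0, 1, 0)) takes its texture entirely from the top-down projection, while a 45-degree slope blends two projections evenly.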
Converting a heightmap into a volume would be very easy. I already have a tool to convert meshes to volume (hence the castle) but a heightmap should be even easier and faster (conversion should take under a second).
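The heightmap-to-volume conversion might look something like this sketch (a hypothetical `voxelize_heightmap`, not the actual converter):

```python
# A voxel is solid whenever it lies below the heightmap sample above
# it - that's the whole conversion for a 2D heightmap.
def voxelize_heightmap(heights, depth):
    """heights: 2D list of integer heights; returns a set of solid voxels."""
    solid = set()
    for z, row in enumerate(heights):
        for x, h in enumerate(row):
            for y in range(min(h, depth)):
                solid.add((x, y, z))
    return solid

terrain = voxelize_heightmap([[2, 3], [1, 2]], depth=4)
```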
Converting the volume to a mesh is obviously already possible, but there is currently no way to save the resulting mesh. It would be possible to add a mesh exporter though - something like .obj is pretty simple. But be aware that the texture coordinate generation and triplanar texturing are handled by vertex/fragment shaders in the engine so the exported mesh would have no material if loaded into a 3D modelling app. There are no doubt ways around this, it just gets a bit less simple.
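A minimal .obj exporter of the kind mentioned really is simple - vertices and triangle faces only, with no material, which is exactly the limitation noted above. A sketch:

```python
# Minimal Wavefront .obj writer: "v x y z" lines for vertices, then
# "f a b c" lines for triangles. Note .obj face indices are 1-based.
import os
import tempfile

def write_obj(path, vertices, triangles):
    """vertices: list of (x, y, z); triangles: list of 0-based index triples."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in triangles:
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")

path = os.path.join(tempfile.gettempdir(), "single_triangle.obj")
write_obj(path, [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```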
Volume saving is already implemented though I don't know if it's bound to anything. It just saves the volume as raw data which is large but compresses extremely well due to contiguous regions (16Mb -> 57Kb for the castle).
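The compression behaviour is easy to demonstrate: long runs of a single material ID deflate to almost nothing, much like the 16Mb to 57Kb castle figure. A small sketch using zlib (the actual compressor used isn't stated, so this is only illustrative):

```python
# Raw volume data is hugely redundant: contiguous regions of one
# material become long byte runs, which deflate extremely well.
import zlib

SIZE = 64
raw = bytes([1]) * (SIZE ** 3)   # a 64^3 volume of solid "earth"
compressed = zlib.compress(raw)
```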
What are you trying to achieve? As a modelling approach, voxels can make a lot of sense, as they basically allow you to carve out any shape you want. I believe several commercial tools allow voxel-based modelling, but I've never used them. You could then export a mesh and load it as static geometry. If you want dynamic geometry in-game, I would look at beaugard's approach - mine will work but will be wasteful (in terms of memory, etc.) unless you have a lot of caves/overhangs.
Lastly, be aware that the geometry I generate isn't particularly smooth. It's actually the same as was shown in the first couple of pictures in this thread. I'm now covering it up by smoothing the normals, but generating nicer geometry is somewhere on my todo list.
- KungFooMasta
- OGRE Contributor
- Posts: 2087
- Joined: Thu Mar 03, 2005 7:11 am
- Location: WA, USA
- x 16
- Contact:
PolyVox wrote: What are you trying to achieve?
I'm just curious about the feasibility of modelling terrain, and I'm not knowledgeable about how the texturing happens. I'm not familiar with triplanar texturing. Let's say you have a heightmap and a texture map for the entire terrain (from L3DT, for example). Now you deform the terrain and make a tunnel. What texture is applied to the walls of the cave? (I'm guessing it's somehow derived from the terrain texture map.) Can this sort of texturing work with splatted terrain?
Also, deformed meshes will play a part in collisions. In the case of overhanging terrain, I'm pretty sure the heightmap can't be used, so the terrain would need to be converted to a mesh (for walking inside the cave). Basically, I'm interested in learning more about texturing and serializing these real-time deformable meshes, to see if I should look into these technologies further.
Creator of QuickGUI!
- PolyVox
- OGRE Contributor
- Posts: 1316
- Joined: Tue Nov 21, 2006 11:28 am
- Location: Groningen, The Netherlands
- x 18
- Contact:
Well, they say a picture is worth a thousand words. Unfortunately I suck at drawing, so mine's probably only worth a few hundred - but hopefully it's useful.
Anyway, let's consider the problem in 2D. Figures F1 and F2 show the way it works in beaugard's system (as I understand it). We have a large mesh which has a single material applied. However, this material makes use of two textures: a green grass one for the top of the mesh and a brown rock one for the side. In Figure F1, point A will be textured with just the green one, point C with just the brown one, and point B with a blended combination of both. Hence there is a nice smooth transition from green to brown.
The problem comes in Figure F2, when we tunnel into the side of the rock. Because point D is facing upwards it is textured by the green texture, and hence the tunnel (which we only just created) already has grass on it - which is a bit strange. I believe you can see this effect if you try beaugard's demo.
My system is shown in Figures F3 and F4. Rather than having the whole rock as one material, I would break it into two (and two separate meshes). Material M1 is grass regardless of which direction the normal is pointing. Material M2 is rock, again regardless of which direction the normal is pointing. At the seam between the two materials we now have a sudden change (maybe not as visually appealing), but the advantage is that the material is correct when we tunnel into the rock (Figure F4).
So, within a given material I am actually blending between two copies of the same texture. The blending is still necessary because otherwise you see artifacts where the horizontally aligned texture turns into the vertically aligned version.
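The difference between the two strategies in figures F1-F4 can be boiled down to a toy sketch (material IDs, threshold, and function names are all invented for illustration):

```python
# Normal-based blending paints anything upward-facing green, so the
# floor of a freshly carved tunnel deep inside the rock gets grass.
# Material-based lookup stays correct regardless of orientation.
GRASS, ROCK = "grass", "rock"

def texture_by_normal(ny):
    """beaugard-style: the dominant texture depends on how upward-facing
    the surface is (ny is the y component of the unit normal)."""
    return GRASS if ny > 0.5 else ROCK

def texture_by_material(material_id):
    """PolyVox-style: the voxel's material decides, whatever the normal."""
    return {1: GRASS, 2: ROCK}[material_id]

# A horizontal tunnel floor carved deep inside rock material (ID 2):
normal_up, material = 1.0, 2
```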
Hope that all makes some sense! Just ask if you have further questions
- betajaen
- OGRE Moderator
- Posts: 3447
- Joined: Mon Jul 18, 2005 4:15 pm
- Location: Wales, UK
- x 58
- Contact:
Today, I was thinking about how physics could be used with the Voxel SceneManager.
If Ageia still updated the pMap code in PhysX (a pMap being a dynamic voxel-based triangle mesh), you could have physics for sections that break away from the main parts. Considering you could basically just transfer the voxel data across without any heavy conversions, it would be quite fast. Sadly, pMaps are now deprecated, but it would have been a nice thing.
I also read up about voxels today (about using them with terrain) and the "Marching Cubes" algorithm; it gave me a fantastic idea for how to implement rendering fluids in NxOgre/Ogre.
Anyway, I'm definitely going to get into the Voxel thing again. Amazing, I remember playing Amiga games that used Voxels, suddenly they were phased out for a few years, but now they are back again.
- KungFooMasta
- OGRE Contributor
- Posts: 2087
- Joined: Thu Mar 03, 2005 7:11 am
- Location: WA, USA
- x 16
- Contact:
Thanks PolyVox, that cleared it up a lot. I did see the grass.. I thought it was cool. But now that you explain it, it would have grass even if the tunnel went way beneath the earth, and it would be right on the path the whole time.
So you have a mesh for each texture, interesting. Your cave would look more accurate, but you couldn't vary the texture (like limestone, granite, or something) unless you placed another mesh in there. That's not too bad.. it seems if you want to get more complex than that you'd be using a modelling app.. but who knows.
Creator of QuickGUI!
- PolyVox
- OGRE Contributor
- Posts: 1316
- Joined: Tue Nov 21, 2006 11:28 am
- Location: Groningen, The Netherlands
- x 18
- Contact:
betajaen wrote: Today, I was thinking about how physics could be used with the Voxel SceneManager.
There are certainly some interesting things which can be done with physics in a volumetric environment. As far as I can tell, the main advantage is that collision detection is very easy - I believe this will make fluid simulation much easier. It probably won't help much with rigid body stuff though, because you need additional information such as penetration depth. I've no real experience with physics though, so I may be wrong.
My initial plan is to ignore the fact that I'm working with a volume and just feed the mesh data into a conventional physics engine as some kind of static geometry. I should be able to detect when a chunk of material separates from the rest of the volume, in which case I will remove it from the static mesh and insert it as a dynamic rigid body. The physics engine will then do its normal thing of causing the loose material to fall and bounce around.
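The "detect when a chunk separates" step is essentially a connected-components test on the solid voxels. A sketch (my own, using a hypothetical 6-connected flood fill over a set of coordinates):

```python
# Flood-fill the solid voxels from a seed known to be attached to the
# ground; any solid voxel left unreached is a loose chunk that could be
# handed to the physics engine as a dynamic rigid body.
from collections import deque

def connected(solid, seed):
    """Return the set of cells in `solid` that are 6-connected to `seed`.
    `solid` is a set of (x, y, z) tuples."""
    seen, queue = {seed}, deque([seed])
    while queue:
        x, y, z = queue.popleft()
        for n in ((x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
                  (x, y - 1, z), (x, y, z + 1), (x, y, z - 1)):
            if n in solid and n not in seen:
                seen.add(n)
                queue.append(n)
    return seen

ground = {(0, 0, 0), (1, 0, 0), (2, 0, 0)}
floater = {(5, 5, 5), (5, 5, 6)}          # a chunk blown free of the ground
world = ground | floater
loose = world - connected(world, (0, 0, 0))
```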
After this is working, it may be possible to modify the underlying physics engine to use the volume via some kind of plugin or custom collision code. But I suspect that for rigid bodies it's easier just to have everything as meshes.
Fluid simulation would then most likely be implemented separately, using a custom physics engine which really worked with the volume. This kind of thing is currently out of my depth but would be a great learning experience
KungFooMasta wrote: So you have a mesh for each texture, interesting.
Yep, that's basically it. Each voxel in my volume is an unsigned char, which means I can have 256 distinct materials. And like you say, each material is a separate mesh. I do have an idea for the future where I would place all the textures in a texture atlas so that I can use a single material for the whole of the geometry. This would mean I have fewer meshes and there would be no state changes between rendering them - it should be a big performance boost. But for the time being I don't need it...
-
- Gnoblar
- Posts: 6
- Joined: Thu Nov 01, 2007 1:20 pm
Hi,
first of all, great work
I'm really interested in your work because I'm doing a very similar project for my thesis.
My results are really similar to yours, but I use a handmade 3D engine based on DirectX, so my output is a bit raw (no textures).
I have some questions:
1) Do you use binary (inside/outside) or scalar (e.g. distance from the surface, or a density gradient) values for voxels? Viewing your marching cubes combinations, it seems that you use binary values.
2) What kind of tool do you use for removing material? It seems to be a sphere. What test do you use to compute the intersection with the sphere? An inside/outside test, or a complex CSG sphere/cube intersection to calculate the exact point of intersection?
3) Nice work on smoothing the surface of your ogre on page 3. What technique have you applied?
4) Do you manage to render at real-time frame rates while intersecting?
My work is very, very similar; I use a cylinder as the removing tool and an octree data structure. I can't publish my work because it is for a software company.
If you are interested in exchanging some ideas, PM me.
Some screenshots of my work (I can remove them if there are any problems):
http://img204.imageshack.us/img204/1984/16620483jt1.jpg
http://img204.imageshack.us/img204/6938/18502451wj1.jpg
http://img204.imageshack.us/img204/6802/60586581ji9.jpg
- PolyVox
- OGRE Contributor
- Posts: 1316
- Joined: Tue Nov 21, 2006 11:28 am
- Location: Groningen, The Netherlands
- x 18
- Contact:
Hi cece, glad you like it! Actually, I just saw your post on GameDev, but I'll answer here.
cece wrote: 1) Do you use binary (inside/outside) or scalar values for voxels?
Each voxel is an unsigned byte, giving 256 values. 0 is empty space and the other 255 are various materials. However, for the purposes of marching cubes I consider all non-zero values to be the same, so really it is a binary volume. Hence I get the same kind of jagged edges that you get in your version.
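Treating all non-zero materials alike means the marching-cubes cube index depends only on solid/empty corner states. A sketch of that classification (the bit-per-corner packing is the conventional scheme; this is not PolyVox's actual code):

```python
# Binary marching-cubes classification: each of the eight cell corners
# contributes one bit, giving one of 256 standard cube configurations.
# Any non-zero material counts as "solid".
def cube_index(corners):
    """corners: eight material values in marching-cubes corner order."""
    index = 0
    for bit, value in enumerate(corners):
        if value != 0:          # all 255 materials are treated alike
            index |= 1 << bit
    return index
```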
cece wrote: 2) What kind of tool do you use for removing material?
I do use a sphere. When the user clicks a point, I compare the distance of each voxel within the sphere's bounding box to the required radius. If the distance is less than the radius, the voxel is set to zero. The code is somewhere in PolyVoxSceneManger.cpp and isn't too complicated.
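The sphere tool as described could be paraphrased like this sketch (a dict-based volume and invented names, purely for illustration):

```python
# Iterate the sphere's bounding box and zero every voxel whose squared
# distance from the click point is within the squared radius.
def carve_sphere(volume, cx, cy, cz, r):
    """volume: dict mapping (x, y, z) -> material ID; 0 means empty."""
    for x in range(cx - r, cx + r + 1):
        for y in range(cy - r, cy + r + 1):
            for z in range(cz - r, cz + r + 1):
                if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r * r:
                    if (x, y, z) in volume:
                        volume[(x, y, z)] = 0

vol = {(x, y, z): 1 for x in range(8) for y in range(8) for z in range(8)}
carve_sphere(vol, 4, 4, 4, 2)
```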
cece wrote: 3) Nice work on smoothing the surface of your ogre on page 3. What technique have you applied?
Between those screenshots, the only thing which is changing is the method used to compute the surface normals. From your screenshots it looks like you are using central differences. Try looking into Sobel gradients, and also averaging the gradients from several adjacent voxels.
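For reference, the central-difference gradient being contrasted here is just the per-axis difference of the two neighbouring samples:

```python
# Central-difference gradient of a scalar field sampled through a
# callable density(x, y, z). The (unnormalised) surface normal of an
# isosurface points along this gradient.
def central_difference(density, x, y, z):
    return (density(x + 1, y, z) - density(x - 1, y, z),
            density(x, y + 1, z) - density(x, y - 1, z),
            density(x, y, z + 1) - density(x, y, z - 1))
```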
cece wrote: 4) Do you manage to render at real-time frame rates while intersecting?
Yes, but I have more work to do here. You should try the demos if you haven't already.
Just ask if you want me to elaborate on the above points - I'll help where I can. Oh, and do you work with or know of the guy in this post?
http://www.ogre3d.org/phpBB2/viewtopic.php?p=249475
He is also working with some kind of milling machine.
Hope that helps!
-
- Gnoblar
- Posts: 6
- Joined: Thu Nov 01, 2007 1:20 pm
Thank you for answering
Then your intersection method is the same as mine. In the GameDev topic I'm trying to figure out whether it is possible to transform binary voxels into scalar ones. With scalar voxels, MC gives a much better mesh.
Regarding light smoothing: I have some problems averaging vertex normals, because getting the neighbours of a node in an octree is a bit slow.
Two other questions:
How did you manage to build the orc and the castle models? Did you voxelize them from a mesh input? If so, what kind of voxelization method did you use?
- PolyVox
- OGRE Contributor
- Posts: 1316
- Joined: Tue Nov 21, 2006 11:28 am
- Location: Groningen, The Netherlands
- x 18
- Contact:
In general it is possible to apply a low-pass (blurring) filter by replacing each voxel with the average of its neighbours - this will give you a smoother image. So you either need to maintain a separate 'averaged' volume (which you update when your main volume changes) or you need to provide a function to compute the averaged value of a voxel on demand.cece wrote:Then your intersection method is the same as mine. In the GameDev topic I am trying to figure out whether it is possible to transform binary voxels into scalar ones. With scalar voxels MC gives a much better mesh.
Regarding light smoothing... I have some problems averaging vertex normals because getting the neighbours of a node in an octree is a bit slow.
Computing these averages will involve finding neighbouring voxels - you have already mentioned this is slow with your octree. You ideally need a data structure which allows fast access to neighbours, because it is a common operation in volume graphics, used in marching cubes, gradient estimation, and value interpolation. How big are your volumes? I've used a linear memory layout (just one big chunk of data) quite successfully for volumes up to 256^3. It only takes 16MB and is very easy to implement.
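The linear layout suggested above might look like this. A minimal sketch; `LinearVolume` and its methods are illustrative names, not an actual PolyVox class:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// One contiguous allocation; a neighbour is just a fixed index offset
// away, with no tree traversal. A 256^3 volume of bytes is 16MB.
class LinearVolume {
public:
    explicit LinearVolume(int size)
        : size_(size),
          data_(static_cast<std::size_t>(size) * size * size, 0) {}

    std::size_t index(int x, int y, int z) const {
        return static_cast<std::size_t>(x)
             + static_cast<std::size_t>(y) * size_
             + static_cast<std::size_t>(z) * size_ * size_;
    }
    std::uint8_t get(int x, int y, int z) const { return data_[index(x, y, z)]; }
    void set(int x, int y, int z, std::uint8_t v) { data_[index(x, y, z)] = v; }

    // Average of the six face neighbours - the building block of the
    // low-pass filter mentioned above (no bounds handling, for brevity).
    float neighbourAverage(int x, int y, int z) const {
        return (get(x - 1, y, z) + get(x + 1, y, z) +
                get(x, y - 1, z) + get(x, y + 1, z) +
                get(x, y, z - 1) + get(x, y, z + 1)) / 6.0f;
    }

private:
    int size_;
    std::vector<std::uint8_t> data_;
};
```

With this layout the +x neighbour is at `index + 1`, the +y neighbour at `index + size`, and the +z neighbour at `index + size*size`, which is why neighbour-heavy operations like gradient estimation and blurring are so much cheaper than in an octree.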
Yes, they are voxelised from meshes. I basically take each triangle in the input mesh and repeatedly subdivide it until it is smaller than a voxel. Then I set the corresponding voxel to be 'on'. I made this approach up myself, but it seems to work OK and is pretty fast (only a few seconds). Be aware that it results in hollow volumes though; I haven't yet decided on the best way to fill them.cece wrote:Two other questions:
How did you manage to build the orc and the castle models? Have you voxelized them from a mesh input? If so, what kind of voxelization method did you use?
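The subdivision voxeliser described in that reply can be sketched as below, under simplifying assumptions (voxels are unit cubes, vertices are already in voxel coordinates); the names are hypothetical. Each triangle is split at its edge midpoints into four children until its longest edge fits inside a voxel, then the voxel under its centroid is switched on. Since only the surface is touched, the result is a hollow shell:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <set>
#include <tuple>

struct Vec3 { float x, y, z; };

static Vec3 mid(const Vec3& a, const Vec3& b) {
    return {(a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f};
}
static float dist2(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

using VoxelSet = std::set<std::tuple<int, int, int>>;

// Recursively subdivide triangle (a,b,c); when it is smaller than one
// voxel, mark the voxel containing its centroid as solid.
void voxelise(const Vec3& a, const Vec3& b, const Vec3& c, VoxelSet& out) {
    float longest = std::max({dist2(a, b), dist2(b, c), dist2(c, a)});
    if (longest < 1.0f) {
        out.insert(std::make_tuple(
            (int)std::floor((a.x + b.x + c.x) / 3.0f),
            (int)std::floor((a.y + b.y + c.y) / 3.0f),
            (int)std::floor((a.z + b.z + c.z) / 3.0f)));
        return;
    }
    // Split into four sub-triangles at the edge midpoints.
    Vec3 ab = mid(a, b), bc = mid(b, c), ca = mid(c, a);
    voxelise(a, ab, ca, out);
    voxelise(ab, b, bc, out);
    voxelise(ca, bc, c, out);
    voxelise(ab, bc, ca, out);
}
```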
-
- Gnoblar
- Posts: 6
- Joined: Thu Nov 01, 2007 1:20 pm
Regarding the volume filling in the voxelization phase, I suggest you have a look at this:
http://citeseer.ist.psu.edu/cache/paper ... cation.pdf
See the "parity count" method. As long as your meshes are watertight it should be OK.
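The core of the parity-count idea reduces to a crossing test: cast a ray through the mesh, record the depths at which it pierces the surface, and classify a sample point as interior when an odd number of crossings lie before it. A toy 1D sketch (the real method casts rays through the whole voxel grid; `isInside` is a hypothetical name, and watertightness is assumed, as the paper notes):

```cpp
#include <cassert>
#include <vector>

// 'crossings' holds the depths at which a ray pierces the surface along
// one axis; 'depth' is the sample point being classified. Odd parity of
// crossings before the sample means the point is inside the solid.
bool isInside(const std::vector<float>& crossings, float depth) {
    int count = 0;
    for (float c : crossings)
        if (c < depth)
            ++count;
    return (count % 2) == 1;
}
```

A non-watertight mesh breaks this: a missing triangle drops a crossing, flipping the inside/outside classification for everything behind the hole.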
-
- Halfling
- Posts: 81
- Joined: Sat Jul 08, 2006 10:02 am
- Location: Texas, USA
Licensing
I would like to play around with this and possibly use it on an upcoming project I'm working on, and just want to ask what kind of license it is going to have. I noticed it's GPL... so do I only need to release source changes to this, or the source to my entire project (not possible)? I would be willing to release any source changes to this part to help improve your project, but due to the nature of my project (an online game) I can't release the full source without releasing some proprietary info.
- PolyVox
- OGRE Contributor
- Posts: 1316
- Joined: Tue Nov 21, 2006 11:28 am
- Location: Groningen, The Netherlands
- x 18
- Contact:
Well, the licensing as it stands is GPL, which does indeed require you to release your code under the same licence, and so is probably prohibitive for your project. This could change in the future, but at the moment the code isn't mature enough to know exactly what I'll be doing with it. Will I separate it into a library or release a monolithic (but scriptable) application? I'm not sure... It's all a bit experimental and I wouldn't say it's ready to be used yet, unfortunately.