Memory usage and load speed - only loading what is necessary

Falagard

03-10-2006 20:08:41

Tuan,

I think we've discussed this before, but I'm going to bring it up again because I'm concerned about the PLM2 and want to know if there's any way I can bend it to my will ;-)

First, I'll explain my motivations: I want the page size of the terrain to be fairly small, say 128, so that very high complexity materials with per pixel lighting, normal mapping per splat texture, etc. can be used, but so that the material LOD can ensure that anything further than 128 will switch to a simpler material.

I want terrain that extends very far into the distance, perhaps up to 10 kilometers, and loads quickly.

I want to use as little memory as possible.

I want a fairly high resolution terrain for pages that are close to the camera, say 2 meter resolution (a vertex every 2 meters) but possibly even 1 meter.

Sounds crazy, I know.

With those insane requirements, the current implementation of PLM2 has some issues:

1. If I break up the terrain into 128x128 pages but can view, let's say, 5 kilometers of terrain at once, there will be lots of distant pages visible, and each one is a separate render operation, so these aren't being batched very well.

2. Each page stores its vertex data at the highest resolution, plus a duplicate copy of the height data in an additional array stored with the page. In my mind, a distant page rendered at low resolution shouldn't need to load its high resolution data (create the vertices, store the 1 meter resolution height data) until it gets much closer to the camera.

3. Loading all that data takes a long time, but it would be much faster if only what was needed were loaded.

4. Really distant pages don't need the relatively high resolution splatting textures, or even the merged base maps at their current resolution; they could use much lower resolution textures.

I'm thinking of using the PLM2 for terrain while editing, but creating a custom solution while in game that does something like the following:

I take the 128x128 pages created in the PLM2 and run a process on them that merges them into groups of four, scaling each group back down to 128x128. So for example, I would take four pages of height data (as well as the splatting alphas and normals, each in their own images), merge them into a 256x256 map, then scale it back to 128x128. This 128x128 height data would be used to build terrain at half the original resolution, and likewise for splatting, normal mapping, etc. So four pages at a certain distance from the camera would no longer render as four separate pages; they'd render as a single page with a lower-res mesh, splatting textures, etc. I'd then take those new 128x128 textures representing four pages each, merge four of those into a new 256x256, and scale it back to 128x128 again. This could keep going for many levels of detail.
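To make the merge step concrete, here's a rough offline sketch of it (my own illustration using NumPy, not PLM2 code; the 128 page size and 2x2 grouping are taken from the description above):

```python
import numpy as np

PAGE = 128  # page resolution in samples, per the post

def merge_four(tl, tr, bl, br):
    """Merge a 2x2 group of PAGE x PAGE height pages into one
    256x256 map, then scale back down to PAGE x PAGE by averaging
    each non-overlapping 2x2 block, halving the effective resolution."""
    top = np.hstack([tl, tr])          # 128x256
    bottom = np.hstack([bl, br])
    merged = np.vstack([top, bottom])  # 256x256
    return merged.reshape(PAGE, 2, PAGE, 2).mean(axis=(1, 3))

def build_lod_chain(pages, levels):
    """pages: dict {(x, y): PAGE x PAGE array} at full resolution.
    Returns a list of dicts, one per LOD; each level has 1/4 as many
    pages, each covering 4x the area at half the resolution."""
    chain = [pages]
    for _ in range(levels):
        prev, cur = chain[-1], {}
        for (x, y) in prev:
            if x % 2 == 0 and y % 2 == 0 and (x + 1, y + 1) in prev:
                cur[(x // 2, y // 2)] = merge_four(
                    prev[(x, y)], prev[(x + 1, y)],
                    prev[(x, y + 1)], prev[(x + 1, y + 1)])
        chain.append(cur)
    return chain
```

The same merge-and-downscale would apply to the splat alpha and normal images; only the height arrays are shown here.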

Then, when determining visibility, pages within a certain radius would be the high resolution originals, pages within a larger radius would be the lower resolution groups of 4, pages within a still larger radius would be the next merge level (each covering 16 original pages), etc. Terrain geometry would be created from the lower resolution data, which means distant pages wouldn't even have their high-res vertices created until the camera got close enough.

Here is an image of what I'm talking about:



In the image, the smaller circle represents the radius within which high resolution tiles should be rendered. Any page intersecting that radius is loaded and rendered as high res. Since everything has to be grouped in sets of four, the light blue pages are collateral (as in... they have no choice, collateral damage): they must be high res to fill out their set of four pages.

The second, larger circle represents the next LOD: the darker green pages intersect that circle, and the darker blue pages are its collateral pages.

You could imagine a further circle, and the darkest green lies in that circle. This could keep going for many levels of detail as necessary.

This also means I can manually control the distance at which each level of detail changes, by adjusting the concentric circle radii.
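The concentric-radius selection plus the group-of-four "collateral" rule could be sketched like this (hypothetical names and radii, not actual PLM code; the promotion rule is that all four pages of a group share the finest LOD any member needs):

```python
import math

# Hypothetical ring radii in world units: pages within RADII[0] of the
# camera render at LOD 0 (full res), within RADII[1] at LOD 1, etc.
RADII = [256.0, 512.0, 1024.0]

def base_lod(page_center, camera, radii=RADII):
    """LOD a page would get from its own distance alone."""
    d = math.dist(page_center, camera)
    for lod, r in enumerate(radii):
        if d <= r:
            return lod
    return len(radii)  # coarsest level, beyond the last ring

def group_lod(px, py, camera, page_size=128.0, radii=RADII):
    """LOD for page (px, py) after 'collateral' promotion: the whole
    2x2 group takes the finest LOD any member needs, so four siblings
    always merge cleanly into one coarser page."""
    gx, gy = (px // 2) * 2, (py // 2) * 2
    lods = []
    for x in (gx, gx + 1):
        for y in (gy, gy + 1):
            center = ((x + 0.5) * page_size, (y + 0.5) * page_size)
            lods.append(base_lod(center, camera, radii))
    return min(lods)
```

For example, with the camera at the origin, page (3, 3) alone would fall in the LOD 2 ring, but because its sibling (2, 2) needs LOD 1, the whole group is promoted to LOD 1; page (3, 3) is collateral.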

There are obvious problems, such as seams where two different levels of detail join: the vertices have to match. This could be handled by making the higher resolution page (such as the light green) match its vertices to the lower resolution page along the border edge where they meet. The vertices along that edge basically have to line up with the lower resolution vertices - not really stitching, though. There will be an assumption that LOD can't jump more than one level at a time, so a page only needs to match the next lowest resolution, and only along the border.
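That border matching could look something like this (a minimal sketch under the stated one-level-difference assumption; the function name is my own):

```python
def weld_edge(fine_edge):
    """Given the heights along a fine page's border edge (odd length,
    e.g. 129 vertices), force every odd-indexed vertex onto the line
    between its even neighbours.  The edge then becomes geometrically
    identical to the coarse neighbour's edge, which only has the
    even-indexed vertices, so no seam can open up."""
    welded = list(fine_edge)
    for i in range(1, len(welded) - 1, 2):
        welded[i] = 0.5 * (welded[i - 1] + welded[i + 1])
    return welded
```

Because this only constrains the border row of the fine page, the interior keeps its full detail; no index-buffer stitching is needed as long as LODs never differ by more than one level across a border.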

I'm envisioning brute force rendering for now, with no additional LOD morphing, though perhaps some form of morphing could be added later. This means popping between LODs, but with higher-end 3D cards I could brute-force the terrain out quite far, so only the really distant pages would pop, and then it might not be so noticeable. Lower-end cards will just have to deal with popping at a closer range.

Theoretically, it'd be possible to render an entire planet this way especially if the grid is in 3d instead of 2d, since when you're out in space it'd only have to load what was necessary at that far distance, then dynamically load new data as you get closer. I'm not saying that's what I want to do, but it gives you an idea of the possibility of this technique.

So I'm bringing this up to get your thoughts on the matter. I'd love to see this in the PLM2 code base, but I'm guessing it really is a huge change to what's going on currently and will not be possible.

As I said, my current plan is to use PLM2 while editing and then run a batch conversion process on it when I'm finished with the terrain to convert it into the different implementation. I think it will actually generate mesh files which will be used instead of storing the height data. This means that I could theoretically also support caves, overhangs, and extra complex terrain if I wanted to since I could output to a mesh format such as .obj and load it into a 3d program and edit it, then export it back out. Obviously there would be a problem that if I edited the highest resolution page, I'd have to subsequently edit lower resolution pages to make them the same shape, or find a way to merge actual meshes instead of just the height data, which is quite a bit more complex.

So... thoughts?

Clay

tuan kuranes

03-10-2006 21:27:11

The resolution ratio between levels should by default be more than 1/2, in order to allow index-buffer-based LOD and to ensure a minimum of stability in the loaded vertex/texture data; otherwise there will be too much change, or the changes won't happen fast enough to minimize the batch count at huge distances. Same for textures: they should be as big as possible to avoid GPU data loading/unloading, minimize state changes, and optimize batching.

Having a 128-sized page (the default) means up to 7 LODs (gpuLOD) without state changes until the next LOD (dataLOD) has to be loaded and uploaded, which lets you go up to a 1/7 resolution change. A good value has to be found.
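One reading of that arithmetic, as a sketch (the stride doubling and the cutoff are my assumptions, not PLSM code): with a single full-resolution vertex buffer, each index-buffer LOD simply doubles the vertex stride, so no new data upload or state change is needed until the data itself is swapped for a coarser page.

```python
import math

def gpu_lod_count(page_size):
    """Number of index-buffer-only LODs available on one vertex
    buffer: each level doubles the stride (1, 2, 4, ...) through the
    same vertices.  For a page size of 128 this gives 7, matching the
    figure in the post (under my reading of the arithmetic)."""
    return int(math.log2(page_size))

strides = [2 ** k for k in range(gpu_lod_count(128))]
# strides: [1, 2, 4, 8, 16, 32, 64]
```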


Which leads me to the second point:
I wouldn't recommend doing all the texture packing and splitting while paging, but rather once and for all at start-up, or before the first load using an offline split/pack tool. It does bloat the user's hard disk, but achieving this in realtime will only be feasible on multi-multi-core CPUs.

About meshes and caves: the obvious and not-at-all-simple problem is indeed handling LODed meshes - from the joins to visually correct LOD (just dropping one vertex in two won't give good results with caves...). I have no idea on that. You might not find this really related - consider it "brainstorming" - but the only thing that comes to mind about that particular problem is teddysmooth.
Try editing and painting with that, then read their papers on "Smooth Surface Construction" and "Texture Painting Technology". Perhaps a solution lies there: caves and overhangs using their mesh definition and UV unwrapping technique, associated with some constraints (page size, page borders being grid-spaced and "stitchable").


I'd love to see this in the PLM2 code base, but I'm guessing it really is a huge change to what's going on currently and will not be possible.
If you forget about the mesh/cave/overhang part, that's more or less what is being worked on in PLSM3, with that resolution thing being selectable at split time.

Falagard

03-10-2006 21:39:58

Woah, hold on, to be very clear here... you're doing this already for PLM3?

Specifically you're going to be loading less data for distant pages instead of loading their full resolution geometry?

That sounds great - do you have any more detail on what you're planning? For example... were you going to have the page sizes remain the same (in my case 128x128) all the way into the distant terrain ... or are you merging them into larger chunks when further away (done offline before loading the terrain).

I wouldn't recommend doing all the texture packing and splitting while paging, but rather once and for all at start-up, or before the first load using an offline split/pack tool. It does bloat the user's hard disk, but achieving this in realtime will only be feasible on multi-multi-core CPUs.


I probably wasn't clear here, but the merging would happen offline. I wouldn't attempt to do this at runtime.

Caves are not a big deal for me, they were just an added bonus.

If you could sorta fill us in on a little more detail of what you're planning for PLM3 that'd be amazing. I guess I'm wondering exactly what you're doing and how it differs from my explanation. My major concerns are memory and loading speed (as well as of course rendering speed) but I don't care about disk space.

I also don't think it's a big deal to force us to use a different rendering system while editing and then run an offline process to convert/merge all the pages, instead of allowing realtime editing of the terrain when it's in this mode.

Details!? :-)

Another thing I should mention: you focus a lot of your effort on getting things to work at split time, but if we're using the PLM as a terrain editor it should also be able to work on already-split terrain, or at the very least we should be able to pass already-split terrain through the splitter again and have it perform tasks (such as this merging idea) without re-splitting.

Oh... and another thing :-D Is it possible to *not* store the height data in memory in addition to the vertex and index buffers if we're not going to be using it for editing or query purposes, or is it absolutely required by the PLM rendering implementation?

tuan kuranes

04-10-2006 15:33:24

Woah, hold on, to be very clear here... you're doing this already for PLM3?

From PLSM3 design thread:
* Make PLSM3 recursive each page being tile of a bigger terrain (smaller heightmap but bigger scale)
(but that may not be clear enough, now that I'm reading it again...)
Specifically you're going to be loading less data for distant pages instead of loading their full resolution geometry?
Exactly. And not even only distant pages, if using roughness-based LOD and many tiles per page are very simple.
were you going to have the page sizes remain the same (in my case 128x128) all the way into the distant terrain
Depending on the user "resolution" posted above, a page can be 64 then 512 then 1024, etc... in your case 128 then 256 then 512 then 1024. (That's vertex count; texture size stays the same.)
They're "mipmapped" offline (heightmap and texture map).
My major concerns are memory and loading speed (as well as of course rendering speed) but I don't care about disk space.
Same here.
Details!? :-)
It seems we've already covered everything, no?
Each resolution is more or less the equivalent of PLSM2 (page + tiles + renderables + queues), with some (problematic) paging/queueing synchronisation between the different resolutions (unless roughness LOD isn't used).
That led me to a "lazy" solution, with ready pages/tiles being buffered until all related pages/tiles (resolution-compatible neighbours) are ready.
Oh... and another thing :-D Is it possible to *not* store the height data in memory in addition to the vertex and index buffers if we're not going to be using it for editing or query purposes, or is it absolutely required by the PLM rendering implementation?
It's actually required... otherwise all of a page's renderables would have to stay loaded from the start of the page's life to its end... Perhaps I'll give it a shot once the framework is more polished, to see if it gives good results. (Please remind me of that when we get there.)

Falagard

04-10-2006 15:55:35

Okay, well that sounds great! Saves me a lot of time, allows me to continue using PLM, and should benefit everyone.

Nice work, and thanks for all your effort.

Clay

Jerky

04-10-2006 20:47:23

Agreed from our team as well. I talked briefly about this with Falagard. That is awesome Tuan.

foxbat

10-10-2006 09:08:06

These PLSM3 features sound great, since I need exactly the same terrain system as Falagard described. I was thinking of doing my own recursive terrain LOD implementation too, but if all this will be in PLSM3 then that should save me a lot of trouble.

<Edit> - here's an image of the kind of terrain system I'm looking for. This shot is from the Ranger terrain engine at http://web.interware.hu/bandi/ranger.html


One other thing that did cross my mind, though, was the possibility of storing (on hard disk) and using the heightmap itself at multiple LOD levels. For example, if I have a 64x64 km terrain stored on disk at 2 meters/pixel resolution, that would take up a huge amount of disk space.

But would it be possible to, say, have the majority of the terrain at a lower resolution, and then only use the maximum resolution for high detail areas? There could be one 8 meter resolution heightmap for the entire terrain, and then several 4 or 2 meter resolution 'patches' which are only used for areas requiring a high level of detail.

This would need to work in combination with the PLSM tile lod so that the 8 meter resolution areas only ever appear at this maximum resolution, while still allowing higher resolutions for the 'patch' sub-tiles.
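A minimal sketch of that base-map-plus-patches lookup (hypothetical classes of my own, with nearest-sample lookup for brevity; nothing here is actual PLSM code):

```python
class Heightmap:
    """A grid of heights covering a rectangle of the world at a given
    metres-per-sample resolution."""
    def __init__(self, origin, res, grid):
        self.ox, self.oz = origin   # world-space corner
        self.res = res              # metres per sample
        self.grid = grid            # grid[row][col] of heights

    def contains(self, x, z):
        rows, cols = len(self.grid), len(self.grid[0])
        return (self.ox <= x < self.ox + cols * self.res and
                self.oz <= z < self.oz + rows * self.res)

    def sample(self, x, z):
        col = int((x - self.ox) / self.res)
        row = int((z - self.oz) / self.res)
        return self.grid[row][col]

def sample_height(x, z, base, patches):
    """Use the finest high-detail patch covering (x, z) if any;
    otherwise fall back to the coarse base heightmap."""
    for p in sorted(patches, key=lambda p: p.res):  # finest first
        if p.contains(x, z):
            return p.sample(x, z)
    return base.sample(x, z)
```

The tile LOD cap described above would then just be a matter of clamping a tile's maximum LOD to the resolution of whatever map actually covers it.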

tuan kuranes

16-10-2006 12:52:26

then that would take up a huge amount of disk space.
I intend to use the sort of compression used in geometry clipmaps.
The results it gives are 537MB down to 8MB, and 40GB down to 355MB. Sounds like a good thing to do to me.