Procedural Terrain Rendering How-To


EDIT: I see what you’re saying - instead of calculating the min and max heights of the patch (to get an accurate bounding box), simply pick an arbitrary vertex within the patch - say the center vertex - and test against its height. That definitely works, although it is even more of an approximation - which only matters if there’s sufficiently large elevation change within a single patch.


Keep in mind, all the vertex data is generated on the GPU, and the CPU (which is where the LOD calculations are taking place) has none of that information unless I explicitly send it there.


Navy, the little “pencil” is the edit button :slight_smile:


I found myself wanting an accurate bounding box for a terrain patch for other reasons, anyway - i.e. more advanced occlusion culling (imagine you’re standing at the foot of a tall cliff).

But in this whole process, the single most expensive part is sending data from the GPU to the CPU - and the difference between sending a single vec4 (i.e. one vertex’s coordinates), sending two floats (the calculated patch min and max height), or sending all 128×128 vec4s is actually quite small (especially when batched across multiple patches).

Even the GPU scan to calculate the patch’s min and max is fairly cheap when batched, and it’s only performed once (ever - I can store those values in a geo-indexed hashmap, which, if completely filled for an Earth-size planet at 1cm vertex resolution, is still under 200 MB (and kept either on disk or in host RAM, not the GPU’s)).


[quote=“cybercritic, post:25, topic:765, full:true”]
Navy, the little “pencil” is the edit button
[/quote] Lol, yes, I know I know :slight_smile: But you’re responding too quickly for me haha :smile:


The bounding box calculations are actually pretty simple. Transform the camera’s position into the terrain patches’ “model coordinate system”, and run a box vs sphere test for each LOD… that’s it!


Point here is - without transferring some height info from GPU to CPU, you can’t do a good LOD test. Once you’ve decided to do that, there are many ways to skin the cat, three of which we discussed (in order from least to most ‘accurate’):

  • Distance test against a single vertex in patch
  • Distance test against patch’s bounding box’s ‘true center’
  • Sphere vs Box test against patch’s bounding box
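For concreteness, the three variants could be sketched like this (a purely illustrative Python sketch, with a patch AABB standing in for whatever bounds you actually track):

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def single_vertex_test(cam, vertex, lod_distance):
    # 1) Cheapest: distance to one representative vertex (e.g. the patch center).
    return dist(cam, vertex) < lod_distance

def box_center_test(cam, box_min, box_max, lod_distance):
    # 2) Distance to the bounding box's true center.
    center = tuple((lo + hi) / 2 for lo, hi in zip(box_min, box_max))
    return dist(cam, center) < lod_distance

def sphere_box_test(cam, box_min, box_max, lod_distance):
    # 3) Most accurate: does the camera-centered LoD sphere touch the AABB at all?
    # Clamp the camera onto the box to find the box's closest point.
    closest = tuple(min(max(c, lo), hi) for c, lo, hi in zip(cam, box_min, box_max))
    return dist(cam, closest) < lod_distance
```

E.g. a camera 5 units from a 2×2×2 box with a 4.5-unit LoD radius: only the sphere/box test notices that the box’s nearest face is already inside that radius.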

The horse is really dead by now.


Would you mind sharing how you did your horizon-based occlusion culling? I am creating my own small procedural planet engine (including floating origin, positioning based on a double-precision coordinate system, reference-frame velocity (to push speeds above lightspeed), and procedural planets based on cubespheres and a quadtree).
But I still lack a well-working culling mechanism; currently I’m focusing on horizon culling.

I’ve implemented the following algorithm in Unity3D, the engine I am using, but the culling still doesn’t seem to work well, and framerates drop when getting near the planet (~level 20 in the quadtree). I throw all four corner positions into the algorithm to check if they are occluded. If all are occluded, I merge until the parent is visible (or I reach the root node).
If one is not occluded, I check, depending on the distance, whether I need to split.

So any hint about your algorithm and strategy would help a lot.

(I may not be allowed to add them as a new user; maybe with the next post?)

public bool HorizonCullingAlgorithm2(Vector3 cameraPosition, Vector3 testPosition, float radius) {
    // The planet is assumed to be at (0,0,0); cameraPosition must be an offset position.
    Vector3 planetCenter = new Vector3(0, 0, 0);
    Vector3 planeCoordinate = testPosition;
    int angleThreshold = 10;

    // 1a) Horizon angle
    // h = distance from camera to planet center, r = planet radius, t = horizon angle
    float h = Vector3.Distance(planetCenter, cameraPosition);
    float r = radius * 0.5f; // NOTE: halves the incoming radius - double-check this is intended
    float t = (float)(Math.Acos(r / h) * (180.0 / Math.PI));

    // 1b) Plane angle: angle between the normalized camera and test-point directions
    Vector3 C = cameraPosition.normalized;
    Vector3 P1 = planeCoordinate.normalized;
    double planeAngle = Math.Acos(Vector3.Dot(C, P1)) * (180.0 / Math.PI);

    // 1c) Compare angles. If the point is within the horizon it is not occluded.
    if (planeAngle < t + angleThreshold) {
        return false;
    }
    return true;
}


Why don’t you do this in the shader? You can use “clip” to kill the pixel, and you might also be able to kill anything that is not facing the camera, instead of going with the plane and distance calculations.

You are at level 20 in your quad-tree (4^20 leaf nodes); wouldn’t that make your vertex-to-vertex resolution sub-mm at Earth scale?


I precalculate everything on the CPU (noise, vertices, indices etc.) and push it to the GPU (create the Unity3D GameObject) as soon as the precalculation is done. I really want to be able to drop the whole object, including all the calculated data in the quadtree, if I don’t need it. Doing this via the horizon-culling approach seems like a good idea to me, as you really discard a lot of stuff when you get close to the sphere.
Plus, I don’t have enough experience working with shaders, so I’m looking for an approach that works directly in my quadtree.

Hmm, I was surprised by your hint about level 20, as I had thought, for whatever reason, that this was far from good enough for planets. So I recalculated it, and, hmm, yeah, it seems I was heading too far down the quadtree. I think I am not at sub-mm, but really close to something OK. Entering the numbers in Excel, I guess I am at roughly meter detail for the vertices and cm detail for the texture, which indeed should be fine. So it seems I should stick with a maximum depth of level 20 for the tree and optimize the textures and noise from there? Seems like it, if I am not wrong. Thanks for the hint!

Earth Circumference(km): 40.000
Earth Circumference (m): 40.000.000
Planes for Circumference: 4 (due to cubesphere)
Vertices per plane: 32
Texture resolution: 128
Distance width of plane (m) (lvl 00): 10.000.000 m
Distance width of plane (m) (lvl 01): 5.000.000 m
Distance width of plane (m) (lvl 02): 2.500.000 m
Distance width of plane (m) (lvl 03): 1.250.000 m
Distance width of plane (m) (lvl 04): 625.000 m
Distance width of plane (m) (lvl 05): 312.500 m
Distance width of plane (m) (lvl 06): 156.250 m
Distance width of plane (m) (lvl 07): 78.125 m
Distance width of plane (m) (lvl 08): 39.063 m
Distance width of plane (m) (lvl 09): 19.531 m
Distance width of plane (m) (lvl 10): 9.766 m
Distance width of plane (m) (lvl 11): 4.883 m
Distance width of plane (m) (lvl 12): 2.441 m
Distance width of plane (m) (lvl 13): 1.221 m
Distance width of plane (m) (lvl 14): 610 m
Distance width of plane (m) (lvl 15): 305 m
Distance width of plane (m) (lvl 16): 153 m
Distance width of plane (m) (lvl 17): 76 m
Distance width of plane (m) (lvl 18): 38 m
Distance width of plane (m) (lvl 19): 19 m
Distance width of plane (m) (lvl 20): 10 m
Distance of vertices (m) (lvl 20): 10 m / 32 = ~30cm
Distance of texturepixel (m) (lvl 20): 10 m / 128 = ~7cm
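The table above can be reproduced in a few lines (Python; the constants mirror the values listed):

```python
# Reproduces the level table above: each quadtree level halves the patch width.
CIRCUMFERENCE_M = 40_000_000  # rough Earth circumference in meters
FACES = 4                     # cube faces around the circumference (cubesphere)
VERTS_PER_PLANE = 32
TEX_RES = 128

root_width = CIRCUMFERENCE_M / FACES  # 10,000,000 m at level 0
for lvl in range(21):
    width = root_width / 2 ** lvl
    print(f"lvl {lvl:02d}: {width:>12,.0f} m")

lvl20 = root_width / 2 ** 20              # ~9.54 m patch width at level 20
print(f"vertex spacing at lvl 20: {lvl20 / VERTS_PER_PLANE:.2f} m")  # ~0.30 m
print(f"texel size at lvl 20:     {lvl20 / TEX_RES:.3f} m")          # ~0.075 m
```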


Do show something please, a video would be wonderful. :smiley:

You can drive go-carts at that resolution…


I’ll share, gladly.

One of the keys to speeding things up is only performing the test when necessary. There are many ways to reduce the number of horizon occlusion checks you actually perform, such as only doing so once the camera has traveled a certain distance relative to the planet since the last check. Whenever an occlusion test is performed against a patch, I record the camera’s position relative to the planet (i.e. cameraPos - planetOrigin) at that moment. The next time I go to test against that patch, if the camera’s current relative position isn’t sufficiently far from its relative position at the last check for that patch, I skip the test and simply reuse the last result. The ‘sufficiently far’ distance should differ per LoD - I used a look-up table with values equal to approximately 10% of a patch’s diameter at each LoD, although this number is best determined through experimentation (keep a count of how many tests you skip per frame).
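A minimal sketch of that caching scheme (Python for brevity; the patch sizes and the 10%-of-diameter threshold table are hypothetical - in Unity this state would live on each quadtree node):

```python
import math

# Hypothetical per-LoD skip thresholds: ~10% of a patch's diameter at each level,
# assuming a 1,000,000 m root patch that halves per level.
SKIP_DISTANCE = [1_000_000 / 2 ** lod * 0.10 for lod in range(21)]

class PatchCullCache:
    """Reuse the last occlusion result until the camera has moved far enough."""

    def __init__(self):
        self.last_cam = None      # camera position relative to planet at last test
        self.last_result = None   # last occlusion verdict

    def needs_retest(self, cam_rel, lod):
        if self.last_result is None:
            return True  # never tested: must test
        moved = math.dist(cam_rel, self.last_cam)
        return moved > SKIP_DISTANCE[lod]

    def store(self, cam_rel, result):
        # Record where the camera was when this verdict was computed.
        self.last_cam = cam_rel
        self.last_result = result
        return result
```

Counting how often `needs_retest` returns `False` per frame gives the skip statistic mentioned above.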

Another way to reduce the number of tests is frustum culling. I haven’t yet implemented it, but intend to try it out at some point. However you do it, do it prior to the horizon test.

Next, I calculate the radius of the “visibility sphere”. The “visibility sphere” is centered on the camera. If the camera were in orbit around a perfectly smooth (spherical) planet, the radius of the “visibility sphere” would be the distance from the camera to the furthest point on the planet’s horizon. Nothing is visible beyond this distance.

If the planet is NOT perfectly smooth, however, we have to extend that sphere. Imagine a very tall, steep mountain sitting just beyond the smooth planet’s visibility sphere - you should clearly see the mountain’s peak. So, when dealing with featured terrain, we need to modify the calculation of the visibility sphere. I do this by taking into account the maximum possible height of the terrain. Please reference the following image for the derivation of that equation (this graphic is something I produced three years ago to help myself develop the equation, and now to help me remember it three years later!):
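Since the derivation image may not be visible, here is the commonly used closed form that is consistent with the rmin/rmax variables mentioned later in this thread (the exact equation in the graphic is an assumption on my part): with h the camera’s distance to the planet center, rmin the radius at the lowest terrain point and rmax the radius at the highest, R = sqrt(h² − rmin²) + sqrt(rmax² − rmin²), i.e. the distance to the smooth sphere’s horizon plus the extra distance at which the tallest peak is still tangent-visible:

```python
import math

def visibility_sphere_radius(h, r_min, r_max):
    """Farthest distance at which any terrain can still poke above the horizon.

    h     : camera distance to planet center (must be >= r_min)
    r_min : planet radius at the lowest terrain point (the occluding sphere)
    r_max : planet radius at the highest terrain point
    """
    # Distance from the camera to the horizon of the smooth (r_min) sphere...
    to_horizon = math.sqrt(h * h - r_min * r_min)
    # ...plus the distance from that horizon point to where a peak of height
    # (r_max - r_min) is still tangent to the r_min sphere, i.e. just visible.
    beyond_horizon = math.sqrt(r_max * r_max - r_min * r_min)
    return to_horizon + beyond_horizon
```

For a perfectly smooth planet (r_max == r_min) the second term vanishes and this reduces to the plain horizon distance.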

(You may want to open it in a new tab and zoom in to see the red text)

With the radius of the “visibility sphere” having been determined, you could simply test each of the vertices of your terrain patch bounding boxes to see if their distance to the camera is greater or less than the radius of the visibility sphere. If any vertex is less than the radius, subdivide and re-test the children patches.

I am using a slightly more complex method, simply because cases exist where no bounding-box vertex falls within the visibility sphere, but part of the patch does. (E.g. imagine a 2-D square at the origin with vertices at (1,1), (-1,-1), etc. Now imagine a sphere of diameter 0.5 placed at (1.2, 0). The sphere clearly intersects the box, although none of the box’s vertices fall within the sphere.)

To do so, I use an AABB (Axis-Aligned Bounding Box) - but not in the traditional sense. Normally an AABB is aligned to the world axes, regardless of the object’s orientation. This causes the dimensions of the AABB to change as the object (unless it’s a sphere) changes orientation. At certain orientations, the AABB is not a very tight representation of the object.

Instead, I use the model’s native coordinate system, so that the AABB matches the patch much more closely. Here’s what I mean:



Next, I transform the camera (and hence the visibility sphere) into the model’s coordinate system. Finally, I conduct a simple AABB - Sphere collision check between the visibility sphere and the patch’s AABB.

This process is integrated directly into my patch LoD determination: if the patch falls within the visibility sphere, I don’t immediately subdivide it and test its children. Instead, I check it against yet another camera-centered sphere: a “LoD transition” sphere. That is, each level of detail has a certain ‘optimal distance’, which keeps the “vertices per pixel” in an appropriate range. I calculate this range band for each of my LoDs based upon the monitor’s resolution, the camera’s field of view, and the number of vertices per patch. I don’t have the derivation of this equation written out, but it was fairly basic and made several worst-case assumptions (i.e. face-on angle to each patch, etc.), in order to determine how close you could get to a patch before the number of pixels between each vertex in screen space exceeded a certain threshold.

So, with an LoD distance band look-up-table, I next check the patch against another camera-centered sphere with radius from this table based upon the patch’s LoD. If the sphere collides with the patch’s AABB, then the patch is too close to the camera for its current level of detail, and must be subdivided. If not, it’s at the appropriate LoD - add it to the drawing queue!
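Putting the two camera-centered spheres together, the recursive LoD walk could be sketched like this (a Python sketch with hypothetical patch and band structures, not NavyFish’s actual implementation; the camera is assumed to be already transformed into the patch’s model coordinate system):

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

def sphere_hits_box(center, radius, box_min, box_max):
    # Squared distance from the sphere center to the closest point on the AABB.
    d2 = sum((c - min(max(c, lo), hi)) ** 2
             for c, lo, hi in zip(center, box_min, box_max))
    return d2 <= radius * radius

@dataclass
class Patch:
    box_min: Vec3
    box_max: Vec3
    lod: int

    def subdivide(self):
        # Split the patch's AABB into 4 children on x/z; height bounds (y) kept.
        (x0, y0, z0), (x1, y1, z1) = self.box_min, self.box_max
        xm, zm = (x0 + x1) / 2, (z0 + z1) / 2
        return [Patch((xa, y0, za), (xb, y1, zb), self.lod + 1)
                for xa, xb in ((x0, xm), (xm, x1))
                for za, zb in ((z0, zm), (zm, z1))]

def select_lod(patch, cam, vis_radius, lod_band, max_lod, draw_queue):
    """cam, vis_radius and lod_band are in the patch's model coordinate system."""
    if not sphere_hits_box(cam, vis_radius, patch.box_min, patch.box_max):
        return  # outside the visibility sphere (beyond the horizon): cull
    if patch.lod < max_lod and sphere_hits_box(
            cam, lod_band[patch.lod], patch.box_min, patch.box_max):
        # Too close for this level of detail: subdivide and recurse.
        for child in patch.subdivide():
            select_lod(child, cam, vis_radius, lod_band, max_lod, draw_queue)
    else:
        draw_queue.append(patch)  # appropriate LoD: add it to the drawing queue
```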

Hope this was helpful. As always, I’d be happy to elaborate on anything that may have been confusing.

I also have a couple of prototypes demoing these concepts… let me see if I can dig them up and post them somewhere…


Thank you very much NavyFish!!!
Great read, with much valuable information that helps a lot. Immediately exported to PDF for a second and third read :-). I like the idea of the visibility sphere; it seems like a great, simple approach for an occlusion check! Will try that out. The idea of the LOD transition sphere is very nice too.
Those, combined with using a separate thread for the quadtree parsing (right now it’s in the main thread, which must be one major cause of my framerate issues at lower altitudes), should improve my prototype a lot!

Unfortunately it seems I am not allowed to post media like pictures and movies as a new user. I hope this changes sometime soon :frowning:


You should know that procedural planet generation is a black hole that can suck up all the time you have available, forever…




Yeah, I know (and my girlfriend does too :wink: ). Anyway, it’s an interesting topic, where you swing between headaches and “yeah, it works!” more frequently than anywhere else :smile:

I continued updating my application. I changed the split between what the Unity3D main thread does and what is done by separate threads. Now the main thread only creates the final precalculated split or merged planes; there is one separate thread for parsing the quadtrees, and a thread pool that precalculates planes that are to be split or merged. Works nicely now.

Still working on the horizon culling. I still haven’t been able to implement it so that I get the right results about what is occluded and what is not :-(.

Some images, hope it works:



Aside from a few unnecessary calculations and opportunities for optimization, your implementation is correct. Perhaps one of your inputs is not what you expect? I would ensure that the rmin and rmax values match the planet’s actual min and max terrain values, and would also debug-print your radiusOfTheVisibleSphere value to see if something wonky happens, like a NaN. Also: are all of your input coordinates - camera position, test vertex - in the same system (e.g. world coordinates)?


Hi NavyFish,
thank you for the hints. I’ve found the issue.
Reaching a critical distance, my debug log showed that the horizon culling kept checking only 8 distinct points, and that 6 nodes/planes were rated as occluded and none visible. It seemed to have something to do with the 6 quadtrees and each quadtree’s absolute root parent.
Checking my code once again, I noticed that I had been throwing every node of the quadtree into the horizon check - so also every parent node while traversing down the tree - and only the 4 corner coordinates, not the center coordinate of the plane. Thinking about it again, it only makes sense to check the quadtree leaves against the horizon culling. Otherwise you keep checking the very wide edges of the quadtree’s six root nodes, which of course are very soon behind the horizon, and nothing gets split anymore.

And I might also want to include the center coordinate when checking visibility, although I still have an issue with the horizon culling results when the quadtree split/merge process starts with the camera initially very close to the planet. Even if I use the center coordinate too, all coordinates will be behind the horizon, and the split process doesn’t start. Should I skip horizon culling when close to the planet, e.g. only start the check at a certain level down the quadtree? Or maybe I can catch this situation with a proper frustum-culling check done beforehand.

If I am not completely wrong in using the horizon check only for the leaves, the upside is that I can avoid a whole lot of checks! Hooray :smile:


Congrats! Feel free to share your progress.

I think checking against the center point of a node is ‘good enough’, except perhaps when you’re dealing with the lower levels of detail (large nodes), in which case I’d at least check their corners. Again, though, to be thorough you actually need to do a box/sphere collision check if you want to be perfectly accurate (you can get away with a square/circle test by ignoring terrain height, although that is an approximation too).


It’s progressing. I avoided a lot of unnecessary culling tests for non-leaves, and the sphere now splits perfectly down to quadtree level 20 when flying at it, including the visibility-sphere test. :smile:

I consider the four corners and the center for each culling test now. From a visibility point of view the center is an “add-on” to the four corners, but it compensates (for the moment) for situations where you start a bit closer and the corner points are already behind the horizon.

When I start directly on the planet’s surface, Earth-sized, the problem remains: the quadtree root’s corner points as well as its center point are behind the horizon again, so splitting does not start (as non-occlusion is a condition for splitting) and the planet stays at level-0 detail. I need to handle this situation differently - maybe by changing the merge and split rules, which are very simple currently, maybe towards your approach.

Anyway, the visibility-sphere culling works pretty well: approaching the planet, at level 7 of the quadtree I now have about half the patches handled as leaves and rendered compared to before, and a currently acceptable amount at the surface. Once the above problem is solved I’ll look at various optimizations (e.g. reuse of noise data on merge), making the terrain more flexible (definition of water, grass, rock, snow levels) and more interesting (multiple noise calculations for detailed close-up surfaces).

If the distance of the closest corner/center coordinate to the camera, divided by 2.5, is smaller than the plane diameter, and the plane is not occluded and it is a leaf, then split.

If the distance of the closest corner/center coordinate to the camera, divided by 2.5, is greater than the plane diameter, OR the patch and its parent are occluded, then render this plane and destroy its children if it is not a leaf, or, if it is a leaf, merge into its parent.
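Written out as code, those two rules might look like this (a Python sketch; `closest_dist` stands for the distance from the camera to the nearest corner/center coordinate, and the 2.5 divisor is the tunable from the rules above):

```python
SPLIT_FACTOR = 2.5  # the "divided by 2.5" tunable from the rules above

def wants_split(closest_dist, plane_diameter, occluded, is_leaf):
    # Rule 1: close enough, visible, and currently a leaf -> split.
    return closest_dist / SPLIT_FACTOR < plane_diameter and not occluded and is_leaf

def wants_merge(closest_dist, plane_diameter, occluded, parent_occluded):
    # Rule 2: far enough away, OR both the patch and its parent occluded ->
    # render this plane and drop its children / merge into the parent.
    return closest_dist / SPLIT_FACTOR > plane_diameter or (occluded and parent_occluded)
```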


Maybe I can add a few more interesting points for discussion on how everyone is doing it (especially since for a few of these topics there is rarely any information around).

- How to create the water surface of the planet?
Currently I render one sphere that is both land and water. You can see in my screenshot that I haven’t pushed the water down to a flat surface, but instead colored it in different blue shades. I could simply clamp every noise result below 0 back to 0, which would give a flat blue surface. However, I wonder what to do when you want reflections, wave animations and all that fancy stuff, especially when the water surface should be half transparent. Is the normal process to leave the planet’s surface as-is, not clamping anything back to 0 (to keep the underwater landscape structure), and then render a second sphere, round and at water level, with the same center as the planet sphere, and apply a different shader to it?
I once tried that, but there was really heavy Z-fighting. Although I only brute-force tested it with a low-detail Unity3D built-in sphere, so the inaccuracy might have been one cause of the Z-fighting? So I wonder how this is normally done. If indeed a second water-body sphere is to be used, I’ll look into creating a water sphere the same way I currently render the planet surface, but without applying any noise to it and with a different water texture.

- How to create a believable terrain surface / how to use noise?
I currently use one simplex noise call/result [-1/+1], applying it with a “noise pushdown” value (so that I can control the heights and depths) to the round sphere. Values:
amplitude = 1.0f;
frequency = 2.0f;
lacunarity = 2.0f;
persistence = 0.65f;
octaves = quadtreelevel+4;
This gives nice high-level terrain as in the screenshots, but it lacks detailed structure when you get closer to the ground. Let’s look at the following picture, found on the net, from someone who seems to have succeeded:

Is the strategy for ground detail an additional noise call, maybe at a higher frequency, applied on top of the first result?
How would you do that in order to avoid unnecessary calls? Start these additional noise calls only at a certain LOD / quadtree depth? And would you “add” them to the first, coarser noise result? (I’d be afraid to do this, as the terrain might move “up”, and the terrain structure would change if you use the noise to decide which texture (rock/grass) is to be used.) I guess multiplying with another “pushdown” value would be the better strategy.
Also, in the screenshot you can see that the hilltop has a specific organic structure. Do you use other noise variants to achieve this? I tried a cellular noise, Worley noise, but found it horribly slow, even at very few octaves.
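For what it’s worth, the standard way to get both large-scale shape and close-up detail from one noise function is fractal (fBm) layering, which matches the amplitude/frequency/lacunarity/persistence parameters listed above. A sketch with a stand-in noise function (the real one would be your simplex noise call):

```python
import math

def noise(x, y, z):
    # Stand-in for a real 3-D simplex noise call; any smooth-ish [-1, 1]
    # function will do for illustration.
    return math.sin(x * 12.9898 + y * 78.233 + z * 37.719) * 0.5

def fbm(x, y, z, octaves, frequency=2.0, lacunarity=2.0, persistence=0.65):
    """Fractal Brownian motion: each octave adds finer detail at lower amplitude."""
    total, amplitude, freq, max_amp = 0.0, 1.0, frequency, 0.0
    for _ in range(octaves):
        total += noise(x * freq, y * freq, z * freq) * amplitude
        max_amp += amplitude
        freq *= lacunarity          # finer features each octave...
        amplitude *= persistence    # ...at a lower height contribution
    return total / max_amp          # normalize back into roughly [-1, 1]
```

Tying `octaves` to the quadtree level (as in `octaves = quadtreelevel + 4` above) means distant patches skip the expensive high-frequency octaves, and because the low octaves are identical at every level, the large-scale terrain shape does not shift when a patch splits.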

- How to do the atmosphere?
Still a secret to me. I guess you’d use shaders to achieve the blue/white shimmering atmosphere seen from space, and the blue/red sky seen from the ground at day or night. If so, are these shaders normally applied to a separate (transparent) sphere with a slightly greater radius than the planet sphere? Or directly to the ground? And are different shaders typically used for the space view (like in the second screenshot, also from the net, with a nice atmosphere reflection) and the view from the ground (like in the first screenshot)?


Just wondering, where do you want to take this project? It’s bound to be a very long road.

I don’t want to be an ass, but my advice would be to move on to something that doesn’t have infinite development time…

Look into atmospheric scattering for the atmosphere; it will make you learn shaders, though.