Oh, I was figuring that a few million vertices is enough detail for an Earth-sized planet. And a million vertices is only 32 MB (at 32 bytes per vertex).
That’s not enough even for the lower detail the server would use.
So the Earth is about 500,000,000 km². A million vertices is only one vertex per 500 km², i.e. a vertex roughly every 22 km. I did vastly underestimate the scale.
But at 1 km detail you state you “only” have 800 million pixels. So for the detail you’re talking about to reach terabytes, that’s an absurd amount of detail, far more than you seem to need on the server.
If that is truly the case, having GPU(s) on the server wouldn’t really help! They’d run out of GPU memory, since you still need whatever you deem the highest detail level wherever players are, all over the place, and you don’t get the usual occlusion-culling benefits. If there are hundreds of players spread all around, suddenly you’re using up tons of VRAM generating the terrain near each of them.
I still have to say… it sounds like you should have it all precached on disk, not doing on-the-spot generation.
Even if it’s terabytes of data, it’s faster to look up the heightmap from disk. This is common caching, really, just on a huge scale. Otherwise you’re going to have the server slowly generate the same piece of terrain hundreds of thousands of times.
So 1 km resolution is 800 million pixels, or roughly 800 MB at 8-bit single channel, divided up into the 6 slices (cube faces), right?
What resolution do you think you need to reasonably do collision? 250 meters? That’s 4× the linear resolution, so 16× the data, call it ~12.8 GB, divided up into 48 chunks, right? My math could be off there. But, after all, as long as you make sure your lower-detail heightmap sits a bit below the higher-detail one, you don’t need to worry about the client doing minor clipping; you just need to detect obvious flying under the ground.
That’s not much disk space. You could bring it down to 125 meters per pixel (another 4×, call it ~51 GB), or lower, it seems.
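To make that concrete, here’s a minimal sketch of both the storage math and the “lower LOD sits a bit below” trick: when building the coarser map, take the min of each 2×2 block of finer texels and shave off an epsilon, so the low-detail surface can never poke above the high-detail one. All names and numbers here are my assumptions for illustration, not anything from the engine:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Conservative 2x downsample: each coarse texel is the MIN of its 2x2 fine
// texels minus an epsilon, so the low-detail terrain always sits at or below
// the high-detail terrain and clients can't clip "above" the server's ground.
std::vector<uint8_t> downsampleConservative(const std::vector<uint8_t>& fine,
                                            int fineSize, uint8_t epsilon = 1) {
    const int coarseSize = fineSize / 2;
    std::vector<uint8_t> coarse(coarseSize * coarseSize);
    for (int y = 0; y < coarseSize; ++y)
        for (int x = 0; x < coarseSize; ++x) {
            uint8_t m = std::min({fine[(2 * y)     * fineSize + 2 * x],
                                  fine[(2 * y)     * fineSize + 2 * x + 1],
                                  fine[(2 * y + 1) * fineSize + 2 * x],
                                  fine[(2 * y + 1) * fineSize + 2 * x + 1]});
            coarse[y * coarseSize + x] = (m > epsilon) ? uint8_t(m - epsilon) : 0;
        }
    return coarse;
}

int main() {
    // Storage math from this thread: ~800M pixels at 1 km, 16x that at 250 m,
    // 64x at 125 m (8-bit single channel, uncompressed).
    const double px1km = 800e6;
    std::printf("1 km: %.1f GB  250 m: %.1f GB  125 m: %.1f GB\n",
                px1km / 1e9, px1km * 16 / 1e9, px1km * 64 / 1e9);
}
```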
What seems to make the most sense is: you have 6 slices of your planet at 1 km resolution to load from disk for faraway views. Then you have your smaller, higher-detail slices to load from disk when a player is closer, instead of repeatedly regenerating them.
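That generate-once-then-load pattern is only a few lines. A minimal sketch, where the file layout and the generateHeights() generator are both made up for illustration:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// Stand-in for whatever procedural generator actually produces the terrain.
void generateHeights(int face, int lod, int cx, int cy,
                     std::vector<uint8_t>& out) {
    std::fill(out.begin(), out.end(), uint8_t(0)); // placeholder terrain
}

// Each chunk is keyed by (cube face, LOD, chunk x, chunk y). The first request
// pays the generation cost once; every later request is just a disk read.
std::vector<uint8_t> loadOrGenerateChunk(int face, int lod, int cx, int cy,
                                         size_t chunkSize) {
    const std::string path = "terrain/f" + std::to_string(face) +
                             "_l" + std::to_string(lod) +
                             "_" + std::to_string(cx) +
                             "_" + std::to_string(cy) + ".raw";
    std::vector<uint8_t> heights(chunkSize * chunkSize);

    if (std::FILE* f = std::fopen(path.c_str(), "rb")) {  // cache hit
        std::fread(heights.data(), 1, heights.size(), f);
        std::fclose(f);
        return heights;
    }
    generateHeights(face, lod, cx, cy, heights);          // cache miss: generate once
    if (std::FILE* f = std::fopen(path.c_str(), "wb")) {  // persist for next time
        std::fwrite(heights.data(), 1, heights.size(), f);
        std::fclose(f);
    }
    return heights;
}
```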
This is a lot of disk space, but I can’t see how you won’t save an absurd amount of CPU.
Then you also have a REALLY consistent, predictable resource requirement on the server. You know that if 8 Earth-sized planets are visible to players from all sides, that’s 8 × 800 MB = 6.4 GB of memory for the 1 km maps. If you have 50 players near surfaces, that’s up to ~13 GB more in high-detail chunks (~270 MB each) in the worst case, and less once players cluster. You aren’t worrying about the server lurching as it constantly tries to generate new terrain.
This is all under the assumption that you’re ignoring vertices and just working off the heightmap, which seems like the way to go.
(Sorry this reply went longer than I expected…)
And to clarify, I was talking about doing this on the server, since you mention framerate stalls. Yes, it makes no sense on a client, where you don’t have the multiple bodies (players, projectiles, everything across the star system) to compute on in parallel. It also means you can run the compute asynchronously and just look for the results as they come in during your tick loop.
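The shape I mean, with std::async standing in for a non-blocking GPU dispatch (this is just the polling pattern, not the real compute):

```cpp
#include <chrono>
#include <future>
#include <vector>

struct HitResults { std::vector<int> hits; };

int main() {
    // Kick the batched job off asynchronously; std::async is a stand-in for
    // enqueueing work on the GPU without blocking the tick loop.
    std::future<HitResults> pending = std::async(std::launch::async, [] {
        return HitResults{ /* ... results of the compute job ... */ };
    });

    for (int tick = 0; tick < 100; ++tick) {
        // Non-blocking check: consume results only once they're ready.
        if (pending.valid() &&
            pending.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
            HitResults r = pending.get();
            // ... apply confirmed hits/collisions to game state,
            //     then dispatch the next batch here ...
        }
        // ... rest of the server tick runs regardless ...
    }
}
```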
I also have to say, I’m quite certain you’re vastly underestimating those integrated Intel GPUs!
For starters, the CPU and GPU share memory and cache, which gets around the usual overhead of transferring data between them, the overhead that, as you said, normally eats away at the gains.
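OpenCL can express that directly on an Intel iGPU: allocate the buffer with CL_MEM_ALLOC_HOST_PTR and map it, and on shared-memory hardware the map is typically zero-copy. A fragment only (context/queue setup and error handling omitted):

```cpp
#include <CL/cl.h>

// On an iGPU, CL_MEM_ALLOC_HOST_PTR asks the driver for memory visible to
// both CPU and GPU, so map/unmap is typically zero-copy (no bus transfer).
cl_mem makeSharedBuffer(cl_context ctx, size_t bytes) {
    cl_int err = CL_SUCCESS;
    return clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR,
                          bytes, nullptr, &err);
}

// CPU-side access: map the buffer, read/write in place, unmap before the
// GPU touches it again.
void* mapForHost(cl_command_queue q, cl_mem buf, size_t bytes) {
    cl_int err = CL_SUCCESS;
    return clEnqueueMapBuffer(q, buf, CL_TRUE /* blocking */,
                              CL_MAP_READ | CL_MAP_WRITE,
                              0, bytes, 0, nullptr, nullptr, &err);
}
```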
And the other big thing is that I think you’re greatly underrating how powerful they are simply because of render cost. Yes, they aren’t getting 60 FPS at 1080p in high-end games, but rendering is the expensive part.
For example, if you compute a million particles that interact with each other on the GPU, something like 10% of the time goes to actually calculating their movement and 90% to rendering them to the screen, give or take. And doing the same on the CPU, even with rendering left out, is much slower than the GPU doing the calculation plus the rendering that took up most of the time.
So ultimately the overhead is just dispatching your compute the same way you’d supply positions for collision checks on the CPU. You can then look for the offending collisions or wrongly reported hit detections in the shared memory (shared L3 cache, really) each tick, right?
That, or through the OpenCL API. It’s virtually the same overhead as doing something multicore, except instead of another CPU core it’s the GPU doing the work far faster.
Yes, it makes no sense in a 32-player FPS to use the GPU, because a CPU can do those calculations for that few players easily. However, with 500+ players on a server, I’m 99% certain you’d see massive gains, to the point that your server-side collision detection and hit detection look nearly “free”.
If you have 100 people all constantly shooting, that’s a simple program with a lot running in parallel, exactly the shape a GPU likes. It’s also the only way you can get good, cheat-proof hit detection. The alternative is fudging things far beyond what Battlefield does with its netcode, which opens the door to hacking. Whereas with compute, which I’m certain doesn’t take the GPU power you think it does, you can actually run a real sim for hundreds of players more efficiently than Battlefield handles 64. But at 64 players the gain is too small to justify the programming headache, which is why we don’t see it done.
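To give a feel for how well that shape fits a GPU: one work-item per (shot, target) pair, each doing an independent ray-sphere test. A hedged OpenCL C sketch (all names invented; the host initializes hits[] to -1, and nearest-hit resolution plus write races are glossed over):

```cpp
// OpenCL C kernel embedded as a C++ string: wide, branch-light, independent
// work items, which is exactly what a GPU eats up.
static const char* kHitKernel = R"CLC(
__kernel void hitTest(__global const float4* rayOrigins,  // xyz = origin
                      __global const float4* rayDirs,     // xyz = dir, w = max range
                      __global const float4* targets,     // xyz = center, w = radius
                      const int numTargets,
                      __global int* hits)                 // hits[shot] = target or -1
{
    int shot   = get_global_id(0);
    int target = get_global_id(1);
    if (target >= numTargets) return;

    float3 o = rayOrigins[shot].xyz;
    float3 d = normalize(rayDirs[shot].xyz);
    float3 c = targets[target].xyz;
    float  r = targets[target].w;

    // Closest approach of the ray to the sphere center.
    float3 oc = c - o;
    float  t  = dot(oc, d);
    if (t < 0.0f || t > rayDirs[shot].w) return;  // behind shooter or out of range
    float3 p = o + t * d;
    if (length(p - c) <= r)
        hits[shot] = target;  // nearest-hit resolution omitted in this sketch
}
)CLC";
```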
I think it’s more of an issue here that programming and debugging shaders is a pain in the ass, not that it wouldn’t work.
Yes, I did figure colo. It’s not actually that expensive. Don’t worry about it though, like I said, it doesn’t make sense at this point. But it’s neat.
Well, you’re talking about clients uploading megabytes of data over and over: about 4 MB for a 2048×2048 8-bit single-channel image. And trusting the client, which you should never do.
And, erm, this is basically what I was saying about rendering to a texture, then to disk, and uploading it to the server once. Then it’s all there. It doesn’t have to be generated, just looked up once to generate the vertices, or loaded from the file system when that particular chunk is needed. But Flavien says it’d be terabytes at the detail needed.