The last progress report - Building height placement, planet "Z"

Okay so I think I have some solutions to those things mentioned? But I can’t reply there.

Continuing the discussion from Quick Progress Reports - Engineering:

Wait so, how are collisions with the planets handled now?
It’s just all on client?
And what about ships? Does the server at least verify those?
Otherwise it’s ripe for cheating. Granted, some games that have done the MMOFPS thing forgo collision checks on the server… (PlanetSide 2, which was full of hackers)

For the others reading this: it is normal for the server to run a simplified 3D version of the game to do hit detection and collision detection.
Just like on the game client, you have all the 3d objects moving around.
The difference is that it’s not actually rendered to the screen. So no batching, draw calls, and other things eating CPU, and no rendering that uses a GPU.

If that’s not there, you can’t really stop people from flying under the planet’s surface shooting up at people, can you?
And to solve the problem, well, you need it. That’s the only way around it, isn’t it?
And I’m guessing the problem is that planets are generated via GPGPU, and the server doesn’t have a capable GPU to do the same? So either a server with a GPU is needed, which can then render to texture to create an approximate collision mesh, or you rewrite all your GPGPU code so the same work can be done on the server’s CPU, eh?
Or ultimately… you can just dump the mesh into arrays on a client to basically copy paste onto the server, no?

Either way, isn’t this something that can’t be faked? The server NEEDS a planet mesh, whichever of the 3 or more ways you choose to have it get one.

Okay. But say you forget about that, and make it possible to cheat, since cheating hopefully/probably won’t be a problem until later and resources are scarce now.
Can’t you just place them really far up higher than the highest peak can be and let the client drop them down far away after they’re loaded and before they’re rendered?


Yup, they are client sided. A lot of things are still client sided too; this is one of the reasons the small NPC ships flying around the asteroid base/factories are indestructible. The current version is largely still based on the KS demo, which we developed in a hurry to be ready in time. After I finish reviewing the networking, the next big task is to properly re-architect the client/server. We also have plans to include a replay system to review the last battles at your own pace, or to upload short sequences/videos to YouTube. Disclaimer: as usual, not a promise.

Yes, the server runs the same physics simulation as the client, but only for player ships atm.

The client obviously performs such checks. The server doesn’t yet, so yeah, if you hacked your own client you could fly under a planet’s surface. But obviously there’s no current interest in doing that now ( just enable debug mode and you can already do it ), and later on it will be fixed.

[quote=“innociv, post:1, topic:4024”]
And I’m guessing the problem is that planets are generated via GPGPU, and the server doesn’t have a capable GPU to do the same?[/quote]

The material has a CPU component, and the client already uses it to generate collision meshes without having to read back from the GPU. The code is already there to retrieve a height value from the procedural algorithm on the CPU, but it still requires a renderer. So what we need to do is decouple the height gen on the CPU from the rendering, or add the ability to create a material without a renderer. It’s not that difficult; I only mentioned it to explain why the patch was getting delayed ( that plus my new PC arriving and the cooking bugs ).
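To illustrate what "height gen decoupled from the renderer" means in practice, here is a minimal sketch of a pure CPU height function built from fractal value noise. This is not the engine’s actual algorithm — the hash constants, octave count, and `max_altitude` are all made up for the example; the point is only that a function like this depends on nothing but coordinates, so a server can call it without any renderer or GPU.

```python
import math

def _hash_noise(ix: int, iy: int) -> float:
    """Deterministic pseudo-random value in [0, 1) for integer lattice coords."""
    n = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
    return (n ^ (n >> 16)) / 0x100000000

def _value_noise(x: float, y: float) -> float:
    """Bilinearly interpolated lattice noise with a smoothstep fade."""
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    fx, fy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    n00 = _hash_noise(ix, iy)
    n10 = _hash_noise(ix + 1, iy)
    n01 = _hash_noise(ix, iy + 1)
    n11 = _hash_noise(ix + 1, iy + 1)
    top = n00 + (n10 - n00) * fx
    bottom = n01 + (n11 - n01) * fx
    return top + (bottom - top) * fy

def get_height(x: float, y: float, octaves: int = 6,
               max_altitude: float = 9000.0) -> float:
    """Fractal (fBm) terrain height in meters -- pure CPU, no renderer involved."""
    total, amplitude, frequency, norm = 0.0, 1.0, 1.0 / 1000.0, 0.0
    for _ in range(octaves):
        total += _value_noise(x * frequency, y * frequency) * amplitude
        norm += amplitude
        amplitude *= 0.5
        frequency *= 2.0
    return (total / norm) * max_altitude
```

Because it is deterministic, client and server calling the same function with the same coordinates always agree on the terrain height.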


Thanks for the response.

The planets are static, right?
The problem you describe sounds like a big engine- and server-side problem if this were for procedurally generated planets, but for I:B isn’t it simple?

What I’m wondering is: why can’t you just render to texture on a debug client, save that to disk, and basically upload it to the server to use to compute the final terrain? Then you can find the height for the vertices at any point, and have what you eventually need for collision detection. Or you can simply build the vertices from there, dump the textures out of the server memory, and look up the nearby vertices to find heights, right? Like a wide raycast or something, not sure.
Obviously, like you said, no one has interest in hacking now, but isn’t that a pretty straight forward quick-and-dirty fix for both this problem, as well as what gives you what you’ll need in the future when the server becomes more of a game sim with the clients dumb terminals?
I’m basically bringing this up because it seems like skipping the server-side planet meshes saves time and energy in the wrong way: you fake around it to get buildings added, and that work just gets thrown out later for the proper implementation.
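The lookup I have in mind for a server that has the dumped heightmap on disk is just bilinear interpolation over the stored texels. A sketch, assuming a flat row-major array of normalized height samples (the `max_altitude` scale is a placeholder, not a real engine value):

```python
def sample_height(heightmap, width, height, u, v, max_altitude=9000.0):
    """Bilinearly interpolate a terrain height from a row-major heightmap.

    `heightmap` is a flat list of normalized [0, 1] samples,
    `u`/`v` are texture coordinates in [0, 1].
    """
    # Map texture coords to pixel space, clamped to the valid range.
    x = min(max(u * (width - 1), 0.0), width - 1)
    y = min(max(v * (height - 1), 0.0), height - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, width - 1), min(y0 + 1, height - 1)
    fx, fy = x - x0, y - y0

    # Blend the 4 surrounding texels.
    h00 = heightmap[y0 * width + x0]
    h10 = heightmap[y0 * width + x1]
    h01 = heightmap[y1 * width + x0]
    h11 = heightmap[y1 * width + x1]
    top = h00 + (h10 - h00) * fx
    bottom = h01 + (h11 - h01) * fx
    return (top + (bottom - top) * fy) * max_altitude
```

A height query for collision is then one array read and a handful of multiplies, no generation at all.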

I also don’t see why you’d even create the heightmap without a GPU, i.e. a CPU implementation. Like, you write double the code, and it’s the same result you could get by modifying the current game client to render to texture and write to disk, then uploading those textures to the server to use. It’s a bit of a hacky way to do it, but it 100% works, no?

If this were Infinity instead of I:B, then yeah, you’d ideally want the server to have a GPU that can run compute shaders. But it’s not, so you can just render the textures once and load those from the file system into memory on the server. The caveat is that you need your specialized client to redo this process each time an update changes terrain.

Ultimately I guess you MIGHT not need planet meshes on the server, if you do some sort of… I’m not sure what you would call it, like cloud peer relaying to let clients agree on collisions and check for agreements in what’s relayed, but that’s still hackable if the majority in an area all use the same hack.

You also might ultimately want a GPU on the server because you can potentially get your 1000 planets in real time combat at ~20 tick rate doing that. Simulating lots of clients at once is the sort of parallel computing GPUs are good at. It seems like a way you can quickly check if every client is accelerating faster than they should be, or if they’re colliding. :slight_smile:
But that’s beyond the scope given the current budget, and Infinity might not have a thousand players online very often at its current size. Hopefully it grows, as it’s a neat possibility.

Yes but that’s not the issue at hand.

Huh ? A procedural planet saved to disk like that would require dozens if not hundreds of terabytes, unless you did it at the Km or higher resolution, in which case it’d be completely useless for the current problem ( buildings are much smaller than 1 Km ). That plus generating the procedural data to textures would take hours, maybe days, due to the super high resolution. At ~1 Km resolution the texture would be 40000 x 20000 for an Earth like planet.

Reading back anything from the GPU kills parallelism and introduces framerate stalls. Everybody highly recommends against it. It’s still possible if you introduce some latency between the moment you issue the request and the moment you actually need the data ( you need many frames of latency ), but obviously that makes the pipeline more complex; when you need collision or height data, you need it asap, not in 10 frames.

We already have a CPU implementation, so the only question is how to decouple it from rendering. It’ll probably take a few hours to fix it. All other alternatives sound like they’d take days if not more to implement.

Servers in datacenters typically have integrated Intel cards and that’s it. I’m not aware of datacenters that even offer the possibility to install a dedicated GPU on their servers. Unless you do colocation ( ie. buy your own machine and simply relocate it to a datacenter to use their networking architecture ), which would be incredibly inconvenient for a small international company like us.


I have also already thought about the problem. As I understood it, the server requires information about what planets look like (a height map). Could not all clients provide height map data (very low resolution) to the server, so that the server collects the data for each planet in order to build a local “height map” cache? Well, this wouldn’t be hack-proof, though. Clients could “lie” to the server in order to influence the spawning position of each landbase in the game. Also, doing such computations on the server would not work, since a physical server would need to do mesh computations for all open games, for each single planet.

Maybe all clients compute the position data of landbases on their own and send the data to the server together. The server validates all position data, and only if they are equal are they treated as “valid”. Nevertheless, you guys will find a solution; I am a software engineer but not an I:B engineer, so I should wait and see what they come up with!
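The cross-validation idea could look roughly like this, where the server only accepts a landbase position when every reporting client agrees within a tolerance (all names and the tolerance value are hypothetical, just to make the idea concrete):

```python
def validate_position(reports, tolerance=1.0):
    """Accept a landbase position only if all client reports agree.

    `reports` is a list of (x, y, z) tuples, one per client. Returns the
    averaged position when all clients agree within `tolerance` meters
    on every axis, or None on any disagreement (or no reports).
    """
    if not reports:
        return None
    ref = reports[0]
    for r in reports[1:]:
        if any(abs(a - b) > tolerance for a, b in zip(ref, r)):
            return None  # clients disagree: flag for server-side recomputation
    n = len(reports)
    # Average to smooth out small floating-point differences between clients.
    return tuple(sum(r[i] for r in reports) / n for i in range(3))
```

As noted above this isn’t hack-proof — a coordinated majority of lying clients would still pass — but it limits what a single hacked client can do.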


Oh, I was figuring that a few million vertices is enough detail for an Earth sized planet. And a million vertices is 32MB.
That’s not enough even for the lower detail the server would use.

So the Earth is ~500,000,000 km^2. A million vertices is only one vertex per 500 km^2. I did vastly underestimate the scale.
But at 1km detail you state you “only” have 800 million pixels. So for the detail you’re talking to be terabytes, that’s an absurd amount of detail that seems like far more than you need on the server.

If that is truly the case, having GPU(s) on the server wouldn’t really help! They’d run out of GPU memory, since you still need whatever you deem the highest detail level wherever players are, all over the place. You don’t get the usual occlusion culling benefits. If there are hundreds of players spread all around, then suddenly you’re using up tons of VRAM generating the terrain there.

I still have to say… it still sounds like you should have it all precached on disk, not doing on the spot generation.
Even if it’s terabytes of data, it’s faster to look up the heightmap from disk. This is common caching, really, just on a huge scale. Otherwise you’re going to have the server slowly generate the same piece of terrain hundreds of thousands of times.
So 1 km resolution is 800 million pixels, or ~800 MB at 8-bit single channel, divided up 6 times, right?
What resolution do you think you need to reasonably do collision? 250 meters? Then that’s an extra 1.6 GB, divided up 48 times, right? My math could be off there. But, after all, as long as you make sure your lower detail height map is a bit below the higher detail one, you don’t need to worry about the client doing minor clipping; you just need to detect obvious flying under the ground.
That’s not much space. You could bring it down to 125 meters per pixel, or lower, it seems.

What seems to make the most sense is having 6 slices of your planet at the 1 km resolution to load from disk for far away. Then you have your smaller, higher-detail slices to load from disk when a player is closer, instead of repeatedly generating them.
This is a lot of disk space, but I can’t see how you won’t save an absurd amount of CPU.

Then you also have a REALLY consistent, predictable resource requirement on the server. You know if 8 Earth-sized planets are visible to players from all sides, that’s 1.6 GB of memory. If you have 50 players near the surface, that’s an additional 1.65 GB of memory. You aren’t worrying about it lurching as it constantly tries to generate new terrain.

This is all under the assumption that you’re ignoring vertices and just going to work off the heightmap, which seems it would be the way to go.

(Sorry this reply went longer than I expected…)

And to clarify I was speaking about doing so on the server, since you mention framerate stalls. Yes, this makes no sense on a client where you don’t have the multiple bodies (players, projectiles, all over the star system) to compute on in parallel. Which also means you can do this asynchronously, and just look for the results as they come in your tick loop.

I also have to say, I’m quite certain you’re vastly underestimating those integrated Intel GPUs!
For starters, you have shared memory and cache which you can use between the CPU and GPU, which gets around the usual overhead of transferring the data between them which, as you said, normally eats away at the gains.
And the other big thing is that I think you’re greatly underrating how powerful they are simply because of the render cost. Yes they aren’t getting 60FPS at 1080p on high end games, but render costs are high.
For example, if you compute a million particles that interact with each other on the GPU, it’s going to be like 10% of the time to actually calculate their movement, but 90% of the time actually rendering to the screen. Give or take. And to do the same on the CPU, also foregoing rendering, is much slower than the GPU does the calculation plus the rendering that took up most of the time.

So ultimately the overhead you have is calling your compute the same way you would supply positions to calculate collisions on the CPU. You can then look for the offending collisions or wrongly reported hit detections in the shared memory (shared L3 cache, really) each tick, right?
That, or through the OpenCL API. It’s virtually the same overhead as doing something multicore, except instead of another CPU core it’s the GPU doing things far faster.

Yes, it makes no sense in a 32 player FPS to use the CPU because a CPU can do those calculations on that few easily. However, with 500+ players on a server, I’m 99% certain you’d see massive gains at the point that your server side collision detection and hit detection nearly appear to be “free”.
If you have 100 people all constantly shooting, this is a simple program that has a lot running in parallel in the way a GPU likes. This is the only way you can get good cheat-proof hit detection. The alternative is that you fudge things far beyond the way Battlefield does with its net code, which opens up hacking. Whereas with compute, which I’m certain doesn’t take the GPU power you think it does, you can actually get a real sim running for hundreds of players more efficiently than Battlefield does 64 players. But at 64 players the gain is too small to bother with the programming headache, so we don’t see it done.
I think it’s more of an issue here that programming and debugging shaders is a pain in the ass, not that it wouldn’t work.

Yes, I did figure colo. It’s not actually that expensive. Don’t worry about it though, like I said, it doesn’t make sense at this point. But it’s neat.

Well, you’re talking about clients uploading megabytes of data over and over: ~4 MB for a 2048x2048 single 8-bit channel. And trusting the client, which you should never do.

And erm, I mean this is basically what I was saying about rendering to texture then disk and uploading to the server once. Then it’s all there. It doesn’t have to be generated, just looked up once to generate the vertices, or loaded from file system when that particular chunk is needed. But Flavien says it’d be terabytes at the detail needed.

Would it be possible for all buildings to have hand-designed bases to sit on? You could then texture the bases with whatever the local environ is.

This is, of course, assuming you can make them have collision models with ships, but not the planet itself.

As I understood it, the bases currently need to be placed by hand. The goal is to have an automatic system that can place bases on the planet; the system needs to know at what height to place the bases, but there is currently no way for it to know that height.

For the terrain collision: I think there are probably more elegant ways for the server to validate terrain collision. Prerendering and saving terrain detail onto disk is stupid imo; what’s the point of procedural generation tech if you do that? For a single solar system this could probably be done, but imagine in the future when a game is done that lets us cruise a whole galaxy; such a system would be insane for that.

It’s really not stupid. Caching things that are commonly generated is one of the most basic server optimizations there is. It’s simply at a larger data size here.

If it’s only dozens or hundreds of GB instead of TB per planet, it makes perfect sense to have it all pre-generated on disk to load in as needed. Generating this level of terrain on the CPU is absurdly expensive and will drastically cut down on the number of players on the server, or servers.

You could, sure, use the integrated GPU on a Xeon processor for the generation, but it’s still a waste when caching to disk, especially an SSD, is so cheap, and you can then use the integrated GPU for other things instead.

Ultimately… like I said, if it’s really dozens of terabytes per planet that’s needed for the level of detail needed on the server, not only does pre-generating and caching not help, but it’s not going to work period with any implementation since you’ll run out of memory when you have to generate it at many different points simultaneously for all the players. So saying caching is “stupid” is moot, since there’s no option that works besides not doing collision detection on the server at all.

It’s not going to work, period. The terrain is divided up to 16 times, so for a single cube face of the planet that’d be 2^16 = 65536 terrain tiles (chunks) per side. A single cube face would be 65536 * 65536 = 4.3 billion tiles. There are 6 cube faces per planet, so that’d be 4.3 * 6 = 25.8 billion tiles. Each tile covers an area of 100m x 100m, so at a 1 meter resolution that’s 25.8 billion * 100 * 100 values; each value must be encoded on 16 bits ( 8 bits don’t provide enough altitude resolution ), so you need to multiply the result by 2.

Total: 65536 * 65536 * 6 * 100 * 100 * 2 bytes = 5.15*10^14 bytes = 515 TB per planet.

At 50 milliseconds of generation time per tile, generating the entire planet would take 65536 * 65536 * 6 * 50 milliseconds = 1288490189 seconds = 41 years.

Even if we can compress the data, even if we use a lower resolution, we’re still off by many orders of magnitude.
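The arithmetic in the two estimates above is easy to reproduce (same numbers as the post, just spelled out):

```python
# Reproducing the storage and generation-time estimate from the post above.
TILES_PER_SIDE = 2 ** 16      # 16 subdivision levels per cube face side
FACES = 6                     # cube-mapped planet
VALUES_PER_TILE = 100 * 100   # 1 m resolution over a 100 m x 100 m tile
BYTES_PER_VALUE = 2           # 16-bit altitude values

tiles = TILES_PER_SIDE ** 2 * FACES                      # ~25.8 billion tiles
total_bytes = tiles * VALUES_PER_TILE * BYTES_PER_VALUE  # ~5.15e14 bytes
print(f"{total_bytes / 1e12:.0f} TB")                    # prints "515 TB"

GEN_SECONDS_PER_TILE = 0.050                             # 50 ms per tile
total_years = tiles * GEN_SECONDS_PER_TILE / (365.25 * 24 * 3600)
print(f"{total_years:.0f} years")                        # prints "41 years"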


Right, again, sure, if you’re talking that detail.

But then also nothing will work. You won’t be able to support hundreds of players if they spread out because of all the terrain the server will have to generate into memory on the spot rather than cached.

So it comes down to what I was saying: If caching doesn’t work, because you need too much detail server side, nothing will.

Heck, even with 32 players, if they all get close to 32 different spots on a single planet, you’d probably run out of VRAM for that, wouldn’t you?
For the CPU, you might have enough memory, but boy will it be slow. And you’d run out of memory getting into hundreds of players that way, still.

So you’re going to have to figure out how to work with lower detail server side, aren’t you? In which case the pre-generating and caching would work.

Like you say, 41 years. This is a single solar system. You don’t think you’ll get enough players exploring all over that they generate that 41 years’ worth of terrain in server time anyway? Thus your problem is not that you can’t precache 500 TB of data per planet; your problem is needing that level of precision on the server side to begin with.

To me, it sounds like being off by orders of magnitude is close enough. You have smoothing between your pixels to find the approximate height, as long as you don’t have, say, a 50m wide hole that’s 50m deep with a 49m wide building down inside it, right?
You “simply” need to make sure that the smoothed lower-resolution terrain is never higher, at any point between its pixels, than the higher-detail terrain. So the higher detail needs to always add, never subtract.
In doing this, yes your higher detail could put a pillar or bump in the terrain that the server can’t detect collision for, but you’re generally going to find that okay, and anyone who is really hacking and trying to get down under the terrain is eventually going to run into the spots where it IS caught.
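That "always add, never subtract" constraint can be enforced mechanically when building the server’s low-res map: downsample with a min filter instead of averaging, so the coarse surface always sits at or below the true terrain. A sketch (the function name is mine):

```python
def conservative_downsample(high, factor):
    """Downsample a heightmap by taking the MIN of each factor x factor block.

    Because every low-res sample is the minimum of the high-res samples it
    covers, the coarse terrain never rises above the detailed terrain, so a
    ship reported below the coarse surface is definitely below the real one.
    """
    h, w = len(high), len(high[0])
    out = []
    for by in range(0, h, factor):
        row = []
        for bx in range(0, w, factor):
            row.append(min(high[y][x]
                           for y in range(by, min(by + factor, h))
                           for x in range(bx, min(bx + factor, w))))
        out.append(row)
    return out
```

The trade-off is exactly the one described above: bumps and pillars disappear from the server’s view, so minor clipping goes undetected, but flying well under the ground always triggers.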

Innociv, cut it out. You’re trolling the lead developer in a way that he finds difficult to ignore. You’re wasting his time, and if there’s something that this project cannot tolerate, it’s a loss of Flavien’s time.

You aren’t on the project. You don’t know the architecture of his product. You don’t know what building blocks he’s already got written. You don’t know what he’s going to need in the future. You’ve picked up on one feature, have an idea that you’re married to, and you figure that it’s so worthwhile that Flavien should spend his time debating you over its merits, despite him saying point blank that it won’t work.

Just stop.


I’m not posting this to be a dick. These are actually serious, legitimate concerns I’ve laid out. Trying to generate terrain at the detail Flavien is talking about on the server for every player simply won’t work without a legit super computer, if you followed what I’m saying. And I’m only trying to help.

Flavien has already mentioned that he doesn’t consider it a big problem once he’s decoupled the height gen from the renderer. Presumably at that point he’ll have a fairly simple server side Planet.getHeight(x,y) function that should serve most purposes.


I know you’re trying to help. But you’re doing the opposite. So stop already.

That would work for placing buildings, yes. Which, like I said at the start, you could have something that works for that, but it brings up the bigger problem: collision. Are you going to call that every tick, or every few ticks, per player to check whether they are below the terrain? Because looking up a blended position between 4 points in an array is going to be FAR faster, but it means having that data in memory to look up.

This is trading memory for CPU time, but in a game with lots of players it’s going to be the CPU that chokes you back, not memory. That’s why I brought this up.

I mean I could be wrong, and having that function be called 5 times per second for 500 players won’t use up a significant amount of CPU.
I guess it depends if the noise generation requires it to know a number of neighboring pixels or if you can really just generate one, eh?

You are wiggling around, @innociv. Your concerns are valid, but they might not be as difficult to solve as you think. Also, Flavien said the fix should only take a few hours; if he weren’t answering in this thread, the problem could already have been fixed.

Anyway, he said it takes 50 ms to generate a 100x100m patch. At 60 fps you are running at about 16 ms per frame, so in theory the collision can be checked every 3 frames; it’s just that the check will need to run on a different thread. Assuming 100x100 px per patch to get the 1 m resolution at 16 bits, that’s 160k bits or 20 KB per player; taking the extreme scenario of 1k players, we end up at 20 MB of memory usage.

At 50 ms for a 100 x 100 patch, it takes 0.005 ms to get a single height value. For 500 players that’s 2.5 ms. Do that 10 times per second and that’s 25 ms or 2.5% of the cpu resources per second. I don’t think it’s that bad, especially considering not all these players will be close to a planet, so most likely the average case is going to be lower.
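That estimate generalizes into a back-of-the-envelope formula (the numbers below come from the post above; the function itself is just illustrative):

```python
def collision_cpu_fraction(players, checks_per_sec, us_per_sample,
                           samples_per_check=1):
    """Fraction of a single core consumed by server-side terrain height checks.

    One core has 1,000,000 microseconds of compute time per second, so the
    fraction is just total busy microseconds per second divided by 1e6.
    """
    busy_us_per_second = players * checks_per_sec * us_per_sample * samples_per_check
    return busy_us_per_second / 1_000_000

# 500 players x 10 checks/s x 5 us per height sample -> 0.025, i.e. 2.5% of a core
print(collision_cpu_fraction(500, 10, 5))
```

Plugging in other scenarios from this thread (fewer checks per second, or 5 samples per ship) is then one function call.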


@innociv, if you’re worried about someone cheating and bypassing local collision detection to shoot someone under the planet, there are many other alternatives.

Doing a search over even a lower resolution collision space is, as Flavien said, unrealistic. Searching a space like that, no matter what hardware the search is run on, will still be orders of magnitude too high for a real time game.

The original problem was placing factories, which Flavien said he has a method to fix (and not a terribly complicated one at that)

But for the sake of conversation, why not employ some other methods of hack/cheat detection? The game client does all of the collision detection checks, correct? Why not just do a check on the player shooting at you as well? I’m sure there are multiple methods to do client-side hack detection that report back to the server.

However, that’s a discussion for another thread. The original topic for this one I would consider more than addressed.

The problem in that example isn’t the memory usage; it’s that you’re locking a thread for 50 ms every 3 frames, per player.

But that wouldn’t be what happens there. It’d be a single point generated to see if they’re below it or not, assuming no overhanging terrain. But generating a single point on the CPU is still, I imagine, crazy expensive when you scale it to many players.

So, to extrapolate and assume: let’s say the pixel is 1m, and generating it takes 1/10,000th the time of a 100x100 patch. That’s 5 microseconds.
Well, Flavien confirmed that as I’m writing this.

So if that’s the case and you do 5 checks per second on 200 people, that’s 5 milliseconds per second, or 1/200th of the cycles on a core taken up. Yes, that isn’t too bad assuming a number of cores. Though you still add a little more time for the greater function call overhead of generating one point many times instead of a big patch, but still, I guess that would potentially scale to be usable for collision.

Using up 0.5% of a core merely for collision is still a lot in my eyes, but it does sound doable when put like that.

But that’s for 1x1m. Ships are larger than that, so you need to generate more.

You and I are both assuming, in our examples, that you just do collision on the center point versus the ground right below, so really it’s going to cost much more than that.
Like… it’d be funny if someone actually figured that out and made a hack where they can have almost half their huge ship through the ground because they know the collision was based on a 1 square meter point.

So in reality, you may want to sample 5 positions. The center and 4 corners, per ship. Now it’s gone from 0.5% to 2.5%, or 12.5% in your 500 example.
I’ll be more generous and assume the server averages like 25% CPU usage at, say… 500 players, with 200 close enough to the surface that you need to do collisions, plus let’s say it’s like 0.01 microseconds just to check whether they’re below the highest point terrain could even be.
Well, you’re then spending almost 3% CPU on collision against that 25% total. Over 10% of the load on your server sim is very basic collisions alone. That’s extremely abnormal, so you see why I feel it won’t scale.
You can drop it to every half second, and spread them out over a few ticks, I suppose.
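Spreading the checks over ticks is easy to sketch: round-robin the ships so each one is validated once every N ticks and the per-tick cost stays flat (the function name and the spread value are mine, not from the engine):

```python
def ships_due_this_tick(ships, tick, spread=10):
    """Return the subset of ships whose terrain check falls on this tick.

    Round-robin by index: with spread=10 at a 20 Hz tick rate, every ship is
    still validated twice a second, but each tick only pays for 1/10th of them.
    """
    return [s for i, s in enumerate(ships) if i % spread == tick % spread]
```

Over any `spread` consecutive ticks every ship gets checked exactly once, so worst-case per-tick load is predictable regardless of player count.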

However, yeah, you could look up those 5 points per position on the integrated GPU. The GPU on those modern Xeons is actually good enough, so in that sense your way works if you need that precision rather than the lower detail I thought you could get away with. Doing it on the CPU would work short term, and it could scale later by moving it to the GPU.

Sure, you can do things like allow clients to agree on collisions, and only when they disagree does the server check. Lots more programming involved this way, but it’s doable to have less load on the server.
And even then I’m not sure since the server has to look to the past, but Flavien said replays were planned which solves that problem.