The place I work at was leasing a 20-core Xeon with dual Nvidia cards for AI research. The datacenter and hardware are located somewhere in France, I believe. If I remember the details correctly, the company leasing the hardware to us is not the datacenter: they buy the hardware, put it in the datacenter, maintain it, and resell time on it for a profit.
I could find the name of the leasing company if you were interested.
Several cloud providers (Amazon is one) can provide a GPU system as well, although we found that if you need it long term, leasing the hardware is much cheaper.
Yeah, Amazon Web Services also offers GPU server instances. Really, lots do.
But like I said, a modern Xeon with its integrated GPU is perfectly adequate for what's needed here. A discrete GPU shouldn't be needed, under the assumption that Infinity's proc gen can generate a single point instead of needing to do whole patches. I don't think that's assuming too much.
The collision detection you'd end up doing on a discrete GPU for 20 people would likely finish in the same amount of time for 500 or even thousands of people on the integrated GPU, once you include the overhead of calling out to the GPU and reading back the results.
I don't see why an actual GPU would be needed in the way Flavien describes it. If he can make the algorithm generate individual points instead of large patches, there wouldn't be those memory issues or the need for a powerful GPU to churn through large patches.
Not only that, but the integrated GPU can do both the generation of the height points and the collision checking in the same program, very, very quickly.
If it is the case that a single point can be looked up, without needing big patches that use a lot of memory, I think something like an E3-1285 v4 would supply all the graphical power needed, and it would be better than a discrete GPU since the CPU and GPU share memory. It has 48 execution units at 1150 MHz, around 850 GFLOPS total, and 128 MB of eDRAM shared between the CPU and GPU, if needed.
If I were coding it, the server-side collision detection would just be a sanity check. Real-time collision detection would be done on the client. The server could then do a sanity check once every fifty ticks to make sure the client isn't cheating. Any cheat that allows you to fly through terrain, but only for half a second, isn't going to be a huge amount of use.
I would also sanity check my sanity checks: if a client repeatedly failed the server-side check, it would get flagged as potentially hacked. You could then flag that account for a mandatory patch to the latest version.
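For what it's worth, here's a minimal C++ sketch of how that could look, assuming a periodic tick loop; `altitudeAboveTerrain()` and `flagAccountForReview()` are hypothetical helpers, not actual I-Novae engine code:

```cpp
#include <cstdint>
#include <unordered_map>

struct Vec3 { double x, y, z; };

struct Ship {
    uint64_t accountId;
    Vec3     position;    // as reported by the client
    double   hullRadius;  // rough bounding sphere of the ship
};

double altitudeAboveTerrain(const Vec3& pos);    // assumed terrain query
void   flagAccountForReview(uint64_t accountId); // assumed admin hook

constexpr int CHECK_INTERVAL_TICKS = 50; // cheap: one check per 50 ticks
constexpr int STRIKES_BEFORE_FLAG  = 3;  // repeated failures => flag

std::unordered_map<uint64_t, int> strikes;

void sanityCheck(const Ship& ship, int64_t tick)
{
    if (tick % CHECK_INTERVAL_TICKS != 0)
        return;

    // Sitting deeper underground than the hull radius can explain means
    // the client's reported position is almost certainly invalid.
    const bool belowGround =
        altitudeAboveTerrain(ship.position) < -ship.hullRadius;

    if (!belowGround) {
        strikes[ship.accountId] = 0;  // a clean check resets the count
    } else if (++strikes[ship.accountId] >= STRIKES_BEFORE_FLAG) {
        // "Sanity check the sanity checks": only repeated failures get
        // flagged, e.g. for a mandatory patch or manual review.
        flagAccountForReview(ship.accountId);
    }
}
```

The strike counter is the key part: a single failed check could just be lag or a physics glitch, so only a pattern of failures should trigger anything.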
This argument is flawed because there are many reasons to use a GPU, not just raw power. The Battlescape prototype itself, for example, doesn't work on integrated graphics (I know, I tried). It's as much about the architecture as the GHz or whatever.
The I-Novae engine has been built from scratch to work in a particular way that is specific to Flavien's coding style and preferences. Yes, there may be other ways of doing things, but the ways they are using currently seem to be working fine so far.
The planetary generation engine (including height mapping) is pretty sophisticated. If Flavien says he can pull the height data for building placement with a few hours' work (read: a couple of days!), I believe it, and it seems like the simplest solution, bar hand-placing everything. I'm sure it will need tweaking anyway.
I don't think it would take much CPU time to perform basic collision detection to catch cheaters. Five points aren't required; a single point every so often, at a randomized position within the ship, should be enough to find most types of invalid collision states. It would be incredibly difficult to keep any significant part of your ship under the surface mesh for more than a second without violating this check.
The collision check frequency could be a function of ship speed and rough distance to the surface. Ships that are far from the surface or stopped won't require any checks (assuming the planet surface is static), which will likely exclude most ships from frequent checks. The frequency can also be adjusted based on server load to cap the CPU time at a fixed budget.
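To make that concrete, here's a rough C++ sketch of both ideas together: one randomized sample point per ship, plus priority/budget scheduling. `Ship`, `altitudeAbove`, and `pointBelowTerrain` are illustrative stand-ins, not engine functions:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <vector>

struct Vec3 { double x, y, z; };

struct Ship {
    Vec3 position;      // world position, reported by the client
    Vec3 velocity;      // m/s
    Vec3 halfExtents;   // rough local-space bounding box of the hull
};

double altitudeAbove(const Vec3& p);     // assumed terrain-height query
bool   pointBelowTerrain(const Vec3& p); // assumed single-point test

double rand01() { return std::rand() / (double)RAND_MAX; }

// Fast and low => high priority; stopped or far from the surface => ~0,
// which lets the server skip most ships entirely on a static planet.
double checkPriority(const Ship& s)
{
    const double speed = std::sqrt(s.velocity.x * s.velocity.x +
                                   s.velocity.y * s.velocity.y +
                                   s.velocity.z * s.velocity.z);
    return speed / std::max(altitudeAbove(s.position), 1.0);
}

// One randomized sample point inside the hull per check: cheap, and a
// ship that stays meaningfully underground fails it within a few checks.
bool failsSanityCheck(const Ship& s)
{
    const Vec3 sample = {
        s.position.x + (rand01() * 2.0 - 1.0) * s.halfExtents.x,
        s.position.y + (rand01() * 2.0 - 1.0) * s.halfExtents.y,
        s.position.z + (rand01() * 2.0 - 1.0) * s.halfExtents.z,
    };
    return pointBelowTerrain(sample);
}

// Each tick, check only the highest-priority ships until a fixed budget
// is spent, so CPU cost stays bounded regardless of player count.
void runChecks(std::vector<Ship>& ships, int budget)
{
    std::sort(ships.begin(), ships.end(), [](const Ship& a, const Ship& b) {
        return checkPriority(a) > checkPriority(b);
    });
    for (int i = 0; i < budget && i < (int)ships.size(); ++i) {
        if (failsSanityCheck(ships[i])) {
            // record a strike here, as in the earlier sketch
        }
    }
}
```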
It's not clear that a GPU would be more efficient than a CPU for these checks. GPUs are good at computing large batches of coherent data on some sort of grid; a series of random points scattered across multiple planets would not use the GPU optimally, and the checks would likely be communication-bound sending data back and forth anyway. Of course, it all depends on how expensive it is to evaluate a single terrain point, perhaps at a lower accuracy than what is used for rendering. Is the terrain created with many octaves of some sort of multifractal noise?
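If it is octave-based noise (pure speculation on my part for Infinity's terrain), the "lower accuracy" idea could be as simple as summing fewer octaves for the cheat check than for rendering. A generic fBm sketch in C++, with `noise2` as an assumed smooth-noise primitive and the octave counts purely illustrative:

```cpp
#include <cmath>

double noise2(double x, double y);  // assumed gradient noise in [-1, 1]

// Classic fractional-Brownian-motion height: each octave doubles the
// frequency and halves the amplitude, so later octaves only add fine
// detail that a metre-scale cheat check doesn't need.
double terrainHeight(double x, double y, int octaves)
{
    double height = 0.0, amplitude = 1.0, frequency = 1.0;
    for (int i = 0; i < octaves; ++i) {
        height    += amplitude * noise2(x * frequency, y * frequency);
        amplitude *= 0.5;
        frequency *= 2.0;
    }
    return height;
}

// e.g. terrainHeight(x, y, 5) for the server's sanity check versus
// something like terrainHeight(x, y, 12) for rendering-accurate geometry.
```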
On the other hand, plenty of data centers have GPUs available now (AWS and others, as mentioned above).
GPU nodes may be more expensive, but they're there to use. In the next few years I expect GPUs and other massively parallel compute resources to become more common in data centers and cloud compute. It might be a good idea to start looking into using GPUs for some of the server-side processing, with future expansion in mind.
Yeah, that's true, it could be a sample of a random corner of the ship when it's just a sanity check. That does cut down on the overhead a ton.
It's still far more expensive than collision detection generally is in a game, which is a drawback when you're trying to attract more players rather than fewer, but as critic said, the checks really don't have to happen that often, so it should work out.