I’m certain that many of you have come across a company called Euclideon. They are currently in the process of blowing up the geospatial industry with their innovative software, which is able to render unlimited amounts of point-cloud data using nothing more than CPU power (forget high-end graphics cards). Check out a presentation of their “Geoverse” software here:
Now, you might ask yourself, what does this have to do with Infinity, or video games? Well, the truth is that Euclideon started with one guy who was (and probably still is) interested in making a video game engine to end all video game engines. Check out this early presentation, before Euclideon went into the geospatial industry:
If you don’t immediately see the enormous potential then you are brain-dead. I think that the Infinity devs need to reach out to Euclideon and see if they would be willing to collaborate on converting the I-Novae engine so that it can do away with polygons and enter the 21st century.
The Euclideon stuff has been debated to death all over the internet. It’s a static capture of the environment; it won’t work with dynamic environments. Animation is also a big problem.
While their island example has a high level of detail, it looks bad.
I never imagined that it would be easy. Hardly anything worth doing is easy. The potential payoff, however, would be enormous. We are talking about vast procedurally generated environments, with virtually atomic detail here. This has only been dreamed of up to this point, and now it has the potential to become reality. That’s why I think the devs should seriously consider reaching out to Euclideon for collaboration.
What would you say is the biggest hindrance to converting the existing engine? Getting dev input would be great, since, like you said, they are deeper in the “know” with this kind of stuff.
[quote=“INovaeJan, post:3, topic:80, full:true”]
It’s a static capture of the environment; it won’t work with dynamic environments. Animation is also a big problem.[/quote]
Difficult? Sure. Lots of kinks to work out? Yes. Impossible? No. See below.
That’s because it wasn’t made to look pretty. It’s just a proof of concept done by software engineers without any real artistic input.
@31:15 - “Animation… I will make a statement… when we release animation, I believe, no, I’m pretty close to certain it will be the best animation anyone has ever seen… and I have very good reason to make that statement.”
Watch the mini presentation starting at 32:30.
These people are serious and it would only benefit the Infinity devs and their potential future games to get on that train as early as possible.
Not gonna happen. I mean, even if we were interested ( which we are not ), it’d mean scrapping a decade of work to use somebody else’s tech / IP, which means totally devaluing all our efforts until now. That makes absolutely no business / strategic sense.
Well, just generating the planets, for one, would be a massive undertaking, and that’s me being optimistic. Euclideon’s current tech works by streaming point clouds, but you’d still have to procedurally generate those point clouds in the first place. And in 3D, which would be an order of magnitude slower than what we currently do ( planetary elevations are 2D ). And if you wanted “texturing” details on the ground instead of a flat color, since in their tech that’s part of the 3D dataset, we’re talking about generating something like millions of times more procedural texel data than what we currently do ( which is already stressing our GPUs ).
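To give a rough sense of that ratio, here is a back-of-the-envelope sketch; the resolutions in it are made up for illustration, not actual engine numbers:

```python
# Back-of-the-envelope only -- the resolutions below are assumptions chosen
# for illustration, not actual engine figures.

patch_side_m = 1_000            # a hypothetical 1 km x 1 km terrain patch
heightmap_res_m = 1.0           # one elevation sample per metre (2D heightfield)
surface_detail_res_m = 0.001    # one point/"texel" per millimetre of surface (3D dataset)

samples_2d = (patch_side_m / heightmap_res_m) ** 2
samples_3d_surface = (patch_side_m / surface_detail_res_m) ** 2

print(f"2D elevation samples:     {samples_2d:.0e}")          # ~1e6
print(f"mm-scale surface samples: {samples_3d_surface:.0e}")  # ~1e12
print(f"ratio:                    {samples_3d_surface / samples_2d:.0e}")  # ~1e6, i.e. millions of times more
```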
[quote=“INovaeFlavien, post:8, topic:80”]
Not gonna happen. I mean, even if we were interested ( which we are not ), it’d mean scrapping a decade of work to use somebody else’s tech / IP, which means totally devaluing all our efforts until now. That makes absolutely no business / strategic sense.[/quote]
Is that necessarily the case? I was thinking more along the lines of you adapting Euclideon’s tech to your needs and merging it with your engine. Yes, some of what you have developed will need to go, but in the end the process will only enrich what you have rather than devalue it. I admit that lots of the details surrounding software development elude me, but I feel like you are being unnecessarily pessimistic in your outlook.
That does sound very difficult. What if you only generate what needs to be on-screen at any time, with variable amounts of point-cloud detail based on the distance of the player from a given object (something along the lines of the sketch below)? For example, when looking at an Earth-sized planet from 50,000 km away, only a limited amount of data would be procedurally generated, because past a certain number of points there will be no discernible difference. Once the player approaches to, say, 300 km (low Earth orbit), there will be much more detail, but again, only what is being looked at.
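Purely hypothetical numbers and code, just to show what I mean:

```python
import math

# Toy sketch of distance-based detail: pick a sample spacing so that one
# generated point covers roughly one screen pixel. All numbers hypothetical.

def required_spacing_m(distance_m, screen_width_px=1920, horizontal_fov_deg=90.0):
    """Approximate ground spacing at which one sample maps to about one pixel."""
    pixel_angle = math.radians(horizontal_fov_deg) / screen_width_px
    return distance_m * pixel_angle          # small-angle approximation

for d in (50_000_000, 300_000, 1_000, 2):    # 50,000 km, 300 km (LEO), 1 km, 2 m
    print(f"distance {d:>12,} m -> generate ~1 sample every {required_spacing_m(d):.3g} m")
```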
I’m just shooting random ideas here. In any case, thanks for taking time to reply.
Devs of any game would do themselves good to listen. This is the future.
At interplanetary distances the procedurally generated stuff is good, sure. But get up close to the terrain and it’s icky.
In a metaphorical sense, yes. I believe that if it’s merged and adapted to what the devs have done so far with procedural generation, it can open up boundless new possibilities for them. Unfortunately they seem to disagree. When did you first come across Infinity? I remember being totally amazed when I first watched those clips that came out 5 (6?) years ago. Unfortunately, the slow dev cycle and other games have started to make the concept of Infinity a bit stale (a bad sign before anything playable has even come out). It’s not the biggest loss in the world, and I’m sure that whatever products they release down the road will be amazing, but here is an opportunity to go a step beyond, blow everyone out of the water, and give the engine a new PR boost.
If the technical hurdles are prohibitive at this point, the devs should at least consider it as a long-term prospect.
This is the fourth or fifth time that I personally have seen the Euclideon stuff posted. Once again, I promise you that the Infinity devs are *well* aware of what’s happening in the video game engine and 3D/procedural generation world, as well as (IIRC) reading SIGGRAPH papers and implementing new ideas that are revolutionary and applicable (though @INovaeFlavien could correct me).
[quote]I believe that if it’s merged and adapted to what the devs have done so far with procedural generation, it can open up boundless new possibilities for them.[/quote]
That’s the problem. The state of their algorithms is likely not prepared for the enormous amount of cache traffic that would be required to push dynamic generation down into tiny view frusta. In fact, they’re probably not yet suitable for handling much of that at all. The rendering pipelines real-time programmers have today allow GPUs (or even CPUs, for that matter) to work sanely on very discrete, naturally vectorized tasks without shredding the GPU’s internal buses. The constraints that make Euclideon’s algorithms work are at once a mathematical feat and a barrier. The most programmability they can probably allow their pipeline is functional scalar fields… and only after some core improvements. That’s an assumption, but it looks like they will need to make some large changes to their core algorithm (completely revamp it) before it will be suitable for games.
Mesh geometry lets pipelines be designed for a high amount of cache coherence while staying programmable. There are a lot of places where we can drop in custom code and do very extensive work without execution units getting thrown off by uncertainty about which code belongs to which fragment; the computation batches in exactly the way that GPUs or many-core CPUs (or even a single core) are good at. A rough illustration of that difference is sketched below.
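A toy illustration (plain CPU-side Python, nothing resembling real renderer code):

```python
import array

# Toy contrast only: a mesh pipeline streams through contiguous vertex data,
# while a point-cloud tree is walked by chasing child pointers, which is far
# less friendly to caches and vectorization. Not real renderer code.

# Mesh-style: one tight loop over a flat, contiguous buffer of floats.
vertices = array.array("f", [0.0] * 3 * 100_000)     # packed x,y,z triples
def scale_vertices(buf, scale):
    for i in range(len(buf)):
        buf[i] *= scale                               # predictable, streaming access

# Point-cloud-style: recursive descent through a sparse tree of nodes.
class Node:
    def __init__(self, point=None, children=None):
        self.point = point
        self.children = children or []

def visit(node, fn):
    if node.point is not None:
        fn(node.point)                                # each hop may miss the cache
    for child in node.children:
        visit(child, fn)
```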
Not to mention we want to ship a game eventually, not get caught in a perpetual R&D cycle. I feel we are getting to a good spot, toolset-wise, on the art side to start producing content and assets at a greater pace. Future-proofing our tech is important, no doubt about it, but we have to be careful of feature creep, regardless of how cool things seem on the surface.
Euclideon’s technology is all smoke, mirrors, and a ridiculous amount of marketing hyperbole. There are very good reasons nobody else is doing anything similar to what they’re doing. If their Unlimited Detail tech were so revolutionary, I assure you that Unreal Engine 4, CryEngine, etc. would all be moving aggressively in the same direction as Euclideon; they’re not.
If you take a careful look at their demos, the first glaring deficiency is that there are no animations whatsoever. Their statement that they’re building a “revolutionary” animation system is pure bullshit. My understanding from public statements they’ve made is that they’re organizing their point-cloud info similar to how enterprise database software organizes geospatial information. They make no mention of how much pre-processing is required to do this nor how much disk space it takes up. Clearly this point-cloud information is quite difficult to animate, or else they would have done so already, as it’s one of the biggest criticisms leveled against them.
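To see why a precomputed spatial index fights animation, here is a toy sketch assuming a simple grid-style index (an assumption for illustration; Euclideon’s actual data structure has not been published):

```python
from collections import defaultdict

# Toy sketch: points bucketed into a coarse spatial grid, the simplest kind of
# spatial index. Assumed structure, for illustration only.

CELL = 1.0   # cell size in metres

def build_index(points):
    index = defaultdict(list)
    for p in points:                                   # p = (x, y, z)
        key = tuple(int(c // CELL) for c in p)
        index[key].append(p)
    return index

index = build_index([(0.2, 0.1, 0.0), (0.9, 0.4, 0.0), (5.3, 2.2, 1.1)])

# Animating means moving points, and a moved point may land in a different
# cell -- so an index built offline over billions of points would have to be
# updated or rebuilt every frame, which is exactly the expensive part.
```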
Secondly, they are using a laser scanner to scan in everything. This scanner seems to be capable of sampling the diffuse lighting only. Specular reflections (aka high-frequency lighting) are all view dependent. None of their demos contain any dynamic lights, dynamic shadows, specular reflections, or reflections of any kind (they try to fake reflections in their first demo). Also, point clouds seem poorly suited to advanced global illumination techniques such as ray tracing, because how do you calculate the intersection point of a surface and a ray when that surface is just a collection of points? You somehow have to turn those points into polygons or similar mathematical primitives just to be able to efficiently calculate accurate surface intersections.
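One common workaround is to give each point a small radius, a “splat”, so a ray actually has something to hit. A minimal sketch of that idea (illustrative only, not how Euclideon or anyone in particular does it):

```python
import math

# Minimal ray/sphere intersection: one way to give a bare point some surface a
# ray can hit is to treat it as a tiny sphere ("splat"). Illustrative only.

def ray_hits_splat(origin, direction, center, radius):
    """Return the distance along the ray to the splat, or None if it misses."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction                      # assumed normalized
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                             # ray misses this point entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0.0 else None

# A bare point has zero area, so without a radius (or a reconstructed surface)
# the chance of any given ray hitting it is effectively zero.
print(ray_hits_splat((0, 0, 0), (0, 0, 1), (0, 0, 5), 0.01))   # ~4.99
```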
Simply put, Euclideon’s tech is pure bullshit in any context other than maps, which is why I think they’ve moved so aggressively toward building GIS software. They can produce beautiful static maps and that’s about it at the moment.
I’m not pessimistic. Both techs are fundamentally different. Euclideon’s doesn’t even use the video card. There’s no code that can be reused from ours to theirs and vice versa.
That’s already the case for both techs. They only stream enough level-of-detail points to map a 1:1 pixel ratio, and we only generate triangles procedurally up to a certain level of geometry based on the camera’s distance to the planet. That “millions” performance ratio still applies.
At the moment we are close to first-person-shooter-like quality at walking ground levels, no more, no less. And I personally do not think Euclideon does better. If you think of a wall that is 1 meter in front of you, viewed in HD 2K, you need a density of one point per millimeter to get a quality similar to what you can do with textures on a GPU. If you watch their videos, you should notice that they’re still far from that resolution. Maybe they can do it technically, but on a small area, not an entire level ( and I’m not even talking about a planet here ).
Is any math freak motivated to calculate the surface area of a typical game level ( say, one of Battlefield 4’s maps ) in square millimeters? I’d be curious to see the result. In any case, that probably amounts to dozens of gigabytes of HD space, if not more, even compressed… think of the space requirements for an entire game.
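Here is a very rough version of that calculation; every number is an assumption picked for illustration (a 2 km x 2 km map, one point per square millimeter, a couple of bytes per compressed point):

```python
# Back-of-the-envelope only -- every number here is an assumption chosen for
# illustration, not a measurement of any actual game or dataset.

map_side_m = 2_000                 # assume a 2 km x 2 km outdoor map
points_per_mm2 = 1                 # the one-point-per-millimetre density above
bytes_per_point = 2                # assume aggressive compression

area_mm2 = (map_side_m * 1_000) ** 2          # flat ground only
n_points = area_mm2 * points_per_mm2
size_bytes = n_points * bytes_per_point

print(f"area:   {area_mm2:.1e} mm^2")          # ~4.0e12
print(f"points: {n_points:.1e}")               # ~4.0e12
print(f"size:   {size_bytes / 1e12:.1f} TB")   # ~8 TB, before walls and props add more
```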
I simply do not think their tech is a good fit for a typical game.
That is correct. To be honest, in all the past X years of people posting links to various articles on our forums, I do not remember even once discovering anything new I hadn’t known / read before. That shouldn’t stop anybody from posting, however; it’s good to have a reminder of some articles from time to time.
IIRC, one of the (numerous) arguments against FPS view/combat in Infinity was that the engine, as a primarily flight sim engine, wasn’t good-looking enough for FPS. So you are actually doing better than planned there?
Now I can’t wait to see the I-Novae Engine being used for open-world FPS. (Yes, I know, other stuff like animations would have to be added, but one can dream…)