In a metaphorical sense, yes. I believe that if it’s merged and adapted to what the devs have done so far with procedural generation, it can open up boundless new possibilities for them. Unfortunately they seem to disagree. When did you first come across Infinity? I remember being totally amazed when I first watched those clips that came out 5 (6?) years ago. Unfortunately, the slow dev cycle and other games have started to make the concept of Infinity a bit stale (a bad sign before anything playable has even come out). It’s not the biggest loss in the world, and I’m sure that whatever products they release down the road will be amazing, but here is an opportunity to go a step beyond, blow everyone out of the water, and give the engine a new PR boost.
If the technical hurdles are prohibitive at this point, the devs should at least consider it as a long-term prospect.
This is the fourth or fifth time that I personally have seen the Euclideon stuff posted. Once again, I promise you that the Infinity devs are well aware of what’s happening in the video game engine and 3D/procedural generation world, as well as (IIRC) reading SIGGRAPH papers and implementing new ideas that are revolutionary and applicable (though @INovaeFlavien could correct me).
[quote]
I believe that if it’s merged and adapted to what the devs have done so far with procedural generation, it can open up boundless new possibilities for them.
[/quote]
That’s the problem. The state of their algorithms is likely not prepared to handle the sheer volume of cache traffic that pushing dynamic, procedural generation through their renderer would require. In fact, they’re probably not suitable (not yet) to handle much at all. The rendering pipeline real-time programmers have today allows GPUs (or even CPUs, for that matter) to sanely work on very discrete, naturally vectorized tasks without shredding the GPU’s internal buses. The constraints baked into Euclideon’s algorithms are at once a mathematical miracle and a barrier. The most programmability they can probably allow their pipeline to work with is functional scalar fields, and only after some core improvements. That’s an assumption, but it looks like they will need to make some large changes to their core algorithm (completely revamp it) before it is suitable for games.
Mesh geometry lets pipelines be designed for a high degree of cache coherence without giving up programmability. There are plenty of nooks where we can drop in custom code and perform very extensive operations without execution units thrashing over which code belongs to which fragment; computation gets batched in exactly the way GPUs, many-core CPUs, or even single cores are good at.
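To make that concrete, here is a toy illustration (my own sketch, not anyone’s engine code) of why flat mesh buffers batch well while a pointer-chased point hierarchy resists it:

```python
import numpy as np

# Mesh pipelines stream flat vertex buffers, so custom per-vertex work
# maps onto one wide, cache-friendly, vectorized pass:
vertices = np.random.rand(1_000_000, 3).astype(np.float32)
offset = np.array([0.0, 1.0, 0.0], dtype=np.float32)
displaced = vertices + offset                # coherent, trivially batched

# A hierarchical point structure is traversed by chasing child pointers,
# which scatters memory accesses into tiny, irregular batches:
def traverse(node, out):
    if node["points"] is not None:
        out.append(node["points"] + offset)  # small leaf-sized batches
    for child in node["children"]:
        traverse(child, out)

root = {"points": np.zeros((4, 3), dtype=np.float32), "children": []}
out = []
traverse(root, out)
```

The first pass touches memory linearly; the second hops around the heap, which is exactly the kind of access pattern that starves wide execution units.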
Not to mention we want to ship a game eventually, not get caught in a perpetual R&D cycle. I feel we are getting in a good spot on the art side toolset wise to start producing content and assets at a greater pace. Future proofing our tech is important no doubt about it, but we have to be careful of feature creep, regardless of how cool things seem on the surface.
Euclideon’s technology is all smoke, mirrors, and a ridiculous amount of marketing hyperbole. There are very good reasons nobody else is doing anything similar to what they’re doing. If their Unlimited Detail tech were so revolutionary, I assure you that Unreal Engine 4, CryEngine, etc. would all be moving aggressively in the same direction as Euclideon - they’re not.
If you take a careful look at their demos the first glaring deficiency is that there are no animations whatsoever. Their statement that they’re building a “revolutionary” animation system is pure bullshit. My understanding from public statements they’ve made is that they’re organizing their point cloud info similar to how enterprise database software organizes geospatial information. They make no mention of how much pre-processing is required to do this nor how much disk space it takes up. Clearly this point cloud information is quite difficult to animate or else they would have done so already, as it’s one of the biggest criticisms leveled against them.
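For what it’s worth, the geospatial-database comparison usually means a hierarchical spatial index. Here’s a minimal guess at what that looks like; Euclideon has published nothing concrete, so the layout below is pure speculation on my part:

```python
# Hypothetical point-cloud octree; the node layout and max_points
# threshold are illustrative assumptions, not Euclideon's format.
class OctreeNode:
    def __init__(self, center, half_size):
        self.center, self.half_size = center, half_size
        self.points = []        # (x, y, z, r, g, b) samples
        self.children = None    # eight sub-cubes once this node splits

    def insert(self, point, max_points=64):
        if self.children is None:
            self.points.append(point)
            if len(self.points) > max_points:
                self._split(max_points)
        else:
            self._child_for(point).insert(point, max_points)

    def _split(self, max_points):
        h = self.half_size / 2.0
        self.children = [
            OctreeNode([self.center[i] + (h if (k >> i) & 1 else -h)
                        for i in range(3)], h)
            for k in range(8)
        ]
        for p in self.points:   # push existing samples down a level
            self._child_for(p).insert(p, max_points)
        self.points = []

    def _child_for(self, p):
        return self.children[sum((p[i] >= self.center[i]) << i
                                 for i in range(3))]
```

Every extra level of detail multiplies the node count, which hints at why the pre-processing time and disk footprint they never talk about could be enormous.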
Secondly they are using a laser scanner to scan in everything. This scanner seems to be capable of sampling the diffuse lighting only. Specular reflections (aka high-frequency lighting) are all view-dependent. None of their demos contain any dynamic lights, dynamic shadows, specular reflections, or any reflections of any kind (they try to fake reflections in their first demo). Also, point clouds seem poorly suited to advanced global illumination techniques such as ray tracing, because how do you calculate the intersection point of a surface and a ray when that surface is just a collection of points? You somehow have to turn those points into polygons or similar mathematical primitives just to be able to efficiently calculate accurate surface intersections.
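To make the intersection point concrete: with triangles, the ray test is a handful of dot and cross products, e.g. the standard Möller-Trumbore formulation sketched below (textbook version, not taken from any particular engine). A bag of dimensionless points has no equivalent closed form, which is why you end up splatting or reconstructing a surface first.

```python
import numpy as np

# Moller-Trumbore ray/triangle intersection. Returns the distance t
# along the ray to the hit point, or None on a miss.
def ray_triangle(orig, direction, v0, v1, v2, eps=1e-8):
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:               # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:           # outside the first barycentric bound
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:       # outside the triangle
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None
```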
Simply put, Euclideon’s tech is pure bullshit in any context other than maps, which is why I think they’ve moved so aggressively toward building GIS software. They can produce beautiful static maps and that’s about it at the moment.
I’m not pessimistic. Both techs are fundamentally different. Euclideon’s doesn’t even use the video card. There’s no code that can be reused from ours to theirs and vice versa.
That’s already the case in both techs. They only stream enough level-of-detail points to map a 1:1 point-to-pixel ratio, and we only generate triangles procedurally up to a certain level of geometry based on camera distance to the planet. This “millions” performance ratio still applies.
At the moment we are close to first-person-shooter-like quality at walking ground levels, no more, no less. And I personally do not think Euclideon does better. If you think of a wall that is 1 meter in front of you, viewed in HD 2K, you need a density of one point per millimeter to get quality similar to what you can do with textures on a GPU. If you watch their videos, you should notice that they’re still far from that resolution. Maybe they can do it technically, but on a small area, not an entire level (and I’m not even talking about a planet here).
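Here’s the back-of-envelope version of that millimeter figure; the 90° FOV and 1920 px width are example numbers I picked, not engine settings:

```python
import math

# World-space size covered by a single pixel at a given distance.
# fov_deg and screen_px are illustrative assumptions.
def metres_per_pixel(distance_m, screen_px=1920, fov_deg=90.0):
    view_width = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return view_width / screen_px

# A wall 1 m away: one pixel covers about a millimeter of wall, so a
# 1:1 point-to-pixel ratio needs roughly one point per millimeter.
print(metres_per_pixel(1.0))   # ~0.00104 m
```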
Is any math freak motivated to calculate the surface area of a typical game level (say, one of Battlefield 4’s maps) in square millimeters? I’d be curious to see the result. In any case, that probably amounts to dozens of gigabytes of hard drive space, if not more, even compressed… think of the space requirements for an entire game.
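To give a rough idea, here is a crude estimate where every number (map size, bytes per point, compression ratio) is a guess:

```python
# Back-of-envelope only; all constants below are assumptions.
map_side_m = 2_000                       # assume a 2 km x 2 km map
area_mm2   = (map_side_m * 1_000) ** 2   # flat ground alone
points     = area_mm2                    # one point per square millimeter
raw_bytes  = points * 8                  # say 8 bytes: packed position + color
compressed = raw_bytes / 20              # optimistic 20:1 compression

print(f"{points:.1e} points -> ~{compressed / 1e12:.1f} TB compressed")
# 4.0e+12 points -> ~1.6 TB compressed, before counting walls, props,
# or anything else with vertical surface area
```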
I simply do not think their tech is a good fit for a typical game.
That is correct. To be honest, in all those past X years people have been posting links to various articles on our forums, I do not remember even once discovering anything new I hadn’t known / read before. That shouldn’t stop anybody from posting however; it’s good to have a reminder on some articles from time to time.
IIRC, one of the (numerous) arguments against FPS view/combat in Infinity was that the engine, as a primarily flight sim engine, wasn’t good-looking enough for FPS. So you are actually doing better than planned there?
Now I can’t wait to see the I-Novae Engine being used for open-world FPS. (Yes, I know, other stuff like animations would have to be added, but one can dream…)
The I-Novae engine has NEVER been designed primarily as a flight/space-sim engine. It’s an urban legend simply due to the fact that our major project is a space sim, so people assume we’ve created an engine that can “only” do that.
The I-Novae engine is a general-purpose engine that aims at supporting all sorts of games, including first-person shooters, RPGs, strategy games, you name it.
However you shouldn’t confuse the engine and the game. Infinity, and Battlescape as a consequence, will not have FPS elements in it, like walking on planets. It’s not a matter of engine, it’s a matter of design, content and time/budget restrictions. But maybe one day…
Was just reading through this and cracking up over how quickly it got shot down in flames… again…
This was the thing that grabbed me. Although the devs have already given their feedback on it, I just wanted to say: It’s not that easy.
There simply would not be any point in even trying. Procedural generation - and in particular what the I-Novae engine is capable of - is a powerful enough tool for what they need it for. And to be honest, what we have seen so far already looks fantastic, so I don’t think there is any need whatsoever for “unlimited detail”.
Mainly for this reason.
[quote=“INovaeFlavien, post:21, topic:80”]
It’s an urban legend
[/quote]
Btw, I love the fact your engine already has urban legends attached to it.
I’m not so sure. How easily could “legacy” texture maps be migrated to this new tech? Would another toolset be required to create these textures, and if so, how easy would it be to use? I-Novae has found it hard enough to get community art contributions of sufficient quality without fundamentally changing the way graphics are generated, which would presumably force a switch to a toolset practically no one in the community has experience with. And if I-Novae continues with the plan to commercialise the engine, then implementing something nobody knows how to use, with very limited tooling options, is more likely to detract from the engine’s value, since it would require more investment from any potential customers just to get it working.
No, pixies are the future. The present is stuff that developers can actually use to make their game. Even if Euclideon’s tech could be implemented in the I-Novae Engine, it still wouldn’t be able to make Infinity.
That’s alright, it’s not really going to happen, you’ll be in a spaceship most of the time. You can have seamless planetary transition in this game, I highly doubt anyone will care about the look of an individual speck of soil on the surface.
If those videos are anything to go by, Euclideon’s tech is not production ready. The IN Engine almost is. The space genre is already making a bloody hard comeback with at least 3 or 4 major contenders, most with a multiplayer/pseudo-MMO component, coming out in the next year or so. If using this new tech adds another 4 years to development time then any PR boost that arises will come too late, and Infinity will be far too stale.
There’s also the other, less talked about issue. Most serious people in the gaming industry don’t regard Euclideon that highly at the moment. A similar crowd of people don’t regard I-Novae too highly at the moment either. To put it bluntly (apologies @INovaeFlavien, I love the work you’ve done so far), both companies have promised something potentially amazing but failed so far to deliver. This is what we call “negative PR”. Should these companies then announce they’re collaborating on the same things they’ve so far failed to deliver, do you really expect that the PR will be good?
What was the PR like for Duke Nukem Forever after 6 or so years?
Oh bollocks, I should know better than to reply without reading all the other replies first. They’ve rendered my last two posts virtually useless. Especially given that Discourse specifically makes it easy to draft a reply and grab quotes whilst viewing posts… something it helpfully reminds me of as I’m making this one.
Ray tracing isn’t necessarily an advanced global illumination technique… hah. It’s just more generic and more accurate than some of the most common hacks used in the industry. n-reflection path tracing can achieve any frequency of GI (if you can contain it), whereas most games use a number of GI techniques corresponding to various frequencies.
“None of their demo’s contain any dynamic lights, dynamic shadows, specular reflections, or any reflections of any kind”
That’s false. They have already implemented real-time shadows (without any sample interpolation) and animation. I’ve seen specular reflections in some of their very old demos, which are hard to find (created before they became widely known as UD).
“Secondly they are using a laser scanner to scan in everything. This scanner seems to be capable of sampling the diffuse lighting only.”
Regarding game dev, they’ve used a scanner in only their 2011 demo. The scanned data demonstrated for their geospatial solution is perfectly appropriate. Who is to say their clients can’t use different scanners or work with different tools? They have built some of the tooling necessary to support artists working with mesh tools. I suspect these tools have lacked a number of features (like interpreting animation data), but they’ll probably get to those soon; or they might already have implemented them since their last announcement but aren’t ready to roll them out. Their previous demos which supported these important features, like animation, used tools that Bruce Dell made directly for UD himself.
I’m more concerned about some of the points Flavien made. Regarding their already proven tech for real-time shadows and animation, I suspect the greater issue is not implementation complexity: it’s performance. Their old demos had much lower-resolution content than what they boasted in the 2011 demo, and almost nothing else present within the scene. The reason behind this bottleneck is probably similar to why ray tracing has recently received (a LOT) more favor than radiosity techniques in R&D. I suspect they’re trying to crack these bottlenecks.
Thanks for the replies devs, and everyone else too. I feel a lot more informed now. Although some of you might be undervaluing what UD is capable of doing, you’ve convinced me that merging their point cloud magic with IN is just not feasible. My nerd-dream has been shot down…
You are not the first, nor (I suspect) will you be the last.
Perhaps one day the computers will be capable enough and the software well-designed enough to make unlimited detail properly usable. At that point, they may as well connect it to Oculus Rift, plug all of us in, and call it The Matrix.
This does fall under “The next logical bla bla bla…”
Is the quality of Infinity good enough for Hollywood?
“Maybe sometime but not now” or “We wouldn’t waste our time”
Seeing how much time and money the film industry wastes on reinventing the wheel, I was wondering if you’ve considered that.
Hollywood uses trillions of rays to ray trace their scenes, so no, it’s not. However, if we wanted to ray trace it, we could certainly achieve that level of quality. You could certainly use our tech for Machinima though.