Adaptive Tessellation

maxpontiac
Agreed. This is one of the biggest changes I am looking forward to.

1200 cars available for Photo Mode!

Absolutely. As much as I love to drive, my second love in this "game" is photomode.

If I can use every car in the game, I may lose my family once GT6 is out.
 
I think some of you guys are getting a bit carried away with what this will be able to do on PS3 and GT6.

It's pretty fundamental to the improvements already demonstrated. The wedge of memory it frees up is incredibly valuable on the PS3; for example, that's probably what allowed the use of higher-res shadow maps.

Applied to the whole scene, it has the ability to provide rock-solid framerates, too. Whilst upscaling can't add any more detail (unless you have more detail to draw on, e.g. those detail textures used on the headlamps), it can (and probably will in GT6) smooth over the visible vertices of all cars, which will be great for photomode, and for the Standards outright.

What memory is left over may be used in any way PD see fit, so, in a way, adaptive tessellation can do anything! :P
 

It really just shifts the resources around. It's not free.
 

There is less memory required, but there's also a small overhead in managing the scene, as well as the vertex functions themselves. But vertex setup was already being done on the SPEs, and there is supposedly still processing headroom in the SPEs.

So no, it's not free, as I'm sure it wasn't trivial getting it working, but it has freed up memory with minimal impact elsewhere.
 
It saves memory by only having to load two models for each car, one with the highest poly count and one with the lowest poly count. They can then use the extra memory freed up on other assets or effects.
No, from my understanding this replaces the need for multiple models with different levels of detail. It will be just one model each for the car you're racing, the cars you're racing against, and photomode cars. This replaces the current LOD system.
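To put rough numbers on the memory claim: a sketch with entirely made-up vertex counts and per-vertex sizes (real GT6 figures aren't public), comparing a classic four-level LOD chain against storing only the finest and coarsest meshes:

```python
# Illustrative (invented) numbers: memory for a classic 4-level LOD chain
# versus storing only the top and bottom meshes and tessellating between them.
BYTES_PER_VERTEX = 32  # position + normal + UV, as a rough assumption

lod_chain = [500_000, 120_000, 30_000, 8_000]   # vertices per LOD level
classic = sum(lod_chain) * BYTES_PER_VERTEX

# Adaptive scheme: highest-poly mesh + lowest-poly mesh only;
# intermediate levels are generated on the fly.
adaptive = (lod_chain[0] + lod_chain[-1]) * BYTES_PER_VERTEX

saved = classic - adaptive
print(f"classic:  {classic / 1e6:.1f} MB")
print(f"adaptive: {adaptive / 1e6:.1f} MB")
print(f"saved:    {saved / 1e6:.1f} MB")
```

With these assumed numbers the intermediate LODs account for a few megabytes per car, which on a console with 256 MB of main RAM is not trivial.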
 
Let's say GT6 loaded models that look like this

And then fleshed them out on the fly during gameplay; that would certainly reduce load times... :)
[Image: _-Gran-Turismo-6-PS3-_.jpg]
 
Hi, I haven't understood this post. Will this technique be implemented in GT6, or is it something we're speculating about?
 
Most likely this would be for LOD and reducing memory requirements for multiple levels of detail.
Meaning there could be more 'stages' of detail for the car models, allowing greater polygon efficiency: more polys where needed and fewer where not needed (without experiencing popping between different-detail cars). Previously, RAM requirements would disallow that, and they would have a good model for medium distance and a crap model for far distance; obviously this is wasteful and looks bad.

We know PD have complained about limited PS3 memory.
 
Hi, I haven't understood this post. Will this technique be implemented in GT6, or is it something we're speculating about?

The technique "adaptive tessellation" is confirmed to be implemented in GT6. What exactly it entails for GT6 is speculation, since all we really know is that GT6 has it and that it's a technique for dynamically reducing/increasing mesh detail on the fly instead of resorting to multiple meshes for different LODs.
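As a toy illustration of the difference, here is a Python sketch (names and distance thresholds are invented for the example) contrasting classic distance-banded LOD selection with a continuously varying tessellation factor derived from one base mesh:

```python
import bisect

def discrete_lod(distance, thresholds=(20.0, 60.0, 150.0)):
    """Classic LOD: pick one of several prebuilt meshes by distance band."""
    return bisect.bisect_left(thresholds, distance)  # 0 = finest mesh

def tessellation_factor(distance, near=5.0, far=200.0, max_level=4):
    """Adaptive: a smoothly varying subdivision level from one base mesh."""
    t = min(max((distance - near) / (far - near), 0.0), 1.0)
    return max_level * (1.0 - t)  # fractional levels allow gradual refinement

print(discrete_lod(10.0))          # 0 (finest prebuilt mesh)
print(tessellation_factor(10.0))   # roughly 3.9: nearly full subdivision
```

The discrete version pops between meshes at each threshold; the adaptive version changes detail a little every frame, which is what lets it avoid visible transitions.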
 
Since this thread popped up again, I find it refreshing to spend some time looking at it with a few months experience with GT6.

I was pretty sure that the tessellation implementation would be rather weaksauce, considering that it would have to run from within the game engine rather than in hardware as it does on PS4. Griffith400 and a few others were hoping that it would mean an end to LOD levels of models being swapped in and out based on distance. But especially after learning that nothing was cut down from GT6 but 3D, I had a feeling that this was just a bit too heavy a burden to put on the Cell. And it only took the first race to see LOD shifting taking place and realize I was right. It's not a huge deal to me, but I do wonder if it was worth the work, especially after seeing some of the videos showcasing how it works in GT6.

There is a video somewhere that shows it plainly. As the user froze the game for a photoshoot, he began zooming in on a wheel well. As he did, the faceting around the wheel well began to slowly smooth out. He did another shot of a taillight, and as he zoomed in for a closeup of it, the game engine gradually began adding facets, making it increasingly rounder, until it looked almost facet free. I thought the tessellation would work faster than that, but this confirmed my hunch that PD was really pushing everything they could in the game engine, and Cell was really sweating to juggle all these tasks. And that tessellation wasn't so easy to load onto the engine.

So in GT6, it seems mostly to be a nice feature to give us prettier Photomode cars.
 
Yeah, there's definitely LOD swapping. A bit disappointing, but I'd still argue that Polyphony tackling adaptive tessellation was worth the effort, if only for the possibility that it might be more extensively used in PS4 iterations of Gran Turismo.
 
The LoD swapping is probably explained by the fact that most of the cars carried over from GT5 do not use the tessellation. See here.

Also, there is nothing stopping the processing that controls the degree of subdivision from operating on steps, rather than continuously; that would give the impression of "classic" levels of detail and also reduce the processing overhead in managing the subdivision process (as well as saving time doing that subdividing itself by stretching it out over a few frames only when the levels "pop", instead of every / every-other frame or so, regardless).
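The stepped idea above can be sketched in Python; the `hysteresis` margin is an assumed parameter, added so the level doesn't flicker when the continuous value hovers near a boundary:

```python
def stepped_level(continuous_level, current_step, hysteresis=0.25):
    """Quantise a continuous subdivision level into integer steps,
    only switching when the value moves clearly past a boundary.
    This mimics 'classic' LODs and avoids re-tessellating every frame."""
    if abs(continuous_level - current_step) > 0.5 + hysteresis:
        return round(continuous_level)
    return current_step

level = 2
for raw in (2.1, 2.4, 2.6, 2.8, 3.1):
    level = stepped_level(raw, level)
    print(raw, "->", level)   # holds at 2 until the value clearly passes 2.75
```

The cost of managing subdivision is then only paid on the frames where the step actually changes, which matches the "stretch it out over a few frames when the levels pop" suggestion above.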

The video released by PD shows that the tessellation does indeed operate continuously, but it could still pop to a fixed, lower level when the car is far enough away, effectively turning off tessellation processing for cars that don't justify the expense at that moment.


Finally, that photomode demonstration was controlled in some way to prevent the game from updating the mesh subdivision despite changes to the camera (i.e. via a "hack" of some kind). Normally, you can't see the transitions at all, and that was made clear when the video was first shown. EDIT: here, it seems the video has been taken down, but screen-grabs remain.
 
That's kind of a bummer. I mean, it's great that tessellation is used more than we thought, at least on certain cars... But it sucks that all those cars modeled for GT5 presumably need to be remodeled to take advantage of tessellation.
 

I'm not saying it would be an instant process, but I think the GT5 models contain enough information that they can be adapted into subdivision models and exported with whatever information the real time system needs, without having to remodel anything from scratch.
 
Not from scratch, no, but it'd still be putting a decent load of work on the modelers... modelers who could be doing better things, like making brand new models for the game.

So depending on how things are run at Polyphony, that could mean either a handful less cars than we might've gotten otherwise, a release date that is later than it could've been otherwise, or some degree of compromise in between the two.
 
If they implement tessellation properly on PS4, it should hardly be any work at all.
 
I'm pretty sure models have to be specially prepared to take advantage of adaptive tessellation. So no, it's not gonna be like they're gonna just be able to turn on "proper" tessellation simply because the PS4 has more processing power... the modellers are still gonna have to go through and prepare all of GT5's premium car models individually to be able to utilize adaptive tessellation.
 
They need special preparation now, because the PS3 doesn't actually support tessellation. With proper hardware support, you can do tessellation without any preparation on the models at all (I know this as a game development student with quite a lot of knowledge in the field of graphics).
 

Maybe you can explain something I've been wondering about: if some tyres were textured using a simple square image of circular text going around the wheel, and other tyres used a linear strip of text that used the UV coordinates to curve into a circle, wouldn't you need to specify which method the tessellation system needs to use when it generates the extra UVs?
 
Good point, this could give a slight distortion if you use the same method in both cases. This is why normally you are consistent in the way you model things. If PD used both methods interchangeably, then 1) they are stupid and unorganized, and 2) they do have to do some work. They could also just ignore the problem and live with the (very minor) distortion.
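The distortion being discussed can be demonstrated numerically. This Python sketch (radii, angles and the `polar_uv` / `strip_uv` mappings are invented for illustration) linearly interpolates UVs at an edge midpoint, the way a naive tessellator would, and compares against the UV the mapping actually wants:

```python
import math

def polar_uv(angle):
    """Circular text baked into a square texture: UV lies on a circle."""
    return (0.5 + 0.4 * math.cos(angle), 0.5 + 0.4 * math.sin(angle))

def strip_uv(angle):
    """Linear strip of text: U is just the unwrapped angle."""
    return (angle / (2 * math.pi), 0.5)

a0, a1 = 0.0, math.pi / 4       # two neighbouring rim vertices
mid = (a0 + a1) / 2             # where tessellation inserts a new vertex

for name, uv in (("polar", polar_uv), ("strip", strip_uv)):
    u0, v0 = uv(a0)
    u1, v1 = uv(a1)
    lerped = ((u0 + u1) / 2, (v0 + v1) / 2)   # what naive interpolation gives
    true = uv(mid)                            # what the texture actually needs
    print(f"{name}: UV error at midpoint = {math.dist(lerped, true):.4f}")
```

With these numbers the strip mapping interpolates exactly (U is linear in angle), while the circular mapping picks up a small error, which is the kind of distortion the question is getting at.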
 
Having looked into it a wee bit, it seems you need to store the full-detail mesh and the coarsest detail mesh you want the game to show; then you need to supervise some kind of hierarchy / link between the two, per vertex. Obviously, that process would be automated, but the end-point needs to be supervised, i.e. tested. It would all be stored in the mesh file, with the UV stuff as well, which is literally just a mapping the graphics engine will simply obey according to the instructions in the mesh file (which is processed with the vertex tessellation at the same time, presumably; I've seen demos of it working a few years back).

Again, this all comes under mesh topology, which is underpinned by graph theory (or its analogues), so any real detail on how this works is full of systems jargon, which is basically set theory: i.e. mathematics. Here's a good paper, although the algorithm described is for triangles only; I think PD use a mixture of triangles and quads in their meshes, mostly quads. I expect the extension is trivial, given the mathematical basis (assuming it is understood... :boggled:).

So what that means is, the existing meshes should just need repackaging with that hierarchy / linking in place. That can be automated, but will need manual testing. It's sort of like the difference between saving a bitmap as a jpeg or as a gif; either requires attention to optimise, and the specific optimisations will differ between techniques.
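A toy version of that precompute step, with invented vertices and faces: one half-edge collapse merges a vertex into a neighbour, and the recorded "split" is enough to reverse it, which is the essence of repackaging a mesh into coarse-mesh-plus-hierarchy form:

```python
# Toy sketch of progressive-mesh precomputation: collapse an edge
# (merging vertex v into u) and record the inverse "split", so the
# coarse mesh plus the split records can rebuild the detailed mesh.
# Vertex positions and faces are illustrative, not from any real asset.

def collapse(verts, faces, u, v):
    """Half-edge collapse: remove v, redirect its faces to u."""
    faces = [tuple(u if i == v else i for i in f) for f in faces]
    faces = [f for f in faces if len(set(f)) == 3]  # drop degenerate faces
    record = (u, v, verts[v])  # enough to invert the collapse later
    return faces, record

verts = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
faces = [(0, 1, 2), (0, 2, 3)]

faces, rec = collapse(verts, faces, u=1, v=2)  # merge vertex 2 into 1
print(faces)  # [(0, 1, 3)]: the quad collapsed to one triangle
print(rec)    # (1, 2, (1, 1)): split record to restore vertex 2
```

Running such collapses repeatedly (in an order chosen by an error metric) and storing the records is the automatable part; checking the result still looks right is the part that needs eyes on it.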
 
They need special preparation now, because the PS3 doesn't actually support tessellation. With proper hardware support, you can do tessellation without any preparation on the models at all (I know this as a game development student with quite a lot of knowledge in the field of graphics).

This depends on the method they are using, the asset, and how it will be seen.

Most hardware versions I have seen are basically improved displacement mapping ( real polygons = better detail ), or they just "smooth" stuff with more polys. And anything you can do in hardware can be done in software; so if they could "automate it", they already could have ( it would only hurt speed ).

Regardless, the model would need to be made with the tessellation in mind.

Maybe you can explain something I've been wondering about: if some tyres were textured using a simple square image of circular text going around the wheel, and other tyres used a linear strip of text that used the UV coordinates to curve into a circle, wouldn't you need to specify which method the tessellation system needs to use when it generates the extra UVs?

The tessellation system they use would not need to vary based on UV mapping style. It would either work for both, or fail for both. UV stretching is UV stretching.

Having looked into it a wee bit, it seems you need to store the full-detail mesh and the coarsest detail mesh you want the game to show; then you need to supervise some kind of hierarchy / link between the two, per vertex. Obviously, that process would be automated, but the end-point needs to be supervised, i.e. tested. It would all be stored in the mesh file, with the UV stuff as well, which is literally just a mapping the graphics engine will simply obey according to the instructions in the mesh file (which is processed with the vertex tessellation at the same time, presumably; I've seen demos of it working a few years back).

Again, this all comes under mesh topology, which is underpinned by graph theory (or its analogues), so any real detail on how this works is full of systems jargon, which is basically set theory: i.e. mathematics. Here's a good paper, although the algorithm described is for triangles only; I think PD use a mixture of triangles and quads in their meshes, mostly quads. I expect the extension is trivial, given the mathematic basis (assuming it is understood... :boggled:).

So what that means is, the existing meshes should just need repackaging with that hierarchy / linking in place. That can be automated, but will need manual testing. It's sort of like the difference between saving a bitmap as a jpeg or as a gif; either requires attention to optimise, and the specific optimisations will differ between techniques.

There are many ways to do tessellation. You can start with a low-res model and "add" detail based on some sort of map ( think a displacement map, or a height-field map that adds polygons ). Or you can have a high-res model and reduce the polycount dynamically. Or you can just subdivide the thing to make it look smoother near the camera ( I think they did this, or some hybrid of it ).
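The "just subdivide it" flavour can be sketched as one level of midpoint subdivision, where each triangle becomes four (this omits any smoothing of the new vertices, so it only illustrates the face-count growth):

```python
# One level of midpoint (1-to-4) subdivision of a triangle mesh: every
# triangle is split into four by inserting a vertex at each edge midpoint.
# Smoothing of the new vertices is left out for brevity.

def midpoint(a, b):
    return tuple((x + y) / 2 for x, y in zip(a, b))

def subdivide(tris):
    out = []
    for a, b, c in tris:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

tri = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))]
once = subdivide(tri)
twice = subdivide(once)
print(len(once), len(twice))  # 4 16
```

The 4x growth per level is also why tessellating everything near the camera is expensive: two levels already multiplies the face count by sixteen.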

Also, every GPU uses triangles; it's a lot faster mathematically. So they are using triangles as well. The meshes they show in the "pictures" are not converted to game meshes yet ( and some of them are WIP shots ).

It's also not really a hierarchy ( that is more how traditional LODs work ), but more of a mathematical formula.

I used to work in this industry ( freelance, but I did get a few full-time offers ), and was developing my own software: a never-finished voxel-based rendering engine, and a game-asset texturing tool, also never finished.
 

I'm pretty sure GPUs can deal with quads; that is you can define and render quads through a graphics API, but obviously I don't know what the API / GPU then does with that information. It's perfectly possible it "translates" it to triangles somehow, so I'll take your point. Besides, I read a bit more and the method I linked to applies to general meshes, as suspected.

PD are clearly using a hybrid subdivision and simplification system; i.e. the mesh resolution scaling is bidirectional from the actual "default" mesh. The subdivision process (one of many tessellation approaches) is pretty trivial to understand, and most people have focused on that alone in the discussion of PD's implementation of "tessellation". What people generally miss is the "adaptive / progressive mesh" aspect they proudly demo'd in that trailer and in screen grabs on their website.

The paper I linked to describes how a progressive mesh works (for one particular implementation, but it describes the challenges overall), with extra complication in the form of view-dependency, which is probably a good idea these days. There is a later paper that describes improvements in the dynamic process (now using geomorphs to smooth transitions), specifically for terrain (2D, special case), and an even later one that deals with parallelisation of the task (there is a lot of cross-dependency in vertex, edge and face lists that make them difficult to modify in parallel), crucially without tessellation hardware.

That latter paper describes the additional data required over a traditional "indexed triangle list". That extra data consists of a precomputed vertex hierarchy to actually facilitate the core processes (vertex splitting and edge collapsing) involved in maintaining a progressive mesh (this part is crucial to comprehend). It is precomputed by half-collapsing edges in the default mesh and reordering vertex indices to produce new faces; the "hierarchy" is a tree / "forest" structure storing which vertex begets which vertices in the next "level", forming which faces, for each collapse (naturally the process is reversible). That hierarchy informs which splits and collapses are "legal", in terms of maintaining visual quality and mesh integrity, for the current state of the progressive mesh.
For this reason, they also store a "vertex state" texture (whether a vertex is "collapsed" or "split", etc.) at run time, to allow the splitting and collapsing to run simultaneously in parallel (via the legality checks), collision free. They also double-buffer the vertex list for separate rendering / update duties, stretching the process out over several frames if necessary. Remember: a hierarchy is a mathematical construct; that is, the hierarchy itself is formally defined in those papers in terms of set theory.

Other approaches forgo that requirement for a pre-computed vertex hierarchy (e.g. vertex clustering), but yield poorer results in the simplified meshes and have different run-time requirements also. Thinking back to PD's video, it seems that the refinement is performed using morphs, which hides the splits and collapses somewhat; however, the hierarchy can clearly be seen, as collapses and splits won't occur until there is the correct "environment" around e.g. a given face to do so; that results in "islands" of coarse faces in fine-faced surroundings (for comparison's sake, watch any of the videos for the second paper I linked to: the terrain one).
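The geomorph idea mentioned above can be sketched as a simple per-vertex blend (positions are made up; a real engine would drive `t` from frame time or camera distance):

```python
# Geomorph sketch: instead of popping a vertex split in one frame, the
# new vertex starts at its parent's position and eases towards its true
# position over several frames. Purely illustrative values.

def geomorph(parent_pos, true_pos, t):
    """Linearly blend a split vertex from its parent towards its target.
    t goes 0 -> 1 over the transition (e.g. a handful of frames)."""
    return tuple(p + (q - p) * t for p, q in zip(parent_pos, true_pos))

parent = (1.0, 0.0, 0.0)
target = (1.2, 0.4, 0.0)
for frame, t in enumerate((0.0, 0.25, 0.5, 0.75, 1.0)):
    print(frame, geomorph(parent, target, t))
```

At t = 0 the refined mesh is geometrically identical to the coarse one, so the split itself is invisible; only the gradual motion afterwards reveals it, which fits what the photomode video showed.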


So my point was that it should just be a case of running the precomputation on the existing "traditional" meshes (the GT5 carryovers) to store that additional data. A mesh is a mesh, converting between storage formats should be trivial, and adding extra information about its structure equally so. Plus, all the car models in GT5 were made with the PS4 in mind. I suspect the "tessellation" pipeline would be slightly different on PS4 due to the dedicated tessellation hardware, but there is still the requirement for the progressive mesh processing (which would be handled by stream processors). Obviously, the PS3 does it all on the SPUs.
 