Jack the Anarch
First time out of the vault
I don't know how well versed you gamers are in photorealistic image synthesis, but real-time raytracing has long been a topic of academic research. With the soon-to-arrive (2007) quad-core processors (Kentsfield), such prestige might slowly begin to be available to desktop gaming rigs around the world. Before continuing, read the following article to get a better idea of what I'm talking about.
For the first time, gamers could look at a synthetic world with three degrees of movement at a quality level hitherto restricted to 2D game engines only. No more imperfect curvatures, limited view distances, faked reflections and such.
A video portraying some of what might be available in raytraced games can be seen here
These guys from Saarbruecken have coded both a raytraced Quake 3 engine and an entirely new game of their own, 'Oasen', featuring a scene of 25 million polygons rendered at over 25 fps.
Here are a few more videos of what their engine can do in realtime:
http://www.intrace.de/images/gallery/twinbeetle/videos/Twinbeetle.avi
ftp://graphics.cs.uni-sb.de/boeing777_hq.avi (117 MB, large download)
The last video shows real-time rendering (1-3 fps) of a massive 350-million-polygon Boeing 777 model, weighing in at 30-60 GB, on a single dual-CPU 1.8 GHz AMD Opteron machine at 640x480 resolution.
The advantages of raytracing are numerous, the most important being its output sensitivity to geometric load: performance is limited more by the desired resolution, realism level and antialiasing than by the geometric complexity of the scene being rendered. To be precise, the time it takes an optimized raytracer to render a scene can be described as O(log N), whereas current hardware-assisted engines can do no better than O(N), which becomes a huge handicap as scenes get bigger and bigger. And they most certainly are. (O(x) is the Big-O notation used to give the asymptotic complexity of an algorithm as a function of the variable x.) Thus, state-of-the-art graphics hardware experiences a performance drop linear in the increase of geometric detail: double the scene complexity and you lose 50% of the performance, whereas with raytracing you could increase the geometry 40-fold and see only about a 10% drop, or even less if the scene was more complex to begin with.
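To make the scaling argument concrete, here is a back-of-the-envelope sketch (not a real renderer, and the cost models are deliberately simplified): per-frame cost is modeled as proportional to N for a rasterizer, which must process every triangle, and to log2(N) for a raytracer descending a balanced spatial acceleration structure such as a kd-tree.

```python
import math

def rasterizer_cost(n_triangles):
    # O(N): every triangle must be transformed and scan-converted
    return float(n_triangles)

def raytracer_cost(n_triangles):
    # O(log N): each ray descends a balanced spatial hierarchy (e.g. a kd-tree)
    return math.log2(n_triangles)

base = 1_000_000  # hypothetical one-million-triangle baseline scene
for factor in (2, 40):
    n = base * factor
    raster_slowdown = rasterizer_cost(n) / rasterizer_cost(base)
    rt_slowdown = raytracer_cost(n) / raytracer_cost(base)
    print(f"{factor:>3}x geometry: rasterizer {raster_slowdown:.1f}x slower, "
          f"raytracer {rt_slowdown:.2f}x slower")
```

Under this toy model, 40 times more geometry costs the rasterizer 40x, while the raytracer slows down by only a quarter or so, in the same ballpark as the figures above; the exact percentage depends on the baseline scene size.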
Besides the performance advantage, real-time raytracing gives you considerably more realistic (read: beautiful) renders than current state-of-the-art graphics hardware. I know I'm in peril of boring even the more attentive readers, but I'll show you just how much more realistic raytracing can get.
Imagine that there are only two kinds of light scattering at surfaces: perfectly diffuse (like ideal matte paint, designated 'D') and perfectly specular (like an ideal mirror, designated 'S'). Let there be a light source ('L') and a receiving camera lens or eye ('E'). Now, all the light paths plain raytracing can simulate are described by the following regular expression: LD?S*E, where the asterisk denotes that there can be arbitrarily many specular reflections (zero, one, five, etc.) and the '?' means the diffuse scattering either happens exactly once or not at all. Once a diffuse surface has been hit from the eye (rays are traced in reverse order, hence raytracing), no more specular reflections can be simulated. This is one major drawback of simple raytracing, but it is still loads better than the LD?S?E light paths that current state-of-the-art graphics hardware is capable of simulating. What you see in games without environment mapping is just LD?E, and even with environment mapping on you get a faked specular reflection in most cases.
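Since this light-path notation really is a regular language, the two classes above can be checked with an ordinary regex engine. A small illustrative sketch (the sample path strings are made up for demonstration):

```python
import re

# Path grammar from the text: L = light source, D = diffuse bounce,
# S = specular bounce, E = eye/camera.
RAYTRACER = re.compile(r"LD?S*E")    # classic raytracing: any number of mirrors
RASTERIZER = re.compile(r"LD?S?E")   # hardware model: at most one specular bounce

for path in ("LDE", "LSE", "LDSE", "LDSSSE", "LSDE"):
    rt = bool(RAYTRACER.fullmatch(path))
    hw = bool(RASTERIZER.fullmatch(path))
    print(f"{path:7} raytracer={rt}  hardware={hw}")
```

LDSSSE (a matte surface seen through a chain of mirrors) matches the raytracer's grammar but not the hardware's, while LSDE (light bouncing off a mirror onto a matte surface, i.e. a caustic) matches neither, which is exactly the "no specular after the diffuse hit" limitation described above.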
How would Fallout 3 look if raytraced? Awesome!!
Will Fallout 3 be raytraced? No!!
Is that sad? Yes.
Anyhow, thanks for reading through. Comments are welcome.
Cheers