Crysis video

The jungle levels will be the first quarter of the game and the rest of the time you'll be fighting aliens in claustrophobic corridors. The aliens are the new Trigens, and I fully expect people to quit as soon as they hit the alien levels and replay the jungle ones.
 
calculon000 said:
When you say "impossible", you really mean "wait 7 years", right?
For radiosity? 15 years minimum is my estimate. *Ray tracing* a single scene takes minutes on today's systems, and radiosity is way more demanding than ray tracing.
 
Well, if we look at the difference between Doom 3, which came out in 2004, and Fallout 1, which came out in 1997, could you say that maybe it would happen in 7 years, given Crysis as a benchmark?
 
calculon000 said:
Well, if we look at the difference between Doom 3, which came out in 2004, and Fallout 1, which came out in 1997, could you say that maybe it would happen in 7 years, given Crysis as a benchmark?
It would make more sense to compare Doom III to Quake.

With that comparison fully in mind, I strongly doubt it will happen in the next seven years. Analytical models such as radiosity are extremely resource-intensive and today's graphics hardware isn't suited for them (and we all know it's impossible to do real-time 3D graphics without hardware acceleration). Besides, there is plenty of room to further improve empirical lighting and shading models (adding hardware support for NURBS would be a good start), so it would be senseless to rush a transition to analytical models.

Interesting fact - some guy on gamedev.net claims to have invented "real-time radiosity" in 1998. I'm highly skeptical, mostly because no one seems to have bothered to actually implement it and the guy is apparently too incompetent to do it himself. Besides, I don't trust a computer graphics article that doesn't contain equations.

http://www.gamedev.net/reference/articles/article918.asp
http://www.gamedev.net/reference/articles/article1075.asp

A more realistic approach is presented in some works that outline attempts to do traditional radiosity on the GPU. It's still largely useless for real-time graphics, but it's a good start.

http://www.cs.unc.edu/~coombe/research/radiosity/

The main challenge is getting new lighting and shading models to work in graphics hardware, and said hardware may need to undergo extensive redesign to accommodate them. For example, hardware lights would be useless for radiosity because "light sources" don't actually exist there in the classic sense (they are just polygons that radiate their own energy). The stencil buffer would no longer be needed either, because the radiosity method already incorporates shadows as part of the basic algorithm. And scenes rendered with radiosity have a different geometric composition, with objects subdivided into arrays of adaptively sized (but generally extremely small) patches, etc.
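For anyone curious what "polygons that radiate their own energy" boils down to, here's a toy sketch of the classic radiosity equation B = E + rho*F*B solved by plain Jacobi iteration. The three patches and their form factors below are invented for illustration, not computed from any real geometry:

```python
# Toy Jacobi iteration of the radiosity equation B = E + rho * (F @ B)
# for three hypothetical patches. F[i][j] is the (made-up) fraction of
# patch j's radiated energy that reaches patch i.
E = [1.0, 0.0, 0.0]          # emitted energy: patch 0 is the "light source"
rho = [0.5, 0.8, 0.3]        # diffuse reflectivity of each patch
F = [[0.0, 0.4, 0.3],
     [0.4, 0.0, 0.3],        # rows sum to less than 1, as in an
     [0.3, 0.3, 0.0]]        # open environment where energy escapes

B = E[:]                     # initial guess: radiosity = emission only
for _ in range(50):          # iterate until the solution settles
    B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(3))
         for i in range(3)]

# After convergence, patches 1 and 2 are lit purely by bounced light,
# and patch 0 even gets some of its own energy reflected back at it.
print(B)
```

Now imagine doing that for tens of thousands of patches per frame, with form factors that have to be recomputed whenever geometry moves, and you see why current hardware isn't built for it.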
 
Rattus Rattus said:
It would make more sense to compare Doom III to Quake.

With that comparison fully in mind, I strongly doubt it will happen in the next seven years. Analytical models such as radiosity are extremely resource-intensive and today's graphics hardware isn't suited for them (and we all know it's impossible to do real-time 3D graphics without hardware acceleration).
No we don't. In fact, that's patently false.
It is, however, impossible to do real-time 3D graphics at the level currently achieved with hardware acceleration.
 
Sander said:
No we don't. In fact, that's patently false.
Really? Ever try writing a software renderer with support for Phong shading, multiple light sources, shadow volumes and environmental bump-mapping and getting it to run real-time on a state-of-the-art system?

Nope, didn't think so.

To be fair, what you say was true ten years ago. It isn't anymore. And as far as I can see, it never will be. Your average GPU is an order of magnitude more powerful than the best CPU and *still* barely manages to cope with all the graphical bells and whistles of "next-gen" games. Something damn *miraculous* would have to happen to change that.
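Just to put some numbers on "bells and whistles": this is roughly the arithmetic the standard Phong model demands per pixel, per light. A minimal Python sketch of the textbook formula (not anyone's actual renderer code, and the coefficients are made up):

```python
import math

def normalize(v):
    # Scale a 3-vector to unit length.
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def phong(normal, light_dir, view_dir, shininess=32.0):
    # Classic Phong: ambient + diffuse + specular for one white light.
    # Ambient/diffuse/specular weights (0.1/0.6/0.3) are arbitrary.
    n = normalize(normal)
    l = normalize(light_dir)   # surface-to-light direction
    v = normalize(view_dir)    # surface-to-eye direction
    ndotl = sum(a * b for a, b in zip(n, l))
    diff = max(0.0, ndotl)
    # Reflect the light direction about the normal: r = 2(n.l)n - l
    r = tuple(2.0 * ndotl * a - b for a, b in zip(n, l))
    spec = max(0.0, sum(a * b for a, b in zip(r, v))) ** shininess
    return 0.1 + 0.6 * diff + 0.3 * spec

# Light and viewer directly above the surface: full intensity.
print(phong((0, 0, 1), (0, 0, 1), (0, 0, 1)))
```

A square root plus a handful of dot products and a power function, multiplied by every pixel on screen, every light, every frame, and that's before shadow volumes and bump-mapping enter the picture. That's the workload a pure software renderer has to eat.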
 
Rattus Rattus said:
Really? Ever try writing a software renderer with support for Phong shading, multiple light sources, shadow volumes and environmental bump-mapping and getting it to run real-time on a state-of-the-art system?

Nope, didn't think so.
Is your reading so completely poor? I clearly used a second sentence that said that it couldn't be done at the level it is being done with hardware acceleration.
Then you come in, ignore that second sentence and say that I'm wrong because it can't be done at the level it is now being done with hardware acceleration?
Fuck that, Rat. You've been doing that more frequently recently, and it's getting really, really annoying. So stop it, okay?

To be fair, what you say was true ten years ago. It isn't anymore. And as far as I can see, it never will be. Your average GPU is an order of magnitude more powerful than the best CPU and *still* barely manages to cope with all the graphical bells and whistles of "next-gen" games. Something damn *miraculous* would have to happen to change that.
Such as a move *away* from 'more polygons=UBAR!' and towards functions, for instance.
 
Sander said:
Is your reading so completely poor? I clearly used a second sentence that said that it couldn't be done at the level it is being done with hardware acceleration.
Then you come in, ignore that second sentence and say that I'm wrong because it can't be done at the level it is now being done with hardware acceleration?
Fuck that, Rat. You've been doing that more frequently recently, and it's getting really, really annoying. So stop it, okay?
I have devised a revolutionary new method of analysing text by simply reading its first part, expressing its content in first-order predicate logic and using the thusly created predicates to extrapolate the meaning of the entire text.

It still has a few minor faults.

Sander said:
Such as a move *away* from 'more polygons=UBAR!' and towards functions, for instance.
A good point. Procedural synthesis should streamline tasks like generating terrain, vegetation and even textures with (pseudo)random patterns, though it may require significant hardware advances before it becomes fully feasible on the PC. But as procedural content catches on, a fast GPU will be just as important as it is today (if not more so), because all those procedurally generated primitives need to be processed somewhere. This, of course, necessitates a good bus with lots of bandwidth. PCI-E's 10 GB/s is very modest compared to 35 GB/s on the PS3, and I suspect it might become a bottleneck in the near future. An alternative solution might be to move a CPU core or two onto the graphics card and execute the procedures there. It would certainly give PCI-E more breathing room.
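To show what "functions instead of polygons" looks like in the simplest possible case, here's a toy 1-D midpoint-displacement sketch (all parameters invented for illustration). Two stored control points expand at runtime into a 65-point terrain profile; the only data you ship is the endpoints and a seed:

```python
import random

def midpoint_displace(heights, roughness=0.5, depth=6, seed=1):
    # 1-D midpoint displacement: repeatedly insert midpoints offset by
    # a shrinking random amount. A cheap stand-in for synthesising a
    # terrain profile instead of storing every vertex on disk.
    rng = random.Random(seed)   # fixed seed => same terrain every run
    pts = list(heights)
    amp = 1.0                   # current displacement amplitude
    for _ in range(depth):
        nxt = []
        for a, b in zip(pts, pts[1:]):
            nxt.append(a)
            nxt.append((a + b) / 2.0 + rng.uniform(-amp, amp))
        nxt.append(pts[-1])
        pts = nxt
        amp *= roughness        # finer detail gets smaller bumps
    return pts

# Two control points expand into 2**6 segments = 65 vertices.
profile = midpoint_displace([0.0, 0.0])
```

Same idea in 2-D gives you heightmaps, and with the right basis functions, bark, clouds and marble. The catch, as said above, is that every one of those synthesised vertices still has to be transformed and rasterised somewhere.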
 