duc said:
Out of interest, what do you mean when you say a 2d POV interpreter? A 2d engine like Fallout, or something completely different, for example POV-Ray?
It is, for all intents and purposes, a 2d rendered pane built from what the engine processes out of objects both calculated and designed, with a physics engine entwined with it.
It is already quite easy to make a conventional 3d object and decorate it purely with code, with no art resources needing to be included. Once the object is projected and the pattern for the object is overlaid (here is the difference), the data for what the viewer sees is put into a rendered pane. It takes early POV methods (instead of the stupid fog-of-war crap) and uses them to determine what should be seen where, and all sorts of effects could be filtered in based on location.
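In the crudest possible terms, something like this toy line-of-sight pass (a minimal sketch to pin the idea down; the grid world and every name in it are invented for illustration, not any real engine's API):

    import math

    GRID = [
        "..........",
        "....##....",
        "....##....",
        "..........",
        "..........",
    ]

    def cast_ray(ox, oy, angle, max_dist=12.0, step=0.1):
        """March a ray out from the viewer; stop at the first opaque cell."""
        dx, dy = math.cos(angle), math.sin(angle)
        d = 0.0
        while d < max_dist:
            x, y = int(ox + dx * d), int(oy + dy * d)
            if 0 <= y < len(GRID) and 0 <= x < len(GRID[0]):
                if GRID[y][x] == "#":
                    return (x, y)   # the first thing the viewer actually sees
            d += step
        return None

    def render_pane(ox, oy, fov=math.pi / 2, rays=40):
        """Build the 2d pane: only cells a ray actually reaches get drawn."""
        pane = [[" "] * len(GRID[0]) for _ in GRID]
        for i in range(rays):
            a = -fov / 2 + fov * i / (rays - 1)
            hit = cast_ray(ox, oy, a)
            if hit:
                x, y = hit
                pane[y][x] = "#"    # visible surface goes into the pane
        pane[int(oy)][int(ox)] = "@"    # the viewer
        return "\n".join("".join(row) for row in pane)

    print(render_pane(1.0, 3.0))

Effects by location would then just be filters applied to cells of the pane after this pass.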
Once certain objects are rendered, they could be bound to a given pane and cached in memory (or a temporary file), freeing the engine to work on the moving actors/objects. Some things it would process and then re-touch only occasionally, like far-away background objects, which would be diffused into a better-looking distant background instead of a sudden loss of object detail plus some fog, or what almost every "3d engine" uses now for the ultimate background: a blurry pic.
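The caching might look roughly like this (a sketch under invented names; the layer split and refresh intervals are my assumptions, not a spec):

    import time

    class PaneLayer:
        def __init__(self, name, render_fn, refresh_every):
            self.name = name
            self.render_fn = render_fn          # whatever draws this layer
            self.refresh_every = refresh_every  # seconds between re-renders; 0 = every frame
            self.cached = None
            self.last_render = 0.0

        def get(self, now):
            stale = (self.cached is None
                     or self.refresh_every == 0
                     or now - self.last_render >= self.refresh_every)
            if stale:
                self.cached = self.render_fn()  # re-render and hold in memory
                self.last_render = now
            return self.cached

    # Far background is re-touched rarely; actors re-render every frame.
    layers = [
        PaneLayer("far_background", lambda: "diffused hills + haze", 5.0),
        PaneLayer("static_scenery", lambda: "buildings, terrain", 1.0),
        PaneLayer("actors",         lambda: "every moving thing", 0),
    ]

    def compose_frame():
        now = time.monotonic()
        return [layer.get(now) for layer in layers]  # composited back to front

    print(compose_frame())

The engine's frame budget then goes almost entirely to the actors layer, since the cached panes cost nothing until their refresh comes due.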
duc said:
As far as I am aware, programs from Blender through 3ds Max to the high-end software used to create computer animations like Toy Story use clever maths (e.g. splines) to approximate a curved surface. Maybe I missed the point, but isn't it all still just triangles and geometry? Or are you saying that we should be describing the surface of an object rather than building it out of polygons like Lego?
Exactly! At the core is the skeletal structure, then the mass of whatever the object is made of, and then the portrayed surface (see the sketch below). When all you need is to present what the viewer sees, with the rest managed by the engine, aiming to juggle prettier polygons seems rather stupid when there are still plenty of ways 2d development can improve even upon conventional 3d. Writing an engine around polygon behavior seems just as stupid (blocky, generic animation; almost unnaturally-angled rebounds), as does writing polygon engines around a truer physics engine (clipping errors; sometimes simpler animation/interaction; many aspects that can seem "cheap" if the contact physics aren't exhaustingly detailed).
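To make "describe the surface, don't build it from polygons" concrete, here is a toy sketch: skeleton joints act as spline control points, and the visible outline is just the curve evaluated wherever the pane needs it. Every name here is invented for illustration.

    def cubic_bezier(p0, p1, p2, p3, t):
        """Bernstein-form evaluation of a cubic Bezier at t in [0, 1]."""
        u = 1.0 - t
        return tuple(
            u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
            for a, b, c, d in zip(p0, p1, p2, p3)
        )

    class LimbObject:
        def __init__(self, joints, mass, thickness):
            self.joints = joints        # skeletal structure (spline control points)
            self.mass = mass            # what the physics engine pushes around
            self.thickness = thickness  # callable(t) -> surface radius at t

        def outline(self, samples=8):
            """The portrayed surface: centerline point + radius, sampled as
            finely as the current pane resolution wants."""
            pts = []
            for i in range(samples + 1):
                t = i / samples
                x, y = cubic_bezier(*self.joints, t)
                pts.append((x, y, self.thickness(t)))
            return pts

    # Changing the look means changing the description, not any art asset:
    arm = LimbObject(
        joints=[(0, 0), (1, 1), (2, 1), (3, 0)],
        mass=4.0,
        thickness=lambda t: 0.25 - 0.1 * t,  # tapers toward the wrist
    )
    for x, y, r in arm.outline():
        print(f"({x:.2f}, {y:.2f}) radius {r:.2f}")

Note there's no fixed triangle count anywhere: four joints describe the whole limb, and the pane samples it as coarsely or finely as the view demands.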
I could even see that being applied to a space sci-fi game, where the clothing would have to move differently depending on the environment and the type of cloth. Cloth behavior under different gravity, or even in different atmospheres, is still a myth for conventional 3d (short of a simplistic draping algorithm or pre-animating it), yet it would simply be an environmental toggle for the engine to support if you're rendering a view pane.
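The toggle itself could be this small (a toy sketch: free cloth points only, springs omitted to keep it short; the Environment fields and constants are my assumptions):

    from dataclasses import dataclass

    @dataclass
    class Environment:
        gravity: float      # m/s^2: 9.81 Earth, 1.62 Moon, 0.0 deep space
        air_density: float  # kg/m^3: ~1.2 Earth at sea level, ~0 in vacuum

    def step_points(positions, velocities, env, dt=1.0 / 60, drag_coeff=0.5):
        """One integration step: gravity pulls, the atmosphere drags."""
        out_p, out_v = [], []
        for (x, y), (vx, vy) in zip(positions, velocities):
            ax = -drag_coeff * env.air_density * vx
            ay = -env.gravity - drag_coeff * env.air_density * vy
            vx, vy = vx + ax * dt, vy + ay * dt
            out_p.append((x + vx * dt, y + vy * dt))
            out_v.append((vx, vy))
        return out_p, out_v

    # The same cloth points under three environment toggles:
    pts = [(0.0, 2.0), (0.5, 2.0)]
    vels = [(1.0, 0.0), (1.0, 0.0)]
    for env in (Environment(9.81, 1.2),   # Earth: falls, damped by air
                Environment(1.62, 0.0),   # airless moon: slow fall, no drag
                Environment(0.0, 0.0)):   # deep space: the cloth just drifts
        print(env, step_points(pts, vels, env)[0])

The cloth code never changes between planets; only the Environment it is handed does.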
Hopefully this technology will someday be able to break into the FPS market, so this video card war bullshit can be laid to rest and stop driving the whole industry into the dirt.