Xj3D Dynamic Texture Rendering Extensions

Component: xj3d_RenderedTexture
Levels: 1
Specification Suggested Name: RenderedTexturing


Introduction

Name

The name of this component is "xj3d_RenderedTexture". This name shall be used when referring to this component in the COMPONENT statement (see ISO FDIS 19775-1:200x 7.2.5.4 Component statement).

Overview

This extension provides the ability to dynamically render a partial scene graph to an offscreen texture that can then be applied to the geometry of a node. This can be used in many different ways, such as creating mirror effects, inputs to shaders, etc. The purpose of this component is to provide extended visual effects, but not the complete form of offscreen rendering and buffers that would be available to lower-level rendering APIs. For example, it does not provide the ability to read back the contents of the texture, nor any of the associated buffers such as depth, stencil or occlusion.


Concepts

Bindable Node Handling

In order to accommodate all types of visual effects, a rendered texture needs to be able to contain its own complete scene graph structure, which may or may not be part of the same scene graph as the main scene. This causes a number of complications due to the runtime model that VRML/X3D uses, particularly for bindable nodes. Bindable nodes only allow a single instance of the node to be active at a time, but with offscreen rendering providing a separate, potentially independent, subscene, more than one needs to be active at a time. To work around this problem, a series of fields is added that allows you to nominate other instances to be used in the given subscene (or none at all, in which case default values are used).

If no value is provided for a field then the default behaviour is applied: for example, providing no background means that the base texture colour is cleared to black.
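For example, a subscene could nominate its own viewpoint and background instances like this (the position and colour values here are purely illustrative):

RenderedTexture {
  viewpoint Viewpoint {
    position 0 0 5
    description "Texture camera"
  }
  background Background {
    skyColor [ 0.2 0.2 0.2 ]
  }
  scene Group {
    children Shape {
      appearance Appearance {
        material Material { diffuseColor 1 0 0 }
      }
      geometry Sphere {}
    }
  }
}

Neither the Viewpoint nor the Background here is bound in the main scene; each applies only to the rendering of this texture.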

Defining Texture Parameters

Other forms of textures have implicit size definitions provided as part of their source. As this texture is rendered dynamically, with no external source, the user needs to tell the system details about the texture. Specifically, the two pieces of information needed are the dimensions of the texture and the number of colour components.

Information about the size of the texture is provided through the dimensions field. This is an array of integer values with the first three values having defined meanings, and is almost identical in semantics to the SFImage field definition. The first two values are the width and height of the texture, in pixels; each must be a power of two. The third value is the number of colour components to generate in the texture: a value of 1 to 4. The user is not required to provide all of these values; any omitted values take the field defaults of 128 (width), 128 (height) and 3 (colour components).
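For instance, a 256 × 256 texture that includes an alpha channel (four colour components) could be requested as follows (the values are chosen purely for illustration):

RenderedTexture {
  dimensions [ 256 256 4 ]
  scene Group {}
}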

Updating the texture

Since offscreen texture rendering requires a completely separate rendering pass, re-rendering the texture every frame can carry a heavy performance penalty. In many cases, the user may not want anything more than a single render when the world first loads. An explicit update field allows user scripting to refresh the texture only when it decides an update is needed. Setting this field to "ALWAYS" causes the texture to be rendered every frame. A value of "NONE" stops rendering so that no further updates are performed, even if the contained scene graph changes. Setting the value to "NEXT_FRAME_ONLY" instructs the browser to render the texture at the end of the current frame and then stop: at the start of the next frame, the value is automatically set back to "NONE" to indicate that the rendering has taken place. Since this is a field value change, it automatically generates an output event that may be routed.
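As a sketch, a one-off refresh could be driven from a Script in the main scene by routing a string to the update field. The TouchSensor, Script node and its updateMode eventOut are hypothetical names used only for this example, not part of this component:

DEF TEX RenderedTexture {
  update "NONE"
  scene Group {}
}

DEF TOUCH TouchSensor {}

DEF REFRESHER Script {
  eventIn  SFTime   touchTime
  eventOut SFString updateMode
  url "javascript:
    function touchTime(value) {
      updateMode = 'NEXT_FRAME_ONLY';
    }"
}

ROUTE TOUCH.touchTime TO REFRESHER.touchTime
ROUTE REFRESHER.updateMode TO TEX.set_update

At the start of the frame after the render, TEX.update reverts to "NONE", and that change can itself be routed elsewhere if needed.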

Design Issues

- What to do about the lack of a NavigationInfo. Most of it is useless as there is no direct navigation, but fields like headlight, visibilityLimit and the near clip distance from the avatar size are definitely needed.

- Sensors. Most input sensors won't function (e.g. touch and drag sensors). Time sensors should work, but what about visibility sensors, proximity sensors, LODs and billboards? Supporting them in a dynamic system is extremely compute-intensive, and can be quite a problem to implement. For now, none of these are supported.


Node Reference

RenderedTexture : X3DTexture2DNode {
  SFNode   [in,out] metadata   NULL        [X3DMetadataObject]
  SFBool   []       repeatS    TRUE
  SFBool   []       repeatT    TRUE
  SFString [in,out] update     "NONE"      ["NONE", "NEXT_FRAME_ONLY", "ALWAYS"]
  SFNode   [in,out] viewpoint  NULL        [X3DViewpointNode]
  SFNode   [in,out] background NULL        [X3DBackgroundNode]
  SFNode   [in,out] fog        NULL        [Fog]
  SFNode   [in,out] scene      NULL        [X3DGroupNode]
  MFInt32  []       dimensions [128 128 3] (0, ∞)
}
The RenderedTexture node defines a texture image that takes its source from an X3D scene graph structure.


Support Levels

This component defines one level of conformance. The nodes are specified by the following level:

Level    Prerequisites    Nodes/Features     Support
-------  ---------------  -----------------  --------------------------
Level 1  Core 1           RenderedTexture    All fields fully supported
         Grouping 1
         Shape 1
         Rendering 1
         Texturing 1


Examples

Shape {
  appearance Appearance {
    texture RenderedTexture {
      update "NEXT_FRAME_ONLY"
      scene Group {
        children Shape {
          geometry IndexedFaceSet {
            coord Coordinate {
              point [ 1 0 0, 1 1 0, 0 1 0, 0 0 0 ]
            }
            coordIndex [ 0 1 2 3 ]
            color Color {
              color [ 1 0 0, 0 1 0, 0 0 1, 1 1 1 ]
            }
          }
        }
      }
    }
  }
  geometry Box {}
}


Last updated: $Date: 2004-04-30 04:50:27 $