pixels are always being processed by different processors. Thus, each polygon would
have a number of processors operating on it in parallel.

5.4.1 Pixel Processing
The triangles, passed across the T-bus from the Geometry Engines, are fed into five
parallel Pixel Generators (PGs). The PGs walk the top and bottom edges of the
triangles, identifying which pixels lie along those edges. For each X location in the
pixel (or subpixel) grid, a pair is formed from the top and bottom pixels touched by
each polygon. These pairs are called spans.
To get the actual pixels which can be rendered into the frame buffer, the PGs must take
the spans and iterate from the top pixel to the bottom pixel to find the parameter
values for each of the polygon's interior pixels. Thus, coming out of the PG, each pixel
would have iterated values for location (X,Y,Z,W), texture (S,T,U), color (R,G,B), and
transparency (A).
Two paths from the PGs to the next stage in the pipeline facilitate performance. One
path is used when the polygons are texture mapped, requiring the texture memory to
be accessed, while the other path bypasses the texture processors altogether.
Given an iterated set of pixels, each PG then hands the individual pixels off to be
rendered into the frame buffer. If the pixels do not have any associated texture, they
are passed directly down the path to the Image Blender (IB) processors, where the
current Fog color is blended with the pixel color. If, however, there are texture
coordinates associated with the pixels being rendered, they are first routed through
the texture mapping processors to determine the appropriate texel color.
The pixel processors downstream from the Image Blenders support both 8-bit and 12-
bit RGBA components for pixels, allowing for a selection from over 68 billion
simultaneous colors for any given object or vertex. This accuracy of color information
is critical for Image Processing applications and is supported through Silicon
Graphics' ImageVision Library(tm) (IL).
The Raster Subsystem supports from one to four RM boards. Use of multiple boards
affords the ability to scale both pixel fill performance and frame buffer resolution.
With one optional RM board installed, RealityEngine systems support both the new,
ultra-high-resolution standard of 1600 x 1200 pixels non-interlaced, and the full
specification for HDTV, offering interlaced displays using either the 1250/50 or the
1125/60 formats.
5.5 Display Subsystem
Once the Raster Subsystem has completed rendering the frame, the Display
Subsystem takes the digital frame buffer and processes the pixels through digital-to-
analog converters (DACs) to generate an analog pixel stream. This signal may then be
sent across coaxial cable to display devices as component (separate R,G,B) video.
Advanced hardware in the display subsystem supports programmable pixel timings
to allow the system to drive displays with a wide variety of resolutions, refresh rates,
and interlace/non-interlace characteristics.
RealityEngine is the first graphics system to provide standard video support for both
HDTV (1920x1035 at 30Hz interlaced) and the new ultra-high-resolution standard,