SGI Onyx2

=Introduction=
What defines an Onyx2 as a workstation is a screen, keyboard and mouse. Without video hardware ([[SGI_Onyx2#InfiniteReality|see InfiniteReality below]]) an Onyx2 is an [[Origin 2000]] server. Even the SGI documentation describes the Onyx2 as a workstation, despite the fact that they can be configured into five-rack "reality monsters". That's some workstation, and a lot of noise!
  
  
 
(Deskside) [http://techpubs.sgi.com/library/tpl/cgi-bin/getdoc.cgi?coll=0650&db=bks&fname=/SGI_Admin/Onyx2_Desk_OG/ch01.html document number: 007-3454-005]
 
[[Image:Onyx2.JPG|thumb|280px|right|Rack system]]

[[Image:Onyx2.png|thumb|280px|right|Rack system]]
  
 
[[Image:onyx2_2200_front.jpg|thumb|Deskside|280px|right|Deskside Model]]

[[Image:onyx2_front.jpg|thumb|Deskside|280px|right|Deskside Model]]

[[Image:onyx2_side.jpg|thumb|Deskside|280px|right|Deskside Model]]

[[Image:onyx2-front-720.jpg|thumb|Deskside|280px|right|Deskside Model]]
  
 
=Architecture=
  
An Onyx2 system comprises nodes linked together by an interconnection network. It uses the distributed shared memory S2MP (Scalable Shared-Memory Multiprocessing) architecture. The Onyx2 uses [[NUMAlink]] (originally named CrayLink) for its system interconnect. The nodes are connected to router boards, which use NUMAlink cables to connect to other nodes through their routers. The NUMAlink network topology is a bristled fat hypercube; in configurations with more than 64 processors, a hierarchical fat hypercube topology is used instead. Additional NUMAlink cables, called Xpress links, can be installed between unused Standard Router ports to reduce latency and increase bandwidth. Xpress links can only be used in systems with 16 or 32 processors, as these are the only configurations whose network topology leaves unused router ports available for this purpose.
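As an illustration of the underlying topology (ignoring the "bristling", i.e. multiple node boards per router, and the Xpress shortcut links), plain hypercube connectivity can be sketched in a few lines. The function name is ours, not SGI's:

```python
def hypercube_neighbors(router_id, dimensions):
    """In an n-dimensional hypercube, two routers are linked exactly
    when their IDs differ in a single bit, so each router has one
    neighbor per dimension."""
    return {router_id ^ (1 << bit) for bit in range(dimensions)}

# A 4-dimensional hypercube has 16 routers; router 0 links to 1, 2, 4, 8.
print(sorted(hypercube_neighbors(0, 4)))
```

With two node boards per router and two CPUs per node, a 4-dimensional (16-router) cube reaches 64 processors, which lines up with the threshold above where the hierarchical variant takes over.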
  
 
==Router boards==
  
 
An Onyx2 node fits on a single 16" by 11" printed circuit board that contains one or two processors, the main memory, the directory memory and the Hub ASIC. The node board plugs into the backplane through a 300-pad CPOP (Compression Pad-on-Pad) connector. The connector combines two connections: one to the NUMAlink router network and another to the XIO I/O subsystem.

See also the [[Onyx2/Origin2000_Node_boards]] topic.
  
 
==Processor==
  
Each processor and its secondary cache are contained on a HIMM (Horizontal Inline Memory Module) daughter card that plugs into the [[Onyx2/Origin2000_Node_boards|node board]]. At the time of introduction, the Onyx2 used the IP27 board, featuring one or two R10000 processors clocked at 180 MHz with 1 MB secondary caches. A high-end model with two 195 MHz R10000 processors with 4 MB secondary caches was also available. In February 1998, the IP31 board was introduced with two 250 MHz R10000 processors with 4 MB secondary caches. Later, the IP31 board was upgraded to support two 300, 350 or 400 MHz R12000 processors. The 300 and 400 MHz models had 8 MB L2 caches, while the 350 MHz model had 4 MB L2 caches. Near the end of its life, a variant of the IP31 board that could use the 500 MHz R14000 with 8 MB L2 caches was made available.
  
 
==Main memory and directory memory==
  
Each [[Onyx2/Origin2000_Node_boards|node board]] can support a maximum of 4 GB of memory through 16 [[DIMM]] slots by using proprietary ECC SDRAM DIMMs with capacities of 16, 32, 64 and 256 MB. Because the memory bus is 144 bits wide (128 bits for data and 16 bits for ECC), memory modules are inserted in pairs. Directory memory, which contains information on the contents of remote caches for maintaining cache coherency, must be used in configurations with more than 32 processors as the Onyx2 uses a distributed shared memory model. The directory memory is contained on proprietary DIMMs that are inserted into eight DIMM slots set aside for its use. In configurations where there are fewer than 32 processors, the directory memory is contained within the main memory.
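A quick sanity check of the figures above; the constants restate the text, while the 72-bits-per-module figure is an inference from the pairing rule, not something the text states:

```python
# Supported proprietary DIMM capacities, in MB, as listed above.
DIMM_SIZES_MB = (16, 32, 64, 256)
SLOTS = 16            # main-memory DIMM slots per node board
PAIR = 2              # modules are always inserted in pairs

# 128 data bits + 16 ECC bits make up the 144-bit memory bus,
# so each DIMM of a pair would supply 72 bits (inferred).
data_bits, ecc_bits = 128, 16
bus_width = data_bits + ecc_bits
print(bus_width, bus_width // PAIR)        # 144 72

# Filling all 16 slots with the largest DIMM reaches the 4 GB maximum.
max_gb = SLOTS * max(DIMM_SIZES_MB) // 1024
print(max_gb)                              # 4
```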
  
 
==Hub ASIC==
 
==I/O subsystem==
  
The I/O subsystem is based around the Crossbow (Xbow) ASIC, which shares many similarities with the SPIDER ASIC. Since the Xbow ASIC is intended for use with the simpler XIO protocol, its hardware is also simpler, allowing the ASIC to feature eight ports, compared with the SPIDER ASIC's six ports. Two of the ports connect to the [[Onyx2/Origin2000_Node_boards|node boards]], and the remaining six to XIO cards. While the I/O subsystem's native bus is XIO, PCI-X and VME64 buses can also be used, provided by XIO bridges.
  
 
An IO6 base I/O board is present in every system. It is an XIO card that provides:
 
The implementation is partitioned into '''Geometry''' (also known as the '''Geometry Engine'''), '''Raster Memory''' (also known as the '''Raster Manager''') and '''Display Generator''' boards, with each board corresponding to one of the three major stages in the architecture's pipeline. The board set partitioning scheme is the same as the RealityEngine, a result of Silicon Graphics wanting the RealityEngine to be easily upgradable to the InfiniteReality. Each pipeline consists of one Geometry Engine board, one, two or four Raster Manager boards and one Display Generator board.<ref name="Paper">John S. Montrym et al. "InfiniteReality: A Real-Time Graphics System". ACM SIGGRAPH.</ref>
  
The implementation comprises twelve application-specific integrated circuit (ASIC) designs fabricated in 0.5 and 0.35 micrometre processes with three layers of metal interconnect.<ref name="Paper"/> These ASICs require a 3.3 V power supply. An InfiniteReality pipeline in a maximal configuration contains 251 million transistors. The InfiniteReality was developed by 55 engineers.<ref name="HC"> John Montrym, Brian McClendon. "InfiniteReality Graphics - Power Through Complexity". Advanced Systems Division, Silicon Graphics, Inc.</ref>
  
 
Given a sufficiently capable system, such as certain models of the Onyx2 and Onyx 3000, up to 16 InfiniteReality pipelines can be hosted. The pipelines can be operated in three modes: multi-seat, multi-display and multi-pipe. In multi-seat mode, each pipeline can serve up to eight simultaneous users, each with their own separate displays, keyboards and mice. In multi-display mode, multiple outputs drive multiple displays, which is useful for virtual reality. The multi-pipe mode has two methods of operation. The first method requires a digital multiplexer (DPLEX) daughterboard to be installed in every pipeline, which combines the output of multiple pipelines. The second method uses '''MonsterMode''' software to distribute the data used to render a frame to multiple pipelines.
 
=== Geometry board ===
  
The Geometry board is responsible for geometry and image processing and is divided into four stages, each implemented by separate devices. The first stage is the '''Host Interface'''. Because the InfiniteReality was designed for two very different platforms, the traditional shared memory bus-based Onyx using the POWERpath-2 bus and the distributed shared memory network-based Onyx2 using the [[NUMAlink|NUMAlink2]] interconnect, it needed an interface that could provide similar performance on both platforms despite a large difference in incoming bandwidth (200 MB/s versus 400 MB/s respectively).<ref name="Paper"/>
  
To this end, a '''Host Interface Processor''', an embedded [[RISC]] core, is used to fetch display list objects using [[direct memory access]] (DMA). The Host Interface Processor is accompanied by 16 MB of [[SDRAM|synchronous dynamic random access memory]] (SDRAM), of which 15 MB is used to cache display leaf objects. The cache can deliver data to the next stage at over 300 MB/s. The next stage is the '''Geometry Distributor''', which transfers data and instructions from the Host Interface Processor to individual Geometry Engines.
  
The next stage performs geometry and image processing. The '''Geometry Engine''' is used for this purpose, with each Geometry board containing up to four working in a [[MIMD|multiple instruction multiple data]] (MIMD) fashion. The Geometry Engine is a semi-custom ASIC with a single instruction multiple data (SIMD) pipeline containing three floating-point cores, each containing an arithmetic logic unit (ALU), a multiplier and a 32-bit by 32-entry [[register file]] with two read and two write ports. These cores are provided with a 32-bit by 2,560-entry memory that holds elements of OpenGL state and provides scratchpad storage. Each core also has a '''float-to-fix converter''' to convert floating-point values into integer form. The Geometry Engine can complete three instructions per cycle, so each Geometry board, with four such devices, can complete 12 instructions per cycle. The Geometry Engine uses a 195-bit microinstruction, which is compressed to reduce size and bandwidth usage in return for slightly less performance.
  
 
The Geometry Engine processor operates at 90 MHz, achieving a maximum theoretical performance of 540 MFLOPS.<ref name="HC"/> As there are four such processors on a GE12-4 or GE14-4 board, the maximum theoretical performance is 2.16 GFLOPS. A 16-pipeline system therefore achieves a maximum theoretical performance of 34.56 GFLOPS.
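The quoted figures are mutually consistent, as a little arithmetic shows. The 6 floating-point operations per cycle is inferred from 540 MFLOPS at 90 MHz (e.g. a multiply-add in each of the three cores) and is not stated explicitly above:

```python
CLOCK_MHZ = 90          # Geometry Engine clock
GE_MFLOPS = 540         # quoted peak per Geometry Engine
GES_PER_BOARD = 4       # GE12-4 / GE14-4 boards
PIPELINES = 16          # maximal system

flops_per_cycle = GE_MFLOPS / CLOCK_MHZ          # 6.0 (inferred)
board_gflops = GE_MFLOPS * GES_PER_BOARD / 1000  # 2.16
system_gflops = board_gflops * PIPELINES         # 34.56
print(flops_per_cycle, board_gflops, round(system_gflops, 2))
```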
  
The fourth stage is the '''Geometry-Raster FIFO''', a [[FIFO|first in first out]] (FIFO) buffer that merges the outputs of the four Geometry Engines into one, reassembling the outputs in the order they were issued. The FIFO is built from SDRAM and has a capacity of 4 MB,<ref>Mark J. Kilgard. "Realizing OpenGL: Two Implementations of One Architecture". 1997 SIGGRAPH Eurographics Workshop, August 1997.</ref> large enough to store 65,536 vertexes. The transformed vertexes are moved from this FIFO to the Raster Manager boards for triangle reassembly and setup by the Triangle Bus (also known as the Vertex Bus), which has a bandwidth of 400 MB/s.
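The stated capacity implies a budget of 64 bytes per transformed vertex; this is an inference from the two numbers above, not a documented vertex format:

```python
FIFO_BYTES = 4 * 1024 * 1024    # 4 MB Geometry-Raster FIFO
VERTEXES = 65_536               # stated capacity in vertexes

bytes_per_vertex = FIFO_BYTES // VERTEXES
print(bytes_per_vertex)         # 64
```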
  
 
=== Raster Memory board ===
  
The function of the Raster Memory board is to perform rasterization. It also contains the [[texture memory]] and [[framebuffer|raster memory]], which is more commonly known as the [[framebuffer]]. Rasterization is performed in the '''Fragment Generator''' and the eighty '''Image Engines'''. The Fragment Generator comprises four ASIC designs: the '''Scan Converter''' (SC) ASIC, the '''Texel Address Calculator''' (TA) ASIC, the '''Texture Memory Controller''' (TM) ASIC and the '''Texture Fragment''' (TF) ASIC.<ref name="Paper"/>
  
The SC ASIC and the TA ASIC perform scan conversion, color and depth interpolation, perspective correct texture coordinate interpolation and level of detail computation on incoming data, and the results are passed to the eight TM ASICs, which are specialized memory controllers optimized for texel access. Each TM ASIC controls four SDRAMs that make up one-eighth of the texture memory. The SDRAMs used are 16 bits wide and have separate address and data buses. SDRAMs with a capacity of 4 Mb are used by Raster Manager boards with 16 MB of texture memory, while 16 Mb SDRAMs are used by Raster Manager boards with 64 MB of texture memory.<ref name="HC"/> The TM ASICs perform texel lookups in their SDRAMs according to the texel addresses issued by the TA ASIC. Texels from the TM ASICs are forwarded to the appropriate TF ASIC, where texture filtering, texture environment combination with interpolated color and fog application are performed. As each SDRAM holds part of the texture memory, all 32 SDRAMs must be connected to all 80 Image Engines. To achieve this, the TM and TF ASICs implement a two-rank omega network, which reduces the number of individual paths required for the 32-to-80 sort while maintaining the same functionality.
  
The eighty Image Engines have multiple functions. Firstly, each Image Engine controls a portion of the raster memory, which, in the case of the InfiniteReality, is a 1 MB SGRAM organized as 262,144 32-bit words.<ref name="Paper"/><ref name="HC"/> Secondly, the following OpenGL per-fragment operations are performed by the Image Engines: pixel ownership test, stencil test, depth buffer test, blending, dithering and logical operation. Lastly, the Image Engines perform anti-aliasing and [[accumulation buffer]] operations. To deliver pixel data for display, each Image Engine has a 2-bit serial bus to the Display Generator board. If one Raster Manager board is present in the pipeline, the Image Engine uses the entire width of the bus, whereas if two or more Raster Manager boards are present, the Image Engine uses half the bus.<ref name="Paper"/> Each serial bus is actually part of the Video Bus, which has a bandwidth of 1.2 GB/s. Four Image Engine "cores" are contained on an Image Engine ASIC, which contains nearly 488,000 logic gates, comprising 1.95 million transistors, on a 42 mm<sup>2</sup> (6.5 by 6.5 mm) die that was fabricated in a 0.35 micrometre process by VLSI Technology.
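The per-engine numbers tie out with the board- and pipeline-level raster memory figures quoted elsewhere in this article; the arithmetic below simply restates them:

```python
WORDS = 262_144          # 32-bit SGRAM words per Image Engine
WORD_BYTES = 4
IMAGE_ENGINES = 80       # per Raster Manager board

mb_per_engine = WORDS * WORD_BYTES // 2**20      # 1 MB each
mb_per_rm_board = IMAGE_ENGINES * mb_per_engine  # 80 MB per board
mb_four_boards = 4 * mb_per_rm_board             # 320 MB, maximal config
print(mb_per_engine, mb_per_rm_board, mb_four_boards)
```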
  
 
The InfiniteReality uses the '''RM6-16''' or '''RM6-64''' Raster Managers. Each pipeline is capable of display resolutions of 2.62, 5.24 or 10.48 million pixels, provided that one, two or four Raster Manager boards respectively are present.<ref name="Report">Onyx2 Reality, Onyx2 InfiniteReality and Onyx2 InfiniteReality2 Technical Report, August 1998. Silicon Graphics, Inc.</ref> The raster memory can be configured to use 256, 512 or 1024 bits per pixel. 320 MB supports a resolution of 2560 by 2048 pixels with each pixel containing 512 bits of information.<ref name="HC"/> In a configuration with four Raster Managers, the texture memory has a bandwidth of 15.36 GB/s, and the raster memory has a bandwidth of 72.8 GB/s.
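The 320 MB figure checks out against the quoted resolution and pixel depth:

```python
WIDTH, HEIGHT = 2560, 2048      # quoted example resolution, in pixels
BITS_PER_PIXEL = 512            # one of the 256/512/1024-bit formats

raster_mb = WIDTH * HEIGHT * BITS_PER_PIXEL // 8 // 2**20
print(raster_mb)                # 320
```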
 
=== Display Generator board ===
  
The '''DG5-2''' Display Generator board contains hardware to drive up to two video outputs, which may be expanded to eight video outputs with an optional daughterboard, a configuration known as the '''DG5-8'''. The outputs are independent, and each output has hardware for generating video timing, video resizing, gamma correction and digital-to-analog conversion. Digital-to-analog conversion is provided by 8-bit digital-to-analog converters that support a pixel clock frequency up to 220 MHz.
  
Data for the video outputs are provided by four ASICs that de-serialize and de-interleave the 160-bit streams into 10-bit component [[RGBA]], 12-bit component RGBA, L16, Stereo Field Sequential (FS) or color indexes. The hardware also incorporates the cursor at this stage. A color index map with 32,768 entries is available.
  
 
=== Capabilities and performance ===
 
== InfiniteReality3 ==
  
InfiniteReality3 was introduced in 2000 along with the Onyx 3000 to supersede the InfiniteReality2. It was used in the [[SGI Onyx2|Onyx2]] and Onyx 3000 visualization systems. The only improvement over the previous implementation was the replacement of the RM9-64 Raster Manager with the '''RM10-256''' Raster Manager, which has 256 MB of texture memory, four times that of the previous raster manager. When maximally configured with four Raster Managers, the InfiniteReality3 pipeline provides 320 MB of raster memory.
  
 
== InfiniteReality4 ==
 
=Hardware aggregator=
  
[[Onyx2/Origin2000_Node_boards|Node boards]] have two CPUs per board.
  
 
==Known node board CPU speeds==
  
IP27: CPUs are mounted directly to the [[Onyx2/Origin2000_Node_boards|node board]] individually.

180 MHz R10000 (cannot be mixed with node boards of other speeds)

195 MHz R10000

IP31: CPUs are mounted in pairs (along with their respective caches) to a PIMM, a pluggable module which then mounts to the [[Onyx2/Origin2000_Node_boards|node board]].

250 MHz R10000

300 MHz R12000

350 MHz R12000 (cannot be used in configurations greater than 8 CPUs)

400 MHz R12000

500 MHz R14000

Memory is compatible with the Origin200 and Origin 2000.
  
 
==PCI cards==
 
! width="105" | Type of device
! width="80"  | Vendor name
! width="80" | Model
! width="125" | Description
! width="125" | PCI Vendor ID
! width="125" | PCI Device ID
 
| SCSI
| Qlogic
| qla1040b
| Fast/Wide SCSI controller
| 1077
| 1020
Line 327: Line 343:
 
| Fibre Channel / SCSI
 
| Fibre Channel / SCSI
 
| Qlogic
 
| Qlogic
| qla2342 dual-port 2Gb FC controller
+
| qla2342  
 +
| dual-port 2Gb FC controller
 
| 1077
 
| 1077
 
| 2312
 
| 2312
IP27prom in Module 3/Slot n4: Revision 6.156
</pre>
= Diagnostics =

{{:Onyx2 Diagnostics}}

=External links (See also)=

''SGI Tech pubs''

* (Rack) [http://techpubs.sgi.com/library/tpl/cgi-bin/getdoc.cgi?coll=0650&db=bks&fname=/SGI_Admin/Onyx2_Rack_OG/ch01.html document number: 007-3457-005]
* (Deskside) [http://techpubs.sgi.com/library/tpl/cgi-bin/getdoc.cgi?coll=0650&db=bks&fname=/SGI_Admin/Onyx2_Desk_OG/ch01.html document number: 007-3454-005]

'' nekochan wiki pages of interest ''

* [[SGI Origin 2000]]
* [[Origin2000/Onyx2 Diagnostics: Node Board LEDs]]
* [[Removing O2K Midplane]]

Forum post for step by step diagnosis:
* [http://forums.nekochan.net/viewtopic.php?f=3&t=17194&p=134279&#p134279 diagnosis post]

'' futuretech ''

* [http://www.futuretech.blinkenlights.nl/onyx2/ Future tech blinken lights Onyx2 page]
* [http://www.futuretech.blinkenlights.nl/origin/007-3439-002.pdf Origin and Onyx2 Theory of Operations Manual]
* [http://www.futuretech.blinkenlights.nl/onyx2/tech_specs.html Onyx2 Technical Specifications]
* [http://web.archive.org/web/20000816040905/www.sgi.com/onyx2/groupstation.html Onyx2: The GroupStation in Defense Imaging]
* [http://www.futuretech.blinkenlights.nl/onyx2/2270.pdf Onyx2 Digital Media Product Guide]
* [http://www.futuretech.blinkenlights.nl/octnonyx2comp.html Comparison of Octane MXE to Deskside Onyx2]
* [http://www.futuretech.blinkenlights.nl/onyx2/tech_report.pdf Onyx2 Technical Report (1.3MB PDF)]

[[Category:SGI workstations]]

Latest revision as of 13:27, 6 September 2013

Feel free to edit the heck out of this page.

Introduction

What defines an Onyx2 as a workstation is a screen, keyboard and mouse. Without video hardware (see InfiniteReality below) an Onyx2 is an Origin 2000 server. Even the SGI documentation describes an Onyx2 as a workstation despite the fact that they can be configured into 5-rack "reality monsters". That's some workstation, and a lot of noise!


The authoritative source of all information is SGI techpubs.sgi.com

Related techpubs.sgi.com documentation

(Rack) document number: 007-3457-005

(Deskside) document number: 007-3454-005

[Images: rack system (photo and conceptual view of the same system), 24 CPU meters in gr_osview, and four views of the deskside model]

Architecture

An Onyx2 system consists of nodes linked together by an interconnection network. It uses the distributed shared memory S2MP (Scalable Shared-Memory Multiprocessing) architecture. The Onyx2 uses NUMAlink (originally named CrayLink) for its system interconnect. The nodes are connected to router boards, which use NUMAlink cables to connect to other nodes through their routers. The NUMAlink network topology is a bristled fat hypercube. In configurations with more than 64 processors, a hierarchical fat hypercube network topology is used instead. Additional NUMAlink cables, called Xpress links, can be installed between unused Standard Router ports to reduce latency and increase bandwidth. Xpress links can only be used in systems that have 16 or 32 processors, as these are the only configurations whose network topology leaves router ports free for this use.

Router boards

There are four different router boards used by the Onyx2. Each successive router board allows a larger number of nodes to be connected.

Null Router

The Null Router connects two nodes in the same module. A system using the Null Router cannot be expanded as there are no external connectors.

Star Router

The Star Router can connect up to four nodes. It is always used in conjunction with a Standard Router to function correctly.

Standard Router (Rack Router)

The Standard Router can connect up to 32 nodes. It contains the SPIDER ASIC, which serves as a router for the NUMAlink network. The SPIDER ASIC has six ports, each with a pair of unidirectional links, connected to a crossbar which enables the ports to communicate with each other.

Meta Router (Cray Router)

The Meta Router is used in conjunction with Standard Routers to connect more than 32 nodes. It can connect up to 64 nodes.
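
The router capacities above imply a simple mapping from router type to maximum processor count, assuming the common two-CPUs-per-node configuration (the table below just restates the figures from this section):

```python
# Maximum node count each router board can connect (figures from this section).
MAX_NODES = {
    "Null Router": 2,       # two nodes in one module, no external ports
    "Star Router": 4,       # must be paired with a Standard Router
    "Standard Router": 32,  # SPIDER ASIC with six ports
    "Meta Router": 64,      # used together with Standard Routers
}

CPUS_PER_NODE = 2  # each node board holds up to two processors

max_cpus = {router: nodes * CPUS_PER_NODE for router, nodes in MAX_NODES.items()}
print(max_cpus["Meta Router"])  # 128 processors in a maximal configuration
```

Note how the "more than 64 processors" threshold in the Architecture section lines up with the 32-node limit of the Standard Router.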

Onyx2 nodes

An Onyx2 node fits on a single 16" by 11" printed circuit board that contains one or two processors, the main memory, the directory memory and the Hub ASIC. The node board plugs into the backplane through a 300-pad CPOP (Compression Pad-on-Pad) connector. The connector actually combines two connections, one to the NUMAlink router network and another to the XIO I/O subsystem.

See also the Onyx2/Origin2000_Node_boards topic.

Processor

Each processor and its secondary cache are contained on a HIMM (Horizontal Inline Memory Module) daughter card that plugs into the node board. At the time of introduction, the Onyx2 used the IP27 board, featuring one or two R10000 processors clocked at 180 MHz with 1 MB secondary caches. A high-end model with two 195 MHz R10000 processors with 4 MB secondary caches was also available. In February 1998, the IP31 board was introduced with two 250 MHz R10000 processors with 4 MB secondary caches. Later, the IP31 board was upgraded to support two 300, 350 or 400 MHz R12000 processors. The 300 and 400 MHz models had 8 MB L2 caches, while the 350 MHz model had 4 MB L2 caches. Near the end of its life, a variant of the IP31 board that could use the 500 MHz R14000 with 8 MB L2 caches was made available.

Main memory and directory memory

Each node board can support a maximum of 4 GB of memory through 16 DIMM slots by using proprietary ECC SDRAM DIMMs with capacities of 16, 32, 64 and 256 MB. Because the memory bus is 144 bits wide (128 bits for data and 16 bits for ECC), memory modules are inserted in pairs. Directory memory, which contains information on the contents of remote caches for maintaining cache coherency, must be used in configurations with more than 32 processors as the Onyx2 uses a distributed shared memory model. The directory memory is contained on proprietary DIMMs that are inserted into eight DIMM slots set aside for its use. In configurations where there are fewer than 32 processors, the directory memory is contained within the main memory.
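
The capacity and bus-width figures above can be checked with a little arithmetic; the per-DIMM width is an inference from the pairwise installation rule, not an SGI-documented number:

```python
# Maximum main memory per node board: 16 slots populated with 256 MB DIMMs.
DIMM_SLOTS = 16
LARGEST_DIMM_MB = 256
max_memory_gb = DIMM_SLOTS * LARGEST_DIMM_MB / 1024
print(max_memory_gb)  # 4.0

# The 144-bit bus (128 data bits + 16 ECC bits) explains why DIMMs are
# installed in pairs: each DIMM of a pair covers half the bus width.
bus_bits = 128 + 16
print(bus_bits // 2)  # 72
```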

Hub ASIC

The Hub ASIC interfaces the processors, memory and XIO to the NUMAlink 2 system interconnect. The ASIC contains five major sections: the crossbar (referred to as the "XB"), the I/O interface (referred to as the "II"), the network interface (referred to as the "NI"), the processor interface (referred to as the "PI") and the memory and directory interface (referred to as the "DM"), which also serves as the memory controller. The interfaces communicate with each other via FIFO buffers that are connected to the crossbar. When two processors are connected to the Hub ASIC, the node does not behave in an SMP fashion. Instead, the two processors operate separately and their buses are multiplexed over the single processor interface. This was done to save pins on the Hub ASIC. The Hub ASIC is clocked at 100 MHz and contains 900,000 gates fabricated in a five-layer metal process.

I/O subsystem

The I/O subsystem is based around the Crossbow (Xbow) ASIC, which shares many similarities with the SPIDER ASIC. Since the Xbow ASIC is intended for use with the simpler XIO protocol, its hardware is also simpler, allowing the ASIC to feature eight ports, compared with the SPIDER ASIC's six. Two of the ports connect to the node boards, and the remaining six to XIO cards. While the I/O subsystem's native bus is XIO, PCI and VME64 buses can also be used, provided by XIO bridges.

An IO6 base I/O board is present in every system. It is an XIO card that provides:

  • 1 10/100BASE-TX port
  • 2 serial ports provided by dual UARTs
  • 1 internal Fast-20 UltraSCSI single-ended port
  • 1 external wide UltraSCSI single-ended port
  • 1 real-time interrupt output for frame sync
  • 1 real-time interrupt input (edge-triggered)
  • Flash PROM, NVRAM and real-time clock


InfiniteReality

The difference between an SGI Origin 2000 and an Onyx2 is the InfiniteReality. In fact, the Onyx2 rack system pictured at top right was built from two Onyx2 racks; the InfiniteReality was taken out of the second rack, and in its place, as the top compute module, sits an Origin 2000 deskside with the plastics removed. The InfiniteReality was introduced in early 1996. It succeeded the RealityEngine, although the two coexisted for some time, with the RealityEngine remaining available for the Onyx as an entry-level option for deskside workstation configurations.

The InfiniteReality architecture was a third-generation design and is categorized as a sort-middle architecture. It was designed to render complex scenes at high quality at 60 frames per second, roughly two to four times the performance of the RealityEngine it replaced. It was designed explicitly for use in conjunction with the OpenGL graphics library and implements most of the OpenGL pipeline in hardware.

The implementation is partitioned into Geometry (also known as the Geometry Engine), Raster Memory (also known as the Raster Manager) and Display Generator boards, with each board corresponding to each stage of the three major stages in the architecture's pipeline. The board set partitioning scheme is the same as the RealityEngine, as a result of Silicon Graphics wanting the RealityEngine to be easily upgradable to the InfiniteReality. Each pipeline consists of one Geometry Engine board, one, two or four Raster Manager boards and one Display Generator board.[1]

The implementation comprises twelve Application-specific integrated circuit (ASIC) designs fabricated in 0.5 and 0.35 micrometre processes with three layers of metal interconnect.[1] These ASICs require a 3.3 V power supply. An InfiniteReality pipeline in a maximal configuration contains 251 million transistors. The InfiniteReality was developed by 55 engineers.[2]

Given a sufficiently capable system, such as certain models of the Onyx2 and Onyx 3000, up to 16 InfiniteReality pipelines can be hosted. The pipelines can be operated in three modes: multi-seat, multi-display and multi-pipe. In multi-seat mode, each pipeline can serve up to eight simultaneous users, each with their own separate display, keyboard and mouse. In multi-display mode, multiple outputs drive multiple displays, which is useful for virtual reality. The multi-pipe mode has two methods of operation. The first method requires a digital multiplexer (DPLEX) daughterboard to be installed in every pipeline, which combines the output of multiple pipelines. The second method uses MonsterMode software to distribute the data used to render a frame across multiple pipelines.

To interface the pipeline to the system, a Flat Cable Interface (FCI) cable is used to connect the Host Interface Processor ASIC on the Geometry Board to the Ibus on the IO4 board, a part of the host system.

Geometry board

The Geometry board is responsible for geometry and image processing and is divided into four stages, each stage being implemented by separate device(s). The first stage is the Host Interface. Due to the InfiniteReality being designed for two very different platforms, the traditional shared memory bus-based Onyx using the POWERpath-2 bus, and the distributed shared memory network-based Onyx2 using the NUMAlink2 interconnect, the InfiniteReality had to have an interface that could provide similar performance on both platforms, which had a large difference in incoming bandwidth (200 MB/s versus 400 MB/s respectively).[1]

To this end, a Host Interface Processor, an embedded RISC core, is used to fetch display list objects using direct memory access (DMA). The Host Interface Processor is accompanied by 16 MB of synchronous dynamic random access memory (SDRAM), of which 15 MB is used to cache display leaf objects. The cache can deliver data to the next stage at over 300 MB/s. The next stage is the Geometry Distributor, which transfers data and instructions from the Host Interface Processor to individual Geometry Engines.

The next stage performs geometry and image processing. The Geometry Engine is used for this purpose, with each Geometry board containing up to four working in a multiple instruction multiple data (MIMD) fashion. The Geometry Engine is a semi-custom ASIC with a single instruction multiple data (SIMD) pipeline containing three floating-point cores, each containing an arithmetic logic unit (ALU), a multiplier and a 32-bit by 32-entry register file with two read and two write ports. These cores are provided with a 32-bit by 2,560-entry memory that holds elements of OpenGL state and provides scratchpad RAM storage. Each core also has a float-to-fix converter to convert floating-point values into integer form. The Geometry Engine is capable of completing three instructions per cycle, and each Geometry board, with four such devices, can complete 12 instructions per cycle. The Geometry Engine uses a 195-bit microinstruction, which is compressed in order to reduce size and bandwidth usage in return for slightly less performance.

The Geometry Engine processor operates at 90 MHz, achieving a maximum theoretical performance of 540 MFLOPS.[2] As there are four such processors on a GE12-4 or GE14-4 board, the maximum theoretical performance is 2.16 GFLOPS. A 16-pipeline system therefore achieves a maximum theoretical performance of 34.56 GFLOPS.
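
One way to reconcile the 90 MHz clock with the quoted 540 MFLOPS is to assume each of the three floating-point cores retires two operations per cycle (one ALU operation and one multiply); that assumption is ours, but it reproduces the published figures:

```python
CLOCK_MHZ = 90
CORES = 3
OPS_PER_CORE = 2  # assumed: ALU and multiplier operating in parallel

ge_mflops = CLOCK_MHZ * CORES * OPS_PER_CORE
print(ge_mflops)          # 540 MFLOPS per Geometry Engine

board_gflops = 4 * ge_mflops / 1000
print(board_gflops)       # 2.16 GFLOPS per four-engine board

system_gflops = 16 * board_gflops
print(system_gflops)      # 34.56 GFLOPS across 16 pipelines
```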

The fourth stage is the Geometry-Raster FIFO, a first in first out (FIFO) buffer that merges the outputs of the four Geometry Engines into one, reassembling the outputs in the order they were issued. The FIFO is built from SDRAM and has a capacity of 4 MB,[3] large enough to store 65,536 vertexes. The transformed vertexes are moved from this FIFO to the Raster Manager boards for triangle reassembly and setup by the Triangle Bus (also known as the Vertex Bus), which has a bandwidth of 400 MB/s.
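
The FIFO figures above imply a per-vertex size; the 64-byte result is an inference from the stated capacity, not a documented format:

```python
FIFO_BYTES = 4 * 2**20       # 4 MB Geometry-Raster FIFO
VERTEX_CAPACITY = 65_536     # transformed vertexes it can hold

bytes_per_vertex = FIFO_BYTES // VERTEX_CAPACITY
print(bytes_per_vertex)      # 64
```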

Raster Memory board

The function of the Raster Memory board is to perform rasterization. It also contains the texture memory and raster memory, which is more commonly known as the framebuffer. Rasterization is performed in the Fragment Generator and the eighty Image Engines. The Fragment Generator comprises four ASIC designs: the Scan Converter (SC) ASIC, the Texel Address Calculator (TA) ASIC, the Texture Memory Controller (TM) ASIC and the Texture Fragment (TF) ASIC.[1]

The SC ASIC and the TA ASIC perform scan conversion, color and depth interpolation, perspective correct texture coordinate interpolation and level of detail computation on incoming data, and the results are passed to the eight TM ASICs, which are specialized memory controllers optimized for texel access. Each TM ASIC controls four SDRAMs that make up one-eighth of the texture memory. The SDRAMs used are 16 bits wide and have separate address and data buses. SDRAMs with a capacity of 4 Mb are used by Raster Manager boards with 16 MB of texture memory, while 16 Mb SDRAMs are used by Raster Manager boards with 64 MB of texture memory.[2] The TM ASICs perform texel lookups in their SDRAMs according to the texel addresses issued by the TA ASIC. Texels from the TM ASICs are forwarded to the appropriate TF ASIC, where texture filtering, texture environment combination with interpolated color, and fog application are performed. As each SDRAM holds part of the texture memory, all of the 32 SDRAMs must be connected to all of the 80 Image Engines. To achieve this, the TM and TF ASICs implement a two-rank omega network, which reduces the number of individual paths required for the 32 to 80 sort while maintaining the same functionality.
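
The texture memory sizes quoted above are consistent with the SDRAM counts; a quick cross-check (Mb is megabits, MB is megabytes):

```python
TM_ASICS = 8
SDRAMS_PER_TM = 4
sdrams = TM_ASICS * SDRAMS_PER_TM
print(sdrams)  # 32 SDRAM chips hold the texture memory

def texture_mb(chip_megabits: int) -> float:
    """Total texture memory in MB for a given per-chip density."""
    return sdrams * chip_megabits / 8  # 8 bits per byte

print(texture_mb(4))   # 16.0 -> boards with 16 MB use 4 Mb chips
print(texture_mb(16))  # 64.0 -> boards with 64 MB use 16 Mb chips
```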

The eighty Image Engines have multiple functions. Firstly, each Image Engine controls a portion of the raster memory, which in the case of the InfiniteReality, is a 1 MB SGRAM organized as 262,144 by 32-bit words.[1][2] Secondly, the following OpenGL per-fragment operations are performed by the Image Engines: pixel ownership test, stencil test, depth buffer test, blending, dithering and logical operation. Lastly, the Image Engines perform anti-aliasing and accumulation buffer operations. To deliver pixel data for display, each Image Engine has a 2-bit serial bus to the Display Generator board. If one Raster Manager board is present in the pipeline, the Image Engine uses the entire width of the bus, whereas if two or more Raster Manager boards are present, the Image Engine uses half the bus.[1] Each serial bus is actually a part of the Video Bus, which has a bandwidth of 1.2 GB/s. Four Image Engine "cores" are contained on an Image Engine ASIC, which contains nearly 488,000 logic gates, comprising 1.95 million transistors, on a 42 mm2 (6.5 by 6.5 mm) die that was fabricated in a 0.35 micrometre process by VLSI Technology.

The InfiniteReality uses the RM6-16 or RM6-64 Raster Managers. Each pipeline is capable of display resolutions of 2.62, 5.24 or 10.48 million pixels, provided that one, two or four Raster Manager boards respectively are present.[4] The raster memory can be configured to use 256, 512 or 1024 bits per pixel. 320 MB supports a resolution of 2560 by 2048 pixels with each pixel containing 512 bits of information.[2] In a configuration with four Raster Managers, the texture memory has a bandwidth of 15.36 GB/s, and the raster memory has a bandwidth of 72.8 GB/s.
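
The raster memory totals and the 2560 by 2048 example follow directly from the per-Image-Engine figure; this sketch redoes the arithmetic of the paragraphs above:

```python
IMAGE_ENGINES = 80
SGRAM_MB_PER_ENGINE = 1  # each Image Engine controls 1 MB of SGRAM

board_raster_mb = IMAGE_ENGINES * SGRAM_MB_PER_ENGINE
print(board_raster_mb)   # 80 MB per Raster Manager board

max_raster_mb = 4 * board_raster_mb
print(max_raster_mb)     # 320 MB with four Raster Manager boards

# 320 MB is exactly a 2560 x 2048 framebuffer at 512 bits per pixel:
framebuffer_mb = 2560 * 2048 * 512 / 8 / 2**20
print(framebuffer_mb)    # 320.0
```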

Display Generator board

The DG5-2 Display Generator board contains hardware to drive up to two video outputs, which may be expanded to eight video outputs with an optional daughterboard, a configuration known as the DG5-8. The outputs are independent and each output has hardware for generating video timing, video resizing, gamma correction and digital-to-analog conversion. Digital-to-analog conversion is provided by 8-bit digital-to-analog converters that support a pixel clock frequency up to 220 MHz.

Data for the video outputs are provided by four ASICs that de-serialize and de-interleave the 160-bit streams into 10-bit component RGBA, 12-bit component RGBA, L16, Stereo Field Sequential (FS) or color indexes. The hardware also incorporates the cursor at this stage. A color index map with 32,768 entries is available.

Capabilities and performance

The InfiniteReality was capable of several advanced capabilities:

  • 8 by 8 multi-sampled anti-aliasing[5]
  • A maximum color depth of 48-bit RGBA[5]
  • 16 overlay planes[5]
  • A 24-bit floating point Z-buffer[5]
  • Each pixel consists of 256 to 1,024 bits of data
  • Quad-buffered stereo viewing

The InfiniteReality's performance was:

  • 11 million non-lighted, depth-buffered, anti-aliased triangles per second (triangle strips, 40-pixel triangles)
  • 8.3 million textured, depth-buffered, anti-aliased triangles per second (triangle strips, 50-pixel triangles)
  • 7+ million lighted, textured and anti-aliased triangles per second
  • 800 million trilinear mip-mapped, textured, 16-bit texel, depth-buffered pixels per second
  • 750 million trilinear mip-mapped, textured, 16-bit texel, 4 by 4 sub-sample anti-aliased, depth-buffered pixels per second
  • 710+ million textured and anti-aliased pixels per second
  • 300 million displayed pixels per second, distributed over one to eight outputs

InfiniteReality2

InfiniteReality2 is the name hinv (an IRIX utility that lists the hardware present in a system) uses for an InfiniteReality installed in an Onyx2. The InfiniteReality2, however, was still marketed as the InfiniteReality. It was the second implementation of the InfiniteReality architecture, and was introduced in late 1996. It is identical to the InfiniteReality architecturally, but differs mechanically, as the Onyx2's Origin 2000-based card cage is different from the Onyx's Challenge-based card cage.

The InfiniteReality2 introduced an interface scheme that is used in rackmount Onyx2 and later systems. Instead of being connected to the host system via an FCI cable, the board set is plugged into the rear of a midplane, which can support two pipelines. The midplane has eleven slots. Slots six through eleven are for the first pipeline, which may contain one to four Raster Manager boards. Slots one through four are for the second pipeline, which may contain only one or two Raster Manager boards because of the number of slots available. Because of this, maximally configured Onyx2 systems use one midplane for each pipeline to avoid restricting half of the 16 pipelines to a maximum of two Raster Manager boards. Slot five contains a Ktown board if the midplane is used in an Origin 2000-based system (Onyx2) or a Ktown2 board if the midplane is used in an Origin 3000-based system (Onyx 3000). The purpose of these boards is to interface the host system's XIO link to the Host Interface Processor ASIC on the Geometry board. These boards have two XIO ports for this purpose, with the top XIO port connected to the right pipeline and the bottom XIO port connected to the left pipeline.
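
The Raster Manager limits fall out of the slot budget if each pipeline is assumed to occupy one slot per board (one Geometry Engine, one Display Generator, plus its Raster Managers); the one-board-per-slot assumption is ours:

```python
def pipeline_boards(raster_managers: int) -> int:
    """Slots used by one pipeline: 1 GE board + RM boards + 1 DG board."""
    return 1 + raster_managers + 1

first_pipeline_slots = 6   # slots six through eleven
second_pipeline_slots = 4  # slots one through four

assert pipeline_boards(4) <= first_pipeline_slots   # up to four RMs fit
assert pipeline_boards(2) <= second_pipeline_slots  # only two RMs fit
print(pipeline_boards(4), pipeline_boards(2))  # 6 4
```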

Reality

The Reality is a cost-reduced version of the InfiniteReality2 intended to provide similar performance. Instead of using the GE14-4 Geometry Engine board and the RM7-16 or RM7-64 Raster Manager boards, the Reality used the GE14-2 Geometry Engine board and the RM8-16 or RM8-64 Raster Manager boards. The GE14-2 has two Geometry Engine Processors, instead of four like the other models. The RM8-16 and RM8-64 have 16 or 64 MB of texture memory respectively and 40 MB of raster memory. The Reality was also limited in the number of Raster Manager boards it could support: one or two. When maximally configured with two RM8-64 Raster Manager boards, the Reality pipeline has 80 MB of raster memory.

InfiniteReality2E

The InfiniteReality2E was an upgrade of the InfiniteReality, marketed as the InfiniteReality2, introduced in 1998. It succeeded the InfiniteReality2 board set and was itself succeeded by the InfiniteReality3 in 2000, but was not discontinued until 10 April 2001.

It improves upon the InfiniteReality by replacing the GE14-4 Geometry Engine board with the GE16-4 Geometry Engine board and the RM7-16 or RM7-64 Raster Manager boards with the RM9-64 Raster Manager board. The new Geometry Engine board operated at 112 MHz,[6] improving geometry and image processing performance. The new Raster Manager board operated at 72 MHz,[6] improving anti-aliased pixel fill performance.

InfiniteReality3

InfiniteReality3 was introduced in 2000 along with the Onyx 3000 to supersede the InfiniteReality2. It was used in the Onyx2 and Onyx 3000 visualization systems. The only improvement over the previous implementation was the replacement of the RM9-64 Raster Manager with the RM10-256 Raster Manager, which has 256 MB of texture memory, four times that of the previous raster manager. When maximally configured with four Raster Managers, the InfiniteReality3 pipeline provides 320 MB of raster memory.

InfiniteReality4

InfiniteReality4 was introduced in 2002 to succeed the InfiniteReality3. It was used in the Onyx2, Onyx 3000 and Onyx 350. It is the last member of the InfiniteReality family, itself succeeded by the ATI FireGL-based UltimateVision, which was used in the Onyx4. The only improvement over the previous implementation was the replacement of the RM10-256 Raster Manager by the RM11-1024 Raster Manager, which has improved performance, 1 GB of texture memory and 2.5 GB of raster memory, four and thirty-two times that of the previous raster manager, respectively. When maximally configured with four Raster Managers, the InfiniteReality4 pipeline has 10 GB of raster memory. In a maximum configuration with 16 pipelines, the InfiniteReality4 contained 16 GB of texture memory and 160 GB of raster memory.[7]
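
The InfiniteReality4 totals and the "four and thirty-two times" multipliers can be checked against the per-board figures quoted above:

```python
# Per-board figures: RM11-1024 versus the RM10-256 it replaced.
RM11_TEXTURE_MB, RM11_RASTER_MB = 1024, 2560  # 1 GB texture, 2.5 GB raster
RM10_TEXTURE_MB, RM10_RASTER_MB = 256, 80

print(RM11_TEXTURE_MB // RM10_TEXTURE_MB)  # 4  (texture multiplier)
print(RM11_RASTER_MB // RM10_RASTER_MB)    # 32 (raster multiplier)

# Maximal configuration: four RMs per pipeline, 16 pipelines.
pipeline_raster_gb = 4 * RM11_RASTER_MB / 1024
print(pipeline_raster_gb)                  # 10.0 GB per pipeline

system_texture_gb = 16 * RM11_TEXTURE_MB / 1024
system_raster_gb = 16 * pipeline_raster_gb
print(system_texture_gb, system_raster_gb)  # 16.0 160.0
```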

Comparison

The figures presented in the tables are for a minimal 1-pipeline and a maximal 16-pipeline configuration, except for the Reality, which was restricted to single pipe operation.

Hardware

Model              Geometry Engine board  Raster Manager board  Display Generator board  Texture memory (MB)  Raster memory (MB)   Introduced  Discontinued
InfiniteReality    GE12-4                 RM6-16 or RM6-64      DG4-2 or DG4-8           16 to 1,024[8]       80 to 5,120[8]       ?           1999-09-30
InfiniteReality2   GE14-4                 RM7-16 or RM7-64      DG5-2 or DG5-8           16 to 1,024          80 to 5,120          ?           ?
Reality            GE14-2                 RM8-16 or RM8-64      DG5-2 or DG5-8           64                   40 to 80             ?           ?
InfiniteReality2E  GE16-4                 RM9-64                DG5-2 or DG5-8           64 to 1,024[8]       80 to 5,120[8]       ?           ?
InfiniteReality3   GE16-4                 RM10-256              DG5-2 or DG5-8           256 to 4,096[7]      80 to 5,120[7]       ?           2003-06-27
InfiniteReality4   GE16-4                 RM11-1024             DG5-2 or DG5-8           1,024 to 16,384[7]   2,560 to 163,840[7]  ?           ?

Performance

Model              Polygons (millions per second)  Pixel fill (millions of pixels per second)  Volume rendering (millions of voxels per second)
InfiniteReality    10.9                            ?                                           ?
InfiniteReality2   10.9                            ?                                           ?
Reality            5.5                             94 to 188[a]                                100 to 200
InfiniteReality2E  13.1 to 210[8]                  192 to 6,100                                200 to 6,400
InfiniteReality3   13.1 to 210                     5,600                                       6,400
InfiniteReality4   13.1 to 210                     10,200[b]                                   6,400

[a] Anti-aliased, Z-buffered, textured.
[b] 8 by 8 sub-sampled anti-aliased, Z-buffered, textured, lit, 40-bit color pixels.

References

  1. John S. Montrym et al. "InfiniteReality: A Real-Time Graphics System". ACM SIGGRAPH.
  2. John Montrym, Brian McClendon. "InfiniteReality Graphics - Power Through Complexity". Advanced Systems Division, Silicon Graphics, Inc.
  3. Mark J. Kilgard. "Realizing OpenGL: Two Implementations of One Architecture". 1997 SIGGRAPH Eurographics Workshop, August 1997.
  4. Onyx2 Reality, Onyx2 InfiniteReality and Onyx2 InfiniteReality2 Technical Report, August 1998. Silicon Graphics, Inc.
  5. Remanufactured Silicon Graphics Onyx2 Product Guide, June 1999. Document 1073. Silicon Graphics, Inc.
  6. Alexander Wolfe. "Siggraph sets the stage for latest graphics". EE Times, 20 July 1998.
  7. "SGI Onyx 300 with InfiniteReality Family Graphics Datasheet". Silicon Graphics, document 3224, 25 October 2002.
  8. Onyx2 GroupStation Datasheet, August 1998. Document 1840. Silicon Graphics, Inc.

Hardware aggregator

Node boards have 2 CPUs per board.

Known node board CPU speeds

IP27: CPUs are mounted directly to the node board individually.

180 MHz R10000 (cannot be mixed with node boards of other speeds)

195 MHz R10000


IP31: CPUs are mounted in pairs (along with their respective caches) to a PIMM, a pluggable module which then mounts to the node board.

250 MHz R10000

300 MHz R12000

350 MHz R12000 (cannot be used in configurations with more than 8 CPUs)

400 MHz R12000

500 MHz R14000

PCI cards

The PCI card cage and compatible PCI cards are very similar to the Octane's, except that the screws for the cage have a different orientation from the Octane's.

PCI cards in the card cage run at 33 MHz. They must be 5V-compatible and may be either 32- or 64-bit; the card cage has three 64-bit slots. What follows is a list of known working cards.

Type of device        Vendor name  Model     Description                   PCI Vendor ID  PCI Device ID  Notes
SCSI                  Qlogic       qla1040b  Fast/Wide SCSI controller     1077           1020           This is the SCSI controller on the BASEIO board. Works "out of the box" on IRIX 6.4 and 6.5.
Fibre Channel / SCSI  Qlogic       qla2342   Dual-port 2 Gb FC controller  1077           2312           Force a kernel recompile if it doesn't show up in hinv. Works "out of the box" on IRIX 6.5.17 and above.

Memory capacities

Onyx2 uses the same proprietary memory as the rest of the Origin 200/2000 series of computers. To distinguish between the different capacities, they were color-coded across the top edge of each DIMM:

Red: 256MB

White/Silver: 128MB

Green: 64MB
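
The color code above is just a lookup table; a hypothetical helper (the dictionary and function names are ours, not SGI's):

```python
# DIMM capacity by the color band on the top edge (Origin 200/2000 memory).
DIMM_COLOR_MB = {
    "red": 256,
    "white/silver": 128,
    "green": 64,
}

def dimm_capacity_mb(color: str) -> int:
    """Return the capacity in MB for a given DIMM color code."""
    return DIMM_COLOR_MB[color.lower()]

print(dimm_capacity_mb("Red"))  # 256
```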

Sample hinv (from the pictured rack system above)

Location: /hw/module/1/slot/n1/node
        MODULEID Board: barcode K0027261   part              rev   
    IP31PIMMR14K Board: barcode MJG000     part 030-1547-002 rev  E
       8P12_MPLN Board: barcode HXP697     part 030-1535-001 rev  B
            IP31 Board: barcode MHZ690     part 030-1523-001 rev  C
Location: /hw/module/1/slot/n2/node
            IP31 Board: barcode MJA682     part 030-1523-001 rev  C
    IP31PIMMR14K Board: barcode MJV829     part 030-1547-002 rev  E
Location: /hw/module/1/slot/n3/node
            IP31 Board: barcode MHZ231     part 030-1523-001 rev  C
    IP31PIMMR14K Board: barcode MJJ983     part 030-1547-002 rev  E
Location: /hw/module/1/slot/n4/node
            IP31 Board: barcode JRP729     part 030-1255-003 rev  D
    IP31PIMMR14K Board: barcode DPD869     part 030-1547-002 rev  D
Location: /hw/module/1/slot/r1/router
      ROUTER_IR1 Board: barcode KLC273     part 030-0841-003 rev  C
Location: /hw/module/1/slot/r2/router
      ROUTER_IR1 Board: barcode KDK226     part 030-0841-003 rev  B
Location: /hw/module/1/slot/io2/pci_xio
         PCI_XIO Board: barcode KDG223     part 030-1062-002 rev  E
Location: /hw/module/1/slot/io8/mscsi
           MSCSI Board: barcode KCP460     part 030-1243-001 rev  M
Location: /hw/module/1/slot/io7/divo
            DIVO Board: barcode KAH156     part 030-1305-001 rev  E
Location: /hw/module/1/slot/io1/baseio
          BASEIO Board: barcode DYZ782     part 030-0734-002 rev  N
             MIO Board: barcode EYZ131     part 030-0880-003 rev  E
Location: /hw/module/1/slot/io9/fibre_channel
   FIBRE_CHANNEL Board: barcode JHT635     part 030-0927-003 rev  E
Location: /hw/module/1/slot/io3/kona
          GE16-4 Board: barcode KVZ553     part 030-1398-001 rev  E
           KTOWN Board: barcode KFR848     part 030-1067-001 rev  F
Location: /hw/module/2/slot/n1/node
        MODULEID Board: barcode K0019167   part              rev   
   IP31PIMMR12KS Board: barcode LAT691     part 030-1423-002 rev  G
            IP31 Board: barcode LAX165     part 030-1523-001 rev  C
       8P12_MPLN Board: barcode FXZ861     part 030-0762-006 rev  K
Location: /hw/module/2/slot/n2/node
   IP31PIMMR12KS Board: barcode HGL932     part 030-1423-002 rev  G
            IP31 Board: barcode KSB603     part 030-1523-001 rev  C
Location: /hw/module/2/slot/n3/node
            IP31 Board: barcode KRT958     part 030-1523-001 rev  C
   IP31PIMMR12KS Board: barcode KRK403     part 030-1423-002 rev  F
Location: /hw/module/2/slot/n4/node
   IP31PIMMR12KS Board: barcode KRH708     part 030-1423-002 rev  G
            IP31 Board: barcode KSB450     part 030-1523-001 rev  C
Location: /hw/module/2/slot/r1/router
      ROUTER_IR1 Board: barcode GWR211     part 030-0841-003 rev  B
Location: /hw/module/2/slot/r2/router
      ROUTER_IR1 Board: barcode GTL148     part 030-0841-003 rev  B
Location: /hw/module/2/slot/io1/baseio
          BASEIO Board: barcode HSG629     part 030-1124-002 rev  M
Location: /hw/module/2/slot/io3/mscsi
           MSCSI Board: barcode HSK846     part 030-1243-001 rev  M
Location: /hw/module/3/slot/n1/node
        MODULEID Board: barcode K0009218   part              rev   
            IP31 Board: barcode KRS981     part 030-1523-001 rev  C
   IP31PIMMR12KS Board: barcode KRH713     part 030-1423-002 rev  G
       8P12_MPLN Board: barcode DWZ328     part 013-1547-003 rev  D
Location: /hw/module/3/slot/n2/node
   IP31PIMMR12KS Board: barcode LAJ899     part 030-1423-002 rev  G
            IP31 Board: barcode LAJ448     part 030-1523-001 rev  C
Location: /hw/module/3/slot/n3/node
   IP31PIMMR12KS Board: barcode LAT486     part 030-1423-002 rev  G
            IP31 Board: barcode JZP191     part 030-1523-001 rev  C
Location: /hw/module/3/slot/n4/node
   IP31PIMMR12KS Board: barcode LAK445     part 030-1423-002 rev  G
            IP31 Board: barcode LAK857     part 030-1523-001 rev  C
Location: /hw/module/3/slot/r1/router
      ROUTER_IR1 Board: barcode MDS015     part 030-0841-003 rev  D
Location: /hw/module/3/slot/r2/router
      ROUTER_IR1 Board: barcode MDM991     part 030-0841-003 rev  D
Location: /hw/module/3/slot/io1/baseio
          BASEIO Board: barcode FSN491     part 030-0734-002 rev  K
             MIO Board: barcode GWN986     part 030-0880-003 rev  F
Location: /hw/module/3/slot/io3/mscsi
           MSCSI Board: barcode GSR276     part 030-1243-001 rev  G
Processor 0: 500 MHZ IP27
CPU: MIPS R14000 Processor Chip Revision: 1.4
FPU: MIPS R14010 Floating Point Chip Revision: 1.4
Processor 1: 500 MHZ IP27
CPU: MIPS R14000 Processor Chip Revision: 1.4
FPU: MIPS R14010 Floating Point Chip Revision: 1.4
Processor 2: 500 MHZ IP27
CPU: MIPS R14000 Processor Chip Revision: 1.4
FPU: MIPS R14010 Floating Point Chip Revision: 1.4
Processor 3: 500 MHZ IP27
CPU: MIPS R14000 Processor Chip Revision: 1.4
FPU: MIPS R14010 Floating Point Chip Revision: 1.4
Processor 4: 500 MHZ IP27
CPU: MIPS R14000 Processor Chip Revision: 1.4
FPU: MIPS R14010 Floating Point Chip Revision: 1.4
Processor 5: 500 MHZ IP27
CPU: MIPS R14000 Processor Chip Revision: 1.4
FPU: MIPS R14010 Floating Point Chip Revision: 1.4
Processor 6: 500 MHZ IP27
CPU: MIPS R14000 Processor Chip Revision: 1.4
FPU: MIPS R14010 Floating Point Chip Revision: 1.4
Processor 7: 500 MHZ IP27
CPU: MIPS R14000 Processor Chip Revision: 1.4
FPU: MIPS R14010 Floating Point Chip Revision: 1.4
Processor 8: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 9: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 10: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 11: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 12: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 13: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 14: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 15: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 16: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 17: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 18: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 19: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 20: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 21: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 22: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
Processor 23: 400 MHZ IP27
CPU: MIPS R12000 Processor Chip Revision: 3.5
FPU: MIPS R12010 Floating Point Chip Revision: 3.5
CPU 0 at Module 1/Slot 1/Slice A: 500 Mhz MIPS R14000 Processor Chip (enabled)
  Processor revision: 1.4. Scache: Size 8 MB Speed 250 Mhz  Tap 0xa
CPU 1 at Module 1/Slot 1/Slice B: 500 Mhz MIPS R14000 Processor Chip (enabled)
  Processor revision: 1.4. Scache: Size 8 MB Speed 250 Mhz  Tap 0xa
CPU 2 at Module 1/Slot 2/Slice A: 500 Mhz MIPS R14000 Processor Chip (enabled)
  Processor revision: 1.4. Scache: Size 8 MB Speed 250 Mhz  Tap 0xa
CPU 3 at Module 1/Slot 2/Slice B: 500 Mhz MIPS R14000 Processor Chip (enabled)
  Processor revision: 1.4. Scache: Size 8 MB Speed 250 Mhz  Tap 0xa
CPU 4 at Module 1/Slot 3/Slice A: 500 Mhz MIPS R14000 Processor Chip (enabled)
  Processor revision: 1.4. Scache: Size 8 MB Speed 250 Mhz  Tap 0xa
CPU 5 at Module 1/Slot 3/Slice B: 500 Mhz MIPS R14000 Processor Chip (enabled)
  Processor revision: 1.4. Scache: Size 8 MB Speed 250 Mhz  Tap 0xa
CPU 6 at Module 1/Slot 4/Slice A: 500 Mhz MIPS R14000 Processor Chip (enabled)
  Processor revision: 1.4. Scache: Size 8 MB Speed 250 Mhz  Tap 0xa
CPU 7 at Module 1/Slot 4/Slice B: 500 Mhz MIPS R14000 Processor Chip (enabled)
  Processor revision: 1.4. Scache: Size 8 MB Speed 250 Mhz  Tap 0xa
CPU 8 at Module 2/Slot 1/Slice A: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 9 at Module 2/Slot 1/Slice B: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 10 at Module 2/Slot 2/Slice A: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 11 at Module 2/Slot 2/Slice B: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 12 at Module 2/Slot 3/Slice A: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 13 at Module 2/Slot 3/Slice B: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 14 at Module 2/Slot 4/Slice A: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 15 at Module 2/Slot 4/Slice B: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 16 at Module 3/Slot 1/Slice A: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 17 at Module 3/Slot 1/Slice B: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 18 at Module 3/Slot 2/Slice A: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 19 at Module 3/Slot 2/Slice B: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 20 at Module 3/Slot 3/Slice A: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 21 at Module 3/Slot 3/Slice B: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 22 at Module 3/Slot 4/Slice A: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
CPU 23 at Module 3/Slot 4/Slice B: 400 Mhz MIPS R12000 Processor Chip (enabled)
  Processor revision: 3.5. Scache: Size 8 MB Speed 266 Mhz  Tap 0xa
Main memory size: 38400 Mbytes
Instruction cache size: 32 Kbytes
Data cache size: 32 Kbytes
Secondary unified instruction/data cache size: 8 Mbytes
Memory at Module 1/Slot 1: 4096 MB (enabled)
  Bank 0 contains 512 MB (Standard) DIMMS (enabled)
  Bank 1 contains 512 MB (Standard) DIMMS (enabled)
  Bank 2 contains 512 MB (Standard) DIMMS (enabled)
  Bank 3 contains 512 MB (Standard) DIMMS (enabled)
  Bank 4 contains 512 MB (Standard) DIMMS (enabled)
  Bank 5 contains 512 MB (Standard) DIMMS (enabled)
  Bank 6 contains 512 MB (Standard) DIMMS (enabled)
  Bank 7 contains 512 MB (Standard) DIMMS (enabled)
Memory at Module 1/Slot 2: 4096 MB (enabled)
  Bank 0 contains 512 MB (Standard) DIMMS (enabled)
  Bank 1 contains 512 MB (Standard) DIMMS (enabled)
  Bank 2 contains 512 MB (Standard) DIMMS (enabled)
  Bank 3 contains 512 MB (Standard) DIMMS (enabled)
  Bank 4 contains 512 MB (Standard) DIMMS (enabled)
  Bank 5 contains 512 MB (Standard) DIMMS (enabled)
  Bank 6 contains 512 MB (Standard) DIMMS (enabled)
  Bank 7 contains 512 MB (Standard) DIMMS (enabled)
Memory at Module 1/Slot 3: 4096 MB (enabled)
  Bank 0 contains 512 MB (Standard) DIMMS (enabled)
  Bank 1 contains 512 MB (Standard) DIMMS (enabled)
  Bank 2 contains 512 MB (Standard) DIMMS (enabled)
  Bank 3 contains 512 MB (Standard) DIMMS (enabled)
  Bank 4 contains 512 MB (Standard) DIMMS (enabled)
  Bank 5 contains 512 MB (Standard) DIMMS (enabled)
  Bank 6 contains 512 MB (Standard) DIMMS (enabled)
  Bank 7 contains 512 MB (Standard) DIMMS (enabled)
Memory at Module 1/Slot 4: 4096 MB (enabled)
  Bank 0 contains 512 MB (Standard) DIMMS (enabled)
  Bank 1 contains 512 MB (Standard) DIMMS (enabled)
  Bank 2 contains 512 MB (Standard) DIMMS (enabled)
  Bank 3 contains 512 MB (Standard) DIMMS (enabled)
  Bank 4 contains 512 MB (Standard) DIMMS (enabled)
  Bank 5 contains 512 MB (Standard) DIMMS (enabled)
  Bank 6 contains 512 MB (Standard) DIMMS (enabled)
  Bank 7 contains 512 MB (Standard) DIMMS (enabled)
Memory at Module 2/Slot 1: 4096 MB (enabled)
  Bank 0 contains 512 MB (Standard) DIMMS (enabled)
  Bank 1 contains 512 MB (Standard) DIMMS (enabled)
  Bank 2 contains 512 MB (Standard) DIMMS (enabled)
  Bank 3 contains 512 MB (Standard) DIMMS (enabled)
  Bank 4 contains 512 MB (Standard) DIMMS (enabled)
  Bank 5 contains 512 MB (Standard) DIMMS (enabled)
  Bank 6 contains 512 MB (Standard) DIMMS (enabled)
  Bank 7 contains 512 MB (Standard) DIMMS (enabled)
Memory at Module 2/Slot 2: 4096 MB (enabled)
  Bank 0 contains 512 MB (Standard) DIMMS (enabled)
  Bank 1 contains 512 MB (Standard) DIMMS (enabled)
  Bank 2 contains 512 MB (Standard) DIMMS (enabled)
  Bank 3 contains 512 MB (Standard) DIMMS (enabled)
  Bank 4 contains 512 MB (Standard) DIMMS (enabled)
  Bank 5 contains 512 MB (Standard) DIMMS (enabled)
  Bank 6 contains 512 MB (Standard) DIMMS (enabled)
  Bank 7 contains 512 MB (Standard) DIMMS (enabled)
Memory at Module 2/Slot 3: 4096 MB (enabled)
  Bank 0 contains 512 MB (Standard) DIMMS (enabled)
  Bank 1 contains 512 MB (Standard) DIMMS (enabled)
  Bank 2 contains 512 MB (Standard) DIMMS (enabled)
  Bank 3 contains 512 MB (Standard) DIMMS (enabled)
  Bank 4 contains 512 MB (Standard) DIMMS (enabled)
  Bank 5 contains 512 MB (Standard) DIMMS (enabled)
  Bank 6 contains 512 MB (Standard) DIMMS (enabled)
  Bank 7 contains 512 MB (Standard) DIMMS (enabled)
Memory at Module 2/Slot 4: 4096 MB (enabled)
  Bank 0 contains 512 MB (Standard) DIMMS (enabled)
  Bank 1 contains 512 MB (Standard) DIMMS (enabled)
  Bank 2 contains 512 MB (Standard) DIMMS (enabled)
  Bank 3 contains 512 MB (Standard) DIMMS (enabled)
  Bank 4 contains 512 MB (Standard) DIMMS (enabled)
  Bank 5 contains 512 MB (Standard) DIMMS (enabled)
  Bank 6 contains 512 MB (Standard) DIMMS (enabled)
  Bank 7 contains 512 MB (Standard) DIMMS (enabled)
Memory at Module 3/Slot 1: 4096 MB (enabled)
  Bank 0 contains 512 MB (Standard) DIMMS (enabled)
  Bank 1 contains 512 MB (Standard) DIMMS (enabled)
  Bank 2 contains 512 MB (Standard) DIMMS (enabled)
  Bank 3 contains 512 MB (Standard) DIMMS (enabled)
  Bank 4 contains 512 MB (Standard) DIMMS (enabled)
  Bank 5 contains 512 MB (Standard) DIMMS (enabled)
  Bank 6 contains 512 MB (Standard) DIMMS (enabled)
  Bank 7 contains 512 MB (Standard) DIMMS (enabled)
Memory at Module 3/Slot 2: 1024 MB (enabled)
  Bank 0 contains 512 MB (Standard) DIMMS (enabled)
  Bank 1 contains 512 MB (Standard) DIMMS (enabled)
Memory at Module 3/Slot 3: 256 MB (enabled)
  Bank 0 contains 128 MB (Standard) DIMMS (enabled)
  Bank 1 contains 128 MB (Standard) DIMMS (enabled)
  Bank 2 contains 128 MB (Standard) DIMMS (disabled)
  Bank 3 contains 128 MB (Standard) DIMMS (disabled)
Memory at Module 3/Slot 4: 256 MB (enabled)
  Bank 0 contains 256 MB (Standard) DIMMS (enabled)
ROUTER in Module 1/Slot 2: Revision 2: Active Ports [2,3,4,5,6] (enabled)
ROUTER in Module 1/Slot 4: Revision 2: Active Ports [2,3,4,5,6] (enabled)
ROUTER in Module 2/Slot 2: Revision 2: Active Ports [3,4,5,6] (enabled)
ROUTER in Module 2/Slot 4: Revision 2: Active Ports [1,3,4,5,6] (enabled)
ROUTER in Module 3/Slot 2: Revision 2: Active Ports [2,4,5,6] (enabled)
ROUTER in Module 3/Slot 4: Revision 2: Active Ports [1,2,4,5,6] (enabled)
Integral SCSI controller 2: Version QL1040B (rev. 2), differential
Integral SCSI controller 3: Version QL1040B (rev. 2), differential
Integral SCSI controller 4: Version QL1040B (rev. 2), differential
Integral SCSI controller 5: Version QL1040B (rev. 2), differential
Integral SCSI controller 8: Version QL1040B (rev. 2), single ended
  Disk drive: unit 1 on SCSI controller 8 (unit 1)
  Disk drive: unit 2 on SCSI controller 8 (unit 2)
  Disk drive: unit 3 on SCSI controller 8 (unit 3)
  CDROM: unit 6 on SCSI controller 8
Integral SCSI controller 9: Version QL1040B (rev. 2), single ended
Integral SCSI controller 6: Version Fibre Channel AIC-1160, revision 2
Integral SCSI controller 0: Version QL1040B (rev. 2), single ended
  Disk drive: unit 1 on SCSI controller 0 (unit 1)
  Disk drive: unit 2 on SCSI controller 0 (unit 2)
  CDROM: unit 6 on SCSI controller 0
Integral SCSI controller 1: Version QL1040B (rev. 2), single ended
  Disk drive: unit 4 on SCSI controller 1 (unit 4)
  Disk drive: unit 5 on SCSI controller 1 (unit 5)
Integral SCSI controller 15: Version QL1040B (rev. 2), single ended
Integral SCSI controller 14: Version QL1040B (rev. 2), single ended
  Disk drive: unit 1 on SCSI controller 14 (unit 1)
Integral SCSI controller 10: Version QL1040B (rev. 2), differential
Integral SCSI controller 11: Version QL1040B (rev. 2), differential
Integral SCSI controller 12: Version QL1040B (rev. 2), differential
Integral SCSI controller 13: Version QL1040B (rev. 2), differential
Integral SCSI controller 16: Version QL1040B (rev. 2), differential
Integral SCSI controller 17: Version QL1040B (rev. 2), differential
Integral SCSI controller 18: Version QL1040B (rev. 2), differential
Integral SCSI controller 19: Version QL1040B (rev. 2), differential
Integral SCSI controller 7: Version Fibre Channel AIC-1160, revision 2
IOC3/IOC4 serial port: tty5
IOC3/IOC4 serial port: tty6
IOC3/IOC4 serial port: tty1
IOC3/IOC4 serial port: tty2
IOC3/IOC4 serial port: tty9
IOC3/IOC4 serial port: tty10
IOC3/IOC4 serial port: tty3
IOC3/IOC4 serial port: tty4
IOC3/IOC4 serial port: tty7
IOC3/IOC4 serial port: tty8
IOC3 parallel port: plp1
IOC3 parallel port: plp2
Graphics board: InfiniteReality3
Gigabit Ethernet: eg0, module 1, PCI slot 2, firmware version 12.4.10
Fast Ethernet: ef1, version 1, module 2, slot io1, pci 2
Integral Fast Ethernet: ef0, version 1, module 1, slot io1, pci 2
Fast Ethernet: ef2, version 1, module 3, slot io1, pci 2
Iris Audio Processor: version RAD revision 7.0, number 1
Iris Audio Processor: version RAD revision 7.0, number 2
Origin PCI XIO board, module 1 slot 2: Revision 4
  PCI Adapter ID (vendor 0x133d, device 0x0001) PCI slot 0
  PCI Adapter ID (vendor 0x10a9, device 0x0009) PCI slot 2
Origin MSCSI board, module 1 slot 8: Revision 4
Origin BASEIO board, module 1 slot 1: Revision 3
Origin BASEIO board, module 2 slot 1: Revision 4
Origin BASEIO board, module 3 slot 1: Revision 3
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 0
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 1
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 2
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 3
  PCI Adapter ID (vendor 0x10a9, device 0x0002) PCI slot 0
  PCI Adapter ID (vendor 0x10a9, device 0x0002) PCI slot 2
Origin FIBRE CHANNEL board, module 1 slot 9: Revision 4
  PCI Adapter ID (vendor 0x10a9, device 0x0003) PCI slot 2
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 0
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 1
Origin MSCSI board, module 2 slot 3: Revision 4
  PCI Adapter ID (vendor 0x9004, device 0x1160) PCI slot 0
  PCI Adapter ID (vendor 0x9004, device 0x1160) PCI slot 1
  PCI Adapter ID (vendor 0x10a9, device 0x0003) PCI slot 6
  PCI Adapter ID (vendor 0x10a9, device 0x0003) PCI slot 2
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 0
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 1
  PCI Adapter ID (vendor 0x10a9, device 0x0005) PCI slot 7
  PCI Adapter ID (vendor 0x10a9, device 0x0003) PCI slot 6
  PCI Adapter ID (vendor 0x10a9, device 0x0003) PCI slot 2
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 0
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 1
  PCI Adapter ID (vendor 0x10a9, device 0x0005) PCI slot 7
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 0
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 1
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 2
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 3
Origin MSCSI board, module 3 slot 3: Revision 3
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 0
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 1
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 2
  PCI Adapter ID (vendor 0x1077, device 0x1020) PCI slot 3
DIVO Video: controller 0 unit 0: Input, Output
IOC3/IOC4 external interrupts: 2
IOC3/IOC4 external interrupts: 1
IOC3/IOC4 external interrupts: 3
HUB in Module 1/Slot 1: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 1/Slot 2: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 1/Slot 3: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 1/Slot 4: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 2/Slot 1: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 2/Slot 2: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 2/Slot 3: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 2/Slot 4: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 3/Slot 1: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 3/Slot 2: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 3/Slot 3: Revision 6 Speed 100.00 Mhz (enabled)
HUB in Module 3/Slot 4: Revision 6 Speed 100.00 Mhz (enabled)
IP27prom in Module 1/Slot n1: Revision 6.156
IP27prom in Module 1/Slot n2: Revision 6.156
IP27prom in Module 1/Slot n3: Revision 6.156
IP27prom in Module 1/Slot n4: Revision 6.156
IO6prom on Global Master Baseio in Module 1/Slot io1: Revision 6.156
IP27prom in Module 2/Slot n1: Revision 6.156
IP27prom in Module 2/Slot n2: Revision 6.156
IP27prom in Module 2/Slot n3: Revision 6.156
IP27prom in Module 2/Slot n4: Revision 6.156
IP27prom in Module 3/Slot n1: Revision 6.156
IP27prom in Module 3/Slot n2: Revision 6.156
IP27prom in Module 3/Slot n3: Revision 6.156
IP27prom in Module 3/Slot n4: Revision 6.156
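With a listing this long it helps to summarize. The sketch below (the `summarize` function name and the saved-file workflow are illustrative, not an SGI tool) pipes hinv-style output through awk, counting CPUs by speed/type and totaling the per-slot memory; the field positions assume lines in exactly the format shown above. On a live system you would pipe `hinv` straight into it.

```shell
# Summarize an IRIX hinv listing: count CPUs by speed/type and total
# the per-node memory. The here-doc sample stands in for real hinv
# output; field numbers ($8, $11, $6) match the line format above.
summarize() {
  awk '
    /^CPU [0-9]+ at Module/ { cpus[$8 " MHz " $11]++ }   # e.g. "500 MHz R14000"
    /^Memory at Module/     { mem += $6 }                # per-slot MB figure
    END {
      for (c in cpus) printf "%d x %s\n", cpus[c], c
      printf "total memory: %d MB\n", mem
    }'
}

summarize <<'EOF'
CPU 0 at Module 1/Slot 1/Slice A: 500 Mhz MIPS R14000 Processor Chip (enabled)
CPU 8 at Module 2/Slot 1/Slice A: 400 Mhz MIPS R12000 Processor Chip (enabled)
Memory at Module 1/Slot 1: 4096 MB (enabled)
Memory at Module 3/Slot 3: 256 MB (enabled)
EOF
```

On the full listing above this reports the 24 processors (8 x R14000, 16 x R12000) and the 38400 MB total that hinv itself prints.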

=Diagnostics=

Try stripping the Onyx2 down to a minimal configuration that boots without error.

Remove:

* Directory RAM
* All standard RAM except the pair in Bank 0 on each node <your hinv indicates all Bank 0s were working>
* The Graphics module
* The IO6G <if you still have the IO6 to replace it with>
* The MENET and FC boards
* The HD that contains the failed IRIX install
* The external CD
* If necessary, all but one nodeboard

<from this point make and test each change/reconfiguration *one* step at a time - it'll take more time, but it will also enable you to make more sense of any errors>

Connect a serial terminal <enable a *large* scroll back buffer on the terminal program and save each session>.

# Boot to the PROM monitor and issue "resetenv"
# Enter POD mode from the PROM command line by entering "pod", then:
#* "go cac"
#* "clearalllogs"
#* "initalllogs"
#* "flush"
#* "reset" <the system will reset>
# When it restarts, stop in the PROM and run "enableall", followed by "update" at the PROM command line <NOTE: repeat this 3-step process after *every* hardware error>

Reboot - are there any error messages?

If so - what are they? <stop and report back to the forums>

If not, install the IO6G and graphics board <but *nothing* else yet, and do not connect keyboard, mouse, or monitor>. Boot to the PROM monitor and "update" the PROM hardware inventory. Boot again - if errors appear, report back.

If no errors appear during the boot to the PROM, power down, re-install the boot drive, restart the system, clear/prep the drive, and install IRIX <what revision is your install set, btw?>

If there are install errors <stop and report back>

If not, connect a keyboard, mouse, and monitor <leave the serial terminal connected for now> and attempt to boot IRIX.

If booting IRIX is unsuccessful, what errors appeared?

If the IRIX boot was successful, test each RAM set in Bank 0 of a nodeboard <*no* Directory RAM yet>. If any set gives errors, record the error message, init the POD log, update the PROM inventory, and test the remaining sets.
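While cycling RAM sets through the banks, it is easy to miss banks the PROM has already deconfigured. A small sketch (the `flag_disabled` name is illustrative) that scans a saved hinv listing and reports any bank marked disabled, together with the node slot it sits in - the bank lines are assumed to look exactly like those earlier on this page:

```shell
# Report memory banks that hinv marks "(disabled)", prefixed with the
# "Memory at Module ..." line of the node slot they belong to.
# Reads hinv-format text on stdin.
flag_disabled() {
  awk '
    /^Memory at Module/      { slot = $0 }                # remember current node slot
    /^ *Bank .*\(disabled\)/ { print slot " ->" $0 }      # flag deconfigured bank
  '
}

flag_disabled <<'EOF'
Memory at Module 3/Slot 3: 256 MB (enabled)
  Bank 0 contains 128 MB (Standard) DIMMS (enabled)
  Bank 2 contains 128 MB (Standard) DIMMS (disabled)
EOF
```

Run against the full listing above, it would flag Banks 2 and 3 in Module 3/Slot 3 - the kind of silent deconfiguration worth checking before blaming a DIMM set.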

Once you have eliminated any problem RAM, try the RAM that passed in the other memory banks. If there are any errors during this process, try another known-good set in the problem bank. If the problem persists <and cleaning the slot(s) didn't help>, skip the bank or replace the nodeboard.

Once the RAM is tested and running w/o error, reinstall the MENET and FC boards. You can also reinstall the Directory RAM, but in an 8-processor system it does little beyond using electricity and producing heat.

BTW - when you remove nodeboards, the compression connectors <labeled "Connector Actuation 7/64 Hex"> should be released first, then the Phillips-head machine screws at the top and bottom of each board.

When you install a nodeboard, reverse the process: tighten the machine screws first, then the compression bolts. Following this procedure prevents the compression connector from having to support the weight of the nodeboard during removal/installation.


=See Also=


==External links==

SGI Tech pubs


==nekochan wiki pages of interest==


==Forum post for step-by-step diagnosis==


futuretech