NASA Advanced Supercomputing
Project Columbia at NASA Ames, March 9, 2005
NAS welcomes the visitors from SLAC/KIPAC and SGI
Bill Thigpen, Columbia Project Manager, explains how the system was assembled in under 120 days
Project Columbia's cluster of 20 SGI Altix nodes: 10,240 processors
Richard Mount (cellphone on belt) and the group listen to the Project Columbia introduction
Stewart Marshall at the back of the nodes
Columbia Project benchmark: 51.9 teraflops (trillion floating-point operations per second); see the peak-rate estimate after this photo list
Project Columbia utilization status
NUMAlink interconnect fabric
Cooling fans inside an Altix rack
Altix 3700 Bx2 racks: 512 Intel Itanium 2 processors each
Some of the Altix nodes
Neat cabling
Floor access
Fiber-optic cabling conduits
Fiber-optic cabling detail: each cable contains 16 fibers
Back of the nodes
Back details
Chuck next to the Cray
Bill pointing to the Cray rack
Water-cooled door
NUMAlink, which interconnects the nodes
NUMAlink details
Visualization center multivariate demonstration
Hubble Ultra Deep Field
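For scale, the 51.9 Tflop/s benchmark above can be checked against the hardware listed below. Assuming each 1.5 GHz Itanium 2 retires 4 floating-point operations per clock (two fused multiply-add units per cycle, an assumption about the processor rather than a figure from this page), the theoretical peak is

$$
R_{\text{peak}} = 10{,}240 \times 1.5\,\text{GHz} \times 4\ \text{flop/cycle} \approx 61.4\ \text{Tflop/s},
\qquad
\frac{51.9}{61.4} \approx 84\%,
$$

which would put the quoted benchmark at roughly 84% of peak, in line with a Linpack-style sustained rate.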
Columbia System
Based on SGI® NUMAflex™ architecture
20 SGI® Altix™ 3700 superclusters, each with 512 processors
Global shared memory across 512 processors (see the OpenMP sketch after this list)
10,240 Intel Itanium® 2 processors
Current processor speed: 1.5 gigahertz
Current cache: 6 megabytes (L3) per processor
1 terabyte of memory per 512 processors, with 20 terabytes total memory
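The "global shared memory across 512 processors" item above means each Altix node presents a single cache-coherent address space, so a program can use all 512 processors of a node with plain threads rather than message passing. A minimal OpenMP sketch of that model (the array size is illustrative, not taken from this page):

#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N 10000000L   /* illustrative array size, not from the page */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    if (!a) return 1;

    double sum = 0.0;

    /* On a shared-memory Altix node every thread sees the same array;
       no explicit communication is needed, unlike on a cluster. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++) {
        a[i] = (double)i;
        sum += a[i];
    }

    printf("threads=%d sum=%.0f\n", omp_get_max_threads(), sum);
    free(a);
    return 0;
}

With the Intel compilers listed below under Operating Environment, this would typically be built with OpenMP enabled (for compilers of that era, roughly icc -openmp) and the thread count chosen with the OMP_NUM_THREADS environment variable.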
Operating Environment
Linux®-based operating system
PBS Pro™ job scheduler
Intel® Fortran/C/C++ compilers
SGI® ProPack™ 3.2 software
Interconnect
SGI® NUMAlink™ (see the MPI sketch after this list)
InfiniBand network
10 gigabit Ethernet
1 gigabit Ethernet
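Traffic between the 512-processor nodes travels over these fabrics, most commonly as MPI messages. A minimal sketch of node-to-node message passing (standard MPI calls only; nothing here is specific to Columbia):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank passes a token around a ring; on Columbia this traffic
       would cross NUMAlink within a node and InfiniBand (or Ethernet)
       between nodes. */
    int token = rank;
    int next = (rank + 1) % size;
    int prev = (rank + size - 1) % size;
    int received;

    MPI_Sendrecv(&token, 1, MPI_INT, next, 0,
                 &received, 1, MPI_INT, prev, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d of %d received token from rank %d\n",
           rank, size, received);

    MPI_Finalize();
    return 0;
}

On a system like this, such a program would normally be launched as a batch job through the PBS Pro scheduler listed above (submitted with qsub); the exact submission options are site-specific.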
Storage
Online: 440 terabytes of Fibre Channel RAID storage
Archive storage capacity: 10 petabytes