#: locale=en ## Tour ### Description ### Title tour.name = HPC ## Skin ### Multiline Text HTMLText_33711F51_3D9B_3042_41C8_5BBC76E1A7A1.html =
Microsystems and Engineering Sciences Applications (MESA)


Welcome to MESA, home of Sandia’s advanced nuclear weapons research, design and development functions, as well as integrated materials research, and the production facilities for microsystem technologies.


Focused primarily on the nuclear weapons mission, the facilities that make up MESA ultimately connect with all of Sandia's mission areas via microsystems research and applications.


After eight years of construction, MESA was completed in 2007 at a cost of $518 million. It was Sandia's largest construction project since the Labs' first permanent buildings were built in the 1940s.


HTMLText_36EBE84C_3A9D_7042_4191_EC0369B8FEA4.html =
898 Lobby


Building 898 is a three-story, partially pre-fabricated, facility housing offices and light laboratories. Designed as an integrated facility promoting interaction among groups via workspace blocks, the building sports a postmodern, high-tech industrial design. The soaring lobby and high ceilings create an open and empowering environment for the design work housed here.
HTMLText_4B7F6214_4050_0F41_41C0_85556C4C1794.html =
Gowning Room


Clean room environments are kept clean both by limiting the volume of particles introduced to the space and by constantly moving the air to remove particles. Workers wear protective garments to reduce the number of contaminants they introduce to the room. This changing station allows staff to put on appropriate layers prior to entering a clean room; additional changing rooms allow for full changes of clothing.
HTMLText_4CC54F85_4050_1543_41CC_8E7E56628DA1.html =
858 Lobby


Originally built in the late 1980s to house the Manufacturing Development Laboratory and its offices, Building 858 expanded significantly when 858EF (the MicroFab) and 858EL (MicroLab) were added as part of the MESA construction. The building houses wafer manufacturing, compound semiconductor fabrication, microelectromechanical systems (MEMS) production, and electronic circuit manufacturing.
HTMLText_4CE6518A_4050_0D41_41B3_DB23EC067A66.html =
MicroFab


The MESA Microsystems Fabrication facility is one of the most complex buildings at Sandia. It is the first in the world to combine silicon processing with fabrication of compound semiconductors under one roof. This is the heart of microsystem manufacturing, done primarily in cleanrooms.


HTMLText_8A3175C8_82D0_A076_41DD_F98409987A3A.html =
858 SiliconFab


Part of Building 858, the SiliconFab is where Sandia's silicon wafer processing takes place, one half of MESA's unique pairing of silicon processing with compound semiconductor fabrication under one roof.
HTMLText_8ABEDA31_82D0_E016_41BF_7CE26A2FC635.html =
898 Corridor


State-of-the-art videoconferencing and stunning projection capabilities make the VIEWS Corridor a prime viewing and sharing spot for scientific computing displays. Output from the visualization tools used in Sandia's high performance computing arena can be projected in 3D. The area is a magnet for VIP visits and tours.


HTMLText_8B829D3C_82DF_600E_41D6_C912684491DD.html =
858 Photolith Lab


Sandia maintains a clean room outside of the MicroFab area for its photolithography processes. This allows the MESA workers to develop and maintain customized processes without disruption to the manufacturing effort.


### Tooltip Image_4B679893_40F0_1B47_41CD_B75F2E07DBC5.toolTip = Close ## Media ### Title album_4715843D_5685_FA0B_41D4_800D6AEEAFDE.label = The Data Center cold air balloon album_6E4DFA15_7442_400B_41BC_F640D6C9CA25.label = Up on the roof album_6E4DFA15_7442_400B_41BC_F640D6C9CA25_0.label = CS2A5852 album_7BEA07F5_744E_400B_41D3_FBCC61A9D6CB.label = Taking a CRAC at cooler air album_A9D8D4CD_94C7_C01B_41D1_EB6F13EFDF65.label = A special piece of history map_A46BE269_A800_373C_41E2_151284923A32.label = Data Center Map[1] panorama_5A680873_5682_8A1F_41D0_4651D0C2FE3F.label = Cooling Distribution Facility panorama_5A681454_5682_9A19_41C1_5303877AD719.label = Computing Museum panorama_5A68C2A1_5682_7E3B_41D1_5CF4663E2F21.label = LEED Data Center panorama_5A694FA3_5682_863F_41CE_2A80320FBA54.label = Data Center Annex photo_07717C0A_1012_9BD5_4172_18AAA6194471.label = Computer Tape photo_59AB56CF_7C4E_4017_41B2_D62C15DBDE81.label = 726 STILLS5389 photo_59FCBB6E_7C5E_4019_41D0_54ABBC273940.label = 726 STILLS5385 photo_5ABD806D_7DCF_C01B_41D0_2AE6C47D066A.label = 726 STILLS5348 photo_5BAEA148_7DC2_C019_41D3_3E494F0E37B1.label = 726 STILLS5398_CROP photo_6045658A_7442_4019_41D2_4B40A90BEFCA.label = HPC MUSEUM STILLS5274_Monitor photo_6097D731_7442_400B_41D3_EB8141569970.label = Bit to Brontobyte photo_60A215F2_7442_4009_41D0_AE16A3202D8F.label = Redstorm_crew photo_60B0A693_7442_400F_41CE_A97ABDA0210C.label = asciRed photo_6195D784_743E_4009_41DB_1780EE9691AF.label = IBM_Electronic_Data_Processing_Machine_-_GPN-2000-001881 photo_61DDC920_743E_4009_41D1_EECC315C9A30.label = Dave supercomputers photo_6244D9C5_75C2_400B_41A9_3954186D08C8.label = Telecommunications Displays photo_6340C754_75C6_4009_41CD_FBBB9124786E.label = Remember Factoring photo_63C58573_75C2_C00F_41DC_CACA82AC3EC6.label = HPC MUSEUM STILLS5325 photo_649EB2AC_7442_C019_41B7_850E7446583E.label = HPC VT_4235 photo_6503ACDF_7447_C037_41A1_A505D5EBB706.label = IMG_2752 
photo_6517ED0A_7442_4019_41D9_1A26C1C57B97.label = HPC VT_4117 photo_656FF4A8_744E_4019_41C3_772CCAF0A1DB.label = HPC VT_4283 photo_65E520F3_7442_400F_41BB_A06FC31E6F6F.label = HPC VT_4152 photo_6658A8D0_7443_C009_41D7_DA9471BAFF5F.label = HPC VT_4171 photo_66719FAF_7C46_4017_41C9_335E9AACA8A7.label = 726 STILLS5337 photo_66FBB09C_74C2_4039_41BF_F8FD3B51074D.label = HPC VT_0701 photo_67080A06_7C46_C009_41A2_488D3FAE171D.label = 726 STILLS5392 photo_674DBE93_744F_C00F_41D8_2C09240975A9.label = HPC VT_4161 photo_676303A5_7443_C00B_41D5_84607D894F37.label = HPC VT_4293 photo_67B702BA_7C42_C079_41CD_7BF2039A3985.label = 726 STILLS5371 photo_67FF27F4_7446_C009_41D4_C42307459CE1.label = HPC VT_4142 photo_68020C9B_743E_403F_41C9_E5A4C82DD8FB.label = 725 STILLS5471 photo_68229379_7442_40FB_41D6_F8BB62D56C0A.label = 725 STILLS5462 photo_690612F3_7441_C1F2_41D0_5FF0C29E4F35.label = 725 STILLS5468 photo_69A07C73_7442_400F_41C6_2F3CCB137562.label = CS2A5864 photo_6B1179B5_74C2_400B_41C8_C91CD91E0654.label = 725 PANO5476_retouched photo_6D4292D9_75C2_C03B_41BE_DF7B9D01560F.label = HPC MUSEUM STILLS5310 photo_6D76D3C5_75C2_C00B_41DA_A5BC55BAB5D7.label = HPC MUSEUM STILLS5307 photo_6DFA978F_7442_C017_41D5_A454BF435A9C.label = HPC MUSEUM STILLS5303 photo_6E1D2216_7442_4009_41D1_ECD9DFDFA836.label = 725-B_retouched photo_6ED9686A_7446_4019_41D5_0ACDCF854095.label = 725 RC-0844 photo_6F0F0FA2_7446_4009_41A4_B909852B1B14.label = HPC MUSEUM STILLS5300 photo_6F6C2B04_745E_4009_41DA_0F02BF27EB2A.label = 725 STILLS5473 photo_7B6E1144_7441_C009_41D6_BDA7FFEE1EFA.label = Eclipse photo_8CCDB83A_9DCC_1E19_41C0_A9E2A4DA9866.label = HPC MUSEUM STILLS5318 photo_AAAE063E_943E_4079_41D9_C85A3D4D048C.label = HPC VT_4186 photo_ABE14626_94CE_C009_41D3_E965F70EA598.label = 725 RC-0977_retiuched photo_BC481D6D_A800_6D34_41D3_75C4FD63C1CE.label = Computer Hard Drive video_5BF39B13_7C46_400F_41C5_597EE70BA31D.label = 726 Walkthrough video_66433713_7447_C00F_41BB_8140B66A98FB.label = 880 Server Install 
video_69D32D0B_7442_C01F_41DB_44DC44F37FC1.label = ASTRA_Timelapse_Front_View_FINAL_3840 video_6B954BA6_74DE_C009_41CC_49E83E859544.label = 725_CoolingSystem video_6D448115_75C3_C00B_41CC_30EE5F07713F.label = CRAY-2 video_6D9E0ECF_75C3_C017_41CB_9190CB28009A.label = HPC_Movie_Long_FINAL video_6E056B1F_7442_C037_41D0_7AC8AB73AE3E.label = THE HOOK video_8D2DCE28_9DDC_1239_41E2_4C3FDFB21EC4.label = airflow1 video_AB3B606B_94C2_C01F_41DB_FE17F9572532.label = airflow1 ## Popup ### Body htmlText_46D7AB26_568D_8E39_41CA_44B2A7148E3B.html =
You've heard of hot air balloons, but what about a cold air balloon? Back in 2005, Sandia built a computer called Thunderbird, and at the time it was the fifth fastest computer in the world. All that speed also meant it generated a lot of heat. The system was air cooled, but the problem was there was no air containment: the air flowing over the system to keep it cool was actually moving too quickly, and that was heating up nodes near the floor, not keeping them cool. The solution was remarkably simple; put a tarp on it. Yup, a tarp was put on top of one of the fastest computers in the world. That tarp helped create a column of contained air that rolled back down to cool the machines. It worked so well that when one of the managers at Sandia heard about it, they decided to replace the tarp with the same material used in a hot air balloon. It was put over the entire aisle and became Sandia's first air containment system. This system was used for the entire life of the Thunderbird system, and was another example of Sandia leading the way in engineering solutions for high performance computing.
htmlText_58CDC56A_7C41_C019_41CB_82863CD0B6B2.html =
This guy with the label maker isn't really doing anything important, he's just posing for the picture.
htmlText_58E733CC_7C41_C019_41D9_3F2F62F8CEC5.html =
Keeping systems cool is a vital part of any data center. That makes this facility one of the coolest at Sandia: because it actually keeps things cool, and because walking through all those pipes and big pumps is a unique and very cool experience.
htmlText_5ABF837E_7DC1_C0F9_41C5_31594093FD28.html =
These pumps transfer cooled water from this facility to the Data Center to keep the supercomputers running cool. The pumps push 5,000 gallons of water per minute through underground pipes to feed the high performance cooling systems.
htmlText_5AFA3971_7DC2_400B_41D3_BD45A6D0E627.html =
Throughout the Data Center tour we talk about the importance of keeping all those supercomputers cool. That's such an important task that Sandia actually has two buildings dedicated to cooling. This building and its counterpart help pump around 5,000 gallons of water per minute through 10,560 feet of piping that feeds the cooling systems of the Data Center across the street.
htmlText_60011CCC_744E_4019_41DA_A8C245AF02D1.html =
During your tour through this museum, you'll read a lot about FLOPS, but what exactly are FLOPS? Well, in this case, it has nothing to do with trying to get the other team carded in a soccer game. Rather, FLOPS stands for "floating point operations per second" and is basically a measure of computer performance and speed: how many floating point calculations, such as additions and multiplications, a machine can perform within one second. Thus, the CDC 6600, the world's fastest computer from 1964 to 1969, had a processing speed of about 1 megaFLOPS (one million FLOPS). ASCI Red's peak was 3.2 teraFLOPS (a teraFLOPS is one trillion FLOPS), while its successor, ASCI Red Storm, reached a peak of 284 teraFLOPS. Machines have since surpassed a petaFLOPS (one quadrillion FLOPS), and in 2020, Japan's Fugaku supercomputer clocked in at a speed of 415 petaFLOPS.
htmlText_60734D42_7441_C009_41CB_DD3934C32F70.html =
If you'd like to read more about what takes place here at the Data Center, be sure to check out this series of annual reports that highlights various accomplishments and how HPC is helping with real-world problems.
HPC Annual Report
htmlText_607B873E_7442_4079_41D2_A3D3A18F159E.html =
This door leads into the actual supercomputer annex where many of Sandia's "big machines" are kept. You might hear an ominous hum as you approach it, but don't worry, that just means there's a lot of power in there. Not only is it loud, but it's also kind of warm in there.
htmlText_60A67701_744E_C00B_41D3_33C6BA32FA77.html =
This monitor displays who's currently inside the supercomputer annex. It quickly shows who's working, who's visiting, and who's bringing donuts (actually, it doesn't really show that last one).
htmlText_60C127D4_7442_4009_41CB_1A6B0138A4D2.html =
Sandia and Cray collaborated on the XT3, which was installed at Sandia as Red Storm in 2003. In 2004, Cray made the XT3 available commercially. Red Storm was actually constructed from commercially available off-the-shelf parts. The interconnect chips of Red Storm allowed it to efficiently pass data between its over 10,000 processors even while applications were running. Those interconnects also made it possible for this machine to create three-dimensional meshes to make 3D representations of complex problems. Its 40 teraFLOPS helped it set world records for high performance computing (HPC) visualization and for two of the HPC Challenge benchmarks.
htmlText_60CC8F0B_7442_C01F_41D6_CD231E6425A5.html =
With so much wire, so many parts, and so many computers in the Data Center, one has to wonder: what happens when something breaks or loses connectivity to the network? It's not like you can run out to Radio Shack for replacement parts (mostly because, you know, they're out of business). Fortunately, the Data Center has its own Radio Shack, of sorts. The Data Center keeps plenty of spare parts on hand, like extra nodes. It also has plenty of cable for installing networks when adding new computers. Most of these cables are used to network machines together. On average, the Data Center adds about eight to nine new systems a week, and these are the parts that get those systems up and running.
htmlText_619560EC_7442_4019_41DC_47CDBC85E8ED.html =
This term is used a lot when exploring the world of high performance computing. A node is a collection of hardware consisting of just CPUs and RAM. It's what your desktop computer can do, but without all the peripherals (no mouse, keyboard, or monitor). Interconnected with no gaps or interruptions, nodes unleash the HPC potential of the hardware.
htmlText_61966A13_7442_400F_41B2_374006586630.html =
This monitor shows the electricity usage for the annex, including power provided by solar panels on the roof. It's basically a dashboard that shows the monumental power requirements for the Data Center, and provides a quick way to view how that power is being used while showing how the Data Center adheres to "green" standards of power consumption.
htmlText_61B6BFDC_743E_C039_41D4_B63356A18C21.html =
Computing history is often told and celebrated as linear progress: the increases in speed, storage capability, materials development, chip manufacturing, volume, complexity, and did I mention speed? The museum shows all of that, but it also displays the chaos of computer evolution and development. The rate of change and the plethora of problems computing may address generate a lot of technology, new concepts, and new products all at the same time. It was not always obvious which tech would win out in research and in the marketplace. There, amongst the mugs and the tan plastic towers, the museum illustrates the excitement of so many options resulting in so many solutions.
htmlText_623AAB46_75C2_4009_41A9_3358AA013230.html =
It's the Computing and Communications Museum, after all. The communications part is represented in two large displays built into the wall of the hallway leading to the main museum and the entrance to the supercomputing annex. One display covers the user side - largely the evolution from one black telephone on a main office desk, and the different tools for receiving multiple callers at one centralized location, through different styles of telephone handsets, to the very modern, very small phones we all carry. Of particular note are the massive, heavy, black, rotary dial explosion-proof telephones made out of Bakelite. These were in Sandia's explosive-handling and -testing facilities and were made to prevent dust from explosives from getting into the telephone's inner workings, where it might meet a spark. Because...boom.
htmlText_625D7C59_75C6_403B_41D8_A64D1A6E2807.html =
Unlike tape drives, on which data must be accessed serially, disk drives allow for random access storage, increasing the speed of access to the data. The phrase "hard drive" indicates the rigidity of the platters, or disks, that hold data. Hard drives usually include several platters mounted on the same spindle and encased in a protective box. The disks are read by heads - usually one per side of each disk. Over time, significant increases in storage density on hard drives have been achieved, largely via the arrangement of the magnetic material on the disk.
htmlText_62782488_75FE_4019_41CC_D8C0A1528FB9.html =
In April 2012, Sandia's Computing and Communications Museum opened. This Museum is a series of posters with timelines and display cases holding artifacts that encompass both the history of computing in general and Sandia's own computing history. It largely demonstrates the overall move from the Lab's use of commercially available computing products, its work with industry to refine and specialize some of those products for Sandia's use, and what Sandia itself has created and contributed to the history of computing. It is a living collection; new items are added as they are located and as they are created.
htmlText_62F2107A_75FE_40F9_41C8_6F960AC72584.html =
This video walks you through what high performance computing (HPC) is, what it's used for, and its history at Sandia. This is a great place to start for some background on what you'll see throughout the rest of this tour.
htmlText_63875E4A_75C3_C019_41C8_873142857F50.html =
In this display case are several models of the supercomputers of the past. Pictured is a model of a Cray X-MP system, the first multiprocessor supercomputer. It debuted in 1982 and was capable of a maximum of 940 megaFLOPS (MFLOPS).
htmlText_63B6F89B_75C6_403F_4191_65050EFCC550.html =
In 1984, Sandians Jim Davis and Diane Holdridge used Sandia's Cray 1S to crack a 69-digit Mersenne number, the longest number ever factored at the time. They broke their own records from the previous year, when they factored a 63-digit and then a 67-digit number. Using the Cray for factoring large numbers was part of the cryptographic work in computer encryption going on at that time, work that contributed directly to the secure transmission of data that allows for all of the electronic transactions we now engage in.
htmlText_648F3DF1_7443_C00B_41DA_E5D7F1685137.html =
There are a lot of blinking lights as you walk through the Data Center, but they aren't just there for show; they indicate what's happening in the system. Some of the lights are status indicators: green means everything is functioning as it should, while an amber light indicates that something may be wrong, like a bad connection to a node. As for the blinking, well, that just means the machine is hard at work processing data. There are also network connectors that use lights to indicate speed, so yes, there are a lot of blinking lights, all serving as at-a-glance indicators of how well (or not) the machines are running.
htmlText_64BF861F_7442_C037_41C5_6F2E151801B6.html =
Sitting off to the side in the Data Center is a piece of hardware that is now obsolete but is also responsible for everything that goes on inside this room. This is a CRAY 1. The first CRAY 1 was deployed at Los Alamos National Laboratory in 1975 and had a top speed of 160 megaFLOPS (MFLOPS). All those boards on a CRAY 1 put together are the equivalent of a single processor core in a modern computer.
As for all those wires you see, they connect the transistors and were pin-to-pin interconnects that helped make the processor work. Each wire was hand-cut to a specific length: an oscilloscope was used to time the signal through the wire, and that determined how long each wire had to be. This was done for each and every wire in the machine, and if a single wire failed, a new one had to be cut to the exact timing specifications. To get the wires into the machine, Seymour Cray, inventor of the CRAY, hired seamstresses to do the wiring. Their deft touch and attention to detail in cutting those wires made the CRAY 1 possible, and they eventually became known as the cable ladies. While it may seem obsolete sitting next to the shining machines of the future, the CRAY 1 is actually a bit like an Egyptian pyramid: an ancient (in computer years) technological marvel that would be difficult to duplicate even today.
htmlText_64E5E423_7446_400F_41D5_9D6ABD236DF8.html =
The Data Center uses a lot of power, so what happens in the unlikely event that the power goes out? That's where these unassuming cabinets come into play. They house row upon row of lithium ion batteries, each of which could power your entire house for around eight hours. All critical equipment (anything associated with safety or continuity of business) in the Data Center is backed up by battery and generator. If the power goes out, these batteries kick in and help get the generators started. These batteries and the generators they connect to make sure that even when there's no power, the machines in the Data Center will never experience an unexpected shutdown. Naturally, the supercomputers draw far too much power to actually run off of batteries (they would last mere minutes), hence the need for the generators.
htmlText_651EF4CB_744E_C01F_41D2_125D1CB15765.html =
Well, it's not exactly like that, but when there's an issue with a system in the Data Center, these carts are used to plug in and diagnose the problem. The carts, known as crash carts (a bit of borrowed medical terminology, as they're used to look at the "patient", or to triage what's going on inside one of these machines), allow researchers to check the system and find potential trouble within a node. Once the problem is diagnosed, the proper steps can be taken to get the system back up and running. No need to stand clear, though, because no defibrillators are used in these scenarios.
htmlText_663561F8_74C2_43FA_41B1_50A9A5BC3F83.html =
There's a lot of data stored at this facility, operated by Sandia Mass Storage Systems (SMSS), whose sole responsibility is high performance data archiving. These tape libraries are used by the high performance computers for storage, especially long-term storage, so data is migrated from disk to tape libraries like this one. The library holds that information for many years, meaning there are many petabytes stored here. Whenever information needs to be retrieved, a robot finds the right tape and loads it into a drive in the storage rack so the data can be read.
htmlText_664CEDB4_744E_C009_41CE_C3C74D441FBD.html =
Sky Bridge was the first direct-to-chip liquid cooled system at Sandia. This display shows a computer node with a liquid-cooled central processing unit (CPU) and a cutout of the coolant distribution unit (CDU) that distributes water to the CPUs in the rack. Water cooling not only helps keep the temperature down, but also keeps the noise level down.
htmlText_6653B0A4_7442_4009_41C6_C2B7FCE3FEDA.html =
Sky Bridge first entered service at Sandia in 2015 and at the time was the primary unclassified scientific computing platform at Sandia. It doubled the available computing cycles, and its use of the existing Advanced Simulation and Computing (ASC)/TLCC2 compute cluster meant that existing computer codes could run on the system without any modification. It also played a major role in the transition to liquid cooled machines. It has a maximum power draw of 675 kW (kilowatts) and a max performance of 520 teraFLOPS (TFLOPS). Its liquid cooling system and more efficient power distribution saved $700,000 in construction costs and reduced annual operating costs by $120,000. Why the name "Sky Bridge"? Well, the system used the Sandy Bridge chip, and it took the place of the Red Sky system, so put those together and you have Sky Bridge.
htmlText_665FED19_7C46_C03B_4166_8C6CB236E3D3.html =
Most of this building is made up of an intricate maze of piping, all of which is used to transfer water to the Data Center in order to keep the supercomputers cool. There are approximately two miles of piping between this facility and the Data Center, much of it underground. The pipes you see here are really just the tip of the iceberg in the massive effort to keep the machines of the Data Center cool.
htmlText_66729352_7442_4009_41D8_CDAAACA5BFE5.html =
You will notice the glass panels in the floor as you walk through the racks of high-powered computers. They serve multiple purposes. First, it just looks kind of cool. Second, the space under the floor is how the data center gets liquid cooling to the computer racks, so the glass tiles have a very practical function: during walk-throughs, one just has to glance at the temperature gauges on the pipes through the glass to see if everything is functioning as it should. It's also easier to notice any leaks, all without ever having to remove any of the tiles. Finally, since the tiles are moved less often, the glass helps from a safety perspective as well.
htmlText_66B95ED2_7C5F_C009_41D2_82C61814F90E.html =
This panel controls the settings for the energy efficient chiller. It's the brains of the unit, and it makes sure the temperature isn't too cold or too warm so the machines in the Data Center don't overheat.
Note: the plant next to the panel isn't real, but provides some nice greenery in a facility that's mostly just pipes.
htmlText_66E43294_74C7_C009_41CC_7D2E33085496.html =
The Data Center already has plenty of fancy lighting, so all it needs is some music to turn it into a really hip club. Well, of course it will never really be a club, but it does have a great sound system. There are JBL speakers embedded in the ceiling of the data center, so if researchers want, they can practice the Macarena while their data is processed.
htmlText_66F5BCCC_7C42_4019_41DB_C3CCC216FEDB.html =
This complicated set of controls is a water treatment station. The water quality of the facility is tested on a regular basis to make sure the pipes are staying clean and the chemical makeup of the water is correct.
htmlText_6741B5E9_7C41_C01B_41B1_531075632231.html =
At first glance, one might assume the label on this piece of machinery refers to how much it weighs. However, that 1,300 lbs. is actually how much refrigerant the chiller can hold. That refrigerant cools the water that cools the computers in the grand circle of cooling at the Data Center.
htmlText_6763D805_744E_400B_41BA_AFB984177DA8.html =
Tom Brady, love him or hate him, is one of the most successful NFL quarterbacks of all time. Between 2016 and 2018, he appeared in three consecutive Super Bowls and won two of them. Overall, he's made it to ten Super Bowls and won seven (so far). Well, ASCI Red, a collaboration between Sandia and Intel that began in 1995, had a similar string of success during the late 1990s. It was the number one system on the Top 500 list of the world's fastest supercomputers for four consecutive years, from 1997 to 2000. ASCI Red was the first computer built under DOE's Accelerated Strategic Computing Initiative (ASCI) program to support the Stockpile Stewardship Program. Running with just three quarters of the machine in place, ASCI Red reached 1.06 TFLOPS, making it the first supercomputer to exceed one teraFLOPS. After memory and processor upgrades in 1999, it achieved 2.38 TFLOPS. Further upgrades to Pentium II Xeon processors brought performance to a maximum of 3.2 TFLOPS.
htmlText_67AF21DC_7C42_C039_41DC_B7BC143A2893.html =
This cylinder-looking thing is temporary storage for refrigerant. Whenever the main chiller needs maintenance, the refrigerant is moved into this container while the main unit is worked on. Think of it as a refrigerant IV for a sick chiller unit.
htmlText_67B7DC29_7441_C01B_41C8_AA8AADA5EFDB.html =
In the Data Center, the computer clusters are all networked together, which naturally takes a lot of wiring. This interface for the system allows the clusters to talk to each other as if it were all one big system. The network topology - the arrangement of the nodes and how they communicate with each other - defines what these wires look like, and in this case it's a fat tree topology, a hierarchy resembling the branches of a tree. All of that wiring adds up, and for a machine like Eclipse, there's about 9.86 miles of copper and optic cabling involved.
htmlText_67D0C660_7442_4009_41DB_3246A186611B.html =
These rows of machines may all look similar, but they actually serve several different purposes. The first row is a series of virtual machines for the Sandia enterprise. Using these virtual machines saves on the number of actual machines needed to run daily tasks: one rack equals what would have been one full row of computers. The next three rows are used for the common engineering environment to solve engineering problems for Sandia's wide variety of customers. The last three rows are advanced architecture test systems, or test beds. These are used to test and prototype the newest in emerging hardware and software technologies (they may not exist anywhere else in the world), which are tested here to see if they're viable for use in the next generation of supercomputers. They're also used to test new applications to see if their codes will run on Sandia systems. Other test beds are used to assess software for bigger machines at other labs, such as Lawrence Livermore (CA) or Los Alamos (NM). While they're all used for different purposes, these rows of machines have one thing in common: they're cooled by laminar flow.
htmlText_67D6480D_7442_C01B_41D5_02D2C6182456.html =
When it's time to install a new enclosure in one of these racks, it's not quite the same as replacing the hard drive in your laptop. In fact, it requires a little more heavy lifting. This forklift-like machine is used to install each node; it can lift, replace, or install new servers in the rack.
htmlText_68474D0E_7442_C019_41D9_2FF94C4898F4.html =
This room is where everything stays cool. The blue pipes are the main warm water process loop that supplies water directly to the high performance computers on the floor of the data center. Think of it like the arteries and veins in a human body, cycling fluids to keep things running smoothly and cool.
htmlText_68A2BD42_743F_C009_41B0_C82C48C3C21B.html =
This is the system console; it allows someone to connect to one of the big machines and drive it from this console. As for the big speakers, well, like in the 880 Data Center, sometimes computer engineers need a little music playing in the background to help keep their productivity high.
htmlText_68FE3794_74C2_C00A_41BF_C868FC1EAD36.html =
The Man: Steve Attaway spent over thirty years at Sandia, and during that time he was a researcher, a mentor, and a pioneer. He broke new ground in parallelizing code so it could run on thousands of cores at once, and he always made time to invest in and help develop the next generations of researchers who would continue to use the resources in Sandia's data center to solve the nation's "big" problems and challenges.
The Machine: The Attaway Cluster is a bleeding-edge machine, clocking in at around 1.93 petaFLOPS (PF) while also using a next-gen water cooling system. It has 1,488 nodes, with 192 GB of RAM per node. The cooling system uses a negative pressure liquid cooling solution, which not only keeps the machine cool but helps keep it leak free. (See the video "Negative pressure, positively cool" for more.)
htmlText_695C0AEC_7442_C019_41CD_419E4BFB6242.html =
A lot of the work to keep the machines cool takes place under the floor. Here, some workers are connecting 4" pipes that will feed warm water coolant to the large computers to remove heat. This method is far more efficient than past, traditional methods that removed the heat only with air, which was not very effective or energy efficient for these larger powerful computers.
htmlText_6966D12B_7443_C01F_41C2_553DF17D965B.html =
These large, 4" hoses carry chilled water to the CDU (cooling distribution unit), which in turn runs the now-warmer, medium-temperature water from the process loop through a heat exchanger inside the CDU. It's just one part of the complex process of keeping some of the world's fastest computers from overheating in the desert.
htmlText_69C6A6C4_7442_4009_41B4_F8CA6F0565B9.html =
What does it take to install a new supercomputer? Well, as this time-lapse video shows, it takes more than the few cords you plug in for your home desktop. It takes a lot of time, a lot of planning, a lot of power, a lot of cords, and a lot of people.
htmlText_6B2A3DD4_74DE_4009_41DA_EAAF34B4B323.html =
A negative-pressure liquid cooling system was designed for the highly advanced Attaway Cluster to help keep it cool and leak-free. The system keeps the machine's temperature at around 50 deg C with a low power draw. As for the negative pressure, the system essentially works in a vacuum, which means if there is a leak, instead of spraying out, the liquid gets "sucked" back in (more accurately, air is drawn in instead of the liquid leaking out). This advanced cooling technique not only prevents leaks, but means the system can continue to operate even with minor leaks present. It also makes maintenance easier, since disconnecting a server automatically evacuates the water, leaving the system dry and ready to be worked on.
htmlText_6CFC49B4_7441_C009_41DA_4CDB815BA531.html =
In the past, working on supercomputers could be dangerous, and not just because they might turn into Skynet...
htmlText_6D1D9225_7443_C00B_41D4_DBDF76D21B6E.html =
The XT3 machine had dual-core processors that ran at about 10 GFLOPS per core, putting an entire compute cabinet at just under 1 TFLOPS. While that was better than ASCI Red, today's technology can do that with a single chip. This board was part of the upgrade that took Red Storm to XT4 technology and 254 TFLOPS. It used SeaStar, a proprietary network interconnect developed by Cray, Inc. and Sandia that linked node to node in a 3D torus topology. An algorithm calculated which paths were available (kind of like a Terminator brain), so if a connection was disrupted, a redundant path could be used, keeping the machine online. With these upgrades, Red Storm climbed as high as second fastest computer in the world (it never did make it to number one).
htmlText_6D229E2F_75DE_C017_41A3_19A98265F53D.html =
This is a Cray-2, of which there aren't many left in the flesh...or...in the wiring, or whatever... The Cray-2 debuted in 1985 as the world's first gigaFLOPS supercomputer, meaning one Cray-2 was as powerful as ten Cray-1s. It was also the first supercomputer to run a modern UNIX operating system. What makes this machine truly unique, though, is scarcity: only 24 were ever made. This particular one was found in a condemned building, sitting next to a bunch of legacy hardware, basically a "barn find" hidden under a tarp and a thick layer of dust. Moving it required a specialized lift, of which there is only one in the entire world. At the time, the lift was at the British Museum, and it was shipped to Sandia to help move this heavy Cray from the obscurity of a barn to the prominent place it now holds in Sandia's Data Center.
htmlText_6D270E7A_75C2_40F9_41B4_E9697F352D20.html =
Considering how fast technology changes, these relics are the fossils of the computing era. Starting on the far right there's a Sony WatchCam; next to it is a vintage Macintosh SE; in the middle is a Compaq "portable" PC, considered by some to be the first laptop despite weighing 18 lbs; and on the left is a TRS-80 Model III (often referred to as the "Trash 80"). Directly below it is a TRL line printer, which happens to be from Radio Shack. On the bottom right is an Atari 800XL, the gaming company's attempt at also producing PCs. And yes, that is some vintage, green-and-white, dot matrix printer paper sitting there in that box.
htmlText_6D36C425_75C1_C00B_41D4_B28D4A9AAB52.html =
The Cray XT3 I/O board provided network and storage-system connectivity. Hundreds of these boards connected to 44 storage cabinets that held over 40,000 hard drives. Around 60 km of optical cable (roughly the distance from Albuquerque to Santa Fe) tied the system together. These boards were the access point to the supercomputer: users logged into these nodes to tap the machine's full potential.
htmlText_6D5C7D4A_75C2_4019_41D1_0A518A5C13D2.html =


htmlText_6E35A10D_7442_401B_41D3_A216B963CFEC.html =
Most data centers don't put windows in the building, but this one is unique. Not only does it have windows, but they help save energy. The windows face north, so the sun never shines directly into the Data Center, but they still let in plenty of natural light. That means the lights in the data center aren't always needed, which helps save energy. Plus, it's always nice to know if it's raining outside.
htmlText_6EC7509D_7447_C03B_41D2_593A3665639D.html =
In 2018, Astra was the world's fastest Arm-based supercomputer (Arm is a type of computing processor) according to the TOP500 List, with a speed of around 1.529 petaflops (PF). Not only was Astra the fastest, it was among the first supercomputers to use processors based on Arm technology, which, ironically, was originally used mostly for low-power mobile devices, including cell phones and tablets. The Arm processors in each of Astra's nodes, however, are about one hundred times faster than the ones in a cell phone, and Astra has 2,592 nodes. Feel the power!
htmlText_6F15BA86_7442_C009_41C0_0C37CCF05227.html =
This building was designed with expansion in mind. The wall can come down to divide the building, or to open it up. The other side contains the National Security Computing Center.
htmlText_6F17170C_7441_C019_41CE_738FCD5FF3DA.html =
The computers in this row are from Lawrence Livermore National Laboratory and were about to be decommissioned. Since Sandia runs a similar system in the Data Center, and since the manufacturer is no longer producing support components, they were brought here for spare parts. The computer was named Cab (after a wine of some sort, maybe a nice cabernet) because Livermore is in wine country, not chile country.
htmlText_6F17E104_7446_C009_41C4_B65922AD1ACC.html =
These three boards are from Sandia's nCube machines, originally installed in 1987. The original nCube/ten processors were 200 KFLOPS (that's kiloFLOPS) processors with 512 Kbytes (kilobytes) of memory each, giving 200 MFLOPS (megaFLOPS) peak and 512 MBytes (megabytes) of memory for the entire 1024-processor machine. One of the boards served as an I/O board, with 16 processors filling one quarter of the board and a pool of associated memory. The other is an empty 64-processor nCube/ten board that proved defective but became critical to the reliability of the machine when the original 1024-processor configuration demonstrated cooling problems: the empty board was inserted next to the bank of processor boards to promote the desired airflow within the system. The third board is an nCube 2 board that dates from 1990 and has a full 64 processing modules on it; that is, 64 daughter cards, each with a CPU and six memory chips. Each processor on this board was a 2 MFLOPS processor with 4 Mbytes of local memory. Each board cost $250 when purchased.
htmlText_6F55E68B_745E_401F_41D9_AEDEE73B341F.html =
Many of the test systems in the Data Center use unique names. For instance, the Yacumama system, a liquid cooling technology test system under evaluation, is named for a mythical sea monster. See, makes sense, right? A liquid-cooled machine, sea monsters live in the water...and it just sounds cool. Naming the computers is a fun event, with names and themes passed back and forth among the various personnel who have worked, or will work, on the large machines. One other recent theme is chile peppers (this being New Mexico, home of great chile), ranked by heat, with the hotter peppers going to the computers that produce the most heat while running at very high speeds.
htmlText_6F9F425E_7442_4039_41B8_2DE221B2B70F.html =
High-Performance Computing (HPC) systems consume substantial amounts of energy to perform large-scale computations. A byproduct is a substantial amount of heat, requiring stringent cooling regimens to keep the computers running. Although a typical building is designed to meet heating and cooling needs for the comfort of its human occupants, an HPC data center must provide massive cooling power for its banks of servers. This usually results in high water and energy usage, so employing energy efficiency and conservation techniques in building data centers is crucial for operation and for reducing ongoing costs. Many green building strategies and innovative new systems were implemented in the 725E Data Center to earn it LEED Gold certification.
htmlText_6FB9C90A_7446_C019_41D8_B846DF8E40EF.html =
The 725E LEED Data Center is also equipped with a Thermosyphon Cooling System (TCS), which uses passive heat transfer to make the building more efficient. The TCS saved more than half a million gallons of water within a six-month period. One of the unique features of the TCS is that its cooling unit works passively. Its refrigerant rests in a shell that surrounds an outgoing pipe like a glove on a hand, absorbing heat until the liquid evaporates into a gas, much as boiling water becomes steam. The gas rises in vertical pipes until it reaches the upper limits of the device. There, it gives up its acquired heat to the atmosphere, coalesces back into liquid, and sinks down, ready to cool again.
htmlText_6FFE0A51_747E_C00B_41D7_E259230383D8.html =
The Airside Economizer brings fresh air in and cycles it to the floor for cooling. Heat generated by the computers rises, and some of it escapes outside via exhaust, while a portion remains at the 25-foot-tall ceiling, allowing a more constant indoor temperature of around 78 degrees. The roof of this Data Center also has a cell deck (like a swamp cooler on hyper steroids!) that is used only about 17% of the year. The rest of the time, the economizer simply cycles the cooler outside air into the data center.
htmlText_7A314622_7442_4009_41C3_7D8C5127784F.html =
Eclipse, so named because it's the name of a New Mexico chile and it was purchased in 2017, the Year of the Great American Eclipse, came online at Sandia in 2018. It was part of the Commodity Technology Systems (CTS) project that added almost 300,000 compute cores to Sandia's high performance computing capacity. At 1.2 petaFLOPS (PFLOPS), just one Eclipse has the same compute capability as two Sky Bridge clusters. Eclipse also incorporated innovative, direct water-cooled processors that help lower operating costs and increase energy efficiency.
htmlText_7B082441_7441_C00B_41A2_44AF2A07D23D.html =
The CRAC, or Computer Room Air Conditioner, is one of the many systems in the Data Center that keep these machines cool. The red lights on top mark where the filters are, and this is where hot air enters the system. The hot air comes in at the top, passes across the filters and cooling coils connected all the way back to a big chiller, and is then pushed under the floor to help cool the computers. To increase efficiency, the system adjusts to the temperature in the room: if the computers are working hard, it moves more air to keep things cooler, but when things are slower, it pulls less air to save energy. Average air temperature from the floor is between 72 and 78 degrees. This Data Center is one of the top twelve in the world for energy efficiency, thanks to systems like this. This system saves around 26 million gallons of water a year, which is vitally important in a desert environment like New Mexico.
htmlText_8C08276E_9DCC_1239_41D3_C7D4971B7962.html =
3M introduced the first magnetic tape in 1947 for audio recording, but its use for data storage dates to 1951, when it appeared with UNIVAC I. Initially, tapes were large (10.5 in.) open reels, giving way to cassettes and cartridges beginning in the 1970s. Although it is gradually being replaced, magnetic tape is still in use for backing up infrastructure computing systems.
htmlText_8CCA7062_9DD4_2E29_41D2_8E9D194AC995.html =
Sandians attend computer conferences, vendors give out branded items, and...yeah...what else are you going to do with them; drink coffee out of a different mug every day? Better to just put them in the display case.
htmlText_927CB419_9DB4_161B_41D7_F1ED1CBD57A6.html =
Sandia is continually looking for ways to reduce energy use in high performance computing (HPC), which includes not only the efficiency of the compute platform but the effectiveness of the infrastructure and facilities. As Sandia strives to make its HPC centers the most energy efficient in the world, it is addressing the energy problem with a system view, from the processors to the water cooling and distribution systems. New Mexico's climate makes it possible to use non-mechanical cooling (water pumps, as opposed to previous systems that were turbine air-cooled and pushed air through the cabinets), which, when combined with warm water (~ 75 F to ~ 85 F), saves considerable energy. Sandia will soon be able to capture energy from elevated-temperature return water (~ 95 F to ~ 115 F) to support indirect energy needs of the facilities, such as process water, domestic hot water, and adsorption cooling processes.
### Title window_0028C659_1032_9477_41A3_B257FD64F868.title = It's the Insides that Count window_003065E4_1037_F45D_41A1_FE5E0314905C.title = Men at Work window_00331247_1011_8C5B_4143_1DEE06B319F4.title = The Data Center Cold Air Balloon window_003A4354_0F9D_2A70_41AD_624668A50AE5.title = Keeping the Lights On window_006C1008_1031_8BD5_41A0_C0ED11226050.title = The Attaway window_00704554_102E_B47D_4199_49688C7EC24E.title = Pay No Attention to what's Behind the Wall window_00808598_10F7_94F5_41AA_4898B146CD01.title = What are FLOPS? window_0088C2C6_0FB8_55A4_4175_43C9810E5CDB.title = The Tom Brady of Supercomputers: ASCI Red window_0088DB8A_0FB8_CBAC_4190_A9E44C265CED.title = Some Night Table Reading window_00A3A951_0FB8_54BC_41A4_3F79059E436B.title = A Window to the Past window_00AD49D1_0F9D_2670_4199_8CE339ED6074.title = Plugging into the Matrix window_00AE0B7B_1031_9C2B_4183_1D1DD3384C12.title = Astra: Look Ma! Fast Arms! window_00C2B69D_1072_94EF_41A2_FC2BAAD20F49.title = Computing with Tape window_00D5A411_0F96_EDF0_41A3_16F7EAD50FC3.title = Club HPC window_00F9B3AC_1011_8C2D_415D_4B04624CC40C.title = What Exactly is High-performance Computing? window_0101CB60_1012_9C55_418F_149AAF25E290.title = A Special Piece of History window_010B1409_1033_8BD7_4166_1D77CF0A90BE.title = That's the Power of Cool window_0123E195_1012_8CFF_4175_63B3CBD8764A.title = What's with the Mugs? window_012E70C1_10F3_8C56_419A_04F15951191B.title = Using a Negative to Keep Things Positively Cool window_01348583_10F1_74DB_4184_9B436E5B6264.title = What's a Node? 
window_01416AD9_1031_BC77_4198_89E71DAAFF69.title = Avoiding Brain Freeze window_016768D7_1031_9C7B_419D_8B903F19312D.title = Portal to Power window_016EFB93_10F1_7CFB_4162_BE09DD313B24.title = Test Systems and Their Unique Names window_01844679_10F1_F437_41B1_089F9C8EAACE.title = The Telecommunications Part of the Computing Museum window_019AE700_103E_F5D5_419F_1C8BA77C1C70.title = Now that's a Lot of Cool window_01F5F751_0FA8_DCBF_416A_97E23766DA11.title = The Perfect Red Storm window_0235E8BD_10F2_BC2F_41AF_BC274522EBD2.title = Time to Pump Up window_024466DF_10FE_946B_41AF_37159354DCB0.title = Seeing the Light window_02C7F8E7_1016_FC5B_4198_92D3665AE0B0.title = That's a Lot of Wire window_02E17744_10FE_F45D_41A3_CA4A29CF53FE.title = A Stroll Through a Really Cool Place window_0788DF9E_1012_B4ED_419C_35A15135EF4B.title = Tape no more; Computing with Hard Drives window_1E2F1D3D_0FE8_CCE4_416F_457D290F10B7.title = Up on the Rooftop window_1E39AFD4_0FF8_4BA4_419E_69873EE047CF.title = Keeping an Eye on Things window_1E4B4A0D_0FD8_34A4_41A8_1F9B984F7133.title = The Cray-2; a Closer Look window_1E6DCE4D_0FF8_4CA4_41A4_77342EE10419.title = Installing a Super Computer window_1E78C696_0FE8_DDA4_4198_364D39028187.title = Parts to Spare window_1E95C3DC_0FE8_7BA4_41AC_5277429CD1F3.title = Miles Long Pipe Maze window_1EBCEC6C_1012_942D_4177_B600DA3F94AE.title = A Bridge to the Sky...Sort of window_1EC6ADCD_0FE8_CFA4_41AE_7996E00A0579.title = Making a Greener Building for Super Computers window_1ECDF1DC_0FE8_57A4_41A4_7B460DF402EC.title = Be Careful or be Terminated window_1ED58093_0FE8_55A3_419E_198A2F716A6D.title = Like a Garden Hose, but Bigger window_1EFA0362_101E_8C55_41AB_DF2E89D7FD11.title = Looks can be Misleading window_1F19B6AF_0FF9_DDE4_41A7_1E97DC199A6B.title = Blue is Cool window_1F231D3F_0FB8_4CE4_41A3_A0E8061CFA7B.title = A Monitor for Monitoring window_1F2C2C9A_1012_F4F5_41A8_6F76A8B45886.title = A Heavy Load window_1F3D30CA_1012_8C55_4164_4B47362FE988.title = 
Just in Case window_1F410898_0FE8_F5AC_41AD_59914D4DFB9F.title = Get Me a Refrigerant IV, STAT! window_1F5C68D1_0FAB_6670_41AB_1534294F0F50.title = Keeping it Cool window_1F6FE217_1011_8FFB_41A1_AF8E8F3166A3.title = It's a Little Light that Blinks window_1F868C63_0FAE_FE50_4165_E17AE8AF6F00.title = Casting a Long Shadow window_1F8DD9F9_1011_FC37_4187_0B62BD6136C8.title = Staying Cool, Saving Energy window_1F8F6F05_1032_F5DF_41A4_36CE36DDF4C5.title = A Real Poser window_1F8F8855_0FD8_54A4_4195_E85E6C5B062E.title = A Peek on the Inside window_1F8FC12D_1031_8C2F_41AE_0ED478195A68.title = Who's There? window_1F983812_1033_9BF5_4192_A59A673BA64B.title = The Power of Cubes window_1FB0D359_1011_8C77_41AF_A7FD74085C2D.title = The Cooling Beneath Our Feet window_1FB913C5_0FA8_5BA4_41A0_C9C5523985FC.title = Welcome to the Computing and Communications Museum window_1FD3FF30_0F97_1A30_41A4_2EFCCE7C3D24.title = A Robotic Helping Hand window_1FDC7444_0FE8_3CA4_417B_810BC1267A8B.title = Saving Water in the Desert window_1FF36A43_0FE9_D49C_41A7_FC65AE6CF5A2.title = Keeping it Clean window_1FFDC10E_0FAA_E7D0_41A0_A2AC8896D2DA.title = Taking a CRAC at Cooler Air window_4CE0BA0A_5C30_AE2B_41C2_75CC65C81496.title = Relics from the Past window_53611C13_5C31_AA39_41C0_6DDA747269A6.title = Remember Factoring? 
window_53C752EE_5C30_BFEB_41A7_163BE86C927D.title = A Rich History, a Multi-layered Message window_53D7EA89_5C31_6E29_41CC_43EC98D9F49E.title = A Little Piece of History ## Hotspot ### Tooltip HotspotMapOverlayArea_A442B3AE_A800_3534_418B_31590DE545C9.toolTip = Cooling Distribution Facility HotspotMapOverlayArea_A7D1F145_A800_D574_41DF_D9AA95B03DAA.toolTip = LEED Data Center HotspotMapOverlayArea_B8B6E638_A800_7F1C_41E2_2CE72DCA15BD.toolTip = Data Center Annex HotspotMapOverlayArea_B96E592E_A800_3534_41E2_5E69FFB22B53.toolTip = Computing Museum HotspotPanoramaOverlayArea_58A70F4A_5682_8609_41A5_B3EEEF95BFBA.toolTip = The Data Center Cold Air Balloon HotspotPanoramaOverlayArea_58EEC7C7_7C42_4017_41AC_E50C845F011A.toolTip = Fun Fact: Now that's a Lot of Cool HotspotPanoramaOverlayArea_59B9A56D_7DC1_C01B_41D5_E482D837BEDD.toolTip = Time to Pump Up HotspotPanoramaOverlayArea_5B782A1E_7C42_4039_41C8_E4B2C8FD9234.toolTip = Fun Fact: A Real Poser HotspotPanoramaOverlayArea_5BDB776E_7DC2_4019_41C4_DB41059C4551.toolTip = That's the Power of Cool HotspotPanoramaOverlayArea_602FA0EC_744E_4019_41D3_C148C04656F6.toolTip = Who's There? HotspotPanoramaOverlayArea_605AE93C_7442_4079_41CC_23FFD9023AFE.toolTip = A Monitor for Monitoring HotspotPanoramaOverlayArea_6094F12C_7442_4019_41B7_47B956EBDDD5.toolTip = Fun Fact: What's a Node? HotspotPanoramaOverlayArea_609FC88B_75C3_C01F_41BB_4D91B11B3853.toolTip = A Window to the Past HotspotPanoramaOverlayArea_60ADFC4A_75CE_4019_41D1_FA72F5BE31B4.toolTip = Fun Fact: This is a Fire Extinguisher. 
HotspotPanoramaOverlayArea_618C2E92_743E_C009_41DA_F56413A083BA.toolTip = A Rich History, a Multi-layered Message HotspotPanoramaOverlayArea_61B969B8_7442_4079_41DA_E108F9536BCD.toolTip = Portal to Power HotspotPanoramaOverlayArea_62184F0D_75C1_C01B_41D7_B0E362AC8D8C.toolTip = Video: The Cray-2; a Closer Look HotspotPanoramaOverlayArea_62228A89_75C2_C01B_419C_F8B1E57B5DF7.toolTip = Fun Fact: The Telecommunications Part of the Computing Museum HotspotPanoramaOverlayArea_628CCFDE_75FE_4039_41C0_9888AF8C8096.toolTip = Welcome to the Computing and Communications Museum HotspotPanoramaOverlayArea_62E02EAB_75C6_401F_4164_7FFFC119D224.toolTip = Tape no more; Computing with Hard Drives HotspotPanoramaOverlayArea_63BEC184_75FE_4009_41B6_8DF89B7BAFBE.toolTip = Video: What Exactly is High-performance Computing? HotspotPanoramaOverlayArea_63CA67B3_75FE_C00F_41D9_C8CF3FBCBB05.toolTip = Fun Fact: Remember Factoring? HotspotPanoramaOverlayArea_642A6C16_7C42_4009_41CA_74759FE232A8.toolTip = Keeping it Clean HotspotPanoramaOverlayArea_646C34C3_74C6_400F_41D6_1FF97D2BBA80.toolTip = Fun Fact: Club HPC HotspotPanoramaOverlayArea_6507C12C_7442_401A_41A1_08165928C169.toolTip = The Cooling Beneath Our Feet HotspotPanoramaOverlayArea_6510D25F_74C2_4037_41D7_2E40BEC1772B.toolTip = A Robotic Helping Hand HotspotPanoramaOverlayArea_65246399_7442_C03B_41D0_3D0425B69136.toolTip = A Little Piece of History HotspotPanoramaOverlayArea_65539D99_7441_C03B_41CF_44EDABB78001.toolTip = A Bridge to the Sky...Sort of HotspotPanoramaOverlayArea_656112BA_7446_4079_41B2_71C0637CC0F6.toolTip = Keeping the Lights On HotspotPanoramaOverlayArea_65699711_7442_C00B_41D9_1931F0EE1B75.toolTip = Video: A Heavy Load HotspotPanoramaOverlayArea_6587793B_7C46_C07F_41DD_24FECEADEC7C.toolTip = Fun Fact: Miles Long Pipe Maze HotspotPanoramaOverlayArea_65A72EAE_7442_4019_41DA_048F07C2D013.toolTip = It's a Little Light that Blinks HotspotPanoramaOverlayArea_663055B8_7442_4079_41D6_29D380A69979.toolTip = The Perfect Red 
Storm HotspotPanoramaOverlayArea_6656C428_7C41_C019_41D7_0812F496D480.toolTip = Video: A Stroll Through a Really Cool Place HotspotPanoramaOverlayArea_66930CFF_7441_C1F7_41B5_DB59955D5270.toolTip = Fun Fact: That's a Lot of Wire HotspotPanoramaOverlayArea_67163C2A_744E_C019_4183_98A6158E0C53.toolTip = Keeping it Cool HotspotPanoramaOverlayArea_672BE6AE_744E_4019_41BC_FF1695B300B7.toolTip = The Tom Brady of Supercomputers: ASCI Red HotspotPanoramaOverlayArea_672E805F_7C42_C037_41BA_E4D70AE9B718.toolTip = Get Me a Refrigerant IV, STAT! HotspotPanoramaOverlayArea_6753D628_744E_4019_41D9_057196F35E37.toolTip = Fun Fact: What are FLOPS? HotspotPanoramaOverlayArea_678D58E5_7441_C00B_41A8_F987561227B8.toolTip = Some Night Table Reading HotspotPanoramaOverlayArea_6793D1FF_7442_43F7_41D1_369C204DB102.toolTip = Looks can be Misleading HotspotPanoramaOverlayArea_67AEA6E1_7C5E_400B_41DE_20F2394DA2A8.toolTip = Avoiding Brain Freeze HotspotPanoramaOverlayArea_67C30E1B_7442_403F_41D0_FD047F497E04.toolTip = Just in Case HotspotPanoramaOverlayArea_685FAF05_743F_C00B_41A9_3FA77F86A1FE.toolTip = Keeping an Eye on Things HotspotPanoramaOverlayArea_68663C85_74C1_C00B_4196_BA3800914D2F.toolTip = Video: Using a Negative to Keep Things Positively Cool HotspotPanoramaOverlayArea_6959783C_7443_C079_41D6_8AAF639DC01C.toolTip = Like a Garden Hose, but Bigger HotspotPanoramaOverlayArea_69EF2E74_7442_C009_41B4_C6A18A751824.toolTip = Men at Work HotspotPanoramaOverlayArea_6B3407E1_7442_C00B_41D6_AADCDBA2D78E.toolTip = Blue is Cool HotspotPanoramaOverlayArea_6BC189AD_74C2_C01B_41D8_EE4D0721F589.toolTip = The Attaway HotspotPanoramaOverlayArea_6C5A5FF3_7446_C00F_41DB_2BFD51271E7B.toolTip = The Power of Cubes HotspotPanoramaOverlayArea_6D1CE388_75DE_4019_41D2_1B1A35668DBE.toolTip = A Peek on the Inside HotspotPanoramaOverlayArea_6D621D80_75C2_4009_41BE_0B1E34EA9E9C.toolTip = Relics from the Past HotspotPanoramaOverlayArea_6D9F447A_745E_40F9_41C1_08CC5F4AEB72.toolTip = Fun Fact: Test Systems and 
Their Unique Names HotspotPanoramaOverlayArea_6DAA454F_7442_C017_41D3_749E696C019A.toolTip = Pay No Attention to what's Behind the Wall HotspotPanoramaOverlayArea_6DD56190_7442_4009_41A4_E3E5B21564F7.toolTip = It's the Insides that Count HotspotPanoramaOverlayArea_6E06A8FC_7442_41F9_41DB_954B379568B5.toolTip = Video: Installing a Super Computer HotspotPanoramaOverlayArea_6E3DC0AA_7441_C019_41BB_2950E1D7D376.toolTip = Making a Greener Building for Super Computers HotspotPanoramaOverlayArea_6EAC7305_7442_400B_41DC_47C405065212.toolTip = Fun Fact: Seeing the Light HotspotPanoramaOverlayArea_6F5F5BF8_7441_C7F9_41D4_55E82D0D8969.toolTip = Video: Be Careful or be Terminated HotspotPanoramaOverlayArea_6F960CD6_7446_4009_41D3_E7C03E040F3E.toolTip = Astra: Look Ma! Fast Arms! HotspotPanoramaOverlayArea_6FEDD60A_7442_4019_41D2_A5007985EB4B.toolTip = Parts to Spare HotspotPanoramaOverlayArea_782A6A66_7442_4009_41C0_535F77F058D6.toolTip = Casting a Long Shadow HotspotPanoramaOverlayArea_7B1A8CAC_7441_C019_41D2_98B433352778.toolTip = Taking a CRAC at Cooler Air HotspotPanoramaOverlayArea_7BA7225C_744E_C039_41D5_D3EE8F676ECE.toolTip = Plugging into the Matrix HotspotPanoramaOverlayArea_8C4A714F_9DCC_6E77_41DB_1316009A76FF.toolTip = Fun Fact: What's with the Mugs? 
HotspotPanoramaOverlayArea_8F0E79EC_9DCC_7E39_41AC_AFDA7D5771D2.toolTip = Computing with Tape HotspotPanoramaOverlayArea_90780FB3_9DB4_122F_41D0_B9E8078636F0.toolTip = Video: Staying Cool, Saving Energy HotspotPanoramaOverlayArea_AA01392D_94C3_C01B_41DC_81687E8EEB45.toolTip = Slideshow: A Special Piece of History HotspotPanoramaOverlayArea_ABA3283B_94CF_C07F_41DD_EFE9A7522BED.toolTip = Saving Water in the Desert HotspotPanoramaOverlayArea_ABBF0D63_94CE_400F_41D9_93FD9EE286E7.toolTip = Video: Up on the Rooftop ## Action ### URL LinkBehaviour_93FA3329_9FF4_BB8F_41D1_0989574C58D3.source = http://tours.sandia.gov/mantl_info.html LinkBehaviour_93FA6329_9FF4_BB8F_41D7_70DD5370F1D3.source = http://tours.sandia.gov/support.html LinkBehaviour_93FA9329_9FF4_BB8F_41BB_B88FC9B9D2A8.source = http://tours.sandia.gov/tours.html LinkBehaviour_93FAC329_9FF4_BB8F_41DF_DC1E8B4A0B8D.source = http://tours.sandia.gov/tours.html ## E-Learning ### Score Name score1.label = Score 1 ### Question Screen quizQuestion_57D47AFB_5BA2_6B95_41D3_4511FEF90BC8.ok = OK ### Report Screen quizScore_57D1BAFB_5BA2_6B95_41D2_162370A3BB1C.completion = Completed quizScore_57D1BAFB_5BA2_6B95_41D2_162370A3BB1C.downloadCSV = Download .csv quizScore_57D1BAFB_5BA2_6B95_41D2_162370A3BB1C.elapsedTime = Time quizScore_57D1BAFB_5BA2_6B95_41D2_162370A3BB1C.items = Items Found quizScore_57D1BAFB_5BA2_6B95_41D2_162370A3BB1C.questions = Questions quizScore_57D1BAFB_5BA2_6B95_41D2_162370A3BB1C.questionsCorrect = Correct quizScore_57D1BAFB_5BA2_6B95_41D2_162370A3BB1C.questionsIncorrect = Incorrect quizScore_57D1BAFB_5BA2_6B95_41D2_162370A3BB1C.repeat = Repeat quizScore_57D1BAFB_5BA2_6B95_41D2_162370A3BB1C.submitToLMS = Submit quizScore_57D1BAFB_5BA2_6B95_41D2_162370A3BB1C.title = - SCORE - ### Timeout Screen quizTimeout_57D51AFB_5BA2_6B95_41D5_7888B27B3256.repeat = Repeat quizTimeout_57D51AFB_5BA2_6B95_41D5_7888B27B3256.score = View Score quizTimeout_57D51AFB_5BA2_6B95_41D5_7888B27B3256.title = - TIMEOUT -