New University Data Center Supported With Modular, Scalable Architecture

April 1, 2007

After the owner of the building housing the University of Texas Health Science Center's (UTHSC's) data center decided to raze the building and rebuild on the property, UTHSC Director of Data Center Operations and Support Services Kevin Granhold and Facilities Project Manager Jeff Carbone were directed to spearhead a new data-center design/build project.

After exploring a number of options, the information-technology (IT) team decided to build a 7,000-sq-ft data center on the roof of a university-owned parking garage in downtown Houston.

“A rooftop in Houston, Texas, is extremely hot,” Granhold said. “Placing the super-cooled data center on the north side of a 26-story tower was a good idea because that particular location is primarily in the shade most of the day for most of the year. The size of our proposed data center fit quite well in that space, and we had multiple ways to get in and out of the facility.”

An engineering firm was hired to assess requirements and provide a preliminary design. This assessment consisted of a series of workshops with several building architects, as well as APC, the provider of the rack-based integrated uninterruptible-power-supply (UPS), cooling, and power-distribution solution.

Logistical issues involving construction, security, workflow, and fire suppression were discussed. Also discussed was the desired criticality level of the data center.

“We are primarily a research and educational facility with some clinical components,” Granhold said. “Unlike many hospital IT shops with patient systems to support, we don't have to confront life-or-death situations. We had to control our costs. The more redundancy, the higher the cost. However, we needed to strike a balance. We could not afford to have our facility shut down during a hurricane, for example.”

The team decided to invest in a generator and chillers. The availability of backup power and chilled water to cool the data center was deemed essential.

The new chiller system had to be configured for placement on the roof and sized correctly. A debate regarding whether to deploy one, two, or three chillers ensued. The team decided on three chillers for reasons of flexibility. One bigger chiller would have represented a single point of failure. Additionally, running one big chiller at 25 percent or 50 percent of capacity was deemed inefficient. Three smaller chillers offered an N+1 scenario in terms of availability. Two chillers would run at any one time, with a third chiller available in case either of the other two chillers failed.
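
As a rough illustration of that N+1 reasoning (the article does not give the actual chiller capacities, so the 200-ton design load and unit counts below are hypothetical), a simple sizing check shows how three smaller machines compare with one large one:

```python
# Hypothetical N+1 chiller sizing sketch; the 200-ton design load is
# illustrative, not the actual UTHSC figure.

def n_plus_one_sizing(design_load_tons: float, running_units: int) -> dict:
    """Size identical chillers so `running_units` carry the design load,
    with one additional identical unit installed as the redundant spare."""
    per_unit_tons = design_load_tons / running_units   # each running unit's share
    installed_units = running_units + 1                # the "+1" standby machine
    return {
        "per_unit_tons": per_unit_tons,
        "installed_units": installed_units,
        "installed_capacity_tons": per_unit_tons * installed_units,
        # How hard the running units work at design conditions:
        "running_load_fraction": design_load_tons / (per_unit_tons * running_units),
    }

# Two 100-ton units running near full load plus one standby,
# versus a single 200-ton machine that would idle at part load
# and represent a single point of failure.
print(n_plus_one_sizing(design_load_tons=200, running_units=2))
```

Under that arrangement, either running chiller can fail without the load being dropped, and each machine operates close to its efficient load point rather than at the 25 or 50 percent part load the team wanted to avoid.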

MODULAR, SCALABLE ARCHITECTURE

Because a modular, scalable UPS, rack, and cooling architecture (APC InfraStruXure) had been selected, the team was not confronted with the issue of having to purchase excess capacity up front to meet long-term needs. This allowed the team to leverage the project budget so that additional floor space could be built into the design to support additional racks with in-row cooling and additional power capacity at a later time.

Rack-placement decisions were based on power-density calculations, growth paths, and cooling-related issues.

The InfraStruXure configuration was critical to the overall design of the building. In fact, the APC solution had to be planned and configured before the building design could be completed.

The self-enclosed, zoned architecture of InfraStruXure helped to facilitate the deployment of best practices (see sidebar). The IT operations group wanted a hot-aisle-containment and cold-aisle scenario that could accommodate both server consolidation and high-density servers. The hot-aisle-containment system was attractive because it allowed Granhold to better manage unpredictable server densities.

Designing the data center around InfraStruXure greatly simplified power-distribution work. The facilities manager needed to concern himself only with supplying 480 v to the in-row UPS. Instead of putting in breaker panel after breaker panel and trying to distribute under the floor and in boxes, the APC systems engineer designed one breaker in the electrical room to run hundreds of servers. This significantly reduced the cost of electrical wiring.
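
To see why a single 480-v feed can carry hundreds of servers, a back-of-the-envelope three-phase calculation helps. The server count, 500-W per-server load, and 0.95 power factor below are assumptions for illustration, not figures from the UTHSC project:

```python
from math import sqrt

# Back-of-the-envelope check of one 480-v, three-phase feed serving
# hundreds of servers. Server count, per-server wattage, and power
# factor are assumptions for illustration only.

def three_phase_line_amps(total_watts: float, volts: float = 480.0,
                          power_factor: float = 0.95) -> float:
    """Line current drawn by a balanced three-phase load."""
    return total_watts / (sqrt(3) * volts * power_factor)

servers = 400
watts_per_server = 500
load_watts = servers * watts_per_server      # about 200 kW of IT load

print(f"{servers} servers ~ {load_watts / 1000:.0f} kW "
      f"~ {three_phase_line_amps(load_watts):.0f} A at 480 v, three-phase")
```

Under those assumptions, roughly 400 servers draw on the order of 250 amps at 480 v, so a single appropriately sized breaker and feeder to the in-row UPS can replace the many branch panels that would otherwise be distributed around the room, with the in-row equipment handling step-down and distribution to the racks.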

The IT operations group was pleased it no longer would have to incur the cost of electricians each time it wanted to implement a change to the data-center power system.

“I can unplug a whip or put in a PDU, and I have the right voltage,” Granhold said. “Facilities liked the idea that someone else was coming in and setting up the major components of our data-center power infrastructure.”

Granhold added: “The facilities manager understood the issues of drift and hot spots and how I was containing that. The more understanding he had, the more positive he became. His department bought into it before we signed the contract with the building contractor.”

Data-Center Best Practices

Designing and building a new data center provided the UTHSC staff with insights into how to streamline processes and procedures and how to deploy technology more efficiently:

  • Convert from 110 v to 208 v

    As much as possible, UTHSC migrated from 110-v power distribution to 208-v power distribution. UTHSC has found that moving to 208 v is more efficient from a UPS- and overall power-utilization point of view. With 208 v, more devices can be supported on the same amperage. The efficiency of the higher voltage is deemed the biggest advantage.

    “To give an example with hypothetical numbers, with a 20-amp, 110-v circuit, I might be able to support four servers,” Granhold said. “On a 20-amp, 208-v circuit, I can support eight servers. That means that I can now run two circuits to each rack for redundancy. With 110 v, I could only fill half of my need. With 208 v, I can now fully meet the need.” A rough version of this calculation is sketched after this list.

  • Aggressively pursue standardization

    As a result of standardization, acquisition costs, maintenance costs, training costs, implementation costs, learning curves, and employee turnover have decreased dramatically.

    “We standardize on what color of wiring we use,” Granhold said. “We standardize on APC data-center infrastructure for air, power, and racks. We've standardized on the brand of servers we use. If I have a motherboard failure, I can pull out a drive or card and stick it in another server, and it'll be up almost immediately.”

  • Start with racks, not walls

    “Instead of putting up four walls and then populating with racks, designing the building around the rack layout is much more efficient,” Granhold said.

    “We planned how we wanted to lay the racks out and then built the building around that,” Granhold continued. “In our data center, we have an 8-ft aisle all the way around the perimeter of the racks, and then, down the center of the self-contained hot aisles, we have a 6-ft aisle way. Most organizations make the mistake of building a big room and then somehow trying to force-fit the racks. They then discover a column in the way. They end up only being able to fit in three rows, which disrupts the entire hot-aisle/cold-aisle setup. Also, installing equipment like big air handlers around the perimeter of the room forces you to lose valuable rack space.”

  • Install close-coupled cooling

    With increasing power densities and an acute need to remove heat from data centers, the best strategy is to locate cooling systems right next to where heat is being produced. This approach will not work in all cases, such as when a data center needs to support a traditional mainframe. For newer Intel- and Unix-based systems, however, closely coupled rack-mountable air units in self-contained rows of racks are a best practice.

  • Deploy wire management above floors

    Install and manage cables above racks, instead of under floors.

    “More-permanent cabling, such as the power feeds that run to the UPS, can be either above the floor or below the floor, but power to each individual server should be above the floor,” Granhold said. “This eliminates the accumulation over time of underfloor cables. The under-the-floor cable congestion can get difficult to manage. One can easily get to the point where there is no longer any room to run new wire under the floor. Overhead cable distribution encourages good practices that minimize human error. In my book, nice plus neat equals manageable.”

  • Deploy progressive design

    “Don't have someone who has worked in data centers for 30 years design the data center,” Granhold said. “I've seen several data centers designed by an ‘expert’ data-center manager who has run things since the mainframe days. The organization ends up with a ‘new’ 1970s data center, which, in essence, is an obsolete, inflexible data center.”
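
To put rough numbers behind the 110-v-versus-208-v comparison in the first best practice above, a simple branch-circuit calculation is sketched below. The 400-W per-server load and the 80 percent continuous-load derating are assumptions chosen to reproduce Granhold's hypothetical four-versus-eight example, not UTHSC measurements:

```python
# Rough branch-circuit capacity comparison, 110 v versus 208 v.
# The 400-W-per-server load and 80 percent continuous-load derating
# are assumptions chosen for illustration.

def servers_per_circuit(volts: float, amps: float,
                        watts_per_server: float = 400.0,
                        derating: float = 0.80) -> int:
    """Whole servers that fit on one branch circuit after derating."""
    usable_watts = volts * amps * derating
    return int(usable_watts // watts_per_server)

for volts in (110, 208):
    count = servers_per_circuit(volts=volts, amps=20)
    print(f"{volts}-v, 20-amp circuit: about {count} servers")
```

Under these assumptions, the higher voltage roughly doubles the number of servers each 20-amp circuit can carry, which is what makes running two circuits to every rack for redundancy practical.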
