Data Center Dilemma

©2013. This excerpt is taken from the article of the same name, which appeared in ASHRAE Journal, vol. 55, no. 3, March 2013.

By Jeff Sloan, P.E., Member ASHRAE

About the Author
Jeff Sloan, P.E., is a design manager at McKinstry Co., in Seattle. He is a member of ASHRAE’s Puget Sound chapter.

TeleCommunications Systems (TCS) faced a difficult choice in late 2009 (Photo 1). Its research and production IT systems had expanded to fill 3,000 ft² (279 m²) of developed data center space in Seattle’s World Trade Center Building (2401 Elliott Avenue), and it was still growing. TCS wanted to create a larger data center in the same building, but the building didn’t seem to have adequate remaining power in its electrical substation, and it lacked space for the additional generators and chillers that a conventional data center would require.

TCS’s older data center systems required almost as much electrical power for the cooling equipment and for critical power processing as was needed for the computers themselves. Data center designers characterize a given data center’s power usage effectiveness as its “PUE” ratio, defined as the incoming (total) power wattage divided by the useful (process) power wattage. A plan to accommodate 400 kW of ac and dc equipment at a 1.8 PUE (similar to its existing data center) would require 720 kW of incoming power. That much capacity wasn’t available within the building’s electrical distribution, and additional generator space would be needed.
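For readers who want to check the sizing numbers, the short Python sketch below works through the PUE arithmetic described above. The 400 kW IT load and the 1.8 PUE are the figures quoted in the article; the function and variable names are illustrative only, not from any TCS design document.

    # A minimal sketch of the PUE sizing arithmetic described in the article.
    # The 400 kW IT load and the 1.8 PUE are the figures quoted above; names
    # here are illustrative, not taken from any TCS document.

    def incoming_power_kw(it_load_kw: float, pue: float) -> float:
        """Incoming (total) power = useful (process) power x PUE."""
        return it_load_kw * pue

    def pue_ratio(incoming_kw: float, it_load_kw: float) -> float:
        """PUE = incoming (total) power / useful (process) power."""
        return incoming_kw / it_load_kw

    it_load = 400.0                          # kW of ac and dc IT equipment
    print(incoming_power_kw(it_load, 1.8))   # 720.0 kW -- more than the building could supply
    print(pue_ratio(720.0, it_load))         # 1.8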

TCS had been tolerating other problems in its existing data center space: poor air distribution caused some of its equipment to overheat unless the cooling system was adjusted to maintain low temperatures. The poor air distribution in the older data center can be seen in the thermograph in Figure 1, where some equipment is drawing in air that is warmer than the room setpoint, despite the refrigerated supply air coming from the visible ceiling outlet.

In this picture, the equipment is mounted in “open” racks and aisles that present many opportunities for cooling air to recirculate; warm air can leave some servers and then be pulled into other servers without returning to the HVAC equipment first. The cold supply air temperatures and high humidity setpoints in the older data center were causing some HVAC equipment to humidify, and other HVAC equipment to simultaneously dehumidify, wasting energy and wasting water.

Design

The design proposed to overcome these difficulties was styled after a successful data center project that had been built in Moses Lake, Wash., in 2006 by the same design-build team. The Moses Lake project has reliably operated without refrigeration through summer temperatures of more than 100°F (38°C) because its design solved the cooling air distribution problem shown in Figure 1, and by doing so, achieved a year-round PUE of 1.20.

A design with a PUE that low would allow TCS to install the desired 400 kW of IT equipment without modifications to the building’s electrical service or the need for a larger generator. Once this concept was explained to TCS, the company chose to place its servers and other IT equipment in closed “chimney” cabinets (Figure 2) instead of in open racks. The chimney cabinets were placed in a 15 ft (4.6 m) clear-height space with a 10.5 ft (3.2 m) high suspended T-bar ceiling to form a return air plenum above the ceiling.

The chimney-style cabinets are available from a variety of manufacturers and are designed to convey all the warm air produced by the equipment they contain directly into the return air plenum, without opportunities for recirculation. The spaces in the chimney cabinets that are not occupied by equipment should be blanked off so cooling air can only enter each cabinet through a server. The thermograph in Figure 2 shows how uniform air temperatures arrive at each of the (dark) installed servers, despite the (warm) blanks filling the remainder of the cabinets.

Without hot air recirculation, the HVAC equipment serving the new TCS data center space needs to supply air at only 75°F (24°C), a condition that can be produced year-round in the Pacific Northwest with direct evaporative cooling alone, without refrigeration. Because power is used only for fans and not for a chiller, the HVAC system requires only 10% of the generator’s capacity.

With this chillerless cooling system, along with efficient UPS and dc power processing equipment, TCS’s new data center operates with a PUE of only 1.15, year-round. With a PUE that low, the project qualified for utility conservation incentives that helped to offset the cost of the more expensive chimney cabinets.
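As a rough illustration of what the lower PUE buys, the comparison below reuses the same arithmetic; the 460 kW and 260 kW values are derived from the article’s 400 kW load and the two PUE ratios, not figures stated in the excerpt.

    # Derived comparison, assuming the same 400 kW IT load as above.
    it_load_kw = 400.0
    old_total_kw = it_load_kw * 1.8     # 720 kW at the older data center's PUE
    new_total_kw = it_load_kw * 1.15    # 460 kW at the new data center's PUE
    print(old_total_kw - new_total_kw)  # 260.0 kW of incoming power freed up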

 
