
ASHRAE Journal Podcast Episode 27



Pictured: Mukul Anand (left) and Joe Prisco

27. Revolutionizing Data Center Sustainability With AI and Purpose-Built Solutions

Data center operators and owners have strict sustainability goals, and it is paramount that mission-critical data centers are equipped with innovative solutions to help achieve these goals as quickly as possible. Join host Thomas Loxley with Mukul Anand and Joe Prisco as they discuss the current "hot aisle/cold aisle" design and how purposefully designed data centers can achieve optimal performance and efficiency driven by artificial intelligence.

Have any great ideas for the show? Contact the ASHRAE Journal Podcast team at podcast@ashrae.org

Interested in reaching the global HVACR engineering leaders with one program? Contact Greg Martin at 01 678-539-1174 | gmartin@ashrae.org.

Available on: Spotify | Apple Podcasts | Google Podcasts | Podcast Addict | and other podcast players
RSS Feed | Download the episode.

  • Host Bio

    Thomas Loxley, Assistant Manager of Standards–Codes, ASHRAE

    Thomas Loxley is a graduate of the University of Kentucky (BS in Biosystems and Agricultural Engineering) and Auburn University (MS in Biosystems Engineering). He comes to ASHRAE with manufacturing industry experience and currently serves as Staff Liaison to Standing Standards Project Committees 90.4 and 189.1. Thomas also works with teams of members to write guidelines for the Task Force for Building Decarbonization.

    As the Assistant Manager of Standards–Codes, he enjoys working with a wide range of professionals to write new standards for a brighter future.

    Thomas resides in Decatur, Ga., with his wife, daughter and dog.

  • Guest Bios

    Mukul Anand is currently Global Director, Business Development - Applied HVAC Global Products at Johnson Controls International. He leads the efforts to collaborate with JCI's data center customers, focusing on their global growth and associated needs, and provides strategic guidance to reduce power consumption, water usage and total cost of ownership. Reliability, uptime and real-time response are the overarching themes.

    Anand proactively assesses the needs of the largest data center owners in terms of applied HVAC equipment and leads the initiatives to launch specialized products for the vertical. He collaborates with the Fire Detection, Fire Suppression, Security, Building Automation & Controls and Rooftop Units businesses, and assists in the installation and commissioning of infrastructure equipment. He leads several projects annually to establish best practices and drive the voice of the customer through the organization. He participates in industry events and voice-of-the-customer sessions to steer the business toward the current and future needs of JCI's customers. He establishes and maintains a global network of subject matter experts to form regional just-in-time teams that support the global expansion plans of JCI's largest customers at the velocity of growth being experienced in the cloud and wholesale colocation segments.

    Anand received an MS degree in Thermal and Fluid Sciences from the University of Maryland in 1998 and an MBA from the Smith School of Business (University of Maryland) in 2003.

    Joe Prisco is a Senior Technical Staff Member (STSM) in IBM Systems. He is presently the Chair of the IBM Development Power Council and is the technical owner of the physical planning information used to lay out, design and build data centers for information technology (IT) equipment. Prisco is the power profile chief test engineer responsible for compliance with worldwide energy laws, standards and programs such as ENERGY STAR. Additionally, he is a rack power distribution architect and has an extensive background in electrical power generation, transmission and distribution. Prisco is active on several committees, including the International Electrotechnical Commission (IEC) SC77A/WG1 on low frequency EMC power line emissions; the National Fire Protection Association (NFPA) 70 code making panel 12; and ASHRAE TC 9.9 and SSPC 90.4.

  • Transcription

    ASHRAE Journal:

    ASHRAE Journal presents.

    Thomas Loxley:

    Welcome. I am Thomas Loxley, ASHRAE's assistant manager for Standards and Codes. This is ASHRAE Journal Podcast episode 27, and we will be discussing artificial intelligence and its applications to data centers.

    Joining me today are Joe Prisco and Mukul Anand. Joe Prisco is an electrical engineer with IBM. He is currently the power profile chief test engineer responsible for compliance with worldwide energy laws, standards and programs such as ENERGY STAR. Additionally, Joe is a contributing member of both Technical Committee 9.9 and Standards Project Committee 90.4. Welcome, Joe.

    Joe Prisco:

    Thank you, Thomas. Hello and thanks for the introduction.

    Thomas Loxley:

    Mukul Anand is a mechanical engineer with Johnson Controls, where he is currently the global director for business development, applied HVAC global products. He works with JCI's data center customers, focusing on growth and offering guidance to reduce power consumption, water usage, and total cost of ownership without compromising reliability, uptime, and real-time response. He also contributes to TC 9.9 and project committees 90.4, 127, and 128. Hi, Mukul.

    Mukul Anand:

    Hello. Thank you for having us.

    Thomas Loxley:

    Before we get started today, can you give us some background on your career and how you got started in ASHRAE?

    Joe Prisco:

    When people ask me what my job is, I say I do all things electrical power. I cover everything from the service entrance into a data center up to the IT equipment, where the AC comes in and gets converted to be used by the electronics. Early on, Dr. Roger Schmidt got me involved with what was called the ASHRAE technical group TG 9 HDEC, High Density Electronic Equipment Facility Cooling, and that eventually became ASHRAE TC 9.9. So, I worked closely with Roger behind the scenes in preparing a lot of the data and getting involved early on in ASHRAE with that activity.

    Mukul Anand:

    For me, I was a part of the Center for Environmental Energy Engineering at the University of Maryland, College Park, when I was doing my graduate studies. As a part of that, we were experimenting with several natural refrigerants, and I was the vice president of the ASHRAE student chapter for the university campus. That's how I got involved in ASHRAE, and that was probably 27 years ago. Since then, it's been a very mutually beneficial relationship. I've learned a lot from ASHRAE, and hopefully, I've contributed and will continue to contribute. Thanks for asking, Tom.

    Thomas Loxley:

    Yeah. So, before we dig into our thought leadership exercise here on artificial intelligence, Joe, can you give us some history on data center design and the well-known hot aisle and cold aisle design?

    Joe Prisco:

    Sure, Thomas. Happy to do that. Now, I went back and did some research on this, and I've done this before; the one who's credited with the hot aisle/cold aisle concept is Dr. Bob Sullivan of IBM. Now, of course, you could probably argue there are others who may have been the so-called inventors, but he seems to be the one who's been credited with it. I was very fortunate to work with Dr. Bob early in my career; he actually turned out to be a good friend of mine. I was fortunate also to travel with him in those early days of my career to troubleshoot data center installations. What was happening in the early 1990s is that hard disk drives were overheating.

    Well, we'd go into data centers and we would see the arrangement. It'd be an arrangement where you'd have air intake on one side of a row and the heat exhaust coming out the other, and the next row over would basically take that exhaust in as its intake. So, what was happening is you'd have this exhaust air coming out, mixing with the chilled air, and going into the next row of products. Well, it worked up to that point for two reasons: one, because the cold aisle temperatures at that point in time were very cold, 55 to 59 degrees Fahrenheit were the typical numbers, and also, the heat load densities weren't that big.

    So, the exhaust air coming out wasn't really all that warm. It would mix with some of the chilled air and elevate the temperature a little bit into the second row, then the third row, then the fourth row. But by the time you got down to the later rows, the temperature might've been 70 degrees Fahrenheit. So, it really wasn't a problem. But as heat load densities increased, it became a problem, because the heat coming out was at a higher temperature, there was more of it, and mixing was an issue. So, working with Bob, his proposal was, "Geez, why don't we have the air intakes face each other and the exhausts face each other?" to set up what today is known as the hot aisle/cold aisle arrangement.
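
    To put rough numbers on the recirculation Joe describes, the inlet temperature of each successive row can be approximated as a flow-weighted mix of supply air and the previous row's exhaust. Here is a minimal Python sketch, assuming simple adiabatic mixing; the supply temperature, the 20°F rise across a row, and the recirculation fraction are illustrative assumptions, not measured values.

        # Illustrative sketch of exhaust recirculation in a legacy
        # front-to-back row layout (pre hot aisle/cold aisle).
        # Simple adiabatic mixing; all numbers are hypothetical.
        SUPPLY_TEMP_F = 57.0     # chilled supply air, midpoint of 55-59°F
        SERVER_DELTA_T_F = 20.0  # assumed temperature rise across a row
        RECIRC_FRACTION = 0.3    # assumed share of a row's intake that is
                                 # the previous row's exhaust

        inlet_f = SUPPLY_TEMP_F
        for row in range(1, 6):
            exhaust_f = inlet_f + SERVER_DELTA_T_F
            print(f"Row {row}: inlet {inlet_f:.1f}°F, exhaust {exhaust_f:.1f}°F")
            # The next row's inlet is a mix of supply air and this exhaust.
            inlet_f = ((1 - RECIRC_FRACTION) * SUPPLY_TEMP_F
                       + RECIRC_FRACTION * exhaust_f)

    With these assumed numbers the inlet creeps from 57°F toward the mid-60s°F; raise the per-row temperature rise, as heat load densities did in practice, and the later rows quickly exceed what the drives were rated for, which is the failure mode Joe describes.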

    Thomas Loxley:

    So overall, that's how a lot of data centers are designed today, and it seems to be one of the more energy-efficient ways to keep the servers cool. Knowing that almost all of our data centers utilize this approach, what limitations does this present, and what role would AI have in optimizing our data centers?

    Mukul Anand:

    Sure. So, over the past few years, we have very successfully used aisle containment as a measure of increasing efficiency. We have also learned several new concepts over the last couple of years. Rack densities are becoming higher, so a lot more energy is being released from the same white space. Ambient temperatures are ever-increasing, so more and more we are seeing instances of cities hitting record high temperatures. We also see the heat island effect and a very sharp focus on water usage effectiveness, and therefore, we need to come up with new ways of making these data centers even more energy efficient.

    The data centers of today are what we call mission-critical facilities. It means that reliability and uptime are absolutely paramount. As time goes on and the focus on sustainability increases, we also have to think about the impact of using water and the need for lower sound. Some of these are facilitated by using a dynamic chilled water set point, where we use the concept of chilled water reset to operate these data centers more sustainably. We hope and expect that AI will help us achieve this goal in the near-term future.
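
    Chilled water reset, which Mukul mentions, is usually implemented as a supervisory trim-and-respond rule: raise the chilled water set point when the white space has cooling margin, and lower it when a unit runs out of capacity. A minimal Python sketch of such a rule follows; the set point range, step size, and valve-position thresholds are hypothetical, not from any particular control sequence.

        # Hypothetical chilled-water-reset rule: trim the set point up when
        # all CRAH valves have margin, release it down when one saturates.
        CHW_SETPOINT_MIN_F = 44.0  # assumed design chilled water set point
        CHW_SETPOINT_MAX_F = 54.0  # assumed upper reset limit
        STEP_F = 0.5               # assumed trim/respond step per interval

        def reset_chw_setpoint(current_f: float, max_valve_pct: float) -> float:
            """Return the next set point given the highest CRAH chilled-water
            valve position (0-100), a proxy for cooling demand."""
            if max_valve_pct > 90.0:
                # A CRAH is nearly out of capacity: go colder.
                return max(current_f - STEP_F, CHW_SETPOINT_MIN_F)
            if max_valve_pct < 75.0:
                # Every unit has margin: go warmer and save chiller energy.
                return min(current_f + STEP_F, CHW_SETPOINT_MAX_F)
            return current_f  # in the deadband, hold

        print(reset_chw_setpoint(44.0, 60.0))  # -> 44.5, resetting upward

    Warmer chilled water generally means better chiller efficiency and more economizer hours; an AI layer would, in effect, learn a richer version of this rule from the facility's own data.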

    Thomas Loxley:

    So there are several types of artificial intelligence. Can we give our listeners a brief overview of the main kinds of AI?

    Joe Prisco:

    Sure, I can start with that one. So, when you think of AI, first of all, what is AI? It's the ability of computer systems to attempt, and that's the key word, to attempt to mimic the problem-solving and decision-making capabilities of the human mind. So, that's the basis of AI. There are actually four different types that I categorize. There are probably more, but these are the main four. The first one is machine learning, which you could think of like spam detection for email. Your spam detection was already pre-programmed with things that might be spam, so it filters those out. But in your email, you have the option, too: if you think a message is spam, you click on the button that says spam. It learns from that and says, "Okay, if I see an email from a sender that was marked spam, I'm going to go ahead and put that into the spam folder instead of the inbox."

    There’s also natural language processing. That would be the second one. So, you think of things when you go to a website and you interact with a chatbot and you type things in, it gives you a response or even translation. You have something in one language, you need to convert it to a second. That’s a form of AI as well.

    The third one would be conversational AI. So, things like customer support. So, you call a help desk. You call your favorite business. You got the automated prompts. A lot of times now you speak as to what you want. For me, sometimes it’s more going “operator to operator,” but again, it understands and tries to get you an answer to what you might need help with.

    The fourth one, which seems to be the big one right now, is generative AI. So, things like text generation, code generation, and image and video generation. It seems to be a sore subject; we know there's a Writers Guild strike going on, for example, and one of their concerns is generative AI with image and video generation and things like that.
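
    To make the machine-learning category concrete, here is a toy version of the spam filter Joe describes, one that learns from messages a user has already labeled. This is a minimal sketch using scikit-learn with invented training data, not a production filter.

        # Toy spam classifier that learns from user-labeled examples.
        # The training messages below are invented for this sketch.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        messages = [
            "win a free prize now", "limited offer click here",
            "cheap pills free shipping", "meeting moved to 3pm",
            "see attached data center report", "lunch on thursday?",
        ]
        labels = ["spam", "spam", "spam", "ham", "ham", "ham"]

        model = make_pipeline(CountVectorizer(), MultinomialNB())
        model.fit(messages, labels)

        # The model generalizes from the labeled examples it has seen.
        print(model.predict(["free prize click now", "report for the meeting"]))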

    Thomas Loxley:

    So within data center design, we have our theoretical and physical limitations within cold aisle design, right? We have some material limitations with silicon. Depending on the location, we might have some limitations on free cooling options. How can we as engineers and data center operators best use AI to optimize these solutions?

    Mukul Anand:

    So first, we have to absolutely respect that data centers in the mission-critical vertical will always treat reliability and uptime as the paramount goals. Within the confines of that, we need to make sure that we are designing and operating these data centers as sustainably as possible, as well as being good corporate citizens. What this means is that we should not design the data centers only for the extreme conditions. We should also design them for where the actual operating hours will be, and ask whether the cooling system for the data center is efficient at the actual operating hours or is only designed to provide cooling at the extreme conditions. That being said, we also need to focus on the other aspects of being a good corporate citizen.

    For example, using applied HVAC equipment that is relatively quiet, so that the neighboring businesses and residences are not impacted by the noise level of these buildings. How do we make sure that we use low GWP refrigerants so that we don't have a big impact on the environment going forward? Making sure that we have modernized controls and are able to dynamically adjust the set points, so that at any given time an AI-based model can operate these data centers as efficiently as possible, and making sure that we are not consuming water in areas that are restricted on water. Some of these technologies are all brought together by optimizing in real time how we operate these data centers. That's where AI comes in: it looks at all the parameters and then controls the cooling system end to end so that it's operating in the best possible manner.

    Joe Prisco:

    Right. I'll add to that, and maybe I'm coming at it more as an electrical engineer who dabbles in thermal engineering, but what I see when I take measurements on servers and storage, mostly those products, is that depending on the type of environment they're in, the power consumption is not static. It's dynamic. A lot of it is influenced by inlet temperature or other temperatures inside that will have an impact on fan speeds, and fan speeds will impact power consumption. So, data centers are definitely dynamic, which means from a facility side that the amount of chilled air you provide will be variable.

    So, how do you get the right mix, like Mukul said, of all the different parameters? Because within the chilled water or chilled air system, whether it's a CRAC or a CRAH, there are a lot of mechanical components, and are they all efficient? A lot of times, if you want to use AI, it's really about collecting data on your own facility, all the different parameters, and using that to build and train a model to understand what type of efficiencies you can gain from it.
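
    The workflow Joe sketches, collecting a facility's own telemetry and training a model on it, can be illustrated in a few lines. The sketch below fits a regression from inlet temperature, IT load, and fan speed to cooling power; the synthetic data stands in for real BMS trend logs, and the generated relationship is an assumption made purely for the example.

        # Minimal sketch: model cooling power from facility telemetry.
        # Real inputs would come from the BMS; here we synthesize them.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 5000
        inlet_temp_f = rng.uniform(60, 85, n)    # server inlet temperature
        it_load_kw = rng.uniform(200, 1000, n)   # IT load
        fan_speed_pct = rng.uniform(30, 100, n)  # CRAH fan speed

        # Invented ground truth: cooling power grows with load, inlet
        # temperature, and (roughly cubically) with fan speed, plus noise.
        cooling_kw = (0.25 * it_load_kw
                      + 80 * (fan_speed_pct / 100) ** 3
                      + 0.5 * (inlet_temp_f - 60)
                      + rng.normal(0, 5, n))

        X = np.column_stack([inlet_temp_f, it_load_kw, fan_speed_pct])
        X_train, X_test, y_train, y_test = train_test_split(
            X, cooling_kw, random_state=0)

        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X_train, y_train)
        print(f"R^2 on held-out data: {model.score(X_test, y_test):.3f}")

    Once trained, a model like this can be queried "what if" style, e.g., predicted cooling power at a warmer set point, which is the substance of the AI-driven optimization discussed here.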

    Mukul Anand:

    Absolutely. Thank you, Joe.

    Thomas Loxley:

    So what about co-location? What impact would artificial intelligence have on data centers that are co-located with another facility?

    Mukul Anand:

    Sure. So, co-location companies work under service level agreements. In order to ensure that uptime and reliability for their customers are honored and maintained at all times, in some cases the service level agreements lead to an overdesign of the HVAC system, resulting in HVAC equipment that is larger, heavier, and has a larger carbon footprint. In addition, the service level agreements also cause the cold aisle temperatures to be set lower than they need to be. That leads to increased power consumption throughout the year, and it leads to reduced use of economization when the ambient conditions in that city allow for it.

    So, based on these, we need to take special care with the co-location modes of operation as well as the co-location service level agreements to absolutely respect reliability and uptime, but also to focus on sustainability and use all the tools that are available to us to operate them as sustainably and as efficiently as possible.

    Joe Prisco:

    On the electrical side of it, like Mukul said, there's overdesign and stranded capacity, and that's just the nature of electrical to me, especially when you're doing 2N-type designs. You have an A side and a B side. So, if the B side fails, the A side's got to pick up the full load. The highest you can ever go is 50% loading on, let's say, an A side or B side, and then you have a UPS and other switchgear that's capable of double that capacity.

    So, you're way over-provisioning, but you have to do that for uptime and really for resiliency as well, if you want your data center to be operationally resilient. It's a balance. I mean, there are other designs people can look at, like N+1, to try to do things more efficiently, but then again, the SLAs will dictate how you might design that data center from an electrical standpoint.
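
    The 2N arithmetic Joe walks through is worth making explicit: each side must be able to carry the entire load alone, so in normal operation neither side can run above 50% of its rating, and a guard band pushes the practical ceiling lower still. A small worked example; the 10% guard band and the load figures are illustrative.

        # Worked 2N example: why each side of an A/B design runs well
        # below half of its rated capacity. All numbers are illustrative.
        side_rating_kw = 1000.0  # A and B each rated to carry the full load
        guard_band = 0.10        # assumed margin below the 50% ceiling

        # Ceiling per side in normal operation: 50% of rating, less margin.
        per_side_ceiling_kw = side_rating_kw * (0.5 - guard_band)  # 400 kW

        it_load_kw = 700.0                 # total critical load, shared A/B
        per_side_load_kw = it_load_kw / 2  # 350 kW on each side

        print(f"Per-side loading: {per_side_load_kw / side_rating_kw:.0%}")
        # -> 35% of rating: the part-load region where UPS efficiency used
        #    to sag, and where newer equipment now puts its knee.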

    Thomas Loxley:

    Do we think that artificial intelligence would help show us some opportunities where free cooling is likely being missed in our data centers?

    Mukul Anand:

    We believe so. In the design of, let's say, a data center cooling architecture that uses air-cooled or water-cooled chillers and computer room air handlers, when it's unseasonably hot outside, it's possible to use chilled water reset so that the chilled water set point can be a few degrees Fahrenheit higher and the computer room air handlers can adjust to provide the cooling capacity that's needed for the white space. This ensures that the chillers and other equipment on the rooftop do not have to be over-designed for that one afternoon in 20 years that they may encounter, right? They can be right-sized.

    By the same token, when it's very cold outside, it's possible for the chilled water set point to be set lower, allowing the computer room air handlers to lower the airflow rate and the fan frequency, saving power, with the server fans consuming less power as well because the cold aisle temperatures are lower. You're not paying the price for it because it's coming from an economization cooling setup. So, both on extremely hot days and on very cold days, AI should be able to optimize the cooling architecture and how it operates to ensure that we are operating very sustainably.
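
    The fan savings Mukul describes follow from the fan affinity laws: airflow scales linearly with fan speed, while fan power scales roughly with its cube, so a modest turndown yields an outsized saving. A quick worked example; the 60 kW baseline fan power is an assumed figure.

        # Fan affinity laws (idealized): flow ~ speed, power ~ speed**3.
        baseline_fan_power_kw = 60.0  # assumed CRAH fan power at full speed

        for speed in (1.0, 0.9, 0.8, 0.7):
            power_kw = baseline_fan_power_kw * speed ** 3
            print(f"{speed:.0%} speed -> {speed:.0%} airflow, "
                  f"{power_kw:.1f} kW fan power")
        # At 80% speed you still move 80% of the air but draw only about
        # half the fan power, which is why airflow turndown is such a lever.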

    Joe Prisco:

    Definitely. The only thing I'll add, and I know Mukul's the cooling expert, but if you're going to use AI, again, it's really about collecting the right data and training those models to give you the right results with any type of AI that you might use. So, whatever you define as the data, as the inputs to that model, and then the outputs: with more data over time, you'll learn more from it, and then you can work to optimize it with AI.

    Mukul Anand:

    Absolutely, Joe, that's a great point. These days, we do have several data centers that collect a lot of information on how the power and cooling equipment have historically operated, and having this data available and running models on it should give us insights that were not available to us in the past. So, we are very optimistic about learning new ways of running data center cooling systems, as well as developing KPIs and best practices that can be used globally, making all data centers a little more sustainable and a little more efficient, one at a time.

    Thomas Loxley:

    That's a really good point. A lot of the work is done to optimize based on previous weather data for a certain climate location, but your local microclimate might produce temperatures and humidity that span quite a large range for you as a designer to meet. By utilizing AI to collect data at the exact location, rather than relying on a table of the previous 20 years' weather data, you'd have a better opportunity to cool your data center in a more energy-efficient way without oversizing the equipment.

    Mukul Anand:

    Absolutely, Tom. That is a great point. Historically, we've said, here's the weather bin data for Washington Dulles Airport, but today, our data center customers have really become savvy, in the sense that the Ashburn area in Loudoun County now has so many data center campuses that if you are next to another data center, your environmental conditions are a little bit different. If you have a single-story data center with the HVAC equipment located on the rooftop and the generators on the ground floor below, that's a different ecosystem.

    If you are in a five-story, multi-story data center, then your ecosystem is a little bit different in terms of wind speeds and the influence from what's on the ground level. So, our customers are depending on CFD models and a whole lot of analytics around airflow and sound. When this kind of information is fed into AI models, we believe that for specific data center campuses we are going to get insights that we didn't have in the past. So, this is a great time to be in this space, and we are very optimistic about learning new things in the years to come.
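
    Bin data of the kind Mukul references is straightforward to compute from a site's own measurements rather than an airport table. A minimal pandas sketch follows; the file name "site_weather.csv" and its "drybulb_f" column are hypothetical stand-ins for a local weather-station log.

        # Build weather bins from a site's own hourly temperature log.
        # "site_weather.csv" with a "drybulb_f" column is hypothetical.
        import pandas as pd

        df = pd.read_csv("site_weather.csv")   # one row per hour
        bins = list(range(-10, 121, 5))        # 5°F bins from -10 to 120°F
        df["bin"] = pd.cut(df["drybulb_f"], bins=bins)

        bin_hours = df["bin"].value_counts().sort_index()
        print(bin_hours)  # hours spent in each 5°F bin
        # A design-condition style percentile, for comparison with the
        # single extreme bin the equipment would otherwise be sized to:
        print(df["drybulb_f"].quantile(0.996))

    Sizing to where the hours actually fall, rather than to the single extreme bin, is the right-sizing argument made throughout this episode.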

    Joe Prisco:

    Right, and I absolutely fully agree with Mukul. The one exciting thing is generative AI, that it actually starts generating data in ways that you might not have thought of, because it's seeing the data and learning the patterns. Based on the different models and how it might apply them, it may give you some insights that you just never thought of. So, the opportunity is there, and it's pretty exciting.

    Thomas Loxley:

    Here's a big question for both of you. Are there times where reliability and resilience are at odds with one another in data center design?

    Joe Prisco:

    So when I think of reliability, I think of things like mean time between failure of parts. When I think of resiliency, I think of something more holistic and operational; let's say, again, bringing up the A side and B side, everything in that whole A-side distribution needs to be working together to make sure you stay up, so you have the uptime that you need. There are differences between the two, but as I think about it, it's efficiency that seems to be more at odds with reliability and resiliency, because of the example I mentioned before from the electrical side: when you're doing 2N, you want to be able to meet the SLAs, the reliability and resiliency requirements, and your efficiency can suffer. Although one good thing I'm seeing in the industry is that efficiencies around 40, 50, 60% loading are really improving.

    So, you definitely see that where efficiency curves used to be "the higher the load, the higher the efficiency," now there seems to be a little inflection point, or knee of the curve, around 40, 50, 60%. Because when you're operating 2N, the most you could have is 50%, and you're going to have some guard band there. So, you're probably saying the most I can ever go, capacity-wise, is 40% on one leg, yet it's sized for 100%. The efficiencies are improving, which helps, but it still may not be the same as if you were just running an N-type system, which seems to go against the industry norm, because then reliability and resiliency could be impacted.

    Mukul Anand:

    Yeah, exactly, Joe. When we think about reliability and resiliency, we are thinking about what happens in Phoenix when the ambient air temperature in the shade is 130 degrees Fahrenheit, and with the heat island effect, it becomes 140 degrees. You want to run your data center at 100% of the design load, which may either never happen or probably happens once or twice in 20 years. Your HVAC system, as well as the electrical infrastructure, is designed to support that day or that afternoon.

    However, if the extreme conditions can be catered to either on the software side, by moving some of the processes elsewhere, or on the hardware side, by using techniques like chilled water reset and running all the redundant HVAC equipment as well, then you're not designing for that extreme afternoon in 20 years. You're designing for where the real bin hours are and the real conditions, the maximum operating conditions that you see at that data center location, thereby allowing the power consumption throughout the year to be optimized, as opposed to just having the capacity for that afternoon that you may experience once in a while.

    This leads us to the other benefits we must absolutely focus on: having power and cooling equipment that is right-sized, has low embedded carbon, and generates a reasonable or low amount of carbon throughout the life of the product. It's simplification. It's making sure that the equipment is light, so that the structural steel used in the data center building is less than it would otherwise be. It's ensuring that we use low GWP refrigerants, but also ensuring we use as little of them as possible to avoid any future unintended consequences.

    It's putting just enough cooling equipment on the rooftop so the noise level is as low as it can be, and, at the end of its life, ensuring that we are putting as little as possible into the landfill because the useful life of that HVAC equipment is complete. Which leads us to a couple of topics we are actively researching right now: ensuring that the cooling equipment can not only handle the requirements of air-cooled servers, but that the same cooling equipment can handle the requirements of liquid-cooled servers, whether immersion type or direct-to-chip type. By doing this, we can ensure that we put as little equipment into the landfill over the next 15 to 20 years as the server technology transitions from the 100% air-cooled servers of today to something different in the future.

    Joe Prisco:

    Right. A lot of good points. I mean, sustainability is on the top of everyone's mind. I think Mukul, you touched on about every single one of them. So, nice job.

    Thomas Loxley:

    Yeah, it's more than just a buzzword now, right? Sustainability, decarbonization, these are all action items on ASHRAE's agenda. So, back to artificial intelligence, the data center industry seems to be one of the first to adopt AI technologies. Do you have any examples of other issues in the data center industry where AI has already stepped in to help solve or completely solve these issues?

    Joe Prisco:

    With AI, and I'm going to focus on the electrical side, it's really understanding loading on electrical panels, really understanding where you're at from a capacity standpoint, using data from wherever you can get it, mostly building management systems. I know the ASHRAE audience is mostly mechanical and thermal engineers, but the simple example I usually use is that a lot of people have played around with or wired their own homes. When you open your home panel, there's usually a main breaker at the top and a bunch of individual circuit breakers below it. Well, building management systems have a bunch of what they call current transformers, and also sense resistors to get voltage readings.

    So, you collect all this data so you can really understand the differences between planned loads and actuals. When you plan a data center, here's what the loads may be, versus the actuals that you might have. Then, with artificial intelligence, as you're collecting this data, you can really understand how your actual loads over time compare to, let's say, a maximum demand requirement from the US National Electrical Code. You can see over time: are you getting close to limits? Do you have room on circuits, both within individual circuit breakers and in the main panel? Then, if you have that, how much load can you actually add? You have a planned load, but once you get these actuals, you can really determine where you're at and how much capacity you have left.

    Because I do work with a lot of individuals who always want to install more power panels, install more capacity. Well, before you do that, you should really understand: is there a need? Where are you at today? These building management systems produce so much data that artificial intelligence, once you build and train a model, can really help you understand when you're going to be at your limits. What do the daily fluctuations really look like, in a more simplified or concise view, versus looking at streams of data and just plotting things and saying, "Here's my absolute max. Where do I stack up?"

    That might've just been one little blip where you're close to the maximum, but average-wise, you're not even close to what the maximum may be. So, I think AI can play a huge role in understanding capacity, your limitations there, and how you can more efficiently design your data centers. Instead of adding additional capacity, work with what you already have if you have capacity to add more equipment.
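
    A concrete version of the capacity question Joe raises: compare measured branch-circuit readings against the common practice of limiting continuous loads to 80% of a breaker's rating under the US National Electrical Code. A minimal sketch; the circuit names, breaker sizes, and current readings are invented for illustration.

        # Check measured circuit loading against an 80%-of-rating
        # continuous limit, as commonly applied under the US NEC.
        CONTINUOUS_LIMIT = 0.80  # continuous loads held to 80% of rating

        circuits = {
            # name: (breaker rating A, measured peak A, measured average A)
            "PDU-1A": (30, 21.0, 14.5),
            "PDU-1B": (30, 26.5, 18.2),
            "PDU-2A": (20, 12.0, 9.1),
        }

        for name, (rating_a, peak_a, avg_a) in circuits.items():
            limit_a = rating_a * CONTINUOUS_LIMIT
            flag = "NEAR LIMIT" if peak_a > limit_a else "ok"
            print(f"{name}: limit {limit_a:.0f} A, avg {avg_a:.1f} A, "
                  f"peak {peak_a:.1f} A [{flag}]")
        # PDU-1B peaks above its continuous limit, but its average is not
        # close: the one-blip-versus-average distinction Joe describes.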

    Thomas Loxley:

    So it could avoid installing unnecessary equipment just to make an expansion.

    Joe Prisco:

    Yes, absolutely, Thomas. That's where I've seen portions of AI being used in that type of space: to avoid additional costs with construction, procuring equipment, and all the capital-type expense that goes along with building it. Plus, sustainability and decarbonization would obviously play a role in that too, if you can avoid those types of CapEx investments.

    Mukul Anand:

    On the cooling side, we have successfully used technologies like chilled water storage and direct evaporative cooling, and the equipment set includes air-cooled chillers, water-cooled chillers, several pumps, valves and hydronics, as well as computer room air handlers. I will throw the server fans in there as well, the power consumed by the server fans. Several different firms have shown that AI-based models can lead to an optimization technique that allows for lower annual energy cost than just having a legacy sequence of operations being used in data centers. But going forward, we can also leverage the CFD models that are currently being studied for the heat island effect.

    We can leverage information that we have collected over the years for multi-story data centers, where the density of heat rejection from rooftops is extreme in nature. There is also a recent focus on changing cooling architectures because of the lack of water availability, and more and more communities are speaking out against the sound generated by data centers.

    So, I think the AI models of the future will look at the way we've historically operated using sequences of operations; at the AI models that are already in place and showing a lot of promise; at the analysis being done today for newer data centers that are being designed with high density; and at the requirements of air-cooled servers versus liquid-cooled servers and what those mean in terms of HVAC, and take all of that into a holistic model. So that when we design, manufacture, install, and commission a cooling architecture today, we know that it's going to be relevant for the next 10, 15, 20 years as the server architecture changes. Plus, there's the promise of AI that we see today. That's why it's so exciting.

    Thomas Loxley:

    So for Mukul, what additional factors do we need to consider when looking at limited roof space or multi-story data centers?

    Mukul Anand:

    Thanks, Tom. This is a really critical topic for North American data centers today. Data centers in Europe, Singapore, et cetera, have been multi-story for a while, but in the recent past, what we've seen is that the price of land in Loudoun County, Northern Virginia, has spiked significantly over the last few years. We're seeing the same thing in Silicon Valley, as well as Dallas, where our owner-operators are now designing and building multi-story data centers. What that does is give the cooling equipment a much smaller footprint from which to reject the heat from many levels of white space, and therefore the density of heat rejection on these rooftops is several times what it would be from a single-story white space.

    So, additional research has to be done to ensure that the energy generated by the IT equipment, as well as the energy from compressors, fans, pumps, et cetera, can all be rejected from a very limited amount of rooftop space, which leads to very specific challenges around heat transfer and airflow, as well as CFD modeling of the rooftop. A lot of research is going into this to ensure that the growth of the data center vertical can continue, and we can help by ensuring that there are cooling architectures available to support the multi-story data center build-out.

    Thomas Loxley:

    So what needs to happen for a larger adoption of lower GWP refrigerants?

    Mukul Anand:

    Over the years, we have researched, experimented with, and launched HVAC equipment with several different kinds of refrigerants. Over the last few years, the low GWP requirements have led us to use refrigerants that are termed mildly flammable, or A2L, refrigerants, which provide a best-of-both-worlds solution for us. In order for A2L refrigerants to gain widespread acceptance, we have to ensure that the entire ecosystem is educated and planning to support installations with A2L refrigerants in them.

    This includes the manufacturing of these refrigerants, ensuring that they are available where needed, and ensuring that factories that manufacture HVAC equipment are prepared to charge the refrigerant into the HVAC equipment, but that's not where it ends. In addition, we need to ensure that the Department of Transportation has the policies in place to ship this equipment from the factory location to the job site. We have to ensure that the rigging companies understand whether there are any special steps that need to be taken. We need to ensure that the technicians who work on these pieces of equipment have any specialized processes or equipment available at their disposal to work on them.

    In addition to that, we need to ensure the UL specifications, as well as any safety protocols, are well documented and understood. Finally, we also need to ensure that the authority having jurisdiction in any municipality, as well as the insurance companies, are aware of how to cover buildings that have equipment with A2Ls in them. So, for the data center vertical to continue to grow at the pace at which it needs to grow, we need to make sure that the entire ecosystem is ready to adopt and work with these low GWP refrigerants.

    Thomas Loxley:

    Okay. Well, I'd like to thank our guests for joining us today. Joe and Mukul, it has been a pleasure. Are there any concluding thoughts for our listeners to close this out?

    Joe Prisco:

    Yes, I do have one. Listening to Mukul mention CFD a few times, and I know I've said this a few times, I'm an electrical engineer, but I was actually taught how to do some of those CFD models, and I was pretty good at them. What I realized early on with them is that if I spent one day at a data center collecting data and put the information in the model, the results weren't very good. If I spent five days or more, with others, collecting a lot of data, the model accuracy for that point in time was actually really good. There's a correlation between that and AI.

    With AI, as you build and train models, the more data you can put into the model you're training that's specific to what you need, the better your AI results. So, if you're going to be building and training models, you probably don't want to just go out and scrape data from any place out there. It may not be reliable data, and there may be other issues with it. By using your own data, lots of data, and training the model with it, you'll get the right results, both with traditional models and with the generative-type models as well.

    Mukul Anand:

    So in my final concluding remarks, I will say that we must absolutely respect the uptime and reliability needs of the mission-critical space. In addition, we must focus on operating these data centers as sustainably as possible, which means using renewable power as and when possible; using as little power throughout the year as possible through economization; operating these data centers as quietly as possible; using low GWP refrigerants; and using all the tools available to us, including the capabilities of AI, to make sure that we are operating these data centers as sustainably as possible, reducing waste, and giving these HVAC pieces of equipment a longer life, so to speak. That's my hope and what we are working toward.

    Thomas Loxley:

    Well, great. Thanks again, and I look forward to seeing both of you in Chicago for our winter meeting. I am Thomas Loxley and you've been listening to the ASHRAE Journal Podcast.

    ASHRAE Journal:

    The ASHRAE Journal Podcast team is editor, Drew Champlin; managing editor, Kelly Barraza; producer and associate editor, Chadd Jones; assistant editor, Kaitlyn Baich; associate editor, Tani Palefski; creative designer, Theresa Carboni; and technical editor, Rebecca Matyasovski. Copyright ASHRAE. The views expressed in this podcast are those of individuals only and not of ASHRAE, its sponsors, or advertisers. Please refer to ashrae.org/podcast for the full disclaimer.
