Laying the Groundwork — Research, Facility Options, and Difficult Choices for a 10MW Liquid Cooling Facility

Part 2 of a 4-Part Series. Read Part 1.

When we decided to bring direct-to-chip (DTC) liquid cooling into our integration process, it wasn’t just about swapping out one cooling method for another — it meant rethinking our entire facility setup. From power availability to building layouts to the timing of major installations like chillers and plumbing, every detail mattered. Here’s how we navigated the research and planning phase, and some of the difficult decisions that came with it. 

Lead Times & Location Strategy 

Upgrades take time. Increasing power capacity isn’t as simple as calling your utility and having them flip a switch. From permitting to substation enhancements, it can take a year or more to add significant power. That timeline forced us to consider alternative facilities or expansions that already had enough power “built in.” 

We had to think beyond immediate needs. Any capital investment, whether upgrading our existing facility or leasing a new one, needed to support growth well beyond the next 12 months. Scalability was key.

One major consideration was square footage. A typical AI rack requires around nine pallets of material, making warehouse capacity a critical factor. At scale, this also introduced a challenge in sustainable material disposal—the sheer volume of pallets needing removal could become a logistical and environmental concern. 
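To make the space math concrete, here is a back-of-envelope sizing sketch. The nine-pallets-per-rack figure comes from our experience above; the monthly throughput, pallet dimensions, and staging multiplier are illustrative assumptions rather than our actual numbers.

```python
# Rough warehouse staging estimate. PALLETS_PER_RACK is from the article;
# the throughput, pallet size, and staging multiplier are assumptions.

PALLETS_PER_RACK = 9
racks_per_month = 100                          # assumed integration throughput
pallet_footprint_sqft = (48 / 12) * (40 / 12)  # standard 48" x 40" pallet, ~13.3 sq ft
staging_multiplier = 3.0                       # assumed overhead for aisles and forklift access

pallets_in_flight = racks_per_month * PALLETS_PER_RACK
floor_space_sqft = pallets_in_flight * pallet_footprint_sqft * staging_multiplier

print(f"{pallets_in_flight} pallets/month -> ~{floor_space_sqft:,.0f} sq ft of staging space")
# 900 pallets/month -> ~36,000 sq ft of staging space
```

At that rate, pallet staging alone can consume a large share of a typical warehouse, which is why both inbound capacity and outbound disposal became planning line items.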

Beyond space, power was the other non-negotiable. We needed significantly more of it—and we needed it fast. Below are the three main options we evaluated, each with its own pros, cons, and cost profiles. 

Option 1: Rework Our Local Unused Warehouse Space

Within our existing campus, we had a large depot with additional capacity. Its proximity allowed for employee sharing and eliminated the need for lease negotiations, making it an attractive option. On paper, the facility’s 53,000+ square feet seemed ideal for high-volume AI rack integration. However, despite its generous footprint, it had significant challenges: 

Key Challenges of This Facility 

  • Severely Limited Power: The existing power capacity was only 400 amps at 277/480 volts, a fraction of what we needed for large-scale liquid-cooled integration (the quick calculation after this list shows just how small a fraction). Upgrading the electrical infrastructure would require a year-long lead time, making it difficult to meet demand in a timely manner.
  • High Retrofit Costs: Beyond power, extensive modifications were necessary, including a complete overhaul of power distribution, the addition of chillers, and potential reconfiguration of large sections of the building. The level of investment required would be significant. 
  • Overall Cost Inefficiency: When we ran the numbers, this was the least cost-effective path. The combination of upgrade expenses and delays made it difficult to justify. 
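To put the power gap in perspective, here is the quick calculation referenced above. The 400 amps and 277/480 volts come from the facility's actual service; the power factor is an assumed value for illustration.

```python
import math

# Three-phase power available from a 400 A, 277/480 V service.
# Amperage and voltage are from the text; the power factor is assumed.

amps = 400
volts_line_to_line = 480
power_factor = 0.95  # assumed

kva = math.sqrt(3) * volts_line_to_line * amps / 1000
kw = kva * power_factor

print(f"~{kva:,.0f} kVA (~{kw:,.0f} kW) available vs. a 10,000 kW (10MW) target")
# ~333 kVA (~316 kW) available vs. a 10,000 kW (10MW) target
```

Roughly 0.3MW against a 10MW goal: about 3% of what we needed, before accounting for any cooling overhead.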

While the size of the building was attractive, the lack of power infrastructure made it impractical. The cost and timeline required to bring it up to standard ultimately ruled it out as a viable solution.  

Option 2: Expand Our Existing US Facility

Our existing facility had a key advantage — our team was already there, and the layout was optimized for integration workflows. Leveraging this familiarity could streamline operations, but significant constraints made expansion a challenge. 

Key Challenges of Expansion 

  • Power Limitations: Scaling to 10MW would require extensive electrical infrastructure upgrades, a process that could take a year or longer—far beyond the timeline we needed to meet growing demand. 
  • Space Constraints: Even if we solved the power challenge, the available space was tight for high-volume AI rack integration. Managing multiple large-scale requests within the existing footprint would have been difficult.
  • Operational Trade-offs: Expanding within the current facility would have meant either displacing other business lines or limiting overall capacity, creating a significant opportunity cost. 

While expanding our existing facility offered familiarity and operational continuity, the power and space limitations ultimately made it a less viable long-term solution. 

Option 3: A New Facility 

We explored multiple locations across several states before narrowing our search to a standout option: a 68,000 sq ft facility located quite literally across the street from our existing location. The odds of finding such a space so close were incredibly slim, and beyond its physical attributes, it offered additional advantages such as streamlined inventory movement and shared workforce potential.

One of the most compelling factors was its power capacity. The facility’s previous tenant specialized in industrial plastic injection molding, a process that required moving extremely heavy materials and operating liquid-cooled machinery. As a result, the building already had a substantial power profile of approximately 10MW—a rare find in Illinois, where few existing sites could offer that level of on-demand capacity. 

Key Advantages of the Facility 

  • Readiness: The space was well-equipped, featuring air conditioning across what would become our warehouse and production areas, along with multiple loading docks — critical for efficiently moving large rack crates in and out. 
  • Strategic Power Access: With a high-capacity power infrastructure already in place, we could significantly reduce the lead time and complexity of electrical upgrades. 

While the facility had the electrical backbone we needed, additional investment was still required for chillers and plumbing. Given the long lead times for these components, careful sequencing would be necessary to bring the facility online without unnecessary delays. 

Balancing Partnerships, Costs, and Timelines 

No matter which option we chose, a significant investment was inevitable. The key question became: How do we scale quickly while maintaining flexibility—without locking ourselves into technology or facilities that could become obsolete in just a few years? 

Technology Standardization vs. Flexibility 

The liquid cooling market is evolving rapidly. A system considered cutting-edge today could be surpassed by a more efficient or cost-effective solution within 12–18 months. Rather than committing to a single proprietary technology, we prioritized vendor flexibility to allow for future upgrades without overhauling our entire setup. 

Conversations with our clients and partners reinforced this approach. There was no requirement to standardize on one brand or cooling style; in fact, a hybrid approach that integrated multiple technologies offered the fastest path to going live. We focused on solutions from CoolIT, Motivair, and Vertiv, ensuring we could adapt as the industry evolved.

The supply chain for specialized cooling equipment presented another challenge. Lead times varied significantly for key components such as chillers, coolant distribution units (CDUs), and specialized piping. These supply constraints inevitably shaped our technology choices and implementation timeline.

Facility Power Requirements: The First Major Hurdle 

If you’re looking to integrate and test liquid-cooled racks at scale, with each rack drawing more than 100kW, you need serious power. Our existing integration site has 2.4 megawatts (MW) available: enough for significant node and standard rack workloads, but nowhere near the 10MW we wanted for DTC liquid cooling at scale.
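As a rough sanity check on those figures, the arithmetic below shows what each power budget buys in rack count. The 2.4MW, 10MW, and 100kW-per-rack figures are from our planning; treating the full budget as available to IT load is a simplification.

```python
# Maximum rack counts for each site power budget, ignoring cooling overhead.
# The power budgets and per-rack draw are from the text.

RACK_KW = 100

for site_kw in (2400, 10000):  # 2.4 MW existing site, 10 MW target
    racks = site_kw // RACK_KW
    print(f"{site_kw / 1000:g} MW supports at most {racks} racks at {RACK_KW} kW each")
# 2.4 MW supports at most 24 racks at 100 kW each
# 10 MW supports at most 100 racks at 100 kW each
```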

Powering the racks themselves is only part of the equation: you also need overhead for chillers, pumps, and auxiliary systems (the sketch below puts rough numbers on this). Without sufficient redundancy, even a minor disruption can cascade into significant issues.

Within the integration center, we also designed a 60-rack innovation lab. This hands-on environment enables organizations to benchmark performance gains across diverse AI and HPC architectures. The lab design features rack-scale DTC liquid cooling, a liquid-to-air CDU, and a liquid-to-liquid CDU connected to chilled facility liquid. Real-world deployment scenarios and side-by-side comparisons of thermal efficiency and power consumption will enable clients to make data-driven infrastructure decisions.
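Here is a rough sketch of how that overhead shrinks the usable budget. The 10MW figure is ours; the PUE-style multiplier is an illustrative assumption, not a measured value for our facility.

```python
# Cooling and auxiliary systems claim part of the facility power budget.
# SITE_MW is from the text; the PUE-style overhead multiplier is assumed.

SITE_MW = 10
PUE = 1.25        # assumed: chillers, pumps, CDUs, and auxiliaries
RACK_KW = 100

it_budget_kw = SITE_MW * 1000 / PUE
racks = int(it_budget_kw // RACK_KW)

print(f"IT budget: ~{it_budget_kw:,.0f} kW -> ~{racks} racks of {RACK_KW} kW")
# IT budget: ~8,000 kW -> ~80 racks of 100 kW
```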

As we assessed our power needs, we weren’t just thinking about today—we were planning for what comes next. While several solutions met our immediate requirements, we knew that with future rack power demands reaching >600kW, we had to start preparing for long-term scalability. That meant working closely with key stakeholders to understand grid capacity, infrastructure constraints, and expansion possibilities. 
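To gauge what those future densities imply for distribution, consider the feeder current a single 600kW rack would draw. The 600kW figure is from our planning; the 480V distribution and power factor are assumptions for illustration.

```python
import math

# Feeder current for one 600 kW rack on three-phase 480 V distribution.
# The rack power is from the text; voltage and power factor are assumed.

rack_kw = 600
volts_line_to_line = 480
power_factor = 0.95  # assumed

amps = rack_kw * 1000 / (math.sqrt(3) * volts_line_to_line * power_factor)
print(f"A {rack_kw} kW rack draws ~{amps:,.0f} A at {volts_line_to_line} V three-phase")
# A 600 kW rack draws ~760 A at 480 V three-phase
```

For comparison, a single rack at that density would pull nearly twice the entire 400-amp service of the warehouse in Option 1.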

Navigating Power Constraints and Grid Planning 

In discussions with the State of Illinois and ComEd, we uncovered a crucial factor: two other projects in the area had recently submitted significant load increase requests. These additional demands, expected to come online between fall 2025 and early 2026, could push the transformer at our substation to 90% of capacity or higher. To accommodate our own power requirements, several options emerged:

  • Extending a new feeder from the substation to distribute the load more effectively. 
  • Installing a new substation transformer to ensure sufficient capacity for future growth. 
  • Exploring alternative feeder configurations, though this would require a more detailed analysis and wasn’t a guaranteed solution. 

One key insight from our conversations was the role of diversification factors in load evaluations. While we initially stated our load requirement as 10MW, ComEd’s New Business team would likely apply a diversification factor (a standard utility adjustment reflecting that not all connected equipment draws its peak load at the same time), potentially reducing the estimated operating load. This could make integrating our power needs into the existing infrastructure more feasible, without an immediate need for major upgrades.
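As a simple illustration of how a diversification factor changes the picture, consider the sketch below. Our stated 10MW load is real; the factor itself is an assumed value, not one ComEd provided.

```python
# A diversification (diversity) factor converts a stated connected load
# into an estimated operating load. The 10 MW is from the text; the
# factor is an illustrative assumption.

stated_load_mw = 10
diversification_factor = 0.7  # assumed: not all equipment peaks at once

estimated_operating_mw = stated_load_mw * diversification_factor
print(f"Estimated operating load: {estimated_operating_mw:.1f} MW of {stated_load_mw} MW stated")
# Estimated operating load: 7.0 MW of 10 MW stated
```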

Ultimately, ComEd confirmed that serving our load wasn’t the issue—it was a question of cost and timing. The next step? Submitting a meter and service application (load letter) to kick off the official design and integration process with their New Business team. 

This experience reinforced a critical lesson: in high-density computing, power isn’t just a utility—it’s a strategic asset. Understanding regional grid constraints, planning for long-term scalability, and collaborating with the right partners are all essential to staying ahead in the AI and HPC era. 

In upcoming posts, we’ll dive deeper into some of our technology choices and the nuts and bolts of standing up a liquid-cooled integration facility, where theory meets reality.

If you’re navigating a similar decision process, keep a close eye on lead times and think strategically about your growth trajectory. The perfect facility may not exist, but the key is understanding which trade-offs you’re willing to accept to get your operation up and running quickly.

Key Points

  • Power is King – If your facility can’t scale to meet the power demands of liquid-cooled racks, you’ll be stuck before you start. 
  • Lead Times Rule Everything – Electrical upgrades, chiller installations, and specialized equipment all have long lead times—plan accordingly. 
  • Flexibility Pays Dividends – The best option isn’t always the perfect one—it’s the one that lets you pivot as market conditions and technology evolve. 

About the author

Chris Tucker

EVP, Foundry™

Chris Tucker is EVP of Foundry™, AHEAD’s facilities for integrated rack design, configuration, and deployment. Chris is passionate about helping companies identify and solve complex business issues with cutting-edge infrastructure products and services. Hailing from Wales, he is equally passionate about Welsh rugby.
