Smart Conduit Choices for the AI Data Center Boom

October 28, 2025

“Build it and they will come” was a phrase widely used by early advocates of fiber-to-the-home (FTTH), particularly for underserved areas. While incumbent ISPs attempted to squeeze every last megabit from their existing copper assets – and convince subscribers it was enough for their needs – others knew that, given 1 Gbps and more over fiber, households would soon find uses for the bandwidth – from streaming and gaming to videoconferencing and a growing number of smart appliances within the home.

Fast-forward to 2025, and “Artificial Intelligence” (AI) is the latest buzzword on everybody’s lips. Only this time, it’s not a case of “build it and they will come”, it’s more about “we know what’s coming, but can we actually build it?”.  

 

The Impact of AI on the Network

According to a report from the Fiber Broadband Association (FBA) and RVA, we need, and can expect, at least a 3X increase in cumulative hyperscale data center capacity by 2032, with the number of newly constructed facilities reaching 225 by that year – mainly driven by AI. (Source: FBA) 

A typical hyperscale data center is built to run cloud services like storage, web hosting, SaaS, and databases, and is optimized for flexibility, scalability, and cost-efficiency. By contrast, an AI data center is built specifically to handle AI workloads like model training, which are more compute-intensive and less about storage and transactions. Consequently, AI facilities will be denser, hotter, and more network-intensive.

But it’s not just what these data centers look like that’s changing; so too is where they’re being built. While traditional hubs like California and Virginia will remain integral, leading long-haul infrastructure provider Zayo is seeing increased demand for its services in less predictable places like Ohio and Texas, driven by the availability of space and power. (Source: Zayo)

The Remaining Bottleneck
While the prospect of new data center hotspots might be exciting for local economies, according to the FBA, it’s not a home run just yet. That’s because of a constraint that is often overlooked but just as critical as land, electricity, and cooling: fiber interconnection between data centers.

Obviously, any new facilities will require substantial investment in long-haul interconnectivity, but with many existing data centers being upgraded to meet the demands of AI, existing long-haul links will also need reinforcement. The upshot? As many as 373 million fiber miles connecting data centers in the United States by 2029 – an increase of 214 million miles from 2024. (Source: FBA)
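To put that growth in perspective, here is a quick sanity check derived from the FBA figures above. The compound growth rate is our own back-of-the-envelope calculation, not an FBA projection:

```python
# Implied scale of the DCI fiber build-out, from the FBA figures above.
miles_2029 = 373e6                   # projected fiber miles connecting US data centers by 2029
increase = 214e6                     # projected increase over 2024
miles_2024 = miles_2029 - increase   # implied 2024 baseline: ~159 million miles

multiple = miles_2029 / miles_2024   # ~2.35x total growth in five years
cagr = multiple ** (1 / 5) - 1       # ~18.6% compound annual growth, 2024-2029

print(f"2024 baseline: {miles_2024 / 1e6:.0f}M miles")
print(f"Growth multiple: {multiple:.2f}x (~{cagr:.1%} per year)")
```

In other words, the installed base of data center interconnect fiber more than doubles in five years.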

Building It
As a collective, the hyperscalers and their parent companies have undoubtedly revolutionized our lives. But one thing they haven’t been able to do yet is solve the age-old challenges associated with utility construction. Deploying fiber can still be slow, expensive, and labor-intensive – particularly when building cross-country links over hundreds, if not thousands, of miles.  

However, they have devised strategies to limit the impact of these challenges and ensure that, whatever the scenario, they get the most out of their network infrastructure on day one and for years to come.

For years, Dura-Line has worked with the world’s biggest data center operators and build partners to optimize their conduit infrastructure for any scenario – limiting the impact of slow, expensive construction methods, and ensuring flexible, scalable capacity to support the AI revolution long into the future.  

 

Here’s what you need to know. 

Brownfield? Think Pathway Subdivision.
When planning a new long-haul route, the first thing a hyperscaler and their build partner will do is look at what infrastructure already exists between the two locations. When there’s pre-installed conduit owned by the data center or a third party, the build partner can create a network within a network by subdividing the larger conduit with MicroDucts and multi-way bundles, known as FuturePath®, creating numerous new pathways.

In many cases, MicroDucts can be air-jetted into an existing duct at high speeds over impressive distances in a single step. This is true even if the legacy pathway already contains a cable, thanks to an innovative installation method known as an OverRide, where the cable is isolated and the existing conduit is pressurized for jetting.  

OverRides are a highly versatile brownfield solution for data center interconnect (DCI) projects. Once it’s clear there’s enough space for an installation, the number and size of MicroDucts used depends on overall fiber count and routing requirements.  

In long-haul DCI, it may make the most sense to install fewer but larger MicroDucts to accommodate cables with higher fiber counts. For metro DCI, by contrast, a hyperscaler may prioritize route diversity and redundancy by installing a greater number of pathways with lower fiber counts. In either case, all MicroDucts should be installed at the same time.
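To get a feel for the space check behind an OverRide design, here is a minimal sketch. The fill-ratio threshold and the dimensions in the example are illustrative assumptions, not Dura-Line specifications; a real design tool also accounts for jetting distance, bend radius, and route geometry:

```python
import math

def circle_area(diameter_mm: float) -> float:
    """Cross-sectional area (mm^2) of a circle with the given diameter."""
    return math.pi * (diameter_mm / 2) ** 2

def override_fill_ratio(host_id_mm: float,
                        existing_cable_od_mm: float,
                        microduct_od_mm: float,
                        microduct_count: int) -> float:
    """Fraction of the host conduit's cross-section occupied by the
    existing cable plus the proposed MicroDucts."""
    occupied = (circle_area(existing_cable_od_mm)
                + microduct_count * circle_area(microduct_od_mm))
    return occupied / circle_area(host_id_mm)

# Illustrative example: a 2 in. SDR 11 host conduit (~49 mm ID) that
# already carries a 15 mm OD cable, with three 14 mm OD MicroDucts added.
fill = override_fill_ratio(host_id_mm=49.0,
                           existing_cable_od_mm=15.0,
                           microduct_od_mm=14.0,
                           microduct_count=3)

MAX_FILL = 0.5  # assumed planning threshold; actual limits vary by method
print(f"Fill ratio: {fill:.0%} -> {'OK' if fill <= MAX_FILL else 'too tight'}")
```

In this hypothetical case, the fill ratio comes out around 34%, comfortably inside the assumed threshold.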

Configure the ideal OverRide for your network with our handy digital calculator

Greenfield? Think Pathway Quantity.
When initial searches don’t yield existing pathways ripe for subdivision and there’s no option but to break ground, a hyperscaler’s attention switches to achieving the maximum fiber capacity possible for every construction dollar, within a defined physical footprint.

In the past, greenfield links would have been built with standard conduits with typical diameters of 1.25, 1.5, and 2 inches. But today, FuturePath bundles offer multiple pathways in the same footprint as a traditional pathway – or a smaller one. For example, a standard 2 in. SDR 11 conduit like Smoothwall has a nominal outer diameter (OD) of 2.375 inches (60.3 mm), whereas a FuturePath 7-Way 18/14 mm bundle measures just 2.03 inches (51.6 mm) in OD, with space for up to seven different cables.
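Running the numbers on that example – a quick back-of-the-envelope comparison using only the ODs quoted above:

```python
import math

def circle_area(diameter_mm: float) -> float:
    """Cross-sectional area (mm^2) for a given outer diameter."""
    return math.pi * (diameter_mm / 2) ** 2

smoothwall_od_mm = 60.3   # 2 in. SDR 11 conduit: one pathway
futurepath_od_mm = 51.6   # FuturePath 7-Way 18/14 mm bundle: seven pathways

a_single = circle_area(smoothwall_od_mm)   # ~2856 mm^2
a_bundle = circle_area(futurepath_od_mm)   # ~2091 mm^2

print(f"Footprint saved: {1 - a_bundle / a_single:.0%}")        # ~27% smaller
print(f"Pathways per mm^2: {7 / a_bundle:.4f} vs {1 / a_single:.4f}")
```

The bundle occupies roughly 27% less cross-section while carrying seven pathways instead of one – nearly a tenfold gain in pathways per unit of footprint.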

Crucially, the FuturePath bundle can be installed using the same traditional construction methods as the standard conduit, including trenchless technologies like horizontal directional drilling (HDD), which save significant time and cost in long-haul builds. This scenario is a win-win for all parties because the hyperscaler – whose priority is capacity – gets their fiber needs met several times over, and their build partner – who typically owns and operates the network – gets physically separate redundant pathways to monetize and accelerate their return on investment.  

 

Need help choosing the right product for your next installation? Check out our online Comparison Tool today!

 

Any Time You Deploy: Think Fiber Density.
MicroDucts provide permanent protective pathways for miniaturized fiber cables. While micro cables were first developed out of necessity in Europe to ease congestion in older, dense urban environments, more recent innovations in cable technology have been driven by hyperscale data center networks.

It is well known that data centers drove the development of unprecedented fiber counts like 1,728, 3,456, and 6,912 in a single cable, but these are best suited to connecting buildings over short distances on hyperscale campuses. In long-haul and metro DCI applications, fiber counts typically peak at 864, so cable manufacturers have focused on maximizing density (the number of fibers per square millimeter) in their micro cable portfolios.

With 864-fiber micro cables available that can comfortably be jetted into MicroDucts as small as 14 mm inner diameter (ID) and miniaturized ribbon cables that unlock the efficiency of mass-fusion splicing, hyperscalers and their build partners are now spoiled for choice when it comes to maximizing fiber density on their DCI routes.  
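As an illustration of what that density figure means in practice, here is a minimal calculation. The ~10.5 mm cable OD below is an assumption chosen to leave typical jetting clearance inside a 14 mm ID MicroDuct, not a specification for any particular cable:

```python
import math

def fiber_density(fiber_count: int, cable_od_mm: float) -> float:
    """Fibers per square millimeter of cable cross-section."""
    area = math.pi * (cable_od_mm / 2) ** 2
    return fiber_count / area

# Assumed: an 864-fiber micro cable around 10.5 mm OD, leaving clearance
# for jetting into a 14 mm ID MicroDuct.
print(f"{fiber_density(864, 10.5):.1f} fibers/mm^2")   # ~10 fibers/mm^2
```

Under those assumptions, the cable packs roughly 10 fibers into every square millimeter of its cross-section.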

So, Back to Our Question  

We know what’s coming, but can we build it?

With MicroDucts and FuturePath, the answer is decidedly yes.

Sources:
FBA & RVA
Zayo

 

Ready to Learn More?

Read success stories and discover more product solutions on our dedicated Data Center webpage.

Go to Data Centers
