Can our networks and data centres handle the surge?


As the proliferation of artificial intelligence accelerates, the underlying infrastructure—comprising data centres, network architecture, and power systems—is under unprecedented pressure.

The panel discussion titled “AI Infrastructure Keynote: Can Our Networks and Data Centres Handle the Surge?” explored whether current systems are prepared to support the demand explosion brought about by AI.

Speakers

Erik Kreifeldt, principal at Forthright Insight (moderator)

Benjamin Von Seeger, chief revenue officer at Stellenium

Rodney Dellinger, Webscale CTO & head of architecture at Nokia

Scott Mills, SVP, engineering and customer solutions at Digital Realty

Milad Abdelmessih, VP at KDDI Telehouse

The status of the AI build-out

Opening the session, moderator Kreifeldt laid out the stakes with a direct question: “What is the status of the AI build-out?”

Scott Mills of Digital Realty responded with urgency, stating: “We are inundated with things on the frontier models. It’s not just the traditional data centre; we’re talking gigawatt-scale campuses. The challenge is: do we have enough electrons to meet these aspirations?”

Milad Abdelmessih echoed this, stressing the two critical factors: “It’s power and connectivity. Where can you get that scale of power—especially for the training models—and does that location have the connectivity AI needs to function?”

Beyond power: the connectivity factor

While power dominated much of the discussion, the panel made clear that connectivity must not be overlooked. Benjamin Von Seeger offered a reality check:

“We focused so much on power and site selection that we completely forgot the most important thing. I can build the most beautiful data centre in the world, but if I don’t have connectivity, I’ve got nothing.”

He added that telcos must “come back into the game” to support AI’s infrastructure needs and highlighted the resurgence of network access points and meet-me rooms:

“The interconnect plays a crucial role… we want carriers to re-engage with more scalable solutions.”

Rodney Dellinger agreed, likening the infrastructure to a human body: “If data centres are the brains of AI, the network is the cardiovascular system. You can’t have one without the other.”

The hyperscale revolution

Scott Mills revealed the scale of change within the data centre industry: “What started as 15–30MW buildings are now 100–200MW single buildings. We’ve shifted from CPU-centric to GPU-centric designs. We’re seeing one-megawatt cabinets, 640kW cabinets—it’s extraordinary.”

Von Seeger observed that this scale demands an entirely new approach to energy management: “We will become energy companies. If we want to survive, we have to build our own power plants. No country is going to give you two-gigawatt permits—you’ll have to generate it yourself.”

He also warned of a rapid increase in demand: “Right now, we use about 1% of global power for digital infrastructure. In 36 months, that’s going to be over 13%.”

The retrofit challenge

Abdelmessih focused on retrofitting older data centres: “Facilities built 10 years ago weren’t designed for 30 to 100kW cabinets. Yet we need to transform them to support that level of scale.”

He stressed that edge facilities—despite not being hyperscale—still see high demand: “Customers want to be in highly connected facilities. They’re not just chasing the cheapest power—they want interconnection.”

This is crucial as AI inference moves from centralised training hubs to edge locations. “It’s going to creep up,” he said. “Real-time applications are only going to increase, and inference will need to happen much closer to end users.”

Network innovation: simpler, faster, smarter

On the networking front, Dellinger explained how AI is driving architectural change. He outlined a move from disaggregation back to convergence: “We’re seeing coherent optics now integrated directly into Ethernet switches. That removes an entire class of networking gear while maintaining performance.”

He also discussed a real-world case of inline amplifier (ILA) requirements: “One customer asked for 16 fibre pairs through one ILA hut using just 3kW—resulting in 800 terabits point-to-point. That’s one customer!”
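Dellinger’s figures imply roughly 50 terabits per second per fibre pair, at remarkably low power per bit. A quick back-of-envelope sketch (the totals come from the quote; the even split across pairs is our assumption):

```python
# Back-of-envelope check on the ILA example quoted above.
# Figures are from the quote; an even split per pair is an assumption.
fibre_pairs = 16
total_capacity_tbps = 800   # 800 terabits point-to-point
hut_power_w = 3_000         # 3 kW for the whole ILA hut

per_pair_tbps = total_capacity_tbps / fibre_pairs
watts_per_tbps = hut_power_w / total_capacity_tbps

print(f"{per_pair_tbps:.0f} Tb/s per fibre pair")      # 50 Tb/s
print(f"{watts_per_tbps:.2f} W per amplified Tb/s")    # 3.75 W
```

At 3.75 watts per terabit amplified, the line system is a rounding error next to the gigawatt campuses the panel described; the constraint is fibre and equipment availability, not energy.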

The message was clear: bandwidth demands are escalating fast, and the equipment doesn’t yet exist to meet what’s coming. “We’re being asked to deliver solutions that don’t exist yet—but must exist in the next two years.”

Submarine cable constraints and alien waves

Kreifeldt raised concerns about capacity on core networks, especially submarine cables. Dellinger acknowledged this, pointing to emerging models like “spectrum as a service”: “Rather than buying a full fibre pair or a wave service, smaller operators can lease a portion of the spectrum. Alien waves are one way we’re enabling that.”

He added: “The shift is also about who owns the network. We’re seeing hyperscalers push capacity, but there's room for smaller players with the right models.”

Latency and the role of telcos

Von Seeger emphasised latency as a game-changer: “With carrier infrastructure, we can now achieve 0.2 millisecond latency within 30 miles. That’s why we’re seeing build-outs beyond traditional hubs like Reston, Virginia.”
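The 0.2-millisecond figure sits close to the physical floor for fibre over that distance. A quick sanity check, assuming an idealised straight-line route and a typical silica-fibre group index of about 1.47 (real routes are longer, and equipment adds further delay):

```python
# Rough physics check on the latency figure quoted above.
# Assumes a straight-line fibre route and a group index of ~1.47;
# real-world paths are longer, so this is a lower bound.
C_KM_PER_S = 299_792.458           # speed of light in vacuum, km/s
GROUP_INDEX = 1.47                 # typical for silica fibre (assumed)
KM_PER_MILE = 1.609344

distance_km = 30 * KM_PER_MILE     # ~48.3 km
speed_in_fibre = C_KM_PER_S / GROUP_INDEX
one_way_ms = distance_km / speed_in_fibre * 1_000

print(f"one-way propagation: {one_way_ms:.2f} ms")
```

The result lands in the 0.2–0.25 ms range one way, which is why inference nodes within roughly 30 miles of users can feel effectively real-time.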

Abdelmessih agreed, underlining the need for distributed infrastructure: “Inference nodes need to sit on peering points and IX locations for quicker delivery. Real-time applications demand it.”

He added that smaller GPU-as-a-service providers play a vital role: “They’re supporting enterprise and manufacturing customers with hybrid models. It’s not all about the hyperscalers.”

As AI demand surges, this panel illustrated that success will hinge on a highly collaborative, cross-functional effort. Telcos, hyperscalers, infrastructure providers, and GPU-as-a-service operators all have a part to play.

Erik Kreifeldt noted: “We’re moving from pure hardware builds to a software-defined, distributed, and dynamic AI infrastructure. The question is not if we’ll meet demand—it’s how quickly we can reimagine everything from power to peering.”

Key takeaways:

  • AI infrastructure is shifting from CPU to GPU-centric models, requiring unprecedented power and connectivity.
  • Hyperscalers are setting architectural standards, but smaller players remain vital.
  • Connectivity, particularly low-latency interconnects, is becoming as critical as power.
  • Innovation in network architecture (e.g. alien waves, coherent optics) will ease bandwidth constraints.
  • Retrofitting legacy data centres and building edge nodes will be central to supporting AI inference.
