In the era of cloud adoption and AI, the demand for data centre bandwidth has skyrocketed, leading to the exponential sprawl of data centres worldwide.
However, new data centres are running up against sustainability, space and budget constraints. Policymakers recognise the benefits of data centres for productivity, economic growth and research, but tension remains over their impact on local communities and their water and electricity use.
The best solution is to optimise the data centre infrastructure we already have, unlocking more performance while staying mindful of those limits. Our cities, our consumer products and our world are only going to become more digital, and we need more compute to keep up.
By optimising existing infrastructure to unlock more performance, data centres can turn these constraints into an opportunity for competitive advantage.
Why data centre optimisation matters
CIOs and IT leaders increasingly face calls to provide high-performance foundational compute infrastructure across their businesses and to handle new, more demanding use cases, all while balancing sustainability commitments, space and budget constraints.
Many have sought to build new data centres outright to meet demand and pair them with energy-efficient technologies to minimise their environmental impact.
For example, the LUMI (Large Unified Modern Infrastructure) supercomputer, one of the most powerful in Europe, runs on 100% carbon-free hydroelectric energy, and its waste heat is reused to heat homes in the nearby town of Kajaani, Finland.

There are many other examples like LUMI showing the considerable progress the data centre industry has made in addressing the need for energy efficiency.
Yet energy efficiency alone won’t be enough to power the growing demands of AI, which is expected to drive up demand for data centre storage capacity.
According to IDC, the external storage systems market in EMEA grew 3.6% in 3Q24, while the EMEA enterprise server market grew even more strongly, at 25.0%.
AI’s greater energy demands will also call for more energy-efficient designs to help ensure scalability and meet environmental goals. With data centre square footage, land and power grids nearing capacity, one way to optimise a design is to upgrade from old servers.
Data centres are expensive investments, and some CIOs and IT leaders try to recoup costs by running their hardware for as long as possible. As a result, most data centres are still using hardware that is 10 years old and only expand compute when absolutely necessary.
While building new data centres might be necessary for some, there are significant opportunities to upgrade existing infrastructure. Upgrading to newer systems means data centres can achieve the same tasks more efficiently.
Global IT data centre capacity is forecast to grow from 180 gigawatts (GW) in 2024 to 296 GW in 2028, a 12.3% CAGR, while electricity consumption is forecast to grow at a higher rate of 23.3%, from 397 terawatt hours (TWh) to 915 TWh in 2028. For ageing data centres, upgrading can translate to fewer racks and systems to manage while still maintaining the same bandwidth.
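To make the compound growth figures above concrete, here is a minimal back-of-the-envelope sketch in Python showing how a CAGR relates two endpoint values. It assumes the four-year 2024–2028 window implied by the figures quoted above and uses only the electricity-consumption numbers; the function name is purely illustrative.

```python
# Minimal sketch: compound annual growth rate (CAGR) between two values.
# Assumption: the 2024 and 2028 figures quoted above span four years.

def cagr(start: float, end: float, years: int) -> float:
    """Return the compound annual growth rate over `years` years."""
    return (end / start) ** (1 / years) - 1

if __name__ == "__main__":
    # Data centre electricity consumption: 397 TWh (2024) -> 915 TWh (2028)
    rate = cagr(397.0, 915.0, 4)
    print(f"Implied CAGR: {rate:.1%}")  # roughly 23.2%, in line with the cited ~23.3%
```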
Consolidation not only leaves significant room for future IT needs but also makes room for experimentation, which is essential for AI workloads right now. Teams can use the freed-up space to build less expensive proof-of-concept half racks before committing to bigger build-outs, and new hyper-efficient chips can help reduce energy consumption and cooling requirements, recouping the investment more quickly.
What to look for in an upgrade
There are many factors to consider in a server upgrade, and there isn’t a one-size-fits-all solution to data centre needs.
It’s not just about buying the most powerful chip the budget allows. Yes, a good chip’s impact on energy efficiency cannot be overstated, but each data centre has different needs that will shape the hardware and software stack it requires to operate most efficiently.
IT decision makers should look for providers that can deliver end-to-end data centre infrastructure at scale, combining high-performance chips, networking, software and systems design expertise.
For example, the right physical racks make it easy to swap in new kit as needs evolve, and open software is equally important for getting the different pieces of the software stack from different providers talking to each other.
In addition, providers that are continually investing in world-class systems design and AI systems capabilities will be best positioned to accelerate enterprise AI hardware and software roadmaps.
Advancing the Data Centre
As our reliance on digital technologies continues to grow, so too does our need for computing power.
It is important to balance the need for more compute real estate with sustainability goals, and the way forward lies in making the most of the real estate we already have. This is a big opportunity to think smartly and turn an apparent tension into a massive advantage.
By using the right computational architecture, data centres can achieve the same tasks more efficiently, making room for the future technologies that will transform businesses and lives.
This article first appeared in Datacloud Magazine - June 2025
Author
Robert Hormuth is the corporate VP for architecture and strategy, data centre solutions group at AMD.