Games are becoming more expensive to make – so what can network and infrastructure providers do to help?
The global video game industry is valued at over $300 billion - more than the film and TV industries combined. But as revenues have grown, so too has the cost of creating these blockbuster games. Development costs for AAA video game franchises such as Grand Theft Auto V, Call of Duty and Final Fantasy can easily surpass the $100 million mark, and the industry has been slowly moving away from single-player games in favour of multiplayer experiences - often because they're easier to monetise, but also because consumers increasingly favour social gaming experiences.
As you might imagine, making sure you've got the network capacity to cater for these players isn't cheap. Whether a game supports a thousand concurrent players or ten million, it needs significant infrastructure to deliver a smooth online experience with minimal latency. Then there is the sheer cost of the volume of data being handled. Epic Games, the creator of Fortnite, revealed it was using two petabytes of data to support 3.2 million concurrent gamers back in 2018 - and the game's global player base has continued to grow since then.
For online game developers, making sure they've got the right network infrastructure is extremely important. Competitive online games such as first-person shooters require extremely low latency (sub-30ms is considered standard for most competitive titles), and that is only achievable with globally distributed servers. You can't rely on centralised infrastructure when you're connecting millions of players across the world.
A typical way to minimise latency is to use data centres as physically close to the gamers as possible, but when games let players from all around the world play each other, this becomes more complicated. By harnessing edge computing - as EdgeGap does - it's possible to significantly increase the number of locations a publisher has access to, but a layer of optimisation is still needed to find the location that delivers the lowest latency for each match.
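That optimisation step can be sketched in a few lines. The example below is purely illustrative - the site names, ping figures and selection rule are assumptions, not EdgeGap's actual algorithm - but it shows the core idea: given each player's measured latency to a set of candidate edge sites, pick the site that minimises the worst ping in the group, so no single player is left far behind.

```python
# Hypothetical sketch: choosing a deployment location for a match based on
# each player's measured round-trip time (ms) to candidate edge sites.
# Site names and latency figures are illustrative.

def best_location(player_pings: dict[str, dict[str, float]]) -> str:
    """Return the candidate site with the lowest worst-case (max) latency
    across all players in the match."""
    sites = next(iter(player_pings.values())).keys()
    return min(sites, key=lambda site: max(p[site] for p in player_pings.values()))

pings = {
    "alice": {"london": 12.0, "frankfurt": 25.0, "new-york": 80.0},
    "bob":   {"london": 95.0, "frankfurt": 70.0, "new-york": 15.0},
}
print(best_location(pings))  # frankfurt - lowest maximum latency for the pair
```

Minimising the maximum ping (rather than the average) is one common fairness choice; a real matchmaker would also weigh capacity and cost at each site.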
The other big issue and cost implication for games companies is scalability. When a game launches, the publisher needs to ensure they have sufficient hosting capacity for the number of players they expect in the game. Plan for too little and you risk players experiencing poor performance and potentially being unable to play at all. Plan for too much, and you are paying for capacity you don’t need. Being able to react to demand is therefore critical, as the financial performance of the game depends on matching capacity planning to the actual demand.
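The capacity-matching problem above can be made concrete with a toy scaling rule. The numbers here (servers-per-player capacity, headroom buffer) are invented for illustration - the point is simply that server count tracks live demand rather than a fixed pre-launch guess.

```python
# Hypothetical reactive scaling rule: size the server fleet to current
# concurrent players plus a spike buffer. All constants are illustrative.
import math

PLAYERS_PER_SERVER = 100   # assumed capacity of one game server
HEADROOM = 1.2             # keep 20% spare capacity for sudden spikes

def target_servers(concurrent_players: int) -> int:
    """Servers needed right now, never dropping below one warm instance."""
    return max(1, math.ceil(concurrent_players * HEADROOM / PLAYERS_PER_SERVER))

print(target_servers(0))       # 1
print(target_servers(50_000))  # 600
```

A production autoscaler would add smoothing and cooldowns so the fleet doesn't thrash on short-lived dips, but the economics are the same: capacity follows demand.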
As it stands, many video game companies have their own dedicated teams to oversee all these aspects of online game performance, known as Developer Operations, or DevOps. DevOps is incredibly specialised and therefore expensive, with average salaries in the US around $115k. Riot Games, one of the largest gaming companies in the world, revealed in 2020 that it was operating 14,500 containers in Riot regions alone. Even so, "worrying trends" and "preventable incidents" started to emerge: an ever-growing number of distinct microservices made it difficult for operators - many of whom are based in local territories all over the world - to produce working, stable shards. (Shards are used in video games to segment huge player bases, such as Fortnite's millions, geographically to reduce latency.)
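The geographic sharding mentioned above can be illustrated with a minimal routing sketch. The shard names and country-to-region mapping below are assumptions for the example, not any real game's routing table.

```python
# Hypothetical sketch of geographic sharding: each player is routed to the
# regional shard nearest them so match traffic stays local. The mapping is
# illustrative only.

REGION_OF_COUNTRY = {
    "GB": "eu", "DE": "eu", "FR": "eu",
    "US": "na", "CA": "na",
    "JP": "apac", "KR": "apac",
}

def shard_for(country_code: str, default: str = "na") -> str:
    """Pick the shard for a player's country, falling back to a default."""
    return REGION_OF_COUNTRY.get(country_code, default)

print(shard_for("DE"))  # eu
print(shard_for("BR"))  # na (fallback for unmapped countries)
```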
When things go wrong, it can be costly. Lag and disconnections in online games can end up costing companies millions, especially in live-service games where the main source of income is the thousands of in-game purchases (weapons, skins, characters, subscriptions, new content) made every minute. In addition, the video game industry is highly competitive: if players aren't impressed with a game's online services at launch, it's likely they'll simply play something else and never return.
Bigger companies can absorb higher infrastructure and hosting costs, but for the majority of studios it’s essential to keep costs down. We have launched a pay-per-use model so that game companies only pay for the server time and capacity they actually use, and other cloud vendors like AWS and Microsoft who have built a presence in the games industry are developing their own approaches. What is key is giving companies of every size the ability to tap into global network infrastructure, and the performance benefits that brings.
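The difference between fixed provisioning and pay-per-use billing is easy to show with back-of-the-envelope arithmetic. The hourly rate and demand curve below are made up for illustration; the gap they reveal - paying for the peak all day versus paying only for servers actually used - is the point.

```python
# Illustrative comparison (made-up rates and demand) of fixed provisioning
# vs pay-per-use billing over one day.

HOURLY_RATE = 0.05  # hypothetical cost per server-hour, USD

# Servers actually needed in each of 24 hours (illustrative demand curve)
demand = [20, 15, 10, 10, 15, 30, 60, 90, 120, 140, 150, 150,
          140, 130, 120, 110, 100, 110, 130, 150, 140, 100, 60, 30]

fixed_cost = max(demand) * 24 * HOURLY_RATE  # provision for the peak, all day
usage_cost = sum(demand) * HOURLY_RATE       # pay only for server-hours used

print(f"fixed: ${fixed_cost:.2f}, pay-per-use: ${usage_cost:.2f}")
# fixed: $180.00, pay-per-use: $106.50
```

Even in this toy example, fixed provisioning costs roughly 70% more than usage-based billing - and the gap widens for games whose demand is spikier or falls off after launch.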
This addresses one of the biggest challenges facing video game creators: there's no guarantee your game will be the huge hit you want it to be on day one, which makes network traffic at launch hard to predict. Too much money is currently wasted on under-utilised infrastructure, and annual contracts lock studios into paying for servers they may never use.
If game studios and wider tech companies are betting on the future of games being online - let alone an always-on, VR-based metaverse - significant investment will be needed in upgrading their infrastructure, along with better ways to manage and optimise such data-intensive experiences.