Roundtable: 100G comes of age

The long march to 100G represents the biggest reboot of global networks in more than a decade. Here, Capacity asks some of the movers and shakers who have played a major role in shaping the upgrade to discuss the challenges that still lie ahead.

Q: How will the roll-out of 100G take shape over the next 12 months?


Tarazi: Verizon has been an industry leader in developing high-speed networks around the world for years, so when the chance came to help lead the march towards 100G, we jumped at the opportunity. We first began commercial deployment of 100G technology on our European network in 2009. By last year, we had also begun enabling our US network. As growing traffic demands spur the need for more capacity over the next 12 months, Verizon will continue to enable network routes around the world using next-generation 100G technology.

Xenos: We expect to see strong growth and a healthy ramp-up of 100G builds in 2012, primarily driven by new long-haul backbone builds. We expect to see a progressive increase in 100G roll-outs in 2013 as large carriers complete their standardisation cycle. This strong growth includes 100G in subsea applications, where usable bandwidth is limited and spectral efficiency is of high importance.

Schmitt: The roll-out of 100G will take years, even as long as a decade. Over the next 12 months the market will go from two vendors producing 100G equipment to more than 10, creating a supply environment that will allow carriers to obtain competitive bids and move forward with greenfield deployments in the 2013-2015 timeframe.

D’Ambrosia: As we progress through the remainder of the year, we are moving into a perfect storm: deployment of 40GbE is happening; 100GbE is moving beyond the early adopter stage; and the launch of Intel’s Romley server platform is set to drive high-volume server deployment of 10GbE ports. As bandwidth projections do the “Exponential Dance”, it becomes clear that 100G roll-out will happen - because it has to happen.



Q: What technical challenges still present themselves?


Schmitt: The biggest technical challenges at this point are cost related. 100G short-reach or “grey” interfaces are still very expensive compared to the cost of coherent electronics. Major advances in integration are needed, and we need to see the introduction of new lower-cost formats like CFP2 and QSFP. Likewise, new ROADM architectures with colourless and directionless functionality are here, but the cost and size of these systems are still too great; new components that optimise these architectures are coming.

Tarazi: Like all new technologies, there is a learning curve, and we knew there would be a few challenges. We actually look forward to working through these challenges as they arise with experts in the field before we deploy new technologies. With 100G, the network equipment is evolving quickly to second generation cards, which have more functionality. This gives Verizon an opportunity to work closely with our suppliers to take advantage of new features.

D’Ambrosia: Just as prior generations of boxes started off with single-port line card configurations and progressed to tens and hundreds of ports per system, the same thing will happen with 40G and 100G. From a technical perspective, the challenges will not only be making all of the electrical signalling run faster (think 10G to 25G) across chip-to-chip, chip-to-module and board-to-board connections, but dealing with all of the power and cooling required for line cards in systems supporting multi-terabit capacities.
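To make the lane arithmetic concrete: a 100GbE port built from 10G electrical lanes (CAUI-10) needs ten lanes, while 25G signalling (CAUI-4) cuts that to four. A minimal sketch, in which the per-lane power figure is an illustrative assumption rather than a datasheet value:

```python
# Lane arithmetic behind the 10G-to-25G electrical signalling jump.
# The per-lane power figure is an illustrative assumption, not a
# datasheet value; the lane counts follow from the interface rates.

PORT_RATE_GBPS = 100                    # one 100GbE port
LANE_POWER_W = 0.2                      # assumed electrical I/O power per lane

for name, lane_rate in (("CAUI-10 (10G lanes)", 10),
                        ("CAUI-4  (25G lanes)", 25)):
    lanes = PORT_RATE_GBPS // lane_rate
    print(f"{name}: {lanes} lanes per port, "
          f"~{lanes * LANE_POWER_W:.1f} W assumed I/O power")
# Ten lanes drop to four per port; multiplied across a multi-terabit
# chassis, that is what keeps the power and cooling problem tractable.
```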

Xenos: Reaching ultra-long-haul or trans-Pacific distances with 100G has been a challenge; this has now been solved with next-generation solutions such as Ciena’s WaveLogic 3, which includes enabling technologies such as soft-decision FEC and transmitter DSP.
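The soft-decision part can be illustrated in a few lines: rather than slicing each received sample to a hard 0 or 1 before decoding, a soft-decision decoder is fed a log-likelihood ratio (LLR) that preserves how reliable each sample is, which is where the extra coding gain for long subsea spans comes from. A minimal sketch for BPSK over an additive-noise channel - textbook values, not Ciena's implementation:

```python
import numpy as np

# Hard vs soft decisions for BPSK over an additive white Gaussian noise
# channel - the textbook idea behind soft-decision FEC, not Ciena's
# implementation. The LLR keeps the reliability information a hard
# slicer throws away, which is worth roughly 1-2 dB of coding gain.

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 8)            # transmitted bits
symbols = 1 - 2 * bits                  # BPSK mapping: 0 -> +1, 1 -> -1
noise_var = 0.5
received = symbols + rng.normal(0.0, np.sqrt(noise_var), bits.size)

hard = (received < 0).astype(int)       # hard decision: sign only
llr = 2 * received / noise_var          # soft decision: sign plus reliability

for b, r, h, l in zip(bits, received, hard, llr):
    print(f"tx={b}  rx={r:+.2f}  hard={h}  llr={l:+.2f}")
# A sample near zero produces an LLR near zero (unreliable); a soft
# decoder weights it accordingly instead of committing to a coin flip.
```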



Q: What is driving demand and how might this change in the future?


Schmitt: Pretty simple - bandwidth demand is rising. But I don’t want to be the 1,000th person to use hyperbole and expound on the insatiable demand for bandwidth. Outside of the enterprise vertical and data centre market, carriers aren’t seeing an accompanying rise in revenue. Capex isn’t going to rise faster than revenue, regardless of how many times people talk about “explosive bandwidth growth”.

D’Ambrosia: The continuing reduction in the cost per bit has enabled individuals and businesses to introduce and grow services and products that consumers and businesses want to purchase. The increase in such services, combined with higher access rates and methods and a growing number of users is feeding a bandwidth tsunami. It just keeps coming.

Tarazi: New services such as 4G wireless, video, broadband and cloud are driving unprecedented bandwidth growth in the network, and we expect this to continue. At Verizon, it’s the responsibility of our global network planning and technology team to stay ahead of the growth curve for our customers. One way we are doing this is by deploying 100G technology across the network as needed. Wholesale customers play a major role in the demand for 100G core transport, and we will continue to position our wholesale products to take advantage of all 100G benefits as well.

Xenos: There are several factors. Service currency has evolved from GE to 10GE, pushing operators to upgrade to 100G to be able to scale their networks. Also, operators need to upgrade to higher capacities to transport new high-speed services such as 40GE and 100GE. Finally, ever-increasing traffic demands are resulting in networks with congested links; service providers are deploying 100G to prolong the life of existing architectures.



Q: With the price of 10G still falling, how can 100G systems find a pricing point where they become cost-effective and how far away are we from that point?


Schmitt: 10G and optical switching (ROADMs) represent a very tough act to beat in the metro environment and will be for some time. 100G offers better economics in the core, with higher traffic loads and longer spans with fewer intermediate nodes. 100G still costs 10-20 times 10G today. But a carrier needing to light a new fibre will want to invest in a technology that will last, and as a result they will move to all-coherent technology even if there aren’t immediate cost savings.
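Schmitt's multipliers translate directly into cost per bit: a 100G wave carries ten times the traffic of a 10G wave, so at 10-20 times the price it lands at one to two times the cost per bit. A quick sketch of that arithmetic (the prices are normalised; only the ratios come from the quote):

```python
# Back-of-envelope cost-per-bit arithmetic implied by the quote: a 100G
# wavelength carries 10x a 10G wavelength, so at 10-20x the price its
# cost per bit is 1-2x. Prices are normalised; only the ratios matter.

price_10g = 1.0                        # normalised 10G transponder cost
for multiplier in (10, 15, 20):        # the quoted 10-20x price range
    price_100g = multiplier * price_10g
    per_bit_ratio = (price_100g / 100) / (price_10g / 10)
    print(f"100G at {multiplier}x the 10G price -> "
          f"{per_bit_ratio:.1f}x the cost per bit")
# Prints 1.0x, 1.5x and 2.0x: parity at the low end of the range, before
# counting the fibre and spectrum saved by carrying ten times more per wave.
```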

Xenos: The primary benefit of 100G lies in its ability to scale transmission capacity by a factor of 10 on the same infrastructure that currently supports 10G optical channels. Another important benefit lies in its ability to carry native 100G signals from other networking equipment, supporting 100G service offerings. 100G is already commercially attractive with respect to 10G in terms of cost, space and power. Of course, if traffic growth is not anticipated to fill an existing DWDM system in the foreseeable future, and there is no need to support native 100G signals, continuing to add 10G channels is a viable option.

 

Tarazi: The 100G technology is already cost-effective in the long-haul environment. Verizon continues to work diligently with our suppliers to determine when it will be cost-effective enough for metro applications. Equipment vendors know the service provider community expects cost efficiencies from the 100G equipment being developed for the marketplace.

D’Ambrosia: As you move away from the core and towards high-volume deployments, cost sensitivity increases. However, it needs to be realised that cost sensitivity depends upon the application space being served, and the economics of that space will dictate the ultimate answer to these questions. In general, as noted above, there are technical challenges currently being worked out in the various standards bodies and industry organisations that will help to drive down power and cost while increasing port density.



Q: What role do legacy technologies play in a 100G world?

 

Tarazi: As with previous generations, the lower bit rates – 10G and 40G technology – will move from the core backbone network to the metro network, which is closer to the customer. When the lower bit rates migrate from the core to the metro, and eventually into customer access environments, we more than likely will see an increase in volume, allowing customers to take advantage of higher speeds at competitive cost points.

Schmitt: I think 40G will be relegated to a secondary role, used primarily to squeeze more capacity out of existing networks where 100G just won’t work. 100G will be the technology of choice for the next decade, much like 10G was for the last decade. 10G technology will continue to grow and flourish - 10G wavelengths plus optical switching is an unbeatable technology in the metro and will be for some time.

Xenos: In the short term, we continue to see a strong growth in both 40G and 10G deployments. In the long term, we expect a tapering off, primarily of 40G. We expect some 10G/40G requirements to remain where spectrum is plentiful and higher capacities are not required, particularly in metro applications.

 

D’Ambrosia: When one labels a technology as “legacy”, it should always be realised that this most likely refers to a single application space. As noted above, the Intel Romley server platform is raising excitement over the prospects for high-volume 10GE port counts. Moreover, I would hardly call 40GE – launched just two years ago – legacy just yet.



Q: Should we go to 400G as soon as possible or hold out for 1T – or should we have a complete rethink and try to develop a standard that embraces the “super-channel” philosophy?

 

D’Ambrosia: This debate has been raging for nearly two years now: 400GE versus 1T Ethernet. The technical challenges being overcome to enable low-cost, high-volume 100G are potentially applicable to the development of a 400GE solution, but not necessarily to 1TE. Additional investments must therefore be made in technology to make 1TE a truly attractive solution, which will necessarily drive cost up. And this point cannot be forgotten: the market is not just looking for a solution, it is looking for a solution at the right cost.

Xenos: 400G services will be able to be deployed over today’s existing infrastructure. Moving to “gridless” ROADMs can offer a small incremental benefit in spectral efficiency for 400G transmission, but it is not essential. Both 400G and 1T will require a “super-channel” implementation, meaning they will require multiple closely spaced carriers. As more carriers are required, a flexible-grid or “gridless” architecture will allow a modest improvement in spectral efficiency. At the recent Optical Fiber Communication conference, community sentiment seemed to have settled on 400G as the focus of attention for the next step.
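A rough worked example of that spectral-efficiency point: a 400G super-channel built from four carriers occupies 200GHz on a fixed 50GHz grid, but can be packed tighter on a flexible grid. The flexible-grid width below is an illustrative assumption, not a vendor figure:

```python
# Rough spectral-efficiency comparison for a four-carrier 400G super-channel.
# Slot widths are illustrative assumptions, not vendor specifications.

rate_gbps = 400.0
fixed_grid_ghz = 4 * 50.0   # fixed grid: each 100G carrier in its own 50GHz slot
flex_grid_ghz = 150.0       # flexible grid: carriers packed into ~150GHz (assumed)

for name, bw_ghz in (("fixed 50GHz grid", fixed_grid_ghz),
                     ("flexible grid", flex_grid_ghz)):
    print(f"{name}: {rate_gbps / bw_ghz:.1f} bit/s/Hz")
# fixed 50GHz grid: 2.0 bit/s/Hz; flexible grid: 2.7 bit/s/Hz -
# the "modest improvement" described above.
```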

 

Tarazi: Super-channels will allow us to support either 400G or 1T on the line side, the shared trunks on the outside fibre plant that run between our long-haul facilities and connect our central offices. Verizon is working closely with the various industry standards groups around the world to determine the timeline for a cost-effective client side, the dedicated trunk between two different pieces of equipment inside the central office, at either bit rate (400G or 1T). This will allow us to make an informed decision that standards group members can agree on for the future.

Schmitt: It will be very difficult to compete with the economics of 100G optical carriers, and I believe super-channels (multiples of 100G lambdas) will be successful as a result. Super-channels are the best way to extract maximum spectral efficiency and employ flexible modulation schemes. 100G will be the common denominator for transport networks for a long time.



Q: What do you see as the single greatest advance that 100G will bring to the market?


Schmitt: It provides a legitimate reason to start fresh with a greenfield build, and this will allow new equipment and control systems to be introduced at the same time. OTN switching and automated control planes will be introduced, but only because coherent networking provided the excuse to launch a new architecture.

Tarazi: Of course, one of the greatest attributes of 100G technology is scale. From a purely technical perspective, I would say the most important attribute 100G brings is the coherent receiver and the associated digital signal processing that allows us to remove the dispersion-compensating fibre at amplifier sites and the fixed filters from the access/egress nodes. This will enable colourless, directionless and contentionless ROADMs with flexible grid to become a reality.
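The DSP step behind that change can be sketched compactly: chromatic dispersion is a known, invertible filter, so a coherent receiver can undo it digitally instead of relying on spools of dispersion-compensating fibre along the route. A minimal sketch of the standard frequency-domain equaliser, with illustrative link parameters:

```python
import numpy as np

# Minimal sketch of frequency-domain chromatic-dispersion compensation,
# the coherent-DSP step that replaces dispersion-compensating fibre.
# Link parameters are illustrative, not any operator's values.

c = 299_792_458.0        # speed of light, m/s
wavelength = 1550e-9     # carrier wavelength, m
D = 17e-6                # fibre dispersion, s/m^2 (i.e. 17 ps/nm/km)
length = 1000e3          # 1,000 km link, m
fs = 64e9                # receiver sampling rate, Hz

n = 4096
f = np.fft.fftfreq(n, d=1 / fs)                 # baseband frequency grid
phase = np.pi * D * wavelength**2 * length * f**2 / c
h_fibre = np.exp(-1j * phase)                   # all-pass response of the link
h_comp = np.conj(h_fibre)                       # ideal digital inverse

signal = np.random.default_rng(1).normal(size=n)       # stand-in waveform
dispersed = np.fft.ifft(np.fft.fft(signal) * h_fibre)  # what the link does
recovered = np.fft.ifft(np.fft.fft(dispersed) * h_comp)

print("max residual error:", float(np.max(np.abs(recovered.real - signal))))
# ~1e-15: the dispersion is undone exactly, with no in-line compensation.
```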

Xenos: The scaling of fibre capacity, delivering the lowest cost per transported bit through the reduction of equipment deployed and power consumed per unit of bandwidth in networks globally.

D’Ambrosia: More users are finding more ways to be connected to a given network, and the rates of each of these access methods continue to grow. Therefore, perhaps the greatest single advance that 100G will enable is the collective compute power of the network and the people connected to it.



Q: What do you see as the greatest threat to the wide scale adoption of 100G?

 

D’Ambrosia: Well, according to the Mayan calendar, 2012 is supposed to be the end of the world, so that may slow things down.

Xenos: The high cost of 100G client interface modules makes 100G services more expensive than 10G services on a cost/bit basis. Increasing 100G client component volumes and the entry of additional suppliers into the market will improve this situation over the coming year. We also anticipate improved network adoption as 100G ports on switches and routers become more cost competitive with 10G ports, which is also anticipated over the next year. There is no question as to whether 100G will see widespread adoption, just a question of timing and pricing.

 

Tarazi: We need to see the costs of deploying 100G come down significantly before it is widely adopted in the metro networks. This is critical as use of 100G in the metro networks is expected to drive volumes up for the whole industry. 

Schmitt: Macroeconomic concerns. Spending in Europe in 2H 2011 was heavily biased towards legacy equipment, and these are economic concerns, not technological ones. There are no significant barriers to 100G, at least nothing that won’t be solved by next year. The worst case is a delay - 100G is an unstoppable force at this point.



Q: What are your hopes and fears for the future?

 

Tarazi: Our hopes for 100G are great. We’ve been able to scale the network as needed to meet growing traffic demands; improve our network performance; increase our network efficiencies; provide lower latency on many routes; and strengthen our Verizon networks for our customers worldwide. We have learned a tremendous amount working with 100G technology. I hope that we, as an industry, will take the lessons we’ve learned and apply those lessons to other areas. One concern is that if we don’t apply those lessons, the supply chain could become fragmented, making it tougher to obtain cost-effective solutions.

Schmitt: It would be fantastic if something happened that allowed wireline revenues to grow again. My broadband connection costs the same as it did five years ago yet the value it provides has grown by a factor of 100. Finding a way to turn bandwidth growth into revenue growth would supercharge the industry.

Xenos: My greatest hope is that the history of 10G repeats itself, with continuing technological innovation leading to steady improvements in cost, space and power. My greatest fear is that service providers fail to capitalise on residential subscriber demand for new applications and services. Even as innovations in network architecture drive cost improvements as traffic scales, service providers must innovate even more aggressively to overcome challenges of revenue growth. We believe a deeper engagement model with suppliers and partners can help.

D’Ambrosia: My greatest hope is that as we move forward in these endeavours, the industry considers the full ecosystem impact of the decisions it will make. It often seems that we cyclically address problems only as they become greater issues – access and the cord, for example, or compute power and the network. It all has to work together.
