The Path to 400 Gigabit Networks

What’s Your Migration Plan?

With 100 Gb/s ports expected to peak in 2020 or 2021, it’s time to make way for 400 Gb/s switches, according to a 2019 Ethernet Switch – Data Center Five Year Forecast Report from The Dell’Oro Group.

Though 400 Gb/s data center switch options were introduced by manufacturers in late 2018 and early 2019, adoption has yet to take off. However, many expect 400 Gb/s to see real adoption in 2020, and market analyst Dell’Oro expects 400 Gb/s shipments to reach 15 million ports by 2023.

The draw? The new 400 Gb/s switches, based on 12.8 Tb/s chips, bring not only much faster speeds but also greater network density. The ability of these switches to support high-density breakout scenarios with 100 Gb/s ports translates into a lower total cost of ownership per port.
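As a back-of-the-envelope illustration of the density argument, the sketch below compares 100G port counts and cost per 100G port for a 1RU switch. The dollar figures and the 2.5x price premium are placeholder assumptions for illustration, not vendor pricing:

```python
# Compare 100G port density: a 32-port 400G switch run in 4 x 100G breakout
# mode vs. a native 32-port 100G switch, both in 1RU.

PORTS_PER_RU = 32        # typical 1RU switch faceplate
BREAKOUT_FACTOR = 4      # one 400G port -> four 100G channels

native_100g_ports = PORTS_PER_RU
breakout_100g_ports = PORTS_PER_RU * BREAKOUT_FACTOR

print(f"Native 100G ports per RU:   {native_100g_ports}")
print(f"Breakout 100G ports per RU: {breakout_100g_ports}")

# Hypothetical switch costs (assumed, NOT vendor pricing) to show how a
# premium-priced 400G switch can still win on cost per 100G port:
cost_100g_switch = 20_000
cost_400g_switch = int(2.5 * cost_100g_switch)   # assumed 2.5x premium

print(f"Cost per 100G port, native:   ${cost_100g_switch / native_100g_ports:,.0f}")
print(f"Cost per 100G port, breakout: ${cost_400g_switch / breakout_100g_ports:,.0f}")
```

With these assumptions, the 400 Gb/s switch delivers 4 times the 100G port density per rack unit, so even at a substantial premium its cost per port comes in lower.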

Shipments of 100 GbE transceivers have grown much faster than expected. The most popular options have been 100G-CWDM4, a 2-fiber single-mode solution with a 2-kilometer reach, and 100G-SR4, an 8-fiber multimode solution with a 100-meter reach. These options will continue to see strong adoption over the next several years, but 100 Gb/s ports are expected to peak in 2020 or 2021 and make way for 400 Gb/s switches, according to the Dell'Oro forecast.

While new 400 Gb/s switches come at a significant cost premium, they will likely drop in price as adopters in the cloud service provider and telecom industries purchase more of them over the next several years. Cloud providers continue to take over more of the data center space: large players such as Amazon Web Services (AWS), Microsoft Azure, Google, and IBM are predicted to account for half of all data center servers by 2021, according to the Cisco Global Cloud Index. These data centers are moving to 100, 200, and 400 Gb/s uplinks now or in the near future.

New 400 Gb/s Transceiver Form Factors

There are 2 major transceiver form factors for 400 Gb/s: QSFP-DD and OSFP. Various switch manufacturers are offering one or the other, and in some cases both:

QSFP-DD (Quad Small Form-Factor Pluggable, Double Density) transceivers can support up to 32 ports in a 1-rack-unit (1RU) switch and are backward-compatible with QSFP+ and QSFP28 optics. The form factor accepts LC, MPO, and CS connectors.

OSFP (Octal Small Form-Factor Pluggable) transceivers can also support up to 32 ports in a 1RU switch and accept the same connector types; they can be made backward-compatible with QSFP ports through the use of an adapter. The form factor has been designed with headroom for future generations, such as 800 Gb/s.

The first 400 Gb/s switches on the market come from Arista, Cisco, and Juniper. Arista 7060 top-of-rack switches offer 32 ports of 400 Gb/s in 1RU, using either QSFP-DD or OSFP transceiver options. The Cisco 3432 top-of-rack switch also offers 32 ports of 400 Gb/s in 1RU, using QSFP-DD transceivers. In addition, its 3408 switch provides a 4RU 8-slot chassis option. The Juniper QFX5220-32D 1RU top-of-rack switch also offers 32 ports with QSFP-DD transceivers.

One variable yet to be resolved is the number of fiber strands used to deliver 400 Gb/s. Currently, switch manufacturers have plans for connectors with 2, 8, 16, 24, and even 32 fibers. Given how eager the market is for higher speeds, and how measured standards bodies can be, some of the options introduced are proprietary or based on multisource agreements (MSAs). Between proprietary and standards-based offerings, there is now a range of options, but a few favorites are emerging.

Among 200 Gb/s transceiver options, only 2 are available on the market today: 2×100-PSM4 single-mode and 2×100-SR4 multimode, outlined in blue in Figure 1. These are proprietary options introduced by Cisco, and both rely on 24-fiber MTP connectors. While transceiver options using 2 LC fibers or 8-fiber MTP connectors (rows 1, 2, and 4 in Figure 1) are defined in IEEE standards, they have not yet been introduced to the market.

200G Transceivers

Figure 1. 200G Transceivers

The transceivers outlined in Figure 2, rows 1, 2, and 5, highlight the options likely to become the most common over the next several years. Both 400G-FR4 and 400G-SR4.2 were originally introduced through MSAs between manufacturers and have since been taken up by IEEE: 400G-FR4 is being drafted under IEEE P802.3cu and is expected to be published in late 2020, while 400G-SR4.2 is defined by IEEE 802.3cm, which was published in January 2020. The 400G-SR4.2 transceiver specification created by the 400G BiDi MSA is called 400G-BD4.2. Other 400 Gb/s transceivers not shown here include interfaces under development that will reach beyond 10 kilometers.

400G Transceivers

Figure 2. 400G Transceivers

It is important to note that the majority of 100, 200, and 400 Gb/s transceiver options are for single-mode fiber, owing to its bandwidth and distance capabilities. The trend is also partly a result of decreasing cost, as adoption by cloud companies with major purchasing power has driven down the price of single-mode optics, and of standards committees continuing to promote more single-mode options for higher speeds. As this trend continues, the broader market could find single-mode an increasingly enticing option.

Cabling System Migration Strategies

With so many transceiver options for 100, 200, and 400 Gb/s, in both single-mode and multimode fiber, it is important to plan a cabling design that can handle multiple technology refreshes. Numerous design factors shape the makeup of a cabling infrastructure for 200 and 400 Gb/s networks. These include whether the data center is an enterprise or a cloud provider (although in most cases it will be a cloud provider moving to 400 Gb/s), as well as reach requirements, existing server speeds, and cost per channel.

Power levels can also be a factor: 400 Gigabit optics draw in the 10- to 12-watt range, about 3 times the draw of 100 Gigabit optics.
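A quick calculation shows what those figures mean at the switch level. The 100G optic wattage below is inferred from the "about 3 times" figure cited above, not taken from any datasheet:

```python
# Rough optics power draw for a fully populated 1RU, 32-port switch.

ports = 32
w_400g = (10, 12)                        # watts per 400G optic (range from the text)
w_100g = tuple(w / 3 for w in w_400g)    # assumed ~1/3 of the 400G draw

print(f"400G optics per switch: {ports * w_400g[0]}-{ports * w_400g[1]} W")
print(f"100G optics per switch: {ports * w_100g[0]:.0f}-{ports * w_100g[1]:.0f} W")
```

Fully populated, the optics alone can add several hundred watts per 400 Gb/s switch, which feeds into rack power and cooling planning.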

Finally, channel insertion loss budgets are an important consideration. A typical single-mode budget will be between 3.0 and 4.0 dB, while the typical OM4 multimode budget remains at 1.9 dB.
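To check a design against those budgets, the channel loss can be tallied from fiber attenuation plus connector pairs. The per-component losses below are typical planning values assumed for illustration, not figures from a specific cabling datasheet:

```python
# Sum fiber and connector losses for a channel and compare to the budget.

def channel_loss(length_km, fiber_db_per_km, connector_pairs, db_per_pair):
    """Total channel insertion loss in dB."""
    return length_km * fiber_db_per_km + connector_pairs * db_per_pair

# Example: a 500 m single-mode link with 4 connector pairs,
# assuming 0.4 dB/km fiber attenuation and 0.5 dB per mated pair.
loss = channel_loss(0.5, 0.4, 4, 0.5)
budget = 3.0   # low end of the typical single-mode budget

verdict = "PASS" if loss <= budget else "FAIL"
print(f"Channel loss: {loss:.1f} dB (budget {budget} dB) -> {verdict}")
```

With these assumptions the example channel totals 2.2 dB, comfortably inside a 3.0 dB budget; adding more mated pairs or longer runs quickly eats into the margin.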

2 Cable System Design Examples

Looking at switch-to-switch cabling scenarios, a 2-fiber single-mode channel as shown in Figure 3 delivers 400 Gb/s in the form of 400G-FR4 and 400G-FR8, using QSFP-DD or OSFP transceivers. This is a very versatile cabling design, as it can also support 10, 40, 100, and 200 Gb/s, using or reusing a backbone of 24-fiber MTP trunk cabling.

400G: Switch-to-Switch Use Case, 2-fiber Single-Mode Configuration (10/40/100/200/400G) with 400G FR4

Figure 3. 400G: Switch-to-Switch Use Case, 2-fiber Single-Mode Configuration (10/40/100/200/400G) with 400G FR4

There are many scenarios for breaking out channels to servers. The configuration in Figure 4 breaks 400 Gb/s down into four 100 Gb/s channels using 400G-DR4 or 400G-XDR4 single-mode transceivers, again over a 24-fiber MTP trunk, with duplex LC connections to 100G QSFP28 transceivers at the server.

400G to 4 x 100G Breakout Use Case, 8-fiber Single-Mode Channel Breakout Configuration with 400G-DR4/XDR4

Figure 4. 400G to 4 x 100G Breakout Use Case, 8-fiber Single-Mode Channel Breakout Configuration with 400G-DR4/XDR4
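As a rough sketch of that breakout, the snippet below models how one 8-fiber 400G-DR4 port (4 transmit lanes plus 4 receive lanes) maps to four duplex-LC 100G server connections. The port and lane naming is purely illustrative, not any vendor's convention:

```python
# Model one 400G-DR4 port breaking out into four 100G channels,
# each carried on one Tx/Rx fiber pair terminating in a duplex LC.

def breakout_map(port_name):
    """Map an 8-fiber 400G port to four duplex-LC 100G channels."""
    return {
        f"{port_name}/{lane}": (f"Tx{lane}", f"Rx{lane}")
        for lane in range(1, 5)
    }

for channel, fiber_pair in breakout_map("Eth1/1").items():
    print(channel, "->", fiber_pair)
```

The same pattern generalizes to the cabling side: each lane's fiber pair is routed through the MTP trunk and patched to a duplex LC at a 100G QSFP28 port.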

Your Migration Path

While a 24-fiber backbone is used in the examples mentioned earlier, it is not always required to support next-generation upgrades; 12-fiber MTP trunk cables are also available. However, the 24-fiber solution is a key piece in establishing the most flexibility when migrating to 400 Gb/s.

It is critical to understand the impact of new technology and standards, and build flexibility into your network when migrating to 100, 200, and 400 Gb/s. It is also important to get assistance from experts who understand the evolution of the data center environment and the latest network technology. Look for a cabling system provider who works with leading equipment manufacturers, is active in all next-generation standard developments, and can advise customers on their best possible migration strategy.

Resource
For more information about the 2019 Ethernet Switch – Data Center Five Year Forecast Report from The Dell’Oro Group, please visit https://www.delloro.com/market-research/enterprise-network-infrastructure/ethernet-switch-data-center/.

 


About Author

Gary Bernstein, RCDD, CDCD, is Senior Director of Global Product Management, Leviton Fiber and Data Center. Gary has more than 20 years of experience in the communications industry, with extensive knowledge in fiber cabling infrastructure and data center architectures, and works closely with many hyperscale companies. He has held positions in engineering, sales, product management, and corporate management. Gary has been a member of the TR42.7 Copper and TR42.11 Fiber Committees, and several IEEE Task Forces, including IEEE802.3bs 400G, 802.3cd 50/100/200G, 802.3cm 400G MMF, and 802.3cu 100/400G SMF. For more information, please visit www.leviton.com.
