
The Future of Cable Infrastructure: How Performance Requirements Shape Data Center Architecture

By Chad Jameson
I/O Solutions General Manager

Data center speed requirements have experienced quite an evolution. With demands for 20 Gbps, 56 Gbps and now 112 Gbps, consumer and brand expectations for faster, more efficient online experiences have required data centers to keep transforming.

And the future isn’t showing any signs of a slowdown.

As we look toward another jump in operating frequency and data rates climbing to 224 Gbps, the makeup of data centers will have to transition yet again. High-speed cables, especially direct attach cables (DACs), have traditionally been the go-to solution to connect servers within racks, playing a crucial role in helping data centers upgrade quickly and efficiently. But with higher frequency requirements and data speeds climbing incrementally, data center managers need versatility in their cabling toolboxes.

As data centers require increasingly higher frequencies, it’s important to remember when planning cable purchases that each frequency increase shortens DACs’ usable length. For example, at 224 Gbps, DACs may only be able to connect and deliver quality data up to 1.0m. To bridge longer lengths, active electrical cables (AECs) and active optical cables (AOCs) must fill in where DACs fall short, preserving the plug-and-play upgrades data centers depend on. In other words, today’s data center managers must look beyond their traditional cables to manage varying rack architectures. Passive cables such as DACs have dominated rack architecture for decades, but as we move into the next generation of frequency, active cables will become not just more popular but necessary.

To prepare for the future and to maximize performance right now, here are a few key considerations in selecting cable solutions for changing data center needs.

Short Connections and Power Budgets

DACs have become the standard connectivity solution within a rack. At 56 Gbps PAM-4 (four-level pulse amplitude modulation), DACs can effectively connect rows within a rack across spans of up to 3.0m, but higher data rates increase signal loss in these passive cables. As data rates climb, DACs will be ideal only across 1.0m or less, which might not be long enough to connect top-of-rack (TOR) switches with servers located lower on the rack.


With no electronics in the cable assembly, DACs offer a passive solution that simply passes data straight through. So, in addition to being effective for connectivity within a rack, they are ideal in situations where added electronics would increase the rack’s overall power draw.

A cloud operations data center in North Dakota illustrates DACs in action. Its 7-foot racks hum with data transmission activity; TOR switches communicate via DACs with each server in the rows underneath. DACs are a good fit for busy data centers operating at lower frequencies, as they cost less than other cabling solutions and do not add to thermal budgets. Additionally, at 56 Gbps PAM-4, DACs effectively connect the TOR switch to every server on the rack, from the topmost row to the bottommost.

As the year progresses and performance expectations rise, the data center upgrades to 112 Gbps PAM-4. DACs still connect TOR switches to servers on the higher rows, but at distances beyond 2.0m, they suffer an unacceptable level of loss at the higher data rate. Data center managers now need an alternative cabling solution to connect lower servers to the TOR switch while maintaining acceptable performance.

The bottom line: DACs do not add to power budgets and are viable options for 56 Gbps PAM-4 applications within racks with run lengths up to 3.0m. At 112 Gbps PAM-4, they continue to be effective for 0.5 to 1.0m lengths.
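
Taken together, these reach limits amount to a simple lookup. A minimal Python sketch of that rule appears below; the thresholds come from the figures in this article, while the names and structure are illustrative assumptions, not a vendor tool.

    # Reach guidance from this article, keyed by data rate (Gbps).
    # Names and structure are illustrative, not a real API.
    DAC_MAX_REACH_M = {
        56: 3.0,   # 56 Gbps PAM-4: effective up to 3.0m within a rack
        112: 1.0,  # 112 Gbps PAM-4: effective for 0.5 to 1.0m runs
        224: 1.0,  # 224 Gbps: quality data up to roughly 1.0m
    }

    def dac_is_viable(data_rate_gbps: int, run_length_m: float) -> bool:
        """Return True if a passive DAC can cover the run at this data rate."""
        max_reach = DAC_MAX_REACH_M.get(data_rate_gbps)
        if max_reach is None:
            raise ValueError(f"no reach guidance for {data_rate_gbps} Gbps")
        return run_length_m <= max_reach

    print(dac_is_viable(56, 2.5))   # True: a 2.5m run works at 56 Gbps PAM-4
    print(dac_is_viable(112, 2.5))  # False: the same run needs an AEC at 112 Gbps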

Filling the Gap: Active Electrical Cables 


AECs offer a strong solution to span lengths too long for DACs at 112 Gbps PAM-4. With nearly zero data loss and a smaller cable diameter, they are the better option for run lengths beyond 2.0m.

Re-timers built into AECs clean signals at both the entry and exit points. Data enters the cable, and the re-timers recondition it, removing noise and amplifying the signal; when the data exits the cable, it undergoes the same process again. AECs transmit data more cleanly across longer lengths than DACs do, and they are also less expensive than AOCs.

Managers at that same North Dakota cloud operations data center have added AECs to their cabling options with their upgrade to 112 Gbps PAM-4. Their racks now use a combination of DACs and AECs, giving the upgraded, higher-frequency data center the best of both worlds. DACs remain the most cost-effective way to connect TOR switches to servers in higher rows, but data loss prevents their use for servers farther down the rack. Less expensive than fiber optic cables yet delivering nearly lossless connectivity from top to bottom, AECs provide the ideal middle ground between DACs and AOCs as data speeds increase.

The bottom line: AECs are an excellent option for clean, high-speed connections between rows in a rack in run lengths up to 7.0m. Though they use power, their small diameter helps improve airflow.

Overcoming Distances: Fiber Optics

AOCs, which use fiber optic cabling that keeps the signal clean and nearly lossless, are yet another important option. Little signal bleeds out of the cable as data travels through it, and because of this construction, AOCs reliably deliver data across much longer lengths; so long, in fact, that these distances can be measured in kilometers.

Among the three cable options, AECs are more expensive than DACs, and AOCs are the most expensive of all. While upgrading to fiber optics can cost as much as 10 times the original copper cable cost in many cases, these fiber optic cables are ideal for rack-to-rack and row-to-row connectivity, especially in large data centers where performance thresholds are stringent and cable distances are extensive. When connecting rows within a single rack, however, AOCs rarely make financial sense; DACs and AECs are the more economical fit.
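
That 10x spread is easy to see in a back-of-the-envelope comparison. In the Python sketch below, the dollar figures and rack density are hypothetical placeholders; only the roughly 10x multiplier comes from the paragraph above.

    # Hypothetical per-cable prices; only the ~10x multiplier is from the article.
    dac_cost = 50.0              # placeholder price for a short copper DAC
    aoc_cost = dac_cost * 10     # fiber can run ~10x the copper cable cost

    servers_per_rack = 40        # assumed rack density for illustration
    print(f"All-DAC rack: ${dac_cost * servers_per_rack:,.0f}")   # $2,000
    print(f"All-AOC rack: ${aoc_cost * servers_per_rack:,.0f}")   # $20,000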

The North Dakota cloud operations data center from our example now uses AOCs for longer-reach applications (>7.0m), such as connecting TOR switches to end-of-row switches. At 112 Gbps PAM-4 and even higher data rates, loss is far less of a concern with AOCs. The facility also uses fiber optics to connect to data centers in Texas and Virginia, and that connectivity continues onward, eventually reaching data centers in Europe, Asia and elsewhere around the globe.

The bottom line: AOCs and the power of fiber optics offer a solution for row-to-row and data center-to-data center connections.
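
Pulling the three bottom lines together, the selection logic at 112 Gbps PAM-4 reduces to a few length thresholds. The Python sketch below is illustrative only; the cutoffs come from this article’s guidance, and the function name is an assumption.

    def choose_cable_112g(run_length_m: float) -> str:
        """Pick a cable type for a 112 Gbps PAM-4 run of a given length."""
        if run_length_m <= 1.0:
            return "DAC"  # passive and cheapest; adds nothing to the power budget
        if run_length_m <= 7.0:
            return "AEC"  # re-timed copper; clean signal between rows in a rack
        return "AOC"      # fiber; row-to-row and data center-to-data center runs

    for length_m in (0.5, 3.0, 15.0):
        print(f"{length_m}m -> {choose_cable_112g(length_m)}")
    # 0.5m -> DAC, 3.0m -> AEC, 15.0m -> AOC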

Data centers need fast, reliable, seamless connectivity to meet today’s insatiable data demands and ever-increasing performance, speed and frequency requirements. Molex applies its exceptional knowledge and 80+ years of experience to help make the promise of data-driven technology a reality. Learn more about Molex’s complete data center cabling portfolio and maximize your rack architecture with DAC Assemblies, AEC 112 Gbps PAM-4 Solutions and AOC Integrated Cable Solutions.
