
Going Modular: Empowering Data Center Transformation

By Liz Hardin
Copper Solutions Group Product Manager

By Chad Jameson
I/O Solutions General Manager

Current technology trends, such as 5G implementation, artificial intelligence, machine learning and the increasing reach of the internet of things, all rely on data. Lots of data. Being able to store and manage that data quickly and cleanly is driving data centers to adopt new platforms and transition to the next generation of standards and components.

Data center flexibility is increasingly important as a means of accommodating ever-expanding demands. Modularization and disaggregation are highly effective strategies for accelerating performance upgrades while working within existing data center infrastructure, circumventing the time, effort and expense required to deploy a completely new architecture. Plug-and-play modular approaches are now a viable option for businesses needing to respond quickly to industry developments without breaking the bank.

Leveraging the efficiencies of an upgrade while avoiding the extensive investment of a new server or storage build-out depends on meeting networking and compute connectivity requirements. Access to a comprehensive portfolio of server products and solutions, as well as engineering expertise, is therefore key to any upscaling effort.

The Fab Four: Keys to Data Optimization

Enabling fast and powerful connectivity is at the core of effective server optimization, whether within existing or upgraded systems. With bandwidth requirements continuing to increase, the following considerations will become more crucial as systems upgrade to higher data speeds:

  1. Cabling: Extending PCIe channels to peripherals via internal cabling supports a quicker upgrade path. Developments in cabling have mitigated signal loss, enabling designers to connect peripherals to riser cards as a plug-and-play upgrade method and helping prolong the life of existing architecture.
  2. Signal loss: As data rates increase, signal loss becomes a bigger networking design issue. Engineers have several options, such as optical connectivity, linear amplifiers, additional cabling and re-timers, and each of these fixes comes with trade-offs. For example, optical channels are more costly, linear amplifiers limit cable lengths and re-timers add latency. The availability of different signal loss mitigation methods, however, gives system architects the flexibility to choose a path that works best for their unique application (an illustrative loss-budget comparison follows this list).
  3. Space constraints: As designers leverage cables to help alleviate channel loss, density/volume of cabling inside the box becomes a new challenge. Connector compactness, creative cable management solutions and cable routing are now crucial elements of system design.
  4. Thermal management: More powerful compute needs have led to thermal issues. With cable solutions adding to the volume inside the box, design concerns are no longer centered primarily on signal integrity. Thermal impact and efficiency have become primary drivers in system design decisions and overall total cost of ownership projections.
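
As a rough illustration of the cable-versus-trace trade-off described in items 1 and 2, the sketch below compares the insertion loss of a PCB trace route and a cabled route against an overall channel budget. All of the loss and budget figures are assumed, round-number values chosen only to show the comparison; real numbers depend on board materials, cable construction, connector count and the target data rate.

```python
# Illustrative insertion-loss budget comparison for a PCB trace route vs. a cabled route.
# All dB figures below are assumed, round-number values for illustration only.

PCB_LOSS_DB_PER_INCH = 1.0     # assumed PCB trace loss at the operating frequency
CABLE_LOSS_DB_PER_INCH = 0.2   # assumed twinax cable loss at the same frequency
CONNECTOR_LOSS_DB = 0.5        # assumed loss per connector transition
CHANNEL_BUDGET_DB = 36.0       # assumed end-to-end channel loss budget

def channel_loss(route_inches: float, per_inch_db: float, transitions: int) -> float:
    """Total loss for a route of a given length with a number of connector transitions."""
    return route_inches * per_inch_db + transitions * CONNECTOR_LOSS_DB

if __name__ == "__main__":
    length = 12.0  # inches from the host ASIC to a riser or peripheral
    trace = channel_loss(length, PCB_LOSS_DB_PER_INCH, transitions=2)
    cable = channel_loss(length, CABLE_LOSS_DB_PER_INCH, transitions=2)
    print(f"PCB trace route: {trace:.1f} dB of {CHANNEL_BUDGET_DB} dB budget")
    print(f"Cabled route:    {cable:.1f} dB of {CHANNEL_BUDGET_DB} dB budget")
```

Under these assumed numbers, the cabled route consumes far less of the channel budget, which is why cabling frees designers from the reach limits of lossy board traces, though it also introduces the density and thermal considerations in items 3 and 4.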

Innovating Internal Connections

In response to organizations looking for solutions that address today’s data center design challenges, Molex offers one of the broadest portfolios in the industry. Customers can now meet their internal connectivity needs through one trusted supplier.


Although small in stature, Mirror Mezz connectors are mighty, packed with innovative design features that enhance reliability and maintain high-speed signal integrity. Molex recently added Mirror Mezz Pro to the product family to deliver performance at twice the maximum data rate while fitting the original compact PCB footprint. The Open Compute Project's prestigious Open Accelerator Infrastructure (OAI) group has developed specifications for an open-hardware compute accelerator module form factor and its interconnects, and Molex's Mirror Mezz Pro was the perfect fit for its density and performance requirements, earning selection as the board-to-board mezzanine connector standard for the OAI v1.5 112G system specification.

Overcoming lossy PCB traces, the innovative NearStack PCIe high-speed connectors and cable assemblies can create a direct connection from anywhere in the system to a point near the application-specific integrated circuit (ASIC), improving signal integrity (SI), lowering insertion loss and reducing signal latency.


A range of additional internal PCIe 4 and 5 interconnect solutions provides a multitude of design options for other purposes. Molex's portfolio includes the Sliver Edge Card Connector system and cable assemblies, which are recognized in Small Form Factor Committee, JEDEC, Open Compute Project and Gen-Z Consortium standards and deliver superior SI performance up to 32 Gbps.

Molex offers SlimSAS Connectors and Cable Assemblies that are compatible with PCIe 4 and 5 as well as SAS3 protocols, and are future-proofed for SAS 4 (24 Gbps). These cable assemblies can solve space and performance issues in systems well into the future.

The zCD Interconnect System is the Molex product family of CDFP connectors and cable assemblies, developed under a 2013 multi-source agreement to help data centers meet ever-increasing data rates with a widely accepted pluggable form factor. Offering high port and bandwidth density, the zCD Interconnect System delivers 400 Gbps per port with superior signal integrity performance.
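
As a rough check on that per-port figure, the CDFP form factor is commonly described as aggregating 16 lanes at 25 Gbps per lane; those values are the generally cited CDFP numbers rather than figures from this article, and the short sketch below simply multiplies them out.

```python
# CDFP per-port bandwidth: 16 lanes x 25 Gbps per lane (commonly cited CDFP values).
LANES = 16
LANE_RATE_GBPS = 25

port_bandwidth_gbps = LANES * LANE_RATE_GBPS
print(f"CDFP port bandwidth: {port_bandwidth_gbps} Gbps")  # -> 400 Gbps
```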


Additionally, Molex recently introduced Mini Cool Edge I/O (MCIO) connectors, which transmit up to 16/32 Gbps for PCIe Gen 4 and 5, as well as PCIe Card Electromechanical (CEM) connectors, to provide higher bandwidth in server and storage applications.
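
For context on those per-lane figures, the sketch below converts the PCIe Gen 4 and Gen 5 raw rates (16 and 32 GT/s per lane, with 128b/130b encoding, per the PCIe specifications) into usable per-direction bandwidth for a few common lane widths; the lane counts shown are examples rather than values from this article.

```python
# Back-of-the-envelope per-direction throughput for PCIe Gen 4 and Gen 5 links.
# Raw per-lane rates and 128b/130b encoding follow the PCIe specifications;
# the lane widths below are just example configurations.

GEN_RATES_GTPS = {"Gen 4": 16.0, "Gen 5": 32.0}  # raw transfer rate per lane
ENCODING_EFFICIENCY = 128 / 130                   # 128b/130b line coding overhead

def throughput_gbps(gen: str, lanes: int) -> float:
    """Usable per-direction bandwidth in Gbps for a given generation and lane width."""
    return GEN_RATES_GTPS[gen] * ENCODING_EFFICIENCY * lanes

if __name__ == "__main__":
    for gen in GEN_RATES_GTPS:
        for lanes in (4, 8, 16):
            print(f"{gen} x{lanes}: {throughput_gbps(gen, lanes):.0f} Gbps per direction")
```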

The Molex Advantage

Molex provides a complete portfolio of solutions to meet customer needs at any stage of the data center lifecycle. From initial design to upscaling via modular design practices and full-scale implementations, Molex delivers unparalleled engineering expertise and reliability, ensuring optimized data center routing and connectivity for maximum performance.
