Scalability and High Performance with Modular Server Architecture

With the goal of promoting energy and cost efficiencies across the industry, the Server initiative of the Open Compute Project (OCP) has gained significant traction among data center operators. By adopting a modular approach defined in standards such as DC-MHS, NIC 3.0 and DC-SCM, data centers gain greater flexibility and control over resources, while scaling their infrastructure to meet the requirements of demanding applications, including generative artificial intelligence (AI), streaming services and more.

Overview


In the rapidly evolving field of data centers, the Open Compute Project (OCP) stands at the forefront of designing and sharing open-source hardware and architecture. One of the key sub-projects under the OCP umbrella is the Data Center – Modular Hardware System (DC-MHS), which introduces a shift from monolithic architectures to modular, scalable and pluggable designs with a focus on interoperability across the data center through standardized interfaces and form factors.  

Molex is a recommended supplier for DC-MHS, providing OCP-specified server components including acceleration modules, system security and control modules, and Network Interface Cards (NICs) on I/O boards.

Molex products and the OCP specifications they support (M-XIO, M-PIC and M-SIF are DC-MHS workstreams):

| Molex Product                          | DC-MHS: M-XIO | DC-MHS: M-PIC | DC-MHS: M-SIF | NIC 3.0 | DC-SCM |
|----------------------------------------|---------------|---------------|---------------|---------|--------|
| NearStack PCIe Connector System        | X             |               |               |         |        |
| KickStart Connector System             |               | X             |               |         |        |
| Micro-Fit+ Connectors                  |               | X             |               |         |        |
| Pico-Clasp Connectors                  |               | X             |               |         |        |
| Impel/Impel Plus Backplane Connectors  |               |               | X             |         |        |
| QSFP-DD Connector System               |               |               |               | X       |        |
| Sliver Edge Card Connectors            |               |               |               | X       | X      |

M-XIO/PESTI


The M-XIO/PESTI (Modular – Extended I/O Connectivity/Peripheral Sideband Tunneling Interface) workstream has two parts. First, it defines the connector, pinouts and signal-interface details of an M-XIO source connector, which provides the entry and exit points between sources, such as motherboards and Host Processor Modules (HPMs), and peripheral subsystems, including PCIe risers and backplanes.

Second, it defines the PESTI protocol, which establishes electrical compatibility between the various components specified within DC-MHS for the passing of statuses and discovery of subsystems. M-XIO/PESTI standardizes an efficient data and resource-sharing mechanism akin to the standardized ports and interfaces of computer systems.  
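To make that idea concrete, here is a minimal Python sketch of the kind of discovery-and-status exchange a sideband protocol like PESTI standardizes. All class, field and slot names are illustrative assumptions, not taken from the DC-MHS specification:

```python
# Illustrative sketch only: models the kind of subsystem discovery and
# status reporting that PESTI standardizes. Names and record layouts are
# hypothetical, not taken from the DC-MHS specification.
from dataclasses import dataclass

@dataclass
class SubsystemDescriptor:
    slot: int          # M-XIO source connector the peripheral sits behind
    device_type: str   # e.g. "pcie-riser", "backplane"
    present: bool      # presence-detect status
    status: str        # last reported health/state

class SidebandBus:
    """Stand-in for the sideband link an HPM uses to reach peripherals."""
    def __init__(self):
        self._subsystems = {}

    def attach(self, desc: SubsystemDescriptor) -> None:
        self._subsystems[desc.slot] = desc

    def discover(self) -> list:
        # An HPM would walk each source connector and query what is attached.
        return [d for d in self._subsystems.values() if d.present]

bus = SidebandBus()
bus.attach(SubsystemDescriptor(slot=0, device_type="pcie-riser", present=True, status="ok"))
bus.attach(SubsystemDescriptor(slot=1, device_type="backplane", present=True, status="ok"))
for dev in bus.discover():
    print(f"slot {dev.slot}: {dev.device_type} ({dev.status})")
```

Because every vendor exposes the same discovery surface, the same host-side logic works regardless of which peripheral subsystem sits behind the connector.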

The Molex NearStack PCIe Connector System is specified in OCP documentation as the standard connector for enabling PCIe Gen 5 data rates of 32 Gbps and fulfilling SFF-TA-1026 (Small Form Factor) requirements.

M-PIC


The M-PIC (Modular – Platform Infrastructure Connectivity) workstream standardizes the necessary elements to interface an HPM to platform and chassis infrastructure, including cooling, power distribution and networking.

M-PIC streamlines communication between modules, further enhancing resource efficiency and management, reducing operational costs and improving overall data center performance. 

Like the NearStack PCIe direct-to-cable interconnects, the KickStart Connector System is recommended in OCP’s M-PIC specification for cable-optimized boot-peripheral connections and meets the SFF-TA-1036 standard.

Accommodating PCIe Gen 5 data rates of 32 Gbps NRZ, KickStart combines low- and high-speed signals with power circuits into a single cable assembly to optimize space and provide a flexible, standardized and easy-to-implement solution.   

M-CRPS


The M-CRPS (Modular – Common Redundant Power Supply) workstream specifies the requirements of an internal redundant power supply, creating standardization across data centers and vendors alike. 

By implementing common and redundant power supply designs, the M-CRPS workstream significantly improves the overall stability and uptime of the data center infrastructure, safeguarding critical operations against potential disruptions. 
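As a rough illustration of why redundancy protects uptime, the following Python sketch models a simple N+1 capacity check. The wattage figures and function name are assumptions for illustration, not values from the M-CRPS specification:

```python
# Illustrative sketch only: shows why common, redundant (e.g. N+1) power
# supplies improve uptime. Wattages and names are hypothetical.
def system_stays_up(healthy_supply_watts: list, load_watts: int) -> bool:
    """True if the remaining healthy supplies can still carry the load."""
    return sum(healthy_supply_watts) >= load_watts

# Two 1600 W supplies feeding a 1400 W chassis: either one can fail
# without taking the system down.
supplies = [1600, 1600]
print(system_stays_up(supplies, 1400))        # True: both supplies healthy
print(system_stays_up(supplies[:1], 1400))    # True: one supply failed, still up
print(system_stays_up([], 1400))              # False: both supplies failed
```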

Molex continues to invest in development that supports DC-MHS. A new release designed for the specific requirements of the M-CRPS workstream is coming soon.  

M-SIF


The M-SIF (Modular – Shared Infrastructure) workstream aims to improve the interoperability of shared infrastructure enclosures housing multiple serviceable modules, such as HPMs, Datacenter Secure Control Modules (DC-SCMs) and peripherals.

With blind-mating and hot-pluggable functionality, M-SIF enables seamless insertion and removal of these modules without requiring precise manual alignment. As a result, the enclosure remains operational, allowing for uninterrupted data center performance.
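A minimal Python sketch of this service model follows; the bay and module names are hypothetical. The point is that removing or replacing one module never touches the others:

```python
# Illustrative sketch only: models hot-pluggable module service in a shared
# enclosure. Bay and module identifiers are invented for the example.
class Enclosure:
    def __init__(self):
        self.modules = {}   # bay -> module id

    def insert(self, bay: str, module_id: str) -> None:
        # Blind-mate connectors guide the module home; no manual alignment step.
        self.modules[bay] = module_id
        print(f"{module_id} online in bay {bay}")

    def remove(self, bay: str) -> None:
        module_id = self.modules.pop(bay)
        print(f"{module_id} removed from bay {bay}; other bays unaffected")

enc = Enclosure()
enc.insert("bay0", "hpm-0")
enc.insert("bay1", "dc-scm-0")
enc.remove("bay0")           # service one module...
enc.insert("bay0", "hpm-1")  # ...and replace it, while bay1 keeps running
```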

Impel Enhanced, included in the Open Compute Project M-SIF Base Specification, provides enhanced cable and backplane headers that mate to existing Impel Plus daughtercards and is designed specifically for PCIe Gen 6 channel requirements.

Molex PowerPlane Busbars, together with Backplane Connectors, offer a complete solution for improving the interoperability of shared system infrastructure.

NIC 3.0 and DC-SCM


By redefining the landscape of Network Interface Cards and DC-SCMs, OCP equips design engineers and system architects with the tools to develop robust, future-ready data center solutions. 

The introduction of NIC 3.0 ushers in a versatile form factor that accommodates PCIe Gen 5 and aligns with the SFF-TA-1002 connector standard, ensuring both interoperability and high performance.

Similarly, the DC-SCM specification adopts a modular approach to server management, bolstering security and control features. This modularity promotes a unified management framework, streamlining integration across various platforms and enhancing the user experience and development workflow.
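One way to picture that unified framework is an interface that any conforming control module satisfies, so the same management code runs unchanged across platforms. The Python sketch below is a loose analogy with invented method and class names, not the DC-SCM API:

```python
# Illustrative sketch only: a standardized control-module interface lets one
# piece of management code work across vendors. Names are hypothetical.
from typing import Protocol

class ControlModule(Protocol):
    def power_state(self) -> str: ...
    def secure_boot_enabled(self) -> bool: ...

class VendorASCM:
    """One vendor's module; any conforming implementation would also work."""
    def power_state(self) -> str:
        return "on"

    def secure_boot_enabled(self) -> bool:
        return True

def health_report(scm: ControlModule) -> str:
    # The caller depends only on the shared interface, not the vendor.
    return f"power={scm.power_state()}, secure_boot={scm.secure_boot_enabled()}"

print(health_report(VendorASCM()))
```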

Additional Resources


Building the Next Generation of Hyperscale Data Centers

The next generation of hyperscale data centers will be built using OCP guidelines such as those defined in DC-MHS. Learn how Molex is empowering system architects with engineering expertise and innovative interconnect solutions, such as our NearStack PCIe Connector System, to scale up and scale out these expansive facilities.

Molex Solutions for DC-MHS

Developments in AI, machine learning (ML) and high-performance computing (HPC) are spurring intense workloads and throughputs in data centers and driving demand for innovations in data center systems and technology. Learn from Bill Wilson, New Product Development Manager for Molex, about the latest high-speed, power and low-speed connectivity solutions for OCP DC-MHS.

224 Gbps-PAM4 High-Speed Data Center Technology

What’s next for scaling up and out? The answer is clear: 224 Gbps-PAM4 architecture. We’re proud to have released the industry’s first comprehensive portfolio of 224G products – a culmination of decades of engineering expertise and co-development with our customers. The next generation of hyperscale data centers can now be built for the demands of AI, machine learning and future technologies.

Data Center Server & Storage Solutions

The Internet of Things (IoT) revolution has transformed data into a vital component of our everyday lives. Now more than ever, data centers must employ state-of-the-art server and storage technologies to keep up with the exponential growth in demand for real-time data. Learn how industry expertise and cutting-edge interconnect solutions from Molex ensure powerful connectivity, reliability, performance and efficiency.