- \# of External Ports: 288
- \# of Power Supplies Included: 2
The Intel® Omni-Path Director Class Switch (Intel® OP Director Class Switch), based on Intel’s next generation 48-radix switch silicon, has many innovative features that provide optimum performance for both small and large fabrics. Both switch models are dense form factor designs capable of supporting up to 768 100 Gb/s ports in a low 20U footprint. Designed to be modular alongside edge switches, host adapters, and software, the Intel OP Director Class Switch 100 series enables customers to tailor their system configuration to meet present and future needs.
Intel® Omni-Path Architecture is designed to support high message rate traffic from each node through the fabric. With ever-increasing processing power and core counts in Intel® Xeon® processors and Intel® Xeon Phi™ processors, that means the fabric has to support high bandwidth as well as high message rate throughput.
The Intel® OPA switch's 48-port design provides improved fabric scalability, reduced latency, increased density, and reduced cost and power. In fact, the 48-port ASIC can enable five-hop configurations of up to 27,648 nodes, over 2.3x what's possible with current InfiniBand* solutions. Depending on fabric size, this can reduce fabric infrastructure requirements in a typical fat tree configuration by over 50 percent, since fewer switches, cables, and racks are needed and less power is consumed compared with today's 36-port switch ASICs.
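The node counts above follow from standard fat-tree arithmetic: a three-tier (five-hop) fat tree built from k-port switches supports at most k³/4 endpoints. A minimal sketch of that calculation, using the 48-port and 36-port radices from the text:

```python
# Maximum endpoints of a three-tier (five-hop) fat tree built from
# k-port switch ASICs: k/2 links face down at each tier, giving k^3/4.
def max_fat_tree_nodes(radix: int) -> int:
    return radix ** 3 // 4

opa_nodes = max_fat_tree_nodes(48)  # 48-port Intel OPA ASIC
ib_nodes = max_fat_tree_nodes(36)   # 36-port InfiniBand ASIC

print(opa_nodes)                    # 27648, matching the figure above
print(opa_nodes / ib_nodes)         # ratio > 2.3
```

The 27,648-node figure in the text is exactly 48³/4, and the ratio against a 36-port ASIC (11,664 nodes) works out to roughly 2.37, consistent with the "over 2.3x" claim.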
Features in Intel® OPA help minimize the negative performance impact of large Maximum Transfer Units (MTUs) on small messages and help maintain consistent latency for interprocess communication (IPC) messages, such as Message Passing Interface (MPI) messages, when large messages—typically storage traffic—are being transmitted simultaneously in the fabric. This allows Intel® OPA to let higher-priority small packets bypass lower-priority large packets, creating low and more predictable latency through the fabric.
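A toy model of why this matters (the sizes, chunk granularity, and link rate below are illustrative assumptions, not Intel's implementation): without preemption, a small high-priority packet queued behind a large low-priority packet must wait for the entire large packet to serialize onto the link; with preemption, it waits only for the current in-flight chunk.

```python
# Toy illustration of priority bypass on a 100 Gb/s link.
# All sizes and the preemption granularity are assumed values.
LINK_GBPS = 100
LARGE_BITS = 8192 * 8   # a large (e.g. storage) packet
CHUNK_BITS = 64 * 8     # assumed preemption granularity

def serialize_ns(bits: int) -> float:
    """Time to put `bits` on the wire at 100 Gb/s (1 bit per 0.01 ns)."""
    return bits / LINK_GBPS

# Small high-priority packet arrives just after the large one starts:
wait_no_preempt = serialize_ns(LARGE_BITS)  # waits out the whole packet
wait_preempt = serialize_ns(CHUNK_BITS)     # waits out one chunk only
print(wait_no_preempt, wait_preempt)        # 655.36 vs 5.12 ns
```

The point of the sketch is the ratio, not the absolute numbers: the small packet's queueing delay scales with the preemption granularity rather than with the largest MTU in flight.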
Intel® Omni-Path Architecture also delivers error detection and correction that is expected to be much more efficient than the forward error correction (FEC) defined in the InfiniBand standard. Enhancements include zero load for detection, and if a correction is required, packets need to be retransmitted only across the last link—not all the way from the sending node—which enables near-zero additional latency for a correction.
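The benefit of link-level retry can be sketched with a simple cost model (the per-hop latency below is an assumed figure for illustration only): end-to-end recovery re-traverses every hop from the sender, while link-level recovery re-traverses only the one failing link.

```python
# Toy cost model for error recovery; HOP_NS is an assumed value,
# not a measured Intel figure.
HOP_NS = 110  # assumed latency to re-send a packet across one hop

def retry_cost_ns(hops_resent: int) -> int:
    """Extra latency from re-sending the packet over `hops_resent` hops."""
    return hops_resent * HOP_NS

link_level = retry_cost_ns(1)  # resend only across the failing link
end_to_end = retry_cost_ns(5)  # resend across a full five-hop path
print(link_level, end_to_end)
```

Whatever the real per-hop number, the recovery cost of the link-level scheme is constant in path length, while end-to-end recovery grows with the number of hops between sender and receiver.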
Explore how the Intel® Enterprise Edition for Lustre* Software can unleash the performance and scalability of the Lustre* parallel file system and how the Intel® Solid State Drive Data Center Family can take storage performance to new heights.