
InfiniBand Architecture

The high-performance interconnect

InfiniBand* is a low-latency, high-bandwidth data center interconnect featuring remote direct memory access (RDMA) for high-performance inter-processor communication (IPC). It is used in a wide range of computing environments, from high-performance computing (HPC) systems and large data centers to embedded applications, where fast inter-server communication is critical to performance.

A true HPC solution

InfiniBand* was developed as a specification more than a decade ago and has achieved significant success in the HPC environment.

InfiniBand* features three critical benefits over traditional data center fabrics:

  • High bandwidth
  • Low latency
  • Low CPU utilization

Intel and InfiniBand*

Intel has played a leading role in the development of InfiniBand* architecture, from helping to guide the standards bodies that enable InfiniBand* architecture to working with the industry on software stack development. Intel is committed to ensuring successful, industry-wide adoption of InfiniBand* architecture.

Offering InfiniBand* connectivity for the data center and HPC markets on both Intel® Xeon® processor–based and Intel® Itanium® processor–based platforms, Intel continues its collaboration with the industry to ensure that InfiniBand* architecture capabilities are optimized for Intel products.

The InfiniBand* advantage

InfiniBand* is based on RDMA, a message-passing paradigm. The RDMA service passes messages across the network between processors, moving data directly between registered memory regions without operating system intervention or intermediate data copies.
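
As a concrete illustration, the sketch below sets up, in C against the open-source libibverbs library (the verbs interface described in the next paragraph), the resources an RDMA application typically needs: it opens a channel adapter, registers a memory region so the adapter can access it directly, and creates a completion queue and a reliable-connected queue pair. This is a minimal sketch rather than a complete program: connection establishment and error handling are abbreviated, and the buffer size and queue depths are illustrative assumptions.

    /* Minimal sketch of verbs resource setup with libibverbs. Error handling
     * is abbreviated and connection establishment (e.g., via the RDMA
     * connection manager) is omitted; queue depths and the buffer size are
     * illustrative assumptions.
     * Build (typical): gcc verbs_setup.c -libverbs
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        /* Enumerate the channel adapters visible to this host. */
        struct ibv_device **dev_list = ibv_get_device_list(NULL);
        if (!dev_list || !dev_list[0]) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        /* Open the first adapter and allocate a protection domain. */
        struct ibv_context *ctx = ibv_open_device(dev_list[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Register a buffer so the adapter can read and write it directly,
         * with no operating system involvement on the data path. */
        void *buf = malloc(4096);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);

        /* Completion queue: the adapter reports finished work requests here. */
        struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);

        /* Reliable-connected queue pair: the send/receive endpoint. */
        struct ibv_qp_init_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.send_cq          = cq;
        attr.recv_cq          = cq;
        attr.qp_type          = IBV_QPT_RC;
        attr.cap.max_send_wr  = 16;
        attr.cap.max_recv_wr  = 16;
        attr.cap.max_send_sge = 1;
        attr.cap.max_recv_sge = 1;
        struct ibv_qp *qp = ibv_create_qp(pd, &attr);

        printf("%s ready: lkey=0x%x rkey=0x%x\n",
               ibv_get_device_name(dev_list[0]), mr->lkey, mr->rkey);

        /* The QP would next be connected to a remote peer and transitioned
         * to the ready-to-send state before any work requests are posted. */

        ibv_destroy_qp(qp);
        ibv_destroy_cq(cq);
        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(dev_list);
        free(buf);
        return 0;
    }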

Developers interface with InfiniBand* through a semantic interface called ‘verbs.’ According to the InfiniBand* specification, “Verbs describe the functions necessary to configure, manage and operate an InfiniBand channel adapter. Verbs are not an API, but provide the framework for the OSV to specify the API.” (Here, OSV refers to the operating system vendor.)
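
Continuing the sketch above, the fragment below operates the channel adapter through verbs: it posts a one-sided RDMA write and polls the completion queue for the result. It assumes the queue pair, memory region, and completion queue from the previous sketch have already been connected to a remote peer, and that the peer's buffer address and remote key (remote_addr, rkey) were exchanged out of band; those parameters, and the function name, are assumptions for illustration.

    /* Sketch: post a one-sided RDMA write and wait for its completion.
     * Assumes qp, cq, mr, and buf from the setup sketch above, already
     * connected to a peer; remote_addr and rkey are assumed to have been
     * exchanged out of band (for example, over a TCP socket). */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>
    #include <infiniband/verbs.h>

    int rdma_write_example(struct ibv_qp *qp, struct ibv_cq *cq,
                           struct ibv_mr *mr, void *buf, size_t len,
                           uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge;
        memset(&sge, 0, sizeof(sge));
        sge.addr   = (uintptr_t)buf;   /* local registered buffer */
        sge.length = (uint32_t)len;
        sge.lkey   = mr->lkey;

        struct ibv_send_wr wr, *bad_wr = NULL;
        memset(&wr, 0, sizeof(wr));
        wr.wr_id               = 1;
        wr.sg_list             = &sge;
        wr.num_sge             = 1;
        wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided write */
        wr.send_flags          = IBV_SEND_SIGNALED;  /* request a completion */
        wr.wr.rdma.remote_addr = remote_addr;        /* peer's registered buffer */
        wr.wr.rdma.rkey        = rkey;

        /* Hand the work request to the adapter; the data movement itself is
         * performed in hardware, with no kernel involvement and no copy. */
        if (ibv_post_send(qp, &wr, &bad_wr))
            return -1;

        /* Poll the completion queue until the write is reported done. */
        struct ibv_wc wc;
        int n;
        do {
            n = ibv_poll_cq(cq, 1, &wc);
        } while (n == 0);
        return (n < 0 || wc.status != IBV_WC_SUCCESS) ? -1 : 0;
    }

Because the write is one-sided, the remote CPU takes no part in the transfer, which is where the low-latency, low-CPU-utilization benefits listed above come from.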

This highly efficient communication is useful for applications such as MPI for HPC, traditional socket applications, storage applications, file systems, and more—all through the use of specialized APIs over a common transport.
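
To make the MPI case concrete, here is a minimal MPI send/receive in C. The application codes only against the MPI API; when the MPI library is built over an InfiniBand*-capable fabric stack, the same calls use the verbs transport with no change to the application. The message contents, size, and tag are arbitrary choices for illustration.

    /* Minimal MPI example: rank 0 sends a message to rank 1.
     * Whether the bytes travel over InfiniBand*, Ethernet*, or shared memory
     * is decided by the MPI library, not by this code.
     * Build/run (typical): mpicc mpi_hello.c -o mpi_hello && mpirun -n 2 ./mpi_hello
     */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        int rank = 0;
        char msg[64];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            strcpy(msg, "hello over the fabric");
            MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(msg, sizeof(msg), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received: %s\n", msg);
        }

        MPI_Finalize();
        return 0;
    }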

Driving the industry with collaboration

Representing a new approach to I/O technology, InfiniBand* architecture is based on the collective research, knowledge, and experience of industry leaders, many of whom are members of both the OpenFabrics* Alliance (OFA) and the InfiniBand* Trade Association (IBTA).

  • The OFA develops, distributes, and promotes a unified, transport-independent, open-source software stack for RDMA-capable fabrics and networks, including Ethernet* and InfiniBand* architecture. Upper-level protocols in the stack support IP, NAS, SAN, sockets, clustered file systems, and database application environments.
  • The InfiniBand* Trade Association has developed and continues to enhance a common I/O specification to deliver a channel-based, switched-fabric technology that the entire industry can adopt.

The IBTA and the OFA are almost perfect complements. The IBTA developed the InfiniBand* specification and continues to maintain and enhance it. The OFA develops and maintains application programming interfaces (APIs) consistent with the InfiniBand* specifications. In another important area, the IBTA tests components for compliance with its specification, while the OFA runs interoperability tests of components.

As a member of both the OFA and the IBTA, Intel helped develop the specification standard and software stacks for InfiniBand*. These stacks are standard on Microsoft Windows* and are included in the Linux kernel and all major Linux distributions. The OpenFabrics* stack is also a key ingredient of the Intel® Cluster Ready program, simplifying the building of HPC clusters.

As both an industry leader and a collaborator, and with InfiniBand* architecture support in its product lines, Intel remains committed to the continued success of InfiniBand* architecture.