Evaluating InfiniBand performance with PCI Express
Author: | Amith R. Mamidala, Jiuxing Liu, D.K. Panda, Abhinav Vishnu |
---|---|
Year of publication: | 2005 |
Source: | IEEE Micro. 25:20-29 |
ISSN: | 1937-4143 0272-1732 |
DOI: | 10.1109/mm.2005.9 |
Description: | The InfiniBand architecture is an industry standard that offers low latency and high bandwidth as well as advanced features such as remote direct memory access (RDMA), atomic operations, multicast, and quality of service. InfiniBand products can achieve a latency of several microseconds for small messages and a bandwidth of 700 to 900 Mbytes/s. As a result, InfiniBand is becoming increasingly popular as a high-speed interconnect technology for building high-performance clusters. The Peripheral Component Interconnect (PCI) has been the standard local-I/O-bus technology for the last 10 years. However, a growing number of applications require lower latency and higher bandwidth than a PCI bus can provide. As an extension, PCI-X offers higher peak performance and efficiency. InfiniBand host channel adapters (HCAs) with PCI Express achieve 20 to 30 percent lower latency for small messages compared with HCAs using 64-bit, 133-MHz PCI-X interfaces. PCI Express also improves performance at the MPI level, achieving a latency of 4.1 μs for small messages. It can also improve MPI collective communication and the performance of bandwidth-bound MPI applications. |
Database: | OpenAIRE |