

    InfiniBand network

    [Figure: InfiniBand network diagram (NHR Infiniband Lewin.png)]

    Notes:

    • Both the GPU (green) and the Genoa (red) sub-networks have a fat-tree topology.

    • Two spine switches (grey) are linked to six storage gateways of the IBM Spectrum Scale (GPFS) file systems: home, scratch, and sw.

    • Each pair of Genoa compute nodes shares one 400 Gb/s NIC, so the effective bandwidth per node is 200 Gb/s (as in the GPU cluster); see the sketch after this list.

    • 14 LNET gateway servers provide access to the Omni-Path (OPA) network of the CLX cluster.

    • Mellanox (acquired by NVIDIA) designs and manufactures the InfiniBand components.
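
    The arithmetic behind the 200 Gb/s figure is restated in the minimal Python sketch below. It only reuses the numbers given above (a 400 Gb/s NIC shared by two Genoa nodes); the function name and the GB/s conversion are illustrative additions, not part of any site tooling, and the raw GB/s value ignores protocol overhead.

        def effective_node_bandwidth_gbps(nic_rate_gbps: float, nodes_per_nic: int) -> float:
            """Bandwidth available to one node when nodes_per_nic nodes share a single NIC."""
            return nic_rate_gbps / nodes_per_nic

        if __name__ == "__main__":
            # Genoa partition: two compute nodes share one 400 Gb/s InfiniBand NIC.
            genoa_gbps = effective_node_bandwidth_gbps(nic_rate_gbps=400, nodes_per_nic=2)
            # 1 Gb/s = 0.125 GB/s; actual payload rates are lower due to protocol overhead.
            print(f"Genoa: {genoa_gbps:.0f} Gb/s per node (~{genoa_gbps / 8:.0f} GB/s raw)")
            # The GPU cluster reaches the same effective per-node figure (see the note above).
            print("GPU:   200 Gb/s per node")

    Sharing one NIC between two nodes halves the per-node bandwidth, but it also halves the number of switch ports and cables needed compared with one NIC per node.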


    {"serverDuration": 10, "requestCorrelationId": "e68b129b3ec148d88f4438179acf2ac2"}