Friday, June 10, 2022

Frontier supercomputer powered by AMD is the fastest and first exascale machine

Exascale computing is the next milestone in the development of supercomputers. Able to process information much faster than today’s most powerful supercomputers, exascale computers will give scientists a new tool for addressing some of the biggest challenges facing our world, from climate change to understanding cancer to designing new kinds of materials. 

One way scientists measure computer performance is in floating point operations per second (FLOPS). A floating point operation is a simple arithmetic operation, such as an addition or a multiplication. Because the FLOPS ratings of modern machines carry so many zeros, researchers use prefixes such as giga, tera, and exa, where "exa" means 18 zeros. That means an exascale computer can perform more than 1,000,000,000,000,000,000 FLOPS, or 1 exaFLOP. DOE is deploying the United States' first exascale computers: Frontier at ORNL, Aurora at Argonne National Laboratory, and El Capitan at Lawrence Livermore National Laboratory.

Exascale supercomputers will allow scientists to create more realistic Earth system and climate models. They will help researchers understand the nanoscience behind new materials and help design future fusion power plants. They will power new studies of the universe, from particle physics to the formation of stars. And they will help ensure the safety and security of the United States by supporting tasks such as the maintenance of the US nuclear deterrent.

For decades, maximizing performance has been the chief concern of both hardware architects and software developers. With the end of performance scaling through ever-higher CPU clock frequencies (the breakdown of Dennard scaling), the industry transitioned from single-core to multi-core and many-core architectures. As a result, hardware acceleration and the use of co-processors alongside CPUs have become a popular way to boost performance while keeping the power budget low. This includes new hardware customized for particular application domains, such as the Tensor Processing Unit (TPU), Vision Processing Unit (VPU), and Neural Processing Unit (NPU), as well as additions to existing platforms such as Intel Xeon Phi co-processors, general-purpose GPUs, and Field Programmable Gate Arrays (FPGAs). Such accelerators, together with the main processors and memory, constitute a heterogeneous system. This heterogeneity, however, raises unprecedented difficulties for the performance and energy optimization of modern heterogeneous HPC platforms.

The focus on maximizing HPC performance, measured in floating point operations per second, has led supercomputers to consume an enormous amount of energy, both for computation and for cooling. Current HPC systems already draw megawatts of power, so energy efficiency has become a design concern on par with performance. For example, Summit, one of the world's most powerful supercomputers, consumes around 13 megawatts, roughly the power draw of more than 10,000 households. Because of such high power consumption, future HPC systems are highly likely to be power constrained. DOE's aim was to deploy an exascale supercomputer capable of performing a million trillion (10^18) floating-point operations per second within a power envelope of 20-30 megawatts: the initial target was one double-precision exaflop of compute capability for 20 megawatts of power, with a further target of 2 exaflops for 29 megawatts when running at full power. Taking these factors into consideration, HPE Cray designed the AMD-powered Frontier supercomputer for growing accelerated-computing needs under tight power constraints.
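As a back-of-the-envelope illustration of what those power targets imply in efficiency terms, the short program below (purely illustrative; it only restates the figures quoted above) converts them to gigaflops per watt:

/* exascale_targets.c - illustrative arithmetic for the DOE exascale power
 * targets quoted above: 1 exaflop within 20 MW and 2 exaflops within 29 MW. */
#include <stdio.h>

int main(void) {
    const double exaflop  = 1e18;  /* floating point operations per second */
    const double megawatt = 1e6;   /* watts */

    double target1 = (1.0 * exaflop) / (20.0 * megawatt) / 1e9;  /* gigaflops per watt */
    double target2 = (2.0 * exaflop) / (29.0 * megawatt) / 1e9;

    printf("1 exaflop in 20 MW  -> %.1f gigaflops/watt\n", target1);  /* 50.0 */
    printf("2 exaflops in 29 MW -> %.1f gigaflops/watt\n", target2);  /* about 69.0 */
    return 0;
}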

The Frontier supercomputer, built at the Department of Energy's Oak Ridge National Laboratory in Tennessee, has now become the world's first known supercomputer to demonstrate a speed of 1.1 exaFLOPS (1.1 quintillion floating point operations per second). Frontier's exascale performance is enabled by some of the world's most advanced technology from HPE and AMD.

The AMD-powered Frontier supercomputer is the first exascale machine, meaning it can process more than a quintillion calculations per second, with an HPL score of 1.102 exaflop/s. Based on the latest HPE Cray EX235a architecture and equipped with AMD EPYC 64C 2 GHz processors, the system has 8,730,112 total cores and a power efficiency rating of 52.23 gigaflops/watt. It relies on HPE's Slingshot interconnect, a high-speed Ethernet-based fabric, for data transfer.

Exascale is the next level of computing performance. By solving calculations five times faster than today's top supercomputers, exceeding a quintillion (10^18) calculations per second, exascale systems will enable scientists to develop new technologies for energy, medicine, and materials. The Oak Ridge Leadership Computing Facility is home to one of America's first exascale systems, Frontier, which will help guide researchers to new discoveries at exascale.

Frontier is based on HPE Cray's new EX architecture and Slingshot interconnect, with optimized 3rd Gen AMD EPYC™ CPUs for HPC and AI and AMD Instinct™ MI250X accelerators. It delivers Linpack (double-precision floating point, FP64) compute performance of 1.1 EFLOPS (exaFLOPS).


The Frontier test and development system (TDS) secured first place in the Green500 list, delivering 62.68 gigaflops/watt power efficiency from a single cabinet of optimized 3rd Gen AMD EPYC processors and AMD Instinct MI250X accelerators. The full system could lead to breakthroughs in medicine, astronomy, and more.


The HPE/AMD system delivers 1.102 Linpack exaflops of computing power in a 21.1-megawatt power envelope, an efficiency of 52.23 gigaflops per watt (1.102 exaflops divided by 21.1 megawatts). Frontier uses only about 29 megawatts at its very peak. During the benchmark run it sustained 1.1 exaflops, and its theoretical peak is 2 exaflops.



Node diagram:


Frontier is an HPE Cray EX system with 74 cabinets and 9,408 nodes. Each node has one CPU and four GPUs: the GPUs are AMD MI250Xs and the CPU is an AMD EPYC, for a total of 37,632 GPUs across the machine. Everything is wired together with the high-speed Cray interconnect, called Slingshot, and the system is water cooled. There have recently been good efforts toward using computational fluid dynamics to model the water flow in the cooling system. These are heavily instrumented machines whose liquid cooling adjusts dynamically to the workload: sensors monitor temperatures down to individual components on individual node boards, so the cooling levels can be adjusted up and down to make sure the system stays at a safe temperature. The single-cabinet run was estimated to deliver over 60 gigaflops per watt.

Frontier sits in the data center that formerly housed the Titan supercomputer. Titan was removed and the data center refurbished, which required more power and more cooling: 40 megawatts of power were brought in, and 40 megawatts of cooling are available. Frontier uses only about 29 megawatts of that at its very peak. The new supercomputer is even a little quieter than Summit because it is 100 percent liquid cooled, with no fans and no rear doors exchanging heat with the room; the remaining fan noise comes from the storage systems, which are also from HPE and are air cooled.

The OLCF (Oak Ridge Leadership Computing Facility) runs the Center for Accelerated Application Readiness (CAAR), its vehicle for application readiness. That group supports eight applications for the OLCF and 12 applications for the Exascale Computing Project. Frontier is OLCF-5; the next system will be OLCF-6.

The result was confirmed by the High-Performance Linpack (HPL) benchmark. As impressive as that sounds, the ultimate limits of Frontier are even more staggering: the supercomputer is theoretically capable of a peak performance of 2 quintillion calculations per second. Among all these massively powerful supercomputers, only Frontier has achieved true exascale performance, at least where it counts, according to TOP500. Some of the most exciting work will be in artificial intelligence workloads. Research teams plan to develop better treatments for different diseases and to improve the efficacy of existing treatments, and these systems can digest incredible amounts of data, such as thousands of laboratory or pathology reports, drawing inferences across them that no human being could ever make but a supercomputer can. ORNL still operates Summit, a previous TOP500 number-one system built by IBM and Nvidia, and it remains highly utilized. Summit will run for at least a year in overlap with Frontier, to make sure Frontier is up and stable and to give people time to transition their data and applications to the new system.

With the Linpack exaflops milestone achieved by the Frontier supercomputer at Oak Ridge National Laboratory, the United States is turning its attention to the next crop of exascale machines, some 5-10x more performant than Frontier. At least one such system is being planned for the 2025-2030 timeframe, and the DOE is soliciting input from the vendor community to inform the design and procurement process. The goal is systems that can solve scientific problems 5 to 10 times faster, or solve more complex problems such as those with more physics or requirements for higher fidelity, than the current state-of-the-art systems. These future systems will include associated networks and data hierarchies, along with a capable software stack that meets the requirements of a broad spectrum of applications and workloads, including large-scale computational science campaigns in modeling and simulation, machine intelligence, and integrated data analysis. DOE expects these systems to operate within a power envelope of 20-60 MW and to be sufficiently resilient to hardware and software failures to minimize the need for user intervention. The procurements could include the successor to Frontier (OLCF-6), the successor to Aurora (ALCF-5), the successor to Crossroads (ATS-5), and the successor to El Capitan (ATS-6), as well as a future NERSC system (possibly NERSC-11). Note that of the "predecessor systems," only Frontier has been installed so far. A key thrust of the DOE supercomputing strategy is the creation of an Advanced Computing Ecosystem (ACE) that enables "integration with other DOE facilities, including light source, data, materials science, and advanced manufacturing." The next generation of supercomputers will need to be capable of integration into an ACE environment that supports automated workflows, combining one or more of these facilities to reduce the time from experiment and observation to scientific insight.

The original CORAL contract called for three pre-exascale systems (~100-200 petaflops each) with at least two different architectures to manage risk. Only two systems, Summit at Oak Ridge and Sierra at Livermore, were completed in the intended timeframe, using nearly the same heterogeneous IBM-Nvidia architecture. CORAL-2 took a similar tack, calling for two or three exascale-class systems with at least two distinct architectures. The program is procuring two systems, Frontier and El Capitan, both based on a similar heterogeneous HPE AMD+AMD architecture. The redefined Aurora, which is based on the heterogeneous HPE Intel+Intel architecture, becomes the "architecturally diverse" third system (although it technically still belongs to the first CORAL contract).

-----

Reference:

https://www.olcf.ornl.gov/wp-content/uploads/2019/05/frontier_specsheet.pdf

https://www.researchgate.net/figure/Power-consumption-by-top-10-supercomputers-over-time-based-on-the-results-published_fig2_350176475

https://www.youtube.com/watch?v=HvJGsF4t2Tc

Monday, February 14, 2022

Open MPI with hierarchical collectives (HCOLL) Algorithms

MPI, an acronym for Message Passing Interface, is a library specification for parallel computing architectures, which allows for communication of information between various nodes and clusters. Today, MPI is the most common protocol used in high performance computing (HPC).

The Open MPI Project is an open source Message Passing Interface implementation that is developed and maintained by a consortium of academic, research, and industry partners. Open MPI is therefore able to combine the expertise, technologies, and resources from all across the High Performance Computing community in order to build the best MPI library available. Open MPI offers advantages for system and software vendors, application developers and computer science researchers.

Source: https://developer.nvidia.com/blog/benchmarking-cuda-aware-mpi/
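Before getting into collectives, it helps to see what a minimal MPI program looks like. The sketch below is purely illustrative (the file name and launch line are arbitrary) and doubles as a quick sanity check of an Open MPI installation.

/* hello_mpi.c - minimal MPI example (illustrative sketch)
 * Build: mpicc hello_mpi.c -o hello_mpi
 * Run:   mpirun -np 4 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id within the communicator */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}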

Open MPI is developed in a true open source fashion by a consortium of research, academic, and industry partners. The latest Open MPI release series at the time of writing is 4.1.

Download Open MPI from https://www.open-mpi.org/software/ompi/v4.1/

For example: https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.1.tar.gz


NOTE: NVIDIA Mellanox HPC-X is a comprehensive software package that includes MPI and SHMEM communication libraries. HPC-X uses the 'hcoll' library for collective communication; 'hcoll' is enabled by default in HPC-X on Azure HPC VMs and can be controlled at runtime with the parameter [-mca coll_hcoll_enable 1].

How to install UCX:

Unified Communication X (UCX) is a framework of communication APIs for HPC. It is optimized for MPI communication over InfiniBand and works with many MPI implementations such as OpenMPI and MPICH.

  • wget https://github.com/openucx/ucx/releases/download/v1.4.0/ucx-1.4.0.tar.gz
  • tar -xvf ucx-1.4.0.tar.gz
  • cd ucx-1.4.0
  • ./configure --prefix=<ucx-install-path> 
  • make -j 8 && make install

Optimizing MPI collectives and hierarchical communication algorithms (HCOLL):

MPI collective communication primitives offer a flexible, portable way to implement group communication operations. They are widely used across scientific parallel applications and have a significant impact on overall application performance. The configuration parameters below can be used to optimize collective communication performance with HPC-X and the HCOLL library.
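To make the discussion concrete, here is what a collective call looks like in application code. This is a minimal sketch, not tied to HCOLL or any particular transport, in which every rank contributes a value and receives the global sum.

/* allreduce_example.c - illustrative use of a collective primitive */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int local = rank + 1;     /* each rank contributes its own value */
    int global_sum = 0;

    /* Every rank receives the sum of all contributions. */
    MPI_Allreduce(&local, &global_sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d: global sum = %d\n", rank, global_sum);

    MPI_Finalize();
    return 0;
}

Whether a call like this is serviced by Open MPI's tuned component or by HCOLL is decided at runtime by the MCA parameters discussed below; the application code itself does not change.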

As an example, if you suspect your tightly coupled MPI application is doing an excessive amount of collective communication, you can try enabling hierarchical collectives (HCOLL). To enable those features, use the following parameters.


-mca coll_hcoll_enable 1 -x HCOLL_MAIN_IB=<MLX device>:<Port>

HCOLL :

Scalable infrastructure: Designed and implemented with current and emerging “extreme-scale” systems in mind

  • Scalable communicator creation, memory consumption, runtime interface
  • Asynchronous execution
  • Blocking and non-blocking collective routines (see the sketch after this list)
  • Easily integrated into other packages
  • Successfully integrated into OMPI – “hcoll” component in “coll” framework
  • Successfully integrated in Mellanox OSHMEM
  • Experimental integration in MPICH
  • Host level hierarchy awareness
  • Socket groups, UMA groups
  • Exposes Mellanox and InfiniBand specific capabilities
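The list above mentions both blocking and non-blocking collective routines. The non-blocking variants, such as MPI_Iallreduce (which also appears in the test output further below), return immediately and let the caller overlap communication with independent computation. A minimal sketch, with an arbitrary file name:

/* iallreduce_example.c - non-blocking collective sketch */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int local = rank, global = 0;
    MPI_Request req;

    /* Start the reduction, then do unrelated work while it progresses. */
    MPI_Iallreduce(&local, &global, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD, &req);

    /* ... computation that does not depend on 'global' could go here ... */

    MPI_Wait(&req, MPI_STATUS_IGNORE);   /* 'global' is valid only after the wait */
    printf("rank %d: sum = %d\n", rank, global);

    MPI_Finalize();
    return 0;
}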

How to build Open MPI with HCOLL

Install UCX as described above, then configure and build Open MPI as shown below.

Steps:

  1. ./configure --with-lsf=/LSF_HOME/10.1/ --with-lsf-libdir=/LSF_HOME/10.1/linux3.10-glibc2.17-ppc64le/lib/ --disable-man-pages --enable-mca-no-build=btl-uct --enable-mpi1-compatibility  --prefix $MY_HOME/openmpi-4.1.1/install --with-ucx=/ucx-install_dir CPPFLAGS=-I/ompi/opal/mca/hwloc/hwloc201/hwloc/include --cache-file=/dev/null --srcdir=. --disable-option-checking
  2. make 
  3. make install

---------------------------Set Test Environment------------------------------------------------

  1.  export PATH=$MY_HOME/openmpi-4.1.1/install/bin:$PATH
  2.  export LD_LIBRARY_PATH=$MY_HOME/openmpi-4.1.1/install/lib:/opt/mellanox/hcoll/lib:/opt/mellanox/sharp/lib:$LD_LIBRARY_PATH
  3.  export OPAL_PREFIX=$MY_HOME/openmpi-4.1.1/install
NOTE: It may be necessary to explicitly pass LD_LIBRARY_PATH to mpirun with -x, as shown in the run examples below.

--------------  How to run an MPI testcase without HCOLL--------------------------------------

1) Use these --mca options to disable HCOLL:

--mca coll_hcoll_enable 0 

--mca coll_hcoll_priority 0 

2) Add --mca coll_base_verbose 10  to get more details 

3) Add -x LD_LIBRARY_PATH so that launched processes pick up the proper library path, as shown below


-----------------------------Execute Testcase ----------------------------------

Testcase source:  https://github.com/jeffhammond/BigMPI/tree/master/test

$MY_HOME/openmpi-4.1.1/install/bin/mpirun --np 4 --npernode 1 --host host01,host02,host03,host04 -x LD_LIBRARY_PATH -x BIGCOUNT_MEMORY_PERCENT=6 -x BIGCOUNT_MEMORY_DIFF=10 -x HCOLL_RCACHE=^ucs -mca coll_hcoll_enable 0 --mca coll_hcoll_priority 0 test_allreduce_uniform_count

--------------------------------------------------------------------------

INT_MAX               :           2147483647
UINT_MAX              :           4294967295
SIZE_MAX              : 18446744073709551615
----------------------:-----------------------------------------
                      : Count x Datatype size      = Total Bytes
TEST_UNIFORM_COUNT    :           2147483647
V_SIZE_DOUBLE_COMPLEX :           2147483647 x  16 =    32.0 GB
V_SIZE_DOUBLE         :           2147483647 x   8 =    16.0 GB
V_SIZE_FLOAT_COMPLEX  :           2147483647 x   8 =    16.0 GB
V_SIZE_FLOAT          :           2147483647 x   4 =     8.0 GB
V_SIZE_INT            :           2147483647 x   4 =     8.0 GB
----------------------:-----------------------------------------
Results from MPI_Allreduce(int x 2147483647 = 8589934588 or 8.0 GB):
Rank  2: PASSED
Rank  3: PASSED
Rank  0: PASSED
Rank  1: PASSED
--------------------- Adjust count to fit in memory: 2147483647 x  50.0% = 1073741823
Root  : payload    34359738336  32.0 GB =  16 dt x 1073741823 count x   2 peers x   1.0 inflation
Peer  : payload    34359738336  32.0 GB =  16 dt x 1073741823 count x   2 peers x   1.0 inflation
Total : payload    34359738336  32.0 GB =  32.0 GB root +  32.0 GB x   0 local peers
---------------------
Results from MPI_Allreduce(double _Complex x 1073741823 = 17179869168 or 16.0 GB):
Rank  0: PASSED
Rank  2: PASSED
Rank  3: PASSED
Rank  1: PASSED
---------------------
Results from MPI_Iallreduce(int x 2147483647 = 8589934588 or 8.0 GB):
Rank  2: PASSED
Rank  0: PASSED
Rank  3: PASSED
Rank  1: PASSED
--------------------- Adjust count to fit in memory: 2147483647 x  50.0% = 1073741823
Root  : payload    34359738336  32.0 GB =  16 dt x 1073741823 count x   2 peers x   1.0 inflation
Peer  : payload    34359738336  32.0 GB =  16 dt x 1073741823 count x   2 peers x   1.0 inflation
Total : payload    34359738336  32.0 GB =  32.0 GB root +  32.0 GB x   0 local peers
---------------------
Results from MPI_Iallreduce(double _Complex x 1073741823 = 17179869168 or 16.0 GB):
Rank  2: PASSED
Rank  0: PASSED
Rank  3: PASSED
Rank  1: PASSED
[smpici@host01 BigCount]$

===================== Example of a data integrity (DI) issue =====================

End-to-end data integrity checks are used to detect data corruption. If any DI issue is observed, it is treated as critical (a high-priority, high-severity defect).
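The BigCount tests work in this spirit: each rank fills its buffer with a known pattern, runs the collective, and then verifies every received element, counting the slots that do not match. The following is a simplified, hypothetical sketch of that kind of check (the real tests use MPI_Allgatherv with per-rank counts; a uniform MPI_Allgather keeps the sketch short):

/* di_check_sketch.c - simplified data-integrity check around MPI_Allgather */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int count = 1 << 20;                               /* elements per rank */
    int *sendbuf = malloc(count * sizeof(int));
    int *recvbuf = malloc((size_t)count * size * sizeof(int));

    for (int i = 0; i < count; i++)
        sendbuf[i] = rank;                                    /* known pattern: sender's rank */

    MPI_Allgather(sendbuf, count, MPI_INT, recvbuf, count, MPI_INT, MPI_COMM_WORLD);

    /* Verify every slot against the expected pattern and count mismatches. */
    long wrong = 0;
    for (int r = 0; r < size; r++)
        for (int i = 0; i < count; i++)
            if (recvbuf[(size_t)r * count + i] != r)
                wrong++;

    if (wrong == 0)
        printf("Rank %2d: PASSED\n", rank);
    else
        printf("Rank %2d: ERROR: DI in %ld of %ld slots\n",
               rank, wrong, (long)count * size);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}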

DI issue with HCOLL: let's look at an example.

$MY_HOME/openmpi-4.1.1/install/bin/mpirun --np 4 --npernode 1 --host host01,host02,host03,host04 -x LD_LIBRARY_PATH -x BIGCOUNT_MEMORY_PERCENT=6 -x BIGCOUNT_MEMORY_DIFF=10 -x HCOLL_RCACHE=^ucs  --mca coll_hcoll_enable 1 --mca coll_hcoll_priority 98 test_allgatherv_uniform_count 


Results from MPI_Allgatherv(double _Complex x 2147483644 = 34359738304 or 32.0 GB): Mode: PACKED MPI_IN_PLACE
Rank  2: ERROR: DI in      805306368 of     2147483644 slots (  37.5 % wrong)
Rank  0: ERROR: DI in      805306368 of     2147483644 slots (  37.5 % wrong)
Rank  3: ERROR: DI in      805306368 of     2147483644 slots (  37.5 % wrong)
Rank  1: ERROR: DI in      805306368 of     2147483644 slots (  37.5 % wrong)


---------------Let's run the same testcase without HCOLL-------------------------------------------


$MY_HOME/openmpi-4.1.1/install/bin/mpirun --np 4 --npernode 1 --host host01,host02,host03,host04 -x LD_LIBRARY_PATH -x BIGCOUNT_MEMORY_PERCENT=6 -x BIGCOUNT_MEMORY_DIFF=10 -x HCOLL_RCACHE=^ucs  --mca coll_hcoll_enable 0 --mca coll_hcoll_priority 0 test_allgatherv_uniform_count   

Results from MPI_Allgatherv(double _Complex x 2147483644 = 34359738304 or 32.0 GB): Mode: PACKED MPI_IN_PLACE
Rank  0: PASSED
Rank  2: PASSED
Rank  3: PASSED
Rank  1: PASSED

Results from MPI_Iallgatherv(double _Complex x 2147483644 = 34359738304 or 32.0 GB): Mode: PACKED MPI_IN_PLACE
Rank  3: PASSED
Rank  2: PASSED
Rank  0: PASSED
Rank  1: PASSED

=========================

How to enable and disable HCOLL to isolate a Mellanox defect, and use --mca coll_base_verbose 10 to get more debug information

CASE 1: Enable HCOLL  and run bigcount with allreduce

[smpici@myhostn01 collective-big-count]$  /nfs_smpi_ci/sachin/ompi_4-0.x/openmpi-4.0.7/install/bin/mpirun --timeout 500 --np 4 --npernode 1 -host myhost01:1,myhost02:1,myhost03:1,myhost04:1 -mca coll_hcoll_np 0 -mca coll_hcoll_enable 1 --mca coll_hcoll_priority 98 -x BIGCOUNT_MEMORY_PERCENT=6 -x BIGCOUNT_MEMORY_DIFF=10 -x BIGCOUNT_ENABLE_NONBLOCKING="0" -x HCOLL_RCACHE=^ucs /nfs_smpi_ci/sachin/bigmpi_ompi/ibm-tests/tests/big-mpi/BigCountUpstream/ompi-tests-public/collective-big-count/test_allreduce_uniform_count

--------------------------------------------------------------------------

Total Memory Avail.   :  567 GB
Percent memory to use :    6 %
Tolerate diff.        :   10 GB
Max memory to use     :   34 GB
----------------------:-----------------------------------------
INT_MAX               :           2147483647
UINT_MAX              :           4294967295
SIZE_MAX              : 18446744073709551615
----------------------:-----------------------------------------
                      : Count x Datatype size      = Total Bytes
TEST_UNIFORM_COUNT    :           2147483647
V_SIZE_DOUBLE_COMPLEX :           2147483647 x  16 =    32.0 GB
V_SIZE_DOUBLE         :           2147483647 x   8 =    16.0 GB
V_SIZE_FLOAT_COMPLEX  :           2147483647 x   8 =    16.0 GB
V_SIZE_FLOAT          :           2147483647 x   4 =     8.0 GB
V_SIZE_INT            :           2147483647 x   4 =     8.0 GB
----------------------:-----------------------------------------
---------------------
Results from MPI_Allreduce(int x 2147483647 = 8589934588 or 8.0 GB):
Rank  2: PASSED
Rank  3: PASSED
Rank  0: PASSED
Rank  1: PASSED
--------------------- Adjust count to fit in memory: 2147483647 x  50.0% = 1073741823
Root  : payload    34359738336  32.0 GB =  16 dt x 1073741823 count x   2 peers x   1.0 inflation
Peer  : payload    34359738336  32.0 GB =  16 dt x 1073741823 count x   2 peers x   1.0 inflation
Total : payload    34359738336  32.0 GB =  32.0 GB root +  32.0 GB x   0 local peers
---------------------
Results from MPI_Allreduce(double _Complex x 1073741823 = 17179869168 or 16.0 GB):
--------------------------------------------------------------------------
The user-provided time limit for job execution has been reached:

  Timeout: 500 seconds

The job will now be aborted.  Please check your code and/or
adjust/remove the job execution time limit (as specified by --timeout
command line option or MPIEXEC_TIMEOUT environment variable).
--------------------------------------------------------------------------
[smpici@myhostn01 collective-big-count]$

CASE 2: Disable HCOLL  and run bigcount with allreduce

[user1@myhostn01 collective-big-count]$ /nfs_smpi_ci/sachin/ompi_4-0.x/openmpi-4.0.7/install/bin/mpirun --np 4 --npernode 1 -host myhost01:1,myhost02:1,myhost03:1,myhost04:1 --mca coll_base_verbose 10 -mca coll_hcoll_np 0 -mca coll_hcoll_enable 0 --mca coll_hcoll_priority 0 -x BIGCOUNT_MEMORY_PERCENT=6 -x BIGCOUNT_MEMORY_DIFF=10 -x BIGCOUNT_ENABLE_NONBLOCKING="0" -x HCOLL_RCACHE=^ucs /nfs_smpi_ci/sachin/bigmpi_ompi/ibm-tests/tests/big-mpi/BigCountUpstream/ompi-tests-public/collective-big-count/test_allreduce_uniform_count

[myhostn01:1984671] coll:base:comm_select: component disqualified: self (priority -1 < 0)
[myhostn01:1984671] coll:sm:comm_query (0/MPI_COMM_WORLD): intercomm, comm is too small, or not all peers local; disqualifying myself
[myhostn01:1984671] coll:base:comm_select: component not available: sm
[myhostn01:1984671] coll:base:comm_select: component disqualified: sm (priority -1 < 0)
[myhostn01:1984671] coll:base:comm_select: component not available: sync
[myhostn01:1984671] coll:base:comm_select: component disqualified: sync (priority -1 < 0)
[myhostn01:1984671] coll:base:comm_select: component available: tuned, priority: 30
[myhostn01:1984671] coll:base:comm_select: component not available: hcoll

----------------------:-----------------------------------------
Total Memory Avail.   :  567 GB
Percent memory to use :    6 %
Tolerate diff.        :   10 GB
Max memory to use     :   34 GB
----------------------:-----------------------------------------
INT_MAX               :           2147483647
UINT_MAX              :           4294967295
SIZE_MAX              : 18446744073709551615
----------------------:-----------------------------------------
                      : Count x Datatype size      = Total Bytes
TEST_UNIFORM_COUNT    :           2147483647
V_SIZE_DOUBLE_COMPLEX :           2147483647 x  16 =    32.0 GB
V_SIZE_DOUBLE         :           2147483647 x   8 =    16.0 GB
V_SIZE_FLOAT_COMPLEX  :           2147483647 x   8 =    16.0 GB
V_SIZE_FLOAT          :           2147483647 x   4 =     8.0 GB
V_SIZE_INT            :           2147483647 x   4 =     8.0 GB
----------------------:-----------------------------------------
---------------------
Results from MPI_Allreduce(int x 2147483647 = 8589934588 or 8.0 GB):
Rank  2: PASSED
Rank  0: PASSED
Rank  3: PASSED
Rank  1: PASSED
--------------------- Adjust count to fit in memory: 2147483647 x  50.0% = 1073741823
Root  : payload    34359738336  32.0 GB =  16 dt x 1073741823 count x   2 peers x   1.0 inflation
Peer  : payload    34359738336  32.0 GB =  16 dt x 1073741823 count x   2 peers x   1.0 inflation
Total : payload    34359738336  32.0 GB =  32.0 GB root +  32.0 GB x   0 local peers
---------------------
Results from MPI_Allreduce(double _Complex x 1073741823 = 17179869168 or 16.0 GB):
Rank  0: PASSED
Rank  3: PASSED
Rank  2: PASSED
Rank  1: PASSED
[user1@myhostn01 collective-big-count]$

=================================================================

This post briefly covered features and general recommendations for optimal collective communication performance. Real application performance depends on your application characteristics, runtime configuration, transport protocols, processes-per-node (ppn) configuration, and so on.
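If you want to quantify the effect of a knob such as HCOLL on your own cluster, a simple timing loop around the collective of interest is usually enough for a first A/B comparison. The sketch below is illustrative only (message size, iteration count, and file name are arbitrary); run it once with -mca coll_hcoll_enable 1 and once with 0, then compare the reported averages.

/* time_allreduce.c - rough timing of MPI_Allreduce for A/B comparisons */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int count = 1 << 20;               /* message size in ints (4 MB) */
    const int iters = 100;
    static int in[1 << 20], out[1 << 20];    /* zero-initialized static buffers */

    MPI_Barrier(MPI_COMM_WORLD);              /* line everyone up before timing */
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++)
        MPI_Allreduce(in, out, count, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg MPI_Allreduce time: %.3f ms\n", (t1 - t0) * 1000.0 / iters);

    MPI_Finalize();
    return 0;
}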


Reference:
http://mug.mvapich.cse.ohio-state.edu/static/media/mug/presentations/18/bureddy-mug-18.pdf
https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/hpc/setup-mpi



Friday, January 28, 2022

HPC Clusters in a Multi-Cloud Environment

High performance computing (HPC) is the ability to process data and perform complex calculations at high speeds. One of the best-known types of HPC solutions is the supercomputer. A supercomputer contains thousands of compute nodes that work together to complete one or more tasks; this is called parallel processing. HPC solutions have three main components: compute, network, and storage. To build a high performance computing architecture, compute servers are networked together into a cluster. Software programs and algorithms are run simultaneously on the servers in the cluster, and the cluster is networked to the data storage to capture the output. Together, these components operate seamlessly to complete a diverse set of tasks.

To operate at maximum performance, each component must keep pace with the others. For example, the storage component must be able to feed and ingest data to and from the compute servers as quickly as it is processed. Likewise, the networking components must be able to support the high-speed transportation of data between compute servers and the data storage. If one component cannot keep up with the rest, the performance of the entire HPC infrastructure suffers.

Containers give HPC the portability that hybrid cloud demands. Containers are ready-to-execute packages of software. Container technology provides hardware abstraction, wherein the container is not tightly coupled with the server. Abstraction between the hardware and software stacks provides ease of access, ease of use, and the agility that bare metal environments lack.


Software containers and Kubernetes are important tools for building, deploying, running, and managing modern enterprise applications at scale and for delivering enterprise software faster and more reliably to the end user while using resources more efficiently and reducing costs. High performance computing (HPC) is moving closer to the enterprise and can therefore benefit from an HPC container and Kubernetes ecosystem, with new requirements to quickly allocate and deallocate computational resources to HPC workloads so that compute capacity no longer has to be planned in advance. The HPC community is picking up the concept and applying it to batch jobs and interactive applications.

In a multi-cloud environment, an enterprise utilizes multiple public cloud services, most often from different cloud providers. For example, an organization might host its web front-end application on AWS and host its Exchange servers on Microsoft Azure. Since not all cloud providers are created equal, organizations adopt a multi-cloud strategy to deliver best-of-breed IT services, to prevent lock-in to a single cloud provider, or to take advantage of cloud arbitrage and choose providers for specific services based on which provider offers the lowest price at the time. Although it is similar to a hybrid cloud, multi-cloud specifically indicates more than one public cloud provider and need not include a private cloud component at all. Enterprises adopt a multi-cloud strategy so as not to keep all their eggs in a single basket, for geographic or regulatory governance demands, for business continuity, or to take advantage of features specific to a particular provider.


Multi-cloud is the use of multiple cloud computing and storage services in a single network architecture. This refers to the distribution of cloud assets, software, applications, and more across several cloud environments. With a typical multi-cloud architecture utilizing two or more public clouds as well as private clouds, a multi-cloud environment aims to eliminate the reliance on any single cloud provider or instance.

Multi-cloud is the use of two or more cloud computing services from any number of different cloud vendors. A multi-cloud environment could be all-private, all-public or a combination of both. Companies use multi-cloud environments to distribute computing resources and minimize the risk of downtime and data loss. They can also increase the computing power and storage available to a business. Innovations in the cloud in recent years have resulted in a move from single-user private clouds to multi-tenant public clouds and hybrid clouds — a heterogeneous environment that leverages different infrastructure environments like the private and public cloud.

A multi-cloud platform combines the best services that each platform offers. This allows companies to customize an infrastructure that is specific to their business goals. A multi-cloud architecture also lowers risk: if one web service host fails, a business can continue to operate on the other platforms in its multi-cloud environment rather than having stored all data in one place. Examples of public cloud providers include AWS, Microsoft Azure, and Google Cloud.

Hybrid cloud: A hybrid cloud architecture is a mix of on-premises, private, and public cloud services with orchestration between the cloud platforms. Hybrid cloud management involves unique entities that are managed as one across all environments. Hybrid cloud architecture allows an enterprise to move data and applications between private and public environments based on business and compliance requirements. For example, customer data can live in a private environment, but heavy processing can be sent to the public cloud without customer data ever leaving the private environment. Hybrid cloud computing allows instant transfer of information between environments, letting enterprises experience the benefits of both.


Hybrid cloud architecture works well for the following industries:

• Finance: Financial firms are able to significantly reduce their space requirements in a hybrid cloud architecture when trade orders are placed on a private cloud and trade analytics live on a public cloud.

• Healthcare: When hospitals send patient data to insurance providers, hybrid cloud computing ensures HIPAA compliance.

• Legal: Hybrid cloud security allows encrypted data to live off-site in a public cloud while connected to a law firm's private cloud. This protects original documents from the threat of theft or loss by natural disaster.

• Retail: Hybrid cloud computing helps companies process resource-intensive sales data and analytics.

The hybrid cloud strategy can be applied to move workloads dynamically to the most appropriate IT environment based on cost, performance, and security: utilize on-premises resources for existing workloads, use public or hosted clouds for new workloads, and run internal business systems and data on premises while customer-facing systems run on infrastructure as a service (IaaS), public, or hosted clouds.

Reference:

https://www.hpcwire.com/2019/09/19/kubernetes-containers-and-hpc
https://www.hpcwire.com/2020/03/19/kubernetes-and-hpc-applications-in-hybrid-cloud-environments-part-ii
https://www.hpcwire.com/2021/09/02/kubernetes-based-hpc-clusters-in-azure-and-google-cloud-multi-cloud-environment
https://www-stage.avinetworks.com/
https://www.vmware.com/topics/glossary/content/hybrid-cloud-vs-multi-cloud