Friday, January 10, 2020

What's next in computing - Quantum logic - IBM Q

Many of the world’s biggest mysteries and potentially greatest opportunities remain beyond the grasp of classical computers. To continue the pace of progress, we need to augment the classical approach with a new platform, one that follows its own set of rules. That is quantum computing. The importance of quantum computing is simultaneously understated and widely over-hyped. Although it won’t replace conventional computers, quantum innovation represents a new computing paradigm. As quantum computing technology advances, clients are becoming increasingly curious about how it might impact their business. The intersection of industry and technology will be critical for clients to identify potential applications of quantum computing.

The IBM Q Network is a collaboration of Fortune 500 companies, academic institutions, and research labs working together to advance quantum computing. IBM works with the sponsors, champions and stakeholders who will be the influencers driving initial conversations. Quantum sponsors are frequently found in a CIO or innovation group that focuses on new and emerging technology. They will be interested in discussing specific industry use cases where there is high potential to leverage quantum for future business advantage. Daimler, the parent of Mercedes-Benz, is working with International Business Machines Corp.’s quantum-computing division with the goal of deploying the next-generation computing power in certain use cases.

For certain classes of problems, quantum computers promise speedups far beyond what classical supercomputers can deliver, though they are not universally faster. There are three basic types of quantum computers: quantum annealers, analog quantum simulators, and universal quantum computers. Quantum computers operate in a very different way from classical computers. They take advantage of the unusual phenomena of quantum mechanics, for example where subatomic particles can appear to exist in more than one state at any time.

In September 2019, IBM became the first company to operate a fleet of quantum computers, housed at the IBM Quantum Computation Center in New York. IBM's 14th quantum computer is its most powerful so far, a model with 53 of the qubits that form the fundamental data-processing element at the heart of the system. IBM is competing with companies like Google, Microsoft, Honeywell, Rigetti Computing, IonQ, Intel and NTT in the race to make useful quantum computers. Another company, D-Wave, uses a different approach called annealing that already has some customers, while AT&T and others are pursuing the even more distant realm of quantum networking. The Q System One is the first quantum system to consolidate thousands of components into a glass-enclosed, air-tight environment built specifically for business use.

Multiple IBM Q systems are housed at the IBM Quantum Computation Center in New York
While traditional computers store information as either 0s or 1s, quantum computers use quantum bits, or qubits, which can represent and store information as both 0 and 1 simultaneously. That means quantum computers have the potential to sort through a vast number of possible solutions in a fraction of a second. Qubits are kept at an extremely cold temperature, around 1/100th the temperature of outer space. Temperature here is measured in kelvin, where zero kelvin is "absolute zero": IBM keeps its qubits at 0.015 kelvin, while the brisk air on a freezing cold winter day is at about 273 kelvin. Qubits are kept this cold to prolong their fragile quantum state: the longer qubits can be kept in a quantum state, the more operations can be performed on them while taking advantage of superposition and entanglement. So what's special about a quantum computer with 53 qubits? The number stems from the hexagonally derived lattice of qubits, which is advantageous when it comes to minimizing unwanted interactions. Quantum computing remains a highly experimental field, limited by the difficult physics of the ultra-small and by the need to keep the machines refrigerated to within a hair's breadth of absolute zero so that outside disturbances don't ruin the calculations.
A close-up view of the IBM Q quantum computer. The processor is in the silver-colored cylinder.
Rigetti is racing against similar projects at Google, Microsoft, IBM, and Intel. Every Bay Area startup will tell you it is doing something momentously difficult, but Rigetti is biting off more than most – it's working on quantum computing. All venture-backed startups face the challenge of building a business, but this one has to do it by making progress on one of tech's thorniest problems.
Within the next five years, Google says it will produce a viable quantum computer; that's the stake the company has planted, declaring that "the field of quantum computing will soon achieve a historic milestone". They call this milestone "quantum supremacy". The world's biggest tech companies are already jockeying for their own form of commercial supremacy as they anticipate a quantum breakthrough. Both Google and IBM now say they will offer access to true quantum computing over the internet (call it quantum cloud computing). After years spent developing quantum technologies, IBM is also trying to prevent Google, a relative newcomer to the field, from stealing its quantum mindshare. And it's still unclear whether the claims made by these two companies will hold up. The future of quantum computing, like the quantum state itself, remains uncertain. Rigetti is now entering the fray: the company launched its own cloud platform, called Forest, where developers can write code for simulated quantum computers, and some partners get access to the startup's existing quantum hardware.

Quantum can work in unison with current computing infrastructure to solve complex problems that were previously thought impractical or impossible. This can be paradigm-shifting. For example, the difficulty of factoring large numbers into their primes is the basis of modern cryptography. For the size of numbers used in modern public-private key encryption, this calculation would take trillions of years on a conventional computer; on a future quantum computer, it would take only minutes.

As getting more power out of classical computers for a fixed amount of space, time, and resources becomes more challenging, completely new approaches like quantum computing become ever more interesting as we aim to tackle more complicated problems. Quantum computing could be a way to revive the rate of progress that we have come to depend on in conventional computers, at least in some areas. "If you can successfully apply it to problems it could give you an exponential increase in computing power that you can't get" through traditional chip designs. That's because IBM sees a future beyond traditional computing. For decades, computing power has doubled roughly every two years, a pattern known as Moore's Law. Those advances have relied on making transistors ever smaller, thereby enabling each computer chip to have more calculation power, and IBM has repeatedly invented new ways to shrink transistors. IBM grew to its current size by leveraging the continued scaling of conventional computing, but that approach is finite, and its end is in sight. "Underlying Moore's Law is scaling, the ability to pack more and more transistors into a smaller and smaller space. At some point ... you're going to reach atomic dimensions and that's the end of that approach." The specter of a world in which silicon chips are no longer improving exponentially is part of what drives IBM's quantum effort and its Q Network. We don't lack computing power today, but you can see Moore's Law going into saturation.
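The scale of that contrast can be sketched from the published complexity bounds (my notation, not the article's): factoring an n-bit integer with the best known classical algorithm, the general number field sieve, takes sub-exponential time, while Shor's quantum algorithm takes polynomial time, roughly

\[
T_{\mathrm{GNFS}}(n) \;=\; \exp\!\Big(O\big(n^{1/3}(\log n)^{2/3}\big)\Big)
\qquad \text{vs.} \qquad
T_{\mathrm{Shor}}(n) \;=\; O\big(n^{3}\big).
\]

For n = 2048, a common RSA key size, the sub-exponential term dwarfs the polynomial one, which is where the "trillions of years versus minutes" contrast comes from.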

A quantum system in a definite state can still behave randomly. This is a counter-intuitive idea of quantum physics. Quantum computers exist now because we have recently figured out how to control what has been in the world this whole time: the quantum phenomena of superposition, entanglement, and interference. These new ingredients in computing expand what is possible to design into algorithms. The word qubit has two meanings, one physical and one conceptual. Physically, it refers to the individual devices that are used to carry out calculations in quantum computers. Conceptually, a qubit is like a bit in a regular computer. It’s the basic unit of data in a quantum circuit.

Unlike classical bits, which can only be 0 or 1, qubits can exist in a superposition of these states

A superposition is a weighted sum or difference of two or more states; for example, the state of the air when two or more musical tones sound at once. A "weighted sum or difference" means that some parts of the superposition are more or less prominently represented, such as when a violin is played more loudly than the other instruments in a string quartet. Ordinary, or “classical,” superpositions commonly occur in macroscopic phenomena involving waves. Quantum theory predicts that a computer with n qubits can exist in a superposition of all 2^n of its distinct logical states, 000...0 through 111...1. This is exponentially more than a classical superposition: playing n musical tones at once can only produce a superposition of n states. A set of n coins, each of which might be heads or tails, can be described as a probabilistic mixture of 2^n states, but it actually is in only one of them; we just don't know which. However, quantum computers are capable of holding their data in superpositions of 2^n distinct logical states. For this reason, quantum superposition is more powerful than classical probabilism, and quantum computers capable of holding their data in superposition can solve some problems exponentially faster than any known classical algorithm. A more technical difference is that while probabilities must be positive (or zero), the weights in a superposition can be positive, negative, or even complex numbers.
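To make the 2^n claim concrete, the state of an n-qubit register is conventionally written as a weighted sum over all 2^n logical basis states (standard quantum-computing notation, not taken from the article):

\[
|\psi\rangle \;=\; \sum_{x \in \{0,1\}^n} a_x\,|x\rangle,
\qquad \sum_{x} |a_x|^2 = 1 .
\]

For n = 3 this expands to a_{000}|000> + a_{001}|001> + ... + a_{111}|111>: eight complex amplitudes carried simultaneously, whereas three classical coins occupy exactly one of those eight states at a time.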
Entanglement is a property of most quantum superpositions and does not occur in classical superpositions; it is a core concept of quantum computing. In an entangled state, the whole system is in a definite state, even though the parts are not. Observing one of two entangled particles causes it to behave randomly, but tells the observer how the other particle would act if a similar observation were made on it. Because entanglement involves a correlation between individually random behaviors of the two particles, it cannot be used to send a message. Therefore, the term “instantaneous action at a distance,” sometimes used to describe entanglement, is a misnomer. There is no action (in the sense of something that can be used to exert a controllable influence or send a message), only correlation, which, though uncannily perfect, can only be detected afterward when the two observers compare notes. The ability of quantum computers to exist in entangled states is responsible for much of their extra computing power.

One important limiting factor: physical qubits are much more sensitive to noise than the transistors in regular circuits. The ability to hold a quantum state is called coherence. The longer the coherence time, the more operations researchers can perform in a quantum circuit before resetting it, and the more sophisticated the algorithms that can be run on it. To reduce errors, quantum computers need qubits that have long coherence times, and physicists need to be able to control quantum states more tightly, with simpler electrical or optical systems than are standard today. A quantum computer will need about 200 or so perfect qubits to perform chemical simulations that are impossible on classical computers. Because qubits are so prone to error, though, these systems are likely to require redundancy, with tens or perhaps hundreds of faulty qubits doing the work of one ideal qubit that gives the right answer. These so-far-theoretical ideal qubits are often called “logical qubits” or “error-corrected qubits”. So it doesn't make sense to increase the number of qubits before you improve your error rates.
A leap from bits to qubits: this two-letter change could mean entirely new horizons for healthcare. Quantum computing might bring supersonic drug design, in silico clinical trials with virtual humans simulated ‘live’, full-speed whole-genome sequencing and analytics, the movement of hospitals to the cloud, the achievement of predictive health, or the security of medical data via quantum uncertainty. Quantum computing could enable exponential speedups for certain classes of problems by exploiting superposition and entanglement in the manipulation of quantum bits (qubits). One example is in the artificial intelligence space: optimization algorithms that exploit superposition could speed up the optimization problems at the heart of machine learning, eventually leading to better and faster learning algorithms.

Quantum encryption, as its name suggests, relies on the quantum properties of photons, atoms, and other small units of matter to secure information. In this case, the physicists used a quantum property of photons known as polarization, which more or less describes the orientation of a photon. For the teleconference, they assigned photons with two different polarizations to represent 1’s and 0’s. In this way, a beam of light becomes a cryptographic key they could use to scramble a digital message. If implemented the way physicists first envisioned it back in the 1980s, quantum encryption would be unbreakable. The protocol is a bit complicated, but it essentially involves the sender transmitting photons to the recipient to form a key, and both parties sharing part of the key publicly. If someone had tried to intercept it, the recipient’s key would not match the sender’s key in a specific statistical way, set by rules in quantum mechanics; the sender would immediately know the key was compromised. Physicists also see quantum encryption as an important tool for when quantum computers finally become functional. These quantum computers, or more likely the ones to follow a few decades later, could bust the best encryption algorithms of today, but no computer could crack a properly quantum-encrypted message. Key words: properly encrypted.

When physicists started to actually build quantum networks, they couldn't achieve their vision of perfect quantum encryption. It turns out that sending photons thousands of miles across the world through free space, optical fiber, and relay stations, all without corrupting their polarization, is extremely technically challenging. Quantum signals die after about 100 miles of transmission through optical fiber, and no one knows how to amplify a signal yet. The best quantum memories today can only store a key for a matter of minutes before the information disappears. So Pan's group had to incorporate conventional telecom technology to propagate their quantum signals. At several points in their network, they had to convert quantum information (polarizations) into classical information (voltages and currents) and then back into quantum. This isn't ideal, because the absolute security of a quantum key relies on its quantum-ness: anytime the key gets converted into classical information, normal hacking rules apply.

Quantum computers working with classical systems have the potential to solve complex real-world problems such as simulating chemistry, modelling financial risk and optimizing supply chains. One of the areas where early adopters are applying the technology is chemistry simulation, for example to understand how materials behave and how chemicals interact. One particularly interesting problem is designing the chemical composition of more effective batteries, which could be used in the next generation of electric vehicles. Exxon Mobil plans to use quantum computing to better understand catalytic and molecular interactions that are too difficult to calculate with classical computers. Potential applications include more predictive environmental models and highly accurate quantum chemistry calculations to enable the discovery of new materials for more efficient carbon capture.

JP Morgan Chase is focusing on use cases for quantum computing in the financial industry, including trading strategies, portfolio optimization, asset pricing and risk analysis.

Accelerating drug discovery through quantum-computing molecule comparison: molecular comparison is an important process in early-phase drug design and discovery. Today, it takes pharmaceutical companies 10+ years and often billions of dollars to discover a new drug and bring it to market. Improving the front end of the process with quantum computing could dramatically cut costs and time to market, make it easier to repurpose pre-approved drugs for new applications, and empower computational chemists to make new discoveries faster, discoveries that could lead to cures for a range of diseases.
A key use case for quantum computers is the design of new materials and drugs

Revolutionizing the molecule comparison process: quantum computing has the potential to change the very definition of molecular comparison by enabling pharmaceutical and material science companies to develop methods to analyze larger-scale molecules. Today, companies can run hundreds of millions of comparisons on classical computers; however, they are limited to molecules up to the size that a classical computer can actually handle. As quantum computers become more readily available, it will be possible to compare much larger molecules, which opens the door to more pharmaceutical advancements and cures for a range of diseases.
Discovering new battery materials could "unlock a billion-dollar opportunity" for the automotive industry. This use case involves simulating the actual behavior of a battery with a quantum computer, which is currently not possible with existing computing power. Daimler joins other automotive companies experimenting with quantum computing's potential applications. Ford Motor Co. is researching how the technology could quickly optimize driving routes and improve the structure of batteries for electric vehicles. Volkswagen AG is developing a quantum-computing-based traffic-management system that could be offered as a commercial service; it is also interested in developing more advanced batteries. Today, battery development and testing is a physical process that requires experts to build prototypes first because there is no simulation software. A quantum computer could help Mercedes-Benz find new materials or combinations of materials that result in better electrochemical performance and longer battery life cycles. Some of those innovations could include organic batteries, which could be safer, more energy efficient and environmentally friendly.

Full-Scale Fault Tolerance

The third phase, full-scale fault tolerance, is still decades away. A universal fault-tolerant quantum computer is the grand challenge of quantum computing: a device that can properly perform universal quantum operations using unreliable components. Today's quantum computers are not fault-tolerant. Achieving full-scale fault tolerance will require makers of quantum technology to overcome additional technical constraints, including problems related to scale and stability. But once they arrive, we expect fault-tolerant quantum computers to affect a broad array of industries. They have the potential to vastly reduce trial and error and improve automation in the specialty-chemicals market, enable tail-event defensive trading and risk-driven high-frequency trading strategies in finance, and even promote in silico drug discovery, which has major implications for personalized medicine.

Now is the right time for business leaders to prepare for quantum. The conditions are in place to experiment with and expand this fundamentally new technology, and organizations that seek to be at the forefront of this transformational shift will seize competitive advantage. Rigetti, like Google, IBM, and Intel, preaches the idea that this advance will bring about a wild new phase of the cloud computing revolution: data centers stuffed with quantum processors will be rented out to companies, freeing them to design chemical processes and drugs more quickly, or to deploy powerful new forms of machine learning.

Over the last decade, banks and government institutions in multiple countries, including the US, China, and Switzerland, have dabbled in quantum encryption products, but Christensen suspects that the technology will remain niche for a while longer; because the technology is so new, the costs and benefits aren't clear yet. The IBM Q Network is working with 45 clients, including startups, academic institutions and Fortune 500 companies. Large enterprise clients are investing in the emerging technology now so they will be prepared when a commercial-grade quantum computer comes to market, capable of error correction and of solving large-scale problems. With all this promise, it's little surprise that the value-creation numbers get very big over time.

----------------- 
Reference:

https://www.ibm.com/thought-leadership/innovation_explanations/article/dario-gil-quantum-computing.html
Dario Gil, IBM Research : https://www.youtube.com/watch?v=yy6TV9Dntlw
https://www.youtube.com/watch?v=lypnkNm0B4A
https://fortune.com/longform/business-quantum-computing/
https://www.wired.com/2017/03/race-sell-true-quantum-computers-begins-really-exist/?mbid=BottomRelatedStories
https://www.wired.com/story/quantum-computing-factory-taking-on-google-ibm/?mbid=BottomRelatedStories
https://www.wired.com/story/why-this-intercontinental-quantum-encrypted-video-hangout-is-a-big-deal/?mbid=BottomRelatedStories
https://www.accenture.com/ro-en/success-biogen-quantum-computing-advance-drug-discovery

Thursday, December 26, 2019

Distributed computing with Message Passing Interface (MPI)

The rapidly increasing number of cores in modern microprocessors is pushing current high performance computing (HPC) systems into the exascale era. The hybrid nature of these systems, distributed memory across nodes and shared memory with non-uniform memory access within each node, poses a challenge. Message Passing Interface (MPI) is a standardized message-passing library interface specification: an abstract description of how messages can be exchanged between different processes. In other words, MPI is a portable message-passing library standard developed for distributed and parallel computing. It is a standard, not a single piece of software, so MPI has multiple implementations. It provides a standardized means of exchanging messages between multiple computers running a parallel program across distributed memory. MPI gives users the flexibility of calling a set of routines from C, C++, Fortran, C#, Java or Python. The advantages of MPI over older message-passing libraries are portability (because MPI has been implemented for almost every distributed memory architecture) and speed (because each implementation is in principle optimized for the hardware on which it runs). MPI is the dominant communications protocol used in high performance computing today for writing message-passing programs; the MPI-4.0 standard is under development.
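To make the programming model concrete, here is a minimal sketch of an MPI "hello world" in C. It assumes nothing beyond a standards-conforming MPI implementation; the file name and printed text are my own illustration.

---------------------------------------------------------------------
/* hello_mpi.c: every process reports its rank and the process count. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime     */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's identifier */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}
---------------------------------------------------------------------

Compile and launch it with the wrapper and launcher that every implementation ships, for example: mpicc hello_mpi.c -o hello_mpi && mpirun -np 4 ./hello_mpi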

MPI Implementations and their derivatives:

There are a number of groups working on MPI implementations. The two principal ones are Open MPI, an open-source implementation, and MPICH. MPICH serves as the foundation for the vast majority of MPI derivatives, including IBM MPI (for Blue Gene), Intel MPI, Cray MPI, Microsoft MPI, Myricom MPI, OSU MVAPICH/MVAPICH2, and many others; together, MPICH and its derivatives form the most widely used implementations of MPI in the world. On the other side, IBM Spectrum MPI and Mellanox HPC-X are based on Open MPI. Similarly, bullx MPI is built around Open MPI, enhanced by Bull with optimized collective communication.

Open MPI was formed by merging FT-MPI, LA-MPI, LAM/MPI, and PACX-MPI, and is found in many TOP500 supercomputers.

MPI offers a standard API and is "portable": you can use the same source code on all platforms without modification, and it is relatively trivial to switch an application between the different implementations of MPI. Most MPI implementations use sockets for TCP-based communication, and odds are good that any given MPI implementation will be better optimized and provide faster message passing than a home-grown application using sockets directly. In addition, should you ever get a chance to run your code on a cluster that has InfiniBand, the MPI layer will abstract away any of those code changes; this is not a trivial advantage, since coding an application to directly use OFED (or another InfiniBand Verbs implementation) is very difficult. Most MPI implementations also include small test apps that can be used to verify the correctness of the networking setup independently of your application, which is a major advantage when it comes time to debug. Finally, the MPI standard includes the "PMPI" profiling interface, which lets a tool intercept any MPI call; this interface also allows you to easily add checksums or other data verification to all messages. A sketch of a PMPI wrapper follows.
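As an illustration of the PMPI mechanism, here is a hedged sketch (my own example, not taken from the MPI standard text) that counts MPI_Send calls: defining our own MPI_Send shadows the library's version, and the real work is forwarded to the PMPI_Send entry point that every implementation provides. The MPI-3 const-qualified signature is assumed.

---------------------------------------------------------------------
/* send_profiler.c: link this into an MPI application to log sends.   */
#include <mpi.h>
#include <stdio.h>

static long send_count = 0;   /* illustrative counter, not part of MPI */

int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    send_count++;
    fprintf(stderr, "MPI_Send call #%ld: %d element(s) to rank %d\n",
            send_count, count, dest);
    /* forward to the real implementation */
    return PMPI_Send(buf, count, datatype, dest, tag, comm);
}
---------------------------------------------------------------------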

The standard Message Passing Interface (MPI) has two-sided communication (pt2pt) and collective communication models. In these communication models, both sender and receiver have to participate in data exchange operations explicitly, which requires synchronization between the processes. Communications can be either of two types (a minimal point-to-point sketch follows the list):
  •     Point-to-Point: Two processes in the same communicator are going to communicate.
  •     Collective: All the processes in a communicator are going to communicate together.
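Here is a minimal point-to-point sketch in C, assuming the program is launched with at least two processes; the value sent is my own arbitrary example.

---------------------------------------------------------------------
/* p2p.c: rank 0 sends one integer to rank 1, which blocks in
 * MPI_Recv until the message arrives; both sides participate.       */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);  /* to rank 1 */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                       /* from rank 0 */
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
---------------------------------------------------------------------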

One-sided communications are a newer type that allows communication to happen in a highly asynchronous way by defining windows of memory where every process can write to and/or read from. All of these revolve around the idea of Remote Memory Access (RMA). Traditional pt2pt or collective communications basically work in two steps: first the data is transferred from the origin process(es) to the destination(s); then the sending and receiving processes are synchronized in some way (be it blocking synchronization or by calling MPI_Wait). RMA allows us to decouple these two steps. One of the biggest implications of this is the possibility to define shared memory that will be used by many processes (cf. MPI_Win_allocate_shared). Although shared memory might seem out of the scope of MPI, which was initially made for distributed memory, it makes sense to include such functionality to support processes sharing the same NUMA nodes, for instance. All of these functionalities are grouped under the name of "one-sided communications", since they imply that only one process is needed to store or load information in a shared-memory buffer. A sketch of an RMA put follows.
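The sketch below shows the RMA style of the same exchange in C, again assuming at least two processes: rank 0 writes directly into rank 1's window with MPI_Put, and rank 1 never posts a receive. The fence calls delimit the access epoch.

---------------------------------------------------------------------
/* rma_put.c: one-sided write into another process's memory window.  */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, *win_buf;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* allocate one int per process and expose it in an RMA window    */
    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &win_buf, &win);
    *win_buf = -1;

    MPI_Win_fence(0, win);               /* open the access epoch      */
    if (rank == 0) {
        int value = 42;
        /* write 'value' into rank 1's window at displacement 0        */
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);               /* close epoch; puts visible  */

    if (rank == 1)
        printf("rank 1 window now holds %d\n", *win_buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
---------------------------------------------------------------------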
 



In two-sided communication, memory is private to each process. When the sender calls the MPI_Send operation and the receiver calls the MPI_Recv operation, data in the sender memory is copied to a buffer then sent over the network, where it is copied to the receiver memory. One drawback of this approach is that the sender has to wait for the receiver to be ready to receive the data before it can send the data. This may cause a delay in sending data as shown here.


A simplified diagram of MPI two-sided send/receive: the sender calls MPI_Send but has to wait until the receiver calls MPI_Recv before data can be sent.

To overcome this drawback, the MPI-2 standard introduced Remote Memory Access (RMA), also called one-sided communication because it requires only one process to transfer data. One-sided communication decouples data transfer from system synchronization. The MPI 3.0 standard revised and extended the one-sided communication interface, adding new functionality to improve the performance of MPI-2 RMA.

Collective Data Movement:

MPI_BCAST, MPI_GATHER and MPI_SCATTER are collective data movement routines in which all processes interact with a distinguished root process. For example, consider the communication performed in a finite difference program, assuming three processes. Each column represents a processor, and each illustrated figure shows data movement in a single phase. The five phases illustrated are (1) broadcast, (2) scatter, (3) nearest-neighbour exchange, (4) reduction, and (5) gather; a code sketch of these phases follows the list below. The program uses:

  1. MPI_BCAST to broadcast the problem size parameter (size) from process 0 to all np processes;
  2. MPI_SCATTER to distribute an input array (work) from process 0 to other processes, so that each process receives size/np elements; 
  3. MPI_SEND and MPI_RECV for exchange of data (a single floating-point number) with neighbours;
  4. MPI_ALLREDUCE to determine the maximum of a set of localerr values computed at the different processes and to distribute this maximum value to each process; and
  5. MPI_GATHER to accumulate an output array at process 0. 
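A hedged C sketch of these five phases is below; the names fd_step, work, local and localerr are illustrative (taken from the description above, not from a real program), and np is assumed to divide size evenly.

---------------------------------------------------------------------
/* Five collective phases of a finite-difference step.               */
#include <mpi.h>
#include <stdlib.h>

void fd_step(int rank, int np)
{
    int size = 0;                   /* problem size, set on process 0  */
    double *work = NULL, *local, localerr = 0.0, maxerr;

    if (rank == 0) {
        size = 1024;
        work = malloc(size * sizeof(double));
    }

    /* (1) broadcast the problem size from process 0 to all np ranks  */
    MPI_Bcast(&size, 1, MPI_INT, 0, MPI_COMM_WORLD);

    local = malloc((size / np) * sizeof(double));

    /* (2) scatter the input array: size/np elements per process      */
    MPI_Scatter(work, size / np, MPI_DOUBLE,
                local, size / np, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* (3) nearest-neighbour exchange would go here, using
     *     MPI_Send/MPI_Recv with ranks rank - 1 and rank + 1         */

    /* (4) reduce the local errors to a global maximum on every rank  */
    MPI_Allreduce(&localerr, &maxerr, 1, MPI_DOUBLE, MPI_MAX,
                  MPI_COMM_WORLD);

    /* (5) gather the distributed result back onto process 0          */
    MPI_Gather(local, size / np, MPI_DOUBLE,
               work, size / np, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    free(local);
    free(work);
}
---------------------------------------------------------------------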

Many common MPI benchmarks are based primarily on point-to-point communication, providing the best opportunities for analyzing the performance impact of the MCA (Open MPI's Modular Component Architecture) on real applications. Open MPI implements the MPI point-to-point functions on top of the Point-to-point Management Layer (PML) and Point-to-point Transport Layer (PTL) frameworks. The PML fragments messages, schedules fragments across PTLs, and handles incoming message matching; the PTL provides an interface between the PML and the underlying network devices.




where:
 Point-to-point Management Layer (PML)
 Point-to-point Transport Layer (PTL)
 Byte Transfer Layer (BTL)


Open MPI is a large project containing many different sub-systems and a relatively large code base. It has three sections of code, listed here:
  •     OMPI: The MPI API and supporting logic
  •     ORTE: The Open Run-Time Environment (support for different back-end run-time systems)
  •     OPAL: The Open Portable Access Layer (utility and "glue" code used by OMPI and ORTE)
There are strict abstraction barriers in the code between these sections. That is, they are compiled into three separate libraries: libmpi, liborte, and libopal with a strict dependency order: OMPI depends on ORTE and OPAL, and ORTE depends on OPAL.

The message passing interface (MPI) is one of the most popular parallel programming models for distributed memory systems. As the number of cores per node has increased, programmers have increasingly combined MPI with shared memory parallel programming interfaces, such as the OpenMP programming model. This hybrid of distributed-memory and shared-memory parallel programming idioms has aided programmers in addressing the concerns of performing efficient internode communication while effectively utilizing advancements in node-level architectures, including multicore and many-core processor architectures.

Version 3.0 of the MPI standard adds a new MPI interprocess shared memory extension (MPI SHM), now supported by many MPI distributions. The MPI SHM extension enables programmers to create regions of shared memory that are directly accessible by MPI processes within the same shared memory domain. In contrast with hybrid approaches, MPI SHM offers an incremental approach to managing memory resources within a node, where data structures can be individually moved into shared segments to reduce the memory footprint and improve the communication efficiency of MPI programs. Halo exchange is a prototypical neighborhood exchange communication pattern. In such patterns, the adjacency of communication partners often results in communication with processes in the same node, making them good candidates for acceleration through MPI SHM. By applying MPI SHM to this common communication pattern, direct data sharing can be used instead of communication, resulting in significant performance gains. A sketch of the MPI SHM calls involved is shown below.
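This is a hedged sketch assuming MPI-3: processes sharing a node are grouped with MPI_Comm_split_type, each contributes a segment to a shared window, and a neighbour's segment is read through a direct pointer rather than via a message (the file name and printed text are my own).

---------------------------------------------------------------------
/* shm_neighbour.c: direct load from a node-local neighbour's memory. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Comm node_comm;
    MPI_Win win;
    int node_rank, *my_seg, *left_seg;
    MPI_Aint seg_size;
    int disp_unit;

    MPI_Init(&argc, &argv);

    /* group the processes that share a memory domain (i.e., a node)  */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);
    MPI_Comm_rank(node_comm, &node_rank);

    /* each process contributes one int to a node-wide shared window  */
    MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                            node_comm, &my_seg, &win);
    *my_seg = node_rank;
    MPI_Win_fence(0, win);      /* make the stores visible node-wide   */

    /* read the left neighbour's segment directly: data sharing
     * instead of communication, as in a halo exchange                 */
    if (node_rank > 0) {
        MPI_Win_shared_query(win, node_rank - 1, &seg_size, &disp_unit,
                             &left_seg);
        printf("rank %d sees neighbour value %d\n", node_rank, *left_seg);
    }

    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}
---------------------------------------------------------------------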


Open MPI includes an implementation of OpenSHMEM. OpenSHMEM is a PGAS (partitioned global address space) API for single-sided asynchronous scalable communications in HPC applications. An OpenSHMEM program is SPMD (single program, multiple data) in style. The SHMEM processes, called processing elements or PEs, all start at the same time and they all run the same program. Usually the PEs perform computation on their own subdomains of the larger problem and periodically communicate with other PEs to exchange information on which the next computation phase depends. OpenSHMEM is particularly advantageous for applications at extreme scales with many small put/get operations and/or irregular communication patterns across compute nodes, since it offloads communication operations to the hardware whenever possible. One-sided operations are non-blocking and asynchronous, allowing the program to continue its execution along with the data transfer.
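A minimal OpenSHMEM sketch in C follows, assuming at least two PEs; the symmetric variable name is my own illustration. PE 0 deposits a value in PE 1's copy of the variable with a one-sided put, and the barrier guarantees completion before PE 1 reads it.

---------------------------------------------------------------------
/* shmem_put.c: one-sided put between processing elements (PEs).     */
#include <shmem.h>
#include <stdio.h>

int dest = -1;   /* symmetric: same address on every PE               */

int main(void)
{
    shmem_init();
    int me   = shmem_my_pe();    /* this PE's identifier              */
    int npes = shmem_n_pes();    /* total number of PEs               */

    if (me == 0 && npes > 1)
        shmem_int_put(&dest, &me, 1, 1);  /* write 0 into PE 1's dest */

    shmem_barrier_all();         /* ensure the put has completed      */

    if (me == 1)
        printf("PE 1: dest = %d\n", dest);

    shmem_finalize();
    return 0;
}
---------------------------------------------------------------------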

IBM Spectrum® MPI is an implementation based on Open MPI; its basic architecture and functionality are similar. IBM Spectrum MPI uses the same basic code structure as Open MPI and is made up of the OMPI, ORTE and OPAL sections discussed above. It is a high-performance, production-quality implementation of the Message Passing Interface (MPI) that accelerates application performance in distributed computing environments, providing a familiar, portable interface based on the open-source MPI. It goes beyond Open MPI and adds unique features of its own, such as advanced CPU affinity features, dynamic selection of interface libraries, superior workload manager integrations and better performance. IBM Spectrum MPI supports a broad range of industry-standard platforms, interconnects and operating systems, helping to ensure that parallel applications can run almost anywhere. IBM Spectrum MPI Version 10.2 delivers an improved, RDMA-capable Parallel Active Messaging Interface (PAMI) using Mellanox OFED on both POWER8® and POWER9™ systems in Little Endian mode. It also offers an improved collective MPI library that supports the seamless use of GPU memory buffers for the application developer. The library provides advanced logic to select the fastest of many implementations for each MPI collective operation, as shown below.


As high-performance computing (HPC) bends to the needs of "big data" applications, speed remains essential. But it's not only a question of how quickly one can compute problems; it's also a question of how quickly one can program the complex applications that do so. High performance computing is no longer limited to those who own supercomputers. HPC's democratization has been driven particularly by cloud computing, which has given scientists access to supercomputing-like features at the cost of a few dollars per hour.

Interest in HPC in the cloud has been growing over the past few years. The cloud offers applications a range of benefits, including elasticity, small startup and maintenance costs, and economies of scale. Yet, compared to traditional HPC systems such as supercomputers, some of the cloud's primary benefits for HPC arise from its virtualization flexibility: in contrast to supercomputers' strictly preserved system software, the cloud lets scientists build their own virtual machines and configure them to suit needs and preferences. In general, the cloud is still considered an addition to traditional supercomputers, a bursting solution for cases in which internal resources are overused, especially for small-scale experiments, testing, and initial research. Clouds are convenient for embarrassingly parallel applications (those that do not communicate very much among partitions), which can scale even on commodity interconnects common to contemporary clouds.

Reference:
https://stackoverflow.com/questions/153616/mpi-or-sockets
https://www.ibm.com/support/knowledgecenter/en/SSZTET_10.3/admin/smpi02_running_apps.html
https://hpc.llnl.gov/sites/default/files/MPI-SpectrumUserGuide.pdf 
https://computing.llnl.gov/tutorials/mpi/
http://www.cs.nuim.ie/~dkelly/CS402-06/Message%20Passing%20Interface.htm 
https://www.sciencedirect.com/topics/computer-science/message-passing-interface

Friday, October 18, 2019

How to use SystemTap - Who killed my process?




In computing, SystemTap (stap) is a scripting language and tool for dynamically instrumenting running production Linux kernel-based operating systems. System administrators can use SystemTap to extract, filter and summarize data in order to enable diagnosis of complex performance or functional problems.

SystemTap consists of free and open-source software and includes contributions from Red Hat, IBM, Intel, Hitachi, Oracle, and other community members.


Installation: yum install systemtap systemtap-runtime (to compile scripts against the running kernel, SystemTap typically also needs the matching kernel-devel and kernel-debuginfo packages)
 


To determine which process is sending a signal to an application/process, it is necessary to trace the signals through the Linux kernel.

Script 1: An example SystemTap script that monitors SIGKILL and SIGTERM sent to the myApp_mtt process

cat my-systemtap_SIGKILL_SIGTERM.stp
--------------------------------------------------------------------- 
#! /usr/bin/env stap
#
# This systemtap script will monitor for SIGKILL and SIGTERM signals sent to
# a process named "myApp_mtt". The script also shows the process tree of the
# process that tried to kill "myApp_mtt"
#

probe signal.send {
  if ((sig_name == "SIGKILL" || sig_name == "SIGTERM") && pid_name == "myApp_mtt") {
    printf("%10d   %-34s   %-10s   %5d   %-7s   %s pid: %d, tid:%d uid:%d ppid:%d\n",
             gettimeofday_s(), tz_ctime(gettimeofday_s()), pid_name, sig_pid, sig_name, execname(), pid(), tid(), uid(), ppid());

    cur_proc = task_current();
    parent_pid = task_pid(task_parent (cur_proc));

    while (parent_pid != 0) {
        printf ("%s (%d),%d,%d -> ", task_execname(cur_proc), task_pid(cur_proc), task_uid(cur_proc),task_gid (cur_proc));
        cur_proc = task_parent(cur_proc);
        parent_pid = task_pid(task_parent (cur_proc));
    }
  }
}

probe begin {
  printf ("\nSACHIN P B: Investigating a murder mystery of Mr. myApp_mtt\n");
  printf("systemtap script started at: %s\n\n", tz_ctime(gettimeofday_s()));
  printf("%50s%-18s\n",
    "",  "Signaled Process");
  printf("%-10s   %-34s   %-10s   %5s   %-7s   %s\n",
    "Epoch", "Time of Signal", "Name", "PID", "Signal", "Signaling Process Name");
  printf("---------------------------------------------------------------");
  printf("---------------------------------------------------------------");
  printf("\n");
}

probe end {
  printf("\n");
}
----------------------------------------------------------
Script 2: Sample shell script to send SIGTERM/SIGKILL signals

cat I_am_killer-007.sh
#!/bin/bash
echo "I am going to kill Mr.myApp_mtt sooner.....wait and watch"
sleep 20
pkill -SIGTERM myApp_mtt     # CASE 1
#pkill -SIGKILL myApp_mtt    # CASE 2 (uncomment for the SIGKILL test, and comment out CASE 1)
echo "Done !!!.......Catch me if you can !"
-----------------------------------------------------
CASE 1:  Test  SIGTERM
Step 1: Let's start SystemTap as shown below:
[root@myhostname sachin]# stap my-systemtap_SIGKILL_SIGTERM.stp
SACHIN P B: Investigating a murder mystery of Mr. myApp_mtt
systemtap script started at: Thu Oct 17 18:34:54 2019 EDT

                                                  Signaled Process
Epoch        Time of Signal                       Name           PID   Signal    Signaling Process Name
------------------------------------------------------------------------------------------------------------------------------
(the script waits here and prints a log line when a SIGTERM or SIGKILL is caught)

+++++++++++++++++++++++++++++++++++
Step 2: Let's start our application myApp_mtt
[root@myhostname sachin]# ./myApp_mtt &
[1] 114583
[root@myhostname sachin]#

[root@myhostname sachin]#  ps -ef | grep myApp_mtt | grep -v grep
root     114583  80054  0 19:04 pts/8    00:00:00 ./myApp_mtt
[root@myhostname sachin]#
++++++++++++++++++++++++++++++++++
Step 3: Let's kill this application by sending SIGTERM

[root@myhostname sachin]# ./I_am_killer-007.sh
I am going to kill Mr.myApp_mtt sooner.....wait and watch
+++++++++++++++++++++++++++++++++++++
Step 4: Verify the PID/PPID of the process that sends SIGTERM
[root@myhostname sachin]#  ps -ef | grep I_am_killer-007.sh | grep -v grep
root     122566  79450  0 19:05 pts/7    00:00:00 /bin/bash ./I_am_killer-007.sh
[root@myhostname sachin]#
+++++++++++++++++++++++++++++++++++++
Step 5: Check for completion:
[root@myhostname sachin]# ./I_am_killer-007.sh
I am going to kill Mr.myApp_mtt sooner.....wait and watch
Done !!!.......Catch me if you can !
[root@myhostname sachin]#
[root@myhostname sachin]#  ps -ef | grep I_am_killer-007.sh | grep -v grep
root     122566  79450  0 19:05 pts/7    00:00:00 /bin/bash ./I_am_killer-007.sh
[root@myhostname sachin]#
[1]+  Terminated              ./myApp_mtt
[root@myhostname sachin]#
++++++++++++++++++++++++++++++++++++++
Step 6: Check the SystemTap logs; the PIDs should match the killer process and its parent.
[root@myhostname sachin]# stap my-systemtap_SIGKILL_SIGTERM.stp
SACHIN P B: Investigating a murder mystery of Mr. myApp_mtt
systemtap script started at: Thu Oct 17 19:03:53 2019 EDT

                                                  Signaled Process
Epoch        Time of Signal                       Name           PID   Signal    Signaling Process Name
------------------------------------------------------------------------------------------------------------------------------
1571353566   Thu Oct 17 19:06:06 2019 EDT         myApp_mtt       114583   SIGTERM   pkill pid: 124080, tid:124080 uid:0 ppid:122566
pkill (124080),0,0 -> I_am_killer-007 (122566),0,0 -> bash (79450),0,0 -> su (79449),0,0 -> sudo (78656),0,0 -> bash (78202),560045,100 -> sshd (78200),560045,100 -> sshd (77624),0,0 -> sshd (11405),0,0 ->



+++++++++++++++++++++++++++++++++++++++++++
CASE 2 : Test SIGKILL
Step 1: Now, change the script to send the SIGKILL signal to myApp_mtt.
[root@myhostname sachin]# cat I_am_killer-007.sh
#!/bin/bash
echo "I am going to kill Mr.myApp_mtt sooner.....wait and watch"
sleep 20
pkill -SIGKILL myApp_mtt
echo "Done !!!.......Catch me if you can !"
++++++++++++++++++++++++++++++++++++++++++++

Step 2: Verify the PID/PPID of the process that sends SIGKILL
[root@myhostname sachin]#  ./myApp_mtt &
[2] 151008
[root@myhostname sachin]# ps -ef | grep myApp_mtt | grep -v grep
root     151008 150421  0 03:07 pts/45   00:00:00 ./myApp_mtt
[root@myhostname sachin]#  ps -ef | grep I_am_killer-007.sh | grep -v grep
root     151027 150627  0 03:07 pts/4    00:00:00 /bin/bash ./I_am_killer-007.sh
[root@myhostname sachin]#
[root@myhostname sachin]# ./I_am_killer-007.sh
I am going to kill Mr.myApp_mtt sooner.....wait and watch
Done !!!.......Catch me if you can !
[root@myhostname sachin]#
[1]   Killed                  ./myApp_mtt
[root@myhostname sachin]#
+++++++++++++++++++++++++++++++++++++++++++
Step 3: Check the SystemTap logs for the SIGKILL signal to identify the process that killed myApp_mtt
[root@myhostname sachin]# stap my-systemtap_SIGKILL_SIGTERM.stp
SACHIN P B: Investigating a murder mystery of Mr. myApp_mtt
systemtap script started at: Fri Oct 18 03:07:34 2019 EDT

                                                  Signaled Process
Epoch        Time of Signal                       Name           PID   Signal    Signaling Process Name
------------------------------------------------------------------------------------------------------------------------------
1571382496   Fri Oct 18 03:08:16 2019 EDT         myApp_mtt       151008   SIGKILL   pkill pid: 151049, tid:151049 uid:0 ppid:151027
pkill (151049),0,0 -> I_am_killer-007 (151027),0,0 -> bash (150627),0,0 -> su (150626),0,0 -> sudo (150623),0,0 -> bash (150589),560045,100 -> sshd (150588),560045,100 -> sshd (150582),0,0 -> sshd (11405),0,0 ->


Conclusion: we caught the killer (I_am_killer-007) that sent the signal (SIGTERM/SIGKILL) to our process/application.

++++++++++++++++++++++++++++++++++++++++++++++++++++++
Reference:
1) https://sourceware.org/systemtap/SystemTap_Beginners_Guide/
2) https://www.thegeekdiary.com/how-to-find-which-process-is-killing-myApp_mtt-with-sigkill-or-sigterm-on-linux/
3) https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html-single/systemtap_language_reference/index
4) http://epic-alfa.kavli.tudelft.nl/share/doc/systemtap-client-2.7/examples/network/connect_stat.stp