Saturday, May 23, 2020

RHEL 8 - Next generation of Linux container capabilities - podman, buildah, skopeo!

Container technology has generated a lot of buzz in recent times. As people move from virtualization to container technology, many enterprises have adopted software containers for cloud application deployment. Containers leverage key capabilities available within Linux: they depend on kernel features such as control groups, namespaces, and SELinux to manage resources and isolate the applications running inside them. It is not just containers that generally work best with Linux, but also the tools used to manage their lifecycles. Today, Kubernetes is the leading container orchestration platform, and it was built on Linux concepts and uses Linux tooling and application programming interfaces (APIs) to manage containers.

Red Hat OpenShift is a leading hybrid cloud, enterprise Kubernetes application platform, trusted by 1,700+ organizations. It is much easier to use and even has a web interface for configuration. Red Hat developed container tools for single hosts and for clusters, standardizing on Kubernetes. Popular alternatives include the managed Kubernetes services AWS EKS (Amazon Elastic Kubernetes Service)/Fargate, Azure AKS, and Google Cloud Platform's GKE, as well as other orchestration and tooling options such as Apache Mesos, Docker Swarm, Nomad, OpenStack, Rancher, and Docker Compose.

For RHEL 8, the Docker package is not included and is not supported by Red Hat. The docker package has been replaced by the new suite of tools in the Container Tools module, as listed below:

  •     The podman container engine replaced docker engine
  •     The buildah utility replaced docker build
  •     The skopeo utility replaced docker push

Red Hat Quay is a distributed, highly available container registry for the entire enterprise. Unlike other container tools implementations, the tools described here do not center around the monolithic Docker container engine and docker command. Instead, they provide a set of command-line tools that can operate without a container engine. These include:

  • podman - client tool for directly managing pods and container images (run, stop, start, ps, attach, exec, and so on)
  • buildah - client tool for building, pushing and signing container images
  • skopeo - client tool for copying, inspecting, deleting, and signing images
  • runc - Container runtime client providing container run and build features to podman and buildah with OCI-format containers
  • crictl - For troubleshooting and working directly with CRI-O container engines
Because these tools are compatible with the Open Container Initiative (OCI), they can be used to manage the same Linux containers that are produced and managed by Docker and other OCI-compatible container engines. However, they are especially suited to run directly on Red Hat Enterprise Linux in single-node use cases. Each tool in this scenario can be more lightweight and focused on a subset of features, and because no daemon process is required to implement a container engine, these tools run without that overhead.

For a multi-node container platform, there is OpenShift. Instead of relying on the single-node, daemonless tools, OpenShift requires a daemon-based container engine, the CRI-O container engine. Also, podman stores its data in the same directory structure used by Buildah, Skopeo, and CRI-O, which will allow podman to eventually work with containers being actively managed by CRI-O in OpenShift.

In a nutshell, you get Podman with RHEL in a single-node use case (orchestrate yourself) and CRI-O as part of the highly automated OpenShift 4 software stack, as shown in the diagram below.

What is CRI-O? 
CRI-O is an implementation of the Kubernetes CRI (Container Runtime Interface) that enables the use of OCI (Open Container Initiative) compatible runtimes. It is a lightweight alternative to using Docker, Moby, or rkt as the runtime for Kubernetes. It allows Kubernetes to use any OCI-compliant runtime as the container runtime for running pods. Today it supports runc and Kata Containers as container runtimes, but any OCI-conformant runtime can in principle be plugged in. CRI-O supports OCI container images and can pull from any container registry.

Why CRI-O?
CRI-O is an open source, community-driven container engine. Its primary goal is to replace the Docker service as the container engine for Kubernetes implementations, such as OpenShift Container Platform.  The CRI-O container engine provides a stable, more secure, and performant platform for running Open Container Initiative (OCI) compatible runtimes. You can use the CRI-O container engine to launch containers and pods by engaging OCI-compliant runtimes like runc [the default OCI runtime] or Kata Containers.
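To make this concrete, the sketch below shows roughly how a node is pointed at CRI-O instead of a docker daemon. This is a minimal, illustrative sketch: exact file locations and kubelet flags vary by Kubernetes/OpenShift version, and the sysconfig file shown is an assumption for a systemd-managed kubelet.

# CRI-O's own configuration (default OCI runtime, storage, registries) lives in /etc/crio/crio.conf
systemctl enable --now crio

# Illustrative kubelet settings (e.g., in /etc/sysconfig/kubelet) pointing the kubelet at the CRI-O socket
KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock
systemctl restart kubelet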


CRI runtimes
CRI-O is not supported as a stand-alone container engine. You must use CRI-O as the container engine for a Kubernetes installation, such as OpenShift Container Platform. To run containers without Kubernetes or OpenShift Container Platform, use podman.

CRI-O's purpose is to be the container engine that implements the Kubernetes Container Runtime Interface (CRI) for OpenShift Container Platform and Kubernetes, replacing the Docker service. The scope of CRI-O is tied to the CRI, which extracted and standardized exactly what a Kubernetes service (kubelet) needs from its container engine. There is little need for direct command-line contact with CRI-O; a set of container-related command-line tools provides full access to CRI-O for testing and monitoring - crictl, runc, podman, buildah, skopeo.

Some Docker features are provided by other tools instead of by CRI-O. For example, podman offers command-line compatibility with many docker command features and extends those features to managing pods as well. No container engine is needed to run containers or pods with podman. Features for building, pushing, and signing container images, which are also not required in a container engine, are available in the buildah command.
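As a small, hedged example of that troubleshooting path (these are real crictl subcommands, but the socket path assumes a default CRI-O install), crictl lets you see exactly what the kubelet sees on a CRI-O node:

crictl --runtime-endpoint unix:///var/run/crio/crio.sock info   # verify the CRI-O runtime is reachable
crictl pods                                                      # list pod sandboxes managed by CRI-O
crictl ps -a                                                     # list containers, including stopped ones
crictl logs <container-id>                                       # fetch a container's logs for debugging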
Kubernetes and CRI-O process
The following are the components of CRI-O :
  • OCI compatible runtime – Default is runc; other OCI-compliant runtimes are supported as well, e.g., Kata Containers.
  • containers/storage – Library used for managing layers and creating root file-systems for the containers in a pod.
  • containers/image – Library is used for pulling images from registries.
  • networking (CNI) – Used for setting up networking for the pods. Flannel, Weave and OpenShift-SDN CNI plugins have been tested.
  • container monitoring (conmon) – Utility within CRI-O that is used to monitor the containers.
  • Security – Provided by several core Linux capabilities

What is Podman?
Podman is a daemonless container engine for developing, managing, and running OCI containers on your Linux system. It was developed by Red Hat, where engineers paid special attention to keeping the same nomenclature when executing Podman commands. Containers can be run either as root or in rootless mode. Podman is a replacement for Docker for local development of containerized applications: its commands map one-to-one to Docker commands, including their arguments. You could alias docker with podman and never notice that a completely different tool is managing your local containers. The Podman approach is simply to interact directly with the image registry, with container and image storage, and with the Linux kernel through the runc container runtime process (not a daemon). Podman lets you do all of the Docker commands without the daemon dependency.
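For example, on a RHEL 8 host with podman installed, the familiar docker-style workflow carries over almost verbatim (the image shown is the Red Hat Universal Base Image; any OCI image works):

alias docker=podman                                   # optional: keep existing muscle memory and scripts
podman pull registry.access.redhat.com/ubi8/ubi
podman run --rm -it registry.access.redhat.com/ubi8/ubi bash
podman ps -a                                          # same familiar output as 'docker ps -a'
podman images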


Podman workflow

One of the core features of Podman is its focus on security. There is no daemon involved in using Podman; it uses the traditional fork-exec model instead and heavily utilizes user namespaces and network namespaces. As a result, Podman is a bit more isolated and in general more secure to use than Docker. You can even be root in a container without granting the container or Podman any root privileges on the host, and a user in a container won't be able to do any root-level tasks on the host machine. Running rootless, Podman and Buildah can do most things people want to do with containers, but there are times when root is still required. The nicest feature is running Podman and containers as a non-root user. This means you never have to give a user root privileges on the host, whereas in the client/server model (like Docker employs) you must open a socket to a privileged daemon running as root to launch the containers. There you are at the mercy of the security mechanisms implemented in the daemon rather than the security mechanisms implemented in the host operating system, which is a dangerous proposition.
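A minimal sketch of the rootless model, run as a regular (non-root) user and assuming rootless support is configured on the host (RHEL 8.1+ with subuid/subgid ranges for the user):

podman run --rm registry.access.redhat.com/ubi8/ubi id    # reports uid=0 inside the container, yet grants no root on the host
podman unshare cat /proc/self/uid_map                     # shows how container UIDs map onto the user's subordinate UID range
grep $USER /etc/subuid /etc/subgid                        # the ranges rootless podman is allowed to use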

How containers run with a container engine
Podman can now ease the transition to Kubernetes and CRI-O:
 
On a basic level, Kubernetes is often viewed as the application that runs your containers, but Kubernetes really is a huge bundle of utilities or APIs that define how a group of microservices running in containers on a group of servers can coordinate, work together, and share services and resources. Kubernetes only supplies the APIs for orchestration, scheduling, and resource management. To have a complete container orchestration platform, you also need the OS underneath, a container registry, container networking, container storage, logging and monitoring, and a way to integrate continuous integration/continuous delivery (CI/CD). Red Hat OpenShift is a supported Kubernetes for cloud-native applications with enterprise security in multi-cloud environments.
A group of seals is called a pod :) - Podman manages pods. The pod concept was introduced by Kubernetes, and Podman pods are similar to the Kubernetes definition. Podman can capture the YAML description of local pods and containers and then help users transition to a more sophisticated orchestration environment like Kubernetes. Check this developer and user workflow:
  • Create containers/pods locally using Podman on the command line.
  • Verify these containers/pods locally or in a localized container runtime (on a different physical machine).
  • Snapshot the container and pod descriptions using Podman and help users re-create them in Kubernetes (see the sketch after this list).
  • Users add sophistication and orchestration (where Podman cannot) to the snapshot descriptions and leverage advanced functions of Kubernetes.
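A hedged sketch of that workflow with podman's own commands (podman generate kube and podman play kube; the pod and image names are illustrative):

podman pod create --name demo-pod -p 8080:8080
podman run -d --pod demo-pod --name demo-app registry.access.redhat.com/ubi8/ubi sleep infinity
podman generate kube demo-pod > demo-pod.yaml     # snapshot the pod and its containers as Kubernetes YAML
podman play kube demo-pod.yaml                    # later, re-create the pod from the YAML (here or on another machine), or hand it to kubectl/oc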
How containers run in a Kubernetes cluster

This container stack within Red Hat Enterprise Linux and Red Hat Enterprise Linux CoreOS serves as part of the foundation for OpenShift. As can be seen in the drawing below, the CRI-O stack in OpenShift shares many of its underlying components with Podman. This allows Red Hat engineers to leverage knowledge gained in experiments conducted in Podman for new capabilities in OpenShift.

Pod architecture

Every Podman pod includes an "infra" container. This container does nothing but sleep. Its purpose is to hold the namespaces associated with the pod and to allow podman to connect other containers to the pod. This allows you to start and stop containers within the pod while the pod stays running, whereas if the primary container controlled the pod, this would not be possible. Most of the attributes that make up the pod are actually assigned to the "infra" container: port bindings, cgroup-parent values, and kernel namespaces are all assigned to it. This is critical to understand, because once the pod is created these attributes are assigned to the "infra" container and cannot be changed.
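This is easy to see on the command line; a small sketch (the pod name is illustrative):

podman pod create --name mypod -p 8080:8080   # the port binding is attached to the pod's infra container
podman pod ps                                 # the new pod already contains 1 container: the infra container
podman ps -a --pod                            # shows the infra container alongside any containers you add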

In the above diagram, notice the box above each container, conmon; this is the container monitor. It is a small C program whose job is to watch the primary process of the container and, if the container dies, save the exit code. It also holds open the tty of the container, so that it can be attached to later. This is what allows podman to run in detached mode (backgrounded): podman can exit while conmon continues to run. Each container has its own instance of conmon.
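You can observe conmon directly on the host; a quick sketch (the ConmonPid field is how recent podman releases report it in inspect output, so treat the exact field name as an assumption for your version):

ps -C conmon -o pid,ppid,args                                  # one conmon process per running container
podman inspect --format '{{.State.ConmonPid}}' <container>     # the conmon PID recorded for a given container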


Buildah: The buildah command allows you to build container images either from the command line or using Dockerfiles. These images can then be pushed to any container registry and can be used by any container engine, including Podman, CRI-O, and Docker. Buildah specializes in building OCI images. Buildah's commands replicate all of the commands that are found in a Dockerfile. Buildah's goal is also to provide a lower-level coreutils interface to build container images, allowing people to build containers without requiring a Dockerfile and to use other scripting languages to build container images without requiring a daemon. The buildah command can be used as a separate command, but it is incorporated into other tools as well; for example, the podman build command uses buildah code to build container images. Buildah is also often used to securely build containers while running inside of a locked-down container managed by a tool like Podman, OpenShift/Kubernetes, or Docker. Buildah allows you to have a Kubernetes cluster without any Docker daemon for both runtime and builds. So, when to use Buildah and when to use Podman? With Podman you can run, build (it calls Buildah under the covers for this), modify, and troubleshoot containers in your Kubernetes cluster. With the two projects together, you have a well-rounded solution for your OCI container image and container needs. Buildah and Podman are easily installable via yum install buildah podman.

A quick and easy way to summarize the difference between the two projects is that the buildah run command emulates the RUN command in a Dockerfile, while the podman run command emulates the docker run command in functionality. Buildah is an efficient way to create OCI images, while Podman allows you to manage and maintain those images and containers in a production environment using familiar container CLI commands. Together they form a strong foundation to support your OCI container image and container needs.
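A minimal buildah sketch of that Dockerfile-without-a-Dockerfile flow (package and image names are illustrative):

ctr=$(buildah from registry.access.redhat.com/ubi8/ubi)    # like FROM in a Dockerfile
buildah run "$ctr" -- yum -y install httpd                  # like RUN
buildah config --port 80 --cmd "/usr/sbin/httpd -DFOREGROUND" "$ctr"
buildah commit "$ctr" my-httpd:latest                       # produce an OCI image usable by podman, CRI-O, or docker
buildah rm "$ctr"
# or, if you already have a Dockerfile:
buildah bud -t my-httpd .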

skopeo: The skopeo command is a tool for copying containers and images between different types of container storage. It can copy containers from one container registry to another. It can copy images to and from a host, as well as to other container environments and registries. Skopeo can inspect images from container image registries, get images and image layers, and use signatures to create and verify images. 
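For instance (the destination registry/namespace below is a placeholder you would replace with your own):

skopeo inspect docker://registry.access.redhat.com/ubi8/ubi                                         # read image metadata without pulling it
skopeo copy docker://registry.access.redhat.com/ubi8/ubi containers-storage:localhost/ubi8-copy     # registry -> local container storage
skopeo copy docker://registry.access.redhat.com/ubi8/ubi docker://quay.io/<your-namespace>/ubi8     # registry -> registry (requires push access)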

Running containers as root or rootless:

Running the container tools such as podman, skopeo, or buildah as a user with superuser privilege (root user) is the best way to ensure that your containers have full access to any feature available on your system. However, with the feature called "Rootless Containers," generally available as of RHEL 8.1, you can work with containers as a regular user.

Although container engines, such as Docker, let you run docker commands as a regular (non-root) user, the docker daemon that carries out those requests runs as root. So, effectively, regular users can make requests through their containers that harm the system, without there being clarity about who made those requests. By setting up rootless container users, system administrators limit potentially damaging container activities from regular users, while still allowing those users to safely run many container features under their own accounts.
Also, note that Docker is a daemon-based container engine that allows us to deploy applications inside containers, as shown in the docker-workflow diagram. With the release of RHEL 8 and CentOS 8, the docker package has been removed from the default package repositories and replaced with podman and buildah. If you are comfortable with docker, deploy most of your applications inside docker containers, and do not want to switch to podman, there is still a way to install and use the community version of docker on CentOS 8 and RHEL 8 systems by using the official Docker repository for CentOS 7/RHEL 7, which remains compatible.
Docker workflow

NOTE: Technology Preview features provide early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. RHEL 8.2 provides access to Technology Previews of containerized versions of Buildah, a tool for building container images that comply with the Open Container Initiative (OCI) image specification, and Skopeo, a tool that facilitates the movement of container images. Red Hat is adding Udica, a tool that makes it easier to create customized, container-centric SELinux security policies that reduce the risk that a process might "break out" of a container. RHEL 8.2 also introduces enhancements to the Red Hat Universal Base Image, which now supports OpenJDK and .NET 3.0, in addition to making it easier to access the source code associated with a given image via a single command. RHEL 8.2 also adds management and monitoring capabilities via updates to Red Hat Insights, which make it easier to define and monitor policies created by the IT organization and to reduce any drift from the baselines initially defined by the IT team.
----------------------------------------------------------------------------------------------------------------------------------
Podman installation on RHEL and a small demo with a DB application:
Step 1: yum -y install podman
This command will install Podman and also its dependencies: atomic-registries, runc, skopeo-containers, and SELinux policies. Check as shown below:
[root@IBMPOWER_sachin]# rpm -qa | grep podman
podman-1.6.4-18.el7_8.x86_64
[root@IBMPOWER_sachin]# rpm -qa | grep skopeo
skopeo-0.1.40-7.el7_8.x86_64
[root@IBMPOWER_sachin]# rpm -qa | grep runc
runc-1.0.0-67.rc10.el7_8.x86_64
[root@IBMPOWER_sachin]#
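Beyond rpm -qa, a quick sanity check can be done with podman itself; for example:

podman --version
podman info          # reports the host, storage configuration, registries, and OCI runtime (runc) in use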

Step 2: Command-line example to create and run a RHEL container
[root@IBMPOWER_sachin script]# podman run -it rhel sh
Trying to pull registry.access.redhat.com/rhel...
Getting image source signatures
Copying blob feaa73091cc9 done
Copying blob e20f387c7bf5 done
Copying config 1a9b6d0a58 done
Writing manifest to image destination
Storing signatures
sh-4.2#

[root@IBMPOWER_sachin ~]# podman images
REPOSITORY                        TAG      IMAGE ID       CREATED       SIZE
registry.access.redhat.com/rhel   latest   1a9b6d0a58f8   2 weeks ago   215 MB
[root@IBMPOWER_sachin ~]#

Step 3: Install a containerized service for setting up a MariaDB database:
Run a persistent MariaDB 10.2 container with some custom variables and keep its data persistent.
[root@IBMPOWER_sachin~]# 
podman pull registry.access.redhat.com/rhscl/mariadb-102-rhel7
Trying to pull registry.access.redhat.com/rhscl/mariadb-102-rhel7...
Getting image source signatures
Copying blob 8574a8f8c7e5 done
Copying blob f60299098adf done
Copying blob 82a8f4ea76cb done
Copying blob a3ac36470b00 done
Copying config 66a314da15 done
Writing manifest to image destination
Storing signatures
66a314da15d608d89f7b589f6668f9bc0c2fa814ec9c690481a7a057206338bd
[root@IBMPOWER_sachin ~]#
[root@IBMPOWER_sachin ~]# podman images
REPOSITORY                                           TAG      IMAGE ID       CREATED       SIZE
registry.access.redhat.com/rhscl/mariadb-102-rhel7   latest   66a314da15d6   11 days ago   453 MB
registry.access.redhat.com/rhel                      latest   1a9b6d0a58f8   2 weeks ago   215 MB
[root@IBMPOWER_sachin ~]#

After you pull an image to your local system and before you run it, it is a good idea to investigate that image. Reasons for investigating an image before you run it include:
  •  Understanding what the image does
  •  Checking what software is inside the image
Example: Get information about the user ID running inside the container, the "ExposedPorts", and the persistent volume location to attach, etc., as shown here:
podman inspect registry.access.redhat.com/rhscl/mariadb-102-rhel7  | grep User
podman inspect registry.access.redhat.com/rhscl/mariadb-102-rhel7 | grep -A1 ExposedPorts
podman inspect registry.access.redhat.com/rhscl/mariadb-102-rhel7 | grep -A1 Volume
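If you prefer not to grep raw JSON, podman inspect also accepts Go templates via --format; a small sketch (the field names assume the image publishes User, ExposedPorts, and Volumes in its config, as this one does):

podman inspect --format '{{.Config.User}}' registry.access.redhat.com/rhscl/mariadb-102-rhel7
podman inspect --format '{{.Config.ExposedPorts}}' registry.access.redhat.com/rhscl/mariadb-102-rhel7
podman inspect --format '{{.Config.Volumes}}' registry.access.redhat.com/rhscl/mariadb-102-rhel7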

 

Step 4: Set up a folder that will hold MariaDB's data once we start our container (27:27 matches the user ID reported by podman inspect above):
[root@IBMPOWER_sachin ~]# mkdir /root/mysql-data
[root@IBMPOWER_sachin ~]# chown 27:27 /root/mysql-data
 Step 5: Run the container
[root@IBMPOWER_sachin ~]#  
podman run -d -v /root/mysql-data:/var/lib/mysql/data:Z -e MYSQL_USER=user -e MYSQL_PASSWORD=pass -e MYSQL_DATABASE=db -p 3306:3306 registry.access.redhat.com/rhscl/mariadb-102-rhel7
fd2d30f8ec72734a2eee100f89f35574739c7a6a30281be77998de466635b3b0
[root@IBMPOWER_sachin ~]# podman container list
CONTAINER ID  IMAGE                                                      COMMAND     CREATED        STATUS            PORTS                   NAMES
fd2d30f8ec72  registry.access.redhat.com/rhscl/mariadb-102-rhel7:latest  run-mysqld  9 seconds ago  Up 9 seconds ago  0.0.0.0:3306->3306/tcp  wizardly_jang
[root@IBMPOWER_sachin ~]#
Step 6: Check logs
[root@ ]# podman logs fd2d30f8ec72 | head
=> sourcing 20-validate-variables.sh ...
=> sourcing 25-validate-replication-variables.sh ...
=> sourcing 30-base-config.sh ...
---> 11:03:27     Processing basic MySQL configuration files ...
=> sourcing 60-replication-config.sh ...
=> sourcing 70-s2i-config.sh ...
---> 11:03:27     Processing additional arbitrary  MySQL configuration provided by s2i ...
=> sourcing 40-paas.cnf ...
=> sourcing 50-my-tuning.cnf ...
---> 11:03:27     Initializing database ...
Step 7: The container started and initialized its database. Let's create a table and check:
[root@IBMPOWER_sachin ~]# podman exec -it fd2d30f8ec72 /bin/bash
bash-4.2$ mysql --user=user --password=pass -h 127.0.0.1 -P 3306 -t
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 8
Server version: 10.2.22-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| db                 |
| information_schema |
| test               |
+--------------------+
3 rows in set (0.00 sec)

MariaDB [(none)]> use test;
Database changed
MariaDB [test]> show tables;
Empty set (0.00 sec)

MariaDB [test]> CREATE TABLE hpc_team (username VARCHAR(20), date DATETIME);
Query OK, 0 rows affected (0.00 sec)

MariaDB [test]> INSERT INTO hpc_team (username, date) VALUES ('Aboorva', Now());
Query OK, 1 row affected (0.00 sec)

MariaDB [test]> INSERT INTO hpc_team (username, date) VALUES ('Nysal', Now());
Query OK, 1 row affected (0.00 sec)

MariaDB [test]> INSERT INTO hpc_team (username, date) VALUES ('Sachin', Now());
Query OK, 1 row affected (0.00 sec)

MariaDB [test]> select * from hpc_team;
+----------+---------------------+
| username | date                |
+----------+---------------------+
| Aboorva  | 2020-05-26 11:12:41 |
| Nysal    | 2020-05-26 11:12:55 |
| Sachin   | 2020-05-26 11:13:08 |
+----------+---------------------+
3 rows in set (0.00 sec)

MariaDB [test]> quit
Bye
bash-4.2$
bash-4.2$ ls
aria_log.00000001  db                ib_buffer_pool  ib_logfile1  ibtmp1             mysql               performance_schema  test
aria_log_control   fd2d30f8ec72.pid  ib_logfile0     ibdata1      multi-master.info  mysql_upgrade_info  tc.log
bash-4.2$ cd test/
bash-4.2$ ls -alsrt
total 108
 4 drwxr-xr-x 6 mysql mysql  4096 May 26 11:03 ..
 4 -rw-rw---- 1 mysql mysql   483 May 26 11:12 hpc_team.frm
 4 drwx------ 2 mysql mysql  4096 May 26 11:12 .
96 -rw-rw---- 1 mysql mysql 98304 May 26 11:13 hpc_team.ibd
bash-4.2$
Step 8: Check the DB folder from the host machine:
[root@IBMPOWER_sachin mysql-data]# cd test/
[root@IBMPOWER_sachin test]# ls -alsrt
total 108
 4 drwxr-xr-x 6 27 27  4096 May 26 07:03 ..
 4 -rw-rw---- 1 27 27   483 May 26 07:12 hpc_team.frm
 4 drwx------ 2 27 27  4096 May 26 07:12 .
96 -rw-rw---- 1 27 27 98304 May 26 07:13 hpc_team.ibd
[root@IBMPOWER_sachin test]#

Step 9: We can set up our systemd unit file for handling the database. We’ll use a unit file as shown below:
cat /etc/systemd/system/mariadb-service.service
[Unit]
Description=Custom MariaDB Podman Container
After=network.target
[Service]
Type=simple
TimeoutStartSec=5m
ExecStartPre=-/usr/bin/podman rm "mariadb-service"
ExecStart=/usr/bin/podman run --name mariadb-service -v /root/mysql-data:/var/lib/mysql/data:Z -e MYSQL_USER=user -e MYSQL_PASSWORD=pass -e MYSQL_DATABASE=db -p 3306:3306 --net host registry.access.redhat.com/rhscl/mariadb-102-rhel7
ExecReload=-/usr/bin/podman stop "mariadb-service"
ExecReload=-/usr/bin/podman rm "mariadb-service"
ExecStop=-/usr/bin/podman stop "mariadb-service"
Restart=always
RestartSec=30
[Install]
WantedBy=multi-user.target
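With the unit file in place, the database can be managed like any other systemd service; for example:

systemctl daemon-reload
systemctl enable --now mariadb-service
systemctl status mariadb-service
podman ps                                  # the mariadb-service container should be listed

Newer podman releases can also generate such a unit file for an existing container with podman generate systemd, which is worth considering instead of hand-writing one.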

Tuesday, March 24, 2020

IBM Supercomputer Summit identified possible drug compounds to fight COVID-19

Humanity is going through a difficult time, and it is important for us to fight COVID-19 together. The world is facing an unprecedented challenge, with communities and economies everywhere affected by the growing COVID-19 pandemic. The world is coming together to combat the pandemic, bringing governments and organizations from across industries and sectors to help respond to this global outbreak. That means unleashing the full capacity of our world-class supercomputers to rapidly advance scientific research for treatments and a vaccine. IBM announced alongside the White House Office of Science and Technology Policy that it would help coordinate an effort to provide hundreds of petaflops of compute to scientists researching the coronavirus. As part of the newly launched COVID-19 High Performance Computing (HPC) Consortium, the company pledged to assist in evaluating proposals and to provide access to resources for projects that make the most immediate impact. IBM claims those efforts are beginning to bear fruit. This is an initiative to increase access to high-performance computing for groups researching and fighting the novel coronavirus, also known as COVID-19. Scientists have enlisted the help of a supercomputer to fight back against the rapid spread of the novel coronavirus. Researchers from the Oak Ridge National Laboratory just published the results of a project in which they tasked the massive IBM supercomputer known as Summit with finding the most effective existing drugs that could combat COVID-19. Summit, described as the Formula One of supercomputers, can perform mathematical equations at speeds that "boggle the mind"; it is capable of performing over 200 quadrillion calculations per second. That's not a typo :) - and this computation speed accelerates the process of discovery. More details on the system specifications are available at the link.

Vaccine and drug development is the process of bringing a new infectious disease vaccine or therapeutic drug to market once a lead compound has been identified through the process of drug discovery; it typically requires more than five years to assure the safety and efficacy of the new compound. Supercomputers can solve calculations and run experiments that, if done on traditional computing systems or by hand, would take months or years. In traditional computing systems and data centers, each computer functions and does calculations independently. By contrast, high-performance computers can work together and pass calculations between one another to process information more quickly. Such computers are also especially good for conducting research in areas like epidemiology and molecular modeling because the systems mirror the interconnectivity that exists in nature.

IBM partnered with the White House Office of Science and Technology Policy and the Department of Energy to create the COVID-19 High Performance Computing Consortium. The effort, which IBM started just last week, is expected to harness powerful high-performance computing, or “supercomputing,” resources that will massively increase the speed and capacity of coronavirus-related research. The COVID-19 High-Performance Computing Consortium includes the Seattle area’s powerhouses of cloud computing, Amazon Web Services and Microsoft, as well as IBM and Google Cloud. There are also academic partners (MIT and Rensselaer Polytechnic Institute), federal agency partners (NASA and the National Science Foundation) and five Department of Energy labs (Argonne, Lawrence Livermore, Los Alamos, Oak Ridge and Sandia). Among the resources being brought to bear is the world’s most powerful supercomputer, the Oak Ridge Summit, which packs a 200-petaflop punch. The system will harness 16 supercomputing systems from IBM, national laboratories, several universities, Amazon, Google, Microsoft and others. Computing power will be provided via remote access to researchers whose projects are approved by the consortium’s leadership board, which will be comprised of tech industry leaders and White House and Energy Department officials. The group plans to begin accepting research proposals through an online portal.



The US Department of Energy's Oak Ridge National Laboratory (ORNL) has deployed the world's most powerful and smartest supercomputer, the IBM-built Summit, in the fight against COVID-19. Researchers from ORNL were granted emergency computation time on Summit, using it to perform simulations with unprecedented speed. In just two days, Summit identified and studied 77 potential small-molecule drug compounds to fight the new coronavirus, a task that, using a traditional wet-lab approach, would have taken years. The researchers at Oak Ridge National Laboratory used Summit to perform simulations of more than 8,000 possible compounds to screen for those that have the most opportunity to have an impact on the disease by binding to the main "spike" protein of the coronavirus, rendering it unable to infect host cells. Starting with over 8,000 compounds, Summit's incredible power shortened the time of the experiment dramatically, ruling out the vast majority of possible medications before settling on 77 drugs, which it ranked based on how effective they would likely be at halting the virus in the human body.

The paper, which was posted on the preprint server ChemRxiv, focuses on the method the virus uses to bind to cells. Like other viruses, the novel coronavirus uses a spike protein to inject cells. Using Summit with an algorithm to investigate which drugs could bind to the protein and prevent the virus from doing its job, the researchers now have a list of 77 drugs that show promise. They ranked the compounds of interest that could have value in experimental studies of the virus [source].


What is mutation, and what are its features and evaluation with respect to COVID-19?


Structure of the Coronavirus Virion
Mutation is a mundane aspect of existence for many viruses, and the novel coronavirus is no exception. This new disease, COVID-19 (an acronym for "coronavirus disease 2019"), seems to be very contagious and has quickly spread globally. CoVs have become major pathogens of emerging respiratory disease outbreaks. They are a large family of single-stranded RNA viruses (+ssRNA) that can be isolated in different animal species. For reasons yet to be explained, these viruses can cross species barriers and can cause, in humans, illness ranging from the common cold to more severe diseases such as MERS and SARS. The potential for these viruses to grow into a worldwide pandemic seems to be a serious public health risk.

A mutation is an alteration in the nucleotide sequence of the genome of an organism, virus, or extrachromosomal DNA. Mutations result from errors during DNA replication, mitosis, and meiosis, or from other types of damage to DNA. The RNA viral genome can be double-stranded (as in DNA) or single-stranded. In some of these viruses, replication occurs quickly, and there are no mechanisms to check the genome for accuracy. This error-prone process often results in mutations. You can think of COVID-19 and, probably, a next mutated version "COVID-20" :). A gene mutation is a permanent alteration in the DNA sequence that makes up a gene, such that the sequence differs from what is found in most people. Replication errors and DNA damage are actually happening in the cells of our bodies all the time. In most cases, however, they don't cause cancer, or even mutations. That's because they are usually detected and fixed by DNA proofreading and repair mechanisms.

CoVs are positive-stranded RNA viruses with a crown-like appearance under an electron microscope (coronam is the Latin term for crown) due to the presence of spike glycoproteins on the envelope. The subfamily Orthocoronavirinae of the Coronaviridae family (order Nidovirales) is classified into four genera of CoVs: Alphacoronavirus (alphaCoV), Betacoronavirus (betaCoV), Deltacoronavirus (deltaCoV), and Gammacoronavirus (gammaCoV). Furthermore, the betaCoV genus divides into five sub-genera or lineages.[2] Genomic characterization has shown that bats and rodents are probably the gene sources of alphaCoVs and betaCoVs, whereas avian species seem to represent the gene sources of deltaCoVs and gammaCoVs. Members of this large family of viruses can cause respiratory, enteric, hepatic, and neurological diseases in different animal species, including camels, cattle, cats, and bats. To date, seven human CoVs (HCoVs), those capable of infecting humans, have been identified.

SARS-CoV-2 belongs to the betaCoVs category. It has a round or elliptic and often pleomorphic form, and a diameter of approximately 60–140 nm. Like other CoVs, it is sensitive to ultraviolet rays and heat. Furthermore, these viruses can be effectively inactivated by lipid solvents. Its single-stranded RNA genome contains 29,891 nucleotides, encoding 9,860 amino acids. Although its origins are not entirely understood, genomic analyses suggest that SARS-CoV-2 probably evolved from a strain found in bats. The potential amplifying mammalian host, intermediate between bats and humans, is, however, not known. Since a mutation in the original strain could have directly triggered virulence towards humans, it is not certain that this intermediary exists. Because the first cases of the COVID-19 disease were linked to direct exposure to the Huanan Seafood Wholesale Market of Wuhan, animal-to-human transmission was presumed as the main mechanism. Nevertheless, subsequent cases were not associated with this exposure mechanism. Therefore, it was concluded that the virus could also be transmitted from human to human, and symptomatic people are the most frequent source of COVID-19 spread. The possibility of transmission before symptoms develop seems to be infrequent, although it cannot be excluded. Moreover, there are suggestions that individuals who remain asymptomatic could transmit the virus. These data suggest that the use of isolation is the best way to contain this epidemic. As with other respiratory pathogens, including flu and rhinovirus, transmission is believed to occur through respiratory droplets from coughing and sneezing.

The novel SARS-CoV-2 coronavirus that emerged in the city of Wuhan, China, last year, and has since caused a large-scale COVID-19 epidemic and spread all over the world, is the product of natural evolution. The scientists analyzed the genetic template for spike proteins, armatures on the outside of the virus that it uses to grab and penetrate the outer walls of human and animal cells. More specifically, they focused on two important features of the spike protein: the receptor-binding domain (RBD), a kind of grappling hook that grips onto host cells, and the cleavage site, a molecular can opener that allows the virus to crack open and enter host cells. A recent scientific article suggested that the novel coronavirus responsible for the COVID-19 epidemic has mutated into a more "aggressive" form. The genetic material of the virus is RNA. Unlike human cells copying DNA, a virus copying its genetic material does not proofread its work. Because RNA viruses essentially operate without a spell-check, they often make mistakes. These "mistakes" are mutations, and viruses mutate rapidly compared to other organisms. Mutations that are harmful to the viruses are less likely to survive and are eliminated through natural selection. Sadly, this new virus doesn't have that deletion. When mutations occur that help a virus spread or survive better, they are unlikely to make a big difference in the course of an outbreak. Still, a common perception is that the continuous acquisition of mutations will cause our future coronavirus vaccines to be ineffective. While virus evolution may confer vaccine resistance, this process often takes many years for the right mutations to accumulate. A virologist at the Charité University had sequenced the virus from a German patient infected with COVID-19 in Italy. The genome looked similar to that of a virus found in a patient in Munich; both shared three mutations not seen in early sequences from China. But he thought it was just as likely that a Chinese variant carrying the three mutations had taken independent routes to both countries. Like all viruses, SARS-CoV-2 evolves over time through random mutations, only some of which are caught and corrected by the virus's error correction machinery. Scientists will also be scouring the genomic diversity for mutations that might change how dangerous the pathogen is or how fast it spreads. There, too, caution is warranted.
 


Why a supercomputer is needed to fight the coronavirus [COVID-19]:
 
Viruses infect cells by binding to them and using a ‘spike’ to inject their genetic material into the host cell. To understand new biological compounds, like viruses, researchers in wet labs grow the micro-organism and see how it reacts in real-life to the introduction of new compounds. This is a slow process without powerful computers that can perform digital simulations to narrow down the range of potential variables. Computer simulations can examine how different variables react with different viruses. Each of these individual variables can comprise billions of unique data points. When these data points are compounded with multiple simulations, this can become a very time-intensive process if a conventional computing system is used.


These promising compounds could now play a role in developing new treatments or even a highly effective vaccine that would keep the virus from taking root inside a person's body. Right now, our best defense against the virus is social distancing, but a vaccine or treatment to ease symptoms and shorten recovery time would go a long way toward getting us on track for a return to normalcy. Researchers used the supercomputer to screen 8,000 compounds to identify the 77 most likely to bind to the main "spike" protein in the coronavirus and render it incapable of attaching to host cells in the human body. Those 77 compounds can now be experimented on with the aim of developing a coronavirus treatment. The supercomputer made it possible to avoid the lengthy process of experimenting on all 8,000 of those compounds.


The results from Summit don’t mean that a cure or treatment for the new coronavirus has been found. But scientists hope that the computational findings will inform future studies and provide a focused framework for wet-labs to further investigate the compounds. Only then will we know if any of them have the needed characteristics to attack and kill the virus. Going forward, the researchers plan to run the experiment again with a new, more accurate model of the protein spike that the virus uses. It’s possible that the new model will change which drugs are most effective against the virus and hopefully shorten the road to a treatment option. It will still be many months before we have a vaccine available, but scientists are hard at work on those solutions. IBM said Summit would continue to be used for "providing ground-breaking technology for the betterment of humankind".
Reference:
https://www.ibm.com/blogs/nordic-msp/ibm-supercomputer-summit-attacks-coronavirus
https://nypost.com/2020/03/23/supercomputer-finds-77-drugs-that-could-halt-coronavirus-spread
https://www.mercurynews.com/2020/03/22/ibm-partners-with-white-house-to-direct-supercomputing-power-for-coronavirus-research

https://www.ncbi.nlm.nih.gov/books/NBK554776
https://www.youtube.com/watch?v=NJLXdsO1GBI
https://www.youtube.com/watch?v=PWzbArPgo-o

Friday, January 10, 2020

What's next in computing - Quantum logic - IBM Q

Many of the world's biggest mysteries and potentially greatest opportunities remain beyond the grasp of classical computers. To continue the pace of progress, we need to augment the classical approach with a new platform, one that follows its own set of rules. That is quantum computing. The importance of quantum computing is both understated and widely over-hyped at the same time. Although it won't replace conventional computers, quantum innovation represents a new computing paradigm. As quantum computing technology advances, clients are becoming increasingly curious about how it might impact their business. The intersection of industry and technology will be critical for clients to identify potential applications of quantum computing. The IBM Q Network is a collaboration of Fortune 500 companies, academic institutions, and research labs working together to advance quantum computing. IBM works with the sponsors, champions, and stakeholders who will be influencers to drive initial conversations. Quantum sponsors are frequently found in a CIO or innovation group that focuses on new and emerging technology. They will be interested in discussing specific industry use cases where there is a high potential to leverage quantum for future business advantage. Mercedes-Benz and Daimler are working with International Business Machines Corp.'s quantum-computing division with the goal of deploying the next-generation computing power in certain use cases. For certain classes of problems, quantum computers can be dramatically faster than supercomputers. There are three basic types of quantum computers: quantum annealers, analog quantum simulators, and universal quantum computers. Quantum computers operate in a very different way from classical computers. They take advantage of the unusual phenomena of quantum mechanics, for example where subatomic particles can appear to exist in more than one state at any time.

In September 2019, IBM became the first company to have a fleet of quantum computers. IBM's 14th quantum computer is its most powerful so far, a model with 53 of the qubits that form the fundamental data-processing element at the heart of the system. IBM is competing with companies like Google, Microsoft, Honeywell, Rigetti Computing, IonQ, Intel and NTT in the race to make useful quantum computers. Another company, D-Wave, uses a different approach called annealing that's already got some customers, while AT&T and others are pursuing the even more distant realm of quantum networking. They are housed at the IBM Quantum Computation Center in New York. The Q System One is the first quantum system to consolidate thousands of components into a glass-enclosed, air-tight environment built specifically for business use. Click here for more info

Multiple IBM Q systems are housed at the IBM Quantum Computing Center in New York
While traditional computers store information as either 0s or 1s, quantum computers use quantum bits, or qubits, which represent and store information as both 0s and 1s simultaneously. That means quantum computers have the potential to sort through a vast number of possible solutions in a fraction of a second. Qubits are kept at an extremely cold temperature, around 1/100th the temperature of outer space. This temperature is measured in kelvin, and zero kelvin is called "absolute zero". IBM keeps its qubits at 0.015 kelvin, while the brisk air on a freezing cold winter day is at 273 kelvin. Qubits are kept this cold to prolong their fragile quantum state. The longer qubits can be kept in a quantum state, the more operations can be performed on them while taking advantage of superposition and entanglement. So what's up with a quantum computer with 53 qubits? It stems from the hexagonally derived lattice of qubits that's advantageous when it comes to minimizing unwanted interactions. Quantum computing remains a highly experimental field, limited by the difficult physics of the ultra-small and by the need to keep the machines refrigerated to within a hair's breadth of absolute zero to keep outside disturbances from ruining any calculations.
A close-up view of the IBM Q quantum computer. The processor is in the silver-colored cylinder.
Rigetti is racing against similar projects at Google, Microsoft, IBM, and Intel. Every Bay Area startup will tell you it is doing something momentously difficult, but Rigetti is biting off more than most – it's working on quantum computing. All venture-backed startups face the challenge of building a business, but this one has to do it by making progress on one of tech's thorniest problems.
Within the next five years, Google will produce a viable quantum computer. That's the stake the company has just planted: "The field of quantum computing will soon achieve a historic milestone." They call this milestone "quantum supremacy." The world's biggest tech companies are already jockeying for their own form of commercial supremacy as they anticipate a quantum breakthrough. Both Google and IBM now say they will offer access to true quantum computing over the internet (call it quantum cloud computing). After years spent developing quantum technologies, IBM is also trying to prevent Google, a relative newcomer to the field, from stealing its quantum mindshare. And it's still unclear whether the claims made by these two companies will hold up. The future of quantum computing, like the quantum state itself, remains uncertain. Rigetti is now entering the fray. The company launched its own cloud platform, called Forest, where developers can write code for simulated quantum computers, and some partners get to access the startup's existing quantum hardware.

Quantum computing can work in unison with current computing infrastructure to solve complex problems that were previously thought impractical or impossible. This can be paradigm-shifting. For example, the difficulty of factoring large numbers into their primes is the basis of modern cryptography. For the size of numbers used in modern public-private key encryption, this calculation would take trillions of years on a conventional computer; on a future quantum computer, it could take only minutes. As getting more power out of classical computers for a fixed amount of space, time, and resources becomes more challenging, completely new approaches like quantum computing become ever more interesting as we aim to tackle more complicated problems. Quantum computing could be a way to revive the rate of progress that we have come to depend on in conventional computers, at least in some areas. "If you can successfully apply it to problems it could give you an exponential increase in computing power that you can't get" through traditional chip designs. That's because IBM sees a future beyond traditional computing. For decades, computing power has doubled roughly every two years, a pattern known as Moore's Law. Those advances have relied on making transistors ever smaller, thereby enabling each computer chip to have more calculation power. IBM has invented new ways to shrink transistors, and it grew to its current size by leveraging the continued scaling of conventional computing. But that approach is finite, and its end is in sight. Now, times are not so good. "Underlying Moore's Law is scaling, the ability to pack more and more transistors into a smaller and smaller space. At some point ... you're going to reach atomic dimensions and that's the end of that approach." The specter of a world in which silicon chips are no longer improving exponentially is part of what drives IBM's investment in quantum computing through what it has dubbed the IBM Q Network. We don't lack computing power today, but you can see Moore's Law going into saturation.

A quantum system in a definite state can still behave randomly. This is a counter-intuitive idea of quantum physics. Quantum computers exist now because we have recently figured out how to control what has been in the world this whole time: the quantum phenomena of superposition, entanglement, and interference. These new ingredients in computing expand what is possible to design into algorithms. The word qubit has two meanings, one physical and one conceptual. Physically, it refers to the individual devices that are used to carry out calculations in quantum computers. Conceptually, a qubit is like a bit in a regular computer. It’s the basic unit of data in a quantum circuit.

Unlike classical bits, which can only be 0 or 1, qubits can exist in a superposition of these states.

A superposition is a weighted sum or difference of two or more states; for example, the state of the air when two or more musical tones sound at once. A "weighted sum or difference" means that some parts of the superposition are more or less prominently represented, such as when a violin is played more loudly than the other instruments in a string quartet. Ordinary, or "classical," superpositions commonly occur in macroscopic phenomena involving waves. Quantum theory predicts that a computer with n qubits can exist in a superposition of all 2^n of its distinct logical states 000...0 through 111...1. This is exponentially more than a classical superposition: playing n musical tones at once can only produce a superposition of n states. A set of n coins, each of which might be heads or tails, can be described as a probabilistic mixture of 2^n states, but it actually is in only one of them; we just don't know which. However, quantum computers are capable of holding their data in superpositions of 2^n distinct logical states. For this reason, quantum superposition is more powerful than classical probabilism. Quantum computers capable of holding their data in superposition can solve some problems exponentially faster than any known classical algorithm. A more technical difference is that while probabilities must be positive (or zero), the weights in a superposition can be positive, negative, or even complex numbers.
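As a small worked illustration of this (standard textbook notation, not specific to IBM Q): a single qubit state can be written as $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$, where the amplitudes $\alpha$ and $\beta$ are complex numbers with $|\alpha|^2 + |\beta|^2 = 1$. An n-qubit register generalizes this to $|\psi\rangle = \sum_{x \in \{0,1\}^n} c_x |x\rangle$ with $\sum_x |c_x|^2 = 1$: a single state vector carrying weights over all $2^n$ basis states, which is exactly the exponential superposition described above.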
Entanglement is a property of most quantum superpositions and does not occur in classical superpositions. Entanglement is a core concept of quantum computing. In an entangled state, the whole system is in a definite state, even though the parts are not. Observing one of two entangled particles causes it to behave randomly, but tells the observer how the other particle would act if a similar observation were made on it. Because entanglement involves a correlation between individually random behaviors of the two particles, it cannot be used to send a message. Therefore, the term "instantaneous action at a distance," sometimes used to describe entanglement, is a misnomer. There is no action (in the sense of something that can be used to exert a controllable influence or send a message), only correlation, which, though uncannily perfect, can only be detected afterward when the two observers compare notes. The ability of quantum computers to exist in entangled states is responsible for much of their extra computing power.

One important factor is that physical qubits are much more sensitive to noise than transistors in regular circuits. The ability to hold a quantum state is called coherence. The longer the coherence time, the more operations researchers can perform in a quantum circuit before resetting it, and the more sophisticated the algorithms that can be run on it. To reduce errors, quantum computers need qubits that have long coherence times, and physicists need to be able to control quantum states more tightly, with simpler electrical or optical systems than are standard today. A quantum computer will need about 200 or so perfect qubits to perform chemical simulations that are impossible on classical computers. Because qubits are so prone to error, though, these systems are likely to require redundancy, with tens or perhaps hundreds of faulty qubits doing the work of one ideal qubit that gives the right answer. These so-far-theoretical ideal qubits are often called "logical qubits" or "error-corrected qubits." So, it doesn't make sense to increase the number of qubits before you improve your error rates.
A leap from bits to qubits: this two-letter change could mean entirely new horizons for healthcare. Quantum computing might bring supersonic drug design, in silico clinical trials with virtual humans simulated 'live', full-speed whole-genome sequencing and analytics, the movement of hospitals to the cloud, the achievement of predictive health, or the security of medical data via quantum uncertainty. Quantum computing could enable exponential speedups for certain classes of problems by exploiting superposition and entanglement in the manipulation of quantum bits (qubits). One such example is in the artificial intelligence space: we can implement optimization algorithms that use the property of superposition to help speed up the optimization problem, which can eventually lead to better and faster learning algorithms.

Quantum encryption, as its name suggests, relies on the quantum properties of photons, atoms, and other small units of matter to secure information. In this case, the physicists used a quantum property of photons known as polarization, which more or less describes the orientation of a photon. For the teleconference, they assigned photons with two different polarizations, to represent 1’s and 0’s. In this way, a beam of light becomes a cryptographic key they could use to scramble a digital message. If implemented the way physicists first envisioned it back in the 1980’s, quantum encryption would be unbreakable. The protocol is a bit complicated, but it essentially involves the sender transmitting photons to the recipient to form a key, and both parties sharing part of the key publicly. If someone had tried to intercept it, the recipient’s key would not match the sender’s key in a specific statistical way, set by rules in quantum mechanics. The sender would immediately know the key was compromised. Physicists also see quantum encryption as an important tool for when quantum computers finally become functional. These quantum computers—or more likely, the ones to follow a few decades later—could bust the best encryption algorithms today. But no computer could crack a properly quantum-encrypted message. Key words: properly encrypted. When physicists started to actually build quantum networks, they couldn’t achieve their vision of perfect quantum encryption. It turns out, sending photons thousands of miles across the world through free space, optical fiber, and relay stations, all without corrupting their polarization, is extremely technically challenging. Quantum signals die after about 100 miles of transmission through optical fiber, and no one knows how to amplify a signal yet. The best quantum memories today can only store a key for a matter of minutes before the information disappears. So Pan’s group had to incorporate conventional telecom technology to propagate their quantum signals. At several points in their network, they had to convert quantum information (polarizations) into classical information (voltages and currents) and then back into quantum. This isn’t ideal, because the absolute security of a quantum key relies on its quantum-ness. Anytime the key gets converted into classical information, normal hacking rules apply.

Quantum computers working with classical systems have the potential to solve complex real-world problems such as simulating chemistry, modelling financial risk and optimizing supply chains. One of the areas they’re applying their technology is chemistry simulations, for example to understand how materials behave and how chemicals interact. One particularly interesting problem is designing the chemical composition of more effective batteries. These could be used in the next generation of electric vehicles. Exxon Mobil plans to use quantum computing to better understand catalytic and molecular interactions that are too difficult to calculate with classical computers. Potential applications include more predictive environmental models and highly accurate quantum chemistry calculations to enable the discovery of new materials for more efficient carbon capture.

JP Morgan Chase is focusing on use cases for quantum computing in the financial industry, including trading strategies, portfolio optimization, asset pricing and risk analysis.

Accelerating drug discovery through quantum-computing molecule comparison: molecular comparison is an important process in early-phase drug design and discovery. Today, it takes pharmaceutical companies 10+ years and often billions of dollars to discover a new drug and bring it to market. Improving the front end of the process with quantum computing could dramatically cut costs and time to market, make it easier to repurpose pre-approved drugs for new applications, and empower computational chemists to make discoveries faster, potentially leading to cures for a range of diseases.
Another key use case for quantum computers is the design of new materials and drugs.

Revolutionizing the molecule comparison process: quantum computing has the potential to change the very definition of molecular comparison by enabling pharmaceutical and materials science companies to develop methods for analyzing larger molecules. Today, companies can run hundreds of millions of comparisons on classical computers; however, they are limited to molecules up to the size that a classical computer can actually handle. As quantum computers become more readily available, it will be possible to compare much larger molecules, which opens the door to more pharmaceutical advancements and cures for a range of diseases.
Discovering new battery materials could unlock a billion-dollar opportunity for the automotive industry: a quantum computer could simulate the actual behavior of a battery, which is not possible with existing computing power. Daimler joins other automotive companies experimenting with quantum computing's potential applications. Ford Motor Co. is researching how the technology could quickly optimize driving routes and improve the structure of batteries for electric vehicles. Volkswagen AG is developing a quantum-computing-based traffic-management system that could be offered as a commercial service, and it is also interested in developing more advanced batteries. Today, battery development and testing is a physical process that requires experts to build prototypes first, because there is no adequate simulation software. A quantum computer could help Mercedes-Benz find new materials, or combinations of materials, that result in better electrochemical performance and longer battery life cycles. Some of those innovations could include organic batteries, which could be safer, more energy efficient, and environmentally friendly.

Full-Scale Fault Tolerance

The third phase is still decades away. A universal fault-tolerant quantum computer is the grand challenge of quantum computing.  It is a device that can properly perform universal quantum operations using unreliable components. Today's quantum computers are not fault-tolerant. Achieving full-scale fault tolerance will require makers of quantum technology to overcome additional technical constraints, including problems related to scale and stability. But once they arrive, we expect fault-tolerant quantum computers to affect a broad array of industries. They have the potential to vastly reduce trial and error and improve automation in the specialty-chemicals market, enable tail-event defensive trading and risk-driven high-frequency trading strategies in finance, and even promote in silico drug discovery, which has major implications for personalized medicine.

Now is the right time for business leaders to prepare for quantum. The conditions are in place to experiment with and expand this fundamentally new technology, and organizations that seek to be at the forefront of this transformational shift will seize competitive advantage. Rigetti, like Google, IBM, and Intel, preaches the idea that this advance will bring about a wild new phase of the cloud computing revolution: data centers stuffed with quantum processors will be rented out to companies freed to design chemical processes and drugs more quickly, or to deploy powerful new forms of machine learning. Over the last decade, banks and government institutions in multiple countries including the US, China, and Switzerland have dabbled in quantum encryption products, but Christensen suspects that the technology will remain niche for a while longer; because the technology is so new, the costs and benefits aren't clear yet. The IBM Q Network is working with 45 clients, including startups, academic institutions, and Fortune 500 companies. Large enterprise clients are investing in the emerging technology now so they will be prepared when a commercial-grade quantum computer, capable of error correction and solving large-scale problems, comes to market. With all this promise, it's little surprise that the value creation numbers get very big over time.

----------------- 
Reference:

https://www.ibm.com/thought-leadership/innovation_explanations/article/dario-gil-quantum-computing.html
Dario Gil, IBM Research : https://www.youtube.com/watch?v=yy6TV9Dntlw
https://www.youtube.com/watch?v=lypnkNm0B4A
https://fortune.com/longform/business-quantum-computing/
https://www.wired.com/2017/03/race-sell-true-quantum-computers-begins-really-exist/?mbid=BottomRelatedStories
https://www.wired.com/story/quantum-computing-factory-taking-on-google-ibm/?mbid=BottomRelatedStories
https://www.wired.com/story/why-this-intercontinental-quantum-encrypted-video-hangout-is-a-big-deal/?mbid=BottomRelatedStories
https://www.accenture.com/ro-en/success-biogen-quantum-computing-advance-drug-discovery

Thursday, December 26, 2019

Distributed computing with Message Passing Interface (MPI)

One thing is certain: The explosion of data creation in our society will continue as far as experts and anyone else can forecast. In response, there is an insatiable demand for more advanced high performance computing to make this data useful.

The IT industry has been pushing to new levels of high-end computing performance; this is the dawn of the exascale era of computing. Recent announcements from the US Department of Energy for exascale computers represent the starting point for a new generation of computing advances. This is critical for the advancement of any number of use cases, such as understanding the interactions underlying weather, sub-atomic structures, genomics, physics, rapidly emerging artificial intelligence applications, and other important scientific fields. The rate of supercomputer performance improvement slowed to a doubling only about every 2.3 years from 2009 to 2019, due to several factors including the slowdown of Moore's Law and the end of Dennard scaling. Pushing the bleeding edge of performance and efficiency will require new architectures and computing paradigms. There is a good chance that 5-nanometer technology could come to market later this year or in 2021, thanks to advances in semiconductor engineering.

The rapidly increasing number of cores in modern microprocessors is pushing current high performance computing (HPC) systems into the exascale era. The hybrid nature of these systems, distributed memory across nodes and shared memory with non-uniform memory access within each node, poses a challenge. The Message Passing Interface (MPI) is a standardized message-passing library interface specification: an abstract description of how messages can be exchanged between different processes. In other words, MPI is a portable message-passing library standard developed for distributed and parallel computing. Because it is a standard, MPI has multiple implementations. It provides a standardized means of exchanging messages between multiple computers running a parallel program across distributed memory, and gives users the flexibility to call its routines from C, C++, Fortran, C#, Java, or Python. The advantages of MPI over older message-passing libraries are portability (MPI has been implemented for almost every distributed-memory architecture) and speed (each implementation is, in principle, optimized for the hardware on which it runs). MPI is the dominant communications protocol used in high performance computing today for writing message-passing programs, and the MPI-4.0 standard is under development.
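To make the programming model concrete, here is a minimal sketch of an MPI program in C. It assumes only a conforming implementation (MPICH, Open MPI, or one of their derivatives) that provides the usual mpicc/mpirun wrappers:

/* hello_mpi.c - a minimal MPI program (illustrative sketch) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime            */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id (0..size-1)    */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes        */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut the MPI runtime down        */
    return 0;
}

Compiled with mpicc and launched with, for example, mpirun -np 4 ./hello_mpi, the same source should work unchanged on any of the implementations discussed below.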

MPI Implementations and their derivatives:

There are a number of groups working on MPI implementations. The two principal ones are Open MPI, an open-source implementation, and MPICH. MPICH is used as the foundation for the vast majority of MPI derivatives, including IBM MPI (for Blue Gene), Intel MPI, Cray MPI, Microsoft MPI, Myricom MPI, OSU MVAPICH/MVAPICH2, and many others; MPICH and its derivatives form the most widely used implementations of MPI in the world. On the other side, IBM Spectrum MPI and Mellanox HPC-X are based on Open MPI. Similarly, bullx MPI is built around Open MPI, enhanced by Bull with optimized collective communication.

Open MPI was formed by merging FT-MPI, LA-MPI, LAM/MPI, and PACX-MPI, and is found in many TOP500 supercomputers.

MPI offers a standard API and is portable: the same source code can be used on all platforms without modification, and it is relatively trivial to switch an application between different MPI implementations. Most MPI implementations use sockets for TCP-based communication, and odds are good that any given MPI implementation will be better optimized and provide faster message passing than a home-grown application using sockets directly. In addition, should you ever get a chance to run your code on a cluster that has InfiniBand, the MPI layer will abstract away those code changes; this is not a trivial advantage, as coding an application to use OFED (or another InfiniBand Verbs implementation) directly is very difficult. Most MPI implementations also include small test applications that can be used to verify the correctness of the networking setup independently of your application, which is a major advantage when it comes time to debug. Finally, the MPI standard includes the PMPI profiling interface, which also makes it easy to add checksums or other data verification to every message.
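As an illustration of the PMPI profiling layer (a sketch, not a full profiling tool), the wrapper below intercepts MPI_Send by providing its own definition and forwarding to the name-shifted PMPI_Send entry point; linked ahead of the MPI library, it counts every send without any change to the application code:

/* pmpi_count_send.c - illustrative PMPI wrapper that counts MPI_Send calls */
#include <mpi.h>
#include <stdio.h>

static long send_calls = 0;

/* The application's MPI_Send calls land here ...                          */
int MPI_Send(const void *buf, int count, MPI_Datatype type,
             int dest, int tag, MPI_Comm comm)
{
    send_calls++;                      /* profiling hook: count (or checksum) */
    return PMPI_Send(buf, count, type, dest, tag, comm);  /* ... then forward */
}

/* Report the count when the application shuts MPI down. */
int MPI_Finalize(void)
{
    printf("MPI_Send was called %ld times on this rank\n", send_calls);
    return PMPI_Finalize();
}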

The standard Message Passing Interface (MPI) has two-sided (point-to-point) and collective communication models. In these models, both sender and receiver have to participate in data-exchange operations explicitly, which requires synchronization between the processes. Communications can be of two types, as illustrated in the sketch after this list:
  •     Point-to-Point : Two processes in the same communicator are going to communicate.
  •     Collective : All the processes in a communicator are going to communicate together.
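As a minimal illustration of the two models (assuming at least two ranks in MPI_COMM_WORLD), rank 0 sends one integer to rank 1 point-to-point, and then every rank participates in a collective broadcast:

/* p2p_vs_collective.c - illustrative point-to-point and collective calls */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Point-to-point: only ranks 0 and 1 participate. */
    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, /*tag*/ 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    /* Collective: every rank in the communicator must call MPI_Bcast. */
    MPI_Bcast(&value, 1, MPI_INT, /*root*/ 0, MPI_COMM_WORLD);
    printf("rank %d now holds %d\n", rank, value);

    MPI_Finalize();
    return 0;
}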

One-sided communications are a newer type that allows communication to be performed in a highly asynchronous way by defining windows of memory that every process can write to and/or read from. All of these revolve around the idea of Remote Memory Access (RMA). Traditional point-to-point or collective communications basically work in two steps: first the data is transferred from the originating process(es) to the destination(s), then the sending and receiving processes are synchronized in some way (be it blocking synchronization or a call to MPI_Wait). RMA allows us to decouple these two steps. One of the biggest implications is the possibility of defining shared memory that will be used by many processes (cf. MPI_Win_allocate_shared). Although shared memory might seem out of scope for MPI, which was initially designed for distributed memory, it makes sense to include such functionality to support processes sharing the same NUMA node, for instance. All of these functionalities are grouped under the name of "one-sided communications", since only one process needs to act in order to store or load information in a shared-memory buffer.
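Here is a hedged sketch of the one-sided model using window creation, MPI_Put, and fence synchronization (again assuming at least two ranks); note that rank 1 never posts a matching receive:

/* rma_put.c - illustrative one-sided (RMA) communication through a window */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, target_buf = -1;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every rank exposes one int of its memory as an RMA window. */
    MPI_Win_create(&target_buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);               /* open an access epoch               */
    if (rank == 0) {
        int value = 42;
        /* Write directly into rank 1's window; rank 1 makes no matching call. */
        MPI_Put(&value, 1, MPI_INT, /*target rank*/ 1,
                /*target displacement*/ 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);               /* close the epoch: data is visible   */

    if (rank == 1)
        printf("rank 1 received %d via MPI_Put\n", target_buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}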
 



In two-sided communication, memory is private to each process. When the sender calls the MPI_Send operation and the receiver calls the MPI_Recv operation, data in the sender memory is copied to a buffer then sent over the network, where it is copied to the receiver memory. One drawback of this approach is that the sender has to wait for the receiver to be ready to receive the data before it can send the data. This may cause a delay in sending data as shown here.


A simplified diagram of MPI two-sided send/receive: the sender calls MPI_Send but has to wait until the receiver calls MPI_Recv before data can be sent.

To overcome this drawback, the MPI-2 standard introduced Remote Memory Access (RMA), also called one-sided communication because it requires only one process to transfer data. One-sided communication decouples data transfer from system synchronization. The MPI 3.0 standard revised and extended the one-sided communication interface, adding new functionality to improve the performance of MPI-2 RMA.

Collective Data Movement:

MPI_BCAST, MPI_GATHER, and MPI_SCATTER are collective data-movement routines in which all processes interact with a distinguished root process. Consider, for example, the communication performed in a finite difference program: the computation proceeds in five phases, (1) broadcast, (2) scatter, (3) nearest-neighbour exchange, (4) reduction, and (5) gather. A minimal sketch of the same pattern follows the list below.

  1. MPI_BCAST to broadcast the problem size parameter (size) from process 0 to all np processes;
  2. MPI_SCATTER to distribute an input array (work) from process 0 to other processes, so that each process receives size/np elements; 
  3. MPI_SEND and MPI_RECV for exchange of data (a single floating-point number) with neighbours;
  4. MPI_ALLREDUCE to determine the maximum of a set of localerr values computed at the different processes and to distribute this maximum value to each process; and
  5. MPI_GATHER to accumulate an output array at process 0. 
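The following is a minimal, illustrative skeleton of those five phases; the variable names (size, work, chunk, localerr) are placeholders standing in for whatever the real finite difference code uses:

/* five_phases.c - illustrative skeleton of the collective pattern above */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, np, size = 0;
    double localerr, maxerr, neighbour = 0.0;
    double *work = NULL, *result = NULL, *chunk;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &np);

    if (rank == 0) {                      /* root owns the full input array */
        size = 8 * np;
        work   = malloc(size * sizeof(double));
        result = malloc(size * sizeof(double));
        for (int i = 0; i < size; i++) work[i] = (double)i;
    }

    /* (1) broadcast the problem size from process 0 to all np processes   */
    MPI_Bcast(&size, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* (2) scatter size/np elements of the input array to each process     */
    chunk = malloc((size / np) * sizeof(double));
    MPI_Scatter(work, size / np, MPI_DOUBLE,
                chunk, size / np, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* (3) nearest-neighbour exchange of a single floating-point value     */
    if (rank + 1 < np)
        MPI_Send(&chunk[size / np - 1], 1, MPI_DOUBLE, rank + 1, 0, MPI_COMM_WORLD);
    if (rank > 0)
        MPI_Recv(&neighbour, 1, MPI_DOUBLE, rank - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    /* ... local computation producing localerr would go here ...          */
    localerr = (double)rank;

    /* (4) reduce the local errors to a global maximum on every process    */
    MPI_Allreduce(&localerr, &maxerr, 1, MPI_DOUBLE, MPI_MAX, MPI_COMM_WORLD);

    /* (5) gather the partial results back to process 0                    */
    MPI_Gather(chunk, size / np, MPI_DOUBLE,
               result, size / np, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}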

Many common MPI benchmarks are based primarily on point-to-point communication, providing the best opportunities for analyzing the performance impact of Open MPI's Modular Component Architecture (MCA) on real applications. Open MPI implements the MPI point-to-point functions on top of the Point-to-Point Management Layer (PML) and Point-to-Point Transport Layer (PTL) frameworks. The PML fragments messages, schedules fragments across PTLs, and handles incoming message matching; the PTL provides an interface between the PML and the underlying network devices.



[Figure: Open MPI point-to-point architecture, showing the Point-to-Point Management Layer (PML), the Point-to-Point Transport Layer (PTL), and the Bit-Transport Layer (BTL).]


Open MPI is a large project containing many different sub-systems and a relatively large code base. It has three main sections of code:
  •     OMPI: The MPI API and supporting logic
  •     ORTE: The Open Run-Time Environment (support for different back-end run-time systems)
  •     OPAL: The Open Portable Access Layer (utility and "glue" code used by OMPI and ORTE)
There are strict abstraction barriers in the code between these sections. That is, they are compiled into three separate libraries: libmpi, liborte, and libopal with a strict dependency order: OMPI depends on ORTE and OPAL, and ORTE depends on OPAL.

The message passing interface (MPI) is one of the most popular parallel programming models for distributed-memory systems. As the number of cores per node has increased, programmers have increasingly combined MPI with shared-memory parallel programming interfaces such as the OpenMP programming model. This hybrid of distributed-memory and shared-memory programming idioms helps programmers perform efficient inter-node communication while effectively exploiting advances in node-level architectures, including multicore and many-core processors.

Version 3.0 of the MPI standard adds a new interprocess shared-memory extension (MPI SHM), now supported by many MPI distributions. MPI SHM enables programmers to create regions of shared memory that are directly accessible by MPI processes within the same shared-memory domain. In contrast with hybrid approaches, MPI SHM offers an incremental way to manage memory resources within a node: individual data structures can be moved into shared segments to reduce the memory footprint and improve the communication efficiency of MPI programs. Halo exchange is a prototypical neighborhood-exchange communication pattern; because the adjacency of communication partners often results in communication between processes on the same node, such patterns are good candidates for acceleration through MPI SHM. Applied to this common pattern, direct data sharing can replace explicit message passing and yield significant performance gains.
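A minimal sketch of the MPI SHM idea, assuming at least two ranks land on the same node: the ranks that share a memory domain are grouped with MPI_Comm_split_type, a shared window is allocated with MPI_Win_allocate_shared, and a neighbour's segment is then read through a plain pointer instead of a send/receive pair:

/* shm_window.c - illustrative MPI-3 shared-memory (MPI SHM) window */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int node_rank, disp;
    double *local;
    MPI_Comm node_comm;
    MPI_Win win;
    MPI_Aint sz;

    MPI_Init(&argc, &argv);

    /* Group the ranks that share a memory domain (the same node). */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);
    MPI_Comm_rank(node_comm, &node_rank);

    /* Each rank contributes one double to a window that is directly
       load/store accessible by every rank on the node. */
    MPI_Win_allocate_shared(sizeof(double), sizeof(double), MPI_INFO_NULL,
                            node_comm, &local, &win);
    *local = (double)node_rank;

    MPI_Win_fence(0, win);               /* make the writes visible */

    /* Read a neighbour's segment with a plain pointer, no MPI_Send/Recv. */
    if (node_rank == 0) {
        double *neighbour;
        MPI_Win_shared_query(win, 1, &sz, &disp, &neighbour);
        printf("rank 0 reads %.1f straight from rank 1's segment\n", *neighbour);
    }

    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}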


Open MPI includes an implementation of OpenSHMEM. OpenSHMEM is a PGAS (partitioned global address space) API for single-sided asynchronous scalable communications in HPC applications. An OpenSHMEM program is SPMD (single program, multiple data) in style. The SHMEM processes, called processing elements or PEs, all start at the same time and they all run the same program. Usually the PEs perform computation on their own subdomains of the larger problem and periodically communicate with other PEs to exchange information on which the next computation phase depends. OpenSHMEM is particularly advantageous for applications at extreme scales with many small put/get operations and/or irregular communication patterns across compute nodes, since it offloads communication operations to the hardware whenever possible. One-sided operations are non-blocking and asynchronous, allowing the program to continue its execution along with the data transfer.
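For comparison, here is a minimal OpenSHMEM sketch (built with the oshcc wrapper and launched with oshrun in Open MPI's OpenSHMEM implementation): each PE performs a one-sided put into the symmetric buffer of the next PE, with no matching receive on the target side:

/* shmem_put.c - illustrative OpenSHMEM one-sided put between PEs */
#include <shmem.h>
#include <stdio.h>

int main(void)
{
    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    /* Symmetric allocation: the same remotely accessible buffer exists on every PE. */
    int *dest = shmem_malloc(sizeof(int));
    *dest = -1;
    shmem_barrier_all();

    /* Each PE writes its id into the buffer of the next PE, one-sided. */
    int src = me;
    shmem_int_put(dest, &src, 1, (me + 1) % npes);

    shmem_barrier_all();
    printf("PE %d received %d\n", me, *dest);

    shmem_free(dest);
    shmem_finalize();
    return 0;
}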

IBM Spectrum® MPI is an implementation of Open MPI, so its basic architecture and functionality are similar: it uses the same basic code structure as Open MPI and is made up of the OMPI, ORTE, and OPAL sections discussed above. IBM Spectrum MPI is a high-performance, production-quality implementation of the Message Passing Interface that accelerates application performance in distributed computing environments and provides a familiar, portable interface based on the open-source MPI. It goes beyond Open MPI by adding unique features of its own, such as advanced CPU affinity features, dynamic selection of interface libraries, superior workload manager integration, and better performance. IBM Spectrum MPI supports a broad range of industry-standard platforms, interconnects, and operating systems, helping to ensure that parallel applications can run almost anywhere. IBM Spectrum MPI Version 10.2 delivers an improved, RDMA-capable Parallel Active Messaging Interface (PAMI) using Mellanox OFED on both POWER8® and POWER9™ systems in Little Endian mode. It also offers an improved collective MPI library that supports the seamless use of GPU memory buffers for the application developer; the library provides advanced logic to select the fastest of many algorithm implementations for each MPI collective operation.


As high-performance computing (HPC) bends to the needs of "big data" applications, speed remains essential. But it is not only a question of how quickly one can compute problems; it is also a question of how quickly one can program the complex applications that do so. High performance computing is no longer limited to those who own supercomputers. HPC's democratization has been driven in particular by cloud computing, which gives scientists access to supercomputing-like capabilities at the cost of a few dollars per hour.

Interest in HPC in the cloud has been growing over the past few years. The cloud offers applications a range of benefits, including elasticity, small startup and maintenance costs, and economies of scale. Yet, compared to traditional HPC systems such as supercomputers, some of the cloud's primary benefits for HPC arise from its virtualization flexibility: in contrast to supercomputers' strictly preserved system software, the cloud lets scientists build their own virtual machines and configure them to suit their needs and preferences. In general, the cloud is still considered an addition to traditional supercomputers, a bursting solution for cases in which internal resources are overused, especially for small-scale experiments, testing, and initial research. Clouds are convenient for embarrassingly parallel applications (those that do not communicate very much among partitions), which can scale even on the commodity interconnects common to contemporary clouds. This is the beauty of supercomputer engineering: demand driving innovation, and the exascale era is just the next milestone on the never-ending HPC journey.

Reference:
https://stackoverflow.com/questions/153616/mpi-or-sockets
https://www.ibm.com/support/knowledgecenter/en/SSZTET_10.3/admin/smpi02_running_apps.html
https://hpc.llnl.gov/sites/default/files/MPI-SpectrumUserGuide.pdf 
https://computing.llnl.gov/tutorials/mpi/
http://www.cs.nuim.ie/~dkelly/CS402-06/Message%20Passing%20Interface.htm 
https://www.sciencedirect.com/topics/computer-science/message-passing-interface 
https://www.nextplatform.com/2020/02/13/going-beyond-exascale-computing/?_lrsc=5244db38-a9d0-4d40-9c04-2d8c2ecf4755 
https://dl.acm.org/doi/pdf/10.1145/2966884.2966909