Thursday, February 21, 2019

Kubernetes [K8s] Architecture and setup on RHEL

Kubernetes (k8s) is an open-source container management platform designed to run enterprise-class, cloud-enabled and web-scalable IT workloads. With the rise of containerization in the world of DevOps, the need for a platform to effectively orchestrate these containers also grew. Since Kubernetes operates at the container level rather than at the hardware level, it provides some generally applicable features common to PaaS offerings. It is a vendor-agnostic cluster and container management tool, open-sourced by Google in 2014.

Kubernetes Architecture:

Kubernetes is designed on the principles of scalability, availability, security and portability. It optimizes infrastructure cost by efficiently distributing the workload across available resources. Like most distributed computing platforms, a Kubernetes cluster consists of at least one master and multiple compute nodes (also known as worker nodes). The master is responsible for exposing the application programming interface (API), scheduling deployments and managing the overall cluster. Each node runs a container runtime, such as Docker or rkt (a container system developed by CoreOS as a lightweight and secure alternative to Docker), along with an agent that communicates with the master. The node also runs additional components for logging, monitoring, service discovery and optional add-ons. Nodes are the workhorses of a Kubernetes cluster: they expose compute, networking and storage resources to applications. Nodes can be virtual machines (VMs) running in a cloud or bare-metal servers running within the data center.
A pod is a collection of one or more containers and serves as Kubernetes’ core unit of management. Pods act as the logical boundary for containers sharing the same context and resources. The grouping mechanism of pods bridges the gap between containerization and virtualization by making it possible to run multiple dependent processes together. At runtime, pods can be scaled by creating replica sets, which ensure that the deployment always runs the desired number of pods.
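As a sketch (names are hypothetical), a replica set that keeps three nginx pods running at all times could be declared like this:

```yaml
# Hypothetical example: a ReplicaSet that maintains three replicas of an nginx pod.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 3            # desired number of pods; Kubernetes reconciles toward this
  selector:
    matchLabels:
      app: nginx         # pods matching this label count toward the replica total
  template:              # pod template used to create missing replicas
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
```

If a pod dies, the replica set notices the count dropped below 3 and starts a replacement.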

Replica sets deliver the required scale and availability by maintaining a pre-defined set of pods at all times. A single pod or a replica set can be exposed to internal or external consumers via services. Services enable the discovery of pods by associating a set of pods with a specific criterion. Pods are associated with services through key-value pairs called labels and selectors. Any new pod with labels that match the selector is automatically discovered by the service. This architecture provides a flexible, loosely-coupled mechanism for service discovery.
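A minimal sketch of the label/selector mechanism (hypothetical names): any pod carrying the label app: nginx, current or future, is automatically picked up by this service:

```yaml
# Hypothetical example: a Service discovering pods by label.
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx        # any pod labeled app=nginx becomes an endpoint of this service
  ports:
  - port: 80          # port exposed by the service
    targetPort: 80    # port the pod's container listens on
```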
The definitions of Kubernetes objects, such as pods, replica sets and services, are submitted to the master. Based on the defined requirements and the availability of resources, the master schedules the pod on a specific node. The node pulls the images from the container image registry and coordinates with the local container runtime to launch the container.

etcd is an open source, distributed key-value database from CoreOS, which acts as the single source of truth (SSOT) for all components of the Kubernetes cluster. The master queries etcd to retrieve various parameters of the state of the nodes, pods and containers.


Kubernetes Components:

 i) Master Components
Master components provide the cluster’s control plane. They make global decisions about the cluster (for example, scheduling) and detect and respond to cluster events (for example, starting up a new pod when a replication controller’s ‘replicas’ field is unsatisfied). The master stores the state and configuration data for the entire cluster in etcd, a persistent and distributed key-value data store. Each node has access to etcd and, through it, nodes learn how to maintain the configurations of the containers they’re running. You can run etcd on the Kubernetes master or in standalone configurations.


  • kube-apiserver
  • etcd
  • kube-scheduler
  • kube-controller-manager
  • cloud-controller-manager

ii) Node Components
Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment. All nodes in a Kubernetes cluster must be configured with a container runtime, which is typically Docker. The container runtime starts and manages the containers as they’re deployed to nodes in the cluster by Kubernetes.
  • kubelet
  • kube-proxy
  • Container Runtime

----------------------------------------------------------------------------

Installation Steps of Kubernetes (v1.13, via kubeadm) on RHEL

---------------------------------------------------------------------------


Let’s do the installation on two x86 nodes running RHEL 7.5:
1) master-node
2) worker-node

Step 1: On master-node, disable SELinux and set up firewall rules
  1. setenforce 0
  2. sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

Set the following firewall rules and other configuration details.


  1.  firewall-cmd --permanent --add-port=6443/tcp
  2.  firewall-cmd --permanent --add-port=2379-2380/tcp
  3.  firewall-cmd --permanent --add-port=10250/tcp
  4.  firewall-cmd --permanent --add-port=10251/tcp
  5.  firewall-cmd --permanent --add-port=10252/tcp
  6.  firewall-cmd --permanent --add-port=10255/tcp
  7.  firewall-cmd --reload
  8.  modprobe br_netfilter
  9.  echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

NOTE: You MUST disable swap in order for the kubelet to work properly.
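Disabling swap means running swapoff -a and commenting out any swap entries in /etc/fstab so the change survives a reboot. A minimal sketch of that fstab edit, demonstrated on a sample copy rather than the real file:

```shell
# Create a sample fstab to demonstrate the edit (on a real host, run the
# same sed against /etc/fstab as root).
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/rhel-root   /       xfs     defaults        0 0
/dev/mapper/rhel-swap   swap    swap    defaults        0 0
EOF

swapoff -a 2>/dev/null || true                  # turn swap off now (needs root on a real host)
sed -i '/\sswap\s/ s/^/#/' /tmp/fstab.sample    # comment out swap lines so swap stays off after reboot
cat /tmp/fstab.sample
```

The root filesystem line is left untouched; only lines containing a swap mount are commented out.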

Step 2: Configure Kubernetes Repository
cat /etc/yum.repos.d/kubernetes.repo
-----------------------------------------------------------
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
----------------------------------------------------------

Then install kubelet, kubeadm and kubectl from this repo:

  1. yum install -y kubelet kubeadm kubectl

Step 3: Configure Docker - validated version (18.06) for Kubernetes

cat /etc/yum.repos.d/docker-main.repo
-------------------------------------------------------
[docker-main-repo]
name=Docker main Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
-------------------------------------------------------

Step 4: Install docker

[root@master-node ]# yum install docker-ce-18.06*
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64 0:18.06.3.ce-3.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=======================================================================================================================================
 Package                      Arch                      Version                              Repository                           Size
=======================================================================================================================================
Installing:
 docker-ce                    x86_64                    18.06.3.ce-3.el7                     docker-ce-stable                     41 M

Transaction Summary
=======================================================================================================================================
Install  1 Package

Total size: 41 M
Installed size: 168 M
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : docker-ce-18.06.3.ce-3.el7.x86_64                                                                                   1/1
  Verifying  : docker-ce-18.06.3.ce-3.el7.x86_64                                                                                   1/1

Installed:
  docker-ce.x86_64 0:18.06.3.ce-3.el7

Complete!
[root@master-node ]#
-------------------------------------------------------------------

Step 5: Start and enable the kubelet

[root@master-node ~]# systemctl  start kubelet
[root@master-node ~]# systemctl  status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           +-10-kubeadm.conf
   Active: active (running) since Thu 2019-02-21 01:00:56 EST; 1min 50s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 9551 (kubelet)
    Tasks: 69
   Memory: 56.3M
   CGroup: /system.slice/kubelet.service
           +-9551 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubele...

Feb 21 01:02:22 master-node kubelet[9551]: W0221 01:02:22.527780    9551 cni.go:203] Unable to update cni config: No networks fo...i/net.d
Feb 21 01:02:22 master-node kubelet[9551]: E0221 01:02:22.528025    9551 kubelet.go:2192] Container runtime network not ready: N...ialized
Feb 21 01:02:27 master-node kubelet[9551]: W0221 01:02:27.528986    9551 cni.go:203] Unable to update cni config: No networks fo...i/net.d
Feb 21 01:02:27 master-node kubelet[9551]: E0221 01:02:27.529196    9551 kubelet.go:2192] Container runtime network not ready: N...ialized
Feb 21 01:02:32 master-node kubelet[9551]: W0221 01:02:32.530265    9551 cni.go:203] Unable to update cni config: No networks fo...i/net.d
Feb 21 01:02:32 master-node kubelet[9551]: E0221 01:02:32.530448    9551 kubelet.go:2192] Container runtime network not ready: N...ialized
Feb 21 01:02:37 master-node kubelet[9551]: W0221 01:02:37.531526    9551 cni.go:203] Unable to update cni config: No networks fo...i/net.d
Feb 21 01:02:37 master-node kubelet[9551]: E0221 01:02:37.531645    9551 kubelet.go:2192] Container runtime network not ready: N...ialized
Feb 21 01:02:42 master-node kubelet[9551]: W0221 01:02:42.532552    9551 cni.go:203] Unable to update cni config: No networks fo...i/net.d
Feb 21 01:02:42 master-node kubelet[9551]: E0221 01:02:42.532683    9551 kubelet.go:2192] Container runtime network not ready: N...ialized
Hint: Some lines were ellipsized, use -l to show in full.
[root@master-node ~]#

[root@master-node ]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
[root@master-node]#
-----------------------------------

Step 6: Start and enable docker

[root@master-node]# systemctl restart docker
[root@master-node]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-02-21 00:41:14 EST; 7s ago
     Docs: https://docs.docker.com
 Main PID: 30271 (dockerd)
    Tasks: 161
   Memory: 90.8M
   CGroup: /system.slice/docker.service
           +-30271 /usr/bin/dockerd
           +-30284 docker-containerd --config /var/run/docker/containerd/containerd.toml
           +-30490 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/...
           +-30523 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/...
           +-30541 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/...
           +-30639 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/...
           +-30667 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/...
           +-30747 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/...
           +-30764 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/...
           +-30810 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/...
           +-30938 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/...

Feb 21 00:41:15 master-node dockerd[30271]: time="2019-02-21T00:41:15-05:00" level=info msg="shim docker-containerd-shim started...d=30541
Feb 21 00:41:15 master-node dockerd[30271]: time="2019-02-21T00:41:15-05:00" level=info msg="shim docker-containerd-shim started...d=30639
Feb 21 00:41:16 master-node dockerd[30271]: time="2019-02-21T00:41:16-05:00" level=info msg="shim docker-containerd-shim started...d=30667
Feb 21 00:41:16 master-node dockerd[30271]: time="2019-02-21T00:41:16-05:00" level=info msg="shim docker-containerd-shim started...d=30747
Feb 21 00:41:16 master-node dockerd[30271]: time="2019-02-21T00:41:16-05:00" level=info msg="shim docker-containerd-shim started...d=30764
Feb 21 00:41:16 master-node dockerd[30271]: time="2019-02-21T00:41:16-05:00" level=info msg="shim docker-containerd-shim started...d=30810
Feb 21 00:41:16 master-node dockerd[30271]: time="2019-02-21T00:41:16-05:00" level=info msg="shim docker-containerd-shim started...d=30938
Feb 21 00:41:16 master-node dockerd[30271]: time="2019-02-21T00:41:16-05:00" level=info msg="shim docker-containerd-shim started...d=30983
Feb 21 00:41:17 master-node dockerd[30271]: time="2019-02-21T00:41:17-05:00" level=info msg="shim reaped" id=cc95fdfdc1d7d0d6104...62d4d91
Feb 21 00:41:17 master-node dockerd[30271]: time="2019-02-21T00:41:17.122817061-05:00" level=info msg="ignoring event" module=li...Delete"
Hint: Some lines were ellipsized, use -l to show in full.
[root@master-node]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@master-node]#


----------------------------------------
Step 7: Check the version of Kubernetes installed on the master node:
[root@master-node ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:05:53Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
[root@master-node ~]#

----------------------------------------------

Step 8: Check the version of docker installed:

[root@master-node]# docker version
Client:
 Version:           18.06.3-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        d7080c1
 Built:             Wed Feb 20 02:26:51 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.3-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       d7080c1
  Built:            Wed Feb 20 02:28:17 2019
  OS/Arch:          linux/amd64
  Experimental:     false


---------------------------------------------------------------

Step 9: Initialize Kubernetes Master with ‘kubeadm init’


[root@master-node ~]# kubeadm init
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master-node localhost] and IPs [IP_ADDRESS_master-node 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master-node localhost] and IPs [IP_ADDRESS_master-node 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master-node kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 IP_ADDRESS_master-node]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.502302 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master-node" as an annotation
[mark-control-plane] Marking the node master-node as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master-node as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: od9n1d.rltj6quqmm2kojd7
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join IP_ADDRESS_master-node:6443 --token od9n1d.rltj6quqmm2kojd7 --discovery-token-ca-cert-hash sha256:9ea1e1163550080fb9f5f63738fbf094f065de12cd38f493ec4e7c67c735fc7b

[root@master-node ~]#
-----------------------------------------------
If you get a "port already in use" error, run 'kubeadm reset' and then re-run 'kubeadm init'.
The Kubernetes master has been initialized successfully, as shown above.
------------------------------------------------------------------------------------------------------

Step 10: Set up the cluster configuration for the root user.

[root@master-node ~]# mkdir -p $HOME/.kube
[root@master-node ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master-node ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@master-node ~]#


------------------------

Step 11: Check the status on the master node. The node stays NotReady until a pod network add-on is deployed (Step 12).

--------------------------
[root@master-node ~]# kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
master-node   NotReady   master   12m   v1.13.3
[root@master-node ~]#


(The pod listing below was captured after the network add-on and a test pod had been deployed, so it already shows the weave-net pods.)

[root@master-node]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
default       nginx                                 0/1     Pending   0          125m
kube-system   coredns-86c58d9df4-d4j4x              1/1     Running   0          3h49m
kube-system   coredns-86c58d9df4-sg8tk              1/1     Running   0          3h49m
kube-system   etcd-master-node                      1/1     Running   0          3h48m
kube-system   kube-apiserver-master-node            1/1     Running   0          3h48m
kube-system   kube-controller-manager-master-node   1/1     Running   0          3h48m
kube-system   kube-proxy-b6wcd                      1/1     Running   0          159m
kube-system   kube-proxy-qfdhq                      1/1     Running   0          3h49m
kube-system   kube-scheduler-master-node            1/1     Running   0          3h48m
kube-system   weave-net-5c46g                       2/2     Running   0          159m
kube-system   weave-net-7qsnj                       2/2     Running   0          3h35m
[root@master-node]#

-------------------


Step 12: Deploy the pod network (Weave Net)

The Weave Net addon for Kubernetes comes with a Network Policy Controller that automatically monitors Kubernetes for any NetworkPolicy annotations on all namespaces and configures iptables rules to allow or block traffic as directed by the policies.
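For instance, a NetworkPolicy like the following sketch (hypothetical names) would let the policy controller allow ingress to pods labeled app: nginx only from pods labeled role: frontend, blocking everything else:

```yaml
# Hypothetical example: restrict ingress to the nginx pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: nginx          # the policy applies to pods carrying this label
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend  # only traffic from frontend pods is permitted
```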


[root@master-node ~]#  export kubever=$(kubectl version | base64 | tr -d '\n')
[root@master-node ~]#  kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created
[root@master-node ~]#

------------------------

Step 13:Check the status on master node

[root@master-node ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
master-node   Ready    master   14m   v1.13.3
[root@master-node ~]#



-----------------------------
Perform the following steps on each worker node

Step 14: 
  Disable SELinux and other configuration details

  1. setenforce 0
  2. sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
  3. firewall-cmd --permanent --add-port=10250/tcp
  4. firewall-cmd --permanent --add-port=10255/tcp
  5. firewall-cmd --permanent --add-port=30000-32767/tcp
  6. firewall-cmd --permanent --add-port=6783/tcp
  7. firewall-cmd  --reload
  8. echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables

NOTE: You MUST disable swap in order for the kubelet to work properly
----------------------------
Step 15: Configure the Kubernetes and Docker repositories on the worker node (same as Steps 2 and 3 above)

-------------
Step 16: Install docker and the kubeadm packages (same as on the master node)

--------------
Step 17: Start and enable the docker service

-----------------

Step 18: Join the worker node to the master node

When the Kubernetes master was initialized, its output ended with a join command and token. Copy that command and run it on the worker node:

[root@worker-node ~]#  kubeadm join IP_ADDRESS_master-node:6443 --token od9n1d.rltj6quqmm2kojd7 --discovery-token-ca-cert-hash sha256:9ea1e1163550080fb9f5f63738fbf094f065de12cd38f493ec4e7c67c735fc7b
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "IP_ADDRESS_master-node:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://IP_ADDRESS_master-node:6443"
[discovery] Requesting info from "https://IP_ADDRESS_master-node:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "IP_ADDRESS_master-node:6443"
[discovery] Successfully established connection with API Server "IP_ADDRESS_master-node:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "worker-node" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[root@worker-node ~]#

This activates the required services on the worker node.
-------------------------

Step 19: Verify the node status from the master node using kubectl:


[root@master-node]# kubectl get nodes
NAME          STATUS   ROLES    AGE    VERSION
master-node   Ready    master   119m   v1.13.3
worker-node   Ready    <none>   49m    v1.13.3
[root@master-node ]#


As we can see, the master and worker nodes are both in Ready status. This confirms that Kubernetes (v1.13.3 in this walkthrough) has been installed successfully and that the worker node has joined the cluster. Now we can create pods and services.
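For example (a sketch with hypothetical names), a first pod and a NodePort service exposing it could be saved to a file and created with 'kubectl apply -f':

```yaml
# Hypothetical smoke test: one nginx pod plus a NodePort service exposing it.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.15
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort       # exposes the service on a high port of every node
  selector:
    app: nginx
  ports:
  - port: 80
```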


---------------------------------   oooooooooooooooooo -------------------------------------------

Reference:
1) https://docs.google.com/presentation/d/1mbjjxNlPzgZIH1ciyprMRoIAYiEZuFQlG7ElXUvP1wg/edit#slide=id.g3d4e7af7b7_2_52
2) https://github.com/kubernetes-sigs/kube-batch
3) https://github.com/intel/multus-cni
4) https://kubernetes.io/docs/tutorials/kubernetes-basics
5) http://www.developintelligence.com/blog/2017/02/kubernetes-actually-use
6) https://kubernetes.io/docs/setup/independent/install-kubeadm/
7) https://www.linuxtechi.com/install-kubernetes-1-7-centos7-rhel7/
8) https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_kubernetes/get_started_orchestrating_containers_with_kubernetes
9) https://github.com/kubernetes/kubeadm/issues/339
10) https://kubernetes.io/docs
11) https://thenewstack.io/kubernetes-an-overview/
12) https://blog.newrelic.com/engineering/what-is-kubernetes/

Tuesday, January 8, 2019

OpenMP Accelerator Support for GPUs on POWER architecture

The combination of the IBM® POWER® processors and the NVIDIA GPUs provides a platform for heterogeneous high-performance computing that can run several technical computing workloads efficiently. The computational capability is built on top of massively parallel and multithreaded cores within the NVIDIA GPUs and the IBM POWER processors. You can offload parallel operations within applications, such as data analysis or high-performance computing workloads, to GPUs or other devices. 

The OpenMP API specification provides a set of directives that instruct the compiler and runtime to offload a block of code to a device, such as a GPU or an FPGA. In recent generations of the POWER architecture, the POWER processor can be attached to the NVIDIA GPU via high-speed NVLink for fast data transfer between CPU and GPU. This hardware configuration is an essential part of the CORAL project with the U.S. national labs and brings us closer to exascale computing. The IBM XL compilers have a long history of supporting the OpenMP API, starting from the first version of the specification. The XL compilers continue to support the OpenMP specification and to exploit the POWER hardware architecture with GPUs. The XL compiler team works closely with the IBM Research team to develop the compiler infrastructure for the offloading mechanism. In addition, the team collaborates with the open source community on the runtime interface for the GPU device runtime library.



The OpenMP program (C, C++ or Fortran) with device constructs is fed into the High-Level Optimizer and partitioned into CPU and GPU parts. The intermediate code is optimized by the High-Level Optimizer; note that this optimization benefits the code for both the CPU and the GPU. The CPU part is sent to the POWER Low-Level Optimizer for further optimization and code generation. The GPU part is translated to LLVM IR and then fed into the LLVM optimizer in the CUDA Toolkit for NVIDIA-specific optimization and PTX code generation. Finally, the linker is invoked to link the objects into an executable. From this outline view, one can see that the compiler employs expertise from both worlds to ensure that applications are optimized accordingly. For the CPU part, the POWER Low-Level Optimizer, which accumulates many years of optimization knowledge on the POWER architecture, generates optimized code. For the GPU part, the GPU expertise from the CUDA Toolkit is used to generate optimized code for the NVIDIA device. As a result, the entire application is optimized in a balanced way.

The XL C/C++ V13.1.5 and XL Fortran V15.1.5 compilers are among the first compilers to support NVIDIA GPU offloading using the OpenMP 4.5 programming model. This release supports the basic device constructs (i.e. the target, target update and target data directives) to allow users to experiment with the offloading mechanism and port code to the GPU. The other important aspect of offloading computation to devices is data mapping; the map clause is also supported in this release.

 Example:
Compile the offload code (sample_offload.c) using the IBM XL compiler with the flags mentioned below:

 xlc_r -O3 -fopenmp -qsmp=omp -qoffload sample_offload.c

Observe the offloaded task running on the GPU (for example, with nvidia-smi).



  OpenMP offloading compilers for NVIDIA GPUs have improved dramatically over the past year and are ready for real use.



Reference:
1) https://www.openmp.org/updates/openmp-accelerator-support-gpus/
2) https://computation.llnl.gov/projects/co-design/lulesh
3) https://www.ibm.com/support/knowledgecenter/en/SSXVZZ_13.1.5/com.ibm.xlcpp1315.lelinux.doc/proguide/offloading.html
4) https://openpowerfoundation.org/presentations/openmp-accelerator-support-for-gpu/