Monday, February 20, 2023

HPC workloads on Kubernetes cluster with Volcano batch scheduler

Kubernetes is primarily designed for managing containerized workloads, but it can also be used to manage High-Performance Computing (HPC) clusters. Doing so, however, may require additional customization and configuration in the following areas.

Performance: Kubernetes is not specifically designed for high-performance computing workloads, and it may not provide the same level of performance as specialized HPC schedulers. To improve performance, you may need to customize the Kubernetes scheduling policies and resource management.

Networking: HPC workloads often require high-bandwidth, low-latency networking between compute nodes. Kubernetes networking is designed for containerized workloads, and it may not provide the required level of performance for HPC workloads. You may need to use specialized network plugins or customize the Kubernetes networking configuration to achieve the desired performance.
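One common approach is to attach a secondary high-speed interface to a pod through the Multus CNI meta-plugin. A minimal sketch, assuming Multus is installed and a NetworkAttachmentDefinition named rdma-net has been created (both names here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hpc-pod
  annotations:
    # Attach a secondary high-speed interface managed by Multus;
    # "rdma-net" is a hypothetical NetworkAttachmentDefinition
    k8s.v1.cni.cncf.io/networks: rdma-net
spec:
  containers:
    - name: compute
      image: my-mpi-image   # hypothetical image
```

The pod then receives its normal cluster network interface plus the additional interface for MPI or RDMA traffic.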

Storage: HPC workloads often require large amounts of high-performance storage. Kubernetes provides built-in support for persistent volumes, but it may not be optimized for HPC workloads. You may need to use specialized storage solutions or customize the Kubernetes storage configuration.
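For example, a job can request shared high-performance scratch space through a PersistentVolumeClaim. This is only a sketch; the storage class name below is hypothetical and depends on which CSI driver (e.g. one backed by a parallel filesystem such as Lustre) is installed:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scratch-data
spec:
  accessModes:
    - ReadWriteMany            # shared across many compute pods
  storageClassName: lustre-sc  # hypothetical class backed by a parallel filesystem
  resources:
    requests:
      storage: 500Gi
```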

Resource Management: HPC clusters often require strict resource management to ensure that workloads are efficiently utilizing compute resources. Kubernetes provides a range of resource management features, such as quotas and limits, but you may need to customize the configuration to meet the specific needs of your HPC workloads.
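For instance, a ResourceQuota can cap what an HPC team's namespace may consume. A sketch with illustrative values (the namespace name is hypothetical):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: hpc-team-quota
  namespace: hpc-team          # hypothetical namespace
spec:
  hard:
    requests.cpu: "256"
    requests.memory: 1Ti
    limits.cpu: "512"
    # Quotas on extended resources use the "requests." prefix
    requests.nvidia.com/gpu: "16"
```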


Kubernetes Batch: Kubernetes itself ships a built-in batch Job API (the batch/v1 Job and CronJob resources) in the core distribution. It provides basic batch job management functionality, such as job creation, monitoring, and cleanup.

In addition to Kubernetes Batch, there are several other batch schedulers that can be used with Kubernetes for managing batch jobs, including:

  • Apache Airflow: Apache Airflow is a popular open-source platform for creating, scheduling, and monitoring workflows. It can be used to manage batch jobs in Kubernetes using the KubernetesExecutor.
  • Apache Spark: Apache Spark is a distributed computing framework that includes a built-in scheduler for managing batch jobs. Spark can be run on Kubernetes using the Spark operator, which provides support for managing Spark jobs as Kubernetes native resources.
  • HTCondor: HTCondor is a widely-used batch scheduler in the high-performance computing (HPC) community. It can be used with Kubernetes through the HTCondor-Kubernetes integration, which allows HTCondor to manage Kubernetes pods as HTCondor jobs.
  • Slurm: Slurm is another popular batch scheduler in the HPC community. It can be used with Kubernetes through the Slurm-Kubernetes integration, which allows Slurm to manage Kubernetes pods as Slurm jobs.
  • Volcano: Volcano is a batch scheduler designed specifically for Kubernetes. It provides advanced job scheduling and resource management capabilities, such as backfill scheduling and intelligent resource allocation.


Volcano is a Kubernetes-native batch system and job scheduler designed to handle deep learning, machine learning, and high-performance computing (HPC) workloads. It was originally developed at Huawei, released as an open-source project in 2019, and later donated to the CNCF.

Here are some of the features that make Volcano special for HPC workloads:
  • Resource Management: Volcano has advanced resource management capabilities that allow it to efficiently schedule and manage HPC workloads across a large number of nodes. It can allocate GPUs, CPUs, memory, and other resources required for running HPC workloads in a distributed computing environment.
  • Performance Optimization: Volcano is designed to optimize the performance of HPC workloads by minimizing resource contention and reducing the time it takes to start and complete jobs. It uses advanced scheduling algorithms to allocate resources and optimize job execution times.
  • Custom Schedulers: Volcano allows users to create custom schedulers and customize job scheduling policies to meet their specific requirements. This makes it easier for users to configure the scheduler to meet the specific needs of their HPC workloads.
  • Job Prioritization: Volcano supports job prioritization based on various factors such as job dependencies, job age, and user-defined priorities. This ensures that higher priority jobs are executed first, and that resources are allocated efficiently across the cluster.
  • Workflow Support: Volcano supports complex workflow management, allowing users to define dependencies between jobs and execute them in a specific order. This is particularly useful for HPC workloads that require a series of jobs to be executed in a specific sequence.
Overall, Volcano is a powerful job scheduler that is optimized for HPC workloads in Kubernetes environments. Its advanced resource management, performance optimization, and custom scheduling capabilities make it an ideal choice for running complex deep learning, machine learning, and HPC workloads in Kubernetes.
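To illustrate job prioritization, here is a sketch (the class name and value are illustrative) using a standard Kubernetes PriorityClass, which a Volcano job can reference via its priorityClassName field:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-hpc
value: 1000            # higher value = considered for scheduling first
globalDefault: false
description: "Priority class for urgent HPC jobs"
```

A Volcano Job that sets spec.priorityClassName: high-priority-hpc will then be run ahead of lower-priority jobs competing for the same queue.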


Integrating the Volcano scheduler into a Kubernetes cluster involves the following steps:

Install the Volcano components: The first step is to install the Volcano components on the Kubernetes cluster. This can be done using the Volcano Helm chart, which includes all the necessary components such as the Volcano scheduler, admission controllers, and CRDs (Custom Resource Definitions).

Install the Volcano components with the Helm chart:

# Add the Volcano Helm repository
helm repo add volcano-sh https://volcano-sh.github.io/helm-charts
helm repo update

# Install the Volcano components into the volcano-system namespace
helm install volcano volcano-sh/volcano -n volcano-system --create-namespace

NOTE: Helm is a package manager for Kubernetes.

Configure the scheduler: Once the Volcano components are installed, the next step is to configure the Volcano scheduler. This involves setting the scheduling policies, priority classes, and other parameters that govern how the scheduler allocates resources to jobs.

Configure the scheduler with a YAML file (queue.yaml):

apiVersion: scheduling.volcano.sh/v1beta1
kind: Queue
metadata:
  name: default
spec:
  # Scheduling weight of this queue relative to other queues
  weight: 50
  # Upper bound on the resources the queue may consume
  capability:
    cpu: "16"
    memory: "64Gi"
  # Minimum resources guaranteed to the queue
  guarantee:
    resource:
      cpu: "2"
      memory: "8Gi"

Apply the YAML file with the following command:

kubectl apply -f queue.yaml

Define the job templates: After configuring the scheduler, the next step is to define the job templates that will be used to create batch jobs. This involves specifying the Docker image, command, arguments, and resource requirements for each job template.

Define job templates with a YAML file (job.yaml):

apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: pi
spec:
  schedulerName: volcano
  # Set the queue name to use for this job
  queue: default
  minAvailable: 1
  tasks:
    - name: pi
      replicas: 1
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: pi
              image: perl
              command: ["perl"]
              args: ["-Mbignum=bpi", "-wle", "print bpi(2000)"]
              resources:
                limits:
                  cpu: "2"
                  memory: "8Gi"
                requests:
                  cpu: "1"
                  memory: "4Gi"

Apply the YAML file with the following command:

kubectl apply -f job.yaml


Create batch jobs: Once the job templates are defined, batch jobs can be created using the Kubernetes API or the kubectl command-line tool. When a batch job is created, the Volcano scheduler will allocate resources to the job based on the scheduling policies and resource requirements defined in the job template.

Create a batch job with the kubectl command:

kubectl create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'

Monitor and manage batch jobs: Finally, the batch jobs can be monitored and managed using the Kubernetes API or the kubectl command-line tool. This includes viewing the status of running jobs, scaling up or down the number of replicas, and deleting completed jobs.

Monitor and manage batch jobs with kubectl command:

# View the status of all batch jobs

kubectl get jobs

# View the logs for a specific batch job

kubectl logs job/pi

# Adjust the parallelism of a batch job (Jobs have no scale subresource)

kubectl patch job/pi -p '{"spec":{"parallelism":10}}'

# Delete a completed batch job

kubectl delete job/pi


kubectl get vcjobs

This command will display a table with information about all the Volcano jobs currently running, including the job name, namespace, queue, status, and creation time.

If you want to see more details about a specific job, you can use the following command:

kubectl describe vcjob <job-name>


The rdma/hca_shared resource in Kubernetes refers to the HCA (Host Channel Adapter) shared device plugin. This device plugin enables Kubernetes to detect and utilize InfiniBand and RDMA network interfaces on the host nodes, allowing containers to access RDMA resources through a standard Kubernetes API. By using the HCA shared device plugin in a Kubernetes cluster, applications can take advantage of RDMA technology for high-performance networking and low-latency communication. This is especially useful for applications that require high-throughput data transfer or low-latency communication, such as big data analytics, high-performance computing, and machine learning workloads. In short, the rdma/hca_shared configuration enables RDMA support in the cluster, which can improve the performance of applications that depend on fast, low-latency networking.
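A container obtains a share of the host's HCA by requesting the extended resource in its spec. A minimal sketch (the pod and image names are hypothetical; the image is assumed to contain the RDMA user-space libraries):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rdma-test
spec:
  containers:
    - name: mpi-app
      image: my-rdma-image   # hypothetical image with RDMA user-space libraries
      resources:
        limits:
          # Request one share of the host's InfiniBand HCA
          rdma/hca_shared: 1
```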

--------------------Add Nvidia GPU resources to a Kubernetes cluster using YAML file-----

To add Nvidia GPU resources to a Kubernetes cluster using a YAML file, you will need to modify the spec section of the deployment or pod YAML file to include the necessary configuration options. This assumes the Nvidia device plugin is installed on the cluster. Here's an example YAML file for adding Nvidia GPU resources to a Kubernetes deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-container
          image: my-image
          resources:
            limits:
              # Request one Nvidia GPU for this container
              nvidia.com/gpu: 1

In the example above, the resources section specifies that the container needs one Nvidia GPU. You can adjust the number of GPUs by changing the value of the nvidia.com/gpu limit, and you can target a specific GPU type by scheduling onto appropriately labeled nodes (for example, with a nodeSelector).

You can apply this YAML file to your Kubernetes cluster using the kubectl apply -f command.

The most commonly used Kubernetes commands:

  1. kubectl create: creates a resource from a file or from stdin.
  2. kubectl apply: applies changes to an existing resource.
  3. kubectl get: retrieves information about one or more resources.
  4. kubectl describe: provides detailed information about a specific resource or a set of resources.
  5. kubectl delete: removes one or more resources from the cluster.
  6. kubectl logs: displays logs from a specific pod or container.
  7. kubectl exec: runs a command inside a container in a specific pod.
  8. kubectl port-forward: forwards one or more local ports to a pod.
  9. kubectl rollout: manages rolling updates of a deployment.
You can use the kubectl logs command with the -f option to stream logs in real-time, similar to the tail -f command. Here's the syntax for using kubectl logs with the -f option:

kubectl logs -f <pod-name> -c <container-name>
kubectl logs -f my-pod -c my-container

This will start streaming the logs from the my-container container in the my-pod pod in real-time. You can stop the log streaming by pressing Ctrl+C.


The version of the Volcano scheduler installed on the K8s cluster:

$ kubectl get deployment -n volcano-system volcano-scheduler -o=jsonpath='{.spec.template.spec.containers[0].image}' | cut -d':' -f2
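The jsonpath expression prints the full image reference, and the cut step keeps only the tag after the colon. The same extraction can be done with plain shell parameter expansion, shown here on a sample image string:

```shell
# Sample image reference as returned by the jsonpath query above
image="sachin-power-hpc-docker-local/volcanosh/vc-scheduler:1.7"

# Everything after the last ':' is the image tag, i.e. the version
version="${image##*:}"
echo "$version"
```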

----------details ----------------
$ kubectl describe deployment -n volcano-system volcano-scheduler
Name:                   volcano-scheduler
Namespace:              volcano-system
CreationTimestamp:      Fri, 03 Feb 2023 00:38:14 -0500
Labels:                 app=volcano-scheduler
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=volcano-scheduler
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app=volcano-scheduler
  Service Account:  volcano-scheduler
    Image:      sachin-power-hpc-docker-local/volcanosh/vc-scheduler:1.7
NOTE: This command describes the deployment object for the Volcano Scheduler in the volcano-system namespace and provides detailed information about the deployment, including the image tag and version.

You can get the version of Kubernetes installed in your cluster by running the following command:

$ kubectl version --short | grep Server
Server Version: v1.23.4

After creating the job, we can check the job name as shown in Step 2.

Step 1: Submit the job - the spec defines schedulerName: volcano

[spb@k8s-masterNode]#kubectl create -f openmpi-example.yaml
job.batch.volcano.sh/nj-ompi-job created

Step 2: Get the name of the job submitted to the Volcano scheduler. The resource type is vcjob.

[spb@k8s-masterNode]#kubectl get vcjob -o name
job.batch.volcano.sh/nj-ompi-job

Step 3: Observe how the vcjob changes its state from Pending ---> Running ---> Completed

[spb@k8s-masterNode]#kubectl get vcjob
nj-ompi-job   Pending   2                         5s
[spb@k8s-masterNode]#kubectl get vcjob
nj-ompi-job   Running   2              2          14s
[spb@k8s-masterNode]#kubectl get vcjob
nj-ompi-job   Running   2              3          37s
[spb@k8s-masterNode]#kubectl get vcjob
nj-ompi-job   Completed   2                         90s

Step 4: Based on the spec defined in the YAML file, you can display the vcjob with custom columns

[spb@k8s-masterNode]#kubectl get vcjob -o custom-columns='NAME:.metadata.name,MinAvailable:.spec.minAvailable,SCHED:.spec.schedulerName'
NAME          MinAvailable   SCHED
nj-ompi-job   2              volcano

Step 5: Get the status of the vcjob

[spb@k8s-masterNode]#kubectl get vcjob nj-ompi-job -n kube-system -o jsonpath='{.status.conditions[?(@.status=="Completed")].status}'
[spb@k8s-masterNode]#kubectl get vcjob nj-ompi-job -n kube-system -o jsonpath='{.status.conditions[-1:].status}'

Get the status of a specific job:
kubectl get vcjob <job-name> -o yaml

This command will display the job status in YAML format, which can be useful for debugging errors or issues with the job.

Get the logs for a specific pod in the job:
kubectl logs <pod-name>

Get detailed information about the job and its pods:
kubectl describe vcjob <job-name>
kubectl describe pod <pod-name>

View the events for a specific job:

kubectl get events --field-selector involvedObject.name=<pod-name>


$ kubectl get events -n smpici --field-selector involvedObject.name=smpi-build-0
LAST SEEN   TYPE     REASON      OBJECT                                MESSAGE
20m         Normal   Scheduled   pod/smpi-build-0   Successfully assigned smpici/smpi-build-0 to workerNode
20m         Normal   Pulling     pod/smpi-build-0   Pulling image "sachin-power-hpc-docker-local/master:rhel_8.6"
20m         Normal   Pulled      pod/smpi-build-0   Successfully pulled image "sachin-power-hpc-docker-local/master:rhel_8.6" in 2.374936782s
20m         Normal   Created     pod/smpi-build-0   Created container build
20m         Normal   Started     pod/smpi-build-0   Started container build

View the resource usage of a specific pod:
kubectl top pod <pod-name>

This command will display the CPU and memory usage of a specific pod in the job, which can help with diagnosing performance issues or resource constraints.
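Note the units kubectl top reports: CPU in millicores (m) and memory in binary units such as Mi. A quick shell sketch converting sample values like those in the example that follows:

```shell
# Values as reported by kubectl top
cpu="2m"        # CPU in millicores
mem="208Mi"     # memory in MiB

# Strip the unit suffixes to obtain plain numbers
cpu_millicores="${cpu%m}"
mem_mib="${mem%Mi}"

echo "CPU: ${cpu_millicores} millicores"
# 1024 MiB = 1 GiB
awk -v m="$mem_mib" 'BEGIN { printf "Memory: %.2f GiB\n", m / 1024 }'
```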

Example : 
$ kubectl get pods -n smpici
NAME                              READY   STATUS    RESTARTS   AGE
smpi-37-build-0   1/1     Running   0          16m
$ kubectl top pod smpi-37-build-0 -n smpici
NAME                              CPU(cores)   MEMORY(bytes)
smpi-37-build-0                     2m           208Mi

Example: How to delete pods that completed successfully

$ kubectl get pods -n smpici  --field-selector=status.phase==Succeeded
NAME                                      READY   STATUS      RESTARTS   AGE
smpi-30-k8s-mpimaster-0   0/1     Completed   0          5d2h
$ kubectl delete pods -n smpici  --field-selector=status.phase==Succeeded
pod "smpi-30-k8s-mpimaster-0" deleted

-------------------------Vela, IBM’s first AI-optimized, cloud-native supercomputer.----------------------

In early 2023, IBM announced Vela, its first AI-optimized, cloud-native supercomputer, designed for developing and training large-scale AI (foundation) models.

Vela is built inside IBM Cloud from standard cloud infrastructure: each node is a bare-metal server configured for virtualization, so AI workloads run in virtual machines while retaining close to bare-metal performance. This design lets researchers provision and manage large GPU capacity with ordinary cloud tooling rather than a traditional HPC software stack, making the system flexible, scalable, and deployable in any IBM Cloud data center.

A comparison of the traditional HPC software stack and the cloud-native AI stack is shown in the diagram. The downside of a cloud-native stack with virtualization, historically, is that it reduces node performance.


This work includes configuring the bare-metal host for virtualization with support for Virtual Machine Extensions (VMX), single-root I/O virtualization (SR-IOV), and huge pages, which together keep the virtualization overhead low.

Alongside this, IBM has introduced an AI-optimized processor, the IBM AIU (Artificial Intelligence Unit). The AIU is not a graphics processor; it was specifically designed and optimized to accelerate the matrix and vector computations used by deep learning models, and it can solve computationally complex problems and perform data analysis at speeds far beyond the capability of a CPU. Deploying AI to classify cats and dogs in photos is a fun academic exercise, but it won't solve the pressing problems we face today. For AI to tackle the complexities of the real world, things like predicting the next hurricane or other natural calamities, or whether we're heading into a recession, we need enterprise-quality, industrial-scale hardware. The IBM AIU takes us one step closer; the era of cloud-native AI supercomputing has only just begun.

There are many real-world examples of Kubernetes clusters being used to run HPC workloads. Here are a few examples:

  • The National Energy Research Scientific Computing Center (NERSC), which is part of the US Department of Energy, uses Kubernetes to manage its HPC resources. NERSC uses Kubernetes to manage both traditional HPC workloads and machine learning workloads, and has reported significant improvements in resource utilization and efficiency since adopting Kubernetes.
  • Argonne National Laboratory's Theta supercomputer, which is one of the fastest supercomputers in the world, uses Kubernetes to manage its containerized workloads. Theta uses Kubernetes to run a variety of scientific simulations, including simulations of earthquakes and climate models.
  • Oak Ridge National Laboratory's Summit supercomputer, one of the most powerful supercomputers in the world, also uses Kubernetes to manage its containerized workloads. Summit uses Kubernetes to run scientific simulations and other HPC workloads. Summit uses IBM Spectrum MPI as its primary MPI implementation for running parallel workloads, and it is capable of running a variety of HPC workloads using MPI, including simulations, data analysis, and machine learning applications.
  • The University of Cambridge's High Performance Computing Service uses Kubernetes to manage its HPC resources. The service uses Kubernetes to run a variety of HPC workloads, including simulations of fluid dynamics and other scientific applications.
One of the most commonly used container runtime engines on Summit is Singularity, which is an open-source container platform designed specifically for HPC workloads. Singularity is optimized for running MPI workloads, and provides a number of features specifically tailored to HPC, including support for Infiniband networking and the ability to run on a wide range of HPC systems. In addition to Singularity, Summit also supports other container runtime engines, including Docker and Shifter. However, Singularity is the preferred container runtime engine for running HPC workloads on Summit, and is used by many researchers and organizations to run MPI workloads.

There are a number of emerging container runtime technologies that are being developed for use in HPC environments. One example is Charliecloud, which is an open-source container runtime that is designed specifically for HPC workloads. Charliecloud provides a lightweight and secure environment for running containerized workloads on HPC systems, and is optimized for performance and flexibility.

Another emerging technology is called udocker, which is a user-space container engine that is designed to provide a simple and lightweight way to run containerized workloads on HPC systems. Udocker is optimized for use in environments where users may not have root access to the system, and provides a number of features specifically tailored to HPC workloads, including support for MPI and Infiniband networking.

Frontier is a supercomputer recently deployed at Oak Ridge National Laboratory.
One of the key areas of innovation around Frontier is its support for programming models and runtimes beyond traditional MPI, aimed at improved performance, scalability, and flexibility for HPC workloads. One example is Unified Parallel C (UPC), a PGAS (partitioned global address space) extension of the C language designed to simplify the development of parallel algorithms. It provides features tailored to HPC, including shared-memory-style programming across distributed nodes and optimized communication primitives.

AWS provides a wide range of compute, storage, and network resources that are optimized for running HPC workloads in the cloud. By leveraging these services, researchers and engineers can easily and cost-effectively run complex simulations and other HPC workloads, while also benefiting from the scalability and flexibility of cloud computing. Some examples:
  • Amazon EC2: This is AWS's flagship compute service, which allows users to rent virtual machines (instances) with a variety of different configurations and capabilities. EC2 provides a wide range of instance types that are optimized for different workloads, including HPC workloads. These instance types offer high-performance CPUs, GPUs, and FPGAs, as well as high-speed network connectivity.
  • Amazon Elastic File System (EFS): This is a fully managed cloud file storage service that is designed for HPC workloads. EFS provides a scalable and highly available file system that can be accessed from multiple instances simultaneously, which is important for parallel computing workloads.
  • Amazon S3: This is a highly scalable and durable object storage service that can be used to store and retrieve large data sets for HPC workloads. S3 provides a simple API that can be used to access data from anywhere, and supports a variety of data formats and access patterns.
  • AWS ParallelCluster: This is an open-source HPC cluster management tool that can be used to deploy and manage HPC clusters on AWS. ParallelCluster provides a simple interface for configuring and launching HPC clusters, and supports a variety of different schedulers and software packages.
  • Amazon FSx for Lustre: This is a fully managed file system service that provides high-performance Lustre file systems for HPC workloads. FSx for Lustre provides scalable performance and high availability, and can be used to store and manage large data sets for parallel computing workloads.
To run weather data analysis using HPC cloud services on AWS, you would need to follow these general steps:
  • Provision the necessary compute resources: Using services such as Amazon EC2, you would need to launch instances that are optimized for your specific workload. For weather data analysis, you may require high-performance CPUs, GPUs, or FPGAs, as well as high-speed network connectivity to move data in and out of the instances.
  • Store the data: You would need to store the weather data in a highly scalable and durable storage system. Amazon S3 is a popular choice for storing large data sets in the cloud.
  • Configure the software stack: Once the compute resources and data storage are set up, you would need to configure the software stack. This would involve installing and configuring the necessary software packages, including any libraries or tools that are required for weather data analysis.
  • Submit the analysis jobs: You would then submit the analysis jobs to the HPC cluster using a batch scheduler such as Slurm or AWS ParallelCluster. The scheduler will manage the allocation of compute resources and ensure that the jobs are executed in a timely and efficient manner.
  • Retrieve the results: Once the analysis jobs are completed, you would retrieve the results from the storage system and perform any post-processing or analysis that is necessary.
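The submission step above can be sketched as a minimal Slurm batch script. The program name (analyze_weather) and the data paths are placeholders, not a real tool:

```shell
# Write a minimal Slurm batch script for the analysis step
cat > weather_job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=weather-analysis
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=8
#SBATCH --time=02:00:00

# analyze_weather and the data paths below are placeholders
mpirun ./analyze_weather --input /data/era5 --output /results
EOF

# The script would then be submitted with: sbatch weather_job.sh
grep -c '^#SBATCH' weather_job.sh
```

The #SBATCH directives tell the scheduler how many nodes and tasks to allocate and how long the job may run; mpirun then launches the analysis across the allocated nodes.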
Additionally, there are a number of third-party tools and services available that are specifically designed for weather data analysis in the cloud, such as the Unidata Cloud-Ready Data Services, which provides a suite of cloud-based tools and services for working with weather data. Below are some configuration details for a typical HPC software stack that might be used for the weather data analysis example above:
  • Operating System: Most HPC workloads run on Linux-based operating systems, which are generally lightweight and well optimized for high-performance computing. Distributions such as CentOS and Ubuntu are common in academic and research settings where open-source software is preferred, while RHEL (Red Hat Enterprise Linux) and SLES (SUSE Linux Enterprise Server) are common in the commercial sector; both are enterprise-grade operating systems offering long-term support and stability, which are critical in HPC environments where uptime and reliability matter.
  • Cluster Management Software: There are a variety of cluster management software packages available for HPC workloads, including Slurm, Torque, and LSF. These tools provide a way to manage the allocation of compute resources and schedule jobs on the cluster.
  • MPI Library: For parallel computing workloads, you would typically need to install a Message Passing Interface (MPI) library, such as OpenMPI or MPICH. These libraries allow multiple processes to communicate with each other and coordinate their work on the cluster.
  • Compiler Toolchain: A compiler toolchain is necessary for building and executing code on the cluster. Commonly used compilers include GCC, Clang, and Intel Compiler.
  • Data Processing Libraries: For weather data analysis, you would typically need to install a variety of data processing libraries, such as NetCDF, HDF5, and GRIB. These libraries allow you to read, write, and manipulate weather data in a variety of formats.
  • Visualization Tools: Once the weather data analysis is complete, you may want to visualize the results using tools such as Matplotlib, Paraview, or Visit. These tools allow you to create visualizations and animations that can help you better understand the results of your analysis.
Steps to follow in AWS to run HPC workloads:
  • Choose an AWS instance: Select an Amazon EC2 instance type that suits your computational needs. EC2 provides a range of instance types with varying CPU, memory, and storage capacities. For a weather data analysis, you may need a high-memory instance type with multiple CPUs and GPUs, depending on the size of your data.
  • Install software dependencies: Install the necessary software dependencies on your EC2 instance, such as the operating system, compilers, libraries, and analysis tools. You can either install these manually or use a configuration management tool like Ansible or Chef to automate the process.
  • Move data to AWS: Transfer your weather data from your on-premises environment to the AWS cloud using AWS Storage services such as Amazon S3 or EFS. You can also use AWS Direct Connect to establish a dedicated network connection between your on-premises environment and your AWS resources.
  • Configure your software: Set up your weather analysis software, including the input/output paths, data formats, and algorithm parameters. You can run your analysis either on a single EC2 instance or on a cluster of instances using HPC job schedulers like Slurm or LSF.
  • Monitor and optimize performance: Monitor the performance of your weather analysis to identify bottlenecks and optimize performance. You can use AWS CloudWatch to monitor CPU, memory, and network utilization, as well as application-level metrics like response times and error rates.
  • Generate reports and visualizations: Once your analysis is complete, you can generate reports and visualizations using tools like Matplotlib, Paraview, or Visit. You can save the results to Amazon S3 or EFS, or use other AWS services like AWS Lambda to trigger alerts or actions based on the results.
Some commonly used HPC workloads on the AWS cloud:
  • Genomics and bioinformatics: AWS offers a range of genomic and bioinformatics tools and services that can handle large-scale data processing and analysis tasks. These include tools for sequence alignment, variant calling, gene expression analysis, and genome assembly.
  • Computational chemistry and materials science: AWS provides a range of high-performance computing (HPC) resources, such as GPU-enabled instances, that can handle complex simulations and calculations. These resources can be used to run molecular dynamics simulations, quantum chemistry calculations, and other computational chemistry and materials science workflows.
  • Financial modeling and simulation: AWS provides a range of compute and storage resources that can be used for financial modeling and simulation tasks. These resources can be used to run Monte Carlo simulations, backtesting, portfolio optimization, and other financial analysis workflows.
  • Weather and climate modeling: AWS provides services such as Amazon Forecast for time-series forecasting, along with compute and data services that can be used to run weather and climate simulations, analyze large datasets, and generate forecasts for various applications.
  • Machine learning and AI: AWS provides a range of machine learning and AI services, such as Amazon SageMaker and Amazon Rekognition, that can be used to train and deploy machine learning models. These services can be used for natural language processing, image and video analysis, and other AI workflows.
One active area of research in HPC is the development of container-based cloud environments designed to support scientific workloads. These environments aim to provide a flexible and scalable platform for running scientific applications, while also addressing challenges such as data movement, job scheduling, and resource management. As mentioned earlier, Singularity is designed to support the needs of scientific and HPC workloads, and provides features such as secure image management, container mobility, and seamless integration with HPC schedulers.

Another area of research is the development of new programming models and frameworks designed to simplify the development of HPC applications in container-based environments. For example, the Big Data and Extreme Computing (BDEC) project is developing a framework that provides a high-level abstraction layer for managing and orchestrating distributed scientific workflows, with the potential to make large-scale scientific workflows easier and more efficient to manage across a variety of domains.
