Thursday, September 26, 2024

Kprobes in Action : Instrumenting and Debugging the Linux Kernel

Kprobes (Kernel Probes) is a powerful Linux kernel feature that lets developers and system administrators dynamically intercept and monitor almost any kernel function. It provides a mechanism for tracing and debugging the kernel: you can hook custom handler code into nearly any point of kernel execution to collect information, modify data, or even introduce new behavior. Kprobes are particularly useful for diagnosing kernel issues, performance tuning, and understanding kernel behavior without modifying the kernel source code or rebooting the system.

Background on Kprobes:

Kprobes were introduced in the Linux kernel as a way to enable non-disruptive kernel tracing. The main use case is dynamic instrumentation, which allows developers to investigate how the kernel behaves at runtime without modifying or recompiling the kernel.

How Kprobes Work

Kprobes allow you to place a "probe" at a specific point in the kernel, known as a probe point. When the kernel execution reaches this probe point, the probe is triggered, and a handler function that you define is executed. Once the handler is done, the normal execution of the kernel resumes.

There are two types of handlers in Kprobes:

1. Pre-handler: This is executed just before the probed instruction.

2. Post-handler: This is executed after the probed instruction completes.

Key Components of Kprobes:

1. Kprobe Structure : Defines the probe, including the symbol name (function to be probed) and pointers to pre- and post-handlers.

   - Example:

     static struct kprobe kp = {
         .symbol_name = "do_fork",  // Name of the function to probe
     };

2. Pre-Handler: Executed before the instruction at the probe point. It can be used to capture the state of the system (e.g., register values).

   - Example:

     static int handler_pre(struct kprobe *p, struct pt_regs *regs) {
         printk(KERN_INFO "Pre-handler: register value is %lx\n", regs->ip);
         return 0;
     }

3. Post-Handler: Executed after the instruction at the probe point. This is useful for gathering information after the instruction has executed.

   - Example:

     static void handler_post(struct kprobe *p, struct pt_regs *regs, unsigned long flags) {
         printk(KERN_INFO "Post-handler: instruction completed\n");
     }

Inserting a Kprobe:

Once the Kprobe structure is set up, you register the probe using the `register_kprobe()` function, which activates the probe at the desired location in the kernel.

Example of inserting a probe:

int ret = register_kprobe(&kp);
if (ret < 0) {
    printk(KERN_ERR "Kprobe registration failed\n");
} else {
    printk(KERN_INFO "Kprobe registered successfully\n");
}

When you're done with the probe, it should be unregistered using `unregister_kprobe()`.

Use Cases for Kprobes:

1. Debugging: Inspect kernel function behavior and parameters at runtime without recompiling the kernel.

2. Performance Monitoring: Collect detailed performance statistics at various points in the kernel.

3. Dynamic Analysis: Understand kernel module or driver behavior in real-time.

4. Fault Injection: Inject faults at specific points in the kernel to test how the kernel reacts to errors.

5. Security Auditing: Monitor suspicious or unauthorized kernel activities.


Kprobes vs. Other Tracing Mechanisms:

- Ftrace: Another kernel tracing framework, but more focused on function-level tracing. Kprobes are more versatile as they allow you to probe any instruction.

- SystemTap: Provides a higher-level interface that uses Kprobes under the hood.

- eBPF: A more modern, flexible, and performant tracing framework that has overlapping functionality with Kprobes.
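
As a point of comparison with Ftrace, the same kind of probe can be planted without writing a module at all, through the kprobe-events interface in tracefs. A minimal sketch (the tracefs path may be /sys/kernel/debug/tracing on older setups, and the probed symbol must exist in your kernel; kernel_clone is the modern name for do_fork):

   echo 'p:myprobe kernel_clone' >> /sys/kernel/tracing/kprobe_events   # plant the probe
   echo 1 > /sys/kernel/tracing/events/kprobes/myprobe/enable           # enable it
   cat /sys/kernel/tracing/trace                                        # view hits
   echo 0 > /sys/kernel/tracing/events/kprobes/myprobe/enable           # disable it
   echo '-:myprobe' >> /sys/kernel/tracing/kprobe_events                # remove the probe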

Kprobe Variants:

1. Jprobes: A variant that allowed you to attach a handler with the same signature as the probed function, giving convenient access to its arguments. Jprobes were deprecated and have been removed from modern kernels.

2. Kretprobes: A specialized form of Kprobes that hooks into the return path of functions, allowing you to trace function exits and the values returned by kernel functions. A minimal kretprobe module is sketched below.
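
The following is a rough kretprobe sketch, not part of the original example: the probed symbol (kernel_clone here, do_fork/_do_fork on older kernels) and the exact handling of the return value can vary between kernel versions, so treat it as a starting point only.

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/kprobes.h>

/* Runs when the probed function returns; regs_return_value() reads its return value. */
static int ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
    unsigned long retval = regs_return_value(regs);

    printk(KERN_INFO "probed function returned %lu\n", retval);
    return 0;
}

static struct kretprobe my_kretprobe = {
    .handler         = ret_handler,
    .kp.symbol_name  = "kernel_clone",  /* "do_fork"/"_do_fork" on older kernels */
    .maxactive       = 20,              /* probe up to 20 instances concurrently */
};

static int __init kretprobe_init(void)
{
    return register_kretprobe(&my_kretprobe);
}

static void __exit kretprobe_exit(void)
{
    unregister_kretprobe(&my_kretprobe);
}

module_init(kretprobe_init);
module_exit(kretprobe_exit);
MODULE_LICENSE("GPL");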


Limitations of Kprobes:

- Probes introduce overhead, so excessive probing can impact system performance.

- Probing certain sensitive or timing-critical functions can lead to system instability.

- The handler code should be minimal and non-blocking to avoid disrupting the kernel execution flow.

Example Code:

Below is a basic example of how Kprobes can be used to monitor the `do_fork` function in the kernel, which is responsible for process creation:

#include <linux/kernel.h>
#include <linux/module.h>

#include <linux/kprobes.h>


static struct kprobe kp = {
    .symbol_name = "do_fork", // The function to probe
};

static int handler_pre(struct kprobe *p, struct pt_regs *regs) {
    printk(KERN_INFO "do_fork() called, IP = %lx\n", regs->ip);
    return 0;
}

static void handler_post(struct kprobe *p, struct pt_regs *regs, unsigned long flags) {
    printk(KERN_INFO "do_fork() completed\n");
}

static int __init kprobe_init(void) {
    kp.pre_handler = handler_pre;
    kp.post_handler = handler_post;
   
    if (register_kprobe(&kp) < 0) {
        printk(KERN_ERR "Kprobe registration failed\n");
        return -1;
    }
    printk(KERN_INFO "Kprobe registered successfully\n");
    return 0;
}

static void __exit kprobe_exit(void) {
    unregister_kprobe(&kp);
    printk(KERN_INFO "Kprobe unregistered\n");
}

module_init(kprobe_init);
module_exit(kprobe_exit);
MODULE_LICENSE("GPL");

This will print information to the kernel log each time the `do_fork()` function is invoked. Note that on recent kernels the `do_fork` symbol no longer exists (it was eventually renamed to `kernel_clone`), so set `symbol_name` to a symbol that is actually present in your kernel; `/proc/kallsyms` lists the available symbols.

To compile and run the Kprobe example above, follow these steps:

1. Prerequisites

- You need to have the Linux kernel headers installed.

- Make sure you have root (superuser) privileges, as you'll be loading kernel modules.

- You need `gcc` and `make` installed for compiling the kernel module.

  dnf install kernel-devel kernel-headers gcc make

2. Write the Kprobe Kernel Module

   Save the Kprobe code above into a file, for example `kprobe_example.c`. Alternatively, download the upstream sample module:

   wget https://raw.githubusercontent.com/torvalds/linux/master/samples/kprobes/kprobe_example.c

3. Create a Makefile

Create a `Makefile` to automate the compilation of the kernel module. This Makefile should look like this:

obj-m += kprobe_example.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

Note that the two `make -C ...` recipe lines must be indented with a tab character, not spaces, or `make` will fail.

4. Compile the Kprobe Kernel Module

$ make

This will use the kernel headers and create a module file `kprobe_example.ko`.

5. Insert the Kernel Module

To insert the module into the running kernel, use the `insmod` command. You need root privileges to load kernel modules:

$  insmod kprobe_example.ko

 6. Check Kernel Logs for Output

You can monitor the kernel logs to see the output of the Kprobe:

$ dmesg | tail

[18084.207866] kprobe_init: Planted kprobe at 00000000207be762

This will show you the success or failure of inserting the Kprobe and print any trace outputs when the kernel function (like `do_fork`) is invoked.

7. Trigger the Probed Function

You can manually trigger the `do_fork()` function by starting a new process, such as running any command: $ ls

Since `do_fork()` is involved in creating new processes, every time a new process is created, the Kprobe pre- and post-handlers will execute, and you'll see the output in `dmesg`.

8. Remove the Kernel Module

Once you're done with the Kprobe, you can remove the kernel module using `rmmod`:

$  rmmod kprobe_example

[root@myhost]# lsmod | grep probe
kprobe_example          3569  0
[root@myhost]# ls
[root@myhost]# rmmod kprobe_example
[root@myhost]# lsmod | grep probe
[root@myhost]#

Check the kernel log again to see the output confirming the Kprobe has been removed:

$ dmesg | tail

9. Clean Up

To clean the build directory and remove compiled files, you can run:

$ make clean

--------------------------------------------------------------------------------------------------------

Example Workflow:

$ vim kprobe_example.c       # Write the module code
$ vim Makefile               # Create the Makefile
$ make                       # Compile the module
$ insmod kprobe_example.ko   # Insert the module
$ dmesg | tail               # Check kernel logs for output
$ ls                         # Trigger the do_fork() function
$ dmesg | tail               # Check logs again to see Kprobe output
$ sudo rmmod kprobe_example  # Remove the module
$ make clean                 # Clean up the build files


Monday, September 23, 2024

NFS (Network File System) : NFS Server and NFS Client configuration

NFS, or Network File System, is a distributed file system protocol developed by Sun Microsystems in 1984. It allows a system to share directories and files with others over a network. With NFS, users and applications can access files on remote systems as if they were local, enabling seamless file sharing across different systems, architectures, and locations.

Key Features of NFS:

- Transparency: NFS provides users with transparent access to files on a remote server as if they were located on their local machine.

- Interoperability: NFS is platform-independent, allowing different operating systems to communicate and share files across a network.

- Statelessness: In its early versions, NFS was designed to be stateless, meaning that the server did not maintain session information for clients. This design provided resilience but also led to limitations in reliability and consistency.

- Security: Over time, NFS introduced several security improvements, including support for Kerberos authentication in NFSv4.

History and Evolution of NFS :

1. NFSv1 (1984):

   - NFS was initially developed by Sun Microsystems as a way to provide network file sharing in SunOS.
   - It was proprietary to Sun and served as an early attempt to allow file sharing across systems in a networked environment.

2. NFSv2 (1989):

   - The first widely available version, NFSv2, was introduced in RFC 1094.
   - It operated using the User Datagram Protocol (UDP) for fast, connectionless communication.
   - NFSv2 supported only 32-bit file sizes, which limited its scalability as file sizes grew.
   - It used a stateless protocol, meaning the server didn’t keep track of clients, which simplified server-side management but limited its capabilities for complex applications.

3. NFSv3 (1995):

   - Introduced in RFC 1813, NFSv3 addressed many limitations of NFSv2.
   - Improvements:
     - Larger file size support with 64-bit file handling.
     - Asynchronous Writes: To improve performance, NFSv3 allowed asynchronous writes, meaning the client could continue writing without waiting for server acknowledgment.
     - Introduced support for TCP, allowing for better reliability in file transfers over unreliable networks.
     - Still stateless but more robust in handling large workloads.

4. NFSv4 (2003):

   - Defined in RFC 3530, NFSv4 represented a significant evolution from previous versions.
   - Major Features:
     - Stateful Protocol: NFSv4 introduced statefulness, allowing for better handling of file locks and recovery from network failures.
     - Security: NFSv4 incorporated stronger security features, including support for Kerberos and improved ACLs (Access Control Lists).
     - Single Port: NFSv4 used a single well-known port (2049), which simplified firewall configuration.
     - Performance Optimizations: NFSv4 added features like compound operations and better caching mechanisms to enhance performance in WAN environments.
   - Cross-Platform: As a stateful and more secure protocol, NFSv4 gained popularity across different Unix-like systems and was widely adopted in enterprise environments.

5. NFSv4.1 (2010):

   - Introduced pNFS (parallel NFS), allowing clients to read and write files in parallel, which improved performance for large, distributed workloads.
   - Sessions: NFSv4.1 introduced sessions, providing reliable and robust mechanisms for handling multiple file operations.

6. NFSv4.2 (2016):

   - Added new features like server-side cloning, application I/O hints, and a standardized method for handling holes in sparse files.

NFS Server Configuration : 

1. Install NFS Utilities:

   First, install the NFS server utilities (`nfs-utils`) using `dnf`. This package provides essential NFS-related services and tools, including `rpcbind`, `nfsd`, and other utilities needed for the NFS server to function.

   dnf install nfs-utils -y

2. Manage Firewall:

   If the firewall is running, it can block NFS traffic by default. Use these commands to check the status, stop, or disable the firewall. If security is a concern, you should configure the firewall to allow NFS traffic on specific ports instead of turning it off entirely.

   systemctl status firewalld

   systemctl stop firewalld

   systemctl status firewalld

If you need to keep the firewall running, ensure you open the following ports for NFS:

   firewall-cmd --add-service=nfs --permanent

   firewall-cmd --reload
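
If clients will mount the share using NFSv3, the related RPC services typically need to be opened as well (not required for NFSv4-only setups):

   firewall-cmd --add-service=rpc-bind --permanent
   firewall-cmd --add-service=mountd --permanent
   firewall-cmd --reload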

3. Start and Enable NFS Services:

   To make the NFS server persist across reboots, enable and start the `nfs-server.service`. The NFS server will provide access to the shared directories over the network.

   systemctl start nfs-server.service

   systemctl enable nfs-server.service

   systemctl status nfs-server.service

4. Restart `rpcbind` and `nfs-utils` Services:

   NFS requires RPC (Remote Procedure Call) services to operate, specifically the `rpcbind` service. Restart `rpcbind` and `nfs-utils` to ensure proper functioning of NFS.

   systemctl restart rpcbind.service

   systemctl restart nfs-utils.service

5. Create a Directory to Share:

   Create the directory that you want to export (share) over NFS. For example, in this case:

   mkdir /sachinpb

   

6. Configure NFS Exports:

   Edit the `/etc/exports` file to configure the directory export settings. This file defines which directories will be shared over NFS and the permissions for each directory.

   Add the following line to `/etc/exports` to export the `/sachinpb` directory to all clients (`*`), with read-write access, asynchronous writes (`async`), and no root squashing (`no_root_squash`):

   /sachinpb *(rw,async,no_root_squash)

Here’s what these options mean:
   - `rw`: Clients can read and write to the shared directory.
   - `async`: Writes to the shared directory will be cached, improving performance.
   - `no_root_squash`: The root user on the client will have root privileges on the NFS share. Use with caution, as it can pose security risks; a more restrictive export line is shown below.
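
If the share does not need to be open to every host, a tighter export is usually preferable, for example limiting access to a single subnet (the subnet below is only a placeholder) and keeping root squashing enabled:

   /sachinpb 192.168.1.0/24(rw,sync,root_squash)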

7. Apply Export Settings:

   After updating `/etc/exports`, use `exportfs` to apply the export settings:

   exportfs -a

   You can verify the shared directories using `showmount`:

  showmount -e

8. Test NFS Server:

   Navigate to the shared directory and create a test file to verify that the NFS export is accessible.

   cd /sachinpb
   touch file1


NFS Client Configuration

1. Manage Firewall:

   As with the server, the firewall on the client might block NFS traffic. If needed, stop the firewall or configure it to allow NFS traffic. Check and stop the firewall:

   systemctl status firewalld
   systemctl stop firewalld

2. Install NFS and RPC Utilities:

   Install the required packages on the client side. The `nfs-utils` package provides the necessary utilities for mounting NFS shares, and `rpcbind` is essential for NFS communication.

   dnf install rpc* -y
   dnf install nfs-utils -y

3. Create a Mount Point:

   Create a directory on the client machine where the NFS share will be mounted. This directory acts as the local mount point for the NFS share.

mkdir /sachinpb

4. Mount the NFS Share:

   Use the `mount` command to mount the NFS share from the server. Replace `9.x.y.12` with the actual IP address or hostname of your NFS server:

   mount -t nfs 9.x.y.12:/sachinpb /sachinpb

Alternatively, you can add an entry to the `/etc/fstab` file as shown below:

# cat /etc/fstab
9.x.y.12:/sachinpb     /sachinpb  nfs     defaults        0 0

followed by mount -a

This command mounts the `/sachinpb` directory from the NFS server onto the `/sachinpb` directory on the client.

5. Verify the Mount:

   Change to the mounted directory and check if the contents from the NFS server are visible on the client machine:

    cd /sachinpb

   If the mount is successful, you should see the `file1` created earlier on the NFS server.

Wednesday, September 18, 2024

Mastering Remote Server Management with Python's Paramiko Library

Paramiko is a powerful Python library that implements the SSH (Secure Shell) protocol, specifically SSHv2. It provides both client and server functionality, making it suitable for a variety of tasks involving secure remote connections. Here are the main features and uses of Paramiko.

1) SSH Protocol Implementation:

Paramiko is a pure-Python implementation of the SSHv2 protocol, allowing secure communication over unsecured networks. It supports both client and server functionalities, enabling users to create secure connections and transfer data securely.

2) Client and Server Functionality:

As a client, Paramiko can connect to remote servers, execute commands, and transfer files securely using SFTP (SSH File Transfer Protocol).

As a server, it can accept incoming SSH connections, allowing users to run their own SSH servers in Python.

3) High-Level API:

The library provides a high-level API that simplifies common tasks such as executing remote shell commands or transferring files. This makes it easier for developers to implement SSH functionality without dealing with the complexities of the underlying protocol.

4) Cryptography Support:

Paramiko relies on the cryptography library for cryptographic functions. This library uses C and Rust extensions to provide secure encryption methods necessary for implementing SSH.

5) Key Management:

It supports various authentication methods, including password-based authentication and public/private key pairs. Users can manage host keys and perform host key verification to enhance security.

6) Extensibility:

While Paramiko is powerful on its own, it also serves as the foundation for higher-level libraries like Fabric, which further simplify remote command execution and file transfers.

Common Use Cases

  1. Remote Command Execution: Automating tasks by executing shell commands on remote servers.
  2. File Transfers: Securely transferring files between local and remote systems using SFTP.
  3. System Administration: Automating system administration tasks across multiple servers in a secure manner.
  4. Integration with Other Tools: Often used in conjunction with other Python libraries and frameworks to enhance functionality in deployment scripts or DevOps tools.

Pre-requisites:

  • python -m pip install --upgrade pip
  • dnf install rust*

Consider using a virtual environment to avoid conflicts with system packages. You can create one using:

  • python3 -m venv myenv
  • source myenv/bin/activate

(myenv) [root@my-host ~]# pip3 install paramiko
Collecting paramiko
Successfully built bcrypt cryptography
Installing collected packages: pycparser, bcrypt, cffi, pynacl, cryptography, paramiko
Successfully installed bcrypt-4.2.0 cffi-1.17.1 cryptography-43.0.1 paramiko-3.5.0 pycparser-2.22 pynacl-1.5.0
(myenv) [root@my-host ~]#

 

To SSH into a remote host and run any command using Python, you can use the Paramiko library. Below is a code snippet that demonstrates how to do this.

------------------python code-------------------------------------
import paramiko

def ssh_ls_command(hostname, username, password):
    try:
        # Create an SSH client instance
        ssh = paramiko.SSHClient()

        # Automatically add the server's host key (for simplicity)
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())

        # Connect to the remote host
        ssh.connect(hostname, username=username, password=password)

        # Execute the 'ls' command
        stdin, stdout, stderr = ssh.exec_command('ls')

        # Read the output from stdout
        output = stdout.read().decode()
        error = stderr.read().decode()

        # Close the SSH connection
        ssh.close()

        if output:
            print("Output of 'ls' command:")
            print(output)
        if error:
            print("Error:")
            print(error)

    except Exception as e:
        print(f"An error occurred: {e}")

# Example usage
hostname = 'remote-host.com'
username = 'root'
password = 'mypassword'
ssh_ls_command(hostname, username, password)

where :

  • Importing Paramiko: The script starts by importing the paramiko library.
  • Creating SSH Client: An instance of SSHClient is created to manage connections.
  • Host Key Policy: The set_missing_host_key_policy method is used to automatically add the server's host key. This is convenient for testing but should be handled more securely in production.
  • Connecting: The connect method establishes a connection to the remote host using the provided hostname, username, and password.
  • Executing Command: The exec_command method runs the specified command (ls in this case) on the remote server.
  • Reading Output: The standard output and error are read and printed.
  • Error Handling: Any exceptions during the process are caught and displayed.

--------------------------------------------------------------------------
OUTPUT:

(myenv) [root@my-host ~]# python3 ssh.py
Output of 'ls' command:
anaconda-ks.cfg
kernel-6.11.0-0.rc5.22.el10.src.rpm
linux
original-ks.cfg
rpmbuild
test_tunnel.c
test_tunnel_kern.c
(myenv) [root@my-host ~]#

----------------------------------------------------------------------------
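
For the file-transfer use case mentioned earlier, the same SSHClient connection can also be used to open an SFTP session. Here is a minimal sketch; the hostname, credentials, and paths are placeholders, and in practice key-based authentication (the `key_filename` argument to `connect`) is preferable to a hard-coded password:

import paramiko

def sftp_copy(hostname, username, password, local_path, remote_path):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # For key-based auth, pass key_filename='/path/to/key' instead of password
    ssh.connect(hostname, username=username, password=password)
    try:
        sftp = ssh.open_sftp()
        sftp.put(local_path, remote_path)   # upload; use sftp.get() to download
        sftp.close()
    finally:
        ssh.close()

# Example usage (placeholder values)
sftp_copy('remote-host.com', 'root', 'mypassword', 'kprobe_example.c', '/tmp/kprobe_example.c')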

Paramiko is an essential tool for Python developers who need to implement secure communication over SSH. Its ease of use, combined with robust features for managing SSH connections and executing commands remotely, makes it a popular choice for automation and system administration tasks.

Saturday, September 7, 2024

CMA (Contiguous Memory Allocator) - Linux Memory Management Mechanism

CMA (Contiguous Memory Allocator) in Linux is a memory management mechanism designed to provide large contiguous blocks of physical memory for specific use cases, such as DMA (Direct Memory Access) operations or device drivers that require continuous memory regions. When discussing CMA, it is crucial to focus on physical memory rather than virtual memory, because the devices and hardware components involved rely on direct access to physical memory. The sections below explain why physical memory, not virtual memory, is what matters in the context of CMA.

Purpose:

Some hardware, like certain device drivers or subsystems (e.g., graphics or networking devices), need large chunks of physically contiguous memory. However, as Linux uses a virtual memory system, physical memory can become fragmented over time. CMA ensures that these devices get the required memory, even if the system is fragmented.

How it Works:

   - CMA reserves a portion of memory at boot time, which can later be allocated in contiguous blocks when requested.

   - During normal system operation, this reserved memory isn't locked away: it can still be used for general-purpose (movable) allocations. However, when a contiguous allocation request arrives, those pages are migrated out and the region is handed to the device or driver that requested it.

Device and Hardware Requirements:

   - Certain hardware components (like GPUs, network cards, or other peripherals) require contiguous blocks of physical memory for  DMA (Direct Memory Access) operations or other high-performance activities.

   - DMA is a process where devices communicate directly with the physical memory without the CPU's intervention. For DMA to work efficiently, the hardware needs physically contiguous memory, meaning that the memory addresses are adjacent in physical memory.

   - Virtual memory is designed to make efficient use of available memory for software processes, but virtual memory can be fragmented and non-contiguous in physical memory. This is because virtual memory maps logical addresses to scattered physical memory locations.

CMA Guarantees Physical Contiguity:

 The key feature of CMA is that it reserves a contiguous region of physical memory that can be allocated on demand. Virtual memory, on the other hand, is not necessarily contiguous in physical terms.

   - Even though processes use virtual addresses (which are convenient for applications), the underlying devices or drivers that require DMA or large contiguous blocks of memory need physical addresses, and CMA ensures that this need is met.

Physical vs. Virtual Memory:

   - Physical Memory: Refers to the actual RAM installed in the system. It's where data is physically stored, and it must be contiguous for hardware operations.

   - Virtual Memory: Is an abstraction provided by the operating system that allows applications to use more memory than physically available. It is divided into pages, which can be scattered across different locations in physical memory.

For example, a 4 GB virtual address space could be mapped to non-contiguous physical memory chunks. However, if a device needs to access a block of memory directly through DMA, the physical memory it accesses must be contiguous.

Virtual Memory Can’t Be Used Directly for DMA or Hardware Access: 

   - Virtual memory is designed for software abstraction and can be fragmented across the physical memory. This is fine for applications but unsuitable for devices that require access to memory in a sequential physical block.

   - When devices perform DMA, they must work with real physical addresses. Therefore, allocating memory in virtual space doesn’t meet the requirement unless the physical memory behind those virtual addresses is also contiguous, which is why CMA allocates from physical memory directly.

How CMA Works with Physical Memory:

   - CMA reserves a chunk of  physical memory at boot time that can later be allocated in contiguous blocks when requested by the device drivers. It does so to ensure that even when the system’s physical memory becomes fragmented, there will still be a large contiguous block of physical memory available for hardware components.

   - Although this memory is allocated from the physical address space, it can be used by virtual memory applications when it's not being actively used by a device.

CMA deals with physical memory because certain devices and hardware require contiguous blocks of physical RAM for tasks like DMA. Virtual memory, which can be fragmented across different physical locations, doesn't meet the needs of these operations. Physical memory contiguity ensures that devices can perform high-speed data transfers efficiently, whereas virtual memory, though beneficial for applications, cannot guarantee this contiguity.
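
Drivers normally do not call CMA directly; they go through the kernel's DMA API, which draws on the CMA region for large contiguous buffers on platforms that have one. The following is a rough sketch of such an allocation inside a driver (the device pointer, buffer size, and error handling are simplified, and whether CMA actually backs the allocation depends on the platform and the buffer size):

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/gfp.h>

/* Example: request a 4 MB physically contiguous, DMA-capable buffer for a device.
 * On platforms with a CMA region, a large allocation like this is typically
 * satisfied from CMA; the driver itself only talks to the DMA API.
 */
static int example_alloc(struct device *dev)
{
    size_t size = 4 * 1024 * 1024;
    dma_addr_t dma_handle;   /* bus/physical address programmed into the device */
    void *cpu_addr;          /* kernel virtual address used by the CPU */

    cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
    if (!cpu_addr)
        return -ENOMEM;

    /* ... hand dma_handle to the device, access the buffer through cpu_addr ... */

    dma_free_coherent(dev, size, cpu_addr, dma_handle);
    return 0;
}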

CMA Allocation and Range:

   - CMA typically reserves a contiguous memory range at boot time based on system configuration or kernel parameters. The location and size of the CMA region are either:

     - Automatically determined by the kernel based on memory requirements.

     - Specified manually using kernel boot parameters.

   - CMA memory is reserved in the physical memory address space and is set aside as a separate region from the rest of the memory.

Kernel Boot Parameters:

   - The size and placement of the CMA region can be controlled via the `cma=` kernel boot parameter:

     - `cma=size[M/G]`: Specifies the size of the CMA region. For example, `cma=512M` would reserve 512 MB for CMA.

     - An optional placement constraint can be appended in the form `cma=size@start-end`, which restricts the reserved region to the given physical address range.

   Example:

   cma=256M@0x20000000-0x30000000

   This reserves 256 MB for CMA within the physical address range `0x20000000` to `0x30000000`.

Where is CMA Allocated?:

   - CMA is allocated during boot time and usually resides in the lower end of the physical memory to ensure that DMA or other hardware requests can access it easily.

   - CMA allocations can be made in any part of the memory, but it usually starts from a specific predefined region (if not defined manually).

Checking CMA Information in Linux:

   You can get details about CMA configuration by looking at certain files in the `/proc` or `/sys` filesystems:

/proc/meminfo: This file contains general memory information, including CMA. You can find the CMA reserved region under the entry `CmaTotal` and the currently used CMA memory under `CmaFree`.

     Example:

     CmaTotal:  262144 kB

     CmaFree:   131072 kB

/sys/kernel/debug/cma: If CMA debugging is enabled, this directory will provide detailed information about the CMA memory allocations.

Example to View CMA Memory: cat /proc/meminfo | grep Cma

Output might look like this

CmaTotal:         524288 kB

CmaFree:          512000 kB

This tells you the total reserved CMA memory (`CmaTotal`) and the currently available CMA memory (`CmaFree`).

Summary:

- CMA is allocated at boot and reserves contiguous memory blocks for devices needing such memory.

- It can be specified using boot parameters (size, start, and end address).

- The reserved memory is used by the system when no contiguous memory requests are made and freed when needed for such operations.

- You can check the allocation and usage through system files like `/proc/meminfo`.

========================FADUMP=========================================

Firmware assisted dump (fadump) is a dump capturing mechanism provided as a reliable alternative to kdump on IBM POWER systems. The fadump utility captures the vmcore file from a fully-reset system with PCI and I/O devices. This mechanism uses firmware to preserve memory regions during a crash and then reuses the kdump userspace scripts to save the vmcore file. The memory regions consist of all system memory contents, except the boot memory, system registers, and hardware Page Table Entries (PTEs).

The fadump mechanism offers improved reliability over the traditional dump type, by rebooting the partition and using a new kernel to dump the data from the previous kernel crash. 

README: /usr/share/doc/kexec-tools/fadump-howto.txt

In the Secure Boot environment, the GRUB2 boot loader allocates a boot memory region, known as the Real Mode Area (RMA). The RMA has a size of 512 MB, which is divided among the boot components and, if a component exceeds its size allocation, GRUB2 fails with an out-of-memory (OOM) error.

Options for Using fadump:

fadump=on:  This is the default setting for enabling fadump. It reserves memory from a special area called CMA (Contiguous Memory Allocator). Think of this as a memory-saving technique that allows some of this reserved memory to still be used by other parts of the system during normal operation. The idea is to avoid wasting memory that would otherwise sit idle.

fadump=nocma: This option tells the system not to use the special CMA-backed memory for fadump. Instead, it reserves a portion of memory separately and completely, which might be useful if you want to capture more detailed information, like user-level data, during a crash. By not using CMA, this memory is reserved exclusively for fadump and isn't used for other tasks while the system is running.

fadump=on: Imagine you have a spare room (memory) in your house. Normally, you leave it empty just in case you need to store something later (for fadump). But with this setting, you let guests use the room for sleeping when you don’t need it. When something goes wrong (system crash), you ask them to leave so you can use it to store important things (dump data).

 fadump=nocma: Now, if you set the option to nocma, it's like keeping that spare room off-limits to guests at all times, so it's always ready for storing important stuff whenever you need it.

fadump=on (default): Allows the reserved memory to be used for other tasks when the system is working normally, saving memory.

fadump=nocma : Keeps the reserved memory off-limits to other tasks, ensuring that it's available for storing more detailed data during a crash.

On SLES, the system automatically chooses between `fadump=on` and `fadump=nocma` depending on the KDUMP_DUMPLEVEL setting. On RHEL-based systems, you can set fadump=on or fadump=nocma using the grubby command followed by a reboot, as shown below. Alternatively, add the option to the "/etc/default/grub" file and run "grub2-mkconfig -o /boot/grub2/grub.cfg".
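
For example, on a RHEL-style system (a suitable crashkernel= reservation is also typically needed for dump capture; size it for your environment):

   grubby --update-kernel=ALL --args="fadump=on"
   reboot

To use the exclusive reservation instead, pass --args="fadump=nocma".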

KDUMP_DUMPLEVEL determines how much information is captured in a system crash. If it is set to exclude user pages, the system will automatically use `fadump=on` (the default behavior). But if user pages are included in the dump, it will switch to `fadump=nocma`.