Linux/macOS: Retrieve RPMs from .sh file without running the script

Problem

Sometimes vendors ship their software as a single self-extracting .sh installer that contains multiple .rpm or other files inside.

Running the .sh directly might trigger installation logic you don’t want, so the challenge is: how can we safely unpack the RPMs without executing the script?

Solution

Most vendor installers provide built-in extraction flags that let you unpack the payload safely.

First, check whether your script supports extraction options:

  • Run it with --help.
  • Or open the file in a text editor (vi, vim, less) and search for the section that lists available options.
  • Look for keywords like --target, --noexec, or --keep.

    In my case, the script showed this usage block:

    $0 [options] [--] [additional arguments to embedded script]
    
    Options:
      --confirm             Ask before running embedded script
      --quiet               Do not print anything except error messages
      --noexec              Do not run embedded script
      --keep                Do not erase target directory after running
      --noprogress          Do not show the progress during decompression
      --nox11               Do not spawn an xterm
      --nochown             Do not give the extracted files to the current user
      --target dir          Extract directly to a target directory
                            (absolute or relative path)
      --tar arg1 [arg2 ...] Access the contents of the archive through tar
      --                    Pass following arguments to the embedded script
    
    

    The key flags here are:

    • --target -> specifies the output directory for extracted files
    • --noexec -> prevents the embedded installer logic from executing

    Here’s how I safely extracted the files from my .sh installer. Depending on the installer, you may need to create the target directory beforehand; in my case, the script created it automatically:

    $ sh flashgrid_cluster_node_update-25.5.89.70767.sh --target extract/ --noexec
    Creating directory extract/
    Verifying archive integrity... All good.
    Uncompressing update 100%
    

    A quick line count of the extracted directory shows 46:

    $ ll extract/ | wc -l
    46
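
    Since the usage block above looks like a makeself-style wrapper, you can also list the embedded files without extracting anything via the --tar flag shown earlier (a hedged example; tvf is passed through to tar to list the contents verbosely):

    $ sh flashgrid_cluster_node_update-25.5.89.70767.sh --tar tvf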
    

    Linux: Change the crash dump location

    When kdump is enabled, the crash dumps are typically written to /var/crash. However, this directory may not always be suitable – especially if it lacks sufficient space. Thankfully, the dump location is configurable.

    Follow the steps below to redirect the crash dump to another path.

    1. Edit the kdump configuration file /etc/kdump.conf

    Find the line that begins with path (or add it if it doesn’t exist), and set it to your desired directory. For example:

    path /var2/crash

    This tells kdump to save crash dumps to /var2/crash instead of the default /var/crash.
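
    A quick way to confirm the setting afterwards (assuming the stock config layout):

    # grep ^path /etc/kdump.conf
    path /var2/crash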

    2. Ensure the directory exists and has enough space

    Create the new directory if it doesn’t already exist:

    # mkdir /var2/crash

    Make sure it has appropriate permissions and enough disk space to store crash dumps, which can be large depending on system memory.
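
    A quick way to check the available space on the new location (path as above):

    # df -h /var2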

    3. Restart the kdump service

    After making changes, restart the kdump service to apply the new configuration:

    # systemctl restart kdump

    You can check the status to confirm it’s active:

    # systemctl status kdump

    ● kdump.service - Crash recovery kernel arming
         Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor preset: enabled)
         Active: active (exited) since Thu 2025-07-10 19:42:12 UTC; 10min ago
       Main PID: 1162 (code=exited, status=0/SUCCESS)
          Tasks: 0 (limit: 196884)
         Memory: 0B
         CGroup: /system.slice/kdump.service

    Jul 10 19:42:08 rac1.mycompany.mydomain systemd[1]: Starting Crash recovery kernel arming...
    Jul 10 19:42:12 rac1.mycompany.mydomain kdumpctl[1428]: kdump: kexec: loaded kdump kernel
    Jul 10 19:42:12 rac1.mycompany.mydomain kdumpctl[1428]: kdump: Starting kdump: [OK]
    Jul 10 19:42:12 rac1.mycompany.mydomain systemd[1]: Started Crash recovery kernel arming.

    ⚠️ Important Notes

    • The crash dump directory must be accessible even during a crash, so avoid temporary filesystems (like /tmp) or network paths unless properly configured.
    • For production systems, it’s best to use a dedicated partition or storage volume with enough capacity to hold full memory dumps.

    ORA-27106: system pages not available to allocate memory

    Oracle error ORA-27106: system pages not available to allocate memory can appear when starting up a database instance, particularly when HugePages are misconfigured or unavailable. This post walks through a real-world scenario where the error occurs, explains the underlying cause, and provides step-by-step resolution.

    Problem

    Attempting to start up the Oracle database instance results in the following error:

    oracle@mk23ai-b:~$ sqlplus / as sysdba

    SQL*Plus: Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems on Thu Jul 3 00:15:46 2025
    Version 23.7.0.25.01

    Copyright (c) 1982, 2024, Oracle. All rights reserved.

    Connected to an idle instance.

    SQL> startup
    ORA-27106: system pages not available to allocate memory
    Additional information: 6506
    Additional information: 2
    Additional information: 3

    Cause

    This error is most often seen on Linux platforms when the database is explicitly configured to use only HugePages (use_large_pages='ONLY') while HugePages are either:

    • not configured, or
    • insufficiently allocated.

    Troubleshooting

    1) Identify the SPFILE path of the database

    $ srvctl config database -db orclasm

    Output:

    Database unique name: orclasm
    Database name: orclasm
    Oracle home: /u01/app/oracle/product/23ai/dbhome_1
    Oracle user: oracle
    Spfile: +DATA/ORCLASM/PARAMETERFILE/spfile.274.1201294643
    Password file:
    Domain:
    Start options: open
    Stop options: immediate
    Database role: PRIMARY
    Management policy: AUTOMATIC
    Disk Groups: DATA
    Services:
    OSDBA group:
    OSOPER group:
    Database instance: orclasm

    2) Create a PFILE from the SPFILE

    You can create a pfile from an spfile without starting the instance, which is particularly useful when the instance cannot be started.

    $ export ORACLE_SID=orclasm
    $ sqlplus / as sysdba

    SQL> create pfile='/tmp/temppfile.ora' from spfile='+DATA/ORCLASM/PARAMETERFILE/spfile.274.1201294643';

    File created.

    SQL> exit

    Now, inspect the HugePages configuration setting:

    $ grep -i use_large_pages /tmp/temppfile.ora
    *.use_large_pages='ONLY'

    3) Check HugePages availability on the system

    $ grep Huge /proc/meminfo

    Example output (problem scenario):

    HugePages_Total:       0
    HugePages_Free:        0
    HugePages_Rsvd:        0
    Hugepagesize:       2048 kB

    HugePages are not configured on the system in this case. If they are configured on your system and you still hit the error, the HugePages_Free value is likely insufficient.

    Solution

    1) Estimate required HugePages

    You can estimate the needed HugePages based on total SGA:

    HugePages = (SGA size in MB) / (Hugepagesize in MB)

    For example, if SGA is 24 GB (24576 MB) and Hugepagesize = 2 MB, then required
    HugePages = 24576 / 2 = 12288
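
    On a live system you can plug in the actual page size from the kernel; a quick shell sketch of the same arithmetic (24576 is the example SGA size in MB from above):

    $ grep Hugepagesize /proc/meminfo
    Hugepagesize:       2048 kB
    $ echo $((24576 / 2))
    12288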

    2) Configure HugePages at OS level

    Edit /etc/sysctl.conf:

    vm.nr_hugepages = 12288

    Then apply:

    # sysctl -p
    

    3) Start the database in nomount to verify it is startable

    $ sqlplus / as sysdba

    SQL> startup nomount

    4) Reboot and verify

    Restart the system, verify that everything comes up cleanly after the reboot, and double-check the configuration:

    $ grep Huge /proc/meminfo

    Expected output:

    HugePages_Total:   12288
    HugePages_Free:    12288
    Hugepagesize:       2048 kB

    ⚠️ Temporary Workaround (not recommended for production)

    If you need to get the database up urgently and cannot configure HugePages immediately, change the parameter to:

    use_large_pages='TRUE'

    This allows fallback to regular memory pages. However, for best performance and to avoid fragmentation, it’s strongly recommended to configure HugePages correctly and use use_large_pages='ONLY' in production.
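
    Since the instance is down, one hedged way to apply this is to edit the PFILE created earlier and start from it (the path is reused from the troubleshooting steps above; adjust to your environment):

    $ sed -i "s/use_large_pages='ONLY'/use_large_pages='TRUE'/" /tmp/temppfile.ora
    $ sqlplus / as sysdba
    SQL> startup pfile='/tmp/temppfile.ora';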

    Linux: Disable Kdump

    To disable Kdump, follow these steps:

    1. Disable the kdump service:

    # systemctl disable --now kdump.service

    2. Check that the kdump service is inactive:

    # systemctl status kdump.service

    3. Remove the kexec-tools package:

    # rpm -e kexec-tools 

    4. (Optional) Remove the crashkernel command-line parameter from the current kernel by running the following command:

    # grubby --remove-args="crashkernel" --update-kernel=/boot/vmlinuz-$(uname -r)

    Or set a specific value instead: grubby --update-kernel=/boot/vmlinuz-$(uname -r) --args="crashkernel=..." (replace the dots with your value).
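
    To confirm what the kernel entry contains after the change, you can inspect its arguments (a quick check using grubby's --info option):

    # grubby --info=/boot/vmlinuz-$(uname -r) | grep -i crashkernel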

    ℹ️ One error that can occur when removing the kexec-tools package: rpm may report that the package is not installed even though it actually is. In that case, try rebuilding the RPM database and then rerunning the erase command:

    # rpm --rebuilddb
    # rpm -e kexec-tools

    Linux: sed cannot rename /etc/default/sedysYQ9l Operation not permitted

    Problem:

    I was trying to enable Kdump and wanted to reserve memory for crashkernel, so I ran the command documented on the official RHEL site:

    [root@rac1 ~]# sudo grubby --update-kernel=ALL --args="crashkernel=1G"

    And I received the following error:

    sed: cannot rename /etc/default/sedysYQ9l: Operation not permitted

    Note that the random suffix after /etc/default/sed changes every time you rerun the command, so your path will probably differ.

    Workaround:

    For now, I can only offer a workaround, since I could not find a root-cause fix. You have several options:

    • Enable it for the current kernel only, which takes one command:
    # grubby --update-kernel=/boot/vmlinuz-$(uname -r) --args="crashkernel=1G"
    • Or enable it for a specific kernel (repeat for other kernels as needed, or use the loop sketched below):
    # grubby --update-kernel=/boot/vmlinuz-4.18.0-553.22.1.el8_10.x86_64 --args="crashkernel=1G"
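
    If several kernels are installed, a small loop is a hedged way to cover them all (you may want to skip the rescue image):

    # for k in /boot/vmlinuz-*; do grubby --update-kernel="$k" --args="crashkernel=1G"; done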

    Linux: Enable Kdump

    Some systems have kernel crash dumps (kdump) disabled due to performance concerns. When you encounter a kernel panic and contact RHEL support, they may request a kdump; you will be advised to enable it and then either wait for the incident to recur or trigger a panic manually. Kdump must be enabled before the incident for the dump files to be generated.

    1. If the kexec-tools package has been removed from the system, install it:

    # yum install kexec-tools -y

    2. To reserve memory for the crash kernel, add the crashkernel option to the current kernel:

    # grubby --update-kernel=/boot/vmlinuz-$(uname -r) --args="crashkernel=1G"

    3. Reboot the System

    # reboot

    4. Enable and start Kdump service

    # systemctl enable --now kdump.service

    5. Verify Kdump is running

    # systemctl status kdump.service

    ● kdump.service - Crash recovery kernel arming
         Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor prese>
         Active: active (exited) since Tue 2025-06-24 20:29:58 UTC; 7min ago
       Main PID: 1169 (code=exited, status=0/SUCCESS)
          Tasks: 0 (limit: 196884)
         Memory: 0B
         CGroup: /system.slice/kdump.service

    ⚠️ Testing: Trigger a Kernel Panic

    Please note that I will show you a command that can trigger a kernel panic. This will allow you to check if a dump is generated. This is meant for testing purposes only and should not be executed on a production system during working hours. 🙂

    Are you sure you want to cause a kernel panic right now? If yes, here is the command:

    # echo c > /proc/sysrq-trigger

    At this point, the node/VM has crashed and rebooted. When you log back in, check the /var/crash/ directory to see whether crash data was generated.

    # ll /var/crash/
    ...
    drwxr-xr-x 2 root root 67 Jun 24 20:15 127.0.0.1-2025-06-24-20:15:51

    # cd /var/crash/

    # cd 127.0.0.1-2025-06-24-20\:15\:51/

    # ll
    ...
    -rw------- 1 root root 45904 Jun 24 20:15 kexec-dmesg.log
    -rw------- 1 root root 242941092 Jun 24 20:15 vmcore
    -rw------- 1 root root 43877 Jun 24 20:15 vmcore-dmesg.txt

    ⚠️ Be sure to monitor disk space in /var/crash, as vmcore files can be large.
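
    If you need to inspect the dump itself, the crash utility can open the vmcore against a matching debug kernel (a hedged example; it assumes the crash and kernel-debuginfo packages are installed and the running kernel matches the one that crashed):

    # crash /usr/lib/debug/lib/modules/$(uname -r)/vmlinux /var/crash/127.0.0.1-2025-06-24-20\:15\:51/vmcore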

    Linux: Locate a file by name and then search for a specific word inside

    If you’ve ever needed to locate a file by name and then search for a specific word inside it, then this blog is for you.
    Linux makes it simple by combining two powerful tools: find and grep:

    # find /your/path -type f -name "*.log" -exec grep -i "error" {} +

    Explanation:

    • -type f: Filters for files only.
    • -name "*.log": Limits the search to .log files.
    • -exec grep -i "error" {} +: Searches for the word "error" inside each found file, ignoring case.

    In my case, I was searching for files named flashgrid_node and wanted to find content containing the keyword "SYNCING". Here is my version of the command:

    # find ./ -type f -name "flashgrid_node" -exec grep -i "SYNCING" {} +

    It searches in the current directory (./).

    Useful tip: if you want to show only the names of the files that contain the word, add the -l flag to grep:

    # find /your/path -type f -name "*.log" -exec grep -il "error" {} +

    This was my output:

    $ find ./ -type f -name "flashgrid_node" -exec grep -il "SYNCING" {} +

    ./rac1/rac1.example.com/flashgrid_node
    ./rac2/rac2.example.com/flashgrid_node
    ./racq/racq.example.com/flashgrid_node
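
    For what it's worth, GNU grep can achieve the same in one step with its recursive flags (a hedged alternative to the find pipeline above):

    # grep -ril --include="*.log" "error" /your/path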

    Linux: Add passwordless sudo permission to user

    In this blog, I will grant passwordless sudo permission to the oracle user for testing purposes. Change the username as needed.

    1. As the root user, create a dedicated group for granting sudo privileges. Choose any name you like for the group.

    # groupadd oracle_sudoers

    2. Use the usermod command to add the oracle user to the group:

    # usermod -aG oracle_sudoers oracle

    3. Update /etc/sudoers using the visudo command:

    # visudo

    Add the following line:

    %oracle_sudoers ALL=(ALL) NOPASSWD: ALL

    Explanation (read carefully):

    • %oracle_sudoers -> refers to a Linux user group called oracle_sudoers. The % symbol means "group", not a specific user, so the rule applies to every member of the group.
    • ALL=(ALL) -> the first ALL refers to any host (typically used in multi-host sudo configs); the (ALL) means members can run commands as any target user, including root.
    • NOPASSWD: -> allows these users to run sudo commands without being prompted for a password.
    • ALL -> they are allowed to run any command with sudo. You can also restrict the group to only the specific commands you list in this position.

    ⚠️ Security Note:

    This provides full root-level access without password prompts to members of oracle_sudoers, so it should be used with caution and only for trusted administrative users (e.g., DBAs or sysadmins).
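
    To verify the setup, you can list the user's sudo privileges and run a quick test as that user (a fresh login or su session is needed for the new group membership to take effect):

    # sudo -l -U oracle
    # su - oracle -c "sudo whoami"
    root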

    Reducing rxdrop/s – receive packet drops per second

    Note: the post may seem Linux-only; however, high rxdrop/s hurts any workload, especially Oracle database performance, when it occurs on the database server.

    In this post, I will explain the kernel parameters that influence receive packet drops per second (rxdrop/s).
    The post is more about workarounds and tuning values than about identifying exactly what changed and why the fragmentation level increased.
    We will discuss several key /etc/sysctl.conf parameters that can help make fragmentation more manageable:

    net.ipv4.ipfrag_high_thresh = 67108864
    net.ipv4.ipfrag_low_thresh = 66060288
    net.ipv4.ipfrag_time = 10

    Problem:

    The ksar graph (a tool that visualizes sar output) was displaying peaks in interface errors on the eth1 interface, particularly rxdrop/s (graph not reproduced here).
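
    If you want to reproduce the observation without ksar, sar can print the same per-interface error counters directly (EDEV is the network error report; the interval and count below are illustrative):

    # sar -n EDEV 1 5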

    Workaround:

    The following kernel parameters in Linux control how the system handles IP packet fragments in the IPv4 stack.

    • The maximum memory threshold (in bytes) that the kernel can use to store IPv4 fragmented packets:

      net.ipv4.ipfrag_high_thresh = 67108864

    When the total memory used for IP fragments exceeds 64 MB (67108864 bytes), the kernel starts dropping fragments until memory usage falls below the low threshold (ipfrag_low_thresh, the next parameter).

    • The minimum memory threshold to stop dropping fragments:

      net.ipv4.ipfrag_low_thresh = 66060288

    Once fragment memory usage drops below ~63 MB (66060288 bytes), fragment discarding stops.

    • The time (in seconds) the kernel will keep an incomplete, fragmented packet in memory before discarding it:

      net.ipv4.ipfrag_time = 10

    The default is often 30 seconds. Here it’s reduced to 10 seconds.

      This helps prevent memory from being held too long by incomplete or malicious fragment streams, which are common in DoS attacks.

      After changing these parameters in /etc/sysctl.conf, run sysctl -p to apply them and make them effective at runtime.
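
      To confirm the running values after applying (sysctl accepts multiple parameter names at once):

      # sysctl net.ipv4.ipfrag_high_thresh net.ipv4.ipfrag_low_thresh net.ipv4.ipfrag_time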

      Checking supported MTU (Maximum Transmission Unit) for a system using PING

      When troubleshooting network issues, ensuring that packets are not being fragmented is crucial. One way to check the Maximum Transmission Unit (MTU) of a network path is by using the ping command with specific flags that test for fragmentation.

      What is MTU?

      MTU (Maximum Transmission Unit) is the largest size of a packet that can be sent over a network without fragmentation. If a packet exceeds the MTU, it is either fragmented or dropped (if fragmentation is disabled).

      To determine the MTU value that works for your connection, you can use the ping command with the Don’t Fragment (DF) flag, ensuring that packets exceeding the MTU are rejected instead of being fragmented.

      Using PING to check MTU

      A simple way to test MTU is by sending a ping with a specified packet size and ensuring it does not get fragmented:

      # ping 10.7.0.4 -c 2 -M do -s 1400

      Where:

      • 10.7.0.4: The destination IP address to which we are sending the ping
      • -c 2: Sends 2 pings before stopping
      • -M do: Enables strict Path MTU Discovery, meaning fragmentation is not allowed
      • -s 1400: Sets the ICMP payload size to 1400 bytes. The total packet size will be:
        • 1400 bytes (payload) + 8 bytes (ICMP header) + 20 bytes (IP header) = 1428 bytes.

      In the following example, we successfully send a packet with a 1400-byte payload:

      [root@rac1 ~]# ping 10.0.1.4 -c 2 -M do -s 1400
      PING 10.0.1.4 (10.0.1.4) 1400(1428) bytes of data.
      1408 bytes from 10.0.1.4: icmp_seq=1 ttl=63 time=0.726 ms
      1408 bytes from 10.0.1.4: icmp_seq=2 ttl=63 time=0.720 ms

      --- 10.0.1.4 ping statistics ---
      2 packets transmitted, 2 received, 0% packet loss, time 1046ms

      Sending a packet with a 1472-byte payload also succeeds:

      [root@rac1 ~]# ping 10.0.1.4 -c 2 -M do -s 1472
      PING 10.0.1.4 (10.0.1.4) 1472(1500) bytes of data.
      1480 bytes from 10.0.1.4: icmp_seq=1 ttl=63 time=0.780 ms
      1480 bytes from 10.0.1.4: icmp_seq=2 ttl=63 time=0.759 ms

      --- 10.0.1.4 ping statistics ---
      2 packets transmitted, 2 received, 0% packet loss, time 1034ms
      rtt min/avg/max/mdev = 0.759/0.769/0.780/0.029 ms

      But sending a packet with a 1473-byte payload fails:

      [root@rac1 ~]# ping 10.0.1.4 -c 2 -M do -s 1473
      PING 10.0.1.4 (10.0.1.4) 1473(1501) bytes of data.

      --- 10.0.1.4 ping statistics ---
      2 packets transmitted, 0 received, 100% packet loss, time 1023ms

      This indicates that the largest ICMP payload you can send without fragmentation is 1472 bytes, which matches the standard 1500-byte Ethernet MTU (1472 + 28 bytes of ICMP and IP headers = 1500).
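
      If you'd rather not probe sizes by hand, a small loop can test candidate payload sizes in one go (a hedged sketch; the destination and sizes are illustrative):

      # for s in 1500 1473 1472 1400; do ping -c 1 -W 1 -M do -s "$s" 10.0.1.4 >/dev/null 2>&1 && echo "payload $s OK"; done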