Linux/macOS: Retrieve RPMs from .sh file without running the script

Problem

Sometimes vendors ship their software as a single self-extracting .sh installer that contains multiple .rpm or other files inside.

Running the .sh directly might trigger installation logic you don’t want, so the challenge is: How can we safely unpack the RPMs without executing the script?

Solution

Most vendor installers provide built-in extraction flags that let you unpack the archive safely.

First, check whether your script supports extraction options:

  • Run it with --help.
  • Or open the file in a text editor (vi, vim, less) and search for the section that lists available options.
  • Look for keywords like --target, --noexec, or --keep.

    In my case, the script showed this usage block:

    $0 [options] [--] [additional arguments to embedded script]
    
    Options:
      --confirm             Ask before running embedded script
      --quiet               Do not print anything except error messages
      --noexec              Do not run embedded script
      --keep                Do not erase target directory after running
      --noprogress          Do not show the progress during decompression
      --nox11               Do not spawn an xterm
      --nochown             Do not give the extracted files to the current user
      --target dir          Extract directly to a target directory
                            (absolute or relative path)
      --tar arg1 [arg2 ...] Access the contents of the archive through tar
      --                    Pass following arguments to the embedded script
    
    

    The key flags here are:

    • --target -> specifies the output directory for extracted files
    • --noexec -> prevents the embedded installer logic from executing

    Here’s how I safely extracted the files from my .sh installer. Depending on the installer, you may need to create the target directory first; in my case the script created it automatically:

    $ sh flashgrid_cluster_node_update-25.5.89.70767.sh --target extract/ --noexec
    Creating directory extract/
    Verifying archive integrity... All good.
    Uncompressing update 100%
    

    Counting the lines in the directory listing shows 46 (note that ll prints a total header line, so 45 entries were actually extracted):

    $ ll extract/ | wc -l
    46
    
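    To count only the RPMs among the extracted files, a small helper like this sketch works; the function name is mine, and the directory name extract/ matches the example above:

```shell
# count_rpms DIR: count the .rpm files sitting directly in DIR
# (a generic helper, not part of the vendor installer)
count_rpms() {
    find "$1" -maxdepth 1 -name '*.rpm' | wc -l
}

# Usage: count_rpms extract/
```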

    Linux: Change the crash dump location

    When kdump is enabled, the crash dumps are typically written to /var/crash. However, this directory may not always be suitable – especially if it lacks sufficient space. Thankfully, the dump location is configurable.

    Follow the steps below to redirect the crash dump to another path.

    1. Edit the kdump configuration file /etc/kdump.conf

    Find the line that begins with path (or add it if it doesn’t exist), and set it to your desired directory. For example:

    path /var2/crash

    This tells kdump to save crash dumps to /var2/crash instead of the default /var/crash.

    2. Ensure the directory exists and has enough space

    Create the new directory if it doesn’t already exist:

    # mkdir /var2/crash

    Make sure it has appropriate permissions and enough disk space to store crash dumps, which can be large depending on system memory.

    3. Restart the kdump service

    After making changes, restart the kdump service to apply the new configuration:

    # systemctl restart kdump

    You can check the status to confirm it’s active:

    # systemctl status kdump

    ● kdump.service - Crash recovery kernel arming
    Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor preset: enabled)
    Active: active (exited) since Thu 2025-07-10 19:42:12 UTC; 10min ago
    Main PID: 1162 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 196884)
    Memory: 0B
    CGroup: /system.slice/kdump.service

    Jul 10 19:42:08 rac1.mycompany.mydomain systemd[1]: Starting Crash recovery kernel arming...
    Jul 10 19:42:12 rac1.mycompany.mydomain kdumpctl[1428]: kdump: kexec: loaded kdump kernel
    Jul 10 19:42:12 rac1.mycompany.mydomain kdumpctl[1428]: kdump: Starting kdump: [OK]
    Jul 10 19:42:12 rac1.mycompany.mydomain systemd[1]: Started Crash recovery kernel arming.

    ⚠️ Important Notes

    • The crash dump directory must be accessible even during a crash, so avoid temporary filesystems (like /tmp) or network paths unless properly configured.
    • For production systems, it’s best to use a dedicated partition or storage volume with enough capacity to hold full memory dumps.
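    Steps 1–2 can be scripted. This is a minimal sketch (the function name and argument handling are my own); on a real system you would point it at /etc/kdump.conf and then restart the service as in step 3:

```shell
# set_kdump_path CONF DIR: set "path DIR" in a kdump.conf-style file,
# replacing an existing path directive or appending one if missing.
set_kdump_path() {
    conf=$1; dir=$2
    if grep -q '^path[[:space:]]' "$conf"; then
        sed -i "s|^path[[:space:]].*|path $dir|" "$conf"
    else
        printf 'path %s\n' "$dir" >> "$conf"
    fi
    mkdir -p "$dir"    # step 2: make sure the target directory exists
}

# Usage: set_kdump_path /etc/kdump.conf /var2/crash && systemctl restart kdump
```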

    Linux: Disable Kdump

    To disable Kdump, follow these steps:

    1. Disable the kdump service:

    # systemctl disable --now kdump.service

    2. Check that the kdump service is inactive:

    # systemctl status kdump.service

    3. Remove kexec-tools package

    # rpm -e kexec-tools 

    4. (Optional) Remove the crashkernel command-line parameter from the current kernel by running the following command:

    # grubby --remove-args="crashkernel" --update-kernel=/boot/vmlinuz-$(uname -r)

    Or set the desired value with grubby --update-kernel=/boot/vmlinuz-$(uname -r) --args="crashkernel=...." (replace the dots with your value).

    ℹ️ When removing the kexec-tools package, rpm may report that the package is not installed even though it actually is. In that case, rebuild the RPM database and rerun the erase command:

    # rpm --rebuilddb
    # rpm -e kexec-tools

    PRVG-11960 : Set user ID bit is not set for file oradism

    Problem:

    While running asmca, I got the following error:

    Cause - Following nodes does not have required file ownership/permissions: Node :mk23ai-b PRVG-11960 : Set user ID bit is not set for file "/u01/app/oracle/product/23ai/dbhome_1/bin/oradism" on node "mk23ai-b".   Action - Run the Oracle home root script as the "root" user to fix the permissions.

    Troubleshooting:

    Check the current permissions on the file:

    oracle@mk23ai-b:~$ ll /u01/app/oracle/product/23ai/dbhome_1/bin/oradism
    -rwxr-x---. 1 root oinstall 1138016 Jul 11 2024 /u01/app/oracle/product/23ai/dbhome_1/bin/oradism

    Solution:

    The error message includes an action section stating the steps to follow: connect to the database server as the root user and run the root.sh script from the RDBMS home, since the oradism file mentioned in the error is located there.

    root@mk23ai-b:~# /u01/app/oracle/product/23ai/dbhome_1/root.sh
    Performing root user operation.

    The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /u01/app/oracle/product/23ai/dbhome_1

    Enter the full pathname of the local bin directory: [/usr/local/bin]:
    The contents of "dbhome" have not changed. No need to overwrite.
    The contents of "oraenv" have not changed. No need to overwrite.
    The contents of "coraenv" have not changed. No need to overwrite.

    Entries will be added to the /etc/oratab file as needed by
    Database Configuration Assistant when a database is created
    Finished running generic part of root script.
    Now product-specific root actions will be performed.

    Check the file permissions again:

    oracle@mk23ai-b:~$ ll /u01/app/oracle/product/23ai/dbhome_1/bin/oradism
    -rwsr-x---. 1 root oinstall 1138016 Jul 11 2024 /u01/app/oracle/product/23ai/dbhome_1/bin/oradism

    This time the set user ID (setuid) bit is set.

    Normally, when you run a program (an executable file), it runs with your own permissions – meaning it can only do what your user account is allowed to do. But if the setuid bit is set on a file, the program runs with the permissions of the file’s owner, regardless of who is running it.
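    To check any file for this bit programmatically (a generic sketch, not Oracle-specific; the function name is mine):

```shell
# is_setuid FILE: succeed if FILE has the setuid bit (mode 4000) set.
# find -perm -4000 matches only when that bit is present.
is_setuid() {
    [ -n "$(find "$1" -maxdepth 0 -perm -4000 2>/dev/null)" ]
}

# Usage: is_setuid /u01/app/oracle/product/23ai/dbhome_1/bin/oradism && echo "setuid is set"
```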

    You can continue using ASMCA this time.

    Checking supported MTU (Maximum Transmission Unit) for a system using PING

    When troubleshooting network issues, ensuring that packets are not being fragmented is crucial. One way to check the Maximum Transmission Unit (MTU) of a network path is by using the ping command with specific flags that test for fragmentation.

    What is MTU?

    MTU (Maximum Transmission Unit) is the largest size of a packet that can be sent over a network without fragmentation. If a packet exceeds the MTU, it is either fragmented or dropped (if fragmentation is disabled).

    To determine the MTU value that works for your connection, you can use the ping command with the Don’t Fragment (DF) flag, ensuring that packets exceeding the MTU are rejected instead of being fragmented.

    Using PING to check MTU

    A simple way to test MTU is by sending a ping with a specified packet size and ensuring it does not get fragmented:

    # ping 10.7.0.4 -c 2 -M do -s 1400

    Where:

    • 10.7.0.4: The destination IP address to which we are sending the ping
    • -c 2: Sends 2 pings before stopping
    • -M do: Enables strict Path MTU Discovery, meaning fragmentation is not allowed
    • -s 1400: Sets the ICMP payload size to 1400 bytes. The total packet size will be:
      • 1400 bytes (payload) + 8 bytes (ICMP header) + 20 bytes (IP header) = 1428 bytes.

    In the following example, we successfully send a packet with a 1400-byte payload:

    [root@rac1 ~]# ping 10.0.1.4 -c 2 -M do -s 1400
    PING 10.0.1.4 (10.0.1.4) 1400(1428) bytes of data.
    1408 bytes from 10.0.1.4: icmp_seq=1 ttl=63 time=0.726 ms
    1408 bytes from 10.0.1.4: icmp_seq=2 ttl=63 time=0.720 ms

    --- 10.0.1.4 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1046ms

    Sending a 1472-byte payload (a 1500-byte packet) also succeeds:

    [root@rac1 ~]# ping 10.0.1.4 -c 2 -M do -s 1472
    PING 10.0.1.4 (10.0.1.4) 1472(1500) bytes of data.
    1480 bytes from 10.0.1.4: icmp_seq=1 ttl=63 time=0.780 ms
    1480 bytes from 10.0.1.4: icmp_seq=2 ttl=63 time=0.759 ms

    --- 10.0.1.4 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1034ms
    rtt min/avg/max/mdev = 0.759/0.769/0.780/0.029 ms

    But a 1473-byte payload (a 1501-byte packet) fails:

    [root@rac1 ~]# ping 10.0.1.4 -c 2 -M do -s 1473
    PING 10.0.1.4 (10.0.1.4) 1473(1501) bytes of data.

    --- 10.0.1.4 ping statistics ---
    2 packets transmitted, 0 received, 100% packet loss, time 1023ms

    This indicates that the largest ICMP payload you can send without fragmentation is 1472 bytes, i.e. the path MTU is 1500.
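    The header arithmetic from the examples (payload + 8-byte ICMP header + 20-byte IP header) can be captured in a tiny helper; the probing loop beneath it is only a sketch and needs a real, reachable destination:

```shell
# payload_for_mtu MTU: the -s value whose ping packet is exactly MTU
# bytes long: MTU - 20 (IP header) - 8 (ICMP header).
payload_for_mtu() {
    echo $(( $1 - 28 ))
}

# Probe sketch (HOST is your destination): step the payload down until
# a non-fragmented ping gets through.
# for s in $(payload_for_mtu 1500) $(payload_for_mtu 1400) $(payload_for_mtu 1300); do
#     ping -c 1 -M do -s "$s" "$HOST" >/dev/null 2>&1 && { echo "max payload: $s"; break; }
# done
```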

    Rename directories, subdirectories, files recursively that contain matching string

    Problem:

    I copied the /u01 directory (containing Oracle software) from another node. The Oracle software home includes directories and files named after the hostname.

    My task was to rename all directories, subdirectories, and files containing a specific hostname (rac2 in my case) to rac1.

    Let me show you the kind of folder hierarchy that is challenging to rename by script. For simplicity this hierarchy is made up, but the same type of nesting exists in /u01:

    /u01/first_level_rac2/second_level_rac2/third_level_rac2.txt

    We want to have:

    /u01/first_level_rac1/second_level_rac1/third_level_rac1.txt

    So finally, all folders or files containing the string rac2 should be replaced with rac1.

    The challenge is that you must start renaming from third_level, then second_level, and only then first_level. Otherwise, renaming a parent first invalidates the paths of everything beneath it.

    Solution:

    If you want a shortcut, here is the code:

    [root@rac1 ~]# find /u01 -depth -name "*rac2*" | while read -r i ; do
    newname="$(echo "${i}" | sed 's/\(.*\)rac2/\1rac1/')" ;
    echo "mv" "${i}" "${newname}" >> rename_rac2_rac1.sh;
    done

    Then run the generated rename_rac2_rac1.sh file, which contains an mv statement for each matching file or directory.

    Let me explain,

    find /u01 -depth -name "*rac2*" – This finds all files and directories whose names contain the rac2 keyword and lists them in depth-first order (children before parents).

    Without -depth, the output is the following:

    /u01/first_level_rac2
    /u01/first_level_rac2/second_level_rac2
    /u01/first_level_rac2/second_level_rac2/third_level_rac2.txt

    With -depth, the order is reversed:

    /u01/first_level_rac2/second_level_rac2/third_level_rac2.txt
    /u01/first_level_rac2/second_level_rac2
    /u01/first_level_rac2

    "$(echo "${i}" | sed 's/\(.*\)rac2/\1rac1/')" – Here each line from the find command is piped to sed, which replaces the last occurrence of the rac2 keyword (the greedy \(.*\) consumes everything before it), so only the final path component is changed.

    The old and new names are then combined into an mv statement and appended to rename_rac2_rac1.sh.

    These are the mv statements generated by the script:

    mv /u01/first_level_rac2/second_level_rac2/third_level_rac2.txt /u01/first_level_rac2/second_level_rac2/third_level_rac1.txt
    
    mv /u01/first_level_rac2/second_level_rac2 /u01/first_level_rac2/second_level_rac1
    
    mv /u01/first_level_rac2 /u01/first_level_rac1
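    The same logic can be wrapped in a function and tried on a scratch tree before touching /u01 (the function name is mine; the find/sed pipeline matches the one above):

```shell
# rename_deepest ROOT OLD NEW: rename every file and directory under
# ROOT whose name contains OLD, deepest entries first; the greedy
# \(.*\) makes sed replace only the last occurrence of OLD in the path.
rename_deepest() {
    root=$1; old=$2; new=$3
    find "$root" -depth -name "*${old}*" | while read -r i; do
        mv "$i" "$(echo "$i" | sed "s/\(.*\)${old}/\1${new}/")"
    done
}

# Usage: rename_deepest /u01 rac2 rac1
```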

    Copy a file over SSH without SCP

    Problem:

    The /usr/bin/scp binary was removed from the system, which caused the Oracle patching process to fail.

    The scp binary is provided by the openssh-clients RPM, which was still installed on the system, but the binary itself was missing.

    Troubleshooting/Testing:

    The workaround is to copy the scp binary from a similar healthy server (keep the same version). However, transferring a file to a host where scp itself is missing is a bit of a chicken-and-egg problem. Let’s try:

    [fg@rac1 ~]$ scp /usr/bin/scp racq:/tmp/scp
    bash: scp: command not found
    lost connection

    We get lost connection because scp is not present on the racq node (scp must exist on both ends of the transfer).

    Solution:

    We need the ssh and cat commands instead. On most systems root login over ssh is disabled, so place the file under /tmp first and then move it to the correct location as root.

    In my example, user equivalency is already set up for the fg user, so the command looks like this:

    [fg@rac1 ~]$ ssh racq cat < /usr/bin/scp ">" /tmp/scp

    Connect to the remote server and copy /tmp/scp to the correct location. Reset permissions.

    [root@racq tmp]# cp /tmp/scp /usr/bin/scp
    [root@racq tmp]# chmod 755 /usr/bin/scp
    [root@racq tmp]# chown root:root /usr/bin/scp

    The transfer should be working now:

    [fg@rac1 ~]$ scp /usr/bin/scp racq:/tmp/scp
    scp      100%   89KB  44.0MB/s   00:00

    The process worked for a binary file, so it will work for a text file too.
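    The cat-over-ssh trick generalizes to a small helper. Note the quoting: the redirection must run on the remote side, which is why the original command wraps > in quotes. This sketch (the function name is mine) makes that explicit:

```shell
# push_file LOCAL REMOTE_PATH HOST: stream LOCAL over ssh and let a
# remote cat write it to REMOTE_PATH -- no scp needed on either end.
push_file() {
    local_file=$1; remote_path=$2; host=$3
    ssh "$host" "cat > $remote_path" < "$local_file"
}

# Usage: push_file /usr/bin/scp /tmp/scp racq
# Pulling works the same way in reverse:
#   ssh racq cat /usr/bin/scp > ./scp.copy
```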

    Inserting entry in .db file from Linux shell

    • Create a text file containing the necessary entries, e.g.:
    [root@rac1 ~]# cat test.txt
    ora.test.db
    \92\c4\9dhard(global:uniform:ora.DATA.dg) pullup(global:ora.DATA.dg) weak(type:ora.listener.type, global:type:ora.scan_listener.type, uniform:ora.ons, global:ora.gns)\c4\c5hard(fg.DATA.DisksReady, global:uniform:ora.DATA.dg) pullup(fg.DATA.DisksReady, global:ora.DATA.dg) weak(type:ora.listener.type, global:type:ora.scan_listener.type, uniform:ora.ons, global:ora.gns)

    Please note, this is a sample entry that was needed in my case; you know best what it should contain in yours.

    • Create output.db file from test.txt
    [root@rac1 ~]# db_load -T -t btree -f test.txt output.db
    • Read data from output.db file to make sure that the entry is there using db_dump
    [root@rac1 ~]# db_dump -p output.db
    VERSION=3
    format=print
    type=btree
    db_pagesize=4096
    HEADER=END
     ora.test.db
     \92\c4\9dhard(global:uniform:ora.DATA.dg) pullup(global:ora.DATA.dg) weak(type:ora.listener.type, global:type:ora.scan_listener.type, uniform:ora.ons, global:ora.gns)\c4\c5hard(fg.DATA.DisksReady, global:uniform:ora.DATA.dg) pullup(fg.DATA.DisksReady, global:ora.DATA.dg) weak(type:ora.listener.type, global:type:ora.scan_listener.type, uniform:ora.ons, global:ora.gns)
    DATA=END
    

    Install Google Chrome on Linux 7.9 using terminal

    There are several ways to do this; I found what I hope is the simplest one and want to share it:

    0. Create repo file:

    # vi /etc/yum.repos.d/google-chrome.repo
    
    [google-chrome]
    name=google-chrome
    baseurl=https://dl.google.com/linux/chrome/rpm/stable/x86_64
    enabled=1
    gpgcheck=1
    gpgkey=https://dl.google.com/linux/linux_signing_key.pub

    1. Enable repo ol7_optional_latest for vulkan dependency:

    # yum-config-manager --enable ol7_optional_latest

    2. Install google-chrome-stable package:

    # yum install google-chrome-stable -y

    3. Run:

    $ google-chrome

    Or in the background:

    $ google-chrome &

    The window will come up in VNC or X, whichever you have configured.

    REMOTE HOST IDENTIFICATION HAS CHANGED!

    Problem:

    Connecting via ssh to the newly created host causes an error:

    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
    @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
    IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
    Someone could be eavesdropping on you right now (man-in-the-middle attack)!
    It is also possible that a host key has just been changed.
    The fingerprint for the ECDSA key sent by the remote host is
    SHA256:AxfpHOVc8NP2OYPGce92HMa5LADDQj2V98ZKgoQHFGU.
    Please contact your system administrator.
    Add correct host key in /Users/mari/.ssh/known_hosts to get rid of this message.
    Offending ECDSA key in /Users/mari/.ssh/known_hosts:315
    ECDSA host key for 52.1.130.91 has changed and you have requested strict checking.
    Host key verification failed.

    Reason:

    I had another server with the same public IP, so when I connected to the old server, its host key was saved in known_hosts. Later I removed the old server, created a new one, and assigned it the same public IP. The new host’s key is different, but the old entry was still present in known_hosts.

    Solution:

    Open /Users/mari/.ssh/known_hosts and delete only the line containing the mentioned IP (52.1.130.91 in my case), save the file, and retry the connection.
    It should work now.
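    The standard tool for this is ssh-keygen -R, which removes the offending entries and keeps a backup copy. A hand-rolled equivalent for illustration (the function name is mine; note the host is used as a regex, so dots match loosely):

```shell
# remove_known_host HOST FILE: delete known_hosts lines whose first
# field starts with HOST. ssh-keygen -R HOST does this properly, with
# a .old backup of the file.
remove_known_host() {
    host=$1; file=$2
    sed -i "/^$host[ ,]/d" "$file"
}

# The simpler, standard way:
#   ssh-keygen -R 52.1.130.91
```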