Linux: Change the crash dump location

When kdump is enabled, crash dumps are written to /var/crash by default. That directory is not always suitable, especially if it lacks sufficient space. Thankfully, the dump location is configurable.

Follow the steps below to redirect the crash dump to another path.

1. Edit the kdump configuration file /etc/kdump.conf

Find the line that begins with path (or add it if it doesn’t exist), and set it to your desired directory. For example:

path /var2/crash

This tells kdump to save crash dumps to /var2/crash instead of the default /var/crash.
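
For reference, a minimal /etc/kdump.conf along these lines might look as follows. The dedicated-filesystem line and the makedumpfile options are illustrative, not taken from the setup above; adjust them to your environment:

#ext4 /dev/mapper/vg00-lv_crash
path /var2/crash
core_collector makedumpfile -l --message-level 7 -d 31

If a dump target such as the ext4 line is uncommented, path is interpreted relative to that filesystem rather than to the root filesystem.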

2. Ensure the directory exists and has enough space

Create the new directory if it doesn’t already exist:

# mkdir /var2/crash

Make sure it has appropriate permissions and enough disk space to store crash dumps, which can be large depending on system memory.
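
A quick sanity check might look like this (the chown/chmod values are just a sensible default, not something kdump requires):

# df -h /var2/crash
# free -g
# chown root:root /var2/crash
# chmod 700 /var2/crash

df shows the free space on the target filesystem, while free -g shows total RAM, which is a rough upper bound for an unfiltered vmcore; keeping the directory root-only is reasonable because a vmcore contains kernel memory.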

3. Restart the kdump service

After making changes, restart the kdump service to apply the new configuration:

# systemctl restart kdump

You can check the status to confirm it’s active:

# systemctl status kdump

● kdump.service - Crash recovery kernel arming
   Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor preset: enabled)
   Active: active (exited) since Thu 2025-07-10 19:42:12 UTC; 10min ago
 Main PID: 1162 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 196884)
   Memory: 0B
   CGroup: /system.slice/kdump.service

Jul 10 19:42:08 rac1.mycompany.mydomain systemd[1]: Starting Crash recovery kernel arming...
Jul 10 19:42:12 rac1.mycompany.mydomain kdumpctl[1428]: kdump: kexec: loaded kdump kernel
Jul 10 19:42:12 rac1.mycompany.mydomain kdumpctl[1428]: kdump: Starting kdump: [OK]
Jul 10 19:42:12 rac1.mycompany.mydomain systemd[1]: Started Crash recovery kernel arming.

⚠️ Important Notes

  • The crash dump directory must be accessible even during a crash, so avoid temporary filesystems (like /tmp) or network paths unless properly configured.
  • For production systems, it’s best to use a dedicated partition or storage volume with enough capacity to hold full memory dumps.
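
As a sketch of the dedicated-volume approach, something like the following could be used (the volume group, logical volume name, size, and filesystem type are purely hypothetical):

# lvcreate -L 100G -n lv_crash vg00
# mkfs.xfs /dev/vg00/lv_crash
# echo '/dev/vg00/lv_crash /var2/crash xfs defaults 0 0' >> /etc/fstab
# mount /var2/crash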

ORA-26988: Cannot grant Oracle GoldenGate privileges. The procedure GRANT_ADMIN_PRIVILEGE is disabled.

Problem:

While trying to grant admin privileges to the GoldenGate user in a 23ai database, I received the following error:

SQL> EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADMIN');
BEGIN DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADMIN'); END;

*
ERROR at line 1:
ORA-26988: Cannot grant Oracle GoldenGate privileges. The procedure GRANT_ADMIN_PRIVILEGE is disabled.
ORA-06512: at "SYS.DBMS_LOGREP_UTIL", line 601
ORA-06512: at "SYS.DBMS_LOGREP_UTIL", line 636
ORA-06512: at "SYS.DBMS_GOLDENGATE_AUTH", line 38
ORA-06512: at line 1
Help: https://docs.oracle.com/error-help/db/ora-26988/

Explanation:

With Oracle Database 23ai, the GRANT_ADMIN_PRIVILEGE procedure is disabled and replaced by dedicated GoldenGate roles.

Solution:

Grant the following Oracle GoldenGate roles: OGG_CAPTURE for Extract, OGG_APPLY for Replicat, and OGG_APPLY_PROCREP for procedural replication with Replicat.

grant OGG_APPLY to GGADMIN;
grant OGG_APPLY_PROCREP to GGADMIN;
grant OGG_CAPTURE to GGADMIN;
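
To confirm the grants took effect, you can query DBA_ROLE_PRIVS for the same GGADMIN user used above; the three OGG_* roles should show up in the output (alongside any roles granted earlier):

SQL> select granted_role from dba_role_privs where grantee = 'GGADMIN';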

ORA-27106: system pages not available to allocate memory

Oracle error ORA-27106: system pages not available to allocate memory can appear when starting up a database instance, particularly when HugePages are misconfigured or unavailable. This post walks through a real-world scenario where the error occurs, explains the underlying cause, and provides step-by-step resolution.

Problem

Attempting to start up the Oracle database instance results in the following error:

oracle@mk23ai-b:~$ sqlplus / as sysdba

SQL*Plus: Release 23.0.0.0.0 - for Oracle Cloud and Engineered Systems on Thu Jul 3 00:15:46 2025
Version 23.7.0.25.01

Copyright (c) 1982, 2024, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORA-27106: system pages not available to allocate memory
Additional information: 6506
Additional information: 2
Additional information: 3

Cause

This error is most often seen on Linux platforms when the database is explicitly configured to use only HugePages (use_large_pages='ONLY') and HugePages are either:

  • not configured at all, or
  • allocated in insufficient quantity.

Troubleshooting

1) Identify the SPFILE path of the database

$ srvctl config database -db orclasm

Output:

Database unique name: orclasm
Database name: orclasm
Oracle home: /u01/app/oracle/product/23ai/dbhome_1
Oracle user: oracle
Spfile: +DATA/ORCLASM/PARAMETERFILE/spfile.274.1201294643
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DATA
Services:
OSDBA group:
OSOPER group:
Database instance: orclasm

2) Create a PFILE from the SPFILE

You can create a pfile from an spfile without starting the instance, which is particularly useful when the instance cannot be started.

$ export ORACLE_SID=orclasm
$ sqlplus / as sysdba

SQL> create pfile='/tmp/temppfile.ora' from spfile='+DATA/ORCLASM/PARAMETERFILE/spfile.274.1201294643';

File created.

SQL> exit

Now, inspect the HugePages configuration setting:

$ grep -i use_large_pages /tmp/temppfile.ora
*.use_large_pages='ONLY'

3) Check HugePages availability on the system

$ grep Huge /proc/meminfo

Example output (problem scenario):

HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
Hugepagesize:       2048 kB

In this case HugePages are not configured on the system at all. If HugePages are configured on your system and you still hit the error, the HugePages_Free value is most likely too low to hold the SGA.
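
To see how much memory the SGA actually needs before sizing HugePages, you can also check the SGA-related parameters in the pfile created earlier (the path /tmp/temppfile.ora is the one from step 2; parameter names may differ if memory_target is in use):

$ grep -iE 'sga_target|sga_max_size|memory_target' /tmp/temppfile.ora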

Solution

1) Estimate required HugePages

You can estimate the needed HugePages based on total SGA:

Required HugePages = (SGA size in MB) / (Hugepagesize in MB)

For example, if the SGA is 24 GB (24576 MB) and Hugepagesize = 2 MB, then required HugePages = 24576 / 2 = 12288.
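
The same arithmetic can be scripted against /proc/meminfo; here is a small sketch where SGA_MB=24576 is the assumed 24 GB SGA from the example:

SGA_MB=24576                                            # assumed SGA size in MB
HP_KB=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)  # huge page size in kB (2048 on most x86_64 systems)
echo $(( SGA_MB * 1024 / HP_KB ))                       # prints 12288 for a 2 MB page size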

2) Configure HugePages at OS level

Edit /etc/sysctl.conf:

vm.nr_hugepages = 12288

Then apply:

# sysctl -p

3) Start the database in nomount to verify it is startable

$ sqlplus / as sysdba
SQL> startup nomount

4) Reboot and verify

Reboot the system to make sure the HugePages allocation persists and everything comes up cleanly, then double-check the configuration:

$ grep Huge /proc/meminfo

Expected output:

HugePages_Total:   12288
HugePages_Free:    12288
Hugepagesize:       2048 kB

⚠️ Temporary Workaround (not recommended for production)

If you need to get the database up urgently and cannot configure HugePages immediately, change the parameter to:

use_large_pages='TRUE'

This allows fallback to regular memory pages. However, for best performance and to avoid fragmentation, it’s strongly recommended to configure HugePages correctly and use use_large_pages='ONLY' in production.
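
Because the instance will not start, the parameter cannot be changed with ALTER SYSTEM. One way, sketched here by reusing the pfile and spfile paths from the troubleshooting steps above (adjust them to your own), is to edit /tmp/temppfile.ora, set use_large_pages='TRUE', and then rebuild the spfile and start the database:

$ sqlplus / as sysdba
SQL> create spfile='+DATA/ORCLASM/PARAMETERFILE/spfile.274.1201294643' from pfile='/tmp/temppfile.ora';
SQL> exit
$ srvctl start database -db orclasm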

Linux: Disable Kdump

To disable Kdump, follow these steps:

1. Disable the kdump service:

# systemctl disable --now kdump.service

2. Check that the kdump service is inactive:

# systemctl status kdump.service

3. Remove the kexec-tools package:

# rpm -e kexec-tools 

4. (Optional) Remove the crashkernel command-line parameter from the current kernel by running the following command:

# grubby --remove-args="crashkernel" --update-kernel=/boot/vmlinuz-$(uname -r)

Or set the desired value with grubby --update-kernel=/boot/vmlinuz-$(uname -r) --args="crashkernel=..." (replace the dots with your value).
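
Either way, you can confirm the current kernel arguments with grubby; after the removal, crashkernel should no longer appear in the args= line:

# grubby --info=/boot/vmlinuz-$(uname -r)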

ℹ️ When removing the kexec-tools package, rpm may report that the package is not installed even though it is. In that case, rebuild the RPM database and rerun the erase command:

# rpm --rebuilddb
# rpm -e kexec-tools

Linux: sed cannot rename /etc/default/sedysYQ9l Operation not permitted

Problem:

I was trying to enable kdump and wanted to set the memory reserved for the crash kernel, so I ran the command documented on the official RHEL site:

[root@rac1 ~]# sudo grubby --update-kernel=ALL --args="crashkernel=1G"

And I received the following error:

sed: cannot rename /etc/default/sedysYQ9l: Operation not permitted

Note that the random suffix after /etc/default/sed changes every time you rerun the command, so your path will likely differ.

Workaround:

For now I can only offer a workaround, since I could not find a proper fix. There are a couple of options:

  • Enable it for the current kernel only, which takes a single command:
# grubby --update-kernel=/boot/vmlinuz-$(uname -r) --args="crashkernel=1G"
  • Or enable it for a specific kernel (run it again for other kernels if necessary; see the loop sketch after this list):
# grubby --update-kernel=/boot/vmlinuz-4.18.0-553.22.1.el8_10.x86_64 --args="crashkernel=1G"
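
If you need the argument on every installed kernel, the single-kernel form can simply be looped over the images in /boot. A rough sketch (the glob may also pick up the rescue image, which you can skip if you prefer, and the crashkernel value is just the 1G from above):

for k in /boot/vmlinuz-*; do
    grubby --update-kernel="$k" --args="crashkernel=1G"
done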

Linux: Enable Kdump

Some systems have kernel crash dumps (kdump) disabled due to performance or memory-reservation concerns. If you hit a kernel panic and open a case with RHEL support, they will often ask for a crash dump. In that case you need to enable kdump and either wait for the incident to recur or trigger a panic manually; kdump must be enabled for the incident to generate dump files.

1. If the kexec-tools package has been removed from the system, install it:

# yum install kexec-tools -y

2. To reserve memory for the crash kernel, add the crashkernel option to the current kernel:

# grubby --update-kernel=/boot/vmlinuz-$(uname -r) --args="crashkernel=1G"

3. Reboot the System

# reboot
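
After the reboot, you can confirm that the memory was actually reserved; both checks simply read files exposed by the kernel:

# grep crashkernel /proc/cmdline
# grep -i "crash kernel" /proc/iomem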

4. Enable and start Kdump service

# systemctl enable --now kdump.service

5. Verify Kdump is running

# systemctl status kdump.service

● kdump.service - Crash recovery kernel arming
   Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor prese>
   Active: active (exited) since Tue 2025-06-24 20:29:58 UTC; 7min ago
 Main PID: 1169 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 196884)
   Memory: 0B
   CGroup: /system.slice/kdump.service

⚠️ Testing: Trigger a Kernel Panic

Please note that I will show you a command that can trigger a kernel panic. This will allow you to check if a dump is generated. This is meant for testing purposes only and should not be executed on a production system during working hours. 🙂

Are you sure you want to cause a kernel panic right now? If yes, here is the command:

# echo c > /proc/sysrq-trigger

At this point the node/VM has crashed and rebooted. Once you log back in, check the /var/crash/ directory to see whether crash data was generated.

# cd /var/crash/
# ll
...
drwxr-xr-x 2 root root 67 Jun 24 20:15 127.0.0.1-2025-06-24-20:15:51

# cd 127.0.0.1-2025-06-24-20\:15\:51/

# ll
...
-rw------- 1 root root 45904 Jun 24 20:15 kexec-dmesg.log
-rw------- 1 root root 242941092 Jun 24 20:15 vmcore
-rw------- 1 root root 43877 Jun 24 20:15 vmcore-dmesg.txt

⚠️ Be sure to monitor disk space in /var/crash, as vmcore files can be large.
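
A quick way to keep an eye on it is to sum up each dump directory:

# du -sh /var/crash/*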