If the /tmp filesystem on your AIX 7.2 system is showing 100% usage, it might cause problems such as preventing new temporary files from being written, leading to application or system issues. Here are steps you can take to resolve this:
1. Check Disk Usage:
Start by identifying the largest files or directories using the following command:
du -sk /tmp/* | sort -n
This will display a sorted list of disk usage within the /tmp directory.
2. Identify Large Files:
Look for any unusually large files or directories that may be using excessive space. If these files are not in use and safe to delete, you can remove them to free up space.
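For example, you could surface only the larger files directly with find. This is a rough sketch: the ~10 MB threshold is just an illustration, and the c suffix makes -size count bytes.
find /tmp -xdev -type f -size +10485760c -exec ls -l {} \;
The -xdev flag keeps the search from crossing into any other filesystems mounted under /tmp.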
3. Clear Old or Unnecessary Files:
Remove old log files, temporary cache files, or other unnecessary data from the /tmp directory:
rm /tmp/<filename>
Be cautious not to delete files that are in use by running processes.
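If you want to confirm a file is not in use before removing it, fuser can show which processes hold it open (a quick sketch; <filename> is a placeholder):
fuser -u /tmp/<filename>
If no process IDs are listed, nothing has the file open and it should be safe to remove.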
4. Check for Core Dumps:
Sometimes core dumps can fill up the /tmp directory. List any core files:
ls -lh /tmp/core*
If they are not needed for debugging, you can delete them to free up space.
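For example, to clear core files more than a week old (a sketch; adjust the age and pattern to your retention needs):
find /tmp -xdev -type f -name 'core*' -mtime +7 -exec rm {} \;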
5. Increase Filesystem Size (if needed):
If /tmp is regularly filling up and cannot be adequately managed by deleting files, you might consider increasing its size. This can be done with the following command:
chfs -a size=+<additional_size>M /tmp
Replace <additional_size> with the amount of space you want to add (in MB).
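As a concrete example, adding 512 MB might look like this (it assumes rootvg has free physical partitions to give, so check that first):
lsvg rootvg
chfs -a size=+512M /tmp
df -k /tmp
The lsvg output shows the FREE PPs available, chfs grows the filesystem online, and df confirms the new size.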
6. Reboot (if needed):
If file locks or other issues prevent you from clearing space, you may need to reboot the system, but this should be a last resort after all other options have been tried.
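If it does come to that, the usual AIX fast reboot is (sketch only; schedule it for a maintenance window):
shutdown -Fr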
$ df
Filesystem 512-blocks Free %Used Iused %Iused Mounted on
/dev/hd3 10485760 0 100% 76 10% /tmp
$ du -sk /tmp/* | sort -n
0 /tmp/errmbatch
0 /tmp/lost+found
0 /tmp/saved_errmbatch
4 /tmp/vgdata
12 /tmp/bos
104 /tmp/tmprpm
It seems you’re dealing with a situation where the /tmp filesystem shows 100% usage, but the du output doesn’t reflect much file usage. This could be due to one of the following:
1. Deleted Files Still Open by Processes:
Files that have been deleted but are still held open by a process will not show up in du output, yet they still consume space. You can identify such files with:
lsof | grep /tmp
Look for files marked as “(deleted).” You can free up space by terminating the processes holding these files open or truncating the file with:
> /proc/<pid>/fd/<file_descriptor>
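Note that lsof is not in the base AIX install (it is usually added from the AIX Toolbox). If it is unavailable, a rough equivalent with base tools is sketched below; as far as I recall, the -d flag of the AIX fuser command reports files that have been unlinked but are still open, so treat this as an assumption to verify against your system’s man page:
fuser -d -u -V /tmp
ps -fp <pid>
The first command should list the processes holding deleted files open on /tmp (with the owning user and, with -V, the inode and size of each deleted file); the second lets you inspect a suspect <pid> before deciding whether to stop it.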
2. Filesystem Metadata Issue:
Sometimes the filesystem metadata can become corrupted, resulting in an incorrect display of space usage. Running a filesystem check might help:
fsck /tmp
You would need to unmount the filesystem to run fsck, but since you prefer not to reboot, this step might require careful planning.
3. Filesystem Allocation Issue:
The filesystem could have space allocated to inodes rather than blocks. You can check inode usage with:
df -i /tmp
If inode usage is high even though file sizes are small, it might explain the discrepancy (although your df output above shows %Iused at only 10%, so that does not appear to be the case here). If needed, you might have to increase inode allocation or resize the filesystem.
4. Fragmentation or Corruption:
Filesystem fragmentation or corruption could cause space to be misreported. A defragfs run or a check for corruption might help; defragfs works on a mounted filesystem, but a full corruption check with fsck requires unmounting it.
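A read-only report is a safe first step, and unlike fsck, defragfs runs against the mounted filesystem (a sketch; -q only reports and changes nothing, if I have the flag right):
defragfs -q /tmp
defragfs /tmp
Run the second command only if the report suggests a defragmentation pass would actually help.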
Unmounting /tmp:
Unmounting /tmp can be tricky because it might have active processes using it. You could try forcing the unmount with:
umount -f /tmp
However, this is risky if daemons or system processes are actively using /tmp. Be prepared for potential issues with the system if /tmp is critical to running services.
If you’re unsure about manual daemon management after a reboot, you might want to try troubleshooting further before considering unmounting.
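If you do go the unmount route later, a rough sequence might look like this (a sketch only: /dev/hd3 is taken from your df output above, and stopping whatever fuser reports may disrupt running services):
fuser -cu /tmp
umount /tmp
fsck -y /dev/hd3
mount /tmp
Try the plain umount before reaching for -f, and only run fsck while the filesystem is offline.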
A reboot sounds like a solid plan. It should clear up any lingering issues in /tmp, especially if they’re caused by locked or deleted files held by processes. It’s often the simplest way to resolve this kind of problem when you can’t immediately identify or fix the underlying cause.
The reboot resolved the issue and /tmp is back to normal.