I have run out of inodes! I have a folder with millions of smaller files that I would like to keep. However, `df -i` shows that I am at 100% inode usage. Can I just clear them? It appears I need to do something, since I can't even download the files over scp or sftp to back them up before clearing them from my remote server.
Ideas? Suggestions?
Hey!
Quick update to this question. I recently answered a similar question here:
Running out of inodes can be a real headache, especially when you have a folder with millions of smaller files. Don't worry, I've got you covered! Here's a more efficient way to handle this using the `find` command directly.

**Identify Inode Usage**: First, let's identify which directories are consuming the most inodes. You can use the `find` command combined with `wc -l` to get a count of the inodes used by each directory. This command will give you a sorted list of directories with their inode usage.
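The command itself didn't survive in this answer, but a loop along these lines (counting every entry `find` can reach under each top-level directory) produces that sorted list:

```shell
# One line per top-level directory: "<entry count> <path>", sorted ascending,
# so the heaviest inode consumers end up at the bottom.
for d in /*; do echo "$(find "$d" 2>/dev/null | wc -l) $d"; done | sort -n
```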
**Drill Down Further**: Once you've identified the main directory, you need to drill down further. For example, if `/home` is the culprit, run the same count against its subdirectories. Keep drilling down until you find the specific directory causing the issue.
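The follow-up command wasn't preserved either, but it's the same loop one level deeper:

```shell
# Count entries under each subdirectory of the suspect directory.
for d in /home/*; do echo "$(find "$d" 2>/dev/null | wc -l) $d"; done | sort -n
```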
**Handle Hidden Files**: Include hidden files in your search by specifying the appropriate paths.
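The glob `/home/*` skips dot-entries, so (assuming `/home` is still the directory under investigation) you can add a pattern for them explicitly:

```shell
# .[!.]* matches hidden entries (except . and ..) alongside the normal glob.
for d in /home/.[!.]* /home/*; do
  [ -e "$d" ] || continue   # skip the literal pattern if nothing matches
  echo "$(find "$d" 2>/dev/null | wc -l) $d"
done | sort -n
```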
**Clean Up**: Once you've found the directory with the most inodes, it's time to clean up. Common strategies include deleting or archiving old logs, clearing stale PHP session files, and purging cache directories.
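As one example of a cleanup (the path here is a placeholder for whatever directory you identified above):

```shell
# Delete regular files older than 30 days in the offending directory.
# /path/to/culprit is a placeholder; dry-run with `-print` instead of
# `-delete` first to confirm what would be removed.
find /path/to/culprit -type f -mtime +30 -delete
```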
**Automate Cleanups**: To prevent this from happening again, you can set up a cron job to regularly clean up unnecessary files, e.g. a job that clears temporary files every week.
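The crontab line itself wasn't preserved; a weekly entry along these lines (added via `crontab -e`; the path and schedule are just examples) would do it:

```
# m h dom mon dow  command
# Every Sunday at 03:00, delete temp files older than 7 days:
0 3 * * 0  find /tmp -type f -mtime +7 -delete
```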
**Increase Inode Limit**: If the issue persists and you can't delete any more files, consider resizing your filesystem to increase the inode limit. This is a more complex operation and might require backing up your data. A few commands can help you get started.
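The original commands weren't preserved; as a starting point (the device name below is a placeholder, and `mkfs` is destructive), you could assess the current state first:

```shell
# Check inode usage and filesystem types.
df -i
df -T

# On ext4, growing the filesystem (resize2fs) adds inode capacity with
# the new block groups. As a last resort, back up your data and recreate
# the filesystem with an explicit inode count, e.g.:
#   mkfs.ext4 -N 8000000 /dev/sdXN   # DESTRUCTIVE: wipes the filesystem
```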
By following these steps, you should be able to clear up some inodes and prevent this issue from recurring in the future.
- Bobby
Your inodes are full, so there are files somewhere on your server occupying them.
To find what’s consuming inode space, run this:
```shell
for i in /*; do echo "$i"; find "$i" | wc -l; done
```
After it returns the counts for each item, look through them and see which directory is the biggest. Keep drilling down a few times until you find the culprit folder. It's probably log files (error logs/admin logs), PHP session files, or, in my case, mod_cache_disk. I had to keep going into the /var folder, as shown below.
```shell
for i in /var/*; do echo "$i"; find "$i" | wc -l; done
```
Once my search showed that /var/cache/apache2/mod_cache_disk was using almost 100% of my inodes, I deleted the folder and restarted Apache and the server, bringing my inode usage down from 100% to just 4%.
```shell
service apache2 restart
reboot
```
Heya,
If the problem persists and it's not related only to the PHP session files, you can expand the search and check disk space usage in other directories as well.
You can check which folders are using the most space on the Droplet by using the disk utilization command, du:
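The exact invocation wasn't preserved in this answer; a common form (start at `/`, or at whichever directory you suspect) is:

```shell
# Human-readable sizes, one level deep, sorted so the largest entries
# appear last. Errors from unreadable paths are suppressed.
du -h --max-depth=1 / 2>/dev/null | sort -h
```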
Check our tutorial on How to fix disk space issues here:
https://docs.digitalocean.com/support/how-do-i-fix-disk-space-issues-on-my-droplet/
There is also an `ncurses` interface for `du`, appropriately called `ncdu`, that you can install. It represents your disk usage graphically.
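The install command wasn't preserved; on a Debian/Ubuntu Droplet it would typically be installed from the standard repositories and pointed at a directory to scan:

```shell
sudo apt update
sudo apt install -y ncdu

# Scan the whole filesystem (or any directory) interactively:
ncdu /
```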
You can step through the filesystem by using the up and down arrows and pressing Enter on any directory entry.
Hope that this helps!