Docker-host prune should be a bit smarter #34
Comments
This would be fantastic. On one of my customer clusters they consume all the inodes rather quickly, often so fast that builds are blocked before cleanup can run.
That script could be a bit smarter, maybe something like this?

# Start with the least aggressive retention window and escalate until
# both disk space and inode usage drop below the threshold (percent).
threshold=50
for limit in 168h 72h 48h 24h; do
    space=$(df / | awk 'END{print +$5}')      # filesystem usage, %
    inodes=$(df -i / | awk 'END{print +$5}')  # inode usage, %
    [ "$space" -lt "$threshold" ] && [ "$inodes" -lt "$threshold" ] && break
    docker image prune -af --filter "until=$limit"
done
Hmm, the 24h window is super dangerous: if it runs on a Monday during the day, for example, it can fail deployments. Therefore I'm not much in favour of a script that handles things dynamically.
I'd also like it if we could change the frequency of the cleanup cron task.
Can we prune dangling images more aggressively (i.e. on a more frequent schedule)? Do we have an idea of what the image breakdown is between dangling and non-dangling?
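For what it's worth, a minimal sketch of how the breakdown could be checked and dangling images pruned on their own, using only standard docker CLI commands; any cron wrapper around this is an assumption, not something that exists in the setup today:

# Show overall image usage and reclaimable space.
docker system df

# Count dangling images vs. all listed images to see the breakdown.
docker images --filter "dangling=true" -q | wc -l
docker images -q | wc -l

# Prune only dangling images (no -a), which is safe to run on a tighter schedule.
docker image prune -f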
Changing the frequency would most likely go into the Helm charts.
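As a rough sketch of what that could look like from the operator side, assuming the chart exposes the cron schedule as a values key; the release name, chart path, and the prune.schedule key below are all hypothetical and would need to match the actual chart's values.yaml:

# Hypothetical release, chart, and values key; check the chart's values.yaml for the real names.
helm upgrade docker-host ./charts/docker-host \
  --reuse-values \
  --set prune.schedule="0 */6 * * *"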
Sometimes the docker-host fills up faster than the 168h prune schedule here can cope with.
This prune should be slightly more aggressive when the docker-host volume starts to reach limits the 168h prune can't deal with: at around 90% inode usage, prune down to 24h, or prune everything.
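A minimal sketch of that escalation, assuming the check runs on the docker-host itself and that 90% inode usage on / is the trigger; using docker system prune as the "prune everything" fallback is an assumption about how far the cleanup should go:

# Escalate only when inode usage on / crosses the trigger (percent).
trigger=90
inodes=$(df -i / | awk 'END{print +$5}')

if [ "$inodes" -ge "$trigger" ]; then
    # Volume is close to full: drop everything older than 24h first...
    docker image prune -af --filter "until=24h"
    # ...and if that still isn't enough, prune all unused images, containers and networks.
    inodes=$(df -i / | awk 'END{print +$5}')
    [ "$inodes" -ge "$trigger" ] && docker system prune -af
else
    # Normal case: keep the existing 168h retention.
    docker image prune -af --filter "until=168h"
fi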