
Problem with cleanup #6

Open
Seraf opened this issue Oct 2, 2015 · 5 comments
Comments

@Seraf

Seraf commented Oct 2, 2015

Hello,

I'm keeping 3 releases for deployment. Sometimes, when it comes to cleanup, a file seems to break the cleanup process:

TASK: [deployment | Clean up releases] ******************************** 
failed: [10.1.0.176] => {"changed": true, "cmd": "ls -1dt /home/www/client/project/releases/* | tail -n +$((3 + 1)) | xargs rm -rf", "delta": "0:00:03.683295", "end": "2015-10-02 17:29:43.029682", "rc": 123, "start": "2015-10-02 17:29:39.346387", "warnings": []}
stderr: rm: cannot remove `/home/www/client/project/releases/20151002095730Z': Directory not empty

The file in this directory is the app/logs/prod.log file.
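For context, the rc of 123 here comes from xargs, not from rm itself: xargs exits with 123 when any invocation of the command it runs exits with a status in the range 1–125. A minimal reproduction of that mapping (using rmdir instead of rm -rf purely because it fails deterministically on a non-empty directory; the paths are throwaway examples):

```shell
# xargs exits 123 if any invocation of its command fails (status 1-125).
# rmdir is used here only because it deterministically fails on a
# non-empty directory; the paths are throwaway examples.
demo=$(mktemp -d)
mkdir -p "$demo/busy"
touch "$demo/busy/file"
printf '%s\n' "$demo/busy" | xargs rmdir 2>/dev/null
echo "xargs exit status: $?"    # prints "xargs exit status: 123"
rm -rf "$demo"
```

So the pipeline's exit code only tells you that at least one rm invocation failed; the stderr line above is what identifies which path.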

Did you already get this problem? Any idea how to proceed?

Thanks for your role !

@FlxPeters

Maybe the file is owned by another user? A dirty workaround is to add a custom pre-cleanup task that does the same as the cleanup, but with sudo permissions.

Correct permissions would be much cleaner.
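A sketch of that workaround, under the assumption that it mirrors the failing task's pipeline (the release path and keep-count are examples; `xargs -r` skips rm entirely when there is nothing to delete):

```shell
# Pre-cleanup pass with elevated permissions -- a rough equivalent of the
# role's cleanup task. Path and keep-count below are assumed examples.
releases=/home/www/client/project/releases
keep=3
ls -1dt "$releases"/* | tail -n +$((keep + 1)) | sudo xargs -r rm -rf
```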

@cbrunnkvist
Owner

Thanks for reporting, @Seraf. I have not seen this error, but that might be because I tend to favor logging on standard out rather than letting the application handle production logging. It does sound like an ownership issue, however. Perhaps you ran the initial deploy with slightly different config or something?

Either way, it would help us figure out the reason for your problem if you added some more details, like the full permission mask of that file and its containing directory...

@Seraf
Author

Seraf commented Oct 10, 2015

Hello,
Thanks to both of you for your answers. Unfortunately, the developer who had the problem manually deleted the directory of the old release, so I have no more information.
I haven't seen this problem again. If it happens again, I will re-open this issue with more info. For now, I'm closing this issue as I can't give more information.

Thanks

@Seraf Seraf closed this as completed Oct 10, 2015
@Seraf
Author

Seraf commented Oct 19, 2015

Hi, same issue again,

Same error, here is the folder :
drwxrwsr-x 3 customer www-data 4,0K oct. 19 14:45 20151016135051Z
drwxrws--- 3 customer www-data 4,0K oct. 19 14:44 20151016135051Z/app
drwxrws--- 2 customer www-data 4,0K oct. 19 14:44 20151016135051Z/app/logs
-rw-rw---- 1 customer www-data 40K oct. 19 14:44 20151016135051Z/app/logs/prod.log

The php-fpm pool is running as customer:www-data, with umask 0227.
The playbook is run as root, and since rm uses -rf as args, it should do the trick... I ended up adding an ignore_errors on the task, but it's not beautiful :s
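Worth noting: a umask of 0227 on its own would produce 0440 files and 0550 directories, not the 0660/0770-style modes in the listing above, so the application (or the pool config) may be setting modes explicitly rather than relying on the umask. A quick check of what umask 0227 actually yields (GNU `stat -c` assumed; the paths are throwaway):

```shell
# Show what umask 0227 actually produces for new files and directories.
# Run in a subshell so the umask change does not leak out.
(
  umask 0227
  tmp=$(mktemp -d)          # mktemp itself forces mode 0700 on $tmp
  touch "$tmp/f"            # 0666 & ~0227 = 0440
  mkdir "$tmp/d"            # 0777 & ~0227 = 0550
  stat -c '%a %n' "$tmp/f" "$tmp/d"
  rm -rf "$tmp"
)
```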

@Seraf Seraf reopened this Oct 19, 2015
@cbrunnkvist
Owner

Well, the first explanation that comes to my mind is that of a race condition: the process(es) (PHP workers spawned by php-fpm, I suppose) do not shut down in time and, under sustained load, keep bombarding the path and file with mkdir/open/write requests, giving rise to a sequence like the following:

  1. rm lists the directory contents
  2. a php worker does stat/open/write to the file
  3. rm calls unlink on the file
  4. a php worker does stat/open/write to the file, recreating an identical but truncated "prod.log"
  5. rm thinks that there are now NO more entries and calls rmdir on the folder, which returns ENOTEMPTY and forces rm to exit

The risk of that happening could be exacerbated inside a VM, where I/O slows down under load.

The reason it appears to work when run as root is either because the writing processes have by then stopped, or because you invoke rm directly from the shell, which means somewhat lower load than when it is spawned by Ansible (Python) if the VM or host is already resource-starved.
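If that race is indeed the cause, one low-tech mitigation (short of stopping the php-fpm pool before cleanup) is simply to retry the removal, since a second pass usually succeeds once the writer has gone quiet. A rough sketch, with an example path from the listing above:

```shell
# Retry rm -rf a few times; a log writer re-creating files mid-delete
# will only win the race so many times. The path is an example.
dir=/home/www/client/project/releases/20151016135051Z
for attempt in 1 2 3; do
  rm -rf "$dir" && break
  sleep 1
done
[ ! -e "$dir" ] || echo "still present after $attempt attempts" >&2
```

Cleaner alternatives would be stopping or reloading the pool before cleanup, or pointing the application's log path outside the release directory.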
