When a parallel task fails allow the other parallel tasks to finish before exiting with failure #1246
Comments
This is definitely a must-have. I'm currently considering Task as an alternative to Just (a competing tool) because Just doesn't support parallelization. It's great that Task does support it; however, an implementation that doesn't wait for the remaining tasks to complete is not acceptable to me.

I'd also like to mention that the best approach, IMO, would be to default to waiting for all tasks to finish, and to kill the remaining tasks (fail fast) only when explicitly requested by the user (e.g. via a CLI flag or a task definition option). At the end of the day, Task is not a build tool like Make, but a task runner. Killing tasks mid-execution can have detrimental effects, as already mentioned in the original post (e.g. you likely don't want to kill a long-running terraform run partway through). This is also how various test frameworks capable of test parallelization usually work: they wait for all tests to complete before exiting and then present an aggregated report, unless instructed otherwise. So it's not an entirely new concept. In fact, in my experience, most tools that do some sort of parallelization fail fast only when you explicitly ask them to.
I agree with @supersmile2009 that the expected behavior is to finish running the deps/parallel tasks. Fail-fast could then be opted into per task:

```yaml
task-d:
  deps: [task-a, task-b, task-c]
  failfast: true
```

and via an equivalent command-line option.
My ugly workaround is to touch a file if there's an error, and then at the end exit nonzero if that file exists. That means I have to clean up that file before starting another run, and it's pretty ugly, but it's a better experience than seeing a stacktrace generated from a program that got sigkilled because another task failed. Here's a simplified example:

```yaml
---
version: 3
env:
  VENV: .venv
  VENV_ACTIVATE: '{{ .VENV }}/bin/activate'
  ANS_ENV: .ansible
  # TODO: Remove this once mvdan's Go sh interpreter auto-sets OSTYPE, which
  # later versions of Python depend upon being set in the activate script, which
  # breaks with `set -u` enforced.
  # https://github.com/mvdan/sh/pull/912
  OSTYPE:
    sh: bash -c 'echo $OSTYPE'
  TASK_ISSUE_FILE: .task/had-issue

# Shell options
set:
  - u # Fail on unset variable reference

tasks:
  _clean_lint_state:
    internal: true
    silent: true
    cmds:
      - rm -f {{ .TASK_ISSUE_FILE }} || true

  _ansible_lint:
    internal: true
    deps:
      - init
    cmds:
      - name: ansible-lint playbooks
        cmd: |
          ansible-lint --force-color || touch {{ .TASK_ISSUE_FILE }}

  _yamllint:
    internal: true
    deps:
      - init
    cmds:
      - name: Lint YAML files with yamllint
        cmd: |
          yamllint --strict $(git ls-files '*.yaml' '*.yml') ||
            touch {{ .TASK_ISSUE_FILE }}

  init:
    desc: Initialize and install dependencies
    run: once
    deps:
      - _clean_lint_state

  lint:
    desc: Run linters to check code for style
    deps:
      - _yamllint
      - _ansible_lint
    cmds:
      - '[[ ! -f {{ .TASK_ISSUE_FILE }} ]]'
```
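The sentinel-file pattern in the Taskfile above can be sketched outside Task with plain shell jobs (the file name `had-issue` and the two fake lint steps are illustrative stand-ins, not part of the original):

```shell
#!/bin/sh
# Minimal sketch of the sentinel-file workaround: each parallel step touches a
# marker file on failure; the final check decides the overall result.
rm -f had-issue

(false || touch had-issue) &   # a failing "lint" step records its failure
(true) &                       # a succeeding step still runs to completion
wait                           # let every job finish before judging

if [ -f had-issue ]; then result=failed; else result=passed; fi
rm -f had-issue                # clean up so the next run starts fresh
echo "lint $result"
```

The key point is that `wait` with no arguments blocks until all background jobs exit, so no step is killed early; failure is reported only after everything has had a chance to finish.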
Currently, when a parallel task fails, Task kills the other running processes.
If you have a long-running task like terraform, this will interrupt those processes and may leave things in a bad state.
We should have a config option or flag that allows the other parallel tasks to complete before exiting with the failed task's error.
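The requested behavior can be sketched with plain shell jobs standing in for parallel deps (the job names and timings below are made up for illustration): note the first failure, but still wait for every job before propagating a nonzero status.

```shell
#!/bin/sh
# One dep fails quickly; another runs longer. Instead of killing the long job,
# we wait for both and only then report the aggregated failure.
(sleep 0.2; exit 1) & fast_fail=$!
(sleep 0.4; echo "long-running job completed cleanly") & long_job=$!

status=0
wait "$fast_fail" || status=1   # record the failure...
wait "$long_job"  || status=1   # ...but still let the long job run to the end
echo "final exit status: $status"
```

`wait <pid>` returns the exit status of that job, so failures can be collected without terminating siblings; fail-fast would instead signal the remaining jobs as soon as the first `wait` reports nonzero.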
This is similar to: #1226