So in my use case I'm consolidating all my different linters (among other things) to be run via the Taskfile. See the snippet below for the general structure.
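The original snippet isn't shown here; a minimal sketch of that general structure might look like this (task names and linter commands are illustrative, not from the original issue):

```yaml
version: '3'

tasks:
  lint:
    cmds:
      # Each linter is its own task; today this stops at the first failure.
      - task: lint:go
      - task: lint:yaml

  lint:go:
    cmds:
      - golangci-lint run ./...

  lint:yaml:
    cmds:
      - yamllint .
```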
When running linters it would be nice to always run all of them (the same is especially true with the for loop) and, once they've all run, return an error if any of them failed. I could work around this with longer shell scripts, but I would prefer not to.
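The longer-shell-script workaround mentioned above could look roughly like this — run everything, remember whether anything failed, and only report the failure at the end (`true`/`false` stand in for hypothetical linters, with `false` simulating a failing one):

```shell
# Run every linter, remember whether any failed, and only then report failure.
status=0
# Hypothetical linters: `true` passes, `false` fails.
for linter in true false true; do
  "$linter" || status=1   # record the failure but keep running the rest
done
echo "overall status: $status"   # non-zero if any linter failed
```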
I'm thinking of something like a `run_all: always` parameter, whose default could be the current stop-on-first-error behaviour.
`ignore_error` otherwise works, but it also discards the exit code, meaning that when the linters are run in CI a lint failure will not cause the run to fail.
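For context, `ignore_error` can be set per command in a Taskfile; a sketch of why it isn't enough here (the failing command is a hypothetical stand-in for a linter):

```yaml
version: '3'

tasks:
  lint:
    cmds:
      - cmd: exit 1          # stand-in for a failing linter
        ignore_error: true   # the error is swallowed...
      - echo "still runs"    # ...so later commands run, but `task lint` exits 0
```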
deps are run in parallel; there does not seem to be any mechanism that kills tasks — it runs them all in parallel and returns the first failing task (of all the tasks that failed).
@xremming If you modify your example into something people can easily run (e.g. tasks whose cmds `exit 0` or `exit 1`, plus some echos) and share the CLI command you use, it might be possible to clarify the behaviour.
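A minimal runnable example along those lines (a hypothetical Taskfile, run with `task all`) might be the following; which echos appear before the failure is reported should show whether all parallel deps run to completion:

```yaml
version: '3'

tasks:
  all:
    deps: [ok, fail]  # deps run in parallel

  ok:
    cmds:
      - echo ok
      - exit 0

  fail:
    cmds:
      - echo fail
      - exit 1
```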