JENKINS-61735 Get all environment variable #92
base: master
Conversation
```
private static final Logger LOGGER = Logger.getLogger(LogstashConsoleLogFilter.class.getName());

private transient Run<?, ?> run;

public LogstashConsoleLogFilter() {}

public LogstashConsoleLogFilter(Run<?, ?> run)
public LogstashConsoleLogFilter(Run<?, ?> run, hudson.EnvVars envVars)
```
I'm a bit confused. Who is going to initialize this value for the old-style builds? I only see a change for pipeline
It's set to null. But in that case some other checks need to be applied. See below
I planned to introduce this modification only for declarative pipelines, since STAGE_NAME is the variable that was previously missing from the environment variables.
For old-style builds (I guess you mean, for instance, freestyle), STAGE_NAME is not relevant, so the plugin should work as before.
For my use case, I don't see a reason to change anything for old-style logs.
As @mwinter69 mentioned here in #92 (comment), it will break the freestyle build use case.
```
@@ -16,14 +16,16 @@
public class LogstashConsoleLogFilter extends ConsoleLogFilter implements Serializable
{

  private hudson.EnvVars envVars;
```
Please change this to use imports (import hudson.EnvVars instead of the fully qualified name).
OK
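For illustration, a minimal sketch of what the comment asks for, based on the field and constructor shown in the diffs above (a sketch, not the actual patch; the decoration logic is elided):

```java
import hudson.EnvVars;
import hudson.console.ConsoleLogFilter;
import hudson.model.Run;
import java.io.OutputStream;
import java.io.Serializable;

// Sketch only: field and constructor using the imported EnvVars type
// instead of the fully qualified hudson.EnvVars name.
public class LogstashConsoleLogFilter extends ConsoleLogFilter implements Serializable {
  private EnvVars envVars;
  private transient Run<?, ?> run;

  public LogstashConsoleLogFilter(Run<?, ?> run, EnvVars envVars) {
    this.run = run;
    this.envVars = envVars;
  }

  @Override
  public OutputStream decorateLogger(Run build, OutputStream logger) {
    return logger; // real decoration elided in this sketch
  }
}
```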
```
@@ -107,7 +107,7 @@ private boolean perform(Run<?, ?> run, TaskListener listener) {

  // Method to encapsulate calls for unit-testing
  LogstashWriter getLogStashWriter(Run<?, ?> run, OutputStream errorStream, TaskListener listener) {
    return new LogstashWriter(run, errorStream, listener, run.getCharset());
    return new LogstashWriter(run, errorStream, listener, run.getCharset(), null);
```
Please define two constructors instead of passing a null value.
OK
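For illustration, one way the two constructors could look (a sketch; only the delegation is shown, the existing initialization stays in the new constructor):

```java
// Sketch: keep the old signature as a convenience overload that delegates
// to the new constructor with no pipeline EnvVars available.
public LogstashWriter(Run<?, ?> run, OutputStream error, TaskListener listener, Charset charset) {
  this(run, error, listener, charset, null);
}

public LogstashWriter(Run<?, ?> run, OutputStream error, TaskListener listener, Charset charset, EnvVars envVars) {
  this.envVars = envVars;
  // ... the rest of the existing initialization ...
}
```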
Please add some test coverage for the added feature
@mwinter69 care to take a look?
I'm missing a test that ensures that the information you want is included.
Another topic: what env vars are injected by the context? Do they maybe contain sensitive information, like passwords?
```
try {
  // TODO: sensitive variables are not filtered, c.f. https://stackoverflow.com/questions/30916085
  buildVariables = build.getEnvironment(listener);
} catch (IOException | InterruptedException e) {
  LOGGER.log(WARNING, "Unable to get environment for " + build.getDisplayName(), e);
  buildVariables = new HashMap<>();
}
```
It is wrong to remove this block. For non-pipeline builds this would result in a loss of information. So in case buildVariables is null, it should be filled as done before.
I don't follow. I thought that this part of the code (line 215, public BuildData(Run build, Date currentTime, TaskListener listener, hudson.EnvVars envVars) {) was only for declarative pipelines and not for non-pipeline builds.
But maybe the comment on line 214, "// Pipeline project build", is not correct.
That's why I considered that envVars could not be null and that it was not necessary to call build.getEnvironment(listener).
Note that my concern is to get more information than build.getEnvironment(listener) provides: with that call you do not recover the environment variables of the stage itself, and you do not get STAGE_NAME and NODE_NAME, which to my mind are very important variables to push into Graylog. Otherwise you cannot correlate information recovered from one log line with the stage and node that produced that line.
I think the best way to resolve this would be to add some automated tests to show it's working.
Concerning sensitive info, you are probably right, but that needs to be checked.
I have no idea how to distinguish sensitive information in a set of environment variables.
Filtering for sensitive variables is not supported in Pipeline. At least it wasn't last time I checked.
I think it's good enough if we make sure the sensitive variables for old-style builds are filtered
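For old-style builds, one possible way to do that (a sketch, not part of this PR) is to mask the names reported by AbstractBuild.getSensitiveBuildVariables():

```java
// Sketch: mask sensitive variables before sending data for an old-style (AbstractBuild) build.
// Assumes `build` is an AbstractBuild<?, ?> and `buildVariables` is the Map<String, String>
// that will be sent to the indexer.
Set<String> sensitiveNames = build.getSensitiveBuildVariables();
for (String name : sensitiveNames) {
  if (buildVariables.containsKey(name)) {
    buildVariables.put(name, "********"); // or buildVariables.remove(name), depending on policy
  }
}
```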
This constructor is indeed not called for instances of AbstractBuild. But it is not limited to WorkflowRun: it will also be called for any other implementation directly inheriting from Run (e.g. AsyncRun, ExternalRun, see https://jenkins.io/doc/developer/extensions/jenkins-core/#run).
So I would prefer to keep this code.
(Though any of these job types would never have worked together with the logstash plugin before the pipeline support was added.)
OK. I suggest keeping this code but executing it only when envVars is null or empty, once I figure out how to create a new constructor (I guess this is related to your comment on src/main/java/jenkins/plugins/logstash/LogstashNotifier.java line 110).
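A sketch of what that could look like in the BuildData constructor (variable names follow the diffs above; this is an assumption about the fix, not the committed code):

```java
// Sketch: prefer the EnvVars passed in from the pipeline context; fall back to
// build.getEnvironment(listener) so non-pipeline Run types keep their variables.
if (envVars != null && !envVars.isEmpty()) {
  buildVariables = new HashMap<>(envVars);
} else {
  try {
    // TODO: sensitive variables are not filtered, c.f. https://stackoverflow.com/questions/30916085
    buildVariables = build.getEnvironment(listener);
  } catch (IOException | InterruptedException e) {
    LOGGER.log(WARNING, "Unable to get environment for " + build.getDisplayName(), e);
    buildVariables = new HashMap<>();
  }
}
```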
```
@@ -68,7 +69,8 @@ private ConsoleLogFilter createConsoleLogFilter(StepContext context)
    throws IOException, InterruptedException {
  ConsoleLogFilter original = context.get(ConsoleLogFilter.class);
  Run<?, ?> build = context.get(Run.class);
  ConsoleLogFilter subsequent = new LogstashConsoleLogFilter(build);
  hudson.EnvVars envVars = context.get(hudson.EnvVars.class);
```
Not sure, but have you checked whether this will actually return something? Maybe EnvVars is always injected into the context; if not, it has to be added as a required context after line 110.
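If EnvVars turns out not to be guaranteed in the context, one way to require it (a sketch of the usual StepDescriptor pattern, not code from this PR) would be:

```java
// Sketch: declare EnvVars as required context in the step's descriptor so that
// context.get(EnvVars.class) is guaranteed to return a value inside the step.
// Imports assumed: java.util.*, hudson.EnvVars, hudson.model.Run, hudson.model.TaskListener.
@Override
public Set<? extends Class<?>> getRequiredContext() {
  Set<Class<?>> context = new HashSet<>();
  Collections.addAll(context, Run.class, TaskListener.class, EnvVars.class);
  return Collections.unmodifiableSet(context);
}
```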
Concerning test coverage, I'm not familiar with embedded tests in Jenkins plugins. I will probably need some help.
You might want to check https://jenkins.io/doc/developer/testing/ and the existing tests
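For reference, a sketch of the kind of JenkinsRule-based test being asked for (assuming the usual imports from org.junit, org.jvnet.hudson.test, org.jenkinsci.plugins.workflow.job and org.jenkinsci.plugins.workflow.cps; the final assertion is left as a placeholder because it depends on how the plugin's test DAO is wired):

```java
@Rule
public JenkinsRule j = new JenkinsRule();

@Test
public void stageNameIsIncludedInPayload() throws Exception {
  WorkflowJob p = j.createProject(WorkflowJob.class, "p");
  p.setDefinition(new CpsFlowDefinition(
      "node {\n" +
      "  stage('mystage') {\n" +
      "    logstash {\n" +
      "      echo 'Message'\n" +
      "    }\n" +
      "  }\n" +
      "}", true));
  j.buildAndAssertSuccess(p);
  // The actual assertion would inspect the data handed to the indexer
  // (for example via a recording test DAO) and check that STAGE_NAME=mystage is present.
}
```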
@jakub-bochenski @mwinter69 I've pushed a new commit with your remarks |
```
  this.envVars = null;
  this.errorStream = error != null ? error : System.err;
  this.build = run;
  this.listener = listener;
  this.charset = charset;
  this.dao = this.getDaoOrNull();
  if (this.dao == null) {
    this.jenkinsUrl = "";
    this.buildData = null;
  } else {
    this.jenkinsUrl = getJenkinsUrl();
    this.buildData = getBuildData();
  }
}
```
This is a duplication of code. Instead, just call the new constructor, forwarding all arguments and passing null for the envVars.
OK
I don't think it would work for any code. For the IDE debugger to have a chance to attach, you have to run the tests from inside your IDE, not from the command line.
@jakub-bochenski I've figured out how to launch a test from my IDE. I've also tried to change logstash() in PipelineTest.java with:
In the tests you will only have those steps available that are provided via dependencies in the pom. So when you add a test dependency on pipeline-stage-step, it should work.
@mwinter69 Thanks for your help.
"node('master') {\n" + | ||
" stage('mystage') {\n" + | ||
" logstash {\n" + | ||
" currentBuild.result = 'SUCCESS'\n" + | ||
" echo 'Message'\n" + | ||
" }\n" + | ||
" }\n" + | ||
"}", true)); |
I think this will only work when the logstash step is inside a node/stage. When you write
logstash { stage('s') { echo 'm' } }
will it work as well?
If I do:
```
p.setDefinition(new CpsFlowDefinition(
    "node('master') {\n" +
    "  logstash {\n" +
    "    stage('mystage') {\n" +
    "      currentBuild.result = 'SUCCESS'\n" +
    "      echo 'Message'\n" +
    "    }\n" +
    "  }\n" +
    "}", true));
```
Then you get NODE_NAME but not STAGE_NAME.
In fact it's normal to get NODE_NAME, since in this case the node block is evaluated before logstash{}, while STAGE_NAME is not yet evaluated, so it cannot be set.
But this is expected behavior, as we recover the envVars in the context in which logstash{} is evaluated.
In my use case (having at least STAGE_NAME and NODE_NAME in the metadata attached to each log line), I can deal with that: it is only a matter of putting the logstash{} statement in the right place. For a pipeline with multiple stages, I'll have to put a logstash{} statement inside each stage, as shown in the sketch below.
FYI, I also have in mind another case where the pipeline fails at its very beginning (e.g. the Jenkinsfile cannot be retrieved from git, which happens to us frequently). I guess that for the moment I have no solution for that.
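For illustration, the shape described above in the same test-definition style as earlier (a sketch with one logstash{} block per stage; not code from this PR):

```java
p.setDefinition(new CpsFlowDefinition(
    "node('master') {\n" +
    "  stage('build') {\n" +
    "    logstash {\n" +          // lines below should carry STAGE_NAME=build
    "      echo 'building'\n" +
    "    }\n" +
    "  }\n" +
    "  stage('test') {\n" +
    "    logstash {\n" +          // lines below should carry STAGE_NAME=test
    "      echo 'testing'\n" +
    "    }\n" +
    "  }\n" +
    "}", true));
```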
@mwinter69 I was thinking about your remark.
I currently recover the envVars from the context that is set in LogstashStep.java.
I was wondering whether it would be possible to recover the envVars from the context for each individual log line.
If that were possible, and logstash could then be enabled globally for a pipeline, it would cover all my use cases.
I'm currently implementing the possibility to enable this globally with a TaskListenerDecorator. I'm not sure whether there is a way to get to the corresponding flow node, so that one could extract the node and stage from there (via its parents).
There is a completely different possibility to get the logs to ES: implementing a true pipeline logger (like https://github.com/jenkinsci/pipeline-cloudwatch-logs-plugin). There it is possible to get the node and stage.
But this comes at a high price:
- You need to also implement reading from ES, as the logs are no longer stored on the file system (so how to handle rabbitmq, redis, ...?)
- Performance is bad when writing directly to ES (it might be necessary to use a buffer in between, like fluentd). I haven't tested this, but the logstash plugin might have this performance impact as well.
We have implemented such a pipeline logger in our company; we might publish it as open source at some point, but it is still WIP. What we currently try to do is write to ES and in parallel to the local file system, so we don't need to read from ES.
Also, our implementation is much more lightweight regarding the data we send to ES (see https://issues.jenkins-ci.org/browse/JENKINS-54685) and uses a more straightforward data model.
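For context, a minimal sketch of that extension point as I understand it (the class name and the forwarding logic are hypothetical, not part of this PR or of the plugin):

```java
import hudson.Extension;
import java.io.IOException;
import java.io.OutputStream;
import org.jenkinsci.plugins.workflow.flow.FlowExecutionOwner;
import org.jenkinsci.plugins.workflow.log.TaskListenerDecorator;

// Sketch: decorate the log stream of every pipeline build globally,
// without requiring a logstash{} block in the Jenkinsfile.
public class GlobalLogstashDecorator extends TaskListenerDecorator {

  @Override
  public OutputStream decorate(OutputStream logger) throws IOException, InterruptedException {
    // Hypothetical: wrap `logger` so each line is also forwarded to the indexer.
    return logger; // forwarding logic elided in this sketch
  }

  @Extension
  public static class Factory implements TaskListenerDecorator.Factory {
    @Override
    public TaskListenerDecorator of(FlowExecutionOwner owner) {
      return new GlobalLogstashDecorator();
    }
  }
}
```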
@mwinter69 Thanks for your answer. Is it possible to share your implementation of enabling this globally with a TaskListenerDecorator?
@jakub-bochenski Forget my question. You were talking about the failBuild parameter that can be set on logstashSend().
I'm going to check how to set this parameter with the logstash{} statement.
@jakub-bochenski I have tried to reproduce my issue in a sandbox with the following pipeline:
```
pipeline {
  agent { label 'master' }
  stages {
    stage('Stage in parallel') {
      parallel {
        stage('My stage 1') {
          steps {
            logstash {
              sh '''#!/bin/bash
              for i in {1..10000}
              do
                echo "#1 Welcome $i times"
              done
              '''
            }
          }
        }
        stage('My stage 2') {
          steps {
            logstash {
              sh '''#!/bin/bash
              for i in {1..10000}
              do
                echo "#2 Welcome $i times"
              done
              '''
            }
          }
        }
      }
    }
  }
}
```
But after a while I get a lot of errors in the Jenkins log:
```
2020-05-19 11:51:53.185+0000 [id=12] WARNING o.j.p.w.log.FileLogStorage#maybeFlush: failed to flush /var/jenkins_home/jobs/afac/builds/1/log
java.io.IOException: Stream Closed
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:326)
at org.jenkinsci.plugins.workflow.log.DelayBufferedOutputStream$FlushControlledOutputStream.write(DelayBufferedOutputStream.java:125)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.FilterOutputStream.flush(FilterOutputStream.java:140)
at org.jenkinsci.plugins.workflow.log.FileLogStorage.maybeFlush(FileLogStorage.java:190)
at org.jenkinsci.plugins.workflow.log.FileLogStorage.overallLog(FileLogStorage.java:198)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.getLogText(WorkflowRun.java:1046)
at java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:627)
at org.kohsuke.stapler.Function$MethodFunction.invoke(Function.java:396)
at org.kohsuke.stapler.Function$InstanceFunction.invoke(Function.java:408)
at org.kohsuke.stapler.MetaClass$2.doDispatch(MetaClass.java:219)
at org.kohsuke.stapler.NameBasedDispatcher.dispatch(NameBasedDispatcher.java:58)
at org.kohsuke.stapler.Stapler.tryInvoke(Stapler.java:747)
at org.kohsuke.stapler.Stapler.invoke(Stapler.java:878)
at org.kohsuke.stapler.MetaClass$9.dispatch(MetaClass.java:456)
at org.kohsuke.stapler.Stapler.tryInvoke(Stapler.java:747)
at org.kohsuke.stapler.Stapler.invoke(Stapler.java:878)
at org.kohsuke.stapler.MetaClass$4.doDispatch(MetaClass.java:280)
at org.kohsuke.stapler.NameBasedDispatcher.dispatch(NameBasedDispatcher.java:58)
at org.kohsuke.stapler.Stapler.tryInvoke(Stapler.java:747)
at org.kohsuke.stapler.Stapler.invoke(Stapler.java:878)
at org.kohsuke.stapler.Stapler.invoke(Stapler.java:676)
at org.kohsuke.stapler.Stapler.service(Stapler.java:238)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:755)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1617)
at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:154)
at org.jenkinsci.plugins.ssegateway.Endpoint$SSEListenChannelFilter.doFilter(Endpoint.java:248)
at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:151)
at jenkins.security.ResourceDomainFilter.doFilter(ResourceDomainFilter.java:76)
at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:151)
at io.jenkins.blueocean.auth.jwt.impl.JwtAuthenticationFilter.doFilter(JwtAuthenticationFilter.java:61)
at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:151)
at io.jenkins.blueocean.ResourceCacheControl.doFilter(ResourceCacheControl.java:134)
at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:151)
at net.bull.javamelody.MonitoringFilter.doFilter(MonitoringFilter.java:239)
at net.bull.javamelody.MonitoringFilter.doFilter(MonitoringFilter.java:215)
at net.bull.javamelody.PluginMonitoringFilter.doFilter(PluginMonitoringFilter.java:88)
at org.jvnet.hudson.plugins.monitoring.HudsonMonitoringFilter.doFilter(HudsonMonitoringFilter.java:114)
at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:151)
at jenkins.metrics.impl.MetricsFilter.doFilter(MetricsFilter.java:125)
at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:151)
at org.jenkinsci.plugins.modernstatus.ModernStatusFilter.doFilter(ModernStatusFilter.java:52)
at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:151)
at jenkins.telemetry.impl.UserLanguages$AcceptLanguageFilter.doFilter(UserLanguages.java:128)
at hudson.util.PluginServletFilter$1.doFilter(PluginServletFilter.java:151)
at hudson.util.PluginServletFilter.doFilter(PluginServletFilter.java:157)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
at hudson.security.csrf.CrumbFilter.doFilter(CrumbFilter.java:153)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:84)
at hudson.security.UnwrapSecurityExceptionFilter.doFilter(UnwrapSecurityExceptionFilter.java:51)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87)
at jenkins.security.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:118)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87)
at org.acegisecurity.providers.anonymous.AnonymousProcessingFilter.doFilter(AnonymousProcessingFilter.java:125)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87)
at org.acegisecurity.ui.rememberme.RememberMeProcessingFilter.doFilter(RememberMeProcessingFilter.java:142)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87)
at org.acegisecurity.ui.AbstractProcessingFilter.doFilter(AbstractProcessingFilter.java:271)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87)
at jenkins.security.BasicHeaderProcessor.doFilter(BasicHeaderProcessor.java:93)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87)
at org.acegisecurity.context.HttpSessionContextIntegrationFilter.doFilter(HttpSessionContextIntegrationFilter.java:249)
at hudson.security.HttpSessionContextIntegrationFilter2.doFilter(HttpSessionContextIntegrationFilter2.java:67)
at hudson.security.ChainedServletFilter$1.doFilter(ChainedServletFilter.java:87)
at hudson.security.ChainedServletFilter.doFilter(ChainedServletFilter.java:90)
at hudson.security.HudsonFilter.doFilter(HudsonFilter.java:171)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
at org.kohsuke.stapler.compression.CompressionFilter.doFilter(CompressionFilter.java:49)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
at hudson.util.CharacterEncodingFilter.doFilter(CharacterEncodingFilter.java:82)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
at org.kohsuke.stapler.DiagnosticThreadNameFilter.doFilter(DiagnosticThreadNameFilter.java:30)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
at jenkins.security.SuspiciousRequestFilter.doFilter(SuspiciousRequestFilter.java:36)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:566)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.Server.handle(Server.java:500)
at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:375)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
at java.lang.Thread.run(Thread.java:748)
...
2020-05-19 11:52:02.119+0000 [id=1180] INFO o.j.p.w.s.concurrent.Timeout#lambda$ping$0: org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep [#1264]: checking /var/jenkins_home/workspace/afac on unresponsive for 10 sec
2020-05-19 11:52:02.126+0000 [id=1180] INFO o.j.p.w.s.concurrent.Timeout#lambda$ping$0: org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep [#1265]: checking /var/jenkins_home/workspace/afac on unresponsive for 10 sec
2020-05-19 11:52:07.120+0000 [id=1180] INFO o.j.p.w.s.concurrent.Timeout#lambda$ping$0: org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep [#1264]: checking /var/jenkins_home/workspace/afac on unresponsive for 15 sec
2020-05-19 11:52:07.126+0000 [id=1180] INFO o.j.p.w.s.concurrent.Timeout#lambda$ping$0: org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep [#1265]: checking /var/jenkins_home/workspace/afac on unresponsive for 15 sec
2020-05-19 11:52:12.121+0000 [id=1180] INFO o.j.p.w.s.concurrent.Timeout#lambda$ping$0: org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep [#1264]: checking /var/jenkins_home/workspace/afac on unresponsive for 20 sec
2020-05-19 11:52:12.126+0000 [id=1180] INFO o.j.p.w.s.concurrent.Timeout#lambda$ping$0: org.jenkinsci.plugins.workflow.steps.durable_task.DurableTaskStep [#1265]: checking /var/jenkins_home/workspace/afac on unresponsive for 20 sec
```
Sorry, I don't quite follow.
If it's a separate issue from this pull request, please file it in the JIRA tracker or as a GitHub issue with a full description.
OK, I will do that, as this issue is in fact not related to my pull request.
I've filed JIRA issue https://issues.jenkins-ci.org/browse/JENKINS-62354 for that.
This is now the main issue blocking me from using the logstash plugin.
Hi, Thanks!!
@timor-raiman I think the work on this PR was never finished because the node and stage name are already available as a result of #93.
For a declarative pipeline with stages that run in parallel, the console logs are interleaved, meaning that it is impossible to extract the log of a single stage.
Thanks to the STAGE_NAME environment variable, it is now possible to extract the console logs of a single stage of a declarative pipeline.
To do that, I recover the whole set of environment variables with:
hudson.EnvVars envVars = context.get(hudson.EnvVars.class);
This is introduced in LogstashStep.java and propagated to BuildData.java.
Moreover, NODE_NAME is now part of the environment variables, so it is possible to filter on it (I use Graylog) and correlate an error found in the logs with the node that was actually running the stage.
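To summarize the wiring, a simplified skeleton of how the variables are threaded through (based on the diffs discussed above; bodies elided, so this is a sketch rather than the exact committed code):

```java
// In LogstashStep: grab the step's environment and hand it to the console log filter.
hudson.EnvVars envVars = context.get(hudson.EnvVars.class);
ConsoleLogFilter subsequent = new LogstashConsoleLogFilter(build, envVars);

// LogstashConsoleLogFilter -> LogstashWriter -> BuildData: the EnvVars object,
// which includes STAGE_NAME and NODE_NAME when the block runs inside a stage/node,
// is carried along so those variables end up in the payload sent to the indexer.
```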