Distributed Testing
Karate can split a test-suite across multiple nodes that can be running remotely. This means that you can run Web-UI automation in parallel but get a single consolidated report, which includes a video of each `Scenario`.

We really need you to try this and help us stabilize it! Right now 0.9.9.RC4 is available.
Here's how it works.
- a `JobConfig` interface which you need to implement (but we have some ready-made ones)
- a Docker container based on JDK8 and Maven, to which the Karate "fatjar" and the Chrome browser have been added (this is the full, "real" Chrome - not just limited to "headless")
- you start a test using the `Runner` - but instead of calling `parallel(threads)` you call `jobManager(config)` (see the `JenkinsJobRunner` example below) - what this does is start an HTTP job-server that will wait for remote "executors" to connect and ask for "job chunks"
- the unit of work is a Karate `Scenario`
- when all `Scenario`-s are done, the job-server will aggregate the report
- the design is such that no continuous communication is needed with the remote executors (see life-cycle below)
- we provide convenience implementations called `MavenJobConfig` and `MavenChromeJobConfig` - these will fire `docker` commands to the local shell by default, but you can override the `JobConfig.startExecutors()` method to do anything you want, for example Kubernetes deployments if that is your thing!
- you can choose to do nothing when you override this method (or use an executor count of 0 or -1 for convenience) - this is typical for CI pipelines, e.g. when using Jenkins and Docker - in which case you are responsible for starting multiple executor "worker nodes" (e.g. using shell scripts or Jenkins steps), and the only thing you need to ensure is that each worker node can make HTTP calls to the central "manager" node
The responsibilities of the `JobExecutor` are very simple - the only input is the `KARATE_JOBURL`. If you use the Docker container, you pass this as an environment variable.
This is the life-cycle:
- connect to the job-server and download a zip, extract it
- ask the job-server for `init` config, e.g. startup and shutdown commands to run
- while the server does not respond with `stop`:
  - ask for the `next` job chunk
  - execute commands as instructed by the server
  - zip and upload results to the server
- execute shutdown commands, after which the Java process will end (and terminate the Docker container if applicable)
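The polling loop above can be sketched as a tiny manager/executor pair using plain JDK classes. This is an illustration only - the endpoint name, payloads and "chunk" format here are invented, and the real Karate protocol also downloads a project zip and uploads result archives:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class JobLifecycleSketch {

    // toy "job manager": hands out chunk ids, then "stop" when work runs out
    static HttpServer startManager(AtomicInteger chunksLeft) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/job/next", exchange -> {
            int n = chunksLeft.getAndDecrement();
            byte[] body = (n > 0 ? "chunk-" + n : "stop").getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    // toy "executor": polls for the next chunk until told to stop
    static List<String> runExecutor(String jobUrl) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        List<String> executed = new ArrayList<>();
        while (true) {
            HttpRequest req = HttpRequest.newBuilder(URI.create(jobUrl + "/job/next")).build();
            String chunk = client.send(req, HttpResponse.BodyHandlers.ofString()).body();
            if ("stop".equals(chunk)) {
                break; // shutdown commands would run here
            }
            executed.add(chunk); // the real executor runs the Scenario and uploads results
        }
        return executed;
    }

    public static void main(String[] args) throws Exception {
        HttpServer manager = startManager(new AtomicInteger(3)); // pretend 3 Scenarios
        String jobUrl = "http://localhost:" + manager.getAddress().getPort();
        System.out.println(runExecutor(jobUrl)); // [chunk-3, chunk-2, chunk-1]
        manager.stop(0);
    }
}
```

Note how the executor drives everything by polling - the manager never has to reach out to a worker, which is why workers only need outbound HTTP access.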
Right now this works for Maven projects. This can be made to work for Gradle with very little effort.
The `JobExecutor` is designed so that it:
- is part of the Karate "fatjar" / standalone JAR / single binary / ~50 MB
- can be started and configured via the CLI
- just needs to be told where the server is (`KARATE_JOBURL`, for which the CLI option is `-j` or `--jobserver`)
- requires only a JRE to run
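For example, starting an executor on any machine with a JRE could look like this (assuming the standalone JAR has been downloaded as `karate.jar`, and the manager host and port below are placeholders for your own setup):

```shell
# point the executor at the job-server; -j is shorthand for --jobserver
# (the Docker container achieves the same via the KARATE_JOBURL env variable)
java -jar karate.jar -j http://manager-host:9080
```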
Note that you can use jbang to bootstrap both a JVM and Karate in a single command, e.g. (just use the right version instead of `X.X.X`):

```shell
curl -Ls https://sh.jbang.dev | bash -s - com.intuit.karate:karate-core:X.X.X -j http://myhost.com:8080
```
Think of the `JobExecutor` as a lightweight CI worker process. Yes, a mini-Jenkins if you will! What this means is that you are not tied to the Karate Docker container.
So as long as you can:
- run the Karate JAR on a given machine
- have that machine be able to make HTTP calls to the job server URL
- provide (via the server `JobConfig`) the exact shell commands to run
You can distribute anything using this approach. Note that the `JobConfig` is responsible for what to do once the results are uploaded back to the server. Here is where you need to merge or aggregate the results into one report. The Karate `MavenJobConfig` can be used as a reference.
And yes, a way to execute `karate-gatling` tests in parallel is also possible.

You can use this project as a reference and run it locally (with or without Docker) to get a feel of the whole thing and what to expect: `examples/jobserver`.
The following Jenkins config uses a very simple `Dockerfile`, which can be avoided if you know your way around Docker and Jenkins. Here we are using the Jenkins-Kubernetes plugin with a `docker` container available, but you just need an environment in which you can run `docker` commands and you should be all set! So plain Jenkins should work if you have Docker support. Do let us know how we can improve these instructions.

Here the Git "clone" step is omitted, but all the steps below assume that we are in the root folder of your Maven project.
```dockerfile
FROM ptrthomas/karate-chrome
COPY . /src
```
```groovy
node {
  karateWorker = "docker run -d --network=karate --rm --cap-add=SYS_ADMIN -e KARATE_JOBURL=http://karate:9080 karate"
}

pipeline {
  agent {
    kubernetes {
      label "${config.pod_label}"
      yamlFile 'KubernetesPods.yaml'
    }
  }
  stages {
    stage('Docker Build') {
      steps {
        container('docker') {
          sh "docker rm karate || true"
          sh "docker network create karate || true"
          sh "docker build --pull -t karate ."
        }
      }
    }
    stage('Karate Tests') {
      parallel {
        stage('Boss') {
          steps {
            container('docker') {
              sh "docker run --network=karate --name karate --cap-add=SYS_ADMIN -w /src karate mvn clean test -Dtest=JenkinsJobRunner"
            }
          }
        }
        stage('Workers') {
          steps {
            container('docker') {
              sh karateWorker
              sh karateWorker
              sh karateWorker
            }
          }
        }
      }
    }
  }
  post {
    always {
      container('docker') {
        sh "docker cp karate:/src/target ."
      }
      junit "target/karate-reports/*.xml"
      publishHTML(
        target: [
          allowMissing: false,
          alwaysLinkToLastBuild: false,
          keepAll: true,
          reportDir: "target/karate-reports",
          reportFiles: 'karate-summary.html',
          reportName: "Karate Summary"
        ]
      )
      zip zipFile: "target.zip", archive: false, dir: "target", glob: "karate-reports/**/*,**/*.log"
      archiveArtifacts "target.zip"
    }
  }
}
```
And here is the code for the `JenkinsJobRunner`:

```java
package web;

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import com.intuit.karate.job.MavenChromeJobConfig;
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

public class JenkinsJobRunner {

    @Test
    void testAll() {
        MavenChromeJobConfig config = new MavenChromeJobConfig(0, "karate", 9080);
        System.setProperty("karate.env", "jobserver");
        Results results = Runner.path("classpath:web").tags("~@ignore")
                .outputJunitXml(true)
                .timeoutMinutes(5).jobManager(config);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}
```
This is experimental, please test, provide feedback and contribute if you can!

The `JobExecutor` is designed to be "generic" and it works even for `karate-gatling` tests. A convenience `JobConfig` implementation `GatlingMavenJobConfig` is available, and then you can use a `JobManager` directly like this:
```java
GatlingMavenJobConfig config = new GatlingMavenJobConfig(2, "hostname", 8080);
JobManager manager = new JobManager(config);
manager.start();
manager.waitForCompletion();
```
So this setup takes the `karate-gatling` project (which invoked it) and multiplies it by the number of "executors" that call back. When each executor completes, the contents of the `target/gatling` folder (which contains `simulation.log`) are uploaded. The "job manager" server takes care of renaming the Gatling `simulation.log` files to be unique and then invokes the Gatling routine to generate the aggregated report.

The executor count that you pass to the constructor matters here: it is the number of executors that will be given a valid "chunk" to execute. Note that here there is only one "job", which is the entire Gatling simulation, and we are multiplying (not dividing) it - so an executor count of 2 means twice the simulated load.
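The aggregation step above can be pictured with a small sketch. Note that the actual file layout and naming scheme Karate uses may differ - this only illustrates the idea of making each uploaded `simulation.log` unique in one folder before running report generation:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class SimulationLogMerge {

    // copy each executor's uploaded simulation.log to a unique name in one
    // folder, so a single Gatling report run can aggregate all of them
    static List<Path> collectLogs(List<Path> uploadDirs, Path reportDir) throws IOException {
        Files.createDirectories(reportDir);
        List<Path> renamed = new ArrayList<>();
        int index = 1;
        for (Path dir : uploadDirs) {
            Path log = dir.resolve("simulation.log");
            if (Files.exists(log)) {
                Path target = reportDir.resolve("simulation_" + index++ + ".log");
                renamed.add(Files.copy(log, target));
            }
        }
        return renamed; // Gatling's report routine would now run over reportDir
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("gatling-demo");
        List<Path> uploads = new ArrayList<>();
        for (int i = 0; i < 2; i++) { // simulate two executors uploading results
            Path dir = Files.createDirectory(tmp.resolve("upload-" + i));
            Files.writeString(dir.resolve("simulation.log"), "REQUEST data " + i);
            uploads.add(dir);
        }
        List<Path> logs = collectLogs(uploads, tmp.resolve("report"));
        System.out.println(logs.size() + " logs collected"); // 2 logs collected
    }
}
```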
Here is an example that uses Docker on the same local node (on a Mac): `GatlingDockerJobRunner`. Note that this is part of the `examples/gatling` folder, which you can build and run locally as a Maven project.

You should be able to use the same approach to "scale out" across multiple hardware nodes: just start multiple `JobExecutor`-s once the server `jobUrl` is known. The example above is for Maven, but you should be able to figure out an approach for Gradle if needed.