Spike: Dockerfiles POC #709
Comments
Questions:
@cmoulliard since support for this feature in …
This is an option, but we will perhaps face an issue, as OpenShift will create a pod where the user is not allowed to execute …
I would like to help with coding/writing tests in Go, but I am a newbie in Go. Do you have some parts that can be done from my side?
@phracek we would love to have your help! Do any issues in the epic look interesting to you? I think that #716 and #717 would require the least intricate knowledge of the buildpacks spec, if you would like to play around with https://github.com/GoogleContainerTools/kaniko.
We also need to create a sample project containing the different files, such as …
Please see https://github.com/buildpacks/samples/compare/dockerfiles-poc?expand=1. It is by no means complete, but I am hoping that we could expand upon it. Some initial validation steps:
Edit: notable features of the RFC that are not exercised: build plan interactions, and "single-stage" Dockerfiles, e.g. build.Dockerfile or run.Dockerfile.
Can you create a PR to allow us to review/comment on it, please? @natalieparellano
Sure, done! I made it a draft as I'm pretty sure it's not mergeable yet :)
To be sure that we are on the same page, can you review the following description please? The current lifecycle workflow executes the following steps: … Such a step should take place after the … Example A. During the detect phase execution, it appears that the project to be built is using Maven as the compilation/packaging tool and Java is the language framework. Then …
REMARK: Ideally the name (or ID) of the …
Next, the new step that we could call …
REMARK: If a …
Minor comment: as of …
So the new step should be inserted here: lifecycle/cmd/lifecycle/creator.go, line 242 (at commit 8c2edb2).
@cmoulliard Yes, that seems appropriate. Especially because this is still a POC, it would probably be expedient to make a discrete step there and then we could revise after. It would be a big help if you're able to get this moving for the community 🙂
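To make that suggestion a bit more concrete, here is a minimal sketch in Go of what a discrete "extend" step slotted between detect and build might look like. All names here (extendBuildImage, the simplified flow in main) are invented for illustration and do not correspond to the actual cmd/lifecycle/creator.go code.

```go
// Hypothetical sketch only: names and structure are invented and do not
// reflect the real cmd/lifecycle/creator.go.
package main

import (
	"fmt"
	"log"
)

// extendBuildImage would apply the Dockerfiles produced during detection on
// top of the current build image, before the build phase runs.
func extendBuildImage(buildImageRef string, dockerfiles []string) (string, error) {
	if len(dockerfiles) == 0 {
		// Nothing to do: keep the original build image.
		return buildImageRef, nil
	}
	// ... invoke kaniko/buildah/etc. here to apply the Dockerfiles ...
	return buildImageRef + "-extended", nil
}

func main() {
	// Simplified creator flow: detect -> (new) extend -> build -> export.
	dockerfiles := []string{"build.Dockerfile"} // discovered during detect
	extendedRef, err := extendBuildImage("cnbs/sample-stack-build:bionic", dockerfiles)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("building on:", extendedRef)
	// doBuild(extendedRef) ...
	// doExport(...) ...
}
```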
FYI: we also need a new acceptance test case validating some of the scenarios we will test using the …
Question: should we use the Docker client part of the lifecycle Dockerfiles change to perform an …
@cmoulliard the RFC says that is up to the platform; I'm not exactly sure how that is intended to work. I would think that the lifecycle ships with a …
AFAIK, kaniko is shipped as its own image and cannot be used as a Go lib within the lifecycle application to execute a build; see: https://github.com/GoogleContainerTools/kaniko#running-kaniko. The lib that could help us is buildah (already used by podman): https://github.com/containers/podman/blob/main/cmd/podman/images/build.go. WDYT? @jabrown85 @aemengo @natalieparellano
FYI: I created a simple buildah Go app able to build a Dockerfile: https://github.com/redhat-buildpacks/poc/blob/main/bud/bud.go#L85
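For readers who don't want to dig into the linked POC, a minimal sketch of what building a Dockerfile in-process with buildah's Go API might look like is below. The exact package paths and signatures (storage.DefaultStoreOptions, define.BuildOptions, imagebuildah.BuildDockerfiles) vary across buildah/containers-storage releases, so treat this as an illustration rather than code pinned to a specific version.

```go
// Minimal sketch, assuming recent buildah/containers-storage APIs; exact
// signatures differ between releases, so adapt as needed.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containers/buildah/define"
	"github.com/containers/buildah/imagebuildah"
	"github.com/containers/storage"
)

func main() {
	// Open the local containers-storage store; this typically needs root or a
	// properly configured rootless environment.
	storeOpts, err := storage.DefaultStoreOptions(false, 0)
	if err != nil {
		log.Fatal(err)
	}
	store, err := storage.GetStore(storeOpts)
	if err != nil {
		log.Fatal(err)
	}
	defer store.Shutdown(false)

	// Apply build.Dockerfile on top of its FROM image and tag the result.
	opts := define.BuildOptions{
		ContextDirectory: ".",
		Output:           "localhost/extended-build-image:latest",
	}
	imageID, _, err := imagebuildah.BuildDockerfiles(context.Background(), store, opts, "build.Dockerfile")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("built image:", imageID)
}
```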
Questions:
I don't think so, no. The idea, as I understand it, is to execute the Dockerfile commands live on the build image that the lifecycle is already executing on (inside of …). For the run image, the new layers created during the execution of the Dockerfile would carry over in a volume, so the exporter would take …. Does that make sense?
Makes sense, and that will simplify our life ;-) if we don't have to push it somewhere.
Is the exporter able to publish the newly created image then?
Yes, the exporter gets registry credentials mounted from the platform to push the resulting app image. It creates a new manifest using the manifest of the run image + buildpack layers and exports that manifest and layers to the destination (docker, registry).
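As a rough illustration of that description (not the lifecycle's actual implementation, which uses its own imaging libraries and does much more), here is a sketch using go-containerregistry: fetch the run image manifest, append the produced layers, and push the result with whatever credentials the platform mounted. Image names and the layer tarball path are placeholders.

```go
// Conceptual sketch only; the real exporter lives in the lifecycle and also
// handles metadata, layer reuse, daemon export, etc.
package main

import (
	"fmt"
	"log"

	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/mutate"
	"github.com/google/go-containerregistry/pkg/v1/remote"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

func main() {
	// Pull the run image manifest (credentials come from the default keychain,
	// i.e. whatever the platform mounted for the exporter).
	runRef, err := name.ParseReference("cnbs/sample-stack-run:bionic")
	if err != nil {
		log.Fatal(err)
	}
	runImage, err := remote.Image(runRef, remote.WithAuthFromKeychain(authn.DefaultKeychain))
	if err != nil {
		log.Fatal(err)
	}

	// Append a buildpack-produced layer (here a local tarball) to the run image.
	appLayer, err := tarball.LayerFromFile("app-layer.tar")
	if err != nil {
		log.Fatal(err)
	}
	appImage, err := mutate.AppendLayers(runImage, appLayer)
	if err != nil {
		log.Fatal(err)
	}

	// Push the combined manifest + layers to the destination registry.
	dstRef, err := name.ParseReference("registry.example.com/my-app:latest")
	if err != nil {
		log.Fatal(err)
	}
	if err := remote.Write(dstRef, appImage, remote.WithAuthFromKeychain(authn.DefaultKeychain)); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pushed", dstRef.String())
}
```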
We should also define a convention for naming the newly created image, as it could be cached and reused for a subsequent build, and consequently looked up first before executing the command to apply the Dockerfile. WDYT?
Caching is not identified in the RFC today. I think caching may start off as a platform-specific feature.
Ok, but then it will be needed when …
That is how I understand it. Each …
I was able to create a simple POC which supports the concept: https://github.com/redhat-buildpacks/poc/blob/main/k8s/manifest.yml#L72-L135
Remark: as it is necessary to modify the path to access the layer(s) content extracted within a volume (e.g. /layers --> export PATH="/layers/usr/bin:$PATH", ...), the RFC should perhaps include an additional ARG or env var (CNB_EXTENSION_LAYERS) to specify where the layers will be extracted, in order to change the $PATH during the build step.
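To illustrate the remark, a tiny sketch of what a build step could do with such a variable follows; note that CNB_EXTENSION_LAYERS is only the name proposed in this comment, not a variable defined by the RFC or the spec.

```go
// Sketch of the proposed behaviour: if the extension layers were extracted
// into a volume, prepend their bin directory to PATH before running the build.
// CNB_EXTENSION_LAYERS is the variable name suggested in this thread, not an
// official spec variable.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	layersRoot := os.Getenv("CNB_EXTENSION_LAYERS")
	if layersRoot == "" {
		layersRoot = "/layers" // fallback matching the POC manifest
	}
	extBin := filepath.Join(layersRoot, "usr", "bin")
	// Equivalent of: export PATH="/layers/usr/bin:$PATH"
	os.Setenv("PATH", extBin+string(os.PathListSeparator)+os.Getenv("PATH"))
	fmt.Println("PATH for the build step:", os.Getenv("PATH"))
}
```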
I talked a lot with Giuseppe Scrivano today (podman project) and I think that I found the right tools/projects (skopeo, umoci) to manage the images after the Dockerfiles have been applied. As …
Remark: if we use Tekton, then there is no need to develop something else, as we could apply some pre-steps (= init containers) to perform the execution of steps 1-2 and 3. For local development using pack, that will be a different story!! End-to-end script tested: …
Remark: buildah, skopeo, and umoci should be installed to play with the technology before we integrate them within the lifecycle.
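For illustration, one possible shape of the end-to-end flow with those three tools, driven from a small Go program; the image names, tags, and paths are made up, and each tool must already be installed on the build host.

```go
// Illustrative only: shells out to buildah, skopeo and umoci, which must be
// installed. Image names, tags and paths are placeholders.
package main

import (
	"log"
	"os"
	"os/exec"
)

func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s failed: %v", name, err)
	}
}

func main() {
	// 1. Apply the Dockerfile on top of its base image.
	run("buildah", "bud", "-f", "run.Dockerfile", "-t", "localhost/extended-run:latest", ".")

	// 2. Copy the result out of containers-storage into an OCI layout on disk.
	run("skopeo", "copy",
		"containers-storage:localhost/extended-run:latest",
		"oci:/tmp/extended-run:latest")

	// 3. Unpack the OCI image so its new layers can be inspected or exported.
	run("umoci", "unpack", "--image", "/tmp/extended-run:latest", "/tmp/extended-run-bundle")
}
```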
As @sclevine mentioned in Slack, I think we should concentrate on one implementation at a time if possible. Stephen listed two distinct implementations that we could concentrate on. I, personally, would like to concentrate on the pure userspace version first. Here is what I imagined could happen at a high level. For build extensions, a new phase executor … For run extensions, a new phase …
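To give that high-level idea a shape, a hypothetical interface for such a phase is sketched below; the names are invented for discussion and do not come from the lifecycle or the RFC.

```go
// Hypothetical "extender" phase, for discussion only; names are invented.
package extender

import "context"

// Extender applies the Dockerfiles generated during detection on top of a
// base image (the build image for build extensions, the run image for run
// extensions) and reports the resulting image reference.
type Extender interface {
	Extend(ctx context.Context, baseImageRef string, dockerfiles []string) (extendedRef string, err error)
}
```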
If I understand correctly what you say here, the idea is to execute the lifecycle …. If the answer is yes, then I suppose it will be required, as part of Tekton or kpack builds (= OpenShift or Kubernetes builds executed using a pod), that the privileged option is enabled (https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) - correct?
Please see #786 for a lifecycle that goes part of the way toward meeting the acceptance criteria outlined in this comment by:
This could be paired with the "extender" POC that @cmoulliard is working on.
I think this spike has more or less served its purpose. Through #802 we were able to identify many of the changes that would be necessary, which are outlined in buildpacks/spec#298. buildpacks/spec#308 and buildpacks/spec#307 break this down further into changes that would be required for "phase 1" of the implementation (using Dockerfiles to switch the runtime base image, only). #849 can be used to track the work for this. With all that said, I think we can close this issue and the related spike issues.
Description
To help verify the feasibility of buildpacks/rfcs#173 and work out some of the thorny details that would go in a spec PR, we should spike out a POC lifecycle that supports Dockerfiles. This would eventually also allow potential end users to tinker with the feature and provide their feedback.
Proposed solution
Let's create a dockerfiles-poc branch on the lifecycle with the following functionality. See linked issues for individual units of work that should be more-or-less parallelizable.
Note: "gross" code is okay.
Out of scope: creator support.