-
In #370, support was added for this, and a bit of an example was given in the PR. However, it would be helpful to have a fully worked example in the documentation, from builder creation through to a completed build. The Right Way to handle 32-bit ARM would be nice to see here as well; can it just be another platform on the 64-bit ARM hosts? It would also be good to show how/whether other client machines can connect to the same builder on the Kubernetes cluster, or if (some of?) the setup needs to be repeated for e.g. each CI job that wants to build a multi-arch image.
-
@adamnovak The builder needs to be created in each CI job before the build. With the `kubernetes` driver, the deployment with the same name (`--node`) in the same namespace of buildkit is shared. In fact, `docker buildx create` just creates the driver metadata; the BuildKit deployment is only deployed if it does not already exist in the assigned namespace of the k8s cluster. You could even `kubectl apply` the deployment to the k8s cluster first (I use this approach). So in each CI job, before the build, you can just run the following to connect the docker buildx client to BuildKit in the k8s cluster:

```shell
docker buildx create --use --name=buildkit --platform=linux/amd64 --node=buildkit-amd64 --driver=kubernetes --driver-opt="namespace=buildkit,nodeselector=kubernetes.io/arch=amd64"
docker buildx create --append --name=buildkit --platform=linux/arm64 --node=buildkit-arm64 --driver=kubernetes --driver-opt="namespace=buildkit,nodeselector=kubernetes.io/arch=arm64"

# Unlike x86, where i386 binaries can run on an x86_64 host, an arm64 host
# only supports arm64 without emulation (e.g. qemu). So you have to add an
# arm32 node to the k8s cluster, and it should be appended too:
docker buildx create --append --name=buildkit --platform=linux/arm/v7 --node=buildkit-arm --driver=kubernetes --driver-opt="namespace=buildkit,nodeselector=kubernetes.io/arch=arm"
# (Not sure the nodeselector is correct; I don't have an arm32 host.)
```

If you don't want to run these scripts in each build, you can skip tearing the builder down after the build. This is the same for all drivers; it is about how the client and BuildKit work.
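As a sketch of the "kubectl apply the deployment first" approach: the buildx `kubernetes` driver looks for a Deployment whose name matches `--node` in the driver's namespace, so something of roughly this shape could be applied ahead of time (the image tag and label convention here are assumptions for illustration, not taken from the thread):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: buildkit-amd64          # must match the --node name used in the create command
  namespace: buildkit
spec:
  replicas: 1
  selector:
    matchLabels:
      app: buildkit-amd64
  template:
    metadata:
      labels:
        app: buildkit-amd64
    spec:
      nodeSelector:
        kubernetes.io/arch: amd64
      containers:
        - name: buildkitd
          image: moby/buildkit:latest   # pin a real release tag in practice
          args: ["--addr", "unix:///run/buildkit/buildkitd.sock"]
          securityContext:
            privileged: true            # buildkitd needs this by default
```

After the `create`/`append` commands above, a multi-arch build is then a single `docker buildx build --platform linux/amd64,linux/arm64 -t <image> --push .`.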
-
This is all good info @morlay. That's a useful point about setting up the deployment in advance. I think if I do that, I can give the pods requests and limits to work around #210. Right now my cluster assigns some very low limits to anything that doesn't provide its own, so I don't think I can successfully build anything but the smallest containers. Any tips on how to get BuildKit on Kubernetes to pull from Docker Hub through my cluster's caching registry, to avoid the pull limits? When using the Docker daemon I have to set up a mirror in its config.
When pulling layers, does BuildKit just pull through Kubernetes's container fetch mechanism? Or does it have its own config? Or does it run the Docker daemon in its pod, so that I need to inject this config into the BuildKit pods before they start?
-
I've just tested dropping that …
-
BuildKit runs with containerd, not Docker. You should update the buildkitd configuration:

```toml
[registry."docker.io"]
  mirrors = ["http://docker-registry.toil:5000"]
  http = true
  insecure = true
```

See more: https://github.com/moby/buildkit/blob/master/docs/buildkitd.toml.md
-
@morlay Thanks for the tip! This doesn't quite work, for a couple of reasons. The docs you linked show that the right format is to leave off the protocol scheme (so more like `docker-registry.toil:5000`).
When I do that, it still doesn't work; I think the problem is that the mirror value is just passed along as a hostname, and the port is never parsed out, so if I'm running an HTTP mirror it needs to be on port 80. I will try moving the mirror to port 80 and seeing if that works.
-
Changing the port to 80 and passing just the hostname didn't seem to help.
I plugged that in, and it seems to be working now. Thanks!
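For anyone who lands here later: per the buildkitd.toml docs linked above, a mirror that speaks plain HTTP usually gets its own `[registry."…"]` entry marking it insecure, rather than `http`/`insecure` flags on the `docker.io` entry. A sketch of that shape, reusing the mirror hostname from earlier in the thread (adjust to your registry):

```toml
[registry."docker.io"]
  mirrors = ["docker-registry.toil:5000"]

# The mirror itself gets its own entry so buildkitd knows to use plain HTTP:
[registry."docker-registry.toil:5000"]
  http = true
  insecure = true
```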
-
To get the multi-arch builds working with emulation, I had to add …
-
@adamnovak

```yaml
initContainers:
  - name: qemu
    image: "{{ .Values.imageBinfmt.hub }}/binfmt:{{ .Values.imageBinfmt.tag }}"
    args:
      - --install
      - amd64,arm64
    securityContext:
      privileged: true
```
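For context, a sketch of where such an initContainer sits in a BuildKit Deployment's pod spec; the image references here are illustrative assumptions, not taken from the thread:

```yaml
spec:
  template:
    spec:
      # Registers qemu emulators on the node before buildkitd starts.
      initContainers:
        - name: qemu
          image: tonistiigi/binfmt
          args: ["--install", "amd64,arm64"]
          securityContext:
            privileged: true
      containers:
        - name: buildkitd
          image: moby/buildkit:latest   # pin a real release tag in practice
          securityContext:
            privileged: true
```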
-
@adamnovak The error above should be fixed with the qemu update in moby/buildkit#1953. Can you test with …
-
@morlay That looks a bit like what I put in.
@tonistiigi I'm not letting …
-
@tonistiigi I installed buildkit on my …
But I am still having multi-arch builds fail (here, for amd64 on the arm64 cluster). Strangely, it gets through all the RUN and apt commands, but seems to fail when doing … It always seems to fail with an illegal instruction or a panic, which suggests to me that something is up with qemu.
Another failure (same build system, same nodes, same manifests; different failure location):
-
It is a qemu issue: qemu-x86_64 does not work well for Go compilation on an aarch64 host. If you use pure Go, you can set GOARCH=$TARGETARCH instead. Example: https://github.com/jaegertracing/jaeger-operator/blob/master/build/Dockerfile#L22. Notice line 1 too.
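A quick sketch of how the `--platform` strings used earlier map onto the automatic `TARGETOS`/`TARGETARCH`/`TARGETVARIANT` build args that buildx provides inside the Dockerfile (the `split_platform` helper below is hypothetical, purely to illustrate the mapping):

```shell
#!/bin/sh
# Illustrates how a platform string like "linux/arm/v7" decomposes into
# the TARGETOS / TARGETARCH / TARGETVARIANT build args buildx sets.
split_platform() {
  echo "$1" | cut -d/ -f1   # TARGETOS
  echo "$1" | cut -d/ -f2   # TARGETARCH
  echo "$1" | cut -d/ -f3   # TARGETVARIANT (empty when absent)
}

split_platform linux/arm/v7   # prints: linux, arm, v7 (one per line)
split_platform linux/amd64    # prints: linux, amd64, then an empty line
```

This is why a pure-Go build can cross-compile natively with `GOARCH=$TARGETARCH` while running on the `$BUILDPLATFORM` host, avoiding qemu entirely.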
-
@morlay Great call, thank you! I adjusted the Dockerfile to include:

```dockerfile
ARG TARGETOS
ARG TARGETARCH
ARG TARGETPLATFORM
ARG BUILDPLATFORM
```

And adjusted my build to:

```dockerfile
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -ldflags="-w -s" -o /app/argocd-notifications ./cmd
```

And that seems to have fixed it. It took me a while, however, to realise that when using …

My working Dockerfile, in full, for those that stumble across this in future:

```dockerfile
FROM --platform=$BUILDPLATFORM golang:1.15.3 as builder
RUN apt-get update && apt-get install ca-certificates
WORKDIR /src
ARG TARGETOS
ARG TARGETARCH
ARG TARGETPLATFORM
ARG BUILDPLATFORM
COPY go.mod /src/go.mod
COPY go.sum /src/go.sum
RUN go mod download
# Perform the build
COPY . .
RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -ldflags="-w -s" -o /app/argocd-notifications ./cmd
RUN ln -s /app/argocd-notifications /app/argocd-notifications-backend

FROM scratch
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=builder /app/argocd-notifications /app/argocd-notifications
COPY --from=builder /app/argocd-notifications-backend /app/argocd-notifications-backend
# Use a numeric user so that kubernetes can assert that the user id isn't root (0).
# We are also using the root group (the 0 in 1000:0); it doesn't have any
# privileges, as opposed to the root user.
USER 1000:0
```