tests: kubernetes worker #2199

Draft: wants to merge 1 commit into master

Conversation

crazy-max (Member)

Add integration tests for the Kubernetes driver.

@crazy-max (Member Author)

Argh, can't get the node: https://github.com/docker/buildx/actions/runs/7611314658/job/20726595786#step:6:5366

node is not ready: /usr/bin/kubectl --kubeconfig /tmp/kubeconfig3632220465.yml wait --for=condition=Ready node/integrationk3s The connection to the server 127.0.0.1:32841 was refused - did you specify the right host or port?: exit status 1
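
For reference, the readiness check that fails here is just a kubectl wait against the temporary kubeconfig. A minimal way to reproduce it by hand (the kubeconfig path, node name, and port come from the error above and are specific to that single test run):

# Re-run the readiness check the harness performs; these values only exist
# while that particular test run is alive.
kubectl --kubeconfig /tmp/kubeconfig3632220465.yml \
	wait --for=condition=Ready node/integrationk3s
# Once the k3s server process has exited, this fails with
# "The connection to the server 127.0.0.1:32841 was refused".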

Looking at the logs:

> StartCmd 2024-01-22 12:49:05.109587278 +0000 UTC m=+3.391893833 /usr/bin/k3s server --bind-address 127.0.0.1 --https-listen-port 32841 --data-dir /tmp/kubedata3658668145 --write-kubeconfig /tmp/kubeconfig3632220465.yml --write-kubeconfig-mode 666 --node-name integrationk3s
time=\"2024-01-22T12:49:05.266381206Z\" level=info msg=\"Starting k3s v1.21.2+k3s1 (5a67e8dc)\"
time=\"2024-01-22T12:49:05.269304672Z\" level=info msg=\"Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s\"
time=\"2024-01-22T12:49:05.269335008Z\" level=info msg=\"Configuring database table schema and indexes, this may take a moment...\"
time=\"2024-01-22T12:49:05.271324034Z\" level=info msg=\"Database tables and indexes are up to date\"
time=\"2024-01-22T12:49:05.272220910Z\" level=info msg=\"Kine listening on unix://kine.sock\"
time=\"2024-01-22T12:49:05.284099469Z\" level=info msg=\"certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1705927745: notBefore=2024-01-22 12:49:05 +0000 UTC notAfter=2025-01-21 12:49:05 +0000 UTC\"
time=\"2024-01-22T12:49:05.284592405Z\" level=info msg=\"certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1705927745: notBefore=2024-01-22 12:49:05 +0000 UTC notAfter=2025-01-21 12:49:05 +0000 UTC\"
time=\"2024-01-22T12:49:05.285095560Z\" level=info msg=\"certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1705927745: notBefore=2024-01-22 12:49:05 +0000 UTC notAfter=2025-01-21 12:49:05 +0000 UTC\"
time=\"2024-01-22T12:49:05.285602101Z\" level=info msg=\"certificate CN=kube-apiserver signed by CN=k3s-client-ca@1705927745: notBefore=2024-01-22 12:49:05 +0000 UTC notAfter=2025-01-21 12:49:05 +0000 UTC\"
time=\"2024-01-22T12:49:05.286101950Z\" level=info msg=\"certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1705927745: notBefore=2024-01-22 12:49:05 +0000 UTC notAfter=2025-01-21 12:49:05 +0000 UTC\"
time=\"2024-01-22T12:49:05.286529545Z\" level=info msg=\"certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1705927745: notBefore=2024-01-22 12:49:05 +0000 UTC notAfter=2025-01-21 12:49:05 +0000 UTC\"
time=\"2024-01-22T12:49:05.287004196Z\" level=info msg=\"certificate CN=k3s-cloud-controller-manager signed by CN=k3s-client-ca@1705927745: notBefore=2024-01-22 12:49:05 +0000 UTC notAfter=2025-01-21 12:49:05 +0000 UTC\"
time=\"2024-01-22T12:49:05.287800396Z\" level=info msg=\"certificate CN=kube-apiserver signed by CN=k3s-server-ca@1705927745: notBefore=2024-01-22 12:49:05 +0000 UTC notAfter=2025-01-21 12:49:05 +0000 UTC\"
time=\"2024-01-22T12:49:05.288583511Z\" level=info msg=\"certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1705927745: notBefore=2024-01-22 12:49:05 +0000 UTC notAfter=2025-01-21 12:49:05 +0000 UTC\"
time=\"2024-01-22T12:49:05.289375402Z\" level=info msg=\"certificate CN=etcd-server signed by CN=etcd-server-ca@1705927745: notBefore=2024-01-22 12:49:05 +0000 UTC notAfter=2025-01-21 12:49:05 +0000 UTC\"
time=\"2024-01-22T12:49:05.289842951Z\" level=info msg=\"certificate CN=etcd-client signed by CN=etcd-server-ca@1705927745: notBefore=2024-01-22 12:49:05 +0000 UTC notAfter=2025-01-21 12:49:05 +0000 UTC\"
time=\"2024-01-22T12:49:05.290603334Z\" level=info msg=\"certificate CN=etcd-peer signed by CN=etcd-peer-ca@1705927745: notBefore=2024-01-22 12:49:05 +0000 UTC notAfter=2025-01-21 12:49:05 +0000 UTC\"
time=\"2024-01-22T12:49:05.648455784Z\" level=info msg=\"certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1705927745: notBefore=2024-01-22 12:49:05 +0000 UTC notAfter=2025-01-21 12:49:05 +0000 UTC\"
time=\"2024-01-22T12:49:05.648736947Z\" level=info msg=\"Active TLS secret  (ver=) (count 8): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.17.0.3:172.17.0.3 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=BD7AC66137B1597D427684EBD1F856E174BE4B21]\"
time=\"2024-01-22T12:49:05.652685023Z\" level=info msg=\"Running kube-apiserver --advertise-port=32841 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/tmp/kubedata3658668145/server/tls/temporary-certs --client-ca-file=/tmp/kubedata3658668145/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/tmp/kubedata3658668145/server/tls/server-ca.crt --kubelet-client-certificate=/tmp/kubedata3658668145/server/tls/client-kube-apiserver.crt --kubelet-client-key=/tmp/kubedata3658668145/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/tmp/kubedata3658668145/server/tls/client-auth-proxy.crt --proxy-client-key-file=/tmp/kubedata3658668145/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/tmp/kubedata3658668145/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=32842 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/tmp/kubedata3658668145/server/tls/service.key --service-account-signing-key-file=/tmp/kubedata3658668145/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/tmp/kubedata3658668145/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/tmp/kubedata3658668145/server/tls/serving-kube-apiserver.key\"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I0122 12:49:05.653521    5009 server.go:656] external host was not specified, using 172.17.0.3
I0122 12:49:05.653675    5009 server.go:195] Version: v1.21.2+k3s1
I0122 12:49:05.811358    5009 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I0122 12:49:05.812072    5009 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0122 12:49:05.812083    5009 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0122 12:49:05.812904    5009 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0122 12:49:05.812916    5009 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0122 12:49:05.826261    5009 instance.go:283] Using reconciler: lease
I0122 12:49:05.869724    5009 rest.go:130] the default service ipfamily for this cluster is: IPv4
W0122 12:49:06.106558    5009 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0122 12:49:06.112844    5009 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0122 12:49:06.115064    5009 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0122 12:49:06.119740    5009 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0122 12:49:06.121210    5009 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W0122 12:49:06.124126    5009 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.
W0122 12:49:06.124141    5009 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.
I0122 12:49:06.130788    5009 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0122 12:49:06.130803    5009 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
time=\"2024-01-22T12:49:06.140391212Z\" level=info msg=\"Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/tmp/kubedata3658668145/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0\"
time=\"2024-01-22T12:49:06.140449016Z\" level=info msg=\"Waiting for API server to become available\"
time=\"2024-01-22T12:49:06.141073880Z\" level=info msg=\"Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/tmp/kubedata3658668145/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/tmp/kubedata3658668145/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/tmp/kubedata3658668145/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/tmp/kubedata3658668145/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/tmp/kubedata3658668145/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/tmp/kubedata3658668145/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/tmp/kubedata3658668145/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/tmp/kubedata3658668145/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/tmp/kubedata3658668145/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/tmp/kubedata3658668145/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/tmp/kubedata3658668145/server/tls/service.key --use-service-account-credentials=true\"
time=\"2024-01-22T12:49:06.142070282Z\" level=info msg=\"Node token is available at /tmp/kubedata3658668145/server/token\"
time=\"2024-01-22T12:49:06.142125124Z\" level=info msg=\"To join node to cluster: k3s agent -s https://127.0.0.1:32841 -t ${NODE_TOKEN}\"
time=\"2024-01-22T12:49:06.142728204Z\" level=info msg=\"Wrote kubeconfig /tmp/kubeconfig3632220465.yml\"
time=\"2024-01-22T12:49:06.142999408Z\" level=info msg=\"Run: k3s kubectl\"
time=\"2024-01-22T12:49:06.174215195Z\" level=info msg=\"Cluster-Http-Server 2024/01/22 12:49:06 http: TLS handshake error from 127.0.0.1:57154: remote error: tls: bad certificate\"
time=\"2024-01-22T12:49:06.179868793Z\" level=info msg=\"Cluster-Http-Server 2024/01/22 12:49:06 http: TLS handshake error from 127.0.0.1:57176: remote error: tls: bad certificate\"
time=\"2024-01-22T12:49:06.186476928Z\" level=info msg=\"certificate CN=integrationk3s signed by CN=k3s-server-ca@1705927745: notBefore=2024-01-22 12:49:05 +0000 UTC notAfter=2025-01-21 12:49:06 +0000 UTC\"
time=\"2024-01-22T12:49:06.188423905Z\" level=info msg=\"certificate CN=system:node:integrationk3s,O=system:nodes signed by CN=k3s-client-ca@1705927745: notBefore=2024-01-22 12:49:05 +0000 UTC notAfter=2025-01-21 12:49:06 +0000 UTC\"
time=\"2024-01-22T12:49:06.712804014Z\" level=info msg=\"Module overlay was already loaded\"
time=\"2024-01-22T12:49:06.712842185Z\" level=info msg=\"Module nf_conntrack was already loaded\"
time=\"2024-01-22T12:49:06.712857704Z\" level=info msg=\"Module br_netfilter was already loaded\"
time=\"2024-01-22T12:49:06.719380834Z\" level=warning msg=\"Failed to load kernel module iptable_nat with modprobe\"
time=\"2024-01-22T12:49:06.721395868Z\" level=info msg=\"Set sysctl 'net/netfilter/nf_conntrack_max' to 131072\"
time=\"2024-01-22T12:49:06.721421105Z\" level=error msg=\"Failed to set sysctl: open /proc/sys/net/netfilter/nf_conntrack_max: permission denied\"
time=\"2024-01-22T12:49:06.721459006Z\" level=info msg=\"Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400\"
time=\"2024-01-22T12:49:06.721533714Z\" level=info msg=\"Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600\"
time=\"2024-01-22T12:49:06.722572481Z\" level=info msg=\"Logging containerd to /tmp/kubedata3658668145/agent/containerd/containerd.log\"
time=\"2024-01-22T12:49:06.722695724Z\" level=info msg=\"Running containerd -c /tmp/kubedata3658668145/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /tmp/kubedata3658668145/agent/containerd\"
time=\"2024-01-22T12:49:07.724584965Z\" level=info msg=\"Containerd is now running\"
time=\"2024-01-22T12:49:07.732797898Z\" level=info msg=\"Connecting to proxy\" url=\"wss://127.0.0.1:32841/v1-k3s/connect\"
time=\"2024-01-22T12:49:07.734291914Z\" level=info msg=\"Handling backend connection request [integrationk3s]\"
time=\"2024-01-22T12:49:07.734977007Z\" level=info msg=\"Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/tmp/kubedata3658668145/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/usr/libexec/cni --cni-conf-dir=/tmp/kubedata3658668145/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=integrationk3s --kubeconfig=/tmp/kubedata3658668145/agent/kubelet.kubeconfig --node-labels= --pod-manifest-path=/tmp/kubedata3658668145/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --serialize-image-pulls=false --tls-cert-file=/tmp/kubedata3658668145/agent/serving-kubelet.crt --tls-private-key-file=/tmp/kubedata3658668145/agent/serving-kubelet.key\"
time=\"2024-01-22T12:49:07.735551364Z\" level=info msg=\"Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=integrationk3s --kubeconfig=/tmp/kubedata3658668145/agent/kubeproxy.kubeconfig --proxy-mode=iptables\"
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
W0122 12:49:07.735860    5009 server.go:220] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I0122 12:49:07.736224    5009 server.go:436] \"Kubelet version\" kubeletVersion=\"v1.21.2+k3s1\"
W0122 12:49:07.736282    5009 proxier.go:653] Failed to read file /lib/modules/6.2.0-1018-azure/modules.builtin with error open /lib/modules/6.2.0-1018-azure/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0122 12:49:07.737147    5009 proxier.go:663] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0122 12:49:07.737847    5009 proxier.go:663] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0122 12:49:07.738533    5009 proxier.go:663] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0122 12:49:07.739183    5009 proxier.go:663] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0122 12:49:07.739871    5009 proxier.go:663] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
E0122 12:49:07.746323    5009 node.go:161] Failed to retrieve node info: nodes \"integrationk3s\" is forbidden: User \"system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope
W0122 12:49:07.755140    5009 manager.go:159] Cannot detect current cgroup on cgroup v2
I0122 12:49:07.755158    5009 dynamic_cafile_content.go:167] Starting client-ca-bundle::/tmp/kubedata3658668145/agent/client-ca.crt
I0122 12:49:07.958768    5009 dynamic_cafile_content.go:167] Starting request-header::/tmp/kubedata3658668145/server/tls/request-header-ca.crt
I0122 12:49:07.958777    5009 dynamic_cafile_content.go:167] Starting client-ca-bundle::/tmp/kubedata3658668145/server/tls/client-ca.crt
I0122 12:49:07.958963    5009 dynamic_serving_content.go:130] Starting serving-cert::/tmp/kubedata3658668145/server/tls/serving-kube-apiserver.crt::/tmp/kubedata3658668145/server/tls/serving-kube-apiserver.key
I0122 12:49:07.959224    5009 secure_serving.go:197] Serving securely on 127.0.0.1:32842
I0122 12:49:07.959265    5009 tlsconfig.go:240] Starting DynamicServingCertificateController
I0122 12:49:07.959582    5009 autoregister_controller.go:141] Starting autoregister controller
I0122 12:49:07.959599    5009 cache.go:32] Waiting for caches to sync for autoregister controller
I0122 12:49:07.959631    5009 controller.go:83] Starting OpenAPI AggregationController
I0122 12:49:07.959668    5009 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0122 12:49:07.959678    5009 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0122 12:49:07.959708    5009 crdregistration_controller.go:111] Starting crd-autoregister controller
I0122 12:49:07.959718    5009 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I0122 12:49:07.959876    5009 apf_controller.go:294] Starting API Priority and Fairness config controller
I0122 12:49:07.960109    5009 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0122 12:49:07.960125    5009 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0122 12:49:07.960151    5009 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/tmp/kubedata3658668145/server/tls/client-auth-proxy.crt::/tmp/kubedata3658668145/server/tls/client-auth-proxy.key
I0122 12:49:07.960536    5009 dynamic_cafile_content.go:167] Starting client-ca-bundle::/tmp/kubedata3658668145/server/tls/client-ca.crt
I0122 12:49:07.960561    5009 dynamic_cafile_content.go:167] Starting request-header::/tmp/kubedata3658668145/server/tls/request-header-ca.crt
I0122 12:49:07.963831    5009 available_controller.go:475] Starting AvailableConditionController
I0122 12:49:07.963847    5009 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0122 12:49:07.963884    5009 customresource_discovery_controller.go:209] Starting DiscoveryController
I0122 12:49:07.967925    5009 controller.go:86] Starting OpenAPI controller
I0122 12:49:07.967979    5009 naming_controller.go:291] Starting NamingConditionController
I0122 12:49:07.968009    5009 establishing_controller.go:76] Starting EstablishingController
I0122 12:49:07.968080    5009 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0122 12:49:07.968119    5009 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0122 12:49:07.968149    5009 crd_finalizer.go:266] Starting CRDFinalizer
time=\"2024-01-22T12:49:07.968526156Z\" level=info msg=\"Waiting for cloudcontroller rbac role to be created\"
I0122 12:49:07.973376    5009 controller.go:611] quota admission added evaluator for: namespaces
E0122 12:49:07.977546    5009 controller.go:151] Unable to perform initial Kubernetes service initialization: Service \"kubernetes\" is invalid: spec.clusterIPs: Invalid value: []string{\"10.43.0.1\"}: failed to allocated ip:10.43.0.1 with error:cannot allocate resources of type serviceipallocations at this time
E0122 12:49:07.978034    5009 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.3, ResourceVersion: 0, AdditionalErrorMsg: 
I0122 12:49:08.011930    5009 shared_informer.go:247] Caches are synced for node_authorizer 
I0122 12:49:08.059681    5009 cache.go:39] Caches are synced for autoregister controller
I0122 12:49:08.059700    5009 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0122 12:49:08.059732    5009 shared_informer.go:247] Caches are synced for crd-autoregister 
I0122 12:49:08.059930    5009 apf_controller.go:299] Running API Priority and Fairness config worker
I0122 12:49:08.061841    5009 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
I0122 12:49:08.063861    5009 cache.go:39] Caches are synced for AvailableConditionController controller
E0122 12:49:08.754916    5009 node.go:161] Failed to retrieve node info: nodes \"integrationk3s\" is forbidden: User \"system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope
I0122 12:49:08.958640    5009 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0122 12:49:08.958749    5009 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0122 12:49:08.963486    5009 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0122 12:49:08.965628    5009 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0122 12:49:08.965672    5009 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0122 12:49:09.189292    5009 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0122 12:49:09.208649    5009 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0122 12:49:09.295479    5009 lease.go:233] Resetting endpoints for master service \"kubernetes\" to [172.17.0.3]
I0122 12:49:09.296057    5009 controller.go:611] quota admission added evaluator for: endpoints
I0122 12:49:09.298187    5009 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
time=\"2024-01-22T12:49:09.745149297Z\" level=info msg=\"Waiting for node integrationk3s: nodes \\\"integrationk3s\\\" not found\"
time=\"2024-01-22T12:49:09.970692059Z\" level=info msg=\"Kube API server is now running\"
time=\"2024-01-22T12:49:09.970727475Z\" level=info msg=\"k3s is up and running\"
Flag --address has been deprecated, see --bind-address instead.
I0122 12:49:09.972978    5009 controllermanager.go:175] Version: v1.21.2+k3s1
I0122 12:49:09.973279    5009 deprecated_insecure_serving.go:53] Serving insecurely on 127.0.0.1:10252
time=\"2024-01-22T12:49:09.976787494Z\" level=info msg=\"Creating CRD addons.k3s.cattle.io\"
W0122 12:49:09.980464    5009 authorization.go:47] Authorization is disabled
W0122 12:49:09.980476    5009 authentication.go:47] Authentication is disabled
I0122 12:49:09.980485    5009 deprecated_insecure_serving.go:51] Serving healthz insecurely on 127.0.0.1:10251
time=\"2024-01-22T12:49:09.982741580Z\" level=info msg=\"Creating CRD helmcharts.helm.cattle.io\"
time=\"2024-01-22T12:49:09.984841382Z\" level=info msg=\"Creating CRD helmchartconfigs.helm.cattle.io\"
time=\"2024-01-22T12:49:09.996215385Z\" level=info msg=\"Waiting for CRD addons.k3s.cattle.io to become available\"
time=\"2024-01-22T12:49:10.498554094Z\" level=info msg=\"Done waiting for CRD addons.k3s.cattle.io to become available\"
time=\"2024-01-22T12:49:10.498580453Z\" level=info msg=\"Waiting for CRD helmcharts.helm.cattle.io to become available\"
I0122 12:49:10.533078    5009 shared_informer.go:240] Waiting for caches to sync for tokens
I0122 12:49:10.537744    5009 controller.go:611] quota admission added evaluator for: serviceaccounts
I0122 12:49:10.541983    5009 garbagecollector.go:142] Starting garbage collector controller
I0122 12:49:10.541996    5009 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0122 12:49:10.542020    5009 graph_builder.go:289] GraphBuilder running
I0122 12:49:10.542091    5009 controllermanager.go:574] Started \"garbagecollector\"
I0122 12:49:10.547453    5009 controllermanager.go:574] Started \"cronjob\"
I0122 12:49:10.547540    5009 cronjob_controllerv2.go:125] Starting cronjob controller v2
I0122 12:49:10.547548    5009 shared_informer.go:240] Waiting for caches to sync for cronjob
I0122 12:49:10.553213    5009 node_ipam_controller.go:91] Sending events to api server.
I0122 12:49:10.633711    5009 shared_informer.go:247] Caches are synced for tokens 
E0122 12:49:10.857692    5009 node.go:161] Failed to retrieve node info: nodes \"integrationk3s\" not found
time=\"2024-01-22T12:49:11.001033399Z\" level=info msg=\"Done waiting for CRD helmcharts.helm.cattle.io to become available\"
time=\"2024-01-22T12:49:11.001058636Z\" level=info msg=\"Waiting for CRD helmchartconfigs.helm.cattle.io to become available\"
time=\"2024-01-22T12:49:11.503715980Z\" level=info msg=\"Done waiting for CRD helmchartconfigs.helm.cattle.io to become available\"
time=\"2024-01-22T12:49:11.509731407Z\" level=info msg=\"Writing static file: /tmp/kubedata3658668145/server/static/charts/traefik-9.18.2.tgz\"
time=\"2024-01-22T12:49:11.509918405Z\" level=info msg=\"Writing static file: /tmp/kubedata3658668145/server/static/charts/traefik-crd-9.18.2.tgz\"
time=\"2024-01-22T12:49:11.526340157Z\" level=info msg=\"Failed to get existing traefik HelmChart\" error=\"helmcharts.helm.cattle.io \\\"traefik\\\" not found\"
time=\"2024-01-22T12:49:11.526479776Z\" level=info msg=\"Writing manifest: /tmp/kubedata3658668145/server/manifests/metrics-server/aggregated-metrics-reader.yaml\"
time=\"2024-01-22T12:49:11.526558022Z\" level=info msg=\"Writing manifest: /tmp/kubedata3658668145/server/manifests/metrics-server/auth-delegator.yaml\"
time=\"2024-01-22T12:49:11.526637710Z\" level=info msg=\"Writing manifest: /tmp/kubedata3658668145/server/manifests/metrics-server/auth-reader.yaml\"
time=\"2024-01-22T12:49:11.526712589Z\" level=info msg=\"Writing manifest: /tmp/kubedata3658668145/server/manifests/metrics-server/metrics-apiservice.yaml\"
time=\"2024-01-22T12:49:11.526800232Z\" level=info msg=\"Writing manifest: /tmp/kubedata3658668145/server/manifests/rolebindings.yaml\"
time=\"2024-01-22T12:49:11.526877466Z\" level=info msg=\"Writing manifest: /tmp/kubedata3658668145/server/manifests/ccm.yaml\"
time=\"2024-01-22T12:49:11.526978122Z\" level=info msg=\"Writing manifest: /tmp/kubedata3658668145/server/manifests/coredns.yaml\"
time=\"2024-01-22T12:49:11.527082116Z\" level=info msg=\"Writing manifest: /tmp/kubedata3658668145/server/manifests/local-storage.yaml\"
time=\"2024-01-22T12:49:11.527167464Z\" level=info msg=\"Writing manifest: /tmp/kubedata3658668145/server/manifests/traefik.yaml\"
time=\"2024-01-22T12:49:11.527252312Z\" level=info msg=\"Writing manifest: /tmp/kubedata3658668145/server/manifests/metrics-server/metrics-server-deployment.yaml\"
time=\"2024-01-22T12:49:11.527326800Z\" level=info msg=\"Writing manifest: /tmp/kubedata3658668145/server/manifests/metrics-server/metrics-server-service.yaml\"
time=\"2024-01-22T12:49:11.527404906Z\" level=info msg=\"Writing manifest: /tmp/kubedata3658668145/server/manifests/metrics-server/resource-reader.yaml\"
time=\"2024-01-22T12:49:11.627833041Z\" level=info msg=\"Starting k3s.cattle.io/v1, Kind=Addon controller\"
time=\"2024-01-22T12:49:11.631218685Z\" level=info msg=\"Waiting for control-plane node integrationk3s startup: nodes \\\"integrationk3s\\\" not found\"
time=\"2024-01-22T12:49:11.646620008Z\" level=info msg=\"Cluster dns configmap has been set successfully\"
I0122 12:49:11.649802    5009 controller.go:611] quota admission added evaluator for: addons.k3s.cattle.io
I0122 12:49:11.680162    5009 controller.go:611] quota admission added evaluator for: deployments.apps
time=\"2024-01-22T12:49:11.728943338Z\" level=info msg=\"Starting helm.cattle.io/v1, Kind=HelmChart controller\"
time=\"2024-01-22T12:49:11.728984384Z\" level=info msg=\"Starting helm.cattle.io/v1, Kind=HelmChartConfig controller\"
time=\"2024-01-22T12:49:11.728997361Z\" level=info msg=\"Starting batch/v1, Kind=Job controller\"
time=\"2024-01-22T12:49:11.834515287Z\" level=info msg=\"Waiting for node integrationk3s: nodes \\\"integrationk3s\\\" not found\"
time=\"2024-01-22T12:49:12.032294057Z\" level=info msg=\"Starting /v1, Kind=Pod controller\"
time=\"2024-01-22T12:49:12.032292925Z\" level=info msg=\"Starting /v1, Kind=Secret controller\"
time=\"2024-01-22T12:49:12.032299333Z\" level=info msg=\"Starting /v1, Kind=Node controller\"
time=\"2024-01-22T12:49:12.032326096Z\" level=info msg=\"Starting /v1, Kind=Service controller\"
time=\"2024-01-22T12:49:12.032327736Z\" level=info msg=\"Starting /v1, Kind=Endpoints controller\"
time=\"2024-01-22T12:49:12.231100496Z\" level=info msg=\"Active TLS secret k3s-serving (ver=273) (count 8): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.17.0.3:172.17.0.3 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=BD7AC66137B1597D427684EBD1F856E174BE4B21]\"
time=\"2024-01-22T12:49:12.735027460Z\" level=info msg=\"Stopped tunnel to 127.0.0.1:32841\"
time=\"2024-01-22T12:49:12.735055822Z\" level=info msg=\"Proxy done\" err=\"context canceled\" url=\"wss://127.0.0.1:32841/v1-k3s/connect\"
time=\"2024-01-22T12:49:12.735057993Z\" level=info msg=\"Connecting to proxy\" url=\"wss://172.17.0.3:32841/v1-k3s/connect\"
time=\"2024-01-22T12:49:12.735114108Z\" level=info msg=\"error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF\"
time=\"2024-01-22T12:49:12.735207905Z\" level=error msg=\"Failed to connect to proxy\" error=\"dial tcp 172.17.0.3:32841: connect: connection refused\"
time=\"2024-01-22T12:49:12.735224646Z\" level=error msg=\"Remotedialer proxy error\" error=\"dial tcp 172.17.0.3:32841: connect: connection refused\"
I0122 12:49:12.762534    5009 server.go:660] \"--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /\"
I0122 12:49:12.762702    5009 container_manager_linux.go:291] \"Container manager verified user specified cgroup-root exists\" cgroupRoot=[]
I0122 12:49:12.762777    5009 container_manager_linux.go:296] \"Creating Container Manager object based on Node Config\" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}
I0122 12:49:12.762808    5009 topology_manager.go:120] \"Creating topology manager with policy per scope\" topologyPolicyName=\"none\" topologyScopeName=\"container\"
I0122 12:49:12.762824    5009 container_manager_linux.go:327] \"Initializing Topology Manager\" policy=\"none\" scope=\"container\"
I0122 12:49:12.762833    5009 container_manager_linux.go:332] \"Creating device plugin manager\" devicePluginEnabled=true
I0122 12:49:12.763053    5009 kubelet.go:404] \"Attempting to sync node with API server\"
I0122 12:49:12.763071    5009 kubelet.go:272] \"Adding static pod path\" path=\"/tmp/kubedata3658668145/agent/pod-manifests\"
I0122 12:49:12.763091    5009 kubelet.go:283] \"Adding apiserver pod source\"
I0122 12:49:12.763106    5009 apiserver.go:42] \"Waiting for node sync before watching apiserver pods\"
I0122 12:49:12.763916    5009 kuberuntime_manager.go:222] \"Container runtime initialized\" containerRuntime=\"containerd\" version=\"v1.7.3\" apiVersion=\"v1alpha2\"
W0122 12:49:12.764333    5009 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I0122 12:49:12.764869    5009 server.go:1191] \"Started kubelet\"
I0122 12:49:12.765788    5009 fs_resource_analyzer.go:67] \"Starting FS ResourceAnalyzer\"
I0122 12:49:12.767781    5009 server.go:149] \"Starting to listen\" address=\"0.0.0.0\" port=10250
I0122 12:49:12.768229    5009 server.go:405] \"Adding debug handlers to kubelet server\"
E0122 12:49:12.769726    5009 cri_stats_provider.go:369] \"Failed to get the info of the filesystem with mountpoint\" err=\"unable to find data in memory cache\" mountpoint=\"/tmp/kubedata3658668145/agent/containerd/io.containerd.snapshotter.v1.overlayfs\"
E0122 12:49:12.769747    5009 kubelet.go:1306] \"Image garbage collection failed once. Stats initialization may not have completed yet\" err=\"invalid capacity 0 on image filesystem\"
I0122 12:49:12.770675    5009 volume_manager.go:271] \"Starting Kubelet Volume Manager\"
I0122 12:49:12.773289    5009 desired_state_of_world_populator.go:141] \"Desired state populator starts to run\"
E0122 12:49:12.801319    5009 nodelease.go:49] \"Failed to get node when trying to set owner ref to the node lease\" err=\"nodes \\\"integrationk3s\\\" not found\" node=\"integrationk3s\"
I0122 12:49:12.816239    5009 cpu_manager.go:199] \"Starting CPU manager\" policy=\"none\"
I0122 12:49:12.816294    5009 cpu_manager.go:200] \"Reconciling\" reconcilePeriod=\"10s\"
I0122 12:49:12.816316    5009 state_mem.go:36] \"Initialized new in-memory state store\"
I0122 12:49:12.818045    5009 policy_none.go:44] \"None policy: Start\"
E0122 12:49:12.819095    5009 node_container_manager_linux.go:57] \"Failed to create cgroup\" err=\"cannot enter cgroupv2 \\\"/sys/fs/cgroup/kubepods\\\" with domain controllers -- it is in an invalid state\" cgroupName=[kubepods]
E0122 12:49:12.819123    5009 kubelet.go:1384] \"Failed to start ContainerManager\" err=\"cannot enter cgroupv2 \\\"/sys/fs/cgroup/kubepods\\\" with domain controllers -- it is in an invalid state\"
> stopped 2024-01-22 12:49:12.850970012 +0000 UTC m=+11.133276567 exit status 1

waitk3s: node is not ready: /usr/bin/kubectl --kubeconfig /tmp/kubeconfig3632220465.yml wait --for=condition=Ready node/integrationk3s The connection to the server 127.0.0.1:32841 was refused - did you specify the right host or port?: exit status 1

I see k3s is up and running:

time="2024-01-22T12:49:09.970692059Z" level=info msg="Kube API server is now running"
time="2024-01-22T12:49:09.970727475Z" level=info msg="k3s is up and running"

I don't think the following errors are related:

time=\"2024-01-22T12:49:12.735207905Z\" level=error msg=\"Failed to connect to proxy\" error=\"dial tcp 172.17.0.3:32841: connect: connection refused\"
time=\"2024-01-22T12:49:12.735224646Z\" level=error msg=\"Remotedialer proxy error\" error=\"dial tcp 172.17.0.3:32841: connect: connection refused\"

Maybe you have an idea @AkihiroSuda?

@AkihiroSuda (Collaborator)

E0122 12:49:12.819095 5009 node_container_manager_linux.go:57] "Failed to create cgroup" err="cannot enter cgroupv2 \"/sys/fs/cgroup/kubepods\" with domain controllers -- it is in an invalid state" cgroupName=[kubepods]
E0122 12:49:12.819123 5009 kubelet.go:1384] "Failed to start ContainerManager" err="cannot enter cgroupv2 \"/sys/fs/cgroup/kubepods\" with domain controllers -- it is in an invalid state"

Nested containers with cgroup v2 need this ENTRYPOINT script: https://github.com/moby/moby/blob/v25.0.0/hack/dind#L59-L69

# cgroup v2: enable nesting
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
	# move the processes from the root group to the /init group,
	# otherwise writing subtree_control fails with EBUSY.
	# An error during moving non-existent process (i.e., "cat") is ignored.
	mkdir -p /sys/fs/cgroup/init
	xargs -rn1 < /sys/fs/cgroup/cgroup.procs > /sys/fs/cgroup/init/cgroup.procs || :
	# enable controllers
	sed -e 's/ / +/g' -e 's/^/+/' < /sys/fs/cgroup/cgroup.controllers \
		> /sys/fs/cgroup/cgroup.subtree_control
fi
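
A quick sanity check (not part of the script above, just a suggestion) is to read back the delegated controllers after running it; an empty result means nesting was not enabled and the kubelet error above will come back:

# Inside the container, after the snippet above has run:
cat /sys/fs/cgroup/cgroup.subtree_control
# Expect a space-separated controller list such as "cpuset cpu io memory pids".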

@crazy-max (Member Author) commented Jan 23, 2024

E0122 12:49:12.819095 5009 node_container_manager_linux.go:57] "Failed to create cgroup" err="cannot enter cgroupv2 "/sys/fs/cgroup/kubepods" with domain controllers -- it is in an invalid state" cgroupName=[kubepods]
E0122 12:49:12.819123 5009 kubelet.go:1384] "Failed to start ContainerManager" err="cannot enter cgroupv2 "/sys/fs/cgroup/kubepods" with domain controllers -- it is in an invalid state"

Nested containers with cgroup v2 need this ENTRYPOINT script: https://github.com/moby/moby/blob/v25.0.0/hack/dind#L59-L69

# cgroup v2: enable nesting
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
	# move the processes from the root group to the /init group,
	# otherwise writing subtree_control fails with EBUSY.
	# An error during moving non-existent process (i.e., "cat") is ignored.
	mkdir -p /sys/fs/cgroup/init
	xargs -rn1 < /sys/fs/cgroup/cgroup.procs > /sys/fs/cgroup/init/cgroup.procs || :
	# enable controllers
	sed -e 's/ / +/g' -e 's/^/+/' < /sys/fs/cgroup/cgroup.controllers \
		> /sys/fs/cgroup/cgroup.subtree_control
fi

Looks better now, thanks @AkihiroSuda! I'm looking at the test failure: https://github.com/crazy-max/buildx/actions/runs/7624605445/job/20767159528#step:6:54

Most likely it's linked to the local registry, which runs on the host and therefore can't be accessed from the node.
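
One way to work around that (a sketch, not necessarily what this PR will end up doing) is to let the node's containerd reach the test registry through an address that resolves from inside the node, for example via k3s's registries.yaml; the registry name and endpoint below are placeholders:

# Hypothetical mirror config so the in-test k3s node can reach the host-side
# registry; "localhost:5000" and the endpoint address are placeholders.
mkdir -p /etc/rancher/k3s
cat > /etc/rancher/k3s/registries.yaml <<'EOF'
mirrors:
  "localhost:5000":
    endpoint:
      - "http://<address-reachable-from-the-node>:5000"
EOF
# k3s reads this file at server startup (the path can also be overridden with
# --private-registry), so it needs to be in place before launching k3s.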
