Pods unable to reach 10.43.0.1:443 even with firewall disabled #10010

Closed
Stolz opened this issue Apr 23, 2024 · 4 comments

Comments

Stolz commented Apr 23, 2024

Environmental Info:

K3s Version:

$ k3s --version
k3s version v1.25.4+k3s- ()
go version go1.22.2

Node(s) CPU architecture, OS, and Version:

$ cat /etc/lsb-release
DISTRIB_ID="Gentoo"

$ uname -a
Linux solid 6.0.7-gentoo-solid-stolz #4 SMP Sat Nov 5 19:03:13 HKT 2022 x86_64 AMD Ryzen 7 5700G with Radeon Graphics AuthenticAMD GNU/Linux

$ uptime # Long uptime, hence the old kernel version in use
 17:49:54 up 350 days,  7:16,  4 users,  load average: 0.78, 0.79, 0.55

$ iptables --version
iptables v1.8.10 (legacy)

Cluster Configuration: Single node server.

$ cat /etc/rancher/k3s/config.yaml
write-kubeconfig-mode: "0640"

$ env | grep K3S_ # No output because no K3s env variables have been defined

Describe the bug:

Pods from default addons cannot connect to https://10.43.0.1:443.
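
The same failure is reproducible from an ad-hoc pod. This is only a sketch of the check I have in mind (curlimages/curl is just a convenient curl-capable image, not something K3s ships); given the behaviour described above it should hit the same i/o timeout instead of returning the apiserver version:

$ kubectl run api-test --rm -it --restart=Never --image=curlimages/curl --command -- \
    curl -vk --max-time 10 https://10.43.0.1:443/version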

Steps To Reproduce:

  • Flush iptables firewall and reset default policy to allow all traffic
  • Install K3s
  • Start K3s server
  • Check status of pods in kube-system namespace

Expected behavior:

All default addons from /var/lib/rancher/k3s/server/manifests should be up and running. If any iptables extension is missing, it should be caught by the check-config.sh script.

Actual behavior:

The coredns pod never reaches ready status. The local-path-provisioner and metrics-server pods enter CrashLoopBackOff status. All the failing pods show an error about being unable to connect to https://10.43.0.1:443. Server logs mention some iptables extensions as missing.
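
The extensions flagged as missing in the server log below (physdev, NFLOG, limit, REJECT) can also be probed directly. The module and kernel option names here are my best guess at the usual xt_*/ipt_* naming, so treat this as a sketch:

$ for m in xt_physdev xt_NFLOG xt_limit ipt_REJECT; do modprobe -n -v "$m" || echo "$m not available"; done
$ zgrep -E 'XT_MATCH_PHYSDEV|XT_TARGET_NFLOG|XT_MATCH_LIMIT|IP_NF_TARGET_REJECT' /proc/config.gz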

Additional context / logs:

My system has a lot of iptables rules, but for the sake of simplicity I have reproduced the issue with a firewall without any rules and with a permissive default policy. These are all the steps I followed:

Install K3s from the official Gentoo repository

emerge -av sys-cluster/k3s
Check if there are any kernel options missing ...
$ wget -q https://raw.githubusercontent.com/k3s-io/k3s/master/contrib/util/check-config.sh

$ modprobe configs

$ sh check-config.sh
Verifying binaries in .:
- sha256sum: sha256sums unavailable
- links: link list unavailable

System:
- /sbin iptables v1.8.10 (legacy): ok
- swap: disabled
- routes: ok

Limits:
- /proc/sys/kernel/keys/root_maxkeys: 1000000

info: reading kernel config from /proc/config.gz ...

Generally Necessary:
- cgroup hierarchy: cgroups Hybrid mounted, cpuset|memory controllers status: good
- CONFIG_NAMESPACES: enabled
- CONFIG_NET_NS: enabled
- CONFIG_PID_NS: enabled
- CONFIG_IPC_NS: enabled
- CONFIG_UTS_NS: enabled
- CONFIG_CGROUPS: enabled
- CONFIG_CGROUP_PIDS: enabled
- CONFIG_CGROUP_CPUACCT: enabled
- CONFIG_CGROUP_DEVICE: enabled
- CONFIG_CGROUP_FREEZER: enabled
- CONFIG_CGROUP_SCHED: enabled
- CONFIG_CPUSETS: enabled
- CONFIG_MEMCG: enabled
- CONFIG_KEYS: enabled
- CONFIG_VETH: enabled (as module)
- CONFIG_BRIDGE: enabled (as module)
- CONFIG_BRIDGE_NETFILTER: enabled (as module)
- CONFIG_IP_NF_FILTER: enabled (as module)
- CONFIG_IP_NF_TARGET_MASQUERADE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_IPVS: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_MULTIPORT: enabled (as module)
- CONFIG_IP_NF_NAT: enabled (as module)
- CONFIG_NF_NAT: enabled (as module)
- CONFIG_POSIX_MQUEUE: enabled

Optional Features:
- CONFIG_USER_NS: enabled
- CONFIG_SECCOMP: enabled
- CONFIG_BLK_CGROUP: enabled
- CONFIG_BLK_DEV_THROTTLING: enabled
- CONFIG_CGROUP_PERF: enabled
- CONFIG_CGROUP_HUGETLB: missing
- CONFIG_NET_CLS_CGROUP: enabled (as module)
- CONFIG_CGROUP_NET_PRIO: enabled
- CONFIG_CFS_BANDWIDTH: enabled
- CONFIG_FAIR_GROUP_SCHED: enabled
- CONFIG_RT_GROUP_SCHED: enabled
- CONFIG_IP_NF_TARGET_REDIRECT: enabled (as module)
- CONFIG_IP_SET: enabled (as module)
- CONFIG_IP_VS: enabled (as module)
- CONFIG_IP_VS_NFCT: enabled
- CONFIG_IP_VS_PROTO_TCP: enabled
- CONFIG_IP_VS_PROTO_UDP: enabled
- CONFIG_IP_VS_RR: enabled (as module)
- CONFIG_EXT4_FS: enabled
- CONFIG_EXT4_FS_POSIX_ACL: enabled
- CONFIG_EXT4_FS_SECURITY: enabled
- Network Drivers:
  - "overlay":
    - CONFIG_VXLAN: enabled (as module)
      Optional (for encrypted networks):
      - CONFIG_CRYPTO: enabled
      - CONFIG_CRYPTO_AEAD: enabled (as module)
      - CONFIG_CRYPTO_GCM: enabled (as module)
      - CONFIG_CRYPTO_SEQIV: enabled (as module)
      - CONFIG_CRYPTO_GHASH: enabled (as module)
      - CONFIG_XFRM: missing
      - CONFIG_XFRM_USER: missing
      - CONFIG_XFRM_ALGO: missing
      - CONFIG_INET_ESP: missing
      - CONFIG_INET_XFRM_MODE_TRANSPORT: missing
- Storage Drivers:
  - "overlay":
    - CONFIG_OVERLAY_FS: enabled (as module)

STATUS: pass

Disable firewall (default policy allows all traffic)

$ iptables -P INPUT ACCEPT
$ iptables -P FORWARD ACCEPT
$ iptables -P OUTPUT ACCEPT
$ iptables -t nat -F
$ iptables -t mangle -F
$ iptables -F
$ iptables -X

$ iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Start K3s server

$ rm -f /var/log/k3s/k3s.log

$ /etc/init.d/k3s start
* Starting k3s ...

$ sleep 10s && /etc/init.d/k3s status
* status: started
Check iptables rules added by K3s ...
$ iptables -nvL
Chain INPUT (policy ACCEPT 16833 packets, 3787K bytes)
 pkts bytes target     prot opt in     out     source               destination
 1237  104K KUBE-PROXY-FIREWALL  0    --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes load balancer firewall */
13477 2499K KUBE-NODEPORTS  0    --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes health check service ports */
 1237  104K KUBE-EXTERNAL-SERVICES  0    --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes externally-visible service portals */
16833 3787K KUBE-ROUTER-INPUT  0    --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-router netpol - 4IA2OSFRMVNDXBVV */
16833 3787K KUBE-FIREWALL  0    --  *      *       0.0.0.0/0            0.0.0.0/0

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-PROXY-FIREWALL  0    --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes load balancer firewall */
    0     0 KUBE-FORWARD  0    --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
    0     0 KUBE-SERVICES  0    --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes service portals */
    0     0 KUBE-EXTERNAL-SERVICES  0    --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes externally-visible service portals */
    0     0 FLANNEL-FWD  0    --  *      *       0.0.0.0/0            0.0.0.0/0            /* flanneld forward */
    0     0 KUBE-ROUTER-FORWARD  0    --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-router netpol - TEMCG2JMHZYE7H7T */

Chain OUTPUT (policy ACCEPT 17079 packets, 4687K bytes)
 pkts bytes target     prot opt in     out     source               destination
  886 70988 KUBE-PROXY-FIREWALL  0    --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes load balancer firewall */
  886 70988 KUBE-SERVICES  0    --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes service portals */
17079 4687K KUBE-ROUTER-OUTPUT  0    --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-router netpol - VEAAIY32XVBHCSCY */
17079 4687K KUBE-FIREWALL  0    --  *      *       0.0.0.0/0            0.0.0.0/0

Chain FLANNEL-FWD (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     0    --  *      *       10.42.0.0/16         0.0.0.0/0            /* flanneld forward */
    0     0 ACCEPT     0    --  *      *       0.0.0.0/0            10.42.0.0/16         /* flanneld forward */

Chain KUBE-EXTERNAL-SERVICES (2 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-FIREWALL (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       0    --  *      *      !127.0.0.0/8          127.0.0.0/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT
    0     0 DROP       0    --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-FORWARD (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-KUBELET-CANARY (0 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-NODEPORTS (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-NWPLCY-DEFAULT (0 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 MARK       0    --  *      *       0.0.0.0/0            0.0.0.0/0            /* rule to mark traffic matching a network policy */ MARK or 0x10000

Chain KUBE-PROXY-CANARY (0 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-PROXY-FIREWALL (3 references)
 pkts bytes target     prot opt in     out     source               destination

Chain KUBE-ROUTER-FORWARD (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     0    --  *      *       0.0.0.0/0            0.0.0.0/0            /* rule to explicitly ACCEPT traffic that comply to network policies */ mark match 0x20000/0x20000

Chain KUBE-ROUTER-INPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     0    --  *      *       0.0.0.0/0            10.43.0.0/16         /* allow traffic to primary cluster IP range - TZZOAXOCHPHEHX7M */
    0     0 RETURN     6    --  *      *       0.0.0.0/0            0.0.0.0/0            /* allow LOCAL TCP traffic to node ports - LR7XO7NXDBGQJD2M */ ADDRTYPE match dst-type LOCAL multiport dports 30000:32767
    0     0 RETURN     17   --  *      *       0.0.0.0/0            0.0.0.0/0            /* allow LOCAL UDP traffic to node ports - 76UCBPIZNGJNWNUZ */ ADDRTYPE match dst-type LOCAL multiport dports 30000:32767
    0     0 ACCEPT     0    --  *      *       0.0.0.0/0            0.0.0.0/0            /* rule to explicitly ACCEPT traffic that comply to network policies */ mark match 0x20000/0x20000

Chain KUBE-ROUTER-OUTPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 ACCEPT     0    --  *      *       0.0.0.0/0            0.0.0.0/0            /* rule to explicitly ACCEPT traffic that comply to network policies */ mark match 0x20000/0x20000

Chain KUBE-SERVICES (2 references)
 pkts bytes target     prot opt in     out     source               destination

Check pod status

$ kubectl get pods -n kube-system
NAME                                      READY   STATUS             RESTARTS      AGE
coredns-597584b69b-pwlmm                  0/1     Running            0             4m24s
helm-install-traefik-bskvm                1/1     Running            1 (73s ago)   4m23s
helm-install-traefik-crd-t7q8d            1/1     Running            1 (73s ago)   4m23s
local-path-provisioner-79f67d76f8-j4vcv   0/1     CrashLoopBackOff   4 (17s ago)   4m24s
metrics-server-5c8978b444-mhx2c           0/1     CrashLoopBackOff   4 (13s ago)   4m24s

Check failing pods logs

coredns pod ...
$ kubectl describe -n kube-system pod/coredns-597584b69b-pwlmm
(...redacted for brevity...)
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  14m                    default-scheduler  Successfully assigned kube-system/coredns-597584b69b-pwlmm to solid
  Normal   Pulled     14m                    kubelet            Container image "rancher/mirrored-coredns-coredns:1.9.4" already present on machine
  Normal   Created    14m                    kubelet            Created container coredns
  Normal   Started    14m                    kubelet            Started container coredns
  Warning  Unhealthy  4m19s (x310 over 14m)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 503

$ kubectl logs -n kube-system pod/coredns-597584b69b-pwlmm
[INFO] plugin/reload: Running configuration SHA512 = b941b080e5322f6519009bb49349462c7ddb6317425b0f6a83e5451175b720703949e3f3b454a24e77f3ffe57fd5e9c6130e528a5a1dd00d9000e4afd6c1108d
CoreDNS-1.9.4
linux/amd64, go1.19.1, 1f0a41a
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.43.0.1:443/version": dial tcp 10.43.0.1:443: i/o timeout
local-path-provisioner pod ...
$ kubectl describe -n kube-system pod/local-path-provisioner-79f67d76f8-j4vcv
(...redacted for brevity...)
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  26m                 default-scheduler  Successfully assigned kube-system/local-path-provisioner-79f67d76f8-j4vcv to solid
  Normal   Pulled     23m (x5 over 26m)   kubelet            Container image "rancher/local-path-provisioner:v0.0.23" already present on machine
  Normal   Created    23m (x5 over 26m)   kubelet            Created container local-path-provisioner
  Normal   Started    23m (x5 over 26m)   kubelet            Started container local-path-provisioner
  Warning  BackOff    82s (x96 over 25m)  kubelet            Back-off restarting failed container

$ kubectl logs -n kube-system pod/local-path-provisioner-79f67d76f8-j4vcv
time="2024-04-23T10:15:26Z" level=fatal msg="Error starting daemon: Cannot start Provisioner: failed to get Kubernetes server version: Get \"https://10.43.0.1:443/version?timeout=32s\": dial tcp 10.43.0.1:443: i/o timeout"

metrics-server pod ...
$ kubectl describe -n kube-system pod/metrics-server-5c8978b444-mhx2c
(...redacted for brevity...)
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  28m                   default-scheduler  Successfully assigned kube-system/metrics-server-5c8978b444-mhx2c to solid
  Warning  Unhealthy  28m                   kubelet            Readiness probe failed: Get "https://10.42.0.7:10250/readyz": read tcp 10.42.0.1:52682->10.42.0.7:10250: read: connection reset by peer
  Normal   Created    28m (x2 over 28m)     kubelet            Created container metrics-server
  Normal   Started    28m (x2 over 28m)     kubelet            Started container metrics-server
  Warning  Unhealthy  28m (x13 over 28m)    kubelet            Readiness probe failed: Get "https://10.42.0.7:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  28m (x5 over 28m)     kubelet            Readiness probe failed: Get "https://10.42.0.7:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  13m                   kubelet            Readiness probe failed: Get "https://10.42.0.7:10250/readyz": read tcp 10.42.0.1:54188->10.42.0.7:10250: read: connection reset by peer
  Normal   Pulled     8m39s (x9 over 28m)   kubelet            Container image "rancher/mirrored-metrics-server:v0.6.1" already present on machine
  Warning  BackOff    3m41s (x99 over 27m)  kubelet            Back-off restarting failed container

$ kubectl logs -n kube-system pod/metrics-server-5c8978b444-mhx2c
Error: unable to load configmap based request-header-client-ca-file: Get "https://10.43.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 10.43.0.1:443: i/o timeout
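
For completeness, these are the commands I would use to confirm that 10.43.0.1 is the ClusterIP of the default kubernetes Service and that its only endpoint is the local apiserver (with the K3s defaults that should be 192.168.0.9:6443 on this node), i.e. that the timeouts are not caused by a stale Service or Endpoints object:

$ kubectl get svc kubernetes -n default -o wide
$ kubectl get endpoints kubernetes -n default
$ kubectl get pods -n kube-system -o wide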

Stolz commented Apr 23, 2024

Adding server logs as well, since the original message was too long

K3s server logs ...
$ cat /var/log/k3s/k3s.log
time="2024-04-23T18:32:32+08:00" level=info msg="Starting k3s v1.25.4+k3s- ()"
time="2024-04-23T18:32:32+08:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2024-04-23T18:32:32+08:00" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2024-04-23T18:32:32+08:00" level=info msg="Database tables and indexes are up to date"
time="2024-04-23T18:32:32+08:00" level=info msg="Kine available at unix://kine.sock"
time="2024-04-23T18:32:32+08:00" level=info msg="Reconciling bootstrap data between datastore and disk"
time="2024-04-23T18:32:32+08:00" level=info msg="Tunnel server egress proxy mode: agent"
time="2024-04-23T18:32:32+08:00" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
time="2024-04-23T18:32:32+08:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --etcd-servers=unix://kine.sock --feature-gates=JobTrackingWithFinalizers=true --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
time="2024-04-23T18:32:32+08:00" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259"
time="2024-04-23T18:32:32+08:00" level=info msg="Waiting for API server to become available"
time="2024-04-23T18:32:32+08:00" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --use-service-account-credentials=true"
time="2024-04-23T18:32:32+08:00" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-config=/var/lib/rancher/k3s/server/etc/cloud-config.yaml --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --controllers=*,-route --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --leader-elect-resource-name=k3s-cloud-controller-manager --node-status-update-frequency=1m0s --profiling=false"
I0423 18:32:32.189884   15620 server.go:581] external host was not specified, using 192.168.0.9
time="2024-04-23T18:32:32+08:00" level=info msg="Server node token is available at /var/lib/rancher/k3s/server/token"
I0423 18:32:32.190082   15620 server.go:171] Version: v1.25.4+k3s-
I0423 18:32:32.190096   15620 server.go:173] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
time="2024-04-23T18:32:32+08:00" level=info msg="To join server node to cluster: k3s server -s https://192.168.0.9:6443 -t ${SERVER_NODE_TOKEN}"
time="2024-04-23T18:32:32+08:00" level=info msg="Agent node token is available at /var/lib/rancher/k3s/server/agent-token"
time="2024-04-23T18:32:32+08:00" level=info msg="To join agent node to cluster: k3s agent -s https://192.168.0.9:6443 -t ${AGENT_NODE_TOKEN}"
time="2024-04-23T18:32:32+08:00" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
time="2024-04-23T18:32:32+08:00" level=info msg="Run: k3s kubectl"
I0423 18:32:32.198981   15620 shared_informer.go:255] Waiting for caches to sync for node_authorizer
I0423 18:32:32.199462   15620 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0423 18:32:32.199471   15620 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0423 18:32:32.199974   15620 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0423 18:32:32.199981   15620 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
{"level":"warn","ts":"2024-04-23T18:32:32.201+0800","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001b7d6c0/kine.sock","attempt":0,"error":"rpc error: code = Unknown desc = no such table: dbstat"}
time="2024-04-23T18:32:32+08:00" level=info msg="certificate CN=solid signed by CN=k3s-server-ca@1713864012: notBefore=2024-04-23 09:20:12 +0000 UTC notAfter=2025-04-23 10:32:32 +0000 UTC"
time="2024-04-23T18:32:32+08:00" level=info msg="certificate CN=system:node:solid,O=system:nodes signed by CN=k3s-client-ca@1713864012: notBefore=2024-04-23 09:20:12 +0000 UTC notAfter=2025-04-23 10:32:32 +0000 UTC"
W0423 18:32:32.211272   15620 genericapiserver.go:656] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
I0423 18:32:32.211744   15620 instance.go:261] Using reconciler: lease
time="2024-04-23T18:32:32+08:00" level=info msg="Module overlay was already loaded"
time="2024-04-23T18:32:32+08:00" level=info msg="Module nf_conntrack was already loaded"
time="2024-04-23T18:32:32+08:00" level=info msg="Module br_netfilter was already loaded"
time="2024-04-23T18:32:32+08:00" level=info msg="Module iptable_nat was already loaded"
W0423 18:32:32.226966   15620 sysinfo.go:203] Nodes topology is not available, providing CPU topology
time="2024-04-23T18:32:32+08:00" level=warning msg="Flannel is using external addresses with an insecure backend: vxlan. Please consider using an encrypting flannel backend."
time="2024-04-23T18:32:32+08:00" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2024-04-23T18:32:32+08:00" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
W0423 18:32:32.231203   15620 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {/run/k3s/containerd/containerd.sock /run/k3s/containerd/containerd.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: connection refused". Reconnecting...
I0423 18:32:32.269777   15620 instance.go:574] API group "internal.apiserver.k8s.io" is not enabled, skipping.
W0423 18:32:32.344638   15620 genericapiserver.go:656] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W0423 18:32:32.345367   15620 genericapiserver.go:656] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
W0423 18:32:32.347019   15620 genericapiserver.go:656] Skipping API autoscaling/v2beta1 because it has no resources.
W0423 18:32:32.349787   15620 genericapiserver.go:656] Skipping API batch/v1beta1 because it has no resources.
W0423 18:32:32.350662   15620 genericapiserver.go:656] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
W0423 18:32:32.351415   15620 genericapiserver.go:656] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W0423 18:32:32.351436   15620 genericapiserver.go:656] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
W0423 18:32:32.353425   15620 genericapiserver.go:656] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W0423 18:32:32.353433   15620 genericapiserver.go:656] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
W0423 18:32:32.354137   15620 genericapiserver.go:656] Skipping API node.k8s.io/v1beta1 because it has no resources.
W0423 18:32:32.354143   15620 genericapiserver.go:656] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0423 18:32:32.354162   15620 genericapiserver.go:656] Skipping API policy/v1beta1 because it has no resources.
W0423 18:32:32.356563   15620 genericapiserver.go:656] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W0423 18:32:32.356572   15620 genericapiserver.go:656] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0423 18:32:32.357308   15620 genericapiserver.go:656] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W0423 18:32:32.357314   15620 genericapiserver.go:656] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0423 18:32:32.359539   15620 genericapiserver.go:656] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0423 18:32:32.361592   15620 genericapiserver.go:656] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W0423 18:32:32.363665   15620 genericapiserver.go:656] Skipping API apps/v1beta2 because it has no resources.
W0423 18:32:32.363672   15620 genericapiserver.go:656] Skipping API apps/v1beta1 because it has no resources.
W0423 18:32:32.364576   15620 genericapiserver.go:656] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
W0423 18:32:32.365347   15620 genericapiserver.go:656] Skipping API events.k8s.io/v1beta1 because it has no resources.
I0423 18:32:32.365849   15620 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0423 18:32:32.365856   15620 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0423 18:32:32.377071   15620 genericapiserver.go:656] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
I0423 18:32:33.041143   15620 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
I0423 18:32:33.041261   15620 secure_serving.go:210] Serving securely on 127.0.0.1:6444
I0423 18:32:33.041325   15620 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
I0423 18:32:33.041432   15620 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0423 18:32:33.041439   15620 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
I0423 18:32:33.041463   15620 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key"
I0423 18:32:33.041495   15620 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0423 18:32:33.041529   15620 available_controller.go:491] Starting AvailableConditionController
I0423 18:32:33.041541   15620 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0423 18:32:33.041546   15620 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
I0423 18:32:33.041569   15620 controller.go:80] Starting OpenAPI V3 AggregationController
I0423 18:32:33.041592   15620 controller.go:83] Starting OpenAPI AggregationController
I0423 18:32:33.041616   15620 apf_controller.go:300] Starting API Priority and Fairness config controller
I0423 18:32:33.041641   15620 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
I0423 18:32:33.041694   15620 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
I0423 18:32:33.041696   15620 autoregister_controller.go:141] Starting autoregister controller
I0423 18:32:33.041713   15620 cache.go:32] Waiting for caches to sync for autoregister controller
I0423 18:32:33.041740   15620 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0423 18:32:33.041741   15620 customresource_discovery_controller.go:209] Starting DiscoveryController
I0423 18:32:33.041747   15620 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0423 18:32:33.041767   15620 crdregistration_controller.go:111] Starting crd-autoregister controller
I0423 18:32:33.041773   15620 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
I0423 18:32:33.041775   15620 controller.go:85] Starting OpenAPI controller
I0423 18:32:33.041798   15620 controller.go:85] Starting OpenAPI V3 controller
I0423 18:32:33.041812   15620 naming_controller.go:291] Starting NamingConditionController
I0423 18:32:33.041824   15620 establishing_controller.go:76] Starting EstablishingController
I0423 18:32:33.041836   15620 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0423 18:32:33.041859   15620 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0423 18:32:33.041879   15620 crd_finalizer.go:266] Starting CRDFinalizer
I0423 18:32:33.099974   15620 shared_informer.go:262] Caches are synced for node_authorizer
I0423 18:32:33.141596   15620 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0423 18:32:33.141713   15620 cache.go:39] Caches are synced for AvailableConditionController controller
I0423 18:32:33.141725   15620 apf_controller.go:305] Running API Priority and Fairness config worker
I0423 18:32:33.142126   15620 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0423 18:32:33.142145   15620 cache.go:39] Caches are synced for autoregister controller
I0423 18:32:33.142134   15620 shared_informer.go:262] Caches are synced for crd-autoregister
time="2024-04-23T18:32:33+08:00" level=info msg="Containerd is now running"
time="2024-04-23T18:32:33+08:00" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2024-04-23T18:32:33+08:00" level=info msg="Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=solid --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2024-04-23T18:32:33+08:00" level=info msg="Handling backend connection request [solid]"
time="2024-04-23T18:32:33+08:00" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
I0423 18:32:33.914054   15620 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0423 18:32:34.045605   15620 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W0423 18:32:34.155220   15620 handler_proxy.go:105] no RequestInfo found in the context
E0423 18:32:34.155247   15620 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0423 18:32:34.155259   15620 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0423 18:32:34.155287   15620 handler_proxy.go:105] no RequestInfo found in the context
E0423 18:32:34.155303   15620 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0423 18:32:34.156283   15620 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
Flag --cloud-provider has been deprecated, will be removed in 1.25 or later, in favor of removing cloud provider code from Kubelet.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
I0423 18:32:34.247025   15620 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
time="2024-04-23T18:32:34+08:00" level=info msg="Annotations and labels have already set on node: solid"
I0423 18:32:34.247757   15620 server.go:408] "Kubelet version" kubeletVersion="v1.25.4+k3s-"
I0423 18:32:34.247765   15620 server.go:410] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
time="2024-04-23T18:32:34+08:00" level=info msg="Starting flannel with backend vxlan"
I0423 18:32:34.248447   15620 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt"
W0423 18:32:34.249538   15620 sysinfo.go:203] Nodes topology is not available, providing CPU topology
time="2024-04-23T18:32:34+08:00" level=info msg="Stopped tunnel to 127.0.0.1:6443"
time="2024-04-23T18:32:34+08:00" level=info msg="Connecting to proxy" url="wss://192.168.0.9:6443/v1-k3s/connect"
time="2024-04-23T18:32:34+08:00" level=info msg="Proxy done" err="context canceled" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2024-04-23T18:32:34+08:00" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
time="2024-04-23T18:32:34+08:00" level=info msg="Flannel found PodCIDR assigned for node solid"
time="2024-04-23T18:32:34+08:00" level=info msg="The interface enp3s0 with ipv4 address 192.168.0.9 will be used by flannel"
time="2024-04-23T18:32:34+08:00" level=info msg="Tunnel authorizer set Kubelet Port 10250"
I0423 18:32:34.250584   15620 kube.go:126] Waiting 10m0s for node controller to sync
I0423 18:32:34.250605   15620 kube.go:420] Starting kube subnet manager
time="2024-04-23T18:32:34+08:00" level=info msg="Handling backend connection request [solid]"
I0423 18:32:34.252015   15620 server.go:655] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
I0423 18:32:34.252180   15620 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I0423 18:32:34.252225   15620 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
I0423 18:32:34.252255   15620 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
I0423 18:32:34.252261   15620 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true
I0423 18:32:34.252294   15620 state_mem.go:36] "Initialized new in-memory state store"
I0423 18:32:34.254032   15620 kubelet.go:381] "Attempting to sync node with API server"
I0423 18:32:34.254043   15620 kubelet.go:270] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
I0423 18:32:34.254065   15620 kubelet.go:281] "Adding apiserver pod source"
I0423 18:32:34.254074   15620 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
I0423 18:32:34.254649   15620 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="v1.6.8-k3s1" apiVersion="v1"
I0423 18:32:34.254886   15620 server.go:1170] "Started kubelet"
E0423 18:32:34.255342   15620 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
E0423 18:32:34.255362   15620 kubelet.go:1317] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
I0423 18:32:34.255396   15620 server.go:155] "Starting to listen" address="0.0.0.0" port=10250
I0423 18:32:34.255441   15620 scope.go:115] "RemoveContainer" containerID="b0ecda05d4750ccb2d24807e58d68c7a51f76560366bef4c563c003c511c3815"
I0423 18:32:34.255765   15620 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
I0423 18:32:34.255787   15620 volume_manager.go:293] "Starting Kubelet Volume Manager"
I0423 18:32:34.255853   15620 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
I0423 18:32:34.256210   15620 server.go:438] "Adding debug handlers to kubelet server"
W0423 18:32:34.256238   15620 iptables.go:221] Error checking iptables version, assuming version at least 1.4.11: executable file not found in $PATH
I0423 18:32:34.277112   15620 scope.go:115] "RemoveContainer" containerID="1f90ef7b958f4cbe209bb93a6a22b04d9bce400b8973ac39a57c2a7407cc76d3"
I0423 18:32:34.295100   15620 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
I0423 18:32:34.302634   15620 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
E0423 18:32:34.302687   15620 kubelet_network_linux.go:83] "Failed to ensure that iptables hint chain exists" err="error creating chain \"KUBE-IPTABLES-HINT\": executable file not found in $PATH: "
I0423 18:32:34.302695   15620 kubelet_network_linux.go:71] "Failed to initialize iptables rules; some functionality may be missing." protocol=IPv6
I0423 18:32:34.302707   15620 status_manager.go:161] "Starting to sync pod status with apiserver"
I0423 18:32:34.302722   15620 kubelet.go:2010] "Starting kubelet main sync loop"
E0423 18:32:34.302750   15620 kubelet.go:2034] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
I0423 18:32:34.355957   15620 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
I0423 18:32:34.363355   15620 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
I0423 18:32:34.363365   15620 kubelet_node_status.go:70] "Attempting to register node" node="solid"
I0423 18:32:34.367431   15620 kubelet_node_status.go:108] "Node was previously registered" node="solid"
I0423 18:32:34.367470   15620 kubelet_node_status.go:73] "Successfully registered node" node="solid"
I0423 18:32:34.368510   15620 setters.go:545] "Node became not ready" node="solid" condition={Type:Ready Status:False LastHeartbeatTime:2024-04-23 18:32:34.368481389 +0800 HKT m=+2.237562317 LastTransitionTime:2024-04-23 18:32:34.368481389 +0800 HKT m=+2.237562317 Reason:KubeletNotReady Message:container runtime status check may not have completed yet}
I0423 18:32:34.386858   15620 cpu_manager.go:213] "Starting CPU manager" policy="none"
I0423 18:32:34.386868   15620 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s"
I0423 18:32:34.386880   15620 state_mem.go:36] "Initialized new in-memory state store"
I0423 18:32:34.386964   15620 state_mem.go:88] "Updated default CPUSet" cpuSet=""
I0423 18:32:34.386975   15620 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
I0423 18:32:34.386981   15620 policy_none.go:49] "None policy: Start"
I0423 18:32:34.387341   15620 memory_manager.go:168] "Starting memorymanager" policy="None"
I0423 18:32:34.387351   15620 state_mem.go:35] "Initializing new in-memory state store"
I0423 18:32:34.387413   15620 state_mem.go:75] "Updated machine memory state"
I0423 18:32:34.387857   15620 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
I0423 18:32:34.387979   15620 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
I0423 18:32:34.403374   15620 scope.go:115] "RemoveContainer" containerID="1f90ef7b958f4cbe209bb93a6a22b04d9bce400b8973ac39a57c2a7407cc76d3"
E0423 18:32:34.404019   15620 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f90ef7b958f4cbe209bb93a6a22b04d9bce400b8973ac39a57c2a7407cc76d3\": not found" containerID="1f90ef7b958f4cbe209bb93a6a22b04d9bce400b8973ac39a57c2a7407cc76d3"
I0423 18:32:34.404062   15620 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:1f90ef7b958f4cbe209bb93a6a22b04d9bce400b8973ac39a57c2a7407cc76d3} err="failed to get container status \"1f90ef7b958f4cbe209bb93a6a22b04d9bce400b8973ac39a57c2a7407cc76d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f90ef7b958f4cbe209bb93a6a22b04d9bce400b8973ac39a57c2a7407cc76d3\": not found"
I0423 18:32:34.404080   15620 scope.go:115] "RemoveContainer" containerID="1f90ef7b958f4cbe209bb93a6a22b04d9bce400b8973ac39a57c2a7407cc76d3"
I0423 18:32:34.404540   15620 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:1f90ef7b958f4cbe209bb93a6a22b04d9bce400b8973ac39a57c2a7407cc76d3} err="failed to get container status \"1f90ef7b958f4cbe209bb93a6a22b04d9bce400b8973ac39a57c2a7407cc76d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f90ef7b958f4cbe209bb93a6a22b04d9bce400b8973ac39a57c2a7407cc76d3\": not found"
time="2024-04-23T18:32:34+08:00" level=info msg="Starting the netpol controller version v1.5.2-0.20221026101626-e01045262706, built on 2024-04-22T13:36:07Z, go1.22.2"
I0423 18:32:34.475672   15620 network_policy_controller.go:163] Starting network policy controller
I0423 18:32:34.492531   15620 network_policy_controller.go:175] Starting network policy controller full sync goroutine
E0423 18:32:34.510925   15620 network_policy_controller.go:284] Aborting sync. Failed to run iptables-restore: failed to call iptables-restore: exit status 1 (Warning: Extension physdev revision 0 not supported, missing kernel module?
Warning: Extension NFLOG revision 0 not supported, missing kernel module?
Warning: Extension limit revision 0 not supported, missing kernel module?
Warning: Extension REJECT revision 0 not supported, missing kernel module?
iptables-restore: line 110 failed
)
*filter
:INPUT ACCEPT [2244:919380] - [0:0]
:FORWARD ACCEPT [2:120] - [0:0]
:OUTPUT ACCEPT [2255:960816] - [0:0]
:KUBE-FIREWALL - [0:0] - [0:0]
:KUBE-KUBELET-CANARY - [0:0] - [0:0]
:KUBE-NWPLCY-DEFAULT - [0:0] - [0:0]
:KUBE-ROUTER-FORWARD - [0:0] - [0:0]
:KUBE-ROUTER-INPUT - [0:0] - [0:0]
:KUBE-ROUTER-OUTPUT - [0:0] - [0:0]
:KUBE-POD-FW-TYHU6IIERJPDEGRV - [0:0]
:KUBE-POD-FW-FQUNU3C5ZHX4AEG5 - [0:0]
:KUBE-POD-FW-WTFN6XE7KXIJSX7I - [0:0]
:KUBE-POD-FW-M7LSKS7EAJSJVMFW - [0:0]
:KUBE-POD-FW-KS7IBLLYYIFYJXB6 - [0:0]
-A INPUT -m comment --comment "kube-router netpol - 4IA2OSFRMVNDXBVV" -j KUBE-ROUTER-INPUT
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kube-router netpol - TEMCG2JMHZYE7H7T" -j KUBE-ROUTER-FORWARD
-A OUTPUT -m comment --comment "kube-router netpol - VEAAIY32XVBHCSCY" -j KUBE-ROUTER-OUTPUT
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-NWPLCY-DEFAULT -m comment --comment "rule to mark traffic matching a network policy" -j MARK --set-xmark 0x10000/0x10000
-A KUBE-ROUTER-INPUT -d 10.43.0.0/16 -m comment --comment "allow traffic to primary cluster IP range - TZZOAXOCHPHEHX7M" -j RETURN
-A KUBE-ROUTER-INPUT -p tcp -m comment --comment "allow LOCAL TCP traffic to node ports - LR7XO7NXDBGQJD2M" -m addrtype --dst-type LOCAL -m multiport --dports 30000:32767 -j RETURN
-A KUBE-ROUTER-INPUT -p udp -m comment --comment "allow LOCAL UDP traffic to node ports - 76UCBPIZNGJNWNUZ" -m addrtype --dst-type LOCAL -m multiport --dports 30000:32767 -j RETURN
-I KUBE-POD-FW-TYHU6IIERJPDEGRV 1 -d 10.42.0.9 -m comment --comment "run through default ingress network policy  chain" -j KUBE-NWPLCY-DEFAULT
-I KUBE-POD-FW-TYHU6IIERJPDEGRV 1 -s 10.42.0.9 -m comment --comment "run through default egress network policy  chain" -j KUBE-NWPLCY-DEFAULT
-I KUBE-POD-FW-TYHU6IIERJPDEGRV 1 -m comment --comment "rule to permit the traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.9 -j ACCEPT
-I KUBE-POD-FW-TYHU6IIERJPDEGRV 1 -m comment --comment "rule to drop invalid state for pod" -m conntrack --ctstate INVALID -j DROP
-I KUBE-POD-FW-TYHU6IIERJPDEGRV 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-ROUTER-FORWARD -m comment --comment "rule to jump traffic destined to POD name:coredns-597584b69b-pwlmm namespace: kube-system to chain KUBE-POD-FW-TYHU6IIERJPDEGRV" -d 10.42.0.9 -j KUBE-POD-FW-TYHU6IIERJPDEGRV
-A KUBE-ROUTER-OUTPUT -m comment --comment "rule to jump traffic destined to POD name:coredns-597584b69b-pwlmm namespace: kube-system to chain KUBE-POD-FW-TYHU6IIERJPDEGRV" -d 10.42.0.9 -j KUBE-POD-FW-TYHU6IIERJPDEGRV
-A KUBE-ROUTER-FORWARD -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:coredns-597584b69b-pwlmm namespace: kube-system to chain KUBE-POD-FW-TYHU6IIERJPDEGRV" -d 10.42.0.9 -j KUBE-POD-FW-TYHU6IIERJPDEGRV
-A KUBE-ROUTER-INPUT -m comment --comment "rule to jump traffic from POD name:coredns-597584b69b-pwlmm namespace: kube-system to chain KUBE-POD-FW-TYHU6IIERJPDEGRV" -s 10.42.0.9 -j KUBE-POD-FW-TYHU6IIERJPDEGRV
-A KUBE-ROUTER-FORWARD -m comment --comment "rule to jump traffic from POD name:coredns-597584b69b-pwlmm namespace: kube-system to chain KUBE-POD-FW-TYHU6IIERJPDEGRV" -s 10.42.0.9 -j KUBE-POD-FW-TYHU6IIERJPDEGRV
-A KUBE-ROUTER-OUTPUT -m comment --comment "rule to jump traffic from POD name:coredns-597584b69b-pwlmm namespace: kube-system to chain KUBE-POD-FW-TYHU6IIERJPDEGRV" -s 10.42.0.9 -j KUBE-POD-FW-TYHU6IIERJPDEGRV
-A KUBE-ROUTER-FORWARD -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:coredns-597584b69b-pwlmm namespace: kube-system to chain KUBE-POD-FW-TYHU6IIERJPDEGRV" -s 10.42.0.9 -j KUBE-POD-FW-TYHU6IIERJPDEGRV
-A KUBE-POD-FW-TYHU6IIERJPDEGRV -m comment --comment "rule to log dropped traffic POD name:coredns-597584b69b-pwlmm namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10
-A KUBE-POD-FW-TYHU6IIERJPDEGRV -m comment --comment "rule to REJECT traffic destined for POD name:coredns-597584b69b-pwlmm namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j REJECT
-A KUBE-POD-FW-TYHU6IIERJPDEGRV -j MARK --set-mark 0/0x10000
-A KUBE-POD-FW-TYHU6IIERJPDEGRV -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000
-I KUBE-POD-FW-FQUNU3C5ZHX4AEG5 1 -d 10.42.0.10 -m comment --comment "run through default ingress network policy  chain" -j KUBE-NWPLCY-DEFAULT
-I KUBE-POD-FW-FQUNU3C5ZHX4AEG5 1 -s 10.42.0.10 -m comment --comment "run through default egress network policy  chain" -j KUBE-NWPLCY-DEFAULT
-I KUBE-POD-FW-FQUNU3C5ZHX4AEG5 1 -m comment --comment "rule to permit the traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.10 -j ACCEPT
-I KUBE-POD-FW-FQUNU3C5ZHX4AEG5 1 -m comment --comment "rule to drop invalid state for pod" -m conntrack --ctstate INVALID -j DROP
-I KUBE-POD-FW-FQUNU3C5ZHX4AEG5 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-ROUTER-FORWARD -m comment --comment "rule to jump traffic destined to POD name:helm-install-traefik-bskvm namespace: kube-system to chain KUBE-POD-FW-FQUNU3C5ZHX4AEG5" -d 10.42.0.10 -j KUBE-POD-FW-FQUNU3C5ZHX4AEG5
-A KUBE-ROUTER-OUTPUT -m comment --comment "rule to jump traffic destined to POD name:helm-install-traefik-bskvm namespace: kube-system to chain KUBE-POD-FW-FQUNU3C5ZHX4AEG5" -d 10.42.0.10 -j KUBE-POD-FW-FQUNU3C5ZHX4AEG5
-A KUBE-ROUTER-FORWARD -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:helm-install-traefik-bskvm namespace: kube-system to chain KUBE-POD-FW-FQUNU3C5ZHX4AEG5" -d 10.42.0.10 -j KUBE-POD-FW-FQUNU3C5ZHX4AEG5
-A KUBE-ROUTER-INPUT -m comment --comment "rule to jump traffic from POD name:helm-install-traefik-bskvm namespace: kube-system to chain KUBE-POD-FW-FQUNU3C5ZHX4AEG5" -s 10.42.0.10 -j KUBE-POD-FW-FQUNU3C5ZHX4AEG5
-A KUBE-ROUTER-FORWARD -m comment --comment "rule to jump traffic from POD name:helm-install-traefik-bskvm namespace: kube-system to chain KUBE-POD-FW-FQUNU3C5ZHX4AEG5" -s 10.42.0.10 -j KUBE-POD-FW-FQUNU3C5ZHX4AEG5
-A KUBE-ROUTER-OUTPUT -m comment --comment "rule to jump traffic from POD name:helm-install-traefik-bskvm namespace: kube-system to chain KUBE-POD-FW-FQUNU3C5ZHX4AEG5" -s 10.42.0.10 -j KUBE-POD-FW-FQUNU3C5ZHX4AEG5
-A KUBE-ROUTER-FORWARD -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:helm-install-traefik-bskvm namespace: kube-system to chain KUBE-POD-FW-FQUNU3C5ZHX4AEG5" -s 10.42.0.10 -j KUBE-POD-FW-FQUNU3C5ZHX4AEG5
-A KUBE-POD-FW-FQUNU3C5ZHX4AEG5 -m comment --comment "rule to log dropped traffic POD name:helm-install-traefik-bskvm namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10
-A KUBE-POD-FW-FQUNU3C5ZHX4AEG5 -m comment --comment "rule to REJECT traffic destined for POD name:helm-install-traefik-bskvm namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j REJECT
-A KUBE-POD-FW-FQUNU3C5ZHX4AEG5 -j MARK --set-mark 0/0x10000
-A KUBE-POD-FW-FQUNU3C5ZHX4AEG5 -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000
-I KUBE-POD-FW-WTFN6XE7KXIJSX7I 1 -d 10.42.0.11 -m comment --comment "run through default ingress network policy  chain" -j KUBE-NWPLCY-DEFAULT
-I KUBE-POD-FW-WTFN6XE7KXIJSX7I 1 -s 10.42.0.11 -m comment --comment "run through default egress network policy  chain" -j KUBE-NWPLCY-DEFAULT
-I KUBE-POD-FW-WTFN6XE7KXIJSX7I 1 -m comment --comment "rule to permit the traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.11 -j ACCEPT
-I KUBE-POD-FW-WTFN6XE7KXIJSX7I 1 -m comment --comment "rule to drop invalid state for pod" -m conntrack --ctstate INVALID -j DROP
-I KUBE-POD-FW-WTFN6XE7KXIJSX7I 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-ROUTER-FORWARD -m comment --comment "rule to jump traffic destined to POD name:helm-install-traefik-crd-t7q8d namespace: kube-system to chain KUBE-POD-FW-WTFN6XE7KXIJSX7I" -d 10.42.0.11 -j KUBE-POD-FW-WTFN6XE7KXIJSX7I
-A KUBE-ROUTER-OUTPUT -m comment --comment "rule to jump traffic destined to POD name:helm-install-traefik-crd-t7q8d namespace: kube-system to chain KUBE-POD-FW-WTFN6XE7KXIJSX7I" -d 10.42.0.11 -j KUBE-POD-FW-WTFN6XE7KXIJSX7I
-A KUBE-ROUTER-FORWARD -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:helm-install-traefik-crd-t7q8d namespace: kube-system to chain KUBE-POD-FW-WTFN6XE7KXIJSX7I" -d 10.42.0.11 -j KUBE-POD-FW-WTFN6XE7KXIJSX7I
-A KUBE-ROUTER-INPUT -m comment --comment "rule to jump traffic from POD name:helm-install-traefik-crd-t7q8d namespace: kube-system to chain KUBE-POD-FW-WTFN6XE7KXIJSX7I" -s 10.42.0.11 -j KUBE-POD-FW-WTFN6XE7KXIJSX7I
-A KUBE-ROUTER-FORWARD -m comment --comment "rule to jump traffic from POD name:helm-install-traefik-crd-t7q8d namespace: kube-system to chain KUBE-POD-FW-WTFN6XE7KXIJSX7I" -s 10.42.0.11 -j KUBE-POD-FW-WTFN6XE7KXIJSX7I
-A KUBE-ROUTER-OUTPUT -m comment --comment "rule to jump traffic from POD name:helm-install-traefik-crd-t7q8d namespace: kube-system to chain KUBE-POD-FW-WTFN6XE7KXIJSX7I" -s 10.42.0.11 -j KUBE-POD-FW-WTFN6XE7KXIJSX7I
-A KUBE-ROUTER-FORWARD -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:helm-install-traefik-crd-t7q8d namespace: kube-system to chain KUBE-POD-FW-WTFN6XE7KXIJSX7I" -s 10.42.0.11 -j KUBE-POD-FW-WTFN6XE7KXIJSX7I
-A KUBE-POD-FW-WTFN6XE7KXIJSX7I -m comment --comment "rule to log dropped traffic POD name:helm-install-traefik-crd-t7q8d namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10
-A KUBE-POD-FW-WTFN6XE7KXIJSX7I -m comment --comment "rule to REJECT traffic destined for POD name:helm-install-traefik-crd-t7q8d namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j REJECT
-A KUBE-POD-FW-WTFN6XE7KXIJSX7I -j MARK --set-mark 0/0x10000
-A KUBE-POD-FW-WTFN6XE7KXIJSX7I -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000
-I KUBE-POD-FW-M7LSKS7EAJSJVMFW 1 -d 10.42.0.7 -m comment --comment "run through default ingress network policy  chain" -j KUBE-NWPLCY-DEFAULT
-I KUBE-POD-FW-M7LSKS7EAJSJVMFW 1 -s 10.42.0.7 -m comment --comment "run through default egress network policy  chain" -j KUBE-NWPLCY-DEFAULT
-I KUBE-POD-FW-M7LSKS7EAJSJVMFW 1 -m comment --comment "rule to permit the traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.7 -j ACCEPT
-I KUBE-POD-FW-M7LSKS7EAJSJVMFW 1 -m comment --comment "rule to drop invalid state for pod" -m conntrack --ctstate INVALID -j DROP
-I KUBE-POD-FW-M7LSKS7EAJSJVMFW 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-ROUTER-FORWARD -m comment --comment "rule to jump traffic destined to POD name:metrics-server-5c8978b444-mhx2c namespace: kube-system to chain KUBE-POD-FW-M7LSKS7EAJSJVMFW" -d 10.42.0.7 -j KUBE-POD-FW-M7LSKS7EAJSJVMFW
-A KUBE-ROUTER-OUTPUT -m comment --comment "rule to jump traffic destined to POD name:metrics-server-5c8978b444-mhx2c namespace: kube-system to chain KUBE-POD-FW-M7LSKS7EAJSJVMFW" -d 10.42.0.7 -j KUBE-POD-FW-M7LSKS7EAJSJVMFW
-A KUBE-ROUTER-FORWARD -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:metrics-server-5c8978b444-mhx2c namespace: kube-system to chain KUBE-POD-FW-M7LSKS7EAJSJVMFW" -d 10.42.0.7 -j KUBE-POD-FW-M7LSKS7EAJSJVMFW
-A KUBE-ROUTER-INPUT -m comment --comment "rule to jump traffic from POD name:metrics-server-5c8978b444-mhx2c namespace: kube-system to chain KUBE-POD-FW-M7LSKS7EAJSJVMFW" -s 10.42.0.7 -j KUBE-POD-FW-M7LSKS7EAJSJVMFW
-A KUBE-ROUTER-FORWARD -m comment --comment "rule to jump traffic from POD name:metrics-server-5c8978b444-mhx2c namespace: kube-system to chain KUBE-POD-FW-M7LSKS7EAJSJVMFW" -s 10.42.0.7 -j KUBE-POD-FW-M7LSKS7EAJSJVMFW
-A KUBE-ROUTER-OUTPUT -m comment --comment "rule to jump traffic from POD name:metrics-server-5c8978b444-mhx2c namespace: kube-system to chain KUBE-POD-FW-M7LSKS7EAJSJVMFW" -s 10.42.0.7 -j KUBE-POD-FW-M7LSKS7EAJSJVMFW
-A KUBE-ROUTER-FORWARD -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:metrics-server-5c8978b444-mhx2c namespace: kube-system to chain KUBE-POD-FW-M7LSKS7EAJSJVMFW" -s 10.42.0.7 -j KUBE-POD-FW-M7LSKS7EAJSJVMFW
-A KUBE-POD-FW-M7LSKS7EAJSJVMFW -m comment --comment "rule to log dropped traffic POD name:metrics-server-5c8978b444-mhx2c namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10
-A KUBE-POD-FW-M7LSKS7EAJSJVMFW -m comment --comment "rule to REJECT traffic destined for POD name:metrics-server-5c8978b444-mhx2c namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j REJECT
-A KUBE-POD-FW-M7LSKS7EAJSJVMFW -j MARK --set-mark 0/0x10000
-A KUBE-POD-FW-M7LSKS7EAJSJVMFW -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000
-I KUBE-POD-FW-KS7IBLLYYIFYJXB6 1 -d 10.42.0.8 -m comment --comment "run through default ingress network policy  chain" -j KUBE-NWPLCY-DEFAULT
-I KUBE-POD-FW-KS7IBLLYYIFYJXB6 1 -s 10.42.0.8 -m comment --comment "run through default egress network policy  chain" -j KUBE-NWPLCY-DEFAULT
-I KUBE-POD-FW-KS7IBLLYYIFYJXB6 1 -m comment --comment "rule to permit the traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.8 -j ACCEPT
-I KUBE-POD-FW-KS7IBLLYYIFYJXB6 1 -m comment --comment "rule to drop invalid state for pod" -m conntrack --ctstate INVALID -j DROP
-I KUBE-POD-FW-KS7IBLLYYIFYJXB6 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-ROUTER-FORWARD -m comment --comment "rule to jump traffic destined to POD name:local-path-provisioner-79f67d76f8-j4vcv namespace: kube-system to chain KUBE-POD-FW-KS7IBLLYYIFYJXB6" -d 10.42.0.8 -j KUBE-POD-FW-KS7IBLLYYIFYJXB6
-A KUBE-ROUTER-OUTPUT -m comment --comment "rule to jump traffic destined to POD name:local-path-provisioner-79f67d76f8-j4vcv namespace: kube-system to chain KUBE-POD-FW-KS7IBLLYYIFYJXB6" -d 10.42.0.8 -j KUBE-POD-FW-KS7IBLLYYIFYJXB6
-A KUBE-ROUTER-FORWARD -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:local-path-provisioner-79f67d76f8-j4vcv namespace: kube-system to chain KUBE-POD-FW-KS7IBLLYYIFYJXB6" -d 10.42.0.8 -j KUBE-POD-FW-KS7IBLLYYIFYJXB6
-A KUBE-ROUTER-OUTPUT -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-79f67d76f8-j4vcv namespace: kube-system to chain KUBE-POD-FW-KS7IBLLYYIFYJXB6" -s 10.42.0.8 -j KUBE-POD-FW-KS7IBLLYYIFYJXB6
-A KUBE-ROUTER-INPUT -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-79f67d76f8-j4vcv namespace: kube-system to chain KUBE-POD-FW-KS7IBLLYYIFYJXB6" -s 10.42.0.8 -j KUBE-POD-FW-KS7IBLLYYIFYJXB6
-A KUBE-ROUTER-FORWARD -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-79f67d76f8-j4vcv namespace: kube-system to chain KUBE-POD-FW-KS7IBLLYYIFYJXB6" -s 10.42.0.8 -j KUBE-POD-FW-KS7IBLLYYIFYJXB6
-A KUBE-ROUTER-FORWARD -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-79f67d76f8-j4vcv namespace: kube-system to chain KUBE-POD-FW-KS7IBLLYYIFYJXB6" -s 10.42.0.8 -j KUBE-POD-FW-KS7IBLLYYIFYJXB6
-A KUBE-POD-FW-KS7IBLLYYIFYJXB6 -m comment --comment "rule to log dropped traffic POD name:local-path-provisioner-79f67d76f8-j4vcv namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10
-A KUBE-POD-FW-KS7IBLLYYIFYJXB6 -m comment --comment "rule to REJECT traffic destined for POD name:local-path-provisioner-79f67d76f8-j4vcv namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j REJECT
-A KUBE-POD-FW-KS7IBLLYYIFYJXB6 -j MARK --set-mark 0/0x10000
-A KUBE-POD-FW-KS7IBLLYYIFYJXB6 -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000
-A KUBE-ROUTER-FORWARD -m comment --comment "rule to explicitly ACCEPT traffic that comply to network policies" -m mark --mark 0x20000/0x20000 -j ACCEPT
-A KUBE-ROUTER-OUTPUT -m comment --comment "rule to explicitly ACCEPT traffic that comply to network policies" -m mark --mark 0x20000/0x20000 -j ACCEPT
-A KUBE-ROUTER-INPUT -m comment --comment "rule to explicitly ACCEPT traffic that comply to network policies" -m mark --mark 0x20000/0x20000 -j ACCEPT
COMMIT
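# Note: the network-policy chains dumped above rely on the REJECT and NFLOG targets plus the
# physdev, addrtype, conntrack, mark and limit matches; if any one of these extensions is missing
# from the kernel, rules that use it cannot be loaded. A rough, non-authoritative way to see which
# of them the running kernel was built with (assumes /proc/config.gz is available, as already used
# by check-config.sh above):
#
#   zgrep -iE 'REJECT|NFLOG|PHYSDEV|ADDRTYPE|CONNTRACK' /proc/config.gz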
time="2024-04-23T18:32:35+08:00" level=info msg="Kube API server is now running"
time="2024-04-23T18:32:35+08:00" level=info msg="ETCD server is now running"
time="2024-04-23T18:32:35+08:00" level=info msg="Waiting for cloud-controller-manager privileges to become available"
time="2024-04-23T18:32:35+08:00" level=info msg="k3s is up and running"
time="2024-04-23T18:32:35+08:00" level=info msg="Applying CRD addons.k3s.cattle.io"
time="2024-04-23T18:32:35+08:00" level=info msg="Applying CRD helmcharts.helm.cattle.io"
time="2024-04-23T18:32:35+08:00" level=info msg="Applying CRD helmchartconfigs.helm.cattle.io"
time="2024-04-23T18:32:35+08:00" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-19.0.400.tgz"
time="2024-04-23T18:32:35+08:00" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-crd-19.0.400.tgz"
time="2024-04-23T18:32:35+08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
time="2024-04-23T18:32:35+08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
time="2024-04-23T18:32:35+08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
time="2024-04-23T18:32:35+08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
time="2024-04-23T18:32:35+08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
time="2024-04-23T18:32:35+08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
time="2024-04-23T18:32:35+08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
time="2024-04-23T18:32:35+08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
time="2024-04-23T18:32:35+08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
time="2024-04-23T18:32:35+08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"
time="2024-04-23T18:32:35+08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"
time="2024-04-23T18:32:35+08:00" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
E0423 18:32:35.199055   15620 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0423 18:32:35.200611   15620 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0423 18:32:35.200928   15620 serving.go:355] Generated self-signed cert in-memory
time="2024-04-23T18:32:35+08:00" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
time="2024-04-23T18:32:35+08:00" level=info msg="Creating deploy event broadcaster"
I0423 18:32:35.202139   15620 event.go:294] "Event occurred" object="kube-system/ccm" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""
E0423 18:32:35.204065   15620 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0423 18:32:35.206349   15620 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0423 18:32:35.209396   15620 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0423 18:32:35.212815   15620 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2024-04-23T18:32:35+08:00" level=info msg="Starting /v1, Kind=Secret controller"
time="2024-04-23T18:32:35+08:00" level=info msg="Creating helm-controller event broadcaster"
time="2024-04-23T18:32:35+08:00" level=info msg="Updating TLS secret for kube-system/k3s-serving (count: 10): map[listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-192.168.0.9:192.168.0.9 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/cn-solid:solid listener.cattle.io/fingerprint:SHA1=EA1A9A31BCC70E0BD1F05321026E1114FE6C74CF]"
time="2024-04-23T18:32:35+08:00" level=info msg="Cluster dns configmap already exists"
I0423 18:32:35.228234   15620 serving.go:355] Generated self-signed cert in-memory
I0423 18:32:35.229932   15620 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0423 18:32:35.230925   15620 event.go:294] "Event occurred" object="kube-system/ccm" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""
I0423 18:32:35.231954   15620 controller.go:616] quota admission added evaluator for: addons.k3s.cattle.io
I0423 18:32:35.235349   15620 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""
I0423 18:32:35.251338   15620 kube.go:133] Node controller sync successful
I0423 18:32:35.251367   15620 vxlan.go:138] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
time="2024-04-23T18:32:35+08:00" level=info msg="Wrote flannel subnet file to /run/flannel/subnet.env"
time="2024-04-23T18:32:35+08:00" level=info msg="Running flannel backend."
I0423 18:32:35.253754   15620 vxlan_network.go:61] watching for new subnet leases
I0423 18:32:35.254150   15620 apiserver.go:52] "Watching apiserver"
I0423 18:32:35.255712   15620 topology_manager.go:205] "Topology Admit Handler"
I0423 18:32:35.255774   15620 topology_manager.go:205] "Topology Admit Handler"
I0423 18:32:35.255797   15620 topology_manager.go:205] "Topology Admit Handler"
I0423 18:32:35.255828   15620 topology_manager.go:205] "Topology Admit Handler"
I0423 18:32:35.255862   15620 topology_manager.go:205] "Topology Admit Handler"
I0423 18:32:35.257912   15620 iptables.go:260] bootstrap done
I0423 18:32:35.259101   15620 iptables.go:260] bootstrap done
I0423 18:32:35.260748   15620 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"content\" (UniqueName: \"kubernetes.io/configmap/3b3b200f-15bf-49ac-be5a-0b754801b204-content\") pod \"helm-install-traefik-bskvm\" (UID: \"3b3b200f-15bf-49ac-be5a-0b754801b204\") " pod="kube-system/helm-install-traefik-bskvm"
I0423 18:32:35.260769   15620 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxtsk\" (UniqueName: \"kubernetes.io/projected/3b3b200f-15bf-49ac-be5a-0b754801b204-kube-api-access-kxtsk\") pod \"helm-install-traefik-bskvm\" (UID: \"3b3b200f-15bf-49ac-be5a-0b754801b204\") " pod="kube-system/helm-install-traefik-bskvm"
I0423 18:32:35.260786   15620 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"values\" (UniqueName: \"kubernetes.io/configmap/e5441f52-e7c4-4fe1-9d60-9acef7dc2c8b-values\") pod \"helm-install-traefik-crd-t7q8d\" (UID: \"e5441f52-e7c4-4fe1-9d60-9acef7dc2c8b\") " pod="kube-system/helm-install-traefik-crd-t7q8d"
I0423 18:32:35.260802   15620 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2kh2\" (UniqueName: \"kubernetes.io/projected/e5441f52-e7c4-4fe1-9d60-9acef7dc2c8b-kube-api-access-h2kh2\") pod \"helm-install-traefik-crd-t7q8d\" (UID: \"e5441f52-e7c4-4fe1-9d60-9acef7dc2c8b\") " pod="kube-system/helm-install-traefik-crd-t7q8d"
I0423 18:32:35.260825   15620 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a8c385e-e43e-4086-9f9a-3237125c6e9b-config-volume\") pod \"local-path-provisioner-79f67d76f8-j4vcv\" (UID: \"7a8c385e-e43e-4086-9f9a-3237125c6e9b\") " pod="kube-system/local-path-provisioner-79f67d76f8-j4vcv"
I0423 18:32:35.260844   15620 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8a4e41d-fd27-44d7-b881-28beb20f3bd7-config-volume\") pod \"coredns-597584b69b-pwlmm\" (UID: \"b8a4e41d-fd27-44d7-b881-28beb20f3bd7\") " pod="kube-system/coredns-597584b69b-pwlmm"
I0423 18:32:35.260862   15620 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw28h\" (UniqueName: \"kubernetes.io/projected/b8a4e41d-fd27-44d7-b881-28beb20f3bd7-kube-api-access-jw28h\") pod \"coredns-597584b69b-pwlmm\" (UID: \"b8a4e41d-fd27-44d7-b881-28beb20f3bd7\") " pod="kube-system/coredns-597584b69b-pwlmm"
I0423 18:32:35.260875   15620 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"values\" (UniqueName: \"kubernetes.io/configmap/3b3b200f-15bf-49ac-be5a-0b754801b204-values\") pod \"helm-install-traefik-bskvm\" (UID: \"3b3b200f-15bf-49ac-be5a-0b754801b204\") " pod="kube-system/helm-install-traefik-bskvm"
I0423 18:32:35.260885   15620 controller.go:616] quota admission added evaluator for: deployments.apps
I0423 18:32:35.260889   15620 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"content\" (UniqueName: \"kubernetes.io/configmap/e5441f52-e7c4-4fe1-9d60-9acef7dc2c8b-content\") pod \"helm-install-traefik-crd-t7q8d\" (UID: \"e5441f52-e7c4-4fe1-9d60-9acef7dc2c8b\") " pod="kube-system/helm-install-traefik-crd-t7q8d"
I0423 18:32:35.260905   15620 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8b64c32b-c536-4fe7-8bb2-8f44fde12f1a-tmp-dir\") pod \"metrics-server-5c8978b444-mhx2c\" (UID: \"8b64c32b-c536-4fe7-8bb2-8f44fde12f1a\") " pod="kube-system/metrics-server-5c8978b444-mhx2c"
I0423 18:32:35.260924   15620 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrgq9\" (UniqueName: \"kubernetes.io/projected/8b64c32b-c536-4fe7-8bb2-8f44fde12f1a-kube-api-access-xrgq9\") pod \"metrics-server-5c8978b444-mhx2c\" (UID: \"8b64c32b-c536-4fe7-8bb2-8f44fde12f1a\") " pod="kube-system/metrics-server-5c8978b444-mhx2c"
I0423 18:32:35.260943   15620 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tdkp\" (UniqueName: \"kubernetes.io/projected/7a8c385e-e43e-4086-9f9a-3237125c6e9b-kube-api-access-9tdkp\") pod \"local-path-provisioner-79f67d76f8-j4vcv\" (UID: \"7a8c385e-e43e-4086-9f9a-3237125c6e9b\") " pod="kube-system/local-path-provisioner-79f67d76f8-j4vcv"
I0423 18:32:35.260956   15620 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-config-volume\" (UniqueName: \"kubernetes.io/configmap/b8a4e41d-fd27-44d7-b881-28beb20f3bd7-custom-config-volume\") pod \"coredns-597584b69b-pwlmm\" (UID: \"b8a4e41d-fd27-44d7-b881-28beb20f3bd7\") " pod="kube-system/coredns-597584b69b-pwlmm"
I0423 18:32:35.260965   15620 reconciler.go:169] "Reconciler: start to sync state"
I0423 18:32:35.265614   15620 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""
I0423 18:32:35.268734   15620 event.go:294] "Event occurred" object="kube-system/local-storage" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/local-storage.yaml\""
I0423 18:32:35.283611   15620 event.go:294] "Event occurred" object="kube-system/local-storage" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/local-storage.yaml\""
I0423 18:32:35.284661   15620 serving.go:355] Generated self-signed cert in-memory
I0423 18:32:35.286147   15620 event.go:294] "Event occurred" object="kube-system/aggregated-metrics-reader" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml\""
I0423 18:32:35.289140   15620 event.go:294] "Event occurred" object="kube-system/aggregated-metrics-reader" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml\""
time="2024-04-23T18:32:35+08:00" level=warning msg="Error ensuring node password secret for pre-validated node 'solid': unable to verify hash for node 'solid': hash does not match"
I0423 18:32:35.291279   15620 event.go:294] "Event occurred" object="kube-system/auth-delegator" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml\""
I0423 18:32:35.293870   15620 event.go:294] "Event occurred" object="kube-system/auth-delegator" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml\""
time="2024-04-23T18:32:35+08:00" level=info msg="Starting /v1, Kind=Node controller"
time="2024-04-23T18:32:35+08:00" level=info msg="Starting /v1, Kind=ConfigMap controller"
time="2024-04-23T18:32:35+08:00" level=info msg="Starting /v1, Kind=ServiceAccount controller"
I0423 18:32:35.296106   15620 event.go:294] "Event occurred" object="kube-system/auth-reader" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml\""
time="2024-04-23T18:32:35+08:00" level=info msg="Labels and annotations have been set successfully on node: solid"
I0423 18:32:35.297954   15620 event.go:294] "Event occurred" object="kube-system/auth-reader" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml\""
I0423 18:32:35.300637   15620 event.go:294] "Event occurred" object="kube-system/metrics-apiservice" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml\""
I0423 18:32:35.304423   15620 event.go:294] "Event occurred" object="kube-system/metrics-apiservice" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml\""
I0423 18:32:35.306756   15620 event.go:294] "Event occurred" object="kube-system/metrics-server-deployment" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml\""
time="2024-04-23T18:32:35+08:00" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller"
time="2024-04-23T18:32:35+08:00" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
I0423 18:32:35.315266   15620 event.go:294] "Event occurred" object="kube-system/traefik-crd" fieldPath="" kind="HelmChart" apiVersion="helm.cattle.io/v1" type="Normal" reason="ApplyJob" message="Applying HelmChart using Job kube-system/helm-install-traefik-crd"
I0423 18:32:35.315285   15620 event.go:294] "Event occurred" object="kube-system/traefik" fieldPath="" kind="HelmChart" apiVersion="helm.cattle.io/v1" type="Normal" reason="ApplyJob" message="Applying HelmChart using Job kube-system/helm-install-traefik"
I0423 18:32:35.316798   15620 event.go:294] "Event occurred" object="kube-system/metrics-server-deployment" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml\""
E0423 18:32:35.318528   15620 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0423 18:32:35.320605   15620 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0423 18:32:35.320894   15620 event.go:294] "Event occurred" object="kube-system/metrics-server-service" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml\""
time="2024-04-23T18:32:35+08:00" level=info msg="Starting rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding controller"
I0423 18:32:35.323953   15620 event.go:294] "Event occurred" object="kube-system/metrics-server-service" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml\""
E0423 18:32:35.324515   15620 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0423 18:32:35.325526   15620 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2024-04-23T18:32:35+08:00" level=info msg="Starting batch/v1, Kind=Job controller"
I0423 18:32:35.326516   15620 event.go:294] "Event occurred" object="kube-system/resource-reader" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml\""
I0423 18:32:35.332774   15620 event.go:294] "Event occurred" object="kube-system/resource-reader" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml\""
I0423 18:32:35.423056   15620 controllermanager.go:145] Version: v1.25.4+k3s-
I0423 18:32:35.424743   15620 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0423 18:32:35.424746   15620 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0423 18:32:35.424753   15620 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
I0423 18:32:35.424755   15620 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0423 18:32:35.424771   15620 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0423 18:32:35.424782   15620 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0423 18:32:35.424868   15620 secure_serving.go:210] Serving securely on 127.0.0.1:10258
I0423 18:32:35.424948   15620 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0423 18:32:35.496668   15620 event.go:294] "Event occurred" object="kube-system/rolebindings" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/rolebindings.yaml\""
I0423 18:32:35.497554   15620 controllermanager.go:178] Version: v1.25.4+k3s-
I0423 18:32:35.497564   15620 controllermanager.go:180] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0423 18:32:35.499998   15620 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0423 18:32:35.500011   15620 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
I0423 18:32:35.500048   15620 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0423 18:32:35.500048   15620 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0423 18:32:35.500055   15620 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0423 18:32:35.500055   15620 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0423 18:32:35.500071   15620 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0423 18:32:35.500278   15620 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0423 18:32:35.508818   15620 event.go:294] "Event occurred" object="kube-system/rolebindings" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/rolebindings.yaml\""
I0423 18:32:35.518986   15620 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.4+k3s-"
I0423 18:32:35.518995   15620 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0423 18:32:35.520607   15620 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0423 18:32:35.520609   15620 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0423 18:32:35.520616   15620 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
I0423 18:32:35.520617   15620 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0423 18:32:35.520627   15620 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0423 18:32:35.520619   15620 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0423 18:32:35.520700   15620 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0423 18:32:35.520770   15620 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0423 18:32:35.525376   15620 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0423 18:32:35.525387   15620 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
I0423 18:32:35.525392   15620 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0423 18:32:35.600653   15620 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0423 18:32:35.600688   15620 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0423 18:32:35.600743   15620 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
I0423 18:32:35.621321   15620 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0423 18:32:35.621351   15620 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
I0423 18:32:35.621390   15620 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0423 18:32:35.697951   15620 event.go:294] "Event occurred" object="kube-system/traefik" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="ApplyingManifest" message="Applying manifest at \"/var/lib/rancher/k3s/server/manifests/traefik.yaml\""
I0423 18:32:35.701337   15620 event.go:294] "Event occurred" object="kube-system/traefik" fieldPath="" kind="Addon" apiVersion="k3s.cattle.io/v1" type="Normal" reason="AppliedManifest" message="Applied manifest at \"/var/lib/rancher/k3s/server/manifests/traefik.yaml\""
I0423 18:32:36.454771   15620 request.go:682] Waited for 1.092380736s due to client-side throttling, not priority and fairness, request: POST:https://127.0.0.1:6443/api/v1/namespaces/kube-system/serviceaccounts/local-path-provisioner-service-account/token
I0423 18:32:36.456322   15620 scope.go:115] "RemoveContainer" containerID="887459ec8feb302633b910f1a75a4073b4983a020e302a450a734a204a206d03"
I0423 18:32:36.756223   15620 scope.go:115] "RemoveContainer" containerID="553d28bdf40ed381fb01ba4413a6135e4ec9b96b43ddc6855f013a2317c1b660"
I0423 18:32:36.756304   15620 scope.go:115] "RemoveContainer" containerID="e6810303180ea10a9eb8f56b0645578cfe8ea7fa1ebd1f24133c4202514a209c"
E0423 18:32:36.829237   15620 controllermanager.go:476] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2024-04-23T18:32:36+08:00" level=info msg="Creating service-controller event broadcaster"
I0423 18:32:36.831111   15620 controller.go:616] quota admission added evaluator for: namespaces
I0423 18:32:36.833514   15620 controller.go:616] quota admission added evaluator for: serviceaccounts
E0423 18:32:36.904902   15620 controllermanager.go:475] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0423 18:32:36.905358   15620 shared_informer.go:255] Waiting for caches to sync for tokens
I0423 18:32:36.906942   15620 controllermanager.go:603] Started "serviceaccount"
I0423 18:32:36.906997   15620 serviceaccounts_controller.go:117] Starting service account controller
I0423 18:32:36.907005   15620 shared_informer.go:255] Waiting for caches to sync for service account
I0423 18:32:36.908587   15620 controllermanager.go:603] Started "deployment"
W0423 18:32:36.908595   15620 controllermanager.go:568] "bootstrapsigner" is disabled
I0423 18:32:36.908686   15620 deployment_controller.go:160] "Starting controller" controller="deployment"
I0423 18:32:36.908692   15620 shared_informer.go:255] Waiting for caches to sync for deployment
I0423 18:32:36.909968   15620 controllermanager.go:603] Started "ttl-after-finished"
I0423 18:32:36.910052   15620 ttlafterfinished_controller.go:109] Starting TTL after finished controller
I0423 18:32:36.910058   15620 shared_informer.go:255] Waiting for caches to sync for TTL after finished
I0423 18:32:36.913791   15620 garbagecollector.go:154] Starting garbage collector controller
I0423 18:32:36.913802   15620 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0423 18:32:36.913823   15620 graph_builder.go:291] GraphBuilder running
I0423 18:32:36.913927   15620 controllermanager.go:603] Started "garbagecollector"
I0423 18:32:36.915444   15620 controllermanager.go:603] Started "daemonset"
I0423 18:32:36.915556   15620 daemon_controller.go:291] Starting daemon sets controller
I0423 18:32:36.915565   15620 shared_informer.go:255] Waiting for caches to sync for daemon sets
I0423 18:32:36.917085   15620 controllermanager.go:603] Started "persistentvolume-binder"
I0423 18:32:36.917203   15620 pv_controller_base.go:318] Starting persistent volume controller
I0423 18:32:36.917215   15620 shared_informer.go:255] Waiting for caches to sync for persistent volume
I0423 18:32:36.918685   15620 controllermanager.go:603] Started "endpointslicemirroring"
I0423 18:32:36.918786   15620 endpointslicemirroring_controller.go:216] Starting EndpointSliceMirroring controller
I0423 18:32:36.918793   15620 shared_informer.go:255] Waiting for caches to sync for endpoint_slice_mirroring
I0423 18:32:36.919842   15620 controllermanager.go:603] Started "csrcleaner"
I0423 18:32:36.919902   15620 cleaner.go:82] Starting CSR cleaner controller
I0423 18:32:36.923298   15620 controllermanager.go:603] Started "clusterrole-aggregation"
I0423 18:32:36.923356   15620 clusterroleaggregation_controller.go:194] Starting ClusterRoleAggregator
I0423 18:32:36.923364   15620 shared_informer.go:255] Waiting for caches to sync for ClusterRoleAggregator
I0423 18:32:36.924628   15620 controllermanager.go:603] Started "statefulset"
I0423 18:32:36.924759   15620 stateful_set.go:152] Starting stateful set controller
I0423 18:32:36.924767   15620 shared_informer.go:255] Waiting for caches to sync for stateful set
I0423 18:32:36.926664   15620 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-serving"
I0423 18:32:36.926673   15620 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
I0423 18:32:36.926687   15620 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.crt::/var/lib/rancher/k3s/server/tls/server-ca.key"
I0423 18:32:36.926816   15620 certificate_controller.go:112] Starting certificate controller "csrsigning-kubelet-client"
I0423 18:32:36.926822   15620 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kubelet-client
I0423 18:32:36.926837   15620 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key"
I0423 18:32:36.926898   15620 certificate_controller.go:112] Starting certificate controller "csrsigning-kube-apiserver-client"
I0423 18:32:36.926907   15620 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
I0423 18:32:36.926930   15620 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/client-ca.crt::/var/lib/rancher/k3s/server/tls/client-ca.key"
I0423 18:32:36.926942   15620 controllermanager.go:603] Started "csrsigning"
I0423 18:32:36.927018   15620 certificate_controller.go:112] Starting certificate controller "csrsigning-legacy-unknown"
I0423 18:32:36.927025   15620 shared_informer.go:255] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
I0423 18:32:36.927042   15620 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.crt::/var/lib/rancher/k3s/server/tls/server-ca.key"
W0423 18:32:36.927070   15620 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0423 18:32:36.928378   15620 controllermanager.go:603] Started "ttl"
I0423 18:32:36.928435   15620 ttl_controller.go:120] Starting TTL controller
I0423 18:32:36.928442   15620 shared_informer.go:255] Waiting for caches to sync for TTL
I0423 18:32:36.929826   15620 controllermanager.go:603] Started "endpoint"
I0423 18:32:36.929861   15620 endpoints_controller.go:182] Starting endpoint controller
I0423 18:32:36.929869   15620 shared_informer.go:255] Waiting for caches to sync for endpoint
I0423 18:32:36.931112   15620 controllermanager.go:603] Started "replicationcontroller"
I0423 18:32:36.931195   15620 replica_set.go:205] Starting replicationcontroller controller
I0423 18:32:36.931201   15620 shared_informer.go:255] Waiting for caches to sync for ReplicationController
I0423 18:32:36.932337   15620 controllermanager.go:603] Started "podgc"
I0423 18:32:36.932408   15620 gc_controller.go:99] Starting GC controller
I0423 18:32:36.932429   15620 shared_informer.go:255] Waiting for caches to sync for GC
I0423 18:32:36.934909   15620 controllermanager.go:603] Started "job"
I0423 18:32:36.935092   15620 job_controller.go:196] Starting job controller
I0423 18:32:36.935102   15620 shared_informer.go:255] Waiting for caches to sync for job
E0423 18:32:36.940591   15620 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0423 18:32:36.940899   15620 controllermanager.go:603] Started "horizontalpodautoscaling"
W0423 18:32:36.940910   15620 controllermanager.go:568] "route" is disabled
I0423 18:32:36.940986   15620 horizontal.go:168] Starting HPA controller
I0423 18:32:36.940993   15620 shared_informer.go:255] Waiting for caches to sync for HPA
E0423 18:32:36.942787   15620 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2024-04-23T18:32:36+08:00" level=info msg="Starting /v1, Kind=Node controller"
E0423 18:32:36.946478   15620 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0423 18:32:36.948604   15620 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2024-04-23T18:32:36+08:00" level=info msg="Starting /v1, Kind=Pod controller"
E0423 18:32:36.953208   15620 resource_quota_controller.go:165] initial discovery check failure, continuing and counting on future sync update: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0423 18:32:36.953253   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for controllerrevisions.apps
I0423 18:32:36.953285   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I0423 18:32:36.953311   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
I0423 18:32:36.953321   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for limitranges
I0423 18:32:36.953420   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for serviceaccounts
I0423 18:32:36.953435   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for statefulsets.apps
I0423 18:32:36.953450   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for deployments.apps
I0423 18:32:36.953461   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for replicasets.apps
I0423 18:32:36.953469   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for daemonsets.apps
I0423 18:32:36.953484   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for addons.k3s.cattle.io
I0423 18:32:36.953495   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I0423 18:32:36.953506   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
I0423 18:32:36.953519   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
W0423 18:32:36.953532   15620 shared_informer.go:533] resyncPeriod 14h42m58.862098435s is smaller than resyncCheckPeriod 16h50m44.422279355s and the informer has already started. Changing it to 16h50m44.422279355s
I0423 18:32:36.953566   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for helmchartconfigs.helm.cattle.io
I0423 18:32:36.953576   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for endpoints
I0423 18:32:36.953587   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
I0423 18:32:36.953601   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io
I0423 18:32:36.953624   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for jobs.batch
I0423 18:32:36.953640   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I0423 18:32:36.953656   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
I0423 18:32:36.953671   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for podtemplates
I0423 18:32:36.953682   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for cronjobs.batch
I0423 18:32:36.953694   15620 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io
I0423 18:32:36.953702   15620 controllermanager.go:603] Started "resourcequota"
I0423 18:32:36.953748   15620 resource_quota_controller.go:277] Starting resource quota controller
I0423 18:32:36.953757   15620 shared_informer.go:255] Waiting for caches to sync for resource quota
I0423 18:32:36.953767   15620 resource_quota_monitor.go:295] QuotaMonitor running
E0423 18:32:36.953790   15620 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0423 18:32:36.954860   15620 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2024-04-23T18:32:36+08:00" level=info msg="Starting apps/v1, Kind=DaemonSet controller"
I0423 18:32:36.955653   15620 controllermanager.go:301] Started "cloud-node-lifecycle"
I0423 18:32:36.955760   15620 node_lifecycle_controller.go:113] Sending events to api server
I0423 18:32:36.955810   15620 controllermanager.go:301] Started "service"
W0423 18:32:36.955817   15620 controllermanager.go:278] "route" is disabled
I0423 18:32:36.955892   15620 controllermanager.go:301] Started "cloud-node"
I0423 18:32:36.955903   15620 controller.go:237] Starting service controller
I0423 18:32:36.955916   15620 shared_informer.go:255] Waiting for caches to sync for service
I0423 18:32:36.955970   15620 node_controller.go:157] Sending events to api server.
I0423 18:32:36.955992   15620 node_controller.go:166] Waiting for informer caches to sync
I0423 18:32:36.956081   15620 controllermanager.go:603] Started "csrapproving"
I0423 18:32:36.956146   15620 certificate_controller.go:112] Starting certificate controller "csrapproving"
I0423 18:32:36.956153   15620 shared_informer.go:255] Waiting for caches to sync for certificate-csrapproving
E0423 18:32:36.957955   15620 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0423 18:32:36.958281   15620 controllermanager.go:603] Started "pv-protection"
I0423 18:32:36.958319   15620 pv_protection_controller.go:79] Starting PV protection controller
I0423 18:32:36.958326   15620 shared_informer.go:255] Waiting for caches to sync for PV protection
I0423 18:32:36.959569   15620 controllermanager.go:603] Started "ephemeral-volume"
I0423 18:32:36.959598   15620 controller.go:169] Starting ephemeral volume controller
I0423 18:32:36.959603   15620 shared_informer.go:255] Waiting for caches to sync for ephemeral
I0423 18:32:37.005438   15620 shared_informer.go:262] Caches are synced for tokens
I0423 18:32:37.007101   15620 node_lifecycle_controller.go:497] Controller will reconcile labels.
I0423 18:32:37.007120   15620 controllermanager.go:603] Started "nodelifecycle"
I0423 18:32:37.007157   15620 node_lifecycle_controller.go:532] Sending events to api server.
I0423 18:32:37.007177   15620 node_lifecycle_controller.go:543] Starting node controller
I0423 18:32:37.007181   15620 shared_informer.go:255] Waiting for caches to sync for taint
I0423 18:32:37.056208   15620 shared_informer.go:262] Caches are synced for service
I0423 18:32:37.057540   15620 controllermanager.go:603] Started "pvc-protection"
W0423 18:32:37.057559   15620 controllermanager.go:568] "service" is disabled
I0423 18:32:37.057601   15620 pvc_protection_controller.go:103] "Starting PVC protection controller"
I0423 18:32:37.057612   15620 shared_informer.go:255] Waiting for caches to sync for PVC protection
I0423 18:32:37.109356   15620 controllermanager.go:603] Started "persistentvolume-expander"
I0423 18:32:37.109402   15620 expand_controller.go:340] Starting expand controller
I0423 18:32:37.109407   15620 shared_informer.go:255] Waiting for caches to sync for expand
I0423 18:32:37.158959   15620 controllermanager.go:603] Started "endpointslice"
I0423 18:32:37.159047   15620 endpointslice_controller.go:261] Starting endpoint slice controller
I0423 18:32:37.159097   15620 shared_informer.go:255] Waiting for caches to sync for endpoint_slice
I0423 18:32:37.207374   15620 controllermanager.go:603] Started "replicaset"
I0423 18:32:37.207424   15620 replica_set.go:205] Starting replicaset controller
I0423 18:32:37.207430   15620 shared_informer.go:255] Waiting for caches to sync for ReplicaSet
I0423 18:32:37.308808   15620 controllermanager.go:603] Started "disruption"
W0423 18:32:37.308825   15620 controllermanager.go:568] "tokencleaner" is disabled
I0423 18:32:37.308875   15620 disruption.go:421] Sending events to api server.
I0423 18:32:37.308897   15620 disruption.go:432] Starting disruption controller
I0423 18:32:37.308904   15620 shared_informer.go:255] Waiting for caches to sync for disruption
I0423 18:32:37.356978   15620 scope.go:115] "RemoveContainer" containerID="91f139a5093542d5e7ed934463b3ec49d24b04a544c00a729ff3b2be6b89910d"
I0423 18:32:37.358606   15620 node_ipam_controller.go:91] Sending events to api server.
time="2024-04-23T18:32:38+08:00" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=solid --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
I0423 18:32:38.243429   15620 server.go:230] "Warning, all flags other than --config, --write-config-to, and --cleanup are deprecated, please begin using a config file ASAP"
I0423 18:32:38.244947   15620 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I0423 18:32:38.245348   15620 proxier.go:666] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0423 18:32:38.249663   15620 node.go:163] Successfully retrieved node IP: 192.168.0.9
I0423 18:32:38.249674   15620 server_others.go:138] "Detected node IP" address="192.168.0.9"
W0423 18:32:38.250020   15620 iptables.go:221] Error checking iptables version, assuming version at least 1.4.11: executable file not found in $PATH
I0423 18:32:38.250486   15620 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0423 18:32:38.250494   15620 server_others.go:206] "Using iptables Proxier"
I0423 18:32:38.250501   15620 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0423 18:32:38.250646   15620 server.go:661] "Version info" version="v1.25.4+k3s-"
I0423 18:32:38.250654   15620 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0423 18:32:38.251002   15620 config.go:226] "Starting endpoint slice config controller"
I0423 18:32:38.251013   15620 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0423 18:32:38.251032   15620 config.go:317] "Starting service config controller"
I0423 18:32:38.251032   15620 config.go:444] "Starting node config controller"
I0423 18:32:38.251040   15620 shared_informer.go:255] Waiting for caches to sync for service config
I0423 18:32:38.251041   15620 shared_informer.go:255] Waiting for caches to sync for node config
I0423 18:32:38.352095   15620 shared_informer.go:262] Caches are synced for node config
I0423 18:32:38.352326   15620 shared_informer.go:262] Caches are synced for service config
I0423 18:32:38.352372   15620 shared_informer.go:262] Caches are synced for endpoint slice config
E0423 18:32:38.387944   15620 proxier.go:1504] "Failed to execute iptables-restore" err=<
        exit status 1: Ignoring deprecated --wait-interval option.
        Warning: Extension REJECT revision 0 not supported, missing kernel module?
        iptables-restore: line 14 failed
 >
I0423 18:32:38.387962   15620 proxier.go:855] "Sync failed" retryingTime="30s"
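# Note: the "Failed to execute iptables-restore" / "Extension REJECT revision 0 not supported"
# error just above looks like the relevant failure: kube-proxy appears to push its filter and nat
# rules in a single iptables-restore transaction, so when the REJECT target is refused the service
# rules (including the ones for 10.43.0.1:443) are never installed, which matches the symptom.
# A minimal check, assuming the legacy IPv4 REJECT target comes from CONFIG_IP_NF_TARGET_REJECT /
# the ipt_REJECT module (option and module names may differ on other kernel versions):
#
#   zgrep CONFIG_IP_NF_TARGET_REJECT /proc/config.gz    # expect =y or =m
#   modprobe ipt_REJECT                                  # load it if it was built as a module
#   iptables -A INPUT -p tcp --dport 9 -j REJECT \
#     && iptables -D INPUT -p tcp --dport 9 -j REJECT    # round-trip test of the REJECT target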
I0423 18:32:47.364315   15620 range_allocator.go:76] Sending events to api server.
I0423 18:32:47.364387   15620 range_allocator.go:110] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
I0423 18:32:47.364413   15620 controllermanager.go:603] Started "nodeipam"
I0423 18:32:47.364481   15620 node_ipam_controller.go:154] Starting ipam controller
I0423 18:32:47.364488   15620 shared_informer.go:255] Waiting for caches to sync for node
E0423 18:32:47.373876   15620 namespaced_resources_deleter.go:162] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0423 18:32:47.373924   15620 controllermanager.go:603] Started "namespace"
I0423 18:32:47.373989   15620 namespace_controller.go:200] Starting namespace controller
I0423 18:32:47.373997   15620 shared_informer.go:255] Waiting for caches to sync for namespace
I0423 18:32:47.375153   15620 controllermanager.go:603] Started "cronjob"
W0423 18:32:47.375161   15620 controllermanager.go:568] "cloud-node-lifecycle" is disabled
I0423 18:32:47.375171   15620 cronjob_controllerv2.go:135] "Starting cronjob controller v2"
I0423 18:32:47.375181   15620 shared_informer.go:255] Waiting for caches to sync for cronjob
I0423 18:32:47.376432   15620 controllermanager.go:603] Started "attachdetach"
I0423 18:32:47.376488   15620 attach_detach_controller.go:328] Starting attach detach controller
I0423 18:32:47.376504   15620 shared_informer.go:255] Waiting for caches to sync for attach detach
I0423 18:32:47.378931   15620 controllermanager.go:603] Started "root-ca-cert-publisher"
I0423 18:32:47.379025   15620 publisher.go:107] Starting root CA certificate configmap publisher
I0423 18:32:47.379035   15620 shared_informer.go:255] Waiting for caches to sync for crt configmap
I0423 18:32:47.381996   15620 shared_informer.go:255] Waiting for caches to sync for resource quota
W0423 18:32:47.385799   15620 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="solid" does not exist
I0423 18:32:47.386154   15620 job_controller.go:510] enqueueing job kube-system/helm-install-traefik-crd
I0423 18:32:47.386168   15620 job_controller.go:510] enqueueing job kube-system/helm-install-traefik
E0423 18:32:47.388998   15620 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0423 18:32:47.390809   15620 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0423 18:32:47.392172   15620 shared_informer.go:255] Waiting for caches to sync for garbage collector
I0423 18:32:47.408157   15620 shared_informer.go:262] Caches are synced for ReplicaSet
I0423 18:32:47.408304   15620 shared_informer.go:262] Caches are synced for taint
I0423 18:32:47.408331   15620 shared_informer.go:262] Caches are synced for service account
I0423 18:32:47.408370   15620 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
I0423 18:32:47.408373   15620 taint_manager.go:204] "Starting NoExecuteTaintManager"
I0423 18:32:47.408396   15620 taint_manager.go:209] "Sending events to api server"
W0423 18:32:47.408419   15620 node_lifecycle_controller.go:1058] Missing timestamp for Node solid. Assuming now as a timestamp.
I0423 18:32:47.408461   15620 node_lifecycle_controller.go:1259] Controller detected that zone  is now in state Normal.
I0423 18:32:47.408497   15620 event.go:294] "Event occurred" object="solid" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node solid event: Registered Node solid in Controller"
I0423 18:32:47.409547   15620 shared_informer.go:262] Caches are synced for expand
I0423 18:32:47.410112   15620 shared_informer.go:262] Caches are synced for TTL after finished
I0423 18:32:47.416165   15620 shared_informer.go:262] Caches are synced for daemon sets
I0423 18:32:47.417280   15620 shared_informer.go:262] Caches are synced for persistent volume
I0423 18:32:47.419476   15620 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I0423 18:32:47.423820   15620 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I0423 18:32:47.425114   15620 shared_informer.go:262] Caches are synced for stateful set
I0423 18:32:47.427278   15620 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
I0423 18:32:47.427303   15620 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0423 18:32:47.427313   15620 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
I0423 18:32:47.427359   15620 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
I0423 18:32:47.428522   15620 shared_informer.go:262] Caches are synced for TTL
I0423 18:32:47.430800   15620 shared_informer.go:262] Caches are synced for endpoint
I0423 18:32:47.431994   15620 shared_informer.go:262] Caches are synced for ReplicationController
I0423 18:32:47.432934   15620 shared_informer.go:262] Caches are synced for GC
I0423 18:32:47.435223   15620 shared_informer.go:262] Caches are synced for job
I0423 18:32:47.441515   15620 shared_informer.go:262] Caches are synced for HPA
I0423 18:32:47.456511   15620 shared_informer.go:262] Caches are synced for certificate-csrapproving
I0423 18:32:47.458164   15620 shared_informer.go:262] Caches are synced for PVC protection
I0423 18:32:47.458371   15620 shared_informer.go:262] Caches are synced for PV protection
I0423 18:32:47.459818   15620 shared_informer.go:262] Caches are synced for ephemeral
I0423 18:32:47.459906   15620 shared_informer.go:262] Caches are synced for endpoint_slice
I0423 18:32:47.464633   15620 shared_informer.go:262] Caches are synced for node
I0423 18:32:47.464658   15620 range_allocator.go:166] Starting range CIDR allocator
I0423 18:32:47.464671   15620 shared_informer.go:255] Waiting for caches to sync for cidrallocator
I0423 18:32:47.464705   15620 shared_informer.go:262] Caches are synced for cidrallocator
I0423 18:32:47.474877   15620 shared_informer.go:262] Caches are synced for namespace
I0423 18:32:47.476013   15620 shared_informer.go:262] Caches are synced for cronjob
I0423 18:32:47.477133   15620 shared_informer.go:262] Caches are synced for attach detach
I0423 18:32:47.479364   15620 shared_informer.go:262] Caches are synced for crt configmap
I0423 18:32:47.608831   15620 shared_informer.go:262] Caches are synced for deployment
I0423 18:32:47.608972   15620 shared_informer.go:262] Caches are synced for disruption
I0423 18:32:47.654933   15620 shared_informer.go:262] Caches are synced for resource quota
I0423 18:32:47.682181   15620 shared_informer.go:262] Caches are synced for resource quota
I0423 18:32:47.992816   15620 shared_informer.go:262] Caches are synced for garbage collector
I0423 18:32:48.014006   15620 shared_informer.go:262] Caches are synced for garbage collector
I0423 18:32:48.014028   15620 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0423 18:33:07.377301   15620 scope.go:115] "RemoveContainer" containerID="887459ec8feb302633b910f1a75a4073b4983a020e302a450a734a204a206d03"
I0423 18:33:07.377523   15620 scope.go:115] "RemoveContainer" containerID="7b2c8eea70069b5553be4748322b1b7aee3014c4e2043124874edc563679c122"
E0423 18:33:07.377858   15620 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=metrics-server pod=metrics-server-5c8978b444-mhx2c_kube-system(8b64c32b-c536-4fe7-8bb2-8f44fde12f1a)\"" pod="kube-system/metrics-server-5c8978b444-mhx2c" podUID=8b64c32b-c536-4fe7-8bb2-8f44fde12f1a
I0423 18:33:07.379026   15620 scope.go:115] "RemoveContainer" containerID="6fb8ceca14c76516aecd90edd83fc8875507eeddb2641cb2f376ce882a5dcbc3"
E0423 18:33:07.379398   15620 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=local-path-provisioner pod=local-path-provisioner-79f67d76f8-j4vcv_kube-system(7a8c385e-e43e-4086-9f9a-3237125c6e9b)\"" pod="kube-system/local-path-provisioner-79f67d76f8-j4vcv" podUID=7a8c385e-e43e-4086-9f9a-3237125c6e9b
I0423 18:33:07.394551   15620 scope.go:115] "RemoveContainer" containerID="e6810303180ea10a9eb8f56b0645578cfe8ea7fa1ebd1f24133c4202514a209c"
{"level":"warn","ts":"2024-04-23T18:33:07.946+0800","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001b7d6c0/kine.sock","attempt":0,"error":"rpc error: code = Unknown desc = no such table: dbstat"}
E0423 18:33:08.414331   15620 proxier.go:1504] "Failed to execute iptables-restore" err=<
        exit status 1: Ignoring deprecated --wait-interval option.
        Warning: Extension REJECT revision 0 not supported, missing kernel module?
        iptables-restore: line 14 failed
 >
I0423 18:33:08.414361   15620 proxier.go:855] "Sync failed" retryingTime="30s"
I0423 18:33:14.404501   15620 scope.go:115] "RemoveContainer" containerID="7b2c8eea70069b5553be4748322b1b7aee3014c4e2043124874edc563679c122"
E0423 18:33:14.404902   15620 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=metrics-server pod=metrics-server-5c8978b444-mhx2c_kube-system(8b64c32b-c536-4fe7-8bb2-8f44fde12f1a)\"" pod="kube-system/metrics-server-5c8978b444-mhx2c" podUID=8b64c32b-c536-4fe7-8bb2-8f44fde12f1a
E0423 18:33:17.689613   15620 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0423 18:33:18.003607   15620 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0423 18:33:21.303198   15620 scope.go:115] "RemoveContainer" containerID="6fb8ceca14c76516aecd90edd83fc8875507eeddb2641cb2f376ce882a5dcbc3"
I0423 18:33:29.304180   15620 scope.go:115] "RemoveContainer" containerID="7b2c8eea70069b5553be4748322b1b7aee3014c4e2043124874edc563679c122"
W0423 18:33:34.156090   15620 handler_proxy.go:105] no RequestInfo found in the context
E0423 18:33:34.156203   15620 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0423 18:33:34.156227   15620 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0423 18:33:34.157237   15620 handler_proxy.go:105] no RequestInfo found in the context
E0423 18:33:34.157301   15620 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
I0423 18:33:34.157321   15620 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E0423 18:33:38.437643   15620 proxier.go:1504] "Failed to execute iptables-restore" err=<
        exit status 1: Ignoring deprecated --wait-interval option.
        Warning: Extension REJECT revision 0 not supported, missing kernel module?
        iptables-restore: line 14 failed
 >
I0423 18:33:38.437671   15620 proxier.go:855] "Sync failed" retryingTime="30s"
E0423 18:33:47.696218   15620 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W0423 18:33:48.012651   15620 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
{"level":"warn","ts":"2024-04-23T18:33:49.104+0800","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001b7d6c0/kine.sock","attempt":0,"error":"rpc error: code = Unknown desc = no such table: dbstat"}
I0423 18:33:51.476831   15620 scope.go:115] "RemoveContainer" containerID="6fb8ceca14c76516aecd90edd83fc8875507eeddb2641cb2f376ce882a5dcbc3"
I0423 18:33:51.476956   15620 scope.go:115] "RemoveContainer" containerID="5f323254aa0171f3519912eacfbb9c93f76643ddf8086090df1e7225c09a265a"
E0423 18:33:51.477071   15620 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=local-path-provisioner pod=local-path-provisioner-79f67d76f8-j4vcv_kube-system(7a8c385e-e43e-4086-9f9a-3237125c6e9b)\"" pod="kube-system/local-path-provisioner-79f67d76f8-j4vcv" podUID=7a8c385e-e43e-4086-9f9a-3237125c6e9b
I0423 18:34:00.496938   15620 scope.go:115] "RemoveContainer" containerID="7b2c8eea70069b5553be4748322b1b7aee3014c4e2043124874edc563679c122"
I0423 18:34:00.497202   15620 scope.go:115] "RemoveContainer" containerID="4cec70423b4bdf53a98c67d0c1338f6922d7da3d1055e6dca9ef9dc8149371dd"
E0423 18:34:00.497625   15620 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=metrics-server pod=metrics-server-5c8978b444-mhx2c_kube-system(8b64c32b-c536-4fe7-8bb2-8f44fde12f1a)\"" pod="kube-system/metrics-server-5c8978b444-mhx2c" podUID=8b64c32b-c536-4fe7-8bb2-8f44fde12f1a
I0423 18:34:04.303538   15620 scope.go:115] "RemoveContainer" containerID="5f323254aa0171f3519912eacfbb9c93f76643ddf8086090df1e7225c09a265a"
E0423 18:34:04.303860   15620 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"local-path-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=local-path-provisioner pod=local-path-provisioner-79f67d76f8-j4vcv_kube-system(7a8c385e-e43e-4086-9f9a-3237125c6e9b)\"" pod="kube-system/local-path-provisioner-79f67d76f8-j4vcv" podUID=7a8c385e-e43e-4086-9f9a-3237125c6e9b
I0423 18:34:04.404447   15620 scope.go:115] "RemoveContainer" containerID="4cec70423b4bdf53a98c67d0c1338f6922d7da3d1055e6dca9ef9dc8149371dd"
E0423 18:34:04.404829   15620 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=metrics-server pod=metrics-server-5c8978b444-mhx2c_kube-system(8b64c32b-c536-4fe7-8bb2-8f44fde12f1a)\"" pod="kube-system/metrics-server-5c8978b444-mhx2c" podUID=8b64c32b-c536-4fe7-8bb2-8f44fde12f1a
E0423 18:34:08.463665   15620 proxier.go:1504] "Failed to execute iptables-restore" err=<
        exit status 1: Ignoring deprecated --wait-interval option.
        Warning: Extension REJECT revision 0 not supported, missing kernel module?
        iptables-restore: line 14 failed
 >
I0423 18:34:08.463692   15620 proxier.go:855] "Sync failed" retryingTime="30s"
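The repeated iptables-restore failure above points at the REJECT target being unavailable to iptables-legacy. As a rough check (a sketch only; the option name below assumes the usual IPv4 REJECT target config, CONFIG_IP_NF_TARGET_REJECT, which check-config.sh does not list), one can verify whether the kernel has it and, if it is built as a module, load it:

$ zgrep CONFIG_IP_NF_TARGET_REJECT /proc/config.gz   # expect =y or =m
$ modprobe ipt_REJECT                                # only needed if built as a module (=m)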

github-actions bot commented Jun 7, 2024

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 45 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.

github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) on Jun 21, 2024
github-project-automation bot moved this from New to Done Issue in K3s Development on Jun 21, 2024

Cairry commented Sep 14, 2024

I have the same problem. How do I solve it?

@4ngrym0f0

In my case, I'm using nftables, and I had to add the following rule:

ip saddr 10.42.0.0/16 ip daddr 10.43.0.0/16 accept
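For context, one way this could be expressed with the nft CLI is sketched below. The inet filter forward table/chain names are assumptions, not taken from the comment above; the rule belongs in whichever chain is currently dropping or rejecting pod-to-service traffic in your own ruleset.

$ # Sketch only: table and chain are placeholders; 10.42.0.0/16 and 10.43.0.0/16
$ # are the default k3s pod and service CIDRs.
$ nft add rule inet filter forward ip saddr 10.42.0.0/16 ip daddr 10.43.0.0/16 accept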
