
DeploymentConfig manifest settings are ignored #11348

Closed
fmunteanu opened this issue Nov 20, 2024 · 5 comments


fmunteanu commented Nov 20, 2024

Environmental Info:
K3s Version:

# k3s -v
k3s version v1.30.6+k3s1 (1829eaae)
go version go1.22.8

Node(s) CPU architecture, OS, and Version:

# uname -a
Linux apollo 6.8.0-1014-raspi #16-Ubuntu SMP PREEMPT_DYNAMIC Tue Oct 15 20:54:23 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux

Cluster Configuration:
3 servers + 5 agents, HA enabled

Describe the bug:
My goal is to set specific coredns resources. For some reason, the DeploymentConfig manifest is ignored when the K3s server starts.

Steps To Reproduce:

During K3s installation, I create the following file on each server and start the service:

# cat /var/lib/rancher/k3s/server/manifests/coredns-config.yaml
apiVersion: helm.cattle.io/v1
kind: DeploymentConfig
metadata:
  name: coredns
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: coredns
          resources:
            limits:
              memory: 128Mi
            requests:
              cpu: 10m
              memory: 128Mi

The above deployment settings are ignored:

$ kubectl describe deployment coredns -n kube-system
Name:                   coredns
Namespace:              kube-system
CreationTimestamp:      Wed, 20 Nov 2024 12:36:51 -0500
Labels:                 k8s-app=kube-dns
                        kubernetes.io/name=CoreDNS
                        objectset.rio.cattle.io/hash=bce283298811743a0386ab510f2f67ef74240c57
Annotations:            deployment.kubernetes.io/revision: 1
                        objectset.rio.cattle.io/applied:
                          H4sIAAAAAAAA/6xVQW/bOBP9Kx/mLMVW3DaugO/QjbPboq3XqJNeCqOgqZHFNcXhkiMnRuD/vhhJduw2TdrFniyTb4ZvHucN70F58xlDNOQgB+V9HGwySGBtXAE5TNBb2tboGBKokV...
                        objectset.rio.cattle.io/id:
                        objectset.rio.cattle.io/owner-gvk: k3s.cattle.io/v1, Kind=Addon
                        objectset.rio.cattle.io/owner-name: coredns
                        objectset.rio.cattle.io/owner-namespace: kube-system
Selector:               k8s-app=kube-dns
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=kube-dns
  Service Account:  coredns
  Containers:
   coredns:
    Image:       rancher/mirrored-coredns-coredns:1.11.3
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=2s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /etc/coredns/custom from custom-config-volume (ro)
  Volumes:
   config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
   custom-config-volume:
    Type:                       ConfigMap (a volume populated by a ConfigMap)
    Name:                       coredns-custom
    Optional:                   true
  Topology Spread Constraints:  kubernetes.io/hostname:DoNotSchedule when max skew 1 is exceeded for selector k8s-app=kube-dns
  Priority Class Name:          system-cluster-critical
  Node-Selectors:               kubernetes.io/os=linux
  Tolerations:                  CriticalAddonsOnly op=Exists
                                node-role.kubernetes.io/control-plane:NoSchedule op=Exists
                                node-role.kubernetes.io/master:NoSchedule op=Exists
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   coredns-7b98449c4 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  15m   deployment-controller  Scaled up replica set coredns-7b98449c4 to 1

Not sure if it is related, but the coredns.yaml file is not consistently indented; note the indentation of the containers list:

    spec:
      priorityClassName: "system-cluster-critical"
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
        - key: "node-role.kubernetes.io/control-plane"
          operator: "Exists"
          effect: "NoSchedule"
        - key: "node-role.kubernetes.io/master"
          operator: "Exists"
          effect: "NoSchedule"
      nodeSelector:
        kubernetes.io/os: linux
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              k8s-app: kube-dns
      containers:
      - name: coredns
        image: "rancher/mirrored-coredns-coredns:1.11.3"
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]

brandond commented Nov 20, 2024

apiVersion: helm.cattle.io/v1
kind: DeploymentConfig

What is this even? This is not a resource type supported by K3s; I wouldn't expect it to even deploy correctly, let alone be used.

The coredns deployment that comes with K3s is not deployed via Helm. It is just a flat manifest:
https://github.com/k3s-io/k3s/blob/master/manifests/coredns.yaml

If you want to customize the coredns config beyond what is possible via coredns corefile drop-ins, you should start k3s with --disable=coredns, and provide your own custom coredns deployment.
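For reference, a minimal sketch of disabling the packaged coredns via the K3s server config file so a custom deployment can be supplied instead; the path and key follow the usual K3s configuration conventions, but treat the exact form as an assumption to verify against your install:

# /etc/rancher/k3s/config.yaml (sketch; equivalent to starting the server with --disable=coredns)
disable:
  - coredns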


fmunteanu commented Nov 20, 2024

You're right @brandond, I forgot to update the apiVersion while looking at your documentation. The goal is to update the deployment with the following settings:

# cat /var/lib/rancher/k3s/server/manifests/coredns-config.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: coredns
          resources:
            limits:
              memory: 128Mi
            requests:
              cpu: 10m
              memory: 128Mi


brandond commented Nov 20, 2024

Kubernetes manifests aren't merged like that; your deployment will REPLACE the existing one. If you try to manage the same resource from multiple manifests bad things will happen.

You should stop k3s, copy coredns.yaml to coredns-custom.yaml, make your changes, then start k3s again with --disable=coredns.
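A rough sketch of that procedure on one server, assuming a default systemd install and the standard manifest directory mentioned above; anything beyond the paths already shown in this thread is illustrative:

# stop the server so the packaged manifest is not re-applied while editing
systemctl stop k3s

# keep a copy under a different name, then edit the resources block in the copy
cp /var/lib/rancher/k3s/server/manifests/coredns.yaml \
   /var/lib/rancher/k3s/server/manifests/coredns-custom.yaml

# add --disable=coredns (e.g. via /etc/rancher/k3s/config.yaml) before restarting
systemctl start k3s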


fmunteanu commented Nov 20, 2024

Thank you @brandond. Since my goal is to standardize the resources to the following best-practice format, for example for coredns:

          resources:
            limits:
              memory: 192Mi
            requests:
              cpu: 10m
              memory: 192Mi

I was wondering if it would be a good idea to create a PR for coredns, metrics-server and such?

Looking at metrics-server deployment:

        resources:
          requests:
            cpu: 100m
            memory: 70Mi

Should become:

          resources:
            limits:
              memory: 128Mi
            requests:
              cpu: 10m
              memory: 128Mi

I'm curious why there is no memory limit set; this is quite problematic if a memory leak occurs. The new values listed above are good IMO, based on the metrics I was looking at.

@brandond

I'm curious why there is no memory limit set, this is quite problematic, if a memory leak occurs.

I have honestly never seen this as necessary, and shipping with default requests and limits would be a premature optimization that would not fit everyone's use cases. These are low-overhead, mature projects that do not leak memory. They work fine as-is and have for years.
