diff --git a/.github/workflows/spellcheck.yml b/.github/workflows/spellcheck.yml new file mode 100644 index 000000000..5793cddda --- /dev/null +++ b/.github/workflows/spellcheck.yml @@ -0,0 +1,18 @@ +name: Spell Check + +on: [pull_request] + +jobs: + + spell-check: + name: "Spell checker" + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + - name: Use Node.js + uses: actions/setup-node@v4 + with: + node-version: 18 + - run: | + npm install --save-dev cspell@latest + npx cspell lint "content/**/*.md" diff --git a/.spelling b/.spelling new file mode 100644 index 000000000..35606b978 --- /dev/null +++ b/.spelling @@ -0,0 +1,63 @@ +EKS +kubelet +kubelets +NTFS +syscall +syscalls +AWSVPC +Kyverno +tmpfs +Runtimeclass +tolerations +ONTAP +LTSC +ltsc +daemonset +eksctl +Karpenter +Fargate +Datacenter +containerd +certmanager +istiod +ADOT +ClowdHaus +Dockershim +Inspektor +Seccomp +Qualys +Nodegroup +Nodegroups +Velero +autoscaler +Autoscaler +autoscalers +Autoscalers +SETUID +SETGID +Sandboxed +sandboxed +sandboxing +Sandboxing +pagefiles +udev +Anchore +Palo +NTLM +ABAC +Bottlerocket +Sysdig +routable +ipamd +schedulable +subresource +Dockerfile +Dockerfiles +Stackrox +Tigera +Kaniko +buildah +Burstable +burstable +Runbooks +runbooks diff --git a/content/cost_optimization/awareness.md b/content/cost_optimization/awareness.md index 38bde90ba..2c8165eb9 100644 --- a/content/cost_optimization/awareness.md +++ b/content/cost_optimization/awareness.md @@ -45,7 +45,7 @@ Tags don't have any semantic meaning to Amazon EKS and are interpreted strictly AWS Trusted Advisor offers a rich set of best practice checks and recommendations across five categories: cost optimization; security; fault tolerance; performance; and service limits. -For Cost Optimization, Trusted Advisor helps eliminate unused and idle resources and recommends making commitments to reserved capacity. The key action items that will help Amazon EKS will be around low utilsed EC2 instances, unassociated Elastic IP addresses, Idle Load Balancers, underutilized EBS volumes among other things. The complete list of checks are provided at https://aws.amazon.com/premiumsupport/technology/trusted-advisor/best-practice-checklist/. +For Cost Optimization, Trusted Advisor helps eliminate unused and idle resources and recommends making commitments to reserved capacity. The key action items that will help Amazon EKS will be around low utilized EC2 instances, unassociated Elastic IP addresses, Idle Load Balancers, underutilized EBS volumes among other things. The complete list of checks are provided at https://aws.amazon.com/premiumsupport/technology/trusted-advisor/best-practice-checklist/. The Trusted Advisor also provides Savings Plans and Reserved Instances recommendations for EC2 instances and Fargate which allows you to commit to a consistent usage amount in exchange for discounted rates. diff --git a/content/cost_optimization/optimizing_WIP.md b/content/cost_optimization/optimizing_WIP.md index bfd9b6739..5e0be54d0 100644 --- a/content/cost_optimization/optimizing_WIP.md +++ b/content/cost_optimization/optimizing_WIP.md @@ -4,7 +4,7 @@ Right Sizing as per the AWS Well-Architected Framework, is “… using the lowe When you specify the resource `requests` for the Containers in a Pod, the scheduler uses this information to decide which node to place the Pod on. 
When you specify a resource `limits` for a Container, the kubelet enforces those limits so that the running container is not allowed to use more of that resource than the limit you set. The details of how Kubernetes manages resources for containers are given in the [documentation](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).

-In Kubernetes, this means setting the right compute resources ([CPU and memory are collectively referred to as compute resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/)) - setting the resource `requests` that align as close as possible to the actual utilization. The tools for getting the actual resource usags of Pods are given in the section on Rexommendations below.
+In Kubernetes, this means setting the right compute resources ([CPU and memory are collectively referred to as compute resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/)) - setting the resource `requests` that align as close as possible to the actual utilization. The tools for getting the actual resource usages of Pods are given in the section on Recommendations below.

**Amazon EKS on AWS Fargate**: When pods are scheduled on Fargate, the vCPU and memory reservations within the pod specification determine how much CPU and memory to provision for the pod. If you do not specify a vCPU and memory combination, then the smallest available combination is used (.25 vCPU and 0.5 GB memory). The list of vCPU and memory combinations that are available for pods running on Fargate are listed in the [Amazon EKS User Guide](https://docs.aws.amazon.com/eks/latest/userguide/fargate-pod-configuration.html).
diff --git a/content/networking/sgpp/index.ko.md b/content/networking/sgpp/index.ko.md
index 8ba874ff7..03c905502 100644
--- a/content/networking/sgpp/index.ko.md
+++ b/content/networking/sgpp/index.ko.md
@@ -115,7 +115,7 @@ kubectl set env daemonset -n kube-system aws-node AWS_VPC_K8S_CNI_EXTERNALSNAT=t

### Fargate를 이용하는 파드용 보안 그룹 사용

-Fargate에서 실행되는 파드의 보안 그룹은 EC2 워커 노드에서 실행되는 파드와 매우 유사하게 작동한다. 예를 들어 Fargate 파드에 연결하는 보안 그룹 정책에서 보안 그룹을 참조하기 전에 먼저 보안 그룹을 생성해야 합니다.기본적으로 보안 그룹 정책을 Fargate 파드에 명시적으로 할당하지 않으면 [클러스터 보안 그룹](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html)이 모든 Fargate 파드에 할당됩니다. 단순화를 위해 Fagate Pod의 SecurityGroupPolicy에 클러스터 보안 그룹을 추가할 수도 있습니다. 그렇지 않으면 보안 그룹에 최소 보안 그룹 규칙을 추가해야 합니다. 설명 클러스터 API를 사용하여 클러스터 보안 그룹을 찾을 수 있습니다.
+Fargate에서 실행되는 파드의 보안 그룹은 EC2 워커 노드에서 실행되는 파드와 매우 유사하게 작동한다. 예를 들어 Fargate 파드에 연결하는 보안 그룹 정책에서 보안 그룹을 참조하기 전에 먼저 보안 그룹을 생성해야 합니다.기본적으로 보안 그룹 정책을 Fargate 파드에 명시적으로 할당하지 않으면 [클러스터 보안 그룹](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html)이 모든 Fargate 파드에 할당됩니다. 단순화를 위해 Fargate Pod의 SecurityGroupPolicy에 클러스터 보안 그룹을 추가할 수도 있습니다. 그렇지 않으면 보안 그룹에 최소 보안 그룹 규칙을 추가해야 합니다. 설명 클러스터 API를 사용하여 클러스터 보안 그룹을 찾을 수 있습니다.

```bash
aws eks describe-cluster --name CLUSTER_NAME --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId'
diff --git a/content/networking/sgpp/index.md b/content/networking/sgpp/index.md
index 47e6b023e..feab38f7f 100644
--- a/content/networking/sgpp/index.md
+++ b/content/networking/sgpp/index.md
@@ -109,7 +109,7 @@ Ensure that `terminationGracePeriodSeconds` is non-zero in your Pod specificatio

### Using Security Groups for Pods with Fargate

-Security groups for Pods that run on Fargate work very similarly to Pods that run on EC2 worker nodes. 
For example, you have to create the security group before referencing it in the SecurityGroupPolicy you associate with your Fargate Pod. By default, the [cluster security group](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) is assiged to all Fargate Pods when you don't explicitly assign a SecurityGroupPolicy to a Fargate Pod. For simplicity's sake, you may want to add the cluster security group to a Fagate Pod's SecurityGroupPolicy otherwise you will have to add the minimum security group rules to your security group. You can find the cluster security group using the describe-cluster API.
+Security groups for Pods that run on Fargate work very similarly to Pods that run on EC2 worker nodes. For example, you have to create the security group before referencing it in the SecurityGroupPolicy you associate with your Fargate Pod. By default, the [cluster security group](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) is assigned to all Fargate Pods when you don't explicitly assign a SecurityGroupPolicy to a Fargate Pod. For simplicity's sake, you may want to add the cluster security group to a Fargate Pod's SecurityGroupPolicy otherwise you will have to add the minimum security group rules to your security group. You can find the cluster security group using the describe-cluster API.

```bash
aws eks describe-cluster --name CLUSTER_NAME --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId'
diff --git a/content/reliability/docs/controlplane.md b/content/reliability/docs/controlplane.md
index 99838efcd..8a3da61b1 100644
--- a/content/reliability/docs/controlplane.md
+++ b/content/reliability/docs/controlplane.md
@@ -173,7 +173,7 @@ Or make sure the webhook has a fail open policy with a timeout shorter than 30 s

`Sysctl` is a Linux utility that allows users to modify kernel parameters during runtime. These kernel parameters control various aspects of the operating system's behavior, such as network, file system, virtual memory, and process management.

-Kubernetes allows assigning `sysctl` profiles for Pods. Kubernetes categorizes `systcls` as safe and unsafe. Safe `sysctls` are namespaced in the container or Pod, and setting them doesn’t impact other Pods on the node or the node itself. In contrast, unsafe sysctls are disabled by default since they can potentially disrupt other Pods or make the node unstable.
+Kubernetes allows assigning `sysctl` profiles for Pods. Kubernetes categorizes `sysctls` as safe and unsafe. Safe `sysctls` are namespaced in the container or Pod, and setting them doesn’t impact other Pods on the node or the node itself. In contrast, unsafe `sysctls` are disabled by default since they can potentially disrupt other Pods or make the node unstable.

As unsafe `sysctls` are disabled by default, the kubelet will not create a Pod with unsafe `sysctl` profile. If you create such a Pod, the scheduler will repeatedly assign such Pods to nodes, while the node fails to launch it. This infinite loop ultimately strains the cluster control plane, making the cluster unstable.
diff --git a/content/scalability/docs/index.ko.md b/content/scalability/docs/index.ko.md
index 1758ce529..f54f208b4 100644
--- a/content/scalability/docs/index.ko.md
+++ b/content/scalability/docs/index.ko.md
@@ -24,7 +24,7 @@ search:

**쿠버네티스 컨트롤 플레인**은 EKS 클러스터에는 AWS가 실행하고 사용자를 위해 자동으로 확장되는 모든 서비스 (예: 쿠버네티스 API Server) 가 포함됩니다. 컨트롤 플레인을 확장하는 것은 AWS의 책임이지만 컨트롤 플레인을 책임감 있게 사용하는 것은 사용자의 책임입니다.

-**쿠버네티스 데이터 플레인** 규모 조정은 클러스터 및 워크로드에 필요한 AWS 리소스를 다루지만 EKS 컨트롤 플레인을 벗어납니다. 
EC2 인스턴스, kublet, 스토리지를 비롯한 모든 리소스는 클러스터 확장에 따라 확장해야 합니다. +**쿠버네티스 데이터 플레인** 규모 조정은 클러스터 및 워크로드에 필요한 AWS 리소스를 다루지만 EKS 컨트롤 플레인을 벗어납니다. EC2 인스턴스, kubelet, 스토리지를 비롯한 모든 리소스는 클러스터 확장에 따라 확장해야 합니다. **클러스터 서비스**는 클러스터 내에서 실행되며 클러스터 및 워크로드에 기능을 제공하는 쿠버네티스 컨트롤러 및 애플리케이션입니다. 여기에는 [EKS 애드온](https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html)이 포함될 수 있으며 규정 준수 및 통합을 위해 설치하는 기타 서비스 또는 헬름 차트도 포함될 수 있습니다. 이런 서비스는 워크로드에 따라 달라지는 경우가 많으며 워크로드가 확장됨에 따라 클러스터 서비스도 함께 확장해야 합니다. diff --git a/content/scalability/docs/kubernetes_slos.md b/content/scalability/docs/kubernetes_slos.md index bafd27f7b..9bc2ede61 100644 --- a/content/scalability/docs/kubernetes_slos.md +++ b/content/scalability/docs/kubernetes_slos.md @@ -61,8 +61,8 @@ Kubernetes is also improving the Observability around the SLIs by adding [Promet |Metric |Definition | |--- |--- | -|apiserver_request_sli_duration_seconds | Response latency distribution (not counting webhook duration and priority & fairness queue wait times) in seconds for each verb, group, version, resource, subresource, scope and component. | -|apiserver_request_duration_seconds | Response latency distribution in seconds for each verb, dry run value, group, version, resource, subresource, scope and component. | +|`apiserver_request_sli_duration_seconds` | Response latency distribution (not counting webhook duration and priority & fairness queue wait times) in seconds for each verb, group, version, resource, subresource, scope and component. | +|`apiserver_request_duration_seconds` | Response latency distribution in seconds for each verb, dry run value, group, version, resource, subresource, scope and component. | *Note: The `apiserver_request_sli_duration_seconds` metric is available starting in Kubernetes 1.27.* @@ -111,10 +111,10 @@ The SLI metrics provide insight into how Kubernetes components are performing by Similar to the queries above you can use these metrics to gain insight into how long node scaling, image pulls and init containers are delaying the pod launch compared to Kubelet actions. -**Pod startup latency SLI -** this is the time from the pod being created to when the application containers reported as running. This includes the time it takes for the worker node capacity to be available and the pod to be scheduled, but this does not include the time it takes to pull images or for the init containers to run. +**Pod startup latency SLI -** this is the time from the pod being created to when the application containers reported as running. This includes the time it takes for the worker node capacity to be available and the pod to be scheduled, but this does not include the time it takes to pull images or for the init containers to run. `histogram_quantile(0.99, sum(rate(kubelet_pod_start_sli_duration_seconds_bucket[5m])) by (le))` -**Pod startup latency Total -** this is the time it takes the kubelet to start the pod for the first time. This is measured from when the kubelet recieves the pod via WATCH, which does not include the time for worker node scaling or scheduling. This includes the time to pull images and init containers to run. +**Pod startup latency Total -** this is the time it takes the kubelet to start the pod for the first time. This is measured from when the kubelet receives the pod via WATCH, which does not include the time for worker node scaling or scheduling. This includes the time to pull images and init containers to run. 
`histogram_quantile(0.99, sum(rate(kubelet_pod_start_duration_seconds_bucket[5m])) by (le))`
diff --git a/content/scalability/docs/quotas.md b/content/scalability/docs/quotas.md
index 3b8c435ae..cbfb11fda 100644
--- a/content/scalability/docs/quotas.md
+++ b/content/scalability/docs/quotas.md
@@ -51,7 +51,7 @@ We have seen EKS customers impacted by the quotas listed below for other AWS ser

## AWS Request Throttling

-AWS services also implement request throttling to ensure that they remain performant and available for all customers. Simliar to Service Quotas, each AWS service maintains their own request throttling thresholds. Consider reviewing the respective AWS Service documentation if your workloads will need to quickly issue a large number of API calls or if you notice request throttling errors in your application.
+AWS services also implement request throttling to ensure that they remain performant and available for all customers. Similar to Service Quotas, each AWS service maintains their own request throttling thresholds. Consider reviewing the respective AWS Service documentation if your workloads will need to quickly issue a large number of API calls or if you notice request throttling errors in your application.

EC2 API requests around provisioning EC2 network interfaces or IP addresses can encounter request throttling in large clusters or when clusters scale drastically. The table below shows some of the API actions that we have seen customers encounter request throttling from. You can review the EC2 rate limit defaults and the steps to request a rate limit increase in the [EC2 documentation on Rate Throttling](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/throttling.html).
diff --git a/content/scalability/docs/scaling_theory.md b/content/scalability/docs/scaling_theory.md
index 799354839..e8b7cea6f 100644
--- a/content/scalability/docs/scaling_theory.md
+++ b/content/scalability/docs/scaling_theory.md
@@ -79,7 +79,7 @@ When fewer errors are occurring, it is easier spot issues in the system. By peri

#### Expanding Our View

-In large scale clusters with 1,000’s of nodes we don’t want to look for bottlenecks individually. In PromQL we can find the highest values in a data set using a function called topk; K being a variable we place the number of items we want. Here we use three nodes to get an idea whether all of the Kubelets in the cluster are saturated. We have been looking at latency up to this point, now let’s see if the Kubelet is discarding events.
+In large scale clusters with 1,000’s of nodes we don’t want to look for bottlenecks individually. In PromQL we can find the highest values in a data set using a function called `topk`; K being a variable we place the number of items we want. Here we use three nodes to get an idea whether all of the Kubelets in the cluster are saturated. We have been looking at latency up to this point, now let’s see if the Kubelet is discarding events.

```
topk(3, increase(kubelet_pleg_discard_events{}[$__rate_interval]))
```
diff --git a/content/security/docs/compliance.ko.md b/content/security/docs/compliance.ko.md
index 681433328..5de9b521e 100644
--- a/content/security/docs/compliance.ko.md
+++ b/content/security/docs/compliance.ko.md
@@ -10,7 +10,7 @@ search:

다음 표는 다양한 컨테이너 서비스가 준수하는 규정 준수 프로그램을 보여줍니다. 
-| 컴플라이언스 프로그램 | Amazon ECS 오케스트레이터 | Amazon EKS 오케스트레이터| ECS Fargete | 아마존 ECR | +| 컴플라이언스 프로그램 | Amazon ECS 오케스트레이터 | Amazon EKS 오케스트레이터| ECS Fargate | 아마존 ECR | | --- |:----------:|:----------:|:---- -------:|:----------:| | PCI DSS Level 1 | 1 | 1 | 1 | 1 | | HIPAA Eligible | 1 | 1 | 1 | 1 | diff --git a/content/security/docs/data.ko.md b/content/security/docs/data.ko.md index f20016481..a6cc043b8 100644 --- a/content/security/docs/data.ko.md +++ b/content/security/docs/data.ko.md @@ -116,7 +116,7 @@ fields @timestamp, @message ### 외부 시크릿 제공자 사용 -[AWS Secret Manager](https://aws.amazon.com/secrets-manager/)와 Hishcorp의 [Vault](https://www.hashicorp.com/blog/injecting-vault-secrets-into-kubernetes-pods-via-a-sidecar/)를 포함하여 쿠버네티스 시크릿을 사용할 수 있는 몇 가지 실행 가능한 대안이 있습니다. 이런 서비스는 쿠버네티스 시크릿에서는 사용할 수 없는 세밀한 액세스 제어, 강력한 암호화, 암호 자동 교체 등의 기능을 제공합니다. Bitnami의 [Sealed Secrets](https://github.com/bitnami-labs/sealed-secrets)는 비대칭 암호화를 사용하여 "봉인된 시크릿"을 생성하는 또 다른 접근 방식입니다. 공개 키는 시크릿을 암호화하는 데 사용되는 반면 암호 해독에 사용된 개인 키는 클러스터 내에 보관되므로 Git과 같은 소스 제어 시스템에 봉인된 시크릿을 안전하게 저장할 수 있습니다. 자세한 내용은 [실드 시크릿을 사용한 쿠버네티스의 시크릿 배포 관리](https://aws.amazon.com/blogs/opensource/managing-secrets-deployment-in-kubernetes-using-sealed-secrets/)를 참조합니다. +[AWS Secret Manager](https://aws.amazon.com/secrets-manager/)와 Hashicorp의 [Vault](https://www.hashicorp.com/blog/injecting-vault-secrets-into-kubernetes-pods-via-a-sidecar/)를 포함하여 쿠버네티스 시크릿을 사용할 수 있는 몇 가지 실행 가능한 대안이 있습니다. 이런 서비스는 쿠버네티스 시크릿에서는 사용할 수 없는 세밀한 액세스 제어, 강력한 암호화, 암호 자동 교체 등의 기능을 제공합니다. Bitnami의 [Sealed Secrets](https://github.com/bitnami-labs/sealed-secrets)는 비대칭 암호화를 사용하여 "봉인된 시크릿"을 생성하는 또 다른 접근 방식입니다. 공개 키는 시크릿을 암호화하는 데 사용되는 반면 암호 해독에 사용된 개인 키는 클러스터 내에 보관되므로 Git과 같은 소스 제어 시스템에 봉인된 시크릿을 안전하게 저장할 수 있습니다. 자세한 내용은 [실드 시크릿을 사용한 쿠버네티스의 시크릿 배포 관리](https://aws.amazon.com/blogs/opensource/managing-secrets-deployment-in-kubernetes-using-sealed-secrets/)를 참조합니다. 외부 시크릿 스토어의 사용이 증가함에 따라 이를 쿠버네티스와 통합해야 할 필요성도 커졌습니다. [Secret Store CSI 드라이버](https://github.com/kubernetes-sigs/secrets-store-csi-driver)는 CSI 드라이버 모델을 사용하여 외부 시크릿 스토어로부터 시크릿을 가져오는 커뮤니티 프로젝트입니다. 현재 이 드라이버는 [AWS Secret Manager](https://github.com/aws/secrets-store-csi-driver-provider-aws), Azure, Vault 및 GCP를 지원합니다. AWS 공급자는 AWS 시크릿 관리자**와** AWS 파라미터 스토어를 모두 지원합니다. 또한 암호가 만료되면 암호가 교체되도록 구성할 수 있으며, AWS Secrets Manager 암호를 쿠버네티스 암호와 동기화할 수 있습니다. 암호의 동기화는 볼륨에서 암호를 읽는 대신 암호를 환경 변수로 참조해야 할 때 유용할 수 있습니다. diff --git a/content/security/docs/iam.md b/content/security/docs/iam.md index 01f9fe5f8..2423258f9 100644 --- a/content/security/docs/iam.md +++ b/content/security/docs/iam.md @@ -343,7 +343,7 @@ While IAM is the preferred way to authenticate users who need access to an EKS c - [Consistent OIDC authentication across multiple EKS clusters using kube-oidc-proxy](https://aws.amazon.com/blogs/opensource/consistent-oidc-authentication-across-multiple-eks-clusters-using-kube-oidc-proxy/) !!! attention - EKS natively supports OIDC authentication without using a proxy. For further information, please read the launch blog, [Introducing OIDC identity provider authentication for Amazon EKS](https://aws.amazon.com/blogs/containers/introducing-oidc-identity-provider-authentication-amazon-eks/). For an example showing how to configure EKS with Dex, a popular open source OIDC provider with connectors for a variety of different authention methods, see [Using Dex & dex-k8s-authenticator to authenticate to Amazon EKS](https://aws.amazon.com/blogs/containers/using-dex-dex-k8s-authenticator-to-authenticate-to-amazon-eks/). 
As described in the blogs, the username/group of users authenticated by an OIDC provider will appear in the Kubernetes audit log. + EKS natively supports OIDC authentication without using a proxy. For further information, please read the launch blog, [Introducing OIDC identity provider authentication for Amazon EKS](https://aws.amazon.com/blogs/containers/introducing-oidc-identity-provider-authentication-amazon-eks/). For an example showing how to configure EKS with Dex, a popular open source OIDC provider with connectors for a variety of different authentication methods, see [Using Dex & dex-k8s-authenticator to authenticate to Amazon EKS](https://aws.amazon.com/blogs/containers/using-dex-dex-k8s-authenticator-to-authenticate-to-amazon-eks/). As described in the blogs, the username/group of users authenticated by an OIDC provider will appear in the Kubernetes audit log. You can also use [AWS SSO](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) to federate AWS with an external identity provider, e.g. Azure AD. If you decide to use this, the AWS CLI v2.0 includes an option to create a named profile that makes it easy to associate an SSO session with your current CLI session and assume an IAM role. Know that you must assume a role _prior_ to running `kubectl` as the IAM role is used to determine the user's Kubernetes RBAC group. diff --git a/content/security/docs/image.ko.md b/content/security/docs/image.ko.md index ef9d21541..ffcced814 100644 --- a/content/security/docs/image.ko.md +++ b/content/security/docs/image.ko.md @@ -12,7 +12,7 @@ search: ### 최소 이미지 생성 -먼저 컨테이너 이미지에서 필요없는 바이너리를 모두 제거합니다. Dockerhub로부터 검증되지 않은 이미지를 사용하는 경우 각 컨테이너 레이어의 내용을 볼 수 있는 [Dive](https://github.com/wagoodman/dive)와 같은 애플리케이션을 사용하여 이미지를 검사합니다. 권한을 상승할 수 있는 SETUID 및 SETGID 비트가 있는 모든 바이너리를 제거하고 nc나 curl과 같이 악의적인 용도로 사용될 수 있는 셸과 유틸리티를 모두 제거하는 것을 고려합니다. 다음 명령을 사용하여 SETUID 및 SETGID 비트가 있는 파일을 찾을 수 있습니다. +먼저 컨테이너 이미지에서 필요없는 바이너리를 모두 제거합니다. Docker hub로부터 검증되지 않은 이미지를 사용하는 경우 각 컨테이너 레이어의 내용을 볼 수 있는 [Dive](https://github.com/wagoodman/dive)와 같은 애플리케이션을 사용하여 이미지를 검사합니다. 권한을 상승할 수 있는 SETUID 및 SETGID 비트가 있는 모든 바이너리를 제거하고 nc나 curl과 같이 악의적인 용도로 사용될 수 있는 셸과 유틸리티를 모두 제거하는 것을 고려합니다. 다음 명령을 사용하여 SETUID 및 SETGID 비트가 있는 파일을 찾을 수 있습니다. ```bash find / -perm /6000 -type f -exec ls -ld {} \; @@ -149,7 +149,7 @@ EKS는 ECR에서 kube-proxy, coredns 및 aws-node용 이미지를 가져오므 ### 선별된 이미지 세트 만들기 -개발자가 직접 이미지를 만들도록 허용하는 대신 조직의 다양한 애플리케이션 스택에 대해 검증된 이미지 세트를 만드는 것을 고려해 보세요. 이렇게 하면 개발자는 Dockerfile 작성 방법을 배우지 않고 코드 작성에 집중할 수 있습니다. 변경 사항이 Master에 병합되면 CI/CD 파이프라인은 자동으로 에셋을 컴파일하고, 아티팩트 리포지토리에 저장하고, 아티팩트를 적절한 이미지에 복사한 다음 ECR과 같은 Docker 레지스트리로 푸시할 수 있습니다. 최소한 개발자가 자체 Dockerfile을 만들 수 있는 기본 이미지 세트를 만들어야 합니다. 이상적으로는 Dockerhub에서 이미지를 가져오지 않는 것이 좋습니다. a) 이미지에 무엇이 들어 있는지 항상 알 수는 없고 b) 상위 1000개 이미지 중 약 [1/5](https://www.kennasecurity.com/blog/one-fifth-of-the-most-used-docker-containers-have-at-least-one-critical-vulnerability/)에는 취약점이 있기 때문입니다. 이런 이미지 및 취약성 목록은 [이 사이트](https://vulnerablecontainers.org/)에서 확인할 수 있습니다. +개발자가 직접 이미지를 만들도록 허용하는 대신 조직의 다양한 애플리케이션 스택에 대해 검증된 이미지 세트를 만드는 것을 고려해 보세요. 이렇게 하면 개발자는 Dockerfile 작성 방법을 배우지 않고 코드 작성에 집중할 수 있습니다. 변경 사항이 Master에 병합되면 CI/CD 파이프라인은 자동으로 에셋을 컴파일하고, 아티팩트 리포지토리에 저장하고, 아티팩트를 적절한 이미지에 복사한 다음 ECR과 같은 Docker 레지스트리로 푸시할 수 있습니다. 최소한 개발자가 자체 Dockerfile을 만들 수 있는 기본 이미지 세트를 만들어야 합니다. 이상적으로는 Docker hub에서 이미지를 가져오지 않는 것이 좋습니다. 
a) 이미지에 무엇이 들어 있는지 항상 알 수는 없고 b) 상위 1000개 이미지 중 약 [1/5](https://www.kennasecurity.com/blog/one-fifth-of-the-most-used-docker-containers-have-at-least-one-critical-vulnerability/)에는 취약점이 있기 때문입니다. 이런 이미지 및 취약성 목록은 [이 사이트](https://vulnerablecontainers.org/)에서 확인할 수 있습니다. ### 루트가 아닌 사용자로 실행하려면 Dockerfile에 USER 지시문을 추가 diff --git a/content/security/docs/image.md b/content/security/docs/image.md index 3479abfc8..9956ae15f 100644 --- a/content/security/docs/image.md +++ b/content/security/docs/image.md @@ -6,7 +6,7 @@ You should consider the container image as your first line of defense against an ### Create minimal images -Start by removing all extraneous binaries from the container image. If you’re using an unfamiliar image from Dockerhub, inspect the image using an application like [Dive](https://github.com/wagoodman/dive) which can show you the contents of each of the container’s layers. Remove all binaries with the SETUID and SETGID bits as they can be used to escalate privilege and consider removing all shells and utilities like nc and curl that can be used for nefarious purposes. You can find the files with SETUID and SETGID bits with the following command: +Start by removing all extraneous binaries from the container image. If you’re using an unfamiliar image from Docker hub, inspect the image using an application like [Dive](https://github.com/wagoodman/dive) which can show you the contents of each of the container’s layers. Remove all binaries with the SETUID and SETGID bits as they can be used to escalate privilege and consider removing all shells and utilities like nc and curl that can be used for nefarious purposes. You can find the files with SETUID and SETGID bits with the following command: ```bash find / -perm /6000 -type f -exec ls -ld {} \; @@ -145,7 +145,7 @@ Each ECR repository can have a lifecycle policy that sets rules for when images ### Create a set of curated images -Rather than allowing developers to create their own images, consider creating a set of vetted images for the different application stacks in your organization. By doing so, developers can forego learning how to compose Dockerfiles and concentrate on writing code. As changes are merged into Master, a CI/CD pipeline can automatically compile the asset, store it in an artifact repository and copy the artifact into the appropriate image before pushing it to a Docker registry like ECR. At the very least you should create a set of base images from which developers to create their own Dockerfiles. Ideally, you want to avoid pulling images from Dockerhub because 1/ you don't always know what is in the image and 2/ about [a fifth](https://www.kennasecurity.com/blog/one-fifth-of-the-most-used-docker-containers-have-at-least-one-critical-vulnerability/) of the top 1000 images have vulnerabilities. A list of those images and their vulnerabilities can be found [here](https://vulnerablecontainers.org/). +Rather than allowing developers to create their own images, consider creating a set of vetted images for the different application stacks in your organization. By doing so, developers can forego learning how to compose Dockerfiles and concentrate on writing code. As changes are merged into Master, a CI/CD pipeline can automatically compile the asset, store it in an artifact repository and copy the artifact into the appropriate image before pushing it to a Docker registry like ECR. At the very least you should create a set of base images from which developers to create their own Dockerfiles. 
Ideally, you want to avoid pulling images from Docker hub because 1/ you don't always know what is in the image and 2/ about [a fifth](https://www.kennasecurity.com/blog/one-fifth-of-the-most-used-docker-containers-have-at-least-one-critical-vulnerability/) of the top 1000 images have vulnerabilities. A list of those images and their vulnerabilities can be found [here](https://vulnerablecontainers.org/). ### Add the USER directive to your Dockerfiles to run as a non-root user diff --git a/content/security/docs/multiaccount.md b/content/security/docs/multiaccount.md index 585b5b561..4fee29d85 100644 --- a/content/security/docs/multiaccount.md +++ b/content/security/docs/multiaccount.md @@ -49,9 +49,9 @@ When working with a EKS cluster and multiple AWS accounts, IRSA can directly ass ##### Accessing AWS API Resources with IAM Roles For Service Accounts -[IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) allows you to deliver temporary AWS credentials to your workloads running on EKS. IRSA can be used to get temporary credentials for IAM roles in the workload accounts from the cluster account. This allows your workloads running on your EKS clusters in the cluster account to consume AWS API resources, such as S3 buckets hosted in the workload account seemlessly, and use IAM authentication for resources like Amazon RDS Databases or Amazon EFS FileSystems. +[IAM Roles for Service Accounts (IRSA)](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) allows you to deliver temporary AWS credentials to your workloads running on EKS. IRSA can be used to get temporary credentials for IAM roles in the workload accounts from the cluster account. This allows your workloads running on your EKS clusters in the cluster account to consume AWS API resources, such as S3 buckets hosted in the workload account seamlessly, and use IAM authentication for resources like Amazon RDS Databases or Amazon EFS FileSystems. -AWS API resources and other Resources that use IAM authentication in a workload account can only be accessed by credentials for IAM roles in that same workload account, except where cross account access is capable and has been explicity enabled. +AWS API resources and other Resources that use IAM authentication in a workload account can only be accessed by credentials for IAM roles in that same workload account, except where cross account access is capable and has been explicitly enabled. ###### Enabling IRSA for cross account access @@ -155,7 +155,7 @@ You would utilize [IAM roles for Service Accounts (IRSA)](https://docs.aws.amazo ### Centralized Networking -You can also utilize AWS RAM to share the VPC Subnets to workload accounts and launch Amazon EKS clusters and other AWS resources in them. This enables centralized network managment/administration, simplified network connectivity, and de-centralized EKS clusters. Refer this [AWS blog](https://aws.amazon.com/blogs/containers/use-shared-vpcs-in-amazon-eks/) for a detailed walkthrough and considerations of this approach. +You can also utilize AWS RAM to share the VPC Subnets to workload accounts and launch Amazon EKS clusters and other AWS resources in them. This enables centralized network management/administration, simplified network connectivity, and de-centralized EKS clusters. Refer this [AWS blog](https://aws.amazon.com/blogs/containers/use-shared-vpcs-in-amazon-eks/) for a detailed walkthrough and considerations of this approach. 
|![De-centralized EKS Cluster Architecture using VPC Shared Subnets](./images/multi-account-eks-shared-subnets.png)| |:--:| @@ -171,7 +171,7 @@ The decision to run with a Centralized or De-centralized will depend on your req |Cost Efficiency: | Allows reuse of EKS cluster and network resources, which promotes cost efficiency | Requires networking and cluster setups per workload, which requires additional resources| |Resilience: | Multiple workloads on the centralized cluster may be impacted if a cluster becomes impaired | If a cluster becomes impaired, the damage is limited to only the workloads that run on that cluster. All other workloads are unaffected | |Isolation & Security:|Isolation/Soft Multi-tenancy is achieved using k8s native constructs like `Namespaces`. Workloads may share the underlying resources like CPU, memory, etc. AWS resources are isolated into their own workload accounts which by default are not accessible from other AWS accounts. |Stronger isolation on compute resources as the workloads run in individual clusters and nodes that don't share any resources. AWS resources are isolated into their own workload accounts which by default are not accessible from other AWS accounts.| -|Performance & Scalabity:|As workloads grow to very large scales you may encounter kubernetes and AWS service quotas in the cluster account. You can deploy addtional cluster accounts to scale even further|As more clusters and VPCs are present, each workload has more available k8s and AWS service quota| +|Performance & Scalability:|As workloads grow to very large scales you may encounter kubernetes and AWS service quotas in the cluster account. You can deploy additional cluster accounts to scale even further|As more clusters and VPCs are present, each workload has more available k8s and AWS service quota| |Networking: | Single VPC is used per cluster, allowing for simpler connectivity for applications on that cluster | Routing must be established between the de-centralized EKS cluster VPCs | |Kubernetes Access Management: |Need to maintain many different roles and users in the cluster to provide access to all workload teams and ensure kubernetes resources are properly segregated| Simplified access management as each cluster is dedicated to a workload/team| |AWS Access Management: |AWS resources are deployed into to their own account which can only be accessed by default with IAM roles in the workload account. IAM roles in the workload accounts are assumed cross account either with IRSA or EKS Pod Identities.|AWS resources are deployed into to their own account which can only be accessed by default with IAM roles in the workload account. IAM roles in the workload accounts are delivered directly to pods with IRSA or EKS Pod Identities| diff --git a/content/security/docs/network.md b/content/security/docs/network.md index d8079b3c7..858e974cf 100644 --- a/content/security/docs/network.md +++ b/content/security/docs/network.md @@ -147,7 +147,7 @@ contains_label(arr, val) { #### Monitor the vpc-network-policy-controller, node-agent logs -Enable the EKS Control plane controller manager logs to diagnose the network policy functionality. You can stream the control plane logs to a CloudWatch log group and use [CloudWatch Log insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html) to perform advanced queries. 
From the logs, you can view what pod endpoint objects are resolved to a Network Policy, reconcilation status of the policies, and debug if the policy is working as expected. +Enable the EKS Control plane controller manager logs to diagnose the network policy functionality. You can stream the control plane logs to a CloudWatch log group and use [CloudWatch Log insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html) to perform advanced queries. From the logs, you can view what pod endpoint objects are resolved to a Network Policy, reconciliation status of the policies, and debug if the policy is working as expected. In addition, Amazon VPC CNI allows you to enable the collection and export of policy enforcement logs to [Amazon Cloudwatch](https://aws.amazon.com/cloudwatch/) from the EKS worker nodes. Once enabled, you can leverage [CloudWatch Container Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ContainerInsights.html) to provide insights on your usage related to Network Policies. @@ -450,21 +450,21 @@ Whenever there's a CSR from a workload, it will be forwarded to _istio-csr_, whi 6. Create an `istio-system` namespace. This is where the `istiod certificate` and other Istio resources will be deployed. 7. Install Istio CSR configured with AWS Private CA Issuer Plugin. You can preserve the certificate signing requests for workloads to verify that they get approved and signed (`preserveCertificateRequests=true`). - ```bash - helm install -n cert-manager cert-manager-istio-csr jetstack/cert-manager-istio-csr \ - --set "app.certmanager.issuer.group=awspca.cert-manager.io" \ - --set "app.certmanager.issuer.kind=AWSPCAClusterIssuer" \ - --set "app.certmanager.issuer.name=" \ - --set "app.certmanager.preserveCertificateRequests=true" \ - --set "app.server.maxCertificateDuration=48h" \ - --set "app.tls.certificateDuration=24h" \ - --set "app.tls.istiodCertificateDuration=24h" \ - --set "app.tls.rootCAFile=/var/run/secrets/istio-csr/ca.pem" \ - --set "volumeMounts[0].name=root-ca" \ - --set "volumeMounts[0].mountPath=/var/run/secrets/istio-csr" \ - --set "volumes[0].name=root-ca" \ - --set "volumes[0].secret.secretName=istio-root-ca" - ``` +```bash +helm install -n cert-manager cert-manager-istio-csr jetstack/cert-manager-istio-csr \ +--set "app.certmanager.issuer.group=awspca.cert-manager.io" \ +--set "app.certmanager.issuer.kind=AWSPCAClusterIssuer" \ +--set "app.certmanager.issuer.name=" \ +--set "app.certmanager.preserveCertificateRequests=true" \ +--set "app.server.maxCertificateDuration=48h" \ +--set "app.tls.certificateDuration=24h" \ +--set "app.tls.istiodCertificateDuration=24h" \ +--set "app.tls.rootCAFile=/var/run/secrets/istio-csr/ca.pem" \ +--set "volumeMounts[0].name=root-ca" \ +--set "volumeMounts[0].mountPath=/var/run/secrets/istio-csr" \ +--set "volumes[0].name=root-ca" \ +--set "volumes[0].secret.secretName=istio-root-ca" +``` 8. Install Istio with custom configurations to replace `istiod` with `cert-manager istio-csr` as the certificate provider for the mesh. This process can be carried out using the [Istio Operator](https://tetrate.io/blog/what-is-istio-operator/). diff --git a/content/security/docs/pods.ko.md b/content/security/docs/pods.ko.md index 89ad5dcb5..cb4cbf8e1 100644 --- a/content/security/docs/pods.ko.md +++ b/content/security/docs/pods.ko.md @@ -97,7 +97,7 @@ PSP 지원 중단 및 즉시 사용 가능한 파드 보안을 제어해야 하 - **Restricted:** 현재 파드 강화 모범 사례에 따라 엄격하게 제한된 정책입니다. 
이 정책은 기준선에서 상속되며 루트 또는 루트 그룹으로 실행할 수 없는 것과 같은 추가 제한 사항을 추가합니다. 제한된 정책은 애플리케이션의 기능에 영향을 미칠 수 있습니다. 이들은 주로 보안에 중요한 응용 프로그램을 실행하는 것을 목표로 합니다. -이런 정책은 [파드 실행을 위한 프로파일](https://kubernetes.io/docs/concepts/security/pod-security-standards/#profile-details)을 정의하며, 세 가지 수준의 특권(Priviledged) 액세스에서부터 제한된(Restricted) 액세스로 정렬됩니다. +이런 정책은 [파드 실행을 위한 프로파일](https://kubernetes.io/docs/concepts/security/pod-security-standards/#profile-details)을 정의하며, 세 가지 수준의 특권(Privileged) 액세스에서부터 제한된(Restricted) 액세스로 정렬됩니다. PSS에서 정의한 컨트롤을 구현하기 위해 PSA는 세 가지 모드로 작동합니다. diff --git a/content/upgrades/index.md b/content/upgrades/index.md index 171db9c76..f0dd19206 100644 --- a/content/upgrades/index.md +++ b/content/upgrades/index.md @@ -54,7 +54,7 @@ To upgrade a cluster you will need to take the following actions: 1. [Review the Kubernetes and EKS release notes.](#use-the-eks-documentation-to-create-an-upgrade-checklist) 2. [Take a backup of the cluster. (optional)](#backup-the-cluster-before-upgrading) 3. [Identify and remediate deprecated and removed API usage in your workloads.](#identify-and-remediate-removed-api-usage-before-upgrading-the-control-plane) -4. [Ensure Managed Node Groups, if used, are on the same Kubernetes version as the control plane.](#track-the-version-skew-of-nodes-ensure-managed-node-groups-are-on-the-same-version-as-the-control-plane-before-upgrading) EKS managed node groups and nodes created by EKS Fargate Profiles support 2 minor version skew between the control plane and data plane for Kubernetes version 1.27 and below. Starting 1.28 and above, EKS managed node groups and nodes created by EKS Fargate Profiles support 3 minor version skew betweeen control plane and data plane. For example, if your EKS control plane version is 1.28, you can safely use kubelet versions as old as 1.25. If your EKS version is 1.27, the oldest kubelet version you can use is 1.25. +4. [Ensure Managed Node Groups, if used, are on the same Kubernetes version as the control plane.](#track-the-version-skew-of-nodes-ensure-managed-node-groups-are-on-the-same-version-as-the-control-plane-before-upgrading) EKS managed node groups and nodes created by EKS Fargate Profiles support 2 minor version skew between the control plane and data plane for Kubernetes version 1.27 and below. Starting 1.28 and above, EKS managed node groups and nodes created by EKS Fargate Profiles support 3 minor version skew between control plane and data plane. For example, if your EKS control plane version is 1.28, you can safely use kubelet versions as old as 1.25. If your EKS version is 1.27, the oldest kubelet version you can use is 1.25. 5. [Upgrade the cluster control plane using the AWS console or cli.](https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html) 6. [Review add-on compatibility.](#upgrade-add-ons-and-components-using-the-kubernetes-api) Upgrade your Kubernetes add-ons and custom controllers, as required. 7. [Update kubectl.](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html) diff --git a/content/windows/docs/hardening.md b/content/windows/docs/hardening.md index bfa6462f0..d6ecd50b8 100644 --- a/content/windows/docs/hardening.md +++ b/content/windows/docs/hardening.md @@ -63,7 +63,7 @@ https://inspector-agent.amazonaws.com/windows/installer/latest/AWSAgentInstall.e 2. Transfer the agent to the Windows worker node. 3. Run the following command on PowerShell to install the Amazon Inspector Agent: `.\AWSAgentInstall.exe /install` -Below is the ouput after the first run. 
As you can see, it generated findings based on the [CVE](https://cve.mitre.org/) database. You can use this to harden your Worker nodes or create an AMI based on the hardened configurations. +Below is the output after the first run. As you can see, it generated findings based on the [CVE](https://cve.mitre.org/) database. You can use this to harden your Worker nodes or create an AMI based on the hardened configurations. ![](./images/inspector-agent.png) @@ -72,7 +72,7 @@ For more information on Amazon Inspector, including how to install Amazon Inspec ## Amazon GuardDuty > [Amazon GuardDuty](https://aws.amazon.com/guardduty/) is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3. With the cloud, the collection and aggregation of account and network activities is simplified, but it can be time consuming for security teams to continuously analyze event log data for potential threats. -By using Amazon GuardDuty you have visilitiby on malicious actitivy against Windows worker nodes, like RDP brute force and Port Probe attacks. +By using Amazon GuardDuty you have visibility on malicious activity against Windows worker nodes, like RDP brute force and Port Probe attacks. Watch the [Threat Detection for Windows Workloads using Amazon GuardDuty](https://www.youtube.com/watch?v=ozEML585apQ) video to learn how to implement and run CIS Benchmarks on Optimized EKS Windows AMI diff --git a/content/windows/docs/logging.md b/content/windows/docs/logging.md index 9ed65edfd..8c2c07d93 100644 --- a/content/windows/docs/logging.md +++ b/content/windows/docs/logging.md @@ -8,7 +8,7 @@ The Log collection mechanism retrieves STDOUT/STDERR logs from Kubernetes pods. More detailed information about log streaming from Windows workloads to CloudWatch is explained [here](https://aws.amazon.com/blogs/containers/streaming-logs-from-amazon-eks-windows-pods-to-amazon-cloudwatch-logs-using-fluentd/) -## Logging Recomendations +## Logging Recommendations The general logging best practices are no different when operating Windows workloads in Kubernetes. diff --git a/content/windows/docs/monitoring.ko.md b/content/windows/docs/monitoring.ko.md index 9475208f0..9b05f3d03 100644 --- a/content/windows/docs/monitoring.ko.md +++ b/content/windows/docs/monitoring.ko.md @@ -66,7 +66,7 @@ scrape_configs: 쿠버네티스 서비스 그룹을 모니터링하는 방법을 선언적으로 지정하는 ServiceMonitor는 쿠버네티스 내에서 메트릭을 스크랩하려는 애플리케이션을 정의하는 데 사용됩니다.ServiceMonitor 내에서 운영자가 쿠버네티스 서비스를 식별하는 데 사용할 수 있는 쿠버네티스 레이블을 지정합니다. 쿠버네티스 서비스는 쿠버네티스 서비스를 식별하고, 쿠버네티스 서비스는 다시 우리가 모니터링하고자 하는 파드를 식별합니다. -ServiceMonitor를 활용하려면 특정 윈도우 대상을 가리키는 엔드포인트 객체, 윈도우 노드용 헤드리스 서비스 및 ServiceMontor를 생성해야 합니다. +ServiceMonitor를 활용하려면 특정 윈도우 대상을 가리키는 엔드포인트 객체, 윈도우 노드용 헤드리스 서비스 및 ServiceMonitor를 생성해야 합니다. ```yaml apiVersion: v1 diff --git a/content/windows/docs/monitoring.md b/content/windows/docs/monitoring.md index a6d95da45..bed55d69c 100644 --- a/content/windows/docs/monitoring.md +++ b/content/windows/docs/monitoring.md @@ -60,7 +60,7 @@ A better and recommended way to add targets is to use a Custom Resource Definit The ServiceMonitor, which declaratively specifies how groups of Kubernetes services should be monitored, is used to define an application you wish to scrape metrics from within Kubernetes. Within the ServiceMonitor we specify the Kubernetes labels that the operator can use to identify the Kubernetes Service which in turn identifies the Pods, that we wish to monitor. 
-In order to leverage the ServiceMonitor, create an Endpoint object pointing to specific Windows targets, a headless service and a ServiceMontor for the Windows nodes. +In order to leverage the ServiceMonitor, create an Endpoint object pointing to specific Windows targets, a headless service and a ServiceMonitor for the Windows nodes. ```yaml apiVersion: v1 diff --git a/content/windows/docs/networking.md b/content/windows/docs/networking.md index c4a280bb3..f232246a6 100644 --- a/content/windows/docs/networking.md +++ b/content/windows/docs/networking.md @@ -23,7 +23,7 @@ The number of pods that a Windows worker node can support is dictated by the siz ``` Here, instead of allocating secondary IPv4 addresses, VPC Resource Controller will allocate `/28 prefixes` and therefore, the overall number of available IPv4 addresses will be boosted 16 times. -Using the formula above, we can calculate max pods for an Windows worker noded based on a m5.large instance as below: +Using the formula above, we can calculate max pods for an Windows worker node based on a m5.large instance as below: - By default, when running in secondary IP mode- ``` 10 secondary IPv4 addresses per ENI - 1 = 9 available IPv4 addresses diff --git a/content/windows/docs/oom.ko.md b/content/windows/docs/oom.ko.md index deeae6418..f0b6b5a60 100644 --- a/content/windows/docs/oom.ko.md +++ b/content/windows/docs/oom.ko.md @@ -21,7 +21,7 @@ search: --kube-reserved memory=0.5Gi,ephemeral-storage=1Gi --system-reserved memory=1.5Gi,ephemeral-storage=1Gi --eviction-hard memory.available<200Mi,nodefs.available<10%" ``` -eksctl을 배포 도구로 사용하는 경우, 다음 https://eksctl.io/usage/customizing-the-kubelet/ 문서를 참조하여 kublet을 커스터마이즈 할 수 있습니다. +eksctl을 배포 도구로 사용하는 경우, 다음 https://eksctl.io/usage/customizing-the-kubelet/ 문서를 참조하여 kubelet을 커스터마이즈 할 수 있습니다. ## 윈도우 컨테이너 메모리 요구 사항 [Microsoft 문서](https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/system-requirements)에 따르면 NANO용 Windows Server 베이스 이미지에는 최소 30MB가 필요한 반면 윈도우 Server Core 이미지에는 45MB가 필요합니다. 이 수치는 .NET 프레임워크, IIS 웹 서비스 및 응용 프로그램과 같은 윈도우 구성 요소를 추가함에 따라 증가합니다. diff --git a/content/windows/docs/patching.ko.md b/content/windows/docs/patching.ko.md index fc12f6ac5..149f6962c 100644 --- a/content/windows/docs/patching.ko.md +++ b/content/windows/docs/patching.ko.md @@ -26,7 +26,7 @@ Amazon은 2개의 캐시된 윈도우 컨테이너 이미지를 포함하는 EKS 캐시된 이미지는 main OS 업데이트에 따라 업데이트 됩니다. Microsoft가 윈도우 컨테이너 베이스 이미지에 직접적인 영향을 미치는 새로운 윈도우 업데이트를 출시하면 해당 업데이트는 main OS에서 일반적인 윈도우 업데이트(ordinary Windows Update)로 시작 됩니다. 환경을 최신 상태로 유지하면 노드 및 컨테이너 수준에서 보다 안전한 환경이 제공됩니다. -윈도우 컨테이너 이미지의 크기는 푸시/풀 수행에 영향을 미치므로 컨테이너 시작 시간(conatiner startup time)이 느려질 수 있습니다. [윈도우 컨테이너 이미지 캐싱](https://aws.amazon.com/blogs/containers/speeding-up-windows-container-launch-times-with-ec2-image-builder-and-image-cache-strategy/)에 방식으로 컨테이너 이미지를 캐싱하면 컨테이너 시작 대신 AMI 빌드 생성시 비용이 많이 드는 I/O 작업(파일 추출)이 발생할 수 있습니다. 따라서 필요한 모든 이미지 레이어가 AMI에서 추출되어 바로 사용할 수 있게 되므로 윈도우 컨테이너가 시작되고 트래픽 수신을 시작할 수 있는 시간이 단축됩니다. 푸시 작업 중에는 이미지를 구성하는 레이어만 저장소에 업로드됩니다. +윈도우 컨테이너 이미지의 크기는 푸시/풀 수행에 영향을 미치므로 컨테이너 시작 시간(container startup time)이 느려질 수 있습니다. [윈도우 컨테이너 이미지 캐싱](https://aws.amazon.com/blogs/containers/speeding-up-windows-container-launch-times-with-ec2-image-builder-and-image-cache-strategy/)에 방식으로 컨테이너 이미지를 캐싱하면 컨테이너 시작 대신 AMI 빌드 생성시 비용이 많이 드는 I/O 작업(파일 추출)이 발생할 수 있습니다. 따라서 필요한 모든 이미지 레이어가 AMI에서 추출되어 바로 사용할 수 있게 되므로 윈도우 컨테이너가 시작되고 트래픽 수신을 시작할 수 있는 시간이 단축됩니다. 푸시 작업 중에는 이미지를 구성하는 레이어만 저장소에 업로드됩니다. 
다음 예제에서는 Amazon ECR에서 **fluentd-windows-sac2004** 이미지의 크기가 **390.18MB**에 불과하다는 것을 보여줍니다. 푸시 작업 중에 발생한 업로드 양입니다.
diff --git a/content/windows/docs/scheduling.md b/content/windows/docs/scheduling.md
index fd7eb68b0..abab6fb78 100644
--- a/content/windows/docs/scheduling.md
+++ b/content/windows/docs/scheduling.md
@@ -19,7 +19,7 @@ In Enterprise environments, it's not uncommon to have a large number of pre-exis

For example: `--register-with-taints='os=windows:NoSchedule'`

-If you are using EKS, eksctl offers ways to apply taints through clusterConfig:
+If you are using EKS, `eksctl` offers ways to apply taints through clusterConfig:

```yaml
NodeGroups:
diff --git a/cspell.config.yaml b/cspell.config.yaml
new file mode 100644
index 000000000..0b077592f
--- /dev/null
+++ b/cspell.config.yaml
@@ -0,0 +1,24 @@
+patterns:
+  - name: markdown_code_block
+    pattern: "/^```[\\s\\S]*?^\\s*```/gm"
+  - name: markdown_code_snippet
+    pattern: "/`(.*)`/g"
+  - name: code_block
+    pattern: "/^:::code[\\s\\S]*?^\\s*:::/gm"
+  - name: markdown_comment
+    pattern: "/