Fix minor typos and spellcheck on PRs #587

Closed · wants to merge 2 commits
18 changes: 18 additions & 0 deletions .github/workflows/spellcheck.yml
@@ -0,0 +1,18 @@
name: Spell Check

on: [pull_request]

jobs:

  spell-check:
    name: "Spell checker"
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Use Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: |
          npm install --save-dev cspell@latest
          npx cspell lint "content/**/*.md"
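If the allow-list below should also be visible to cspell itself, one hedged option (not part of this PR, which adds a separate `.spelling` file instead) is a `cspell.config.yaml` at the repository root; the word sample and ignore path are illustrative:

```yaml
# Hypothetical cspell.config.yaml: feeds project terms to cspell directly.
# This PR keeps the terms in a separate .spelling file instead.
version: "0.2"
language: en
words:
  - EKS
  - kubelet
  - Karpenter
  - Bottlerocket
ignorePaths:
  - node_modules/**
```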
63 changes: 63 additions & 0 deletions .spelling
@@ -0,0 +1,63 @@
EKS
kubelet
kubelets
NTFS
syscall
syscalls
AWSVPC
Kyverno
tmpfs
Runtimeclass
tolerations
ONTAP
LTSC
ltsc
daemonset
eksctl
Karpenter
Fargate
Datacenter
containerd
certmanager
istiod
ADOT
ClowdHaus
Dockershim
Inspektor
Seccomp
Qualys
Nodegroup
Nodegroups
Velero
autoscaler
Autoscaler
autoscalers
Autoscalers
SETUID
SETGID
Sandboxed
sandboxed
sandboxing
Sandboxing
pagefiles
udev
Anchore
Palo
NTLM
ABAC
Bottlerocket
Sysdig
routable
ipamd
schedulable
subresource
Dockerfile
Dockerfiles
Stackrox
Tigera
Kaniko
buildah
Burstable
burstable
Runbooks
runbooks
2 changes: 1 addition & 1 deletion content/cost_optimization/awareness.md
@@ -45,7 +45,7 @@ Tags don't have any semantic meaning to Amazon EKS and are interpreted strictly

AWS Trusted Advisor offers a rich set of best practice checks and recommendations across five categories: cost optimization; security; fault tolerance; performance; and service limits.

For Cost Optimization, Trusted Advisor helps eliminate unused and idle resources and recommends making commitments to reserved capacity. The key action items that will help Amazon EKS will be around low utilsed EC2 instances, unassociated Elastic IP addresses, Idle Load Balancers, underutilized EBS volumes among other things. The complete list of checks are provided at https://aws.amazon.com/premiumsupport/technology/trusted-advisor/best-practice-checklist/.
For Cost Optimization, Trusted Advisor helps eliminate unused and idle resources and recommends making commitments to reserved capacity. The key action items for Amazon EKS are around low-utilization EC2 instances, unassociated Elastic IP addresses, idle Load Balancers, and underutilized EBS volumes, among other things. The complete list of checks is provided at https://aws.amazon.com/premiumsupport/technology/trusted-advisor/best-practice-checklist/.

The Trusted Advisor also provides Savings Plans and Reserved Instances recommendations for EC2 instances and Fargate which allows you to commit to a consistent usage amount in exchange for discounted rates.

2 changes: 1 addition & 1 deletion content/cost_optimization/optimizing_WIP.md
@@ -4,7 +4,7 @@ Right Sizing as per the AWS Well-Architected Framework, is “… using the lowe

When you specify the resource `requests` for the Containers in a Pod, the scheduler uses this information to decide which node to place the Pod on. When you specify a resource `limits` for a Container, the kubelet enforces those limits so that the running container is not allowed to use more of that resource than the limit you set. The details of how Kubernetes manages resources for containers are given in the [documentation](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).

In Kubernetes, this means setting the right compute resources ([CPU and memory are collectively referred to as compute resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/)) - setting the resource `requests` that align as close as possible to the actual utilization. The tools for getting the actual resource usags of Pods are given in the section on Rexommendations below.
In Kubernetes, this means setting the right compute resources ([CPU and memory are collectively referred to as compute resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/)) - setting the resource `requests` that align as closely as possible to the actual utilization. The tools for getting the actual resource usages of Pods are given in the section on Recommendations below.

**Amazon EKS on AWS Fargate**: When pods are scheduled on Fargate, the vCPU and memory reservations within the pod specification determine how much CPU and memory to provision for the pod. If you do not specify a vCPU and memory combination, then the smallest available combination is used (.25 vCPU and 0.5 GB memory). The list of vCPU and memory combinations that are available for pods running on Fargate are listed in the [Amazon EKS User Guide](https://docs.aws.amazon.com/eks/latest/userguide/fargate-pod-configuration.html).
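To make the `requests`/`limits` guidance above concrete, here is a minimal, illustrative Pod spec; the name, image, and values are hypothetical and should be sized from observed usage:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: right-sized-app        # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:stable      # placeholder image
      resources:
        requests:              # used by the scheduler for placement (and Fargate sizing)
          cpu: 250m
          memory: 512Mi
        limits:                # enforced by the kubelet at runtime
          cpu: 500m
          memory: 512Mi
```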

2 changes: 1 addition & 1 deletion content/networking/sgpp/index.ko.md
@@ -115,7 +115,7 @@ kubectl set env daemonset -n kube-system aws-node AWS_VPC_K8S_CNI_EXTERNALSNAT=t

### Using Security Groups for Pods with Fargate

Security groups for Pods running on Fargate work very similarly to Pods running on EC2 worker nodes. For example, you must create the security group before referencing it in the security group policy you attach to a Fargate Pod. By default, if you don't explicitly assign a security group policy to a Fargate Pod, the [cluster security group](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) is assigned to all Fargate Pods. For simplicity's sake, you can add the cluster security group to a Fagate Pod's SecurityGroupPolicy; otherwise, you will have to add the minimum security group rules to your security group. You can find the cluster security group using the describe-cluster API.
Security groups for Pods running on Fargate work very similarly to Pods running on EC2 worker nodes. For example, you must create the security group before referencing it in the security group policy you attach to a Fargate Pod. By default, if you don't explicitly assign a security group policy to a Fargate Pod, the [cluster security group](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) is assigned to all Fargate Pods. For simplicity's sake, you can add the cluster security group to a Fargate Pod's SecurityGroupPolicy; otherwise, you will have to add the minimum security group rules to your security group. You can find the cluster security group using the describe-cluster API.

```bash
aws eks describe-cluster --name CLUSTER_NAME --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId'
2 changes: 1 addition & 1 deletion content/networking/sgpp/index.md
@@ -109,7 +109,7 @@ Ensure that `terminationGracePeriodSeconds` is non-zero in your Pod specificatio

### Using Security Groups for Pods with Fargate

Security groups for Pods that run on Fargate work very similarly to Pods that run on EC2 worker nodes. For example, you have to create the security group before referencing it in the SecurityGroupPolicy you associate with your Fargate Pod. By default, the [cluster security group](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) is assiged to all Fargate Pods when you don't explicitly assign a SecurityGroupPolicy to a Fargate Pod. For simplicity's sake, you may want to add the cluster security group to a Fagate Pod's SecurityGroupPolicy otherwise you will have to add the minimum security group rules to your security group. You can find the cluster security group using the describe-cluster API.
Security groups for Pods that run on Fargate work very similarly to Pods that run on EC2 worker nodes. For example, you have to create the security group before referencing it in the SecurityGroupPolicy you associate with your Fargate Pod. By default, the [cluster security group](https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html) is assigned to all Fargate Pods when you don't explicitly assign a SecurityGroupPolicy to a Fargate Pod. For simplicity's sake, you may want to add the cluster security group to a Fargate Pod's SecurityGroupPolicy; otherwise, you will have to add the minimum security group rules to your security group. You can find the cluster security group using the describe-cluster API.
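For reference, a hedged sketch of such a SecurityGroupPolicy is shown here; the name, selector, and group ID are placeholders, and the describe-cluster command below returns the actual cluster security group ID:

```yaml
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: fargate-app-sgp        # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: fargate-app         # hypothetical label
  securityGroups:
    groupIds:
      - sg-0123456789abcdef0   # placeholder for the cluster security group ID
```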

```bash
aws eks describe-cluster --name CLUSTER_NAME --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId'
2 changes: 1 addition & 1 deletion content/reliability/docs/controlplane.md
@@ -173,7 +173,7 @@ Or make sure the webhook has a fail open policy with a timeout shorter than 30 s

`Sysctl` is a Linux utility that allows users to modify kernel parameters during runtime. These kernel parameters control various aspects of the operating system's behavior, such as network, file system, virtual memory, and process management.

Kubernetes allows assigning `sysctl` profiles for Pods. Kubernetes categorizes `systcls` as safe and unsafe. Safe `sysctls` are namespaced in the container or Pod, and setting them doesn’t impact other Pods on the node or the node itself. In contrast, unsafe sysctls are disabled by default since they can potentially disrupt other Pods or make the node unstable.
Kubernetes allows assigning `sysctl` profiles for Pods. Kubernetes categorizes `sysctls` as safe and unsafe. Safe `sysctls` are namespaced in the container or Pod, and setting them doesn’t impact other Pods on the node or the node itself. In contrast, unsafe `sysctls` are disabled by default since they can potentially disrupt other Pods or make the node unstable.

As unsafe `sysctls` are disabled by default, the kubelet will not create a Pod with unsafe `sysctl` profile. If you create such a Pod, the scheduler will repeatedly assign such Pods to nodes, while the node fails to launch it. This infinite loop ultimately strains the cluster control plane, making the cluster unstable.
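To illustrate the safe/unsafe distinction, a Pod can request a namespaced (safe) sysctl through its `securityContext`; unsafe sysctls would additionally need to be allowed on the node via the kubelet's `--allowed-unsafe-sysctls` flag. The values here are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo            # hypothetical name
spec:
  securityContext:
    sysctls:
      - name: net.ipv4.ip_local_port_range   # namespaced, considered safe
        value: "1024 65535"
  containers:
    - name: app
      image: nginx:stable      # placeholder image
```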

2 changes: 1 addition & 1 deletion content/scalability/docs/index.ko.md
@@ -24,7 +24,7 @@ search:

The **Kubernetes control plane** includes all of the services in an EKS cluster that AWS runs and scales automatically for you (for example, the Kubernetes API Server). Scaling the control plane is AWS's responsibility, but using the control plane responsibly is the user's responsibility.

Scaling the **Kubernetes data plane** covers the AWS resources needed by your cluster and workloads, but it falls outside the EKS control plane. All resources, including EC2 instances, kublet, and storage, must scale as the cluster scales.
Scaling the **Kubernetes data plane** covers the AWS resources needed by your cluster and workloads, but it falls outside the EKS control plane. All resources, including EC2 instances, kubelet, and storage, must scale as the cluster scales.

**Cluster services** are Kubernetes controllers and applications that run inside the cluster and provide functionality to the cluster and workloads. These can include [EKS add-ons](https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html) as well as other services or Helm charts you install for compliance and integrations. These services often depend on your workloads, and as your workloads scale, your cluster services should scale with them.

8 changes: 4 additions & 4 deletions content/scalability/docs/kubernetes_slos.md
@@ -61,8 +61,8 @@ Kubernetes is also improving the Observability around the SLIs by adding [Promet

|Metric |Definition |
|--- |--- |
|apiserver_request_sli_duration_seconds | Response latency distribution (not counting webhook duration and priority & fairness queue wait times) in seconds for each verb, group, version, resource, subresource, scope and component. |
|apiserver_request_duration_seconds | Response latency distribution in seconds for each verb, dry run value, group, version, resource, subresource, scope and component. |
|`apiserver_request_sli_duration_seconds` | Response latency distribution (not counting webhook duration and priority & fairness queue wait times) in seconds for each verb, group, version, resource, subresource, scope and component. |
|`apiserver_request_duration_seconds` | Response latency distribution in seconds for each verb, dry run value, group, version, resource, subresource, scope and component. |

*Note: The `apiserver_request_sli_duration_seconds` metric is available starting in Kubernetes 1.27.*
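As a hedged illustration of how this SLI metric might be consumed, a Prometheus recording rule could precompute a p99 latency series; the group and record names are hypothetical:

```yaml
groups:
  - name: apiserver-sli        # hypothetical rule group
    rules:
      - record: apiserver:request_sli_duration_seconds:p99
        expr: |
          histogram_quantile(0.99,
            sum(rate(apiserver_request_sli_duration_seconds_bucket{verb!="WATCH"}[5m])) by (le, verb))
```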

@@ -111,10 +111,10 @@ The SLI metrics provide insight into how Kubernetes components are performing by

Similar to the queries above you can use these metrics to gain insight into how long node scaling, image pulls and init containers are delaying the pod launch compared to Kubelet actions.

**Pod startup latency SLI -** this is the time from the pod being created to when the application containers reported as running. This includes the time it takes for the worker node capacity to be available and the pod to be scheduled, but this does not include the time it takes to pull images or for the init containers to run.
**Pod startup latency SLI -** this is the time from the pod being created to when the application containers reported as running. This includes the time it takes for the worker node capacity to be available and the pod to be scheduled, but this does not include the time it takes to pull images or for the init containers to run.
`histogram_quantile(0.99, sum(rate(kubelet_pod_start_sli_duration_seconds_bucket[5m])) by (le))`

**Pod startup latency Total -** this is the time it takes the kubelet to start the pod for the first time. This is measured from when the kubelet recieves the pod via WATCH, which does not include the time for worker node scaling or scheduling. This includes the time to pull images and init containers to run.
**Pod startup latency Total -** this is the time it takes the kubelet to start the pod for the first time. This is measured from when the kubelet receives the pod via WATCH, which does not include the time for worker node scaling or scheduling. This includes the time to pull images and init containers to run.
`histogram_quantile(0.99, sum(rate(kubelet_pod_start_duration_seconds_bucket[5m])) by (le))`


2 changes: 1 addition & 1 deletion content/scalability/docs/quotas.md
@@ -51,7 +51,7 @@ We have seen EKS customers impacted by the quotas listed below for other AWS ser

## AWS Request Throttling

AWS services also implement request throttling to ensure that they remain performant and available for all customers. Simliar to Service Quotas, each AWS service maintains their own request throttling thresholds. Consider reviewing the respective AWS Service documentation if your workloads will need to quickly issue a large number of API calls or if you notice request throttling errors in your application.
AWS services also implement request throttling to ensure that they remain performant and available for all customers. Similar to Service Quotas, each AWS service maintains their own request throttling thresholds. Consider reviewing the respective AWS Service documentation if your workloads will need to quickly issue a large number of API calls or if you notice request throttling errors in your application.

EC2 API requests around provisioning EC2 network interfaces or IP addresses can encounter request throttling in large clusters or when clusters scale drastically. The table below shows some of the API actions that we have seen customers encounter request throttling from.
You can review the EC2 rate limit defaults and the steps to request a rate limit increase in the [EC2 documentation on Rate Throttling](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/throttling.html).
2 changes: 1 addition & 1 deletion content/scalability/docs/scaling_theory.md
@@ -79,7 +79,7 @@ When fewer errors are occurring, it is easier spot issues in the system. By peri

#### Expanding Our View

In large scale clusters with 1,000’s of nodes we don’t want to look for bottlenecks individually. In PromQL we can find the highest values in a data set using a function called topk; K being a variable we place the number of items we want. Here we use three nodes to get an idea whether all of the Kubelets in the cluster are saturated. We have been looking at latency up to this point, now let’s see if the Kubelet is discarding events.
In large scale clusters with 1,000’s of nodes we don’t want to look for bottlenecks individually. In PromQL we can find the highest values in a data set using a function called `topk`; K being the number of items we want. Here we use three nodes to get an idea whether all of the Kubelets in the cluster are saturated. We have been looking at latency up to this point, now let’s see if the Kubelet is discarding events.

```
topk(3, increase(kubelet_pleg_discard_events{}[$__rate_interval]))
2 changes: 1 addition & 1 deletion content/security/docs/compliance.ko.md
@@ -10,7 +10,7 @@ search:

The following table shows the compliance programs that the various container services conform to.

| Compliance Program | Amazon ECS Orchestrator | Amazon EKS Orchestrator | ECS Fargete | Amazon ECR |
| Compliance Program | Amazon ECS Orchestrator | Amazon EKS Orchestrator | ECS Fargate | Amazon ECR |
| --- |:----------:|:----------:|:----------:|:----------:|
| PCI DSS Level 1 | 1 | 1 | 1 | 1 |
| HIPAA Eligible | 1 | 1 | 1 | 1 |
2 changes: 1 addition & 1 deletion content/security/docs/data.ko.md
@@ -116,7 +116,7 @@ fields @timestamp, @message

### Using an external secrets provider

There are several viable alternatives to using Kubernetes Secrets, including [AWS Secret Manager](https://aws.amazon.com/secrets-manager/) and Hishcorp's [Vault](https://www.hashicorp.com/blog/injecting-vault-secrets-into-kubernetes-pods-via-a-sidecar/). These services offer features such as fine-grained access control, strong encryption, and automatic secret rotation that are not available with Kubernetes Secrets. Bitnami's [Sealed Secrets](https://github.com/bitnami-labs/sealed-secrets) is another approach that uses asymmetric encryption to create "sealed secrets". A public key is used to encrypt the secret, while the private key used for decryption is kept within the cluster, allowing sealed secrets to be stored safely in source control systems such as Git. See [Managing secrets deployment in Kubernetes using Sealed Secrets](https://aws.amazon.com/blogs/opensource/managing-secrets-deployment-in-kubernetes-using-sealed-secrets/) for more information.
There are several viable alternatives to using Kubernetes Secrets, including [AWS Secret Manager](https://aws.amazon.com/secrets-manager/) and Hashicorp's [Vault](https://www.hashicorp.com/blog/injecting-vault-secrets-into-kubernetes-pods-via-a-sidecar/). These services offer features such as fine-grained access control, strong encryption, and automatic secret rotation that are not available with Kubernetes Secrets. Bitnami's [Sealed Secrets](https://github.com/bitnami-labs/sealed-secrets) is another approach that uses asymmetric encryption to create "sealed secrets". A public key is used to encrypt the secret, while the private key used for decryption is kept within the cluster, allowing sealed secrets to be stored safely in source control systems such as Git. See [Managing secrets deployment in Kubernetes using Sealed Secrets](https://aws.amazon.com/blogs/opensource/managing-secrets-deployment-in-kubernetes-using-sealed-secrets/) for more information.

As the use of external secret stores has grown, so has the need to integrate them with Kubernetes. The [Secrets Store CSI Driver](https://github.com/kubernetes-sigs/secrets-store-csi-driver) is a community project that uses the CSI driver model to fetch secrets from external secret stores. Currently the driver supports [AWS Secrets Manager](https://github.com/aws/secrets-store-csi-driver-provider-aws), Azure, Vault, and GCP. The AWS provider supports both AWS Secrets Manager **and** AWS Parameter Store. It can also be configured to rotate secrets when they expire and can synchronize AWS Secrets Manager secrets to Kubernetes Secrets. Synchronizing secrets can be useful when you need to reference a secret as an environment variable instead of reading it from a volume.
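For illustration, a minimal SecretProviderClass for the AWS provider might look like the sketch below; the names are placeholders, not values from the guide:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-aws-secrets        # hypothetical name
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "my-app/db-password"   # placeholder secret name
        objectType: "secretsmanager"
```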
