Sync PR #348 (Update docs for custom runtimes and registry rewrites) from Community docs #77

Merged 1 commit on Dec 6, 2024.
35 changes: 22 additions & 13 deletions versions/latest/modules/en/pages/advanced.adoc
You can extend the K3s base template instead of copy-pasting the complete stock template from the K3s source code.
BinaryName = "/usr/bin/custom-container-runtime"
----

== Alternative Container Runtime Support

K3s will automatically detect alternative container runtimes if they are present when K3s starts. Supported container runtimes are:

----
crun, lunatic, nvidia, nvidia-cdi, nvidia-experimental, slight, spin, wasmedge, wasmer, wasmtime, wws
----
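As a rough illustration of what "automatically detect" means here, you can report which of these runtimes' executables a node provides. This is a sketch only: K3s performs its own search over known binary locations, and the exact executable names checked below (for example `nvidia-container-runtime` for the `nvidia` runtime) are assumptions for the example.

```shell
# Sketch: report which of the supported alternative runtimes appear to be
# installed on this node, by checking for their executables on the PATH.
# (Illustrative only; K3s uses its own detection logic and search paths.)
found_report=$(
  for rt in crun lunatic nvidia-container-runtime slight spin wasmedge wasmer wasmtime wws; do
    if command -v "$rt" >/dev/null 2>&1; then
      echo "$rt: found"
    else
      echo "$rt: not found"
    fi
  done
)
echo "$found_report"
```

A runtime reported as "not found" here would simply not be added to the containerd configuration when K3s starts.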

NVIDIA GPUs require installation of the NVIDIA Container Runtime in order to schedule and run accelerated workloads in Pods. To use NVIDIA GPUs with K3s, perform the following steps:

. Install the nvidia-container package repository on the node by following the instructions at:
https://nvidia.github.io/libnvidia-container/
. Install the nvidia container runtime packages. For example:
`apt install -y nvidia-container-runtime cuda-drivers-fabricmanager-515 nvidia-headless-515-server`
. xref:installation/installation.adoc[Install K3s], or restart it if already installed.
. Confirm that the nvidia container runtime has been found by k3s:
`grep nvidia /var/lib/rancher/k3s/agent/etc/containerd/config.toml`

If these steps are followed properly, K3s will automatically add NVIDIA runtimes to the containerd configuration, depending on what runtime executables are found.

[NOTE]
.Version Gate
====
The `--default-runtime` flag and built-in RuntimeClass resources are available as of the December 2023 releases: v1.29.0+k3s1, v1.28.5+k3s1, v1.27.9+k3s1, v1.26.12+k3s1.
Prior to these releases, you must deploy your own RuntimeClass resources for any runtimes you want to reference in Pod specs.
====

K3s includes Kubernetes RuntimeClass definitions for all supported alternative runtimes. You can select one of these to replace `runc` as the default runtime on a node by setting the `--default-runtime` value via the k3s CLI or config file.
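For example, to make `nvidia` the default runtime on a node (a sketch; it assumes the NVIDIA runtime has been installed and detected as described above), set the value in the K3s config file, or pass the equivalent `--default-runtime=nvidia` flag on the CLI:

```yaml
# /etc/rancher/k3s/config.yaml (sketch)
default-runtime: nvidia
```

With this in place, Pods that do not specify a `runtimeClassName` run under the selected runtime instead of `runc` on that node.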

If you have not changed the default runtime on your GPU nodes, you must explicitly request the NVIDIA runtime by setting `runtimeClassName: nvidia` in the Pod spec:

[,yaml]
----
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: nvidia
handler: nvidia
---
apiVersion: v1
kind: Pod
metadata:
…please ensure you also create the `registries.yaml` file on each server as well.

Containerd has an implicit "default endpoint" for all registries.
The default endpoint is always tried as a last resort, even if there are other endpoints listed for that registry in `registries.yaml`.
Rewrites are not applied to pulls against the default endpoint.
For example, when pulling `registry.example.com:5000/rancher/mirrored-pause:3.6`, containerd will use a default endpoint of `+https://registry.example.com:5000/v2+`.

* The default endpoint for `docker.io` is `+https://index.docker.io/v2+`.
Then pulling `docker.io/rancher/mirrored-pause:3.6` will transparently pull the…

==== Rewrites

Each mirror can have a set of rewrites, which use regular expressions to match and transform the name of an image when it is pulled from a mirror.
This is useful if the organization/project structure in the private registry is different than the registry it is mirroring.
Rewrites match and transform only the image name, NOT the tag.

For example, the following configuration would transparently pull the image `docker.io/rancher/mirrored-pause:3.6` as `registry.example.com:5000/mirrorproject/rancher-images/mirrored-pause:3.6`:

[,yaml]
----
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
    rewrites:
      "^rancher/(.*)": "mirrorproject/rancher-images/$1"
----
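The effect of a rewrite can be sketched with `sed`, using the pattern from the example configuration above. This is illustrative only: containerd applies the regular expression internally, and only to the image name, never the tag.

```shell
# Apply the example rewrite "^rancher/(.*)" -> "mirrorproject/rancher-images/$1"
# to an image name; the tag is carried over untouched.
name="rancher/mirrored-pause"
tag="3.6"
rewritten=$(echo "$name" | sed -E 's|^rancher/(.*)|mirrorproject/rancher-images/\1|')
# The mirror endpoint then serves the image under the rewritten name:
echo "registry.example.com:5000/${rewritten}:${tag}"
```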

[NOTE]
.Version Gate
====
Rewrites are no longer applied to the xref:#_default_endpoint_fallback[Default Endpoint] as of the January 2024 releases: v1.26.13+k3s1, v1.27.10+k3s1, v1.28.6+k3s1, v1.29.1+k3s1.
Prior to these releases, rewrites were also applied to the default endpoint, which would prevent K3s from pulling from the upstream registry if the image could not be pulled from a mirror endpoint, and the image was not available under the modified name in the upstream.
====

If you want to apply rewrites when pulling directly from a registry - when it is not being used as a mirror for a different upstream registry - you must provide a mirror endpoint that does not match the default endpoint.
Mirror endpoints in `registries.yaml` that match the default endpoint are ignored; the default endpoint is always tried last with no rewrites, if fallback has not been disabled.

For example, if you have a registry at `+https://registry.example.com/+`, and want to apply rewrites when explicitly pulling `registry.example.com/rancher/mirrored-pause:3.6`, you can add a mirror endpoint with the port listed.
Because the mirror endpoint does not match the default endpoint - **`"https://registry.example.com:443/v2" != "https://registry.example.com/v2"`** - the endpoint is accepted as a mirror and rewrites are applied, despite it being effectively the same as the default.

[,yaml]
----
mirrors:
  registry.example.com:
    endpoint:
      - "https://registry.example.com:443"
    rewrites:
      "^rancher/(.*)": "mirrorproject/rancher-images/$1"
----
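The endpoint comparison described above can be illustrated directly. This is a sketch of the string inequality only; containerd normalizes each endpoint (including appending the `/v2` path) before comparing, and the exact normalization rules are not shown here.

```shell
# Sketch of the comparison that lets this mirror endpoint be accepted:
# the explicit :443 port makes the string differ from the implicit default
# endpoint, even though both resolve to the same registry.
default_endpoint="https://registry.example.com/v2"
mirror_endpoint="https://registry.example.com:443/v2"
if [ "$mirror_endpoint" != "$default_endpoint" ]; then
  echo "endpoints differ: accepted as a mirror, rewrites apply"
fi
```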

Note that when using mirrors and rewrites, images will still be stored under the original name.
For example, `crictl image ls` will show `docker.io/rancher/mirrored-pause:3.6` as available on the node, even if the image was pulled from a mirror with a different name.

=== Configs

34 changes: 20 additions & 14 deletions versions/latest/modules/ja/pages/advanced.adoc
You can extend the K3s base template instead of copy-pasting the complete stock template from the K3s source code.
BinaryName = "/usr/bin/custom-container-runtime"
----

== Alternative Container Runtime Support

K3s will automatically detect alternative container runtimes if they are present when K3s starts. Supported container runtimes are:

----
crun, lunatic, nvidia, nvidia-cdi, nvidia-experimental, slight, spin, wasmedge, wasmer, wasmtime, wws
----

NVIDIA GPUs require installation of the NVIDIA Container Runtime in order to schedule and run accelerated workloads in Pods. To use NVIDIA GPUs with K3s, perform the following steps:

. Install the nvidia-container package repository on the node by following the instructions at:
https://nvidia.github.io/libnvidia-container/
. Install the nvidia container runtime packages. For example:
`apt install -y nvidia-container-runtime cuda-drivers-fabricmanager-515 nvidia-headless-515-server`
. xref:installation/installation.adoc[Install K3s], or restart it if already installed.
. Confirm that the nvidia container runtime has been found by k3s:
`grep nvidia /var/lib/rancher/k3s/agent/etc/containerd/config.toml`

[NOTE]
.Version Gate
====
The `--default-runtime` flag and built-in RuntimeClass resources are available as of the December 2023 releases: v1.29.0+k3s1, v1.28.5+k3s1, v1.27.9+k3s1, v1.26.12+k3s1.
Prior to these releases, you must deploy your own RuntimeClass resources for any runtimes you want to reference in Pod specs.
====

K3s includes Kubernetes RuntimeClass definitions for all supported alternative runtimes. You can select one of these to replace `runc` as the default runtime on a node by setting the `--default-runtime` value via the k3s CLI or config file.
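For example, to make `nvidia` the default runtime on a node (a sketch; it assumes the NVIDIA runtime has been installed and detected as described above), set the value in the K3s config file, or pass the equivalent `--default-runtime=nvidia` flag on the CLI:

```yaml
# /etc/rancher/k3s/config.yaml (sketch)
default-runtime: nvidia
```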

If you have not changed the default runtime on your GPU nodes, you must explicitly request the NVIDIA runtime by setting `runtimeClassName: nvidia` in the Pod spec:

[,yaml]
----
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: nvidia
handler: nvidia
---
apiVersion: v1
kind: Pod
metadata:
Containerd connects to private registries and uses them to pull the images the kubelet needs.

Containerd has an implicit "default endpoint" for all registries.
The default endpoint is always tried as a last resort, even if there are other endpoints listed for that registry in `registries.yaml`.
Rewrites are not applied to pulls against the default endpoint.
For example, when pulling `registry.example.com:5000/rancher/mirrored-pause:3.6`, containerd will use a default endpoint of `+https://registry.example.com:5000/v2+`.

* The default endpoint for `docker.io` is `+https://index.docker.io/v2+`.

==== Rewrites

Each mirror can have a set of rewrites, which use regular expressions to match and transform the name of an image when it is pulled from a mirror.
This is useful if the organization/project structure in the private registry is different than the registry it is mirroring.
Rewrites match and transform only the image name, NOT the tag.

For example, the following configuration would transparently pull the image `docker.io/rancher/mirrored-pause:3.6` as `registry.example.com:5000/mirrorproject/rancher-images/mirrored-pause:3.6`:

[,yaml]
----
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
    rewrites:
      "^rancher/(.*)": "mirrorproject/rancher-images/$1"
----
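The effect of a rewrite can be sketched with `sed`, using the pattern from the example configuration above. This is illustrative only: containerd applies the regular expression internally, and only to the image name, never the tag.

```shell
# Apply the example rewrite "^rancher/(.*)" -> "mirrorproject/rancher-images/$1"
# to an image name; the tag is carried over untouched.
name="rancher/mirrored-pause"
tag="3.6"
rewritten=$(echo "$name" | sed -E 's|^rancher/(.*)|mirrorproject/rancher-images/\1|')
# The mirror endpoint then serves the image under the rewritten name:
echo "registry.example.com:5000/${rewritten}:${tag}"
```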

[NOTE]
.Version Gate
====
Rewrites are no longer applied to the xref:#_default_endpoint_fallback[Default Endpoint] as of the January 2024 releases: v1.26.13+k3s1, v1.27.10+k3s1, v1.28.6+k3s1, v1.29.1+k3s1.
Prior to these releases, rewrites were also applied to the default endpoint, which would prevent K3s from pulling from the upstream registry if the image could not be pulled from a mirror endpoint, and the image was not available under the modified name in the upstream.
====

If you want to apply rewrites when pulling directly from a registry - when it is not being used as a mirror for a different upstream registry - you must provide a mirror endpoint that does not match the default endpoint.
Mirror endpoints in `registries.yaml` that match the default endpoint are ignored; the default endpoint is always tried last with no rewrites, if fallback has not been disabled.

For example, if you have a registry at `+https://registry.example.com/+`, and want to apply rewrites when explicitly pulling `registry.example.com/rancher/mirrored-pause:3.6`, you can add a mirror endpoint with the port listed.
Because the mirror endpoint does not match the default endpoint - **`"https://registry.example.com:443/v2" != "https://registry.example.com/v2"`** - the endpoint is accepted as a mirror and rewrites are applied, despite it being effectively the same as the default.

[,yaml]
----
mirrors:
  registry.example.com:
    endpoint:
      - "https://registry.example.com:443"
    rewrites:
      "^rancher/(.*)": "mirrorproject/rancher-images/$1"
----

Note that when using mirrors and rewrites, images will still be stored under the original name.
For example, `crictl image ls` will show `docker.io/rancher/mirrored-pause:3.6` as available on the node, even if the image was pulled from a mirror with a different name.

=== Configs

33 changes: 20 additions & 13 deletions versions/latest/modules/ko/pages/advanced.adoc
K3s generates the containerd configuration file at `/var/lib/rancher/k3s/agent/etc/containerd/config.toml`.
`config.toml.tmpl` is treated as a Go template file, and the `config.Node` structure is passed to the template. See https://github.com/k3s-io/k3s/blob/master/pkg/agent/templates[this folder] for Linux and Windows examples of customizing the configuration file using this structure.
The config.Node Go struct is defined https://github.com/k3s-io/k3s/blob/master/pkg/daemons/config/types.go#L37[here].

== Alternative Container Runtime Support

K3s will automatically detect alternative container runtimes if they are present when K3s starts. Supported container runtimes are:

----
crun, lunatic, nvidia, nvidia-cdi, nvidia-experimental, slight, spin, wasmedge, wasmer, wasmtime, wws
----

NVIDIA GPUs require installation of the NVIDIA Container Runtime in order to schedule and run accelerated workloads in Pods. To use NVIDIA GPUs with K3s, perform the following steps:

. Install the nvidia-container package repository on the node by following the instructions at:
https://nvidia.github.io/libnvidia-container/
. Install the nvidia container runtime packages. For example:
`apt install -y nvidia-container-runtime cuda-drivers-fabricmanager-515 nvidia-headless-515-server`
. xref:installation/installation.adoc[Install K3s], or restart it if already installed.
. Confirm that the nvidia container runtime has been found by k3s:
`grep nvidia /var/lib/rancher/k3s/agent/etc/containerd/config.toml`

[NOTE]
.Version Gate
====
The `--default-runtime` flag and built-in RuntimeClass resources are available as of the December 2023 releases: v1.29.0+k3s1, v1.28.5+k3s1, v1.27.9+k3s1, v1.26.12+k3s1.
Prior to these releases, you must deploy your own RuntimeClass resources for any runtimes you want to reference in Pod specs.
====

K3s includes Kubernetes RuntimeClass definitions for all supported alternative runtimes. You can select one of these to replace `runc` as the default runtime on a node by setting the `--default-runtime` value via the k3s CLI or config file.
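For example, to make `nvidia` the default runtime on a node (a sketch; it assumes the NVIDIA runtime has been installed and detected as described above), set the value in the K3s config file, or pass the equivalent `--default-runtime=nvidia` flag on the CLI:

```yaml
# /etc/rancher/k3s/config.yaml (sketch)
default-runtime: nvidia
```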

If you have not changed the default runtime on your GPU nodes, you must explicitly request the NVIDIA runtime by setting `runtimeClassName: nvidia` in the Pod spec:

[,yaml]
----
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: nvidia
handler: nvidia
---
apiVersion: v1
kind: Pod
metadata: