
Elaborate more on Local Path Provider #189

Closed · wants to merge 4 commits
6 changes: 3 additions & 3 deletions docs/cli/server.md
@@ -131,9 +131,9 @@ The following options must be set to the same value on all servers in the cluster

### Storage Class

| Flag | Description |
| ------------------------------------ | -------------------------------------------------------------- |
| `--default-local-storage-path` value | Default local storage path for local provisioner storage class |
| Flag | Default | Description |
| ------------------------------------ | ------------------------------ | -------------------------------------------------------------- |
| `--default-local-storage-path` value | `/var/lib/rancher/k3s/storage` | Default local storage path for local provisioner storage class |
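
For example, the server could be pointed at a different directory (a minimal sketch; `/opt/local-path-provisioner` is an illustrative path, and the default shown above applies when the flag is omitted):

```bash
# Start the K3s server with a custom directory for the local provisioner
# storage class. If the flag is omitted, /var/lib/rancher/k3s/storage is used.
k3s server --default-local-storage-path /opt/local-path-provisioner
```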

### Kubernetes Components

6 changes: 5 additions & 1 deletion docs/storage/storage.md
@@ -30,7 +30,11 @@ Both components have out-of-tree alternatives that can be used with K3s: The Kub
Kubernetes maintainers are actively migrating in-tree volume plugins to CSI drivers. For more information on this migration, please refer [here](https://kubernetes.io/blog/2021/12/10/storage-in-tree-to-csi-migration-status-update/).

## Setting up the Local Storage Provider
K3s comes with Rancher's Local Path Provisioner and this enables the ability to create persistent volume claims out of the box using local storage on the respective node. Below we cover a simple example. For more information please reference the official documentation [here](https://github.com/rancher/local-path-provisioner/blob/master/README.md#usage).
K3s comes with Rancher's Local Path Provisioner, which provides the ability to create Persistent Volume Claims out of the box using local storage on the respective node. Note that because the Persistent Volumes are bound to local paths on the node, pods using these PVs cannot be rescheduled to other nodes should the node hosting the volume become unavailable.

Included below is a sample pod using a PV and PVC. For more details on the usage of the Local Path Provisioner please reference the [official documentation](https://github.com/rancher/local-path-provisioner/blob/master/README.md#usage).

K3s allows for overriding the path used on nodes via `--default-local-storage-path` (see the [Server CLI options](../cli/server.md) for more details, including defaults), and further changes to the Local Path Provisioner can be made by overriding the provided ConfigMap using [HelmChartConfig](../helm/helm#customizing-packaged-components-with-helmchartconfig). The contents of this ConfigMap are detailed [here](https://github.com/rancher/local-path-provisioner/blob/master/README.md#configuration).
Member
Unfortunately this isn't true. It's not packaged as a helm chart, so you can't configure it as such. The only way to modify it is to start k3s with `--disable=local-storage` and provide your own deployment.
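
A minimal sketch of the approach described above (assumptions: the standard `--disable` server flag and the upstream deployment manifest from the local-path-provisioner README; check the current README for the exact URL):

```bash
# Start the server with the bundled local-storage component disabled,
# so K3s no longer manages the provisioner manifest...
k3s server --disable=local-storage

# ...then deploy and configure your own copy of the provisioner.
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
```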

Author
Oh, OK. I'll look at rewording this to explain that you need to use your own deployment. Is there a reason that local-path-provisioner wasn't set up via helm, and is there anything stopping the switch being made?

Member
Historical reasons mostly. Things that we bundled before adding the embedded helm controller were all managed as flat manifests. When the time came to add traefik, it was only available as a helm chart, so we added the helm controller. We added HelmChartConfig support some time later. It's somewhat non-trivial to have helm "take over" existing resources in a non-disruptive way, so we've not prioritized migrating things.


Create a hostPath backed persistent volume claim and a pod to utilize it:
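
A minimal sketch of such a manifest (assuming the bundled `local-path` StorageClass; the names `local-path-pvc` and `volume-test` are illustrative):

```yaml
# PVC that requests storage from the bundled local-path StorageClass.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
---
# Pod that mounts the claim at /data.
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
    - name: volume-test
      image: nginx:stable-alpine
      volumeMounts:
        - name: volv
          mountPath: /data
  volumes:
    - name: volv
      persistentVolumeClaim:
        claimName: local-path-pvc
```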
