docs(kapsule): add a limitations section to multi-az reference (#3181)
* docs(kapsule): add a limitations section to multi-az reference

* Update containers/kubernetes/reference-content/multi-az-clusters.mdx

Co-authored-by: Rowena Jones <[email protected]>

* Update containers/kubernetes/reference-content/multi-az-clusters.mdx

Co-authored-by: Rowena Jones <[email protected]>

* Update containers/kubernetes/reference-content/multi-az-clusters.mdx

Co-authored-by: Rowena Jones <[email protected]>

---------

Co-authored-by: Pablo RUTH <[email protected]>
Co-authored-by: Rowena Jones <[email protected]>
3 people authored May 17, 2024
1 parent 2e3e3dc commit fce4ea9
13 changes: 12 additions & 1 deletion containers/kubernetes/reference-content/multi-az-clusters.mdx
@@ -38,6 +38,17 @@ The main advantages of running a Kubernetes Kapsule cluster in multiple AZs are:

For more information, refer to the [official Kubernetes best practices for running clusters in multiple zones](https://kubernetes.io/docs/setup/best-practices/multiple-zones/) documentation.

## Limitations

- Kapsule's Control Plane network access is managed by a Load Balancer in the primary zone of each region. If that zone suffers a full outage, the Control Plane becomes unreachable, even if the cluster spans multiple zones. This limitation also applies to HA Dedicated Control Planes.
- Persistent volumes are limited to their Availability Zone (AZ). Applications must replicate data across persistent volumes in different AZs to maintain high availability in case of zone failures.
- In "controlled isolation" mode, nodes access the Control Plane via their public IPs. If two AZs cannot communicate with each other (a split-brain scenario), nodes still reach the Control Plane and therefore do not appear unhealthy to Kubernetes, but communication between nodes in different AZs is disrupted. Applications must handle this scenario if their components span multiple AZs.
- In "full isolation" mode, nodes reach the Control Plane through the Public Gateway of their Private Network. If nodes cannot reach the Public Gateway (for example, after a Private Network failure in an AZ), they become unhealthy. Because there is only one Public Gateway per Private Network, losing the AZ that hosts the Public Gateway results in the loss of all nodes in all private pools, across all AZs.
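
The persistent-volume limitation above is commonly mitigated with a `StorageClass` that uses `volumeBindingMode: WaitForFirstConsumer`, so each volume is provisioned in the zone of the pod that claims it, combined with a StatefulSet spread across zones. The sketch below is illustrative: the `zone-aware-bssd` class name and workload names are hypothetical, and the exact Scaleway CSI provisioner parameters may differ; data replication between replicas remains the application's responsibility.

```yaml
# Hypothetical StorageClass: provisioning is deferred until a pod is scheduled,
# so each PersistentVolume is created in that pod's zone.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zone-aware-bssd          # hypothetical name
provisioner: csi.scaleway.com    # Scaleway CSI driver (verify for your cluster)
volumeBindingMode: WaitForFirstConsumer
---
# A StatefulSet gives each replica its own zone-local volume; replicating the
# data between replicas must be handled by the application itself.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zonal-db                 # hypothetical workload
spec:
  serviceName: zonal-db
  replicas: 3
  selector:
    matchLabels:
      app: zonal-db
  template:
    metadata:
      labels:
        app: zonal-db
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: zonal-db
      containers:
        - name: db
          image: postgres:16     # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: zone-aware-bssd
        resources:
          requests:
            storage: 10Gi
```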

<Message type="note">
The scalability and reliability of Kubernetes do not automatically ensure the scalability and reliability of an application hosted on it. While Kubernetes is a robust and scalable platform, each application must implement its own measures to avoid bottlenecks and single points of failure. Even when Kubernetes itself remains responsive, the responsiveness of your application depends on your design and deployment choices.
</Message>
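
One concrete way to act on this note is to spread an application's replicas across zones and bound voluntary disruptions. The manifests below are a minimal sketch with illustrative names (`my-app`, placeholder image), not a prescribed configuration.

```yaml
# Illustrative Deployment spreading replicas evenly across zones, so a single
# AZ failure cannot take down every replica at once.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical application name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: my-app
      containers:
        - name: app
          image: nginx:1.25     # placeholder image
---
# A PodDisruptionBudget keeps at least two replicas running during voluntary
# disruptions such as node pool upgrades.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app
```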

## Kubernetes Kapsule infrastructure setup

<Message type="note">
@@ -259,4 +270,4 @@ This method is an important point to maintain system resilience and operational

* Tutorial [Deploying a multi-AZ Kubernetes cluster with Terraform and Kapsule](/tutorials/k8s-kapsule-multi-az/)
* Complete [Terraform configuration files to deploy a multi-AZ cluster](https://github.com/scaleway/kapsule-terraform-multi-az-tutorial/)
* [Official Kubernetes best practices for running clusters in multiple zones](https://kubernetes.io/docs/setup/best-practices/multiple-zones/)
