Add upgrade considerations for UID, and IP pool migration with operator

caseydavenport committed Mar 28, 2024
1 parent 3ce1e20 commit 5054b23

Showing 3 changed files with 135 additions and 3 deletions.
112 changes: 112 additions & 0 deletions calico/networking/ipam/migrate-pools.mdx
@@ -5,6 +5,8 @@ description: Migrate pods from one IP pool to another on a running cluster witho
# Migrate from one IP pool to another

import DetermineIpam from '@site/calico/_includes/content/_determine-ipam.mdx';
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

## Big picture

@@ -70,6 +72,113 @@ Disabling an IP pool only prevents new IP address allocations; it does not affec
In the following example, we created a Kubernetes cluster using **kubeadm**, but the IP pool CIDR we configured (192.168.0.0/16) doesn't match the
Kubernetes cluster CIDR. Let's change the pool CIDR to **10.0.0.0/16**, which for the purposes of this example falls within the cluster CIDR.

<Tabs>
<TabItem label="Operator" value="Operator-0">

Let’s run `kubectl get ippools` to see the IP pool, **default-ipv4-ippool**.

```
NAME                  CREATED AT
default-ipv4-ippool   2024-03-28T16:14:28Z
```

### Step 1: Add a new IP pool

We add a new **IPPool** with the CIDR **10.0.0.0/16**.

Add the following to your `default` Installation, below the existing IP pool.

```bash
kubectl edit installation default
```

```yaml
- name: new-ipv4-pool
  cidr: 10.0.0.0/16
  encapsulation: IPIP
```
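
For context, a minimal sketch of how the `ipPools` list in the `default` Installation might look after the edit (trimmed to a few fields for illustration; your existing pool will show more fields, as in the next step):

```yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      # The existing pool, still selecting nodes at this point.
      - name: default-ipv4-ippool
        cidr: 192.168.0.0/16
        encapsulation: VXLANCrossSubnet
      # The newly added pool.
      - name: new-ipv4-pool
        cidr: 10.0.0.0/16
        encapsulation: IPIP
```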

Let’s verify the new IP pool.

```bash
kubectl get ippools
```

```
NAME                  CREATED AT
default-ipv4-ippool   2024-03-28T16:14:28Z
new-ipv4-pool         2024-03-28T18:30:15Z
```

### Step 2: Disable the old IP pool

Edit the `default` Installation, and modify the **default-ipv4-ippool** such that it no longer selects
any nodes. This prevents IP allocation from the pool.

```bash
kubectl edit installation default
```

```
- name: 192.168.0.0-16
  allowedUses:
  - Workload
  - Tunnel
  blockSize: 26
  cidr: 192.168.0.0/16
  disableBGPExport: false
  encapsulation: VXLANCrossSubnet
  natOutgoing: Enabled
-  nodeSelector: all()
+  nodeSelector: "!all()"
```

Save the file to apply the changes.

Remember, disabling a pool only affects new IP allocations; networking for existing pods is not affected.

### Step 3: Delete pods from the old IP pool

Next, we delete the existing pods that have addresses from the old IP pool, so that they are recreated with addresses from the new pool. In our example, **coredns** is the only such pod; in a larger cluster you would delete every pod still using the old range (a sketch follows the example below).

```bash
kubectl delete pod -n kube-system coredns-6f4fd4bdf-8q7zp
```
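
If many pods hold addresses from the old pool, a rough sketch like the following can restart them all (this assumes the default `kubectl` wide-output column order, and that no hostNetwork pods report node IPs in the old range):

```bash
# Find every pod whose IP falls in the old 192.168.0.0/16 range and delete it,
# so it is recreated with an address from the new pool.
kubectl get pods --all-namespaces -o wide --no-headers \
  | awk '$7 ~ /^192\.168\./ {print $1, $2}' \
  | while read ns pod; do
      kubectl delete pod -n "$ns" "$pod"
    done
```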

### Step 4: Verify that new pods get an address from the new IP pool

1. Create a test namespace.

```bash
kubectl create ns ippool-test
```

1. Create an nginx pod.

```bash
kubectl -n ippool-test create deployment nginx --image nginx
```

1. Verify that the new pod gets an IP address from the new range.

```bash
kubectl -n ippool-test get pods -l app=nginx -o wide
```
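
The IP should come from the new **10.0.0.0/16** range; the output looks along these lines (pod name, age, and node are illustrative):

```
NAME                     READY   STATUS    RESTARTS   AGE   IP          NODE     NOMINATED NODE   READINESS GATES
nginx-748c667d99-xxxxx   1/1     Running   0          10s   10.0.24.8   node-1   <none>           <none>
```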

1. Clean up the ippool-test namespace.

```bash
kubectl delete ns ippool-test
```

### Step 5: Delete the old IP pool

Now that you've verified that pods are getting IPs from the new range, you can safely delete the old pool. To do this,
remove it from the `default` Installation, leaving only the newly created IP pool.
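
Once the operator reconciles the change, verify that only the new pool remains:

```bash
kubectl get ippools
```

The output should list only **new-ipv4-pool**.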

</TabItem>
<TabItem label="Manifest" value="Manifest-1">

Let’s run `calicoctl get ippool -o wide` to see the IP pool, **default-ipv4-ippool**.

@@ -222,6 +331,9 @@ Now that you've verified that pods are getting IPs from the new range, you can safely delete the old pool:

```bash
calicoctl delete pool default-ipv4-ippool
```

</TabItem>
</Tabs>

## Additional resources

- [IP pools reference](../../reference/resources/ippool.mdx)
14 changes: 12 additions & 2 deletions calico/operations/upgrading/kubernetes-upgrade.mdx
@@ -6,7 +6,7 @@ description: Upgrade to a newer version of Calico for Kubernetes.

## About upgrading {{prodname}}

- This page describes how to upgrade to {{version}} from {{prodname}} v3.0 or later. The
+ This page describes how to upgrade to {{version}} from {{prodname}} v3.15 or later. The
procedure varies by datastore type and install method.

If you are using {{prodname}} in etcd mode on a Kubernetes cluster, we recommend upgrading to the Kubernetes API datastore [as discussed here](../datastore-migration.mdx).
@@ -28,7 +28,17 @@ This may result in unexpected behavior and data.

:::

- <HostEndpointsUpgrade orch='Kubernetes' />
## Upgrade OwnerReferences

If you do not use OwnerReferences on resources in the projectcalico.org/v3 API group, you can skip this section.

Starting in Calico v3.28, a change in the way UIDs are generated for projectcalico.org/v3 resources requires that you update any OwnerReferences
that refer to projectcalico.org/v3 resources as an owner. After the upgrade, the UID of every projectcalico.org/v3 resource changes, and
Kubernetes garbage collects any resources whose OwnerReferences still point to the old UIDs.

1. Remove any OwnerReferences from resources in your cluster that have `apiVersion: projectcalico.org/v3` (a sketch follows this list).
1. Perform the upgrade normally.
1. Add new OwnerReferences to your resources referencing the new UID.
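
As an illustration of step 1, here is a minimal sketch using a hypothetical ConfigMap `my-config` owned by a v3 IPPool (resource names and kinds are placeholders; adapt to your own resources):

```bash
# Show the existing owner references so you can record them for later.
kubectl get configmap my-config -o jsonpath='{.metadata.ownerReferences}'

# Drop the ownerReferences field. If the resource also has owners outside
# projectcalico.org/v3, remove only the matching entry instead.
kubectl patch configmap my-config --type=json \
  -p='[{"op": "remove", "path": "/metadata/ownerReferences"}]'
```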

## Upgrading an installation that was installed using helm

12 changes: 11 additions & 1 deletion calico/operations/upgrading/openshift-upgrade.mdx
@@ -8,7 +8,17 @@ description: Upgrade to a newer version of Calico for OpenShift.

This page describes how to upgrade to {{version}} for OpenShift 4 from an existing {{prodname}} cluster.

- <HostEndpointsUpgrade orch='OpenShift' />
## Upgrade OwnerReferences

If you do not use OwnerReferences on resources in the projectcalico.org/v3 API group, you can skip this section.

Starting in Calico v3.28, a change in the way UIDs are generated for projectcalico.org/v3 resources requires that you update any OwnerReferences
that refer to projectcalico.org/v3 resources as an owner. After the upgrade, the UID of every projectcalico.org/v3 resource changes, and
Kubernetes garbage collects any resources whose OwnerReferences still point to the old UIDs.

1. Remove any OwnerReferences from resources in your cluster that have `apiVersion: projectcalico.org/v3`.
1. Perform the upgrade normally.
1. Add new OwnerReferences to your resources referencing the new UID (a sketch follows this list).
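
For step 3, a minimal sketch of re-adding an OwnerReference with the owner's new UID (again using a hypothetical IPPool `my-pool` and ConfigMap `my-config`; this assumes `kubectl` can read projectcalico.org/v3 resources, for example via the Calico API server):

```bash
# Look up the owner's UID, which changed during the upgrade.
NEW_UID=$(kubectl get ippool my-pool -o jsonpath='{.metadata.uid}')

# Re-create the OwnerReference on the owned resource with the new UID.
kubectl patch configmap my-config --type=merge -p "{
  \"metadata\": {
    \"ownerReferences\": [{
      \"apiVersion\": \"projectcalico.org/v3\",
      \"kind\": \"IPPool\",
      \"name\": \"my-pool\",
      \"uid\": \"${NEW_UID}\"
    }]
  }
}"
```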

## Upgrading Calico on OpenShift 4

