diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/components/EnvironmentFile.js b/calico-cloud_versioned_docs/version-20-1/_includes/components/EnvironmentFile.js
deleted file mode 100644
index a706a0c74e..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/components/EnvironmentFile.js
+++ /dev/null
@@ -1,59 +0,0 @@
-import React from 'react';
-
-import Admonition from '@theme/Admonition';
-import CodeBlock from '@theme/CodeBlock';
-import Link from '@docusaurus/Link';
-
-import { baseUrl } from '../../variables';
-
-export default function EnvironmentFile(props) {
- return (
- <>
-
- {props.install === 'container' ? (
-
- Use the following guidelines and sample file to define the environment variables for starting Calico on the host.
- For more help, see the {props.nodecontainer} configuration reference
-
- ) : (
-
- Use the following guidelines and sample file to define the environment variables for starting Calico on the host.
-
- )}
-
-
For the Kubernetes datastore set the following:
-
-
-
-
Variable
-
Configuration guidance
-
-
-
-
-
KUBECONFIG
-
Path to kubeconfig file to access the Kubernetes API Server
-
-
-
- {props.install === 'container' && (
-
- If using certificates and keys, you will need to volume mount them into the container at the location
- specified by the paths mentioned above.
-
- )}
-
- Sample EnvironmentFile - save to /etc/calico/calico.env
-
If you are using one of the recommended distributions, you will already satisfy these.
-
-
- Due to the large number of distributions and kernel versions out there, it’s hard to be precise about the names
- of the particular kernel modules that are required to run {prodname}. However, in general, you’ll need:
-
-
-
-
- The iptables modules (both the “legacy” and “nft” variants are supported). These are typically
- broken up into many small modules, one for each type of match criteria and one for each type of action.{' '}
- {prodname} requires:
-
-
-
The “base” modules (including the IPv6 versions if IPv6 is enabled in your cluster).
-
- At least the following match criteria: set, rpfilter, addrtype,{' '}
- comment, conntrack, icmp, tcp, udp,{' '}
- icmpv6 (if IPv6 is enabled in your kernel), mark,{' '}
- multiport, sctp, ipvs (if using
- kube-proxy in IPVS mode).
-
-
- At least the following actions: REJECT, ACCEPT, DROP,{' '}
- LOG.
-
-
-
-
-
IP sets support.
-
-
-
Netfilter Conntrack support compiled in (with SCTP support if using SCTP).
-
-
-
- IPVS support if using kube-proxy in IPVS mode.
-
-
-
-
- IPIP, VXLAN, Wireguard support, if using {prodname} networking in one of those modes.
-
-
-
-
- eBPF (including the tc hook support) and XDP (if you want to use the eBPF dataplane).
-
-
-
- >
- );
-}
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/components/ReqsSys.js b/calico-cloud_versioned_docs/version-20-1/_includes/components/ReqsSys.js
deleted file mode 100644
index bb47fac522..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/components/ReqsSys.js
+++ /dev/null
@@ -1,441 +0,0 @@
-import React from 'react';
-
-import Admonition from '@theme/Admonition';
-import Link from '@docusaurus/Link';
-import Heading from '@theme/Heading';
-
-import { orchestrators } from '@site/variables';
-import { prodname, baseUrl } from '../../variables';
-
-function NodeRequirementsEnt(props) {
- return (
- <>
-
- Node requirements
-
-
-
-
x86-64 processor with at least 2 cores, 8 GB RAM, and 20 GB free disk space
-
-
-
- Linux kernel 3.10 or later with required dependencies. The
- following distributions have the required kernel, its dependencies, and are known to work well with{' '}
- {prodname} and {props.orch}.
-
- {prodname} must be able to manage cali*
- interfaces on the host. When IPIP is enabled (the default),
- {prodname} also needs to be able to manage tunl*
- interfaces. When VXLAN is enabled, {prodname} also needs to be able to manage the vxlan.calico{' '}
- interface.
-
- {/*}
-
-
- Many Linux distributions, such as most of the above, include NetworkManager. By default, NetworkManager
- does not allow
- {prodname} to manage interfaces. If your nodes have NetworkManager, complete the steps in{' '}
-
- Preventing NetworkManager from controlling {prodname} interfaces
- {' '}
- before installing {prodname}.
-
-
- */}
-
-
-
- If your Linux distribution comes with Firewalld or another iptables manager installed, it should be disabled.
- These may interfere with rules added by {prodname} and result in unexpected behavior.
-
-
-
- If a host firewall is needed, it can be configured using {prodname} HostEndpoint and GlobalNetworkPolicy resources.
- For more information, see Security for host.
-
-
-
-
-
- To run Elasticsearch properly, nodes must be configured according to the{' '}
-
- Elasticsearch system configuration documentation.
-
-
-
-
-
- The Typha autoscaler requires a minimum number of Linux worker nodes based on the total number of schedulable
- nodes.
-
- {prodname} requires a key/value store accessible by all {prodname} components.
- {
- {
- OpenShift: With OpenShift, the Kubernetes API datastore is used for the key/value store.,
- Kubernetes: (
-
- On Kubernetes, you can configure {prodname} to access an etcdv3 cluster directly or to use the
- Kubernetes API datastore.
-
- ),
- OpenStack: (
-
- For production you will likely want multiple nodes for greater performance and reliability. If you don’t
- already have an etcdv3 cluster to connect to, please refer to{' '}
- the upstream etcd docs for detailed advice and setup.
-
- ),
- 'host protection': The key/value store must be etcdv3.,
- }[props.orch]
- }
-
- *{' '}
-
- The value passed to kube-apiserver using the --secure-port
- flag. If you cannot locate this, check the targetPort value returned by
- kubectl get svc kubernetes -o yaml.
-
-
- When installed as a Kubernetes daemon set, {prodname} meets this requirement by running as a privileged
- container. This requires that the kubelet be allowed to run privileged containers. There are two ways this
- can be achieved.
-
-
-
- Specify --allow-privileged on the kubelet (deprecated).
-
-
- Use a{' '}
- pod security policy.
-
-
- >
- )}
- >
- );
-}
-
-export default function ReqsSys(props) {
- return (
- <>
-
-
-
-
- >
- );
-}
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/content/_create-kubeconfig.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/content/_create-kubeconfig.mdx
deleted file mode 100644
index 7b47173e65..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/content/_create-kubeconfig.mdx
+++ /dev/null
@@ -1,40 +0,0 @@
-1. Create a service account
-
- ```bash
- SA_NAME=my-host
- kubectl create serviceaccount $SA_NAME -n calico-system -o yaml
- ```
-
-1. Obtain the token for the secret associated with your host
-
- ```bash
- kubectl describe secret -n calico-system $(kubectl get serviceaccount -n calico-system $SA_NAME -o=jsonpath="{.secrets[0].name}")
- ```
-
-1. Use a text editor to create a kubeconfig file
-
- ```yaml
- apiVersion: v1
- kind: Config
-
- users:
- - name: my-host
- user:
- token:
-
- clusters:
- - cluster:
- certificate-authority-data:
- server:
- name:
-
- contexts:
- - context:
- cluster: my-cluster
- user: my-host
- name: my-host
-
- current-context: my-cluster
- ```
-
- Take the cluster information from an existing kubeconfig file.
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/content/_determine-ipam.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/content/_determine-ipam.mdx
deleted file mode 100644
index f44121d82a..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/content/_determine-ipam.mdx
+++ /dev/null
@@ -1,9 +0,0 @@
-If you are not sure which IPAM plugin your cluster is using, the way to tell depends on the install method.
-
-The IPAM plugin can be queried on the default Installation resource.
-
-```bash
-kubectl get installation default -o go-template --template {{.spec.cni.ipam.type}}
-```
-
-If your cluster is using Calico IPAM, the above command should return a result of `Calico`.
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/content/_docker-container-service.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/content/_docker-container-service.mdx
deleted file mode 100644
index 5525307635..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/content/_docker-container-service.mdx
+++ /dev/null
@@ -1,70 +0,0 @@
-import NonClusterReadOnlyStep from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_non-cluster-read-only-step.mdx';
-import EnvironmentFile from '@site/calico-cloud/_includes/components/EnvironmentFile';
-
-This section describes how to run `$[nodecontainer]` as a Docker container.
-
-
-
-### Step 2: Create environment file
-
-
-
-### Step 3: Configure the init system
-
-Use an init daemon (like systemd or upstart) to start the $[nodecontainer] image as a service using the EnvironmentFile values.
-
-Sample systemd service file: `$[noderunning].service`
-
-```shell
-[Unit]
-Description=$[noderunning]
-After=docker.service
-Requires=docker.service
-
-[Service]
-EnvironmentFile=/etc/calico/calico.env
-ExecStartPre=-/usr/bin/docker rm -f $[noderunning]
-ExecStart=/usr/bin/docker run --net=host --privileged \
- --name=$[noderunning] \
- -e NODENAME=${CALICO_NODENAME} \
- -e IP=${CALICO_IP} \
- -e IP6=${CALICO_IP6} \
- -e CALICO_NETWORKING_BACKEND=${CALICO_NETWORKING_BACKEND} \
- -e AS=${CALICO_AS} \
- -e NO_DEFAULT_POOLS=${NO_DEFAULT_POOLS} \
- -e DATASTORE_TYPE=${DATASTORE_TYPE} \
- -e KUBECONFIG=${KUBECONFIG} \
- -v /var/log/calico:/var/log/calico \
- -v /var/lib/calico:/var/lib/calico \
- -v /var/run/calico:/var/run/calico \
- -v /run/docker/plugins:/run/docker/plugins \
- -v /lib/modules:/lib/modules \
- -v /etc/pki:/pki \
- $[registry]$[componentImage.cnxNode] /bin/calico-node -felix
-
-ExecStop=-/usr/bin/docker stop $[noderunning]
-
-Restart=on-failure
-StartLimitBurst=3
-StartLimitInterval=60s
-
-[Install]
-WantedBy=multi-user.target
-```
-
-Upon start, the systemd service:
-
-- Requires that the Docker service is running, via the `[Unit]` section
-- Gets environment variables from the environment file above
-- Removes existing `$[nodecontainer]` container (if it exists)
-- Starts `$[nodecontainer]`
-
-The unit also stops the `$[nodecontainer]` container when the service is stopped.
-
-:::note
-
-Depending on how you've installed Docker, the name of the Docker service
-under the `[Unit]` section may be different (such as `docker-engine.service`).
-Be sure to check this before starting the service.
-
-:::
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/content/_domain-names.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/content/_domain-names.mdx
deleted file mode 100644
index 0c62a87bc4..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/content/_domain-names.mdx
+++ /dev/null
@@ -1,16 +0,0 @@
-When a configured domain name has no wildcard (`*`), it matches exactly that domain name. For example:
-
-- `microsoft.com`
-- `tigera.io`
-
-With a single asterisk in any part of the domain name, it matches 1 or more domain name components at that position. For example:
-
-- `*.google.com` matches `www.google.com` and `www.ipv6.google.com`, but not `google.com`
-- `www.*.com` matches `www.sun.com` and `www.apple.com`, but not `www.com`
-- `update.*.mycompany.com` matches `update.tools.mycompany.com`, `update.secure.suite.mycompany.com`, and so on
-
-The following are **not** supported:
-
-- Multiple wildcards in the same domain, for example: `*.*.mycompany.com`
-- Asterisks that are not the entire component, for example: `www.g*.com`
-- More general wildcards, such as regular expressions
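-
-As an illustrative sketch (the policy name, selector, and port are assumptions), an egress rule that uses one exact and one wildcard domain name might look like this:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  name: allow-egress-to-known-domains
-spec:
-  selector: app == 'updater'
-  egress:
-    - action: Allow
-      protocol: TCP
-      destination:
-        domains:
-          - tigera.io
-          - '*.google.com'
-        ports: [443]
-```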
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/content/_ebpf-value.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/content/_ebpf-value.mdx
deleted file mode 100644
index ca2baf3587..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/content/_ebpf-value.mdx
+++ /dev/null
@@ -1,12 +0,0 @@
-The eBPF dataplane mode has several advantages over the standard Linux networking pipeline:
-
-- It scales to higher throughput.
-- It uses less CPU per GBit.
-- It has native support for Kubernetes services (without needing kube-proxy) that:
-
- - Reduces first packet latency for packets to services.
- - Preserves external client source IP addresses all the way to the pod.
- - Supports DSR (Direct Server Return) for more efficient service routing.
- - Uses less CPU than kube-proxy to keep the dataplane in sync.
-
-To learn more and see performance metrics from our test environment, see the blog, [Introducing the Calico eBPF dataplane](https://www.projectcalico.org/introducing-the-calico-ebpf-dataplane/).
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/content/_endpointport.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/content/_endpointport.mdx
deleted file mode 100644
index 05dde33612..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/content/_endpointport.mdx
+++ /dev/null
@@ -1,15 +0,0 @@
-An EndpointPort associates a name with a particular TCP/UDP/SCTP port of the endpoint, allowing it to
-be referenced as a named port in [policy rules](../../reference/resources/networkpolicy.mdx#entityrule).
-
-| Field | Description | Accepted Values | Schema | Default |
-| -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------- | ------ | ------- |
-| name | The name to attach to this port, allowing it to be referred to in [policy rules](../../reference/resources/networkpolicy.mdx#entityrule). Names must be unique within an endpoint. | | string | |
-| protocol | The protocol of this named port. | `TCP`, `UDP`, `SCTP` | string | |
-| port | The workload port number. | `1`-`65535` | int | |
-
-:::note
-
-On their own, EndpointPort entries don't result in any change to the connectivity of the port.
-They only have an effect if they are referred to in policy.
-
-:::
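-
-For illustration, a minimal sketch of an endpoint `ports` list that defines a named port (the name and port number are arbitrary):
-
-```yaml
-ports:
-  - name: http-port
-    protocol: TCP
-    port: 8080
-```
-
-A policy rule can then reference `http-port` instead of the numeric value.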
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/content/_entityrule.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/content/_entityrule.mdx
deleted file mode 100644
index cae41827b0..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/content/_entityrule.mdx
+++ /dev/null
@@ -1,82 +0,0 @@
-import DomainNames from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_domain-names.mdx';
-
-Entity rules specify the attributes of the source or destination of a packet that must match for the rule as a whole
-to match. Packets can be matched on combinations of:
-
-- Identity of the source/destination, by using [Selectors](#selectors) or by specifying a particular
- Kubernetes `Service`. Selectors can match [workload endpoints](../../reference/resources/workloadendpoint.mdx),
- [host endpoints](../../reference/resources/hostendpoint.mdx) and ([namespaced](../../reference/resources/networkset.mdx) or
- [global](../../reference/resources/globalnetworkset.mdx)) network sets.
-- Source/destination IP address, protocol and port.
-
-If the rule contains multiple match criteria (for example, an IP and a port) then all match criteria must match
-for the rule as a whole to match a packet.
-
-| Field | Description | Accepted Values | Schema | Default |
-| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- | ------------------------------------------- | ------- |
-| nets | Match packets with IP in any of the listed CIDRs. | List of valid IPv4 CIDRs or list of valid IPv6 CIDRs (IPv4 and IPv6 CIDRs shouldn't be mixed in one rule) | list of cidrs |
-| notNets | Negative match on CIDRs. Match packets with IP not in any of the listed CIDRs. | List of valid IPv4 CIDRs or list of valid IPv6 CIDRs (IPv4 and IPv6 CIDRs shouldn't be mixed in one rule) | list of cidrs |
-| selector | Positive match on selected endpoints. If a `namespaceSelector` is also defined, the set of endpoints this applies to is limited to the endpoints in the selected namespaces. | Valid selector | [selector](#selector) | |
-| notSelector | Negative match on selected endpoints. If a `namespaceSelector` is also defined, the set of endpoints this applies to is limited to the endpoints in the selected namespaces. | Valid selector | [selector](#selector) | |
-| namespaceSelector | Positive match on selected namespaces. If specified, only workload endpoints in the selected Kubernetes namespaces are matched. Matches namespaces based on the labels that have been applied to the namespaces. Defines the scope that selectors will apply to, if not defined then selectors apply to the NetworkPolicy's namespace. Match a specific namespace by name using the `projectcalico.org/name` label. Select the non-namespaced resources like GlobalNetworkSet(s), host endpoints to which this policy applies by using `global()` selector. | Valid selector | [selector](#selector) | |
-| ports | Positive match on the specified ports | | list of [ports](#ports) | |
-| domains | Positive match on [domain names](#exact-and-wildcard-domain-names). | List of [exact or wildcard domain names](#exact-and-wildcard-domain-names) | list of strings |
-| notPorts | Negative match on the specified ports | | list of [ports](#ports) | |
-| serviceAccounts | Match endpoints running under service accounts. If a `namespaceSelector` is also defined, the set of service accounts this applies to is limited to the service accounts in the selected namespaces. | | [ServiceAccountMatch](#serviceaccountmatch) | |
-| services | Match the specified service(s). If specified on egress rule destinations, no other selection criteria can be set. If specified on ingress rule sources, only positive or negative matches on ports can be specified. | | [ServiceMatch](#servicematch) | |
-
-:::note
-
-You cannot mix IPv4 and IPv6 CIDRs in a single rule using `nets` or `notNets`. If you need to match both, create 2 rules.
-
-:::
-
-#### Selector performance in EntityRules
-
-When rendering policy into the dataplane, $[prodname] must identify the endpoints that match the selectors
-in all active rules. This calculation is optimized for certain common selector types.
-Using the optimized selector types reduces CPU usage (and policy rendering time) by orders of magnitude.
-This becomes important at high scale (hundreds of active rules, hundreds of thousands of endpoints).
-
-The optimized operators are as follows:
-
-- `label == "value"`
-- `label in { 'v1', 'v2' }`
-- `has(label)`
-- `<selector 1> && <selector 2>` is optimized if **either** `<selector 1>` or `<selector 2>` is
- optimized.
-
-The following perform like `has(label)`. All endpoints with the label will be scanned to find matches:
-
-- `label contains 's'`
-- `label starts with 's'`
-- `label ends with 's'`
-
-The other operators, and in particular, `all()`, `!`, `||` and `!=` are not optimized.
-
-Examples:
-
-- `a == 'b'` - optimized
-- `a == 'b' && has(c)` - optimized
-- `a == 'b' || has(c)` - **not** optimized due to use of `||`
-- `c != 'd'` - **not** optimized due to use of `!=`
-- `!has(a)` - **not** optimized due to use of `!`
-- `a == 'b' && c != 'd'` - optimized; `a == 'b'` is optimized, so `a == 'b' && <anything>` is optimized.
-- `c != 'd' && a == 'b'` - optimized; `a == 'b'` is optimized, so `<anything> && a == 'b'` is optimized.
-
-### Exact and wildcard domain names
-
-The `domains` field is only valid for egress Allow rules. It restricts the
-rule to apply only to traffic to one of the specified domains. If this field is specified, the
-parent [Rule](#rule)'s `action` must be `Allow`, and `nets` and `selector` must both be left empty.
-
-
-
-:::note
-
-$[prodname] implements policy for domain names by learning the
-corresponding IPs from DNS, then programming rules to allow those IPs. This means that
-if multiple domain names A, B and C all map to the same IP, and there is domain-based
-policy to allow A, traffic to B and C will be allowed as well.
-
-:::
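-
-As a minimal sketch (the labels, namespace selector, and port are assumptions), a destination entity rule that combines several match criteria, all of which must match:
-
-```yaml
-destination:
-  namespaceSelector: env == 'prod'
-  selector: role == 'db'
-  ports: [5432]
-```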
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/content/_icmp.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/content/_icmp.mdx
deleted file mode 100644
index 1adb456472..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/content/_icmp.mdx
+++ /dev/null
@@ -1,4 +0,0 @@
-| Field | Description | Accepted Values | Schema | Default |
-| ----- | ------------------- | -------------------- | ------- | ------- |
-| type | Match on ICMP type. | Can be integer 0-254 | integer |
-| code | Match on ICMP code. | Can be integer 0-255 | integer |
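-
-For example, a sketch of a rule fragment that matches ICMP echo requests (type 8, code 0); the surrounding action and protocol fields are assumptions:
-
-```yaml
-- action: Allow
-  protocol: ICMP
-  icmp:
-    type: 8
-    code: 0
-```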
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/content/_ipnat.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/content/_ipnat.mdx
deleted file mode 100644
index 9cfa2fb904..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/content/_ipnat.mdx
+++ /dev/null
@@ -1,6 +0,0 @@
-IPNAT contains a single NAT mapping for a WorkloadEndpoint resource.
-
-| Field | Description | Accepted Values | Schema | Default |
-| ---------- | ------------------------------------------- | ------------------ | ------ | ------- |
-| internalIP | The internal IP address of the NAT mapping. | A valid IP address | string | |
-| externalIP | The external IP address. | A valid IP address | string | |
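-
-As a sketch, assuming the `ipNATs` field on the WorkloadEndpoint spec and placeholder addresses, a single NAT mapping might look like this:
-
-```yaml
-ipNATs:
-  - internalIP: 10.28.0.13
-    externalIP: 172.16.1.13
-```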
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/content/_license.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/content/_license.mdx
deleted file mode 100644
index 7d51be5675..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/content/_license.mdx
+++ /dev/null
@@ -1,23 +0,0 @@
-**How long does it take to get a new $[prodname] license?**
- After you submit a sales purchase order to Tigera, 1-2 days.
-
-**Is there a grace period?**
- No.
-
-**Does Manager UI display license expiration?**
-Yes. The license indicator in Manager UI (top right banner) turns red when the license expires.
-
-![expiration](/img/calico-cloud/expiration.png)
-
-**What happens when a license expires or is invalid?**
- Users can log in to Manager UI with read access for all previously created resources, but they cannot create any new $[prodname] resources. The Manager UI may appear to function, but actions will not be applied, so it is important to proactively manage your license.
-
-**What happens if I add nodes beyond what I'm licensed for?**
-
-- Node limits are not currently enforced
-- All $[prodname] features still work
-
-**How do I get information about my license? Monitor the expiration date?**
-
-- Use [Prometheus](../../operations/monitor/metrics/license-agent.mdx) to monitor days until expiration, nodes available, and nodes used.
-- Use `kubectl` to get [license key information](../../reference/resources/licensekey.mdx#viewing-information-about-your-license-key)
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/content/_non-cluster-binary-install.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/content/_non-cluster-binary-install.mdx
deleted file mode 100644
index fb8e6644c5..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/content/_non-cluster-binary-install.mdx
+++ /dev/null
@@ -1,151 +0,0 @@
-import NonClusterReadOnlyStep from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_non-cluster-read-only-step.mdx';
-import EnvironmentFile from '@site/calico-cloud/_includes/components/EnvironmentFile';
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-
-
-### Step 2: Download and extract the binary
-
-This step requires Docker, but it can be run from any machine with Docker installed. It doesn't have to be the host you will run it on (for example, your laptop is fine).
-
-1. Use the following command to download the $[nodecontainer] image.
-
- ```bash
- docker pull $[registry]$[componentImage.cnxNode]
- ```
-
-1. Confirm that the image has loaded by typing `docker images`.
-
- ```
- REPOSITORY TAG IMAGE ID CREATED SIZE
- $[registry]$[releases.0.components.cnx-node.image] $[releases.0.components.cnx-node.version] e07d59b0eb8a 2 minutes ago 42MB
- ```
-
-1. Create a temporary $[nodecontainer] container.
-
- ```bash
- docker create --name container $[registry]$[componentImage.cnxNode]
- ```
-
-1. Copy the calico-node binary from the container to the local file system.
-
- ```bash
- docker cp container:/bin/calico-node $[nodecontainer]
- ```
-
-1. Delete the temporary container.
-
- ```bash
- docker rm container
- ```
-
-1. Set the extracted binary file to be executable.
-
- ```bash
- chmod +x $[nodecontainer]
- chown root:root $[nodecontainer]
- ```
-
-### Step 3: Copy the `calico-node` binary
-
-Copy the binary from Step 2 to the target machine, using any means (`scp`, `ftp`, USB stick, etc.).
-
-### Step 4: Create environment file
-
-
-
-### Step 5: Start Felix
-
-There are two ways to start Felix: create a startup script, or configure Felix manually.
-
-
-
-
-Felix should be started at boot by your init system and the init system
-**must** be configured to restart Felix if it stops. Felix relies on
-that behavior for certain configuration changes.
-
-If your distribution uses systemd, then you could use the following unit file:
-
-```bash
-[Unit]
-Description=Calico Felix agent
-After=syslog.target network.target
-
-[Service]
-User=root
-EnvironmentFile=/etc/calico/calico.env
-ExecStartPre=/usr/bin/mkdir -p /var/run/calico
-ExecStart=/usr/local/bin/$[nodecontainer] -felix
-KillMode=process
-Restart=on-failure
-LimitNOFILE=32000
-
-[Install]
-WantedBy=multi-user.target
-```
-
-Or, for upstart:
-
-```bash
-description "Felix (Calico agent)"
-author "Project Calico Maintainers "
-
-start on stopped rc RUNLEVEL=[2345]
-stop on runlevel [!2345]
-
-limit nofile 32000 32000
-
-respawn
-respawn limit 5 10
-
-chdir /var/run
-
-pre-start script
- mkdir -p /var/run/calico
- chown root:root /var/run/calico
-end script
-
-exec /usr/local/bin/$[nodecontainer] -felix
-```
-
-**Start Felix**
-
-After you've configured Felix, start it via your init system.
-
-```bash
-service calico-felix start
-```
-
-
-
-
-Configure Felix by creating a file at `/kubernetes/calico/felix.cfg`.
-
-
-:::note
-
-Felix tries to detect whether IPv6 is available on your platform but
-the detection can fail on older (or more unusual) systems. If Felix
-exits soon after startup with `ipset` or `iptables` errors, try
-setting the `Ipv6Support` setting to `false`.
-
-:::
-
-Next, configure Felix to interact with a Kubernetes datastore. You
-must set the `DatastoreType` setting to `kubernetes`. You must also set the environment variable `CALICO_KUBECONFIG`
-to point to a valid kubeconfig for your Kubernetes cluster and `CALICO_NETWORKING_BACKEND` to `none`.
-
-:::note
-
-For the Kubernetes datastore, Felix works in policy-only mode. Even though pod networking is
-disabled on the baremetal host Felix is running on, policy can still be used to secure the host.
-
-:::
-
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/content/_non-cluster-read-only-step.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/content/_non-cluster-read-only-step.mdx
deleted file mode 100644
index 516481941a..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/content/_non-cluster-read-only-step.mdx
+++ /dev/null
@@ -1,21 +0,0 @@
-import CreateKubeconfig from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_create-kubeconfig.mdx';
-
-### Step 1: (Optional) Configure access for the non-cluster-host
-
-To run Calico Node as a container, you need a kubeconfig. You can skip this step if you already have a kubeconfig ready to use.
-
-
-
-Run the following two commands to create a cluster role with read-only access and a corresponding cluster role binding.
-
-```bash
-kubectl apply -f $[filesUrl_CE]/manifests/non-cluster-host-clusterrole.yaml
-kubectl create clusterrolebinding $SA_NAME --serviceaccount=calico-system:$SA_NAME --clusterrole=non-cluster-host-read-only
-```
-
-:::note
-
-We include examples for systemd, but the commands can be
-applied to other init daemons such as upstart.
-
-:::
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/content/_ports.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/content/_ports.mdx
deleted file mode 100644
index 4517f59290..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/content/_ports.mdx
+++ /dev/null
@@ -1,33 +0,0 @@
-$[prodname] supports the following syntaxes for expressing ports.
-
-| Syntax | Example | Description |
-| --------- | ---------- | ------------------------------------------------------------------- |
-| int | 80 | The exact (numeric) port specified |
-| start:end | 6040:6050 | All (numeric) ports within the range start ≤ x ≤ end |
-| string | named-port | A named port, as defined in the ports list of one or more endpoints |
-
-An individual numeric port may be specified as a YAML/JSON integer. A port range or
-named port must be represented as a string. For example, this would be a valid list of ports:
-
-```yaml
-ports: [8080, '1234:5678', 'named-port']
-```
-
-#### Named ports
-
-Using a named port in an `EntityRule`, instead of a numeric port, gives a layer of indirection,
-allowing for the named port to map to different numeric values for each endpoint.
-
-For example, suppose you have multiple HTTP servers running as workloads; some exposing their HTTP
-port on port 80 and others on port 8080. In each workload, you could create a named port called
-`http-port` that maps to the correct local port. Then, in a rule, you could refer to the name
-`http-port` instead of writing a different rule for each type of server.
-
-:::note
-
-Since each named port may refer to many endpoints (and $[prodname] has to expand a named port into
-a set of endpoint/port combinations), using a named port is considerably more expensive in terms
-of CPU than using a simple numeric port. We recommend that they are used sparingly, only where
-the extra indirection is required.
-
-:::
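-
-To illustrate, a sketch of an ingress rule that mixes all three syntaxes (the values are arbitrary):
-
-```yaml
-ingress:
-  - action: Allow
-    protocol: TCP
-    destination:
-      ports: [80, '6040:6050', 'http-port']
-```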
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/content/_rule.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/content/_rule.mdx
deleted file mode 100644
index 9fb2b7f47b..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/content/_rule.mdx
+++ /dev/null
@@ -1,46 +0,0 @@
-A single rule matches a set of packets and applies some action to them. When multiple rules are specified, they
-are executed in order.
-
-| Field | Description | Accepted Values | Schema | Default |
-| ----------- | ------------------------------------------------------------------------------------------ | ------------------------------------------------------------ | ----------------------------- | ------- |
-| metadata | Per-rule metadata. | | [RuleMetadata](#rulemetadata) | |
-| action | Action to perform when matching this rule. | `Allow`, `Deny`, `Log`, `Pass` | string | |
-| protocol | Positive protocol match. | `TCP`, `UDP`, `ICMP`, `ICMPv6`, `SCTP`, `UDPLite`, `1`-`255` | string \| integer | |
-| notProtocol | Negative protocol match. | `TCP`, `UDP`, `ICMP`, `ICMPv6`, `SCTP`, `UDPLite`, `1`-`255` | string \| integer | |
-| icmp | ICMP match criteria. | | [ICMP](#icmp) | |
-| notICMP | Negative match on ICMP. | | [ICMP](#icmp) | |
-| ipVersion | Positive IP version match. | `4`, `6` | integer | |
-| source | Source match parameters. | | [EntityRule](#entityrule) | |
-| destination | Destination match parameters. | | [EntityRule](#entityrule) | |
-| http | Match HTTP request parameters. Application layer policy must be enabled to use this field. | | [HTTPMatch](#httpmatch) | |
-
-After a `Log` action, processing continues with the next rule; `Allow` and `Deny` are immediate
-and final and no further rules are processed.
-
-An `action` of `Pass` in a `NetworkPolicy` or `GlobalNetworkPolicy` will skip over the remaining policies and jump to the
-first profile assigned to the endpoint, applying the policy configured in the
-profile; if there are no Profiles configured for the endpoint the default applied action is `Deny`.
-
-### RuleMetadata
-
-Metadata associated with a specific rule (rather than the policy as a whole). The contents of the metadata do not affect how a rule is interpreted or enforced; it is
-simply a way to store additional information for use by operators or applications that interact with $[prodname].
-
-| Field | Description | Schema | Default |
-| ----------- | ----------------------------------- | ----------------------- | ------- |
-| annotations | Arbitrary non-identifying metadata. | map of string to string | |
-
-Example:
-
-```yaml
-metadata:
- annotations:
- app: database
- owner: devops
-```
-
-Annotations follow the
-[same rules as Kubernetes for valid syntax and character set](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/#syntax-and-character-set).
-
-On Linux with the iptables dataplane, rule annotations are rendered as comments in the form `-m comment --comment "<key>=<value>"` on the iptables rule(s) that correspond
-to the $[prodname] rule.
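-
-Putting the fields together, a sketch of a single rule (all labels, ports, and annotation values are assumptions):
-
-```yaml
-- metadata:
-    annotations:
-      owner: devops
-  action: Allow
-  protocol: TCP
-  ipVersion: 4
-  source:
-    selector: app == 'frontend'
-  destination:
-    selector: app == 'backend'
-    ports: [8080]
-```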
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/content/_selector-scopes.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/content/_selector-scopes.mdx
deleted file mode 100644
index 9d9fbc8c54..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/content/_selector-scopes.mdx
+++ /dev/null
@@ -1,20 +0,0 @@
-Understanding scopes and the `all()` and `global()` operators: selectors have a scope of resources
-that they are matched against, which depends on the context in which they are used. For example:
-
-- The `nodeSelector` in an `IPPool` selects over `Node` resources.
-
-- The top-level selector in a `NetworkPolicy` selects over the workloads _in the same namespace_ as the
- `NetworkPolicy`.
-- The top-level selector in a `GlobalNetworkPolicy` doesn't have the same restriction, it selects over all endpoints
- including namespaced `WorkloadEndpoint`s and non-namespaced `HostEndpoint`s.
-
-- The `namespaceSelector` in a `NetworkPolicy` (or `GlobalNetworkPolicy`) _rule_ selects over the labels on namespaces
- rather than workloads.
-
-- The `namespaceSelector` determines the scope of the accompanying `selector` in the entity rule. If no `namespaceSelector`
- is present then the rule's `selector` matches the default scope for that type of policy. (This is the same namespace
- for `NetworkPolicy` and all endpoints/network sets for `GlobalNetworkPolicy`)
-- The `global()` operator can be used (only) in a `namespaceSelector` to change the scope of the main `selector` to
- include non-namespaced resources such as [GlobalNetworkSet](../../reference/resources/globalnetworkset.mdx).
- This allows namespaced `NetworkPolicy` resources to refer to global non-namespaced resources, which would otherwise
- be impossible.
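-
-As a sketch of the `global()` operator (the label name and value are assumptions), the following egress destination matches a non-namespaced GlobalNetworkSet labeled `feed == 'threat-ips'`:
-
-```yaml
-egress:
-  - action: Deny
-    destination:
-      namespaceSelector: global()
-      selector: feed == 'threat-ips'
-```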
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/content/_selectors.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/content/_selectors.mdx
deleted file mode 100644
index 52baf960ae..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/content/_selectors.mdx
+++ /dev/null
@@ -1,50 +0,0 @@
-A label selector is an expression which either matches or does not match a resource based on its labels.
-
-$[prodname] label selectors support a number of operators, which can be combined into larger expressions
-using the boolean operators and parentheses.
-
-| Expression | Meaning |
-| ------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| **Logical operators** |
-| `( <expression> )` | Matches if and only if `<expression>` matches. (Parentheses are used for grouping expressions.) |
-| `! <expression>` | Matches if and only if `<expression>` does not match. **Tip:** `!` is a special character at the start of a YAML string; if you need to use `!` at the start of a YAML string, enclose the string in quotes. |
-| `<expression 1> && <expression 2>` | "And": matches if and only if both `<expression 1>` and `<expression 2>` match. |
-| `<expression 1> \|\| <expression 2>` | "Or": matches if and only if either `<expression 1>` or `<expression 2>` matches. |
-| **Match operators** |
-| `all()` | Match all in-scope resources. To match _no_ resources, combine this operator with `!` to form `!all()`. |
-| `global()` | Match all non-namespaced resources. Useful in a `namespaceSelector` to select global resources such as global network sets. |
-| `k == 'v'` | Matches resources with the label 'k' and value 'v'. |
-| `k != 'v'` | Matches resources without label 'k' or with label 'k' and value _not_ equal to `v` |
-| `has(k)` | Matches resources with label 'k', independent of value. To match pods that do not have label `k`, combine this operator with `!` to form `!has(k)` |
-| `k in { 'v1', 'v2' }` | Matches resources with label 'k' and value in the given set |
-| `k not in { 'v1', 'v2' }` | Matches resources without label 'k' or with label 'k' and value _not_ in the given set |
-| `k contains 's'` | Matches resources with label 'k' and value containing the substring 's' |
-| `k starts with 's'` | Matches resources with label 'k' and value starting with the substring 's' |
-| `k ends with 's'` | Matches resources with label 'k' and value ending with the substring 's' |
-
-Operators have the following precedence:
-
-- **Highest**: all the match operators
-- Parentheses `( ... )`
-- Negation with `!`
-- Conjunction with `&&`
-- **Lowest**: Disjunction with `||`
-
-For example, the expression
-
-```
-! has(my-label) || my-label starts with 'prod' && role in {'frontend','business'}
-```
-
-Would be "bracketed" like this:
-
-```
-(!(has(my-label))) || ((my-label starts with 'prod') && (role in {'frontend','business'}))
-```
-
-It would match:
-
-- Any resource that does not have the label "my-label".
-- Any resource that both:
-  - Has a value for `my-label` that starts with "prod", and,
-  - Has a `role` label with the value "frontend" or "business".
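-
-Per the tip above about `!` at the start of a YAML string, a selector like this must be quoted when used in a resource (a sketch reusing the labels from the example):
-
-```yaml
-selector: "!has(my-label) || my-label starts with 'prod'"
-```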
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/content/_serviceaccountmatch.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/content/_serviceaccountmatch.mdx
deleted file mode 100644
index c3aff9c184..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/content/_serviceaccountmatch.mdx
+++ /dev/null
@@ -1,6 +0,0 @@
-A ServiceAccountMatch matches service accounts in an EntityRule.
-
-| Field | Description | Schema |
-| -------- | ------------------------------- | --------------------- |
-| names | Match service accounts by name | list of strings |
-| selector | Match service accounts by label | [selector](#selector) |
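-
-For example, a sketch of a source match on service accounts (the names and labels are assumptions):
-
-```yaml
-source:
-  serviceAccounts:
-    names: ['api-service']
-    selector: app == 'api'
-```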
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/content/_servicematch.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/content/_servicematch.mdx
deleted file mode 100644
index 2d47fed02c..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/content/_servicematch.mdx
+++ /dev/null
@@ -1,6 +0,0 @@
-A ServiceMatch matches a service in an EntityRule.
-
-| Field | Description | Schema |
-| --------- | ------------------------ | ------ |
-| name | The service's name. | string |
-| namespace | The service's namespace. | string |
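-
-For example, a sketch of an egress destination that matches the cluster DNS service (assuming it is named `kube-dns` in the `kube-system` namespace):
-
-```yaml
-egress:
-  - action: Allow
-    destination:
-      services:
-        name: kube-dns
-        namespace: kube-system
-```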
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/.gitkeep b/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/.gitkeep
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_master-release-notes.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_master-release-notes.mdx
deleted file mode 100644
index eada91ffc4..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_master-release-notes.mdx
+++ /dev/null
@@ -1,3 +0,0 @@
-_Dateline_
-
-## Undefined feature X
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_release-v3.15-release-notes.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_release-v3.15-release-notes.mdx
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_v3.13.0-release-notes.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_v3.13.0-release-notes.mdx
deleted file mode 100644
index c487503e71..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_v3.13.0-release-notes.mdx
+++ /dev/null
@@ -1,29 +0,0 @@
-05 Mar 2020
-
-### New eBPF dataplane technology preview
-
-The flagship feature of v3.13 is the tech preview of Calico's new eBPF dataplane mode. While it's not ready for production (it's missing some key features and has had limited testing), it is a great preview of things to come for those ready to adopt newer kernel versions.
-
-The eBPF dataplane:
-
-- Scales to higher throughput.
-- Uses less CPU per GBit.
-- Has native support for Kubernetes services (without needing kube-proxy) that:
- - Reduces first packet latency for packets to services.
- - Preserves external client source IP addresses all the way to the pod.
- - Supports DSR (Direct Server Return) for more efficient service routing.
- - Uses less CPU than kube-proxy to keep the dataplane in sync.
-
-If that's whetted your appetite and you'd like to hear more (and see some pretty performance graphs), head over to [the announcement blog](https://www.projectcalico.org/introducing-the-calico-ebpf-dataplane/). Once you're ready to give it a spin, you'll want [the how-to guide](/operations/performance/ebpf/enabling-ebpf/).
-
-### Bug fixes
-
-- Fixes an issue where Felix / Typha unnecessarily perform full resyncs of NetworkPolicies [libcalico-go #1192](https://github.com/projectcalico/libcalico-go/pull/1192) (@spikecurtis)
-
-### Other changes
-
-- Add protocols section to calico/node nsswitch.conf [node #418](https://github.com/projectcalico/node/pull/418) (@leodotcloud)
-- Calico now auto-detects the IP Pool CIDR when running on kubeadm [node #417](https://github.com/projectcalico/node/pull/417) (@rafaelvanoni)
-- In calico.yaml, `CALICO_IPV4POOL_CIDR` has been commented out, but the default CIDR remains the same. To change the CIDR in the manifest, you must first uncomment that section. [calico #3211](https://github.com/projectcalico/calico/pull/3211) (@rafaelvanoni)
-- Improve Felix liveness reporting when handling large policies. [felix #2215](https://github.com/projectcalico/felix/pull/2215) (@fasaxc)
-- Improve IPAM garbage collection for etcd clusters. [kube-controllers #459](https://github.com/projectcalico/kube-controllers/pull/459) (@caseydavenport)
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_v3.13.1-release-notes.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_v3.13.1-release-notes.mdx
deleted file mode 100644
index b8b5923dea..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_v3.13.1-release-notes.mdx
+++ /dev/null
@@ -1,5 +0,0 @@
-12 Mar 2020
-
-### Bug fixes
-
-- Fix handling of NodePort traffic close to the MTU in the eBPF data plane [felix #2230](https://github.com/projectcalico/felix/pull/2230) (@tomastigera)
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_v3.14.0-pre-release-notes.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_v3.14.0-pre-release-notes.mdx
deleted file mode 100644
index 1eec76858a..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_v3.14.0-pre-release-notes.mdx
+++ /dev/null
@@ -1,40 +0,0 @@
-### [WIP] Other changes
-
-- Fix incorrect check of CIDR block size in node startup script. [node #468](https://github.com/projectcalico/node/pull/468) (@tmjd)
-- Update image to address CVE. [node #460](https://github.com/projectcalico/node/pull/460) (@lmm)
-- Release tunnel IP addresses more safely [node #443](https://github.com/projectcalico/node/pull/443) (@caseydavenport)
-- Properly release tunnel addresses even if the node object was improperly modified. [node #436](https://github.com/projectcalico/node/pull/436) (@caseydavenport)
-- Fix potential tunnel address leak when upgrading from Calico < v3.6 to Calico ≥ v3.8 [node #430](https://github.com/projectcalico/node/pull/430) (@caseydavenport)
-- Run health checks in parallel [node #423](https://github.com/projectcalico/node/pull/423) (@stamm)
-- If the host network overlaps with the default IP pool, Calico node now searches for a free IP range between `172.16.0.0/16` and `172.31.0.0/16` [node #421](https://github.com/projectcalico/node/pull/421) (@mhmxs)
-- In BPF mode, fix that RPF check was bypassed for traffic to local workloads behind a nodeport. [felix #2283](https://github.com/projectcalico/felix/pull/2283) (@tomastigera)
-- In BPF mode, fix some corner cases in conntrack whitelisting and add additional tests. [felix #2281](https://github.com/projectcalico/felix/pull/2281) (@tomastigera)
-- In BPF mode, upgrade from the previous version may result in connections being silently dropped. This is due to adjusting the format of the connection tracking BPF map. [felix #2277](https://github.com/projectcalico/felix/pull/2277) (@tomastigera)
-- Felix can now calculate routes without dependency on Calico IPAM. [felix #2269](https://github.com/projectcalico/felix/pull/2269) (@caseydavenport)
-- In BPF mode, Felix now sets the kernel.unprivileged_bpf_disabled sysctl by default to restrict access to the BPF syscall. This behaviour is controlled by the BPFDisableUnprivileged configuration parameter. [felix #2261](https://github.com/projectcalico/felix/pull/2261) (@fasaxc)
-- Calico now supports encryption of all pod-to-pod traffic using wireguard. [felix #2257](https://github.com/projectcalico/felix/pull/2257) (@robbrockbank)
-- The MTU used by the BPF programs when sending ICMP TOO BIG messages to control path MTU is now configured by the VXLANMTU configuration parameter. [felix #2251](https://github.com/projectcalico/felix/pull/2251) (@tomastigera)
-- In BPF mode, fix lack of FIB lookup due to using incorrect endianness. [felix #2250](https://github.com/projectcalico/felix/pull/2250) (@tomastigera)
-- The BPF dataplane now handles ICMP error messages as "related" traffic so that they follow the same path back through the dataplane as the packet they respond to. This improves compatibility with path MTU detection and other non-mainline traffic. [felix #2247](https://github.com/projectcalico/felix/pull/2247) (@tomastigera)
-- In BPF dataplane mode, Felix now handles single-block IPAM pools. Previously single-block pools resulted in a collision when programming the dataplane routes. [felix #2245](https://github.com/projectcalico/felix/pull/2245) (@fasaxc)
-- None required [felix #2233](https://github.com/projectcalico/felix/pull/2233) (@tomastigera)
-- None required [felix #2232](https://github.com/projectcalico/felix/pull/2232) (@tomastigera)
-- [OpenStack] Allow DHCP from the workload, on kernels where rp_filter doesn't already [felix #2231](https://github.com/projectcalico/felix/pull/2231) (@nelljerram)
-- all-interfaces host endpoints now supports normal network policy in addition to pre-dnat policy [felix #2228](https://github.com/projectcalico/felix/pull/2228) (@lmm)
-- Add FelixConfiguration option for setting route information source [libcalico-go #1222](https://github.com/projectcalico/libcalico-go/pull/1222) (@caseydavenport)
-- Added Wireguard configuration. [libcalico-go #1215](https://github.com/projectcalico/libcalico-go/pull/1215) (@realgaurav)
-- Add a new Profile with allow-all rules named `projectcalico-default-allow`. This profile can be used in host endpoints to provide default-allow in the absence of policy [libcalico-go #1207](https://github.com/projectcalico/libcalico-go/pull/1207) (@lmm)
-- v3 Client can CRUD KubeControllersConfiguration resources [libcalico-go #1205](https://github.com/projectcalico/libcalico-go/pull/1205) (@spikecurtis)
-- New KubeControllersConfiguration API resource [libcalico-go #1203](https://github.com/projectcalico/libcalico-go/pull/1203) (@spikecurtis)
-- Exclude kube-ipvs0 from bird routing [confd #314](https://github.com/projectcalico/confd/pull/314) (@spikecurtis)
-- Use mv to place CNI binaries instead of cp [cni-plugin #849](https://github.com/projectcalico/cni-plugin/pull/849) (@caseydavenport)
-- Fix missing hostname binary on ubi-minimal [cni-plugin #848](https://github.com/projectcalico/cni-plugin/pull/848) (@lmm)
-- Change ppc64le base image from debian slim to UBI [cni-plugin #846](https://github.com/projectcalico/cni-plugin/pull/846) (@DomDeMarc)
-- Fix shell stderr redirection in CNI installation script [cni-plugin #842](https://github.com/projectcalico/cni-plugin/pull/842) (@hanxueluo)
-- auto host endpoints have a default allow profile [kube-controllers #470](https://github.com/projectcalico/kube-controllers/pull/470) (@lmm)
-- Fix IPAM garbage collection in etcd mode on clusters where node name does not match Kubernetes node name. [kube-controllers #467](https://github.com/projectcalico/kube-controllers/pull/467) (@caseydavenport)
-- Use KubeControllersConfiguration resource for config [kube-controllers #464](https://github.com/projectcalico/kube-controllers/pull/464) (@spikecurtis)
-- Fix kube-controllers attempting to clean up nonexistent node resources [kube-controllers #461](https://github.com/projectcalico/kube-controllers/pull/461) (@fcuello-fudo)
-- kube-controllers can now automatically provision host endpoints for nodes in the cluster [kube-controllers #458](https://github.com/projectcalico/kube-controllers/pull/458) (@lmm)
-- Kubernetes network tutorials updated for v1.18. [calico #3447](https://github.com/projectcalico/calico/pull/3447) (@tmjd)
-- With OpenShift install time resources can be created. This means Calico resources can be created before the Calico components are started. [calico #3338](https://github.com/projectcalico/calico/pull/3338) (@tmjd)
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_v3.15.0-release-notes.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_v3.15.0-release-notes.mdx
deleted file mode 100644
index fc4e9dfc7a..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_v3.15.0-release-notes.mdx
+++ /dev/null
@@ -1,77 +0,0 @@
-30 November 2022 (Chart Release 1)
-
-See Chart Release 0 for "What's New".
-
-## Bug Fixes
-
-- Fixed a bug that caused servers not to trust the certificate of Dex, preventing users from logging in using an external identity provider.
-- Fixed a bug where Intrusion Detection Controller failed to create Syslog events due to constraints in its security context.
-- Fixed a bug where Anomaly Detection API failed to write to disk when persistent storage is configured due to constraints in its security context.
-
-## Known issues
-
-- When using persistent storage for Anomaly Detection, currently only `tigera-anomaly-detection` can be used as a value for `StorageClassName`.
-- L7 log collection fails to deploy on CIS-hardened clusters. As a result, some cards in the Manager UI dashboard will not display any metrics.
-- Enabling L7-related Anomaly Detection jobs requires L7 to be enabled on the cluster. Anomaly Detection jobs crash-loop if L7 is not enabled.
-- Upgrading to $[prodname] v3.15.0 on Rancher/RKE from $[prodname] v3.13.0 currently requires manually terminating the calico-node container for the upgrade to proceed.
-- Mirantis MKE has provisional support due to upgrade issues on that platform. Please contact our support team for upgrades or deployments on Mirantis MKE.
-
----
-
-[Release archive]($[downloadsurl]/ee/archives/release-v3.15.0-v1.28.6.tgz) with Kubernetes manifests. Based on Calico v3.24.
-
-22 November 2022 (Chart Release 0)
-
-## What's new
-
-### Egress Gateway failure detection
-
-$[prodname] has improved the probes to check readiness and outbound connectivity of Egress Gateways. [Link to documentation](../../networking/egress/troubleshoot.mdx)
-
-### Egress Gateway pods are now non-privileged
-
-$[prodname] has rearchitected Egress Gateway pods to improve security and make use of a temporary init container to set up packet forwarding. [Link to documentation](../../networking/egress/egress-gateway-on-prem.mdx)
-
-### UI for Global Threat Feeds
-
-$[prodname] includes a new UI that can be used to manage and configure Global Threat Feeds. [Link to documentation](../../reference/resources/globalthreatfeed.mdx)
-
-### FIPS encryption mode
-
-$[prodname] has added a new FIPS 140-2 install mode that leverages FIPS-approved cryptographic algorithms and NIST-validated cryptographic modules.
-[Link to documentation](../../operations/fips.mdx)
-
-### Prometheus metrics for federation
-
-$[prodname] includes new Prometheus metrics to monitor the health of federation across clusters. [Link to documentation](../../reference/component-resources/kube-controllers/prometheus.mdx)
-
-### Namespace-based policy recommendations
-
-$[prodname] has improved its policy recommendation engine to add namespace-based recommendations. This enables operators to easily implement microsegmentation for namespaces. [Link to documentation](../../network-policy/generate-policy-recommendation.mdx#policy-recommendations-when-and-why)
-
-### New and improved Dashboards
-
-$[prodname] includes new and improved Dashboards that enable operators to define cluster- and namespace-scoped dashboards with new modules for policy usage, application layer and DNS metrics, and much more. [Link to documentation](/visibility/get-started-cem)
-
-### Included updates from Calico OSS
-
-$[prodname] also includes new features and fixes from Calico OSS. For more details on these changes please see the [release notes here.](https://projectcalico.docs.tigera.io/archive/v3.24/release-notes)
-
-### Security update: allow-tigera tier
-
-The Tigera Operator is now responsible for maintaining the allow-tigera tier (responsible for enabling and securing traffic flows for $[prodname] components). This means that tigera-operator will create the allow-tigera tier and its policies based on your cluster type (if not already present), and continuously monitor them to ensure they match the expected state defined by Tigera.
-Any edits or deletions to the allow-tigera tier and its policies will be automatically reverted by the Operator, protecting you from potential inadvertent changes that disrupt Tigera component operations.
-During $[prodname] upgrades, as new Tigera components are added and existing components evolve, the Operator will update the tier and its policies accordingly to ensure the traffic required for these changes is allowed.
-
-### Action item: Customers who have modified the allow-tigera tier in previous releases
-
-As of this release, the allow-tigera tier will be managed and maintained by Tigera, and edits to this tier are no longer supported. If you have made edits to the tier and its policies, and you wish to retain your changes prior to upgrade, you must take action. See [Change allow-tigera behaviour](../../network-policy/policy-tiers/allow-tigera.mdx) for details on how you can retain your changes by representing them as policies in an adjacent tier. You will need to determine what edits you have made and translate them to adjacent policies accordingly. Reach out to Support if you require assistance on this migration.
-
-## Known issues
-
-- Intrusion Detection Controller fails to create Syslog events due to constraints in its security context. A patch for this issue will be included in v3.15.1.
-- Anomaly Detection API fails to write to disk when persistent storage is configured due to constraints in its security context. A patch for this issue is included in v3.15.1.
-- L7 log collection fails to deploy on CIS-hardened clusters. As a result, some cards in the Manager UI dashboard will not display any metrics.
-- Enabling L7-related Anomaly Detection jobs requires L7 to be enabled on the cluster. Anomaly Detection jobs crash-loop if L7 is not enabled.
-- Upgrading to $[prodname] v3.15.0 on Rancher/RKE from $[prodname] v3.13.0 currently requires manually terminating the calico-node container for the upgrade to proceed.
-- Mirantis MKE has provisional support due to upgrade issues on that platform. Please contact our support team for upgrades or deployments on Mirantis MKE.
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_v3.16.0-release-notes.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_v3.16.0-release-notes.mdx
deleted file mode 100644
index 788cbfb1e7..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_v3.16.0-release-notes.mdx
+++ /dev/null
@@ -1,73 +0,0 @@
-27 Aug 2020
-
-### eBPF is generally available
-
-We [introduced](https://www.projectcalico.org/introducing-the-calico-ebpf-dataplane/) tech-preview support for the eBPF dataplane in Calico v3.13. The eBPF dataplane has several advantages over the Linux networking dataplane including: higher throughput, lower CPU usage, and native Kubernetes services support. With Calico v3.16, eBPF support is now GA! Check out the [guide](/operations/enabling-bpf/) to try it out.
-
-### Windows support
-
-Calico for Windows is open-source! Calico for Windows supports Kubernetes networking using VXLAN and enforces network policy for Windows workloads. Try out our [quickstart guide](/windows-calico/quickstart) to get a Calico for Windows cluster up and running!
-
-### BGP Community Advertisement
-
-Calico now supports BGP communities! Check out the BGP configuration resource [reference](../../reference/resources/bgpconfig.mdx#communities) for more details. We've also added custom BGP port configuration.
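-
-As a rough sketch of the new fields (the community name, value, and prefix below are illustrative placeholders, not defaults):
-
-```bash
-kubectl apply -f - <<EOF
-apiVersion: projectcalico.org/v3
-kind: BGPConfiguration
-metadata:
-  name: default
-spec:
-  communities:
-    - name: bgp-large-community
-      value: 63400:300:100
-  prefixAdvertisements:
-    - cidr: 172.218.4.0/26
-      communities:
-        - bgp-large-community
-EOF
-```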
-
-### Bug fixes
-
-- Adding support for monitoring node IP addresses/subnets changes. [node #554](https://github.com/projectcalico/node/pull/554) (@realgaurav)
-- Don't fail if not authorized to access configmaps [node #541](https://github.com/projectcalico/node/pull/541) (@caseydavenport)
-- Always auto-detect node IP address & subnet. [node #531](https://github.com/projectcalico/node/pull/531) (@realgaurav)
-- Fix that calico/node required IP auto detection to be enabled [node #513](https://github.com/projectcalico/node/pull/513) (@krisiasty)
-- In BPF mode, fix that packets could be dropped if the UDP/TCP header didn't fit in the SKB's head buffer. [felix #2462](https://github.com/projectcalico/felix/pull/2462) (@fasaxc)
-- In BPF mode, ensure that the host is always reachable, even if the conntrack table gets full. [felix #2456](https://github.com/projectcalico/felix/pull/2456) (@tomastigera)
-- In BPF mode, fix file descriptor leaks. [felix #2455](https://github.com/projectcalico/felix/pull/2455) (@fasaxc)
-- Fix that the async_calc_graph health watchdog could time out while the calc graph was blocked sending its output downstream. [felix #2451](https://github.com/projectcalico/felix/pull/2451) (@fasaxc)
-- Fix route_table.go slow retries (and reduce log spam) when a route is moved from one interface to another. [felix #2448](https://github.com/projectcalico/felix/pull/2448) (@fasaxc)
-- Reduce log spam when an interface is removed from the dataplane. [felix #2447](https://github.com/projectcalico/felix/pull/2447) (@fasaxc)
-- In BPF mode, Felix now correctly handles the case where a workload endpoint interface is recreated with the same name. [felix #2431](https://github.com/projectcalico/felix/pull/2431) (@fasaxc)
-- Felix no longer logs "Wireguard disabled" in its dataplane resolution loop. [felix #2420](https://github.com/projectcalico/felix/pull/2420) (@fasaxc)
-- Fix that libcalico-go could emit a nil Node resource resulting in a memory leak in Typha and errors in Felix. [libcalico-go #1291](https://github.com/projectcalico/libcalico-go/pull/1291) (@fasaxc)
-
-### Other changes
-
-- Add support for BGP communities and configurable BGP ports [libcalico-go #1262](https://github.com/projectcalico/libcalico-go/pull/1262) (@Suraiya-Hameed)
-- Calico IPAM support for Windows nodes [libcalico-go #1276](https://github.com/projectcalico/libcalico-go/pull/1276) (@song-jiang)
-- Reintroduce Windows operating system support [felix #2443](https://github.com/projectcalico/felix/pull/2443) (@song-jiang)
-- calico/node's security has been improved by removing as many unneeded packages, binaries and libraries from the base image as possible. [node #525](https://github.com/projectcalico/node/pull/525) (@fasaxc)
-- A new IP/interface detection method `cidr` is added. The syntax (for example, for the environment variable `IP_AUTODETECTION_METHOD`) is `cidr=<cidr>(,<cidr>)*`. [node #518](https://github.com/projectcalico/node/pull/518) (@mandelsoft)
-- Upgrade to golang 1.14 [typha #385](https://github.com/projectcalico/typha/pull/385) (@Brian-McM)
-- Upgrade to Golang 1.14 [felix #2437](https://github.com/projectcalico/felix/pull/2437) (@Brian-McM)
-- Fix incorrect parsing of pod CIDR when using host-local IPAM [libcalico-go #1278](https://github.com/projectcalico/libcalico-go/pull/1278) (@caseydavenport)
-- Previously, Felix had a fixed 10s timer on which it resynced its list of local interfaces with the dataplane. To reduce CPU usage, the timer has been increased to 90s by default and a config parameter (InterfaceRefreshInterval) added to control it. [felix #2433](https://github.com/projectcalico/felix/pull/2433) (@fasaxc)
-- Connections to services without endpoints are now properly rejected in iptables dataplane mode. The fix required moving the iptables ACCEPT rule to the end of the filter FORWARD chain; if you have your own rules in that chain then please check that they do not drop or reject pod traffic before it reaches the ACCEPT rule. [felix #2424](https://github.com/projectcalico/felix/pull/2424) (@caseydavenport)
-- In BPF mode, traffic to unknown workload interfaces is now blocked (as long as Felix was running long enough to insert its policing rules). [felix #2423](https://github.com/projectcalico/felix/pull/2423) (@fasaxc)
-- In BPF mode, Felix now attaches programs in parallel for improved performance. [felix #2410](https://github.com/projectcalico/felix/pull/2410) (@fasaxc)
-- In BPF mode, Felix now collects the BPF verifier log only on retry for increased performance and prevention of log buffer size issues. [felix #2429](https://github.com/projectcalico/felix/pull/2429) (@fasaxc)
-- In BPF mode, Felix now rate-limits stale BPF map cleanup to save CPU. [felix #2428](https://github.com/projectcalico/felix/pull/2428) (@fasaxc)
-- In BPF mode, Felix now detects BPF support on Red Hat kernels with backports as well as generic kernels. [felix #2409](https://github.com/projectcalico/felix/pull/2409) (@sridhartigera)
-- In BPF mode, Felix now uses a more efficient algorithm to resync the Kubernetes services with the dataplane. This speeds up the initial sync (especially with large numbers of services). [felix #2401](https://github.com/projectcalico/felix/pull/2401) (@tomastigera)
-- eBPF dataplane support for encryption via Wireguard [felix #2389](https://github.com/projectcalico/felix/pull/2389) (@nelljerram)
-- Reject connections to services with no backends [felix #2380](https://github.com/projectcalico/felix/pull/2380) (@sridhartigera)
-- Implementation to handle setting source-destination-check for AWS EC2 instances. [felix #2381](https://github.com/projectcalico/felix/pull/2381) (@realgaurav)
-- In BPF mode, Felix now applies policy updates without reapplying the BPF programs; this gives a performance boost and closes a window where traffic was not policed. [felix #2363](https://github.com/projectcalico/felix/pull/2363) (@fasaxc)
-- In Kubernetes API Datastore mode, record when a pod is deleted from the network; this prevents pods that are stuck in Terminating state from being treated as active pods, resulting in duplicate IP errors and incorrect IP set calculation. [libcalico-go #1284](https://github.com/projectcalico/libcalico-go/pull/1284) (@fasaxc)
-- Upgrade to golang 1.14 [libcalico-go #1271](https://github.com/projectcalico/libcalico-go/pull/1271) (@Brian-McM)
-- Maintaining original next hop on specific bgppeer [libcalico-go #1266](https://github.com/projectcalico/libcalico-go/pull/1266) (@gunboe)
-- New Felix configuration parameter "FeatureDetectOverride" allows for overriding iptables feature detection. [libcalico-go #1264](https://github.com/projectcalico/libcalico-go/pull/1264) (@uablrek)
-- Speed up allocation of new IPAM blocks when most blocks are already in-use. [libcalico-go #1248](https://github.com/projectcalico/libcalico-go/pull/1248) (@caseydavenport)
-- Handle backend watch, if upstream closes channel[ClosedByRemote] [libcalico-go #1247](https://github.com/projectcalico/libcalico-go/pull/1247) (@krishgobinath)
-- Upgrade to Golang 1.14 [pod2daemon #43](https://github.com/projectcalico/pod2daemon/pull/43) (@Brian-McM)
-- Remove unnecessary packages from docker image [pod2daemon #42](https://github.com/projectcalico/pod2daemon/pull/42) (@gianlucam76)
-- Add support for BGP communities and configurable BGP ports [confd #341](https://github.com/projectcalico/confd/pull/341) (@Suraiya-Hameed)
-- Add configurable file logging. [cni-plugin #927](https://github.com/projectcalico/cni-plugin/pull/927) (@mgleung)
-- Upgrade to golang 1.14 [cni-plugin #921](https://github.com/projectcalico/cni-plugin/pull/921) (@Brian-McM)
-- Handle panics in the CNI plugin more gracefully [cni-plugin #913](https://github.com/projectcalico/cni-plugin/pull/913) (@caseydavenport)
-- install-cni will now check if the cni.conf file is a valid json document [cni-plugin #904](https://github.com/projectcalico/cni-plugin/pull/904) (@johscheuer)
-- The Calico CNI plugin now disables duplicate address detection on IPv6 interfaces. This avoids the associated delay. [cni-plugin #895](https://github.com/projectcalico/cni-plugin/pull/895) (@fasaxc)
-- Support projectcalico.org/namespace label for Mesos to enable namespaced workload endpoints [cni-plugin #886](https://github.com/projectcalico/cni-plugin/pull/886) (@vixns)
-- Enable CNI plugin logging to disk by default [calico #3881](https://github.com/projectcalico/calico/pull/3881) (@mgleung)
-- Update version of flannel included in documentation to v0.12.0 [calico #3873](https://github.com/projectcalico/calico/pull/3873) (@caseydavenport)
-
-### Known issues
-
-- Calico CNI binaries panic unless they use the canonical binary name [cni-plugin #941](https://github.com/projectcalico/cni-plugin/issues/941)
diff --git a/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_v3.16.1-release-notes.mdx b/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_v3.16.1-release-notes.mdx
deleted file mode 100644
index 5ab00e3744..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/_includes/release-notes/_v3.16.1-release-notes.mdx
+++ /dev/null
@@ -1,16 +0,0 @@
-08 Sep 2020
-
-### Bug fixes
-
-- Fix population of etcd certificates in CNI config [cni-plugin #949](https://github.com/projectcalico/cni-plugin/pull/949) (@caseydavenport)
-- Resolves an issue on nodes whose Kubernetes node name does not exactly match the system hostname [cni-plugin #943](https://github.com/projectcalico/cni-plugin/pull/943) (@nelljerram)
-- Fix flannel migration issues when running on Rancher [kube-controllers #506](https://github.com/projectcalico/kube-controllers/pull/506) (@songjiang)
-- Fix `kubectl exec` format for migration controller [kube-controllers #504](https://github.com/projectcalico/kube-controllers/pull/504) (@songjiang)
-- Fix flannel migration for clusters with multiple control plane nodes. [kube-controllers #503](https://github.com/projectcalico/kube-controllers/pull/503) (@caseydavenport)
-- Fix datastore migration of KubeControllerConfiguration [calico #3976](https://github.com/projectcalico/calico/pull/3976) (@mgleung)
-
-### Other changes
-
-- Add knobs to explicitly disable adding drop rules for encapsulated packets originating from workloads. [felix #2486](https://github.com/projectcalico/felix/pull/2486) (@doublek)
-- Add FelixConfiguration parameters to explicitly allow encapsulated packets from workloads. [libcalico-go #1301](https://github.com/projectcalico/libcalico-go/pull/1301) (@doublek)
-- In BPF mode, Felix no longer needs configuration to avoid detecting EKS workloads as host interfaces. [felix #2471](https://github.com/projectcalico/felix/pull/2471) (@fasaxc)
diff --git a/calico-cloud_versioned_docs/version-20-1/about/index.mdx b/calico-cloud_versioned_docs/version-20-1/about/index.mdx
deleted file mode 100644
index 0a2b534ab5..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/about/index.mdx
+++ /dev/null
@@ -1,39 +0,0 @@
----
-description: A high-level description of Calico Cloud.
----
-
-# About Calico Cloud
-
-## What is $[prodname]?
-
-$[prodname] is a security solution for cloud-native applications running on containers and Kubernetes. It is the SaaS, pay-as-you-go version of Calico Enterprise that includes the core Calico Open Source (Calico CNI and network policy).
-
-![calico-cloud](/img/calico/calico-cloud.svg)
-
-Beyond the **Kubernetes security** features that you get from Calico Enterprise and Calico Open Source, $[prodname] adds these **container security** solutions:
-
-- **Image Assurance**
-
-Automated image scanning and blocking so you can monitor and assess workloads for new and existing CVEs 24/7.
-
-- **Container threat defense**
-
-Fully automated protection against known and unknown attacks (network or container-based).
-
-## Best fit
-
-The best fit for $[prodname] is small teams who need to manage the full spectrum of compliance in a web-based console. To jumpstart learning for teams, $[prodname] provides:
-
-- Built-in onboarding tutorials in the web console
-- Automatic policy recommendations to make it easy for developers to secure their microservices and applications from day one
-- Hands-on training from Customer Support during the trial period so you can see how to realize your use cases in short order
-- Self-service workshops and training to speed up adoption
-
-## Need more info?
-
-- For specific $[prodname] features, see [Tigera product comparison](../about/product-comparison.mdx)
-- To connect your cluster to $[prodname] in 15 minutes, [start a free trial](https://auth.calicocloud.io/u/signup/identifier?state=hKFo2SB3ekhybXN1TGdxTkZTUWIwQV9BSzNlaHBEUk0wMENJdKFur3VuaXZlcnNhbC1sb2dpbqN0aWTZIEE5b2NkREs1eWZKR0twc0ZWZmh2LWZCZEZxb2ZRNkJOo2NpZNkgc3NJQkNFdEdkZFpLNlVubDNOYWl2ZzhrY2RmcWd6dFE)
-- [Calico Cloud pricing](https://www.tigera.io/tigera-products/calico-cloud-pricing/)
-- [Connect a cluster to $[prodname] documentation](../get-started/connect-cluster.mdx)
-- [Image assurance documentation](../image-assurance)
-- [Container threat defense documentation](../threat/container-threat-detection.mdx)
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/about/product-comparison.mdx b/calico-cloud_versioned_docs/version-20-1/about/product-comparison.mdx
deleted file mode 100644
index 9d20aeb6a6..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/about/product-comparison.mdx
+++ /dev/null
@@ -1,98 +0,0 @@
----
-description: Describes Tigera products and provides a feature comparison table.
----
-
-import { CheckIcon } from '@chakra-ui/icons';
-
-# Tigera product comparison
-
-## Calico Open Source
-
-The base product that comprises both Calico Enterprise and Calico Cloud. It provides the core networking and network policy features.
-
-![calico-open-source](/img/calico/calico-open-source.svg)
-
-## Calico Enterprise
-
-Includes the Calico Open Source core networking and network policy, but adds advanced features for networking, network policy, visibility and troubleshooting, threat defense, and compliance reports.
-
-![calico-enterprise](/img/calico/calico-enterprise.svg)
-
-## Calico Cloud
-
-The SaaS version of Calico Enterprise. It adds Image Assurance to scan and detect vulnerabilities in images, and container threat defense to detect malware. It also adds onboarding tutorials, and eliminates the cost to manage Elasticsearch logs and storage that comes with Calico Enterprise.
-
-![calico-cloud](/img/calico/calico-cloud.svg)
-
-What is the best fit for you? It depends on your needs. The following table provides a high-level comparison.
-
-| Product | Cost and support | Best fit |
-| ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Calico Open Source | Free, community-supported | **Users** who want best-in-class networking and network policy capabilities for Kubernetes without any costs. |
-| Calico Enterprise | Paid subscription | **Enterprise teams** who need full control to customize their networking security deployment to meet regulatory and compliance requirements for Kubernetes at scale. Teams who want Tigera Customer Support for day-zero to production best practices, custom training and workshops, and Solution Architects to customize solutions. |
-| Calico Cloud       | Free trial with hands-on training from Customer Support, then pay-as-you-go with self-service training. Also offered as an annual subscription.  | **Small teams** who need to manage the full spectrum of compliance in a web-based console for novice users:<br/>- Secure clusters, pods, and applications<br/>- Scan images for vulnerabilities<br/>- Web-based UI for visibility to troubleshoot Kubernetes<br/>- Detect and mitigate threats<br/>- Run compliance reports<br/><br/>**Enterprise teams** who want to scale their Calico Enterprise on-premises deployments by providing more self-service to developers. |
-
-## Product comparison by feature
-
diff --git a/calico-cloud_versioned_docs/version-20-1/compliance/compliance-reports-cis.mdx b/calico-cloud_versioned_docs/version-20-1/compliance/compliance-reports-cis.mdx
deleted file mode 100644
index b1dad2c166..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/compliance/compliance-reports-cis.mdx
+++ /dev/null
@@ -1,184 +0,0 @@
----
-description: Configure reports to assess compliance for all assets in a Kubernetes cluster.
----
-
-# Configure CIS benchmark reports
-
-## Big picture
-
-Use the $[prodname] Kubernetes CIS benchmark report to assess compliance for all assets in a Kubernetes cluster.
-
-## Value
-
-A standard requirement for an organization’s security and compliance posture is to assess your Kubernetes clusters against CIS benchmarks. The $[prodname] Kubernetes CIS benchmark report provides this comprehensive view into your Kubernetes clusters while strengthening your threat detection capability by looking beyond networking data.
-
-## Concepts
-
-### Default settings and configuration
-
-During $[prodname] installation, each node starts a pod named `compliance-benchmarker`. A preconfigured Kubernetes CIS benchmark report is generated every hour. You can view the report in **Manager UI** under **Compliance**, **Compliance Reports**, or download it in .csv format.
-
-To schedule the CIS benchmark report or change settings, use the **global report** resource. Global reports are configured as YAML files and are applied using `kubectl`.
-
-### Best practices
-
-We recommend that you review the CIS benchmark best practices for securing cluster component configurations here: [CIS benchmarks downloads](https://learn.cisecurity.org/benchmarks).
-
-## Before you begin
-
-**Required**
-
-* You have [enabled compliance reports](../compliance/enable-compliance)
-
-**Limitations**
-
-The CIS benchmark runs only on nodes where $[prodname] is running. This limitation may exclude control plane nodes in some managed cloud platforms (AKS, EKS, GKE). Because users have limited control over the installation of control plane nodes in managed cloud platforms, these reports may be of limited use for cloud users.
-
-## How to
-
-- [Configure and schedule CIS benchmark reports](#configure-and-schedule-cis-benchmark-reports)
-- [View report generation status](#view-report-generation-status)
-- [Review and address CIS benchmark results](#review-and-address-cis-benchmark-results)
-- [Manually run reports](#manually-run-reports)
-- [Troubleshooting](#troubleshooting)
-
-### Configure and schedule CIS benchmark reports
-
-Verify that the `compliance-benchmarker` is running and the `cis-benchmark` report type is installed.
-
-```bash
-kubectl get -n tigera-compliance daemonset compliance-benchmarker
-kubectl get globalreporttype cis-benchmark
-```
-
-In the following example, we use a **GlobalReport** with CIS benchmark fields to schedule and filter results. The report is scheduled to run at midnight of the next day (in UTC), and the benchmark items 1.1.4 and 1.2.5 will be omitted from the results.
-
-| **Fields** | **Description** |
-| -------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| schedule             | The start and end time of the report using [crontab format](https://en.wikipedia.org/wiki/Cron). To allow for archiving, reports are generated approximately 30 minutes after the end time. A single report configuration is limited to generating a maximum of two reports per hour.  |
-| highThreshold | **Optional**. Integer percentage value that determines the lower limit of passing tests to consider a node as healthy. Default: 100 |
-| medThreshold | **Optional**. Integer percentage value that determines the lower limit of passing tests to consider a node as unhealthy. Default: 50 |
-| includeUnscoredTests | **Optional**. Boolean value. When false, applies a filter to exclude tests that are marked as “Unscored” by the CIS benchmark standard; when true, these tests are included in the report. Default: true |
-| numFailedTests | **Optional**. Integer value that sets the number of tests to display in the Top-failed Tests section of the CIS benchmark report. Default: 5 |
-| resultsFilter | **Optional**. An include or exclude filter to apply on the test results that will appear on the report. |
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalReport
-metadata:
- name: daily-cis-results
- labels:
- deployment: production
-spec:
- reportType: cis-benchmark
- schedule: 0 0 * * *
- cis:
- highThreshold: 100
- medThreshold: 50
- includeUnscoredTests: true
- numFailedTests: 5
- resultsFilters:
- - benchmarkSelection: { kubernetesVersion: '1.13' }
- exclude: ['1.1.4', '1.2.5']
-```
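-
-Assuming you save the manifest above as `daily-cis-results.yaml`, apply it and confirm that the report is registered:
-
-```bash
-kubectl apply -f daily-cis-results.yaml
-kubectl get globalreports.projectcalico.org daily-cis-results
-```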
-
-### View report generation status
-
-To view the status of a report, you must use the `kubectl` command. For example:
-
-```bash
-kubectl get globalreports.projectcalico.org daily-cis-results -o yaml
-```
-
-In a report, the job status types are:
-
-- **lastScheduledReportJob**:
- The most recently scheduled job for generating the report. Because reports are scheduled in order, the “end time” of
- this report will be the “start time” of the next scheduled report.
-- **activeReportJobs**:
- Default = allows up to 5 concurrent report generation jobs.
-- **lastFailedReportJobs**:
- Default = keeps the 3 most recent failed jobs and deletes older ones. A single report generation job will be retried
- up to 6 times (by default) before it is marked as failed.
-- **lastSuccessfulReportJobs**:
- Default = keeps the 2 most recent successful jobs and deletes older ones.
-
-#### Change the default report generation time
-
-By default, reports are generated 30 minutes after the end of the reporting interval, to ensure that all of the audit data is archived.
-(This delay does not affect the "start/end time" of the data collected for a report.)
-
-You can adjust this delay for cases like initial report testing, demoing a report, or manually creating a report that is not counted in global report status.
-
-To change the delay, go to the installation manifest, and uncomment and set the environment variable
-`TIGERA_COMPLIANCE_JOB_START_DELAY`. Specify the value as a [Duration string][parse-duration], for example `10m`.
-
-### Review and address CIS benchmark results
-
-We recommend the following approach to CIS benchmark reports results:
-
-1. Download the Kubernetes CIS benchmarks and export your full CIS benchmark results in .csv format.
-1. In the compliance dashboard, review the "Top-Failed Tests" section to identify which tests are the most problematic.
-1. Cross-reference the top-failed tests to identify which nodes are failing that test.
-1. Look up those tests in the [Kubernetes benchmark document](https://downloads.cisecurity.org/#/) and follow the remediation steps to resolve the failure.
-1. Discuss with your infrastructure and security team if this remediation is viable within your organization.
-1. If so, update your nodes with the fix and ensure that the test passes on the next generation of the report.
-1. If the fix is not viable but is an acceptable risk to take within the organization, configure the report specification to exclude that test index so that it no longer appears in the report.
-1. If the fix is not viable and not an acceptable risk to take on, keep the failing test within the report so that your team is reminded to address the issue as soon as possible.
-
-### Manually run reports
-
-You can manually run reports at any time. For example, run a manual report:
-
-- To specify a different start/end time
-- If a scheduled report fails
-
-$[prodname] GlobalReport schedules Kubernetes Jobs which create a single-run pod to generate a report and store it in Elasticsearch. Because you need to run manual reports as a pod, you need higher permissions: allow `create` access for pods in namespace `tigera-compliance` using the `tigera-compliance-reporter` service account.
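-
-As a minimal sketch, permissions like these could be granted with a Role and RoleBinding (the role name and the user `jane` are placeholders):
-
-```bash
-# Allow the user to create pods in the tigera-compliance namespace
-kubectl create role manual-compliance-reports -n tigera-compliance --verb=create --resource=pods
-kubectl create rolebinding manual-compliance-reports -n tigera-compliance \
-  --role=manual-compliance-reports --user=jane
-```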
-
-To manually run a report:
-
-1. Download the pod template corresponding to your installation method.
- **Operator**
-
- ```bash
- curl $[filesUrl_CE]/manifests/compliance-reporter-pod-managed.yaml -o compliance-reporter-pod.yaml
- ```
-
-1. Edit the template as follows:
-
- - Edit the pod name if required.
- - If you are using your own docker repository, update the container image name with your repo and image tag.
- - Set the following environments according to the instructions in the downloaded manifest:
- - `TIGERA_COMPLIANCE_REPORT_NAME`
- - `TIGERA_COMPLIANCE_REPORT_START_TIME`
- - `TIGERA_COMPLIANCE_REPORT_END_TIME`
-
-1. Apply the updated manifest, and query the status of the pod to ensure it completes.
- Upon completion, the report is available in Manager UI.
-
- ```bash
- # Apply the compliance report pod
- kubectl apply -f compliance-reporter-pod.yaml
- # Query the status of the pod
- kubectl get pod -n=tigera-compliance
- ```
-
-:::note
-
-Manually-generated reports do not appear in GlobalReport status.
-
-:::
-
-### Troubleshooting
-
-**Problem**: Compliance reports can fail to generate if the `compliance-benchmarker` component cannot find the required `kubelet` or `kubectl` binaries to determine the Kubernetes version running on the cluster.
-
-**Solution or workaround**: If a node is running within a container (not running `kubelet` as a binary), make sure the `kubectl` binary is available in the `/usr/bin` directory.
-
-## Additional resources
-
-- For details on configuring and scheduling reports, see [Global reports](../reference/resources/globalreport.mdx)
-- For other predefined compliance reports, see [Compliance reports](../reference/resources/compliance-reports/index.mdx)
-
-[parse-duration]: https://golang.org/pkg/time/#ParseDuration
diff --git a/calico-cloud_versioned_docs/version-20-1/compliance/enable-compliance.mdx b/calico-cloud_versioned_docs/version-20-1/compliance/enable-compliance.mdx
deleted file mode 100644
index 5058a1367a..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/compliance/enable-compliance.mdx
+++ /dev/null
@@ -1,38 +0,0 @@
----
-description: Enable compliance reports to configure reports to assess compliance for all assets in a Kubernetes cluster.
----
-
-# Enable compliance reports
-
-## Big picture
-
-Enabling compliance reports improves the cluster's compliance posture. It involves generating compliance reports for Kubernetes clusters based on archived flow and audit logs for Calico Enterprise and Kubernetes resources. The process includes components for snapshotting configurations, generating reports, managing jobs, providing APIs with RBAC, and benchmarking security.
-
-## Value
-
-The compliance system consists of several key components that work together to ensure comprehensive compliance monitoring and reporting:
-
- - `compliance-snapshotter` : Lists required configurations and pushes snapshots to Elasticsearch, providing visibility into configuration changes.
- - `compliance-reporter` : Generates reports by analyzing configuration history, determining configuration evolution and identifying "worst-case outliers."
- - `compliance-controller` : Manages the creation, deletion, and monitoring of report generation jobs.
- - `compliance-server` : Offers API for report management and enforces RBAC.
- - `compliance-benchmarker` : Runs CIS Kubernetes Benchmark checks on each node to ensure secure deployment.
-
-### Enable compliance reports using kubectl
-
-* Create a compliance custom resource, named `tigera-secure`, in the cluster.
-
-  ```bash
-  kubectl apply -f - <<EOF
-  apiVersion: operator.tigera.io/v1
-  kind: Compliance
-  metadata:
-    name: tigera-secure
-  EOF
-  ```
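-
-After the Compliance resource is created, you can check that the compliance components described above come up in the `tigera-compliance` namespace, for example:
-
-```bash
-kubectl get pods -n tigera-compliance
-```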
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/compliance/encrypt-cluster-pod-traffic.mdx b/calico-cloud_versioned_docs/version-20-1/compliance/encrypt-cluster-pod-traffic.mdx
deleted file mode 100644
index 7b4352c7f4..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/compliance/encrypt-cluster-pod-traffic.mdx
+++ /dev/null
@@ -1,233 +0,0 @@
----
-description: Enable WireGuard for state-of-the-art cryptographic security between pods for Calico Enterprise clusters.
----
-
-# Encrypt data in transit
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-## Big picture
-
-Enable WireGuard to secure in-cluster pod traffic on the wire in a $[prodname] cluster.
-
-## Value
-
-When this feature is enabled, $[prodname] automatically creates and manages WireGuard tunnels between nodes providing transport-level security for inter-node, in-cluster pod traffic. WireGuard provides [formally verified](https://www.wireguard.com/formal-verification/) secure and [performant tunnels](https://www.wireguard.com/performance/) without any specialized hardware. For a deep dive in to WireGuard implementation, see this [white paper](https://www.wireguard.com/papers/wireguard.pdf).
-
-$[prodname] supports WireGuard encryption for both IPv4 and IPv6 traffic. These can be independently enabled in the FelixConfiguration resource: `wireguardEnabled`
-enables encrypting IPv4 traffic over an IPv4 underlay network and `wireguardEnabledV6` enables encrypting IPv6 traffic over an IPv6 underlay network.
-
-## Before you begin
-
-**Terminology**
-
- - Inter-node pod traffic: Traffic leaving a pod from one node destined to a pod on another node
- - Inter-node, host-network traffic: traffic generated by the node itself or a host-networked-pod destined to another node or host-networked-pod
- - Same-node pod traffic: Traffic between pods on the same node
-
-**Supported encryption**
-
-- Inter-node pod traffic: IPv4 only
-- Inter-node, host-network traffic (IPv4/IPv6): supported only on managed clusters deployed on EKS and AKS
-
-**Unsupported**
-
-- Encrypted same-node pod traffic
-- GKE
-- Using your own custom keys to encrypt traffic
-
-**Required**
-
-- On all nodes in the cluster that you want to participate in $[prodname] encryption, verify that the nodes' operating systems have [WireGuard installed](https://www.wireguard.com/install/).
-
- :::note
-
-  Some node operating systems do not support WireGuard, or do not have it installed by default. Enabling $[prodname] WireGuard encryption does not require all nodes to have WireGuard installed. However, traffic to or from a node that does not have WireGuard installed will not be encrypted.
-
- :::
-
-- IP addresses for every node in the cluster. This is required to establish secure tunnels between the nodes. $[prodname] can automatically do this using [IP autodetection methods](../networking/ipam/ip-autodetection.mdx).
-
-## How to
-
-- [Install WireGuard](#install-wireguard)
-- [Enable WireGuard for a cluster](#enable-wireguard-for-a-cluster)
-- [Verify encryption is enabled](#verify-encryption-is-enabled)
-- [Disable WireGuard for an individual node](#disable-wireguard-for-an-individual-node)
-- [Disable WireGuard for a cluster](#disable-wireguard-for-a-cluster)
-
-### Install WireGuard
-
-WireGuard is included in Linux 5.6+ kernels, and has been backported to earlier Linux kernels in some Linux distributions.
-
-Install WireGuard on cluster nodes using [instructions for your operating system](https://www.wireguard.com/install/). Note that you may need to reboot your nodes after installing WireGuard to make the kernel modules available on your system.
-
-For platforms that are not listed on the WireGuard installation page, use the following instructions before proceeding to [enabling WireGuard](#enable-wireguard-for-a-cluster).
-
-
-
-
-To install WireGuard on the default Amazon Machine Image (AMI):
-
-```bash
-sudo yum install kernel-devel-`uname -r` -y
-sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm -y
-sudo curl -o /etc/yum.repos.d/jdoss-wireguard-epel-7.repo https://copr.fedorainfracloud.org/coprs/jdoss/wireguard/repo/epel-7/jdoss-wireguard-epel-7.repo
-sudo yum install wireguard-dkms wireguard-tools -y
-```
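-
-After installing (and rebooting if needed), one way to confirm that the WireGuard kernel module is available on the node:
-
-```bash
-sudo modprobe wireguard
-lsmod | grep wireguard
-```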
-
-
-
-
-AKS cluster nodes run Ubuntu with a kernel that has WireGuard installed already, so there is no manual installation required.
-
-
-
-
-To install WireGuard for OpenShift v4.8:
-
-1. Install requirements:
-
- - [CoreOS Butane](https://coreos.github.io/butane/getting-started/)
- - [Openshift CLI](https://docs.openshift.com/container-platform/4.2/cli_reference/openshift_cli/getting-started-cli.html)
-
-1. Download and configure the tools needed for kmods.
-
-```bash
-FAKEROOT=$(mktemp -d)
-git clone https://github.com/tigera/kmods-via-containers
-cd kmods-via-containers
-make install FAKEROOT=${FAKEROOT}
-cd ..
-git clone https://github.com/tigera/kvc-wireguard-kmod
-cd kvc-wireguard-kmod
-make install FAKEROOT=${FAKEROOT}
-cd ..
-```
-
-1. Configure/edit `${FAKEROOT}/root/etc/kvc/wireguard-kmod.conf`.
-
-   a. Set the URLs for the `KERNEL_CORE_RPM`, `KERNEL_DEVEL_RPM`, and `KERNEL_MODULES_RPM` packages in the conf file `$FAKEROOT/etc/kvc/wireguard-kmod.conf`. Obtain copies of the `kernel-core`, `kernel-devel`, and `kernel-modules` RPMs from [RedHat Access](https://access.redhat.com/downloads/content/package-browser) and host them on an HTTP file server that is reachable by your OCP workers (see the example fragment after this procedure).
-
- b. For help configuring `kvc-wireguard-kmod/wireguard-kmod.conf` and WireGuard version to kernel version compatibility, see the [kvc-wireguard-kmod README file](https://github.com/tigera/kvc-wireguard-kmod#quick-config-variables-guide).
-
-1. Get RHEL Entitlement data from your own RHEL8 system from a host in your cluster.
-
- ```bash
- tar -czf subs.tar.gz /etc/pki/entitlement/ /etc/rhsm/ /etc/yum.repos.d/redhat.repo
- ```
-
-1. Copy the `subs.tar.gz` file to your workspace and then extract the contents using the following command.
-
- ```bash
- tar -x -C ${FAKEROOT}/root -f subs.tar.gz
- ```
-
-1. Transpile your machine config using [CoreOS Butane](https://coreos.github.io/butane/getting-started/).
-
- ```bash
- cd kvc-wireguard-kmod
- make ignition FAKEROOT=${FAKEROOT} > mc-wg.yaml
- ```
-
-1. With the KUBECONFIG set for your cluster, run the following command to apply the MachineConfig which will install WireGuard across your cluster.
- ```bash
- oc create -f mc-wg.yaml
- ```
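-
-A hypothetical fragment of `$FAKEROOT/etc/kvc/wireguard-kmod.conf` (the server hostname and RPM versions are placeholders; use RPMs that match your workers' kernel):
-
-```bash
-KERNEL_CORE_RPM=http://rpm-host.example.internal/kernel-core-4.18.0-305.el8_4.x86_64.rpm
-KERNEL_DEVEL_RPM=http://rpm-host.example.internal/kernel-devel-4.18.0-305.el8_4.x86_64.rpm
-KERNEL_MODULES_RPM=http://rpm-host.example.internal/kernel-modules-4.18.0-305.el8_4.x86_64.rpm
-```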
-
-
-
-
-### Enable WireGuard for a cluster
-
-Enable IPv4 WireGuard encryption across all the nodes using the following command.
-
-```bash
-kubectl patch felixconfiguration default --type='merge' -p '{"spec":{"wireguardEnabled":true}}'
-```
-
-Enable IPv6 WireGuard encryption across all the nodes using the following command.
-
-```bash
-kubectl patch felixconfiguration default --type='merge' -p '{"spec":{"wireguardEnabledV6":true}}'
-```
-
-To enable both IPv4 and IPv6 WireGuard encryption across all the nodes, use the following command.
-
-```bash
-kubectl patch felixconfiguration default --type='merge' -p '{"spec":{"wireguardEnabled":true,"wireguardEnabledV6":true}}'
-```
-
-
-
-:::note
-
-The above command can be used to change other WireGuard attributes. For a list of other WireGuard parameters and configuration evaluation, see the [Felix configuration](../reference/resources/felixconfig.mdx#felix-configuration-definition).
-
-:::
-
-We recommend that you review and modify the MTU used by $[prodname] networking when WireGuard is enabled to increase network performance. Follow the instructions in the [Configure MTU to maximize network performance](../networking/configuring/mtu.mdx) guide to set the MTU to a value appropriate for your network.
-
-### Verify encryption is enabled
-
-To verify that the nodes are configured for WireGuard encryption, check the node status set by Felix using `kubectl`. For example:
-
-```bash
-kubectl get node -o yaml
-...
-kind: Node
-metadata:
- annotations:
- projectcalico.org/WireguardPublicKey: jlkVyQYooZYzI2wFfNhSZez5eWh44yfq1wKVjLvSXgY=
-...
-```
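-
-To check a specific node (using `my-node`, as in the example below), you can read the annotation directly, for example:
-
-```bash
-kubectl get node my-node -o jsonpath='{.metadata.annotations.projectcalico\.org/WireguardPublicKey}'
-```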
-
-### Enable WireGuard statistics
-
-Since v3.11.1, WireGuard statistics are automatically enabled when you enable WireGuard using the settings described above.
-
-### View WireGuard statistics
-
-To view WireGuard statistics in Manager UI, you must enable them. From the left navbar, click **Dashboard**, and then click the Layout Settings icon.
-
-![Wireguard Dashboard Toggle](/img/calico-enterprise/wireguard/stats-toggle.png)
-
-### Disable WireGuard for an individual node
-
-To disable WireGuard on a specific node with WireGuard installed, modify the node-specific Felix configuration. For example, to turn off encryption for pod traffic on node `my-node`, use the following command. This command disables WireGuard for both IPv4 and IPv6; modify it accordingly if you are disabling only one IP version:
-
-```bash
-cat <<EOF | kubectl apply -f -
-apiVersion: projectcalico.org/v3
-kind: FelixConfiguration
-metadata:
-  name: node.my-node
-spec:
-  wireguardEnabled: false
-  wireguardEnabledV6: false
-EOF
-```
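-
-### Disable WireGuard for a cluster
-
-To disable WireGuard across all nodes, reverse the patch used to enable it. For example, to turn off both IPv4 and IPv6 WireGuard encryption cluster-wide:
-
-```bash
-kubectl patch felixconfiguration default --type='merge' -p '{"spec":{"wireguardEnabled":false,"wireguardEnabledV6":false}}'
-```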
-
-
-
-
-
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/compliance/overview.mdx b/calico-cloud_versioned_docs/version-20-1/compliance/overview.mdx
deleted file mode 100644
index 13f8693399..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/compliance/overview.mdx
+++ /dev/null
@@ -1,368 +0,0 @@
----
-description: Get the reports for regulatory compliance on Kubernetes workloads and environments.
----
-
-# Schedule and run compliance reports
-
-## Big picture
-
-Schedule and run compliance reports to assess Kubernetes workloads and environments for regulatory compliance.
-
-## Value
-
-Compliance tools that rely on periodic snapshots do not provide accurate assessments of Kubernetes workloads against your compliance standards. The $[prodname] compliance dashboard and reports provide a complete inventory of regulated workloads, along with evidence of enforcement of network controls for these workloads. Additionally, audit reports are available to see changes to any network security controls.
-
-## Concepts
-
-### Compliance reports at a glance
-
-Compliance reports are based on archived flow logs and audit logs for all of your $[prodname] resources, plus any audit logs you've configured for Kubernetes resources in the Kubernetes API server:
-
-- Pods
-- Host endpoints
-- Service accounts
-- Namespaces
-- Kubernetes service endpoints
-- Global network sets
-- Calico and Kubernetes network policies
-- Global network policies
-
-Compliance reports provide the following high-level information:
-
-- **Protection**
-
- - Endpoints explicitly protected using ingress or egress policy
- - Endpoints with Envoy enabled
-
-- **Policies and services**
-
- - Policies and services associated with endpoints
- - Policy audit logs
-
-- **Traffic**
- - Allowed ingress/egress traffic to/from namespaces
- - Allowed ingress/egress traffic to/from the internet
-
-![compliance-reporting](/img/calico-enterprise/compliance-reporting.png)
-
-## Before you begin
-
-**Unsupported**
-
-- AKS
-- GKE
-- OpenShift
-- TKG
-
-**Required**
-
-* You have [enabled compliance reports](../compliance/enable-compliance)
-
-- Ensure that all nodes in your Kubernetes clusters are time-synchronized using NTP or similar (for accurate audit log timestamps)
-
-- [Configure audit logs for Kubernetes resources](../visibility/elastic/audit-overview.mdx)
-
- You must configure audit logs for Kubernetes resources through the Kubernetes API to get a complete view of all resources.
-
-## How To
-
-- [Configure report permissions](#configure-report-permissions)
-- [Configure and schedule reports](#configure-and-schedule-reports)
-- [View report generation status](#view-report-generation-status)
-- [Run reports](#run-reports)
-
-### Configure report permissions
-
-Report permissions are granted using the standard Kubernetes RBAC based on ClusterRole and ClusterRoleBindings. The following table outlines the required RBAC verbs for each resource type for specific user actions.
-
-| **Action** | **globalreporttypes** | **globalreports** | **globalreports/status** |
-| ------------------------------------------------------- | ------------------------------- | --------------------------------- | ------------------------ |
-| Manage reports (create/modify/delete) | | \* | get |
-| View status of report generation through kubectl | | get | get |
-| List the generated reports and summary status in the UI | | list + get (for required reports) | |
-| Export the generated reports from the UI | get (for the particular report) | get (for required reports) | |
-
-The following sample manifest creates RBAC for three users: Paul, Candice and David.
-
-- Paul has permissions to create/modify/delete the report schedules and configuration, but does not have permission to export generated reports from the UI.
-- Candice has permissions to list and export generated reports from the UI, but cannot modify the report schedule or configuration.
-- David has permissions to list and export generated `dev-inventory` reports from the UI, but cannot list or download other reports, nor modify the report
- schedule or configuration.
-
-```yaml
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: tigera-compliance-manage-report-config
-rules:
- - apiGroups: ['projectcalico.org']
- resources: ['globalreports']
- verbs: ['*']
- - apiGroups: ['projectcalico.org']
- resources: ['globalreports/status']
- verbs: ['get', 'list', 'watch']
-
----
-kind: ClusterRoleBinding
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: tigera-compliance-manage-report-config
-subjects:
- - kind: User
- name: paul
- apiGroup: rbac.authorization.k8s.io
-roleRef:
- kind: ClusterRole
- name: tigera-compliance-manage-report-config
- apiGroup: rbac.authorization.k8s.io
-
----
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: tigera-compliance-list-download-all-reports
-rules:
- - apiGroups: ['projectcalico.org']
- resources: ['globalreports']
- verbs: ['get', 'list']
- - apiGroups: ['projectcalico.org']
- resources: ['globalreporttypes']
- verbs: ['get']
-
----
-kind: ClusterRoleBinding
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: tigera-compliance-list-download-all-reports
-subjects:
- - kind: User
- name: candice
- apiGroup: rbac.authorization.k8s.io
-roleRef:
- kind: ClusterRole
- name: tigera-compliance-list-download-all-reports
- apiGroup: rbac.authorization.k8s.io
-
----
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: tigera-compliance-list-download-dev-inventory
-rules:
- - apiGroups: ['projectcalico.org']
- resources: ['globalreports']
- verbs: ['list']
- - apiGroups: ['projectcalico.org']
- resources: ['globalreports']
- verbs: ['get']
- resourceNames: ['dev-inventory']
- - apiGroups: ['projectcalico.org']
- resources: ['globalreporttypes']
- verbs: ['get']
- resourceNames: ['dev-inventory']
-
----
-kind: ClusterRoleBinding
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: tigera-compliance-list-download-dev-inventory
-subjects:
- - kind: User
- name: david
- apiGroup: rbac.authorization.k8s.io
-roleRef:
- kind: ClusterRole
- name: tigera-compliance-list-download-dev-inventory
- apiGroup: rbac.authorization.k8s.io
-```
-
-### Configure and schedule reports
-
-To configure and schedule a compliance report, create a [GlobalReport](../reference/resources/globalreport.mdx) with the following information.
-
-| **Fields** | **Description** |
-| --------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| name | Unique name for your report. |
-| reportType | One of the following predefined report types: `inventory`, `network-access`, `policy-audit`. |
-| schedule        | The start and end time of the report using [crontab format](https://en.wikipedia.org/wiki/Cron). To allow for archiving, reports are generated approximately 30 minutes after the end time. A single report configuration is limited to generating a maximum of two reports per hour.                                                          |
-| endpoints | **Optional**. For inventory and network-access reports, specifies the endpoints to include in the report. For the policy-audit report, restricts audit logs to include only policies that apply to the selected endpoints. If not specified, the report includes all endpoints and audit logs. |
-| jobNodeSelector | **Optional**. Limits report generation jobs to specific nodes. |
-| suspend | **Optional**. Suspends report generation. All in-flight reports will complete, and future scheduled reports are suspended. |
-
-:::note
-
-GlobalReports can be configured only by using kubectl (not calicoctl), and they cannot be edited in the Manager UI.
-
-:::
-
-The following sections provide sample schedules for the predefined reports.
-
-### Weekly reports, all endpoints
-
-The following report schedules weekly inventory reports for _all_ endpoints. The jobs that create the reports will run
-on the infrastructure nodes (e.g. nodetype == 'infrastructure').
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalReport
-metadata:
- name: weekly-full-inventory
-spec:
- reportType: inventory
- schedule: 0 0 * * 0
- jobNodeSelector:
- nodetype: infrastructure
-```
-
-### Daily reports, selected endpoints
-
-The following report schedules daily inventory reports for production endpoints (e.g. deployment == ‘production’).
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalReport
-metadata:
- name: daily-production-inventory
-spec:
- reportType: inventory
- endpoints:
- selector: deployment == 'production'
- schedule: 0 0 * * *
-```
-
-### Hourly reports, endpoints in named namespaces
-
-The following report schedules hourly network-access reports for the accounts department endpoints, that are
-specified using the namespace names: **payable**, **collections** and **payroll**.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalReport
-metadata:
- name: hourly-accounts-networkaccess
-spec:
- reportType: network-access
- endpoints:
- namespaces:
- names: ['payable', 'collections', 'payroll']
- schedule: 0 * * * *
-```
-
-### Daily reports, endpoints in selected namespaces
-
-The following report schedules daily network-access reports for the accounts department with endpoints specified using
-a namespace selector.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalReport
-metadata:
- name: daily-accounts-networkaccess
-spec:
- reportType: network-access
- endpoints:
- namespaces:
- selector: department == 'accounts'
- schedule: 0 0 * * *
-```
-
-### Monthly reports, endpoints for named service accounts in named namespaces
-
-The following schedules monthly audit reports. The audited policy is restricted to policy that applies to
-widgets/controller endpoints specified by the namespace **widgets** and service account **controller**.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalReport
-metadata:
- name: monthly-widgets-controller-tigera-policy-audit
-spec:
- reportType: policy-audit
- schedule: 0 0 1 * *
- endpoints:
- serviceAccounts:
- names: ['controller']
- namespaces:
- names: ['widgets']
-```
-
-### View report generation status
-
-To view the status of a report, you must use the `kubectl` command. For example:
-
-```bash
-kubectl get globalreports.projectcalico.org daily-inventory.p -o yaml
-```
-
-In a report, the job status types are:
-
-- **lastScheduledReportJob**:
- The most recently scheduled job for generating the report. Because reports are scheduled in order, the “end time” of
- this report will be the “start time” of the next scheduled report.
-- **activeReportJobs**:
- Default = allows up to 5 concurrent report generation jobs.
-- **lastFailedReportJobs**:
- Default = keeps the 3 most recent failed jobs and deletes older ones. A single report generation job will be retried
- up to 6 times (by default) before it is marked as failed.
-- **lastSuccessfulReportJobs**:
- Default = keeps the 2 most recent successful jobs and deletes older ones.
-
-### Change the default report generation time
-
-By default, reports are generated 30 minutes after the end of the reporting interval, to ensure that all of the audit data is archived.
-(This delay does not affect the "start/end time" of the data collected for a report.)
-
-You can adjust this delay for cases like initial report testing, demoing a report, or manually creating a report that is not counted in global report status.
-
-To change the delay, go to the installation manifest, and uncomment and set the environment variable
-`TIGERA_COMPLIANCE_JOB_START_DELAY`. Specify the value as a [Duration string][parse-duration], for example `10m`.
-
-### Run reports
-
-You can run reports at any time, for example, to specify a different start/end time or to rerun a scheduled report that failed.
-
-$[prodname] GlobalReport schedules Kubernetes Jobs, which create a single-run pod to generate a report and store it
-in Elasticsearch. Because you need to run reports as a pod, you need higher permissions: allow `create` access
-for pods in namespace `tigera-compliance` using the `tigera-compliance-reporter` service account.
-
-To run a report on demand:
-
-1. Download the pod template corresponding to your installation method.
-
- ```bash
- curl $[filesUrl_CE]/manifests/compliance-reporter-pod-managed.yaml -o compliance-reporter-pod.yaml
- ```
-
-1. Edit the template as follows:
- - Edit the pod name if required.
- - If you are using your own docker repository, update the container image name with your repo and image tag.
- - Set the following environments according to the instructions in the downloaded manifest:
- - `TIGERA_COMPLIANCE_REPORT_NAME`
- - `TIGERA_COMPLIANCE_REPORT_START_TIME`
- - `TIGERA_COMPLIANCE_REPORT_END_TIME`
-1. Apply the updated manifest, and query the status of the pod to ensure it completes.
- Upon completion, the report is available in $[prodname] Manager.
-
- ```bash
- # Apply the compliance report pod
- kubectl apply -f compliance-reporter-pod.yaml
-
- # Query the status of the pod
- kubectl get pod -n tigera-compliance
- ```
-
-:::note
-
-Manually-generated reports do not appear in GlobalReport status.
-
-:::
-
-## Additional resources
-
-- For details on configuring and scheduling reports, see [Global reports](../reference/resources/globalreport.mdx)
-- For report field descriptions, see [Compliance reports](../reference/resources/compliance-reports/index.mdx)
-- [CIS benchmarks](compliance-reports-cis.mdx)
-
-[parse-duration]: https://golang.org/pkg/time/#ParseDuration
diff --git a/calico-cloud_versioned_docs/version-20-1/get-help/support.mdx b/calico-cloud_versioned_docs/version-20-1/get-help/support.mdx
deleted file mode 100644
index d56cb3645a..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/get-help/support.mdx
+++ /dev/null
@@ -1,33 +0,0 @@
----
-description: Ways to get help and provide feedback.
----
-
-import IconUser from '/img/icons/user-icon.svg';
-
-# Support and feedback
-
-## Contact Support
-
-You can find solutions to many common problems by following our [troubleshooting checklist](../get-started/checklist.mdx) or by consulting the [Tigera Help Center](https://www.tigera.io/calico-support/).
-
-For everything else, you can open a support ticket.
-
-### Paid Calico Cloud users
-
-Sign in to the [support portal](https://tigeraio.my.site.com/community/s/login/) to open a ticket.
-
-### Free trial users
-
-From the Manager UI, click the user icon > **Contact Support**, and then complete the form.
-
-## Provide feedback
-
-We value your feedback and suggestions for improvement. Email us: help@calicocloud.io.
-
-## Support policy
-
-For details, see our [Support policy](https://www.tigera.io/legal/calico-cloud-support-policy).
-
-## Check the status of Calico Cloud services
-
-Go to [Calico Cloud Status](https://status.calicocloud.io) to view the current status of our sites and services.
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/get-started/cc-arch-diagram.mdx b/calico-cloud_versioned_docs/version-20-1/get-started/cc-arch-diagram.mdx
deleted file mode 100644
index 2ccb8742bb..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/get-started/cc-arch-diagram.mdx
+++ /dev/null
@@ -1,48 +0,0 @@
----
-description: Understand the main components of Calico Cloud.
----
-
-# Calico Cloud architecture
-
-## $[prodname] security
-
-$[prodname] architecture is based on the $[prodname] multi-cluster management feature. $[prodname] manages the control plane, and you connect your clusters (called **managed clusters**) to the control plane. Communication between the $[prodname] control plane and managed clusters is secured using TLS tunnels.
-
-![calico-architecture](/img/calico-cloud/cc-architecture.svg)
-
-The components that secure communications between the $[prodname] control plane and managed clusters are:
-
-- **$[prodname] tunnel server** - accepts secure TLS connections from managed clusters
-- **Guardian** - an agent that runs in each managed cluster that proxies communication between $[prodname] components and managed cluster components
-
-All connections go through the $[prodname] tunnel server and Guardian. The only exception is during installation and upgrade when managed clusters connect to $[prodname] using TLS connections to get install/update resources, register the cluster, and report the status of the install/update.
-
-The $[prodname] tunnel is initiated by Guardian on the managed cluster. The $[prodname] control plane does not initiate new connections to the managed cluster outside of the tunnel. However, there are connections that go through the $[prodname] tunnel server that are initiated from the control plane; for example, when a user interacts with the Manager UI, or when configuration needs to be pushed into the managed cluster.
-
-## Managed cluster components
-
-The following diagram shows the major components in a managed cluster, followed by component descriptions.
-
-![calico-architecture-diagram](/img/calico-cloud/cc-arch-diagram.png)
-
-| Component | Description | Ports/Protocol |
-| ------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
-| $[prodname] controller | Deploys required resources for $[prodname]. | TCP 6443 to Kubernetes API server |
-| $[prodname] installer | Gets installation resources from the $[prodname] portal, registers a managed cluster, and reports installation or upgrade progress. | • TCP 443 to $[prodname] hosted service • TCP 6443 to Kubernetes API server |
-| $[prodname] tunnel server | Communicates with managed clusters by creating secure TLS tunnels. | Port 9000 from managed clusters |
-| calico-node | Bundles key components that are required for networking containers with $[prodname]: • Felix • BIRD • confd | • TCP 5473 to Typha • TCP 9900 and 9081 from Prometheus API service |
-| Container threat detection | A threat detection engine that analyzes observed file and process activity to detect known malicious and suspicious activity. Monitors the following types of suspicious activity within containers: • Access to sensitive system files and directories • Defense evasion • Discovery • Execution • Persistence • Privilege escalation. Includes these components: **Runtime Security Operator** An operator to manage and reconcile container threat defense components. **Runtime Reporter Pods** Pods running on each node in the cluster to perform the detection activity outlined above. They send activity reports to Elasticsearch for analysis by $[prodname]. | TCP to Kubernetes API server |
-| Compliance | Generates compliance reports for the Kubernetes cluster. Reports are based on archived flow and audit logs for Calico Cloud resources, plus any audit logs you’ve configured for Kubernetes resources in the Kubernetes API server. Compliance reports provide the following high-level information: • Endpoints explicitly protected using ingress or egress policy • Policies and services - Policies and services associated with endpoints - Policy audit logs • Traffic - Allowed ingress/egress traffic to/from namespaces, and to/from the internet. Compliance includes these components: **compliance-snapshotter** Handles listing of required Kubernetes and $[prodname] configuration and pushes snapshots to Elasticsearch. Snapshots give you visibility into configuration changes, and how the cluster-wide configuration has evolved within a reporting interval. **compliance-reporter** Handles report generation. Reads configuration history from Elasticsearch and determines time evolution of cluster-wide configuration, including relationships between policies, endpoints, services, and network sets. Data is then passed through a zero-trust aggregator to determine the “worst-case outliers” in the reporting interval. **compliance-controller** Reads report configuration and manages creation, deletion, and monitoring of report generation jobs. **compliance-benchmarker** A daemonset that runs checks in the CIS Kubernetes Benchmark on each node so you can see if Kubernetes is securely deployed. | • TCP 8080 to Guardian • TCP 6443 to Kubernetes API server |
-| Fluentd | Open-source data collector for unified logging. Collects and forwards $[prodname] logs (flows, DNS, L7) to log storage. | • TCP 8080 to Guardian • TCP 9080 from Prometheus API service |
-| Guardian | An agent running in each managed cluster that proxies communication between the $[prodname] tunnel server and your managed cluster. Secured using TLS tunnels. | • Port 9000 to tunnel server • TCP 6443 to Kubernetes API server • TCP 6443 from $[prodname] components |
-| Installation endpoints | Endpoints at `*.calicocloud.io` and `*.projectcalico.org`. | TCP 443 for both |
-| Intrusion detection controller | Handles integrations with threat intelligence feeds and $[prodname] custom alerts. | • TCP 8080 to Guardian • TCP 6443 to Kubernetes API server |
-| Image Assurance | Identifies vulnerabilities in container images that you deploy to Kubernetes clusters. Components of interest are: **Admission controller** Uses Kubernetes Validating Webhook Configuration to control which images can be used to create pods based on scan results. **API** Isolates tenant data and authorizes all external access to Image Assurance data. **Note:** $[prodname] does not store registry credentials in its database and does not pull customer images into the $[prodname] control plane. | • TCP 8080 to Guardian • TCP 6443 to Kubernetes API server |
-| Kubernetes API server | A Kubernetes component that validates and configures data for the API objects (for example, pods, services, and others). | TCP 6443 (from all components) |
-| kube-controllers | Monitors the Kubernetes API and performs actions based on cluster state. $[prodname] kube-controllers container includes these controllers: • Node • Service • Federated services • Authorization | • TCP 9094 from Prometheus API service • TCP 6443 to Kubernetes API server |
-| Log storage | Storage for logs (flows, L7, DNS, audit). Data for each managed cluster is isolated and protected against unauthorized access. | n/a |
-| Packet capture API | Retrieves capture files (pcap format) generated by a packet capture for use with network protocol analysis tools like Wireshark. Packet capture data is visible in the Manager UI and Service Graph. | • TCP 8449 Guardian to Packet Capture API • TCP 6443 to Kubernetes API server |
-| Prometheus API service | Collects metrics from $[prodname] components and makes the metrics available to Manager UI. | • TCP 6443 to Kubernetes API server • TCP 9080 to Fluentd • TCP 9900 and 9081 to Prometheus API service |
-| Tigera API server | Allows users to manage $[prodname] resources such as policies and tiers through kubectl or the Kubernetes API server. | • TCP 9095 to Prometheus API service • TCP 8080 from Kubernetes API server |
-| Typha | Increases scale by reducing each node’s impact on the datastore. | TCP 5473 from calico-node to Typha |
-| User access to Manager UI | Authenticated users can access the browser-based Manager UI, which provides network traffic visibility and troubleshooting, centralized multi-cluster management, threat-defense, container threat detection, policy lifecycle management, scan images for vulnerabilities, and compliance for multiple roles/stakeholders. | Port 443 to $[prodname] tunnel server |
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/get-started/checklist.mdx b/calico-cloud_versioned_docs/version-20-1/get-started/checklist.mdx
deleted file mode 100644
index 1c68da7b20..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/get-started/checklist.mdx
+++ /dev/null
@@ -1,82 +0,0 @@
----
-description: Review this checklist before opening a Support ticket.
----
-
-# Troubleshooting checklist
-
-## Check $[prodname] installation
-
-Installing $[prodname] on your Kubernetes cluster is managed by the $[prodname] operator. The $[prodname] operator is deployed as a Deployment in the `calico-cloud` namespace, and records status in a custom resource named `installer`.
-
-Check the `installer` status using the following command.
-
-```bash
-kubectl get installer default --namespace calico-cloud -o jsonpath --template '{.status}'
-```
-
-**Sample output**
-
-```
-{"clusterName":"my-new-cluster","state":"installing"}
-```
-
-After `state` is `complete`, $[prodname] is properly installed.
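-
-If you are scripting this check, a small polling loop like the following can wait for the installation to finish. This is a sketch only; the 15-second interval and the loop structure are arbitrary choices.
-
-```bash
-# Poll the installer custom resource until it reports a completed state (sketch)
-until kubectl get installer default --namespace calico-cloud \
-  -o jsonpath='{.status.state}' | grep -q '^complete$'; do
-  echo "Waiting for Calico Cloud installation to complete..."
-  sleep 15
-done
-```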
-
-## Check logs for fatal errors
-
-Check that the $[prodname] operator is running and that logs do not have any fatal errors.
-
-```bash
-kubectl logs -n calico-cloud deployment/calico-cloud-controller-manager
-2022-04-04T14:34:32.472Z INFO controller-runtime.metrics metrics server is starting to listen {"addr": "127.0.0.1:8080"}
-2022-04-04T14:34:32.472Z INFO setup starting manager
-2022-04-04T14:34:32.472Z INFO setup config {"ccBaseURL": "https://www.dev.calicocloud.io/api", "debug": false, "leader-elect": true}
-I0404 14:34:32.472586 1 leaderelection.go:243] attempting to acquire leader lease calico-cloud/c2ad41ce.calicocloud.io...
-2022-04-04T14:34:32.472Z INFO controller-runtime.manager starting metrics server {"path": "/metrics"}
-I0404 14:34:32.480870 1 leaderelection.go:253] successfully acquired lease calico-cloud/c2ad41ce.calicocloud.io
-<...>
-```
-
-## Check custom resources
-
-Verify that you have the **installer custom resource**, and that the values are appropriate for your environment.
-
-```bash
-kubectl get installers.operator.calicocloud.io --namespace calico-cloud -o yaml
-```
-
-```yaml
-apiVersion: v1
-items:
- - apiVersion: operator.calicocloud.io/v1
- kind: Installer
- metadata:
- annotations:
- kubectl.kubernetes.io/last-applied-configuration: |
- {"apiVersion":"operator.calicocloud.io/v1","kind":"Installer","metadata":{"annotations":{},"name":"my-new-cluster","namespace":"calico-cloud"}}
- creationTimestamp: '2022-04-04T14:34:29Z'
- generation: 1
- name: my-new-cluster
- namespace: calico-cloud
- resourceVersion: '1102'
- uid: eb1d1cd0-f01f-47b2-81fe-8eee46dbe712
- status:
- clusterName: my-new-cluster
- message: ''
- resourceVersion: ''
- state: installing
-kind: List
-metadata:
- resourceVersion: ''
- selfLink: ''
-```
-
-## Send failed installation diagnostics to Calico Cloud support
-
-To upload diagnostics about a failed installation to $[prodname] support, run the following command:
-
-```bash
-kubectl patch installer -n calico-cloud default --type merge -p='{"spec":{"uploadDiags":true}}'
-```
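-
-To confirm that the patch was applied, you can read the field back. This is a sketch that reuses the same resource shown above.
-
-```bash
-# Verify that diagnostics upload is now enabled on the installer resource (sketch)
-kubectl get installer default --namespace calico-cloud -o jsonpath='{.spec.uploadDiags}'
-```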
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/get-started/connect-cluster.mdx b/calico-cloud_versioned_docs/version-20-1/get-started/connect-cluster.mdx
deleted file mode 100644
index 1ca8b2a106..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/get-started/connect-cluster.mdx
+++ /dev/null
@@ -1,66 +0,0 @@
----
-description: Get answers to your questions about connecting to Calico Cloud.
----
-
-# What happens when you connect a cluster to Calico Cloud
-
-Although connecting your cluster to $[prodname] is easy, we also understand you may want details about what happens when your cluster is managed by $[prodname]. We hope this article, along with the other topics in this section, gives you the information you need to install $[prodname] with confidence. If you have other questions, see our [Support policy](https://www.tigera.io/legal/calico-cloud-support-policy) or email us at feedback@calicocloud.io.
-
-## What happens when you connect your cluster
-
-- Your cluster is registered and connected to $[prodname]
-- Your Calico open source install is updated with resources for $[prodname] features
-- Tigera components and services are added to collect metrics and logs that are sent to the Calico Cloud management plane, which provides the basis for visibility and troubleshooting
-- Policies are added to secure communications between Tigera components
-- A global threat feed is added to alert on any egress traffic to addresses in the threat feed to protect the cluster
-- All of your existing network policies (Kubernetes and Calico) can be found in a single place: the default tier
-
-After your cluster is connected, it is listed in the Managed Clusters page where you can move between multiple clusters.
-
-![managed-clusters](/img/calico-cloud/managed-clusters.png)
-
-## What happens when you disconnect your cluster
-
-Whether you’ve finished your $[prodname] Trial, or terminated your licensed $[prodname] subscription, we know you want your cluster to remain functional. $[prodname] provides a migration script that returns your cluster to a working state in open-source Calico. The migration script:
-
-- Removes and cleans up all $[prodname] components and services that have no equivalent in open-source Calico
-- Switches the operator configuration to open-source Calico, which migrates your cluster to a version of open-source Calico
-- Ensures all Calico policies are migrated to the default tier, or allows you to remove all Calico policies
-
-For details of the migration script, see [Uninstall $[prodname] from a cluster](../operations/disconnect.mdx).
-
-## What happens under the covers when you connect your cluster
-
-### Pre-check
-
-- Verifies that your cluster can be migrated, and validates that the cluster platform and version are supported by $[prodname]
-- Records the number of nodes and other basic attributes of your cluster. This data remains on your system and is used for troubleshooting in case there are issues connecting your cluster.
-
-### Install and connect
-
-- Based on the Calico install for your cluster, the manifest is upgraded/migrated to use the $[prodname] Tigera operator. (Note that clusters installed using Helm are currently not supported.)
-- Tigera operator installs the required $[prodname] custom resource definitions (CRDs) and a standard pull secret to allow access to $[prodname] images
-- Installs a Prometheus instance for component metrics (if it doesn’t already exist on the cluster)
-- Pushes a license to the cluster so components receive appropriate entitlements
-- Adds custom namespaces, service account, and role bindings to support cluster access and permissions to the $[prodname] user interface
-- Creates RBAC permissions to access these resources:
- - Number of nodes in the cluster and stats for billing purposes
- - Policies in the cluster, and pod information
-- Registers the cluster so it can connect to $[prodname]
-- Creates a global threat feed to alert on any egress traffic to addresses in the threat feed
-- Adds policies to secure communication for $[prodname] components, including safeguards that prevent any new network policies from impacting the vital functions of the cluster
-- Creates roles and bindings to allow the installer to operate and to allow Calico Cloud to operate after installation.
- Each of these objects is given minimal permissions for its specific function.
-- Creates roles and bindings for two user types for access to Manager UI:
- - **Admin** - full permissions for network policies and $[prodname] resources, and read permissions for namespace and pods
- - **User** - read-only/view permissions to same resources above
-
-## How long does it take to connect a cluster?
-
-Typically, about 5 minutes.
-
-## Troubleshooting an installation
-
-- A checklist to [troubleshoot your installation](checklist.mdx)
-
-- Additional troubleshooting for the [Tigera Operator](operator-checklist.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/get-started/index.mdx b/calico-cloud_versioned_docs/version-20-1/get-started/index.mdx
deleted file mode 100644
index 45727dc1ff..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/get-started/index.mdx
+++ /dev/null
@@ -1,43 +0,0 @@
----
-description: Steps to connect clusters to Calico Cloud and upgrade.
-hide_table_of_contents: true
----
-
-import { DocCardLink, DocCardLinkLayout } from '/src/___new___/components';
-
-# Install and upgrade
-
-Requirements and guides for connecting your Kubernetes cluster to Calico Cloud.
-
-## Before you begin
-
-
-
-
-
-
-
-
-
-## Connect your cluster
-
-
-
-
-
-
-
-
-
-## Troubleshooting
-
-
-
-
-
-
-## Upgrade
-
-
-
-
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/get-started/install-automated.mdx b/calico-cloud_versioned_docs/version-20-1/get-started/install-automated.mdx
deleted file mode 100644
index 0b84761bb7..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/get-started/install-automated.mdx
+++ /dev/null
@@ -1,194 +0,0 @@
----
-description: Install Calico Cloud as part of an automated workflow.
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-import IconUser from '/img/icons/user-icon.svg';
-
-# Install Calico Cloud as part of an automated workflow
-
-You can connect clusters to Calico Cloud as part of an automated workflow, using persistent client credentials and customized Helm charts.
-
-## Prerequisites
-
-* You have an active Calico Cloud account. You can sign up for a 14-day free trial at [calicocloud.io](https://calicocloud.io).
-* You are signed in to the Calico Cloud Manager UI as a user with the **Owner** or **Admin** role.
-* You have at least one cluster that meets our [system requirements](system-requirements.mdx).
-* You have kubectl access to the cluster.
-* You have installed Helm 3.0 or later on your workstation.
-
-## Create client credentials
-
-Create client credentials and generate a Kubernetes secret to use for automated Helm installations.
-
-1. Select the user icon, and then select **Settings**.
-1. Under the **Client Credentials** tab, click **Add Client Credential**.
-1. In the **Add Client Credential** dialog, enter a name and click **Create**.
- Your new client credential will appear in the list on the **Manage Client Credentials** page.
-1. Locate the newly created client credential in the list and select **Action** > **Manage keys** > **Add Key**.
-1. Enter a name, choose how long the key will be valid, and click **Create key**.
-1. Click **Download** to download the `.yaml` secret file and store it in a secure location.
- You will not be able to retrieve this secret again.
-
-:::important
-
-To ensure that you always have a valid key, you should transition to a second key before the first key expires.
-Create a second key, download the secret, and then replace copies of the secret file for the first key with the secret file for the second key.
-When all the secrets from the first key have been replaced, you can safely delete the first key from the **Client Credentials** page.
-When the key is deleted, all API requests based on that key will be rejected.
-
-:::
-
-## About customizing your Helm installation
-
-You can customize your Calico Cloud installation for the following purposes:
-
-* to enable or disable certain features
-* to modify pod scheduling and resource management
-
-To do this, you can either edit the default `values.yaml` file or pass individual key-value pairs using the `--set` flag for the `helm upgrade` command.
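-
-For example, the required parameters described below could be passed with `--set` instead of being defined in `values.yaml`. This sketch shows only the flag syntax, not a complete installation command; `example-cluster` is a placeholder name.
-
-```bash
-helm upgrade --install calico-cloud calico-cloud/calico-cloud \
-  --namespace calico-cloud \
-  --set installer.clusterName=example-cluster \
-  --set installer.calicoCloudVersion=$[cloudUserVersion]
-```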
-
-### Required parameters
-
-The following parameters are required for all Calico Cloud installations.
-
-| Parameter | Value | Example | Description |
-| -- | -- | -- | -- |
-| `installer.clusterName` | string | `cluster-name` | The name given to your managed cluster in Calico Cloud. |
-| `installer.calicoCloudVersion` | string | `$[cloudUserVersion]` | The version of Calico Cloud you're installing. |
-
-
-```yaml title="Example from values.yaml with clusterName and calicoCloudVersion"
-installer:
- clusterName: example-cluster
- calicoCloudVersion: $[cloudUserVersion]
-```
-
-### Optional parameters for private registries
-
-If you're using a private registry, you must set the following parameters.
-
-| Parameter | Value | Example | Description |
-| -- | -- | -- | -- |
-| `installer.registry` | string | `registry-name` | The private registry that hosts your Calico Cloud images. |
-| `installer.imagePath` | string | `image-path` | The path to the Calico Cloud images within your private registry. |
-| `imagePullSecrets.name` | string | `secret-name` | The name of the image pull secret used to access your private registry. |
-
-### Optional parameters for features
-
-The following parameters enable certain features in Calico Cloud.
-These features can be enabled or disabled only by setting them in your `values.yaml` file at installation.
-
-| Feature name | Parameter | Values |
-|---------|-----|--------|
-| Image Assurance | `installer.components.imageAssurance.state` | `Enabled` (default), `Disabled` |
-| Container Threat Detection | `installer.components.runtimeSecurity.state` | `Enabled`, `Disabled` (default) |
-| Security Posture Dashboard | `installer.components.securityPosture.state` | `Enabled` (default), `Disabled` |
-| Packet Capture | `installer.components.packetCaptureAPI.state` | `Enabled`, `Disabled` (default) |
-| Compliance Reports | `installer.components.compliance.enabled` | `true` (default), `false` |
-
-:::note
-
-If you're upgrading from Calico Cloud 19 or earlier, the Container Threat Detection and Packet Capture features will remain enabled unless you explicitly set them to `Disabled`.
-
-:::
-
-### Optional parameters for pod scheduling and resource management
-
-For many Calico Cloud components, you can specify node selectors, tolerations, and resource requests and limits.
-The full list of Calico Cloud components is available in the default `values.yaml` file.
-
-:::note
-Helm may overwrite previous customizations of custom resource fields that are available under the `installer.components` Helm parameter.
-To be sure nothing is lost during Calico Cloud upgrades and reinstalls, define all of your `installer.components` customizations in your `values.yaml` file.
-:::
-
-## Prepare your values.yaml with customizations
-
-***Prerequisites***
-* You reviewed the information about available customizations in [About customizing your Helm installation](#about-customizing-your-helm-installation).
-* If you're installing from a private registry, you [added the Calico Cloud images to a private registry](setup-private-registry.mdx), and you have the following information about the registry:
- * Registry secret name
- :::note
- If your private registry requires credentials, create a `calico-cloud` namespace on your cluster.
- Then, create an image pull secret and use this name for the **Registry Secret Name**.
- :::
- * Image registry
- * Image path
-
-1. Add the Calico Cloud Helm repository to your local client:
-
- ```bash
- helm repo add calico-cloud https://installer.calicocloud.io/charts --force-update
- ```
-
-1. Save the default values definitions to your workstation so you can edit them locally:
- ```bash
-   helm show values calico-cloud/calico-cloud > <your-values-file>.yaml
- ```
- All editable values are provided in the default values definitions.
-
-1. Add values for the required parameters, `installer.clusterName` and `installer.calicoCloudVersion`.
-
- ```yaml title="Example from values.yaml file with clusterName and calicoCloudVersion"
- installer:
- clusterName: example-cluster
- calicoCloudVersion: $[cloudUserVersion]
- ```
-
-1. Add values for the optional parameters.
- For each resource you want to edit, uncomment the object, add a value, and save.
-
- ```yaml title="Example from values.yaml file with compliance reports disabled"
- installer:
- components:
- compliance:
- enabled: false
- ```
-
-## Install Calico Cloud as part of an automated workflow
-
-You can install Calico Cloud using repeatable kubectl or Helm commands together with valid client credentials.
-These commands can be added to any automated workflow.
-
-***Prerequisites***
-
-* You have generated a set of client credentials and you know the path to your secret.
-* You have a `values.yaml` file with your customizations.
-
-1. Add the Calico Cloud Helm repository to your local client.
-
- ```bash
- helm repo add calico-cloud https://installer.calicocloud.io/charts --force-update
- ```
-
-1. Add the Calico Cloud custom resource definitions:
-
- ```bash
- helm upgrade --install calico-cloud-crds calico-cloud/calico-cloud-crds \
- --namespace calico-cloud \
- --create-namespace
- ```
-
-1. Apply the client credentials secret to your cluster.
-
- ```bash
-   kubectl apply -f <client-credentials-secret>.yaml
- ```
-
- :::important
-   You should keep track of this secret with a secret management system.
- :::
-
-1. Apply the Calico Cloud installer custom resource with your customizations in the `values.yaml` file.
-
- ```bash
- helm upgrade --install calico-cloud calico-cloud/calico-cloud \
- --namespace calico-cloud \
-     -f <your-values-file>.yaml
- ```
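-
-Once the chart is applied, your workflow can poll the installer status to confirm that the cluster finished connecting. This reuses the status check from the [troubleshooting checklist](checklist.mdx); treat it as a sketch.
-
-```bash
-# Returns "complete" when the managed cluster has finished installing (sketch)
-kubectl get installer default --namespace calico-cloud -o jsonpath='{.status.state}'
-```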
-
-## Additional resources
-
-* [Calico Cloud installation reference](https://docs.tigera.io/calico-cloud/reference/installation/api)
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/get-started/install-cluster.mdx b/calico-cloud_versioned_docs/version-20-1/get-started/install-cluster.mdx
deleted file mode 100644
index d7561086ca..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/get-started/install-cluster.mdx
+++ /dev/null
@@ -1,73 +0,0 @@
----
-description: Steps to connect your cluster to Calico Cloud.
-title: Install Calico Cloud
----
-
-# Connect a cluster to Calico Cloud
-
-You can quickly connect a cluster to Calico Cloud by generating a unique kubectl or Helm command in the Calico Cloud Manager UI and running it on your cluster.
-
-## Prerequisites
-
-* You have an active Calico Cloud account. You can sign up for a 14-day free trial at [calicocloud.io](https://calicocloud.io).
-* You are signed in to the Calico Cloud Manager UI as a user with the **Owner**, **Admin**, or **DevOps** role.
-* You have at least one cluster that meets our [system requirements](system-requirements.mdx).
-* You have kubectl access to the cluster.
-* If you're using Helm, you installed Helm 3.0 or later on your workstation.
-
-## Connect a cluster to Calico Cloud with kubectl
-
-1. From the **Managed Clusters** page, click **Connect Cluster**.
-1. In the **Connect a Cluster** dialog, enter a **Cluster Name** and select a **Cluster Type**.
-1. Optional: If you must install a specific older release, select the Calico Cloud version you want to install.
- We always recommend the latest version, which is installed by default.
-1. Click **Connect** to generate a unique kubectl command. Copy the command.
-
- ```bash title="Example of generated kubectl installation command"
- kubectl apply -f https://installer.calicocloud.io/manifests/cc-operator/latest/deploy.yaml && curl -H "Authorization: Bearer mprcnz04t:9dav6eoag:s8w7xjslez1x1xkf6ds0h23miz5b1fw6phh9897d0n76e4pjfdekijowjv5lw9dd" "https://www.calicocloud.io/api/managed-cluster/deploy.yaml?version=v19.1.0" | kubectl apply -f -
- ```
-
-1. From a terminal, paste and run the command.
-1. On the **Managed Clusters** page, you should immediately see your cluster in the list of managed clusters.
- Monitor the status under **Connection Status**.
- When the status changes to **Connected**, installation is complete and your cluster is connected to Calico Cloud.
-
-## Connect a cluster to Calico Cloud with Helm
-
-1. From the **Managed Clusters** page, click **Connect Cluster**.
-1. In the **Connect a Cluster** dialog, enter a **Cluster Name** and select a **Cluster Type**.
-1. Optional: If you must install a specific older release, select the Calico Cloud version you want to install.
- We always recommend the latest version, which is installed by default.
-1. Click **Connect** to generate a unique Helm installation command. Copy the command.
-
- ```bash title="Example of generated Helm installation command"
- helm repo add calico-cloud https://installer.calicocloud.io/charts --force-update && helm upgrade --install calico-cloud-crds calico-cloud/calico-cloud-crds --namespace calico-cloud --create-namespace && helm upgrade --install calico-cloud calico-cloud/calico-cloud --namespace calico-cloud --set apiKey=ryl34elz8:9dav6eoag:ifk1uwruwlgp7vzn7ecijt5zjbf5p9p1il1ag8877ylwjo4muu19wzg2g8x5qa7x --set installer.clusterName=my-cluster --set installer.calicoCloudVersion=v19.1.0
- ```
-1. Optional: To change which features are enabled during installation, paste the command into a text editor and append the `--set` option with any of the following key-value pairs.
- You can change these options only by reinstalling or upgrading Calico Cloud and changing the values.
-
- | Feature | Key | Values |
- |---------|-----|--------|
- | Image Assurance | `installer.components.imageAssurance.state` | `Enabled` (default), `Disabled` |
-   | Container Threat Detection | `installer.components.runtimeSecurity.state` | `Enabled`, `Disabled` (default\*) * The default for new clusters is `Disabled`. For upgrades of previously connected clusters, the default retains the previous state. |
- | Security Posture Dashboard | `installer.components.securityPosture.state` | `Enabled` (default), `Disabled` |
-   | Packet Capture | `installer.components.packetCaptureAPI.state` | `Enabled`, `Disabled` (default\*) * The default for new clusters is `Disabled`. For upgrades of previously connected clusters, the default retains the previous state. |
- | Compliance Reports | `installer.components.compliance.enabled` | `true` (default), `false` |
-
- ```bash title="Example of generated Helm command with user-added parameters"
- helm repo add calico-cloud https://installer.calicocloud.io/charts --force-update && helm upgrade --install calico-cloud-crds calico-cloud/calico-cloud-crds --namespace calico-cloud --create-namespace && helm upgrade --install calico-cloud calico-cloud/calico-cloud --namespace calico-cloud --set apiKey=ryl34elz8:9dav6eoag:ifk1uwruwlgp7vzn7ecijt5zjbf5p9p1il1ag8877ylwjo4muu19wzg2g8x5qa7x --set installer.clusterName=my-cluster --set installer.calicoCloudVersion=v19.1.0 \
- --set installer.components.imageAssurance.state=Enabled \
- --set installer.components.runtimeSecurity.state=Enabled \
- --set installer.components.securityPosture.state=Enabled
- ```
-   In this example, the command connects the cluster to Calico Cloud with the Image Assurance, Runtime Security, and Security Posture Dashboard features enabled.
-
-1. From a terminal, paste and run the command.
-1. On the **Managed Clusters** page, you should immediately see your cluster in the list of managed clusters.
- Monitor the status under **Connection Status**.
- When the status changes to **Connected**, installation is complete and your cluster is connected to Calico Cloud.
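-
-   If you prefer to verify from the command line instead of the Manager UI, you can also check component status in the cluster, as described in the [Tigera operator troubleshooting checklist](operator-checklist.mdx). A minimal sketch:
-
-   ```bash
-   kubectl get tigerastatus
-   ```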
-
-## Additional resources
-
-* [Calico Cloud troubleshooting checklist](checklist.mdx)
-* [Tigera operator troubleshooting checklist](operator-checklist.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/get-started/install-private-registry.mdx b/calico-cloud_versioned_docs/version-20-1/get-started/install-private-registry.mdx
deleted file mode 100644
index 220054fa3d..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/get-started/install-private-registry.mdx
+++ /dev/null
@@ -1,61 +0,0 @@
----
-description: Steps to connect your cluster to Calico Cloud.
-title: Install using a private registry
----
-
-# Connect a cluster to Calico Cloud using a private registry
-
-You can perform a Helm installation from images stored on a private registry.
-
-## Prerequisites
-
-* You have an active Calico Cloud account. You can sign up for a 14-day free trial at [calicocloud.io](https://calicocloud.io).
-* You are signed in to the Calico Cloud Manager UI as a user with the **Owner**, **Admin**, or **DevOps** role.
-* You have at least one cluster that meets our [system requirements](system-requirements.mdx).
-* You have kubectl access to the cluster.
-* You have installed Helm 3.0 or later on your workstation.
-* You have [added the Calico Cloud images to a private registry](setup-private-registry.mdx), and you have the following information about the registry:
- * Registry secret name
- :::note
- If your private registry requires credentials, create a `calico-cloud` namespace on your cluster.
- Then, create an image pull secret and use this name for the **Registry Secret Name**.
- :::
- * Image registry
- * Image path
-
-
-## Install Calico Cloud using a private registry
-
-1. From the **Managed Clusters** page, click **Connect Cluster**.
-1. In the **Connect a Cluster** dialog, enter a **Cluster Name** and select a **Cluster Type**.
-1. Optional: If you must install a specific older release, select the Calico Cloud version you want to install. We always recommend the latest version, which is installed by default.
-1. Click **Advanced Options**, and then select both **Install via helm** and **Private registry**.
-1. Enter the **Registry Secret Name**, **Image registry**, and **Image path**.
-1. Click **Connect** to generate a unique Helm installation command. Copy the command.
-1. Optional: To change which features are enabled during installation, paste the command into a text editor and append the `--set` option with any of the following key-value pairs.
- You can change these options only by reinstalling or upgrading Calico Cloud and changing the values.
-
- | Feature | Key | Values |
- |---------|-----|--------|
- | Image Assurance | `installer.components.imageAssurance.state` | `Enabled` (default), `Disabled` |
-   | Container Threat Detection | `installer.components.runtimeSecurity.state` | `Enabled`, `Disabled` (default\*) * The default for new clusters is `Disabled`. For upgrades of previously connected clusters, the default retains the previous state. |
- | Security Posture Dashboard | `installer.components.securityPosture.state` | `Enabled` (default), `Disabled` |
-   | Packet Capture | `installer.components.packetCaptureAPI.state` | `Enabled`, `Disabled` (default\*) * The default for new clusters is `Disabled`. For upgrades of previously connected clusters, the default retains the previous state. |
- | Compliance Reports | `installer.components.compliance.enabled` | `true` (default), `false` |
-
- ```bash title="Example of generated Helm command with user-added parameters"
- helm repo add calico-cloud https://installer.calicocloud.io/charts --force-update && helm upgrade --install calico-cloud-crds calico-cloud/calico-cloud-crds --namespace calico-cloud --create-namespace && helm upgrade --install calico-cloud calico-cloud/calico-cloud --namespace calico-cloud --set apiKey=ryl34elz8:5kdv6siag:ifk1uwruwlgp7vzn7ecijt5zjbf5p9p1il1ag8877ylwjo4muu19wzg2g8x5qa7x --set installer.clusterName=my-cluster --set installer.calicoCloudVersion=v19.1.0 \
- --set installer.components.imageAssurance.state=Enabled \
- --set installer.components.runtimeSecurity.state=Enabled \
- --set installer.components.securityPosture.state=Enabled
- ```
-   In this example, the command connects the cluster to Calico Cloud with the Image Assurance, Runtime Security, and Security Posture Dashboard features enabled.
-
-1. From a terminal, paste and run the command.
-
-1. On the **Managed Clusters** page, you should immediately see your cluster in the list of managed clusters.
- Monitor the status under **Connection Status**.
- When the status changes to **Connected**, installation is complete and your cluster is connected to Calico Cloud.
-
-## Additional resources
-
-* [Calico Cloud troubleshooting checklist](checklist.mdx)
-* [Tigera operator troubleshooting checklist](operator-checklist.mdx)
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/get-started/operator-checklist.mdx b/calico-cloud_versioned_docs/version-20-1/get-started/operator-checklist.mdx
deleted file mode 100644
index 4851ce1e10..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/get-started/operator-checklist.mdx
+++ /dev/null
@@ -1,679 +0,0 @@
----
-description: Additional troubleshooting for the Tigera operator.
----
-
-# Tigera operator troubleshooting checklist
-
-If you have issues getting your cluster up and running, use this checklist.
-
-- [Check installation start errors](#check-installation-start-errors)
-- [Check Calico Cloud installation](#check-calico-cloud-installation)
-- [Check logs for fatal errors](#check-logs-for-fatal-errors)
-- [Check custom resources](#check-custom-resources)
-- [Check pod capacity](#check-pod-capacity)
-- [Check pod security policy violation](#check-pod-security-policy-violation)
-- [Check Manager UI dashboard for traffic](#check-manager-ui-dashboard-for-traffic)
-
-## Check installation start errors
-
-Are you seeing any of these issues at the start of installation?
-
-### [ERROR] Detected plugin ls: No such file or directory, it is currently not supported
-
-The cluster you are using to install $[prodname] either does not have a CNI plugin installed, or the CNI plugin is incompatible. If your cluster has functional pod networking and you see this message, it is likely that kubelet has been configured to use kubenet networking, which is not compatible with $[prodname]. You can use a different cluster, or re-create your cluster with [compatible networking](system-requirements.mdx).
-
-### $[prodname] cannot be connected to a cluster with FIPS mode enabled
-
-At this time, FIPS mode is not supported in $[prodname]. Disable FIPS mode in the cluster and install again.
-
-### Install script is taking a long time
-
-If you are migrating a large cluster from a previous manifest-based Calico install, the script can take some time; this is normal.
-
-But, it could also mean that your cluster has an incompatibility. Go to the next step [Check Calico Cloud installation](#check-calico-cloud-installation).
-
-## Check Calico Cloud installation
-
-Installing $[prodname] on your Kubernetes cluster is managed by the Tigera operator. The Tigera operator is deployed as a ReplicaSet in the `tigera-operator` namespace, and records status in a custom resource named `tigerastatus`. The operator gets its configuration from several custom resources (CRs); the central one is the Installation CR.
-
-Check `tigerastatus` using the following command.
-
-```bash
-kubectl get tigerastatus
-```
-
-**Sample output**
-
-```
-NAME AVAILABLE PROGRESSING DEGRADED SINCE
-apiserver True False False 10m
-calico True False False 11m
-cloud-core True False False 11m
-compliance True False False 9m39s
-intrusion-detection True False False 9m49s
-log-collector True False False 9m29s
-management-cluster-connection True False False 9m54s
-monitor True False False 10m
-runtime-security True False False 10m
-```
-
-If all components show a status of "Available" = TRUE, $[prodname] is properly installed.
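-
-If some components are still progressing, you can watch them converge instead of re-running the command. A minimal sketch using kubectl's standard watch flag:
-
-```bash
-kubectl get tigerastatus -w
-```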
-
-:::note
-
-The `runtime-security` component is available only if [the container threat detection feature is enabled](../threat/container-threat-detection.mdx#enable-container-threat-detection).
-
-:::
-
-**Issue: $[prodname] is not installed**
-
-If $[prodname] is not installed, you'll get the following error. Install $[prodname] on the node using the `curl` command that you got from Support.
-
-```bash
-kubectl get tigerastatus
-error: the server doesn't have a resource type "tigerastatus"
-```
-
-**Issue: $[prodname] components are missing or are degraded**
-
-If some of the $[prodname] components are AVAILABLE = FALSE or DEGRADED = TRUE, run the following command and share the output with Support.
-
-```bash
-kubectl get tigerastatus -o yaml
-```
-
-:::note
-
-If you are using the **AWS or Azure CNI plugin**, a degraded state is likely because you do not have enough pod capacity on your nodes. To fix this, see [Check pod capacity](#check-pod-capacity).
-
-:::
-
-**Sample output**
-
-In the following example, the typha component has an issue because it is showing `AVAILABLE: FALSE`, and `DEGRADED: TRUE`. To understand details of $[prodname] components, see [Deep dive into custom resources](#deep-dive-into-custom-resources).
-
-```yaml
-apiVersion: v1
-items:
- - apiVersion: operator.tigera.io/v1
- kind: TigeraStatus
- metadata:
- creationTimestamp: '2020-12-30T17:13:30Z'
- generation: 1
- managedFields:
- - apiVersion: operator.tigera.io/v1
- fieldsType: FieldsV1
- fieldsV1:
- f:spec: {}
- f:status:
- .: {}
- f:conditions: {}
- manager: operator
- operation: Update
- time: '2020-12-30T17:16:20Z'
- name: calico
- resourceVersion: '8166'
- selfLink: /apis/operator.tigera.io/v1/tigerastatuses/calico
- uid: 39a8a2d0-2074-418c-b52d-0baa0a48f4a1
- spec: {}
- status:
- conditions:
- - lastTransitionTime: '2020-12-30T17:13:30Z'
- status: 'False'
- type: Available
- - lastTransitionTime: '2020-12-30T17:13:30Z'
- message: DaemonSet "calico-system/calico-node" is not yet scheduled on any nodes
- reason: Not all pods are ready
- status: 'True'
- type: Progressing
- - lastTransitionTime: '2020-12-30T17:13:30Z'
- message: 'failed to wait for operator typha deployment to be ready: waiting
- for typha to have 4 replicas, currently at 3'
- reason: error migrating resources to calico-system
- status: 'True'
- type: Degraded
-kind: List
-metadata:
- resourceVersion: ''
- selfLink: ''
-```
-
-## Check logs for fatal errors
-
-Check that the Tigera operator is running and that logs do not have any fatal errors.
-
-```bash
-kubectl get pods -n tigera-operator
-```
-
-```
-NAME READY STATUS RESTARTS AGE
-tigera-operator-8687585b66-68gmr 1/1 Running 0 139m
-```
-
-```bash
-kubectl logs -n tigera-operator tigera-operator-8687585b66-68gmr
-```
-
-```
-2020/12/30 17:38:54 [INFO] Version: 90975f4
-2020/12/30 17:38:54 [INFO] Go Version: go1.14.4
-2020/12/30 17:38:54 [INFO] Go OS/Arch: linux/amd64
-{"level":"info","ts":1609349935.2848425,"logger":"setup","msg":"Checking type of cluster","provider":""}
-{"level":"info","ts":1609349935.2868738,"logger":"setup","msg":"Checking if TSEE controllers are required","required":true}
-<...>
-```
-
-## Check custom resources
-
-Verify that you have the **installation custom resource**, and that the values are appropriate for your environment.
-
-```bash
-kubectl get installation.operator.tigera.io default -o yaml
-```
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: Installation
-metadata:
- annotations:
- kubectl.kubernetes.io/last-applied-configuration: |
- {"apiVersion":"operator.tigera.io/v1","kind":"Installation","metadata":{"annotations":{},"name":"default"},"spec":{"imagePullSecrets":[{"name":"tigera-pull-secret"}],"variant":"TigeraSecureEnterprise"}}
- creationTimestamp: '2021-01-20T19:50:23Z'
- generation: 2
- managedFields:
- - apiVersion: operator.tigera.io/v1
- fieldsType: FieldsV1
- fieldsV1:
- f:metadata:
- f:annotations:
- .: {}
- f:kubectl.kubernetes.io/last-applied-configuration: {}
- f:spec:
- .: {}
- f:imagePullSecrets: {}
- f:variant: {}
- manager: kubectl
- operation: Update
- time: '2021-01-20T19:50:23Z'
- - apiVersion: operator.tigera.io/v1
- fieldsType: FieldsV1
- fieldsV1:
- f:spec:
- f:calicoNetwork:
- .: {}
- f:bgp: {}
- f:hostPorts: {}
- f:ipPools: {}
- f:mtu: {}
- f:multiInterfaceMode: {}
- f:nodeAddressAutodetectionV4:
- .: {}
- f:firstFound: {}
- f:cni:
- .: {}
- f:ipam:
- .: {}
- f:type: {}
- f:type: {}
- f:componentResources: {}
- f:flexVolumePath: {}
- f:nodeUpdateStrategy:
- .: {}
- f:rollingUpdate:
- .: {}
- f:maxUnavailable: {}
- f:type: {}
- f:status:
- .: {}
- f:variant: {}
- manager: operator
- operation: Update
- time: '2021-01-20T19:55:10Z'
- name: default
- resourceVersion: '5195'
- selfLink: /apis/operator.tigera.io/v1/installations/default
- uid: 016c3f0b-39f0-48a0-9da8-a59a81ed9128
-spec:
- calicoNetwork:
- bgp: Enabled
- hostPorts: Enabled
- ipPools:
- - blockSize: 26
- cidr: 10.42.0.0/16
- encapsulation: IPIP
- natOutgoing: Enabled
- nodeSelector: all()
- mtu: 0
- multiInterfaceMode: None
- nodeAddressAutodetectionV4:
- firstFound: true
- cni:
- ipam:
- type: Calico
- type: Calico
- componentResources:
- - componentName: Node
- resourceRequirements:
- requests:
- cpu: 250m
- flexVolumePath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
- imagePullSecrets:
- - name: tigera-pull-secret
- nodeUpdateStrategy:
- rollingUpdate:
- maxUnavailable: 1
- type: RollingUpdate
- variant: TigeraSecureEnterprise
-status:
- variant: TigeraSecureEnterprise
-```
-
-Verify that you have the following custom resources. In the default installation, there is no configuration information.
-
-**Check API server**
-
-```bash
-kubectl get apiserver.operator.tigera.io tigera-secure
-```
-
-```
-NAME AGE
-tigera-secure 85m
-```
-
-**Check cloud core**
-
-```bash
-kubectl get cloudcore.operator.tigera.io tigera-secure
-```
-
-```
-NAME AGE
-tigera-secure 88m
-```
-
-**Check compliance**
-
-```bash
-kubectl get compliance.operator.tigera.io tigera-secure
-```
-
-```
-NAME AGE
-tigera-secure 90m
-```
-
-**Check intrusion detection**
-
-```bash
-kubectl get intrusiondetection.operator.tigera.io tigera-secure
-```
-
-```
-NAME AGE
-tigera-secure 93m
-```
-
-**Check log collector**
-
-```bash
-kubectl get logcollector.operator.tigera.io tigera-secure
-```
-
-```
-NAME AGE
-tigera-secure 96m
-```
-
-**Check management cluster**
-
-```bash
-kubectl get ManagementClusterConnection.operator.tigera.io tigera-secure -o yaml
-```
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: ManagementClusterConnection
-metadata:
- annotations:
- kubectl.kubernetes.io/last-applied-configuration: |
- {"apiVersion":"operator.tigera.io/v1","kind":"ManagementClusterConnection","metadata":{"annotations":{},"name":"tigera-secure"},"spec":{"managementClusterAddr":".tigera.io:9000"}}
- creationTimestamp: '2021-01-20T19:55:40Z'
- generation: 1
- managedFields:
- - apiVersion: operator.tigera.io/v1
- fieldsType: FieldsV1
- fieldsV1:
- f:metadata:
- f:annotations:
- .: {}
- f:kubectl.kubernetes.io/last-applied-configuration: {}
- f:spec:
- .: {}
- f:managementClusterAddr: {}
- manager: kubectl
- operation: Update
- time: '2021-01-20T19:55:40Z'
- name: tigera-secure
- resourceVersion: '5425'
- selfLink: /apis/operator.tigera.io/v1/managementclusterconnections/tigera-secure
- uid: b7a2093e-a4b6-4e76-b291-15f45bfa11cf
-spec:
- managementClusterAddr: .tigera.io:9000
-```
-
-**Check monitor**
-
-```bash
-kubectl get monitor.operator.tigera.io tigera-secure
-```
-
-```
-NAME AGE
-tigera-secure 98m
-```
-
-**Check runtime security**
-
-```bash
-kubectl get runtimesecurity.operator.tigera.io default
-```
-
-```
-NAME AGE
-default 99m
-```
-
-:::note
-The `runtime-security` custom resource will only be available if the container threat detection feature is enabled.
-:::
-
-For more information on operator custom resources see the [Installation API reference](../reference/installation/api.mdx).
-
-### Deep dive into custom resources
-
-Run the following command to see if you have required custom resources:
-
-```bash
-kubectl get tigerastatus
-```
-
-| | NAME | AVAILABLE | PROGRESSING | DEGRADED | SINCE |
-| --- | ----------------------------- | --------- | ----------- | -------- | ----- |
-| 1 | apiserver | TRUE | FALSE | FALSE | 10m |
-| 2 | calico | TRUE | FALSE | FALSE | 11m |
-| 3 | cloud-core | TRUE | FALSE | FALSE | 11m |
-| 4 | compliance | TRUE | FALSE | FALSE | 9m39s |
-| 5 | intrusion-detection | TRUE | FALSE | FALSE | 9m49s |
-| 6 | log-collector | TRUE | FALSE | FALSE | 9m29s |
-| 7 | management-cluster-connection | TRUE | FALSE | FALSE | 9m54s |
-| 8 | monitor | TRUE | FALSE | FALSE | 11m |
-| 9 | runtime-security | TRUE | FALSE | FALSE | 10m |
-
-**1 - api server**
-
-`apiserver` is a required component that provides an aggregated API server. It is required for operations like applying the Tigera license. If `tigerastatus` reports it as unavailable or degraded, check the pods and logs in the `tigera-system` namespace. For example,
-
-```bash
-kubectl get pods -n tigera-system
-```
-
-```
-NAME READY STATUS RESTARTS AGE
-tigera-apiserver-5c75bc8d4b-sbn6g 2/2 Running 0 45m
-```
-
-**2 - calico**
-
-`calico` is the core component for networking. If it is not available or degraded, check the pods and their logs in the `calico-system` namespace. There should be a calico-node pod running on each of your nodes. You should have at least one `calico-typha` pod and the number will scale with the number of nodes in your cluster. You should have a `calico-kube-controllers` pod running. For example,
-
-```bash
-kubectl get pods -n calico-system
-```
-
-```
-NAME READY STATUS RESTARTS AGE
-calico-kube-controllers-5c77d4d559-hfl5d 1/1 Running 0 44m
-calico-node-6s2c9 1/1 Running 0 40m
-calico-node-8nf28 1/1 Running 0 41m
-calico-node-djlrg 1/1 Running 0 40m
-calico-node-ms8nv 1/1 Running 0 40m
-calico-node-t7pck 1/1 Running 0 40m
-calico-typha-bdb494458-76gcx 1/1 Running 0 41m
-calico-typha-bdb494458-847tr 1/1 Running 0 41m
-calico-typha-bdb494458-k8lhj 1/1 Running 0 40m
-calico-typha-bdb494458-vjbjz 1/1 Running 0 40m
-```
-
-**3 - cloud-core**
-
-`cloud-core` is responsible for predefined and custom roles for users. Check the pods and logs in the `calico-cloud` namespace with the label selector `k8s-app=cc-core-operator`.
-
-```bash
-$ kubectl get pods -n calico-cloud -l k8s-app=cc-core-operator
-```
-
-```
-NAME READY STATUS RESTARTS AGE
-cc-core-operator-126dcd494a-9kj7g 1/1 Running 0 80m
-```
-
-**4 - compliance**
-
-`compliance` is responsible for the compliance features. Check the pods and logs in the `tigera-compliance` namespace.
-
-```bash
-$ kubectl get pods -n tigera-compliance
-```
-
-```
-NAME READY STATUS RESTARTS AGE
-compliance-benchmarker-bqvps 1/1 Running 0 65m
-compliance-benchmarker-h58hr 1/1 Running 0 65m
-compliance-benchmarker-kdtwp 1/1 Running 0 65m
-compliance-benchmarker-mzm2z 1/1 Running 0 65m
-compliance-benchmarker-s5mmf 1/1 Running 0 65m
-compliance-controller-77785646df-ws2cj 1/1 Running 0 65m
-compliance-snapshotter-6bcbdc65b-66k9v 1/1 Running 0 65m
-```
-
-**5 - intrusion-detection**
-
-`intrusion-detection` is responsible for the intrusion detection features. Check the pods and logs in the `tigera-intrusion-detection` namespace.
-
-```bash
-$ kubectl get pods -n tigera-intrusion-detection
-```
-
-```
-NAME READY STATUS RESTARTS AGE
-intrusion-detection-controller-669bf45c75-grvz9 1/1 Running 0 66m
-intrusion-detection-es-job-installer-xm22v 1/1 Running 0 66m
-```
-
-**6 - log-collector**
-
-`log-collector` collects flow and other logs and forwards them to $[prodname]. Check the pods and logs in the `tigera-fluentd` namespace. You should have one pod running on each of your nodes.
-
-```bash
-kubectl get pods -n tigera-fluentd
-```
-
-```
-NAME READY STATUS RESTARTS AGE
-fluentd-node-5mzh6 1/1 Running 0 70m
-fluentd-node-7vmxw 1/1 Running 0 70m
-fluentd-node-bbc4p 1/1 Running 0 70m
-fluentd-node-chfz4 1/1 Running 0 70m
-fluentd-node-d6f56 1/1 Running 0 70m
-```
-
-**7 - management-cluster-connection**
-
-The `management-cluster-connection` is required for your managed clusters to connect to the $[prodname] backend. If it is not available or degraded, check the pods and logs in the `tigera-guardian` namespace.
-
-```bash
-kubectl get pods -n tigera-guardian
-```
-
-```
-NAME READY STATUS RESTARTS AGE
-tigera-guardian-7d5d94d5cc-49rg8 1/1 Running 0 48m
-```
-
-To verify that the guardian component has network connectivity to the management cluster:
-
-Find the URL to the management cluster:
-
-```bash
-kubectl get managementclusterconnection tigera-secure -o=jsonpath='{.spec.managementClusterAddr}'
-<your-management-cluster>.tigera.io:9000
-```
-
-Then, from a worker node, verify network connectivity to the management cluster:
-
-```bash
-openssl s_client -connect <your-management-cluster>.tigera.io:9000
-```
-
-```
-CONNECTED(00000003)
-depth=0 CN = tigera-voltron
-verify error:num=18:self signed certificate
-verify return:1
-depth=0 CN = tigera-voltron
-verify return:1
----
-Certificate chain
- 0 s:CN = tigera-voltron
- i:CN = tigera-voltron
----
-Server certificate
------BEGIN CERTIFICATE-----
-MIIC5DCCAcygAwIBAgIBATANBgkqhkiG9w0BAQsFADAZMRcwFQYDVQQDEw50aWdl
-cmEtdm9sdHJvbjAeFw0yMDEyMjExOTA1MzhaFw0yNTEyMjAxOTA1MzhaMBkxFzAV
-<...>
-```
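-
-If you prefer a single copy-and-paste check, the two steps above can be combined. This is a sketch that assumes `kubectl` and `openssl` are both available on the machine where you run it.
-
-```bash
-# Read the management cluster address from the cluster, then test TLS connectivity to it (sketch)
-MGMT_ADDR=$(kubectl get managementclusterconnection tigera-secure -o=jsonpath='{.spec.managementClusterAddr}')
-openssl s_client -connect "${MGMT_ADDR}"
-```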
-
-**8 - monitor**
-
-`monitor` is responsible for configuring prometheus and associated custom resources. Check the pods and logs in the `tigera-prometheus` namespace.
-
-```bash
-$ kubectl get pods -n tigera-prometheus
-```
-
-```
-NAME READY STATUS RESTARTS AGE
-alertmanager-calico-node-alertmanager-0 2/2 Running 0 125m
-alertmanager-calico-node-alertmanager-1 2/2 Running 0 125m
-alertmanager-calico-node-alertmanager-2 2/2 Running 0 125m
-calico-prometheus-operator-77bf897c9b-7f88x 1/1 Running 0 125m
-prometheus-calico-node-prometheus-0 3/3 Running 1 125m
-```
-
-**9 - runtime-security**
-
-`runtime-security` is responsible for the container threat detection feature. Check the pods and logs in the `calico-cloud` namespace with the label selector `k8s-app=tigera-runtime-security-operator`.
-
-```bash
-$ kubectl get pods -n calico-cloud -l k8s-app=tigera-runtime-security-operator
-```
-
-```
-NAME READY STATUS RESTARTS AGE
-tigera-runtime-security-operator-127b606afc-ap25k 1/1 Running 0 80m
-```
-
-### Check additional custom resources
-
-Check for the presence of other custom resources created by the Tigera operator: FelixConfiguration, IPPool, Tigera License, and Prometheus for component metrics.
-
-`FelixConfiguration` contains configuration that is not set as environment variables on the `calico-node` container.
-
-```bash
-kubectl get Felixconfiguration default
-```
-
-```
-NAME CREATED AT
-default 2021-01-20T19:49:35Z
-```
-
-The operator creates a default `IPPool` for your pod networking if it does not already exist; in this case, the CIDR is taken from the Installation CR.
-
-```bash
-kubectl get IPPool
-```
-
-```
-NAME CREATED AT
-default-ipv4-ippool 2021-01-20T19:49:35Z
-```
-
-A Tigera license is applied by the installation script.
-
-```bash
-kubectl get LicenseKeys.crd.projectcalico.org
-```
-
-```
-NAME AGE
-default 120m
-```
-
-The installation script deploys a Prometheus operator and associated custom resources. If you already have a Prometheus operator running in your cluster, contact Tigera support.
-
-```bash
-kubectl get pods -n tigera-prometheus
-```
-
-```
-NAME READY STATUS RESTARTS AGE
-alertmanager-calico-node-alertmanager-0 2/2 Running 0 125m
-alertmanager-calico-node-alertmanager-1 2/2 Running 0 125m
-alertmanager-calico-node-alertmanager-2 2/2 Running 0 125m
-calico-prometheus-operator-77bf897c9b-7f88x 1/1 Running 0 125m
-prometheus-calico-node-prometheus-0 3/3 Running 1 125m
-```
-
-## Check pod capacity
-
-If the cluster does not have enough capacity, it will not be able to deploy pods. There is no specific error associated with this condition.
-
-The high-level components $[prodname] needs to run are:
-
-- Per node: 1 fluentd, 1 compliance benchmarker
-- In addition to the per-node pods: 3 alertmanager (from a statefulset), 1 prometheus, 1 prometheus operator, 1 kube-controllers, 2 compliance pods (snapshotter and controller), 1 guardian, 1 intrusion detection controller, 1 apiserver
-
-Some clusters have limited pod-networked pod capacity.
-
-- For the AWS CNI plugin, the [number of pods that can be networked is based on the size of the instance](https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html).
-- For AKS with the Azure CNI, the [number of pods that can be networked is set at cluster deployment time or when new node pools are created (default of 30)](https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni#maximum-pods-per-node).
-- For GKE, there is a [hard limit of 110 pods per node](https://cloud.google.com/kubernetes-engine/docs/best-practices/scalability#dimension_limits).
-- For Calico CNI, the pod limit is based on the available IPs in the IPPool, and there is no specific per node limit.
-
-Verify you have the following pod-networked pod capacity.
-
-- Verify on each node in your cluster that there is capacity for at least 2 pods.
-- Verify there is capacity for at least 11 pods in the cluster in addition to the per node capacity.
-
-To check the capacity of individual nodes on AWS or AKS, query the node status and look at `Capacity.Pods` (which is the total capacity for the node). To get the number of pod-networked pods for a node, count the pods on the node that are pod-networked (non-hostNetworked pods).
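-
-The following commands are one way to make these checks. They are a sketch only: `<node-name>` is a placeholder, and the second command assumes `jq` is installed.
-
-```bash
-# Total pod capacity reported by each node (sketch)
-kubectl get nodes -o custom-columns=NAME:.metadata.name,POD_CAPACITY:.status.capacity.pods
-
-# Count pod-networked (non-hostNetwork) pods currently scheduled on one node (sketch)
-kubectl get pods --all-namespaces --field-selector spec.nodeName=<node-name> -o json \
-  | jq '[.items[] | select(.spec.hostNetwork != true)] | length'
-```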
-
-## Check pod security policy violation
-
-If your cluster is using Kubernetes version 1.24 or earlier, a pod security policy (PSP) violation may be blocking pods on the cluster.
-
-Search for the term `PodSecurityPolicy` in the status message of failed cluster deployments. If a PSP is present, install open source Calico in the cluster before you connect to $[prodname].
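-
-On clusters running Kubernetes 1.24 or earlier, you can also list any pod security policies that are present. A minimal sketch:
-
-```bash
-kubectl get podsecuritypolicies.policy
-```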
-
-## Check Manager UI dashboard for traffic
-
-### Manager UI main dashboard is missing traffic graphs
-
-When you log in to Manager UI, the first item in the left nav is the main Dashboard. This dashboard is a bird's-eye view of your managed cluster activity for policy and networking. For the graphs to display traffic, the Tigera Prometheus operator must be running. If an existing Prometheus operator was detected in your cluster, the Tigera Prometheus operator is skipped during installation and the following message is displayed:
-
-`Prometheus Operator detected in the cluster. Skipping Tigera Prometheus Operator`
-
-To install an appropriate Prometheus operator, contact Support.
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/get-started/prepare-cluster.mdx b/calico-cloud_versioned_docs/version-20-1/get-started/prepare-cluster.mdx
deleted file mode 100644
index 8b53ee99ae..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/get-started/prepare-cluster.mdx
+++ /dev/null
@@ -1,78 +0,0 @@
----
-description: Prepare your cluster to install Calico Cloud.
----
-
-# Prepare your cluster for Calico Cloud
-
-Get your cluster ready to connect to Calico Cloud.
-
-## Prerequisites
-
-* Your cluster meets the [system requirements](system-requirements.mdx) for Calico Cloud.
-
-## Allow outbound traffic from pods to Calico Cloud endpoints
-
-Pods running in your Kubernetes cluster must allow outbound traffic to the following endpoints:
-
-- `https://installer.calicocloud.io:443/*`
-- `https://www.calicocloud.io:443/api/*`
-- `https://client-auth.calicocloud.io:443/*`
-- TCP to `*.calicocloud.io:9000`
-
-For each node, Docker must be able to pull images from the following endpoints:
-
-* `quay.io`
-* `cdn01.quay.io`
-* `cdn02.quay.io`
-* `us-docker.pkg.dev`
-
-## Make sure you have the right permissions for your platform user account
-
-If your cluster is installed on a managed service, you must have sufficient permissions from your identity and access management system.
-Check that you are authorized to create the following Kubernetes resource types:
-
-* `ClusterRole`
-* `ClusterRoleBinding`
-* `Deployment`
-* `ServiceAccount`
-* `CustomResourceDefinition`
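-
-One quick way to confirm these permissions from the command line is `kubectl auth can-i`. This is a sketch only; run it against the cluster you plan to connect.
-
-```bash
-# Check create permissions for each resource type Calico Cloud needs (sketch)
-for resource in clusterroles clusterrolebindings deployments serviceaccounts customresourcedefinitions; do
-  echo -n "create ${resource}: "
-  kubectl auth can-i create "${resource}"
-done
-```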
-
-## Prepare your cluster on Azure Kubernetes Service
-
-### Remove taints from Linux node pools
-
-If you have a hybrid cluster with both Windows and Linux nodes, the Linux nodes may have taints that prevent Calico Cloud from scheduling pods on those nodes.
-These taints must be removed before you connect your cluster to Calico Cloud.
-
-You can check whether any node pools in your cluster have taints by running the following command:
-
-```bash
-az aks nodepool list --resource-group <resource-group> --cluster-name <cluster-name> --query "[].{name:name, nodeTaints:nodeTaints}"
-```
-
-Remove any taints in the Linux node pools by running the command:
-
-```bash
-az aks nodepool update --resource-group <resource-group> --cluster-name <cluster-name> --name <node-pool-name> --node-taints ""
-```
-
-## Prepare your cluster on Google Kubernetes Engine
-
-### Turn on intranode visibility for your cluster
-
-Verify that intranode visibility is set to `Enabled` by running the following command:
-
-```bash
-gcloud container clusters describe <cluster-name> --flatten networkConfig.enableIntraNodeVisibility
-```
-
-If intranode visibility is not enabled, you must enable it by running the following command:
-
-```bash
-gcloud container clusters update <cluster-name> --enable-intra-node-visibility
-```
-
-## Next steps
-
-* [Connect your cluster to Calico Cloud](install-cluster.mdx)
-* [Connect your cluster using a private registry](install-private-registry.mdx)
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/get-started/setup-private-registry.mdx b/calico-cloud_versioned_docs/version-20-1/get-started/setup-private-registry.mdx
deleted file mode 100644
index d3e093def8..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/get-started/setup-private-registry.mdx
+++ /dev/null
@@ -1,102 +0,0 @@
----
-description: Add images to a private registry for installing Calico Cloud on a cluster.
----
-
-# Set up a private registry
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-import VersionedCloudImageList from '@site/src/___new___/components/VersionedCloudImageList';
-
-## Big picture
-
-Add $[prodname] images to a private registry to install new clusters or update existing ones.
-
-## Value
-
-In some deployments, installing $[prodname] in clusters from third-party repos is not an option. Before you can install $[prodname] from a private registry, you must first add $[prodname] images to the registry.
-
-## Concepts
-
-A **container image registry** (often known as a **registry**) is a service where you can push, pull, and store container images.
-
-A **private registry** is a registry that is not publicly available. It must be accessible on a private network or with an **image pull secret** for authenticated access.
-
-An **image path** is a directory in a registry that contains images required to install $[prodname].
-
-## Before you begin
-
-**Required**
-
-- [Helm CLI](https://helm.sh/docs/intro/install/) command
-- Use the [Crane command](https://github.com/google/go-containerregistry/blob/main/cmd/crane/README.md) to follow the steps in this guide. (Other tools can be used, but the commands must be adjusted accordingly.)
-- Image registry
- - Must be private (accessible only on a private network or using log in credentials)
- - Images copied to the registry must not be made publicly available
- - Push access to your registry
-
-## How to
-
-Add the required $[prodname] images to a private registry.
-
-### Set up registry credentials
-
-1. Log into $[prodname] and navigate to "Managed Clusters".
-1. Get the "Registry Credentials" by clicking on the icon. ![registry credentials](/img/calico-cloud/private-registry-icon.png)
-1. Apply the credentials so the $[prodname] images can be accessed.
-
-### Create the list of required images
-
-
-
-### Set up the private registry
-
-```bash
-REGISTRY=<your-registry>/   # include the trailing slash, for example my.registry/
-```
-
-If you want all images to come from the same path in your registry, set this image path value. Otherwise unset this environment variable or set it to the empty string.
-
-```bash
-IMAGEPATH=""
-```
-
-#### Image examples
-
-| Original image | Image Registry | Image Path | Private registry image |
-| ----------------------------------- | -------------- | ------------- | -------------------------------------------- |
-| `quay.io/tigera/typha:v1.2.3` | `my.registry/` | | `my.registry/tigera/typha:v1.2.3` |
-| `quay.io/tigera/typha:v1.2.3` | `my.registry/` | `custom-path` | `my.registry/custom-path/typha:v1.2.3` |
-| `quay.io/tigera/cc-operator:v4.5.6` | `my.registry/` | | `my.registry/tigera/cc-operator:v4.5.6` |
-| `quay.io/tigera/cc-operator:v4.5.6` | `my.registry/` | `custom-path` | `my.registry/custom-path/cc-operator:v4.5.6` |
-
-### Copy images to your registry
-
-For $[prodname] to install images from your registry, copy the images from the standard registries into your own registry.
-
-
-
-
-```bash
-for image in ${IMAGES[@]}; do
- img_base=$(echo ${image} | sed "s#^.*/\([^/]*/[^/]*$\)#\1#")
- crane cp ${image} ${REGISTRY}${img_base} || break
-done
-```
-
-
-
-
-```bash
-for image in ${IMAGES[@]}; do
- img_base=$(echo ${image} | sed "s#^.*/##")
- crane cp ${image} ${REGISTRY}${IMAGEPATH}/${img_base} || break
-done
-```
-
-
-
-
-## Install using the private registry
-
-Follow the directions [to connect a cluster to Calico Cloud](install-cluster.mdx).
diff --git a/calico-cloud_versioned_docs/version-20-1/get-started/system-requirements.mdx b/calico-cloud_versioned_docs/version-20-1/get-started/system-requirements.mdx
deleted file mode 100644
index ecebfad2b9..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/get-started/system-requirements.mdx
+++ /dev/null
@@ -1,143 +0,0 @@
----
-description: Review cluster requirements to connect to Calico Cloud.
----
-
-# System requirements
-
-Before you connect your cluster to Calico Cloud, make sure your cluster meets the system requirements.
-Your cluster must already have a CNI installed before you can connect to Calico Cloud.
-
-
-
-## Kubernetes distributions and CNIs
-
-Calico Cloud works with Kubernetes on self-provisioned infrastructure and on managed Kubernetes distributions.
-To use Calico Cloud for both networking and network policy, your cluster must have Calico Open Source installed before you connect to Calico Cloud.
-For most managed distributions, you can use the provider's CNI for networking and use Calico Cloud for network policy.
-
-| Distribution | Supported CNIs |
-| --- | --- |
-| Kubernetes on self-provisioned infrastructure | - Calico Open Source 3.20 or later |
-| Amazon Elastic Kubernetes Service | - Calico Open Source 3.20 or later<br/>- Amazon VPC CNI |
-| Azure Kubernetes Service | - Calico Open Source 3.20 or later<br/>- Azure CNI |
-| Google Kubernetes Engine | - Calico Open Source 3.20 or later<br/>- GKE CNI |
-| Rancher Kubernetes Engine 2 | - Calico Open Source 3.20 or later |
-
-:::note
-
-The Kubernetes distributions listed above are those that Tigera currently tests and supports for Calico Cloud.
-You may be able to connect clusters on other distributions with Calico Open Source installed as the CNI.
-For more information about connecting other cluster types to Calico Cloud, [contact Support](https://tigeraio.my.site.com/community/s/login/).
-
-:::
-
-## Kubernetes versions
-
-Your Kubernetes distribution must be based on one of the following Kubernetes versions:
-* Kubernetes 1.30
-* Kubernetes 1.29
-* Kubernetes 1.28
-
-## Architectures
-
-Calico Cloud can be installed on nodes based on the following chip architectures:
-* x86-64
-* ARM64
-
-## Browser support for the Manager UI web console
-
-To access the Manager UI web console, you can use the latest two versions of the following web browsers:
-* Chrome
-* Safari
-
-## Kubernetes reconcilers
-
-* $[prodname] usually cannot be installed on clusters that are managed by any kind of Kubernetes reconciler (for example, Addon-manager). To verify, look for an annotation called `addonmanager.kubernetes.io/mode` on either of the following resources. (The resources may not exist.)
-
- * `tigera-operator` deployment in the `tigera-operator` namespace
- * `calico-node` daemonset in the `kube-system` namespace
-
- If the following command finds the `addonmanager.kubernetes.io/mode` annotation on either of the resources, then Addon-manager is being used, and you should find a different cluster to use.
-
- ```bash
- kubectl get <resource> -n <namespace> -o yaml | grep ' addonmanager.kubernetes.io/mode:'
- ```
-* Some AKS clusters with AddonManager are compatible with Calico Cloud.
- If output from the following command includes "EnsureExists", then the install is compatible with $[prodname].
-
- ```bash
- kubectl get CustomResourceDefinition installations.operator.tigera.io -o yaml | grep ' addonmanager.kubernetes.io/mode:'
- ```
-
- :::note
-
- If the command output does not include "EnsureExists" and you are on a recent version of AKS, your cluster might still be compatible.
- You can [contact Support](https://tigeraio.my.site.com/community/s/login/) for more information.
-
- :::
-
- :::warning
-
- If your cluster already has Calico installed by AKS and managed by AddonManager, the standard [uninstall](../operations/disconnect.mdx)
- is not supported. You will need to reach out to support to create a plan to uninstall $[prodname].
-
- :::
-
-## Distribution-specific requirements
-
-### Azure Kubernetes Service
-
-* Your cluster uses a supported combination of the `networkPlugin` and `networkPolicy` configurations:
- - `"networkPlugin": "none"` and `"networkPolicy": null`
- - `"networkPlugin": "azure"` and `"networkPolicy": null`
- - `"networkPlugin": "azure"` and `"networkPolicy": "calico"`
-
- You can check your configuration by running the following command:
-
- ```bash
- az aks show --resource-group <resource-group> --name <cluster-name> --query 'networkProfile'
- ```
-
-* If your cluster uses the Azure CNI, your cluster's CNI is set to [transparent mode](https://docs.microsoft.com/en-us/azure/aks/faq#what-is-azure-cni-transparent-mode-vs-bridge-mode).
-
- ```bash
- az vmss run-command invoke -g <node-resource-group> -n <vmss-name> --scripts "cat /etc/cni/net.d/*" --command-id RunShellScript --instance-id 0 --query 'value[0].message'
- ```
-
- If the Azure CNI is enabled, the output should include `"mode": "transparent"`.
-
-### Google Kubernetes Engine
-
-* Your cluster's network policy is disabled.
- To verify, run the following command:
-
- ```bash
- gcloud container clusters describe <cluster-name> --flatten addonsConfig.networkPolicyConfig.disabled
- ```
-
-* Your cluster's Dataplane V2 is set to `null`.
- To verify, run the following command:
-
- ```bash
- gcloud container clusters describe <cluster-name> --flatten networkConfig.datapathProvider
- ```
-
-### Rancher Kubernetes Engine 2
-
-* The Calico Open Source CNI must not be provisioned by the RKE2 installer.
- You can connect an RKE2 cluster to Calico Cloud only if:
- * the RKE2 cluster was installed without a CNI
- * Calico Open Source was installed manually
-
- To verify, run this command to see the configuration on your control-plane node:
-
- ```bash
- cat /etc/rancher/rke2/config.yaml
- ```
- You should see `cni: none`.
-
- If you're creating a new RKE2 cluster, you can set this configuration as an environment variable (`RKE2_CNI=none`) when you run the installation script.
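-
- For example, here is a sketch of a fresh RKE2 server install with the CNI disabled, assuming the standard RKE2 install script honors the `RKE2_CNI` variable as described above:
-
- ```bash
- curl -sfL https://get.rke2.io | RKE2_CNI=none sh -
- ```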
-
-## Next steps
-
-* [Prepare your cluster for Calico Cloud](prepare-cluster.mdx)
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/get-started/upgrade-cluster.mdx b/calico-cloud_versioned_docs/version-20-1/get-started/upgrade-cluster.mdx
deleted file mode 100644
index 4da942eb2c..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/get-started/upgrade-cluster.mdx
+++ /dev/null
@@ -1,15 +0,0 @@
----
-description: Steps to upgrade to the latest version of Calico Cloud.
----
-
-# Upgrade Calico Cloud
-
-To upgrade managed clusters to the latest version of $[prodname]:
-
-1. In Manager UI, go to **Managed Clusters**.
-1. For the cluster you want to upgrade, select **Actions** > **Reinstall**.
-1. Select the version of $[prodname] that you want to install. You can select the currently installed version or any newer supported version.
-1. Click **Next**, copy the command, and apply it to the appropriate cluster.
-
-![reinstall](/img/calico-cloud/managed-clusters-reinstall.png)
-![reinstall-step1](/img/calico-cloud/managed-clusters-reinstall-step1.png)
diff --git a/calico-cloud_versioned_docs/version-20-1/get-started/windows-limitations.mdx b/calico-cloud_versioned_docs/version-20-1/get-started/windows-limitations.mdx
deleted file mode 100644
index fcf8fffea2..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/get-started/windows-limitations.mdx
+++ /dev/null
@@ -1,205 +0,0 @@
----
-description: Review limitations before starting installation.
----
-
-# Limitations and known issues for Windows nodes
-
-## $[prodname] feature limitations
-
-| Feature | Unsupported in this release |
-| ------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Platforms | - GKE |
-| Install and upgrade | - Typha component for scaling (Linux-based feature) |
-| Networking | - Overlay mode with BGP peering<br/>- IP in IP overlay with BGP routing<br/>- Cross-subnet support and MTU setting for VXLAN<br/>- IPv6 and dual stack<br/>- Dual-ToR<br/>- Service advertisement<br/>- Multiple networks to pods |
-| Policy | - Staged network-policy<br/>- Firewall integrations<br/>- Policy for hosts (host endpoints, including automatic host endpoints)<br/>- Tiered policy: TKG, GKE, AKS<br/>- WAF integration<br/>- AWS firewall integration<br/>- Fortinet integration |
-| Visibility and troubleshooting | - Packet capture<br/>- DNS logs<br/>- iptables logs<br/>- L7 logs |
-| Threat defense | - No threat defense features are supported. |
-| Image Assurance | - No Image Assurance features are supported. |
-| Multi-cluster management | - Multi-cluster management federated identity endpoints and services<br/>- Federated endpoint identity and services |
-| Compliance and security | - CIS benchmark and other reports<br/>- Wireguard encryption for pod-to-pod traffic and host-to-host traffic |
-| Dataplane | - eBPF is a Linux-based feature |
-
-## $[prodname] BGP networking limitations
-
-If you are using $[prodname] with BGP, note these current limitations with Windows.
-
-| Feature | Limitation |
-| ------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| IP mobility/ borrowing | $[prodname] IPAM allocates IPs to hosts in blocks for aggregation purposes. If the IP pool is full, nodes can also "borrow" IPs from another node's block. In BGP terms, the borrower then advertises a more specific "/32" route for the borrowed IP and traffic for that IP is only routed to the borrowing host.<br/>Windows nodes do not support this borrowing mechanism; they will not borrow IPs even if the IP pool is full and they mark their blocks so that Linux nodes will not borrow from them. |
-| IPs reserved for Windows | $[prodname] IPAM allocates IPs in CIDR blocks. Due to networking requirements on Windows, four IPs per Windows node-owned block must be reserved for internal purposes.<br/>For example, with the default block size of /26, each block contains 64 IP addresses, 4 are reserved for Windows, leaving 60 for pod networking.<br/>To reduce the impact of these reservations, a larger block size can be configured at the IP pool scope (before any pods are created). |
-| Single IP block per host | $[prodname] IPAM is designed to allocate blocks of IPs (default size /26) to hosts on demand. While the $[prodname] CNI plugin was written to do the same, kube-proxy for Windows currently only supports a single IP block per host.<br/>To work around the default limit of one /26 per host there are some options:<br/>- Use $[prodname] BGP networking with the kubernetes datastore. In that mode, $[prodname] IPAM is not used and the CNI host-local IPAM plugin is used with the node's Pod CIDR.<br/>- To allow multiple IPAM blocks per host (at the expense of kube-proxy compatibility), set the `windows_use_single_network` flag to `false` in the `cni.conf.template` before installing $[prodname]. Changing that setting after pods are networked is not recommended because it may leak HNS endpoints. |
-| IP-in-IP overlay | $[prodname]'s IPIP overlay mode cannot be used in clusters that contain Windows nodes because Windows does not support IP-in-IP. |
-| NAT-outgoing | $[prodname] IP pools support a "NAT outgoing" setting with the following behaviour:<br/>- Traffic between $[prodname] workloads (in any IP pools) is not NATted.<br/>- Traffic leaving the configured IP pools is NATted if the workload has an IP within an IP pool that has NAT outgoing enabled.<br/>$[prodname] honors the above setting but it is only applied at pod creation time. If the IP pool configuration is updated after a pod is created, the pod's traffic will continue to be NATted (or not) as before. NAT policy for newly-networked pods will honor the new configuration. $[prodname] automatically adds the host itself and its subnet to the NAT exclusion list. This behaviour can be disabled by setting flag `windows_disable_host_subnet_nat_exclusion` to `true` in `cni.conf.template` before running the install script. |
-| Service IP advertisement | This $[prodname] feature is not supported on Windows. |
-
-### Check your network configuration
-
-If you are using a networking type that requires layer 2 reachability (such as $[prodname] with a BGP mesh and no peering to your fabric), you can check that your network has layer 2 reachability as follows:
-
-On each of your nodes, check the IP network of the network adapter that you plan to use for pod networking. For example, on Linux, assuming your network adapter is eth0, you can run:
-
-```
-$ ip addr show eth0
-2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
-    link/ether 00:0c:29:cb:c8:19 brd ff:ff:ff:ff:ff:ff
-    inet 192.168.171.136/24 brd 192.168.171.255 scope global eth0
-       valid_lft forever preferred_lft forever
-    inet6 fe80::20c:29ff:fecb:c819/64 scope link
-       valid_lft forever preferred_lft forever
-```
-
-In this case, the IPv4 address is 192.168.171.136/24, which, after applying the /24 mask, gives 192.168.171.0/24 for the IP network.
-
-Similarly, on Windows, you can run:
-
-```
-PS C:\> ipconfig
-
-Windows IP Configuration
-
-Ethernet adapter vEthernet (Ethernet 2):
-
-   Connection-specific DNS Suffix . : us-west-2.compute.internal
-   Link-local IPv6 Address . . . . . : fe80::6d10:ccdd:bfbe:bce2%15
-   IPv4 Address. . . . . . . . . . . : 172.20.41.103
-   Subnet Mask . . . . . . . . . . . : 255.255.224.0
-   Default Gateway . . . . . . . . . : 172.20.32.1
-
-```
-
-In this case, the IPv4 address is 172.20.41.103 and the mask is represented as bytes 255.255.224.0 rather than CIDR notation. Applying the mask, we get a network address 172.20.32.0/19.
-
-Because the Linux node is on network 192.168.171.0/24 and the Windows node is on a different network, 172.20.32.0/19, they are unlikely to be on the same layer 2 network.
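-
-If you prefer not to do the mask arithmetic by hand, a quick sketch (assuming the `ipcalc` utility is installed; output format varies between implementations):
-
-```bash
-ipcalc 192.168.171.136/24             # Linux node   -> network 192.168.171.0/24
-ipcalc 172.20.41.103 255.255.224.0    # Windows node -> network 172.20.32.0/19
-```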
-
-## VXLAN networking limitations
-
-Because of differences between the Linux and Windows dataplane feature sets, the following $[prodname] features are not supported on Windows.
-
-| Feature | Limitation |
-| ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| IPs reserved for Windows | $[prodname] IPAM allocates IPs in CIDR blocks. Due to networking requirements on Windows, four IPs per Windows node-owned block must be reserved for internal purposes.<br/>For example, with the default block size of /26, each block contains 64 IP addresses, 4 are reserved for Windows, leaving 60 for pod networking.<br/>To reduce the impact of these reservations, a larger block size can be configured at the IP pool scope (before any pods are created). |
-| Single IP block per host | $[prodname] IPAM is designed to allocate blocks of IPs (default size /26) to hosts on demand. While the $[prodname] CNI plugin was written to do the same, kube-proxy currently only supports a single IP block per host. To allow multiple IPAM blocks per host (at the expense of kube-proxy compatibility), set the `windows_use_single_network` flag to `false` in the `cni.conf.template` before installing $[prodname]. Changing that setting after pods are networked is not recommended because it may leak HNS endpoints. |
-
-## Routes are lost in cloud providers
-
-If you create a Windows host with a cloud provider (AWS for example), the creation of the vSwitch at $[prodname] install time can remove the cloud provider's metadata route. If your application relies on the metadata service, you may need to examine the routing table before and after installing $[prodname] to reinstate any lost routes.
-
-## VXLAN limitations
-
-**VXLAN support**
-
-- Windows 1903 build 18317 and above
-- Windows 1809 build 17763 and above
-
-**Configuration updates**
-
-Certain configuration changes are not honored after the first pod is networked. This is because Windows does not currently support updating the VXLAN subnet parameters after the network is created, so updating those parameters requires the node to be drained.
-
-One example is the VXLAN VNI setting. To change such parameters:
-
-- Drain the node of all pods
-- Delete the $[prodname] HNS network:
-
- ```powershell
- Import-Module -DisableNameChecking $[rootDirWindows]\libs\hns\hns.psm1
- Get-HNSNetwork | ? Name -EQ "$[prodname]" | Remove-HNSNetwork
- ```
-
-- Update the configuration in `config.ps1`, run `uninstall-calico.ps1` and then `install-calico.ps1` to regenerate the CNI configuration.
-
-## Pod-to-pod connections are dropped with TCP reset packets
-
-Restarting Felix or changes to policy (including changes to endpoints referred to in policy) can cause pod-to-pod connections to be dropped with TCP reset packets when one of the following occurs:
-
-- The policy that applies to a pod is updated
-- Some ingress or egress policy that applies to a pod contains selectors and the set of endpoints that those selectors match changes
-
-Felix must reprogram the HNS ACL policy attached to the pod. This reprogramming can cause TCP resets. Microsoft has confirmed this is an HNS issue, and they are investigating.
-
-## Service ClusterIPs incompatible with selectors on pod IPs in network policy
-
-**Windows 1809 prior to build 17763.1432**
-
-On Windows nodes, kube-proxy unconditionally applies source NAT to traffic from local pods to service ClusterIPs. This means that, at the destination pod, where policy is applied, the traffic appears to come from the source host rather than the source pod. In turn, this means that a network policy with a source selector matching the source pod will not match the expected traffic.
-
-## Network policy and using selectors
-
-Under certain conditions, relatively simple $[prodname] policies can require significant Windows dataplane resources, which can cause high CPU and memory usage and long policy programming latency.
-
-We recommend avoiding policies that contain rules with both a source and destination selector. The following is an example of a policy that would be inefficient. The policy applies to all workloads, and it only allows traffic from workloads labeled as clients to workloads labeled as servers:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: calico-dest-selector
-spec:
- selector: all()
- order: 500
- ingress:
- - action: Allow
- destination:
- selector: role == "webserver"
- source:
- selector: role == "client"
-```
-
-Because the policy applies to all workloads, it will be rendered once per workload (even if the workload is not labeled as a server), and then the selectors will be expanded into many individual dataplane rules to capture the allowed connectivity.
-
-Here is a much more efficient policy that still allows the same traffic:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: calico-dest-selector
-spec:
- selector: role == "webserver"
- order: 500
- ingress:
- - action: Allow
- source:
- selector: role == "client"
-```
-
-The destination selector is moved into the policy selector, so this policy is only rendered for workloads that have the `role: webserver` label. In addition, the rule is simplified so that it only matches on the source of the traffic. Depending on the number of webserver pods, this change can reduce the dataplane resource usage by several orders of magnitude.
-
-## Network policy with tiers
-
-Because of the way the Windows dataplane handles rules, the following limitations are required to avoid performance issues:
-
-- Tiers: maximum of 5
-- `pass` rules: maximum of 10 per tier
-- If each tier contains a large number of rules, and has pass rules, you may need to reduce the number of tiers further.
-
-## Flow log limitations
-
-$[prodname] supports flow logs with these limitations:
-
-- No packet/byte stats for denied traffic
-- Inaccurate `num_flows_started` and `num_flows_completed` stats with VXLAN networking
-- No DNS stats
-- No HTTP stats
-- No RuleTrace for tiers
-- No BGP logs
-
-## DNS Policy limitations
-
-:::note
-
-DNS Policy is a tech preview feature. Tech preview features may be subject to significant changes before they become GA.
-
-:::
-
-$[prodname] supports DNS policy on Windows with these limitations:
-
-- It could take up to 5 seconds for the first TCP SYN packet of a connection to a DNS domain name to go through. This is because DNS policies are dynamically programmed. The first TCP packet could be dropped because there is no policy to allow it until $[prodname] detects the domain's IPs from the DNS response and programs DNS policy rules. The Windows TCP/IP stack will send the SYN again after the TCP retransmission timeout (RTO) if the previous SYN was dropped.
-- Some runtime libraries do not honour DNS TTL. Instead, they manage their own DNS cache, which has a different TTL value for DNS entries. On .NET Framework, the value that controls DNS TTL is `ServicePointManager.DnsRefreshTimeout`, which has a default value of 120 seconds ([DNS refresh timeout](https://docs.microsoft.com/en-us/dotnet/api/system.net.servicepointmanager.dnsrefreshtimeout)). It is important that $[prodname] uses a longer TTL value than the one used by the application, so that DNS policy is in place when the application makes outbound connections. The configuration item `WindowsDNSExtraTTL` should have a value bigger than the maximum DNS TTL used by the runtime libraries for your applications.
-- Due to the limitations of Windows container networking, a policy update could have an impact on performance. Programming DNS policy may result in more policy updates. Setting `WindowsDNSExtraTTL` to a bigger number reduces the performance impact.
diff --git a/calico-cloud_versioned_docs/version-20-1/image-assurance/creating-jira-issues-for-scan-results.mdx b/calico-cloud_versioned_docs/version-20-1/image-assurance/creating-jira-issues-for-scan-results.mdx
deleted file mode 100644
index 8641fe0896..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/image-assurance/creating-jira-issues-for-scan-results.mdx
+++ /dev/null
@@ -1,38 +0,0 @@
----
-description: Creating Jira issues for your scan results.
----
-
-import IconUser from '/img/icons/user-icon.svg';
-
-# Create a Jira issue for your image scan results
-
-You can create and assign Jira issues with information about vulnerabilities from your Image Assurance scan results.
-
-![ia-jira-after-creation](/img/calico-cloud/ia-jira-after-creation.png)
-*Scan results detail with link to Jira issue.*
-
-## Add Jira credentials to Calico Cloud
-
-You must add Jira user credentials to Calico Cloud to create issues for scanned images.
-
-***Prerequisites***
-
-* You have access to a Jira user account with permissions to create issues in a project.
-* For the Jira user account, you have:
- * An Atlassian site URL. If you access Jira at the URL `https://.atlassian.net/jira`, then your site URL is `.atlassian.net`.
- * An API token.
- For details on how to get an API token, see [Manage API tokens for your Atlassian account](https://support.atlassian.com/atlassian-account/docs/manage-api-tokens-for-your-atlassian-account/).
-
-1. In Manager UI, click the user icon, and then select **Settings**.
-1. Select the **Jira** tab, complete the fields with information about your Jira user account, and then click **Save**.
-1. Select the Jira project you want Calico Cloud to create issues for, and then click **Save**.
-
-## Create a Jira issue for a scanned image
-
-You can create and assign a Jira issue directly from the scan results information page for an image.
-
-1. From the Manager UI, click **Image Assurance > All Scanned Images**.
-1. Click an item in the list of scanned images to open a detailed view of the vulnerabilities in that image.
-1. In the **JIRA** section, click **Add Ticket**.
-1. In the **Add Jira issue** dialog, complete the fields and click **Create Jira issue**.
- A link to the new Jira issue will be added to the detailed view page.
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/image-assurance/exclude-vulnerabilities-from-scan-results.mdx b/calico-cloud_versioned_docs/version-20-1/image-assurance/exclude-vulnerabilities-from-scan-results.mdx
deleted file mode 100644
index 0dc16902a0..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/image-assurance/exclude-vulnerabilities-from-scan-results.mdx
+++ /dev/null
@@ -1,81 +0,0 @@
----
-description: Reduce noise in your Image Assurance scan results by excluding vulnerabilities.
----
-
-import UploadIcon from '/img/icons/upload-icon.svg';
-
-# Exclude vulnerabilities from scan results
-
-This guide shows you how to exclude vulnerabilities from your image scan results.
-By excluding vulnerabilities that don't require action, such as false positives, you can reduce the volume of reported vulnerabilities that you need to deal with.
-You can set the exclusion for a particular vulnerability to apply to a particular image version, all images in a repository, or all images in the system.
-
-:::caution
-Excluding vulnerabilities from image results can significantly change how Image Assurance determines image health.
-You may need to take corrective action to stabilize your workflow.
-- **Maximum CVSS score:** An image’s maximum CVSS score may be reduced to a lower score.
-An exception could eliminate the vulnerability with the highest CVSS score for an image.
-- **Scan assessment:** The assessment value (Pass, Warn, or Fail) could change because it is based on the maximum CVSS score.
-For example, Fail could change to Warn, and Warn could change to Pass.
-- **Vulnerability global alerts:** Alerts may no longer be triggered.
-Alerts are triggered based on scan results or maximum CVSS score values of images.
-- **Admission controller policies:** Pods could be created where they were previously blocked.
-Admission controller policies are based on vulnerability information (scan result or maximum CVSS score).
-
-:::
-
-## Exclude a vulnerability from your scan results
-
-You can exclude a vulnerability from your scan results page.
-
-1. From the Calico Cloud Manager UI, go to **Image Assurance > All Scan Results**.
-1. Select an image from the list.
-1. On the scan results panel that appears, expand a package, and then click **Actions > Add Exception**.
-1. On the **Add exception** dialog, select a scope, enter an explanation for why you're creating an exception, and then click **Save**.
-
-## Exclude multiple vulnerabilities at the same time
-
-To exclude large numbers of vulnerabilities, you can organize the required information as a CSV file and import it directly to Calico Cloud.
-
-1. Optional: You can start with a preformatted list of all vulnerabilities in your cluster, and then edit the list to include only those vulnerabilities you want to exclude.
-1. From the Calico Cloud Manager UI, go to **Image Assurance > All Scan Results**.
-1. On the **All Scan Results** page, click the **Export** button, and then click **Export data**.
-1. On the **Export** dialog that appears, select the **CSV** data type and the **Export a row for each image and CVE IDs** option for the table style.
-Click **Export** to download the CSV file.
-1. Edit the CSV file so that it includes only the vulnerabilities you want to exclude.
-
-1. Prepare the CSV file with information about the vulnerabilities you want to exclude.
-The file must contain the following information:
-
-| Column header | Description | Example |
- | -- | -- | -- |
-| `CVE` | The CVE identifier. Required by the `any`, `repo`, and `image` scopes. | `CVE-2024-1234` |
-| `Registry`| The URL path to the container registry. Required by the `repo` and `image` scopes. | `mycontainerregistry.io/my-org` |
-| `Repository`| The container image name. Required by the `repo` and `image` scopes.| `my-application` |
-| `Tags`| A JSON list of image tags. Required by the `image` scope. | `"[""v1.2.3"",""v2.3.4""]"` |
-| `Justification` | Your explanation for why you're excluding this vulnerability. Required by the `any`, `repo`, and `image` scopes. | `This one is a false positive.` |
-| `Scope` | Determines whether the vulnerability exception applies only to specific tagged images, to any image in a specific repository, or to any image where the vulnerability is found. One of the following values is required: • `any`: The exception applies to all images. • `repo`: The exception applies to all versions of an image in a repository. • `image`: The exception applies to a specific, tagged version of an image. | `image` |
-
-:::tip
-An exported vulnerability list (see step 1) includes many more columns than what is required to import vulnerability exceptions in bulk.
-You do not need to remove or reorganize the extraneous information, but you do need to add two new columns for `Justification` and `Scope`.
-:::
-
-Example: a CSV list of vulnerability exclusion definitions
-
-``` title="example-vulnerability-exclusions.csv"
-CVE,Registry,Repository,Tags,Justification,Scope
-CVE-2024-1234,mycontainerregistry.io/my-org,my-application,"[""v3.4.5""]",justification,image
-CVE-2024-2345,mycontainerregistry.io/my-org,my-application,"[""v1.2.3"",""v2.3.4""]",justification,image
-CVE-2024-3456,mycontainerregistry.io/my-org,my-application,,justification,repo
-CVE-2024-4567,,,,justification,any
-```
-
-1. In the Calico Cloud Manager UI, go to **Image Assurance > Vulnerability Exceptions**, and then click **Upload exceptions**.
-1. Select the CSV file you created, and then click **Upload file**.
-After the data is validated, you'll see a summary of the exceptions.
-If there are errors, modify your CSV file and repeat steps 1 and 2.
-1. Review the processed exceptions on the summary page, and then click **Create exceptions**.
-The new exceptions are listed on the **Vulnerability Exceptions** page.
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/image-assurance/index.mdx b/calico-cloud_versioned_docs/version-20-1/image-assurance/index.mdx
deleted file mode 100644
index b96c33b9a4..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/image-assurance/index.mdx
+++ /dev/null
@@ -1,28 +0,0 @@
----
-description: Detect and block vulnerable images from container workloads.
-hide_table_of_contents: true
----
-
-import { DocCardLink, DocCardLinkLayout } from '/src/___new___/components';
-
-# Image Assurance
-
-Detect and block vulnerable images from container workloads.
-
-## Scanning images for vulnerabilities
-
-
-
-
-
-
-
-
-## Working with scan results
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/image-assurance/install-the-admission-controller.mdx b/calico-cloud_versioned_docs/version-20-1/image-assurance/install-the-admission-controller.mdx
deleted file mode 100644
index 2dec4b5cc3..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/image-assurance/install-the-admission-controller.mdx
+++ /dev/null
@@ -1,172 +0,0 @@
----
-description: Block vulnerable containers from being deployed into your cluster using the Admission Controller.
----
-
-# Create policy to block vulnerable images from your cluster
-
-:::note
-
-This feature is tech preview. Tech preview features may be subject to significant changes before they become GA.
-
-:::
-
-## Big picture
-
-Protecting your cluster from vulnerable images can be very difficult. An image that appears to be secure today could
-contain a newly-discovered vulnerability tomorrow, and acting on this new information in real time can be challenging.
-
-$[prodname]’s Image Assurance Admission Controller automatically blocks resources that would create containers with vulnerable images from entering your cluster using the latest vulnerability data and scan results.
-
-## Concepts
-
-### About the $[prodname] Admission Controller
-
-$[prodname] uses [Kubernetes Validating Webhook Configuration](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) to register the $[prodname] Admission Controller as a callback to accept or reject resources that create pods (such as deployments and daemonsets).
-
-### How the Admission Controller evaluates admission requests
-
-The $[prodname] Admission Controller blocks the creation or modification of resources that create pods. If a resource that creates pods is admitted by the Admission Controller, _the pods it creates are not reevaluated_. For example, if you create a deployment, the Admission Controller receives an admission request and either allows or rejects the request. If allowed, the Admission Controller will not evaluate the request for the replica set that Kubernetes creates for the deployment or the pod that is created for the replica set. Why? Because it could destabilize a production cluster if new vulnerabilities are found for deployed images and pods are restarted.
-
-However, if you _create a pod directly_, the Admission Controller evaluates the admission request for the pod and allows or rejects the request.
-
-### About container admission policies
-
-Container admission policies are custom Kubernetes resources that allow you to configure the criteria for the Admission Controller to reject admission requests for resources that create pods.
-
-## How to
-
-- [Install the Admission Controller](#install-the-admission-controller)
-- [Create container admission policies](#create-container-admission-policies)
-- [Troubleshoot](#troubleshoot)
-
-### Install the Admission Controller
-
-1. Create a directory to download the manifests/scripts needed for the installation.
-
- ```bash
- mkdir admission-controller-install && cd admission-controller-install
- ```
-
-1. Generate the certificate key pair.
-
- For secure TLS communication between the Kubernetes API server and the Admission controller, you must generate a TLS
- certificate and key pair.
-
- You can either generate the TLS key and certificate yourself and move them to the current folder under the names
- `admission_controller_key.pem` and `admission_controller_cert.pem`, or use the following command to generate the pair:
-
- ```bash
- export URL="$[clouddownloadurl]/manifests" && curl ${URL}/generate-open-ssl-key-cert-pair.sh | bash
- ```
-
- :::caution
-
- If you generate the key and certificate pair yourself, you must set the SANS to `tigera-image-assurance-admission-controller-service.tigera-image-assurance.svc`.
-
- :::
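-
- If you generate the pair yourself, here is a minimal `openssl` sketch (requires OpenSSL 1.1.1+ for `-addext`); the file names and the SAN match the requirements above:
-
- ```bash
- openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
-   -keyout admission_controller_key.pem \
-   -out admission_controller_cert.pem \
-   -subj "/CN=tigera-image-assurance-admission-controller-service.tigera-image-assurance.svc" \
-   -addext "subjectAltName=DNS:tigera-image-assurance-admission-controller-service.tigera-image-assurance.svc"
- ```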
-
-1. Download and configure the Admission Controller manifests.
-
-As a safety mechanism, we require that you specify the namespaces that the Admission Controller applies to, using
-the `IN_NAMESPACE_SELECTOR_KEY` and `IN_NAMESPACE_SELECTOR_VALUES` variables. These values configure the Kubernetes API server to send admission requests to the Admission Controller only for resources in the relevant namespaces.
-
-For example, to configure the Kubernetes API server to send admission requests for
-resources created in any namespace with label key `name`, and label values either `prod` or `staging-test`, set the variables as follows:
-
- ```bash
- export IN_NAMESPACE_SELECTOR_KEY="name" && export IN_NAMESPACE_SELECTOR_VALUES="prod staging-test"
- ```
-Here is a namespace manifest with the `name` label key and the `staging-test` label value.
-
- ```yaml
- kind: Namespace
- apiVersion: v1
- metadata:
- name: staging-test
- labels:
- name: staging-test
- ```
-:::caution
-
-Do not add Kubernetes critical namespaces such as the `kube-system` namespace. This could create a
-deadlock situation where you cannot bring up the Admission Controller because a critical Kubernetes pod is not running, but you also cannot bring up the critical Kubernetes pod because the Admission Controller is not running.
-:::
-
- ```bash
- export IN_NAMESPACE_SELECTOR_KEY="name" && \
- export IN_NAMESPACE_SELECTOR_VALUES="prod staging-test" && \
- curl ${URL}/install-ia-admission-controller.sh | bash
- ```
-
-4. Apply the Admission Controller manifests.
-
- ```bash
- kubectl apply -f ./tigera-image-assurance-admission-controller-deploy.yaml
- ```
-
-## Create container admission policies
-
-Container admission policies are used to define criteria for the Admission Controller to admit or reject admission for resources that create pods. For details, see [ContainerAdmissionPolicies](../reference/resources/containeradmissionpolicy.mdx).
-
-**Sample container admission policies**
-
-This ContainerAdmissionPolicy allows admission requests for pod-creating resources whose image is in the registry/repository `gcr.io/company/production-repository/*`, with a scan status of either `Pass` or `Warn`, and rejects all other admission requests.
-
-```yaml
-apiVersion: containersecurity.tigera.io/v1beta1
-kind: ContainerAdmissionPolicy
-metadata:
- name: reject-failed-and-non-gcr
-spec:
- selector: all()
- namespaceSelector: all()
- order: 10
- rules:
- - action: 'Reject'
- imagePath:
- operator: IsNoneOf
- values:
- - '^gcr.io/company/production-repository/.*'
- - action: Allow
- imageScanStatus:
- operator: IsOneOf
- values:
- - Pass
- - Warn
- - action: Reject
-```
-
-The following ContainerAdmissionPolicy rejects deploying or updating pod-creating resources with the label `reject-policy: reject-outdated-scans`,
-from any namespace matching `apply-container-policies == 'true'`, if they would deploy an image that hasn't been scanned in the last 3 days.
-
-```yaml
-apiVersion: containersecurity.tigera.io/v1beta1
-kind: ContainerAdmissionPolicy
-metadata:
- name: reject-outdated-scans
-spec:
- selector: "reject-policy == 'reject-outdated-scans'"
- namespaceSelector: "apply-container-policies == 'true'"
- order: 1
- rules:
- - action: Allow
- imageLastScan:
- operator: 'gt'
- duration:
- days: 3
- - action: Reject
-```
-
-The first rule (Allow) allows images based on the age of the image scan (in days). In this example, we want to allow
-images that have been scanned within the last three days. So we use the `gt` (greater than) operator, along with a duration of
-3 days, to say the image scan time must be more recent than 3 days ago. "Now" is defined as when the
-attempt was made to create the Kubernetes resource. If the Allow rule does not match, the second rule
-(Reject) is evaluated, which denies everything.
-
-You can also modify the Allow rule to match an absolute time. For help, see [ContainerAdmissionPolicies](../reference/resources/containeradmissionpolicy.mdx).
-
-## Troubleshoot
-
-**My container admission policy is not blocking resources from a namespace, even though the namespaceSelector matches the namespace**
-
-This indicates that the Kubernetes API server is not sending admission requests for the namespace. Verify that the key and value(s) that you specified for `IN_NAMESPACE_SELECTOR_KEY` and `IN_NAMESPACE_SELECTOR_VALUES` in the installation steps match the labels on the namespaces that the policy targets.
diff --git a/calico-cloud_versioned_docs/version-20-1/image-assurance/scanners/cluster-scanner.mdx b/calico-cloud_versioned_docs/version-20-1/image-assurance/scanners/cluster-scanner.mdx
deleted file mode 100644
index 8342d5c130..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/image-assurance/scanners/cluster-scanner.mdx
+++ /dev/null
@@ -1,122 +0,0 @@
----
-description: Detect vulnerabilities in a Kubernetes cluster.
----
-
-# Scan images in a Kubernetes cluster
-
-Scan all images in a Kubernetes cluster for vulnerabilities to achieve a continuous clean bill of health and defense in depth.
-
-Common use cases for scanning in a Kubernetes cluster are:
-
-- Images may pass scanning during the build phase, but they could contain vulnerabilities days or weeks later
-- Third-party images that are pulled from public registries are often not scanned in build pipelines and can contain Critical or High vulnerabilities
-- Application teams may build one-off images outside of their pipeline to make an emergency patch and fix a critical bug.
-
-## About Image Assurance scanner
-
-The Image Assurance scanner that runs in a Kubernetes cluster is out-of-the-box ready to use without configuration. It runs as a daemonset in a managed cluster where images are located, and is installed on all nodes in the cluster.
-
-Vulnerability detection consists of these steps:
-
-- **Image Assurance scanner** - generates a software bill of materials (SBOM) of image dependencies using Syft
-- **Vulnerability lookup** - $[prodname] uploads the SBOM, and the listed packages are matched against known CVEs in the vulnerability databases using Grype
-
-![vulnerability-detection](/img/calico-cloud/vulnerability-detection.png)
-
-$[prodname] checks running images for new vulnerabilities every 24 hours and reports scan results to the Image dashboard in Manager UI.
-
-## Before you begin
-
-**Unsupported**
-- OpenShift
-- GCP-Kubeadm
-- AWS-Kubeadm
-- GKE
-
-**Cluster requirements**
-- Containerd is the container runtime
-- AKS clusters: if you are using Kubernetes v1.19 or higher, containerd should be your default runtime
-- Containerd must be using the overlayfs or native file system snapshotter
-
-## How to
-- [Get latest version of Image Assurance](#get-latest-version-of-image-assurance)
-- [Enable scanner](#enable-scanner)
-- [Customize scanner settings](#customize-scanner-settings)
-- [Disable scanner](#disable-scanner)
-
-### Get latest version of Image Assurance
-
-1. On the **Managed Clusters** page, select the cluster from the list, and click **Reinstall**.
-1. Copy the updated installation script command and run it against your cluster.
-
-### Enable scanner
-
-Complete the following steps for each managed cluster you want enabled with the cluster scanner:
-
-1. Modify the [Image Assurance](../../reference/installation/ia-api.mdx#image-assurance.operator.tigera.io/v1.ImageAssuranceSpec) installation resource.
-
- ```bash
- kubectl edit imageassurance default
- ```
-
-2. Set the `clusterScanner` field to `Enabled` and save the file.
-
-The cluster scanner is deployed as a container inside the `tigera-image-assurance-crawdad` daemonset.
-
-3. Verify that a new container named `cluster-scanner` is created inside the daemonset.
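-
-For example, a sketch of listing the containers in that daemonset (assuming it runs in the `tigera-image-assurance` namespace):
-
-```bash
-kubectl get daemonset tigera-image-assurance-crawdad -n tigera-image-assurance \
-  -o jsonpath='{.spec.template.spec.containers[*].name}'
-```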
-
-That’s it. The cluster scanner will start scanning images on running pods in the cluster. For help viewing image events in Manager UI, see [View scanned and running images](../understanding-scan-results).
-
-### Customize scanner settings
-
-To change default settings, modify the [Image Assurance](../../reference/installation/ia-api.mdx#image-assurance.operator.tigera.io/v1.ImageAssuranceSpec) installation resource.
-
-- Container runtime socket path
-
- Set the `criSocketPath` field to the path of the container runtime socket. Default: `/run/containerd/containerd.sock`
-
-- Containerd file system root path
-
- Set the `containerdVolumeMountPath`. Default: `/var/lib/containerd/`.
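-
-For example, a hedged sketch that sets both fields in one step (the values shown are the defaults named above):
-
-```bash
-kubectl patch imageassurance default --type merge -p '{
-  "spec": {
-    "criSocketPath": "/run/containerd/containerd.sock",
-    "containerdVolumeMountPath": "/var/lib/containerd/"
-  }
-}'
-```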
-
-### Configure exclusions for image scanning
-
-To specify which namespaces should be excluded from future scans, follow these steps.
-
-- Modify your [Image Assurance](../../reference/installation/ia-api.mdx#image-assurance.operator.tigera.io/v1.ImageAssuranceSpec) installation resource to include the `exclusions.namespaces` field. List each namespace you want to exclude.
-
-
-```yaml
-apiVersion: image-assurance.operator.tigera.io/v1
-kind: ImageAssurance
-metadata:
- name: default
-spec:
- clusterScanner: Enabled
- exclusions:
- namespaces:
- - "kube-system"
- - "dev-qa"
-```
-
-In this example, the workloads in the `kube-system` and `dev-qa` namespaces are excluded from future image scans.
-
-:::note
-
-Applying or updating namespace exclusions affects only future scans. Results from scans conducted prior to these exclusions will remain unchanged. The exclusions configured will apply to both cluster scanner and runtime view (**Running Images** tab).
-
-:::
-
-### Disable scanner
-
-1. Modify the `imageassurance` installation resource.
-
-```bash
- kubectl edit imageassurance default
- ```
-
-2. Set the `clusterScanner` field to `Disabled` and save the file. This deletes the cluster scanner container from the daemonset from your cluster.
-
-## Next step
-
-[Set up alerts on vulnerabilities ](../set-up-alerts)
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/image-assurance/scanners/index.mdx b/calico-cloud_versioned_docs/version-20-1/image-assurance/scanners/index.mdx
deleted file mode 100644
index d1291a9be4..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/image-assurance/scanners/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Scan images and workloads for vulnerabilities.
-hide_table_of_contents: true
----
-
-# Scanners
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/image-assurance/scanners/overview.mdx b/calico-cloud_versioned_docs/version-20-1/image-assurance/scanners/overview.mdx
deleted file mode 100644
index 19eb7bdc4a..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/image-assurance/scanners/overview.mdx
+++ /dev/null
@@ -1,40 +0,0 @@
----
-description: Choose a method to scan images for vulnerabilities.
----
-
-# Choose an image scanning method
-
-## Big picture
-
-Scan images and Kubernetes workloads for vulnerabilities using $[prodname] Image Assurance.
-
-## Value
-
-$[prodname] Image Assurance helps you identify vulnerabilities in container images that you deploy to Kubernetes clusters. Vulnerabilities are known flaws in libraries and packages used by applications that attackers can exploit to cause harm. With Image Assurance you can:
-
-- Scan an image for vulnerabilities
-- Assess the impact of newly-found vulnerabilities and prioritize remediation efforts
-- Catch vulnerabilities days or weeks later with continuous image rescanning
-- Create exceptions to ignore specific vulnerabilities
-- Create alerts on high-severity vulnerabilities so you can delegate remediation efforts to the appropriate team
-- Block non-compliant workloads using policy as part of your cloud-native security posture
-
-## About Image Assurance
-
-Image Assurance is based on the Common Vulnerabilities and Exposures (CVE) system, which provides a catalog of publicly-known security vulnerabilities and exposures. Known vulnerabilities are identified by a unique CVE ID based on the year it was reported (for example, CVE-2021-44228).
-
-Scanned image content includes:
-
-- Libraries and content (for example, python, ruby gems, jars and go)
-- Packages (OS and non-OS)
-- Image layer
-
-## Image scanning options
-
-Image Assurance provides different versions of the scanner to accommodate different use cases as shown in the following table.
-
-| Scan images in... | Description | Scanner access | Benefits |
-| --------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
-| [Kubernetes cluster](./cluster-scanner.mdx) | Scan any running image in the Kubernetes cluster including locally-built first-party images to fix critical bugs. | Runs automatically in the managed cluster in Manager UI | The Image Assurance dashboard provides an easy way to get started with vulnerability scanning and remediation, and defense-in-depth coverage without building your own scanning solution. |
-| [CI/CD pipeline](./pipeline-scanner.mdx) | Integrate the CLI scanner in your application build pipeline and private registries, including:<br/>- Customer-built images<br/>- Local images<br/>- Third-party images from public registries (for example Kafka, Redis) | A downloadable binary | Incorporate the scanner as a lightweight runner in your build pipeline. Use the scanner offline and on-demand for ad hoc scanning and emergency patching. |
-| [Image registries](./registry-scanner.mdx) | Scan images in registries (for example, Amazon ECR). | A downloadable Docker image | Add a layer of defense for images that were not scanned in your build pipeline, but get published to your registry. |
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/image-assurance/scanners/pipeline-scanner.mdx b/calico-cloud_versioned_docs/version-20-1/image-assurance/scanners/pipeline-scanner.mdx
deleted file mode 100644
index 8bd163f3b8..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/image-assurance/scanners/pipeline-scanner.mdx
+++ /dev/null
@@ -1,179 +0,0 @@
----
-description: Scan images in your build pipeline using Image Assurance.
----
-
-# Integrate the scanner into your build pipeline
-
-## Big picture
-
-Integrate the Image Assurance CLI scanner into your build pipeline to ensure builds are checked by Image Assurance before deployment.
-
-## Value
-
-The Image Assurance CLI scanner allows you to manually scan container images locally or remotely for on-demand scanning and emergency scanning. The CLI scanner is ideal for use in a CI/CD pipeline to automatically scan images before pushing them to a registry.
-
-If the CLI scanner is part of your pipeline, scanning is done before runtime and the results are displayed in the Image Assurance dashboard in Manager UI. You can then use the [Image Assurance Admission Controller](../install-the-admission-controller) to automatically block resources that would create containers with vulnerable images from entering your cluster. For a real-world use case, see [Hands-on guide: How to scan and block container images to mitigate SBOM attacks](https://www.tigera.io/blog/hands-on-guide-how-to-scan-and-block-container-images-to-mitigate-sbom-attacks/).
-
-## Before you begin
-
-**Image requirements**
-
-- Docker container runtime
-- Images must be available locally through the Docker container runtime environment where the Image Assurance scanner is running.
-
-**Scanner requirements**
-
-- Must have internet access to download and update the vulnerability database
-- To see image scan results in Manager UI, the scanner must communicate with an external API endpoint outside your environment
-
-## How to
-
-- [Get the latest version of Image Assurance](#get-the-latest-version-of-image-assurance)
-- [Start the scanner](#start-the-scanner)
-- [Integrate the scanner in your build pipeline](#integrate-the-scanner-in-your-build-pipeline)
-- [Manually scan images](#manually-scan-images)
-- [Scan images using a configuration file](#scan-images-using-a-configuration-file)
-
-### Get the latest version of Image Assurance
-
-1. On the **Managed Clusters** page, select the cluster from the list, and click **Reinstall**.
-1. Copy the updated installation script command and run it against your cluster.
-
-### Start the scanner
-
-
-
-1. Download the latest version of the scanner.
-
- **Linux**
-
- ```shell
- curl -Lo tigera-scanner $[clouddownloadbase]/tigera-scanner/$[cloudversion]/image-assurance-scanner-cli-linux-amd64
- ```
-
- **macOS**
-
- ```shell
- curl -Lo tigera-scanner $[clouddownloadbase]/tigera-scanner/$[cloudversion]/image-assurance-scanner-cli-darwin-amd64
- ```
-
-2. Set the executable flag on the binary.
-
- ```shell
- chmod +x ./tigera-scanner
- ```
-
-:::note
-You must download and set the executable flag each time you get a new version of the scanner.
-
-:::
-
-3. Verify that the scanner works correctly by running the version command.
-
- ```shell
- ./tigera-scanner version
- $[imageassuranceversion]
- ```
-
-### Integrate the scanner into your build pipeline
-
-You can include the CLI scanner in your CI/CD pipelines (for example, Jenkins, GitHub Actions). Ensure the following:
-
-- Download the CLI scanner binary onto your CI runner
-- If you are running an ephemeral environment in the pipeline, include the download and executable-flag steps so the scanner is downloaded and made executable on every execution
-- Create a secret containing the API token and API URL, and make it available in the pipeline (for example, through `SECURE_API_URL` and `SECURE_API_TOKEN` environment variables)
-- Add a step in your pipeline to run the scanner after building the container image, and specify the image name as a parameter. For example:
-  `./tigera-scanner scan ${IMAGE_NAME} --apiurl ${SECURE_API_URL} --token ${SECURE_API_TOKEN}`
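-
-As a hedged illustration, a job in a GitHub Actions workflow (under `jobs:`) could look like the following sketch. The job name, image tag, and secret names (`SECURE_API_URL`, `SECURE_API_TOKEN`) are assumptions for the example, not required values:
-
-```yaml
-  scan-image:
-    runs-on: ubuntu-latest
-    steps:
-      - name: Download the CLI scanner
-        run: |
-          curl -Lo tigera-scanner $[clouddownloadbase]/tigera-scanner/$[cloudversion]/image-assurance-scanner-cli-linux-amd64
-          chmod +x ./tigera-scanner
-      - name: Scan the image built earlier in the pipeline
-        run: ./tigera-scanner scan "${IMAGE_NAME}" --apiurl "${SECURE_API_URL}" --token "${SECURE_API_TOKEN}"
-        env:
-          IMAGE_NAME: my-registry/my-app:${{ github.sha }}
-          SECURE_API_URL: ${{ secrets.SECURE_API_URL }}
-          SECURE_API_TOKEN: ${{ secrets.SECURE_API_TOKEN }}
-```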
-
-If your CI platform supports it, you can also use the containerized version of Image Assurance scanner for integrations with other tools like Harness. To integrate the containerized version of Image Assurance scanner into your CI/CD platform, go to: [Image Assurance containerized scanner](https://quay.io/repository/tigera/image-assurance-scanner-cli) and pull the latest image. For example:
-
-```bash
- docker pull quay.io/tigera/image-assurance-scanner-cli:vx.x.x
-```
-
-### Manually scan images
-
-You can scan images and report results back to $[prodname], or scan images locally without reporting results to $[prodname].
-
-**Syntax**:
-
-`tigera-scanner scan [OPTIONS] <image_name>`
-
-**Options**:
-
-- `--apiurl` - $[prodname] API URL path. You can get this URL in Manager UI, **Image Assurance**, **Scan settings**.
-- `--token` - secure API or authorization token to make requests to $[prodname] API URL. You can get this token in Manager UI, **Image Assurance**, **Scan settings**.
-- `--warn_threshold` - CVSS threshold for Warn scan results. Range from 0.0 - 10.0.
-- `--fail_threshold` - CVSS threshold for Fail scan results. Range from 0.0 - 10.0.
-- `--vulnerability_db_path` - path to a folder to store vulnerability data (defaults to `$XDG_CACHE_HOME`; if it is not set, defaults to `$HOME/.cache`).
-- `--input_file <file path>` - Path to a JSON file containing image URLs.
-- `--output_file <file path>` - File path that will contain scan results in JSON format.
-
-**Examples**
-
-**Scan an image, report results**
-
-```shell
-./tigera-scanner scan ubuntu:latest --apiurl https://<your-instance-name>.calicocloud.io --token ezBhbGcetc...
-```
-
-**Scan an image locally, do not report results**
-
-```shell
-./tigera-scanner scan ubuntu:latest
-```
-
-**Scan an image with a failure and warning threshold**
-
-```shell
-./tigera-scanner scan ubuntu:latest --fail_threshold 7.0 --warn_threshold 3.9
-```
-
-**Scan multiple images locally, do not report results**
-
-```shell
-./tigera-scanner scan ubuntu:latest alpine:latest
-```
-
-**Scan multiple images using an input and output file**
-
-The input file must have the following JSON structure:
-
-```json
-{
- "images": [
- "ubuntu:latest",
- "alpine:latest"
- ]
-}
-```
-
-```shell
-./tigera-scanner scan --input_file images.json --output_file results.json
-```
-
-### Scan images using a configuration file
-
-Create a configuration file in `$HOME/.tigera-scanner.yaml` for the scanner to read.
-
-:::note
-
-Key names must match the full name of arguments passed to the scanner. The configuration precedence order is options > environment variables > file configuration.
-
-:::
-
-**Options**
-
-| Options | Shorthand | Environment variable | Description |
-| ----------------------- | --------- | ------------------------ | --------------------------------------------------------------------------------------------------------------------------- |
-| --apiurl | -a | CC_API_URL | $[prodname] API URL path. You can get this URL in Manager UI, Image Assurance, Scan settings. |
-| --token | -t | CC_TOKEN | Secure API or authorization token to make requests to $[prodname] API URL. |
-| --warn_threshold | -w | CC_WARN_THRESHOLD | CVSS threshold for Warn scan results. Range from 0.0 - 10.0. |
-| --fail_threshold | -f | CC_FAIL_THRESHOLD | CVSS threshold for Fail scan results. Range from 0.0 - 10.0. |
-| --vulnerability_db_path | -p | CC_VULNERABILITY_DB_PATH | Path to a folder to store vulnerability data (defaults to `$XDG_CACHE_HOME`; if it is not set, defaults to `$HOME/.cache`). |
-| --input_file | -i | CC_INPUT_FILE | Path to the JSON file containing image URLs. |
-| --output_file | -o | CC_OUTPUT_FILE | File path that will contain scan results in a JSON format. |
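-
-Here is a minimal example of what `$HOME/.tigera-scanner.yaml` could look like, assuming the keys mirror the long option names listed above (the URL, token, and cache path values are placeholders):
-
-```yaml
-apiurl: https://<your-instance-name>.calicocloud.io
-token: <your-secure-api-token>
-warn_threshold: 3.9
-fail_threshold: 7.0
-vulnerability_db_path: /tmp/tigera-scanner-cache
-```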
-
-## Next step
-
-[Set up alerts](../set-up-alerts)
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/image-assurance/scanners/registry-scanner.mdx b/calico-cloud_versioned_docs/version-20-1/image-assurance/scanners/registry-scanner.mdx
deleted file mode 100644
index 58c3301071..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/image-assurance/scanners/registry-scanner.mdx
+++ /dev/null
@@ -1,161 +0,0 @@
----
-description: Scan images in container registries.
----
-
-# Scan images in container registries
-
-Scan images in container registries at any time, on any infrastructure, including Kubernetes.
-
-## Value
-
-Add a layer of defense for images that don’t go through a pipeline (for example, third-party images), but are published to a registry. If CVEs are missed in your build pipeline, you can catch them before they are deployed.
-
-## Concepts
-
-You can run the registry scanner wherever there is a container runtime; it doesn’t have to be in a Kubernetes cluster.
-
-To use the registry scanner, all you need to do is:
-- Specify the registry paths to the images you want to scan
-- Provide permissions for the scanner to access your registries
-- Get a token for access to the Image Assurance API
-
-Based on the paths you specify, the scanner recursively scans all images in the registry once, and sends results to the Image Assurance dashboard in Manager UI.
-
-To deploy the registry scanner as a pod in a Kubernetes cluster, we recommend that you define a Kubernetes Job or CronJob.
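-
-For example, here is a minimal CronJob sketch. The schedule, secret name, and registry value are assumptions for illustration; the environment variables match those described in the setup section below, and registry credentials (Docker, Azure, or AWS) would be added as additional environment variables:
-
-```yaml
-apiVersion: batch/v1
-kind: CronJob
-metadata:
-  name: image-assurance-registry-scanner
-spec:
-  schedule: '0 2 * * *' # run once a day at 02:00
-  jobTemplate:
-    spec:
-      template:
-        spec:
-          restartPolicy: Never
-          containers:
-            - name: registry-scanner
-              image: quay.io/tigera/image-assurance-registry-scanner:$[imageassuranceversion]
-              env:
-                - name: REGISTRY
-                  value: gcr.io
-                - name: IMAGE_ASSURANCE_API_URL
-                  valueFrom:
-                    secretKeyRef:
-                      name: registry-scanner-secrets
-                      key: apiUrl
-                - name: IMAGE_ASSURANCE_API_TOKEN
-                  valueFrom:
-                    secretKeyRef:
-                      name: registry-scanner-secrets
-                      key: apiToken
-```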
-
-## Before you begin
-
-**Required**
-
-- Registry scanner is running where there is a container runtime
-- Valid registry credentials
-
-**Supported registry platforms**
-
-- Amazon Elastic Container Registry (ECR)
-- Azure Container Registry (ACR)
-- Google Container Registry (GCR)
-
-**Limitations**
-
-- The registry scanner is available as an image (not using Tigera operator installation)
-- If you use the registry scanner with Docker, only tagged images are scanned. However, if you use the scanner with Amazon or Azure, all images (tagged and untagged) are scanned.
-- You can scan images from only one supported registry platform and account per scanner instance.
-
-## How to
-
-- [Download the registry scanner](#download-the-registry-scanner)
-- [Set up authentication to registry scanner](#set-up-authentication-to-registry-scanner)
-- [Set up registry scanner](#set-up-registry-scanner)
-- [Scan images and send results to Calico Cloud](#scan-images-and-send-results-to-calico-cloud)
-- [Troubleshoot](#troubleshoot)
-- [Get previous registry scanner versions](#get-previous-registry-scanner-versions)
-
-### Download the registry scanner
-
-The registry scanner comes in a Docker image. To get the image, run this command: `docker pull quay.io/tigera/image-assurance-registry-scanner:$[imageassuranceversion]`.
-
-### Set up authentication to registry scanner
-
-The registry scanner requires authentication to access a registry. Set up credentials for one of the following registry platforms:
-
-- **Docker/Google** - scans only tagged images; untagged images residing in your image registry are not pulled and scanned.
-- **Azure or AWS** - scans tagged and untagged images (all)
-
-#### Docker/GCR required credentials
-
-- `DOCKER_USERNAME`
-- `DOCKER_PASSWORD`
-
-If you have a valid `~/.docker/config.json`, you can also mount this config file into the container when running the registry scanner.
-
-```bash
-docker run -e ... -v ~/.docker/config.json:/.docker/config.json quay.io/tigera/image-assurance-registry-scanner:$[imageassuranceversion]
-```
-#### Azure required credentials
-
-Registry instances are scanned one at a time. If Docker credentials are found, they are ignored.
-
-- `AZURE_CLIENT_ID`
-- `AZURE_CLIENT_SECRET` or `AZURE_FEDERATED_TOKEN`
-- `AZURE_TENANT_ID`
-
-#### AWS required credentials
-
-Registry instances are scanned one at a time. If Docker credentials are found, they are ignored.
-
-- `AWS_ACCESS_KEY_ID`
-- `AWS_SECRET_ACCESS_KEY`
-- `AWS_REGION`
-
-### Set up registry scanner
-
-**Required**
-
-- `REGISTRY` - the registry you want to scan. For example, gcr.io.
-- `IMAGE_ASSURANCE_API_URL` - Get the URL in the Manager UI
-- `IMAGE_ASSURANCE_API_TOKEN` - Get the token in the Manager UI
-- Registry credentials: Docker/gcr, acr, or ecr
-
-**Optional**
-
-`REGISTRY_FILTER` - limits scanning time when you have thousands of repositories and images. Supports a comma-separated list.
-
-Example: gcr registry
-
-```bash
-gcr.io/prod-env/api
-gcr.io/staging-env/api
-gcr.io/dev/api
-```
-To exclude images in the `dev` "sub" registry:
-
-`-e REGISTRY_FILTER=prod-env,staging-env`
-
-### Scan images and send results to $[prodname]
-
-Example: gcr registry with Docker credentials
-
-```bash
-docker run -e REGISTRY=gcr.io -e IMAGE_ASSURANCE_API_URL=https://<your-instance-name>-management.dev.calicocloud.io/bast -e IMAGE_ASSURANCE_API_TOKEN=$TOKEN -e DOCKER_USERNAME=<your-username> -e DOCKER_PASSWORD=<your-password> quay.io/tigera/image-assurance-registry-scanner:$[imageassuranceversion]
-```
-
-Example: acr registry with Azure credentials
-
-```bash
-docker run -e REGISTRY=your-org.azurecr.io -e IMAGE_ASSURANCE_API_URL=https://<your-instance-name>-management.dev.calicocloud.io/bast -e IMAGE_ASSURANCE_API_TOKEN=$TOKEN -e AZURE_CLIENT_ID=<your-client-id> -e AZURE_CLIENT_SECRET=<your-client-secret> -e AZURE_TENANT_ID=<your-tenant-id> quay.io/tigera/image-assurance-registry-scanner:$[imageassuranceversion]
-```
-
-Example: ecr registry with AWS credentials
-
-```bash
-docker run -e REGISTRY=<aws_account_id>.dkr.ecr.<region>.amazonaws.com -e IMAGE_ASSURANCE_API_URL=https://<your-instance-name>-management.dev.calicocloud.io/bast -e IMAGE_ASSURANCE_API_TOKEN=$TOKEN -e AWS_ACCESS_KEY_ID=<your-access-key-id> -e AWS_SECRET_ACCESS_KEY=<your-secret-access-key> -e AWS_REGION=<your-region> quay.io/tigera/image-assurance-registry-scanner:$[imageassuranceversion]
-```
-
-Example: run with the Docker config file mounted
-
-```bash
-docker run -e REGISTRY=gcr.io -e IMAGE_ASSURANCE_API_URL=https://<your-instance-name>-management.dev.calicocloud.io/bast -e IMAGE_ASSURANCE_API_TOKEN=$TOKEN -v ~/.docker/config.json:/.docker/config.json quay.io/tigera/image-assurance-registry-scanner:$[imageassuranceversion]
-```
-
-### Troubleshoot
-
-**Issues authenticating with registry scanner**
-
-Verify that the registry credentials are correct.
-
-**Scan results are not uploading to $[prodname]**
-
-Image scan results that are uploaded to $[prodname] through the registry scanner require additional processing before appearing in the Image Assurance dashboard, so there may be a delay before CVE results appear for those images in the UI. If results still do not appear, verify that the API token and URL are correct.
-
-**Scanned images do not match what I expect**
-
-Verify that the credentials on the registry side have the correct permission level.
-
-### Get previous registry scanner versions
-
-For previous versions of registry scanner, see [quay repository](https://quay.io/organization/tigera).
-
-## Next step
-
-- [Understand scan results in Manager UI](../understanding-scan-results)
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/image-assurance/set-up-alerts.mdx b/calico-cloud_versioned_docs/version-20-1/image-assurance/set-up-alerts.mdx
deleted file mode 100644
index 7ea5305db2..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/image-assurance/set-up-alerts.mdx
+++ /dev/null
@@ -1,88 +0,0 @@
----
-description: Get alerts on vulnerabilities.
----
-
-# Set up alerts on vulnerabilities
-
-## Big picture
-
-Create alerts on high-severity vulnerabilities so you can delegate remediation efforts to the appropriate team.
-
-## How to
-
-To create alerts, use the [Global alert resource](../reference/resources/globalalert.mdx).
-
-### Example 1 - Alert on a failed image
-
-In this example, an alert is created whenever there is more than one event for an image from the specified registry/repo that has a result value of Fail within the past 30 minutes.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalAlert
-metadata:
- name: example-1
-spec:
- summary: 'Vulnerabilities for a specific repo based on results'
- description: 'Vulnerabilities for a specific repo based on results'
- severity: 100
- period: 30m
- lookback: 30m
- dataSet: vulnerability
- query: registry="quay.io/tigera" AND repository="node" AND result="Fail"
- metric: count
- condition: gt
- threshold: 1
-```
-
-### Example 2 - Alert on a specific repo with maximum CVSS greater than 7.0
-
-In this example, an alert is created whenever there is at least one event for an image from the specified registry/repo with a maximum CVSS score greater than 7.0 within the past 30 minutes. Controlling the exact maximum CVSS score threshold lets you define a trigger that differs from the CVSS score threshold configured for the Fail scan result on the Scan Results page in Manager UI.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalAlert
-metadata:
- name: example-2
-spec:
- summary: 'Vulnerabilities for a specific repo based on max CVSS score'
- description: 'Vulnerabilities for a specific repo based on max CVSS score'
- severity: 100
- period: 30m
- lookback: 30m
- dataSet: vulnerability
- query: registry="quay.io/tigera" AND repository="node"
- field: max_cvss_score
- metric: max
- condition: gt
- threshold: 7.0
-```
-
-### Example 3 - Alert on a failed scan result within a pod in a namespace
-
-In this example, an alert is created whenever there is more than one event for an image that has a scan result of Fail and that is running within a pod in any cluster in the namespace `tigera-elasticsearch`, within the past 30 minutes.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalAlert
-metadata:
- name: example-3
-spec:
- summary: 'Vulnerabilities for namespace tigera-elasticsearch'
- description: 'Vulnerabilities for namespace tigera-elasticsearch'
- severity: 100
- period: 30m
- lookback: 30m
- dataSet: vulnerability
- query: result="Fail" AND namespace="tigera-elasticsearch"
- metric: count
- condition: gt
- threshold: 1
-```
-
-:::note
-
-You must apply global alerts across all applicable managed clusters. For example, if you have five clusters, you must apply the alerts five times.
-
-:::
-
-For a complete list of parameters, see [Global alert resource](../reference/resources/globalalert.mdx).
diff --git a/calico-cloud_versioned_docs/version-20-1/image-assurance/understanding-scan-results.mdx b/calico-cloud_versioned_docs/version-20-1/image-assurance/understanding-scan-results.mdx
deleted file mode 100644
index 21f6c42ba4..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/image-assurance/understanding-scan-results.mdx
+++ /dev/null
@@ -1,94 +0,0 @@
----
-description: Understand scan results in Manager UI.
----
-
-# View scanned and running images
-
-The Image Assurance dashboard in Manager UI provides lists of scanned images and running images.
-
-In the left navbar in Manager UI, click **Image Assurance**, **All Scanned Images**.
-
-## All Scanned Images tab
-
-This tab lists scanned images if you have enabled or used one of the [Image Assurance scanners](./scanners/overview).
-To manage your scan results, you can filter the list results or delete particular scan result items.
-
-## Running Images tab
-
-This tab lists active container images *for all connected managed clusters*. It provides the CVEs associated with running pods to help you assess pod vulnerability. This tab is disabled by default.
-
-To enable Running Images, click the **Settings** icon in the top right corner, then select **Enable Runtime View**.
-
-:::note
-
-If you are using the CLI scanner and your cluster does not use the default containerd socket path (`/run/containerd/containerd.sock`), you must change the path to allow the Running Images service to collect image information. To update the CRI socket path for a cluster, run the following command:
-
-```bash
-kubectl patch imageassurance default --type='merge' -p '{"spec":{"criSocketPath":"<your-cri-socket-path>"}}'
-```
-For details, see the [Image Assurance installation reference](../reference/installation/ia-api.mdx#image-assurance.operator.tigera.io/v1.ImageAssuranceSpec).
-
-:::
-
-Other notes:
-
-- The **Clusters** and **Running Instances** columns show the number of running instances in clusters that are connected to $[prodname].
-- The **Unknown** scan result filter reflects images that are not fully scanned. Because they can add noise to the table, they are disabled by default. To enable Unknown results for strategic troubleshooting, click the **Result** drop-down menu and select **Unknown**.
-- In the All Scanned Images and Running Images tabs, the **Registry path** field may be blank if $[prodname] cannot access this metadata. For example, images from Docker Hub do not specify the registry in the image metadata.
-- Any exceptions configured for image scanning will be applicable to Runtime View as well.
-
-## Image assessment: Pass, Warn, and Fail
-
-The Image Assurance image assessment is based on the [Common Vulnerability Scoring System v3 (CVSS Scores)](https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator). The following table shows how Image Assurance scores map to CVSS scores.
-
-| CVSS v3 scores                           | Image Assurance mapping | Default settings     |
-| ---------------------------------------- | ----------------------- | -------------------- |
-| 0.0 (None), 0.1 – 3.9 (Low)              | Pass = 3.9              | Low                  |
-| 4.0 – 6.9 (Medium)                       | Warn = 7                | Medium severity – 7  |
-| 7.0 – 8.9 (High), 9.0 – 10.0 (Critical)  | Fail = 8.0               | Critical or high – 8 |
-
-CVEs without a CVSS v3 score (either too old to have one or too new to be assigned one) display a blank score in the UI, and **N/A** in the CLI.
-
-### Changing the default CVSS threshold values
-
-The following default threshold values work for the majority of $[prodname] deployments. However, you may need to change the defaults to meet your security requirements.
-
-![scan-settings](/img/calico-cloud/scan-settings.png)
-
-To change the CVSS threshold values, note the following:
-
-- Changes to threshold values take effect immediately and alter the scan results for images already in the system
-- If you are using admission controller policies, changing a value may allow pods that were previously blocked to be deployed in your Kubernetes cluster (or vice versa).
-
-## Exploit Data
-
-The scanning process also attaches EPSS and known exploit data to each image and vulnerability, viewable through the UI.
-
-An EPSS score of 0.1 (that is, 10%) means a vulnerability has a 10% probability of being exploited in the wild within the next 30 days.
-Use this information alongside CVSS scores to prioritize remediation. For example, you may not have the time to remediate
-all critical vulnerabilities, but you can use the EPSS score to help prioritize. By additionally filtering with an EPSS score greater than 90%, you can target
-the critical vulnerabilities that are most likely to be exploited.
-
-Note that an EPSS score greater than 10% can already be considered high.
-Information about [The EPSS Model](https://www.first.org/epss/model) can be found on the EPSS website created by [FIRST](https://www.first.org/).
-
-Known exploits are based off of the [CISA KEV Catalog](https://www.cisa.gov/known-exploited-vulnerabilities-catalog), a list of
-vulnerabilities that have been exploited in the wild and maintained by CISA.
-
-## Export results
-
-From each tab, you can export data or a JSON file with image URLs. Exporting data is based on the images in the list and the current filter selections. CSV table options include:
-
-- **Export one row per image** - export one row for each image with all associated CVEs condensed into a single column.
-- **Export one row for each image and CVE ID** - export a unique image plus CVE combination for each row. For example, if an image has 10 CVEs, 10 rows are created (1 for each CVE).
-
-:::note
-
-Images without associated CVEs are not included in the exported data (regardless of whether they are included by filters).
-
-:::
-
-## Next steps
-
-- [Set up alerts on vulnerabilities](set-up-alerts.mdx)
-- Create [policy](install-the-admission-controller.mdx) to block vulnerable containers from deploying to your cluster
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/multicluster/aws.mdx b/calico-cloud_versioned_docs/version-20-1/multicluster/aws.mdx
deleted file mode 100644
index 631b562f86..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/multicluster/aws.mdx
+++ /dev/null
@@ -1,68 +0,0 @@
----
-description: A sample configuration of Calico Enterprise federated endpoint identity and federated services for an AWS cluster.
----
-
-# Cluster mesh example for clusters in AWS
-
-## Big picture
-
-A sample configuration for cluster mesh using AWS clusters.
-
-## Tutorial
-
-**Set up**
-
-The on-premises cluster is installed on physical hardware where node and pod IPs are routable, and uses an edge VPN router to peer with the AWS cluster.
-
-![A diagram showing the key configuration requirements setting up an AWS cluster (using AWS VPN CNI) peering with an on-premise cluster.](/img/calico-enterprise/federation/aws-rcc.svg)
-
-**Calico Enterprise configuration**
-
-- An IP pool resource is configured for on-premises IP assignment, with IPIP disabled
-- BGP peering to the VPN router
-- A Remote Cluster Configuration resource references the AWS cluster
-- Service discovery of the AWS cluster services uses the Calico Enterprise Federated Services Controller
-
-**Notes**
-
-- If the VPN router is configured as a route reflector for the on-premises cluster, you would do the following (see the sketch after this list):
-  - Configure the default BGP Configuration resource to disable node-to-node mesh
-  - Configure a global BGP Peer resource to peer with the VPN router
-- If the IP pool has outgoing NAT enabled, you must add an IP pool covering the AWS cluster VPC with `disabled` set to `true`. When set to `true`, the pool is not used for IP allocations, and SNAT is not performed for traffic to the AWS cluster.
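-
-As a sketch only, disabling the node-to-node mesh and peering with the VPN router could look like the following. The AS numbers and the peer IP are placeholders, not values taken from this example:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: BGPConfiguration
-metadata:
-  name: default
-spec:
-  nodeToNodeMeshEnabled: false
-  asNumber: 64512
----
-apiVersion: projectcalico.org/v3
-kind: BGPPeer
-metadata:
-  name: vpn-router
-spec:
-  # A global peer: every node peers with the edge VPN router.
-  peerIP: 192.0.2.1
-  asNumber: 64513
-```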
-
-**AWS configuration**
-
-- A VPC CIDR is chosen that does not overlap with the on-premise IP ranges.
-- There are 4 subnets within the VPC, split across two AZs (for availability) such that each AZ has a public and private subnet. In this particular example, the split of responsibility is:
- - The private subnet is used for node and pod IP allocation
-  - The public subnet is used to host a NAT gateway for pod-to-internet traffic.
-- The VPC is peered to an on-premise network using a VPN. This is configured as a VPN gateway for the AWS side, and a classic VPN for the customer side. BGP is used for route distribution.
-- Routing table for private subnet has:
- - "propagate" set to "true" to ensure BGP-learned routes are distributed
- - Default route to the NAT gateway for public internet traffic
- - Local VPC traffic
-- Routing table for public subnet has default route to the internet gateway.
-- Security group for the worker nodes has:
- - Rule to allow traffic from the peered networks
-  - Other rules required for setting up VPN peering (refer to the AWS docs for details).
-
-To automatically create a Network Load Balancer (NLB) for the AWS deployment, we apply a service with the correct annotation.
-
-```yaml
-apiVersion: v1
-kind: Service
-metadata:
- annotations:
- service.beta.kubernetes.io/aws-load-balancer-type: nlb
- name: nginx-external
-spec:
- externalTrafficPolicy: Local
- ports:
- - name: http
- port: 80
- protocol: TCP
- targetPort: 80
- selector:
- run: nginx
- type: LoadBalancer
-```
diff --git a/calico-cloud_versioned_docs/version-20-1/multicluster/index.mdx b/calico-cloud_versioned_docs/version-20-1/multicluster/index.mdx
deleted file mode 100644
index 50e8d21abf..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/multicluster/index.mdx
+++ /dev/null
@@ -1,19 +0,0 @@
----
-description: Steps to configure cluster mesh.
-hide_table_of_contents: true
----
-
-import { DocCardLink, DocCardLinkLayout } from '/src/___new___/components';
-
-# Cluster mesh
-
-With cluster mesh, you can secure cross-cluster connections using identity-aware network policy and federate services for cross-cluster service discovery.
-
-
-
-
-
-
-
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/multicluster/kubeconfig.mdx b/calico-cloud_versioned_docs/version-20-1/multicluster/kubeconfig.mdx
deleted file mode 100644
index a22bbe35fe..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/multicluster/kubeconfig.mdx
+++ /dev/null
@@ -1,458 +0,0 @@
----
-description: Configure a local cluster to pull endpoint data from a remote cluster.
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-# Configure federated endpoint identity and multi-cluster networking
-
-## Big picture
-
-Configure a cluster to federate endpoint identities and establish cross-cluster connectivity.
-
-## Value
-
-Secure cross-cluster traffic with identity-aware policy, and leverage $[prodname] to establish the required cross-cluster networking.
-
-## Concepts
-
-### Local and remote clusters
-
-Each cluster in the cluster mesh can act as both a local and remote cluster.
-
-- Local clusters are configured to retrieve endpoint and routing data from remote clusters (via RemoteClusterConfiguration)
-- Remote clusters authorize local clusters to retrieve endpoint and routing data
-
-### Remote endpoint identity and policy
-
-Typically, policy can only reference the endpoint identity (e.g. pod labels) of local endpoints. Federated endpoint identity enables local policy rules to reference remote endpoint identities.
-
-### RemoteClusterConfiguration
-RemoteClusterConfiguration is the resource that configures a local cluster to sync resources from a remote cluster. It primarily describes how the local cluster establishes the connection to the remote cluster through which resources are synced.
-
-The resources synced through this connection enable the local cluster to reference remote endpoint identity and establish cross-cluster overlay routes.
-
-RemoteClusterConfiguration creates this connection in one direction. If you want identity-aware policy on both sides (i.e. both clusters) of a connection, or you want $[prodname] to establish cross-cluster overlay networking, you need to create a RemoteClusterConfiguration for both directions.
-
-### kubeconfig files
-Each cluster in the cluster mesh should have a dedicated kubeconfig file used by other clusters in the mesh to connect and authenticate.
-
-## Before you begin
-
-
-## How to
-- [Create kubeconfig files](#create-kubeconfig-files)
-- [Create RemoteClusterConfiguration](#create-remoteclusterconfigurations)
-- [Validate federation and multi-cluster networking](#validate-federation-and-multi-cluster-networking)
-- [Create remote-identity-aware network policy](#create-remote-identity-aware-network-policy)
-- [Troubleshoot](#troubleshoot)
-- [Configure IP pool resources](#configure-ip-pool-resources)
-
-### Ensure pod IP routability
-Federation of workload endpoint identities requires [Pod IP routability](./overview#pod-ip-routability) between clusters. If your clusters are using a supported overlay networking mode, $[prodname] can automatically meet this requirement when clusters are connected.
-
-#### $[prodname] multi-cluster networking
-$[prodname] can automatically extend the overlay networking in your clusters to establish pod IP routes across clusters and thus meet the requirement for Pod IP routability. Only VXLAN overlay is supported at this time.
-
-Ensure the following requirements are met if utilizing $[prodname] multi-cluster networking to achieve pod IP routability:
-- All nodes in the cluster mesh must be able to establish connections to each other via their private IP, and must have unique node names.
-- VXLAN must be enabled on participating IP pools in all clusters, and these IP pool CIDRs must not overlap.
-- `routeSource` and `vxlan*` FelixConfiguration values must be aligned across clusters, and traffic on the `vxlanPort` must be allowed between nodes in the cluster mesh.
-- RemoteClusterConfigurations must be established in both directions for cluster pairs in the cluster mesh.
-- CNI must be Calico.
-
-With these requirements met, multi-cluster networking will be automatically established when RemoteClusterConfigurations are created.
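-
-To help confirm that the VXLAN-related Felix settings match across clusters, you can compare the relevant FelixConfiguration fields in each cluster with a command like the following (a sketch; it assumes the default FelixConfiguration and that the `projectcalico.org/v3` API is reachable through `kubectl`):
-
-```bash
-# Run in each cluster and compare the output.
-kubectl get felixconfiguration default -o yaml | grep -iE 'routesource|vxlan'
-```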
-
-#### Other networking configurations
-Alternatively, you can meet the requirement for Pod IP routability by configuring $[prodname] with BGP or with VPC routing to establish unencapsulated Pod IP routes in your environment.
-
-:::caution
-If you have already configured federated endpoint identity without multi-cluster networking, and you wish to switch to using multi-cluster networking, you should note that the steps below are intended for establishing new RemoteClusterConfigurations. You may wish to consult the [switch to multi-cluster networking](#switch-to-multi-cluster-networking) section.
-:::
-
-### Create kubeconfig files
-
-Create a kubeconfig file, for each cluster, that will be used by other clusters to connect and authenticate themselves.
-
-**For each** cluster in the cluster mesh, utilizing an existing kubeconfig with administrative privileges, follow these steps:
-
-1. Create the ServiceAccount used by remote clusters for authentication:
-
- ```bash
- kubectl apply -f $[filesUrl_CE]/manifests/federation-remote-sa.yaml
- ```
-
-1. If RBAC is enabled, create the ClusterRole and ClusterRoleBinding used by remote clusters for authorization:
-
- ```bash
- kubectl apply -f $[filesUrl_CE]/manifests/federation-rem-rbac-kdd.yaml
- ```
-
-1. Create the kubeconfig file:
-
- Open a file in your favorite editor. Consider establishing a naming scheme unique to each cluster, e.g. `kubeconfig-app-a`.
-
-   Paste the following into the file. We will replace the templated values with data retrieved in the following steps.
-   ```yaml
-   apiVersion: v1
-   kind: Config
-   users:
-     - name: tigera-federation-remote-cluster
-       user:
-         token: <YOUR-SERVICE-ACCOUNT-TOKEN>
-   clusters:
-     - name: tigera-federation-remote-cluster
-       cluster:
-         certificate-authority-data: <YOUR-CERTIFICATE-AUTHORITY-DATA>
-         server: <YOUR-SERVER-ADDRESS>
-   contexts:
-     - name: tigera-federation-remote-cluster-ctx
-       context:
-         cluster: tigera-federation-remote-cluster
-         user: tigera-federation-remote-cluster
-   current-context: tigera-federation-remote-cluster-ctx
-   ```
-
-1. Retrieve the ServiceAccount token:
-
- #### If using Kubernetes ≥ 1.24
- - Create the ServiceAccount token:
-     ```bash
-     kubectl apply -f - <<EOF
-     apiVersion: v1
-     kind: Secret
-     metadata:
-       name: tigera-federation-remote-cluster
-       namespace: kube-system
-       annotations:
-         kubernetes.io/service-account.name: tigera-federation-remote-cluster
-     type: kubernetes.io/service-account-token
-     EOF
-     ```
-   - Retrieve the ServiceAccount token value and replace `<YOUR-SERVICE-ACCOUNT-TOKEN>` with its value:
-     ```bash
-     kubectl describe secret tigera-federation-remote-cluster -n kube-system
-     ```
-
- #### If using Kubernetes < 1.24
-   - Retrieve the ServiceAccount token value and replace `<YOUR-SERVICE-ACCOUNT-TOKEN>` with its value:
- ```bash
- kubectl describe secret -n kube-system $(kubectl get serviceaccounts tigera-federation-remote-cluster -n kube-system -o jsonpath='{.secrets[0].name}')
- ```
-
-1. Retrieve and save the certificate authority and server data:
-
- Run the following command:
- ```bash
- kubectl config view --flatten --minify
- ```
-   Replace `<YOUR-CERTIFICATE-AUTHORITY-DATA>` and `<YOUR-SERVER-ADDRESS>` with the `certificate-authority-data` and `server` values respectively.
-
-1. Verify that the `kubeconfig` file works:
-
- Issue a command like the following to validate the kubeconfig file can be used to connect to the current cluster and access resources:
- ```bash
- kubectl --kubeconfig=kubeconfig-app-a get nodes
- ```
-
-### Create RemoteClusterConfigurations
-We'll now create the RemoteClusterConfigurations that establish synchronization between clusters. This enables remote-identity aware policy, federated services, and can establish multi-cluster networking.
-
-
-
-
-In this setup, the cluster mesh will be configured to meet the pod IP routability requirement by establishing routes between clusters using [$[prodname] multi-cluster networking](#calico-enterprise-multi-cluster-networking).
-
-**For each pair** of clusters in the cluster mesh (e.g. cluster A and cluster B):
-
-1. In cluster A, create a secret that contains the kubeconfig for cluster B:
-
-   Determine the namespace for the secret, and use it to replace `<secret-namespace>` in the following steps.
- The simplest method to create a secret for a remote cluster is to use the `kubectl` command because it correctly encodes the data and formats the file.
- ```bash
-   kubectl create secret generic remote-cluster-secret-name -n <secret-namespace> \
-     --from-literal=datastoreType=kubernetes \
-     --from-file=kubeconfig=<kubeconfig file>
- ```
-
-1. If RBAC is enabled in cluster A, create a Role and RoleBinding for $[prodname] to use to access the secret that contains the kubeconfig for cluster B:
- ```bash
-   kubectl create -f - <<EOF
-   apiVersion: rbac.authorization.k8s.io/v1
-   kind: Role
-   metadata:
-     name: remote-cluster-secret-access
-     namespace: <secret-namespace>
- rules:
- - apiGroups: [""]
- resources: ["secrets"]
- verbs: ["watch", "list", "get"]
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: RoleBinding
-   metadata:
-     name: remote-cluster-secret-access
-     namespace: <secret-namespace>
- roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: Role
- name: remote-cluster-secret-access
- subjects:
- - kind: ServiceAccount
- name: calico-typha
- namespace: calico-system
- EOF
- ```
-
-1. Create the RemoteClusterConfiguration in cluster A:
-
- Within the RemoteClusterConfiguration, we specify the secret used to access cluster B, and the overlay routing mode which toggles the establishment of cross-cluster overlay routes.
- ```bash
-   kubectl create -f - <<EOF
-   apiVersion: projectcalico.org/v3
-   kind: RemoteClusterConfiguration
-   metadata:
-     name: cluster-b
-   spec:
-     clusterAccessSecret:
-       name: remote-cluster-secret-name
-       namespace: <secret-namespace>
-       kind: Secret
-     syncOptions:
-       overlayRoutingMode: Enabled
-   EOF
- ```
-
-1. [Validate](#check-remote-cluster-connection) that the remote cluster connection can be established.
-
-1. Repeat the above steps, switching cluster A and cluster B.
-
-After completing the above steps for all cluster pairs in the cluster mesh, your clusters should now be ready to utilize remote-identity-aware policy and federated services, along with multi-cluster networking if requirements were met.
-
-
-
-
-In this setup, the cluster mesh will rely on the [underlying network](#other-networking-configurations) to meet the pod IP routability requirement.
-
-**For each pair** of clusters in the cluster mesh (e.g. cluster A and cluster B):
-
-1. In cluster A, create a secret that contains the kubeconfig for cluster B:
-
-   Determine the namespace for the secret, and use it to replace `<secret-namespace>` in the following steps.
- The simplest method to create a secret for a remote cluster is to use the `kubectl` command because it correctly encodes the data and formats the file.
- ```bash
-   kubectl create secret generic remote-cluster-secret-name -n <secret-namespace> \
-     --from-literal=datastoreType=kubernetes \
-     --from-file=kubeconfig=<kubeconfig file>
- ```
-
-1. If RBAC is enabled in cluster A, create a Role and RoleBinding for $[prodname] to use to access the secret that contains the kubeconfig for cluster B:
- ```bash
-   kubectl create -f - <<EOF
-   apiVersion: rbac.authorization.k8s.io/v1
-   kind: Role
-   metadata:
-     name: remote-cluster-secret-access
-     namespace: <secret-namespace>
- rules:
- - apiGroups: [""]
- resources: ["secrets"]
- verbs: ["watch", "list", "get"]
- ---
- apiVersion: rbac.authorization.k8s.io/v1
- kind: RoleBinding
-   metadata:
-     name: remote-cluster-secret-access
-     namespace: <secret-namespace>
- roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: Role
- name: remote-cluster-secret-access
- subjects:
- - kind: ServiceAccount
- name: calico-typha
- namespace: calico-system
- EOF
- ```
-
-1. Create the RemoteClusterConfiguration in cluster A:
-
- Within the RemoteClusterConfiguration, we specify the secret used to access cluster B, and the overlay routing mode which toggles the establishment of cross-cluster overlay routes.
- ```bash
-   kubectl create -f - <<EOF
-   apiVersion: projectcalico.org/v3
-   kind: RemoteClusterConfiguration
-   metadata:
-     name: cluster-b
-   spec:
-     clusterAccessSecret:
-       name: remote-cluster-secret-name
-       namespace: <secret-namespace>
-       kind: Secret
-     syncOptions:
-       overlayRoutingMode: Disabled
-   EOF
- ```
-
-1. If you have no IP pools in cluster A with NAT-outgoing enabled, skip this step.
-
- Otherwise, if you have IP pools in cluster A with NAT-outgoing enabled, and workloads in that pool will egress to workloads in cluster B, you need to instruct $[prodname] to not perform NAT on traffic destined for IP pools in cluster B.
-
- You can achieve this by creating a disabled IP pool in cluster A for each CIDR in cluster B. This IP pool should have NAT-outgoing disabled. For example:
-
- ```yaml
- apiVersion: projectcalico.org/v3
- kind: IPPool
- metadata:
- name: clusterB-main-pool
-   spec:
-     cidr: <cluster-b-cidr>
-     disabled: true
- ```
-
-1. [Validate](#check-remote-cluster-connection) that the remote cluster connection can be established.
-
-1. Repeat the above steps, switching cluster A and cluster B.
-
-After completing the above steps for all cluster pairs in the cluster mesh, your clusters should now be ready to utilize remote-identity-aware policy and federated services.
-
-
-
-
-:::caution
- This tutorial sets up RemoteClusterConfigurations in both directions. This is required for $[prodname] to manage multi-cluster networking, and also ensures you can write identity-aware policy on both sides of a cross-cluster connection. Unidirectional connections can be made at your own discretion.
-:::
-
-### Switch to multi-cluster networking
-The steps above assume that you are configuring both federated endpoint identity and multi-cluster networking for the first time. If you already have federated endpoint identity, and want to use multi-cluster networking, follow these steps:
-
-1. Validate that all [requirements](#calico-enterprise-multi-cluster-networking) for multi-cluster networking have been met.
-2. Update the ClusterRole in each cluster in the cluster mesh using the RBAC manifest found in [Create kubeconfig files](#create-kubeconfig-files)
-3. In all RemoteClusterConfigurations, set `spec.syncOptions.overlayRoutingMode` to `Enabled` (see the example patch after this list).
-4. Verify that all RemoteClusterConfigurations are bidirectional (in both directions for each cluster pair) using these [instructions](#create-remoteclusterconfigurations).
-5. If you had previously created disabled IP pools to prevent NAT outgoing from applying to remote cluster destinations, those disabled IP pools are no longer needed when using multi-cluster networking and must be deleted.
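-
-For step 3, one way to update an existing RemoteClusterConfiguration is a merge patch like the following sketch. The resource name `my-cluster` is a placeholder, and this assumes the `projectcalico.org/v3` resources are reachable through `kubectl`:
-
-```bash
-kubectl patch remoteclusterconfiguration my-cluster --type=merge \
-  -p '{"spec":{"syncOptions":{"overlayRoutingMode":"Enabled"}}}'
-```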
-
-### Validate federated endpoint identity & multi-cluster networking
-#### Validate RemoteClusterConfiguration and federated endpoint identity
-##### Check remote cluster connection
-You can check the Typha logs for remote cluster connection status. Run the following command:
-```bash
-kubectl logs deployment/calico-typha -n calico-system | grep "Sending in-sync update"
-```
-You should see an entry for each RemoteClusterConfiguration in the local cluster.
-
-If the output contains unexpected results, proceed to the [troubleshooting](#troubleshoot) section.
-
-#### Validate multi-cluster networking
-If all requirements were met for $[prodname] to establish multi-cluster networking, you can test the functionality by establishing a connection from a pod in a local cluster to the IP of a pod in a remote cluster. Ensure that there is no policy in either cluster that may block this connection.
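-
-For example, a quick connectivity check could look like this sketch (the pod name, namespace, and remote pod IP are placeholders, and it assumes the local pod's image includes `ping`):
-
-```bash
-# Find the IP of a pod in the remote cluster, then test the connection from a pod in the local cluster.
-kubectl exec -n <local-namespace> <local-pod> -- ping -c 3 <remote-pod-ip>
-```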
-
-If the connection fails, proceed to the [troubleshooting](#troubleshoot) section.
-
-### Create remote-identity-aware network policy
-With federated endpoint identity and routing between clusters established, you can now use labels to reference endpoints on a remote cluster in local policy rules, rather than referencing them by IP address.
-
-The main policy selector still refers only to local endpoints; that selector chooses which local endpoints the policy applies to.
-However, rule selectors can now refer to both local and remote endpoints.
-
-In the following example, cluster A (an application cluster) has a network policy that governs outbound connections to cluster B (a database cluster).
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: default.app-to-db
- namespace: myapp
-spec:
- # The main policy selector selects endpoints from the local cluster only.
- selector: app == 'backend-app'
- tier: default
- egress:
- - destination:
- # Rule selectors can select endpoints from local AND remote clusters.
- selector: app == 'postgres'
- protocol: TCP
- ports: [5432]
- action: Allow
-```
-
-### Troubleshoot
-#### Troubleshoot RemoteClusterConfiguration and federated endpoint identity
-
-##### Verify configuration
-For each impacted remote cluster pair (between cluster A and cluster B):
-1. Retrieve the kubeconfig from the secret stored in cluster A. Manually verify that it can be used to connect to cluster B.
- ```bash
-   kubectl get secret -n <secret-namespace> remote-cluster-secret-name -o=jsonpath="{.data.kubeconfig}" | base64 -d > verify_kubeconfig_b
- kubectl --kubeconfig=verify_kubeconfig_b get nodes
- ```
- This validates that the credentials used by Typha to connect to cluster B's API server are stored in the correct location and provide sufficient access.
-
- The command above should yield a result like the following:
- ```
- NAME STATUS ROLES AGE VERSION
- clusterB-master Ready master 7d v1.27.0
- clusterB-worker-1 Ready worker 7d v1.27.0
- clusterB-worker-2 Ready worker 7d v1.27.0
- ```
-
- If you do not see the nodes of cluster B listed in response to the command above, verify that you [created](#create-kubeconfig-files) the kubeconfig for cluster B correctly, and that you [stored](#create-remoteclusterconfigurations) it in cluster A correctly.
-
- If you do see the nodes of cluster B listed in response to the command above, you can run this test (or a similar test) on a node in cluster A to verify that cluster A nodes can connect to the API server of cluster B.
-
-2. Validate that the Typha service account in Cluster A is authorized to retrieve the kubeconfig secret for cluster B.
- ```bash
-   kubectl auth can-i list secrets --namespace <secret-namespace> --as=system:serviceaccount:calico-system:calico-typha
- ```
-
- This command should yield the following output:
- ```
- yes
- ```
-
- If the command does not return this output, verify that you correctly [configured RBAC](#create-remoteclusterconfigurations) in cluster A.
-
-3. Repeat the above, switching cluster A to cluster B.
-
-##### Check logs
-Validate that querying Typha logs yield the expected result outlined in the [validation](#validate-federated-endpoint-identity--multi-cluster-networking) section.
-
-If the Typha logs do not yield the expected result, review the warning or error-related logs in `typha` or `calico-node` for insights.
-
-#### Troubleshoot multi-cluster networking
-##### Basic validation
-* Ensure that RemoteClusterConfiguration and federated endpoint identity are [functioning correctly](#validate-federated-endpoint-identity--multi-cluster-networking)
-* Verify that you have met the [prerequisites](#calico-enterprise-multi-cluster-networking) for multi-cluster networking
-* If you had previously set up RemoteClusterConfigurations without multi-cluster networking, and are upgrading to use the feature, review the [switching considerations](#switch-to-multi-cluster-networking)
-* Verify that traffic between clusters is not being denied by network policy
-
-##### Check overlayRoutingMode
-Ensure that `overlayRoutingMode` is set to `"Enabled"` on all RemoteClusterConfigurations.
-
-If overlay routing is successfully enabled, you can view the logs of a Typha instance using:
-```bash
-kubectl logs deployment/calico-typha -n calico-system
-```
-
-You should see an output for each connected remote cluster that looks like this:
-```
-18:49:35.394 [INFO][14] wrappedcallbacks.go 443: Creating syncer for RemoteClusterConfiguration(my-cluster)
-18:49:35.394 [INFO][14] watchercache.go 186: Full resync is required ListRoot="/calico/ipam/v2/assignment/"
-18:49:35.395 [INFO][14] watchercache.go 186: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/workloadendpoints"
-18:49:35.396 [INFO][14] watchercache.go 186: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/hostendpoints"
-18:49:35.396 [INFO][14] watchercache.go 186: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/profiles"
-18:49:35.396 [INFO][14] watchercache.go 186: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/nodes"
-18:49:35.397 [INFO][14] watchercache.go 186: Full resync is required ListRoot="/calico/resources/v3/projectcalico.org/ippools"
-```
-
-If you do not see each of the resource types above, overlay routing was not successfully enabled in your cluster. Verify that you followed the [setup](#create-remoteclusterconfigurations) correctly for overlay routing, and that the cluster is using a version of $[prodname] that supports multi-cluster networking.
-
-##### Check logs
-Warning or error logs in `typha` or `calico-node` may provide insight into where issues are occurring.
-
-## Next steps
-
-[Configure federated services](services-controller.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/multicluster/overview.mdx b/calico-cloud_versioned_docs/version-20-1/multicluster/overview.mdx
deleted file mode 100644
index 777537a648..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/multicluster/overview.mdx
+++ /dev/null
@@ -1,54 +0,0 @@
----
-description: Configure a cluster mesh for cross-cluster endpoints sharing, cross-cluster connectivity, and cross-cluster service discovery.
----
-
-# Overview
-
-## Big picture
-
-Secure cross-cluster connections with identity-aware network policy, and federate services for cross-cluster service discovery.
-
-Utilize $[prodname] to establish cross-cluster connectivity.
-
-## Value
-
-At some point in your Kubernetes journey, you may have applications that need to access services and workloads running in another cluster.
-
-By default, pods can only communicate with pods within the same cluster. Additionally, services and network policy only select pods from within the same cluster. $[prodname] can help overcome these barriers by forming a cluster mesh with the following features:
-- **Federated endpoint identity**
-
- Allow a local Kubernetes cluster to include the workload endpoints (pods) and host endpoints of a remote cluster in the calculation of local network policies applied on each node of the local cluster.
-
-- **Federated services**
-
- Enable a local Kubernetes Service to populate with Endpoints selected from both local cluster and remote cluster Services.
-
-- **Multi-cluster networking**
-
- Establish an overlay network between clusters to provide cross-cluster connectivity with $[prodname].
-
-## Concepts
-
-### Pod IP routability
-
-$[prodname] cluster mesh is implemented in Kubernetes at the network layer, based on pod IPs.
-
-Taking advantage of federated workload endpoint identity and federated services requires that pod IPs are routable between clusters. This is because identity-aware network policy requires source and destination pod IPs to be preserved to establish pod identity. Additionally, the Endpoint IPs of pods selected by a federated Service must be routable in order for that Service to be of value.
-
-You can utilize $[prodname] multi-cluster networking to establish pod IP routability between clusters via overlay. Alternatively, you can manually set up pod IP routability between clusters without encapsulation (e.g. VPC routing, BGP routing).
-
-### Federated endpoint identity
-
-Federated endpoint identity in a cluster mesh allows a local Kubernetes cluster to include the workload endpoints (pods) and host endpoints of a remote cluster in the calculation of the local policies for each node, e.g. Cluster A network policy allows its application pods to talk to database pods in Cluster B.
-
-This feature does not _federate network policies_; policies from a remote cluster are not applied to the endpoints on the local cluster, and the policy from the local cluster is rendered only locally and applied to the local endpoints.
-
-### Federated services
-
-Federated services in a cluster mesh works with federated endpoint identity, providing cross-cluster service discovery for a local cluster. If you have an existing service discovery mechanism, this feature is optional.
-
-Federated services use the Tigera Federated Services Controller to federate all Kubernetes endpoints (workload and host endpoints) across all of the clusters. The Federated Services Controller accesses service and endpoints data in the remote clusters directly through the Kubernetes API.
-
-## Next steps
-
-[Configure remote-aware policy and multi-cluster networking](kubeconfig.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/multicluster/services-controller.mdx b/calico-cloud_versioned_docs/version-20-1/multicluster/services-controller.mdx
deleted file mode 100644
index 2f7944e835..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/multicluster/services-controller.mdx
+++ /dev/null
@@ -1,209 +0,0 @@
----
-description: Configure a federated service for cross-cluster service discovery for local clusters.
----
-
-# Configure federated services
-
-## Big picture
-
-Configure local clusters to discover services across multiple clusters.
-
-## Value
-
-Use cluster mesh and federated services discovery along with federated endpoint identity to extend and automate endpoints sharing. (Optional if you have your own service discovery mechanism.)
-
-## Concepts
-
-### Federated services
-
-A federated service is a set of services (called backing services) with consolidated endpoints. $[prodname] discovers services across a cluster mesh (both local cluster and remote clusters) and creates a "federated service" on the local cluster that encompasses all of the individual services.
-
-Federated services are managed by the Tigera Federated Service Controller, which monitors and maintains endpoints for each locally-federated service. The controller does not change configuration on remote clusters.
-
-A federated service looks similar to a regular Kubernetes service, but instead of using a pod selector, it uses an annotation. For example:
-
-```yaml
-apiVersion: v1
-kind: Service
-metadata:
- name: my-app-federated
- namespace: default
- annotations:
- federation.tigera.io/serviceSelector: run == "my-app"
-spec:
- ports:
- - name: my-app-ui
- port: 8080
- protocol: TCP
- type: ClusterIP
-```
-
-| Annotation | Description |
-| -------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `federation.tigera.io/serviceSelector` | Required field that specifies the services used in the federated service. Format is a standard $[prodname] selector (i.e. the same as $[prodname] policy resources) and selects services based on their labels. The selector annotation selects services, not pods. <br/> Only services in the same namespace as the federated service are included. This implies namespace names across clusters are linked (this is a basic premise of federated endpoint identity). <br/> If the value is incorrectly specified, the service is not federated and endpoint data is removed from the service. View the warning logs in the controller for any issues processing this value. |
-
-**Syntax and rules**
-
-- Services that you specify in the federated service must be in the same namespace or they are ignored. A basic assumption of federated endpoint identity is that namespace names are linked across clusters.
-- If you specify a `spec.selector` in a federated service, the service is not federated.
-- You cannot federate another federated service. If a service has a federated services annotation, it is not included as a backing service of another federated service.
-- The target port number in the federated service ports is not used.
-
-**Match services using a label**
-
-You can also match services using a label. The label is implicitly added to each service, but it does not appear in `kubectl` when viewing the service.
-
-| Label | Description |
-| ---------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `federation.tigera.io/remoteClusterName` | Label added to all remote services that correspond to the Remote Cluster Configuration name for the remote cluster. Use this label to restrict the clusters selected by the services. **Note**: The label is not added for services in the local cluster. |
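-
-For example, a federated service could restrict its backing services to a single remote cluster with an annotation like the following sketch (the cluster name `cluster-b` is a placeholder for the name used in your RemoteClusterConfiguration):
-
-```yaml
-metadata:
-  annotations:
-    federation.tigera.io/serviceSelector: run == "my-app" && federation.tigera.io/remoteClusterName == "cluster-b"
-```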
-
-**About endpoints**
-
-- Do not manually create or manage endpoints resources; let the Tigera controller do all of the work. User updates to endpoint resources are ignored.
-- Endpoints are selected only when the service port name and protocol in the federated service matches the port name and protocol in the backing service.
-- Endpoint data configured in the federated service is slightly modified from the original data of the backing service. For backing services on remote clusters, the `targetRef.name` field in the federated service is prefixed with the remote cluster name. For example, `<remote-cluster-name>/<pod-name>`.
-
-## Before you begin
-
-**Required**
-
-- [Configure federated endpoint identity](kubeconfig.mdx)
-
-## How to
-
-- [Create service resources](#create-service-resources)
-- [Configure a federated service](#configure-a-federated-service)
-- [Access a federated service](#access-a-federated-service)
-
-### Create service resources
-
-On each cluster in the mesh that is providing a particular service, create your service resources as you normally would, with the following requirements:
-
-- Services must all be in the same namespace.
-- Configure each service with a common label key and value to identify the common set of services across your clusters (for example, `run=my-app`).
-
-Kubernetes manages the service by populating the service endpoints from the pods that match the selector configured in the service spec.
-
-### Configure a federated service
-
-1. On a cluster that needs to access the federated set of pods that are running an application, create a
-   service on that cluster, leaving the `spec.selector` blank.
-1. Set the `federation.tigera.io/serviceSelector` annotation to be a $[prodname] selector that selects the previously configured services using the matching label (for example, `run == "my-app"`).
-
-The Federated Services Controller manages this service, populating the service endpoints from all of the services that match the service selector configured in the annotation.
-
-### Access a federated service
-
-Any application can access a federated service using the local DNS name for that service; this is the simplest way to reach it.
-
-By default, Kubernetes adds DNS entries to access a service locally. For a service called `my-svc` in the namespace
-`my-namespace`, the following DNS entry would be added to access the service within the local cluster:
-
-```
-my-svc.my-namespace.svc.cluster.local
-```
-
-DNS lookup for this name returns the fixed ClusterIP address assigned for the federated service. The ClusterIP is translated in iptables to one of the federated service endpoint IPs, and is load balanced across all of the endpoints.
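-
-For example, you could verify the lookup from a pod in the local cluster with something like this (the pod and namespace names are placeholders, and the command assumes an image that includes `nslookup`):
-
-```bash
-kubectl exec -n my-namespace <some-pod> -- nslookup my-svc.my-namespace.svc.cluster.local
-```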
-
-## Tutorial
-
-### Create a service
-
-In the following example, the remote cluster defines the following service.
-
-```yaml
-apiVersion: v1
-kind: Service
-metadata:
- labels:
- run: my-app
- name: my-app
- namespace: default
-spec:
- selector:
- run: my-app
- ports:
- - name: my-app-ui
- port: 80
- protocol: TCP
- targetPort: 9000
- - name: my-app-console
- port: 81
- protocol: TCP
- targetPort: 9001
- type: ClusterIP
-```
-
-This service definition exposes two ports for the application `my-app`: one port for accessing a UI, and the other for accessing a management console. The service specifies a Kubernetes selector, which means the endpoints for this service are automatically populated by Kubernetes from matching pods within the service's own cluster.
-
-### Create a federated service
-
-To create a federated service on your local cluster that federates the web access port for both the local and remote service, you would create a service resource on your local cluster as follows:
-
-```yaml
-apiVersion: v1
-kind: Service
-metadata:
- name: my-app-federated
- namespace: default
- annotations:
- federation.tigera.io/serviceSelector: run == "my-app"
-spec:
- ports:
- - name: my-app-ui
- port: 8080
- protocol: TCP
- type: ClusterIP
-```
-
-Because `spec.selector` is not specified, Kubernetes does not manage this service's endpoints. Instead, the `federation.tigera.io/serviceSelector` annotation indicates that this is a federated service managed by the Federated Services Controller.
-
-The controller matches the `my-app` services (matching the run label) on both the local and remote clusters, and consolidates endpoints from the `my-app-ui` TCP port for both of those services. Because the federated service does not specify the `my-app-console` port, the controller does not include these endpoints in the federated service.
-
-The endpoints data for the federated service is similar to the following. Note that the name of the remote cluster is included in `targetRef.name`.
-
-```yaml
-apiVersion: v1
-kind: Endpoints
-metadata:
- creationTimestamp: 2018-07-03T19:41:38Z
- annotations:
- federation.tigera.io/serviceSelector: run == "my-app"
- name: my-app-federated
- namespace: default
- resourceVersion: '701812'
- selfLink: /api/v1/namespaces/default/endpoints/my-app-federated
- uid: 1a0427e8-7ef9-11e8-a24c-0259d75c6290
-subsets:
- - addresses:
- - ip: 192.168.93.12
- nodeName: node1.localcluster.tigera.io
- targetRef:
- kind: Pod
- name: my-app-59cf48cdc7-frf2t
- namespace: default
- resourceVersion: '701655'
- uid: 19f5e914-7ef9-11e8-a24c-0259d75c6290
- ports:
- - name: my-app-ui
- port: 80
- protocol: TCP
- - addresses:
- - ip: 192.168.0.28
- nodeName: node1.remotecluster.tigera.io
- targetRef:
- kind: Pod
- name: remotecluster/my-app-7b6f758bd5-ctgbh
- namespace: default
- resourceVersion: '701648'
- uid: 19e2c841-7ef9-11e8-a24c-0259d75c6290
- ports:
- - name: my-app-ui
- port: 80
- protocol: TCP
-```
-
-## Additional resources
-
-- [Cluster mesh example for AWS](aws.mdx)
-- [Federated service controller](../reference/component-resources/kube-controllers/configuration.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/application-layer-policies/alp-tutorial.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/application-layer-policies/alp-tutorial.mdx
deleted file mode 100644
index bb15e86b9a..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/application-layer-policies/alp-tutorial.mdx
+++ /dev/null
@@ -1,168 +0,0 @@
----
-description: Learn how to apply ALP to your workloads and control ingress traffic.
----
-
-# Application layer policy tutorial
-
-This tutorial shows how to use $[prodname] application layer policy to restrict ingress traffic for applications and microservices.
-
-### Install the demo application
-
-We will use a simple microservice application to demonstrate $[prodname]
-application layer policy. The [YAO Bank](https://github.com/projectcalico/yaobank) application creates a
-customer-facing web application, a microservice that serves up account
-summaries, and an [etcd](https://github.com/coreos/etcd) datastore.
-
-```bash
-kubectl apply -f $[tutorialFilesURL]/10-yaobank.yaml
-```
-```bash
-namespace/yaobank configured
-service/database created
-serviceaccount/database created
-deployment.apps/database created
-service/summary created
-serviceaccount/summary created
-deployment.apps/summary created
-service/customer created
-serviceaccount/customer created
-deployment.apps/customer created
-```
-
-:::note
-
-You can also
-[view the manifest in your browser](/files/10-yaobank.yaml).
-
-:::
-
-Verify that the application pods have been created and are ready.
-```bash
-kubectl rollout status deploy/summary deploy/customer deploy/database
-```
-
-When the demo application is running, you will see three pods similar to the following.
-
-```
-NAME READY STATUS RESTARTS AGE
-customer-2809159614-qqfnx 3/3 Running 0 21h
-database-1601951801-m4w70 3/3 Running 0 21h
-summary-2817688950-g1b3n 3/3 Running 0 21h
-```
-
-## Set up
-- A $[prodname] cluster is running with application layer policy enabled
-- Cluster has three microservices: customer, database, summary
-- The customer web service should not have access to the backend database, but should have access to clients outside the cluster
-
-Imagine what would happen if an attacker were to gain control of the customer web pod in our
-application. Let's simulate this by executing a remote shell inside that pod.
-
-```bash
-kubectl exec -ti customer- -c customer -- bash
-```
-
-Notice that from here, we get direct access to the backend database. For example, we can list all the entries in the database like this:
-
-```bash
-curl http://database:2379/v2/keys?recursive=true | python -m json.tool
-```
-
-(Piping to `python -m json.tool` nicely formats the output.)
-
-## Apply application layer policy
-
-In this step, we get the application layer policy YAML and apply it. Note that the policy scope is cluster-wide.
-
-With a $[prodname] policy, you can mitigate risks to the banking application.
-
-```bash
-wget $[tutorialFilesURL]/30-policy.yaml
-kubectl create -f 30-policy.yaml
-```
-
-Let's examine this policy piece by piece. First, notice that an application layer policy looks like a regular $[prodname] global network policy.
-The main difference is that you can use the application layer policy parameters in a global network policy. Another difference
-is that you'll see HTTP traffic flows in Manager UI features like Service Graph.
-
-Next, there are three policy objects, one for each microservice.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: customer
-spec:
- selector: app == 'customer'
- ingress:
- - action: Allow
- http:
- methods: ['GET']
- egress:
- - action: Allow
-```
-
-The first policy protects the customer web app. Because this application is customer-facing, we do not
-restrict what can communicate with it. We do, however, restrict its communications to HTTP `GET`
-requests.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: summary
-spec:
- selector: app == 'summary'
- ingress:
- - action: Allow
- source:
- serviceAccounts:
- names: ['customer']
- egress:
- - action: Allow
-```
-
-The second policy protects the account summary microservice. We know the only consumer of this
-service is the customer web app, so we restrict the source of incoming connections to the service
-account for the customer web app.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: database
-spec:
- selector: app == 'database'
- ingress:
- - action: Allow
- source:
- serviceAccounts:
- names: ["summary"]
- egress:
- - action: Allow
-```
-
-The third policy protects the database. Only the summary microservice should have direct access to
-the database.
-
-### Verify the policy is working
-
-Let's verify our policy is working as intended. First, return to your browser and refresh to
-ensure policy enforcement has not broken the application.
-
-Next, return to the customer web app. Recall that we simulated an attacker gaining control of that
-pod by executing a remote shell inside it.
-
-```bash
-kubectl exec -ti customer- -c customer -- bash
-```
-
-Repeat our attempt to access the database.
-
-```bash
-curl -I http://database:2379/v2/keys?recursive=true
-```
-
-We omitted the JSON formatting because we do not expect to get a valid JSON response. This
-time we should get a `403 Forbidden` response. Only the account summary microservice has database
-access according to our policy.
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/application-layer-policies/alp.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/application-layer-policies/alp.mdx
deleted file mode 100644
index f3e4f0c8b4..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/application-layer-policies/alp.mdx
+++ /dev/null
@@ -1,68 +0,0 @@
----
-description: Enforce application layer policies in your cluster to configure access controls based on L7 attributes.
----
-
-# Enable and enforce application layer policies
-
-Application layer policies let you configure access controls based on L7 attributes.
-
-## Before you begin
-
-### Unsupported
-- $[prodname] implements application layer policy using Envoy as a DaemonSet. This means you cannot use application layer policy alongside a service mesh like Istio.
-- GKE
-
-### Limitations
-- Application layer policy supports restricting only ingress traffic
-- Support for L7 attributes is limited to the HTTP method and exact/prefix URL path
-- Supported protocols are limited to TCP-based protocols (for example, HTTP, HTTPS, or gRPC)
-- You can control application layer policies only at the cluster level (not per namespace)
-
-## How to
-- [Enable application layer policies](#enable-application-layer-policies-alp)
-- [Enforce application layer policies for ingress traffic](#enforce-application-layer-policies-for-ingress-traffic)
-- [Disable application layer policies](#disable-application-layer-policies)
-
-### Enable application layer policies (ALP)
-In the ApplicationLayer custom resource, set the `applicationLayerPolicy` field to Enabled.
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: ApplicationLayer
-metadata:
-  name: tigera-secure
-spec:
-  applicationLayerPolicy: Enabled
-```
-
-### Enforce application layer policies for ingress traffic
-
-You can restrict ingress traffic using HTTP match criteria using Global network policy.
-For a list of all HTTP match parameters, see [Global network policy](/reference/resources/globalnetworkpolicy.mdx).
-
-In the following example, the trading app is allowed ingress traffic only for HTTP GET requests that match the exact path `/projects/calico` or that begin with the prefix `/users`.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: customer
-spec:
- selector: app == 'tradingapp'
- ingress:
- - action: Allow
- http:
- methods: ["GET"]
- paths:
- - exact: "/projects/calico"
- - prefix: "/users"
- egress:
- - action: Allow
-```
-
-### Disable application layer policies
-
-To disable application layer policies, do one of the following:
-
-- Set the `applicationLayerPolicy` field in the `ApplicationLayer` custom resource to `Disabled` (see the sketch below).
-- Remove the `applicationLayerPolicy` field entirely.
-- Delete the `ApplicationLayer` custom resource.
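-
-For example, a minimal sketch of the first option, reusing the same resource name shown earlier on this page:
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: ApplicationLayer
-metadata:
-  name: tigera-secure
-spec:
-  # Setting this field to Disabled (or removing it) turns off application layer policies
-  applicationLayerPolicy: Disabled
-```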
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/application-layer-policies/index.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/application-layer-policies/index.mdx
deleted file mode 100644
index 914f5afde2..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/application-layer-policies/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Use application layer policies to restrict ingress traffic based on HTTP attributes.
-hide_table_of_contents: true
----
-
-# Application layer policies to control ingress traffic
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/calico-labels.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/calico-labels.mdx
deleted file mode 100644
index 13efa74a43..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/calico-labels.mdx
+++ /dev/null
@@ -1,86 +0,0 @@
----
-description: Calico Cloud automatic labels for use with resources.
----
-
-# Calico Cloud automatic labels
-
-As a convenience, $[prodname] provides immutable labels that are used for specific resources when evaluating selectors in policies. The labels make it easier to match resources in common ways (such as matching a namespace by name).
-
-## Labels for matching namespaces
-
-The label `projectcalico.org/name` is set to the name of the namespace. This allows for matching namespaces by name when using a `namespaceSelector` field.
-
-For example, the following GlobalNetworkPolicy applies to workloads with the label `color: red` in the namespaces named `foo` and `bar`. The policy allows ingress traffic to port 8080 from all workloads in a third namespace named `baz`:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: foo-and-bar
-spec:
- namespaceSelector: projectcalico.org/name in {"foo", "bar"}
- selector: color == "red"
- types:
- - Ingress
- ingress:
- - action: Allow
- source:
- namespaceSelector: projectcalico.org/name == "baz"
- destination:
- ports:
- - 8080
-```
-
-Be aware that an empty `namespaceSelector` behaves differently in NetworkPolicy and GlobalNetworkPolicy. For example:
-
-**In a network policy**,
-
- ```yaml
- namespaceSelector:
- selector: foo == "bar"
- ```
-means "resources in the same namespace as the network policy that matches foo == 'bar'".
-
-**In a global network policy**,
-
- ```yaml
- namespaceSelector:
- selector: foo == "bar"
- ```
-means "resources in any namespace and non-namespaced resources that match foo == 'bar'".
-
-Further,
-
- ```yaml
- namespaceSelector: projectcalico.org/name == "some-namespace"
- selector: foo == "bar"
- ```
-is equivalent to:
-
- ```yaml
- namespaceSelector:
- selector: (foo == "bar") && (projectcalico.org/namespace == "some-namespace")
- ```
-
-### Labels for matching service accounts
-
-Similarly, the `projectcalico.org/name` label is applied to ServiceAccounts and allows for matching by name in a `serviceAccountSelector`.
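-
-For example, the following sketch (the policy name, namespace, and service account name are illustrative) applies an ingress allow rule only to workloads that run as a service account named `api-service`:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
-  name: select-by-service-account-name
-  namespace: production
-spec:
-  # Matches the immutable projectcalico.org/name label on the workload's ServiceAccount
-  serviceAccountSelector: projectcalico.org/name == "api-service"
-  ingress:
-    - action: Allow
-      protocol: TCP
-      destination:
-        ports:
-          - 8080
-```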
-
-### Kubernetes labels for matching namespaces
-
-Kubernetes also has [automatic labeling](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#automatic-labelling), for example `kubernetes.io/metadata.name`. The Kubernetes namespace label serves the same purpose and can be used in the same way as the $[prodname] label. The `projectcalico.org/name` label predates the automatic Kubernetes label.
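-
-For example, assuming a namespace named `default`, the following two selector fragments match the same namespace:
-
-```yaml
-# Using the Calico automatic label
-namespaceSelector: projectcalico.org/name == "default"
-
-# Using the Kubernetes automatic label
-namespaceSelector: kubernetes.io/metadata.name == "default"
-```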
-
-## Labels for matching host endpoints
-
-[Automatic HostEndpoints](../../network-policy/hosts/kubernetes-nodes) use the following label to differentiate them from regular HostEndpoints:
-
-- `projectcalico.org/created-by: calico-kube-controllers`
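-
-For example, a global network policy can be scoped to only the automatically created host endpoints with a selector fragment like the following (a fragment, not a complete policy):
-
-```yaml
-selector: projectcalico.org/created-by == "calico-kube-controllers"
-```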
-
-## Use the correct selector with labels in policies
-
-$[prodname] labels must be used with the correct selector or the policy will not work as designed (and there are no error messages in Manager UI or when applying the YAML).
-
-| Calico label | Usage requirements | Use in these resources... |
-| --------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
-| `projectcalico.org/name` | Use with a **namespaceSelector** or **serviceAccountSelector**. | Network policy, staged network policy<br/>Namespaced resources that apply only to workload endpoint resources in the namespace. |
-| `projectcalico.org/namespace` | Use only with selectors.<br/>Use the label as the label name, and a namespace name as the value to compare against (for example, `projectcalico.org/namespace == "default"`). | Global network policy, staged global network policy<br/>Cluster-wide (non-namespaced) resources that apply to workload endpoint resources in all namespaces, and to host endpoint resources. |
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/calico-network-policy.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/calico-network-policy.mdx
deleted file mode 100644
index bd78699415..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/calico-network-policy.mdx
+++ /dev/null
@@ -1,256 +0,0 @@
----
-description: Create your first Calico network policies. Shows the rich features using sample policies that extend native Kubernetes network policy.
----
-
-# Get started with Calico network policy
-
-## Big picture
-
-Control which network traffic is allowed or denied using rules in Calico network policy.
-
-## Value
-
-### Extends Kubernetes network policy
-
-Calico network policy provides a richer set of policy capabilities than Kubernetes network policy, including policy ordering/priority, deny rules, and more flexible match rules. While Kubernetes network policy applies only to pods, Calico network policy can be applied to multiple types of endpoints including pods, VMs, and host interfaces.
-
-### Write once, works everywhere
-
-No matter which cloud provider you use now, adopting Calico network policy means you write the policy once and it is portable. If you move to a different cloud provider, you don’t need to rewrite your Calico network policy. Calico network policy is a key feature to avoid cloud provider lock-in.
-
-### Works seamlessly with Kubernetes network policies
-
-You can use Calico network policy in addition to Kubernetes network policy, or exclusively. For example, you could allow developers to define Kubernetes network policy for their microservices. For broader and higher-level access controls that developers cannot override, you could allow only security or Ops teams to define Calico network policies.
-
-## Concepts
-
-### Endpoints
-
-Calico network policies apply to **endpoints**. In Kubernetes, each pod is a Calico endpoint. However, Calico can support other kinds of endpoints. There are two types of Calico endpoints: **workload endpoints** (such as a Kubernetes pod or OpenStack VM) and **host endpoints** (an interface or group of interfaces on a host).
-
-### Namespaced and global network policies
-
-**Calico network policy** is a namespaced resource that applies to pods/containers/VMs in that namespace.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: allow-tcp-port-6379
- namespace: production
-```
-
-**Calico global network policy** is a non-namespaced resource and can be applied to any kind of endpoint (pods, VMs, host interfaces) independent of namespace.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: allow-tcp-port-6379
-```
-
-Because global network policies use **kind: GlobalNetworkPolicy**, they are grouped separately from **kind: NetworkPolicy**. For example, global network policies will not be returned from `kubectl get networkpolicy.projectcalico.org`, but are instead returned from `kubectl get globalnetworkpolicy`.
-
-### Ingress and egress
-
-Each network policy rule applies to either **ingress** or **egress** traffic. From the point of view of an endpoint (pod, VM, host interface), **ingress** is incoming traffic to the endpoint, and **egress** is outgoing traffic from the endpoint. In a Calico network policy, you create ingress and egress rules independently (egress, ingress, or both).
-
-You can specify whether policy applies to ingress, egress, or both using the **types** field. If you do not use the types field, Calico defaults to the following values.
-
-| Ingress rule present? | Egress rule present? | Value |
-| :-------------------: | :------------------: | :-------------: |
-| No | No | Ingress |
-| Yes | No | Ingress |
-| No | Yes | Egress |
-| Yes | Yes | Ingress, Egress |
-
-### Network traffic behaviors: deny and allow
-
-The Kubernetes network policy specification defines the following behavior:
-
-- If no network policies apply to a pod, then all traffic to/from that pod is allowed.
-- If one or more network policies apply to a pod and contain ingress rules, then only the ingress traffic specifically allowed by those policies is allowed.
-- If one or more network policies apply to a pod and contain egress rules, then only the egress traffic specifically allowed by those policies is allowed.
-
-For compatibility with Kubernetes, **Calico network policy** follows the same behavior for Kubernetes pods. For other endpoint types (VMs, host interfaces), Calico network policy is default deny. That is, only traffic specifically allowed by network policy is allowed, even if no network policies apply to the endpoint.
-
-
-
-## How to
-
-- [Control traffic to/from endpoints in a namespace](#control-traffic-tofrom-endpoints-in-a-namespace)
-- [Control traffic to/from endpoints independent of namespace](#control-traffic-tofrom-endpoints-independent-of-namespace)
-- [Control traffic to/from endpoints using IP addresses or CIDR ranges](#control-traffic-tofrom-endpoints-using-ip-addresses-or-cidr-ranges)
-- [Apply network policies in specific order](#apply-network-policies-in-specific-order)
-- [Generate logs for specific traffic](#generate-logs-for-specific-traffic)
-
-### Control traffic to/from endpoints in a namespace
-
-In the following example, ingress traffic to endpoints in the **namespace: production** with label **color: red** is allowed, only if it comes from a pod in the same namespace with **color: blue**, on port **6379**.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: allow-tcp-port-6379
- namespace: production
-spec:
- selector: color == 'red'
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: color == 'blue'
- destination:
- ports:
- - 6379
-```
-
-To allow ingress traffic from endpoints in other namespaces, use a **namespaceSelector** in the policy rule. A namespaceSelector matches namespaces based on the labels that are applied in the namespace. In the following example, ingress traffic is allowed from endpoints in namespaces that match **shape == circle**.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: allow-tcp-port-6379
- namespace: production
-spec:
- selector: color == 'red'
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: color == 'blue'
- namespaceSelector: shape == 'circle'
- destination:
- ports:
- - 6379
-```
-
-### Control traffic to/from endpoints independent of namespace
-
-The following Calico network policy is similar to the previous example, but uses **kind: GlobalNetworkPolicy** so it applies to all endpoints, regardless of namespace.
-
-In the following example, incoming TCP traffic to any pods with label **color: red** is denied if it comes from a pod with **color: blue**.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: deny-blue
-spec:
- selector: color == 'red'
- ingress:
- - action: Deny
- protocol: TCP
- source:
- selector: color == 'blue'
-```
-
-As with **kind: NetworkPolicy**, you can allow or deny ingress traffic from endpoints in specific namespaces using a namespaceSelector in the policy rule:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: deny-circle-blue
-spec:
- selector: color == 'red'
- ingress:
- - action: Deny
- protocol: TCP
- source:
- selector: color == 'blue'
- namespaceSelector: shape == 'circle'
-```
-
-### Control traffic to/from endpoints using IP addresses or CIDR ranges
-
-Instead of using a selector to define which traffic is allowed to/from the endpoints in a network policy, you can also specify an IP block in CIDR notation.
-
-In the following example, outgoing traffic is allowed from pods with the label **color: red** if it goes to an IP address in the **1.2.3.0/24** CIDR block.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: allow-egress-external
- namespace: production
-spec:
- selector: color == 'red'
- types:
- - Egress
- egress:
-    - action: Allow
- destination:
- nets:
- - 1.2.3.0/24
-```
-
-### Apply network policies in specific order
-
-To control the order/sequence of applying network policies, you can use the **order** field (with precedence from the lowest value to highest). Defining policy **order** is important when you include both **action: allow** and **action: deny** rules that may apply to the same endpoint.
-
-In the following example, the policy **allow-cluster-internal-ingress** (order: 10) is applied before the policy **drop-other-ingress** (order: 20).
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: drop-other-ingress
-spec:
- order: 20
- #...deny policy rules here...
-```
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: allow-cluster-internal-ingress
-spec:
- order: 10
- #...allow policy rules here...
-```
-
-### Generate logs for specific traffic
-
-In the following example, incoming TCP traffic to an application is denied, and each connection attempt is logged to syslog.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: allow-tcp-port-6379
- namespace: production
-spec:
- selector: role == 'database'
- types:
- - Ingress
- - Egress
- ingress:
- - action: Log
- protocol: TCP
- source:
- selector: role == 'frontend'
- - action: Deny
- protocol: TCP
- source:
- selector: role == 'frontend'
-```
-
-### Create policy for established connections
-
-Policies are applied immediately to any new connections. However, for connections that are already open, policy changes take effect only after the connection is re-established. This means that ongoing sessions may not reflect policy changes until they are initiated again.
-
-## Additional resources
-
-- For additional Calico network policy features, see [Calico network policy](../../reference/resources/networkpolicy.mdx) and [Calico global network policy](../../reference/resources/globalnetworkpolicy.mdx)
-- For an alternative to using IP addresses or CIDRs in policy, see [Network sets](../../reference/resources/networkset.mdx)
-- For details on how to stage network policy, see [Staged network policies](../staged-network-policies.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/index.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/index.mdx
deleted file mode 100644
index 1b7a16054f..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Learn how to create your first Calico Cloud network policy.
-hide_table_of_contents: true
----
-
-# Calico Cloud network policy for beginners
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/kubernetes-default-deny.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/kubernetes-default-deny.mdx
deleted file mode 100644
index be0574f606..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/kubernetes-default-deny.mdx
+++ /dev/null
@@ -1,147 +0,0 @@
----
-description: Create a default deny network policy so pods that are missing policy are not allowed traffic until appropriate network policy is defined.
----
-
-# Enable a default deny policy for Kubernetes pods
-
-## Big picture
-
-Enable a default deny policy for Kubernetes pods using Kubernetes or $[prodname] network policy.
-
-## Value
-
-A **default deny** network policy provides an enhanced security posture so pods without policy (or incorrect policy) are not allowed traffic until appropriate network policy is defined.
-
-## Features
-
-This how-to guide uses the following $[prodname] features:
-
-- **NetworkPolicy**
-- **GlobalNetworkPolicy**
-
-## Concepts
-
-### Default deny/allow behavior
-
-**Default allow** means all traffic is allowed by default, unless otherwise specified. **Default deny** means all traffic is denied by default, unless explicitly allowed. **Kubernetes pods are default allow**, unless network policy is defined to specify otherwise.
-
-For compatibility with Kubernetes, **$[prodname] network policy** enforcement follows the standard convention for Kubernetes pods:
-
-- If no network policies apply to a pod, then all traffic to/from that pod is allowed.
-- If one or more network policies apply to a pod with type ingress, then only the ingress traffic specifically allowed by those policies is allowed.
-- If one or more network policies apply to a pod with type egress, then only the egress traffic specifically allowed by those policies is allowed.
-
-For other endpoint types (VMs, host interfaces), the default behavior is to deny traffic. Only traffic specifically allowed by network policy is allowed, even if no network policies apply to the endpoint.
-
-## How to
-
-- [Create a default deny network policy](#create-a-default-deny-network-policy)
-- [Create a global default deny policy](#create-a-global-default-deny-policy)
-
-### Create a default deny network policy
-
-Immediately after installation, a best practice is to create a namespaced default deny network policy to secure pods without policy or incorrect policy until you can put policies in place and test them.
-
-In the following example, we create a $[prodname] default deny **NetworkPolicy** for all workloads in the namespace, **engineering**.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: default-deny
- namespace: engineering
-spec:
- selector: all()
- types:
- - Ingress
- - Egress
-```
-
-Here's an equivalent default deny **Kubernetes network policy** for all pods in the namespace, **engineering**
-
-```yaml
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
- name: default-deny
- namespace: engineering
-spec:
- podSelector: {}
- policyTypes:
- - Ingress
- - Egress
-```
-
-### Create a global default deny policy
-
-A default deny policy ensures that unwanted traffic (ingress and egress) is denied by default without you having to remember default deny/allow behavior of Kubernetes and $[prodname] policies. This policy can also help mitigate risks of lateral malicious attacks.
-
-#### Best practice #1: Allow, stage, then deny
-
-We recommend that you create a global default deny policy after you finish writing policies for the traffic that you want to allow. The following steps summarize the best practice for testing and locking down the cluster to block unwanted traffic:
-
-1. Create a global default deny policy and test it in a staging environment. (The policy will show all the traffic that would be blocked if it were converted into a deny.)
-1. Create network policies to individually allow the traffic shown as blocked in step 1 until no connections are denied.
-1. Enforce the global default deny policy.
-
-#### Best practice #2: Keep the scope to non-system pods
-
-A global default deny policy applies to the entire cluster: all workloads in all namespaces, and all hosts (computers that run the hypervisor for VMs or the container runtime for containers), including the Kubernetes control plane and the $[prodname] control plane nodes and pods.
-
-For this reason, the best practice is to create a global default deny policy for **non-system pods** as shown in the following example.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: deny-app-policy
-spec:
- namespaceSelector: has(projectcalico.org/name) && projectcalico.org/name not in {"kube-system", "calico-system", "tigera-system"}
- types:
- - Ingress
- - Egress
- egress:
- # allow all namespaces to communicate to DNS pods
- - action: Allow
- protocol: UDP
- destination:
- selector: 'k8s-app == "kube-dns"'
- ports:
- - 53
- - action: Allow
- protocol: TCP
- destination:
- selector: 'k8s-app == "kube-dns"'
- ports:
- - 53
-```
-
-Note the following:
-
-- Even though we call this policy "global default deny", the above policy is not explicitly denying traffic. By selecting the traffic with the `namespaceSelector` but not specifying an allow, the traffic is denied after all other policy is evaluated. This design also makes it unnecessary to ensure any specific order (priority) for the default-deny policy.
-- Allowing access to `kube-dns` simplifies per-pod policies because you don't need to duplicate the DNS rules in every policy
-- The policy deliberately excludes the `kube-system`, `calico-system`, and `tigera-system` namespaces by using a negative `namespaceSelector` to avoid impacting any control plane components
-
-In a staging environment, verify that the policy does not block any necessary traffic before enforcing it.
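-
-One way to do this is with a staged policy, which reports the traffic that would be denied without enforcing anything. The following is a minimal sketch that reuses the spec of the `deny-app-policy` example above:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: StagedGlobalNetworkPolicy
-metadata:
-  name: deny-app-policy-staged
-spec:
-  namespaceSelector: has(projectcalico.org/name) && projectcalico.org/name not in {"kube-system", "calico-system", "tigera-system"}
-  types:
-    - Ingress
-    - Egress
-  egress:
-    # Keep DNS reachable, as in the enforced example
-    - action: Allow
-      protocol: UDP
-      destination:
-        selector: 'k8s-app == "kube-dns"'
-        ports:
-          - 53
-    - action: Allow
-      protocol: TCP
-      destination:
-        selector: 'k8s-app == "kube-dns"'
-        ports:
-          - 53
-```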
-
-### Don't try this!
-
-The following policy works and looks fine on the surface. But as described in Best practice #2, the policy is too broad in scope and could break your cluster. Therefore, we do not recommend adding this type of policy, even if you have verified allowed traffic in your staging environment.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: default.default-deny
-spec:
- tier: default
- selector: all()
- types:
- - Ingress
- - Egress
-```
-
-## Additional resources
-
-- [Network policy](../../reference/resources/networkpolicy.mdx)
-- [Global network policy](../../reference/resources/globalnetworkpolicy.mdx)
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/external-ips-policy.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/external-ips-policy.mdx
deleted file mode 100644
index d2e5390952..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/external-ips-policy.mdx
+++ /dev/null
@@ -1,112 +0,0 @@
----
-description: Limit egress and ingress traffic using IP address either directly within Calico network policy or managed as Calico network sets.
----
-
-# Use external IPs or networks rules in policy
-
-## Big picture
-
-Use $[prodname] network policy to limit traffic to/from external non-$[prodname] workloads or networks.
-
-## Value
-
-Modern applications often integrate with third-party APIs and SaaS services that live outside Kubernetes clusters. To securely enable access to those integrations, network security teams must be able to limit IP ranges for egress and ingress traffic to workloads. This includes using IP lists or ranges to deny-list bad actors or embargoed countries.
-
-Using $[prodname] network policy, you can define IP addresses/CIDRs directly in policy to limit traffic to external networks. Or using $[prodname] network sets, you can easily scale out by using the same set of IPs in multiple policies.
-
-## Concepts
-
-### IP addresses/CIDRs
-
-IP addresses and CIDRs can be specified directly in both Kubernetes and $[prodname] network policy rules. $[prodname] network policy supports IPV4 and IPV6 CIDRs.
-
-### Network sets
-
-A **network set** resource is an arbitrary set of IP subnetworks/CIDRs that can be matched by standard label selectors in Kubernetes or $[prodname] network policy. This is useful to reference a set of IP addresses using a selector from a namespaced network policy resource. It is typically used when you want to scale/reuse the same set of IP addresses in policy.
-
-A **global network set** resource is similar, but can be selected only by $[prodname] global network policies.
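-
-For example, a namespaced network set and a policy egress rule that selects it by label might look like the following sketch (the names, label, and CIDR are illustrative):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkSet
-metadata:
-  name: external-api-ips
-  namespace: production
-  labels:
-    external-api: 'true'
-spec:
-  nets:
-    - 198.51.100.0/24
----
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
-  name: allow-to-external-api
-  namespace: production
-spec:
-  selector: color == 'red'
-  types:
-    - Egress
-  egress:
-    - action: Allow
-      destination:
-        # Matches the label on the NetworkSet above
-        selector: external-api == 'true'
-```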
-
-## How to
-
-- [Limit traffic to or from external networks, IPs in network policy](#limit-traffic-to-or-from-external-networks-ips-in-network-policy)
-- [Limit traffic to or from external networks, global network set](#limit-traffic-to-or-from-external-networks-global-network-set)
-
-### Limit traffic to or from external networks, IPs in network policy
-
-In the following example, a $[prodname] NetworkPolicy allows egress traffic from pods with the label **color: red**, if it goes to an IP address in the 192.0.2.0/24 CIDR block.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: allow-egress-external
- namespace: production
-spec:
- selector: color == 'red'
- types:
- - Egress
- egress:
- - action: Allow
- destination:
- nets:
- - 192.0.2.0/24
-```
-
-### Limit traffic to or from external networks, global network set
-
-In this example, we use a $[prodname] **GlobalNetworkSet** and reference it in a **GlobalNetworkPolicy**.
-
-In the following example, a $[prodname] **GlobalNetworkSet** deny-lists the CIDR ranges 192.0.2.55/32 and 203.0.113.0/24:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkSet
-metadata:
- name: ip-protect
- labels:
- ip-deny-list: 'true'
-spec:
- nets:
- - 192.0.2.55/32
- - 203.0.113.0/24
-```
-
-Next, we create two $[prodname] **GlobalNetworkPolicy** objects. The first is a high-order policy that allows traffic by default for anything that doesn't match our second policy. The second is a low-order policy that uses the **GlobalNetworkSet** label as a selector to deny ingress traffic (the `ip-deny-list` label created in the previous step). In the label selector, we also include the term **!has(projectcalico.org/namespace)**, which prevents this policy from matching pods or NetworkSets that also have this label. To more quickly enforce the denial of forwarded traffic to the host at the packet level, use the **doNotTrack** and **applyOnForward** options.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: forward-default-allow
-spec:
- selector: apply-ip-protect == 'true'
- order: 1000
- doNotTrack: true
- applyOnForward: true
- types:
- - Ingress
- ingress:
- - action: Allow
----
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: ip-protect
-spec:
- selector: apply-ip-protect == 'true'
- order: 0
- doNotTrack: true
- applyOnForward: true
- types:
- - Ingress
- ingress:
- - action: Deny
- source:
- selector: ip-deny-list == 'true' && !has(projectcalico.org/namespace)
-```
-
-## Additional resources
-
-- To understand how to use global network sets to mitigate common threats, see [Defend against DoS attacks](../../extreme-traffic/defend-dos-attack.mdx)
-- [Global network sets](../../../reference/resources/globalnetworkset.mdx)
-- [Global network policy](../../../reference/resources/globalnetworkpolicy.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/icmp-ping.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/icmp-ping.mdx
deleted file mode 100644
index 09ba250e91..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/icmp-ping.mdx
+++ /dev/null
@@ -1,130 +0,0 @@
----
-description: Control where ICMP/ping is used by creating a Calico network policy to allow and deny ICMP/ping messages for workloads and host endpoints.
----
-
-# Use ICMP/ping rules in policy
-
-## Big picture
-
-Use $[prodname] network policy to allow and deny ICMP/ping messages.
-
-## Value
-
-The **Internet Control Message Protocol (ICMP)** provides valuable network diagnostic functions, but it can also be used maliciously. Attackers can use
-it to learn about your network, or for DoS attacks. Using $[prodname] network policy, you can control where ICMP is used. For example, you can:
-
-- Allow ICMP ping, but only for workloads, host endpoints (or both)
-- Allow ICMP for pods launched by operators for diagnostic purposes, but block other uses
-- Temporarily enable ICMP to diagnose a problem, then disable it after the problem is resolved
-- Deny/allow ICMPv4 and/or ICMPv6
-
-## Concepts
-
-### ICMP packet type and code
-
-$[prodname] network policy also lets you deny and allow ICMP traffic based on specific types and codes. For example, you can specify ICMP type 5, code 2 to match specific ICMP redirect packets.
-
-For details, see [ICMP type and code](https://en.wikipedia.org/wiki/Internet_Control_Message_Protocol#Control_messages).
-
-## How to
-
-- [Deny all ICMP, all workloads and host endpoints](#deny-all-icmp-all-workloads-and-host-endpoints)
-- [Allow ICMP ping, all workloads and host endpoints](#allow-icmp-ping-all-workloads-and-host-endpoints)
-- [Allow ICMP matching protocol type and code, all Kubernetes pods](#allow-icmp-matching-protocol-type-and-code-all-kubernetes-pods)
-
-### Deny all ICMP, all workloads and host endpoints
-
-In this example, we introduce a "deny all ICMP" **GlobalNetworkPolicy**.
-
-This policy **selects all workloads and host endpoints**. It enables a default deny for all workloads and host endpoints, in addition to the explicit ICMP deny rules specified in the policy.
-
-If your ultimate goal is to allow some traffic, have your regular "allow" policies in place before applying a global deny-all ICMP traffic policy.
-
-In this example, all workloads and host endpoints are blocked from sending or receiving **ICMPv4** and **ICMPv6** messages.
-
-If **ICMPv6** messages are not used in your deployment, it is still good practice to deny them specifically as shown below.
-
-In any "deny-all" $[prodname] network policy, be sure to specify a lower order (**order:200**) than regular policies that might allow traffic.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: block-icmp
-spec:
- order: 200
- selector: all()
- types:
- - Ingress
- - Egress
- ingress:
- - action: Deny
- protocol: ICMP
- - action: Deny
- protocol: ICMPv6
- egress:
- - action: Deny
- protocol: ICMP
- - action: Deny
- protocol: ICMPv6
-```
-
-### Allow ICMP ping, all workloads and host endpoints
-
-In this example, workloads and host endpoints can receive **ICMPv4 type 8** and **ICMPv6 type 128** ping requests that come from other workloads and host endpoints.
-
-All other traffic may be allowed by other policies. If traffic is not explicitly allowed, it will be denied by default.
-
-The policy applies only to **ingress** traffic. (Egress traffic is not affected, and default deny is not enforced for egress.)
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: allow-ping-in-cluster
-spec:
- selector: all()
- types:
- - Ingress
- ingress:
- - action: Allow
- protocol: ICMP
- source:
- selector: all()
- icmp:
-        type: 8 # Ping request
- - action: Allow
- protocol: ICMPv6
- source:
- selector: all()
- icmp:
-        type: 128 # Ping request
-```
-
-### Allow ICMP matching protocol type and code, all Kubernetes pods
-
-In this example, only Kubernetes pods that match the selector **projectcalico.org/orchestrator == 'kubernetes'** are allowed to receive ICMPv4 **type 3, code 1** (host unreachable) messages.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: allow-host-unreachable
-spec:
- selector: projectcalico.org/orchestrator == 'kubernetes'
- types:
- - Ingress
- ingress:
- - action: Allow
- protocol: ICMP
- icmp:
-        type: 3 # Destination unreachable
-        code: 1 # Host unreachable
-```
-
-## Additional resources
-
-For more on the ICMP match criteria, see:
-
-- [Global network policy](../../../reference/resources/globalnetworkpolicy.mdx)
-- [Network policy](../../../reference/resources/networkpolicy.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/index.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/index.mdx
deleted file mode 100644
index c035f8e7de..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Control traffic to/from endpoints using Calico network policy rules.
-hide_table_of_contents: true
----
-
-# Policy rules
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/namespace-policy.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/namespace-policy.mdx
deleted file mode 100644
index 3591aa1b17..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/namespace-policy.mdx
+++ /dev/null
@@ -1,89 +0,0 @@
----
-description: Use namespaces and namespace selectors in Calico network policy to group or separate resources. Use network policies to allow or deny traffic to/from pods that belong to specific namespaces.
----
-
-# Use namespace rules in policy
-
-## Big picture
-
-Use $[prodname] network policies to reference pods in other namespaces.
-
-## Value
-
-Kubernetes namespaces let you group/separate resources to meet a variety of use cases. For example, you can use namespaces to separate development, production, and QA environments, or allow different teams to use the same cluster. You can use namespace selectors in $[prodname] network policies to allow or deny traffic to/from pods in specific namespaces.
-
-## How to
-
-- [Control traffic to/from endpoints in a namespace](#control-traffic-tofrom-endpoints-in-a-namespace)
-- [Use Kubernetes RBAC to control namespace label assignment](#use-kubernetes-rbac-to-control-namespace-label-assignment)
-
-### Control traffic to/from endpoints in a namespace
-
-In the following example, ingress traffic is allowed to endpoints in the **namespace: production** with label **color: red**, and only from a pod in the same namespace with **color: blue**, on **port 6379**.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: allow-tcp-6379
- namespace: production
-spec:
- selector: color == 'red'
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: color == 'blue'
- destination:
- ports:
- - 6379
-```
-
-To allow ingress traffic from endpoints in other namespaces, use a **namespaceSelector** in the policy rule. A namespaceSelector matches one or more namespaces based on the labels that are applied on the namespace. In the following example, ingress traffic is also allowed from endpoints with **color: blue** in namespaces with **shape: circle**.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: allow-tcp-6379
- namespace: production
-spec:
- selector: color == 'red'
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: color == 'blue'
- namespaceSelector: shape == 'circle'
- destination:
- ports:
- - 6379
-```
-
-### Use Kubernetes RBAC to control namespace label assignment
-
-Network policies can be applied to endpoints using selectors that match labels on the endpoint, the endpoint's namespace, or the endpoint's service account. By applying selectors based on the endpoint's namespace, you can use Kubernetes RBAC to control which users can assign labels to namespaces. This allows you to separate groups who can deploy pods from those who can assign labels to namespaces.
-
-In the following example, pods in namespaces labeled `environment: development` can communicate only with pods in namespaces that carry the same label.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: restrict-development-access
-spec:
- namespaceSelector: 'environment == "development"'
- ingress:
- - action: Allow
- source:
- namespaceSelector: 'environment == "development"'
- egress:
- - action: Allow
- destination:
- namespaceSelector: 'environment == "development"'
-```
-
-## Additional resources
-
-- For more network policies, see [Network policy](../../../reference/resources/networkpolicy.mdx)
-- To apply policy to all namespaces, see [Global network policy](../../../reference/resources/globalnetworkpolicy.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/policy-rules-overview.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/policy-rules-overview.mdx
deleted file mode 100644
index 2b2d7cbd7e..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/policy-rules-overview.mdx
+++ /dev/null
@@ -1,22 +0,0 @@
----
-description: Define network connectivity for Calico endpoints using policy rules and label selectors.
----
-
-# Basic rules
-
-## Big picture
-
-Use Calico policy rules and label selectors that match Calico endpoints (pods, OpenStack VMs, and host interfaces) to define network connectivity.
-
-## Value
-
-Using label selectors to identify the endpoints (pods, OpenStack VMs, host interfaces) that a policy applies to, or that should be selected by policy rules, means you can define policy without knowing the IP addresses of the endpoints. This is ideal for handling dynamic workloads with ephemeral IPs (such as Kubernetes pods).
-
-## How to
-
-Read [Get started with Calico policy](../calico-network-policy.mdx) and [Kubernetes policy](../../../tutorials/kubernetes-tutorials/kubernetes-network-policy.mdx), which cover all the basics of using label selectors in policies to select endpoints the policies apply to, or in policy rules.
-
-## Additional resources
-
-- [Global network policy](../../../reference/resources/globalnetworkpolicy.mdx)
-- [Network policy](../../../reference/resources/networkpolicy.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/service-accounts.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/service-accounts.mdx
deleted file mode 100644
index dec9f56e77..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/service-accounts.mdx
+++ /dev/null
@@ -1,116 +0,0 @@
----
-description: Use Kubernetes service accounts in policies to validate cryptographic identities and/or manage RBAC controlled high-priority rules across teams.
----
-
-# Use service accounts rules in policy
-
-## Big picture
-
-Use $[prodname] network policy to allow/deny traffic for Kubernetes service accounts.
-
-## Value
-
-Using $[prodname] network policy, you can leverage Kubernetes service accounts with RBAC for flexible control over how policies are applied in a cluster. For example, the security team can have RBAC permissions to:
-
-- Control which service accounts the developer team can use within a namespace
-- Write high-priority network policies for those service accounts (that the developer team cannot override)
-
-The network security team can maintain full control of security, while selectively allowing developer operations where it makes sense.
-
-## Concepts
-
-### Use smallest set of permissions required
-
-Operations on service accounts are controlled by RBAC, so you can grant permissions only to trusted entities (code and/or people) to create, modify, or delete service accounts. To perform any operation in a workload, clients are required to authenticate with the Kubernetes API server.
-
-If you do not explicitly assign a service account to a pod, it uses the default ServiceAccount in the namespace.
-
-You should not grant broad permissions to the default service account for a namespace. If an application needs access to the Kubernetes API, create separate service accounts with the smallest set of permissions required.
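-
-For example, you can create a dedicated service account and assign it to a pod with the `serviceAccountName` field (the names and image below are illustrative):
-
-```yaml
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: api-service
-  namespace: prod-engineering
----
-apiVersion: v1
-kind: Pod
-metadata:
-  name: api-pod
-  namespace: prod-engineering
-  labels:
-    app: api
-spec:
-  # Without this field, the pod would run as the namespace's default ServiceAccount
-  serviceAccountName: api-service
-  containers:
-    - name: api
-      image: registry.example.com/api:latest
-```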
-
-### Service account labels
-
-Like all other Kubernetes objects, service accounts have labels. You can use labels to create ‘groups’ of service accounts. $[prodname] network policy lets you select workloads by their service account using:
-
-- An exact match on service account name
-- A service account label selector expression
-
-## Before you begin...
-
-Configure unique Kubernetes service accounts for your applications.
-
-## How to
-
-- [Limit ingress traffic for workloads by service account name](#limit-ingress-traffic-for-workloads-by-service-account-name)
-- [Limit ingress traffic for workloads by service account label](#limit-ingress-traffic-for-workloads-by-service-account-label)
-- [Use Kubernetes RBAC to control service account label assignment](#use-kubernetes-rbac-to-control-service-account-label-assignment)
-
-### Limit ingress traffic for workloads by service account name
-
-In the following example, ingress traffic is allowed from any workload whose service account matches the names **api-service** or **user-auth-service**.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: demo-calico
- namespace: prod-engineering
-spec:
- ingress:
- - action: Allow
- source:
- serviceAccounts:
- names:
- - api-service
- - user-auth-service
- selector: 'app == "db"'
-```
-
-### Limit ingress traffic for workloads by service account label
-
-In the following example, ingress traffic is allowed from any workload whose service account matches the label selector, **app == web-frontend**.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: allow-web-frontend
- namespace: prod-engineering
-spec:
- ingress:
- - action: Allow
- source:
- serviceAccounts:
- selector: 'app == "web-frontend"'
- selector: 'app == "db"'
-```
-
-### Use Kubernetes RBAC to control service account label assignment
-
-Network policies can be applied to endpoints using selectors that match labels on the endpoint, the endpoint's namespace, or the endpoint's service account. By applying selectors based on the endpoint's service account, you can use Kubernetes RBAC to control which users can assign labels to service accounts. This allows you to separate groups who can deploy pods from those who can assign labels to service accounts.
-
-In the following example, pods whose service account is labeled `role: intern` can communicate only with pods whose service accounts carry the same `role: intern` label.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: restrict-intern-access
- namespace: prod-engineering
-spec:
- serviceAccountSelector: 'role == "intern"'
- ingress:
- - action: Allow
- source:
- serviceAccounts:
- selector: 'role == "intern"'
- egress:
- - action: Allow
- destination:
- serviceAccounts:
- selector: 'role == "intern"'
-```
-
-## Additional resources
-
-- [Network policy](../../../reference/resources/networkpolicy.mdx)
-- [Global network policy](../../../reference/resources/globalnetworkpolicy.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/service-policy.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/service-policy.mdx
deleted file mode 100644
index 09b3d19378..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/policy-rules/service-policy.mdx
+++ /dev/null
@@ -1,119 +0,0 @@
----
-description: Use Kubernetes Service names in policy rules.
----
-
-# Use service rules in policy
-
-## Big picture
-
-Use $[prodname] network policy to allow/deny traffic for Kubernetes services.
-
-## Value
-
-Using $[prodname] network policy, you can leverage Kubernetes Service names to easily define access to Kubernetes services. Using service names in policy enables you to:
-
-- Allow or deny access to the Kubernetes API service.
-- Reference port information already declared by the application, making it easier to keep policy up-to-date as application requirements change.
-
-## How to
-
-- [Allow access to the Kubernetes API for a specific namespace](#allow-access-to-the-kubernetes-api-for-a-specific-namespace)
-- [Allow access to Kubernetes DNS for the entire cluster](#allow-access-to-kubernetes-dns-for-the-entire-cluster)
-- [Allow access from a specified service](#allow-access-from-a-specified-service)
-
-### Allow access to the Kubernetes API for a specific namespace
-
-In the following example, egress traffic is allowed to the `kubernetes` service in the `default` namespace for all pods in the namespace `my-app`. This service is the typical
-access point for the Kubernetes API server.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: allow-api-access
- namespace: my-app
-spec:
- selector: all()
- egress:
- - action: Allow
- destination:
- services:
- name: kubernetes
- namespace: default
-```
-
-Endpoint addresses and ports to allow will be automatically detected from the service.
-
-### Allow access to Kubernetes DNS for the entire cluster
-
-In the following example, a GlobalNetworkPolicy is used to select all pods in the cluster to apply a rule which ensures
-all pods can access the Kubernetes DNS service.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: allow-kube-dns
-spec:
- selector: all()
- egress:
- - action: Allow
- destination:
- services:
- name: kube-dns
- namespace: kube-system
-```
-
-:::note
-
-This policy also enacts a default-deny behavior for all pods, so make sure any other required application traffic is allowed by a policy.
-
-:::
-
-### Allow access from a specified service
-
-In the following example, ingress traffic is allowed from the `frontend-service` service in the `frontend` namespace for all pods in the namespace `backend`.
-This allows all pods that back the `frontend-service` service to send traffic to all pods in the `backend` namespace.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: allow-frontend-service-access
- namespace: backend
-spec:
- selector: all()
- ingress:
- - action: Allow
- source:
- services:
- name: frontend-service
- namespace: frontend
-```
-
-We can also further specify the ports that the `frontend-service` service is allowed to access. The following example limits access from the `frontend-service`
-service to port 80.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: allow-frontend-service-access
- namespace: backend
-spec:
- selector: all()
- ingress:
- - action: Allow
- protocol: TCP
- source:
- services:
- name: frontend-service
- namespace: frontend
- destination:
- ports: [80]
-```
-
-## Additional resources
-
-- [Network policy](../../../reference/resources/networkpolicy.mdx)
-- [Global network policy](../../../reference/resources/globalnetworkpolicy.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/services/index.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/services/index.mdx
deleted file mode 100644
index 9a8084c99b..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/services/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Apply Calico policy to Kubernetes node ports, and to services that are exposed externally as cluster IPs.
-hide_table_of_contents: true
----
-
-# Policy for Kubernetes services
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/services/kubernetes-node-ports.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/services/kubernetes-node-ports.mdx
deleted file mode 100644
index 1f748d66cb..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/services/kubernetes-node-ports.mdx
+++ /dev/null
@@ -1,135 +0,0 @@
----
-description: Restrict access to Kubernetes node ports using Calico Cloud global network policy. Follow the steps to secure the host, the node ports, and the cluster.
----
-
-# Apply Calico Cloud policy to Kubernetes node ports
-
-## Big picture
-
-Restrict access to node ports to specific external clients.
-
-## Value
-
-Exposing services to external clients using node ports is a standard Kubernetes feature. However, if you want to restrict access to node ports to specific external clients, you need to use Calico global network policy.
-
-## Concepts
-
-### Network policy with preDNAT field
-
-In a Kubernetes cluster, kube-proxy will DNAT a request to the node's port and IP address to one of the pods that backs the service. For Calico global network policy to both allow normal ingress cluster traffic and deny other general ingress traffic, it must take effect before DNAT. To do this, you simply add a **preDNAT** field to a Calico global network policy. The preDNAT field:
-
-- Applies before DNAT
-- Applies only to ingress rules
-- Enforces all ingress traffic through a host endpoint, regardless of destination
- The destination can be a locally hosted pod, a pod on another node, or a process running on the host.
-
-## Before you begin...
-
-For services that you want to expose to external clients, configure Kubernetes services with type **NodePort**.
-
-## How to
-
-To securely expose a Kubernetes service to external clients, you must implement all of the following steps.
-
-- [Allow cluster ingress traffic, but deny general ingress traffic](#allow-cluster-ingress-traffic-but-deny-general-ingress-traffic)
-- [Allow local host egress traffic](#allow-local-host-egress-traffic)
-- [Create host endpoints with appropriate network policy](#create-host-endpoints-with-appropriate-network-policy)
-- [Allow ingress traffic to specific node ports](#allow-ingress-traffic-to-specific-node-ports)
-
-### Allow cluster ingress traffic but deny general ingress traffic
-
-In the following example, we create a global network policy (**allow-cluster-internal-ingress-only**) that allows cluster ingress traffic for the nodes’ IP addresses (**1.2.3.4/16**) and for pod IP addresses assigned by Kubernetes (**100.100.100.0/16**). By adding a preDNAT field, the Calico global network policy is applied before regular DNAT on the Kubernetes cluster.
-
-In this example, we use the **selector: has(kubernetes-host)** -- so the policy is applicable to any endpoint with a **kubernetes-host** label (but you can easily specify particular nodes).
-
-Finally, when you specify a preDNAT field, you must also add the **applyOnForward: true** field.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: allow-cluster-internal-ingress-only
-spec:
- order: 20
- preDNAT: true
- applyOnForward: true
- ingress:
- - action: Allow
- source:
- nets: [1.2.3.4/16, 100.100.100.0/16]
- - action: Deny
- selector: has(kubernetes-host)
-```
-
-### Allow local host egress traffic
-
-We also need a global network policy to allow egress traffic through each node's external interface. Otherwise, when we define host endpoints for those interfaces, no egress traffic will be allowed from local processes (except for traffic that is allowed by the [Failsafe rules](../../../reference/host-endpoints/failsafe.mdx)).
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: allow-outbound-external
-spec:
- order: 10
- egress:
- - action: Allow
- selector: has(kubernetes-host)
-```
-
-### Create host endpoints with appropriate network policy
-
-In this example, we assume that you have already defined Calico host endpoints with network policy that is appropriate for the cluster. (For example, you wouldn’t want a host endpoint with a “default deny all traffic to/from this host” network policy because that is counter to the goal of allowing/denying specific traffic.) For help, see [host endpoints](../../../reference/resources/hostendpoint.mdx).
-
-All of our previously defined global network policies have a selector that makes them applicable to any endpoint with a **kubernetes-host** label, so we will include that label in our definitions. For example, for **eth0** on **node1**:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: HostEndpoint
-metadata:
- name: node1-eth0
- labels:
- kubernetes-host: ingress
-spec:
- interfaceName: eth0
- node: node1
- expectedIPs:
- - INSERT_IP_HERE
-```
-
-When creating each host endpoint, replace `INSERT_IP_HERE` with the IP address on eth0. The `expectedIPs` field is required so that any selectors within ingress or egress rules can properly match the host endpoint.
-
-### Allow ingress traffic to specific node ports
-
-Now we can allow external access to the node ports by creating a global network policy with the preDNAT field. In this example, **ingress traffic is allowed** for any host endpoint with **port: 31852**.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: allow-nodeport
-spec:
- preDNAT: true
- applyOnForward: true
- order: 10
- ingress:
- - action: Allow
- protocol: TCP
- destination:
- selector: has(kubernetes-host)
- ports: [31852]
- selector: has(kubernetes-host)
-```
-
-To make the NodePort accessible only through particular nodes, give the nodes a particular label. For example:
-
-```yaml
-nodeport-external-ingress: true
-```
-
-Then, use **nodeport-external-ingress: true** as the selector of the **allow-nodeport** policy, instead of **has(kubernetes-host)**.
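-
-For example, a sketch of the modified **allow-nodeport** policy might look like the following (this assumes the label is used in both the policy selector and the destination selector, and keeps the example port 31852):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  name: allow-nodeport
-spec:
-  preDNAT: true
-  applyOnForward: true
-  order: 10
-  ingress:
-    - action: Allow
-      protocol: TCP
-      destination:
-        selector: nodeport-external-ingress == 'true'
-        ports: [31852]
-  selector: nodeport-external-ingress == 'true'
-```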
-
-## Additional resources
-
-- [Global network policy](../../../reference/resources/globalnetworkpolicy.mdx)
-- [Host endpoints](../../../reference/resources/hostendpoint.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/services/services-cluster-ips.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/services/services-cluster-ips.mdx
deleted file mode 100644
index 963ce09343..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/services/services-cluster-ips.mdx
+++ /dev/null
@@ -1,193 +0,0 @@
----
-description: Expose Kubernetes service cluster IPs over BGP using Calico Cloud, and restrict who can access them using Calico Cloud network policy.
----
-
-# Apply Calico Cloud policy to services exposed externally as cluster IPs
-
-## Big picture
-
-Control access to services exposed through clusterIPs that are advertised outside the cluster using BGP.
-
-## Value
-
-$[prodname] network policy lets you secure standard Kubernetes Services that are exposed to external clients in the following ways:
-
-- [Apply policy to Kubernetes nodeports](kubernetes-node-ports.mdx)
-- Using cluster IPs over BGP (described in this article)
-
-## Concepts
-
-### Advertise cluster IPs outside the cluster
-
-A **cluster IP** is a virtual IP address that represents a Kubernetes Service. Kube Proxy on each host translates the clusterIP into a pod IP for one of the pods backing the service, acting as a reverse proxy and load balancer.
-
-Cluster IPs were originally designed for use within the Kubernetes cluster. $[prodname] allows you to advertise Cluster IPs externally -- so external clients can use them to access services hosted inside the cluster. This means that $[prodname] ingress policy can be applied at **one or both** of the following locations:
-
-- Host interface, when the traffic destined for the clusterIP first ingresses the cluster
-- Pod interface of the backend pod
-
-### Traffic routing: local versus cluster modes
-
-$[prodname] implements [Kubernetes service external traffic policy](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip), which controls whether external traffic is routed to node-local or cluster-wide endpoints. The following table summarizes key differences between these settings. The default is **cluster mode**.
-
-| **Service setting** | **Traffic is load balanced...** | **Pros and cons** | **Required service type** |
-| ------------------------------------------- | --------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------- |
-| **externalTrafficPolicy: Cluster** (default) | Across all nodes in the cluster | Equal distribution of traffic among all pods running a service. <br/> Possible unnecessary network hops between nodes for ingress external traffic. When packets are rerouted to pods on another node, traffic is SNAT’d (source network address translation). <br/> Destination pod can see the proxying node’s IP address rather than the actual client IP. | **ClusterIP** |
-| **externalTrafficPolicy: Local** | Across the nodes with the endpoints for the service | Avoids extra hops so better for apps that ingress a lot of external traffic. <br/> Traffic is not SNAT’d so actual client IPs are preserved. <br/> Traffic distributed among pods running a service may be imbalanced. | **LoadBalancer** (for cloud providers), or **NodePort** (for node’s static port) |
-
-## Before you begin...
-
-[Configure Calico to advertise cluster IPs over BGP](../../../networking/configuring/advertise-service-ips.mdx).
-
-## How to
-
-Which mode to use depends on your goals and resources. At an operational level, **local mode** simplifies policy, but load balancing may be uneven in certain scenarios. **Cluster mode** requires more work to manage cluster IPs and SNAT, and to create policies that reference specific IP addresses, but load balancing is always even.
-
-- [Secure externally exposed cluster IPs, local mode](#secure-externally-exposed-cluster-ips-local-mode)
-- [Secure externally exposed cluster IPs, cluster mode](#secure-externally-exposed-cluster-ips-cluster-mode)
-
-### Secure externally exposed cluster IPs, local mode
-
-Using **local mode**, the original source address of external traffic is preserved, and you can define policy directly using standard $[prodname] network policy.
-
-1. Create $[prodname] **NetworkPolicies** or **GlobalNetworkPolicies** that select the same set of pods as your Kubernetes Service.
-1. Add rules to allow the external traffic.
-1. If desired, add rules to allow in-cluster traffic.
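-
-For example, a minimal sketch of such a policy, assuming hypothetical backend pods labeled `app == 'my-svc'` in a namespace `my-app`, external clients in `50.60.0.0/16`, and a service port of 80:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
-  name: allow-external-to-my-svc
-  namespace: my-app
-spec:
-  selector: app == 'my-svc'
-  types:
-    - Ingress
-  ingress:
-    # Allow the external clients; with local mode, the original client IP is preserved.
-    - action: Allow
-      protocol: TCP
-      source:
-        nets:
-          - 50.60.0.0/16
-      destination:
-        ports: [80]
-    # Optionally, also allow in-cluster traffic from pods in the same namespace.
-    - action: Allow
-      protocol: TCP
-      source:
-        selector: all()
-      destination:
-        ports: [80]
-```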
-
-### Secure externally exposed cluster IPs, cluster mode
-
-In the following steps, we define **GlobalNetworkPolicy** and **HostEndpoints**.
-
-#### Step 1: Verify Kubernetes Service manifest
-
-Ensure that your Kubernetes Service manifest explicitly lists the clusterIP; do not allow Kubernetes to automatically assign the clusterIP because you need it for your policies in the following steps.
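-
-For example, a Service manifest with an explicitly assigned cluster IP might look like this (the name, namespace, IP, and ports below are placeholders):
-
-```yaml
-apiVersion: v1
-kind: Service
-metadata:
-  name: svc-a
-  namespace: default
-spec:
-  clusterIP: 10.20.30.40
-  selector:
-    app: svc-a
-  ports:
-    - protocol: TCP
-      port: 80
-      targetPort: 8080
-```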
-
-#### Step 2: Create global network policy at the host interface
-
-In this step, you create a **GlobalNetworkPolicy** that selects all **host endpoints**. It controls access to the cluster IP, and prevents unauthorized clients from outside the cluster from accessing it. The hosts then forward only authorized traffic.
-
-**Set policy to allow external traffic for cluster IPs**
-
-Add rules to allow the external traffic for each clusterIP. The following example allows connections to two cluster IPs. Make sure you add the **applyOnForward** and **preDNAT** fields.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: allow-cluster-ips
-spec:
- selector: k8s-role == 'node'
- types:
- - Ingress
- applyOnForward: true
- preDNAT: true
- ingress:
- # Allow 50.60.0.0/16 to access Cluster IP A
- - action: Allow
- source:
- nets:
- - 50.60.0.0/16
- destination:
- nets:
- - 10.20.30.40/32 # Cluster IP A
- # Allow 70.80.90.0/24 to access Cluster IP B
- - action: Allow
- source:
- nets:
- - 70.80.90.0/24
- destination:
- nets:
- - 10.20.30.41/32 # Cluster IP B
-```
-
-**Add a rule to allow traffic destined for the pod CIDR**
-
-Without this rule, normal pod-to-pod traffic is blocked because the policy applies to forwarded traffic.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: allow-to-pods
-spec:
- selector: k8s-role == 'node'
- types:
- - Ingress
- applyOnForward: true
- preDNAT: true
- ingress:
- # Allow traffic forwarded to pods
- - action: Allow
- destination:
- nets:
- - 192.168.0.0/16 # Pod CIDR
-```
-
-**Add a rule to allow traffic destined for all host endpoints**
-
-Alternatively, you can add rules that allow specific host traffic, including Kubernetes and $[prodname] components. Without such a rule, normal host traffic is blocked.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: allow-traffic-hostendpoints
-spec:
- selector: k8s-role == 'node'
- types:
- - Ingress
-  ingress:
-    # Allow traffic to the node (not nodePorts, TCP)
-    - action: Allow
-      protocol: TCP
-      destination:
-        selector: k8s-role == 'node'
-        notPorts: ["30000:32767"] # nodePort range
-    # Allow traffic to the node (not nodePorts, UDP)
-    - action: Allow
-      protocol: UDP
-      destination:
-        selector: k8s-role == 'node'
-        notPorts: ["30000:32767"] # nodePort range
-```
-
-#### Step 3: Create a global network policy that selects pods
-
-In this step, you create a **GlobalNetworkPolicy** that selects the **same set of pods as your Kubernetes Service**. Add rules that allow host endpoints to access the service ports.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: allow-nodes-svc-a
-spec:
- selector: k8s-svc == 'svc-a'
- types:
- - Ingress
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: k8s-role == 'node'
- destination:
- ports: [80, 443]
- - action: Allow
- protocol: UDP
- source:
- selector: k8s-role == 'node'
- destination:
- ports: [80, 443]
-```
-
-#### Step 4: (Optional) Create network policies or global network policies that allow in-cluster traffic to access the service
-
-#### Step 5: Create HostEndpoints
-
-Create HostEndpoints for the interface of each host that will receive traffic for the clusterIPs. Be sure to label them so they are selected by the policy in Step 2 (Add a rule to allow traffic destined for the pod CIDR), and the rules in Step 3.
-
-In the previous example policies, the label **k8s-role: node** is used to identify these HostEndpoints.
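-
-For example, a sketch of one such HostEndpoint (the node name, interface, and IP address below are placeholders):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: HostEndpoint
-metadata:
-  name: node1-eth0
-  labels:
-    k8s-role: node
-spec:
-  interfaceName: eth0
-  node: node1
-  expectedIPs:
-    - 10.0.0.1
-```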
-
-## Additional resources
-
-- [Enable service IP advertisement](../../../networking/configuring/advertise-service-ips.mdx)
-- [Defend against DoS attacks](../../extreme-traffic/defend-dos-attack.mdx)
-- [Global network policy](../../../reference/resources/globalnetworkpolicy.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/simple-policy-cnx.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/simple-policy-cnx.mdx
deleted file mode 100644
index cee56954e0..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/beginners/simple-policy-cnx.mdx
+++ /dev/null
@@ -1,331 +0,0 @@
----
-description: Learn the extra features for Calico Cloud that make it so important for production environments.
----
-
-# Calico Cloud for Kubernetes demo
-
-This guide is a variation of the [simple policy demo](../../tutorials/kubernetes-tutorials/kubernetes-policy-basic.mdx) intended to introduce the extra features of $[prodname] to people already familiar with Project Calico for Kubernetes.
-
-It requires a Kubernetes cluster configured with Calico networking and $[prodname], and expects that you have `kubectl` configured to interact with the cluster.
-
-
-
-This guide assumes that you have installed all the $[prodname] components from the
-guides above and that your cluster consists of the following nodes:
-
-- k8s-node1
-- k8s-node2
-- k8s-master
-
-Where you see references to these in the text below, substitute your actual node names. You can list the nodes in your cluster with `kubectl get nodes`.
-
-## Configure Namespaces
-
-This guide will deploy pods in a Kubernetes namespace. Let's create the `Namespace` object for this guide.
-
-```
-kubectl create ns policy-demo
-```
-
-## Create demo pods
-
-We'll use Kubernetes `Deployment` objects to easily create pods in the namespace.
-
-1. Create some nginx pods in the `policy-demo` namespace.
-
- ```shell
- kubectl create deployment --namespace=policy-demo nginx --image=nginx
- ```
-
-1. Expose them through a service.
-
- ```shell
- kubectl expose --namespace=policy-demo deployment nginx --port=80
- ```
-
-1. Ensure the nginx service is accessible.
-
- ```shell
- kubectl run --namespace=policy-demo access --rm -ti --image busybox /bin/sh
- ```
-
- This should open up a shell session inside the `access` pod, as shown below.
-
- ```
- Waiting for pod policy-demo/access-472357175-y0m47 to be running, status is Pending, pod ready: false
-
- If you don't see a command prompt, try pressing enter.
-
- / #
- ```
-
-1. From inside the `access` pod, attempt to reach the `nginx` service.
-
- ```shell
- wget -q nginx -O -
- ```
-
- You should see a response from `nginx`. Great! Our service is accessible. You can exit the pod now.
-
-1. Inspect the network policies using calicoq. The `host` command displays
- information about the policies for endpoints on a given host.
-
- :::note
-
- calicoq complements calicoctl by inspecting the
- dynamic aspects of $[prodname] Policy: in particular displaying the endpoints actually affected by policies,
- and the policies that actually apply to endpoints.
-
-
- :::
-
- ```
- DATASTORE_TYPE=kubernetes calicoq host k8s-node1
- ```
-
- You should see the following output.
-
- ```
- Policies and profiles for each endpoint on host "k8s-node1":
-
- Workload endpoint k8s/tigera-prometheus.alertmanager-calico-node-alertmanager-0/eth0
- Policies:
- Policy "tigera-prometheus/knp.default.calico-node-alertmanager" (order 1000; selector "(projectcalico.org/orchestrator == 'k8s' && alertmanager == 'calico-node-alertmanager' && app == 'alertmanager') && projectcalico.org/namespace == 'tigera-prometheus'")
- Policy "tigera-prometheus/knp.default.calico-node-alertmanager-mesh" (order 1000; selector "(projectcalico.org/orchestrator == 'k8s' && alertmanager == 'calico-node-alertmanager' && app == 'alertmanager') && projectcalico.org/namespace == 'tigera-prometheus'")
- Policy "tigera-prometheus/knp.default.default-deny" (order 1000; selector "(projectcalico.org/orchestrator == 'k8s') && projectcalico.org/namespace == 'tigera-prometheus'")
- Profiles:
- Profile "kns.tigera-prometheus"
- Rule matches:
- Policy "tigera-prometheus/knp.default.calico-node-alertmanager-mesh" inbound rule 1 source match; selector "(projectcalico.org/namespace == 'tigera-prometheus') && (projectcalico.org/orchestrator == 'k8s' && app in { 'alertmanager' } && alertmanager in { 'calico-node-alertmanager' })"
-
- ...
-
- Workload endpoint k8s/policy-demo.nginx-8586cf59-5bxvh/eth0
- Policies:
- Profiles:
- Profile "kns.policy-demo"
- ```
-
- For each workload endpoint, the `Policies:` section lists the policies that
- apply to that endpoint, in the order they apply. calicoq displays both
- $[prodname] Policies and Kubernetes NetworkPolicies, although this
- example focuses on the latter. The `Rule matches:` section lists the
- policies that match that endpoint in their rules, in other words that have
- rules that deny or allow that endpoint as a packet source or destination.
-
- Focusing on the
- `k8s/tigera-prometheus.alertmanager-calico-node-alertmanager-0/eth0` endpoint:
-
- - The first two policies are defined in the monitor-calico.yaml manifest.
- The selectors here have been translated from the original NetworkPolicies to
- the $[prodname] format (note the addition of the namespace test).
-
- - The third policy and the following profile are created automatically by the
- policy controller.
-
-1. Use kubectl to see the detail of any particular policy or profile. For
- example, for the `kns.policy-demo` profile, which defines default behavior for
- pods in the `policy-demo` namespace:
-
- ```shell
- kubectl get profile kns.policy-demo -o yaml
- ```
-
- You should see the following output.
-
- ```yaml
- apiVersion: projectcalico.org/v3
- kind: Profile
- metadata:
- creationTimestamp: '2022-01-06T21:32:05Z'
- name: kns.policy-demo
- resourceVersion: 435026/
- uid: 75dd2ed4-d3a6-41ca-a106-db073bfa946a
- spec:
- egress:
- - action: Allow
- destination: {}
- source: {}
- ingress:
- - action: Allow
- destination: {}
- source: {}
- labelsToApply:
- pcns.projectcalico.org/name: policy-demo
- ```
-
- Alternatively, you can use $[prodname] Manager to inspect and view information and metrics associated with policies, endpoints, and nodes.
-
-## Enable isolation
-
-Let's turn on isolation in our policy-demo namespace. $[prodname] will then prevent connections to pods in this namespace.
-
-Running the following command creates a NetworkPolicy which implements a default deny behavior for all pods in the `policy-demo` namespace.
-
-```shell
-kubectl create -f - <
-
-### Workload and host endpoints
-
-Policy with domain names can be enforced on workload or host endpoints. When a policy with domain names applies to a workload endpoint, it
-allows that workload to connect out to the specified domains. When policy with domain names applies to a host endpoint, it allows clients
-directly on the relevant host (including any host-networked workloads) to connect out to the specified domains.
-
-### Trusted DNS servers
-
-$[prodname] trusts DNS information only from its list of DNS trusted servers. Using trusted DNS servers to back domain names in
-policy prevents a malicious workload from using IPs returned by a fake DNS server to hijack domain names in policy rules.
-
-By default, $[prodname] trusts the Kubernetes cluster’s DNS service (kube-dns or CoreDNS). For workload endpoints, these
-out-of-the-box defaults work with standard Kubernetes installs, so normally you won’t change them. For host endpoints you will need to add
-the IP addresses that the cluster nodes use for DNS resolution.
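-
-For example, if your nodes resolve DNS through a resolver at a hypothetical address such as `10.0.0.10`, you might extend the trusted servers list in the default FelixConfiguration. This is a sketch; it assumes the FelixConfiguration field for the DNSTrustedServers parameter follows the usual camel-case naming, and it keeps the default cluster DNS entry so workload endpoint policy continues to work:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: FelixConfiguration
-metadata:
-  name: default
-spec:
-  dnsTrustedServers:
-    # Default: trust the Kubernetes cluster DNS service (kube-dns or CoreDNS).
-    - k8s-service:kube-dns
-    # Hypothetical resolver used by the nodes themselves.
-    - 10.0.0.10
-```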
-
-## Before you begin
-
-**Not supported**
-
-DNS policy is not supported at egress of egress gateway pods. Domain-based rules will either never match in
-that hook, or they may match intermittently. Intermittent matches occur when a pod on the same node as the
-egress gateway pod happens to make a matching DNS query. This is because the DNS-to-IP cache used to render
-the policy is shared node-wide.
-
-## How to
-
-You can specify allowed domain names directly in a **global network policy** or **namespaced network policy**, or specify domain names in a **global network set** (and then
-reference the global network set in a global network policy).
-
-- [Use domain names in a global network policy](#use-domain-names-in-a-global-network-policy)
-- [Use domain names in a namespaced network policy](#use-domain-names-in-a-namespaced-network-policy)
-- [Use domain names in a global network set, reference the set in a global network policy](#use-domain-names-in-a-global-network-set)
-- [Use domain names in a network set, reference the set in a namespaced network policy](#use-domain-names-in-a-network-set)
-
-### Best practice
-
-Use a **global network set** when the same set of domains needs to be referenced in multiple policies, or when you want the allowed
-destinations to be a mix of domains and IPs from global network sets, or IPs from workload endpoints and host endpoints. By using a single
-destination selector in a global network set, you can potentially match all of these resources.
-
-### Use domain names in a global network policy
-
-In this method, you create a **GlobalNetworkPolicy** with egress rules with `action: Allow` and a `destination.domains` field specifying the
-domain names to which egress traffic is allowed.
-
-In the following example, the first rule allows DNS traffic, and the second rule allows connections outside the cluster to domains
-**api.alice.com** and **\*.example.com** (that is, any subdomain of `example.com`, such as **bob.example.com**).
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: allow-egress-to-domains
-spec:
- order: 1
- selector: my-pod-label == 'my-value'
- types:
- - Egress
- egress:
- - action: Allow
- protocol: UDP
- destination:
- ports:
- - 53
- - dns
- - action: Allow
- destination:
- domains:
- - api.alice.com
- - '*.example.com'
-```
-
-### Use domain names in a namespaced network policy
-
-In this method, you create a **NetworkPolicy** with egress rules with `action: Allow` and a `destination.domains` field specifying the
-domain names to which egress traffic is allowed.
-
-In the following example, the first rule allows DNS traffic, and the second rule allows connections outside the cluster to domains
-**api.alice.com** and **\*.example.com** (that is, any subdomain of `example.com`, such as **bob.example.com**).
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: allow-egress-to-domains
- namespace: rollout-test
-spec:
- order: 1
- selector: my-pod-label == 'my-value'
- types:
- - Egress
- egress:
- - action: Allow
- protocol: UDP
- destination:
- ports:
- - 53
- - dns
- - action: Allow
- destination:
- domains:
- - api.alice.com
- - '*.example.com'
-```
-
-The difference between this and the **GlobalNetworkPolicy** example is that this namespaced NetworkPolicy grants egress access to the specified domains only to workload endpoints in the `rollout-test` namespace.
-
-### Use domain names in a global network set
-
-In this method, you create a **GlobalNetworkSet** with the allowed destination domain names in the `allowedEgressDomains` field. Then,
-you create a **GlobalNetworkPolicy** with a `destination.selector` that matches that GlobalNetworkSet.
-
-In the following example, the allowed egress domains (`api.alice.com` and `*.example.com`) are specified in the GlobalNetworkSet.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkSet
-metadata:
- name: allowed-domains-1
- labels:
- color: red
-spec:
- allowedEgressDomains:
- - api.alice.com
- - '*.example.com'
-```
-
-Then, you reference the global network set in a **GlobalNetworkPolicy** using a destination label selector.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: allow-egress-to-domain
-spec:
- order: 1
- selector: my-pod-label == 'my-value'
- types:
- - Egress
- egress:
- - action: Allow
- destination:
- selector: color == 'red'
-```
-
-### Use domain names in a network set
-
-In this method, you create a **NetworkSet** with the allowed destination domain names in the `allowedEgressDomains` field. Then,
-you create a **NetworkPolicy** with a `destination.selector` that matches that NetworkSet.
-
-In the following example, the allowed egress domains (`api.alice.com` and `*.example.com`) are specified in the NetworkSet.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkSet
-metadata:
- name: allowed-domains-1
- namespace: rollout-test
- labels:
- color: red
-spec:
- allowedEgressDomains:
- - api.alice.com
- - '*.example.com'
-```
-
-Then, you reference the network set in a **NetworkPolicy** using a destination label selector.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: allow-egress-to-domain
- namespace: rollout-test
-spec:
- order: 1
- selector: my-pod-label == 'my-value'
- types:
- - Egress
- egress:
- - action: Allow
- destination:
- selector: color == 'red'
-```
-
-:::note
-
-Newly initialized pods sometimes fail to connect to domains that are allowed by a DNS policy (staged or enforced).
-In these cases, the pod tries to connect to a domain before $[prodname] has finished processing the DNS policy that allows the connection.
-As soon as the processing is complete, the pod is able to connect.
-
-To avoid these failed connections, you can add the following to your `FelixConfiguration` resource:
-
-```yaml
-...
-spec:
-  dnsPolicyMode: DelayDNSResponse
-...
-```
-
-For more information, see [DNSPolicyMode](../reference/resources/felixconfig.mdx#dnspolicymode).
-
-:::
-
-## Additional resources
-
-To change the default DNS trusted servers, use the [DNSTrustedServers parameter](../reference/component-resources/node/felix/configuration.mdx).
-
-For more detail about the relevant resources, see
-[GlobalNetworkSet](../reference/resources/globalnetworkset.mdx),
-[GlobalNetworkPolicy](../reference/resources/globalnetworkpolicy.mdx),
-[NetworkPolicy](../reference/resources/networkpolicy.mdx)
-and
-[NetworkSet](../reference/resources/networkset.mdx).
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/extreme-traffic/defend-dos-attack.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/extreme-traffic/defend-dos-attack.mdx
deleted file mode 100644
index d6acc8609a..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/extreme-traffic/defend-dos-attack.mdx
+++ /dev/null
@@ -1,107 +0,0 @@
----
-description: Define DoS mitigation rules in Calico Cloud policy to quickly drop connections when under attack. Learn how rules use eBPF and XDP, including hardware offload when available.
----
-
-# Defend against DoS attacks
-
-## Big picture
-
-Calico automatically enforces specific types of deny-list policies at the earliest possible point in the packet processing pipeline, including offloading to NIC hardware whenever possible.
-
-## Value
-
-During a DoS attack, a cluster can receive massive numbers of connection requests from attackers. The faster these connection requests are dropped, the less flooding and overloading to your hosts. When you define DoS mitigation rules in Calico network policy, Calico enforces the rules as efficiently as possible to minimize the impact.
-
-## Concepts
-
-### Earliest packet processing
-
-The earliest point in the packet processing pipeline that packets can be dropped depends on the Linux kernel version and the capabilities of the NIC driver and NIC hardware. Calico automatically uses the fastest available option.
-
-| Processed by... | Used by Calico if... | Performance |
-| --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------- |
-| NIC hardware | The NIC supports **XDP offload** mode. | Fastest |
-| NIC driver | The NIC driver supports **XDP native** mode. | Faster |
-| Kernel | The kernel supports **XDP generic mode** and Calico is configured to explicitly use it. This mode is rarely used and has no performance benefits over iptables raw mode below. To enable, see [Felix Configuration](../../reference/resources/felixconfig.mdx). | Fast |
-| Kernel | If none of the modes above are available, **iptables raw** mode is used. | Fast |
-
-:::note
-
-XDP modes require Linux kernel v4.16 or later.
-
-:::
-
-## How to
-
-The high-level steps to defend against a DoS attack are:
-
-- [Step 1: Create host endpoints](#step-1-create-host-endpoints)
-- [Step 2: Add CIDRs to deny-list in a global network set](#step-2-add-cidrs-to-deny-list-in-a-global-network-set)
-- [Step 3: Create deny incoming traffic global network policy](#step-3-create-deny-incoming-traffic-global-network-policy)
-
-### Best practice
-
-The following steps walk through the above required steps, assuming no prior configuration is in place. A best practice is to proactively do these steps before an attack (create the host endpoints, network policy, and global network set). In the event of a DoS attack, you can quickly respond by just adding the CIDRs that you want to deny-list to the global network set.
-
-### Step 1: Create host endpoints
-
-First, you create the HostEndpoints corresponding to the network interfaces where you want to enforce DoS mitigation rules. In the following example, the HostEndpoint secures the interface named **eth0** with IP **10.0.0.1** on node **jasper**.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: HostEndpoint
-metadata:
- name: production-host
- labels:
- apply-dos-mitigation: 'true'
-spec:
- interfaceName: eth0
- node: jasper
- expectedIPs: ['10.0.0.1']
-```
-
-### Step 2: Add CIDRs to deny-list in a global network set
-
-Next, you create a Calico **GlobalNetworkSet**, adding the CIDRs that you want to deny-list. In the following example, the global network set deny-lists the CIDR ranges **1.2.3.4/32** and **5.6.0.0/16**:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkSet
-metadata:
- name: dos-mitigation
- labels:
- dos-deny-list: 'true'
-spec:
- nets:
- - '1.2.3.4/32'
- - '5.6.0.0/16'
-```
-
-### Step 3: Create deny incoming traffic global network policy
-
-Finally, create a Calico GlobalNetworkPolicy adding the GlobalNetworkSet label (**dos-deny-list** in the previous step) as a selector to deny ingress traffic. To more quickly enforce the denial of forwarded traffic to the host at the packet level, use the **doNotTrack** and **applyOnForward** options.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: dos-mitigation
-spec:
- selector: apply-dos-mitigation == 'true'
- doNotTrack: true
- applyOnForward: true
- types:
- - Ingress
- ingress:
- - action: Deny
- source:
- selector: dos-deny-list == 'true'
-```
-
-## Additional resources
-
-- [Global network sets](../../reference/resources/globalnetworkset.mdx)
-- [Global network policy](../../reference/resources/globalnetworkpolicy.mdx)
-- [Create a host endpoint](../../reference/resources/hostendpoint.mdx)
-- [Introduction to XDP](https://www.iovisor.org/technology/xdp)
-- [Advanced XDP documentation](https://prototype-kernel.readthedocs.io/en/latest/networking/XDP/index.html)
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/extreme-traffic/high-connection-workloads.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/extreme-traffic/high-connection-workloads.mdx
deleted file mode 100644
index aa42a7b1c7..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/extreme-traffic/high-connection-workloads.mdx
+++ /dev/null
@@ -1,89 +0,0 @@
----
-description: Create a Calico network policy rule to bypass Linux conntrack for traffic to workloads that experience an extremely large number of connections.
----
-
-# Enable extreme high-connection workloads
-
-## Big picture
-
-Use a $[prodname] network policy rule to bypass Linux conntrack for traffic to workloads that experience an extremely large number of connections.
-
-## Value
-
-When the number of connections on a node exceeds the number of connections that Linux conntrack can track, connections can be rejected or dropped. $[prodname] network policy can be used to selectively bypass Linux conntrack for traffic to/from these types of workloads.
-
-## Concepts
-
-### Linux conntrack
-
-Connection tracking (“conntrack”) is a core feature of the Linux kernel’s networking stack. It allows the kernel to keep track of all logical network connections or flows, and thereby identify all of the packets that make up each flow so they can be handled consistently together. Conntrack is an essential part of the mainline Linux network processing pipeline, normally improving performance, and enabling NAT and stateful access control.
-
-### Extreme high-connection workloads
-
-Some niche workloads handling an extremely high number of simultaneous connections, or a very high rate of short-lived connections, can exceed the maximum number of connections Linux conntrack is able to track. One real-world example of such a workload is an extreme-scale memcached server handling 50k+ connections per second.
-
-### $[prodname] doNotTrack network policy
-
-The $[prodname] global network policy option, **doNotTrack**, indicates that the rules in the policy should be applied before connection tracking, and that packets allowed by these rules should not be tracked. The policy is applied early in the Linux packet processing pipeline, before any regular network policy rules, and independent of the policy order field.
-
-Unlike normal network policy rules, doNotTrack network policy rules are stateless, meaning you must explicitly specify rules to allow return traffic that would normally be automatically allowed by conntrack. For example, for a server on port 999, the policy must include an ingress rule allowing inbound traffic to port 999, and an egress rule to allow outbound traffic from port 999.
-
-In a doNotTrack policy:
-
-- Ingress rules apply to all incoming traffic through a host endpoint, regardless of where the traffic is going
-- Egress rules apply only to traffic that is sent from the host endpoint (not a local workload)
-
-Finally, you must add **applyOnForward: true** for a **doNotTrack** policy to work.
-
-## Before you begin...
-
-Before creating a **doNotTrack** network policy, read this [blog](https://www.tigera.io/blog/when-linux-conntrack-is-no-longer-your-friend/) to understand use cases, benefits, and trade offs.
-
-## How to
-
-### Bypass connection traffic for high connection server
-
-In the following example, a memcached server pod with **hostNetwork: true** was scheduled on the node memcached-node-1. We create a HostEndpoint for the node. Next, we create a GlobalNetworkPolicy with symmetrical ingress and egress rules, with doNotTrack and applyOnForward set to true.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: HostEndpoint
-metadata:
- name: memcached-node-1-eth0
- labels:
- memcached: server
-spec:
- interfaceName: eth0
- node: memcached-node-1
- expectedIPs:
- - 10.128.0.162
----
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: memcached-server
-spec:
- selector: memcached == 'server'
- applyOnForward: true
- doNotTrack: true
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: memcached == 'client'
- destination:
- ports:
- - 12211
- egress:
- - action: Allow
- protocol: TCP
- source:
- ports:
- - 12211
- destination:
- selector: memcached == 'client'
-```
-
-## Additional resources
-
-[Global network policy](../../reference/resources/globalnetworkpolicy.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/extreme-traffic/index.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/extreme-traffic/index.mdx
deleted file mode 100644
index 65e6316852..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/extreme-traffic/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Use Calico network policy early in the Linux packet processing pipeline to handle extreme traffic scenarios.
-hide_table_of_contents: true
----
-
-# Policy for extreme traffic
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/hosts/host-forwarded-traffic.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/hosts/host-forwarded-traffic.mdx
deleted file mode 100644
index e25adab636..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/hosts/host-forwarded-traffic.mdx
+++ /dev/null
@@ -1,151 +0,0 @@
----
-description: Apply Calico Cloud network policy to traffic being forward by hosts acting as routers or NAT gateways.
----
-
-# Apply policy to forwarded traffic
-
-## Big picture
-
-Enforce $[prodname] policy on traffic transiting a host that is used as a router or NAT gateway.
-
-## Value
-
-If your host has multiple network interfaces and is configured as a router or NAT gateway between two different networks, you may want to enforce policy on traffic as it moves between the networks. In this configuration, often neither the source nor the destination is a $[prodname] endpoint, so policy enforcement at the endpoint is not available. You can centrally manage the firewall policy on a fleet of such hosts using the same policy language as the rest of $[prodname].
-
-## Concepts
-
-### Workload endpoints and host endpoints
-
-The following figure shows a host with two network interfaces: eth0 and eth1. We call these **host endpoints (HEPs)**. The host also runs two guest workloads (VMs or containers). We call the virtual interfaces to the guests **workload endpoints (WEPs)**. Each has a corresponding configuration object on the $[prodname] API called HostEndpoint and WorkloadEndpoint, respectively.
-
-The `HostEndpoint` API object is optional, and $[prodname] does not enforce any policy on the HEP if the API object is missing. The `WorkloadEndpoint` API object is required, and is automatically managed by the cluster orchestrator plugin (for example, Kubernetes or OpenStack).
-
-Several connections are shown in the figure, numbered 1 through 4. For example, connection 1 ingresses over HEP eth0, is forwarded, and then ingresses Workload A’s WEP. Calico policies select which WEPs or HEPs they apply to. So, for example, an ingress policy that selects Workload A’s WEP applies to connection 1 in the figure.
-
-![Host-forward-traffic](/img/calico-enterprise/host-forward-traffic.png)
-
-### applyOnForward
-
-By default, $[prodname] global network policies set **applyOnForward to false**. When set to false on policies that select HEPs, the policies are applied only to traffic that originates or terminates on the host, for example: connection 4 (Node process). Connections 1-3 are unaffected by policies that select the HEP, but have applyOnForward set to false.
-
-In contrast, if applyOnForward is set to true for a policy that selects a HEP, that policy can apply to all connections 1-4. For example:
-
-- Ingress policy on HEP eth0 affects connections 1 and 2
-- Egress policy on HEP eth1 affects connections 2, 3, and 4
-
-There are also different default action semantics for **applyOnForward: true policy** versus **applyOnForward: false policy**.
-An applyOnForward: true policy affects all traffic through the HEP (connections 1-4). If no applyOnForward policy selects the HEP and direction (ingress versus egress), then forwarded traffic is allowed. If no policy (regardless of applyOnForward) selects the HEP and direction, then local traffic is denied.
-
-| **HEP defined?** | **Traffic Type** | **applyOnForward defined?** | **Any policy defined?** | **Default Action** |
-| ---------------- | ---------------- | --------------------------- | ----------------------- | ------------------ |
-| No | Any | n/a | n/a | Allow |
-| Yes | Forwarded | No | Any | Allow |
-| Yes | Forwarded | Yes | Yes | Deny |
-| Yes | Local | n/a | No | Deny |
-| Yes | Local | n/a | Yes | Deny |
-
-**$[prodname] namespaced network policies** do not have an applyOnForward setting. HEPs are always cluster global, not namespaced, so network policies cannot select them.
-
-### preDNAT policy
-
-Hosts are often configured to perform Destination Network Address Translation before forwarding certain packets. A common example of this in cloud computing is when the host acts as a reverse-proxy to load balance service requests for a set of backend workload instances. To apply policy to a specific example of such a reverse-proxy, see [Kubernetes nodePorts](../beginners/services/kubernetes-node-ports.mdx).
-
-When preDNAT is set to false on a global network policy, the policy rules are evaluated on the connection after DNAT is performed. False is the default. When preDNAT is set to true, the policy rules are evaluated on the connection before DNAT has been performed.
-
-If you set preDNAT to true, you must also set applyOnForward to true, and the policy can include only ingress rules.
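-
-For example, a skeleton of a valid preDNAT policy might look like the following (the policy name, selector, and port are placeholders):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  name: pre-dnat-skeleton
-spec:
-  selector: network == 'application'
-  preDNAT: true
-  applyOnForward: true
-  types:
-    - Ingress
-  ingress:
-    - action: Allow
-      protocol: TCP
-      destination:
-        ports: [80]
-```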
-
-### Host Endpoints with interfaceName: `*`
-
-HostEndpoint API objects can be created with the name of the host interface (as reported by ip link or similar), or they can be created with interfaceName set to `*`, which means all host interfaces on the given node, including the interfaces between the host to any WEPs on that host.
-
-With interfaceName set to a particular interface, any policies that select the HEP apply only if the traffic goes through the named interface. With it set to `*`, policies that select the HEP apply regardless of the interface.
-
-This is particularly relevant when you want to enforce policy for a host that also runs guest workloads like VMs or Pods. Traffic from local workloads to reverse-proxy IPs or ports does not traverse any external interfaces, so a HEP with interfaceName set to \* is required for policy to apply to it.
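-
-For example, a sketch of a wildcard HostEndpoint (the node name and label are placeholders):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: HostEndpoint
-metadata:
-  name: myhost-all-interfaces
-  labels:
-    role: reverse-proxy
-spec:
-  interfaceName: '*'
-  node: myhost
-```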
-
-## How to
-
-### Control forwarded traffic in or out of particular networks
-
-1. Choose a labeling scheme for your Host Endpoints (network interfaces).
- For example, if you have an application network and management network, you might choose the labels **network = application** and **network = management**.
-1. Write GlobalNetworkPolicies expressing your desired rules.
- - applyOnForward set to true.
- - Use the **selector:** to choose which Host Endpoints to apply policy.
-1. Create the HostEndpoint objects through the $[prodname] API.
- - Label the HostEndpoints according to the label scheme you developed in step 1.
- - We recommend that you create policies before you create the Host Endpoints. This ensures that all policies exist before $[prodname] starts enforcing.
-
-## Tutorial
-
-Let’s say I have a host that has two network interfaces:
-
-- eth0 - connects to the main datacenter network for application traffic
-- eth1 - connects to a special maintenance network
-
-My goal is to allow SSH traffic to be forwarded to the maintenance network, but to drop all other traffic.
-
-I choose the following label scheme:
-
-- network = application for the main datacenter network for application traffic
-- network = maintenance for the maintenance network
-
-I create the GlobalNetworkPolicy that allows SSH traffic (default deny is implicit in this case).
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: allow-ssh-maintenance
-spec:
- selector: network == 'maintenance'
- applyOnForward: true
- types:
- - Egress
- egress:
- # Allow SSH
- - action: Allow
- protocol: TCP
- destination:
- ports:
- - 22
-```
-
-Save this as allow-ssh-maintenance.yaml.
-
-Apply the policy to the cluster:
-
-```bash
-kubectl create -f allow-ssh-maintenance.yaml
-```
-
-Finally, create the host endpoint for the interface that connects to the maintenance network.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: HostEndpoint
-metadata:
- name: myhost.eth1
- labels:
- network: maintenance
-spec:
- interfaceName: eth1
- node: myhost
- expectedIPs:
- - 192.168.0.45
-```
-
-Replace myhost with the node name $[prodname] uses, and the expected IPs with the actual interface IP address(es). Save this file as hep.yaml.
-
-Apply the host endpoint to the cluster:
-
-```bash
-kubectl create -f hep.yaml
-```
-
-For completeness, you could also create a HostEndpoint for eth0, but because we have not written any policies for the application network yet, you can omit this step.
-
-## Additional resources
-
-- [Host endpoint](../../reference/resources/hostendpoint.mdx)
-- [Workload endpoint](../../reference/resources/workloadendpoint.mdx)
-- [Global network policy](../../reference/resources/globalnetworkpolicy.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/hosts/kubernetes-nodes.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/hosts/kubernetes-nodes.mdx
deleted file mode 100644
index 0f9717c97c..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/hosts/kubernetes-nodes.mdx
+++ /dev/null
@@ -1,215 +0,0 @@
----
-description: Protect Kubernetes nodes with host endpoints managed by Calico Cloud.
----
-
-# Protect Kubernetes nodes
-
-## Big picture
-
-Secure Kubernetes nodes with host endpoints managed by $[prodname].
-
-## Value
-
-$[prodname] can automatically create host endpoints for your Kubernetes nodes. This means $[prodname] can manage the lifecycle of host endpoints as your cluster evolves, ensuring nodes are always protected by policy.
-
-## Concepts
-
-### Host endpoints
-
-Each host has one or more network interfaces that it uses to communicate externally. You can represent these interfaces in $[prodname] using host endpoints and then use network policy to secure them.
-
-$[prodname] host endpoints can have labels, and they work the same as labels on workload endpoints. The network policy rules can apply to both workload and host endpoints using label selectors.
-
-Automatic host endpoints secure all of the host's interfaces (i.e. in Linux, all the interfaces in the host network namespace). They are created by setting `interfaceName: "*"`.
-
-### Automatic host endpoints
-
-$[prodname] creates a wildcard host endpoint for each node, with the host endpoint containing the same labels and IP addresses as its corresponding node.
-$[prodname] keeps these managed host endpoints in sync with their nodes' labels and IP addresses through periodic syncs.
-This means that policy targeting these automatic host endpoints continues to select the intended nodes, even if a node's IPs or labels change over time.
-
-Automatic host endpoints are differentiated from other host endpoints by the label `projectcalico.org/created-by: calico-kube-controllers`.
-Enable or disable automatic host endpoints by configuring the default KubeControllersConfiguration resource.
-
-## Before you begin
-
-**Unsupported**
-
-- GKE
-
-## How to
-
-- [Enable automatic host endpoints](#enable-automatic-host-endpoints)
-- [Apply network policy to automatic host endpoints](#apply-network-policy-to-automatic-host-endpoints)
-
-### Enable automatic host endpoints
-
-To enable automatic host endpoints, edit the default KubeControllersConfiguration instance, and set `spec.controllers.node.hostEndpoint.autoCreate` to `Enabled`:
-
-```bash
-kubectl patch kubecontrollersconfiguration default --patch='{"spec": {"controllers": {"node": {"hostEndpoint": {"autoCreate": "Enabled"}}}}}'
-```
-
-If successful, host endpoints are created for each of your cluster's nodes:
-
-```bash
-kubectl get heps -o wide
-```
-
-The output may look similar to this:
-
-```
-NAME CREATED AT
-ip-172-16-101-147.us-west-2.compute.internal-auto-hep 2021-05-12T22:16:47Z
-ip-172-16-101-54.us-west-2.compute.internal-auto-hep 2021-05-12T22:16:47Z
-ip-172-16-101-79.us-west-2.compute.internal-auto-hep 2021-05-12T22:16:47Z
-ip-172-16-101-9.us-west-2.compute.internal-auto-hep 2021-05-12T22:16:47Z
-ip-172-16-102-63.us-west-2.compute.internal-auto-hep 2021-05-12T22:16:47Z
-```
-
-### Apply network policy to automatic host endpoints
-
-To apply policy that targets all Kubernetes nodes, first add a label to the nodes.
-The label will be synced to their automatic host endpoints.
-
-For example, to add the label **kubernetes-host** to all nodes and their host endpoints:
-
-```bash
-kubectl label nodes --all kubernetes-host=
-```
-
-And an example policy snippet:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: all-nodes-policy
-spec:
- selector: has(kubernetes-host)
- #
-```
-
-To select a specific set of host endpoints (and their corresponding Kubernetes nodes), use a policy selector that selects a label unique to that set of host endpoints.
-For example, if we want to add the label **environment=dev** to nodes named node1 and node2:
-
-```bash
-kubectl label node node1 environment=dev
-kubectl label node node2 environment=dev
-```
-
-With the labels in place and automatic host endpoints enabled, host endpoints for node1 and node2 will be updated with the **environment=dev** label.
-We can write policy to select that set of nodes with a combination of selectors:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: some-nodes-policy
-spec:
- selector: has(kubernetes-host) && environment == 'dev'
- #
-```
-
-## Tutorial
-
-This tutorial will lock down Kubernetes node ingress to only allow SSH and required ports for Kubernetes to function.
-We will apply two policies: one for the control plane nodes, and one for the worker nodes.
-
-:::note
-
-Note: This tutorial was tested on a cluster created with kubeadm v1.18.2 on AWS, using a "stacked etcd" [topology](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/). Stacked etcd topology means the etcd pods are running on the masters. kubeadm uses stacked etcd by default.
-If your Kubernetes cluster is on a different platform, is running a variant of Kubernetes, or is running a topology with an external etcd cluster,
-please review the required ports for control plane and worker nodes in your cluster and adjust the policies in this tutorial as needed.
-
-:::
-
-First, let's restrict ingress traffic to the control plane nodes. The ingress policy below contains three rules.
-The first rule allows access to the API server port from anywhere. The second rule allows all traffic to localhost, which
-allows Kubernetes to access control plane processes. These control plane processes include the etcd server client API, the scheduler, and the controller-manager.
-This rule also allows localhost access to the kubelet API and calico/node health checks.
-The final rule allows the etcd pods to peer with each other and allows the masters to access each other's kubelet API.
-
-If you have not modified the failsafe ports, you should still have SSH access to the nodes after applying this policy.
-Now apply the ingress policy for the Kubernetes masters:
-
-```
-kubectl apply -f - << EOF
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: ingress-k8s-masters
-spec:
- selector: has(node-role.kubernetes.io/master)
- # This rule allows ingress to the Kubernetes API server.
- ingress:
- - action: Allow
- protocol: TCP
- destination:
- ports:
- # kube API server
- - 6443
- # This rule allows all traffic to localhost.
- - action: Allow
- destination:
- nets:
- - 127.0.0.0/8
- # This rule is required in multi-master clusters where etcd pods are colocated with the masters.
- # Allow the etcd pods on the masters to communicate with each other. 2380 is the etcd peer port.
- # This rule also allows the masters to access the kubelet API on other masters (including itself).
- - action: Allow
- protocol: TCP
- source:
- selector: has(node-role.kubernetes.io/master)
- destination:
- ports:
- - 2380
- - 10250
-EOF
-```
-
-Note that the above policy selects the standard **node-role.kubernetes.io/master** label that kubeadm sets on control plane nodes.
-
-Next, we need to apply policy to restrict ingress to the Kubernetes workers.
-Before adding the policy, we will add a label to all of our worker nodes, which then gets added to their automatic host endpoints.
-For this tutorial we will use **kubernetes-worker**. An example command to add the label to worker nodes:
-
-```bash
-kubectl get node -l '!node-role.kubernetes.io/master' -o custom-columns=NAME:.metadata.name | tail -n +2 | xargs -I{} kubectl label node {} kubernetes-worker=
-```
-
-The workers' ingress policy consists of two rules. The first rule allows all traffic to localhost. As with the masters,
-the worker nodes need to access their localhost kubelet API and calico/node healthcheck.
-The second rule allows the masters to access the workers' kubelet API. Now apply the policy:
-
-```bash
-kubectl apply -f - << EOF
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: ingress-k8s-workers
-spec:
- selector: has(kubernetes-worker)
- # Allow all traffic to localhost.
- ingress:
- - action: Allow
- destination:
- nets:
- - 127.0.0.0/8
-  # Allow only the masters access to the nodes' kubelet API.
- - action: Allow
- protocol: TCP
- source:
- selector: has(node-role.kubernetes.io/master)
- destination:
- ports:
- - 10250
-EOF
-```
-
-## Additional resources
-
-- [Apply policy to Kubernetes node ports](../beginners/services/kubernetes-node-ports.mdx)
-- [Global network policy](../../reference/resources/globalnetworkpolicy.mdx)
-- [Host endpoints](../../reference/resources/hostendpoint.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/index.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/index.mdx
deleted file mode 100644
index 99c6267e77..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/index.mdx
+++ /dev/null
@@ -1,55 +0,0 @@
----
-description: Calico Cloud Network Policy and Calico Cloud Global Network Policy are the fundamental resources to secure workloads and hosts, and to adopt a zero trust security model.
----
-
-import { DocCardLink, DocCardLinkLayout } from '/src/___new___/components';
-
-# Network policy
-
-Writing network policies is how you restrict traffic to pods in your Kubernetes cluster.
-$[prodname] extends the standard `NetworkPolicy` object to provide advanced network policy features, such as policies that apply to all namespaces.
-
-## Getting started
-
-
-
-
-
-
-
-
-
-
-## Policy rules
-
-
-
-
-
-
-
-
-
-
-## Policy tiers
-
-
-
-
-
-
-
-
-## Policy for services
-
-
-
-
-
-
-## Policy for extreme traffic
-
-
-
-
-
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/networksets.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/networksets.mdx
deleted file mode 100644
index 421b4d2a83..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/networksets.mdx
+++ /dev/null
@@ -1,267 +0,0 @@
----
-description: Learn the power of network sets and why you should create them.
----
-
-# Get started with network sets
-
-## Visualize traffic to/from your cluster
-
-Modern applications often integrate with third-party APIs and SaaS services that live outside Kubernetes clusters. To securely enable access to those integrations, you must be able to limit IP ranges for egress and ingress traffic to workloads. Limiting IP lists or ranges is also used to deny-list bad actors or embargoed countries. To limit IP ranges, you need to use the $[prodname] resource called **network sets**.
-
-## What are network sets?
-
-**Network sets** are a grouping mechanism that allows you to create an arbitrary set of IP subnetworks/CIDRs or domains that can be matched by standard label selectors in Kubernetes or $[prodname] network policy. Like IP pools for pods, they allow you to reuse/scale sets of IP addresses in policies.
-
-A **network set** is a namespaced resource that you can use with Kubernetes or $[prodname] network policies; a **global network set** is a cluster-wide resource that you can use with $[prodname] network policies.
-
-Like network policy, you manage user access to network sets using standard Kubernetes RBAC.
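-
-For example, here is a minimal sketch of a namespaced RBAC Role (the role name is illustrative) that lets one team manage network sets in the `hipstershop` namespace while other users only consume them in policy:
-
-```yaml
-apiVersion: rbac.authorization.k8s.io/v1
-kind: Role
-metadata:
-  name: networkset-editor
-  namespace: hipstershop
-rules:
-  # Grant full control of namespaced network sets in this namespace only.
-  - apiGroups: ['projectcalico.org']
-    resources: ['networksets']
-    verbs: ['get', 'list', 'watch', 'create', 'update', 'patch', 'delete']
-```
-
-You would then bind the role to the network set owners with a standard RoleBinding.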
-
-## Why are network sets powerful?
-
-If you are familiar with Service Graph in Manager UI, you know the value of seeing pod-to-pod traffic within your cluster. But what about traffic external to your cluster?
-
-$[prodname] automatically detects IPs for pods and nodes that fall into the standard IETF “public network” and “private network” designations, and displays those as icons in Service Graph. So you get some visibility into external traffic without using any network sets.
-
-![public-private-networks](/img/calico-enterprise/public-private-networks.png)
-
-However, when you create network sets, you can get more granular visibility into what's leaving the cluster to public networks. Because you control the grouping, the naming, and labeling, you create visibility that is customized to your organization. This is why they are so powerful.
-
-Here are just a few examples of how network sets can be used:
-
-- **Egress access control**
-
-  Network sets are a key resource for defining access controls; for example, securing ingress to microservices/apps from outside the cluster, or egress from workloads to destinations outside the cluster.
-
-- **Troubleshooting**
-
-  Network sets appear as additional metadata in flow logs, Kibana, Flow Visualizer, and Service Graph.
-
-- **Efficiency and scaling**
-
- Network sets are critical when scaling your deployment. You may have only a few CIDRs when you start. But as you scale out, it is easier to update a handful of network sets than update each network policy individually. Also, in a Kubernetes deployment, putting lots of anything (CIDRs, ports, policy rules) directly into policies causes inefficiencies in traffic processing (iptables/eBPF).
-
-- **Microsegmentation and shift left**
-
- Network sets provide the same microsegmentation controls as network policy. For example, you can allow specific users to create policies (that reference network sets), but allow only certain users to manage network sets.
-
-- **Threat defense**
-
- Network sets are key to being able to manage threats by blocking bad IPs with policy in a timely way. Imagine having to update individual policies when you find a bad IP you need to quickly block. You can even give access to a controller that automatically updates CIDRs in a network set when a bad IP is found.
-
-## Create a network set and use it in policy
-
-In this section, we’ll walk through how to create a namespaced network set in Manager UI. You can follow along using your cluster or tigera-labs cluster.
-
-In this example, you will create a network set named `google`. This network set contains a list of trusted Google endpoints for a microservice called `hipstershop`. As a service owner, you want to be able to see traffic leaving the microservices in Service Graph. Instead of matching endpoints on IP addresses, we will use domain names.
-
-1. From the left navbar, click **Network Sets**.
-1. Click **Add Network Set**, and enter these values.
- - For Name: `google`
- - For Scope: Select **Namespace** and select, `hipstershop`
-1. Under Labels, click **Add label**.
- - In the Select key field, enter `destinations` and click the green bar to add this new entry.
- - In the Value field, enter `google`, click the green bar to add the entry, and save.
-1. For Domains, click **+Add Domain** and add these domains: `clouddebugger.googleapis.com`, `cloudtrace.googleapis.com`, `metadata.google.internal`, `monitoring.googleapis.com`.
-1. Click **Create Network Set**.
-
-You’ve created your first network set.
-
-![add-networkset-google](/img/calico-enterprise/add-networkset-google.png)
-
-The YAML looks like this:
-
-```yaml
-kind: NetworkSet
-apiVersion: projectcalico.org/v3
-metadata:
- name: google
- labels:
- destinations: google
- namespace: hipstershop
-spec:
- nets: []
- allowedEgressDomains:
- - clouddebugger.googleapis.com
- - cloudtrace.googleapis.com
- - metadata.google.internal
- - monitoring.googleapis.com
-```
-
-Next, we write a DNS policy for hipstershop that allows egress traffic to the trusted Google sites. The following network policy allows egress access to all destinations labeled `google`. Note that putting domains in a network set and referencing it in policy is the best practice. Also, note that `selector: all()` should be used only if all pods in the namespace can access all of the domains in the network set; if not, you should create separate policies accordingly.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: application.allow-egress-domain
- namespace: hipstershop
-spec:
- tier: application
- order: 0
- selector: all()
- serviceAccountSelector: ''
- egress:
- - action: Allow
- source: {}
- destination:
- selector: destinations == "google"
- types:
- - Egress
-```
-
-## Network sets in Service Graph
-
-Continuing with our `hipstershop` example, if you go to Service Graph, you see hipstershop (highlighted in yellow).
-
-![hipstershop](/img/calico-enterprise/hipstershop.png)
-
-If we double-click `hipstershop` to drill down, we now see the `google` network set icon (highlighted in yellow). We now have visibility to traffic external from google sites to hipstershop. (If you are using the tigera-labs cluster, note that the network set will not be displayed as shown below.)
-
-![google-networkset](/img/calico-enterprise/google-networkset.png)
-
-Service Graph provides a view into how services are interconnected in a consumable view, along with easy access to flow logs. However, you can also see traffic associated with network sets in volumetric display with Flow Visualizer, and query flow log data associated with network sets in Kibana.
-
-## Tutorial
-
-In the following example, we create a global network set resource for trusted load balancers that can be used with microservices and applications. The label `trusted-ep: load-balancer` is how this global network set is referenced in policy.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkSet
-metadata:
- name: load-balancer
- labels:
- trusted-ep: "load-balancer"
-spec:
- nets:
- # Modify the ip addresses to refer to the ip addresses of load-balancers in your environment
- - 10.0.0.1/32
- - 10.0.0.2/32
-```
-
-The following network policy uses the selector `trusted-ep == "load-balancer"` to reference the above GlobalNetworkSet. All applications in the `app2-ns` namespace that match `app == "app2"` and `svc == "svc1"` are allowed ingress traffic from the trusted load balancer on TCP port 10001.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: application.app2-svc1
- namespace: app2-ns
-spec:
- tier: application
- order: 500
-  selector: (app == "app2" && svc == "svc1")
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: trusted-ep == "load-balancer"
- destination:
- ports:
- - '10001'
- types:
- - Ingress
-```
-
-### Advanced policy rules with network sets
-
-When you combine $[prodname] policy rules with network sets, you have powerful ways to fine-tune. The following example combines network sets with specific rules in a global network policy to deny access more quickly.
-We start by creating a $[prodname] GlobalNetworkSet that specifies a list of CIDR ranges we want to deny: 192.0.2.55/32 and 203.0.113.0/24.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkSet
-metadata:
- name: ip-protect
- labels:
- ip-deny-list: 'true'
-spec:
- nets:
- - 192.0.2.55/32
- - 203.0.113.0/24
-```
-
-Next, we create two $[prodname] GlobalNetworkPolicy resources. The first is a high "order" policy that allows traffic as a default for things that don't match our second policy, which is low "order" and uses the GlobalNetworkSet label (`ip-deny-list` from the previous step) as a selector to deny ingress traffic. In the label selector, we also include the term `!has(projectcalico.org/namespace)`, which prevents this policy from matching pods or NetworkSets that also have this label. To more quickly enforce the denial of forwarded traffic to the host at the packet level, use the `doNotTrack` and `applyOnForward` options.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: forward-default-allow
-spec:
- selector: apply-ip-protect == 'true'
- order: 1000
- doNotTrack: true
- applyOnForward: true
- types:
- - Ingress
- ingress:
- - action: Allow
----
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: ip-protect
-spec:
- selector: apply-ip-protect == 'true'
- order: 0
- doNotTrack: true
- applyOnForward: true
- types:
- - Ingress
- ingress:
- - action: Deny
- source:
- selector: ip-deny-list == 'true' && !has(projectcalico.org/namespace)
-```
-
-## Best practices for using network sets
-
-- Create network sets as soon as possible after getting started
-
- This allows you to quickly realize the benefits of seeing custom metadata in flow logs and visualizing traffic in Service Graph and Flow Visualizer.
-
-- Create a network set label and name schema
-
- It is helpful to think: what names would be meaningful and easy to understand when you look in Service Graph? Flow Viz? Kibana? What labels will be easy to understand when used in network policies – especially if you are separating users who manage network sets from those who consume them in network policies.
-
-- Do not put large sets of CIDRs and domains directly in policy
-
- Network sets allow you to specify CIDRs and/or domains. Although you can add CIDRs and domains directly in policy, it doesn't scale.
-
-- Do not put thousands of rules into a policy, each with a different CIDR
-
-  If your set of /32s can be easily aggregated into a few broader CIDRs without compromising security, it's a good thing to do, whether you're putting the CIDRs in the rule or using a network set.
-
-- If you want to match thousands of endpoints, write one or two rules and use selectors to match the endpoints.
-
- Having one rule per port, per host is inefficient because each rule ends up being rendered as an iptables/eBPF rule instead of making good use of IP sets.
-
-- Avoid overlapping IP addresses/subnets in networkset/globalnetworkset definitions
-
-## Efficient use of network sets
-
-If you have a large number of things to match, using a network set is more efficient both in the control plane (for example, Felix CPU), and for the packet path (latency/per packet CPU). If you use network sets and you add/remove an IP from the network set, this doesn't require changing iptables rules at all. It only requires updating the ipset, which is efficient. If you also change the policy rules, then iptables must be updated too. Using network sets is efficient for all of the following use cases:
-
-- The system applying the iptables rules to incoming connections (to decide whether to allow or deny the traffic)
-- iptables rules updates whenever one of the policies and/or network sets change
-- The Kubernetes API server handling changes to the policy and/or network set CRDs
-
-Follow these guidelines for efficient use of network sets.
-
-| Policy | Network set | Results |
-| ------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------- | --------------------------------------------------------------------------------------------------------------------- |
-| source: selector: foo="bar" | With handful of broad CIDRs | **Efficient** **-** 1 iptables/eBPF rule - 1 IP set with handful of CIDRs |
-| source: nets: [ ... handful ...] | Not used | **Efficient** - Handful of iptables/eBPF rules - 0 IP sets |
-| source: selector: foo="bar" | One network set with 2000 x /32s | **`*`Most efficient** - 1 iptables/eBPF rule - 1 IP set with 2000 entries |
-| | Two network sets with 1000 x /32s each | **Efficient** - 2 iptables/eBPF rules - 2 IP sets with 1000 entries each |
-| source: nets: [... 2000 /32s ...] - source: nets: [1 x /32] - source: nets: [1 x /32] - ... x 2000 | Not used | **Inefficient** - 2000+ iptables/eBPF rules - 0 IP sets |
-
-`*` Updating **ipsets** is fast and efficient. Adding/removing a single entry is an O(1) operation no matter the number of IPs in the set. Updating **iptables** is generally slow and gets slower the more rules you have in total (including rules created by kube-proxy, for example). Less than 10K rules are generally fine, but noticeable latency occurs when updating rules above that number (increasing as the number of rules grows). (The newer nftables may scale more efficiently but those results are not included here.)
-
-Similarly, hitting many iptables (or eBPF) rules adds latency to the first packet in a flow. For iptables, the cost is around 250ns per rule. Although a single rule is negligible, hitting 10K rules adds >1ms to the first packet in the flow. Packets only hit the rules for the particular interface that they arrive on; if you have 10K rules on one interface and 10 rules on another, packets processed by the first interface will have more latency.
-
-## Additional resources
-
-- [Network set](../reference/resources/networkset.mdx)
-- [Global network set](../reference/resources/globalnetworkset.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-best-practices.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/policy-best-practices.mdx
deleted file mode 100644
index 5dfc74f2e9..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-best-practices.mdx
+++ /dev/null
@@ -1,355 +0,0 @@
----
-description: Learn policy best practices for security, scalability, and performance.
----
-
-# Policy best practices
-
-## Big picture
-
-Policy best practices for run-time security start with $[prodname]'s robust network security policy, but other $[prodname] resources play equally important roles in security, scalability, and performance.
-
-Learn $[prodname] policy best practices and resources that support a zero trust network model:
-
-- [Prepare for policy authoring](#prepare-for-policy-authoring)
-- [Policy best practices for day-one zero trust](#policy-best-practices-for-day-one-zero-trust)
-- [Policy design for efficiency and performance](#policy-design-for-efficiency-and-performance)
-- [Policy life cycle tools](#policy-life-cycle-tools)
-
-## Prepare for policy authoring
-
-### Determine who can write policy
-
-Any team familiar with deploying microservices in Kubernetes can easily master writing network policies. The challenge in many organizations is deciding who will be given permission to write policy across teams. Although there are different approaches, $[prodname] policy tools have the flexibility and guardrails to accommodate different approaches.
-
-Let’s review two common approaches.
-
-- **Microservices teams write policy**
-
- In this model, network policy is treated as code, built into and tested during the development process, just like any other critical part of a microservice’s code. The team responsible for developing a microservice has a good understanding of other microservices they consume and depend on, and which microservices consume their microservice. With a defined, standardized approach to policy and label schemas, there is no reason that the teams cannot implement network policies for their microservice as part of the development of the microservice. With visibility in Service Graph, teams can even do basic troubleshooting.
-
-- **Dev/Ops writes policy, microservice team focuses on internals**
-
-  An equally valid approach is to have development teams focus purely on the internals of the microservices they are responsible for, and leave responsibility for operating the microservices with Dev/Ops teams. A Dev/Ops team needs the same understanding as the microservices team above. However, network security may come much later in the organization's processes, or even as an afterthought on a system already in production. This can be more challenging because getting network policies wrong can have significant production impacts. But using $[prodname] tools, this approach is still achievable.
-
-When you get clarity on who can write policies, you can move to creating tiers. $[prodname] tiers, along with standard Kubernetes RBAC, provide the infrastructure to meet security concerns across teams.
-
-### Understand the depth of $[prodname] network policy
-
-Because $[prodname] policy goes well beyond the features in Kubernetes policy, we recommend that you have a basic understanding of [network policy and global network policy](beginners/calico-network-policy.mdx) and how they provide workload access controls. And even though you may not implement the following policies, it is helpful to know the depth of defense that is available in $[prodname].
-
-- [Policy for services](beginners/services/index.mdx)
-- [Policy integration for firewalls](policy-firewalls/index.mdx)
-
-### Create policy tiers
-
-**Tiers** are a hierarchical construct used to group policies and enforce higher precedence policies that cannot be circumvented by other teams. As part of your microsegmentation strategy, tiers let you apply identity-based protection to workloads and hosts.
-
-Before creating policies, we recommend that you create your tier structure. This often requires internal debates and discussions. As noted previously, $[prodname] policy workflow has the guardrails you need to allow diverse teams to participate in policy writing.
-
-To understand how tiered policy works and best practices, see [Get started with tiered policies](policy-tiers/tiered-policy.mdx).
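-
-As an illustration, a tier is itself a small resource; a minimal sketch (the name and order are placeholders you would adapt to your own tier structure) looks like this:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: Tier
-metadata:
-  name: security
-spec:
-  # Lower order values are evaluated before higher ones.
-  order: 300
-```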
-
-### Create label standards
-
-Creating a label standard is often an overlooked step. But if you skip this step, it will cost you in troubleshooting down the road, especially given that visibility and troubleshooting are already a challenge in a Kubernetes deployment.
-
-**Why are label standards important?**
-
-Network policies in Kubernetes depend on **labels and selectors** (not IP addresses and IP ranges) to determine which workloads can talk to each other. As pods dynamically scale up and down, network policy is enforced based on the labels and selectors that you define. So workloads and host endpoints need unique, identifiable labels. If you create duplicate label names, or labels are not intuitive, troubleshooting network policy issues and authoring network policies becomes more difficult.
-
-**Recommendations**:
-
-- Follow the [Kubernetes guidelines for labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/). If the Kubernetes guidelines do not cover your use cases, we recommend this blog from Tigera Support: [Label standard and best practices for Kubernetes security](https://www.helpnetsecurity.com/2021/05/26/kubernetes-security/).
-- Develop a comprehensive set of labels that meets the deployment, reporting, and security requirements of different stakeholders in your organization.
-- Standardize the way you label your pods and write your network policies using a consistent schema or design pattern (see the sketch after this list).
-- Define labels to achieve a specific and explicit purpose.
-- Use intuitive language in your label definitions so that labeled Kubernetes objects are quick and simple to identify.
-- Use label key prefixes and suffixes to identify attributes required for asset classification.
-- Ensure the right labels are applied to Kubernetes objects by implementing label governance checks in your CI/CD pipeline or at runtime.
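-
-A purely illustrative sketch of such a schema applied to a workload's metadata (all keys and values here are hypothetical) might look like this:
-
-```yaml
-metadata:
-  labels:
-    # Standard Kubernetes recommended labels
-    app.kubernetes.io/name: storefront
-    app.kubernetes.io/part-of: payments
-    # Organization-specific keys with an explicit purpose
-    environment: production
-    compliance.example.com/pci-scope: 'true'
-```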
-
-### Create network sets
-
-Network sets and global network sets are grouping mechanisms for arbitrary sets of IPs/subnets/CIDRs or domains. They are key resources for efficient policy design. The key use cases for network sets are:
-
-- **Use/reuse in policy to support scaling**
-
- You reference network sets in policies using selectors (rather than updating individual policies with CIDRs or domains).
-
-- **Visibility to traffic to/from a cluster**
-
- For apps that integrate with third-party APIs and SaaS services, you get enhanced visibility to this traffic in Service Graph.
-
-- **Global deny lists**
-
- Create a “deny-list” of CIDRs for bad actors or embargoed countries in policy.
-
-**Recommendation**: Create network sets **and labels** before writing policy.
-
-For network set tutorial and best practices, see [Get started with network sets](networksets.mdx).
-
-## Policy best practices for day-one zero trust
-
-### Create a global default deny policy
-
-A global default deny network policy provides an enhanced security posture, so pods without policy (or with incorrect policy) are not allowed traffic until appropriate network policy is defined. We recommend creating a global default deny policy, regardless of whether you use $[prodname] and/or Kubernetes network policy.
-
-But, be sure to understand the [best practices for creating a default deny policy](default-deny.mdx) to avoid breaking your cluster.
-
-Here are sample [default deny policies](beginners/kubernetes-default-deny.mdx).
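-
-As a rough sketch only (the excluded namespaces and the DNS allowance are assumptions you must adapt to your own cluster), a global default deny might look like this:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  name: default.default-deny
-spec:
-  tier: default
-  selector: projectcalico.org/namespace not in {'kube-system', 'calico-system', 'tigera-system'}
-  types:
-    - Ingress
-    - Egress
-  egress:
-    # Keep DNS working for the selected workloads; everything else is denied by default.
-    - action: Allow
-      protocol: UDP
-      destination:
-        selector: k8s-app == 'kube-dns'
-        ports:
-          - 53
-```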
-
-### Define both ingress and egress network policy rules for every pod in the cluster
-
-Although defining network policy for traffic external to clusters (north-south) is certainly important, it is equally important to defend against attacks on east-west traffic. Simply put, **every connection from/to every pod in every cluster should be protected**. Although having both doesn't guarantee protection against other attacks and vulnerabilities, one seemingly innocuous workload can lead to exposure of your most critical workloads.
-
-For examples, see [basic ingress and egress policies](beginners/calico-network-policy.mdx).
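-
-For instance, a minimal sketch of a policy that defines both directions for a single microservice (all names, labels, and ports here are hypothetical) could look like this:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
-  name: application.frontend
-  namespace: shop
-spec:
-  tier: application
-  selector: app == 'frontend'
-  types:
-    - Ingress
-    - Egress
-  ingress:
-    # Only the ingress gateway may reach the frontend.
-    - action: Allow
-      protocol: TCP
-      source:
-        selector: app == 'ingress-gateway'
-      destination:
-        ports:
-          - 443
-  egress:
-    # The frontend may only talk to the backend service.
-    - action: Allow
-      protocol: TCP
-      destination:
-        selector: app == 'backend'
-        ports:
-          - 8080
-```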
-
-## Policy design for efficiency and performance
-
-Teams can write policies that work, but ultimately you want policies that also scale, and do not negatively impact performance.
-
-If you follow a few simple guidelines, you’ll be well on your way to writing efficient policy.
-
-### Use global network policy only when all rules apply globally
-
-- **Do**
-
- Use global network policy for cluster-wide scope when all rules apply to multiple namespaces or host endpoints. For example, use a global network policy to create a deny-list of CIDRs for embargoed countries, or for global default deny everywhere, even for new namespaces.
-
-  Why? Although at the level of packet processing there is no difference between a network policy and a global network policy, in terms of CPU usage one global network policy is faster than a large number of network policies.
-
-- **Avoid**
-
-  Using a global network policy as a way to combine diverse, namespaced endpoints with different connectivity requirements. Although such a policy can work, appears efficient, and is easier to view than several separate network policies, it is inefficient and should be avoided.
-
-  Why? Putting a lot of anything (rules, CIDRs, ports) directly into policies is inefficient, because iptables/eBPF performance depends on minimizing rule executions and updates. When a selector is encountered in a policy rule, it is converted into one iptables rule that matches on an IP set. Separate code then keeps the IP sets up to date; this is more efficient than updating iptables rules. Also, because iptables rules execute sequentially in order, having many rules results in longer network latencies for the first packet in a flow (approximately 0.25-0.5us per rule). Finally, having more rules slows down programming of the dataplane, making policy updates take longer.
-
-**Example: Inefficient global network policy**
-
-The following policy is a global network policy for a microservice that limits all egress communication external to the cluster in the security tier. Does this policy work? Yes. And logically, it seems to cleanly implement application controls.
-
-```yaml noValidation
-1 apiVersion: projectcalico.org/v3
-2 kind: GlobalNetworkPolicy
-3 metadata:
-4 name: security.allow-egress-from-pods
-5 spec:
-6 tier: security
-7 order: 1
-8 selector: all()
-9 egress:
-10 - action: Deny
-11 source:
-12 namespaceSelector: projectcalico.org/namespace starts with "tigera"
-13 destination:
-14 selector: threatfeed == "feodo"
-15 - action: Allow
-16 protocol: TCP
-17 source:
-18 namespaceSelector: projectcalico.org/name == "sso"
-19 ports:
-20 - '443'
-21 - '80'
-22 destination:
-23 domains:
-24 - '*.googleapis.com'
-25 - action: Allow
-26 protocol: TCP
-27 source:
-28 selector: psql == "external"
-29 destination:
-30 ports:
-31 - '5432'
-32 domains:
-33 - '*.postgres.database.azure.com'
-34 - action: Allow
-35 protocol: TCP
-36 source: {}
-37 destination:
-38 ports:
-39 - '443'
-40 - '80'
-41 domains:
-42 - '*.logic.azure.com'
-43 - action: Allow
-44 protocol: TCP
-45 source: {}
-46 destination:
-47 ports:
-48 - '443'
-49 - '80'
-50 domains:
-51 - '*.azurewebsites.windows.net'
-52 - action: Allow
-53 protocol: TCP
-54 source:
-55 selector: 'app in { "call-archives-api" }||app in { "finwise" }'
-56 destination:
-57 domains:
-58 - '*.documents.azure.com'
-59 - action: Allow
-60 protocol: TCP
-61 source:
-62 namespaceSelector: projectcalico.org/name == "warehouse"
-63 destination:
-64 ports:
-65 - '1433'
-66 domains:
-67 - '*.database.windows.net'
-68 - action: Allow
-69 protocol: TCP
-70 source: {}
-71 destination:
-72 nets:
-73 - 65.132.216.26/32
-74 - 10.10.10.1/32
-75 ports:
-76 - '80'
-77 - '443'
-78 - action: Allow
-79 protocol: TCP
-80 source:
-81 selector: app == "api-caller"
-82 destination:
-83 ports:
-84 - '80'
-85 - '443'
-86 domains:
-87 - api.example.com
-88 - action: Allow
-89 source:
-90 selector: component == "tunnel"
-91 - action: Allow
-92 destination:
-93 selector: all()
-94 namespaceSelector: all()
-95 - action: Deny
-96 types:
-97 - Egress
-```
-
-**Why this policy is inefficient**
-
-First, the policy does not follow the guidance on using global network policy: that all rules apply to the selected endpoints. So the main issue is inefficiency, although the policy works.
-
-The main selector `all()` (line 8) means the policy will be rendered on every endpoint (workload and host endpoints). The selectors in each rule (for example, lines 12 and 14) control the traffic that is matched by that rule. So, even if the host doesn't have any workloads that match `selector: app == "api-caller"`, you'll still get the iptables/eBPF rule rendered on every host to implement that rule. If this sample policy had 100 pods, that's a 10 - 100x increase in the number of rules (depending on how many local endpoints match each rule). In short, it adds:
-
-- Memory and CPU to keep track of all the extra rules
-- Complexity to handle changes to endpoint labels, and to re-render all the policies too.
-
-### Avoid policies that may select unwanted endpoints
-
-The following policy is for an application in a single namespace, `app1-ns`. There are two microservices that are all labeled appropriately:
-
-- microservice 1 has `app: app1`, `svc: svc1`
-- microservice 2 has `app: app1`, `svc: svc2`
-
-The following policy works correctly and does not incur a huge performance hit. But it could select additional endpoints that were not intended.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: application.app1
- namespace: app1-ns
-spec:
- tier: application
- order: 10
- selector: app == "app1"
- types:
- - Ingress
- ingress:
- - action: Allow
- source:
- selector: trusted-ip == "load-balancer"
- destination:
- selector: svc == "svc1"
- ports:
- - 10001
- protocol: TCP
- - action: Allow
- source:
- selector: svc == "svc1"
- destination:
- selector: svc == "svc2"
- ports:
- - 10002
- protocol: TCP
-```
-
-The policy incorrectly assumes that the main policy selector (`app == "app1"`) will be combined (ANDed) with the endpoint selector, and only for certain policy types. In this case,
-
-- **Ingress** - combines policy selector and _destination endpoint selector_
- or
-- **Egress** - combines policy selector and _source endpoint selector_
-
-But if the assumptions behind the labels are not understood by other policy authors and are not correctly assigned, the endpoint selector may select _additional endpoints that were not intended_. For ingress policy, this can open up the endpoint to more IP addresses than necessary. This unintended consequence would be exacerbated if the author used a global network policy.
-
-### Put multiple relevant policy rules together in the same policy
-
-As discussed previously, it is better to create separate policies for different endpoint connectivity rules than a single global network policy. However, you may interpret this to mean that the best practice is to make unique policies that do not aggregate any rules. That is not the case. Why? When $[prodname] calculates and enforces policy, it updates iptables/eBPF rules and reads policy changes and pod/workload endpoints from the datastore. The more policies in memory, the more work it takes to determine which policies match a particular endpoint. If you group more rules into one policy, there are fewer policies to match against.
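-
-For example, rather than creating one policy per allowed client, a single policy for the same endpoints can carry several rules (a hypothetical sketch; names and ports are illustrative):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
-  name: application.app1-ingress
-  namespace: app1-ns
-spec:
-  tier: application
-  selector: app == 'app1'
-  types:
-    - Ingress
-  ingress:
-    # Both rules live in one policy, so there is one policy to match per endpoint
-    # instead of two.
-    - action: Allow
-      protocol: TCP
-      source:
-        selector: role == 'frontend'
-      destination:
-        ports:
-          - 8080
-    - action: Allow
-      protocol: TCP
-      source:
-        selector: role == 'metrics-scraper'
-      destination:
-        ports:
-          - 9090
-```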
-
-### Understand effective use of label selectors
-
-Label selectors abstract network policy from the network. Misuse of selectors can slow things down. As discussed previously, the more selectors you create, the harder $[prodname] works to find matches.
-
-The following policy shows an inefficient use of selectors. Using `selector: all()` renders the policy on all nodes for all workloads. If there are 10,000 workloads, but only 10 match label==foo, that is very inefficient at the dataplane level.
-
-```yaml
-selector: all()
-ingress:
- - source:
- selector: label == 'bar'
- destination:
- selector: label == 'foo'
-```
-
-The best practice policy below allows the same traffic, but is more efficient and scalable. Why? Because the policy will be rendered only on nodes with workloads that match the selector `label==foo`.
-
-```yaml
-selector: label == 'foo'
-ingress:
-  - source:
-      selector: label == 'bar'
-```
-
-Another common mistake is using `selector: all()` when you don’t need to. `all()` means _all workloads_ so that will be a large IP set. Whenever there's a source/destination selector in a rule, it is rendered as an IP set in the dataplane.
-
-```yaml
-source:
- selector: all()
-```
-
-### Put domains and CIDRs in network sets rather than policy
-
-Network sets allow you to specify CIDRs and/or domains. As noted in [Network set best practices](networksets.mdx), we do not recommend putting large sets of CIDRs or domains directly in policy. Although nothing stops you from doing this, using network sets is more efficient and supports scaling.
-
-## Policy life cycle tools
-
-### Preview, stage, deploy
-
-A big obstacle to adopting Kubernetes is not having confidence that you can effectively prevent, detect, and mitigate threats across diverse teams. The following policy life cycle tools in Manager UI (**Policies** tab) can help.
-
-- **Policy recommendations**
-
- Get a policy recommendation for unprotected workloads. Speeds up learning, while supporting zero trust.
-
-- **Policy impact preview**
-
- Preview the impacts of policy changes before you apply them to avoid unintentionally exposing or blocking other network traffic.
-
-- **Policy staging and audit modes**
-
- Stage network policy so you can monitor traffic impact of both Kubernetes and $[prodname] policy as if it were actually enforced, but without changing traffic flow. This minimizes misconfiguration and potential network disruption.
-
-For details, see [Policy life cycle tools](staged-network-policies.mdx).
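-
-For example, staged policies use dedicated resource kinds; a minimal sketch of a staged global default deny (the name and selector are illustrative) looks like this:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: StagedGlobalNetworkPolicy
-metadata:
-  name: default.staged-default-deny
-spec:
-  tier: default
-  selector: projectcalico.org/namespace not in {'kube-system', 'calico-system', 'tigera-system'}
-  types:
-    - Ingress
-    - Egress
-```
-
-Flow logs then report what the staged policy would have allowed or denied, without affecting live traffic.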
-
-### Do not trust anything
-
-Zero trust means that you do not trust anyone or anything. $[prodname] handles authentication on a per-request basis. Every action is either authorized or restricted, and the default is that everything is restricted. To apply zero trust to policy and reduce your attack surface and risk, we recommend the following:
-
-- Ensure that all expected and allowed network flows are explicitly allowed; any connection not explicitly allowed is denied
-
-- Create a quarantine policy that denies all traffic, which you can quickly apply to workloads when you detect suspicious activity or threats (see the sketch below)
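-
-A minimal sketch of such a quarantine policy (the tier, label key, and order are assumptions to adapt to your environment):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  name: security.quarantine
-spec:
-  tier: security
-  order: 1
-  # Applied by labeling a suspect workload with quarantine="true".
-  selector: quarantine == 'true'
-  types:
-    - Ingress
-    - Egress
-  ingress:
-    - action: Deny
-  egress:
-    - action: Deny
-```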
-
-## Additional resources
-
-- [Troubleshoot policies](policy-troubleshooting.mdx)
-- [Security and policy best practices blog](https://www.tigera.io/blog/kubernetes-security-policy-10-critical-best-practices/)
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-firewalls/fortinet-integration/firewall-integration.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/policy-firewalls/fortinet-integration/firewall-integration.mdx
deleted file mode 100644
index ba0cfbf1c7..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-firewalls/fortinet-integration/firewall-integration.mdx
+++ /dev/null
@@ -1,244 +0,0 @@
----
-description: Enable FortiGate firewalls to control traffic from Kubernetes workloads.
----
-
-# Extend Kubernetes to Fortinet firewall devices
-
-## Big picture
-
-Use $[prodname] network policy to control traffic from Kubernetes clusters in your FortiGate firewalls.
-
-## Value
-
-As platform and security engineers, you want your apps to communicate securely with the external world, but you also want to secure the network traffic from your Kubernetes clusters using your FortiGate firewalls. With the Fortinet/$[prodname] integration, security teams retain firewall responsibility and secure traffic using $[prodname] network policy, which frees up time for ITOps.
-
-## Concepts
-
-### Integration at a glance
-
-This $[prodname]/Fortinet integration workflow lets you control egress traffic leaving the Kubernetes cluster. You create perimeter firewall policies in FortiManager and FortiGate that reference Kubernetes workloads. $[prodname] acts as a conduit, using the `tigera-firewall-controller` and global network policies to pass Kubernetes workload information to FortiManager and Fortigate devices where policies are applied and enforced.
-
-The basic workflow is:
-
-1. Determine the Kubernetes pods that are allowed access outside the perimeter firewall.
-1. Create $[prodname] global network policies with selectors that match those pods. Each global network policy maps to an address group in the FortiGate firewall.
-1. Deploy the `tigera-firewall-controller` in the Kubernetes cluster.
-1. Create a ConfigMap with Fortinet firewall information.
-   The `tigera-firewall-controller` reads the ConfigMap, gets the FortiGate firewall IP address, API token, and source IP address selection (`node` or `pod`). In your Kubernetes cluster, the controller populates the node IPs or pod IPs of selector-matching pods into FortiGate address group objects.
-
-## Before you begin
-
-**Supported versions**
-
-- FortiGate v6.2
-- FortiManager v6.4
-
-**Required**
-
-
-- IPv4 CIDRs or IP addresses of all Kubernetes nodes; this is required for FortiManager to treat Kubernetes nodes as trusted hosts.
-
-**Recommended**
-
-- Experience creating and administering FortiGate/FortiManager firewall policies
-- Experience using [$[prodname] tiers](../../../reference/resources/tier.mdx) and [Global network policy](../../../reference/resources/globalnetworkpolicy.mdx)
-
-## How to
-
-- [Create tier and global network policy](#create-tier-and-global-network-policy)
-- [Configure FortiGate firewall to communicate with firewall controller](#configure-fortigate-firewall-to-communicate-with-firewall-controller)
-- [Configure FortiManager to communicate with firewall controller](#configure-fortimanager-to-communicate-with-firewall-controller)
-- [Create a config map for address selection in firewall controller](#create-a-config-map-for-address-selection-in-firewall-controller)
-- [Create a config map with FortiGate and FortiManager information](#create-a-config-map-with-fortigate-and-fortimanager-information)
-- [Install FortiGate ApiKey and FortiManager password as secrets](#install-fortigate-apikey-and-fortimanager-password-as-secrets)
-- [Deploy firewall controller in the Kubernetes cluster](#deploy-firewall-controller-in-the-kubernetes-cluster)
-
-### Create tier and global network policy
-
-1. Create a tier for organizing global network policies.
-
- Create a new [Tier](../../policy-tiers/tiered-policy.mdx) to organize all Fortigate firewall global network policies in a single location.
-
-1. Note the tier name to use in a later step for the FortiGate firewall information config map.
-
-1. Create a GlobalNetworkPolicy for address group mappings.
-
-   For example, a GlobalNetworkPolicy can select a set of pods that require egress access to external workloads. In the following GlobalNetworkPolicy, the firewall controller creates an address group named `default.production-microservice1` in the FortiGate firewall. The members of the `default.production-microservice1` address group include the IP addresses of nodes. Each node can host one or more pods whose labels match `env == 'prod' && role == 'microservice1'`. Each GlobalNetworkPolicy maps to an address group in the FortiGate firewall.
-
- ```yaml
- apiVersion: projectcalico.org/v3
- kind: GlobalNetworkPolicy
- metadata:
- name: default.production-microservice1
- spec:
- selector: "env == 'prod' && role == 'microservice1'"
- types:
- - Egress
- egress:
- - action: Allow
- ```
-
-### Configure FortiGate firewall to communicate with firewall controller
-
-1. Determine and note the CIDRs or IP addresses of all Kubernetes nodes that can run the `tigera-firewall-controller`.
- Required to explicitly allow the `tigera-firewall-controller` to access the FortiGate API.
-1. Create an Admin profile with read-write access to Address and Address Group Objects.
- For example: `tigera_api_user_profile`
-1. Create a REST API Administrator, associate this user with the `tigera_api_user_profile` profile, and add the CIDR or IP address of your Kubernetes cluster nodes as trusted hosts.
- For example: `calico_enterprise_api_user`
-1. Note the API key.
-
-### Configure FortiManager to communicate with firewall controller
-
-1. Determine and note the CIDRs or IP addresses of all Kubernetes nodes that can run the `tigera-firewall-controller`.
- Required to explicitly allow the tigera-firewall-controller to access the FortiManager API.
-1. From system settings, create an Admin profile with Read-Write access for `Policy & Objects`.
- For example: `tigera_api_user_profile`
-1. Create a JSON API administrator, associate this user with the `tigera_api_user_profile` profile, and add the CIDR or IP address of your Kubernetes cluster nodes as `Trusted Hosts`.
-1. Note the username and password.
-
-### Create a config map for address selection in firewall controller
-
-1. Create a namespace for tigera-firewall-controller.
-
- ```bash
- kubectl create namespace tigera-firewall-controller
- ```
-
-1. Create a config map with FortiGate firewall information.
-
- For example:
-
- ```bash
- kubectl -n tigera-firewall-controller create configmap tigera-firewall-controller \
- --from-literal=tigera.firewall.policy.selector="projectcalico.org/tier == 'default'" \
- --from-literal=tigera.firewall.addressSelection="node"
- ```
-
- **ConfigMap values**
-
- | Field | Enter values... |
- | -------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-   | tigera.firewall.policy.selector | The name of the tier that contains the global network policies with the FortiGate address group mappings. For example, this selects the global network policies in the `default` tier: `tigera.firewall.policy.selector: "projectcalico.org/tier == 'default'"` |
-   | tigera.firewall.addressSelection | The address selection for outbound traffic leaving the cluster. If outgoing NAT is enabled in the cluster and the compute node IP address is used, set `tigera.firewall.addressSelection` to `node`. If the pod IP address is used, set it to `pod`. |
-
-### Create a config map with FortiGate and FortiManager information
-
-1. In the [Fortigate ConfigMap manifest]($[filesUrl_CE]/manifests/fortinet-device-configmap.yaml), add your FortiGate firewall information in the data section, `tigera.firewall.fortigate`.
-
- Where:
-
- | Field | Description |
- | ------------------------ | --------------------------------------------------------------------------- |
- | name | FortiGate device name |
- | ip | FortiGate Management Ip address |
- | apikey | Secret in tigera-firewall-controller namespace, to store FortiGate's APIKey |
- | apikey.secretKeyRef.name | Name of the secret to store APIKey. |
- | apikey.secretKeyRef.key | Key name in the secret, which stores APIKey |
-
- For example:
-
- ```yaml
- - name: prod-eastcoast-1
- ip: 1.2.3.1
- apikey:
- secretKeyRef:
- name: fortigate-east1
- key: apikey-fortigate-east1
- - name: prod-eastcoast-2
- ip: 1.2.3.2
- apikey:
- secretKeyRef:
- name: fortigate-east2
- key: apikey-fortigate-east2
- ```
-
-1. In the [FortiManager ConfigMap manifest]($[filesUrl_CE]/manifests/fortinet-device-configmap.yaml), add your FortiManager information in the data section, `tigera.firewall.fortimgr`.
-
- Where:
-
- | Field | Description |
- | -------------------------- | ------------------------------------------------------------------------------ |
- | name | FortiManager device name |
- | ip | FortiManager Management Ip address |
- | adom | FortiManager ADOM name to manage kubernetes cluster. |
- | username | JSON api access account name to Read/Write FortiManager address objects. |
- | password | Secret in tigera-firewall-controller namespace, to store FortiManager password |
- | password.secretKeyRef.name | Name of the secret to store password. |
- | password.secretKeyRef.key | Key name in the secret, which stores password. |
-
- For example:
-
- ```yaml
- - name: prod-east1
- ip: 1.2.4.1
- username: api_user
- adom: root
- password:
- secretKeyRef:
- name: fortimgr-east1
- key: pwd-fortimgr-east1
- ```
-
-:::note
-
-If you are not using FortiManager in the integration, include only the following field in the ConfigMap data section: `tigera.firewall.fortimgr: |`
-
-:::
-
-1. Apply the manifest.
-
- ```
- kubectl apply -f $[filesUrl_CE]/manifests/fortinet-device-configmap.yaml
- ```
-
-### Install FortiGate ApiKey and FortiManager password as secrets
-
-1. Store each FortiGate API key as a secret in the `tigera-firewall-controller` namespace.
-   For example, for the FortiGate device `prod-east1`, store its API key in a secret named `fortigate-east1`, with the key `apikey-fortigate-east1`.
-
- ```
- kubectl create secret generic fortigate-east1 \
- -n tigera-firewall-controller \
- --from-literal=apikey-fortigate-east1=
- ```
-
-1. Store each FortiManager password as a secret in the `tigera-firewall-controller` namespace.
-   For example, for the FortiManager device `prod-east1`, store its password in a secret named `fortimgr-east1`, with the key `pwd-fortimgr-east1`.
-
- ```
- kubectl create secret generic fortimgr-east1 \
- -n tigera-firewall-controller \
- --from-literal=pwd-fortimgr-east1=
- ```
-
-### Deploy firewall controller in the Kubernetes cluster
-
-1. Install your pull secret.
-
- ```
- kubectl create secret generic tigera-pull-secret \
- --from-file=.dockerconfigjson= \
- --type=kubernetes.io/dockerconfigjson -n tigera-firewall-controller
- ```
-
-1. Apply the manifest.
-
- ```
- kubectl apply -f $[filesUrl_CE]/manifests/fortinet.yaml
- ```
-
-## Verify the integration
-
-1. Log in to the FortiGate firewall user interface.
-1. Under **Policy & Objects**, click **Addresses**.
-1. Verify that your Kubernetes-related address objects and address group objects are created with the comment "Managed by Tigera $[prodname]".
-
-For all FortiManagers that are configured to work with the firewall controller, log in to each FortiManager UI with the correct ADOM.
-
-1. Click **Policy & Objects**, **Object Configuration**, **Addresses**.
-1. Verify that your Kubernetes-related address objects and address group objects are created with the comment "Managed by Tigera $[prodname]".
-
-## Additional resources
-
-- [Extend FortiManager firewall policies to Kubernetes](fortimgr-integration.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-firewalls/fortinet-integration/fortimgr-integration.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/policy-firewalls/fortinet-integration/fortimgr-integration.mdx
deleted file mode 100644
index 0f2496c668..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-firewalls/fortinet-integration/fortimgr-integration.mdx
+++ /dev/null
@@ -1,157 +0,0 @@
----
-description: Extend FortiManager firewall policies to Kubernetes with Calico Cloud
----
-
-# Extend FortiManager firewall policies to Kubernetes
-
-## Big picture
-
-Use FortiManager firewall policies to secure workloads in your Kubernetes cluster.
-
-## Value
-
-The $[prodname]/Fortinet integration lets you control Kubernetes clusters directly and apply policy
-using the FortiManager UI as the primary interface. This allows firewall administrators to leverage existing
-tools and workflows as they learn and adopt Kubernetes orchestration at their own pace.
-
-## Concepts
-
-### Integration at a glance
-
-This $[prodname]/Fortinet solution lets you directly control Kubernetes policies using FortiManager.
-
-The basic workflow is:
-
-1. Determine the Kubernetes pods that you want to securely communicate with each other.
-1. Label these pods using a key-value pair where the key is `tigera.io/address-group` and the value is the name of the address group that matches the pods (see the example after this list).
-1. In the FortiManager, select the cluster’s ADOM, and create an address group using the key-value pair associated with the pods.
-1. Create firewall policies using the address groups for IPv4 Source address and IPv4 Destination Address, and select services and actions as you normally would to allow or deny the traffic. Under the covers, the $[prodname] integration controller periodically reads the FortiManager firewall policies for your Kubernetes cluster, converts them to $[prodname] global network policies, and applies them to clusters.
-1. Use the $[prodname] Manager UI to verify the integration, and then FortiManager UI to make all updates to policy rules.
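-
-For illustration, a pod template labeled for an address group named `backend-services` (a hypothetical name) would carry metadata like this:
-
-```yaml
-metadata:
-  labels:
-    # The value is the address group name used to match these pods in FortiManager.
-    tigera.io/address-group: backend-services
-```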
-
-:::note
-
-The default interval for reading FortiManager firewall policies is three seconds. To change it, modify the environment variable FW_FORTIMGR_EW_POLL_INTERVAL in the FortiManager integration manifest; units are in seconds (see the example after this note).
-
-:::
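-
-For example, a sketch of the relevant container `env` entry in the firewall controller Deployment (the value shown is an assumption; adjust it to your needs):
-
-```yaml
-env:
-  - name: FW_FORTIMGR_EW_POLL_INTERVAL
-    value: '10' # seconds
-```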
-
-## Before you begin
-
-**Supported version**
-
-- FortiManager v6.4
-
-**Required**
-
-- IPv4 CIDRs or IP addresses of all Kubernetes nodes; this is required for FortiManager to treat Kubernetes nodes as trusted hosts.
-
-
-
-**Recommended**
-
-- Experience with [tiered policy](../../policy-tiers/tiered-policy.mdx) and [global network policy](../../../reference/resources/globalnetworkpolicy.mdx)
-- Experience creating and administering FortiGate/FortiManager firewall policies
-
-## How to
-
-- [Create a tier](#create-a-tier)
-- [Configure FortiManager to communicate with firewall controller](#configure-fortimanager-to-communicate-with-firewall-controller)
-- [Create a FortiManager config map](#create-a-fortimanager-config-map)
-- [Install FortiManager password as secrets](#install-fortimanager-password-as-secrets)
-- [Deploy the firewall controller in the Kubernetes cluster](#deploy-the-firewall-controller-in-the-kubernetes-cluster)
-- [Verify the integration](#verify-the-integration)
-
-### Create a tier
-
-Create a [$[prodname] tier](../../policy-tiers/tiered-policy.mdx) in the $[prodname] Manager UI for each Kubernetes cluster you want to secure. We recommend that you create a new tier (rather than reusing an existing tier) for all global network policies created by the $[prodname] integration controller.
-
-### Configure FortiManager to communicate with firewall controller
-
-1. Determine and note the CIDRs or IP addresses of all Kubernetes nodes that can run the `tigera-firewall-controller`.
- This is required to explicitly allow the `tigera-firewall-controller` to access the FortiManager API.
-1. From system settings, create an Admin profile with Read-Write access for `Policy & Objects`.
- For example: `tigera_api_user_profile`
-1. Create a JSON API administrator and associate this user with the `tigera_api_user_profile` profile and add CIDR or IP address of your Kubernetes cluster nodes as `trusted hosts`.
-1. Note the username and password.
-
-### Create a FortiManager config map
-
-1. Create a namespace for the tigera-firewall-controller.
-
- ```bash
- kubectl create namespace tigera-firewall-controller
- ```
-
-1. In this [FortiManager ConfigMap manifest]($[filesUrl_CE]/manifests/fortimanager-device-configmap.yaml), add your FortiManager device information in the data section: `tigera.firewall.fortimanager-policies`. For example:
-
- ```yaml noValidation
- tigera.firewall.fortimanager-policies: |
- - name: prod-east1
- ip: 3.2.1.4
- username: api_user
- adom: root
- tier:
- packagename: sacramento
- password:
- secretKeyRef:
- name: fortimgr-east1
- key: pwd-fortimgr-east1
- ```
-
- Where:
-
- | Field | Description |
- | -------------------------- | ---------------------------------------------------------------------------------------------------------------- |
- | name | FortiManager device name. |
- | ip | FortiManager Management IP address. |
- | adom | FortiManager ADOM name that manages Kubernetes cluster. |
- | packagename | FortiManager Firewall package. All firewall rules targeted for Kubernetes cluster are packed under this package. |
- | username | JSON api access account name to Read/Write FortiManager address objects. |
- | password | Secret in tigera-firewall-controller namespace, to store FortiManager password |
- | tier | Tier name you created in $[prodname] Manager UI |
- | password.secretKeyRef.name | Name of the secret to store password. |
- | password.secretKeyRef.key | Key name in the secret, which stores password. |
-
-1. Apply the manifest.
-
- ```bash
- kubectl apply -f $[filesUrl_CE]/manifests/fortimanager-device-configmap.yaml
- ```
-
-### Install FortiManager password as secrets
-
-Store each FortiManager password as a secret in the `tigera-firewall-controller` namespace.
-
-For example, for the FortiManager device `prod-east1` in the ConfigMap, store its password in a secret named `fortimgr-east1`, with the key `pwd-fortimgr-east1`.
-
-```bash
-kubectl create secret generic fortimgr-east1 \
--n tigera-firewall-controller \
---from-literal=pwd-fortimgr-east1=
-```
-
-### Deploy the firewall controller in the Kubernetes cluster
-
-1. Install your pull secret.
-
- ```bash
- kubectl create secret generic tigera-pull-secret \
- --from-file=.dockerconfigjson= \
- --type=kubernetes.io/dockerconfigjson -n tigera-firewall-controller
- ```
-
-1. Apply the manifest.
-
- ```bash
- kubectl apply -f $[filesUrl_CE]/manifests/fortimanager.yaml
- ```
-
-## Verify the integration
-
-1. Log in to FortiManager with the correct ADOM.
-2. Select **Policy & Objects**, **Object Configuration**, and create new **Address Groups**.
-3. Click **Policy packages** and select the Package assigned to your Kubernetes cluster.
-4. Create a test firewall policy with the following fields: Name, IPv4 Source Address, IPv4 Destination Address, Service and Action.
-5. Log in to the $[prodname] Manager UI, and under the tier that you specified in the ConfigMap, verify that the GlobalNetworkPolicies are created.
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-firewalls/fortinet-integration/index.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/policy-firewalls/fortinet-integration/index.mdx
deleted file mode 100644
index ca9dfa9f85..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-firewalls/fortinet-integration/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Calico Cloud Fortinet firewall integrations.
-hide_table_of_contents: true
----
-
-# Fortinet firewall integrations
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-firewalls/fortinet-integration/overview.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/policy-firewalls/fortinet-integration/overview.mdx
deleted file mode 100644
index cd7cf115cb..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-firewalls/fortinet-integration/overview.mdx
+++ /dev/null
@@ -1,38 +0,0 @@
----
-description: Learn how to integrate Kubernetes clusters with existing Fortinet firewall workflows using Calico Cloud.
----
-
-# Determine the best Calico Cloud/Fortinet solution
-
-## Big picture
-
-Determine the best $[prodname]/Fortinet solution to integrate Kubernetes clusters with your existing Fortinet firewall workflows.
-
-## Value
-
-Many security teams must work within the confines of their existing IT security architecture, even though perimeter firewalls do not meet the needs of Kubernetes clusters. The $[prodname]/Fortinet integration allows firewall administrators to leverage existing Fortinet security tools and workflows, continue meeting compliance requirements, while adopting Kubernetes orchestration using $[prodname] at their own pace.
-
-### Concepts
-
-The $[prodname]/Fortinet integration provides the following solutions. You can use them separately or together without contention.
-
-### Solution 1: Extend Kubernetes to Fortinet firewall devices
-
-**Use case**: Control egress traffic for Kubernetes clusters.
-
-**Problem**: Perimeter firewalls do not have the necessary information to act on traffic that leaves the cluster for Kubernetes workloads.
-
-**Solution**: The $[prodname]/Fortinet integration leverages the power of $[prodname] policy selectors to provide Kubernetes workload information to FortiManager and FortiGate devices. You create perimeter firewall policies in FortiManager and FortiGate that reference Kubernetes workloads. Policies are applied and enforced by FortiGate devices, and firewall administrators can write cluster egress policies that reference Kubernetes workloads directly in Fortinet devices.
-
-### Solution 2: Extend FortiManager firewall policies to Kubernetes
-
-**Use case**: Control Kubernetes clusters directly and apply policy.
-
-**Problem**: To avoid disruption, teams need to leverage existing FortiManager as the primary user interface.
-
-**Solution**: Use FortiManager to create firewall policies that are applied as $[prodname] network policies on Kubernetes workloads. Use the power of a $[prodname] “higher-order tier” so Kubernetes policy is evaluated early in the policy processing order, but update policy using FortiManager UI. Use the $[prodname] Manager UI as a secondary interface to verify the integration and troubleshoot using logs.
-
-## Next steps
-
-- [Extend Kubernetes to Fortinet firewall devices](firewall-integration.mdx)
-- [Extend FortiManager firewall policies to Kubernetes](fortimgr-integration.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-firewalls/index.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/policy-firewalls/index.mdx
deleted file mode 100644
index 363fe45624..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-firewalls/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Use Calico Cloud policy with existing firewalls.
-hide_table_of_contents: true
----
-
-# Policy for firewalls
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-tiers/allow-tigera.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/policy-tiers/allow-tigera.mdx
deleted file mode 100644
index 3974f0d49b..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-tiers/allow-tigera.mdx
+++ /dev/null
@@ -1,75 +0,0 @@
----
-description: Understand how to change the behavior of the allow-tigera tier.
----
-
-# Change allow-tigera tier behavior
-
-:::warning
-The `allow-tigera` tier contains policies that secure $[prodname] components and is critical to cluster integrity. It is controlled by the Tigera Operator; policies in the tier should not be edited, and the tier should not be moved. Although you can change the behavior of the `allow-tigera` tier using adjacent tiers, you can inadvertently break critical cluster traffic. We highly recommend that you work with Support to implement changes around `allow-tigera` to prevent service disruption.
-
-:::
-
-## Big picture
-
-Change traffic behavior of the tier that secures $[prodname] components.
-
-## Value
-
-Although the tier that secures $[prodname] components cannot be changed, you can create policies in adjacent tiers to change its behavior.
-
-## Concepts
-
-$[prodname] automatically creates the `allow-tigera` tier during installation with network policies that select traffic to and from Tigera components. These policies ensure that traffic required for $[prodname] operation is allowed, and that any unnecessary traffic involving Tigera components is denied. This tier prevents disruption of $[prodname] functionality in case of network policy misconfiguration impacting Tigera components, and denies unexpected traffic in case of defect or compromise.
-
-### Ownership and management of allow-tigera
-
-Tigera defines the `allow-tigera` tier and manages the policies within it. The Tigera Operator installs and monitors these policies, ensuring they always match the state defined by Tigera. Management by the Operator also ensures integrity for upgrades.
-
-:::note
-
-The `allow-tigera` tier and its policies should not be edited, and the tier should not be moved. However, if you inadvertently make changes they are automatically reverted by the Operator to ensure your cluster is always protected.
-
-:::
-
-## Tutorial
-
-### Change behavior of allow-tigera
-
-If you want to change the way traffic is enforced by the `allow-tigera` tier, you must create policy in an adjacent tier to meet your needs. For example, if a policy in the `allow-tigera` tier allows or denies traffic, and you want to change how that traffic is enforced, you can create a policy in a tier before `allow-tigera` that selects the same traffic to make your desired changes. Similarly, if a policy in the `allow-tigera` tier passes or does not select traffic that you want to enforce, you can create a policy in a tier after `allow-tigera` to select this traffic to meet the desired behavior.
-
-### Example: use preceding tier to tighten security
-
-Let's say an `allow-tigera` policy allows ingress traffic from a $[prodname] component that you do not use, and you want to tighten enforcement to not allow this traffic.
-
-Within a tier that comes before `allow-tigera`, you can create a policy that selects the same endpoint and contains ingress rules that deny traffic from that component and pass to `allow-tigera` for traffic from other components.
-
-```yaml
- # allow-tigera.es-gateway-access allows ingress from deep packet inspection, a feature not utilized for the purpose of this example.
- # This policy tightens the scope of allowed ingress to es-gateway without modifying the allow-tigera policy directly.
-
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: preceding-tier.es-gateway-access
- namespace: tigera-elasticsearch
-spec:
- # Place in a tier prior to allow-tigera.
- tier: preceding-tier
-
- # Select the same endpoint as the original policy.
- selector: k8s-app == 'tigera-secure-es-gateway'
- ingress:
- # Select the same component ingress.
- - source:
- selector: k8s-app == 'tigera-dpi'
- namespaceSelector: name == 'tigera-dpi'
- # Enact different behavior (originally: Allow)
- action: Deny
-
- # Defer to allow-tigera for other ingress/egress decisions for this endpoint.
- - action: Pass
-```
-
-This example shows how you can change the impact of the `allow-tigera` tier on traffic without modifying the tier itself. This makes your changes more maintainable, and allows the allow-tigera tier to continue to receive updates as $[prodname] evolves without you needing to reconcile your changes each release.
-
-For help to manage or change the behavior of the `allow-tigera` tier, contact Tigera Support.
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-tiers/index.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/policy-tiers/index.mdx
deleted file mode 100644
index 9e9dc39a52..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-tiers/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Learn how policy tiers allow diverse teams to securely manage Kubernetes policy.
-hide_table_of_contents: true
----
-
-# Policy tiers
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-tiers/policy-tutorial-ui.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/policy-tiers/policy-tutorial-ui.mdx
deleted file mode 100644
index 458ac1c5e7..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-tiers/policy-tutorial-ui.mdx
+++ /dev/null
@@ -1,244 +0,0 @@
----
-description: Covers the basics of Calico Cloud network policy.
----
-
-# Network policy tutorial
-
-## What you will learn:
-
-- How to create a policy in Manager UI
-- How labels and selectors work
-- Basics of policy ordering and tiers
-
-## Scenario
-
-Let's start with a sample Kubernetes cluster.
-
-![policy-tutorial-overview](/img/calico-enterprise/policy-tutorial-overview.png)
-
-| Item | Description |
-| ------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| Kubernetes cluster | A Kubernetes cluster with four namespaces and three nodes that run the pods in the cluster. |
-| Namespace | Four namespaces named blue, red, green, and purple represent different applications running in the cluster. |
-| Pod | Pods with meaningful labels for our applications: - FE (frontend pods) - BE (backend pods) |
-| NetworkSet | An arbitrary set of IP subnetworks/CIDRs or domains that can be matched by standard label selectors in Kubernetes or $[prodname] network policy. Network sets are a $[prodname] namespaced resource. |
-| GlobalNetworkSet | An arbitrary set of IP subnetworks/CIDRs or domains that can be matched by standard label selectors in Kubernetes or $[prodname] network policy. Global network sets are a $[prodname] global resource. |
-| ServiceAccount | Provides an identity for processes that run in a pod. Service accounts are a Kubernetes namespaced resource. |
-| HostEndpoint (HEP) | Physical or virtual interfaces attached to a host that runs $[prodname]. HEPs enforce $[prodname] policy on the traffic that enters or leaves the host’s default network namespace through the interfaces. HEPs are a $[prodname] global resource. |
-| External component | A machine (physical or virtual) that runs outside of the Kubernetes cluster. |
-
-## Create a network policy
-
-To follow along in Manager UI, click **Policies**.
-
-There are three main parts to every $[prodname] policy:
-
-- **Scope** - namespace or global
-- **Applies to** - objects within the above scope to which policy rules will be applied using labels and selectors
-- **Type** - whether this policy affects ingress, egress, or both
- - Ingress - policy rules to apply to connections inbound to the selected objects
- - Egress - policy rules to apply to connections outbound from the selected objects
-
-![policy-parts](/img/calico-enterprise/policy-parts.png)
-
-Let's look at each part.
-
-## Scope
-
-Scope defines the reach of your policy. Use this dropdown to determine whether your policy applies globally or to a specific namespace. Think of scope as the "top-level scope" that can be further specified using the "Applies to" selection that follows.
-
-- **Global**
-
- If you select global, but do not add entries in the **Applies to** field to further limit scope, _every pod and host endpoint (HEP) in our cluster would be in scope_. The following example uses the global option to limit the scope to all pods and HEPs (noted by check marks).
-
- ![policy-tutorial-global-scope](/img/calico-enterprise/policy-tutorial-global-scope.png)
-
-- **Namespace**
-
- If you select namespace, but do not add entries in the **Applies to** field to further limit scope, _every pod in this policy's namespace would be in scope_. The following example uses the namespace option to limit the scope to pods in the RED namespace.
-
- ![policy-tutorial-namespace-scope](/img/calico-enterprise/policy-tutorial-namespace-scope.png)
-
-### Applies to
-
-As discussed above, selecting **Applies to** lets you further limit pods in a policy. You can think of it as the "top-level endpoint selector". You define labels on your endpoints, namespaces, and service accounts, then use label selectors to limit connections by matching the following object types:
-
-- **Endpoints**
-
- Specify one or more label selectors to match specific endpoints, or select all endpoints
-
-- **Namespaces** (available only when the Scope is global)
-
- Specify one or more label selectors to match specific namespaces, or select all namespaces
-
-- **Service Accounts**
-
-  Specify one or more label selectors to match specific service accounts, or select all service accounts
-
-For example, if we select the BLUE namespace and apply it to only pods with the label, `app/tier == FE`,
-
-![blue-namespace](/img/calico-enterprise/blue-namespace.png)
-
-the resulting scope in our diagram would be only the pods labeled, `FE`:
-
-![blue-namespace-pods](/img/calico-enterprise/blue-namespace-pods.png)
-
-### Type
-
-In the Type section, you specify whether the policy impacts ingress, egress, or both.
-
-Note that ingress and egress are defined from the point of view of the _scoped endpoints_ (pods or host endpoints). In the previous diagram, the scoped endpoints are the three pods labeled, `app/tier:FE`.
-
-- Ingress rules filter traffic _coming to_ the scoped endpoints
-- Egress rules filter traffic _leaving_ the scoped endpoints
-
-Select the **Ingress** rule, and click **+ Add ingress rule** to access the **Create New Policy rules** page.
-
-### Endpoint selector
-
-The endpoint selector lets you select the endpoint traffic that is matched within the scope you've defined in the policy.
-
-In our example, the policy is scoped to endpoints that have the `app/tier == FE` label in the BLUE namespace. In the context of an ingress rule, when we add the `app/tier == BE` endpoint selector, all TCP traffic from endpoints that have the `app/tier == BE` label will be allowed to the `app/tier == FE` endpoints.
-
-![policy-tutorial-endpoint-selector](/img/calico-enterprise/policy-tutorial-endpoint-selector.png)
-
-Note that endpoints that have the `app/tier == BE` label in other namespaces are not matched because the policy is namespace scoped.
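-
-For reference, here is a rough YAML sketch of the policy and ingress rule described above. The policy name, tier, and namespace are illustrative assumptions; adjust them for your cluster.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
-  # Illustrative name; tiered policy names are prefixed with the tier name.
-  name: default.blue-fe-ingress
-  namespace: blue
-spec:
-  tier: default
-  # Applies to: FE pods in the BLUE namespace.
-  selector: app/tier == 'FE'
-  types:
-    - Ingress
-  ingress:
-    # Rule endpoint selector: allow TCP from BE endpoints in the same namespace.
-    - action: Allow
-      protocol: TCP
-      source:
-        selector: app/tier == 'BE'
-```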
-
-### Namespace selector
-
-This is where things can get interesting. In the previous example, we did not select anything in the namespace selector. Let's change the namespace selector to have both the BLUE and GREEN namespaces.
-
-![endpoint-selector-blue-green](/img/calico-enterprise/endpoint-selector-blue-green.png)
-
-Although the overall policy is scoped to the BLUE namespace, we can match endpoints in other namespaces on a per-rule basis. Note that the top-level scope that you select remains unchanged, meaning that the policy is still applied only to endpoints in the BLUE namespace.
-
-![namespace-selector](/img/calico-enterprise/namespace-selector.png)
-
-### Network selector
-
-Using the Nets selector, we can add CIDR addresses to be matched by the policy rule.
-
-![network-selector](/img/calico-enterprise/network-selector.png)
-
-### Service account selector
-
-Network policies can be also applied to the endpoint’s service account.
-
-Using the service account selector, we can apply rules to traffic from any endpoint whose service account matches the name or label selector.
-
-![service-account-selector](/img/calico-enterprise/service-account-selector.png)
-
-### Use Match All for wider matches in policy rules
-
-The **Match All** policy rule (`all()` in YAML) matches traffic for:
-
-- All endpoints in a namespace (if the policy scope is namespace)
-- All endpoints (if the policy scope is global)
-
-Let's look at an example of using **Match All** traffic in a namespaced policy:
-
-- Scope is namespaced (BLUE)
-- Applies to `app/tier == FE`
-
-Suppose we want to match traffic to the pod labeled `BE`, and the $[prodname] `networkset-1`.
-
-![match-all-namespace](/img/calico-enterprise/match-all-namespace.png)
-
-To do this, we can use the policy rule endpoint selector, **Match All**.
-
-![match-all-endpoints](/img/calico-enterprise/match-all-endpoints.png)
-
-Not only is the pod labeled `BE` included, but also the $[prodname] `networkset-1`.
-
-![match-all-endpoints-example](/img/calico-enterprise/match-all-endpoints-example.png)
-
-Note that we could have instead created individual selectors to match pods labeled `BE` and the `networkset-1` network set.
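-
-As a rough YAML sketch of the Match All rule above (the policy name is an illustrative assumption), the rule uses the `all()` selector to match every endpoint and network set in the policy's namespace:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
-  # Illustrative name.
-  name: default.blue-fe-match-all
-  namespace: blue
-spec:
-  tier: default
-  selector: app/tier == 'FE'
-  types:
-    - Egress
-  egress:
-    # Match All: the destination selector all() matches the BE pod and
-    # networkset-1, because both are in the BLUE namespace.
-    - action: Allow
-      destination:
-        selector: all()
-```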
-
-**Match All traffic with namespace selectors**
-
-In the following example, if we select **Match All** endpoints, but in the **Namespace selector**, we select both the BLUE and GREEN namespaces, the results for matching are: all pods and network sets in the BLUE and GREEN namespaces.
-
-![namespace-match-all](/img/calico-enterprise/namespace-match-all.png)
-
-**Global selector**
-
-Let's see what happens when we select the **Global** selector.
-
-![namespace-selector-global](/img/calico-enterprise/namespace-selector-global.png)
-
-In our example, the Global selector selects HEPs and global network sets. You might think that Global (`global()` in YAML) would select all endpoints, but it doesn't. Global means "do not select any namespaced resources" (which includes namespaced network set resources). Another way to express it is: do not select any workload endpoints.
-
-![heps-networksets](/img/calico-enterprise/heps-networksets.png)
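-
-As a rough YAML sketch (the policy name is an illustrative assumption), a rule using the Global selector sets `namespaceSelector: global()`, which matches only non-namespaced resources:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
-  # Illustrative name.
-  name: default.blue-fe-from-global
-  namespace: blue
-spec:
-  tier: default
-  selector: app/tier == 'FE'
-  types:
-    - Ingress
-  ingress:
-    - action: Allow
-      source:
-        # global() matches only non-namespaced resources:
-        # host endpoints and global network sets.
-        namespaceSelector: global()
-```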
-
-**Endpoint selector, unspecified**
-
-Next, let's see what happens when the policy rule does not specify any selection criteria. In this example, the rule selects all workloads, network sets, endpoints, and host endpoints within scope of the policy, including external components (the VM database).
-
-![unspecified](/img/calico-enterprise/unspecified.png)
-
-Now that you know the basic elements of a network policy, let's move on to policy ordering and tiers.
-
-## Policy ordering
-
-$[prodname] policies can have order values that control the order of precedence. For both network policies and global network policies, $[prodname] applies the policy with the lowest value first.
-
-![policy-ordering](/img/calico-enterprise/policy-ordering.png)
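-
-The order value is set in the policy spec. Here is a minimal sketch (the policy name and selector are illustrative assumptions); a policy with `order: 100` is evaluated before one with `order: 200` in the same tier:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  # Illustrative name; tiered policy names are prefixed with the tier name.
-  name: default.quarantine-deny
-spec:
-  tier: default
-  # Lower order values are evaluated first within the tier.
-  order: 100
-  selector: quarantine == 'true'
-  types:
-    - Ingress
-  ingress:
-    - action: Deny
-```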
-
-### Mixing Kubernetes and $[prodname] policies
-
-Kubernetes and $[prodname] policies work side by side without a problem. However, Kubernetes network policies cannot specify an order value, so $[prodname] assigns an implicit order value of 1000 to any Kubernetes network policy.
-
-:::note
-Policies are immediately applied to any new connections. However, for existing connections that are already open, the policy changes will only take effect after the connection has been reestablished. This means that any ongoing sessions may not immediately reflect policy changes until they are initiated again.
-:::
-
-### $[prodname] policies with no order value
-
-$[prodname] policies with order values take precedence. Policies without order values take lesser precedence and are processed alphabetically.
-
-## Tiers
-
-Tiers are a hierarchical construct used to group policies and enforce higher precedence policies that cannot be circumvented by other teams. Access to tiers is controlled using user role permissions. For example, a security team can implement high-level policy (for example, blocking access to/from IP ranges in particular countries), while developers in a later tier can control specific rules for the microservices of an app running in the cluster.
-
-### Policy processing overview
-
-When a new connection is processed by $[prodname], each tier that contains a policy that selects the endpoint processes the connection. Tiers are sorted by their order - the smallest number first. Policies in each tier are then processed in order from lowest to highest. For example, a policy with order 800 is processed before a policy with order 1000.
-
-- If a network policy or global network policy in the tier allows or denies the connection, then evaluation is done: the connection is handled accordingly.
-
-- If a network policy or global network policy in the tier passes the connection, the next tier containing a policy that selects the endpoint processes the connection
-
-After a Pass action, if no subsequent tier contains any policies that apply to the pod, the connection is allowed.
-
-If the tier contains policies that apply to the endpoint, but the policies take no action on the connection, the connection is dropped by an implicit deny.
-
-If no tiers contain policies that apply to the endpoint, the connection is allowed by an implicit allow.
-
-### Policies with no order value
-
-You can create policies without an order value. When a policy with no order value is placed in a tier with other policies that do have an order value, the policies are processed as follows:
-
-- Policies are evaluated from smallest to largest order value within the tier
-- Policies with no order value are processed last in the tier, but before the implicit deny
-- When multiple policies without an order value are present in a tier, they are processed in alphabetical order. However, we do not recommend relying on alphabetical ordering because it is hard to operationalize.
-
-### How policy action rules affect traffic processing
-
-It is also important to understand that $[prodname] policy action rules affect how traffic and connections are processed. Let's go back to the drop-down menu on the Create New Policy Rule page.
-
-Action defines what should happen when a connection matches this rule.
-
-![policy-tutorial-action](/img/calico-enterprise/policy-tutorial-action.png)
-
-- **Allow or Deny** - traffic is allowed or denied and the connection is handled accordingly. No further rules are processed.
-- **Pass** - skips to the next tier that contains a policy that applies to the endpoint, and processes the connection. If the tier applies to the endpoint but no action is taken on the connection, the connection is dropped.
-- **Log** - creates a log, and evaluation continues processing to the next rule
-
-## Additional resources
-
-The following topics go into further detail about concepts described in this tutorial:
-
-- [Get started with network policy](../beginners/calico-network-policy.mdx)
-- [Service account selectors](../beginners/policy-rules/service-accounts.mdx)
-- [Get started with tiered network policy](tiered-policy.mdx)
-- [Get started with network sets](../networksets.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-tiers/rbac-tiered-policies.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/policy-tiers/rbac-tiered-policies.mdx
deleted file mode 100644
index 7258ca452e..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-tiers/rbac-tiered-policies.mdx
+++ /dev/null
@@ -1,518 +0,0 @@
----
-description: Configure RBAC to control access to policies and tiers.
----
-
-# Configure RBAC for tiered policies
-
-## Big picture
-
-Configure fine-grained user access controls for tiered policies.
-
-## Value
-
-Self-service is an important part of CI/CD processes for containerization and microservices. $[prodname] provides fine-grained access control (RBAC) for:
-
-- $[prodname] policy and tiers
-- Kubernetes network policy
-
-## Concepts
-
-### Standard Kubernetes RBAC
-
-$[prodname] implements the standard **Kubernetes RBAC Authorization APIs** with `Role` and `ClusterRole` types. The $[prodname] API server integrates with Kubernetes RBAC Authorization APIs as an extension API server.
-
-### RBAC for policies and tiers
-
-In $[prodname], global network policy and network policy resources are associated with a specific tier. Admins can configure access control for these $[prodname] policies using standard Kubernetes `Role` and `ClusterRole` resource types. This makes it easy to manage RBAC for both Kubernetes network policies and $[prodname] tiered network policies. RBAC permissions include managing resources using $[prodname] Manager, and `kubectl`.
-
-### Fine-grained RBAC for policies and tiers
-
-RBAC permissions can be split by resources ($[prodname] and Kubernetes), and by actions (CRUD). Tiers should be created by administrators. Full CRUD operations on tiers are synonymous with full management of network policy. Full management of network policy and global network policy also requires 1) `GET` permission on any tier a user can view/manage, and 2) the required access to the tiered policy resources.
-
-Here are a few examples of how you can fine-tune RBAC for tiers and policies.
-
-| **User** | **Permissions** |
-| --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| Admin | The default **tigera-network-admin** role lets you create, update, delete, get, watch, and list all $[prodname] resources (full control). Examples of limiting Admin access: list tiers only; list only specific tiers. |
-| Non-Admin | The default **tigera-ui-user** role allows users to only list $[prodname] policy and tier resources. Examples of limiting user access: read-only access to all policy resources across all tiers, but write access only for NetworkPolicies in a specific tier and namespace; perform any operations on NetworkPolicies and GlobalNetworkPolicies; list tiers only; list or modify any policies in any tier, but fully manage only Kubernetes network policies in the **default** tier, in the **default** namespace, with read-only access for all other tiers. |
-
-### RBAC definitions for $[prodname] network policy
-
-To specify per-tier RBAC for the $[prodname] network policy and $[prodname] global network policy, use pseudo resource kinds and names in the `Role` and `ClusterRole` definitions. For example,
-
-```yaml
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: tier-default-reader
-rules:
- - apiGroups: ['projectcalico.org']
- resources: ['tiers']
- resourceNames: ['default']
- verbs: ['get']
- - apiGroups: ['projectcalico.org']
- resources: ['tier.networkpolicies']
- resourceNames: ['default.*']
- verbs: ['get', 'list']
-```
-
-Where:
-
-- **resources**: `tier.globalnetworkpolicies` and `tier.networkpolicies`
-- **resourceNames**:
- - Blank - any policy of the specified kind across all tiers.
-  - `<tier-name>.*` - any policy of the specified kind within the named tier.
-  - `<tier-name>.<policy-name>` - the specific policy of the specified kind. Because the policy name is prefixed with the tier name, this also specifies the tier.
-
-## Before you begin...
-
-**Required**
-
-A **cluster-admin** role with full permissions to create and modify resources.
-
-**Recommended**
-
-A rough idea of your tiered policy workflow, and who should access what. See [Configure tiered policies](tiered-policy.mdx).
-
-## How to
-
-- [Create Admin users, full permissions](#create-admin-users-full-permissions)
-- [Create minimum permissions for all non-Admin users](#create-minimum-permissions-for-all-non-admin-users)
-
-:::note
-
-`kubectl auth can-i` cannot be used to check RBAC for tiered policy.
-
-:::
-
-### Create Admin users, full permissions
-
-Create an Admin user with full access to the $[prodname] Manager (as well as everything else in the cluster) using the following command. See the Kubernetes documentation to identify users based on your chosen [authentication method](https://kubernetes.io/docs/reference/access-authn-authz/authentication/), and how to use the [RBAC resources](https://kubernetes.io/docs/reference/access-authn-authz/rbac/).
-
-```bash
-kubectl create clusterrolebinding permissive-binding \
- --clusterrole=cluster-admin \
- --user=
-```
-
-### Create minimum permissions for all non-Admin users
-
-All users of $[prodname] Manager should be able to create `authorizationreviews` and `authorizationrequests`, as well as access license information through the `services/proxy` resource `https:tigera-api:8080`.
-
-1. Download the [min-ui-user-rbac.yaml manifest]($[tutorialFilesURL]/min-ui-user-rbac.yaml).
- The roles and bindings in this file provide a minimum starting point for setting up RBAC for your users according to your specific security requirements.
- This manifest provides basic RBAC to view some statistical data in the UI but does not provide permissions to
- view or modify any network policy related configuration.
-
-1. Run the following command to replace `<USER>` with the name or email of the user you are providing permissions to:
-
- ```bash
-   sed -i -e 's/<USER>/<name or email>/g' min-ui-user-rbac.yaml
- ```
-
-1. Use the following command to install the bindings:
-
- ```bash
- kubectl apply -f min-ui-user-rbac.yaml
- ```
-
-## Tutorial
-
-This tutorial shows how to use RBAC to control access to resources and CRUD actions for a non-Admin user, John, with the username **john**.
-
-The RBAC examples shown will include:
-
-- [User cannot read policies in any tier](#user-cannot-read-policies-in-any-tier)
-- [User can view all policies, and modify policies in the default namespace and tier](#user-can-view-all-policies-and-modify-policies-in-the-default-namespace-and-tier)
-- [User can read policies only in both the default tier and namespace](#user-can-read-policies-only-in-both-the-default-tier-and-namespace)
-- [User can read policies only in both a specific tier and in the default namespace](#user-can-read-policies-only-in-both-a-specific-tier-and-in-the-default-namespace)
-- [User can only view a specific tier](#user-can-only-view-a-specific-tier)
-- [User can read all policies across all tiers and namespaces](#user-can-read-all-policies-across-all-tiers-and-namespaces)
-- [User has full control over policies only in both a specific tier and in the default namespace](#user-has-full-control-over-policies-only-in-both-a-specific-tier-and-in-the-default-namespace)
-
-### User cannot read policies in any tier
-
-User 'john' is forbidden from reading policies in any tier (**default** tier, and **net-sec** tier).
-
-When John issues the following command:
-
-```bash
-kubectl get networkpolicies.p
-```
-
-It returns:
-
-```
-Error from server (Forbidden): networkpolicies.projectcalico.org is forbidden: User "john" cannot list networkpolicies.projectcalico.org in tier "default" and namespace "default" (user cannot get tier)
-```
-
-Similarly, when John issues this command:
-
-```bash
-kubectl get networkpolicies.p -l projectcalico.org/tier==net-sec
-```
-
-It returns:
-
-```
-Error from server (Forbidden): networkpolicies.projectcalico.org is forbidden: User "john" cannot list networkpolicies.projectcalico.org in tier "net-sec" and namespace "default" (user cannot get tier)
-```
-
-:::note
-
-The `.p` extension (`networkpolicies.p`) is short for `networkpolicies.projectcalico.org` and is used to
-differentiate it from the Kubernetes NetworkPolicy resource and
-the underlying CRDs (if using the Kubernetes Datastore Driver).
-
-:::
-
-:::note
-
-The label for selecting a tier is `projectcalico.org/tier`.
-When a label selector is not specified, the server defaults the selection to the
-`default` tier. Alternatively, a field selector (`spec.tier`) may be used to select
-a tier.
-
-```bash
-kubectl get networkpolicies.p --field-selector spec.tier=net-sec
-```
-
-:::
-
-### User can view all policies, and modify policies in the default namespace and tier
-
-1. Download the [`read-all-crud-default-rbac.yaml` manifest]($[tutorialFilesURL]/read-all-crud-default-rbac.yaml).
-
-1. Run the following command to replace `<USER>` with the name or email of
-   the user you are providing permissions to:
-
- ```bash
-   sed -i -e 's/<USER>/<name or email>/g' read-all-crud-default-rbac.yaml
- ```
-
-1. Use the following command to install the bindings:
-
- ```bash
- kubectl apply -f read-all-crud-default-rbac.yaml
- ```
-
-The roles and bindings in this file provide the permissions to read all policies across all tiers and to fully manage
-policies in the **default** tier and **default** namespace. This file includes the minimum required `ClusterRole` and `ClusterRoleBinding` definitions for all UI users (see `min-ui-user-rbac.yaml` above).
-
-### User can read policies only in both the default tier and namespace
-
-In this example, we give user 'john' permission to read policies only in both the **default** tier and namespace.
-
-```yaml
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: tigera-example-get-default-tier
-rules:
- # To access Calico policy in a tier, the user requires "get" access to that tier.
-- apiGroups: ["projectcalico.org"]
- resources: ["tiers"]
- resourceNames: ["default"]
- verbs: ["get"]
-
----
-
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: tigera-example-read-policies-in-default-tier
-rules:
- # This allows "get" and "list" of the Calico NetworkPolicy resources in the default tier.
-- apiGroups: ["projectcalico.org"]
- resources: ["tier.networkpolicies"]
- resourceNames: ["default.*"]
- verbs: ["get", "list"]
-
----
-
-# tigera-example-get-default-tier is applied globally
-kind: ClusterRoleBinding
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: john-can-get-default-tier
-subjects:
-- kind: User
- name: john
- apiGroup: rbac.authorization.k8s.io
-roleRef:
- kind: ClusterRole
- name: tigera-example-get-default-tier
- apiGroup: rbac.authorization.k8s.io
-
----
-
-# tigera-example-read-policies-in-default-tier is applied per-namespace
-kind: RoleBinding
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: john-can-read-policies-in-default-tier-and-namespace
-subjects:
-- kind: User
- name: john
- apiGroup: rbac.authorization.k8s.io
-roleRef:
- kind: ClusterRole
- name: tigera-example-read-policies-in-default-tier
- apiGroup: rbac.authorization.k8s.io
-```
-
-With the above, user john is able to list all NetworkPolicy resources in the **default** tier:
-
-```bash
-kubectl get networkpolicies.p --all-namespaces
-```
-
-With some example policies on the cluster, this returns:
-
-```
-NAMESPACE NAME CREATED AT
-blue default.calico-np-blue-ns-default-tier 2021-07-26T09:05:11Z
-default default.calico-np-default-ns-default-tier 2021-07-26T09:05:11Z
-green default.calico-np-green-ns-default-tier 2021-07-26T09:05:13Z
-red default.calico-np-red-ns-default-tier 2021-07-26T09:05:12Z
-yellow default.calico-np-yellow-ns-default-tier 2021-07-26T09:05:13Z
-```
-
-As intended, user john can only examine policy details in the **default** namespace; attempting to read a policy in the **green** namespace fails:
-
-```bash
-kubectl get networkpolicies.p default.calico-np-green-ns-default-tier -o yaml -n=green
-```
-
-Correctly returns:
-
-```
-Error from server (Forbidden): networkpolicies.projectcalico.org "default.calico-np-green-ns-default-tier" is forbidden: User "john" cannot get networkpolicies.projectcalico.org in tier "default" and namespace "green"
-```
-
-John also still cannot access tier **net-sec**, as intended:
-
-```bash
-kubectl get networkpolicies.p -l projectcalico.org/tier==net-sec
-```
-
-This returns:
-
-```
-Error from server (Forbidden): networkpolicies.projectcalico.org is forbidden: User "john" cannot list networkpolicies.projectcalico.org in tier "net-sec" and namespace "default" (user cannot get tier)
-```
-
-### User can read policies only in both a specific tier and in the default namespace
-
-Let's assume that the kubernetes-admin gives user 'john' permission to list the policies in tier **net-sec**, but to examine the details only of policies that are also in the **default** namespace.
-To provide these permissions to user 'john', use the following `ClusterRoles`, `ClusterRoleBinding`, and `RoleBinding`.
-
-```yaml
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: tigera-example-get-net-sec-tier
-rules:
- # To access Calico policy in a tier, the user requires "get" access to that tier.
-- apiGroups: ["projectcalico.org"]
- resources: ["tiers"]
- resourceNames: ["net-sec"]
- verbs: ["get"]
-
----
-
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: tigera-example-read-policies-in-net-sec-tier
-rules:
- # This allows "get" and "list" of the Calico NetworkPolicy resources in the net-sec tier.
-- apiGroups: ["projectcalico.org"]
- resources: ["tier.networkpolicies"]
- resourceNames: ["net-sec.*"]
- verbs: ["get", "list"]
-
----
-
-# tigera-example-get-net-sec-tier is applied globally
-kind: ClusterRoleBinding
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: john-can-get-net-sec-tier
-subjects:
-- kind: User
- name: john
- apiGroup: rbac.authorization.k8s.io
-roleRef:
- kind: ClusterRole
- name: tigera-example-get-net-sec-tier
- apiGroup: rbac.authorization.k8s.io
-
----
-
-# tigera-example-read-policies-in-net-sec-tier is applied per-namespace
-kind: RoleBinding
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: john-can-read-policies-in-net-sec-tier-and-namespace
-subjects:
-- kind: User
- name: john
- apiGroup: rbac.authorization.k8s.io
-roleRef:
- kind: ClusterRole
- name: tigera-example-read-policies-in-net-sec-tier
- apiGroup: rbac.authorization.k8s.io
-```
-
-### User can only view a specific tier
-
-In this example, the following `ClusterRole` and `ClusterRoleBinding` can be used to provide 'get' access to the **net-sec**
-tier. This has the effect of making the **net-sec** tier visible in the $[prodname] Manager (including listing the names of the policies it contains).
-
-However, to modify or view the details of policies within the **net-sec** tier, additional RBAC permissions would be required.
-
-```yaml
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: tigera-example-make-net-sec-tier-visible
-rules:
- # To access Calico policy in a tier, the user requires "get" access to that tier.
-- apiGroups: ["projectcalico.org"]
- resources: ["tiers"]
- resourceNames: ["net-sec"]
- verbs: ["get"]
-
----
-
-# tigera-example-make-net-sec-tier-visible is applied globally
-kind: ClusterRoleBinding
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: john-can-view-the-net-sec-tier
-subjects:
-- kind: User
- name: john
- apiGroup: rbac.authorization.k8s.io
-roleRef:
- kind: ClusterRole
- name: tigera-example-make-net-sec-tier-visible
- apiGroup: rbac.authorization.k8s.io
-```
-
-### User can read all policies across all tiers and namespaces
-
-In this example, a single `ClusterRole` is used to provide read access to all policy resource types across all tiers. In this case, there is no need to use both `ClusterRoleBindings` and `RoleBindings` to map these abilities to the target user, because the intention is for the permissions to apply to all current and future namespaces on the cluster, so a `ClusterRoleBinding` provides the desired granularity.
-
-```yaml
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: tigera-example-all-tiers-and-namespaces-policy-reader
-rules:
- # To access Calico policy in a tier, the user requires "get" access to that tier.
- # Not specifying any specific "resourceNames" provides access to all tiers.
-- apiGroups: ["projectcalico.org"]
- resources: ["tiers"]
- verbs: ["get"]
- # This allows read access to the Kubernetes NetworkPolicy resources (these are always in the default tier).
-- apiGroups: ["networking.k8s.io", "extensions"]
- resources: ["networkpolicies"]
- verbs: ["get","watch","list"]
- # This allows read access to the Calico NetworkPolicy and GlobalNetworkPolicies.
- # Not specifying any specific "resourceNames" provides access to them in all tiers.
-- apiGroups: ["projectcalico.org"]
- resources: ["tier.networkpolicies","tier.globalnetworkpolicies"]
- verbs: ["get","watch","list"]
-
----
-
-# tigera-example-all-tiers-and-namespaces-policy-reader is applied globally, with a single ClusterRoleBinding,
-# since all the rules it contains apply to all current and future namespaces on the cluster.
-kind: ClusterRoleBinding
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: read-all-tier
-subjects:
-- kind: User
- name: john
- apiGroup: rbac.authorization.k8s.io
-roleRef:
- kind: ClusterRole
- name: tigera-example-all-tiers-and-namespaces-policy-reader
- apiGroup: rbac.authorization.k8s.io
-```
-
-### User has full control over policies only in both a specific tier and in the default namespace
-
-In this example, two `ClusterRole` objects are used to provide full access control of Calico NetworkPolicy
-resource types in the **net-sec** tier:
-
-- The `tiers` resource is bound to a user using a `ClusterRoleBinding`, because it is a global resource.
- This results in the user having the ability to read the contents of the tier across all namespaces.
-- The `networkpolicies` resources are bound to a user using a `RoleBinding`, because the aim in this
- case was to make them CRUD-able only in the default namespace.
- You only need this one `ClusterRole` to be defined, but it can be applied to different namespaces
- using additional `RoleBinding` objects. If the intention was to apply it to all current and future namespaces,
- a `ClusterRoleBinding` could be used.
-
-```yaml
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: tigera-example-get-net-sec-tier
-rules:
- # To access Calico policy in a tier, the user requires "get" access to that tier.
-- apiGroups: ["projectcalico.org"]
- resources: ["tiers"]
- resourceNames: ["net-sec"]
- verbs: ["get"]
-
----
-
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: tigera-example-crud-policies-in-net-sec-tier
-rules:
- # This allows full CRUD access to the Calico NetworkPolicy resources in the net-sec tier.
-- apiGroups: ["projectcalico.org"]
- resources: ["tier.networkpolicies"]
- resourceNames: ["net-sec.*"]
- verbs: ["*"]
-
----
-
-# tigera-example-get-net-sec-tier is applied globally
-kind: ClusterRoleBinding
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: john-can-get-net-sec-tier
-subjects:
-- kind: User
- name: john
- apiGroup: rbac.authorization.k8s.io
-roleRef:
- kind: ClusterRole
- name: tigera-example-get-net-sec-tier
- apiGroup: rbac.authorization.k8s.io
-
----
-
-# tigera-example-crud-policies-in-net-sec-tier is applied per-namespace
-kind: RoleBinding
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
- name: john-can-crud-policies-in-net-sec-tier-and-namespace
-subjects:
-- kind: User
- name: john
- apiGroup: rbac.authorization.k8s.io
-roleRef:
- kind: ClusterRole
- name: tigera-example-crud-policies-in-net-sec-tier
- apiGroup: rbac.authorization.k8s.io
-```
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-tiers/tiered-policy.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/policy-tiers/tiered-policy.mdx
deleted file mode 100644
index 632d768bb4..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-tiers/tiered-policy.mdx
+++ /dev/null
@@ -1,205 +0,0 @@
----
-description: Learn about policies, tiers, and policy evaluation.
----
-
-# Policy tiers tutorial
-
-## Seamless network policy integration
-
-**Network policy** is the primary tool for securing a Kubernetes network. It lets you restrict network traffic in your cluster so only the traffic that you want to flow is allowed. $[prodname] provides more robust policy than Kubernetes, but you can use them together -- seamlessly. $[prodname] supports:
-
-- $[prodname] network policy (namespaced)
-- $[prodname] global network policy (non-namespaced, global)
-- Kubernetes network policy
-
-## Tiers: what and why?
-
-**Tiers** are a hierarchical construct used to group policies and enforce higher precedence policies that cannot be circumvented by other teams. As you will learn in this tutorial, tiers have built-in features that support workload microsegmentation.
-
-All $[prodname] and Kubernetes network policies reside in tiers. You can start "thinking in tiers" by grouping your teams and types of policies within each group. For example, we recommend these three tiers (platform, security, and application).
-
-![policy types](/img/calico-cloud/policy-types.svg)
-
-Next, you can determine the priority of policies in tiers (from top to bottom). In the following example, the platform and security tiers use $[prodname] global network policies that apply to all pods, while developer teams can safely manage pods within namespaces using Kubernetes network policy for their applications and microservices.
-
-![policy tiers](/img/calico-cloud/policy-tiers.png)
-
-## Create a tier and policy
-
-To create a tier and policy in Manager UI:
-
-1. In the left navbar, click **Policies**.
-1. On the **Policies Board**, click **Add Tier**.
-1. Name the tier, select **Order, Add after** `tigera-security`, and save.
-1. To create a policy in the tier, click **+ Add policy**.
-
-You can export all policies or a single policy to a YAML file.
-
-Here is a sample YAML that creates a security tier, followed by the `kubectl` command to apply it.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: Tier
-metadata:
- name: security
-spec:
- order: 300
-```
-
-```bash
-kubectl apply -f security.yaml
-```
-
-## The default tier: always last
-
-The default tier is created during installation and is always the last tier.
-
-![default tier](/img/calico-cloud/default-tier.png)
-
-The default tier is where:
-
-- You manage all Kubernetes network policies
-- Network and global network policies are placed when you upgrade from Project Calico to $[prodname]
-- Recommended policies are placed when you use the **Recommend a policy** feature
-
-## System tiers
-
-System tiers are added during installation and are hidden by default. Always add your tiers after `allow-tigera` and `tigera-security` (in terms of order).
-
-- **allow-tigera** - contains policies that secure $[prodname] components and are controlled by the Tigera Operator. These policies should not be edited, and the tier should not be moved. Inadvertent changes are automatically reverted by the Operator to ensure your cluster is always protected.
-
-:::warning
-
-Although it is possible to change the behavior of the `allow-tigera` tier using adjacent tiers, it is not a trivial task. You can break critical cluster traffic and impact the operation of $[prodname]. To prevent loss of cluster services, see [Change allow-tigera tier behavior](allow-tigera.mdx), and contact Support for help.
-
-:::
-
-- **tigera-security** - contains threat protection policies
-
-## Moving tiers
-
-You can reorder tiers by dragging them in the graphical sequence, but all tiers must be visible before you can reorder them.
-
-To show all tiers, click **View** and select all of the tiers in the Show tiers list.
-
-![hidden tiers](/img/calico-cloud/hidden-tiers.png)
-
-Now you can reorder tiers by dragging and moving them.
-
-## Tier order
-
-Tiers are ordered from left to right, starting with the highest priority (also called highest precedence) tiers.
-
-![tier order](/img/calico-cloud/tier-order.png)
-
-In the example above, tier priorities are as follows:
-
-- **security tier** - is higher priority than platform tier
-- **platform tier** - is higher priority than default tier
-- **default tier** - is always the last tier, and cannot be reordered
-
-The tier you put as the highest priority (after system tiers) depends on your environment. In compliance-driven environments, the security tier may be the highest priority (as shown above). There is no one-size-fits-all order.
-
-## Policy processing
-
-Policies are processed in sequential order from top to bottom.
-
-![policy processing](/img/calico-cloud/policy-processing.png)
-
-Two mechanisms drive how traffic is processed across tiered policies:
-
-- Labels and selectors
-- Policy action rules
-
-It is important to understand the roles they play.
-
-### Labels and selectors
-
-Instead of IP addresses and IP ranges, network policies in Kubernetes depend on labels and selectors to determine which workloads can talk to each other. Workload identity is the same for Kubernetes and $[prodname] network policies: as pods dynamically come and go, network policy is enforced based on the labels and selectors that you define.
-
-The following diagrams show the relationship between all of the elements that affect traffic flow:
-
-- **Tiers** group and order policies
-- **Policy action rules** define how to process traffic in and across tiers, and policy labels and selectors specify how groups of pods are allowed to communicate with each other and other network endpoints
-- The **CNI**, **$[prodname] components**, and underlying **dataplane** (iptables/eBPF) all make use of labels and selectors as part of routing traffic.
-
-![tier funnel](/img/calico-cloud/tier-funnel.svg)
-
-### Policy action rules
-
-$[prodname] network policy uses action rules to specify how to process traffic/packets:
-
-- **Allow or Deny** - traffic is allowed or denied and the packet is handled accordingly. No further rules are processed.
-- **Pass** - skips to the next tier that contains a policy that applies to the endpoint, and processes the packet. If the tier applies to the endpoint but no action is taken on the packet, the packet is dropped.
-- **Log** - creates a log, and evaluation continues processing to the next rule
-
-### Implicit default deny
-
-As shown in the following diagram, at the end of each tier is an implicit default deny. This is a safeguard that helps mitigate against unsecured policy. Because of this safeguard, you must explicitly apply the **Pass** action rule when you want traffic evaluation to continue. In the following example, the Pass action in a policy ensures that traffic evaluation continues, and overrides the implicit default deny.
-
-![implicit deny](/img/calico-cloud/implicit-deny.svg)
-
-Let’s look at a Dev/Ops global network policy in a high precedence tier (Platform). The policy denies ingress and egress traffic to workloads that match selector, `env != "stage"`. To ensure that policies continue to evaluate traffic after this policy, the policy adds a Pass action for both ingress and egress.
-
-**Pass action rule example**
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: devops.stage-env
-spec:
- tier: devops
- order: 255
- selector: env == "stage"
- ingress:
- - action: Deny
- source:
- selector: env != "stage"
- - action: Pass
- egress:
- - action: Deny
- destination:
- selector: env != "stage"
- - action: Pass
- types:
- - Ingress
- - Egress
-```
-
-### Policy endpoint matching across tiers
-
-Whoever is responsible for tier creation also needs to understand how policies select matching endpoints across tiers. For normal policy processing (without apply-on-forward, pre-DNAT, and do-not-track), if no policies within a tier apply to endpoints, the tier is skipped, and the tier's implicit deny behavior is not executed.
-
-In the following example, policy D in the Security tier includes a **Pass action** rule because we want traffic evaluation to continue to the next tier in sequence. In the Platform tier, there are no selectors in policies that match endpoints so the tier is skipped, including the end of tier deny. Evaluation continues to the Application tier. **Policy J** is the first policy with a matching endpoint.
-
-![endpoint match](/img/calico-cloud/endpoint-match.svg)
-
-### Default endpoint behavior
-
-Tier managers also need to understand the default behavior for endpoints, which depends on whether the endpoint is known or unknown, and on the endpoint type, as shown in the following table:
-
-- **Known endpoints** - $[prodname] resources that are managed by Felix
-- **Unknown endpoints** - interfaces/resources not recognizable as part of our data model
-
-| Endpoint type | Default behavior for known endpoints | Default behavior for unknown endpoints (outside of our data model) |
-| ---------------------- | ----------------------------------------------------------------------- | ------------------------------------------------------------------ |
-| Workload, $[prodname] | Deny | Deny |
-| Workload, Kubernetes | Allow ingress from same Kubernetes namespace; allow all egress | Deny |
-| Host | Deny. With exception of auto host endpoints, which get `default-allow`. | Fall through and use iptables rules |
-
-## Best practices for tiered policy
-
-To control and authorize access to $[prodname] tiers, policies, and Kubernetes network policies, you use Kubernetes RBAC. Security teams can prevent unauthorized viewing or modification of higher precedence (lower order) tiers, while still allowing developers or service owners to manage the detailed policies related to their workloads.
-
-We recommend:
-
-- Limit tier creation permissions to Admin users only; creating and reordering tiers affects your policy processing workflow
-
-- Limit full CRUD operations on tiers and policy management to select Admin users
-
-- Review your policy processing whenever you add/reorder tiers
-
- For example, you may need to update Pass action rules to policies before or after the new tier. Intervening tiers may require changes to policies before and after, depending on the endpoints.
-
-- Use the **policy preview** feature to see effects of policy in action before enforcing it, and use the **staged network policy** feature to test the entire tier workflow before pushing it to production
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-troubleshooting.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/policy-troubleshooting.mdx
deleted file mode 100644
index c6a31b829f..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/policy-troubleshooting.mdx
+++ /dev/null
@@ -1,42 +0,0 @@
----
-description: Common policy implementation problems.
----
-
-# Troubleshoot policies
-
-### Problem
-
-I created my first egress policy with default deny behavior, but now I’ve blocked other traffic.
-
-### Solution
-
-In Kubernetes, when there are no egress policies that apply to an endpoint, all egress traffic is allowed. However, as soon as you add the first egress policy to an endpoint, $[prodname] switches to default deny and blocks everything else; this is part of our zero trust network policy model. For new users of $[prodname], this is unexpected behavior (but it’s required by both Kubernetes and $[prodname] policy specs.)
-
-For egress policy in particular, you may not be used to worrying about “system-level” egress traffic that is now suddenly blocked. For example, most workloads rely on DNS, but you may not have thought of this when writing your policy. So you end up with this problem loop: you allow HTTP traffic, but then your DNS traffic gets blocked, but then HTTP traffic stops working because it relies on DNS to function.
-
-A natural response to this issue is to add an egress rule to allow DNS(!). For example, you add an egress rule “allow UDP to port 53 to namespace kube-system”. In some systems (OpenShift), the DNS pod actually listens on port 5353, not port 53. However, the DNS Service DNATs the traffic from port 53 to port 5353, hiding that detail from the DNS client. $[prodname] then blocks the traffic because it sees the traffic after the DNAT. So $[prodname] sees port 5353, not the expected port 53.
-
-The solution is to define policy for workload services, not for ports used by workloads. For help, see [Policy for services](beginners/services/index.mdx).
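-
-For illustration only, a rough sketch of a service-based egress rule follows (the policy name and order are illustrative, and `kube-dns`/`kube-system` assumes a standard Kubernetes DNS setup; see the linked page for the supported details):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  # Illustrative name.
-  name: default.allow-dns-egress
-spec:
-  tier: default
-  order: 100
-  selector: all()
-  types:
-    - Egress
-  egress:
-    # Match the DNS Service by name rather than by port, so the rule is
-    # unaffected by the Service's port-to-targetPort DNAT.
-    - action: Allow
-      destination:
-        services:
-          name: kube-dns
-          namespace: kube-system
-```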
-
-### Problem
-
-Traffic is blocked, even though I allow it in a policy.
-
-### Solution
-
-The policy that is blocking traffic can be in your tier or in a different tier.
-
-1. **Check policies in your tier**
-
-   Go to your policy and see if there is a higher precedence policy in the tier that is blocking processing.
-
- - If that is not the problem, go to step 2.
- - If that is the problem, and if it makes sense for the traffic, you can reorder the policies in the tier. If you cannot, you must change the policy that is dropping traffic to allow your traffic flow using a Pass action rule.
-
-2. **Check policies in other tiers**
-
-   Go to the next applicable higher precedence tier for your workload to see if a policy in that tier is blocking traffic. The policy at the end of the tier could be blocking traffic because the default behavior at the end of a tier is to drop traffic as part of zero trust. To unblock traffic, add a **Pass action rule** to the policy, or create a **Pass policy**.
-
-For help with visibility, use Service Graph to see how traffic is passed. Click on your flow, and view details in the right panel.
-
-For help with Pass action rules, see [Get started with tiered policy](policy-tiers/tiered-policy.mdx).
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/recommendations/index.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/recommendations/index.mdx
deleted file mode 100644
index 25f3aec861..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/recommendations/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Enable policy recommendations for namespaces to improve your security posture.
-hide_table_of_contents: true
----
-
-# Policy recommendations
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/recommendations/learn-about-policy-recommendations.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/recommendations/learn-about-policy-recommendations.mdx
deleted file mode 100644
index 578a18846c..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/recommendations/learn-about-policy-recommendations.mdx
+++ /dev/null
@@ -1,136 +0,0 @@
----
-description: Policy recommendations tutorial.
----
-
-# Policy recommendations tutorial
-
-## Big picture
-
-In this tutorial, we show you how recommendations are generated using flow logs in your cluster for traffic to/from namespaces, network sets, private network IPs and public domains.
-
-### Create resources for the tutorial
-
-Because the policy recommendation feature requires traffic between endpoints, this step provides these resources for this hands-on tutorial. If your cluster is already generating traffic for policy recommendations, you can skip this step and follow along using your own cluster.
-
-1. Configure Felix for fast flow log collection.
-
- ```bash
- kubectl patch felixconfiguration.p default -p '{"spec":{"flowLogsFlushInterval":"10s"}}'
- ```
-
-1. Download the [policy recommendation tutorial deployment]($[tutorialFilesURL]/policy-recommendation-deployments.yaml) YAML.
-
-1. Use the following command to create the necessary resources:
-
- ```bash
- kubectl apply -f policy-recommendation-deployments.yaml
- ```
-
-### Enable policy recommendation
-
-1. In the Manager UI left navbar, click the **Policies** icon.
-1. Select **Recommendations**.
-1. Click on **Enable Policy Recommendations**.
-
-Wait for the recommendations to be generated. Unless otherwise configured, recommendations take at least 2m30s to be generated, which is the default value of the [Processing Interval](../../reference/resources/policyrecommendations.mdx#spec) setting.
-
-Once ready, the recommendations will be listed in the main page, under the **Recommendations** tab.
-
-### Understand the policy recommendation
-
-You should find a recommendation named `curl-ns` (appended with a five-character suffix, like `-vfzgh`) with the policy selector:
-```
-Policy Label selector: [[projectcalico.org/namespace == 'curl-ns']]
-```
-meaning that this policy pertains to the traffic originating from or destined for the `curl-ns` namespace.
-
-The policy will display a list of ingress rules:
-```
-Allow:Protocol is TCP
-From: Namespaces [[projectcalico.org/name == 'service-ns']]
-To:Ports [Port is 80 ]
-```
-allows ingress traffic, for protocol TCP, on port 80, from the `service-ns` namespace.
-
-A list of egress rules:
-```
-Allow:Protocol is TCP
-To:Ports [Port is 8080 ] Domains [www.tigera.io]
-```
-allows egress traffic, for protocol TCP, on port 8080, to domain `www.tigera.io`.
-
-```
-Allow:Protocol is TCP
-To:Ports [Port is 80 ] Namespaces [[projectcalico.org/name == 'service-ns']]
-```
-allows egress traffic, for protocol TCP, on port 80, to the `service-ns` namespace.
-
-```
-Allow:Protocol is UDP
-To:Ports [Port is 53 ] Namespaces [[projectcalico.org/name == 'kube-system']]
-```
-allows egress traffic, for protocol UDP, on port 53, to the `kube-system` namespace.
-
-```
-Allow:Protocol is TCP
-To:Ports [Port is 80 ] Endpoints [[projectcalico.org/name == 'public-ips' and projectcalico.org/kind == 'NetworkSet']] Namespaces global()
-```
-allows egress traffic, for protocol TCP, on port 80, to IPs defined in the global network set named `public-ips`.
-
-```
-Allow:Protocol is TCP
-To:Ports [Port is 8080 ] Nets [Is 10.0.0.0/8 OR Is 172.16.0.0/12 OR Is 192.168.0.0/16 ]
-```
-allows egress traffic, for protocol TCP, on port 8080, to private range IPs.
-
-```
-Allow:Protocol is TCP
-To:Ports [Port is 80 ]
-```
-allows egress traffic, for protocol TCP, on port 80, to public range IPs.
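-
-As a rough sketch of how these rules map to the underlying staged policy YAML (the generated policy may order and name fields slightly differently), the first ingress rule above corresponds to something like:
-
-```yaml
-ingress:
-  - action: Allow
-    protocol: TCP
-    # Traffic originating in the service-ns namespace...
-    source:
-      namespaceSelector: projectcalico.org/name == 'service-ns'
-    # ...destined for port 80 in curl-ns.
-    destination:
-      ports:
-        - 80
-```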
-
-### Investigate the flows that are used to generate the policy rules
-
-To view flow logs in Service Graph:
-
-1. In the Manager UI left navbar, click **Service Graph**.
-1. Select **Default** under the VIEWS option.
-1. In the bottom pane you will see flow logs in the Flows tab.
-
-To generate rules, the recommendation engine queries for flow logs that are not addressed by any other policy in the cluster. Subsequently, it builds the missing policies necessary for allowing that traffic.
-
-### Understand the flow logs used in policy recommendations
-
-To get a better understanding of which flows contributed to generating the rules in your policy, select **Filter Flows**:
-
-* To find the flows that were used to generate the egress to global network set rule, add:
-```
-source_namespace = "curl-ns" AND dest_name_aggr = "public-ips"
-```
-
-* To find the flows that generated the egress rule to the `kube-system` namespace, define the query:
-```
-source_namespace = "curl-ns" AND dest_namespace = "kube-system"
-```
-
-You'll notice that each of the flow logs contains a field named `policies`, with an entry like:
-```
-1|__PROFILE__|__PROFILE__.kns.curl-ns|allow|0
-```
-meaning that the particular flow was not addressed by any other policy within your cluster.
-
-You will also find input like:
-```
-0|namespace-isolation|curl-ns/namespace-isolation.staged:curl-ns-vfzgh|allow|3
-```
-indicating that the 3rd rule defined in the policy **curl-ns-vfzgh** will allow the traffic described by this flow once the policy is enforced.
-
-### Examine policy traffic
-
-Examine the **Allowed Bytes** field in the **Recommendations** tab for the `curl-ns-recommendation` policy to get a sense of the total bytes allowed by the policy.
-
-Examine the **Allowed/sec** of each rule in the policy to get a sense of the quantity of traffic allowed per second by the rule in question.
-
-### When policy recommendations are not generated
-
-You may wonder why you are not getting policy recommendations, even though there is traffic between endpoints. This is because policy recommendations are generated only for flows that are not captured by any other policy in your cluster. To see if policy is already enforcing the traffic in question, search for the flow log in question, examine the `policies` field, and verify that no other enforced policy allows or denies traffic for that flow.
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/recommendations/policy-recommendations.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/recommendations/policy-recommendations.mdx
deleted file mode 100644
index 664017e0ef..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/recommendations/policy-recommendations.mdx
+++ /dev/null
@@ -1,171 +0,0 @@
----
-description: Enable continuous policy recommendations to secure unprotected namespaces or workloads.
----
-
-# Enable policy recommendations
-
-## Big picture
-
-Use policy recommendations to automatically isolate namespaces with network policy.
-
-## Value
-
-One of the best practices for improving the security posture of your Kubernetes cluster is to implement namespace isolation with network policy. Namespace isolation helps to implement a zero-trust and least-privileged security model, where only required communication between namespaces is authorized and everything else is blocked. This helps mitigate the risk of lateral movement within a cluster in the event of an attack.
-
-$[prodname] makes it easy for platform operators to implement namespace isolation without experience in authoring network policy or detailed knowledge of how application workloads are communicating. $[prodname] analyzes the flow logs that are generated from workloads, and automatically recommends and stages policies for each namespace that can be used for isolation.
-
-## Before you begin
-
-**Required RBAC**
-
-To enable/disable and use policy recommendations, you must have the **tigera-network-admin** role, or permissions to **update**, **patch**, **get**, **list**, and **watch** the following `projectcalico.org` resources:
-* tiers
-* policyrecommendationscopes
-* stagednetworkpolicies
-* tier.stagednetworkpolicies
-* networkpolicies
-* tier.networkpolicies
-* globalnetworksets
-
-Specifically, you will need access to the `namespace-isolation` tier and to staged and network policies in the `namespace-isolation` tier.
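-
-As an illustration, a ClusterRole granting these permissions might look like the following sketch (the role name is an example, and your environment may additionally scope access to specific tiers with `resourceNames`):
-
-```yaml
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
-  name: policy-recommendations-user
-rules:
-  - apiGroups: ['projectcalico.org']
-    resources:
-      - tiers
-      - policyrecommendationscopes
-      - stagednetworkpolicies
-      - tier.stagednetworkpolicies
-      - networkpolicies
-      - tier.networkpolicies
-      - globalnetworksets
-    verbs: ['update', 'patch', 'get', 'list', 'watch']
-```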
-
-**Recommended**
-
-Basic knowledge of policies in Manager UI and tiers:
-- [Get started with tiered network policy](../../network-policy/policy-tiers/tiered-policy)
-- [Network policy tutorial](../../network-policy/policy-tiers/policy-tutorial-ui)
-
-**Limitations**
-
-Creating and managing policy recommendations is available only in Manager UI.
-
-## How to
-
-- [Enable policy recommendations](#enable-policy-recommendations)
-- [Activate and review policy recommendations](#activate-and-review-policy-recommendations)
-- [Review global settings for workloads](#review-global-settings-for-workloads)
-- [Update policy recommendations](#update-policy-recommendations)
-- [Private network recommendations](#private-network-recommendations)
-- [Troubleshoot policy recommendations](#troubleshoot-policy-recommendations)
-- [Disable the policy recommendations feature](#disable-the-policy-recommendations-feature)
-
-### Enable policy recommendations
-
-1. In the left navbar in Manager UI, click **Policies**, **Recommendations**.
-1. On the opt-in page, click **Enable Policy Recommendations**.
-
-The **Policy Recommendations** board is automatically displayed.
-
-![Policy-recommendations-board](/img/calico-cloud/policy-recommendations-board.png)
-
-**Notes**:
-
-- A policy recommendation is generated for every namespace in your cluster (unless namespaces are filtered out by an Admin using the [selector](../../reference/resources/policyrecommendations.mdx#namespaceSpec#selector) in the PolicyRecommendationScope resource).
-- Flow logs are continuously monitored for policy recommendations.
-- Recommended policies are continuously updated until you **Add to policy board** or **Dismiss policy** using the Actions menu.
-- Policy recommendations are created as **staged network policies** so you can safely observe the traffic before enforcing them.
-- Traffic originating from the recommended policy's namespace is used to generate egress rules, and traffic destined for the namespace is used to define ingress rules.
-- To stop policy recommendations from being processed and updated for a namespace, click the **Action** menu, **Dismiss policy**.
-
-### Activate and review policy recommendations
-
-Policy recommendations are not enabled until you activate them and move them to the **Active** board.
-
-From the Policy Recommendations board, select a policy recommendation (or bulk select), and then select **Add to policy board**. Click the **Active** tab.
-
-You can now view the activated policies in the **Policies Board**. In the left navbar, click **Policies**.
-
-Policy recommendations are added to the **namespace-isolation** tier. Note the following:
-
-- Staged network policy recommendations work like any other staged network policy.
-- You cannot move recommended staged policies in the `namespace-isolation` tier.
-- The name of the `namespace-isolation` tier is fixed and cannot be changed.
-
-You are now ready to observe traffic flows in the Policies board to verify that the policy is authorizing traffic as expected. When a policy works as expected, you can safely enforce it. See [Stage, preview impacts, and enforce policy](../staged-network-policies.mdx) for help.
-
-### Review global settings for workloads
-
-The default global settings for capturing flows for policy recommendations are based on application workloads with *frequent communication with other namespaces in your cluster*.
-
-Global settings are found on the Policy Recommendations board, **Action** menu.
-
-![Global-settings-dialog](/img/calico-cloud/global-settings.png)
-
-- **Stabilization Period** is the learning time to capture flow logs so that a recommendation accurately reflects the cluster's traffic patterns.
-
-- **Processing Interval** is the frequency to process new flow logs and refine recommendations.
-
-:::tip
-For application workloads with less frequent communication, the stabilization period setting may not be long enough to get accurate traffic flows, so you’ll want to increase the time. We recommend that you review your workloads immediately after you enable policy recommendations and adjust the settings accordingly.
-:::
-
-Changes to all other policy recommendations parameters require Admin permissions and can be changed using the [Policy recommendations resource](../../reference/resources/policyrecommendations.mdx).
-
-### Update policy recommendations
-
-This section describes common changes you may want to make to policy recommendations.
-
-#### Relearn activated recommendations
-
-As new namespaces and components are added to a cluster, your activated policy recommendations may need to be updated to reflect those changes. If a policy recommendation has not been enforced, you'll need to update it to allow the new traffic.
-
-1. On the **Policy Recommendations** board, click the **Active** tab, which lists the active staged network policies.
-1. Select the Actions menu associated with the policy in question, and click **Dismiss policy**.
-1. Click the **Dismissed tab**, select the Actions menu, and **Reactivate** the policy.
-
-#### Rerun policy recommendations for an enforced policy
-
-To generate a new recommendation for an enforced policy, delete the network policy on the **Policy** board.
-
-#### Stop policy recommendation updates for a namespace
-
-1. On the Policy Recommendations board, click the **Recommendations** tab, which lists the recommendations.
-1. Select the recommendation, click the **Actions** menu, and click **Dismiss policy**.
-
-To reactivate a policy recommendation for a namespace, select the dismissed staged policy, and from the Actions menu, select **Reactivate**.
-
-### Private network recommendations
-
-If any flow to a private network in your cluster is found, a private rule is automatically created that contains RFC 1918 subnets, which will allow traffic to/from those endpoints. If you need to apply a more restrictive approach, create a [GlobalNetworkSet](../../reference/resources/globalnetworkset.mdx) and update it with the desired CIDR blocks. The recommendation engine will identify flows to your private IPs and generate the appropriate NetworkSet Rule.
-
-**Note**: Exclude any CIDR ranges used by the cluster for nodes and pods.
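-
-For example, a GlobalNetworkSet restricting recommendations to a couple of specific private subnets might look like this sketch (the name, label, and CIDRs are illustrative; substitute the ranges your workloads actually reach):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkSet
-metadata:
-  name: private-endpoints
-  labels:
-    role: private-endpoints
-spec:
-  nets:
-    # Only the private ranges your workloads need to reach; exclude node and pod CIDRs.
-    - 10.10.0.0/16
-    - 192.168.100.0/24
-```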
-
-### Troubleshoot policy recommendations
-
-**Problem**: I’m not seeing policy recommendations on the Policy Recommendations board.
-
-**Solution/workaround**: Policy recommendations are based on historical flow logs that match a request, and are generated only for flows that have not been addressed by any other policy. As such, there are times when policy recommendations will not be generated:
-
-- Not enough traffic history
-
- If you recently installed $[prodname], you may not have enough traffic history. Workloads must run for some time (around 5 days) to get “typical network traffic” for applications.
-
-- Traffic is covered by existing policy
-
- Even if your cluster has been running for a long time with traffic, the flows may already be covered by existing policies.
-
-To verify why there may not be any recommendations, follow these steps:
-
-1. Go to **Service Graph**, **Default**.
-1. Filter flow logs for your namespace.
-1. Investigate the content within the `policies` field for the flow logs in question.
-1. Validate that no other enforced policy already addresses the flow.
-
-**Problem**: Why are egress-to-domain rules being generated for a Kubernetes service?
-
-**Solution/workaround**: The policy recommendation controller can read only the cluster domain of the cluster it runs in. If you have managed clusters with a cluster domain other than the default (`cluster.local`), the controller treats egress traffic to Kubernetes services in those clusters as though it were traffic to a domain.
-
-### Disable the policy recommendations feature
-
-To disable the policy recommendations feature, set the **recStatus** parameter to `Disabled` in the [Policy recommendations resource](../../reference/resources/policyrecommendations.mdx).
-
-```bash
-kubectl patch PolicyRecommendationScope default --type='json' -p='[{"op": "replace", "path": "/spec/namespaceSpec/recStatus", "value": "Disabled"}]'
-```
-
-Note that unactivated policy recommendations in the Policy Recommendations board are no longer updated. Existing activated and enforced staged network policies are not affected by disabling policy recommendations.
-
-## Additional resources
-
-- [Policy best practices](../../network-policy/policy-best-practices.mdx)
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/network-policy/staged-network-policies.mdx b/calico-cloud_versioned_docs/version-20-1/network-policy/staged-network-policies.mdx
deleted file mode 100644
index da32da639b..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/network-policy/staged-network-policies.mdx
+++ /dev/null
@@ -1,183 +0,0 @@
----
-description: Stage and preview policies to observe traffic implications before enforcing them.
----
-
-# Stage, preview impacts, and enforce policy
-
-## Big picture
-
-Stage and preview impacts on traffic before enforcing policy.
-
-## Value
-
-$[prodname] staged network policy resources lets you test the traffic impact of the policy as if it were enforced, but without changing traffic flow. You can also preview the impacts of a staged policy on existing traffic. By verifying that correct flows are allowed and denied before enforcement, you can minimize misconfiguration and potential network disruption.
-
-## Concepts
-
-### About staged policies
-
-The following staged policy resources have the same structure (i.e. the resource spec has the same fields) as their “enforced” counterpart.
-
-- Staged global network policy
-- Staged network policy
-- Staged Kubernetes network policy
-
-### Review permissions
-
-The default `tigera-network-admin` cluster role has the required permissions to manage the different enforced
-and staged network policies. Adjust permissions for your environment. As with $[prodname] network policy and global network policies, the RBAC for $[prodname] staged network policy and staged global network policy is tier-dependent.
-
-## How to
-
-- [Create a policy recommendation](#create-a-policy-recommendation)
-- [Stage a policy](#stage-a-policy)
-- [Preview policy impact](#preview-policy-impact)
-- [Enforce a staged policy](#enforce-a-staged-policy)
-- [Stage updates to an enforced policy](#stage-updates-to-an-enforced-policy)
-
-### Create a policy recommendation
-
-One of the first things developers need to do is secure unprotected workloads with network policy. (For example, by default, Kubernetes pods accept traffic from any source.) The **Recommend policy** feature allows developers with minimal experience writing policy to secure workloads.
-
-Because **Recommend policy** looks at historical flow log entries that match your request, you should run your workloads for a reasonable amount of time to get "typical network traffic" for your application.
-
-1. In the left navbar, click **Policies**.
-1. Click **Recommend a policy**.
-1. Enter time range, Namespace, Name, and click **Recommend**.
-1. If relevant flow logs are found within the time range for the workload endpoint, click **Preview** to assess the impact of the recommended policy, or **Stage**.
-
-![recommend-policy](/img/calico-enterprise/recommend-policy.png)
-
-### Stage a policy
-
-Stage a policy to test it in a near replica of a production environment.
-
-1. In the left navbar, click **Policies**.
-1. In a tier, click **Add Policy**.
-1. Create your policy and click **Stage** to save and stage it.
-
-![stage-new-policy](/img/calico-enterprise/stage-new-policy.png)
-
-### Preview policy impact
-
-Before enforcing a staged policy, it is a best practice to use the **Preview** feature to avoid unintentionally exposing or blocking other network traffic.
-
-1. From the **Policies Board**, select a staged policy and click **Edit policy**.
-1. Make some edits and click **Preview**.
-
-The following example shows denied flows that may or may not be intended.
-
-![policy-preview](/img/calico-enterprise/policy-preview.png)
-
-### Enforce a staged policy
-
-1. From **Policies Board**, click a staged policy.
-1. Click **Edit policy**, make your changes, and click **Enforce**. The staged policy is deleted, and the enforced policy is created or updated (depending on whether it already exists).
-
-### Stage updates to an enforced policy
-
-1. From the **Policies Board**, open an enforced policy.
-1. In **View Policy**, click **Edit policy**.
-1. Make your changes, and click **Preview**. Depending on the results, you can click **Stage** or **Enforce**.
-
-You can also use custom resources to stage Kubernetes and $[prodname] policies, and apply them using `kubectl`. Here are sample YAML files.
-
-**Example: StagedGlobalNetworkPolicy**
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: StagedGlobalNetworkPolicy
-metadata:
- name: default.allow-tcp-6379
-spec:
- tier: default
- selector: role == 'database'
- types:
- - Ingress
- - Egress
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: role == 'frontend'
- destination:
- ports:
- - 6379
- egress:
- - action: Allow
-```
-
-**Example: StagedNetworkPolicy**
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: StagedNetworkPolicy
-metadata:
- name: default.allow-tcp-6379
- namespace: default
-spec:
- tier: default
- selector: role == 'database'
- types:
- - Ingress
- - Egress
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: role == 'frontend'
- destination:
- ports:
- - 6379
- egress:
- - action: Allow
-```
-
-**Example: StagedKubernetesNetworkPolicy**
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: StagedKubernetesNetworkPolicy
-metadata:
- name: test-network-policy
- namespace: default
-spec:
- podSelector:
- matchLabels:
- role: db
- policyTypes:
- - Ingress
- - Egress
- ingress:
- - from:
- - ipBlock:
- cidr: 172.17.0.0/16
- except:
- - 172.17.1.0/24
- - namespaceSelector:
- matchLabels:
- project: myproject
- - podSelector:
- matchLabels:
- role: frontend
- ports:
- - protocol: TCP
- port: 6379
- egress:
- - to:
- - ipBlock:
- cidr: 10.0.0.0/24
- ports:
- - protocol: TCP
- port: 5978
-```
-
-## Additional resources
-
-- [Staged global network policy](../reference/resources/stagedglobalnetworkpolicy.mdx)
-- [Staged network policy](../reference/resources/stagednetworkpolicy.mdx)
-- [Staged Kubernetes network policy](../reference/resources/stagedkubernetesnetworkpolicy.mdx)
-- For details on how to configure RBAC for staged policy resources, see [Configuring RBAC for tiered policy](policy-tiers/rbac-tiered-policies.mdx)
-- For details on staged policy metrics, see
- - [Flow logs](../visibility/elastic/flow/datatypes.mdx)
- - [Prometheus metrics](../operations/monitor/metrics/index.mdx#content-main)
diff --git a/calico-cloud_versioned_docs/version-20-1/networking/configuring/advertise-service-ips.mdx b/calico-cloud_versioned_docs/version-20-1/networking/configuring/advertise-service-ips.mdx
deleted file mode 100644
index 748840379c..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/networking/configuring/advertise-service-ips.mdx
+++ /dev/null
@@ -1,268 +0,0 @@
----
-description: Configure Calico to advertise Kubernetes service cluster IPs and external IPs outside the cluster using BGP.
----
-
-# Advertise Kubernetes service IP addresses
-
-## Big picture
-
-Enable $[prodname] to advertise Kubernetes service IPs outside a cluster. $[prodname] supports advertising a service’s cluster IPs and external IPs.
-
-## Value
-
-Typically, Kubernetes service cluster IPs are accessible only within the cluster, so external access to the service requires a dedicated load balancer or ingress controller. In cases where a service’s cluster IP is not routable, the service can be accessed using its external IP.
-
-Just as $[prodname] supports advertising **pod IPs** over BGP, it also supports advertising Kubernetes **service IPs** outside a cluster over BGP. This avoids the need for a dedicated load balancer. This feature also supports equal cost multi-path (ECMP) load balancing across nodes in the cluster, as well as source IP address preservation for local services when you need more control.
-
-## Concepts
-
-### BGP makes it easy
-
-In Kubernetes, all requests for a service are redirected to an appropriate endpoint (pod) backing that service. Because $[prodname] uses BGP, external traffic can be routed directly to Kubernetes services by advertising Kubernetes service IPs into the BGP network.
-
-If your deployment is configured to peer with BGP routers outside the cluster, those routers (plus any other upstream places the routers propagate to) can send traffic to a Kubernetes service IP for routing to one of the available endpoints for that service.
-
-### Advertising service IPs: quick glance
-
-$[prodname] implements the Kubernetes **externalTrafficPolicy** using kube-proxy to direct incoming traffic to a correct pod. Advertisement is handled differently based on the service type that you configure for your service.
-
-| **Service mode** | **Cluster IP advertisement** | **Traffic is...** | **Source IP address is...** |
-| ----------------- | ------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------- |
-| Cluster (default) | All nodes in the cluster statically advertise a route to the service CIDR. | Load balanced across nodes in the cluster using ECMP, then forwarded to appropriate pod in the service using SNAT. May incur second hop to another node, but good overall load balancing. | Obscured by SNAT |
-| Local | The nodes with a pod backing the service advertise a specific route (/32 or /128) to the service's IP. | Load balanced across nodes with endpoints for the service. Avoids second hop for LoadBalancer and NodePort type services, traffic may be unevenly load balanced. (Other traffic is load balanced across nodes in the cluster.) | Preserved |
-
-If your $[prodname] deployment is configured to peer with BGP routers outside the cluster, those routers - plus any further upstream places that those routers propagate to - will be able to send traffic to a Kubernetes service cluster IP, and that traffic is routed to one of the available endpoints for that service.
-
-### Tips for success
-
-- Generally, we recommend using “Local” for the following reasons:
- - If any of your network policy uses rules to match by specific source IP addresses, using Local is the obvious choice because the source IP address is not altered, and the policy will still work.
- - Return traffic is routed directly to the source IP because “Local” services do not require undoing the source NAT (unlike “Cluster” services).
-- Cluster IP advertisement works best with a ToR that supports ECMP. Otherwise, all traffic for a given route is directed to a single node.
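-
-For example, a Service using the recommended `Local` mode might look like the following sketch (the names, ports, and selector are illustrative):
-
-```yaml
-apiVersion: v1
-kind: Service
-metadata:
-  name: frontend
-  namespace: my-app
-spec:
-  type: LoadBalancer
-  # Preserve the client source IP and avoid a second hop to another node.
-  externalTrafficPolicy: Local
-  selector:
-    app: frontend
-  ports:
-    - port: 80
-      targetPort: 8080
-```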
-
-## Before you begin...
-
-**Required**
-
-- Calico CNI
-- [Configure BGP peering](bgp.mdx) between $[prodname] and your network infrastructure
-- For ECMP load balancing to services, the upstream routers must be configured to use BGP multipath.
-- You need at least one external node outside the cluster that acts as a router, route reflector, or ToR that is peered with calico nodes inside the cluster.
-- Services must be configured with the correct service mode (“Cluster” or “Local”) for your implementation. For `externalTrafficPolicy: Local`, the service must be type `LoadBalancer` or `NodePort`.
-
-**Limitations**
-
-- Supported in EKS and AWS, but only if you are using Calico CNI
-- OpenShift versions 4.5 and 4.6
-  There is a [bug](https://github.com/kubernetes/kubernetes/issues/91374) where the source IP is not preserved by NodePort services or by traffic via a Service ExternalIP with `externalTrafficPolicy: Local`.
-
- OpenShift users on v4.5 or v4.6 can use this [workaround to avoid SNAT with ExternalIP](https://docs.openshift.com/container-platform/4.7/nodes/clusters/nodes-cluster-enabling-features.html):
-
- ```
- oc edit featuregates.config.openshift.io cluster
- spec:
- customNoUpgrade:
- enabled:
- - ExternalPolicyForExternalIP
- ```
-
- Kubernetes users on version v1.18 or v1.19 can enable source IP preservation for NodePort services using the ExternalPolicyForExternalIP feature gate.
-
-  Source IP preservation for NodePort services and ExternalIPs is enabled by default in OpenShift v4.7+ and Kubernetes v1.20+.
-
-## How to
-
-- [Advertise service cluster IP addresses](#advertise-service-cluster-ip-addresses)
-- [Advertise service external IP addresses](#advertise-service-external-ip-addresses)
-- [Advertise service load balancer IP addresses](#advertise-service-load-balancer-ip-addresses)
-- [Exclude certain nodes from advertisement](#exclude-certain-nodes-from-advertisement)
-
-### Advertise service cluster IP addresses
-
-1. Determine the service cluster IP range. (Or ranges, if your cluster is [dual stack](../ipam/ipv6.mdx).)
-
- The range(s) for your cluster can be inferred from the `--service-cluster-ip-range` option passed to the Kubernetes API server. For help, see the [Kubernetes API server reference guide](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/).
-
-1. Check to see if you have a default BGPConfiguration.
-
- ```bash
- kubectl get bgpconfiguration.projectcalico.org default
- ```
-
-1. Based on the above results, update or create a BGPConfiguration.
-
- **Update default BGPConfiguration**.
-
- Patch the BGPConfiguration using the following command, using your own service cluster IP CIDR in place of "10.0.0.0/24":
-
- ```bash
- kubectl patch bgpconfiguration.projectcalico.org default -p '{"spec":{"serviceClusterIPs": [{"cidr": "10.0.0.0/24"}]}}'
- ```
-
- **Create default BGPConfiguration**.
-
- Use the following sample command to create a default BGPConfiguration. Add your CIDR blocks, covering the cluster IPs to be advertised, in the `serviceClusterIPs` field, for example:
-
- ```bash
-   kubectl create -f - <<EOF
-   apiVersion: projectcalico.org/v3
-   kind: BGPConfiguration
-   metadata:
-     name: default
-   spec:
-     serviceClusterIPs:
-       - cidr: 10.0.0.0/24
-   EOF
-   ```
-
-For a deeper look at common on-premises deployment models, see [Calico over IP Fabrics](../../reference/architecture/design/l2-interconnect-fabric.mdx).
-
-## Before you begin...
-
-**Required**
-
-- Calico CNI
-
-
-
-## How to
-
-:::note
-
-Significantly changing $[prodname]'s BGP topology, such as changing from full-mesh to peering with ToRs, may result in temporary loss of pod network connectivity during the reconfiguration process. It is recommended to only make such changes during a maintenance window.
-
-:::
-
-- [Configure a global BGP peer](#configure-a-global-bgp-peer)
-- [Configure a per-node BGP peer](#configure-a-per-node-bgp-peer)
-- [Configure a node to act as a route reflector](#configure-a-node-to-act-as-a-route-reflector)
-- [Disable the default BGP node-to-node mesh](#disable-the-default-bgp-node-to-node-mesh)
-- [Change from node-to-node mesh to route reflectors without any traffic disruption](#change-from-node-to-node-mesh-to-route-reflectors-without-any-traffic-disruption)
-- [View BGP peering status for a node](#view-bgp-peering-status-for-a-node)
-- [View BGP info on all peers for a node](#view-bgp-info-on-all-peers-for-a-node)
-- [Change the default global AS number](#change-the-default-global-as-number)
-- [Change AS number for a particular node](#change-as-number-for-a-particular-node)
-- [Configure a BGP filter](#configure-a-bgp-filter)
-- [Configure a BGP peer with a BGP filter](#configure-a-bgp-peer-with-a-bgp-filter)
-
-### Configure a global BGP peer
-
-Global BGP peers apply to all nodes in your cluster. This is useful if your network topology includes BGP speakers that will be peered with every $[prodname] node in your deployment.
-
-The following example creates a global BGP peer that configures every $[prodname] node to peer with **192.20.30.40** in AS **64567**.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: BGPPeer
-metadata:
- name: my-global-peer
-spec:
- peerIP: 192.20.30.40
- asNumber: 64567
-```
-
-### Configure a per-node BGP peer
-
-Per-node BGP peers apply to one or more nodes in the cluster. You can choose which nodes by specifying the node’s name exactly, or using a label selector.
-
-The following example creates a BGPPeer that configures every $[prodname] node with the label, **rack: rack-1** to peer with **192.20.30.40** in AS **64567**.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: BGPPeer
-metadata:
- name: rack1-tor
-spec:
- peerIP: 192.20.30.40
- asNumber: 64567
- nodeSelector: rack == 'rack-1'
-```
-
-### Configure a node to act as a route reflector
-
-$[prodname] nodes can be configured to act as route reflectors. To do this, each node that you want to act as a route reflector must have a cluster ID - typically an unused IPv4 address.
-
-To configure a node to be a route reflector with cluster ID 244.0.0.1, run the following command.
-
-```bash
-kubectl annotate node my-node projectcalico.org/RouteReflectorClusterID=244.0.0.1
-```
-
-Typically, you will want to label this node to indicate that it is a route reflector, allowing it to be easily selected by a BGPPeer resource. You can do this with kubectl. For example:
-
-```bash
-kubectl label node my-node route-reflector=true
-```
-
-Now it is easy to configure route reflector nodes to peer with each other and other non-route-reflector nodes using label selectors. For example:
-
-```yaml
-kind: BGPPeer
-apiVersion: projectcalico.org/v3
-metadata:
- name: peer-with-route-reflectors
-spec:
- nodeSelector: all()
- peerSelector: route-reflector == 'true'
-```
-
-:::note
-
-Adding `routeReflectorClusterID` to a node spec will remove it from the node-to-node mesh immediately, tearing down the
-existing BGP sessions. Adding the BGP peering will bring up new BGP sessions. This will cause a short (about 2 seconds)
-disruption to dataplane traffic of workloads running in the nodes where this happens. To avoid this, make sure no
-workloads are running on the nodes, by provisioning new nodes or by running `kubectl drain` on the node (which may
-itself cause a disruption as workloads are drained).
-
-:::
-
-### Disable the default BGP node-to-node mesh
-
-The default **node-to-node BGP mesh** may be turned off to enable other BGP topologies. To do this, modify the default **BGP configuration** resource.
-
-Run the following command to disable the BGP full-mesh:
-
-```bash
-calicoctl patch bgpconfiguration default -p '{"spec": {"nodeToNodeMeshEnabled": false}}'
-```
-
-:::note
-
-If the default BGP configuration resource does not exist, you need to create it first. See [BGP configuration](../../reference/resources/bgpconfig.mdx) for more information.
-
-:::
-
-:::note
-
-Disabling the node-to-node mesh will break pod networking until/unless you configure replacement BGP peerings using BGPPeer resources.
-You may configure the BGPPeer resources before disabling the node-to-node mesh to avoid pod networking breakage.
-
-:::
-
-### Change from node-to-node mesh to route reflectors without any traffic disruption
-
-Switching from node-to-node BGP mesh to BGP route reflectors involves tearing down BGP sessions and bringing up new ones. This causes a short
-dataplane network disruption (of about 2 seconds) for workloads running on the nodes in the cluster. To avoid this, you may provision
-route reflector nodes and bring their BGP sessions up before tearing down the node-to-node mesh sessions.
-
-Follow these steps to do so:
-
-1. [Provision new nodes to be route reflectors.](#configure-a-node-to-act-as-a-route-reflector) The nodes [should not be schedulable](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/)
- and they should have `routeReflectorClusterID` in their spec. These won't be part of the existing
- node-to-node BGP mesh, and will be the route reflectors when the mesh is disabled. These nodes should also have a label like
- `route-reflector` to select them for the BGP peerings. [Alternatively](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/),
- you can drain workloads from existing nodes in your cluster by running `kubectl drain ` to configure them to be route reflectors,
- but this will cause a disruption on the workloads on those nodes as they are drained.
-
-2. Also set up a [BGPPeer](#configure-a-node-to-act-as-a-route-reflector) spec to configure route reflector nodes to peer with each other and other non-route-reflector nodes
- using label selectors.
-
-3. Wait for these peerings to be established. This can be [verified](#view-bgp-peering-status-for-a-node) by running `sudo calicoctl node status` on the nodes. Alternatively, you can create a [`CalicoNodeStatus` resource](../../reference/resources/caliconodestatus.mdx) to get BGP session status for the node.
-
-4. [Disable the BGP node-to-node mesh for the cluster.](#disable-the-default-bgp-node-to-node-mesh)
-
-5. If you did drain workloads from the nodes or created them as unschedulable, mark the nodes as schedulable again (e.g. by running `kubectl uncordon `).
-
-### View BGP peering status for a node
-
-Create a [CalicoNodeStatus resource](../../reference/resources/caliconodestatus.mdx) to monitor BGP session status for the node.
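-
-For example, the following sketch requests BGP status for a node named `my-node` and refreshes it every 10 seconds (the resource name and node name are illustrative):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: CalicoNodeStatus
-metadata:
-  name: my-node-bgp-status
-spec:
-  node: my-node
-  classes:
-    - BGP
-  updatePeriodSeconds: 10
-```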
-
-Alternatively, you can run the `calicoctl node status` command on a given node to learn more about its BGP status.
-
-:::note
-
-This command communicates with the local $[prodname] agent, so you must execute it on the node whose status you are attempting to view.
-
-:::
-
-### View BGP info on all peers for a node
-
-You can use `calicoctl` to view the BGP information for all peers of a particular node, including connection status, routing statistics, and BGP state. This is useful for confirming that your configuration is behaving as desired, and for more detailed troubleshooting.
-
-Run the following command from anywhere you have access to `kubectl`:
-
-```bash
-calicoctl bgp peers <node_name>
-```
-
-Where `<node_name>` is the resource name for one of the Calico node pods within your cluster.
-
-:::note
-
-The above command can be run from anywhere you have access to kubectl. We recommend running it as a kubectl plugin.
-
-
-:::
-
-If you install the binary as a kubectl plugin using the above instructions, you can then run the command as follows:
-
-```bash
-kubectl calico bgp peers <node_name>
-```
-
-### Change the default global AS number
-
-By default, all Calico nodes use AS number 64512, unless a per-node AS has been specified for the node. You can change the global default for all nodes by modifying the default **BGPConfiguration** resource. The following example command sets the global default AS number to **64513**.
-
-```bash
-kubectl patch bgpconfiguration default -p '{"spec": {"asNumber": "64513"}}'
-```
-
-:::note
-
-If the default BGP configuration resource does not exist, you need to create it first. See [BGP configuration](../../reference/resources/bgpconfig.mdx) for more information.
-
-:::
-
-### Change AS number for a particular node
-
-You can configure an AS for a particular node by modifying the node object using `calicoctl`. For example, the following command changes the node named **node-1** to belong to **AS 64514**.
-
-```bash
-calicoctl patch node node-1 -p '{"spec": {"bgp": {"asNumber": "64514"}}}'
-```
-
-### Configure a BGP filter
-
-BGP filters control which routes are imported and exported between BGP peers.
-
-The BGP filter rules (`importV4`/`exportV4`, `importV6`/`exportV6`) are applied sequentially, taking the
-`action` of the first matching rule. When no rules are matched, the default
-`action` is `Accept`.
-
-In order for a BGPFilter to be used in a BGP peering, its `name`
-must be added to the `filters` field of the corresponding BGPPeer resource.
-
-The following example creates a BGPFilter:
-
-```yaml
-kind: BGPFilter
-apiVersion: projectcalico.org/v3
-metadata:
- name: my-first-bgp-filter
-spec:
- exportV4:
- - action: Accept
- matchOperator: In
- cidr: 77.0.0.0/16
- - action: Reject
- matchOperator: NotIn
- cidr: 88.0.0.0/16
- importV4:
- - action: Reject
- matchOperator: NotIn
- cidr: 44.0.0.0/16
- exportV6:
- - action: Reject
- matchOperator: NotEqual
- cidr: 9000::0/64
- importV6:
- - action: Accept
- matchOperator: Equal
- cidr: 5000::0/64
- - action: Reject
- matchOperator: NotIn
- cidr: 5000::0/64
-```
-
-### Configure a BGP peer with a BGP filter
-
-BGP peers can use BGP filters to control which routes are imported or exported between them.
-
-The following example creates two BGPFilters and associates them with a BGPPeer.
-
-:::note
-
-BGPFilters are applied in the order they are listed on the BGPPeer.
-
-:::
-
-```yaml
-kind: BGPFilter
-apiVersion: projectcalico.org/v3
-metadata:
- name: first-bgp-filter
-spec:
- exportV4:
- - action: Accept
- matchOperator: In
- cidr: 77.0.0.0/16
- - action: Reject
- matchOperator: NotIn
- cidr: 88.0.0.0/16
- importV4:
- - action: Reject
- matchOperator: NotIn
- cidr: 44.0.0.0/16
- exportV6:
- - action: Reject
- matchOperator: NotEqual
- cidr: 9000::0/64
- importV6:
- - action: Accept
- matchOperator: Equal
- cidr: 5000::0/64
- - action: Reject
- matchOperator: NotIn
- cidr: 5000::0/64
----
-kind: BGPFilter
-apiVersion: projectcalico.org/v3
-metadata:
- name: second-bgp-filter
-spec:
- exportV4:
- - action: Reject
- matchOperator: In
- cidr: 99.0.0.0/16
- - action: Reject
- matchOperator: In
-      cidr: 100.0.0.0/16
- importV4:
- - action: Accept
- matchOperator: NotIn
- cidr: 55.0.0.0/16
- - action: Reject
- matchOperator: In
- cidr: 66.0.0.0/16
- exportV6:
- - action: Accept
- matchOperator: Equal
- cidr: 7000::0/64
- - action: Accept
- matchOperator: Equal
- cidr: 8000::0/64
- - action: Reject
- matchOperator: NotIn
- cidr: 7000::0/64
- importV6:
- - action: Reject
- matchOperator: NotEqual
- cidr: 4000::0/64
----
-kind: BGPPeer
-apiVersion: projectcalico.org/v3
-metadata:
- name: peer-with-filter
-spec:
- peerSelector: has(filter-bgp)
- filters:
- - first-bgp-filter
- - second-bgp-filter
-```
-
-## Additional resources
-
-- [Node resource](../../reference/resources/node.mdx)
-- [BGP configuration resource](../../reference/resources/bgpconfig.mdx)
-- [BGP peer resource](../../reference/resources/bgppeer.mdx)
-- [BGP filter resource](../../reference/resources/bgpfilter.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/networking/configuring/custom-bgp-config.mdx b/calico-cloud_versioned_docs/version-20-1/networking/configuring/custom-bgp-config.mdx
deleted file mode 100644
index 837c8ddd46..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/networking/configuring/custom-bgp-config.mdx
+++ /dev/null
@@ -1,49 +0,0 @@
----
-description: Apply a custom BGP configuration
----
-
-# Custom BGP Configuration
-
-## Big picture
-
-Use customized BIRD configuration files to enable specialized use-cases.
-
-## Concepts
-
-In $[prodname], BGP is handled by [BIRD](https://github.com/projectcalico/bird).
-The BIRD configurations are templated out through [confd](https://github.com/projectcalico/confd).
-You can modify the BIRD configuration to use BIRD features which are not typically exposed using the
-default configuration provided with $[prodname].
-
-Customization of BGP templates should be done only with the help of your Tigera Support representative.
-
-## Before you begin
-
-**Required**
-
-- Calico CNI
-
-## How to
-
-- [Update BGP configuration](#update-bgp-configuration)
-- [Apply BGP customizations](#apply-bgp-customizations) based on how you've deployed $[prodname]
-
-### Update BGP configuration
-
-Using the directions provided with the templates, set the correct values
-for the BGP configuration using these resources:
-
-- [BGP Configuration](../../reference/resources/bgpconfig.mdx)
-- [BGP Peer](../../reference/resources/bgppeer.mdx)
-
-
-### Apply BGP Customizations
-
-1. Create your confd templates.
-1. Create a ConfigMap from the templates.
-
-```bash
-kubectl create configmap bird-templates -n tigera-operator --from-file=<directory-containing-templates>
-```
-
-The created config map will be used to populate the $[prodname] BIRD configuration file templates. If a template with the same name already exists within the node container, it will be overwritten with the contents from the config map.
diff --git a/calico-cloud_versioned_docs/version-20-1/networking/configuring/dual-tor.mdx b/calico-cloud_versioned_docs/version-20-1/networking/configuring/dual-tor.mdx
deleted file mode 100644
index 9a833535b1..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/networking/configuring/dual-tor.mdx
+++ /dev/null
@@ -1,677 +0,0 @@
----
-description: Configure a dual plane cluster for redundant connectivity between workloads.
----
-
-# Deploy a dual ToR cluster
-
-## Big picture
-
-Deploy a dual plane cluster to provide redundant connectivity between your workloads for on-premises deployments.
-
-:::note
-
-Dual ToR is not supported if you are using BGP with encapsulation (VXLAN or IP-in-IP).
-
-:::
-
-## Value
-
-A dual plane cluster provides two independent planes of connectivity between all cluster
-nodes. If a link or software component breaks somewhere in one of those planes, cluster
-nodes can still communicate over the other plane, and the cluster as a whole continues to
-operate normally.
-
-## Concepts
-
-### Dual plane connectivity, aka "dual ToR"
-
-Large on-prem Kubernetes clusters, split across multiple server racks, can use two or more
-independent planes of connectivity between all the racks. The advantages are:
-
-- The cluster can still function, even if there is a single break in connectivity
- somewhere.
-
-- The cluster can load balance across the bandwidth of _both_ planes, when both planes
- are available.
-
-The redundant approach can be applied within each rack as well, such that each node has
-two or more independent connections to those connectivity planes. Typically, each rack
-has two top-of-rack routers ("ToRs") and each node has two fabric-facing interfaces, each
-of which connects over a separate link or Ethernet to one of the ToRs for the rack.
-
-Here's an example of how a dual plane setup might look, with just two racks and two nodes
-in each rack. For simplicity, we've shown the connections _between_ racks as single
-links; in reality that would be more complex, but still following the overall dual plane
-paradigm.
-
-![dual-tor](/img/calico-enterprise/dual-tor.png)
-
-Because of the two ToRs per rack, the whole setup is often referred to as "dual ToR".
-
-### Network design for a dual ToR cluster
-
-For a dual ToR cluster to operate seamlessly when there is a break on one of the planes,
-several things are needed.
-
-- Each node should have a stable IP address that is independent of its per-interface
- addresses and remains valid if the connectivity through _one_ of those interfaces goes
- down.
-
-- Each node must somehow know or learn the stable IP address of every other node.
-
-- Wherever a connection (other than BGP) is to or from a _node_ (as opposed to a
- non-host-networked pod), that connection should use the node's stable address as its
- destination or source IP (respectively), so that the connection can continue working if
- one of the planes has an outage.
-
-- Importantly, this includes connections that Kubernetes uses as part of its own control
- plane, such as between the Kubernetes API server and kubelet on each node. Ideally,
- therefore, the stable IP address setup on each node should happen before Kubernetes
- starts running.
-
-- BGP is an exception to the previous points - in fact, the _only_ exception - because we
- want each node's BGP peerings to be interface-specific and to reflect what is actually
- reachable, moment by moment, over that interface. The Linux routing table then
- automatically adjusts so that the route to each remote destination is either ECMP -
- when both planes are up - or non-ECMP when it can only be reached over one of the
- planes.
-
-- BGP peerings should be configured to detect any outages, and to propagate their
- consequences, as quickly as possible, so that the routing can quickly respond on each
- node. Note that this is quite different from the reasoning for a single connectivity
- plane, where it's better to delay any network churn, on the assumption that an outage
- will be quickly fixed.
-
-Finally, to spread load evenly and maximise use of both planes when both are available, the
-routers and Linux kernel need to be configured for efficient ECMP.
-
-### Calico's early networking architecture
-
-$[prodname]'s $[nodecontainer] image can be run in an "early networking" mode,
-on each node, to perform all of the above points that are needed before Kubernetes starts
-running. That means that it:
-
-- Provisions the stable IP address.
-
-- Makes the changes needed to ensure that the stable address will be used as the source
- IP for any outgoing connections from the node.
-
-- Starts running BGP, peering with the node's ToRs, to advertise the node's
- stable address to other nodes.
-
-- Configures efficient ECMP in the Linux kernel (with `fib_multipath_hash_policy=1` and
- `fib_multipath_use_neigh=1`).
-
-More detail is given below on how to run this early networking image. A key point is that
-it must run as soon as possible after each node boot, and before Kubernetes starts on the
-node, so it is typically run as a Docker or podman container.
-
-After its start-of-day provisioning, the early networking container keeps running so that
-it can tag-team the BGP role with Calico's regular BGP service running inside the
-$[nodecontainer] _pod_:
-
-- Initially the $[nodecontainer] pod does not yet exist, so the early networking
- BGP runs to advertise out the node's stable address.
-
-- After Kubernetes has started on the node, and Calico has been installed in Kubernetes,
- the $[nodecontainer] pod runs and starts its own BGP service. The early
- networking container spots that the regular BGP service is now running and so shuts
- down its own BGP. Now the regular BGP service handles the advertisement of the stable
- address, as well as pod IPs and so on.
-
-- Later, the $[nodecontainer] pod might be shut down, e.g. for restart or upgrade.
- If the downtime continues for longer than the graceful restart period, the early
- networking container spots this and restarts its own BGP, to ensure that the node's
- stable IP address continues to be advertised to other nodes. The cycle can now repeat
- from the "Initially" state above.
-
- :::note
-
- The default graceful restart period is 120s for traditional BGP GR and
- 3600s for LLGR.
-
- :::
-
-### BGP configuration for rapid outage detection
-
-A dual ToR cluster needs Calico BGPPeer resources to specify how each node should peer
-with its ToRs. The remaining parts of the dual ToR network design are implemented as
-properties of those BGP peerings, and as corresponding properties on the BGP configuration
-between and within the ToRs and core infrastructure.
-
-Specifically, on Calico's BGPPeer resource,
-
-- the `failureDetectionMode` field is used to enable BFD
-
-- the `restartMode` field can be used to enable long-lived graceful restart (LLGR).
-
-See below for more on the benefits of these settings. When they are used, consistent
-settings are needed on the ToRs and core infrastructure.
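-
-For example, a BGPPeer for a rack's first ToR with both settings applied might look like the following sketch (the peer IP, AS number, and node selector are illustrative):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: BGPPeer
-metadata:
-  name: ra1-bfd
-spec:
-  nodeSelector: rack == 'ra'
-  peerIP: 172.31.11.100
-  asNumber: 65001
-  # Use BFD for fast failure detection on the directly connected peering.
-  failureDetectionMode: BFDIfDirectlyConnected
-  # Prefer LLGR so routes are de-preferenced rather than withdrawn on restart.
-  restartMode: LongLivedGracefulRestart
-```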
-
-### ECMP routing
-
-An "Equal Cost Multiple Path" (ECMP) route is one that has multiple possible ways to reach
-a given destination or prefix, all of which are considered to be equally good. A dual ToR
-setup naturally generates ECMP routes, with the different paths going over the different
-connectivity planes.
-
-When using an ECMP route, Linux decides how to balance traffic across the available paths,
-including whether this is informed by TCP and UDP port numbers as well as source and
-destination IP addresses, whether the decision is made per-packet, per-connection, or in
-some other way, and so on; and the details here have varied with Linux kernel version.
-For a clear account of the exact options and behaviors for different kernel versions,
-please see [this blog](https://web.archive.org/web/20210204031636/https://cumulusnetworks.com/blog/celebrating-ecmp-part-two/).
-
-### BFD
-
-Bidirectional Forwarding Detection (BFD) is [a protocol](https://tools.ietf.org/html/rfc5880)
- that detects very quickly when forwarding
-along a particular path stops working - whether that's because a link has broken
-somewhere, or some software component along the path.
-
-In a dual ToR setup, rapid failure detection is important so that traffic flows within the
-cluster can quickly adjust to using the other available connectivity plane.
-
-### Long lived graceful restart
-
-Long Lived Graceful Restart (LLGR) is [an extension for BGP](https://tools.ietf.org/html/draft-uttaro-idr-bgp-persistence-05)
- that handles link
-failure by lowering the preference of routes over that link. This is a compromise between
-the base BGP behaviour - which is immediately to remove those routes - and traditional BGP
-Graceful Restart behaviour - which is not to change those routes at all, until some
-configured time has passed.
-
-For a dual ToR setup, LLGR is helpful, as explained in more detail by [this blog](https://vincent.bernat.ch/en/blog/2018-bgp-llgr), because:
-
-- If a link fails somewhere, the immediate preference lowering allows traffic to adjust
- immediately to use the other connectivity plane.
-
-- If a node is restarted, we still get the traditional Graceful Restart behaviour whereby
- routes to that node persist in the rest of the network.
-
-### Default routing and "nearly default" routes
-
-Calico's early networking architecture - and more generally, the considerations for dual
-ToR that are presented on this page - is compatible with many possible [L3 fabric designs](../../reference/architecture/design/l3-interconnect-fabric.mdx). One of
-the options in such designs is "downward default", which means that each ToR only
-advertises the default route to its directly connected nodes, even when it has much more
-detailed routing information. "Downward default" works because the ToR should indeed be
-the node's next hop for all destinations, except for directly connected nodes in the same
-rack.
-
-In a dual ToR cluster, each node has two ToRs, and "downward default" should result in the
-node having an ECMP default route like this:
-
-```
-default proto bird
- nexthop via 172.31.11.100 dev eth0
- nexthop via 172.31.12.100 dev eth0
-```
-
-If one of the planes is broken, BGP detects and propagates the outage and that route
-automatically changes to a non-ECMP route via the working plane:
-
-```
-default via 172.31.12.100 dev eth0 proto bird
-```
-
-That is exactly the behaviour that is wanted in a dual ToR cluster. The snag with it is
-that there can be other procedures in the node's operating system that also update the
-default route - in particular, DHCP - and that can interfere with this desired behaviour.
-For example, if a DHCP lease renewal occurs for one of the node's interfaces, the node may
-then replace the default route as non-ECMP via that interface.
-
-A simple way to avoid such interference is to export the "nearly default" routes 0.0.0.0/1
-and 128.0.0.0/1 from the ToRs, instead of the true default route 0.0.0.0/0. 0.0.0.0/1 and
-128.0.0.0/1 together cover the entire IPv4 address space and so provide correct dual ToR
-routing for any destination outside the local rack. They also mask the true default route
-0.0.0.0/0, by virtue of having longer prefixes (1 bit instead of 0 bits), and so it no
-longer matters if there is any other programming of the true default route on the node.
-
-## Before you begin
-
-**Unsupported**
-
-- AKS
-- EKS
-- GKE
-
-**Required**
-
-- Calico CNI
-
-## How to
-
-- [Prepare YAML resources describing the layout of your cluster](#prepare-yaml-resources-describing-the-layout-of-your-cluster)
-- [Arrange for dual-homed nodes to run $[nodecontainer] on each boot](#arrange-for-dual-homed-nodes-to-run-cnx-node-on-each-boot)
-- [Configure your ToR routers and infrastructure](#configure-your-tor-routers-and-infrastructure)
-- [Install Kubernetes and $[prodname]](#install-kubernetes-and-calico-enterprise)
-- [Verify the deployment](#verify-the-deployment)
-
-### Prepare YAML resources describing the layout of your cluster
-
-1. Prepare BGPPeer resources to specify how each node in your cluster should peer with
- the ToR routers in its rack. For example, if your rack 'A' has ToRs with IPs
- 172.31.11.100 and 172.31.12.100 and the rack AS number is 65001:
-
- ```yaml
- apiVersion: projectcalico.org/v3
- kind: BGPPeer
- metadata:
- name: ra1
- spec:
- nodeSelector: "rack == 'ra' || rack == 'ra_single'"
- peerIP: 172.31.11.100
- asNumber: 65001
- sourceAddress: None
- ---
- apiVersion: projectcalico.org/v3
- kind: BGPPeer
- metadata:
- name: ra2
- spec:
- nodeSelector: "rack == 'ra'"
- peerIP: 172.31.12.100
- asNumber: 65001
- sourceAddress: None
- ```
-
- :::note
-
- The effect of the `nodeSelector` fields here is that any node with label
- `rack: ra` will peer with both these ToRs, while any node with label `rack: ra_single` will peer with only the first ToR. For optimal dual ToR function and
- resilience, nodes in rack 'A' should be labelled `rack: ra`, but `rack: ra_single`
- can be used instead on any nodes which cannot be dual-homed.
-
- :::
-
- Repeat for as many racks as there are in your cluster. Each rack needs a new pair of
- BGPPeer resources with its own ToR addresses and AS number, and `nodeSelector` fields
- matching the nodes that should peer with its ToR routers.
-
- Depending on what your ToR supports, consider also setting these fields in each
- BGPPeer:
-
- - `failureDetectionMode: BFDIfDirectlyConnected` to enable BFD, when possible, for
- fast failure detection.
-
- :::note
-
- $[prodname] only supports BFD on directly connected peerings, but
- in practice nodes are normally directly connected to their ToRs.
-
- :::
-
- - `restartMode: LongLivedGracefulRestart` to enable LLGR handling when the node needs
- to be restarted, if your ToR routers support LLGR. If not, we recommend instead
- `maxRestartTime: 10s`.
-
- - `birdGatewayMode: DirectIfDirectlyConnected` to enable the "direct" next hop
- algorithm, if that is helpful for optimal interworking with your ToR routers.
-
- :::note
-
- For directly connected BGP peerings, BIRD provides two gateway
- computation modes, "direct" and "recursive".
-
- "recursive" is the default, but "direct" can give better results when the ToR
- also acts as the route reflector (RR) for the rack.
- Specifically, a combined ToR/RR should ideally keep the BGP next hop intact (aka
- "next hop keep") when reflecting routes from other nodes in the same rack, but
- add itself as the BGP next hop (aka "next hop self") when forwarding routes from
- outside the rack. If your ToRs can be configured to do that, fine.
- If they cannot, an effective workaround is to configure the ToRs to do "next hop
- keep" for all routes, with "gateway direct" on the $[prodname] nodes. In
- effect the “gateway direct” applies a “next hop self” when needed, but otherwise
- not.
-
- :::
-
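-   For example, the first rack 'A' peering above, extended with these optional settings (a
-   sketch; keep only the settings that your ToR routers actually support):
-
-   ```yaml
-   apiVersion: projectcalico.org/v3
-   kind: BGPPeer
-   metadata:
-     name: ra1
-   spec:
-     nodeSelector: "rack == 'ra' || rack == 'ra_single'"
-     peerIP: 172.31.11.100
-     asNumber: 65001
-     sourceAddress: None
-     failureDetectionMode: BFDIfDirectlyConnected
-     restartMode: LongLivedGracefulRestart
-     birdGatewayMode: DirectIfDirectlyConnected
-   ```
-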
-1. Prepare this BGPConfiguration resource to [disable the full node-to-node mesh](bgp.mdx#disable-the-default-bgp-node-to-node-mesh):
-
- ```yaml
- apiVersion: projectcalico.org/v3
- kind: BGPConfiguration
- metadata:
- name: default
- spec:
- nodeToNodeMeshEnabled: false
- ```
-
-1. Prepare disabled IPPool resources for the CIDRs from which you will allocate stable
- addresses for dual-homed nodes. For example, if the nodes in rack 'A' will have
- stable addresses from 172.31.10.0/24:
-
- ```yaml
- apiVersion: projectcalico.org/v3
- kind: IPPool
- metadata:
- name: ra-stable
- spec:
- cidr: 172.31.10.0/24
- disabled: true
- nodeSelector: all()
- ```
-
- If the next rack uses a different CIDR, define a similar IPPool for that rack, and so
- on.
-
- :::note
-
- These IPPool definitions tell $[prodname]'s BGP component to export
- routes within the given CIDRs, which is essential for the core BGP infrastructure to
- learn how to route to each stable address. `disabled: true` tells $[prodname]
- _not_ to use these CIDRs for pod IPs.
-
- :::
-
-1. Prepare an enabled IPPool resource for your default CIDR for pod IPs. For example:
-
- ```yaml
- apiVersion: projectcalico.org/v3
- kind: IPPool
- metadata:
- name: default-ipv4
- spec:
- cidr: 10.244.0.0/16
- nodeSelector: all()
- ```
-
- :::note
-
- The CIDR must match what you specify elsewhere in the Kubernetes
- installation. For example, `networking.clusterNetwork.cidr` in OpenShift's install
- config, or `--pod-network-cidr` with kubeadm. You should not specify `ipipMode` or
- `vxlanMode`, as these are incompatible with dual ToR operation. `natOutgoing` can
- be omitted, as here, if your core infrastructure will perform an SNAT for traffic
- from pods to the Internet.
-
- :::
-
-1. Prepare an EarlyNetworkConfiguration resource to specify the additional information
- that is needed for each node in a multi-rack dual ToR cluster:
-
- - The stable address for the node.
- - Its BGP AS number.
- - The IPs that the node should peer with, when $[nodecontainer] runs
- as a container for early networking setup after each node boot.
- - Any labels that the node should have, so as to match the right BGPPeer definitions
- for its rack, when $[nodecontainer] runs as a Kubernetes pod.
-
-
-   With OpenShift, also add a top-level `platform: openshift` setting.
-
- :::note
-
- `platform: openshift` triggers additional per-node setup that is needed
- during OpenShift's bootstrapping phase.
-
- :::
-
-   For example, with IP addresses and AS numbers similar to those in the other resources above:
-
- ```yaml noValidation
- apiVersion: projectcalico.org/v3
- kind: EarlyNetworkConfiguration
- spec:
- platform: openshift
- nodes:
- # worker1
- - interfaceAddresses:
- - 172.31.11.3
- - 172.31.12.3
- stableAddress:
- address: 172.31.10.3
- asNumber: 65001
- peerings:
- - peerIP: 172.31.11.100
- - peerIP: 172.31.12.100
- labels:
- rack: ra
- # worker2
- - interfaceAddresses:
- - 172.31.21.4
- - 172.31.22.4
- stableAddress:
- address: 172.31.20.4
- asNumber: 65002
- peerings:
- - peerIP: 172.31.21.100
- - peerIP: 172.31.22.100
- labels:
- rack: rb
- ...
- ```
-
-1. Prepare a ConfigMap resource named "bgp-layout", in namespace "tigera-operator", that
- wraps the EarlyNetworkConfiguration like this:
-
- ```yaml noValidation
- apiVersion: v1
- kind: ConfigMap
- metadata:
- name: bgp-layout
- namespace: tigera-operator
- data:
- earlyNetworkConfiguration: |
- apiVersion: projectcalico.org/v3
- kind: EarlyNetworkConfiguration
- spec:
- nodes:
- # worker1
- - interfaceAddresses:
- ...
- ```
-
-:::note
-
-EarlyNetworkConfiguration supplies labels and AS numbers to apply to each
-Calico node, as well as peering and other network configuration to use during node
-startup prior to receiving BGPPeer and BGPConfiguration resources from the datastore.
-EarlyNetworkConfiguration will be superseded by any BGPPeer or BGPConfiguration
-resources after successful startup.
-
-:::
-
-### Arrange for dual-homed nodes to run $[nodecontainer] on each boot
-
-$[prodname]'s $[nodecontainer] image normally runs as a Kubernetes pod, but
-for dual ToR setup it must also run as a container after each boot of a dual-homed node.
-For example:
-
-```
-podman run --privileged --net=host \
- -v /calico-early:/calico-early -e CALICO_EARLY_NETWORKING=/calico-early/cfg.yaml \
- $[registry]$[imageNames.node]:latest
-```
-
-The environment variable `CALICO_EARLY_NETWORKING` must point to the
-EarlyNetworkConfiguration prepared above, so the EarlyNetworkConfiguration YAML must be
-copied into a file on the node (here, `/calico-early/cfg.yaml`) and mounted into the
-$[nodecontainer] container.
-
-We recommend defining systemd services to ensure that early networking runs on each boot,
-and before kubelet starts on the node. The following example may need tweaking for your
-particular platform, but it illustrates the important points.
-
-Firstly, a "calico-early" service that runs the Calico early networking on each boot:
-
-```
-[Unit]
-Wants=network-online.target
-After=network-online.target
-After=nodeip-configuration.service
-[Service]
-ExecStartPre=/bin/sh -c "rm -f /etc/systemd/system/kubelet.service.d/20-nodenet.conf /etc/systemd/system/crio.service.d/20-nodenet.conf; systemctl daemon-reload"
-ExecStartPre=-/bin/podman rm -f calico-early
-ExecStartPre=/bin/mkdir -p /etc/calico-early
-ExecStartPre=/bin/sh -c "test -f /etc/calico-early/details.yaml || /bin/curl -o /etc/calico-early/details.yaml http://172.31.1.1:8080/calico-early/details.yaml"
-ExecStart=/bin/podman run --rm --privileged --net=host --name=calico-early -v /etc/calico-early:/etc/calico-early -e CALICO_EARLY_NETWORKING=/etc/calico-early/details.yaml $[registry]$[imageNames.node]:latest
-[Install]
-WantedBy=multi-user.target
-```
-
-:::note
-
-- You must also install your Tigera-issued pull secret at `/root/.docker/config.json`,
- on each node, to enable pulling from $[registry].
-- Some OpenShift versions have a `nodeip-configuration` service that configures
- kubelet's `--node-ip` option **wrongly** for a dual ToR setup. The
- `After=nodeip-configuration.service` setting and the deletion of `20-nodenet.conf`
- undo that service's work so that kubelet can choose its own IP correctly (using a
- reverse DNS lookup).
-- The `/bin/curl ...` line shows how you can download the EarlyNetworkConfiguration
- YAML from a central hosting point within your cluster.
-
-:::
-
-Secondly, a "calico-early-wait" service that delays kubelet until after the Calico early
-networking setup is in place:
-
-```
-[Unit]
-After=calico-early.service
-Before=kubelet.service
-[Service]
-Type=oneshot
-ExecStart=/bin/sh -c "while sleep 5; do grep -q 00000000:1FF3 /proc/net/tcp && break; done; sleep 15"
-[Install]
-WantedBy=multi-user.target
-```
-
-:::note
-
-- The `ExecStart` line here ensures that kubelet does not start running until the
- calico-early service has started listening on port 8179 (hex `1FF3`). 8179 is the
- port that the calico-early service uses for pre-Kubernetes BGP.
-- We have sometimes observed issues if kubelet starts immediately after Calico's early
- networking setup, because of NetworkManager toggling the hostname. The final `sleep 15` allows for such changes to settle down before kubelet starts.
-
-:::
-
-On OpenShift you should wrap the above service definitions in `MachineConfig` resources
-for the control and worker nodes.
-
-On other platforms either define and enable the above services directly, or use
-whatever API the platform provides to define and enable services on new nodes.
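-
-For example, on a platform where you manage nodes directly, you might install and enable the
-two units like this (a sketch only; the unit file names are illustrative):
-
-```
-cp calico-early.service calico-early-wait.service /etc/systemd/system/
-systemctl daemon-reload
-systemctl enable --now calico-early.service calico-early-wait.service
-```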
-
-### Configure your ToR routers and infrastructure
-
-You should configure your ToR routers to accept all the BGP peerings from
-$[prodname] nodes, to reflect routes between the nodes in that rack, and to
-propagate routes between the ToR routers in different racks. In addition, we recommend
-considering the following points.
-
-BFD should be enabled if possible on all BGP sessions - both to the $[prodname]
-nodes, and between racks in your core infrastructure - so that a break in connectivity
-anywhere can be rapidly detected. The handling should be to initiate LLGR procedures if
-possible, or else terminate the BGP session non-gracefully.
-
-LLGR should be enabled if possible on all BGP sessions - again, both to the
-$[prodname] nodes, and between racks in your core infrastructure. Traditional BGP
-graceful restart should not be used, because this will delay the cluster's response to a
-break in one of the connectivity planes.
-
-### Install Kubernetes and $[prodname]
-
-Details here vary, depending on **when** your Kubernetes installer gives you an opportunity
-to define custom resources. Fundamentally, you perform the installation as usual, except
-that all of the Calico resources prepared above - other than the EarlyNetworkConfiguration -
-must be added to the datastore **before** the $[nodecontainer] pods start running on any
-node. We can illustrate this by looking at two examples: with OpenShift, and when adding
-Calico to an existing Kubernetes cluster.
-
-**OpenShift**
-
-With OpenShift, follow our documentation as
-far as the option to provide additional configuration.
-
-Then use `kubectl create configmap ...`, as that documentation says, to combine the
-prepared BGPPeer, BGPConfiguration and IPPool resources into a `calico-resources`
-ConfigMap. Place the generated file in the manifests directory for the OpenShift install.
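-
-As a rough sketch of that step (the namespace, output filename, and exact flags should follow
-the install documentation; the resource directory is a placeholder):
-
-```bash
-kubectl create configmap calico-resources -n tigera-operator \
-  --from-file=<directory-with-prepared-resources> \
-  --dry-run=client -o yaml > manifests/02-configmap-calico-resources.yaml
-```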
-
-Also place the "bgp-layout" ConfigMap file in the manifests directory.
-
-Now continue with the OpenShift install process, and it will take care of adding those
-resources into the datastore as early as possible.
-
-**Adding to an existing Kubernetes cluster**
-
-Follow our documentation as far as the option for installing any custom Calico resources.
-Then use `calicoctl`, as that documentation says, to install the prepared BGPPeer,
-BGPConfiguration and IPPool resources.
-
-Also use `kubectl` to install the "bgp-layout" ConfigMap.
-
-Now continue with the $[prodname] install process, and you should observe each node
-establishing BGP sessions with its ToRs.
-
-### Verify the deployment
-
-If you examine traffic and connections within the cluster - for example, using `ss` or
-`tcpdump` - you should see that all connections use loopback IP addresses or pod CIDR IPs
-as their source and destination. For example:
-
-- The kubelet on each node connecting to the API server.
-
-- The API server's connection to its backing etcd database, and peer connections between
- the etcd cluster members.
-
-- Pod connections that involve an SNAT or MASQUERADE in the data path, as can be the case
- when connecting to a Service through a cluster IP or NodePort. At the point of the
- SNAT or MASQUERADE, a loopback IP address should be used.
-
-- Direct connections between pod IPs on different nodes.
-
-The only connections using interface-specific addresses should be BGP.
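-
-For example, one way to spot-check this on a node (a sketch, assuming the Kubernetes API
-server listens on port 6443): the local addresses shown should be the node's stable loopback
-IP, not an interface-specific eth0/eth1 address.
-
-```
-ss -tnp | grep ':6443'
-```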
-
-:::note
-
-If you plan to use [Egress Gateways](../egress/egress-gateway-on-prem.mdx) in your cluster, you must also adjust the $[nodecontainer] IP
-auto-detection method so that it picks up the stable IP, for example by using the `interface: lo` setting
-(the default first-found setting skips over the lo interface). This can be configured via the
-$[prodname] [Installation resource](../../reference/installation/api.mdx#operator.tigera.io/v1.NodeAddressAutodetection).
-
-:::
-
-If you look at the Linux routing table on any cluster node, you should see ECMP routes
-like this to the loopback address of every other node in other racks:
-
-```
-172.31.20.4/32
- nexthop via 172.31.11.250 dev eth0
- nexthop via 172.31.12.250 dev eth1
-```
-
-and like this to the loopback address of every other node in the same rack:
-
-```
-172.31.10.4/32
- nexthop dev eth0
- nexthop dev eth1
-```
-
-If you launch some pods in the cluster, you should see ECMP routes for the /26 IP blocks
-for the nodes where those pods were scheduled, like this:
-
-```
-10.244.192.128/26
- nexthop via 172.31.11.250 dev eth0
- nexthop via 172.31.12.250 dev eth1
-```
-
-If you do something to break the connectivity between racks of one of the planes, you
-should see, within only a few seconds, that the affected routes change to have a single
-path only, via the plane that is still unbroken; for example:
-
-```
-172.31.20.4/32 via 172.31.12.250 dev eth1
-10.244.192.128/26 via 172.31.12.250 dev eth1
-```
-
-When the connectivity break is repaired, those routes should change to become ECMP again.
diff --git a/calico-cloud_versioned_docs/version-20-1/networking/configuring/index.mdx b/calico-cloud_versioned_docs/version-20-1/networking/configuring/index.mdx
deleted file mode 100644
index a32ec45aa3..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/networking/configuring/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Configure Calico networking options.
-hide_table_of_contents: true
----
-
-# Configure Calico Cloud networking
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/networking/configuring/mtu.mdx b/calico-cloud_versioned_docs/version-20-1/networking/configuring/mtu.mdx
deleted file mode 100644
index 1b87480273..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/networking/configuring/mtu.mdx
+++ /dev/null
@@ -1,119 +0,0 @@
----
-description: Optimize network performance for workloads by configuring the MTU in Calico to best suit your underlying network.
----
-
-# Configure MTU to maximize network performance
-
-## Big picture
-
-Configure the maximum transmission unit (MTU) for your $[prodname] environment.
-
-## Value
-
-Optimize network performance for workloads by configuring the MTU in $[prodname] to best suit your underlying network.
-
-Increasing the MTU can improve performance, and decreasing the MTU can resolve packet loss and fragmentation problems that occur when the MTU is too large for the network path.
-
-## Concepts
-
-### MTU and $[prodname] defaults
-
-The maximum transmission unit (MTU) setting determines the largest packet size that can be transmitted through your network. MTU is configured on the veth attached to each workload, and tunnel devices (if you enable IP in IP, VXLAN, or WireGuard).
-
-In general, maximum performance is achieved by using the highest MTU value that does not cause fragmentation or dropped packets on the path. A larger MTU increases the maximum bandwidth and may reduce CPU consumption for a given traffic rate. The improvement is often more significant when pod-to-pod traffic is being encapsulated (IP in IP, VXLAN, or WireGuard) and the splitting and combining of such traffic cannot be offloaded to your NICs.
-
-By default, $[prodname] will auto-detect the correct MTU for your cluster based on node configuration and enabled networking modes. This guide explains how you can override auto-detection
-of MTU by providing an explicit value if needed.
-
-To ensure auto-detection of MTU works correctly, make sure that the correct encapsulation modes are set in your [felix configuration](../../reference/resources/felixconfig.mdx). Disable any unused encapsulations (`vxlanEnabled`, `ipipEnabled`, `wireguardEnabled` and `wireguardEnabledV6`) in your felix configuration to ensure that auto-detection can pick the optimal MTU for your cluster.
-
-## Before you begin...
-
-**Required**
-
-- Calico CNI
-
-For help on using IP in IP and/or VXLAN overlays, see [Configure overlay networking](vxlan-ipip.mdx).
-
-For help on using WireGuard encryption, see [Configure WireGuard encryption](../../compliance/encrypt-cluster-pod-traffic.mdx).
-
-## How to
-
-- [Determine MTU size](#determine-mtu-size)
-- [Configure MTU](#configure-mtu)
-- [View current tunnel MTU values](#view-current-tunnel-mtu-values)
-
-### Determine MTU size
-
-The following table lists common MTU sizes for $[prodname] environments. Because MTU is a global property of the network path between endpoints, you should set the MTU to the minimum MTU of any path that packets may take.
-
-**Common MTU sizes**
-
-| Network MTU | $[prodname] MTU | $[prodname] MTU with IP-in-IP (IPv4) | $[prodname] MTU with VXLAN (IPv4) | $[prodname] MTU with VXLAN (IPv6) | $[prodname] MTU with WireGuard (IPv4) | $[prodname] MTU with WireGuard (IPv6) |
-| ---------------------- | --------------------- | ------------------------------------------ | --------------------------------------- | ------------------------------------------- | ---- | ---- |
-| 1500 | 1500 | 1480 | 1450 | 1430 | 1440 | 1420 |
-| 9000 | 9000 | 8980 | 8950 | 8930 | 8940 | 8920 |
-| 1500 (AKS) | 1500 | 1480 | 1450 | 1430 | 1340 | 1320 |
-| 1460 (GCE) | 1460 | 1440 | 1410 | 1390 | 1400 | 1380 |
-| 9001 (AWS Jumbo) | 9001 | 8981 | 8951 | 8931 | 8941 | 8921 |
-| 1450 (OpenStack VXLAN) | 1450 | 1430 | 1400 | 1380 | 1390 | 1370 |
-
-**Recommended MTU for overlay networking**
-
-The extra overlay header used in the IP in IP, VXLAN and WireGuard protocols reduces the usable MTU by the size of that header. (IP in IP uses a 20-byte header, IPv4 VXLAN uses a 50-byte header, IPv6 VXLAN uses a 70-byte header, IPv4 WireGuard uses a [60-byte header](https://lists.zx2c4.com/pipermail/wireguard/2017-December/002201.html) and IPv6 WireGuard uses an 80-byte header.)
-
-When using AKS, the underlying network has an [MTU of 1400](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-tcpip-performance-tuning#azure-and-vm-mtu), even though the network interface will have an MTU of 1500.
-WireGuard sets the Don't Fragment (DF) bit on its packets, and so the MTU for WireGuard on AKS needs to be set 60 bytes (or 80 bytes for IPv6) below the 1400 MTU of the underlying network to avoid dropped packets.
-
-If you have a mix of WireGuard and either IP in IP or VXLAN in your cluster, you should configure the MTU to be the smallest of the values of each encap type. The reason for this is that only WireGuard encapsulation will be used between any nodes where both have WireGuard enabled, and IP in IP or VXLAN will then be used between any nodes where both do not have WireGuard enabled. This could be the case if, for example, you are in the process of installing WireGuard on your nodes.
-
-Therefore, we recommend the following:
-
-- If you use IPv4 WireGuard encryption anywhere in your pod network, configure MTU size as “physical network MTU size minus 60”.
-- If you use IPv6 WireGuard encryption anywhere in your pod network, configure MTU size as “physical network MTU size minus 80”.
-- If you don't use WireGuard, but use IPv4 VXLAN anywhere in your pod network, configure MTU size as “physical network MTU size minus 50”.
-- If you don't use WireGuard, but use IPv6 VXLAN anywhere in your pod network, configure MTU size as “physical network MTU size minus 70”.
-- If you don't use WireGuard, but use only IP in IP, configure MTU size as “physical network MTU size minus 20”.
-- Set the workload endpoint MTU and the tunnel MTUs to the same value (so all paths have the same MTU).
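-
-For example, on a standard 1500-byte Ethernet underlay where IPv4 VXLAN is used somewhere in the pod network, this works out to 1500 - 50 = 1450, matching the table above. A minimal sketch of setting that value in the operator `Installation` resource (see [Configure MTU](#configure-mtu) for the patch commands):
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: Installation
-metadata:
-  name: default
-spec:
-  calicoNetwork:
-    # 1500 (physical network MTU) - 50 (IPv4 VXLAN overhead) = 1450
-    mtu: 1450
-```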
-
-**eBPF mode**
-
-In eBPF mode, the implementation of NodePorts uses a VXLAN tunnel to hand off packets from one node to another, so the VXLAN MTU setting
-is used to set the MTU of workloads (veths) and should be “physical network MTU size minus 50” (see above).
-
-**MTU for flannel networking**
-
-When using flannel for networking, the MTU for network interfaces should match the MTU of the flannel interface.
-
-- If using flannel with VXLAN, use the “$[prodname] MTU with VXLAN” column in the table above for common sizes.
-
-### Configure MTU
-
-:::note
-
-The updated MTU used by $[prodname] only applies to new workloads.
-
-:::
-
-For Operator installations, edit the $[prodname] operator `Installation` resource to set the `mtu`
-field in the `calicoNetwork` section of the `spec`. For example:
-
-```bash
-kubectl patch installation.operator.tigera.io default --type merge -p '{"spec":{"calicoNetwork":{"mtu":1440}}}'
-```
-
-Similarly, for OpenShift:
-
-```bash
-oc patch installation.operator.tigera.io default --type merge -p '{"spec":{"calicoNetwork":{"mtu":1440}}}'
-```
-
-### View current tunnel MTU values
-
-To view the current tunnel size, use the following command:
-
-`ip link show`
-
-The IP in IP tunnel appears as tunlx (for example, tunl0), along with the MTU size. For example:
-
-![Tunnel MTU](/img/calico-enterprise/tunnel.png)
diff --git a/calico-cloud_versioned_docs/version-20-1/networking/configuring/multiple-networks.mdx b/calico-cloud_versioned_docs/version-20-1/networking/configuring/multiple-networks.mdx
deleted file mode 100644
index f4cb6cbb24..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/networking/configuring/multiple-networks.mdx
+++ /dev/null
@@ -1,249 +0,0 @@
----
-description: Configure a cluster with multiple Calico Cloud networks on each pod, and enforce security using Calico Cloud tiered network policy.
----
-
-# Configure multiple Calico Cloud networks on a pod
-
-## Big picture
-
-Configure a Kubernetes cluster with multiple $[prodname] networks on each pod, and enforce security using $[prodname] tiered network policy.
-
-## Value
-
-By default, you can configure only one CNI (network and pod interface) in a cluster. But many deployments require multiple networks (for example, one that is faster or more secure) for sending different types of data. $[prodname] supports configuring additional $[prodname] networks and interfaces in your pods using the Multus-CNI plugin. You can then use $[prodname] tiered policy and other features to enforce security on all of your workload traffic.
-
-## Concepts
-
-### About the Multus-CNI plugin
-
-$[prodname] uses the [Multus-CNI plugin](https://github.com/intel/multus-cni/) to create multiple $[prodname] networks and multiple pod interfaces to access these networks. This extends the default network and pod interface that come with the Calico CNI.
-
-You install Multus on a cluster, then simply enable Multus in the $[prodname] Installation resource. Using the Multus **NetworkAttachmentDefinition**, you define the new networks and reference them as an annotation in the pod resource.
-
-### Labels, workload endpoints, and policy
-
-When you set the `MultiInterfaceMode` field to `Multus` in the Installation resource, the following network and network interface labels are automatically added to new workload endpoints.
-
-- `projectcalico.org/network`
-- `projectcalico.org/network-namespace`
-- `projectcalico.org/network-interface`
-
-You can then create $[prodname] policies using these label selectors to target specific networks or network interfaces.
-
-### Limitations
-
-**Maximum additional networks per pod**
-
-You can define a maximum of nine additional $[prodname] networks on a pod. If you add a network that exceeds the limit for the pod, networking is not configured and the pod fails to start with an associated error.
-
-**$[prodname] features**
-
-Although the following $[prodname] features are supported for your default $[prodname] network, they are not supported at this time for additional networks/network interfaces using Multus:
-
-- Floating IPs
-- Specific IPs
-- Specifying IP pools on a per-namespace or per-pod basis
-- Egress gateways
-
-## Before you begin...
-
-**Required**
-
-- Calico CNI
-
- :::note
-
- Verify that you are using the $[prodname] CNI. The CNI plugin used by Kubernetes for AKS, EKS, and GKE may be different, which means this feature will not work.
-
- :::
-
-- [Install Multus 3.0+ on your Kubernetes cluster](https://github.com/intel/multus-cni/)
- :::note
-
- Multus is installed on OpenShift 4.0+ clusters.
-
- :::
-
-
-
-## How to
-
-1. [Configure cluster for multiple networks](#configure-cluster-for-multiple-networks)
-1. [Create a new network](#create-a-new-network)
-1. [Create a pod interface for the new network](#create-a-pod-interface-for-the-new-network)
-1. [Configure the IP pool for the network](#configure-the-ip-pool-for-the-network)
-1. [Enforce policy on the new network and pod interface](#enforce-policy-on-the-new-network-and-pod-interface)
-1. [View workload endpoints](#view-workload-endpoints)
-
-### Configure cluster for multiple networks
-
-In the [Installation custom resource](../../reference/installation/api.mdx#operator.tigera.io/v1.CalicoNetworkSpec), set the `MultiInterfaceMode` to **Multus**.
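-
-For example, a minimal sketch of the relevant part of the Installation resource (the field is shown here in its YAML camelCase form; apply it by editing or patching the `default` Installation):
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: Installation
-metadata:
-  name: default
-spec:
-  calicoNetwork:
-    multiInterfaceMode: Multus
-```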
-
-### Create a new network
-
-Create a new network using the Multus **NetworkAttachmentDefinition**, and set the following required field to `"type":"calico"`.
-
-```yaml
-apiVersion: 'k8s.cni.cncf.io/v1'
-kind: NetworkAttachmentDefinition
-metadata:
- name: additional-calico-network
-spec:
- config: '{
- "cniVersion": "0.3.1",
- "type": "calico",
- "log_level": "info",
- "datastore_type": "kubernetes",
- "mtu": 1410,
- "nodename_file_optional": false,
- "ipam": {
- "type": "calico-ipam",
- "assign_ipv4" : "true",
- "assign_ipv6" : "false"
- },
- "policy": {
- "type": "k8s"
- },
- "kubernetes": {
- "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
- }
- }'
-```
-
-### Create a pod interface for the new network
-
-Create a pod interface that specifies the new network using an annotation.
-
-In the following example, we create a pod with an additional pod interface named `cali1`. The pod interface is attached to the network named `additional-calico-network` using the `k8s.v1.cni.cncf.io/networks` annotation.
-Note that all networks in `k8s.v1.cni.cncf.io/networks` are assumed to be $[prodname] networks.
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: multus-test-pod-1
- namespace: default
- annotations:
- k8s.v1.cni.cncf.io/networks: additional-calico-network@cali1
-spec:
- nodeSelector:
- kubernetes.io/os: linux
- containers:
- - name: multus-test
- command: ['/bin/sh', '-c', 'trap : TERM INT; sleep infinity & wait']
- image: alpine
-```
-
-### Configure the IP pool for the network
-
-Although not required, you may want to assign IPs from specific pools to specific network interfaces. If you are using the [$[prodname] IPAM plugin](../../reference/component-resources/configuration.mdx#specifying-ip-pools-on-a-per-namespace-or-per-pod-basis), specify the IP pools in the **NetworkAttachmentDefinition** custom resource. For example:
-
-```
- "ipam": {
- "type": "calico-ipam",
- "assign_ipv4" : "true",
- "assign_ipv6" : "false"
- "ipv4_pools": ["10.0.0.0/24", "20.0.0.0/16", "default-ipv4-ippool"],
-},
-```
-
-### Enforce policy on the new network and pod interface
-
-When MultiInterfaceMode is set to Multus, WorkloadEndpoints are created with these labels:
-
-- `projectcalico.org/network`
-- `projectcalico.org/network-namespace`
-- `projectcalico.org/network-interface`
-
-You can use these labels to enforce policies on specific interfaces and networks using policy label selectors.
-
-:::note
-
-Prior to $[prodname] 3.0, if you were using Kubernetes datastore (kdd mode), the workload endpoint field and name suffix were always **eth0**. In 3.0, the value for workload labels may not be what you expect. Before creating policies targeting WorkloadEndpoints using the new labels, you should verify label values using the commands in [View workload endpoints](#view-workload-endpoints).
-
-:::
-
-In this policy example, we use the selector field to target all WorkloadEndpoints with the network interface of, `cali1`.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: internal-access.allow-tcp-6379
- namespace: production
-spec:
- tier: internal-access
- selector: projectcalico.org/network-interface == cali1
- types:
- - Ingress
- - Egress
- ingress:
- - action: Allow
- metadata:
- annotations:
- from: frontend
- to: database
- protocol: TCP
- source:
- selector: role == 'frontend'
- destination:
- ports:
- - 6379
- egress:
- - action: Allow
-```
-
-### View workload endpoints
-
-**In the $[prodname] Manager UI**, go to the **WorkloadEndpoint** page to see all of the WorkloadEndpoints, including the network labels used for targeting WorkloadEndpoints with policy.
-
-**Using the CLI...**
-
-To view all WorkloadEndpoints for pods (default and new), use the following command.
-
-```
-MULTI_INTERFACE_MODE=multus calicoctl get workloadendpoints -o wide
-```
-
-```
-NAME WORKLOAD NODE NETWORKS INTERFACE PROFILES NATS
-test--bo--72vg--kadm--infra--0-k8s-multus--test--pod--1-eth0 multus-test-pod-1 bryan-bo-72vg-kadm-infra-0 192.168.53.129/32 calif887e436e8b kns.default,ksa.default.default
-test--bo--72vg--kadm--infra--0-k8s-multus--test--pod--1-net1 multus-test-pod-1 bryan-bo-72vg-kadm-infra-0 192.168.53.140/32 calim17CD6INXIX kns.default,ksa.default.default
-test--bo--72vg--kadm--infra--0-k8s-multus--test--pod--1-testiface multus-test-pod-1 bryan-bo-72vg-kadm-infra-0 192.168.53.142/32 calim27CD6INXIX kns.default,ksa.default.default
-test--bo--72vg--kadm--infra--0-k8s-multus--test--pod--1-net3 multus-test-pod-1 bryan-bo-72vg-kadm-infra-0 192.168.52.143/32 calim37CD6INXIX kns.default,ksa.default.default
-```
-
-To view specific WorkloadEndpoints, use the following command.
-
-```
-MULTI_INTERFACE_MODE=multus calicoctl get workloadendpoint test--bz--72vg--kadm--infra--0-k8s-multus--test--pod--1-net1 -o yaml
-```
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: WorkloadEndpoint
-metadata:
- creationTimestamp: '2020-05-04T22:23:05T'
- labels:
- projectcalico.org/namespace: default
- projectcalico.org/network: calico
- projectcalico.org/network-interface: net1
- projectcalico.org/network-namespace: default
- projectcalico.org/orchestrator: k8s
- projectcalico.org/serviceaccount: default
- name: test--bz--72vg--kadm--infra--0-k8s-multus--test--pod--1-net1
- namespace: default
- resourceVersion: '73572'
- uid: b9bb7482-cdb8-48d4-9ae5-58322d48391a
-spec:
- endpoint: net1
- interfaceName: calim16CD6INXIX
- ipNetworks:
- - 192.168.52.141/32
- node: bryan-bo-72vg-kadm-infra-0
- orchestrator: k8s
- pod: multus-test-pod-1
- profiles:
- - kns.default
- - ksa.default.default
-```
diff --git a/calico-cloud_versioned_docs/version-20-1/networking/configuring/node-local-dns-cache.mdx b/calico-cloud_versioned_docs/version-20-1/networking/configuring/node-local-dns-cache.mdx
deleted file mode 100644
index 4f82925e55..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/networking/configuring/node-local-dns-cache.mdx
+++ /dev/null
@@ -1,48 +0,0 @@
----
-description: Install NodeLocal DNSCache
----
-
-# Use NodeLocal DNSCache in your cluster
-
-## Big picture
-
-Set up NodeLocal DNSCache to improve DNS lookup latency.
-
-## Before you begin
-
-### Required
-
-Follow these [steps](https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/) to enable NodeLocal DNSCache connectivity.
-
-
-## Create a policy to allow traffic from NodeLocal DNSCache
-
-The following is a sample network policy that allows all incoming TCP traffic (including incoming traffic from
-`node-local-dns` pods) on port 53 on `kube-dns`.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: default.local-dns-to-core-dns
- namespace: kube-system
-spec:
- tier: default
- selector: k8s-app == "kube-dns"
- ingress:
- - action: Allow
- protocol: TCP
- destination:
- selector: k8s-app == "kube-dns"
- ports:
- - '53'
- types:
- - Ingress
-```
-
-To refine the sources permitted by this policy, take into account that NodeLocal DNSCache pods are host networked,
-and make sure to allow traffic from the addresses of your hosts.
-If you're using encapsulation, you will need to allow connectivity from the tunnel IPs.
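-
-For example, a tighter version of the ingress rule above might limit the allowed sources to your node and tunnel address ranges (the CIDR below is purely illustrative; substitute your own):
-
-```yaml
-  ingress:
-    - action: Allow
-      protocol: TCP
-      source:
-        nets:
-          - 10.0.0.0/24 # node and tunnel IP range for your cluster
-      destination:
-        selector: k8s-app == "kube-dns"
-        ports:
-          - '53'
-```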
-
-The Tigera operator creates policy to allow Tigera components to connect to NodeLocal DNSCache when detected.
-Felix accounts for the NodeLocal DNSCache in creating DNS Logs and enforcing DNS Policy.
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/networking/configuring/pod-mac-address.mdx b/calico-cloud_versioned_docs/version-20-1/networking/configuring/pod-mac-address.mdx
deleted file mode 100644
index 0ec4c2519d..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/networking/configuring/pod-mac-address.mdx
+++ /dev/null
@@ -1,35 +0,0 @@
----
-description: Specify the MAC address for a pod instead of allowing the operating system to assign one
----
-
-# Use a specific MAC address for a pod
-
-## Big picture
-
-Choose the MAC address for a pod instead of allowing the operating system to assign one.
-
-## Value
-
-Some applications bind software licenses to networking interface MAC addresses.
-
-## Concepts
-
-### Container MAC address
-
-The MAC address configured by the annotation described here will be visible from within the container on the eth0 interface. Since it is isolated to the container, it will not collide with any other MAC addresses assigned to other pods on the same node.
-
-## Before you begin...
-
-Your cluster must be using Calico CNI to use this feature.
-
-[Configuring the Calico CNI Plugins](../../reference/component-resources/configuration.mdx)
-
-## How to
-
-Annotate the pod with `cni.projectcalico.org/hwAddr` set to the desired MAC address. For example:
-
-```
- "cni.projectcalico.org/hwAddr": "1c:0c:0a:c0:ff:ee"
-```
-
-The annotation must be present when the pod is created; adding it later has no effect.
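-
-As a minimal sketch, the annotation sits in the pod's `metadata` like any other annotation (the pod name and image here are illustrative only):
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: fixed-mac-pod
-  annotations:
-    cni.projectcalico.org/hwAddr: '1c:0c:0a:c0:ff:ee'
-spec:
-  containers:
-    - name: app
-      image: alpine
-      command: ['/bin/sh', '-c', 'sleep infinity']
-```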
diff --git a/calico-cloud_versioned_docs/version-20-1/networking/configuring/vxlan-ipip.mdx b/calico-cloud_versioned_docs/version-20-1/networking/configuring/vxlan-ipip.mdx
deleted file mode 100644
index ec0c6a99ec..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/networking/configuring/vxlan-ipip.mdx
+++ /dev/null
@@ -1,148 +0,0 @@
----
-description: Configure Calico to use IP in IP or VXLAN overlay networking so the underlying network doesn’t need to understand pod addresses.
----
-
-# Overlay networking
-
-## Big picture
-
-Enable inter-workload communication across networks that are not aware of workload IPs.
-
-## Value
-
-In general, we recommend running Calico without network overlay/encapsulation. This gives you the highest performance and simplest network; the packet that leaves your workload is the packet that goes on the wire.
-
-However, selectively using overlays/encapsulation can be useful when running on top of an underlying network that cannot easily be made aware of workload IPs. A common example is if you are using Calico networking in AWS across multiple VPCs/subnets. In this case, Calico can selectively encapsulate only the traffic that is routed between the VPCs/subnets, and run without encapsulation within each VPC/subnet. You might also decide to run your entire Calico network with encapsulation as an overlay network -- as a quick way to get started without setting up BGP peering or other routing information in your underlying network.
-
-## Concepts
-
-### Routing workload IP addresses
-
-Networks become aware of workload IP addresses through layer 3 routing techniques like static routes or BGP route distribution, or layer 2 address learning. As such, they can route unencapsulated traffic to the right host for the endpoint that is the ultimate destination. However, not all networks are able to route workload IP addresses. For example, public cloud environments where you don’t own the hardware, AWS across VPC subnet boundaries, and other scenarios where you cannot peer Calico over BGP to the underlay, or easily configure static routes. This is why Calico supports encapsulation, so you can send traffic between workloads without requiring the underlying network to be aware of workload IP addresses.
-
-### Encapsulation types
-
-Calico supports two types of encapsulation: VXLAN and IP in IP. VXLAN is supported in some environments where IP in IP is not (for example, Azure). VXLAN has a slightly higher per-packet overhead because the header is larger, but unless you are running very network intensive workloads the difference is not something you would typically notice. The other small difference between the two types of encapsulation is that Calico's VXLAN implementation does not use BGP, whereas Calico's IP in IP implementation uses BGP between Calico nodes.
-
-### Cross-subnet
-
-Encapsulation of workload traffic is typically required only when traffic crosses a router that is unable to route workload IP addresses on its own. Calico can perform encapsulation on all traffic, on no traffic, or only on traffic that crosses a subnet boundary.
-
-## Before you begin
-
-**Required**
-
-- Calico CNI
-
-**Not supported**
-
-- Calico for OpenStack (i.e. when Calico is used as the Neutron plugin)
-
-**Limitations**
-
-- IP in IP supports only IPv4 addresses
-- VXLAN in IPv6 is only supported for kernel versions ≥ 4.19.1 or Red Hat kernel versions ≥ 4.18.0
-
-## How to
-
-- [Configure default IP pools at install time](#configure-default-ip-pools-at-install-time)
-- [Configure IP in IP encapsulation for only cross-subnet traffic](#configure-ip-in-ip-encapsulation-for-only-cross-subnet-traffic)
-- [Configure IP in IP encapsulation for all inter workload traffic](#configure-ip-in-ip-encapsulation-for-all-inter-workload-traffic)
-- [Configure VXLAN encapsulation for only cross-subnet traffic](#configure-vxlan-encapsulation-for-only-cross-subnet-traffic)
-- [Configure VXLAN encapsulation for all inter workload traffic](#configure-vxlan-encapsulation-for-all-inter-workload-traffic)
-
-### IPv4/6 address support
-
-IP in IP supports only IPv4 addresses.
-
-### Best practice
-
-Calico has an option to selectively encapsulate only traffic that crosses subnet boundaries. We recommend using the **cross-subnet** option with IP in IP or VXLAN to minimize encapsulation overhead. Cross-subnet mode provides better performance in AWS multi-AZ deployments, Azure VNETs, and on networks where routers are used to connect pools of nodes with L2 connectivity.
-
-Be aware that switching encapsulation modes can cause disruption to in-progress connections. Plan accordingly.
-
-### Configure default IP pools at install time
-
-Default IP pools are configured automatically by Calico at install time.
-
-For operator managed clusters, you can configure encapsulation in the IP pools section of the default Installation. For example, the following installation snippet will enable VXLAN across subnets.
-
-```yaml
-kind: Installation
-apiVersion: operator.tigera.io/v1
-metadata:
- name: default
-spec:
- calicoNetwork:
- ipPools:
- - cidr: 192.168.0.0/16
- encapsulation: VXLANCrossSubnet
-```
-
-### Configure IP in IP encapsulation for only cross-subnet traffic
-
-IP in IP encapsulation can be performed selectively, and only for traffic crossing subnet boundaries.
-
-To enable this feature, set `ipipMode` to `CrossSubnet`.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
- name: ippool-ipip-cross-subnet-1
-spec:
- cidr: 192.168.0.0/16
- ipipMode: CrossSubnet
- natOutgoing: true
-```
-
-### Configure IP in IP encapsulation for all inter workload traffic
-
-With `ipipMode` set to `Always`, Calico routes traffic using IP in IP for all traffic originating from a Calico-enabled host, to all Calico-networked containers and VMs within the IP pool.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
- name: ippool-ipip-1
-spec:
- cidr: 192.168.0.0/16
- ipipMode: Always
- natOutgoing: true
-```
-
-### Configure VXLAN encapsulation for only cross subnet traffic
-
-VXLAN encapsulation can be performed selectively, and only for traffic crossing subnet boundaries.
-
-To enable this feature, set `vxlanMode` to `CrossSubnet`.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
- name: ippool-vxlan-cross-subnet-1
-spec:
- cidr: 192.168.0.0/16
- vxlanMode: CrossSubnet
- natOutgoing: true
-```
-
-### Configure VXLAN encapsulation for all inter workload traffic
-
-With `vxlanMode` set to `Always`, Calico routes traffic using VXLAN for all traffic originating from a Calico-enabled host, to all Calico-networked containers and VMs within the IP pool.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
- name: ippool-vxlan-1
-spec:
- cidr: 192.168.0.0/16
- vxlanMode: Always
- natOutgoing: true
-```
-
-## Additional resources
-
-For details on IP pool resource options, see [IP pool](../../reference/resources/ippool.mdx).
diff --git a/calico-cloud_versioned_docs/version-20-1/networking/configuring/workloads-outside-cluster.mdx b/calico-cloud_versioned_docs/version-20-1/networking/configuring/workloads-outside-cluster.mdx
deleted file mode 100644
index 1ad0236c99..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/networking/configuring/workloads-outside-cluster.mdx
+++ /dev/null
@@ -1,70 +0,0 @@
----
-description: Configure Calico Cloud networking to perform outbound NAT for connections from pods to outside of the cluster.
----
-
-# Configure outgoing NAT
-
-## Big picture
-
-Configure $[prodname] networking to perform outbound NAT for connections from pods to outside of the cluster. $[prodname] optionally source NATs the pod IP to the node IP.
-
-## Value
-
-The $[prodname] NAT outbound connection option is flexible; it can be enabled, disabled, and applied to $[prodname] IP pools with public IPs, private IPs, or a specific range of IP addresses. This article describes some use cases for enabling and disabling outgoing NAT.
-
-## Concepts
-
-### $[prodname] IP pools and NAT
-
-When a pod with an IP address in the pool initiates a network connection to an IP address outside of $[prodname]’s IP pools, the outgoing packets will have their source IP address changed from the pod IP address to the node IP address using SNAT (Source Network Address Translation). Any return packets on the connection automatically get this change reversed before being passed back to the pod.
-
-### Enable NAT: for pods with IP addresses that are not routable beyond the cluster
-
-A common use case for enabling NAT outgoing is to allow pods in an overlay network to connect to IP addresses outside of the overlay, or pods with private IP addresses to connect to public IP addresses outside the cluster or on the internet (subject to network policy allowing the connection, of course). When NAT is enabled, traffic is NATed from pods in that pool to any destination outside of all other $[prodname] IP pools.
-
-### Disable NAT: For on-premises deployments using physical infrastructure
-
-If you choose to implement $[prodname] networking with [BGP peered with your physical network infrastructure](bgp.mdx), you can use your own infrastructure to NAT traffic from pods to the internet. In this case, you should disable the $[prodname] `natOutgoing` option. For example, if you want your pods to have public internet IPs, you should:
-
-- Configure $[prodname] to peer with your physical network infrastructure
-- Create an IP pool with public IP addresses for those pods that are routed to your network with NAT disabled (`natOutgoing: false`)
-- Verify that other network equipment does not NAT the pod traffic
-
-## Before you begin
-
-**Required**
-
-- Calico CNI
-
-## How to
-
-- [Create an IP pool with NAT outgoing enabled](#create-an-ip-pool-with-nat-outgoing-enabled)
-- [Use additional IP pools to specify addresses that can be reached without NAT](#use-additional-ip-pools-to-specify-addresses-that-can-be-reached-without-nat)
-
-### Create an IP pool with NAT outgoing enabled
-
-In the following example, we create a $[prodname] IPPool with natOutgoing enabled. Outbound NAT is performed locally on the node where each workload in the pool is hosted.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
- name: default-ipv4-ippool
-spec:
- cidr: 192.168.0.0/16
- natOutgoing: true
-```
-
-### Use additional IP pools to specify addresses that can be reached without NAT
-
-Because $[prodname] performs outgoing NAT only when connecting to an IP address that is not in a $[prodname] IPPool, you can create additional IPPools that are not used for pod IP addresses, but prevent NAT to certain CIDR blocks. This is useful if you want nodes to NAT traffic to the internet, but not to IPs in certain internal ranges. For example, if you did not want to NAT traffic from pods to 10.0.0.0/8, you could create the following pool. You must ensure that the network between the cluster and 10.0.0.0/8 can route pod IPs.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
- name: no-nat-10.0.0.0-8
-spec:
- cidr: 10.0.0.0/8
- disabled: true
-```
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/networking/egress/egress-gateway-aws.mdx b/calico-cloud_versioned_docs/version-20-1/networking/egress/egress-gateway-aws.mdx
deleted file mode 100644
index 4803df211d..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/networking/egress/egress-gateway-aws.mdx
+++ /dev/null
@@ -1,1174 +0,0 @@
----
-description: Configure specific application traffic to exit the cluster through an egress gateway with a native AWS IP address.
----
-
-# Configure egress gateways, AWS
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-## Big picture
-
-Control the source IP address seen by external services/appliances by routing the traffic from certain pods
-through egress gateways. Use native VPC subnet IP addresses for the egress gateways so that the IPs are valid in the AWS fabric.
-
-## Value
-
-Controlling the source IP seen when traffic leaves the cluster allows groups of pods to be identified
-by external firewalls, appliances and services (even as the groups are scaled up/down or pods restarted).
-$[prodname] controls the source IP by directing traffic through one or more "egress gateway" pods, which
-change the source IP of the traffic to their own IP. The egress gateways used can be chosen at the pod or namespace
-scope allowing for flexibility in how the cluster is seen from outside.
-
-In AWS, egress gateway source IP addresses are chosen from an IP pool backed by a [VPC subnet](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html)
-using $[prodname] IPAM. $[prodname] IPAM allows the IP addresses to be precisely controlled, which allows
-for static configuration of external appliances. Using an IP pool backed by a VPC subnet allows $[prodname] to
-configure the AWS fabric to route traffic to and from the egress gateway using its own IP address.
-
-## Concepts
-
-### CIDR notation
-
-This article assumes that you are familiar with network masks and CIDR notation.
-
-- CIDR notation is defined in [RFC4632](https://datatracker.ietf.org/doc/html/rfc4632).
-- The [Wikipedia article on CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation)
- provides a good reference.
-
-### AWS-backed IP pools
-
-$[prodname] supports IP pools that are backed by the AWS fabric. Workloads that use an IP address from an
-AWS-backed pool can communicate on the AWS network using their own IP address and AWS will route their traffic
-to/from their host without changing the IP address.
-
-Pods that use an IP address from an AWS-backed pool may also be [assigned an AWS Elastic IP via a pod annotation](#add-aws-elastic-ips-to-the-egress-gateway-deployment). Elastic IPs used in this
-way have the normal AWS semantics: when accessing resources inside the AWS network, the workload's private IP
-(from the IP pool) is used. When accessing resources outside the AWS network, AWS translates the workload's IP to
-the Elastic IP. Elastic IPs also allow for incoming requests from outside the AWS fabric, direct to the workload.
-
-In overview, the AWS-backed IP Pools feature works as follows:
-
-- An IP pool is created with its `awsSubnetID` field set to the ID of a VPC subnet. This "AWS-backed" IP pool's
- CIDR must be contained within the VPC subnet's CIDR.
-
- :::caution
-
- You must ensure that the CIDR(s) used for AWS-backed IP pool(s) are reserved in the AWS fabric.
- For example, by creating a dedicated VPC subnet for $[prodname]. If the CIDR is not reserved; both
- $[prodname] and AWS may try to assign the same IP address, resulting in a conflict.
-
- :::
-
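-  For example, a minimal sketch of an AWS-backed pool (the pool name, CIDR, and subnet ID are
-  placeholders; additional fields may be needed, as described in the configuration steps below):
-
-  ```yaml
-  apiVersion: projectcalico.org/v3
-  kind: IPPool
-  metadata:
-    name: egress-a
-  spec:
-    cidr: 100.64.0.0/28
-    awsSubnetID: subnet-000000000000000aa
-  ```
-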
-- Since they are a limited resource, $[prodname] IPAM does not use AWS-backed pools by default. To request an
- AWS-backed IP address, a pod must have a resource request:
-
- ```yaml noValidation
- spec:
- containers:
- - ...
- resources:
- requests:
- projectcalico.org/aws-secondary-ipv4: 1
- limits:
- projectcalico.org/aws-secondary-ipv4: 1
- ```
-
- $[prodname] manages the `projectcalico.org/aws-secondary-ipv4` capacity on the Kubernetes Node resource,
- ensuring that Kubernetes will not try to schedule too many AWS-backed workloads to the same node. Only AWS-backed
- pods are limited in this way; there is no limit on the number of non-AWS-backed pods.
-
-- When the CNI plugin spots such a resource request, it will choose an IP address from an AWS-backed pool. Only
- pools with VPC subnets in the availability zone of the host are considered.
-
-- When Felix, $[prodname]'s per-host agent spots a local workload with an AWS-backed address it tries to ensure
- that the IP address of the workload is assigned to the host in the AWS fabric. If need be, it will create a
- new [secondary ENI](#secondary-elastic-network-interfaces-enis) device and attach it to the host to house the IP address.
- Felix supports two modes for assigning secondary ENIs: **ENI-per-workload** mode (added in v3.13) and
- **Secondary-IP-per-workload** mode. These modes are described [below](#secondary-elastic-network-interfaces-enis).
-
-- If the pod has one or more AWS Elastic IPs listed in the `cni.projectcalico.org/awsElasticIPs` pod annotation,
- Felix will try to ensure that _one_ of the Elastic IPs is assigned to the pod's private IP address in the AWS fabric.
-  (Specifying multiple Elastic IPs is useful for multi-pod deployments, ensuring that each pod in the deployment
- gets one of the IPs.)
-
-### Egress gateway
-
-An egress gateway acts as a transit pod for the outbound application traffic that is configured to
-use it. As traffic leaving the cluster passes through the egress gateway, its source IP is changed
-to that of the egress gateway pod, and the traffic is then forwarded on.
-
-### Source IP
-
-When an outbound application flow leaves the cluster, its IP packets will have a source IP.
-This begins as the pod IP of the pod that originated the flow, then:
-
-- _If no egress gateway is configured_ and the pod IP came from an [IP pool](../../reference/resources/ippool.mdx)
- with `natOutgoing: true`, the node hosting the pod will change the source IP to its own as the
- traffic leaves the host. This allows the pod to communicate with external service even though the
- external network is unaware of the pod's IP.
-
-- _If the pod is configured with an egress gateway_, the traffic is first forwarded to the egress gateway, which
- changes the source IP to its own and then sends the traffic on. To function correctly, egress gateways
- should have IPs from an IP pool with `natOutgoing: false`, meaning their host forwards the packet onto
- the network without changing the source IP again. Since the egress gateway's IP is visible to
- the underlying network fabric, the fabric must be configured to know about the egress gateway's
- IP and to send response traffic back to the same host.
-
-### AWS VPCs and subnets
-
-An AWS VPC is a virtual network that is, by default, logically isolated from other VPCs. Each VPC has one or more
-(often large) CIDR blocks associated with it (for example `10.0.0.0/16`). In general, VPC CIDRs may overlap, but only
-if the VPCs remain isolated. AWS allows VPCs to be peered with each other through VPC Peerings. VPCs can only be
-peered if _none_ of their associated CIDRs overlap.
-
-Each VPC has one or more VPC subnets associated with it, each subnet owns a non-overlapping part of one of the
-VPC's CIDR blocks. Each subnet is associated with a particular availability zone. Instances in one availability
-zone can only use IP addresses from subnets in that zone. Unfortunately, this adds some complexity to managing
-egress gateway IP addresses: much of the configuration must be repeated per-AZ.
-
-### AWS VPC and DirectConnect peerings
-
-AWS [VPC Peerings](https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-basics.html) allow multiple VPCs to be
-connected together. Similarly, [DirectConnect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html)
-allows external datacenters to be connected to an AWS VPC. Peered VPCs and datacenters communicate using private IPs
-as if they were all on one large private network.
-
-By using AWS-backed IP pools, egress gateways can be assigned private IPs, allowing them to communicate without NAT
-within the same VPC, with peered VPCs, and with peered datacenters.
-
-### Secondary Elastic Network Interfaces (ENIs)
-
-Elastic network interfaces are network interfaces that can be added and removed from an instance dynamically. Each
-ENI has a primary IP address from the VPC subnet that it belongs to, and it may also have one or more secondary IP
-addresses, chosen for the same subnet. While the primary IP address is fixed and cannot be changed, the secondary
-IP addresses can be added and removed at runtime.
-
-To arrange for AWS to route traffic to and from egress gateways, $[prodname] adds _secondary_ Elastic
-Network Interfaces (ENIs) to the host. $[prodname] supports two modes for provisioning the
-secondary ENIs. The table below describes the trade-offs between **ENI-per-workload** and **Secondary-IP-per-workload**
-modes:
-
-| **ENI-per-workload** (since v3.13) | **Secondary-IP-per-workload** |
-| ------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------- |
-| One secondary ENI is attached for each AWS-backed workload. | Secondary ENIs are shared, multiple workloads per ENI. |
-| Supports one AWS-backed workload per secondary ENI. | Supports 2-49 AWS-backed workloads per secondary ENI (depending on instance type). |
-| ENI Primary IP is set to Workload's IP. | ENI Primary IP chosen from dedicated "host secondary" IP pools. |
-| Makes best use of AWS IP space, no need to reserve IPs for hosts. | Requires "host secondary" IPs to be reserved. These cannot be used for workloads. |
-| ENI deleted when workload deleted. | ENI retained (ready for next workload to be scheduled). |
-| Slower to handle churn/workload mobility. (Creating ENI is slower than assigning IP.) | Faster at handling churn/workload mobility. |
-
-The number of ENIs that an instance can support and the number of secondary IPs that each ENI can support depends on
-the instance type according to [this table](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI).
-Note: the table lists the total number of network interfaces and IP addresses but the first interface on the host (the
-primary interface) and, in Secondary-IP-per-workload mode, the first IP of each interface (its primary IP) cannot be
-used for egress gateways.
-
-The primary interface cannot be used for egress gateways because it belongs to the VPC subnet that is
-in use for Kubernetes hosts; this means that a planned egress gateway IP could get used by AWS as the primary IP of
-an instance (for example when scaling up the cluster).
-
-## Before you begin
-
-**Required**
-
-- Calico CNI
-- Open port UDP 4790 on the host
-
-**Not Supported**
-
-- Amazon VPC CNI
-
-  $[prodname] CNI and IPAM are required. The ability to control the egress gateway’s IP is a feature of $[prodname] CNI and IPAM. AWS VPC CNI does not support that feature, so it is incompatible with egress gateways.
-
-## How to
-
-- [Configure IP autodetection](#configure-ip-autodetection)
-- [Ensure Kubernetes VPC has free CIDR range](#ensure-kubernetes-vpc-has-free-cidr-range)
-- [Create dedicated VPC subnets](#create-dedicated-vpc-subnets)
-- [Configure AWS IAM roles](#configure-aws-iam-roles)
-- [Configure IP reservations for each VPC subnet](#configure-ip-reservations-for-each-vpc-subnet)
-- [Enable egress gateway support](#enable-egress-gateway-support)
-- [Enable AWS-backed IP pools](#enable-aws-backed-ip-pools)
-- [Configure IP pools backed by VPC subnets](#configure-ip-pools-backed-by-vpc-subnets)
-- [Deploy a group of egress gateways](#deploy-a-group-of-egress-gateways)
-- [Configure iptables backend for egress gateways](#configure-iptables-backend-for-egress-gateways)
-- [Configure namespaces and pods to use egress gateways](#configure-namespaces-and-pods-to-use-egress-gateways)
-- [Optionally enable ECMP load balancing](#optionally-enable-ecmp-load-balancing)
-- [Verify the feature operation](#verify-the-feature-operation)
-- [Control the use of egress gateways](#control-the-use-of-egress-gateways)
-- [Policy enforcement for flows via an egress gateway](#policy-enforcement-for-flows-via-an-egress-gateway)
-- [Upgrade egress gateways](#upgrade-egress-gateways)
-
-### Configure IP autodetection
-
-Since this feature adds additional network interfaces to nodes, it is important to configure $[prodname] to
-autodetect the correct primary interface to use for normal pod-to-pod traffic. Otherwise, $[prodname] may
-autodetect a newly-added secondary ENI as the main interface, causing an outage.
-
-For EKS clusters, the default IP autodetection method is `can-reach=8.8.8.8`, which will choose the interface
-with a route to `8.8.8.8`; this is typically the interface with a default route, which will be the correct (primary) ENI.
-($[prodname] ensures that the secondary ENIs do not have default routes in the main routing table.)
-
-For other AWS clusters, $[prodname] may default to `firstFound`, which is **not** suitable.
-
-To examine the autodetection method, check the operator's installation resource:
-
-```bash
-kubectl get installations.operator.tigera.io -o yaml default
-```
-```yaml noValidation
-apiVersion: operator.tigera.io/v1
-kind: Installation
-metadata:
- ...
- name: default
- ...
-spec:
- calicoNetwork:
- ...
- nodeAddressAutodetectionV4:
- firstFound: true
-...
-```
-
-If `nodeAddressAutodetectionV4` is set to `firstFound: true`, or is not specified, you must change it to another method by editing the
-resource. The `NodeAddressAutodetection` options `canReach` and `cidrs` are both suitable; see the [Installation reference](../../reference/installation/api.mdx). If you use the `cidrs` option, set the list to include only the
-CIDRs from which your primary ENI IPs are chosen (do not include the dedicated VPC subnets chosen below).
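-
-For example, a minimal patch that switches autodetection to the `cidrs` method (the `10.100.0.0/24` CIDR is a placeholder for the subnet(s) your nodes' primary ENIs use; setting `firstFound` to `null` removes that field from the resource):
-
-```bash
-kubectl patch installations.operator.tigera.io default --type='merge' -p '
-{
-  "spec": {
-    "calicoNetwork": {
-      "nodeAddressAutodetectionV4": {
-        "firstFound": null,
-        "cidrs": ["10.100.0.0/24"]
-      }
-    }
-  }
-}'
-```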
-
-### Ensure Kubernetes VPC has free CIDR range
-
-For egress gateways to be useful in AWS, we want to assign them IP addresses from a VPC subnet that is in the same AZ
-as their host.
-
-To avoid clashes between AWS IP allocations and $[prodname] IP allocations, it is important that the range of
-IP addresses assigned to $[prodname] IP pools is not used by AWS for automatic allocations. In this guide we
-assume that you have created a dedicated VPC subnet per Availability Zone (AZ) that is reserved for $[prodname]
-and configured not to be used as the default subnet for the AZ.
-
-If you are creating your cluster and VPC from scratch, plan to subdivide the VPC CIDR into (at least) two VPC subnets
-per AZ. One VPC subnet for the Kubernetes (and any other) hosts and one VPC subnet for egress gateways. (The next
-section explains the sizing requirements for the egress gateway subnets.)
-
-If you are adding this feature to an existing cluster, you may find that the existing VPC subnets already cover the
-entire VPC CIDR, making it impossible to create a new subnet. If that is the case, you can make more room by
-adding a second CIDR to the VPC that is large enough for the new subnets. For information on adding a secondary
-CIDR range to a VPC, see [this guide](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html#vpc-resize).
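-
-For example, you could associate an additional CIDR block with the VPC using the AWS CLI (the VPC ID and CIDR below are placeholders):
-
-```bash
-aws ec2 associate-vpc-cidr-block \
-  --vpc-id vpc-0123456789abcdef0 \
-  --cidr-block 100.64.0.0/16
-```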
-
-### Create dedicated VPC subnets
-
-$[prodname] requires a dedicated VPC subnet in each AWS availability zone in which you wish to deploy egress
-gateways. The subnet must be dedicated to $[prodname] so that AWS will not
-use IP addresses from the subnet for other purposes (as this could clash with an egress gateway's IP). When creating the
-subnet, configure it not to be used for instances.
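-
-For example, you could create such a subnet with the AWS CLI (the VPC ID, availability zone, CIDR, and name tag below are placeholders; subnet sizing is discussed below):
-
-```bash
-aws ec2 create-subnet \
-  --vpc-id vpc-0123456789abcdef0 \
-  --availability-zone us-west-1a \
-  --cidr-block 100.64.0.0/22 \
-  --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=calico-egress-west-1}]'
-```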
-
-Some IP addresses from the dedicated subnet are reserved for AWS and $[prodname] internal use:
-
-- The first four IP addresses in the subnet cannot be used. These are [reserved by AWS for internal use](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html#vpc-sizing-ipv4).
-- Similarly, the last IP in the subnet (the broadcast address) cannot be used.
-- _In **Secondary-IP-per-workload** mode_, $[prodname] requires one IP address from the subnet per secondary ENI
- that it provisions (for use as the primary IP address of the ENI). In **ENI-per-workload** mode, this is not required.
-
-
-
-
-Example for **ENI-per-workload** mode:
-
-- You anticipate having up to 30 instances running in each availability zone (AZ).
-- You intend to use `t3.large` instances, which [are limited to](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI) 3 ENIs per host.
-- So, each host can accept 2 secondary ENIs, each of which can handle one egress gateway.
-- With 2 ENIs per node and 30 nodes, the part of the cluster in this AZ could handle up to `30 * 2 = 60` egress
- gateways.
-- AWS reserves 5 IPs from the AWS subnet for internal use; no "host secondary IPs" need to be reserved in this mode.
-- Since VPC subnets are allocated by CIDR, a `/25` subnet containing 128 IP addresses would comfortably fit the 5
- reserved IPs as well as the 60 possible gateways (with headroom for more nodes to be added later).
-
-
-
-
-Example for **Secondary-IP-per-workload** mode:
-
-- You anticipate having up to 30 instances running in each availability zone (AZ).
-- You intend to use `t3.large` instances, which [are limited to](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI)
-  3 ENIs per host (one of which is the primary), and each ENI can handle 12 IP addresses (one of which is the primary).
-- So, each host can accept 2 secondary ENIs and each secondary ENI could handle 11 egress gateway pods.
-- Each in-use secondary ENI requires one IP from the VPC subnet (up to 60 in this case) and AWS requires 5 IPs to be
- reserved so that's up to 65 IPs reserved in total.
-- With 2 ENIs and 11 IPs per ENI, the part of the cluster in this AZ could handle up to `30 * 2 * 11 = 660` egress
- gateways.
-- Since VPC subnets are allocated by CIDR, a `/22` subnet containing 1024 IP addresses would comfortably fit the 65
- reserved IPs as well as the 660 possible gateways.
-
-$[prodname] allocates ENIs on-demand so each instance will only claim one of those reserved IP addresses when the
-first egress gateway is assigned to that node. It will only claim its second IP when that ENI becomes full and then an
-extra egress gateway is provisioned.
-
-
-
-
-### Configure AWS IAM roles
-
-To provision the required AWS resources, each $[noderunning] pod in your cluster requires the
-following IAM permissions to be granted. The permissions can be granted to the node IAM Role itself, or by using
-the AWS [IAM roles for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) feature to grant the permissions to the
-`calico-node` service account.
-
-- DescribeInstances
-- DescribeInstanceTypes
-- DescribeNetworkInterfaces
-- DescribeSubnets
-- DescribeTags
-- CreateTags
-- AssignPrivateIpAddresses
-- UnassignPrivateIpAddresses
-- AttachNetworkInterface
-- CreateNetworkInterface
-- DeleteNetworkInterface
-- DetachNetworkInterface
-- ModifyNetworkInterfaceAttribute
-
-The above permissions are similar to those used by the AWS VPC CNI (since both CNIs need to provision the same kinds
-of resources). In addition, to support elastic IPs, each $[noderunning] pod also requires the following permissions:
-
-- DescribeAddresses
-- AssociateAddress
-- DisassociateAddress
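-
-For example, one way to grant these permissions is to attach an inline policy to the node IAM role using the AWS CLI (the role name and policy name below are placeholders; you may also wish to scope the `Resource` field more tightly to suit your security requirements):
-
-```bash
-cat > calico-egress-gateway-policy.json <<'EOF'
-{
-  "Version": "2012-10-17",
-  "Statement": [
-    {
-      "Effect": "Allow",
-      "Action": [
-        "ec2:DescribeInstances",
-        "ec2:DescribeInstanceTypes",
-        "ec2:DescribeNetworkInterfaces",
-        "ec2:DescribeSubnets",
-        "ec2:DescribeTags",
-        "ec2:CreateTags",
-        "ec2:AssignPrivateIpAddresses",
-        "ec2:UnassignPrivateIpAddresses",
-        "ec2:AttachNetworkInterface",
-        "ec2:CreateNetworkInterface",
-        "ec2:DeleteNetworkInterface",
-        "ec2:DetachNetworkInterface",
-        "ec2:ModifyNetworkInterfaceAttribute",
-        "ec2:DescribeAddresses",
-        "ec2:AssociateAddress",
-        "ec2:DisassociateAddress"
-      ],
-      "Resource": "*"
-    }
-  ]
-}
-EOF
-
-aws iam put-role-policy \
-  --role-name my-node-instance-role \
-  --policy-name calico-egress-gateway \
-  --policy-document file://calico-egress-gateway-policy.json
-```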
-
-### Configure AWS Security Group rules
-
-To allow client traffic to reach the host of an egress gateway pod, the security group's ingress rules must be
-updated: add an inbound rule that allows all traffic originating from within the security group itself.
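-
-For example, if your nodes share a single security group, a rule like the following allows all traffic between members of that group (the group ID is a placeholder):
-
-```bash
-aws ec2 authorize-security-group-ingress \
-  --group-id sg-0123456789abcdef0 \
-  --protocol all \
-  --source-group sg-0123456789abcdef0
-```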
-
-### Configure IP reservations for each VPC subnet
-
-Since the first four IP addresses and the last IP address in a VPC subnet cannot be used, it is important to
-prevent $[prodname] from _trying_ to use them. For each VPC subnet that you plan to use,
-ensure that you have an entry in an [IP reservation](../../reference/resources/ipreservation.mdx) for its first
-four IP addresses and its final IP address.
-
-For example, if your chosen VPC subnets are `100.64.0.0/22` and `100.64.4.0/22`, you could create the following
-`IPReservation` resource, which covers both VPC subnets (if you're not familiar with CIDR notation, replacing the
-`/22` of the original subnet with `/30` is a shorthand for "the first four IP addresses"):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: IPReservation
-metadata:
- name: aws-ip-reservations
-spec:
- reservedCIDRs:
- - 100.64.0.0/30
- - 100.64.3.255
- - 100.64.4.0/30
- - 100.64.7.255
-```
-
-### Enable egress gateway support
-
-In the default **FelixConfiguration**, set the `egressIPSupport` field to `EnabledPerNamespace` or
-`EnabledPerNamespaceOrPerPod`, according to the level of support that you need in your cluster. For
-support on a per-namespace basis only:
-
-```bash
-kubectl patch felixconfiguration default --type='merge' -p \
- '{"spec":{"egressIPSupport":"EnabledPerNamespace"}}'
-```
-
-Or for support both per-namespace and per-pod:
-
-```bash
-kubectl patch felixconfiguration default --type='merge' -p \
- '{"spec":{"egressIPSupport":"EnabledPerNamespaceOrPerPod"}}'
-```
-
-:::note
-
-- `egressIPSupport` must be the same on all cluster nodes, so you should set it only in the
- `default` FelixConfiguration resource.
-- The operator automatically enables the required policy sync API in the FelixConfiguration.
-
-:::
-
-### Enable AWS-backed IP pools
-
-
-
-
-To enable **ENI-per-workload** mode, in the default **FelixConfiguration**, set the `awsSecondaryIPSupport` field to
-`EnabledENIPerWorkload`:
-
-```bash
-kubectl patch felixconfiguration default --type='merge' -p \
- '{"spec":{"awsSecondaryIPSupport":"EnabledENIPerWorkload"}}'
-```
-
-
-
-
-To enable **Secondary-IP-per-workload** mode, set the field to `Enabled` (the name `Enabled` predates
-the addition of the **ENI-per-workload** mode):
-
-```bash
-kubectl patch felixconfiguration default --type='merge' -p \
- '{"spec":{"awsSecondaryIPSupport":"Enabled"}}'
-```
-
-
-
-
-You can verify that the setting took effect by examining the Kubernetes Node resources:
-
-```bash
-kubectl describe node
-```
-
-The output should show the new `projectcalico.org/aws-secondary-ipv4` capacity (in the Allocated Resources section).
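-
-For example, one way to list that capacity across all nodes (the column names are arbitrary):
-
-```bash
-kubectl get nodes -o custom-columns='NODE:.metadata.name,AWS-IPV4:.status.capacity.projectcalico\.org/aws-secondary-ipv4'
-```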
-
-#### Changing modes
-
-You can change between the two modes by:
-
-- Ensuring that the number of egress gateways on every node is within the limits of the target mode. For example,
- when switching to **ENI-per-workload** mode, the number of egress gateways must be less than or equal to the number
- of secondary ENIs that your instances can handle.
-- Editing the setting (using the patch commands above, for example).
-
-Changing the mode will cause disruption as ENIs must be removed and re-added.
-
-### Configure IP pools backed by VPC subnets
-
-
-
-
-In **ENI-per-workload** mode, IP pools are (only) used to subdivide the VPC subnets into small pools used for
-particular groups of egress gateways. These IP Pools must have:
-
-- `awsSubnetID` set to the ID of the relevant VPC subnet. This activates the AWS-backed IP feature for these pools.
-- `allowedUse` set to `["Workload"]` to tell $[prodname] IPAM to use those pools for the egress gateway workloads.
-- `vxlanMode` and `ipipMode` set to `Never` to disable encapsulation for the egress gateway pods. (`Never` is the default if these fields are not specified.)
-- `blockSize` set to 32. This aligns $[prodname] IPAM with the behaviour of the AWS fabric.
-- `disableBGPExport` set to `true`. This prevents routing conflicts if your cluster is using IPIP or BGP networking.
-
-It's also recommended to:
-
-- Set `nodeSelector` to `"!all()"`. This prevents $[prodname] IPAM from using the pool automatically. It will
- only be used for workloads that explicitly name it in the `cni.projectcalico.org/ipv4pools` annotation.
-
-Continuing the example above, with VPC subnets
-
-- `100.64.0.0/22` in, say, availability zone west-1 and id `subnet-000000000000000001`
-- `100.64.4.0/22` in, say, availability zone west-2 and id `subnet-000000000000000002`
-
-And, assuming that there are two clusters of egress gateways "red" and "blue" (which in turn serve namespaces "red"
-and "blue"), one way to structure the IP pools is to have one IP pool for each group of egress gateways in each
-subnet. Then, if a particular egress gateway from the egress gateway cluster is scheduled to one AZ or the other,
-it will take an IP from the appropriate pool.
-
-For the "west-1" availability zone:
-
-- IP pool "egress-red-west-1", CIDR `100.64.0.4/30` (the first non-reserved /30 CIDR in the VPC subnet). These
- addresses will be used for "red" egress gateways in the "west-1" AZ.
-
-- IP pool "egress-blue-west-1", CIDR `100.64.0.8/30` (the next 4 IPs from the "west-1" subnet). These addresses
- will be used for "blue" egress gateways in the "west-1" AZ.
-
-For the "west-2" availability zone:
-
-- IP pool "egress-red-west-2", CIDR `100.64.4.4/30` (the first non-reserved /30 CIDR in the VPC subnet). These
- addresses will be used for "red" egress gateways in the "west-2" AZ.
-
-- IP pool "egress-blue-west-2", CIDR `100.64.4.8/30` (the next 4 IPs from the "west-2" subnet). These addresses
- will be used for "blue" egress gateways in the "west-2" AZ.
-
-Converting this to `IPPool` resources:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
- name: egress-red-west-1
-spec:
- cidr: 100.64.0.4/30
- allowedUses: ['Workload']
- awsSubnetID: subnet-000000000000000001
- blockSize: 32
- nodeSelector: '!all()'
- disableBGPExport: true
----
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
- name: egress-blue-west-1
-spec:
- cidr: 100.64.0.8/30
- allowedUses: ['Workload']
- awsSubnetID: subnet-000000000000000001
- blockSize: 32
- nodeSelector: '!all()'
- disableBGPExport: true
----
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
- name: egress-red-west-2
-spec:
- cidr: 100.64.4.4/30
- allowedUses: ['Workload']
- awsSubnetID: subnet-000000000000000002
- blockSize: 32
- nodeSelector: '!all()'
- disableBGPExport: true
----
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
- name: egress-blue-west-2
-spec:
- cidr: 100.64.4.8/30
- allowedUses: ['Workload']
- awsSubnetID: subnet-000000000000000002
- blockSize: 32
- nodeSelector: '!all()'
- disableBGPExport: true
-```
-
-
-
-
-In **Secondary-IP-per-workload** mode, IP pools are used to subdivide the VPC subnets as follows:
-
-- One medium-sized IP pool per-Subnet reserved for $[prodname] to use for the _primary_ IP addresses of its _secondary_ ENIs.
- These pools must have:
-
- - `awsSubnetID` set to the ID of the relevant VPC subnet. This activates the AWS-backed IP feature for these pools.
- - `allowedUse` set to `["HostSecondaryInterface"]` to reserve them for this purpose.
- - `blockSize` set to 32. This aligns $[prodname] IPAM with the behaviour of the AWS fabric.
- - `vxlanMode` and `ipipMode` set to `Never`. (`Never` is the default if these fields are not specified.)
- - `disableBGPExport` set to `true`. This prevents routing conflicts if your cluster is using IPIP or BGP networking.
-
-- Small pools used for particular groups of egress gateways. These must have:
-
- - `awsSubnetID` set to the ID of the relevant VPC subnet. This activates the AWS-backed IP feature for these pools.
- - `allowedUse` set to `["Workload"]` to tell $[prodname] IPAM to use those pools for the egress gateway workloads.
- - `vxlanMode` and `ipipMode` set to `Never` to disable encapsulation for the egress gateway pods. (`Never` is the default if these fields are not specified.)
- - `blockSize` set to 32. This aligns $[prodname] IPAM with the behaviour of the AWS fabric.
- - `disableBGPExport` set to `true`. This prevents routing conflicts if your cluster is using IPIP or BGP networking.
-
- It's also recommended to:
-
- - Set `nodeSelector` to `"!all()"`. This prevents $[prodname] IPAM from using the pool automatically. It will
- only be used for workloads that explicitly name it in the `cni.projectcalico.org/ipv4pools` annotation.
-
-Continuing the example above, with VPC subnets
-
-- `100.64.0.0/22` in, say, availability zone west-1 and id `subnet-000000000000000001`
-- `100.64.4.0/22` in, say, availability zone west-2 and id `subnet-000000000000000002`
-
-And, assuming that there are two clusters of egress gateways "red" and "blue" (which in turn serve namespaces "red"
-and "blue"), one way to structure the IP pools is to have a "hosts" IP pool in each VPC subnet and one IP pool for each
-group of egress gateways in each subnet. Then, if a particular egress gateway from the egress gateway cluster is
-scheduled to one AZ or the other, it will take an IP from the appropriate pool.
-
-For the "west-1" availability zone:
-
-- IP pool "hosts-west-1", CIDR `100.64.0.0/25` (the first 128 addresses in the "west-1" VPC subnet).
-
- - We'll reserve these addresses for hosts to use.
- - `100.64.0.0/25` covers the addresses from `100.64.0.0` to `100.64.0.127` (but addresses `100.64.0.0` to `100.64.0.3`
- were reserved above).
-
-- IP pool "egress-red-west-1", CIDR `100.64.0.128/30` (the next 4 IPs from the "west-1" subnet).
-
- - These addresses will be used for "red" egress gateways in the "west-1" AZ.
-
-- IP pool "egress-blue-west-1", CIDR `100.64.0.132/30` (the next 4 IPs from the "west-1" subnet).
-
- - These addresses will be used for "blue" egress gateways in the "west-1" AZ.
-
-For the "west-2" availability zone:
-
-- IP pool "hosts-west-2", CIDR `100.64.4.0/25` (the first 128 addresses in the "west-2" VPC subnet).
-
- - `100.64.4.0/25` covers the addresses from `100.64.4.0` to `100.64.4.127` (but addresses `100.64.4.0` to `100.64.4.3`
- were reserved above).
-
-- IP pool "egress-red-west-2", CIDR `100.64.4.128/30` (the next 4 IPs from the "west-2" subnet).
-
- - These addresses will be used for "red" egress gateways in the "west-2" AZ.
-
-- IP pool "egress-blue-west-2", CIDR `100.64.4.132/30` (the next 4 IPs from the "west-2" subnet).
-
- - These addresses will be used for "blue" egress gateways in the "west-2" AZ.
-
-Converting this to `IPPool` resources:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
- name: hosts-west-1
-spec:
- cidr: 100.64.0.0/25
- allowedUses: ['HostSecondaryInterface']
- awsSubnetID: subnet-000000000000000001
- blockSize: 32
- disableBGPExport: true
----
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
- name: egress-red-west-1
-spec:
- cidr: 100.64.0.128/30
- allowedUses: ['Workload']
- awsSubnetID: subnet-000000000000000001
- blockSize: 32
- nodeSelector: '!all()'
- disableBGPExport: true
----
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
- name: egress-blue-west-1
-spec:
- cidr: 100.64.0.132/30
- allowedUses: ['Workload']
- awsSubnetID: subnet-000000000000000001
- blockSize: 32
- nodeSelector: '!all()'
- disableBGPExport: true
----
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
- name: hosts-west-2
-spec:
- cidr: 100.64.4.0/25
- allowedUses: ['HostSecondaryInterface']
- awsSubnetID: subnet-000000000000000002
- blockSize: 32
- disableBGPExport: true
----
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
- name: egress-red-west-2
-spec:
- cidr: 100.64.4.128/30
- allowedUses: ['Workload']
- awsSubnetID: subnet-000000000000000002
- blockSize: 32
- nodeSelector: '!all()'
- disableBGPExport: true
----
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
- name: egress-blue-west-2
-spec:
- cidr: 100.64.4.132/30
- allowedUses: ['Workload']
- awsSubnetID: subnet-000000000000000002
- blockSize: 32
- nodeSelector: '!all()'
- disableBGPExport: true
-```
-
-
-
-
-### Deploy a group of egress gateways
-
-Use an egress gateway custom resource to deploy a group of egress gateways.
-
-Using the example of the "red" egress gateway cluster, we use several features of Kubernetes and $[prodname]
-in tandem to get a cluster of egress gateways that spans both availability zones and uses AWS-backed IP addresses:
-
-```bash
-kubectl apply -f - <<EOF
-apiVersion: operator.tigera.io/v1
-kind: EgressGateway
-metadata:
-  name: egress-gateway-red    # example name for this group of gateways
-  namespace: default          # example namespace for the gateways
-spec:
-  replicas: 2
-  ipPools:
-  - name: "egress-red-west-1"
-  - name: "egress-red-west-2"
-# Uncomment and fill in the probe settings below to enable ICMP/HTTP failure detection:
-#  icmpProbe:
-#    ips:
-#    - <IP to probe>
-#    - <IP to probe>
-#    timeoutSeconds: 15
-#    intervalSeconds: 5
-#  httpProbe:
-#    urls:
-#    - <URL to probe>
-#    - <URL to probe>
-#    timeoutSeconds: 30
-#    intervalSeconds: 10
-  aws:
-    nativeIP: Enabled
-  template:
-    metadata:
-      labels:
-        egress-code: red
-    spec:
-      nodeSelector:
-        kubernetes.io/os: linux
-      terminationGracePeriodSeconds: 0
-      topologySpreadConstraints:
-      - maxSkew: 1
-        topologyKey: "topology.kubernetes.io/zone"
-        whenUnsatisfiable: "DoNotSchedule"
-        labelSelector:
-          matchLabels:
-            egress-code: red
-EOF
-```
-
-- `replicas: 2` tells Kubernetes to schedule two egress gateways in the "red" cluster.
-- `ipPools` tells $[prodname] IPAM to use one of the "red" IP pools:
-
- ```yaml
- ipPools:
- - name: "egress-red-west-1"
- - name: "egress-red-west-2"
- ```
- Depending on which AZ the pod is scheduled in, $[prodname] IPAM will automatically ignore IP pools that
- are backed by AWS subnets that are not in the local AZ.
-
- External services and appliances can recognise "red" traffic because it will all come from the CIDRs of the "red"
- IP pools.
-
-- When `nativeIP` is enabled, the IP pools used must be AWS-backed. This setting also tells Kubernetes to only schedule the gateway to a node
-  with available AWS IP capacity:
-
- ```yaml
- aws:
- nativeIP: Enabled
- ```
-
-- The following [topology spread constraint](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/)
- ensures that Kubernetes spreads the Egress gateways evenly between AZs (assuming that your nodes are labeled with
- the expected [well-known label](https://kubernetes.io/docs/reference/labels-annotations-taints/#topologykubernetesiozone)
- `topology.kubernetes.io/zone`):
-
- ```yaml
- topologySpreadConstraints:
- - maxSkew: 1
- topologyKey: topology.kubernetes.io/zone
- whenUnsatisfiable: DoNotSchedule
- labelSelector:
- matchLabels:
- egress-code: red
- ```
-
-- The labels are arbitrary. You can choose whatever names and values are convenient for your cluster's Namespaces and Pods to refer to in their egress selectors.
- If labels are not specified, a default label `projectcalico.org/egw`:`name` will be added by the Tigera Operator.
-
-- `icmpProbe` may be used to specify the probe IPs and the ICMP probe interval and timeout in seconds. If `ips` is set, the
-  egress gateway pod will probe each IP periodically using an ICMP ping. If all pings fail then the egress
-  gateway will report non-ready via its health port. `intervalSeconds` controls the interval between probes.
-  `timeoutSeconds` controls the timeout before reporting non-ready if no probes succeed.
-
-  ```yaml
-  icmpProbe:
-    ips:
-    - <IP to probe>
-    - <IP to probe>
-    timeoutSeconds: 20
-    intervalSeconds: 10
-  ```
-
-- `httpProbe` may be used to specify the probe URLs and the HTTP probe interval and timeout in seconds. If `urls` is set, the
-  egress gateway pod will probe each external service periodically. If all probes fail then the egress
-  gateway will report non-ready via its health port. `intervalSeconds` controls the interval between probes.
-  `timeoutSeconds` controls the timeout before reporting non-ready if all probes are failing.
-
-  ```yaml
-  httpProbe:
-    urls:
-    - <URL to probe>
-    - <URL to probe>
-    timeoutSeconds: 30
-    intervalSeconds: 10
-  ```
-- Please refer to the [operator reference docs](../../reference/installation/api.mdx) for details about the egress gateway resource type.
-
-:::note
-
-- It is advisable to have more than one egress gateway per group, so that the egress IP function
- continues if one of the gateways crashes or needs to be restarted. When there are multiple
- gateways in a group, outbound traffic from the applications using that group is load-balanced
- across the available gateways. The number of `replicas` specified must be less than or equal
- to the number of free IP addresses in the IP Pool.
-- IPPool can be specified either by its name (e.g. `-name: egress-ippool-1`) or by its CIDR (e.g. `-cidr: 10.10.10.0/31`).
-- The labels are arbitrary. You can choose whatever names and values are convenient for
-  your cluster's Namespaces and Pods to refer to in their egress selectors.
-- The health port `8080` is used by:
-- The Kubernetes `readinessProbe` to expose the status of the egress gateway pod (and any ICMP/HTTP
- probes).
-- Remote pods to check if the egress gateway is "ready". Only "ready" egress
- gateways will be used for remote client traffic. This traffic is automatically allowed by $[prodname] and
- no policy is required to allow it. $[prodname] only sends probes to egress gateway pods that have a named
- "health" port. This ensures that during an upgrade, health probes are only sent to upgraded egress gateways.
-
-:::
-
-### Configure iptables backend for egress gateways
-
-The Tigera Operator configures egress gateways to use the same iptables backend as `calico-node`.
-To modify the iptables backend for egress gateways, you must change the `iptablesBackend` field in the [Felix configuration](../../reference/resources/felixconfig.mdx).
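-
-For example, you could switch the backend with a patch like the ones above (`NFT` here is just an example value):
-
-```bash
-kubectl patch felixconfiguration default --type='merge' -p \
-  '{"spec":{"iptablesBackend":"NFT"}}'
-```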
-
-### Configure namespaces and pods to use egress gateways
-
-You can configure namespaces and pods to use an egress gateway by:
-* annotating the namespace or pod
-* applying an egress gateway policy to the namespace or pod.
-
-Using an egress gateway policy is more complicated, but it allows advanced use cases.
-
-#### Configure a namespace or pod to use an egress gateway (annotation method)
-
-In a $[prodname] deployment, the Kubernetes namespace and pod resources honor annotations that
-tell that namespace or pod to use particular egress gateways. These annotations are selectors, and
-their meaning is "the set of pods, anywhere in the cluster, that match those selectors".
-
-So, to configure all the pods in a namespace to use the egress gateways that are
-labelled with `egress-code: red`, you would annotate that namespace like this:
-
-```bash
-kubectl annotate ns <namespace> egress.projectcalico.org/selector="egress-code == 'red'"
-```
-
-By default, that selector can only match egress gateways in the same namespace. To select gateways
-in a different namespace, specify a `namespaceSelector` annotation as well, like this:
-
-```bash
-kubectl annotate ns <namespace> egress.projectcalico.org/namespaceSelector="projectcalico.org/name == 'default'"
-```
-
-Egress gateway annotations have the same [syntax and range of expressions](../../reference/resources/networkpolicy.mdx#selector) as the selector fields in
-$[prodname] [network policy](../../reference/resources/networkpolicy.mdx#entityrule).
-
-To configure a specific Kubernetes Pod to use egress gateways, specify the same annotations when
-creating the pod. For example:
-
-```bash
-kubectl apply -f - <<EOF
-apiVersion: v1
-kind: Pod
-metadata:
-  name: my-client
-  namespace: my-namespace
-  annotations:
-    egress.projectcalico.org/selector: egress-code == 'red'
-    egress.projectcalico.org/namespaceSelector: projectcalico.org/name == 'default'
-spec:
-  containers:
-  - name: my-client
-    image: busybox
-EOF
-```
-
-To use an egress gateway policy instead of the selector annotations, annotate the namespace with the
-name of the policy, for example `egress.projectcalico.org/egressGatewayPolicy="egw-policy1"`.
-To configure a specific Kubernetes pod to use the same policy, specify the same annotation when
-creating the pod. For example:
-
-```bash
-kubectl apply -f - <<EOF
-apiVersion: v1
-kind: Pod
-metadata:
-  name: my-other-client
-  namespace: my-namespace
-  annotations:
-    egress.projectcalico.org/egressGatewayPolicy: egw-policy1
-spec:
-  containers:
-  - name: my-other-client
-    image: busybox
-EOF
-```
-
-### Verify the feature operation
-
-To verify the feature, run a netcat server that is reachable outside the cluster and listening on
-port 8089, then connect to it from one of the client pods with
-`kubectl exec <pod name> -n <namespace> -- nc <server IP> 8089`; `<server IP>` should be the IP address of the netcat server.
-
-Then, if you check the logs or output of the netcat server, you should see:
-
-```
-Connection from <egress gateway IP>
-```
-
-To update existing certificates, run the following command:
-
-```bash
-kubectl create secret generic tigera-apiserver-certs -n tigera-operator --from-file=apiserver.crt= --from-file=apiserver.key= --dry-run -o yaml --save-config | kubectl replace -f -
-```
-
-:::note
-
-If the $[prodname] API server is already running, updating the secret restarts the API server. While the server restarts, the $[prodname] API server may be unavailable for a short period of time.
-
-:::
-
-
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/comms/certificate-management.mdx b/calico-cloud_versioned_docs/version-20-1/operations/comms/certificate-management.mdx
deleted file mode 100644
index 545d5fc24c..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/comms/certificate-management.mdx
+++ /dev/null
@@ -1,145 +0,0 @@
----
-description: Control the issuer of certificates used by Calico Cloud.
----
-
-# Manage TLS certificates used by Calico Cloud
-
-## Big picture
-
-Enable custom workflows for issuing and signing certificates used to secure communication between $[prodname] components.
-
-## Value
-
-Some deployments have security requirements that strictly minimize or eliminate access to private keys, or that
-require control over the trusted certificates throughout clusters. Using the Kubernetes Certificates API, which automates
-certificate issuance, $[prodname] provides a simple configuration option that you add to your installation.
-
-## Before you begin
-
-**Limitations**
-
-If your cluster is already running $[prodname] and you would like to enable certificate management, you need to
-temporarily remove [the logstorage resource](../../reference/installation/api.mdx#operator.tigera.io/v1.LogStorage)
-before following the steps to enable certificate management, and then re-apply it afterwards.
-
-
-**Supported algorithms**
-
-- Private Key Pair: RSA (size: 2048, 4096, 8192), ECDSA (curve: 256, 384, 521)
-- Certificate Signature: RSA (sha: 256, 384, 512), ECDSA (sha: 256, 384, 512)
-
-## How to
-
-- [Enable certificate management](#enable-certificate-management)
-- [Verify and monitor](#verify-and-monitor)
-- [Implement your own signing/approval process](#implement-your-own-signing-and-approval-process)
-
-### Enable certificate management
-
-1. Modify the [installation resource](../../reference/installation/api.mdx#operator.tigera.io/v1.Installation)
-   and add the `certificateManagement` section. Apply the following change to your cluster.
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: Installation
-metadata:
- name: default
-spec:
- certificateManagement:
-    caCert: <your CA certificate>
-    signerName: <my-domain>/<my-signer-name>
- signatureAlgorithm: SHA512WithRSA
- keyAlgorithm: RSAWithSize4096
-```
-
-Done! If you have an automatic signer and approver, there is nothing left to do. The next section explains in more detail
-how to verify and monitor the status.
-
-### Verify and monitor
-
-1. Monitor your pods as they come up:
-
-```
-kubectl get pod -n calico-system -w
-NAMESPACE NAME READY STATUS RESTARTS AGE
-calico-system calico-node-5ckvq 0/1 Pending 0 0s
-calico-system calico-typha-688c9957f5-h9c5w 0/1 Pending 0 0s
-calico-system calico-node-5ckvq 0/1 Init:0/3 0 1s
-calico-system calico-typha-688c9957f5-h9c5w 0/1 Init:0/1 0 1s
-calico-system calico-node-5ckvq 0/1 PodInitializing 0 2s
-calico-system calico-typha-688c9957f5-h9c5w 0/1 PodInitializing 0 2s
-calico-system calico-node-5ckvq 1/1 Running 0 3s
-calico-system calico-typha-688c9957f5-h9c5w 1/1 Running 0 3s
-```
-
-During the `Init` phase, a certificate signing request (CSR) is created by an init container of the pod, and the pod stays in the
-`Init` phase until that CSR has been approved and signed by the certificate authority. The pod then continues with `PodInitializing`
-and eventually `Running`.
-
-1. Monitor certificate signing requests:
-
-```
-kubectl get csr -w
-NAME AGE REQUESTOR CONDITION
-calico-system:calico-node-5ckvq:9a3a10 0s system:serviceaccount:calico-system:calico-node Pending
-calico-system:calico-node-5ckvq:9a3a10 0s system:serviceaccount:calico-system:calico-node Pending,Issued
-calico-system:calico-node-5ckvq:9a3a10 0s system:serviceaccount:calico-system:calico-node Approved,Issued
-calico-system:typha-688c9957f5-h9c5w:2b0d82 0s system:serviceaccount:calico-system:calico-typha Pending
-calico-system:typha-688c9957f5-h9c5w:2b0d82 0s system:serviceaccount:calico-system:calico-typha Pending,Issued
-calico-system:typha-688c9957f5-h9c5w:2b0d82 0s system:serviceaccount:calico-system:calico-typha Approved,Issued
-```
-
-A CSR will be `Pending` until it has been `Issued` and `Approved`. The name of a CSR is based on the namespace, the pod
-name and the first 6 characters of the pod's UID. The pod will be `Pending` until the CSR has been `Approved`.
-
-1. Monitor the status of this feature using the `TigeraStatus`:
-
-```
-kubectl get tigerastatus
-NAME AVAILABLE PROGRESSING DEGRADED SINCE
-calico True False False 2m40s
-```
-
-### Implement your own signing and approval process
-
-**Required steps**
-
-This feature uses api version `certificates.k8s.io/v1beta1` for [certificate signing requests](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/).
-To automate the signing and approval process, run a server that performs the following actions:
-
-1. Watch `CertificateSigningRequest` resources with status `Pending` and a `spec.signerName` that matches your configured signer name.
-
- :::note
-
-   You can skip the `signerName` check if you are using a Kubernetes version before v1.18, where the `signerName` field was not available.
-
- :::
-
-1. For each `Pending` CSR, perform (security) checks (see next heading)
-1. Issue a certificate and update `.status.certificate`
-1. Approve the CSR and update `.status.conditions`
-
-**Security requirements**
-
-Based on your requirements you may want to implement custom checks to make sure that no certificates are issued for a malicious user.
-When a CSR is created, the kube-apiserver adds immutable fields to the spec to help you perform checks:
-
-- `.spec.username`: username of the requester
-- `.spec.groups`: user groups of the requester
-- `.spec.request`: certificate request in pem format
-
-Verify that the user and/or group match with the requested certificate subject (alt) names.
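-
-For example, a minimal sketch of such a check from the command line, using the example CSR name from the output above (the exact fields you verify will depend on your own requirements):
-
-```bash
-CSR_NAME="calico-system:calico-node-5ckvq:9a3a10"   # example CSR name from the output above
-
-# Who requested the certificate?
-kubectl get csr "$CSR_NAME" -o jsonpath='{.spec.username}{"\n"}{.spec.groups}{"\n"}'
-
-# What subject and subject alternative names are being requested?
-kubectl get csr "$CSR_NAME" -o jsonpath='{.spec.request}' \
-  | base64 -d \
-  | openssl req -noout -subject -text \
-  | grep -E 'Subject:|DNS:|IP Address:'
-```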
-
-**Implement your signer and approver using golang**
-
-- Use [client-go](https://github.com/kubernetes/client-go) to create a clientset
-- To watch CSRs, use `clientset.CertificatesV1().CertificateSigningRequests().Watch(..)`
-- To issue the certificate use `clientset.CertificatesV1().CertificateSigningRequests().UpdateStatus(...)`
-- To approve the CSR use `clientset.CertificatesV1().CertificateSigningRequests().UpdateApproval(...)`
-
-### Additional resources
-
-- Read [kubernetes certificate signing requests](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/) for more information on CSRs
-- Use [client-go](https://github.com/kubernetes/client-go) to implement a controller to sign and approve a CSR
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/comms/compliance-tls.mdx b/calico-cloud_versioned_docs/version-20-1/operations/comms/compliance-tls.mdx
deleted file mode 100644
index 50d096c9b9..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/comms/compliance-tls.mdx
+++ /dev/null
@@ -1,35 +0,0 @@
----
-description: Add TLS certificate to secure access to compliance.
----
-
-# Provide TLS certificates for compliance
-
-## Big picture
-
-Provide TLS certificates to secure access to the $[prodname] compliance components.
-
-## Value
-
-Providing TLS certificates for $[prodname] compliance components is recommended as part of a zero trust network model for security.
-
-## Before you begin...
-
-By default, $[prodname] uses self-signed certificates for its compliance reporting components. To provide TLS certificates,
-get the certificate and key pair for the $[prodname] compliance using any X.509-compatible tool or from your organization's
-Certificate Authority. The certificate must have Common Name or a Subject Alternate Name of `compliance.tigera-compliance.svc`.
-
-## How to
-
-### Add TLS certificates for compliance
-
-To provide TLS certificates for use by $[prodname] compliance components during deployment, you must create a secret before applying the 'custom-resource.yaml' or before creating the Compliance resource. Use the following command to create a secret:
-
-```bash
-kubectl create secret generic tigera-compliance-server-tls -n tigera-operator --from-file=tls.crt= --from-file=tls.key=
-```
-
-To update existing certificates, run the following command:
-
-```bash
-kubectl create secret generic tigera-compliance-server-tls -n tigera-operator --from-file=tls.crt= --from-file=tls.key= --dry-run -o yaml --save-config | kubectl replace -f -
-```
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/comms/crypto-auth.mdx b/calico-cloud_versioned_docs/version-20-1/operations/comms/crypto-auth.mdx
deleted file mode 100644
index 26f54b7932..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/comms/crypto-auth.mdx
+++ /dev/null
@@ -1,112 +0,0 @@
----
-description: Enable TLS authentication and encryption for various Calico Cloud components.
----
-
-# Configure encryption and authentication to secure Calico Cloud components
-
-## Connections from $[prodname] components to kube-apiserver (Kubernetes and OpenShift)
-
-We recommend enabling TLS on kube-apiserver, as well as the client certificate and JSON web token (JWT)
-authentication modules. This ensures that all of its communications with $[prodname] components occur
-over TLS. The $[prodname] components present either an X.509 certificate or a JWT to kube-apiserver
-so that kube-apiserver can verify their identities.
-
-## Connections from Node to Typha (Kubernetes)
-
-Operator based installations automatically configure mutual TLS authentication on connections from
-Felix to Typha. You may also configure this TLS by providing your own secrets.
-
-### Configure Node to Typha TLS based on your deployment
-
-For clusters installed using operator, see how to [provide TLS certificates for Typha and Node](typha-node-tls.mdx).
-
-For detailed reference information on TLS configuration parameters, refer to:
-
-- **Node**: [Node-Typha TLS configuration](../../reference/component-resources/node/felix/configuration.mdx#felix-typha-tls-configuration)
-
-
-
-## Calico Cloud Manager connections
-
-The $[prodname] Manager's web interface, which runs in your browser, uses HTTPS to communicate securely
-with the $[prodname] Manager, which in turn communicates with the Kubernetes and $[prodname] API
-servers, also over HTTPS. The installation steps should already have configured secure communication between
-$[prodname] components, but secure communication through your web
-browser of choice may not be configured yet. To verify that it is, check that the web browser
-you are using displays `Secure` in the address bar.
-
-Before we set up TLS certificates, it is important to understand the traffic
-that we are securing. By default, your web browser of choice communicates with
-$[prodname] Manager through a
-[`NodePort` service](https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typenodeport)
-over port `30003`. The NodePort service passes through packets without modification.
-TLS traffic is [terminated](https://en.wikipedia.org/wiki/TLS_termination_proxy)
-at the $[prodname] Manager. This means that the TLS certificates used to secure traffic
-between your web browser and the $[prodname] Manager do not need to be shared or related
-to any other TLS certificates that may be used elsewhere in your cluster or when
-configuring $[prodname]. The flow of traffic should look like the following:
-
-![$[prodname] Manager traffic diagram](/img/calico-enterprise/cnx-tls-mgr-comms.svg)
-
-:::note
-
-The `NodePort` service in the above diagram can be replaced with other
-[Kubernetes services](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types).
-Configuration will vary if another service, such as a load balancer, is placed between the web
-browser and the $[prodname] Manager.
-
-:::
-
-To properly configure TLS in the $[prodname] Manager, you will need
-certificates and keys signed by an appropriate Certificate Authority (CA).
-For more high level information on certificates, keys, and CAs, see
-[this blog post](http://www.steves-internet-guide.com/ssl-certificates-explained/).
-
-:::note
-
-It is important when generating your certificates to make sure
-that the Common Name or Subject Alternative Name specified in your certificates
-matches the host name/DNS entry/IP address that is used to access the $[prodname] Manager
-(i.e. what it says in the browser address bar).
-
-:::
-
-## Issues with certificates
-
-If your web browser still does not display `Secure` in the address bar, the most
-common reasons and their fixes are listed below.
-
-- **Untrusted Certificate Authority**: Your browser may not display `Secure` because
- it does not know (and therefore trust) the certificate authority (CA) that issued
- the certificates that the $[prodname] Manager is using. This is generally caused by using
- self-signed certificates (either generated by Kubernetes or manually). If you have
- certificates signed by a recognized CA, we recommend that you use them with the $[prodname]
- Manager since the browser will automatically recognize them.
-
- If you opt to use self-signed certificates you can still configure your browser to
- trust the CA on a per-browser basis by importing the CA certificates into the browser.
- In Google Chrome, this can be achieved by selecting Settings, Advanced, Privacy and security,
- Manage certificates, Authorities, Import. This is not recommended since it requires the CA
- to be imported into every browser you access $[prodname] Manager from.
-
-- **Mismatched Common Name or Subject Alternative Name**: If you are still having issues
- securely accessing $[prodname] Manager with TLS, you may want to make sure that the Common Name
- or Subject Alternative Name specified in your certificates matches the host name/DNS
- entry/IP address that is used to access the $[prodname] Manager (i.e. what it says in the browser
- address bar). In Google Chrome you can check the $[prodname] Manager certificate with Developer Tools
- (Ctrl+Shift+I), Security. If you are issued certificates which do not match,
- you will need to reissue the certificates with the correct Common Name or
- Subject Alternative Name and reconfigure $[prodname] Manager following the steps above.
-
-## Ingress proxies and load balancers
-
-You may wish to configure proxy elements, including hardware or software load balancers, Kubernetes Ingress
-proxies etc., between user web browsers and the $[prodname] Manager. If you do so, configure your proxy
-such that $[prodname] Manager receives a HTTPS (TLS) connection, not unencrypted HTTP.
-
-If you require TLS termination at any of these proxy elements, you will need to:
-
-- use a proxy that supports transparent HTTP/2 proxying, for example, [Envoy](https://www.envoyproxy.io/)
-- re-originate a TLS connection from your proxy to $[prodname] Manager, as it expects TLS
-
-If you do not require TLS termination, configure your proxy to "pass thru" the TLS to $[prodname] Manager.
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/comms/index.mdx b/calico-cloud_versioned_docs/version-20-1/operations/comms/index.mdx
deleted file mode 100644
index 24e13343f5..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/comms/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Secure communications for Calico components.
-hide_table_of_contents: true
----
-
-# Secure Calico component communications
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/comms/log-storage-tls.mdx b/calico-cloud_versioned_docs/version-20-1/operations/comms/log-storage-tls.mdx
deleted file mode 100644
index 906dd93af1..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/comms/log-storage-tls.mdx
+++ /dev/null
@@ -1,45 +0,0 @@
----
-description: Add TLS certificate to secure access to log storage.
----
-
-# Provide TLS certificates for log storage
-
-## Big picture
-
-Provide TLS certificates to secure access to the $[prodname] log storage.
-
-## Value
-
-Providing TLS certificates for $[prodname] components is recommended as part of a zero trust network model for security.
-
-## Before you begin...
-
-By default, the $[prodname] log storage uses self-signed certificates on connections. To provide TLS certificates,
-get the certificate and key pair for the $[prodname] log storage using any X.509-compatible tool or from your organization's
-Certificate Authority. The certificate must include the following Subject Alternate Names (DNS names): `tigera-secure-es-http.tigera-elasticsearch.svc` and `tigera-secure-es-gateway-http.tigera-elasticsearch.svc`.
-
-If your cluster has Windows nodes, the certificate must additionally include `tigera-secure-es-http.tigera-elasticsearch.svc.<cluster domain>`, where `<cluster domain>` is the local domain specified for in-cluster DNS.
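-
-For example, a self-signed certificate with the required names can be generated for testing with OpenSSL 1.1.1 or later (for production, use a certificate from your organization's Certificate Authority as described above):
-
-```bash
-openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
-  -keyout tls.key -out tls.crt \
-  -subj "/CN=tigera-secure-es-http.tigera-elasticsearch.svc" \
-  -addext "subjectAltName=DNS:tigera-secure-es-http.tigera-elasticsearch.svc,DNS:tigera-secure-es-gateway-http.tigera-elasticsearch.svc"
-```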
-
-## How to
-
-### Add TLS certificates for log storage
-
-To provide TLS certificates for use by $[prodname] components during deployment, you must create a secret before applying the 'custom-resource.yaml' or before creating the LogStorage resource. Use the following command to create a secret:
-
-```bash
-kubectl create secret generic tigera-secure-elasticsearch-cert -n tigera-operator --from-file=tls.crt= --from-file=tls.key=
-```
-
-To update existing certificates, run the following command:
-
-```bash
-kubectl create secret generic tigera-secure-elasticsearch-cert -n tigera-operator --from-file=tls.crt= --from-file=tls.key= --dry-run -o yaml --save-config | kubectl replace -f -
-```
-
-:::note
-
-If the $[prodname] log storage already exists, you must manually delete the log storage pods one by one
-after updating the secret. These pods will be in the `tigera-elasticsearch` namespace with the prefix `tigera-secure-es`.
-Other $[prodname] components will be unable to communicate with log storage until the pods are restarted.
-
-:::
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/comms/manager-tls.mdx b/calico-cloud_versioned_docs/version-20-1/operations/comms/manager-tls.mdx
deleted file mode 100644
index e799b8e3e0..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/comms/manager-tls.mdx
+++ /dev/null
@@ -1,41 +0,0 @@
----
-description: Add TLS certificates to secure access to Calico Cloud Manager user interface.
----
-
-# Provide TLS certificates for Calico Cloud Manager
-
-## Big picture
-
-Provide TLS certificates that secure access to the $[prodname] manager user interface.
-
-## Value
-
-By default, the $[prodname] manager UI uses self-signed TLS certificates on connections. This article describes how to provide TLS certificates that users' browsers will trust.
-
-## Before you begin...
-
-- **Get the certificate and key pair for the $[prodname] Manager UI**
- Generate the certificate using any X.509-compatible tool or from your organization's Certificate Authority.
-
-
-## How to
-
-To provide certificates for use during deployment you must create a secret before applying the 'custom-resource.yaml' or before creating the Installation resource. To specify certificates for use in the manager, create a secret using the following command:
-
-```bash
-kubectl create secret generic manager-tls -n tigera-operator --from-file=cert= --from-file=key=
-```
-
-To update existing certificates, run the following command:
-
-```bash
-kubectl create secret generic manager-tls -n tigera-operator --from-file=cert= --from-file=key= --dry-run -o yaml --save-config | kubectl replace -f -
-```
-
-If the $[prodname] Manager UI is already running, updating the secret should cause it to restart and pick up the new certificate and key. This will result in a short period of unavailability of the $[prodname] Manager UI.
-
-## Additional resources
-
-Additional documentation is available for securing [$[prodname] manager connections](crypto-auth.mdx#calico-enterprise-manager-connections).
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/comms/packetcapture-tls.mdx b/calico-cloud_versioned_docs/version-20-1/operations/comms/packetcapture-tls.mdx
deleted file mode 100644
index c75f45f90c..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/comms/packetcapture-tls.mdx
+++ /dev/null
@@ -1,35 +0,0 @@
----
-description: Add TLS certificate to secure access to PacketCapture APIs.
----
-
-# Provide TLS certificates for PacketCapture APIs
-
-## Big picture
-
-Provide TLS certificates to secure access to the $[prodname] PacketCapture components.
-
-## Value
-
-Providing TLS certificates for $[prodname] PacketCapture components is recommended as part of a zero trust network model for security.
-
-## Before you begin...
-
-By default, $[prodname] uses self-signed certificates for its PacketCapture APIs components. To provide TLS certificates,
-get the certificate and key pair for the $[prodname] PacketCapture using any X.509-compatible tool or from your organization's
-Certificate Authority. The certificate must have Common Name or a Subject Alternate Name of `tigera-packetcapture.tigera-packetcapture.svc`.
-
-## How to
-
-### Add TLS certificates for PacketCapture
-
-To provide TLS certificates for use by $[prodname] PacketCapture components during deployment, you must create a secret before applying the 'custom-resource.yaml' or before creating the APIServer resource. Use the following command to create a secret:
-
-```bash
-kubectl create secret generic tigera-packetcapture-server-tls -n tigera-operator --from-file=tls.crt= --from-file=tls.key=
-```
-
-To update existing certificates, run the following command:
-
-```bash
-kubectl create secret generic tigera-packetcapture-server-tls -n tigera-operator --from-file=tls.crt= --from-file=tls.key= --dry-run -o yaml --save-config | kubectl replace -f -
-```
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/comms/secure-bgp.mdx b/calico-cloud_versioned_docs/version-20-1/operations/comms/secure-bgp.mdx
deleted file mode 100644
index 358177aec0..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/comms/secure-bgp.mdx
+++ /dev/null
@@ -1,185 +0,0 @@
----
-description: Configure BGP passwords to prevent attackers from injecting false routing information.
----
-
-# Secure BGP sessions
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-## Big picture
-
-Use BGP passwords to prevent attackers from injecting false routing information.
-
-## Value
-
-Setting a password on a BGP peering between BGP speakers means that a peering will only
-work when both ends of the peering have the same password. This provides a layer of defense
-against an attacker impersonating an external BGP peer or a workload in the cluster, for
-example to inject malicious routing information into the cluster.
-
-## Concepts
-
-### Password protection on BGP sessions
-
-Password protection is a [standardized](https://tools.ietf.org/html/rfc5925) optional
-feature of BGP sessions. The effect is that the two peers at either end of a BGP session
-can only communicate, and exchange routing information, if they are both configured with
-the same password.
-
-Please note that password use does not cause the data exchange to be _encrypted_. It
-remains relatively easy to _eavesdrop_ on the data exchange, but not to _inject_ false
-information.
-
-### Using Kubernetes secrets to store passwords
-
-In Kubernetes, the Secret resource is designed for holding sensitive information,
-including passwords. Therefore, for this $[prodname] feature, we use Secrets to
-store BGP passwords.
-
-## How to
-
-To use a password on a BGP peering:
-
-1. Create (or update) a Kubernetes secret in the namespace where $[noderunning] is
- running, so that it has a key whose value is the desired password. Note the secret
- name and the key name.
-
- :::note
-
- BGP passwords must be 80 characters or fewer. If a
- password longer than that is configured, the BGP sessions with
- that password will fail to be established.
-
- :::
-
-1. Ensure that $[noderunning] has RBAC permissions to access that secret.
-
-1. Specify the secret and key name on the relevant BGPPeer resource.
-
-### Create or update Kubernetes secret
-
-For example:
-
-```
-kubectl create -f - <<EOF
-apiVersion: v1
-kind: Secret
-metadata:
-  name: bgp-secrets
-  namespace: calico-system
-type: Opaque
-stringData:
-  rr-password: very-secret-password   # replace with your own password (80 characters or fewer)
-EOF
-```
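-
-You must also ensure that $[noderunning] is allowed to read the secret, as noted in the steps above. For example, a minimal sketch using `kubectl`, assuming an operator-managed installation where $[noderunning] runs as the `calico-node` service account in the `calico-system` namespace:
-
-```bash
-# Allow reads of the bgp-secrets secret only.
-kubectl create role bgp-secret-access -n calico-system \
-  --verb=get,list,watch \
-  --resource=secrets \
-  --resource-name=bgp-secrets
-
-# Bind the role to the calico-node service account.
-kubectl create rolebinding bgp-secret-access -n calico-system \
-  --role=bgp-secret-access \
-  --serviceaccount=calico-system:calico-node
-```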
-
-
-When [configuring a BGP peer](../../networking/configuring/bgp.mdx),
-include the secret and key name in the specification of the BGPPeer resource, like this:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: BGPPeer
-metadata:
- name: bgppeer-global-3040
-spec:
- peerIP: 192.20.30.40
- asNumber: 64567
- password:
- secretKeyRef:
- name: bgp-secrets
- key: rr-password
-```
-
-
-
-
-Include the secret in the default [BGP configuration](../../reference/resources/bgpconfig.mdx)
-similar to the following:
-
-```yaml
-kind: BGPConfiguration
-apiVersion: projectcalico.org/v3
-metadata:
- name: default
-spec:
- logSeverityScreen: Info
- nodeToNodeMeshEnabled: true
- nodeMeshPassword:
- secretKeyRef:
- name: bgp-secrets
- key: rr-password
-```
-
-:::note
-
-The node-to-node mesh must be enabled in order to set a node-to-node mesh
-BGP password.
-
-:::
-
-
-
-
-
-## Additional resources
-
-For more detail about the BGPPeer resource, see
-[BGPPeer](../../reference/resources/bgppeer.mdx).
-
-For more on configuring BGP peers, see [configuring BGP peers](../../networking/configuring/bgp.mdx)
-.
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/comms/secure-metrics.mdx b/calico-cloud_versioned_docs/version-20-1/operations/comms/secure-metrics.mdx
deleted file mode 100644
index 1b54053fe3..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/comms/secure-metrics.mdx
+++ /dev/null
@@ -1,514 +0,0 @@
----
-description: Limit access to Calico Cloud metric endpoints using network policy.
----
-
-# Secure Calico Cloud Prometheus endpoints
-
-## About securing access to $[prodname]'s metrics endpoints
-
-When using $[prodname] with Prometheus metrics enabled, we recommend using network policy
-to limit access to $[prodname]'s metrics endpoints.
-
-## Prerequisites
-
-- $[prodname] is installed with Prometheus metrics reporting enabled.
-
-
-## Choosing an approach
-
-This guide provides two example workflows for creating network policies to limit access
-to $[prodname]'s Prometheus metrics. Choosing an approach depends on your requirements.
-
-- [Using a deny-list approach](#using-a-deny-list-approach)
-
- This approach allows all traffic to your hosts by default, but lets you limit access to specific ports using
- $[prodname] policy. This approach allows you to restrict access to specific ports, while leaving other
- host traffic unaffected.
-
-- [Using an allow-list approach](#using-an-allow-list-approach)
-
- This approach denies traffic to and from your hosts by default, and requires that all
- desired communication be explicitly allowed by a network policy. This approach is more secure because
- only explicitly-allowed traffic will get through, but it requires you to know all the ports that should be open on the host.
-
-## Using a deny-list approach
-
-### Overview
-
-The basic process is as follows:
-
-1. Create a default network policy that allows traffic to and from your hosts.
-1. Create host endpoints for each node that you'd like to secure.
-1. Create a network policy that denies unwanted traffic to the $[prodname] metrics endpoints.
-1. Apply labels to allow access to the Prometheus metrics.
-
-### Example for $[nodecontainer]
-
-This example shows how to limit access to the $[nodecontainer] Prometheus metrics endpoints.
-
-1. Create a default network policy to allow host traffic
-
- First, create a default-allow policy. Do this first to avoid a drop in connectivity when adding the host endpoints
- later, since host endpoints with no policy default to deny.
-
- To do this, create a file named `default-host-policy.yaml` with the following contents.
-
- ```yaml
- apiVersion: projectcalico.org/v3
- kind: GlobalNetworkPolicy
- metadata:
- name: default-host
- spec:
- # Select all $[prodname] nodes.
- selector: running-calico == "true"
- order: 5000
- ingress:
- - action: Allow
- egress:
- - action: Allow
- ```
-
- Then, use `kubectl` to apply this policy.
-
- ```bash
- kubectl apply -f default-host-policy.yaml
- ```
-
-1. List the nodes on which $[prodname] is running with the following command.
-
- ```bash
- calicoctl get nodes
- ```
-
- In this case, we have two nodes in the cluster.
-
- ```
- NAME
- kubeadm-master
- kubeadm-node-0
- ```
-
-1. Create host endpoints for each $[prodname] node.
-
- Create a file named `host-endpoints.yaml` containing a host endpoint for each node listed
- above. In this example, the contents would look like this.
-
- ```yaml
- apiVersion: projectcalico.org/v3
- kind: HostEndpoint
- metadata:
- name: kubeadm-master.eth0
- labels:
- running-calico: 'true'
- spec:
- node: kubeadm-master
- interfaceName: eth0
- expectedIPs:
- - 10.100.0.15
- ---
- apiVersion: projectcalico.org/v3
- kind: HostEndpoint
- metadata:
- name: kubeadm-node-0.eth0
- labels:
- running-calico: 'true'
- spec:
- node: kubeadm-node-0
- interfaceName: eth0
- expectedIPs:
- - 10.100.0.16
- ```
-
- In this file, replace `eth0` with the desired interface name on each node, and populate the
- `expectedIPs` section with the IP addresses on that interface.
-
- Note the use of a label to indicate that this host endpoint is running $[prodname]. The
- label matches the selector of the network policy created in step 1.
-
- Then, use `kubectl` to apply the host endpoints with the following command.
-
- ```bash
- kubectl apply -f host-endpoints.yaml
- ```
-
-1. Create a network policy that restricts access to the $[nodecontainer] Prometheus metrics port.
-
- Now let's create a network policy that limits access to the Prometheus metrics port such that
- only endpoints with the label `calico-prometheus-access: true` can access the metrics.
-
- To do this, create a file named `calico-prometheus-policy.yaml` with the following contents.
-
- ```yaml
- # Allow traffic to Prometheus only from sources that are
- # labeled as such, but don't impact any other traffic.
- apiVersion: projectcalico.org/v3
- kind: GlobalNetworkPolicy
- metadata:
- name: restrict-calico-node-prometheus
- spec:
- # Select all $[prodname] nodes.
- selector: running-calico == "true"
- order: 500
- types:
- - Ingress
- ingress:
- # Deny anything that tries to access the Prometheus port
- # but that doesn't match the necessary selector.
- - action: Deny
- protocol: TCP
- source:
- notSelector: calico-prometheus-access == "true"
- destination:
- ports:
- - 9091
- ```
-
-   This policy selects all endpoints that have the label `running-calico: true`, and enforces a single ingress deny rule.
-   The ingress rule denies traffic to port 9091 unless the source of traffic has the label `calico-prometheus-access: true`. This
-   means that traffic is denied from all $[prodname] workload endpoints, host endpoints, and global network sets that do not have
-   the label, as well as from any other network endpoints unknown to $[prodname].
-
- Then, use `kubectl` to apply this policy.
-
- ```bash
- kubectl apply -f calico-prometheus-policy.yaml
- ```
-
-1. Apply labels to any endpoints that should have access to the metrics.
-
- At this point, only endpoints that have the label `calico-prometheus-access: true` can reach
- $[prodname]'s Prometheus metrics endpoints on each node. To grant access, simply add this label to the
- desired endpoints.
-
- For example, to allow access to a Kubernetes pod you can run the following command.
-
- ```bash
- kubectl label pod my-prometheus-pod calico-prometheus-access=true
- ```
-
- If you would like to grant access to a specific IP network, you
- can create a [global network set](../../reference/resources/globalnetworkset.mdx) using `kubectl`.
-
- For example, you might want to grant access to your management subnets.
-
- ```yaml
- apiVersion: projectcalico.org/v3
- kind: GlobalNetworkSet
- metadata:
- name: calico-prometheus-set
- labels:
- calico-prometheus-access: 'true'
- spec:
- nets:
- - 172.15.0.0/24
- - 172.101.0.0/24
- ```
-
-### Additional steps for Typha deployments
-
-If your $[prodname] installation uses the Kubernetes API datastore and has greater than 50 nodes, it is likely
-that you have installed Typha. This section shows how to use an additional network policy to secure the Typha
-Prometheus endpoints.
-
-After following the steps above, create a file named `typha-prometheus-policy.yaml` with the following contents.
-
-```yaml
-# Allow traffic to Prometheus only from sources that are
-# labeled as such, but don't impact any other traffic.
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  name: restrict-typha-prometheus
-spec:
- # Select all $[prodname] nodes.
- selector: running-calico == "true"
- order: 500
- types:
- - Ingress
- ingress:
- # Deny anything that tries to access the Prometheus port
- # but that doesn't match the necessary selector.
- - action: Deny
- protocol: TCP
- source:
- notSelector: calico-prometheus-access == "true"
- destination:
- ports:
- - 9093
-```
-
-This policy selects all endpoints that have the label `running-calico: true`, and enforces a single ingress deny rule.
-The ingress rule denies traffic to port 9093 unless the source of traffic has the label `calico-prometheus-access: true`. This
-means that traffic is denied from all $[prodname] workload endpoints, host endpoints, and global network sets that do not have
-the label, as well as from any other network endpoints unknown to $[prodname].
-
-Then, use `kubectl` to apply this policy.
-
-```bash
-kubectl apply -f typha-prometheus-policy.yaml
-```
-
-### Example for kube-controllers
-
-If your $[prodname] installation exposes metrics from kube-controllers, you can limit access to those metrics
-with the following network policy.
-
-Create a file named `kube-controllers-prometheus-policy.yaml` with the following contents.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: restrict-kube-controllers-prometheus
- namespace: calico-system
-spec:
- # Select kube-controllers.
- selector: k8s-app == "calico-kube-controllers"
- order: 500
- types:
- - Ingress
- ingress:
- # Deny anything that tries to access the Prometheus port
- # but that doesn't match the necessary selector.
- - action: Deny
- protocol: TCP
- source:
- notSelector: calico-prometheus-access == "true"
- destination:
- ports:
- - 9094
-```
-
-:::note
-
-The above policy is installed in the calico-system namespace. If your cluster has $[prodname] installed
-in the kube-system namespace, you will need to create the policy in that namespace instead.
-
-:::
-
-Then, use `kubectl` to apply this policy.
-
-```bash
-kubectl apply -f kube-controllers-prometheus-policy.yaml
-```
-
-## Using an allow-list approach
-
-### Overview
-
-The basic process is as follows:
-
-1. Create host endpoints for each node that you'd like to secure.
-1. Create a network policy that allows desired traffic to the $[prodname] metrics endpoints.
-1. Apply labels to allow access to the Prometheus metrics.
-
-### Example for $[nodecontainer]
-
-1. List the nodes on which $[prodname] is running with the following command.
-
- ```bash
- calicoctl get nodes
- ```
-
- In this case, we have two nodes in the cluster.
-
- ```
- NAME
- kubeadm-master
- kubeadm-node-0
- ```
-
-1. Create host endpoints for each $[prodname] node.
-
- Create a file named `host-endpoints.yaml` containing a host endpoint for each node listed
- above. In this example, the contents would look like this.
-
- ```yaml
- apiVersion: projectcalico.org/v3
- kind: HostEndpoint
- metadata:
- name: kubeadm-master.eth0
- labels:
- running-calico: 'true'
- spec:
- node: kubeadm-master
- interfaceName: eth0
- expectedIPs:
- - 10.100.0.15
- ---
- apiVersion: projectcalico.org/v3
- kind: HostEndpoint
- metadata:
- name: kubeadm-node-0.eth0
- labels:
- running-calico: 'true'
- spec:
- node: kubeadm-node-0
- interfaceName: eth0
- expectedIPs:
- - 10.100.0.16
- ```
-
- In this file, replace `eth0` with the desired interface name on each node, and populate the
- `expectedIPs` section with the IP addresses on that interface.
-
- Note the use of a label to indicate that this host endpoint is running $[prodname]. The
-   label matches the selector of the network policy created in the next step.
-
- Then, use `kubectl` to apply the host endpoints with the following command. This will prevent all
- traffic to and from the host endpoints.
-
- ```bash
- kubectl apply -f host-endpoints.yaml
- ```
-
- :::note
-
- $[prodname] allows some traffic as a failsafe even after applying this policy. This can
- be adjusted using the `failsafeInboundHostPorts` and `failsafeOutboundHostPorts` options
- on the [FelixConfiguration resource](../../reference/resources/felixconfig.mdx).
-
- :::
-
-1. Create a network policy that allows access to the $[nodecontainer] Prometheus metrics port.
-
- Now let's create a network policy that allows access to the Prometheus metrics port such that
- only endpoints with the label `calico-prometheus-access: true` can access the metrics.
-
- To do this, create a file named `calico-prometheus-policy.yaml` with the following contents.
-
- ```yaml
- apiVersion: projectcalico.org/v3
- kind: GlobalNetworkPolicy
- metadata:
- name: restrict-calico-node-prometheus
- spec:
- # Select all $[prodname] nodes.
- selector: running-calico == "true"
- order: 500
- types:
- - Ingress
- ingress:
- # Allow traffic from selected sources to the Prometheus port.
- - action: Allow
- protocol: TCP
- source:
- selector: calico-prometheus-access == "true"
- destination:
- ports:
- - 9091
- ```
-
- This policy selects all endpoints that have the label `running-calico: true`, and enforces a single ingress allow rule.
- The ingress rule allows traffic to port 9091 from any source with the label `calico-prometheus-access: true`, meaning
- all $[prodname] workload endpoints, host endpoints, and global network sets that have the label will be allowed access.
-
- Then, use `kubectl` to apply this policy.
-
- ```bash
- kubectl apply -f calico-prometheus-policy.yaml
- ```
-
-1. Apply labels to any endpoints that should have access to the metrics.
-
- At this point, only endpoints that have the label `calico-prometheus-access: true` can reach
- $[prodname]'s Prometheus metrics endpoints on each node. To grant access, simply add this label to the
- desired endpoints.
-
- For example, to allow access to a Kubernetes pod you can run the following command.
-
- ```bash
- kubectl label pod my-prometheus-pod calico-prometheus-access=true
- ```
-
- If you would like to grant access to a specific IP address in your network, you
- can create a [global network set](../../reference/resources/globalnetworkset.mdx) using `kubectl`.
-
- For example, creating the following network set would grant access to a host with IP 172.15.0.101.
-
- ```yaml
- apiVersion: projectcalico.org/v3
- kind: GlobalNetworkSet
- metadata:
- name: calico-prometheus-set
- labels:
- calico-prometheus-access: 'true'
- spec:
- nets:
- - 172.15.0.101/32
- ```
-
-### Additional steps for Typha deployments
-
-If your $[prodname] installation uses the Kubernetes API datastore and has greater than 50 nodes, it is likely
-that you have installed Typha. This section shows how to use an additional network policy to secure the Typha
-Prometheus endpoints.
-
-After following the steps above, create a file named `typha-prometheus-policy.yaml` with the following contents.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: restrict-typha-prometheus
-spec:
- # Select all $[prodname] nodes.
- selector: running-calico == "true"
- order: 500
- types:
- - Ingress
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: calico-prometheus-access == "true"
- destination:
- ports:
- - 9093
-```
-
-This policy selects all endpoints that have the label `running-calico: true`, and enforces a single ingress allow rule.
-The ingress rule allows traffic to port 9093 from any source with the label `calico-prometheus-access: true`, meaning
-all $[prodname] workload endpoints, host endpoints, and global network sets that have the label will be allowed access.
-
-Then, use `kubectl` to apply this policy.
-
-```bash
-kubectl apply -f typha-prometheus-policy.yaml
-```
-
-### Example for kube-controllers
-
-If your $[prodname] installation exposes metrics from kube-controllers, you can limit access to those metrics
-with the following network policy.
-
-Create a file named `kube-controllers-prometheus-policy.yaml` with the following contents.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: restrict-kube-controllers-prometheus
- namespace: calico-system
-spec:
- selector: k8s-app == "calico-kube-controllers"
- order: 500
- types:
- - Ingress
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: calico-prometheus-access == "true"
- destination:
- ports:
- - 9094
-```
-
-Then, use `kubectl` to apply this policy.
-
-```bash
-kubectl apply -f kube-controllers-prometheus-policy.yaml
-```
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/comms/typha-node-tls.mdx b/calico-cloud_versioned_docs/version-20-1/operations/comms/typha-node-tls.mdx
deleted file mode 100644
index 6d33fb4bfa..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/comms/typha-node-tls.mdx
+++ /dev/null
@@ -1,85 +0,0 @@
----
-description: Add TLS certificates to secure communications between Typha and Node if you are using Typha to scale your deployment.
----
-
-# Provide TLS certificates for Typha and Node
-
-## Big picture
-
-Provide TLS certificates that allow mutual TLS authentication between Node and Typha.
-
-## Value
-
-By default, $[prodname] Typha and Node components are configured with self-signed Certificate Authority (CA) and certificates for mutual TLS authentication. This article describes how to provide a CA and TLS certificates.
-
-## Concepts
-
-**Mutual TLS authentication** means each side of a connection authenticates the other side. As such, the CA and certificates that are used must all be in sync. If one side of the connection is updated with a certificate that is not compatible with the other side, communication stops. So if certificate updates are mismatched on Typha, Node, or CA certificate, new pod networking and policy application will be interrupted until you restore compatibility. To make it easy to keep updates in sync, this article describes how to use one command to apply updates for all resources.
-
-## Before you begin...
-
-**Get the Certificate Authority certificate and signed certificate and key pairs for $[prodname] Typha and Node**
-
-- Generate the certificates using any X.509-compatible tool or from your organization's CA (an example `openssl` sketch is shown below).
-- Ensure the generated certificates meet the requirements for [TLS connections between Node and Typha](crypto-auth.mdx#connections-from-node-to-typha-kubernetes).
-
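-For example, a self-signed CA and certificate/key pairs can be generated with `openssl` (a minimal sketch; the common names `calico-typha` and `calico-node` are illustrative, and you should adjust key sizes, validity, and subject names to your own policies):
-
-```bash
-# Create a CA key and a self-signed CA certificate.
-openssl req -x509 -newkey rsa:4096 -nodes -keyout ca.key -out ca.crt -days 365 -subj "/CN=typha-ca"
-
-# Create a key and CSR for Typha, then sign the CSR with the CA.
-openssl req -newkey rsa:4096 -nodes -keyout typha.key -out typha.csr -subj "/CN=calico-typha"
-openssl x509 -req -in typha.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out typha.crt -days 365
-
-# Create a key and CSR for Node, then sign the CSR with the CA.
-openssl req -newkey rsa:4096 -nodes -keyout node.key -out node.csr -subj "/CN=calico-node"
-openssl x509 -req -in node.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out node.crt -days 365
-```
-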
-## How to
-
-### Create resource file
-
-1. Create the CA ConfigMap with the following commands:
-
- ```bash
-   kubectl create configmap typha-ca -n tigera-operator --from-file=caBundle=</path/to/CA/cert> --dry-run -o yaml --save-config > typha-node-tls.yaml
- echo '---' >> typha-node-tls.yaml
- ```
-
- :::tip
-
- The contents of the caBundle field should contain the CA or the certificates for both Typha and Node.
- It is possible to add multiple PEM blocks.
-
- :::
-
-1. Create the Typha Secret with the following command:
-
- ```bash
-   kubectl create secret generic typha-certs -n tigera-operator \
-     --from-file=cert.crt=</path/to/typha/cert> --from-file=key.key=</path/to/typha/key> \
-     --from-literal=common-name=<typha-common-name> --dry-run -o yaml --save-config >> typha-node-tls.yaml
- echo '---' >> typha-node-tls.yaml
- ```
-
- :::tip
-
- If using SPIFFE identifiers replace `--from-literal=common-name=` with `--from-literal=uri-san=`.
-
- :::
-
-1. Create the Node Secret with the following command:
-
- ```bash
-   kubectl create secret generic node-certs -n tigera-operator \
-     --from-file=cert.crt=</path/to/node/cert> --from-file=key.key=</path/to/node/key> \
-     --from-literal=common-name=<node-common-name> --dry-run -o yaml --save-config >> typha-node-tls.yaml
- ```
-
- :::tip
-
- If using SPIFFE identifiers replace `--from-literal=common-name=` with `--from-literal=uri-san=`.
-
- :::
-
-### Apply or update resources
-
-1. Apply the `typha-node-tls.yaml` file.
-   - To create these resources for use during deployment, you must apply this file before applying `custom-resource.yaml` or before creating the Installation resource. To apply this file, use the following command:
- ```bash
- kubectl apply -f typha-node-tls.yaml
- ```
- - To update existing resources, use the following command:
- ```bash
- kubectl replace -f typha-node-tls.yaml
- ```
-
-If $[prodname] Node and Typha are already running, the update causes a rolling restart of both. If the new CA and certificates are not compatible with the previous set, there may be a period where the Node pods produce errors until the old CA and certificates are replaced with the new ones.
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/component-logs.mdx b/calico-cloud_versioned_docs/version-20-1/operations/component-logs.mdx
deleted file mode 100644
index b449229d5b..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/component-logs.mdx
+++ /dev/null
@@ -1,121 +0,0 @@
----
-description: Where to find component logs.
----
-
-# Component logs
-
-## Big picture
-
-View and collect $[prodname] logs.
-
-## Value
-
-It is useful to view logs to monitor component health and diagnose potential issues.
-
-## Concepts
-
-### $[nodecontainer] logs
-
-The $[nodecontainer] logs contain log output from the following subcomponents:
-
-- Per-node startup logic
-- BGP agent
-- Felix policy agent
-
-Components log either to disk within `/var/log/calico`, to stdout, or both.
-
-For components that log to disk, files are automatically rotated, and by default 10 files of 1MB each are kept. The current log file is called `current`, and rotated files have an `@` followed by a timestamp, in [tai64n](http://cr.yp.to/libtai/tai64.html#tai64n) format, detailing when the file was rotated.
-
-## How to
-
-### View logs for a $[nodecontainer] instance
-
-You can view logs for a node using the `kubectl logs` command. This will show logs for all subcomponents of the given node.
-
-For example:
-
-```bash
-kubectl logs -n calico-system calico-node-xxxx
-```
-
-### View logs from the CNI plugin
-
-CNI plugin logs are not available through kubectl and are instead logged both to the host machine's disk as well as stderr.
-
-By default, these logs can be found at `/var/log/calico/cni/` on the host machine.
-
-The container runtime may also display the CNI plugin logs within its own log output.
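-
-For example, to follow the CNI plugin log on a node (a sketch; `cni.log` is the default file name and may differ if you have customized the CNI logging configuration):
-
-```bash
-sudo tail -f /var/log/calico/cni/cni.log
-```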
-
-### Configure BGP agent log level
-
-BGP log level is configured via the [BGPConfiguration](../reference/resources/bgpconfig.mdx) API, and can be one of the following values:
-
-- `Debug`: enables "debug all" logging for BIRD. The most verbose logging level.
-- `Info`: enables logging for protocol state changes. This is the default log level.
-- `Warning`: disables BIRD logging, emits warning level configuration logs only.
-- `Error`: disables BIRD logging, emits error level configuration logs only.
-- `Fatal`: disables BIRD logging, emits fatal level configuration logs only.
-
-To modify the BGP log level:
-
-1. Get the current bgpconfig settings.
-
- ```bash
- kubectl get bgpconfiguration.projectcalico.org -o yaml > bgp.yaml
- ```
-
-1. Modify `logSeverityScreen` to the desired value (a sketch of the result follows these steps).
-
- ```bash
- vim bgp.yaml
- ```
-
- :::tip
-
- For a global change set the name to "default".
- For a node-specific change set the name to the node name prefixed with "node.", e.g., "node.node-1".
-
- :::
-
-1. Replace the current bgpconfig settings.
-
- ```bash
- kubectl replace -f bgp.yaml
- ```
-
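-For reference, the edited default BGPConfiguration might look like this (a sketch assuming a global change to `Debug`; your file will also contain any other settings that were already present):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: BGPConfiguration
-metadata:
-  name: default
-spec:
-  logSeverityScreen: Debug
-```
-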
-### Configure Felix log level
-
-Felix log level is configured via the [FelixConfiguration](../reference/resources/felixconfig.mdx) API, and can be one of the following values:
-
-- `Debug`: The most verbose logging level - for development and debugging.
-- `Info`: The default log level. Shows important state changes.
-- `Warning`: Shows warnings only.
-- `Error`: Shows errors only.
-- `Fatal`: Shows fatal errors only.
-
-To modify Felix's log level:
-
-1. Get the current felixconfig settings.
-
- ```bash
- kubectl get felixconfiguration.projectcalico.org default -o yaml > felix.yaml
- ```
-
-1. Modify `logSeverityScreen` to the desired value.
-
- ```bash
-   vim felix.yaml
- ```
-
- :::tip
-
- For a global change set the name to "default".
-   For a node-specific change set the name to the node name prefixed with "node.", e.g., "node.node-1".
-
- :::
-
-1. Replace the current felixconfig settings.
-
-   ```bash
-   kubectl replace -f felix.yaml
- ```
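-
-Alternatively, you can make the same change in one step with a patch (a sketch, again assuming a global change to `Debug`):
-
-```bash
-kubectl patch felixconfiguration.projectcalico.org default --type merge -p '{"spec":{"logSeverityScreen":"Debug"}}'
-```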
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/disconnect.mdx b/calico-cloud_versioned_docs/version-20-1/operations/disconnect.mdx
deleted file mode 100644
index 1daca74e86..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/disconnect.mdx
+++ /dev/null
@@ -1,55 +0,0 @@
----
-description: Steps to use migration script to uninstall Calico Cloud from a cluster.
----
-
-# Uninstall Calico Cloud from a cluster
-
-Whether you've finished with your $[prodname] Trial or decided to disconnect your cluster from
-$[prodname], we know you want your cluster to remain functional. We highly recommend running
-a simple script to migrate your cluster to open-source Project Calico.
-
-## About the migration script
-
-The script migrates all applicable $[prodname] components to open-source Project Calico; this includes
-removal and cleanup of all $[prodname] components that have no equivalents in Project Calico.
-Because Project Calico does not have the tier resource, the script will exit if any policies
-exist in any tier except for the `default` or `allow-tigera` tiers.
-To remove policies from tiers, you have these options:
-
-- Manually move policies out of tiers prior to running the script
-- Let the script remove _ALL_ Calico policies by specifying the `--remove-all-calico-policy` flag
-
-:::note
-
-To successfully downgrade to an open-source Calico configuration, policies must allow necessary traffic
-to and from $[prodname] and open-source Calico namespaces. If you keep policies in the default tier
-(especially GlobalNetworkPolicies), and you have default deny policies, update or add policies accordingly
-to allow this necessary traffic.
-
-:::
-
-:::warning
-
-If your cluster began with Calico installed and managed by AKS with AddonManager, this uninstall process will
-not be successful. You will need to reach out to your support contact to create a plan to uninstall $[prodname].
-
-:::
-
-### Before you begin
-
-* You have `kubectl` administrator access to the cluster you want to migrate to Calico Open Source.
-* You are accessing the cluster from a Linux-based machine.
-
-### Run the migration script
-
-1. Download the script `curl -O $[clouddownloadurl]/downgrade.sh`.
-
-1. Make the script executable `chmod +x downgrade.sh`.
-
-1. Run the script and read the help to determine if you need to specify any flags `./downgrade.sh --help`.
-
-1. Run the script with any needed flags, for example: `./downgrade.sh --remove-prometheus`.
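-
-Put together, the sequence looks like this (the flags shown are examples only; choose the ones appropriate for your cluster):
-
-```bash
-curl -O $[clouddownloadurl]/downgrade.sh
-chmod +x downgrade.sh
-./downgrade.sh --help
-./downgrade.sh --remove-prometheus
-```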
-
-## Next steps
-
-Continue using your cluster with open-source [Project Calico](/calico/latest/about).
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/ebpf/enabling-ebpf.mdx b/calico-cloud_versioned_docs/version-20-1/operations/ebpf/enabling-ebpf.mdx
deleted file mode 100644
index 79b6fc26d3..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/ebpf/enabling-ebpf.mdx
+++ /dev/null
@@ -1,269 +0,0 @@
----
-description: Steps to enable the eBPF dataplane on an existing cluster.
----
-
-# Enable eBPF on an existing cluster
-
-import EbpfValue from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_ebpf-value.mdx';
-
-## Big picture
-
-Enable the eBPF dataplane on an existing cluster.
-
-## Value
-
-
-
-## Concepts
-
-### eBPF
-
-eBPF (or "extended Berkeley Packet Filter") is a technology that allows safe mini programs to be attached to various low-level hooks in the Linux kernel. eBPF has a wide variety of uses, including networking, security, and tracing. You’ll see a lot of non-networking projects leveraging eBPF, but for $[prodname] our focus is on networking, and in particular, pushing the networking capabilities of the latest Linux kernels to the limit.
-
-## Before you begin
-
-**Supported architecture and versions**
-
-- x86-64
-- Linux distribution/kernel:
-
- - Ubuntu 20.04.
- - Red Hat v8.2 with Linux kernel v4.18.0-193 or above (Red Hat have backported the required features to that build).
- - Another supported distribution with Linux kernel v5.3 or above.
-
-
-- An underlying network fabric that allows VXLAN traffic between hosts. In eBPF mode, VXLAN is used to forward Kubernetes NodePort traffic.
-
-**Unsupported platforms**
-
-- GKE
-- MKE
-- TKG
-
-:::note
-
-eBPF is supported on AKS with Calico CNI and $[prodname] network policy. On AKS with Azure CNI and $[prodname] network policy, kube-proxy cannot be disabled, so the performance benefits of eBPF are lost; however, there are still reasons other than performance to use eBPF, as described in [eBPF use cases](use-cases-ebpf.mdx).
-
-
-:::
-
-**Unsupported features**
-
-- Clusters with some eBPF nodes and some standard dataplane and/or Windows nodes
-- IPv6
-- Host endpoint `doNotTrack` policy (other policy types are supported)
-- Floating IPs
-- SCTP (either for policy or services)
-- `Log` action in policy rules
-- Tagged VLAN devices
-- L7 logs
-- Application layer policies
-- Web application firewall (WAF)
-
-**Recommendations for performance**
-
-For best pod-to-pod performance, we recommend using an underlying network that doesn't require $[prodname] to use an overlay. For example:
-
-- A cluster within a single AWS subnet
-- A cluster using a compatible cloud provider's CNI (such as the AWS VPC CNI plugin)
-- An on-prem cluster with BGP peering configured
-
-If you must use an overlay, we recommend that you use VXLAN, not IPIP. VXLAN has better performance than IPIP in eBPF mode due to various kernel optimizations.
-
-## How to
-
-- [Verify that your cluster is ready for eBPF mode](#verify-that-your-cluster-is-ready-for-ebpf-mode)
-- [Configure $[prodname] to talk directly to the API server](#configure-calico-cloud-to-talk-directly-to-the-api-server)
-- [Configure kube-proxy](#configure-kube-proxy)
-- [Enable eBPF mode](#enable-ebpf-mode)
-- [Try out DSR mode](#try-out-dsr-mode)
-- [Reversing the process](#reversing-the-process)
-
-### Verify that your cluster is ready for eBPF mode
-
-This section explains how to make sure your cluster is suitable for eBPF mode.
-
-To check that the kernel on a node is suitable, you can run
-
-```bash
-uname -rv
-```
-
-The output should look like this:
-
-```
-5.4.0-42-generic #46-Ubuntu SMP Fri Jul 10 00:24:02 UTC 2020
-```
-
-In this case the kernel version is v5.4, which is suitable.
-
-On Red Hat-derived distributions, you may see something like this:
-
-```
-4.18.0-193.el8.x86_64 (mockbuild@x86-vm-08.build.eng.bos.redhat.com)
-```
-
-Since the Red Hat kernel is v4.18 with at least build number 193, this kernel is suitable.
-
-### Configure $[prodname] to talk directly to the API server
-
-In eBPF mode, $[prodname] implements Kubernetes service networking directly (rather than relying on `kube-proxy`).
-Of course, this makes it highly desirable to disable `kube-proxy` when running in eBPF mode to save resources
-and avoid confusion over which component is handling services.
-
-To be able to disable `kube-proxy`, $[prodname] needs to communicate to the API server _directly_ rather than
-going through `kube-proxy`. To make _that_ possible, we need to find a persistent, static way to reach the API server.
-The best way to do that varies by Kubernetes distribution:
-
-- If you created a cluster manually (for example by using `kubeadm`) then the right address to use depends on whether you
- opted for a high-availability cluster with multiple API servers or a simple one-node API server.
-
- - If you opted to set up a high availability cluster then you should use the address of the load balancer that you
- used in front of your API servers. As noted in the Kubernetes documentation, a load balancer is required for a
- HA set-up but the precise type of load balancer is not specified.
- - If you opted for a single control plane node then you can use the address of the control plane node itself. However,
- it's important that you use a _stable_ address for that node such as a dedicated DNS record, or a static IP address.
- If you use a dynamic IP address (such as an EC2 private IP) then the address may change when the node is restarted
- causing $[prodname] to lose connectivity to the API server.
-
-- `kops` typically sets up a load balancer of some sort in front of the API server. You should use
-  the FQDN and port of the API load balancer, for example `api.internal.<clustername>` as the `KUBERNETES_SERVICE_HOST`
- below and 443 as the `KUBERNETES_SERVICE_PORT`.
-- OpenShift requires various DNS records to be created for the cluster; one of these is exactly what we need:
-  `api-int.<cluster_name>.<base_domain>` should point to the API server or to the load balancer in front of the
-  API server. Use that (filling in the `<cluster_name>` and `<base_domain>` as appropriate for your cluster) for the
-  `KUBERNETES_SERVICE_HOST` below. OpenShift uses 6443 for the `KUBERNETES_SERVICE_PORT`.
-- For AKS and EKS clusters you should use the FQDN of the API server's load balancer. This can be found with
- ```
- kubectl cluster-info
- ```
- which gives output like the following:
- ```
- Kubernetes master is running at https://60F939227672BC3D5A1B3EC9744B2B21.gr7.us-west-2.eks.amazonaws.com
- ...
- ```
- In this example, you would use `60F939227672BC3D5A1B3EC9744B2B21.gr7.us-west-2.eks.amazonaws.com` for
- `KUBERNETES_SERVICE_HOST` and `443` for `KUBERNETES_SERVICE_PORT` when creating the config map.
-
-Once you've found the correct address for your API server, create the following config map in the `tigera-operator`
-namespace using the host and port that you found above:
-
-```yaml
-kind: ConfigMap
-apiVersion: v1
-metadata:
- name: kubernetes-services-endpoint
- namespace: tigera-operator
-data:
-  KUBERNETES_SERVICE_HOST: '<API server host>'
-  KUBERNETES_SERVICE_PORT: '<API server port>'
-```
-
-The operator will pick up the change to the config map automatically and do a rolling update of $[prodname] to pass on the change. Confirm that pods restart and then reach the `Running` state with the following command:
-
-```
-watch kubectl get pods -n calico-system
-```
-
-If you do not see the pods restart, it's possible that the `ConfigMap` wasn't picked up (sometimes Kubernetes is slow to propagate `ConfigMap`s; see Kubernetes [issue #30189](https://github.com/kubernetes/kubernetes/issues/30189)). You can try restarting the operator.
-
-### Configure kube-proxy
-
-In eBPF mode $[prodname] replaces `kube-proxy` so it wastes resources (and reduces performance) to run both.
-This section explains how to disable `kube-proxy` in some common environments.
-
-#### Clusters that run `kube-proxy` with a `DaemonSet` (such as `kubeadm`)
-
-For a cluster that runs `kube-proxy` in a `DaemonSet` (such as a `kubeadm`-created cluster), you can disable `kube-proxy` reversibly by adding a node selector to `kube-proxy`'s `DaemonSet` that matches no nodes, for example:
-
-```
-kubectl patch ds -n kube-system kube-proxy -p '{"spec":{"template":{"spec":{"nodeSelector":{"non-calico": "true"}}}}}'
-```
-
-Then, should you want to start `kube-proxy` again, you can simply remove the node selector.
-
-:::note
-
-This approach is not suitable for AKS with Azure CNI since that platform makes use of the Kubernetes add-on manager, and
-the change will be reverted by the system. For AKS, you should follow [Avoiding conflicts with kube-proxy](#avoiding-conflicts-with-kube-proxy)
-below.
-
-:::
-
-#### OpenShift
-
-If you are running OpenShift, you can disable `kube-proxy` as follows:
-
-```
-kubectl patch networks.operator.openshift.io cluster --type merge -p '{"spec":{"deployKubeProxy": false}}'
-```
-
-To re-enable it:
-
-```
-kubectl patch networks.operator.openshift.io cluster --type merge -p '{"spec":{"deployKubeProxy": true}}'
-```
-
-### Avoiding conflicts with kube-proxy
-
-If you cannot disable `kube-proxy` (for example, because it is managed by your Kubernetes distribution), then you _must_ change Felix configuration parameter `BPFKubeProxyIptablesCleanupEnabled` to `false`. This can be done with `kubectl` as follows:
-
-```
-kubectl patch felixconfiguration default --patch='{"spec": {"bpfKubeProxyIptablesCleanupEnabled": false}}'
-```
-
-If both `kube-proxy` and `BPFKubeProxyIptablesCleanupEnabled` are enabled, then `kube-proxy` will write its iptables rules and Felix will try to clean them up, resulting in the iptables rules flapping between the two.
-
-### Enable eBPF mode
-
-To enable eBPF mode, change the `spec.calicoNetwork.linuxDataplane` parameter in the operator's `Installation`
-resource to `"BPF"`.
-
-```bash
-kubectl patch installation.operator.tigera.io default --type merge -p '{"spec":{"calicoNetwork":{"linuxDataplane":"BPF"}}}'
-```
-
-When enabling eBPF mode, preexisting connections continue to use the non-BPF datapath; such connections should
-not be disrupted, but they do not benefit from eBPF mode’s advantages.
-
-:::note
-
-The operator rolls out the change with a rolling update (non-disruptive) and then swiftly transitions all nodes to eBPF mode. However, it's inevitable that some nodes will enter eBPF mode before others. This can disrupt the flow of traffic through node ports.
-
-:::
-
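-To confirm that the new setting has been applied, you can read it back (a quick check that assumes the default `Installation` resource name):
-
-```bash
-kubectl get installation.operator.tigera.io default -o jsonpath='{.spec.calicoNetwork.linuxDataplane}'
-```
-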
-### Try out DSR mode
-
-Direct return mode skips a hop through the network for traffic to services (such as node ports) from outside the cluster. This reduces latency and CPU overhead but it requires the underlying network to allow nodes to send traffic with each other's IPs. In AWS, this requires all your nodes to be in the same subnet and for the source/dest check to be disabled.
-
-DSR mode is disabled by default; to enable it, set the `BPFExternalServiceMode` Felix configuration parameter to `"DSR"`. This can be done with `kubectl`:
-
-```
-kubectl patch felixconfiguration default --patch='{"spec": {"bpfExternalServiceMode": "DSR"}}'
-```
-
-To switch back to tunneled mode, set the configuration parameter to `"Tunnel"`:
-
-```
-kubectl patch felixconfiguration default --patch='{"spec": {"bpfExternalServiceMode": "Tunnel"}}'
-```
-
-Switching external traffic mode can disrupt in-progress connections.
-
-### Reversing the process
-
-To revert to standard Linux networking:
-
-1. Reverse the changes to the operator's `Installation`:
-
- ```bash
- kubectl patch installation.operator.tigera.io default --type merge -p '{"spec":{"calicoNetwork":{"linuxDataplane":"Iptables"}}}'
- ```
-
-1. If you disabled `kube-proxy`, re-enable it (for example, by removing the node selector added above).
-
- ```
- kubectl patch ds -n kube-system kube-proxy --type merge -p '{"spec":{"template":{"spec":{"nodeSelector":{"non-calico": null}}}}}'
- ```
-
-1. Since disabling eBPF mode is disruptive to existing connections, monitor existing workloads to make sure they re-establish any connections that were disrupted by the switch.
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/ebpf/index.mdx b/calico-cloud_versioned_docs/version-20-1/operations/ebpf/index.mdx
deleted file mode 100644
index 4350e69323..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/ebpf/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Documentation for eBPF dataplane mode, including how to enable eBPF dataplane mode.
-hide_table_of_contents: true
----
-
-# eBPF dataplane mode
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/ebpf/troubleshoot-ebpf.mdx b/calico-cloud_versioned_docs/version-20-1/operations/ebpf/troubleshoot-ebpf.mdx
deleted file mode 100644
index 5cb4befe29..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/ebpf/troubleshoot-ebpf.mdx
+++ /dev/null
@@ -1,247 +0,0 @@
----
-description: How to troubleshoot when running in eBPF mode.
----
-
-# Troubleshoot eBPF mode
-
-This document gives some general troubleshooting guidance for the eBPF dataplane.
-
-To understand basic concepts, we recommend the following video by Tigera Engineers: [Opening the Black Box: Understanding and troubleshooting Calico's eBPF Data Plane](https://www.youtube.com/watch?v=Mh43sNBu208).
-
-## Troubleshoot access to services
-
-If pods or hosts within your cluster have trouble accessing services, check the following:
-
-- Either $[prodname]'s eBPF mode or `kube-proxy` must be active on a host for services to function. If you
- disabled `kube-proxy` when enabling eBPF mode, verify that eBPF mode is actually functioning. If $[prodname]
- detects that the kernel is not supported, it will fall back to standard dataplane mode (which does not support
- services).
-
-  To verify that eBPF mode is correctly enabled, examine the logs for a `$[noderunning]` container (a `grep` sketch follows this list); if
-  eBPF mode is not supported, it will log an `ERROR` message that says
-
- ```bash
- BPF dataplane mode enabled but not supported by the kernel. Disabling BPF mode.
- ```
-
- If BPF mode is correctly enabled, you should see an `INFO` log that says
-
- ```bash
- BPF enabled, starting BPF endpoint manager and map manager.
- ```
-
-- In eBPF mode, external client access to services (typically NodePorts) is implemented using VXLAN encapsulation.
- If NodePorts time out when the backing pod is on another node, check your underlying network fabric allows
- VXLAN traffic between the nodes. VXLAN is a UDP protocol; by default it uses port 4789.
-- In DSR mode, $[prodname] requires that the underlying network fabric allows one node to respond on behalf of
- another.
-
- - In AWS, to allow this, the Source/Dest check must be disabled on the node's NIC. However, note that DSR only
- works within AWS; it is not compatible with external traffic through a load balancer. This is because the load
- balancer is expecting the traffic to return from the same host.
-
- - In GCP, the "Allow forwarding" option must be enabled. As with AWS, traffic through a load balancer does not
- work correctly with DSR because the load balancer is not consulted on the return path from the backing node.
-
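-A quick way to scan for the eBPF status messages mentioned in the first bullet above (a sketch, assuming an operator-managed installation where the $[noderunning] pods carry the `k8s-app=calico-node` label):
-
-```bash
-kubectl logs -n calico-system -l k8s-app=calico-node | grep -i "BPF"
-```
-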
-## The `calico-bpf` tool
-
-Since BPF maps contain binary data, the $[prodname] team wrote a tool to examine $[prodname]'s BPF maps.
-The tool is embedded in the $[nodecontainer] container image. To run the tool:
-
-- Find the name of the $[nodecontainer] Pod on the host of interest using
-
- ```bash
- kubectl get pod -o wide -n calico-system
- ```
-
- for example, `calico-node-abcdef`
-
-- Run the tool as follows:
-
- ```bash
- kubectl exec -n calico-system calico-node-abcdef -- calico-node -bpf ...
- ```
-
- For example, to show the tool's help:
-
- ```bash
- kubectl exec -n calico-system calico-node-abcdef -- calico-node -bpf help
-
- Usage:
- calico-bpf [command]
-
- Available Commands:
- arp Manipulates arp
- connect-time Manipulates connect-time load balancing programs
- conntrack Manipulates connection tracking
- counters Show and reset counters
- help Help about any command
- ipsets Manipulates ipsets
- nat Manipulates network address translation (nat)
- routes Manipulates routes
- version Prints the version and exits
-
- Flags:
- --config string config file (default is $HOME/.calico-bpf.yaml)
- -h, --help help for calico-bpf
- --log-level string Set log level (default "warn")
- -t, --toggle Help message for toggle
- ```
-
- (Since the tool is embedded in the main `calico-node` binary the `--help` option is not available, but running
- `calico-node -bpf help` does work.)
-
- To dump the BPF conntrack table:
-
- ```
- kubectl exec -n calico-system calico-node-abcdef -- calico-node -bpf conntrack dump
- ...
- ```
-
-  Also, it is possible to fetch various counters, like packets dropped by a policy or different errors, from the BPF dataplane using the same tool.
- For example, to dump the BPF counters of `eth0` interface:
-
- ```
- kubectl exec -n calico-system calico-node-abcdef -- calico-node -bpf counters dump --iface=eth0
- +----------+--------------------------------+---------+--------+
- | CATEGORY | TYPE | INGRESS | EGRESS |
- +----------+--------------------------------+---------+--------+
- | Accepted | by another program | 0 | 0 |
- | | by failsafe | 0 | 4 |
- | | by policy | 21 | 0 |
- | Dropped | by policy | 4 | 0 |
- | | failed decapsulation | 0 | 0 |
- | | failed encapsulation | 0 | 0 |
- | | incorrect checksum | 0 | 0 |
- | | malformed IP packets | 0 | 0 |
- | | packets with unknown route | 0 | 0 |
- | | packets with unknown source | 0 | 0 |
- | | packets with unsupported IP | 0 | 0 |
- | | options | | |
- | | too short packets | 0 | 0 |
- | Total | packets | 1593 | 1973 |
- +----------+--------------------------------+---------+--------+
- dumped eth0 counters.
- ```
-
-## Check if a program is dropping packets
-
-To check if an eBPF program is dropping packets, you can use either the `calico-bpf` or `tc` command-line tool. For example, if you
-are worried that the eBPF program attached to `eth0` is dropping packets, you can use `calico-bpf` to fetch BPF counters as described
-in the previous section and look for one of the `Dropped` counters or you can run the following command:
-
-```
-tc -s qdisc show dev eth0
-```
-
-The output should look like the following; find the `clsact` qdisc, which is the attachment point for eBPF programs.
-The `-s` option to `tc` causes `tc` to display the count of dropped packets, which amounts to the count of packets
-dropped by the eBPF programs.
-
-```
-...
-qdisc clsact 0: dev eth0 root refcnt 2
- sent 1340 bytes 10 pkt (dropped 10, overlimits 0 requeues 0)
- backlog 0b 0p requeues 0
-...
-```
-
-## Debug high CPU usage
-
-If you notice `$[noderunning]` using high CPU:
-
-- Check if `kube-proxy` is still running. If `kube-proxy` is still running, you must either disable `kube-proxy` or
- ensure that the Felix configuration setting `bpfKubeProxyIptablesCleanupEnabled` is set to `false`. If the setting
- is set to `true` (its default), then Felix will attempt to remove `kube-proxy`'s iptables rules. If `kube-proxy` is
- still running, it will fight with `Felix`.
-- If your cluster is very large, or your workload involves significant service churn, you can increase the interval
- at which Felix updates the services dataplane by increasing the `bpfKubeProxyMinSyncPeriod` setting. The default is
-  1 second. Increasing the value has the trade-off that service updates will happen more slowly (see the sketch after this list).
-- $[prodname] supports endpoint slices, similarly to `kube-proxy`. If your Kubernetes cluster supports endpoint
- slices and they are enabled, then you can enable endpoint slice support in $[prodname] with the
- `bpfKubeProxyEndpointSlicesEnabled` configuration flag.
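-
-For example, to raise `bpfKubeProxyMinSyncPeriod` to 5 seconds (a sketch; choose a value appropriate for your cluster):
-
-```bash
-kubectl patch felixconfiguration default --patch='{"spec": {"bpfKubeProxyMinSyncPeriod": "5s"}}'
-```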
-
-## eBPF program debug logs
-
-$[prodname]'s eBPF programs contain optional detailed debug logging. Although the logs can be very verbose (because
-the programs will log every packet), they can be invaluable for diagnosing eBPF program issues. To enable the logs, set the
-`bpfLogLevel` Felix configuration setting to `Debug`.
-
-:::caution
-
-Enabling logs in this way has a significant impact on eBPF program performance.
-
-:::
-
-The logs are emitted to the kernel trace buffer, and they can be examined using the following command:
-
-```
-tc exec bpf debug
-```
-
-Logs have the following format:
-
-```
- <...>-84582 [000] .Ns1 6851.690474: 0: ens192---E: Final result=ALLOW (-1). Program execution time: 7366ns
-```
-
-The parts of the log are explained below:
-
-- `<...>-84582` gives an indication about what program (or kernel process) was handling the
- packet. For packets that are being sent, this is usually the name and PID of the program that is actually sending
- the packet. For packets that are received, it is typically a kernel process, or an unrelated program that happens to
- trigger the processing.
-- `6851.690474` is the log timestamp.
-
-- `ens192---E` is the $[prodname] log tag. For programs attached to interfaces, the first part contains the
- first few characters of the interface name. The suffix is either `-I` or `-E` indicating "Ingress" or "Egress".
- "Ingress" and "Egress" have the same meaning as for policy:
-
- - A workload ingress program is executed on the path from the host network namespace to the workload.
- - A workload egress program is executed on the workload to host path.
- - A host endpoint ingress program is executed on the path from external node to the host.
- - A host endpoint egress program is executed on the path from host to external host.
-
-- `Final result=ALLOW (-1). Program execution time: 7366ns` is the message. In this case, logging the final result of
- the program. Note that the timestamp is massively distorted by the time spent logging.
-
-## Poor performance
-
-A number of problems can reduce the performance of the eBPF dataplane.
-
-- Verify that you are using the best networking mode for your cluster. If possible, avoid using an overlay network;
- a routed network with no overlay is considerably faster. If you must use one of $[prodname]'s overlay modes,
- use VXLAN, not IPIP. IPIP performs poorly in eBPF mode due to kernel limitations.
-- If you are not using an overlay, verify that the [Felix configuration parameters](../../reference/component-resources/node/felix/configuration.mdx)
-  `ipInIpEnabled` and `vxlanEnabled` are set to `false`. Those parameters control whether Felix configures itself to
-  allow IPIP or VXLAN, even if you have no IP pools that use an overlay. When enabled, the parameters also disable certain eBPF
-  mode optimisations for compatibility with IPIP and VXLAN.
-
- To examine the configuration:
-
- ```bash
- kubectl get felixconfiguration -o yaml
- ```
-
- ```yaml noValidation
- apiVersion: projectcalico.org/v3
- items:
- - apiVersion: projectcalico.org/v3
- kind: FelixConfiguration
- metadata:
- creationTimestamp: "2020-10-05T13:41:20Z"
- name: default
- resourceVersion: "767873"
- uid: 8df8d751-7449-4b19-a4f9-e33a3d6ccbc0
- spec:
- ...
- ipipEnabled: false
- ...
- vxlanEnabled: false
- kind: FelixConfigurationList
- metadata:
- resourceVersion: "803999"
- ```
-
-- If you are running your cluster in a cloud such as AWS, then your cloud provider may limit the bandwidth between
- nodes in your cluster. For example, most AWS nodes are limited to 5GBit per connection.
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/ebpf/use-cases-ebpf.mdx b/calico-cloud_versioned_docs/version-20-1/operations/ebpf/use-cases-ebpf.mdx
deleted file mode 100644
index 82781ae836..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/ebpf/use-cases-ebpf.mdx
+++ /dev/null
@@ -1,95 +0,0 @@
----
-description: Learn when to use eBPF, and when not to.
----
-
-# eBPF use cases
-
-## Big picture
-
-Learn when to use eBPF (and when not to).
-
-## What is eBPF?
-
-eBPF is a feature available in Linux kernels that allows you to run a virtual machine inside the kernel. This virtual machine allows you to safely load programs into the kernel, to customize its operation. Why is this important?
-
-In the past, making changes to the kernel was difficult: there were APIs you could call to get data, but you couldn’t influence what was inside the kernel or execute code. Instead, you had to submit a patch to the Linux community and wait for it to be approved. With eBPF, you can load a program into the kernel and instruct the kernel to execute your program if, for example, a certain packet is seen or another event occurs.
-
-With eBPF, the kernel and its behavior become highly customizable, instead of being fixed. This can be extremely beneficial when used under the right circumstances.
-
-## $[prodname] and eBPF
-
-$[prodname] offers an eBPF data plane as an alternative to our standard Linux dataplane (which is iptables based). While the standard data plane focuses on compatibility by working together with kube-proxy and your own iptables rules, the eBPF data plane focuses on performance, latency, and improving user experience with features that aren’t possible with the standard data plane.
-
-But $[prodname] doesn’t only support standard Linux and eBPF; it currently supports a total of three data planes, including Windows HNS, and has plans to add support for even more data planes in the near future. $[prodname] enables you, the user, to decide what works best for what you want to do.
-
-If you enable eBPF within $[prodname] but have existing iptables flows, we won’t touch them. Because maybe you want to use connect-time load balancing, but leave iptables as is. With $[prodname], it’s not an all-or-nothing deal—we allow you to easily load and unload our eBPF data plane to suit your needs, which means you can quickly try it out before making a decision. $[prodname] offers you the ability to leverage eBPF as needed, as an additional control to build your Kubernetes cluster security.
-
-## Use cases
-
-There are several use cases for eBPF, including traffic control, creating network policy, and connect-time load balancing.
-
-### Traffic control
-
-Without eBPF, packets use the standard Linux networking path on their way to a final destination. If a packet shows up at point A, and you know that the packet needs to go to point B, you can optimize the network path in the Linux kernel by sending it straight to point B. With eBPF, you can leverage additional context to make these changes in the kernel so that packets bypass complex routing and simply arrive at their final destination.
-
-This is especially relevant in a Kubernetes container environment, where you have numerous networks. (In addition to the host network stack, each container has its own mini network stack.) When traffic comes in, it is usually routed to a container stack and must travel a complex path as it makes its way there from the host stack. This routing can be bypassed using eBPF.
-
-### Creating network policy
-
-When creating network policy, there are two instances where eBPF can be used:
-
-- **eXpress Data Path (XDP)** – As a raw packet buffer enters the system, eBPF gives you an efficient way to examine that buffer and make quick decisions about what to do with it.
-
-- **Network policy** – eBPF allows you to efficiently examine a packet and apply network policy, both for pods and hosts.
-
-### Connect-time load balancing
-
-When load balancing service connections in Kubernetes, a port needs to talk to a service and therefore network address translation (NAT) must occur. A packet is sent to a virtual IP, and that virtual IP translates it to the destination IP of the pod backing the service; the pod then responds to the virtual IP and the return packet is translated back to the source.
-
-With eBPF, you can avoid this packet translation by using an eBPF program that you’ve loaded into the kernel and load balancing at the source of the connection. All NAT overhead from service connections is removed because destination network address translation (DNAT) does not need to take place on the packet processing path.
-
-## The price of performance
-
-So is eBPF more efficient than standard Linux iptables? The short answer: it depends.
-
-If you were to micro-benchmark how iptables works when applying network policies with a large number of IP addresses (i.e. ipsets), iptables in many cases is better than eBPF. But if you want to do something in the Linux kernel where you need to alter the packet flow in the kernel, eBPF would be the better choice. Standard Linux iptables is a complex system and certainly has its limitations, but at the same time it provides options to manipulate traffic; if you know how to program iptables rules, you can achieve a lot. eBPF allows you to load your own programs into the kernel to influence behavior that can be customized to your needs, so it is more flexible than iptables as it is not limited to one set of rules.
-
-Something else to consider is that, while eBPF allows you to run a program, add logic, redirect flows, and bypass processing—which is a definite win—it’s a virtual machine and as such must be translated to bytecode. By comparison, the Linux kernel’s iptables is already compiled to code.
-
-As you can see, comparing eBPF to iptables is not a straight apples-to-apples comparison. What we need to assess is performance, and the two key factors to look at here are latency (speed) and expense. If eBPF is very fast but takes up 80% of your resources, then it’s like a Lamborghini—an expensive, fast car. And if that works for you, great (maybe you really like expensive, fast cars). Just keep in mind that more CPU usage means more money spent with your cloud providers. So while a Lamborghini might be faster than a lot of other cars, it might not be the best use of money if you need to comply with speed limits on your daily commute.
-
-## When to use eBPF (and when not to)
-
-With eBPF, you get performance—but it comes at a cost. You need to find a balance between the two by figuring out the price of performance, and deciding if it’s acceptable to you from an eBPF perspective.
-
-Let’s look at some specific cases where it would make sense to use eBPF, and some where it would not.
-
-### When not to use eBPF
-
-### ✘ Packet-by-packet processing
-
-Using eBPF to perform CPU intensive or packet-by-packet processing, such as decryption and re-encryption for encrypted flows, would not be efficient because you would need to build a structure and do a lookup for every packet, which is expensive.
-
-### When to use eBPF
-
-### ✔ XDP
-
-eBPF provides an efficient way to examine raw packet buffers as they enter the system, allowing you to make quick decisions about what to do with them.
-
-### ✔ Connect-time load balancing
-
-With eBPF, you can load balance at the source using a program you’ve loaded into the kernel, instead of using a virtual IP. Since DNAT does not need to take place on the packet processing path, all NAT overhead from service connections is removed.
-
-### ✔ Building a service mesh control plane
-
-Service mesh relies on proxies like Envoy. A lot of thought has gone into designing this process over the years. The main reason for doing it this way is that, in many cases, it is not viable to do inline processing for application protocols like HTTP at the high speeds seen inside a cluster. Therefore, you should think of using eBPF to route traffic to a proxy like Envoy in an efficient way, rather than using it to replace the proxy itself. However, you do need to turn off connect-time load balancing (CTLB) so sidecars can see the service addresses. Given you are already taking a performance hit by the extra hop to the sidecar, not using CTLB performance optimization to avoid NAT overhead is likely not a big deal.
-
-## Summary
-
-Is eBPF a replacement for iptables? Not exactly. It’s hard to imagine everything working as efficiently with eBPF as it does with iptables. For now, the two co-exist and it’s up to the user to weigh the price-performance tradeoff and decide which feature to use when, given their specific needs.
-
-We believe the right solution is to leverage eBPF, along with existing mechanisms in the Linux kernel, to achieve your desired outcome. That’s why $[prodname] offers support for multiple data planes, including standard Linux, Windows HNS, and Linux eBPF. Since we have established that both eBPF and iptables are useful, the only logical thing to do in our opinion is to support both. $[prodname] gives you the choice so you can choose the best tool for the job.
-
-## Additional resources
-
-To learn more and see performance metrics from our test environment, see the blog, [Introducing the eBPF dataplane](https://www.projectcalico.org/introducing-the-calico-ebpf-dataplane/).
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/index.mdx b/calico-cloud_versioned_docs/version-20-1/operations/index.mdx
deleted file mode 100644
index 203cb25a91..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/index.mdx
+++ /dev/null
@@ -1,48 +0,0 @@
----
-description: Manage clusters and add new users.
----
-
-import { DocCardLink, DocCardLinkLayout } from '/src/___new___/components';
-
-# Operations
-
-Post-installation tasks for managing Calico Cloud.
-
-
-
-
-
-
-
-
-## Secure component communications
-
-
-
-
-
-
-## Monitoring
-
-
-
-
-
-
-
-
-
-
-
-
-## eBPF
-
-
-
-
-
-
-
-
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/license-options.mdx b/calico-cloud_versioned_docs/version-20-1/operations/license-options.mdx
deleted file mode 100644
index 50428128ab..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/license-options.mdx
+++ /dev/null
@@ -1,19 +0,0 @@
----
-description: Review options to track your Calico Enterprise license expiration.
----
-
-# License expiration and renewal
-
-import License from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_license.mdx';
-
-## Big picture
-
-Review options for tracking $[prodname] license expiration.
-
-## Concepts
-
-We highly recommend using the [license agent with Prometheus](monitor/metrics/license-agent.mdx) to get alerts on your $[prodname] license expiration day and avoid disruption to services. Regardless of whether you use the alerting feature, here are some things you should know.
-
-### FAQ
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/monitor/index.mdx b/calico-cloud_versioned_docs/version-20-1/operations/monitor/index.mdx
deleted file mode 100644
index 9f5f788a25..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/monitor/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Tools for scraping useful metrics.
-hide_table_of_contents: true
----
-
-# Monitoring
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/monitor/metrics/bgp-metrics.mdx b/calico-cloud_versioned_docs/version-20-1/operations/monitor/metrics/bgp-metrics.mdx
deleted file mode 100644
index 518c7b6320..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/monitor/metrics/bgp-metrics.mdx
+++ /dev/null
@@ -1,142 +0,0 @@
----
-description: Monitor BGP peering and route exchange in your cluster and get alerts by defining rules and thresholds.
----
-
-# BGP metrics
-
-## Big picture
-
-Use Prometheus configured for $[prodname] `$[noderunning]` to monitor the health of BGP peers within your cluster.
-
-## Value
-
-Using the open-source Prometheus monitoring and alerting toolkit, you can view time-series metrics from $[prodname] components in the Prometheus or Grafana interfaces.
-
-$[prodname] adds the ability to monitor high-level operations between BGP peers in your cluster. By defining a set of simple rules and thresholds, you can monitor peer-to-peer connection health between your nodes as well as the number of routes being exchanged, and receive alerts when they exceed configured thresholds.
-
-## Concepts
-
-```
- +-------------------+
- | Host |
- | +-------------------+ +------------+ +------------+
- | | Host |------------->--| | | |--->--
- | | +-------------------+ policy | Prometheus | | Prometheus | alert
- +-| | Host |----------->--| Server |-->--| Alert- |--->--
- | | +-------------+ | metrics | | | manager | mechanisms
- +-| | BGP Metrics |-------------->--| | | |--->--
- | | Server | | | | | |
- | +-------------+ | +------------+ +------------+
- +-------------------+ ^ ^
- | |
- Collect and store metrics. Web UI for accessing alert
- WebUI for accessing and states.
- querying metrics. Configure fan out
- Configure alerting rules. notifications to different
- alert receivers.
-```
-
-BGP metric reporting is accomplished using three key pieces:
-
-- BGP Metrics Server
-- Prometheus Server
-- Prometheus Alertmanager
-
-### About Prometheus
-
-Prometheus scrapes various instrumented jobs (endpoints) to collect time series data for a given set of metrics. The time series data can then be queried, and rules can be set up to monitor specific thresholds and trigger alerts. The data can also be visualized (for example, using Grafana).
-
-The Prometheus server deployed as part of $[prodname] scrapes every configured `$[noderunning]` target. Alerting rules that query BGP metrics can be configured in Prometheus and, when triggered, fire alerts to the Prometheus Alertmanager.
-
-Prometheus Alertmanager (or simply Alertmanager), deployed as part of $[prodname], receives alerts from Prometheus and forwards them to alerting mechanisms such as _PagerDuty_ or _OpsGenie_.
-
-### About $[prodname] `$[noderunning]`
-
-`$[noderunning]` bundles together the components required for networking containers with $[prodname]. The key components are:
-
-- Felix
-- BIRD
-- confd
-
-Its critical function means that it runs on every machine that provides endpoints. A binary running inside `$[noderunning]` monitors the BIRD daemon for peering and routing activity and reports these statistics to Prometheus.
-
-## How to
-
-BGP metrics are generated within `$[noderunning]` every 5 seconds using statistics pulled from the BIRD daemon.
-
-The metrics generated are:
-
-- `bgp_peers` - Total number of peers with a specific BGP connection status.
-- `bgp_routes_imported` - Current number of routes successfully imported into the routing table.
-- `bgp_route_updates_received` - Total number of route updates received over time (since startup).
-
-$[prodname] runs the BGP metrics server for Prometheus by default. Metrics are directly available on each compute node at `http://<node-IP>:9900/metrics`.
-
-Refer to [Configuring Prometheus](../prometheus/index.mdx) for information on how to create a new alerting rule or update the scrape interval that controls how often Prometheus collects metrics.
-
-### BGP peers metric
-
-The metric `bgp_peers` has the relevant labels `instance`, `status` and `ip_version`. Using this metric, you can identify how many peers have a specific BGP connection status with a given node instance and IP version. This metric will be available as a combination of `{instance, status, ip_version}`.
-
-Example queries:
-
-- Total number of peers currently with a BGP connection to the node instance “calico-node-1”, with status “Established”, for IP version “IPv4”.
-
-```
-bgp_peers{instance="calico-node-1", status="Established", ip_version="IPv4"}
-```
-
-- Total number of peers currently with a BGP connection to the node instance “calico-node-1”, with status “Down”, for IP version “IPv6”.
-
-```
-bgp_peers{instance="calico-node-1", status="Down", ip_version="IPv6"}
-```
-
-- Total number of peers currently with a BGP connection to any node instance, with a status that is not “Established”, for IP version “IPv4”.
-
-```
-bgp_peers{status!="Established", ip_version="IPv4"}
-```
-
-Valid BGP connection statuses are: "Idle", "Connect", "Active", "OpenSent", "OpenConfirm", "Established", "Close", "Down" and "Passive".
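-
-Building on these queries, a PrometheusRule along the following lines could alert when any peer connection to a node is not Established. The rule name, threshold, and `for` duration are illustrative and should be tuned for your environment.
-
-```yaml
-apiVersion: monitoring.coreos.com/v1
-kind: PrometheusRule
-metadata:
-  # Illustrative name; labels follow the other rule examples in this guide.
-  name: tigera-prometheus-bgp-monitoring
-  namespace: tigera-prometheus
-  labels:
-    role: tigera-prometheus-rules
-    prometheus: calico-node-prometheus
-spec:
-  groups:
-    - name: tigera-bgp.rules
-      rules:
-        - alert: BGPPeerNotEstablished
-          # Fires when a node reports peers in any state other than Established.
-          expr: bgp_peers{status!="Established", ip_version="IPv4"} > 0
-          for: 5m
-          labels:
-            severity: Warning
-          annotations:
-            summary: 'Node {{$labels.instance}} has BGP peers that are not Established.'
-```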
-
-### BGP routes imported metric
-
-The metric `bgp_routes_imported` has the relevant labels `instance` and `ip_version`. Using this metric, you can identify how many routes are being successfully imported into a given node instance's routing table at a specific point in time. This number can increase or decrease depending on how BGP rules process incoming routes. This metric will be available as a combination of `{instance, ip_version}`.
-
-Example queries:
-
-- Computes the per-second rate for the number of routes imported by a specific node instance “calico-node-1” looking up to 120 seconds back (using the two most recent data points).
-
-```
-irate(bgp_routes_imported{instance="calico-node-1",ip_version="IPv4"}[120s])
-```
-
-- Computes the per-second rate for the number of routes imported across all node instances looking up to 120 seconds back (using the two most recent data points).
-
-```
-irate(bgp_routes_imported{ip_version="IPv4"}[120s])
-```
-
-### BGP route updates received metric
-
-The metric `bgp_route_updates_received` has the relevant labels `instance` and `ip_version`. Using this metric, you can identify the total number of BGP routes received by a given node over time. This number includes all routes that have been accepted and imported into the routing table, as well as any routes that were rejected as invalid, rejected by filters, or rejected as already in the route table. This total number should only increase over time. This metric will be available as a combination of `{instance, ip_version}`.
-
-Example queries:
-
-- Computes the per-second rate for the number of routes received by a specific node instance “calico-node-1” looking up to 5 minutes back (using the two most recent data points).
-
-```
-irate(bgp_route_updates_received{instance="calico-node-1",ip_version="IPv4"}[5m])
-```
-
-- Computes the per-second rate for the number of routes received across all node instances looking up to 5 minutes back (using the two most recent data points).
-
-```
-irate(bgp_route_updates_received{ip_version="IPv4"}[5m])
-```
-
-## Additional resources
-
-- [Secure $[prodname] Prometheus endpoints](../../comms/secure-metrics.mdx)
-- [Configuring Prometheus](../prometheus/index.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/monitor/metrics/elasticsearch-and-fluentd-metrics.mdx b/calico-cloud_versioned_docs/version-20-1/operations/monitor/metrics/elasticsearch-and-fluentd-metrics.mdx
deleted file mode 100644
index 8dc6d71fc1..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/monitor/metrics/elasticsearch-and-fluentd-metrics.mdx
+++ /dev/null
@@ -1,55 +0,0 @@
----
-description: Monitor Fluentd metrics and get alerts on log storage or collection issues.
----
-
-# Fluentd metrics
-
-## Big picture
-
-Use the Prometheus monitoring and alerting tool for Fluentd metrics to ensure continuous network visibility.
-
-## Value
-
-Platform engineering teams rely on logs for visibility into their networks. If log collection or storage is disrupted, network visibility can be impacted. Prometheus can monitor log collection and storage metrics so platform engineering teams are alerted about problems before they escalate.
-
-## Concepts
-
-| Component | Description |
-| ---------- | ------------------------------------------------------------ |
-| Prometheus | Monitoring tool that scrapes metrics from instrumented jobs and displays time series data in a visualizer (such as Grafana). For $[prodname], the “jobs” that Prometheus harvests metrics from include the Fluentd component. |
-| Fluentd | Sends $[prodname] logs to Elasticsearch for storage. |
-
-## How to
-
-### Create Prometheus alerts for Fluentd
-
-The following example creates a Prometheus rule to monitor some important Fluentd metrics, and alert when they have crossed certain thresholds:
-
-```yaml noValidation
-apiVersion: monitoring.coreos.com/v1
-kind: PrometheusRule
-metadata:
-  name: tigera-prometheus-log-collection-monitoring
-  namespace: tigera-prometheus
-  labels:
-    role: tigera-prometheus-rules
-    prometheus: calico-node-prometheus
-spec:
-  groups:
-    - name: tigera-log-collection.rules
-      rules:
-        - alert: FluentdPodConsistentlyLowBufferSpace
-          expr: avg_over_time(fluentd_output_status_buffer_available_space_ratio[5m]) < 75
-          labels:
-            severity: Warning
-          annotations:
-            summary: "Fluentd pod {{$labels.pod}}'s buffer space is consistently below 75 percent capacity."
-            description: "Fluentd pod {{$labels.pod}} has very low buffer space. There may be connection issues between Elasticsearch and Fluentd, or there are too many logs to write out. Check the logs for the Fluentd pod."
-```
-
-#### The alert created in the example is described as follows:
-
-| Alert | Severity | Requires | Issue/reason |
-| ---------------------------------------- | --------------------- | -------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| **FluentdPodConsistentlyLowBufferSpace** | Non-critical, warning | Immediate investigation to ensure logs are being gathered correctly. | A Fluentd pod’s available buffer size has averaged less than 75% over the last 5 minutes. This could mean Fluentd is having trouble communicating with the Elasticsearch cluster, the Elasticsearch cluster is down, or there are simply too many logs to process. |
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/monitor/metrics/index.mdx b/calico-cloud_versioned_docs/version-20-1/operations/monitor/metrics/index.mdx
deleted file mode 100644
index 1a154f4983..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/monitor/metrics/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Configure Prometheus metrics.
-hide_table_of_contents: true
----
-
-# Metrics
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/monitor/metrics/license-agent.mdx b/calico-cloud_versioned_docs/version-20-1/operations/monitor/metrics/license-agent.mdx
deleted file mode 100644
index 5125a2487f..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/monitor/metrics/license-agent.mdx
+++ /dev/null
@@ -1,94 +0,0 @@
----
-description: Monitor Calico Cloud license metrics such as nodes used, nodes available, and days until license expires.
----
-
-# License metrics
-
-import License from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_license.mdx';
-
-## Big picture
-
-Use the Prometheus monitoring and alerting tool to get $[prodname] license metrics.
-
-## Value
-
-Platform engineering teams need to report licensing usage on third-party software (like $[prodname]) for their CaaS/Kubernetes platforms. This is often driven by compliance, but also to mitigate risks from license expiration or usage that may impact operations. For teams to easily access these vital metrics, $[prodname] provides license metrics using the Prometheus monitoring and alerting tool.
-
-## Concepts
-
-### About Prometheus
-
-The Prometheus monitoring tool scrapes metrics from instrumented jobs and displays time series data in a visualizer (such as Grafana). For $[prodname], the “jobs” that Prometheus harvests metrics from include the License Agent component.
-
-### About License Agent
-
-The **License Agent** is a containerized application that monitors the following $[prodname] licensing information from the Kubernetes cluster, and exposes the metrics to the Prometheus server:
-
-- Days till expiration
-- Nodes available
-- Nodes used
-
-### FAQ
-
-
-
-## How to
-
-- [Add license agent in your Kubernetes cluster](#add-license-agent-in-your-kubernetes-cluster)
-- [Create alerts using Prometheus metrics](#create-alerts-using-prometheus-metrics)
-
-### Add license agent in your Kubernetes cluster
-
-To add the license-agent component in a Kubernetes cluster for license metrics, install the pull secret and apply the license-agent manifest.
-
-1. Create a namespace for the license-agent.
- ```
- kubectl create namespace tigera-license-agent
- ```
-1. Install your pull secret.
- ```
- kubectl create secret generic tigera-pull-secret \
- --type=kubernetes.io/dockerconfigjson -n tigera-license-agent \
- --from-file=.dockerconfigjson=<path/to/pull/secret>
- ```
-1. Apply the manifest.
- ```
- kubectl apply -f $[filesUrl_CE]/manifests/licenseagent.yaml
- ```
-
-### Create alerts using Prometheus metrics
-
-In the following example, an alert is configured to fire when the license expires in fewer than 15 days.
-
-```yaml
-apiVersion: monitoring.coreos.com/v1
-kind: PrometheusRule
-metadata:
- name: calico-prometheus-license
- namespace: tigera-prometheus
- labels:
- role: tigera-prometheus-rules
- prometheus: calico-node-prometheus
-spec:
- groups:
- - name: tigera-license.rules
- rules:
- - alert: CriticalLicenseExpiry
- expr: license_number_of_days < 15
- labels:
- severity: Warning
- annotations:
- summary: 'Calico Enterprise License expires in less than 15 days'
- description: 'Calico Enterprise License expires in less than 15 days'
-```
-
-:::note
-
-If the Kubernetes api-server serves on any port other than 6443 or 443, add that port in the Egress policy of the license agent manifest.
-
-:::
-
-## Additional resources
-
-- [LicenseKey resource](../../../reference/resources/licensekey.mdx)
-- [Configure Alertmanager](../prometheus/alertmanager.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/monitor/metrics/policy-metrics.mdx b/calico-cloud_versioned_docs/version-20-1/operations/monitor/metrics/policy-metrics.mdx
deleted file mode 100644
index bc22cc85bb..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/monitor/metrics/policy-metrics.mdx
+++ /dev/null
@@ -1,142 +0,0 @@
----
-description: Monitor the effects of policy in your cluster and receive alerts by defining rules and thresholds.
----
-
-# Policy metrics
-
-$[prodname] adds the ability to monitor the effects of policies configured in your cluster.
-By defining a set of simple rules and thresholds, you can monitor traffic metrics and receive
-alerts when they exceed configured thresholds.
-
-```
- +------------+
- | |
- | TSEE |
- | Manager |
- | |
- | |
- | |
- +------------+
- ^
- |
- |
- |
- +-----------------+ |
- | Host | |
- | +-----------------+ +------------+ +------------+
- | | Host |------------->--| | | |--->--
- | | +-----------------+ policy | Prometheus | | Prometheus | alert
- +-| | Host |----------->--| Server |-->--| Alert |--->--
- | | +----------+ | metrics | | | Manager | mechanisms
- +-| | Felix |-------------->--| | | |--->--
- | +----------+ | +------------+ +------------+
- +-----------------+ ^ ^
- | |
- Collect and store metrics. Web UI for accessing alert
- WebUI for accessing and states.
- querying metrics. Configure fan out
- Configure alerting rules. notifications to different
- alert receivers.
-```
-
-Policy inspection and reporting is accomplished using four key pieces:
-
-- A $[prodname]-specific Felix binary running inside the `$[noderunning]` container
- monitors the host for denied/allowed packets and collects metrics.
-- Prometheus Server(s) deployed as part of the $[prodname] manifest scrapes
- every configured `$[noderunning]` target. Alerting rules querying denied packet
- metrics are configured in Prometheus and when triggered, fire alerts to
- the Prometheus Alertmanager.
-- Prometheus Alertmanager (or simply Alertmanager), deployed as part of
- the $[prodname] manifest, receives alerts from Prometheus and forwards
- alerts to various alerting mechanisms such as _PagerDuty_ or _OpsGenie_.
-- $[prodname] Manager, also deployed as part of the $[prodname] manifest,
- processes the metrics using pre-defined Prometheus queries and provides dashboards and associated workflows.
-
-Metrics are only generated on a node when there are packets directed at an endpoint that are being actively processed by a policy.
-Once generated, they stay alive for 60 seconds.
-
-Once Prometheus scrapes a node and collects policy metrics, each metric remains
-available in Prometheus until it is considered _stale_, that is,
-Prometheus has not seen any updates to the metric for some time. This time is
-configurable. Refer to
-[Configuring Prometheus](../prometheus/index.mdx)
-for more information.
-
-Because metrics expire as just described, it is entirely possible
-for a GET on a metrics query URL to return no information. This is expected
-if no packets have been processed by a policy on that node in
-the last 60 seconds.
-
-Metrics generated by each $[prodname] node are:
-
-- `calico_denied_packets` - Total number of packets denied by $[prodname] policies.
-- `calico_denied_bytes` - Total number of bytes denied by $[prodname] policies.
-- `cnx_policy_rule_packets` - Sum of allowed/denied packets over rules processed by
- $[prodname] policies.
-- `cnx_policy_rule_bytes` - Sum of allowed/denied bytes over rules processed by
- $[prodname] policies.
-- `cnx_policy_rule_connections` - Sum of connections over rules processed by $[prodname]
- policies.
-
-The metrics `calico_denied_packets` and `calico_denied_bytes` have the labels `policy` and `srcIP`.
-Using these two metrics, one can identify the policy that denied packets as well as
-the source IP address of the packets that were denied by this policy. Using
-Prometheus terminology, `calico_denied_packets` is the metric name and `policy`
-and `srcIP` are labels. Each one of these metrics will be available as a
-combination of `{policy, srcIP}`.
-
-Example queries:
-
-- Total number of bytes denied by $[prodname] policies, originating from the IP address "10.245.13.133",
-  matched by the `k8s_ns.ns-0` profile.
-
-```
-calico_denied_bytes{policy="profile|k8s_ns.ns-0|0|deny", srcIP="10.245.13.133"}
-```
-
-- Total number of packets denied by $[prodname] policies, originating from the IP address "10.245.13.149",
-  matched by the `k8s_ns.ns-0` profile.
-
-```
-calico_denied_packets{policy="profile|k8s_ns.ns-0|0|deny", srcIP="10.245.13.149"}
-```
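-
-As a hedged illustration, once these metrics are being scraped you could alert whenever any policy starts denying traffic; in a stable cluster these counts typically sit at zero. The rule and alert names below are placeholders to adapt.
-
-```yaml
-apiVersion: monitoring.coreos.com/v1
-kind: PrometheusRule
-metadata:
-  # Illustrative name; labels follow the other rule examples in this guide.
-  name: tigera-prometheus-denied-packets
-  namespace: tigera-prometheus
-  labels:
-    role: tigera-prometheus-rules
-    prometheus: calico-node-prometheus
-spec:
-  groups:
-    - name: tigera-denied-packets.rules
-      rules:
-        - alert: DeniedPacketsDetected
-          # Fires when any policy or profile is actively denying packets.
-          expr: rate(calico_denied_packets[5m]) > 0
-          for: 5m
-          labels:
-            severity: Warning
-          annotations:
-            summary: 'Policy {{$labels.policy}} is denying packets from {{$labels.srcIP}}.'
-```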
-
-The metrics `cnx_policy_rule_packets`, `cnx_policy_rule_bytes` and `cnx_policy_rule_connections` have the
-labels: `tier`, `policy`, `namespace`, `rule_index`, `action`, `traffic_direction`, `rule_direction`.
-
-Using these metrics, one can identify allowed and denied byte rates and packet rates, both inbound and outbound, indexed by both policy and rule. The $[prodname] Manager dashboard makes heavy use of these metrics.
-Staged policy names are prefixed with "staged:".
-
-Example queries:
-
-- Query counts for rules: Packet rates for specific rule by traffic_direction
-
-```
-sum(irate(cnx_policy_rule_packets{namespace="namespace-2",policy="policy-0",rule_direction="ingress",rule_index="rule-5",tier="tier-0"}[30s])) without (instance)
-```
-
-- Query counts for rules: Packet rates for each rule in a policy by traffic_direction
-
-```
-sum(irate(cnx_policy_rule_packets{namespace="namespace-2",policy="policy-0",tier="tier-0"}[30s])) without (instance)
-```
-
-- Query counts for a single policy by traffic_direction and action
-
-```
-sum(irate(cnx_policy_rule_packets{namespace="namespace-2",policy="policy-0",tier="tier-0"}[30s])) without (instance,rule_index,rule_direction)
-```
-
-- Query counts for all policies across all tiers by traffic_direction and action
-
-```
-sum(irate(cnx_policy_rule_packets[30s])) without (instance,rule_index,rule_direction)
-```
-
-See the
-[Felix configuration reference](../../../reference/component-resources/node/felix/configuration.mdx#calico-cloud-specific-configuration) for
-the settings that control the reporting of these metrics. $[prodname] manifests
-normally set `PrometheusReporterEnabled=true` and
-`PrometheusReporterPort=9081`, so these metrics are available on each compute
-node at `http://<node-IP>:9081/metrics`.
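-
-A minimal sketch of how those settings might look as a FelixConfiguration resource is shown below; the field names are assumed to mirror the `PrometheusReporterEnabled`/`PrometheusReporterPort` settings above and should be verified against the Felix configuration reference for your release.
-
-```yaml
-# Sketch only: verify field names against the Felix configuration reference.
-apiVersion: projectcalico.org/v3
-kind: FelixConfiguration
-metadata:
-  name: default
-spec:
-  prometheusReporterEnabled: true   # expose the policy metrics endpoint
-  prometheusReporterPort: 9081      # port used in the metrics URL above
-```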
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/monitor/metrics/recommended-metrics.mdx b/calico-cloud_versioned_docs/version-20-1/operations/monitor/metrics/recommended-metrics.mdx
deleted file mode 100644
index 798c87b36d..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/monitor/metrics/recommended-metrics.mdx
+++ /dev/null
@@ -1,562 +0,0 @@
----
-description: Recommended Prometheus metrics for monitoring Calico Enterprise components.
----
-
-# Recommended Prometheus metrics
-
-## Big picture
-
-Monitor the $[prodname] Typha, Felix, and policy component metrics to ensure optimal cluster operation.
-
-## Concepts
-
-$[prodname] Typha, Felix, and policy components are the most critical to monitor because they are responsible for ensuring networking and security functions are up-to-date and working as expected.
-
-### Typha
-
-Typha is a caching datastore proxy that sits between the calico-nodes and the Kubernetes API server. Its primary function is to allow for increased cluster scale by reducing load on the Kubernetes API server. Without Typha, in large clusters (200+ nodes) the API server would need a considerable amount of memory to handle the continuous watches and requests from the calico-nodes running in the cluster.
-
-Typha maintains a single datastore connection on behalf of all of its clients (processes running in the calico-node pods, with Felix being Typha's main client). Typha watches for node, pod, network policy, BGP configuration, and other events on the Kubernetes API server, caches and deduplicates this data, and fans out these events to its clients.
-
-### Felix
-
-Felix is a component of calico-node and is responsible for $[prodname] network policy.
-Felix must be continuously in sync with the datastore to ensure the correct set of policies are applied to the node it is running on.
-
-![Typha-felix](/img/calico-enterprise/typha-felix.png)
-
-### About metrics
-
-Each $[prodname] component that you want to connect to Prometheus for endpoint metrics has its own configuration (bgp, license, policy, felix, and typha).
-
-Note that Felix is a separate application with metric endpoints, its own core metrics to monitor itself, and a separate port for a second policy metric endpoint.
-
-## Metrics
-
-This section provides metrics recommendations for maintaining optimal cluster operations. Note the following:
-
-- Threshold values for each metric depend on the cluster size and churn rate.
-- Threshold recommendations are provided where possible, but because each cluster is different and metrics depend on cluster churn rate and scale, we recommend that you baseline the cluster to establish numbers that represent normal figures for your cluster.
-- Metrics that start increasing rapidly from the baseline set need attention.
-
-Typha
-- [Typha general metrics](#typha-general-metrics)
-- [Typha cluster mesh metrics](#typha-cluster-mesh-metrics)
-- [Typha client metrics](#typha-client-metrics)
-- [Typha cache internals](#typha-cache-internals)
-- [Typha snapshot details](#typha-snapshot-details)
-
-Felix
-- [Policy metrics](#policy-metrics)
-- [Felix cluster state metrics](#felix-cluster-state-metrics)
-- [Felix error metrics](#felix-error-metrics)
-- [Felix time-based metrics](#felix-time-based-metrics)
-
-## Typha general metrics
-
-### Datastore cache size
-
-| Datastore cache size | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | Note: Syncer (type) is Typha's internal name for a client (type). Individual syncer values: (typha_cache_size\{syncer="bgp"\}) (typha_cache_size\{syncer="dpi"\}) (typha_cache_size\{syncer="felix"\}) (typha_cache_size\{syncer="node-status"\}) (typha_cache_size\{syncer="tunnel-ip-allocation"\}) Sum of all syncers: The sum of all cache sizes (each syncer type has a cache). sum by (instance)(typha_cache_size) Largest syncer: max by (instance)(typha_cache_size) |
-| Example value | Example of: max by (instance)(typha_cache_size\{syncer="felix"\}) \{instance="10.0.1.20:9093"\} 661 \{instance="10.0.1.31:9093"\} 661 |
-| Explanation | The total number of key/value pairs in Typha's in-memory cache. This metric represents the scale of the $[prodname] datastore as it tracks how many WEPs (pods and services), HEPs (hostendpoints), networksets, globalnetworksets, $[prodname] Network Policies, etc. Typha is aware of across the entire Calico Federation. You can use this metric to monitor individual syncers to Typha (like Felix, BGP, etc.), or to get a sum of all syncers. We recommend that you monitor the largest syncer, but it is completely up to you. This is a good metric to understand how much data is in Typha. Note: If all Typhas are in sync then they should have the same value for this metric. |
-| Threshold value recommendation | The value of this metric will depend on the scale of the Calico Federation and will always increase as WEPs, $[prodname] network policies and clusters are added. Achieve a baseline first, then monitor for any unexpected increases from the baseline. |
-| Threshold breach symptoms | Unexpected increases may indicate memory leaks and performance issues with Typha. |
-| Threshold breach recommendations | Check CPU usage on Typha pods and Kubernetes nodes. Increase resources if needed, rollout and restart Typha(s) if needed. |
-| Priority level | Optional. |
-
-### CPU usage
-
-| CPU usage | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | rate(process_cpu_seconds_total[30s]) \* 100 |
-| Example value | \{endpoint="metrics-port", instance="10.0.1.20:9093", job="typha-metrics-svc", namespace="calico-system", pod="calico-typha-6c6cc9fcf7-csbdl", service="typha-metrics-svc"\} 0.27999999999999403 |
-| Explanation | CPU in use by Typha represented as a percentage of a core. |
-| Threshold value recommendation | A spike at startup is normal. It is recommended to achieve a baseline first, then monitor for any unexpected increases from this baseline. A rule of thumb is to investigate maintained CPU usage above 90%. |
-| Threshold breach symptoms | Unexpected maintained CPU usage could cause Typha to fall behind in updating its clients (for example, Felix) and could cause delays to policy updates. |
-| Threshold breach recommendations | Check CPU usage on Kubernetes nodes. If needed, increase resources, and rollout restart Typha(s). |
-| Priority level | Recommended. |
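-
-As an illustrative sketch only, a rule like the following (using the `typha-metrics-svc` job label shown in the example value above) would alert on sustained Typha CPU usage above the 90% rule of thumb; tune the window and threshold to your baseline.
-
-```yaml
-apiVersion: monitoring.coreos.com/v1
-kind: PrometheusRule
-metadata:
-  # Illustrative name; labels follow the other rule examples in this guide.
-  name: tigera-prometheus-typha-cpu
-  namespace: tigera-prometheus
-  labels:
-    role: tigera-prometheus-rules
-    prometheus: calico-node-prometheus
-spec:
-  groups:
-    - name: tigera-typha-cpu.rules
-      rules:
-        - alert: TyphaHighCpuUsage
-          # Sustained CPU above 90% of a core on any Typha pod.
-          expr: rate(process_cpu_seconds_total{job="typha-metrics-svc"}[5m]) * 100 > 90
-          for: 10m
-          labels:
-            severity: Warning
-          annotations:
-            summary: 'Typha pod {{$labels.pod}} CPU usage has stayed above 90% of a core.'
-```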
-
-### Memory usage
-
-| Memory usage | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | process_resident_memory_bytes |
-| Example value | process_resident_memory_bytes\{endpoint="metrics-port", instance="10.0.1.20:9093", job="typha-metrics-svc", namespace="calico-system", pod="calico-typha-6c6cc9fcf7-csbdl", service="typha-metrics-svc"\} 80515072 |
-| Explanation | Amount of memory used by Typha. |
-| Threshold value recommendation | It is recommended to achieve a baseline first, then monitor for any unexpected increases from this baseline. A rule of thumb is to investigate if maintained memory usage is above 90% of what is available from the underlying node. The metric can also be used for memory leaks. In this case, the metric would show Typha's memory consumption rising over time, even though the cluster is in a stable state. |
-| Threshold breach symptoms | Unexpected maintained memory usage could cause Typha to fall behind in updating its clients (for example, Felix) and could cause delays to policy updates. |
-| Threshold breach recommendations | Check memory usage on Kubernetes nodes. Increase resources if needed, and rollout restart Typha(s) if needed. |
-| Priority level | Recommended. |
-
-## Typha cluster mesh metrics
-
-The following metrics are applicable only if you have implemented [Cluster mesh](multicluster/overview.mdx).
-
-Note that this metric requires a count syntax because you will have a copy of the metric per RemoteClusterConfiguration. As shown in the table, the value `2 = In Sync` reflects good connections.
-
-```
-remote_cluster_connection_status\{cluster="foo"\} = 2
-remote_cluster_connection_status\{cluster="bar"\} = 2
-remote_cluster_connection_status\{cluster="baz"\} = 1
-```
-
-### Remote cluster connections (in-sync)
-
-| Remote cluster connections (in-sync) | |
-| ------------------------------------ | ------------------------------------------------------------ |
-| Metric | count by (instance) (remote_cluster_connection_status == 2) |
-| Explanation | This represents the number of remote cluster connections that are connected and in sync. Each remote cluster will report a *connection_status* value from the following list: - 0 = Not Connected - 1 = Connecting - 2 = In Sync - 3 = Resync in Process - 4 = Config Change Restart Required. We suggest the count syntax because there will be one copy of *remote_cluster_connection_status* per cluster: remote_cluster_connection_status[cluster="foo"] = 2, remote_cluster_connection_status[cluster="bar"] = 2, remote_cluster_connection_status[cluster="baz"] = 1. Counting the number of metrics with value 2 returns the number of In Sync clusters. |
-| Threshold value recommendation | When remote cluster connections are initializing, *connection_status* values will fluctuate. After the connection is established, this value should be equal to the number of remote clusters in the environment (if everything is in sync). |
-| Threshold breach symptoms | N/A For out-of-sync symptoms, see the out-of-sync metric. |
-| Threshold breach recommendations | N/A For out-of-sync recommendations, see the out-of-sync metric. |
-| Priority level | Recommended. |
-
-### Remote cluster connections (out-of-sync)
-
-The following metrics are applicable only if you have implemented [Cluster mesh](multicluster/overview.mdx).
-
-| Remote cluster connections (out-of-sync) | |
-| ---------------------------------------- | ------------------------------------------------------------ |
-| Metric | count by (instance) (remote_cluster_connection_status != 2) |
-| Explanation | Number of remote cluster connections that are not in sync (i.e. resyncing or failing to connect). Each remote cluster will report a *connection_status* value from the following list: - 0 = Not Connected - 1 = Connecting - 2 = In Sync - 3 = Resync in Process - 4 = Config Change Restart Required |
-| Threshold value recommendation | This count should be 0 if everything is in sync, that is, every remote cluster reports a *connection_status* of 2. Note: At Typha startup, it is normal to see non-2 status values, but they should stabilize at 2 after connections come up. |
-| Threshold breach symptoms | Typha will not receive updates from the relevant remote clusters. Connected clients will see stale or partial data from remote clusters. |
-| Threshold breach recommendations | Investigate Typha's logs where remote cluster connectivity events are logged. Ensure the networking between clusters is not experiencing issues. |
-| Priority level | Recommended. |
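-
-As a sketch under the same assumptions as the other rule examples in this guide, the out-of-sync count above can be turned into an alert; the `for` duration allows for the normal fluctuation at Typha startup.
-
-```yaml
-apiVersion: monitoring.coreos.com/v1
-kind: PrometheusRule
-metadata:
-  # Illustrative name; labels follow the other rule examples in this guide.
-  name: tigera-prometheus-remote-clusters
-  namespace: tigera-prometheus
-  labels:
-    role: tigera-prometheus-rules
-    prometheus: calico-node-prometheus
-spec:
-  groups:
-    - name: tigera-remote-clusters.rules
-      rules:
-        - alert: RemoteClusterOutOfSync
-          # Fires when any remote cluster connection is not In Sync (status 2).
-          expr: count by (instance) (remote_cluster_connection_status != 2) > 0
-          for: 15m
-          labels:
-            severity: Warning
-          annotations:
-            summary: 'Typha {{$labels.instance}} has remote cluster connections that are not in sync.'
-```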
-
-## Typha client metrics
-
-### Total connections accepted
-
-| Total connections accepted | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | typha_connections_accepted |
-| Example value | typha_connections_accepted\{endpoint="metrics-port", instance="10.0.1.20:9093", job="typha-metrics-svc", namespace="calico-system", pod="calico-typha-6c6cc9fcf7-csbdl", service="typha-metrics-svc"\} 10 |
-| Explanation | Total number of connections accepted over time. This value always increases. |
-| Threshold value recommendation | A steady increase over time is normal. Counters rising after a Felix or Typha restart is also normal (as clients get rebalanced). Investigate connection counters that rise rapidly with no Felix or Typha restarts. |
-| Threshold breach symptoms | Counters rising when there are no Felix or Typha restarts, or no action that could cause restarts (an upgrade for example), could indicate unexpected Felix or Typha restarts or issues. |
-| Threshold breach recommendations | Check resource usage on Typha(s) and Kubernetes nodes. Increase resources if needed. |
-| Priority level | Optional. |
-
-### Client connections actively streaming
-
-| Client connections actively streaming | |
-| ------------------------------------- | ------------------------------------------------------------ |
-| Metric | sum by (instance) (typha_connections_streaming) |
-| Example value | \{instance="10.0.1.20:9093"\} 10 \{instance="10.0.1.31:9093"\} 5 |
-| Explanation | Current number of active connections that are "streaming" (have completed the handshake), to this Typha. After a connection has been Accepted (reported in the previous metric), there will be a handshake before the connection is deemed to be actively streaming. This indicates how many clients are connected to a Typha. The sum reflects per-cache metrics as well. |
-| Threshold value recommendation | Compare the value for Total Connections Accepted and Client Connections Actively Streaming. The fluctuation of these values should be in sync with each other if accepted connections are turning into actively streamed connections. If there is a discrepancy, you should investigate. Note: As always, it is recommended to baseline the relationship between these two metrics to have a sense of what is normal. It is also worth noting that in smaller clusters, it is normal for Typha to be unbalanced. Typha can handle hundreds of connections so it is of no concern if all nodes in a 10-node cluster (for example) connect to the same Typha. |
-| Threshold breach symptoms | Felix is not getting updates from Typha. $[prodname] network policies are out-of-sync. |
-| Threshold breach recommendations | Check Typha and Felix logs, and rollout restart Typha(s) if needed. |
-| Priority level | Recommended. |
-
-### Rebalanced client connections
-
-| Rebalanced client connections | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | rate(typha_connections_dropped[$__rate_interval]) |
-| Example value | \{endpoint="metrics-port", instance="10.0.1.20:9093", job="typha-metrics-svc", namespace="calico-system", pod="calico-typha-6c6cc9fcf7-csbdl", service="typha-metrics-svc"\} |
-| Explanation | Number of client connections dropped to rebalance and share the load across different Typhas. |
-| Threshold value recommendation | It is normal to see this value increasing sometimes. Investigate if the dropped-connection counters are rising constantly. If all Typhas are dropping connections because all Typhas believe they have too much load, this also warrants investigation. |
-| Threshold breach symptoms | Dropping connections is rate limited so it should not affect the cluster as a whole. Typha clients, like Felix, will get dropped sometimes (but not constantly), and could result in periodic delays to policy updates. |
-| Threshold breach recommendations | Ensure that the Kubernetes nodes have enough resources. |
-| Priority level | Optional. |
-
-### 99 percentile client fall-behind
-
-| 99 percentile client fall-behind | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | max by (instance) (typha_client_latency_secs\{quantile='0.99'\}) |
-| Example value | \{instance="10.0.1.20:9093"\} 0.1234 \{instance="10.0.1.31:9093"\} 0.1234 |
-| Explanation | This metric measures how far behind Typha's client-handling threads are at reading updates. This metric will increase if: a) the client (e.g. Felix) is slow or overloaded and cannot keep up with what Typha is sending, or b) Typha is overloaded and cannot keep up with writes to all its clients. This metric is a good indication of your cluster, Felix, and Typha health. |
-| Threshold value recommendation | It is normal for this to spike when new clients connect; they must download and process the snapshot, during which time they will fall slightly behind. Investigate if latency persists. |
-| Threshold breach symptoms | Typha clients receiving updates from Typha will be behind in time. Potential symptoms could include $[prodname] network policies being out-of-sync. |
-| Threshold breach recommendations | Check Typha and Felix logs and resource usage. It is recommended to focus on Felix logs and resource usage first, as there is generally more overhead with Felix and thus more of a chance of overload. Rollout restart Typha(s) and calico-node(s) if needed. |
-| Priority level | Recommended. |
-
-### 99 percentile client write latency
-
-| 99 percentile client write latency | |
-| ---------------------------------- | ------------------------------------------------------------ |
-| Metric | max by (instance) (typha_client_write_latency_secs) |
-| Example value | \{instance="10.0.1.20:9093"\} 0.007450815 |
-| Explanation | Time for Typha to write to a client's socket (for example, Felix). |
-| Threshold value recommendation | If the write latency is increasing, this indicates that a client (for example, Felix) is having an issue, or the network is having an issue. It is normal for intermittent spikes. Investigate any persistent latency. |
-| Threshold breach symptoms | Typha clients will lag behind in receiving updates that Typha is sending. Potential symptoms include $[prodname] network policies being out-of-sync. |
-| Threshold breach recommendations | Check Felix logs and resource usage. |
-| Priority level | Recommended. |
-
-### 99 percentile client ping latency
-
-| 99 percentile client ping latency | |
-| --------------------------------- | ------------------------------------------------------------ |
-| Metric | max by (instance) (typha_ping_latency\{quantile="0.99"\}) |
-| Example value | \{instance="10.0.1.20:9093"\} 0.034285331 |
-| Explanation | This metric tracks the round-trip-time from Typha to a client. How long it takes for Typha's clients to respond to pings over the Typha protocol. |
-| Threshold value recommendation | An increase in this metric above 1 second indicates that the clients, network or Typha are more heavily loaded. It is normal for intermittent spikes. Persistent latency above 1 second warrants investigation. |
-| Threshold breach symptoms | Typha clients could be behind in time on updates Typha is sending. Potential symptoms include $[prodname] network policies being out-of-sync. |
-| Threshold breach recommendations | Check Typha and Felix logs and resource usage. It is recommended to focus on Felix logs and resource usage first, as there is generally more overhead with Felix and thus more of a chance of overload. Check if the node is overloaded and review/increase calico-node/Typha CPU requests if needed. If needed, rollout restart Typha(s) and calico-node(s). |
-| Priority level | Recommended. |
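-
-Using the 1-second rule of thumb above, a hedged example alert on persistent client ping latency could look like this; adjust the quantile, threshold, and duration to your baseline.
-
-```yaml
-apiVersion: monitoring.coreos.com/v1
-kind: PrometheusRule
-metadata:
-  # Illustrative name; labels follow the other rule examples in this guide.
-  name: tigera-prometheus-typha-ping-latency
-  namespace: tigera-prometheus
-  labels:
-    role: tigera-prometheus-rules
-    prometheus: calico-node-prometheus
-spec:
-  groups:
-    - name: tigera-typha-latency.rules
-      rules:
-        - alert: TyphaClientPingLatencyHigh
-          # 99th percentile round-trip time from Typha to its clients above 1 second.
-          expr: max by (instance) (typha_ping_latency{quantile="0.99"}) > 1
-          for: 15m
-          labels:
-            severity: Warning
-          annotations:
-            summary: 'Typha {{$labels.instance}} client ping latency (p99) has stayed above 1 second.'
-```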
-
-## Typha cache internals
-
-### 99 percentile breadcrumb size
-
-| 99 percentile breadcrumb size | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | max by (instance) (typha_breadcrumb_size\{quantile="0.99"\}) |
-| Explanation | Typha stores datastore changes as a series of blocks called breadcrumbs. Typha will store updates inside these breadcrumbs (for example, if a pod churned, this would be a single update). Typha can store multiple updates in a single breadcrumb, with the default maximum size being 100. |
-| Threshold value recommendation | Typha generating blocks of size 100 during start up is normal. Investigate if Typha is consistently generating blocks of size 90+, which can indicate Typha is overloaded. |
-| Threshold breach symptoms | Sustained block sizes of 100 can indicate that Typha is falling behind on information and updates contained in the datastore. This will lead to Typha clients also falling behind (for example, $[prodname] network policy objects may not be current). |
-| Threshold breach recommendations | Check Typha logs and resource usage. Check if there is a lot of activity within the cluster that would cause Typha to send large breadcrumbs (for example, a huge amount of pod churn). If possible, reduce churn rate of resources on the cluster. |
-| Priority level | Recommended. |
-
-### Non-blocking breadcrumbs fraction
-
-| Non-blocking breadcrumb fraction | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | (sum by (instance) (rate(typha_breadcrumb_non_block[30s])))/((sum by (instance) (rate(typha_breadcrumb_non_block[30s])))+(sum by (instance) (rate(typha_breadcrumb_block[30s])))) |
-| Example value | \{instance="10.0.1.20:9093"\} NaN |
-| Explanation | Typha stores datastore changes as a series of blocks called "breadcrumbs". Each client "follows the breadcrumbs" either by blocking and waiting, or skipping to the next one (non-blocking) if it is already available. Non-blocking breadcrumbactions indicates that Typha is constantly sending breadcrumbs to keep up with the datastore. Blocking breadcrumbactions indicate that Typha and the client have caught up, are up-to-date, and are waiting on the next breadcrumb. This metric will give a ratio between blocking and non-blocking actions that can indicate the health of Typha, its clients, and the cluster. |
-| Threshold value recommendation | As the load on Typha increases, the ratio of skip-ahead, non-blocking reads, increases. If it approaches 100% then Typha may be overloaded (since clients only do non-blocking reads when they're behind). |
-| Threshold breach symptoms | Consistent non-blocking breadcrumbs could indicate that Typha is falling behind on information and updates contained in the datastore. This will lead to Typha clients also being behind (for example, $[prodname] network policy object may not be current). |
-| Threshold breach recommendations | Check Typha and Felix logs and resource usage. Check if there is a lot of activity within the cluster that would cause Typha to continuously send non-blocking breadcrumbs. |
-| Priority level | Recommended. |
-
-### Datastore updates total
-
-| Datastore updates total | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | sum by (instance) (rate(typha_updates_total[30s])) |
-| Example value | \{instance="10.0.1.20:9093"\} 0 |
-| Explanation | The rate of updates from the datastore(s). For example, updates to Pods/Nodes/Policies/etc. |
-| Threshold value recommendation | Intermittent spikes are expected. Constant updates indicate a very busy cluster (for example, lots of pod churn). |
-| Threshold breach symptoms | Constant updates could lead to overloaded Typhas, where Typha's clients could fall behind. |
-| Threshold breach recommendations | Ensure Typha has enough resources to handle a very dynamic cluster. |
-| Priority level | Optional. |
-
-### Datastore update skipped (no-ops)
-
-| Datastore update skipped (no-ops) | |
-| --------------------------------- | ------------------------------------------------------------ |
-| Metric | sum by (instance) (rate(typha_updates_skipped[30s])) |
-| Example value | \{instance="10.0.1.20:9093"\} 0 |
-| Explanation | The number of updates from the datastore that Typha detected were no-ops. For example, an update to a Kubernetes node resource that did not touch any values that are of interest to $[prodname]. Such updates are not propagated to clients, which saves resources. |
-| Threshold value recommendation | N/A |
-| Threshold breach symptoms | N/A |
-| Threshold breach recommendations | N/A |
-| Priority level | Optional. |
-
-## Typha snapshot details
-
-### Snapshot send time
-
-| Median snapshot send time | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | max by (instance) (typha_client_snapshot_send_secs\{quantile="0.5"\}) |
-| Example value | \{instance="10.0.1.20:9093"\} NaN |
-| Explanation | The median time to stream the initial datastore snapshot to each client. It is useful to know the time it takes for a client to receive the data when it connects; it does not include time to process the data. |
-| Threshold value recommendation | Investigate if this value is moving towards tens of seconds. |
-| Threshold breach symptoms | High values of this metric could indicate that newly-started clients are taking a long time to get the latest snapshot of the datastore, increasing the window of time where networking/policy updates are not being applied to the dataplane during a restart/upgrade. Typha has a write timeout for writing the snapshot; if a client cannot receive the snapshot within that timeout, it is disconnected. Clients falling behind on information and updates contained in the datastore (for example, $[prodname] network policy object may not be current). |
-| Threshold breach recommendations | Check Typha and calico-node logs and resource usage. Check for network congestion. Investigate why a particular calico-node is slow (it is likely on an overloaded node with insufficient CPU). |
-| Priority level | Optional. |
-
-### Clients requiring grace period
-
-| Clients requiring grace period | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | sum by (instance) (typha_connections_grace_used) |
-| Example value | \{instance="10.0.1.20:9093"\} 0 |
-| Explanation | The number of Typhas with clients that required a grace period. After sending the snapshot to the client, Typha allows a grace period for the client to catch up to the most recent data. Typha sending the initial snapshot should take < 1 second, but the processing of the snapshot could take longer, so this grace period is there to allow the newly connected client to process the snapshot. |
-| Threshold value recommendation | If this metric is constantly increasing, it can indicate performance issues with Typha and its clients, and may warrant investigation. |
-| Threshold breach symptoms | High values of this metric could indicate clients falling behind on information and updates contained in the datastore (for example, $[prodname] network policy object may not be current). |
-| Threshold breach recommendations | Check Typha and calico-node logs and resource usage. Check for network congestion, and determine the root cause. |
-| Priority level | Optional. |
-
-### Max snapshot size (raw)
-
-| Max snapshot size (raw) | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | max(typha_snapshot_raw_bytes) |
-| Example value | \{\} 557359 |
-| Explanation | The raw size in bytes of snapshots sent from Typha to clients. |
-| Threshold value recommendation | N/A |
-| Threshold breach symptoms | N/A |
-| Threshold breach recommendations | N/A |
-| Priority Level | Optional. |
-
-### Max snapshot size (compressed)
-
-| Max snapshot size (compressed) | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | max(typha_snapshot_compressed_bytes) |
-| Example value | \{\} 134845 |
-| Explanation | The compressed size in bytes of snapshots sent from Typha to clients. |
-| Threshold value recommendation | This metric can be helpful for customers to estimate the bandwidth required for Felix to start up. For example, if the compressed snapshot size is 20MB on average, and 1000 Felix/calico-nodes start up, the bandwidth requirements could be estimated at 20GB between the pool of Typhas and the set of Felixes across the network. |
-| Threshold breach symptoms | N/A |
-| Threshold breach recommendations | N/A |
-| Priority Level | Optional. |
-
-## Policy metrics
-
-:::note
-The following policy metrics are exposed by Felix on a separate endpoint and are used in Manager UI. They require special Prometheus configuration to scrape. For details, see [Policy metrics](./policy-metrics).
-
-:::
-
-### Denied traffic
-
-| Denied traffic | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | calico_denied_packets calico_denied_bytes |
-| Example value | calico_denied_packets\{endpoint="calico-metrics-port", instance="ip-10-0-1-30.ca-central-1.compute.internal", job="calico-node-metrics", namespace="calico-system", pod="calico-node-6pcqm", policy="default |
-| Explanation | Number of packets or bytes that have been dropped by explicit or implicit deny rules. Note that you'll get one instance of `calico_denied_packets/bytes` for each policy rule that is denying traffic. For example: calico_denied_packets\{policy="tier1\|fv/policy1\|0\|deny\|-1",srcIP="10.245.13.133"\} |
-| Threshold value recommendation | The general rule of thumb is this metric should report zero at a stable state. Any deviation means that policy and traffic have diverged. Achieving a zero state depends on the stability and maturity of your cluster and policy. |
-| Threshold breach symptoms | Either unexpected traffic is being denied because of an attack (one example), or expected traffic is being denied because of a misconfiguration in a policy. |
-| Threshold breach recommendations | If this metric indicates that policy and traffic have diverged, the recommended steps are: Determine if an attack is causing the metric to spike, or if these flows should be allowed. If the flow should indeed be allowed, update the policy or a preceding policy to allow this traffic. |
-| Priority level | Recommended. |
-
-### Traffic per rule
-
-| Traffic per rule | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | cnx_policy_rule_bytes cnx_policy_rule_packets |
-| Example value | cnx_policy_rule_bytes\{action="allow", endpoint="calico-metrics-port", instance="ip-10-0-1-20.ca-central-1.compute.internal", job="calico-node-metrics", namespace="calico-system", pod="calico-node-qzpkt", policy="es-kube-controller-access", rule_direction="egress", rule_index="1", service="calico-node-metrics", tier="allow-tigera", traffic_direction="inbound"\} |
-| Explanation | Number of bytes or packets handled by $[prodname] network policy rules. |
-| Threshold value recommendation | This metric should usually be non-zero (unless expected). A zero value indicates the rule is not matching any packets, and could be surplus to requirements. |
-| Threshold breach symptoms | N/A |
-| Threshold breach recommendations | If this metric consistently reports a zero value over an acceptable period of time, you can consider removing the policy rule. |
-| Priority Level | Optional. |
-
-### Connections per policy rule
-
-| Connections per policy rule | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | cnx_policy_rule_connections |
-| Example value | cnx_policy_rule_connections\{endpoint="calico-metrics-port", instance="ip-10-0-1-20.ca-central-1.compute.internal", job="calico-node-metrics", namespace="calico-system", pod="calico-node-qzpkt", policy="es-kube-controller-access", rule_direction="egress", rule_index="0", service="calico-node-metrics", tier="allow-tigera", traffic_direction="outbound"\} |
-| Explanation | Number of connections handled by $[prodname] policy rules. |
-| Threshold value recommendation | This metric is similar to *Traffic per Rule* but this deals more with flow monitoring. This metric should usually be non-zero. A zero value indicates that the rule is not matching any packets and could be surplus to requirements. |
-| Threshold breach symptoms | N/A |
-| Threshold breach recommendations | If this metric consistently reports a zero value over an acceptable period of time, this policy rule can be considered for removal. |
-| Priority Level | Optional. |
-
-## Felix cluster-state metrics
-
-### CPU usage
-
-| CPU usage | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | rate(process_cpu_seconds_total[30s]) \* 100 |
-| Example value | \{endpoint="metrics-port", instance="10.0.1.20:9091", job="felix-metrics-svc", namespace="calico-system", pod="calico-node-qzpkt", service="felix-metrics-svc"\}3.1197504199664072 |
-| Explanation | CPU in use by calico-node represented as a percentage of a core. |
-| Threshold value recommendation | A spike at startup is normal. It is recommended to first achieve a baseline and then monitor for any unexpected increases from this baseline. Investigate if maintained CPU usage goes above 90%. |
-| Threshold breach symptoms | Unexpected maintained CPU usage could cause Felix to fall behind and could cause delays to policy updates. |
-| Threshold breach recommendations | Check CPU usage on Kubernetes nodes. Increase resources if needed, rollout restart calico-node(s) if needed. |
-| Priority level | Recommended. |
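-
-As a sketch of how the 90% guidance above could be expressed as an alert, the following PrometheusRule fires when a calico-node pod sustains more than 90% of a core for ten minutes. The rule name, `for` duration, namespace, and labels are illustrative assumptions; the `felix-metrics-svc` job label follows the example value in this table.
-
-```yaml
-apiVersion: monitoring.coreos.com/v1
-kind: PrometheusRule
-metadata:
-  name: calico-prometheus-felix-cpu
-  namespace: tigera-prometheus
-  labels:
-    role: tigera-prometheus-rules
-    prometheus: calico-node-prometheus
-spec:
-  groups:
-    - name: calico.rules
-      rules:
-        - alert: CalicoNodeHighCpu
-          # calico-node using more than 90% of a core, sustained for 10 minutes.
-          expr: rate(process_cpu_seconds_total{job="felix-metrics-svc"}[30s]) * 100 > 90
-          for: 10m
-          labels:
-            severity: warning
-          annotations:
-            summary: 'calico-node pod {{$labels.pod}} CPU usage is above 90% of a core'
-```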
-
-### Memory usage
-
-| Memory usage | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | process_resident_memory_bytes |
-| Example value | process_resident_memory_bytes\{endpoint="metrics-port", instance="10.0.1.20:9091", job="felix-metrics-svc", namespace="calico-system", pod="calico-node-qzpkt", service="felix-metrics-svc"\} 98996224 |
-| Explanation | Amount of memory in use by calico-node. |
-| Threshold value recommendation | Establish a baseline first, then monitor for unexpected increases from that baseline. Investigate if sustained memory usage goes above 90% of what is available from the underlying node. |
-| Threshold breach symptoms | Unexpected, sustained memory usage could cause Felix to fall behind and could cause delays to policy updates. |
-| Threshold breach recommendations | Check memory usage on Kubernetes nodes. Increase resources if needed, and rollout restart calico-node(s) if needed. |
-| Priority level | Recommended. |
-
-### Active hosts on each endpoint
-
-| Active hosts on each endpoint | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | felix_active_local_endpoints |
-| Example value | felix_active_local_endpoints\{endpoint="metrics-port", instance="10.0.1.30:9091", job="felix-metrics-svc", namespace="calico-system", pod="calico-node-6pcqm", service="felix-metrics-svc"\} 36 |
-| Explanation | Number of active pod-networked pods and HEPs on this node. |
-| Threshold value recommendation | Threshold relates to resource limits on the node, for example, the kubelet's max pods setting. |
-| Threshold breach symptoms | Suggests Felix is getting out of sync. |
-| Threshold breach recommendations | Perform a rolling restart of calico-node and report the issue to support. |
-| Priority level | Optional. |
-
-### Active calico nodes
-
-| Active calico nodes | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | max(felix_cluster_num_hosts) |
-| Example value | \{\} 3 |
-| Explanation | Total number of nodes in the cluster that have calico-node deployed and running. |
-| Threshold value recommendation | This value should be equal to the number of nodes in the cluster. If there are discrepancies, then calico-nodes on some nodes are having issues (see the sample alert after this table). |
-| Threshold breach symptoms | $[prodname] network policies on affected nodes could be out-of-sync. |
-| Threshold breach recommendations | Check calico-node logs, rollout restart calico-node if needed. |
-| Priority level | Recommended. |
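-
-One way to automate the node-count check above is to compare `felix_cluster_num_hosts` with the node count reported by kube-state-metrics. The sketch below assumes kube-state-metrics is scraped by the same Prometheus and exposes `kube_node_info`; if it is not, substitute another source for the expected node count. The rule name, namespace, and labels are illustrative.
-
-```yaml
-apiVersion: monitoring.coreos.com/v1
-kind: PrometheusRule
-metadata:
-  name: calico-prometheus-node-count
-  namespace: tigera-prometheus
-  labels:
-    role: tigera-prometheus-rules
-    prometheus: calico-node-prometheus
-spec:
-  groups:
-    - name: calico.rules
-      rules:
-        - alert: CalicoNodeCountMismatch
-          # Fewer calico-nodes reporting in than there are nodes in the cluster.
-          expr: max(felix_cluster_num_hosts) < count(kube_node_info)
-          for: 15m
-          labels:
-            severity: warning
-          annotations:
-            summary: 'Not all cluster nodes are running a healthy calico-node'
-```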
-
-### Felix cluster policies
-
-| Felix cluster policies | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | felix_cluster_num_policies |
-| Example value | felix_cluster_num_policies\{endpoint="metrics-port", instance="10.0.1.20:9091", job="felix-metrics-svc", namespace="calico-system", pod="calico-node-qzpkt", service="felix-metrics-svc"\} 58 |
-| Explanation | Total number of $[prodname] network policies in the cluster. |
-| Threshold value recommendation | Because $[prodname] is a distributed system, the number of policies should be generally consistent across all nodes. Some skew between nodes is expected for a short period while they sync; however, they should never be out of sync for long. |
-| Threshold breach symptoms | If nodes are out of sync for a long time, calico-nodes may be having issues or experiencing resource contention. Check the Errors Plot to see if any iptables errors are reported. |
-| Threshold breach recommendations | Redeploy calico-node if issues are seen, and increase resources if needed. |
-| Priority level | Optional. |
-
-### Felix active local policies
-
-| Felix active local policies | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | felix_active_local_policies |
-| Example value | felix_active_local_policies\{endpoint="metrics-port", instance="10.0.1.30:9091", job="felix-metrics-svc", namespace="calico-system", pod="calico-node-6pcqm", service="felix-metrics-svc"\} 44 |
-| Explanation | Total number of network policies deployed on per node basis. |
-| Threshold value recommendation | There is no hard limit on active policies. $[prodname] can handle 1000+ active policies, but this impacts performance, especially if there is pod churn. The best solution is to optimize policies by combining multiple rules into one policy and making sure that top-level policy selectors are used. |
-| Threshold breach symptoms | N/A |
-| Threshold breach recommendations | Redeploy calico-node if issues are seen, and increase resources if needed. |
-| Priority level | Recommended. |
-
-### Felix open FDS
-
-| Felix open FDS | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | sum by (pod) (process_open_fds\{pod=~"calico-node.*"\}) |
-| Example value | \{pod="calico-node-6pcqm"\} 90 |
-| Explanation | Number of opened file descriptors per calico-node pod. |
-| Threshold value recommendation | Alert on this metric when it approaches the ulimit (as reported by the `process_max_fds` metric); see the sample alert after the next table. You should not be anywhere near the maximum. |
-| Threshold breach symptoms | Felix may become unstable/crash or fail to apply updates as it should. These failures and issues are logged. |
-| Threshold breach recommendations | Check Felix logs, redeploy calico-node if you see issues in the logs, and increase the `max_fds` value if possible. |
-| Priority level | Optional. |
-
-### Felix max FDS
-
-| Felix max FDS | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | sum by (pod) (process_max_fds\{pod=~"calico-node.*"\}) |
-| Example value | \{pod="calico-node-qzpkt"\} 1048576 |
-| Explanation | Maximum number of opened file descriptors allowed per calico-node pod. |
-| Threshold value recommendation | N/A |
-| Threshold breach symptoms | N/A |
-| Threshold breach recommendations | N/A |
-| Priority level | Optional. |
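-
-A sketch that ties the two file-descriptor metrics above together, firing when any calico-node pod uses more than 80% of its allowed descriptors. The 80% threshold, rule name, namespace, and labels are illustrative assumptions.
-
-```yaml
-apiVersion: monitoring.coreos.com/v1
-kind: PrometheusRule
-metadata:
-  name: calico-prometheus-felix-fds
-  namespace: tigera-prometheus
-  labels:
-    role: tigera-prometheus-rules
-    prometheus: calico-node-prometheus
-spec:
-  groups:
-    - name: calico.rules
-      rules:
-        - alert: CalicoNodeFileDescriptorsHigh
-          # Open file descriptors above 80% of the per-pod maximum.
-          expr: |
-            sum by (pod) (process_open_fds{pod=~"calico-node.*"})
-              / sum by (pod) (process_max_fds{pod=~"calico-node.*"}) > 0.8
-          for: 10m
-          labels:
-            severity: warning
-          annotations:
-            summary: 'calico-node pod {{$labels.pod}} is close to its file descriptor limit'
-```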
-
-### Felix resync started
-
-| Felix resync started | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | sum(rate(felix_resyncs_started[5m])) |
-| Explanation | This is the number of times that Typha has reported to Felix that it is re-connecting with the datastore. |
-| Threshold value recommendation | Occasional resyncs are normal. Investigate resync counters that rapidly rise. |
-| Threshold breach symptoms | Typha pods may be having issues or experiencing resource contention. Some calico-nodes that are paired with Typha pods experiencing issues will not be able to sync with the datastore. |
-| Threshold breach recommendations | Investigate the root cause to avoid redeploying Typha (which can be very disruptive). Check resource contention and network connectivity from Typha to the datastore to see if Typha is working fine or if the API server is overloaded. |
-| Priority level | Recommended. |
-
-### Felix dropped logs
-
-| Felix dropped logs | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | felix_logs_dropped |
-| Example value | felix_logs_dropped\{endpoint="metrics-port", instance="10.0.1.20:9091", job="felix-metrics-svc", namespace="calico-system", pod="calico-node-qzpkt", service="felix-metrics-svc"\} 0 |
-| Explanation | The number of logs Felix has dropped. Note that this metric does not count flow-logs; it counts logs to stdout. |
-| Threshold value recommendation | Occasional drops are normal. Investigate if drop counters rapidly rise. |
-| Threshold breach symptoms | Felix will drop logs if it cannot keep up with writing them out. These are ordinary code logs, not flow logs. Calico-node may be under resource constraints. |
-| Threshold breach recommendations | Check CPU usage on calico-nodes and Kubernetes nodes. Increase resources if needed, and rollout restart calico-node(s) if needed. |
-| Priority level | Optional. |
-
-## Felix error metrics
-
-### IPset errors
-
-| IPset errors | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | sum(rate(felix_ipset_errors[5m])) |
-| Example value | \{\} 0 |
-| Explanation | Number of ipset creation, modification, and deletion command failures. This metric reports how many times the ipset command has failed when Felix tried to run it. An error can occur when Felix sends bad ipset command data, or the kernel throws an error (potentially because it was too busy to handle this request at that time). |
-| Threshold value recommendation | Occasional errors are normal. Investigate error counters that rapidly rise. |
-| Threshold breach symptoms | $[prodname] network policies may not scope all endpoints in network policy rules. Cluster nodes may be under resource contention, which may result in other _error and _seconds metrics rising. Repeated errors could mean some persistent problem (for example, some other process has created an IP set with that name, which is incompatible). |
-| Threshold breach recommendations | See the Errors Plot graph to determine if the scope is cluster-wide or node-local. Check calico-node logs. Check resource usage and contention on Kubernetes nodes and calico-nodes. Add nodes/resources if needed. If resource contention is not seen, restart calico-node(s) and monitor. Ensure that other processes using iptables are not blocking $[prodname] network policy management. |
-| Priority level | Optional. |
-
-### Iptables restore errors
-
-| Iptables restore errors | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | sum(rate(felix_iptables_restore_errors[5m])) |
-| Explanation | The number of iptables-restore errors over five minutes. The iptables-restore command is used when $[prodname] makes a change to iptables. For example, when a new WEP or HEP is created, when a WEP or HEP changes, or when a policy that affects a WEP or HEP changes. |
-| Threshold value recommendation | Occasional errors are normal. Investigate error counters that rapidly rise. |
-| Threshold breach symptoms | $[prodname] network policies are not up to date. Cluster nodes may be under resource contention, which may result in other _error and _seconds metrics rising. |
-| Threshold breach recommendations | See the Errors Plot graph to determine if the scope is cluster-wide or node-local. Check calico-node logs. Check resource usage and contention on Kubernetes nodes and calico-nodes. Add nodes/resources if needed. If no resource contention is seen, restart calico-node and monitor. |
-| Priority level | Optional. |
-
-### Iptables save errors
-
-| Iptables save errors | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | sum(rate(felix_iptables_save_errors[5m])) |
-| Example value | \{\} 0 |
-| Explanation | Number of iptables-save errors. The iptables-save command is run before every iptables-restore command so that $[prodname] has the current state of iptables. |
-| Threshold value recommendation | Occasional errors are normal. Investigate error counters that rapidly rise. |
-| Threshold breach symptoms | $[prodname] network policies are not up to date. Cluster nodes may be under resource contention, which may result in other _error and _seconds metrics rising. Repeated errors could mean some persistent problem (for example, some other process has created iptables rules that $[prodname] cannot decode with the version of iptables-save in use). |
-| Threshold breach recommendations | See the Errors Plot graph to determine if the scope is cluster-wide or node-local. Check calico-node logs. Check resource usage and contention on Kubernetes nodes and calico-nodes. Add nodes/resources if needed. If no resource contention is seen, restart calico-node and monitor. |
-| Priority level | Optional. |
-
-### Felix log errors
-
-| Felix log errors | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | sum(rate(felix_log_errors[5m])) |
-| Example value | \{\} 0 |
-| Explanation | The number of times Felix fails to write out a log because the log buffer is full. |
-| Threshold value recommendation | Occasional errors are normal. Investigate error counters that rapidly rise. |
-| Threshold breach symptoms | Calico-node may be under resource contention, which may result in other _error and _seconds metrics rising. |
-| Threshold breach recommendations | See the Errors Plot graph to determine if the scope is cluster-wide or node-local. Check resource usage and contention on Kubernetes nodes and calico-nodes. Add nodes/resources if needed. If no resource contention is seen, restart calico-node and monitor. |
-| Priority level | Optional. |
-
-### Monitor Felix metrics using a graph
-
-| Errors plot graph | |
-| -------------------------------- | ------------------------------------------------------------ |
-| Metric | rate(felix_ipset_errors[5m]) \|\| rate(felix_iptables_restore_errors[5m]) \|\| rate(felix_iptables_save_errors[5m]) \|\| rate(felix_log_errors[5m]) |
-| Example value | \{endpoint="metrics-port", instance="10.0.1.20:9091", job="felix-metrics-svc", namespace="calico-system", pod="calico-node-qzpkt", service="felix-metrics-svc"\} 0 |
-| Explanation | Checks if there have been any iptables-save, iptables-restore, or ipset command errors in the past five minutes. Keeps track of what node is reporting which error. |
-| Threshold value recommendation | Occasional errors are normal. Investigate error counters that rapidly rise. For this graph, focus on whichever metric is spiking and refer to that metric's entry above (see also the sample alert after this table). |
-| Threshold breach symptoms | Dependent on the specific metric that is logging errors. |
-| Threshold breach recommendations | If more than one metric is rising, check if all rising metrics are related to a specific calico-node. If this is the case, then the issue is local to that calico-node. Check calico-node logs. Check resource usage for the node and calico-node pod. If more than one metric is rising rapidly across all calico-nodes, then it is a cluster-wide issue and cluster health must be checked. Check cluster resource usage, cluster networking/infrastructure health, and restart calico-nodes and calico-typha pods. |
-| Priority level | Recommended. |
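-
-To alert on the same combination of error counters that this plot tracks, a sketch like the following sums the four error rates per pod. The 15-minute `for` window, rule name, namespace, and labels are illustrative assumptions.
-
-```yaml
-apiVersion: monitoring.coreos.com/v1
-kind: PrometheusRule
-metadata:
-  name: calico-prometheus-felix-errors
-  namespace: tigera-prometheus
-  labels:
-    role: tigera-prometheus-rules
-    prometheus: calico-node-prometheus
-spec:
-  groups:
-    - name: calico.rules
-      rules:
-        - alert: CalicoNodeDataplaneErrors
-          # Sustained ipset, iptables-restore, iptables-save, or log errors on a node.
-          expr: |
-            sum by (pod) (
-                rate(felix_ipset_errors[5m])
-              + rate(felix_iptables_restore_errors[5m])
-              + rate(felix_iptables_save_errors[5m])
-              + rate(felix_log_errors[5m])
-            ) > 0
-          for: 15m
-          labels:
-            severity: warning
-          annotations:
-            summary: 'calico-node pod {{$labels.pod}} is reporting dataplane errors'
-```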
-
-## Felix time-based metrics
-
-### Dataplane apply time quantile 0.5/0.9/0.99
-
-| Dataplane apply time quantile 0.5/0.9/0.99 | |
-| ------------------------------------------ | ------------------------------------------------------------ |
-| Metric | felix_int_dataplane_apply_time_seconds\{quantile="0.5"\} felix_int_dataplane_apply_time_seconds\{quantile="0.9"\} felix_int_dataplane_apply_time_seconds\{quantile="0.99"\} |
-| Example value | felix_int_dataplane_apply_time_seconds\{quantile="0.5"\}:felix_int_dataplane_apply_time_seconds\{endpoint="metrics-port", instance="10.0.1.30:9091", job="felix-metrics-svc", namespace="calico-system", pod="calico-node-6pcqm", quantile="0.5", service="felix-metrics-svc"\} 0.020859218 |
-| Explanation | Time in seconds that it took to apply a dataplane update, viewed at the median, 90th percentile, and 99th percentile. |
-| Threshold value recommendation | Thresholds will vary depending on cluster size and rate of churn. It is recommended that a baseline be set to determine a normal threshold value. In the field we have seen >10s in extremely high-scale clusters with 100k+ endpoints and lots of policy/Kubernetes services. |
-| Threshold breach symptoms | Large time-to-apply values will cause a delay between $[prodname] network policy commits and enforcement in the dataplane. This is affected by how long $[prodname] waits for kube-proxy to release the iptables lock, which is influenced by the number of services in use. |
-| Threshold breach recommendations | Increase cluster resources, and reduce the number of Kubernetes services if possible. |
-| Priority level | Recommended. |
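-
-Once you have established a baseline, the 99th-percentile apply time can be turned into an alert. The 5-second threshold in the sketch below is purely illustrative and should be replaced with a value derived from your baseline; the rule name, namespace, and labels are also assumptions.
-
-```yaml
-apiVersion: monitoring.coreos.com/v1
-kind: PrometheusRule
-metadata:
-  name: calico-prometheus-dataplane-apply
-  namespace: tigera-prometheus
-  labels:
-    role: tigera-prometheus-rules
-    prometheus: calico-node-prometheus
-spec:
-  groups:
-    - name: calico.rules
-      rules:
-        - alert: CalicoDataplaneApplySlow
-          # 99th percentile dataplane apply time above an illustrative 5-second baseline.
-          expr: felix_int_dataplane_apply_time_seconds{quantile="0.99"} > 5
-          for: 10m
-          labels:
-            severity: warning
-          annotations:
-            summary: 'Dataplane updates on {{$labels.pod}} are taking more than 5 seconds to apply'
-```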
-
-### Felix route table list seconds quantile 0.5/0.9/0.99
-
-| Felix route table list seconds quantile 0.5/0.9/0.99 | |
-| ---------------------------------------------------- | ------------------------------------------------------------ |
-| Metric | felix_route_table_list_seconds\{quantile="0.5"\} felix_route_table_list_seconds\{quantile="0.9"\} felix_route_table_list_seconds\{quantile="0.99"\} |
-| Example value | felix_route_table_list_seconds\{quantile="0.5"\}:felix_route_table_list_seconds\{endpoint="metrics-port",instance="10.0.1.30:9091",job="felix-metrics-svc",namespace="calico-system", pod="calico-node-6pcqm",quantile="0.5", service="felix-metrics-svc"\} 0.000860426 |
-| Explanation | Time to list all the interfaces during a resync, viewed at the median, 90th percentile and 99th percentile. |
-| Threshold value recommendation | Thresholds will vary depending on the number of cali interfaces per node. It is recommended that a baseline be set to determine a normal threshold value. |
-| Threshold breach symptoms | High values indicate high CPU usage in felix and slow dataplane updates. |
-| Threshold breach recommendations | Increase cluster resources. Reduce the number of cali interfaces per node where possible. |
-| Priority level | Optional. |
-
-### Felix graph update time quantile 0.5/0.9/0.99
-
-| Felix graph update time seconds quantile 0.5/0.9/0.99 | |
-| ----------------------------------------------------- | ------------------------------------------------------------ |
-| Metric | felix_calc_graph_update_time_seconds\{quantile="0.5"\} felix_calc_graph_update_time_seconds\{quantile="0.9"\} felix_calc_graph_update_time_seconds\{quantile="0.99"\} |
-| Example value | felix_calc_graph_update_time_seconds\{quantile="0.5"\}:felix_calc_graph_update_time_seconds\{endpoint="metrics-port",instance="10.0.1.30:9091", job="felix-metrics-svc",namespace="calico-system", pod="calico-node-6pcqm",quantile="0.5", service="felix-metrics-svc"\} 0.00007129 |
-| Explanation | This metric reports the time taken to update the calculation graph for each datastore on an update call, viewed at the median, 90th percentile and 99th percentile. The calculation graph is the Felix component that takes all the policies/workload endpoints/host endpoints information that it has received from Typha, and distills it down to dataplane updates that are relevant for this node. |
-| Threshold value recommendation | After *start of day* (when a large update is typical), values should be measured in milliseconds, with occasional blips to a second or two. Investigate if the result is consistently in the range of seconds. |
-| Threshold breach symptoms | High values indicate high CPU usage in felix and slow dataplane updates. |
-| Threshold breach recommendations | Increase cluster resources. Check calico-node logs. Rollout restart calico-node(s) if needed. |
-| Priority level | Recommended. |
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/monitor/prometheus/alertmanager.mdx b/calico-cloud_versioned_docs/version-20-1/operations/monitor/prometheus/alertmanager.mdx
deleted file mode 100644
index 7215bf68dc..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/monitor/prometheus/alertmanager.mdx
+++ /dev/null
@@ -1,103 +0,0 @@
----
-description: Configure Alertmanager, a Prometheus feature that routes alerts.
----
-
-# Configure Alertmanager
-
-Alertmanager is used by $[prodname] to route alerts from Prometheus to the administrators.
-It handles routing, deduplicating, grouping, silencing and inhibition of alerts.
-
-More detailed information about Alertmanager is available in the [upstream documentation](https://prometheus.io/docs/alerting/configuration).
-
-### Updating the AlertManager config
-
-- Save the current Alertmanager secret, usually named `alertmanager-`.
-  Our manifests create a secret called `alertmanager-calico-node-alertmanager`.
-
- ```bash
- kubectl -n tigera-operator get secrets alertmanager-calico-node-alertmanager -o yaml > alertmanager-secret.yaml
- ```
-
-- The current alertmanager.yaml file is encoded and stored inside the
- `alertmanager.yaml` key under the `data` field. You can decode it by
- copying the value of `alertmanager.yaml` and using the `base64` command.
-
- ```bash
- echo "" | base64 --decode > alertmanager-config.yaml
- ```
-
-- Make the necessary changes to `alertmanager-config.yaml`. Once this is done,
-  re-encode it and save it back into `alertmanager-secret.yaml`. On Linux, you
-  can do this with:
-
- ```bash
- cat alertmanager-config.yaml | base64 -w 0
- ```
-
-- Paste the output of the command above back into `alertmanager-secret.yaml`,
-  replacing the value of the `alertmanager.yaml` field. Then apply the updated
-  manifest.
-
- ```bash
- kubectl -n tigera-operator apply -f alertmanager-secret.yaml
- ```
-
-Your changes should be applied in a few seconds by the config-reloader
-container inside the alertmanager pod launched by the prometheus-operator
-(usually named `alertmanager-`).
-
-For more advice on writing alertmanager configuration files, see the
-[alertmanager configuration](https://prometheus.io/docs/alerting/configuration/) documentation.
-
-### Configure Inhibition Rules
-
-Alertmanager has a feature to suppress certain notifications according to
-defined rules. A typical use case for defining `inhibit` rules is to suppress
-notifications from a lower priority alert when one with a higher priority is
-firing. These inhibition rules are defined in the alertmanager configuration
-file. You can define one by adding this configuration snippet to your
-`alertmanager.yaml`.
-
-```yaml noValidation
-[...]
-inhibit_rules:
-- source_match:
- severity: 'critical'
- target_match:
- severity: 'info'
- # Apply inhibition for alerts generated by the same alerting rule
- # and on the same node.
- equal: ['alertname', 'instance']
-[...]
-```
-
-### Configure Grouping of Alerts
-
-Alertmanager also has a feature to group alerts based on labels and to fine-tune
-how often an alert is resent. In the case of denied packet metrics, simply
-defining a Prometheus alerting rule would mean that you get a page (if so
-defined in your Alertmanager configuration) for every policy on every node for
-every source IP. All of these alerts can be combined into a single alert by
-configuring grouping. The Alertmanager configuration file provided with
-$[prodname] by default groups alerts on a per-node basis. If the goal is
-instead to group all alerts with the same name, edit (and apply) the
-Alertmanager configuration file like so:
-
-```yaml
-global:
- resolve_timeout: 5m
-route:
- group_by: ['alertname']
- group_wait: 30s
- group_interval: 1m
- repeat_interval: 5m
- receiver: 'webhook'
-receivers:
- - name: 'webhook'
- webhook_configs:
- - url: 'http://calico-alertmanager-webhook:30501/'
-```
-
-More information, including descriptions of the various options can be found under the
-[route section](https://prometheus.io/docs/alerting/configuration/#route)
-of the Alertmanager Configuration guide.
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/monitor/prometheus/byo-prometheus.mdx b/calico-cloud_versioned_docs/version-20-1/operations/monitor/prometheus/byo-prometheus.mdx
deleted file mode 100644
index a4ac8946b1..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/monitor/prometheus/byo-prometheus.mdx
+++ /dev/null
@@ -1,431 +0,0 @@
----
-description: Steps to get Calico Cloud metrics using your own Prometheus.
----
-
-# Bring your own Prometheus
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-## Big picture
-
-Scrape $[prodname] metrics for Bring Your Own (BYO) Prometheus.
-
-## Value
-
-$[prodname] uses the Prometheus monitoring tool to scrape metrics from instrumented jobs, and displays time-series data in a visualizer such as Grafana. You can scrape the following time-series metrics for $[prodname] components to your own Prometheus:
-
-- elasticsearch
-- fluentd
-- calico-node
-- kube-controllers
-- felix
-- typha (not enabled by default)
-
-## Before you begin
-
-**Supported**
-
-For the supported version of Prometheus in this release, see the [Release Notes](../../../release-notes/index.mdx) (`coreos-prometheus`).
-
-## How to
-
-- [Scrape all enabled metrics](#scrape-all-enabled-metrics)
-- [Scrape metrics from specific components directly](#scrape-metrics-from-specific-components-directly)
-- [Verify BYO Prometheus](#verify-byo-prometheus)
-- [Create policy to secure traffic between pods](#create-policy-to-secure-traffic-between-pods)
-- [Troubleshooting](#troubleshooting)
-
-### Scrape all enabled metrics
-
-In this section, we create a service monitor that scrapes all enabled metrics. To enable metrics that
-are not enabled by default, see the [next section](#scrape-metrics-from-specific-components-directly).
-
-The following example shows a Prometheus server installed in namespace "external-prometheus" with a `serviceMonitorSelector` that selects all service monitors with the label `k8s-app=tigera-external-prometheus`.
-
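-A Prometheus resource matching that description might look like the following sketch. The resource name, replica count, and service account are assumptions; the key part is the `serviceMonitorSelector`, which must match the label configured in the next step.
-
-```yaml
-apiVersion: monitoring.coreos.com/v1
-kind: Prometheus
-metadata:
-  name: external-prometheus
-  namespace: external-prometheus
-spec:
-  replicas: 1
-  serviceAccountName: prometheus
-  # Select the service monitor that the Monitor resource below will create.
-  serviceMonitorSelector:
-    matchLabels:
-      k8s-app: tigera-external-prometheus
-```
-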
-1. Save the following configuration in a file called `monitor.yaml`.
-
- ```yaml
- apiVersion: operator.tigera.io/v1
- kind: Monitor
- metadata:
- name: tigera-secure
- spec:
- externalPrometheus:
- namespace: external-prometheus
- serviceMonitor:
- labels:
- k8s-app: tigera-external-prometheus
- ```
- For a list of all configuration options, see the [Installation API reference](../../../reference/installation/api.mdx).
-
-2. Apply the manifest to your cluster.
-
- ```bash
- kubectl apply -f monitor.yaml
- ```
-
-3. Verify that the new configuration has been added to your cluster:
- ```bash
- export NS=external-prometheus
- kubectl get servicemonitor -n $NS tigera-external-prometheus
- kubectl get serviceaccount -n $NS tigera-external-prometheus
- kubectl get secret -n $NS tigera-external-prometheus
- kubectl get clusterrole tigera-external-prometheus
- kubectl get clusterrolebinding tigera-external-prometheus
- ```
-   That's it. You should see the new metrics show up in your Prometheus instance within a minute. For more information on verifying metrics, see [Verify BYO Prometheus](#verify-byo-prometheus).
-
-### Scrape metrics from specific components directly
-
-We recommend the previous section for scraping all enabled metrics. Read on if you wish to scrape metrics from specific
-components directly using mTLS, or if you wish to enable metrics that are disabled by default.
-
-
-
-
-**Configure TLS certificates**
-
-1. Copy the required secret and configmap to your namespace.
-2. Save the manifest of the required TLS secret and CA configmap.
-
- ```bash
- kubectl get secret calico-node-prometheus-client-tls -n tigera-prometheus -o yaml > calico-node-prometheus-client-tls.yaml
- ```
-
- ```bash
- kubectl get configmap -n tigera-prometheus tigera-ca-bundle -o yaml > tigera-ca-bundle.yaml
- ```
-
-3. Edit `calico-node-prometheus-client-tls.yaml` and `tigera-ca-bundle.yaml` by changing the namespace to the namespace where your prometheus instance is running.
-4. Apply the manifests to your cluster.
-
- ```bash
- kubectl apply -f calico-node-prometheus-client-tls.yaml
- ```
-
- ```bash
- kubectl apply -f tigera-ca-bundle.yaml
- ```
-
-**Create the service monitor**
-
-Apply the ServiceMonitor to the namespace where Prometheus is running.
-
-```bash
-export NAMESPACE=
-```
-
-```bash
-kubectl apply -f $[filesUrl_CE]/manifests/prometheus/elasticsearch-metrics-service-monitor.yaml -n $NAMESPACE
-```
-
-The .yaml files have no namespace defined, so when you apply them with `kubectl`, they are applied in the namespace set in `$NAMESPACE`.
-
-
-
-
-**Configure TLS certificates**
-
-1. Copy the required secret and configmap to your namespace.
-2. Save the manifest of the required TLS secret and CA configmap.
-
- ```bash
- kubectl get secret calico-node-prometheus-client-tls -n tigera-prometheus -o yaml > calico-node-prometheus-client-tls.yaml
- ```
-
- ```bash
- kubectl get configmap -n tigera-prometheus tigera-ca-bundle -o yaml > tigera-ca-bundle.yaml
- ```
-
-3. Edit `calico-node-prometheus-client-tls.yaml` and `tigera-ca-bundle.yaml` and change the namespace to the namespace where your prometheus instance is running.
-4. Apply the manifests to your cluster.
-
- ```bash
- kubectl apply -f calico-node-prometheus-client-tls.yaml
- ```
-
- ```bash
- kubectl apply -f tigera-ca-bundle.yaml
- ```
-
-**Create the service monitor**
-
-Apply the ServiceMonitor to the namespace where Prometheus is running.
-
-```bash
-export NAMESPACE=
-```
-
-```bash
-kubectl apply -f $[filesUrl_CE]/manifests/prometheus/fluentd-metrics-service-monitor.yaml -n $NAMESPACE
-```
-
-The .yaml files have no namespace defined, so when you apply them with `kubectl`, they are applied in the namespace set in `$NAMESPACE`.
-
-
-
-
-**Configure TLS certificates**
-
-1. Copy the required secret and configmap to your namespace.
-2. Save the manifest of the required TLS secret and CA configmap.
-
- ```bash
- kubectl get secret calico-node-prometheus-client-tls -n tigera-prometheus -o yaml > calico-node-prometheus-client-tls.yaml
- ```
-
- ```bash
- kubectl get configmap -n tigera-prometheus tigera-ca-bundle -o yaml > tigera-ca-bundle.yaml
- ```
-
-3. Edit `calico-node-prometheus-client-tls.yaml` and `tigera-ca-bundle.yaml` by changing the namespace to the namespace where your prometheus instance is running.
-4. Apply the manifests to your cluster.
-
- ```bash
- kubectl apply -f calico-node-prometheus-client-tls.yaml
- ```
-
- ```bash
- kubectl apply -f tigera-ca-bundle.yaml
- ```
-
-**Create the service monitor**
-
-Apply the ServiceMonitor to the namespace where Prometheus is running.
-
-```bash
-export NAMESPACE=
-```
-
-```bash
-kubectl apply -f $[filesUrl_CE]/manifests/prometheus/calico-node-monitor-service-monitor.yaml -n $NAMESPACE
-```
-
-The .yaml files have no namespace defined, so when you apply them with `kubectl`, they are applied in the namespace set in `$NAMESPACE`.
-
-
-
-
-**Configure TLS certificates**
-
-1. Copy the required secret and configmap to your namespace.
-2. Save the manifest of the required TLS secret and CA configmap.
-
- ```bash
- kubectl get secret calico-node-prometheus-client-tls -n tigera-prometheus -o yaml > calico-node-prometheus-client-tls.yaml
- ```
-
- ```bash
- kubectl get configmap -n tigera-prometheus tigera-ca-bundle -o yaml > tigera-ca-bundle.yaml
- ```
-
-3. Edit `calico-node-prometheus-client-tls.yaml` and `tigera-ca-bundle.yaml` by changing the namespace to the namespace where your prometheus instance is running.
-4. Apply the manifests to your cluster.
-
- ```bash
- kubectl apply -f calico-node-prometheus-client-tls.yaml
- ```
-
- ```bash
- kubectl apply -f tigera-ca-bundle.yaml
- ```
-
-**Create the service monitor**
-
-Apply the ServiceMonitor to the namespace where Prometheus is running.
-
-```bash
-export NAMESPACE=
-```
-
-```bash
-kubectl apply -f $[filesUrl_CE]/manifests/prometheus/kube-controller-metrics-service-monitor.yaml -n $NAMESPACE
-```
-
-The .yaml files have no namespace defined, so when you apply them with `kubectl`, they are applied in the namespace set in `$NAMESPACE`.
-
-
-
-
-**Enable metrics**
-
-Felix metrics are not enabled by default.
-
-By default, Felix uses **port 9091 TCP** to publish metrics.
-
-Use the following command to enable Felix metrics.
-
-```bash
-kubectl patch felixconfiguration default --type merge --patch '{"spec":{"prometheusMetricsEnabled": true}}'
-```
-
-You should see a result similar to:
-
-```
-felixconfiguration.projectcalico.org/default patched
-```
-
-For all Felix configuration values, see [Felix configuration](../../../reference/component-resources/node/felix/configuration.mdx).
-
-For all Prometheus Felix configuration values, see [Felix Prometheus](../../../reference/component-resources/node/felix/prometheus.mdx).
-
-**Create a service to expose Felix metrics**
-
-```bash
-kubectl apply -f - <
-```
-
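-A minimal headless Service for scraping Felix metrics might look like the following sketch; the `felix-metrics-svc` name, `calico-system` namespace, `metrics-port` name, and port 9091 are assumptions based on the example metric labels used elsewhere in this guide.
-
-```yaml
-apiVersion: v1
-kind: Service
-metadata:
-  name: felix-metrics-svc
-  namespace: calico-system
-spec:
-  clusterIP: None
-  selector:
-    k8s-app: calico-node
-  ports:
-    - name: metrics-port
-      port: 9091
-      targetPort: 9091
-```
-
-**Create the service monitor**
-
-Apply the ServiceMonitor to the namespace where Prometheus is running.
-
-```bash
-export NAMESPACE=
-```
-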
-```bash
-kubectl apply -f $[filesUrl_CE]/manifests/prometheus/felix-metrics-service-monitor.yaml -n $NAMESPACE
-```
-
-The .yaml files have no namespace defined, so when you apply them with `kubectl`, they are applied in the namespace set in `$NAMESPACE`.
-
-
-
-
-**Enable metrics**
-
-Typha metrics are not enabled by default.
-
-By default, Typha uses **port 9091** TCP to publish metrics. However, if $[prodname] is installed using the Amazon yaml file, this port will be 9093 because it is set manually using the **TYPHA_PROMETHEUSMETRICSPORT** environment variable.
-
-Use the following command to enable Typha metrics.
-
-```bash
-kubectl patch installation default --type=merge -p '{"spec": {"typhaMetricsPort":9093}}'
-```
-
-You should see a result similar to:
-
-```bash
-installation.operator.tigera.io/default patched
-```
-
-**Create the service monitor**
-
-Apply the ServiceMonitor to the namespace where Prometheus is running.
-
-```bash
-export NAMESPACE=
-```
-
-```bash
-kubectl apply -f $[filesUrl_CE]/manifests/prometheus/typha-metrics-service-monitor.yaml -n $NAMESPACE
-```
-
-The .yaml files have no namespace defined, so when you apply them with `kubectl`, they are applied in the namespace set in `$NAMESPACE`.
-
-
-
-
-### Verify BYO Prometheus
-
-1. Access the Prometheus dashboard using the port-forwarding feature.
-
- ```bash
- kubectl port-forward pod/byo-prometheus-pod 9090:9090 -n $NAMESPACE
- ```
-
-1. Browse to the Prometheus dashboard: http://localhost:9090.
-
-1. In the Expression text box, enter your metric name and click the **Execute** button.
-
- The Console table is populated with all of your nodes with the number of endpoints.
-
-### Troubleshooting
-
-This section applies only if you experience issues with mTLS after following the
-[Scrape metrics from specific components directly](#scrape-metrics-from-specific-components-directly) section.
-
-1. Use the following commands to retrieve the tls.key and tls.cert values.
-
- ```bash
- export NAMESPACE=
- ```
-
- ```bash
- kubectl get secret -n $NAMESPACE calico-node-prometheus-client-tls -o yaml
- ```
-
-1. Base64-decode the tls.key and tls.cert values and save them to key.pem and cert.pem.
-
- ```bash
-   tls_key=
-   echo $tls_key | base64 -d > key.pem
-
-   tls_cert=
-   echo $tls_cert | base64 -d > cert.pem
- ```
-
-1. Get the ca-bundle certificate using this command:
-
- ```bash
- kubectl get cm -n $NAMESPACE tigera-ca-bundle -o yaml
- ```
-
-1. Open a new file (bundle.pem) in your favorite editor, and paste the content from "BEGIN CERTIFICATE" to "END CERTIFICATE".
-
-1. Port-forward the prometheus pods and run this command with the forwarded port.
-
- ```bash
- curl --cacert bundle.pem --key key.pem --cert cert.pem https://localhost:8080/metrics
- ```
-
-You should be able to see the metrics.
-
-### Create policy to secure traffic between pods
-
-To support zero trust, we recommend that you create $[prodname] network policy to allow the traffic between BYO Prometheus pods, and the respective metrics pods. For samples of ingress and egress policies, see [Get started with Calico network policy](../../../network-policy/beginners/calico-network-policy.mdx).
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/monitor/prometheus/configure-prometheus.mdx b/calico-cloud_versioned_docs/version-20-1/operations/monitor/prometheus/configure-prometheus.mdx
deleted file mode 100644
index 5ff021d239..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/monitor/prometheus/configure-prometheus.mdx
+++ /dev/null
@@ -1,342 +0,0 @@
----
-description: Configure rules for alerts and denied packets, for persistent storage.
----
-
-# Configure Prometheus
-
-## Updating Denied Packets Rules
-
-This is an example of how to modify the sample rule created by the sample manifest.
-The process of updating rules is the same as for user created rules (documented below).
-
-- Save the current alert rule:
-
- ```bash
- kubectl -n tigera-prometheus get prometheusrule -o yaml > calico-prometheus-alert-rule-dp.yaml
- ```
-
-- Make necessary edits to the alerting rules then apply the updated manifest.
-
- ```bash
- kubectl apply -f calico-prometheus-alert-rule-dp.yaml
- ```
-
-Your changes should be applied in a few seconds by the prometheus-config-reloader
-container inside the prometheus pod launched by the prometheus-operator
-(usually named `prometheus-`).
-
-As an example, the query range in this manifest is 10 seconds.
-
-```yaml
-apiVersion: monitoring.coreos.com/v1
-kind: PrometheusRule
-metadata:
- name: calico-prometheus-dp-rate
- namespace: tigera-prometheus
- labels:
- role: tigera-prometheus-rules
- prometheus: calico-node-prometheus
-spec:
- groups:
- - name: calico.rules
- rules:
- - alert: DeniedPacketsRate
- expr: rate(calico_denied_packets[10s]) > 50
- labels:
- severity: critical
- annotations:
- summary: 'Instance {{$labels.instance}} - Large rate of packets denied'
- description: '{{$labels.instance}} with calico-node pod {{$labels.pod}} has been denying packets at a fast rate {{$labels.sourceIp}} by policy {{$labels.policy}}.'
-```
-
-To update this alerting rule to, say, execute the query with a range of
-20 seconds, modify the manifest as follows:
-
-```yaml
-apiVersion: monitoring.coreos.com/v1
-kind: PrometheusRule
-metadata:
- name: calico-prometheus-dp-rate
- namespace: tigera-prometheus
- labels:
- role: tigera-prometheus-rules
- prometheus: calico-node-prometheus
-spec:
- groups:
- - name: calico.rules
- rules:
- - alert: DeniedPacketsRate
- expr: rate(calico_denied_packets[20s]) > 50
- labels:
- severity: critical
- annotations:
- summary: 'Instance {{$labels.instance}} - Large rate of packets denied'
- description: '{{$labels.instance}} with calico-node pod {{$labels.pod}} has been denying packets at a fast rate {{$labels.sourceIp}} by policy {{$labels.policy}}.'
-```
-
-## Creating a New Alerting Rule
-
-Creating a new alerting rule is straightforward once you figure out what you
-want your rule to look for. Check [alerting rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/)
-and [Queries](https://prometheus.io/docs/querying/examples/) for more
-information.
-
-### New Alerting Rule for Monitoring Calico Node
-
-To add the new alerting rule to our Prometheus instance, define a PrometheusRule manifest
-in the `tigera-prometheus` namespace with the labels
-`role: tigera-prometheus-rules` and `prometheus: calico-node-prometheus`. The
-labels should match the labels defined by the `ruleSelector` field of the
-Prometheus manifest.
-
-As an example, to fire an alert when a $[noderunning] instance has been down for
-more than 5 minutes, save the following to a file, say `calico-node-down-alert.yaml`.
-
-```yaml
-apiVersion: monitoring.coreos.com/v1
-kind: PrometheusRule
-metadata:
- name: calico-prometheus-calico-node-down
- namespace: tigera-prometheus
- labels:
- role: tigera-prometheus-rules
- prometheus: calico-node-prometheus
-spec:
- groups:
- - name: calico.rules
- rules:
- - alert: CalicoNodeInstanceDown
- expr: up == 0
- for: 5m
- labels:
- severity: warning
- annotations:
- summary: 'Instance {{$labels.instance}} Pod: {{$labels.pod}} is down'
- description: '{{$labels.instance}} of job {{$labels.job}} has been down for more than 5 minutes'
-```
-
-Then create or apply this manifest in Kubernetes.
-
-```bash
-kubectl apply -f calico-node-down-alert.yaml
-```
-
-Your changes should be applied in a few seconds by the prometheus-config-reloader
-container inside the prometheus pod launched by the prometheus-operator
-(usually named `prometheus-`).
-
-### New Alerting Rule for Monitoring BGP Peers
-
-Let’s look at an example of adding a new alerting rule to our Prometheus instance for monitoring BGP
-peering health. Define a PrometheusRule manifest in the `tigera-prometheus` namespace with the labels
-`role: tigera-prometheus-rules` and `prometheus: calico-node-prometheus`. The labels should match the labels
-defined by the `ruleSelector` field of the Prometheus manifest.
-
-As an example, to fire an alert when the number of peering connections with a status other than “Established”
-is increasing at a non-zero rate in the cluster (over the last 5 minutes), save the following to a file, say
-`tigera-peer-status-not-established.yaml`.
-
-```yaml
-apiVersion: monitoring.coreos.com/v1
-kind: PrometheusRule
-metadata:
- labels:
- prometheus: calico-node-prometheus
- role: tigera-prometheus-rules
- name: tigera-prometheus-peer-status-not-established
- namespace: tigera-prometheus
-spec:
- groups:
- - name: calico.rules
- rules:
- - alert: CalicoNodePeerStatusNotEstablished
- annotations:
- description: '{{$labels.instance}} has at least one peer connection that is
- no longer up.'
- summary: Instance {{$labels.instance}} has peer connection that is no longer
- up
- expr: rate(bgp_peers{status!~"Established"}[5m]) > 0
- labels:
- severity: critical
-```
-
-Then create or apply this manifest in Kubernetes.
-
-```bash
-kubectl apply -f tigera-peer-status-not-established.yaml
-```
-
-Your changes should be applied in a few seconds by the prometheus-config-reloader
-container inside the prometheus pod launched by the prometheus-operator
-(usually named `prometheus-`).
-
-## Additional Alerting Rules
-
-The alerting rule installed by the $[prodname] install manifest is a simple
-one that fires an alert when the rate of packets denied by a policy on
-a node from a particular source IP exceeds a certain packets-per-second
-threshold. The Prometheus query used for this (ignoring the threshold value of 20) is:
-
-```
-rate(calico_denied_packets[10s])
-```
-
-This query will return results along the lines of:
-
-```
-{endpoint="calico-metrics-port",instance="10.240.0.81:9081",job="calico-node-metrics",namespace="kube-system",pod="calico-node-hn0kl",policy="profile/k8s_ns.test/0/deny",service="calico-node-metrics",srcIP="192.168.167.129"} 0.6
-{endpoint="calico-metrics-port",instance="10.240.0.84:9081",job="calico-node-metrics",namespace="kube-system",pod="calico-node-97m3g",policy="profile/k8s_ns.test/0/deny",service="calico-node-metrics",srcIP="192.168.167.175"} 0.2
-{endpoint="calico-metrics-port",instance="10.240.0.84:9081",job="calico-node-metrics",namespace="kube-system",pod="calico-node-97m3g",policy="profile/k8s_ns.test/0/deny",service="calico-node-metrics",srcIP="192.168.252.157"} 0.4
-{endpoint="calico-metrics-port",instance="10.240.0.81:9081",job="calico-node-metrics",namespace="kube-system",pod="calico-node-hn0kl",policy="profile/k8s_ns.test/0/deny",service="calico-node-metrics",srcIP="192.168.167.175"} 1
-{endpoint="calico-metrics-port",instance="10.240.0.84:9081",job="calico-node-metrics",namespace="kube-system",pod="calico-node-97m3g",policy="profile/k8s_ns.test/0/deny",service="calico-node-metrics",srcIP="192.168.167.129"} 0.4
-{endpoint="calico-metrics-port",instance="10.240.0.81:9081",job="calico-node-metrics",namespace="kube-system",pod="calico-node-hn0kl",policy="profile/k8s_ns.test/0/deny",service="calico-node-metrics",srcIP="192.168.167.159"} 0.4
-{endpoint="calico-metrics-port",instance="10.240.0.81:9081",job="calico-node-metrics",namespace="kube-system",pod="calico-node-hn0kl",policy="profile/k8s_ns.test/0/deny",service="calico-node-metrics",srcIP="192.168.252.175"} 0.4
-{endpoint="calico-metrics-port",instance="10.240.0.84:9081",job="calico-node-metrics",namespace="kube-system",pod="calico-node-97m3g",policy="profile/k8s_ns.test/0/deny",service="calico-node-metrics",srcIP="192.168.252.175"} 0.6
-{endpoint="calico-metrics-port",instance="10.240.0.81:9081",job="calico-node-metrics",namespace="kube-system",pod="calico-node-hn0kl",policy="profile/k8s_ns.test/0/deny",service="calico-node-metrics",srcIP="192.168.252.157"} 0.6
-{endpoint="calico-metrics-port",instance="10.240.0.84:9081",job="calico-node-metrics",namespace="kube-system",pod="calico-node-97m3g",policy="profile/k8s_ns.test/0/deny",service="calico-node-metrics",srcIP="192.168.167.159"} 0.6
-```
-
-We can modify this query to find all packets denied by different policies
-on every node.
-
-```
-(sum by (instance,policy) (rate(calico_denied_packets[10s])))
-```
-
-This query will aggregate the results from all different Source IPs, and
-preserve the `policy` and `instance` labels. Note that the `instance` label
-represents the calico node's IP Address and `PrometheusReporterPort`. This
-query will return results like so:
-
-```
-{instance="10.240.0.84:9081",policy="profile/k8s_ns.test/0/deny"} 2
-{instance="10.240.0.81:9081",policy="profile/k8s_ns.test/0/deny"} 2.8
-```
-
-To include the pod name in these results, add the label `pod` to the labels
-listed in the `by` expression like so:
-
-```
-(sum by (instance,pod,policy) (rate(calico_denied_packets[10s])))
-```
-
-which will return the following results:
-
-```
-{instance="10.240.0.84:9081",pod="calico-node-97m3g",policy="profile/k8s_ns.test/0/deny"} 2
-{instance="10.240.0.81:9081",pod="calico-node-hn0kl",policy="profile/k8s_ns.test/0/deny"} 2.8
-```
-
-An interesting use case is detecting when a rogue pod is using tools such as nmap to
-scan a subnet for open ports. To detect this, we execute a query that
-aggregates across all policies on all instances while preserving the source IP
-address:
-
-```
-(sum by (srcIP) (rate(calico_denied_packets[10s])))
-```
-
-which will return results per source IP address:
-
-```
-{srcIP="192.168.167.159"} 1.0000000000000002
-{srcIP="192.168.167.129"} 1.2000000000000002
-{srcIP="192.168.252.175"} 1.4000000000000001
-{srcIP="192.168.167.175"} 0.4
-{srcIP="192.168.252.157"} 1.0000000000000002
-```
-
-To use these queries as alerting rules, follow the instructions in the
-[Creating a New Alerting Rule](#creating-a-new-alerting-rule) section and create
-a PrometheusRule with the appropriate query.
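-
-For example, a sketch of a PrometheusRule built on the per-source-IP query above; the alert name, severity, and the threshold of 20 packets per second are illustrative and should be tuned for your environment.
-
-```yaml
-apiVersion: monitoring.coreos.com/v1
-kind: PrometheusRule
-metadata:
-  name: calico-prometheus-denied-per-source
-  namespace: tigera-prometheus
-  labels:
-    role: tigera-prometheus-rules
-    prometheus: calico-node-prometheus
-spec:
-  groups:
-    - name: calico.rules
-      rules:
-        - alert: DeniedPacketsFromSingleSource
-          # Aggregate denied packets across all policies and nodes, per source IP.
-          expr: (sum by (srcIP) (rate(calico_denied_packets[10s]))) > 20
-          labels:
-            severity: critical
-          annotations:
-            summary: 'Source {{$labels.srcIP}} is being denied at a high rate across the cluster'
-```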
-
-## Updating the scrape interval
-
-You may wish to modify the scrape interval (time between Prometheus polling each node for new denied packet information).
-Increasing the interval reduces load on Prometheus and the amount of storage required, but decreases the detail of the collected metrics.
-
-The scrape interval of endpoints ($[noderunning] in our case) is defined as part of
-the ServiceMonitor manifest. To change the interval:
-
-- Save the current ServiceMonitor manifest:
-
- ```bash
- kubectl -n tigera-prometheus get servicemonitor calico-node-monitor -o yaml > calico-node-monitor.yaml
- ```
-
-- Update the `interval` field under `endpoints` to desired settings and
- apply the updated manifest.
-
- ```bash
- kubectl apply -f calico-node-monitor.yaml
- ```
-
-Your changes should be applied in a few seconds by the prometheus-config-reloader
-container inside the prometheus pod launched by the prometheus-operator
-(usually named `prometheus-`).
-
-As an example on what to update, the interval in this ServiceMonitor manifest
-is 5 seconds (`5s`).
-
-```yaml
-apiVersion: monitoring.coreos.com/v1
-kind: ServiceMonitor
-metadata:
- name: calico-node-monitor
- namespace: tigera-prometheus
- labels:
- team: network-operators
-spec:
- selector:
- matchLabels:
- k8s-app: calico-node
- namespaceSelector:
- matchNames:
- - kube-system
- endpoints:
- - port: calico-metrics-port
- interval: 5s
-```
-
-To update $[prodname] Prometheus' scrape interval to 10 seconds modify the manifest
-to this:
-
-```yaml
-apiVersion: monitoring.coreos.com/v1
-kind: ServiceMonitor
-metadata:
- name: calico-node-monitor
- namespace: tigera-prometheus
- labels:
- team: network-operators
-spec:
- selector:
- matchLabels:
- k8s-app: calico-node
- namespaceSelector:
- matchNames:
- - kube-system
- endpoints:
- - port: calico-metrics-port
- interval: 10s
-```
-
-## Troubleshooting Config Updates
-
-Check config reloader logs to see if they detected any recent activity.
-
-- For prometheus run:
-
- ```bash
- kubectl -n tigera-prometheus logs prometheus- prometheus-config-reloader
- ```
-
-- For alertmanager run:
-
- ```bash
- kubectl -n tigera-prometheus logs alertmanager- config-reloader
- ```
-
-The config-reloaders watch each pod's file system for updated config from
-ConfigMaps or Secrets and perform the steps necessary to reload
-the configuration.
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/monitor/prometheus/index.mdx b/calico-cloud_versioned_docs/version-20-1/operations/monitor/prometheus/index.mdx
deleted file mode 100644
index 67a21e239a..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/monitor/prometheus/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Configure open-source toolkit for systems monitoring and alerting.
-hide_table_of_contents: true
----
-
-# Prometheus
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/monitor/prometheus/support.mdx b/calico-cloud_versioned_docs/version-20-1/operations/monitor/prometheus/support.mdx
deleted file mode 100644
index be35d23aaf..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/monitor/prometheus/support.mdx
+++ /dev/null
@@ -1,19 +0,0 @@
----
-description: Prometheus support in Calico Cloud.
----
-
-# Prometheus support
-
-## Big picture
-
-$[prodname] uses the open-source [Prometheus monitoring and alerting toolkit](https://prometheus.io/docs/introduction/overview/). With these tools, you can view time-series metrics from $[prodname] components in the Prometheus and Grafana interfaces, or scrape the metrics for a BYO Prometheus deployment.
-
-## Install options
-
-- Use Prometheus operator managed by Tigera operator
-
- You install the $[prodname] Prometheus operator and CRDs during $[prodname] installation. $[prodname] metrics and alerts are available in Manager UI. You configure alerts through Prometheus AlertManager.
-
-  If you want to specify your own Prometheus operator during installation for management by the Tigera operator, the required operator version must be **v0.40.0 or higher**. Because $[prodname] creates AlertManager and Prometheus CRs in the `tigera-prometheus` namespace, all you need to do is verify that your Prometheus operator is configured to manage Prometheus and AlertManager instances in the `tigera-prometheus` namespace.
-
-- [Bring your own Prometheus](byo-prometheus.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/operations/usage-metrics.mdx b/calico-cloud_versioned_docs/version-20-1/operations/usage-metrics.mdx
deleted file mode 100644
index e141ed35c8..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/operations/usage-metrics.mdx
+++ /dev/null
@@ -1,20 +0,0 @@
----
-description: Where to find Calico Cloud usage metrics.
----
-
-# Usage and billing
-
-$[prodname] Admins can view product usage metrics on the **Usage Metrics** page of the UI.
-
-![usage-metrics](/img/calico-cloud/usage-metrics.png)
-
-You can view the node hours and ingested data consumed by each managed cluster each month, the cost in node hours, and any data overage cost. You can also download an invoice and export to CSV.
-
-## How Node Hours Cost is calculated
-
-To calculate **Node Hours**, divide the vCPU count of each node by four, round up, and then add the results together. (Note that this is different from adding up all the vCPUs and then dividing by four, because the rounding is done on a per-node basis.) For example, a cluster with three 8-vCPU nodes and one 6-vCPU node counts as 2 + 2 + 2 + 2 = 8 node hours for each hour those nodes run.
-
-## How Ingested Data is calculated
-
-$[prodname] provides users with 200 GB per month of ingested data, which is included in the standard pricing before overage charges are incurred. Additional data usage is charged at a rate of $0.25 / GB / month.
-
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/api.mdx b/calico-cloud_versioned_docs/version-20-1/reference/api.mdx
deleted file mode 100644
index 03509371a6..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/api.mdx
+++ /dev/null
@@ -1,12 +0,0 @@
----
-description: Learn about the Tigera API and how to use it.
----
-
-# Tigera API
-
-$[prodname] provides and consumes a public API in Go that allows
-developers to work with $[prodname] resources.
-
-To learn more about the Tigera API and how to use it, see the Tigera API project [README](https://github.com/tigera/api/blob/master/README.md) or
-the [github.com/tigera/api Go module page](https://pkg.go.dev/github.com/tigera/api).
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/architecture/data-path.mdx b/calico-cloud_versioned_docs/version-20-1/reference/architecture/data-path.mdx
deleted file mode 100644
index 83c592c747..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/architecture/data-path.mdx
+++ /dev/null
@@ -1,63 +0,0 @@
----
-description: Learn how packets flow between workloads in a datacenter, or between a workload and the internet.
----
-
-# The Calico Cloud data path: IP routing and iptables
-
-One of $[prodname]’s key features is how packets flow between workloads in a
-data center, or between a workload and the Internet, without additional
-encapsulation.
-
-In the $[prodname] approach, IP packets to or from a workload are routed and
-firewalled by the Linux routing table and iptables or eBPF infrastructure on the
-workload’s host. For a workload that is sending packets, $[prodname] ensures
-that the host is always returned as the next hop MAC address regardless
-of whatever routing the workload itself might configure. For packets
-addressed to a workload, the last IP hop is that from the destination
-workload’s host to the workload itself.
-
-![Calico datapath](/img/calico-enterprise/calico-datapath.png)
-
-Suppose that IPv4 addresses for the workloads are allocated from a
-datacenter-private subnet of 10.65/16, and that the hosts have IP
-addresses from 172.18.203/24. If you look at the routing table on a host:
-
-```bash
-route -n
-```
-
-You will see something like this:
-
-```
-Kernel IP routing table
-Destination Gateway Genmask Flags Metric Ref Use Iface
-0.0.0.0 172.18.203.1 0.0.0.0 UG 0 0 0 eth0
-10.65.0.0 0.0.0.0 255.255.0.0 U 0 0 0 ns-db03ab89-b4
-10.65.0.21 172.18.203.126 255.255.255.255 UGH 0 0 0 eth0
-10.65.0.22 172.18.203.129 255.255.255.255 UGH 0 0 0 eth0
-10.65.0.23 172.18.203.129 255.255.255.255 UGH 0 0 0 eth0
-10.65.0.24 0.0.0.0 255.255.255.255 UH 0 0 0 tapa429fb36-04
-172.18.203.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
-```
-
-There is one workload on this host with IP address 10.65.0.24, and
-accessible from the host via a TAP (or veth, etc.) interface named
-tapa429fb36-04. Hence there is a direct route for 10.65.0.24, through
-tapa429fb36-04. Other workloads, with the .21, .22 and .23 addresses,
-are hosted on two other hosts (172.18.203.126 and .129), so the routes
-for those workload addresses are via those hosts.
-
-The direct routes are set up by a $[prodname] agent named Felix when it is
-asked to provision connectivity for a particular workload. A BGP client
-(such as BIRD) then notices those and distributes them – perhaps via a
-route reflector – to BGP clients running on other hosts, and hence the
-indirect routes appear also.
-
-## Is that all?
-
-As far as the static data path is concerned, yes. It’s just a
-combination of responding to workload ARP requests with the host MAC, IP
-routing and iptables or eBPF. There’s a great deal more to $[prodname] in terms of
-how the required routing and security information is managed, and for
-handling dynamic things such as workload migration – but the basic data
-path really is that simple.
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/architecture/design/index.mdx b/calico-cloud_versioned_docs/version-20-1/reference/architecture/design/index.mdx
deleted file mode 100644
index 717ac9f295..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/architecture/design/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Deep dive into using Calico over Ethernet and IP fabrics.
-hide_table_of_contents: true
----
-
-# Network design
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/architecture/design/l2-interconnect-fabric.mdx b/calico-cloud_versioned_docs/version-20-1/reference/architecture/design/l2-interconnect-fabric.mdx
deleted file mode 100644
index 9565ad0458..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/architecture/design/l2-interconnect-fabric.mdx
+++ /dev/null
@@ -1,238 +0,0 @@
----
-description: Understand the interconnect fabric options in a Calico network.
----
-
-# Calico over Ethernet fabrics
-
-This is the first of a few _tech notes_ that I will be authoring that
-will discuss some of the various interconnect fabric options in a $[prodname]
-network.
-
-Any technology that is capable of transporting IP packets can be used as
-the interconnect fabric in a $[prodname] network (the first person to test
-and publish the results of using [IP over Avian Carrier](https://datatracker.ietf.org/doc/html/rfc1149)
- as a transport for $[prodname]
-will earn a very nice dinner on or with the core $[prodname] team). This
-means that the standard tools used to transport IP, such as MPLS and
-Ethernet can be used in a $[prodname] network.
-
-In this note, I'm going to focus on Ethernet as the interconnect
-network. Most at-scale cloud operators have converted to IP fabrics,
-and, as we will cover in the next post, that infrastructure will work
-for $[prodname] as well. However, the concerns that
-drove most of those operators to IP as the interconnection network in
-their pods are largely ameliorated by Project Calico, allowing Ethernet
-to be viably considered as a $[prodname] interconnect, even in large-scale
-deployments.
-
-## Concerns over Ethernet at scale
-
-It has been acknowledged by the industry for years that, beyond a
-certain size, classical Ethernet networks are unsuitable for production
-deployment. Although there have been
-[multiple](https://en.wikipedia.org/wiki/Provider_Backbone_Bridge_Traffic_Engineering)
-[attempts](https://web.archive.org/web/20150923231827/https://www.cisco.com/web/about/ac123/ac147/archived_issues/ipj_14-3/143_trill.html) [to address](https://en.wikipedia.org/wiki/Virtual_Private_LAN_Service)
-these issues, the scale-out networking community has largely abandoned
-Ethernet for anything other than providing physical point-to-point links
-in the networking fabric. The principal reasons for Ethernet failures at
-large scale are:
-
-1. Large numbers of _end points_ [^1]. Each switch in an Ethernet
- network must learn the path to all Ethernet endpoints that are
- connected to the Ethernet network. Learning this amount of state can
- become a substantial task when we are talking about hundreds of
- thousands of _end points_.
-2. High rate of _churn_ or change in the network. With that many end
- points, most of them being ephemeral (such as virtual machines or
- containers), there is a large amount of _churn_ in the network. That
- load of re-learning paths can be a substantial burden on the control
- plane processor of most Ethernet switches.
-3. High volumes of broadcast traffic. As each node on the Ethernet
- network must use Broadcast packets to locate peers, and many use
- broadcast for other purposes, the resultant packet replication to
- each and every end point can lead to _broadcast storms_ in large
- Ethernet networks, effectively consuming most, if not all resources
- in the network and the attached end points.
-4. Spanning tree. Spanning tree is the protocol used to keep an
- Ethernet network from forming loops. The protocol was designed in
- the era of smaller, simpler networks, and it has not aged well. As
- the number of links and interconnects in an Ethernet network goes
- up, many implementations of spanning tree become more _fragile_.
- Unfortunately, when spanning tree fails in an Ethernet network, the
- effect is a catastrophic loop or partition (or both) in the network,
- and, in most cases, difficult to troubleshoot or resolve.
-
-While many of these issues are crippling at _VM scale_ (tens of
-thousands of end points that live for hours, days, weeks), they will be
-absolutely lethal at _container scale_ (hundreds of thousands of end
-points that live for seconds, minutes, days).
-
-If you weren't ready to turn off your Ethernet data center network
-before this, I bet you are now. Before you do, however, let's look at
-how Project Calico can mitigate these issues, even in very large
-deployments.
-
-## How does $[prodname] tame the Ethernet daemons?
-
-First, let's look at how $[prodname] uses an Ethernet interconnect fabric.
-It's important to remember that an Ethernet network _sees_ nothing on
-the other side of an attached IP router, the Ethernet network just
-_sees_ the router itself. This is why Ethernet switches can be used at
-Internet peering points, where large fractions of Internet traffic is
-exchanged. The switches only see the routers from the various ISPs, not
-those ISPs' customers' nodes. We leverage the same effect in $[prodname].
-
-Taking the issues outlined above, let's revisit them in a $[prodname]
-context.
-
-1. Large numbers of end points. In a $[prodname] network, the Ethernet
- interconnect fabric only sees the routers/compute servers, not the
-   end points. In a standard cloud model, where there are tens of VMs per
-   server (or hundreds of containers), this reduces the number of nodes
-   that the Ethernet network sees (and has to learn) by one to two orders
-   of magnitude. Even in very large pods (say twenty thousand servers),
-   the Ethernet network would still only see a few tens of thousands of
-   end points, which is well within the scale of any competent data center
-   Ethernet top of rack (ToR) switch.
-2. High rate of _churn_. In a classical Ethernet data center fabric,
- there is a _churn_ event each time an end point is created,
- destroyed, or moved. In a large data center, with hundreds of
- thousands of endpoints, this _churn_ could run into tens of events
- per second, every second of the day, with peaks easily in the
- hundreds or thousands of events per second. In a $[prodname] network,
- however, the _churn_ is very low. The only event that would lead to
- _churn_ in a $[prodname] network's Ethernet fabric would be the addition
- or loss of a compute server, switch, or physical connection. In a
- twenty thousand server pod, even with a 5% daily failure rate (a few
- orders of magnitude more than what is normally experienced), there
-   would only be two thousand events per **day**. Any switch that cannot
-   handle that volume of change in the network should not be used
- for any application.
-3. High volume of broadcast traffic. Since the first (and last) hop for
- any traffic in a $[prodname] network is an IP hop, and IP hops terminate
- broadcast traffic, there is no endpoint broadcast network in the
- Ethernet fabric, period. In fact, the only broadcast traffic that
- should be seen in the Ethernet fabric is the ARPs of the compute
- servers locating each other. If the traffic pattern is fairly
- consistent, the steady-state ARP rate should be almost zero. Even in
- a pathological case, the ARP rate should be well within normal
- accepted boundaries.
-4. Spanning tree. Depending on the architecture chosen for the Ethernet
- fabric, it may even be possible to turn off spanning tree. However,
- even if it is left on, due to the reduction in node count, and
- reduction in churn, most competent spanning tree implementations
- should be able to handle the load without stress.
-
-With these considerations in mind, it should be evident that an Ethernet
-connection fabric in $[prodname] is not only possible, it is practical and
-should be seriously considered as the interconnect fabric for a $[prodname]
-network.
-
-As mentioned in the IP fabric post, an IP fabric is also quite feasible
-for $[prodname], but there are more considerations that must be taken into
-account. The Ethernet fabric option has fewer architectural
-considerations in its design.
-
-## A brief note about Ethernet topology
-
-As mentioned elsewhere in the $[prodname] documentation, since $[prodname] can use
-most of the standard IP tooling, some interesting options regarding
-fabric topology become possible.
-
-We assume that an Ethernet fabric for $[prodname] would most likely be
-constructed as a _leaf/spine_ architecture. Other options are possible,
-but the _leaf/spine_ is the predominant architectural model in use in
-scale-out infrastructure today.
-
-Since $[prodname] is an IP routed fabric, a $[prodname] network can use
-[ECMP](https://en.wikipedia.org/wiki/Equal-cost_multi-path_routing) to
-distribute traffic across multiple links (instead of using Ethernet
-techniques such as MLAG). By leveraging ECMP load balancing on the
-$[prodname] compute servers, it is possible to build the fabric out of
-multiple _independent_ leaf/spine planes using no technologies other
-than IP routing in the $[prodname] nodes, and basic Ethernet switching in the
-interconnect fabric. These planes would operate completely independently
-and could be designed such that they would not share a fault domain.
-This would allow for the catastrophic failure of one (or more) plane(s)
-of Ethernet interconnect fabric without the loss of the pod (the failure
-would just decrease the amount of interconnect bandwidth in the pod).
-This is a gentler failure mode than the pod-wide IP or Ethernet failure
-that is possible with today's designs.
-
-A more in-depth discussion is possible, so if you'd like, please make a
-request, and I will put up a post or white paper. In the meantime, it
-may be interesting to venture over to Facebook's [blog post](https://engineering.fb.com/2014/11/14/production-engineering/introducing-data-center-fabric-the-next-generation-facebook-data-center-network/)
-on their fabric approach. A quick picture to visualize the idea is shown
-below.
-
-![A diagram showing the Ethernet spine planes. Each color represents a distinct Ethernet network, transporting a unique IP network.](/img/calico-enterprise/l2-spine-planes.png)
-
-I am not showing the end points in this diagram, and the end points
-would be unaware of anything in the fabric (as noted above).
-
-In the particular case of this diagram, each ToR is segmented into four
-logical switches (possibly by using 'port VLANs'), [^2] and each compute
-server has a connection to each of those logical switches. We will
-identify those logical switches by their color. Each ToR would then have
-a blue, green, orange, and red logical switch. Those 'colors' would be
-members of a given _plane_, so there would be a blue plane, a green
-plane, an orange plane, and a red plane. Each plane would have a
-dedicated spine switch, and each ToR in a given plane would be connected
-to its spine, and only its spine.
-
-Each plane would constitute an IP network, so the blue plane would be
-2001:db8:1000::/36, the green would be 2001:db8:2000::/36, and the
-orange and red planes would be 2001:db8:3000::/36 and 2001:db8:4000::/36
-respectively. [^3]
-
-Each IP network (plane) requires its own BGP route reflectors. Those
-route reflectors need to be peered with each other within the plane, but
-the route reflectors in each plane do not need to be peered with one
-another. Therefore, a fabric of four planes would have four route
-reflector meshes. Each compute server, border router, _etc._ would need
-to be a route reflector client of at least one route reflector in each
-plane, and very preferably two or more in each plane.
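-
-As a rough sketch of what this could look like on the $[prodname] side, the
-example below disables the default node-to-node BGP mesh and adds one explicit
-peering per plane (only the blue plane is shown). The AS number and the route
-reflector address are placeholders, not values from this design:
-
-```bash
-kubectl apply -f - <<EOF
-apiVersion: projectcalico.org/v3
-kind: BGPConfiguration
-metadata:
-  name: default
-spec:
-  nodeToNodeMeshEnabled: false
-  asNumber: 64512
----
-# One BGPPeer per plane; repeat for the green, orange, and red planes.
-apiVersion: projectcalico.org/v3
-kind: BGPPeer
-metadata:
-  name: rr-blue-plane
-spec:
-  peerIP: 2001:db8:1000::100
-  asNumber: 64512
-EOF
-```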
-
-A diagram that visualizes the route reflector environment can be found
-below.
-
-![A diagram showing the route reflector topology in the l2 spine plane architecture. The dashed diamonds are the route reflectors, with one or more per L2 spine plane. All compute servers are peered to all route reflectors, and all the route reflectors in a given plane are also meshed. However, the route reflectors in each spine plane are not meshed together (*e.g.* the *blue* route reflectors are not peered or meshed with the *red* route reflectors). The route reflectors themselves could be daemons running on the actual compute servers or on other dedicated or networking hardware.](/img/calico-enterprise/l2-rr-spine-planes.png)
-
-These route reflectors could be dedicated hardware connected to the
-spine switches (or the spine switches themselves), or physical or
-virtual route reflectors connected to the necessary logical leaf
-switches (blue, green, orange, and red). That may be a route reflector
-running on a compute server and connected directly to the correct plane
-link, and not routed through the vRouter, to avoid the chicken and egg
-problem that would occur if the route reflector were "behind" the $[prodname]
-network.
-
-Other physical and logical configurations and counts are, of course,
-possible; this is just an example.
-
-In the logical configuration, each compute server would then have an
-address on each plane's subnet, and would announce its end points on each
-subnet. If ECMP is then turned on, the compute servers would distribute
-the load across all planes, as sketched below.
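-
-For illustration only, assuming one uplink interface per plane and invented
-gateway addresses, the ECMP result on a compute server amounts to a single
-multipath default route. In practice the BGP client would install such
-routes; this just shows the shape of the outcome:
-
-```bash
-# Hypothetical multipath default route, one next hop per plane uplink
-# (IPv6 multipath routes like this need a reasonably recent kernel).
-ip -6 route add default \
-  nexthop via 2001:db8:1000::1 dev eth1 \
-  nexthop via 2001:db8:2000::1 dev eth2 \
-  nexthop via 2001:db8:3000::1 dev eth3 \
-  nexthop via 2001:db8:4000::1 dev eth4
-```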
-
-If a plane were to fail (say due to a spanning tree failure), then only
-that one plane would fail. The remaining planes would stay running.
-
-[^1]:
- In this document (and in all $[prodname] documents) we tend to use the
- terms _end point_ to refer to a virtual machine, container,
- appliance, bare metal server, or any other entity that is connected
- to a $[prodname] network. If we are referring to a specific type of end
- point, we will call that out (such as referring to the behavior of
- VMs as distinct from containers).
-
-[^2]:
- We are using logical switches in this example. Physical ToRs could
- also be used, or a mix of the two (say 2 logical switches hosted on
- each physical switch).
-
-[^3]:
- We use IPv6 here purely as an example. IPv4 would be configured
- similarly. I welcome your questions, either here on the blog, or via
- the Project Calico mailing list.
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/architecture/design/l3-interconnect-fabric.mdx b/calico-cloud_versioned_docs/version-20-1/reference/architecture/design/l3-interconnect-fabric.mdx
deleted file mode 100644
index 325eb888b3..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/architecture/design/l3-interconnect-fabric.mdx
+++ /dev/null
@@ -1,564 +0,0 @@
----
-description: Understand considerations for implementing interconnect fabrics with Calico.
----
-
-# Calico over IP fabrics
-
-$[prodname] provides an end-to-end IP network that interconnects the
-endpoints [^1] in a scale-out or cloud environment. To do that, it needs
-an _interconnect fabric_ to provide the physical networking layer on
-which $[prodname] operates [^2].
-
-While $[prodname] is designed to work with any underlying interconnect fabric
-that can support IP traffic, the fabric that has the least
-considerations attached to its implementation is an Ethernet fabric as
-discussed in our earlier [technical note](l2-interconnect-fabric.mdx).
-
-In most cases, the Ethernet fabric is the appropriate choice, but there
-are infrastructures where L3 (an IP fabric) has already been deployed,
-or will be deployed, and it makes sense for $[prodname] to operate in those
-environments.
-
-However, since $[prodname] is, itself, a routed infrastructure, there are
-more engineering, architecture, and operations considerations that have
-to be weighed when running $[prodname] with an IP routed interconnection
-fabric. We will briefly outline those in the rest of this post. That
-said, $[prodname] operates equally well with Ethernet or IP interconnect
-fabrics.
-
-## Background
-
-### Basic $[prodname] architecture overview
-
-
-In a $[prodname] network, each compute server acts as a router for all of the
-endpoints that are hosted on that compute server. We call that function
-a vRouter. The data path is provided by the Linux kernel, the control
-plane by a BGP protocol server, and the management plane by $[prodname]'s
-on-server agent, _Felix_.
-
-Each endpoint can only communicate through its local vRouter, and the
-first and last _hop_ in any $[prodname] packet flow is an IP router hop
-through a vRouter. Each vRouter announces all of the endpoints it is
-attached to, to all the other vRouters and other routers on the
-infrastructure fabric, using BGP, usually with BGP route reflectors to
-increase scale. A discussion of why we use BGP can be found in the [Why BGP?](https://www.projectcalico.org/why-bgp/)
- blog post.
-
-Access control lists (ACLs) enforce security (and other) policy as
-directed by whatever cloud orchestrator is in use. There are other
-components in the $[prodname] architecture, but they are irrelevant to the
-interconnect network fabric discussion.
-
-### Overview of current common IP scale-out fabric architectures
-
-There are two approaches to building an IP fabric for a scale-out
-infrastructure. However, both of them, to date, have assumed that the
-edge router in the infrastructure is the top of rack (ToR) switch. In
-the $[prodname] model, that function is pushed to the compute server itself.
-
-The two approaches are outlined below. In this technical note, we will
-cover the second option, as it is more common in the scale-out world. If
-there is interest in the first approach, please contact Project Calico,
-and we can discuss, and if there is enough interest, maybe we will do
-another technical note on that approach. If you know of other approaches
-in use, we would be happy to host a guest technical note.
-
-1. The routing infrastructure is based on some form of IGP. Due to the
- limitations in scale of IGP networks (see the [why BGP post](https://www.projectcalico.org/why-bgp/)
- for discussion of this
- topic), the Project Calico team does not believe that using an IGP
- to distribute endpoint reachability information will adequately
- scale in a $[prodname] environment. However, it is possible to use a
- combination of IGP and BGP in the interconnect fabric, where an IGP
- communicates the path to the _next-hop_ router (in $[prodname], this is
- often the destination compute server) and BGP is used to distribute
- the actual next-hop for a given endpoint. This is a valid model,
-   and, in fact, is the most common approach in a widely distributed IP
- network (say a carrier's backbone network). The design of these
- networks is somewhat complex though, and will not be addressed
- further in this technical note. [^3]
-2. The other model, and the one that this note concerns itself with, is
- one where the routing infrastructure is based entirely on BGP. In
- this model, the IP network is "tight enough" or has a small enough
- diameter that BGP can be used to distribute endpoint routes, and the
-   paths to the next-hops for those routes are known to all of the
- routers in the network (in a $[prodname] network this includes the
- compute servers). This is the network model that this note
- will address.
-
-### BGP-only interconnect fabrics
-
-There are multiple methods to build a BGP-only interconnect fabric. We
-will focus on three models, each with two widely viable variations.
-There are other options, and we will briefly touch on why we didn't
-include some of them in the [Other Options appendix](#other-options).
-
-The first two models are:
-
-1. A BGP fabric where each of the TOR switches (and their subsidiary
-   compute servers) are a unique Autonomous System (AS)
- and they are interconnected via either an Ethernet switching plane
- provided by the spine switches in a
- [leaf/spine](http://bradhedlund.com/2012/10/24/video-a-basic-introduction-to-the-leafspine-data-center-networking-fabric-design/)
- architecture, or via a set of spine switches, each of which is also
- a unique AS. We'll refer to this as the _AS per rack_ model. This
- model is detailed in [IETF RFC 7938](https://datatracker.ietf.org/doc/html/rfc7938).
-2. A BGP fabric where each of the compute servers is a unique AS, and
- the TOR switches make up a transit AS. We'll refer to this as the
- _AS per server_ model.
-
-Each of these models can have either an Ethernet or an IP spine. In the
-case of an Ethernet spine, each spine switch provides an isolated
-Ethernet connection _plane_, as in the $[prodname] Ethernet interconnect
-fabric model, and each ToR switch is connected to each spine switch.
-
-In the case of an IP spine, each spine switch is a unique AS, and each ToR
-switch BGP peers with each spine switch. In both cases, the ToR switches
-use ECMP to load-balance traffic between all available spine switches.
-
-### Some BGP network design considerations
-
-Contrary to popular opinion, BGP is actually a fairly simple protocol.
-For example, the BGP configuration on a $[prodname] compute server is
-approximately sixty lines long, not counting comments. The perceived
-complexity is due to the things that you can _do_ with BGP. Many uses of
-BGP involve complex policy rules, where the behavior of BGP can be
-modified to meet technical (or business, financial, political, _etc._)
-requirements. A default $[prodname] network does not venture into those
-areas, [^4] and therefore is fairly straightforward.
-
-That said, there are a few design rules for BGP that need to be kept in
-mind when designing an IP fabric that will interconnect nodes in a
-$[prodname] network. These BGP design requirements _can_ be worked around, if
-necessary, but doing so takes the designer out of the standard BGP
-_envelope_ and should only be done by an implementer who is _very_
-comfortable with advanced BGP design.
-
-These considerations are:
-
-AS continuity
-
-: or _AS puddling_. Any router in an AS _must_ be able to communicate
-with any other router in that same AS without transiting another AS.
-
-Next hop behavior
-
-: By default, a BGP router does not change the _next hop_ of a route if it
-is peering with another router in its own AS. The inverse is also
-true: a BGP router will set itself as the _next hop_ of a route if
-it is peering with a router in another AS.
-
-Route reflection
-
-: All BGP routers in a given AS must _peer_ with all the other routers
-in that AS. This is referred to as a _complete BGP mesh_. This can
-become problematic as the number of routers in the AS scales up. The
-use of _route reflectors_ reduces the need for the complete BGP mesh.
-However, route reflectors also have scaling considerations.
-
-Endpoints
-
-: In a $[prodname] network, each endpoint is a route. Hardware networking
-platforms are constrained by the number of routes they can learn.
-This is usually in the range of tens or hundreds of thousands of routes. Route
-aggregation can help, but that is usually dependent on the
-capabilities of the scheduler used by the orchestration software.
-
-A deeper discussion of these considerations can be found in the
-[IP Fabric Design Considerations](#ip-fabric-design-considerations) appendix.
-
-The designs discussed below address these considerations.
-
-### The _AS Per Rack_ model
-
-This model is the closest to the model suggested by [IETF RFC 7938](https://datatracker.ietf.org/doc/html/rfc7938).
-
-As mentioned earlier, there are two versions of this model, one with a
-set of Ethernet planes interconnecting the ToR switches, and the other
-where the core planes are also routers. The following diagrams may be
-useful for the discussion.
-
-![](/img/calico-enterprise/l3-fabric-diagrams-as-rack-l2-spine.png)
-
-The diagram above shows the _AS per rack model_ where the ToR switches are
-physically meshed via a set of Ethernet switching planes.
-
-![](/img/calico-enterprise/l3-fabric-diagrams-as-rack-l3-spine.png)
-
-The diagram above shows the _AS per rack model_ where the ToR switches are
-physically meshed via a set of discrete BGP spine routers, each in
-their own AS.
-
-In this approach, every ToR-ToR or ToR-Spine (in the case of an AS per
-spine) link is an eBGP peering which means that there is no
-route-reflection possible (using standard BGP route reflectors) _north_
-of the ToR switches.
-
-If the L2 spine option is used, the result is that each ToR must
-peer with every other ToR switch in the cluster (which could be
-hundreds of peers).
-
-If the AS per spine option is used, then each ToR only has to peer with
-each spine (there are usually somewhere between two and sixteen spine
-switches in a pod). However, the spine switches must peer with all ToR
-switches (again, that would be hundreds, but most spine switches have
-more control plane capacity than the average ToR, so this might be more
-scalable in many circumstances).
-
-Within the rack, the configuration is the same for both variants, and is
-somewhat different than the configuration north of the ToR.
-
-Every router within the rack, which, in the case of $[prodname], is every
-compute server, shares the same AS as the ToR that it is connected
-to. That connection is in the form of an Ethernet switching layer. Each
-router in the rack must be directly connected to enable the AS to remain
-contiguous. The ToR's _router_ function is then connected to that
-Ethernet switching layer as well. The actual configuration of this is
-dependent on the ToR in use, but usually it means that the ports that
-are connected to the compute servers are treated as _subnet_ or
-_segment_ ports, and then the ToR's _router_ function has a single
-interface into that subnet.
-
-This configuration allows each compute server to connect to each other
-compute server in the rack without going through the ToR router, but it
-will, of course, go through the ToR switching function. The compute
-servers and the ToR router could all be directly meshed, or a route
-reflector could be used within the rack, either hosted on the ToR
-itself, or as a virtual function hosted on one or more compute servers
-within the rack.
-
-The ToR, as the eBGP router, redistributes all of the routes from other
-ToRs, as well as routes external to the data center, to the compute
-servers that are in its AS, and announces all of the routes from within
-the AS (rack) to the other ToRs and the larger world. This means that
-each compute server will see the ToR as the next hop for all external
-routes, and the individual compute servers are the next hop for all
-routes internal to the rack.
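-
-A hedged sketch of how this might be expressed with $[prodname]'s BGP
-resources, assuming the default node-to-node mesh has been disabled, the
-nodes carry a `rack` label, and the AS number and ToR address are
-placeholders rather than values from this design:
-
-```bash
-# Give every node in rack 0 the rack's AS number (repeat per node).
-calicoctl patch node node-rack0-01 -p '{"spec": {"bgp": {"asNumber": "64601"}}}'
-
-# iBGP peering from all of rack 0's nodes to the rack's ToR router interface.
-kubectl apply -f - <<EOF
-apiVersion: projectcalico.org/v3
-kind: BGPPeer
-metadata:
-  name: rack0-tor
-spec:
-  nodeSelector: rack == 'rack0'
-  peerIP: 10.0.0.1
-  asNumber: 64601
-EOF
-```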
-
-### The _AS per Compute Server_ model
-
-This model takes the concept of an AS per rack to its logical
-conclusion. In the earlier referenced [IETF RFC 7938](https://datatracker.ietf.org/doc/html/rfc7938),
-the assumption in the overall model is that the ToR is the first-tier
-aggregation and routing element. In $[prodname], the ToR, if it is an L3
-router, is actually the second tier. Remember, in $[prodname], the compute
-server is always the first/last router for an endpoint, and is also the
-first/last point of aggregation.
-
-Therefore, if we follow the architecture of the RFC, the compute
-server, not the ToR, should be the AS boundary. The differences can be
-seen in the following two diagrams.
-
-![](/img/calico-enterprise/l3-fabric-diagrams-as-server-l2-spine.png)
-
-The diagram above shows the _AS per compute server model_ where the ToR
-switches are physically meshed via a set of Ethernet switching planes.
-
-![](/img/calico-enterprise/l3-fabric-diagrams-as-server-l3-spine.png)
-
-The diagram above shows the _AS per compute server model_ where the ToR
-switches are physically connected to a set of independent routing
-planes.
-
-As can be seen in these diagrams, there are still the same two variants
-as in the _AS per rack_ model, one where the spine switches provide a
-set of independent Ethernet planes to interconnect the ToR switches, and
-the other where that is done by a set of independent routers.
-
-The real difference in this model is that the compute servers, as well
-as the ToR switches, are all independent autonomous systems. To make this
-work at scale, four byte AS numbers, as discussed in
-[RFC 4893](http://www.faqs.org/rfcs/rfc4893.html 'RFC 4893'), are required. Without
-four byte AS numbering, the total number of ToRs and compute
-servers in a $[prodname] fabric would be limited to the approximately five
-thousand available private AS [^5] numbers. If four byte AS numbers are
-used, there are approximately ninety-two million private AS numbers
-available. This should be sufficient for any given $[prodname] fabric.
-
-The other difference in this model _vs._ the AS per rack model is that
-there are no route reflectors used, as all BGP peerings are eBGP. In
-this case, each compute server in a given rack peers with its ToR switch
-which is also acting as an eBGP router. For two servers within the same
-rack to communicate, they will be routed through the ToR. Therefore,
-each server will have one peering to each ToR it is connected to, and
-each ToR will have a peering with each compute server that it is
-connected to (normally, all the compute servers in the rack).
-
-The inter-ToR connectivity considerations are the same in scale and
-scope as in the AS per rack model.
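-
-In $[prodname] terms, the main practical difference from the previous sketch is
-that every node gets its own (four byte, private) AS number, and the peering to
-the ToR becomes eBGP. All of the values below are placeholders:
-
-```bash
-# One unique private 4-byte AS per compute server.
-calicoctl patch node server-01 -p '{"spec": {"bgp": {"asNumber": "4200000001"}}}'
-calicoctl patch node server-02 -p '{"spec": {"bgp": {"asNumber": "4200000002"}}}'
-
-# eBGP peering from every node in the rack to the ToR, which has its own AS.
-kubectl apply -f - <<EOF
-apiVersion: projectcalico.org/v3
-kind: BGPPeer
-metadata:
-  name: rack0-tor-ebgp
-spec:
-  nodeSelector: rack == 'rack0'
-  peerIP: 10.0.0.1
-  asNumber: 65001
-EOF
-```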
-
-### The _Downward Default_ model
-
-The final model is a bit different. Whereas, in the previous models, all
-of the routers in the infrastructure carry full routing tables, and
-leave their AS paths intact, this model [^6] removes the AS numbers at
-each stage of the routing path. This is to prevent routes from other
-nodes in the network from being rejected because they appear to come from the
-_local_ AS (since the source and destination of the route would share the
-same AS number).
-
-The following diagram shows the AS relationships in this model.
-
-![](/img/calico-enterprise/l3-fabric-downward-default.png)
-
-In the diagram above, we are showing that all $[prodname] nodes share the same
-AS number, as do all ToR switches. However, those ASs are different
-(_A1_ is not the same network as _A2_, even though they both share the
-same AS number _A_).
-
-While the use of a single AS for all ToR switches, and another for all
-compute servers simplifies deployment (standardized configuration), the
-real benefit comes in the offloading of the routing tables in the ToR
-switches.
-
-In this model, each router announces all of its routes to its upstream
-peer (the $[prodname] routers to their ToR, the ToRs to the spine switches).
-However, in return, the upstream router only announces a default route.
-In this case, a given $[prodname] router only has routes for the endpoints
-that are locally hosted on it, as well as the default from the ToR.
-Since the ToR is the only path from the $[prodname] network to the rest of the
-network, this matches reality. The same happens between the ToR switches
-and the spine. This means that the ToR only has to install the routes
-that are for endpoints that are hosted on its downstream $[prodname] nodes.
-Even if we were to host 200 endpoints per $[prodname] node, and stuff 80
-$[prodname] nodes in each rack, that would still limit the routing table on
-the ToR to a maximum of 16,000 entries (well within the capabilities of
-even the most modest of switches).
-
-Since the default route is originated by the spine, there is no
-chance for a downward-announced route to originate from the recipient's
-AS, preventing the _AS puddling_ problem.
-
-There is one (minor) drawback to this model, in that all traffic that is
-destined for an invalid destination (the destination IP does not exist)
-will be forwarded to the spine switches before it is dropped.
-
-It should also be noted that the spine switches do need to carry all of
-the $[prodname] network routes, just as they do in the routed spines in the
-previous examples. In short, this model imposes no more load on the
-spines than they already would have, and substantially reduces the
-amount of routing table space used on the ToR switches. It also reduces
-the number of routes in the $[prodname] nodes, but, as we have discussed
-before, that is not a concern in most deployments as the amount of
-memory consumed by a full routing table in $[prodname] is a fraction of the
-total memory available on a modern compute server.
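-
-To make the effect concrete, under this model the routing table on a
-$[prodname] node collapses to roughly the following shape (all addresses,
-interface names, and the use of BIRD are invented for illustration):
-
-```bash
-ip route
-# default via 10.0.0.1 dev eth0 proto bird    <- the default learned from the ToR
-# 10.65.0.24 dev cali12345 scope link         <- one route per local workload
-# 10.65.0.25 dev cali67890 scope link
-```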
-
-## Recommendation
-
-The Project Calico team recommends the use of the [AS per rack](#the-as-per-rack-model) model if
-the resultant routing table size can be accommodated by the ToR and
-spine switches, remembering to account for projected growth.
-
-If there is concern about the route table size in the ToR switches, the
-team recommends the [Downward Default](#the-downward-default-model) model.
-
-If there are concerns about both the spine and ToR switch route table
-capacity, or there is a desire to run a very simple L2 fabric to connect
-the $[prodname] nodes, then the user should consider the Ethernet fabric as
-detailed in [this post](l2-interconnect-fabric.mdx).
-
-If a $[prodname] user is interested in the AS per compute server, the Project
-Calico team would be very interested in discussing the deployment of
-that model.
-
-## Appendix
-
-### Other Options
-
-With the physical and logical connectivity laid out as in this note
-and the [Ethernet fabric note](l2-interconnect-fabric.mdx),
-the next hop router for a given route is always directly connected to
-the router receiving that route. This removes the need for another
-protocol to distribute the next hop routes.
-
-However, in many (or most) WAN BGP networks, the routers within a given
-AS may not be directly adjacent. Therefore, a router may receive a route
-with a next hop address that it is not directly adjacent to. In those
-cases, an IGP, such as OSPF or IS-IS, is used by the routers within a
-given AS to determine the path to the BGP next hop route.
-
-There may be $[prodname] architectures where there are similar models where
-the routers within a given AS are not directly adjacent. In those
-models, the use of an IGP in $[prodname] may be warranted. The configuration
-of those protocols are, however, beyond the scope of this technical
-note.
-
-### IP Fabric Design Considerations
-
-#### AS puddling
-
-The first consideration is that an AS must be kept contiguous. This
-means that any two nodes in a given AS must be able to communicate
-without traversing any other AS. If this rule is not observed, the
-effect is often referred to as _AS puddling_ and the network will _not_
-function correctly.
-
-A corollary of that rule is that any two administrative regions that
-share the same AS number are in the same AS, even if that was not the
-desire of the designer. BGP has no way of identifying if an AS is local
-or foreign other than the AS number. Therefore, re-use of an AS number
-for two _networks_ that are not directly connected, but only connected
-through another _network_ or AS, will not work without a lot of
-policy changes to the BGP routers.
-
-Another corollary of that rule is that a BGP router will not propagate a
-route to a peer if the route has an AS in its path that is the same AS
-as the peer. This prevents loops from forming in the network. The effect
-of this prevents two routers in the same AS from transiting another
-router (either in that AS or not).
-
-#### Next hop behavior
-
-Another consideration is based on the differences between iBGP and eBGP.
-BGP operates in two modes: if two routers are BGP peers and share the
-same AS number, then they are considered to be in an _internal_ BGP (or
-iBGP) peering relationship. If they are members of different ASs, then
-they are in an _external_ (or eBGP) relationship.
-
-BGP's original design model was that all BGP routers within a given AS
-would know how to get to one another (via static routes, IGP [^7]
-routing protocols, or the like), and that routers in different ASs would
-not know how to reach one another unless they were directly connected.
-
-Based on that design point, routers in an iBGP peering relationship
-assume that they do not transit traffic for other iBGP routers in a
-given AS (i.e. A can communicate with C, and therefore will not need to
-route through B), and therefore, do not change the _next hop_ attribute
-in BGP [^8].
-
-A router with an eBGP peering, on the other hand, assumes that its eBGP
-peer will not know how to reach the next hop route, and then will
-substitute its own address in the next hop field. This is often referred
-to as _next hop self_.
-
-In the $[prodname] [Ethernet fabric](l2-interconnect-fabric.mdx)
-model, all of the compute servers (the routers in a $[prodname] network) are
-directly connected over one or more Ethernet network(s) and therefore
-are directly reachable. In this case, a router in the $[prodname] network
-does not need to set _next hop self_ within the $[prodname] fabric.
-
-The models we present in this technical note ensure that all routes that
-may traverse a non-$[prodname] router are eBGP routes, and therefore _next
-hop self_ is automatically set correctly. If a deployment of $[prodname] in
-an IP interconnect fabric does not satisfy that constraint, then _next
-hop self_ must be appropriately configured.
-
-#### Route reflection
-
-As mentioned above, BGP expects that all of the iBGP routers in a
-network can see (and speak) directly to one another; this is referred to
-as a _BGP full mesh_. In small networks this is not a problem, but it
-does become interesting as the number of routers increases. For example,
-if you have 99 BGP routers in an AS and wish to add one more, you would
-have to configure the peering to that new router on each of the 99
-existing routers. Not only is this a problem at configuration time, it
-means that each router is maintaining 100 protocol adjacencies, which
-can start being a drain on constrained resources in a router. While this
-might be _interesting_ at 100 routers, it becomes an impossible task
-with thousands or tens of thousands of routers (the potential size of a $[prodname]
-network).
-
-Conveniently, large scale/Internet scale networks solved this problem
-almost 20 years ago by deploying BGP route reflection as described in
-[RFC 1966](http://www.faqs.org/rfcs/rfc1966.html 'RFC 1966'). This is a
-technique supported by almost all BGP routers today. In a large network,
-a number of route reflectors [^9] are evenly distributed and each iBGP
-router is _peered_ with one or more route reflectors (usually 2 or 3).
-Each route reflector can handle tens or hundreds of route reflector clients
-(in $[prodname]'s case, the compute servers), depending on the route reflector
-being used. Those route reflectors are, in turn, peered with each other.
-This means that there are an order of magnitude fewer route reflectors
-that need to be completely meshed, and each route reflector client is
-only configured to peer to 2 or 3 route reflectors. This is much easier
-to manage.
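-
-In a $[prodname] cluster, one common way to apply this (sketched below with
-placeholder names, and assuming the node-to-node mesh is disabled) is to
-promote a few nodes to route reflectors and peer every other node with them:
-
-```bash
-# Mark a node as a route reflector and label it so selectors can find it.
-calicoctl patch node rr-node-01 -p '{"spec": {"bgp": {"routeReflectorClusterID": "224.0.0.1"}}}'
-kubectl label node rr-node-01 route-reflector=true
-
-kubectl apply -f - <<EOF
-# Ordinary nodes peer with the route reflectors...
-apiVersion: projectcalico.org/v3
-kind: BGPPeer
-metadata:
-  name: peer-to-route-reflectors
-spec:
-  nodeSelector: "!has(route-reflector)"
-  peerSelector: has(route-reflector)
----
-# ...and the route reflectors peer with each other.
-apiVersion: projectcalico.org/v3
-kind: BGPPeer
-metadata:
-  name: route-reflector-mesh
-spec:
-  nodeSelector: has(route-reflector)
-  peerSelector: has(route-reflector)
-EOF
-```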
-
-Other route reflector architectures are possible, but those are beyond
-the scope of this document.
-
-#### Endpoints
-
-The final consideration is the number of endpoints in a $[prodname] network.
-In the [Ethernet fabric](l2-interconnect-fabric.mdx)
-case the number of endpoints is not constrained by the interconnect
-fabric, as the interconnect fabric does not _see_ the actual endpoints;
-it only _sees_ the actual vRouters, or compute servers. This is not the
-case in an IP fabric, however. IP networks forward by using the
-destination IP address in the packet, which, in $[prodname]'s case, is the
-destination endpoint. That means that the IP fabric nodes (ToR switches
-and/or spine switches, for example) must know the routes to each
-endpoint in the network. They learn this by participating as route
-reflector clients in the BGP mesh, just as the $[prodname] vRouter/compute
-server does.
-
-However, unlike a compute server which has a relatively unconstrained
-amount of memory, a physical switch is either memory constrained, or
-quite expensive. This means that the physical switch has a limit on how
-many _routes_ it can handle. The current industry standard for modern
-commodity switches is in the range of 128,000 routes. This means that,
-without other routing _tricks_, such as aggregation, a $[prodname]
-installation that uses an IP fabric will be limited to the routing table
-size of its constituent network hardware, with a reasonable upper limit
-today of 128,000 endpoints.
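-
-One aggregation lever available in $[prodname] IPAM itself is the IP pool block
-size: when block aggregation is in effect, a node can advertise one route per
-allocated block rather than one per endpoint, so larger blocks mean fewer
-routes for the fabric to carry. A hedged example follows; the CIDR and block
-size are placeholders, and because the block size of an existing pool cannot
-be changed, this would be a new pool:
-
-```bash
-kubectl apply -f - <<EOF
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
-  name: large-block-pool
-spec:
-  cidr: 10.65.0.0/16
-  blockSize: 24   # each node announces /24 blocks instead of the default /26
-EOF
-```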
-
-[^1]:
- In $[prodname]'s terminology, an endpoint is an IP address and
- interface. It could refer to a VM, a container, or even a process
- bound to an IP address running on a bare metal server.
-
-[^2]:
- This interconnect fabric provides the connectivity between the
- $[prodname] (v)Router (in almost all cases, the compute servers) nodes,
- as well as any other elements in the fabric (_e.g._ bare metal
- servers, border routers, and appliances).
-
-[^3]:
- If there is interest in a discussion of this approach, please let
- us know. The Project Calico team could either arrange a discussion,
- or if there was enough interest, publish a follow-up tech note.
-
-[^4]:
- However those tools are available if a given $[prodname] instance needs
- to utilize those policy constructs.
-
-[^5]:
- The two byte AS space reserves approximately the last five
- thousand AS numbers for private use. There is no technical reason
- why other AS numbers could not be used. However the re-use of global
- scope AS numbers within a private infrastructure is strongly
- discouraged. The chance for routing system failure or incorrect
- routing is substantial, and not restricted to the entity that is
- doing the reuse.
-
-[^6]:
- We first saw this design in a customer's lab, and thought it
- innovative enough to share (we asked them first, of course). Similar
- _AS Path Stripping_ approaches are used in ISP networks, however.
-
-[^7]:
- An Interior Gateway Protocol is a local routing protocol that does
- not cross an AS boundary. The primary IGPs in use today are OSPF and
- IS-IS. While complex iBGP networks still use IGP routing protocols,
- a data center is normally a fairly simple network, even if it has
- many routers in it. Therefore, in the data center case, the use of
- an IGP can often be disposed of.
-
-[^8]:
- A Next hop is an attribute of a route announced by a routing
- protocol. In simple terms a route is defined by a _target_, or the
- destination that is to be reached, and a _next hop_, which is the
- next router in the path to reach that target. There are many other
- characteristics in a route, but those are well beyond the scope of
- this post.
-
-[^9]:
- A route reflector may be a physical router, a software appliance,
- or simply a BGP daemon. It only processes routing messages, and does
- not pass actual data plane traffic. However, some route reflectors
- are co-resident on regular routers that do pass data plane traffic.
- While they may sit on one platform, the functions are distinct.
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/architecture/index.mdx b/calico-cloud_versioned_docs/version-20-1/reference/architecture/index.mdx
deleted file mode 100644
index 2782bfb10a..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/architecture/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Calico Cloud component architecture diagram, network design, and the data path between workloads.
-hide_table_of_contents: true
----
-
-# Architecture
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/attribution.mdx b/calico-cloud_versioned_docs/version-20-1/reference/attribution.mdx
deleted file mode 100644
index 156f3e6ae6..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/attribution.mdx
+++ /dev/null
@@ -1,3394 +0,0 @@
----
-description: Attribution report
----
-
-# Attribution
-
-## $[prodname] attribution report
-
-24 Feb 2021
-
-$[prodname] incorporates various open source software. The following open source components and their respective licenses used in the product are provided for your information.
-In the table below, you can look at the details of each project and license associated with it.
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/configuration.mdx b/calico-cloud_versioned_docs/version-20-1/reference/component-resources/configuration.mdx
deleted file mode 100644
index 3cc46715c8..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/configuration.mdx
+++ /dev/null
@@ -1,574 +0,0 @@
----
-description: Details for configuring the Calico Cloud CNI plugins.
----
-
-# Configuring the Calico Cloud CNI plugins
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-
-
-
-The $[prodname] CNI plugins do not need to be configured directly when installed by the operator. For a complete operator
-configuration reference, see [the installation API reference documentation][installation].
-
-
-
-
-The $[prodname] CNI plugin is configured through the standard CNI
-[configuration mechanism](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration).
-
-A minimal configuration file that uses $[prodname] for networking
-and IPAM looks like this:
-
-```json
-{
- "name": "any_name",
- "cniVersion": "0.1.0",
- "type": "calico",
- "ipam": {
- "type": "calico-ipam"
- }
-}
-```
-
-If the `$[nodecontainer]` container on a node was registered with a `NODENAME` other than the node hostname, the CNI plugin on that node must be configured with the same `nodename`:
-
-```json
-{
- "name": "any_name",
- "nodename": "",
- "type": "calico",
- "ipam": {
- "type": "calico-ipam"
- }
-}
-```
-
-Additional configuration can be added as detailed below.
-
-## Generic
-
-### Datastore type
-
-The $[prodname] CNI plugin supports the following datastore:
-
-- `datastore_type` (kubernetes)
-
-### Logging
-
-Logging is always to `stderr`. Logs are also written to `/var/log/calico/cni/cni.log` on each host by default.
-
-Logging can be configured using the following options in the netconf.
-
-| Option name | Default | Description |
-| -------------------- | ----------------------------- | --------------------------------------------------------------------------------------------------------- |
-| `log_level` | INFO | Logging level. Allowed levels are `ERROR`, `WARNING`, `INFO`, and `DEBUG`. |
-| `log_file_path` | `/var/log/calico/cni/cni.log` | Location on each host to write CNI log files to. Logging to file can be disabled by removing this option. |
-| `log_file_max_size` | 100 | Max file size in MB log files can reach before they are rotated. |
-| `log_file_max_age` | 30 | Max age in days that old log files will be kept on the host before they are removed. |
-| `log_file_max_count` | 10 | Max number of rotated log files allowed on the host before they are cleaned up. |
-
-```json
-{
- "name": "any_name",
- "cniVersion": "0.1.0",
- "type": "calico",
- "log_level": "DEBUG",
- "log_file_path": "/var/log/calico/cni/cni.log",
- "ipam": {
- "type": "calico-ipam"
- }
-}
-```
-
-### IPAM
-
-When using $[prodname] IPAM, the following flags determine what IP addresses should be assigned. NOTE: These flags are strings and not boolean values.
-
-- `assign_ipv4` (default: `"true"`)
-- `assign_ipv6` (default: `"false"`)
-
-A specific IP address can be chosen by using [`CNI_ARGS`](https://github.com/appc/cni/blob/master/SPEC.md#parameters) and setting `IP` to the desired value.
-
-By default, $[prodname] IPAM will assign IP addresses from all the available IP pools.
-
-Optionally, the list of possible IPv4 and IPv6 pools can also be specified via the following properties:
-
-- `ipv4_pools`: An array of CIDR strings or pool names. (e.g., `"ipv4_pools": ["10.0.0.0/24", "20.0.0.0/16", "default-ipv4-ippool"]`)
-- `ipv6_pools`: An array of CIDR strings or pool names. (e.g., `"ipv6_pools": ["2001:db8::1/120", "namedpool"]`)
-
-Example CNI config:
-
-```json
-{
- "name": "any_name",
- "cniVersion": "0.1.0",
- "type": "calico",
- "ipam": {
- "type": "calico-ipam",
- "assign_ipv4": "true",
- "assign_ipv6": "true",
- "ipv4_pools": ["10.0.0.0/24", "20.0.0.0/16", "default-ipv4-ippool"],
- "ipv6_pools": ["2001:db8::1/120", "default-ipv6-ippool"]
- }
-}
-```
-
-:::note
-
-`ipv6_pools` will be respected only when `assign_ipv6` is set to `"true"`.
-
-:::
-
-Any IP pools specified in the CNI config must have already been created. It is an error to specify IP pools in the config that do not exist.
-
-### Container settings
-
-The following options allow configuration of settings within the container namespace.
-
-- `allow_ip_forwarding` (default: `false`)
-
-```json
-{
- "name": "any_name",
- "cniVersion": "0.1.0",
- "type": "calico",
- "ipam": {
- "type": "calico-ipam"
- },
- "container_settings": {
- "allow_ip_forwarding": true
- }
-}
-```
-
-### Readiness Gates
-
-The following option makes the CNI plugin wait for the specified endpoint(s) to be ready before configuring pod networking.
-
-- `readiness_gates`
-
-This is an optional property that takes an array of URLs. Each URL specified will be polled for readiness, and pod networking will continue startup once all `readiness_gates` are ready.
-
-Example CNI config:
-
-```json
-{
- "name": "any_name",
- "cniVersion": "0.1.0",
- "type": "calico",
- "ipam": {
- "type": "calico-ipam"
- },
- "readiness_gates": ["http://localhost:9099/readiness", "http://localhost:8888/status"]
-}
-```
-
-## Kubernetes specific
-
-When using the $[prodname] CNI plugin with Kubernetes, the plugin must be able to access the Kubernetes API server to find the labels assigned to the Kubernetes pods. The recommended way to configure access is through a `kubeconfig` file specified in the `kubernetes` section of the network config. e.g.
-
-```json
-{
- "name": "any_name",
- "cniVersion": "0.1.0",
- "type": "calico",
- "kubernetes": {
- "kubeconfig": "/path/to/kubeconfig"
- },
- "ipam": {
- "type": "calico-ipam"
- }
-}
-```
-
-As a convenience, the API location can also be configured directly, e.g.
-
-```json
-{
- "name": "any_name",
- "cniVersion": "0.1.0",
- "type": "calico",
- "kubernetes": {
- "k8s_api_root": "http://127.0.0.1:8080"
- },
- "ipam": {
- "type": "calico-ipam"
- }
-}
-```
-
-### Enabling Kubernetes policy
-
-If you wish to use the Kubernetes `NetworkPolicy` resource then you must set a policy type in the network config.
-There is a single supported policy type, `k8s`. When set,
-you must also run `$[imageNames.kubeControllers]` with the policy, profile, and workloadendpoint controllers enabled.
-
-```json
-{
- "name": "any_name",
- "cniVersion": "0.1.0",
- "type": "calico",
- "policy": {
- "type": "k8s"
- },
- "kubernetes": {
- "kubeconfig": "/path/to/kubeconfig"
- },
- "ipam": {
- "type": "calico-ipam"
- }
-}
-```
-
-When using `type: k8s`, the $[prodname] CNI plugin requires read-only Kubernetes API access to the `Pods` resource in all namespaces.
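-
-The required RBAC is normally created for you by the install manifests, but as
-a rough sketch, the read-only access described above amounts to something like
-the following (the role name is illustrative, and it would need to be bound to
-whatever identity the CNI plugin's kubeconfig uses):
-
-```bash
-kubectl apply -f - <<EOF
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
-  name: calico-cni-plugin-pod-reader
-rules:
-  - apiGroups: [""]
-    resources: ["pods"]
-    verbs: ["get", "list"]
-EOF
-```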
-
-## IPAM
-
-### Using host-local IPAM
-
-Calico can be configured to use [host-local IPAM](https://www.cni.dev/plugins/current/ipam/host-local/) instead of the default `calico-ipam`. Host
-local IPAM uses a pre-determined CIDR per-host, and stores allocations locally on each node. This is in contrast to Calico IPAM, which dynamically
-allocates blocks of addresses and single addresses alike in response to cluster needs.
-
-Host local IPAM is generally only used on clusters where integration with the Kubernetes [route controller](https://kubernetes.io/docs/concepts/architecture/cloud-controller/#route-controller) is necessary.
-Note that some Calico features - such as the ability to request a specific address or pool for a pod - require Calico IPAM to function, and will not work with host-local IPAM enabled.
-
-
-
-
-The `host-local` IPAM plugin can be configured by setting the `Spec.CNI.IPAM.Plugin` field to `HostLocal` on the [operator.tigera.io/Installation](../installation/api.mdx#operator.tigera.io/v1.Installation) API.
-
-Calico will use the `host-local` IPAM plugin to allocate IPv4 addresses from the node's IPv4 pod CIDR if there is an IPv4 pool configured in `Spec.IPPools`, and an IPv6 address from the node's IPv6 pod CIDR if
-there is an IPv6 pool configured in `Spec.IPPools`.
-
-The following example configures Calico to assign dual-stack IPs to pods using the host-local IPAM plugin.
-
-```yaml
-kind: Installation
-apiVersion: operator.tigera.io/v1
-metadata:
- name: default
-spec:
- calicoNetwork:
- ipPools:
- - cidr: 192.168.0.0/16
- - cidr: 2001:db8::/64
- cni:
- type: Calico
- ipam:
- type: HostLocal
-```
-
-
-
-
-When using the CNI `host-local` IPAM plugin, two special values - `usePodCidr` and `usePodCidrIPv6` - are allowed for the subnet field (either at the top-level, or in a "range"). This tells the plugin to determine the subnet to use from the Kubernetes API based on the Node.podCIDR field. $[prodname] does not use the `gateway` field of a range so that field is not required and it will be ignored if present.
-
-:::note
-
-`usePodCidr` and `usePodCidrIPv6` can only be used as the value of the `subnet` field, it cannot be used in
-`rangeStart` or `rangeEnd` so those values are not useful if `subnet` is set to `usePodCidr`.
-
-:::
-
-$[prodname] supports the host-local IPAM plugin's `routes` field as follows:
-
-- If there is no `routes` field, $[prodname] will install a default `0.0.0.0/0`, and/or `::/0` route into the pod (depending on whether the pod has an IPv4 and/or IPv6 address).
-
-- If there is a `routes` field then $[prodname] will program _only_ the routes in the routes field into the pod. Since $[prodname] implements a point-to-point link into the pod, the `gw` field is not required and it will be ignored if present. All routes that $[prodname] installs will have $[prodname]'s link-local IP as the next hop.
-
-$[prodname] CNI plugin configuration:
-
-- `node_name`
- - The node name to use when looking up the CIDR value (defaults to current hostname)
-
-```json
-{
- "name": "any_name",
- "cniVersion": "0.1.0",
- "type": "calico",
- "kubernetes": {
- "kubeconfig": "/path/to/kubeconfig",
- "node_name": "node-name-in-k8s"
- },
- "ipam": {
- "type": "host-local",
- "ranges": [[{ "subnet": "usePodCidr" }], [{ "subnet": "usePodCidrIPv6" }]],
- "routes": [{ "dst": "0.0.0.0/0" }, { "dst": "2001:db8::/96" }]
- }
-}
-```
-
-When making use of the `usePodCidr` or `usePodCidrIPv6` options, the $[prodname] CNI plugin requires read-only Kubernetes API access to the `Nodes` resource.
-
-#### Configuring node and typha
-
-When using `host-local` IPAM with the Kubernetes API datastore, you must configure both $[nodecontainer] and the Typha deployment to use the `Node.podCIDR` field by setting the environment variable `USE_POD_CIDR=true` in each.
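-
-For a manifest-based installation, one way to set this is shown below; the
-namespace and object names depend on how $[prodname] was installed, so treat
-them as placeholders:
-
-```bash
-kubectl set env daemonset/calico-node -n kube-system USE_POD_CIDR=true
-kubectl set env deployment/calico-typha -n kube-system USE_POD_CIDR=true
-```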
-
-
-
-
-### Using Kubernetes annotations
-
-#### Specifying IP pools on a per-namespace or per-pod basis
-
-In addition to specifying IP pools in the CNI config as discussed above, $[prodname] IPAM supports specifying IP pools per-namespace or per-pod using the following [Kubernetes annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/).
-
-- `cni.projectcalico.org/ipv4pools`: A list of configured IPv4 Pools from which to choose an address for the pod.
-
- Example:
-
- ```yaml
- annotations:
- 'cni.projectcalico.org/ipv4pools': '["default-ipv4-ippool"]'
- ```
-
-- `cni.projectcalico.org/ipv6pools`: A list of configured IPv6 Pools from which to choose an address for the pod.
-
- Example:
-
- ```yaml
- annotations:
- 'cni.projectcalico.org/ipv6pools': '["2001:db8::1/120"]'
- ```
-
-If provided, these IP pools will override any IP pools specified in the CNI config.
-
-:::note
-
-This requires the IP pools to exist before `ipv4pools` or
-`ipv6pools` annotations are used. Requesting a subset of an IP pool
-is not supported. IP pools requested in the annotations must exactly
-match a configured [IPPool](../resources/ippool.mdx) resource.
-
-:::
-
-:::note
-
-The $[prodname] CNI plugin supports specifying an annotation per namespace.
-If both the namespace and the pod have this annotation, the pod information will be used.
-Otherwise, if only the namespace has the annotation the annotation of the namespace will
-be used for each pod in it.
-
-:::
-
-#### Requesting a specific IP address
-
-You can also request a specific IP address through [Kubernetes annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) with $[prodname] IPAM.
-There are two annotations to request a specific IP address:
-
-- `cni.projectcalico.org/ipAddrs`: A list of IPv4 and/or IPv6 addresses to assign to the Pod. The requested IP addresses will be assigned from $[prodname] IPAM and must exist within a configured IP pool.
-
- Example:
-
- ```yaml
- annotations:
- 'cni.projectcalico.org/ipAddrs': '["192.168.0.1"]'
- ```
-
-- `cni.projectcalico.org/ipAddrsNoIpam`: A list of IPv4 and/or IPv6 addresses to assign to the Pod, bypassing IPAM. Any IP conflicts and routing have to be taken care of manually or by some other system.
-  $[prodname] will only distribute routes to a Pod if its IP address falls within a $[prodname] IP pool that uses BGP mode. $[prodname] will not distribute `ipAddrsNoIpam` routes when operating in VXLAN mode. If you assign an IP address that is not in a $[prodname] IP pool, or if the IP address falls within a $[prodname] IP pool that uses VXLAN encapsulation, you must ensure that routing to that IP address is taken care of through another mechanism.
-
- Example:
-
- ```yaml
- annotations:
- 'cni.projectcalico.org/ipAddrsNoIpam': '["10.0.0.1"]'
- ```
-
- The ipAddrsNoIpam feature is disabled by default. It can be enabled in the feature_control section of the CNI network config:
-
- ```json
- {
- "name": "any_name",
- "cniVersion": "0.1.0",
- "type": "calico",
- "ipam": {
- "type": "calico-ipam"
- },
- "feature_control": {
- "ip_addrs_no_ipam": true
- }
- }
- ```
-
- :::caution
-
- This feature allows for the bypassing of network policy via IP spoofing.
- Users should make sure the proper admission control is in place to prevent users from selecting arbitrary IP addresses.
-
- :::
-
-:::note
-
-- The `ipAddrs` and `ipAddrsNoIpam` annotations can't be used together.
-- You can specify at most one IPv4 address, one IPv6 address, or one of each with these annotations.
-- When `ipAddrs` or `ipAddrsNoIpam` is used with `ipv4pools` or `ipv6pools`, `ipAddrs` / `ipAddrsNoIpam` take priority.
-
-:::
-
-#### Requesting a floating IP
-
-You can request a floating IP address for a pod through [Kubernetes annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) with $[prodname].
-
-:::note
-
-The specified address must belong to an IP Pool for advertisement to work properly.
-
-:::
-
-- `cni.projectcalico.org/floatingIPs`: A list of floating IPs which will be assigned to the pod's workload endpoint.
-
- Example:
-
- ```yaml
- annotations:
- 'cni.projectcalico.org/floatingIPs': '["10.0.0.1"]'
- ```
-
- The floatingIPs feature is disabled by default. It can be enabled in the feature_control section of the CNI network config:
-
- ```json
- {
- "name": "any_name",
- "cniVersion": "0.1.0",
- "type": "calico",
- "ipam": {
- "type": "calico-ipam"
- },
- "feature_control": {
- "floating_ips": true
- }
- }
- ```
-
- :::caution
-
- This feature can allow pods to receive traffic which may not have been intended for that pod.
- Users should make sure the proper admission control is in place to prevent users from selecting arbitrary floating IP addresses.
-
- :::
-
-### Using IP pools node selectors
-
-Nodes will only assign workload addresses from IP pools which select them. By
-default, IP pools select all nodes, but this can be configured using the
-`nodeSelector` field. Check out the [IP pool resource document](../resources/ippool.mdx)
-for more details.
-
-Example:
-
-1. Create (or update) an IP pool that only allocates IPs for nodes that
-   have the label `rack=0`.
-
-   ```bash
-   kubectl create -f -<<EOF
-   apiVersion: projectcalico.org/v3
-   kind: IPPool
-   metadata:
-     name: rack-0-ippool
-   spec:
-     cidr: 192.168.0.0/24
-     ipipMode: Always
-     natOutgoing: true
-     nodeSelector: rack == "0"
-   EOF
-   ```
-
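-1. Label a node so that the pool's selector matches it. The node name below is illustrative; substitute one of your own nodes:
-
-   ```bash
-   kubectl label nodes kube-node-0 rack=0
-   ```
-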
-[installation]: /reference/installation/api.mdx
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/configure-resources.mdx b/calico-cloud_versioned_docs/version-20-1/reference/component-resources/configure-resources.mdx
deleted file mode 100644
index 54155b6182..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/configure-resources.mdx
+++ /dev/null
@@ -1,699 +0,0 @@
----
-description: Configure Resource requests and limits.
----
-
-# Configure resource requests and limits
-
-## Big picture
-
-Resource requests and limits are essential configurations for managing resource allocation and ensuring optimal performance of Kubernetes workloads. In $[prodname], these configurations can be customized using custom resources to meet specific requirements and optimize resource utilization.
-
-:::note
-The CPU and memory values used in the examples are for demonstration purposes only; adjust them to your own system requirements. To find the list of all applicable containers for a component, refer to its specification.
-:::
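-
-As a quick way to list the containers of a component that is already running, a query like the following can be used (a sketch; adjust the resource kind, name, and namespace to the component you are configuring):
-
-```bash
-kubectl get deployment tigera-apiserver -n tigera-system \
-  -o jsonpath='{.spec.template.spec.containers[*].name}'
-```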
-
-## APIServer custom resource
-
-The [APIServer](../../reference/installation/api.mdx#operator.tigera.io/v1.APIServer) CR provides a way to configure APIServerDeployment. The following sections provide example configurations for this CR.
-
-### APIServerDeployment
-
-To configure resource specification for the [APIServerDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.APIServerDeployment), patch the APIServer CR using the below command:
-
-```bash
-$ kubectl patch apiserver tigera-secure --type=merge --patch='{"spec": {"apiServerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"calico-apiserver","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}},{"name":"tigera-queryserver","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
-```
-
-This command sets the CPU request to 100 milliCPU (mCPU), the memory request to 100 mebibytes (MiB), the CPU limit to 1 CPU, and the memory limit to 1000 MiB.
-
-#### Verification
-
-You can verify the configured resources using the following command:
-
-```bash
-$ kubectl get deployment.apps/tigera-apiserver -n tigera-system -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
-```
-
-This command will output the configured resource requests and limits for the Calico APIServerDeployment component in JSON format.
-
-```bash
-{
- "name": "calico-apiserver",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "1000Mi"
- },
- "requests": {
- "cpu": "100m",
- "memory": "100Mi"
- }
- }
-}
-{
- "name": "tigera-queryserver",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "1000Mi"
- },
- "requests": {
- "cpu": "100m",
- "memory": "100Mi"
- }
- }
-}
-```
-
-## ApplicationLayer custom resource
-
-The [ApplicationLayer](../../reference/installation/api.mdx#operator.tigera.io/v1.ApplicationLayer) CR provides a way to configure resources for L7LogCollectorDaemonSet. The following sections provide example configurations for this CR.
-
-### L7LogCollectorDaemonSet
-
-To configure resource specification for the [L7LogCollectorDaemonSet](../../reference/installation/api.mdx#operator.tigera.io/v1.L7LogCollectorDaemonSet), patch the ApplicationLayer CR using the below command:
-
-```bash
-$ kubectl patch applicationlayer tigera-secure --type=merge --patch='{"spec": {"l7LogCollectorDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"l7-collector","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}},{"name":"envoy-proxy","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
-applicationlayer.operator.tigera.io/tigera-secure patched
-```
-
-This command sets the CPU request to 100 milliCPU (mCPU), the memory request to 100 mebibytes (MiB), the CPU limit to 1 CPU, and the memory limit to 1000 MiB.
-
-#### Verification
-
-You can verify the configured resources using the following command:
-
-```bash
-$ kubectl get daemonset.apps/l7-log-collector -n calico-system -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
-```
-
-This command will output the configured resource requests and limits for the Calico L7LogCollectorDaemonSet component in JSON format.
-
-```bash
-{
- "name": "envoy-proxy",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "1000Mi"
- },
- "requests": {
- "cpu": "100m",
- "memory": "100Mi"
- }
- }
-}
-{
- "name": "l7-collector",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "1000Mi"
- },
- "requests": {
- "cpu": "100m",
- "memory": "100Mi"
- }
- }
-}
-```
-
-## Compliance custom resource
-
-The [Compliance](../../reference/installation/api.mdx#operator.tigera.io/v1.Compliance) CR provides a way to configure resources for ComplianceControllerDeployment, ComplianceSnapshotterDeployment, ComplianceBenchmarkerDaemonSet, ComplianceServerDeployment, and ComplianceReporterPodTemplate. The following sections provide example configurations for this CR.
-
-### ComplianceControllerDeployment
-
-To configure resource specification for the [ComplianceControllerDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.ComplianceControllerDeployment), patch the Compliance CR using the below command:
-
-```bash
-kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceControllerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-controller","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
-```
-
-This command sets the CPU request to 100 milliCPU (mCPU), the memory request to 100 mebibytes (MiB), the CPU limit to 1 CPU, and the memory limit to 1000 MiB.
-
-#### Verification
-
-You can verify the configured resources using the following command:
-
-```bash
-kubectl get deployment.apps/compliance-controller -n tigera-compliance -o json|jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
-```
-
-This command will output the configured resource requests and limits for the ComplianceControllerDeployment component in JSON format.
-
-```bash
-{
- "name": "compliance-controller",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "1000Mi"
- },
- "requests": {
- "cpu": "100m",
- "memory": "100Mi"
- }
- }
-}
-```
-
-
-### ComplianceSnapshotterDeployment
-
-To configure resource specification for the [ComplianceSnapshotterDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.ComplianceSnapshotterDeployment), patch the Compliance CR using the below command:
-
-```bash
-kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceSnapshotterDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-snapshotter","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
-```
-
-This command sets the CPU request to 100 milliCPU (mCPU), the memory request to 100 mebibytes (MiB), the CPU limit to 1 CPU, and the memory limit to 1000 MiB.
-
-#### Verification
-
-You can verify the configured resources using the following command:
-
-```bash
-kubectl get deployment.apps/compliance-snapshotter -n tigera-compliance -o json|jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
-```
-
-This command will output the configured resource requests and limits for the ComplianceSnapshotterDeployment in JSON format.
-
-```bash
-{
- "name": "compliance-snapshotter",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "1000Mi"
- },
- "requests": {
- "cpu": "100m",
- "memory": "100Mi"
- }
- }
-}
-```
-
-
-### ComplianceBenchmarkerDaemonSet
-
-To configure resource specification for the [ComplianceBenchmarkerDaemonSet](../../reference/installation/api.mdx#operator.tigera.io/v1.ComplianceBenchmarkerDaemonSet), patch the Compliance CR using the below command:
-
-```bash
-kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceBenchmarkerDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-benchmarker","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
-```
-
-This command sets the CPU request to 100 milliCPU (mCPU), the memory request to 100 mebibytes (MiB), the CPU limit to 1 CPU, and the memory limit to 1000 MiB.
-
-#### Verification
-
-You can verify the configured resources using the following command:
-
-```bash
-kubectl get daemonset.apps/compliance-benchmarker -n tigera-compliance -o json |jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
-```
-
-This command will output the configured resource requests and limits for the ComplianceBenchmarkerDaemonSet in JSON format.
-
-```bash
-{
- "name": "compliance-benchmarker",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "1000Mi"
- },
- "requests": {
- "cpu": "100m",
- "memory": "100Mi"
- }
- }
-}
-```
-
-### ComplianceServerDeployment
-
-To configure resource specification for the [ComplianceServerDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.ComplianceServerDeployment), patch the Compliance CR using the below command:
-
-```bash
-kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceServerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"compliance-server","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
-```
-
-This command sets the CPU request to 100 milliCPU (mCPU), the memory request to 100 mebibytes (MiB), the CPU limit to 1 CPU, and the memory limit to 1000 MiB.
-
-#### Verification
-
-You can verify the configured resources using the following command:
-
-```bash
-kubectl get deployment.apps/compliance-server -n tigera-compliance -o json| jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
-```
-
-This command will output the configured resource requests and limits for the ComplianceServerDeployment in JSON format.
-
-```bash
-{
- "name": "compliance-server",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "1000Mi"
- },
- "requests": {
- "cpu": "100m",
- "memory": "100Mi"
- }
- }
-}
-```
-
-
-### ComplianceReporterPodTemplate
-
-To configure resource specification for the [ComplianceReporterPodTemplate](../../reference/installation/api.mdx#operator.tigera.io/v1.ComplianceReporterPodTemplate), patch the Compliance CR using the below command:
-
-```bash
-kubectl patch compliance tigera-secure --type=merge --patch='{"spec": {"complianceReporterPodTemplate": {"template": {"spec": {"containers":[{"name":"reporter","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}'
-```
-
-This command sets the CPU request to 100 milliCPU (mCPU), the memory request to 100 mebibytes (MiB), the CPU limit to 1 CPU, and the memory limit to 1000 MiB.
-
-#### Verification
-
-You can verify the configured resources using the following command:
-
-```bash
-kubectl get Podtemplates tigera.io.report -n tigera-compliance -o json | jq '.template.spec.containers[] | {name: .name, resources: .resources}'
-```
-
-This command will output the configured resource requests and limits for the ComplianceReporterPodTemplate component in JSON format.
-
-```bash
-{
- "name": "reporter",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "1000Mi"
- },
- "requests": {
- "cpu": "100m",
- "memory": "100Mi"
- }
- }
-}
-```
-
-## Installation custom resource
-
-The [Installation CR](../../reference/installation/api.mdx) provides a way to configure resources for various $[prodname] components, including TyphaDeployment, CalicoNodeDaemonSet, CalicoNodeWindowsDaemonSet, CSINodeDriverDaemonSet, and CalicoKubeControllersDeployment. The following sections provide example configurations for this CR.
-
-Example Configurations:
-
-
-### TyphaDeployment
-
-To configure resource specification for the [TyphaDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.TyphaDeployment), patch the installation CR using the below command:
-
-```bash
-kubectl patch installations default --type=merge --patch='{"spec": {"typhaDeployment": {"spec": {"template": {"spec": {"containers": [{"name": "calico-typha", "resources": {"requests": {"cpu": "100m", "memory": "100Mi"}, "limits": {"cpu": "1", "memory": "1000Mi"}}}]}}}}}}'
-```
-
-This command sets the CPU request to 100 milliCPU (mCPU), the memory request to 100 mebibytes (MiB), the CPU limit to 1 CPU, and the memory limit to 1000 MiB.
-
-#### Verification
-
-You can verify the configured resources using the following command:
-
-```bash
-$ kubectl get deployment.apps/calico-typha -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}'
-```
-
-This command will output the configured resource requests and limits for the Calico TyphaDeployment component in JSON format.
-
-```bash
-{
- "name": "calico-typha",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "1000Mi"
- },
- "requests": {
- "cpu": "100m",
- "memory": "100Mi"
- }
- }
-}
-```
-
-
-### CalicoNodeDaemonSet
-
-To configure resource requests for the [calicoNodeDaemonSet](../../reference/installation/api.mdx#operator.tigera.io/v1.calicoNodeDaemonSet) component, patch the installation CR using the below command:
-
-```bash
-$ kubectl patch installations default --type=merge --patch='{"spec": {"calicoNodeDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"calico-node","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}'
-```
-
-This command sets the CPU request to 100 milliCPU (mCPU), the memory request to 100 mebibytes (MiB), the CPU limit to 1 CPU, and the memory limit to 1000 MiB.
-
-#### Verification
-
-You can verify the configured resources using the following command:
-
-```bash
-$ kubectl get daemonset.apps/calico-node -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}'
-```
-
-This command will output the configured resource requests and limits for the Calico calicoNodeDaemonSet component in JSON format.
-
-```bash
-{
- "name": "calico-node",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "1000Mi"
- },
- "requests": {
- "cpu": "100m",
- "memory": "100Mi"
- }
- }
-}
-```
-
-### CalicoNodeWindowsDaemonSet
-
-To configure resource requests for the [calicoNodeWindowsDaemonSet](../../reference/installation/api.mdx#operator.tigera.io/v1.calicoNodeWindowsDaemonSet) component, patch the installation CR using the below command:
-
-```bash
-$ kubectl patch installations default --type=merge --patch='{"spec": {"calicoNodeWindowsDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"calico-node-windows","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}'
-```
-
-This command sets the CPU request to 100 milliCPU (mCPU), the memory request to 100 mebibytes (MiB), the CPU limit to 1 CPU, and the memory limit to 1000 MiB.
-
-#### Verification
-
-You can verify the configured resources using the following command:
-
-```bash
-$ kubectl get daemonset.apps/calico-node-windows -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}'
-```
-
-This command will output the configured resource requests and limits for the Calico calicoNodeWindowsDaemonSet component in JSON format.
-
-```bash
-{
- "name": "calico-node",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "1000Mi"
- },
- "requests": {
- "cpu": "100m",
- "memory": "100Mi"
- }
- }
-}
-```
-
-### CalicoKubeControllersDeployment
-
-To configure resource requests for the [CalicoKubeControllersDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.CalicoKubeControllersDeployment) component, patch the installation CR using the below command:
-
-```bash
-$ kubectl patch installations default --type=merge --patch='{"spec": {"calicoKubeControllersDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"calico-kube-controllers","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}'
-```
-
-This command sets the CPU request to 100 milliCPU (mCPU), the memory request to 100 mebibytes (MiB), the CPU limit to 1 CPU, and the memory limit to 1000 MiB.
-
-#### Verification
-
-You can verify the configured resources using the following command:
-
-```bash
-$ kubectl get deployment.apps/calico-kube-controllers -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}'
-```
-
-This command will output the configured resource requests and limits for the Calico CalicoKubeControllersDeployment component in JSON format.
-
-```bash
-{
- "name": "calico-kube-controllers",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "1000Mi"
- },
- "requests": {
- "cpu": "100m",
- "memory": "100Mi"
- }
- }
-}
-
-```
-
-### CSINodeDriverDaemonSet
-
-To configure resource requests for the [CSINodeDriverDaemonSet](../../reference/installation/api.mdx#operator.tigera.io/v1.CSINodeDriverDaemonSet) component, patch the installation CR using the below command:
-
-```bash
-$ kubectl patch installations default --type=merge --patch='{"spec": {"csiNodeDriverDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"calico-csi","resources":{"requests":{"cpu":"100m", "memory":"100Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}},{"name":"csi-node-driver-registrar","resources":{"requests":{"cpu":"50m", "memory":"50Mi"}, "limits":{"cpu":"1", "memory":"1000Mi"}}}]}}}}}}'
-```
-
-This command sets the CPU request to 100 milliCPU (mCPU), the memory request to 100 mebibytes (MiB), the CPU limit to 1 CPU, and the memory limit to 1000 MiB for the `calico-csi` container; for the `csi-node-driver-registrar` container, the requests are set to 50 mCPU and 50 MiB with the same limits.
-
-#### Verification
-
-You can verify the configured resources using the following command:
-
-```bash
-$ kubectl get daemonset.apps/csi-node-driver -n calico-system -o json | jq '.spec.template.spec.containers[]| {name:.name,resources:.resources}'
-```
-
-This command will output the configured resource requests and limits for the Calico CSINodeDriverDaemonSet component in JSON format.
-
-```bash
-{
- "name": "calico-csi",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "1000Mi"
- },
- "requests": {
- "cpu": "100m",
- "memory": "100Mi"
- }
- }
-}
-{
- "name": "csi-node-driver-registrar",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "1000Mi"
- },
- "requests": {
- "cpu": "50m",
- "memory": "50Mi"
- }
- }
-}
-```
-
-## IntrusionDetection custom resource
-
-The [IntrusionDetection](../../reference/installation/api.mdx#operator.tigera.io/v1.IntrusionDetection) CR provides a way to configure resources for IntrusionDetectionControllerDeployment. The following sections provide example configurations for this CR.
-
-### IntrusionDetectionControllerDeployment
-
-To configure resource specification for the [IntrusionDetectionControllerDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.IntrusionDetectionControllerDeployment), patch the IntrusionDetection CR using the below command:
-
-```bash
-$ kubectl patch intrusiondetection tigera-secure --type=merge --patch='{"spec": {"intrusionDetectionControllerDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"webhooks-processor","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"1000Mi"}}},{"name":"controller","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"1000Mi"}}}]}}}}}}'
-```
-
-This command sets the CPU request to 100 milliCPU (mCPU), the memory request to 1000 mebibytes (MiB), the CPU limit to 1 CPU, and the memory limit to 1000 MiB for both containers.
-
-#### Verification
-
-You can verify the configured resources using the following command:
-
-```bash
-$ kubectl get deployment.apps/intrusion-detection-controller -n tigera-intrusion-detection -o json|jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
-```
-
-This command will output the configured resource requests and limits for the IntrusionDetectionControllerDeployment in JSON format.
-
-```bash
-{
- "name": "controller",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "1000Mi"
- },
- "requests": {
- "cpu": "100m",
- "memory": "1000Mi"
- }
- }
-}
-{
- "name": "webhooks-processor",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "1000Mi"
- },
- "requests": {
- "cpu": "100m",
- "memory": "1000Mi"
- }
- }
-}
-```
-
-## LogCollector custom resource
-
-The [LogCollector](../../reference/installation/api.mdx#operator.tigera.io/v1.LogCollector) CR provides a way to configure resources for FluentdDaemonSet and EKSLogForwarderDeployment. The following sections provide example configurations for this CR.
-
-### FluentdDaemonSet
-
-To configure resource specification for the [FluentdDaemonSet](../../reference/installation/api.mdx#operator.tigera.io/v1.FluentdDaemonSet), patch the LogCollector CR using the below command:
-
-```bash
-kubectl patch logcollector tigera-secure --type=merge --patch='{"spec": {"fluentdDaemonSet":{"spec": {"template": {"spec": {"containers":[{"name":"fluentd","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
-```
-
-This command sets the CPU request to 100 milliCPU (mCPU), the memory request to 100 mebibytes (MiB), the CPU limit to 1 CPU, and the memory limit to 1000 MiB.
-
-#### Verification
-
-You can verify the configured resources using the following command:
-
-```bash
-kubectl get daemonset.apps/fluentd-node -n tigera-fluentd -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
-```
-
-This command will output the configured resource requests and limits for the FluentdDaemonSet in JSON format.
-
-```bash
-{
- "name": "fluentd",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "1000Mi"
- },
- "requests": {
- "cpu": "100m",
- "memory": "100Mi"
- }
- }
-}
-```
-
-
-### EKSLogForwarderDeployment
-
-To configure resource specification for the [EKSLogForwarderDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.EKSLogForwarderDeployment), patch the LogCollector CR using the below command:
-
-```bash
-kubectl patch logcollector tigera-secure --type=merge --patch='{"spec": {"eksLogForwarderDeployment": {"spec": {"template": {"spec": {"containers":[{"name":"eks-log-forwarder","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
-```
-
-This command sets the CPU request to 100 milliCPU (mCPU), the memory request to 100 mebibytes (MiB), the CPU limit to 1 CPU, and the memory limit to 1000 MiB.
-
-#### Verification
-
-You can verify the configured resources using the following command:
-
-```bash
-kubectl get deployment.apps/eks-log-forwarder -n tigera-fluentd -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
-```
-
-This command will output the configured resource requests and limits for the EKSLogForwarderDeployment in JSON format.
-
-```bash
-{
- "name": "eks-log-forwarder",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "1000Mi"
- },
- "requests": {
- "cpu": "100m",
- "memory": "100Mi"
- }
- }
-}
-```
-
-## ManagementClusterConnection custom resource
-
-The [ManagementClusterConnection](../../reference/installation/api.mdx#operator.tigera.io/v1.ManagementClusterConnection) CR provides a way to configure resources for GuardianDeployment. The following sections provide example configurations for this CR.
-
-### GuardianDeployment
-
-To configure resource specification for the [GuardianDeployment](../../reference/installation/api.mdx#operator.tigera.io/v1.GuardianDeployment), patch the ManagementClusterConnection CR using the below command:
-
-```bash
-kubectl patch managementclusterconnection tigera-secure --type=merge --patch='{"spec": {"guardianDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-guardian","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
-```
-
-This command sets the CPU request to 100 milliCPU (mCPU), the memory request to 100 mebibytes (MiB), the CPU limit to 1 CPU, and the memory limit to 1000 MiB.
-
-#### Verification
-
-You can verify the configured resources using the following command:
-
-```bash
-kubectl get deployment.apps/tigera-guardian -n tigera-guardian -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
-```
-
-This command will output the configured resource requests and limits for the GuardianDeployment in JSON format.
-
-```bash
-{
-  "name": "tigera-guardian",
-  "resources": {
-    "limits": {
-      "cpu": "1",
-      "memory": "1000Mi"
-    },
-    "requests": {
-      "cpu": "100m",
-      "memory": "100Mi"
-    }
-  }
-}
-```
-
-## PacketCaptureAPI custom resource
-
-The [PacketCaptureAPI](../../reference/installation/api.mdx#operator.tigera.io/v1.PacketCaptureAPI) CR provides a way to configure resources for the PacketCaptureAPIDeployment. The following sections provide example configurations for this CR.
-
-### PacketCaptureAPIDeployment
-
-To configure resource specification for the [PacketCaptureAPI](../../reference/installation/api.mdx#operator.tigera.io/v1.PacketCaptureAPI), patch the PacketCaptureAPI CR using the below command:
-
-```bash
-kubectl patch packetcaptureapis tigera-secure --type=merge --patch='{"spec": {"packetCaptureAPIDeployment":{"spec": {"template": {"spec": {"containers":[{"name":"tigera-packetcapture-server","resources":{"limits":{"cpu":"1", "memory":"1000Mi"},"requests":{"cpu":"100m", "memory":"100Mi"}}}]}}}}}}'
-```
-
-This command sets the CPU request to 100 milliCPU (mCPU), the memory request to 100 mebibytes (MiB), the CPU limit to 1 CPU, and the memory limit to 1000 MiB.
-
-#### Verification
-
-You can verify the configured resources using the following command:
-
-```bash
-kubectl get deployment.apps/tigera-packetcapture -n tigera-packetcapture -o json | jq '.spec.template.spec.containers[] | {name: .name, resources: .resources}'
-```
-
-This command will output the configured resource requests and limits for the PacketCaptureAPIDeployment in JSON format.
-
-```bash
-{
- "name": "tigera-packetcapture-server",
- "resources": {
- "limits": {
- "cpu": "1",
- "memory": "1000Mi"
- },
- "requests": {
- "cpu": "100m",
- "memory": "100Mi"
- }
- }
-}
-```
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/index.mdx b/calico-cloud_versioned_docs/version-20-1/reference/component-resources/index.mdx
deleted file mode 100644
index c7c4e12306..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Reference content for Calico Cloud component resources.
-hide_table_of_contents: true
----
-
-# Component resources
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/kube-controllers/configuration.mdx b/calico-cloud_versioned_docs/version-20-1/reference/component-resources/kube-controllers/configuration.mdx
deleted file mode 100644
index e69f9d7fe4..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/kube-controllers/configuration.mdx
+++ /dev/null
@@ -1,91 +0,0 @@
----
-description: Calico Cloud Kubernetes controllers monitor the Kubernetes API and perform actions based on cluster state.
----
-
-# Configuring the Calico Cloud Kubernetes controllers
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-The $[prodname] Kubernetes controllers are deployed in a Kubernetes cluster. The different controllers monitor the Kubernetes API
-and perform actions based on cluster state.
-
-
-
-
-If you have installed Calico using the operator, see the [KubeControllersConfiguration](../../resources/kubecontrollersconfig.mdx) resource instead.
-
-
-
-
-The controllers are primarily configured through environment variables. When running
-the controllers as a Kubernetes pod, this is accomplished through the pod manifest `env`
-section.
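-
-For example, the `env` section of the kube-controllers container might include entries like the following (a sketch with illustrative values, not a complete manifest):
-
-```yaml
-env:
-  - name: ENABLED_CONTROLLERS
-    value: node
-  - name: LOG_LEVEL
-    value: info
-```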
-
-## The $[imageNames.kubeControllers] container
-
-The `$[imageNames.kubeControllers]` container includes the following controllers:
-
-1. node controller: watches for the removal of Kubernetes nodes and removes corresponding data from $[prodname], and optionally watches for node updates to create and sync host endpoints for each node.
-1. federation controller: watches Kubernetes services and endpoints locally and across all remote clusters, and programs
- Kubernetes endpoints for any locally configured service that specifies a service federation selector annotation.
-
-### Configuring datastore access
-
-The datastore type can be configured via the `DATASTORE_TYPE` environment variable. The only supported value is `kubernetes`.
-
-#### kubernetes
-
-When running the controllers as a Kubernetes pod, Kubernetes API access is [configured automatically][in-cluster-config] and
-no additional configuration is required. However, the controllers can also be configured to use an explicit [kubeconfig][kubeconfig] file override to
-configure API access if needed.
-
-| Environment | Description | Schema |
-| ------------ | ------------------------------------------------------------------ | ------ |
-| `KUBECONFIG` | Path to a Kubernetes kubeconfig file mounted within the container. | path |
-
-### Other configuration
-
-:::note
-
-Whenever possible, prefer configuring the kube-controllers component using the [KubeControllersConfiguration](../../resources/kubecontrollersconfig.mdx) API resource.
-Some configuration options may not be available through environment variables.
-
-:::
-
-The following environment variables can be used to configure the $[prodname] Kubernetes controllers.
-
-| Environment | Description | Schema | Default |
-| --------------------- | --------------------------------------------------------------------------- | --------------------------------------------------------- | ----------------------------------------------------- |
-| `DATASTORE_TYPE` | Which datastore type to use | etcdv3, kubernetes | kubernetes |
-| `ENABLED_CONTROLLERS` | Which controllers to run | namespace, node, policy, serviceaccount, workloadendpoint | policy,namespace,serviceaccount,workloadendpoint,node |
-| `LOG_LEVEL` | Minimum log level to be displayed. | debug, info, warning, error | info |
-| `KUBECONFIG` | Path to a kubeconfig file for Kubernetes API access | path | |
-| `SYNC_NODE_LABELS` | When enabled, Kubernetes node labels will be copied to Calico node objects. | boolean | true |
-| `AUTO_HOST_ENDPOINTS` | When set to enabled, automatically create a host endpoint for each node. | enabled, disabled | disabled |
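-
-As a quick illustration, one of these variables could be set on a running manifest-based deployment as shown below. This is only a sketch: prefer the [KubeControllersConfiguration](../../resources/kubecontrollersconfig.mdx) resource where possible, and note that an operator-managed deployment may revert direct edits.
-
-```bash
-kubectl set env deployment/calico-kube-controllers -n calico-system AUTO_HOST_ENDPOINTS=enabled
-```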
-
-## About each controller
-
-### Node controller
-
-The node controller has several functions.
-
-- Garbage collects IP addresses.
-- Automatically provisions host endpoints for Kubernetes nodes.
-
-### Federation controller
-
-The federation controller syncs Kubernetes federated endpoint changes to the $[prodname] datastore.
-The controller must have read access to the Kubernetes API to monitor `Service` and `Endpoints` events, and must
-also have write access to update `Endpoints`.
-
-The federation controller is disabled by default if `ENABLED_CONTROLLERS` is not explicitly specified.
-
-This controller is valid for all $[prodname] datastore types. For more details refer to the
-[Configuring federated services](../../../multicluster/services-controller.mdx) usage guide.
-
-
-
-
-[in-cluster-config]: https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod
-[kubeconfig]: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/kube-controllers/index.mdx b/calico-cloud_versioned_docs/version-20-1/reference/component-resources/kube-controllers/index.mdx
deleted file mode 100644
index 303cb85423..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/kube-controllers/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: kube-controllers is a set of Kubernetes controllers for Calico
-hide_table_of_contents: true
----
-
-# kube-controllers
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/kube-controllers/prometheus.mdx b/calico-cloud_versioned_docs/version-20-1/reference/component-resources/kube-controllers/prometheus.mdx
deleted file mode 100644
index b3d1799ed7..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/kube-controllers/prometheus.mdx
+++ /dev/null
@@ -1,86 +0,0 @@
----
-description: Review metrics for the kube-controllers component if you are using Prometheus.
----
-
-# Prometheus metrics
-
-kube-controllers can be configured to report a number of metrics through Prometheus. This reporting is enabled by default on port 9094. See the
-[configuration reference](../../resources/kubecontrollersconfig.mdx) for how to change metrics reporting configuration (or disable it completely).
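-
-For example, assuming the [KubeControllersConfiguration](../../resources/kubecontrollersconfig.mdx) resource is available through `kubectl` in your cluster, the port could be changed (or metrics disabled by setting it to `0`) with a patch along these lines:
-
-```bash
-kubectl patch kubecontrollersconfiguration default --type=merge \
-  --patch='{"spec": {"prometheusMetricsPort": 9095}}'
-```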
-
-## Metric reference
-
-#### kube-controllers specific
-
-kube-controllers exports a number of Prometheus metrics. The current set is as follows. Since some metrics
-may be tied to particular implementation choices inside kube-controllers we can't make any hard guarantees that
-metrics will persist across releases. However, we aim not to make any spurious changes to
-existing metrics.
-
-| Metric Name | Labels | Description |
-| ------------------------------------ | ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `ipam_allocations_in_use` | ippool, node | Number of Calico IP allocations currently in use by a workload or interface. |
-| `ipam_allocations_borrowed` | ippool, node | Number of Calico IP allocations currently in use where the allocation was borrowed from a block affine to another node. |
-| `ipam_allocations_gc_candidates` | ippool, node | Number of Calico IP allocations currently marked by the GC as potential leaks. This metric returns to zero under normal GC operation. |
-| `ipam_allocations_gc_reclamations` | ippool, node | Count of Calico IP allocations that have been reclaimed by the GC. Increase of this counter corresponds with a decrease of the candidates gauge under normal operation. |
-| `ipam_blocks` | ippool, node | Number of IPAM blocks. |
-| `ipam_ippool_size` | ippool | Number of IP addresses in the IP Pool CIDR. |
-| `ipam_blocks_per_node` | node | Number of IPAM blocks, indexed by the node to which they have affinity. Prefer `ipam_blocks` for new integrations. |
-| `ipam_allocations_per_node` | node | Number of Calico IP allocations, indexed by node on which the allocation was made. Prefer `ipam_allocations_in_use` for new integrations. |
-| `ipam_allocations_borrowed_per_node` | node | Number of Calico IP allocations borrowed from a non-affine block, indexed by node on which the allocation was made. Prefer `ipam_allocations_borrowed` for new integrations. |
-| `remote_cluster_connection_status` | remote_cluster_name | Status of the remote cluster connection in federation. Represented as numeric values 0 (NotConnecting), 1 (Connecting), 2 (InSync), 3 (ReSyncInProgress), 4 (ConfigChangeRestartRequired), 5 (ConfigInComplete). |
-
-Labels can be interpreted as follows:
-
-| Label Name | Description |
-| --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `node` | For allocation metrics, the node on which the allocation was made. For block metrics, the node for which the block has affinity. If the block has no affinity, value will be `no_affinity`. |
-| `ippool` | The IP Pool that the IPAM block occupies. If there is no IP Pool which matches the block, value will be `no_ippool`. |
-| `remote_cluster_name` | Name of the remote cluster in federation. |
-
-Prometheus metrics are self-documenting. With metrics turned on, `curl` can be used to list the
-metrics along with their help text and type information.
-
-```bash
-curl -s http://localhost:9094/metrics | head
-```
-
-#### CPU / memory metrics
-
-kube-controllers also exports the default set of metrics that Prometheus makes available. Currently, those
-include:
-
-| Name | Description |
-| -------------------------------------------- | ------------------------------------------------------------------ |
-| `go_gc_duration_seconds` | A summary of the GC invocation durations. |
-| `go_goroutines` | Number of goroutines that currently exist. |
-| `go_memstats_alloc_bytes` | Number of bytes allocated and still in use. |
-| `go_memstats_alloc_bytes_total` | Total number of bytes allocated, even if freed. |
-| `go_memstats_buck_hash_sys_bytes` | Number of bytes used by the profiling bucket hash table. |
-| `go_memstats_frees_total` | Total number of frees. |
-| `go_memstats_gc_sys_bytes` | Number of bytes used for garbage collection system metadata. |
-| `go_memstats_heap_alloc_bytes` | Number of heap bytes allocated and still in use. |
-| `go_memstats_heap_idle_bytes` | Number of heap bytes waiting to be used. |
-| `go_memstats_heap_inuse_bytes` | Number of heap bytes that are in use. |
-| `go_memstats_heap_objects` | Number of allocated objects. |
-| `go_memstats_heap_released_bytes_total` | Total number of heap bytes released to OS. |
-| `go_memstats_heap_sys_bytes` | Number of heap bytes obtained from system. |
-| `go_memstats_last_gc_time_seconds` | Number of seconds since 1970 of last garbage collection. |
-| `go_memstats_lookups_total` | Total number of pointer lookups. |
-| `go_memstats_mallocs_total` | Total number of mallocs. |
-| `go_memstats_mcache_inuse_bytes` | Number of bytes in use by mcache structures. |
-| `go_memstats_mcache_sys_bytes` | Number of bytes used for mcache structures obtained from system. |
-| `go_memstats_mspan_inuse_bytes` | Number of bytes in use by mspan structures. |
-| `go_memstats_mspan_sys_bytes` | Number of bytes used for mspan structures obtained from system. |
-| `go_memstats_next_gc_bytes` | Number of heap bytes when next garbage collection will take place. |
-| `go_memstats_other_sys_bytes` | Number of bytes used for other system allocations. |
-| `go_memstats_stack_inuse_bytes` | Number of bytes in use by the stack allocator. |
-| `go_memstats_stack_sys_bytes` | Number of bytes obtained from system for stack allocator. |
-| `go_memstats_sys_bytes` | Number of bytes obtained by system. Sum of all system allocations. |
-| `process_cpu_seconds_total` | Total user and system CPU time spent in seconds. |
-| `process_max_fds` | Maximum number of open file descriptors. |
-| `process_open_fds` | Number of open file descriptors. |
-| `process_resident_memory_bytes` | Resident memory size in bytes. |
-| `process_start_time_seconds` | Start time of the process since unix epoch in seconds. |
-| `process_virtual_memory_bytes` | Virtual memory size in bytes. |
-| `promhttp_metric_handler_requests_in_flight` | Current number of scrapes being served. |
-| `promhttp_metric_handler_requests_total` | Total number of scrapes by HTTP status code. |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/node/configuration.mdx b/calico-cloud_versioned_docs/version-20-1/reference/component-resources/node/configuration.mdx
deleted file mode 100644
index 5f65449226..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/node/configuration.mdx
+++ /dev/null
@@ -1,313 +0,0 @@
----
-description: Customize cnx-node using environment variables.
----
-
-# Configuring cnx-node
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-The `$[nodecontainer]` container is deployed to every node (on Kubernetes, by a DaemonSet), and runs three internal daemons:
-
-- Felix, the Calico per-node daemon that programs routes and enforces policy for local endpoints.
-- BIRD, the BGP daemon that distributes routing information to other nodes.
-- confd, a daemon that watches the Calico datastore for config changes and updates BIRD’s config files.
-
-For manifest-based installations, `$[nodecontainer]` is primarily configured through environment
-variables, typically set in the deployment manifest. Individual nodes may also be updated through the Node
-custom resource. `$[nodecontainer]` can also be configured through the Calico Operator.
-
-The rest of this page lists the available configuration options, and is followed by specific considerations for
-various settings.
-
-
-
-
-`$[nodecontainer]` does not need to be configured directly when installed by the operator. For a complete operator
-configuration reference, see [the installation API reference documentation][installation].
-
-
-
-
-## Environment variables
-
-### Configuring the default IP pool(s)
-
-Calico uses IP pools to configure how addresses are allocated to pods, and how networking works for certain
-sets of addresses. You can see the full schema for IP pools here.
-
-`$[nodecontainer]` can be configured to create a default IP pool for you, but only if none already
-exist in the cluster. The following options control the parameters on the created pool.
-
-| Environment | Description | Schema |
-| ---------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------- |
-| CALICO_IPV4POOL_CIDR | The IPv4 Pool to create if none exists at start up. It is invalid to define this variable and NO_DEFAULT_POOLS. [Default: the first unused CIDR from 192.168.0.0/16, 172.16.0.0/16, ..., 172.31.0.0/16] | IPv4 CIDR |
-| CALICO_IPV4POOL_BLOCK_SIZE | Block size to use for the IPv4 Pool created at startup. Block size for IPv4 should be in the range 20-32 (inclusive) [Default: `26`] | int |
-| CALICO_IPV4POOL_IPIP | IPIP Mode to use for the IPv4 Pool created at start up. If set to a value other than `Never`, `CALICO_IPV4POOL_VXLAN` should not be set. [Default: `Always`] | Always, CrossSubnet, Never ("Off" is also accepted as a synonym for "Never") |
-| CALICO_IPV4POOL_VXLAN | VXLAN Mode to use for the IPv4 Pool created at start up. If set to a value other than `Never`, `CALICO_IPV4POOL_IPIP` should not be set. [Default: `Never`] | Always, CrossSubnet, Never |
-| CALICO_IPV4POOL_NAT_OUTGOING | Controls NAT Outgoing for the IPv4 Pool created at start up. [Default: `true`] | boolean |
-| CALICO_IPV4POOL_NODE_SELECTOR | Controls the NodeSelector for the IPv4 Pool created at start up. [Default: `all()`] | [selector](../../resources/ippool.mdx#node-selector) |
-| CALICO_IPV6POOL_CIDR | The IPv6 Pool to create if none exists at start up. It is invalid to define this variable and NO_DEFAULT_POOLS. [Default: ``] | IPv6 CIDR |
-| CALICO_IPV6POOL_BLOCK_SIZE | Block size to use for the IPv6 POOL created at startup. Block size for IPv6 should be in the range 116-128 (inclusive) [Default: `122`] | int |
-| CALICO_IPV6POOL_VXLAN | VXLAN Mode to use for the IPv6 Pool created at start up. [Default: `Never`] | Always, CrossSubnet, Never |
-| CALICO_IPV6POOL_NAT_OUTGOING | Controls NAT Outgoing for the IPv6 Pool created at start up. [Default: `false`] | boolean |
-| CALICO_IPV6POOL_NODE_SELECTOR | Controls the NodeSelector for the IPv6 Pool created at start up. [Default: `all()`] | [selector](../../resources/ippool.mdx#node-selector) |
-| CALICO_IPV4POOL_DISABLE_BGP_EXPORT | Disable exporting routes over BGP for the IPv4 Pool created at start up. [Default: `false`] | boolean |
-| CALICO_IPV6POOL_DISABLE_BGP_EXPORT | Disable exporting routes over BGP for the IPv6 Pool created at start up. [Default: `false`] | boolean |
-| NO_DEFAULT_POOLS | Prevents $[prodname] from creating a default pool if one does not exist. [Default: `false`] | boolean |
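-
-For example, a manifest-based install might set the following on the `$[nodecontainer]` container to shape the default IPv4 pool created at first start-up (illustrative values; they have no effect if a pool already exists):
-
-```
-CALICO_IPV4POOL_CIDR=10.244.0.0/16
-CALICO_IPV4POOL_BLOCK_SIZE=26
-CALICO_IPV4POOL_VXLAN=CrossSubnet
-```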
-
-### Configuring BGP Networking
-
-BGP configuration for Calico nodes is normally configured through the [Node](../../resources/node.mdx), [BGPConfiguration](../../resources/bgpconfig.mdx), and [BGPPeer](../../resources/bgppeer.mdx) resources.
-`$[nodecontainer]` also exposes some options to allow setting certain fields on these objects, as described
-below.
-
-| Environment | Description | Schema |
-| ------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------- |
-| NODENAME | A unique identifier for this host. See [node name determination](#node-name-determination) for more details. | lowercase string |
-| IP | The IPv4 address to assign this host or detection behavior at startup. Refer to [IP setting](#ip-setting) for the details of the behavior possible with this field. | IPv4 |
-| IP6 | The IPv6 address to assign this host or detection behavior at startup. Refer to [IP setting](#ip-setting) for the details of the behavior possible with this field. | IPv6 |
-| IP_AUTODETECTION_METHOD | The method to use to autodetect the IPv4 address for this host. This is only used when the IPv4 address is being autodetected. See [IP Autodetection methods](#ip-autodetection-methods) for details of the valid methods. [Default: `first-found`] | string |
-| IP6_AUTODETECTION_METHOD | The method to use to autodetect the IPv6 address for this host. This is only used when the IPv6 address is being autodetected. See [IP Autodetection methods](#ip-autodetection-methods) for details of the valid methods. [Default: `first-found`] | string |
-| AS | The AS number for this node. When specified, the value is saved in the node resource configuration for this host, overriding any previously configured value. When omitted, if an AS number has been previously configured in the node resource, that AS number is used for the peering. When omitted, if an AS number has not yet been configured in the node resource, the node will use the global value (see [example modifying Global BGP settings](../../../networking/configuring/bgp.mdx) for details.) | int |
-| CALICO_ROUTER_ID | Sets the `router id` to use for BGP if no IPv4 address is set on the node. For an IPv6-only system, this may be set to `hash`. It then uses the hash of the nodename to create a 4 byte router id. See note below. [Default: ``] | string |
-| CALICO_K8S_NODE_REF | The name of the corresponding node object in the Kubernetes API. When set, used for correlating this node with events from the Kubernetes API. | string |
-
-### Configuring Datastore Access
-
-| Environment | Description | Schema |
-| -------------- | ------------------------------------------ | ------------------ |
-| DATASTORE_TYPE | Type of datastore. [Default: `kubernetes`] | kubernetes, etcdv3 |
-
-#### Configuring Kubernetes Datastore Access
-
-| Environment | Description | Schema |
-| ---------------- | ------------------------------------------------------------------------------ | ------ |
-| KUBECONFIG | When using the Kubernetes datastore, the location of a kubeconfig file to use. | string |
-| K8S_API_ENDPOINT | Location of the Kubernetes API. Not required if using kubeconfig. | string |
-| K8S_CERT_FILE | Location of a client certificate for accessing the Kubernetes API. | string |
-| K8S_KEY_FILE | Location of a client key for accessing the Kubernetes API. | string |
-| K8S_CA_FILE | Location of a CA for accessing the Kubernetes API. | string |
-
-:::note
-
-When $[prodname] is configured to use the Kubernetes API as the datastore, the environments
-used for BGP configuration are ignored—this includes selection of the node AS number (AS)
-and all of the IP selection options (IP, IP6, IP_AUTODETECTION_METHOD, IP6_AUTODETECTION_METHOD).
-
-:::
-
-### Configuring Logging
-
-| Environment | Description | Schema |
-| --------------------------- | -------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------- |
-| CALICO_DISABLE_FILE_LOGGING | Disables logging to file. [Default: "false"] | string |
-| CALICO_STARTUP_LOGLEVEL | The log severity above which startup `$[nodecontainer]` logs are sent to the stdout. [Default: `ERROR`] | DEBUG, INFO, WARNING, ERROR, CRITICAL, or NONE (case-insensitive) |
-
-### Configuring CNI Plugin
-
-`$[nodecontainer]` has a few options that are configurable based on the CNI plugin and CNI plugin
-configuration used on the cluster.
-
-| Environment | Description | Schema |
-| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
-| USE_POD_CIDR | Use the Kubernetes `Node.Spec.PodCIDR` field when using host-local IPAM. Requires Kubernetes API datastore. This field is required when using the Kubernetes API datastore with host-local IPAM. [Default: false] | boolean |
-| CALICO_MANAGE_CNI | Tells Calico to update the kubeconfig file at /host/etc/cni/net.d/calico-kubeconfig on credentials change. [Default: true] | boolean |
-
-### Other Environment Variables
-
-| Environment | Description | Schema |
-| ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------- |
-| DISABLE_NODE_IP_CHECK | Skips checks for duplicate Node IPs. This can reduce the load on the cluster when a large number of Nodes are restarting. [Default: `false`] | boolean |
-| WAIT_FOR_DATASTORE | Wait for connection to datastore before starting. If a successful connection is not made, node will shutdown. [Default: `false`] | boolean |
-| CALICO_NETWORKING_BACKEND | The networking backend to use. In `bird` mode, Calico will provide BGP networking using the BIRD BGP daemon; VXLAN networking can also be used. In `vxlan` mode, only VXLAN networking is provided; BIRD and BGP are disabled. If set to `none` (also known as policy-only mode), both BIRD and VXLAN are disabled. [Default: `bird`] | bird, vxlan, none |
-| CLUSTER_TYPE | Contains comma delimited list of indicators about this cluster. e.g. k8s, mesos, kubeadm, canal, bgp | string |
-
-## Appendix
-
-### Node name determination
-
-The `$[nodecontainer]` must know the name of the node on which it is running. The node name is used to
-retrieve the [Node resource](../../resources/node.mdx) configured for this node if it exists, or to create a new node resource representing the node if it does not. It is
-also used to associate the node with per-node [BGP configuration](../../resources/bgpconfig.mdx), [felix configuration](../../resources/felixconfig.mdx), and endpoints.
-
-When launched, the `$[nodecontainer]` container sets the node name according to the following order of precedence:
-
-1. The value specified in the `NODENAME` environment variable, if set.
-1. The value specified in `/var/lib/calico/nodename`, if it exists.
-1. The value specified in the `HOSTNAME` environment variable, if set.
-1. The hostname as returned by the operating system, converted to lowercase.
-
-Once the node has determined its name, the value will be cached in `/var/lib/calico/nodename` for future use.
-
-For example, if given the following conditions:
-
-- `NODENAME=""`
-- `/var/lib/calico/nodename` does not exist
-- `HOSTNAME="host-A"`
-- The operating system returns "host-A.internal.myorg.com" for the hostname
-
-$[nodecontainer] will use "host-a" for its name and will write the value in `/var/lib/calico/nodename`. If $[nodecontainer]
-is then restarted, it will use the cached value of "host-a" read from the file on disk.
-
-### IP setting
-
-The IP (for IPv4) and IP6 (for IPv6) environment variables are used to set,
-force autodetection, or disable auto detection of the address for the
-appropriate IP version for the node. When the environment variable is set,
-the address is saved in the
-[node resource configuration](../../resources/node.mdx)
-for this host, overriding any previously configured value.
-
-calico/node will attempt to detect subnet information from the host, and augment the provided address
-if possible.
-
-#### IP setting special case values
-
-There are several special case values that can be set in the IP(6) environment variables, they are:
-
-- Not set or empty string: Any previously set address on the node
- resource will be used. If no previous address is set on the node resource
- the two versions behave differently:
- - IP will do autodetection of the IPv4 address and set it on the node
- resource.
- - IP6 will not do autodetection.
-- `autodetect`: Autodetection will always be performed for the IP address and
- the detected address will overwrite any value configured in the node
- resource.
-- `none`: Autodetection will not be performed (this is useful to disable IPv4).
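-
-For example, a minimal sketch for a node that should always re-detect its IPv4 address and skip IPv6 entirely:
-
-```
-# Force IPv4 autodetection on every start; disable IPv6 address detection
-IP=autodetect
-IP6=none
-```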
-
-### IP autodetection methods
-
-When $[prodname] is used for routing, each node must be configured with an IPv4
-address and/or an IPv6 address that will be used to route between
-nodes. To eliminate node specific IP address configuration, the `$[nodecontainer]`
-container can be configured to autodetect these IP addresses. In many systems,
-there might be multiple physical interfaces on a host, or possibly multiple IP
-addresses configured on a physical interface. In these cases, there are
-multiple addresses to choose from and so autodetection of the correct address
-can be tricky.
-
-The IP autodetection methods are provided to improve the selection of the
-correct address, by limiting the selection based on suitable criteria for your
-deployment.
-
-The following sections describe the available IP autodetection methods.
-
-#### first-found
-
-The `first-found` option enumerates all interface IP addresses and returns the
-first valid IP address (based on IP version and type of address) on
-the first valid interface. Certain known "local" interfaces
-are omitted, such as the docker bridge. The order in which the interfaces
-and the IP addresses are listed is system-dependent.
-
-This is the default detection method. However, since this method only makes a
-very simplified guess, it is recommended to either configure the node with a
-specific IP address, or to use one of the other detection methods.
-
-Example:
-
-```
-IP_AUTODETECTION_METHOD=first-found
-IP6_AUTODETECTION_METHOD=first-found
-```
-
-#### kubernetes-internal-ip
-
-The `kubernetes-internal-ip` method will select the first internal IP address listed in the Kubernetes node's `Status.Addresses` field.
-
-Example:
-
-```
-IP_AUTODETECTION_METHOD=kubernetes-internal-ip
-IP6_AUTODETECTION_METHOD=kubernetes-internal-ip
-```
-
-#### can-reach=DESTINATION
-
-The `can-reach` method uses your local routing to determine which IP address
-will be used to reach the supplied destination. Both IP addresses and domain
-names may be used.
-
-Example using IP addresses:
-
-```
-IP_AUTODETECTION_METHOD=can-reach=8.8.8.8
-IP6_AUTODETECTION_METHOD=can-reach=2001:4860:4860::8888
-```
-
-Example using domain names:
-
-```
-IP_AUTODETECTION_METHOD=can-reach=www.google.com
-IP6_AUTODETECTION_METHOD=can-reach=www.google.com
-```
-
-#### interface=INTERFACE-REGEX
-
-The `interface` method uses the supplied interface [regular expression](https://pkg.go.dev/regexp)
-to enumerate matching interfaces and to return the first IP address on
-the first matching interface. The order in which the interfaces
-and the IP addresses are listed is system-dependent.
-
-Example matching interfaces named eth0, eth1, eth2, etc.:
-
-```
-IP_AUTODETECTION_METHOD=interface=eth.*
-IP6_AUTODETECTION_METHOD=interface=eth.*
-```
-
-#### skip-interface=INTERFACE-REGEX
-
-The `skip-interface` method uses the supplied interface [regular expression](https://pkg.go.dev/regexp)
-to exclude interfaces and to return the first IP address on the first
-interface that does not match. The order in which the interfaces
-and the IP addresses are listed is system-dependent.
-
-Example that excludes the interface enp6s0f0 and interfaces named eth0, eth1, eth2, etc.:
-
-```
-IP_AUTODETECTION_METHOD=skip-interface=enp6s0f0,eth.*
-IP6_AUTODETECTION_METHOD=skip-interface=enp6s0f0,eth.*
-```
-
-#### cidr=CIDR
-
-The `cidr` method will select any IP address from the node that falls within the given CIDRs.
-
-Example:
-
-```
-IP_AUTODETECTION_METHOD=cidr=10.0.1.0/24,10.0.2.0/24
-IP6_AUTODETECTION_METHOD=cidr=2001:4860::0/64
-```
-
-### Node readiness
-
-The `calico/node` container supports an exec readiness endpoint.
-
-To access this endpoint, use the following command.
-
-```bash
-docker exec calico-node /bin/calico-node [flag]
-```
-
-Substitute `[flag]` with one or more of the following.
-
-- `-bird-ready`
-- `-bird6-ready`
-- `-felix-ready`
-
-The BIRD readiness endpoint ensures that the BGP mesh is healthy by verifying that all BGP peers are established and
-no graceful restart is in progress.
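-
-For example, the following sketch checks both BIRD and Felix readiness in one call (assuming the container is named `calico-node`); a non-zero exit code should indicate that at least one check failed:
-
-```bash
-docker exec calico-node /bin/calico-node -bird-ready -felix-ready
-echo $?
-```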
-
-### Setting `CALICO_ROUTER_ID` for IPv6-only systems
-
-Setting `CALICO_ROUTER_ID` to the value `hash` will use a hash of the configured node name for the router ID. This should only be used in IPv6-only systems with no IPv4 address available for the router ID. Since each node chooses its own router ID in isolation, it is possible for two nodes to pick the same ID, resulting in a clash. The probability of such a clash grows with cluster size, so this feature should not be used in large clusters (500+ nodes).
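-
-A minimal sketch of this setting in the environment file for an IPv6-only node:
-
-```
-# Derive the BGP router ID from a hash of the node name (IPv6-only clusters)
-CALICO_ROUTER_ID=hash
-```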
-
-
-
-
-
-[installation]: /reference/installation/api.mdx
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/node/felix/configuration.mdx b/calico-cloud_versioned_docs/version-20-1/reference/component-resources/node/felix/configuration.mdx
deleted file mode 100644
index ce6018eafb..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/node/felix/configuration.mdx
+++ /dev/null
@@ -1,433 +0,0 @@
----
-description: Configure Felix, the daemon that runs on every machine that provides endpoints.
----
-
-# Configuring Felix
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-
-
-
-If you have installed Calico using the operator, you cannot modify the environment provided to Felix directly. To configure Felix, see the [FelixConfiguration](../../../resources/felixconfig.mdx) resource instead.
-
-
-
-
-:::note
-
-The following tables detail the configuration file and
-environment variable parameters. For `FelixConfiguration` resource settings,
-refer to [Felix Configuration Resource](../../../resources/felixconfig.mdx).
-
-:::
-
-Configuration for Felix is read from one of four possible locations, in order, as follows.
-
-1. Environment variables.
-2. The Felix configuration file.
-3. Host-specific `FelixConfiguration` resources (named `node.<nodename>`).
-4. The global `FelixConfiguration` resource (`default`).
-
-The value of any configuration parameter is the value read from the
-_first_ location containing a value. For example, if an environment variable
-contains a value, it takes top precedence.
-
-If not set in any of these locations, most configuration parameters have
-defaults, and it should be rare to have to explicitly set them.
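-
-For example, a minimal sketch of the highest-precedence form - an environment variable set on the calico-node (Felix) process - which overrides the same parameter set in the configuration file or in any `FelixConfiguration` resource:
-
-```
-FELIX_LOGSEVERITYSCREEN=Debug
-```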
-
-The full list of parameters which can be set is as follows.
-
-### General configuration
-
-| Configuration file parameter | Environment variable | Description | Schema |
-| ----------------------------------- | ---------------------------------------- | ----------- | ------ |
-| `DataplaneWatchdogTimeout` | `FELIX_DATAPLANEWATCHDOGTIMEOUT` | Deprecated: superseded by `HealthTimeoutOverrides`. Timeout before the main dataplane goroutine is determined to have hung and Felix will report non-live and non-ready. Can be increased if the liveness check incorrectly fails (for example if Felix is running slowly on a heavily loaded system). [Default: `90`] | int |
-| `AwsSrcDstCheck` | `FELIX_AWSSRCDSTCHECK` | Set the [source-destination-check](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html#EIP_Disable_SrcDestCheck) when using AWS EC2 instances. Check [IAM role and profile configuration](../../../resources/felixconfig.mdx#aws-iam-rolepolicy-for-source-destination-check-configuration) for setting the necessary permission for this setting to work. [Default: `DoNothing`] | `DoNothing`, `Disable`, `Enable` |
-| `AWSSecondaryIPSupport` | `FELIX_AWSSECONDARYIPSUPPORT` | Controls whether Felix will create secondary AWS ENIs for AWS-backed IP pools. This feature is documented in the [egress gateways on AWS guide](../../../../networking/egress/egress-gateway-aws.mdx). Should only be enabled on AWS. [Default: "Disabled"] | `Enabled`, `EnabledENIPerWorkload`, `Disabled` |
-| `AWSSecondaryIPRoutingRulePriority` | `FELIX_AWSSECONDARYIPROUTINGRULEPRIORITY` | Controls the priority of the policy-based routing rules used to implement AWS-backed IP addresses. Should only be changed to avoid conflicts if your nodes have additional policy based routing rules. [Default: 101] | int |
-| `AWSRequestTimeout` | `FELIX_AWSREQUESTTIMEOUT` | Timeout used for communicating with the AWS API (seconds). [Default: "30"] | int |
-| `DatastoreType` | `FELIX_DATASTORETYPE` | The datastore that Felix should read endpoints and policy information from. [Default: `etcdv3`] | `etcdv3`, `kubernetes` |
-| `DeviceRouteSourceAddress` | `FELIX_DEVICEROUTESOURCEADDRESS` | IPv4 address to use as the source hint on device routes programmed by Felix. [Default: No source hint is set on programmed routes and for local traffic from host to workload the source address will be chosen by the kernel.] | `<IPv4 address>` |
-| `DeviceRouteSourceAddressIPv6` | `FELIX_DEVICEROUTESOURCEADDRESSIPV6` | IPv6 address to use as the source hint on device routes programmed by Felix. [Default: No source hint is set on programmed routes and for local traffic from host to workload the source address will be chosen by the kernel.] | `<IPv6 address>` |
-| `DeviceRouteProtocol` | `FELIX_DEVICEROUTEPROTOCOL` | This defines the route protocol added to programmed device routes. [Default: `RTPROT_BOOT`] | int |
-| `DisableConntrackInvalidCheck` | `FELIX_DISABLECONNTRACKINVALIDCHECK` | Disable the dropping of packets that aren't either a valid handshake or part of an established connection. [Default: `false`] | boolean |
-| `EndpointReportingDelaySecs` | `FELIX_ENDPOINTREPORTINGDELAYSECS` | Set the endpoint reporting delay between status check intervals, in seconds. Only used if endpoint reporting is enabled. [Default: `1`] | int |
-| `EndpointReportingEnabled` | `FELIX_ENDPOINTREPORTINGENABLED` | Enable the endpoint status reporter. [Default: `false`] | boolean |
-| `ExternalNodesCIDRList` | `FELIX_EXTERNALNODESCIDRLIST` | Comma-delimited list of IPv4 or CIDR of external-non-calico-nodes from which IPIP traffic is accepted by calico-nodes. [Default: ""] | string |
-| `FailsafeInboundHostPorts` | `FELIX_FAILSAFEINBOUNDHOSTPORTS` | List of PortProto struct objects including UDP/TCP/SCTP ports and CIDRs that Felix will allow incoming traffic to host endpoints on irrespective of the security policy. This is useful to avoid accidentally cutting off a host with incorrect configuration. For backwards compatibility, if the protocol is not specified, it defaults to `"tcp"`. If a CIDR is not specified, it will allow traffic from all addresses. To disable all inbound host ports, use the value `[]`. The default value allows ssh access, DHCP, BGP, etcd and the Kubernetes API. [Default: `[{"port":22,"protocol":"tcp"},{"port":68,"protocol":"udp"},{"port":179,"protocol":"tcp"},{"port":2379,"protocol":"tcp"}, {"port":2380,"protocol":"tcp"}, {"port":5473,"protocol":"tcp"}, {"port":6443,"protocol":"tcp"}, {"port":6666,"protocol":"tcp"}, {"port":6667,"protocol":"tcp"}]`] | list |
-| `FailsafeOutboundHostPorts` | `FELIX_FAILSAFEOUTBOUNDHOSTPORTS` | List of PortProto struct objects including UDP/TCP/SCTP ports and CIDRs that Felix will allow outgoing traffic from host endpoints to irrespective of the security policy. This is useful to avoid accidentally cutting off a host with incorrect configuration. For backwards compatibility, if the protocol is not specified, it defaults to `"tcp"`. If a CIDR is not specified, it will allow traffic from all addresses. To disable all outbound host ports, use the value `[]`. The default value opens etcd's standard ports to ensure that Felix does not get cut off from etcd as well as allowing DHCP, DNS, BGP and the Kubernetes API. [Default: `[{"port":53,"protocol":"udp"},{"port":67,"protocol":"udp"}, {"port":179,"protocol":"tcp"}, {"port":2379,"protocol":"tcp"}, {"port":2380,"protocol":"tcp"}, {"port":5473,"protocol":"tcp"}, {"port":6443,"protocol":"tcp"}, {"port": 6666,"protocol":"tcp"}, {"port":6667,"protocol":"tcp"}]`] | list | |
-| `FelixHostname` | `FELIX_FELIXHOSTNAME` | The hostname Felix reports to the plugin. Should be used if the hostname Felix autodetects is incorrect or does not match what the plugin will expect. [Default: `socket.gethostname()`] | string |
-| `HealthEnabled` | `FELIX_HEALTHENABLED` | When enabled, exposes felix health information via an http endpoint. | boolean |
-| `HealthHost` | `FELIX_HEALTHHOST` | The address on which Felix will respond to health requests. [Default: `localhost`] | string |
-| `HealthPort` | `FELIX_HEALTHPORT` | The port on which Felix will respond to health requests. [Default: `9099`] | int |
-| `HealthTimeoutOverrides` | `FELIX_HEALTHTIMEOUTOVERRIDES` | Allows the internal watchdog timeouts of individual subcomponents to be overridden; example: "InternalDataplaneMainLoop=30s,CalculationGraph=2m". This is useful for working around "false positive" liveness timeouts that can occur in particularly stressful workloads or if CPU is constrained. For a list of active subcomponents, see Felix's logs. [Default: ``] | Comma-delimited list of key/value pairs where the values are durations: `1s`, `10s`, `5m`, etc. |
-| `IpInIpEnabled` | `FELIX_IPINIPENABLED` | Optional, you shouldn't need to change this setting as Felix calculates if IPIP should be enabled based on the existing IP Pools. When set, this overrides whether Felix should configure an IPinIP interface on the host. When explicitly disabled in FelixConfiguration, Felix will not clean up addresses from the `tunl0` interface (use this if you need to add addresses to that interface and don't want to have them removed). [Default: unset] | optional boolean |
-| `IpInIpMtu` | `FELIX_IPINIPMTU` | The MTU to set on the IPIP tunnel device. Zero value means auto-detect. See [Configuring MTU](../../../../networking/configuring/mtu.mdx) [Default: `0`] | int |
-| `IPv4VXLANTunnelAddr` | | IP address of the IPv4 VXLAN tunnel. This is system configured and should not be updated manually. | string |
-| `LogFilePath` | `FELIX_LOGFILEPATH` | The full path to the Felix log. Set to `none` to disable file logging. [Default: `/var/log/calico/felix.log`] | string |
-| `LogSeverityFile` | `FELIX_LOGSEVERITYFILE` | The log severity above which logs are sent to the log file. [Default: `Info`] | `Debug`, `Info`, `Warning`, `Error`, `Fatal` |
-| `LogSeverityScreen` | `FELIX_LOGSEVERITYSCREEN` | The log severity above which logs are sent to the stdout. [Default: `Info`] | `Debug`, `Info`, `Warning`, `Error`, `Fatal` |
-| `LogSeveritySys` | `FELIX_LOGSEVERITYSYS` | The log severity above which logs are sent to the syslog. Set to `none` for no logging to syslog. [Default: `Info`] | `Debug`, `Info`, `Warning`, `Error`, `Fatal` |
-| `LogDebugFilenameRegex` | `FELIX_LOGDEBUGFILENAMEREGEX` | Controls which source code files have their Debug log output included in the logs. Only logs from files with names that match the given regular expression are included. The filter only applies to Debug level logs. [Default: `""`] | regex |
-| `PolicySyncPathPrefix` | `FELIX_POLICYSYNCPATHPREFIX` | File system path where Felix notifies services of policy changes over Unix domain sockets. This is required only if you're configuring [L7 logs](../../../../visibility/elastic/l7/configure.mdx), or [egress gateways](../../../../networking/egress/index.mdx). Set to `""` to disable. [Default: `""`] | string |
-| `PrometheusGoMetricsEnabled` | `FELIX_PROMETHEUSGOMETRICSENABLED` | Set to `false` to disable Go runtime metrics collection, which the Prometheus client does by default. This reduces the number of metrics reported, reducing Prometheus load. [Default: `true`] | boolean |
-| `PrometheusMetricsEnabled` | `FELIX_PROMETHEUSMETRICSENABLED` | Set to `true` to enable the Prometheus metrics server in Felix. [Default: `false`] | boolean |
-| `PrometheusMetricsHost` | `FELIX_PROMETHEUSMETRICSHOST` | TCP network address that the Prometheus metrics server should bind to. [Default: `""`] | string |
-| `PrometheusMetricsPort` | `FELIX_PROMETHEUSMETRICSPORT` | TCP port that the Prometheus metrics server should bind to. [Default: `9091`] | int |
-| `PrometheusProcessMetricsEnabled` | `FELIX_PROMETHEUSPROCESSMETRICSENABLED` | Set to `false` to disable process metrics collection, which the Prometheus client does by default. This reduces the number of metrics reported, reducing Prometheus load. [Default: `true`] | boolean |
-| `PrometheusWireguardMetricsEnabled` | `FELIX_PROMETHEUSWIREGUARDMETRICSENABLED` | Set to `false` to disable wireguard device metrics collection, which Felix does by default. [Default: `true`] | boolean |
-| `RemoveExternalRoutes` | `FELIX_REMOVEEXTERNALROUTES` | Whether or not to remove device routes that have not been programmed by Felix. Disabling this will allow external applications to also add device routes. [Default: `true`] | bool |
-| `ReportingIntervalSecs` | `FELIX_REPORTINGINTERVALSECS` | Interval at which Felix reports its status into the datastore. 0 means disabled and is correct for Kubernetes-only clusters. Must be non-zero in OpenStack deployments. [Default: `30`] | int |
-| `ReportingTTLSecs` | `FELIX_REPORTINGTTLSECS` | Time-to-live setting for process-wide status reports. [Default: `90`] | int |
-| `RouteTableRange` | `FELIX_ROUTETABLERANGE` | _deprecated in favor of `RouteTableRanges`_ Calico programs additional Linux route tables for various purposes. `RouteTableRange` specifies the indices of the route tables that Calico should use. [Default: `""`] | `<min>-<max>` |
-| `RouteTableRanges` | `FELIX_ROUTETABLERANGES` | Calico programs additional Linux route tables for various purposes. `RouteTableRanges` specifies a set of table index ranges that Calico should use. Deprecates `RouteTableRange`, overrides `RouteTableRange`. [Default: `"1-250"`] | `<min>-<max>,<min>-<max>,...` |
-| `RouteSyncDisabled` | `FELIX_ROUTESYNCDISABLED` | Set to `true` to disable Calico programming routes to local workloads. [Default: `false`] | boolean |
-| `VXLANEnabled` | `FELIX_VXLANENABLED` | Optional, you shouldn't need to change this setting as Felix calculates if VXLAN should be enabled based on the existing IP Pools. When set, this overrides whether Felix should create the VXLAN tunnel device for VXLAN networking. [Default: unset] | optional boolean |
-| `VXLANMTU` | `FELIX_VXLANMTU` | The MTU to set on the IPv4 VXLAN tunnel device. Zero value means auto-detect. Also controls NodePort MTU when eBPF enabled. See [Configuring MTU](../../../../networking/configuring/mtu.mdx) [Default: `0`] | int |
-| `VXLANMTUV6` | `FELIX_VXLANMTUV6` | The MTU to set on the IPv6 VXLAN tunnel device. Zero value means auto-detect. Also controls NodePort MTU when eBPF enabled. See [Configuring MTU](../../../../networking/configuring/mtu.mdx) [Default: `0`] | int |
-| `VXLANPort` | `FELIX_VXLANPORT` | The UDP port to use for VXLAN. [Default: `4789`] | int |
-| `VXLANTunnelMACAddr` | | MAC address of the IPv4 VXLAN tunnel. This is system configured and should not be updated manually. | string |
-| `VXLANVNI` | `FELIX_VXLANVNI` | The virtual network ID to use for VXLAN. [Default: `4096`] | int |
-| `AllowVXLANPacketsFromWorkloads` | `FELIX_ALLOWVXLANPACKETSFROMWORKLOADS` | Set to `true` to allow VXLAN encapsulated traffic from workloads. [Default: `false`] | boolean |
-| `AllowIPIPPacketsFromWorkloads` | `FELIX_ALLOWIPIPPACKETSFROMWORKLOADS` | Set to `true` to allow IPIP encapsulated traffic from workloads. [Default: `false`] | boolean |
-| `TyphaAddr` | `FELIX_TYPHAADDR` | IPv4 address at which Felix should connect to Typha. [Default: none] | string |
-| `TyphaK8sServiceName` | `FELIX_TYPHAK8SSERVICENAME` | Name of the Typha Kubernetes service | string |
-| `Ipv6Support` | `FELIX_IPV6SUPPORT` | Enable $[prodname] networking and security for IPv6 traffic as well as for IPv4. | boolean |
-| `RouteSource` | `FELIX_ROUTESOURCE` | Where Felix gets its routing information from for VXLAN and the BPF dataplane. The CalicoIPAM setting is more efficient because it supports route aggregation, but it only works when Calico's IPAM or host-local IPAM is in use. Use the WorkloadIPs setting if you are using Calico's VXLAN or BPF dataplane and not using Calico IPAM or host-local IPAM. [Default: "CalicoIPAM"] | 'CalicoIPAM' or 'WorkloadIPs' |
-| `mtuIfacePattern` | `FELIX_MTUIFACEPATTERN` | Pattern used to discover the host's interface for MTU auto-detection. [Default: `^((en\|wl\|ww\|sl\|ib)[opsvx].*\|(eth\|wlan\|wwan).*)`] | regex |
-| `TPROXYMode` | `FELIX_TPROXYMODE` | Sets transparent proxying mode. [Default: "Disabled"] | 'Disabled', 'Enabled' |
-| `TPROXYPort` | `FELIX_TPROXYPORT` | The local port that proxied traffic is sent to. [Default: `16001`] | int |
-| `FeatureDetectOverride` | `FELIX_FEATUREDETECTOVERRIDE` | Used to override feature detection. Values are specified in a comma-separated list with no spaces, for example: "SNATFullyRandom=true,MASQFullyRandom=false,RestoreSupportsLock=true,IPIPDeviceIsL3=true". "true" or "false" will force the feature on or off; empty or omitted values are auto-detected. [Default: `""`] | string |
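-
-For example, a minimal sketch that enables Felix's health and Prometheus metrics endpoints via environment variables (values are illustrative; the same parameters can also be set in the `FelixConfiguration` resource):
-
-```
-FELIX_HEALTHENABLED=true
-FELIX_HEALTHPORT=9099
-FELIX_PROMETHEUSMETRICSENABLED=true
-FELIX_PROMETHEUSMETRICSPORT=9091
-```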
-
-### etcd datastore configuration
-
-| Configuration parameter | Environment variable | Description | Schema |
-| ----------------------- | --------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------- |
-| `EtcdCaFile` | `FELIX_ETCDCAFILE` | Path to the file containing the root certificate of the certificate authority (CA) that issued the etcd server certificate. Configures Felix to trust the CA that signed the root certificate. The file may contain multiple root certificates, causing Felix to trust each of the CAs included. To disable authentication of the server by Felix, set the value to `none`. [Default: `/etc/ssl/certs/ca-certificates.crt`] | string |
-| `EtcdCertFile` | `FELIX_ETCDCERTFILE` | Path to the file containing the client certificate issued to Felix. Enables Felix to participate in mutual TLS authentication and identify itself to the etcd server. Example: `/etc/felix/cert.pem` (optional) | string |
-| `EtcdEndpoints` | `FELIX_ETCDENDPOINTS` | Comma-delimited list of etcd endpoints to connect to. Example: `http://127.0.0.1:2379,http://127.0.0.2:2379`. | `<scheme>://<IP>:<port>` |
-| `EtcdKeyFile` | `FELIX_ETCDKEYFILE` | Path to the file containing the private key matching Felix's client certificate. Enables Felix to participate in mutual TLS authentication and identify itself to the etcd server. Example: `/etc/felix/key.pem` (optional) | string |
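-
-A minimal sketch of these parameters as environment variables with mutual TLS (the endpoint and file paths are illustrative only):
-
-```
-FELIX_DATASTORETYPE=etcdv3
-FELIX_ETCDENDPOINTS=https://10.0.0.1:2379
-FELIX_ETCDCAFILE=/etc/calico/certs/ca.pem
-FELIX_ETCDCERTFILE=/etc/calico/certs/cert.pem
-FELIX_ETCDKEYFILE=/etc/calico/certs/key.pem
-```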
-
-### Kubernetes API datastore configuration
-
-The Kubernetes API datastore driver reads its configuration from Kubernetes-provided environment variables.
-
-### iptables dataplane configuration
-
-| Configuration parameter | Environment variable | Description | Schema |
-| ------------------------------------ | ------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------- |
-| `ChainInsertMode` | `FELIX_CHAININSERTMODE` | Controls whether Felix hooks the kernel's top-level iptables chains by inserting a rule at the top of the chain or by appending a rule at the bottom. `Insert` is the safe default since it prevents $[prodname]'s rules from being bypassed. If you switch to `Append` mode, be sure that the other rules in the chains signal acceptance by falling through to the $[prodname] rules, otherwise the $[prodname] policy will be bypassed. [Default: `Insert`] | `Insert`, `Append` |
-| `DefaultEndpointToHostAction` | `FELIX_DEFAULTENDPOINTTOHOSTACTION` | This parameter controls what happens to traffic that goes from a workload endpoint to the host itself (after the traffic hits the endpoint egress policy). By default $[prodname] blocks traffic from workload endpoints to the host itself with an iptables `Drop` action. If you want to allow some or all traffic from endpoint to host, set this parameter to `Return` or `Accept`. Use `Return` if you have your own rules in the iptables "INPUT" chain; $[prodname] will insert its rules at the top of that chain, then `Return` packets to the "INPUT" chain once it has completed processing workload endpoint egress policy. Use `Accept` to unconditionally accept packets from workloads after processing workload endpoint egress policy. [Default: `Drop`] | `Drop`, `Return`, `Accept` |
-| `GenericXDPEnabled` | `FELIX_GENERICXDPENABLED` | When enabled, Felix can fallback to the non-optimized `generic` XDP mode. This should only be used for testing since it doesn't improve performance over the non-XDP mode. [Default: `false`] | boolean |
-| `InterfaceExclude` | `FELIX_INTERFACEEXCLUDE` | A comma-separated list of interface names that should be excluded when Felix is resolving host endpoints. The default value ensures that Felix ignores Kubernetes' internal `kube-ipvs0` device. If you want to exclude multiple interface names using a single value, the list supports regular expressions. For regular expressions you must wrap the value with `/`. For example having values `/^kube/,veth1` will exclude all interfaces that begin with `kube` and also the interface `veth1`. [Default: `kube-ipvs0`] | string |
-| `IpsetsRefreshInterval` | `FELIX_IPSETSREFRESHINTERVAL` | Period, in seconds, at which Felix re-checks the IP sets in the dataplane to ensure that no other process has accidentally broken $[prodname]'s rules. Set to 0 to disable IP sets refresh. [Default: `10`] | int |
-| `IptablesBackend` | `FELIX_IPTABLESBACKEND` | This parameter controls which variant of iptables Felix uses. Set this to `Auto` for auto detection of the backend. If a specific backend is needed then use `nft` for hosts using a netfilter backend or `Legacy` for others. [Default: `Auto`] | `Legacy`, `NFT`, `Auto` |
-| `IptablesFilterAllowAction` | `FELIX_IPTABLESFILTERALLOWACTION` | This parameter controls what happens to traffic that is allowed by a Felix policy chain in the iptables filter table (i.e., a normal policy chain). The default will immediately `Accept` the traffic. Use `Return` to send the traffic back up to the system chains for further processing. [Default: `Accept`] | `Accept`, `Return` |
-| `IptablesLockFilePath` | `FELIX_IPTABLESLOCKFILEPATH` | _Deprecated:_ For iptables versions prior to v1.6.2, location of the iptables lock file (later versions of iptables always use value "/run/xtables.lock"). You may need to change this if the lock file is not in its standard location (for example if you have mapped it into Felix's container at a different path). [Default: `/run/xtables.lock`] | string |
-| `IptablesLockProbeIntervalMillis` | `FELIX_IPTABLESLOCKPROBEINTERVALMILLIS` | Time, in milliseconds, that Felix will wait between attempts to acquire the iptables lock if it is not available. Lower values make Felix more responsive when the lock is contended, but use more CPU. [Default: `50`] | int |
-| `IptablesLockTimeoutSecs` | `FELIX_IPTABLESLOCKTIMEOUTSECS` | Time, in seconds, that Felix will wait for the iptables lock. Versions of iptables prior to v1.6.2 support disabling the iptables lock by setting this value to 0; v1.6.2 and above do not so Felix will default to 10s if a non-positive number is used. To use this feature, Felix must share the iptables lock file with all other processes that also take the lock. When running Felix inside a container, this typically requires the file /run/xtables.lock on the host to be mounted into the `$[nodecontainer]` or `calico/felix` container. [Default: `0` disabled for iptables <v1.6.2 or 10s for later versions] | int |
-| `IptablesMangleAllowAction` | `FELIX_IPTABLESMANGLEALLOWACTION` | This parameter controls what happens to traffic that is allowed by a Felix policy chain in the iptables mangle table (i.e., a pre-DNAT policy chain). The default will immediately `Accept` the traffic. Use `Return` to send the traffic back up to the system chains for further processing. [Default: `Accept`] | `Accept`, `Return` |
-| `IptablesMarkMask` | `FELIX_IPTABLESMARKMASK` | Mask that Felix selects its IPTables Mark bits from. Should be a 32 bit hexadecimal number with at least 8 bits set, none of which clash with any other mark bits in use on the system. When using $[prodname] with Kubernetes' `kube-proxy` in IPVS mode, [we recommend allowing at least 16 bits](#ipvs-bits). [Default: `0xffff0000`] | netmask |
-| `IptablesNATOutgoingInterfaceFilter` | `FELIX_IPTABLESNATOUTGOINGINTERFACEFILTER` | This parameter can be used to limit the host interfaces on which Calico will apply SNAT to traffic leaving a Calico IPAM pool with "NAT outgoing" enabled. This can be useful if you have a main data interface, where traffic should be SNATted and a secondary device (such as the docker bridge) which is local to the host and doesn't require SNAT. This parameter uses the iptables interface matching syntax, which allows `+` as a wildcard. Most users will not need to set this. Example: if your data interfaces are eth0 and eth1 and you want to exclude the docker bridge, you could set this to `eth+` | string |
-| `IptablesPostWriteCheckIntervalSecs` | `FELIX_IPTABLESPOSTWRITECHECKINTERVALSECS` | Period, in seconds, after Felix has done a write to the dataplane that it schedules an extra read back to check the write was not clobbered by another process. This should only occur if another application on the system doesn't respect the iptables lock. [Default: `1`] | int |
-| `IptablesRefreshInterval` | `FELIX_IPTABLESREFRESHINTERVAL` | Period, in seconds, at which Felix re-checks all iptables state to ensure that no other process has accidentally broken $[prodname]'s rules. Set to 0 to disable iptables refresh. [Default: `90`] | int |
-| `LogPrefix` | `FELIX_LOGPREFIX` | The log prefix that Felix uses when rendering LOG rules. [Default: `calico-packet`] | string |
-| `MaxIpsetSize` | `FELIX_MAXIPSETSIZE` | Maximum size for the ipsets used by Felix. Should be set to a number that is greater than the maximum number of IP addresses that are ever expected in a selector. [Default: `1048576`] | int |
-| `NATPortRange` | `FELIX_NATPORTRANGE` | Port range used by iptables for port mapping when doing outgoing NAT. (Example: `32768:65000`). [Default: iptables maps source ports below 512 to other ports below 512: those between 512 and 1023 inclusive will be mapped to ports below 1024, and other ports will be mapped to 1024 or above. Where possible, no port alteration will occur.] | string |
-| `NATOutgoingAddress` | `FELIX_NATOUTGOINGADDRESS` | Source address used by iptables for an SNAT rule when doing outgoing NAT. [Default: an iptables `MASQUERADE` rule is used for outgoing NAT which will use the address on the interface traffic is leaving on.] | `<IPv4 address>` |
-| `NetlinkTimeoutSecs` | `FELIX_NETLINKTIMEOUTSECS` | Time, in seconds, that Felix will wait for netlink (i.e. routing table list/update) operations to complete before giving up and retrying. [Default: `10`] | float |
-| `RouteRefreshInterval` | `FELIX_ROUTEREFRESHINTERVAL` | Period, in seconds, at which Felix re-checks the routes in the dataplane to ensure that no other process has accidentally broken $[prodname]'s rules. Set to 0 to disable route refresh. [Default: `90`] | int |
-| `ServiceLoopPrevention` | `FELIX_SERVICELOOPPREVENTION` | When [service IP advertisement is enabled](../../../../networking/configuring/advertise-service-ips.mdx), prevent routing loops to service IPs that are not in use, by dropping or rejecting packets that do not get DNAT'd by kube-proxy. Unless set to "Disabled", in which case such routing loops continue to be allowed. [Default: `Drop`] | `Drop`, `Reject`, `Disabled` |
-| `WorkloadSourceSpoofing` | `FELIX_WORKLOADSOURCESPOOFING` | Controls whether pods can enable source IP address spoofing with the `cni.projectcalico.org/allowedSourcePrefixes` annotation. When set to `Any`, pods can use this annotation to send packets from any IP address. [Default: `Disabled`] | `Any`, `Disabled` |
-| `XDPRefreshInterval` | `FELIX_XDPREFRESHINTERVAL` | Period, in seconds, at which Felix re-checks the XDP state in the dataplane to ensure that no other process has accidentally broken $[prodname]'s rules. Set to 0 to disable XDP refresh. [Default: `90`] | int |
-| `XDPEnabled` | `FELIX_XDPENABLED` | Enable XDP acceleration for host endpoint policies. [Default: `true`] | boolean |
-
-### eBPF dataplane configuration
-
-eBPF dataplane mode uses the Linux Kernel's eBPF virtual machine to implement networking and policy instead of iptables. When BPFEnabled is set to `true`, Felix will:
-
-- Require a v5.3 Linux kernel.
-- Implement policy with eBPF programs instead of iptables.
-- Activate its embedded implementation of `kube-proxy` to implement Kubernetes service load balancing.
-- Disable support for IPv6.
-
-See [Enable the eBPF dataplane](../../../../operations/ebpf/enabling-ebpf.mdx) for step-by-step instructions to enable this feature.
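-
-For clusters where the `FelixConfiguration` resource is edited directly (operator-managed installs should follow the linked guide instead), a minimal sketch of toggling the flag with kubectl, assuming the FelixConfiguration API is reachable through kubectl:
-
-```bash
-kubectl patch felixconfiguration default --type merge -p '{"spec":{"bpfEnabled":true}}'
-```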
-
-| Configuration parameter / Environment variable | Description | Schema | Default |
-| ----------------------------------------------- | ----------- | ------ | ------- |
-| BPFEnabled / FELIX_BPFENABLED | Enable eBPF dataplane mode. eBPF mode has a number of limitations, see the [HOWTO guide](../../../../operations/ebpf/enabling-ebpf.mdx). | true, false | false |
-| BPFDisableUnprivileged / FELIX_BPFDISABLEUNPRIVILEGED | If true, Felix sets the kernel.unprivileged_bpf_disabled sysctl to disable unprivileged use of BPF. This ensures that unprivileged users cannot access Calico's BPF maps and cannot insert their own BPF programs to interfere with the ones that $[prodname] installs. | true, false | true |
-| BPFLogLevel / FELIX_BPFLOGLEVEL | The log level used by the BPF programs. The logs are emitted to the BPF trace pipe, accessible with the command `tc exec BPF debug`. | Off,Info,Debug | Off |
-| BPFDataIfacePattern / FELIX_BPFDATAIFACEPATTERN | Controls which interfaces Felix should attach BPF programs to in order to catch traffic to/from the external network. This needs to match the interfaces that Calico workload traffic flows over as well as any interfaces that handle incoming traffic to NodePorts and services from outside the cluster. It should not match the workload interfaces (usually named cali...). | regular expression | `^(en[opsvx].*\|eth.*\|tunl0$\|wireguard.cali$)` |
-| BPFConnectTimeLoadBalancingEnabled / FELIX_BPFCONNECTTIMELOADBALANCINGENABLED | Controls whether Felix installs the connect-time load balancer. In the current release, the connect-time load balancer is required for the host to reach kubernetes services. | true,false | true |
-| BPFExternalServiceMode / FELIX_BPFEXTERNALSERVICEMODE | Controls how traffic from outside the cluster to NodePorts and ClusterIPs is handled. In Tunnel mode, packet is tunneled from the ingress host to the host with the backing pod and back again. In DSR mode, traffic is tunneled to the host with the backing pod and then returned directly; this requires a network that allows direct return. | Tunnel,DSR | Tunnel |
-| BPFExtToServiceConnmark / FELIX_BPFEXTTOSERVICECONNMARK | Controls a 32-bit mark that is set on connections from an external client to a local service. This mark allows us to control how packets of that connection are routed within the host and how the routing is interpreted by the RPF check. | int | 0 |
-| BPFKubeProxyIptablesCleanupEnabled / FELIX_BPFKUBEPROXYIPTABLESCLEANUPENABLED | Controls whether Felix will clean up the iptables rules created by the Kubernetes `kube-proxy`; should only be enabled if `kube-proxy` is not running. | true,false | true |
-| BPFKubeProxyMinSyncPeriod / FELIX_BPFKUBEPROXYMINSYNCPERIOD | Controls the minimum time between dataplane updates for Felix's embedded `kube-proxy` implementation. | seconds | `1` |
-| BPFKubeProxyEndpointSlicesEnabled / FELIX_BPFKUBEPROXYENDPOINTSLICESENABLED | Controls whether Felix's embedded kube-proxy derives its services from Kubernetes' EndpointSlices resources. Using EndpointSlices is more efficient but it requires EndpointSlices support to be enabled at the Kubernetes API server. | true,false | false |
-| BPFMapSizeConntrack / FELIX_BPFMAPSIZECONNTRACK | Controls the size of the conntrack map. This map must be large enough to hold an entry for each active connection. Warning: changing the size of the conntrack map can cause disruption. | int | 512000 |
-| BPFMapSizeNATFrontend / FELIX_BPFMAPSIZENATFRONTEND | Controls the size of the NAT frontend map. FrontendMap should be large enough to hold an entry for each nodeport, external IP and each port in each service. | int | 65536 |
-| BPFMapSizeNATBackend / FELIX_BPFMAPSIZENATBACKEND | Controls the size of the NAT backend map. This is the total number of endpoints, which is typically larger than the number of services. | int | 262144 |
-| BPFMapSizeNATAffinity / FELIX_BPFMAPSIZENATAFFINITY | Controls the size of the NAT affinity map. | int | 65536 |
-| BPFMapSizeIPSets / FELIX_BPFMAPSIZEIPSETS | Controls the size of the IPSets map. The IP sets map must be large enough to hold an entry for each endpoint matched by every selector in the source/destination matches in network policy. Selectors such as "all()" can result in large numbers of entries (one entry per endpoint in that case). | int | 1048576 |
-| BPFMapSizeRoute / FELIX_BPFMAPSIZEROUTE | Controls the size of the route map. The routes map should be large enough to hold one entry per workload and a handful of entries per host (enough to cover its own IPs and tunnel IPs). | int | 262144 |
-| BPFHostConntrackBypass / FELIX_BPFHOSTCONNTRACKBYPASS | Controls whether to bypass Linux conntrack in BPF mode for workloads and services. | true,false | true |
-| BPFPolicyDebugEnabled / FELIX_BPFPOLICYDEBUGENABLED | In eBPF dataplane mode, Felix records detailed information about the BPF policy programs, which can be examined with the calico-bpf command-line tool. | true, false | true |
-
-### Windows-specific configuration
-
-| Configuration parameter | Environment variable | Description | Schema | Default |
-| ------------------------------- | ------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | ------------------------------------------- |
-| windowsFlowLogsFileDirectory | FELIX_WINDOWSFLOWLOGSFILEDIRECTORY | Set the directory where flow logs files are stored on Windows nodes. This parameter only takes effect when `flowLogsFileEnabled` is set to `true`. | string | `c:\\TigeraCalico\\flowlogs` |
-| windowsFlowLogsPositionFilePath | FELIX_WINDOWSFLOWLOGSPOSITIONFILEPATH | Specify the position of the external pipeline that reads flow logs on Windows nodes. This parameter only takes effect when `FlowLogsDynamicAggregationEnabled` is set to `true`. | string | `c:\\TigeraCalico\\flowlogs\\flows.log.pos` |
-| windowsStatsDumpFilePath | FELIX_WINDOWSTATSDUMPFILEPATH | Specify the position of the file used for dumping flow log statistics on Windows nodes. Note this is an internal setting that users shouldn't need to modify. | string | `c:\\TigeraCalico\\stats\\dump` |
-| WindowsDNSCacheFile | FELIX_WINDOWSDNSCACHEFILE | Specify the name of the file that Felix uses to preserve learned DNS information when restarting. | string | `c:\\TigeraCalico\\felix-dns-cache.txt` |
-| WindowsDNSExtraTTL | FELIX_WINDOWSDNSEXTRATTL | Specify extra time in seconds to keep IPs and alias names that are learned from DNS, in addition to each name or IP's advertised TTL. | seconds | `120` |
-
-### Kubernetes-specific configuration
-
-| Configuration parameter | Environment variable | Description | Schema |
-| ----------------------- | -------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------ |
-| `KubeNodePortRanges` | `FELIX_KUBENODEPORTRANGES` | A list of port ranges that Felix should treat as Kubernetes node ports. Only when `kube-proxy` is configured to use IPVS mode: Felix assumes that traffic arriving at the host on one of these ports will ultimately be forwarded instead of being terminated by a host process. [Default: `30000:32767`] | Comma-delimited list of `<min>:<max>` port ranges or single ports. |
-| `KubeMasqueradeBit` | `FELIX_KUBEMASQUERADEBIT` | KubeMasqueradeBit should be set to the same value as --iptables-masquerade-bit of kube-proxy when TPROXY is used. This defaults to the corresponding kube-proxy default value so it only needs to change if kube-proxy is using a non-standard setting. Must be within the range of 0-31. OpenShift sets the bit to 0 by default. [Default: 14] | integer |
-
-:::note
-
- When using $[prodname] with Kubernetes' `kube-proxy` in IPVS mode, $[prodname] uses additional
-iptables mark bits to store an ID for each local $[prodname] endpoint. For example, the default `IptablesMarkMask` value,
-`0xffff0000` gives $[prodname] 16 bits, up to 6 of which are used for internal purposes, leaving 10 bits for endpoint
-IDs. 10 bits is enough for 1024 different values and $[prodname] uses 2 of those for internal purposes, leaving enough
-for 1022 endpoints on the host.
-
-:::
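-
-As a sketch of the default in environment-variable form (16 mark bits reserved, which the note above shows is enough for 1022 endpoints per host when `kube-proxy` runs in IPVS mode):
-
-```
-FELIX_IPTABLESMARKMASK=0xffff0000
-```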
-
-### Bare metal specific configuration
-
-| Configuration parameter | Environment variable | Description | Schema |
-| ----------------------- | ----------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ |
-| `InterfacePrefix` | `FELIX_INTERFACEPREFIX` | The interface name prefix that identifies workload endpoints and so distinguishes them from host endpoint interfaces. Accepts more than one interface name prefix in comma-delimited format, e.g., `tap,cali`. Note: in environments other than bare metal, the orchestrators configure this appropriately. For example our Kubernetes and Docker integrations set the `cali` value, and our OpenStack integration sets the `tap` value. [Default: `cali`] | string |
-
-### $[prodname] specific configuration
-
-| Setting | Environment variable | Default | Meaning |
-| --------------------------------------- | --------------------------------------------- | ------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `DropActionOverride` | `FELIX_DROPACTIONOVERRIDE` | `Drop` | How to treat packets that are disallowed by the current $[prodname] policy. For more detail please see below. |
-| `LogDropActionOverride` | `FELIX_LOGDROPACTIONOVERRIDE` | `false` | Set to `true` to add the `DropActionOverride` to the syslog entries. For more detail please see below. |
-| `PrometheusReporterEnabled` | `FELIX_PROMETHEUSREPORTERENABLED` | `false` | Set to `true` to enable Prometheus reporting of denied packet metrics. For more detail please see below. |
-| `PrometheusReporterPort` | `FELIX_PROMETHEUSREPORTERPORT` | `9092` | The TCP port on which to report denied packet metrics. |
-| `PrometheusReporterCertFile` | `FELIX_PROMETHEUSREPORTERCERTFILE` | None | Certificate for encrypting Prometheus denied packet metrics. |
-| `PrometheusReporterKeyFile` | `FELIX_PROMETHEUSREPORTERKEYFILE` | None | Private key for encrypting Prometheus denied packet metrics. |
-| `PrometheusReporterCAFile` | `FELIX_PROMETHEUSREPORTERCAFILE` | None | Trusted CA file for clients attempting to read Prometheus denied packet metrics. |
-| `PrometheusMetricsCertFile` | `FELIX_PROMETHEUSMETRICSCERTFILE` | None | Certificate for encrypting general Felix Prometheus metrics. |
-| `PrometheusMetricsKeyFile` | `FELIX_PROMETHEUSMETRICSKEYFILE` | None | Private key for encrypting general Felix Prometheus metrics. |
-| `PrometheusMetricsCAFile` | `FELIX_PROMETHEUSMETRICSCAFILE` | None | Trusted CA file for clients attempting to read general Felix Prometheus metrics. |
-| `IPSecMode` | `FELIX_IPSECMODE` | None | Controls which mode IPsec operates in. The only supported value is `PSK`. An empty value means IPsec is not enabled. |
-| `IPSecAllowUnsecuredTraffic` | `FELIX_IPSECALLOWUNSECUREDTRAFFIC` | `false` | When set to false, only IPsec-protected traffic will be allowed on the packet paths where IPsec is supported. When set to true, IPsec will be used but non-IPsec traffic will be accepted. In general, setting this to `true` is less safe since it allows an attacker to inject packets. However, it is useful when transitioning from non-IPsec to IPsec since it allows traffic to flow while the cluster negotiates the IPsec mesh. |
-| `IPSecIKEAlgorithm` | `FELIX_IPSECIKEALGORITHM` | `aes128gcm16-prfsha256-ecp256` | IPsec IKE algorithm. Default is NIST suite B recommendation. |
-| `IPSecESPAlgorithm` | `FELIX_IPSECESPALGORITHM` | `aes128gcm16-ecp256` | IPsec ESP algorithm. Default is NIST suite B recommendation. |
-| `IPSecLogLevel` | `FELIX_IPSECLOGLEVEL` | `Info` | Controls log level for IPsec components. Set to `None` for no logging. Other valid values are `Notice`, `Info`, `Debug` and `Verbose`. |
-| `IPSecPSKFile` | `FELIX_IPSECPSKFILE` | None | The path to the pre-shared key file for IPsec. |
-| `FlowLogsFileEnabled` | `FELIX_FLOWLOGSFILEENABLED` | `false` | Set to `true`, enables flow logs. If set to `false` no flow logging will occur. Flow logs are written to a file `flows.log` and sent to Elasticsearch. The location of this file can be configured using the `FlowLogsFileDirectory` field. File rotation settings for this `flows.log` file can be configured using the fields `FlowLogsFileMaxFiles` and `FlowLogsFileMaxFileSizeMB`. Note that flow log exports to Elasticsearch are dependent on flow logs getting written to this file. Setting this parameter to `false` will disable flow logs. |
-| `FlowLogsFileIncludeLabels` | `FELIX_FLOWLOGSFILEINCLUDELABELS` | `false` | Set to `true` to include endpoint label information in flow logs. This parameter only takes effect when `FlowLogsFileEnabled` is set to `true`. |
-| `FlowLogsFileIncludePolicies` | `FELIX_FLOWLOGSFILEINCLUDEPOLICIES` | `false` | Set to `true` to include policy match information in flow logs. This parameter only takes effect when `FlowLogsFileEnabled` is set to `true`. |
-| `FlowLogsFileIncludeService` | `FELIX_FLOWLOGSFILEINCLUDESERVICE` | `false` | Set to `true` to include destination service information in flow logs. The service information is derived from pre-DNAT destination IP and is therefore only available on the node where DNAT occurs. This parameter only takes effect when `FlowLogsFileEnabled` is set to `true`. |
-| `FlowLogsFileDirectory` | `FELIX_FLOWLOGSFILEDIRECTORY` | `/var/log/calico/flowlogs` | The directory where flow logs files are stored. This parameter only takes effect when `FlowLogsFileEnabled` is set to `true`. |
-| `FlowLogsFileMaxFiles` | `FELIX_FLOWLOGSFILEMAXFILES` | `5` | The number of files to keep when rotating flow log files. This parameter only takes effect when `FlowLogsFileEnabled` is set to `true`. |
-| `FlowLogsFileMaxFileSizeMB` | `FELIX_FLOWLOGSFILEMAXFILESIZEMB` | `100` | The max size in MB of flow logs files before rotation. This parameter only takes effect when `FlowLogsFileEnabled` is set to `true`. |
-| `FlowLogsFlushInterval` | `FELIX_FLOWLOGSFLUSHINTERVAL` | `300` | The period, in seconds, at which Felix exports the flow logs. |
-| `FlowLogsEnableNetworkSets` | `FELIX_FLOWLOGSENABLENETWORKSETS` | `false` | Whether to specify the network set a flow log originates from. |
-| `FlowLogsFileAggregationKindForAllowed` | `FELIX_FLOWLOGSFILEAGGREGATIONKINDFORALLOWED` | `2` | How much to aggregate the flow logs sent to Elasticsearch for allowed traffic. Bear in mind that changing this value may have a dramatic impact on the volume of flow logs sent to Elasticsearch. `0` means no aggregation, `1` means aggregate all flows that share a source port on each node, `2` means aggregate all flows that share source ports or are from the same ReplicaSet, and `3` means aggregate all flows that share destination and source ports and are from the same ReplicaSet. |
-| `FlowLogsFileAggregationKindForDenied` | `FELIX_FLOWLOGSFILEAGGREGATIONKINDFORDENIED` | `1` | How much to aggregate the flow logs sent to Elasticsearch for denied traffic. Bear in mind that changing this value may have a dramatic impact on the volume of flow logs sent to Elasticsearch. `0` means no aggregation, `1` means aggregate all flows that share a source port on each node, `2` means aggregate all flows that share source ports or are from the same ReplicaSet, and `3` means aggregate all flows that share destination and source ports and are from the same ReplicaSet. |
-| `FlowLogsDynamicAggregationEnabled` | `FELIX_FLOWLOGSDYNAMICAGGREGATIONENABLED` | `false` | Enable dynamic aggregation for flow logs. This will increase aggregation up to the maximum level allowed (which is 3 and means aggregate all flows that share destination and source ports and are from the same ReplicaSet) when it detects the pipeline for reading flow logs is stalled. It will revert to its initial aggregation level when this condition changes. The initial aggregation level can be specified using `FlowLogsFileAggregationKindForAllowed` and `FlowLogsFileAggregationKindForDenied`. If these values are not specified, default values of `2` and `1` will be used. |
-| `FlowLogsPositionFilePath` | `FELIX_FLOWLOGSPOSITIONPATH` | `/var/log/calico/flows.log.pos` | Default path of the position file, which is used to read the current state of the flow logs pipeline. This parameter is used only when `FlowLogsDynamicAggregationEnabled` is set to `true`. |
-| `FlowLogsAggregationThresholdBytes` | `FELIX_FLOWLOGSAGGREGATIONTHRESHOLDBYTES` | `8192` | Threshold that determines how far behind the flow logs pipeline can get before aggregation kicks in. A difference of 8192 bytes increases aggregation by one level, while a difference of 16384 bytes increases it by two levels. This parameter is used only when `FlowLogsDynamicAggregationEnabled` is set to `true`. |
-| `FlowLogsCollectProcessInfo` | `FELIX_FLOWLOGSCOLLECTPROCESSINFO` | `true` | If enabled, Felix will load the kprobe BPF programs to collect process info. |
-| `FlowLogsCollectTcpStats` | `FELIX_FLOWLOGSCOLLECTTCPSTATS` | `true` | If enabled, Felix will collect TCP socket stats using BPF. This requires a recent kernel with BPF support. |
-| `FlowLogsCollectProcessPath` | `FELIX_FLOWLOGSCOLLECTPROCESSPATH` | `true` | If enabled, along with FlowLogsCollectProcessInfo, each flow log will contain the full path of the process executable and the arguments with which the executable was invoked. If the path or arguments cannot be determined, Felix will fall back to using task names and the arguments will be empty. For full functionality, this feature should be enabled via the operator; see [Enabling process path](../../../../visibility/elastic/flow/processpath.mdx). |
-| `FlowLogsFilePerFlowProcessLimit` | `FELIX_FLOWLOGSFILEPERFLOWPROCESSLIMIT` | `2` | Specify the maximum number of flow log entries with distinct process information beyond which process information will be aggregated. |
-| `FlowLogsFilePerFlowProcessArgsLimit` | `FELIX_FLOWLOGSFILEPERFLOWPROCESSARGSLIMIT` | `5` | Specify the maximum number of arguments beyond which the process arguments will be aggregated. |
-| `DNSCacheFile` | `FELIX_DNSCACHEFILE` | `/var/run/calico/felix-dns-cache.txt` | The name of the file that Felix uses to preserve learned DNS information when restarting. |
-| `DNSCacheSaveInterval` | `FELIX_DNSCACHESAVEINTERVAL` | `60` | The periodic interval at which Felix saves learned DNS information to the cache file. |
-| `DNSCacheEpoch` | `FELIX_DNSCACHEEPOCH` | `0` | An arbitrary number that can be changed, at runtime, to tell Felix to discard all its learned DNS information. |
-| `DNSExtraTTL` | `FELIX_DNSEXTRATTL` | `0` | Extra time, in seconds, to keep IPs and alias names that are learned from DNS, in addition to each name or IP's advertised TTL. |
-| `DNSTrustedServers` | `FELIX_DNSTRUSTEDSERVERS` | `k8s-service:kube-dns` | The DNS servers that Felix should trust. Each entry here must be `<IP>[:<port>]` - indicating an explicit DNS server IP - or `k8s-service:[<namespace>/]<service-name>[:port]` - indicating a Kubernetes DNS service. `<port>` defaults to the first service port, or 53 for an IP, and `<namespace>` to `kube-system`. An IPv6 address with a port must use the square brackets convention, for example `[fd00:83a6::12]:5353`. Note that Felix (calico-node) will need RBAC permission to read the details of each service specified by a `k8s-service:...` form. |
-| `DNSLogsFileEnabled` | `FELIX_DNSLOGSFILEENABLED` | `false` | Set to `true`, enables DNS logs. If set to `false` no DNS logging will occur. DNS logs are written to a file `dns.log` and sent to Elasticsearch. The location of this file can be configured using the `DNSLogsFileDirectory` field. File rotation settings for this `dns.log` file can be configured using the fields `DNSLogsFileMaxFiles` and `DNSLogsFileMaxFileSizeMB`. Note that DNS log exports to Elasticsearch are dependent on DNS logs getting written to this file. Setting this parameter to `false` will disable DNS logs. |
-| `DNSLogsFileDirectory` | `FELIX_DNSLOGSFILEDIRECTORY` | `/var/log/calico/dnslogs` | The directory where DNS logs files are stored. This parameter only takes effect when `DNSLogsFileEnabled` is `true`. |
-| `DNSLogsFileMaxFiles` | `FELIX_DNSLOGSFILEMAXFILES` | `5` | The number of files to keep when rotating DNS log files. This parameter only takes effect when `DNSLogsFileEnabled` is `true`. |
-| `DNSLogsFileMaxFileSizeMB` | `FELIX_DNSLOGSFILEMAXFILESIZEMB` | `100` | The max size in MB of DNS log files before rotation. This parameter only takes effect when `DNSLogsFileEnabled` is `true`. |
-| `DNSLogsFlushInterval` | `FELIX_DNSLOGSFLUSHINTERVAL` | `300` | The period, in seconds, at which Felix exports DNS logs. |
-| `DNSLogsFileAggregationKind` | `FELIX_DNSLOGSFILEAGGREGATIONKIND` | `1` | How much to aggregate DNS logs. Bear in mind that changing this value may have a dramatic impact on the volume of DNS logs sent to Elasticsearch. `0` means no aggregation, `1` means aggregate similar DNS logs from workloads in the same ReplicaSet. |
-| `DNSLogsFileIncludeLabels` | `FELIX_DNSLOGSFILEINCLUDELABELS` | `true` | Whether to include client and server workload labels in DNS logs. |
-| `DNSLogsFilePerNodeLimit` | `FELIX_DNSLOGSFILEPERNODELIMIT` | `0` (no limit) | Limit on the number of DNS logs that can be emitted within each flush interval. When this limit has been reached, Felix counts the number of unloggable DNS responses within the flush interval, and emits a WARNING log with that count at the same time as it flushes the buffered DNS logs. |
-| `DNSLogsLatency` | `FELIX_DNSLOGSLATENCY` | `true` | Whether to include measurements of DNS request/response latency in each DNS log. |
-| `EgressIPSupport` | `FELIX_EGRESSIPSUPPORT` | `Disabled` | Defines three different support modes for egress gateway function. `Disabled` means egress gateways are not supported. `EnabledPerNamespace` means egress gateway function is enabled and can be configured on a per-namespace basis (but per-pod egress annotations are ignored). `EnabledPerNamespaceOrPerPod` means egress gateway function is enabled and can be configured per-namespace or per-pod (with per-pod egress annotations overriding namespace annotations). |
-| `EgressIPVXLANPort` | `FELIX_EGRESSIPVXLANPORT` | `4097` | Port to use for egress gateway VXLAN traffic. A value of `0` means "use the kernel default". |
-| `EgressIPVXLANVNI` | `FELIX_EGRESSIPVXLANVNI` | `4790` | Virtual network ID to use for egress gateway VXLAN traffic. A value of `0` means "use the kernel default". |
-| `EgressIPRoutingRulePriority` | `FELIX_EGRESSIPROUTINGRULEPRIORITY` | `100` | Priority value to use for the egress gateway routing rule. |
-| `L7LogsFileEnabled` | `FELIX_L7LOGSFILEENABLED` | `true` | Set to `true` to enable L7 logs; if set to `false`, no L7 logging will occur. L7 logs are written to a file `l7.log` and sent to Elasticsearch. The location of this file can be configured using the `L7LogsFileDirectory` field. File rotation settings for this `l7.log` file can be configured using the fields `L7LogsFileMaxFiles` and `L7LogsFileMaxFileSizeMB`. Note that L7 log exports to Elasticsearch depend on L7 logs being written to this file. |
-| `L7LogsFileDirectory` | `FELIX_L7LOGSFILEDIRECTORY` | `/var/log/calico/l7logs` | The directory where L7 log files are stored. This parameter only takes effect when `L7LogsFileEnabled` is `true`. |
-| `L7LogsFileMaxFiles` | `FELIX_L7LOGSFILEMAXFILES` | `5` | The number of files to keep when rotating L7 log files. This parameter only takes effect when `L7LogsFileEnabled` is `true`. |
-| `L7LogsFileMaxFileSizeMB` | `FELIX_L7LOGSFILEMAXFILESIZEMB` | `100` | The max size in MB of L7 log files before rotation. This parameter only takes effect when `L7LogsFileEnabled` is `true`. |
-| `L7LogsFlushInterval` | `FELIX_L7LOGSFLUSHINTERVAL` | `300` | The period, in seconds, at which Felix exports L7 logs. |
-| `L7LogsFileAggregationHTTPHeaderInfo` | `FELIX_L7LOGSFILEAGGREGATIONHTTPHEADERINFO` | `ExcludeL7HTTPHeaderInfo` | How to handle HTTP header information for aggregating L7 logs. Bear in mind that changing this value may have a dramatic impact on the volume of L7 logs sent to Elasticsearch. Possible values include `ExcludeL7HTTPHeaderInfo` and `IncludeL7HTTPHeaderInfo`. |
-| `L7LogsFileAggregationHTTPMethod` | `FELIX_L7LOGSFILEAGGREGATIONHTTPMETHOD` | `IncludeL7HTTPMethod` | How to handle HTTP method data for aggregating L7 logs. Bear in mind that changing this value may have a dramatic impact on the volume of L7 logs sent to Elasticsearch. Possible values include `ExcludeL7HTTPMethod` and `IncludeL7HTTPMethod`. |
-| `L7LogsFileAggregationServiceInfo` | `FELIX_L7LOGSFILEAGGREGATIONSERVICEINFO` | `IncludeL7ServiceInfo` | How to handle service information for aggregating L7 logs. Bear in mind that changing this value may have a dramatic impact on the volume of L7 logs sent to Elasticsearch. Possible values include `ExcludeL7ServiceInfo` and `IncludeL7ServiceInfo`. |
-| `L7LogsFileAggregationDestinationInfo` | `FELIX_L7LOGSFILEAGGREGATIONDESTINATIONINFO` | `IncludeL7DestinationInfo` | How to handle destination metadata for aggregating L7 logs. Bear in mind that changing this value may have a dramatic impact on the volume of L7 logs sent to Elasticsearch. Possible values include `ExcludeL7DestinationInfo` and `IncludeL7DestinationInfo`. |
-| `L7LogsFileAggregationSourceInfo` | `FELIX_L7LOGSFILEAGGREGATIONSOURCEINFO` | `IncludeL7SourceInfoNoPort` | How to handle source metadata for aggregating L7 logs. Bear in mind that changing this value may have a dramatic impact on the volume of L7 logs sent to Elasticsearch. Possible values include `ExcludeL7SourceInfo`, `IncludeL7SourceInfoNoPort`, and `IncludeL7SourceInfo`. |
-| `L7LogsFileAggregationResponseCode` | `FELIX_L7LOGSFILEAGGREGATIONRESPONSECODE` | `IncludeL7ResponseCode` | How to handle response code data for aggregating L7 logs. Bear in mind that changing this value may have a dramatic impact on the volume of L7 logs sent to Elasticsearch. Possible values include `ExcludeL7ResponseCode` and `IncludeL7ResponseCode`. |
-| `L7LogsFileAggregationTrimURL` | `FELIX_L7LOGSFILEAGGREGATIONTRIMURL` | `IncludeL7FullURL` | How to handle URL data for aggregating L7 logs. Bear in mind that changing this value may have a dramatic impact on the volume of L7 logs sent to Elasticsearch. Possible values include `ExcludeL7URL`, `TrimURLQuery`, `TrimURLQueryAndPath`, and `IncludeL7FullURL`. |
-| `L7LogsFileAggregationNumURLPath` | `FELIX_L7LOGSFILEAGGREGATIONNUMURLPATH` | `5` | How many components in the path to limit the URL by. This parameter only takes effect when `L7LogsFileAggregationTrimURL` is set to `IncludeL7FullURL`. Bear in mind that changing this value may have a dramatic impact on the volume of L7 logs sent to Elasticsearch. Negative values set the limit to infinity. |
-
-DropActionOverride controls what happens to each packet that is denied by
-the current $[prodname] policy - i.e. by the ordered combination of all the
-configured policies and profiles that apply to that packet. It may be
-set to one of the following values:
-
-- `Drop`
-- `Accept`
-- `LogAndDrop`
-- `LogAndAccept`
-
-Normally the `Drop` or `LogAndDrop` value should be used, as dropping a
-packet is the obvious implication of that packet being denied. However, when
-experimenting, or when debugging a scenario that is not behaving as you expect, the
-`Accept` and `LogAndAccept` values can be useful: the packet will then
-still be allowed through.
-
-When set to `LogAndDrop` or `LogAndAccept`, each denied packet is logged in
-syslog, with an entry like this:
-
-```
-May 18 18:42:44 ubuntu kernel: [ 1156.246182] calico-drop: IN=tunl0 OUT=cali76be879f658 MAC= SRC=192.168.128.30 DST=192.168.157.26 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56743 DF PROTO=TCP SPT=56248 DPT=80 WINDOW=29200 RES=0x00 SYN URGP=0 MARK=0xa000000
-```
-
-If the `LogDropActionOverride` flag is set, then the `DropActionOverride` will also appear in the syslog entry:
-
-```
-May 18 18:42:44 ubuntu kernel: [ 1156.246182] calico-drop LOGandDROP: IN=tunl0 OUT=cali76be879f658 MAC= SRC=192.168.128.30 DST=192.168.157.26 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56743 DF PROTO=TCP SPT=56248 DPT=80 WINDOW=29200 RES=0x00 SYN URGP=0 MARK=0xa000000
-```
-
-When the reporting of denied packet metrics is enabled, Felix keeps counts of
-recently denied packets and publishes these as Prometheus metrics on the port
-configured by the `PrometheusReporterPort` setting.
-
-Note that denied packet metrics are independent of the DropActionOverride
-setting. Specifically, if packets that would normally be denied are being
-allowed through by a setting of `Accept` or `LogAndAccept`, those packets
-still contribute to the denied packet metrics as just described.
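-
-For example, here is a minimal sketch of changing this setting cluster-wide, assuming the
-`dropActionOverride` field is available on the default `FelixConfiguration` resource and that
-kubectl can reach the projectcalico.org API:
-
-```bash
-# Sketch only: switch DropActionOverride to LogAndDrop for the whole cluster
-kubectl patch felixconfiguration default --type merge \
-  -p '{"spec":{"dropActionOverride":"LogAndDrop"}}'
-```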
-
-### Felix-Typha Configuration
-
-| Configuration parameter | Environment variable | Description | Schema |
-| ----------------------- | --------------------------- | ----------------------------------------------------------------------------------------- | ------ |
-| `TyphaAddr` | `FELIX_TYPHAADDR` | Address of the Typha Server when running outside a K8S Cluster, in the format IP:PORT | string |
-| `TyphaK8sServiceName` | `FELIX_TYPHAK8SSERVICENAME` | Service Name of Typha Deployment when running inside a K8S Cluster | string |
-| `TyphaK8sNamespace` | `FELIX_TYPHAK8SNAMESPACE` | Namespace of Typha Deployment when running inside a K8S Cluster. [Default: `kube-system`] | string |
-| `TyphaReadTimeout` | `FELIX_TYPHAREADTIMEOUT` | Timeout of Felix when reading information from Typha, in seconds. [Default: 30] | int |
-| `TyphaWriteTimeout` | `FELIX_TYPHAWRITETIMEOUT` | Timeout of Felix when writing information to Typha, in seconds. [Default: 30] | int |
-
-### Felix-Typha TLS configuration
-
-| Configuration parameter | Environment variable | Description | Schema |
-| ----------------------- | --------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ |
-| `TyphaCAFile` | `FELIX_TYPHACAFILE` | Path to the file containing the root certificate of the CA that issued the Typha server certificate. Configures Felix to trust the CA that signed the root certificate. The file may contain multiple root certificates, causing Felix to trust each of the CAs included. Example: `/etc/felix/ca.pem` | string |
-| `TyphaCertFile` | `FELIX_TYPHACERTFILE` | Path to the file containing the client certificate issued to Felix. Enables Felix to participate in mutual TLS authentication and identify itself to the Typha server. Example: `/etc/felix/cert.pem` | string |
-| `TyphaCN` | `FELIX_TYPHACN` | If set, the `Common Name` that Typha's certificate must have. If you have enabled TLS on the communications from Felix to Typha, you must set a value here or in `TyphaURISAN`. You can set values in both, as well, such as to facilitate a migration from using one to the other. If either matches, the communication succeeds. [Default: none] | string |
-| `TyphaKeyFile` | `FELIX_TYPHAKEYFILE` | Path to the file containing the private key matching the Felix client certificate. Enables Felix to participate in mutual TLS authentication and identify itself to the Typha server. Example: `/etc/felix/key.pem` (optional) | string |
-| `TyphaURISAN` | `FELIX_TYPHAURISAN` | If set, a URI SAN that Typha's certificate must have. We recommend populating this with a [SPIFFE](https://github.com/spiffe/spiffe/blob/master/standards/SPIFFE-ID.md#2-spiffe-identity) string that identifies Typha. All Typha instances should use the same SPIFFE ID. If you have enabled TLS on the communications from Felix to Typha, you must set a value here or in `TyphaCN`. You can set values in both, as well, such as to facilitate a migration from using one to the other. If either matches, the communication succeeds. [Default: none] | string |
-
-For more information on how to use and set these variables, refer to
-[Connections from Node to Typha (Kubernetes)](../../../../operations/comms/crypto-auth.mdx#connections-from-node-to-typha-kubernetes).
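-
-If you run Felix directly on the host, a sketch of wiring these up as environment variables,
-using the illustrative file paths from the table above and a hypothetical Common Name:
-
-```bash
-# Illustrative only: client credentials for mutual TLS between Felix and Typha
-export FELIX_TYPHACAFILE=/etc/felix/ca.pem
-export FELIX_TYPHACERTFILE=/etc/felix/cert.pem
-export FELIX_TYPHAKEYFILE=/etc/felix/key.pem
-export FELIX_TYPHACN=typha-server   # example value; must match your Typha certificate
-```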
-
-### PacketCapture configuration
-
-The following parameters fine-tune packet capture rotation:
-
-| Configuration parameter | Environment variable | Description | Schema |
-| ------------------------ | --------------------------------- | --------------------------------------------------------------------------------------------- | ------ |
-| `CaptureDir` | `FELIX_CAPTUREDIR` | Controls the directory where packet capture files are stored. Example: `/var/log/calico/pcap` | string |
-| `CaptureMaxSizeBytes` | `FELIX_CAPTUREMAXSIZEBYTES` | Controls the maximum size in bytes for a packet capture file before rotation. | int |
-| `CaptureRotationSeconds` | `FELIX_CAPTUREMAXROTATIONSECONDS` | Controls the rotation period in seconds for a packet capture file. | int |
-| `CaptureMaxFiles` | `FELIX_CAPTUREMAXFILES` | Controls the maximum number of rotated packet capture files. | int |
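-
-As a sketch, assuming the corresponding `FelixConfiguration` fields (`captureRotationSeconds`
-and `captureMaxFiles`) are available in your version, rotation could be tuned like this:
-
-```bash
-# Sketch only: rotate capture files every 10 minutes and keep up to 10 of them
-kubectl patch felixconfiguration default --type merge \
-  -p '{"spec":{"captureRotationSeconds":600,"captureMaxFiles":10}}'
-```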
-
-### WireGuard configuration
-
-| Configuration parameter | Description | Accepted values | Schema | Default |
-| ----------------------- | ----------- | --------------- | ------ | ------- |
-| wireguardEnabled | Enable encryption for IPv4 on WireGuard supported nodes in cluster. When enabled, pod to pod traffic will be sent over encrypted tunnels between the nodes. | `true`, `false` | boolean | `false` |
-| wireguardEnabledV6 | Enable encryption for IPv6 on WireGuard supported nodes in cluster. When enabled, pod to pod traffic will be sent over encrypted tunnels between the nodes. | `true`, `false` | boolean | `false` |
-| wireguardInterfaceName | Name of the IPv4 WireGuard interface created by Felix. If you change the name, and want to clean up the previously-configured interface names on each node, this is a manual process. | string | string | wireguard.cali |
-| wireguardInterfaceNameV6 | Name of the IPv6 WireGuard interface created by Felix. If you change the name, and want to clean up the previously-configured interface names on each node, this is a manual process. | string | string | wg-v6.cali |
-| wireguardListeningPort | Port used by IPv4 WireGuard tunnels. Felix sets up an IPv4 WireGuard tunnel on each node specified by this port. Available for configuration only in the global FelixConfiguration resource; setting it per host, config-file or environment variable will not work. | 1-65535 | int | 51820 |
-| wireguardListeningPortV6 | Port used by IPv6 WireGuard tunnels. Felix sets up an IPv6 WireGuard tunnel on each node specified by this port. Available for configuration only in the global FelixConfiguration resource; setting it per host, config-file or environment variable will not work. | 1-65535 | int | 51821 |
-| wireguardMTU | MTU set on the IPv4 WireGuard interface created by Felix. Zero value means auto-detect. See [Configuring MTU](../../../../networking/configuring/mtu.mdx). | int | int | 0 |
-| wireguardMTUV6 | MTU set on the IPv6 WireGuard interface created by Felix. Zero value means auto-detect. See [Configuring MTU](../../../../networking/configuring/mtu.mdx). | int | int | 0 |
-| wireguardRoutingRulePriority | WireGuard routing rule priority value set up by Felix. If you change the default value, set it to a value most appropriate to routing rules for your nodes. | 1-32765 | int | 99 |
-| wireguardHostEncryptionEnabled | **Experimental**: Adds host-namespace workload IPs to WireGuard's list of peers. Should **not** be enabled when WireGuard is enabled on a cluster's control plane node, as a networking deadlock can occur. | true, false | boolean | false |
-| wireguardKeepAlive | WireguardKeepAlive controls Wireguard PersistentKeepalive option. Set 0 to disable. [Default: 0] | int | int | 25 |
-
-For more information on encrypting in-cluster traffic with WireGuard, refer to
-[Encrypt cluster pod traffic](../../../../compliance/encrypt-cluster-pod-traffic.mdx).
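-
-For example, a minimal sketch of enabling IPv4 WireGuard encryption on the default
-`FelixConfiguration` resource:
-
-```bash
-# Sketch only: turn on WireGuard encryption for IPv4 pod-to-pod traffic
-kubectl patch felixconfiguration default --type merge \
-  -p '{"spec":{"wireguardEnabled":true}}'
-```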
-
-## Environment variables
-
-The highest priority of configuration is that read from environment
-variables. To set a configuration parameter via an environment variable,
-set the environment variable formed by taking `FELIX_` and appending the
-uppercase form of the variable name. For example, to set the etcd
-address, set the environment variable `FELIX_ETCDADDR`. Other examples
-include `FELIX_ETCDSCHEME`, `FELIX_ETCDKEYFILE`, `FELIX_ETCDCERTFILE`,
-`FELIX_ETCDCAFILE`, `FELIX_FELIXHOSTNAME`, `FELIX_LOGFILEPATH` and
-`FELIX_METADATAADDR`.
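-
-For example, a sketch of setting a few of these variables before starting Felix on a host
-(the values shown are illustrative only):
-
-```bash
-# Illustrative only: Felix configuration via environment variables
-export FELIX_LOGFILEPATH=/var/log/calico/felix.log
-export FELIX_LOGSEVERITYSCREEN=Info
-export FELIX_ETCDADDR=127.0.0.1:2379
-```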
-
-## Configuration file
-
-On startup, Felix reads an ini-style configuration file. The path to
-this file defaults to `/etc/calico/felix.cfg` but can be overridden
-using the `-c` or `--config-file` options on the command line. If the
-file exists, then it is read (ignoring section names) and all parameters
-are set from it.
-
-In OpenStack, we recommend putting all configuration into configuration
-files, since the etcd database is transient (and may be recreated by the
-OpenStack plugin in certain error cases). However, in a Docker
-environment the use of environment variables or etcd is often more
-convenient.
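-
-As an illustration only, a minimal ini-style file might look like the following; the section
-name is ignored and the parameter values are examples, not recommendations:
-
-```bash
-# Sketch: write a minimal Felix configuration file
-cat > /etc/calico/felix.cfg <<'EOF'
-[global]
-LogFilePath = /var/log/calico/felix.log
-LogSeverityScreen = Info
-EOF
-```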
-
-## Datastore
-
-Felix also reads configuration parameters from the datastore. It supports
-a global setting and a per-host override.
-
-1. Get the current felixconfig settings.
-
- ```bash
-   kubectl get felixconfiguration.projectcalico.org default -o yaml > felix.yaml
- ```
-
-1. Modify `logFilePath` to your intended path, e.g. "/tmp/felix.log".
-
- ```bash
- vim felix.yaml
- ```
-
- :::tip
-
-   For a global change, set the name to "default".
-   For a node-specific change, set the name to `node.<node name>`, e.g. "node.$[prodname]-node-1".
-
- :::
-
-1. Replace the current felixconfig settings.
-
- ```bash
- kubectl replace -f felix.yaml
- ```
-
-For more information, see [Felix Configuration Resource](../../../resources/felixconfig.mdx).
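-
-As a sketch, a node-specific override can also be created directly, assuming a node named
-`node-1` (hypothetical) and the `logFilePath` field:
-
-```bash
-# Sketch only: per-node FelixConfiguration override for a hypothetical node "node-1"
-kubectl apply -f - <<EOF
-apiVersion: projectcalico.org/v3
-kind: FelixConfiguration
-metadata:
-  name: node.node-1
-spec:
-  logFilePath: /tmp/felix.log
-EOF
-```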
-
-
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/node/felix/index.mdx b/calico-cloud_versioned_docs/version-20-1/reference/component-resources/node/felix/index.mdx
deleted file mode 100644
index b7c1e2d1c1..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/node/felix/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Felix is a Calico component that runs on every machine that provides endpoints.
-hide_table_of_contents: true
----
-
-# Felix
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/node/felix/prometheus.mdx b/calico-cloud_versioned_docs/version-20-1/reference/component-resources/node/felix/prometheus.mdx
deleted file mode 100644
index fc5ede70f9..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/node/felix/prometheus.mdx
+++ /dev/null
@@ -1,226 +0,0 @@
----
-description: Review metrics for the Felix component if you are using Prometheus.
----
-
-# Prometheus metrics
-
-Felix can be configured to report a number of metrics through Prometheus. See the
-[configuration reference](configuration.mdx) for how to enable metrics reporting.
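-
-For example, a minimal sketch of enabling the metrics endpoint, assuming the
-`prometheusMetricsEnabled` field and the default port of 9091:
-
-```bash
-# Sketch only: enable Felix's Prometheus metrics endpoint
-kubectl patch felixconfiguration default --type merge \
-  -p '{"spec":{"prometheusMetricsEnabled":true}}'
-```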
-
-## Metric reference
-
-### Felix specific
-
-Felix exports a number of Prometheus metrics. The current set is as follows. Since some metrics
-are tied to particular implementation choices inside Felix we can't make any hard guarantees that
-metrics will persist across releases. However, we aim not to make any spurious changes to
-existing metrics.
-
-| Name | Description |
-| ---------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `felix_active_local_endpoints` | Number of active endpoints on this host. |
-| `felix_active_local_policies` | Number of active policies on this host. |
-| `felix_active_local_selectors` | Number of active selectors on this host. |
-| `felix_active_local_tags` | Number of active tags on this host. |
-| `felix_bpf_conntrack_cleaned` | Number of entries cleaned during a conntrack table sweep. |
-| `felix_bpf_conntrack_cleaned_total` | Total number of entries cleaned during conntrack table sweeps, incremented for each clean individually. |
-| `felix_bpf_conntrack_expired` | Number of entries cleaned during a conntrack table sweep due to expiration. |
-| `felix_bpf_conntrack_expired_total` | Total number of entries cleaned during conntrack table sweep due to expiration - by reason. |
-| `felix_bpf_conntrack_inforeader_blocks` | Conntrack InfoReader would-blocks. |
-| `felix_bpf_conntrack_stale_nat` | Number of entries cleaned during a conntrack table sweep due to stale NAT. |
-| `felix_bpf_conntrack_stale_nat_total` | Total number of entries cleaned during conntrack table sweeps due to stale NAT. |
-| `felix_bpf_conntrack_sweeps` | Number of conntrack table sweeps made so far. |
-| `felix_bpf_conntrack_used` | Number of used entries visited during a conntrack table sweep. |
-| `felix_bpf_conntrack_sweep_duration` | Conntrack sweep execution time (ns). |
-| `felix_bpf_num_ip_sets` | Number of BPF IP sets managed in the dataplane. |
-| `felix_calc_graph_output_events` | Number of events emitted by the calculation graph. |
-| `felix_calc_graph_update_time_seconds` | Seconds to update calculation graph for each datastore OnUpdate call. |
-| `felix_calc_graph_updates_processed` | Number of datastore updates processed by the calculation graph. |
-| `felix_cluster_num_host_endpoints` | Total number of host endpoints cluster-wide. |
-| `felix_cluster_num_hosts` | Total number of $[prodname] hosts in the cluster. |
-| `felix_cluster_num_policies` | Total number of policies in the cluster. |
-| `felix_cluster_num_profiles` | Total number of profiles in the cluster. |
-| `felix_cluster_num_tiers` | Total number of $[prodname] tiers in the cluster. |
-| `felix_cluster_num_workload_endpoints` | Total number of workload endpoints cluster-wide. |
-| `felix_egress_gateway_remote_polls{status="total"}` | Total number of remote egress gateway pods that Felix is polling for health/connectivity. Only egress gateways with a named "health" port will be polled. |
-| `felix_egress_gateway_remote_polls{status="up"}` | Total number of remote egress gateway pods that have successful probes. |
-| `felix_egress_gateway_remote_polls{status="probe-failed"}` | Total number of remote egress gateway pods that have failed probes. |
-| `felix_exec_time_micros` | Summary of time taken to fork/exec child processes. |
-| `felix_int_dataplane_addr_msg_batch_size` | Number of interface address messages processed in each batch. Higher values indicate we're doing more batching to try to keep up. |
-| `felix_int_dataplane_apply_time_seconds` | Time in seconds that it took to apply a dataplane update. |
-| `felix_int_dataplane_failures` | Number of times dataplane updates failed and will be retried. |
-| `felix_int_dataplane_iface_msg_batch_size` | Number of interface state messages processed in each batch. Higher values indicate we're doing more batching to try to keep up. |
-| `felix_int_dataplane_messages` | Number of dataplane messages by type. |
-| `felix_int_dataplane_msg_batch_size` | Number of messages processed in each batch. Higher values indicate we're doing more batching to try to keep up. |
-| `felix_ipsec_bindings_total` | Total number of ipsec bindings. |
-| `felix_ipsec_errors` | Number of ipsec command failures. |
-| `felix_ipset_calls` | Number of ipset commands executed. |
-| `felix_ipset_errors` | Number of ipset command failures. |
-| `felix_ipset_lines_executed` | Number of ipset operations executed. |
-| `felix_ipsets_calico` | Number of active $[prodname] IP sets. |
-| `felix_ipsets_total` | Total number of active IP sets. |
-| `felix_iptables_chains` | Number of active iptables chains. |
-| `felix_iptables_lines_executed` | Number of iptables rule updates executed. |
-| `felix_iptables_lock_acquire_secs` | Time taken to acquire the iptables lock. |
-| `felix_iptables_lock_retries` | Number of times the iptables lock was already held and felix had to retry to acquire it. |
-| `felix_iptables_restore_calls` | Number of iptables-restore calls. |
-| `felix_iptables_restore_errors` | Number of iptables-restore errors. |
-| `felix_iptables_rules` | Number of active iptables rules. |
-| `felix_iptables_save_calls` | Number of iptables-save calls. |
-| `felix_iptables_save_errors` | Number of iptables-save errors. |
-| `felix_log_errors` | Number of errors encountered while logging. |
-| `felix_logs_dropped` | Number of logs dropped because the output stream was blocked. |
-| `felix_reporter_log_errors` | Number of errors encountered while logging in the Syslog. |
-| `felix_reporter_logs_dropped` | Number of logs dropped because the output was blocked in the Syslog reporter. |
-| `felix_resync_state` | Current datastore state. |
-| `felix_resyncs_started` | Number of times Felix has started resyncing with the datastore. |
-| `felix_route_table_list_seconds` | Time taken to list all the interfaces during a resync. |
-| `felix_route_table_per_iface_sync_seconds` | Time taken to sync each interface. |
-
-Prometheus metrics are self-documenting. With metrics turned on, `curl` can be used to list the
-metrics along with their help text and type information.
-
-```bash
-curl -s http://localhost:9091/metrics | head
-```
-
-Example response:
-
-```
-# HELP felix_active_local_endpoints Number of active endpoints on this host.
-# TYPE felix_active_local_endpoints gauge
-felix_active_local_endpoints 91
-# HELP felix_active_local_policies Number of active policies on this host.
-# TYPE felix_active_local_policies gauge
-felix_active_local_policies 0
-# HELP felix_active_local_selectors Number of active selectors on this host.
-# TYPE felix_active_local_selectors gauge
-felix_active_local_selectors 82
-...
-```
-
-### Label indexing metrics
-
-The label index is a subcomponent of Felix that is responsible for calculating the set of endpoints and network sets
-that match each selector that is in an active policy rule. Policy rules are active on a particular node if the policy
-they belong to selects a workload or host endpoint on that node with its top-level selector (in `spec.selector`).
-Inactive policies have minimal CPU cost because their selectors do not get indexed.
-
-Since the label index must match the active selectors against _all_ endpoints and network sets in the cluster, its
-performance is critical and it supports various optimizations to minimize CPU usage. Its metrics can be used to
-check that the optimizations are active for your policy set.
-
-#### `felix_label_index_num_endpoints`
-
-Reports the total number of endpoints (and similar objects such as network sets) being tracked by the index.
-This should match the number of endpoints and network sets in your cluster.
-
-#### `felix_label_index_num_active_selectors{optimized="true|false"}`
-
-Reports the total number of active selectors, broken into `optimized="true"` and `optimized="false"` sub-totals.
-
-The `optimized="true"` total tracks the number of selectors that the label index was able to optimize. Those
-selectors should be calculated efficiently even in clusters with hundreds of thousands of endpoints. In general the
-CPU used to calculate them should be proportional to the number of endpoints that match them and the churn rate of
-_those_ endpoints.
-
-The `optimized="false"` total tracks the number of selectors that could not be optimized. Unoptimized selectors are
-much more costly to calculate; the CPU used to calculate them is proportional to the number of endpoints
-in the cluster and their churn rate. It is generally OK to have a handful of unoptimized selectors,
-but if many selectors are unoptimized the CPU usage can be substantial at high scale.
-
-For more information on writing selectors that can be optimized, see the
-[selector performance](../../../resources/networkpolicy.mdx#selector-performance-in-entityrules) section of the `NetworkPolicy` reference.
-
-#### `felix_label_index_selector_evals{result="true|false"}`
-
-Counts the total number of times that a selector was evaluated against an endpoint to determine whether it matches, broken
-down by match (`true`) or no-match (`false`). The ratio of match to no-match shows how effective the selector
-indexing optimizations are for your policy set. The more effectively the label index can optimize the selectors,
-the fewer "no-match" results it will report relative to "match".
-
-If you have more than a handful of active selectors and `felix_label_index_selector_evals{result="false"}` is many
-times `felix_label_index_selector_evals{result="true"}` then it is likely that some selectors in the policy set are
-not being optimized effectively.
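-
-For example, a quick way to eyeball this ratio on a node (assuming metrics are enabled on the
-default port):
-
-```bash
-# Sketch: compare match vs no-match selector evaluations on the local Felix
-curl -s http://localhost:9091/metrics | grep '^felix_label_index_selector_evals'
-```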
-
-#### `felix_label_index_strategy_evals{strategy="..."}`
-
-This is a technical statistic that shows how many times the label index has employed each optimization
-strategy that it has available. The strategies will likely evolve over time but, at time of writing, they are
-as follows:
-
-- `endpoint-full-scan`: the least efficient fallback strategy, used for unoptimized selectors. The index
- scanned _all_ endpoints to find the matches for a selector.
-
-- `endpoint|parent-no-match`: the most efficient strategy; the index was able to prove that nothing matched the
- selector so it was able to skip the scan entirely.
-
-- `endpoint|parent-single-value`: the label index was able to limit the scan to only those endpoints/parents that
- have a particular label and value combination. For example, selector `label == "value"` would only scan items that
- had exactly that label set to "value".
-
-- `endpoint|parent-multi-value`: the label index was able to limit the scan to only those endpoints/parents that
-  have a particular label and one of a few values. For example, selector `label in {"a", "b"}` would only scan items
- that had exactly that label with one of the given values.
-
-- `endpoint|parent-label-name`: the label index was able to limit the scan to only those endpoints/parents that
-  have a particular label (but was unable to limit it to a particular subset of values). For example, `has(label)`
- would result in that kind of scan.
-
-Terminology: here "endpoint" means "endpoint or NetworkSet" and "parent" is Felix's internal name for resources like
-Kubernetes Namespaces. A "parent" scan means that the label index scanned all endpoints that have a parent
-matching the strategy.
-
-### CPU / memory metrics
-
-Felix also exports the default set of metrics that the Prometheus client library makes available. Currently, those
-include:
-
-| Name | Description |
-| ---------------------------------- | ------------------------------------------------------------------------------------------- |
-| `go_gc_duration_seconds` | A summary of the GC invocation durations. |
-| `go_goroutines` | Number of goroutines that currently exist. |
-| `go_info` | Go version. |
-| `go_memstats_alloc_bytes` | Number of bytes allocated and still in use. |
-| `go_memstats_alloc_bytes_total` | Total number of bytes allocated, even if freed. |
-| `go_memstats_buck_hash_sys_bytes` | Number of bytes used by the profiling bucket hash table. |
-| `go_memstats_frees_total` | Total number of frees. |
-| `go_memstats_gc_cpu_fraction` | The fraction of this program’s available CPU time used by the GC since the program started. |
-| `go_memstats_gc_sys_bytes` | Number of bytes used for garbage collection system metadata. |
-| `go_memstats_heap_alloc_bytes` | Number of heap bytes allocated and still in use. |
-| `go_memstats_heap_idle_bytes` | Number of heap bytes waiting to be used. |
-| `go_memstats_heap_inuse_bytes` | Number of heap bytes that are in use. |
-| `go_memstats_heap_objects` | Number of allocated objects. |
-| `go_memstats_heap_released_bytes` | Number of heap bytes released to OS. |
-| `go_memstats_heap_sys_bytes` | Number of heap bytes obtained from system. |
-| `go_memstats_last_gc_time_seconds` | Number of seconds since 1970 of last garbage collection. |
-| `go_memstats_lookups_total` | Total number of pointer lookups. |
-| `go_memstats_mallocs_total` | Total number of mallocs. |
-| `go_memstats_mcache_inuse_bytes` | Number of bytes in use by mcache structures. |
-| `go_memstats_mcache_sys_bytes` | Number of bytes used for mcache structures obtained from system. |
-| `go_memstats_mspan_inuse_bytes` | Number of bytes in use by mspan structures. |
-| `go_memstats_mspan_sys_bytes` | Number of bytes used for mspan structures obtained from system. |
-| `go_memstats_next_gc_bytes` | Number of heap bytes when next garbage collection will take place. |
-| `go_memstats_other_sys_bytes` | Number of bytes used for other system allocations. |
-| `go_memstats_stack_inuse_bytes` | Number of bytes in use by the stack allocator. |
-| `go_memstats_stack_sys_bytes` | Number of bytes obtained from system for stack allocator. |
-| `go_memstats_sys_bytes` | Number of bytes obtained by system. Sum of all system allocations. |
-| `go_threads` | Number of OS threads created. |
-| `process_cpu_seconds_total` | Total user and system CPU time spent in seconds. |
-| `process_max_fds` | Maximum number of open file descriptors. |
-| `process_open_fds` | Number of open file descriptors. |
-| `process_resident_memory_bytes` | Resident memory size in bytes. |
-| `process_start_time_seconds` | Start time of the process since unix epoch in seconds. |
-| `process_virtual_memory_bytes` | Virtual memory size in bytes. |
-| `process_virtual_memory_max_bytes` | Maximum amount of virtual memory available in bytes. |
-
-### WireGuard metrics
-
-Felix also exports WireGuard device statistics if a WireGuard device is detected. This can be disabled via Felix configuration.
-
-| Name | Description |
-| ------------------------------------ | ------------------------------------------------------------------------------------------------- |
-| `wireguard_meta` | Gauge. Device / interface information for a felix/calico node, values are in this metric's labels |
-| `wireguard_bytes_rcvd` | Counter. Current bytes received from a peer identified by a peer public key and endpoint |
-| `wireguard_bytes_sent` | Counter. Current bytes sent to a peer identified by a peer public key and endpoint |
-| `wireguard_latest_handshake_seconds` | Gauge. Last handshake with a peer, unix timestamp in seconds. |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/node/index.mdx b/calico-cloud_versioned_docs/version-20-1/reference/component-resources/node/index.mdx
deleted file mode 100644
index 0c41d47538..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/component-resources/node/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Learn about the components that comprise the cnx-node.
-hide_table_of_contents: true
----
-
-# Calico Cloud node (cnx-node)
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/faq.mdx b/calico-cloud_versioned_docs/version-20-1/reference/faq.mdx
deleted file mode 100644
index 8ddc95317e..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/faq.mdx
+++ /dev/null
@@ -1,442 +0,0 @@
----
-description: Common questions that users ask about Calico Enterprise.
----
-
-# Frequently asked questions
-
-## Why use $[prodname]?
-
-The problem $[prodname] tries to solve is the networking of workloads (VMs,
-containers, etc) in a high scale environment. Existing L2-based methods
-for solving this problem have problems at high scale. Compared to these,
-we think $[prodname] is more scalable, simpler, and more flexible. We think
-you should look into it if you have more than a handful of nodes on a
-single site.
-
-$[prodname] also provides a rich network security model that
-allows operators and developers to declare intent-based network security
-policy that is automatically rendered into distributed firewall rules
-across a cluster of containers, VMs, and/or servers.
-
-For a more detailed discussion of this topic, see our blog post at
-[Why Calico?](https://www.projectcalico.org/why-calico/).
-
-## Does $[prodname] work with IPv6?
-
-Yes! $[prodname]'s core components support IPv6 out of the box. However,
-not all orchestrators that we integrate with support IPv6 yet.
-
-## Why does my container have a route to 169.254.1.1?
-
-In a $[prodname] network, each host acts as a gateway router for the
-workloads that it hosts. In container deployments, $[prodname] uses
-169.254.1.1 as the address for the $[prodname] router. By using a
-link-local address, $[prodname] saves precious IP addresses and avoids
-burdening the user with configuring a suitable address.
-
-While the routing table may look a little odd to someone who is used to
-configuring LAN networking, using explicit routes rather than
-subnet-local gateways is fairly common in WAN networking.
-
-## Why isn't $[prodname] working with a containerized Kubelet?
-
-$[prodname] hosted install places the necessary CNI binaries and config on each
-Kubernetes node in a directory on the host as specified in the manifest. By
-default, it places binaries in `/opt/cni/bin` and config in `/etc/cni/net.d`.
-
-When running the kubelet as a container using hyperkube,
-you need to make sure that the containerized kubelet can see the CNI network
-plugins and config that have been installed by mounting them into the kubelet container.
-
-For example, add the following arguments to the kubelet-wrapper service:
-
-```
---volume /etc/cni/net.d:/etc/cni/net.d \
---volume /opt/cni/bin:/opt/cni/bin \
-```
-
-Without the above volume mounts, the kubelet will not call the $[prodname] CNI binaries, and so
-$[prodname] [workload endpoints](resources/workloadendpoint.mdx) will
-not be created, and $[prodname] policy will not be enforced.
-
-## How do I view $[prodname] CNI logs?
-
-The $[prodname] CNI plugin emits logs to stderr, which are then logged out by the kubelet. Where these logs end up
-depends on how your kubelet is configured. For deployments using `systemd`, you can view them via `journalctl`.
-
-The log level can be configured via the CNI network configuration file, by changing the value of the
-key `log_level`. See [Configuring the $[prodname] CNI plugins](component-resources/configuration.mdx) for more information.
-
-CNI plugin logs can also be found in `/var/log/calico/cni`.
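-
-For example, on a `systemd` host you might search the kubelet's journal for CNI output; this is
-a sketch that assumes the kubelet runs as the `kubelet` unit:
-
-```bash
-# Sketch: CNI plugin messages are logged by the kubelet
-journalctl -u kubelet | grep -i calico
-```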
-
-## How do I configure the pod IP range?
-
-When using $[prodname] IPAM, IP addresses are assigned from [IP Pools](resources/ippool.mdx).
-
-By default, all enabled IP pools are used. However, you can specify which IP pools to use for IP address management in the [CNI network config](component-resources/configuration.mdx#ipam),
-or on a per-pod basis using [Kubernetes annotations](component-resources/configuration.mdx#using-kubernetes-annotations).
-
-## How do I assign a specific IP address to a pod?
-
-For most use cases it's not necessary to assign specific IP addresses to a Kubernetes pod and it's recommended to use Kubernetes services instead.
-However, if you do need to assign a particular address to a pod, $[prodname] provides two ways of doing this:
-
-- You can request an IP that is available in $[prodname] IPAM using the `cni.projectcalico.org/ipAddrs` annotation.
-- You can request an IP using the `cni.projectcalico.org/ipAddrsNoIpam` annotation. Note that this annotation bypasses the configured IPAM plugin, and thus in most cases it is recommended to use the above annotation.
-
-See the [Requesting a specific IP address](component-resources/configuration.mdx#requesting-a-specific-ip-address) section in the CNI plugin reference documentation for more details.
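-
-For example, a sketch of requesting a specific address at pod creation time with the
-`cni.projectcalico.org/ipAddrs` annotation; the pod name, image, and IP are illustrative only:
-
-```bash
-# Sketch only: the requested IP must fall within a configured IP pool
-kubectl apply -f - <<EOF
-apiVersion: v1
-kind: Pod
-metadata:
-  name: static-ip-pod
-  annotations:
-    cni.projectcalico.org/ipAddrs: '["192.168.0.100"]'
-spec:
-  containers:
-    - name: app
-      image: nginx
-EOF
-```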
-
-## Why can't I see the 169.254.1.1 address mentioned above on my host?
-
-$[prodname] tries hard to avoid interfering with any other configuration
-on the host. Rather than adding the gateway address to the host side
-of each workload interface, $[prodname] sets the `proxy_arp` flag on the
-interface. This makes the host behave like a gateway, responding to
-ARPs for 169.254.1.1 without having to actually allocate the IP address
-to the interface.
-
-## Why do all cali\* interfaces have the MAC address ee:ee:ee:ee:ee:ee?
-
-In some setups the kernel is unable to generate a persistent MAC address and so
-$[prodname] assigns a MAC address itself. Since $[prodname] uses
-point-to-point routed interfaces, traffic does not reach the data link layer
-so the MAC Address is never used and can therefore be the same for all the
-cali\* interfaces.
-
-## Can I prevent my Kubernetes pods from initiating outgoing connections?
-
-Yes! The Kubernetes [`NetworkPolicy`](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
-API added support for egress policies in v1.8. You can also use `calicoctl`
-to configure egress policy to prevent Kubernetes pods from initiating outgoing
-connections based on the full set of supported $[prodname] policy primitives
-including labels, Kubernetes namespaces, CIDRs, and ports.
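-
-For example, a sketch of a Kubernetes `NetworkPolicy` that blocks all outgoing connections from
-pods labelled `app: restricted` in the `default` namespace (names are illustrative):
-
-```bash
-# Sketch only: selecting pods for Egress with no egress rules denies all outgoing traffic
-kubectl apply -f - <<EOF
-apiVersion: networking.k8s.io/v1
-kind: NetworkPolicy
-metadata:
-  name: deny-egress
-  namespace: default
-spec:
-  podSelector:
-    matchLabels:
-      app: restricted
-  policyTypes:
-    - Egress
-EOF
-```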
-
-## I've heard $[prodname] uses proxy ARP, doesn't proxy ARP cause a lot of problems?
-
-It can, but not in the way that $[prodname] uses it.
-
-In container deployments, $[prodname] only uses proxy ARP for resolving the
-169.254.1.1 address. The routing table inside the container ensures
-that all traffic goes via the 169.254.1.1 gateway so that is the only
-IP that will be ARPed by the container.
-
-## Is $[prodname] compliant with PCI/DSS requirements?
-
-PCI certification applies to the whole end-to-end system, of which
-$[prodname] would be a part. We understand that most current solutions use
-VLANs, but after studying the PCI requirements documents, we believe
-that $[prodname] does meet those requirements and that nothing in the
-documents _mandates_ the use of VLANs.
-
-## How do I enable IP-in-IP and NAT outgoing on an IP pool?
-
-1. Retrieve current IP pool config.
-
- ```bash
- calicoctl get ipPool --export -o yaml > pool.yaml
- ```
-
-2. Modify IP pool config.
-
- Modify the pool's spec to enable IP-in-IP and NAT outgoing. (See
- [IP pools](resources/ippool.mdx)
- for other settings that can be edited.)
-
-   ```yaml
- - apiVersion: projectcalico.org/v3
- kind: IPPool
- metadata:
- name: ippool-1
- spec:
- cidr: 192.168.0.0/16
- ipipMode: Always
- natOutgoing: true
- ```
-
-3. Load the modified file.
-
- ```bash
- kubectl replace -f pool.yaml
- ```
-
-## How does $[prodname] maintain saved state?
-
-State is saved in a few places in a $[prodname] deployment, depending on
-whether it's global or local state.
-
-Local state is state that belongs on a single compute host, associated
-with a single running Felix instance (things like kernel routes, tap
-devices etc.). Local state is entirely stored by the Linux kernel on the
-host, with Felix storing it only as a temporary mirror. This makes Felix
-effectively stateless, with the kernel acting as a backing data store on
-one side and Kubernetes (kdd) as a data source on the other.
-
-If Felix is restarted, it learns current local state by interrogating
-the kernel at start up. It then reads from the datastore all the local state
-which it should have, and updates the kernel to match. This approach has
-strong resiliency benefits, in that if Felix restarts you don't suddenly
-lose access to your VMs or containers. As long as the Linux kernel is
-running, you've still got full functionality.
-
-The bulk of global state is mastered in whatever component hosts the
-plugin.
-
-- In certain cases, `etcd` itself contains the master copy of
- the data. This is because some Docker deployments have an `etcd`
- cluster that has the required resiliency characteristics, used to
- store all system configuration and so `etcd` is configured so as to
- be a suitable store for critical data.
-- In other orchestration systems, it may be stored in distributed
- databases, either owned directly by the plugin or by the
- orchestrator itself.
-
-The only other state storage in a $[prodname] network is in the BGP sessions,
-which approximate a distributed database of routes. This BGP state is
-simply a replicated copy of the per-host routes configured by Felix
-based on the global state provided by the orchestrator.
-
-This makes the $[prodname] design very simple, because we store very little
-state. All of our components can be shut down and restarted without risk,
-because they resynchronize state as necessary. This makes modeling
-their behavior extremely simple, reducing the complexity of bugs.
-
-## I heard $[prodname] is suggesting layer 2: I thought you were layer 3! What's happening?
-
-It's important to distinguish what $[prodname] provides to the workloads
-hosted in a data center (a purely layer 3 network) with what the $[prodname]
-project _recommends_ operators use to build their underlying network
-fabric.
-
-$[prodname]'s core principle is that _applications_ and _workloads_
-overwhelmingly need only IP connectivity to communicate. For this reason
-we build an IP-forwarded network to connect the tenant applications and
-workloads to each other and the broader world.
-
-However, the underlying physical fabric obviously needs to be set up
-too. Here, $[prodname] has discussed how both a layer 2 (see
-[here](architecture/design/l2-interconnect-fabric.mdx))
-or a layer 3 (see
-[here](architecture/design/l3-interconnect-fabric.mdx))
-fabric
-could be integrated with $[prodname]. This is one of the great strengths of
-the $[prodname] model: it allows the infrastructure to be decoupled from what
-we show to the tenant applications and workloads.
-
-We have some thoughts on different interconnect approaches (as noted
-above), but just because we say that there are layer 2 and layer 3 ways
-of building the fabric, and that those decisions may have an impact on
-route scale, does not mean that $[prodname] is "going back to Ethernet" or
-that we're recommending layer 2 for tenant applications. In all cases we
-forward on IP packets, no matter what architecture is used to build the
-fabric.
-
-## How do I control policy/connectivity without virtual/physical firewalls?
-
-$[prodname] provides an extremely rich security policy model, applying policy at the first and last hop
-of the routed traffic within the $[prodname] network (the source and
-destination compute hosts).
-
-This model is substantially more robust to failure than a centralized
-firewall-based model. In particular, the $[prodname] approach has no
-single point of failure: if the device enforcing the firewall has failed
-then so has one of the workloads involved in the traffic (because the
-firewall is enforced by the compute host).
-
-This model is also extremely amenable to scaling out. Because we have a
-central repository of policy configuration, but apply it at the edges of
-the network (the hosts) where it is needed, we automatically ensure that
-the rules match the topology of the data center. This allows easy
-scaling out, and gives us all the advantages of a single firewall (one
-place to manage the rules), but none of the disadvantages (single points
-of failure, state sharing, hairpinning of traffic, etc.).
-
-Lastly, we decouple the reachability of nodes and the policy applied to
-them. We use BGP to distribute the topology of the network, telling
-every node how to get to every endpoint in case two endpoints need to
-communicate. We use policy to decide _if_ those two nodes should
-communicate, and if so, how. If policy changes and two endpoints should
-now communicate, where before they shouldn’t have, all we have to do is
-update policy: the reachability information does not change. If later
-they should be denied the ability to communicate, the policy is updated
-again, and again the reachability doesn’t have to change.
-
-## Why isn't the `-p` flag on `docker run` working as expected?
-
-The `-p` flag tells Docker to set up port mapping to connect a port on the
-Docker host to a port on your container via the `docker0` bridge.
-
-If a host's containers are connected to the `docker0` bridge interface, $[prodname]
-would be unable to enforce security rules between workloads on the same host;
-all containers on the bridge would be able to communicate with one another.
-
-You can securely configure port mapping by following [Configure outgoing NAT](../networking/configuring/workloads-outside-cluster.mdx).
-
-## Can $[prodname] containers use any IP address within a pool, even subnet network/broadcast addresses?
-
-Yes! $[prodname] is fully routed, so all IP address within a $[prodname] pool are usable as
-private IP addresses to assign to a workload. This means addresses commonly
-reserved in a L2 subnet, such as IPv4 addresses ending in .0 or .255, are perfectly
-okay to use.
-
-## How do I get network traffic into and out of my $[prodname] cluster?
-
-The recommended way to get traffic to/from your $[prodname] network is by peering to
-your existing data center L3 routers using BGP and by assigning globally
-routable IPs (public IPs) to containers that need to be accessed from the internet.
-This allows incoming traffic to be routed directly to your containers without the
-need for NAT. This flat L3 approach delivers exceptional network scalability
-and performance.
-
-A common scenario is for your container hosts to be on their own
-isolated layer 2 network, like a rack in your server room or an entire data
-center. Access to that network is via a router, which also is the default
-router for all the container hosts.
-
-If this describes your infrastructure,
-[Configure outgoing NAT](../networking/configuring/workloads-outside-cluster.mdx) explains in more detail
-what to do. Otherwise, if you have a layer 3 (IP) fabric, then there are
-detailed datacenter networking recommendations given
-in [$[prodname] over IP fabrics](architecture/design/l3-interconnect-fabric.mdx).
-We'd also encourage you to [get in touch](https://www.projectcalico.org/contact)
-to discuss your environment.
-
-### How can I enable NAT for outgoing traffic from containers with private IP addresses?
-
-If you want to allow containers with private IP addresses to be able to access the
-internet then you can use your data center's existing outbound NAT capabilities
-(typically provided by the data center's border routers).
-
-Alternatively you can use $[prodname]'s built in outbound NAT capability by enabling it on any
-$[prodname] IP pool. In this case $[prodname] will perform outbound NAT locally on the compute
-node on which each container is hosted.
-
-```bash
-cat <<EOF | calicoctl apply -f -
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
-  name: <pool name>
-spec:
-  cidr: <CIDR>
-  natOutgoing: true
-EOF
-```
-
-Where `<CIDR>` is the CIDR of your IP pool, for example `192.168.0.0/16`.
-
-Remember: the security profile for the container will need to allow traffic to the
-internet as well. Refer to the appropriate guide for your orchestration
-system for details on how to configure policy.
-
-### How can I enable NAT for incoming traffic to containers with private IP addresses?
-
-As discussed, the recommended way to get traffic to containers that
-need to be accessed from the internet is to give them public IP addresses and
-to configure $[prodname] to peer with the data center's existing L3 routers.
-
-In cases where this is not possible, you can configure incoming NAT
-(also known as DNAT) on your data center's existing border routers. Alternatively,
-you can configure incoming NAT with port mapping on the host on which the container
-is running.
-
-1. Create a new chain called `expose-ports` to hold the NAT rules.
-
- ```bash
- iptables -t nat -N expose-ports
- ```
-
-1. Jump to that chain from the `OUTPUT` and `PREROUTING` chains.
-
- ```bash
- iptables -t nat -A OUTPUT -j expose-ports
- iptables -t nat -A PREROUTING -j expose-ports
- ```
-
- :::tip
-
- The `OUTPUT` chain is hit by traffic originating on the host itself;
- the `PREROUTING` chain is hit by traffic coming from elsewhere.
-
- :::
-
-1. For each port you want to expose, add a rule to the
-   expose-ports chain, replacing `<ip>` with the host IP that you
-   want to use to expose the port and `<port>` with the host port.
-
- ```bash
-   iptables -t nat -A expose-ports -p tcp --destination <ip> \
-     --dport <port> -j DNAT --to <container-ip>:<container-port>
- ```
-
-For example, you have a container to which you've assigned the `CALICO_IP`
-of 192.168.7.4, and you have NGINX running on port 8080 inside the container.
-If you want to expose this service on port 80 and your host has IP 192.0.2.1,
-then you could run the following commands:
-
-```bash
-iptables -t nat -N expose-ports
-iptables -t nat -A OUTPUT -j expose-ports
-iptables -t nat -A PREROUTING -j expose-ports
-
-iptables -t nat -A expose-ports -p tcp --destination 192.0.2.1 --dport 80 -j DNAT --to 192.168.7.4:8080
-```
-
-The commands will need to be run each time the host is restarted.
-
-Remember: the security profile for the container will need to allow traffic to the exposed port as well.
-Refer to the appropriate guide for your orchestration system for details on how to configure policy.
-
-### Can I run $[prodname] in a public cloud environment?
-
-Yes. If you are running in a public cloud that doesn't allow either L3 peering or L2 connectivity between $[prodname] hosts then you can enable IP-in-IP in your $[prodname] IP pool:
-
-```bash
-cat <<EOF | calicoctl apply -f -
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
-  name: <pool name>
-spec:
-  cidr: <CIDR>
-  ipipMode: Always
-  natOutgoing: true
-EOF
-```
-
-$[prodname] will then route traffic between $[prodname] hosts using IP-in-IP.
-
-For best performance in AWS, you can disable [Source/Destination Check](resources/felixconfig.mdx#spec) instead of using IP-in-IP or VXLAN; but only if all your instances are in the same subnet of your VPC. The setting must be `Disable` for the EC2 instance(s) to process traffic not matching the host interface IP address. This is also applicable if your cluster is spread across multiple subnets. If your cluster traffic crosses subnets, set `ipipMode` (or `vxlanMode`) to `CrossSubnet` to reduce the encapsulation overhead. Check [configuring overlay networking](../networking/configuring/vxlan-ipip.mdx) for the details.
-
-You can disable Source/Destination Check using [Felix configuration](resources/felixconfig.mdx), the AWS CLI, or the EC2 console. For example, using the AWS CLI:
-
-```bash
-aws ec2 modify-instance-attribute --instance-id <instance_id> --source-dest-check "{\"Value\": false}"
-
-cat <<EOF | calicoctl apply -f -
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
-  name: <pool name>
-spec:
-  cidr: <CIDR>
-  natOutgoing: true
-EOF
-```
-
-### On AWS with IP-in-IP, why do I see no connectivity between workloads or only see connectivity if I ping in both directions?
-
-By default, AWS security groups block incoming IP-in-IP traffic.
-
-However, if an instance has recently sent some IP-in-IP traffic out when it receives some incoming IP-in-IP traffic,
-then AWS sees that as a response to an outgoing connection and it allows the incoming traffic. This leads to some very
-confusing behavior where traffic can be blocked and then suddenly start working!
-
-To resolve the issue, add a rule to your security groups that allows inbound and outbound IP-in-IP traffic (IP protocol
-number 4) between your hosts.
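-
-For example, a sketch of adding such a rule with the AWS CLI, assuming all of your hosts share
-the security group `sg-0123456789abcdef0` (an illustrative ID):
-
-```bash
-# Sketch only: allow IP-in-IP (protocol number 4) from instances in the same security group
-aws ec2 authorize-security-group-ingress \
-  --group-id sg-0123456789abcdef0 \
-  --ip-permissions 'IpProtocol=4,UserIdGroupPairs=[{GroupId=sg-0123456789abcdef0}]'
-```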
-
-## Can Calico do IP multicast?
-
-Calico is a routed L3 network where each pod gets a /32. There's no broadcast domain for pods.
-That means that multicast doesn't just work as a side effect of broadcast. To get multicast to
-work, the host needs to act as a multicast gateway of some kind. Calico's architecture was designed
-to extend to cover that case but it's not part of the product as yet.
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/connectivity.mdx b/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/connectivity.mdx
deleted file mode 100644
index 96c79c5c23..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/connectivity.mdx
+++ /dev/null
@@ -1,91 +0,0 @@
----
-description: Customize the Calico failsafe policy to protect host endpoints.
----
-
-# Creating policy for basic connectivity
-
-When a host endpoint is added, if there is no security policy for that
-endpoint, $[prodname] will default to denying traffic to/from that endpoint,
-except for traffic that is allowed by the [failsafe rules](failsafe.mdx).
-
-While the [failsafe rules](failsafe.mdx) provide protection against removing all
-connectivity to a host:
-
-- They are overly broad in allowing inbound SSH on any interface and
- allowing traffic out to etcd's ports on any interface.
-
-- Depending on your network, they may not cover all the ports that are
-  required; for example, your network may rely on allowing ICMP
-  or DHCP.
-
-Therefore, we recommend creating a failsafe $[prodname] security policy that
-is tailored to your environment. The command below shows one way you might
-do that; it uses `calicoctl` to create a single policy resource, which:
-
-- Applies to all known endpoints.
-- Allows inbound ssh access from a defined “management” subnet.
-- Allows outbound connectivity to etcd on a particular IP; if you have multiple etcd servers you should duplicate the rule for each destination.
-- Allows inbound ICMP.
-- Allows outbound UDP on port 67, for DHCP.
-
-When running this command, replace the placeholders in angle brackets with
-appropriate values for your deployment.
-
-
-
-```bash
-cat <<EOF | calicoctl apply -f -
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  name: failsafe
-spec:
-  selector: all()
-  order: 0
-  ingress:
-  - action: Allow
-    protocol: TCP
-    source:
-      nets: ["<your management CIDR>"]
-    destination:
-      ports: [22]
-  - action: Allow
-    protocol: ICMP
-  egress:
-  - action: Allow
-    protocol: TCP
-    destination:
-      nets: ["<your etcd IP>/32"]
-      ports: [<your etcd ports>]
-  - action: Allow
-    protocol: TCP
-    destination:
-      nets: ["<your Kubernetes API server IP>/32"]
-  - action: Allow
-    protocol: UDP
-    destination:
-      ports: [67]
-EOF
-```
-
-Once you have such a policy in place, you may want to disable the
-[failsafe rules](failsafe.mdx).
-
-:::note
-
-Packets that reach the end of the list of rules fall through to the
-next policy (sorted by the `order` field).
-The selector in the policy, `all()`, will match _all_ endpoints,
-including any workload endpoints. If you have workload endpoints as
-well as host endpoints, then you may wish to use a more restrictive
-selector. For example, you could label management interfaces with
-the label `endpoint_type = management` and then use the selector
-`endpoint_type == "management"`.
-If you are using $[prodname] to network your workloads, you should add
-inbound and outbound rules to allow BGP: add an ingress and an egress rule
-to allow TCP traffic to destination port 179.
-
-:::
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/conntrack.mdx b/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/conntrack.mdx
deleted file mode 100644
index c7d6fa1255..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/conntrack.mdx
+++ /dev/null
@@ -1,29 +0,0 @@
----
-description: Workaround for Linux conntrack if Calico policy is not working as it should.
----
-
-# Connection tracking
-
-$[prodname] uses Linux's connection tracking ('conntrack') as an important
-optimization to its processing. It generally means that $[prodname] only needs to
-check its policies for the first packet in an allowed flow—between a pair of
-IP addresses and ports—and then conntrack automatically allows further
-packets in the same flow, without $[prodname] rechecking every packet.
-
-This can, however, make it look like a $[prodname] policy is not working as it
-should, if policy is changed to disallow a flow that was previously allowed.
-If packets were recently exchanged on the previously allowed flow, and so there
-is conntrack state for that flow that has not yet expired, that conntrack state
-will allow further packets between the same IP addresses and ports, even after
-the $[prodname] policy has been changed.
-
-Per $[prodname]'s current implementation, there are two workarounds for this:
-
-- Somehow ensure that no further packets flow between the relevant IP
- addresses and ports until the conntrack state has expired (typically about
- a minute).
-
-- Use the 'conntrack' tool to delete the relevant conntrack state; for example
- `conntrack -D -p tcp --orig-port-dst 80`.
-
-Then you should observe that the new $[prodname] policy is enforced for new packets.
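-
-For example, to check whether any matching conntrack state still exists before (or after) deleting it, the same filter can be used with a listing command:
-
-```bash
-conntrack -L -p tcp --orig-port-dst 80
-```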
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/failsafe.mdx b/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/failsafe.mdx
deleted file mode 100644
index d203c50af4..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/failsafe.mdx
+++ /dev/null
@@ -1,39 +0,0 @@
----
-description: Avoid cutting off connectivity to hosts because of incorrect network policies.
----
-
-# Failsafe rules
-
-To avoid completely cutting off a host via incorrect or malformed
-policy, $[prodname] has a failsafe mechanism that keeps various pinholes open
-in the firewall.
-
-By default, $[prodname] keeps the following ports open on _all_ host endpoints:
-
-| Port | Protocol | Direction | Purpose |
-| ---- | -------- | ------------------ | ------------------------------ |
-| 22 | TCP | Inbound | SSH access |
-| 53 | UDP | Outbound | DNS queries |
-| 67 | UDP | Outbound | DHCP access |
-| 68 | UDP | Inbound | DHCP access |
-| 179 | TCP | Inbound & Outbound | BGP access (Calico networking) |
-| 6443 | TCP | Inbound & Outbound | Kubernetes API server access |
-
-The lists of failsafe ports can be configured via the configuration parameters
-`FailsafeInboundHostPorts` and `FailsafeOutboundHostPorts`,
-described in [Configuring Felix](../component-resources/node/felix/configuration.mdx).
-They can be disabled by setting each configuration value to an empty list (`[]`).
-
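-For illustration, the inbound list could be overridden like this (a sketch only; it assumes the cluster-wide `default` FelixConfiguration resource, and the ports shown are examples):
-
-```bash
-calicoctl patch felixconfiguration default --patch \
-  '{"spec": {"failsafeInboundHostPorts": [{"protocol": "tcp", "port": 22}, {"protocol": "udp", "port": 68}]}}'
-```
-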
-:::note
-
-Removing the inbound failsafe rules can leave a host inaccessible.
-
-Removing the outbound failsafe rules can leave Felix unable to connect
-to the datastore.
-
-Before disabling the failsafe rules, we recommend creating a policy to
-replace it with more-specific rules for your environment: see
-[Creating policy for basic connectivity](connectivity.mdx).
-
-:::
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/forwarded.mdx b/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/forwarded.mdx
deleted file mode 100644
index 92b6c0e087..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/forwarded.mdx
+++ /dev/null
@@ -1,80 +0,0 @@
----
-description: Learn the subtleties using the applyOnForward option in host endpoint policies.
----
-
-# Apply on forwarded traffic
-
-If `applyOnForward` is `false`, the host endpoint policy applies to traffic to/from
-local processes only.
-
-If `applyOnForward` is `true`, the host endpoint policy also applies to forwarded traffic:
-
-- Traffic that comes in via a host endpoint and is forwarded to a local workload (container/pod/VM).
-- Traffic from a local workload that is forwarded out via a host endpoint.
-- Traffic that comes in via a host endpoint and is forwarded out via another host endpoint.
-
-By default, `applyOnForward` is `false`.
-
-Untracked policies and pre-DNAT policies must have `applyOnForward` set to `true`
-because they apply to all forwarded traffic.
-
-Forwarded traffic is allowed by default if no policies apply to the endpoint and direction. In
-other words, if a host endpoint is configured, but there are no policies with `applyOnForward`
-set to `true` that apply to that host endpoint and traffic direction, forwarded traffic is
-allowed in that direction. For example, if a forwarded flow comes in via a host endpoint but there are
-no Ingress policies with `applyOnForward: true` that apply to that host endpoint, the flow is
-allowed. If there are `applyOnForward: true` policies that select the host endpoint and direction,
-but no rules in the policies allow the traffic, the traffic is denied.
-
-This is different from how $[prodname] treats traffic to or from a local process:
-if a host endpoint is configured and there are no policies that select the host endpoint in
-the traffic direction, or no rules that allow the traffic, the traffic is denied.
-
-Traffic that traverses a host endpoint and is forwarded to a workload endpoint must also pass
-the applicable workload endpoint policy, if any. That is to say, if an `applyOnForward: true` host
-endpoint policy allows the traffic, but workload endpoint policy denies it, the packet is still dropped.
-
-Traffic that ingresses one host endpoint, is forwarded, and egresses another host endpoint must
-pass ingress policy on the first host endpoint and egress policy on the second host endpoint.
-
-:::note
-
-$[prodname]'s handling of host endpoint policy changed in Calico v3.0 in two ways:
-
-- By default, it no longer applies to forwarded traffic at all. If you have an existing
-  policy and you want it to apply to forwarded traffic, you need to add `applyOnForward: true` to the policy.
-- Even with `applyOnForward: true`, the treatment is not quite the same in
-  Calico v3.0 as in previous releases: once a host endpoint is configured,
-  Calico v3.0 allows forwarded traffic through that endpoint by default, whereas
-  previous releases denied forwarded traffic through that endpoint by default.
-  If you want to maintain the default-deny behavior for all host-endpoint forwarded
-  traffic, you can create an empty policy with `applyOnForward` set to `true`
-  that applies to all traffic on all host endpoints, as shown below.
-
-:::
-
-```bash
-calicoctl apply -f - <<EOF
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  name: default-deny-forwarded
-spec:
-  selector: all()
-  applyOnForward: true
-  types:
-  - Ingress
-  - Egress
-EOF
-```
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/objects.mdx b/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/objects.mdx
deleted file mode 100644
index 23dacc565e..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/objects.mdx
+++ /dev/null
@@ -1,124 +0,0 @@
----
-description: To protect a host interface, start by creating a host endpoint object in etcd.
----
-
-# Creating host endpoint objects
-
-For each host endpoint that you want $[prodname] to secure, you'll need to
-create a host endpoint object in etcd. Use the `calicoctl create` command
-to create a host endpoint resource (`HostEndpoint`).
-
-There are two ways to specify the interface that a host endpoint should
-refer to. You can either specify the name of the interface or its
-expected IP address. In either case, you'll also need to know the name given to
-the $[prodname] node running on the host that owns the interface; in most cases this
-will be the same as the hostname of the host.
-
-For example, to secure the interface named `eth0` with IP 10.0.0.1 on
-host `my-host`, run the command below. The name of the endpoint is an
-arbitrary name required for endpoint identification.
-
-When running this command, replace the placeholders in angle brackets with
-appropriate values for your deployment.
-
-```bash
-calicoctl create -f - <<EOF
-apiVersion: projectcalico.org/v3
-kind: HostEndpoint
-metadata:
-  name: <name>
-  labels:
-    role: webserver
-    environment: production
-spec:
-  interfaceName: eth0
-  node: <node name or hostname>
-  profiles: [<list of profile IDs>]
-  expectedIPs: ["10.0.0.1"]
-EOF
-```
-
-:::note
-
-Felix tries to detect the correct hostname for a system. It logs
-out the value it has determined at start-of-day in the following
-format:
-`2015-10-20 17:42:09,813 \[INFO\]\[30149/5\] calico.felix.config 285: Parameter FelixHostname (Felix compute host hostname) has value 'my-hostname' read from None`
-The value (in this case `'my-hostname'`) needs to match the hostname
-used in etcd. Ideally, the host's system hostname should be set
-correctly but if that's not possible, the Felix value can be
-overridden with the FelixHostname configuration setting. See
-configuration for more details.
-
-:::
-
-Where `<list of profile IDs>` is an optional list of security profiles
-to apply to the endpoint, and `labels` contains a set of arbitrary
-key/value pairs that can be used in selector expressions.
-
-
-
-:::note
-
-When rendering security rules on other hosts, $[prodname] uses the
-`expectedIPs` field to resolve label selectors
-to IP addresses. If the `expectedIPs` field is omitted
-then security rules that use labels will fail to match
-this endpoint.
-
-:::
-
-Or, if you knew that the IP address should be 10.0.0.1, but not the name
-of the interface:
-
-```bash
-calicoctl create -f - <<EOF
-apiVersion: projectcalico.org/v3
-kind: HostEndpoint
-metadata:
-  name: <name>
-  labels:
-    role: webserver
-    environment: production
-spec:
-  node: <node name or hostname>
-  profiles: [<list of profile IDs>]
-  expectedIPs: ["10.0.0.1"]
-EOF
-```
-
-After you create host endpoint objects, Felix will start policing
-traffic to/from that interface. If you have no policy or profiles in
-place, then you should see traffic being dropped on the interface.
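-
-You can confirm that the host endpoint objects were created by listing them; for example:
-
-```bash
-calicoctl get hostendpoints -o wide
-```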
-
-:::note
-
-By default, $[prodname] has a failsafe in place that allows certain
-traffic such as ssh. See below for more details on
-disabling/configuring the failsafe rules.
-
-:::
-
-If you don't see traffic being dropped, check the hostname, IP address
-and (if used) the interface name in the configuration. If there was
-something wrong with the endpoint data, Felix will log a validation
-error at `WARNING` level and it will ignore the endpoint:
-
-A `grep` through the Felix logs for the string "Validation failed" should allow
-you to locate the error.
-
-```bash
-grep "Validation failed" /var/log/calico/felix.log
-```
-
-An example error follows.
-
-```
-2016-05-31 12:16:21,651 [WARNING][8657/3] calico.felix.fetcd 1017:
- Validation failed for host endpoint HostEndpointId, treating as
- missing: 'name' or 'expected_ipvX_addrs' must be present.;
- '{ "labels": {"foo": "bar"}, "profile_ids": ["prof1"]}'
-```
-
-The error can be quite long but it should log the precise cause of the
-rejection; in this case `'name' or 'expected\_ipvX\_addrs' must be present` tells us that either the interface's name or its expected IP
-address must be specified.
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/overview.mdx b/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/overview.mdx
deleted file mode 100644
index 6b2b005849..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/overview.mdx
+++ /dev/null
@@ -1,57 +0,0 @@
----
-description: Secure host network interfaces.
----
-
-# Host endpoints
-
-This guide describes how to use $[prodname] to secure the network interfaces
-of the host itself (as opposed to those of any container/VM workloads
-that are present on the host). We call such interfaces "host endpoints",
-to distinguish them from "workload endpoints" (such as containers or VMs).
-
-$[prodname] supports the same rich security policy model for host endpoints (host
-endpoint policy) that it supports for workload endpoints. Host endpoints can
-have labels, and their labels are in the same "namespace" as those of workload
-endpoints. This allows security rules for either type of endpoint to refer to
-the other type (or a mix of the two) using labels and selectors.
-
-$[prodname] does not support setting IPs or policing MAC addresses for host
-interfaces; it assumes that the interfaces are configured by the
-underlying network fabric.
-
-$[prodname] distinguishes workload endpoints from host endpoints by a configurable
-prefix. Unless you happen to have host interfaces whose name matches the
-default for that prefix (`cali`), you won't need to change it. In case you do,
-see the `InterfacePrefix` configuration value at [Configuring Felix](../component-resources/node/felix/configuration.mdx).
-Interfaces that start with a value listed in `InterfacePrefix` are assumed to
-be workload interfaces. Others are treated as host interfaces.
-
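-If you do need to change it, `InterfacePrefix` can be set through Felix configuration; a minimal sketch, assuming the cluster-wide `default` FelixConfiguration resource is in use (the prefix list shown is illustrative only):
-
-```bash
-calicoctl patch felixconfiguration default --patch '{"spec": {"interfacePrefix": "cali,tap"}}'
-```
-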
-$[prodname] blocks all traffic to/from workload interfaces by default,
-allowing traffic only if the interface is known and policy is in place.
-However, for host endpoints, $[prodname] is more lenient; it only polices
-traffic to/from interfaces that it's been explicitly told about. Traffic
-to/from other interfaces is left alone.
-
-You can use host endpoint policy to secure a NAT gateway or router. $[prodname]
-supports selector-based policy when running on a gateway or router, allowing for
-rich, dynamic security policy based on the labels attached to your host endpoints.
-
-You can apply host endpoint policies to three types of traffic:
-
-- Traffic that is terminated locally.
-- Traffic that is forwarded between host endpoints.
-- Traffic that is forwarded between a host endpoint and a workload endpoint on the
- same host.
-
-Set the `applyOnForward` flag to `true` to apply a policy to forwarded traffic.
-See [GlobalNetworkPolicy spec](../resources/globalnetworkpolicy.mdx#spec).
-
-:::note
-
-Both traffic forwarded between host endpoints and traffic forwarded
-between a host endpoint and a workload endpoint on the same host are regarded as
-`forwarded traffic`.
-![](/img/calico-enterprise/bare-metal-packet-flows.svg)
-
-:::
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/pre-dnat.mdx b/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/pre-dnat.mdx
deleted file mode 100644
index 298b001ce8..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/pre-dnat.mdx
+++ /dev/null
@@ -1,46 +0,0 @@
----
-description: Apply rules in a host endpoint policy before any DNAT.
----
-
-# Pre-DNAT policy
-
-Policy for host endpoints can be marked as `preDNAT`. This means that rules in
-that policy should be applied before any DNAT (Destination Network Address
-Translation). This is useful if it is more convenient to specify $[prodname] policy
-in terms of a packet's original destination IP address and port than in terms
-of that packet's destination IP address and port after it has been DNAT'd.
-
-An example is securing access to Kubernetes NodePorts from outside the cluster.
-Traffic from outside is addressed to any node's IP address, on a known
-NodePort, and Kubernetes (kube-proxy) then DNATs that to the IP address of one
-of the pods that provides the corresponding service, and the relevant port
-number on that pod (which is usually different from the NodePort).
-
-As NodePorts are the externally advertised way of connecting to services (and a
-NodePort uniquely identifies a service, whereas an internal port number may
-not), it makes sense to express $[prodname] policy to expose or secure particular
-Services in terms of the corresponding NodePorts. But that is only possible if
-the $[prodname] policy is applied before DNAT changes the NodePort to something
-else. Hence this kind of policy needs `preDNAT` set to `true`.
-
-In addition to being applied before any DNAT, the enforcement of pre-DNAT
-policy differs from that of normal host endpoint policy in three key details,
-reflecting that it is designed for the policing of incoming traffic from
-outside the cluster:
-
-- Pre-DNAT policy may only have ingress rules, not egress. (When incoming
- traffic is allowed by the ingress rules, standard connection tracking is
- sufficient to allow the return path traffic.)
-
-- Pre-DNAT policy is enforced for all traffic arriving through a host
- endpoint, regardless of where that traffic is going, and - in particular -
- even if that traffic is routed to a local workload on the same host.
- (Whereas normal host endpoint policy is skipped, for traffic going to a
- local workload.)
-
-- There is no 'default drop' semantic for pre-DNAT policy (as there is for
- normal host endpoint policy). In other words, if a host endpoint is defined
- but has no pre-DNAT policies that explicitly allow or deny a particular
- incoming packet, that packet is allowed to continue on its way, and will
- then be accepted or dropped according to workload policy (if it is going to
- a local workload) or to normal host endpoint policy (if not).
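-
-As an illustration, a pre-DNAT policy that allows inbound connections to the default NodePort range from a trusted CIDR might look like the following (a sketch only; the policy name, order, label selector, and addresses are placeholders to adapt to your environment):
-
-```bash
-calicoctl apply -f - <<EOF
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  name: allow-nodeports-from-trusted-cidr
-spec:
-  # Assumes your host endpoints carry a label such as "host-endpoint"; adjust to your labels.
-  selector: has(host-endpoint)
-  order: 10
-  preDNAT: true
-  applyOnForward: true
-  ingress:
-  - action: Allow
-    protocol: TCP
-    source:
-      nets: ["<your trusted CIDR>"]
-    destination:
-      ports: ["30000:32767"]
-EOF
-```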
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/selector.mdx b/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/selector.mdx
deleted file mode 100644
index 9fc2d1bc4d..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/host-endpoints/selector.mdx
+++ /dev/null
@@ -1,30 +0,0 @@
----
-description: Apply ordered policies to endpoints that match specific label selectors.
----
-
-# Selector-based policies
-
-We recommend using selector-based security policy with
-host endpoints. This allows ordered policy to be applied to
-endpoints that match particular label selectors.
-
-For example, you could add a second policy for webserver access:
-
-```bash
-cat <<EOF | calicoctl apply -f -
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  name: webserver
-spec:
-  selector: "role == 'webserver'"
-  order: 100
-  ingress:
-  - action: Allow
-    protocol: TCP
-    destination:
-      ports: [80]
-  egress:
-  - action: Allow
-EOF
-```
-
-## Resource definitions
-
-## Component resources
-
-## Configuration on public clouds
-
-## Host endpoints
-
-## Architecture
-
-## Other reference topics
-
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/installation/_README.mdx b/calico-cloud_versioned_docs/version-20-1/reference/installation/_README.mdx
deleted file mode 100644
index a35570764b..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/installation/_README.mdx
+++ /dev/null
@@ -1,7 +0,0 @@
-# Generating API reference docs
-
-The api.html doc in this directory is generated using https://github.com/tmjd/gen-crd-api-reference-docs/tree/kb_v2.
-
-To generate an updated file, change to the root of the docs repository and run
-the appropriate Makefile target. See the `README.md` file for more details on
-how to list available targets and which ones to run.
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/installation/_api.mdx b/calico-cloud_versioned_docs/version-20-1/reference/installation/_api.mdx
deleted file mode 100644
index 4e99e6bbda..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/installation/_api.mdx
+++ /dev/null
@@ -1,21368 +0,0 @@
-
-APIServer installs the Tigera API server and related resources. At most one instance
-of this resource is supported. It must be named “default” or “tigera-secure”.
-
-APIServerDeployment configures the calico-apiserver (or tigera-apiserver in Enterprise) Deployment. If
-used in conjunction with ControlPlaneNodeSelector or ControlPlaneTolerations, then these overrides
-take precedence.
-
-WebApplicationFirewall controls whether or not ModSecurity enforcement is enabled for the cluster.
-When enabled, Services may opt in to having ingress traffic examined by ModSecurity.
-
-Application Layer Policy controls whether or not ALP enforcement is enabled for the cluster.
-When enabled, NetworkPolicies with HTTP Match rules may be defined to opt workloads in to traffic enforcement on the application layer.
-
-ManagerDomain is the domain name of the Manager
-
-- `usernamePrefix` (string, optional): If specified, UsernamePrefix is prepended to each user obtained from the identity provider. Note that Kibana does not support a user prefix, so this prefix is removed from the Kubernetes User when translating log access ClusterRoleBindings into Elastic.
-
-- `groupsPrefix` (string, optional): If specified, GroupsPrefix is prepended to each group obtained from the identity provider. Note that Kibana does not support a groups prefix, so this prefix is removed from Kubernetes Groups when translating log access ClusterRoleBindings into Elastic.
-
-Compliance installs the components required for Tigera compliance reporting. At most one instance
-of this resource is supported. It must be named “tigera-secure”.
-
-IPPools defines the IP Pools that the Egress Gateway pods should be using.
-Either name or CIDR must be specified.
-IPPools must match existing IPPools.
-
-
-
-
-
-
-
-externalNetworks
-
-[]string
-
-
-
-
-
-(Optional)
-
-ExternalNetworks defines the external network names this Egress Gateway is
-associated with.
-ExternalNetworks must match existing external networks.
-
-EgressGatewayFailureDetection is used to configure how Egress Gateway
-determines readiness. If both ICMP, HTTP probes are defined, one ICMP probe and one
-HTTP probe should succeed for Egress Gateways to become ready.
-Otherwise one of ICMP or HTTP probe should succeed for Egress gateways to become
-ready if configured.
-
-ImageSet is used to specify image digests for the images that the operator deploys.
-The name of the ImageSet is expected to be in the format <variant>-<release>.
-The variant used is enterprise if the InstallationSpec Variant is
-TigeraSecureEnterprise otherwise it is calico.
-The release must match the version of the variant that the operator is built to deploy,
-this version can be obtained by passing the --version flag to the operator binary.
-
-Images is the list of images to use digests. All images that the operator will deploy
-must be specified.
-
-
-
-
-
-
-
-
-
-
-### Installation
-
-Installation configures an installation of Calico or Calico Enterprise. At most one instance
-of this resource is supported. It must be named “default”. The Installation API installs core networking
-and network policy components, and provides general install-time configuration.
-
-- `variant` (string, optional): Variant is the product to install - one of Calico or TigeraSecureEnterprise. Default: Calico
-
-- `registry` (string, optional): Registry is the default Docker registry used for component Docker images. If specified then the given value must end with a slash character (`/`) and all images will be pulled from this registry. If not specified then the default registries will be used. A special case value, UseDefault, is supported to explicitly specify the default registries will be used. This option allows configuring the `<registry>` portion of the image reference format.
-
-- `imagePath` (string, optional): ImagePath allows for the path part of an image to be specified. If specified then the specified value will be used as the image path for each image. If not specified or empty, the default for each image will be used. A special case value, UseDefault, is supported to explicitly specify the default image path will be used for each image. This option allows configuring the `<imagePath>` portion of the image reference format.
-
-- `imagePrefix` (string, optional): ImagePrefix allows for the prefix part of an image to be specified. If specified then the given value will be used as a prefix on each image. If not specified or empty, no prefix will be used. A special case value, UseDefault, is supported to explicitly specify the default image prefix will be used for each image.
-
-- `kubernetesProvider` (optional): KubernetesProvider specifies a particular provider of the Kubernetes platform and enables provider-specific configuration. If the specified value is empty, the Operator will attempt to automatically determine the current provider. If the specified value is not empty, the Operator will still attempt auto-detection, but will additionally compare the auto-detected value to the specified value to confirm they match.
-
-- `controlPlaneNodeSelector` (optional): ControlPlaneNodeSelector is used to select control plane nodes on which to run Calico components. This is globally applied to all resources created by the operator excluding daemonsets.
-
-- `controlPlaneTolerations` (optional): ControlPlaneTolerations specify tolerations which are then globally applied to all resources created by the operator.
-
-- `controlPlaneReplicas` (int32, optional): ControlPlaneReplicas defines how many replicas of the control plane core components will be deployed. This field applies to all control plane components that support High Availability. Defaults to 2.
-
-- `nodeMetricsPort` (int32, optional): NodeMetricsPort specifies which port calico/node serves prometheus metrics on. By default, metrics are not enabled. If specified, this overrides any FelixConfiguration resources which may exist. If omitted, then prometheus metrics may still be configured through FelixConfiguration.
-
-- `typhaMetricsPort` (int32, optional): TyphaMetricsPort specifies which port calico/typha serves prometheus metrics on. By default, metrics are not enabled.
-
-- `flexVolumePath` (string, optional): FlexVolumePath optionally specifies a custom path for FlexVolume. If not specified, FlexVolume will be enabled by default. If set to 'None', FlexVolume will be disabled. The default is based on the kubernetesProvider.
-
-- `kubeletVolumePluginPath` (string, optional): KubeletVolumePluginPath optionally specifies enablement of the Calico CSI plugin. If not specified, CSI will be enabled by default. If set to 'None', CSI will be disabled. Default: /var/lib/kubelet
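-
-As a rough illustration of how these fields compose an image reference, an Installation resource overriding the registry, image path, and image prefix might look like this (a sketch only; all values are hypothetical):
-
-```bash
-kubectl apply -f - <<EOF
-apiVersion: operator.tigera.io/v1
-kind: Installation
-metadata:
-  name: default
-spec:
-  registry: registry.example.com/
-  imagePath: mirrored/calico
-  imagePrefix: my-
-EOF
-```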
-
-Deprecated. Please use CalicoNodeDaemonSet, TyphaDeployment, and KubeControllersDeployment.
-ComponentResources can be used to customize the resource requirements for each component.
-Node, Typha, and KubeControllers are supported for installations.
-
-CertificateManagement configures pods to submit a CertificateSigningRequest to the certificates.k8s.io/v1beta1 API in order
-to obtain TLS certificates. This feature requires that you bring your own CSR signing and approval process, otherwise
-pods will be stuck during initialization.
-
-CalicoNodeDaemonSet configures the calico-node DaemonSet. If used in
-conjunction with the deprecated ComponentResources, then these overrides take precedence.
-
-CalicoKubeControllersDeployment configures the calico-kube-controllers Deployment. If used in
-conjunction with the deprecated ComponentResources, then these overrides take precedence.
-
-TyphaDeployment configures the typha Deployment. If used in conjunction with the deprecated
-ComponentResources or TyphaAffinity, then these overrides take precedence.
-
-Deprecated. The CalicoWindowsUpgradeDaemonSet is deprecated and will be removed from the API in the future.
-CalicoWindowsUpgradeDaemonSet configures the calico-windows-upgrade DaemonSet.
-
-Most recently observed state for the Calico or Calico Enterprise installation.
-
-
-
-
-
-
-
-### IntrusionDetection
-
-IntrusionDetection installs the components required for Tigera intrusion detection. At most one instance
-of this resource is supported. It must be named “tigera-secure”.
-
-Most recently observed state for Tigera intrusion detection.
-
-
-
-
-
-
-
-### LogCollector
-
-LogCollector installs the components required for Tigera flow and DNS log collection. At most one instance
-of this resource is supported. It must be named “tigera-secure”. When created, this installs fluentd on all nodes
-configured to collect Tigera log data and export it to Tigera’s Elasticsearch cluster as well as any additionally configured destinations.
-
-Configuration for enabling/disabling process path collection in flowlogs.
-If Enabled, this feature sets hostPID to true in order to read process cmdline.
-Default: Enabled
-
-Most recently observed state for Tigera log collection.
-
-
-
-
-
-
-
-### LogStorage
-
-LogStorage installs the components required for Tigera flow and DNS log storage. At most one instance
-of this resource is supported. It must be named “tigera-secure”. When created, this installs an Elasticsearch cluster for use by
-Calico Enterprise.
-
-Retention defines how long data is retained in the Elasticsearch cluster before it is cleared.
-
-
-
-
-
-
-
-storageClassName
-
-string
-
-
-
-
-
-(Optional)
-
-StorageClassName will populate the PersistentVolumeClaim.StorageClassName that is used to provision disks to the
-Tigera Elasticsearch cluster. The StorageClassName should only be modified when no LogStorage is currently
-active. We recommend choosing a storage class dedicated to Tigera LogStorage only. Otherwise, data retention
-cannot be guaranteed during upgrades. See https://docs.tigera.io/maintenance/upgrading for up-to-date instructions.
-Default: tigera-elasticsearch
-
-
-
-
-
-
-
-dataNodeSelector
-
-map[string]string
-
-
-
-
-
-(Optional)
-
-DataNodeSelector gives you more control over the node that Elasticsearch will run on. The contents of DataNodeSelector will
-be added to the PodSpec of the Elasticsearch nodes. For the pod to be eligible to run on a node, the node must have
-each of the indicated key-value pairs as labels as well as access to the specified StorageClassName.
-
-ECKOperatorStatefulSet configures the ECKOperator StatefulSet. If used in conjunction with the deprecated
-ComponentResources, then these overrides take precedence.
-
-Most recently observed state for Tigera log storage.
-
-
-
-
-
-
-
-### ManagementCluster
-
-The presence of ManagementCluster in your cluster, will configure it to be the management plane to which managed
-clusters can connect. At most one instance of this resource is supported. It must be named “tigera-secure”.
-
-This field specifies the externally reachable address to which your managed cluster will connect. When a managed
-cluster is added, this field is used to populate an easy-to-apply manifest that will connect both clusters.
-Valid examples are: “0.0.0.0:31000”, “example.com:32000”, “[::1]:32500”
-
-TLS provides options for configuring how Managed Clusters can establish an mTLS connection with the Management Cluster.
-
-
-
-
-
-
-
-
-
-
-### ManagementClusterConnection
-
-ManagementClusterConnection represents a link between a managed cluster and a management cluster. At most one
-instance of this resource is supported. It must be named “tigera-secure”.
-
-Specify where the managed cluster can reach the management cluster. Ex.: “10.128.0.10:30449”. A managed cluster
-should be able to access this address. This field is used by managed clusters only.
-
-Manager installs the Calico Enterprise manager graphical user interface. At most one instance
-of this resource is supported. It must be named “tigera-secure”.
-
-ExternalPrometheus optionally configures integration with an external Prometheus for scraping Calico metrics. When
-specified, the operator will render resources in the defined namespace. This option can be useful for configuring
-scraping from git-ops tools without the need of post-installation steps.
-
-Most recently observed state for the PacketCaptureAPI.
-
-
-
-
-
-
-
-### PolicyRecommendation
-
-PolicyRecommendation is the Schema for the policy recommendation API. At most one instance
-of this resource is supported. It must be named “tigera-secure”.
-
-SNIMatch is used to match requests based on the server name for the intended destination server. Matching requests
-will be proxied to the Destination.
-
-
-
-
-
-
-
-destination
-
-string
-
-
-
-
-
-
-Destination is the destination url to proxy the request to.
-
-ForwardingMTLSCert is the certificate used for mTLS between voltron and the destination. Either both ForwardingMTLSCert
-and ForwardingMTLSKey must be specified, or neither can be specified.
-
-ForwardingMTLSKey is the key used for mTLS between voltron and the destination. Either both ForwardingMTLSCert
-and ForwardingMTLSKey must be specified, or neither can be specified.
-
-
-
-
-
-
-
-unauthenticated
-
-bool
-
-
-
-
-
-(Optional)
-
-Unauthenticated says whether the request should go through authentication. This is only applicable if the Target
-is UI.
-
-Elastic configures per-tenant ElasticSearch and Kibana parameters.
-This field is required for clusters using external ES.
-
-
-
-
-
-
-
-controlPlaneReplicas
-
-int32
-
-
-
-
-
-(Optional)
-
-ControlPlaneReplicas defines how many replicas of the control plane core components will be deployed
-in the Tenant’s namespace. Defaults to the controlPlaneReplicas in Installation CR
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named API server Deployment container’s resources.
-If omitted, the API server Deployment will use its default value for this container’s resources.
-If used in conjunction with the deprecated ComponentResources, then this value takes precedence.
-
-APIServerDeploymentInitContainer is an API server Deployment init container.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-name
-
-string
-
-
-
-
-
-
-Name is an enum which identifies the API server Deployment init container by name.
-Supported values are: calico-apiserver-certs-key-cert-provisioner
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named API server Deployment init container’s resources.
-If omitted, the API server Deployment will use its default value for this init container’s resources.
-
-InitContainers is a list of API server init containers.
-If specified, this overrides the specified API server Deployment init containers.
-If omitted, the API server Deployment will use its default values for its init containers.
-
-Containers is a list of API server containers.
-If specified, this overrides the specified API server Deployment containers.
-If omitted, the API server Deployment will use its default values for its containers.
-
-Affinity is a group of affinity scheduling rules for the API server pods.
-If specified, this overrides any affinity that may be set on the API server Deployment.
-If omitted, the API server Deployment will use its default value for affinity.
-WARNING: Please note that this field will override the default API server Deployment affinity.
-
-
-
-
-
-
-
-nodeSelector
-
-map[string]string
-
-
-
-
-
-
-NodeSelector is the API server pod’s scheduling constraints.
-If specified, each of the key/value pairs are added to the API server Deployment nodeSelector provided
-the key does not already exist in the object’s nodeSelector.
-If used in conjunction with ControlPlaneNodeSelector, that nodeSelector is set on the API server Deployment
-and each of this field’s key/value pairs are added to the API server Deployment nodeSelector provided
-the key does not already exist in the object’s nodeSelector.
-If omitted, the API server Deployment will use its default value for nodeSelector.
-WARNING: Please note that this field will modify the default API server Deployment nodeSelector.
-
-TopologySpreadConstraints describes how a group of pods ought to spread across topology
-domains. Scheduler will schedule pods in a way which abides by the constraints.
-All topologySpreadConstraints are ANDed.
-
-Tolerations is the API server pod’s tolerations.
-If specified, this overrides any tolerations that may be set on the API server Deployment.
-If omitted, the API server Deployment will use its default value for tolerations.
-WARNING: Please note that this field will override the default API server Deployment tolerations.
-
-APIServerDeploymentSpec defines configuration for the API server Deployment.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-minReadySeconds
-
-int32
-
-
-
-
-
-(Optional)
-
-MinReadySeconds is the minimum number of seconds for which a newly created Deployment pod should
-be ready without any of its container crashing, for it to be considered available.
-If specified, this overrides any minReadySeconds value that may be set on the API server Deployment.
-If omitted, the API server Deployment will use its default value for minReadySeconds.
-
-APIServerDeployment configures the calico-apiserver (or tigera-apiserver in Enterprise) Deployment. If
-used in conjunction with ControlPlaneNodeSelector or ControlPlaneTolerations, then these overrides
-take precedence.
-
-Conditions represents the latest observed set of conditions for the component. A component may be one or more of
-Ready, Progressing, Degraded, or other custom types.
-
-WebApplicationFirewall controls whether or not ModSecurity enforcement is enabled for the cluster.
-When enabled, Services may opt in to having ingress traffic examined by ModSecurity.
-
-Application Layer Policy controls whether or not ALP enforcement is enabled for the cluster.
-When enabled, NetworkPolicies with HTTP Match rules may be defined to opt workloads in to traffic enforcement on the application layer.
-
-Conditions represents the latest observed set of conditions for the component. A component may be one or more of
-Ready, Progressing, Degraded, or other custom types.
-
-AuthenticationLDAP is the configuration needed to set up LDAP.
-
-- `host` (string): The host and port of the LDAP server. Example: ad.example.com:636
-
-- `startTLS` (bool, optional): StartTLS controls whether to enable the startTLS feature for establishing TLS on an existing LDAP session. If true, the ldap:// protocol is used and a StartTLS command is then issued; otherwise, connections will use the ldaps:// protocol.
-
-AuthenticationOIDC is the configuration needed to set up OIDC.
-
-- `issuerURL` (string): IssuerURL is the URL to the OIDC provider.
-
-- `usernameClaim` (string): UsernameClaim specifies which claim to use from the OIDC provider as the username.
-
-- `requestedScopes` ([]string, optional): RequestedScopes is a list of scopes to request from the OIDC provider. If not provided, the following scopes are requested: ["openid", "email", "profile", "groups", "offline_access"].
-
-- `usernamePrefix` (string, optional): Deprecated. Please use Authentication.Spec.UsernamePrefix instead.
-
-- `groupsClaim` (string, optional): GroupsClaim specifies which claim to use from the OIDC provider as the group.
-
-- `groupsPrefix` (string, optional): Deprecated. Please use Authentication.Spec.GroupsPrefix instead.
-
-- `emailVerification` (optional): Some providers do not include the claim "email_verified" when there is no verification in the user enrollment process or if they are acting as a proxy for another identity provider. By default those tokens are deemed invalid. To skip this check, set the value to "InsecureSkip". Default: Verify
-
-- `promptTypes` (optional): PromptTypes is an optional list of string values that specifies whether the identity provider prompts the end user for re-authentication and consent. See the RFC for more information on prompt types: https://openid.net/specs/openid-connect-core-1_0.html. Default: "Consent"
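-
-For orientation, an Authentication resource using OIDC might look like the following (a sketch only; the manager domain, issuer URL, and claim are hypothetical values):
-
-```bash
-kubectl apply -f - <<EOF
-apiVersion: operator.tigera.io/v1
-kind: Authentication
-metadata:
-  name: tigera-secure
-spec:
-  managerDomain: https://manager.example.com
-  oidc:
-    issuerURL: https://accounts.example.com
-    usernameClaim: email
-EOF
-```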
-
-AuthenticationSpec defines the desired state of Authentication
-
-- `managerDomain` (string): ManagerDomain is the domain name of the Manager.
-
-- `usernamePrefix` (string, optional): If specified, UsernamePrefix is prepended to each user obtained from the identity provider. Note that Kibana does not support a user prefix, so this prefix is removed from the Kubernetes User when translating log access ClusterRoleBindings into Elastic.
-
-- `groupsPrefix` (string, optional): If specified, GroupsPrefix is prepended to each group obtained from the identity provider. Note that Kibana does not support a groups prefix, so this prefix is removed from Kubernetes Groups when translating log access ClusterRoleBindings into Elastic.
-
-Conditions represents the latest observed set of conditions for the component. A component may be one or more of
-Ready, Progressing, Degraded, or other custom types.
-
-Specifies the CNI plugin that will be used in the Calico or Calico Enterprise installation.
-* For KubernetesProvider GKE, this field defaults to GKE.
-* For KubernetesProvider AKS, this field defaults to AzureVNET.
-* For KubernetesProvider EKS, this field defaults to AmazonVPC.
-* If aws-node daemonset exists in kube-system when the Installation resource is created, this field defaults to AmazonVPC.
-* For all other cases this field defaults to Calico.
-
-
-For the value Calico, the CNI plugin binaries and CNI config will be installed as part of the deployment;
-for all other values, the CNI plugin binaries and CNI config are a dependency that is expected
-to be installed separately.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named csi-node-driver DaemonSet container’s resources.
-If omitted, the csi-node-driver DaemonSet will use its default value for this container’s resources.
-
-Containers is a list of csi-node-driver containers.
-If specified, this overrides the specified csi-node-driver DaemonSet containers.
-If omitted, the csi-node-driver DaemonSet will use its default values for its containers.
-
-Affinity is a group of affinity scheduling rules for the csi-node-driver pods.
-If specified, this overrides any affinity that may be set on the csi-node-driver DaemonSet.
-If omitted, the csi-node-driver DaemonSet will use its default value for affinity.
-WARNING: Please note that this field will override the default csi-node-driver DaemonSet affinity.
-
-
-
-
-
-
-
-nodeSelector
-
-map[string]string
-
-
-
-
-
-(Optional)
-
-NodeSelector is the csi-node-driver pod’s scheduling constraints.
-If specified, each of the key/value pairs are added to the csi-node-driver DaemonSet nodeSelector provided
-the key does not already exist in the object’s nodeSelector.
-If omitted, the csi-node-driver DaemonSet will use its default value for nodeSelector.
-WARNING: Please note that this field will modify the default csi-node-driver DaemonSet nodeSelector.
-
-Tolerations is the csi-node-driver pod’s tolerations.
-If specified, this overrides any tolerations that may be set on the csi-node-driver DaemonSet.
-If omitted, the csi-node-driver DaemonSet will use its default value for tolerations.
-WARNING: Please note that this field will override the default csi-node-driver DaemonSet tolerations.
-
-CSINodeDriverDaemonSetSpec defines configuration for the csi-node-driver DaemonSet.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-minReadySeconds
-
-int32
-
-
-
-
-
-(Optional)
-
-MinReadySeconds is the minimum number of seconds for which a newly created DaemonSet pod should
-be ready without any of its container crashing, for it to be considered available.
-If specified, this overrides any minReadySeconds value that may be set on the csi-node-driver DaemonSet.
-If omitted, the csi-node-driver DaemonSet will use its default value for minReadySeconds.
-
-CalicoKubeControllersDeploymentContainer is a calico-kube-controllers Deployment container.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-name
-
-string
-
-
-
-
-
-
-Name is an enum which identifies the calico-kube-controllers Deployment container by name.
-Supported values are: calico-kube-controllers, es-calico-kube-controllers
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named calico-kube-controllers Deployment container’s resources.
-If omitted, the calico-kube-controllers Deployment will use its default value for this container’s resources.
-If used in conjunction with the deprecated ComponentResources, then this value takes precedence.
-
-Containers is a list of calico-kube-controllers containers.
-If specified, this overrides the specified calico-kube-controllers Deployment containers.
-If omitted, the calico-kube-controllers Deployment will use its default values for its containers.
-
-Affinity is a group of affinity scheduling rules for the calico-kube-controllers pods.
-If specified, this overrides any affinity that may be set on the calico-kube-controllers Deployment.
-If omitted, the calico-kube-controllers Deployment will use its default value for affinity.
-WARNING: Please note that this field will override the default calico-kube-controllers Deployment affinity.
-
-
-
-
-
-
-
-nodeSelector
-
-map[string]string
-
-
-
-
-
-
-NodeSelector is the calico-kube-controllers pod’s scheduling constraints.
-If specified, each of the key/value pairs are added to the calico-kube-controllers Deployment nodeSelector provided
-the key does not already exist in the object’s nodeSelector.
-If used in conjunction with ControlPlaneNodeSelector, that nodeSelector is set on the calico-kube-controllers Deployment
-and each of this field’s key/value pairs are added to the calico-kube-controllers Deployment nodeSelector provided
-the key does not already exist in the object’s nodeSelector.
-If omitted, the calico-kube-controllers Deployment will use its default value for nodeSelector.
-WARNING: Please note that this field will modify the default calico-kube-controllers Deployment nodeSelector.
-
-Tolerations is the calico-kube-controllers pod’s tolerations.
-If specified, this overrides any tolerations that may be set on the calico-kube-controllers Deployment.
-If omitted, the calico-kube-controllers Deployment will use its default value for tolerations.
-WARNING: Please note that this field will override the default calico-kube-controllers Deployment tolerations.
-
-CalicoKubeControllersDeploymentSpec defines configuration for the calico-kube-controllers Deployment.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-minReadySeconds
-
-int32
-
-
-
-
-
-(Optional)
-
-MinReadySeconds is the minimum number of seconds for which a newly created Deployment pod should
-be ready without any of its container crashing, for it to be considered available.
-If specified, this overrides any minReadySeconds value that may be set on the calico-kube-controllers Deployment.
-If omitted, the calico-kube-controllers Deployment will use its default value for minReadySeconds.
-
-LinuxDataplane is used to select the dataplane used for Linux nodes. In particular, it
-causes the operator to add required mounts and environment variables for the particular dataplane.
-If not specified, iptables mode is used.
-Default: Iptables
-
-WindowsDataplane is used to select the dataplane used for Windows nodes. In particular, it
-causes the operator to add required mounts and environment variables for the particular dataplane.
-If not specified, it is disabled and the operator will not render the Calico Windows nodes daemonset.
-Default: Disabled
-
-IPPools contains a list of IP pools to create if none exist. At most one IP pool of each
-address family may be specified. If omitted, a single pool will be configured if needed.
-
-
-
-
-
-
-
-mtu
-
-int32
-
-
-
-
-
-(Optional)
-
-MTU specifies the maximum transmission unit to use on the pod network.
-If not specified, Calico will perform MTU auto-detection based on the cluster network.
-
-NodeAddressAutodetectionV4 specifies an approach to automatically detect node IPv4 addresses. If not specified,
-will use default auto-detection settings to acquire an IPv4 address for each node.
-
-NodeAddressAutodetectionV6 specifies an approach to automatically detect node IPv6 addresses. If not specified,
-IPv6 addresses will not be auto-detected.
-
-MultiInterfaceMode configures what will configure multiple interface per pod. Only valid for Calico Enterprise installations
-using the Calico CNI plugin.
-Default: None
-
-Sysctl configures sysctl parameters for tuning plugin
-
-
-
-
-
-
-
-linuxPolicySetupTimeoutSeconds
-
-int32
-
-
-
-
-
-(Optional)
-
-LinuxPolicySetupTimeoutSeconds delays new pods from running containers
-until their policy has been programmed in the dataplane.
-The specified delay defines the maximum amount of time
-that the Calico CNI plugin will wait for policy to be programmed.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named calico-node DaemonSet container’s resources.
-If omitted, the calico-node DaemonSet will use its default value for this container’s resources.
-If used in conjunction with the deprecated ComponentResources, then this value takes precedence.
-
-CalicoNodeDaemonSetInitContainer is a calico-node DaemonSet init container.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-name
-
-string
-
-
-
-
-
-
-Name is an enum which identifies the calico-node DaemonSet init container by name.
-Supported values are: install-cni, hostpath-init, flexvol-driver, mount-bpffs, node-certs-key-cert-provisioner, calico-node-prometheus-server-tls-key-cert-provisioner
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named calico-node DaemonSet init container’s resources.
-If omitted, the calico-node DaemonSet will use its default value for this container’s resources.
-If used in conjunction with the deprecated ComponentResources, then this value takes precedence.
-
-InitContainers is a list of calico-node init containers.
-If specified, this overrides the specified calico-node DaemonSet init containers.
-If omitted, the calico-node DaemonSet will use its default values for its init containers.
-
-Containers is a list of calico-node containers.
-If specified, this overrides the specified calico-node DaemonSet containers.
-If omitted, the calico-node DaemonSet will use its default values for its containers.
-
-Affinity is a group of affinity scheduling rules for the calico-node pods.
-If specified, this overrides any affinity that may be set on the calico-node DaemonSet.
-If omitted, the calico-node DaemonSet will use its default value for affinity.
-WARNING: Please note that this field will override the default calico-node DaemonSet affinity.
-
-
-
-
-
-
-
-nodeSelector
-
-map[string]string
-
-
-
-
-
-(Optional)
-
-NodeSelector is the calico-node pod’s scheduling constraints.
-If specified, each of the key/value pairs are added to the calico-node DaemonSet nodeSelector provided
-the key does not already exist in the object’s nodeSelector.
-If omitted, the calico-node DaemonSet will use its default value for nodeSelector.
-WARNING: Please note that this field will modify the default calico-node DaemonSet nodeSelector.
-
-Tolerations is the calico-node pod’s tolerations.
-If specified, this overrides any tolerations that may be set on the calico-node DaemonSet.
-If omitted, the calico-node DaemonSet will use its default value for tolerations.
-WARNING: Please note that this field will override the default calico-node DaemonSet tolerations.
-
-CalicoNodeDaemonSetSpec defines configuration for the calico-node DaemonSet.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-minReadySeconds
-
-int32
-
-
-
-
-
-(Optional)
-
-MinReadySeconds is the minimum number of seconds for which a newly created DaemonSet pod should
-be ready without any of its container crashing, for it to be considered available.
-If specified, this overrides any minReadySeconds value that may be set on the calico-node DaemonSet.
-If omitted, the calico-node DaemonSet will use its default value for minReadySeconds.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named calico-node-windows DaemonSet container’s resources.
-If omitted, the calico-node-windows DaemonSet will use its default value for this container’s resources.
-If used in conjunction with the deprecated ComponentResources, then this value takes precedence.
-
-CalicoNodeWindowsDaemonSetInitContainer is a calico-node-windows DaemonSet init container.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-name
-
-string
-
-
-
-
-
-
-Name is an enum which identifies the calico-node-windows DaemonSet init container by name.
-Supported values are: install-cni, hostpath-init, flexvol-driver, mount-bpffs, node-certs-key-cert-provisioner, calico-node-windows-prometheus-server-tls-key-cert-provisioner
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named calico-node-windows DaemonSet init container’s resources.
-If omitted, the calico-node-windows DaemonSet will use its default value for this container’s resources.
-If used in conjunction with the deprecated ComponentResources, then this value takes precedence.
-
-InitContainers is a list of calico-node-windows init containers.
-If specified, this overrides the specified calico-node-windows DaemonSet init containers.
-If omitted, the calico-node-windows DaemonSet will use its default values for its init containers.
-
-Containers is a list of calico-node-windows containers.
-If specified, this overrides the specified calico-node-windows DaemonSet containers.
-If omitted, the calico-node-windows DaemonSet will use its default values for its containers.
-
-Affinity is a group of affinity scheduling rules for the calico-node-windows pods.
-If specified, this overrides any affinity that may be set on the calico-node-windows DaemonSet.
-If omitted, the calico-node-windows DaemonSet will use its default value for affinity.
-WARNING: Please note that this field will override the default calico-node-windows DaemonSet affinity.
-
-
-
-
-
-
-
-nodeSelector
-
-map[string]string
-
-
-
-
-
-(Optional)
-
-NodeSelector is the calico-node-windows pod’s scheduling constraints.
-If specified, each of the key/value pairs are added to the calico-node-windows DaemonSet nodeSelector provided
-the key does not already exist in the object’s nodeSelector.
-If omitted, the calico-node-windows DaemonSet will use its default value for nodeSelector.
-WARNING: Please note that this field will modify the default calico-node-windows DaemonSet nodeSelector.
-
-Tolerations is the calico-node-windows pod’s tolerations.
-If specified, this overrides any tolerations that may be set on the calico-node-windows DaemonSet.
-If omitted, the calico-node-windows DaemonSet will use its default value for tolerations.
-WARNING: Please note that this field will override the default calico-node-windows DaemonSet tolerations.
-
-CalicoNodeWindowsDaemonSetSpec defines configuration for the calico-node-windows DaemonSet.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-minReadySeconds
-
-int32
-
-
-
-
-
-(Optional)
-
-MinReadySeconds is the minimum number of seconds for which a newly created DaemonSet pod should
-be ready without any of its containers crashing, for it to be considered available.
-If specified, this overrides any minReadySeconds value that may be set on the calico-node-windows DaemonSet.
-If omitted, the calico-node-windows DaemonSet will use its default value for minReadySeconds.
-
-Deprecated. The CalicoWindowsUpgradeDaemonSet is deprecated and will be removed from the API in the future.
-CalicoWindowsUpgradeDaemonSet is the configuration for the calico-windows-upgrade DaemonSet.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named calico-windows-upgrade DaemonSet container’s resources.
-If omitted, the calico-windows-upgrade DaemonSet will use its default value for this container’s resources.
-
-Containers is a list of calico-windows-upgrade containers.
-If specified, this overrides the specified calico-windows-upgrade DaemonSet containers.
-If omitted, the calico-windows-upgrade DaemonSet will use its default values for its containers.
-
-Affinity is a group of affinity scheduling rules for the calico-windows-upgrade pods.
-If specified, this overrides any affinity that may be set on the calico-windows-upgrade DaemonSet.
-If omitted, the calico-windows-upgrade DaemonSet will use its default value for affinity.
-WARNING: Please note that this field will override the default calico-windows-upgrade DaemonSet affinity.
-
-
-
-
-
-
-
-nodeSelector
-
-map[string]string
-
-
-
-
-
-(Optional)
-
-NodeSelector is the calico-windows-upgrade pod’s scheduling constraints.
-If specified, each of the key/value pairs are added to the calico-windows-upgrade DaemonSet nodeSelector provided
-the key does not already exist in the object’s nodeSelector.
-If omitted, the calico-windows-upgrade DaemonSet will use its default value for nodeSelector.
-WARNING: Please note that this field will modify the default calico-windows-upgrade DaemonSet nodeSelector.
-
-Tolerations is the calico-windows-upgrade pod’s tolerations.
-If specified, this overrides any tolerations that may be set on the calico-windows-upgrade DaemonSet.
-If omitted, the calico-windows-upgrade DaemonSet will use its default value for tolerations.
-WARNING: Please note that this field will override the default calico-windows-upgrade DaemonSet tolerations.
-
-CalicoWindowsUpgradeDaemonSetSpec defines configuration for the calico-windows-upgrade DaemonSet.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-minReadySeconds
-
-int32
-
-
-
-
-
-(Optional)
-
-MinReadySeconds is the minimum number of seconds for which a newly created DaemonSet pod should
-be ready without any of its containers crashing, for it to be considered available.
-If specified, this overrides any minReadySeconds value that may be set on the calico-windows-upgrade DaemonSet.
-If omitted, the calico-windows-upgrade DaemonSet will use its default value for minReadySeconds.
-
-CertificateManagement configures pods to submit a CertificateSigningRequest to the certificates.k8s.io/v1beta1 API in order
-to obtain TLS certificates. This feature requires that you bring your own CSR signing and approval process, otherwise
-pods will be stuck during initialization.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-caCert
-
-[]byte
-
-
-
-
-
-
-Certificate of the authority that signs the CertificateSigningRequests in PEM format.
-
-
-
-
-
-
-
-signerName
-
-string
-
-
-
-
-
-
-When a CSR is issued to the certificates.k8s.io API, the signerName is added to the request in order to accommodate clusters
-with multiple signers.
-Must be formatted as: <my-domain>/<my-signername>.
-
-
-
-
-
-
-
-keyAlgorithm
-
-string
-
-
-
-
-
-(Optional)
-
-Specify the algorithm used by pods to generate a key pair that is associated with the X.509 certificate request.
-Default: RSAWithSize2048
-
-
-
-
-
-
-
-signatureAlgorithm
-
-string
-
-
-
-
-
-(Optional)
-
-Specify the algorithm used for the signature of the X.509 certificate request.
-Default: SHA256WithRSA
-
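-A minimal sketch of a certificateManagement block on the Installation resource, assuming you supply your own CA bundle and signer; the values shown are placeholders.
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: Installation
-metadata:
-  name: default
-spec:
-  certificateManagement:
-    # Base64-encoded PEM bundle of the CA that signs the CSRs (placeholder).
-    caCert: <base64-encoded-ca-bundle>
-    # Must be formatted as <my-domain>/<my-signername>.
-    signerName: example.com/my-signer
-    keyAlgorithm: RSAWithSize2048
-    signatureAlgorithm: SHA256WithRSA
-```
-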
-Containers is a list of Prometheus containers.
-If specified, this overrides the specified Prometheus Deployment containers.
-If omitted, the Prometheus Deployment will use its default values for its containers.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named Compliance Benchmarker DaemonSet container’s resources.
-If omitted, the Compliance Benchmarker DaemonSet will use its default value for this container’s resources.
-
-ComplianceBenchmarkerDaemonSetInitContainer is a Compliance Benchmarker DaemonSet init container.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-name
-
-string
-
-
-
-
-
-
-Name is an enum which identifies the Compliance Benchmarker DaemonSet init container by name.
-Supported values are: tigera-compliance-benchmarker-tls-key-cert-provisioner
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named Compliance Benchmarker DaemonSet init container’s resources.
-If omitted, the Compliance Benchmarker DaemonSet will use its default value for this init container’s resources.
-
-InitContainers is a list of Compliance benchmark init containers.
-If specified, this overrides the specified Compliance Benchmarker DaemonSet init containers.
-If omitted, the Compliance Benchmarker DaemonSet will use its default values for its init containers.
-
-Containers is a list of Compliance benchmark containers.
-If specified, this overrides the specified Compliance Benchmarker DaemonSet containers.
-If omitted, the Compliance Benchmarker DaemonSet will use its default values for its containers.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named compliance controller Deployment container’s resources.
-If omitted, the compliance controller Deployment will use its default value for this container’s resources.
-
-ComplianceControllerDeploymentInitContainer is a compliance controller Deployment init container.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-name
-
-string
-
-
-
-
-
-
-Name is an enum which identifies the compliance controller Deployment init container by name.
-Supported values are: tigera-compliance-controller-tls-key-cert-provisioner
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named compliance controller Deployment init container’s resources.
-If omitted, the compliance controller Deployment will use its default value for this init container’s resources.
-
-InitContainers is a list of compliance controller init containers.
-If specified, this overrides the specified compliance controller Deployment init containers.
-If omitted, the compliance controller Deployment will use its default values for its init containers.
-
-Containers is a list of compliance controller containers.
-If specified, this overrides the specified compliance controller Deployment containers.
-If omitted, the compliance controller Deployment will use its default values for its containers.
-
-InitContainers is a list of ComplianceReporter PodSpec init containers.
-If specified, this overrides the specified ComplianceReporter PodSpec init containers.
-If omitted, the ComplianceReporter PodSpec will use its default values for its init containers.
-
-Containers is a list of ComplianceReporter PodSpec containers.
-If specified, this overrides the specified ComplianceReporter PodSpec containers.
-If omitted, the ComplianceReporter PodSpec will use its default values for its containers.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named ComplianceServer Deployment container’s resources.
-If omitted, the ComplianceServer Deployment will use its default value for this container’s resources.
-
-ComplianceReporterPodTemplateInitContainer is a ComplianceReporter PodTemplate init container.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-name
-
-string
-
-
-
-
-
-
-Name is an enum which identifies the ComplianceReporter PodSpec init container by name.
-Supported values are: tigera-compliance-reporter-tls-key-cert-provisioner
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named ComplianceReporter PodSpec init container’s resources.
-If omitted, the ComplianceReporter PodSpec will use its default value for this init container’s resources.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named ComplianceServer Deployment container’s resources.
-If omitted, the ComplianceServer Deployment will use its default value for this container’s resources.
-
-ComplianceServerDeploymentInitContainer is a ComplianceServer Deployment init container.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-name
-
-string
-
-
-
-
-
-
-Name is an enum which identifies the ComplianceServer Deployment init container by name.
-Supported values are: tigera-compliance-server-tls-key-cert-provisioner
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named ComplianceServer Deployment init container’s resources.
-If omitted, the ComplianceServer Deployment will use its default value for this init container’s resources.
-
-InitContainers is a list of ComplianceServer init containers.
-If specified, this overrides the specified ComplianceServer Deployment init containers.
-If omitted, the ComplianceServer Deployment will use its default values for its init containers.
-
-Containers is a list of ComplianceServer containers.
-If specified, this overrides the specified ComplianceServer Deployment containers.
-If omitted, the ComplianceServer Deployment will use its default values for its containers.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named compliance snapshotter Deployment container’s resources.
-If omitted, the compliance snapshotter Deployment will use its default value for this container’s resources.
-
-ComplianceSnapshotterDeploymentInitContainer is a compliance snapshotter Deployment init container.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-name
-
-string
-
-
-
-
-
-
-Name is an enum which identifies the compliance snapshotter Deployment init container by name.
-Supported values are: tigera-compliance-snapshotter-tls-key-cert-provisioner
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named compliance snapshotter Deployment init container’s resources.
-If omitted, the compliance snapshotter Deployment will use its default value for this init container’s resources.
-
-InitContainers is a list of compliance snapshotter init containers.
-If specified, this overrides the specified compliance snapshotter Deployment init containers.
-If omitted, the compliance snapshotter Deployment will use its default values for its init containers.
-
-Containers is a list of compliance snapshotter containers.
-If specified, this overrides the specified compliance snapshotter Deployment containers.
-If omitted, the compliance snapshotter Deployment will use its default values for its containers.
-
-Conditions represents the latest observed set of conditions for the component. A component may be one or more of
-Ready, Progressing, Degraded or other custom types.
-
-Deprecated. Please use component resource config fields in Installation.Spec instead.
-The ComponentResource struct associates a ResourceRequirements with a component by name
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named Dashboard Job container’s resources.
-If omitted, the Dashboard Job will use its default value for this container’s resources.
-
-Containers is a list of dashboards job containers.
-If specified, this overrides the specified Dashboard job containers.
-If omitted, the Dashboard job will use its default values for its containers.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named Dex Deployment container’s resources.
-If omitted, the Dex Deployment will use its default value for this container’s resources.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named Dex Deployment init container’s resources.
-If omitted, the Dex Deployment will use its default value for this init container’s resources.
-
-InitContainers is a list of Dex init containers.
-If specified, this overrides the specified Dex Deployment init containers.
-If omitted, the Dex Deployment will use its default values for its init containers.
-
-Containers is a list of Dex containers.
-If specified, this overrides the specified Dex Deployment containers.
-If omitted, the Dex Deployment will use its default values for its containers.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named ECKOperator StatefulSet container’s resources.
-If omitted, the ECKOperator StatefulSet will use its default value for this container’s resources.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named ECKOperator StatefulSet init container’s resources.
-If omitted, the ECKOperator StatefulSet will use its default value for this init container’s resources.
-
-InitContainers is a list of ECKOperator StatefulSet init containers.
-If specified, this overrides the specified ECKOperator StatefulSet init containers.
-If omitted, the ECKOperator StatefulSet will use its default values for its init containers.
-
-Containers is a list of ECKOperator StatefulSet containers.
-If specified, this overrides the specified ECKOperator StatefulSet containers.
-If omitted, the ECKOperator StatefulSet will use its default values for its containers.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named EGW Deployment container’s resources.
-If omitted, the EGW Deployment will use its default value for this container’s resources.
-If used in conjunction with the deprecated ComponentResources, then this value takes precedence.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named EGW Deployment init container’s resources.
-If omitted, the EGW Deployment will use its default value for this init container’s resources.
-If used in conjunction with the deprecated ComponentResources, then this value takes precedence.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named EKSLogForwarder Deployment container’s resources.
-If omitted, the EKSLogForwarder Deployment will use its default value for this container’s resources.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named EKSLogForwarder Deployment init container’s resources.
-If omitted, the EKSLogForwarder Deployment will use its default value for this init container’s resources.
-
-InitContainers is a list of EKSLogForwarder init containers.
-If specified, this overrides the specified EKSLogForwarder Deployment init containers.
-If omitted, the EKSLogForwarder Deployment will use its default values for its init containers.
-
-Containers is a list of EKSLogForwarder containers.
-If specified, this overrides the specified EKSLogForwarder Deployment containers.
-If omitted, the EKSLogForwarder Deployment will use its default values for its containers.
-
-InitContainers is a list of EGW init containers.
-If specified, this overrides the specified EGW Deployment init containers.
-If omitted, the EGW Deployment will use its default values for its init containers.
-
-Containers is a list of EGW containers.
-If specified, this overrides the specified EGW Deployment containers.
-If omitted, the EGW Deployment will use its default values for its containers.
-
-Tolerations is the egress gateway pod’s tolerations.
-If specified, this overrides any tolerations that may be set on the EGW Deployment.
-If omitted, the EGW Deployment will use its default value for tolerations.
-
-
-
-
-
-
-
-priorityClassName
-
-string
-
-
-
-
-
-(Optional)
-
-PriorityClassName allows you to specify a PriorityClass resource to be used.
-
-EgressGatewayFailureDetection defines the fields needed for determining Egress Gateway
-readiness.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-healthTimeoutDataStoreSeconds
-
-int32
-
-
-
-
-
-(Optional)
-
-HealthTimeoutDataStoreSeconds defines how long Egress Gateway can fail to connect
-to the datastore before reporting not ready.
-This value must be greater than 0.
-Default: 90
-
-ICMPProbe defines the outgoing ICMP probes that Egress Gateway will use to
-verify its upstream connection. Egress Gateway will report not ready if all
-fail. Timeout must be greater than interval.
-
-HTTPProbe defines the outgoing HTTP probes that Egress Gateway will use to
-verify its upstream connection. Egress Gateway will report not ready if all
-fail. Timeout must be greater than interval.
-
-EgressGatewayMetadata contains the standard Kubernetes labels and annotations fields.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-labels
-
-map[string]string
-
-
-
-
-
-(Optional)
-
-Labels is a map of string keys and values that may match replica set and
-service selectors. Each of these key/value pairs are added to the
-object’s labels provided the key does not already exist in the object’s labels.
-If not specified will default to projectcalico.org/egw:[name], where [name] is
-the name of the Egress Gateway resource.
-
-
-
-
-
-
-
-annotations
-
-map[string]string
-
-
-
-
-
-(Optional)
-
-Annotations is a map of arbitrary non-identifying metadata. Each of these
-key/value pairs are added to the object’s annotations provided the key does not
-already exist in the object’s annotations.
-
-IPPools defines the IP Pools that the Egress Gateway pods should be using.
-Either name or CIDR must be specified.
-IPPools must match existing IPPools.
-
-
-
-
-
-
-
-externalNetworks
-
-[]string
-
-
-
-
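-For illustration, a minimal sketch of enabling firstFound autodetection for IPv4 through the Installation resource's calicoNetwork section:
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: Installation
-metadata:
-  name: default
-spec:
-  calicoNetwork:
-    nodeAddressAutodetectionV4:
-      # Select an interface using the default best-effort matching rules.
-      firstFound: true
-```
-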
-
-(Optional)
-
-ExternalNetworks defines the external network names this Egress Gateway is
-associated with.
-ExternalNetworks must match existing external networks.
-
-EgressGatewayFailureDetection is used to configure how an Egress Gateway
-determines readiness. If both ICMP and HTTP probes are defined, at least one ICMP probe and one
-HTTP probe must succeed for the Egress Gateway to become ready.
-If only one probe type is configured, at least one probe of that type must succeed.
-
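-A minimal sketch of an EgressGateway resource using the IP pool and failure-detection fields described above; names and values are illustrative.
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: EgressGateway
-metadata:
-  name: egress-gateway
-  namespace: default
-spec:
-  ipPools:
-    # Either name or cidr must be specified, and it must match an existing IPPool.
-    - name: egress-ippool
-  egressGatewayFailureDetection:
-    # Report not-ready if the datastore is unreachable for this long.
-    healthTimeoutDataStoreSeconds: 90
-```
-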
-Conditions represents the latest observed set of conditions for the component. A component may be one or more of
-Ready, Progressing, Degraded or other custom types.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named ElasticsearchMetricsDeployment container’s resources.
-If omitted, the ElasticsearchMetrics Deployment will use its default value for this container’s resources.
-
-ElasticsearchMetricsDeploymentInitContainer is an ElasticsearchMetricsDeployment init container.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-name
-
-string
-
-
-
-
-
-
-Name is an enum which identifies the ElasticsearchMetricsDeployment init container by name.
-Supported values are: tigera-ee-elasticsearch-metrics-tls-key-cert-provisioner
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named ElasticsearchMetricsDeployment init container’s resources.
-If omitted, the ElasticsearchMetrics Deployment will use its default value for this init container’s resources.
-
-InitContainers is a list of ElasticsearchMetricsDeployment init containers.
-If specified, this overrides the specified ElasticsearchMetricsDeployment init containers.
-If omitted, the ElasticsearchMetrics Deployment will use its default values for its init containers.
-
-Containers is a list of ElasticsearchMetricsDeployment containers.
-If specified, this overrides the specified ElasticsearchMetricsDeployment containers.
-If omitted, the ElasticsearchMetrics Deployment will use its default values for its containers.
-
-Secret to mount to read bearer token for scraping targets.
-Recommended: when unset, the operator will create a Secret, a ClusterRole and a ClusterRoleBinding.
-
-Timeout after which the scrape is ended.
-If not specified, the Prometheus global scrape timeout is used unless it is less than Interval, in which case the latter is used.
-
-
-
-
-
-
-
-honorLabels
-
-bool
-
-
-
-
-
-
-HonorLabels chooses the metric’s labels on collisions with target labels.
-
-
-
-
-
-
-
-honorTimestamps
-
-bool
-
-
-
-
-
-
-HonorTimestamps controls whether Prometheus respects the timestamps present in scraped data.
-
-The number of additional ingress proxy hops from the right side of the
-x-forwarded-for HTTP header to trust when determining the origin client’s
-IP address. 0 is permitted, but >=1 is the typical setting.
-
-
-
-
-
-
-
-useRemoteAddress
-
-bool
-
-
-
-
-
-(Optional)
-
-If set to true, the Envoy connection manager will use the real remote address
-of the client connection when determining internal versus external origin and
-manipulating various headers.
-
-When ServiceMonitor is specified, the operator will create a ServiceMonitor object in the namespace. It is recommended
-that you configure labels if you want your prometheus instance to pick up the configuration automatically.
-The operator will configure 1 endpoint by default:
-- Params to scrape all metrics available in Calico Enterprise.
-- BearerTokenSecret (If not overridden, the operator will also create corresponding RBAC that allows authz to the metrics.)
-- TLSConfig, containing the caFile and serverName.
-
-
-
-
-
-
-
-namespace
-
-string
-
-
-
-
-
-
-Namespace is the namespace where the operator will create resources for your Prometheus instance. The namespace
-must be created before the operator will create Prometheus resources.
-
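-Assuming these external-Prometheus fields are surfaced through the Monitor resource's externalPrometheus section (an assumption, not stated in this reference), a minimal sketch might be:
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: Monitor
-metadata:
-  name: tigera-secure
-spec:
-  externalPrometheus:
-    # The namespace must exist before the operator creates resources in it.
-    namespace: external-prometheus
-    serviceMonitor:
-      labels:
-        k8s-app: tigera-prometheus
-```
-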
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named Fluentd DaemonSet container’s resources.
-If omitted, the Fluentd DaemonSet will use its default value for this container’s resources.
-
-FluentdDaemonSetInitContainer is a Fluentd DaemonSet init container.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-name
-
-string
-
-
-
-
-
-
-Name is an enum which identifies the Fluentd DaemonSet init container by name.
-Supported values are: tigera-fluentd-prometheus-tls-key-cert-provisioner
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named Fluentd DaemonSet init container’s resources.
-If omitted, the Fluentd DaemonSet will use its default value for this init container’s resources.
-
-InitContainers is a list of Fluentd DaemonSet init containers.
-If specified, this overrides the specified Fluentd DaemonSet init containers.
-If omitted, the Fluentd DaemonSet will use its default values for its init containers.
-
-Containers is a list of Fluentd DaemonSet containers.
-If specified, this overrides the specified Fluentd DaemonSet containers.
-If omitted, the Fluentd DaemonSet will use its default values for its containers.
-
-The following list contains field pairs that are used to match a user to a group. It adds an additional
-requirement to the filter that an attribute in the group must match the user’s
-attribute value.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named guardian Deployment container’s resources.
-If omitted, the guardian Deployment will use its default value for this container’s resources.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named guardian Deployment init container’s resources.
-If omitted, the guardian Deployment will use its default value for this init container’s resources.
-
-InitContainers is a list of guardian init containers.
-If specified, this overrides the specified guardian Deployment init containers.
-If omitted, the guardian Deployment will use its default values for its init containers.
-
-Containers is a list of guardian containers.
-If specified, this overrides the specified guardian Deployment containers.
-If omitted, the guardian Deployment will use its default values for its containers.
-
-Specifies the IPAM plugin that will be used in the Calico or Calico Enterprise installation.
-* For CNI Plugin Calico, this field defaults to Calico.
-* For CNI Plugin GKE, this field defaults to HostLocal.
-* For CNI Plugin AzureVNET, this field defaults to AzureVNET.
-* For CNI Plugin AmazonVPC, this field defaults to AmazonVPC.
-
-
-The IPAM plugin is installed and configured only if the CNI plugin is set to Calico;
-for all other values of the CNI plugin, the plugin binaries and CNI config are dependencies
-that are expected to be installed separately.
-
-Image is an image that the operator deploys; instead of using the built-in tag,
-the operator will use the Digest as the image identifier.
-The value should be the image name without registry or tag or digest.
-For the image docker.io/calico/node:v3.17.1 it should be represented as calico/node
-
-
-
-
-
-
-
-digest
-
-string
-
-
-
-
-
-
-Digest is the image identifier that will be used for the Image.
-The field should not include a leading @ and must be prefixed with sha256:.
-
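-A minimal ImageSet sketch using the image and digest fields described above; the name and digest are placeholders.
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: ImageSet
-metadata:
-  # The name typically encodes the variant and version (placeholder).
-  name: calico-v3.17.1
-spec:
-  images:
-    - image: calico/node
-      digest: sha256:<image-digest>
-```
-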
-Variant is the product to install - one of Calico or TigeraSecureEnterprise
-Default: Calico
-
-
-
-
-
-
-
-registry
-
-string
-
-
-
-
-
-(Optional)
-
-Registry is the default Docker registry used for component Docker images.
-If specified then the given value must end with a slash character (/) and all images will be pulled from this registry.
-If not specified then the default registries will be used. A special case value, UseDefault, is
-supported to explicitly specify the default registries will be used.
-
-This option allows configuring the <registry> portion of the above format.
-
-
-
-
-
-
-
-imagePath
-
-string
-
-
-
-
-
-(Optional)
-
-ImagePath allows for the path part of an image to be specified. If specified
-then the specified value will be used as the image path for each image. If not specified
-or empty, the default for each image will be used.
-A special case value, UseDefault, is supported to explicitly specify the default
-image path will be used for each image.
-
-This option allows configuring the <imagePath> portion of the above format.
-
-
-
-
-
-
-
-imagePrefix
-
-string
-
-
-
-
-
-(Optional)
-
-ImagePrefix allows for the prefix part of an image to be specified. If specified
-then the given value will be used as a prefix on each image. If not specified
-or empty, no prefix will be used.
-A special case value, UseDefault, is supported to explicitly specify the default
-image prefix will be used for each image.
-
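-For illustration only, a sketch of the registry, imagePath, and imagePrefix fields on the Installation resource; the registry, path, and prefix are placeholders.
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: Installation
-metadata:
-  name: default
-spec:
-  # Must end with a slash; all images are pulled from this registry.
-  registry: my-registry.example.com/
-  imagePath: custom/path
-  imagePrefix: myprefix-
-```
-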
-KubernetesProvider specifies a particular provider of the Kubernetes platform and enables provider-specific configuration.
-If the specified value is empty, the Operator will attempt to automatically determine the current provider.
-If the specified value is not empty, the Operator will still attempt auto-detection, but
-will additionally compare the auto-detected value to the specified value to confirm they match.
-
-ControlPlaneNodeSelector is used to select control plane nodes on which to run Calico
-components. This is globally applied to all resources created by the operator excluding daemonsets.
-
-ControlPlaneTolerations specify tolerations which are then globally applied to all resources
-created by the operator.
-
-
-
-
-
-
-
-controlPlaneReplicas
-
-int32
-
-
-
-
-
-(Optional)
-
-ControlPlaneReplicas defines how many replicas of the control plane core components will be deployed.
-This field applies to all control plane components that support High Availability. Defaults to 2.
-
-
-
-
-
-
-
-nodeMetricsPort
-
-int32
-
-
-
-
-
-(Optional)
-
-NodeMetricsPort specifies which port calico/node serves prometheus metrics on. By default, metrics are not enabled.
-If specified, this overrides any FelixConfiguration resources which may exist. If omitted, then
-prometheus metrics may still be configured through FelixConfiguration.
-
-
-
-
-
-
-
-typhaMetricsPort
-
-int32
-
-
-
-
-
-(Optional)
-
-TyphaMetricsPort specifies which port calico/typha serves prometheus metrics on. By default, metrics are not enabled.
-
-
-
-
-
-
-
-flexVolumePath
-
-string
-
-
-
-
-
-(Optional)
-
-FlexVolumePath optionally specifies a custom path for FlexVolume. If not specified, FlexVolume will be
-enabled by default. If set to ‘None’, FlexVolume will be disabled. The default is based on the
-kubernetesProvider.
-
-
-
-
-
-
-
-kubeletVolumePluginPath
-
-string
-
-
-
-
-
-(Optional)
-
-KubeletVolumePluginPath optionally specifies enablement of the Calico CSI plugin. If not specified,
-CSI will be enabled by default. If set to ‘None’, CSI will be disabled.
-Default: /var/lib/kubelet
-
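-A minimal sketch combining several of the Installation spec fields above; the port numbers and path are illustrative.
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: Installation
-metadata:
-  name: default
-spec:
-  controlPlaneReplicas: 2
-  # Enables calico/node Prometheus metrics on this port.
-  nodeMetricsPort: 9091
-  typhaMetricsPort: 9093
-  kubeletVolumePluginPath: /var/lib/kubelet
-```
-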
-Deprecated. Please use CalicoNodeDaemonSet, TyphaDeployment, and KubeControllersDeployment.
-ComponentResources can be used to customize the resource requirements for each component.
-Node, Typha, and KubeControllers are supported for installations.
-
-CertificateManagement configures pods to submit a CertificateSigningRequest to the certificates.k8s.io/v1beta1 API in order
-to obtain TLS certificates. This feature requires that you bring your own CSR signing and approval process, otherwise
-pods will be stuck during initialization.
-
-CalicoNodeDaemonSet configures the calico-node DaemonSet. If used in
-conjunction with the deprecated ComponentResources, then these overrides take precedence.
-
-CalicoKubeControllersDeployment configures the calico-kube-controllers Deployment. If used in
-conjunction with the deprecated ComponentResources, then these overrides take precedence.
-
-TyphaDeployment configures the typha Deployment. If used in conjunction with the deprecated
-ComponentResources or TyphaAffinity, then these overrides take precedence.
-
-Deprecated. The CalicoWindowsUpgradeDaemonSet is deprecated and will be removed from the API in the future.
-CalicoWindowsUpgradeDaemonSet configures the calico-windows-upgrade DaemonSet.
-
-Variant is the most recently observed installed variant - one of Calico or TigeraSecureEnterprise
-
-
-
-
-
-
-
-mtu
-
-int32
-
-
-
-
-
-
-MTU is the most recently observed value for pod network MTU. This may be an explicitly
-configured value, or based on Calico’s native auto-detection.
-
-
-
-
-
-
-
-imageSet
-
-string
-
-
-
-
-
-(Optional)
-
-ImageSet is the name of the ImageSet being used, if there is an ImageSet
-that is being used. If an ImageSet is not being used then this will not be set.
-
-Conditions represents the latest observed set of conditions for the component. A component may be one or more of
-Ready, Progressing, Degraded or other custom types.
-
-IntrusionDetectionControllerDeploymentContainer is an IntrusionDetectionController Deployment container.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-name
-
-string
-
-
-
-
-
-
-Name is an enum which identifies the IntrusionDetectionController Deployment container by name.
-Supported values are: controller, webhooks-processor
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named IntrusionDetectionController Deployment container’s resources.
-If omitted, the IntrusionDetection Deployment will use its default value for this container’s resources.
-
-IntrusionDetectionControllerDeploymentInitContainer is an IntrusionDetectionController Deployment init container.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-name
-
-string
-
-
-
-
-
-
-Name is an enum which identifies the IntrusionDetectionController Deployment init container by name.
-Supported values are: intrusion-detection-tls-key-cert-provisioner
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named IntrusionDetectionController Deployment init container’s resources.
-If omitted, the IntrusionDetectionController Deployment will use its default value for this init container’s resources.
-
-InitContainers is a list of IntrusionDetectionController init containers.
-If specified, this overrides the specified IntrusionDetectionController Deployment init containers.
-If omitted, the IntrusionDetectionController Deployment will use its default values for its init containers.
-
-Containers is a list of IntrusionDetectionController containers.
-If specified, this overrides the specified IntrusionDetectionController Deployment containers.
-If omitted, the IntrusionDetectionController Deployment will use its default values for its containers.
-
-Conditions represents the latest observed set of conditions for the component. A component may be one or more of
-Ready, Progressing, Degraded or other custom types.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named Kibana container’s resources.
-If omitted, the Kibana Deployment will use its default value for this container’s resources.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named Kibana Deployment init container’s resources.
-If omitted, the Kibana Deployment will use its default value for this init container’s resources.
-If used in conjunction with the deprecated ComponentResources, then this value takes precedence.
-
-InitContainers is a list of Kibana init containers.
-If specified, this overrides the specified Kibana Deployment init containers.
-If omitted, the Kibana Deployment will use its default values for its init containers.
-
-Containers is a list of Kibana containers.
-If specified, this overrides the specified Kibana Deployment containers.
-If omitted, the Kibana Deployment will use its default values for its containers.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named L7LogCollector DaemonSet container’s resources.
-If omitted, the L7LogCollector DaemonSet will use its default value for this container’s resources.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named L7LogCollector DaemonSet init container’s resources.
-If omitted, the L7LogCollector DaemonSet will use its default value for this init container’s resources.
-
-InitContainers is a list of L7LogCollector DaemonSet init containers.
-If specified, this overrides the specified L7LogCollector DaemonSet init containers.
-If omitted, the L7LogCollector DaemonSet will use its default values for its init containers.
-
-Containers is a list of L7LogCollector DaemonSet containers.
-If specified, this overrides the specified L7LogCollector DaemonSet containers.
-If omitted, the L7LogCollector DaemonSet will use its default values for its containers.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named linseed Deployment container’s resources.
-If omitted, the linseed Deployment will use its default value for this container’s resources.
-
-LinseedDeploymentInitContainer is a linseed Deployment init container.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-name
-
-string
-
-
-
-
-
-
-Name is an enum which identifies the linseed Deployment init container by name.
-Supported values are: tigera-secure-linseed-token-tls-key-cert-provisioner,tigera-secure-linseed-cert-key-cert-provisioner
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named linseed Deployment init container’s resources.
-If omitted, the linseed Deployment will use its default value for this init container’s resources.
-
-InitContainers is a list of linseed init containers.
-If specified, this overrides the specified linseed Deployment init containers.
-If omitted, the linseed Deployment will use its default values for its init containers.
-
-Containers is a list of linseed containers.
-If specified, this overrides the specified linseed Deployment containers.
-If omitted, the linseed Deployment will use its default values for its containers.
-
-This setting enables or disables log collection.
-Allowed values are Enabled or Disabled.
-
-
-
-
-
-
-
-logIntervalSeconds
-
-int64
-
-
-
-
-
-(Optional)
-
-Interval in seconds for sending L7 log information for processing.
-Default: 5 sec
-
-
-
-
-
-
-
-logRequestsPerInterval
-
-int64
-
-
-
-
-
-(Optional)
-
-Maximum number of unique L7 logs that are sent per LogIntervalSeconds.
-Adjust this to limit the number of L7 logs sent per LogIntervalSeconds
-to Felix for further processing; use a negative number to ignore limits.
-Default: -1
-
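-Assuming these L7 log-collection fields are configured through the ApplicationLayer resource's logCollection section (an assumption, not stated here), a minimal sketch might be:
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: ApplicationLayer
-metadata:
-  name: tigera-secure
-spec:
-  logCollection:
-    collectLogs: Enabled
-    logIntervalSeconds: 5
-    # A negative value means no per-interval limit.
-    logRequestsPerInterval: -1
-```
-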
-Configuration for enabling/disabling process path collection in flowlogs.
-If Enabled, this feature sets hostPID to true in order to read process cmdline.
-Default: Enabled
-
-Conditions represents the latest observed set of conditions for the component. A component may be one or more of
-Ready, Progressing, Degraded or other custom types.
-
-Retention defines how long data is retained in the Elasticsearch cluster before it is cleared.
-
-
-
-
-
-
-
-storageClassName
-
-string
-
-
-
-
-
-(Optional)
-
-StorageClassName will populate the PersistentVolumeClaim.StorageClassName that is used to provision disks to the
-Tigera Elasticsearch cluster. The StorageClassName should only be modified when no LogStorage is currently
-active. We recommend choosing a storage class dedicated to Tigera LogStorage only. Otherwise, data retention
-cannot be guaranteed during upgrades. See https://docs.tigera.io/maintenance/upgrading for up-to-date instructions.
-Default: tigera-elasticsearch
-
-
-
-
-
-
-
-dataNodeSelector
-
-map[string]string
-
-
-
-
-
-(Optional)
-
-DataNodeSelector gives you more control over the node that Elasticsearch will run on. The contents of DataNodeSelector will
-be added to the PodSpec of the Elasticsearch nodes. For the pod to be eligible to run on a node, the node must have
-each of the indicated key-value pairs as labels as well as access to the specified StorageClassName.
-
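-A minimal LogStorage sketch using the storageClassName and dataNodeSelector fields above; the selector label is illustrative.
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: LogStorage
-metadata:
-  name: tigera-secure
-spec:
-  storageClassName: tigera-elasticsearch
-  dataNodeSelector:
-    # Elasticsearch data nodes will only schedule onto nodes with this label.
-    kubernetes.io/arch: amd64
-```
-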
-ECKOperatorStatefulSet configures the ECKOperator StatefulSet. If used in conjunction with the deprecated
-ComponentResources, then these overrides take precedence.
-
-LogStorageStatus defines the observed state of Tigera flow and DNS log storage.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-state
-
-string
-
-
-
-
-
-
-State provides user-readable status.
-
-
-
-
-
-
-
-elasticsearchHash
-
-string
-
-
-
-
-
-
-ElasticsearchHash represents the current revision and configuration of the installed Elasticsearch cluster. This
-is an opaque string which can be monitored for changes to perform actions when Elasticsearch is modified.
-
-
-
-
-
-
-
-kibanaHash
-
-string
-
-
-
-
-
-
-KibanaHash represents the current revision and configuration of the installed Kibana dashboard. This
-is an opaque string which can be monitored for changes to perform actions when Kibana is modified.
-
-Conditions represents the latest observed set of conditions for the component. A component may be one or more of
-Ready, Progressing, Degraded or other custom types.
-
-ManagementClusterConnectionSpec defines the desired state of ManagementClusterConnection
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-managementClusterAddr
-
-string
-
-
-
-
-
-(Optional)
-
-Specify where the managed cluster can reach the management cluster. Ex.: “10.128.0.10:30449”. A managed cluster
-should be able to access this address. This field is used by managed clusters only.
-
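-A minimal ManagementClusterConnection sketch for a managed cluster; the address is a placeholder.
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: ManagementClusterConnection
-metadata:
-  name: tigera-secure
-spec:
-  managementClusterAddr: "10.128.0.10:30449"
-```
-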
-Conditions represents the latest observed set of conditions for the component. A component may be one or more of
-Ready, Progressing, Degraded or other custom types.
-
-ManagementClusterSpec defines the desired state of a ManagementCluster
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-address
-
-string
-
-
-
-
-
-(Optional)
-
-This field specifies the externally reachable address to which your managed cluster will connect. When a managed
-cluster is added, this field is used to populate an easy-to-apply manifest that will connect both clusters.
-Valid examples are: “0.0.0.0:31000”, “example.com:32000”, “[::1]:32500”
-
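-A minimal ManagementCluster sketch using the address field above; the host and port are placeholders.
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: ManagementCluster
-metadata:
-  name: tigera-secure
-spec:
-  address: example.com:32000
-```
-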
-CA indicates which verification method the tunnel client should use to verify the tunnel server’s identity.
-
-
-When left blank or set to ‘Tigera’, the tunnel client will expect a self-signed cert to be included in the certificate bundle
-and will expect the cert to have a Common Name (CN) of ‘voltron’.
-
-
-When set to ‘Public’, the tunnel client will use its installed system certs and will use the managementClusterAddr to verify the tunnel server’s identity.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named Manager Deployment container’s resources.
-If omitted, the Manager Deployment will use its default value for this container’s resources.
-
-ManagerDeploymentInitContainer is a Manager Deployment init container.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-name
-
-string
-
-
-
-
-
-
-Name is an enum which identifies the Manager Deployment init container by name.
-Supported values are: manager-tls-key-cert-provisioner, internal-manager-tls-key-cert-provisioner, tigera-voltron-linseed-tls-key-cert-provisioner
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named Manager Deployment init container’s resources.
-If omitted, the Manager Deployment will use its default value for this init container’s resources.
-If used in conjunction with the deprecated ComponentResources, then this value takes precedence.
-
-InitContainers is a list of Manager init containers.
-If specified, this overrides the specified Manager Deployment init containers.
-If omitted, the Manager Deployment will use its default values for its init containers.
-
-Containers is a list of Manager containers.
-If specified, this overrides the specified Manager Deployment containers.
-If omitted, the Manager Deployment will use its default values for its containers.
-
-Conditions represents the latest observed set of conditions for the component. A component may be one or more of
-Ready, Progressing, Degraded or other customer types.
-
-Metadata contains the standard Kubernetes labels and annotations fields.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-labels
-
-map[string]string
-
-
-
-
-
-(Optional)
-
-Labels is a map of string keys and values that may match replicaset and
-service selectors. Each of these key/value pairs are added to the
-object’s labels provided the key does not already exist in the object’s labels.
-
-
-
-
-
-
-
-annotations
-
-map[string]string
-
-
-
-
-
-(Optional)
-
-Annotations is a map of arbitrary non-identifying metadata. Each of these
-key/value pairs are added to the object’s annotations provided the key does not
-already exist in the object’s annotations.
-
-ExternalPrometheus optionally configures integration with an external Prometheus for scraping Calico metrics. When
-specified, the operator will render resources in the defined namespace. This option can be useful for configuring
-scraping from git-ops tools without the need for post-installation steps.
-
-Conditions represents the latest observed set of conditions for the component. A component may be one or more of
-Ready, Progressing, Degraded or other custom types.
-
-NodeAddressAutodetection provides configuration options for auto-detecting node addresses. At most one option
-can be used. If no detection option is specified, then IP auto detection will be disabled for this address family and IPs
-must be specified directly on the Node resource.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-firstFound
-
-bool
-
-
-
-
-
-(Optional)
-
-FirstFound uses default interface matching parameters to select an interface, performing best-effort
-filtering based on well-known interface names.
-
-The scheduler will prefer to schedule pods to nodes that satisfy
-the affinity expressions specified by this field, but it may choose
-a node that violates one or more of the expressions.
-
-WARNING: Please note that if the affinity requirements specified by this field are not met at
-scheduling time, the pod will NOT be scheduled onto the node.
-There is no fallback to other affinity rules with this setting.
-This may cause networking disruption or even catastrophic failure!
-PreferredDuringSchedulingIgnoredDuringExecution should be used for affinity
-unless there is a specific well understood reason to use RequiredDuringSchedulingIgnoredDuringExecution and
-you can guarantee that the RequiredDuringSchedulingIgnoredDuringExecution will always have sufficient nodes to satisfy the requirement.
-NOTE: RequiredDuringSchedulingIgnoredDuringExecution is set by default for AKS nodes,
-to avoid scheduling Typhas on virtual-nodes.
-If the affinity requirements specified by this field cease to be met
-at some point during pod execution (e.g. due to an update), the system
-may or may not try to eventually evict the pod from its node.
-
-SelectionAttributes defines K8s node attributes a NodeSet should use when setting the Node Affinity selectors and
-Elasticsearch cluster awareness attributes for the Elasticsearch nodes. The list of SelectionAttributes are used
-to define Node Affinities and set the node awareness configuration in the running Elasticsearch instance.
-
-NodeSetSelectionAttribute defines a K8s node “attribute” the Elasticsearch nodes should be aware of. The “Name” and “Value”
-are used together to set the “awareness” attributes in Elasticsearch, while the “NodeLabel” and “Value” are used together
-to define Node Affinity for the Pods created for the Elasticsearch nodes.
-
-OIDCType defines how OIDC is configured for Tigera Enterprise. Dex should be the best option for most use-cases.
-The Tigera option can help in specific use-cases, for instance, when you are unable to configure a client secret.
-One of: Dex, Tigera
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named PacketCaptureAPI Deployment container’s resources.
-If omitted, the PacketCaptureAPI Deployment will use its default value for this container’s resources.
-
-PacketCaptureAPIDeploymentInitContainer is a PacketCaptureAPI Deployment init container.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-name
-
-string
-
-
-
-
-
-
-Name is an enum which identifies the PacketCaptureAPI Deployment init container by name.
-Supported values are: tigera-packetcapture-server-tls-key-cert-provisioner
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named PacketCaptureAPI Deployment init container’s resources.
-If omitted, the PacketCaptureAPI Deployment will use its default value for this init container’s resources.
-
-InitContainers is a list of PacketCaptureAPI init containers.
-If specified, this overrides the specified PacketCaptureAPI Deployment init containers.
-If omitted, the PacketCaptureAPI Deployment will use its default values for its init containers.
-
-Containers is a list of PacketCaptureAPI containers.
-If specified, this overrides the specified PacketCaptureAPI Deployment containers.
-If omitted, the PacketCaptureAPI Deployment will use its default values for its containers.
-
-Conditions represents the latest observed set of conditions for the component. A component may be one or more of
-Ready, Progressing, Degraded or other custom types.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named PolicyRecommendation Deployment container’s resources.
-If omitted, the PolicyRecommendation Deployment will use its default value for this container’s resources.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named PolicyRecommendation Deployment init container’s resources.
-If omitted, the PolicyRecommendation Deployment will use its default value for this init container’s resources.
-
-InitContainers is a list of PolicyRecommendation init containers.
-If specified, this overrides the specified PolicyRecommendation Deployment init containers.
-If omitted, the PolicyRecommendation Deployment will use its default values for its init containers.
-
-Containers is a list of PolicyRecommendation containers.
-If specified, this overrides the specified PolicyRecommendation Deployment containers.
-If omitted, the PolicyRecommendation Deployment will use its default values for its containers.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named Prometheus container’s resources.
-If omitted, the Prometheus will use its default value for this container’s resources.
-
-PromptType is a value that specifies whether the identity provider prompts the end user for re-authentication and
-consent.
-One of: None, Login, Consent, SelectAccount.
-
-Retention defines how long data is retained in an Elasticsearch cluster before it is cleared.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-flows
-
-int32
-
-
-
-
-
-(Optional)
-
-Flows configures the retention period for flow logs, in days. Logs written on a day that started at least this long ago
-are removed. To keep logs for at least x days, use a retention period of x+1.
-Default: 8
-
-
-
-
-
-
-
-auditReports
-
-int32
-
-
-
-
-
-(Optional)
-
-AuditReports configures the retention period for audit logs, in days. Logs written on a day that started at least this long ago are
-removed. To keep logs for at least x days, use a retention period of x+1.
-Default: 91
-
-
-
-
-
-
-
-snapshots
-
-int32
-
-
-
-
-
-(Optional)
-
-Snapshots configures the retention period for snapshots, in days. Snapshots are periodic captures
-of resources which along with audit events are used to generate reports.
-Consult the Compliance Reporting documentation for more details on snapshots.
-Logs written on a day that started at least this long ago are
-removed. To keep logs for at least x days, use a retention period of x+1.
-Default: 91
-
-
-
-
-
-
-
-complianceReports
-
-int32
-
-
-
-
-
-(Optional)
-
-ComplianceReports configures the retention period for compliance reports, in days. Reports are output
-from the analysis of the system state and audit events for compliance reporting.
-Consult the Compliance Reporting documentation for more details on reports.
-Logs written on a day that started at least this long ago are
-removed. To keep logs for at least x days, use a retention period of x+1.
-Default: 91
-
-
-
-
-
-
-
-dnsLogs
-
-int32
-
-
-
-
-
-(Optional)
-
-DNSLogs configures the retention period for DNS logs, in days. Logs written on a day that started at least this long ago
-are removed. To keep logs for at least x days, use a retention period of x+1.
-Default: 8
-
-
-
-
-
-
-
-bgpLogs
-
-int32
-
-
-
-
-
-(Optional)
-
-BGPLogs configures the retention period for BGP logs, in days. Logs written on a day that started at least this long ago
-are removed. To keep logs for at least x days, use a retention period of x+1.
-Default: 8
-
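-A minimal sketch of the retention section on the LogStorage resource, using the default values listed above:
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: LogStorage
-metadata:
-  name: tigera-secure
-spec:
-  retention:
-    flows: 8
-    auditReports: 91
-    snapshots: 91
-    complianceReports: 91
-    dnsLogs: 8
-    bgpLogs: 8
-```
-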
-Labels are the metadata.labels of the ServiceMonitor. When combined with spec.serviceMonitorSelector.matchLabels
-on your prometheus instance, the service monitor will automatically be picked up.
-Default: k8s-app=tigera-prometheus
-
-The endpoints to scrape. This struct contains a subset of the Endpoint as defined in the prometheus docs. Fields
-related to connecting to our Prometheus server are automatically set by the operator.
-
-SyslogLogType represents the allowable log types for syslog.
-Allowable values are Audit, DNS, Flows and IDSEvents.
-* Audit corresponds to audit logs for both Kubernetes resources and Enterprise custom resources.
-* DNS corresponds to DNS logs generated by Calico node.
-* Flows corresponds to flow logs generated by Calico node.
-* IDSEvents corresponds to event logs for the intrusion detection system (anomaly detection, suspicious IPs, suspicious domains and global alerts).
-
-SyslogStoreSpec defines configuration for exporting logs to syslog.
-
-
-
-
-
Field
-
Description
-
-
-
-
-
-
-endpoint
-
-string
-
-
-
-
-
-
-Location of the syslog server. example: tcp://1.2.3.4:601
-
-
-
-
-
-
-
-packetSize
-
-int32
-
-
-
-
-
-(Optional)
-
-PacketSize defines the maximum size of packets to send to syslog.
-In general this is only needed if you notice long logs being truncated.
-Default: 1024
-
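-Assuming the syslog store is configured under the LogCollector resource's additionalStores section (an assumption), a minimal sketch might be:
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: LogCollector
-metadata:
-  name: tigera-secure
-spec:
-  additionalStores:
-    syslog:
-      endpoint: tcp://1.2.3.4:601
-      packetSize: 1024
-      logTypes:
-        - Audit
-        - DNS
-        - Flows
-```
-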
-SecretName indicates the name of the secret in the tigera-operator namespace that contains the private key and certificate that the management cluster uses when it listens for incoming connections.
-
-
-When set to tigera-management-cluster-connection, voltron will use the same cert bundle which Guardian client certs are signed with.
-
-
-When set to manager-tls, voltron will use the same cert bundle which Manager UI is served with.
-This cert bundle must be a publicly signed cert created by the user.
-Note that Tigera Operator will generate a self-signed manager-tls cert if one does not exist,
-and use of that cert will result in Guardian being unable to verify Voltron’s identity.
-
-
-If changed on a running cluster with connected managed clusters, all managed clusters will disconnect as they will no longer be able to verify Voltron’s identity.
-To reconnect existing managed clusters, change the tls.ca of the managed clusters’ ManagementClusterConnection resource.
-
-SNIMatch is used to match requests based on the server name for the intended destination server. Matching requests
-will be proxied to the Destination.
-
-
-
-
-
-
-
-destination
-
-string
-
-
-
-
-
-
-Destination is the destination url to proxy the request to.
-
-ForwardingMTLSCert is the certificate used for mTLS between voltron and the destination. Either both ForwardingMTLSCert
-and ForwardingMTLSKey must be specified, or neither can be specified.
-
-ForwardingMTLSKey is the key used for mTLS between voltron and the destination. Either both ForwardingMTLSCert
-and ForwardingMTLSKey must be specified, or neither can be specified.
-
-
-
-
-
-
-
-unauthenticated
-
-bool
-
-
-
-
-
-(Optional)
-
-Unauthenticated says whether the request should go through authentication. This is only applicable if the Target
-is UI.
-
-Elastic configures per-tenant ElasticSearch and Kibana parameters.
-This field is required for clusters using external ES.
-
-
-
-
-
-
-
-controlPlaneReplicas
-
-int32
-
-
-
-
-
-(Optional)
-
-ControlPlaneReplicas defines how many replicas of the control plane core components will be deployed
-in the Tenant’s namespace. Defaults to the controlPlaneReplicas in the Installation CR.
-
-The timestamp representing the start time for the current status.
-
-
-* `reason` (string): A brief reason explaining the condition.
-* `message` (string): Optionally, a detailed message providing additional context.
-* `observedGeneration` (int64, optional): observedGeneration represents the generation that the condition was set based upon. For instance, if generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance.
-
-TigeraStatusReason (string alias)
-
-TigeraStatusReason represents the reason for a particular condition.
-
-Conditions represents the latest observed set of conditions for this component. A component may be one or more of
-Available, Progressing, or Degraded.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named typha Deployment container’s resources.
-If omitted, the typha Deployment will use its default value for this container’s resources.
-If used in conjunction with the deprecated ComponentResources, then this value takes precedence.
-
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named typha Deployment init container’s resources.
-If omitted, the typha Deployment will use its default value for this init container’s resources.
-If used in conjunction with the deprecated ComponentResources, then this value takes precedence.
-
-InitContainers is a list of typha init containers.
-If specified, this overrides the specified typha Deployment init containers.
-If omitted, the typha Deployment will use its default values for its init containers.
-
-Containers is a list of typha containers.
-If specified, this overrides the specified typha Deployment containers.
-If omitted, the typha Deployment will use its default values for its containers.
-
-Affinity is a group of affinity scheduling rules for the typha pods.
-If specified, this overrides any affinity that may be set on the typha Deployment.
-If omitted, the typha Deployment will use its default value for affinity.
-If used in conjunction with the deprecated TyphaAffinity, then this value takes precedence.
-WARNING: Please note that this field will override the default calico-typha Deployment affinity.
-
-
-* `nodeSelector` (map[string]string): NodeSelector is the calico-typha pod’s scheduling constraints. If specified, each of the key/value pairs are added to the calico-typha Deployment nodeSelector provided the key does not already exist in the object’s nodeSelector. If omitted, the calico-typha Deployment will use its default value for nodeSelector. WARNING: Please note that this field will modify the default calico-typha Deployment nodeSelector.
-* `terminationGracePeriodSeconds` (int64, optional): Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds.
-
-TopologySpreadConstraints describes how a group of pods ought to spread across topology
-domains. Scheduler will schedule pods in a way which abides by the constraints.
-All topologySpreadConstraints are ANDed.
-
-Tolerations is the typha pod’s tolerations.
-If specified, this overrides any tolerations that may be set on the typha Deployment.
-If omitted, the typha Deployment will use its default value for tolerations.
-WARNING: Please note that this field will override the default calico-typha Deployment tolerations.
-
-TyphaDeploymentSpec defines configuration for the typha Deployment.
-
-* `minReadySeconds` (int32, optional): MinReadySeconds is the minimum number of seconds for which a newly created Deployment pod should be ready without any of its container crashing, for it to be considered available. If specified, this overrides any minReadySeconds value that may be set on the typha Deployment. If omitted, the typha Deployment will use its default value for minReadySeconds.
-
-TyphaDeploymentStrategy describes how to replace existing pods with new ones. Only RollingUpdate is supported
-at this time so the Type field is not exposed.
-
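-Putting several of these override fields together, a sketch of an `Installation` resource customizing the typha Deployment might look like this (the `typhaDeployment` override structure is assumed from the operator.tigera.io/v1 API; values are examples only):
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: Installation
-metadata:
-  name: default
-spec:
-  typhaDeployment:
-    spec:
-      # Overrides the Deployment's minReadySeconds.
-      minReadySeconds: 10
-      template:
-        spec:
-          # Added to the default calico-typha nodeSelector.
-          nodeSelector:
-            kubernetes.io/os: linux
-          # Overrides the default grace period (30 seconds).
-          terminationGracePeriodSeconds: 60
-```
-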
-* CNIBinDir is the path to the CNI binaries directory on Windows, it must match what is used as ‘bin_dir’ under [plugins] [plugins.“io.containerd.grpc.v1.cri”] [plugins.“io.containerd.grpc.v1.cri”.cni] on the containerd ‘config.toml’ file on the Windows nodes.
-* `cniConfigDir` (string, optional): CNIConfigDir is the path to the CNI configuration directory on Windows, it must match what is used as ‘conf_dir’ under [plugins] [plugins.“io.containerd.grpc.v1.cri”] [plugins.“io.containerd.grpc.v1.cri”.cni] on the containerd ‘config.toml’ file on the Windows nodes.
-* `cniLogDir` (string, optional): CNILogDir is the path to the Calico CNI logs directory on Windows.
-* `vxlanMACPrefix` (string, optional): VXLANMACPrefix is the prefix used when generating MAC addresses for virtual NICs
-* `vxlanAdapter` (string, optional): VXLANAdapter is the Network Adapter used for VXLAN, leave blank for primary NIC
-
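-For example, a sketch of how these Windows settings are typically supplied on the `Installation` resource (the `windowsNodes` field is assumed from the operator.tigera.io/v1 API; the paths and MAC prefix shown are placeholders that must match your nodes):
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: Installation
-metadata:
-  name: default
-spec:
-  windowsNodes:
-    # Must match bin_dir / conf_dir in the containerd config.toml on the Windows nodes.
-    cniBinDir: 'c:\opt\cni\bin'
-    cniConfigDir: 'c:\etc\cni\net.d'
-    cniLogDir: 'c:\var\log\calico\cni'
-    # Prefix used when generating MAC addresses for virtual NICs.
-    vxlanMACPrefix: '0E-2A'
-    # vxlanAdapter is left unset to use the primary NIC.
-```
-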
-Resources allows customization of limits and requests for compute resources such as cpu and memory.
-If specified, this overrides the named crawdad DaemonSet container’s resources.
-If omitted, the crawdad DaemonSet will use its default value for this container’s resources.
-If used in conjunction with the deprecated ComponentResources, then this value takes precedence.
-
-Containers is a list of crawdad containers.
-If specified, this overrides the specified crawdad DaemonSet cluster-scanner containers.
-If omitted, the crawdad DaemonSet will use its default values for its containers.
-
-Affinity is a group of affinity scheduling rules for the crawdad pods.
-If specified, this overrides any affinity that may be set on the crawdad DaemonSet.
-If omitted, the crawdad DaemonSet will use its default value for affinity.
-WARNING: Please note that this field will override the default crawdad DaemonSet affinity.
-
-* `nodeSelector` (map[string]string): NodeSelector is the crawdad pod’s scheduling constraints. If specified, each of the key/value pairs are added to the crawdad DaemonSet nodeSelector provided the key does not already exist in the object’s nodeSelector. If used in conjunction with ControlPlaneNodeSelector, that nodeSelector is set on the crawdad DaemonSet and each of this field’s key/value pairs are added to the crawdad DaemonSet nodeSelector provided the key does not already exist in the object’s nodeSelector. If omitted, the crawdad DaemonSet will use its default value for nodeSelector. WARNING: Please note that this field will modify the default crawdad DaemonSet nodeSelector.
-
-Tolerations is the crawdad pod’s tolerations.
-If specified, this overrides any tolerations that may be set on the crawdad DaemonSet.
-If omitted, the crawdad DaemonSet will use its default value for tolerations.
-WARNING: Please note that this field will override the default crawdad DaemonSet tolerations.
-
-CrawdadDaemonSetSpec defines configuration for the crawdad DaemonSet.
-
-* `minReadySeconds` (int32, optional): MinReadySeconds is the minimum number of seconds for which a newly created DaemonSet pod should be ready without any of its container crashing, for it to be considered available. If specified, this overrides any minReadySeconds value that may be set on the crawdad DaemonSet. If omitted, the crawdad DaemonSet will use its default value for minReadySeconds.
-
-ImageAssuranceStatus defines the observed state of ImageAssurance
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/installation/api.mdx b/calico-cloud_versioned_docs/version-20-1/reference/installation/api.mdx
deleted file mode 100644
index c897085ea7..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/installation/api.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Installation API reference
----
-
-# Installation reference
-
-import API from '@site/calico-cloud_versioned_docs/version-20-1/reference/installation/_api.mdx';
-
-The Kubernetes resources below configure $[prodname] installation when using the operator. Each resource is responsible for installing and configuring a different subsystem of $[prodname] during installation. Most options can be modified on a running cluster using `kubectl`.
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/installation/config.json b/calico-cloud_versioned_docs/version-20-1/reference/installation/config.json
deleted file mode 100644
index 11c75a849a..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/installation/config.json
+++ /dev/null
@@ -1,36 +0,0 @@
-{
- "hideMemberFields": [
- "TypeMeta"
- ],
- "hideTypePatterns": [
- "ParseError$",
- "List$",
- "ImageAssuranceCentral$",
- "ImageAssuranceCentralSpec$",
- "ImageAssuranceCentralStatus$",
- "APIProxyDeployment",
- "ManagedClusterControllerDeployment",
- "RuntimeCleanerDeployment",
- "ScannerWorkerDeployment",
- "WaiterDeployment"
- ],
- "externalPackages": [
- {
- "typeMatchPrefix": "^k8s\\.io/apimachinery/pkg/apis/meta/v1\\.Duration$",
- "docsURLTemplate": "https://godoc.org/k8s.io/apimachinery/pkg/apis/meta/v1#Duration"
- },
- {
- "typeMatchPrefix": "^k8s\\.io/(api|apimachinery/pkg/apis)/",
- "docsURLTemplate": "https://v1-21.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#{{lower .TypeIdentifier}}-{{arrIndex .PackageSegments -1}}-{{arrIndex .PackageSegments -2}}"
- },
- {
- "typeMatchPrefix": "^github\\.com/knative/pkg/apis/duck/",
- "docsURLTemplate": "https://godoc.org/github.com/knative/pkg/apis/duck/{{arrIndex .PackageSegments -1}}#{{.TypeIdentifier}}"
- }
- ],
- "typeDisplayNamePrefixOverrides": {
- "k8s.io/api/": "Kubernetes ",
- "k8s.io/apimachinery/pkg/apis/": "Kubernetes "
- },
- "markdownDisabled": false
-}
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/installation/ia-api.mdx b/calico-cloud_versioned_docs/version-20-1/reference/installation/ia-api.mdx
deleted file mode 100644
index d38731ef11..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/installation/ia-api.mdx
+++ /dev/null
@@ -1,13 +0,0 @@
----
-description: Image Assurance Installation API reference
----
-
-# Image Assurance Installation reference
-
-import IAAPI from '@site/calico-cloud_versioned_docs/version-20-1/reference/installation/_ia-api.mdx';
-
-## Image Assurance installation reference
-
-The Kubernetes resources below configure $[prodname] Image Assurance installation when using the operator. Each resource is responsible for installing and configuring a different subsystem of $[prodname] Image Assurance during installation. Most options can be modified on a running cluster using `kubectl`.
-
-
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/public-cloud/aws.mdx b/calico-cloud_versioned_docs/version-20-1/reference/public-cloud/aws.mdx
deleted file mode 100644
index ab9e3cc421..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/public-cloud/aws.mdx
+++ /dev/null
@@ -1,176 +0,0 @@
----
-description: Advantages of using Calico Cloud in AWS.
----
-
-# Amazon Web Services
-
-$[prodname] provides the following advantages when running in Amazon Web Services (AWS):
-
-- **Network Policy for Containers**: $[prodname] provides fine-grained network security policy for individual containers.
-- **No Overlays**: Within each VPC subnet $[prodname] doesn't need an overlay, which means high performance networking for your containers.
-- **No 50 Node Limit**: $[prodname] allows you to surpass the 50 node limit, which exists as a consequence of the [AWS 50 route limit](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Appendix_Limits.html#vpc-limits-route-tables) when using the VPC routing table.
-
-## Routing traffic within a single VPC subnet
-
-Since $[prodname] assigns IP addresses outside the range used by AWS for EC2 instances, you must disable AWS src/dst
-checks on each EC2 instance in your cluster
-[as described in the AWS documentation](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html#EIP_Disable_SrcDestCheck). This
-allows $[prodname] to route traffic natively within a single VPC subnet without using an overlay or any of the limited VPC routing table entries.
-
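-For example, using the AWS CLI, the check can be disabled per instance (the instance ID is a placeholder):
-
-```bash
-aws ec2 modify-instance-attribute --instance-id <instance-id> --no-source-dest-check
-```
-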
-## Routing traffic across different VPC subnets / VPCs
-
-If you need to split your deployment across multiple AZs for high availability then each AZ will have its own VPC subnet. To
-use $[prodname] across multiple different VPC subnets or [peered VPCs](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html),
-in addition to disabling src/dst checks as described above you must also enable IPIP encapsulation and outgoing NAT
-on your $[prodname] IP pools.
-
-See the [IP pool configuration reference](../resources/ippool.mdx)
-for information on how to configure $[prodname] IP pools.
-
-By default, $[prodname]'s IPIP encapsulation applies to all container-to-container traffic. However,
-encapsulation is only required for container traffic that crosses a VPC subnet boundary. For better
-performance, you can configure $[prodname] to perform IPIP encapsulation only across VPC subnet boundaries.
-
-To enable the "CrossSubnet" IPIP feature, configure your $[prodname] IP pool resources
-to enable IPIP and set the mode to "CrossSubnet".
-
-:::note
-
-This feature was introduced in $[prodname] v2.1. If your deployment was created with
-an older version of $[prodname], or if you are unsure whether your deployment
-is configured correctly, follow the [Configuring IP-in-IP guide](../../networking/configuring/vxlan-ipip.mdx),
-which discusses this in more detail.
-
-:::
-
-The following `kubectl` command will create or modify an IPv4 pool with
-CIDR 192.168.0.0/16 using IPIP mode `CrossSubnet`. Adjust the pool CIDR for your deployment.
-
-```bash
-kubectl apply -f - <<EOF
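-# NOTE: the original heredoc body is truncated in this diff; the manifest below is a
-# representative sketch of an IPPool using CrossSubnet IPIP mode, not the original text.
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
-  name: default-ipv4-ippool
-spec:
-  cidr: 192.168.0.0/16
-  ipipMode: CrossSubnet
-  natOutgoing: true
-EOF
-```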
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/alertexception.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/alertexception.mdx
deleted file mode 100644
index fb215f458a..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/alertexception.mdx
+++ /dev/null
@@ -1,78 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# Alert exception
-
-An alert exception resource is a filter that hides specific alerts from users in $[prodname] Manager UI.
-You can filter alerts by time range or indefinitely. If an alert exception expires, alerts will reappear in Manager UI.
-
-For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/),
-the following case-insensitive aliases can be used to specify the resource type on the CLI:
-`alertexception.projectcalico.org`, `alertexceptions.projectcalico.org` and abbreviations such as
-`alertexception.p` and `alertexceptions.p`.
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: AlertException
-metadata:
- name: sample
-spec:
- description: 'Sample alert exception'
- selector: origin = "" and source_namespace = ""
- startTime: '2022-01-02T00:00:00Z'
- endTime: '2022-01-03T00:00:00Z'
-```
-
-## Alert exception definition
-
-| Field | Description | Accepted Values | Schema |
-| ----- | --------------------------------- | --------------------------------------------------- | ------ |
-| name | The name of this alert exception. | Alphanumeric string with optional `.`, `_`, or `-`. | string |
-
-### Spec
-
-| Field | Description | Type | Required | Acceptable Values |
-| ----------- | ----------------------------------------------------------------------------------- | ----------------------- | -------- | ----------------------- |
-| description | Human-readable description of the alert exception. | string | yes | |
-| selector | Selects alerts to filter from $[prodname] Manager UI queries. | string | yes | [selector](#selector) |
-| startTime | Defines the start time from which this alert exception will start filtering alerts. | Date in RFC 3339 format | yes | [startTime](#starttime) |
-| endTime | Defines the end time at which this alert exception will stop filtering alerts. | Date in RFC 3339 format | | [endTime](#endtime) |
-
-### Selector
-
-A selector is an expression that matches alerts based on their fields. For each alert,
-`origin` and `type` fields are automatically set by the applicable component, but other fields can be empty.
-
-| Field | Description |
-| ---------------- | ---------------------------------------------------------------------------------- |
-| origin | User specified or generated names from $[prodname] threat defense components. |
-| type | $[prodname] threat defense components an alert is generated from. |
-| host | Name of the node that triggers this alert. |
-| dest_ip | IP address of the destination pod. |
-| dest_name | Name of the destination pod. |
-| dest_name_aggr | Aggregated name of the destination pod. |
-| dest_namespace | Namespace of the destination endpoint. A `-` means the endpoint is not namespaced. |
-| source_ip | IP address of the source pod. |
-| source_name | Name of the source pod. |
-| source_name_aggr | Aggregated name of the source pod. |
-| source_namespace | Namespace of the source endpoint. A `-` means the endpoint is not namespaced. |
-
-The selector also supports logical operators, which can be combined into larger expressions.
-
-| Expression                          | Meaning                                                                       |
-| ----------------------------------- | ----------------------------------------------------------------------------- |
-| `<expression 1> AND <expression 2>` | Matches if and only if both `<expression 1>` and `<expression 2>` match.       |
-| `<expression 1> OR <expression 2>`  | Matches if and only if either `<expression 1>` or `<expression 2>` matches.    |
-
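-For example, a selector that combines two fields with `AND` (the values shown are hypothetical):
-
-```yaml
-selector: origin = "my-global-alert" AND source_namespace = "dev"
-```
-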
-### StartTime
-
-Defines the start time when this alert exception starts filtering alerts in RFC 3339 format. This value is required.
-
-### EndTime
-
-Defines the end time when this alert exception stops filtering alerts in RFC 3339 format.
-If omitted, alerts are filtered indefinitely.
-If the value is changed to the past, this alert exception is disabled immediately.
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/bgpconfig.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/bgpconfig.mdx
deleted file mode 100644
index a8d1c89acd..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/bgpconfig.mdx
+++ /dev/null
@@ -1,88 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# BGP configuration
-
-A BGP configuration resource (`BGPConfiguration`) represents BGP specific configuration options for the cluster or a
-specific node.
-
-For `kubectl` commands, the following case-insensitive aliases may be used to specify the resource type on the CLI: `bgpconfiguration.projectcalico.org`, `bgpconfigurations.projectcalico.org` as well as abbreviations such as `bgpconfiguration.p` and `bgpconfigurations.p`.
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: BGPConfiguration
-metadata:
- name: default
-spec:
- logSeverityScreen: Info
- nodeToNodeMeshEnabled: true
- nodeMeshMaxRestartTime: 120s
- asNumber: 63400
- serviceClusterIPs:
- - cidr: 10.96.0.0/12
- serviceExternalIPs:
- - cidr: 104.244.42.129/32
- - cidr: 172.217.3.0/24
- listenPort: 178
- bindMode: NodeIP
- communities:
- - name: bgp-large-community
- value: 63400:300:100
- prefixAdvertisements:
- - cidr: 172.218.4.0/26
- communities:
- - bgp-large-community
- - 63400:120
-```
-
-## BGP configuration definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ----- | --------------------------------------------------------- | --------------------------------------------------- | ------ |
-| name | Unique name to describe this resource instance. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string |
-
-- The resource with the name `default` has a specific meaning - this contains the BGP global default configuration.
-- The resources with the name `node.<nodename>` contain the node-specific overrides, and will be applied to the node `<nodename>` (see the example below). When deleting a node the BGPConfiguration resource associated with the node will also be deleted. Only prefixAdvertisements, listenPort, and logSeverityScreen can be overridden this way.
-
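-For example, a sketch of a node-specific override for a node named `node-1` (the node name is a placeholder):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: BGPConfiguration
-metadata:
-  # The "node." prefix ties this configuration to the node named node-1.
-  name: node.node-1
-spec:
-  # Only prefixAdvertisements, listenPort, and logSeverityScreen can be overridden per node.
-  logSeverityScreen: Debug
-  listenPort: 178
-```
-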
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------- | ----------------------------------------------------- | --------------------------------------------------------------- |
-| logSeverityScreen | Global log level | Debug, Info, Warning, Error, Fatal | string | `Info` |
-| nodeToNodeMeshEnabled | Full BGP node-to-node mesh. Only valid on the global `default` BGPConfiguration. | true, false | string | true |
-| asNumber | The default local AS Number that $[prodname] should use when speaking with BGP peers. Only valid on the global `default` BGPConfiguration; to set a per-node override, use the `bgp` field on the [Node resource](node.mdx). | A valid AS Number, may be specified in dotted notation. | integer/string | 64512 |
-| extensions | Additional mapping of keys and values. Used for setting values in custom BGP configurations. | valid strings for both keys and values | map | |
-| serviceClusterIPs | The CIDR blocks for Kubernetes Service Cluster IPs to be advertised over BGP. Only valid on the global `default` BGPConfiguration: will be ignored otherwise. | A list of valid IPv4 or IPv6 CIDR blocks. | List of `cidr: <CIDR>` values. | Empty List |
-| serviceExternalIPs | The CIDR blocks for Kubernetes Service External IPs to be advertised over BGP. Kubernetes Service External IPs will only be advertised if they are within one of these blocks. Only valid on the global `default` BGPConfiguration: will be ignored otherwise. | A list of valid IPv4 or IPv6 CIDR blocks. | List of `cidr: <CIDR>` values. | Empty List |
-| serviceLoadBalancerIPs | The CIDR blocks for Kubernetes Service status.LoadBalancer IPs to be advertised over BGP. Kubernetes LoadBalancer IPs will only be advertised if they are within one of these blocks. Only valid on the global `default` BGPConfiguration: will be ignored otherwise. | A list of valid IPv4 or IPv6 CIDR blocks. | List of `cidr: <CIDR>` values. | Empty List |
-| listenPort | The port where BGP protocol should listen. | A valid port number. | integer | 179 |
-| bindMode | Indicates whether to listen for BGP connections on all addresses (None) or only on the node's canonical IP address Node.Spec.BGP.IPvXAddress (NodeIP). If this field is changed when calico-node is already running, the change will not take effect until calico-node is manually restarted. | None, NodeIP. | string | None |
-| communities | List of BGP community names and their values, communities are not advertised unless they are used in [prefixAdvertisements](#prefixadvertisements). | | List of [communities](#communities) |
-| prefixAdvertisements | List of per-prefix advertisement properties, like BGP communities. | | List of [prefixAdvertisements](#prefixadvertisements) |
-| nodeMeshPassword | BGP password for the all the peerings in a full mesh configuration. | | [BGPPassword](bgppeer.mdx#bgppassword) | `nil` (no password) |
-| nodeMeshMaxRestartTime | Restart time that is announced by BIRD in the BGP graceful restart capability and that specifies how long the neighbor would wait for the BGP session to re-establish after a restart before deleting stale routes in full mesh configurations. Note: extra care should be taken when changing this configuration, as it may break networking in your cluster. When not specified, BIRD uses the default value of 120 seconds. | `10s`, `120s`, `2m` etc. | [Duration string][parse-duration] | `nil` (empty config, BIRD will use the default value of `120s`) |
-
-### communities
-
-| Field | Description | Accepted Values | Schema |
-| ----- | -------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ |
-| name | Name or identifier for the community. This should be used in [prefixAdvertisements](#prefixadvertisements) to advertise the community value. | | string |
-| value | Standard or large BGP community value. | For standard community, value should be in `aa:nn` format, where both `aa` and `nn` are 16 bit integers. For large community, value should be `aa:nn:mm` format, where `aa`, `nn` and `mm` are all 32 bit integers. Where `aa` is an AS Number, `nn` and `mm` are per-AS identifier. | string |
-
-### prefixAdvertisements
-
-| Field | Description | Accepted Values | Schema |
-| ----------- | ----------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------- |
-| cidr | CIDR for which properties should be advertised. | `cidr: XXX.XXX.XXX.XXX/XX` | string |
-| communities | BGP communities to be advertised. | Communities can be list of either community names already defined in [communities](#communities) or community value of format `aa:nn` or `aa:nn:mm`. For standard community, value should be in `aa:nn` format, where both `aa` and `nn` are 16 bit integers. For large community, value should be `aa:nn:mm` format, where `aa`, `nn` and `mm` are all 32 bit integers. Where `aa` is an AS Number, `nn` and `mm` are per-AS identifier. | List of string |
-
-## Supported operations
-
-| Datastore type | Create | Delete | Delete (Global `default`) | Update | Get/List | Notes |
-| --------------------- | ------ | ------ | ------------------------- | ------ | -------- | ----- |
-| Kubernetes API server | Yes | Yes | No | Yes | Yes |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/bgpfilter.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/bgpfilter.mdx
deleted file mode 100644
index 26ca0c89f1..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/bgpfilter.mdx
+++ /dev/null
@@ -1,118 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# BGP Filter
-
-A BGP filter resource (`BGPFilter`) represents a way to control
-routes imported by and exported to BGP peers specified using a
-BGP peer resource (`BGPPeer`).
-
-The BGPFilter rules are applied sequentially: the `action` for
-the **first** rule that matches an address to its `cidr` +
-`matchOperator` is executed immediately. If an address does not
-match any explicit BGP filter rule, the default action is
-`accept`.
-
-In order for a BGPFilter to be used in a BGP peering, its `name`
-must be added to `filters` of the corresponding BGPPeer resource.
-
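-For example, a sketch of a BGPPeer that applies the `my-filter` BGPFilter from the sample below (the peer IP and AS number are placeholders):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: BGPPeer
-metadata:
-  name: peer-with-filter
-spec:
-  peerIP: 192.168.1.1
-  asNumber: 64512
-  # Names of BGPFilter resources applied to this peering.
-  filters:
-    - my-filter
-```
-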
-For `kubectl` commands, the following case-sensitive aliases may
-be used to specify the resource type on the CLI: `bgpfilters.crd.projectcalico.org`
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: BGPFilter
-metadata:
- name: my-filter
-spec:
- exportV4:
- - action: Accept
- matchOperator: In
- cidr: 77.0.0.0/16
- - action: Reject
- matchOperator: NotIn
- cidr: 88.0.0.0/16
- importV4:
- - action: Reject
- matchOperator: NotIn
- cidr: 44.0.0.0/16
- exportV6:
- - action: Reject
- matchOperator: NotEqual
- cidr: 9000::0/64
- importV6:
- - action: Accept
- matchOperator: Equal
- cidr: 5000::0/64
- - action: Reject
- matchOperator: NotIn
- cidr: 5000::0/64
-```
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: BGPFilter
-metadata:
- name: my-filter
-spec:
- exportV4:
- - action: Accept
- matchOperator: In
- cidr: 77.0.0.0/16
- importV4:
- - action: Accept
- matchOperator: NotIn
- cidr: 44.0.0.0/16
- exportV6:
- - action: Accept
- matchOperator: Equal
- cidr: 9000::0/64
- importV6:
- - action: Accept
- matchOperator: NotEqual
- cidr: 5000::0/64
-```
-
-## BGP filter definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ----- | ------------------------------------------------------------------ | --------------------------------------------------- | ------ |
-| name | Unique name to describe this resource instance. Must be specified. | Alphanumeric string with optional `.`, `_`, or `-`. | string |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| -------- | -------------------------------------------------- | --------------- | ----------------------------------------- | ------- |
-| exportV4 | List of v4 CIDRs and export action | | [BGP Filter Rule v4](#bgp-filter-rule-v4) | |
-| importV4 | List of v4 CIDRs and import action | | [BGP Filter Rule v4](#bgp-filter-rule-v4) | |
-| exportV6 | List of v6 CIDRs and export action | | [BGP Filter Rule v6](#bgp-filter-rule-v6) | |
-| importV6 | List of v6 CIDRs and import action | | [BGP Filter Rule v6](#bgp-filter-rule-v6) | |
-
-### BGP Filter Rule v4
-
-| Field | Description | Accepted Values | Schema | Default |
-| ------------- | ----------------------------------------- | ---------------------------------- | ------ | ------- |
-| cidr | IPv4 range | A valid IPv4 CIDR | string | |
-| matchOperator | Method by which to match candidate routes | `In`, `NotIn`, `Equal`, `NotEqual` | string | |
-| action | Action to be taken for this CIDR | `Accept` or `Reject` | string | |
-
-### BGP Filter Rule v6
-
-| Field | Description | Accepted Values | Schema | Default |
-| ------------- | ----------------------------------------- | ---------------------------------- | ------ | ------- |
-| cidr | IPv6 range | A valid IPv6 CIDR | string | |
-| matchOperator | Method by which to match candidate routes | `In`, `NotIn`, `Equal`, `NotEqual` | string | |
-| action | Action to be taken for this CIDR | `Accept` or `Reject` | string | |
-
-## Supported operations
-
-| Datastore type | Create/Delete | Update | Get/List | Notes |
-| --------------------- | ------------- | ------ | -------- | ----- |
-| Kubernetes API server | Yes | Yes | Yes | |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/bgppeer.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/bgppeer.mdx
deleted file mode 100644
index af831f88ce..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/bgppeer.mdx
+++ /dev/null
@@ -1,118 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# BGP peer
-
-import Selectors from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_selectors.mdx';
-
-A BGP peer resource (`BGPPeer`) represents a remote BGP peer with
-which the node(s) in a $[prodname] cluster will peer.
-Configuring BGP peers allows you to peer a $[prodname] network
-with your datacenter fabric (e.g. ToR). For more
-information on cluster layouts, see $[prodname]'s documentation on
-[$[prodname] over IP fabrics](../architecture/design/l3-interconnect-fabric.mdx).
-
-For `kubectl` commands, the following case-insensitive aliases may be used to specify the resource type on the CLI: `bgppeer.projectcalico.org`, `bgppeers.projectcalico.org` as well as abbreviations such as `bgppeer.p` and `bgppeers.p`.
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: BGPPeer
-metadata:
- name: some.name
-spec:
- node: rack1-host1
- peerIP: 192.168.1.1
- asNumber: 63400
-```
-
-## BGP peer definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ----- | ------------------------------------------------------------------ | --------------------------------------------------- | ------ |
-| name | Unique name to describe this resource instance. Must be specified. | Alphanumeric string with optional `.`, `_`, or `-`. | string |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| ------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | --------------------------- | ---------------------------------------------------------------------------- |
-| node | If specified, the scope is node level, otherwise the scope is global. | The hostname of the node to which this peer applies. | string | |
-| peerIP | The IP address of this peer and an optional port number. If port number is not set, and peer is Calico node with `listenPort` set, then `listenPort` is used. | Valid IPv4 or IPv6 address. If port number is set use, `IPv4:port` or `[IPv6]:port` format. | string | |
-| asNumber | The remote AS Number of the peer. | A valid AS Number, may be specified in dotted notation. | integer/string |
-| nodeSelector | Selector for the nodes that should have this peering. When this is set, the `node` field must be empty. | | [selector](#selector) |
-| peerSelector | Selector for the remote nodes to peer with. When this is set, the `peerIP` and `asNumber` fields must be empty. | | [selector](#selector) |
-| keepOriginalNextHop | Maintain and forward the original next hop BGP route attribute to a specific Peer within a different AS. | | boolean |
-| extensions | Additional mapping of keys and values. Used for setting values in custom BGP configurations. | valid strings for both keys and values | map | |
-| password | [BGP password](../../operations/comms/secure-bgp.mdx) for the peerings generated by this BGPPeer resource. | | [BGPPassword](#bgppassword) | `nil` (no password) |
-| sourceAddress | Specifies whether and how to configure a source address for the peerings generated by this BGPPeer resource. Default value "UseNodeIP" means to configure the node IP as the source address. "None" means not to configure a source address. | "UseNodeIP", "None" | string | "UseNodeIP" |
-| failureDetectionMode | Specifies whether and how to detect loss of connectivity on the peerings generated by this BGPPeer resource. Default value "None" means nothing beyond BGP's own (slow) hold timer. "BFDIfDirectlyConnected" means to use BFD when the peer is directly connected. | "None", "BFDIfDirectlyConnected" | string | "None" |
-| restartMode | Specifies restart behaviour to configure on the peerings generated by this BGPPeer resource. Default value "GracefulRestart" means traditional graceful restart. "LongLivedGracefulRestart" means LLGR according to draft-uttaro-idr-bgp-persistence-05. | "GracefulRestart", "LongLivedGracefulRestart" | string | "GracefulRestart" |
-| maxRestartTime | Restart time that is announced by BIRD in the BGP graceful restart capability and that specifies how long the neighbor would wait for the BGP session to re-establish after a restart before deleting stale routes. When specified, this is configured as the graceful restart timeout when `RestartMode` is "GracefulRestart", and as the LLGR stale time when `RestartMode` is "LongLivedGracefulRestart". When not specified, the BIRD defaults are used, which are 120s for "GracefulRestart" and 3600s for "LongLivedGracefulRestart". Note: extra care should be taken when changing this configuration, as it may break networking in your cluster. | | duration | None |
-| birdGatewayMode | Specifies the BIRD "gateway" mode, i.e. method for computing the immediate next hop for each received route, for peerings generated by this BGPPeer resource. Default value "Recursive" means "gateway recursive". "DirectIfDirectlyConnected" means to configure "gateway direct" when the peer is directly connected. | "Recursive", "DirectIfDirectlyConnected" | string | "Recursive" |
-| numAllowedLocalASNumbers | The number of local AS numbers to allow in the AS path for received routes. This disables BGP loop prevention and should only be used if necessary. | | integer | `nil` (BIRD will default to 0 meaning no change to loop prevention behavior) |
-| ttlSecurity | Enables the generalized TTL security mechanism (GTSM) which protects against spoofed packets by ignoring received packets with a smaller than expected TTL value. The provided value is the number of hops (edges) between the peers. | 0 - 255 | 8-bit integer | `nil` (results in BIRD configuration `ttl security off`) |
-| filters | List of names of [BGPFilter](bgpfilter.mdx) resources to apply to this peering. | | string | |
-
-:::note
-
-The cluster-wide default local AS number used when speaking with a peer is controlled by the
-[BGPConfiguration resource](bgpconfig.mdx). That value can be overridden per-node by using the `bgp` field of
-the [node resource](node.mdx).
-
-:::
-
-### BGPPassword
-
-:::note
-
-BGP passwords must be 80 characters or fewer. If a password longer than that
-is configured, the BGP sessions with that password will fail to be established.
-
-:::
-
-| Field | Description | Schema |
-| ------------ | ------------------------------- | ----------------- |
-| secretKeyRef | Get the password from a secret. | [KeyRef](#keyref) |
-
-### KeyRef
-
-KeyRef tells $[prodname] where to get a BGP password. The referenced Kubernetes
-secret must be in the same namespace as the $[nodecontainer] pod.
-
-| Field | Description | Schema |
-| ----- | ------------------------- | ------ |
-| name | The name of the secret | string |
-| key | The key within the secret | string |
-
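-For example, a sketch of a peer that reads its password from a Kubernetes secret (the secret name and key are placeholders; the secret must live in the same namespace as the $[nodecontainer] pod):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: BGPPeer
-metadata:
-  name: rack1-tor
-spec:
-  peerIP: 192.168.1.1
-  asNumber: 63400
-  password:
-    secretKeyRef:
-      # Name of the secret and the key within it that holds the BGP password.
-      name: bgp-secrets
-      key: rack1-password
-```
-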
-## Peer scopes
-
-BGP Peers can exist at either global or node-specific scope. A peer's scope
-determines which `$[nodecontainer]`s will attempt to establish a BGP session with that peer.
-If `$[nodecontainer]` has a `listenPort` set in `BGPConfiguration`, it will be used in peering.
-
-### Global peer
-
-To assign a BGP peer a global scope, omit the `node` and `nodeSelector` fields. All nodes in
-the cluster will attempt to establish BGP connections with it.
-
-### Node-specific peer
-
-A BGP peer can also be node-specific. When the `node` field is included, only the specified node
-will peer with it. When the `nodeSelector` field is included, the nodes with labels that match that selector
-will peer with it.
-
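-For example, a sketch of a node-specific peering that uses a label selector instead of a node name (the labels and peer details are placeholders):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: BGPPeer
-metadata:
-  name: rack1-peers
-spec:
-  # Only nodes whose labels match this selector will peer with 192.168.1.1.
-  nodeSelector: rack == 'rack-1'
-  peerIP: 192.168.1.1
-  asNumber: 63400
-```
-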
-## Supported operations
-
-| Datastore type | Create/Delete | Update | Get/List | Notes |
-| --------------------- | ------------- | ------ | -------- | ----- |
-| Kubernetes API server | Yes | Yes | Yes |
-
-## Selector
-
-
-
-[parse-duration]: https://golang.org/pkg/time/#ParseDuration
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/blockaffinity.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/blockaffinity.mdx
deleted file mode 100644
index 451a6bad29..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/blockaffinity.mdx
+++ /dev/null
@@ -1,31 +0,0 @@
----
-description: IP address management block affinity
----
-
-# Block affinity
-
-A block affinity resource (`BlockAffinity`) represents the affinity for an IPAM block. These are managed by Calico IPAM.
-
-## Block affinity definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ----- | ------------------------------------------------------------------ | --------------------------------------------------- | ------ |
-| name | Unique name to describe this resource instance. Must be specified. | Alphanumeric string with optional `.`, `_`, or `-`. | string |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| ------- | -------------------------------------------------------------------------- | ----------------------------------- | ------- | ------- |
-| state | State of the affinity with regard to any referenced IPAM blocks. | confirmed, pending, pendingDeletion | string | |
-| node | The node that this affinity is assigned to. | The hostname of the node | string | |
-| cidr | The CIDR range this block affinity references. | A valid IPv4 or IPv6 CIDR. | string | |
-| deleted | When set to true, clients should treat this block as if it does not exist. | true, false | boolean | `false` |
-
-## Supported operations
-
-| Datastore type | Create | Delete | Update | Get/List | Watch |
-| --------------------- | ------ | ------ | ------ | -------- | ----- |
-| etcdv3 | No | No | No | Yes | Yes |
-| Kubernetes API server | No | No | No | Yes | Yes |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/caliconodestatus.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/caliconodestatus.mdx
deleted file mode 100644
index 69fdb137eb..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/caliconodestatus.mdx
+++ /dev/null
@@ -1,216 +0,0 @@
----
-description: API for this Calico resource.
----
-
-# Calico node status
-
-A Calico node status resource (`CalicoNodeStatus`) represents a collection of status information for a node that $[prodname] reports back to the user for use during troubleshooting.
-
-Currently, the status of BGP agents, BGP sessions, and routes exposed to BGP agents is collected from Linux nodes only. **Windows nodes are not supported at this time.**
-The Calico node status resource is only valid when $[prodname] BGP networking is in use.
-
-### Notes
-
-Updating `CalicoNodeStatus` has a small performance impact on the node's CPU and memory usage, and adds load to the Kubernetes API server.
-
-In our testing on a ten node, full mesh cluster, a `CalicoNodeStatus` resource was created for each node where the update interval was set to ten seconds. On each node, this resulted in an increase in CPU use of 5% of a vCPU and an increase of 4MB of memory. The control plane node recorded an increase in CPU usage of 5% of a vCPU for these 10 nodes.
-
-:::caution
-
-The implementation of `CalicoNodeStatus` is designed to handle a small number of nodes (fewer than 10 is recommended) reporting status at the same time. If `CalicoNodeStatus` resources are created for a large number of nodes with a short update interval,
-the Kubernetes API server may become slower and less responsive.
-Create `CalicoNodeStatus` only for the nodes you are interested in, and only for debugging purposes. Delete the `CalicoNodeStatus` resource when debugging is complete.
-
-:::
-
-## Sample YAML
-
-To use this function, the user creates a CalicoNodeStatus object for the node, specifying the information to collect and the interval it should be collected at. This example collects information for node "my-kadm-node-0" with an update interval of 10 seconds.
-
-```bash
-kubectl apply -f -<<EOF
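-# NOTE: the original heredoc body is truncated in this diff; the manifest below is a
-# representative sketch of a CalicoNodeStatus for node my-kadm-node-0, not the original text.
-apiVersion: projectcalico.org/v3
-kind: CalicoNodeStatus
-metadata:
-  name: my-kadm-node-0-status
-spec:
-  # Status classes to collect; Agent, BGP, and Routes are assumed here.
-  classes:
-    - Agent
-    - BGP
-    - Routes
-  node: my-kadm-node-0
-  # Collection interval in seconds (10 seconds, per the text above).
-  updatePeriodSeconds: 10
-EOF
-```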
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/compliance-reports/inventory.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/compliance-reports/inventory.mdx
deleted file mode 100644
index f8c8e3219b..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/compliance-reports/inventory.mdx
+++ /dev/null
@@ -1,86 +0,0 @@
----
-description: API for this resource.
----
-
-# Inventory report
-
-To create an Inventory report, create a [`GlobalReport`](../globalreport.mdx) with the `reportType`
-set to `inventory`.
-
-The following sample command creates a GlobalReport that results in a daily inventory report for
-endpoints in the `public` namespace.
-
-```bash
-kubectl apply -f - << EOF
-apiVersion: projectcalico.org/v3
-kind: GlobalReport
-metadata:
- name: daily-public-inventory-report
- labels:
- deployment: production
-spec:
- reportType: inventory
- endpoints:
- namespaces:
- names:
- - public
- schedule: 0 0 * * *
-EOF
-```
-
-## Downloadable reports
-
-### summary.csv
-
-A summary CSV file that includes details about the report parameters and the top level counts.
-
-| Heading | Description | Format |
-| ----------------------------- | ----------------------------------------------------------------------------------------------------------- | ------------------------------------------- |
-| startTime | The report interval start time. | RFC3339 string |
-| endTime | The report interval end time. | RFC3339 string |
-| endpointSelector | The endpoint selector used to restrict in-scope endpoints by endpoint label selection. | selector string |
-| namespaceNames | The set of namespace names used to restrict in-scope endpoints by namespace. | ";" separated list of namespace names |
-| namespaceSelector | The namespace selector used to restrict in-scope endpoints by namespace label selection. | selector string |
-| serviceAccountNames | The set of service account names used to restrict in-scope endpoints by service account. | ";" separated list of service account names |
-| serviceAccountSelectors | The service account selector used to restrict in-scope endpoints by service account label selection. | selector string |
-| endpointsNumInScope | The number of enumerated endpoints that are in-scope according to the requested endpoint selection options. | number |
-| endpointsNumIngressProtected | The number of in-scope endpoints that were always ingress protected during the report interval. | number |
-| endpointsNumEgressProtected | The number of in-scope endpoints that were always egress protected during the report interval. | number |
-| namespacesNumInScope | The number of namespaces containing in-scope endpoints. | number |
-| namespacesNumIngressProtected | The number of namespaces whose in-scope endpoints were always ingress protected during the report interval. | number |
-| namespacesNumEgressProtected | The number of namespaces whose in-scope endpoints were always egress protected during the report interval. | number |
-| serviceAccountsNumInScope | The number of service accounts associated with in-scope endpoints. | number |
-
-### endpoints.csv
-
-An endpoints CSV file that includes per-endpoint information.
-
-| Heading | Description | Format |
-| ---------------- | --------------------------------------------------------------------------------------------- | ----------------------------------- |
-| endpoint | The name of the endpoint. | string |
-| ingressProtected | Whether the endpoint was always ingress protected during the report interval. | bool |
-| egressProtected | Whether the endpoint was always egress protected during the report interval. | bool |
-| envoyEnabled | Whether the endpoint was always Envoy enabled during the report interval. | bool |
-| appliedPolicies | The full set of policies that applied to the endpoint at any time during the report interval. | ";" separated list of policy names |
-| services | The full set of services that included this endpoint at any time during the report interval. | ";" separated list of service names |
-
-### namespaces.csv
-
-A namespaces CSV file that includes per-namespace information.
-
-| Heading | Description | Format |
-| ---------------- | ------------------------------------------------------------------------------------------------------------- | ------ |
-| namespace | The name of the namespace. | string |
-| ingressProtected | Whether all in-scope endpoints within the namespace were always ingress protected during the report interval. | bool |
-| egressProtected | Whether all in-scope endpoints within the namespace were always egress protected during the report interval. | bool |
-| envoyEnabled | Whether all in-scope endpoints within the namespace were always Envoy enabled during the report interval. | bool |
-
-### services.csv
-
-A services CSV file that includes per-service information.
-
-| Heading | Description | Format |
-| ---------------- | ---------------------------------------------------------------------------------------------------------------- | ------ |
-| service | The name of the service. | string |
-| ingressProtected | Whether all in-scope endpoints that are in the service were always ingress protected during the report interval. | bool |
-| envoyEnabled | Whether all in-scope endpoints that are in the service were always Envoy enabled during the report interval. | bool |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/compliance-reports/network-access.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/compliance-reports/network-access.mdx
deleted file mode 100644
index 7e720f86e0..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/compliance-reports/network-access.mdx
+++ /dev/null
@@ -1,92 +0,0 @@
----
-description: API for this resource.
----
-
-# Network Access report
-
-To create a Network Access report, create a [`GlobalReport`](../globalreport.mdx) with the `reportType`
-set to `network-access`.
-
-The following sample command creates a GlobalReport that results in a daily network access report for
-endpoints in the `public` namespace.
-
-```bash
-kubectl apply -f - << EOF
-apiVersion: projectcalico.org/v3
-kind: GlobalReport
-metadata:
- name: daily-public-network-access-report
- labels:
- deployment: production
-spec:
- reportType: network-access
- endpoints:
- namespaces:
- names:
- - public
- schedule: 0 0 * * *
-EOF
-```
-
-:::note
-
-There is a known issue that audit logs do not contain deletion events for resources that were
-deleted implicitly as part of a namespace deletion event. Currently, this means policies and pods that have been
-deleted in this way may still appear in the reports that cover any period within the next day.
-
-:::
-
-## Downloadable reports
-
-### summary.csv
-
-A summary CSV file that includes details about the report parameters and the top level counts.
-
-| Heading | Description | Format |
-| ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------- |
-| startTime | The report interval start time. | RFC3339 string |
-| endTime | The report interval end time. | RFC3339 string |
-| endpointSelector | The endpoint selector used to restrict in-scope endpoints by endpoint label selection. | selector string |
-| namespaceNames | The set of namespace names used to restrict in-scope endpoints by namespace. | ";" separated list of namespace names |
-| namespaceSelector | The namespace selector used to restrict in-scope endpoints by namespace label selection. | selector string |
-| serviceAccountNames | The set of service account names used to restrict in-scope endpoints by service account. | ";" separated list of service account names |
-| serviceAccountSelectors | The service account selector used to restrict in-scope endpoints by service account label selection. | selector string |
-| endpointsNumIngressProtected | The number of in-scope endpoints that were always ingress protected during the report interval. | number |
-| endpointsNumEgressProtected | The number of in-scope endpoints that were always egress protected during the report interval. | number |
-| endpointsNumIngressUnprotected | The number of in-scope endpoints that were ingress unprotected at any point during the report interval. | number |
-| endpointsNumEgressUnprotected | The number of in-scope endpoints that were egress unprotected at any point during the report interval. | number |
-| endpointsNumIngressFromInternet | The number of in-scope endpoints that allowed ingress traffic from the public internet at any point during the report interval. | number |
-| endpointsNumEgressToInternet | The number of in-scope endpoints that allowed egress traffic to the public internet at any point during the report interval. | number |
-| endpointsNumIngressFromOtherNamespace | The number of in-scope endpoints that allowed ingress traffic from another namespace at any point during the report interval. | number |
-| endpointsNumEgressToOtherNamespace | The number of in-scope endpoints that allowed egress traffic to another namespace at any point during the report interval. | number |
-| endpointsNumEnvoyEnabled | The number of in-scope endpoints that were always Envoy enabled during the report interval. | number |
-
-### endpoints.csv
-
-An endpoints CSV file that includes per-endpoint information.
-
-| Heading | Description | Format |
-| ------------------------------------------- | -------------------------------------------------------------------------------------------------------------- | ----------------------------------- |
-| endpoint | The name of the endpoint. | string |
-| ingressProtected | Whether the endpoint was always ingress protected during the report interval. | bool |
-| egressProtected | Whether the endpoint was always egress protected during the report interval. | bool |
-| ingressFromInternet | Whether the endpoint allowed ingress traffic from the public internet at any point during the report interval. | number |
-| egressToInternet | Whether the endpoint allowed egress traffic to the public internet at any point during the report interval. | number |
-| ingressFromOtherNamespace | Whether the endpoint allowed ingress traffic from another namespace at any point during the report interval. | number |
-| egressToOtherNamespace | Whether the endpoint allowed egress traffic to another namespace at any point during the report interval. | number |
-| envoyEnabled | Whether the endpoint was always Envoy enabled during the report interval. | bool |
-| appliedPolicies | The full set of policies that applied to the endpoint at any time during the report interval. | ";" separated list of policy names |
-| services | The full set of services that included this endpoint at any time during the report interval. | ";" separated list of service names |
-| trafficAggregationPrefix\* | The flow log aggregation prefix. | string |
-| endpointsGeneratingTrafficToThisEndpoint\* | The set of endpoints that were generating traffic to this endpoint. | ";" separated list of service names |
-| endpointsReceivingTrafficFromThisEndpoint\* | The set of endpoints that this endpoint is generating traffic to. | ";" separated list of service names |
-
-\* Traffic data is determined from flow logs. By default, $[prodname] aggregates flow logs so that flows to
-and from pods in the same replica set are summarized if the flows are accepted. (Denied flows are not aggregated this
-way by default). This means that the per-endpoint traffic details do not refer specifically to that endpoint, but
-rather the set of endpoints specified by the trafficAggregationPrefix.
-
-If you want per-endpoint detail you should turn down the level of aggregation. To do so,
-set the value of `flowLogsFileAggregationKindForAllowed` to 1 using a [FelixConfiguration][felixconfig].
-
-[felixconfig]: /reference/resources/felixconfig.mdx
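-
-For example, this could be applied with a patch along the following lines (a sketch; adjust to your own configuration workflow):
-
-```bash
-kubectl patch felixconfiguration default --type merge \
-  -p '{"spec":{"flowLogsFileAggregationKindForAllowed":1}}'
-```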
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/compliance-reports/overview.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/compliance-reports/overview.mdx
deleted file mode 100644
index d97b5514c0..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/compliance-reports/overview.mdx
+++ /dev/null
@@ -1,102 +0,0 @@
----
-description: Schedule reports and configure report scope.
----
-
-# Compliance reports
-
-The $[prodname] compliance reporting feature provides the following compliance reports:
-
-- [Inventory](inventory.mdx)
-- [Network Access](network-access.mdx)
-- [Policy Audit](policy-audit.mdx)
-- [CIS Benchmark](cis-benchmark.mdx)
-
-Create a [`GlobalReport`](../globalreport.mdx) resource to automatically schedule report generation, and specify the report scope (resources to include in the report).
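-
-For example, the following is a minimal sketch of a scheduled report; the report name, schedule, and endpoint selector shown here are illustrative only (see the `GlobalReport` reference above for the full set of fields):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalReport
-metadata:
-  name: weekly-production-inventory
-spec:
-  reportType: inventory
-  # Restrict the report scope to endpoints carrying a hypothetical env label.
-  endpoints:
-    selector: env == 'production'
-  schedule: 0 0 * * 0
-```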
-
-## Concepts
-
-### In-scope asset
-
-An asset (Pod or HostEndpoint) is flagged as in-scope by its endpoint labels, its namespace name and/or namespace labels,
-and its service account name and/or service account labels.
-
-_How this applies to the report_:
-The report includes all resources that were in-scope at any point during the report interval. The resource is included
-when it is first flagged as in-scope according to the configured label selector and name selections. The resource is
-included even if the resource is deleted or goes out-of-scope before the end of the report interval.
-
-### Ingress protected
-
-An endpoint is ingress protected if it has at least one Ingress policy that is applied to it.
-
-A service is ingress protected if all of the in-scope endpoints within that service are ingress protected.
-
-A namespace is ingress protected if all of the in-scope endpoints within that namespace are ingress protected.
-
-_How this applies to the report_:
-An endpoint is ingress protected only if it was ingress protected throughout the entire report interval.
-
-### Egress protected
-
-As per ingress, but with egress policy rules. Note that egress statistics are not obtained for services.
-
-### Allows ingress traffic from another namespace
-
-An endpoint is flagged as allowing ingress traffic from another namespace if it has one or more policies that apply to
-it with an ingress allow rule that:
-
-- has an explicit namespace selector configured, or
-- has no source selector or source CIDR configured, or
-- (for GlobalNetworkPolicy) has no source CIDR.
-
-A service is flagged as allowing ingress traffic from another namespace if any of the in-scope endpoints within that
-service are flagged.
-
-A namespace is flagged as allowing ingress traffic from another namespace if all of the in-scope endpoints within that
-namespace are flagged.
-
-_How this applies to the report_:
-An endpoint is flagged as allowing ingress traffic from another namespace if it was flagged at any time during the
-report interval.
-
-### Allows egress traffic to another namespace
-
-As per ingress, but with egress policy rules and destination selector/CIDR. Note that egress statistics are not obtained
-for services.
-
-### Allows ingress traffic from the internet
-
-An endpoint is flagged as allowing ingress traffic from the internet if it has one or more policies that apply to it
-with an ingress allow rule that:
-
-- has no source selector or source CIDR configured, or
-- has a source CIDR in the non-private IP ranges and has no source selector, or
-- has a source selector that matches one or more NetworkSets that contain at least one non-private IP.
-
-A service is flagged as allowing ingress traffic from the internet if any of the in-scope endpoints within that service
-are flagged.
-
-A namespace is flagged as allowing ingress traffic from the internet if all of the in-scope endpoints within that
-namespace are flagged.
-
-_How this applies to the report_:
-An endpoint is flagged as allowing ingress traffic from the internet if it was flagged as such at any time during the
-report interval.
-
-### Allows egress traffic to the internet
-
-As per ingress, but with egress policy rules and destination selector/CIDR. Note that egress statistics are not obtained
-for services.
-
-### Envoy enabled
-
-An endpoint is flagged as Envoy enabled if the associated pod spec and annotations indicate that an Istio init container
-and main container are deployed in the pod. Provided Istio is appropriately configured on the cluster, this can be taken
-as an indication that mTLS is enabled for the endpoint.
-
-A service is flagged as Envoy enabled if all of the in-scope endpoints within that service are flagged.
-
-A namespace is flagged as Envoy enabled if all of the in-scope endpoints within that namespace are flagged.
-
-_How this applies to the report_:
-An endpoint is flagged as Envoy enabled if it was flagged as such throughout the entire report interval.
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/compliance-reports/policy-audit.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/compliance-reports/policy-audit.mdx
deleted file mode 100644
index 06970a93ce..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/compliance-reports/policy-audit.mdx
+++ /dev/null
@@ -1,56 +0,0 @@
----
-description: API for this resource.
----
-
-# Policy audit report
-
-To create a Policy Audit report, create a [`GlobalReport`](../globalreport.mdx) with the `reportType`
-set to `policy-audit`.
-
-The following sample command creates a GlobalReport that results in a daily policy audit report for
-policies that are applied to endpoints in the `public` namespace.
-
-```bash
-kubectl apply -f - << EOF
-apiVersion: projectcalico.org/v3
-kind: GlobalReport
-metadata:
- name: daily-public-policy-audit-report
- labels:
- deployment: production
-spec:
- reportType: policy-audit
- endpoints:
- namespaces:
- names:
- - public
- schedule: 0 0 * * *
-EOF
-```
-
-## Downloadable reports
-
-### summary.csv
-
-A summary CSV file that includes details about the report parameters and the top level counts.
-
-| Heading | Description | Format |
-| ----------------------- | ------------------------------------------------------------------------------------------------------ | ------------------------------------------- |
-| startTime | The report interval start time. | RFC3339 string |
-| endTime | The report interval end time. | RFC3339 string |
-| endpointSelector | The endpoint selector used to restrict in-scope endpoints by endpoint label selection. | selector string |
-| namespaceNames | The set of namespace names used to restrict in-scope endpoints by namespace. | ";" separated list of namespace names |
-| namespaceSelector | The namespace selector used to restrict in-scope endpoints by namespace label selection. | selector string |
-| serviceAccountNames | The set of service account names used to restrict in-scope endpoints by service account. | ";" separated list of service account names |
-| serviceAccountSelectors | The service account selector used to restrict in-scope endpoints by service account label selection. | selector string |
-| numCreatedPolicies | The number of policies that apply to in-scope endpoints that were created during the report interval. | number |
-| numModifiedPolicies | The number of policies that apply to in-scope endpoints that were modified during the report interval. | number |
-| numDeletedPolicies | The number of policies that apply to in-scope endpoints that were deleted during the report interval. | number |
-
-### events.json
-
-Events formatted in JSON.
-
-### events.yaml
-
-Events formatted in YAML.
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/containeradmissionpolicy.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/containeradmissionpolicy.mdx
deleted file mode 100644
index a260bc428d..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/containeradmissionpolicy.mdx
+++ /dev/null
@@ -1,194 +0,0 @@
----
-description: Resource definition.
----
-
-# Container admission policy
-
-A container admission policy (`ContainerAdmissionPolicy`) represents an ordered set of rules which are applied to a
-collection of containers that match a [label selector](#selector).
-
-`ContainerAdmissionPolicy` is a cluster resource. `ContainerAdmissionPolicy` can be restricted to a set of namespaces
-using a [namespace selector](#selector).
-
-## Sample YAML
-
-This sample policy allows admission requests for pod-creating resources whose image is in the registry/repository
-`gcr.io/company/production-repository/*` and has a scan status of either `Pass` or `Warn`, and rejects all other admission
-requests.
-
-```yaml
-apiVersion: containersecurity.tigera.io/v1beta1
-kind: ContainerAdmissionPolicy
-metadata:
- name: reject-failed-and-non-gcr
-spec:
- selector: all()
- namespaceSelector: all()
- order: 10
- rules:
- - action: "Reject"
- imagePath:
- operator: IsNoneOf
- values:
- - "^gcr.io/company/production-repository/.*"
- - action: Allow
- imageScanStatus:
- operator: IsOneOf
- values:
- - Pass
- - Warn
- - action: Reject
-```
-
-## Container admission policy definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema | Default |
-|-----------|--------------------------------------------------------------------|-----------------------------------------------------|--------|-----------|
-| name | The name of the container admission policy. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-|-------------------|-----------------------------------------------------------------------------------------------------|-----------------|-----------------------|---------|
-| order | Controls the order of precedence. $[prodname] applies the policy with the lowest value first. | | float | |
-| selector | Selects the resources to which this policy applies. | | [selector](#selector) | all() |
-| namespaceSelector | Selects the namespace to which this policy applies. | | [selector](#selector) | all() |
-| rules | Ordered list of rules applied by this policy. | | List of [Rule](#rule) | |
-
-### Rule
-
-A single rule matches a set of pod-creating resources and applies some action to them. Multiple rules are executed in order.
-
-| Field | Description | Accepted Values | Schema | Default |
-|-------------------|---------------------------------------------------------------------|------------------------------------------|--------------------------------------------|------------|
-| action | Action to perform when matching this rule. | `Allow`, `Reject` | string | |
-| imagePath | Image path matching criteria. | | [String Values Match](#stringvaluematch) | |
-| imageScanStatus | Vulnerability scan status criteria. | | [String Values Match](#stringvaluematch) | |
-| imageLastScan | Criteria for the last time the containers image was scanned. | | [Time Match](#timematch) | |
-
-### Selector
-
-A label selector is an expression which either matches or does not match a resource based on its labels.
-
-$[prodname] label selectors support a number of operators, which can be combined into larger expressions
-using the boolean operators and parentheses.
-
-| Expression | Meaning |
-|---------------------------|-----------------------------|
-| **Logical operators** |
-| `( <expression> )` | Matches if and only if `<expression>` matches. (Parentheses are used for grouping expressions.)
-| `! <expression>` | Matches if and only if `<expression>` does not match. **Tip:** `!` is a special character at the start of a YAML string; if you need to use `!` at the start of a YAML string, enclose the string in quotes.
-| `<expression 1> && <expression 2>` | "And": matches if and only if both `<expression 1>` and `<expression 2>` match.
-| `<expression 1> \|\| <expression 2>` | "Or": matches if and only if either `<expression 1>` or `<expression 2>` matches.
-| **Match operators** |
-| `all()` | Match all in-scope resources. To match _no_ resources, combine this operator with `!` to form `!all()`.
-| `global()` | Match all non-namespaced resources. Useful in a `namespaceSelector` to select global resources such as global network sets.
-| `k == 'v'` | Matches resources with the label 'k' and value 'v'.
-| `k != 'v'` | Matches resources without label 'k' or with label 'k' and value _not_ equal to `v`
-| `has(k)` | Matches resources with label 'k', independent of value. To match pods that do not have label `k`, combine this operator with `!` to form `!has(k)`
-| `k in { 'v1', 'v2' }` | Matches resources with label 'k' and value in the given set
-| `k not in { 'v1', 'v2' }` | Matches resources without label 'k' or with label 'k' and value _not_ in the given set
-| `k contains 's'` | Matches resources with label 'k' and value containing the substring 's'
-| `k starts with 's'` | Matches resources with label 'k' and value starting with the substring 's'
-| `k ends with 's'` | Matches resources with label 'k' and value ending with the substring 's'
-
-Operators have the following precedence:
-
-* **Highest**: all the match operators
-* Parentheses `( ... )`
-* Negation with `!`
-* Conjunction with `&&`
-* **Lowest**: Disjunction with `||`
-
-For example, the expression
-
-```
-! has(my-label) || my-label starts with 'prod' && role in {'frontend','business'}
-```
-
-Would be "bracketed" like this:
-
-```
-(!(has(my-label))) || ((my-label starts with 'prod') && (role in {'frontend','business'}))
-```
-
-It would match:
-
-* Any resource that does not have the label "my-label".
-* Any resource that both:
-  * has a value for `my-label` that starts with "prod", and
-  * has a role label with value either "frontend" or "business".
-
-Understanding scopes and the `all()` and `global()` operators: selectors have a scope of resources
-that they are matched against, which depends on the context in which they are used. For example:
-
-* The `nodeSelector` in an `IPPool` selects over `Node` resources.
-
-* The top-level selector in a `NetworkPolicy` selects over the workloads _in the same namespace_ as the
-`NetworkPolicy`.
-
-* The top-level selector in a `GlobalNetworkPolicy` doesn't have the same restriction, it selects over all endpoints
-including namespaced `WorkloadEndpoint`s and non-namespaced `HostEndpoint`s.
-
-* The `namespaceSelector` in a `NetworkPolicy` (or `GlobalNetworkPolicy`) _rule_ selects over the labels on namespaces
-rather than workloads.
-
-* The `namespaceSelector` determines the scope of the accompanying `selector` in the entity rule. If no `namespaceSelector`
-is present then the rule's `selector` matches the default scope for that type of policy. (This is the same namespace
-for `NetworkPolicy` and all endpoints/network sets for `GlobalNetworkPolicy`)
-
-* The `global()` operator can be used (only) in a `namespaceSelector` to change the scope of the main `selector` to
-include non-namespaced resources such as [GlobalNetworkSet](globalnetworkset.mdx).
-This allows namespaced `NetworkPolicy` resources to refer to global non-namespaced resources, which would otherwise
-be impossible.
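-
-As an illustrative sketch (the `threat-feed == 'known-bad'` label is hypothetical), a namespaced `NetworkPolicy` rule can match a `GlobalNetworkSet` by combining `global()` with a label selector:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
-  name: deny-known-bad
-  namespace: default
-spec:
-  selector: all()
-  types:
-    - Ingress
-  ingress:
-    - action: Deny
-      source:
-        # global() re-scopes the selector below to non-namespaced resources,
-        # such as GlobalNetworkSets carrying this (hypothetical) label.
-        namespaceSelector: global()
-        selector: threat-feed == 'known-bad'
-```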
-
-### StringValueMatch
-A string values match does a match or negative match on a target against a set of values.
-
-| Field | Description | Accepted Values | Schema | Default |
-|-----------|---------------------------------------------------|-----------------------|-------------------|------------|
-| operator | Match operator to use against the list of values | `IsOneOf`, `IsNoneOf` | string | |
-| values | List of values to match against. | | list of strings | |
-
-### TimeMatch
-A time match does a match or negative match against a time or duration.
-
-Duration is the length of time into the past from the current time to match against. As an example,
-
-```yaml
- operator: "gt"
- duration:
- days: 3
-```
-
-is false for a time 4 days prior to the current date and time, and true for a time 2 days prior to the current date and time.
-
-Time is the absolute time to match a given time against. As an example,
-
-```yaml
- operator: "gt"
- time: "2022-01-02T00:00:00Z"
-```
-
-is false for "2022-01-01T00:00:00Z", and true for "2022-01-03T00:00:00Z".
-
-| Field | Description | Accepted Values | Schema | Default |
-|-----------|-------------------------------------------------------|-----------------------|---------------------------------------------------|------------|
-| operator | Match operator to use against the duration or time. | `gt`, `lt` | string | |
-| duration | The duration that this operator matches within. | | [Duration](#duration) | |
-| time | The absolute time to match against. | | String of the format "2006-01-02T15:04:05Z07:00" | |
-
-### Duration
-
-| Field | Description | Accepted Values | Schema | Default |
-|-----------|-----------------------------------------------|-------------------|-----------|------------|
-| days | Number of days past from the current time. | | integer | |
-| months | Number of months past from the current time. | | integer | |
-| years | Number of years past from the current time. | | integer | |
-
-## Supported operations
-
-| Datastore type | Create/Delete | Update | Get/List | Notes
-|--------------------------|---------------|--------|----------|------
-| Kubernetes API datastore | Yes | Yes | Yes |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/deeppacketinspection.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/deeppacketinspection.mdx
deleted file mode 100644
index eb4add9433..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/deeppacketinspection.mdx
+++ /dev/null
@@ -1,74 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# Deep packet inspection
-
-import Selectors from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_selectors.mdx';
-
-A deep packet inspection resource (`DeepPacketInspection`) represents a live network traffic monitor that uses specific
-rules to analyze packet headers and payloads for malicious activity. Malicious activities are added to the “Alerts” page in
-$[prodname] Manager.
-
-For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases can be used to specify the resource type on the CLI:
-`deeppacketinspection`, `deeppacketinspections`, `deeppacketinspection.projectcalico.org`, `deeppacketinspections.projectcalico.org`, as well as
-abbreviations such as `deeppacketinspection.p` and `deeppacketinspections.p`.
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: DeepPacketInspection
-metadata:
- name: sample-dpi
- namespace: sample-namespace
-spec:
- selector: k8s-app == "sample-app"
-```
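-
-For example, assuming the sample above is saved as `sample-dpi.yaml`, you can apply it and then read back the per-node status reported for the resource:
-
-```bash
-kubectl apply -f sample-dpi.yaml
-# Inspect the status section (nodes, active, errorConditions) populated by each node.
-kubectl get deeppacketinspection sample-dpi -n sample-namespace -o yaml
-```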
-
-## DeepPacketInspection definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema | Default |
-| --------- | ------------------------------------------------------------------ | --------------------------------------------------- | ------ | --------- |
-| name | The name of the deep packet inspection. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | |
-| namespace | Namespace provides an additional qualification to a resource name. | | string | "default" |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| -------- | ------------------------------------------------------------------- | --------------- | --------------------- | ------- |
-| selector | Selects the endpoints to which this deep packet inspection applies. | | [selector](#selector) | |
-
-### Status
-
-| Field | Description |
-| ----- | ------------------------ |
-| nodes | List of [Nodes](#nodes). |
-
-### Nodes
-
-| Field | Description |
-| --------------- | -------------------------------------------- |
-| node | Name of the node that generated this status. |
-| active | [Active](#active) status. |
-| errorConditions | List of [errors](#error-conditions). |
-
-### Active
-
-| Field | Description |
-| ----------- | ------------------------------------------------------------ |
-| success | Whether the deep packet inspection is active on the backend. |
-| lastUpdated | Time when the [active](#active) field was updated. |
-
-### Error Conditions
-
-| Field | Description |
-| ----------- | ------------------------------------------------------------------- |
-| message | Errors preventing deep packet inspection from running successfully. |
-| lastUpdated | Time when the [error](#error-conditions) was updated. |
-
-### Selector
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/egressgatewaypolicy.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/egressgatewaypolicy.mdx
deleted file mode 100644
index 3cd371bfa5..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/egressgatewaypolicy.mdx
+++ /dev/null
@@ -1,83 +0,0 @@
----
-description: API for this Calico Enterprise resource.
----
-
-# Egress gateway policy
-
-An egress gateway policy resource (`EgressGatewayPolicy`) represents a way to select a different egress gateway, or to
-skip egress gateways entirely, for different destinations.
-
-Rules in an `EgressGatewayPolicy` are checked in longest prefix match (LPM) fashion, like routes in a router. As such, it
-is not valid to use the same destination CIDR in two rules.
-
-For an `EgressGatewayPolicy` to be used, its `name` must be added to a pod or namespace by using the
-`egress.projectcalico.org/egressGatewayPolicy` annotation, as shown in the example below.
-
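-For example, to use the sample policy below for all pods in a (hypothetical) `default` namespace, you could annotate the namespace like this:
-
-```bash
-# Point every pod in the namespace at the EgressGatewayPolicy named "my-egwpolicy".
-kubectl annotate namespace default \
-  egress.projectcalico.org/egressGatewayPolicy="my-egwpolicy"
-```
-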
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: EgressGatewayPolicy
-metadata:
- name: my-egwpolicy
-spec:
- rules:
- - destination:
- cidr: 10.0.0.0/8
- description: "Local: no gateway"
- - destination:
- cidr: 11.0.0.0/8
- description: "Gateway to on prem"
- gateway:
- namespaceSelector: "projectcalico.org/name == 'default'"
- selector: "egress-code == 'blue'"
- maxNextHops: 2
- - description: "Gateway to internet"
- gateway:
- namespaceSelector: "projectcalico.org/name == 'default'"
- selector: "egress-code == 'red'"
- gatewayPreference: PreferNodeLocal
-```
-
-## Egress gateway policy definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ----- | ------------------------------------------------------------------ | --------------------------------------------------- | ------ |
-| name | Unique name to describe this resource instance. Must be specified. | Alphanumeric string with optional `.`, `_`, or `-`. | string |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| -------- | -------------------------------------- | --------------- | --------------------------------------------------------- | ------- |
-| rules | List of egress gateway policy rules. | | [Egress Gateway Policy Rule](#egress-gateway-policy-rule) | |
-
-### Egress gateway policy rule
-
-| Field | Description | Accepted Values | Schema | Default |
-| ----------------- | ------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------- | ------- |
-| description | A description of the rule. | | string | |
-| destination | CIDR representing a destination. | | [destination](#destination) | |
-| gateway | Egress gateway to be used for a destination. | | [gateway](#egress-gateway) | |
-| gatewayPreference | Hints about egress gateway selection | `None` for using all available egress gateway replicas from the selected deployment, or `PreferNodeLocal` to use only egress gateway replicas on the same local node as the client pod or namespace if available, otherwise fall back to the default behaviour. | | 'None' |
-
-### Destination
-
-| Field | Description | Accepted Values | Schema | Default |
-| ---------- | ------------------------------------------- | --------------------------- | --------------------------- | ------- |
-| cidr | CIDR of destination network | | string | |
-
-### Egress gateway
-
-| Field | Description | Accepted Values | Schema | Default |
-| ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------- | --------------------------- | ------- |
-| selector | Selector to choose an egress gateway deployment. | | string | |
-| namespaceSelector | Selector to choose the namespace of the egress gateway deployment. | | string | |
-| maxNextHops | Specifies the maximum number of egress gateway replicas from the selected deployment that a pod should depend on. Replicas will be chosen in a manner that attempts to balance load across the whole egress gateway replicaset. If unset, or set to "0", egress traffic will behave in the default manner (load balanced over all available gateways). | | string | |
-
-## Supported operations
-
-| Datastore type | Create/Delete | Update | Get/List | Notes |
-| --------------------- | ------------- | ------ | -------- | ----- |
-| Kubernetes API server | Yes | Yes | Yes | |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/felixconfig.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/felixconfig.mdx
deleted file mode 100644
index c1f2c9c1b2..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/felixconfig.mdx
+++ /dev/null
@@ -1,358 +0,0 @@
----
-description: API for this Calico Enterprise resource.
----
-
-# Felix configuration
-
-A Felix configuration resource (`FelixConfiguration`) represents Felix configuration options for the cluster.
-
-For `kubectl` commands, the following case-insensitive aliases may be used to specify the resource type on the CLI: `felixconfiguration.projectcalico.org`, `felixconfigurations.projectcalico.org` as well as abbreviations such as `felixconfiguration.p` and `felixconfigurations.p`.
-
-See [Configuring Felix](../component-resources/node/felix/configuration.mdx) for more details.
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: FelixConfiguration
-metadata:
- name: default
-spec:
- ipv6Support: false
- ipipMTU: 1400
- chainInsertMode: Append
-```
-
-## Felix configuration definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ----- | --------------------------------------------------------- | --------------------------------------------------- | ------ |
-| name | Unique name to describe this resource instance. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string |
-
-- $[prodname] automatically creates a resource named `default` containing the global default configuration settings for Felix.
-
-- The resources with the name `node.<nodename>` contain node-specific overrides, which are applied to the node `<nodename>`. When a node is deleted, the FelixConfiguration resource associated with that node is also deleted. A minimal per-node override is sketched below.
-
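-For example, the following is a minimal sketch of a node-scoped override (the node name `worker-1` and the MTU value are illustrative only):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: FelixConfiguration
-metadata:
-  # Overrides apply only to the node named "worker-1".
-  name: node.worker-1
-spec:
-  ipipMTU: 1380
-```
-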
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| ------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------ | --------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| awsSrcDstCheck | Controls automatically setting [source-destination-check](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html#EIP_Disable_SrcDestCheck) on an AWS EC2 instance running Felix. Setting the value to `Enable` will set the check value in the instance description to `true`. For `Disable`, the check value will be `false`. Setting must be `Disable` if you want the EC2 instance to process traffic not matching the host interface IP address. For example, EKS cluster using Calico CNI with `VXLANMode=CrossSubnet`. Check [IAM role and profile configuration](#aws-iam-rolepolicy-for-source-destination-check-configuration) for setting the necessary permission for this setting to work. | DoNothing, Enable, Disable | string | `DoNothing` |
-| awsSecondaryIPSupport | Controls whether Felix will create secondary AWS ENIs for AWS-backed IP pools. This feature is documented in the [egress gateways on AWS guide](../../networking/egress/egress-gateway-aws.mdx). Should only be enabled on AWS. | `Enabled`, `EnabledENIPerWorkload`, `Disabled` | string | `Disabled` |
-| awsSecondaryIPRoutingRulePriority | Controls the priority of the policy-based routing rules used to implement AWS-backed IP addresses. Should only be changed to avoid conflicts if your nodes have additional policy based routing rules. | 0-4294967295 | int | 101 |
-| awsRequestTimeout | Timeout used for communicating with the AWS API. | `5s`, `10s`, `1m` etc. | duration | `30s` |
-| dropActionOverride | Controls what happens to each packet that is denied by the current $[prodname] policy. Normally the `Drop` or `LogAndDrop` value should be used. However, when experimenting or debugging a scenario that is not behaving as you expect, the `Accept` and `LogAndAccept` values can be useful: then the packet will still be allowed through. When one of the `LogAnd...` values is set, each denied packet is logged in syslog.\* | `Drop`, `Accept`, `LogAndDrop`, `LogAndAccept` | string | `Drop` |
-| chainInsertMode | Controls whether Felix hooks the kernel's top-level iptables chains by inserting a rule at the top of the chain or by appending a rule at the bottom. `Insert` is the safe default since it prevents $[prodname]'s rules from being bypassed. If you switch to `Append` mode, be sure that the other rules in the chains signal acceptance by falling through to the $[prodname] rules, otherwise the $[prodname] policy will be bypassed. | `Insert`, `Append` | string | `Insert` |
-| healthTimeoutOverrides | A list of overrides for Felix's internal liveness/readiness timeouts. | see [below](#health-timeout-overrides) | List of `HealthTimeoutOverride` objects | `[]` |
-| dataplaneWatchdogTimeout | Deprecated, use `healthTimeoutOverrides` instead. Timeout before the main dataplane goroutine is determined to have hung and Felix will report non-live and non-ready. Can be increased if the liveness check incorrectly fails (for example if Felix is running slowly on a heavily loaded system). | `90s`, `120s`, `10m` etc. | duration | `90s` |
-| defaultEndpointToHostAction | This parameter controls what happens to traffic that goes from a workload endpoint to the host itself (after the traffic hits the endpoint egress policy). By default $[prodname] blocks traffic from workload endpoints to the host itself with an iptables "DROP" action. If you want to allow some or all traffic from endpoint to host, set this parameter to `Return` or `Accept`. Use `Return` if you have your own rules in the iptables "INPUT" chain; $[prodname] will insert its rules at the top of that chain, then `Return` packets to the "INPUT" chain once it has completed processing workload endpoint egress policy. Use `Accept` to unconditionally accept packets from workloads after processing workload endpoint egress policy. | Drop, Return, Accept | string | `Drop` |
-| deviceRouteSourceAddress | IPv4 address to set as the source hint for routes programmed by Felix. When not set the source address for local traffic from host to workload will be determined by the kernel. | IPv4 | string | `""` |
-| deviceRouteSourceAddressIPv6 | IPv6 address to set as the source hint for routes programmed by Felix. When not set the source address for local traffic from host to workload will be determined by the kernel. | IPv6 | string | `""` |
-| deviceRouteProtocol | This defines the route protocol added to programmed device routes. | Protocol | int | RTPROT_BOOT |
-| externalNodesCIDRList | A comma-delimited list of CIDRs of external non-calico nodes that can source tunnel traffic for acceptance by calico-nodes. | IPv4 | string | `""` |
-| failsafeInboundHostPorts | UDP/TCP/SCTP protocol/cidr/port groupings that Felix will allow incoming traffic to host endpoints on irrespective of the security policy. This is useful to avoid accidentally cutting off a host with incorrect configuration. The default value allows SSH access, etcd, BGP, DHCP and the Kubernetes API. | | List of [ProtoPort](#protoport) | |
-| failsafeOutboundHostPorts | UDP/TCP/SCTP protocol/port groupings that Felix will allow outgoing traffic from host endpoints to irrespective of the security policy. This is useful to avoid accidentally cutting off a host with incorrect configuration. The default value opens etcd's standard ports to ensure that Felix does not get cut off from etcd as well as allowing DHCP, DNS, BGP and the Kubernetes API. | | List of [ProtoPort](#protoport) | |
-| featureDetectOverride | Used to override feature detection. Values are specified in a comma-separated list with no spaces, for example: "SNATFullyRandom=true,MASQFullyRandom=false,RestoreSupportsLock=". "true" or "false" forces the feature on or off; empty or omitted values are auto-detected. | string | string | `""` |
-| genericXDPEnabled | When enabled, Felix can fallback to the non-optimized `generic` XDP mode. This should only be used for testing since it doesn't improve performance over the non-XDP mode. | true,false | boolean | `false` |
-| interfaceExclude | A comma-separated list of interface names that should be excluded when Felix is resolving host endpoints. The default value ensures that Felix ignores Kubernetes' internal `kube-ipvs0` device. If you want to exclude multiple interface names using a single value, the list supports regular expressions. For regular expressions you must wrap the value with `/`. For example having values `/^kube/,veth1` will exclude all interfaces that begin with `kube` and also the interface `veth1`. | string | string | `kube-ipvs0` |
-| interfacePrefix | The interface name prefix that identifies workload endpoints and so distinguishes them from host endpoint interfaces. Note: in environments other than bare metal, the orchestrators configure this appropriately. For example our Kubernetes and Docker integrations set the 'cali' value, and our OpenStack integration sets the 'tap' value. | string | string | `cali` |
-| ipipEnabled | Optional, you shouldn't need to change this setting as Felix calculates if IPIP should be enabled based on the existing IP Pools. When set, this overrides whether Felix should configure an IPinIP interface on the host. When explicitly disabled in FelixConfiguration, Felix will not clean up addresses from the `tunl0` interface (use this if you need to add addresses to that interface and don't want to have them removed). | `true`, `false`, unset | optional boolean | unset |
-| ipipMTU | The MTU to set on the tunnel device. Zero value means auto-detect. See [Configuring MTU](../../networking/configuring/mtu.mdx) | int | int | `0` |
-| ipsetsRefreshInterval | Period at which Felix re-checks the IP sets in the dataplane to ensure that no other process has accidentally broken $[prodname]'s rules. Set to 0 to disable IP sets refresh. | `5s`, `10s`, `1m` etc. | duration | `10s` |
-| iptablesFilterAllowAction | This parameter controls what happens to traffic that is accepted by a Felix policy chain in the iptables filter table (i.e. a normal policy chain). The default will immediately `Accept` the traffic. Use `Return` to send the traffic back up to the system chains for further processing. | Accept, Return | string | `Accept` |
-| iptablesBackend | This parameter controls which variant of iptables Felix uses. If using Felix on a system that uses the netfilter-backed iptables binaries, set this to `nft`. | Legacy, nft | string | automatic detection |
-| iptablesLockFilePath | Location of the iptables lock file. You may need to change this if the lock file is not in its standard location (for example if you have mapped it into Felix's container at a different path). | string | string | `/run/xtables.lock` |
-| iptablesLockProbeInterval | Time that Felix will wait between attempts to acquire the iptables lock if it is not available. Lower values make Felix more responsive when the lock is contended, but use more CPU. | `5s`, `10s`, `1m` etc. | duration | `50ms` |
-| iptablesLockTimeout | Time that Felix will wait for the iptables lock, or 0, to disable. To use this feature, Felix must share the iptables lock file with all other processes that also take the lock. When running Felix inside a container, this requires the /run directory of the host to be mounted into the $[nodecontainer] or calico/felix container. | `5s`, `10s`, `1m` etc. | duration | `0` (Disabled) |
-| iptablesMangleAllowAction | This parameter controls what happens to traffic that is accepted by a Felix policy chain in the iptables mangle table (i.e. a pre-DNAT policy chain). The default will immediately `Accept` the traffic. Use `Return` to send the traffic back up to the system chains for further processing. | `Accept`, `Return` | string | `Accept` |
-| iptablesMarkMask | Mask that Felix selects its IPTables Mark bits from. Should be a 32 bit hexadecimal number with at least 8 bits set, none of which clash with any other mark bits in use on the system. | netmask | netmask | `0xffff0000` |
-| iptablesNATOutgoingInterfaceFilter | This parameter can be used to limit the host interfaces on which Calico will apply SNAT to traffic leaving a Calico IPAM pool with "NAT outgoing" enabled. This can be useful if you have a main data interface, where traffic should be SNATted and a secondary device (such as the docker bridge) which is local to the host and doesn't require SNAT. This parameter uses the iptables interface matching syntax, which allows `+` as a wildcard. Most users will not need to set this. Example: if your data interfaces are eth0 and eth1 and you want to exclude the docker bridge, you could set this to `eth+` | string | string | `""` |
-| iptablesPostWriteCheckInterval | Period after Felix has done a write to the dataplane that it schedules an extra read back to check the write was not clobbered by another process. This should only occur if another application on the system doesn't respect the iptables lock. | `5s`, `10s`, `1m` etc. | duration | `1s` |
-| iptablesRefreshInterval | Period at which Felix re-checks all iptables state to ensure that no other process has accidentally broken $[prodname]'s rules. Set to 0 to disable iptables refresh. | `5s`, `10s`, `1m` etc. | duration | `90s` |
-| ipv6Support | IPv6 support for Felix | `true`, `false` | boolean | `true` |
-| logFilePath | The full path to the Felix log. Set to `none` to disable file logging. | string | string | `/var/log/calico/felix.log` |
-| logPrefix | The log prefix that Felix uses when rendering LOG rules. | string | string | `calico-packet` |
-| logSeverityFile | The log severity above which logs are sent to the log file. | Same as logSeveritySys | string | `Info` |
-| logSeverityScreen | The log severity above which logs are sent to the stdout. | Same as logSeveritySys | string | `Info` |
-| logSeveritySys | The log severity above which logs are sent to the syslog. Set to `none` for no logging to syslog. | Debug, Info, Warning, Error, Fatal | string | `Info` |
-| logDebugFilenameRegex | controls which source code files have their Debug log output included in the logs. Only logs from files with names that match the given regular expression are included. The filter only applies to Debug level logs. | regex | string | `""` |
-| maxIpsetSize | Maximum size for the ipsets used by Felix. Should be set to a number that is greater than the maximum number of IP addresses that are ever expected in a selector. | int | int | `1048576` |
-| metadataAddr | The IP address or domain name of the server that can answer VM queries for cloud-init metadata. In OpenStack, this corresponds to the machine running nova-api (or in Ubuntu, nova-api-metadata). A value of `none` (case insensitive) means that Felix should not set up any NAT rule for the metadata path. | IPv4, hostname, none | string | `127.0.0.1` |
-| metadataPort | The port of the metadata server. This, combined with global.MetadataAddr (if not 'None'), is used to set up a NAT rule, from 169.254.169.254:80 to MetadataAddr:MetadataPort. In most cases this should not need to be changed. | int | int | `8775` |
-| natOutgoingAddress | The source address to use for outgoing NAT. By default an iptables MASQUERADE rule determines the source address which will use the address on the host interface the traffic leaves on. | IPV4 | string | `""` |
-| policySyncPathPrefix | File system path where Felix notifies services of policy changes over Unix domain sockets. This is required only if you're configuring [L7 logs](../../visibility/elastic/l7/configure.mdx), or [egress gateways](../../networking/egress/index.mdx). Set to `""` to disable. | string | string | `""` |
-| prometheusGoMetricsEnabled | Set to `false` to disable Go runtime metrics collection, which the Prometheus client does by default. This reduces the number of metrics reported, reducing Prometheus load. | boolean | boolean | `true` |
-| prometheusMetricsEnabled | Set to `true` to enable the experimental Prometheus metrics server in Felix. | boolean | boolean | `false` |
-| prometheusMetricsHost | TCP network address that the Prometheus metrics server should bind to. | IPv4, IPv6, Hostname | string | `""` |
-| prometheusMetricsPort | TCP port that the Prometheus metrics server should bind to. | int | int | `9091` |
-| prometheusProcessMetricsEnabled | Set to `false` to disable process metrics collection, which the Prometheus client does by default. This reduces the number of metrics reported, reducing Prometheus load. | boolean | boolean | `true` |
-| prometheusReporterEnabled | Set to `true` to configure Felix to keep a count of recently denied packets and publish these as Prometheus metrics. Note that denied packet metrics are independent of the `dropActionOverride` setting. Specifically, if packets that would normally be denied are being allowed through by a setting of `Accept` or `LogAndAccept`, those packets still get counted as denied packets. | `true`, `false` | boolean | `false` |
-| prometheusReporterPort | The TCP port on which to report denied packet metrics, if `prometheusReporterEnabled` is set to `true`. | | | `9092` |
-| removeExternalRoutes | Whether or not to remove device routes that have not been programmed by Felix. Disabling this will allow external applications to also add device routes. | bool | boolean | `true` |
-| reportingInterval | Interval at which Felix reports its status into the datastore. 0 means disabled and is correct for Kubernetes-only clusters. Must be non-zero in OpenStack deployments. | `5s`, `10s`, `1m` etc. | duration | `30s` |
-| reportingTTL | Time-to-live setting for process-wide status reports. | `5s`, `10s`, `1m` etc. | duration | `90s` |
-| routeRefreshInterval | Period at which Felix re-checks the routes in the dataplane to ensure that no other process has accidentally broken $[prodname]'s rules. Set to 0 to disable route refresh. | `5s`, `10s`, `1m` etc. | duration | `90s` |
-| ipsecMode | Controls which mode IPsec is operating on. The only supported value is `PSK`. An empty value means IPsec is not enabled. | PSK | string | `""` |
-| ipsecAllowUnsecuredTraffic | When set to `false`, only IPsec-protected traffic will be allowed on the packet paths where IPsec is supported. When set to `true`, IPsec will be used but non-IPsec traffic will be accepted. In general, setting this to `true` is less safe since it allows an attacker to inject packets. However, it is useful when transitioning from non-IPsec to IPsec since it allows traffic to flow while the cluster negotiates the IPsec mesh. | `true`, `false` | boolean | `false` |
-| ipsecIKEAlgorithm | IPsec IKE algorithm. Default is NIST suite B recommendation. | string | string | `aes128gcm16-prfsha256-ecp256` |
-| ipsecESPAlgorithm | IPsec ESP algorithm. Default is NIST suite B recommendation. | string | string | `aes128gcm16-ecp256` |
-| ipsecLogLevel | Controls log level for IPsec components. Set to `None` for no logging. | `None`, `Notice`, `Info`, `Debug`, `Verbose` | string | `Info` |
-| ipsecPSKFile | The path to the pre shared key file for IPsec. | string | string | `""` |
-| flowLogsFileEnabled | Set to `true`, enables flow logs. If set to `false` no flow logging will occur. Flow logs are written to a file `flows.log` and sent to Elasticsearch. The location of this file can be configured using the `flowLogsFileDirectory` field. File rotation settings for this `flows.log` file can be configured using the fields `flowLogsFileMaxFiles` and `flowLogsFileMaxFileSizeMB`. Note that flow log exports to Elasticsearch are dependent on flow logs getting written to this file. Setting this parameter to `false` will disable flow logs. | `true`, `false` | boolean | `false` |
-| flowLogsFileDirectory | Set the directory where flow logs files are stored on Linux nodes. This parameter only takes effect when `flowLogsFileEnabled` is set to `true`. | string | string | `/var/log/calico/flowlogs` |
-| flowLogsPositionFilePath | Specify the position of the external pipeline that reads flow logs on Linux nodes. This parameter only takes effect when `FlowLogsDynamicAggregationEnabled` is set to `true`. | string | string | `/var/log/calico/flows.log.pos` |
-| flowLogsFileMaxFiles | Set the number of log files to keep. This parameter only takes effect when `flowLogsFileEnabled` is set to `true`. | int | int | `5` |
-| flowLogsFileMaxFileSizeMB | Set the max size in MB of flow logs files before rotation. This parameter only takes effect when `flowLogsFileEnabled` is set to `true`. | int | int | `100` |
-| flowLogsFlushInterval | The period, in seconds, at which Felix exports the flow logs. | int | int | `300s` |
-| flowLogsFileAggregationKindForAllowed | How much to aggregate the flow logs sent to Elasticsearch for allowed traffic. Bear in mind that changing this value may have a dramatic impact on the volume of flow logs sent to Elasticsearch. | 0-2 | [AggregationKind](#aggregationkind) | 2 |
-| flowLogsFileAggregationKindForDenied | How much to aggregate the flow logs sent to Elasticsearch for denied traffic. Bear in mind that changing this value may have a dramatic impact on the volume of flow logs sent to Elasticsearch. | 0-2 | [AggregationKind](#aggregationkind) | 1 |
-| flowLogsFileIncludeService | When set to `true`, include destination service information in the aggregated flow log. Note that service information will only be included when the flow can be explicitly determined to be bound to a service (e.g. pre-DNAT destination matches a service ClusterIP). | `true`, `false` | boolean | `false` |
-| flowLogsFileIncludeLabels | When set to `true`, include source and destination endpoint labels in the aggregated flow log. Note that only Kubernetes endpoints or network sets are included; arbitrary networks do not contain labels. | `true`, `false` | boolean | `false` |
-| flowLogsFileIncludePolicies | When set to `true`, include all policies in the aggregated flow logs that acted upon and matches the flow log traffic. | `true`, `false` | boolean | `false` |
-| flowLogsDestDomainsByClient | When set to `true`, top-level domains are strictly associated with the source IP that originally queried the domains. | `true`, `false` | boolean | `true` |
-| flowLogsEnableNetworkSets | When set to `true`, include an arbitrary network set in the aggregated flow log that matches the IP address of the flow log endpoint. | `true`, `false` | boolean | `false` |
-| flowLogsCollectProcessInfo | When set to `true`, Felix will load the kprobe BPF programs to collect process info. | `true`, `false` | boolean | `false` |
-| flowLogsCollectTcpStats | When set to `true`, Felix will collect the TCP socket stats. | `true`, `false` | boolean | `true` |
-| flowLogsCollectProcessPath | When set to `true`, along with flowLogsCollectProcessInfo, each flow log will include the full path of the executable and the arguments with which the executable was invoked. | `true`, `false` | boolean | `false` |
-| flowLogsFilePerFlowProcessLimit | Specify the maximum number of flow log entries with distinct process information beyond which process information will be aggregated | int | int | `2` |
-| flowLogsFileNatOutgoingPortLimit | Specify the maximum number of distinct post SNAT ports that will appear in the flowLogs | int | int | `3` |
-| flowLogsFilePerFlowProcessArgsLimit | Specify the maximum number of unique arguments in the flowlogs beyond which process arguments will be aggregated | int | int | `5` |
-| flowLogsFileDomainsLimit | Specify the maximum number of top-level domains to include in a flow log. This only applies to source reported flows to destinations external to the cluster. | int | int | `5` |
-| statsDumpFilePath | Specify the position of the file used for dumping flow log statistics on Linux nodes. Note this is an internal setting that users shouldn't need to modify. | string | string | `/var/log/calico/stats/dump` |
-| routeTableRange | _deprecated in favor of `RouteTableRanges`_ Calico programs additional Linux route tables for various purposes. `RouteTableRange` specifies the indices of the route tables that Calico should use. | | [RouteTableRanges](#routetablerange) | `""` |
-| routeTableRanges | Calico programs additional Linux route tables for various purposes. `RouteTableRanges` specifies a set of table index ranges that Calico should use. Deprecates `RouteTableRange`, overrides `RouteTableRange` | | [RouteTableRanges](#routetableranges) | `[{"min": 1, "max": 250}]` |
-| routeSyncDisabled | Set to `true` to disable Calico programming routes to local workloads. | boolean | boolean | `false` |
-| serviceLoopPrevention | When [service IP advertisement is enabled](../../networking/configuring/advertise-service-ips.mdx), prevent routing loops to service IPs that are not in use, by dropping or rejecting packets that do not get DNAT'd by kube-proxy. Unless set to "Disabled", in which case such routing loops continue to be allowed. | `Drop`, `Reject`, `Disabled` | string | `Drop` |
-| workloadSourceSpoofing | Controls whether pods can enable source IP address spoofing with the `cni.projectcalico.org/allowedSourcePrefixes` annotation. When set to `Any`, pods can use this annotation to send packets from any IP address. | `Any`, `Disabled` | string | `Disabled` |
-| vxlanEnabled | Optional, you shouldn't need to change this setting as Felix calculates if VXLAN should be enabled based on the existing IP Pools. When set, this overrides whether Felix should create the VXLAN tunnel device for VXLAN networking. | `true`, `false`, unset | optional boolean | unset |
-| vxlanMTU | MTU to use for the IPv4 VXLAN tunnel device. Zero value means auto-detect. Also controls NodePort MTU when eBPF enabled. | int | int | `0` |
-| vxlanMTUV6 | MTU to use for the IPv6 VXLAN tunnel device. Zero value means auto-detect. Also controls NodePort MTU when eBPF enabled. | int | int | `0` |
-| vxlanPort | Port to use for VXLAN traffic. A value of `0` means "use the kernel default". | int | int | `4789` |
-| vxlanVNI | Virtual network ID to use for VXLAN traffic. A value of `0` means "use the kernel default". | int | int | `4096` |
-| allowVXLANPacketsFromWorkloads | Set to `true` to allow VXLAN encapsulated traffic from workloads. | boolean | boolean | `false` |
-| allowIPIPPacketsFromWorkloads | Set to `true` to allow IPIP encapsulated traffic from workloads. | boolean | boolean | `false` |
-| windowsFlowLogsFileDirectory | Set the directory where flow logs files are stored on Windows nodes. This parameter only takes effect when `flowLogsFileEnabled` is set to `true`. | string | string | `c:\\TigeraCalico\\flowlogs` |
-| windowsFlowLogsPositionFilePath | Specify the position of the external pipeline that reads flow logs on Windows nodes. This parameter only takes effect when `FlowLogsDynamicAggregationEnabled` is set to `true`. | string | string | `c:\\TigeraCalico\\flowlogs\\flows.log.pos` |
-| windowsStatsDumpFilePath | Specify the position of the file used for dumping flow log statistics on Windows nodes. Note this is an internal setting that users shouldn't need to modify. | string | string | `c:\\TigeraCalico\\stats\\dump` |
-| windowsDNSCacheFile | Specify the name of the file that Calico uses to preserve learnt DNS information when restarting. | string | string | `c:\\TigeraCalico\\felix-dns-cache.txt` |
-| windowsDNSExtraTTL | Specify extra time in seconds to keep IPs and alias names that are learnt from DNS, in addition to each name or IP's advertised TTL. | int | int | `120` |
-| wireguardEnabled | Enable encryption on WireGuard supported nodes in cluster. When enabled, pod to pod traffic will be sent over encrypted tunnels between the nodes. | `true`, `false` | boolean | `false` |
-| wireguardEnabledV6 | Enable encryption for IPv6 on WireGuard supported nodes in cluster. When enabled, pod to pod traffic will be sent over encrypted tunnels between the nodes. | `true`, `false` | boolean | `false` |
-| wireguardInterfaceName | Name of the WireGuard interface created by Felix. If you change the name, and want to clean up the previously-configured interface names on each node, this is a manual process. | string | string | wireguard.cali |
-| wireguardInterfaceNameV6 | Name of the IPv6 WireGuard interface created by Felix. If you change the name, and want to clean up the previously-configured interface names on each node, this is a manual process. | string | string | wg-v6.cali |
-| wireguardListeningPort | Port used by WireGuard tunnels. Felix sets up WireGuard tunnel on each node specified by this port. Available for configuration only in the global FelixConfiguration resource; setting it per host, config-file or environment variable will not work. | 1-65535 | int | 51820 |
-| wireguardListeningPortV6 | Port used by IPv6 WireGuard tunnels. Felix sets up an IPv6 WireGuard tunnel on each node specified by this port. Available for configuration only in the global FelixConfiguration resource; setting it per host, config-file or environment variable will not work. | 1-65535 | int | 51821 |
-| wireguardMTU | MTU set on the WireGuard interface created by Felix. Zero value means auto-detect. See [Configuring MTU](../../networking/configuring/mtu.mdx). | int | int | 0 |
-| wireguardMTUV6 | MTU set on the IPv6 WireGuard interface created by Felix. Zero value means auto-detect. See [Configuring MTU](../../networking/configuring/mtu.mdx). | int | int | 0 |
-| wireguardRoutingRulePriority | WireGuard routing rule priority value set up by Felix. If you change the default value, set it to a value most appropriate to routing rules for your nodes. | 1-32765 | int | 99 |
-| wireguardHostEncryptionEnabled | **Experimental**: Adds host-namespace workload IP's to WireGuard's list of peers. Should **not** be enabled when WireGuard is enabled on a cluster's control plane node, as networking deadlock can occur. | true, false | boolean | false |
-| wireguardKeepAlive | WireguardKeepAlive controls Wireguard PersistentKeepalive option. Set 0 to disable. [Default: 0] | `5s`, `10s`, `1m` etc. | duration | `0` |
-| xdpRefreshInterval | Period at which Felix re-checks the XDP state in the dataplane to ensure that no other process has accidentally broken $[prodname]'s rules. Set to 0 to disable XDP refresh. | `5s`, `10s`, `1m` etc. | duration | `90s` |
-| xdpEnabled | When `bpfEnabled` is `false`: enable XDP acceleration for host endpoint policies. When `bpfEnabled` is `true`, XDP is automatically used for Calico policy where that makes sense, regardless of this setting. [Default: `true`] | true,false | boolean | `true` |
-| dnsCacheFile | The name of the file that Felix uses to preserve learnt DNS information when restarting. | file name | string | `/var/run/calico/felix-dns-cache.txt` |
-| dnsCacheSaveInterval | The period, in seconds, at which Felix saves learnt DNS information to the cache file. | `5s`, `10s`, `1m` etc. | duration | `60s` |
-| dnsCacheEpoch | An arbitrary number that can be changed, at runtime, to tell Felix to discard all its learnt DNS information. | int | int | `0` |
-| dnsExtraTTL | Extra time to keep IPs and alias names that are learnt from DNS, in addition to each name or IP's advertised TTL. | `5s`, `10s`, `1m` etc. | duration | `0s` |
-| dnsTrustedServers | The DNS servers that Felix should trust. Each entry here must be `<ip>[:<port>]` - indicating an explicit DNS server IP - or `k8s-service:[<namespace>/]<name>[:port]` - indicating a Kubernetes DNS service. `<port>` defaults to the first service port, or 53 for an IP, and `<namespace>` to `kube-system`. An IPv6 address with a port must use the square brackets convention, for example `[fd00:83a6::12]:5353`. Note that Felix (calico-node) will need RBAC permission to read the details of each service specified by a `k8s-service:...` form. | IPs or service names | comma-separated strings | `k8s-service:kube-dns` |
-| dnsLogsFileEnabled | Set to `true` to enable DNS logs; set to `false` to disable DNS logging. DNS logs are written to a `dns.log` file and sent to Elasticsearch. The location of this file can be configured using the `DNSLogsFileDirectory` field, and file rotation settings can be configured using the `DNSLogsFileMaxFiles` and `DNSLogsFileMaxFileSizeMB` fields. Note that DNS log exports to Elasticsearch depend on DNS logs being written to this file. | `true`, `false` | boolean | `false` |
-| dnsLogsFileDirectory | The directory where DNS logs files are stored. This parameter only takes effect when `DNSLogsFileEnabled` is `true`. | directory | string | `/var/log/calico/dnslogs` |
-| dnsLogsFileMaxFiles | The number of files to keep when rotating DNS log files. This parameter only takes effect when `DNSLogsFileEnabled` is `true`. | int | int | `5` |
-| dnsLogsFileMaxFileSizeMB | The max size in MB of DNS log files before rotation. This parameter only takes effect when `DNSLogsFileEnabled` is `true`. | int | int | `100` |
-| dnsLogsFlushInterval | The period at which Felix exports DNS logs. | `5s`, `10s`, `1m` etc. | duration | `300s` |
-| dnsLogsFileAggregationKind | How much to aggregate DNS logs. Bear in mind that changing this value may have a dramatic impact on the volume of DNS logs sent to Elasticsearch. `0` means no aggregation, `1` means aggregate similar DNS logs from workloads in the same ReplicaSet. | `0`,`1` | int | `1` |
-| dnsLogsFileIncludeLabels | Whether to include client and server workload labels in DNS logs. | `true`, `false` | boolean | `true` |
-| dnsLogsFilePerNodeLimit | Limit on the number of DNS logs that can be emitted within each flush interval. When this limit has been reached, Felix counts the number of unloggable DNS responses within the flush interval, and emits a WARNING log with that count at the same time as it flushes the buffered DNS logs. | int | int | `0` (no limit) |
-| dnsLogsLatency | Whether to include measurements of DNS request/response latency in each DNS log. | `true`, `false` | boolean | `true` |
-| dnsPolicyMode | DNSPolicyMode specifies how DNS policy programming will be handled. | `NoDelay`, `DelayDNSResponse`, `DelayDeniedPacket` | [DNSPolicyMode](#dnspolicymode) | `DelayDeniedPacket` |
-| dnsPolicyNfqueueID | DNSPolicyNfqueueID is the NFQUEUE ID to use for DNS Policy re-evaluation when the domains IP hasn't been programmed to ipsets yet. This value can be changed to avoid conflicts with other users of NFQUEUEs. Used when `DNSPolicyMode` is `DelayDeniedPacket`. | 0-65535 | int | `100` |
-| dnsPolicyNfqueueSize | DNSPolicyNfqueueSize is the size of the NFQUEUE for DNS policy re-evaluation. This is the maximum number of denied packets that may be queued up pending re-evaluation. Used when `DNSPolicyMode` is `DelayDeniedPacket`. | 0-65535 | int | `100` |
-| dnsPacketsNfqueueID | DNSPacketsNfqueueID is the NFQUEUE ID to use for capturing DNS packets to ensure programming IPSets occurs before the response is released. Used when `DNSPolicyMode` is `DelayDNSResponse`. | 0-65535 | int | `101` |
-| dnsPacketsNfqueueSize | DNSPacketsNfqueueSize is the size of the NFQUEUE for captured DNS packets. This is the maximum number of DNS packets that may be queued awaiting programming in the dataplane. Used when `DNSPolicyMode` is `DelayDNSResponse`. | 0-65535 | int | `100` |
-| dnsPacketsNfqueueMaxHoldDuration | DNSPacketsNfqueueMaxHoldDuration is the max length of time to hold on to a DNS response while waiting for the dataplane to be programmed. Used when `DNSPolicyMode` is `DelayDNSResponse`. | `5s`, `10s`, `1m` etc. | duration | `3s` |
-| bpfEnabled | Enable eBPF dataplane mode. eBPF mode has some limitations, see the [HOWTO guide](../../operations/ebpf/enabling-ebpf.mdx) for more details. | true, false | boolean | false |
-| bpfDisableUnprivileged | If true, Felix sets the kernel.unprivileged_bpf_disabled sysctl to disable unprivileged use of BPF. This ensures that unprivileged users cannot access Calico's BPF maps and cannot insert their own BPF programs to interfere with the ones that $[prodname] installs. | true, false | boolean | true |
-| bpfLogLevel | In eBPF dataplane mode, the log level used by the BPF programs. The logs are emitted to the BPF trace pipe, accessible with the command `tc exec bpf debug`. This is a tech preview feature and subject to change in future releases. | Off,Info,Debug | string | Off |
-| bpfDataIfacePattern | In eBPF dataplane mode, controls the interfaces to which Felix attaches BPF programs in order to catch traffic to/from the external network. This needs to match the interfaces that Calico workload traffic flows over as well as any interfaces that handle incoming traffic to NodePorts and services from outside the cluster. It should not match the workload interfaces (usually named cali...). This is a tech preview feature and subject to change in future releases. | regular expression | string | ^(en.*|eth.*|tunl0$) |
-| bpfConnectTimeLoadBalancingEnabled | In eBPF dataplane mode, controls whether Felix installs the connect-time load balancer. In the current release, the connect-time load balancer is required for the host to reach Kubernetes services. This is a tech preview feature and subject to change in future releases. | true,false | boolean | true |
-| bpfExternalServiceMode | In eBPF dataplane mode, controls how traffic from outside the cluster to NodePorts and ClusterIPs is handled. In Tunnel mode, packet is tunneled from the ingress host to the host with the backing pod and back again. In DSR mode, traffic is tunneled to the host with the backing pod and then returned directly; this requires a network that allows direct return. | Tunnel,DSR | string | Tunnel |
-| bpfKubeProxyIptablesCleanupEnabled | In eBPF dataplane mode, controls whether Felix will clean up the iptables rules created by the Kubernetes `kube-proxy`; should only be enabled if `kube-proxy` is not running. This is a tech preview feature and subject to change in future releases. | true,false | boolean | true |
-| bpfKubeProxyMinSyncPeriod | In eBPF dataplane mode, controls the minimum time between dataplane updates for Felix's embedded `kube-proxy` implementation. | `5s`, `10s`, `1m` etc. | duration | `1s` |
-| bpfKubeProxyEndpointSlicesEnabled | In eBPF dataplane mode, controls whether Felix's embedded kube-proxy derives its services from Kubernetes' EndpointSlices resources. Using EndpointSlices is more efficient but it requires EndpointSlices support to be enabled at the Kubernetes API server. | true,false | boolean | false |
-| bpfMapSizeConntrack | In eBPF dataplane mode, controls the size of the conntrack map. | int | int | 512000 |
-| bpfMapSizeIPSets | In eBPF dataplane mode, controls the size of the ipsets map. | int | int | 1048576 |
-| bpfMapSizeNATAffinity | In eBPF dataplane mode, controls the size of the NAT affinity map. | int | int | 65536 |
-| bpfMapSizeNATFrontend | In eBPF dataplane mode, controls the size of the NAT front end map. | int | int | 65536 |
-| bpfMapSizeNATBackend | In eBPF dataplane mode, controls the size of the NAT back end map. | int | int | 262144 |
-| bpfMapSizeRoute | In eBPF dataplane mode, controls the size of the route map. | int | int | 262144 |
-| bpfPolicyDebugEnabled | In eBPF dataplane mode, controls whether Felix collects a policy dump for each interface. | true, false | boolean | true |
-| routeSource | Where Felix gets its routing information from for VXLAN and the BPF dataplane. The CalicoIPAM setting is more efficient because it supports route aggregation, but it only works when Calico's IPAM or host-local IPAM is in use. Use the WorkloadIPs setting if you are using Calico's VXLAN or BPF dataplane and not using Calico IPAM or host-local IPAM. | CalicoIPAM,WorkloadIPs | string | `CalicoIPAM` |
-| mtuIfacePattern | Pattern used to discover the host's interface for MTU auto-detection. | regex | string | ^((en|wl|ww|sl|ib)[opsvx].*|(eth|wlan|wwan).*) |
-| bpfForceTrackPacketsFromIfaces | Forces traffic from these interfaces in BPF mode to skip Calico's iptables NOTRACK rule, allowing traffic from those interfaces to be tracked by Linux conntrack. Use only for interfaces that are not used for the Calico fabric, for example, a docker bridge device for non-Calico-networked containers. | A list of strings | A list of strings | docker+ |
-| bpfDisableGROForIfaces | A regular expression that controls the interfaces for which Felix disables the Generic Receive Offload (GRO) option. It should not match the workload interfaces (usually named cali...). | regex | string | "" |
-| egressIPSupport | Defines three different support modes for egress gateway function. `Disabled` means egress gateways are not supported. `EnabledPerNamespace` means egress gateway function is enabled and can be configured on a per-namespace basis (but per-pod egress annotations are ignored). `EnabledPerNamespaceOrPerPod` means egress gateway function is enabled and can be configured per-namespace or per-pod (with per-pod egress annotations overriding namespace annotations). | Disabled, EnabledPerNamespace, EnabledPerNamespaceOrPerPod | string | `Disabled` |
-| egressIPVXLANPort | Port to use for egress gateway VXLAN traffic. A value of `0` means "use the kernel default". | int | int | `4790` |
-| egressIPVXLANVNI | Virtual network ID to use for egress gateway VXLAN traffic. A value of `0` means "use the kernel default". | int | int | `4097` |
-| egressIPRoutingRulePriority | Controls the priority value to use for the egress gateway routing rule. | int | int | `100` |
-| egressGatewayPollInterval | Controls the interval at which Felix polls remote egress gateways to check their health. Only egress gateways with a named "health" port are polled in this way. Egress gateways that fail the health check are taken out of use as if they had been deleted. | `5s`, `10s`, `1m` etc. | duration | `10s` |
-| egressGatewayPollFailureCount | Controls the minimum number of poll failures before a remote Egress Gateway is considered to have failed. | int | int | `3` |
-| captureDir | Controls the directory where packet capture files are stored. | string | string | `/var/log/calico/pcap` |
-| captureMaxSizeBytes | Controls the maximum size in bytes for a packet capture file before rotation. | int | int | `10000000` |
-| captureRotationSeconds | Controls the rotation period in seconds for a packet capture file. | int | int | `3600` |
-| captureMaxFiles | Controls the maximum number of rotated packet capture files. | int | int | `2` |
-
-\* When `dropActionOverride` is set to `LogAndDrop` or `LogAndAccept`, the `syslog` entries look something like the following.
-
-```
-May 18 18:42:44 ubuntu kernel: [ 1156.246182] calico-drop: IN=tunl0 OUT=cali76be879f658 MAC= SRC=192.168.128.30 DST=192.168.157.26 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56743 DF PROTO=TCP SPT=56248 DPT=80 WINDOW=29200 RES=0x00 SYN URGP=0 MARK=0xa000000
-```
-
-\*\* Duration is denoted by the numerical amount followed by the unit of time. Valid units of time include nanoseconds (ns), microseconds (µs), milliseconds (ms), seconds (s), minutes (m), and hours (h). Units of time can also be combined, e.g. `3m30s` represents 3 minutes and 30 seconds. Amounts that can be expressed in larger units are converted automatically, e.g. `90s` becomes `1m30s`.
-
-
-
-`genericXDPEnabled` and `xdpRefreshInterval` are only relevant when `bpfEnabled` is `false` and
-`xdpEnabled` is `true`; in other words when XDP is being used to accelerate denial-of-service
-prevention policies in the iptables dataplane.
-
-When `bpfEnabled` is `true`, the XDP settings have no effect; in BPF mode, policy enforcement is
-always accelerated using the best available BPF technology.
-
-### Health Timeout Overrides
-
-Felix has internal liveness and readiness watchdog timers that monitor its various loops.
-If a loop fails to "check in" within the allotted timeout then Felix will report non-Ready
-or non-Live on its health port (which is monitored by Kubelet in a Kubernetes system).
-If Felix reports non-Live, this can result in the Pod being restarted.
-
-In Kubernetes, if you see the calico-node Pod readiness or liveness checks fail
-intermittently, check the calico-node Pod log for a message from Felix that reports the
-overall health status (the list of components depends on which features are enabled):
-
-```
-+---------------------------+---------+----------------+-----------------+--------+
-| COMPONENT | TIMEOUT | LIVENESS | READINESS | DETAIL |
-+---------------------------+---------+----------------+-----------------+--------+
-| CalculationGraph | 30s | reporting live | reporting ready | |
-| FelixStartup | 0s | reporting live | reporting ready | |
-| InternalDataplaneMainLoop | 1m30s | reporting live | reporting ready | |
-+---------------------------+---------+----------------+-----------------+--------+
-```
-
-If some health timeouts show as "timed out" it may help to apply an override
-using the `healthTimeoutOverrides` field:
-
-```yaml
-...
-spec:
- healthTimeoutOverrides:
- - name: InternalDataplaneMainLoop
- timeout: "5m"
- - name: CalculationGraph
- timeout: "1m30s"
- ...
-```
-
-A timeout value of 0 disables the timeout.
-
-### ProtoPort
-
-| Field | Description | Accepted Values | Schema |
-| -------- | -------------------- | ------------------------------------ | ------ |
-| port | The exact port match | 0-65535 | int |
-| protocol | The protocol match | tcp, udp, sctp | string |
-| net | The CIDR match | any valid CIDR (e.g. 192.168.0.0/16) | string |
-
-Keep in mind that in the following example, `net: ""` and `net: "0.0.0.0/0"` are treated the same during policy enforcement.
-
-```yaml noValidation
- ...
-spec:
- failsafeInboundHostPorts:
- - net: "192.168.1.1/32"
- port: 22
- protocol: tcp
- - net: ""
- port: 67
- protocol: udp
- failsafeOutboundHostPorts:
- - net: "0.0.0.0/0"
- port: 67
- protocol: udp
- ...
-```
-
-### AggregationKind
-
-| Value | Description |
-| ----- | ---------------------------------------------------------------------------------------- |
-| 0 | No aggregation |
-| 1 | Aggregate all flows that share a source port on each node |
-| 2 | Aggregate all flows that share source ports or are from the same ReplicaSet on each node |
-
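-For illustration, a minimal FelixConfiguration sketch that sets flow log aggregation levels. This assumes the `flowLogsFileAggregationKindForAllowed` and `flowLogsFileAggregationKindForDenied` fields covered earlier in this reference; treat it as a sketch rather than a definitive configuration.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: FelixConfiguration
-metadata:
-  name: default
-spec:
-  # 2 = aggregate flows that share source ports or come from the same ReplicaSet
-  flowLogsFileAggregationKindForAllowed: 2
-  # 1 = aggregate flows that share a source port on each node
-  flowLogsFileAggregationKindForDenied: 1
-```
-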
-### DNSPolicyMode
-
-| Value | Description |
-| ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| DelayDeniedPacket | Felix delays any denied packet that traversed a policy containing egress domain matches but was not matched by them. The packet is released after a fixed time, or once the destination IP address has been programmed. |
-| DelayDNSResponse | Felix delays any DNS response until related IPSets are programmed. This introduces some latency to all DNS packets (even when no IPSet programming is required), but it ensures policy hit statistics are accurate. This is the recommended setting when you are making use of staged policies or policy rule hit statistics. |
-| NoDelay | Felix does not introduce any delay to the packets. DNS rules may not have been programmed by the time the first packet traverses the policy rules. Client applications need to handle reconnection attempts if initial connection attempts fail. This may be problematic for some applications or for very low DNS TTLs. |
-
-On Windows, or when using the eBPF dataplane, this setting is ignored and `NoDelay` is always used.
-
-A Linux kernel version of 3.13 or greater is required to use `DelayDNSResponse`. For earlier kernel versions, this value falls back to `DelayDeniedPacket`.
-
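-As an illustration, a minimal FelixConfiguration sketch that selects the `DelayDNSResponse` mode using the fields described above; the NFQUEUE value shown is the documented default:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: FelixConfiguration
-metadata:
-  name: default
-spec:
-  # Hold DNS responses until the related IP sets have been programmed
-  dnsPolicyMode: DelayDNSResponse
-  # NFQUEUE used for captured DNS packets in this mode
-  dnsPacketsNfqueueID: 101
-```
-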
-### RouteTableRange
-
-The `RouteTableRange` option is now deprecated in favor of [RouteTableRanges](#routetableranges).
-
-| Field | Description | Accepted Values | Schema |
-| ----- | -------------------- | --------------- | ------ |
-| min | Minimum index to use | 1-250 | int |
-| max | Maximum index to use | 1-250 | int |
-
-### RouteTableRanges
-
-`RouteTableRanges` is a list of `RouteTableRange` objects:
-
-| Field | Description | Accepted Values | Schema |
-| ----- | -------------------- | --------------- | ------ |
-| min | Minimum index to use | 1 - 4294967295 | int |
-| max | Maximum index to use | 1 - 4294967295 | int |
-
-Each item in the `RouteTableRanges` list designates a range of routing tables available to Calico. By default, Calico will use a single range of `1-250`. If a range spans Linux's reserved table range (`253-255`) then those tables are automatically excluded from the list. It's possible that other table ranges may also be reserved by third-party systems unknown to Calico. In that case, multiple ranges can be defined to target tables below and above the sensitive ranges:
-
-```sh
-# target tables 65-99 and 256-1000, skipping 100-255
-calicoctl patch felixconfig default --type=merge -p '{"spec":{"routeTableRanges": [{"min": 65, "max": 99}, {"min": 256, "max": 1000}] }}'
-```
-
-_Note_, for performance reasons, the maximum total number of routing tables that Felix will accept is 65535 (2^16 - 1).
-
-Specifying both the `RouteTableRange` and `RouteTableRanges` arguments is not supported and will result in an error from the API.
-
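-The same configuration can also be expressed declaratively in the FelixConfiguration spec; a sketch under the same assumptions as the patch above:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: FelixConfiguration
-metadata:
-  name: default
-spec:
-  # Target tables 65-99 and 256-1000, skipping the reserved 100-255 range
-  routeTableRanges:
-    - min: 65
-      max: 99
-    - min: 256
-      max: 1000
-```
-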
-### AWS IAM Role/Policy for source-destination-check configuration
-
-Setting `awsSrcDstCheck` to `Disable` will automatically disable source-destination-check on EC2 instances in a cluster, provided necessary IAM roles and policies are set. One of the policies assigned to IAM role of cluster nodes must contain a statement similar to the following:
-
-```
-{
- "Effect": "Allow",
- "Action": [
- "ec2:DescribeInstances",
- "ec2:ModifyNetworkInterfaceAttribute"
- ],
- "Resource": "*"
-}
-```
-
-If no policy attached to the node roles contains the above statement, attach a new policy. For example, if a node role is `test-cluster-nodeinstance-role`, open that IAM role in the AWS console and, in the `Permissions policies` list, add a new inline policy that includes the above statement in its JSON definition. For detailed information, see [AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html?icmpid=docs_iam_console).
-
-For an EKS cluster, the necessary IAM role and policy is available by default. No further actions are needed.
-
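-For reference, a minimal FelixConfiguration sketch that sets `awsSrcDstCheck`; this is only a sketch and assumes the IAM permissions above are already in place:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: FelixConfiguration
-metadata:
-  name: default
-spec:
-  # Automatically disable source-destination-check on the cluster's EC2 instances
-  awsSrcDstCheck: Disable
-```
-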
-## Supported operations
-
-| Datastore type | Create | Delete | Delete (Global `default`) | Update | Get/List | Notes |
-| --------------------- | ------ | ------ | ------------------------- | ------ | -------- | ----- |
-| Kubernetes API server | Yes | Yes | No | Yes | Yes | |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/globalalert.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/globalalert.mdx
deleted file mode 100644
index 2e02283b4c..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/globalalert.mdx
+++ /dev/null
@@ -1,317 +0,0 @@
----
-description: API for this Calico Enterprise resource.
----
-
-# Global Alert
-
-A global alert resource represents a query that is periodically run
-against data sets collected by $[prodname] whose findings are
-added to the Alerts page in $[prodname] Manager. Alerts may
-search for the existence of rows in a query, or when aggregated metrics
-satisfy a condition.
-
-$[prodname] supports alerts on the following data sets:
-
-- [Audit logs](../../visibility/elastic/audit-overview.mdx)
-- [DNS logs](../../visibility/elastic/dns/index.mdx)
-- [Flow logs](../../visibility/elastic/flow/index.mdx)
-- [L7 logs](../../visibility/elastic/l7/index.mdx)
-
-For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases
-can be used to specify the resource type on the CLI:
-`globalalert.projectcalico.org`, `globalalerts.projectcalico.org` and abbreviations such as
-`globalalert.p` and `globalalerts.p`.
-
-## Sample YAML
-
-```yaml noValidation
-apiVersion: projectcalico.org/v3
-kind: GlobalAlert
-metadata:
- name: sample
-spec:
- summary: 'Sample'
- description: 'Sample ${source_namespace}/${source_name_aggr}'
- severity: 100
- dataSet: flows
- query: action=allow
- aggregateBy: [source_namespace, source_name_aggr]
- field: num_flows
- metric: sum
- condition: gt
- threshold: 0
-```
-
-## GlobalAlert definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ----- | ----------------------- | ----------------------------------------- | ------ |
-| name | The name of this alert. | Lower-case alphanumeric with optional `-` | string |
-
-### Spec
-
-| Field | Description | Type | Required | Acceptable Values | Default |
-| ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------- | ----------------------------------------- | ---------------------------- | -------------------------------- |
-| type | Type will dictate how the fields of the GlobalAlert will be utilized. Each `type` will have different usages and/or defaults for the other GlobalAlert fields as described in the table. | string | no | RuleBased | RuleBased |
-| description | Human-readable description of the template. | string | yes |
-| summary | Template for the description field in generated events. See the summary section below for more details. `description` is used if this is omitted. | string | no |
-| severity | Severity of the alert for display in Manager. | int | yes | 1 - 100 |
-| dataSet | Which data set to execute the alert against. | string | if `type` is `RuleBased` | audit, dns, flows, l7, vulnerability |
-| period | How often the defined query will run, if `type` is `RuleBased`. | duration | no | 1h 2m 3s | 5m, 15m if `type` is `RuleBased` |
-| lookback | Specifies how far back in time data is to be collected. Must exceed audit log flush interval, `dnsLogsFlushInterval`, or `flowLogsFlushInterval` as appropriate. | duration | no | 1h 2m 3s | 10m |
-| query | Which data to include from the source data set. Written in a domain-specific query language. See the query section below. | string | no |
-| aggregateBy | An optional list of fields to aggregate results. | string array | no |
-| field | Which field to apply the metric to, when using a metric other than count. | string | if metric is one of avg, max, min, or sum |
-| metric | A metric to apply to aggregated results. `count` is the number of log entries matching the aggregation pattern. Others are applied only to numeric fields in the logs. | string | no | avg, max, min, sum, count |
-| condition | Compare the value of the metric to the threshold using this condition. | string | if metric defined | eq, not_eq, lt, lte, gt, gte |
-| threshold | A numeric value to compare the value of the metric against. | float | if metric defined |
-| substitutions | An optional list of values to replace variable names in query. | List of [GlobalAlertSubstitution](#globalalertsubstitution) | no |
-
-### GlobalAlertSubstitution
-
-| Field | Description | Type | Required |
-| ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------ | -------- |
-| name | The name of the global alert substitution. It will be referenced by the variable names in query. Duplicate names are not allowed in the substitutions list. | string | yes |
-| values | A list of values for this substitution. Wildcard operators asterisk (`*`) and question mark (`?`) are supported. | string array | yes |
-
-### Status
-
-| Field | Description |
-| --------------- | ------------------------------------------------------------------------------------------- |
-| lastUpdate | When the alert was last modified on the backend. |
-| active | Whether the alert is active on the backend. |
-| healthy | Whether the alert is in an error state or not. |
-| lastExecuted | When the query for the alert last ran. |
-| lastEvent | When the condition of the alert was last satisfied and an alert was successfully generated. |
-| errorConditions | List of errors preventing operation of the updates or search. |
-
-## Query
-
-Alerts use a domain-specific query language to select which records
-from the data set should be used in the alert. This could be used to
-identify flows with specific features, or to select (or omit) certain
-namespaces from consideration.
-
-The query language is composed of any number of selectors, combined
-with boolean expressions (`AND`, `OR`, and `NOT`), set expressions
-(`IN` and `NOTIN`) and bracketed subexpressions. These are translated
-by $[prodname] to Elastic DSL queries that are executed on the backend.
-
-Set expressions support wildcard operators asterisk (`*`) and question mark (`?`).
-The asterisk sign matches zero or more characters and the question mark matches a single character.
-Set values can be embedded into the query string or reference the values
-in the global alert substitution list.
-
-A selector consists of a key, comparator, and value. Keys and values
-may be identifiers consisting of alphanumerics and underscores (`_`)
-with the first character being alphabetic or an underscore, or may be
-quoted strings. Values may also be integer or floating point numbers.
-Comparators may be `=` (equal), `!=` (not equal), `<` (less than),
-`<=` (less than or equal), `>` (greater than), or `>=` (greater than
-or equal).
-
-Keys must be indexed fields in their corresponding data set. See the
-appendix for a list of valid keys in each data set.
-
-Examples:
-
-- `query: "count > 0"`
-- `query: "\"servers.ip\" = \"127.0.0.1\""`
-
-Selectors may be combined using `AND`, `OR`, and `NOT` boolean expressions,
-`IN` and `NOTIN` set expressions, and bracketed subexpressions.
-
-Examples:
-
-- `query: "count > 100 AND client_name=mypod"`
-- `query: "client_namespace = ns1 OR client_namespace = ns2"`
-- `query: "count > 100 AND NOT (client_namespace = ns1 OR client_namespace = ns2)"`
-- `query: "(qtype = A OR qtype = AAAA) AND rcode != NoError"`
-- `query: "process_name IN {\"proc1?\", \"*proc2\"} AND source_namespace = ns1"`
-- `query: "qname NOTIN ${domains}"`
-
-## Aggregation
-
-Results from the query can be aggregated by any number of data fields.
-Only these data fields will be included in the generated alerts, and
-each unique combination of aggregations will generate a unique alert.
-Careful consideration of fields for aggregation will yield the best
-results.
-
-Some good choices for aggregations on the `flows` data set are
-`[source_namespace, source_name_aggr, source_name]`, `[source_ip]`,
-`[dest_namespace, dest_name_aggr, dest_name]`, and `[dest_ip]`
-depending on your use case. For the `dns` data set,
-`[client_namespace, client_name_aggr, client_name]` is a good choice
-for an aggregation pattern.
-
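-For example, a partial GlobalAlert spec sketch that uses one of the aggregation patterns suggested above for the `flows` data set; the query, severity, and threshold values are illustrative only:
-
-```yaml
-spec:
-  description: 'Denied traffic by source workload'
-  severity: 80
-  dataSet: flows
-  query: action=deny
-  # Each unique combination of these fields generates a separate alert
-  aggregateBy: [source_namespace, source_name_aggr, source_name]
-  metric: count
-  condition: gt
-  threshold: 0
-```
-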
-## Metrics and conditions
-
-Results from the query can be further aggregated using a metric that
-is applied to a numeric field, or counts the number of rows in an
-aggregation. Search hits satisfying the condition are output as
-alerts.
-
-| Metric | Description | Applied to Field |
-| ------ | ---------------------------------- | ---------------- |
-| count | Counts the number of rows | No |
-| min | The minimum value of the field | Yes |
-| max | The maximum value of the field | Yes |
-| sum | The sum of all values of the field | Yes |
-| avg | The average value of the field | Yes |
-
-| Condition | Description |
-| --------- | --------------------- |
-| eq | Equals |
-| not_eq | Not equals |
-| lt | Less than |
-| lte | Less than or equal |
-| gt | Greater than |
-| gte | Greater than or equal |
-
-Example:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalAlert
-metadata:
- name: frequent-dns-responses
-spec:
- description: 'Monitor for NXDomain'
- summary: 'Observed ${sum} NXDomain responses for ${qname}'
- severity: 100
- dataSet: dns
- query: rcode = NXDomain AND (rtype = A or rtype = AAAA)
- aggregateBy: qname
- field: count
- metric: sum
- condition: gte
- threshold: 100
-```
-
-This alert identifies NXDomain (non-existent domain) DNS responses for Internet
-addresses that were observed at least 100 times in the past 10 minutes.
-
-### Unconditional alerts
-
-If the `field`, `metric`, `condition`, and `threshold` fields of an
-alert are left blank then the alert will trigger whenever its query
-returns any data. Each hit (or aggregation pattern, if `aggregateBy`
-is non-empty) returned will cause an event to be created. This should
-be used **only** when the query is highly specific to avoid filling
-the Alerts page and index with a large number of events. The use of
-`aggregateBy` is strongly recommended to reduce the number of entries
-added to the Alerts page.
-
-The following example would alert on incoming connections to postgres
-pods from the Internet that were not denied by policy. It runs hourly
-to reduce the noise. Noise could be further reduced by removing
-`source_ip` from the `aggregateBy` clause at the cost of removing
-`source_ip` from the generated events.
-
-```yaml
-period: 1h
-lookback: 75m
-query: 'dest_labels="application=postgres" AND source_type=net AND action=allow AND proto=tcp AND dest_port=5432'
-aggregateBy: [dest_namespace, dest_name, source_ip]
-```
-
-## Summary template
-
-Alerts may include a summary template to provide context for the
-alerts in the $[prodname] Manager Alert user interface. Any field
-in the `aggregateBy` section, or the value of the `metric` may be
-substituted in the summary using a bracketed variable syntax.
-
-Example:
-
-```yaml
-summary: 'Observed ${sum} NXDomain responses for ${qname}'
-```
-
-The `description` field is validated in the same manner. If the `summary`
-field is not provided, the `description` field is used in its place.
-
-## Period and lookback
-
-The interval between alerts, and the amount of data considered by the
-alert may be controlled using the `period` and `lookback` parameters
-respectively. These fields are formatted as [duration](https://golang.org/pkg/time/#ParseDuration) strings.
-
-> A duration string is a possibly signed sequence of decimal numbers,
-> each with optional fraction and a unit suffix, such as "300ms",
-> "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"),
-> "ms", "s", "m", "h".
-
-The default period is 5 minutes, and lookback is 10 minutes. The lookback
-should always be greater than the sum of the period and the configured
-`FlowLogsFlushInterval` or `DNSLogsFlushInterval` as appropriate to avoid gaps
-in coverage.
-
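-For instance, a partial spec sketch that runs the query every 10 minutes while looking back 25 minutes, which leaves headroom for a 5-minute flush interval; the values are illustrative only:
-
-```yaml
-spec:
-  # Run the query every 10 minutes
-  period: 10m
-  # Look back further than period + flush interval to avoid gaps in coverage
-  lookback: 25m
-```
-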
-## Alert records
-
-With only aggregations and no metrics, the alert will generate one event
-per aggregation pattern returned by the query. The record field will
-contain only the aggregated fields. As before, this should be used
-with specific queries.
-
-The addition of a metric will include the value of that metric in the
-record, along with any aggregations. This, combined with queries as
-necessary, will yield the best results in most cases.
-
-With no aggregations the alert will generate one event per record
-returned by the query. The record will be included in its entirety
-in the record field of the event. This should only be used with very
-narrow and specific queries.
-
-## Templates
-
-$[prodname] supports the `GlobalAlertTemplate` resource type.
-These are used in the $[prodname] Manager to create alerts
-with prepopulated fields that can be modified to suit your needs.
-The `GlobalAlertTemplate` resource is configured identically to the
-`GlobalAlert` resource. $[prodname] includes some sample Alert
-templates; add your own templates as needed.
-
-### Sample YAML
-
-**RuleBased GlobalAlert**
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalAlertTemplate
-metadata:
- name: http.connections
-spec:
- description: 'HTTP connections to a target namespace'
- summary: 'HTTP connections from ${source_namespace}/${source_name_aggr} to /${dest_name_aggr}'
- severity: 50
- dataSet: flows
- query: dest_namespace="" AND dest_port=80
- aggregateBy: [source_namespace, dest_name_aggr, source_name_aggr]
- field: count
- metric: sum
- condition: gte
- threshold: 1
-```
-
-## Appendix: Valid fields for queries
-
-### Audit logs
-
-See [audit.k8s.io group v1](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/apis/audit/v1/types.go) for descriptions of fields.
-
-### DNS logs
-
-See [DNS logs](../../visibility/elastic/dns/dns-logs.mdx) for description of fields.
-
-### Flow logs
-
-See [Flow logs](../../visibility/elastic/flow/datatypes.mdx) for description of fields.
-
-### L7 logs
-
-See [L7 logs](../../visibility/elastic/l7/datatypes.mdx) for description of fields.
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/globalnetworkpolicy.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/globalnetworkpolicy.mdx
deleted file mode 100644
index 7e72123bcb..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/globalnetworkpolicy.mdx
+++ /dev/null
@@ -1,167 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# Global network policy
-
-import Servicematch from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_servicematch.mdx';
-
-import Serviceaccountmatch from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_serviceaccountmatch.mdx';
-
-import Ports from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_ports.mdx';
-
-import SelectorScopes from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_selector-scopes.mdx';
-
-import Selectors from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_selectors.mdx';
-
-import Entityrule from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_entityrule.mdx';
-
-import Icmp from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_icmp.mdx';
-
-import Rule from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_rule.mdx';
-
-A global network policy resource (`GlobalNetworkPolicy`) represents an ordered set of rules which are applied
-to a collection of endpoints that match a [label selector](#selector).
-
-`GlobalNetworkPolicy` is not a namespaced resource. `GlobalNetworkPolicy` applies to [workload endpoint resources](workloadendpoint.mdx) in all namespaces, and to [host endpoint resources](hostendpoint.mdx).
-To select a namespace in a `GlobalNetworkPolicy` selector, use
-`projectcalico.org/namespace` as the label name and the namespace name as the
-value to compare against, e.g., `projectcalico.org/namespace == "default"`.
-See [network policy resource](networkpolicy.mdx) for namespaced network policy.
-
-`GlobalNetworkPolicy` resources can be used to define network connectivity rules between groups of $[prodname] endpoints and host endpoints.
-
-
-GlobalNetworkPolicies are organized into [tiers](tier.mdx), which provide an additional layer of ordering—in particular, note that the `Pass` action skips to the
-next [tier](tier.mdx), to enable hierarchical security policy.
-
-For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases
-may be used to specify the resource type on the CLI:
-`globalnetworkpolicy.projectcalico.org`, `globalnetworkpolicies.projectcalico.org` and abbreviations such as
-`globalnetworkpolicy.p` and `globalnetworkpolicies.p`.
-
-## Sample YAML
-
-This sample policy allows TCP traffic from `frontend` endpoints to port 6379 on
-`database` endpoints.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: internal-access.allow-tcp-6379
-spec:
- tier: internal-access
- selector: role == 'database'
- types:
- - Ingress
- - Egress
- ingress:
- - action: Allow
- metadata:
- annotations:
- from: frontend
- to: database
- protocol: TCP
- source:
- selector: role == 'frontend'
- destination:
- ports:
- - 6379
- egress:
- - action: Allow
-```
-
-## Definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema | Default |
-| ----- | ----------------------------------------- | --------------------------------------------------- | ------ | ------- |
-| name | The name of the network policy. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------- | --------------------- | --------------------------------------------- |
-| order | Controls the order of precedence. $[prodname] applies the policy with the lowest value first. | | float | |
-| tier | Name of the [tier](tier.mdx) this policy belongs to. | | string | `default` |
-| selector | Selects the endpoints to which this policy applies. | | [selector](#selector) | all() |
-| serviceAccountSelector | Selects the service account(s) to which this policy applies. Select all service accounts in the cluster with a specific name using the `projectcalico.org/name` label. | | [selector](#selector) | all() |
-| namespaceSelector | Selects the namespace(s) to which this policy applies. Select a specific namespace by name using the `projectcalico.org/name` label. | | [selector](#selector) | all() |
-| types | Applies the policy based on the direction of the traffic. To apply the policy to inbound traffic, set to `Ingress`. To apply the policy to outbound traffic, set to `Egress`. To apply the policy to both, set to `Ingress, Egress`. | `Ingress`, `Egress` | List of strings | Depends on presence of ingress/egress rules\* |
-| ingress | Ordered list of ingress rules applied by policy. | | List of [Rule](#rule) | |
-| egress | Ordered list of egress rules applied by this policy. | | List of [Rule](#rule) | |
-| doNotTrack\*\* | Indicates to apply the rules in this policy before any data plane connection tracking, and that packets allowed by these rules should not be tracked. | true, false | boolean | false |
-| preDNAT\*\* | Indicates to apply the rules in this policy before any DNAT. | true, false | boolean | false |
-| applyOnForward\*\* | Indicates to apply the rules in this policy on forwarded traffic as well as to locally terminated traffic. | true, false | boolean | false |
-| performanceHints | Contains a list of hints to Calico's policy engine to help process the policy more efficiently. Hints never change the enforcement behaviour of the policy. The available hints are described [below](#performance-hints). | `AssumeNeededOnEveryNode` | List of strings | |
-
-\* If `types` has no value, $[prodname] defaults as follows.
-
-> | Ingress Rules Present | Egress Rules Present | `Types` value |
-> | --------------------- | -------------------- | ----------------- |
-> | No | No | `Ingress` |
-> | Yes | No | `Ingress` |
-> | No | Yes | `Egress` |
-> | Yes | Yes | `Ingress, Egress` |
-
-\*\* The `doNotTrack` and `preDNAT` and `applyOnForward` fields are meaningful
-only when applying policy to a [host endpoint](hostendpoint.mdx).
-
-Only one of `doNotTrack` and `preDNAT` may be set to `true` (in a given policy). If they are both `false`, or when applying the policy to a
-[workload endpoint](workloadendpoint.mdx),
-the policy is enforced after connection tracking and any DNAT.
-
-`applyOnForward` must be set to `true` if either `doNotTrack` or `preDNAT` is
-`true` because for a given policy, any untracked rules or rules before DNAT will
-in practice apply to forwarded traffic.
-
-### Rule
-
-
-
-### ICMP
-
-
-
-### EntityRule
-
-
-
-### Selector
-
-
-
-
-### Ports
-
-
-
-### ServiceAccountMatch
-
-
-
-### ServiceMatch
-
-
-
-### Performance Hints
-
-Performance hints provide a way to tell $[prodname] about the intended use of the policy so that it may
-process it more efficiently. Currently only one hint is defined:
-
-* `AssumeNeededOnEveryNode`: normally, $[prodname] only calculates a policy's rules and selectors on nodes where
- the policy is actually in use (i.e. its selector matches a local endpoint). This saves work in most cases.
- The `AssumeNeededOnEveryNode` hint tells $[prodname] to treat the policy as "in use" on *every* node. This is
- useful for large policy sets that are known to apply to all (or nearly all) endpoints. It effectively "preloads"
- the policy on every node so that there is less work to do when the first endpoint matching the policy shows up.
- It also prevents work from being done to tear down the policy when the last endpoint is drained.
-
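-A partial GlobalNetworkPolicy sketch using this hint; the policy name, tier, and selectors are illustrative only:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  name: default.deny-egress-to-quarantine
-spec:
-  tier: default
-  # Selects every endpoint, so pre-calculating it on every node avoids extra work later
-  selector: all()
-  types:
-    - Egress
-  egress:
-    - action: Deny
-      destination:
-        selector: quarantine == 'true'
-  performanceHints:
-    - AssumeNeededOnEveryNode
-```
-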
-## Supported operations
-
-| Datastore type | Create/Delete | Update | Get/List | Notes |
-| ------------------------ | ------------- | ------ | -------- | ----- |
-| Kubernetes API datastore | Yes | Yes | Yes |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/globalnetworkset.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/globalnetworkset.mdx
deleted file mode 100644
index 3e09fcc6b4..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/globalnetworkset.mdx
+++ /dev/null
@@ -1,83 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# Global network set
-
-import DomainNames from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_domain-names.mdx';
-
-A global network set resource (GlobalNetworkSet) represents an arbitrary set of IP subnetworks/CIDRs,
-allowing it to be matched by $[prodname] policy. Network sets are useful for applying policy to traffic
-coming from (or going to) external, non-$[prodname], networks.
-
-GlobalNetworkSets can also include domain names, whose effect is to allow egress traffic to those
-domain names, when the GlobalNetworkSet is matched by the destination selector of an egress rule
-with action Allow. Domain names have no effect in ingress rules, or in a rule whose action is not
-Allow.
-
-:::note
-
-$[prodname] implements policy for domain names by learning the
-corresponding IPs from DNS, then programming rules to allow those IPs. This means that
-if multiple domain names A, B and C all map to the same IP, and there is domain-based
-policy to allow A, traffic to B and C will be allowed as well.
-
-:::
-
-The metadata for each network set includes a set of labels. When $[prodname] is calculating the set of
-IPs that should match a source/destination selector within a
-[global network policy](globalnetworkpolicy.mdx) rule, or within a
-[network policy](networkpolicy.mdx) rule whose `namespaceSelector` includes `global()`, it includes
-the CIDRs from any network sets that match the selector.
-
-:::note
-
-Since $[prodname] matches packets based on their source/destination IP addresses,
-$[prodname] rules may not behave as expected if there is NAT between the $[prodname]-enabled node and the
-networks listed in a network set. For example, in Kubernetes, incoming traffic via a service IP is
-typically SNATed by the kube-proxy before reaching the destination host so $[prodname]'s workload
-policy will see the kube-proxy's host's IP as the source instead of the real source.
-
-:::
-
-For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases
-may be used to specify the resource type on the CLI:
-`globalnetworkset.projectcalico.org`, `globalnetworksets.projectcalico.org` and abbreviations such as
-`globalnetworkset.p` and `globalnetworksets.p`.
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkSet
-metadata:
- name: a-name-for-the-set
- labels:
- role: external-database
-spec:
- nets:
- - 198.51.100.0/28
- - 203.0.113.0/24
- allowedEgressDomains:
- - db.com
- - '*.db.com'
-```
-
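-To illustrate how the set's labels are matched from policy, here is a minimal GlobalNetworkPolicy egress rule sketch that allows traffic to the sample set above; the policy name, source selector, and port are hypothetical:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  name: default.allow-egress-to-external-database
-spec:
-  tier: default
-  selector: app == 'backend'
-  types:
-    - Egress
-  egress:
-    - action: Allow
-      protocol: TCP
-      destination:
-        # Matches the CIDRs (and allowed egress domains) of the network set above
-        selector: role == 'external-database'
-        ports:
-          - 5432
-```
-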
-## Global network set definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ------ | ------------------------------------------ | ------------------------------------------------- | ------ |
-| name | The name of this network set. | Lower-case alphanumeric with optional `-` or `.`. | string |
-| labels | A set of labels to apply to this endpoint. | | map |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------- | ------ | ------- |
-| nets | The IP networks/CIDRs to include in the set. | Valid IPv4 or IPv6 CIDRs, for example "192.0.2.128/25" | list | |
-| allowedEgressDomains | The list of domain names that belong to this set and are honored in egress allow rules only. Domain names specified here only work to allow egress traffic from the cluster to external destinations. They don't work to _deny_ traffic to destinations specified by domain name, or to allow ingress traffic from _sources_ specified by domain name. | List of [exact or wildcard domain names](#exact-and-wildcard-domain-names) | list | |
-
-### Exact and wildcard domain names
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/globalreport.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/globalreport.mdx
deleted file mode 100644
index fb10e58bb9..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/globalreport.mdx
+++ /dev/null
@@ -1,149 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# Global report
-
-A global report resource is a configuration for generating compliance reports. A global report configuration in $[prodname] lets you:
-
-- Specify report contents, frequency, and data filtering
-- Specify the node(s) on which to run the report generation jobs
-- Enable/disable creation of new jobs for generating the report
-
-For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases
-may be used to specify the resource type on the CLI:
-`globalreport.projectcalico.org`, `globalreports.projectcalico.org` and abbreviations such as
-`globalreport.p` and `globalreports.p`.
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalReport
-metadata:
- name: weekly-full-inventory
-spec:
- reportType: inventory
- schedule: 0 0 * * 0
- jobNodeSelector:
- nodetype: infrastructure
-
----
-apiVersion: projectcalico.org/v3
-kind: GlobalReport
-metadata:
- name: hourly-accounts-networkaccess
-spec:
- reportType: network-access
- endpoints:
- namespaces:
- names: ['payable', 'collections', 'payroll']
- schedule: 0 * * * *
-
----
-apiVersion: projectcalico.org/v3
-kind: GlobalReport
-metadata:
- name: monthly-widgets-controller-tigera-policy-audit
-spec:
- reportType: policy-audit
- schedule: 0 0 1 * *
- endpoints:
- serviceAccounts:
- names: ['controller']
- namespaces:
- names: ['widgets']
-
----
-apiVersion: projectcalico.org/v3
-kind: GlobalReport
-metadata:
- name: daily-cis-benchmark
-spec:
- reportType: cis-benchmark
- schedule: 0 0 * * *
- cis:
- resultsFilters:
- - benchmarkSelection: { kubernetesVersion: '1.13' }
- exclude: ['1.1.4', '1.2.5']
-```
-
-## GlobalReport Definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ------ | ---------------------------------------- | ------------------------------------------------ | ------ |
-| name | The name of this report. | Lower-case alphanumeric with optional `-` or `.` | string |
-| labels | A set of labels to apply to this report. | | map |
-
-### Spec
-
-| Field | Description | Required | Accepted Values | Schema |
-| --------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------- |
-| reportType | The type of report to produce. This field controls the content of the report - see the links for each type for more details. | Yes | [cis‑benchmark](compliance-reports/cis-benchmark.mdx), [inventory](compliance-reports/inventory.mdx), [network‑access](compliance-reports/network-access.mdx), [policy‑audit](compliance-reports/policy-audit.mdx) | string |
-| endpoints | Specify which endpoints are in scope. If omitted, selects everything. | | | [EndpointsSelection](#endpointsselection) |
-| schedule | Configure report frequency by specifying start and end time in [cron-format][cron-format]. Reports are started 30 minutes (configurable) after the scheduled value to allow enough time for data archival. A maximum limit of 12 schedules per hour is enforced (an average of one report every 5 minutes). | Yes | | string |
-| jobNodeSelector | Specify the node(s) for scheduling the report jobs using selectors. | | | map |
-| suspend | Disable future scheduled report jobs. In-flight reports are not affected. | | | bool |
-| cis | Parameters related to generating a CIS benchmark report. | | | [CISBenchmarkParams](#cisbenchmarkparams) |
-
-### EndpointsSelection
-
-| Field | Description | Schema |
-| --------------- | ------------------------------------------------------------------------------------------- | ------------------------------------------- |
-| selector | Endpoint label selector to restrict endpoint selection. | string |
-| namespaces | Namespace name and label selector to restrict endpoints by selected namespaces. | [NamesAndLabelsMatch](#namesandlabelsmatch) |
-| serviceAccounts | Service account name and label selector to restrict endpoints by selected service accounts. | [NamesAndLabelsMatch](#namesandlabelsmatch) |
-
-### CISBenchmarkParams
-
-| Fields | Description | Required | Schema |
-| -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ----------------------------------------- |
-| highThreshold | Integer percentage value that determines the lower limit of passing tests to consider a node as healthy. Default: 100 | No | int |
-| medThreshold | Integer percentage value that determines the lower limit of passing tests to consider a node as unhealthy. Default: 50 | No | int |
-| includeUnscoredTests | Boolean value that when false, applies a filter to exclude tests that are marked as “Unscored” by the CIS benchmark standard. If true, the tests will be included in the report. Default: false | No | bool |
-| numFailedTests | Integer value that sets the number of tests to display in the Top-failed Tests section of the CIS benchmark report. Default: 5 | No | int |
-| resultsFilters | Specifies an include or exclude filter to apply on the test results that will appear on the report. | No | [CISBenchmarkFilter](#cisbenchmarkfilter) |
-
-### CISBenchmarkFilter
-
-| Fields | Description | Required | Schema |
-| ------------------ | ---------------------------------------------------------------------------------------------- | -------- | ----------------------------------------------- |
-| benchmarkSelection | Specify which set of benchmarks this filter should apply to. If omitted, all benchmark types are selected. | No | [CISBenchmarkSelection](#cisbenchmarkselection) |
-| exclude | Specify which benchmark tests to exclude | No | array of strings |
-| include | Specify which benchmark tests to include only (higher precedence than exclude) | No | array of strings |
-
-### CISBenchmarkSelection
-
-| Fields | Description | Required | Schema |
-| ----------------- | -------------------------------------- | -------- | ------ |
-| kubernetesVersion | Specifies a version of the benchmarks. | Yes | string |
-
-### NamesAndLabelsMatch
-
-| Field | Description | Schema |
-| -------- | ------------------------------------ | ------ |
-| names | Set of resource names. | list |
-| selector | Selects a set of resources by label. | string |
-
-Use the `NamesAndLabelsMatch` to limit the scope of endpoints. If both `names`
-and `selector` are specified, the resource is identified using label _AND_ name
-match.
-
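-For example, a partial GlobalReport spec sketch that restricts endpoints by both namespace names and a namespace label selector; the names and label are illustrative only:
-
-```yaml
-spec:
-  reportType: inventory
-  schedule: 0 0 * * 0
-  endpoints:
-    namespaces:
-      # Both the name list AND the label selector must match
-      names: ['payable', 'payroll']
-      selector: environment == 'production'
-```
-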
-:::note
-
-To use the $[prodname] compliance reporting feature, you must ensure all required resource types
-are being audited and the logs archived in Elasticsearch. You must explicitly configure the [Kubernetes API Server](../../visibility/kube-audit.mdx)
- to send audit logs for Kubernetes-owned resources
-to Elasticsearch.
-
-:::
-
-## Supported operations
-
-| Datastore type | Create/Delete | Update | Get/List | Notes |
-| --------------------- | ------------- | ------ | -------- | ----- |
-| Kubernetes API server | Yes | Yes | Yes | |
-
-[cron-format]: https://en.wikipedia.org/wiki/Cron
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/globalthreatfeed.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/globalthreatfeed.mdx
deleted file mode 100644
index edbab0bc20..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/globalthreatfeed.mdx
+++ /dev/null
@@ -1,269 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# Global threat feed
-
-A global threat feed resource (GlobalThreatFeed) represents a feed of threat intelligence used for
-security purposes.
-
-$[prodname] supports threat feeds that give either
-
-- a set of IP addresses or IP prefixes, with content type IPSet, or
-- a set of domain names, with content type DomainNameSet
-
-For each IPSet threat feed, $[prodname] automatically monitors flow logs for members of the set.
-IPSet threat feeds can also be configured to be synchronized to a [global network set](globalnetworkset.mdx),
-allowing you to use them as a dynamically-updating deny-list by incorporating the global network set into network policy.
-
-For each DomainNameSet threat feed, $[prodname] automatically monitors DNS logs for queries (QNAME) or answers (RR NAME or RDATA) that contain members of the set.
-
-For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases
-may be used to specify the resource type on the CLI:
-`globalthreatfeed.projectcalico.org`, `globalthreatfeeds.projectcalico.org` and abbreviations such as
-`globalthreatfeed.p` and `globalthreatfeeds.p`.
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalThreatFeed
-metadata:
- name: sample-global-threat-feed
-spec:
- content: IPSet
- mode: Enabled
- description: "This is the sample global threat feed"
- feedType: Custom
- globalNetworkSet:
- # labels to set on the GNS
- labels:
- level: high
- pull:
- # accepts time in golang duration format
- period: 24h
- http:
- format:
- newlineDelimited: {}
- url: https://an.example.threat.feed/deny-list
- headers:
- - name: "Accept"
- value: "text/plain"
- - name: "APIKey"
- valueFrom:
- # secrets selected must be in the "tigera-intrusion-detection" namespace to be used
- secretKeyRef:
- name: "globalthreatfeed-sample-global-threat-feed-example"
- key: "apikey"
-```
-
-## Push or Pull
-
-You can configure $[prodname] to pull updates from your threat feed using a [`pull`](#pull) stanza in
-the global threat feed spec.
-
-Alternately, you can have your threat feed push updates directly. Leave out the `pull` stanza, and configure
-your threat feed to create or update the Elasticsearch document that corresponds to the global threat
-feed object.
-
-For IPSet threat feeds, this Elasticsearch document will be in the index `.tigera.ipset.<cluster_name>` and must have the ID set
-to the name of the global threat feed object. The doc should have a single field called `ips`, containing
-a list of IP prefixes.
-
-For example:
-
-```
-PUT .tigera.ipset.cluster01/_doc/sample-global-threat-feed
-{
- "ips" : ["99.99.99.99/32", "100.100.100.0/24"]
-}
-```
-
-For DomainNameSet threat feeds, this Elasticsearch document will be in the index `.tigera.domainnameset.<cluster_name>` and must
-have the ID set to the name of the global threat feed object. The document should have a single field called `domains`, containing
-a list of domain names.
-
-For example:
-
-```
-PUT .tigera.domainnameset.cluster01/_doc/example-global-threat-feed
-{
- "domains" : ["malware.badstuff", "hackers.r.us"]
-}
-```
-
-Refer to the [Elasticsearch document APIs][elastic-document-apis] for more information on how to
-create and update documents in Elasticsearch.
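-
-The corresponding `GlobalThreatFeed` object for a push-mode feed simply omits the `pull` stanza. A minimal
-sketch (the name and description are illustrative) might look like this:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalThreatFeed
-metadata:
-  # illustrative name; it must match the ID of the Elasticsearch document you push
-  name: push-example-feed
-spec:
-  content: IPSet
-  mode: Enabled
-  description: "Feed whose data is pushed directly into Elasticsearch"
-  feedType: Custom
-  # no pull stanza: updates are pushed straight to the Elasticsearch document
-```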
-
-## GlobalThreatFeed Definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ------ | --------------------------------------------- | ----------------------------------------- | ------ |
-| name | The name of this threat feed. | Lower-case alphanumeric with optional `-` | string |
-| labels | A set of labels to apply to this threat feed. | | map |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| ---------------- | ---------------------------------------------------- | ---------------------- | --------------------------------------------- | ------- |
-| content | What kind of threat intelligence is provided | IPSet, DomainNameSet | string | IPSet |
-| mode | Determines if the threat feed is Enabled or Disabled | Enabled, Disabled | string | Enabled |
-| description      | Human-readable description of the threat feed        | Maximum 256 characters | string                                        |         |
-| feedType | Distinguishes Builtin threat feeds from Custom feeds | Builtin, Custom | string | Custom |
-| globalNetworkSet | Include to sync with a global network set | | [GlobalNetworkSetSync](#globalnetworksetsync) | |
-| pull | Configure periodic pull of threat feed updates | | [Pull](#pull) | |
-
-### Status
-
-The `status` is read-only for users and updated by the `intrusion-detection-controller` component as
-it processes global threat feeds.
-
-| Field | Description |
-| -------------------- | -------------------------------------------------------------------------------- |
-| lastSuccessfulSync | Timestamp of the last successful update to the threat intelligence from the feed |
-| lastSuccessfulSearch | Timestamp of the last successful search of logs for threats |
-| errorConditions | List of errors preventing operation of the updates or search |
-
-### GlobalNetworkSetSync
-
-When you include a `globalNetworkSet` stanza in a global threat feed, it triggers synchronization
-with a [global network set](globalnetworkset.mdx). This global network set will have the name `threatfeed.<name>`,
-where `<name>` is the name of the global threat feed it is synced with. This is only supported for
-threat feeds of type IPSet.
-
-:::note
-
-A `globalNetworkSet` stanza only works for `IPSet` threat feeds, and you must also include a `pull` stanza.
-
-:::
-
-| Field | Description | Accepted Values | Schema |
-| ------ | --------------------------------------------------------- | --------------- | ------ |
-| labels | A set of labels to apply to the synced global network set | | map |
-
-### Pull
-
-When you include a `pull` stanza in a global threat feed, it triggers a periodic pull of new data. On successful
-pull and update to the data store, we update the `status.lastSuccessfulSync` timestamp.
-
-If you do not include a `pull` stanza, you must configure your system to [push](#push-or-pull) updates.
-
-| Field | Description | Accepted Values | Schema | Default |
-| ------ | ------------------------------------- | --------------- | --------------------------------- | ------- |
-| period | How often to pull an update | ≥ 5m | [Duration string][parse-duration] | 24h |
-| http | Pull the update from an HTTP endpoint | | [HTTPPull](#httppull) | |
-
-### HTTPPull
-
-Pull updates from the threat feed by doing an HTTP GET against the given URL.
-
-| Field | Description | Accepted Values | Schema |
-| ------- | --------------------------------------------------------- | --------------- | ------------------------- |
-| format | Format of the data the threat feed returns | | [Format](#format) |
-| url | The URL to query | | string |
-| headers | List of additional HTTP Headers to include on the request | | [HTTPHeader](#httpheader) |
-
-IPSet threat feeds must contain IP addresses or IP prefixes. For example:
-
-```
-# This is an IP Prefix
-100.100.100.0/24
-# This is an address
-99.99.99.99
-```
-
-DomainNameSet threat feeds must contain domain names. For example:
-
-```
-# Suspicious domains
-malware.badstuff
-hackers.r.us
-```
-
-Internationalized domain names (IDNA) may be encoded either as Unicode in UTF-8 format, or as
-ASCII-Compatible Encoding (ACE) according to [RFC 5890][idna].
-
-### Format
-
-Several different feed formats are supported. The default,
-`newlineDelimited`, expects a text file containing entries separated by
-newline characters. It may also include comments prefixed by `#`.
-`json` uses a [jsonpath] to extract the desired information from a
-JSON document. `csv` extracts one column from CSV-formatted data.
-
-| Field | Description | Schema |
-| ---------------- | --------------------------- | ------------- |
-| newlineDelimited | Newline-delimited text file | Empty object |
-| json | JSON object | [JSON](#json) |
-| csv | CSV file | [CSV](#csv) |
-
-#### JSON
-
-| Field | Description | Schema |
-| ----- | ----------------------------- | ------ |
-| path | [jsonpath] to extract values. | string |
-
-Values can be extracted from the document using any [jsonpath]
-expression that evaluates to a list of strings, subject to the
-limitations mentioned below. For example: `$.` is valid for `["a", "b", "c"]`,
-and `$.a` is valid for `{"a": ["b", "c"]}`.
-
-:::caution
-
-No support for subexpressions and filters. Strings in
-brackets must use double quotes. It cannot operate on JSON decoded
-struct fields.
-
-:::
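-
-For example, a `pull` stanza using the JSON format might look like the following sketch, assuming the feed at
-the (illustrative) URL returns a document such as `{"ips": ["99.99.99.99/32"]}`:
-
-```yaml
-pull:
-  period: 24h
-  http:
-    format:
-      json:
-        # extracts the list stored under the top-level "ips" key (illustrative path)
-        path: '$.ips'
-    # illustrative URL
-    url: https://an.example.threat.feed/threats.json
-```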
-
-#### CSV
-
-| Field                       | Description                                                                   | Schema |
-| --------------------------- | ----------------------------------------------------------------------------- | ------ |
-| fieldNum                    | Number of the column containing values. Mutually exclusive with `fieldName`.  | int    |
-| fieldName                   | Name of the column containing values; requires `header: true`.                | string |
-| header                      | Whether or not the document contains a header row.                            | bool   |
-| columnDelimiter             | An alternative delimiter character, such as `\|`.                             | string |
-| commentDelimiter            | Lines beginning with this character are skipped. `#` is common.               | string |
-| recordSize                  | The number of columns expected in the document. Auto detected if omitted.     | int    |
-| disableRecordSizeValidation | Disable row size checking. Mutually exclusive with `recordSize`.              | bool   |
-
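-For example, a CSV feed could be parsed with a stanza like the following sketch, assuming the (illustrative)
-feed has a header row and keeps its addresses in a column named `ip`:
-
-```yaml
-pull:
-  period: 24h
-  http:
-    format:
-      csv:
-        # illustrative column name; requires header: true
-        fieldName: ip
-        header: true
-    # illustrative URL
-    url: https://an.example.threat.feed/deny-list.csv
-```
-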
-### HTTPHeader
-
-| Field | Description | Schema |
-| --------- | --------------------------------------------------------- | ------------------------------------- |
-| name | Header name | string |
-| value | Literal value | string |
-| valueFrom | Include to retrieve the value from a config map or secret | [HTTPHeaderSource](#httpheadersource) |
-
-:::note
-
-You must include either `value` or `valueFrom`, but not both.
-
-:::
-
-### HTTPHeaderSource
-
-| Field | Description | Schema |
-| --------------- | ------------------------------- | ----------------- |
-| configMapKeyRef | Get the value from a config map | [KeyRef](#keyref) |
-| secretKeyRef | Get the value from a secret | [KeyRef](#keyref) |
-
-### KeyRef
-
-KeyRef tells $[prodname] where to get the value for a header. The referenced Kubernetes object
-(either a config map or a secret) must be in the `tigera-intrusion-detection` namespace. The referenced
-Kubernetes object should have a name with the following prefix format: `globalthreatfeed-<feed name>-`.
-
-| Field | Description | Accepted Values | Schema | Default |
-| -------- | --------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- | ------ | ------- |
-| name | The name of the config map or secret | | string | |
-| key | The key within the config map or secret | | string | |
-| optional | Whether the pull can proceed without the referenced value | If the referenced value does not exist, `true` means omit the header. `false` means abort the entire pull until it exists | bool | `false` |
-
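-For example, a header whose value is read from a config map (the header name, config map name, and key are
-illustrative; the config map must live in the `tigera-intrusion-detection` namespace) might be declared under
-`pull.http` as:
-
-```yaml
-headers:
-  - name: "Accept"
-    value: "text/plain"
-  # illustrative header sourced from a config map
-  - name: "X-Feed-Key"
-    valueFrom:
-      configMapKeyRef:
-        name: "globalthreatfeed-sample-global-threat-feed-example"
-        key: "feedkey"
-        # if the key is missing, omit the header rather than aborting the pull
-        optional: true
-```
-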
-[elastic-document-apis]: https://www.elastic.co/guide/en/elasticsearch/reference/6.4/docs-update.html
-[parse-duration]: https://golang.org/pkg/time/#ParseDuration
-[idna]: https://tools.ietf.org/html/rfc5890
-[jsonpath]: https://goessner.net/articles/JsonPath/
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/hostendpoint.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/hostendpoint.mdx
deleted file mode 100644
index ce36529efa..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/hostendpoint.mdx
+++ /dev/null
@@ -1,121 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# Host endpoint
-
-import Endpointport from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_endpointport.mdx';
-
-A host endpoint resource (`HostEndpoint`) represents one or more real or virtual interfaces
-attached to a host that is running $[prodname]. It enforces $[prodname] policy on
-the traffic that is entering or leaving the host's default network namespace through those
-interfaces.
-
-- A host endpoint with `interfaceName: *` represents _all_ of a host's real or virtual
- interfaces.
-
-- A host endpoint for one specific real interface is configured by `interfaceName: <name-of-interface>`,
- for example `interfaceName: eth0`, or by leaving `interfaceName`
- empty and including one of the interface's IPs in `expectedIPs`.
-
-Each host endpoint may include a set of labels and list of profiles that $[prodname]
-will use to apply
-[policy](networkpolicy.mdx)
-to the interface. If no profiles or labels are applied, $[prodname] will not apply
-any policy.
-
-For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases
-may be used to specify the resource type on the CLI:
-`hostendpoint.projectcalico.org`, `hostendpoints.projectcalico.org` and abbreviations such as
-`hostendpoint.p` and `hostendpoints.p`.
-
-**Default behavior of external traffic to/from host**
-
-If a host endpoint is created and network policy is not in place, the $[prodname] default is to deny traffic to/from that endpoint (except for traffic allowed by failsafe rules).
-For a named host endpoint (i.e. a host endpoint representing a specific interface), $[prodname] blocks traffic only to/from the interface specified in the host endpoint. Traffic to/from other interfaces is ignored.
-
-:::note
-
-Host endpoints with `interfaceName: *` do not support [untracked policy](../../network-policy/extreme-traffic/high-connection-workloads.mdx).
-
-:::
-
-For a wildcard host endpoint (i.e. a host endpoint representing all of a host's interfaces), $[prodname] blocks traffic to/from _all_ interfaces on the host (except for traffic allowed by failsafe rules).
-
-However, profiles can be used in conjunction with host endpoints to modify default behavior of external traffic to/from the host in the absence of network policy.
-$[prodname] provides a default profile resource named `projectcalico-default-allow` that consists of allow-all ingress and egress rules.
-Host endpoints with the `projectcalico-default-allow` profile attached will have "allow-all" semantics instead of "deny-all" in the absence of policy.
-
-Note: If you have custom iptables rules, using host endpoints with allow-all rules (with no policies) will accept all traffic and therefore bypass those custom rules.
-
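-For example, a wildcard host endpoint that keeps allow-all semantics in the absence of policy might look like
-the following sketch (the endpoint and node names are illustrative):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: HostEndpoint
-metadata:
-  name: myhost-all-interfaces # illustrative name
-spec:
-  interfaceName: "*" # apply to all of the host's interfaces
-  node: myhost # illustrative node name
-  profiles:
-    - projectcalico-default-allow # allow-all in the absence of policy
-```
-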
-:::note
-
-Auto host endpoints specify the `projectcalico-default-allow` profile so they behave similarly to pod workload endpoints.
-
-:::
-
-:::note
-
-When rendering security rules on other hosts, $[prodname] uses the
-`expectedIPs` field to resolve label selectors to IP addresses. If the `expectedIPs` field
-is omitted then security rules that use labels will fail to match this endpoint.
-
-:::
-
-**Host to local workload traffic**: Traffic from a host to its workload endpoints (e.g. Kubernetes pods) is always allowed, regardless of any policy in place. This ensures that `kubelet` liveness and readiness probes always work.
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: HostEndpoint
-metadata:
- name: some.name
- labels:
- type: production
-spec:
- interfaceName: eth0
- node: myhost
- expectedIPs:
- - 192.168.0.1
- - 192.168.0.2
- profiles:
- - profile1
- - profile2
- ports:
- - name: some-port
- port: 1234
- protocol: TCP
- - name: another-port
- port: 5432
- protocol: UDP
-```
-
-## Host endpoint definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ------ | ------------------------------------------ | --------------------------------------------------- | ------ |
-| name | The name of this hostEndpoint. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string |
-| labels | A set of labels to apply to this endpoint. | | map |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| ------------- | -------------------------------------------------------------------------- | -------------------------- | -------------------------------------- | ------- |
-| node | The name of the node where this HostEndpoint resides. | | string |
-| interfaceName | Either `*` or the name of the specific interface on which to apply policy. | | string |
-| expectedIPs | The expected IP addresses associated with the interface. | Valid IPv4 or IPv6 address | list |
-| profiles | The list of profiles to apply to the endpoint. | | list |
-| ports | List of named ports that this workload exposes. | | List of [EndpointPorts](#endpointport) |
-
-### EndpointPort
-
-
-
-## Supported operations
-
-| Datastore type | Create/Delete | Update | Get/List | Notes |
-| --------------------- | ------------- | ------ | -------- | ----- |
-| Kubernetes API server | Yes | Yes | Yes |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/index.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/index.mdx
deleted file mode 100644
index 2285c6bf97..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: APIs for all Calico networking and network policy resources.
-hide_table_of_contents: true
----
-
-# Resource definitions
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/ipamconfig.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/ipamconfig.mdx
deleted file mode 100644
index c8737aac6b..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/ipamconfig.mdx
+++ /dev/null
@@ -1,43 +0,0 @@
----
-description: IP address management global configuration
----
-
-# IPAM configuration
-
-An IPAM configuration resource (`IPAMConfiguration`) represents global IPAM configuration options.
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: IPAMConfiguration
-metadata:
- name: default
-spec:
- strictAffinity: false
- maxBlocksPerHost: 4
-```
-
-## IPAM configuration definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ----- | --------------------------------------------------------- | --------------- | ------ |
-| name | Unique name to describe this resource instance. Required. | default | string |
-
-The resource is a singleton which must have the name `default`.
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| ---------------- | ------------------------------------------------------------------- | --------------- | ------ | --------- |
-| strictAffinity   | When `strictAffinity` is true, borrowing IP addresses is not allowed. | true, false     | bool   | false     |
-| maxBlocksPerHost | The max number of blocks that can be affine to each host. | 0 - max(int32) | int | unlimited |
-
-## Supported operations
-
-| Datastore type | Create | Delete | Update | Get/List |
-| --------------------- | ------ | ------ | ------ | -------- |
-| etcdv3 | Yes | Yes | Yes | Yes |
-| Kubernetes API server | Yes | Yes | Yes | Yes |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/ippool.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/ippool.mdx
deleted file mode 100644
index bb32b351eb..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/ippool.mdx
+++ /dev/null
@@ -1,155 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# IP pool
-
-import Selectors from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_selectors.mdx';
-
-An IP pool resource (`IPPool`) represents a collection of IP addresses from which $[prodname] expects
-endpoint IPs to be assigned.
-
-For `kubectl` commands, the following case-insensitive aliases may be used to specify the resource type on the CLI: `ippool.projectcalico.org`, `ippools.projectcalico.org` as well as abbreviations such as `ippool.p` and `ippools.p`.
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
- name: my.ippool-1
-spec:
- cidr: 10.1.0.0/16
- ipipMode: CrossSubnet
- natOutgoing: true
- disabled: false
- nodeSelector: all()
- allowedUses:
- - Workload
- - Tunnel
-```
-
-## IP pool definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ----- | ------------------------------------------- | --------------------------------------------------- | ------ |
-| name | The name of this IPPool resource. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| ---------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------- | --------------------------------------------- |
-| cidr | IP range to use for this pool. | A valid IPv4 or IPv6 CIDR. Subnet length must be at least big enough to fit a single block (by default `/26` for IPv4 or `/122` for IPv6). Must not overlap with the Link Local range `169.254.0.0/16` or `fe80::/10`. | string | |
-| blockSize | The CIDR size of allocation blocks used by this pool. Blocks are allocated on demand to hosts and are used to aggregate routes. The value can only be set when the pool is created. | 20 to 32 (inclusive) for IPv4 and 116 to 128 (inclusive) for IPv6 | int | `26` for IPv4 pools and `122` for IPv6 pools. |
-| ipipMode | The mode defining when IPIP will be used. Cannot be set at the same time as `vxlanMode`. | Always, CrossSubnet, Never | string | `Never` |
-| vxlanMode | The mode defining when VXLAN will be used. Cannot be set at the same time as `ipipMode`. | Always, CrossSubnet, Never | string | `Never` |
-| natOutgoing | When enabled, packets sent from $[prodname] networked containers in this pool to destinations outside of any Calico IP pools will be masqueraded. | true, false | boolean | `false` |
-| disabled | When set to true, $[prodname] IPAM will not assign addresses from this pool. | true, false | boolean | `false` |
-| disableBGPExport _(since v3.11.0)_ | Disable exporting routes from this IP Pool’s CIDR over BGP. | true, false | boolean | `false` |
-| nodeSelector | Selects the nodes where $[prodname] IPAM should assign pod addresses from this pool. Can be overridden if a pod [explicitly identifies this IP pool by annotation](../component-resources/configuration.mdx#using-kubernetes-annotations). | | [selector](#node-selector) | all() |
-| allowedUses _(since v3.11.0)_ | Controls whether the pool will be used for automatic assignments of certain types. See [below](#allowed-uses). | Workload, Tunnel, HostSecondaryInterface | list of strings | `["Workload", "Tunnel"]` |
-| awsSubnetID _(since v3.11.0)_ | May be set to the ID of an AWS VPC Subnet that contains the CIDR of this IP pool to activate the AWS-backed pool feature. See [below](#aws-backed-pools). | Valid AWS Subnet ID. | string | |
-
-:::note
-
-Do not use a custom `blockSize` until **all** $[prodname] components have been updated to a version that
-supports it (at least v2.3.0). Older versions of components do not understand the field so they may corrupt the
-IP pool by creating blocks of incorrect size.
-
-:::
-
-### Allowed uses
-
-When automatically assigning IP addresses to workloads, only pools with "Workload" in their `allowedUses` field are
-consulted. Similarly, when assigning IPs for tunnel devices, only "Tunnel" pools are eligible. Finally, when
-assigning IP addresses for AWS secondary ENIs, only pools with allowed use "HostSecondaryInterface" are candidates.
-
-If the `allowedUses` field is not specified, it defaults to `["Workload", "Tunnel"]` for compatibility with older
-versions of Calico. It is not possible to specify a pool with no allowed uses.
-
-The `allowedUses` field is only consulted for new allocations, changing the field has no effect on previously allocated
-addresses.
-
-$[prodname] supports Kubernetes [annotations that force the use of specific IP addresses](../component-resources/configuration.mdx#requesting-a-specific-ip-address). These annotations take precedence over the `allowedUses` field.
-
-### AWS-backed pools
-
-$[prodname] supports IP pools that are backed by the AWS fabric. This feature was added in order
-to support egress gateways on the AWS fabric; the restrictions and requirements are currently documented as part of the
-[egress gateways on AWS guide](../../networking/egress/egress-gateway-aws.mdx).
-
-### IPIP
-
-Routing of packets using IP-in-IP will be used when the destination IP address
-is in an IP Pool that has IPIP enabled. In addition, if the `ipipMode` is set to `CrossSubnet`,
-$[prodname] will only route using IP-in-IP if the IP address of the destination node is in a different
-subnet. The subnet of each node is configured on the node resource (which may be automatically
-determined when running the `$[nodecontainer]` service).
-
-For details on configuring IP-in-IP on your deployment, please refer to
-[Configuring IP-in-IP](../../networking/configuring/vxlan-ipip.mdx).
-
-:::note
-
-Setting `natOutgoing` is recommended on any IP Pool with `ipip` enabled.
-When `ipip` is enabled without `natOutgoing`, routing between Workloads and
-Hosts running $[prodname] is asymmetric and may cause traffic to be filtered due to
-[RPF](https://en.wikipedia.org/wiki/Reverse_path_forwarding) checks failing.
-
-:::
-
-### VXLAN
-
-Routing of packets using VXLAN will be used when the destination IP address
-is in an IP Pool that has VXLAN enabled. In addition, if the `vxlanMode` is set to `CrossSubnet`,
-$[prodname] will only route using VXLAN if the IP address of the destination node is in a different
-subnet. The subnet of each node is configured on the node resource (which may be automatically
-determined when running the `$[nodecontainer]` service).
-
-:::note
-
-Setting `natOutgoing` is recommended on any IP Pool with `vxlan` enabled.
-When `vxlan` is enabled without `natOutgoing`, routing between Workloads and
-Hosts running $[prodname] is asymmetric and may cause traffic to be filtered due to
-[RPF](https://en.wikipedia.org/wiki/Reverse_path_forwarding) checks failing.
-
-:::
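-
-For example, a pool using VXLAN in cross-subnet mode with outgoing NAT might look like the following sketch
-(the name and CIDR are illustrative):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
-  name: my.ippool-vxlan # illustrative name
-spec:
-  cidr: 10.2.0.0/16 # illustrative CIDR
-  vxlanMode: CrossSubnet # VXLAN only when crossing a subnet boundary
-  natOutgoing: true # avoid asymmetric routing between workloads and hosts
-```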
-
-### Block sizes
-
-The default block sizes of `26` for IPv4 and `122` for IPv6 provide blocks of 64 addresses. This allows addresses to be allocated in groups to workloads running on the same host. By grouping addresses, fewer routes need to be exchanged between hosts and to other BGP peers. If a host allocates all of the addresses in a block then it will be allocated an additional block. If there are no more blocks available then the host can take addresses from blocks allocated to other hosts. Specific routes are added for the borrowed addresses which has an impact on route table size.
-
-Increasing the block size from the default (e.g., using `24` for IPv4 to give 256 addresses per block) means fewer blocks per host, and potentially fewer routes. But try to ensure that there are at least as many blocks in the pool as there are hosts.
-
-Reducing the block size from the default (e.g., using `28` for IPv4 to give 16 addresses per block) means more blocks per host and therefore potentially more routes. This can be beneficial if it allows the blocks to be more fairly distributed amongst the hosts.
-
-### Node Selector
-
-For details on configuring IP pool node selectors, please read the
-[Assign IP addresses based on topology guide](../../networking/ipam/assign-ip-addresses-topology.mdx).
-
-:::tip
-
-To prevent an IP pool from being used automatically by $[prodname] IPAM, while still allowing
-it to be used manually for static assignments, set the `IPPool`'s `nodeSelector` to `!all()`. Since the selector
-matches no nodes, the IPPool will not be used automatically and, unlike setting `disabled: true`, it can still be
-used for manual assignments.
-
-:::
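-
-For instance, a pool reserved for manual assignments only could be declared as in the following sketch (the
-name and CIDR are illustrative):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
-  name: manual-assignments-only # illustrative name
-spec:
-  cidr: 10.3.0.0/24 # illustrative CIDR
-  nodeSelector: '!all()' # matches no nodes, so the pool is never used automatically
-```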
-
-#### Selector reference
-
-
-
-## Supported operations
-
-| Datastore type | Create/Delete | Update | Get/List | Notes |
-| --------------------- | ------------- | ------ | -------- | ----- |
-| Kubernetes API server | Yes | Yes | Yes |
-
-## See also
-
-The [`IPReservation` resource](ipreservation.mdx) allows for small parts of an IP pool to be reserved so that they will
-not be used for automatic IPAM assignments.
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/ipreservation.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/ipreservation.mdx
deleted file mode 100644
index ab41de81c8..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/ipreservation.mdx
+++ /dev/null
@@ -1,56 +0,0 @@
----
-description: API for this Calico resource.
----
-
-# IP reservation
-
-An IP reservation resource (`IPReservation`) represents a collection of IP addresses that $[prodname] should
-not use when automatically assigning new IP addresses. It only applies when $[prodname] IPAM is in use.
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: IPReservation
-metadata:
- name: my-ipreservation-1
-spec:
- reservedCIDRs:
- - 192.168.2.3
- - 10.0.2.3/32
- - cafe:f00d::/123
-```
-
-## IP reservation definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ----- | -------------------------------------------------- | --------------------------------------------------- | ------ |
-| name | The name of this IPReservation resource. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| ------------- | --------------------------------------------------------------- | -------------------------------------------------- | ------ | ------- |
-| reservedCIDRs | List of IP addresses and/or networks specified in CIDR notation | List of valid IP addresses (v4 or v6) and/or CIDRs | list | |
-
-### Notes
-
-The implementation of `IPReservation`s is designed to handle reservation of a small number of IP addresses/CIDRs from
-(generally much larger) IP pools. If a significant portion of an IP pool is reserved (say more than 10%) then
-$[prodname] may become significantly slower when searching for free IPAM blocks.
-
-Since `IPReservations` must be consulted for every IPAM assignment request, it's best to have one or two
-`IPReservation` resources, each containing multiple addresses, rather than many `IPReservation`
-resources each containing a single address.
-
-If an `IPReservation` is created after an IP from its range is already in use then the IP is not automatically
-released back to the pool. The reservation check is only done at auto allocation time.
-
-$[prodname] supports Kubernetes [annotations that force the use of specific IP addresses](../component-resources/configuration.mdx#requesting-a-specific-ip-address). These annotations override any `IPReservation`s that
-are in place.
-
-When Windows nodes claim blocks of IPs, they automatically assign the first three IPs
-in each block and the final IP for internal purposes. These assignments cannot be blocked by an `IPReservation`.
-However, if a whole IPAM block is reserved with an `IPReservation`, Windows nodes will not claim such a block.
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/kubecontrollersconfig.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/kubecontrollersconfig.mdx
deleted file mode 100644
index 4a6cc6451f..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/kubecontrollersconfig.mdx
+++ /dev/null
@@ -1,87 +0,0 @@
----
-description: API for KubeControllersConfiguration resource.
----
-
-# Kubernetes controllers configuration
-
-A $[prodname] [Kubernetes controllers](../component-resources/kube-controllers/configuration.mdx) configuration resource (`KubeControllersConfiguration`) represents configuration options for the $[prodname] Kubernetes controllers.
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: KubeControllersConfiguration
-metadata:
- name: default
-spec:
- logSeverityScreen: Info
- healthChecks: Enabled
- prometheusMetricsPort: 9094
- controllers:
- node:
- reconcilerPeriod: 5m
- leakGracePeriod: 15m
- syncLabels: Enabled
- hostEndpoint:
- autoCreate: Disabled
-```
-
-## Kubernetes controllers configuration definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ----- | --------------------------------------------------------- | ----------------- | ------ |
-| name | Unique name to describe this resource instance. Required. | Must be `default` | string |
-
-- $[prodname] automatically creates a resource named `default` containing the configuration settings. Only the name `default` is used, and only one object of this type is allowed.
-
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| --------------------- | --------------------------------------------------------- | ----------------------------------- | --------------------------- | ------- |
-| logSeverityScreen | The log severity above which logs are sent to the stdout. | Debug, Info, Warning, Error, Fatal | string | Info |
-| healthChecks | Enable support for health checks | Enabled, Disabled | string | Enabled |
-| prometheusMetricsPort | Port on which to serve Prometheus metrics.                 | Set to 0 to disable, > 0 to enable. | TCP port                    | 9094    |
-| controllers | Enabled controllers and their settings | | [Controllers](#controllers) | |
-
-### Controllers
-
-| Field | Description | Schema |
-| ----------------- | ------------------------------------------------------ | ------------------------------------------------------------------------------- |
-| node | Enable and configure the node controller | omit to disable, or [NodeController](#nodecontroller) |
-| federatedservices | Enable and configure the federated services controller | omit to disable, or [FederatedServicesController](#federatedservicescontroller) |
-
-### NodeController
-
-The node controller automatically cleans up configuration for nodes that no longer exist. Optionally, it can create host endpoints for all Kubernetes nodes.
-
-| Field | Description | Accepted Values | Schema | Default |
-| ---------------- | --------------------------------------------------------------------------------- | ----------------- | --------------------------------- | ------- |
-| reconcilerPeriod | Period to perform reconciliation with the $[prodname] datastore | | [Duration string][parse-duration] | 5m |
-| syncLabels | When enabled, Kubernetes node labels will be copied to $[prodname] node objects. | Enabled, Disabled | string | Enabled |
-| hostEndpoint | Controls allocation of host endpoints | | [HostEndpoint](#hostendpoint) | |
-| leakGracePeriod | Grace period to use when garbage collecting suspected leaked IP addresses. | | [Duration string][parse-duration] | 15m |
-
-### HostEndpoint
-
-| Field | Description | Accepted Values | Schema | Default |
-| ---------- | ---------------------------------------------------------------- | ----------------- | ------ | -------- |
-| autoCreate | When enabled, automatically create a host endpoint for each node | Enabled, Disabled | string | Disabled |
-
-### FederatedServicesController
-
-The federated services controller syncs Kubernetes services from remote clusters defined through [RemoteClusterConfigurations](remoteclusterconfiguration.mdx).
-
-| Field | Description | Schema | Default |
-| ---------------- | ---------------------------------------------------------------- | --------------------------------- | ------- |
-| reconcilerPeriod | Period to perform reconciliation with the $[prodname] datastore | [Duration string][parse-duration] | 5m |
-
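-For example, building on the sample above, enabling the federated services controller alongside the node
-controller might look like this sketch:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: KubeControllersConfiguration
-metadata:
-  name: default
-spec:
-  controllers:
-    node:
-      reconcilerPeriod: 5m
-    # present but otherwise default: enables the federated services controller
-    federatedservices:
-      reconcilerPeriod: 5m
-```
-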
-## Supported operations
-
-| Datastore type | Create | Delete (Global `default`) | Update | Get/List | Notes |
-| --------------------- | ------ | ------------------------- | ------ | -------- | ----- |
-| Kubernetes API server | Yes | Yes | Yes | Yes |
-
-[parse-duration]: https://golang.org/pkg/time/#ParseDuration
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/licensekey.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/licensekey.mdx
deleted file mode 100644
index ad9ac0620a..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/licensekey.mdx
+++ /dev/null
@@ -1,82 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# License key
-
-A License Key resource (`LicenseKey`) represents a user's license to use $[prodname]. Keys are
-provided by Tigera support, and must be applied to the cluster to enable
-$[prodname] features.
-
-For `kubectl` commands, the following case-insensitive aliases may be used to specify
-the resource type on the CLI: `licensekey.projectcalico.org`, `licensekeys.projectcalico.org`
-as well as abbreviations such as `licensekey.p` and `licensekeys.p`.
-
-## Working with license keys
-
-### Applying or updating a license key
-
-When you add $[prodname] to an existing Kubernetes cluster or create a
-new OpenShift cluster, you must apply your license key to complete the installation
-and gain access to the full set of $[prodname] features.
-
-When your license key expires, you must update it to continue using $[prodname].
-
-To apply or update a license key, use the following command, replacing `<customer_name>`
-with the customer name in the file sent to you by Tigera.
-
-**Command**
-
-```bash
-kubectl apply -f <customer_name>-license.yaml
-```
-
-**Example**
-
-```bash
-kubectl apply -f awesome-corp-license.yaml
-```
-
-### Viewing information about your license key
-
-To view the number of licensed nodes and the license key expiry, use:
-
-```bash
-kubectl get licensekeys.p -o custom-columns='Name:.metadata.name,MaxNodes:.status.maxnodes,Expiry:.status.expiry,PackageType:.status.package'
-```
-
-This is an example of the output of the above command.
-
-```
-Name MaxNodes Expiry Package
-default 100 2021-10-01T23:59:59Z Enterprise
-```
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: LicenseKey
-metadata:
- creationTimestamp: null
- name: default
-spec:
- certificate: |
- -----BEGIN CERTIFICATE-----
- MII...n5
- -----END CERTIFICATE-----
- token: eyJ...zaQ
-status:
- expiry: '2021-10-01T23:59:59Z'
- maxnodes: 100
- package: Enterprise
-```
-
-The data fields in the license key resource may change without warning. The license key resource
-is currently a singleton: the only valid name is `default`.
-
-## Supported operations
-
-| Datastore type | Create | Delete | Update | Get/List | Notes |
-| --------------------- | ------ | ------ | ------ | -------- | ----- |
-| Kubernetes API server | Yes | No | Yes | Yes |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/managedcluster.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/managedcluster.mdx
deleted file mode 100644
index 9a12699a5f..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/managedcluster.mdx
+++ /dev/null
@@ -1,70 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# Managed Cluster
-
-A Managed Cluster resource (`ManagedCluster`) represents a cluster managed by a centralized management plane with a shared Elasticsearch.
-The management plane provides central control of the managed cluster and stores its logs.
-
-$[prodname] supports connecting multiple $[prodname] clusters as described in the [Multi-cluster management] installation guide.
-
-For `kubectl` commands, the following case-insensitive aliases may be used to specify the resource type on the CLI:
-`managedcluster`,`managedclusters`, `managedcluster.projectcalico.org`, `managedclusters.projectcalico.org` as well as
-abbreviations such as `managedcluster.p` and `managedclusters.p`.
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: ManagedCluster
-metadata:
- name: managed-cluster
-spec:
- operatorNamespace: tigera-operator
-```
-
-## Managed cluster definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ----- | --------------------------------------------------------- | --------------------------------------------------- | ------ |
-| name | Unique name to describe this resource instance. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string |
-
-- `cluster` is a reserved name for the management plane and is considered an invalid value
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| -------------------- | ----------------------------------------------------------------------------------------------------------------- | --------------- | ------ | ------- |
-| installationManifest | Installation Manifest to be applied on a managed cluster infrastructure | None | string | `Empty` |
-| operatorNamespace | The namespace of the managed cluster's operator. This value is used in the generation of the InstallationManifest | None | string | `Empty` |
-
-- The `installationManifest` field can be retrieved only once, at creation time. Updates are not supported for this field.
-
-To extract the installation manifest at creation time, the `-o jsonpath="{.spec.installationManifest}"` option
-can be used with a `kubectl` command.
-
-### Status
-
-Status represents the latest observed status of Managed cluster. The `status` is read-only for users and updated by the
-$[prodname] components.
-
-| Field | Description | Schema |
-| ---------- | -------------------------------------------------------------------------- | -------------------------------------- |
-| conditions | List of condition that describe the current status of the Managed cluster. | List of ManagedClusterStatusConditions |
-
-**ManagedClusterStatusConditions**
-
-Conditions represent the latest observed set of conditions for a Managed cluster. The connection between the management
-plane and a managed cluster is reported as follows:
-
-- `Unknown` when no initial connection has been established
-- `True` when both planes have an established connection
-- `False` when the planes do not have an established connection
-
-| Field | Description | Accepted Values | Schema | Default |
-| ------ | ------------------------------------------------------------------------- | -------------------------- | ------ | ------------------------- |
-| type | Type of status that is being reported | - | string | `ManagedClusterConnected` |
-| status | Status of the connection between a Managed cluster and management cluster | `Unknown`, `True`, `False` | string | `Unknown` |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/networkpolicy.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/networkpolicy.mdx
deleted file mode 100644
index 08fea45f1d..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/networkpolicy.mdx
+++ /dev/null
@@ -1,155 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# Network policy
-
-import Servicematch from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_servicematch.mdx';
-
-import Serviceaccountmatch from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_serviceaccountmatch.mdx';
-
-import Ports from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_ports.mdx';
-
-import SelectorScopes from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_selector-scopes.mdx';
-
-import Selectors from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_selectors.mdx';
-
-import Entityrule from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_entityrule.mdx';
-
-import Icmp from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_icmp.mdx';
-
-import Rule from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_rule.mdx';
-
-A network policy resource (`NetworkPolicy`) represents an ordered set of rules which are applied
-to a collection of endpoints that match a [label selector](#selector).
-
-`NetworkPolicy` is a namespaced resource. `NetworkPolicy` in a specific namespace
-only applies to [workload endpoint resources](workloadendpoint.mdx)
-in that namespace. Two resources are in the same namespace if the `namespace`
-value is set the same on both.
-See [global network policy resource](globalnetworkpolicy.mdx) for non-namespaced network policy.
-
-`NetworkPolicy` resources can be used to define network connectivity rules between groups of $[prodname] endpoints and host endpoints.
-
-
-
-NetworkPolicies are organized into [tiers](tier.mdx), which provide an additional layer of ordering—in particular, note that the `Pass` action skips to the
-next [tier](tier.mdx), to enable hierarchical security policy.
-
-For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases
-may be used to specify the resource type on the CLI:
-`networkpolicy.projectcalico.org`, `networkpolicies.projectcalico.org` and abbreviations such as
-`networkpolicy.p` and `networkpolicies.p`.
-
-## Sample YAML
-
-This sample policy allows TCP traffic from `frontend` endpoints to port 6379 on
-`database` endpoints.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: internal-access.allow-tcp-6379
- namespace: production
-spec:
- tier: internal-access
- selector: role == 'database'
- types:
- - Ingress
- - Egress
- ingress:
- - action: Allow
- metadata:
- annotations:
- from: frontend
- to: database
- protocol: TCP
- source:
- selector: role == 'frontend'
- destination:
- ports:
- - 6379
- egress:
- - action: Allow
-```
-
-## Definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema | Default |
-| --------- | ------------------------------------------------------------------ | --------------------------------------------------- | ------ | --------- |
-| name | The name of the network policy. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | |
-| namespace | Namespace provides an additional qualification to a resource name. | | string | "default" |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------- | --------------------- | --------------------------------------------- |
-| order | Controls the order of precedence. $[prodname] applies the policy with the lowest value first. | | float | |
-| tier | Name of the [tier](tier.mdx) this policy belongs to. | | string | `default` |
-| selector | Selects the endpoints to which this policy applies. | | [selector](#selector) | all() |
-| types | Applies the policy based on the direction of the traffic. To apply the policy to inbound traffic, set to `Ingress`. To apply the policy to outbound traffic, set to `Egress`. To apply the policy to both, set to `Ingress, Egress`. | `Ingress`, `Egress` | List of strings | Depends on presence of ingress/egress rules\* |
-| ingress | Ordered list of ingress rules applied by policy. | | List of [Rule](#rule) | |
-| egress | Ordered list of egress rules applied by this policy. | | List of [Rule](#rule) | |
-| serviceAccountSelector | Selects the service account(s) to which this policy applies. Select a specific service account by name using the `projectcalico.org/name` label. | | [selector](#selector) | all() |
-| performanceHints | Contains a list of hints to Calico's policy engine to help process the policy more efficiently. Hints never change the enforcement behaviour of the policy. The available hints are described [below](#performance-hints). | `AssumeNeededOnEveryNode` | List of strings | |
-
-\* If `types` has no value, $[prodname] defaults as follows.
-
-> | Ingress Rules Present | Egress Rules Present | `Types` value |
-> | --------------------- | -------------------- | ----------------- |
-> | No | No | `Ingress` |
-> | Yes | No | `Ingress` |
-> | No | Yes | `Egress` |
-> | Yes | Yes | `Ingress, Egress` |
-
-### Rule
-
-
-
-### ICMP
-
-
-
-### EntityRule
-
-
-
-### Selector
-
-
-
-
-### Ports
-
-
-
-### ServiceAccountMatch
-
-
-
-### ServiceMatch
-
-
-
-### Performance Hints
-
-Performance hints provide a way to tell $[prodname] about the intended use of the policy so that it may
-process it more efficiently. Currently only one hint is defined:
-
-* `AssumeNeededOnEveryNode`: normally, $[prodname] only calculates a policy's rules and selectors on nodes where
- the policy is actually in use (i.e. its selector matches a local endpoint). This saves work in most cases.
- The `AssumeNeededOnEveryNode` hint tells $[prodname] to treat the policy as "in use" on *every* node. This is
- useful for large policy sets that are known to apply to all (or nearly all) endpoints. It effectively "preloads"
- the policy on every node so that there is less work to do when the first endpoint matching the policy shows up.
- It also prevents work from being done to tear down the policy when the last endpoint is drained.
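-
-For example, a broadly-applied policy could opt in to preloading as in the following sketch (the policy name,
-namespace, and rules are illustrative):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
-  name: default.preloaded-baseline # illustrative name
-  namespace: production
-spec:
-  tier: default
-  selector: all() # applies to every endpoint, so preloading avoids per-node recalculation
-  performanceHints:
-    - AssumeNeededOnEveryNode
-  types:
-    - Ingress
-  ingress:
-    - action: Allow
-```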
-
-## Supported operations
-
-| Datastore type | Create/Delete | Update | Get/List | Notes |
-| ------------------------ | ------------- | ------ | -------- | ----- |
-| Kubernetes API datastore | Yes | Yes | Yes |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/networkset.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/networkset.mdx
deleted file mode 100644
index 86c8382615..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/networkset.mdx
+++ /dev/null
@@ -1,71 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# Network set
-
-import DomainNames from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_domain-names.mdx';
-
-A network set resource (NetworkSet) represents an arbitrary set of IP subnetworks/CIDRs,
-allowing it to be matched by $[prodname] policy. Network sets are useful for applying policy to traffic
-coming from (or going to) external, non-$[prodname], networks.
-
-`NetworkSet` is a namespaced resource. A `NetworkSet` in a specific namespace
-only applies to [network policies](networkpolicy.mdx)
-in that namespace. Two resources are in the same namespace if the `namespace`
-value is set the same on both. (See [GlobalNetworkSet](globalnetworkset.mdx) for non-namespaced network sets.)
-
-The metadata for each network set includes a set of labels. When $[prodname] is calculating the set of
-IPs that should match a source/destination selector within a
-[network policy](networkpolicy.mdx) rule, it includes
-the CIDRs from any network sets that match the selector.
-
-:::note
-
-Since $[prodname] matches packets based on their source/destination IP addresses,
-$[prodname] rules may not behave as expected if there is NAT between the $[prodname]-enabled node and the
-networks listed in a network set. For example, in Kubernetes, incoming traffic via a service IP is
-typically SNATed by the kube-proxy before reaching the destination host, so $[prodname]'s workload
-policy will see the kube-proxy host's IP as the source instead of the real source.
-
-:::
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkSet
-metadata:
- name: external-database
- namespace: staging
- labels:
- role: db
-spec:
- nets:
- - 198.51.100.0/28
- - 203.0.113.0/24
- allowedEgressDomains:
- - db.com
- - '*.db.com'
-```
-
-## Network set definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema | Default |
-| --------- | ------------------------------------------------------------------ | ------------------------------------------------- | ------ | --------- |
-| name | The name of this network set. Required. | Lower-case alphanumeric with optional `_` or `-`. | string | |
-| namespace | Namespace provides an additional qualification to a resource name. | | string | "default" |
-| labels | A set of labels to apply to this endpoint. | | map | |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| -------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------- | ------ | ------- |
-| nets | The IP networks/CIDRs to include in the set. | Valid IPv4 or IPv6 CIDRs, for example "192.0.2.128/25" | list | |
-| allowedEgressDomains | The list of domain names that belong to this set and are honored in egress allow rules only. Domain names specified here only work to allow egress traffic from the cluster to external destinations. They don't work to _deny_ traffic to destinations specified by domain name, or to allow ingress traffic from _sources_ specified by domain name. | List of [exact or wildcard domain names](#exact-and-wildcard-domain-names) | list | |
-
-### Exact and wildcard domain names
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/node.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/node.mdx
deleted file mode 100644
index 5fd746f496..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/node.mdx
+++ /dev/null
@@ -1,82 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# Node
-
-A node resource (`Node`) represents a node running $[prodname]. When adding a host
-to a $[prodname] cluster, a node resource needs to be created which contains the
-configuration for the `$[nodecontainer]` instance running on the host.
-
-When starting a `$[nodecontainer]` instance, the name supplied to the instance should
-match the name configured in the Node resource.
-
-By default, starting a `$[nodecontainer]` instance will automatically create a node resource
-using the `hostname` of the compute host.
-
-This resource is not supported in `kubectl`.
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: Node
-metadata:
- name: node-hostname
-spec:
- bgp:
- asNumber: 64512
- ipv4Address: 10.244.0.1/24
- ipv6Address: 2001:db8:85a3::8a2e:370:7334/120
- ipv4IPIPTunnelAddr: 192.168.0.1
-```
-
-## Definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ----- | -------------------------------- | --------------------------------------------------- | ------ |
-| name | The name of this node. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| -------------------- | --------------------------------------------------------------------------------------------------------------------------------- | --------------- | ---------------------------- | ------- |
-| bgp | BGP configuration for this node. Omit if using $[prodname] for policy only. | | [BGP](#bgp) |
-| ipv4VXLANTunnelAddr | IPv4 address of the VXLAN tunnel. This is system configured and should not be updated manually. | | string |
-| ipv6VXLANTunnelAddr | IPv6 address of the VXLAN tunnel. This is system configured and should not be updated manually. | | string |
-| vxlanTunnelMACAddr | MAC address of the VXLAN tunnel. This is system configured and should not be updated manually. | | string |
-| vxlanTunnelMACAddrV6 | MAC address of the IPv6 VXLAN tunnel. This is system configured and should not be updated manually. | | string |
-| orchRefs | Correlates this node to a node in another orchestrator. | | list of [OrchRefs](#orchref) |
-| wireguard | WireGuard configuration for this node. This is applicable only if WireGuard is enabled in [Felix Configuration](felixconfig.mdx). | | [WireGuard](#wireguard) |
-
-### OrchRef
-
-| Field | Description | Accepted Values | Schema | Default |
-| ------------ | ------------------------------------------------ | --------------- | ------ | ------- |
-| nodeName | Name of this node according to the orchestrator. | | string |
-| orchestrator | Name of the orchestrator. | k8s | string |
-
-### BGP
-
-| Field | Description | Accepted Values | Schema | Default |
-| ----------------------- | -------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | ------- |
-| asNumber | The AS Number of your `$[nodecontainer]`. | Optional. If omitted the global value is used (see [example modifying Global BGP settings](../../networking/configuring/bgp.mdx) for details about modifying the `asNumber` setting). | integer |
-| ipv4Address | The IPv4 address and subnet exported as the next-hop for the $[prodname] endpoints on the host | The IPv4 address must be specified if BGP is enabled. | string |
-| ipv6Address | The IPv6 address and subnet exported as the next-hop for the $[prodname] endpoints on the host | Optional | string |
-| ipv4IPIPTunnelAddr | IPv4 address of the IP-in-IP tunnel. This is system configured and should not be updated manually. | Optional IPv4 address | string |
-| routeReflectorClusterID | Enables this node as a route reflector within the given cluster | Optional IPv4 address | string |
-
-### WireGuard
-
-| Field | Description | Accepted Values | Schema | Default |
-| -------------------- | ----------------------------------------------------------------------------------------- | --------------- | ------ | ------- |
-| interfaceIPv4Address | The IP address and subnet for the IPv4 WireGuard interface created by Felix on this node. | Optional | string |
-| interfaceIPv6Address | The IP address and subnet for the IPv6 WireGuard interface created by Felix on this node. | Optional | string |
-
-## Supported operations
-
-| Datastore type | Create/Delete | Update | Get/List | Notes |
-| --------------------- | ------------- | ------ | -------- | ------------------------------------------------------------------ |
-| Kubernetes API server | No | Yes | Yes | `$[nodecontainer]` data is directly tied to the Kubernetes nodes. |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/overview.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/overview.mdx
deleted file mode 100644
index f729636306..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/overview.mdx
+++ /dev/null
@@ -1,117 +0,0 @@
----
-description: Calico Cloud resources (APIs) that you can manage using calicoctl.
----
-
-# Resource definitions
-
-This section describes the set of valid resource types that can be managed
-through `calicoctl` or `kubectl`.
-
-While resources may be supplied in YAML or JSON format, this guide provides examples in YAML.
-
-## Overview of resource structure
-
-The calicoctl commands for resource management (create, apply, delete, replace, get)
-all take resource manifests as input.
-
-Each manifest may contain a single resource
-(e.g. a profile resource), or a list of multiple resources (e.g. a profile and two
-hostEndpoint resources).
-
-The general structure of a single resource is as follows:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind:
-metadata:
- # Identifying information
- name:
- ...
-spec:
- # Specification of the resource
- ...
-```
-
-### Schema
-
-| Field | Description | Accepted Values | Schema |
-| ---------- | --------------------------------------------------------------------------------------- | -------------------- | ------------------------ |
-| apiVersion | Indicates the version of the API that the data corresponds to. | projectcalico.org/v3 | string |
-| kind | Specifies the type of resource described by the YAML document. | | [kind](#supported-kinds) |
-| metadata | Contains information used to uniquely identify the particular instance of the resource. | | map |
-| spec | Contains the resource specification. | | map |
-
-### Supported kinds
-
-The following resources are supported:
-
-- [AlertException](alertexception.mdx)
-- [BGPConfiguration](bgpconfig.mdx)
-- [BGPPeer](bgppeer.mdx)
-- [DeepPacketInspection](deeppacketinspection.mdx)
-- [EgressGatewayPolicy](egressgatewaypolicy.mdx)
-- [FelixConfiguration](felixconfig.mdx)
-- [GlobalAlert](globalalert.mdx)
-- [GlobalNetworkPolicy](globalnetworkpolicy.mdx)
-- [GlobalNetworkSet](globalnetworkset.mdx)
-- [GlobalReport](globalreport.mdx)
-- [GlobalThreatFeed](globalthreatfeed.mdx)
-- [HostEndpoint](hostendpoint.mdx)
-- [IPPool](ippool.mdx)
-- [IPReservation](ipreservation.mdx)
-- [KubeControllersConfiguration](kubecontrollersconfig.mdx)
-- [LicenseKey](licensekey.mdx)
-- [ManagedCluster](managedcluster.mdx)
-- [NetworkPolicy](networkpolicy.mdx)
-- [NetworkSet](networkset.mdx)
-- [Node](node.mdx)
-- [PacketCapture](packetcapture.mdx)
-
-- [RemoteClusterConfiguration](remoteclusterconfiguration.mdx)
-- [StagedGlobalNetworkPolicy](stagedglobalnetworkpolicy.mdx)
-- [StagedKubernetesNetworkPolicy](stagedkubernetesnetworkpolicy.mdx)
-- [StagedNetworkPolicy](stagednetworkpolicy.mdx)
-- [Tier](tier.mdx)
-- [WorkloadEndpoint](workloadendpoint.mdx)
-
-### Resource name requirements
-
-Every resource must have the `name` field specified. The name must be unique within a namespace.
-The name is required when creating resources and cannot be updated.
-A valid resource name consists of alphanumeric characters with optional `.`, `_`, or `-`, up to 128 characters in total.
-
-### Multiple resources in a single file
-
-A file may contain multiple resource documents specified in a YAML list format. For example, the following is the contents of a file containing two `HostEndpoint` resources:
-
-```yaml
-- apiVersion: projectcalico.org/v3
- kind: HostEndpoint
- metadata:
- name: endpoint1
- labels:
- type: database
- spec:
- interface: eth0
- node: host1
- profiles:
- - prof1
- - prof2
- expectedIPs:
- - 1.2.3.4
- - '00:bb::aa'
-- apiVersion: projectcalico.org/v3
- kind: HostEndpoint
- metadata:
- name: endpoint2
- labels:
- type: frontend
- spec:
- interface: eth1
- node: host1
- profiles:
- - prof1
- - prof2
- expectedIPs:
- - 1.2.3.5
-```
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/packetcapture.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/packetcapture.mdx
deleted file mode 100644
index 7c5df91ad7..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/packetcapture.mdx
+++ /dev/null
@@ -1,148 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# PacketCapture
-
-import Selectors from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_selectors.mdx';
-
-A Packet Capture resource (`PacketCapture`) represents captured live traffic for debugging microservices and application
-interaction inside a Kubernetes cluster.
-
-$[prodname] supports selecting one or multiple [WorkloadEndpoints resources](workloadendpoint.mdx)
-as described in the [Packet Capture] guide.
-
-For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases may be used to specify the resource type on the CLI:
-`packetcapture`, `packetcaptures`, `packetcapture.projectcalico.org`, `packetcaptures.projectcalico.org`, as well as
-abbreviations such as `packetcapture.p` and `packetcaptures.p`.
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: PacketCapture
-metadata:
- name: sample-capture
- namespace: sample-namespace
-spec:
- selector: k8s-app == "sample-app"
- filters:
- - protocol: TCP
- ports:
- - 80
-```
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: PacketCapture
-metadata:
- name: sample-capture
- namespace: sample-namespace
-spec:
- selector: all()
- startTime: '2021-08-26T12:00:00Z'
- endTime: '2021-08-26T12:30:00Z'
-```
-
-## Packet capture definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema | Default |
-| --------- | ------------------------------------------------------------------ | --------------------------------------------------- | ------ | --------- |
-| name | The name of the packet capture. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | |
-| namespace | Namespace provides an additional qualification to a resource name. | | string | "default" |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| --------- | ---------------------------------------------------------------------------------- | ----------------------- | ----------------------- | ------- |
-| selector | Selects the endpoints to which this packet capture applies. | | [selector](#selector) | |
-| filters | The ordered set of filters applied to traffic captured from an interface. | | [filters](#filters) | |
-| startTime | Defines the start time from which this PacketCapture will start capturing packets. | Date in RFC 3339 format | [startTime](#starttime) | |
-| endTime | Defines the end time at which this PacketCapture will stop capturing packets. | Date in RFC 3339 format | [endTime](#endtime) | |
-
-### Selector
-
-
-
-### Filters
-
-| Field | Description | Accepted Values | Schema | Default |
-| -------- | ------------------------------------- | ------------------------------------------------------------ | ----------------- | ------- |
-| protocol | Positive protocol match. | `TCP`, `UDP`, `ICMP`, `ICMPv6`, `SCTP`, `UDPLite`, `1`-`255` | string \| integer | |
-| ports | Positive match on the specified ports | | list of ports | |
-
-$[prodname] supports the following syntax for expressing ports.
-
-| Syntax | Example | Description |
-| --------- | --------- | ------------------------------------------------------ |
-| int | 80 | The exact (numeric) port specified |
-| start:end | 6040:6050 | All (numeric) ports within the range start ≤ x ≤ end |
-
-An individual numeric port may be specified as a YAML/JSON integer. A port range must be represented as a string. Named ports are not supported by `PacketCapture`.
-Multiple ports can be defined to filter traffic. All specified ports and port ranges are combined using the logical operator "OR".
-
-For example, this would be a valid list of ports:
-
-```yaml
-ports: [8080, '1234:5678']
-```
-
-Multiple filter rules can be defined to filter traffic. All rules are combined using the logical operator "OR".
-For example, filtering both TCP and UDP traffic is defined as:
-
-```yaml
-filters:
- - protocol: TCP
- - protocol: UDP
-```
-
-Within a single filter rule, the protocol and the list of ports are combined using the logical operator "AND".
-
-For example, filtering TCP traffic on port 80 is defined as:
-
-```yaml
-filters:
- - protocol: TCP
- ports: [80]
-```
-
-### StartTime
-
-Defines the start time from which this PacketCapture will start capturing packets in RFC 3339 format.
-If omitted or the value is in the past, the capture will start immediately.
-If the value is changed to a future time, capture will stop immediately and restart at that time.
-
-```yaml
-startTime: '2021-08-26T12:00:00Z'
-```
-
-### EndTime
-
-Defines the end time at which this PacketCapture will stop capturing packets, in RFC 3339 format.
-If omitted, the capture will continue indefinitely.
-If the value is changed to a time in the past, the capture will stop immediately.
-
-```yaml
-endTime: '2021-08-26T12:30:00Z'
-```
-
-### Status
-
-`PacketCaptureStatus` lists the current state of a `PacketCapture` and its generated capture files.
-
-| Field | Description |
-| ----- | -------------------------------------------------------------------------------------------------------------------------------- |
-| files | The location of the generated packet capture files, identified by node, directory, and file names. |
-
-### Files
-
-| Field | Description |
-| --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| directory | The path inside the calico-node container for the generated files. |
-| fileNames | The names of the files generated for a `PacketCapture`, ordered alphanumerically. The active packet capture file is identified using the following schema: `{}.pcap`. Rotated capture file names contain an index matching the rotation timestamp. |
-| node | The hostname of the Kubernetes node the files are located on. |
-| state | Determines whether a PacketCapture is capturing traffic from any interface attached to the current node. Possible values include: Capturing, Scheduled, Finished, Error, WaitingForTraffic |
-
-[packet capture]: /visibility/packetcapture.mdx
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/policyrecommendations.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/policyrecommendations.mdx
deleted file mode 100644
index 350756ac65..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/policyrecommendations.mdx
+++ /dev/null
@@ -1,56 +0,0 @@
----
-description: API for this Calico Enterprise resource.
----
-
-# Policy recommendation scope
-
-import Servicematch from '../../_includes/content/_servicematch.mdx';
-
-import Serviceaccountmatch from '../../_includes/content/_serviceaccountmatch.mdx';
-
-import Ports from '../../_includes/content/_ports.mdx';
-
-import Selectors from '../../_includes/content/_selectors.mdx';
-
-import Entityrule from '../../_includes/content/_entityrule.mdx';
-
-import Icmp from '../../_includes/content/_icmp.mdx';
-
-import Rule from '../../_includes/content/_rule.mdx';
-
-The policy recommendation scope is a collection of configuration options to control [policy recommendation](../../network-policy/recommendations/policy-recommendations.mdx) in Manager UI.
-
-To apply changes to this resource, use the following format:
-
-```
-$ kubectl patch policyrecommendationscope default -p '{"spec":{"<fieldName>":"<value>"}}'
-```
-**Example**
-
-`$ kubectl patch policyrecommendationscope default -p '{"spec":{"interval":"5m"}}'`
-
-## Definition
-
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema | Default |
-| ---------------- | ------------------------------------------------------------ | --------------------------------------------------- | ------ | -------------------------------------------------- |
-| name | The name of the policy recommendation scope. | `default` | string | |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| ---------------------- | ------------------------------------------------------------ | --------------- | ------ | -------------- |
-| interval | The frequency at which policy recommendations are created and refined. | | | 2.5m (minutes) |
-| initialLookback | How far back in the flow logs to look when first creating a policy recommendation. | | | 24h (hours) |
-| stabilizationPeriod | Time that a recommended policy must remain unchanged before it is considered stable and ready to be enforced. | | | 10m (minutes) |
-
-#### NamespaceSpec
-
-| Field | Description | Accepted Values | Schema | Default |
-| ---------------------- | ------------------------------------------------------------ | --------------- | ------ | -------------- |
-| recStatus | Defines the policy recommendation engine status. | Enabled/Disabled | | Disabled |
-| selector | Selects the namespaces for generating recommendations. | | | `!(projectcalico.org/name starts with 'tigera-') && !(projectcalico.org/name starts with 'calico-') && !(projectcalico.org/name starts with 'kube-')` |
-| intraNamespacePassThroughTraffic | When true, sets all intra-namespace traffic to Pass. | true/false | | false |
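-
-For example, a minimal sketch of enabling the recommendation engine using the patch format shown above. The `namespaceSpec` nesting of the `recStatus` field is an assumption based on this section's heading; verify the field path for your cluster before applying it:
-
-```
-# assumes spec.namespaceSpec.recStatus; the field path is an assumption, not confirmed by this page
-$ kubectl patch policyrecommendationscope default -p '{"spec":{"namespaceSpec":{"recStatus":"Enabled"}}}'
-```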
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/remoteclusterconfiguration.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/remoteclusterconfiguration.mdx
deleted file mode 100644
index a0a0fb393e..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/remoteclusterconfiguration.mdx
+++ /dev/null
@@ -1,97 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# Remote cluster configuration
-
-A remote cluster configuration resource (RemoteClusterConfiguration) represents a cluster in a federation of clusters.
-Each remote cluster needs a configuration to be specified to allow the local cluster to access resources on the remote
-cluster. The connection is one-way: the information flows only from the remote to the local cluster. To share
-information from the local cluster to the remote one, a remote cluster configuration resource must be created on the
-remote cluster.
-
-A remote cluster configuration causes Typha and `calicoq` to retrieve the following resources from a remote cluster:
-
-- [Workload endpoints](workloadendpoint.mdx)
-- [Host endpoints](hostendpoint.mdx)
-
-
-When using the Kubernetes API datastore with RBAC enabled on the remote cluster, the RBAC rules must be configured to
-allow access to these resources.
-
-For more details on the federation feature refer to the [Overview](../../multicluster/overview.mdx).
-
-
-
-This resource is not supported in `kubectl`.
-
-## Sample YAML
-
-For a remote Kubernetes datastore cluster:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: RemoteClusterConfiguration
-metadata:
- name: cluster1
-spec:
- datastoreType: kubernetes
- kubeconfig: /etc/tigera-federation-remotecluster/kubeconfig-rem-cluster-1
-```
-
-For a remote etcdv3 cluster:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: RemoteClusterConfiguration
-metadata:
- name: cluster1
-spec:
- datastoreType: etcdv3
- etcdEndpoints: 'https://10.0.0.1:2379,https://10.0.0.2:2379'
-```
-
-## RemoteClusterConfiguration Definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ----- | ---------------------------------------------- | ----------------------------------------- | ------ |
-| name | The name of this remote cluster configuration. | Lower-case alphanumeric with optional `-` | string |
-
-### Spec
-
-| Field | Secret key | Description | Accepted Values | Schema | Default |
-| ------------------- | ------------- | ------------------------------------------------------------------ | --------------------- | -------------------------- | ------- |
-| clusterAccessSecret | | Reference to a Secret that contains connection information | | Kubernetes ObjectReference | none |
-| datastoreType | datastoreType | The datastore type of the remote cluster. | `etcdv3` `kubernetes` | string | none |
-| etcdEndpoints | etcdEndpoints | A comma separated list of etcd endpoints. | | string | none |
-| etcdUsername | etcdUsername | Username for RBAC. | | string | none |
-| etcdPassword | etcdPassword | Password for the given username. | | string | none |
-| etcdKeyFile | etcdKey | Path to the etcd key file. | | string | none |
-| etcdCertFile | etcdCert | Path to the etcd certificate file. | | string | none |
-| etcdCACertFile | etcdCACert | Path to the etcd CA certificate file. | | string | none |
-| kubeconfig | kubeconfig | Location of the `kubeconfig` file. | | string | none |
-| k8sAPIEndpoint | | Location of the kubernetes API server. | | string | none |
-| k8sKeyFile | | Location of a client key for accessing the Kubernetes API. | | string | none |
-| k8sCertFile | | Location of a client certificate for accessing the Kubernetes API. | | string | none |
-| k8sCAFile | | Location of a CA certificate. | | string | none |
-| k8sAPIToken | | Token to be used for accessing the Kubernetes API. | | string | none |
-
-When using the `clusterAccessSecret` field, all other fields in the RemoteClusterConfiguration resource must be empty.
-When the `clusterAccessSecret` reference is used, all datastore configuration is read from the referenced Secret,
-using the "Secret key" names in the table above as the data keys in the Secret. Fields that hold file paths or
-locations in a RemoteClusterConfiguration are expected to contain the file contents when read from a Secret.
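-
-For example, a minimal sketch of a RemoteClusterConfiguration that reads its connection details from a Secret. The Secret name and namespace are illustrative; the referenced Secret would carry data keys such as `datastoreType` and `kubeconfig` from the table above:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: RemoteClusterConfiguration
-metadata:
-  name: cluster1
-spec:
-  clusterAccessSecret:
-    name: remote-cluster-secret # illustrative Secret name
-    namespace: calico-system # illustrative namespace
-    kind: Secret
-```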
-
-All of the fields that start with `etcd` are only valid when the datastore type is `etcdv3`, and the fields that start with `k8s` or `kube` are only valid when the datastore type is `kubernetes`.
-The `kubeconfig` field and the fields that end with `File` must be accessible to Typha and `calicoq`; this does not apply when the data comes from a Secret referenced by `clusterAccessSecret`.
-
-When the datastore type is `kubernetes`, the `kubeconfig` file is optional, but because it can contain all of the authentication information needed to access the Kubernetes API server, it is generally easier to use than setting all the individual `k8s` fields. The other `k8s` fields can still be used on their own, or to override specific kubeconfig values.
-
-## Supported operations
-
-| Datastore type | Create/Delete | Update | Get/List | Notes |
-| --------------------- | ------------- | ------ | -------- | ----- |
-| etcdv3 | Yes | Yes | Yes |
-| Kubernetes API server | Yes | Yes | Yes |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/runtimesecurity.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/runtimesecurity.mdx
deleted file mode 100644
index 14a53f1077..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/runtimesecurity.mdx
+++ /dev/null
@@ -1,72 +0,0 @@
----
-description: API for this Calico Cloud resource
----
-
-# RuntimeSecurity
-
-The **RuntimeSecurity** custom resource (CR) is used to enable and configure Container Threat Detection in a Calico Cloud managed cluster.
-
-### Resource Definition
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: RuntimeSecurity
-metadata:
- name: default
-spec:
- detectorConfig:
- - id: execution-container_deployment_command
- disabled: true
- - id: discovery-enumeration_of_linux_capabilities
- disabled: true
- runtimeExceptionList:
- - matching: regex
- processInvocation: "/bin/ls*"
- pod: "not-evil-pod"
- namespace: "default"
- - matching: exact
- pod: "nginx"
- namespace: default
- - matching: regex
- namespace: "company-operations"
-```
-
-## Runtime Security Definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ------ | --------------------------------------------- | ---------------- | ------ |
-| name | The name of the runtime security resource. | default | string |
-| labels | A set of labels to apply to this resource. | | map |
-
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| ------------------------ | -------------------------------------------------------------------------------------------- | ---------------- | ----------------------------------------------------- | ------- |
-| detectorConfig | Configuration that allows particular threat detectors to be disabled | | [DetectorConfig](#detectorconfig) | |
-| runtimeExceptionList | List of entries describing processes that are allowed to run without generating a security event | | [runtimeExceptionList](#runtimeexceptionlist) | Enabled |
-
-### DetectorConfig
-
-By default, `detectorConfig` is not present. It can be used to disable particular threat detectors in the Calico Cloud managed cluster, with one entry per detector.
-
-| Field | Description | Accepted Values | Schema |
-| -------- | ----------------------------------------------------------------- | --------------- | ------- |
-| id | The ID of the detector this entry applies to. | | string |
-| disabled | Whether the detector should be disabled. | True, False | boolean |
-
-
-### RuntimeExceptionList
-
-The `runtimeExceptionList` holds a list of entries; each entry uses the supported fields below to suppress the
-generation of runtime security reports for matching processes.
-
-| Field | Description | Accepted Values | Schema |
-| -------- | ---------------------------------------------------------------------------------------------------------------------- | --------------- | ------- |
-| matching | Whether the field values in this entry are exact matches or regular expressions. | Exact, Regex | string |
-| processInvocation | The exact name or regex of the process for which runtime report generation should be suppressed. | | string |
-| pod | The exact name or regex of the pod(s) for which runtime report generation should be suppressed. | | string |
-| namespace | The exact name or regex of the namespace(s) for which runtime report generation should be suppressed. | | string |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/securityeventwebhook.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/securityeventwebhook.mdx
deleted file mode 100644
index 770e87f812..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/securityeventwebhook.mdx
+++ /dev/null
@@ -1,146 +0,0 @@
----
-description: API for this Calico Enterprise resource.
----
-
-# Security event webhook
-
-A security event webhook (`SecurityEventWebhook`) is a cluster-scoped resource that represents instances
-of integrations with external systems through the webhook callback mechanism.
-
-For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases
-can be used to specify the resource type on the CLI:
-`securityeventwebhook.projectcalico.org`, `securityeventwebhooks.projectcalico.org` and abbreviations such as
-`securityeventwebhook.p` and `securityeventwebhooks.p`.
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: SecurityEventWebhook
-metadata:
- name: jira-webhook
- annotations:
- webhooks.projectcalico.org/labels: 'Cluster name:Calico Enterprise'
-spec:
- consumer: Jira
- state: Enabled
- query: type=waf
- config:
- - name: url
- value: 'https://your-jira-instance-name.atlassian.net/rest/api/2/issue/'
- - name: project
- value: PRJ
- - name: issueType
- value: Bug
- - name: username
- valueFrom:
- secretKeyRef:
- name: jira-secrets
- key: username
- - name: apiToken
- valueFrom:
- secretKeyRef:
- name: jira-secrets
- key: token
-```
-
-## Security event webhook definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ----- | --------------------------------------------------------- | --------------------------------------------------- | ------ |
-| name | Unique name to describe this resource instance. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string |
-
-#### Annotations
-
-Security event webhooks provide an easy way to add arbitrary data to the webhook-generated HTTP payload through a metadata annotation.
-The value of the `webhooks.projectcalico.org/labels` annotation, if present, is converted into the payload labels.
-The value must conform to the following rules:
-
-- Key and value data for a single label are separated by the `:` character,
-- Multiple labels are separated by the `,` character.
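-
-For example, the following annotation (the label keys and values are illustrative) adds two labels to the generated payload:
-
-```yaml
-metadata:
-  annotations:
-    # keys and values are illustrative; key:value pairs separated by ':' and labels separated by ','
-    webhooks.projectcalico.org/labels: 'Cluster name:production,Team:platform'
-```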
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Required |
-| -------- | --------------------------------------------------------------------------------------------------------------- | ----------------------------------- | ------------------------------------------------------------------------| --------------------------- |
-| consumer | Specifies intended consumer of the webhook. | Slack, Jira, Generic | string | yes |
-| state | Defines current state of the webhook. | Enabled, Disabled, Debug | string | yes |
-| query | Defines query used to retrieve security events from Calico. | [see Query](#query) | string | yes |
-| config | Webhook configuration, required contents of this structure is determined by the value of the `consumer` field. | [see Config](#configuration) | list of [SecurityEventWebhookConfigVar](#securityeventwebhookconfigvar) | yes |
-
-### SecurityEventWebhookConfigVar
-
-| Field | Description | Schema | Required |
-| ------------ | -------------------------------------------------------------------------- | --------------------------------------------------------------------------- | ----------------------------------- |
-| name | Configuration variable name. | string | yes |
-| value | Direct value for the variable. | string | yes if `valueFrom` is not specified |
-| valueFrom | Value defined either in a Kubernetes ConfigMap or in a Kubernetes Secret. | [SecurityEventWebhookConfigVarSource](#securityeventwebhookconfigvarsource) | yes if `value` is not specified |
-
-### SecurityEventWebhookConfigVarSource
-
-| Field | Description | Schema | Required |
-| ---------------- | --------------------------------- | ------------------------------------------------------------------------------------------------------------ | ----------------------------------------- |
-| configMapKeyRef | Kubernetes ConfigMap reference. | `ConfigMapKeySelector` (referenced ConfigMap key should exist in the `tigera-intrusion-detection` namespace) | yes if `secretKeyRef` is not specified |
-| secretKeyRef | Kubernetes Secret reference. | `SecretKeySelector` (referenced Secret key should exist in the `tigera-intrusion-detection` namespace) | yes if `configMapKeyRef` is not specified |
-
-### Status
-
-Field `status` reflects the health of a webhook. It is a list of [Kubernetes Conditions](https://pkg.go.dev/k8s.io/apimachinery@v0.23.0/pkg/apis/meta/v1#Condition).
-
-## Query
-
-Security event webhooks use a domain-specific query language to select which records
-from the data set should trigger the HTTP request.
-
-The query language is composed of any number of selectors, combined
-with boolean expressions (`AND`, `OR`, and `NOT`), set expressions
-(`IN` and `NOTIN`) and bracketed subexpressions. These are translated
-by $[prodname] to Elastic DSL queries that are executed on the backend.
-
-Set expressions support wildcard operators asterisk (`*`) and question mark (`?`).
-The asterisk sign matches zero or more characters and the question mark matches a single character.
-
-A selector consists of a key, comparator, and value. Keys and values
-may be identifiers consisting of alphanumerics and underscores (`_`)
-with the first character being alphabetic or an underscore, or may be
-quoted strings. Values may also be integer or floating point numbers.
-Comparators may be `=` (equal), `!=` (not equal), `<` (less than),
-`<=` (less than or equal), `>` (greater than), or `>=` (greater than
-or equal).
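-
-For example, a query of the following form selects WAF events as well as higher-severity events. The `type=waf` selector appears in the sample above; the `severity` key is an illustrative field name, not necessarily a supported key:
-
-```
-type = waf OR severity >= 80
-```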
-
-## Configuration
-
-The data required in the `config` section of the security event webhook `spec` depends on the intended consumer of the HTTP
-requests generated by the webhook. The `consumer` field of the `spec` specifies the consumer and therefore the data
-that must be present. Currently, Calico supports the following consumers: `Slack`, `Jira`, and `Generic`.
-Payloads generated by the webhook differ for each of the listed use cases.
-
-### Slack
-
-Data fields required for the `Slack` value present in the `spec.consumer` field of a webhook:
-
-| Field | Description | Required |
-| ---------------- | ------------------------------------------------------------------------------ | ---------- |
-| url | A valid Slack [Incoming Webhook URL](https://api.slack.com/messaging/webhooks). | yes |
-
-### Generic
-
-Data fields required for the `Generic` value present in the `spec.consumer` field of a webhook:
-
-| Field | Description | Required |
-| ---------------- | --------------------------------------------------- | ---------- |
-| url | A generic and valid URL of another HTTP(s) endpoint. | yes |
-
-### Jira
-
-Data fields required for the `Jira` value present in the `spec.consumer` field of a webhook:
-
-| Field | Description | Required |
-| ---------------- | ---------------------------------------------------------------------- | ---------- |
-| url | URL of a Jira REST API v2 endpoint for the organisation. | yes |
-| project | A valid Jira project abbreviation. | yes |
-| issueType | A valid issue type for the selected project, examples: `Bug` or `Task` | yes |
-| username | A valid Jira user name. | yes |
-| apiToken | A valid Jira API token for the user. | yes |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/stagedglobalnetworkpolicy.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/stagedglobalnetworkpolicy.mdx
deleted file mode 100644
index f4284e5f3c..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/stagedglobalnetworkpolicy.mdx
+++ /dev/null
@@ -1,164 +0,0 @@
----
-description: API for this resource.
----
-
-# Staged Global Network Policy
-
-import Servicematch from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_servicematch.mdx';
-
-import Serviceaccountmatch from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_serviceaccountmatch.mdx';
-
-import Ports from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_ports.mdx';
-
-import Selectors from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_selectors.mdx';
-
-import Entityrule from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_entityrule.mdx';
-
-import Icmp from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_icmp.mdx';
-
-import Rule from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_rule.mdx';
-
-A staged global network policy resource (`StagedGlobalNetworkPolicy`) represents an ordered set of rules which are applied
-to a collection of endpoints that match a [label selector](#selector). These rules are used to preview network behavior and do
-not enforce network traffic. To enforce network traffic, see the [global network policy resource](globalnetworkpolicy.mdx).
-
-`StagedGlobalNetworkPolicy` is not a namespaced resource. `StagedGlobalNetworkPolicy` applies to [workload endpoint resources](workloadendpoint.mdx) in all namespaces, and to [host endpoint resources](hostendpoint.mdx).
-Select a namespace in a `StagedGlobalNetworkPolicy` in the standard selector by using
-`projectcalico.org/namespace` as the label name and a `namespace` name as the
-value to compare against, e.g., `projectcalico.org/namespace == "default"`.
-See [staged network policy resource](stagednetworkpolicy.mdx) for staged namespaced network policy.
-
-`StagedGlobalNetworkPolicy` resources can be used to define network connectivity rules between groups of $[prodname] endpoints and host endpoints.
-
-
-StagedGlobalNetworkPolicies are organized into [tiers](tier.mdx), which provide an additional layer of ordering—in particular, note that the `Pass` action skips to the
-next [tier](tier.mdx), to enable hierarchical security policy.
-
-For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases
-may be used to specify the resource type on the CLI:
-`stagedglobalnetworkpolicy.projectcalico.org`, `stagedglobalnetworkpolicies.projectcalico.org` and abbreviations such as
-`stagedglobalnetworkpolicy.p` and `stagedglobalnetworkpolicies.p`.
-
-## Sample YAML
-
-This sample policy allows TCP traffic from `frontend` endpoints to port 6379 on
-`database` endpoints.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: StagedGlobalNetworkPolicy
-metadata:
- name: internal-access.allow-tcp-6379
-spec:
- tier: internal-access
- selector: role == 'database'
- types:
- - Ingress
- - Egress
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: role == 'frontend'
- destination:
- ports:
- - 6379
- egress:
- - action: Allow
-```
-
-## Definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema | Default |
-| ----- | ----------------------------------------- | --------------------------------------------------- | ------ | ------- |
-| name | The name of the network policy. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------- | --------------------- | --------------------------------------------- |
-| order | Controls the order of precedence. $[prodname] applies the policy with the lowest value first. | | float | |
-| tier | Name of the [tier](tier.mdx) this policy belongs to. | | string | `default` |
-| selector | Selects the endpoints to which this policy applies. | | [selector](#selector) | all() |
-| serviceAccountSelector | Selects the service account(s) to which this policy applies. Select all service accounts in the cluster with a specific name using the `projectcalico.org/name` label. | | [selector](#selector) | all() |
-| namespaceSelector | Selects the namespace(s) to which this policy applies. Select a specific namespace by name using the `projectcalico.org/name` label. | | [selector](#selector) | all() |
-| types | Applies the policy based on the direction of the traffic. To apply the policy to inbound traffic, set to `Ingress`. To apply the policy to outbound traffic, set to `Egress`. To apply the policy to both, set to `Ingress, Egress`. | `Ingress`, `Egress` | List of strings | Depends on presence of ingress/egress rules\* |
-| ingress | Ordered list of ingress rules applied by policy. | | List of [Rule](#rule) | |
-| egress | Ordered list of egress rules applied by this policy. | | List of [Rule](#rule) | |
-| doNotTrack\*\* | Indicates to apply the rules in this policy before any data plane connection tracking, and that packets allowed by these rules should not be tracked. | true, false | boolean | false |
-| preDNAT\*\* | Indicates to apply the rules in this policy before any DNAT. | true, false | boolean | false |
-| applyOnForward\*\* | Indicates to apply the rules in this policy on forwarded traffic as well as to locally terminated traffic. | true, false | boolean | false |
-| performanceHints | Contains a list of hints to Calico's policy engine to help process the policy more efficiently. Hints never change the enforcement behaviour of the policy. The available hints are described [below](#performance-hints). | `AssumeNeededOnEveryNode` | List of strings | |
-
-\* If `types` has no value, $[prodname] defaults as follows.
-
-> | Ingress Rules Present | Egress Rules Present | `Types` value |
-> | --------------------- | -------------------- | ----------------- |
-> | No | No | `Ingress` |
-> | Yes | No | `Ingress` |
-> | No | Yes | `Egress` |
-> | Yes | Yes | `Ingress, Egress` |
-
-\*\* The `doNotTrack` and `preDNAT` and `applyOnForward` fields are meaningful
-only when applying policy to a [host endpoint](hostendpoint.mdx).
-
-Only one of `doNotTrack` and `preDNAT` may be set to `true` (in a given policy). If they are both `false`, or when applying the policy to a
-[workload endpoint](workloadendpoint.mdx),
-the policy is enforced after connection tracking and any DNAT.
-
-`applyOnForward` must be set to `true` if either `doNotTrack` or `preDNAT` is
-`true` because for a given policy, any untracked rules or rules before DNAT will
-in practice apply to forwarded traffic.
-
-See [Using $[prodname] to Secure Host Interfaces](../host-endpoints/index.mdx)
-for how `doNotTrack` and `preDNAT` and `applyOnForward` can be useful for host endpoints.
-
-### Rule
-
-
-
-### ICMP
-
-
-
-### EntityRule
-
-
-
-### Selector
-
-
-
-### Ports
-
-
-
-### ServiceAccountMatch
-
-
-
-### ServiceMatch
-
-
-
-### Performance Hints
-
-Performance hints provide a way to tell $[prodname] about the intended use of the policy so that it may
-process it more efficiently. Currently only one hint is defined:
-
-* `AssumeNeededOnEveryNode`: normally, $[prodname] only calculates a policy's rules and selectors on nodes where
- the policy is actually in use (i.e. its selector matches a local endpoint). This saves work in most cases.
- The `AssumeNeededOnEveryNode` hint tells $[prodname] to treat the policy as "in use" on *every* node. This is
- useful for large policy sets that are known to apply to all (or nearly all) endpoints. It effectively "preloads"
- the policy on every node so that there is less work to do when the first endpoint matching the policy shows up.
- It also prevents work from being done to tear down the policy when the last endpoint is drained.
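-
-For example, a minimal sketch of a policy that sets this hint (the policy name and selector are illustrative; other fields are omitted for brevity):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: StagedGlobalNetworkPolicy
-metadata:
-  name: default.preloaded-policy # illustrative name
-spec:
-  tier: default
-  selector: all()
-  performanceHints:
-    - AssumeNeededOnEveryNode
-```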
-
-## Supported operations
-
-| Datastore type | Create/Delete | Update | Get/List | Notes |
-| ------------------------ | ------------- | ------ | -------- | ----- |
-| Kubernetes API datastore | Yes | Yes | Yes |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/stagedkubernetesnetworkpolicy.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/stagedkubernetesnetworkpolicy.mdx
deleted file mode 100644
index 284474eb78..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/stagedkubernetesnetworkpolicy.mdx
+++ /dev/null
@@ -1,65 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# Staged Kubernetes Network policy
-
-A staged kubernetes network policy resource (`StagedKubernetesNetworkPolicy`) represents a staged version
-of [Kubernetes network policy](https://kubernetes.io/docs/concepts/services-networking/network-policies).
-This is used to preview network behavior before actually enforcing the network policy. Once persisted, this
-will create a Kubernetes network policy backed by a $[prodname]
-[network policy](networkpolicy.mdx).
-
-For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases
-may be used to specify the resource type on the CLI:
-`stagedkubernetesnetworkpolicy.projectcalico.org`, `stagedkubernetesnetworkpolicies.projectcalico.org` and abbreviations such as
-`stagedkubernetesnetworkpolicy.p` and `stagedkubernetesnetworkpolicies.p`.
-
-## Sample YAML
-
-Below is a sample policy created from the example policy from the
-[Kubernetes NetworkPolicy documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource).
-The only difference between this policy and the example Kubernetes version is that the `apiVersion` and `kind` are changed
-to properly specify a staged Kubernetes network policy.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: StagedKubernetesNetworkPolicy
-metadata:
- name: test-network-policy
- namespace: default
-spec:
- podSelector:
- matchLabels:
- role: db
- policyTypes:
- - Ingress
- - Egress
- ingress:
- - from:
- - ipBlock:
- cidr: 172.17.0.0/16
- except:
- - 172.17.1.0/24
- - namespaceSelector:
- matchLabels:
- project: myproject
- - podSelector:
- matchLabels:
- role: frontend
- ports:
- - protocol: TCP
- port: 6379
- egress:
- - to:
- - ipBlock:
- cidr: 10.0.0.0/24
- ports:
- - protocol: TCP
- port: 5978
-```
-
-## Definition
-
-See the [Kubernetes NetworkPolicy documentation](https://v1-21.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#networkpolicyspec-v1-networking-k8s-io)
-for more information.
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/stagednetworkpolicy.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/stagednetworkpolicy.mdx
deleted file mode 100644
index c58631d9fc..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/stagednetworkpolicy.mdx
+++ /dev/null
@@ -1,148 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# Staged network policy
-
-import Servicematch from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_servicematch.mdx';
-
-import Serviceaccountmatch from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_serviceaccountmatch.mdx';
-
-import Ports from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_ports.mdx';
-
-import Selectors from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_selectors.mdx';
-
-import Entityrule from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_entityrule.mdx';
-
-import Icmp from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_icmp.mdx';
-
-import Rule from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_rule.mdx';
-
-A staged network policy resource (`StagedNetworkPolicy`) represents an ordered set of rules which are applied
-to a collection of endpoints that match a [label selector](#selector). These rules are used to preview network behavior and do
-not enforce network traffic. To enforce network traffic, see the [network policy resource](networkpolicy.mdx).
-
-`StagedNetworkPolicy` is a namespaced resource. `StagedNetworkPolicy` in a specific namespace
-only applies to [workload endpoint resources](workloadendpoint.mdx)
-in that namespace. Two resources are in the same namespace if the `namespace`
-value is set the same on both.
-See [staged global network policy resource](stagedglobalnetworkpolicy.mdx) for staged non-namespaced network policy.
-
-`StagedNetworkPolicy` resources can be used to define network connectivity rules between groups of $[prodname] endpoints and host endpoints.
-
-
-StagedNetworkPolicies are organized into [tiers](tier.mdx), which provide an additional layer of ordering—in particular, note that the `Pass` action skips to the
-next [tier](tier.mdx), to enable hierarchical security policy.
-
-For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases
-may be used to specify the resource type on the CLI:
-`stagednetworkpolicy.projectcalico.org`, `stagednetworkpolicies.projectcalico.org` and abbreviations such as
-`stagednetworkpolicy.p` and `stagednetworkpolicies.p`.
-
-## Sample YAML
-
-This sample policy allows TCP traffic from `frontend` endpoints to port 6379 on
-`database` endpoints.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: StagedNetworkPolicy
-metadata:
- name: internal-access.allow-tcp-6379
- namespace: production
-spec:
- tier: internal-access
- selector: role == 'database'
- types:
- - Ingress
- - Egress
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: role == 'frontend'
- destination:
- ports:
- - 6379
- egress:
- - action: Allow
-```
-
-## Definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema | Default |
-| --------- | ------------------------------------------------------------------ | --------------------------------------------------- | ------ | --------- |
-| name | The name of the network policy. Required. | Alphanumeric string with optional `.`, `_`, or `-`. | string | |
-| namespace | Namespace provides an additional qualification to a resource name. | | string | "default" |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------- | --------------------- | --------------------------------------------- |
-| order | Controls the order of precedence. $[prodname] applies the policy with the lowest value first. | | float | |
-| tier | Name of the [tier](tier.mdx) this policy belongs to. | | string | `default` |
-| selector | Selects the endpoints to which this policy applies. | | [selector](#selector) | all() |
-| types | Applies the policy based on the direction of the traffic. To apply the policy to inbound traffic, set to `Ingress`. To apply the policy to outbound traffic, set to `Egress`. To apply the policy to both, set to `Ingress, Egress`. | `Ingress`, `Egress` | List of strings | Depends on presence of ingress/egress rules\* |
-| ingress | Ordered list of ingress rules applied by policy. | | List of [Rule](#rule) | |
-| egress | Ordered list of egress rules applied by this policy. | | List of [Rule](#rule) | |
-| serviceAccountSelector | Selects the service account(s) to which this policy applies. Select a specific service account by name using the `projectcalico.org/name` label. | | [selector](#selector) | all() |
-| performanceHints | Contains a list of hints to Calico's policy engine to help process the policy more efficiently. Hints never change the enforcement behaviour of the policy. The available hints are described [below](#performance-hints). | `AssumeNeededOnEveryNode` | List of strings | |
-
-\* If `types` has no value, $[prodname] defaults as follows.
-
-> | Ingress Rules Present | Egress Rules Present | `Types` value |
-> | --------------------- | -------------------- | ----------------- |
-> | No | No | `Ingress` |
-> | Yes | No | `Ingress` |
-> | No | Yes | `Egress` |
-> | Yes | Yes | `Ingress, Egress` |
-
-### Rule
-
-
-
-### ICMP
-
-
-
-### EntityRule
-
-
-
-### Selector
-
-
-
-### Ports
-
-
-
-### ServiceAccountMatch
-
-
-
-### ServiceMatch
-
-
-
-### Performance Hints
-
-Performance hints provide a way to tell $[prodname] about the intended use of the policy so that it may
-process it more efficiently. Currently only one hint is defined:
-
-* `AssumeNeededOnEveryNode`: normally, $[prodname] only calculates a policy's rules and selectors on nodes where
- the policy is actually in use (i.e. its selector matches a local endpoint). This saves work in most cases.
- The `AssumeNeededOnEveryNode` hint tells $[prodname] to treat the policy as "in use" on *every* node. This is
- useful for large policy sets that are known to apply to all (or nearly all) endpoints. It effectively "preloads"
- the policy on every node so that there is less work to do when the first endpoint matching the policy shows up.
- It also prevents work from being done to tear down the policy when the last endpoint is drained.
-
-## Supported operations
-
-| Datastore type | Create/Delete | Update | Get/List | Notes |
-| ------------------------ | ------------- | ------ | -------- | ----- |
-| Kubernetes API datastore | Yes | Yes | Yes |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/tier.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/tier.mdx
deleted file mode 100644
index 09c405ec78..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/tier.mdx
+++ /dev/null
@@ -1,62 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# Tier
-
-A tier resource (`Tier`) represents an ordered collection of [NetworkPolicies](networkpolicy.mdx)
-and/or [GlobalNetworkPolicies](globalnetworkpolicy.mdx).
-Tiers are used to divide these policies into groups of different priorities. These policies
-are ordered within a Tier: the additional hierarchy of Tiers provides more flexibility
-because the `Pass` `action` in a Rule jumps to the next Tier. Some example use cases for this are:
-
-- Allowing privileged users to define security policy that takes precedence over other users.
-- Translating hierarchies of physical firewalls directly into $[prodname] policy.
-
-For `kubectl` [commands](https://kubernetes.io/docs/reference/kubectl/overview/), the following case-insensitive aliases
-may be used to specify the resource type on the CLI:
-`tier.projectcalico.org`, `tiers.projectcalico.org` and abbreviations such as
-`tier.p` and `tiers.p`.
-
-## How Policy Is Evaluated
-
-When a new connection is processed by $[prodname], each tier that contains a policy that applies to the endpoint processes the packet.
-Tiers are sorted by their `order` - smallest number first.
-
-Policies in each Tier are then processed in order.
-
-- If a [NetworkPolicy](networkpolicy.mdx) or [GlobalNetworkPolicy](globalnetworkpolicy.mdx) in the Tier `Allow`s or `Deny`s the packet, then evaluation is done: the packet is handled accordingly.
-- If a [NetworkPolicy](networkpolicy.mdx) or [GlobalNetworkPolicy](globalnetworkpolicy.mdx) in the Tier `Pass`es the packet, the next Tier containing a Policy that applies to the endpoint processes the packet.
-
-If the Tier applies to the endpoint but takes no action on the packet, the packet is dropped.
-
-
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: Tier
-metadata:
- name: internal-access
-spec:
- order: 100
-```
-
-## Definition
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema |
-| ----- | --------------------- | --------------- | ------ |
-| name | The name of the tier. | | string |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-|-------|--------------------------------------------------------------------------------------------------------------------------------------|-----------------|--------|-----------------------|
-| order | (Optional) Indicates priority of this Tier, with lower order taking precedence. No value indicates highest order (lowest precedence) | | float | `nil` (highest order) |
-
-All Policies created by $[prodname] orchestrator integrations are created in the default (last) Tier.
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/resources/workloadendpoint.mdx b/calico-cloud_versioned_docs/version-20-1/reference/resources/workloadendpoint.mdx
deleted file mode 100644
index f9ca037212..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/resources/workloadendpoint.mdx
+++ /dev/null
@@ -1,141 +0,0 @@
----
-description: API for this Calico Cloud resource.
----
-
-# Workload endpoint
-
-import Ipnat from '@site/calico-cloud_versioned_docs/version-20-1/_includes/content/_ipnat.mdx';
-
-A workload endpoint resource (`WorkloadEndpoint`) represents an interface
-connecting a $[prodname] networked container or VM to its host.
-
-Each endpoint may specify a set of labels and list of profiles that $[prodname] will use
-to apply policy to the interface.
-
-A workload endpoint is a namespaced resource; this means that a
-[NetworkPolicy](networkpolicy.mdx)
-in a specific namespace only applies to the WorkloadEndpoint in that namespace.
-Two resources are in the same namespace if the namespace value is set the same
-on both.
-
-This resource is not supported in `kubectl`.
-
-:::note
-
-While `calicoctl` allows the user to fully manage Workload Endpoint resources,
-the lifecycle of these resources is generally handled by an orchestrator-specific
-plugin such as the $[prodname] CNI plugin. In general, we recommend that you only
-use `calicoctl` to view this resource type.
-
-:::
-
-**Multiple networks**
-
-If multiple networks are enabled, workload endpoints will have additional labels which can be used in network policy selectors:
-
-- `projectcalico.org/network`: The name of the network specified in the NetworkAttachmentDefinition.
-- `projectcalico.org/network-namespace`: The namespace the network is in.
-- `projectcalico.org/network-interface`: The network interface for the workload endpoint.
-
-For more information, see the [multiple-networks how-to guide](../../networking/configuring/multiple-networks.mdx).
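-
-For example, a sketch of a policy selector that restricts a policy to endpoints attached to a particular network (the network name is illustrative):
-
-```yaml
-spec:
-  selector: projectcalico.org/network == 'additional-network' # illustrative network name
-```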
-
-## Sample YAML
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: WorkloadEndpoint
-metadata:
- name: node1-k8s-my--nginx--b1337a-eth0
- namespace: default
- labels:
- app: frontend
- projectcalico.org/namespace: default
- projectcalico.org/orchestrator: k8s
-spec:
- node: node1
- orchestrator: k8s
- containerID: 1337495556942031415926535
- pod: my-nginx-b1337a
- endpoint: eth0
- interfaceName: cali0ef24ba
- mac: ca:fe:1d:52:bb:e9
- ipNetworks:
- - 192.168.0.0/32
- profiles:
- - profile1
- ports:
- - name: some-port
- port: 1234
- protocol: TCP
- - name: another-port
- port: 5432
- protocol: UDP
-```
-
-## Definitions
-
-### Metadata
-
-| Field | Description | Accepted Values | Schema | Default |
-| --------- | ------------------------------------------------------------------ | -------------------------------------------------- | ------ | --------- |
-| name | The name of this workload endpoint resource. Required. | Alphanumeric string with optional `.`, `_`, or `-` | string | |
-| namespace | Namespace provides an additional qualification to a resource name. | | string | "default" |
-| labels | A set of labels to apply to this endpoint. | | map | |
-
-### Spec
-
-| Field | Description | Accepted Values | Schema | Default |
-| ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- | ---------------------------------------------- | ------- |
-| workload | The name of the workload to which this endpoint belongs. | | string | |
-| orchestrator | The orchestrator that created this endpoint. | | string | |
-| node | The node where this endpoint resides. | | string | |
-| containerID | The CNI CONTAINER_ID of the workload endpoint. | | string | |
-| pod | Kubernetes pod name for this workload endpoint. | | string | |
-| endpoint | Container network interface name. | | string | |
-| ipNetworks | The CIDRs assigned to the interface. | | List of strings | |
-| ipNATs | List of 1:1 NAT mappings to apply to the endpoint. | | List of [IPNATs](#ipnat) | |
-| awsElasticIPs | List of AWS Elastic IP addresses that should be considered for this workload; only used for workloads in an AWS-backed IP pool. This should be set via the `cni.projectcalico.org/awsElasticIPs` Pod annotation. | | List of valid IP addresses | |
-| ipv4Gateway | The gateway IPv4 address for traffic from the workload. | | string | |
-| ipv6Gateway | The gateway IPv6 address for traffic from the workload. | | string | |
-| profiles | List of profiles assigned to this endpoint. | | List of strings | |
-| interfaceName | The name of the host-side interface attached to the workload. | | string | |
-| mac | The source MAC address of traffic generated by the workload. | | IEEE 802 MAC-48, EUI-48, or EUI-64 | |
-| ports | List of named ports that this workload exposes. | | List of [WorkloadEndpointPorts](#endpointport) | |
-
-### IPNAT
-
-
-
-### EndpointPort
-
-A WorkloadEndpointPort associates a name with a particular TCP/UDP/SCTP port of the endpoint, allowing it to
-be referenced as a named port in [policy rules](networkpolicy.mdx#entityrule).
-
-| Field | Description | Accepted Values | Schema | Default |
-| -------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------- | ------ | ------- |
-| name | The name to attach to this port, allowing it to be referred to in [policy rules](networkpolicy.mdx#entityrule). Names must be unique within an endpoint. | | string | |
-| protocol | The protocol of this named port. | `TCP`, `UDP`, `SCTP` | string | |
-| port | The workload port number. | `1`-`65535` | int | |
-| hostPort | Port on the host that is forwarded to this port. | `1`-`65535` | int | |
-| hostIP | IP address on the host on which the hostPort is accessible. | Valid IP address | string | |
-
-:::note
-
-On their own, WorkloadEndpointPort entries don't result in any change to the connectivity of the port.
-They only have an effect if they are referred to in policy.
-
-:::
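-
-For example, a sketch of a policy that allows traffic to the `some-port` named port defined in the sample above (the policy name and selectors are illustrative):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
-  name: allow-some-port # illustrative name
-  namespace: default
-spec:
-  selector: app == 'frontend' # matches the label on the sample workload endpoint
-  ingress:
-    - action: Allow
-      protocol: TCP
-      destination:
-        ports:
-          - some-port # named port defined on the workload endpoint
-```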
-
-:::note
-
-The hostPort and hostIP fields are read-only and determined from Kubernetes hostPort configuration.
-These fields are used only when host ports are enabled in Calico.
-
-:::
-
-## Supported operations
-
-| Datastore type | Create/Delete | Update | Get/List | Notes |
-| --------------------- | ------------- | ------ | -------- | -------------------------------------------------------- |
-| Kubernetes API server | No | Yes | Yes | WorkloadEndpoints are directly tied to a Kubernetes pod. |
diff --git a/calico-cloud_versioned_docs/version-20-1/reference/rest-api-reference.mdx b/calico-cloud_versioned_docs/version-20-1/reference/rest-api-reference.mdx
deleted file mode 100644
index 13b5395dee..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/reference/rest-api-reference.mdx
+++ /dev/null
@@ -1,15 +0,0 @@
----
-description: REST API reference
----
-
-# REST API Reference
-
-import SwaggerUI from 'swagger-ui-react';
-
-
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/release-notes/index.mdx b/calico-cloud_versioned_docs/version-20-1/release-notes/index.mdx
deleted file mode 100644
index 13acdb3756..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/release-notes/index.mdx
+++ /dev/null
@@ -1,799 +0,0 @@
----
-description: What's new, and why features provide value for upgrading.
-title: Release notes
----
-
-# Calico Cloud release notes
-
-## November 6, 2024 (version 20.2.0)
-
-### New features and enhancements
-
-#### Image Assurance scan result management
-
-In this release, you can more easily manage your Image Assurance scan results by deleting results you don't need.
-On the **All Scan Results** page, select the checkbox next to a result item, and then click **Actions > Delete**.
-You can also select multiple results and delete them as a bulk action.
-
-#### Enhancements
-
-* Various detector improvements, including better handling of historical data and a detector export function.
-* Improved webhooks with ability to send global alerts.
-* Reduced memory usage for clusters with many `ConfigMap` resources.
-
-### Bug fixes
-
-* Fixed an issue with the Image Assurance CLI scanner to ensure it takes CVE exceptions into account when scanning multiple images from the command line at the same time.
-
-## October 1, 2024 (version 20.1.0)
-
-### New features and enhancements
-
-#### View and manage detectors for Container Threat Detection
-
-We've provided better access to the detectors we use as part of our Container Threat Detection system.
-You can now view the complete list of detectors and turn them on or off as you see fit.
-Detectors can also be configured as part of a new RuntimeSecurity custom resource.
-
-For more information, see [Update detector settings](../threat/container-threat-detection.mdx#update-detectors-settings).
-
-#### Create Security Event exceptions for known processes
-
-We added a way to create Security Event exceptions for processes in your cluster that you know to be safe.
-This can be a helpful way to eliminate noise and false positives in your alerts.
-
-For more information, see [Exclude a process from Security Events alerts](../threat/container-threat-detection.mdx#exclude-process).
-
-#### Added EPSS data to Image Assurance results
-
-Image Assurance scans results now include information using the [Exploit Prediction Scoring System (EPSS)](https://www.first.org/epss/).
-EPSS scores help you determine the likelihood that a given vulnerability will be exploited in the near future.
-Being able to view this information and filter scan results by EPSS score can help you judge the risk of vulnerabilities and prioritize your remediation efforts.
-
-#### Enhancements
-
-* Functional and performance improvements to Image Assurance scan results filtering.
-
-## September 20, 2024 (version 20.0.1)
-
-### Bug fixes
-
-* Fixed an issue so that Guardian picks up refreshed service account tokens. Stale tokens caused problems on some AKS configurations due to a token expiration change.
-
-## September 10, 2024 (version 20.0.0)
-
-### New features and enhancements
-
-#### Helm customizations
-
-We've added new options for customizing Helm installations:
-
-* You can now enable or disable the Compliance and Packet Capture features at installation.
-* For many Calico Cloud components, you can specify node selectors, tolerations, and resource requests and limits.
-
-For more information, see [Connect a cluster to Calico Cloud](../get-started/install-cluster.mdx) and [Install Calico Cloud as part of an automated workflow](../get-started/install-automated.mdx).
-
-#### Automatic Calico Cloud access for connected IdP groups
-
-We made it easier for administrators to control access to Calico Cloud by managing users in their existing identity provider groups.
-When you add an identity provider, users who have memberships of one or more IdP Groups that have been enabled on Calico Cloud can log in without needing to be invited individually.
-
-To enable automatic access for your IdP group members, [open a support ticket](https://tigeraio.my.site.com/community/s/login/).
-
-#### Support for ARM64
-
-This release adds support for clusters running on ARM64 architectures.
-
-#### Support for Kubernetes 1.30
-
-This release adds support for Kubernetes 1.30.
-
-### Deprecated and removed features
-
-* The honeypods feature has been removed from this release.
-
-### Version support
-
-You can now install or upgrade to the following versions:
-
-* Calico Cloud 20
-* Calico Cloud 19
-* Calico Cloud 18
-
-Calico Cloud versions 17 and earlier are no longer supported.
-
-For information about upgrading, see [Upgrade Calico Cloud](../get-started/upgrade-cluster.mdx).
-
-## July 23, 2024 (version 19.4.1)
-
-### Bug fixes
-
-* For AKS clusters with managed Calico, AKS changed the resources it deploys, which caused new installations to fail
-  and caused the operator to stop managing resources on existing clusters. We changed the Calico Cloud installation
-  to deploy the additional resources needed so that normal operation continues and installations proceed.
-
-## July 9, 2024 (version 19.4.0)
-
-### New features and enhancements
-
-#### Bulk vulnerability exceptions for Image Assurance
-
-We added a way to efficiently add large numbers of vulnerability exceptions to your Image Assurance scan results.
-Instead of creating exceptions one by one, you can add them all at once by uploading a CSV file with the vulnerability definitions.
-
-For more information, see [Exclude vulnerabilities from scan results](../image-assurance/exclude-vulnerabilities-from-scan-results.mdx).
-
-### Bug fixes
-
-* Previously, defining new values for the crawdad daemon by using the ImageAssurance custom resource had no effect, and the default values remained in place.
- This problem is now fixed.
-* We fixed a bug that caused problems during Helm upgrades from Calico Cloud versions earlier than 19.1.0.
-
-## June 11, 2024 (version 19.3.0)
-
-### New features and enhancements
-
-#### Jira integration for Image Assurance scan results
-
-We added a way to create and assign Jira issues directly from your Image Assurance scan results page.
-You can filter and prioritize vulnerabilities, and then assign the remediation work to members of your team.
-Calico Cloud populates the information you need, including a CSV file with detailed information about the vulnerabilities in your packages.
-
-For more information, see [Creating Jira issues for scan results](../image-assurance/creating-jira-issues-for-scan-results.mdx).
-
-#### Security events dashboard
-
-A new dashboard summarizes security events and helps practitioners easily understand how events map across namespaces, MITRE techniques, event types, and attack phases. This allows first responders to quickly make sense of potential threats, engage the right stakeholders, and start the incident response and investigation process.
-
-For more information, see [Security event management](../threat/security-event-management.mdx).
-
-#### Exceptions for security events
-
-$[prodname] now allows users to create exceptions for Security Events with varying levels of scope, from an entire namespace down to a specific deployment or workload. This gives operators a way to tune the runtime threat detection they have deployed and focus their investigations and response on critical applications and infrastructure.
-
-For more information, see [Security event management](../threat/security-event-management.mdx).
-
-#### New flow logs panel for Endpoints and View Policy pages
-
-$[prodname] has added new entry points to view flow logs directly from the Endpoints listing and View Policy pages in the UI.
-Users can easily see which endpoints are involved in denied traffic, filter on those workloads, and click a link to open a panel that shows associated flows.
-A similar link has been added for View Policy pages, which allows users to quickly see the flows that have been recently evaluated by that policy to make sense of denied traffic or updates to rules.
-
-#### Security Events in Service Graph
-
-$[prodname] now includes a new tab in Service Graph for Security Events, which takes the place of the Alerts tab. Most runtime threat detection features now generate Security Events, and their inclusion in Service Graph enables users to automatically filter events based on where they are occurring in a cluster.
-
-#### Security Events IP addresses enriched with ASN and geolocation
-
-For security events that contain external IP addresses, $[prodname] now automatically performs a geolocation lookup. Understanding the country of origin for an IP address can often be the quickest and easiest way to distinguish legitimate traffic from malicious traffic.
-
-#### Extend Workload-based WAF to Ingress Gateways
-
-This latest release enables operators to plug in a modified, simplified version of WAF to their own instances of Envoy.
-This allows users to deploy this version of WAF at the edge of their cluster, integrated with an ingress gateway (if based on Envoy), with fully customizable rules based on OWASP CoreRuleSet 4.0 and powered by the Coraza engine.
-
-For more information, see [Deploying WAF with an ingress gateway](../threat/deploying-waf-ingress-gateway.mdx).
-
-#### Specifying resource requests and limits in $[prodname] components
-
-$[prodname] now provides the ability to set resource requests and limits for the components that run as part of $[prodname]. See the documentation for specific guidance on setting these limits.
-
-### Known issues
-
-* Uninstalling $[prodname] is no longer supported on clusters that had Calico managed with AddonManager before connecting to $[prodname]. This includes AKS clusters that had Calico installed and managed by AKS.
-
-### Bug fixes
-
-* Fixed an issue where the status of a cluster was occasionally not updated when the cluster connected, resulting in auth tokens and the license not being updated in the cluster.
-
-## April 30, 2024 (version 19.2.0)
-
-### New features and enhancements
-
-#### Automated installation with client credentials
-
-You can now generate and manage client credentials that you can use to automate the Calico Cloud installation process.
-With persistent API keys, you can build repeatable installation commands that connect your clusters as part of an automated workflow.
-
-For more information, see [Install Calico Cloud as part of an automated workflow](../get-started/install-automated.mdx).
-
-#### Feature options for Helm installations
-
-For Helm installations, you can now configure some feature options during installation.
-You can enable or disable Image Assurance, Container Threat Detection, and the Security Posture Dashboard by adding optional parameters to your Helm command.
-
-For more information, see [Connect a cluster to Calico Cloud](../get-started/install-cluster.mdx).
-
-#### Namespace exclusions for image scanning and runtime view
-
-We added the ability to exclude namespaces from image scanning and runtime view.
-By excluding certain namespaces, you can reduce noise in your scan results and focus attention on higher priority workloads.
-
-For more information, see [Configure exclusions for image scanning](../image-assurance/scanners/cluster-scanner.mdx#configure-exclusions-for-image-scanning).
-
-## April 2, 2024 (version 19.1.0)
-
-### Bug fixes
-
-* We fixed a problem that caused the Image Assurance operator to stop working when it reached its memory limit.
-
-## February 28, 2024 (version 19.0.0)
-
-### New features and enhancements
-
-#### Improved flow log filtering for destination domains
-
-We’ve updated the Felix parameter (`dest_domains`) for DNS policy to make it easy to find only domain names that the deployment connected to (not all the domain names that got translated to the same IP address).
-For more information, see [Flow log data types](../visibility/elastic/flow/datatypes.mdx).
-
-#### New flow logs panel on Endpoints page
-
-We've updated the Endpoints page in Manager UI with a new flow logs panel so you can view and filter Endpoints associated with denied traffic. Flow log metadata includes the source, destination, ports, protocols, and other key fields. We've also updated the Policy Board to highlight policies with denied traffic.
-
-#### Improvements to security events dashboard
-
-We've added the following improvements to the [Security events dashboard](../threat/security-event-management):
-
-- Jira and Slack webhook integration for security event alerts
-
- By [configuring security event alerts](../threat/configuring-webhooks), you can push security event alerts to Slack, Jira, or an external HTTP endpoint of your choice.
-  This lets incident response and security teams use native tools to respond to security event alerts.
-
-- Added threat feed alerts
-
- If you have implemented global threat feeds for suspicious activity (domains or suspicious IPs), alerts are now visible in the Security Overview dashboard.
- For more information on threat feeds, see [Trace and block suspicious IPs](../threat/suspicious-ips).
-
-### Deprecated and removed features
-
-* The AWS security groups integration is removed in this release.
-* The ingress log collection feature is removed in this release.
-
-## January 31, 2024 (version 18.3.0)
-
-### New features and enhancements
-
-#### Assign custom roles to users automatically with Entra ID (formerly Azure AD) groups
-
-We've added the ability to link custom roles in Calico Cloud to your organization's Entra ID groups.
-You can define and modify group membership in Entra ID, and Calico Cloud will automatically grant role-based access to users based on that group membership.
-
-For more information, see [Create a custom role for an Entra ID group](../users/create-custom-role-for-entra-id-group.mdx).
-
-#### Export custom roles
-
-In this release you can export custom roles from a managed cluster and apply them to another managed cluster.
-Previously, if you wanted to duplicate roles in multiple clusters, you needed to create them manually for each cluster.
-Now there is a process to apply those roles quickly and accurately in all clusters.
-
-For more information, see [Creating and assigning custom roles](../users/create-and-assign-custom-roles.mdx).
-
-#### Windows node support for Azure Kubernetes Service
-
-We've added support for Windows nodes in AKS clusters.
-
-#### VXLAN support for cluster mesh and federation
-
-We've expanded our support of cluster mesh to clusters using VXLAN for networking. Cluster mesh can be used to federate services and endpoints to authorize cross-cluster communication with $[prodname] network policies.
-
-For more information, see [Configure federated endpoint identity and services](../multicluster/kubeconfig.mdx).
-
-#### Support for Azure CNI with overlay networking for AKS
-
-We now officially support AKS clusters that are using overlay networking.
-This option is useful if you've exhausted your IP addresses.
-This option augments existing support for Azure CNI with no overlay (where a VNET IP address is assigned to every pod).
-
-#### Support for Kubernetes 1.28
-
-This release adds support for Kubernetes 1.28.
-
-### Deprecated and removed features
-
-* The anomaly detection feature is removed in this release.
- If you enabled this feature, you will now stop receiving anomaly detection alerts.
-* The AWS security groups integration is deprecated in this release.
- It will be removed in a future release.
-* The ingress log collection feature is deprecated in this release.
-  It will be removed in a future release.
-
-### Bug fixes
-
-* We fixed a problem that stopped diagnostics from being collected after a failed installation.
-
-## December 21, 2023 (version 18.2.0)
-
-### New features and enhancements
-
-#### Security Posture Overview dashboard
-
-We've added a new Security Posture Overview dashboard that helps you assess the security posture of your cluster.
-Using a list of prioritized recommended actions, you can start to take steps to reduce your risk over time.
-Because the dashboard is based on existing Calico Cloud data, no configuration is required.
-You can start improving the security posture of your Kubernetes cluster immediately.
-
-For more information, see [Security Posture Overview dashboard](../threat/security-posture-overview.mdx).
-
-#### Support for RKE2
-
-This release comes with support for connecting RKE2 clusters to Calico Cloud.
-
-### Known issues
-
-* You can't connect RKE2 clusters to Calico Cloud if you enabled the Image Assurance runtime view feature on any of your managed clusters.
-As a workaround, disable runtime view before connecting your RKE2 cluster.
-
-### Bug fixes
-
-* Code changes to the Kubernetes controller-runtime led to intermittent errors in how the Container Threat Detection status was displayed in Manager UI.
-We modified the Runtime Security operator to account for these changes.
-
-## November 29, 2023 (version 18.1.0)
-
-### New features and enhancements
-
-* We limited the permissions that are assigned to the Calico Cloud installer.
-Previously, the installer had cluster administrator privileges.
-Now the installer gets access only to what is required to install Calico Cloud.
-* **Image Assurance**. We added a filter that lets you sort your list of running images by severity rating.
-
-### Known issues
-
-* If you update your cluster to a previous version of Calico Cloud (18.0.0 or earlier), you may see an erroneous message about a failed installation that took place before the successful installation.
-This failed installation message can be disregarded.
-
-### Bug fixes
-
-* Fixed an issue that caused security events generated by AKS managed clusters to be missing pod and namespace information.
-* Fixed an issue that caused some pods on managed clusters to crash-loop after upgrading clusters that also have Dynatrace running.
-
-## October 23, 2023 (version 18.0.0)
-
-### New features and enhancements
-
-#### Image Assurance registry scanner
-
-The Image Assurance feature adds the ability to scan images in container registries at any time, on any infrastructure, including Kubernetes. This is ideal protection for images that don’t go through a pipeline (for example, third-party images), but are published to a registry. If CVEs are missed in your build pipeline, you can catch them before they are deployed.
-
-For more information, see [Scan images in container registries](../image-assurance/scanners/registry-scanner.mdx).
-
-#### Security event management
-
-We've added a new Security event management dashboard for threat detection.
-Security events provide context for suspicious activity detected in your cluster.
-Combined with the Kubernetes context, you can see what workloads are affected.
-
-For more information, see [Security event management](../threat/security-event-management.mdx).
-
-#### New performance optimizations for egress gateways
-
-$[prodname] includes new performance options for egress gateway policies that can be used to ensure that application client and gateway pods are on the same cluster node.
-
-For more information, see [Optimize egress networking for workloads with long-lived TCP connections](../networking/egress/egress-gateway-maintenance.mdx).
-
-#### Configurable XFF headers for Envoy
-
-We've added support for XFF to propagate the original IP address when proxying application layer traffic with Envoy within a Kubernetes cluster.
-
-For more information, see [Installation reference](../reference/installation/api.mdx#operator.tigera.io/v1.EnvoySettings).
-
-#### Alert-only mode for workload-based Web Application Firewall (WAF)
-
-We've added a new default mode for WAF that is monitor/event only.
-This allows operators and security teams to verify the accuracy of configured rules before actively blocking traffic.
-
-For more information, see [Web application firewall](../threat/web-application-firewall.mdx).
-
-#### Enhancements
-
-* You can now remove disconnected clusters from the list of managed clusters. See [Cluster management](../operations/cluster-management.mdx#remove-a-cluster).
-
-### Known issues
-
-
-* The policy recommendation tool is not displaying policy recommendations.
-There is currently no workaround, but the issue will be fixed in an upcoming release.
-
-* Using a bookmarked link to log in to the $[prodname] UI in this release causes the following problems:
-  - Image Assurance configuration page fails to load
-  - The Manager UI shows the enterprise license countdown at the top of the screen even for paid customers
-
-  If you are experiencing any login issues, go to https://calicocloud.io and log back in.
-
-* Calico panics if kube-proxy or other components are using native `nftables` rules instead of the `iptables-nft` compatibility shim. Until Calico supports native nftables mode, we recommend that you continue to use the iptables-nft compatibility layer for all components. (The compatibility layer was the only option before Kubernetes v1.29 added alpha-level `nftables` support.) Do not run Calico in "legacy" iptables mode on a system that is also using `nftables`. Although this combination does not panic or fail (at least on kernels that support both), the interaction between `iptables` "legacy" mode and `nftables` is confusing: both `iptables` and `nftables` rules can be executed on the same packet, leading to policy verdicts being "overturned". Note that this issue applies to all previous versions of $[prodname].
-
-### Bug fixes
-
-
-* We fixed an issue that caused a bad error message to appear when you changed a Calico Cloud user's role.
-
-## September 11, 2023 (version 17.1.1)
-
-### New features and enhancements
-
-* We redesigned a section of the Image Assurance UI to make it easier to see how vulnerable an image is.
-* We made improvements to the way Container Threat Detection processes large volumes of alerts.
-
-### Bug fixes
-
-* We fixed a problem that caused the user interface to crash when a user attempted to edit a policy.
-* We fixed a problem that prevented certain images from being scanned.
-
-## September 5, 2023 (version 17.1.0)
-
-### New features and enhancements
-
-#### Improvements to software versioning for Calico Cloud installations on managed clusters
-
-We've made it easier to see what version of Calico Cloud you're running or installing on a managed cluster.
-Now you can:
-* view the Calico Cloud version number for each connected cluster from the Managed Clusters page
-* see when an update is available for a managed cluster
-* select a specific Calico Cloud version to install when you connect a cluster
-
-#### Security Events UI page
-
-Alerts corresponding to detections generated by container threat detection will now be published to the Security Events UI page, found within the Threat Defense left navigation menu item.
-
-On this page, for every security event detected, users can view:
-
-* security event name
-* security event type
-* severity level
-* a description of what suspicious activity has been detected
-* impacted assets (pod and namespace)
-* attack vector type
-* MITRE tactics and techniques associated with the detection
-* mitigation recommendation
-* additional metadata and context associated with the detection.
-
-Alerts will continue to also appear on the Alerts UI page.
-
-For more information, see [Security event management](../threat/security-event-management.mdx).
-
-### Bug fixes
-
-* Runtime Security alerts now correctly show `generated_time` as the time the alert was generated. Previously, they incorrectly showed the time when the underlying event that caused the alert was generated.
-* Runtime Security alerts now correctly show the associated MITRE information.
-
-### Known issues
-
-* Enabling WAF and Container Threat Detection through the UI is not possible for clusters running Kubernetes v1.27+. Both features can be enabled using kubectl.
-* If you connected your cluster to Calico Cloud using Helm before the release of version 17.1.0, reinstalling or upgrading to any version of Calico Cloud may result in an error: "Error: rendered manifests contain a resource that already exists."
-Previously, the `installers.operator.calicocloud.io` custom resource definition (CRD) installed by Helm required manual upgrades.
-After the release of Calico Cloud 17.1.0, this CRD is updated automatically, but this change causes errors the first time you attempt to reinstall or upgrade Calico Cloud on a cluster that was connected using Helm before the release of Calico Cloud 17.1.0.
-
-As a workaround, label the CRD so that it is managed by Helm by running the following command:
-
-```bash
-kubectl patch crd installers.operator.calicocloud.io -p '{"metadata": {"annotations": {"meta.helm.sh/release-name":"calico-cloud-crds","meta.helm.sh/release-namespace":"calico-cloud"},"labels":{"app.kubernetes.io/managed-by":"Helm"}}}'
- ```
-
-This allows you to successfully reinstall or upgrade to Calico Cloud by following the procedure in [Upgrade Calico Cloud](../get-started/upgrade-cluster.mdx).
-
-### Security updates
-
-* Runtime security upgraded to [golang 1.20.7](https://go.dev/doc/devel/release#go1.20.7), which includes security updates.
-* We rebuilt `cc-operator` and `cc-cni-config-scanner`, which has reduced the number of CVEs.
-
-## August 21, 2023 (version 17.0.0)
-
-### New features and enhancements
-
-#### New policy recommendations engine for namespace isolation
-
-$[prodname] has added a new policy recommendations engine that automatically generates staged policies for namespace isolation within your cluster. For more information, see [Policy recommendations](../network-policy/recommendations/policy-recommendations).
-
-#### Destination-based routing for egress gateways
-
-$[prodname] introduces a new mode for egress gateways that leverages destination-based routing. This allows operators to associate traffic bound for a destination that is external to a Kubernetes cluster (for example, an IP address or CIDR) with a specific egress gateway deployment. For more information, see [Egress gateways](../networking/egress/egress-gateway-on-prem).
-
-#### Support for DNS rules in clusters using NodeLocal DNSCache
-
-$[prodname] has added support for DNS rules in clusters using NodeLocal DNSCache. There is also new documentation on using Calico policy to secure DNS traffic within the cluster when NodeLocal DNSCache is enabled. For more information, see [Use NodeLocal DNSCache in your cluster](../networking/configuring/node-local-dns-cache).
-
-#### Improved UI for configuring Workload-based Web Application Firewall (WAF)
-
-$[prodname] includes updates to the UI that allow you to select which services are enabled for the workload-based Web Application Firewall. For more information, see [Web application firewall](../threat/web-application-firewall).
-
-#### Wireguard support for AKS and EKS with Calico CNI
-
-$[prodname] now offers official support for Wireguard when using Microsoft AKS or Amazon EKS with Calico CNI. This mode of deployment offers performance benefits and a more efficient routing table compared to using cloud provider CNIs. For more information, see [Encrypt data in transit](../compliance/encrypt-cluster-pod-traffic).
-
-#### Additional custom roles for $[prodname]
-
-You can now create custom role-based access controls for two new roles: "Usage Metrics" and "Image Assurance Admin".
-
-#### Image Assurance improvements
-
-* A containerized version of Image Assurance scanner is now available to integrate into your CI/CD platform.
-See [Image Assurance containerized scanner](https://quay.io/repository/tigera/image-assurance-scanner-cli) to pull the latest image.
-* Substantial UI improvements including a new package-centric view of images
-
-### Known issues
-
-* The canvas on Service Graph may zoom and pan unexpectedly when modifying Views or Layers.
-* Dragging tiers to modify their order is currently not working in the UI, though you can still change a tier's order when editing it.
-* Policy recommendations may generate rules with ports and protocols for intra-namespace traffic.
-This will be modified in the next patch release to exclude ports and protocols and provide an option to Allow or Pass this traffic.
-
-## June 6, 2023
-
-### New features and enhancements
-
-#### In-cluster scanning with Image Assurance
-
-Calico Cloud now includes the ability to scan and monitor the images running in your Kubernetes clusters for new vulnerabilities.
-In-cluster scanning will scan any new images not previously scanned, and continuously monitor the BOM (Bill of Materials) for running images that have prior scan results.
-
-#### New detectors for container threat detection
-
-We've added several new detectors for container threat detection.
-These detectors help identify unsanctioned use of network tools, task scheduling, container admin and Docker commands, and much more.
-Calico Cloud now includes over 40 different detectors across each category of the [MITRE ATT&CK Matrix](https://attack.mitre.org/).
-
-## May 2, 2023
-
-This release includes a number of performance improvements and bug fixes.
-
-## April 24, 2023
-
-### Deprecated support for RKE and RKE2
-
-Calico Cloud no longer supports installation on RKE or RKE2.
-
-## April 11, 2023
-
-### New features and enhancements
-
-#### Updates to Managed Clusters
-
-The **Managed Clusters** page has been redesigned to make it easier and more intuitive to search and filter your clusters.
-
-#### Egress gateways for AKS and Azure
-
-$[prodname] adds egress gateway support for Microsoft Azure and AKS. Egress gateways allow you to identify the namespaces associated with egress traffic outside of your cluster. For more information, see [Egress gateways for AKS and Azure](../networking/egress/egress-gateway-azure).
-
-#### UI for workload-based Web Application Firewall (WAF)
-
-$[prodname] includes a new UI to enable and configure a workload-based Web Application Firewall. For more information, see [Workload-based web application firewall](../threat/web-application-firewall.mdx#enable-waf).
-
-#### Application layer policy with Envoy
-
-$[prodname] now includes support for application layer policy with Envoy, enabling platform operators to define authorization rules in $[prodname] policies for protocols such as HTTP and gRPC. For more information, see [Application layer policies](../network-policy/application-layer-policies/).
-
-#### Service Graph performance optimizations
-
-$[prodname] added several optimizations to improve the performance of Service Graph for clusters with larger numbers of namespaces.
-
-#### Improvements to Envoy to accommodate advanced ingress controllers
-
-$[prodname] improves its Envoy deployment so you can use this feature in clusters with ingress controllers that perform advanced load balancing. For more information, see [Workload-based web application firewall](../threat/web-application-firewall.mdx).
-
-#### Improved $[prodname] component security
-
-$[prodname] components were updated with more restrictive access for pods and containers using the Kubernetes security context:
-* Non-root context whenever possible
-* Root context and privilege escalation are used only when necessary
-* Added `drop ALL capabilities` for pod security
-* Enabled `RuntimeDefault` as the default seccomp profile for all workloads
-
-## February 28, 2023
-
-### New features and enhancements
-
-* Adds Bottlerocket support for Container Threat Detection.
-* Adds support for scanning multiple images with Image Assurance.
-
-### Bug fixes
-
-* Fixes "Kibana" menu item rename to "Logs".
-* Bug fixes for Container Threat Detection alerts.
-
-## February 7, 2023
-
-### New features and enhancements
-
-#### New and improved Dashboards
-
-Calico Cloud includes new and improved Dashboards that enable operators to define cluster- and namespace-scoped dashboards with new modules for policy usage, application layer and DNS metrics, and much more.
-
-#### Configure Threat Feeds in the Calico Cloud UI
-
-Calico Cloud includes a new UI that can be used to manage and configure global threat feeds.
-
-For more information, see [Trace and block suspicious IPs](../threat/suspicious-ips.mdx).
-
-#### Namespace-based policy recommendations
-
-Calico Cloud has improved its policy recommendation engine to add namespace-based recommendations.
-This enables operators to easily implement microsegmentation for namespaces.
-
-For more information, see [Create policy recommendation](../network-policy/recommendations/policy-recommendations.mdx).
-
-#### Create custom roles for Calico Cloud users
-
-Calico Cloud administrators can now define granular roles and permissions for users using custom role-based access controls.
-
-For more information, see [Create and assign custom roles](../users/create-and-assign-custom-roles.mdx).
-
-#### Egress gateway improvements
-
-Calico Cloud has improved the probes to check readiness and outbound connectivity of egress gateways.
-Calico Cloud has also rearchitected egress gateway pods to improve security and make use of a temporary init container to set up packet forwarding.
-
-#### Image Assurance updates
-
-* CLI Version v1.3.4
-* Calico Cloud supports the Image Assurance CLI scanner versions 1.3.0 and later.
-* **Bug fix:** Previously, the scanner returned an error if it reached a size limit while uploading vulnerabilities.
-This size limit has been removed.
-
-## December 13, 2022
-
-### New features and enhancements
-
-#### Search by CVE in Image Assurance
-
-Image Assurance reporting features now include a search and filtering capability that allows you to find list items based on a single CVE ID within any Image Assurance report.
-
-#### Enable and disable Container Threat Detection in the Calico Cloud UI
-
-You can now enable or disable Container Threat Detection within the UI. After enabling the feature, you can review the status of which nodes are being monitored by the feature and which nodes of your cluster are unsupported.
-
-#### New Feature: Calico Cloud Service Status Page
-
-All users can view the status and health of the Calico Cloud service on our new status page: [https://status.calicocloud.io](https://status.calicocloud.io/).
-
-## November 1, 2022
-
-### Image Assurance
-
-CLI Version v1.1.2.
-
-The new CLI now checks that it is compatible with the latest Image Assurance API.
-
-### Container Threat Detection
-
-![tech-preview](/img/calico-cloud/tech-preview.svg)
-
-Release of Container Threat Detection
-
-With Container Threat Detection, you can monitor container activity using eBPF. Enable this feature to receive alerts based on file and process activity for known malicious and suspicious behavior. Alert events can be viewed on the Alerts page in Manager UI.
-
-To get started, see [Container Threat Detection](../threat/container-threat-detection.mdx).
-
-## September 26, 2022
-
-### New feature: Helm
-
-$[prodname] now supports [installation using Helm](../get-started/install-cluster.mdx).
-
-### New feature: Private Registry
-
-$[prodname] now supports [installation from private registries](https://docs.calicocloud.io/get-started/connect/install-cluster). Note that this is only supported when installing with Helm.
-
-### Expanded platform support: RKEv2
-
-Installation works on clusters with Calico deployed by RKEv2.
-
-## September 12, 2022
-
-### Image Assurance is GA
-
-Image Assurance is now released for general availability.
-
-With Image Assurance, DevOps and platform teams can scan images in public and private registries, including images that are automatically discovered in connected clusters.
-Image Assurance provides a runtime view into risk, based on known vulnerabilities.
-It also offers admission controller policies to block Kubernetes resources from creating containers with vulnerable images, preventing them from entering your cluster.
-
-#### Changes from the tech preview version
-
-**New Image Assurance CLI scanner**
-
-Image scanning is now configured and performed by the `tigera-scanner` CLI.
-You can integrate `tigera-scanner` into your CI/CD pipelines to ensure builds are checked by Image Assurance before deployment.
-You can also use the CLI scanner offline and on-demand for ad hoc scanning and emergency patching.
-
-**Export options for vulnerability scan results and runtime views**
-
-We've made it easier for platform operators to share Image Assurance scan results and runtime views with these export options:
-
-* Export one row per image or one row per image and CVE.
-* Export CSV or JSON files.
-
-To get started, see [Image Assurance](../image-assurance).
-
-### Malware detection is GA
-
-Malware detection is now released for general availability.
-
-Calico Cloud's malware detection identifies malicious files in your cluster and generates alerts.
-Calico Cloud uses eBPF-based monitoring to log file hashes of programs running in your cluster.
-If there's a match to known malware from our threat intelligence library, you receive an alert.
-You can view your alerts on the _Alerts_ page on Manager UI.
-
-To get started, see [Malware Detection](../threat/container-threat-detection.mdx).
-
-## July 27, 2022
-
-### Improvement: Export logs to a SIEM
-
-To help meet your compliance requirements, we've added documentation to export logs to a SIEM (syslog, Splunk, or Amazon S3). See [Export logs to a SIEM](../visibility/elastic/archive-storage.mdx).
-
-## July 7, 2022
-
-### New feature: Distributed Web Application Firewall (WAF) with Envoy
-
-![tech-preview](/img/calico-cloud/tech-preview.svg)
-
-$[prodname] now includes the option to enable Web Application Firewall (WAF) rule sets when using Envoy as a daemonset. This enables operators to implement an additional layer of security and threat detection for application layer traffic. See [Workload-based Web Application Firewall (WAF)](../threat/web-application-firewall.mdx).
-
-### New Feature: Configuration option to use DNS rules with StagedNetworkPolicies
-
-$[prodname] has added a new configuration option in Felix (`DNSPolicyMode`) that lets you audit DNS rules with StagedNetworkPolicies. There is a small performance trade-off if you enable this option, so we recommend disabling it when it's not required. See [Felix configuration](../reference/resources/felixconfig.mdx#dnspolicymode).
-
-### Improvement: Additional predefined RBAC options
-
-$[prodname] now supports three more predefined RBAC controls (DevOps, security, and compliance personas) for role assignment.
-
-### Improvement: Anomaly detection deployment
-
-![tech-preview](/img/calico-cloud/tech-preview.svg)
-
-$[prodname] has made the configuration and deployment of anomaly detection jobs for threat detection and performance hotspots more granular, allowing you to selectively enable jobs depending on your use case.
-
-### Improvement: Manager UI now displays cluster installation progress and streaming logs
-
-$[prodname] now displays information about managed cluster install progress right in the UI.
-
-After you run the install command (**Connect Cluster** wizard in Managed Clusters), installation progress is automatically displayed along with logs for the managed cluster.
-
-## May 10, 2022
-
-### New feature: Visibility into usage metrics
-
-$[prodname] now displays information about cloud usage metrics. This will provide visibility into the node hours and data ingested for consumption-based invoices.
-
-Account owners can click the new "Usage Metrics" button at the bottom of the left navbar to navigate to the new page.
-
-### Expanded platform support: AKS with managed Calico
-
-Installation works on clusters with Calico deployed by AKS.
-
-## April 26, 2022
-
-### New feature: Malware detection
-
-![tech-preview](/img/calico-cloud/tech-preview.svg)
-
-$[prodname] introduces malware detection in tech preview, which uses eBPF-based monitoring to log observed file hashes of programs running in your $[prodname] Kubernetes clusters. Malware detection identifies malicious files by comparing observed file hashes with our threat intelligence library of known malware, and generates alerts when malware is detected in your cluster. Alerts can be viewed on the Alerts page of Manager UI.
-
-**If you started using $[prodname] before January 24, 2022**, you must upgrade your existing cluster to get malware detection:
-
-1. Navigate to the **Managed Clusters** page.
-1. Select the cluster from the list, and click **Reinstall**.
-1. Copy the updated install script command and run it against your cluster.
-
-## April 20, 2022
-
-### Improved installation
-
-We’ve updated the $[prodname] installation process to improve security, reduce dependencies on utilities (such as bash), and allow you to customize the name of your connected clusters.
-
-The $[prodname] installation process will now require running a `kubectl apply` command instead of a bash script. Additionally, the installation script has been moved behind an authenticated endpoint. The updated install script is now available on the **Managed Clusters** page of the $[prodname] UI.
-
-**If you started using $[prodname] before January 24, 2022**, you must upgrade your existing cluster to get these changes:
-
-1. Navigate to the **Managed Clusters** page.
-1. Select the cluster from the list, and click **Reinstall**.
-1. Copy the updated install script command and run it against your cluster.
-
-## April 19, 2022
-
-### New feature: Image Assurance
-
-![tech-preview](/img/calico-cloud/tech-preview.svg)
-
-$[prodname] introduces Image Assurance in tech preview, enabling DevOps and platform teams to scan images in public and private registries, and images that are automatically discovered in connected clusters. Image Assurance provides a runtime view into risk, based on discovered vulnerabilities. It also offers admission controller policies to enforce how vulnerable images are used to create resources within Kubernetes.
-
-To get started, see [Image Assurance](../image-assurance).
diff --git a/calico-cloud_versioned_docs/version-20-1/releases.json b/calico-cloud_versioned_docs/version-20-1/releases.json
deleted file mode 100644
index a0db016116..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/releases.json
+++ /dev/null
@@ -1,257 +0,0 @@
-[
- {
- "title": "v3.20.0-1.0",
- "tigera-operator": {
- "image": "tigera/operator",
- "version": "v1.35.2",
- "registry": "quay.io"
- },
- "calico": {
- "minor_version": "v3.28",
- "archive_path": "archive"
- },
- "components": {
- "cnx-manager": {
- "image": "tigera/cnx-manager",
- "version": "v3.20.0-1.0"
- },
- "voltron": {
- "image": "tigera/voltron",
- "version": "v3.20.0-1.0"
- },
- "guardian": {
- "image": "tigera/guardian",
- "version": "v3.20.0-1.0"
- },
- "cnx-apiserver": {
- "image": "tigera/cnx-apiserver",
- "version": "v3.20.0-1.0"
- },
- "cnx-queryserver": {
- "image": "tigera/cnx-queryserver",
- "version": "v3.20.0-1.0"
- },
- "cnx-kube-controllers": {
- "image": "tigera/kube-controllers",
- "version": "v3.20.0-1.0"
- },
- "calicoq": {
- "image": "tigera/calicoq",
- "version": "v3.20.0-1.0"
- },
- "typha": {
- "image": "tigera/typha",
- "version": "v3.20.0-1.0"
- },
- "calicoctl": {
- "image": "tigera/calicoctl",
- "version": "v3.20.0-1.0"
- },
- "cnx-node": {
- "image": "tigera/cnx-node",
- "version": "v3.20.0-1.0"
- },
- "dikastes": {
- "image": "tigera/dikastes",
- "version": "v3.20.0-1.0"
- },
- "dex": {
- "image": "tigera/dex",
- "version": "v3.20.0-1.0"
- },
- "fluentd": {
- "image": "tigera/fluentd",
- "version": "v3.20.0-1.0"
- },
- "fluentd-windows": {
- "image": "tigera/fluentd-windows",
- "version": "v3.20.0-1.0"
- },
- "es-proxy": {
- "image": "tigera/es-proxy",
- "version": "v3.20.0-1.0"
- },
- "eck-kibana": {
- "version": "7.17.11"
- },
- "kibana": {
- "image": "tigera/kibana",
- "version": "v3.20.0-1.0"
- },
- "eck-elasticsearch": {
- "version": "7.17.11"
- },
- "elasticsearch": {
- "image": "tigera/elasticsearch",
- "version": "v3.20.0-1.0"
- },
- "cloud-controllers": {
- "image": "tigera/cloud-controllers",
- "version": "v3.20.0-1.0"
- },
- "elastic-tsee-installer": {
- "image": "tigera/intrusion-detection-job-installer",
- "version": "v3.20.0-1.0"
- },
- "es-curator": {
- "image": "tigera/es-curator",
- "version": "v3.20.0-1.0"
- },
- "intrusion-detection-controller": {
- "image": "tigera/intrusion-detection-controller",
- "version": "v3.20.0-1.0"
- },
- "compliance-controller": {
- "image": "tigera/compliance-controller",
- "version": "v3.20.0-1.0"
- },
- "compliance-reporter": {
- "image": "tigera/compliance-reporter",
- "version": "v3.20.0-1.0"
- },
- "compliance-snapshotter": {
- "image": "tigera/compliance-snapshotter",
- "version": "v3.20.0-1.0"
- },
- "compliance-server": {
- "image": "tigera/compliance-server",
- "version": "v3.20.0-1.0"
- },
- "compliance-benchmarker": {
- "image": "tigera/compliance-benchmarker",
- "version": "v3.20.0-1.0"
- },
- "ingress-collector": {
- "image": "tigera/ingress-collector",
- "version": "v3.20.0-1.0"
- },
- "l7-collector": {
- "image": "tigera/l7-collector",
- "version": "v3.20.0-1.0"
- },
- "license-agent": {
- "image": "tigera/license-agent",
- "version": "v3.20.0-1.0"
- },
- "linseed": {
- "image": "tigera/linseed",
- "version": "v3.20.0-1.0"
- },
- "tigera-cni": {
- "image": "tigera/cni",
- "version": "v3.20.0-1.0"
- },
- "firewall-integration": {
- "image": "tigera/firewall-integration",
- "version": "v3.20.0-1.0"
- },
- "egress-gateway": {
- "image": "tigera/egress-gateway",
- "version": "v3.20.0-1.0"
- },
- "key-cert-provisioner": {
- "image": "tigera/key-cert-provisioner",
- "version": "v1.1.10",
- "registry": "quay.io"
- },
- "anomaly_detection_jobs": {
- "image": "tigera/anomaly_detection_jobs",
- "version": "v3.20.0-1.0"
- },
- "anomaly-detection-api": {
- "image": "tigera/anomaly-detection-api",
- "version": "v3.20.0-1.0"
- },
- "elasticsearch-metrics": {
- "image": "tigera/elasticsearch-metrics",
- "version": "v3.20.0-1.0"
- },
- "packetcapture": {
- "image": "tigera/packetcapture",
- "version": "v3.20.0-1.0"
- },
- "prometheus": {
- "image": "tigera/prometheus",
- "version": "v3.20.0-1.0"
- },
- "coreos-prometheus": {
- "version": "v2.43.1"
- },
- "coreos-prometheus-operator": {
- "version": "v0.62.0"
- },
- "coreos-config-reloader": {
- "version": "v0.62.0"
- },
- "prometheus-operator": {
- "image": "tigera/prometheus-operator",
- "version": "v3.20.0-1.0"
- },
- "prometheus-config-reloader": {
- "image": "tigera/prometheus-config-reloader",
- "version": "v3.20.0-1.0"
- },
- "tigera-prometheus-service": {
- "image": "tigera/prometheus-service",
- "version": "v3.20.0-1.0"
- },
- "es-gateway": {
- "image": "tigera/es-gateway",
- "version": "v3.20.0-1.0"
- },
- "deep-packet-inspection": {
- "image": "tigera/deep-packet-inspection",
- "version": "v3.20.0-1.0"
- },
- "eck-elasticsearch-operator": {
- "version": "2.6.1"
- },
- "elasticsearch-operator": {
- "image": "tigera/eck-operator",
- "version": "v3.20.0-1.0"
- },
- "coreos-alertmanager": {
- "version": "v0.25.0"
- },
- "alertmanager": {
- "image": "tigera/alertmanager",
- "version": "v3.20.0-1.0"
- },
- "envoy": {
- "image": "tigera/envoy",
- "version": "v3.20.0-1.0"
- },
- "envoy-init": {
- "image": "tigera/envoy-init",
- "version": "v3.20.0-1.0"
- },
- "windows": {
- "image": "tigera/calico-windows",
- "version": "v3.20.0-1.0"
- },
- "windows-upgrade": {
- "image": "tigera/calico-windows-upgrade",
- "version": "v3.20.0-1.0"
- },
- "policy-recommendation": {
- "image": "tigera/policy-recommendation",
- "version": "v3.20.0-1.0"
- },
- "flexvol": {
- "image": "tigera/pod2daemon-flexvol",
- "version": "v3.20.0-1.0",
- "registry": "quay.io"
- },
- "csi-driver": {
- "image": "tigera/csi",
- "version": "v3.20.0-1.0",
- "registry": "quay.io"
- },
- "csi-node-driver-registrar": {
- "image": "tigera/node-driver-registrar",
- "version": "v3.20.0-1.0",
- "registry": "quay.io"
- }
- }
- }
-]
diff --git a/calico-cloud_versioned_docs/version-20-1/threat/configuring-webhooks.mdx b/calico-cloud_versioned_docs/version-20-1/threat/configuring-webhooks.mdx
deleted file mode 100644
index dcaaf7c7e4..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/threat/configuring-webhooks.mdx
+++ /dev/null
@@ -1,28 +0,0 @@
----
-description: Send security event alerts to 3rd party systems.
-title: Webhooks for security events
----
-
-# Webhooks for security event alerts
-
-You can configure $[prodname] webhooks to post security alerts directly to Slack, Jira or any custom HTTP endpoint.
-
-## Before you begin
-
-Your target application must be configured to receive data from the $[prodname] webhook.
-
-* **Slack**. You must have a webhook URL for the Slack app that you want $[prodname] to send alerts to.
-See [Sending messages using Incoming Webhooks](https://api.slack.com/messaging/webhooks) for more information.
-* **Jira**. You must have an API token for an Atlassian user account that has write permissions to your Jira instance.
- See [Manage API tokens for your Atlassian account](https://support.atlassian.com/atlassian-account/docs/manage-api-tokens-for-your-atlassian-account/) for details on how to obtain an API token.
- You also need:
-  * Your Atlassian site URL. If you access Jira at the URL `https://<your-site>.atlassian.net/jira`, then your site URL is `<your-site>.atlassian.net`.
-  * A Jira project key. This is the Jira project where your $[prodname] webhook creates new issues. The user associated with your API token must have write permissions to this project.
-* **Generic JSON**. You must have a webhook URL for any other application you want the $[prodname] webhook to send alerts to.
-
-## Create a webhook for security event alerts
-
-1. In Manager UI, select **Activity** > **Webhooks**, and then click **Create your first webhook**.
-2. Enter a **Name** for your webhook, select which **Event types** you want to get alerts for, and, under **Type**, select whether to configure the webhook for Slack, Jira, or for generic JSON output.
-3. Complete the fields for your webhook type and click **Create Webhook**.
-
diff --git a/calico-cloud_versioned_docs/version-20-1/threat/container-threat-detection.mdx b/calico-cloud_versioned_docs/version-20-1/threat/container-threat-detection.mdx
deleted file mode 100644
index 1b7a581cb7..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/threat/container-threat-detection.mdx
+++ /dev/null
@@ -1,144 +0,0 @@
----
-description: Threat detection for containerized workloads.
-redirect_from:
- - /threat/malware-detection
----
-
-# Container threat detection
-
-Protect your cluster with our eBPF runtime threat detection engine, which detects malware and suspicious process activity in your containers.
-
-## Value
-
-$[prodname] provides a threat detection engine that analyzes observed file and process activity to detect known malicious and suspicious activity.
-
-As part of these threat detection capabilities, $[prodname] maintains a database of malware file
-hashes. This database consists of SHA256, SHA1, and MD5 hashes of executable file contents that are
-known to be malicious. Whenever a program is launched in a $[prodname] cluster, malware
-detection generates an alert in the **Security Events Dashboard** if the program's hash matches one that is known
-to be malicious.
-
-Our threat detection engine also monitors activity within the containers running in your clusters to detect suspicious behavior and generate corresponding alerts. The threat detection engine monitors the following types of suspicious activity within containers:
-
-- Access to sensitive system files and directories
-- Command and control
-- Defense evasion
-- Discovery
-- Execution
-- Impact
-- Persistence
-- Privilege escalation
-
-## Before you begin...
-
-### Required
-
-$[prodname] Container threat detection uses eBPF to monitor container activity, and it runs on Linux-based
-nodes in a Kubernetes cluster.
-
-Nodes require amd64 (x86_64) architecture CPUs and one of the following distributions:
-
-- Ubuntu Bionic with kernel version 4.15.0 or 5.4.0
-- Ubuntu Focal with kernel version 5.4.0, 5.8.0 or 5.11.0
-- CentOS 7 or 8
-- Fedora 29, 30, 31, 32, 33 or 34
-- Amazon Linux 2
-- Debian Stretch or later
-- Any other distribution with a Linux kernel 5.0 or later that provides BPF Type Format (BTF) for that kernel at the standard place (/sys/kernel/btf/vmlinux); a quick check is shown after the note below
-
-:::note
-
-If your nodes are running a variant kernel, or a similarly-modern kernel but with another platform,
-please open a [Support ticket](https://support.tigera.io/)
-so we can bundle the BTF data to precisely match the version of the kernel running on your cluster nodes.
-
-:::
-
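-To confirm that a node provides BTF data at the standard location mentioned above, you can run a quick check on the node (this only verifies the BTF requirement, not full compatibility):
-
-```bash
-# Print the running kernel version and confirm that BTF type information
-# is exposed at /sys/kernel/btf/vmlinux (the path referenced above).
-uname -r
-ls -lh /sys/kernel/btf/vmlinux
-```
-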
-## How to
-
-- [Enable Container threat detection in the managed cluster](#enable-container-threat-detection)
-- [Monitor the Security Events page for malicious programs](#monitor-alerts-page-for-malicious-programs)
-- [Exclude a process from Security Events alerts](#exclude-a-process-from-Security-Events-alerts)
-- [Update detectors settings](#update-detectors-settings)
- - [Configure detectors via RuntimeSecurity Custom Resource](#configure-detectors-via-runtimesecurity-custom-resource)
-
-### Enable Container Threat Detection
-
-Container threat detection is disabled by default.
-
-To enable Container threat detection on your managed cluster, go to the **Threat Defense** section in the $[prodname] UI, and select **Enable Container Threat Detection**.
-This will result in Container threat detection running on all nodes in the managed cluster to detect malware and suspicious processes.
-
-Alternatively, Container threat detection can be enabled using kubectl:
-
-```bash
-kubectl apply -f - <<EOF
-apiVersion: operator.tigera.io/v1
-kind: RuntimeSecurity
-metadata:
-  name: default
-EOF
-```
-
-To stop deep packet inspection, delete the DeepPacketInspection resource from your cluster.
-
-```bash
-kubectl delete -f <your-deep-packet-inspection-manifest>.yaml
-```
-
-**Examples of selecting workloads**
-
-Following is a basic example that selects a single workload that has the label `k8s-app` with the value `nginx`.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: DeepPacketInspection
-metadata:
- name: sample-dpi-nginx
- namespace: sample
-spec:
- selector: k8s-app == "nginx"
-```
-
-In the following example, we select all workload endpoints in the `sample` namespace.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: DeepPacketInspection
-metadata:
- name: sample-dpi-all
- namespace: sample
-spec:
- selector: all()
-```
-
-### Configure resource requirements
-
-Adjust the CPU and RAM used for performing deep packet inspection by updating the [component resource in IntrusionDetection](../reference/installation/api.mdx#operator.tigera.io/v1.IntrusionDetectionComponentResource).
-
-For a data transfer rate of 1GB/sec on workload endpoints being monitored, we recommend a minimum of 1 CPU and 1GB RAM.
-
-The following example configures deep packet inspection to use a maximum of 1 CPU and 1GB RAM.
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: IntrusionDetection
-metadata:
- name: tigera-secure
-spec:
- componentResources:
- - componentName: DeepPacketInspection
- resourceRequirements:
- limits:
- cpu: '1'
- memory: 1Gi
- requests:
- cpu: 100m
- memory: 100Mi
-```
-
-### Access alerts
-
-The alerts generated by deep packet inspection are available in the Manager UI in the Alerts page.
-
-### Verify deep packet inspection is running
-
-Get the [status of DeepPacketInspection](../reference/resources/deeppacketinspection.mdx#status) resource to verify if live traffic is being monitored on selected workload endpoints.
-
-```bash
-kubectl get deeppacketinspection <name> -n <namespace> -o yaml
-```
-
-## Additional resources
-
-- [Configure packet capture](../visibility/packetcapture.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/threat/deploying-waf-ingress-gateway.mdx b/calico-cloud_versioned_docs/version-20-1/threat/deploying-waf-ingress-gateway.mdx
deleted file mode 100644
index 8c83b6fdc3..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/threat/deploying-waf-ingress-gateway.mdx
+++ /dev/null
@@ -1,319 +0,0 @@
----
-description: Deploy WAF with ingress gateways
----
-
-# Deploying WAF with an ingress gateway
-
-## Introduction
-
-In addition to automatically deploying and managing our WAF inside the cluster to protect each workload, we also offer the option to manually deploy our WAF to integrate with Envoy-based Gateways.
-
-Deploying WAF in this way has the following characteristics and caveats:
-
-* Comes with OWASP CoreRuleSet 4.0 built-in, which can be overridden with custom rules
-* Uses Coraza as the WAF engine
-* Integrates with Envoy using ext_authz filter
-* Logs to stdout, allowing the user to decide where to send the WAF logs
-* Manually deployed and configured by the user
-
-## Deployment guide
-
-This documentation outlines the process of deploying our Web Application Firewall (WAF) with an Istio ingress gateway. By deploying the WAF alongside the Istio ingress gateway, incoming requests to the cluster are inspected, secured, and filtered before they reach the underlying services within the cluster.
-
-There are three steps to deploying WAF with an Istio ingress gateway:
-
-* Add WAF as a sidecar injected in Istio ingress gateway pods
-* Update Istio ingress gateway to use WAF with Envoy’s ext_authz filter
-* Validate the configuration works by testing WAF
-
-## Step 1: Enable Istio ingress gateway for custom sidecar injection
-
-### 1. Initialize Istio operator
-
-If your Istio installation was done using the Istio operator, there's no need to reinstall the Istio operator. However, if Istio was installed by means other than the Istio operator, then you should install the Istio operator using the following command to ensure that you can leverage custom sidecar injection capabilities.
-
-```bash
-istioctl operator init
-```
-
-This command deploys the Istio operator, named `istio-operator`, in the `istio-operator` namespace.
-
-### 2. Deploy IstioOperator custom resource for custom sidecar injection
-
-Create an IstioOperator custom resource to enable custom sidecar injection.
-Use the provided IstioOperator definition as a starting point:
-
-```bash
-kubectl apply -f - <<EOF
-# (Placeholder: the IstioOperator definition that enables the custom sidecar
-#  injection template goes here; use the definition provided for your
-#  $[prodname] WAF integration as the starting point.)
-EOF
-```
-
-## Known issues
-
-#### Default Istio installation commands:
-
-Using `istioctl install` commands directly for Istio installation does not automatically enable custom sidecar injection. This is because it won't update the istio-sidecar-injector ConfigMap with the configured sidecar injection template.
-
-This limitation can be overcome by installing the IstioOperator and deploying the IstioOperator CR to enable the custom sidecar injection template. By doing so, you can ensure that the custom sidecar injection templates are properly applied and managed within your Istio service mesh. This approach provides a more flexible and customizable way to manage sidecar injections, allowing for configurations that meet specific requirements of your applications and services.
-
-#### Manual updates to istio-sidecar-injector ConfigMap:
-
-While pods are generally injected based on the sidecar injection template configured in the istio-sidecar-injector ConfigMap, manually updating, patching or adding the sidecar injection template into the ConfigMap does not guarantee the injection of custom sidecars into annotated and labeled pods.
-
-### Considerations
-
-* Istio's default installation commands may not automatically integrate custom sidecar injection configurations.
-* Manual modifications to the istio-sidecar-injector ConfigMap may not trigger the injection of custom sidecars into pods as expected.
-
-## Custom injection support across cloud providers
-
-The ability to use custom injection mechanisms in Istio may vary across different Kubernetes clusters on various cloud providers. Below is a detailed section outlining the specific scenarios for AWS EKS, AWS Kubeadm, Google GKE, and Azure AKS.
-
-### AWS EKS (Elastic Kubernetes Service)
-
-Custom injection mechanisms, especially those involving Istio sidecars or other custom configurations, may encounter challenges on AWS EKS. This limitation is due to the managed nature of EKS, where certain components and configurations are controlled by AWS.
-
-### AWS Kubeadm
-
-Custom injection is generally well-supported on Kubernetes clusters created using kubeadm on AWS. In this self-managed setup, you have more control over the cluster components, making it suitable for custom configurations, including injection of Istio sidecars or other custom components.
-
-### Google Kubernetes Engine (GKE)
-
-GKE, being a managed Kubernetes service by Google Cloud, supports custom injection mechanisms. Google provides a flexible environment where you can apply custom configurations to the cluster, making it compatible with Istio sidecar injection and similar customization approaches.
-
-### Azure AKS (Azure Kubernetes Service)
-
-Azure AKS generally supports custom injection mechanisms. While it's a managed Kubernetes service, AKS provides flexibility for certain customizations, making it compatible with Istio sidecar injection and similar customization approaches.
diff --git a/calico-cloud_versioned_docs/version-20-1/threat/index.mdx b/calico-cloud_versioned_docs/version-20-1/threat/index.mdx
deleted file mode 100644
index e5596789bf..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/threat/index.mdx
+++ /dev/null
@@ -1,24 +0,0 @@
----
-description: Trace, analyze, and block malicious threats using intelligent feeds and alerts.
-hide_table_of_contents: true
----
-
-import { DocCardLink, DocCardLinkLayout } from '/src/___new___/components';
-
-
-# Threat defense
-
-Use real-time monitoring to detect and block threats to your cluster.
-
-
-
-
-
-
-
-
-
-
-
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/threat/security-event-management.mdx b/calico-cloud_versioned_docs/version-20-1/threat/security-event-management.mdx
deleted file mode 100644
index 095e4290ac..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/threat/security-event-management.mdx
+++ /dev/null
@@ -1,122 +0,0 @@
----
-description: Manage security events from your cluster in a single place.
----
-
-# Security event management
-
-Manage security events from your cluster in a single place.
-
-## Value
-
-Security events indicate that a threat actor may be present in your Kubernetes cluster. For example, a DNS request to a malicious hostname, a triggered WAF rule, or the opening of a sensitive file. $[prodname] provides security engineers and incident response teams with a single dashboard to manage threat alerts. Benefits include:
-
-- A filtered list of critical events with recommended remediation
-- Identify impacts on applications
-- Understand the scope and frequency of the issue
-- Manage alert noise by dismissing events (show/hide)
-- Manage alert noise by creating exceptions
-
-## Before you begin
-
-**Required**
-
--- [Container threat detection is enabled](./container-threat-detection)
-
-**Limitations**
-
-- Only basic WAF security events are included. Over time, the dashboard will contain a full range of $[prodname] security events.
-- You cannot control which users can view or edit the page using fine-grained role-based access controls
-
-## Security Events Dashboard
-
-The **Security Events Dashboard** page gives you a high-level view of recent security events.
-You can use this visual reference to get an overall sense of your cluster's health.
-If you find anything that merits further investigation, you can click on an event for more details.
-
-* In Manager UI, go to **Threat defense > Security Events Dashboard**.
-
-![Security Events Dashboard](/img/calico-enterprise/security-events-dashboard.png)
-
-## Security Events
-
-The **Security Events** page lists all the security events that have been detected for your cluster.
-You can view and filter your security events to focus on the ones that need further investigation.
-
-* In Manager UI, go to **Threat Defense > Security Events**.
-
-### Dismiss a security event
-
-You can clear your security events list by dismissing events that you've finished reviewing.
-When you dismiss an event, that event is no longer visible in the list.
-
-1. In Manager UI, go to **Threat Defense > Security Events**.
-1. Find a security event in the list, and then click **Action > Dismiss Security Event**.
-
-### Create a security event exception
-
-You can prevent certain kinds of security events from appearing in the list by creating a security event exception.
-This is helpful if you want to reduce alert noise for workloads that you know are safe.
-When you create an exception, all matching security events are removed from the security events list.
-Future matches will not appear in the list.
-
-1. In Manager UI, go to **Threat Defense > Security Events**.
-1. Find a security event in the list, and then click **Action > Add exception**.
-1. On the **Create an Exception** dialog, select a scope for the exception and click **Create Exception**.
-
-You can manage your exceptions by clicking **Threat Defense > Security Events > Exceptions**.
-You can browse, edit, and delete exceptions on the list.
-
-### UI help
-
-**Event details page**
-
-Provides actions to remediate the detection and stop the attack from progressing. For example:
-
-![waf-security](/img/calico-enterprise/waf-security-events-latest.png)
-
-**Severity**
-
-$[prodname] calculates severity (Critical, High, Medium, Low) using a combination of NIST CVSS 3.0 and MITRE IDs.
-
-**MITRE IDs**
-
-Multiple MITRE IDs may be associated with a security event.
-
-**Attack Vector**
-- Network
-- Process
-- File
-
-**MITRE Tactic** (based on the [MITRE tactics](https://attack.mitre.org/tactics/enterprise/)) includes a specific path, method, or scenario that can compromise cluster security. Valid entries:
-
-| Tactic | Target | Attack techniques |
-| ------------------------------------------------------------ | ------------------------------- | ------------------------------------------------------------ |
-| [Initial access](https://attack.mitre.org/tactics/TA0001/) | Network | Gain an initial foothold within a network using various entry vectors. |
-| [Execution](https://attack.mitre.org/tactics/TA0002/) | Code in local or remote systems | Control code running on local or remote systems using malicious code. |
-| [Impact](https://attack.mitre.org/tactics/TA0040/) | Systems and data | Disrupt availability or compromise integrity by manipulating business and operational processes. |
-| [Persistence](https://attack.mitre.org/tactics/TA0003/) | Maintain footholds | Maintain access to systems across restarts, credential changes, and other interruptions. |
-| [Privilege Escalation](https://attack.mitre.org/tactics/TA0004/) | Access permissions | Access higher-level permissions on a system or network. |
-| [Defense Evasion](https://attack.mitre.org/tactics/TA0005/) | Avoid detection | Masquerade and hide malware to avoid detection to compromise software, data, scripts, and processes. |
-| [Discovery](https://attack.mitre.org/tactics/TA0007/) | Determine your environment | Gain knowledge about your system and internal network. |
-
-### Frequently asked questions
-
-**How is the recommended remediation determined?**
-
-The Tigera Security Research team maps MITRE IDs to events and provides the recommended remediation.
-
-**Will I see all $[prodname] alerts in this dashboard?**
-
-No. $[prodname] security events do not encompass all alert types; they contain only alerts for threats. Alerts for vulnerabilities detected in a container image, or misconfigurations in your Kubernetes cluster, are displayed in their respective dashboards. However, when vulnerabilities or misconfigurations are exploited by an attacker, those indicators of an attack are considered security events.
-
-**What does dismissing a security event do?**
-
-Dismissing a security event hides it from view.
-
-**Why are some fields in columns blank?**
-
-Security events generated from older managed clusters will not have values for the new fields (for example, MITRE IDs). You can dismiss these events.
-
-**Where can I view security event logs?**
-
-Go to: **Logs**, Kibana index, `tigera_secure_ee_events`.
diff --git a/calico-cloud_versioned_docs/version-20-1/threat/security-posture-overview.mdx b/calico-cloud_versioned_docs/version-20-1/threat/security-posture-overview.mdx
deleted file mode 100644
index 89fa1fd2a3..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/threat/security-posture-overview.mdx
+++ /dev/null
@@ -1,126 +0,0 @@
----
-description: View the overall risk of your Kubernetes cluster and address high-priority security issues.
----
-
-# Security Posture Overview dashboard
-
-:::note
-
-This feature is tech preview. Tech preview features may be subject to significant changes before they become GA.
-
-:::
-
-## Big picture
-
-View the overall security posture of your Kubernetes cluster and use a prioritized list of recommended actions to improve your security score over time.
-
-## Value
-
-Is the security posture of your cluster getting worse or better over time? Do your stakeholders ask for evidence that you are addressing security issues and showing improvement?
-
-The Security Posture Overview dashboard allows every team, no matter how small, to measure the security posture of their cluster and take steps to reduce risk over time. Because the dashboard is based on existing Calico Cloud data, no configuration is required. You can start improving the security posture of your Kubernetes cluster from day one.
-
-![security-posture-first](/img/calico-cloud/security-posture-first.png)
-
-The $[prodname] Security Posture Overview dashboard in Manager UI provides:
-- An overall **Security Cluster Score** that measures the following aspects of security posture management:
- - Namespaces are isolated with a network policy
- - Running images do not contain critical or high vulnerabilities
- - Running images have been scanned for vulnerabilities
- - Egress access to destinations outside the cluster has been secured with network policy
-- A prioritized list of **Recommended Actions** to improve the score
-- A summary of top 10 namespaces by risk
-
-## Concepts
-
-### Security posture management
-
-Security posture management for Kubernetes is the secure configuration of the control plane, applications, and other resources to reduce risk and prevent security events from happening. For each cluster that you connect to $[prodname], we calculate a Cluster Security Score based on several risk types and measure its security posture.
-
-### Scoring frequency
-
-By default, $[prodname] automatically runs risk calculators on every managed cluster approximately every 24 hours.
-
-## Before you begin
-
-:::note
-
-This feature is currently part of an active Beta program and will include significant changes and improvements based on user feedback. If you would like to participate, please reach out to your Customer Success representative.
-
-:::
-
-**Limitations**
-
-- The current bias of the dashboard is to include the Image Assurance feature as a contributing risk, even if you do not use the feature. If you are not using Image Assurance, the Cluster Security Score assumes a perfect score (100) for "High Risk Images" and "Unscanned Images" risks. However, Image Assurance is weighted at 50%; although you are not seeing the widest view of risk available without it, you can still make progress with the other 50% contributing risks. To use Image Assurance, see [Image Assurance scanner](../image-assurance/scanners/overview.mdx).
-
-- Currently, the historical score graph does not properly display a single data point, which is the case when the dashboard first starts assessing your cluster. With time, the historical graph will properly display data.
-
-- You cannot customize the dashboard
-
-## Dashboard walkthrough
-
-### Security Posture Overview dashboard
-
-The Security Posture Overview dashboard provides an overall view of a cluster’s risk. As a strategic dashboard, you typically won’t have the bandwidth to tackle everything at once, but with planning, you can achieve your goals over time.
-
-![security-posture-overview](/img/calico-cloud/security-posture-overview.png)
-
-Note that the dashboard reflects risk assessments related to *build and deploy time only*. Security events related to runtime threat defense features (DPI, container threat detection, WAF) are not included. To view threat security events, see the [Security Events Management dashboard](../threat/security-event-management).
-
-#### Updates to the dashboard
-
-The Cluster Security Score (and the data used by the risk calculators) are updated once a day.
-
-In the left navbar, go to: **Dashboards**, **Security Posture**.
-
-### Cluster security score
-
-The **Cluster Security Score** measures the overall security posture of your Kubernetes cluster based on contributing risks (listed under the score).
-
-![security-posture-score](/img/calico-cloud/security-posture-score.png)
-
-**How the score is calculated**
-
-The cluster security score is a complex combination of cluster score, namespace score, and risk calculator scores (based on percentages and averages). It is calculated by taking the total risk score for the cluster (current and previous value) and subtracting it from 100. The higher the score, the better the cluster is doing.
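-
-For example, if the total risk score for a cluster works out to 35, its cluster security score is 100 - 35 = 65.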
-
-Note also that the risk scores for egress access security and namespace isolation do not include misconfigurations.
-
-**Contributing risk types**
-
-Each risk type that contributes to the score is calculated and aggregated at two levels: for each namespace and each cluster.
-
-| **Risk** | **Enabled by default?** | **Score reflects…** | **Why it matters** |
-| ---------------------- | ----------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
-| High Risk Images | No | The number of images running in the namespace with High or Critical vulnerabilities. | Scanning images for known vulnerabilities is one of the most effective ways to prevent attackers from leveraging known exploits to gain access to systems and data. |
-| Unscanned images | No | Images that have not been scanned. | Unscanned images can contain secrets, passwords, private keys, etc. and pose a risk. |
-| Egress Access Security | Yes | The percentage of workloads that are communicating with endpoints external to the cluster that are not secured by network policy. | Implementing egress access controls helps mitigate the risk of unauthorized communication with malicious destinations, and helps prevent several phases of the [MITRE Attack Matrix](https://attack.mitre.org/matrices/enterprise/). |
-| Namespace isolation | Yes | The percentage of egress and ingress network traffic to/from namespaces that are not secured by network policy. | Isolating namespaces enforces multi-tenancy in your cluster to reduce the impact of potential issues. It also improves security to prevent several phases of the [MITRE Attack Matrix](https://attack.mitre.org/matrices/enterprise/). |
-
-#### About High Risk Images and Unscanned Images
-
-These risks are associated with Image Assurance, an optional feature for scanning images for vulnerabilities.
-**If you are not using Image Assurance**, the risk assessment displays a perfect 100 (maximum security), so your cluster security score will never drop below 50 out of 100. When you decide to use Image Assurance, your overall score will likely change.
-
-Unscanned images are considered high risk until they are scanned.
-
-**How to remediate scores for high risk images**
-
-Use the Remediation panel in the Security Posture Overview dashboard to understand the recommended actions, then use the [Image Assurance dashboard](../image-assurance/understanding-scan-results.mdx) to address the issues.
-
-### Historical graph
-
-The historical graph is a time series of cluster risk scores. The historical scores are static (recorded once and do not change). Data starts to display in the graph approximately TBD days after you install a managed cluster.
-
-![security-posture-historical-graph](/img/calico-cloud/security-posture-historical-graph.png)
-
-#### Top 5 recommended actions
-The Recommended Actions panel is a prioritized list of actions you can take to improve your cluster security score; remediating the first recommendation in the list will improve your score the most.
-
-A key mitigation control is $[prodname]'s automatic [Policy recommendation](../network-policy/recommendations/learn-about-policy-recommendations), which is a staged network policy that teams can test and quickly enforce. Policy recommendations must be enabled, but require no configuration.
-
-![security-posture-recommendations](/img/calico-cloud/security-posture-recommendations.png)
-
-### Dismissing recommended actions
-
-If you dismiss an action, the risk included in the action is ignored and will no longer affect the cluster security score. Although you can dismiss and revert a recommended action, $[prodname] controls removing and updating existing actions.
-
diff --git a/calico-cloud_versioned_docs/version-20-1/threat/suspicious-domains.mdx b/calico-cloud_versioned_docs/version-20-1/threat/suspicious-domains.mdx
deleted file mode 100644
index 30ca25f452..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/threat/suspicious-domains.mdx
+++ /dev/null
@@ -1,90 +0,0 @@
----
-description: Add threat intelligence feeds to trace DNS queries that involve suspicious domains.
----
-
-# Trace and alert on suspicious domains
-
-## Big picture
-
-Add threat intelligence feeds to $[prodname] to trace DNS queries involving suspicious domains.
-
-## Value
-
-$[prodname] integrates with threat intelligence feeds so you can detect when endpoints in your Kubernetes clusters query DNS for suspicious domains, or receive answers with suspicious domains. When events are detected, an anomaly detection dashboard in the UI shows the full context, including which pod(s) were involved so you can analyze and remediate.
-
-## Concepts
-
-$[prodname] supports pull methods for updating threat feeds. Use this method for fully automated threat feed updates without user intervention.
-
-### Domain name threat feeds
-
-A best practice is to develop an allow-list of "known-good" domains that particular applications or services must access, and then [enforce this allow-list with network policy](../network-policy/domain-based-policy.mdx).
-
-In addition to allow-lists, you can use threat feeds to monitor your cluster for DNS queries to known malicious or suspicious domain names. $[prodname] monitors DNS queries and generates alerts for any that are listed in your threat feed.
-
-Threat feeds for domain names associated with malicious **egress** activity (e.g. command and control (C2) servers or data exfiltration) provide the most security value. Threat feeds that associate domain names with malicious **ingress** activity (e.g. port scans or IP sweeps) are less useful because these activities do not cause endpoints in your cluster to query DNS. Consider [IP-based threat feeds](suspicious-ips.mdx) for ingress activity instead.
-
-## Before you begin...
-
-### Required
-
-Privileges to manage GlobalThreatFeed.
-
-### Recommended
-
-We recommend that you turn down the aggregation of DNS logs sent to Elasticsearch for configuring threat feeds. If you do not adjust DNS log aggregation settings, $[prodname] aggregates DNS queries from workloads in the same replica set. This means if a suspicious DNS query is detected, you will only know which replica set made the query and not which specific pod. Go to: [FelixConfiguration](../reference/resources/felixconfig.mdx) and set the field **dnsLogsFileAggregationKind** to **0** to log individual pods separately.
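-
-For example, a minimal sketch of the change using `kubectl patch` (the same pattern used elsewhere in this documentation for flow log aggregation settings):
-
-```bash
-kubectl patch felixconfiguration default --type='merge' -p '{"spec":{"dnsLogsFileAggregationKind":0}}'
-```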
-
-## How to
-
-This section describes how to pull threat feeds to $[prodname].
-
-### Pull threat feed updates
-
-To add threat feeds to $[prodname] for automatic updates (default is once a day), the threat feed(s) must be available using HTTP(S), and return a newline-separated list of domain names.
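-
-For illustration, a feed payload might look like the following (hypothetical domain names; your feed returns its own list):
-
-```
-malicious-c2.example.com
-bad-domain.example.net
-exfil.example.org
-```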
-
-#### Using Manager UI
-
-1. From the Manager UI, select **Threat Feeds** --> **Add Feed**.
-2. Add your threat feed on the Add a New Threat Feed window. For example:
- - **Feed Name**: feodo-tracker
- - **Description**: This is my threat feed based on domains.
- - **URL**: https://my.threatfeed.com/deny-list
- - **Content type**: DomainNameSet
- - **Labels**: Choose a label from the list.
-3. Click **Save Changes**.
-
- From the **Action** menu, you can view or edit the details that you entered and can download the manifest file.
-
-> Go to the Security Events page to view events that are generated when an endpoint in the cluster queries a name on the list. For more information, see [Manage alerts](../visibility/alerts.mdx).
-
-#### Using CLIs
-
-1. Create the GlobalThreatFeed YAML and save it to a file.
- The simplest example of this looks like the following. Replace the **name** and the **URL** with your feed.
-
- ```yaml
- apiVersion: projectcalico.org/v3
- kind: GlobalThreatFeed
- metadata:
- name: my-threat-feed
- spec:
- content: DomainNameSet
- mode: Enabled
- description: 'This is my threat feed'
- feedType: Custom
- pull:
- http:
- url: https://my.threatfeed.com/deny-list
- ```
-
-2. Add the global threat feed to the cluster.
-
- ```shell
-   kubectl apply -f <your_threat_feed_filename>
- ```
-
-> Go to the Security Events page to view events that are generated when an endpoint in the cluster queries a name on the list. For more information, see [Manage alerts](../visibility/alerts.mdx).
-
-## Additional resources
-
-See [GlobalThreatFeed](../reference/resources/globalthreatfeed.mdx) resource definition for all configuration options.
diff --git a/calico-cloud_versioned_docs/version-20-1/threat/suspicious-ips.mdx b/calico-cloud_versioned_docs/version-20-1/threat/suspicious-ips.mdx
deleted file mode 100644
index 1165212493..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/threat/suspicious-ips.mdx
+++ /dev/null
@@ -1,308 +0,0 @@
----
-description: Add threat intelligence feeds to trace network flows of suspicious IP addresses, and optionally block traffic to them.
----
-
-# Trace and block suspicious IPs
-
-## Big picture
-
-Add threat intelligence feeds to $[prodname] to trace network flows of suspicious IP addresses, and optionally block traffic to suspicious IPs.
-
-## Value
-
-$[prodname] integrates with threat intelligence feeds so you can detect when your Kubernetes clusters communicate with suspicious IPs. When communications are detected, an anomaly detection dashboard in the UI shows the full context, including which pod(s) were involved so you can analyze and remediate. You can also use a threat intelligence feed to power a dynamic deny-list, either to or from a specific group of sensitive pods, or your entire cluster.
-
-## Concepts
-
-$[prodname] supports pull methods for updating threat feeds. Use this method for fully automated threat feed updates without user intervention.
-
-### Suspicious IPs: test before you block
-
-There are many different types of threat intelligence feeds (community-curated, company-paid, and internally-developed) that you can choose to monitor in $[prodname]. We recommend that you assess the threat feed contents for false positives before blocking based on the feed. If you decide to block, test a subset of your workloads before rolling out to production to ensure legitimate application traffic is not blocked.
-
-## Before you begin...
-
-### Required
-
-Privileges to manage GlobalThreatFeed and GlobalNetworkPolicy.
-
-### Recommended
-
-We recommend that you turn down the aggregation of flow logs sent to Elasticsearch for configuring threat feeds. If you do not adjust flow logs, $[prodname] aggregates over the external IPs for allowed traffic, and threat feed searches will not provide useful results (unless the traffic is denied by policy). Go to: [FelixConfiguration](../reference/resources/felixconfig.mdx) and set the field **flowLogsFileAggregationKindForAllowed** to **1**.
-
-You can adjust the flow logs by running the following command:
-
-```bash
-kubectl patch felixconfiguration default --type='merge' -p '{"spec":{"flowLogsFileAggregationKindForAllowed":1}}'
-```
-
-## How to
-
-This section describes how to pull threat feeds to $[prodname], and block traffic to a cluster for a suspicious IP.
-
-- [Pull threat feed updates](#pull-threat-feed-updates)
-- [Block traffic to a cluster](#block-traffic-to-a-cluster)
-
-### Pull threat feed updates
-
-To add threat feeds to $[prodname] for automatic updates (default is once a day), the threat feed(s) must be available using HTTP(S), and return a newline-separated list of IP addresses or prefixes in CIDR notation.
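-
-For illustration, a feed payload might look like the following (documentation-range addresses used as stand-ins; your feed returns its own list):
-
-```
-192.0.2.7
-198.51.100.0/24
-203.0.113.42
-```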
-
-#### Using Manager UI
-
-1. From the Manager UI, select **Threat Feeds** --> **Add Feed**.
-2. Add your threat feed on the Add a New Threat Feed window. For example:
- - **Feed Name**: feodo-tracker
- - **Description**: This is the feodo-tracker threat feed.
- - **URL**: [https://feodotracker.abuse.ch/downloads/ipblocklist.txt](https://feodotracker.abuse.ch/downloads/ipblocklist.txt)
- - **Content type**: IPSet
- - **Labels**: Choose a label from the list.
-3. Click **Save Changes**.
-
- From the **Action** menu, you can view or edit the details that you entered and can download the manifest file.
-
-> Go to the Security Events page to view events that are generated when an IP is displayed on the threat feed list. For more information, see [Manage alerts](../visibility/alerts.mdx). When you create a global threat feed in Manager UI, network traffic is not automatically blocked. If you find suspicious IPs on the Security Events page, you need to create a network policy to block the traffic. For help with policy, see [Block traffic to a cluster](#block-traffic-to-a-cluster).
-
-#### Using CLIs
-
-1. Create the GlobalThreatFeed YAML and save it to a file.
- The simplest example of this looks like the following. Replace the **name** and the **URL** with your feed.
-
- ```yaml
- apiVersion: projectcalico.org/v3
- kind: GlobalThreatFeed
- metadata:
- name: my-threat-feed
- spec:
- content: IPSet
- mode: Enabled
- description: 'This is my threat feed'
- feedType: Custom
- pull:
- http:
- url: https://my.threatfeed.com/deny-list
- ```
-
-2. Add the global threat feed to the cluster.
-
- ```bash
-   kubectl apply -f <your_threat_feed_filename>
- ```
-
-> Go to the Security Events page to view events that are generated when an IP is displayed on the threat feed list. For more information, see [Manage alerts](../visibility/alerts.mdx).
-
-### Block traffic to a cluster
-
-Create a new threat feed, or edit an existing one, to include the `globalNetworkSet` stanza, setting the labels you want to use to represent the deny-listed IPs. This stanza instructs $[prodname] to search for flows to and from the listed IP addresses and maintain a GlobalNetworkSet containing those addresses.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalThreatFeed
-metadata:
- name: sample-global-threat-feed
-spec:
- content: IPSet
- mode: Enabled
- description: 'This is the sample global threat feed'
- feedType: Custom
- pull:
- http:
- url: https://an.example.threat.feed/deny-list
- globalNetworkSet:
- labels:
- security-action: block
-```
-
-1. Add the global threat feed to the cluster.
-
- ```bash
-   kubectl apply -f <your_threat_feed_filename>
- ```
-
-2. Create a GlobalNetworkPolicy that blocks traffic based on the threat feed, by selecting sources or destinations using the labels you assigned in step 1. For example, the following GlobalNetworkPolicy blocks all traffic coming into the cluster if it came from any of the suspicious IPs.
-
- ```yaml
- apiVersion: projectcalico.org/v3
- kind: GlobalNetworkPolicy
- metadata:
- name: default.blockthreats
- spec:
- tier: default
- selector: all()
- types:
- - Ingress
- ingress:
- - action: Deny
- source:
- selector: security-action == 'block'
- ```
-
-3. Add the global network policy to the cluster.
-
-   ```bash
-   kubectl apply -f <your_network_policy_filename>
-   ```
-
-## Tutorial
-
-In this tutorial, we’ll walk through setting up a threat feed to search for connections to suspicious IPs. Then, we’ll use the same threat feed to block traffic to those IPs.
-
-We will use the free [FEODO botnet tracker](https://feodotracker.abuse.ch/) from abuse.ch that lists IP addresses associated with command and control servers. But the steps are the same for your commercial or internal threat feeds.
-
-If you haven’t already adjusted your [flow log aggregation](#before-you-begin), we recommend doing so before you start.
-
-### Configure the threat feed
-
-1. Create a file called feodo-tracker.yaml with the following contents:
-
- ```yaml
- apiVersion: projectcalico.org/v3
- kind: GlobalThreatFeed
- metadata:
- name: feodo-tracker
- spec:
- content: IPSet
- mode: Enabled
- description: 'This is the feodo-tracker threat feed'
- feedType: Custom
- pull:
- http:
- url: https://feodotracker.abuse.ch/downloads/ipblocklist.txt
- ```
-
- This pulls updates using the default period of once per day. See the [Global Resource Threat Feed API](../reference/resources/globalthreatfeed.mdx) for all configuration options.
-
-2. Add the feed to your cluster.
-
- ```bash
- kubectl apply -f feodo-tracker.yaml
- ```
-
-### Check search results
-
-Open $[prodname] Manager, and navigate to the “Security Events” page. If any of your pods have been communicating with the IP addresses in the FEODO tracker feed, you will see the results listed on this page. It is normal to not see any events listed on this page.
-
-### Block pods from contacting IPs
-
-If you have high confidence in the IP addresses listed as malicious in a threat feed, you can take stronger action than just searching for connections after the fact. For example, the FEODO tracker lists IP addresses used by command and control servers for botnets. We can configure $[prodname] to block all egress traffic to addresses on this list.
-
-It is strongly recommended that you assess the contents of a threat feed for false positives before using it as a deny-list, and that you apply it to a test subset of your workloads before rolling it out application-wide or cluster-wide. Failure to do so could cause legitimate application traffic to be blocked and could lead to an outage in your application.
-
-In this demo, we will apply the policy only to a test workload (so we do not impact other traffic).
-
-1. Create a file called **tf-ubuntu.yaml** with the following contents:
-
- ```yaml
- apiVersion: v1
- kind: Pod
- metadata:
- labels:
- docs.tigera.io-tutorial: threat-feed
- name: tf-ubuntu
- spec:
- nodeSelector:
- kubernetes.io/os: linux
- containers:
- - command:
- - sleep
- - '3600'
- image: ubuntu
- name: test
- ```
-
-2. Apply the pod configuration.
-
- ```bash
- kubectl apply -f tf-ubuntu.yaml
- ```
-
-3. Edit the feodo-tracker.yaml to include a globalNetworkSet stanza:
-
- ```yaml
- apiVersion: projectcalico.org/v3
- kind: GlobalThreatFeed
- metadata:
- name: feodo-tracker
- spec:
- content: IPSet
- mode: Enabled
- description: 'This is the feodo-tracker threat feed'
- feedType: Custom
- pull:
- http:
- url: https://feodotracker.abuse.ch/downloads/ipblocklist.txt
- globalNetworkSet:
- labels:
- docs.tigera.io-threat-feed: feodo
- ```
-
-4. Reapply the new YAML.
-
- ```bash
- kubectl apply -f feodo-tracker.yaml
- ```
-
-5. Verify that the GlobalNetworkSet is created.
-
- ```bash
- kubectl get globalnetworksets threatfeed.feodo-tracker -o yaml
- ```
-
-### Apply global network policy
-
-We will now apply a GlobalNetworkPolicy that blocks the test workload from connecting to any IPs in the threat feed.
-
-1. Create a file called block-feodo.yaml with the following contents:
-
- ```yaml
- apiVersion: projectcalico.org/v3
- kind: GlobalNetworkPolicy
- metadata:
- name: default.block-feodo
- spec:
- tier: default
- selector: docs.tigera.io-tutorial == 'threat-feed'
- types:
- - Egress
- egress:
- - action: Deny
- destination:
- selector: docs.tigera.io-threat-feed == 'feodo'
- - action: Allow
- ```
-
-2. Apply this policy to the cluster
-
- ```bash
- kubectl apply -f block-feodo.yaml
- ```
-
-### Verify policy on test workload
-
-We will verify the policy from the test workload that we created earlier.
-
-1. Get a shell in the pod by running the following command:
-
- ```bash
- kubectl exec -it tf-ubuntu -- bash
- ```
-
- You should get a prompt inside the pod.
-
-2. Install the ping command.
-
- ```bash
- apt update && apt install iputils-ping
- ```
-
-3. Ping a known safe IP (like 8.8.8.8, Google’s public DNS server).
-
- ```bash
- ping 8.8.8.8
- ```
-
-4. Open the [FEODO tracker list](https://feodotracker.abuse.ch/downloads/ipblocklist.txt) and choose an IP on the list to ping.
- You should not get connectivity, and the pings will show up as denied traffic in the flow logs.
-
-## Additional resources
-
-See [GlobalThreatFeed](../reference/resources/globalthreatfeed.mdx) resource definition for all configuration options.
diff --git a/calico-cloud_versioned_docs/version-20-1/threat/tor-vpn-feed-and-dashboard.mdx b/calico-cloud_versioned_docs/version-20-1/threat/tor-vpn-feed-and-dashboard.mdx
deleted file mode 100644
index 081b96ab19..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/threat/tor-vpn-feed-and-dashboard.mdx
+++ /dev/null
@@ -1,69 +0,0 @@
----
-description: Detect and analyze malicious anonymization activity using Tor-VPN feeds.
----
-
-# Anonymization attacks
-
-## Big picture
-
-Detect and analyze malicious anonymization activity using Tor-VPN feeds.
-
-## Value
-
-**Tor and VPN infrastructure** is used to enable anonymous communication, where an attacker can leverage anonymity to scan, attack, or compromise a target. It’s hard for network security teams to track malicious actors who use such anonymization tools. This is where **Tor and VPN feeds** come into play: the feeds track all the Tor bulk exit nodes, as well as most of the anonymizing VPN infrastructure on the internet. **The Tor-VPN dashboard** helps network security teams monitor and respond to any detected activity, giving them a clusterwide view and granular control over logs, which is critical for stopping a possible attack in its early stages.
-
-## Concepts
-
-### About Tor and VPN threats
-
-**Tor** is a popular anonymization network on the internet. It is also popular among malicious actors, hacktivist groups, and criminal enterprises because the infrastructure hides the real identity of an attacker carrying out malicious activities. Tor has historically been investigated by state-level intelligence agencies from the US and UK for criminal activities such as the Silk Road marketplace and the Mirai botnet C&C, although it is not possible to completely de-anonymize an attacker. The **Tor bulk exit feed** came into existence to track all the Tor exit IPs on the internet and identify attackers using the Tor infrastructure.
-Over the years, many Tor flaws became public, and attackers evolved to use the Tor network with additional VPN layers. Many individual VPN providers offer anonymizing infrastructure, and attackers can combine these newer VPN providers with existing options like Tor to preserve anonymity. To help security teams, the **X4B VPN feed** detects all the major VPN providers on the internet.
-
-### Tor-VPN feed types
-
-**Tor Bulk Exit feed**
-The Tor Bulk Exit feed lists the Tor exit nodes available on the internet that are used by the Tor network. The list is continuously updated and maintained by the Tor project. An attacker using the Tor network is likely to use one of the bulk exit nodes to connect to your infrastructure. Network security teams can detect such activity with the Tor Bulk Exit feed and investigate as required.
-
-**X4B VPN feed**
-In recent times it has become a trend to use multiple anonymization networks to hide the real attacker identity. There are many lists of open proxies and Tor nodes on the web, but surprisingly few usable ones dedicated to VPN providers and datacenters. The X4B VPN feed combines known VPN netblocks and ASNs owned by datacenters and VPN providers. It does not cover every VPN, but it should include the vast majority of common ones.
-
-### The $[prodname] Tor-VPN dashboard
-
-The Tor-VPN dashboard helps network security teams monitor and respond to any activity detected by the Tor and VPN feeds. It provides cluster context for the detection and shows multiple artifacts (for example, flow logs, filtering controls, a tag cloud, and a line graph) to analyze the activity and respond faster.
-To access the Tor-VPN dashboard:
-
-- Log in to $[prodname] Manager, and go to **kibana**, select **dashboard**, and select **Tor-VPN Dashboard**.
-
-## Before you begin...
-
-### Required
-
-Privileges to manage GlobalThreatFeed (that is, the cluster role `intrusion-detection-controller`).
-
-### Recommended
-
-We recommend that you turn down the aggregation of flow logs sent to Elasticsearch for configuring threat feeds. If you do not adjust flow logs, $[prodname] aggregates over the external IP addresses for allowed traffic, and threat feed searches will not provide useful results (unless the traffic is denied by policy). Go to: [FelixConfiguration](../reference/resources/felixconfig.mdx) and set the field **flowLogsFileAggregationKindForAllowed** to **1**.
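-
-You can adjust the flow logs by running the following command:
-
-```bash
-kubectl patch felixconfiguration default --type='merge' -p '{"spec":{"flowLogsFileAggregationKindForAllowed":1}}'
-```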
-
-## How to
-
-This section describes how to add the Tor and VPN feeds to $[prodname]. The installation process is straightforward:
-
-1. Add threat feed to the cluster.
- For VPN Feed,
- ```shell
- kubectl apply -f $[filesUrl_CE]/manifests/threatdef/vpn-feed.yaml
- ```
- For Tor Bulk Exit Feed,
- ```shell
- kubectl apply -f $[filesUrl_CE]/manifests/threatdef/tor-exit-feed.yaml
- ```
-2. Monitor the dashboard for any malicious activity. In $[prodname] Manager, go to **kibana**, select **Dashboard**, and then select **Tor-VPN Dashboard**.
-3. Additionally, you can check the feeds using the following command:
- ```shell
- kubectl get globalthreatfeeds
- ```
-
-## Additional resources
-
-- See [GlobalThreatFeed](../reference/resources/globalthreatfeed.mdx) resource definition for all configuration options.
-- See an example of how to [trace and block suspicious IPs](suspicious-ips.mdx).
diff --git a/calico-cloud_versioned_docs/version-20-1/threat/web-application-firewall.mdx b/calico-cloud_versioned_docs/version-20-1/threat/web-application-firewall.mdx
deleted file mode 100644
index 93b434f0a8..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/threat/web-application-firewall.mdx
+++ /dev/null
@@ -1,364 +0,0 @@
----
-description: Configure Calico to use with Layer 7 Web Application Firewall.
----
-
-# Workload-based Web Application Firewall (WAF)
-
-:::note
-
-This feature is tech preview. Tech preview features may be subject to significant changes before they become GA.
-
-:::
-
-## Big picture
-
-Protect cloud-native applications from application layer attacks with $[prodname] Workload-based Web Application Firewall (WAF).
-
-## Value
-
-Our workload-centric Web Application Firewall (WAF) protects your workloads from a variety of application layer attacks originating from within your cluster such as [SQL injection](https://owasp.org/www-community/attacks/SQL_Injection). Given that attacks on apps are the [leading cause of breaches](https://www.f5.com/labs/articles/threat-intelligence/application-protection-report-2019--episode-2--2018-breach-trend), you need to secure the HTTP traffic inside your cluster.
-
-Historically, web application firewalls (WAFs) were deployed at the edge of your cluster to filter incoming traffic. Our workload-based WAF solution takes a unique, cloud-native approach to web security by allowing you to implement zero-trust rules for workloads inside your cluster.
-
-## Concepts
-
-### About $[prodname] WAF
-
-WAF is deployed in your cluster along with an Envoy DaemonSet. $[prodname] proxies selected service traffic through Envoy, checking HTTP requests using the industry-standard
-[ModSecurity](https://owasp.org/www-project-modsecurity-core-rule-set/) rules with the OWASP Core Rule Set `v4.0.0-rc2`, with some modifications for Kubernetes workloads.
-
-
-You simply enable WAF in Manager UI and select the services that you want WAF to protect. By default, WAF is set to `DetectionOnly`, so no traffic is denied until you are ready to turn on blocking mode.
-
-Every request that WAF finds an issue with results in a Security Event being created for [you to review in the UI](#view-waf-events), regardless of whether the traffic was allowed or denied. This can greatly help with tuning later.
-
-#### How WAF determines if a request should be allowed or denied
-
-If you configure WAF in blocking mode, WAF will use something called [anomaly scoring mode](https://coreruleset.org/docs/concepts/anomaly_scoring/) to determine if a request is allowed with `200 OK` or denied `403 Forbidden`.
-
-This works by matching a single HTTP request against all the configured WAF rules. Each rule has a score, and WAF adds the scores of all matched rules together and compares the total to the overall anomaly threshold score (100 by default). If the total is under the threshold, the request is allowed; if it is over the threshold, the request is denied. Our WAF starts in detection-only mode with a high default scoring threshold, so it is safe to turn on and then [fine-tune the WAF](#manage-waf-configuration) for your specific needs in your cluster.
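-
-For example, with the default threshold of 100, a request that matches two rules scoring 5 each (a total of 10) is allowed, while a request whose matched rules add up to 105 is denied when blocking mode is on.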
-
-## Before you begin
-
-**Not supported**
-- GKE
-
-**Limitations**
-
-WAF cannot be used with:
- - Host-networked client pods
- - TLS traffic
- - [LoadBalancer services](https://Kubernetes.io/docs/concepts/services-networking/service/#loadbalancer)
- - Egress gateways
- - WireGuard on AKS or EKS (unless you apply a specific kernel variable). Contact Support for help.
-
-:::note
-When selecting and deselecting traffic for WAF, active connections may be disrupted.
-:::
-
-:::caution
-
-Enabling WAF for certain system services may result in an undesired cluster state.
-- Do not enable WAF for system services with the following prefixes:
-
- - `tigera-*`
- - `calico-*`
- - `kube-system`
- - `openshift-*`
-
-- Do not enable WAF for system services with the following combination of name and namespaces:
- - name: `Kubernetes`, namespace: `default`
- - name: `openshift`, namespace: `default`
- - name: `gatekeeper-webhook-service`, namespace: `gatekeeper-system`
-
-The rules are not overridden during upgrade; you will have to manage deploying updates to the OWASP Core Rule Set to the cluster over time.
-
-If you modify the rules, we recommend keeping them in Git or a similar source control system.
-
-:::
-
-## How to
-
-- [Enable WAF on your cluster](#enable-waf)
-- [Apply WAF to your services](#apply-waf-to-services)
-- [View WAF events](#view-waf-events)
-- [Manage your WAF](#manage-waf-configuration)
-- [Disable WAF feature from your cluster](#disable-waf-feature-from-your-cluster)
-
-### Enable WAF
-
-#### (Optional) Deploy a sample application
-If you don’t have an application to test WAF with or don’t want to use it right away against your own application,
-we recommend that you install the [GoogleCloudPlatform/microservices-demo app](https://github.com/GoogleCloudPlatform/microservices-demo):
-
-```bash
-kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/main/release/kubernetes-manifests.yaml
-```
-
-#### Enable WAF using the CLI
-
-##### Enable the Policy Sync API in Felix
-To enable WAF using the CLI, you must enable the Policy Sync API in Felix. To do this cluster-wide,
-modify the `default` FelixConfiguration to set the field `policySyncPathPrefix` to `/var/run/nodeagent`:
-
-```bash
-kubectl patch felixconfiguration default --type='merge' -p '{"spec":{"policySyncPathPrefix":"/var/run/nodeagent"}}'
-```
-
-##### Enable WAF using kubectl
-
-In the ApplicationLayer custom resource, named `tigera-secure`, set the `webApplicationFirewall` field to `Enabled`.
-
-```bash
-kubectl apply -f - <<EOF
-apiVersion: operator.tigera.io/v1
-kind: ApplicationLayer
-metadata:
-  name: tigera-secure
-spec:
-  webApplicationFirewall: Enabled
-EOF
-```
-
-### Apply WAF to services
-
-Now that you have deployed WAF in your cluster, you can select the services you want to protect from application layer attacks.
-
-If you have deployed the sample application, you can apply WAF on a service associated with your app, as follows:
-```bash
-kubectl annotate svc frontend -n default --overwrite projectcalico.org/l7-logging=true
-```
-Alternatively, you can use the Manager UI to apply WAF to the `frontend` service.
-
-In this example, we applied WAF to the `frontend` service, which means that every request that goes through the `frontend` service is inspected.
-However, the traffic is not blocked, because the WAF rule engine is set to `DetectionOnly` by default. You can adjust rules and start blocking traffic by [fine-tuning your WAF](#manage-waf-configuration).
-
-To apply WAF to a service of your own application, use the Manager UI:
-
-1. On the Manager UI, click **Threat Defense**, **Web Application Firewall**.
-2. Select the services you want WAF to inspect, and then click **Confirm Selections**.
-
-
-
-3. On the **Web Application Firewall** page, you can verify that WAF is enabled for a service by locating the service and checking that the **Status** column says **Enabled**.
-
-4. To make further changes to a service, click **Actions**, and then **Enable** or **Disable**.
-
-You have now applied WAF rule sets to your own services. Note that traffic that goes through the selected services generates alerts, but is not blocked by default.
-
-#### Trigger a WAF event
-If you would like to trigger a WAF event for testing purposes, you can simulate an SQL injection attack inside your cluster by crafting an HTTP request with a query string that WAF will detect as an SQL injection attempt.
-The query string in this example has some SQL syntax embedded in the text. This is harmless and for demo purposes, but WAF will detect this pattern and create an event for this HTTP request.
-
-Run a simple curl command from any pod inside your cluster, targeting a service you have selected for WAF protection. For example, with the demo app above, we could send a simple HTTP request to the `cartservice`.
-```bash
-curl "http://cartservice/cart?artist=0+div+1+union%23foo*%2F*bar%0D%0Aselect%23foo%0D%0A1%2C2%2Ccurrent_user"
-```
-
-### Manage WAF configuration
-
-Reviewing the default rule set config:
-
-```bash
-Include @coraza.conf-recommended
-Include @crs-setup.conf.example
-Include @owasp_crs/*.conf
-
-SecRuleEngine DetectionOnly
-```
-
-The configuration file starts with importing the appropriate rule set config. We use Coraza WAF's recommended [Core Rule Set setup](https://coraza.io/docs/tutorials/coreruleset/) files:
-
-1. Coraza recommended [configuration](https://github.com/corazawaf/coraza/blob/main/coraza.conf-recommended)
-1. The rest of the [coreruleset](https://github.com/coreruleset/coreruleset) files, currently [v4.0.0-rc2](https://github.com/coreruleset/coreruleset/tree/v4.0.0-rc2)
-
-These files can be customized if desired. Add all your customizations directly under `tigera.conf`:
-
-```bash
-kubectl edit cm -n tigera-operator modsecurity-ruleset
-```
-
-After you edit this ConfigMap successfully, the `modsecurity-ruleset` ConfigMap is replaced in the `tigera-operator` namespace,
-which then triggers a rolling restart of your L7 pods. This means that HTTP connections going through the L7 pods at the time of pod termination will be reset (RST).
-
-:::note
-
-It is important to adhere to the [Core Rule Set documentation](https://coreruleset.org/docs) on how to edit the behaviour of
- your WAF. A good place to begin is the [Installing Core Rule Set](https://coreruleset.org/docs/deployment/install/) guide.
-
-In many scenarios, the default example CRS configuration will be a good enough starting point. It is recommended to review the example configuration file before
-you deploy it to make sure it’s right for your environment.
-:::
-
-#### Customization options
-
-##### Set WAF to block traffic
-By default, WAF does not block a request even if it has matching rule violations, because the rule engine is set to `DetectionOnly`. You can configure WAF to block traffic instead, returning an `HTTP 403 Forbidden` response status code when the combined matched rule scores exceed a certain threshold.
-
-1. Edit the configmap:
- ```bash
- kubectl edit cm -n tigera-operator modsecurity-ruleset
- ```
-2. Look for `SecRuleEngine DetectionOnly` and change it to `SecRuleEngine On`.
-3. Save your changes. This triggers a rolling update of the L7 pods. The resulting rule set configuration is sketched below.
-
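-For reference, here is a minimal sketch of the rule set configuration after this change (the same default configuration shown earlier, with only the engine directive changed):
-
-```bash
-Include @coraza.conf-recommended
-Include @crs-setup.conf.example
-Include @owasp_crs/*.conf
-
-# Blocking mode: deny requests whose combined rule scores exceed the threshold
-SecRuleEngine On
-```
-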
-| Action        | Description                                                                           | Disruptive? |
-| ------------- | ------------------------------------------------------------------------------------- | ----------- |
-| DetectionOnly | Traffic is not denied or dropped. $[prodname] logs events.                             | No          |
-| On            | Denies HTTP traffic. $[prodname] logs the event in Security Events.                    | Yes         |
-| Off           | Be cautious about using this option. Traffic is not denied, and there are no events.   | No          |
-
-
-##### Other basic customizations
-
-For basic customizations, it's best to add them after all the includes in `tigera.conf`. In fact, this is why the `SecRuleEngine` directive and the rest of [our customizations](https://github.com/tigera/operator/blob/master/pkg/render/applicationlayer/embed/coreruleset/tigera.conf#L8-L17) are situated there.
-
-An example is adding a sampling mode. For that, the `tigera.conf` will look like this:
-
-```bash
-# Core Rule Set activation
-Include @coraza.conf-recommended
-Include @crs-setup.conf.example
-Include @owasp_crs/*.conf
-
-SecRuleEngine DetectionOnly
-
-# --- all customizations appear below this line, unless they need a specific loading order like plugins ---
-
-# --- Add sampling mode
-# Read about sampling mode here https://coreruleset.org/docs/concepts/sampling_mode/
-SecAction "id:900400,\
- phase:1,\
- pass,\
- nolog,\
- setvar:tx.sampling_percentage=50"
-```
-
-You can also disable certain rules here:
-
-```bash
-# --- disable 'Request content type is not allowed by policy'
-SecRuleRemoveById 920420
-```
-
-You can also change the anomaly scoring thresholds:
-
-```bash
- SecAction \
- "id:900110,\
- phase:1,\
- nolog,\
- pass,\
- t:none,\
- setvar:tx.inbound_anomaly_score_threshold=25,\
- setvar:tx.outbound_anomaly_score_threshold=20"
-```
-
-Or even change rule action parameters or behavior. For example:
-
-```bash
-# --- allow additional content types for request bodies
-SecAction \
- "id:900220,\
- phase:1,\
- nolog,\
- pass,\
- t:none,\
- setvar:'tx.allowed_request_content_type=|application/x-www-form-urlencoded| |multipart/form-data| |multipart/related| |text/xml| |application/xml| |application/soap+xml| |application/json| |application/cloudevents+json| |application/cloudevents-batch+json| |application/grpc|'"
-```
-
-
-##### Using Core Rule Set plugins
-
-Let's go with an example plugin: [Wordpress Rule Exclusions](https://github.com/coreruleset/wordpress-rule-exclusions-plugin/).
-
-Plugin files are the following:
-
-```
-wordpress-rule-exclusions-before.conf
-wordpress-rule-exclusions-config.conf
-```
-
-To include these files properly, structure your work directory like so:
-
-```
-tigera.conf
-wordpress-rule-exclusions-before.conf
-wordpress-rule-exclusions-config.conf
-```
-
-and then `tigera.conf` contents should be:
-
-```bash
-Include @coraza.conf-recommended
-
-Include /etc/modsecurity-ruleset/wordpress-rule-exclusions-config.conf
-Include /etc/modsecurity-ruleset/wordpress-rule-exclusions-before.conf
-
-Include @crs-setup.conf.example
-Include @owasp_crs/*.conf
-
-# if your plugin has an -after.conf, include them here
-# but wordpress rule exclusions doesn't so we're adding a comment placeholder
-# Include /etc/modsecurity-ruleset/wordpress-rule-exclusions-after.conf
-
-SecRuleEngine DetectionOnly
-```
-
-Then create and apply the configmap:
-
-```bash
-## create the configuration map itself
-kubectl create cm --dry-run=client \
- --from-file=tigera.conf \
- --from-file=wordpress-rule-exclusions-config.conf \
- --from-file=wordpress-rule-exclusions-before.conf \
- -n tigera-operator modsecurity-ruleset -o yaml > ruleset.configmap.yaml
-
-## replace active configmap
-kubectl replace -f ruleset.configmap.yaml
-```
-
-Read more about the order of execution for plugins here: https://coreruleset.org/docs/concepts/plugins/
-
-### View WAF events
-
-#### Security Events
-
-To view WAF events in a centralized security events dashboard, go to: **Threat defense**, **Security Events**. For help, see [Security Event Management](../threat/security-event-management).
-
-#### Kibana
-
-To view WAF events in Kibana, select the `tigera_secure_ee_waf*` index pattern.
-
-#### Disable WAF for a service
-
-To disable WAF on a service, use the Actions menu on the WAF board, or use the following command:
-
-```bash
-kubectl annotate svc <service-name> -n <namespace> projectcalico.org/l7-logging-
-```
-
-### Disable WAF feature from your cluster
-
-To disable WAF, update the [ApplicationLayer](../reference/installation/api#operator.tigera.io/v1.ApplicationLayer) resource to include the `webApplicationFirewall` field, and ensure it is set to `Disabled`.
-
-Example:
-
-```yaml
-apiVersion: operator.tigera.io/v1
-kind: ApplicationLayer
-metadata:
- name: tigera-secure
-spec:
- webApplicationFirewall: Disabled
-```
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/applications/egress-controls.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/applications/egress-controls.mdx
deleted file mode 100644
index 7a40244abf..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/applications/egress-controls.mdx
+++ /dev/null
@@ -1,206 +0,0 @@
----
-description: Learn egress access controls using domains and IP addresses.
----
-
-# Secure egress access from workloads to destinations outside the cluster
-
-In this article we'll show you how to restrict egress access for your application or microservice pods to external endpoints outside of the cluster.
-
-- [Use a network set with a network policy](#use-a-network-set-with-a-network-policy)
-- [Use wildcards in domain names](#use-wildcards-in-domain-names)
-- [Use a global network set with a global network policy](#use-a-global-network-set-with-a-global-network-policy)
-
-## What are network sets?
-
-$[prodname] NetworkSet and GlobalNetworkSet resources are used to define endpoints external to the Kubernetes cluster. The scope of a NetworkSet is the **namespace where they are defined**; the scope of a GlobalNetworkSet is **cluster-wide**.
-
-NetworkSet/GlobalNetworkSet come in two types:
-
-- **Network-based** - defines IP address-based external endpoints
-- **Domain-based** - defines domain-based external endpoints. Using domains in policy is often called DNS policy.
-
-## Use a network set with a network policy
-
-In this example, we have a microservice that requires egress access to two external endpoints: a repo and a partner.
-
-![egress-example](/img/calico-cloud/egress-example.png)
-
-`svc3` pods need egress access to:
-
-- A repo named, `app2-repo` at domain `app2-repo.example.com`, port 443
-- A partner named, `app2-partners` at endpoint `10.10.10.10/32`, ports 1010 and 53
-
-First, we define a domain-based NetworkSet. Using `allowedEgressDomains` we can specify the trusted repo by its domain, `app2-repo.example.com`.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkSet
-metadata:
- name: app2-repo
- namespace: app2-ns
- labels:
- trusted-ep: 'app2-repo'
-spec:
- allowedEgressDomains:
- - 'app2-repo.example.com'
-```
-
-Next, we create a network-based NetworkSet that specifies the IP address of the trusted partner, `10.10.10.10/32`.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkSet
-metadata:
- name: app2-partners
- namespace: app2-ns
- labels:
- trusted-ep: 'app2-partners'
-spec:
- nets:
- - 10.10.10.10/32
-```
-
-Now we can reference these NetworkSets by their labels in NetworkPolicy. We use a selector to specify the service, `app == "app2" && svc == "svc3"`, and then selectors to allow egress to our two trusted endpoints, `selector: trusted-ep == "app2-repo"` at port 443, and `selector: trusted-ep == "app2-partners"` at ports 1010 and 53.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: application.app2-svc3-egress
- namespace: app2-ns
-spec:
- tier: application
- selector: (app == "app2" && svc == "svc3")
- egress:
- - action: Allow
- source: {}
- destination:
- selector: trusted-ep == "app2-repo"
- ports:
- - '443'
- protocol: TCP
- - action: Allow
- source: {}
- destination:
- selector: trusted-ep == "app2-partners"
- ports:
- - '1010'
- protocol: TCP
- - action: Allow
- protocol: UDP
- source: {}
- destination:
- ports:
- - '53'
- types:
- - Egress
-```
-
-## Use wildcards in domain names
-
-In this example, we create another namespaced NetworkPolicy with egress rules with `action: Allow` and a `destination.domains` field specifying the domain names to which egress traffic is allowed.
-
-The first egress rule allows DNS traffic using UDP over port 53, and the second rule allows connections outside the cluster to domains `api.alice.com` and `*.example.com` (which matches any subdomain of `example.com`, such as `bob.example.com`).
-
-Note that, as a namespaced resource, our NetworkPolicy can only grant egress access to the specified domains for workload endpoints in the `rollout-test` namespace.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: allow-egress-to-domains
- namespace: rollout-test
-spec:
- order: 1
- selector: my-pod-label == 'my-value'
- types:
- - Egress
- egress:
- - action: Allow
- protocol: UDP
- destination:
- ports:
- - 53
- - dns
- - action: Allow
- destination:
- domains:
- - api.alice.com
- - '*.example.com'
-```
-
-## Use a global network set with a global network policy
-
-We recommend using a GlobalNetworkSet when the same set of domains needs to be referenced in multiple policies, or when you want the allowed destinations to be a mix of domains and IPs from global network sets, or IPs from workload endpoints and host endpoints. A single destination selector in a global network policy can then potentially match all of these resources.
-
-In the following example, the allowed egress domains (`api.alice.com` and `*.example.com`) are specified in the GlobalNetworkSet.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkSet
-metadata:
- name: allowed-domains-1
- labels:
- color: red
-spec:
- allowedEgressDomains:
- - api.alice.com
- - '*.example.com'
-```
-
-Then, we reference the global network set in a GlobalNetworkPolicy using a destination label selector.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: allow-egress-to-domain
-spec:
- order: 1
- selector: my-pod-label == 'my-value'
- types:
- - Egress
- egress:
- - action: Allow
- destination:
- selector: color == 'red'
-```
-
-## For reference...
-
-### Allowed egress domains
-
-Using domain names in policy rules is limited to only egress allow rules. $[prodname] allows connections only to IP addresses returned from DNS lookups to trusted DNS servers. The supported DNS types are: A, AAAA, and CNAME records. The domain name must be an exact match; for example, **google.com** is treated as distinct from **www.google.com**.
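-
-As an illustrative sketch (hypothetical policy name, namespace, and selector), a rule that should allow both the apex domain and the `www` host must list both explicitly:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
-  name: allow-google-exact
-  namespace: web
-spec:
-  selector: app == "crawler"
-  types:
-    - Egress
-  egress:
-    - action: Allow
-      destination:
-        domains:
-          # google.com does not match www.google.com, so both entries are needed.
-          - google.com
-          - www.google.com
-```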
-
-**Note:** Kubernetes labels provide a similar convenience for services within the cluster. $[prodname] does not support using domain names for services within the cluster. Use Kubernetes labels for services within the cluster.
-
-### Domain name matching
-
-When a configured domain name has no wildcard (`*`), it matches exactly that domain name. For example:
-
-- `microsoft.com`
-- `tigera.io`
-
-With a single asterisk in any part of the domain name, it matches 1 or more path components at that position. For example:
-
-- `*.google.com` matches `www.google.com` and `www.ipv6.google.com`, but not `google.com`
-- `www.*.com` matches `www.sun.com` and `www.apple.com`, but not `www.com`
-- `update.*.mycompany.com` matches `update.tools.mycompany.com`, `update.secure.suite.mycompany.com`, and so on
-
-**Not** supported are:
-
-- Multiple wildcards in the same domain, for example: `*.*.mycompany.com`
-- Asterisks that are not the entire component, for example: `www.g*.com`
-- More general wildcards, such as regular expressions
-
-### Workload and host endpoints
-
-Policy with domain names can be enforced on workload or host endpoints. When a policy with domain names applies to a workload endpoint, it allows that workload to connect out to the specified domains. When policy with domain names applies to a host endpoint, it allows clients directly on the relevant host (including any host-networked workloads) to connect out to the specified domains.
-
-### Trusted DNS servers
-
-$[prodname] trusts DNS information only from its list of DNS trusted servers. Using trusted DNS servers to back domain names in policy prevents a malicious workload from using IPs returned by a fake DNS server to hijack domain names in policy rules.
-
-By default, $[prodname] trusts the Kubernetes cluster’s DNS service (kube-dns or CoreDNS). For workload endpoints, these out-of-the-box defaults work with standard Kubernetes installs, so normally you won’t change them. For host endpoints you will need to add the IP addresses that the cluster nodes use for DNS resolution.
-
-To change the default DNS trusted servers, use the DNSTrustedServers parameter in Felix.
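-
-The following is a minimal sketch only (the second entry is a placeholder for the resolver IP that your nodes actually use; see the Felix configuration reference for the exact field details):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: FelixConfiguration
-metadata:
-  name: default
-spec:
-  # Keep the cluster DNS service as trusted, and add the node-level resolver
-  # so that domain-based policy also works for host endpoints.
-  dnsTrustedServers:
-    - k8s-service:kube-dns
-    - 10.0.0.53
-```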
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/applications/index.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/applications/index.mdx
deleted file mode 100644
index 4853e54751..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/applications/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Learn how to secure ingress and egress access to/from applications and microservices.
-hide_table_of_contents: true
----
-
-# Secure ingress and egress for applications
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/applications/ingress-microservices.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/applications/ingress-microservices.mdx
deleted file mode 100644
index b2dcc0e7ee..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/applications/ingress-microservices.mdx
+++ /dev/null
@@ -1,275 +0,0 @@
----
-description: Create policy to secure ingress access to your microservice or application.
----
-
-# Secure ingress access to a microservice or application
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-In this article you will learn how to create $[prodname] policies that securely allow ingress access to a microservice or application.
-
-
-
-
-## Secure ingress access to a microservice
-
-In this example, we have a microservice with four services (svc1, svc2, svc3, and svc4). Let's assume the following requirements:
-
-- Everyone internal to the organization can access `svc1` pods at TCP port 10001
-- `svc1` pods can access `svc2` at TCP port 10002
-- `svc1` pods can access `svc3` at TCP port 10003
-- `svc2` pods can access `svc4` at TCP ports 10004, 10005
-
-Also, because `svc1` pods pass through a trusted load balancer, we want to allow ingress traffic from the load balancer.
-
-![ingress-microservices](/img/calico-cloud/ingress-microservices.png)
-
-Let's start with securing ingress access to the trusted load balancer. We will use the $[prodname] **GlobalNetworkSet** resource with cluster-wide scope to define the load balancer IP addresses; this allows the same trusted external endpoints to be used across multiple namespaces.
-
-**GlobalNetworkSet for load balancer**
-
-This GlobalNetworkSet contains the IP addresses for the trusted load balancer.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkSet
-metadata:
- name: load-balancer
- labels:
- trusted-ep: "load-balancer"
-spec:
- nets:
- # Modify the ip addresses to refer to the ip addresses of load-balancers in your environment
- - 10.0.0.1/32
- - 10.0.0.2/32
-```
-
-Next, we will create four network policies.
-
-**NetworkPolicy 1**
-
-This policy allows ingress from the trusted load balancer to pods in `svc1`.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: application.app2-svc1
- namespace: app2-ns
-spec:
- tier: application
- order: 500
- selector: (app == "app2" && svc == "svc1")
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: trusted-ep == "load-balancer"
- destination:
- ports:
- - '10001'
- types:
- - Ingress
-```
-
-**NetworkPolicy 2**
-
-This policy allows ingress access from `svc1` pods to `svc2` pods.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: application.app2-svc2
- namespace: app2-ns
-spec:
- tier: application
- order: 600
- selector: (app == "app2" && svc == "svc2")
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: svc == "svc1"
- destination:
- ports:
- - '10002'
- types:
- - Ingress
-```
-
-**NetworkPolicy 3**
-
-This policy allows ingress access from `svc1` pods to `svc3` pods.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: application.app2-svc3
- namespace: app2-ns
-spec:
- tier: application
- order: 700
- selector: (app == "app2" && svc == "svc3")
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: svc == "svc1"
- destination:
- ports:
- - '10003'
- types:
- - Ingress
-```
-
-**NetworkPolicy 4**
-
-This policy allows ingress access from `svc2` pods to `svc4` pods.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: application.app2-svc4
- namespace: app2-ns
-spec:
- tier: application
- order: 800
- selector: (app == "app2" && svc == "svc4")
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: svc == "svc2"
- destination:
- ports:
- - '10003'
- - '10004'
- types:
- - Ingress
-```
-
-
-
-
-## Secure ingress access to an application
-
-In this example, we have an application with frontend, backend, and a database. Let's assume the following requirements:
-
-- Everyone internal to the organization can access the `frontend` at TCP port 10001
-- `frontend` can access the `backend` at TCP port 10002
-- `backend` can access the `database` at TCP ports 10003, 10004
-
-Also, because frontend pods pass through a trusted load balancer, we want to allow ingress traffic from the load balancer.
-
-![ingress-application](/img/calico-cloud/ingress-application.png)
-
-Let's start with securing ingress access to the trusted load balancer. We will use the $[prodname] **GlobalNetworkSet** resource with cluster-wide scope to define the load balancer IP addresses; this allows the same trusted external endpoints to be used across multiple namespaces.
-
-**GlobalNetworkSet for load balancer**
-
-This GlobalNetworkSet contains the IP addresses for the trusted load balancer.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkSet
-metadata:
- name: load-balancer
- labels:
- trusted-ep: "load-balancer"
-spec:
- nets:
- # Modify the ip addresses to refer to the ip addresses of load-balancers in your environment
- - 10.0.0.1/32
- - 10.0.0.2/32
-```
-
-Next, we will create three network policies.
-
-**NetworkPolicy 1**
-
-This policy allows ingress from the trusted load balancer to pods in `frontend`.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: application.app1-frontend
- namespace: app1-ns
-spec:
- tier: application
- order: 200
- selector: (app == "app1" && svc == "frontend")
- serviceAccountSelector: ''
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: trusted-ep == "load-balancer"
- destination:
- ports:
- - '10001'
- types:
- - Ingress
-```
-
-**NetworkPolicy 2**
-
-This policy allows ingress access from pods in the `frontend` to pods in the `backend`.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: application.app1-backend
- namespace: app1-ns
-spec:
- tier: application
- order: 300
- selector: (app == "app1" && svc == "backend")
- serviceAccountSelector: ''
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: svc == "frontend"
- destination:
- ports:
- - '10002'
- types:
- - Ingress
-```
-
-**NetworkPolicy 3**
-
-This policy allows ingress access from pods in the `backend` to pods in the `database`.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: application.app1-database
- namespace: app1-ns
-spec:
- tier: application
- order: 400
- selector: (app == "app1" && svc == "database")
- serviceAccountSelector: ''
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: svc == "backend"
- destination:
- ports:
- - '10003'
- - '10004'
- types:
- - Ingress
-```
-
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/calico-cloud-features/index.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/calico-cloud-features/index.mdx
deleted file mode 100644
index cd9ed5849e..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/calico-cloud-features/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Learn about visibility and troubleshooting features in Manager UI.
-hide_table_of_contents: true
----
-
-# Manager UI features
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/calico-cloud-features/networksets.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/calico-cloud-features/networksets.mdx
deleted file mode 100644
index 74f2256113..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/calico-cloud-features/networksets.mdx
+++ /dev/null
@@ -1,260 +0,0 @@
----
-description: Learn the power of network sets and why you should create them.
----
-
-# Understanding network sets
-
-## Visualize traffic to/from your cluster
-
-Modern applications often integrate with third-party APIs and SaaS services that live outside Kubernetes clusters. To securely enable access to those integrations, you must be able to limit IP ranges for egress and ingress traffic to workloads. IP lists or ranges are also used to deny-list bad actors or embargoed countries. To limit IP ranges, you need to use the $[prodname] resource called **network sets**.
-
-## What are network sets?
-
-**Network sets** are a grouping mechanism that allows you to create an arbitrary set of IP subnetworks/CIDRs or domains that can be matched by standard label selectors in Kubernetes or $[prodname] network policy. Like IP pools for pods, they allow you to reuse/scale sets of IP addresses in policies.
-
-A **network set** is a namespaced resource that you can use with Kubernetes or $[prodname] network policies; a **global network set** is a cluster-wide resource that you can use with $[prodname] network policies.
-
-Like network policy, you manage user access to network sets using standard Kubernetes RBAC.
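-
-As a sketch only (hypothetical role and namespace names), a standard Kubernetes Role can give a team full control over the network sets in its own namespace, while other users are bound to a more restrictive role:
-
-```yaml
-apiVersion: rbac.authorization.k8s.io/v1
-kind: Role
-metadata:
-  name: networkset-editor
-  namespace: team1-ns
-rules:
-  # Network sets are managed through the projectcalico.org API group.
-  - apiGroups: ['projectcalico.org']
-    resources: ['networksets']
-    verbs: ['get', 'list', 'watch', 'create', 'update', 'patch', 'delete']
-```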
-
-## Why are network sets powerful?
-
-If you are familiar with Service Graph in Manager UI, you know the value of seeing pod-to-pod traffic within your cluster. But what about traffic external to your cluster?
-
-$[prodname] automatically detects IPs for pods and nodes that fall into the standard IETF “public network” and “private network” designations, and displays those as icons in Service Graph. So you get some visibility into external traffic without using any network sets.
-
-![public-private-networks](/img/calico-cloud/public-private-networks.png)
-
-However, when you create network sets, you can get more granular visibility into what's leaving the cluster to public networks. Because you control the grouping, the naming, and labeling, you create visibility that is customized to your organization. This is why they are so powerful.
-
-Here are just a few examples of how network sets can be used:
-
-- **Egress access control**
-
- Network sets are a key resource for defining egress access controls; for example, securing ingress to microservices/apps or egress from workloads outside the cluster.
-
-- **Troubleshooting**
-
- Network sets appear as additional metadata in flow logs and Kibana, Flow Visualizer, and Service Graph.
-
-- **Efficiency and scaling**
-
- Network sets are critical when scaling your deployment. You may have only a few CIDRs when you start. But as you scale out, it is easier to update a handful of network sets than update each network policy individually. Also, in a Kubernetes deployment, putting lots of anything (CIDRs, ports, policy rules) directly into policies causes inefficiencies in traffic processing (iptables/eBPF).
-
-- **Microsegmentation and shift left**
-
- Network sets provide the same microsegmentation controls as network policy. For example, you can allow specific users to create policies (that reference network sets), but allow only certain users to manage network sets.
-
-- **Threat defense**
-
- Network sets are key to being able to manage threats by blocking bad IPs with policy in a timely way. Imagine having to update individual policies when you find a bad IP you need to quickly block. You can even give access to a controller that automatically updates CIDRs in a network set when a bad IP is found.
-
-## Create a network set and use it in policy
-
-In this section, we’ll walk through how to create a namespaced network set in Manager UI. You can follow along using your cluster or tigera-labs cluster.
-
-In this example, you will create a network set named, `google`. This network set contains a list of trusted google endpoints for a microservice called, `hipstershop`. As a service owner, you want to be able to see traffic leaving the microservices in Service Graph. Instead of matching endpoints on IP addresses, we will use domain names.
-
-1. From the left navbar, click **Network Sets**.
-1. Click **Add Network Set**, and enter these values.
- - For Name: `google`
- - For Scope: Select **Namespace** and select, `hipstershop`
-1. Under Labels, click **Add label**.
- - In the Select key field, enter `destinations` and click the green bar to add this new entry.
- - In the Value field, enter `google`, click the green bar to add the entry, and save.
-1. For Domains, click **+Add Domain** and add these domains: `clouddebugger.googleapis.com`, `cloudtrace.googleapis.com`, `metadata.google.internal`, `monitoring.googleapis.com`.
-1. Click **Create Network Set**.
-
-You’ve created your first network set.
-
-![add-networkset-google](/img/calico-cloud/add-networkset-google.png)
-
-The YAML looks like this:
-
-```yaml
-kind: NetworkSet
-apiVersion: projectcalico.org/v3
-metadata:
- name: google
- labels:
- destinations: google
- namespace: hipstershop
-spec:
- nets: []
- allowedEgressDomains:
- - clouddebugger.googleapis.com
- - cloudtrace.googleapis.com
- - metadata.google.internal
- - monitoring.googleapis.com
-```
-
-Next, we write a DNS policy for hipstershop that allows egress traffic to the trusted google sites. The following network policy allows egress access to destinations labeled `destinations: google`. Note that putting domains in a network set and referencing it in policy is the best practice. Also, note that `selector: all()` should only be used if all pods in the namespace can access all of the domains in the network set; if not, you should create separate policies accordingly.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: application.allow-egress-domain
- namespace: hipstershop
-spec:
- tier: application
- order: 0
- selector: all()
- serviceAccountSelector: ''
- egress:
- - action: Allow
- source: {}
- destination:
- selector: destinations == "google"
- types:
- - Egress
-```
-
-## Network sets in Service Graph
-
-Continuing with our `hipstershop` example, if you go to Service Graph, you see hipstershop (highlighted in yellow).
-
-![hipstershop](/img/calico-cloud/hipstershop.png)
-
-If we double-click `hipstershop` to drill down, we now see the `google` network set icon (highlighted in yellow). We now have visibility into external traffic from google sites to hipstershop. (If you are using the tigera-labs cluster, note that the network set will not be displayed as shown below.)
-
-![google-networkset](/img/calico-cloud/google-networkset.png)
-
-Service Graph provides a view into how services are interconnected in a consumable view, along with easy access to flow logs. However, you can also see traffic associated with network sets in volumetric display with Flow Visualizer, and query flow log data associated with network sets in Kibana.
-
-## Tutorial
-
-In the following example, we create a global network set resource for a trusted load-balancer that can be used with microservices and applications. The label, `trusted-ep: load-balancer` is how this global network set can be referenced in policy.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkSet
-metadata:
- name: load-balancer
- labels:
- trusted-ep: "load-balancer"
-spec:
- nets:
- # Modify the ip addresses to refer to the ip addresses of load-balancers in your environment
- - 10.0.0.1/32
- - 10.0.0.2/32
-```
-
-The following network policy uses the selector `trusted-ep == "load-balancer"` to reference the above GlobalNetworkSet. All applications in the `app2-ns` namespace that match `app2` and `svc1` are allowed ingress traffic from the trusted load balancer on port 10001.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
- name: application.app2-svc1
- namespace: app2-ns
-spec:
- tier: application
- order: 500
- selector: (app == "app2" && svc == "svc1")
- ingress:
- - action: Allow
- protocol: TCP
- source:
- selector: trusted-ep == "load-balancer"
- destination:
- ports:
- - '10001'
- types:
- - Ingress
-```
-
-### Advanced policy rules with network sets
-
-When you combine $[prodname] policy rules with network sets, you have powerful ways to fine-tune. The following example combines network sets with specific rules in a global network policy to deny access more quickly.
-We start by creating a $[prodname] GlobalNetworkSet that specifies a list of CIDR ranges we want to deny: 192.0.2.55/32 and 203.0.113.0/24.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkSet
-metadata:
- name: ip-protect
- labels:
- ip-deny-list: 'true'
-spec:
- nets:
- - 192.0.2.55/32
- - 203.0.113.0/24
-```
-
-Next, we create two $[prodname] GlobalNetworkPolicy resources. The first is a high "order" policy that allows traffic as a default for things that don’t match our second policy, which is low "order" and uses the GlobalNetworkSet label as a selector to deny ingress traffic (the `ip-deny-list` label from the previous step). In the label selector, we also include the term, `!has(projectcalico.org/namespace)`, which prevents this policy from matching pods or NetworkSets that also have this label. To more quickly enforce the denial of forwarded traffic to the host at the packet level, use the `doNotTrack` and `applyOnForward` options.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: forward-default-allow
-spec:
- selector: apply-ip-protect == 'true'
- order: 1000
- doNotTrack: true
- applyOnForward: true
- types:
- - Ingress
- ingress:
- - action: Allow
----
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: ip-protect
-spec:
- selector: apply-ip-protect == 'true'
- order: 0
- doNotTrack: true
- applyOnForward: true
- types:
- - Ingress
- ingress:
- - action: Deny
- source:
- selector: ip-deny-list == 'true' && !has(projectcalico.org/namespace)
-```
-
-## Best practices for using network sets
-
-- Create network sets as soon as possible after getting started
-
- This allows you to quickly realize the benefits of seeing custom metadata in flow logs and visualizing traffic in Service Graph and Flow Visualizer.
-
-- Create a network set label and name schema
-
- It is helpful to think: what names would be meaningful and easy to understand when you look in Service Graph? Flow Viz? Kibana? What labels will be easy to understand when used in network policies – especially if you are separating users who manage network sets from those who consume them in network policies.
-
-- Do not put large sets of CIDRs and domains directly in policy
-
- Network sets allow you to specify CIDRs and/or domains. Although you can add CIDRs and domains directly in policy, it doesn't scale.
-
-- Do not put thousands of rules into a policy, each with a different CIDR
-
- If your set of /32s can be easily aggregated into a few broader CIDRs without compromising security, it’s a good thing to do; whether you’re putting the CIDRs in the rule or using a network set.
-
-- If you want to match thousands of endpoints, write one or two rules and use selectors to match the endpoints.
-
- Having one rule per port, per host is inefficient because each rule ends up being rendered as an iptables/eBPF rule instead of making good use of IP sets.
-
-- Avoid overlapping IP addresses/subnets in networkset/globalnetworkset definitions
-
-The following table provides guidance on the efficient use of network sets.
-
-| Policy | Network set | Results |
-| ------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------- | --------------------------------------------------------------------------------------------------------------------- |
-| source: selector: foo="bar" | With a handful of broad CIDRs | **Efficient** - 1 iptables/eBPF rule - 1 IP set with a handful of CIDRs |
-| source: nets: [ ... handful ...] | Not used | **Efficient** - Handful of iptables/eBPF rules - 0 IP sets |
-| source: selector: foo="bar" | One network set with 2000 x /32s | **Fairly efficient** - 1 iptables/eBPF rule - 1 IP set with 2000 entries |
-| | Two network sets with 1000 x /32s each | **Efficient** - 2 iptables/eBPF rules - 2 IP sets with 1000 entries each |
-| source: nets: [... 2000 /32s ...] - source: nets: [1 x /32] - source: nets: [1 x /32] - ... x 2000 | Not used | **Inefficient** - 2000+ iptables/eBPF rules - 0 IP sets |
-
-For more examples of network sets with policy, see:
-
-- Namespaced network set
-
- - [Secure ingress access to a microservice or application](../applications/ingress-microservices.mdx)
-
-- Global network set
- - [Secure egress access from workloads to destinations outside the cluster](../applications/egress-controls.mdx)
- - [Global egress access controls](../enterprise-security/global-egress.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/calico-cloud-features/service-graph.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/calico-cloud-features/service-graph.mdx
deleted file mode 100644
index 957a040b80..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/calico-cloud-features/service-graph.mdx
+++ /dev/null
@@ -1,132 +0,0 @@
----
-description: Learn the basics of using Service Graph.
----
-
-# Service Graph tutorial
-
-import ReactPlayer from 'react-player';
-
-:::note
-
-This tutorial references the labs cluster setup that is part of the $[prodname] Trial environment.
-
-:::
-
-## What you will learn
-
-- The basics of Service Graph
-- To visualize workload communication in the labs cluster using Service Graph
-- Features of Service Graph that help you manage the scale of clusters with a large number of namespaces
-
-## The what and why of Service Graph
-
-One thing we consistently hear from DevOps engineers, SREs, and platform operators is that they struggle with getting basic visibility within their Kubernetes cluster. As more apps and microservices are deployed, and workloads are changing continuously, it becomes hard to understand how everything is working together in such a complex multi-tenant environment. Service Graph provides a "weather map" view of everything in a cluster, helping new team members ramp up quickly on how everything is communicating with each other, and an easy way to engage the right stakeholders when issues come up. Service Graph delivers a point-to-point, topographical view of how namespaces, services, and deployments are communicating with each other.
-
-## Quick demo
-
-**12-minute video**
-
-
-
-## Exploring Service Graph with the labs cluster
-
-> Select the **tigera-labs cluster** in the upper right corner of the Manager UI.
-> In the left navigation menu, select **Service Graph**, **Default**.
-
-The default view in Service Graph provides a representation of how namespaces within the cluster are communicating with each other. Each namespace is represented with the standard Kubernetes icon for namespaces. There are some demo namespaces - **hipstershop**, **storefront**, and **acme** - and other Tigera namespaces that are used for $[prodname]. Egress traffic leaving the cluster is represented with globe icons. Under the hood, each of these globe icons is a custom resource called a **NetworkSet**.
-
-### Network sets
-
-By default, $[prodname] provides two system-defined network sets - one for public IPs called **public network** and another for private IPs called **private network**. You can also define your own network sets with a list of IPs or CIDRs and they will appear on this graph, enabling you to get more granular visibility into the egress traffic that is specific to your environment. The **public-ip-range** is an example of such a network set that has already been created in the labs cluster.
-
-For more examples of using network sets, see [Secure egress access from workloads to destinations outside the cluster](../applications/egress-controls.mdx), and [Understanding network sets](networksets.mdx).
-
-### Selecting graph edges and graph nodes
-
-> Click the `<<` tab in the upper right to open the details panel.
-
-The details panel provides additional information on a selected graph node or edge. The log panel below the graph also automatically updates and/or filters data (if available) for the selection.
-
-> Click on the edge between the **hipstershop** namespace, and the network set **public network**. The flow logs are automatically filtered below in the Flows tab, and you can expand any of the rows in that tab to view some of the detailed metadata that $[prodname] provides around workload communication.
-
-### Double-clicking on a namespace
-
-You can also double-click on a namespace to see services and deployments within a namespace, and how they are communicating externally, and with the rest of the cluster.
-
-> Double-click **hipstershop** or **storefront**. All of the resources in purple are part of the selected namespace (which is also listed in the breadcrumb in the upper left), and anything external to the namespace is represented in blue. These views are useful when troubleshooting a specific application or microservice running in your cluster.
-
-> To return to the default view, click the **Namespaces** breadcrumb in the upper left.
-
-### Right-click actions to manage scale
-
-Right-clicking a namespace, service, or deployment brings up another set of actions that are designed to help you manage the scale of your Kubernetes cluster. Although the labs cluster has only a dozen or so namespaces, your Kubernetes clusters could likely have over one hundred namespaces.
-
-> Right-click **hipstershop**, and select **Hide unrelated**.
-
-Hide unrelated allows you to quickly filter and trim the Service Graph to show just the selected entity and anything it is communicating with. This is helpful in troubleshooting issues for a specific namespace or application, and for application teams to quickly understand their upstream and downstream dependencies.
-
-> To reset the view after this action, select the **Reset icon** (vertical column of icons), and click, **Reset view & filters**.
-
-## Using layers and views
-
-Another unique feature of Service Graph is the concept of **layers**, which allows you to create meaningful groupings of resources so you can easily hide and show them on the graph. It is another tool to help you manage the overall scale of visualizing workloads within your cluster. You can create layers for different types of platform infrastructure you might have in your cluster - networking, storage, logging.
-
-> Click the `>>` tab in the upper left (next to Namespaces) to open the panel. Expand the Tigera components layer to view its namespaces.
-
-The layer called **Tigera components** contains namespaces related to $[prodname]. Click the ellipsis/spillover menu on the right, and select **Hide layer**.
-
-This hides the entire layer on the graph, making it easy to hide/show a group of related namespaces.
-
-> Reset the view by selecting, **Restore layer**.
-
-### Views
-
-**Views** allow you to save the state of the graph on the canvas.
-
-> Click **Views** (the panel above Layers).
-
-Let's create a view that shows only items related to the hipstershop product catalog.
-
-> Double-click **hipstershop**, select the _productcatalogservice_, right-click and select **Hide unrelated to service group**. In the Views tab, click **Save Current** and save the view as "Product Catalog".
-
-> Click the **Namespaces** breadcrumb to reset the views and filters. In the Views tab, you can see your saved view, "Product Catalog" -- so you can quickly revisit this view at any time.
-
-## Logs and details panel
-
-As you are working with Service Graph, the bottom panel and the details panel (right of the main canvas) will display additional information for selections you make on the graph. The exact information displayed will vary depending on the following:
-
-- Whether you select a graph node or edge
-- Type of graph node selected (namespace, network set, service, etc.)
-- Whether or not you have deployed Envoy for Layer 7 visibility
-
-### Details panel
-
-> Double-click the **hipstershop** namespace to see a detailed view of service-to-service communication. Select the **checkoutservice** on the graph.
-
-You can see the **Inbound** and **Outbound** connections for this service. Expanding any of those connections (hovering over the line and clicking the arrow) shows you volumetric data for each of those flows. In the **Insights** section, you can click on items like DNS stats in use for the service.
-
-> In the line, `checkoutservice >> shippingservice`, click the green arrows to display protocols and policies.
-
-The **Process Info** section shows you specific processes associated with the traffic flows for the `checkoutservice`. This can be helpful in troubleshooting scenarios where identifying the specific binaries that are involved in service communication may be required to track down a bug.
-
-### Logs panel
-
-In the bottom panel are several tabs: Flows, DNS, HTTP, Alerts, and Capture Jobs.
-
-With the `checkoutservice` still selected, filters are automatically applied on the Flows tab with detailed logs for Layer 4 traffic. Similar filters are automatically applied to the DNS and HTTP flows. Note that HTTP flows require the deployment of Envoy to see Layer 7 traffic in any cluster connected to $[prodname].
-
-The Alerts tab will show alerts if they are generated, and Capture Jobs will show any packet capture jobs that have been defined for this namespace.
-
-Now that you understand the basics for Service Graph, we recommend:
-
-- [Understanding policy tiers](../../network-policy/policy-tiers/tiered-policy.mdx)
-- [Understanding network sets](networksets.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/calico-cloud-features/tour.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/calico-cloud-features/tour.mdx
deleted file mode 100644
index 93b8272fb9..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/calico-cloud-features/tour.mdx
+++ /dev/null
@@ -1,226 +0,0 @@
----
-description: A quick tour of the Calico Cloud user interface.
----
-
-# Manager UI tutorial
-
-## What you will learn
-
-- Manager UI features and controls
-- How to gain visibility into clusters
-
-Let's go through each item in the Manager UI left navbar from top to bottom. You can follow along using any cluster.
-
-## Dashboard
-
-> From the left navbar, click Dashboards.
-
-The Dashboard provides a bird's-eye view of cluster activity. Note the following:
-
-- The filter panel at the top lets you change dashboard views and the time range.
-- The **Customize Layout** menu lets you choose what components are displayed on the dashboard. To get WireGuard metrics for pod-to-pod and host-to-host encryption, you must [enable WireGuard](../../compliance/encrypt-cluster-pod-traffic.mdx).
-- For application-related dashboard cards to show data, like HTTP Response Codes or Url Requests, you need to [configure L7 logs](../../visibility/elastic/l7/configure.mdx).
-
-![dashboards](/img/calico-enterprise/dashboards.png)
-
-## Service Graph
-
-> From the left navbar, select **Service Graph**, **Default**
-
-Service Graph provides a point-to-point, topographical representation of network traffic within your cluster. It is the primary tool for visibility and troubleshooting.
-
-![service-graph](/img/calico-cloud/service-graph.png)
-
-**Namespaces**
-
-Namespaces are the default view in Service Graph.
-
-When you expand the top right panel `<<`, you see a detailed view of the service-to-service communications for the namespace.
-
-![service-graph-namespace](/img/calico-cloud/service-graph-namespace.png)
-
-**Nodes and edges**
-
-Lines going to/from nodes are called edges. When you click on a node or edge, the right panel shows details, and the associated flow logs are automatically filtered in the bottom panel.
-
-![edges](/img/calico-cloud/edges.png)
-
-**Layers**
-
-Layers allow you to create meaningful groupings of resources so you can easily hide and show them on the graph. For example, you can group resources for different platform infrastructure types in your cluster like networking, storage, and logging.
-
-> Click the panel on the left (`>>`) by the Namespaces breadcrumb, and then expand the Tigera components layer.
-
-![service-graph-layers](/img/calico-cloud/service-graph-layers.png)
-
-The **Tigera components** layer contains namespaces for $[prodname] networking components, a view of interest to Dev/Ops.
-
-> Click the vertical ellipses and select, **Hide layer**. Notice that only the business application namespaces remain visible in the graph.
-
-> To make this layer less visible, select **Restore layer** and click **De-emphasize layer**.
-
-**Logs, alerts, and capture jobs**
-
-The panel at the bottom below the graph provides tools for troubleshooting connectivity and performance issues. **Logs** (Flows, DNS, and HTTP) are the foundation of security and observability in $[prodname]. When you select a node or edge in the graph, logs are filtered for the node or service. For example, here is a flow log with details including how the policies were processed in tiers.
-
-![service-graph-flows](/img/calico-cloud/service-graph-flows.png)
-
-**Alerts**
-
-For convenience, the Alerts tab duplicates the alerts you have enabled in the **Alerts tab** in the left navbar. By default, alerts are not enabled.
-
-**Capture jobs**
-
-Service Graph integrates a packet capture feature for capturing traffic for a specific namespace, service, replica set, daemonset, statefulset, or pod. You can then download capture files to your favorite visualization tool like Wireshark.
-
-> Right-click on any endpoint to start or schedule a capture.
-
-![packet-capture-service](/img/calico-cloud/packet-capture-service.png)
-
-**Flow Visualizations**
-
-> From the left navbar, select **Service Graph**, **Flow Visualizations**.
-
-Flow Visualizer (also called, "FlowViz") is a $[prodname] tool for drilling down into network traffic within the cluster to troubleshoot issues. The most common use of Flow Visualizer is to drill down and pinpoint which policies are allowing and denying traffic between services.
-
-![flow-viz](/img/calico-cloud/flow-viz.png)
-
-## Policies
-
-> From the left navbar, click **Policies**.
-
-Network policy is the primary tool for securing a Kubernetes network. Policy is used to restrict network traffic (egress and ingress) in your cluster so only the traffic that you want to flow is allowed. $[prodname] supports these policies:
-
-- $[prodname] network policy
-- $[prodname] global network policy
-- Kubernetes policy
-
-$[prodname] uses **tiers** (also called, hierarchical tiers) to provide guardrails for managing network policy across teams. Policy tiers allow users with more authority (for example, Dev/ops user) to enforce network policies that take precedence over teams (for example, service owners and developers).
-
-**Policies Board** is the default view for managing tiered policies.
-
-![policy-board](/img/calico-cloud/policy-board.png)
-
-Users typically use a mix of Policy Board and YAML files. Note that you can export one or all policies in a tier to YAML.
-
-The **Policy Board filter** lets you filter by policy types and label selectors.
-
-![policy-filters](/img/calico-cloud/policy-filters.png)
-
-The following features provide more security and guardrails for teams.
-
-**Recommended a policy**
-
-> In Policies Board, click **Recommend a policy**.
-
-One of the first things you'll want to do after installation is to secure unprotected pods/workloads with network policy. (For example, Kubernetes pods allow traffic from any source by default.) The Recommend a policy feature generates policies that protect specific endpoints in the cluster. Users with minimal experience with network policy can easily get started.
-
-![recommend-policy](/img/calico-cloud/recommend-policy.png)
-
-**Policy stage**
-
-When you create a policy, it is a best practice to stage it to evaluate the effects before enforcing it. After you verify that a staged network policy is allowing traffic as expected, you can enforce it.
-
-![stage-policy](/img/calico-cloud/stage-policy.png)
-
-**Preview**
-
-When you edit a policy, you can select **Preview** to see how changes may affect existing traffic.
-
-![policy-preview](/img/calico-cloud/policy-preview.png)
-
-## Endpoints
-
-> From the left navbar, click **Endpoints**.
-
-**Endpoint Details**
-
-This page is a list of all pods in the cluster (also known as workload endpoints).
-
-![endpoints](/img/calico-cloud/endpoints.png)
-
-**Node List**
-
-This page lists all nodes associated with your cluster.
-
-![node-list](/img/calico-cloud/node-list.png)
-
-## Network Sets
-
-Network sets and global network sets are $[prodname] resources for defining IP subnetworks/CIDRs, which can be matched by standard label selectors in policy (Kubernetes or $[prodname]). They are a powerful feature for reusing and scaling policy.
-
-A simple use case is to limit traffic to/from external networks. For example, you can create a global network set with "deny-list CIDR ranges 192.0.2.55/32 and 203.0.113.0/24", and then reference the network set in a global network policy. This also allows you to see this traffic in Service Graph.
-
-![networksets](/img/calico-cloud/networksets.png)
-
-## Managed clusters
-
-> From the left navbar, click **Managed clusters**.
-
-This page is where you switch views between clusters in Manager UI. When you connect to a different cluster, the entire Manager view changes to reflect the selected cluster.
-
-![managed-clusters](/img/calico-cloud/managed-clusters.png)
-
-## Compliance Reports
-
-> From the left navbar, click **Compliance**.
-
-Compliance tools that rely on periodic snapshots do not provide accurate assessments of Kubernetes workloads against your compliance standards. The $[prodname] compliance dashboard and reports provide a complete inventory of regulated workloads, along with evidence of enforcement of network controls for these workloads. Additionally, audit reports are available to see changes to any network security controls.
-
-**Compliance reports** are based on archived flow logs and audit logs for all $[prodname] resources, and audit logs for Kubernetes resources in the Kubernetes API server.
-
-![cis-benchmark](/img/calico-cloud/cis-benchmark.png)
-
-Using the filter, you can select report types.
-
-![compliance-filter](/img/calico-cloud/compliance-filter.png)
-
-## Activity
-
-> From the left navbar, select **Activity**, **Timeline**.
-
-**Timeline**
-
-What changed, who did it, and when? This information is critical for security. Native Kubernetes doesn’t provide an easy way to capture audit logs for pods, namespaces, service accounts, network policies, and endpoints. The $[prodname] timeline provides audit logs for all changes to network policy and other resources associated with your $[prodname] deployment.
-
-![timeline](/img/calico-cloud/timeline.png)
-
-> From the left navbar, select **Activity**, **Alerts**.
-
-**Alerts**
-
-How do you know if you have an infected workload? A possible threat? $[prodname] detects and alerts on unexpected network behavior that may indicate a security breach. You can create alerts for:
-
-- Known attacks and exploits (for example, exploits found at Shopify, Tesla, Atlassian)
-- DOS attempts
-- Attempted connections to botnets and command and control servers
-
-![alerts](/img/calico-cloud/alerts.png)
-
-## Logs
-
-$[prodname] includes a fully-integrated deployment of Elastic to collect flow log data that drives key features like Flow Visualizer, metrics in the Dashboard and Policy Board, policy automation, and testing and security features. $[prodname] also embeds Kibana so you can view raw log data for the traffic within your cluster.
-
-> From the left navbar, click **Logs**.
-
-**Dashboards**
-
-$[prodname] comes with built-in dashboards.
-
-![kibana-dashboards](/img/calico-cloud/kibana-dashboards.png)
-
-**Log data**
-
-Kibana provides its own set of filtering capabilities to drill down into log data. For example, use filters to drill into flow log data for specific namespaces and pods. Or view details and metadata for a single flow log entry.
-
-![kibana](/img/calico-cloud/kibana.png)
-
-## Threat feeds
-
-You can add threat intelligence feeds to $[prodname] to trace network flows of suspicious IP addresses and domains. Then, you can use network policy to block pods from contacting IPs or domains.
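-
-As a rough sketch (the feed URL is a placeholder), threat feeds are defined with a `GlobalThreatFeed` resource; the labels under `globalNetworkSet` are applied to the network set that $[prodname] generates from the feed, so that policy can select it:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalThreatFeed
-metadata:
-  name: sample-ip-deny-feed
-spec:
-  content: IPSet
-  pull:
-    period: 24h
-    http:
-      url: https://threatfeed.example.com/deny-list.txt
-  globalNetworkSet:
-    labels:
-      threat-feed: sample-ip-deny-feed
-```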
-
-Now that you understand the basics, we recommend the following:
-
-- [Service Graph tutorial](service-graph.mdx)
-- [Understanding policy tiers](../../network-policy/policy-tiers/tiered-policy.mdx)
-- [Understanding network sets](networksets.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/enterprise-security/default-deny.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/enterprise-security/default-deny.mdx
deleted file mode 100644
index 685c7dcdb9..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/enterprise-security/default-deny.mdx
+++ /dev/null
@@ -1,78 +0,0 @@
----
-description: Implement a global default deny policy in the default tier to block unwanted traffic.
----
-
-# Global default deny policy
-
-In this article you will learn when and how to create a global default deny policy for the cluster.
-
-## What is it, when should you create one?
-
-A global default deny policy ensures that unwanted traffic (ingress and egress) is denied by default. Pods without policy (or incorrect policy) are not allowed traffic until appropriate network policy is defined. Although the staging policy tool will help you find incorrect and missing policy, a global deny helps mitigate other lateral malicious attacks.
-
-## Best practice #1: Allow, stage, then deny
-
-We recommend that you create a global default deny policy _after you complete writing policy for the traffic that you want to allow_. Use the stage policy feature to get your allowed traffic working as expected, then lock down the cluster to block unwanted traffic. The following steps summarize the best practice:
-
-1. Create a staged global default deny policy. It shows all the traffic that would be blocked if it were converted into a deny (see the staged policy sketch after this list).
-1. Create other network policies to individually allow the traffic shown as blocked in step 1, until no connections are denied.
-1. Convert the staged global network policy to an enforced policy.
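-
-A minimal sketch of step 1 (staged policies report matching traffic without enforcing it; in practice you would stage the same spec you plan to enforce, including the DNS allow rules shown in Best practice #2):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: StagedGlobalNetworkPolicy
-metadata:
-  name: staged-default-deny
-spec:
-  # Same scope as the enforced example below; while staged, matching traffic is
-  # only reported in flow logs as it would be denied, not actually dropped.
-  namespaceSelector: has(projectcalico.org/name) && projectcalico.org/name not in {"kube-system", "calico-system"}
-  types:
-    - Ingress
-    - Egress
-```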
-
-## Best practice #2: Keep the scope to non-system pods
-
-A global default deny policy applies to the entire cluster including all workloads in all namespaces, hosts (computers that run the hypervisor for VMs, or container runtime for containers), including Kubernetes control plane and $[prodname] control plane nodes and pods.
-
-For this reason, the best practice is to create a global default deny policy for **non-system pods** as shown in the following example.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: deny-app-policy
-spec:
- namespaceSelector: has(projectcalico.org/name) && projectcalico.org/name not in {"kube-system", "calico-system"}
- types:
- - Ingress
- - Egress
- egress:
- # allow all namespaces to communicate to DNS pods
- - action: Allow
- protocol: UDP
- destination:
- selector: 'k8s-app == "kube-dns"'
- ports:
- - 53
- - action: Allow
- protocol: TCP
- destination:
- selector: 'k8s-app == "kube-dns"'
- ports:
- - 53
-```
-
-Note the following:
-
-- Even though we call this policy "global default deny", the above policy is not explicitly _denying traffic_. By selecting the traffic with the `namespaceSelector` but not specifying an allow, the traffic is denied after all other policy is evaluated. This design also makes it unnecessary to ensure any specific order (priority) for the default-deny policy.
-- Allowing access to `kube-dns` simplifies per-pod policies because you don't need to duplicate the DNS rules in every policy
-- The policy deliberately excludes the `kube-system` and `calico-system` namespaces by using a negative `namespaceSelector` to avoid impacting any control plane components
-
-Next, add the policy to the default tier. (As noted above, anywhere in the default tier is fine.)
-
-Next, use the stage policy feature and verify that the policy does not block any necessary traffic before enforcing it.
-
-### Don't try this!
-
-The following policy looks fine on the surface, and it does work. But as described in Best practice #2, the policy could break your cluster because the scope is too broad. Therefore, we do not recommend adding this type of policy to the default tier, even if you have verified allowed traffic using the stage policy feature.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
- name: default.default-deny
-spec:
- tier: default
- selector: all()
- types:
- - Ingress
- - Egress
-```
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/enterprise-security/global-egress.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/enterprise-security/global-egress.mdx
deleted file mode 100644
index 276e44a488..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/enterprise-security/global-egress.mdx
+++ /dev/null
@@ -1,90 +0,0 @@
----
-description: Implement global egress access controls.
----
-
-# Global egress access controls
-
-In this article you will learn how to implement egress access controls cluster-wide for all applications and microservices.
-
-In this example, we will implement global egress access controls for **dev1 team**:
-
-- Egress access control for all applications managed by dev1: applications (**app1**) and microservices (**app2**)
-- dev1 pods are allowed egress access to a repo named, `repo.acme.corp` at port 443
-- dev1 pods are allowed egress access to a trusted partner (in this example, another business unit in the organization) at `10.10.10.10` on port 1010.
-
-![global-egress](/img/calico-cloud/global-egress.png)
-
-## Create global network sets
-
-First, we need to create a GlobalNetworkSet for the trusted repo, `repo.acme.corp`. Because we already have a label taxonomy following the best practices, it is easy. Just use the `allowedEgressDomains` field to specify the trusted repo, `repo.acme.corp`.
-
-```yaml
-kind: GlobalNetworkSet
-apiVersion: projectcalico.org/v3
-metadata:
- name: trusted-repo
- labels:
- trusted-ep: dev1-repo
-spec:
- allowedEgressDomains:
- - repo.acme.corp
-```
-
-Next, we create a separate global network set for our trusted business unit within the organization. We use the `dev1-partners` label, and specify the IP address, `10.10.10.10/32`.
-
-```yaml
-kind: GlobalNetworkSet
-apiVersion: projectcalico.org/v3
-metadata:
- name: trusted-partners
- labels:
- trusted-ep: dev1-partners
-spec:
- nets:
- - 10.10.10.10/32
-```
-
-## Create global network policy
-
-Before we create our GlobalNetworkPolicy, let's review our labels and set up.
-
-- We assume that app1 and app2 and all other dev1 team apps’ pods have a label that identifies dev1 (tenant: dev1) to enforce the declared controls.
-
-- For selected pods (all dev1 pods), all egress traffic that is not explicitly allowed is denied, as part of the best practices for policy tiers.
-
-- The policy below allows egress communication from dev1 pods to all other pods in the cluster by selecting all namespaces as the destination. This means we apply granular controls to traffic destined for external endpoints, while allowing egress traffic destined for other pods in the cluster. This is recommended unless you have specific control requirements that dictate otherwise, and it simplifies policy creation because we have already defined granular ingress controls for intra-cluster pod-to-pod communication. The alternative would be to define granular ingress and egress controls for all pods, which adds complexity in policy development.
-
-Here is our GlobalNetworkPolicy. The dev1 team is allowed egress access to the trusted repo (`trusted-ep: dev1-repo`) on port 443, and to the trusted business unit (`trusted-ep: dev1-partners`) on port 1010.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  name: security.egress-dev1
-spec:
-  tier: security
-  selector: tenant == "dev1"
-  egress:
-    - action: Allow
-      protocol: TCP
-      source: {}
-      destination:
-        selector: trusted-ep == "dev1-repo"
-        ports:
-          - '443'
-    - action: Allow
-      protocol: TCP
-      source: {}
-      destination:
-        selector: trusted-ep == "dev1-partners"
-        ports:
-          - '1010'
-    - action: Allow
-      source: {}
-      destination:
-        namespaceSelector: all()
-  types:
-    - Egress
-```
-
-As you can see, once you have your labels in place, creating policy to secure teams is straightforward.
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/enterprise-security/index.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/enterprise-security/index.mdx
deleted file mode 100644
index 5111a8d2d6..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/enterprise-security/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Implement common enterprise security controls for security and platform tiers.
-hide_table_of_contents: true
----
-
-# Implement enterprise security controls
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/enterprise-security/namespace-isolation.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/enterprise-security/namespace-isolation.mdx
deleted file mode 100644
index 26a3aaf28e..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/enterprise-security/namespace-isolation.mdx
+++ /dev/null
@@ -1,88 +0,0 @@
----
-description: Learn how to isolate namespaces for ingress traffic.
----
-
-# Namespace isolation and access controls
-
-In this article you will learn how to create $[prodname] global network policies to isolate namespaces for ingress traffic.
-
-## Isolate access
-
-Isolating ingress traffic access can be implemented across different groups in your organization, for example:
-
-- Business unit, development team, or project
-- Deployment environment (Dev/Stage/Prod or Pre-prod/Prod)
-- Control plane
-- Management plane
-- Compliance environment
-
-## Create global network policies
-
-Microsegmentation across namespaces is achieved using $[prodname] policy labels and selectors with a **Pass** action rule. If you are unfamiliar with policy action rules, see [Tutorial - Understanding policy tiers](../../network-policy/policy-tiers/tiered-policy.mdx).
-
-In the following example, let's assume the following:
-
-- We created a **security tier** (recommended for your enterprise security controls like multi-tenancy)
-- Two development teams (`dev1` and `dev2`), each managing two applications or microservices
-- The security tier will delegate (pass) microsegmentation controls to the application tier
-- We will use a GlobalNetworkPolicy with cluster-wide scope that applies to pods in all namespaces in the cluster.
-
-For example:
-
-![multi-tenancy-ingress](/img/calico-cloud/multi-tenancy-ingress.png)
-
-**GlobalNetworkPolicy 1**
-
-The following policy is for ingress traffic that goes through a trusted load balancer. It resides in the security tier, and uses the **Pass** action rule to delegate control to the application tier for the `dev1` tenant. Because the application tier contains policies that apply to these endpoints, traffic is processed there.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  name: security.ingress-dev1
-spec:
-  tier: security
-  order: 200
-  selector: tenant == "dev1"
-  ingress:
-    - action: Pass
-      source:
-        selector: trusted-ep == "load-balancer"
-    - action: Pass
-      source:
-        selector: tenant == "dev1"
-  types:
-    - Ingress
-```
-
-**GlobalNetworkPolicy 2**
-
-Similarly, we create a second GlobalNetworkPolicy for the `dev2` tenant that delegates control to the application tier for the `dev2` applications.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  name: security.ingress-dev2
-spec:
-  tier: security
-  order: 300
-  selector: tenant == "dev2"
-  ingress:
-    - action: Pass
-      source:
-        selector: trusted-ep == "load-balancer"
-    - action: Pass
-      source:
-        selector: tenant == "dev2"
-  types:
-    - Ingress
-```
-
-## Summary and recommendations
-
-In these examples, we implemented ingress access controls for two development teams. The global network policies have cluster-wide scope (they apply to pods in all namespaces in the cluster). For **egress controls**, we recommend the following:
-
-- Use permissive egress policies for intra-cluster communication (see the sketch after this list)
-- Implement [Global default deny](default-deny.mdx) security controls
-- Implement [Global egress access controls](global-egress.mdx) for communication with endpoints external to the cluster in the security tier. These controls govern the communication of all applications for a given tenant with the outside world.
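-
-For the first recommendation, a permissive intra-cluster egress policy can follow the same pattern as the last rule in the [Global egress access controls](global-egress.mdx) example. The following is only a sketch: the policy name and the `tenant == "dev1"` selector are illustrative and assume the label taxonomy used in these examples.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  name: security.allow-intra-cluster-egress
-spec:
-  tier: security
-  selector: tenant == "dev1"
-  egress:
-    # Allow egress to any pod in any namespace; egress to endpoints outside
-    # the cluster is still governed by the global egress access controls.
-    - action: Allow
-      destination:
-        namespaceSelector: all()
-  types:
-    - Egress
-```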
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/enterprise-security/platform.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/enterprise-security/platform.mdx
deleted file mode 100644
index 1bf0a0b024..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/enterprise-security/platform.mdx
+++ /dev/null
@@ -1,60 +0,0 @@
----
-description: Implement ingress and egress access controls for platform applications.
----
-
-# Platform application access controls
-
-In this article you will learn some best practices to secure platform tier applications.
-
-## Isolate platform applications
-
-You may have several types of platform applications in your environment:
-
-- Storage platform
-- Secret management
-- Container security
-- Platform monitoring
-- Application performance monitoring
-
-We recommend that you implement all your platform application controls in a **platform tier**, allowing granular ingress and egress for platform application pods and denying everything else. This effectively isolates platform applications from business applications, and extends the multi-tenancy and application microsegmentation controls to platform applications.
-
-## Platform order
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: Tier
-metadata:
-  name: platform
-spec:
-  order: 200
-```
-
-An order of 200 means the platform tier comes before the security tier, so its policies are processed first. Some organizations require that security team controls take precedence over all other controls. In that case, the security tier is placed first, and its policies need to pass control to the platform tier for granular enforcement.
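-
-For example, a security tier policy along the following lines could hand platform application traffic off to the platform tier. This is only a sketch: the policy name, order, and `app == "platform-app1"` selector are illustrative.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  name: security.pass-platform-apps
-spec:
-  tier: security
-  order: 100
-  selector: app == "platform-app1"
-  ingress:
-    # Defer the decision to the next tier that has a matching policy.
-    - action: Pass
-  egress:
-    - action: Pass
-  types:
-    - Ingress
-    - Egress
-```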
-
-## Map your application access controls to policy
-
-Next, we recommend that you refer to your platform application's documentation, map out how its pods communicate, and reflect those controls in your network policy.
-
-## Create a global network policy
-
-Here is our global network policy. It allows the `platform-app1` service `platf-app1-svc2` to receive ingress traffic from the service `platf-app1-svc1` on port 12345.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  name: platform.platform-app2-svc2
-spec:
-  tier: platform
-  selector: (app == "platform-app1" && svc == "platf-app1-svc2")
-  ingress:
-    - action: Allow
-      protocol: TCP
-      source:
-        selector: svc == "platf-app1-svc1"
-      destination:
-        ports:
-          - '12345'
-  types:
-    - Ingress
-```
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/index.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/index.mdx
deleted file mode 100644
index ca23ddcf3a..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/index.mdx
+++ /dev/null
@@ -1,52 +0,0 @@
----
-title: Tutorials
-hide_table_of_contents: true
----
-import { DocCardLink, DocCardLinkLayout } from '/src/___new___/components';
-
-# Tutorials
-
-Learn more about Calico Cloud and Kubernetes network policy.
-
-## Calico Cloud features
-
-
-
-
-
-
-
-## Secure ingress and egress for applications
-
-
-
-
-
-
-## Implement enterprise security controls
-
-
-
-
-
-
-
-
-## Kubernetes networking for beginners
-
-
-
-
-
-
-
-
-## Kubernetes tutorials and demos
-
-
-
-
-
-
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/kubernetes-tutorials/index.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/kubernetes-tutorials/index.mdx
deleted file mode 100644
index ac413487dc..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/kubernetes-tutorials/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Kubernetes tutorials and demos
-hide_table_of_contents: true
----
-
-# Kubernetes tutorials and demos
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/kubernetes-tutorials/kubernetes-demo.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/kubernetes-tutorials/kubernetes-demo.mdx
deleted file mode 100644
index f0d3894c14..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/kubernetes-tutorials/kubernetes-demo.mdx
+++ /dev/null
@@ -1,103 +0,0 @@
----
-description: An interactive demo that visually shows how applying Kubernetes policy allows and denies connections.
----
-
-# Kubernetes policy, demo
-
-The included demo sets up a frontend and backend service, as well as a client service, all
-running on Kubernetes. It then configures network policy on each service.
-
-
-
-## Running the stars example
-
-### 1) Create the frontend, backend, client, and management-ui apps.
-
-```shell
-kubectl create -f $[tutorialFilesURL]/00-namespace.yaml
-kubectl create -f $[tutorialFilesURL]/01-management-ui.yaml
-kubectl create -f $[tutorialFilesURL]/02-backend.yaml
-kubectl create -f $[tutorialFilesURL]/03-frontend.yaml
-kubectl create -f $[tutorialFilesURL]/04-client.yaml
-```
-
-Wait for all the pods to enter `Running` state.
-
-```bash
-kubectl get pods --all-namespaces --watch
-```
-
-> Note that it may take several minutes to download the necessary Docker images for this demo.
-
-The management UI runs as a `NodePort` Service on Kubernetes, and shows the connectivity
-of the Services in this example.
-
-You can view the UI by visiting `http://<node-ip>:30002` in a browser, where `<node-ip>` is the IP address of any node in your cluster.
-
-Once all the pods are started, they should have full connectivity. You can see this by visiting the UI. Each service is
-represented by a single node in the graph.
-
-- `backend` -> Node "B"
-- `frontend` -> Node "F"
-- `client` -> Node "C"
-
-### 2) Enable isolation
-
-Running the following commands prevents all access to the frontend, backend, and client Services.
-
-```shell
-kubectl create -n stars -f $[tutorialFilesURL]/default-deny.yaml
-kubectl create -n client -f $[tutorialFilesURL]/default-deny.yaml
-```
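-
-For reference, a default-deny manifest of this kind is typically just a policy that selects every pod without allowing any traffic, similar to the following sketch (the actual contents of `default-deny.yaml` may differ):
-
-```yaml
-kind: NetworkPolicy
-apiVersion: networking.k8s.io/v1
-metadata:
-  name: default-deny
-spec:
-  # Select all pods in the namespace the policy is created in,
-  # without allowing any ingress traffic.
-  podSelector:
-    matchLabels: {}
-```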
-
-#### Confirm isolation
-
-Refresh the management UI (it may take up to 10 seconds for changes to be reflected in the UI).
-Now that we've enabled isolation, the UI can no longer access the pods, and so they will no longer show up in the UI.
-
-### 3) Allow the UI to access the services using network policy objects
-
-Apply the following YAML files to allow access from the management UI.
-
-```shell
-kubectl create -f $[tutorialFilesURL]/allow-ui.yaml
-kubectl create -f $[tutorialFilesURL]/allow-ui-client.yaml
-```
-
-After a few seconds, refresh the UI - it should now show the Services, but they should not be able to access each other anymore.
-
-### 4) Create the backend-policy.yaml file to allow traffic from the frontend to the backend
-
-```shell
-kubectl create -f $[tutorialFilesURL]/backend-policy.yaml
-```
-
-Refresh the UI. You should see the following:
-
-- The frontend can now access the backend (on TCP port 6379 only).
-- The backend cannot access the frontend at all.
-- The client cannot access the frontend, nor can it access the backend.
-
-### 5) Expose the frontend service to the client namespace
-
-```shell
-kubectl create -f $[tutorialFilesURL]/frontend-policy.yaml
-```
-
-The client can now access the frontend, but not the backend. Neither the frontend nor the backend
-can initiate connections to the client. The frontend can still access the backend.
-
-To use $[prodname] to enforce egress policy on Kubernetes pods, see [the advanced policy demo](kubernetes-policy-advanced.mdx).
-
-### 6) (Optional) Clean up the demo environment
-
-You can clean up the demo by deleting the demo Namespaces:
-
-```bash
-kubectl delete ns client stars management-ui
-```
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/kubernetes-tutorials/kubernetes-network-policy.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/kubernetes-tutorials/kubernetes-network-policy.mdx
deleted file mode 100644
index 00f1a04eee..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/kubernetes-tutorials/kubernetes-network-policy.mdx
+++ /dev/null
@@ -1,179 +0,0 @@
----
-description: Learn Kubernetes policy syntax, rules, and features for controlling network traffic.
----
-
-# Get started with Kubernetes network policy
-
-## Big picture
-
-Kubernetes network policy lets administrators and developers enforce which network traffic is allowed using rules.
-
-## Value
-
-Kubernetes network policy lets developers secure access to and from their applications using the same simple language they use to deploy them. Developers can focus on their applications without understanding low-level networking concepts. Enabling developers to easily secure their applications using network policies supports a shift left DevOps environment.
-
-## Concepts
-
-The Kubernetes Network Policy API provides a standard way for users to define network policy for controlling network traffic. However, Kubernetes has no built-in capability to enforce the network policy. To enforce network policy, you must use a network plugin such as Calico.
-
-### Ingress and egress
-
-The bulk of securing network traffic typically revolves around defining egress and ingress rules. From the point of view of a Kubernetes pod, **ingress** is incoming traffic to the pod, and **egress** is outgoing traffic from the pod. In Kubernetes network policy, you create ingress and egress “allow” rules independently (egress, ingress, or both).
-
-### Default deny/allow behavior
-
-**Default allow** means all traffic is allowed by default, unless otherwise specified.
-**Default deny** means all traffic is denied by default, unless explicitly allowed.
-
-## How to
-
-Before you create your first Kubernetes network policy, you need to understand the default network policy behaviors. If no Kubernetes network policies apply to a pod, then all traffic to/from the pod is allowed (default-allow). As a result, if you do not create any network policies, then all pods are allowed to communicate freely with all other pods. If one or more Kubernetes network policies apply to a pod, then only the traffic specifically allowed by those policies is permitted (default-deny).
-
-You are now ready to start fine-tuning traffic that should be allowed.
-
-- [Create ingress policies](#create-ingress-policies)
-- [Allow ingress traffic from pods in the same namespace](#allow-ingress-traffic-from-pods-in-the-same-namespace)
-- [Allow ingress traffic from pods in a different namespace](#allow-ingress-traffic-from-pods-in-a-different-namespace)
-- [Create egress policies](#create-egress-policies)
-- [Allow egress traffic from pods in the same namespace](#allow-egress-traffic-from-pods-in-the-same-namespace)
-- [Allow egress traffic to IP addresses or CIDR range](#allow-egress-traffic-to-ip-addresses-or-cidr-range)
-- [Best practice: create deny-all default network policy](#best-practice-create-deny-all-default-network-policy)
-- [Create deny-all default ingress and egress network policy](#create-deny-all-default-ingress-and-egress-network-policy)
-
-### Create ingress policies
-
-Create ingress network policies to allow inbound traffic from other pods.
-
-Network policies apply to pods within a specific **namespace**. Policies can include one or more ingress rules. To specify which pods in the namespace the network policy applies to, use a **pod selector**. Within the ingress rule, use another pod selector to define which pods allow incoming traffic, and the **ports** field to define on which ports traffic is allowed.
-
-#### Allow ingress traffic from pods in the same namespace
-
-In the following example, incoming traffic to pods with the label **color=blue** is allowed only if it comes from a pod with the label **color=red**, on port **80**.
-
-```yaml
-kind: NetworkPolicy
-apiVersion: networking.k8s.io/v1
-metadata:
-  name: allow-same-namespace
-  namespace: default
-spec:
-  podSelector:
-    matchLabels:
-      color: blue
-  ingress:
-    - from:
-        - podSelector:
-            matchLabels:
-              color: red
-      ports:
-        - port: 80
-```
-
-#### Allow ingress traffic from pods in a different namespace
-
-To allow traffic from pods in a different namespace, use a namespace selector in the ingress policy rule. In the following policy, the namespace selector matches one or more Kubernetes namespaces and is combined with the pod selector that selects pods within those namespaces.
-
-:::note
-
-Namespace selectors can be used only in policy rules. The **spec.podSelector** applies to pods only in the same namespace as the policy.
-
-:::
-
-In the following example, incoming traffic is allowed only if it comes from a pod with the label **color=red**, in a namespace with the label **shape=square**, on port **80**.
-
-```yaml
-kind: NetworkPolicy
-apiVersion: networking.k8s.io/v1
-metadata:
-  name: allow-different-namespace
-  namespace: default
-spec:
-  podSelector:
-    matchLabels:
-      color: blue
-  ingress:
-    - from:
-        - podSelector:
-            matchLabels:
-              color: red
-          namespaceSelector:
-            matchLabels:
-              shape: square
-      ports:
-        - port: 80
-```
-
-### Create egress policies
-
-Create egress network policies to allow outbound traffic from pods.
-
-#### Allow egress traffic from pods in the same namespace
-
-The following policy allows pod outbound traffic to other pods in the same namespace that match the pod selector. In the following example, outbound traffic is allowed only if it goes to a pod with the label **color=red**, on port **80**.
-
-```yaml
-kind: NetworkPolicy
-apiVersion: networking.k8s.io/v1
-metadata:
-  name: allow-egress-same-namespace
-  namespace: default
-spec:
-  podSelector:
-    matchLabels:
-      color: blue
-  egress:
-    - to:
-        - podSelector:
-            matchLabels:
-              color: red
-      ports:
-        - port: 80
-```
-
-#### Allow egress traffic to IP addresses or CIDR range
-
-Egress policies can also be used to allow traffic to specific IP addresses and CIDR ranges. Typically, IP addresses/ranges are used to handle traffic that is external to the cluster for static resources or subnets.
-
-The following policy allows egress traffic to addresses in the CIDR **172.18.0.0/24**.
-
-```yaml
-kind: NetworkPolicy
-apiVersion: networking.k8s.io/v1
-metadata:
-  name: allow-egress-external
-  namespace: default
-spec:
-  podSelector:
-    matchLabels:
-      color: red
-  egress:
-    - to:
-        - ipBlock:
-            cidr: 172.18.0.0/24
-```
-
-### Best practice: create deny-all default network policy
-
-To ensure that all pods in the namespace are secure, a best practice is to create a default network policy. This avoids accidentally exposing an app or version that doesn’t have policy defined.
-
-#### Create deny-all default ingress and egress network policy
-
-The following network policy implements a default **deny-all** ingress and egress policy, which prevents all traffic to/from pods in the **policy-demo** namespace. The policy selects all pods in the namespace, but does not explicitly allow any traffic. Because the default behavior changes once a pod is selected by a network policy, the result is to **deny all ingress and egress traffic**, unless the traffic is allowed by another network policy.
-
-```yaml
-kind: NetworkPolicy
-apiVersion: networking.k8s.io/v1
-metadata:
-  name: default-deny
-  namespace: policy-demo
-spec:
-  podSelector:
-    matchLabels: {}
-  policyTypes:
-    - Ingress
-    - Egress
-```
-
-## Additional resources
-
-- [Kubernetes Network Policy API documentation](https://v1-21.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#networkpolicyspec-v1-networking-k8s-io)
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/kubernetes-tutorials/kubernetes-policy-advanced.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/kubernetes-tutorials/kubernetes-policy-advanced.mdx
deleted file mode 100644
index b0cf5131d9..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/kubernetes-tutorials/kubernetes-policy-advanced.mdx
+++ /dev/null
@@ -1,342 +0,0 @@
----
-description: Learn how to create more advanced Kubernetes network policies (namespace, allow and deny all ingress and egress).
----
-
-# Kubernetes policy, advanced tutorial
-
-The Kubernetes `NetworkPolicy` API allows users to express ingress and egress policies (starting with Kubernetes 1.8.0) for Kubernetes pods,
-based on labels and ports.
-
-This guide walks through using Kubernetes `NetworkPolicy` to define more complex network policies.
-
-## Requirements
-
-- A working Kubernetes cluster and access to it using kubectl
-- Your Kubernetes nodes have connectivity to the public internet
-- You are familiar with [Kubernetes NetworkPolicy](kubernetes-policy-basic.mdx)
-
-## Tutorial flow
-
-1. Create the Namespace and Nginx Service
-1. Deny all ingress traffic
-1. Allow ingress traffic to Nginx
-1. Deny all egress traffic
-1. Allow egress traffic to kube-dns
-1. Cleanup Namespace
-
-## 1. Create the namespace and nginx service
-
-We'll use a new namespace for this guide. Run the following commands to create it and a plain nginx service listening on port 80.
-
-```bash
-kubectl create ns advanced-policy-demo
-kubectl create deployment --namespace=advanced-policy-demo nginx --image=nginx
-kubectl expose --namespace=advanced-policy-demo deployment nginx --port=80
-```
-
-### Verify access - allowed all ingress and egress
-
-Open up a second shell session which has `kubectl` connectivity to the Kubernetes cluster and create a busybox pod to test policy access. This pod will be used throughout this tutorial to test policy access.
-
-```bash
-kubectl run --namespace=advanced-policy-demo access --rm -ti --image busybox /bin/sh
-```
-
-This should open up a shell session inside the `access` pod, as shown below.
-
-```
-Waiting for pod advanced-policy-demo/access-472357175-y0m47 to be running, status is Pending, pod ready: false
-
-If you don't see a command prompt, try pressing enter.
-/ #
-```
-
-Now from within the busybox "access" pod execute the following command to test access to the nginx service.
-
-```bash
-wget -q --timeout=5 nginx -O -
-```
-
-It should return the HTML of the nginx welcome page.
-
-Still within the busybox "access" pod, issue the following command to test access to google.com.
-
-```bash
-wget -q --timeout=5 google.com -O -
-```
-
-It should return the HTML of the google.com home page.
-
-## 2. Deny all ingress traffic
-
-Enable ingress isolation on the namespace by deploying a [default deny all ingress traffic policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-ingress-traffic).
-
-```bash
-kubectl create -f - <
-
-
-Welcome to nginx!...
-```
-
-After creating the policy, we can now access the nginx Service.
-
-## 4. Deny all egress traffic
-
-Enable egress isolation on the namespace by deploying a [default deny all egress traffic policy](https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-egress-traffic).
-
-```bash
-kubectl create -f - <
-
-
-Welcome to nginx!...
-```
-
-Next, try to retrieve the home page of google.com.
-
-```bash
-wget -q --timeout=5 google.com -O -
-```
-
-It should return:
-
-```
-wget: download timed out
-```
-
-Access to `google.com` times out because it can resolve DNS but has no egress access to anything other than pods with labels matching `app: nginx` in the `advanced-policy-demo` namespace.
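-
-For reference, DNS resolution keeps working in this situation because an egress policy allowing traffic to the cluster DNS service is in place. Such a policy typically looks like the following sketch (the policy name and the `kube-system` namespace label are illustrative and may differ from the manifest used in this tutorial):
-
-```yaml
-kind: NetworkPolicy
-apiVersion: networking.k8s.io/v1
-metadata:
-  name: allow-dns-access
-  namespace: advanced-policy-demo
-spec:
-  podSelector:
-    matchLabels: {}
-  policyTypes:
-    - Egress
-  egress:
-    # Allow DNS queries to kube-dns/CoreDNS in the kube-system namespace.
-    - to:
-        - namespaceSelector:
-            matchLabels:
-              kubernetes.io/metadata.name: kube-system
-      ports:
-        - protocol: UDP
-          port: 53
-```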
-
-## 7. Clean up namespace
-
-You can clean up after this tutorial by deleting the advanced policy demo namespace.
-
-```bash
-kubectl delete ns advanced-policy-demo
-```
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/kubernetes-tutorials/kubernetes-policy-basic.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/kubernetes-tutorials/kubernetes-policy-basic.mdx
deleted file mode 100644
index 6ddd899b01..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/kubernetes-tutorials/kubernetes-policy-basic.mdx
+++ /dev/null
@@ -1,209 +0,0 @@
----
-description: Learn how to use basic Kubernetes network policy to securely restrict traffic to/from pods.
----
-
-# Kubernetes policy, basic tutorial
-
-This guide provides a simple way to try out Kubernetes `NetworkPolicy` with $[prodname]. It requires a Kubernetes cluster configured with $[prodname] networking, and expects that you have `kubectl` configured to interact with the cluster.
-
-
-
-## Configure namespaces
-
-This guide will deploy pods in a Kubernetes namespace. Let's create the `Namespace` object for this guide.
-
-```bash
-kubectl create ns policy-demo
-```
-
-## Create demo pods
-
-We'll use Kubernetes `Deployment` objects to easily create pods in the namespace.
-
-1. Create some nginx pods in the `policy-demo` namespace.
-
- ```bash
- kubectl create deployment --namespace=policy-demo nginx --image=nginx
- ```
-
-1. Expose them through a service.
-
- ```bash
- kubectl expose --namespace=policy-demo deployment nginx --port=80
- ```
-
-1. Ensure the nginx service is accessible.
-
- ```bash
- kubectl run --namespace=policy-demo access --rm -ti --image busybox /bin/sh
- ```
-
- This should open up a shell session inside the `access` pod, as shown below.
-
- ```
- Waiting for pod policy-demo/access-472357175-y0m47 to be running, status is Pending, pod ready: false
-
- If you don't see a command prompt, try pressing enter.
-
- / #
- ```
-
-1. From inside the `access` pod, attempt to reach the `nginx` service.
-
- ```bash
- wget -q nginx -O -
- ```
-
- You should see a response from `nginx`. Great! Our service is accessible. You can exit the pod now.
-
-## Enable isolation
-
-Let's turn on isolation in our `policy-demo` namespace. $[prodname] will then prevent connections to pods in this namespace.
-
-Running the following command creates a NetworkPolicy which implements a default deny behavior for all pods in the `policy-demo` namespace.
-
-```bash
-kubectl create -f - <
-
-:::note
-
-This guide provides educational material that is not specific to $[prodname].
-
-:::
-
-Kubernetes defines a network model that helps provide simplicity and consistency across a range of networking
-environments and network implementations. The Kubernetes network model provides the foundation for understanding how
-containers, pods, and services within Kubernetes communicate with each other. This guide explains the key concepts and
-how they fit together.
-
-In this guide you will learn:
-
-- The fundamental network behaviors the Kubernetes network model defines.
-- How Kubernetes works with a variety of different network implementations.
-- What Kubernetes Services are.
-- How DNS works within Kubernetes.
-- What "NAT outgoing" is and when you would want to use it.
-- What "dual stack" is.
-
-## The Kubernetes network model
-
-The Kubernetes network model specifies:
-
-- Every pod gets its own IP address
-- Containers within a pod share the pod IP address and can communicate freely with each other
-- Pods can communicate with all other pods in the cluster using pod IP addresses (without
- [NAT](about-networking.mdx))
-- Isolation (restricting what each pod can communicate with) is defined using network policies
-
-As a result, pods can be treated much like VMs or hosts (they all have unique IP addresses), and the containers within
-pods very much like processes running within a VM or host (they run in the same network namespace and share an IP
-address). This model makes it easier for applications to be migrated from VMs and hosts to pods managed by Kubernetes.
-In addition, because isolation is defined using network policies rather than the structure of the network, the network
-remains simple to understand. This style of network is sometimes referred to as a "flat network".
-
-Note that, although very rarely needed, Kubernetes does also support the ability to map host ports through to pods, or
-to run pods directly within the host network namespace sharing the host's IP address.
-
-## Kubernetes network implementations
-
-Kubernetes' built-in network support, kubenet, can provide some basic network connectivity. However, it is more common to
-use third party network implementations which plug into Kubernetes using the CNI (Container Network Interface) API.
-
-There are lots of different kinds of CNI plugins, but the two main ones are:
-
-- network plugins, which are responsible for connecting pods to the network
-- IPAM (IP Address Management) plugins, which are responsible for allocating pod IP addresses.
-
-$[prodname] provides both network and IPAM plugins, but can also integrate and work seamlessly with some other CNI
-plugins, including AWS, Azure, and Google network CNI plugins, and the host local IPAM plugin. This flexibility allows
-you to choose the best networking options for your specific needs and deployment environment.
-
-## Kubernetes Services
-
-Kubernetes [Services](https://kubernetes.io/docs/concepts/services-networking/service/) provide a way of abstracting access to a group
-of pods as a network service. The group of pods is usually defined using a [label selector](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels).
-Within the cluster, the network service is usually represented as a virtual IP address, and kube-proxy load balances connections to the
-virtual IP across the group of pods backing the service. The virtual IP is discoverable through Kubernetes DNS. The DNS name and
-virtual IP address remain constant for the lifetime of the service, even though the pods backing the service may be
-created or destroyed, and the number of pods backing the service may change over time.
-
-Kubernetes Services can also define how a service is accessed from outside of the cluster, for example using
-
-- a node port, where the service can be accessed via a specific port on every node
-- or a load balancer, where a network load balancer provides a virtual IP address through which the service can be accessed
-  from outside the cluster.
-
-Note that when using $[prodname] in on-prem deployments you can also advertise service IP
-addresses, allowing services to be conveniently accessed without
-going via a node port or load balancer.
-
-## Kubernetes DNS
-
-Each Kubernetes cluster provides a DNS service. Every pod and every service is discoverable through the Kubernetes DNS
-service.
-
-For example:
-
-- Service: `my-svc.my-namespace.svc.cluster-domain.example`
-- Pod: `pod-ip-address.my-namespace.pod.cluster-domain.example`
-- Pod created by a deployment exposed as a service:
- `pod-ip-address.deployment-name.my-namespace.svc.cluster-domain.example`.
-
-The DNS service is implemented as a Kubernetes Service that maps to one or more DNS server pods (usually CoreDNS), which
-are scheduled just like any other pod. Pods in the cluster are configured to use the DNS service, with a DNS search list
-that includes the pod's own namespace and the cluster's default domain.
-
-This means that if there is a service named `foo` in Kubernetes namespace `bar`, then pods in the same namespace can
-access the service as `foo`, and pods in other namespaces can access the service as `foo.bar`.
-
-Kubernetes supports a rich set of options for controlling DNS in different scenarios. You can read more about these in
-the Kubernetes guide [DNS for Services and Pods](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/).
-
-## NAT outgoing
-
-The Kubernetes network model specifies that pods must be able to communicate with each other directly using pod IP
-addresses. But it does not mandate that pod IP addresses are routable beyond the boundaries of the cluster. Many
-Kubernetes network implementations use [overlay networks](about-networking.mdx).
-Typically for these deployments, when a pod initiates a connection to an IP address outside of the cluster, the node
-hosting the pod will SNAT (Source Network Address Translation) map the source address of the packet from the pod IP to
-the node IP. This enables the connection to be routed across the rest of the network to the destination (because the
-node IP is routable). Return packets on the connection are automatically mapped back by the node replacing the node IP
-with the pod IP before forwarding the packet to the pod.
-
-When using $[prodname], depending on your environment, you can generally choose whether you prefer to run an
-overlay network, or prefer to have fully routable pod IPs. $[prodname] also
-allows you to configure outgoing NAT for specific IP address
-ranges if more granularity is desired.
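-
-Outgoing NAT is configured per IP pool. As a minimal sketch (the pool name and CIDR are illustrative), enabling it looks like this:
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: IPPool
-metadata:
-  name: nat-example-pool
-spec:
-  cidr: 10.48.0.0/16
-  # SNAT connections from pods in this pool to destinations outside
-  # all of the cluster's IP pools.
-  natOutgoing: true
-```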
-
-## Dual stack
-
-If you want to use a mix of IPv4 and IPv6 then you can enable Kubernetes [dual-stack](https://kubernetes.io/docs/concepts/services-networking/dual-stack/) mode. When enabled, all
-pods will be assigned both an IPv4 and IPv6 address, and Kubernetes Services can specify whether they should be exposed
-as IPv4 or IPv6 addresses.
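-
-For example, a Service can request both address families using the standard dual-stack fields (a sketch; the service name, selector, and port are illustrative):
-
-```yaml
-apiVersion: v1
-kind: Service
-metadata:
-  name: my-dual-stack-svc
-spec:
-  ipFamilyPolicy: PreferDualStack
-  ipFamilies:
-    - IPv4
-    - IPv6
-  selector:
-    app: my-app
-  ports:
-    - port: 80
-```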
-
-## Additional resources
-
-- [The Kubernetes Network Model](https://kubernetes.io/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model)
-- [Video: Everything you need to know about Kubernetes networking on AWS](https://www.projectcalico.org/everything-you-need-to-know-about-kubernetes-pod-networking-on-aws/)
-- [Video: Everything you need to know about Kubernetes networking on Azure](https://www.projectcalico.org/everything-you-need-to-know-about-kubernetes-networking-on-azure/)
-- [Video: Everything you need to know about Kubernetes networking on Google Cloud](https://www.projectcalico.org/everything-you-need-to-know-about-kubernetes-networking-on-google-cloud/)
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/training/about-kubernetes-services.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/training/about-kubernetes-services.mdx
deleted file mode 100644
index add18d1773..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/training/about-kubernetes-services.mdx
+++ /dev/null
@@ -1,141 +0,0 @@
----
-description: Learn the three main service types and how to use them.
----
-
-# Kubernetes services
-
-:::note
-
-This guide provides educational material that is not specific to $[prodname].
-
-:::
-
-In this guide you will learn:
-
-- What are Kubernetes services?
-- What are the differences between the three main service types and what do you use them for?
-- How do services and network policy interact?
-- Some options for optimizing how services are handled.
-
-## What are Kubernetes services?
-
-Kubernetes [Services](https://kubernetes.io/docs/concepts/services-networking/service/) provide a way of abstracting access to a group
-of pods as a network service. The group of pods backing each service is usually defined using a [label selector](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels).
-
-When a client connects to a Kubernetes service, the connection is load balanced to one of the pods backing the service,
-as illustrated in this conceptual diagram:
-
-![Kubernetes Service conceptual diagram](/img/calico-cloud/k8s-service-concept.svg)
-
-There are three main types of Kubernetes services:
-
-- Cluster IP - which is the usual way of accessing a service from inside the cluster
-- Node port - which is the most basic way of accessing a service from outside the cluster
-- Load balancer - which uses an external load balancer as a more sophisticated way to access a service from outside the
- cluster.
-
-## Cluster IP services
-
-The default service type is `ClusterIP`. This allows a service to be accessed within the cluster via a virtual IP
-address, known as the service Cluster IP. The Cluster IP for a service is discoverable through Kubernetes DNS. For
-example, `my-svc.my-namespace.svc.cluster-domain.example`. The DNS name and Cluster IP address remain constant for the
-lifetime of the service, even though the pods backing the service may be created or destroyed, and the number of pods
-backing the service may change over time.
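-
-As a minimal sketch (the names, selector, and ports are illustrative), a Cluster IP service looks like this:
-
-```yaml
-apiVersion: v1
-kind: Service
-metadata:
-  name: my-svc
-  namespace: my-namespace
-spec:
-  # type defaults to ClusterIP when omitted.
-  type: ClusterIP
-  selector:
-    app: my-app
-  ports:
-    - port: 80
-      targetPort: 8080
-```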
-
-In a typical Kubernetes deployment, kube-proxy runs on every node and is responsible for intercepting connections to
-Cluster IP addresses and load balancing across the group of pods backing each service. As part of this process
-[DNAT](about-networking.mdx) is used to map the destination IP address from the Cluster IP to the
-chosen backing pod. Response packets on the connection then have the NAT reverse on their way back to the pod that
-initiated the connection.
-
-![kube-proxy cluster IP](/img/calico-cloud/kube-proxy-cluster-ip.svg)
-
-Importantly, network policy is enforced based on the pods, not the service Cluster IP. (i.e. Egress network policy is
-enforced for the client pod after the DNAT has changed the connection's destination IP to the chosen service backing
-pod. And because only the destination IP for the connection is changed, ingress network policy for the backing pod sees the
-original client pod as the source of the connection.)
-
-## Node port services
-
-The most basic way to access a service from outside the cluster is to use a service of type `NodePort`. A Node Port is a
-port reserved on each node in the cluster through which the service can be accessed. In a typical Kubernetes deployment,
-kube-proxy is responsible for intercepting connections to Node Ports and load balancing them across the pods backing
-each service.
-
-As part of this process NAT is used to map the destination IP address and
-port from the node IP and Node Port, to the chosen backing pod and service port. In addition the source IP address is
-mapped from the client IP to the node IP, so that response packets on the connection flow back via the original node,
-where the NAT can be reversed. (It's the node which performed the NAT that has the connection tracking state needed to
-reverse the NAT.)
-
-![kube-proxy node port](/img/calico-cloud/kube-proxy-node-port.svg)
-
-Note that because the connection source IP address is SNATed to the node IP address, ingress network policy for the
-service backing pod does not see the original client IP address. Typically this means that any such policy is limited to
-restricting the destination protocol and port, and cannot restrict based on the client / source IP. This limitation can
-be circumvented in some scenarios by using [externalTrafficPolicy](#externaltrafficpolicylocal) or by using
-$[prodname]'s eBPF dataplane native service handling (rather than kube-proxy) which preserves source IP address.
-
-## Load balancer services
-
-Services of type `LoadBalancer` expose the service via an external network load balancer (NLB). The exact type of
-network load balancer depends on your public cloud provider or, if on-prem, which specific hardware load balancer is
-integrated with your cluster.
-
-The service can be accessed from outside of the cluster via a specific IP address on the network load balancer, which by
-default will load balance evenly across the nodes using the service node port.
-
-![kube-proxy load balancer](/img/calico-cloud/kube-proxy-load-balancer.svg)
-
-Most network load balancers preserve the client source IP address, but because the service then goes via a node port,
-the backing pods themselves do not see the client IP, with the same implications for network policy. As with node
-ports, this limitation can be circumvented in some scenarios by using [externalTrafficPolicy](#externaltrafficpolicylocal)
-or by using $[prodname]'s eBPF dataplane [native service handling](#calico-ebpf-native-service-handling) (rather
-than kube-proxy) which preserves source IP address.
-
-## Advertising service IPs
-
-One alternative to using node ports or network load balancers is to advertise service IP addresses over BGP. This
-requires the cluster to be running on an underlying network that supports BGP, which typically means an on-prem
-deployment with standard Top of Rack routers.
-
-$[prodname] supports advertising service Cluster IPs, or External IPs for services configured with one. If you are
-not using Calico as your network plugin then [MetalLB](https://github.com/metallb/metallb) provides similar capabilities that work with a variety of different network
-plugins.
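-
-With $[prodname], service IP advertisement is configured on the default BGPConfiguration resource. A minimal sketch (the CIDRs are illustrative and must match your cluster's service IP ranges):
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: BGPConfiguration
-metadata:
-  name: default
-spec:
-  # Advertise the service Cluster IP range over BGP.
-  serviceClusterIPs:
-    - cidr: 10.96.0.0/12
-  # Optionally advertise external IPs assigned to services.
-  serviceExternalIPs:
-    - cidr: 192.168.255.0/24
-```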
-
-![kube-proxy service advertisement](/img/calico-cloud/kube-proxy-service-advertisement.svg)
-
-## externalTrafficPolicy:local
-
-By default, whether using service type `NodePort` or `LoadBalancer` or advertising service IP addresses over BGP,
-accessing a service from outside the cluster load balances evenly across all the pods backing the service, independent
-of which node the pods are on. This behavior can be changed by configuring the service with
-`externalTrafficPolicy:local` which specifies that connections should only be load balanced to pods backing the service
-on the local node.
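-
-A sketch of what this looks like on a service (the name, selector, and ports are illustrative):
-
-```yaml
-apiVersion: v1
-kind: Service
-metadata:
-  name: my-svc
-spec:
-  type: LoadBalancer
-  # Only load balance to backing pods on the node that received the traffic,
-  # preserving the client source IP.
-  externalTrafficPolicy: Local
-  selector:
-    app: my-app
-  ports:
-    - port: 80
-      targetPort: 8080
-```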
-
-When combined with services of type `LoadBalancer` or with $[prodname] service IP address advertising, traffic is
-only directed to nodes that host at least one pod backing the service. This reduces the potential extra network hop
-between nodes, and perhaps more importantly, maintains the source IP address all the way to the pod, so network policy
-can restrict access to specific external clients if desired.
-
-![kube-proxy service advertisement](/img/calico-cloud/kube-proxy-service-local.svg)
-
-Note that in the case of services of type `LoadBalancer`, not all Load Balancers support this mode. And in the case of
-service IP advertisement, the evenness of the load balancing becomes topology dependent. In this case, pod anti-affinity
-rules can be used to ensure even distribution of backing pods across your topology, but this does add some complexity to
-deploying the service.
-
-## Calico eBPF native service handling
-
-As an alternative to using Kubernetes standard kube-proxy, $[prodname]'s eBPF
-dataplane supports native service handling. This preserves source IP to
-simplify network policy, offers DSR (Direct Server Return) to reduce the number of network hops for return traffic, and
-provides even load balancing independent of topology, with reduced CPU and latency compared to kube-proxy.
-
-![kube-proxy service advertisement](/img/calico-cloud/calico-native-service-handling.svg)
-
-## Additional resources
-
-- [Video: Everything you need to know about Kubernetes Services networking ](https://www.projectcalico.org/everything-you-need-to-know-about-kubernetes-services-networking/)
-- [Blog: Introducing the Calico eBPF dataplane](https://www.projectcalico.org/introducing-the-calico-ebpf-dataplane/)
-- [Blog: Hands on with Calico eBPF native service handling](https://www.projectcalico.org/hands-on-with-calicos-ebpf-service-handling/)
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/training/about-network-policy.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/training/about-network-policy.mdx
deleted file mode 100644
index e7073bfbea..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/training/about-network-policy.mdx
+++ /dev/null
@@ -1,231 +0,0 @@
----
-description: Learn the basics of Kubernetes and Calico Cloud network policy
----
-
-# What is network policy?
-
-:::note
-
-This guide provides educational material that is not specific to $[prodname].
-
-:::
-
-Kubernetes and $[prodname] provide network policy APIs to help you secure your workloads.
-
-In this guide you will learn:
-
-- What network policy is and why it is important.
-- The differences between Kubernetes and $[prodname] network policies and when you might want to use each.
-- Some best practices for using network policy.
-
-## What is network policy?
-
-Network policy is the primary tool for securing a Kubernetes network. It allows you to easily restrict the network
-traffic in your cluster so only the traffic that you want to flow is allowed.
-
-To understand the significance of network policy, let's briefly explore how network security was typically achieved
-prior to network policy. Historically in enterprise networks, network security was provided by designing a physical
-topology of network devices (switches, routers, firewalls) and their associated configuration. The physical topology
-defined the security boundaries of the network. In the first phase of virtualization, the same network and network
-device constructs were virtualized in the cloud, and the same techniques for creating specific network topologies of
-(virtual) network devices were used to provide network security. Adding new applications or services often required
-additional network design to update the network topology and network device configuration to provide the desired
-security.
-
-In contrast, the [Kubernetes network model](about-kubernetes-networking.mdx) defines a "flat"
-network in which every pod can communicate with all other pods in the cluster using pod IP addresses. This approach
-massively simplifies network design and allows new workloads to be scheduled dynamically anywhere in the cluster with no
-dependencies on the network design.
-
-In this model, rather than network security being defined by network topology boundaries, it is defined using network
-policies that are independent of the network topology. Network policies are further abstracted from the network by using
-label selectors as their primary mechanism for defining which workloads can talk to which workloads, rather than IP
-addresses or IP address ranges.
-
-## Why is network policy important?
-
-In an age where attackers are becoming more and more sophisticated, network security as a line of defense is more important
-than ever.
-
-While you can (and should) use firewalls to restrict traffic at the perimeters of your network (commonly referred to as
-north-south traffic), their ability to police Kubernetes traffic is often limited to a granularity of the cluster as a
-whole, rather than to specific groups of pods, due to the dynamic nature of pod scheduling and pod IP addresses. In
-addition, the goal of most attackers once they gain a small foothold inside the perimeter is to move laterally (commonly
-referred to as east-west) to gain access to higher value targets, which perimeter based firewalls can't police against.
-
-Network policy on the other hand is designed for the dynamic nature of Kubernetes by following the standard Kubernetes
-paradigm of using label selectors to define groups of pods, rather than IP addresses. And because network policy is
-enforced within the cluster itself it can police both north-south and east-west traffic.
-
-Network policy represents an important evolution of network security, not just because it handles the dynamic nature of
-modern microservices, but because it empowers dev and devops engineers to easily define network security themselves,
-rather than needing to learn low-level networking details or raise tickets with a separate team responsible for managing
-firewalls. Network policy makes it easy to define intent, such as "only this microservice gets to connect to the
-database", write that intent as code (typically in YAML files), and integrate authoring of network policies into git
-workflows and CI/CD processes.
-
-:::note
-
-$[prodname] offers capabilities that can help perimeter firewalls integrate
-more tightly with Kubernetes. However, this does not remove the need for, or the value of, network policies within the cluster itself.
-
-:::
-
-## Kubernetes network policy
-
-Kubernetes network policies are defined using the Kubernetes [NetworkPolicy](https://kubernetes.io/docs/reference/kubernetes-api/policy-resources/network-policy-v1/) resource.
-
-The main features of Kubernetes network policies are:
-
-- Policies are namespace scoped (i.e. you create them within the context of a specific namespace just like, for example, pods)
-- Policies are applied to pods using label selectors
-- Policy rules can specify the traffic that is allowed to/from other pods, namespaces, or CIDRs
-- Policy rules can specify protocols (TCP, UDP, SCTP), named ports or port numbers
-
-Kubernetes itself does not enforce network policies, and instead delegates their enforcement to network plugins. Most
-network plugins implement the mainline elements of Kubernetes network policies, though not all implement every feature
-of the specification. ($[prodname] does implement every feature, and was the original reference implementation of Kubernetes
-network policies.)
-
-
-
-## $[prodname] network policy
-
-In addition to enforcing Kubernetes network policy, $[prodname] supports its own
-namespaced [NetworkPolicy](../../reference/resources/networkpolicy.mdx) and non-namespaced
-[GlobalNetworkPolicy](../../reference/resources/globalnetworkpolicy.mdx) resources, which provide additional
-features beyond those supported by Kubernetes network policy. This includes support for:
-
-- policy ordering/priority
-- deny and log actions in rules
-- more flexible match criteria for applying policies and in policy rules, including matching on Kubernetes
- ServiceAccounts, and (if using Istio & Envoy) cryptographic identity and layer 5-7 match criteria such as HTTP & gRPC URLs.
-- ability to reference non-Kubernetes workloads in policies, including matching on
- [NetworkSets](../../reference/resources/networkset.mdx) in policy rules
-
-While Kubernetes network policy applies only to pods, $[prodname] network policy can be applied to multiple types of
-endpoints including pods, VMs, and host interfaces.
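-
-As a small illustration of some of these extra capabilities, the following namespaced $[prodname] policy uses an explicit order and Log and Deny rule actions. This is only a sketch; the name, namespace, and selectors are illustrative.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: NetworkPolicy
-metadata:
-  name: default.back-end-deny-untrusted
-  namespace: staging
-spec:
-  tier: default
-  order: 10
-  selector: app == "back-end"
-  types:
-    - Ingress
-  ingress:
-    # Allow the front end to reach the back end.
-    - action: Allow
-      protocol: TCP
-      source:
-        selector: app == "front-end"
-      destination:
-        ports:
-          - 443
-    # Log, then deny, everything else.
-    - action: Log
-    - action: Deny
-```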
-
-To learn more about $[prodname] network policies, read the [Get started with $[prodname] network policy](../../network-policy/beginners/calico-network-policy.mdx)
- guide.
-
-## Benefits of using $[prodname] for network policy
-
-### Full Kubernetes network policy support
-
-Unlike some other network policy implementations, $[prodname] implements the full set of Kubernetes network policy features.
-
-### Richer network policy
-
-$[prodname] network policies allow even richer traffic control than Kubernetes network policies if you need it. In addition,
-$[prodname] network policies allow you to create policy that applies across multiple namespaces using GlobalNetworkPolicy
-resources.
-
-### Mix Kubernetes and $[prodname] network policy
-
-Kubernetes and $[prodname] network policies can be mixed together seamlessly. One common use case for this is to split
-responsibilities between security / cluster ops teams and developer / service teams. For example, giving the security /
-cluster ops team RBAC permissions to define $[prodname] policies, and giving developer / service teams RBAC permissions to
-define Kubernetes network policies in their specific namespaces. As $[prodname] policy rules can be ordered to be enforced
-either before or after Kubernetes network policies, and can include actions such as deny and log, this allows the
-security / cluster ops team to define basic higher-level more-general purpose rules, while empowering the developer /
-service teams to define their own fine-grained constraints on the apps and services they are responsible for.
-
-For more flexible control and delegation of responsibilities between two or more teams, $[prodname] extends this
-model to provide hierarchical policy.
-
-![Example mix of network policy types](/img/calico-cloud/example-k8s-calico-policy-mix.svg)
-
-### Extendable with $[prodname]
-
-Calico Cloud adds even richer network policy capabilities, with the ability
-to specify hierarchical policies, with each team having its own boundaries of trust, and FQDN / domain names in policy
-rules for restricting access to specific external services.
-
-## Best practices for network policies
-
-### Ingress and egress
-
-At a minimum we recommend that every pod is protected by network policy ingress rules that restrict what is allowed
-to connect to the pod and on which ports. The best practice is also to define network policy egress rules that restrict
-the outgoing connections that are allowed by pods themselves. Ingress rules protect your pod from attacks outside of the
-pod. Egress rules help protect everything outside of the pod if the pod gets compromised, reducing the attack surface by
-making it harder for an attacker to move laterally (east-west) or to exfiltrate compromised data from your cluster (north-south).
-
-### Policy schemas
-
-Due to the flexibility of network policy and labelling, there are often multiple different ways of labelling and writing
-policies that can achieve the same particular goal. One of the most common approaches is to have a small number of
-global policies that apply to all pods, and then a single pod specific policy that defines all the ingress and egress
-rules that are particular to that pod.
-
-For example:
-
-```yaml
-kind: NetworkPolicy
-apiVersion: networking.k8s.io/v1
-metadata:
-  name: front-end
-  namespace: staging
-spec:
-  podSelector:
-    matchLabels:
-      app: back-end
-  ingress:
-    - from:
-        - podSelector:
-            matchLabels:
-              app: front-end
-      ports:
-        - protocol: TCP
-          port: 443
-  egress:
-    - to:
-        - podSelector:
-            matchLabels:
-              app: database
-      ports:
-        - protocol: TCP
-          port: 27017
-```
-
-### Default deny
-
-One approach to ensuring these best practices are being followed is to define [default deny](../../network-policy/beginners/kubernetes-default-deny.mdx)
- network policies. These ensure that if no other policy is
-defined that explicitly allows traffic to/from a pod, then the traffic will be denied. As a result, anytime a team
-deploys a new pod, they are forced to also define network policy for the pod. It can be useful to use a $[prodname]
-GlobalNetworkPolicy for this (rather than needing to define a policy every time a new namespace is created) and to
-include some exceptions to the default deny (for example to allow pods to access DNS).
-
-For example, you might use the following policy to default-deny all (non-system) pod traffic except for DNS queries to kube-dns/core-dns.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalNetworkPolicy
-metadata:
-  name: default-app-policy
-spec:
-  namespaceSelector: has(projectcalico.org/name) && projectcalico.org/name not in {"kube-system", "calico-system"}
-  types:
-    - Ingress
-    - Egress
-  egress:
-    - action: Allow
-      protocol: UDP
-      destination:
-        selector: k8s-app == "kube-dns"
-        ports:
-          - 53
-```
-
-### Hierarchical policy
-
-[Calico Cloud](../../network-policy/policy-tiers/tiered-policy.mdx) supports hierarchical network policy using policy tiers. RBAC
-for each tier can be defined to restrict who can interact with each tier. This can be used to delegate trust across
-multiple teams.
-
-![Example tiers](/img/calico-cloud/example-tiers.svg)
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/training/about-networking.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/training/about-networking.mdx
deleted file mode 100644
index bb23819ee8..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/training/about-networking.mdx
+++ /dev/null
@@ -1,164 +0,0 @@
----
-description: Learn about networking layers, packets, IP addressing, and routing.
----
-
-# Networking overview
-
-:::note
-
-This guide provides educational material that is not specific to $[prodname].
-
-:::
-
-You can get up and running with $[prodname] without needing to be a networking expert. $[prodname] hides the complexities for
-you. However, if you would like to learn more about networking so you can better understand what is happening under the
-covers, this guide provides a short introduction to some of the key fundamental networking concepts for anyone who is
-not already familiar with them.
-
-In this guide you will learn:
-
-- The terms used to describe different layers of the network.
-- The anatomy of a network packet.
-- What MTU is and why it makes a difference.
-- How IP addressing, subnets, and IP routing works.
-- What an overlay network is.
-- What DNS and NAT are.
-
-## Network layers
-
-The process of sending and receiving data over a network is commonly categorized into 7 layers (referred to as the [OSI model](https://en.wikipedia.org/wiki/OSI_model)). The layers are
-typically abbreviated as L1 - L7. You can think of data as passing through each of these layers in turn as it is sent or
-received from an application, with each layer being responsible for a particular part of the processing required to
-send or receive the data over the network.
-
-![OSI network layers diagram](/img/calico-cloud/osi-network-layers.svg)
-
-In a modern enterprise or public cloud network, the layers commonly map as follows:
-
-- L5-7: all the protocols most application developers are familiar with. e.g. HTTP, FTP, SSH, SSL, DNS.
-- L4: TCP or UDP, including source and destination ports.
-- L3: IP packets and IP routing.
-- L2: Ethernet packets and Ethernet switching.
-
-## Anatomy of a network packet
-
-When sending data over the network, each layer in the network stack adds its own header containing the control/metadata
-the layer needs to process the packet as it traverses the network, passing the resulting packet on to the next
-layer of the stack. In this way the complete packet is produced, which includes all the control/metadata required by
-every layer of the stack, without any layer understanding the data or needing to process the control/metadata of
-adjacent network layers.
-
-![Anatomy of a network packet](/img/calico-cloud/anatomy-of-a-packet.svg)
-
-## IP addressing, subnets and IP routing
-
-The L3 network layer introduces IP addresses and typically marks the boundary between the part of networking that
-application developers care about, and the part of networking that network engineers care about. In particular
-application developers typically regard IP addresses as the source and destination of the network traffic, but have much
-less of a need to understand L3 routing or anything lower in the network stack, which is more the domain of network
-engineers.
-
-There are two variants of IP addresses: IPv4 and IPv6.
-
-- IPv4 addresses are 32 bits long and the most commonly used. They are typically represented as 4 bytes in decimal (each
- 0-255) separated by dots. e.g. `192.168.27.64`. There are several ranges of IP addresses that are reserved as
- "private", that can only be used within local private networks, are not routable across the internet. These can be
- reused by enterprises as often as they want to. In contrast "public" IP addresses are globally unique across the whole
- of the internet. As the number of network devices and networks connected to the internet has grown, public IPv4
- addresses are now in short supply.
-- IPv6 addresses are 128 bits long and designed to overcome the shortage of IPv4 address space. They are typically
- represented by 8 groups of 4 digit hexadecimal numbers. e.g. `1203:8fe0:fe80:b897:8990:8a7c:99bf:323d`. Due to the 128
- bit length, there's no shortage of IPv6 addresses. However, many enterprises have been slow to adopt IPv6, so for now
- at least, IPv4 remains the default for many enterprise and data center networks.
-
-Groups of IP addresses are typically represented using [CIDR notation](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) that consists of an IP address and number of
-significant bits on the IP address separated by a `/`. For example, `192.168.27.0/24` represents the group of 256 IP
-addresses from `192.168.27.0` to `192.168.27.255`.
-
-A group of IP addresses within a single L2 network is referred to as a subnet. Within a subnet, packets can be sent
-between any pair of devices as a single network hop, based solely on the L2 header (and footer).
-
-To send packets beyond a single subnet requires L3 routing, with each L3 network device (router) being responsible for
-making decisions on the path to send the packet based on L3 routing rules. Each network device acting as a router has
-routes that determine where a packet for a particular CIDR should be sent next. So for example, in a Linux system, a
-route of `10.48.0.128/26 via 10.0.0.12 dev eth0` indicates that packets with destination IP address in `10.48.0.128/26`
-should be routed to a next network hop of `10.0.0.12` over the `eth0` interface.
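-
-For example, on a Linux host you could inspect the routing table, or add the route described above, with the `ip` tool:
-
-```bash
-# Show the current routing table
-ip route show
-
-# Add a static route: send traffic for 10.48.0.128/26 to next hop 10.0.0.12 via eth0
-ip route add 10.48.0.128/26 via 10.0.0.12 dev eth0
-```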
-
-Routes can be configured statically by an administrator, or programmed dynamically using routing protocols. When using
-routing protocols each network device typically needs to be configured to tell it which other network devices it should
-be exchanging routes with. The routing protocol then handles programming the right routes across the whole of the
-network as devices are added or removed, or network links come in or out of service.
-
-One common routing protocol used in large enterprise and data center networks is [BGP](https://en.wikipedia.org/wiki/Border_Gateway_Protocol). BGP is one of the main protocols that powers
-the internet, so scales incredibly well, and is very widely supported by modern routers.
-
-## Overlay networks
-
-An overlay network allows network devices to communicate across an underlying network (referred to as the underlay)
-without the underlay network having any knowledge of the devices connected to the overlay network. From the point of
-view of the devices connected to the overlay network, it looks just like a normal network. There are many different
-kinds of overlay networks that use different protocols to make this happen, but in general they share the same common
-characteristic of taking a network packet, referred to as the inner packet, and encapsulating it inside an outer network
-packet. In this way the underlay sees the outer packets without needing to understand how to handle the inner packets.
-
-How the overlay knows where to send packets varies by overlay type and the protocols they use. Similarly exactly how the
-packet is wrapped varies between different overlay types. In the case of VXLAN for example, the inner packet is wrapped
-and sent as UDP in the outer packet.
-
-![Anatomy of an overlay network packet](/img/calico-cloud/anatomy-of-an-overlay-packet.svg)
-
-Overlay networks have the advantage of having minimal dependencies on the underlying network infrastructure, but have
-the downsides of:
-
-- having a small performance impact compared to non-overlay networking, which you might want to avoid if running
- network intensive workloads
-- workloads on the overlay are not easily addressable from the rest of the network, so NAT gateways or load balancers
- are required to bridge between the overlay and the underlay network for any ingress to, or egress from, the overlay.
-
-$[prodname] networking options are exceptionally flexible, so in general you can choose whether you prefer
-$[prodname] to provide an overlay network, or non-overlay network.
-
-## DNS
-
-While the underlying network packet flow across the network is determined using IP addresses, users and applications
-typically want to use well known names to identify network destinations that remain consistent over time, even if the
-underlying IP addresses change. For example, to map `google.com` to `216.58.210.46`. This translation from name to IP
-address is handled by [DNS](https://en.wikipedia.org/wiki/Domain_Name_System). DNS runs on top of the base networking described so far. Each device connected to a network is typically configured
-with the IP addresses of one or more DNS servers. When an application wants to connect to a domain name, a DNS message is
-sent to the DNS server, which then responds with information about which IP address(es) the domain name maps to. The
-application can then initiate its connection to the chosen IP address.
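-
-For example, you can see this name-to-IP translation using a standard lookup tool such as `dig` (the addresses returned will vary):
-
-```bash
-dig +short google.com
-```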
-
-## NAT
-
-Network Address Translation ([NAT](https://en.wikipedia.org/wiki/Network_address_translation)) is the process of mapping an IP address in a packet
-to a different IP address as the packet passes through the device performing the NAT. Depending on the use case, NAT can
-apply to the source or destination IP address, or to both addresses.
-
-One common use case for NAT is to allow devices with private IP addresses to talk to devices with public IP addresses across
-the internet. For example, if a device with a private IP address attempts to connect to a public IP address, then the
-router at the border of the private network will typically use SNAT (Source Network Address Translation) to map the
-private source IP address of the packet to the router's own public IP address before forwarding it on to the internet.
-The router then maps response packets coming in the opposite direction back to the original private IP address, so
-packets flow end-to-end in both directions, with neither source nor destination being aware that the mapping is happening. The
-same technique is commonly used to allow devices connected to an overlay network to connect with devices outside of the
-overlay network.
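-
-As a generic (non-$[prodname]-specific) illustration, a Linux router often implements this kind of SNAT with an iptables masquerade rule, where `eth0` is assumed to be the internet-facing interface:
-
-```bash
-# Rewrite the source address of outgoing packets to the address of eth0
-iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
-```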
-
-Another common use case for NAT is load balancing. In this case the load balancer performs DNAT (Destination Network
-Address Translation) to change the destination IP address of the incoming connection to the IP address of the chosen
-device it is load balancing to. The load balancer then reverses this NAT on response packets so neither source nor
-destination device is aware the mapping is happening.
-
-## MTU
-
-The Maximum Transmission Unit ([MTU](https://en.wikipedia.org/wiki/Maximum_transmission_unit)) of a network link is the maximum size of packet that
-can be sent across that network link. It is common for all links in a network to be configured with the same MTU to
-reduce the need to fragment packets as they traverse the network, which can significantly lower the performance of the
-network. In addition, TCP tries to learn path MTUs, and adjust packet sizes for each network path based on the smallest
-MTU of any of the links in the network path. When an application tries to send more data than can fit in a single
-packet, TCP splits the data into multiple TCP segments so that the MTU is not exceeded.
-
-Most networks have links with an MTU of 1,500 bytes, but some networks support MTUs of 9,000 bytes. In a Linux system,
-larger MTU sizes can result in lower CPU being used by the Linux networking stack when sending large amounts of data,
-because it has to process fewer packets for the same amount of data. Depending on the network interface hardware being
-used, some of this overhead may be offloaded to the network interface hardware, so the impact of small vs large MTU
-sizes varies from device to device.
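-
-For example, on a Linux host you can check or change the MTU of an interface with the `ip` tool (`eth0` and the value `9000` are placeholders; only raise the MTU if every link in the path supports it):
-
-```bash
-ip link show dev eth0
-ip link set dev eth0 mtu 9000
-```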
diff --git a/calico-cloud_versioned_docs/version-20-1/tutorials/training/index.mdx b/calico-cloud_versioned_docs/version-20-1/tutorials/training/index.mdx
deleted file mode 100644
index 63269b5401..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/tutorials/training/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Learn the basics of Kubernetes networking and Calico Cloud networking.
-hide_table_of_contents: true
----
-
-# Kubernetes for beginners
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/users/create-and-assign-custom-roles.mdx b/calico-cloud_versioned_docs/version-20-1/users/create-and-assign-custom-roles.mdx
deleted file mode 100644
index f3c155caa7..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/users/create-and-assign-custom-roles.mdx
+++ /dev/null
@@ -1,85 +0,0 @@
----
-description: Create and assign custom roles to give specific, cluster-specific permission to your users.
----
-
-import IconUser from '/img/icons/user-icon.svg';
-
-# Creating and assigning custom roles
-
-As an administrator, you can create custom, cluster-specific roles that restrict users to particular functions on a cluster.
-
-## Overview
-
-$[prodname] comes with a set of predefined global roles that let you give permissions to users based on what they need to do.
-For example, a user with the Security global role has a broader set of permissions than a user with the Viewer global role.
-These permissions apply to all clusters.
-
-But in some cases these global roles can be too broad.
-
-By creating and assigning custom roles, you can be much more discriminating about what permissions you give users.
-For example, you could create a role that allows the user to modify network policy for a particular tier and namespace and gives view access to all other network policies.
-Permissions are assigned on a cluster-by-cluster basis.
-
-## Required permissions for common $[prodname] features
-
-Certain permissions are required for a user to access common $[prodname] features.
-
-| Feature area | Required permissions | Notes |
-| -- | -- | -- |
-| Alerts | • **View** or **Modify Alerts** and • **View Event Logs** or **View All Logs** | |
-| Compliance reports | • **View Compliance Reports** | |
-| Dashboard | • **View All Logs** and • **View Global Network Sets** or **View Network Sets** and (optional) • **View Compliance Reports** | These permissions are required for the dashboard to fully populate. All users are granted limited dashboard metrics by having access to a cluster. |
-| Network policies | • **View** or **Modify Policies** or • **View** or **Modify Global Policies** and (optional) • **View Audit Logs** or **View All Logs** | The **Policies** permissions apply to one or more namespaces. The **Global Policies** permissions apply to the whole cluster. These permissions are also scoped by [policy tier](../network-policy/policy-tiers/tiered-policy.mdx). The optional **View Audit Logs** or **View All Logs** let users view the change history on the policies. |
-| Service graph | • **View All Logs** and • **View** or **Modify Network Sets** and (optional) • **View** or **Modify Packet Captures** | Network sets can be restricted to a namespace or set to all namespaces to see all flows. |
-| Threat feeds | • **View** or **Modify Threat Feeds** | |
-| Timeline | • **View Event Logs** or **View All Logs** | |
-
-## Before you begin
-
-* You are signed in with owner or administrator permissions to the Calico Cloud Manager UI.
-
-## Create a custom role and add permissions
-
-1. Click the user icon > **Manage Team**.
-1. Under the **Roles** tab, click **Add Role**, enter a name and description for the custom role, and then click **Save**.
-1. Select the cluster you want the role to apply to by clicking **Cluster:** and choosing the cluster.
-1. Locate your new role in the list, select **Action** > **Manage permissions** > **Edit**, and then click **Add Permission**.
-1. Under **Permission**, choose a permission type from the list.
- Depending on the permission, you may also need to choose a namespace or policy tier.
-1. (optional) Click **Add permission** to add more permissions to your role for this cluster.
-1. Click **Save** to save these permissions to the role for this cluster.
-1. (optional) If you want to add the permissions for another cluster, repeat steps 3 to 7 for the cluster.
-
-## Assign custom roles to a user
-
-1. Select the user icon > **Manage Team**.
-1. Under the **Users** tab, locate the user in the list and select **Action** > **Edit**.
-1. Select the checkboxes for each custom role you want to assign to this user and then click **Save**.
-
-## Export custom roles and apply to other managed clusters
-
-You can export custom roles from one cluster and apply them to another cluster.
-
-:::note
-Importing custom roles is fully supported on managed clusters running Calico Cloud 18.3 or higher.
-:::
-
-***Prerequisites***
-
-* You connected two or more managed clusters to Calico Cloud.
-* You have a managed cluster with one or more custom roles.
-* You have `kubectl` administrator permissions for the managed clusters you want to apply custom roles to.
-
-***Procedure***
-
-1. From the cluster menu in the Calico Cloud Manager UI, select the managed cluster that has the custom roles you want to export.
-1. Click the user icon > **Manage Team**.
-1. Under the **Roles** tab, click **Export Custom Roles** and select **Download YAML** to download the custom role definitions.
-This file contains definitions for all the custom roles you created in this cluster.
-1. For each managed cluster you want to apply the custom roles to, run the following command:
-
- ```bash
- kubectl apply -f roles..yaml
- ```
-
- The custom roles are available immediately for this cluster in the Calico Cloud Manager UI.
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/users/create-custom-role-for-entra-id-group.mdx b/calico-cloud_versioned_docs/version-20-1/users/create-custom-role-for-entra-id-group.mdx
deleted file mode 100644
index 99c2ba5bd0..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/users/create-custom-role-for-entra-id-group.mdx
+++ /dev/null
@@ -1,34 +0,0 @@
----
-description: Create custom roles for Entra ID groups.
-title: Assign roles to Entra ID groups
----
-
-import IconUser from '/img/icons/user-icon.svg';
-
-# Give role-based access to an Entra ID group
-
-If you have Microsoft Entra ID configured as your identity provider, you can define role-based access in Calico Cloud and assign that role to an Entra ID (formerly Azure AD) security group.
-By managing membership in that security group, you can manage role-based access to Calico Cloud directly from your identity provider portal.
-
-***Prerequisites***
-
-* You have owner or administrator permissions to the Calico Cloud Manager UI.
-* You set up Entra ID as your identity provider for Calico Cloud.
- To set up an identity provider for Calico Cloud, open a [support ticket](https://support.tigera.io).
-* You have administrator permissions for your organization in the Azure Portal.
-* You have the Object ID for an Entra ID security group.
-* The **Email** property for all users in the security group has a valid email address.
-
-***Procedure***
-
-1. In Manager UI, click the user icon > **Manage Team**.
-1. Under the **Roles** tab, click **Add Role** and enter a name and description for the custom role.
- Under **IdP Group Identifier**, enter your Entra ID security group's Object ID and click **Save**.
- :::note
- If you don't see **IdP Group Identifier**, open a [support ticket](https://support.tigera.io) to enable this option.
- :::
-1. To add permissions, locate your new role under the **Roles** tab, select **Action** > **Manage permissions** > **Edit**, and then click **Add Permission**.
-1. Under **Permission**, choose a permission type from the list.
- Depending on the permission, you may also need to choose a namespace or policy tier.
-1. (optional) Click **Add permission** to add more permissions to your role for this cluster.
-1. Click **Save** to save these permissions to the role for this cluster.
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/users/index.mdx b/calico-cloud_versioned_docs/version-20-1/users/index.mdx
deleted file mode 100644
index 4e9e9e8aec..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/users/index.mdx
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: Manage users and user permissions for Calico Cloud Manager UI.
-hide_table_of_contents: true
----
-
-import { DocCardLink, DocCardLinkLayout } from '/src/___new___/components';
-
-# Users
-
-Manage users and user permissions for Calico Cloud Manager UI.
-
-
-
-
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/users/user-management.mdx b/calico-cloud_versioned_docs/version-20-1/users/user-management.mdx
deleted file mode 100644
index c5f482c609..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/users/user-management.mdx
+++ /dev/null
@@ -1,60 +0,0 @@
----
-description: Authenticate and authorize users.
----
-
-# Set up users
-
-## Authentication
-
-$[prodname] supports Google Social login and username / password for user authentication.
-
-## Roles and authorization
-
-Users can have one or more of the following predefined user roles to access features in Manager UI.
-The default permissions align with typical needs for each role.
-
-This table describes what level of access each predefined role has for features in Manager UI:
-
-| | Owner | Admin | Viewer | DevOps | Security | Compliance | UsageMetrics | ImageAssuranceAdmin |
-|---------------------------------------|:------------------:|:------------------:|:------:|:----------:|:----------:|:----------:|:------------:|:-------------------:|
-| _Service Graph_ and _Flow Visualizer_ | view | view | view | view | view | - | - | - |
-| _Policies_ | view, edit | view, edit | view | view, edit | view, edit | view | - | - |
-| _Nodes_ and _Endpoints_ | view | view | view | view | view | view | - | - |
-| _Network Sets_ | view, edit | view, edit | view | view, edit | view, edit | - | - | - |
-| _Managed Clusters_ | view, edit, delete | view, edit, delete | view | view, edit | view | - | - | - |
-| _Compliance Reports_ | view | view | view | - | view | view | - | - |
-| _Timeline_ | view | view | view | view | view | - | - | - |
-| _Alerts_ | view, edit | view, edit | view | view, edit | view, edit | - | - | - |
-| _Kibana_ | view, edit | view, edit | view | view, edit | view, edit | - | - | - |
-| _Image Assurance_ | view, edit | view, edit | - | view, edit | view, edit | - | - | view, edit |
-| _Manage Team_ | view, edit | view, edit | view | view | view | - | - | - |
-| _Usage Metrics_ | view | - | - | - | - | - | view | - |
-| _Threat Feeds_ | view, edit | view, edit | view | view, edit | view, edit | - | - | - |
-| _Web Application Firewall_ | view, edit | view, edit | view | view | view, edit | - | - | - |
-| _Container Threat Detection_ | view, edit | view, edit | view | view | view, edit | - | - | - |
-
-:::note
-
-The Owner role cannot be assigned to new users. The only Owner is the user who created the $[prodname] account.
-
-:::
-
-## Add your own identity provider
-
-$[prodname] works with any identity provider that supports [OpenID Connect](https://openid.net/connect/). For example, OKTA, Google, and Azure AD.
-
-To add an identity provider, open a [Support ticket](https://support.tigera.io/).
-
-### Azure AD requirements
-
-To add Azure AD as your identity provider, create an Active Directory "App Registration" with a Redirect URI of type "Web" set to https://auth.calicocloud.io/login/callback.
-
-Enable "ID Token" for implicit flows.
-
-Add the following Microsoft Graph API delegated permissions:
-
-- User.Read
-- OpenId permissions:
- - email
- - openid
- - profile
diff --git a/calico-cloud_versioned_docs/version-20-1/variables.js b/calico-cloud_versioned_docs/version-20-1/variables.js
deleted file mode 100644
index e113630e4b..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/variables.js
+++ /dev/null
@@ -1,31 +0,0 @@
-const releases = require('./releases.json');
-
-const variables = {
- releaseTitle: 'v3.20.0-1.0',
- cloudUserVersion: 'v20.2.0',
- prodname: 'Calico Cloud',
- prodnamedash: 'calico-cloud',
- baseUrl: '/calico-cloud',
- filesUrl: 'https://docs.calicocloud.io',
- filesUrl_CE: 'https://downloads.tigera.io/ee/v3.20.0-1.0',
- tutorialFilesURL: 'https://docs.tigera.io/files',
- prodnameWindows: 'Calico Enterprise for Windows',
- rootDirWindows: 'C:\\TigeraCalico',
- nodecontainer: 'cnx-node',
- noderunning: 'calico-node',
- cloudversion: 'v3.20.0-1.0-18',
- clouddownloadurl: 'https://installer.calicocloud.io/manifests/v3.20.0-1.0-18',
- clouddownloadbase: 'https://installer.calicocloud.io',
- cloudoperatorimage: 'quay.io/tigera/cc-operator',
- imageassuranceversion: 'v1.21.0',
- tigeraOperator: releases[0]['tigera-operator'],
- dikastesVersion: releases[0].components.dikastes.version,
- releases,
- registry: 'quay.io/',
- imageNames: {
- node: 'tigera/cnx-node',
- kubeControllers: 'tigera/kube-controllers',
- },
-};
-
-module.exports = variables;
diff --git a/calico-cloud_versioned_docs/version-20-1/visibility/alerts.mdx b/calico-cloud_versioned_docs/version-20-1/visibility/alerts.mdx
deleted file mode 100644
index e411348add..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/visibility/alerts.mdx
+++ /dev/null
@@ -1,234 +0,0 @@
----
-description: Manage alerts and events for Calico Enterprise features.
----
-
-# Manage alerts
-
-## Big picture
-
-Manage alerts and alert events for $[prodname] features.
-
-## Value
-
-You can configure alerts for many $[prodname] features. Alerts are critical to teams for different reasons, for example:
-
-- **Visibility and troubleshooting** - alerts may indicate infrastructure problems, application bugs, or performance degradation
-- **Security** - alerts on suspicious traffic or workload behavior may indicate a compromise or malicious actor
-
-You can manage alerts and alert events in Manager UI, or using the CLI. $[prodname] also provides alert templates
-for common tasks that you can rename and edit to suit your own needs.
-
-## Before you begin
-
-**Recommended**
-
-We recommend turning down the aggregation level for flow logs to ensure that you see pod-specific results. $[prodname] aggregates flow logs over the external IPs for allowed traffic, and alert events will not provide pod-specific results (unless the traffic is denied by policy).
-
-:::caution
-
-Turning down aggregation levels for flow logs increases the amount of log data generated and may increase your $[prodname] bill.
-
-:::
-
-To turn down aggregation on flow logs, go to [FelixConfiguration](../reference/resources/felixconfig.mdx) and set the field **flowLogsFileAggregationKindForAllowed** to **1**.
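-
-For example, a minimal sketch using `kubectl patch` (this assumes your FelixConfiguration resource is named `default`, which is the usual name):
-
-```bash
-kubectl patch felixconfiguration default --type merge \
-  -p '{"spec":{"flowLogsFileAggregationKindForAllowed":1}}'
-```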
-
-## How To
-
-- [Manage alerts in Manager UI](#manage-alerts-in-manager-ui)
-- [Manage alerts using CLI](#manage-alerts-using-cli)
-
-### Manage alerts in Manager UI
-
-You can view alert events in Manager UI in several places: the **Alerts** page, **Service Graph**, and the **Kibana** dashboard.
-
-Click **Activity**, **Alerts** to follow along.
-
-**Alerts page**
-
-The Alerts page lists **alert events** that are generated by alerts that you’ve configured. (A list of Alerts can be found by clicking the **Alert configuration** icon).
-
-![alerts-list](/img/calico-enterprise/alerts-list.png)
-
-You can create alerts for many $[prodname] features. Although the following list of features is not exhaustive and will grow, you get a sense of the range of alerts that can be displayed on this page.
-
-- $[prodname] logs from Elasticsearch (flow, dns, audit, bgp, L7)
-- Deep packet inspection (DPI)
-- Threat defense (suspicious IPs, suspicious domains)
-- Web Application Firewall (WAF)
-
-Note the following:
-
-- The alert event list will be empty if no alert events have occurred yet
-- You can dismiss alert events from view using the checkboxes or bulk action
-- The list may contain alert events that are identical or nearly identical. For nearly identical events, you can see differences in the `record` field when you expand the event.
-- Because alert events share the same interface, fields that do not apply to the alert are noted by “N/A”
-- You can filter alert events by Type.
-
- ![filter-alerts](/img/calico-enterprise/filter-alerts.png)
-
- Note these types:
-
- - **Custom** - filters legacy global alert events that were created before v3.12
- - **Global Alert** - includes alerts for $[prodname] Elasticsearch logs (audit, dns, flow, L7, WAF)
-
-**Add/edit/delete alerts**
-
-To manage alerts, click the **Alerts Configuration** icon.
-
-The following alert is an example of a global alert in the list view. This sample alert generates alert events when there are more than 100 flows in the cluster in the last 5 minutes. (The YAML version of this alert is shown in the section on using the [CLI](#examples).)
-
-![alert-list-view](/img/calico-enterprise/alert-list-view.png)
-
-To create a new alert, click the **New** drop-down menu, and select **Blank**.
-
-Global alerts use a domain-specific query language to select records from a data set to use in the alert. You can also select/omit specific namespaces.
-
-![alert-example-ui](/img/calico-enterprise/alert-example-ui.png)
-
-For help with fields on this page, see [GlobalAlert](../reference/resources/globalalert.mdx).
-
-**Alert templates**
-
-From the **New** drop-down menu, select **Template**.
-
-![alert-template](/img/calico-enterprise/alert-template.png)
-
-The template list contains alerts for common tasks created by $[prodname]. With templates you can:
-
-- Update and rename an existing template
-- Create a new template from scratch
-- Create a new alert and save it as a template
-
-### Manage alerts using CLI
-
-This section provides examples of how to create and delete global alerts using `kubectl` and YAML files.
-
-**Create a global alert**
-
-1. Create a YAML file with one or more alerts.
-1. Apply the alert to your cluster.
-
- ```bash
- kubectl apply -f
- ```
-
-1. Wait until the alert runs and check the status.
-
- ```bash
- kubectl get globalalert -o yaml
- ```
-
-1. In Manager UI, go to the **Alerts** page to view alert events.
-
-### Examples
-
-The following alert generates alert events when there are more than 100 flows in the cluster in the last 5 minutes.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalAlert
-metadata:
- name: example-flows
-spec:
- description: '100 flows Example'
- summary: 'Flows example ${count} > 100'
- severity: 100
- dataSet: flows
- metric: count
- condition: gt
- threshold: 100
-```
-
-The following alert generates alert events when there is ssh traffic in the default namespace.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalAlert
-metadata:
- name: network.ssh
-spec:
- description: 'ssh flows to default namespace'
- summary: '[flows] ssh flow in default namespace detected from ${source_namespace}/${source_name_aggr}'
- severity: 100
- period: 10m
- lookback: 10m
- dataSet: flows
- query: proto='tcp' AND action='allow' AND dest_port='22' AND (source_namespace='default' OR dest_namespace='default') AND reporter=src
- aggregateBy: [source_namespace, source_name_aggr]
- field: num_flows
- metric: sum
- condition: gt
- threshold: 0
-```
-
-The following alert generates alert events when $[prodname] globalnetworksets are modified.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalAlert
-metadata:
- name: policy.globalnetworkset
-spec:
- description: 'Changed globalnetworkset'
- summary: '[audit] [privileged access] change detected for ${objectRef.resource} ${objectRef.name}'
- severity: 100
- period: 10m
- lookback: 10m
- dataSet: audit
- query: (verb=create OR verb=update OR verb=delete OR verb=patch) AND "objectRef.resource"=globalnetworksets
- aggregateBy: [objectRef.resource, objectRef.name]
- metric: count
- condition: gt
- threshold: 0
-```
-
-The following alert generates alert events for all flows from processes in the set.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalAlert
-metadata:
- name: example-process-set-embedded
-spec:
- description: Generate alerts for all flows from processes in the set
- summary: Generate alerts for all flows from processes in the set
- severity: 100
- dataSet: flows
- query: process_name IN {"python?", "*checkoutservice"}
-```
-
-The following example generates alert events for DNS lookups that are not in the allowed domain set. Because this set can be potentially large, a variable is used in the query string and is referenced in the substitutions list.
-
-```yaml
-apiVersion: projectcalico.org/v3
-kind: GlobalAlert
-metadata:
- name: example-domain-set-variable
-spec:
- description: Generate alerts for all DNS lookups not in the domain set
- summary: Generate alerts for all DNS lookups not in the domain set with variable
- severity: 100
- dataSet: dns
- query: qname NOTIN ${domains}
- substitutions:
- - name: domains
- values:
- - '*cluster.local'
- - '?.mydomain.com'
-```
-
-**Delete a global alert**
-
-To delete a global alert and stop all alert event generation, use the following command.
-
-```bash
-kubectl delete globalalert
-```
-
-## Additional resources
-
-- [GlobalAlert and templates](../reference/resources/globalalert.mdx)
-- Alerts for [Deep packet inspection](../threat/deeppacketinspection.mdx)
-- Alerts for [suspicious IPs](../threat/suspicious-ips.mdx)
-- Alerts for [suspicious domains](../threat/suspicious-domains.mdx)
-- Alerts for [Web Application Firewall](../threat/web-application-firewall.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/archive-storage.mdx b/calico-cloud_versioned_docs/version-20-1/visibility/elastic/archive-storage.mdx
deleted file mode 100644
index 0b3293a68a..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/archive-storage.mdx
+++ /dev/null
@@ -1,210 +0,0 @@
----
-description: Archive logs to Syslog, Splunk, or Amazon S3 for maintaining compliance data.
----
-
-# Archive logs
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-## Big picture
-
-Archive $[prodname] logs to SIEMs like Syslog, Splunk, or Amazon S3 to meet compliance storage requirements.
-
-## Value
-
-Archiving your $[prodname] Elasticsearch logs to storage services like Amazon S3, Syslog, or Splunk are reliable
-options for maintaining and consolidating your compliance data long term.
-
-## Before you begin
-
-**Supported logs for export**
-
-- Syslog - flow, dns, idsevents, audit
-- Amazon S3 - l7, flow, dns, runtime, audit
-- Splunk - flow, audit, dns
-
-## How to
-
-:::note
-Because $[prodname] and Kubernetes logs are integral to $[prodname] diagnostics, there is no mechanism to tune down the verbosity. To manage log verbosity, filter logs using your SIEM.
-:::
-
-
-
-
-1. Create an AWS bucket to store your logs.
- You will need the bucket name, region, key, secret key, and the path in the following steps.
-
-2. Create a Secret in the `tigera-operator` namespace named `log-collector-s3-credentials` with the fields `key-id` and `key-secret`.
- Example:
-
- ```
- kubectl create secret generic log-collector-s3-credentials \
- --from-literal=key-id= \
- --from-literal=key-secret= \
- -n tigera-operator
- ```
-
-3. Update the [LogCollector](../../reference/installation/api.mdx#operator.tigera.io/v1.LogCollector)
- resource named, `tigera-secure` to include an [S3 section](../../reference/installation/api.mdx#operator.tigera.io/v1.S3StoreSpec)
- with your information noted from above.
- Example:
-
- ```yaml
- apiVersion: operator.tigera.io/v1
- kind: LogCollector
- metadata:
- name: tigera-secure
- spec:
- additionalStores:
- s3:
- bucketName:
- bucketPath:
- region:
- ```
-
- This can be done during installation by editing custom-resources.yaml
- before applying it, or after installation by editing the resource with the command:
-
- ```bash
- kubectl edit logcollector tigera-secure
- ```
-
-
-
-
-1. Update the [LogCollector](../../reference/installation/api.mdx#operator.tigera.io/v1.LogCollector)
- resource named `tigera-secure` to include a [Syslog section](../../reference/installation/api.mdx#operator.tigera.io/v1.SyslogStoreSpec)
- with your syslog information.
- Example:
- ```yaml
- apiVersion: operator.tigera.io/v1
- kind: LogCollector
- metadata:
- name: tigera-secure
- spec:
- additionalStores:
- syslog:
- # (Required) Syslog endpoint, in the format protocol://host:port
- endpoint: tcp://1.2.3.4:514
- # (Optional) If messages are being truncated set this field
- packetSize: 1024
- # (Required) Types of logs to forward to Syslog (must specify at least one option)
- logTypes:
- - Audit
- - DNS
- - Flows
- - IDSEvents
- ```
- This can be done during installation by editing custom-resources.yaml before applying it, or after installation by editing the resource with the command:
- ```bash
- kubectl edit logcollector tigera-secure
- ```
-2. You can control which types of $[prodname] log data you would like to send to syslog.
- The [Syslog section](../../reference/installation/api.mdx#operator.tigera.io/v1.SyslogStoreSpec)
- contains a field called `logTypes` which allows you to list which log types you would like to include.
- The allowable log types are:
-
- - Audit
- - DNS
- - Flows
- - IDSEvents
-
- Refer to the [Syslog section](../../reference/installation/api.mdx#operator.tigera.io/v1.SyslogStoreSpec) for more details on what data each log type represents.
-
- :::note
-
- The log type `IDSEvents` is only supported for a cluster that has [LogStorage](../../reference/installation/api.mdx#operator.tigera.io/v1.LogStorage) configured. It is because intrusion detection event data is pulled from the corresponding LogStorage datastore directly.
-
- :::
-
- The `logTypes` field is required, which means you must specify at least one type of log to export to syslog.
-
-**TLS configuration**
-
-3. You can enable the TLS option for syslog forwarding by including the `encryption` field in the [Syslog section](../../reference/installation/api.mdx#operator.tigera.io/v1.SyslogStoreSpec).
-
- ```yaml
- apiVersion: operator.tigera.io/v1
- kind: LogCollector
- metadata:
- name: tigera-secure
- spec:
- additionalStores:
- syslog:
- # (Required) Syslog endpoint, in the format protocol://host:port
- endpoint: tcp://1.2.3.4:514
- # (Optional) If messages are being truncated set this field
- packetSize: 1024
- # (Optional) To Configure TLS mode
- encryption: TLS
- # (Required) Types of logs to forward to Syslog (must specify at least one option)
- logTypes:
- - Audit
- - DNS
- - Flows
- - IDSEvents
- ```
-
-4. Using the self-signed CA with the field name `tls.crt`, create a ConfigMap named `syslog-ca` in the `tigera-operator` namespace. Example:
-
- :::note
-
- Skip this step if the public CA bundle is sufficient to verify the server certificates.
-
- :::
-
- ```bash
- kubectl create configmap syslog-ca --from-file=tls.crt -n tigera-operator
- ```
-
-
-
-
-**Support**
-
-In this release, only [Splunk Enterprise](https://www.splunk.com/en_us/products/splunk-enterprise.html) is supported.
-
-$[prodname] uses Splunk's **HTTP Event Collector** to send data to the Splunk server. To copy the flow, audit, and DNS logs to Splunk, follow these steps:
-
-1. Create an HTTP Event Collector token by following the steps listed in Splunk's documentation for your specific Splunk version. Here is the link to do this for [Splunk version 8.0.0](https://docs.splunk.com/Documentation/Splunk/8.0.0/Data/UsetheHTTPEventCollector).
-
-2. Create a Secret in the `tigera-operator` namespace named `logcollector-splunk-credentials` with the field `token`.
- Example:
-
- ```
- kubectl create secret generic logcollector-splunk-credentials \
- --from-literal=token= \
- -n tigera-operator
- ```
-
-3. Update the
- [LogCollector](../../reference/installation/api.mdx#operator.tigera.io/v1.LogCollector)
- resource named `tigera-secure` to include
- a [Splunk section](../../reference/installation/api.mdx#operator.tigera.io/v1.SplunkStoreSpec)
- with your Splunk information.
- Example:
-
- ```yaml
- apiVersion: operator.tigera.io/v1
- kind: LogCollector
- metadata:
- name: tigera-secure
- spec:
- additionalStores:
- splunk:
- # Splunk HTTP Event Collector endpoint, in the format protocol://host:port
- endpoint: https://1.2.3.4:8088
- ```
-
- This can be done during installation by editing custom-resources.yaml
- before applying it, or after installation by editing the resource with the command:
-
- ```
- kubectl edit logcollector tigera-secure
- ```
-
-
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/audit-overview.mdx b/calico-cloud_versioned_docs/version-20-1/visibility/elastic/audit-overview.mdx
deleted file mode 100644
index d9203cb08b..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/audit-overview.mdx
+++ /dev/null
@@ -1,45 +0,0 @@
----
-description: Calico Cloud audit logs provide data on changes to resources.
----
-
-# Audit logs
-
-## Big picture
-
-$[prodname] audit logs provide security teams and auditors historical data of all changes to resources over time.
-
-## Concepts
-
-### Resources used in audit logs
-
-$[prodname] audit logs are enabled by default for the following resources:
-
-- Global networkpolicies
-- Network policies
-- Staged global networkpolicies
-- Staged networkpolicies
-- Staged Kubernetes network policies
-- Global network sets
-- Network sets
-- Tiers
-- Host endpoints
-
-### Audit logs in Manager UI
-
-$[prodname] audit logs are displayed in the Timeline dashboard in Manager UI. You can filter logs, and export data in .json or .yaml formats.
-
-![audit-logs](/img/calico-enterprise/audit-logs.png)
-
-Audit logs are also visible in the Kibana dashboard (indexed by `tigera_secure_ee_audit_ee`), and are useful for looking at policy differences.
-
-![kibana-auditlogs](/img/calico-enterprise/kibana-auditlogs.png)
-
-Finally, audit logs provide the core data for compliance reports.
-
-![compliance-reports](/img/calico-enterprise/configuration-compliance.png)
-
-## Required next step
-
-**Kubernetes resources** are also used in compliance reports and other audit-related features, but they are not enabled by default. You must enable Kubernetes resources through the Kubernetes API server. If you miss this step, some compliance reports will not work, and audit trails will not provide a complete view to your security team.
-
-- [Enable Kubernetes audit logs](../kube-audit.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/bgp.mdx b/calico-cloud_versioned_docs/version-20-1/visibility/elastic/bgp.mdx
deleted file mode 100644
index 0d44aff407..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/bgp.mdx
+++ /dev/null
@@ -1,22 +0,0 @@
----
-description: Key/value pairs of BGP activity logs and how to construct queries.
----
-
-# BGP logs
-
-$[prodname] pushes BGP activity logs to Elasticsearch. To view them, go to the Kibana Discover view, and from the dropdown menu, select `tigera_secure_ee_bgp.*` to view the collected BIRD and BIRD6 logs.
-
-The following table details key/value pairs for constructing queries, including their Elasticsearch datatype.
-
-| Name | Datatype | Description |
-| ------------ | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `logtime` | date | When the log was collected in UTC timestamp format. |
-| `host` | keyword | The name of the node where log was collected. |
-| `ip_version` | keyword | Contains one of the following values: ● IPv4: Log from BIRD process ● IPv6: Log from BIRD6 process |
-| `message` | text | The message contained in the log. |
-
-Once a set of BGP logs has accumulated in Elasticsearch, you can perform many interesting queries. Depending on the field that you want to query, different techniques are required. For example:
-
-- To view BGP logs only for IPv4 or IPv6, query on the `ip_version` field and sort by `logtime`
-- To see all logs from a specific node, query on the `host` field
-- To view events in the cluster, query on the `message` field
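-
-For example, a query like the following in the Kibana search bar (Lucene syntax; the host name is a placeholder) returns only IPv4 BGP logs from a single node:
-
-```
-ip_version:"IPv4" AND host:"my-node-1"
-```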
diff --git a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/dns/dns-logs.mdx b/calico-cloud_versioned_docs/version-20-1/visibility/elastic/dns/dns-logs.mdx
deleted file mode 100644
index 45119345a9..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/dns/dns-logs.mdx
+++ /dev/null
@@ -1,64 +0,0 @@
----
-description: Key/value pairs of DNS activity logs and how to construct queries.
----
-
-# Query DNS logs
-
-$[prodname] pushes DNS activity logs to Elasticsearch, for DNS information that is obtained from [trusted DNS servers](../../../network-policy/domain-based-policy.mdx#trusted-dns-servers). The following table
-details the key/value pairs in the JSON blob, including their
-[Elasticsearch datatype](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-types.html).
-This information should assist you in constructing queries.
-
-| Name | Datatype | Description |
-| ------------------ | ----------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `start_time` | date | When the collection of the log began in UNIX timestamp format. |
-| `end_time` | date | When the collection of the log concluded in UNIX timestamp format. |
-| `type` | keyword | This field contains one of the following values: ● LOG: Indicates that this is a normal DNS activity log. ● UNLOGGED: Indicates that this log is reporting DNS activity that could not be logged in detail because of [DNSLogsFilePerNodeLimit](../../../reference/resources/felixconfig.mdx#spec). |
-| `count` | long | When `type` is: ● LOG: How many DNS lookups there were, during the log collection interval, with details matching this log. ● UNLOGGED: The number of DNS responses that could not be logged in detail because of [DNSLogsFilePerNodeLimit](../../../reference/resources/felixconfig.mdx#spec). In this case none of the following fields are provided. |
-| `client_ip` | ip | The IP address of the client pod. A null value indicates aggregation. |
-| `client_name`      | keyword           | This field contains one of the following values: ● The name of the client pod. ● `-`: the name of the pod was aggregated. Check `client_name_aggr` for the pod name prefix. |
-| `client_name_aggr` | keyword | The aggregated name of the client pod. |
-| `client_namespace` | keyword | Namespace of the client pod. |
-| `client_labels` | array of keywords | Labels applied to the client pod. With aggregation, the label name/value pairs that are common to all aggregated clients. |
-| `qname` | keyword | The domain name that was looked up. |
-| `qtype` | keyword | The type of the DNS query (e.g. A, AAAA). |
-| `qclass` | keyword | The class of the DNS query (e.g. IN). |
-| `rcode` | keyword | The result code of the DNS query response (e.g. NoError, NXDomain). |
-| `rrsets` | nested | Detailed DNS query response data - see below. |
-| `servers` | nested | Details of the DNS servers that provided this response. |
-| `latency_count` | long | The number of lookups for which latency was measured. (The same as `count` above, unless some DNS requests were missed, or latency reporting is disabled - see `dnsLogsLatency` in the [FelixConfiguration resource](../../../reference/resources/felixconfig.mdx).) |
-| `latency_mean` | long | Mean latency, in nanoseconds. |
-| `latency_max` | long | Max latency, in nanoseconds. |
-
-Each nested `rrsets` object contains response data for a particular name and a particular type and
-class of response information. Its key/value pairs are as follows.
-
-| Name | Datatype | Description |
-| ------- | ----------------- | ----------------------------------------------------------------------------------------------------------------------- |
-| `name` | keyword | The domain name that this information is for. |
-| `type` | keyword | The type of the information (e.g. A, AAAA). |
-| `class` | keyword | The class of the information (e.g. IN). |
-| `rdata` | array of keywords | Array of data, for the name, of that type and class. For example, when `type` is A, this is an array of IPs for `name`. |
-
-Each nested `servers` object provides details of a DNS server that provided the information in the
-containing log. Its key/value pairs are as follows.
-
-| Name | Datatype | Description |
-| ----------- | ----------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `ip` | ip | The IP address of the DNS server. |
-| `name`      | keyword           | This field contains one of the following values: ● The name of the DNS server pod. ● `-`: the DNS server is not a pod. |
-| `name_aggr` | keyword           | This field contains one of the following values: ● The aggregated name of the DNS server pod. ● `pvt`: the DNS server is not a pod. Its IP address belongs to a private subnet. ● `pub`: the DNS server is not a pod. Its IP address does not belong to a private subnet. It is probably on the public internet. |
-| `namespace` | keyword | Namespace of the DNS server pod, or `-` if the DNS server is not a pod. |
-| `labels` | array of keywords | Labels applied to the DNS server pod or host endpoint; empty if there are no labels or the DNS server is not a pod or host endpoint. |
-
-The `latency_*` fields provide information about the latency of the DNS lookups that contributed to
-this log. For each successful DNS lookup $[prodname] measures the time between when the DNS
-request was sent and when the corresponding DNS response was received.
-
-## Query DNS log fields
-
-After a set of DNS logs has accumulated in Elasticsearch, you can perform many interesting queries. For example, if you query on:
-
-- `qname`, you can find all of the DNS response information that was provided to clients trying to resolve a particular domain name
-
-- `rrsets.rdata`, you can find all of the DNS lookups that included a particular IP address in their response data.
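-
-For example, a simple Kibana query on the `qname` field (Lucene syntax; the domain is a placeholder) looks like this:
-
-```
-qname:"www.example.com"
-```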
diff --git a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/dns/filtering-dns.mdx b/calico-cloud_versioned_docs/version-20-1/visibility/elastic/dns/filtering-dns.mdx
deleted file mode 100644
index c40c2fef01..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/dns/filtering-dns.mdx
+++ /dev/null
@@ -1,63 +0,0 @@
----
-description: Suppress DNS logs of low significance using filters.
----
-
-# Filter DNS logs
-
-$[prodname] supports filtering out DNS logs based on user provided
-configuration. Use filtering to suppress logs of low significance.
-
-## Configure DNS filtering
-
-DNS log filtering is configured through a ConfigMap in the `tigera-operator`
-namespace.
-
-To enable DNS log filtering, follow these steps:
-
-1. Create a `filters` directory with a file named `dns` with the contents of
- your desired filter using [Filter configuration files](#filter-configuration-files).
- If you are also adding [flow filters](../flow/filtering.mdx) also add the `flow` file
- to the directory.
-1. Create the `fluentd-filters` ConfigMap in the `tigera-operator` namespace
- with the following command.
- ```bash
- kubectl create configmap fluentd-filters -n tigera-operator --from-file=filters
- ```
-
-## Filter configuration files
-
-The filters defined by the ConfigMap are inserted into the fluentd configuration file.
-The [upstream fluentd documentation](https://docs.fluentd.org/filter/grep)
-describes how to write fluentd filters. The [DNS log schema](dns-logs.mdx) can be referred to
-for the specification of the various fields you can filter based on. Remember to ensure
-that the config file is properly indented in the ConfigMap.
-
-## Example 1: filter out cluster-internal lookups
-
-This example filters out lookups for domain names ending with ".cluster.local". More
-logs could be filtered by adjusting the regular expression "pattern", or by adding
-additional `exclude` blocks.
-
-```
-<filter dns>
-  @type grep
-  <exclude>
-    key qname
-    pattern /\.cluster\.local$/
-  </exclude>
-</filter>
-```
-
-## Example 2: keep logs only for particular domain names
-
-This example will filter out all logs _except_ those for domain names ending `.co.uk`.
-
-```
-<filter dns>
-  @type grep
-  <regexp>
-    key qname
-    pattern /\.co\.uk$/
-  </regexp>
-</filter>
-```
diff --git a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/dns/index.mdx b/calico-cloud_versioned_docs/version-20-1/visibility/elastic/dns/index.mdx
deleted file mode 100644
index 1d3d366013..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/dns/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Configure and filter DNS logs.
-hide_table_of_contents: true
----
-
-# Manage DNS logs for Calico Cloud
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-
diff --git a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/aggregation.mdx b/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/aggregation.mdx
deleted file mode 100644
index 459977ff67..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/aggregation.mdx
+++ /dev/null
@@ -1,110 +0,0 @@
----
-description: Configure flow log aggregation to reduce log volume and costs.
----
-
-# Configure flow log aggregation
-
-## Big picture
-
-Configure flow log aggregation level to reduce log volume and costs.
-
-## Value
-
-Beyond using filtering to suppress flow logs, $[prodname] provides controls to aggregate flow logs. Although aggressive aggregation levels reduce
-flow volume and costs, it can also reduce visibility into specific metadata of allowed and denied traffic. Review this article to see which level of
-aggregation is suitable for your implementation.
-
-## Concepts
-
-### Volume and cost versus visibility
-
-$[prodname] enables flow log aggregation for pod/workload endpoints by default, and uses an aggressive aggregation level to reduce log volume. The
-level assumes that most users do not need to see pod IP information (due to the ephemeral nature of pod IP address allocation). However, it all
-depends on your deployment; we recommend reviewing aggregation levels to understand what information gets grouped (and thus suppressed from view).
-
-### Aggregation types and levels
-
-For allowed flows, the default aggregation level is 2, `AnyConnectionFromSamePodPrefix` and for denied flows the default aggregation level is 1,
-`AnyConnectionFromSameSourcePod`.
-
-The following table summarizes the aggregation levels by flow log traffic:
-
-| **Level** | **Name** | **Description** |
-|-----------|-------------------------------------|-------------------------------------------------------------------|
-| 0 | | No aggregation |
-| 1 | AnyConnectionFromSameSourcePod | Identity fields below source pod level are masked out. It means that flows, to the same destination, from processes or controllers in the same source pod, are aggregated together. |
-| 2 | AnyConnectionFromSamePodPrefix | In addition to the above, source pod names are aggregated based on their shared prefixes. This means that flows, to the same destination, from pods within the same pod controller (Deployment/ReplicaSet) are aggregated together. |
-| 3 | AnyConnectionBetweenSamePodPrefixes | This level of aggregation builds on the previous two levels and also groups destination pod names based on their shared prefixes. |
-
-### Understanding aggregation level differences
-
-Here are examples of pod-to-pod flows, highlighting the differences between flow logs at various aggregation levels.
-
-The source port is usually ephemeral and does not convey useful information. By suppressing the source port, `AnyConnectionFromSameSourcePod` aggregation
-type minimizes the flow logs generated for traffic coming from different containers within the same pod and going to the same destination endpoint
-and port. The two flows originating from `client-a` without aggregation are combined into one.
-
-In Kubernetes, pod controllers (Deployments, DaemonSets, ReplicaSets etc.) can automatically create names for pods. For example, the pods `nginx-1`
-and `nginx-2` are created by the ReplicaSet `nginx`. The controller's name is considered a pod-prefix and is used to aggregate flow log entries
-(indicated with an asterisk * at the end of the name). Flow logs originating from pods with the same prefix will be aggregated as long as the traffic
-is on the same protocol, and destined towards the same IP, and destination port. The three flow logs without aggregation originating from `client-a`
-and `client-b` are combined into a single flow log. This aggregation level is called `AnyConnectionFromSameSourcePodPrefix`.
-
-Finally, with `AnyConnectionBetweenSamePodPrefixes` we combine source and destination pods that are part of the same pod controller. With level 3, the flow logs
-are aggregated by the destination port and protocol, as long as they originate from pods with the same pod-prefix and destined for pods of the same
-pod-prefix. Previously distinct logs are aggregated into a single flow log (see the last row).
-
-| | | **Src Traffic** | | | **Dst Traffic** | | | **Packet counts** | |
-|--------------------------|-----------|----------|---------|----------|----------|---------|----------|------------|-------------|
-| **Aggr lvl** | **Flows** | **Name** | **IP** | **Port** | **Name** | **IP** | **Port** | **Pkt in** | **Pkt out** |
-| 0 (no aggr) | 4 | client-a | 1.1.1.1 | 45556 | nginx-1 | 2.2.2.2 | 80 | 1 | 2 |
-| | | client-b | 1.1.2.2 | 45666 | nginx-2 | 2.2.3.3 | 80 | 2 | 2 |
-| | | client-a | 1.1.1.1 | 65533 | nginx-1 | 2.2.2.2 | 80 | 1 | 3 |
-| | | client-c | 1.1.3.3 | 65534 | nginx-2 | 2.2.3.3 | 80 | 3 | 4 |
-| 1 (src port) | 3 | client-a | 1.1.1.1 | - | nginx-1 | 2.2.2.2 | 80 | 2 | 5 |
-| | | client-b | 1.1.2.2 | - | nginx-1 | 2.2.2.2 | 80 | 2 | 2 |
-| | | client-c | 1.1.3.3 | - | nginx-2 | 2.2.3.3 | 80 | 3 | 4 |
-| 2 (src pod-prefix) | 2 | client-* | - | - | nginx-1 | 2.2.2.2 | 80 | 4 | 7 |
-| | | client-* | - | - | nginx-2 | 2.2.3.3 | 80 | 3 | 4 |
-| 3 (src/dest pod-prefix) | 1 | client-* | - | - | nginx-* | - | 80 | 7 | 11 |
-
-## How to
-
-- [Verify existing aggregation level](#verify-existing-aggregation-level)
-- [Change default aggregation level](#change-default-aggregation-level)
-- [Troubleshoot logs with aggregation levels](#troubleshoot-logs-with-aggregation-levels)
-
-### Verify existing aggregation level
-
-Use the following command:
-
-```bash
-kubectl get felixconfiguration -o yaml
-```
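-
-To narrow the output to just the aggregation settings, you can filter the YAML (a quick sanity check; if nothing is returned, no aggregation fields are explicitly set and the defaults described above apply):
-
-```bash
-# Show only explicitly-set flow log aggregation fields, if any.
-kubectl get felixconfiguration default -o yaml | grep -i aggregation
-```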
-
-### Change default aggregation level
-
-Before [changing the default aggregation level](../../../reference/resources/felixconfig.mdx#aggregationkind), note the following:
-
-- Although any change in aggregation level affects flow log volume, lowering the aggregation number (especially to `0` for no aggregation) will cause significant impacts to log storage. If you allow more flow logs, ensure that you provision more log storage.
-- Verify that the parameters that you want to see in your aggregation level are not already [filtered](filtering.mdx).
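-
-As an example, the following is a minimal sketch of changing the level for allowed flows. It assumes the `flowLogsFileAggregationKindForAllowed` field described in the felixconfig reference linked above; confirm the exact field names and values for your version before applying.
-
-```bash
-# Set aggregation for allowed flows to level 1 (AnyConnectionFromSameSourcePod).
-kubectl patch felixconfiguration default --type merge -p '{"spec":{"flowLogsFileAggregationKindForAllowed":1}}'
-```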
-
-### Troubleshoot logs with aggregation levels
-
-When you use flow log aggregation, you may sometimes see two alerts,
-
-![two-alerts](/img/calico-enterprise/two-alerts.png)
-
-along with two flow log entries. Note that the entries are identical except for the slight timestamp difference.
-
-![two-logs](/img/calico-enterprise/two-logs.png)
-
-You may see two entries because of the interaction between the aggregation interval and the time interval for exporting logs (`flowLogsFlushInterval`).
-
-In each aggregation interval, connections or connection attempts can start or complete. However, flow logs do not start and stop when a connection starts and stops. Assume the default log export ("flush") interval of 10 seconds: if a connection starts in one flush interval but terminates in the next, it is recorded across two entries. To differentiate the entries, go to the Flow Logs tab in Service Graph and look at these fields: `num_flows`, `num_flows_started`, and `num_flows_completed`.
-
-The underlying reason for this overlap is a dependency on Linux conntrack, which determines how long $[prodname] tracks connection stats across different protocols (TCP, ICMP, UDP). For example, for UDP and ICMP, $[prodname] waits for a conntrack entry to time out before it considers a "connection" closed, and this timeout is usually greater than 10 seconds.
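-
-If the split entries make troubleshooting harder, one option is to lengthen the export interval so that more short-lived connections start and finish within a single flush. The sketch below assumes `flowLogsFlushInterval` takes a duration string; check the felixconfig reference for the exact format, and remember that longer intervals delay log visibility.
-
-```bash
-# Increase the flow log export ("flush") interval, for example from 10s to 30s.
-kubectl patch felixconfiguration default --type merge -p '{"spec":{"flowLogsFlushInterval":"30s"}}'
-```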
-
-## Additional resources
-
-- [Archive logs to storage](../archive-storage.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/datatypes.mdx b/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/datatypes.mdx
deleted file mode 100644
index 971331f2b6..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/datatypes.mdx
+++ /dev/null
@@ -1,166 +0,0 @@
----
-description: Data that Calico Cloud sends to Elasticsearch.
----
-
-# Flow log data types
-
-## Big picture
-
-$[prodname] sends the following data to Elasticsearch.
-
-The following table details the key/value pairs in the JSON blob, including their [Elasticsearch datatype](https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-types.html).
-
-| Name | Datatype | Description |
-| --------------------------------- | ----------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `host` | keyword | Name of the node that collected the flow log entry. |
-| `start_time` | date | Start time of log collection in UNIX timestamp format. |
-| `end_time` | date | End time of log collection in UNIX timestamp format. |
-| `action` | keyword | - `allow`: $[prodname] accepted the flow. - `deny`: $[prodname] denied the flow. |
-| `bytes_in` | long | Number of incoming bytes since the last export. |
-| `bytes_out` | long | Number of outgoing bytes since the last export. |
-| `dest_ip` | ip | IP address of the destination pod. A null value indicates aggregation. |
-| `dest_name` | keyword | Contains one of the following values: - Name of the destination pod. - `-`: the name was aggregated, or the endpoint is not a pod. Check `dest_name_aggr` for more information, such as the name of the pod if it was aggregated. |
-| `dest_name_aggr` | keyword | Contains one of the following values: - Aggregated name of the destination pod. - `pvt`: endpoint is not a pod. Its IP address belongs to a private subnet. - `pub`: endpoint is not a pod. Its IP address does not belong to a private subnet. It is probably an endpoint on the public internet. |
-| `dest_namespace` | keyword | Namespace of the destination endpoint. A `-` means the endpoint is not namespaced. |
-| `dest_port` | long | Destination port. Not applicable for ICMP packets. |
-| `dest_service_name` | keyword | Name of the destination service. A `-` means the original destination did not correspond to a known Kubernetes service (e.g. a service's ClusterIP). |
-| `dest_service_namespace` | keyword | Namespace of the destination service. A `-` means the original destination did not correspond to a known Kubernetes service (e.g. a service's ClusterIP). |
-| `dest_service_port` | keyword | Port name of the destination service. A `-` means: - the original destination did not correspond to a known Kubernetes service (e.g. a service's ClusterIP), or - the destination port is aggregated. A `*` means there are multiple service port names matching the destination port number. |
-| `dest_type` | keyword | Destination endpoint type. Possible values: - `wep`: A workload endpoint, a pod in Kubernetes. - `ns`: A Networkset. If multiple Networksets match, then the one with the longest prefix match is chosen. - `net`: A Network. The IP address did not fall into a known endpoint type. |
-| `dest_labels` | array of keywords | Labels applied to the destination pod. A hyphen indicates aggregation. |
-| `dest_domains` | array of keywords | Find all the destination domain names for use in a DNS policy by examining `dest_domains`. The field displays information on the top-level domains linked to the destination IP. Applies to flows reported from the source to destinations outside the cluster. If `flowLogsDestDomainsByClient` is disabled, having `dest_domains`: ["A"] doesn't guarantee that the flow corresponds to a connection with domain name A. The destination IP may also be linked to other domain names not yet captured by Calico. |
-| `reporter` | keyword | - `src`: flow came from the pod that initiated the connection. - `dst`: flow came from the pod that received the initial connection. |
-| `num_flows` | long | Number of flows aggregated into this entry during this export interval. |
-| `num_flows_completed` | long | Number of flows that were completed during the export interval. |
-| `num_flows_started` | long | Number of flows that were started during the export interval. |
-| `num_process_names` | long | Number of unique process names aggregated into this entry during this export interval. |
-| `num_process_ids` | long | Number of unique process ids aggregated into this entry during this export interval. |
-| `num_process_args` | long | Number of unique process args aggregated into this entry during this export interval. |
-| `nat_outgoing_ports` | array of ints | List of [NAT](https://en.wikipedia.org/wiki/Network_address_translation) outgoing ports for the packets that were Source NAT'd in the flow |
-| `packets_in` | long | Number of incoming packets since the last export. |
-| `packets_out` | long | Number of outgoing packets since the last export. |
-| `proto` | keyword | Protocol. |
-| `policies` | array of keywords | List of policies that interacted with this flow. See [Format of the policies field](#format-of-the-policies-field). |
-| `process_name` | keyword | The name of the process that initiated or received the connection or connection request. This field will have the executable path if `flowLogsCollectProcessPath` is enabled. A "-" indicates that the process name is not logged. A "\*" indicates that the per-flow process limit has been exceeded and the process names are now aggregated. |
-| `process_id` | keyword | The process ID of the corresponding process (indicated by the `process_name` field) that initiated or received the connection or connection request. A "-" indicates that the process ID is not logged. A "\*" indicates that there is more than one unique process ID for the corresponding process name. |
-| `process_args` | array of strings | The arguments with which the executable was invoked. The size of the list depends on the per flow process args limit. |
-| `source_ip` | ip | IP address of the source pod. A null value indicates aggregation. |
-| `source_name` | keyword | Contains one of the following values: - Name of the source pod. - `-`: the name was aggregated, or the endpoint is not a pod. Check `source_name_aggr` for more information, such as the name of the pod if it was aggregated. |
-| `source_name_aggr` | keyword | Contains one of the following values: - Aggregated name of the source pod. - `pvt`: Endpoint is not a pod. Its IP address belongs to a private subnet. - `pub`: the endpoint is not a pod. Its IP address does not belong to a private subnet. It is probably an endpoint on the public internet. |
-| `source_namespace` | keyword | Namespace of the source endpoint. A `-` means the endpoint is not namespaced. |
-| `source_port` | long | Source port. A null value indicates aggregation. |
-| `source_type` | keyword | The type of source endpoint. Possible values: - `wep`: A workload endpoint, a pod in Kubernetes. - `ns`: A Networkset. If multiple Networksets match, then the one with the longest prefix match is chosen. - `net`: A Network. The IP address did not fall into a known endpoint type. |
-| `source_labels` | array of keywords | Labels applied to the source pod. A hyphen indicates aggregation. |
-| `original_source_ips` | array of ips | List of external IP addresses collected from requests made to the cluster through an ingress resource. This field is only available if capturing external IP addresses is configured. |
-| `num_original_source_ips` | long | Number of unique external IP addresses collected from requests made to the cluster through an ingress resource. This count includes the IP addresses included in the `original_source_ips` field. This field is only available if capturing external IP addresses is configured. |
-| `tcp_mean_send_congestion_window` | long | Mean tcp send congestion window size. This field is only available if flowLogsEnableTcpStats is enabled |
-| `tcp_min_send_congestion_window` | long | Minimum tcp send congestion window size. This field is only available if flowLogsEnableTcpStats is enabled |
-| `tcp_mean_smooth_rtt` | long | Mean smooth RTT in micro seconds. This field is only available if flowLogsEnableTcpStats is enabled |
-| `tcp_max_smooth_rtt` | long | Maximum smooth RTT in micro seconds. This field is only available if flowLogsEnableTcpStats is enabled |
-| `tcp_mean_min_rtt` | long | Mean MinRTT in micro seconds. This field is only available if flowLogsEnableTcpStats is enabled |
-| `tcp_max_min_rtt` | long | Maximum MinRTT in micro seconds. This field is only available if flowLogsEnableTcpStats is enabled |
-| `tcp_mean_mss` | long | Mean TCP MSS. This field is only available if flowLogsEnableTcpStats is enabled |
-| `tcp_min_mss` | long | Minimum TCP MSS. This field is only available if flowLogsEnableTcpStats is enabled |
-| `tcp_total_retransmissions` | long | Total retransmitted packets. This field is only available if flowLogsEnableTcpStats is enabled |
-| `tcp_lost_packets` | long | Total lost packets. This field is only available if flowLogsEnableTcpStats is enabled |
-| `tcp_unrecovered_to` | long | Total unrecovered timeouts. This field is only available if flowLogsEnableTcpStats is enabled |
-
-### Format of the policies field
-
-The `policies` field contains a comma-delimited list of policy rules that matched the flow. Each entry in the
-list has the following format:
-
-```
-<index>|<tier>|<policy>|<action>|<rule_index>
-```
-
-Where,
-
-* `<index>` numbers the order in which the rules were hit, starting with `0`.
-  :::tip
-  Sort the entries of the list by the `<index>` to see the order that rules were hit. The entries are displayed in
-  random order due to the way they are stored in the datastore.
-  :::
-
-* `<tier>` is the name of the policy tier containing the policy, or `__PROFILE__` for a rule derived from a
-  `Profile` resource (this is the internal datatype used to represent a Kubernetes namespace and its associated
-  "default allow" rule).
-* `<policy>` is the name of the policy/profile; its format depends on the type of policy:
-
-  * `<tier>.<name>` for $[prodname] `GlobalNetworkPolicy`.
-  * `<namespace>/knp.default.<name>` for Kubernetes `NetworkPolicy`.
-  * `<namespace>/<tier>.<name>` for $[prodname] `NetworkPolicy`.
-
-  Staged policy names are prefixed with "staged:".
-
-* `<action>` is the action performed by the rule; one of `allow`, `deny`, `pass`.
-* `<rule_index>`, if non-negative, is the index of the rule that was matched within the policy, starting with 0.
-  Otherwise, a special value:
-
-  * `-1` means the reporting endpoint was selected by the policy but no rule matched. The traffic hit the default
-    action for the tier. In this case, the `<policy>` is selected arbitrarily from the set of policies within
-    the tier that apply to the endpoint.
-  * `-2` means "unknown". The rule index was not recorded.
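-
-For example, a hypothetical entry (the tier and policy names are purely illustrative) might look like the following. It reads as: the second rule hit for this flow (index `1`), in the `security` tier, matched rule `0` of the $[prodname] `GlobalNetworkPolicy` named `block-egress`, and denied the traffic.
-
-```
-1|security|security.block-egress|deny|0
-```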
-
-### Flow log example with no aggregation
-
-A flow log with aggregation level 0 (no aggregation) might look like this:
-
-```
- {
- "start_time": 1597166083,
- "end_time": 1597166383,
- "source_ip": "192.168.47.9",
- "source_name": "access-6b687c8dcb-zn5s2",
- "source_name_aggr": "access-6b687c8dcb-*",
- "source_namespace": "policy-demo",
- "source_port": 42106,
- "source_type": "wep",
- "source_labels": {
- "labels": [
- "pod-template-hash=6b687c8dcb",
- "app=access"
- ]
- },
- "dest_ip": "192.168.138.79",
- "dest_name": "nginx-86c57db685-h6792",
- "dest_name_aggr": "nginx-86c57db685-*",
- "dest_namespace": "policy-demo",
- "dest_port": 80,
- "dest_type": "wep",
- "dest_labels": {
- "labels": [
- "pod-template-hash=86c57db685",
- "app=nginx"
- ]
- },
- "proto": "tcp",
- "action": "allow",
- "reporter": "dst",
- "policies": {
- "all_policies": [
- "0|default|policy-demo/default.access-nginx|allow"
- ]
- },
- "bytes_in": 388,
- "bytes_out": 1113,
- "num_flows": 1,
- "num_flows_started": 1,
- "num_flows_completed": 1,
- "packets_in": 6,
- "packets_out": 5,
- "http_requests_allowed_in": 0,
- "http_requests_denied_in": 0,
- "original_source_ips": null,
- "num_original_source_ips": 0,
- "host": "bz-n8kf-kadm-node-1",
- "@timestamp": 1597166383000
- }
-```
-
-The log shows an incoming connection reported by the destination node, allowed by a policy on port 80. The **`start_time`** and **`end_time`**
-describe the aggregation period (5 minutes). During this interval, one flow (**`"num_flows": 1`**) was recorded. At higher aggregation levels, flows from
-endpoints performing the same operation and originating from the same Deployment/ReplicaSet are grouped into a single log. In this example, the
-common source endpoints are those prefixed with **`access-6b687c8dcb-`**. Parameters like **`source_ip`** may be dropped and set to **`null`**, depending on
-the aggregation level. As aggregation levels increase, more flows will be grouped together based on your data. For more details on aggregation
-levels, see [configure flow log aggregation](./aggregation.mdx).
\ No newline at end of file
diff --git a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/filtering.mdx b/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/filtering.mdx
deleted file mode 100644
index 42b65cbe32..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/filtering.mdx
+++ /dev/null
@@ -1,109 +0,0 @@
----
-description: Filter Calico Cloud flow logs.
----
-
-# Filter flow logs
-
-## Big picture
-
-Filter $[prodname] flow logs.
-
-## Value
-
-Filter $[prodname] flow logs to suppress logs of low significance, and troubleshoot threats.
-
-## Concepts
-
-### Container monitoring tools versus flow logs
-
-Container monitoring tools are good for monitoring Kubernetes and orchestrated workloads for CPU usage, network usage, and log aggregation. For example, a monitoring tool can tell if a pod has turned into a bitcoin miner based on its higher-than-normal CPU usage.
-
-$[prodname] flow logs provide continuous records of every single packet sent/received by all pods in your Kubernetes cluster. Note that flow logs do not contain all packet data; only the number of packets/bytes that were sent between specific IP/ports, and when. In the previous monitoring tool example, $[prodname] flow logs could show the packets flowing to and from the bitcoin mining network.
-
-$[prodname] flow logs tell you when a pod is compromised, specifically:
-
-- Where a pod is sending data to
-- If the pod is talking to a known command-and-control server
-- Other pods that the compromised pod has been talking to (so you can see if they're compromised too)
-
-### Flow log format
-
-A flow log contains these space-delimited fields (unless filtered out).
-
-```
-startTime endTime srcType srcNamespace srcName srcLabels dstType dstNamespace dstName
-dstLabels srcIP dstIP proto srcPort dstPort numFlows numFlowsStarted numFlowsCompleted
-reporter packetsIn packetsOut bytesIn bytesOut action
-```
-
-**Example**
-
-```
-1528842551 1528842851 wep dev rails-81531* - wep dev memcached-38456* - - - 6 - 3000 7 3 4 out 154 61 70111 49404 allow
-```
-
-- Fields that are not enabled or are aggregated are noted by `-`
-- Aggregated names (such as "pod prefix") are noted by `*` at the end of the name
-- If `srcName` or `dstName` fields contain only a `*`, aggregation was performed using other means (such as specific labels), and no unique prefix was present.
-
-## How to
-
-- [Create flow log filters](#create-flow-log-filters)
-- [Add filters to ConfigMap file](#add-filters-to-configmap-file)
-
-### Create flow log filters
-
-Create your [fluentd filters](https://docs.fluentd.org/filter/grep).
-
-**Example: filter out a specific namespace**
-
-This example filters out all flow logs whose source or destination namespace is "dev". You can filter additional namespaces by adjusting the regular expression patterns, or by adding additional `exclude` blocks.
-
-```
-<filter flows>
-  @type grep
-  <exclude>
-    key source_namespace
-    pattern dev
-  </exclude>
-  <exclude>
-    key dest_namespace
-    pattern dev
-  </exclude>
-</filter>
-```
-
-**Example: filter out internet traffic to a specific deployment**
-
-This example filters inbound internet traffic to the deployment with pods named `nginx-internet-*`. Note the use of the `and` directive to filter out traffic that is both to the deployment and from the internet (source `pub`).
-
-```
-<filter flows>
-  @type grep
-  <and>
-    <exclude>
-      key dest_name_aggr
-      pattern ^nginx-internet
-    </exclude>
-    <exclude>
-      key source_name_aggr
-      pattern pub
-    </exclude>
-  </and>
-</filter>
-```
-
-### Add filters to ConfigMap file
-
-1. Create a `filters` directory with a file called `flow` with your desired filters. If you are also adding [dns filters](../dns/filtering-dns.mdx), add the `dns` file to the directory.
-
-1. Create the `fluentd-filters` ConfigMap in the `tigera-operator` namespace with the following command.
-
- ```bash
- kubectl create configmap fluentd-filters -n tigera-operator --from-file=filters
- ```
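-
-   To confirm the filters were stored as expected, you can read the ConfigMap back (an optional sanity check):
-
-   ```bash
-   # Display the filter files stored in the fluentd-filters ConfigMap.
-   kubectl get configmap fluentd-filters -n tigera-operator -o yaml
-   ```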
-
-## Additional resources
-
-- [Flow log aggregation](aggregation.mdx)
-- [Archive logs to storage](../archive-storage.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/hep.mdx b/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/hep.mdx
deleted file mode 100644
index 07c39916f6..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/hep.mdx
+++ /dev/null
@@ -1,35 +0,0 @@
----
-description: Enable hostendpoint reporting in flow logs.
----
-
-# Enable HostEndpoint reporting in flow logs
-
-## Big picture
-
-Enable $[prodname] flow logs to report HostEndpoint information.
-
-## Value
-
-Get visibility into the network activity at the HostEndpoint level using $[prodname] flow logs.
-
-## Before you begin
-
-**Limitations**
-
-- HostEndpoint reporting is only supported on Kubernetes nodes.
-- Flow logs on ApplyOnForward policies are currently not supported. As a result, a policy blocking traffic at the host level
-from forwarding to a workload endpoint would not result in a flow log from the host endpoint.
-
-## How to
-
-### Enable HostEndpoint reporting
-
-To enable reporting HostEndpoint metadata in flow logs, use the following command:
-
-```bash
-kubectl patch felixconfiguration default -p '{"spec":{"flowLogsEnableHostEndpoint":true}}'
-```
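-
-To confirm the setting took effect, read the field back; the output should be `true`:
-
-```bash
-kubectl get felixconfiguration default -o jsonpath='{.spec.flowLogsEnableHostEndpoint}'
-```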
-
-## Additional resources
-
-- [Protect Kubernetes nodes](../../../network-policy/hosts/kubernetes-nodes.mdx)
diff --git a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/index.mdx b/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/index.mdx
deleted file mode 100644
index 87603be8db..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Configure, filter, and aggregate flow logs.
-hide_table_of_contents: true
----
-
-# Configure flow logs
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-<DocCardList items={useCurrentSidebarCategory().items} />
diff --git a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/processpath.mdx b/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/processpath.mdx
deleted file mode 100644
index 87c4bef1aa..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/processpath.mdx
+++ /dev/null
@@ -1,53 +0,0 @@
----
-description: Get visibility into process-level network activity in flow logs.
----
-
-# Enable process-level information in flow logs
-
-## Big picture
-
-Configure $[prodname] to collect process executable path and arguments and add them to flow logs.
-
-## Value
-
-Get visibility into the network activity at the process level using $[prodname] flow logs.
-
-## Concepts
-
-### eBPF kprobe programs
-
-eBPF is a Linux kernel technology that allows safe mini-programs to be attached to various hooks inside the kernel. To collect the path and arguments of short-lived processes, this feature uses an eBPF kprobe program.
-
-### Host's PID namespace
-
-For long-lived processes, path and arguments are read from `/proc/pid/cmdline`. This requires access to the host's PID namespace. If the access is not available then the process path and arguments will only be captured (by the eBPF kprobes) for newly-created processes.
-
-## Before you begin
-
-Ensure that your kernel contains support for eBPF kprobes that $[prodname] uses. The minimum supported
-kernel for this feature is `v4.4.0`.
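-
-A quick way to check the kernel version on a node is:
-
-```bash
-# Run on the node (or in a host-networked debug pod).
-uname -r
-```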
-
-## Privileges
-
-For full functionality, this feature requires the `$[noderunning]` `DaemonSet` to have access to the host's PID namespace. The Tigera Operator will automatically grant this extra privilege to the daemonset if the feature is enabled in the operator's LogCollector resource, as described below.
-
-## How to
-
-### Enable process path and argument collection
-
-$[prodname] can be configured to enable process path and argument collection on supported Linux kernels
-using the command:
-
-```bash
-kubectl patch logcollector.operator.tigera.io tigera-secure --type merge -p '{"spec":{"collectProcessPath":"Enabled"}}'
-```
-
-Enabling or disabling `collectProcessPath` causes a rolling update of the `$[noderunning]` DaemonSet.
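-
-You can watch the rolling update complete with a command along these lines (this assumes the daemonset runs in the `calico-system` namespace, as in operator-managed installations):
-
-```bash
-kubectl rollout status daemonset/$[noderunning] -n calico-system
-```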
-
-### View process path and arguments in flow logs using Kibana
-
-Navigate to the Kibana Flow logs dashboard to view process path and arguments associated with a flow log entry.
-
-The executable path appears in the `process_name` field, and `process_args` contains the executable arguments. Under certain circumstances the executable
-path and arguments cannot be collected; in that case, `process_name` contains the task name and `process_args`
-is empty. These fields are described in the [flow log data types document](datatypes.mdx).
diff --git a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/tcpstats.mdx b/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/tcpstats.mdx
deleted file mode 100644
index 313beb391d..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/flow/tcpstats.mdx
+++ /dev/null
@@ -1,43 +0,0 @@
----
-description: Enabling TCP socket stats information in flow logs
----
-
-# Enabling TCP socket stats in flow logs
-
-## Big picture
-
-Configure $[prodname] to collect additional TCP socket statistics. While this feature is available in both iptables and eBPF dataplane modes, it uses eBPF to collect the statistics. Therefore it requires a recent Linux kernel (at least v5.3.0/v4.18.0-193 for RHEL).
-
-## Value
-
-Get visibility into the network activity at the socket level using $[prodname] flow logs.
-
-## Concepts
-
-### eBPF TC programs
-
-eBPF is a Linux kernel technology that allows safe mini-programs to be attached to various hooks inside the kernel. This feature leverages eBPF to look up the TCP socket associated with packets flowing through an interface and sends them to userspace for addition to flow logs.
-
-## Before you begin
-
-Ensure that your kernel contains support for eBPF that $[prodname] uses. The minimum supported
-kernel for TCP socket stats is `v5.3.0`. For distros based on RHEL, the minimum kernel version is `v4.18.0-193`.
-
-## How to
-
-### Enable TCP stats collection
-
-$[prodname] can be configured to enable TCP socket stats collection on supported Linux kernels
-using the command:
-
-```bash
-kubectl patch felixconfiguration default -p '{"spec":{"flowLogsCollectTcpStats":true}}'
-```
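-
-To turn collection off again, set the same field back to `false`:
-
-```bash
-kubectl patch felixconfiguration default -p '{"spec":{"flowLogsCollectTcpStats":false}}'
-```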
-
-### View TCP stats in flow logs using Kibana
-
-Navigate to the Kibana Flow logs dashboard to view TCP stats associated with a flow log entry.
-
-The additional fields collected are `tcp_mean_send_congestion_window`, `tcp_min_send_congestion_window`, `tcp_mean_smooth_rtt`, `tcp_max_smooth_rtt`,
-`tcp_mean_min_rtt`, `tcp_max_min_rtt`, `tcp_mean_mss`, `tcp_min_mss`, `tcp_total_retransmissions`, `tcp_lost_packets`, and `tcp_unrecovered_to`.
-These fields are described in the [flow log data types document](datatypes.mdx).
diff --git a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/index.mdx b/calico-cloud_versioned_docs/version-20-1/visibility/elastic/index.mdx
deleted file mode 100644
index da62035e69..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/index.mdx
+++ /dev/null
@@ -1,11 +0,0 @@
----
-description: Configure logs for visibility in Manager UI.
-hide_table_of_contents: true
----
-
-# Manage Calico Cloud logs
-
-import DocCardList from '@theme/DocCardList';
-import { useCurrentSidebarCategory } from '@docusaurus/theme-common';
-
-<DocCardList items={useCurrentSidebarCategory().items} />
diff --git a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/l7/configure.mdx b/calico-cloud_versioned_docs/version-20-1/visibility/elastic/l7/configure.mdx
deleted file mode 100644
index b152ab511c..0000000000
--- a/calico-cloud_versioned_docs/version-20-1/visibility/elastic/l7/configure.mdx
+++ /dev/null
@@ -1,144 +0,0 @@
----
-description: Configure and aggregate L7 logs.
----
-
-# Configure L7 logs
-
-## Big picture
-
-Deploy Envoy and use $[prodname] L7 logs to monitor application activity.
-
-## Value
-
-Just like L3/4 $[prodname] logs, platform operators and
-development teams want visibility into L7 logs to see how applications are interacting with each
-other. $[prodname] flow logs only display which workloads are communicating
-with each other, not the specific request details. $[prodname] provides visibility into L7 traffic without the need for a service mesh.
-
-L7 logs are also key for detecting anomalous behaviors like attempts to
-access applications, restricted URLs, and scans for particular URLs.
-
-## Concepts
-
-### About L7 logs
-
-L7 logs capture application interactions from HTTP header data in requests. Data shows what is actually sent in communications between specific pods, providing more specificity than flow logs. (Flow logs capture data only from connections for workload interactions).
-
-$[prodname] collects L7 logs by sending the selected traffic through an Envoy proxy.
-
-L7 logs are visible in Manager UI, in the service graph's HTTP tab.
-
-## Before you begin
-
-**Not supported**
-- GKE
-
-**Limitations**
-
-- L7 log collection is not supported for host-networked client pods.
-- When selecting and deselecting traffic for L7 log collection, active connections may be disrupted.
-
-
-
-## How to
-
-- [Configure Felix for log data collection](#configure-felix-for-log-data-collection)
-- [Configure L7 logs](#configure-l7-logs)
-- [View L7 logs in Manager UI](#view-l7-logs-in-manager-ui)
-
-### Configure Felix for log data collection
-
-1. Enable the Policy Sync API in Felix.
-
- For cluster-wide enablement, modify the `default` FelixConfiguration and set the field `policySyncPathPrefix` to `/var/run/nodeagent`.
-
- ```bash
- kubectl patch felixconfiguration default --type='merge' -p '{"spec":{"policySyncPathPrefix":"/var/run/nodeagent"}}'
- ```
-
-1. Configure L7 log aggregation, retention, and reporting.
-
- For help, see [Felix Configuration documentation](../../../reference/component-resources/node/felix/configuration.mdx#calico-enterprise-specific-configuration).
-
-### Configure L7 logs
-
-In this step, you will configure L7 logs, select logs for collection, and test the configuration.
-
-**Configure the ApplicationLayer resource for L7 logs**
-
-1. Create or update the [ApplicationLayer](../../../reference/installation/api.mdx#operator.tigera.io/v1.ApplicationLayer) resource named, `tigera-secure`.
-
- Example:
-
- ```yaml
- apiVersion: operator.tigera.io/v1
- kind: ApplicationLayer
- metadata:
- name: tigera-secure
- spec:
- logCollection:
- collectLogs: Enabled
- logIntervalSeconds: 5
- logRequestsPerInterval: -1
- ```
-
- Read more about the log collection specification [here](../../../reference/installation/api.mdx#operator.tigera.io/v1.LogCollector).
-
-   Applying this resource creates an `l7-log-collector` daemonset in the `calico-system` namespace.
-
-1. Ensure that the daemonset rollout progresses and that the `l7-collector` and `envoy-proxy` containers inside the daemonset reach a `Running` state.
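-
-   For example, one way to watch the rollout, using the daemonset name and namespace from the previous step:
-
-   ```bash
-   kubectl rollout status daemonset/l7-log-collector -n calico-system
-   ```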
-
-**Select traffic for L7 log collection**
-
-1. Annotate the services for which you wish to collect L7 logs, as shown.
-
- ```bash
-   kubectl annotate svc <service-name> -n <service-namespace> projectcalico.org/l7-logging=true
- ```
-
-2. To disable the L7 log collection, remove the annotation.
-
- ```bash
-   kubectl annotate svc <service-name> -n <service-namespace> projectcalico.org/l7-logging-
- ```
-
-After annotating a service for L7 log collection, only newly-established connections through that service are proxied by Envoy. Connections established before the service is annotated are not proxied or interrupted, and no logs are generated.
-
-Conversely, when a service is deselected, any previous connections established through the annotated service continue to be proxied by Envoy until they are terminated, and logs are generated.
-
-**Test your configuration**
-
-1. Identify the address used to access your cluster, where `<address>` can be:
-
- - Public address of your cluster/service
- or
- - Cluster IP of your application's service (if testing within the cluster)
-
-1. `curl` your service with a command similar to the following. You should see the `Server` header set to `envoy`.
-
- ```bash
-   curl --head <address>:<port>/<path>