- 7.11.1
- 6.8.14
- 7.10.0
- 7.9.3 - 2020/10/22
- 6.8.13 - 2020/10/22
- 7.9.1 - 2020/09/03
- 7.9.0 - 2020/08/18
- 6.8.12 - 2020/08/18
- 7.8.1 - 2020/07/28
- 6.8.11 - 2020/07/28
- 7.8.0 - 2020/06/18
Following the recent license change, Elasticsearch and Kibana OSS versions are no longer available starting with 7.11.0. See the Elastic blog post for more details.
Starting from the 7.10.0 release, Helm 3 is fully supported in Elastic Helm charts and Helm 2 is deprecated.
In most cases, Helm 2to3 can be used to migrate from previous chart releases deployed with Helm 2:
```
# Install Helm 3
# Install 2to3 plugin
helm3 plugin install https://github.com/helm/helm-2to3.git
# Migrate Helm 2 local config
helm3 2to3 move config
# Migrate Helm 2 releases
helm3 2to3 convert <release-name>
# Upgrade to 7.10.0
helm3 upgrade <release-name> elastic/<chart-name> --version 7.10.0
# Cleanup Helm 2 data
helm3 2to3 cleanup
```
Migration to Helm 3 with the 7.10.0 charts release should work smoothly for the following charts:
- apm-server >= 7.6.0
- elasticsearch >= 7.4.0 (except when `persistence.labels.enabled` is true)
- filebeat >= 7.9.0
- kibana >= 7.4.0
- logstash >= 7.9.0
#916 removes some helpers used for K8S versions < 1.14.
If you are using an older K8S version, you should upgrade it or stay with helm-charts < 7.10.
Metricbeat 7.10.0 introduces a breaking change in #516 to make it compatible with Helm 3.
The removal of some `heritage` labels in the Metricbeat deployment makes upgrades fail with the following error:
```
UPGRADE FAILED
Error: Deployment.apps "mb-metricbeat-metrics" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"mb-metricbeat-metrics", "release":"mb"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
Error: UPGRADE FAILED: Deployment.apps "mb-metricbeat-metrics" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"mb-metricbeat-metrics", "release":"mb"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
```
Unfortunately, using `helm upgrade --force` with Helm 3 is not enough: this chart will need to be uninstalled and re-installed, for example as sketched below.
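A minimal sketch, assuming a Metricbeat release named `mb` (the release name and values file are hypothetical; adjust them to your own deployment):

```
# Remove the existing release, then install the new chart version
helm uninstall mb
helm install mb elastic/metricbeat --version 7.10.0 -f values.yaml
```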
If you are using `persistence.labels.enabled=true` with Elasticsearch, the upgrade will fail even with `--force`. You'll need to deploy a new release with the same `clusterName` using Helm 3, migrate your data, then remove the old release.
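A rough sketch of this procedure, assuming the existing cluster uses `clusterName: elasticsearch` and the new release is named `es-new` (both names are hypothetical). Depending on your setup, additional values such as a distinct `nodeGroup` may be needed so the new release's resources don't collide with the old one:

```
# Deploy a new release with Helm 3, reusing the existing clusterName
helm install es-new elastic/elasticsearch --version 7.10.0 \
  --set clusterName=elasticsearch
# Migrate your data (for example with snapshot and restore), then remove the old release
helm uninstall <old-release-name>
```

The final `helm uninstall` assumes the old release has already been converted to Helm 3; if it is still managed by Helm 2, delete it with the Helm 2 client instead.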
We experienced some `rendered manifests contain a resource that already exists` errors with some chart upgrades, mostly for charts deploying `ClusterRole` and `ClusterRoleBinding` resources.
Helm 3 automatically adds some new annotations (`meta.helm.sh/release-name` and `meta.helm.sh/release-namespace`) and labels (`app.kubernetes.io/managed-by`) to the chart resources during upgrade. However, it sometimes fails to add the annotations, with the error below:
```
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: ClusterRole "apm-apm-server-cluster-role" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "apm"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"
```
The workaround is to manually add these annotations and labels to the existing failing resources (using `kubectl edit`, for example), then relaunch the upgrade command.
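As a sketch, the same metadata can also be added non-interactively with `kubectl annotate` and `kubectl label`, here using the `ClusterRole` and the `apm` release from the error above (adjust resource name, release name, and namespace to your own deployment):

```
kubectl annotate clusterrole apm-apm-server-cluster-role \
  meta.helm.sh/release-name=apm \
  meta.helm.sh/release-namespace=default
kubectl label clusterrole apm-apm-server-cluster-role \
  app.kubernetes.io/managed-by=Helm
```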
#839 fixes the issue reported in #807 when using a `NodePort` Service (see Fix Logstash headless Service for more details).
See 7.9.3 - 2020/10/22 and 7.9.1 - 2020/09/03
#776 fixed an issue with the headless `Service` when using the `extraPorts` value (see Add headless Service for StatefulSet for more details). Unfortunately, it introduced a new bug when using a `NodePort` Service (#807). This is fixed by #839 in 7.9.3 (and 6.8.13).
Starting with 7.9.0, all the main blockers for Helm 3 are fixed. While automated CI tests are not yet updated to use Helm 3, deploying these charts with Helm 3 is now supported in beta.
A headless `Service` has been added to the Logstash chart in #695. The headless `Service` is required for `StatefulSets`. Helm 2 allowed deploying a `StatefulSet` without a `serviceName`; however, Helm 3 enforces this requirement and fails if `serviceName` is missing.
`StatefulSet` does not accept updates to the `serviceName` field during release upgrades, so upgrading the Logstash chart from a previous version will require using `helm upgrade --force`.
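For example, a sketch assuming a Logstash release named `logstash` installed from the `elastic/logstash` chart (the release name is hypothetical; reuse your own values file if you have one):

```
helm upgrade logstash elastic/logstash --version 7.9.0 --force
```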
Edit: this change introduced a bug when using the `extraPorts` value (#765). This will be fixed by #776 in the 7.9.1 (and 6.8.13) release. Meanwhile, you should roll back to the 7.8.0 (or 6.8.10) release of the Logstash chart if you are using a custom `extraPorts` value.
The stable Elasticsearch chart is now deprecated in favor of the Elastic Elasticsearch chart (see the stable Elasticsearch chart notice). Existing users of the stable Elasticsearch chart can use the migration guide.
APM Server's default memory limit is increased in #664.
This change may impact the available memory capacity in your Kubernetes cluster.
To come back to the former default values, use the following values:
```yaml
resources:
  limits:
    memory: "200Mi"
```
The Elasticsearch service selector no longer includes the `heritage` label. This label is immutable and causes issues with the latest Helm 3 version, which does more verification (`heritage` has the `Tiller` value with Helm 2 but the `Helm` value with Helm 3).
As this change forces `Service` recreation, a short disruption of a few seconds may be noticed during the upgrade to 7.8.0.
See #437 for more details.
Elasticsearch nodes could be restarted too quickly during an upgrade or rolling
restart, potentially resulting in service disruption.
This is due to a bug introduced by the changes to the Elasticsearch `readinessProbe` in #586.
Elasticsearch, Kibana, Filebeat and Metricbeat are moving from beta to GA and are supported by Elastic with the following limitations:
- only released charts coming from the Elastic Helm repo or GitHub releases are supported.
- released charts are only supported when using the same chart version and application version (i.e. using the 7.7.0 chart with the 6.8.8 or 7.6.2 application is not supported); see the example below.
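As an illustration (a sketch; the release name is hypothetical), a supported deployment keeps the chart version and the application image version aligned:

```
helm upgrade --install elasticsearch elastic/elasticsearch --version 7.7.0 --set imageTag=7.7.0
```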
Elastic Helm charts repository is now following a new branching model:
- `master` branch is now a development branch for the next major release.
- new `7.x` branch is a development branch for the next minor release using SNAPSHOT Docker images.
- new `7.7` branch is a development branch for the next patch release using SNAPSHOT Docker images.
The Filebeat chart default config is now using the `container` input instead of the `docker` input in #568, roughly as sketched below.
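A simplified sketch of what the new default input looks like in the `filebeatConfig` value (paths and options trimmed here; check the chart's default values for the exact config):

```yaml
filebeatConfig:
  filebeat.yml: |
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
```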
Metricbeat upgrades fail with a `spec.selector: Invalid value: ... field is immutable` error. This is related to the Metricbeat deployment selector including the chart version, which changes between releases while the selector field is immutable. You should use `helm upgrade --force` to upgrade Metricbeat. See #621 for more details.
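For example (a sketch; the release name `metricbeat` is hypothetical, and you should reuse your own values file):

```
helm upgrade metricbeat elastic/metricbeat --version 7.7.0 --force
```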
Metricbeat is now using dedicated values for daemonset and deployment config. The old values are still working but are now deprecated. See #572 for more details.
Warning: when upgrading Metricbeat while using a custom `metricbeatConfig` value for `kube-state-metrics-metricbeat.yml`, the Metricbeat deployment fails with `missing field accessing 'metricbeat.modules.0.hosts.0' (source:'metricbeat.yml')`.
In this case, the `metricbeatConfig.kube-state-metrics-metricbeat.yml` value should be migrated to `deployment.metricbeatConfig.metricbeat.yml`, as sketched below. See #623 for more details.
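A sketch of the migration in a values file (the kubernetes module contents shown here are illustrative, not the chart defaults):

```yaml
# Before: deprecated top-level value
metricbeatConfig:
  kube-state-metrics-metricbeat.yml: |
    metricbeat.modules:
    - module: kubernetes
      hosts: ["${KUBE_STATE_METRICS_HOSTS}"]

# After: dedicated deployment value
deployment:
  metricbeatConfig:
    metricbeat.yml: |
      metricbeat.modules:
      - module: kubernetes
        hosts: ["${KUBE_STATE_METRICS_HOSTS}"]
```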
Kibana default resources (cpu/memory requests and limits) are increased in #540.
This change may impact the available cpu/memory capacity in your Kubernetes cluster.
To come back to the former default values, use the following values:
```yaml
extraEnvs:
  - name: "NODE_OPTIONS"
    value: ""
resources:
  requests:
    cpu: "100m"
    memory: "500Mi"
  limits:
    cpu: "1000m"
    memory: "1Gi"
```
The Elasticsearch default cpu request is increased in #458, following our recommendation that resource requests and limits should have the same values.
This change may impact the available cpu capacity in your Kubernetes cluster.
To come back to the former default value, use the following values:
```yaml
resources:
  requests:
    cpu: "100m"
```
The kube-state-metrics chart dependency is upgraded from 1.6.0 to 2.4.1 in #352. This causes Metricbeat chart upgrades from versions < 7.5.0 to fail with the following error:
```
UPGRADE FAILED
Error: Deployment.apps "metricbeat-kube-state-metrics" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/name":"kube-state-metrics"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable && Deployment.apps "metricbeat-metricbeat-metrics" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"metricbeat-metricbeat-metrics", "chart":"metricbeat-7.5.0", "heritage":"Tiller", "release":"metricbeat"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
Error: UPGRADE FAILED: Deployment.apps "metricbeat-kube-state-metrics" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/name":"kube-state-metrics"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable && Deployment.apps "metricbeat-metricbeat-metrics" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"metricbeat-metricbeat-metrics", "chart":"metricbeat-7.5.0", "heritage":"Tiller", "release":"metricbeat"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
```
The workaround is to use the `--force` argument for the `helm upgrade` command, which will force the Metricbeat resources update through delete/recreate.
If you were using the default Elasticsearch version from the previous release (6.6.2-alpha1), you will first need to upgrade to Elasticsearch 6.7.1 before being able to upgrade to 7.0.0. You can do this by adding this to your values file:
```yaml
esMajorVersion: 6
imageTag: 6.7.1
```
If you are upgrading an existing cluster that did not override the default `storageClassName`, you will now need to specify the `storageClassName`. This only affects existing clusters and was changed in #94. The advantage of this is that now the Helm chart will just use the default `storageClassName` rather than needing to override it for any providers where it is not called `standard`.
```yaml
volumeClaimTemplate:
  storageClassName: "standard"
```