0.33.0-rc1
Pre-release
Main changes since 0.32
New features and improvements
- Add support for Kafka 3.3.2
- Support `loadBalancerClass` attribute in listeners with `type: loadbalancer`
- Support for automatically restarting failed Connect or Mirror Maker 2 connectors
- Redesign of Strimzi User Operator to improve its scalability
- Use Java 17 as the runtime for all containers and as the language level for all modules except `api`, `crd-generator`, `crd-annotations`, and `test`
- Improved FIPS (Federal Information Processing Standards) support
- Moved from using the Jaeger exporter to OTLP exporter by default
- Kafka Exporter support for the `Recreate` deployment strategy
- ImageStream validation for Kafka Connect builds on OpenShift
- Support for configuring the metadata for the Role / RoleBinding of the Entity Operator
- Add liveness and readiness probes specifically for nodes running in KRaft combined mode
- Upgrade HTTP bridge to latest 0.24.0 release
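The new `loadBalancerClass` attribute goes in the listener configuration of the `Kafka` custom resource. A minimal sketch, assuming a listener named `external` and the AWS NLB load balancer class (both illustrative):

```yaml
# Fragment of a Kafka custom resource (.spec.kafka.listeners)
# The listener name, port, and load balancer class are illustrative.
listeners:
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    configuration:
      loadBalancerClass: service.k8s.aws/nlb
```

The class is passed through to the Kubernetes `Service` created for the listener, so any load balancer controller installed in your cluster that recognizes it can take over provisioning.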
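Automatic restarts of failed connectors are enabled per connector. A minimal sketch of a `KafkaConnector` resource with auto-restart enabled (the connector name, Connect cluster label, and connector class are illustrative):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector          # illustrative name
  labels:
    strimzi.io/cluster: my-connect   # illustrative Connect cluster name
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector
  tasksMax: 1
  autoRestart:
    enabled: true
```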
Notable changes, deprecations, and removals
- The `UseStrimziPodSet` feature gate will move to GA in Strimzi 0.35. Support for StatefulSets will be removed from Strimzi right after the 0.34 release. Please use the Strimzi 0.33 release to test StrimziPodSets in your environment and report any major or blocking issues before StatefulSet support is removed.
- The default length of any new SCRAM-SHA-512 password is now 32 characters instead of the 12 characters used in previous Strimzi versions. Existing passwords are not affected by this change until they are regenerated (for example, because the user secret is deleted). If you want to keep the original password length, you can set it using the `STRIMZI_SCRAM_SHA_PASSWORD_LENGTH` environment variable in `.spec.entityOperator.template.userOperatorContainer.env` in the `Kafka` custom resource or in the `Deployment` of the standalone User Operator:
  ```yaml
  userOperatorContainer:
    env:
      - name: STRIMZI_SCRAM_SHA_PASSWORD_LENGTH
        value: "12"
  ```
- In previous versions, the `ssl.secure.random.implementation` option in Kafka brokers was always set to `SHA1PRNG`. From Strimzi 0.33 on, the default `SecureRandom` implementation from the Java runtime is used. If you want to keep using `SHA1PRNG` as your `SecureRandom` implementation, you can configure it in `.spec.kafka.config` in your `Kafka` custom resource.
- Support for JmxTrans in Strimzi is deprecated. It is currently planned to be removed in Strimzi 0.35.0.
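The `SHA1PRNG` override mentioned above can be set through the broker configuration. A minimal sketch of the relevant `Kafka` custom resource fragment:

```yaml
# Fragment of a Kafka custom resource
spec:
  kafka:
    config:
      ssl.secure.random.implementation: SHA1PRNG
```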
- Support for
type: jaeger
tracing based on Jaeger clients and OpenTracing API was deprecated in the Strimzi 0.31 release. As the Jaeger clients are retired and the OpenTracing project is archived, we cannot guarantee their support for future versions. In Strimzi 0.32 and 0.33, we added support for OpenTelemetry tracing as a replacement. If possible, we will maintain the support fortype: jaeger
tracing until June 2023 and remove it afterward. Please migrate to OpenTelemetry as soon as possible. - When OpenTelemetry is enabled for tracing, starting from this release, the operator configures the OTLP exporter instead of the Jaeger one by default. The Jaeger exporter is even not included in the Kafka images anymore, so if you want to use it you have to add the binary by yourself. The
OTEL_EXPORTER_OTLP_ENDPOINT
environment variable has to be used instead of theOTEL_EXPORTER_JAEGER_ENDPOINT
in order to specify the OTLP endpoint to send traces to. If you are using Jaeger as the backend system for tracing, you need to have 1.35 release at least which is the first one exposing an OLTP endpoint.
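For example, in a `KafkaConnect` resource with OpenTelemetry tracing enabled, the OTLP endpoint can be passed through the container template. A sketch, assuming a Jaeger collector exposing the standard OTLP gRPC port 4317 (the service name and endpoint hostname are illustrative):

```yaml
# Fragment of a KafkaConnect custom resource
spec:
  tracing:
    type: opentelemetry
  template:
    connectContainer:
      env:
        - name: OTEL_SERVICE_NAME
          value: my-connect                        # illustrative service name
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: http://my-jaeger-collector:4317   # illustrative endpoint
```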
All changes can be found under the 0.33.0 milestone.
Maven artifacts
To test the Maven artifacts which are part of this release, use the staging repository by including the following in your `pom.xml`:

```xml
<repositories>
    <repository>
        <id>staging</id>
        <url>https://oss.sonatype.org/content/repositories/iostrimzi-1169/</url>
    </repository>
</repositories>
```
Upgrading from Strimzi 0.32.0
See the documentation for upgrade instructions.
Upgrading from Strimzi 0.22 or earlier
Direct upgrades from Strimzi 0.22 or earlier are no longer supported. You must first upgrade to one of the intermediate Strimzi versions and convert the CRD resources. For more details, see the documentation.