
[WIP] KEP-3926: show how we address client cache inconsistency issue #4949

Open

tkashem wants to merge 1 commit into master from kep-3926-changes
Conversation

@tkashem (Contributor) commented Nov 5, 2024

  • One-line PR description:
  • Issue link:
  • Other comments:

@k8s-ci-robot added the cncf-cla: yes label (Indicates the PR's author has signed the CNCF CLA.) Nov 5, 2024
@k8s-ci-robot (Contributor):
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: tkashem
Once this PR has been reviewed and has the lgtm label, please assign enj for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the kind/kep (Categorizes KEP tracking issues and PRs modifying the KEP directory), sig/auth (Categorizes an issue or PR as relevant to SIG Auth.), and size/M (Denotes a PR that changes 30-99 lines, ignoring generated files.) labels Nov 5, 2024
@tkashem changed the title from "KEP-3926: show how we address client cache inconsistency issue" to "[WIP] KEP-3926: show how we address client cache inconsistency issue" Nov 5, 2024
@k8s-ci-robot added the do-not-merge/work-in-progress label (Indicates that a PR should not merge because it is a work in progress.) Nov 5, 2024
@tkashem (Contributor, Author) commented Nov 5, 2024

/cc @deads2k @dgrisonnet @stlaz

Comment on lines 418 to 422
A client backed up an informer already has the object in its cache, since the
client never receives a `watch.DELETED` event the object remains in the lsiter
cache. This creates an inconsistency - retrieving the object from the cache
yields the object, but if we get it from the storage we see a `corrupt object`
error.
Reviewer (Contributor):

Based on the proposed solution, this means that all existing clients will remain in a corrupted state indefinitely without the ability to recover. To have a safe rollout for our own clients (not even considering external clients yet), this means we need to promote handling for this special error to locked-to-true status until every supported kubelet (n-3) has the value locked-to-true before we can start enabling the server-side capability. Otherwise we can end up with unrecoverable corruption, correct?
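A minimal Go sketch of the inconsistency described in the quoted lines, assuming a ConfigMap informer; the namespace `demo`, the object name `example`, and the kubeconfig loading are placeholders, and the exact error text depends on what the server returns for the corrupt object:

```go
// Sketch: after an unsafe delete, an informer-backed lister can still return the
// object while a direct read against the apiserver surfaces an error.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	lister := factory.Core().V1().ConfigMaps().Lister()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)

	// The lister serves from the informer cache; with no watch.DELETED event the
	// stale object is still returned here.
	if cm, err := lister.ConfigMaps("demo").Get("example"); err == nil {
		fmt.Printf("cache copy still present: %s\n", cm.Name)
	}

	// A direct read goes to storage and surfaces the error instead.
	if _, err := client.CoreV1().ConfigMaps("demo").Get(context.TODO(), "example", metav1.GetOptions{}); err != nil {
		fmt.Printf("live read failed: %v\n", err)
	}
}
```

Because no `watch.DELETED` event ever reaches the reflector, the first read keeps returning the stale cached copy while the direct read keeps failing.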

@tkashem force-pushed the kep-3926-changes branch 3 times, most recently from 3c01a2e to 1064dac on November 6, 2024 01:34
@k8s-ci-robot added the size/L label (Denotes a PR that changes 100-499 lines, ignoring generated files.) and removed the size/M label (Denotes a PR that changes 30-99 lines, ignoring generated files.) Nov 6, 2024
Comment on lines +418 to +420
A client backed up by an informer already has the object in its cache, since the
client never receives a `watch.DELETED` event the object remains in the lsiter
cache. This creates an inconsistency - retrieving the object from the cache
Reviewer (Contributor):

Not sure I can parse the sentence. You are saying the cache will be stuck?

the old cache be replaced.

- Pros: the existing clients will work without any code change
- Cons: relisting is expensive
Reviewer (Contributor):

I think this is acceptable for the situation of a single object being unsafely deleted. Wondering, for the use case of a lost encryption key, aren't we confronted with potentially many objects? Each will cause a relist 🤔
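A simplified, hypothetical watch loop (not client-go's actual reflector, and not from this PR) that illustrates why the relist is the expensive part: incremental events are cheap to apply, but once the stream reports an error the whole collection has to be listed again to rebuild the cache, and with many corrupted objects that cost is paid repeatedly:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// relistAndWatch rebuilds the cache from a full list, then applies incremental
// watch events until the stream breaks. Simplified illustration only.
func relistAndWatch(ctx context.Context, client kubernetes.Interface, cache map[string]*corev1.ConfigMap) error {
	// Full relist: every object is transferred again and the old cache is replaced.
	list, err := client.CoreV1().ConfigMaps("demo").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for k := range cache {
		delete(cache, k)
	}
	for i := range list.Items {
		cache[list.Items[i].Name] = &list.Items[i]
	}

	// Incremental updates from the point the list was taken.
	w, err := client.CoreV1().ConfigMaps("demo").Watch(ctx, metav1.ListOptions{ResourceVersion: list.ResourceVersion})
	if err != nil {
		return err
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		switch ev.Type {
		case watch.Added, watch.Modified:
			cm := ev.Object.(*corev1.ConfigMap)
			cache[cm.Name] = cm
		case watch.Deleted:
			cm := ev.Object.(*corev1.ConfigMap)
			delete(cache, cm.Name)
		case watch.Error:
			// The stream is broken; the caller has to call relistAndWatch again,
			// paying the cost of a full list per incident.
			return fmt.Errorf("watch error: %v", ev.Object)
		}
	}
	return ctx.Err()
}
```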

@sttts (Contributor) commented Nov 6, 2024:

In the whole discussion about downsides of the approaches, I am wondering about the alternative of doing a manual delete directly through etcd. Where are we with that today? How do the apiserver and clients (reflector) behave in that case?

@tkashem (Contributor, Author) replied:

I believe manually deleting the etcd key directly will have the same effect; etcd will relay the delete event to any client watching for it.
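A small sketch with the etcd v3 client showing the relay behavior being described; the endpoint and the storage key are illustrative placeholders:

```go
// A watcher on an etcd key receives a DELETE event even when the key is removed
// directly in etcd (e.g. via etcdctl del), so downstream watchers are still notified.
package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/api/v3/mvccpb"
	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	key := "/registry/secrets/demo/example" // hypothetical storage key

	// Watch the key; a direct delete in etcd still delivers a DELETE event here.
	for resp := range cli.Watch(context.Background(), key) {
		for _, ev := range resp.Events {
			if ev.Type == mvccpb.DELETE {
				fmt.Printf("observed DELETE for %s\n", ev.Kv.Key)
			}
		}
	}
}
```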

Reviewer (Contributor):

In other words: we don't regress even if we need a client change? That is crucial IMO and should be documented.

Reviewer (Contributor):

Rereading your option description: option A does not need that client change. And your implementation PR kubernetes/kubernetes#127513 has no client-go changes (beyond the added test).

yields the object, but if we get it from the storage we see a `corrupt object`
error.

There are a few factors we need to consider to understan how the client is impacted:
Reviewer (Contributor):

Suggested change ("understan" corrected to "understand"):
There are a few factors we need to consider to understand how the client is impacted:

@k8s-ci-robot (Contributor):
@tkashem: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| pull-enhancements-verify | 2be03c0 | link | true | /test pull-enhancements-verify |

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

Labels
- cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA.)
- do-not-merge/work-in-progress (Indicates that a PR should not merge because it is a work in progress.)
- kind/kep (Categorizes KEP tracking issues and PRs modifying the KEP directory)
- sig/auth (Categorizes an issue or PR as relevant to SIG Auth.)
- size/L (Denotes a PR that changes 100-499 lines, ignoring generated files.)
Projects
Status: In Review

4 participants