deploy: Remove lock when re-staging #3077

Merged
merged 1 commit on Oct 13, 2023

Conversation

cgwalters
Member

This closes the biggest foot-gun when doing e.g.
`rpm-ostree rebase` when zincati is running on a FCOS system.

Previously if zincati happened to have staged + locked a deployment,
we'd keep around the lock which is definitely not what is desired.
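The behavior change can be sketched with plain files in a temporary directory. This is a minimal simulation: `stage_deployment` and the file paths are illustrative stand-ins, not the real libostree API.

```shell
# Minimal simulation of the new behavior: re-staging clears a leftover lock.
# stage_deployment and the paths are hypothetical stand-ins for libostree.
run=$(mktemp -d)
lock="$run/staged-deployment-locked"

stage_deployment() {
    # With this change, staging a new deployment first removes any stale lock
    rm -f "$lock"
    printf '%s\n' "$1" > "$run/staged-deployment"
}

touch "$lock"                   # e.g. zincati locked the first staged deployment
stage_deployment "rebased-os"   # admin re-stages via e.g. rpm-ostree rebase
test ! -e "$lock" && echo "lock removed on re-stage"
rm -rf "$run"
```

Previously the `touch`ed lock would have survived the second staging, so the rebased deployment would silently never finalize on reboot.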
Member

@jmarrero left a comment

lgtm

@jmarrero jmarrero merged commit 5d92407 into ostreedev:main Oct 13, 2023
21 checks passed
jlebon added a commit to jlebon/rpm-ostree that referenced this pull request Apr 26, 2024
Now that we've stabilized and made public the finalization APIs,
let's use the new bindings.

This also fixes an issue where when creating a locked deployment using
the legacy API (i.e. touching the `/run/ostree/staged-deployment-locked`
file before calling the staging API), if a staged deployment already
exists, libostree would just nuke the lockfile (this behaviour was
introduced in ostreedev/ostree#3077).

In theory the legacy API (via the lockfile) should keep working, but
the core issue is that there's no way for libostree to know if the
lockfile is carried-over state, or was freshly created for the current
invocation.
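The ambiguity above can be sketched with plain files: two different intents leave identical on-disk state. This is an illustrative simulation; the temp directories stand in for `/run/ostree` in two scenarios, not the real staging flow.

```shell
# Two different intents produce identical on-disk state, so libostree
# cannot tell them apart. Directories are stand-ins for /run/ostree.
carried=$(mktemp -d)   # scenario 1: lockfile carried over from a prior staging
fresh=$(mktemp -d)     # scenario 2: caller just created it for this invocation

touch "$carried/staged-deployment-locked"
touch "$fresh/staged-deployment-locked"

# The visible state is the same in both scenarios:
[ "$(ls "$carried")" = "$(ls "$fresh")" ] && echo "indistinguishable on-disk state"
rm -rf "$carried" "$fresh"
```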

So let's not try to salvage the legacy API and just move over to the
new one.

We already have finalization tests; they will now verify that the new API
functions correctly, but they stop looking for the legacy lockfile. We could
instead inspect the staged deployment GVariant, but these checks were
redundant anyway since the tests verify finalization by actually
rebooting and/or not using `finalize-deployment --allow-unlocked`.

Fixes: coreos/fedora-coreos-tracker#1691
jlebon added a commit to jlebon/rpm-ostree that referenced this pull request May 6, 2024 (same commit message as above)