crates/lib/src/cli.rs (20 additions, 2 deletions)

@@ -89,6 +89,11 @@ pub(crate) struct UpgradeOpts {
#[clap(long = "soft-reboot", conflicts_with = "check")]
pub(crate) soft_reboot: Option<SoftRebootMode>,

/// Use podman/skopeo to pull the image into an additionalimagestore, then read it from container storage.
/// This provides a unified approach that leverages existing container tooling.
#[clap(long)]
pub(crate) unified: bool,
Comment (Collaborator):
I don't think this should be a CLI flag that operates just once; it should be something like bootc image --set-unified or so and act persistently.

Also we should support setting this at install time so that it happens from the very start.
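
A rough sketch of that shape (hypothetical; bootc does not have this subcommand today, and all names here are illustrative):

    // Hypothetical sketch: a persistent toggle under `bootc image`
    // rather than a one-shot --unified flag on upgrade/switch.
    #[derive(Debug, clap::Subcommand)]
    pub(crate) enum ImageOpts {
        /// Persistently enable (or, with --disable, turn off) the unified
        /// podman-backed pull path for future upgrades and switches.
        SetUnified {
            #[clap(long)]
            disable: bool,
        },
        // ...existing `bootc image` subcommands
    }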

Comment (Collaborator):
(hmm my bad I thought we discussed this but I failed to update the issue #20 or something maybe?)

Comment (Contributor, PR author):
Ahh man yeah, we talked about saving the config on the origin file IIRC, I forgot.
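
A minimal sketch of persisting that in the origin file (the [bootc] group and key name are made up for illustration; this is not an existing bootc convention):

    // Hypothetical sketch: record the unified-pull choice in the deployment's
    // origin keyfile so it survives across invocations.
    fn set_unified_in_origin(
        sysroot: &ostree::Sysroot,
        deployment: &ostree::Deployment,
        enabled: bool,
    ) -> anyhow::Result<()> {
        let origin = deployment
            .origin()
            .ok_or_else(|| anyhow::anyhow!("deployment has no origin"))?;
        // Group/key names are illustrative only.
        origin.set_boolean("bootc", "unified-storage", enabled);
        sysroot.write_origin_file(deployment, Some(&origin), ostree::gio::Cancellable::NONE)?;
        Ok(())
    }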


#[clap(flatten)]
pub(crate) progress: ProgressOptions,
}
@@ -144,6 +149,11 @@ pub(crate) struct SwitchOpts {
/// Target image to use for the next boot.
pub(crate) target: String,

/// Use podman/skopeo to pull the image into an additionalimagestore, then read it from container storage.
/// This provides a unified approach that leverages existing container tooling.
#[clap(long)]
pub(crate) unified: bool,

#[clap(flatten)]
pub(crate) progress: ProgressOptions,
}
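
As written in this PR, the opt-in is per invocation, e.g. (the image reference below is a placeholder):

    bootc upgrade --unified
    bootc switch --unified quay.io/example/os:latest

Per the review discussion above, this would likely become a persistent setting instead.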
@@ -975,7 +985,11 @@ async fn upgrade(opts: UpgradeOpts) -> Result<()> {
}
}
} else {
let fetched = crate::deploy::pull(repo, imgref, None, opts.quiet, prog.clone()).await?;
let fetched = if opts.unified {
crate::deploy::pull_unified(repo, imgref, None, opts.quiet, prog.clone(), sysroot).await?
} else {
crate::deploy::pull(repo, imgref, None, opts.quiet, prog.clone()).await?
Comment (Collaborator):
This relates to #1599 (comment)

Basically how about changing this code to take a Storage which would hold the persistent flag, and then crate::deploy::pull would itself query that flag and change its behavior.

That would also implicitly then fix the install path to behave the same.
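
A sketch of that suggestion (the flag accessor, repo accessor, and the renamed ostree-path helper are all hypothetical names, not existing APIs):

    // Hypothetical sketch: pull() takes the Storage and consults a persisted
    // flag itself, so upgrade, switch, and install all behave consistently.
    pub(crate) async fn pull(
        store: &Storage,
        imgref: &ImageReference,
        target_imgref: Option<&OstreeImageReference>,
        quiet: bool,
        prog: ProgressWriter,
    ) -> Result<Box<ImageState>> {
        let repo = store.repo(); // however Storage exposes the ostree repo
        if store.unified_pull_enabled()? {
            // Unified path added in this PR.
            pull_unified(&repo, imgref, target_imgref, quiet, prog, store).await
        } else {
            // Today's ostree-container pull path, renamed here for illustration.
            pull_ostree(&repo, imgref, target_imgref, quiet, prog).await
        }
    }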

};
let staged_digest = staged_image.map(|s| s.digest().expect("valid digest in status"));
let fetched_digest = &fetched.manifest_digest;
tracing::debug!("staged: {staged_digest:?}");
@@ -1096,7 +1110,11 @@ async fn switch(opts: SwitchOpts) -> Result<()> {

let new_spec = RequiredHostSpec::from_spec(&new_spec)?;

let fetched = crate::deploy::pull(repo, &target, None, opts.quiet, prog.clone()).await?;
let fetched = if opts.unified {
crate::deploy::pull_unified(repo, &target, None, opts.quiet, prog.clone(), sysroot).await?
} else {
crate::deploy::pull(repo, &target, None, opts.quiet, prog.clone()).await?
};

if !opts.retain {
// By default, we prune the previous ostree ref so it will go away after later upgrades
crates/lib/src/deploy.rs (92 additions, 0 deletions)

@@ -380,6 +380,98 @@ pub(crate) async fn prepare_for_pull(
Ok(PreparedPullResult::Ready(Box::new(prepared_image)))
}

/// Unified approach: First pull with podman to containers-storage, then prepare from containers-storage
pub(crate) async fn prepare_for_pull_unified(
repo: &ostree::Repo,
imgref: &ImageReference,
target_imgref: Option<&OstreeImageReference>,
store: &Storage,
) -> Result<PreparedPullResult> {
// Ensure bootc storage is properly initialized before using unified storage
let _imgstore = store.get_ensure_imgstore()?;

// First, pull the image using podman with unified storage
crate::podman::pull_image_unified(&format!("{imgref:#}")).await?;

// Now create a containers-storage reference to the pulled image
let containers_storage_imgref = ImageReference {
transport: "containers-storage".to_string(),
image: imgref.image.clone(),
signature: imgref.signature.clone(),
};
let ostree_imgref = &OstreeImageReference::from(containers_storage_imgref);

// Use the standard preparation flow but reading from containers-storage
let mut imp = new_importer(repo, ostree_imgref).await?;
if let Some(target) = target_imgref {
imp.set_target(target);
}
let prep = match imp.prepare().await? {
PrepareResult::AlreadyPresent(c) => {
println!("No changes in {imgref:#} => {}", c.manifest_digest);
return Ok(PreparedPullResult::AlreadyPresent(Box::new((*c).into())));
}
PrepareResult::Ready(p) => p,
};
check_bootc_label(&prep.config);
if let Some(warning) = prep.deprecated_warning() {
ostree_ext::cli::print_deprecated_warning(warning).await;
}
ostree_ext::cli::print_layer_status(&prep);
let layers_to_fetch = prep.layers_to_fetch().collect::<Result<Vec<_>>>()?;

let prepared_image = PreparedImportMeta {
imp,
n_layers_to_fetch: layers_to_fetch.len(),
layers_total: prep.all_layers().count(),
bytes_to_fetch: layers_to_fetch.iter().map(|(l, _)| l.layer.size()).sum(),
bytes_total: prep.all_layers().map(|l| l.layer.size()).sum(),
digest: prep.manifest_digest.clone(),
prep,
};

Ok(PreparedPullResult::Ready(Box::new(prepared_image)))
}

/// Unified pull: Use podman to pull to containers-storage, then read from there
pub(crate) async fn pull_unified(
repo: &ostree::Repo,
imgref: &ImageReference,
target_imgref: Option<&OstreeImageReference>,
quiet: bool,
prog: ProgressWriter,
store: &Storage,
) -> Result<Box<ImageState>> {
match prepare_for_pull_unified(repo, imgref, target_imgref, store).await? {
PreparedPullResult::AlreadyPresent(existing) => {
// Log that the image was already present (Debug level since it's not actionable)
const IMAGE_ALREADY_PRESENT_ID: &str = "5c4d3e2f1a0b9c8d7e6f5a4b3c2d1e0f"; // 128-bit journald MESSAGE_ID: 32 hex chars
Comment (Collaborator):
Note that if we go this route, we should also not log to the journal in the ostree pull path because otherwise we're double logging.

tracing::debug!(
message_id = IMAGE_ALREADY_PRESENT_ID,
bootc.image.reference = &imgref.image,
bootc.image.transport = &imgref.transport,
bootc.status = "already_present",
"Image already present: {}",
imgref
);
Ok(existing)
}
PreparedPullResult::Ready(prepared_image_meta) => {
// Log that we're pulling a new image
const PULLING_NEW_IMAGE_ID: &str = "6d5e4f3a2b1c0d9e8f7a6b5c4d3e2f1a"; // 128-bit journald MESSAGE_ID: 32 hex chars
tracing::info!(
message_id = PULLING_NEW_IMAGE_ID,
bootc.image.reference = &imgref.image,
bootc.image.transport = &imgref.transport,
bootc.status = "pulling_new",
"Pulling new image: {}",
imgref
);
pull_from_prepared(imgref, quiet, prog, *prepared_image_meta).await
}
}
}

#[context("Pulling")]
pub(crate) async fn pull_from_prepared(
imgref: &ImageReference,
crates/lib/src/podman.rs (24 additions, 0 deletions)

@@ -51,3 +51,27 @@ pub(crate) fn storage_exists(root: &Dir, path: impl AsRef<Utf8Path>) -> Result<bool> {
pub(crate) fn storage_exists_default(root: &Dir) -> Result<bool> {
storage_exists(root, CONTAINER_STORAGE.trim_start_matches('/'))
}

/// Pull an image using podman with additionalimagestore pointing to bootc storage.
/// This allows the image to be available in both regular container storage and bootc storage.
pub(crate) async fn pull_image_unified(imgref: &str) -> Result<()> {
use bootc_utils::CommandRunExt;

// Use podman pull with additionalimagestore pointing to bootc storage
let bootc_storage_path = "/usr/lib/bootc/storage";
Comment (Collaborator):
Use podstorage.rs instead please
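
I.e., roughly (assuming podstorage.rs exposes, or grows, a canonical path constant; the name below is hypothetical):

    // Hypothetical: take the path from one shared definition in podstorage.rs
    // instead of a hard-coded literal here.
    let bootc_storage_path = crate::podstorage::BOOTC_STORAGE_PATH;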

Comment (Collaborator):
To elaborate on this, an important aspect is that currently the GC of the bootc/storage instance is rooted in the set of LBIs (logically bound images). That will need to be extended to include the host image.

Comment (Collaborator):
Another big task related to this: we'll eventually really want to use the podman HTTP APIs so we can properly monitor progress. Right now we rely, quite crudely, on running podman synchronously against the existing tty and reusing its CLI output. But that can't really work once we need to render progress through our own APIs.

https://lib.rs/crates/bollard might be a good choice for this?
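
A rough sketch of what that could look like via podman's Docker-compatible API socket (socket path, bollard version/API details, and error handling are all assumptions, not settled choices):

    // Hypothetical sketch: pull through the podman REST API with bollard so we
    // get structured progress events instead of scraping podman's tty output.
    use bollard::image::CreateImageOptions;
    use bollard::Docker;
    use futures_util::StreamExt;

    async fn pull_with_progress(image: &str) -> anyhow::Result<()> {
        // Assumes the podman API socket is enabled at this path.
        let docker = Docker::connect_with_unix(
            "unix:///run/podman/podman.sock",
            120,
            bollard::API_DEFAULT_VERSION,
        )?;
        let opts = CreateImageOptions {
            from_image: image,
            ..Default::default()
        };
        let mut events = docker.create_image(Some(opts), None, None);
        while let Some(event) = events.next().await {
            let info = event?;
            // Each event carries per-layer status and byte-level progress that
            // could be fed into bootc's own ProgressWriter.
            tracing::debug!(status = ?info.status, progress = ?info.progress_detail, "pull progress");
        }
        Ok(())
    }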


tracing::info!("Pulling image via podman with unified storage: {}", imgref);

std::process::Command::new("podman")
.args([
"pull",
"--storage-opt",
&format!("additionalimagestore={}", bootc_storage_path),
imgref,
])
.run_capture_stderr()
.map_err(|e| anyhow::anyhow!("Failed to pull image via podman: {}", e))?;

tracing::info!("Successfully pulled image to unified storage: {}", imgref);
Ok(())
}