OCI AI Blueprints v1.0.9

@grantneumanoracle released this 13 Oct 18:10
362fc09

OCI AI Blueprints v1.0.9 - What's New

Overview

Version 1.0.9 extends Deployment Groups to support job-based workloads, adds a new CPU inference example, and improves deployment stability.


🎯 Key Features & Improvements

1. Deployment Groups: Now Support Jobs

Deployment Groups (introduced in v1.0.5 for services) now work with job-based workloads as well. This means you can:

  • Run batch processing jobs in different deployment groups
  • Execute training jobs with group isolation
  • Schedule one-time or periodic tasks across multiple logical environments
  • Maintain consistent grouping between your services and their associated jobs

This completes the deployment groups feature set, allowing full workload isolation for both long-running services and short-lived jobs.
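As a rough illustration, a job recipe can carry the same group assignment as the services it supports. The sketch below is hypothetical: the field names (in particular `deployment_group`) and values are assumptions based on the description above, not the exact recipe schema — consult the sample blueprints in the repository for the authoritative format.

```json
{
  "recipe_mode": "job",
  "deployment_name": "nightly-batch-embeddings",
  "deployment_group": "staging",
  "recipe_replica_count": 1
}
```

Submitting a job with the same `deployment_group` as an existing service would keep the two workloads logically grouped while the job itself remains short-lived.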

2. New CPU Inference Blueprint

A ready-to-use example blueprint is now available for running CPU-based model inference on OCI A2 bare-metal shapes. It provides:

  • Pre-configured infrastructure template for A2 BM instances
  • Reference architecture for CPU-based model serving
  • Lower-cost alternative for inference workloads that don't require GPU acceleration

📁 Find it at: docs/sample_blueprints/model_serving/cpu-inference/cpu-inference-A2-bm.json
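For orientation, a CPU inference blueprint of this kind typically pins the node shape and the serving container. The fragment below is an illustrative sketch only — the shape name, image placeholder, and field names are assumptions; the actual file at the path above is the reference.

```json
{
  "recipe_mode": "service",
  "deployment_name": "cpu-inference-a2-bm",
  "recipe_node_shape": "BM.Standard.A2.64",
  "recipe_image_uri": "<your-inference-container-image>",
  "recipe_container_port": "8000"
}
```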

3. Enhanced Deployment Stability

Multiple fixes to the Helm chart configurations make deployments more reliable:

  • Corrected Kong ingress controller settings
  • Fixed Helm release dependencies and value overrides
  • More consistent module outputs for downstream integrations

Result: Fewer deployment failures and more predictable infrastructure provisioning.


🔧 Bug Fixes

  • Resolved issues with NVIDIA Multi-Instance GPU (MIG) deployments
  • Fixed Helm chart parameter inconsistencies across workload modules

💡 What This Means for You

If you're using Deployment Groups: You can now apply the same organizational benefits to job workloads that you've been using for services.

If you're deploying new environments: Expect smoother, more reliable deployments with fewer Helm-related errors.

If you need CPU-based inference: Use the new A2 blueprint as a starting point for cost-effective model serving.


📝 Notes

  • The deployment groups enhancement for jobs requires no changes to your Terraform code; it is exposed through the backend API when creating or updating recipes
  • All improvements are backward compatible with existing v1.0.8 deployments