diff --git a/docs/kusion/1-what-is-kusion/1-overview.md b/docs/kusion/1-what-is-kusion/1-overview.md
index bbbc5fbb..7141aeee 100644
--- a/docs/kusion/1-what-is-kusion/1-overview.md
+++ b/docs/kusion/1-what-is-kusion/1-overview.md
@@ -6,7 +6,7 @@ slug: /
# Overview
-Welcome to Kusion! This introduction section covers what Kusion is, the Kusion workflow, and how Kusion compares to other software. If you just want to dive into using Kusion, feel free to skip ahead to the [Getting Started](getting-started/install-kusion) section.
+Welcome to Kusion! This introduction section covers what Kusion is, the Kusion workflow, and how Kusion compares to other software. If you just want to dive into using Kusion, feel free to skip ahead to the [Getting Started](../2-getting-started/2-getting-started-with-kusion-cli/0-install-kusion.md) section.
## What is Kusion?
diff --git a/docs/kusion/2-getting-started/1-install-kusion.md b/docs/kusion/2-getting-started/2-getting-started-with-kusion-cli/0-install-kusion.md
similarity index 99%
rename from docs/kusion/2-getting-started/1-install-kusion.md
rename to docs/kusion/2-getting-started/2-getting-started-with-kusion-cli/0-install-kusion.md
index 540881d6..556228b4 100644
--- a/docs/kusion/2-getting-started/1-install-kusion.md
+++ b/docs/kusion/2-getting-started/2-getting-started-with-kusion-cli/0-install-kusion.md
@@ -1,7 +1,7 @@
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-# Install Kusion
+# Install Kusion CLI
You can install the latest Kusion CLI on MacOS, Linux and Windows.
diff --git a/docs/kusion/2-getting-started/2-getting-started-with-kusion-cli/1-deliver-quickstart.md b/docs/kusion/2-getting-started/2-getting-started-with-kusion-cli/1-deliver-quickstart.md
index 744ebd20..ff1c3e9e 100644
--- a/docs/kusion/2-getting-started/2-getting-started-with-kusion-cli/1-deliver-quickstart.md
+++ b/docs/kusion/2-getting-started/2-getting-started-with-kusion-cli/1-deliver-quickstart.md
@@ -10,7 +10,7 @@ In this tutorial, we will walk through how to deploy a quickstart application on
Before we start to play with this example, we need to have the Kusion CLI installed and run an accessible Kubernetes cluster. Here are some helpful documents:
-- Install [Kusion CLI](../1-install-kusion.md).
+- Install [Kusion CLI](../2-getting-started-with-kusion-cli/0-install-kusion.md).
- Run a [Kubernetes](https://kubernetes.io) cluster. Some light and convenient options for Kubernetes local deployment include [k3s](https://docs.k3s.io/quick-start), [k3d](https://k3d.io/v5.4.4/#installation), and [MiniKube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node).
## Initialize Project
diff --git a/docs/kusion/2-getting-started/3-getting-started-with-kusion-server/1-deliver-quickstart.md b/docs/kusion/2-getting-started/3-getting-started-with-kusion-server/1-deliver-quickstart.md
index e9d5c5d8..d2488116 100644
--- a/docs/kusion/2-getting-started/3-getting-started-with-kusion-server/1-deliver-quickstart.md
+++ b/docs/kusion/2-getting-started/3-getting-started-with-kusion-server/1-deliver-quickstart.md
@@ -10,7 +10,7 @@ In this tutorial, we will walk through how to deploy a quickstart application on
Before we start to play with this example, we need to have the Kusion Server installed and run an accessible Kubernetes cluster. Here are some helpful documents:
-- Install [Kusion Server](../1-install-kusion.md).
+- Install [Kusion Server](../2-getting-started-with-kusion-cli/0-install-kusion.md).
- Run a [Kubernetes](https://kubernetes.io) cluster. Some light and convenient options for Kubernetes local deployment include [k3s](https://docs.k3s.io/quick-start), [k3d](https://k3d.io/v5.4.4/#installation), and [MiniKube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node).
## Initialize Backend, Source, and Workspace
diff --git a/docs/kusion/2-getting-started/3-getting-started-with-kusion-server/2-deliver-quickstart-with-db.md b/docs/kusion/2-getting-started/3-getting-started-with-kusion-server/2-deliver-quickstart-with-db.md
index d0993aec..d5632ab9 100644
--- a/docs/kusion/2-getting-started/3-getting-started-with-kusion-server/2-deliver-quickstart-with-db.md
+++ b/docs/kusion/2-getting-started/3-getting-started-with-kusion-server/2-deliver-quickstart-with-db.md
@@ -10,7 +10,7 @@ In this tutorial, we will learn how to create and manage our own application wit
Before we start to play with this example, we need to have the Kusion Server installed and run an accessible Kubernetes cluster. Besides, we need to have a GitHub account to initiate our own config code repository as `Source` in Kusion.
-- Install [Kusion Server](../1-install-kusion.md).
+- Install [Kusion Server](../2-getting-started-with-kusion-cli/0-install-kusion.md).
- Run a [Kubernetes](https://kubernetes.io) cluster. Some light and convenient options for Kubernetes local deployment include [k3s](https://docs.k3s.io/quick-start), [k3d](https://k3d.io/v5.4.4/#installation), and [MiniKube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node).
diff --git a/docs/kusion/5-user-guides/1-using-kusion-cli/1-cloud-resources/1-database.md b/docs/kusion/5-user-guides/1-using-kusion-cli/1-cloud-resources/1-database.md
index 613b01bf..ecd99d4b 100644
--- a/docs/kusion/5-user-guides/1-using-kusion-cli/1-cloud-resources/1-database.md
+++ b/docs/kusion/5-user-guides/1-using-kusion-cli/1-cloud-resources/1-database.md
@@ -11,7 +11,7 @@ This tutorial will demonstrate how to deploy a WordPress application with Kusion
## Prerequisites
-- Install [Kusion](../../../2-getting-started/1-install-kusion.md).
+- Install [Kusion](../../../2-getting-started/2-getting-started-with-kusion-cli/0-install-kusion.md).
- Install [kubectl CLI](https://kubernetes.io/docs/tasks/tools/#kubectl) and run a [Kubernetes](https://kubernetes.io/) or [k3s](https://docs.k3s.io/quick-start) or [k3d](https://k3d.io/v5.4.4/#installation) or [MiniKube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node) cluster.
- Prepare a cloud service account and create a user with at least **VPCFullAccess** and **RDSFullAccess** related permissions to use the Relational Database Service (RDS). This kind of user can be created and managed in the Identity and Access Management (IAM) console of the cloud vendor.
- The environment that executes `kusion` needs to have connectivity to terraform registry to download the terraform providers.
diff --git a/docs/kusion/5-user-guides/1-using-kusion-cli/2-working-with-k8s/1-deploy-application.md b/docs/kusion/5-user-guides/1-using-kusion-cli/2-working-with-k8s/1-deploy-application.md
index df283a11..7ab07df3 100644
--- a/docs/kusion/5-user-guides/1-using-kusion-cli/2-working-with-k8s/1-deploy-application.md
+++ b/docs/kusion/5-user-guides/1-using-kusion-cli/2-working-with-k8s/1-deploy-application.md
@@ -31,7 +31,7 @@ Before we start, we need to complete the following steps:
1、Install Kusion
We recommend using HomeBrew(Mac), Scoop(Windows), or an installation shell script to download and install Kusion.
-See [Download and Install](../../../2-getting-started/1-install-kusion.md) for more details.
+See [Download and Install](../../../2-getting-started/2-getting-started-with-kusion-cli/0-install-kusion.md) for more details.
2、Running Kubernetes cluster
diff --git a/docs_versioned_docs/version-v0.14/1-what-is-kusion/1-overview.md b/docs_versioned_docs/version-v0.14/1-what-is-kusion/1-overview.md
new file mode 100644
index 00000000..7141aeee
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/1-what-is-kusion/1-overview.md
@@ -0,0 +1,62 @@
+---
+id: overview
+title: Overview
+slug: /
+---
+
+# Overview
+
+Welcome to Kusion! This introduction section covers what Kusion is, the Kusion workflow, and how Kusion compares to other software. If you just want to dive into using Kusion, feel free to skip ahead to the [Getting Started](../2-getting-started/2-getting-started-with-kusion-cli/0-install-kusion.md) section.
+
+## What is Kusion?
+
+Kusion is an intent-driven [Platform Orchestrator](https://internaldeveloperplatform.org/platform-orchestrators/), which sits at the core of an [Internal Developer Platform (IDP)](https://internaldeveloperplatform.org/what-is-an-internal-developer-platform/). With Kusion you can enable app-centric development: your developers only need to write a single application specification - [AppConfiguration](https://www.kusionstack.io/docs/concepts/app-configuration). [AppConfiguration](https://www.kusionstack.io/docs/concepts/app-configuration) defines the workload and all resource dependencies without needing to supply environment-specific values; Kusion ensures it provides everything needed for the application to run.
+
+Kusion helps app developers who are responsible for creating applications and the platform engineers responsible for maintaining the infrastructure the applications run on. These roles may overlap or align differently in your organization, but Kusion is intended to ease the workload for any practitioner responsible for either set of tasks.
+
+
+
+
+## How does Kusion work?
+
+As a Platform Orchestrator, Kusion enables you to address challenges often associated with Day 0 and Day 1. Both platform engineers and application engineers can benefit from Kusion.
+
+There are two key workflows for Kusion:
+
+1. **Day 0 - Set up the modules and workspaces:** Platform engineers create shared modules for deploying applications and their underlying infrastructure, and workspace definitions for target landing zones. These standardized, shared modules codify the requirements from stakeholders across the organization, including security, compliance, and finance.
+
+ Kusion modules abstract the complexity of underlying infrastructure tooling, enabling app developers to deploy their applications using a self-service model.
+
+
+
+ 
+
+
+2. **Day 1 - Set up the application:** Application developers leverage the workspaces and modules created by the platform engineers to deploy applications and their supporting infrastructure. The platform team maintains the workspaces and modules, which allows application developers to focus on building applications using a repeatable process on standardized infrastructure.
+
+
+
+ 
+
+
+## Kusion Highlights
+
+* **Platform as Code**
+
+ Specify the desired application intent through declarative configuration code, and drive continuous deployment with any CI/CD system or GitOps to match that intent. No ad-hoc scripts, no hard-to-maintain custom workflows, just declarative configuration code.
+
+* **Dynamic Configuration Management**
+
+ Enable platform teams to set baseline templates and control how and where application workloads are deployed and accessory resources are provisioned, while still giving application developers freedom via workload-centric specification and deployment.
+
+* **Security & Compliance Built In**
+
+ Enforce security and infrastructure best practices with out-of-the-box [base models](https://github.com/KusionStack/catalog), and create security and compliance guardrails for any Kusion deployment with third-party Policy as Code tools. All accessory resource secrets are automatically injected into Workloads.
+
+* **Lightweight and Open Model Ecosystem**
+
+ Being a pure client-side solution ensures good portability, and the rich APIs make it easy to integrate and automate. A large and growing model ecosystem covers all stages of the application lifecycle, with extensive connections to various infrastructure capabilities.
+
+:::tip
+
+**Kusion is an early project.** The end goal of Kusion is to boost the [Internal Developer Platform](https://internaldeveloperplatform.org/) revolution, and we are iterating on Kusion quickly to strive towards this goal. For any help or feedback, please contact us on [Slack](https://github.com/KusionStack/community/discussions/categories/meeting) or in [issues](https://github.com/KusionStack/kusion/issues).
diff --git a/docs_versioned_docs/version-v0.14/1-what-is-kusion/2-kusion-vs-x.md b/docs_versioned_docs/version-v0.14/1-what-is-kusion/2-kusion-vs-x.md
new file mode 100644
index 00000000..ca077620
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/1-what-is-kusion/2-kusion-vs-x.md
@@ -0,0 +1,37 @@
+---
+id: kusion-vs-x
+---
+
+# Kusion vs Other Software
+
+It can be difficult to understand how different software tools compare to each other. Is one a replacement for the other? Are they complementary? In this section, we compare Kusion to other software.
+
+**vs. GitOps (ArgoCD, FluxCD, etc.)**
+
+According to the [open GitOps principles](https://opengitops.dev/), a GitOps system typically has its desired state expressed declaratively, continuously observes the actual system state, and attempts to apply the desired state. The Kusion toolchain takes these principles as a reference, but has no intention of reinventing the GitOps wheel.
+
+Kusion adopts your GitOps process and enriches it with additional features. The declarative [AppConfiguration](../concepts/appconfigurations) model can be used to express the desired intent; once the intent is declared, the [Kusion CLI](../reference/commands) takes on the role of making production match that intent as safely as possible.
+
+**vs. PaaS (Heroku, Vercel, etc.)**
+
+Kusion shares the same goal as traditional PaaS platforms: providing application delivery and management capabilities. The intuitive difference from full-featured PaaS platforms is that Kusion is a client-side toolchain, not a complete PaaS platform.
+
+Traditional PaaS platforms also typically constrain the types of applications they can run, but there is no such constraint in Kusion, which means Kusion provides greater flexibility.
+
+Kusion allows you to have platform-like features without the constraints of a traditional PaaS. However, Kusion is not attempting to replace any PaaS platform; instead, Kusion can be used to deploy to a platform such as Heroku.
+
+**vs. KubeVela**
+
+KubeVela is a modern software delivery and management control plane that makes it easier to deploy and operate applications on top of Kubernetes.
+
+Although some might initially perceive an overlap between Kusion and KubeVela, they are in fact complementary and can be integrated to work together. As a lightweight, purely client-side tool, coupled with corresponding [Generator](https://github.com/KusionStack/kusion-module-framework) implementation, Kusion can render [AppConfiguration](../concepts/appconfigurations) schema to generate CRD resources for KubeVela and leverage KubeVela's control plane to implement application delivery.
+
+**vs. Helm**
+
+The concept of Helm originates from the [package management](https://en.wikipedia.org/wiki/Package_manager) mechanism of the operating system. It is a package management tool based on templated YAML files and supports the execution and management of resources in the package.
+
+Kusion is not a package manager. Kusion naturally provides a superset of Helm capabilities with its modeled key-value pairs, so developers can use Kusion directly as a programmable alternative that avoids the pain of writing text templates. For users who have adopted Helm, the stack compilation results in Kusion can be packaged and used in Helm format.
+
+**vs. Kubernetes**
+
+Kubernetes (K8s) is a container scheduling and management runtime widely used around the world, an "operating system" core for containers, and a platform for building platforms. Above the Kubernetes API layer, Kusion aims to provide app-centric **abstraction**, a unified **workspace**, a better **user experience**, and automated **workflow**, helping developers build the app delivery model easily and collaboratively.
diff --git a/docs_versioned_docs/version-v0.14/1-what-is-kusion/_category_.json b/docs_versioned_docs/version-v0.14/1-what-is-kusion/_category_.json
new file mode 100644
index 00000000..0817eb90
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/1-what-is-kusion/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "What is Kusion?"
+}
diff --git a/docs_versioned_docs/version-v0.14/2-getting-started/2-getting-started-with-kusion-cli/0-install-kusion.md b/docs_versioned_docs/version-v0.14/2-getting-started/2-getting-started-with-kusion-cli/0-install-kusion.md
new file mode 100644
index 00000000..556228b4
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/2-getting-started/2-getting-started-with-kusion-cli/0-install-kusion.md
@@ -0,0 +1,144 @@
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Install Kusion CLI
+
+You can install the latest Kusion CLI on macOS, Linux, and Windows.
+
+## macOS/Linux
+
+On macOS and Linux, Homebrew and a shell script are supported. Choose the method you prefer from the options below.
+
+
+
+
+The recommended method for installing on macOS and Linux is to use the Homebrew package manager.
+
+**Install Kusion**
+
+```bash
+# tap formula repository Kusionstack/tap
+brew tap KusionStack/tap
+
+# install Kusion
+brew install KusionStack/tap/kusion
+```
+
+**Update Kusion**
+
+```bash
+# update formulae from remote
+brew update
+
+# update Kusion
+brew upgrade KusionStack/tap/kusion
+```
+
+**Uninstall Kusion**
+
+```bash
+# uninstall Kusion
+brew uninstall KusionStack/tap/kusion
+```
+
+```mdx-code-block
+
+
+```
+
+**Install Kusion**
+
+```bash
+# install Kusion, default latest version
+curl https://www.kusionstack.io/scripts/install.sh | sh
+```
+
+**Install the Specified Version of Kusion**
+
+You can also install a specific version of Kusion by passing the version as a shell script parameter, where the version is an [available tag](https://github.com/KusionStack/kusion/tags) with the prefix "v" trimmed, such as 0.11.0 or 0.10.0. In general, you don't need to specify a version; just use the command above to install the latest release.
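The tag-to-version mapping described above can be sketched with plain shell parameter expansion (the tag value here is only an example):

```shell
# A release tag such as "v0.11.0" maps to the installer argument "0.11.0":
# trim the leading "v" with POSIX parameter expansion.
TAG="v0.11.0"
VERSION="${TAG#v}"
echo "$VERSION"
```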
+
+```bash
+# install Kusion of specified version 0.11.0
+curl https://www.kusionstack.io/scripts/install.sh | sh -s 0.11.0
+```
+
+**Uninstall Kusion**
+
+```bash
+# uninstall Kusion
+curl https://www.kusionstack.io/scripts/uninstall.sh | sh
+```
+
+```mdx-code-block
+
+
+```
+
+## Windows
+
+On Windows, Scoop and a PowerShell script are supported. Choose the method you prefer from the options below.
+
+
+
+
+The recommended method for installing on Windows is to use the Scoop package manager.
+
+**Install Kusion**
+
+```bash
+# add scoop bucket KusionStack
+scoop bucket add KusionStack https://github.com/KusionStack/scoop-bucket.git
+
+# install kusion
+scoop install KusionStack/kusion
+```
+
+**Update Kusion**
+
+```bash
+# update manifests from remote
+scoop update
+
+# update Kusion
+scoop update kusion
+```
+
+**Uninstall Kusion**
+
+```bash
+# uninstall Kusion
+scoop uninstall kusion
+```
+
+```mdx-code-block
+
+
+```
+
+**Install Kusion**
+
+```bash
+# install Kusion, default latest version
+powershell -Command "iwr -useb https://www.kusionstack.io/scripts/install.ps1 | iex"
+```
+
+**Install the Specified Version of Kusion**
+
+You can also install a specific version of Kusion by passing the version as a script parameter, where the version is an [available tag](https://github.com/KusionStack/kusion/tags) with the prefix "v" trimmed, such as 0.11.0. In general, you don't need to specify a version; just use the command above to install the latest release.
+
+```bash
+# install Kusion of specified version 0.11.0
+powershell {"& { $(irm https://www.kusionstack.io/scripts/install.ps1) } -Version 0.11.0" | iex}
+```
+
+**Uninstall Kusion**
+
+```bash
+# uninstall Kusion
+powershell -Command "iwr -useb https://www.kusionstack.io/scripts/uninstall.ps1 | iex"
+```
+
+```mdx-code-block
+
+
+```
diff --git a/docs_versioned_docs/version-v0.14/2-getting-started/2-getting-started-with-kusion-cli/1-deliver-quickstart.md b/docs_versioned_docs/version-v0.14/2-getting-started/2-getting-started-with-kusion-cli/1-deliver-quickstart.md
new file mode 100644
index 00000000..ff1c3e9e
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/2-getting-started/2-getting-started-with-kusion-cli/1-deliver-quickstart.md
@@ -0,0 +1,221 @@
+---
+id: deliver-quickstart
+---
+
+# Run Your First App on Kubernetes with Kusion CLI
+
+In this tutorial, we will walk through how to deploy a quickstart application on Kubernetes with Kusion. The demo application interacts with a locally deployed MySQL database, which is declared as an accessory in the configuration code and will be automatically created and managed by Kusion.
+
+## Prerequisites
+
+Before we start to play with this example, we need to have the Kusion CLI installed and an accessible Kubernetes cluster running. Here are some helpful documents:
+
+- Install [Kusion CLI](../2-getting-started-with-kusion-cli/0-install-kusion.md).
+- Run a [Kubernetes](https://kubernetes.io) cluster. Some light and convenient options for Kubernetes local deployment include [k3s](https://docs.k3s.io/quick-start), [k3d](https://k3d.io/v5.4.4/#installation), and [MiniKube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node).
+
+## Initialize Project
+
+We can start by initializing this tutorial project with the `kusion init` command.
+
+```shell
+# Create a new directory and navigate into it.
+mkdir quickstart && cd quickstart
+
+# Initialize the demo project with the name of the current directory.
+kusion init
+```
+
+The created project structure looks like this:
+
+```shell
+tree
+.
+├── default
+│ ├── kcl.mod
+│ ├── main.k
+│ └── stack.yaml
+└── project.yaml
+
+2 directories, 4 files
+```
+
+:::info
+More details about the project and stack structure can be found in [Project](../../3-concepts/1-project/1-overview.md) and [Stack](../../3-concepts/2-stack/1-overview.md).
+:::
+
+### Review Configuration Files
+
+Now let's have a glance at the configuration codes of `default` stack:
+
+```shell
+cat default/main.k
+```
+
+```python
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import network as n
+
+# main.k declares the customized configuration codes for default stack.
+quickstart: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ quickstart: c.Container {
+ image: "kusionstack/kusion-quickstart:latest"
+ }
+ }
+ }
+ accessories: {
+ "network": n.Network {
+ ports: [
+ n.Port {
+ port: 8080
+ }
+ ]
+ }
+ }
+}
+```
+
+The configuration file `main.k`, usually written by the **App Developers**, declares the customized configuration code for the `default` stack, including an `AppConfiguration` instance named `quickstart`. The `quickstart` application consists of a `Workload` of type `service.Service`, which runs a container named `quickstart` using the image `kusionstack/kusion-quickstart:latest`.
+
+It also declares a **Kusion Module** of type `network.Network`, exposing port `8080` so the long-running service can be accessed.
+
+The `AppConfiguration` model hides the major complexity of Kubernetes resources such as `Namespace`, `Deployment`, and `Service`, which will be created and managed by Kusion, providing **application-centric** and **infrastructure-agnostic** concepts for a more developer-friendly experience.
+
+:::info
+More details about the `AppConfiguration` model and built-in Kusion Module can be found in [kam](https://github.com/KusionStack/kam) and [catalog](https://github.com/KusionStack/catalog).
+:::
+
+The declaration of the dependency packages can be found in `default/kcl.mod`:
+
+```shell
+cat default/kcl.mod
+```
+
+```shell
+[dependencies]
+kam = { git = "https://github.com/KusionStack/kam.git", tag = "0.2.0" }
+service = {oci = "oci://ghcr.io/kusionstack/service", tag = "0.1.0" }
+network = { oci = "oci://ghcr.io/kusionstack/network", tag = "0.2.0" }
+```
+
+:::info
+More details about the application model and module dependency declaration can be found in [Kusion Module guide for app dev](../../3-concepts/3-module/3-app-dev-guide.md).
+:::
+
+:::tip
+The specific module versions used in the demonstration above are only applicable to Kusion CLI **v0.12.0** and later.
+:::
+
+## Application Delivery
+
+Use the following command to deliver the quickstart application in the `default` stack to your Kubernetes cluster. It watches the resource creation and automatically port-forwards the specified port (8080) from local to the Kubernetes Service of the application. We can check the details of the resource preview results before confirming to apply the diffs.
+
+```shell
+cd default && kusion apply --port-forward 8080
+```
+
+
+
+:::info
+During the first apply, the models and modules that the application depends on will be downloaded, so it may take some time (usually within one minute). You can take a break and have a cup of coffee.
+:::
+
+:::info
+Kusion by default creates the Kubernetes resources of the application in a namespace with the same name as the project. If you want to customize the namespace, please refer to [Project Namespace Extension](../../3-concepts/1-project/2-configuration.md#kubernetesnamespace) and [Stack Namespace Extension](../../3-concepts/2-stack/2-configuration.md#kubernetesnamespace).
+:::
+
+Now we can visit [http://localhost:8080](http://localhost:8080) in our browser and play with the demo application!
+
+
+
+## Add MySQL Accessory
+
+As you can see, the demo application page indicates that the MySQL database is not ready yet. Hence, we will now add a MySQL database as an accessory for the workload.
+
+We can add the Kusion-provided built-in dependency in the `default/kcl.mod`, so that we can use the `MySQL` module in the configuration codes.
+
+```shell
+[dependencies]
+kam = { git = "https://github.com/KusionStack/kam.git", tag = "0.2.0" }
+service = {oci = "oci://ghcr.io/kusionstack/service", tag = "0.1.0" }
+network = { oci = "oci://ghcr.io/kusionstack/network", tag = "0.2.0" }
+mysql = { oci = "oci://ghcr.io/kusionstack/mysql", tag = "0.2.0" }
+```
+
+We can update the `default/main.k` with the following configuration codes:
+
+```python
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import network as n
+import mysql
+
+# main.k declares the customized configuration codes for default stack.
+quickstart: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ quickstart: c.Container {
+ image: "kusionstack/kusion-quickstart:latest"
+ env: {
+ "DB_HOST": "$(KUSION_DB_HOST_QUICKSTART_DEFAULT_QUICKSTART_MYSQL)"
+ "DB_USERNAME": "$(KUSION_DB_USERNAME_QUICKSTART_DEFAULT_QUICKSTART_MYSQL)"
+ "DB_PASSWORD": "$(KUSION_DB_PASSWORD_QUICKSTART_DEFAULT_QUICKSTART_MYSQL)"
+ }
+ }
+ }
+ }
+ accessories: {
+ "network": n.Network {
+ ports: [
+ n.Port {
+ port: 8080
+ }
+ ]
+ }
+ "mysql": mysql.MySQL {
+ type: "local"
+ version: "8.0"
+ }
+ }
+}
+```
+
+The configuration code above declares a local `mysql.MySQL` with engine version `8.0` as an accessory for the application workload. The necessary Kubernetes resources for deploying and using the local MySQL database will be generated, and users can get the `host`, `username`, and `password` of the database in application containers through Kusion's [MySQL Credentials And Connectivity](../../6-reference/2-modules/1-developer-schemas/database/mysql.md#credentials-and-connectivity).
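Inside the application container, these credentials arrive as ordinary environment variables. A minimal sketch of consuming them from a shell (the variable names follow the pattern shown in the config above; the `:-` fallbacks are illustrative defaults for running outside the cluster, not values Kusion sets):

```shell
# Read the MySQL connection info that Kusion injects into the workload's environment.
DB_HOST="${KUSION_DB_HOST_QUICKSTART_DEFAULT_QUICKSTART_MYSQL:-localhost}"
DB_USERNAME="${KUSION_DB_USERNAME_QUICKSTART_DEFAULT_QUICKSTART_MYSQL:-root}"
echo "connecting to ${DB_USERNAME}@${DB_HOST}"
```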
+
+:::info
+For more information about the naming convention of Kusion built-in MySQL module, you can refer to [Module Naming Convention](../../6-reference/2-modules/3-naming-conventions.md).
+:::
+
+After that, we can re-apply the application, setting `--watch=false` to skip watching the resources being reconciled:
+
+```shell
+kusion apply --port-forward 8080 --watch=false
+```
+
+
+
+:::info
+You may wait another minute to download the MySQL Module.
+:::
+
+Let's visit [http://localhost:8080](http://localhost:8080) in our browser, and we can find that the application has successfully connected to the MySQL database. The connection information is also printed on the page.
+
+
+
+Now please feel free to enjoy the demo application!
+
+
+
+## Delete Application
+
+We can delete the quickstart demo workload and related accessory resources with the following command:
+
+```shell
+kusion destroy --yes
+```
+
+
diff --git a/docs_versioned_docs/version-v0.14/2-getting-started/2-getting-started-with-kusion-cli/_category_.json b/docs_versioned_docs/version-v0.14/2-getting-started/2-getting-started-with-kusion-cli/_category_.json
new file mode 100644
index 00000000..da29385b
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/2-getting-started/2-getting-started-with-kusion-cli/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Start with Kusion CLI"
+}
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/2-getting-started/3-getting-started-with-kusion-server/0-installation.md b/docs_versioned_docs/version-v0.14/2-getting-started/3-getting-started-with-kusion-server/0-installation.md
new file mode 100644
index 00000000..d667f64b
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/2-getting-started/3-getting-started-with-kusion-server/0-installation.md
@@ -0,0 +1,222 @@
+---
+title: Installation
+---
+
+## Install with Helm
+
+If you have a Kubernetes cluster, Helm is the recommended installation method.
+
+The following tutorial will guide you through installing Kusion using Helm, which installs the chart with the release name `kusion-release` in the namespace `kusion`.
+
+### Prerequisites
+
+* Helm v3+
+* A Kubernetes Cluster (The simplest way is to deploy a Kubernetes cluster locally using `kind` or `minikube`)
+
+### Installation Options
+
+> Note: A valid kubeconfig configuration is required for Kusion to function properly. You must either use the installation script, provide your own kubeconfig in `values.yaml`, or set it through the `--set` parameter.
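For the `values.yaml` route, a sketch of the fragment is shown below. It mirrors the `--set kubeconfig.kubeConfigs.*` flags used in the remote-installation commands in this guide; the key names `kubeconfig0`/`kubeconfig1` are example identifiers, and the placeholder values stand in for your own base64-encoded kubeconfig contents:

```yaml
# Hypothetical values.yaml fragment (derived from the --set flags used below).
kubeconfig:
  kubeConfigs:
    kubeconfig0: <base64-encoded kubeconfig for cluster 1>
    kubeconfig1: <base64-encoded kubeconfig for cluster 2>
```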
+
+You have several options to install Kusion:
+
+#### 1. Using the installation script (recommended)
+
+Download the installation script from the [KusionStack charts repository](https://github.com/KusionStack/charts/blob/master/scripts/install-kusion.sh):
+
+```shell
+curl -O https://raw.githubusercontent.com/KusionStack/charts/master/scripts/install-kusion.sh
+chmod +x install-kusion.sh
+```
+
+Run the installation script with your kubeconfig files:
+
+```shell
+./install-kusion.sh ...
+```
+
+**Parameters:**
+
+- **kubeconfig_key**: The key for the kubeconfig file. It should be unique and not contain spaces.
+
+- **kubeconfig_path**: The path to the kubeconfig file.
+
+#### 2. Remote installation with Helm
+
+First, add the `kusionstack` chart repo to your local repository:
+
+```shell
+helm repo add kusionstack https://kusionstack.github.io/charts
+helm repo update
+```
+
+Then install with your encoded kubeconfig:
+
+```shell
+# Base64 encode your kubeconfig files
+KUBECONFIG_CONTENT1=$(base64 -w 0 /path/to/your/kubeconfig1)
+KUBECONFIG_CONTENT2=$(base64 -w 0 /path/to/your/kubeconfig2)
+
+# Install with kubeconfig and optional configurations
+helm install kusion-release kusionstack/kusion \
+--set kubeconfig.kubeConfigs.kubeconfig0="$KUBECONFIG_CONTENT1" \
+--set kubeconfig.kubeConfigs.kubeconfig1="$KUBECONFIG_CONTENT2"
+```
+
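One portability note: the `-w 0` flag above is GNU-specific; the BSD `base64` shipped with macOS has no `-w`. A portable way to produce the same single-line output is to strip newlines with `tr` (demonstrated here on a throwaway stand-in file rather than a real kubeconfig):

```shell
# Portable single-line base64: pipe through tr instead of relying on "base64 -w 0".
printf 'apiVersion: v1\n' > /tmp/demo-kubeconfig   # stand-in for a real kubeconfig
KUBECONFIG_CONTENT=$(base64 < /tmp/demo-kubeconfig | tr -d '\n')
echo "$KUBECONFIG_CONTENT"
```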
+You may need to set specific configurations if Kusion is deployed into a production cluster, or if you want to customize the chart configuration, such as `database`, `replicas`, `port`, etc.
+
+```shell
+helm install kusion-release kusionstack/kusion \
+--set kubeconfig.kubeConfigs.kubeconfig0="$KUBECONFIG_CONTENT1" \
+--set kubeconfig.kubeConfigs.kubeconfig1="$KUBECONFIG_CONTENT2" \
+--set server.port=8080 \
+--set server.replicas=3 \
+--set mysql.enabled=true
+```
+
+> All configurable parameters of the Kusion chart are detailed [here](#chart-parameters).
+
+### Search all available versions
+
+You can use the following command to view all installable chart versions.
+
+```shell
+helm repo update
+helm search repo kusionstack/kusion --versions
+```
+
+### Upgrade specified version
+
+You can specify the version to upgrade to with the `--version` flag.
+
+```shell
+# Upgrade to the latest version.
+helm upgrade kusion-release kusionstack/kusion
+
+# Upgrade to the specified version.
+helm upgrade kusion-release kusionstack/kusion --version 1.2.3
+```
+
+### Local Installation
+
+If you have problems connecting to [https://kusionstack.github.io/charts/](https://kusionstack.github.io/charts/) in production, you may need to manually download the chart from [here](https://github.com/KusionStack/charts) and use it to install or upgrade locally.
+
+```shell
+git clone https://github.com/KusionStack/charts.git
+```
+
+Edit the [default template values](https://github.com/KusionStack/charts/blob/master/charts/kusion/values.yaml) file to set your own kubeconfig and other configurations.
+> For more information about the KubeConfig configuration, please refer to the [KubeConfig](#kubeconfig) section.
+
+Then install the chart:
+
+```shell
+helm install kusion-release charts/charts/kusion
+```
+
+### Uninstall
+
+To uninstall/delete the `kusion-release` Helm release in namespace `kusion`:
+
+```shell
+helm uninstall kusion-release
+```
+
+### Image Registry Proxy for China
+
+If you are in China and have trouble pulling images from the official DockerHub, you can use a registry proxy:
+
+```shell
+helm install kusion-release kusionstack/kusion --set registryProxy=docker.m.daocloud.io
+```
+
+**NOTE**: The above is just an example; you can replace the value of `registryProxy` as needed. You also need to provide your own kubeconfig in `values.yaml` or set it through the `--set` parameter.
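+
+Assuming the proxy value is simply prepended to each component image reference (as described in the chart parameters below), the resulting image can be sketched as follows; the tag is a placeholder:
+
+```shell
+# Sketch of how the proxy prefixes a component image reference
+REGISTRY_PROXY="docker.m.daocloud.io"
+IMAGE_REPO="kusionstack/kusion"   # server.image.repo default
+IMAGE_TAG="latest"                # placeholder tag
+echo "${REGISTRY_PROXY}/${IMAGE_REPO}:${IMAGE_TAG}"
+# → docker.m.daocloud.io/kusionstack/kusion:latest
+```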
+
+## Chart Parameters
+
+The following table lists the configurable parameters of the chart and their default values.
+
+### General Parameters
+
+| Key | Type | Default | Description |
+|-----|------|---------|-------------|
+| namespace | string | `"kusion"` | The namespace to deploy into |
+| namespaceEnabled | bool | `true` | Whether to generate the namespace |
+| registryProxy | string | `""` | Image registry proxy used as the prefix for all component images |
+
+### Kusion Server
+
+The Kusion Server Component is the main backend server that provides the core functionality and REST APIs.
+
+| Key | Type | Default | Description |
+|-----|------|---------|-------------|
+| server.args.authEnabled | bool | `false` | Whether to enable authentication |
+| server.args.authKeyType | string | `"RSA"` | Authentication key type |
+| server.args.authWhitelist | list | `[]` | Authentication whitelist |
+| server.args.autoMigrate | bool | `true` | Whether to enable automatic migration |
+| server.args.dbHost | string | `""` | Database host |
+| server.args.dbName | string | `""` | Database name |
+| server.args.dbPassword | string | `""` | Database password |
+| server.args.dbPort | int | `3306` | Database port |
+| server.args.dbUser | string | `""` | Database user |
+| server.args.defaultSourceRemote | string | `""` | Default source URL |
+| server.args.logFilePath | string | `"/logs/kusion.log"` | Log file path |
+| server.args.maxAsyncBuffer | int | `100` | Maximum number of buffer zones during concurrent async executions including generate, preview, apply and destroy |
+| server.args.maxAsyncConcurrent | int | `1` | Maximum number of concurrent async executions including generate, preview, apply and destroy |
+| server.args.maxConcurrent | int | `10` | Maximum number of concurrent executions including preview, apply and destroy |
+| server.args.migrateFile | string | `""` | Migration file path |
+| server.env | list | `[]` | Additional environment variables for the server |
+| server.image.imagePullPolicy | string | `"IfNotPresent"` | Image pull policy |
+| server.image.repo | string | `"kusionstack/kusion"` | Repository for Kusion server image |
+| server.image.tag | string | `""` | Tag for Kusion server image. Defaults to the chart's appVersion if not specified |
+| server.name | string | `"kusion-server"` | Component name for kusion server |
+| server.port | int | `80` | Port for kusion server |
+| server.replicas | int | `1` | The number of kusion server pods to run |
+| server.resources | object | `{"limits":{"cpu":"500m","memory":"1Gi"},"requests":{"cpu":"250m","memory":"256Mi"}}` | Resource limits and requests for the kusion server pods |
+| server.serviceType | string | `"ClusterIP"` | Service type for the kusion server. The available values are ["ClusterIP", "NodePort", "LoadBalancer"]. |
+
+### MySQL Database
+
+The MySQL database is used to store Kusion's persistent data.
+
+| Key | Type | Default | Description |
+|-----|------|---------|-------------|
+| mysql.database | string | `"kusion"` | MySQL database name |
+| mysql.enabled | bool | `true` | Whether to enable MySQL deployment |
+| mysql.image.imagePullPolicy | string | `"IfNotPresent"` | Image pull policy |
+| mysql.image.repo | string | `"mysql"` | Repository for MySQL image |
+| mysql.image.tag | string | `"8.0"` | Specific tag for MySQL image |
+| mysql.name | string | `"mysql"` | Component name for MySQL |
+| mysql.password | string | `""` | MySQL password |
+| mysql.persistence.accessModes | list | `["ReadWriteOnce"]` | Access modes for MySQL PVC |
+| mysql.persistence.size | string | `"10Gi"` | Size of MySQL persistent volume |
+| mysql.persistence.storageClass | string | `""` | Storage class for MySQL PVC |
+| mysql.port | int | `3306` | Port for MySQL |
+| mysql.replicas | int | `1` | The number of MySQL pods to run |
+| mysql.resources | object | `{"limits":{"cpu":"1000m","memory":"1Gi"},"requests":{"cpu":"250m","memory":"512Mi"}}` | Resource limits and requests for MySQL pods |
+| mysql.rootPassword | string | `""` | MySQL root password |
+| mysql.user | string | `"kusion"` | MySQL user |
+
+### KubeConfig
+
+The KubeConfig is used to store the KubeConfig files for the Kusion Server.
+
+| Key | Type | Default | Description |
+|-----|------|---------|-------------|
+| kubeconfig.kubeConfigVolumeMountPath | string | `"/var/run/secrets/kubernetes.io/kubeconfigs/"` | Volume mount path for KubeConfig files |
+| kubeconfig.kubeConfigs | object | `{}` | KubeConfig contents map |
+
+**NOTE**: The KubeConfig contents map is a set of key-value pairs, where each key names a KubeConfig file and the value holds that file's contents.
+
+```yaml
+# Example structure:
+kubeconfig:
+  kubeConfigs:
+    kubeconfig0: |
+      Please fill in your KubeConfig contents here.
+    kubeconfig1: |
+      Please fill in your KubeConfig contents here.
+```
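+
+Assuming each `kubeConfigs` entry is mounted as a file named after its key under `kubeConfigVolumeMountPath` (the layout suggested by the parameters above), the in-container path for a given key can be sketched as:
+
+```shell
+# Compute the in-container path of a mounted kubeconfig (assumed layout)
+MOUNT_PATH="/var/run/secrets/kubernetes.io/kubeconfigs/"   # kubeconfig.kubeConfigVolumeMountPath default
+KEY="kubeconfig0"                                          # a key from the kubeConfigs map
+echo "${MOUNT_PATH}${KEY}"
+# → /var/run/secrets/kubernetes.io/kubeconfigs/kubeconfig0
+```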
diff --git a/docs_versioned_docs/version-v0.14/2-getting-started/3-getting-started-with-kusion-server/1-deliver-quickstart.md b/docs_versioned_docs/version-v0.14/2-getting-started/3-getting-started-with-kusion-server/1-deliver-quickstart.md
new file mode 100644
index 00000000..d2488116
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/2-getting-started/3-getting-started-with-kusion-server/1-deliver-quickstart.md
@@ -0,0 +1,195 @@
+---
+id: deliver-quickstart
+---
+
+# Run Your First App on Kubernetes with Kusion Server
+
+In this tutorial, we will walk through how to deploy a quickstart application on Kubernetes with Kusion. The demo application only contains the `Namespace`, `Deployment`, and `Service` resources necessary for a long-running service workload.
+
+## Prerequisites
+
+Before we start to play with this example, we need to have the Kusion Server installed and an accessible Kubernetes cluster running. Here are some helpful documents:
+
+- Install [Kusion Server](../2-getting-started-with-kusion-cli/0-install-kusion.md).
+- Run a [Kubernetes](https://kubernetes.io) cluster. Some light and convenient options for Kubernetes local deployment include [k3s](https://docs.k3s.io/quick-start), [k3d](https://k3d.io/v5.4.4/#installation), and [MiniKube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node).
+
+## Initialize Backend, Source, and Workspace
+
+We can start this tutorial with the initialization of `Backend`, `Source`, and `Workspace` on Kusion.
+
+First, create a `Backend` with local file system as Kusion's storage 👇
+
+
+
+Second, set the sample repository `konfig` we provided as the `Source` of application configuration codes 👇
+
+
+
+Then, create a `Workspace` named `dev` which can correspond to the development environment of the application. After it is created, we can copy the following example configurations into the `workspace.yaml`. This configuration declares the `Kusion Modules` that can be used in the application config codes, and specifies the Kubernetes cluster associated with this `Workspace`.
+
+
+
+```yaml
+# This is a sample of a `workspace.yaml` configuration, in which three Kusion Modules (kam, service, and network) and
+# their specified versions are declared, along with the Kubernetes cluster bound to this workspace.
+# Usually, applications deployed to this workspace can only use the Kusion Modules declared in the `workspace.yaml`.
+modules:
+ kam:
+ path: git://github.com/KusionStack/kam
+ version: 0.2.2
+ configs:
+ default: {}
+ service:
+ path: oci://ghcr.io/kusionstack/service
+ version: 0.2.1
+ configs:
+ default: {}
+ network:
+ path: oci://ghcr.io/kusionstack/network
+ version: 0.3.0
+ configs:
+ default: {}
+context:
+ KUBECONFIG_PATH: /var/run/secrets/kubernetes.io/kubeconfigs/kubeconfig-0
+```
+
+
+
+We can check the available Kusion Modules declared in the workspace. The `kam`, `service`, and `network` modules declared in the example have been pre-registered when we installed Kusion.
+
+
+
+
+
+:::info
+More info about the concepts of `Backend`, `Source`, `Workspace`, and `Kusion Module` can be found [here](../../3-concepts/0-overview.md).
+:::
+
+## Initialize Project and Stack
+
+Next, we can create our first `Project` and `Stack` with the `Source` of `konfig` repo.
+
+
+
+When creating a `Project`, the `path` field should be filled with the path of the project relative to the root directory of the `Source` repo. After the creation, click the project name to initiate a `Stack`.
+
+
+
+Similarly, the `path` field of `Stack` should also be filled with the path of the stack relative to the root directory of the `Source` repo.
+
+:::info
+More info about the concepts of `Project` and `Stack` can be found [here](../../3-concepts/0-overview.md).
+:::
+
+### Review Configuration Files
+
+Now let's take a look at the configuration codes of the `default` stack in the `quickstart` project, which can be found at https://github.com/KusionStack/konfig/tree/main/example/quickstart/default.
+
+
+
+The codes in the configuration file `main.k` are shown below:
+
+```python
+# The configuration codes in perspective of developers.
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import network as n
+
+# `main.k` declares the customized configuration codes for default stack.
+#
+# Please replace the ${APPLICATION_NAME} with the name of your application, and complete the
+# 'AppConfiguration' instance with your own workload and accessories.
+quickstart: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ quickstart: c.Container {
+ image: "kusionstack/kusion-quickstart:latest"
+ }
+ }
+ }
+ accessories: {
+ "network": n.Network {
+ ports: [
+ n.Port {
+ port: 8080
+ }
+ ]
+ }
+ }
+}
+```
+
+The configuration file `main.k`, usually written by the **App Developers**, declares the customized configuration codes for `default` stack, including an `AppConfiguration` instance with the name of `quickstart`. The `quickstart` application consists of a `Workload` with the type of `service.Service`, which runs a container named `quickstart` using the image of `kusionstack/kusion-quickstart:latest`.
+
+In addition, it declares a **Kusion Module** of type `network.Network`, exposing port `8080` for access to the long-running service.
+
+The `AppConfiguration` model can hide the major complexity of Kubernetes resources such as `Namespace`, `Deployment`, and `Service` which will be created and managed by Kusion, providing the concepts that are **application-centric** and **infrastructure-agnostic** for a more developer-friendly experience.
+
+:::info
+More details about the `AppConfiguration` model and built-in Kusion Module can be found in [kam](https://github.com/KusionStack/kam) and [catalog](https://github.com/KusionStack/catalog).
+:::
+
+The declaration of the dependency packages can be found in `default/kcl.mod`:
+
+```toml
+[dependencies]
+kam = { git = "git://github.com/KusionStack/kam", tag = "0.2.2" }
+network = { oci = "oci://ghcr.io/kusionstack/network", tag = "0.3.0" }
+service = { oci = "oci://ghcr.io/kusionstack/service", tag = "0.2.1" }
+```
+
+:::info
+More details about the application model and module dependency declaration can be found in [Kusion Module guide for app dev](../../3-concepts/3-module/3-app-dev-guide.md).
+:::
+
+:::tip
+The specific module versions used in the above demonstration are only applicable to Kusion **v0.14.0** and later.
+:::
+
+## Application Delivery
+
+After the initialization, we can start to run the application delivery.
+
+### Preview Changes
+
+We can first preview the changes to the application resources that are going to be deployed to the `dev` workspace.
+
+
+
+We can click the `Detail` button to view the `Preview` results.
+
+
+
+:::info
+During the first preview, the models and modules that the application depends on will be downloaded, so it may take some time (usually within one minute). You can take a break and have a cup of coffee.
+:::
+
+### Apply Resources
+
+Then we can create a `Run` operation of type `Apply` to conveniently deploy the previewed application resources to the Kubernetes cluster corresponding to the `dev` workspace.
+
+
+
+After successfully completing the `Apply`, we can check the application resource graph, which will display the topology of the application resources.
+
+
+
+Next, we can expose the service of the application we just applied by port-forwarding the Kubernetes Pod, and verify it in the browser.
+
+
+
+
+
+Oops, it seems that the page indicates we are missing a database. But no worries, we will cover how to add a database configuration for our application in the [next post](2-deliver-quickstart-with-db.md).
+
+:::info
+Kusion by default will create the Kubernetes resources of the application in a namespace with the same name as the project. If you want to customize the namespace, please refer to [Project Namespace Extension](../../3-concepts/1-project/2-configuration.md#kubernetesnamespace) and [Stack Namespace Extension](../../3-concepts/2-stack/2-configuration.md#kubernetesnamespace).
+:::
+
+## Delete Application
+
+We can delete the quickstart demo workload and related accessory resources with the `Destroy` run:
+
+
diff --git a/docs_versioned_docs/version-v0.14/2-getting-started/3-getting-started-with-kusion-server/2-deliver-quickstart-with-db.md b/docs_versioned_docs/version-v0.14/2-getting-started/3-getting-started-with-kusion-server/2-deliver-quickstart-with-db.md
new file mode 100644
index 00000000..d5632ab9
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/2-getting-started/3-getting-started-with-kusion-server/2-deliver-quickstart-with-db.md
@@ -0,0 +1,159 @@
+---
+id: deliver-quickstart-with-db
+---
+
+# Run Your Own App with MySQL on Kubernetes with Kusion Server
+
+In this tutorial, we will learn how to create and manage our own application with MySQL database on Kubernetes with Kusion. The locally deployed MySQL database is declared as an accessory in the config codes and will be automatically created and managed by Kusion.
+
+## Prerequisites
+
+Before we start to play with this example, we need to have the Kusion Server installed and an accessible Kubernetes cluster running. Besides, we need a GitHub account to initiate our own config code repository as the `Source` in Kusion.
+
+- Install [Kusion Server](../2-getting-started-with-kusion-cli/0-install-kusion.md).
+- Run a [Kubernetes](https://kubernetes.io) cluster. Some light and convenient options for Kubernetes local deployment include [k3s](https://docs.k3s.io/quick-start), [k3d](https://k3d.io/v5.4.4/#installation), and [MiniKube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node).
+
+
+:::info
+Please walk through [this documentation](1-deliver-quickstart.md) before proceeding with the upcoming instructions.
+:::
+
+## Initialize Source
+
+First, we need to create our own application configuration code repository as `Source` in Kusion.
+
+
+
+We can simply copy the `quickstart` example in [KusionStack/konfig](https://github.com/KusionStack/konfig/tree/main/example/quickstart) into our new repo.
+
+
+
+Then, we need to create a new `Source` with the created repository URL.
+
+
+
+## Register Module And Update Workspace
+
+Next, we can register the `mysql` module provided by the KusionStack community in Kusion.
+
+
+
+After the registration, we should add the `mysql` module to the `dev` workspace and re-generate the `kcl.mod`.
+
+
+
+
+
+We can copy and save the updated `kcl.mod` for later use.
+
+## Create Project and Stack
+
+Next, we can create a new `Project` with our own config code repo.
+
+
+
+And similarly, create a new `Stack`.
+
+
+
+## Add MySQL Accessory
+
+As you can see, the demo application page in [this doc](1-deliver-quickstart.md#apply-resources) indicates that the MySQL database is not ready yet. Hence, we will now add a MySQL database as an `accessory` for the workload.
+
+We should first update the module dependencies in the `default/kcl.mod` with the ones we previously stored, so that we can use the `MySQL` module in the configuration codes.
+
+```toml
+[dependencies]
+kam = { git = "git://github.com/KusionStack/kam", tag = "0.2.2" }
+mysql = { oci = "oci://ghcr.io/kusionstack/mysql", tag = "0.2.0" }
+network = { oci = "oci://ghcr.io/kusionstack/network", tag = "0.3.0" }
+service = { oci = "oci://ghcr.io/kusionstack/service", tag = "0.2.1" }
+```
+
+We can update the `default/main.k` with the following configuration codes:
+
+```python
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import network as n
+import mysql
+
+# main.k declares the customized configuration codes for default-with-db stack.
+quickstart: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ quickstart: c.Container {
+ image: "kusionstack/kusion-quickstart:latest"
+ env: {
+ "DB_HOST": "$(KUSION_DB_HOST_QUICKSTARTWITHDBDEFAULTWITHDBQU)"
+ "DB_USERNAME": "$(KUSION_DB_USERNAME_QUICKSTARTWITHDBDEFAULTWITHDBQU)"
+ "DB_PASSWORD": "$(KUSION_DB_PASSWORD_QUICKSTARTWITHDBDEFAULTWITHDBQU)"
+ }
+ }
+ }
+ }
+ accessories: {
+ "network": n.Network {
+ ports: [
+ n.Port {
+ port: 8080
+ }
+ ]
+ }
+ "mysql": mysql.MySQL {
+ type: "local"
+ version: "8.0"
+ }
+ }
+}
+```
+
+The configuration codes above declare a local `mysql.MySQL` with engine version `8.0` as an accessory for the application workload. The necessary Kubernetes resources for deploying and using the local MySQL database will be generated, and users can obtain the `host`, `username`, and `password` of the database in application containers as described in [MySQL Credentials And Connectivity](../../6-reference/2-modules/1-developer-schemas/database/mysql.md#credentials-and-connectivity).
+
+:::info
+For more information about the naming convention of Kusion built-in MySQL module, you can refer to [Module Naming Convention](../../6-reference/2-modules/3-naming-conventions.md).
+:::
+
+After that, we need to update the remote repository with the modified config code files.
+
+
+
+## Application Delivery
+
+Now we can start to run the application delivery!
+
+### Preview Changes
+
+We can first preview the changes to the application resources that are going to be deployed to the `dev` workspace.
+
+
+
+We can click the `Detail` button to view the `Preview` results, and we can find that compared to the results in [this doc](1-deliver-quickstart.md#preview-changes), the `mysql` accessory has brought us database related Kubernetes resources.
+
+
+
+### Apply Resources
+
+Then we can create a `Run` operation of type `Apply` to deploy the previewed application resources to the Kubernetes cluster corresponding to the `dev` workspace.
+
+
+
+After successfully completing the `Apply`, we can check the application resource graph, which will display the topology of the application resources related to the MySQL database, including the Kubernetes `Deployment`, `Service`, `Secret`, and `PVC`.
+
+
+
+Next, we can expose the service of the application we just applied by port-forwarding the Kubernetes Pod, and verify it in the browser.
+
+
+
+We can find that the application has successfully connected to the MySQL database, and the connection information is also printed on the page. Now please feel free to enjoy the demo application!
+
+
+
+## Delete Application
+
+We can delete the quickstart demo workload and related accessory resources with the `Destroy` run:
+
+
+
diff --git a/docs_versioned_docs/version-v0.14/2-getting-started/3-getting-started-with-kusion-server/_category_.json b/docs_versioned_docs/version-v0.14/2-getting-started/3-getting-started-with-kusion-server/_category_.json
new file mode 100644
index 00000000..4a3ae631
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/2-getting-started/3-getting-started-with-kusion-server/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Start with Kusion Server"
+}
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/2-getting-started/_category_.json b/docs_versioned_docs/version-v0.14/2-getting-started/_category_.json
new file mode 100644
index 00000000..41f4c00e
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/2-getting-started/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Getting Started"
+}
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/0-overview.md b/docs_versioned_docs/version-v0.14/3-concepts/0-overview.md
new file mode 100644
index 00000000..b0db99dc
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/0-overview.md
@@ -0,0 +1,29 @@
+---
+id: overview
+---
+
+# Overview
+
+## Platform Orchestrator
+
+
+
+A Platform Orchestrator is a system designed to capture and "orchestrate" intents from different configurations coming from different roles, and to connect them with different infrastructures. It serves as the glue for different intents throughout the software development lifecycle, from deployment to monitoring and operations, ensuring that users' intentions can seamlessly integrate and flow across different environments and infrastructures.
+
+## Kusion Workflow
+
+In this section, we will provide an overview of the core concepts of Kusion from the perspective of the Kusion workflow.
+
+
+
+The workflow of Kusion is illustrated in the diagram above, which consists of three steps.
+
+The first step is **Write**, where the platform engineers build the [Kusion Modules](./3-module/1-overview.md) and initialize a [Workspace](./4-workspace/1-overview.md), and the application developers declare their operational intent in [AppConfiguration](./5-appconfigurations.md) under a specific [Project](./1-project/1-overview.md) and [Stack](./2-stack/1-overview.md) path.
+
+The second step is the **Generate** process, which results in the creation of the **SSoT** (Single Source of Truth), also known as the [Spec](./6-specs.md) of the current operation. Kusion stores and version controls the Spec as part of a [Release](./8-release.md).
+
+The third step is **Apply**, which makes the `Spec` effective. Kusion parses the operational intent based on the `Spec` produced in the previous step. Before applying the `Spec`, Kusion will execute the `Preview` command (you can also execute this command manually) which will use a three-way diff algorithm to preview changes and prompt users to make sure all changes meet their expectations. And the `Apply` command will then actualize the operation intent onto various infrastructure platforms, currently supporting **Kubernetes**, **Terraform**, and **On-Prem** infrastructures. A [Release](./8-release.md) file will be created in the [Storage Backend](./7-backend/1-overview.md) to record an operation. The `Destroy` command will delete the resources recorded in the `Release` file of a project in a specific workspace.
+
+A more detailed demonstration of the Kusion engine can be seen below.
+
+
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/1-project/1-overview.md b/docs_versioned_docs/version-v0.14/3-concepts/1-project/1-overview.md
new file mode 100644
index 00000000..152d0ed1
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/1-project/1-overview.md
@@ -0,0 +1,14 @@
+---
+sidebar_label: Overview
+id: overview
+---
+
+# Overview
+
+Projects, [Stacks](../2-stack/1-overview.md), [Modules](../3-module/1-overview.md) and [Workspaces](../4-workspace/1-overview.md) are the most pivotal concepts when using Kusion.
+
+A project in Kusion is defined as a folder that contains a `project.yaml` file and is generally recommended to be stored in a Git repository. There are generally no constraints on where this folder may reside. In some cases it can be stored in a large monorepo that manages configurations for multiple applications. It could also be stored either in a separate repo or alongside the application code. A project can logically consist of one or more applications.
+
+The purpose of the project is to bundle application configurations that are relevant. Specifically, it contains configurations for the internal components that make up the application and orchestrates these configurations to suit different roles, such as developers and SREs, thereby covering the entire lifecycle of application deployment.
+
+From the perspective of the application development lifecycle, the configurations delineated by the project are decoupled from the application code. They take an immutable application image as input, allowing users to operate and maintain the application within an independent configuration codebase.
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/1-project/2-configuration.md b/docs_versioned_docs/version-v0.14/3-concepts/1-project/2-configuration.md
new file mode 100644
index 00000000..b5823df8
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/1-project/2-configuration.md
@@ -0,0 +1,38 @@
+---
+id: configuration
+sidebar_label: Project file reference
+---
+
+# Kusion project file reference
+
+Every Kusion project has a project file, `project.yaml`, which specifies metadata about your project, such as the project name and project description. The project file must begin with lowercase `project` and have an extension of either `.yaml` or `.yml`.
+
+## Attributes
+
+| Name | Required | Description | Options |
+|:------------- |:--------------- |:------------- |:------------- |
+| `name` | required | Name of the project containing alphanumeric characters, hyphens, underscores. | None |
+| `description` | optional | A brief description of the project. | None |
+| `extensions` | optional | List of extensions on the project. | [See below](#extensions) |
+
+### Extensions
+
+Extensions allow you to customize how resources are generated or customized as part of a release.
+
+#### kubernetesNamespace
+
+The Kubernetes namespace extension allows you to customize the namespace into which your application's generated Kubernetes resources are deployed.
+
+| Key | Required | Description | Example |
+|:------|:--------:|:-------------|:---------|
+| kind | y | The kind of extension being used. Must be `kubernetesNamespace`. | `kubernetesNamespace` |
+| namespace | y | The namespace in which all application-scoped Kubernetes resources are generated. | `default` |
+
+```yaml
+# Example `project.yaml` file with customized namespace of `test`.
+name: example
+extensions:
+ - kind: kubernetesNamespace
+ kubernetesNamespace:
+ namespace: test
+```
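+
+As a quick sanity check against the naming rule in the attributes table above (alphanumerics, hyphens, and underscores), a project name can be validated in shell; the name below is only an example:
+
+```shell
+# Validate a project name against the allowed character set
+NAME="example-project_1"   # example name
+if printf '%s' "$NAME" | grep -Eq '^[A-Za-z0-9_-]+$'; then
+  echo "valid project name"
+else
+  echo "invalid project name"
+fi
+# → valid project name
+```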
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/1-project/_category_.json b/docs_versioned_docs/version-v0.14/3-concepts/1-project/_category_.json
new file mode 100644
index 00000000..b62ac774
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/1-project/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Projects"
+}
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/10-resources.md b/docs_versioned_docs/version-v0.14/3-concepts/10-resources.md
new file mode 100644
index 00000000..582f17bf
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/10-resources.md
@@ -0,0 +1,43 @@
+# Resources
+
+Kusion uses [Spec](./6-specs.md) to manage resource specifications. A Kusion resource is a logical concept that encapsulates physical resources on different resource planes, including but not limited to Kubernetes, AWS, AliCloud, Azure and Google Cloud.
+
+Kusion Resources are produced by [Kusion Module Generators](./3-module/1-overview.md) and usually map to a physical resource that can be applied via a Kusion Runtime.
+
+## Runtimes
+
+Runtimes are the consumers of the resources in a Spec, turning them into actual physical resources in infrastructure providers.
+
+Currently there are two in-tree runtimes defined in the Kusion source code, but we are planning to make them out-of-tree in the future:
+
+- Kubernetes, used to manage resources inside a Kubernetes cluster. This can be any native Kubernetes resources or CRDs (if you wish to manage infrastructures via [CrossPlane](https://www.crossplane.io/) or [Kubevela](https://kubevela.io/), for example, this is completely doable by creating a Kusion Module with a generator that produces the resource YAML for the relevant CRDs)
+- Terraform, used to manage infrastructure resources outside a Kubernetes cluster. Kusion uses Terraform as an executor that can manage basically any infrastructure given there is a [terraform provider](https://developer.hashicorp.com/terraform/language/providers) tailored for the infrastructure API. This is generally used to manage the lifecycle of infrastructure resources on clouds, no matter public or on-prem.
+
+## Resource Planes
+
+Resource Plane is a property of a Kusion resource. It represents the actual plane on which the resource exists. Current resource planes include `kubernetes`, `aws`, `azure`, `google`, `alicloud`, `ant`, and `custom`.
+
+## Resource ID, Resource URN and Cloud Resource ID
+
+Kusion Resource ID is a unique identifier for a Kusion Resource within a Spec. It must be unique across a Spec. The resource ID is technically generated by module generators, so there are no definite rules for producing a Kusion Resource ID. The best practice is to use the `KubernetesResourceID()` and `TerraformResourceID()` methods from [kusion-module-framework](https://github.com/KusionStack/kusion-module-framework) to manage Kusion Resource IDs. You can use the [official module generators](https://github.com/KusionStack/catalog/blob/main/modules/mysql/src/alicloud_rds.go#L164) as a reference.
+
+:::tip
+Resource ID validations do exist.
+For Kubernetes resources, the resource ID must include API version, kind, namespace (if applicable) and name.
+For Terraform resources, the resource ID must include provider namespace, provider name, resource type and resource name.
+It's always recommended to use the `KubernetesResourceID()` and `TerraformResourceID()` methods from [kusion-module-framework](https://github.com/KusionStack/kusion-module-framework) to produce the Resource IDs.
+:::
+
+Kusion Resource URN is used to uniquely identify a Kusion Resource across a Kusion server instance. It consists of `${project-name}:${stack-name}:${workspace-name}:${kusion-resource-id}` to ensure global uniqueness.
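+
+Using an illustrative Kubernetes resource ID (per the validation rules above, it must include API version, kind, namespace, and name), the URN composition can be sketched as:
+
+```shell
+# Compose a Kusion Resource URN from its parts
+PROJECT="quickstart"
+STACK="default"
+WORKSPACE="dev"
+RESOURCE_ID="apps/v1:Deployment:quickstart:quickstart"   # illustrative resource ID
+echo "${PROJECT}:${STACK}:${WORKSPACE}:${RESOURCE_ID}"
+# → quickstart:default:dev:apps/v1:Deployment:quickstart:quickstart
+```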
+
+Cloud Resource ID is used to map to an actual resource on the cloud. For AWS and AliCloud, this is usually known as the resource `ARN` on the cloud. For Azure and Google Cloud, this is known as the Resource ID. It can be empty in some cases; for example, a Kubernetes resource does not have a cloud resource ID.
+
+## Resource Graphs
+
+A Resource Graph visualizes the relationship between all resources for a given stack. In the Kusion developer portal, you can inspect the resource graph by clicking on the `Resource Graph` tab on the stack page:
+
+
+
+You can closely inspect the resource details by hovering over the resource node on the graph.
+
+
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/11-cli-configuration.md b/docs_versioned_docs/version-v0.14/3-concepts/11-cli-configuration.md
new file mode 100644
index 00000000..fc681634
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/11-cli-configuration.md
@@ -0,0 +1,118 @@
+---
+id: configuration
+sidebar_label: CLI Configurations
+---
+
+# CLI Configurations
+
+:::tip
+If you are using Kusion server, this article does NOT apply.
+:::
+
+Kusion CLI can be configured with some global settings, which are separate from the AppConfiguration written by the application developers and the workspace configurations written by the platform engineers.
+
+The configurations are only relevant to Kusion itself, and can be managed with the command `kusion config`. Configuration items are addressed in a hierarchical format, with full stops as segment separators, such as `backends.current`. For now, only the backend configurations are included.
+
+The configuration is stored in the file `${KUSION_HOME}/config.yaml`. For sensitive data, such as passwords, access key IDs and secrets, setting them in the configuration file is not recommended; using the corresponding environment variables is safer.
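+
+For example, credentials for an OSS backend can be supplied through environment variables instead of the configuration file (the values below are placeholders; substitute your own):
+
+```shell
+# Keep sensitive values out of ${KUSION_HOME}/config.yaml by exporting
+# the corresponding environment variables instead (example values shown).
+export OSS_ACCESS_KEY_ID="example-access-key-id"
+export OSS_ACCESS_KEY_SECRET="example-access-key-secret"
+```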
+
+## Configuration Management
+
+Kusion provides the command `kusion config` with the sub-commands `get`, `list`, `set` and `unset` to manage the configuration. Their usage is shown below:
+
+### Get a Specified Configuration Item
+
+Use `kusion config get` to get the value of a specified configuration item; only registered items can be obtained. For example:
+
+```shell
+# get a configuration item
+kusion config get backends.current
+```
+
+### List the Configuration Items
+
+Use `kusion config list` to list all the Kusion configurations; the result is in YAML format. For example:
+
+```shell
+# list all the Kusion configurations
+kusion config list
+```
+
+### Set a Specified Configuration Item
+
+Use `kusion config set` to set the value of a specified configuration item, where each item has a fixed value type. Kusion supports `string`, `int`, `bool`, `array` and `map` as the value types, which should be conveyed in the following formats through the CLI:
+
+- `string`: the original format, such as `local-dev`, `oss-pre`;
+- `int`: convert to string, such as `3306`, `80`;
+- `bool`: convert to string, only support `true` and `false`;
+- `array`: convert to a string with JSON marshaling, such as `'["s3","oss"]'`. To preserve the format, enclose the string content in single quotes; otherwise there may be unexpected errors;
+- `map`: convert to a string with JSON marshaling, such as `'{"path":"/etc"}'`.
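+
+One reliable way to produce correctly escaped `array` and `map` values is to JSON-encode them before passing them to the CLI (a sketch; any JSON encoder works):
+
+```python
+import json
+
+# JSON-encode the value, then wrap it in single quotes for the shell,
+# matching the formats shown above.
+backends = ["s3", "oss"]
+configs = {"configs": {"bucket": "kusion"}, "type": "s3"}
+
+array_arg = json.dumps(backends, separators=(",", ":"))
+map_arg = json.dumps(configs, separators=(",", ":"))
+print(f"kusion config set backends.prod '{map_arg}'")
+```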
+
+Besides the value type, some configuration items have additional requirements. There may be dependencies between items, meaning one item must be set after another. There may also be restrictions on the values themselves, such as the valid keys for a map value, or the allowed range for an int value. For detailed information on each configuration item, please refer to the rest of this article.
+
+Examples of setting configuration items are shown below.
+
+```shell
+# set a configuration item of type string
+kusion config set backends.pre.type s3
+
+# set a configuration item of type map
+kusion config set backends.prod '{"configs":{"bucket":"kusion"},"type":"s3"}'
+```
+
+### Unset a Specified Configuration Item
+
+Use `kusion config unset` to unset a specified configuration item. Note that some items have dependencies and must be unset in the correct order. For example:
+
+```shell
+# unset a specified configuration item
+kusion config unset backends.pre
+```
+
+## Backend Configurations
+
+The backend configurations define where to store the Workspace, Spec and State files. Multiple backends can be configured, and one of them can be selected as the current backend.
+
+### Available Configuration Items
+
+- **backends.current**: type `string`, the name of the backend currently in use. It can be set to any configured backend name. If not set, the default local backend is used.
+- **backends.${name}**: type `map`, a complete backend configuration containing the type and config items, in the format below. It can only be unset when the backend is not the current one.
+```yaml
+{
+ "type": "${backend_type}", # type string, required, support local, oss, s3.
+ "configs": ${backend_configs} # type map, optional for type local, required for the others, the specific keys depend on the type, refer to the description of backends.${name}.configs.
+}
+```
+- **backends.${name}.type**: type `string`, the backend type; `local`, `oss` and `s3` are supported. It can only be unset when the backend is not the current one and the corresponding `backends.${name}.configs` is empty.
+- **backends.${name}.configs**: type `map`, the backend config items, whose format depends on the backend type as shown below. It must be set after `backends.${name}.type`.
+```yaml
+# type local
+{
+ "path": "${local_path}" # type string, optional, the directory to store the files. If not set, use the default path ${KUSION_HOME}.
+}
+
+# type oss
+{
+ "endpoint": "${oss_endpoint}", # type string, required, the oss endpoint.
+ "accessKeyID": "${oss_access_key_id}", # type string, optional, the oss access key id, which can be also obtained by environment variable OSS_ACCESS_KEY_ID.
+ "accessKeySecret": "${oss_access_key_secret}", # type string, optional, the oss access key secret, which can be also obtained by environment variable OSS_ACCESS_KEY_SECRET
+ "bucket": "${oss_bucket}", # type string, required, the oss bucket.
+ "prefix": "${oss_prefix}" # type string, optional, the prefix to store the files.
+}
+
+ # type s3
+{
+ "region": "${s3_region}", # type string, optional, the aws region, which can be also obtained by environment variables AWS_REGION and AWS_DEFAULT_REGION.
+ "endpoint": "${s3_endpoint}", # type string, optional, the aws endpoint.
+ "accessKeyID": "${s3_access_key_id}", # type string, optional, the aws access key id, which can be also obtained by environment variable AWS_ACCESS_KEY_ID.
+ "accessKeySecret": "${s3_access_key_secret}", # type string, optional, the aws access key secret, which can be also obtained by environment variable AWS_SECRET_ACCESS_KEY
+ "bucket": "${s3_bucket}", # type string, required, the s3 bucket.
+ "prefix": "${s3_prefix}" # type string, optional, the prefix to store the files.
+}
+```
+- **backends.${name}.configs.path**: type `string`, the path of a local type backend. It must be set after `backends.${name}.type`, which must be `local`.
+- **backends.${name}.configs.endpoint**: type `string`, the endpoint of an oss or s3 type backend. It must be set after `backends.${name}.type`, which must be `oss` or `s3`.
+- **backends.${name}.configs.accessKeyID**: type `string`, the access key ID of an oss or s3 type backend. It must be set after `backends.${name}.type`, which must be `oss` or `s3`. For `oss`, it can also be obtained from the environment variable `OSS_ACCESS_KEY_ID`; for `s3`, it is `AWS_ACCESS_KEY_ID`.
+- **backends.${name}.configs.accessKeySecret**: type `string`, the access key secret of an oss or s3 type backend. It must be set after `backends.${name}.type`, which must be `oss` or `s3`. For `oss`, it can also be obtained from the environment variable `OSS_ACCESS_KEY_SECRET`; for `s3`, it is `AWS_SECRET_ACCESS_KEY`.
+- **backends.${name}.configs.bucket**: type `string`, the bucket of an oss or s3 type backend. It must be set after `backends.${name}.type`, which must be `oss` or `s3`.
+- **backends.${name}.configs.prefix**: type `string`, the prefix for storing the files of an oss or s3 type backend. It must be set after `backends.${name}.type`, which must be `oss` or `s3`.
+- **backends.${name}.configs.region**: type `string`, the AWS region of an s3 type backend. It must be set after `backends.${name}.type`, which must be `s3`. It can also be obtained from the environment variables `AWS_REGION` and `AWS_DEFAULT_REGION`, with the former taking priority.
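+
+Putting the items above together, a `${KUSION_HOME}/config.yaml` configured with a local and an s3 backend might look like the following (an illustrative sketch; the path, region, bucket and prefix values are placeholders, and credentials are supplied via environment variables):
+
+```yaml
+backends:
+  current: prod
+  pre:
+    type: local
+    configs:
+      path: /tmp/kusion
+  prod:
+    type: s3
+    configs:
+      region: us-east-1
+      bucket: kusion
+      prefix: releases
+```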
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/2-stack/1-overview.md b/docs_versioned_docs/version-v0.14/3-concepts/2-stack/1-overview.md
new file mode 100644
index 00000000..b896a736
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/2-stack/1-overview.md
@@ -0,0 +1,16 @@
+---
+sidebar_label: Overview
+id: overview
+---
+
+# Overview
+
+A stack in Kusion is defined as a folder within the project directory that contains a `stack.yaml` file. Stacks provide a mechanism to isolate multiple sets of different configurations in the same project. It is also the smallest unit of operation that can be configured and deployed independently.
+
+The most common way to leverage stacks is to denote different phases of the software development lifecycle, such as `development`, `staging`, `production`, etc. For instance, in the case where the image and resource requirements for an application workload might be different across different phases in the SDLC, they can be represented by different stacks in the same project, namely `dev`, `stage` and `prod`.
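+
+For instance, such a project could be laid out with one folder per stack, each containing its own `stack.yaml` (a hypothetical layout):
+
+```shell
+example
+├── project.yaml
+├── dev
+│   ├── main.k
+│   └── stack.yaml
+├── stage
+│   ├── main.k
+│   └── stack.yaml
+└── prod
+    ├── main.k
+    └── stack.yaml
+```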
+
+To distinguish this from the deploy-time concept of a "target environment", which Kusion defines as [workspaces](../4-workspace/1-overview.md), a **stack** is a development-time concept for application developers to manage different configurations. One way to illustrate the difference is that you can deploy the same `prod` stack to multiple target environments, for example, `aws-prod-us-east`, `aws-prod-us-east-2` and `azure-prod-westus`.
+
+## High Level Schema
+
+
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/2-stack/2-configuration.md b/docs_versioned_docs/version-v0.14/3-concepts/2-stack/2-configuration.md
new file mode 100644
index 00000000..b09a5c43
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/2-stack/2-configuration.md
@@ -0,0 +1,38 @@
+---
+id: configuration
+sidebar_label: Stack file reference
+---
+
+# Kusion stack file reference
+
+Every Kusion project's stack has a stack file, `stack.yaml`, which specifies metadata about your stack, such as the stack name and stack description. The stack file must begin with lowercase `stack` and have an extension of either `.yaml` or `.yml`.
+
+## Attributes
+
+| Name | Required | Description | Options |
+|:------------- |:--------------- |:------------- |:------------- |
+| `name` | required | Name of the stack, containing alphanumeric characters, hyphens and underscores. | None |
+| `description` | optional | A brief description of the stack. | None |
+| `extensions` | optional | List of extensions on the stack. | [See below](#extensions) |
+
+### Extensions
+
+Extensions allow you to customize how resources are generated as part of a release.
+
+#### kubernetesNamespace
+
+The Kubernetes namespace extension allows you to customize the namespace of the Kubernetes resources generated by your application.
+
+| Key | Required | Description | Example |
+|:------|:--------:|:-------------|:---------|
+| kind | y | The kind of extension being used. Must be `kubernetesNamespace`. | `kubernetesNamespace` |
+| namespace | y | The namespace into which the application's Kubernetes resources are generated. | `default` |
+
+```yaml
+# Example `stack.yaml` file with customized namespace of `test`.
+name: dev
+extensions:
+ - kind: kubernetesNamespace
+ kubernetesNamespace:
+ namespace: test
+```
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/2-stack/_category_.json b/docs_versioned_docs/version-v0.14/3-concepts/2-stack/_category_.json
new file mode 100644
index 00000000..914c863f
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/2-stack/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Stacks"
+}
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/3-module/1-overview.md b/docs_versioned_docs/version-v0.14/3-concepts/3-module/1-overview.md
new file mode 100644
index 00000000..fb9f1359
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/3-module/1-overview.md
@@ -0,0 +1,16 @@
+# Overview
+
+A Kusion module is a reusable building block designed by platform engineers to standardize application deployments and enable application developers to self-service. It consists of two parts:
+
+- App developer-friendly schema: It is a [KCL schema](https://kcl-lang.io/docs/user_docs/guides/schema-definition/). Fields in this schema are recommended to be understandable to application developers and workspace-agnostic. For example, a database Kusion module schema only contains fields like database engine type and database version.
+- Kusion module generator: It is a piece of logic that generates the [Spec](../6-specs.md) from an instantiated schema mentioned above, along with platform configurations managed in [workspaces](../4-workspace/1-overview.md). As the most fundamental building block, a Kusion module provides the necessary abstraction to developers by hiding the complexity of infrastructure. A database Kusion module not only represents a cloud RDS instance, but also contains logic to configure other resources such as security groups and IAM policies. Additionally, it seamlessly injects the database host address, username, and password into the workload's environment variables. The generator logic can be very complex in some situations, so we recommend implementing it in a general-purpose programming language like [Go](https://go.dev/).
+
+Here are some explanations of the Kusion Module:
+
+1. It represents an independent unit that provides a specific capability to the application with clear business semantics.
+2. It may consist of one or multiple infrastructure resources (K8s/Terraform resources), but it is not merely a collection of unrelated resources. For instance, a database, monitoring capabilities, and network access are typical Kusion Modules.
+3. Modules should not have dependencies or be nested within each other.
+4. AppConfig is not a Module.
+5. Kusion Module is a superset of [KPM](https://www.kcl-lang.io/docs/user_docs/guides/package-management/quick-start). It leverages the KPM to manage KCL schemas in the Kusion module.
+
+
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/3-module/2-develop-guide.md b/docs_versioned_docs/version-v0.14/3-concepts/3-module/2-develop-guide.md
new file mode 100644
index 00000000..9b60aaf4
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/3-module/2-develop-guide.md
@@ -0,0 +1,256 @@
+# Platform Engineer Develop Guide
+
+## Prerequisites
+
+To follow this guide, you will need:
+
+- Go 1.22 or higher installed and configured
+- Kusion v0.12 or higher installed locally
+
+## Workflow
+
+As a platform engineer, the workflow of developing a Kusion module looks like this:
+
+1. Communicate with app developers and identify the fields that should be exposed to them in the dev-oriented schema
+2. Identify module input parameters that should be configured by platform engineers in the [workspace](../4-workspace/1-overview.md)
+3. Define the app dev-oriented schema
+4. Develop the module by implementing gRPC interfaces
+
+The first two steps primarily involve communication with the application development team, and the specific details are not included in this tutorial. This tutorial begins with the subsequent two steps.
+
+## Set up a developing environment
+
+Developing a Kusion module involves defining a KCL schema and developing a module binary in Go. We provide a scaffold repository and the command `kusion mod init` to help developers set up the development environment easily.
+
+After executing the command
+
+```shell
+kusion mod init
+```
+
+Kusion will download a [scaffold repository](https://github.com/KusionStack/kusion-module-scaffolding) and rename this project with your module name. The scaffold contains code templates and all files needed for developing a Kusion module.
+
+## Developing
+
+The scaffold repository directory structure is shown below:
+
+```shell
+$ tree kawesome/
+.
+├── example
+│ ├── dev
+│ │ ├── example_workspace.yaml
+│ │ ├── kcl.mod
+│ │ ├── main.k
+│ │ └── stack.yaml
+│ └── project.yaml
+├── kawesome.k
+├── kcl.mod
+└── src
+ ├── Makefile
+ ├── go.mod
+ ├── go.sum
+ ├── kawesome_generator.go
+ └── kawesome_generator_test.go
+```
+
+When developing a Kusion module with the scaffold repository, you could follow the steps below:
+
+1. **Define the module name and version**
+    - For Go files, rename the module name in the `go.mod` and related files to your Kusion module name.
+    ```go
+    module kawesome
+
+    go 1.22
+
+    require (
+        ...
+    )
+    ```
+    - For KCL files, rename the package name and version in the `kcl.mod`:
+    ```toml
+    [package]
+    name = "kawesome"
+    version = "0.2.0"
+    ```
+
+    We assume the module name is `kawesome` and the version is `0.2.0` in this guide.
+
+2. **Define the dev-orient schemas**. They would be initialized by app developers. In this scaffold repository, we've defined a schema named Kawesome in `kawesome.k` that consists of two resources `Service` and `RandomPassword` and they will be generated into a Kubernetes Service and a Terraform RandomPassword later.
+
+```python
+schema Kawesome:
+    """ Kawesome is a sample module schema consisting of Service
+    and RandomPassword
+
+    Attributes
+    ----------
+    service: Service, default is Undefined, required.
+        The exposed port of Workload, which will be generated into Kubernetes Service.
+    randomPassword: RandomPassword, default is Undefined, required.
+        The sensitive random string, which will be generated into Terraform random_password.
+
+    Examples
+    --------
+    import kawesome as ks
+
+    ... ...
+
+    accessories: {
+        "kawesome": kawesome.Kawesome {
+            service: kawesome.Service{
+                port: 5678
+            }
+            randomPassword: kawesome.RandomPassword {
+                length: 20
+            }
+        }
+    }
+    """
+
+    # The exposed port of Workload, which will be generated into Kubernetes Service.
+    service: Service
+
+    # The sensitive random string, which will be generated into Terraform random_password.
+    randomPassword: RandomPassword
+```
+
+3. **Implement the [gRPC proto](https://github.com/KusionStack/kusion/blob/main/pkg/modules/proto/module.proto) generate interface.** The `generate` interface consumes the application developer's config described in the [`AppConfiguration`](../appconfigurations) and the platform engineer's config described in the [`workspace`](../4-workspace/1-overview.md) to generate all infrastructure resources represented by this module.
+
+```go
+func (k *Kawesome) Generate(_ context.Context, request *module.GeneratorRequest) (*module.GeneratorResponse, error) {
+    // generate your infrastructure resources
+}
+
+// Patcher primarily contains patches for fields associated with Workloads, and additionally offers the capability to patch other resources.
+type Patcher struct {
+ // Environments represent the environment variables patched to all containers in the workload.
+ Environments []v1.EnvVar `json:"environments,omitempty" yaml:"environments,omitempty"`
+ // Labels represent the labels patched to the workload.
+ Labels map[string]string `json:"labels,omitempty" yaml:"labels,omitempty"`
+ // PodLabels represent the labels patched to the pods.
+ PodLabels map[string]string `json:"podLabels,omitempty" yaml:"podLabels,omitempty"`
+ // Annotations represent the annotations patched to the workload.
+ Annotations map[string]string `json:"annotations,omitempty" yaml:"annotations,omitempty"`
+ // PodAnnotations represent the annotations patched to the pods.
+ PodAnnotations map[string]string `json:"podAnnotations,omitempty" yaml:"podAnnotations,omitempty"`
+ // JSONPatchers represents patchers that can be patched to an arbitrary resource.
+ // The key of this map represents the ResourceId of the resource to be patched.
+ JSONPatchers map[string]JSONPatcher `json:"jsonPatcher,omitempty" yaml:"jsonPatcher,omitempty"`
+}
+```
+
+The `GeneratorRequest` contains the application developer's config, the platform engineer's config, the workload config and related metadata that a module may need to generate infrastructure resources.
+The `GeneratorResponse` contains two fields, `Resources` and `Patchers`. The `Resources` represent resources that should be operated on by Kusion, and they will be appended to the [Spec](../specs). The `Patchers` are used to patch the workload and other resources.
+
+### Workload
+
+The workload in the AppConfiguration is also a Kusion module. If the workload module generates only one resource, that resource is regarded as the workload resource. However, if the workload module generates more than one resource, exactly one of them must contain a key-value pair in the `extension` field, where the key is `kusion.io/is-workload` and the value is `true`; this resource is regarded as the workload resource.
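+
+For example, a multi-resource workload module could mark its Deployment as the workload resource like this (a sketch; the exact shape of the extension field in the `Spec` is assumed here):
+
+```yaml
+spec:
+  resources:
+    - id: apps/v1:Deployment:example:example-dev-kawesome
+      type: Kubernetes
+      attributes:
+        # ... Deployment manifest ...
+      extensions:
+        kusion.io/is-workload: true
+    - id: v1:Service:example:example-dev-kawesome
+      type: Kubernetes
+      attributes:
+        # ... Service manifest ...
+```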
+
+### Implicit Resource Dependency
+
+When you need to use an attribute of another resource as the value of a specific resource attribute, Kusion supports declaring implicit resource dependencies with the `$kusion_path` prefix. You can build the implicit resource dependency path by concatenating the `$kusion_path` prefix, the resource `id` and the attribute path, for example:
+
+```yaml
+# Dependency path as an attribute value.
+spec:
+ resources:
+ - id: v1:Service:test-ns:test-service
+ type: Kubernetes
+ attributes:
+ metadata:
+ annotations:
+ deployment-name: $kusion_path.v1:Deployment:test-ns:test-deployment.metadata.name
+```
+
+In addition, please note that:
+
+- The implicit resource dependency path can only be used to replace the value in `Attributes` field of the `Resource`, but not the key. For example, the following `Spec` is **invalid**:
+
+```yaml
+# Dependency path not in `attributes`.
+spec:
+ resources:
+ - id: v1:Service:test:$kusion_path.apps/v1:Deployment:test-ns:test-deployment.metadata.name
+```
+
+```yaml
+# Dependency path in the key, but not in the value.
+spec:
+ resources:
+ - id: apps/v1:Deployment:test-ns:test-deployment
+ type: Kubernetes
+ attributes:
+ metadata:
+ annotations:
+ $kusion_path.v1:Service:test-ns:test-service.metadata.name: test-svc
+```
+
+- The implicit resource dependency path can only be used as a standalone value and cannot be combined with other strings. For example, the following `Spec` is **invalid**:
+
+```yaml
+# Dependency path combined with other string.
+spec:
+ resources:
+ - id: apps/v1:Deployment:test-ns:test-deployment
+ type: Kubernetes
+ attributes:
+ metadata:
+ annotations:
+ test-svc: $kusion_path.v1:Service:test-ns:test-service.metadata.name + "-test"
+```
+
+- The implicit resource dependency path does not support accessing values in an array, so the following is currently **invalid**:
+
+```yaml
+# Dependency path accessing the value in an array.
+spec:
+ resources:
+ - id: apps/v1:Deployment:test-ns:test-deployment
+ type: Kubernetes
+ attributes:
+ metadata:
+ annotations:
+ test-svc: $kusion_path.v1:Service:test-ns:test-service.spec.ports[0].name
+```
+
+## Publish
+
+Publish the Kusion module to an OCI registry with the command `kusion mod push`. If your module is open to the public, we **welcome and highly encourage** you to contribute it to the module registry [catalog](https://github.com/KusionStack/catalog), so that more people can benefit from it. Submit a pull request to this repository; once it is merged, the module will be published to the [KusionStack GitHub container registry](https://github.com/orgs/KusionStack/packages).
+
+Publish a stable version
+```shell
+kusion mod push /path/to/my-module oci:/// --creds
+```
+
+Publish a module of a specific OS arch
+```shell
+kusion mod push /path/to/my-module oci:/// --os-arch=darwin/arm64 --creds
+```
+
+Publish a pre-release version
+```shell
+kusion mod push /path/to/my-module oci:/// --latest=false --creds
+```
+
+:::info
+The OCI URL format is `oci:///`. Please ensure that your token has permissions to write to the registry.
+:::
+
+More details can be found in the `kusion mod push` reference doc.
+
+## Register to the workspace
+
+```yaml
+modules:
+ kawesome:
+ path: oci://ghcr.io/kusionstack/kawesome
+ version: 0.2.0
+ configs:
+ default:
+ service:
+ labels:
+ kusionstack.io/module-name: kawesome
+ annotations:
+ kusionstack.io/module-version: 0.2.0
+```
+
+Register module platform configuration in the `workspace.yaml` to standardize the module's behavior. App developers can list all available modules registered in the workspace.
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/3-module/3-app-dev-guide.md b/docs_versioned_docs/version-v0.14/3-concepts/3-module/3-app-dev-guide.md
new file mode 100644
index 00000000..3169c67c
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/3-module/3-app-dev-guide.md
@@ -0,0 +1,127 @@
+# Application Developer User Guide
+
+## Prerequisites
+
+To follow this guide, you will need:
+
+- Kusion v0.12 or higher installed locally
+
+## Workflow
+
+As an application developer, the workflow of using a Kusion module looks like this:
+
+1. Browse available modules registered by platform engineers in the workspace
+2. Add modules you need to your Stack
+3. Initialize modules
+4. Apply the AppConfiguration
+
+## Browse available modules
+
+For all KusionStack built-in modules, you can find the available modules and their documentation in the [reference](../../6-reference/2-modules/index.md).
+
+Since the platform engineers have already registered the available modules in the workspace, app developers can execute `kusion mod list` to list the available modules.
+
+```shell
+kusion mod list --workspace dev
+
+Name Version URL
+kawesome 0.2.0 oci://ghcr.io/kusionstack/kawesome
+```
+
+## Add modules to your Stack
+
+Taking `kawesome` as an example, the directory structure is shown below:
+
+```shell
+example
+├── dev
+│ ├── example_workspace.yaml
+│ ├── kcl.mod
+│ ├── main.k
+│ └── stack.yaml
+└── project.yaml
+```
+
+Select the module you need from the result of `kusion mod list` and execute `kusion mod add kawesome` to add `kawesome` into your Stack.
+
+Once you have added the `kawesome` module, the `kcl.mod` file will be updated to look like this.
+
+```toml
+[package]
+name = "example"
+
+[dependencies]
+kawesome = { oci = "oci://ghcr.io/kusionstack/kawesome", tag = "0.2.0" }
+service = { oci = "oci://ghcr.io/kusionstack/service", tag = "0.1.0" }
+kam = { git = "https://github.com/KusionStack/kam.git", tag = "0.2.0" }
+
+[profile]
+entries = ["main.k"]
+```
+
+- The `kam` dependency represents the [Kusion Application Module](https://github.com/KusionStack/kam.git) which contains the AppConfiguration.
+- The `service` dependency represents the service workload module.
+- The `kawesome` is the Kusion module we are going to use in the AppConfiguration.
+
+## Initialize modules
+
+```python
+# The configuration codes in perspective of developers.
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import kawesome.v1.kawesome
+
+kawesome: ac.AppConfiguration {
+ # Declare the workload configurations.
+ workload: service.Service {
+ containers: {
+ kawesome: c.Container {
+ image: "hashicorp/http-echo"
+ env: {
+ "ECHO_TEXT": "$(KUSION_KAWESOME_RANDOM_PASSWORD)"
+ }
+ }
+ }
+ replicas: 1
+ }
+ # Declare the kawesome module configurations.
+ accessories: {
+ "kawesome": kawesome.Kawesome {
+ service: kawesome.Service{
+ port: 5678
+ }
+ randomPassword: kawesome.RandomPassword {
+ length: 20
+ }
+ }
+ }
+}
+```
+
+Initialize the `kawesome` module in the `accessories` block of the AppConfiguration. The key of the `accessories` item represents the module name and the value represents the actual module you required.
+
+## Apply the result
+
+Execute the `kusion apply` command; it previews the diffs for confirmation before applying them.
+
+```shell
+kusion apply
+ ✔︎ Generating Spec in the Stack dev...
+Stack: dev
+ID Action
+hashicorp:random:random_password:example-dev-kawesome Create
+v1:Namespace:example Create
+v1:Service:example:example-dev-kawesome Create
+apps/v1:Deployment:example:example-dev-kawesome Create
+
+
+Do you want to apply these diffs?:
+ > details
+Which diff detail do you want to see?:
+> all
+ hashicorp:random:random_password:example-dev-kawesome Create
+ v1:Namespace:example Create
+ v1:Service:example:example-dev-kawesome Create
+ apps/v1:Deployment:example:example-dev-kawesome Create
+```
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/3-module/4-registration.md b/docs_versioned_docs/version-v0.14/3-concepts/3-module/4-registration.md
new file mode 100644
index 00000000..e67f42a1
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/3-module/4-registration.md
@@ -0,0 +1,41 @@
+# Module Registration
+
+After a module is published, it needs to be registered before it can be used in **Kusion server**. This helps the server accurately generate the `kcl.mod` file that describes the dependencies of a configuration (i.e. which modules a developer can use).
+
+This step is not required when using Kusion CLI.
+
+## APIs
+
+The APIs to manage modules can be found in the swagger docs under `{server-endpoint}/docs/index.html#/module`.
+
+## Register a module via Developer Portal
+
+To register a new module via the developer portal, switch to the `Modules` tab and click on `New Module`.
+
+
+
+Fill out the module details. It's always recommended to provide a link to the module's documentation for developers to read.
+
+
+
+## Edit a registered module
+
+To edit a registered module, click the `edit` button.
+
+
+
+## Delete a registered module
+
+To delete a registered module, click the `delete` button.
+
+
+
+## Generate kcl.mod
+
+To generate the `kcl.mod` for a stack targeting a workspace, go to the workspace page, select `Available Modules`, then click on `Generate kcl.mod`.
+
+
+
+A text box appears with the module dependency content generated. This should be copied and pasted into the `kcl.mod` file in the stack folder.
+
+
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/3-module/_category_.json b/docs_versioned_docs/version-v0.14/3-concepts/3-module/_category_.json
new file mode 100644
index 00000000..5952a21e
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/3-module/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Modules"
+}
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/4-workspace/1-overview.md b/docs_versioned_docs/version-v0.14/3-concepts/4-workspace/1-overview.md
new file mode 100644
index 00000000..f77158b6
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/4-workspace/1-overview.md
@@ -0,0 +1,106 @@
+# Overview
+
+Workspace is a logical concept that maps to an actual target environment to deploy a stack to. In today's context, this _usually_ represents a Kubernetes cluster for the application workload and an optional cloud account to provision infrastructure resources that the workload depends on (A database, for example). Aside from the deployment destination, workspaces are also designed to be associated with a series of platform configurations that are used by all the stacks deployed to said workspace.
+
+When executing the command `kusion generate`, Kusion will "match" the AppConfiguration and the appropriate workspace configuration to dynamically generate the `Spec`, which contains the complete manifest describing the resources in the stack. The relationship between Project, Stack and Workspace is shown below. Notice that all three ways of organizing stacks are valid.
+
+
+
+Workspace is designed to address separation of concerns. As opposed to the development-time concept of a "stack", a workspace is a deploy-time concept that represents a deployment target, a.k.a an actual runtime environment. Workspaces should be entirely managed by Platform Engineers to alleviate the burden on developers to understand environment-specific details.
+
+To dig a little deeper, a workspace represents the need for a distinct set of "platform opinions". That includes things that application developers either don't want to or shouldn't need to worry about, such as which Kubernetes cluster to deploy to, the credentials for said clusters, and other infrastructure details like which database instance to provision.
+
+Workspaces are intended to be flexible, so you can map them as you see fit to the boundaries that are relevant to your use case. For example, you can map a workspace to a cloud region (`aws-us-east-1`), provided that regional isolation is sufficient for you (an extreme case). Alternatively, a workspace can be mapped to the combination of a cloud region and an SDLC phase (`aws-dev-us-east-1`), provided that it represents the right boundary from a platform perspective.
+
+The workspace configuration is in a deterministic format, currently written in YAML. The subcommands of `kusion workspace` are provided to manage workspaces. When using `kusion workspace`, the workspace configuration is saved as a YAML file in the local file system. To reduce the risk of leaking sensitive data such as Access Keys and Secret Keys, environment variables are provided to hold them.
+
+## Workspace Configuration
+
+The configuration of a Workspace is stored in a single YAML file, which consists of `modules`, `secretStore`, and `context`. An example of a Workspace configuration is shown below.
+
+:::tip
+The workspace configuration files are stored in [Backends](../7-backend/1-overview.md).
+:::
+
+```yaml
+# The platform configuration for Modules or KAMs.
+# For each Module or KAM, the configuration format is as below.
+# # ${module_identifier} or ${KAM_name}:
+# # path: oci://ghcr.io/kusionstack/module-name # url of the module artifact
+# # version: 0.2.0 # version of the module
+# # configs:
+# # default: # default configuration, applied to all projects
+# # ${field1}: ${value1}
+# # ${field2}: ${value2}
+# # ...
+# # ${patcher_name}: # patcher configuration, applied to the projects assigned in projectSelector
+# # ${field1}: ${value1_override}
+# # ...
+# # projectSelector:
+# # - ${project1_name}
+# # - ${project2_name}
+# # ...
+modules:
+ mysql:
+ path: oci://ghcr.io/kusionstack/mysql
+ version: 0.2.0
+ configs:
+ default:
+ cloud: alicloud
+ size: 20
+ instanceType: mysql.n2.serverless.1c
+ category: serverless_basic
+ privateRouting: false
+ subnetID: ${mysql_subnet_id}
+ databaseName: kusion
+ largeSize:
+ size: 50
+ projectSelector:
+ - foo
+ - bar
+ importDBInstance:
+ importedResources:
+ "aliyun:alicloud:alicloud_db_instance:wordpress-demo": "your-imported-resource-id"
+ projectSelector:
+ - baz
+
+secretStore:
+ provider:
+ aws:
+ region: us-east-1
+ profile: The optional profile to be used to interact with AWS Secrets Manager.
+
+context:
+ KUBECONFIG_PATH: $HOME/.kube/config
+ AWS_ACCESS_KEY_ID: ref://secrets-manager-name/key-for-ak
+ AWS_SECRET_ACCESS_KEY: ref://secrets-manager-name/key-for-sk
+```
+
+### modules
+
+The `modules` are the platform-side configurations of Modules and KAMs, identified by `${namespace}/${module_name}@${module_tag}` and `${kam_name}` respectively. Each Module or KAM configuration is composed of a `default` block and several `patcher` blocks. The `default` block contains the universal configuration of the Workspace, applied to all Stacks in the Workspace, and is composed of the values of the Module's or KAM's fields. A `patcher` block contains the exclusive configuration for certain Stacks, and includes not only the field values but also the Projects it applies to.
+
+The `patcher` block is designed to give platform engineers more flexibility when managing Workspaces. Because a Workspace maps to a real physical environment, in actual production practice it is almost impossible for all Stacks to share exactly the same platform configuration, however desirable that might be.
+
+The values of fields in a `patcher` override those in `default`. If more than one `patcher` declares the same field with different values, the Projects they apply to must not overlap. Also, the name of a `patcher` must not be `default`.
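+
+For instance, with the `mysql` Module configuration in the example above, the project `foo` is selected by the `largeSize` patcher, so its `size` field overrides the `default` value. The effective configuration rendered for `foo` would conceptually look like the following (an illustrative sketch, not a file that Kusion writes out):
+
+```yaml
+# effective mysql configuration for project "foo":
+# the "default" block with the "largeSize" patcher applied on top
+cloud: alicloud
+size: 50                              # overridden by the "largeSize" patcher
+instanceType: mysql.n2.serverless.1c
+category: serverless_basic
+privateRouting: false
+subnetID: ${mysql_subnet_id}
+databaseName: kusion
+```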
+
+In a `patcher`, the applied Projects are assigned by the field `projectSelector`, which is an array of Project names. A `projectSelector` is provided rather than something like a `StackSelector` (which would specify the applied Stacks) for the following reason: the mapping between Workspace and Stack is specified by the user running Kusion operation commands, and the Workspace is designed for reusing platform-level configuration across multiple Projects. When a Project "encounters" a Workspace, it becomes a "Stack instance", which can be applied to a set of real resources. With something like a `StackSelector`, this reuse would not be possible, and the Workspace would lose its relevance. For more information about this relationship, please refer to [Project](../1-project/1-overview.md) and [Stack](../2-stack/1-overview.md).
+
+Different Modules and KAMs have different names, fields, and corresponding formats and restrictions. When writing the configuration, check the corresponding Module's or KAM's description, and make sure all the requisite Modules and KAMs are correctly configured. Please refer to [Kusion Module](../3-module/1-overview.md) for more information. The example above gives a sample of the `mysql` Module.
+
+The `importedResources` block is designed to declare the import of existing cloud resources. It is a `map` that declares the mapping from the `id` of a resource in the Kusion `Spec` to the Terraform ID of the resource to be imported. Kusion will automatically synchronize the state of the existing cloud resource to the Kusion resource.
+
+### secretStore
+
+The `secretStore` field can be used to access sensitive data stored in a cloud-based secrets manager. More details can be found [here](../../5-user-guides/1-using-kusion-cli/4-secrets-management/1-using-cloud-secrets.md).
+
+### context
+
+The `context` field can be used to declare information such as the Kubernetes `kubeconfig` path or content, and the AK/SK of the Terraform providers. The configurable attributes are shown below.
+
+- `KUBECONFIG_PATH`: the local path of the `kubeConfig` file
+- `KUBECONFIG_CONTENT`: the content of the `kubeConfig` file, can be used with cloud-based secrets management (e.g. `ref://secrets-management-name/secrets-key-for-kubeconfig`)
+- `AWS_ACCESS_KEY_ID`: the access key ID of the AWS provider
+- `AWS_SECRET_ACCESS_KEY`: the secret key of the AWS provider
+- `ALICLOUD_ACCESS_KEY`: the access key ID of the Alicloud provider
+- `ALICLOUD_SECRET_KEY`: the secret key of the Alicloud provider
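+
+As a hedged illustration of the attributes above, a `context` block that passes the kubeconfig by content resolved from a cloud secrets manager (rather than by local path) might look like the following, reusing the `ref://` notation shown earlier; the secrets manager and key names are placeholders:
+
+```yaml
+context:
+  # kubeconfig passed by content, resolved from a cloud secrets manager
+  KUBECONFIG_CONTENT: ref://secrets-management-name/secrets-key-for-kubeconfig
+  ALICLOUD_ACCESS_KEY: ref://secrets-manager-name/key-for-ak
+  ALICLOUD_SECRET_KEY: ref://secrets-manager-name/key-for-sk
+```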
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/4-workspace/2-manage-workspaces-with-server.md b/docs_versioned_docs/version-v0.14/3-concepts/4-workspace/2-manage-workspaces-with-server.md
new file mode 100644
index 00000000..f478c7e7
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/4-workspace/2-manage-workspaces-with-server.md
@@ -0,0 +1,57 @@
+# Managing Workspace With Kusion Server
+
+When using Kusion server, workspaces are managed via the `/api/v1/workspaces` API. You can either access the APIs directly or, if you have the Dev Portal enabled (`--dev-portal-enabled`, defaults to true), manage workspaces via the portal. The workspace configurations are also stored in the backends when using Kusion server.
+
+:::tip
+There is no "default" workspace when using the Kusion server. A workspace needs to be specified every time an operation (generate/preview/apply/destroy) is triggered.
+:::
+
+## APIs
+
+The APIs to manage workspaces can be found in the swagger docs under `{server-endpoint}/docs/index.html#/workspace`.
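+
+As a sketch, assuming the list endpoint follows standard REST `GET` semantics (the exact response shape is documented in the swagger docs) and `KUSION_SERVER` is a placeholder for your server endpoint:
+
+```shell
+# list all workspaces known to the Kusion server
+curl "${KUSION_SERVER}/api/v1/workspaces"
+```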
+
+## Developer Portal
+
+### Creating Workspace
+
+Create a workspace by clicking on the `Create Workspace` button in the top right corner.
+
+
+
+Select the backend to store the workspace configurations.
+
+
+
+Workspaces are generally managed by platform engineers. We recommend organizing them by the following rules:
+
+- **SDLC phases**, such as `dev`, `pre`, `prod`;
+- **cloud vendors**, such as `aws`, `alicloud`;
+- combination of the two above, such as `dev-aws`, `prod-alicloud`.
+
+By design, Kusion does not support deploying a Stack to multiple clouds or regions within a single Workspace. While users can technically define a Module that provisions resources across multiple clouds or regions, Kusion does not recommend this practice and will not provide technical support for such configurations. If platform engineers need to manage resources across multiple clouds or regions, they should create separate Workspaces.
+
+### Listing Workspace
+
+List workspaces by clicking on the `Workspace` tab.
+
+### Showing Workspace
+
+Click on an individual workspace to display the [workspace configurations](./1-overview.md#workspace-configuration).
+
+
+
+### Updating Workspace
+
+Update workspaces by clicking on the edit button on each workspace card.
+
+
+
+### Deleting Workspace
+
+
+
+## Using Workspace
+
+Workspaces are used when creating [Runs](../9-runs.md). When creating a run of any type, a workspace must be selected to represent the target environment.
+
+
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/4-workspace/3-manage-workspaces-with-cli.md b/docs_versioned_docs/version-v0.14/3-concepts/4-workspace/3-manage-workspaces-with-cli.md
new file mode 100644
index 00000000..703b95c7
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/4-workspace/3-manage-workspaces-with-cli.md
@@ -0,0 +1,114 @@
+# Managing Workspace With CLI
+
+The subcommands of `kusion workspace` are used to manage Workspaces, including `create`, `show`, `list`, `switch`, `update` and `delete`. Because Workspace configurations are stored persistently, the current or a specified Backend will be used. For more information about Backends, please refer to [Backend](../7-backend/1-overview.md).
+
+Kusion automatically creates a `default` Workspace with an empty configuration in every Backend and sets it as the current one. When first using Kusion, or when no Workspace is configured, the `default` Workspace is used.
+
+## Creating Workspace
+
+Use `kusion workspace create ${name} -f ${configuration_file_path}` to create a new Workspace from a configuration in a YAML file. The Workspace is identified by `name`, which must not already exist, and the configuration must be a YAML file in the correct format.
+
+The command above creates the Workspace in the current Backend. To create a Workspace in another Backend, use the `--backend` flag. Workspace names must be unique within a Backend, but the same name may be used in different Backends.
+
+In some scenarios, a newly created Workspace is expected to become the current one. For convenience, the `--current` flag is provided to set the Workspace as current upon creation.
+
+Note that creating a `default` Workspace manually is not allowed, because it is created by Kusion automatically.
+
+Examples are shown below.
+
+```shell
+# create a workspace in the current backend
+kusion workspace create dev -f dev.yaml
+
+# create a workspace in the current backend and set it as current
+kusion workspace create dev -f dev.yaml --current
+
+# create a workspace in specified backend
+kusion workspace create dev -f dev.yaml --backend oss-pre
+```
+
+Which Workspaces to create is decided by the platform engineers. We recommend organizing them by the following rules:
+
+- **SDLC phases**, such as `dev`, `pre`, `prod`;
+- **cloud vendors**, such as `aws`, `alicloud`;
+- combination of the two above, such as `dev-aws`, `prod-alicloud`.
+
+By design, Kusion does not support deploying a Stack to multiple clouds or regions within a single Workspace. While users can technically define a Module that provisions resources across multiple clouds or regions, Kusion does not recommend this practice and will not provide technical support for such configurations. If platform engineers need to manage resources across multiple clouds or regions, they should create separate Workspaces.
+
+## Listing Workspace
+
+Use `kusion workspace list` to get all the workspace names.
+
+An example is shown below. For brevity, the following examples do not use a specified backend, which is supported via the `--backend` flag.
+
+```shell
+# list all the workspace names
+kusion workspace list
+```
+
+## Switching Workspace
+
+To avoid specifying the Workspace name for each Kusion operation command, `kusion workspace switch ${name}` is provided to switch the current Workspace. When executing `kusion generate`, the current Workspace is then used. The Workspace to switch to must already exist.
+
+An example is shown below.
+
+```shell
+# switch workspace
+kusion workspace switch dev
+```
+
+## Showing Workspace
+
+Use `kusion workspace show ${name}` to get a Workspace's configuration. If `name` is not specified, the configuration of the current Workspace is returned.
+
+Examples are shown below.
+
+```shell
+# show a specified workspace configuration
+kusion workspace show dev
+
+# show the current workspace configuration
+kusion workspace show
+```
+
+## Updating Workspace
+
+To update a Workspace, use `kusion workspace update ${name} -f ${configuration_file_path}` with the new configuration file. The complete updated configuration must be provided, and the Workspace must already exist. The recommended steps are to get the Workspace configuration first, revise it, and then execute the command. If `name` is not specified, the current Workspace is used.
+
+Updating the `default` Workspace is allowed, and the `--current` flag is also supported to set the updated Workspace as current.
+
+Examples are shown below.
+
+```shell
+# update a specified workspace
+kusion workspace update dev -f dev_new.yaml
+
+# update a specified workspace and set it as current
+kusion workspace update dev -f dev_new.yaml --current
+
+# update the current workspace
+kusion workspace update -f dev_new.yaml
+```
+
+## Deleting Workspace
+
+When a Workspace is no longer in use, use `kusion workspace delete ${name}` to delete it. If `name` is not specified, the current Workspace is deleted, and the `default` Workspace becomes the current one. Deleting the `default` Workspace is therefore not allowed.
+
+Examples are shown below.
+
+```shell
+# delete a specified workspace
+kusion workspace delete dev
+
+# delete the current workspace
+kusion workspace delete
+```
+
+## Using Workspace
+
+Workspaces are used in the command `kusion generate`. The following steps help smooth the operation process:
+
+1. Write the Workspace configuration file in the format shown above, and fill in all the necessary fields.
+2. Create the Workspace with `kusion workspace create` so that Kusion becomes aware of it. The `--current` flag can be used to set it as the current Workspace.
+3. Execute `kusion generate` in a Stack to generate the whole Spec; the AppConfiguration and Workspace configuration are rendered automatically and can then be applied to the real infrastructure. To use a specific Workspace or Backend, the `--workspace` and `--backend` flags are available.
+4. If the Workspace needs to be updated, deleted, switched, etc., use the commands described above.
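+
+Assuming a workspace configuration file named `dev.yaml` (a hypothetical name for illustration), the steps above can be sketched as:
+
+```shell
+# 1-2. create the workspace from its configuration file and make it current
+kusion workspace create dev -f dev.yaml --current
+
+# 3. from within a stack directory, render the Spec against that workspace
+kusion generate --workspace dev
+
+# 4. later lifecycle operations reuse the same subcommands
+kusion workspace update dev -f dev_new.yaml
+kusion workspace delete dev
+```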
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/4-workspace/_category_.json b/docs_versioned_docs/version-v0.14/3-concepts/4-workspace/_category_.json
new file mode 100644
index 00000000..c79291d2
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/4-workspace/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Workspaces"
+}
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/5-appconfigurations.md b/docs_versioned_docs/version-v0.14/3-concepts/5-appconfigurations.md
new file mode 100644
index 00000000..a24b1ed0
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/5-appconfigurations.md
@@ -0,0 +1,33 @@
+# AppConfigurations
+
+As a modern cloud-native application delivery toolchain, declarative intent-based actuation is the central idea of Kusion, and the `AppConfiguration` model plays the role of describing that intent. It provides a simpler path for on-boarding developers to the platform without leaking low-level details of the runtime infrastructure, allowing developers to fully focus on the application logic itself.
+
+The `AppConfiguration` model consolidates the workload and its dependent accessories for the application deployment, along with any pipeline and operational requirements, into one standardized, infrastructure-independent declarative specification. This specification represents the intuitive user intent for the application, which drives a standardized and efficient application delivery and operation process in a hybrid environment.
+
+
+
+AppConfiguration consists of four core concepts, namely `Workload`, `Accessory`, `Pipeline`, and `Dependency`. Each of them represents a [Kusion module](./3-module/1-overview.md). We will walk through these concepts one by one.
+
+#### Workload
+
+Workload is a representation of the business logic that runs in the cluster. Common workload types include long-running services that should “never” go down and batch jobs that take from a few seconds to a few days to complete.
+
+In most cases, a Workload is a backend service or the frontend of an Application. For example, in a micro-service architecture, each service would be represented by a distinct Workload. This allows developers to manage and deploy their code in a more organized and efficient manner.
+
+#### Accessory
+
+Using the analogy of a car, the workload is the core engine of the application, but the engine alone isn't enough for the application to function properly. In most cases, there must be other supporting parts for the workload to operate as intended. We call these supporting parts Accessories. An Accessory refers to a runtime capability or operational requirement provided by the underlying infrastructure, such as a database, a network load-balancer, storage and so on.
+
+From the perspective of team collaboration, the platform team should be responsible for creating and maintaining various accessory definitions, providing reusable building blocks out of the box. Application developers just need to leverage the existing accessories to cover evolving application needs. This helps software organizations achieve separation of concerns so that different roles can focus on the subject matter they are expert in.
+
+#### Pipeline
+
+Running reliable applications requires reliable delivery pipelines. By default, Kusion provides a relatively fixed built-in application delivery pipeline, which should be sufficient for most use cases. However, as the application scale and complexity grow, so does the need for a customizable delivery pipeline. Developers wish for more fine-tuned control and customization over the workflow to deliver their applications. That’s why we introduced the Pipeline section in AppConfiguration model.
+
+A customized delivery pipeline is made of several steps, each corresponding to an operation that needs to be executed, such as running certain tests after a deployment, scanning artifacts for vulnerabilities prior to deployment, and so on. Implementation-wise, the execution of each step should be carried out in the form of a plugin, developed and managed by the platform owners.
+
+#### Dependency
+
+Application dependencies refer to the external services or other software that an application relies on to function properly. These dependencies may be required to provide certain functionality or to use certain features in the application.
+
+Similar to declaring a dependency from an application to an accessory, AppConfiguration lets you declare the dependencies between different applications in the same way.
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/6-specs.md b/docs_versioned_docs/version-v0.14/3-concepts/6-specs.md
new file mode 100644
index 00000000..79f61fd9
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/6-specs.md
@@ -0,0 +1,130 @@
+# Specs
+
+The spec is a system-generated, immutable, declarative representation of the resources involved in a particular deployment. As opposed to the static configurations stored in a stack folder in a git repository, which may or may not be scoped to a given deploy target, a spec is dynamically rendered from the aggregated intents from multiple sources, including those that are target-specific and those that aren't (e.g. global configs, or constraints posed by security and compliance, such as what kind of application may have Internet access). Specs are resource-facing desired states and are always rendered on the spot based on all the relevant inputs.
+
+Specs are designed to be **THE** intermediate data layer between configuration code and actual resources. The Spec is a structured data format that is both machine-friendly (so that proper libraries can process and actualize it) and human-friendly (so that it provides a readable reference to the resource perspective of an application).
+
+## Generators
+
+The rendering logic that transforms the static configuration into the **Spec** is carried out by "Generators", which are pieces of code written and distributed in Go. Generators are in charge of converting configuration code written in KCL into resource specifications in the Spec. They are packaged and wrapped inside a gRPC server whose lifecycle is dynamically managed as an individual go-plugin.
+
+## Runtimes
+
+In this workflow, the component that processes the resources in the Spec is called a Runtime. Runtimes are in charge of bridging the resource specification to the actual infrastructure API. For Kubernetes resources, the runtime uses client-go to connect to the clusters. For cloud resources, IaC tools like Terraform/Crossplane and their providers are used to connect to the cloud control APIs. Runtimes are also extensible.
+
+
+
+## Purpose of Spec
+
+### Single Source of Truth
+
+In Kusion's workflow, the platform engineer builds Kusion modules and provides environment configurations, while application developers choose the Kusion modules they need and deploy their operational intentions to an environment along with the related environment configurations. Developers can also provide dynamic parameters, such as the container image, when executing the `kusion generate` command. The final operational intentions therefore include the configurations written by application developers, the environment configurations, and the dynamic inputs. For this reason, we introduce the **Spec** to represent the Single Source of Truth (SSoT) of Kusion. It is the result of `kusion generate` and contains all operational intentions from different sources.
+
+### Consistency
+
+Delivering an application to different environments with identical configurations is a common practice, especially for applications that require scalable distribution. In such cases, an immutable configuration package is helpful. By utilizing the Spec, all configurations and changes are stored in a single file. As the Spec is the input of Kusion, it ensures consistency across different environments whenever you execute Kusion with the same Spec file.
+
+### Rollback and Disaster Recovery
+
+The ability to roll back is crucial in reducing incident duration. Rolling back the system to a previously validated version is much faster compared to attempting to fix it during an outage. We regard a validated Spec as a snapshot of the system and recommend storing the Spec in a version control system like Git. This enables better change management practices and makes it simpler to roll back to previous versions if needed. In case of a failure or outage, having a validated Spec simplifies the rollback process, ensuring that the system can be quickly recovered.
+
+## Example
+
+The API definition of the `Spec` structure in Kusion can be found [here](https://github.com/KusionStack/kusion/blob/main/pkg/apis/api.kusion.io/v1/types.go#L862). Below is an example `Spec` file generated from the `quickstart` demo application (more details can be found [here](../2-getting-started/2-getting-started-with-kusion-cli/1-deliver-quickstart.md)).
+
+```yaml
+resources:
+ - id: v1:Namespace:quickstart
+ type: Kubernetes
+ attributes:
+ apiVersion: v1
+ kind: Namespace
+ metadata:
+ creationTimestamp: null
+ name: quickstart
+ spec: {}
+ status: {}
+ extensions:
+ GVK: /v1, Kind=Namespace
+ - id: apps/v1:Deployment:quickstart:quickstart-default-quickstart
+ type: Kubernetes
+ attributes:
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ creationTimestamp: null
+ labels:
+ app.kubernetes.io/name: quickstart
+ app.kubernetes.io/part-of: quickstart
+ name: quickstart-default-quickstart
+ namespace: quickstart
+ spec:
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: quickstart
+ app.kubernetes.io/part-of: quickstart
+ strategy: {}
+ template:
+ metadata:
+ creationTimestamp: null
+ labels:
+ app.kubernetes.io/name: quickstart
+ app.kubernetes.io/part-of: quickstart
+ spec:
+ containers:
+ - image: kusionstack/kusion-quickstart:latest
+ name: quickstart
+ resources: {}
+ status: {}
+ dependsOn:
+ - v1:Namespace:quickstart
+ - v1:Service:quickstart:quickstart-default-quickstart-private
+ extensions:
+ GVK: apps/v1, Kind=Deployment
+ - id: v1:Service:quickstart:quickstart-default-quickstart-private
+ type: Kubernetes
+ attributes:
+ apiVersion: v1
+ kind: Service
+ metadata:
+ creationTimestamp: null
+ labels:
+ app.kubernetes.io/name: quickstart
+ app.kubernetes.io/part-of: quickstart
+ name: quickstart-default-quickstart-private
+ namespace: quickstart
+ spec:
+ ports:
+ - name: quickstart-default-quickstart-private-8080-tcp
+ port: 8080
+ protocol: TCP
+ targetPort: 8080
+ selector:
+ app.kubernetes.io/name: quickstart
+ app.kubernetes.io/part-of: quickstart
+ type: ClusterIP
+ status:
+ loadBalancer: {}
+ dependsOn:
+ - v1:Namespace:quickstart
+ extensions:
+ GVK: /v1, Kind=Service
+secretStore: null
+context: {}
+```
+
+From the example above, we can see that the `Spec` contains a list of `resources` required by the application.
+
+A `resource` is a concept in Kusion that abstracts infrastructure. It represents an individual unit of infrastructure or application component managed by Kusion, serving as a fundamental building block for defining the desired state of the infrastructure. Resources provide a unified way to define various types of resources, including Kubernetes objects and Terraform resources. Each `resource` in the `Spec` has `id`, `type`, `attributes`, `dependsOn`, and `extensions` fields:
+
+- `id` is the unique key of this resource. An idiomatic way for `Kubernetes` resources is `apiVersion:kind:namespace:name`, and for `Terraform` resources is `providerNamespace:providerName:resourceType:resourceName`.
+- `type` represents the type of runtime Kusion supports, and currently includes `Kubernetes` and `Terraform`.
+- `attributes` represents all specified attributes of this resource, basically the manifest and argument attributes for the `Kubernetes` and `Terraform` resources.
+- `dependsOn` contains all the other resources the resource depends on.
+- `extensions` specifies the arbitrary metadata of the resource, where you can declare information such as Kubernetes GVK, Terraform provider, and imported resource id, etc.
+
+Besides the `resources`, the Spec also records the `secretStore` and `context` fields from the corresponding workspace. The former can be used to access sensitive data stored in an external secrets manager, while the latter can be used to declare workspace-level configurations such as the Kubernetes `kubeconfig` file path or content, and the Terraform providers' AK/SK. More information can be found [here](./4-workspace/1-overview.md#secretstore).
+
+## Apply with Spec File
+
+When using the CLI, Kusion supports using the Spec file directly as input. Users can place the Spec file in the stack directory and execute `kusion preview --spec-file spec.yaml` and `kusion apply --spec-file spec.yaml` to preview and apply the resources.
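+
+For example, assuming the Spec above has been saved as `spec.yaml` in the stack directory:
+
+```shell
+# preview the changes described by the spec file
+kusion preview --spec-file spec.yaml
+
+# apply the resources described by the spec file
+kusion apply --spec-file spec.yaml
+```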
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/7-backend/1-overview.md b/docs_versioned_docs/version-v0.14/3-concepts/7-backend/1-overview.md
new file mode 100644
index 00000000..b51b78a1
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/7-backend/1-overview.md
@@ -0,0 +1,118 @@
+# Overview
+
+Backend is Kusion's storage layer, which defines where Workspace and Release related files are stored. By default, Kusion uses the `local` backend type to store them on the local disk. For team collaboration, Workspaces and Releases can be stored in a remote backend, such as an AliCloud OSS bucket or an AWS S3 bucket, to enable simultaneous access by multiple users.
+
+## Available Backend Types
+
+There are four available backend types: `local`, `oss`, `s3` and `google`.
+
+### local
+
+The `local` backend type uses the local file system as storage, which is suitable for local operations but not ideal for multi-user collaboration. The supported config items are as below.
+
+- **path**: `type string`, `optional`, specifies the directory to store the Workspace and Release files. The subdirectories `workspaces` and `releases` are used to store the corresponding files separately. It's recommended to use an empty or Kusion-exclusive directory as the local backend path. If not set, the default path `${KUSION_HOME}` is used.
+
+The complete `local` backend configuration is shown below.
+
+```yaml
+{
+ "type": "local",
+ "configs": {
+ "path": "${local_path}" # type string, optional, the directory to store files.
+ }
+}
+```
+
+### oss
+
+The `oss` type backend uses the Alicloud Object Storage Service (OSS) as storage. The supported config items are as below.
+
+- **endpoint**: `type string`, `required`, specifies the access endpoint of the Alicloud OSS bucket.
+- **accessKeyID**: `type string`, `required`, specifies the Alicloud account accessKeyID; it can also be declared via the environment variable `OSS_ACCESS_KEY_ID`.
+- **accessKeySecret**: `type string`, `required`, specifies the Alicloud account accessKeySecret; it can also be declared via the environment variable `OSS_ACCESS_KEY_SECRET`.
+- **bucket**: `type string`, `required`, specifies the name of the Alicloud OSS bucket.
+- **prefix**: `type string`, `optional`, forms the prefix under which the Workspace and Release files are stored, i.e. `${prefix}/workspaces` and `${prefix}/releases` respectively. Using a prefix creates a "dedicated space" for the Kusion data, which is beneficial for the management and reuse of the bucket. If not set, there is no prefix and the files are stored at the root of the bucket, by analogy to a file system.
+
+Note that `accessKeyID` and `accessKeySecret` are required in the effective configuration, which combines the configuration managed by the `kusion config` command with the environment variables. For `kusion config` alone, they are not obligatory. For safety reasons, using environment variables is the recommended way to set them.
+
+The complete `oss` backend configuration is shown below.
+
+```yaml
+{
+ "type": "oss",
+ "configs": {
+ "endpoint": "${oss_endpoint}", # type string, required, the oss endpoint.
+ "accessKeyID": "${oss_access_key_id}", # type string, optional for the command "kusion config", the oss access key id.
+ "accessKeySecret": "${oss_access_key_secret}", # type string, optional for the command "kusion config", the oss access key secret.
+ "bucket": "${oss_bucket}", # type string, required, the oss bucket.
+ "prefix": "${oss_prefix}" # type string, optional, the prefix to store the files.
+ }
+}
+```
+
+The supported environment variables are as below.
+
+```bash
+export OSS_ACCESS_KEY_ID="${oss_access_key_id}" # configure accessKeyID
+export OSS_ACCESS_KEY_SECRET="${oss_access_key_secret}" # configure accessKeySecret
+```
+
+### s3
+
+The `s3` type backend uses the AWS Simple Storage Service (S3) as storage. The supported config items are as below.
+
+- **region**: `type string`, `required`, specify the region of aws s3 bucket, support declaring by environment variable `AWS_DEFAULT_REGION` or `AWS_REGION`, where the latter has higher priority.
+- **endpoint**: `type string`, `optional`, specify the access endpoint for aws s3 bucket.
+- **accessKeyID**: `type string`, `required`, specify the aws account accessKeyID, support declaring by environment variable `AWS_ACCESS_KEY_ID`.
+- **accessKeySecret**: `type string`, `required`, specify the aws account accessKeySecret, support declaring by environment variable `AWS_SECRET_ACCESS_KEY`.
+- **bucket**: `type string`, `required`, specify the name of the aws s3 bucket.
+- **prefix**: `type string`, `optional`, constitute the prefix to store the Workspace and Release files, whose prefixes are `${prefix}/workspaces` and `${prefix}/releases` respectively.
+
+Note that `region`, `accessKeyID` and `accessKeySecret` are required in the final configuration, but optional for the `kusion config` command alone, as they can be supplied by environment variables.
+
+The whole s3 type backend configuration is as below.
+
+```yaml
+{
+ "type": "s3",
+ "configs": {
+ "region": "${s3_region}", # type string, optional for the command "kusion config", the aws region.
+ "endpoint": "${s3_endpoint}", # type string, optional, the aws endpoint.
+ "accessKeyID": "${s3_access_key_id}", # type string, optional for the command "kusion config", the aws access key id.
+ "accessKeySecret": "${s3_access_key_secret}", # type string, optional for the command "kusion config", the aws access key secret.
+ "bucket": "${s3_bucket}", # type string, required, the s3 bucket.
+ "prefix": "${s3_prefix}" # type string, optional, the prefix to store the files.
+ }
+}
+```
+
+The supported environment variables are as below.
+
+```bash
+export AWS_DEFAULT_REGION="${s3_region}" # configure region, lower priority than AWS_REGION
+export AWS_REGION="${s3_region}" # configure region, higher priority than AWS_DEFAULT_REGION
+export AWS_ACCESS_KEY_ID="${s3_access_key_id}" # configure accessKeyID
+export AWS_SECRET_ACCESS_KEY="${s3_access_key_secret}" # configure accessKeySecret
+```
+
+### google cloud storage
+
+The `google` type backend uses the Google Cloud Storage as storage. The supported config items are as below.
+
+- **bucket**: `type string`, `required`, specify the name of the google cloud storage bucket.
+- **credentials**: `type string`, `required`, specify the content of the `credentials.json` file required to access the aforementioned bucket.
+
+The whole google type backend configuration is as below.
+
+```yaml
+{
+    "type": "google",
+    "configs": {
+        "bucket": "kusion-kcp-gcp-backend",
+        "credentials": "${content-from-credentials.json}"
+    }
+}
+```
+
+The supported environment variables are as below.
+
+```bash
+export GOOGLE_CLOUD_CREDENTIALS="${content-from-credentials.json}" # configure credentials.json
+```
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/7-backend/2-managing-backends-with-server.md b/docs_versioned_docs/version-v0.14/3-concepts/7-backend/2-managing-backends-with-server.md
new file mode 100644
index 00000000..f7064e00
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/7-backend/2-managing-backends-with-server.md
@@ -0,0 +1,43 @@
+# Managing backends with Kusion Server
+
+When using Kusion Server, backends are managed via the `/api/v1/backends` API. You can either access the APIs directly, or, if you have the Dev Portal enabled (`--dev-portal-enabled`, defaults to true), manage backends via the portal. The backend configurations are also stored in the database when using Kusion Server.
+
+## APIs
+
+The APIs to manage backends can be found in the swagger docs under `{server-endpoint}/docs/index.html#/backend`.
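+
+As a sketch, listing the configured backends over the API might look like the following (the `SERVER_ENDPOINT` value and any required auth headers are deployment-specific assumptions):
+
+```shell
+# list all backends managed by the server; SERVER_ENDPOINT is a placeholder
+curl -s "${SERVER_ENDPOINT}/api/v1/backends"
+```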
+
+## Developer Portal
+
+### Creating Backend
+
+Create a backend by clicking on the `Create Backend` button in the top right corner.
+
+
+
+Backends are generally managed by platform engineers.
+
+### Listing Backend
+
+List backends by clicking on the `Backend` tab.
+
+### Showing Backend
+
+Click on an individual backend to display its configurations.
+
+
+
+### Updating Backend
+
+Update backends by clicking on the edit button on each backend.
+
+
+
+### Deleting Backend
+
+
+
+## Using Backend
+
+Backends are used when creating [Workspaces](../4-workspace/1-overview.md). When creating a workspace, a backend must be selected to represent the storage that keeps the workspace-related configurations.
+
+
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/7-backend/3-managing-backends-with-cli.md b/docs_versioned_docs/version-v0.14/3-concepts/7-backend/3-managing-backends-with-cli.md
new file mode 100644
index 00000000..36687c1e
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/7-backend/3-managing-backends-with-cli.md
@@ -0,0 +1,127 @@
+# Managing backends with Kusion CLI
+
+The command `kusion config` is used to manage backend configurations. Both configuring a whole backend and configuring an individual config item are supported. For sensitive data, environment variables are supported and take higher priority.
+
+Furthermore, Kusion CLI supports setting a current backend, which saves you the trouble of specifying a backend when executing operation commands and managing `workspace`.
+
+## Setting a Backend
+
+When there is a new backend or an existing backend configuration needs updating, use the command `kusion config set ${key} ${value}` to set a backend. A backend is identified by a unique name, and its whole configuration is made up of the backend type and its corresponding config items.
+
+Be careful not to confuse a backend with a backend type. For example, if a backend named `s3_prod` uses `s3` as its storage, then `s3_prod` is the backend, while `s3` is the backend type.
+
+There are four configuration modes:
+
+- setting a whole backend
+- setting a backend type
+- setting a whole set of backend config items
+- setting a backend config item
+
+A unique backend name is required to do the configuration. Take the `s3` type backend named `s3_prod` as an example to explain how these modes work.
+
+### Setting a Whole Backend
+
+The key to configure a whole backend is `backends.${name}`, whose value must be the JSON marshal result in a format determined by the backend type. Enclosing the value in single quotation marks is a good choice to keep the format correct.
+
+```shell
+# set a whole backend
+kusion config set backends.s3_prod '{"type":"s3","configs":{"bucket":"kusion"}}'
+```
+
+### Setting a Backend Type
+
+The key to set a backend type is `backends.${name}.type`, whose value must be `local`, `oss`, `s3` or `google`.
+
+```shell
+# set a backend type
+kusion config set backends.s3_prod.type s3
+```
+
+### Setting a Whole Set of Backend Config Items
+
+The key to set a whole set of backend config items is `backends.${name}.configs`, whose value must be the JSON marshal result in a format determined by the backend type. The backend config must be set after the backend type, and must correspond to that type.
+
+```shell
+# set a whole backend config
+kusion config set backends.s3_prod.configs '{"bucket":"kusion"}'
+```
+
+### Setting a Backend Config Item
+
+The key to set a backend config item is `backends.${name}.configs.${item}`. The item name and value type both depend on the backend type. Like the whole backend config, the config item must be valid and set after the backend type.
+
+```shell
+# set a backend config item
+kusion config set backends.s3_prod.configs.bucket kusion
+```
+
+When executing `kusion config set`, the configuration is stored in a local file. For security reasons, environment variables are supported for some config items, such as `password`, `accessKeyID` and `accessKeySecret`. Using environment variables rather than `kusion config set` for sensitive data is the best practice. If both are configured, the environment variables take higher priority. For details about the supported environment variables, please see above.
+
+Kusion has a default backend named exactly `default`, with the `local` type and the path `$KUSION_HOME`. The `default` backend cannot be modified; that is, setting or unsetting it is not allowed. Besides, the keyword `current` is also used by Kusion itself, so please do not use it as a backend name.
+
+## Unsetting a Backend
+
+When a backend is no longer in use, or its configuration is out of date, use the command `kusion config unset ${key}` to unset a backend or a specified config item. As with setting, there are four modes of unsetting:
+
+- unsetting a whole backend
+- unsetting a backend type
+- unsetting a whole set of backend config items
+- unsetting a backend config item
+
+When unsetting a whole backend, the backend must not be the current backend. When unsetting a backend type, the config items must be empty and the backend must not be the current one.
+
+Unsetting the `default` backend is forbidden.
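+
+Taking the `s3_prod` backend from the setting examples above, the four unsetting modes mirror the setting modes:
+
+```shell
+# unset a backend config item
+kusion config unset backends.s3_prod.configs.bucket
+
+# unset the whole set of backend config items
+kusion config unset backends.s3_prod.configs
+
+# unset the backend type (the config items must be empty first)
+kusion config unset backends.s3_prod.type
+
+# unset the whole backend (it must not be the current backend)
+kusion config unset backends.s3_prod
+```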
+
+## Setting the Current Backend
+
+To avoid specifying a backend for every operation command, Kusion provides the mechanism of setting a current backend, which is then used by default. This is very useful when you execute a series of Kusion operation commands, as they usually use the same backend.
+
+Use the command `kusion config set backends.current ${name}` to set the current backend, where `name` must be a backend that has already been set.
+
+```shell
+# set the current backend
+kusion config set backends.current s3_prod
+```
+
+Setting the current backend to `default` is legal. In fact, if there is no backend-related configuration, the current backend is the `default` backend.
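+
+To check which backend is currently in effect:
+
+```shell
+# show the current backend
+kusion config get backends.current
+```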
+
+## Getting Backend Configuration
+
+Use the command `kusion config get ${key}` to get a whole backend configuration or a specified backend config item. The `key` is the same as in the setting and unsetting operations; the whole list can be found in the [Configuration](../11-cli-configuration.md) reference.
+
+```shell
+# get a whole backend
+kusion config get backends.s3_prod
+
+# get a specified config item
+kusion config get backends.s3_prod.configs.bucket
+```
+
+Besides, the command `kusion config list` can also be used, which returns the whole Kusion configuration, including the backend configuration.
+
+```shell
+# get the whole Kusion configuration
+kusion config list
+```
+
+## Using Backend
+
+The backend is used to store Workspace and Release files. Thus, the following commands use the backend:
+
+- subcommands of `kusion workspace`: use the backend to store the Workspace;
+- `kusion apply`, `kusion destroy`: use the backend to store the Release.
+
+All the commands above provide the flag `--backend` to specify the backend; otherwise the current backend is used. When using a backend, you usually need to specify the sensitive data by environment variables. An example is shown below.
+
+```shell
+# set environment variables of sensitive and other necessary data
+export AWS_REGION="${s3_region}"
+export AWS_ACCESS_KEY_ID="${s3_access_key_id}"
+export AWS_SECRET_ACCESS_KEY="${s3_access_key_secret}"
+
+# use current backend
+kusion apply
+
+# use a specified backend
+kusion apply --backend s3_prod
+```
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/7-backend/_category_.json b/docs_versioned_docs/version-v0.14/3-concepts/7-backend/_category_.json
new file mode 100644
index 00000000..c24bf98e
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/7-backend/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Backends"
+}
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/8-release.md b/docs_versioned_docs/version-v0.14/3-concepts/8-release.md
new file mode 100644
index 00000000..6f1096b1
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/8-release.md
@@ -0,0 +1,28 @@
+---
+id: release
+sidebar_label: Releases
+---
+
+# Release
+
+Release is used to indicate a single operation, triggered only by `apply` and `destroy`, providing users with a more coherent and consistent operation experience with Kusion. Release also provides audit and rollback capabilities, which are currently under development.
+
+Every time an `apply` or `destroy` operation is executed, it will trigger the generation of a `release` file. The combination of a `project` and `workspace` corresponds to a set of `release` files, which also relates to a set of the real application resources. The `release` file is stored in the same `backend` as the `workspace`, and the default path is `releases/$PROJECT_NAME/$WORKSPACE_NAME`, whose revision starts from 1 and increments.
+
+:::tip
+For kusion server, the default release path is `releases/server/$SOURCE_NAME/$PROJECT_NAME/$WORKSPACE_NAME`
+:::
+
+The release file contains the [Spec](./6-specs.md) and [State](./8-release.md#state) of an application, both of which are composed of `Resources`, representing the expected description from the configuration code and the actual state of the resources respectively. In addition, the release file also contains the information of creation and modification time, operation phase, and application metadata, etc.
+
+## State
+
+State is a record of an operation's result. It is a mapping between `resources` managed by `Kusion` and the actual infra resources. State is often used as a data source for three-way merge/diff in operations like `Apply` and `Preview`.
+
+A `resource` here represents an individual unit of infrastructure or application component, serving as a fundamental building block for defining and managing the actual state of your `project`. These `resources` are defined within the `State` and accurately reflect the actual states of the infrastructure. By providing a unified and consistent approach, `Kusion` enables seamless management of diverse resource types, encompassing Kubernetes objects and Terraform resources. Importantly, the structure of these resources in the `State` mirrors that of the `resources` in the `Spec`, ensuring coherence and facilitating efficient state management throughout the lifecycle of your `project`.
+
+State can be stored in many storage [backend](./7-backend/1-overview.md) mediums like filesystems, S3, and OSS, etc.
+
+## Concurrency Control
+
+Release supports collaboration among multiple users and implements concurrency control through the operation `phase`. When the `phase` field in the release file is not `succeeded` or `failed`, Kusion will not execute `apply` or `destroy` operations on the corresponding stack. For example, if a user unexpectedly exits during the `apply` or `destroy` process, the `phase` of the release file may be left as `applying` or `destroying`. In this case, the user can use the command `kusion release unlock` to unlock the release file for a specified application and workspace, setting the `phase` to `failed`.
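+
+A minimal sketch of recovering from a stuck release, run from the stack directory of the affected application:
+
+```shell
+# unlock the release file so that apply/destroy can run again;
+# this sets the phase of the latest release to "failed"
+kusion release unlock
+```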
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/9-runs.md b/docs_versioned_docs/version-v0.14/3-concepts/9-runs.md
new file mode 100644
index 00000000..63194132
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/9-runs.md
@@ -0,0 +1,50 @@
+# Runs
+
+:::tip
+Run is a concept only applicable in Kusion server.
+:::
+
+A `Run` represents an operation performed on a given [Stack](./2-stack/1-overview.md) to a target [workspace](./4-workspace/1-overview.md).
+
+## APIs
+
+The APIs to manage runs can be found in the swagger docs under `{server-endpoint}/docs/index.html#/run`.
+
+:::tip
+Please note that the runs APIs are asynchronous. An external system interacting with the runs API is expected to query the status of runs on a polling basis.
+:::
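+
+For illustration, such a polling loop might be sketched as follows (the run ID, the response shape and the status values are assumptions, not the documented API contract — consult the swagger docs above for the real fields):
+
+```shell
+# poll a run until it reaches a terminal state; endpoint and fields are illustrative
+RUN_ID=42
+until curl -s "${SERVER_ENDPOINT}/api/v1/runs/${RUN_ID}" | grep -qE '"status": *"(Succeeded|Failed)"'; do
+  sleep 5
+done
+```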
+
+## Types of Runs
+
+There are 4 types of runs:
+
+- Generate: Generate the resource-facing desired state, or [Spec](./6-specs.md) for the given stack and workspace
+- Preview: Preview the resource changes for the given stack and workspace based on the difference between desired state and current state
+- Apply: Apply the desired state to the target workspace
+- Destroy: Destroy the resources for the given stack in the target workspace
+
+## Run History
+
+- Run history is persisted in Kusion Server. To check all runs for a given stack, click on the `Project` tab, select a project, click on the stack tab, then select the `Runs` tab to locate all runs for this stack.
+
+
+
+## Run Results and Logs
+
+- Each persisted run also includes the run result and its corresponding logs.
+
+The run result may differ based on the different types of runs:
+
+- `Generate` runs yield a `Spec` in the YAML format on success.
+
+
+- `Preview` runs yield structured JSON that represents the resource changes calculated based on the desired state and current state. In the case of the developer portal, this is visualized with a left-right comparison.
+
+
+- `Apply` and `Destroy` runs produce a message that says "apply/destroy completed".
+
+The run logs are used for troubleshooting in case of any errors during the run.
+
+## Exception Handling
+
+Kusion workflow includes the execution of out-of-tree go-plugins in the form of [Kusion Module](../3-concepts/3-module/1-overview.md) Generators. If any of the plugin code causes a panic, the stack trace is expected to be printed in the run log for closer examination.
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/3-concepts/_category_.json b/docs_versioned_docs/version-v0.14/3-concepts/_category_.json
new file mode 100644
index 00000000..bccddbf1
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/3-concepts/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Concepts"
+}
diff --git a/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/1-overview.md b/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/1-overview.md
new file mode 100644
index 00000000..3a4916a8
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/1-overview.md
@@ -0,0 +1,223 @@
+---
+id: overview
+---
+
+# Configuration File Overview
+
+Kusion consumes one or more declarative configuration files (written in KCL) that describe the application, and delivers intent to the target runtime including Kubernetes, clouds, or on-prem infrastructure.
+
+This documentation series walks you through the odds and ends of managing such configuration files.
+
+## Table of Content
+
+- [Configuration File Overview](#configuration-file-overview)
+ - [Table of Content](#table-of-content)
+ - [Directory Structure](#directory-structure)
+ - [AppConfiguration Model](#appconfiguration-model)
+ - [Authoring Configuration Files](#authoring-configuration-files)
+ - [Identifying KCL file](#identifying-kcl-file)
+ - [KCL Schemas and KAM](#kcl-schemas-and-kam)
+ - [Kusion Modules](#kusion-modules)
+ - [Import Statements](#import-statements)
+ - [Understanding kcl.mod](#understanding-kclmod)
+ - [Building Blocks](#building-blocks)
+ - [Instantiating an application](#instantiating-an-application)
+ - [Using `kusion init`](#using-kusion-init)
+ - [Using references](#using-references)
+
+## Directory Structure
+
+Kusion expects the configuration file to be placed in a certain directory structure because it might need some metadata (that is not stored in the application configuration itself) in order to proceed.
+
+:::info
+
+See [Project](../concepts/project/overview) and [Stack](../concepts/stack/overview) for more details about Project and Stack.
+:::
+
+A sample multi-stack directory structure looks like the following:
+```
+~/playground$ tree multi-stack-project/
+multi-stack-project/
+├── README.md
+├── base
+│ └── base.k
+├── dev
+│ ├── kcl.mod
+│ ├── main.k
+│ └── stack.yaml
+├── prod
+│ ├── kcl.mod
+│ ├── main.k
+│ └── stack.yaml
+└── project.yaml
+```
+
+In general, the directory structure follows a hierarchy where the top-level is the project configurations, and the sub-directories represent stack-level configurations.
+
+You may notice there is a `base` directory besides all the stacks. The `base` directory is not mandatory, but rather a place to store common configurations between different stacks. A common pattern we observed is to use stacks to represent different stages (dev, stage, prod, etc.) in the software development lifecycle, and/or different deployment targets (azure-eastus, aws-us-east-1, etc). A project can have as many stacks as needed.
+
+In practice, the applications deployed into dev and prod might very likely end up with a similar set of configurations except a few fields such as the application image (dev might be on newer versions), resource requirements (prod might require more resources), etc.
+
+As a general best practice, we recommend managing the common configurations in `base.k` as much as possible to minimize duplicate code. We will cover how override works in [Base and Override](base-override).
+
+## AppConfiguration Model
+
+`AppConfiguration` is the out-of-the-box model we build that describes an application. It serves as the declarative intent for a given application.
+
+The schema for `AppConfiguration` is defined in the [KusionStack/kam](https://github.com/KusionStack/kam/blob/main/v1/app_configuration.k) repository. It is designed as a unified, application-centric model that encapsulates the comprehensive configuration details and in the meantime, hides the complexity of the infrastructure as much as possible.
+
+`AppConfiguration` consists of multiple sub-components that each represent either the application workload itself, its dependencies (in the form of [Kusion Modules](../concepts/module/overview)), relevant workflows or operational expectations. We will deep dive into the details on how to author each of these elements in this upcoming documentation series.
+
+For more details on the `AppConfiguration`, please refer to the [design documentation](../concepts/appconfigurations).
+
+## Authoring Configuration Files
+
+[KCL](https://kcl-lang.io/) is the choice of configuration language consumed by Kusion. KCL is an open-source constraint-based record and functional language. KCL works well with a large number of complex configurations via modern programming language technology and practice, and is committed to provide better modularity, scalability, stability and extensibility.
+
+### Identifying KCL file
+
+KCL files are identified with `.k` suffix in the filename.
+
+### KCL Schemas and KAM
+
+Similar to most modern General Programming Languages (GPLs), KCL provides packages that are used to organize collections of related KCL source files into modular and re-usable units.
+
+In the context of Kusion, we abstracted a core set of KCL Schemas (such as the aforementioned `AppConfiguration`, `Workload`, `Container`, etc.) that represent the concepts we believe are relatively universal and developer-friendly, also known as the [Kusion Application Model](https://github.com/KusionStack/kam), or KAM.
+
+### Kusion Modules
+
+To extend the capabilities beyond the core KAM model, we use a concept known as [Kusion Modules](../concepts/module/overview) to define components that could best abstract the capabilities during an application delivery. We provide a collection of official out-of-the-box Kusion Modules that represents the most common capabilities. They are maintained in [KusionStack's GitHub container registry](https://github.com/orgs/KusionStack/packages). When authoring an application configuration file, you can simply declare said Kusion Modules as dependencies and import them to declare ship-time capabilities that the application requires.
+
+If the modules in the KusionStack container registry do not meet the needs of your applications, Kusion provides the necessary mechanisms to extend with custom-built Kusion Modules. You can always create and publish your own module, then import the new module in your application configuration written in KCL.
+
+For the steps to develop your own module, please refer to the Module developer guide.
+
+### Import Statements
+
+An example of the import looks like the following:
+```
+### import from the official kam package
+import kam.v1.app_configuration as ac
+
+### import kusion modules
+import service
+import service.container as c
+import monitoring as m
+import network as n
+```
+
+Take `import kam.v1.app_configuration as ac` as an example, the `.v1.app_configuration` part after `import kam` represents the relative path of a specific schema to import. In this case, the `AppConfiguration` schema is defined under `v1/app_configuration` directory in the `kam` package.
+
+### Understanding kcl.mod
+
+Much similar to the concept of `go.mod`, Kusion uses `kcl.mod` as the source of truth to manage metadata (such as package name, dependencies, etc.) for the current package. Kusion will also auto-generate a `kcl.mod.lock` as the dependency lock file.
+
+The most common usage for `kcl.mod` is to manage the dependency of your application configuration file.
+
+:::info
+
+Please note this `kcl.mod` will be automatically generated if you are using `kusion init` to initialize a project with a template. You will only need to modify this file if you are modifying the project metadata outside the initialization process, such as upgrading the dependency version or adding a new dependency altogether, etc.
+:::
+
+There are 3 sections in a `kcl.mod` file:
+- `package`, representing the metadata for the current package.
+- `dependencies`, describing the packages the current package depends on. Supports referencing either a git repository or an OCI artifact.
+- `profile`, defining the behavior for Kusion. In the example below, it describes the list of files Kusion should look for when parsing the application configuration.
+
+An example of `kcl.mod`:
+```
+[package]
+name = "multi-stack-project"
+edition = "0.5.0"
+version = "0.1.0"
+
+[dependencies]
+monitoring = { oci = "oci://ghcr.io/kusionstack/monitoring", tag = "0.1.0" }
+kam = { git = "https://github.com/KusionStack/kam.git", tag = "0.1.0" }
+# Uncomment the line below to use your own modified module
+# my-module = { oci = "oci://ghcr.io/my-repository/my-package", tag = "my-version" }
+
+[profile]
+entries = ["../base/base.k", "main.k"]
+```
+
+### Building Blocks
+
+Configuration files consist of building blocks that are made of instances of schemas. An `AppConfiguration` instance consists of several child schemas, most of which are optional. The only mandatory one is the `workload` instance. We will take a closer look in the [workload walkthrough](workload). The order of the building blocks does NOT matter.
+
+The major building blocks as of version `0.12.0`:
+```
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ "myapp": c.Container {}
+ ...
+ }
+ secrets: {}
+ ...
+ }
+ # optional dependencies, usually expressed in kusion modules
+ accessories: {
+ ...
+ }
+ ...
+}
+```
+
+We will deep dive into each one of the building blocks in this documentation series.
+
+### Instantiating an application
+
+In Kusion's out-of-the-box experience, an application is identified with an instance of `AppConfiguration`. You may have more than one application in the same project or stack.
+
+Here's an example of a configuration that can be consumed by Kusion (assuming it is placed inside the proper directory structure that includes project and stack configurations, with a `kcl.mod` present):
+
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import network as n
+
+gocity: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ "gocity": c.Container {
+ image = "howieyuen/gocity:latest"
+ resources: {
+ "cpu": "500m"
+ "memory": "512Mi"
+ }
+ }
+ }
+ replicas: 1
+ }
+ accessories: {
+ "network": n.Network {
+ ports: [
+ n.Port {
+ port: 80
+ public: True
+ }
+ ]
+ }
+ }
+}
+```
+
+Don't worry about what `workload` or `n.Network` stand for at the moment. We will deep dive into each one of them in this upcoming documentation series.
+
+### Using `kusion init`
+
+Kusion offers a `kusion init` sub-command which initializes a new project using a pre-built template, which saves you from the hassle of manually building the aforementioned directory structure that Kusion expects.
+
+There is a built-in template `quickstart` in the Kusion binary that can be used offline.
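+
+For example, a new project could be scaffolded from an empty directory like so (a sketch; the exact prompts and available templates may vary by version):
+
+```shell
+mkdir my-project && cd my-project
+kusion init
+```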
+
+The pre-built templates are meant to help you get off the ground quickly with some simple out-of-the-box examples. You can refer to the [QuickStart documentation](../2-getting-started/2-getting-started-with-kusion-cli/1-deliver-quickstart.md) for some step-by-step tutorials.
+
+### Using references
+
+The reference documentation for the `kam` package and the official Kusion Modules is located in [Reference](../reference/modules/developer-schemas/app-configuration).
+
+If you are using them out of the box, the reference documentation provides a comprehensive view for each schema involved, including all the attribute names and description, their types, default value if any, and whether a particular attribute is required or not. There will also be an example attached to each schema reference.
+
+We will also deep dive into some common examples in the upcoming sections.
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/2-kcl-basics.md b/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/2-kcl-basics.md
new file mode 100644
index 00000000..aaa80366
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/2-kcl-basics.md
@@ -0,0 +1,144 @@
+---
+id: kcl-basics
+---
+
+# KCL Basics
+
+## Table of Content
+- [Variable assignments](#variable-assignments)
+- [Common built-in types](#common-built-in-types)
+- [Lists and maps](#lists-and-maps)
+- [Conditional statements](#conditional-statements)
+- [The : and = operator](#the--and--operator)
+- [Advanced KCL capabilities](#advanced-kcl-capabilities)
+
+[KCL](https://kcl-lang.io/) is the choice of configuration language consumed by Kusion. KCL is an open source constraint-based record and functional language. KCL works well with a large number of complex configurations via modern programming language technology and practice, and is committed to provide better modularity, scalability, stability and extensibility.
+
+## Variable assignments
+
+There are two ways to initialize a variable in KCL. You can either use the `:` operator or the `=` operator. We will discuss the difference between them in [this section later](#the--and--operator).
+
+Here are the two ways to create a variable and initialize it:
+```
+foo = "Foo" # Declare a variable named `foo` and its value is a string literal "Foo"
+bar: "Bar" # Declare a variable named `bar` and its value is a string literal "Bar"
+```
+
+You will be able to override a variable assignment via the `=` operator. We will discuss this in depth in the [`:` and `=` operator section](#the--and--operator).
+
+## Common built-in types
+
+KCL supports `int`, `float`, `bool` and `string` as the built-in types.
+
+Other types are defined in the packages that are imported into the application configuration files. One such example would be the `AppConfiguration` object (or `Container`, `Probe`, `Port` object, etc) that are defined in the `kam` repository.
+
+## Lists and maps
+
+Lists are represented using the `[]` notation.
+An example of lists:
+```
+list0 = [1, 2, 3]
+list1 = [4, 5, 6]
+joined_list = list0 + list1 # [1, 2, 3, 4, 5, 6]
+```
+
+Maps are represented using the `{}` notation.
+An example of maps:
+```
+a = {"one" = 1, "two" = 2, "three" = 3}
+b = {'one' = 1, 'two' = 2, 'three' = 3}
+assert a == b # True
+assert len(a) == 3 # True
+```
+
+## Conditional statements
+You can also use basic control flow statements when writing the configuration file.
+
+An example that sets the value of `replicas` conditionally based on the value of `containers.myapp.resources.cpu`:
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ "myapp": c.Container {
+ image: ""
+ resources: {
+ "cpu": "500m"
+ "memory": "512Mi"
+ }
+ }
+ }
+ replicas: 1 if containers.myapp.resources.cpu == "500m" else 2
+ }
+}
+```
+
+For more details on KCL's control flow statements, please refer to the [KCL documentation](https://kcl-lang.io/docs/reference/lang/tour#control-flow-statements).
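+
+As a simpler standalone sketch (the values here are illustrative), KCL also supports module-level `if` statements:
+```
+_env = "prod"
+if _env == "prod":
+    replicas = 3
+else:
+    replicas = 1
+```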
+
+## The `:` and `=` operator
+
+You might have noticed the mixed usage of `:` and `=` in the samples above.
+
+:::info
+
+**TLDR: The recommendation is to use `:` in the common configurations, and `=` for override in the environment-specific configurations.**
+:::
+
+In KCL:
+- `:` represents a union-ed value assignment. In the pattern `identifier: E` or `identifier: T E`, the value of the expression `E` with optional type annotation `T` will be merged and union-ed into the element value.
+- `=` represents a value override. In the pattern `identifier = E` or `identifier = T E`, the value of the expression `E` with optional type annotation `T` will override the `identifier` attribute value.
+
+Let's take a look at an example:
+```
+# This is one configuration that will be merged.
+config: Config {
+ data.d1 = 1
+}
+# This is another configuration that will be merged.
+config: Config {
+ data.d2 = 2
+}
+```
+
+The above is equivalent to the snippet below since the two expressions for `config` get merged/union-ed into one:
+```
+config: Config {
+ data.d1 = 1
+    data.d2 = 2
+}
+```
+
+whereas using the `=` operators will result in a different outcome:
+```
+# This is first configuration.
+config = Config {
+ data.d1 = 1
+}
+# This is second configuration that will override the prior one.
+config = Config {
+ data.d2 = 2
+}
+```
+
+The config above results in:
+```
+config: Config {
+ data.d2 = 2
+}
+```
+
+Please note that the `:` attribute operator represents an idempotent merge operation, and an error will be thrown when the values that need to be merged conflict with each other.
+
+```
+data0 = {id: 1} | {id: 2} # Error: conflicting values between {'id': 2} and {'id': 1}
+data1 = {id: 1} | {id = 2} # Ok, the value of `data1` is {"id": 2}
+```
+
+More about the `:` and `=` operators can be found in the [KCL documentation](https://kcl-lang.io/docs/reference/lang/tour#config-operations).
+
+## Advanced KCL capabilities
+
+For more advanced KCL capabilities, please visit the [KCL website](https://kcl-lang.io/docs/user_docs/support/faq-kcl).
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/3-base-override.md b/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/3-base-override.md
new file mode 100644
index 00000000..f14af112
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/3-base-override.md
@@ -0,0 +1,94 @@
+---
+id: base-override
+---
+
+# Base and Override
+
+In practice, production-grade applications usually need to be deployed to a wide range of different targets, be it different environments in the SDLC, or different clouds, regions or runtimes for cost, regulation, performance or disaster-recovery reasons.
+
+In that context, we advocate for a pattern where you can leverage some Kusion and KCL features to minimize the amount of duplicate configurations, by separating the common base application configuration and environment-specific ones.
+
+:::info
+
+The file names in the examples below don't matter as long as they are called out and appear in the correct order in the `entries` field (a list) in `kcl.mod`. The files with common configurations should appear first in the list and stack-specific ones last. The latter takes precedence.
+
+The configurations also don't have to be placed in a single `.k` file. For complex projects, they can be broken down into smaller, organized `.k` files for better readability.
+:::
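+
+As an illustrative sketch (the section and field names here are assumed; check your generated `kcl.mod` for the exact layout), the ordering described above might look like:
+```
+# kcl.mod of the dev stack
+[profile]
+entries = ["../base/base.k", "main.k"]
+```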
+
+Base configuration defined in `base/base.k`:
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import network.network as n
+
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ "myapp": c.Container {
+ image: ""
+ resources: {
+ "cpu": "500m"
+ "memory": "512Mi"
+ }
+ }
+ }
+ replicas: 1
+ }
+ accessories: {
+ "network": n.Network {
+ ports: [
+ n.Port {
+ port: 80
+ public: True
+ }
+ ]
+ }
+ }
+}
+```
+
+Environment-specific configuration defined in `dev/main.k`:
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+
+# main.k declares customized configurations for dev stack.
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ "myapp": c.Container {
+ # dev stack has different app configuration from the base
+ image = "gcr.io/google-samples/gb-frontend:v5"
+ resources = {
+ "cpu": "250m"
+ "memory": "256Mi"
+ }
+ }
+ }
+ replicas = 2
+ }
+}
+```
+
+Alternatively, you could locate a specific property (in this case, the `Container` object) in the `AppConfiguration` object using the dot selector shorthand (such as `workload.containers.myapp` or `workload.replicas` below):
+```
+import kam.v1.app_configuration as ac
+
+# main.k declares customized configurations for dev stack.
+myapp: ac.AppConfiguration {
+ workload.replicas = 2
+ workload.containers.myapp: {
+ # dev stack has different app configuration
+ image = "gcr.io/google-samples/gb-frontend:v5"
+ resources = {
+ "cpu": "250m"
+ "memory": "256Mi"
+ }
+ }
+}
+```
+This is especially useful when the application configuration is complex but the override is relatively straightforward.
+
+The two examples above are equivalent when overriding the base.
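+
+For reference, the effective `dev` configuration after either override is applied would be equivalent to the following sketch (derived from the base and dev configurations above):
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import network.network as n
+
+myapp: ac.AppConfiguration {
+    workload: service.Service {
+        containers: {
+            "myapp": c.Container {
+                image: "gcr.io/google-samples/gb-frontend:v5"
+                resources: {
+                    "cpu": "250m"
+                    "memory": "256Mi"
+                }
+            }
+        }
+        replicas: 2
+    }
+    accessories: {
+        "network": n.Network {
+            ports: [
+                n.Port {
+                    port: 80
+                    public: True
+                }
+            ]
+        }
+    }
+}
+```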
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/4-workload.md b/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/4-workload.md
new file mode 100644
index 00000000..2b880df0
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/4-workload.md
@@ -0,0 +1,373 @@
+# Workload
+
+The `workload` attribute in the `AppConfiguration` instance is used to describe the specification for the application workload. The application workload generally represents the computing component for the application.
+
+A `workload` maps to an `AppConfiguration` instance 1:1. If there is more than one workload, they should be considered different applications.
+
+## Table of Contents
+- [Import](#import)
+- [Types of workloads](#types-of-workloads)
+- [Configure containers](#configure-containers)
+ - [Application image](#application-image)
+ - [Resource Requirements](#resource-requirements)
+ - [Health Probes](#health-probes)
+ - [Lifecycle Hooks](#lifecycle-hooks)
+ - [Create Files](#create-files)
+ - [Customize container initialization](#customize-container-initialization)
+- [Configure Replicas](#configure-replicas)
+- [Differences between Service and Job](#differences-between-service-and-job)
+- [Workload References](#workload-references)
+
+## Import
+
+In the examples below, we are using schemas defined in the `catalog` package. For more details on KCL package import, please refer to the [Configuration File Overview](overview).
+
+The `import` statements needed for the following walkthrough:
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import service.container.probe as p
+import service.container.lifecycle as lc
+```
+
+## Types of Workloads
+
+There are currently two types of workloads:
+
+- `Service`, representing a long-running, scalable workload type that should "never" go down and respond to short-lived latency-sensitive requests. This workload type is commonly used for web applications and services that expose APIs.
+- `Job`, representing batch tasks that take from a few seconds to days to complete and then stop. These are commonly used for batch processing that is less sensitive to short-term performance fluctuations.
+
+To instantiate a `Service`:
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+
+myapp: ac.AppConfiguration {
+ workload: service.Service {}
+}
+```
+
+To instantiate a `Job`:
+```
+import kam.v1.app_configuration as ac
+import job
+import job.container as c
+
+myapp: ac.AppConfiguration {
+ workload: job.Job {}
+}
+```
+
+Of course, the `AppConfiguration` instances above are not sufficient to describe an application. We still need to provide more details in the `workload` section.
+
+## Configure containers
+
+Kusion is built on top of cloud-native philosophies. One of which is that applications should run as loosely coupled microservices on abstract and self-contained software units, such as containers.
+
+The `containers` attribute in a workload instance is used to define the behavior of the containers that run the application workload. The `containers` attribute is a map from the name of the container to the `catalog.models.schema.v1.workload.container.Container` object, which includes the container configurations.
+
+:::info
+
+The name of the container is scoped to the configuration file, so you can refer to it later. It does not refer to the name of the container in the Kubernetes cluster (or any other runtime).
+:::
+
+Everything defined in the `containers` attribute is considered an application container, as opposed to a sidecar container. Sidecar containers will be introduced in a different attribute in a future version.
+
+In most cases, only one application container is needed. Ideally, we recommend mapping each `AppConfiguration` instance to a single microservice, in microservice terminology.
+
+We will walk through the details of configuring a container using an example of the `Service` type.
+
+To add an application container:
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ "myapp": c.Container {}
+ }
+ }
+}
+```
+
+### Application image
+
+The `image` attribute in the `Container` schema specifies the application image to run. This is the only required field in the `Container` schema.
+
+To specify an application image:
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ "myapp": c.Container {
+ image: "gcr.io/google-samples/gb-frontend:v5"
+ }
+ # ...
+ }
+ }
+}
+```
+
+### Resource Requirements
+
+The `resources` attribute in the `Container` schema specifies the application resource requirements, such as CPU and memory.
+
+You can specify either an upper limit only (which maps to resource limits in Kubernetes) or a range (which maps to both resource requests and limits).
+
+To specify an upper bound (only resource limits):
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ "myapp": c.Container {
+ image: "gcr.io/google-samples/gb-frontend:v5"
+ resources: {
+ "cpu": "500m"
+ "memory": "512Mi"
+ }
+ # ...
+ }
+ }
+ }
+}
+```
+
+To specify a range (both resource requests and limits):
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ "myapp": c.Container {
+ image: "gcr.io/google-samples/gb-frontend:v5"
+ # Sets requests to cpu=250m and memory=256Mi
+ # Sets limits to cpu=500m and memory=512Mi
+ resources: {
+ "cpu": "250m-500m"
+ "memory": "256Mi-512Mi"
+ }
+ # ...
+ }
+ }
+ }
+}
+```
+
+### Health Probes
+
+There are three types of `Probe` defined in a `Container`:
+
+- `livenessProbe` - used to determine if the container is healthy and running
+- `readinessProbe` - used to determine if the container is ready to accept traffic
+- `startupProbe` - used to determine if the container has started properly. Liveness and readiness probes don't start until `startupProbe` succeeds. Commonly used for containers that take a while to start
+
+Probes are optional. You can have at most one probe of each kind for a given `Container`.
+
+To configure an `Http` type `readinessProbe` that checks health via an HTTP request, and an `Exec` type `livenessProbe` that executes a command:
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import service.container.probe as p
+
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ "myapp": c.Container {
+ image: "gcr.io/google-samples/gb-frontend:v5"
+ # ...
+ # Configure an Http type readiness probe at /healthz
+ readinessProbe: p.Probe {
+ probeHandler: p.Http {
+ url: "/healthz"
+ }
+ initialDelaySeconds: 10
+ timeoutSeconds: 5
+ periodSeconds: 15
+ successThreshold: 3
+ failureThreshold: 1
+ }
+ # Configure an Exec type liveness probe that executes probe.sh
+ livenessProbe: p.Probe {
+ probeHandler: p.Exec {
+ command: ["probe.sh"]
+ }
+ initialDelaySeconds: 10
+ }
+ }
+ }
+ }
+}
+```
+
+### Lifecycle Hooks
+
+You can also configure lifecycle hooks that trigger in response to container lifecycle events such as liveness/startup probe failure, preemption, resource contention, etc.
+
+There are two hook types currently supported:
+
+- `PreStop` - triggers before the container is terminated.
+- `PostStart` - triggers after the container is initialized.
+
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import service.container.probe as p
+import service.container.lifecycle as lc
+
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ "myapp": c.Container {
+ image: "gcr.io/google-samples/gb-frontend:v5"
+ # ...
+ # Configure lifecycle hooks
+ lifecycle: lc.Lifecycle {
+ # Configures an Exec type pre-stop hook that executes preStop.sh
+ preStop: p.Exec {
+ command: ["preStop.sh"]
+ }
+            # Configures an Http type post-start hook at /post-start
+ postStart: p.Http {
+ url: "/post-start"
+ }
+ }
+ }
+ }
+ }
+}
+```
+
+### Create Files
+
+You can also create files on-demand during the container initialization.
+
+To create a custom file and mount it to `/home/admin/my-file` when the container starts:
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+
+myapp: ac.AppConfiguration {
+    workload: service.Service {
+        containers: {
+            "myapp": c.Container {
+                image: "gcr.io/google-samples/gb-frontend:v5"
+                # ...
+                # Creates a file during container startup
+                files: {
+                    "/home/admin/my-file": c.FileSpec {
+                        content: "some file contents"
+                        mode: "0777"
+                    }
+                }
+            }
+        }
+    }
+}
+```
+
+### Customize container initialization
+
+You can also customize the container entrypoint via `command`, `args`, and `workingDir`. These should **rarely be required**; in most cases, the entrypoint details should be baked into the application image itself.
+
+To customize the container entrypoint:
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ "myapp": c.Container {
+ image: "gcr.io/google-samples/gb-frontend:v5"
+ # ...
+            # This command overwrites the entrypoint set in the image Dockerfile
+            command: ["/usr/local/bin/my-init-script.sh"]
+            # Extra arguments appended to the command defined above
+ args: [
+ "--log-dir=/home/my-app/logs"
+ "--timeout=60s"
+ ]
+ # Run the command as defined above, in the directory "/tmp"
+ workingDir: "/tmp"
+ }
+ }
+ }
+}
+```
+
+## Configure Replicas
+
+The `replicas` field in the `workload` instance describes the number of identical copies to run at the same time. It is generally recommended to have multiple replicas in production environments to eliminate any single point of failure. In Kubernetes, this corresponds to the `spec.replicas` field in the relevant workload manifests.
+
+To configure a workload to have a replica count of 3:
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ # ...
+ }
+ replicas: 3
+ # ...
+ }
+ # ...
+}
+```
+
+## Differences between Service and Job
+
+The two types of workloads, namely `Service` and `Job`, share a majority of the attributes with some minor differences.
+
+### Exposure
+
+A `Service` usually represents a long-running, scalable workload that responds to short-lived latency-sensitive requests and should "never" go down. Hence, a `Service` has an additional attribute that determines how it is exposed and can be accessed. A `Job` does NOT have the option to be exposed. We will explore more in the [application networking walkthrough](networking).
+
+### Job Schedule
+
+A `Job` can be configured to run in a recurring manner. In this case, the job will have a cron-format schedule that represents its recurring schedule.
+
+To configure a job to run at 21:00 every night:
+```
+import kam.v1.app_configuration as ac
+import job
+import job.container as c
+
+myjob: ac.AppConfiguration {
+ workload: job.Job {
+ containers: {
+ "busybox": c.Container {
+ image: "busybox:1.28"
+ # Run the following command as defined
+ command: ["/bin/sh", "-c", "echo hello"]
+ }
+ }
+        # Run at 21:00 every night.
+        schedule: "0 21 * * *"
+ }
+}
+```
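+
+The `schedule` field uses the standard five-field cron format (`minute hour day-of-month month day-of-week`). A few illustrative values:
+```
+schedule: "0 21 * * *"    # every day at 21:00
+schedule: "*/15 * * * *"  # every 15 minutes
+schedule: "0 9 * * 1-5"   # at 09:00, Monday through Friday
+```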
+
+## Workload References
+
+You can find workload references [here](../reference/modules/developer-schemas/workload/service).
+
+You can find workload schema source [here](https://github.com/KusionStack/catalog/tree/main/models/schema/v1/workload).
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/5-networking.md b/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/5-networking.md
new file mode 100644
index 00000000..adaa9904
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/5-networking.md
@@ -0,0 +1,174 @@
+---
+id: networking
+---
+
+# Application Networking
+
+In addition to configuring the application's [container specifications](workload#configure-containers), you can also configure its networking behaviors, including how to expose the application and how it can be accessed. You can specify a `network` module in the `accessories` field in `AppConfiguration` to achieve that.
+
+In future versions, this will also include ingress-based routing strategy and DNS configurations.
+
+## Import
+
+In the examples below, we are using schemas defined in the `kam` package and the `network` Kusion Module. For more details on KCL package and module import, please refer to the [Configuration File Overview](overview).
+
+The `import` statements needed for the following walkthrough:
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import network as n
+```
+
+The `kcl.mod` must contain a reference to the network module:
+```
+#...
+
+[dependencies]
+network = { oci = "oci://ghcr.io/kusionstack/network", tag = "0.2.0" }
+
+#...
+```
+
+## Private vs Public Access
+
+Private network access means the service can only be accessed from within the target cluster.
+
+Public access is implemented using public load balancers on the cloud. This generally requires a Kubernetes cluster that is running on the cloud with a vendor-specific service controller.
+
+Any port defined defaults to private access unless explicitly specified otherwise.
+
+To expose port 80 to be accessed privately:
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import network as n
+
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ # ...
+ }
+ accessories: {
+ "network": n.Network {
+ ports: [
+ n.Port {
+ port: 80
+ }
+ ]
+ }
+ }
+}
+```
+
+To expose port 80 to be accessed publicly:
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import network as n
+
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ # ...
+ }
+ accessories: {
+ "network": n.Network {
+ ports: [
+ n.Port {
+ port: 80
+ public: True
+ }
+ ]
+ }
+ }
+}
+```
+
+:::info
+The CSP (Cloud Service Provider) used to provide load balancers is defined by platform engineers in the workspace configuration.
+:::
+
+## Mapping ports
+
+To expose a port `80` that maps to a different port `8088` on the container:
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import network as n
+
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ # ...
+ }
+ accessories: {
+ "network": n.Network {
+ ports: [
+ n.Port {
+ port: 80
+ targetPort: 8088
+ }
+ ]
+ }
+ }
+}
+```
+
+## Exposing multiple ports
+
+You can also expose multiple ports and configure them separately.
+
+To expose port 80 to be accessed publicly, and port 9099 for private access (to be scraped by Prometheus, for example):
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import network as n
+
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ # ...
+ }
+ accessories: {
+ "network": n.Network {
+ ports: [
+ n.Port {
+ port: 80
+ public: True
+ }
+ n.Port {
+ port: 9099
+ }
+ ]
+ }
+ }
+}
+```
+
+## Choosing protocol
+
+To expose a port using the `UDP` protocol:
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import network as n
+
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ # ...
+ }
+ accessories: {
+ "network": n.Network {
+ ports: [
+ n.Port {
+ port: 80
+ targetPort: 8088
+ protocol: "UDP"
+ }
+ ]
+ }
+ }
+}
+```
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/6-database.md b/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/6-database.md
new file mode 100644
index 00000000..64316e98
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/6-database.md
@@ -0,0 +1,467 @@
+---
+id: database
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Managed Databases
+
+You can also specify a database for the application via a `mysql` or `postgres` module (or a bring-your-own module) in the `accessories` field in `AppConfiguration`.
+
+You can currently have several databases with **different database names** for an application at the same time.
+
+## Import
+
+In the examples below, we are using schemas defined in the `kam` package and the `mysql` Kusion Module. For more details on KCL package and module import, please refer to the [Configuration File Overview](./1-overview.md#configuration-file-overview).
+
+The `import` statements needed for the following walkthrough:
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import mysql
+import postgres
+```
+
+The `kcl.mod` must contain a reference to the `mysql` module or `postgres` module:
+```
+#...
+
+[dependencies]
+mysql = { oci = "oci://ghcr.io/kusionstack/mysql", tag = "0.2.0" }
+postgres = { oci = "oci://ghcr.io/kusionstack/postgres", tag = "0.2.0" }
+#...
+```
+
+## Types of Database offerings
+
+As of version 0.11.0, Kusion supports the following database offerings on the cloud:
+- MySQL and PostgreSQL Relational Database Service (RDS) on [AWS](https://aws.amazon.com/rds/)
+- MySQL and PostgreSQL Relational Database Service (RDS) on [AliCloud](https://www.alibabacloud.com/product/databases)
+
+More database types on more cloud vendors will be added in the future.
+
+Alternatively, Kusion also supports creating a database at `localhost` for local testing. A local database is quicker to stand up and easier to manage, and it eliminates the need for a cloud provider account and the associated costs when a local testing environment is sufficient.
+
+:::info
+You do need a local Kubernetes cluster to run the local database workloads. You can refer to [Minikube](https://minikube.sigs.k8s.io/docs/start/) or [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/) to get started.
+To see an end-to-end use case for standing up a local testing environment including a local database, please refer to the [Kusion Quickstart](../2-getting-started/2-getting-started-with-kusion-cli/1-deliver-quickstart.md).
+:::
+
+## Cloud Credentials and Permissions
+
+Kusion provisions databases on the cloud via [terraform](https://www.terraform.io/) providers. For it to create _any_ cloud resources, it requires a set of credentials that belongs to an account that has the appropriate write access so the terraform provider can be initialized properly.
+
+For AWS, the environment variables needed:
+```
+export AWS_REGION=us-east-1 # replace it with your region
+export AWS_ACCESS_KEY_ID="xxxxxxxxxxx" # replace it with your AccessKey
+export AWS_SECRET_ACCESS_KEY="xxxxxxx" # replace it with your SecretKey
+```
+
+For AliCloud, the environment variables needed:
+```
+export ALICLOUD_REGION=cn-shanghai # replace it with your region
+export ALICLOUD_ACCESS_KEY="xxxxxxxxx" # replace it with your AccessKey
+export ALICLOUD_SECRET_KEY="xxxxxxxxx" # replace it with your SecretKey
+```
+
+The user account that owns these credentials needs the proper permission policies attached to create databases and security groups. If you are using the cloud-managed policies, the policies needed to provision a database and configure firewall rules are listed below.
+
+For AWS:
+- `AmazonVPCFullAccess` for creating and managing database firewall rules via security group
+- `AmazonRDSFullAccess` for creating and managing RDS instances
+
+For AliCloud:
+- `AliyunVPCFullAccess` for creating and managing database firewall rules via security group
+- `AliyunRDSFullAccess` for creating and managing RDS instances
+
+Alternatively, you can use customer-managed policies if the cloud provider built-in policies don't meet your needs. The list of permissions needed is in the [AmazonRDSFullAccess Policy Document](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonRDSFullAccess.html#AmazonRDSFullAccess-json) and [AmazonVPCFullAccess Policy Document](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonVPCFullAccess.html). What you need will most likely be a subset of the permissions in the policy documents.
+
+## Configure Database
+
+### Provision a Cloud Database
+
+Assuming the steps in the [Cloud Credentials and Permissions](#cloud-credentials-and-permissions) section are set up properly, you can now provision cloud databases via Kusion.
+
+#### AWS RDS Instance
+To provision an AWS RDS instance with MySQL v8.0 or PostgreSQL v14.0, you can append the following YAML file to your own workspace configurations and update the corresponding workspace with command `kusion workspace update`.
+
+
+
+
+```yaml
+runtimes:
+ terraform:
+ random:
+ version: 3.5.1
+ source: hashicorp/random
+ aws:
+ version: 5.0.1
+ source: hashicorp/aws
+ region: us-east-1 # Please replace with your own aws provider region
+
+# MySQL configurations for AWS RDS
+modules:
+ kusionstack/mysql@0.1.0:
+ default:
+ cloud: aws
+ size: 20
+ instanceType: db.t3.micro
+ securityIPs:
+ - 0.0.0.0/0
+ suffix: "-mysql"
+```
+
+```yaml
+runtimes:
+ terraform:
+ random:
+ version: 3.5.1
+ source: hashicorp/random
+ aws:
+ version: 5.0.1
+ source: hashicorp/aws
+ region: us-east-1 # Please replace with your own aws provider region
+
+# PostgreSQL configurations for AWS RDS
+modules:
+ kusionstack/postgres@0.1.0:
+ default:
+ cloud: aws
+ size: 20
+ instanceType: db.t3.micro
+ securityIPs:
+ - 0.0.0.0/0
+ suffix: "-postgres"
+```
+
+
+For KCL configuration file declarations:
+
+
+
+
+```python
+wordpress: ac.AppConfiguration {
+ # ...
+ accessories: {
+ "mysql": mysql.MySQL {
+ type: "cloud"
+ version: "8.0"
+ }
+ }
+}
+```
+
+
+```python
+pgadmin: ac.AppConfiguration {
+ # ...
+ accessories: {
+ "postgres": postgres.PostgreSQL {
+ type: "cloud"
+ version: "14.0"
+ }
+ }
+}
+```
+
+
+It is highly recommended to replace `0.0.0.0/0` and closely manage the whitelist of IPs that can access the database. Using `0.0.0.0/0` as in the example above, or omitting `securityIPs` altogether, allows connections from anywhere, which is typically bad security practice.
+
+The `instanceType` field determines the computation and memory capacity of the RDS instance. The `db.t3.micro` instance type in the example above represents the `db.t3` instance class with a size of `micro`. In the same `db.t3` instance family there are also `db.t3.small`, `db.t3.medium`, `db.t3.2xlarge`, etc.
+
+The full list of supported `instanceType` values can be found [here](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html#Concepts.DBInstanceClass.Support).
+
+You can also adjust the storage capacity for the database instance by changing the `size` field, which is the storage size measured in gigabytes. The minimum is 20. More details can be found [here](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html#Concepts.Storage.GeneralSSD).
+
+#### AliCloud RDS Instance
+
+To provision an Alicloud RDS instance with MySQL or PostgreSQL, you can append the following YAML file to your own workspace configurations and update the corresponding workspace with command `kusion workspace update`. Note that AliCloud RDS has several additional fields such as `category`, `subnetID` and `privateRouting`:
+
+
+
+
+```yaml
+runtimes:
+ terraform:
+ random:
+ version: 3.5.1
+ source: hashicorp/random
+ alicloud:
+ version: 1.209.1
+ source: aliyun/alicloud
+ region: cn-beijing # Please replace with your own alicloud provider region
+
+# MySQL configurations for Alicloud RDS
+modules:
+ kusionstack/mysql@0.1.0:
+ default:
+ cloud: alicloud
+ size: 20
+ instanceType: mysql.n2.serverless.1c
+ category: serverless_basic
+ privateRouting: false
+ subnetID: [your-subnet-id]
+ securityIPs:
+ - 0.0.0.0/0
+ suffix: "-mysql"
+```
+
+```yaml
+runtimes:
+ terraform:
+ random:
+ version: 3.5.1
+ source: hashicorp/random
+ alicloud:
+ version: 1.209.1
+ source: aliyun/alicloud
+ region: cn-beijing # Please replace with your own alicloud provider region
+
+# PostgreSQL configurations for Alicloud RDS
+modules:
+ kusionstack/postgres@0.1.0:
+ default:
+ cloud: alicloud
+ size: 20
+ instanceType: pg.n2.serverless.1c
+ category: serverless_basic
+ privateRouting: false
+ subnetID: [your-subnet-id]
+ securityIPs:
+ - 0.0.0.0/0
+ suffix: "-postgres"
+```
+
+
+For KCL configuration file declarations:
+
+
+
+
+```python
+wordpress: ac.AppConfiguration {
+ # ...
+ accessories: {
+ "mysql": mysql.MySQL {
+ type: "cloud"
+ version: "8.0"
+ }
+ }
+}
+```
+
+
+```python
+pgadmin: ac.AppConfiguration {
+ # ...
+ accessories: {
+ "postgres": postgres.PostgreSQL {
+ type: "cloud"
+ version: "14.0"
+ }
+ }
+}
+```
+
+
+We will walk through `subnetID` and `privateRouting` in the [Configure Network Access](#configure-network-access) section.
+
+The full list of supported `instanceType` values can be found in:
+- [MySQL instance types(x86)](https://www.alibabacloud.com/help/en/rds/apsaradb-rds-for-mysql/primary-apsaradb-rds-for-mysql-instance-types#concept-2096487)
+- [PostgreSQL instance types](https://www.alibabacloud.com/help/en/rds/apsaradb-rds-for-postgresql/primary-apsaradb-rds-for-postgresql-instance-types#concept-2096578)
+
+### Local Database
+
+To deploy a local database with MySQL v8.0 or PostgreSQL v14.0:
+
+
+
+
+```python
+wordpress: ac.AppConfiguration {
+ # ...
+ accessories: {
+ "mysql": mysql.MySQL {
+ type: "local"
+ version: "8.0"
+ }
+ }
+}
+```
+
+```mdx-code-block
+
+
+```
+
+```python
+pgadmin: ac.AppConfiguration {
+ # ...
+ accessories: {
+ "postgres": postgres.PostgreSQL {
+ type: "local"
+ version: "14.0"
+ }
+ }
+}
+```
+
+```mdx-code-block
+
+
+```
+
+## Database Credentials
+
+There is no need to manage the database credentials manually. Kusion will automatically generate a random password, set it as the credential when creating the database, and then inject the hostname, username and password into the application runtime.
+
+You have the option to BYO (Bring Your Own) username for the database credential by specifying the `username` attribute in the `workspace.yaml`:
+```yaml
+modules:
+ kusionstack/mysql@0.1.0:
+ default:
+ # ...
+ username: "my_username"
+```
+
+You **cannot** bring your own password. The password will always be managed by Kusion automatically.
+
+The database credentials are injected into the environment variables of the application container. You can access them via the following env vars:
+```
+# env | grep KUSION_DB
+KUSION_DB_HOST_WORDPRESS_MYSQL=wordpress.xxxxxxxx.us-east-1.rds.amazonaws.com
+KUSION_DB_USERNAME_WORDPRESS_MYSQL=xxxxxxxxx
+KUSION_DB_PASSWORD_WORDPRESS_MYSQL=xxxxxxxxx
+```
+
+:::info
+More details about the environment variables for database credentials injected by Kusion can be found in [mysql credentials and connectivity](../6-reference/2-modules/1-developer-schemas/database/mysql.md#credentials-and-connectivity) and [postgres credentials and connectivity](../6-reference/2-modules/1-developer-schemas/database/postgres.md#credentials-and-connectivity).
+:::
+
+You can use these environment variables out of the box. Alternatively, if your application expects its connection details in a different set of environment variables, you can map the Kusion-injected variables to the ones your application expects using the `$()` expression.
+
+The example below assigns the value of `KUSION_DB_HOST_WORDPRESS_MYSQL` to `WORDPRESS_DB_HOST` and `KUSION_DB_USERNAME_WORDPRESS_MYSQL` to `WORDPRESS_DB_USER`, and likewise for `KUSION_DB_PASSWORD_WORDPRESS_MYSQL` and `WORDPRESS_DB_PASSWORD`:
+```python
+wordpress: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ wordpress: c.Container {
+ image = "wordpress:6.3-apache"
+ env: {
+ "WORDPRESS_DB_HOST": "$(KUSION_DB_HOST_WORDPRESS_MYSQL)"
+ "WORDPRESS_DB_USER": "$(KUSION_DB_USERNAME_WORDPRESS_MYSQL)"
+ "WORDPRESS_DB_PASSWORD": "$(KUSION_DB_PASSWORD_WORDPRESS_MYSQL)"
+ }
+ # ...
+ }
+ }
+ # ...
+ }
+ accessories: {
+ # ...
+ }
+}
+```
+
+## Configure Network Access
+
+You can also optionally configure network access to the database as part of the `AppConfiguration`. This is highly recommended because it significantly improves the security posture of your cloud environment by applying the principle of least privilege.
+
+The `securityIPs` field in the `Database` schema declares the list of network addresses that are allowed to access the database. The network addresses are in the [CIDR notation](https://aws.amazon.com/what-is/cidr/) and can be either a private IP range ([RFC-1918](https://datatracker.ietf.org/doc/html/rfc1918) and [RFC-6598](https://datatracker.ietf.org/doc/html/rfc6598) address) or a public one.
+
+If the database needs to be accessed from a public location (which should rarely be the case in a production environment), `securityIPs` needs to include the public IP address of the traffic source (for instance, if the RDS database needs to be accessed from your computer).
+
+To configure AWS RDS to restrict network access from a VPC with a CIDR of `10.0.1.0/24` and a public IP of `103.192.227.125`:
+
+```yaml
+modules:
+ kusionstack/mysql@0.1.0:
+ default:
+ cloud: aws
+ # ...
+ securityIPs:
+ - "10.0.1.0/24"
+ - "103.192.227.125/32"
+```
+
+Depending on the cloud provider, the default behavior of the database firewall settings may differ if omitted.
+
+### Subnet ID
+
+On AWS, you have the option to launch the RDS instance inside a specific VPC if a `subnetID` is present in the application configuration. By default, if `subnetID` is not provided, the RDS instance will be created in the default VPC for that account. However, the recommendation is to self-manage your VPCs to provide better isolation from a network security perspective.
+
+On AliCloud, the `subnetID` is required. The concept of subnet maps to VSwitch in AliCloud.
+
+To place the RDS instance into a specific VPC on Alicloud:
+
+```yaml
+modules:
+ kusionstack/mysql@0.1.0:
+ default:
+ cloud: alicloud
+ # ...
+ subnetID: "subnet-xxxxxxxxxxxxxxxx"
+```
+
+### Private Routing
+
+There is an option to enforce private routing on certain cloud providers if both the workload and the database are running on the cloud.
+
+On AliCloud, you can set the `privateRouting` flag to `true`. The generated database host will then be a private FQDN that is only resolvable and accessible from within AliCloud VPCs. Setting the `privateRouting` flag to `true` when the cloud provider is AWS is a no-op.
+
+To enforce private routing on AliCloud:
+
+```yaml
+modules:
+ kusionstack/mysql@0.1.0:
+ default:
+ cloud: alicloud
+ # ...
+ privateRouting: true
+```
+
+Kusion will then generate a private FQDN and inject it into the application runtime as the environment variable `KUSION_DB_HOST_` for the application to use. A complete list of Kusion-managed environment variables for mysql database can be found [here](../6-reference/2-modules/1-developer-schemas/database/mysql.md#credentials-and-connectivity).
+
+Otherwise, when using the public FQDN to connect to the database from the workload, the route depends on the cloud provider's routing preference. The options are generally either:
+- Travel as far as possible on the cloud provider's global backbone network (also referred to as cold-potato routing), or
+- Egress as early as possible to the public Internet and re-enter the cloud provider's datacenter later (also referred to as hot-potato routing)
+
+The former generally has better performance but is also more expensive.
+
+You can find a good read on this topic on the [AWS Blog](https://aws.amazon.com/blogs/architecture/internet-routing-and-traffic-engineering/) or on [Microsoft Learn](https://learn.microsoft.com/en-us/azure/virtual-network/ip-services/routing-preference-overview).
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/7-secret.md b/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/7-secret.md
new file mode 100644
index 00000000..db1d576e
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/7-secret.md
@@ -0,0 +1,251 @@
+---
+id: secret
+---
+
+# Secrets
+
+Secrets are used to store sensitive data like passwords, API keys, TLS certificates, tokens, or other credentials. Kusion provides multiple secret types and makes them easy to consume in containers.
+
+For application-dependent cloud resources that are managed by Kusion, credentials are managed automatically by Kusion (generated and injected into the application runtime as environment variables). You shouldn't have to create those manually.
+
+## Using secrets in workload
+
+Secrets must be defined in `AppConfiguration`. The values can be generated by Kusion or reference existing secrets stored in a third-party vault. Secrets can be consumed in containers by referencing them through the `secret://<secret-name>/<key>` URI syntax.
+
+### Consume secret in an environment variable
+
+You can consume the data in secrets as environment variables in your container. For example, the `db` container below uses an environment variable to set the root password.
+
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import service.secret as sec
+
+sampledb: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ "db": c.Container {
+ image: "mysql"
+ env: {
+ # Consume db-root-password secret in environment
+ "ROOT_PASSWORD": "secret://db-root-password/token"
+ }
+ }
+ }
+ # Secrets used to generate token
+ secrets: {
+            "db-root-password": sec.Secret {
+ type: "token"
+ }
+ }
+ }
+}
+```
+
+The example shows the secret `db-root-password` being consumed as an environment variable in the `db` container. The secret is of type `token` and will automatically be generated at runtime by Kusion.
+
+### Consume all secret keys as environment variables
+
+Sometimes your secret contains multiple entries that need to be consumed as environment variables. The example below shows how to consume all the values in a secret as environment variables named after the keys.
+
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import service.secret as sec
+
+sampledb: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ "db": c.Container {
+ image: "mysql"
+ env: {
+ # Consume all init-info secret keys as environment variables
+ "secret://init-info": ""
+ }
+ }
+ }
+ # Secrets used to init mysql instance
+ secrets: {
+ "init-info": sec.Secret {
+ type: "opaque"
+ data: {
+ "ROOT_PASSWORD": "admin"
+ }
+ }
+ }
+ }
+}
+```
+
+This will set the environment variable `ROOT_PASSWORD` to the value `admin` in the `db` container.
+
+## Types of secrets
+
+Kusion provides multiple types of secrets to application developers.
+
+1. Basic: Used to generate and/or store usernames and passwords.
+2. Token: Used to generate and/or store secret strings such as passwords or API tokens.
+3. Opaque: A generic secret that can store arbitrary user-defined data.
+4. Certificate: Used to store a certificate and its associated key that are typically used for TLS.
+5. External: Used to retrieve secrets from a third-party vault.
+
+### Basic secrets
+
+Basic secrets are defined in the `secrets` block with the type `basic`.
+
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import service.secret as sec
+
+sampleapp: ac.AppConfiguration {
+ workload: service.Service {
+ # ...
+ secrets: {
+ "auth-info": sec.Secret {
+ type: "basic"
+ data: {
+ "username": "admin"
+ "password": "******"
+ }
+ }
+ }
+ }
+}
+```
+
+The basic secret type is typically used for basic authentication. The key names must be `username` and `password`. If one or both of the fields are defined with a non-empty string, those values will be used. If a field is left as an empty string (the default), Kusion will generate a random value for it.
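+
+For example, a minimal sketch where both fields are left empty so that Kusion generates random values for the credentials:
+
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import service.secret as sec
+
+sampleapp: ac.AppConfiguration {
+    workload: service.Service {
+        # ...
+        secrets: {
+            "auth-info": sec.Secret {
+                type: "basic"
+                # Empty values: Kusion generates a random username and password
+                data: {
+                    "username": ""
+                    "password": ""
+                }
+            }
+        }
+    }
+}
+```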
+
+### Token secrets
+
+Token secrets are useful for generating a password or other secure string when the username is already known or not required.
+
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import service.secret as sec
+
+sampleapp: ac.AppConfiguration {
+ workload: service.Service {
+ # ...
+ secrets: {
+ "api-token": sec.Secret {
+ type: "token"
+ data: {
+ "token": ""
+ }
+ }
+ }
+ }
+}
+```
+
+For token secrets, the `type` field must be set to `token`. The `token` field in the `data` object is optional; if left empty, Kusion will generate the token, which is 54 characters in length by default. If `token` is defined, that value will always be used.
+
+### Opaque secrets
+
+Opaque secrets have no defined structure and can have arbitrary key value pairs.
+
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import service.secret as sec
+
+sampleapp: ac.AppConfiguration {
+ workload: service.Service {
+ # ...
+ secrets: {
+ "my-secret": sec.Secret {
+ type: "opaque"
+ }
+ }
+ }
+}
+```
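+
+Since the structure is arbitrary, you can also populate the `data` field with user-defined key-value pairs; the keys and values below are purely illustrative:
+
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import service.secret as sec
+
+sampleapp: ac.AppConfiguration {
+    workload: service.Service {
+        # ...
+        secrets: {
+            "my-secret": sec.Secret {
+                type: "opaque"
+                # Arbitrary user-defined key-value pairs (illustrative)
+                data: {
+                    "api-endpoint": "https://api.example.com"
+                    "cache-ttl": "3600"
+                }
+            }
+        }
+    }
+}
+```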
+
+### Certificate secrets
+
+Certificate secrets are useful for storing a certificate and its associated key. One common use for certificate secrets is to configure encryption in transit for an Ingress, but you can also use them with other resources or directly in your workload.
+
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import service.secret as sec
+
+sampleapp: ac.AppConfiguration {
+ workload: service.Service {
+ # ...
+ secrets: {
+ "server-cert": sec.Secret {
+ type: "certificate"
+ data: {
+ # Please do not put private keys in configuration files
+ "tls.crt": "The cert file content"
+ "tls.key": "The key file content"
+ }
+ }
+ }
+ }
+}
+```
+
+### External secrets
+
+As a general principle, storing secrets in plain-text configuration files is highly discouraged. Keeping secrets out of Git is especially important for future-proofing; even encrypted secrets are not recommended to be checked into Git. The most common approach is to store secrets in a third-party vault (such as HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault) and retrieve them only at runtime. External secrets retrieve sensitive data from an external secret store so that it can easily be consumed in containers.
+
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import service.secret as sec
+
+sampleapp: ac.AppConfiguration {
+ workload: service.Service {
+ # ...
+ secrets: {
+ "api-access-token": sec.Secret {
+ type: "external"
+ data: {
+ # Please do not put private keys in configuration files
+ "accessToken": "ref://api-auth-info/accessToken?version=1"
+ }
+ }
+ }
+ }
+}
+```
+
+The value fields in the `data` object follow the `ref://PATH[?version=VERSION]` URI syntax, where `PATH` is the provider-specific path of the secret to be retrieved. Kusion provides out-of-the-box integration with HashiCorp Vault, AWS Secrets Manager, Azure Key Vault and Alicloud Secrets Manager.
+
+## Immutable secrets
+
+You can also declare a secret as immutable to prevent it from being changed accidentally.
+
+To declare a secret as immutable:
+
+```
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import service.secret as sec
+
+sampleapp: ac.AppConfiguration {
+ workload: service.Service {
+ # ...
+ secrets: {
+ "my-secret": sec.Secret {
+ # ...
+ immutable: True
+ }
+ }
+ }
+}
+```
+
+You can change a secret from mutable to immutable but not the other way around. That is because the Kubelet will stop watching secrets that are immutable. As the name suggests, you can only delete and re-create immutable secrets but you cannot change them.
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/8-monitoring.md b/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/8-monitoring.md
new file mode 100644
index 00000000..c3dbf04c
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/8-monitoring.md
@@ -0,0 +1,102 @@
+# Application Monitoring
+
+You can also specify monitoring requirements for the application. This can be achieved via a `monitoring` module (or a bring-your-own module) in the `accessories` field of `AppConfiguration`.
+
+As of version 0.11.0, Kusion supports integration with Prometheus by managing scraping behaviors in the configuration file.
+
+:::info
+
+For the monitoring configuration to work (more specifically, to be consumed by Prometheus), the target cluster must have Prometheus installed correctly, either as a Kubernetes operator or as a server/agent.
+
+More about how to set up Prometheus can be found in the [Prometheus User Guide for Kusion](../5-user-guides/1-using-kusion-cli/3-observability/1-prometheus.md).
+:::
+
+## Import
+
+In the examples below, we are using schemas defined in the `kam` package and the `monitoring` Kusion Module. For more details on KCL package and module import, please refer to the [Configuration File Overview](overview).
+
+The `import` statements needed for the following walkthrough:
+```
+import kam.v1.app_configuration as ac
+import service
+import monitoring as m
+```
+
+## Workspace configurations
+
+In addition to the KCL configuration file, there are also workspace-level configurations that should be set first. In an ideal scenario, this step is done by the platform engineers.
+
+In the event that they do not exist for you or your organization, e.g. if you are an individual developer, you can either do it yourself or use the [default values](#default-values) provided by the KusionStack team. The steps to do this yourself can be found in the [Prometheus User Guide for Kusion](../5-user-guides/1-using-kusion-cli/3-observability/1-prometheus.md#setting-up-workspace-configs).
+
+:::info
+
+For more details on how workspaces work, please refer to the [workspace concept](../3-concepts/4-workspace/1-overview.md).
+:::
+
+By separating the configurations that developers care about from those that platform owners care about, we can reduce the cognitive complexity of the application configuration and achieve separation of concerns.
+
+You can append the following YAML to your own workspace configurations and update the corresponding workspace with the command `kusion workspace update`.
+
+```yaml
+modules:
+ kusionstack/monitoring@v0.1.0:
+ default:
+ interval: 30s
+ monitorType: Pod
+ operatorMode: true
+ scheme: http
+ timeout: 15s
+```
+
+## Managing Scraping Configuration
+To manage scrape configuration for the application:
+```
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ # ...
+ }
+ # Add the monitoring configuration backed by Prometheus
+ accessories: {
+ "monitoring": m.Prometheus {
+ path: "/metrics"
+ port: "web"
+ }
+ }
+}
+```
+
+The example above will instruct the Prometheus job to scrape metrics from the `/metrics` endpoint of the application on the port named `web`.
+
+To instruct Prometheus to scrape from `/actuator/metrics` on port `9099` instead:
+```
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ # ...
+ }
+ # Add the monitoring configuration backed by Prometheus
+ accessories: {
+ "monitoring": m.Prometheus {
+ path: "/actuator/metrics"
+ port: "9099"
+ }
+ }
+}
+```
+
+Note that numbered ports only work when your Prometheus is not running as an operator.
+
+Neither `path` nor `port` is required if Prometheus runs as an operator. If omitted, `path` defaults to `/metrics`, and `port` defaults to the container port or service port, depending on which resource is being monitored. If Prometheus does not run as an operator, both fields are required.
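+
+For instance, when Prometheus runs as an operator, a minimal sketch can omit both fields and rely on the defaults:
+
+```
+myapp: ac.AppConfiguration {
+    workload: service.Service {
+        # ...
+    }
+    accessories: {
+        # path defaults to "/metrics"; port defaults to the container or service port
+        "monitoring": m.Prometheus {}
+    }
+}
+```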
+
+Scraping scheme, interval and timeout are considered platform-managed configurations and are therefore managed as part of the [workspace configurations](../5-user-guides/1-using-kusion-cli/3-observability/1-prometheus.md#setting-up-workspace-configs).
+
+More details about how the Prometheus integration works can be found in the [design documentation](https://github.com/KusionStack/kusion/blob/main/docs/prometheus.md).
+
+## Default values
+
+If no workspace configurations are found, the default values provided by the KusionStack team are:
+- Scraping interval defaults to 30 seconds
+- Scraping timeout defaults to 15 seconds
+- Scraping scheme defaults to http
+- Defaults to NOT running as an operator
+
+If any of the default values do not meet your needs, you can change them by [setting up the workspace configuration](../5-user-guides/1-using-kusion-cli/3-observability/1-prometheus.md#setting-up-workspace-configs).
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/9-operational-rules.md b/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/9-operational-rules.md
new file mode 100644
index 00000000..674d2f2c
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/9-operational-rules.md
@@ -0,0 +1,54 @@
+---
+id: operational-rules
+---
+
+# Operational Rules
+
+You can also specify operational rule requirements for the application. This can be achieved via an `opsrule` module (or a bring-your-own module) in the `accessories` field of `AppConfiguration`. Operational rules are used as a preemptive measure to police and stop any unwanted changes.
+
+## Import
+
+In the examples below, we are using schemas defined in the `kam` package and the `opsrule` Kusion Module. For more details on KCL package and module import, please refer to the [Configuration File Overview](overview).
+
+The `import` statements needed for the following walkthrough:
+```
+import kam.v1.app_configuration as ac
+import service
+import opsrule as o
+```
+
+## Max Unavailable Replicas
+
+Currently, the `opsrule` module supports setting a `maxUnavailable` parameter, which specifies the maximum number of pods that can be unavailable at any time. It can be either a fraction of the total pods for the current application or a fixed number. This operational rule is particularly helpful against unexpected changes or deletions of the workloads. It can also prevent too many workloads from going down during an application upgrade.
+
+More rules will be available in future versions of Kusion.
+
+To set `maxUnavailable` to a percentage of pods:
+```
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ # ...
+ }
+ }
+ accessories: {
+ "opsRule": o.OpsRule {
+ maxUnavailable: "30%"
+ }
+ }
+}
+```
+
+To set `maxUnavailable` to a fixed number of pods:
+```
+myapp: ac.AppConfiguration {
+ workload: service.Service {
+ # ...
+ }
+ accessories: {
+ "opsRule": o.OpsRule {
+ maxUnavailable: 2
+ }
+ }
+}
+```
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/_category_.json b/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/_category_.json
new file mode 100644
index 00000000..64d45678
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/4-configuration-walkthrough/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Configuration Walkthrough"
+}
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/1-cloud-resources/1-database.md b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/1-cloud-resources/1-database.md
new file mode 100644
index 00000000..ecd99d4b
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/1-cloud-resources/1-database.md
@@ -0,0 +1,305 @@
+---
+id: database
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Deliver the WordPress Application with Cloud RDS
+
+This tutorial will demonstrate how to deploy a WordPress application with Kusion, which relies on both Kubernetes and IaaS resources provided by cloud vendors. In this article, we will learn how to declare a Relational Database Service (RDS) with Kusion to provide a cloud-based database for our application.
+
+## Prerequisites
+
+- Install [Kusion](../../../2-getting-started/2-getting-started-with-kusion-cli/0-install-kusion.md).
+- Install [kubectl CLI](https://kubernetes.io/docs/tasks/tools/#kubectl) and run a [Kubernetes](https://kubernetes.io/) or [k3s](https://docs.k3s.io/quick-start) or [k3d](https://k3d.io/v5.4.4/#installation) or [MiniKube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node) cluster.
+- Prepare a cloud service account and create a user with at least **VPCFullAccess** and **RDSFullAccess** related permissions to use the Relational Database Service (RDS). This kind of user can be created and managed in the Identity and Access Management (IAM) console of the cloud vendor.
+- The environment that executes `kusion` needs connectivity to the Terraform registry to download the Terraform providers.
+
+Additionally, we also need to configure the obtained AccessKey and SecretKey, as well as the cloud resource region, as environment variables for the specific cloud provider:
+
+
+
+
+```bash
+export AWS_ACCESS_KEY_ID="AKIAQZDxxxx" # replace it with your AccessKey
+export AWS_SECRET_ACCESS_KEY="oE/xxxx" # replace it with your SecretKey
+export AWS_REGION=us-east-1 # replace it with your region
+```
+
+
+
+```mdx-code-block
+
+
+```
+
+```bash
+export ALICLOUD_ACCESS_KEY="LTAI5txxx" # replace it with your AccessKey
+export ALICLOUD_SECRET_KEY="nxuowIxxx" # replace it with your SecretKey
+export ALICLOUD_REGION=cn-hangzhou # replace it with your region
+```
+
+
+
+```mdx-code-block
+
+
+```
+
+## Init Workspace
+
+To deploy the WordPress application with cloud RDS, we first need to initialize a `Workspace` for the targeted stack (here we are using `dev`). Please copy the following example YAML file to your local `workspace.yaml`.
+
+
+
+
+`workspace.yaml`
+```yaml
+# MySQL configurations for AWS RDS
+modules:
+ mysql:
+ path: oci://ghcr.io/kusionstack/mysql
+ version: 0.2.0
+ configs:
+ default:
+ cloud: aws
+ size: 20
+ instanceType: db.t3.micro
+ privateRouting: false
+ databaseName: "wordpress-mysql"
+```
+
+```mdx-code-block
+
+
+```
+
+`workspace.yaml`
+```yaml
+# MySQL configurations for Alicloud RDS
+modules:
+ mysql:
+ path: oci://ghcr.io/kusionstack/mysql
+ version: 0.2.0
+ configs:
+ default:
+ cloud: alicloud
+ size: 20
+ instanceType: mysql.n2.serverless.1c
+ category: serverless_basic
+ privateRouting: false
+ subnetID: [your-subnet-id]
+ databaseName: "wordpress-mysql"
+```
+
+```mdx-code-block
+
+
+```
+
+If you would like to try creating the Alicloud RDS instance, you should replace the `[your-subnet-id]` placeholder in the `modules.mysql.configs.default.subnetID` field with the Alicloud `vSwitchID` in which the database will be provisioned. After that, you can execute the following command to initialize the configuration for the `dev` workspace.
+
+```shell
+kusion workspace create dev -f workspace.yaml
+```
+
+Since Kusion uses the `default` workspace by default, we can switch to the `dev` workspace with the following command:
+
+```shell
+kusion workspace switch dev
+```
+
+If you have already created and are using the configuration of the `dev` workspace, you can append the MySQL module configs to your workspace YAML file and use the following command to update the workspace configuration.
+
+```shell
+kusion workspace update dev -f workspace.yaml
+```
+
+We can use the following command to show the current configurations of the `dev` workspace.
+
+```shell
+kusion workspace show
+```
+
+The `workspace.yaml` is a sample configuration file for workspace management, including `MySQL` module configs. Workspace configurations are usually declared by **Platform Engineers** and will take effect through the corresponding stack.
+
+:::info
+More details about the configuration of Workspace can be found in [Concepts of Workspace](../../../3-concepts/4-workspace/1-overview.md).
+:::
+
+## Create Project And Stack
+
+We can create a new project named `wordpress-cloud-rds` with the `kusion project create` command.
+
+```shell
+# Create a new directory and navigate into it.
+mkdir wordpress-cloud-rds && cd wordpress-cloud-rds
+
+# Create a new project with the name of the current directory.
+kusion project create
+```
+
+After creating the new project, we can create a new stack named `dev` with the `kusion stack create` command.
+
+```shell
+# Create a new stack with the specified name under current project directory.
+kusion stack create dev
+```
+
+The created project and stack structure is as follows:
+
+```shell
+tree
+.
+├── dev
+│ ├── kcl.mod
+│ ├── main.k
+│ └── stack.yaml
+└── project.yaml
+
+2 directories, 4 files
+```
+
+### Update And Review Configuration Codes
+
+The configuration code in the created stack is basically empty, so we should replace `dev/kcl.mod` and `dev/main.k` with the code below:
+
+```toml
+# dev/kcl.mod
+[dependencies]
+kam = { git = "https://github.com/KusionStack/kam.git", tag = "0.2.0" }
+service = { oci = "oci://ghcr.io/kusionstack/service", tag = "0.1.0" }
+network = { oci = "oci://ghcr.io/kusionstack/network", tag = "0.2.0" }
+mysql = { oci = "oci://ghcr.io/kusionstack/mysql", tag = "0.2.0" }
+```
+
+```python
+# dev/main.k
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import network as n
+import mysql
+
+# main.k declares customized configurations for dev stacks.
+wordpress: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ wordpress: c.Container {
+ image: "wordpress:6.3"
+ env: {
+ "WORDPRESS_DB_HOST": "$(KUSION_DB_HOST_WORDPRESS_MYSQL)"
+ "WORDPRESS_DB_USER": "$(KUSION_DB_USERNAME_WORDPRESS_MYSQL)"
+ "WORDPRESS_DB_PASSWORD": "$(KUSION_DB_PASSWORD_WORDPRESS_MYSQL)"
+ "WORDPRESS_DB_NAME": "mysql"
+ }
+ resources: {
+ "cpu": "500m"
+ "memory": "512Mi"
+ }
+ }
+ }
+ replicas: 1
+ }
+ accessories: {
+ "network": n.Network {
+ ports: [
+ n.Port {
+ port: 80
+ }
+ ]
+ }
+ "mysql": mysql.MySQL {
+ type: "cloud"
+ version: "8.0"
+ }
+ }
+}
+```
+
+## Application Delivery
+
+You can complete the delivery of the WordPress application in the `wordpress-cloud-rds/dev` folder using the following command. Kusion will watch the creation of the application resources and automatically port-forward the specified port (80) from local to the Kubernetes Service.
+
+```shell
+cd dev && kusion apply --watch
+```
+
+:::info
+During the first apply, the models and modules, as well as the Terraform CLI (if it does not exist), that the application depends on will be downloaded, so it may take some time (usually within two minutes). You can take a break and have a cup of coffee.
+:::
+
+
+
+
+
+
+```mdx-code-block
+
+
+```
+
+
+
+```mdx-code-block
+
+
+```
+
+After all the resources are reconciled, we can port-forward a local port (e.g. 12345) to the WordPress frontend service port (80) in the cluster:
+
+```shell
+kubectl port-forward -n wordpress-cloud-rds svc/wordpress-cloud-rds-dev-wordpress-private 12345:80
+```
+
+
+
+## Verify WordPress Application
+
+Next, we will verify the WordPress site service we just delivered, along with the RDS instance it depends on. We can start using the WordPress site by visiting the local-forwarded port [http://localhost:12345](http://localhost:12345) in the browser.
+
+
+
+In addition, we can also log in to the cloud service console page to view the RDS instance we just created.
+
+
+
+
+
+
+```mdx-code-block
+
+
+```
+
+
+
+```mdx-code-block
+
+
+```
+
+## Delete WordPress Application
+
+You can delete the WordPress application and related RDS resources using the following command line.
+
+```shell
+kusion destroy --yes
+```
+
+
+
+
+
+
+```mdx-code-block
+
+
+```
+
+
+
+```mdx-code-block
+
+
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/1-cloud-resources/2-expose-service.md b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/1-cloud-resources/2-expose-service.md
new file mode 100644
index 00000000..e424e47c
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/1-cloud-resources/2-expose-service.md
@@ -0,0 +1,259 @@
+---
+id: expose-service
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Expose Application Service Deployed on CSP Kubernetes
+
+Deploying applications on Kubernetes provided by a CSP (Cloud Service Provider) is convenient and reliable, and is adopted by many enterprises. Kusion integrates well with CSP Kubernetes services: you can deploy your application to the Kubernetes cluster and expose its service quite easily.
+
+This tutorial demonstrates how to expose the service of an application deployed on CSP Kubernetes, and also clearly defines the responsibilities of platform engineers and application developers.
+
+## Prerequisites
+
+Create a Kubernetes cluster provided by CSP, and complete the corresponding configurations for `KUBECONFIG`. The following CSP Kubernetes services are supported:
+
+- [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks)
+- [Alibaba Cloud Container Service for Kubernetes (ACK)](https://www.alibabacloud.com/product/kubernetes)
+
+## Expose Service Publicly
+
+If you want the application to be accessed from outside the cluster, you should expose the service publicly. Simply follow the steps below to achieve this.
+
+### Set up Workspace
+
+Create the workspace as the target where the application will be deployed. The workspace is usually set up by **Platform Engineers** and contains platform-standard, application-agnostic configurations, organized in a YAML file.
+
+
+For **EKS**:
+
+```yaml
+modules:
+ network:
+ path: oci://ghcr.io/kusionstack/network
+ version: 0.2.0
+ configs:
+ default:
+ port:
+ type: aws
+```
+
+For **ACK**:
+
+```yaml
+modules:
+ network:
+ path: oci://ghcr.io/kusionstack/network
+ version: 0.2.0
+ configs:
+ default:
+ port:
+ type: alicloud
+ annotations:
+ service.beta.kubernetes.io/alibaba-cloud-loadbalancer-spec: slb.s1.small
+```
+
+
+The YAML above gives example workspace configurations for exposing services on **EKS** and **ACK**. The `port` block contains the workspace configuration of the Kusion `network` module, which has the following fields:
+
+- type: the CSP providing the Kubernetes service; `alicloud` and `aws` are supported
+- annotations: annotations attached to the service, given as a map
+- labels: labels attached to the service, given as a map
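+
+For reference, here is a hedged sketch of a workspace configuration combining all three fields on EKS; the annotation and label keys and values below are illustrative, not required:
+
+```yaml
+modules:
+  network:
+    path: oci://ghcr.io/kusionstack/network
+    version: 0.2.0
+    configs:
+      default:
+        port:
+          type: aws
+          annotations:
+            service.beta.kubernetes.io/aws-load-balancer-type: nlb
+          labels:
+            app.kubernetes.io/managed-by: kusion
+```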
+
+Then, create the workspace with the configuration file. The following command creates a workspace named `dev` with configuration file `workspace.yaml`.
+
+```bash
+kusion workspace create dev -f workspace.yaml
+```
+
+After that, we can switch to the `dev` workspace with the following command:
+
+```shell
+kusion workspace switch dev
+```
+
+If you have already created and are using the `dev` workspace, you can append the network module configs to your workspace YAML file and update the workspace configuration with the following command:
+
+```shell
+kusion workspace update dev -f workspace.yaml
+```
+
+Use the following command to show the current configuration of the `dev` workspace:
+
+```shell
+kusion workspace show
+```
+
+
+### Init Project
+
+After creating the workspace, you should write the application configuration code, which contains only simple, application-centric configurations. This step is usually done by application developers.
+
+We can start by initializing this tutorial project with the `kusion init` command:
+
+```shell
+# Create a new directory and navigate into it.
+mkdir nginx && cd nginx
+
+# Initialize the demo project with the name of the current directory.
+kusion init
+```
+
+The created project structure looks like the following:
+
+```shell
+tree
+.
+├── dev
+│ ├── kcl.mod
+│ ├── main.k
+│ └── stack.yaml
+└── project.yaml
+
+2 directories, 4 files
+```
+
+:::info
+More details about the directory structure can be found in [Project](../../../3-concepts/1-project/1-overview.md) and [Stack](../../../3-concepts/2-stack/1-overview.md).
+:::
+
+### Update And Review Configuration Codes
+
+The initialized configuration code is for the demo quickstart application; replace `dev/kcl.mod` and `dev/main.k` with the code below:
+
+`dev/kcl.mod`
+```toml
+[package]
+name = "nginx"
+version = "0.1.0"
+
+[dependencies]
+kam = { git = "https://github.com/KusionStack/kam.git", tag = "0.2.0" }
+service = { oci = "oci://ghcr.io/kusionstack/service", tag = "0.1.0" }
+network = { oci = "oci://ghcr.io/kusionstack/network", tag = "0.2.0" }
+
+[profile]
+entries = ["main.k"]
+```
+
+`dev/main.k`
+```python
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import network as n
+
+# main.k declares customized configurations for dev stacks.
+nginx: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ nginx: c.Container {
+ image: "nginx:1.25.2"
+ resources: {
+ "cpu": "500m"
+ "memory": "512Mi"
+ }
+ }
+ }
+ replicas: 1
+ }
+ accessories: {
+ "network": n.Network {
+ ports: [
+ n.Port {
+ port: 80
+ protocol: "TCP"
+ public: True
+ }
+ ]
+ }
+ }
+}
+```
+
+The code above describes how to expose a service publicly. Kusion uses the `Port` schema to describe the network configuration; its primary fields are as follows:
+
+- port: the port number on which to expose the service
+- protocol: the protocol used to expose the service; `TCP` and `UDP` are supported
+- public: whether to expose the service publicly
+
+To expose the service publicly, set `public` to `True`. In addition, the `Service` schema should be used to describe the workload configuration.
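+
+For instance, a `Network` accessory can declare several ports with different settings. The snippet below is an illustrative sketch (not from this tutorial's example) mixing a public TCP port with a cluster-internal UDP port:
+
+```python
+accessories: {
+    "network": n.Network {
+        ports: [
+            n.Port {
+                port: 80
+                protocol: "TCP"
+                public: True
+            }
+            n.Port {
+                port: 9090
+                protocol: "UDP"
+                public: False
+            }
+        ]
+    }
+}
+```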
+
+That's all an application developer needs to configure! Next, preview and apply the configuration, and the application will be deployed and exposed publicly.
+
+:::info
+Kusion uses Load Balancer (LB) provided by the CSP to expose service publicly. For more detailed network configuration, please refer to [Application Networking](https://www.kusionstack.io/docs/kusion/configuration-walkthrough/networking)
+:::
+
+:::info
+During the first preview and apply, the models and modules as well as the Terraform CLI (if not exists) that the application depends on will be downloaded, so it may take some time (usually within two minutes). You can take a break and have a cup of coffee.
+:::
+
+### Preview and Apply
+
+Execute `kusion preview` under the stack path to see what will be created in the real infrastructure. The picture below shows the preview result of the example: a Namespace, a Service and a Deployment will be created, which meets the expectation. The service name has the suffix `public`, showing that it can be accessed publicly.
+
+
+
+Then, execute `kusion apply --yes` to do the actual deployment. With just one command and a few minutes, you have deployed the application and exposed it publicly.
+
+
+
+### Verify Accessibility
+
+In the example above, a Kubernetes Namespace named `nginx`, along with a `Service` and a `Deployment` under that Namespace, should be created. Checking with `kubectl get`, the Service of type `LoadBalancer` and the Deployment are indeed created. The Service has the `EXTERNAL-IP` 106.5.190.109, which means it can be accessed from outside the cluster.
+
+
+
+Visit the `EXTERNAL-IP` in a browser; the correct result is returned, which shows the service has been exposed publicly.
+
+
+
+## Expose Service Inside Cluster
+
+If you only need the application to be accessible inside the cluster, just set `public` to `False` in the `Port` schema. There is no need to change the workspace, which means an application developer can easily change a service's exposure scope without involving platform engineers.
+
+```python
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import network as n
+
+# main.k declares customized configurations for dev stacks.
+nginx: ac.AppConfiguration {
+ workload: service.Service {
+ ...
+ }
+ accessories: {
+ "network": n.Network {
+ ports: [
+ n.Port {
+ port: 80
+ protocol: "TCP"
+ public: False
+ }
+ ]
+ }
+ }
+}
+```
+
+Execute `kusion apply --yes`; the generated Service has the suffix `private`.
+
+
+
+And the Service type is `ClusterIP`; it only has a `CLUSTER-IP` and no `EXTERNAL-IP`, which means it cannot be accessed from outside the cluster.
+
+
+
+## Summary
+
+This tutorial demonstrated how to expose the service of an application deployed on CSP Kubernetes. With platform engineers setting up the workspace and application developers configuring the `Port` schema of the Kusion `network` module, Kusion enables you to expose services simply and efficiently.
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/1-cloud-resources/_category_.json b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/1-cloud-resources/_category_.json
new file mode 100644
index 00000000..f6f2c380
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/1-cloud-resources/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Cloud Resources"
+}
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/1-deploy-application.md b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/1-deploy-application.md
new file mode 100644
index 00000000..7ab07df3
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/1-deploy-application.md
@@ -0,0 +1,282 @@
+---
+id: deploy-application
+---
+
+# Deploy Application
+
+This guide shows you how to use the Kusion CLI to complete the deployment of an application running in Kubernetes.
+We call the abstraction of application operations and maintenance configuration `AppConfiguration`, and its instance an `Application`.
+It is essentially a configuration model that describes an application. The complete definition can be found [here](../../../6-reference/2-modules/1-developer-schemas/app-configuration.md).
+
+In production, an application generally includes at minimum several Kubernetes resources:
+
+- Namespace
+- Deployment
+- Service
+
+:::tip
+This guide requires you to have a basic understanding of Kubernetes.
+If you are not familiar with the relevant concepts, please refer to the links below:
+
+- [Learn Kubernetes Basics](https://kubernetes.io/docs/tutorials/kubernetes-basics/)
+- [Namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/)
+- [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)
+- [Service](https://kubernetes.io/docs/concepts/services-networking/service/)
+:::
+
+## Prerequisites
+
+Before we start, we need to complete the following steps:
+
+1. Install Kusion
+
+We recommend using Homebrew (macOS), Scoop (Windows), or the installation shell script to download and install Kusion.
+See [Download and Install](../../../2-getting-started/2-getting-started-with-kusion-cli/0-install-kusion.md) for more details.
+
+2. A running Kubernetes cluster
+
+There must be a running and accessible Kubernetes cluster and the [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) command line tool.
+If you don't have a cluster yet, you can use [Minikube](https://minikube.sigs.k8s.io/docs/tutorials/multi_node/) to start one of your own.
+
+## Initializing
+
+This guide is to deploy an app using Kusion, relying on the Kusion CLI and an existing Kubernetes cluster.
+
+### Initializing workspace configuration
+
+In version 0.10.0, we introduced the new concept of [workspaces](../../../3-concepts/4-workspace/1-overview.md), a logical layer whose configurations represent an opinionated set of defaults, often appointed by the platform team. In most cases a workspace corresponds to an "environment" in traditional SDLC terms. Workspaces provide a means to separate the concerns of **application developers**, who wish to focus on business logic, from those of **platform engineers**, who wish to standardize the applications on the platform.
+
+Driven by the discipline of Platform Engineering, management of workspaces, including creating, updating and deleting workspaces and their configurations, should be done by dedicated platform engineers in large software organizations to facilitate a more mature and scalable collaboration pattern.
+
+:::tip
+More on the collaboration pattern can be found in the [design doc](https://github.com/KusionStack/kusion/blob/main/docs/design/collaboration/collaboration_paradigm.md).
+:::
+
+However, if that does NOT apply to your scenario, e.g. if you work in a smaller org without platform engineers or if you are an individual developer, we hope Kusion can still be a valuable tool for delivering an application. In this guide, we are NOT distinctively highlighting the different roles or what best practice entails (the design doc above covers that), but rather the steps needed to get the Kusion tool to work.
+
+As of version 0.11.0, workspace configurations in Kusion can be managed not only as YAML files on the local filesystem, but also remotely.
+
+To initialize the workspace configuration:
+
+```bash
+~/playground$ touch ~/dev.yaml
+~/playground$ kusion workspace create dev -f ~/dev.yaml
+create workspace dev successfully
+```
+
+To verify the workspace has been created properly:
+
+```bash
+~/playground$ kusion workspace list
+- default
+- dev
+~/playground$ kusion workspace show dev
+{}
+```
+
+Note that the `show` command tells us the workspace configuration is currently empty, which is expected because we created the `dev` workspace with an empty YAML file. An empty workspace configuration will suffice in some cases where no platform configurations are needed.
+
+Kusion uses the `default` workspace by default, so we need to switch to the `dev` workspace we have just created.
+
+```bash
+~/playground$ kusion workspace switch dev
+```
+
+We will progressively add more workspace configurations throughout this user guide.
+
+### Initializing application configuration
+
+Now that workspaces are properly initialized, we can begin by initializing the application configuration:
+
+```bash
+# Create a new directory and navigate into it.
+mkdir simple-service && cd simple-service
+
+# Initialize the demo project with the name of the current directory.
+kusion init
+```
+
+The directory structure is as follows:
+
+```shell
+simple-service/
+.
+├── dev
+│ ├── kcl.mod
+│ ├── main.k
+│ └── stack.yaml
+└── project.yaml
+
+2 directories, 4 files
+```
+
+The project directory has the following files that are automatically generated:
+- `project.yaml` represents project-level configurations.
+- `dev` directory stores the customized stack configuration:
+ - `dev/main.k` stores configurations in the `dev` stack.
+ - `dev/stack.yaml` stores stack-level configurations.
+ - `dev/kcl.mod` stores stack-level dependencies.
+
+In general, the `.k` files are the KCL source code that represents the application configuration, and the `.yaml` is the static configuration file that describes behavior at the project or stack level.
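+
+As a sketch, the generated `project.yaml` is typically just a project-level name declaration (the exact generated contents may vary by Kusion version):
+
+```yaml
+# project.yaml: project-level configurations
+name: simple-service
+```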
+
+:::info
+See [Project](../../../3-concepts/1-project/1-overview.md) and [Stack](../../../3-concepts/2-stack/1-overview.md) for more details about Project and Stack.
+:::
+
+The `kusion init` command creates a demo quickstart application; we will update `dev/kcl.mod` and `dev/main.k` below.
+
+#### kcl.mod
+There should be a `kcl.mod` file generated automatically under the project directory. The `kcl.mod` file describes the dependency for the current project or stack. By default, it should contain a reference to the official [`kam` repository](https://github.com/KusionStack/kam) which holds the Kusion `AppConfiguration` and related workload model definitions that fits best practices. You can also create your own models library and reference that.
+
+You can change the package name in `kcl.mod` to `simple-service`:
+
+`dev/kcl.mod`
+```toml
+[package]
+name = "simple-service"
+version = "0.1.0"
+
+[dependencies]
+kam = { git = "https://github.com/KusionStack/kam.git", tag = "0.2.0" }
+service = { oci = "oci://ghcr.io/kusionstack/service", tag = "0.1.0" }
+network = { oci = "oci://ghcr.io/kusionstack/network", tag = "0.2.0" }
+
+[profile]
+entries = ["main.k"]
+```
+
+#### main.k
+The configuration file `main.k`, usually written by application developers, declares customized configurations for a specific stack, including an `Application` instance of the `AppConfiguration` model.
+
+You can update the `main.k` as follows:
+
+```python
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import network as n
+
+helloworld: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ "helloworld": c.Container {
+ image = "kusionstack/kusion-quickstart:latest"
+ }
+ }
+ replicas: 2
+ }
+ accessories: {
+ "network": n.Network {
+ ports: [
+ n.Port {
+ port: 80
+ }
+ ]
+ }
+ }
+}
+```
+
+## Previewing
+
+At this point, the project has been completely initialized.
+The configuration is written in KCL, not the JSON/YAML that Kubernetes recognizes, so it needs to be built to produce the final output. We can use the `kusion preview` command to preview the Kubernetes resources intended for delivery.
+
+Enter stack dir `simple-service/dev` and preview:
+
+```bash
+cd simple-service/dev && kusion preview
+```
+
+:::tip
+For instructions on the kusion command line tool, execute `kusion -h`, or refer to the tool's online [documentation](../../../6-reference/1-commands/index.md).
+:::
+
+## Applying
+
+Preview is now completed. We can apply the configuration as the next step. In the output from `kusion preview`, you can see 3 resources:
+
+- a Namespace named `simple-service`
+- a Deployment named `simple-service-dev-helloworld` in the `simple-service` namespace
+- a Service named `simple-service-dev-helloworld-private` in the `simple-service` namespace
+
+Execute command:
+
+```bash
+kusion apply
+```
+
+The output is similar to:
+
+```
+ ✔︎ Generating Spec in the Stack dev...
+Stack: dev ID Action
+* ├─ v1:Namespace:simple-service Create
+* ├─ v1:Service:simple-service:simple-service-dev-helloworld-private Create
+* └─ apps/v1:Deployment:simple-service:simple-service-dev-helloworld Create
+
+
+? Do you want to apply these diffs? yes
+Start applying diffs ...
+ SUCCESS Create v1:Namespace:simple-service success
+ SUCCESS Create v1:Service:simple-service:simple-service-dev-helloworld-private success
+ SUCCESS Create apps/v1:Deployment:simple-service:simple-service-dev-helloworld success
+Create apps/v1:Deployment:simple-service:simple-service-dev-helloworld success [3/3] ███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ 100% | 0s
+Apply complete! Resources: 3 created, 0 updated, 0 deleted.
+```
+
+After the configuration is applied successfully, you can use `kubectl` to check the actual status of these resources.
+
+1. Check Namespace
+
+```bash
+kubectl get ns
+```
+
+The output is similar to:
+
+```
+NAME STATUS AGE
+default Active 117d
+simple-service Active 38s
+kube-system Active 117d
+...
+```
+
+2. Check Deployment
+
+```bash
+kubectl get deploy -n simple-service
+```
+
+The output is similar to:
+
+```
+NAME READY UP-TO-DATE AVAILABLE AGE
+simple-service-dev-helloworld 1/1 1 1 59s
+```
+
+3. Check Service
+
+```bash
+kubectl get svc -n simple-service
+```
+
+The output is similar to:
+
+```
+NAME                                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
+simple-service-dev-helloworld-private   ClusterIP   10.98.89.104   <none>        80/TCP    79s
+```
+
+4. Validate the app
+
+Using the `kubectl` tool, forward local port `30000` to the service port `80`.
+
+```bash
+kubectl port-forward svc/simple-service-dev-helloworld-private -n simple-service 30000:80
+```
+
+Open browser and visit [http://127.0.0.1:30000](http://127.0.0.1:30000):
+
+
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/10-customize-health-policy.md b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/10-customize-health-policy.md
new file mode 100644
index 00000000..ea31e55a
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/10-customize-health-policy.md
@@ -0,0 +1,194 @@
+---
+id: health-policy
+---
+
+# Customized Health Check with KCL
+
+Kusion now offers advanced customized health checks leveraging the power of `KCL`. This robust feature empowers users to define complex and tailored health policies for their Kubernetes resources. By implementing these custom policies, you can ensure that your resources not only meet specific criteria but also satisfy complex conditions before being deemed healthy. This capability is particularly valuable when assessing the health status of Kubernetes Custom Resources (CRs), providing a flexible and precise mechanism to validate the state of your entire `project`.
+
+## Prerequisites
+
+Please refer to the [prerequisites](1-deploy-application.md#prerequisites) in the guide for deploying an application.
+
+The example below also requires you to have [initialized the project](1-deploy-application.md#initializing) using the `kusion workspace create` and `kusion init` commands, which will create a workspace and generate a [`kcl.mod` file](1-deploy-application.md#kclmod) under the stack directory.
+
+## Defining a Health Policy
+
+You can define a health policy in the `Workspace` configuration via the `healthPolicy` field. The `healthPolicy` should contain a KCL script that defines the health check logic and can be used to assert healthy conditions on your `Kubernetes` resources.
+
+Firstly, you need to initialize the workspace configuration:
+
+```shell
+~/$ touch ~/dev.yaml
+~/$ kusion workspace create dev -f ~/dev.yaml
+create workspace dev successfully
+```
+
+### Example Health Policy
+
+Here is an example of how to define a health policy for a Kubernetes Deployment. This policy checks multiple aspects of the Deployment's health status. Update `~/dev.yaml` with this example:
+
+```yaml
+modules:
+ service:
+ configs:
+ default:
+ healthPolicy:
+ health.kcl: |
+ assert res.metadata.generation == res.status.observedGeneration
+ assert res.status.replicas == 1
+```
+
+In this example, the custom health check ensures that:
+
+1. The Deployment has exactly 1 replica
+2. The observed generation matches the current generation (indicating that the latest changes have been processed)
+
+:::note
+`res` represents the Kubernetes resource being evaluated. It's a fixed expression in this feature that provides access to all fields and properties of the resource. You can use dot notation (e.g., `res.metadata.name`) to access nested fields within the resource. This allows you to create complex health checks based on various aspects of your Kubernetes resources.
+:::
+
+## How It Works
+
+When you apply your configuration, `kusion` patches the provided KCL script into the `extension` field of the specified resource in the `Spec` and uses it to evaluate the health of that resource. The health check is performed repeatedly until it passes or a timeout is reached.
+
+The KCL script has access to the full Kubernetes resource object through the `res` variable. You can use any fields or properties of the resource in your health check logic.
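+
+For example, a stricter policy could also assert on the reported replica counts. This is a sketch; the fields shown follow the standard Deployment status, but adjust the paths and counts to your workload:
+
+```yaml
+modules:
+  service:
+    configs:
+      default:
+        healthPolicy:
+          health.kcl: |
+            assert res.metadata.generation == res.status.observedGeneration
+            assert res.status.readyReplicas == res.spec.replicas
+            assert res.status.availableReplicas == res.spec.replicas
+```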
+
+Besides configuring the workspace, platform engineers can also use the `PatchHealthPolicyToExtension` function in the SDK to apply this feature while constructing a `module`. This function allows a more programmatic and flexible approach to applying health policies, especially when a module manages multiple resources.
+
+Here's a code snippet of how to use the `PatchHealthPolicyToExtension` function:
+
+```golang
+// Generate Kusion resource ID and wrap the Kubernetes Service into Kusion resource
+// with the SDK provided by kusion module framework.
+resourceID := module.KubernetesResourceID(svc.TypeMeta, svc.ObjectMeta)
+resource, err := module.WrapK8sResourceToKusionResource(resourceID, svc)
+if err != nil {
+ return nil, err
+}
+module.PatchHealthPolicyToExtension(resource, "assert res.metadata.generation == res.status.observedGeneration")
+```
+
+## Applying the Health Policy
+
+To apply the health policy, update your workspace configuration:
+
+```shell
+~/$ kusion workspace update dev -f ~/dev.yaml
+update workspace dev successfully
+```
+
+After updating the workspace configuration, apply your new configuration with the customized health check with the following commands:
+
+```shell
+~/$ cd quickstart/default
+~/quickstart/default/$ kusion apply
+ ✔︎ Generating Spec in the Stack default...
+
+Stack: default
+ID Action
+v1:Namespace:quickstart Create
+v1:Service:quickstart:quickstart-default-quickstart-private Create
+apps/v1:Deployment:quickstart:quickstart-default-quickstart Create
+
+
+Do you want to apply these diffs?:
+ > yes
+
+Start applying diffs ...
+ ✔︎ Succeeded v1:Namespace:quickstart
+ ⣽ Creating v1:Service:quickstart:quickstart-default-quickstart-private (0s)
+ ✔︎ Succeeded v1:Namespace:quickstart
+ ⢿ Creating v1:Service:quickstart:quickstart-default-quickstart-private (0s)
+ ⢿ Creating apps/v1:Deployment:quickstart:quickstart-default-quickstart (0s)
+ ......
+ ✔︎ Succeeded v1:Namespace:quickstart
+ ✔︎ Succeeded v1:Service:quickstart:quickstart-default-quickstart-private
+ ✔︎ Succeeded apps/v1:Deployment:quickstart:quickstart-default-quickstart
+Apply complete! Resources: 3 created, 0 updated, 0 deleted.
+
+[v1:Namespace:quickstart]
+Type Kind Name Detail
+READY Namespace quickstart Phase: Active
+READY ServiceAccount default Secrets: 0, Age: 0s
+[v1:Service:quickstart:quickstart-default-quickstart-private]
+Type Kind Name Detail
+READY Service quickstart-default-quickstart-private Type: ClusterIP, InternalIP: 10.96.196.38, ExternalIP: , Port(s): 8080/TCP
+READY EndpointSlice quickstart-default-quickstart-private-v42zc AddressType: IPv4, Ports: 8080, Endpoints: 10.244.1.99
+[apps/v1:Deployment:quickstart:quickstart-default-quickstart]
+Type Kind Name Detail
+READY Deployment quickstart-default-quickstart Reconciled
+READY ReplicaSet quickstart-default-quickstart-67459cd68d Desired: 1, Current: 1, Ready: 1
+READY Pod quickstart-default-quickstart-67459cd68d-jqtt7 Ready: 1/1, Status: Running, Restart: 0, Age: 4s
+```
+
+## Explanation
+
+The `Detail` column for the Deployment `quickstart-default-quickstart` provides crucial information about the resource's reconciliation status:
+
+- If it shows "Reconciled", it means the resource has successfully met the conditions defined in the health policy.
+
+```shell
+Type Kind Name Detail
+READY Deployment quickstart-default-quickstart Reconciled
+```
+
+- If it displays "Reconciling...", it indicates that the resource is still in the process of reaching the desired state as per the health policy.
+
+```shell
+Type Kind Name Detail
+MODIFIED Deployment quickstart-default-quickstart Reconciling...
+```
+
+- In case of any errors or unsupported configurations, appropriate messages will be shown, and the customized health check will be skipped.
+
+```shell
+Type Kind Name Detail
+READY Deployment quickstart-default-quickstart health policy err: invalid syntax error, skip
+```
+
+This `Detail` helps you quickly assess whether your Kubernetes resources have reached their intended state after applying changes. It's an essential feedback mechanism for ensuring the reliability and correctness of your deployments.
+
+:::note
+`Detail` showing as `Unsupported kind, skip` indicates that the health policy is not configured for this resource's health check. This can occur due to several reasons:
+
+1. There's a mismatch between the resource kind in your Kubernetes manifests and the kinds specified in your health policy.
+2. The health policy in your workspace configuration might be missing or incorrectly defined for this particular resource.
+3. You might have forgotten to update your workspace with the new configuration.
+
+To resolve this:
+
+1. Review your workspace configuration to ensure the health policy is correctly defined for all intended resource kinds.
+2. Check that the resource kind in your Kubernetes manifests matches the kinds specified in your health policy.
+
+If the issue persists, you may need to update your workspace configuration or adjust your health policy to include the specific resource kind.
+:::
+
+## Best Practices
+
+- Keep your health check logic simple and focused on key indicators of health for your specific resource.
+- Use assertions to clearly define what conditions must be true for the resource to be considered healthy.
+- Consider both the desired state (e.g., number of replicas) and the current state (e.g., available replicas) in your health checks.
+- For complex resources, you may want to check multiple conditions to ensure full health and readiness.
+
+## Limitations
+
+- The customized health check feature is currently only available for Kubernetes resources.
+- The KCL script must complete execution within a reasonable time to avoid timeouts during the apply process.
+- Errors in the KCL script syntax will cause the health check to be skipped, so be sure to test your scripts thoroughly.
+
+## Validation
+
+To verify the health policy, you can check the status of your Kubernetes resources:
+
+```bash
+kubectl get -n quickstart deployment quickstart-default-quickstart -o yaml
+```
+
+Ensure that the resource meets the conditions defined in your health policy.
+
+## Conclusion
+
+Customized health checks provide a powerful way to ensure your Kubernetes resources are in the desired state before an `apply` operation is considered complete. By defining health policies, you can automate the validation of your resources and ensure they meet specific criteria before being considered healthy. By leveraging KCL, you can create sophisticated health-check logic tailored to your specific `project` needs.
+
+For more details on KCL and its syntax, refer to the [KCL documentation](../../../4-configuration-walkthrough/2-kcl-basics.md).
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/2-container.md b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/2-container.md
new file mode 100644
index 00000000..26d2c270
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/2-container.md
@@ -0,0 +1,146 @@
+---
+id: container
+---
+
+# Configure Containers
+
+You can manage container-level configurations in the `AppConfiguration` model via the `containers` field (under the `workload` schema). By default, everything defined in the `containers` field will be treated as application containers. Sidecar containers will be supported in a future version of Kusion.
+
+For the full `Container` schema reference, please see [here](../../../6-reference/2-modules/1-developer-schemas/workload/service.md#schema-container) for more details.
+
+## Pre-requisite
+
+Please refer to the [prerequisites](1-deploy-application.md#prerequisites) in the guide for deploying an application.
+
+The example below also requires you to have [initialized the project](1-deploy-application.md#initializing) using the `kusion workspace create` and `kusion init` commands, which will create a workspace and generate a [`kcl.mod` file](1-deploy-application.md#kclmod) under the stack directory.
+
+## Managing Workspace Configuration
+
+In the last guide, we introduced a step to [initialize a workspace](1-deploy-application.md#initializing-workspace-configuration) with an empty configuration. The same empty configuration will still work in this guide, no changes are required there.
+
+However, if you (or the platform team) would like to set default values for the workloads to standardize the behavior of applications in the `dev` workspace, you can do so by updating the `~/dev.yaml`:
+
+```yaml
+modules:
+  service:
+    configs:
+      default:
+        replicas: 3
+        labels:
+          label-key: label-value
+        annotations:
+          annotation-key: annotation-value
+        type: CollaSet
+```
+
+Please note that `replicas` in the workspace configuration only acts as a default value and will be overridden by the value set in the application configuration.
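+
+For example, with the workspace default of `replicas: 3` above, an application configuration that declares its own value wins (an illustrative fragment, containers omitted):
+
+```python
+helloworld: ac.AppConfiguration {
+    workload: service.Service {
+        # ... containers omitted ...
+        replicas: 2  # overrides the workspace default of 3
+    }
+}
+```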
+
+The workspace configuration needs to be updated with the command:
+
+```bash
+kusion workspace update dev -f ~/dev.yaml
+```
+
+For a full reference of what can be configured in the workspace level, please see the [workspace reference](../../../6-reference/2-modules/2-workspace-configs/workload/service.md).
+
+## Example
+
+`simple-service/dev/main.k`:
+```python
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import service.container.probe as p
+import network as n
+
+helloworld: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ "helloworld": c.Container {
+ image = "gcr.io/google-samples/gb-frontend:v4"
+ env: {
+ "env1": "VALUE"
+ "env2": "VALUE2"
+ }
+ resources: {
+ "cpu": "500m"
+ "memory": "512Mi"
+ }
+ # Configure an HTTP readiness probe
+ readinessProbe: p.Probe {
+ probeHandler: p.Http {
+ url: "http://localhost:80"
+ }
+ initialDelaySeconds: 10
+ }
+ }
+ }
+ replicas: 2
+ }
+ accessories: {
+ "network": n.Network {
+ ports: [
+ n.Port {
+ port: 80
+ }
+ ]
+ }
+ }
+}
+```
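+
+The readiness probe `url` in the example maps onto the Kubernetes `httpGet` handler fields (host, path, port, scheme). A rough sketch of that decomposition, assuming standard URL parsing:
+
+```python
+from urllib.parse import urlparse
+
+u = urlparse("http://localhost:80")
+http_get = {
+    "host": u.hostname,          # "localhost"
+    "path": u.path or "/",       # defaults to "/" when the URL has no path
+    "port": u.port,              # 80
+    "scheme": u.scheme.upper(),  # "HTTP"
+}
+print(http_get)
+```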
+
+## Applying
+
+Re-run the steps in [Applying](1-deploy-application.md#applying) to apply the new container configuration.
+
+```
+$ kusion apply
+ ✔︎ Generating Spec in the Stack dev...
+Stack: dev ID Action
+* ├─ v1:Namespace:simple-service UnChanged
+* ├─ v1:Service:simple-service:simple-service-dev-helloworld-private UnChanged
+* └─ apps/v1:Deployment:simple-service:simple-service-dev-helloworld Update
+
+
+? Do you want to apply these diffs? yes
+Start applying diffs ...
+ SUCCESS UnChanged v1:Namespace:simple-service, skip
+ SUCCESS UnChanged v1:Service:simple-service:simple-service-dev-helloworld-private, skip
+ SUCCESS Update apps/v1:Deployment:simple-service:simple-service-dev-helloworld success
+Update apps/v1:Deployment:simple-service:simple-service-dev-helloworld success [3/3] ███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ 100% | 0s
+Apply complete! Resources: 0 created, 1 updated, 0 deleted.
+```
+
+## Validation
+
+We can verify the container (in the deployment template) now has the updated attributes as defined in the container configuration:
+```
+$ kubectl get deployment -n simple-service -o yaml
+...
+ template:
+ ...
+ spec:
+ containers:
+ - env:
+ - name: env1
+ value: VALUE
+ - name: env2
+ value: VALUE2
+ image: gcr.io/google-samples/gb-frontend:v4
+ imagePullPolicy: IfNotPresent
+ name: helloworld
+ readinessProbe:
+ failureThreshold: 3
+ httpGet:
+ host: localhost
+ path: /
+ port: 80
+ scheme: HTTP
+ initialDelaySeconds: 10
+ periodSeconds: 10
+ successThreshold: 1
+ timeoutSeconds: 1
+ resources:
+ limits:
+ cpu: 500m
+                   memory: 512Mi
+...
+```
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/3-service.md b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/3-service.md
new file mode 100644
index 00000000..223d8d56
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/3-service.md
@@ -0,0 +1,139 @@
+---
+id: service
+---
+
+# Expose Service
+
+You can determine how to expose your service in the `AppConfiguration` model via the `ports` field (under the `network` accessory). The `ports` field defines the list of `Port`s you want to expose for the application (and their corresponding listening ports on the container, if they don't match the service ports), so that the application can be consumed by others.
+
+Unless explicitly configured otherwise, each exposed port is by default exposed privately as a `ClusterIP`-type service. You can expose a port publicly by specifying the `public` field in the `Port` schema. At the moment, public access is implemented via a LoadBalancer-type service backed by cloud providers. Ingress will be supported in a future version of Kusion.
+
+For the `Port` schema reference, please see [here](../../../6-reference/2-modules/1-developer-schemas/workload/service.md#schema-port) for more details.
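+
+For illustration, a publicly exposed port would look roughly like the snippet below (a sketch based on the `public` field mentioned above; consult the `Port` schema reference for the exact field names):
+
+```python
+n.Port {
+    port: 80
+    # Expose via a cloud LoadBalancer-type Service instead of the default ClusterIP
+    public: True
+}
+```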
+
+## Prerequisites
+
+Please refer to the [prerequisites](1-deploy-application.md#prerequisites) in the guide for deploying an application.
+
+The example below also requires you to have [initialized the project](1-deploy-application.md#initializing) using the `kusion workspace create` and `kusion init` commands, which will create a workspace and generate a [`kcl.mod` file](1-deploy-application.md#kclmod) under the stack directory.
+
+## Managing Workspace Configuration
+
+In the first guide in this series, we introduced a step to [initialize a workspace](1-deploy-application.md#initializing-workspace-configuration) with an empty configuration. The same empty configuration will still work in this guide; no changes are required there.
+
+However, if you (or the platform team) would like to set default values for the services to standardize the behavior of applications in the `dev` workspace, you can do so by updating the `~/dev.yaml`:
+
+```yaml
+modules:
+ kusionstack/network@0.1.0:
+ default:
+ port:
+ type: alicloud
+ labels:
+ kusionstack.io/control: "true"
+ annotations:
+ service.beta.kubernetes.io/alibaba-cloud-loadbalancer-spec: slb.s1.small
+```
+
+The workspace configuration needs to be updated with the command:
+
+```bash
+kusion workspace update dev -f ~/dev.yaml
+```
+
+For a full reference of what can be configured in the workspace level, please see the [workspace reference](../../../6-reference/2-modules/2-workspace-configs/networking/network.md).
+
+## Example
+
+`simple-service/dev/main.k`:
+```python
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import service.container.probe as p
+import network as n
+
+"helloworld": ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ "helloworld": c.Container {
+ image = "gcr.io/google-samples/gb-frontend:v4"
+ env: {
+ "env1": "VALUE"
+ "env2": "VALUE2"
+ }
+ resources: {
+ "cpu": "500m"
+ "memory": "512Mi"
+ }
+ # Configure an HTTP readiness probe
+ readinessProbe: p.Probe {
+ probeHandler: p.Http {
+ url: "http://localhost:80"
+ }
+ initialDelaySeconds: 10
+ }
+ }
+ }
+ replicas: 2
+ }
+ accessories: {
+ "network": n.Network {
+ ports: [
+ n.Port {
+ port: 8080
+ targetPort: 80
+ }
+ ]
+ }
+ }
+}
+```
+
+The code above changes the exposed service port from `80` in the last guide to `8080`, while still targeting container port `80`, because that is what the application listens on.
+
+## Applying
+
+Re-run the steps in [Applying](1-deploy-application.md#applying) to apply the new service configuration.
+
+```
+$ kusion apply
+ ✔︎ Generating Spec in the Stack dev...
+Stack: dev ID Action
+* ├─ v1:Namespace:simple-service UnChanged
+* ├─ v1:Service:simple-service:simple-service-dev-helloworld-private Update
+* └─ apps/v1:Deployment:simple-service:simple-service-dev-helloworld UnChanged
+
+
+? Do you want to apply these diffs? yes
+Start applying diffs ...
+ SUCCESS UnChanged v1:Namespace:simple-service, skip
+ SUCCESS Update v1:Service:simple-service:simple-service-dev-helloworld-private success
+ SUCCESS UnChanged apps/v1:Deployment:simple-service:simple-service-dev-helloworld, skip
+UnChanged apps/v1:Deployment:simple-service:simple-service-dev-helloworld, skip [3/3] ██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ 100% | 0s
+Apply complete! Resources: 0 created, 1 updated, 0 deleted.
+```
+
+## Validation
+
+We can verify the Kubernetes service now has the updated attributes (mapping service port 8080 to container port 80) as defined in the `ports` configuration:
+
+```
+kubectl get svc -n simple-service -o yaml
+...
+ spec:
+ ...
+ ports:
+ - name: simple-service-dev-helloworld-private-8080-tcp
+ port: 8080
+ protocol: TCP
+ targetPort: 80
+...
+```
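+
+The generated port name above appears to follow a `<service-name>-<port>-<protocol>` pattern. A hypothetical helper illustrating that pattern (inferred from this output, not a documented contract):
+
+```python
+def port_name(service_name, port, protocol="tcp"):
+    # Inferred pattern: "<service-name>-<port>-<protocol>"
+    return f"{service_name}-{port}-{protocol}"
+
+print(port_name("simple-service-dev-helloworld-private", 8080))
+# simple-service-dev-helloworld-private-8080-tcp
+```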
+
+Forward local port 30000 to service port 8080:
+```
+kubectl port-forward svc/simple-service-dev-helloworld-private -n simple-service 30000:8080
+```
+
+Open a browser and visit [http://127.0.0.1:30000](http://127.0.0.1:30000); the application should be up and running.
+
+
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/4-image-upgrade.md b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/4-image-upgrade.md
new file mode 100644
index 00000000..8240aeec
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/4-image-upgrade.md
@@ -0,0 +1,78 @@
+---
+id: image-upgrade
+---
+
+# Upgrade Image
+
+You can declare the application's container image via `image` field of the `Container` schema.
+
+For the full `Container` schema reference, please see [here](../../../6-reference/2-modules/1-developer-schemas/workload/service.md#schema-container) for more details.
+
+## Prerequisites
+
+Please refer to the [prerequisites](1-deploy-application.md#prerequisites) in the guide for deploying an application.
+
+The example below also requires you to have [initialized the project](1-deploy-application.md#initializing) using the `kusion workspace create` and `kusion init` commands, which will create a workspace and generate a [`kcl.mod` file](1-deploy-application.md#kclmod) under the stack directory.
+
+## Managing Workspace Configuration
+
+In the first guide in this series, we introduced a step to [initialize a workspace](1-deploy-application.md#initializing-workspace-configuration) with an empty configuration. The same empty configuration will still work in this guide; no changes are required there.
+
+## Example
+
+Update the image value in `simple-service/dev/main.k`:
+```python
+import kam.v1.app_configuration as ac
+
+helloworld: ac.AppConfiguration {
+ workload.containers.nginx: {
+ ...
+ # before:
+ # image = "gcr.io/google-samples/gb-frontend:v4"
+ # after:
+ image = "gcr.io/google-samples/gb-frontend:v5"
+ ...
+ }
+}
+```
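+
+Since only the tag portion of the image reference changes, the upgrade amounts to a tag bump on the same repository:
+
+```python
+image = "gcr.io/google-samples/gb-frontend:v5"
+# rpartition splits on the last ":" so registry ports are unaffected.
+repository, _, tag = image.rpartition(":")
+print(repository)  # gcr.io/google-samples/gb-frontend
+print(tag)         # v5
+```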
+
+Everything else in `main.k` stays the same.
+
+## Applying
+
+Re-run the steps in [Applying](1-deploy-application.md#applying) to roll out the new image.
+
+```
+$ kusion apply
+ ✔︎ Generating Spec in the Stack dev...
+Stack: dev ID Action
+* ├─ v1:Namespace:simple-service UnChanged
+* ├─ v1:Service:simple-service:simple-service-dev-helloworld-private UnChanged
+* └─ apps/v1:Deployment:simple-service:simple-service-dev-helloworld Update
+
+
+? Do you want to apply these diffs? yes
+Start applying diffs ...
+ SUCCESS UnChanged v1:Namespace:simple-service, skip
+ SUCCESS UnChanged v1:Service:simple-service:simple-service-dev-helloworld-private, skip
+ SUCCESS Update apps/v1:Deployment:simple-service:simple-service-dev-helloworld success
+Update apps/v1:Deployment:simple-service:simple-service-dev-helloworld success [3/3] ███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ 100% | 0s
+Apply complete! Resources: 0 created, 1 updated, 0 deleted.
+```
+
+## Validation
+
+We can verify the application container (in the deployment template) now has the updated image (v5) as defined in the container configuration:
+```
+kubectl get deployment -n simple-service -o yaml
+...
+ template:
+ ...
+ spec:
+ containers:
+ - env:
+ ...
+ image: gcr.io/google-samples/gb-frontend:v5
+ ...
+...
+```
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/5-resource-spec.md b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/5-resource-spec.md
new file mode 100644
index 00000000..bae6d5bc
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/5-resource-spec.md
@@ -0,0 +1,90 @@
+---
+id: resource-spec
+---
+
+# Configure Resource Specification
+
+You can manage container-level resource specification in the `AppConfiguration` model via the `resources` field (under the `Container` schema).
+
+For the full `Container` schema reference, please see [here](../../../6-reference/2-modules/1-developer-schemas/workload/service.md#schema-container) for more details.
+
+## Prerequisites
+
+Please refer to the [prerequisites](1-deploy-application.md#prerequisites) in the guide for deploying an application.
+
+The example below also requires you to have [initialized the project](1-deploy-application.md#initializing) using the `kusion workspace create` and `kusion init` commands, which will create a workspace and generate a [`kcl.mod` file](1-deploy-application.md#kclmod) under the stack directory.
+
+## Managing Workspace Configuration
+
+In the first guide in this series, we introduced a step to [initialize a workspace](1-deploy-application.md#initializing-workspace-configuration) with an empty configuration. The same empty configuration will still work in this guide; no changes are required there.
+
+## Example
+
+Update the resources value in `simple-service/dev/main.k`:
+
+```py
+import kam.v1.app_configuration as ac
+
+helloworld: ac.AppConfiguration {
+ workload.containers.helloworld: {
+ ...
+ # before:
+ # resources: {
+ # "cpu": "500m"
+ # "memory": "512M"
+ # }
+ # after:
+ resources: {
+ "cpu": "250m"
+ "memory": "256Mi"
+ }
+ ...
+ }
+}
+```
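+
+Note the suffix change from `512M` to `256Mi`: in Kubernetes resource quantities, `Mi` is a binary (powers-of-1024) suffix while `M` is decimal. A quick sketch of the difference:
+
+```python
+def to_bytes(quantity):
+    # Binary suffixes (Ki/Mi/Gi) are powers of 1024; decimal ones (k/M/G) are powers of 1000.
+    units = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "k": 10**3, "M": 10**6, "G": 10**9}
+    for suffix in sorted(units, key=len, reverse=True):  # match "Mi" before "M"
+        if quantity.endswith(suffix):
+            return int(quantity[:-len(suffix)]) * units[suffix]
+    return int(quantity)
+
+print(to_bytes("256Mi"))  # 268435456
+print(to_bytes("512M"))   # 512000000
+```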
+
+Everything else in `main.k` stays the same.
+
+## Applying
+
+Re-run the steps in [Applying](1-deploy-application.md#applying) to apply the updated resource specification.
+
+```
+$ kusion apply
+ ✔︎ Generating Spec in the Stack dev...
+Stack: dev ID Action
+* ├─ v1:Namespace:simple-service UnChanged
+* ├─ v1:Service:simple-service:simple-service-dev-helloworld-private UnChanged
+* └─ apps/v1:Deployment:simple-service:simple-service-dev-helloworld Update
+
+
+? Do you want to apply these diffs? yes
+Start applying diffs ...
+ SUCCESS UnChanged v1:Namespace:simple-service, skip
+ SUCCESS UnChanged v1:Service:simple-service:simple-service-dev-helloworld-private, skip
+ SUCCESS Update apps/v1:Deployment:simple-service:simple-service-dev-helloworld success
+Update apps/v1:Deployment:simple-service:simple-service-dev-helloworld success [3/3] ███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ 100% | 0s
+Apply complete! Resources: 0 created, 1 updated, 0 deleted.
+```
+
+## Validation
+
+We can verify the application container (in the deployment template) now has the updated resources attributes (cpu:250m, memory:256Mi) as defined in the container configuration:
+
+```
+kubectl get deployment -n simple-service -o yaml
+...
+ template:
+ ...
+ spec:
+ containers:
+ - env:
+ ...
+ image: gcr.io/google-samples/gb-frontend:v5
+ ...
+ resources:
+ limits:
+ cpu: 250m
+ memory: 256Mi
+...
+```
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/6-set-up-operational-rules.md b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/6-set-up-operational-rules.md
new file mode 100644
index 00000000..c585f370
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/6-set-up-operational-rules.md
@@ -0,0 +1,86 @@
+---
+id: set-up-operational-rules
+---
+
+# Set up Operational Rules
+
+You can set up operational rules in the `AppConfiguration` model with the `opsrule` accessory and corresponding platform configurations in the workspace directory. The `opsrule` is the collection of operational rule requirements for the application that are used as a preemptive measure to police and stop any unwanted changes.
+
+## Prerequisites
+
+Please refer to the [prerequisites](1-deploy-application.md#prerequisites) in the guide for deploying an application.
+
+The example below also requires you to have [initialized the project](1-deploy-application.md#initializing) using the `kusion workspace create` and `kusion init` commands, which will create a workspace and generate a [`kcl.mod` file](1-deploy-application.md#kclmod) under the stack directory.
+
+## Managing Workspace Configuration
+
+In the first guide in this series, we introduced a step to [initialize a workspace](1-deploy-application.md#initializing-workspace-configuration) with an empty configuration. The same empty configuration will still work in this guide; no changes are required there.
+
+However, if you (or the platform team) would like to set default values for the opsrule to standardize the behavior of applications, you can do so by updating the `~/dev.yaml`.
+Note that if the platform engineers set the default workload type to [CollaSet](https://github.com/KusionStack/operating) and have installed the KusionStack Operating controllers properly, the `opsrule` module will generate a [PodTransitionRule](https://www.kusionstack.io/docs/operating/manuals/podtransitionrule) instead of updating the `maxUnavailable` value in the deployment:
+
+```yaml
+modules:
+ service:
+ default:
+ type: CollaSet
+ kusionstack/opsrule@0.1.0:
+ default:
+ maxUnavailable: 30%
+```
+
+Please note that the `maxUnavailable` in the workspace configuration only works as a default value and will be overridden by the value set in the application configuration.
+
+The workspace configuration needs to be updated with the command:
+
+```bash
+kusion workspace update dev -f ~/dev.yaml
+```
+
+## Example
+
+Add the `opsrule` module dependency to `kcl.mod`:
+
+```shell
+[package]
+name = "simple-service"
+version = "0.1.0"
+
+[dependencies]
+kam = { git = "https://github.com/KusionStack/kam.git", tag = "0.2.0" }
+service = { oci = "oci://ghcr.io/kusionstack/service", tag = "0.1.0" }
+network = { oci = "oci://ghcr.io/kusionstack/network", tag = "0.2.0" }
+opsrule = { oci = "oci://ghcr.io/kusionstack/opsrule", tag = "0.1.0" }
+
+[profile]
+entries = ["main.k"]
+```
+
+Add the `opsrule` snippet to the `AppConfiguration` in `simple-service/dev/main.k`:
+
+```py
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import opsrule
+
+helloworld: ac.AppConfiguration {
+ workload: service.Service {
+ ...
+ }
+ # Configure the maxUnavailable rule
+ accessories: {
+ "opsrule": opsrule.OpsRule {
+            maxUnavailable: "50%"
+ }
+ }
+}
+```
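+
+For intuition on what `maxUnavailable: 50%` means in absolute terms: Kubernetes resolves the percentage against the replica count and rounds down (a sketch of Kubernetes' behavior, not Kusion-specific code):
+
+```python
+import math
+
+def max_unavailable_pods(replicas, value):
+    # Percentages are resolved against the replica count and rounded down.
+    if isinstance(value, str) and value.endswith("%"):
+        return math.floor(replicas * int(value.rstrip("%")) / 100)
+    return int(value)
+
+print(max_unavailable_pods(2, "50%"))   # 1
+print(max_unavailable_pods(10, "30%"))  # 3
+```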
+
+## Applying
+
+Re-run the steps in [Applying](1-deploy-application.md#applying) to apply the operational rules.
+
+## Validation
+
+We can verify that the application's rollout strategy now has the updated attribute `maxUnavailable: 50%` as defined in the application configuration.
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/7-job.md b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/7-job.md
new file mode 100644
index 00000000..dec8a530
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/7-job.md
@@ -0,0 +1,146 @@
+---
+id: job
+---
+
+# Schedule a Job
+
+The guides above provide examples of how to configure workloads of the type `service.Service`, which is typically used for long-running web applications that should **never** go down. Alternatively, you can schedule another kind of workload, namely `job.Job`, which corresponds to a one-off or recurring execution of tasks that run to completion and then stop.
+
+## Prerequisites
+
+Please refer to the [prerequisites](1-deploy-application.md#prerequisites) in the guide for deploying an application.
+
+The example below also requires you to have [initialized the project](1-deploy-application.md#initializing) using the `kusion workspace create` and `kusion init` commands, which will create a workspace and generate a [`kcl.mod` file](1-deploy-application.md#kclmod) under the stack directory.
+
+## Managing Workspace Configuration
+
+In the first guide in this series, we introduced a step to [initialize a workspace](1-deploy-application.md#initializing-workspace-configuration) with an empty configuration. The same empty configuration will still work in this guide; no changes are required there. Alternatively, if you have updated your workspace config in the previous guides, no changes need to be made either.
+
+However, if you (or the platform team) would like to set default values for the workloads to standardize the behavior of applications in the `dev` workspace, you can do so by updating the `~/dev.yaml`:
+
+```yaml
+modules:
+ service:
+ default:
+ replicas: 3
+ labels:
+ label-key: label-value
+ annotations:
+ annotation-key: annotation-value
+```
+
+Please note that the `replicas` in the workspace configuration only works as a default value and will be overridden by the value set in the application configuration.
+
+The workspace configuration needs to be updated with the command:
+
+```bash
+kusion workspace update dev -f ~/dev.yaml
+```
+
+For a full reference of what can be configured in the workspace level, please see the [workspace reference](../../../6-reference/2-modules/2-workspace-configs/workload/job.md).
+
+## Example
+
+To schedule a job with cron expression, update `simple-service/dev/kcl.mod` and `simple-service/dev/main.k` to the following:
+
+`simple-service/dev/kcl.mod`:
+```py
+[package]
+name = "simple-service"
+version = "0.1.0"
+
+[dependencies]
+kam = { git = "https://github.com/KusionStack/kam.git", tag = "0.2.0" }
+job = { oci = "oci://ghcr.io/kusionstack/job", tag = "0.1.0" }
+network = { oci = "oci://ghcr.io/kusionstack/network", tag = "0.2.0" }
+
+[profile]
+entries = ["main.k"]
+```
+
+`simple-service/dev/main.k`:
+```py
+import kam.v1.app_configuration as ac
+import job
+import job.container as c
+
+helloworld: ac.AppConfiguration {
+ workload: job.Job {
+ containers: {
+ "busybox": c.Container {
+ # The target image
+ image: "busybox:1.28"
+ # Run the following command as defined
+ command: ["/bin/sh", "-c", "echo hello"]
+ }
+ }
+ # Run every minute.
+ schedule: "* * * * *"
+ }
+}
+```
+
+The KCL snippet above schedules a job. Alternatively, if you want a one-time job without cron, simply remove the `schedule` from the configuration.
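+
+The `schedule` field uses standard five-field cron syntax; the fields are, in order, minute, hour, day of month, month, and day of week:
+
+```python
+expr = "* * * * *"  # minute hour day-of-month month day-of-week
+minute, hour, day_of_month, month, day_of_week = expr.split()
+# "*" in every position means the job fires every minute.
+print([minute, hour, day_of_month, month, day_of_week])
+```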
+
+You can find the full example in the [konfig repo](https://github.com/KusionStack/konfig/tree/main/example/simple-job).
+
+## Applying
+
+Re-run the steps in [Applying](1-deploy-application.md#applying) to schedule the job. Your output might look like one of the following:
+
+If you are starting from scratch, all resources are created on the spot:
+
+```
+$ kusion apply
+ ✔︎ Generating Spec in the Stack dev...
+Stack: dev ID Action
+* ├─ v1:Namespace:simple-service Create
+* └─ batch/v1:CronJob:simple-service:simple-service-dev-helloworld Create
+
+
+? Do you want to apply these diffs? yes
+Start applying diffs ...
+ SUCCESS Create v1:Namespace:simple-service success
+ SUCCESS Create batch/v1:CronJob:simple-service:simple-service-dev-helloworld success
+Create batch/v1:CronJob:simple-service:simple-service-dev-helloworld success [2/2] ██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ 100% | 0s
+Apply complete! Resources: 2 created, 0 updated, 0 deleted.
+```
+
+If you are starting from the last guide, which configures an `opsrule`, the output looks like the following, destroying the `Deployment` and `Service` and replacing them with a `CronJob`:
+
+```
+$ kusion apply
+ ✔︎ Generating Spec in the Stack dev...
+Stack: dev ID Action
+* ├─ v1:Namespace:simple-service UnChanged
+* ├─ batch/v1:CronJob:simple-service:simple-service-dev-helloworld Create
+* ├─ apps/v1:Deployment:simple-service:simple-service-dev-helloworld Delete
+* └─ v1:Service:simple-service:simple-service-dev-helloworld-private Delete
+
+
+? Do you want to apply these diffs? yes
+Start applying diffs ...
+ SUCCESS UnChanged v1:Namespace:simple-service, skip
+ SUCCESS Delete apps/v1:Deployment:simple-service:simple-service-dev-helloworld success
+ SUCCESS Create batch/v1:CronJob:simple-service:simple-service-dev-helloworld success
+ SUCCESS Delete v1:Service:simple-service:simple-service-dev-helloworld-private success
+Delete v1:Service:simple-service:simple-service-dev-helloworld-private success [4/4] ███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ 100% | 0s
+Apply complete! Resources: 1 created, 0 updated, 2 deleted.
+```
+
+## Validation
+
+We can verify the job has now been scheduled:
+
+```shell
+$ kubectl get cronjob -n simple-service
+NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
+simple-service-dev-helloworld * * * * * False 0 2m18s
+```
+
+Verify the job has been triggered after the minute mark since we scheduled it to run every minute:
+```shell
+$ kubectl get job -n simple-service
+NAME COMPLETIONS DURATION AGE
+simple-service-dev-helloworld-28415748 1/1 5s 11s
+```
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/9-k8s-manifest.md b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/9-k8s-manifest.md
new file mode 100644
index 00000000..e30b0244
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/9-k8s-manifest.md
@@ -0,0 +1,208 @@
+---
+id: k8s-manifest
+---
+
+# Apply the Raw K8s Manifest YAML
+
+The guides above provide examples of how to configure workloads and accessories with KCL and generate the related Kubernetes resources with Kusion Module generators. This is the usage we recommend, as it achieves a separation of concerns between developers and platform engineers and reduces the cognitive burden on developers.
+
+However, in some specific scenarios, users may also need to use Kusion directly to apply and manage raw Kubernetes manifest YAML files, for example when taking over existing resources or deploying CRDs (CustomResourceDefinitions) and other special resources.
+
+To help users directly apply raw K8s manifests, the KusionStack community has provided the [k8s_manifest](../../../6-reference/2-modules/1-developer-schemas/k8s_manifest/k8s_manifest.md) Kusion Module.
+
+:::info
+The module definition and implementation, as well as the example can be found at [here](https://github.com/KusionStack/catalog/tree/main/modules/k8s_manifest).
+:::
+
+## Prerequisites
+
+Please refer to the [prerequisites](1-deploy-application.md#prerequisites) in the guide for deploying an application.
+
+The example below also requires you to have [initialized the project](1-deploy-application.md#initializing) using the `kusion workspace create`, `kusion project create` and `kusion stack create` commands, which will create a workspace and project, and also generate a [kcl.mod](1-deploy-application.md#kclmod) file under the stack directory.
+
+## Managing Workspace Configuration
+
+In the first guide in this series, we introduced a step to [initialize a workspace](1-deploy-application.md#initializing-workspace-configuration) with an empty configuration. The same empty configuration will still work in this guide; no changes are required there. Alternatively, if you have updated your workspace config in the previous guides, no changes need to be made either.
+
+However, if you (or the platform team) would like to set some default paths for the raw K8s manifest YAML files to standardize the behavior of applications in the `dev` workspace, you can do so by updating the `dev.yaml` with the following config block:
+
+```yaml
+modules:
+ k8s_manifest:
+ path: oci://ghcr.io/kusionstack/k8s_manifest
+ version: 0.1.0
+ configs:
+ default:
+ # The default paths to apply for the raw K8s manifest YAML files.
+ paths:
+ - /path/to/k8s_manifest.yaml
+ - /dir/to/k8s_manifest/
+```
+
+Please note that the `paths` declared by the platform engineers in the workspace configs will be merged with the ones declared by the developers in the `AppConfiguration` in `main.k`.
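+
+Conceptually, the merge behaves like an ordered union of the two path lists (an illustration of the stated semantics; the deduplication behavior is an assumption, not Kusion's actual implementation):
+
+```python
+workspace_paths = ["/path/to/k8s_manifest.yaml", "/dir/to/k8s_manifest/"]
+app_paths = ["./test.yaml", "/path/to/k8s_manifest.yaml"]
+
+# Union of both lists, keeping order and dropping duplicates (assumed behavior).
+merged = workspace_paths + [p for p in app_paths if p not in workspace_paths]
+print(merged)
+```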
+
+The workspace configuration needs to be updated with the command:
+
+```bash
+kusion workspace update dev -f dev.yaml
+```
+
+## Example
+
+To apply the specified raw K8s manifest YAML files with the `k8s_manifest` module, please use the `v0.2.1` version of `kam`, in which `workload` is no longer a required field in the `AppConfiguration` model. An example is shown below:
+
+`kcl.mod`:
+```py
+[dependencies]
+kam = { git = "https://github.com/KusionStack/kam.git", tag = "v0.2.1" }
+k8s_manifest = { oci = "oci://ghcr.io/kusionstack/k8s_manifest", tag = "0.1.0" }
+```
+
+`stack.yaml`:
+```yaml
+# Generate a specified namespace
+name: dev
+extensions:
+ - kind: kubernetesNamespace
+ kubernetesNamespace:
+ namespace: test
+```
+
+`main.k`:
+```py
+import kam.v1.app_configuration as ac
+import k8s_manifest
+
+test: ac.AppConfiguration {
+ accessories: {
+ "k8s_manifests": k8s_manifest.K8sManifest {
+ paths: [
+ # The `test.yaml` should be placed under the stack directory,
+ # as it is declared using a relative path.
+ "./test.yaml"
+ ]
+ }
+ }
+}
+```
+
+`test.yaml`:
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx-deployment
+ namespace: test
+ labels:
+ app: nginx
+spec:
+ replicas: 3
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - name: nginx
+ image: nginx:1.14.2
+ ports:
+ - containerPort: 80
+```
+
+## Generating and Applying
+
+Execute the `kusion generate` command, and the `Deployment` in `test.yaml` will be generated into a Kusion `Resource` with a Kusion ID in the `Spec`:
+
+```
+➜ dev git:(main) ✗ kusion generate
+ ✔︎ Generating Spec in the Stack dev...
+resources:
+ - id: v1:Namespace:test
+ type: Kubernetes
+ attributes:
+ apiVersion: v1
+ kind: Namespace
+ metadata:
+ creationTimestamp: null
+ name: test
+ spec: {}
+ status: {}
+ extensions:
+ GVK: /v1, Kind=Namespace
+ - id: apps/v1:Deployment:test:nginx-deployment
+ type: Kubernetes
+ attributes:
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ labels:
+ app: nginx
+ name: nginx-deployment
+ namespace: test
+ spec:
+ replicas: 3
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - image: nginx:1.14.2
+ name: nginx
+ ports:
+ - containerPort: 80
+ dependsOn:
+ - v1:Namespace:test
+secretStore: null
+context: {}
+```
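+
+The resource IDs in the spec above follow a colon-separated `apiVersion:Kind[:namespace]:name` pattern, with cluster-scoped resources like `Namespace` omitting the namespace segment (a sketch inferred from this output, not an official format spec):
+
+```python
+def resource_id(api_version, kind, namespace, name):
+    # Cluster-scoped resources (like Namespace) omit the namespace segment.
+    parts = [api_version, kind] + ([namespace] if namespace else []) + [name]
+    return ":".join(parts)
+
+print(resource_id("apps/v1", "Deployment", "test", "nginx-deployment"))
+# apps/v1:Deployment:test:nginx-deployment
+print(resource_id("v1", "Namespace", None, "test"))
+# v1:Namespace:test
+```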
+
+Execute the `kusion apply` command; you may get output like the following:
+
+```
+➜ dev git:(main) ✗ kusion apply
+ ✔︎ Generating Spec in the Stack dev...
+Stack: dev
+ID Action
+v1:Namespace:test Create
+apps/v1:Deployment:test:nginx-deployment Create
+
+
+Do you want to apply these diffs?:
+ > yes
+
+Start applying diffs ...
+ ✔︎ Succeeded v1:Namespace:test
+ ✔︎ Succeeded apps/v1:Deployment:test:nginx-deployment
+Apply complete! Resources: 2 created, 0 updated, 0 deleted.
+
+[v1:Namespace:test]
+Type Kind Name Detail
+READY Namespace test Phase: Active
+[apps/v1:Deployment:test:nginx-deployment]
+Type Kind Name Detail
+READY Deployment nginx-deployment Ready: 3/3, Up-to-date: 3, Available: 3
+READY ReplicaSet nginx-deployment-7fb96c846b Desired: 3, Current: 3, Ready: 3
+READY Pod nginx-deployment-7fb96c846b-d9pp4 Ready: 1/1, Status: Running, Restart: 0, Age: 2s
+```
+
+## Validation
+
+We can verify the `Deployment` and `Pod` we have just applied:
+
+```shell
+➜ dev git:(main) ✗ kubectl get deployment -n test
+NAME READY UP-TO-DATE AVAILABLE AGE
+nginx-deployment 3/3 3 3 70s
+➜ dev git:(main) ✗ kubectl get pod -n test
+NAME READY STATUS RESTARTS AGE
+nginx-deployment-7fb96c846b-d9pp4 1/1 Running 0 87s
+nginx-deployment-7fb96c846b-j45nt 1/1 Running 0 87s
+nginx-deployment-7fb96c846b-tnz5f 1/1 Running 0 87s
+```
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/_category_.json b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/_category_.json
new file mode 100644
index 00000000..79d3c6c5
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/2-working-with-k8s/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Kubernetes"
+}
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/3-observability/1-prometheus.md b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/3-observability/1-prometheus.md
new file mode 100644
index 00000000..d1dddc13
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/3-observability/1-prometheus.md
@@ -0,0 +1,327 @@
+---
+id: prometheus
+---
+
+# Configure Monitoring Behavior With Prometheus
+
+This document provides the step-by-step instruction to set up monitoring for your application.
+
+As of today, Kusion supports the configuration of Prometheus scraping behaviors for the target application. In the future, we will add more cloud-provider-native solutions, such as AWS CloudWatch, Azure Monitor, etc.
+
+The example in this user guide creates the following components:
+
+- Namespace
+- Deployment
+- Service
+- ServiceMonitor
+
+:::tip
+
+This guide requires you to have a basic understanding of Kubernetes and Prometheus.
+If you are not familiar with the relevant concepts, please refer to the links below:
+
+- [Learn Kubernetes Basics](https://kubernetes.io/docs/tutorials/kubernetes-basics/)
+- [Prometheus Introduction](https://prometheus.io/docs/introduction/overview/)
+:::
+
+## Prerequisites
+Please refer to the [prerequisites](../2-working-with-k8s/1-deploy-application.md#prerequisites) in the guide for deploying an application.
+
+The example below also requires you to have [initialized the project](../2-working-with-k8s/1-deploy-application.md#initializing) using the `kusion init` command, which will generate a [`kcl.mod` file](../2-working-with-k8s/1-deploy-application.md#kclmod) under the project directory.
+
+## Setting up your own Prometheus
+
+There are quite a few ways to set up Prometheus in your cluster:
+
+1. Installing a Prometheus operator
+2. Installing a standalone Prometheus server
+3. Installing a Prometheus agent that connects to a remote Prometheus server
+
+[The advice from the Prometheus team](https://github.com/prometheus-operator/prometheus-operator/issues/1547#issuecomment-401092041) is to use the `ServiceMonitor` or `PodMonitor` CRs via the Prometheus operator to manage scrape configs going forward[2].
+
+In any case, you only have to do this setup once per cluster. This doc uses a minikube cluster and the Prometheus operator as an example.
+
+### Installing the Prometheus operator[3]
+To get the example in this user guide working, all you need is a running Prometheus operator. You can have that installed by running:
+```shell
+LATEST=$(curl -s https://api.github.com/repos/prometheus-operator/prometheus-operator/releases/latest | jq -cr .tag_name)
+curl -sL https://github.com/prometheus-operator/prometheus-operator/releases/download/${LATEST}/bundle.yaml | kubectl create -f -
+```
+
+This will install all the necessary CRDs and the Prometheus operator itself in the default namespace. After a few minutes, you can confirm the operator is up by running:
+```shell
+kubectl wait --for=condition=Ready pods -l app.kubernetes.io/name=prometheus-operator -n default
+```
+
+### Make sure RBAC is properly set up
+If you have RBAC enabled on the cluster, the following must be created for Prometheus to work properly:
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: prometheus
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: prometheus
+rules:
+- apiGroups: [""]
+ resources:
+ - nodes
+ - nodes/metrics
+ - services
+ - endpoints
+ - pods
+ verbs: ["get", "list", "watch"]
+- apiGroups: [""]
+ resources:
+ - configmaps
+ verbs: ["get"]
+- apiGroups:
+ - networking.k8s.io
+ resources:
+ - ingresses
+ verbs: ["get", "list", "watch"]
+- nonResourceURLs: ["/metrics"]
+ verbs: ["get"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: prometheus
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: prometheus
+subjects:
+- kind: ServiceAccount
+ name: prometheus
+ namespace: default
+```
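+
+Assuming you save the manifest above as `rbac.yaml` (the filename is just an example), you can create these resources with:
+
+```shell
+kubectl apply -f rbac.yaml
+```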
+
+### Configure Prometheus instance via the operator
+Once all of the above is set up, you can then configure the Prometheus instance via the operator:
+```yaml
+apiVersion: monitoring.coreos.com/v1
+kind: Prometheus
+metadata:
+ name: prometheus
+spec:
+ serviceAccountName: prometheus
+ serviceMonitorNamespaceSelector: {}
+ serviceMonitorSelector: {}
+ podMonitorNamespaceSelector: {}
+ podMonitorSelector: {}
+ resources:
+ requests:
+ memory: 400Mi
+```
+The Prometheus instance above is cluster-wide, picking up ALL the service monitors and pod monitors across ALL the namespaces.
+You can adjust the requests and limits accordingly if you have a larger cluster.
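+
+As with the RBAC manifest, you can save the `Prometheus` CR above to a file (e.g. `prometheus.yaml` — the filename is illustrative) and create it with kubectl, then wait for the managed instance to come up:
+
+```shell
+kubectl apply -f prometheus.yaml
+# the label selector may vary with the operator version
+kubectl wait --for=condition=Ready pods -l app.kubernetes.io/name=prometheus -n default
+```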
+
+### Exposing the Prometheus portal (optional)
+Once you have the managed Prometheus instance created via the Prometheus CR above, you should be able to see a service created called `prometheus-operated`:
+
+
+
+If you are also running on minikube, you can expose it on your localhost via kubectl:
+```shell
+kubectl port-forward svc/prometheus-operated 9099:9090
+```
+
+You should then be able to see the Prometheus portal via `localhost:9099` in your browser:
+
+
+
+If you are running a non-local cluster, you can expose it another way, for example through an ingress controller.
+
+## Setting up workspace configs
+
+Since v0.10.0, we have introduced the concept of [workspaces](../../../3-concepts/4-workspace/1-overview.md), whose configurations represent the parts of application behavior that platform teams are interested in standardizing, or the ones they want to take off developers' minds to make their lives easier.
+
+In the case of setting up Prometheus, there are a few things to set up on the workspace level:
+
+### Operator mode
+
+The `operatorMode` flag indicates to Kusion whether the Prometheus instance installed in the cluster runs as a Kubernetes operator or not. This determines the different kinds of resources Kusion manages.
+
+To see more about different ways to run Prometheus in the Kubernetes cluster, please refer to the [design documentation](https://github.com/KusionStack/kusion/blob/main/docs/prometheus.md#prometheus-installation).
+
+Most cloud vendors provide out-of-the-box monitoring solutions for workloads running in a managed Kubernetes cluster (EKS, AKS, etc.), such as AWS CloudWatch and Azure Monitor. These solutions mostly involve installing an agent (CloudWatch Agent, OMS Agent, etc.) in the cluster and collecting the metrics to a centralized monitoring server. In those cases, you don't need to set `operatorMode` to `True`. It only needs to be set to `True` when you have an installation of the [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) running inside the Kubernetes cluster.
+
+:::info
+
+For differences between [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator), [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) and the [community kube-prometheus-stack helm chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack), the details are documented [here](https://github.com/prometheus-operator/prometheus-operator#prometheus-operator-vs-kube-prometheus-vs-community-helm-chart).
+:::
+
+### Monitor types
+
+The `monitorType` flag indicates the kind of monitor Kusion will create. It only applies when `operatorMode` is set to `True`. As of version 0.10.0, Kusion provides options to scrape metrics from either the application pods or its corresponding Kubernetes services. This determines the different kinds of resources Kusion manages when Prometheus runs as an operator in the target cluster.
+
+A sample `workspace.yaml` with Prometheus settings:
+```yaml
+modules:
+ ...
+ kusionstack/monitoring@0.1.0:
+ default:
+ operatorMode: True
+ monitorType: Service
+ scheme: http
+ interval: 30s
+ timeout: 15s
+...
+```
+
+To instruct Prometheus to scrape from pod targets instead:
+```yaml
+modules:
+ ...
+ kusionstack/monitoring@0.1.0:
+ default:
+ operatorMode: True
+ monitorType: Pod
+ scheme: http
+ interval: 30s
+ timeout: 15s
+...
+```
+
+If `operatorMode` is omitted from the `workspace.yaml`, Kusion defaults it to `False`.
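+
+For reference, when `operatorMode` is `True` and `monitorType` is `Service`, the resource managed for the application is a Prometheus operator `ServiceMonitor` CR along the lines of the hedged sketch below (the names, labels and port are illustrative, not necessarily the exact values Kusion generates):
+
+```yaml
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  name: helloworld-service-monitor # illustrative name
+spec:
+  selector:
+    matchLabels:
+      app.kubernetes.io/name: helloworld # illustrative label
+  endpoints:
+    - port: helloworld # illustrative port name
+      path: /metrics
+      scheme: http
+      interval: 30s
+      scrapeTimeout: 15s
+```
+
+When `monitorType` is `Pod`, the equivalent `PodMonitor` CR is managed instead.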
+
+### Overriding with projectSelector
+
+Workspace configurations contain a default settings group applied to all projects in the workspace, along with a means to override them for specific projects using the `projectSelector` keyword.
+
+Projects whose names match those in a `projectSelector` use the values defined in that override group instead of the defaults. If a key is not present in the override group, the default value is used.
+
+Take a look at the sample `workspace.yaml`:
+```yaml
+modules:
+ ...
+ kusionstack/monitoring@0.1.0:
+ default:
+ operatorMode: True
+ monitorType: Pod
+ scheme: http
+ interval: 30s
+ timeout: 15s
+ low_frequency:
+ operatorMode: False
+ interval: 2m
+ projectSelector:
+ - foobar
+ high_frequency:
+ monitorType: Service
+ projectSelector:
+ - helloworld
+...
+```
+
+In the example above, a project with the name `foobar` will end up with monitoring settings where `operatorMode` is set to `False`, with a 2-minute scraping interval, a 15-second timeout (coming from the default) and the http scheme (coming from the default).
+
+Note that the same project cannot appear in more than one `projectSelector`.
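+
+To make the override semantics concrete, the effective monitoring settings each project ends up with in the sample above would be:
+
+```yaml
+# effective settings for project 'foobar' (low_frequency overrides + defaults)
+operatorMode: False  # overridden
+monitorType: Pod     # from default
+scheme: http         # from default
+interval: 2m         # overridden
+timeout: 15s         # from default
+
+# effective settings for project 'helloworld' (high_frequency overrides + defaults)
+operatorMode: True   # from default
+monitorType: Service # overridden
+scheme: http         # from default
+interval: 30s        # from default
+timeout: 15s         # from default
+```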
+
+For a full reference of what can be configured in the workspace level, please see the [workspace reference](../../../6-reference/2-modules/2-workspace-configs/monitoring/prometheus.md).
+
+## Updating the workspace config
+
+Assuming you now have a `workspace.yaml` that looks like the following:
+```yaml
+modules:
+ kusionstack/monitoring@0.1.0:
+ default:
+ operatorMode: True
+ monitorType: Service
+ scheme: http
+ interval: 30s
+ timeout: 15s
+...
+```
+
+Update the workspace configuration by running the following command:
+```shell
+kusion workspace update dev -f workspace.yaml
+```
+Verify the workspace config is properly updated by running the command:
+```shell
+kusion workspace show dev
+```
+
+## Using kusion to deploy your application with monitoring requirements
+
+At this point the setup is complete! Any new applications you deploy via Kusion will now automatically have the monitoring-related resources created, should you declare that you want them via the `monitoring` field in the `AppConfiguration` model.
+
+The monitoring in an AppConfiguration is declared in the `monitoring` field. See the example below for a full, deployable AppConfiguration.
+
+Please note we are using a new image `quay.io/brancz/prometheus-example-app` since the app itself needs to expose metrics for Prometheus to scrape:
+
+`helloworld/dev/kcl.mod`:
+```
+[package]
+name = "helloworld"
+
+[dependencies]
+monitoring = { oci = "oci://ghcr.io/kusionstack/monitoring", tag = "0.2.0" }
+kam = { git = "https://github.com/KusionStack/kam.git", tag = "0.2.0" }
+service = { oci = "oci://ghcr.io/kusionstack/service", tag = "0.1.0" }
+# network is imported by main.k below
+network = { oci = "oci://ghcr.io/kusionstack/network", tag = "0.2.0" }
+
+[profile]
+entries = ["main.k"]
+```
+
+`helloworld/dev/main.k`:
+```python
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import monitoring as m
+import network.network as n
+
+helloworld: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ "monitoring-sample-app": c.Container {
+ image: "quay.io/brancz/prometheus-example-app:v0.3.0"
+ }
+ }
+ }
+ # Add the monitoring configuration backed by Prometheus
+ accessories: {
+ "monitoring": m.Prometheus {
+ path: "/metrics"
+ }
+ "network": n.Network {
+ ports: [
+ n.Port {
+ port: 8080
+ }
+ ]
+ }
+ }
+}
+```
+
+The KCL file above represents an application with a service type workload, exposing port 8080, with Prometheus scraping the `/metrics` endpoint at the interval configured in the workspace (30 seconds in the example above).
+
+Running `kusion apply` would show that Kusion will create a `Namespace`, a `Deployment`, a `Service` and a `ServiceMonitor`:
+
+
+Continue applying all resources:
+
+
+If we want to, we can verify the service monitor has been created successfully:
+
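+For example, you can check with kubectl (the namespace is illustrative here — it derives from your project name):
+
+```shell
+kubectl get servicemonitors -n helloworld
+```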
+
+In a few seconds, you should be able to see in the Prometheus portal that the service we just deployed has now been discovered and monitored by Prometheus:
+
+
+You can run a few simple queries on the data that Prometheus scraped from your application:
+
+
+More info about PromQL can be found [here](https://prometheus.io/docs/prometheus/latest/querying/basics/)[4].
+
+## References
+1. Prometheus: https://prometheus.io/docs/introduction/overview/
+2. Prometheus team advice: https://github.com/prometheus-operator/prometheus-operator/issues/1547#issuecomment-446691500
+3. Prometheus operator getting started doc: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/getting-started.md
+4. PromQL basics: https://prometheus.io/docs/prometheus/latest/querying/basics/
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/3-observability/2-resource-graph.md b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/3-observability/2-resource-graph.md
new file mode 100644
index 00000000..1d169d6c
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/3-observability/2-resource-graph.md
@@ -0,0 +1,88 @@
+---
+id: resource-graph
+---
+
+# Resource Graph
+
+Kusion provides a powerful feature to visualize the relationships and dependencies between kusion `resources` using a resource graph. This feature offers several key benefits:
+
+- Comprehensive Visualization: The resource graph offers a clear, visual representation of your entire infrastructure, allowing you to see all resources and their interconnections at a glance. It includes detailed information about each cloud resource, such as its type, name, and unique identifiers, making it easier to locate and manage resources in your cloud environment.
+
+- Dependency Tracking: It helps you understand how resources are linked, making it easier to identify potential impacts when making changes to your infrastructure.
+
+- Troubleshooting Aid: When issues arise during the `apply` operation, the resource graph becomes an invaluable tool for pinpointing the source of problems. It provides a clear visual representation of resource relationships and their current status. This comprehensive view significantly reduces debugging time and enhances your ability to maintain a stable and efficient infrastructure.
+
+- Visual Documentation: The resource graph provides a clear, up-to-date visual representation of your infrastructure, automatically updating as changes occur. It improves team understanding and communication about the infrastructure setup.
+
+This feature empowers you to gain a comprehensive and intuitive understanding of your infrastructure's architecture, enabling more efficient troubleshooting and decision-making.
+
+## Prerequisites
+
+Please refer to the [Deliver the WordPress Application with Cloud RDS](../1-cloud-resources/1-database.md) in the guide for deploying an application.
+
+This guide will assume that you have already deployed an application following the guide.
+
+## Display Resource Graph
+
+To display a resource graph, you need to run the following command with the project name specified:
+
+```bash
+kusion resource graph --project wordpress-rds-cloud
+```
+
+The output will be a resource graph in the terminal:
+
+```shell
+Displaying resource graph in the project wordpress-rds-cloud...
+
+Workspace: demo
+
+Workload Resources:
+ID Kind Name CloudResourceID Status
+apps/v1:Deployment:wordpress-rds-cloud:wordpress-rds-cl Kubernetes:apps/v1:Deployment wordpress-rds-cloud/wordpress- Apply succeeded | Reconciled
+oud-dev-wordpress rds-cloud-dev-wordpress
+
+Dependency Resources:
+ID Kind Name CloudResourceID Status
+v1:Secret:wordpress-rds-cloud:wordpress-mysql-mysql Kubernetes:v1:Secret wordpress-rds-cloud/wordpress- Apply succeeded | Reconciled
+ mysql-mysql
+v1:Service:wordpress-rds-cloud:wordpress-rds-cloud-dev- Kubernetes:v1:Service wordpress-rds-cloud/wordpress- Apply succeeded | Reconciled
+wordpress-private rds-cloud-dev-wordpress-privat
+ e
+v1:Namespace:wordpress-rds-cloud Kubernetes:v1:Namespace wordpress-rds-cloud Apply succeeded | Reconciled
+
+Other Resources:
+ID Kind Name CloudResourceID Status
+aliyun:alicloud:alicloud_db_connection:wordpress-mysql alicloud:alicloud_db_connectio wordpress-mysql rm-2zer0f93xy490fdzq:rm-2zer0f Apply succeeded | Reconciled
+ n 93xy490fdzqtf
+aliyun:alicloud:alicloud_db_instance:wordpress-mysql alicloud:alicloud_db_instance wordpress-mysql rm-2zer0f93xy490fdzq Apply succeeded | Reconciled
+aliyun:alicloud:alicloud_rds_account:wordpress-mysql alicloud:alicloud_rds_account wordpress-mysql rm-2zer0f93xy490fdzq:root Apply succeeded | Reconciled
+hashicorp:random:random_password:wordpress-mysql-mysql custom:random_password Apply succeeded
+```
+
+The resource graph output provides a comprehensive overview of the resources in your project. Let's break down each field:
+
+- ID: This is a unique identifier for each resource.
+
+- Kind: This field indicates the type of resource.
+
+- Name: This is the name of the resource within its namespace or scope.
+
+- CloudResourceID: For cloud resources, this field shows the unique identifier assigned by the cloud provider. For Kubernetes resources, this field is often empty.
+
+- Status: This field shows the current state of the resource. Common statuses include:
+ - "Apply succeeded | Reconciled": The resource has been successfully created and is in sync with the desired state.
+ - "Apply succeeded | Reconcile failed": The resource has been successfully created, but the resource is not in sync with the desired state.
+  - "Apply succeeded": The `apply` operation has completed, but the resource might not be in sync with the desired state.
+ - "Apply failed": The `apply` operation has failed.
+
+The graph is divided into three sections:
+
+1. Workload Resources: These are the main application components, such as Kubernetes Deployments.
+
+2. Dependency Resources: These are resources that the workload depends on, such as Kubernetes Secrets, Services, and Namespaces.
+
+3. Other Resources: This section includes additional resources, often cloud provider-specific, such as database instances and connections.
+
+This graph gives you a clear view of all the resources in your project, their types, names, cloud identifiers (if applicable), and current status. It's particularly useful for understanding the structure of your application and its dependencies, as well as for troubleshooting and ensuring all resources are in the expected state.
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/3-observability/_category_.json b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/3-observability/_category_.json
new file mode 100644
index 00000000..b061ae3e
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/3-observability/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Automated Observability"
+}
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/4-secrets-management/1-using-cloud-secrets.md b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/4-secrets-management/1-using-cloud-secrets.md
new file mode 100644
index 00000000..6d0afce4
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/4-secrets-management/1-using-cloud-secrets.md
@@ -0,0 +1,101 @@
+---
+id: using-cloud-secrets
+---
+
+# Using Cloud Secrets Manager
+
+Applications usually store sensitive data in secrets by using centralized secrets management solutions. For example, you authenticate databases, services, and external systems with passwords, API keys, tokens, and other credentials stored in a secret store, e.g. Hashicorp Vault, AWS Secrets Manager, or Azure Key Vault.
+
+Kusion provides out-of-the-box support for referencing existing external secrets management solutions. This tutorial introduces how to pull secrets from AWS Secrets Manager to make them available to applications.
+
+## Prerequisites
+
+Please refer to the [prerequisites](../2-working-with-k8s/1-deploy-application.md#prerequisites) in the guide for deploying an application.
+
+The example below also requires you to have [initialized the project](../2-working-with-k8s/1-deploy-application.md#initializing) using the `kusion init` command, which will generate a [`kcl.mod` file](../2-working-with-k8s/1-deploy-application.md#kclmod) under the project directory.
+
+Additionally, you need to configure the obtained AccessKey and SecretKey as environment variables:
+
+```bash
+export AWS_ACCESS_KEY_ID="AKIAQZDxxxx" # replace it with your AccessKey
+export AWS_SECRET_ACCESS_KEY="oE/xxxx" # replace it with your SecretKey
+```
+
+
+
+## Setting up workspace
+
+Since v0.10.0, we have introduced the concept of [workspaces](../../../3-concepts/4-workspace/1-overview.md), whose configurations represent the parts of application behavior that platform teams are interested in standardizing, or the ones they want to take off developers' minds to make their lives easier.
+
+In the case of setting up cloud secrets manager, platform teams need to specify which secrets management solution to use and necessary information to access on the workspace level.
+
+A sample `workspace.yaml` with AWS Secrets Manager settings:
+
+```yaml
+modules:
+ ...
+secretStore:
+  provider:
+    aws:
+      region: us-east-1
+      # profile: the optional profile to be used to interact with AWS Secrets Manager
+...
+```
+
+:::note
+The `provider` of the `secretStore` now supports `aws`, `alicloud` and `viettelcloud`.
+:::
+
+## Update AppConfiguration
+
+At this point the setup is complete! Now you can declare external-type secrets via the `secrets` field in the `AppConfiguration` model to consume sensitive data stored in AWS Secrets Manager.
+
+See the example below for a full, deployable AppConfiguration.
+
+```python
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import service.secret as sec
+
+gitsync: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+ "syncer": c.Container {
+ image: "dyrnq/git-sync"
+ # Run the following command as defined
+ command: [
+ "--repo=https://github.com/KusionStack/kusion"
+ "--ref=HEAD"
+ "--root=/mnt/git"
+ ]
+ # Consume secrets in environment variables
+ env: {
+ "GIT_SYNC_USERNAME": "secret://git-auth/username"
+ "GIT_SYNC_PASSWORD": "secret://git-auth/password"
+ }
+ }
+ }
+ # Secrets used to retrieve secret data from AWS Secrets Manager
+ secrets: {
+ "git-auth": sec.Secret {
+ type: "external"
+ data: {
+ "username": "ref://git-auth-info/username"
+ "password": "ref://git-auth-info/password"
+ }
+ }
+ }
+ }
+}
+```
+
+## Apply and Verify
+
+Run the `kusion apply` command to deploy the above application, then use the command below to verify that the secret got deployed:
+
+```shell
+kubectl get secret -n secretdemo
+```
+
+You will find a `git-auth` secret of type Opaque automatically created, containing sensitive information pulled from AWS Secrets Manager.
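+
+To inspect an individual key of the synced secret, you can decode it with kubectl (the namespace matches the one used above):
+
+```shell
+kubectl get secret git-auth -n secretdemo -o jsonpath='{.data.username}' | base64 -d
+```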
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/4-secrets-management/_category_.json b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/4-secrets-management/_category_.json
new file mode 100644
index 00000000..8990c11b
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/4-secrets-management/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Secrets Management"
+}
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/5-production-practice-case/1-production-practice-case.md b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/5-production-practice-case/1-production-practice-case.md
new file mode 100644
index 00000000..9672cd55
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/5-production-practice-case/1-production-practice-case.md
@@ -0,0 +1,190 @@
+---
+id: collaborate-with-github-actions
+---
+
+# Achieving Team Collaboration in Production Practice with GitHub Actions
+
+In this article, we will introduce how to use Kusion CLI in combination with GitHub Actions to achieve team collaboration in production practice.
+
+Adopting the concept of separation of concerns, we divide the staff involved in application delivery and operation into two groups: **Platform Engineers (PEs)** and **Developers (Devs)**. As the builders of the Internal Developer Platform (IDP), platform engineers are primarily responsible for creating the [storage backend](../../../3-concepts/7-backend/1-overview.md) for the Kusion CLI in team collaborative scenarios (e.g. AWS S3 or Alicloud OSS), developing custom reusable [Kusion modules](../../../3-concepts/3-module/1-overview.md), and creating and maintaining standardized platform configurations in a [workspace](../../../3-concepts/4-workspace/1-overview.md). Application developers, meanwhile, can focus on writing the application business logic and configuration code, self-serving application delivery and operation by triggering automated CI/CD pipelines. [GitHub Actions](https://github.com/features/actions) is such a CI/CD platform: by customizing a [GitHub Actions workflow](https://docs.github.com/en/actions/using-workflows), pipelines for building, testing, and deploying are executed automatically.
+
+In the following sections, we will demonstrate the specific workflow from the perspectives of both PEs and Devs with the sample workflows from our [konfig](https://github.com/KusionStack/konfig) and [catalog](https://github.com/KusionStack/catalog) repositories.
+
+## Perspective of PE
+
+### Setup Kusion Storage Backend
+
+In order to enable multiple people to collaboratively edit and modify application configuration code within a team, PEs need to create a centralized remote storage backend for the Kusion CLI, such as [AWS S3](https://aws.amazon.com/pm/serv-s3/) or [Alicloud OSS](https://www.alibabacloud.com/en/product/object-storage-service). Below is an example OSS bucket; we can see that it is mainly used to store application **releases** and **workspace** configurations.
+
+
+
+Suppose PEs have set up the Alicloud OSS storage backend and obtained an AK/SK with permission to read and write the bucket; they can use the following commands to configure the remote storage backend.
+
+```shell
+# please replace the env with actual AK/SK
+export OSS_ACCESS_KEY_ID=LTAxxxxxxxxxxxxxx
+export OSS_ACCESS_KEY_SECRET=uUPxxxxxxxxxx
+
+# set up backend
+kusion config set backends.oss_test '{"type":"oss","configs":{"bucket":"kusion-test","endpoint":"oss-cn-shanghai.aliyuncs.com"}}'
+kusion config set backends.current oss_test
+```
+
+### Develop Customized Kusion Modules
+
+In the production practice of an enterprise, a common scenario is that PEs need to abstract and encapsulate the on-premises infrastructural computing, storage and networking resources to reduce the cognitive burden on developers. To achieve this, they can develop customized Kusion modules, a kind of reusable building block. Below is an example [GitHub Actions workflow](https://github.com/KusionStack/catalog/actions/runs/9398478367/job/25883893076) for pushing the module artifacts provided by KusionStack Official with multiple os/arch combinations to [GitHub Packages](https://github.com/features/packages).
+
+
+
+### Create and Update Workspace
+
+Moreover, PEs also need to create and update the workspace configurations, where they can declare the Kusion modules available in the workspace, and add some standardized default or application-specific configurations across the entire scope of the workspace.
+
+Suppose PEs have set up the remote storage backend, they can use the following commands to create and update workspace.
+
+```shell
+# create workspace with the name of 'dev'
+kusion workspace create dev
+
+# update workspace with 'dev.yaml'
+kusion workspace update dev -f dev.yaml
+
+# switch to the 'dev' workspace
+kusion workspace switch dev
+```
+
+```yaml
+# dev.yaml declares 'mysql' and 'network' modules in the workspace
+modules:
+ mysql:
+ path: oci://ghcr.io/kusionstack/mysql
+ version: 0.2.0
+ network:
+ path: oci://ghcr.io/kusionstack/network
+ version: 0.2.0
+```
+
+So far, PEs have almost completed the fundamental work for setting up the IDP.
+
+## Perspective of Dev
+
+### Setup Kusion Storage Backend
+
+In order to get the available modules of the workspace and validate the generated [spec](../../../3-concepts/6-specs.md), developers need to communicate with PEs to obtain the AK/SK (usually with **Read-Only** permission), the bucket name, and the endpoint to access the remote storage backend. Similar to the PEs, developers can set up the backend configs with the following commands.
+
+```shell
+# please replace the env with actual AK/SK
+export OSS_ACCESS_KEY_ID=LTAxxxxxxxxxxxxxx
+export OSS_ACCESS_KEY_SECRET=uUPxxxxxxxxxx
+
+# set up backend
+kusion config set backends.oss_test '{"type":"oss","configs":{"bucket":"kusion-test","endpoint":"oss-cn-shanghai.aliyuncs.com"}}'
+kusion config set backends.current oss_test
+```
+
+### Create and Update Project and Stack
+
+Next, developers can create and update the [Project](../../../3-concepts/1-project/1-overview.md) and [Stack](../../../3-concepts/2-stack/1-overview.md) configurations with the `kusion project` and `kusion stack` commands.
+
+```shell
+# create a new project named quickstart
+mkdir quickstart && cd quickstart
+kusion project create
+
+# create a stack named dev
+kusion stack create dev
+```
+
+Below are the initialized project and stack contents.
+
+```yaml
+# quickstart/project.yaml
+name: quickstart
+```
+
+```yaml
+# quickstart/dev/stack.yaml
+# The metadata information of the stack.
+name: dev
+```
+
+```python
+# kcl.mod
+# Please add the modules you need in 'dependencies'.
+[dependencies]
+kam = { git = "https://github.com/KusionStack/kam.git", tag = "0.2.0" }
+service = { oci = "oci://ghcr.io/kusionstack/service", tag = "0.1.0" }
+```
+
+```python
+# main.k
+# The configuration codes in perspective of developers.
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+
+# Please replace the ${APPLICATION_NAME} with the name of your application, and complete the
+# 'AppConfiguration' instance with your own workload and accessories.
+${APPLICATION_NAME}: ac.AppConfiguration {
+ workload: service.Service {
+ containers: {
+
+ }
+ }
+ accessories: {
+
+ }
+}
+```
+
+Developers can use `kusion mod list` to get the available modules in the current workspace and `kusion mod add` to add a specified module to the current stack.
+
+```shell
+# list the available modules in the current workspace
+➜ kusion mod list
+Name Version URL
+mysql 0.2.0 oci://ghcr.io/kusionstack/mysql
+network 0.2.0 oci://ghcr.io/kusionstack/network
+```
+
+```shell
+# add the specified modules to the current stack
+kusion mod add mysql && kusion mod add network
+```
+
+The corresponding module artifacts will be downloaded and the declaration of the modules will be added to `kcl.mod`, which can be compared to `go mod tidy` and `go.mod`.
+
+```python
+# kcl.mod after executing 'kusion mod add'
+[package]
+
+[dependencies]
+kam = { git = "https://github.com/KusionStack/kam.git", tag = "0.2.0" }
+service = { oci = "oci://ghcr.io/kusionstack/service", tag = "0.1.0" }
+mysql = { oci = "oci://ghcr.io/kusionstack/mysql", tag = "0.2.0" }
+network = { oci = "oci://ghcr.io/kusionstack/network", tag = "0.2.0" }
+```
+
+After this, developers can edit the application configuration code according to their actual needs.
+
+### Trigger Preview and Apply Pipeline
+
+[KusionStack/konfig](https://github.com/KusionStack/konfig) is the official example repository, and provides a set of GitHub Actions workflows [preview.yml](https://github.com/KusionStack/konfig/blob/main/.github/workflows/preview.yml) and [apply.yml](https://github.com/KusionStack/konfig/blob/main/.github/workflows/apply.yml). The `preview.yml` is triggered by a pull request to the main branch, while `apply.yml` is triggered by a push to the main branch.
+
+
+
+
+
+The previewing workflow will first get the changed projects and stacks.
+
+
+
+Then the previewing workflow will execute the `kusion preview` command for all of the changed stacks, and open an issue for manual approval to merge the changes after the approvers check the preview result artifact.
+
+
+
+
+
+Once the code review is completed and the pull request is merged into the main branch, it will trigger the apply workflow, which will deploy the changes to the affected Projects and Stacks, and upload the respective results to the GitHub Actions Artifacts.
+
+
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/5-production-practice-case/_category_.json b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/5-production-practice-case/_category_.json
new file mode 100644
index 00000000..2b76a644
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/5-production-practice-case/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Production Practice Case"
+}
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/6-llm-ops/1-inference.md b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/6-llm-ops/1-inference.md
new file mode 100644
index 00000000..57e33f03
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/6-llm-ops/1-inference.md
@@ -0,0 +1,409 @@
+---
+id: inference
+---
+
+# Provide LLM Service with Inference Module for AI Application
+
+In the wave of Artificial Intelligence (AI), Large Language Models (LLMs) are gradually becoming a key factor in driving innovation and productivity. As a result, researchers and developers are looking for a more efficient way to deploy and manage complex LLM models and AI applications.
+
+To simplify the process from model construction and deployment to interaction with applications, the KusionStack community provides an `inference` module. In this article, we will explore in detail how to deploy an AI application using the LLM service provided by this module.
+
+:::info
+The module definition and implementation, as well as the example application we are about to show can be found [here](https://github.com/KusionStack/catalog/tree/main/modules/inference).
+:::
+
+## Prerequisites
+
+Before we begin, we need to perform the following steps to set up the environment required by Kusion:
+
+- Install Kusion
+- A running Kubernetes cluster
+
+For more details, please refer to the [prerequisites](https://www.kusionstack.io/docs/user-guides/working-with-k8s/deploy-application#prerequisites) in the guide for deploying an application with Kusion.
+
+## Initializing and Managing Workspace Configuration
+
+For information on how to initialize and switch a workspace with `kusion workspace create` and `kusion workspace switch`, please refer to [this document](https://www.kusionstack.io/docs/user-guides/working-with-k8s/deploy-application#initializing-workspace-configuration).
+
+For the current version of the `inference` module, an empty configuration is enough for workspace initialization. Users may also need to configure the `network` module as an accessory to provide network access for the AI application, whose workload is described with the `service` module. Other modules' platform configurations can be added to the workspace as needed.
+
+An example is shown below:
+
+```yaml
+modules:
+ service:
+ path: oci://ghcr.io/kusionstack/service
+ version: 0.2.0
+ configs:
+ default: {}
+ network:
+ path: oci://ghcr.io/kusionstack/network
+ version: 0.2.0
+ configs:
+ default: {}
+ inference:
+ path: oci://ghcr.io/kusionstack/inference
+ version: 0.1.0-beta.4
+ configs:
+ default: {}
+```
+
+## Example
+
+After creating and switching to the workspace shown above, we can initialize the example `Project` and `Stack` with `kusion project create` and `kusion stack create`. Please refer to [this document](https://www.kusionstack.io/docs/user-guides/working-with-k8s/deploy-application#initializing-application-configuration) for more details.
+
+The directory structure and configuration file contents of the example project are shown below:
+
+```shell
+example/
+.
+├── default
+│ ├── kcl.mod
+│ ├── main.k
+│ └── stack.yaml
+└── project.yaml
+```
+
+`project.yaml`:
+
+```yaml
+name: example
+```
+
+`stack.yaml`:
+
+```yaml
+name: default
+```
+
+`kcl.mod`:
+
+```toml
+[dependencies]
+kam = { git = "https://github.com/KusionStack/kam.git", tag = "0.2.0" }
+service = { oci = "oci://ghcr.io/kusionstack/service", tag = "0.1.0" }
+network = { oci = "oci://ghcr.io/kusionstack/network", tag = "0.2.0" }
+inference = { oci = "oci://ghcr.io/kusionstack/inference", tag = "0.1.0-beta.4" }
+```
+
+`main.k`:
+
+```python
+import kam.v1.app_configuration as ac
+import service
+import service.container as c
+import network as n
+import inference.v1.inference
+
+inference: ac.AppConfiguration {
+ # Declare the workload configurations.
+ workload: service.Service {
+ containers: {
+ myct: c.Container {image: "kangy126/app"}
+ }
+ replicas: 1
+ }
+ # Declare the inference module configurations.
+ accessories: {
+ "inference": inference.Inference {
+ model: "llama3"
+ framework: "Ollama"
+ }
+ "network": n.Network {ports: [n.Port {
+ port: 80
+ targetPort: 5000
+ }]}
+ }
+}
+```
+
+In the above example, we configure the `model` and `framework` items of the `inference` module, which are the two required configuration items for this module. By changing these two items, an inference service for a different model or framework can be quickly built.
+
+As for how the AI application uses the LLM service provided by the `inference` module: an environment variable named `INFERENCE_URL` is injected by the module, and the application can call the LLM service at that address.
+
+The model in use is transparent to the application; you only need to provide the `prompt` parameter to the request address. Of course, you can directly modify the model and other configuration items in the `main.k` file and update the deployed resources with `kusion apply`.
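+Concretely, the interaction above can be sketched as follows. This is a minimal illustration rather than the actual example application: the root endpoint path is a hypothetical assumption, while the `INFERENCE_URL` environment variable and the `prompt` parameter follow the module's documented behavior.

```python
import os
from urllib.parse import urlencode

# The inference module injects the LLM service address into the
# application container via the INFERENCE_URL environment variable.
INFERENCE_URL = os.environ.get("INFERENCE_URL", "ollama-infer-service")

def build_llm_request(prompt: str) -> str:
    """Build the request URL for the LLM service from a user prompt.

    The root path here is hypothetical; only the `prompt` query
    parameter is taken from the module's documentation.
    """
    return f"http://{INFERENCE_URL}/?{urlencode({'prompt': prompt})}"

# In the cluster, the application could then call the service with, e.g.,
# urllib.request.urlopen(build_llm_request("Hello, llama3!")).
```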
+
+There are also some optional configuration items in the `inference` module for adjusting the LLM service, whose details can be found [here](../../../6-reference/2-modules/1-developer-schemas/inference/inference.md).
+
+## Deployment
+
+Now we can use Kusion to generate and deploy the `Spec` containing all the resources the AI application needs.
+
+First, navigate to the folder `example/default` and execute the `kusion generate` command; a `Spec` will be generated.
+
+```
+➜ default git:(main) ✗ kusion generate
+ ✔︎ Generating Spec in the Stack default...
+resources:
+ - id: v1:Namespace:example
+ type: Kubernetes
+ attributes:
+ apiVersion: v1
+ kind: Namespace
+ metadata:
+ creationTimestamp: null
+ name: example
+ spec: {}
+ status: {}
+ extensions:
+ GVK: /v1, Kind=Namespace
+ - id: apps/v1:Deployment:example:example-default-inference
+ type: Kubernetes
+ attributes:
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ creationTimestamp: null
+ labels:
+ app.kubernetes.io/name: inference
+ app.kubernetes.io/part-of: example
+ name: example-default-inference
+ namespace: example
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: inference
+ app.kubernetes.io/part-of: example
+ strategy: {}
+ template:
+ metadata:
+ creationTimestamp: null
+ labels:
+ app.kubernetes.io/name: inference
+ app.kubernetes.io/part-of: example
+ spec:
+ containers:
+ - env:
+ - name: INFERENCE_URL
+ value: ollama-infer-service
+ image: kangy126/app
+ name: myct
+ resources: {}
+ status: {}
+ dependsOn:
+ - v1:Namespace:example
+ - v1:Service:example:ollama-infer-service
+ - v1:Service:example:example-default-inference-private
+ extensions:
+ GVK: apps/v1, Kind=Deployment
+ kusion.io/is-workload: true
+ - id: apps/v1:Deployment:example:ollama-infer-deployment
+ type: Kubernetes
+ attributes:
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ creationTimestamp: null
+ name: ollama-infer-deployment
+ namespace: example
+ spec:
+ selector:
+ matchLabels:
+ accessory: ollama
+ strategy: {}
+ template:
+ metadata:
+ creationTimestamp: null
+ labels:
+ accessory: ollama
+ spec:
+ containers:
+ - command:
+ - /bin/sh
+ - -c
+ - |-
+ echo 'FROM llama3
+ PARAMETER top_k 40
+ PARAMETER top_p 0.900000
+ PARAMETER temperature 0.800000
+ PARAMETER num_predict 128
+ PARAMETER num_ctx 2048
+ ' > Modelfile && ollama serve & OLLAMA_SERVE_PID=$! && sleep 5 && ollama create llama3 -f Modelfile && wait $OLLAMA_SERVE_PID
+ image: ollama/ollama
+ name: ollama-infer-container
+ ports:
+ - containerPort: 11434
+ name: ollama-port
+ resources: {}
+ volumeMounts:
+ - mountPath: /root/.ollama
+ name: ollama-infer-storage
+ volumes:
+ - emptyDir: {}
+ name: ollama-infer-storage
+ status: {}
+ dependsOn:
+ - v1:Namespace:example
+ - v1:Service:example:ollama-infer-service
+ - v1:Service:example:example-default-inference-private
+ extensions:
+ GVK: apps/v1, Kind=Deployment
+ - id: v1:Service:example:ollama-infer-service
+ type: Kubernetes
+ attributes:
+ apiVersion: v1
+ kind: Service
+ metadata:
+ creationTimestamp: null
+ labels:
+ accessory: ollama
+ name: ollama-infer-service
+ namespace: example
+ spec:
+ ports:
+ - port: 80
+ targetPort: 11434
+ selector:
+ accessory: ollama
+ type: ClusterIP
+ status:
+ loadBalancer: {}
+ dependsOn:
+ - v1:Namespace:example
+ extensions:
+ GVK: /v1, Kind=Service
+ - id: v1:Service:example:example-default-inference-private
+ type: Kubernetes
+ attributes:
+ apiVersion: v1
+ kind: Service
+ metadata:
+ creationTimestamp: null
+ labels:
+ app.kubernetes.io/name: inference
+ app.kubernetes.io/part-of: example
+ name: example-default-inference-private
+ namespace: example
+ spec:
+ ports:
+ - name: example-default-inference-private-80-tcp
+ port: 80
+ protocol: TCP
+ targetPort: 5000
+ selector:
+ app.kubernetes.io/name: inference
+ app.kubernetes.io/part-of: example
+ type: ClusterIP
+ status:
+ loadBalancer: {}
+ dependsOn:
+ - v1:Namespace:example
+ extensions:
+ GVK: /v1, Kind=Service
+secretStore: null
+context: {}
+```
+
+Next, we can execute the `kusion preview` command and review the three-way diffs of the resources for a safer deployment.
+
+```
+➜ default git:(main) ✗ kusion preview
+ ✔︎ Generating Spec in the Stack default...
+Stack: default
+ID Action
+v1:Namespace:example Create
+v1:Service:example:ollama-infer-service Create
+v1:Service:example:example-default-inference-private Create
+apps/v1:Deployment:example:example-default-inference Create
+apps/v1:Deployment:example:ollama-infer-deployment Create
+
+
+Which diff detail do you want to see?:
+> all
+ v1:Namespace:example Create
+ v1:Service:example:ollama-infer-service Create
+ v1:Service:example:example-default-inference-private Create
+ apps/v1:Deployment:example:example-default-inference Create
+```
+
+Finally, execute the `kusion apply` command to deploy the related Kubernetes resources.
+
+```
+➜ default git:(main) ✗ kusion apply
+ ✔︎ Generating Spec in the Stack default...
+Stack: default
+ID Action
+v1:Namespace:example Create
+v1:Service:example:ollama-infer-service Create
+v1:Service:example:example-default-inference-private Create
+apps/v1:Deployment:example:ollama-infer-deployment Create
+apps/v1:Deployment:example:example-default-inference Create
+
+
+Do you want to apply these diffs?:
+ > yes
+
+Start applying diffs ...
+ ✔︎ Succeeded v1:Namespace:example
+ ✔︎ Succeeded v1:Service:example:ollama-infer-service
+ ✔︎ Succeeded v1:Service:example:example-default-inference-private
+ ✔︎ Succeeded apps/v1:Deployment:example:ollama-infer-deployment
+ ✔︎ Succeeded apps/v1:Deployment:example:example-default-inference
+Apply complete! Resources: 5 created, 0 updated, 0 deleted.
+
+```
+
+## Testing
+
+Execute the `kubectl get all -n example` command, and the deployed Kubernetes resources will be shown.
+
+```
+➜ ~ kubectl get all -n example
+NAME READY STATUS RESTARTS AGE
+pod/example-dev-inference-5cf6c74574-7w92f 1/1 Running 0 2d6h
+pod/mynginx 1/1 Running 0 2d6h
+pod/ollama-infer-deployment-7c56845496-s5snb 1/1 Running 0 2d6h
+
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+service/example-dev-inference-public ClusterIP 192.168.116.121 80:32693/TCP 2d6h
+service/ollama-infer-service ClusterIP 192.168.28.208 80/TCP 2d6h
+
+NAME READY UP-TO-DATE AVAILABLE AGE
+deployment.apps/example-dev-inference 1/1 1 1 2d6h
+deployment.apps/ollama-infer-deployment 1/1 1 1 2d6h
+
+NAME DESIRED CURRENT READY AGE
+replicaset.apps/example-dev-inference-5cf6c74574 1 1 1 2d6h
+replicaset.apps/ollama-infer-deployment-7c56845496 1 1 1 2d6h
+```
+
+The AI application in this example provides a simple service that returns the LLM's response to a GET request carrying the `prompt` parameter.
+
+We can test the application service locally with `kubectl port-forward`, which allows us to send requests to the application directly from a browser.
+
+```sh
+kubectl port-forward service/example-dev-inference-public 8080:80 -n example
+```
+
+The test results are shown in the figure below.
+
+
+
+By modifying the `model` parameter in the `main.k` file, you can switch to a different model without having to change the application itself.
+
+For example, we change the value of `model` from `llama3` to `qwen`, and then execute the `kusion apply` command to update the Kubernetes resources.
+
+```sh
+❯ kusion apply
+ ✔︎ Generating Spec in the Stack dev...
+Stack: dev
+ID Action
+v1:Namespace:example UnChanged
+v1:Service:example:ollama-infer-service UnChanged
+v1:Service:example:proxy-infer-service UnChanged
+v1:Service:example:example-dev-inference-public UnChanged
+apps/v1:Deployment:example:example-dev-inference UnChanged
+apps/v1:Deployment:example:proxy-infer-deployment Update
+apps/v1:Deployment:example:ollama-infer-deployment Update
+
+
+Do you want to apply these diffs?:
+ yes
+> details
+ no
+```
+
+We send the request to the application via the browser again, and the new results are as follows.
+
+
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/6-llm-ops/_category_.json b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/6-llm-ops/_category_.json
new file mode 100644
index 00000000..d0ed9947
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/6-llm-ops/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "LLM Ops"
+}
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/_category_.json b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/_category_.json
new file mode 100644
index 00000000..c365bdea
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/1-using-kusion-cli/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Using Kusion CLI"
+}
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/2-using-kusion-server/1-cloud-resources/_category_.json b/docs_versioned_docs/version-v0.14/5-user-guides/2-using-kusion-server/1-cloud-resources/_category_.json
new file mode 100644
index 00000000..f6f2c380
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/2-using-kusion-server/1-cloud-resources/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Cloud Resources"
+}
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/2-using-kusion-server/2-working-with-k8s/_category_.json b/docs_versioned_docs/version-v0.14/5-user-guides/2-using-kusion-server/2-working-with-k8s/_category_.json
new file mode 100644
index 00000000..05291533
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/2-using-kusion-server/2-working-with-k8s/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Kubernetes"
+}
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/2-using-kusion-server/_category_.json b/docs_versioned_docs/version-v0.14/5-user-guides/2-using-kusion-server/_category_.json
new file mode 100644
index 00000000..e504d294
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/2-using-kusion-server/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Using Kusion Server"
+}
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/5-user-guides/_category_.json b/docs_versioned_docs/version-v0.14/5-user-guides/_category_.json
new file mode 100644
index 00000000..abf4c874
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/5-user-guides/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "User Guides"
+}
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/_category_.json b/docs_versioned_docs/version-v0.14/6-reference/1-commands/_category_.json
new file mode 100644
index 00000000..d783ca2e
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Kusion Commands"
+}
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/index.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/index.md
new file mode 100644
index 00000000..e0fb681a
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/index.md
@@ -0,0 +1,41 @@
+# Kusion Commands
+
+Kusion is the Platform Orchestrator of Internal Developer Platform
+
+Find more information at: https://www.kusionstack.io
+
+## Synopsis
+
+As a Platform Orchestrator, Kusion delivers user intentions to Kubernetes, Clouds and On-Premise resources. It also enables asynchronous cooperation between the development and platform teams and drives the separation of concerns.
+
+```
+kusion [flags]
+```
+
+## Options
+
+```
+ -h, --help help for kusion
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion apply](kusion-apply.md) - Apply the operational intent of various resources to multiple runtimes
+* [kusion config](kusion-config.md) - Interact with the Kusion config
+* [kusion destroy](kusion-destroy.md) - Destroy resources within the stack.
+* [kusion generate](kusion-generate.md) - Generate and print the resulting Spec resources of target Stack
+* [kusion init](kusion-init.md) - Initialize the scaffolding for a demo project
+* [kusion mod](kusion-mod.md) - Manage Kusion modules
+* [kusion options](kusion-options.md) - Print the list of flags inherited by all commands
+* [kusion preview](kusion-preview.md) - Preview a series of resource changes within the stack
+* [kusion project](kusion-project.md) - Project is a folder that contains a project.yaml file and is linked to a Git repository
+* [kusion release](kusion-release.md) - Observe and operate Kusion release files
+* [kusion resource](kusion-resource.md) - Observe Kusion resource information
+* [kusion server](kusion-server.md) - Start kusion server
+* [kusion stack](kusion-stack.md) - Stack is a folder that contains a stack.yaml file within the corresponding project directory
+* [kusion version](kusion-version.md) - Print the Kusion version information for the current context
+* [kusion workspace](kusion-workspace.md) - Workspace is a logical concept representing a target that stacks will be deployed to
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-apply.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-apply.md
new file mode 100644
index 00000000..42426c15
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-apply.md
@@ -0,0 +1,77 @@
+# kusion apply
+
+Apply the operational intent of various resources to multiple runtimes
+
+## Synopsis
+
+Apply a series of resource changes within the stack.
+
+ Create, update or delete resources according to the operational intent within a stack. By default, Kusion will generate an execution preview and prompt for your approval before performing any actions. You can review the preview details and make a decision to proceed with the actions or abort them.
+
+```
+kusion apply [flags]
+```
+
+## Examples
+
+```
+ # Apply with specified work directory
+ kusion apply -w /path/to/workdir
+
+ # Apply with specified arguments
+ kusion apply -D name=test -D age=18
+
+ # Apply with specifying spec file
+ kusion apply --spec-file spec.yaml
+
+ # Skip interactive approval of preview details before applying
+ kusion apply --yes
+
+ # Apply without output style and color
+ kusion apply --no-style=true
+
+ # Apply without watching the resource changes and waiting for reconciliation
+ kusion apply --watch=false
+
+ # Apply with the specified timeout duration for kusion apply command, measured in second(s)
+ kusion apply --timeout=120
+
+ # Apply with localhost port forwarding
+ kusion apply --port-forward=8080
+```
+
+## Options
+
+```
+ -a, --all --detail Automatically show all preview details, combined use with flag --detail
+ -D, --argument stringArray Specify arguments on the command line
+ --backend string The backend to use, supports 'local', 'oss' and 's3'.
+ -d, --detail Automatically show preview details with interactive options (default true)
+ --dry-run Preview the execution effect (always successful) without actually applying the changes
+ -h, --help help for apply
+ --ignore-fields strings Ignore differences of target fields
+ --no-style no-style sets to RawOutput mode and disables all of styling
+ -o, --output string Specify the output format
+ --port-forward int Forward the specified port from local to service
+ --spec-file string Specify the spec file path as input, and the spec file must be located in the working directory or its subdirectories
+ --timeout int The timeout duration for kusion apply command, measured in second(s)
+ --watch After creating/updating/deleting the requested object, watch for changes (default true)
+ -w, --workdir string The work directory to run Kusion CLI.
+ --workspace string The name of target workspace to operate in.
+ -y, --yes Automatically approve and perform the update after previewing it
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion](index.md) - Kusion is the Platform Orchestrator of Internal Developer Platform
+
+Find more information at: https://www.kusionstack.io
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-config-get.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-config-get.md
new file mode 100644
index 00000000..83eba968
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-config-get.md
@@ -0,0 +1,37 @@
+# kusion config get
+
+Get a config item
+
+## Synopsis
+
+This command gets the value of a specified kusion config item, where the config item must be registered.
+
+```
+kusion config get
+```
+
+## Examples
+
+```
+ # Get a config item
+ kusion config get backends.current
+```
+
+## Options
+
+```
+ -h, --help help for get
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion config](kusion-config.md) - Interact with the Kusion config
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-config-list.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-config-list.md
new file mode 100644
index 00000000..28277c80
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-config-list.md
@@ -0,0 +1,37 @@
+# kusion config list
+
+List all config items
+
+## Synopsis
+
+This command lists all the kusion config items and their values.
+
+```
+kusion config list
+```
+
+## Examples
+
+```
+ # List config items
+ kusion config list
+```
+
+## Options
+
+```
+ -h, --help help for list
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion config](kusion-config.md) - Interact with the Kusion config
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-config-set.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-config-set.md
new file mode 100644
index 00000000..013719e1
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-config-set.md
@@ -0,0 +1,40 @@
+# kusion config set
+
+Set a config item
+
+## Synopsis
+
+This command sets the value of a specified kusion config item, where the config item must be registered, and the value must be in valid type.
+
+```
+kusion config set
+```
+
+## Examples
+
+```
+ # Set a config item with string type value
+ kusion config set backends.current s3-pre
+
+ # Set a config item with struct or map type value
+ kusion config set backends.s3-pre.configs '{"bucket":"kusion"}'
+```
+
+## Options
+
+```
+ -h, --help help for set
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion config](kusion-config.md) - Interact with the Kusion config
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-config-unset.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-config-unset.md
new file mode 100644
index 00000000..a8070771
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-config-unset.md
@@ -0,0 +1,37 @@
+# kusion config unset
+
+Unset a config item
+
+## Synopsis
+
+This command unsets a specified kusion config item, where the config item must be registered.
+
+```
+kusion config unset
+```
+
+## Examples
+
+```
+ # Unset a config item
+ kusion config unset backends.s3-pre.configs.bucket
+```
+
+## Options
+
+```
+ -h, --help help for unset
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion config](kusion-config.md) - Interact with the Kusion config
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-config.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-config.md
new file mode 100644
index 00000000..b2a98910
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-config.md
@@ -0,0 +1,35 @@
+# kusion config
+
+Interact with the Kusion config
+
+## Synopsis
+
+Config contains the operation of Kusion configurations.
+
+```
+kusion config [flags]
+```
+
+## Options
+
+```
+ -h, --help help for config
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+### SEE ALSO
+
+* [kusion](index.md) - Kusion is the Platform Orchestrator of Internal Developer Platform
+
+* [kusion config get](kusion-config-get.md) - Get a config item
+* [kusion config list](kusion-config-list.md) - List all config items
+* [kusion config set](kusion-config-set.md) - Set a config item
+* [kusion config unset](kusion-config-unset.md) - Unset a config item
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-destroy.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-destroy.md
new file mode 100644
index 00000000..31c00a4f
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-destroy.md
@@ -0,0 +1,47 @@
+# kusion destroy
+
+Destroy resources within the stack.
+
+## Synopsis
+
+Destroy resources within the stack.
+
+ Please note that the destroy command does NOT perform resource version checks. Therefore, if someone submits an update to a resource at the same time you execute a destroy command, their update will be lost along with the rest of the resource.
+
+```
+kusion destroy [flags]
+```
+
+## Examples
+
+```
+ # Delete resources of current stack
+ kusion destroy
+```
+
+## Options
+
+```
+ --backend string The backend to use, supports 'local', 'oss' and 's3'.
+ -d, --detail Automatically show preview details after previewing it
+ -h, --help help for destroy
+ --no-style no-style sets to RawOutput mode and disables all of styling
+ -w, --workdir string The work directory to run Kusion CLI.
+ --workspace string The name of target workspace to operate in.
+ -y, --yes Automatically approve and perform the update after previewing it
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion](index.md) - Kusion is the Platform Orchestrator of Internal Developer Platform
+
+Find more information at: https://www.kusionstack.io
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-generate.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-generate.md
new file mode 100644
index 00000000..1e4bed3e
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-generate.md
@@ -0,0 +1,53 @@
+# kusion generate
+
+Generate and print the resulting Spec resources of target Stack
+
+## Synopsis
+
+This command generates Spec resources with given values, then write the resulting Spec resources to specific output file or stdout.
+
+The nearest parent folder containing a stack.yaml file is loaded from the project in the current directory.
+
+```
+kusion generate [flags]
+```
+
+## Examples
+
+```
+ # Generate and write Spec resources to specific output file
+ kusion generate -o /tmp/spec.yaml
+
+ # Generate spec with custom workspace
+ kusion generate -o /tmp/spec.yaml --workspace dev
+
+ # Generate spec with specified arguments
+ kusion generate -D name=test -D age=18
+```
+
+## Options
+
+```
+ -D, --argument stringArray Specify arguments on the command line
+ --backend string The backend to use, supports 'local', 'oss' and 's3'.
+ -h, --help help for generate
+ --no-style no-style sets to RawOutput mode and disables all of styling
+ -o, --output string File to write generated Spec resources to
+ -w, --workdir string The work directory to run Kusion CLI.
+ --workspace string The name of target workspace to operate in.
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion](index.md) - Kusion is the Platform Orchestrator of Internal Developer Platform
+
+Find more information at: https://www.kusionstack.io
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-init.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-init.md
new file mode 100644
index 00000000..2ed3b457
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-init.md
@@ -0,0 +1,46 @@
+# kusion init
+
+Initialize the scaffolding for a demo project
+
+## Synopsis
+
+This command initializes the scaffolding for a demo project with the name of the current directory to help users quickly get started.
+
+ Note that the target directory needs to be an empty directory.
+
+```
+kusion init [flags]
+```
+
+## Examples
+
+```
+ # Initialize a demo project with the name of the current directory
+ mkdir quickstart && cd quickstart
+ kusion init
+
+ # Initialize the demo project in a different target directory
+ kusion init --target projects/my-demo-project
+```
+
+## Options
+
+```
+ -h, --help help for init
+ -t, --target string specify the target directory
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion](index.md) - Kusion is the Platform Orchestrator of Internal Developer Platform
+
+Find more information at: https://www.kusionstack.io
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod-add.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod-add.md
new file mode 100644
index 00000000..9456cfb0
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod-add.md
@@ -0,0 +1,39 @@
+# kusion mod add
+
+Add a module from a workspace
+
+```
+kusion mod add MODULE_NAME [--workspace WORKSPACE] [flags]
+```
+
+## Examples
+
+```
+ # Add a kusion module to the kcl.mod from the current workspace to use it in AppConfiguration
+ kusion mod add my-module
+
+ # Add a module to the kcl.mod from a specified workspace to use it in AppConfiguration
+ kusion mod add my-module --workspace=dev
+```
+
+## Options
+
+```
+ --backend string The backend to use, supports 'local', 'oss' and 's3'.
+ -h, --help help for add
+ -w, --workdir string The work directory to run Kusion CLI.
+ --workspace string The name of target workspace to operate in.
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion mod](kusion-mod.md) - Manage Kusion modules
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod-init.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod-init.md
new file mode 100644
index 00000000..905be243
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod-init.md
@@ -0,0 +1,40 @@
+# kusion mod init
+
+Create a kusion module along with common files and directories in the current directory
+
+```
+kusion mod init MODULE_NAME PATH [flags]
+```
+
+## Examples
+
+```
+ # Create a kusion module template in the current directory
+ kusion mod init my-module
+
+ # Init a kusion module at the specified path
+ kusion mod init my-module ./modules
+
+ # Init a module from a remote git template repository
+ kusion mod init my-module --template https://github.com//
+```
+
+## Options
+
+```
+ -h, --help help for init
+ --template string Initialize with specified template
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion mod](kusion-mod.md) - Manage Kusion modules
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod-list.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod-list.md
new file mode 100644
index 00000000..45130b4f
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod-list.md
@@ -0,0 +1,39 @@
+# kusion mod list
+
+List kusion modules in a workspace
+
+```
+kusion mod list [--workspace WORKSPACE] [flags]
+```
+
+## Examples
+
+```
+ # List kusion modules in the current workspace
+ kusion mod list
+
+ # List modules in a specified workspace
+ kusion mod list --workspace=dev
+```
+
+## Options
+
+```
+ --backend string The backend to use, supports 'local', 'oss' and 's3'.
+ -h, --help help for list
+ -w, --workdir string The work directory to run Kusion CLI.
+ --workspace string The name of target workspace to operate in.
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion mod](kusion-mod.md) - Manage Kusion modules
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod-login.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod-login.md
new file mode 100644
index 00000000..c7f04645
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod-login.md
@@ -0,0 +1,44 @@
+# kusion mod login
+
+Log in to an OCI registry for kusion modules
+
+## Synopsis
+
+The login command logs in to an OCI registry for kusion module artifacts.
+
+```
+kusion mod login KUSION_MODULE_REGISTRY_URL [--username username --password password]
+```
+
+## Examples
+
+```
+ # Log in to an OCI registry for kusion module artifacts
+ kusion mod login ghcr.io/kusion-module-registry --username username --password password
+
+ # Users can also set the username and password in the environment variables
+ export KUSION_MODULE_REGISTRY_USERNAME=username
+ export KUSION_MODULE_REGISTRY_PASSWORD=password
+ kusion mod login ghcr.io/kusion-module-registry
+```
+
+## Options
+
+```
+ -h, --help help for login
+ --password string The password of kusion module oci registry.
+ --username string The username of kusion module oci registry.
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion mod](kusion-mod.md) - Manage Kusion modules
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod-pull.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod-pull.md
new file mode 100644
index 00000000..339c132c
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod-pull.md
@@ -0,0 +1,52 @@
+# kusion mod pull
+
+Pull kusion modules
+
+## Synopsis
+
+The pull command downloads the kusion modules declared in the kcl.mod file.
+
+```
+kusion mod pull KCL_MOD_FILE_PATH [--host kusion-module-oci-registry --username username --password password]
+```
+
+## Examples
+
+```
+ # Pull the kusion modules declared in the kcl.mod file under the current directory
+ kusion mod pull
+
+ # Pull the kusion modules declared in the kcl.mod file under the specified directory
+ kusion mod pull /dir/to/kcl.mod
+
+ # Pull the kusion modules with a private OCI registry
+ kusion mod pull --host ghcr.io/kusion-module-registry --username username --password password
+
+ # Alternatively, users can pull the kusion modules in a private OCI registry with environment variables
+ export KUSION_MODULE_REGISTRY_HOST=ghcr.io/kusion-module-registry
+ export KUSION_MODULE_REGISTRY_USERNAME=username
+ export KUSION_MODULE_REGISTRY_PASSWORD=password
+ kusion mod pull /dir/to/kcl.mod
+```
+
+## Options
+
+```
+ -h, --help help for pull
+ --host string The host of kusion module oci registry.
+ --password string The password of kusion module oci registry.
+ --username string The username of kusion module oci registry.
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion mod](kusion-mod.md) - Manage Kusion modules
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod-push.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod-push.md
new file mode 100644
index 00000000..cc32868a
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod-push.md
@@ -0,0 +1,62 @@
+# kusion mod push
+
+Push a module to OCI registry
+
+## Synopsis
+
+The push command packages the module as an OCI artifact and pushes it to the OCI registry using the version as the image tag.
+
+```
+kusion mod push MODULE_PATH OCI_REPOSITORY_URL [--creds CREDENTIALS]
+```
+
+## Examples
+
+```
+ # Push a module of current OS arch to an OCI Registry using a token
+ kusion mod push /path/to/my-module oci://ghcr.io/org --creds
+
+ # Push a module of specific OS arch to an OCI Registry using a token
+ kusion mod push /path/to/my-module oci://ghcr.io/org --os-arch=darwin/arm64 --creds
+
+ # Push a module to an OCI Registry using a credentials in : format.
+ kusion mod push /path/to/my-module oci://ghcr.io/org --creds :
+
+ # Push a release candidate without marking it as the latest stable
+ kusion mod push /path/to/my-module oci://ghcr.io/org --latest=false
+
+ # Push a module with custom OCI annotations
+ kusion mod push /path/to/my-module oci://ghcr.io/org \
+ --annotation='org.opencontainers.image.documentation=https://app.org/docs'
+
+ # Push and sign a module with Cosign (the cosign binary must be present in PATH)
+ export COSIGN_PASSWORD=password
+ kusion mod push /path/to/my-module oci://ghcr.io/org \
+ --sign=cosign --cosign-key=/path/to/cosign.key
+```
+
+## Options
+
+```
+ -a, --annotations strings Set custom OCI annotations in '=' format.
+ --cosign-key string The Cosign private key for signing the module.
+ --creds string The credentials token for the OCI registry in or : format.
+ -h, --help help for push
+ --insecure-registry If true, allows connecting to a OCI registry without TLS or with self-signed certificates.
+ --latest Tags the current version as the latest stable module version. (default true)
+ --os-arch string The os arch of the module e.g. 'darwin/arm64', 'linux/amd64'.
+ --sign string Signs the module with the specified provider.
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion mod](kusion-mod.md) - Manage Kusion modules
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod.md
new file mode 100644
index 00000000..ed3f2203
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-mod.md
@@ -0,0 +1,39 @@
+# kusion mod
+
+Manage Kusion modules
+
+## Synopsis
+
+Commands for managing Kusion modules.
+
+These commands help you manage the lifecycle of Kusion modules.
+
+```
+kusion mod
+```
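+## Examples
+
+The subcommand pages linked below give full details; as an illustrative sketch of a typical module workflow (`my-module` is a placeholder name):
+
+```
+ # List the kusion modules in the current workspace
+ kusion mod list
+
+ # Add a module from the current workspace to the kcl.mod to use it in AppConfiguration
+ kusion mod add my-module
+
+ # Pull the kusion modules declared in the kcl.mod file
+ kusion mod pull
+```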
+
+## Options
+
+```
+ -h, --help help for mod
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion](index.md) - Kusion is the Platform Orchestrator of Internal Developer Platform
+
+* [kusion mod add](kusion-mod-add.md) - Add a module from a workspace
+* [kusion mod init](kusion-mod-init.md) - Create a kusion module along with common files and directories in the current directory
+* [kusion mod list](kusion-mod-list.md) - List kusion modules in a workspace
+* [kusion mod login](kusion-mod-login.md) - Log in to an OCI registry for kusion modules
+* [kusion mod pull](kusion-mod-pull.md) - Pull kusion modules
+* [kusion mod push](kusion-mod-push.md) - Push a module to OCI registry
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-options.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-options.md
new file mode 100644
index 00000000..cd0c5baa
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-options.md
@@ -0,0 +1,37 @@
+# kusion options
+
+Print the list of flags inherited by all commands
+
+## Synopsis
+
+Print the list of flags inherited by all commands
+
+```
+kusion options [flags]
+```
+
+## Examples
+
+```
+ # Print flags inherited by all commands
+ kusion options
+```
+
+## Options
+
+```
+ -h, --help help for options
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion](index.md) - Kusion is the Platform Orchestrator of Internal Developer Platform
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-preview.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-preview.md
new file mode 100644
index 00000000..7b5e1d9f
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-preview.md
@@ -0,0 +1,64 @@
+# kusion preview
+
+Preview a series of resource changes within the stack
+
+## Synopsis
+
+Preview a series of resource changes within the stack.
+
+ Create, update or delete resources according to the intent described in the stack. By default, Kusion will generate an execution preview and present it for your approval before taking any action.
+
+```
+kusion preview [flags]
+```
+
+## Examples
+
+```
+ # Preview with specified work directory
+ kusion preview -w /path/to/workdir
+
+ # Preview with specified arguments
+ kusion preview -D name=test -D age=18
+
+ # Preview with specifying spec file
+ kusion preview --spec-file spec.yaml
+
+ # Preview with ignored fields
+ kusion preview --ignore-fields="metadata.generation,metadata.managedFields"
+
+ # Preview with json format result
+ kusion preview -o json
+
+ # Preview without output style and color
+ kusion preview --no-style=true
+```
+
+## Options
+
+```
+ -a, --all --detail Automatically show all preview details, combined use with flag --detail
+ -D, --argument stringArray Specify arguments on the command line
+ --backend string The backend to use, supports 'local', 'oss' and 's3'.
+ -d, --detail Automatically show preview details with interactive options (default true)
+ -h, --help help for preview
+ --ignore-fields strings Ignore differences of target fields
+ --no-style no-style sets to RawOutput mode and disables all of styling
+ -o, --output string Specify the output format
+ --spec-file string Specify the spec file path as input, and the spec file must be located in the working directory or its subdirectories
+ -w, --workdir string The work directory to run Kusion CLI.
+ --workspace string The name of target workspace to operate in.
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion](index.md) - Kusion is the Platform Orchestrator of Internal Developer Platform
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-project-create.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-project-create.md
new file mode 100644
index 00000000..f68ebe3a
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-project-create.md
@@ -0,0 +1,44 @@
+# kusion project create
+
+Create a new project
+
+## Synopsis
+
+This command creates a new project.yaml file under the target directory which by default is the current working directory.
+
+ Note that the target directory needs to be an empty directory.
+
+```
+kusion project create
+```
+
+## Examples
+
+```
+ # Create a new project with the name of the current working directory
+ mkdir my-project && cd my-project
+ kusion project create
+
+ # Create a new project in a specified target directory
+ kusion project create --target /dir/to/projects/my-project
+```
+
+## Options
+
+```
+ -h, --help help for create
+ -t, --target string specify the target directory
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion project](kusion-project.md) - Project is a folder that contains a project.yaml file and is linked to a Git repository
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-project-list.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-project-list.md
new file mode 100644
index 00000000..0cd636c5
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-project-list.md
@@ -0,0 +1,51 @@
+# kusion project list
+
+List the applied projects
+
+## Synopsis
+
+This command lists all the applied projects in the target backend and target workspace.
+
+By default, it lists the projects in the current backend and the current workspace.
+
+```
+kusion project list
+```
+
+## Examples
+
+```
+ # List the applied projects in the current backend and current workspace
+ kusion project list
+
+ # List the applied projects in a specified backend and the current workspace
+ kusion project list --backend default
+
+ # List the applied projects in a specified backend and specified workspaces
+ kusion project list --backend default --workspace dev,default
+
+ # List the applied projects in a specified backend and all the workspaces
+ kusion project list --backend default --all
+```
+
+## Options
+
+```
+ -a, --all List all the projects in all the workspaces
+ --backend string The backend to use, supports 'local', 'oss' and 's3'
+ -h, --help help for list
+ --workspace strings The name of the target workspace
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion project](kusion-project.md) - Project is a folder that contains a project.yaml file and is linked to a Git repository
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-project.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-project.md
new file mode 100644
index 00000000..48dac92f
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-project.md
@@ -0,0 +1,35 @@
+# kusion project
+
+Project is a folder that contains a project.yaml file and is linked to a Git repository
+
+## Synopsis
+
+Project in Kusion is defined as any folder that contains a project.yaml file and is linked to a Git repository.
+
+Project organizes logical configurations for internal components to orchestrate the application and assembles them to suit different roles, such as developers and platform engineers.
+
+```
+kusion project [flags]
+```
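+## Examples
+
+A brief sketch drawn from the subcommand pages linked below (`my-project` is a placeholder name):
+
+```
+ # Create a new project with the name of the current working directory
+ mkdir my-project && cd my-project
+ kusion project create
+
+ # List the applied projects in the current backend and current workspace
+ kusion project list
+```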
+
+## Options
+
+```
+ -h, --help help for project
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion](index.md) - Kusion is the Platform Orchestrator of Internal Developer Platform
+
+* [kusion project create](kusion-project-create.md) - Create a new project
+* [kusion project list](kusion-project-list.md) - List the applied projects
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-release-list.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-release-list.md
new file mode 100644
index 00000000..754bbd51
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-release-list.md
@@ -0,0 +1,45 @@
+# kusion release list
+
+List all releases of the current stack
+
+## Synopsis
+
+List all releases of the current stack.
+
+This command displays information about all releases of the current stack in the current or a specified workspace, including their revision, phase, and creation time.
+
+```
+kusion release list [flags]
+```
+
+## Examples
+
+```
+ # List all releases of the current stack in current workspace
+ kusion release list
+
+ # List all releases of the current stack in a specified workspace
+ kusion release list --workspace=dev
+```
+
+## Options
+
+```
+ --backend string The backend to use, supports 'local', 'oss' and 's3'.
+ -h, --help help for list
+ -w, --workdir string The work directory to run Kusion CLI.
+ --workspace string The name of target workspace to operate in.
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion release](kusion-release.md) - Observe and operate Kusion release files
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-release-show.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-release-show.md
new file mode 100644
index 00000000..92d85a9f
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-release-show.md
@@ -0,0 +1,56 @@
+# kusion release show
+
+Show details of a release of the current or specified stack
+
+## Synopsis
+
+Show details of a release of the current or specified stack.
+
+This command displays detailed information about a release of the current project in the current or a specified workspace.
+
+```
+kusion release show [flags]
+```
+
+## Examples
+
+```
+ # Show details of the latest release of the current project in the current workspace
+ kusion release show
+
+ # Show details of a specific release of the current project in the current workspace
+ kusion release show --revision=1
+
+ # Show details of a specific release of the specified project in the specified workspace
+ kusion release show --revision=1 --project=hoangndst --workspace=dev
+
+ # Show details of the latest release with specified backend
+ kusion release show --backend=local
+
+ # Show details of the latest release with specified output format
+ kusion release show --output=json
+```
+
+## Options
+
+```
+ --backend string The backend to use, supports 'local', 'oss' and 's3'
+ -h, --help help for show
+ -o, --output string Specify the output format
+ --project string The project name
+ --revision uint The revision number of the release
+ --workspace string The workspace name
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion release](kusion-release.md) - Observe and operate Kusion release files
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-release-unlock.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-release-unlock.md
new file mode 100644
index 00000000..f5964009
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-release-unlock.md
@@ -0,0 +1,47 @@
+# kusion release unlock
+
+Unlock the latest release file of the current stack
+
+## Synopsis
+
+Unlock the latest release file of the current stack.
+
+The phase of the latest release file of the current stack in the current or a specified workspace will be set to 'failed' if it was in the stages of 'generating', 'previewing', 'applying' or 'destroying'.
+
+Please note that using the 'kusion release unlock' command may cause unexpected concurrent read-write issues with release files, so please use it with caution.
+
+```
+kusion release unlock [flags]
+```
+
+## Examples
+
+```
+ # Unlock the latest release file of the current stack in the current workspace.
+ kusion release unlock
+
+ # Unlock the latest release file of the current stack in a specified workspace.
+ kusion release unlock --workspace=dev
+```
+
+## Options
+
+```
+ --backend string The backend to use, supports 'local', 'oss' and 's3'.
+ -h, --help help for unlock
+ -w, --workdir string The work directory to run Kusion CLI.
+ --workspace string The name of target workspace to operate in.
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion release](kusion-release.md) - Observe and operate Kusion release files
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-release.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-release.md
new file mode 100644
index 00000000..689d39ac
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-release.md
@@ -0,0 +1,36 @@
+# kusion release
+
+Observe and operate Kusion release files
+
+## Synopsis
+
+Commands for observing and operating Kusion release files.
+
+These commands help you observe and operate the Kusion release files of a Project in a Workspace.
+
+```
+kusion release
+```
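+## Examples
+
+An illustrative sketch based on the subcommand pages linked below:
+
+```
+ # List all releases of the current stack in the current workspace
+ kusion release list
+
+ # Show details of the latest release of the current project in the current workspace
+ kusion release show
+```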
+
+## Options
+
+```
+ -h, --help help for release
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion](index.md) - Kusion is the Platform Orchestrator of Internal Developer Platform
+
+* [kusion release list](kusion-release-list.md) - List all releases of the current stack
+* [kusion release show](kusion-release-show.md) - Show details of a release of the current or specified stack
+* [kusion release unlock](kusion-release-unlock.md) - Unlock the latest release file of the current stack
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-resource-graph.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-resource-graph.md
new file mode 100644
index 00000000..fe9c512f
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-resource-graph.md
@@ -0,0 +1,54 @@
+# kusion resource graph
+
+Display a graph of all the resources' information of the target project and target workspaces
+
+## Synopsis
+
+Display information of all the resources of a project.
+
+This command displays information of all the resources of a project in the current or specified workspaces.
+
+```
+kusion resource graph [flags]
+```
+
+## Examples
+
+```
+ # Display information of all the resources of a project in the current workspace.
+ kusion resource graph --project quickstart
+
+ # Display information of all the resources of a project in specified workspaces.
+ kusion resource graph --project quickstart --workspace=dev,default
+
+ # Display information of all the resources of a project in all the workspaces that have been deployed.
+ kusion resource graph --project quickstart --all
+ kusion resource graph --project quickstart -a
+
+ # Display information of all the resources of a project in specified workspaces with a JSON format result.
+ kusion resource graph --project quickstart --workspace dev -o json
+```
+
+## Options
+
+```
+ -a, --all Display all the resources of all the workspaces
+ --backend string The backend to use, supports 'local', 'oss' and 's3'
+ -h, --help help for graph
+ -o, --output string Specify the output format, json only
+ --project string The name of the target project
+ --workspace strings The name of the target workspace
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion resource](kusion-resource.md) - Observe Kusion resource information
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-resource-show.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-resource-show.md
new file mode 100644
index 00000000..14511a8b
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-resource-show.md
@@ -0,0 +1,53 @@
+# kusion resource show
+
+Show details of a resource of the current or specified stack
+
+## Synopsis
+
+Show details of a resource of the current or specified stack.
+
+This command displays detailed information about a resource of the current project in the current or a specified workspace.
+
+```
+kusion resource show [flags]
+```
+
+## Examples
+
+```
+ # Show details of a specific resource of the current project in the current workspace
+ kusion resource show --id=hashicorp:viettelcloud:viettelcloud_db_instance:example-mysql
+
+ # Show details of a specific resource with specified project and workspace
+ kusion resource show --id=hashicorp:viettelcloud:viettelcloud_db_instance:example-mysql --project=example --workspace=dev
+
+ # Show details of a specific resource with specified backend
+ kusion resource show --id=hashicorp:viettelcloud:viettelcloud_db_instance:example-mysql --backend=local
+
+ # Show details of a specific resource with specified output format
+ kusion resource show --id=hashicorp:viettelcloud:viettelcloud_db_instance:example-mysql --output=json
+```
+
+## Options
+
+```
+ --backend string The backend to use, supports 'local', 'oss' and 's3'
+ -h, --help help for show
+ --id string The resource ID
+ -o, --output string Specify the output format
+ --project string The project name
+ --workspace string The workspace name
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion resource](kusion-resource.md) - Observe Kusion resource information
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-resource.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-resource.md
new file mode 100644
index 00000000..b33bb803
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-resource.md
@@ -0,0 +1,35 @@
+# kusion resource
+
+Observe Kusion resource information
+
+## Synopsis
+
+Commands for observing Kusion resources.
+
+These commands help you observe the information of Kusion resources within a project.
+
+```
+kusion resource
+```
+
+## Options
+
+```
+ -h, --help help for resource
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion](index.md) - Kusion is the Platform Orchestrator of Internal Developer Platform
+
+* [kusion resource graph](kusion-resource-graph.md) - Display a graph of all the resources' information of the target project and target workspaces
+* [kusion resource show](kusion-resource-show.md) - Show details of a resource of the current or specified stack
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-server.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-server.md
new file mode 100644
index 00000000..b7749533
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-server.md
@@ -0,0 +1,59 @@
+# kusion server
+
+Start kusion server
+
+## Synopsis
+
+Start kusion server.
+
+```
+kusion server [flags]
+```
+
+## Examples
+
+```
+ # Start kusion server
 kusion server --mode kcp --db-host localhost:3306 --db-user root --db-pass 123456
+```
+
+## Options
+
+```
+ -a, --auth-enabled Specify whether token authentication should be enabled
+ -k, --auth-key-type string Specify the auth key type. Default to RSA (default "RSA")
+ --auth-whitelist strings Specify the list of whitelisted IAM accounts to allow access
+ --auto-migrate Whether to enable automatic migration
+ --db-host string database host
+ --db-name string the database name
+ --db-pass string the user password used to access database
+ --db-port int database port
+ --db-user string the user name used to access database
+ --default-backend-endpoint string the default backend endpoint
+ --default-backend-name string the default backend name
+ --default-backend-type string the default backend type
+ --default-source-remote string the default source remote
+ -d, --dev-portal-enabled Enable dev portal. Default to true. (default true)
+ -h, --help help for server
+ --log-file-path string File path to write logs to. Default to /home/admin/logs/kusion.log (default "/home/admin/logs/kusion.log")
+ --max-async-buffer int Maximum number of buffer zones during concurrent async executions including generate, preview, apply and destroy. Default to 100. (default 100)
+ --max-async-concurrent int Maximum number of concurrent async executions including generate, preview, apply and destroy. Default to 10. (default 10)
+ --max-concurrent int Maximum number of concurrent executions including preview, apply and destroy. Default to 10. (default 10)
+ --migrate-file string The migrate sql file
+ -p, --port int Specify the port to listen on (default 80)
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion](index.md) - Kusion is the Platform Orchestrator of Internal Developer Platform
+
+Find more information at: https://www.kusionstack.io
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-stack-create.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-stack-create.md
new file mode 100644
index 00000000..2804c7c9
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-stack-create.md
@@ -0,0 +1,49 @@
+# kusion stack create
+
+Create a new stack
+
+## Synopsis
+
+This command creates a new stack under the target directory, which by default is the current working directory.
+
+The stack folder to be created contains 'stack.yaml', 'kcl.mod' and 'main.k' with the specified values.
+
+Note that the target directory needs to be a valid project directory with a project.yaml file.
+
+```
+kusion stack create
+```
+
+## Examples
+
+```
+ # Create a new stack at current project directory
+ kusion stack create dev
+
+ # Create a new stack in a specified target project directory
+ kusion stack create dev --target /dir/to/projects/my-project
+
+ # Create a new stack copied from the referenced stack under the target project directory
+ kusion stack create prod --copy-from dev
+```
+
+## Options
+
+```
+ --copy-from string specify the referenced stack path to copy from
+ -h, --help help for create
+ -t, --target string specify the target project directory
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion stack](kusion-stack.md) - Stack is a folder that contains a stack.yaml file within the corresponding project directory
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-stack.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-stack.md
new file mode 100644
index 00000000..74a4a928
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-stack.md
@@ -0,0 +1,34 @@
+# kusion stack
+
+Stack is a folder that contains a stack.yaml file within the corresponding project directory
+
+## Synopsis
+
+Stack in Kusion is defined as any folder that contains a stack.yaml file within the corresponding project directory.
+
+A stack provides a mechanism to isolate multiple deployments of the same application, together with the target workspace to which an application will be deployed.
+
+```
+kusion stack [flags]
+```
+
+## Options
+
+```
+ -h, --help help for stack
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion](index.md) - Kusion is the Platform Orchestrator of Internal Developer Platform
+
+* [kusion stack create](kusion-stack-create.md) - Create a new stack
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-version.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-version.md
new file mode 100644
index 00000000..7aa40a0f
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-version.md
@@ -0,0 +1,38 @@
+# kusion version
+
+Print the Kusion version information for the current context
+
+## Synopsis
+
+Print the Kusion version information for the current context
+
+```
+kusion version [flags]
+```
+
+## Examples
+
+```
+ # Print the Kusion version
+ kusion version
+```
+
+## Options
+
+```
+ -h, --help help for version
+ -o, --output string Output format. Only json format is supported for now
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion](index.md) - Kusion is the Platform Orchestrator of Internal Developer Platform
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace-create.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace-create.md
new file mode 100644
index 00000000..be6bcdb8
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace-create.md
@@ -0,0 +1,46 @@
+# kusion workspace create
+
+Create a new workspace
+
+## Synopsis
+
+This command creates a workspace with a specified name and configuration file, where the file must be in the YAML format.
+
+```
+kusion workspace create
+```
+
+## Examples
+
+```
+ # Create a workspace
+ kusion workspace create dev -f dev.yaml
+
+ # Create a workspace and set as current
+ kusion workspace create dev -f dev.yaml --current
+
+ # Create a workspace in a specified backend
+ kusion workspace create prod -f prod.yaml --backend oss-prod
+```
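The file passed via `-f` is a plain YAML document whose contents depend on the Kusion modules your platform uses. As an illustrative sketch only (the `database` module name and its fields below are assumptions, not a fixed schema), a workspace configuration might look like:

```yaml
# dev.yaml -- hypothetical workspace configuration
modules:
  database:
    default:
      type: local
      version: "8.0"
```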
+
+## Options
+
+```
+ --backend string the backend name
+ --current set the creating workspace as current
+ -f, --file string the path of workspace configuration file
+ -h, --help help for create
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion workspace](kusion-workspace.md) - Workspace is a logical concept representing a target that stacks will be deployed to
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace-delete.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace-delete.md
new file mode 100644
index 00000000..80faba6d
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace-delete.md
@@ -0,0 +1,44 @@
+# kusion workspace delete
+
+Delete a workspace
+
+## Synopsis
+
+This command deletes the current or a specified workspace.
+
+```
+kusion workspace delete
+```
+
+## Examples
+
+```
+ # Delete the current workspace
+ kusion workspace delete
+
+ # Delete a specified workspace
+ kusion workspace delete dev
+
+ # Delete a specified workspace in a specified backend
+ kusion workspace delete prod --backend oss-prod
+```
+
+## Options
+
+```
+ --backend string the backend name
+ -h, --help help for delete
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion workspace](kusion-workspace.md) - Workspace is a logical concept representing a target that stacks will be deployed to
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace-list.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace-list.md
new file mode 100644
index 00000000..df590d95
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace-list.md
@@ -0,0 +1,41 @@
+# kusion workspace list
+
+List all workspace names
+
+## Synopsis
+
+This command lists the names of all workspaces.
+
+```
+kusion workspace list
+```
+
+## Examples
+
+```
+ # List all workspace names
+ kusion workspace list
+
+ # List all workspace names in a specified backend
+ kusion workspace list --backend oss-prod
+```
+
+## Options
+
+```
+ --backend string the backend name
+ -h, --help help for list
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion workspace](kusion-workspace.md) - Workspace is a logical concept representing a target that stacks will be deployed to
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace-show.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace-show.md
new file mode 100644
index 00000000..b043960c
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace-show.md
@@ -0,0 +1,44 @@
+# kusion workspace show
+
+Show a workspace configuration
+
+## Synopsis
+
+This command gets the current or a specified workspace configuration.
+
+```
+kusion workspace show
+```
+
+## Examples
+
+```
+ # Show current workspace configuration
+ kusion workspace show
+
+ # Show a specified workspace configuration
+ kusion workspace show dev
+
+ # Show a specified workspace in a specified backend
+ kusion workspace show prod --backend oss-prod
+```
+
+## Options
+
+```
+ --backend string the backend name
+ -h, --help help for show
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion workspace](kusion-workspace.md) - Workspace is a logical concept representing a target that stacks will be deployed to
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace-switch.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace-switch.md
new file mode 100644
index 00000000..4c16b43b
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace-switch.md
@@ -0,0 +1,41 @@
+# kusion workspace switch
+
+Switch the current workspace
+
+## Synopsis
+
+This command switches the current workspace to a specified one, which must already have been created.
+
+```
+kusion workspace switch
+```
+
+## Examples
+
+```
+ # Switch the current workspace
+ kusion workspace switch dev
+
+ # Switch the current workspace in a specified backend
+ kusion workspace switch prod --backend oss-prod
+```
+
+## Options
+
+```
+ --backend string the backend name
+ -h, --help help for switch
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion workspace](kusion-workspace.md) - Workspace is a logical concept representing a target that stacks will be deployed to
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace-update.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace-update.md
new file mode 100644
index 00000000..9eda155f
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace-update.md
@@ -0,0 +1,50 @@
+# kusion workspace update
+
+Update a workspace configuration
+
+## Synopsis
+
+This command updates a workspace configuration with a specified configuration file, where the file must be in the YAML format.
+
+```
+kusion workspace update
+```
+
+## Examples
+
+```
+ # Update the current workspace
+ kusion workspace update -f dev.yaml
+
+ # Update a specified workspace and set as current
+ kusion workspace update dev -f dev.yaml --current
+
+ # Update a specified workspace in a specified backend
+ kusion workspace update prod -f prod.yaml --backend oss-prod
+
+ # Update a specified workspace with a specified name
+ kusion workspace update dev --rename dev-test
+```
+
+## Options
+
+```
+ -b, --backend string the backend name
      --current         set the updated workspace as current
+ -f, --file string the path of workspace configuration file
+ -h, --help help for update
+ -r, --rename string the new name of the workspace
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion workspace](kusion-workspace.md) - Workspace is a logical concept representing a target that stacks will be deployed to
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace.md b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace.md
new file mode 100644
index 00000000..d0576700
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/1-commands/kusion-workspace.md
@@ -0,0 +1,39 @@
+# kusion workspace
+
+Workspace is a logical concept representing a target that stacks will be deployed to
+
+## Synopsis
+
+Workspace is a logical concept representing a target that stacks will be deployed to.
+
+Workspace is managed by platform engineers and contains a set of configurations that application developers do not want to, or should not, be concerned with; it is reused by multiple stacks belonging to different projects.
+
+```
+kusion workspace [flags]
+```
+
+## Options
+
+```
+ -h, --help help for workspace
+```
+
+## Options inherited from parent commands
+
+```
+ --profile string Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) (default "none")
+ --profile-output string Name of the file to write the profile to (default "profile.pprof")
+```
+
+## SEE ALSO
+
+* [kusion](index.md) - Kusion is the Platform Orchestrator of Internal Developer Platform
+
+* [kusion workspace create](kusion-workspace-create.md) - Create a new workspace
+* [kusion workspace delete](kusion-workspace-delete.md) - Delete a workspace
+* [kusion workspace list](kusion-workspace-list.md) - List all workspace names
+* [kusion workspace show](kusion-workspace-show.md) - Show a workspace configuration
+* [kusion workspace switch](kusion-workspace-switch.md) - Switch the current workspace
+* [kusion workspace update](kusion-workspace-update.md) - Update a workspace configuration
+
+###### Auto generated by spf13/cobra on 21-Jan-2025
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/_category_.json b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/_category_.json
new file mode 100644
index 00000000..0df3bade
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Developer Schemas"
+}
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/app-configuration.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/app-configuration.md
new file mode 100644
index 00000000..6808bee7
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/app-configuration.md
@@ -0,0 +1,35 @@
+# appconfiguration
+
+## Schema AppConfiguration
+
+AppConfiguration is a developer-centric definition that describes how to run an Application. This application model builds upon a decade of experience at AntGroup running a super-large-scale internal developer platform, combined with best-of-breed ideas and practices from the community.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**accessories**|{str:any}|Accessories defines a collection of accessories that will be attached to the workload.|{}|
+|**annotations**|{str:str}|Annotations are key/value pairs that attach arbitrary non-identifying metadata to resources.|{}|
+|**labels**|{str:str}|Labels can be used to attach arbitrary metadata as key-value pairs to resources.|{}|
+|**workload** `required`|[service.Service](workload/service#schema-service) \| [wl.Job](workload/job#schema-job)|Workload defines how to run your application code. Currently supported workload profiles include Service and Job.|N/A|
+
+### Examples
+```python
+# Instantiate an App with a long-running service whose image is "nginx:v1"
+
+import kam as ac
+import kam.workload as wl
+import kam.workload.container as c
+
+helloworld : ac.AppConfiguration {
+ workload: wl.Service {
+ containers: {
+ "nginx": c.Container {
+ image: "nginx:v1"
+ }
+ }
+ }
+}
+```
+
+
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/database/mysql.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/database/mysql.md
new file mode 100644
index 00000000..8f6135bb
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/database/mysql.md
@@ -0,0 +1,39 @@
+# mysql
+
+## Schema MySQL
+
+MySQL describes the attributes to locally deploy or create a cloud provider managed mysql database instance for the workload.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**type** `required`|"local" \| "cloud"|Type defines whether the mysql database is deployed locally or provided by a cloud vendor.||
+|**version** `required`|str|Version defines the mysql version to use.||
+
+### Examples
+```python
+# Instantiate a local mysql database with version 8.0.
+
+import mysql
+
+accessories: {
+ "mysql": mysql.MySQL {
+ type: "local"
+ version: "8.0"
+ }
+}
+```
+
+
+### Credentials and Connectivity
+
+For sensitive information such as the **host**, **username** and **password** for the database instance, Kusion will automatically inject them into the application container for users through environment variables. The relevant environment variables are listed in the table below.
+
+| Name | Explanation |
+| ---- | ----------- |
+| KUSION_DB\_HOST\_`` | Host address for accessing the database instance |
+| KUSION_DB\_USERNAME\_`` | Account username for accessing the database instance |
+| KUSION_DB\_PASSWORD\_`` | Account password for accessing the database instance |
+
+The `databaseName` can be declared in [workspace configs of mysql](../../2-workspace-configs/database/mysql.md), and Kusion will automatically concatenate the ``, ``, `` and `mysql` with `-` if not specified. When injecting the credentials into containers' environment variables, Kusion will convert the `databaseName` to uppercase, and replace `-` with `_`.
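As a sketch of how an application might read these injected variables (the helper function and the example database name below are illustrative, not part of Kusion):

```python
import os

def mysql_host(database_name: str) -> str:
    # Kusion upper-cases the database name and replaces "-" with "_"
    # before appending it to the env var prefix, so mirror that here.
    suffix = database_name.upper().replace("-", "_")
    return os.environ["KUSION_DB_HOST_" + suffix]
```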
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/database/postgres.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/database/postgres.md
new file mode 100644
index 00000000..ad8cbb7e
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/database/postgres.md
@@ -0,0 +1,39 @@
+# postgres
+
+## Schema PostgreSQL
+
+PostgreSQL describes the attributes to locally deploy or create a cloud provider managed postgresql database instance for the workload.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**type** `required`|"local" \| "cloud"|Type defines whether the postgresql database is deployed locally or provided by a cloud vendor.||
+|**version** `required`|str|Version defines the postgres version to use.||
+
+### Examples
+```python
+# Instantiate a local postgresql database with image version 14.0.
+
+import postgres
+
+accessories: {
+ "postgres": postgres.PostgreSQL {
+ type: "local"
+ version: "14.0"
+ }
+}
+```
+
+
+### Credentials and Connectivity
+
+For sensitive information such as the **host**, **username** and **password** for the database instance, Kusion will automatically inject them into the application container for users through environment variables. The relevant environment variables are listed in the table below.
+
+| Name | Explanation |
+| ---- | ----------- |
+| KUSION_DB\_HOST\_`` | Host address for accessing the database instance |
+| KUSION_DB\_USERNAME\_`` | Account username for accessing the database instance |
+| KUSION_DB\_PASSWORD\_`` | Account password for accessing the database instance |
+
+The `databaseName` can be declared in [workspace configs of postgres](../../2-workspace-configs/database/postgres.md), and Kusion will automatically concatenate the ``, ``, `` and `postgres` with `-` if not specified. When injecting the credentials into containers' environment variables, Kusion will convert the `databaseName` to uppercase, and replace `-` with `_`.
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/inference/inference.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/inference/inference.md
new file mode 100644
index 00000000..4cdb853a
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/inference/inference.md
@@ -0,0 +1,50 @@
+# inference
+
+## Index
+
+- v1
+ - [Inference](#inference)
+
+## Schemas
+
+### Inference
+
+Inference is a module schema consisting of model, framework and so on
+
+#### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**framework** `required`|"Ollama" \| "KubeRay"|The framework or environment in which the model operates.||
+|**model** `required`|str|The model name to be used for inference.||
+|**num_ctx**|int|The size of the context window used to generate the next token.|2048|
+|**num_predict**|int|Maximum number of tokens to predict when generating text.|128|
+|**system**|str|The system message, which will be set in the template.|""|
+|**temperature**|float|A parameter that determines whether the model's output is more random and creative or more predictable.|0.8|
+|**template**|str|The full prompt template, which will be sent to the model.|""|
+|**top_k**|int|A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.|40|
+|**top_p**|float|A higher value (e.g. 0.9) will give more diverse answers, while a lower value (e.g. 0.5) will be more conservative.|0.9|
+#### Examples
+
+```
+import inference.v1.infer
+
+accessories: {
+ "inference@v0.1.0": infer.Inference {
+ model: "llama3"
+ framework: "Ollama"
+
+ system: "You are Mario from super mario bros, acting as an assistant."
+ template: "{{ if .System }}<|im_start|>system {{ .System }}<|im_end|> {{ end }}{{ if .Prompt }}<|im_start|>user {{ .Prompt }}<|im_end|> {{ end }}<|im_start|>assistant"
+
+ top_k: 40
+ top_p: 0.9
+ temperature: 0.8
+
+ num_predict: 128
+ num_ctx: 2048
+ }
+}
+```
+
+
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/internal/common.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/internal/common.md
new file mode 100644
index 00000000..8b649196
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/internal/common.md
@@ -0,0 +1,17 @@
+# common
+
+## Schema WorkloadBase
+
+WorkloadBase defines a set of attributes shared by different workload profiles, e.g. Service and Job. You can inherit this schema to reuse these common attributes.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**annotations**|{str:str}|Annotations are key/value pairs that attach arbitrary non-identifying metadata to the workload.||
+|**containers** `required`|{str:}|Containers defines the templates of the containers to be run. More info: https://kubernetes.io/docs/concepts/containers||
+|**labels**|{str:str}|Labels are key/value pairs that are attached to the workload.||
+|**replicas**|int|Number of container replicas based on this configuration that should be run.||
+|**secrets**|{str:[s.Secret](#schema-secret)}|Secrets can be used to store small amount of sensitive data e.g. password, token.||
+
+
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/internal/container/container.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/internal/container/container.md
new file mode 100644
index 00000000..ce170fc6
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/internal/container/container.md
@@ -0,0 +1,63 @@
+# container
+
+## Schema Container
+
+Container describes how the Application's tasks are expected to be run. Depending on the replicas parameter, one or more containers can be created from each template.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**args**|[str]|Arguments to the entrypoint. Args will overwrite the CMD value set in the Dockerfile, otherwise the Docker image's CMD is used if this is not provided.||
+|**command**|[str]|Entrypoint array. Not executed within a shell. Command will overwrite the ENTRYPOINT value set in the Dockerfile, otherwise the Docker image's ENTRYPOINT is used if this is not provided.||
+|**dirs**|{str:str}|Collection of volumes mounted into the container's filesystem. The dirs parameter is a dict with the key being the folder name in the container and the value being the referenced volume.||
+|**env**|{str:str}|List of environment variables to set in the container. The value of an environment variable may be static text or a value from a secret.||
+|**files**|{str:[FileSpec](#filespec)}|List of files to create in the container. The files parameter is a dict with the key being the file name in the container and the value being the target file specification.||
+|**image** `required`|str|Image refers to the Docker image name to run for this container. More info: https://kubernetes.io/docs/concepts/containers/images||
+|**lifecycle**|[lc.Lifecycle](lifecycle/lifecycle.md#schema-lifecycle)|Lifecycle refers to actions that the management system should take in response to container lifecycle events.||
+|**livenessProbe**|[p.Probe](probe/probe.md#schema-probe)|LivenessProbe indicates if a running process is healthy. The container will be restarted if the probe fails.||
+|**readinessProbe**|[p.Probe](probe/probe.md#schema-probe)|ReadinessProbe indicates whether an application is available to handle requests.||
+|**resources**|{str:str}|Map of resource requirements the container should run with. The resources parameter is a dict with the key being the resource name and the value being the resource value.||
+|**startupProbe**|[p.Probe](probe/probe.md#schema-probe)|StartupProbe indicates that the container has started for the first time. The container will be restarted if the probe fails.||
+|**workingDir**|str|The working directory of the running process defined in the entrypoint. The container runtime's default will be used if this is not specified.||
+
+### Examples
+```python
+import kam.workload.container as c
+
+web = c.Container {
+ image: "nginx:latest"
+ command: ["/bin/sh", "-c", "echo hi"]
+ env: {
+ "name": "value"
+ }
+ resources: {
+ "cpu": "2"
+ "memory": "4Gi"
+ }
+}
+```
+
+## Schema FileSpec
+
+FileSpec defines the target file in a Container.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**content**|str|File content in plain text.||
+|**contentFrom**|str|Source for the file content, reference to a secret or configmap value.||
+|**mode** `required`|str|Mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511.|"0644"|
+
+### Examples
+```python
+import kam.workload.container as c
+
+tmpFile = c.FileSpec {
+ content: "some file contents"
+ mode: "0777"
+}
+```
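Since mode accepts either octal or decimal notation, the two forms describe the same permission bits; for example, "0644" (rw-r--r--) is 420 in decimal:

```python
# "0644" interpreted as octal equals 420 in decimal
assert int("0644", 8) == 420
# "0777" (rwxrwxrwx) equals 511, the documented decimal upper bound
assert int("0777", 8) == 511
```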
+
+
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/internal/container/lifecycle/lifecycle.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/internal/container/lifecycle/lifecycle.md
new file mode 100644
index 00000000..91123526
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/internal/container/lifecycle/lifecycle.md
@@ -0,0 +1,29 @@
+# lifecycle
+
+## Schema Lifecycle
+
+Lifecycle describes actions that the management system should take in response
to container lifecycle events.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**postStart**| | |The action to be taken after a container is created.
More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks||
+|**preStop**| | |The action to be taken before a container is terminated due to an API request or
management event such as liveness/startup probe failure, preemption, resource contention, etc.
More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks||
+
+### Examples
+```python
+import kam.workload.container.probe as p
+import kam.workload.container.lifecycle as lc
+
+lifecycleHook = lc.Lifecycle {
+ preStop: p.Exec {
+ command: ["preStop.sh"]
+ }
+ postStart: p.Http {
+ url: "http://localhost:80"
+ }
+}
+```
+
+
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/internal/container/probe/probe.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/internal/container/probe/probe.md
new file mode 100644
index 00000000..64d709cd
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/internal/container/probe/probe.md
@@ -0,0 +1,92 @@
+# probe
+
+## Schema Probe
+
+Probe describes a health check to be performed against a container to determine whether it is
alive or ready to receive traffic. There are three probe types: readiness, liveness, and startup.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**failureThreshold**|int|Minimum consecutive failures for the probe to be considered failed after having succeeded.||
+|**initialDelaySeconds**|int|The number of seconds before health checking is activated.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes||
+|**periodSeconds**|int|How often (in seconds) to perform the probe.||
+|**probeHandler** `required`|[Exec](#schema-exec) \| [Http](#schema-http) \| [Tcp](#schema-tcp)|The action taken to determine whether a container is alive or healthy.||
+|**successThreshold**|int|Minimum consecutive successes for the probe to be considered successful after having failed.||
+|**terminationGracePeriod**|int|Duration in seconds to wait before terminating gracefully upon probe failure.||
+|**timeoutSeconds**|int|The number of seconds after which the probe times out.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes||
+
+### Examples
+```python
+import kam.workload.container.probe as p
+
+probe = p.Probe {
+ probeHandler: p.Http {
+ path: "/healthz"
+ }
+ initialDelaySeconds: 10
+}
+```
+
+## Schema Exec
+
+Exec describes a "run in container" action.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**command** `required`|[str]|The command line to execute inside the container.||
+
+### Examples
+```python
+import kam.workload.container.probe as p
+
+execProbe = p.Exec {
+ command: ["probe.sh"]
+}
+```
+
+## Schema Http
+
+Http describes an action based on HTTP Get requests.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**headers**|{str:str}|Collection of custom headers to set in the request.||
+|**url** `required`|str|The fully qualified URL to send HTTP requests to.||
+
+### Examples
+```python
+import kam.workload.container.probe as p
+
+httpProbe = p.Http {
+ url: "http://localhost:80"
+ headers: {
+ "X-HEADER": "VALUE"
+ }
+}
+```
+
+## Schema Tcp
+
+Tcp describes an action based on opening a socket.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**url** `required`|str|The fully qualified URL to open a socket to.||
+
+### Examples
+```python
+import kam.workload.container.probe as p
+
+tcpProbe = p.Tcp {
+ url: "tcp://localhost:1234"
+}
+```
+
+
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/internal/secret/secret.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/internal/secret/secret.md
new file mode 100644
index 00000000..1f13bb85
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/internal/secret/secret.md
@@ -0,0 +1,29 @@
+# secret
+
+## Schema Secret
+
+Secrets are used to provide sensitive data such as passwords, API keys,
TLS certificates, tokens, or other credentials.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**data**|{str:str}|Data contains the non-binary secret data in string form.||
+|**immutable**|bool|Immutable, if set to true, ensures that data stored in the Secret cannot be updated.||
+|**params**|{str:str}|Collection of parameters used to facilitate programmatic handling of secret data.||
+|**type** `required`|"basic" \| "token" \| "opaque" \| "certificate" \| "external"|Type of secret, used to facilitate programmatic handling of secret data.||
+
+### Examples
+```python
+import kam.workload.secret as sec
+
+basicAuth = sec.Secret {
+ type: "basic"
+ data: {
+ "username": ""
+ "password": ""
+ }
+}
+```
+
+
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/k8s_manifest/k8s_manifest.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/k8s_manifest/k8s_manifest.md
new file mode 100644
index 00000000..3e749af9
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/k8s_manifest/k8s_manifest.md
@@ -0,0 +1,30 @@
+# k8s_manifest
+
+## Schema K8sManifest
+
+K8sManifest defines the paths of the YAML files, or the directories of the raw Kubernetes manifests, which will be jointly appended to the Resources of Spec.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**paths** `required`|[str]|The paths of the YAML files, or the directories of the raw Kubernetes manifests.||
+
+### Examples
+
+```python
+import k8s_manifest
+
+accessories: {
+ "k8s_manifest": k8s_manifest.K8sManifest {
+ paths: [
+ # The path of a YAML file.
+ "/path/to/my/k8s_manifest.yaml",
+ # The path of a directory containing K8s manifests.
+ "/dir/to/my/k8s_manifests"
+ ]
+ }
+}
+```
+
+
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/monitoring/prometheus.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/monitoring/prometheus.md
new file mode 100644
index 00000000..bf2e551e
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/monitoring/prometheus.md
@@ -0,0 +1,24 @@
+# prometheus
+
+## Schema Prometheus
+
+Prometheus can be used to define monitoring requirements
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**path**|str|The path to scrape metrics from.|"/metrics"|
+|**port**|str|The port to scrape metrics from. When using Prometheus operator, this needs to be the port NAME. Otherwise, this can be a port name or a number.|container ports when scraping pod (monitorType is pod) and service port when scraping service (monitorType is service)|
+
+### Examples
+```python
+import monitoring as m
+
+"monitoring": m.Prometheus {
+ path: "/metrics"
+ port: "web"
+}
+```
+
+
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/network/network.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/network/network.md
new file mode 100644
index 00000000..daa33121
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/network/network.md
@@ -0,0 +1,51 @@
+# network
+
+## Schema Network
+
+Network defines the exposed ports of a Service, which can be used to describe how the Service
gets accessed.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**ports**|[[Port](#schema-port)]|The list of ports that the Workload should expose.||
+
+### Examples
+```python
+import network as n
+
+"network": n.Network {
+ ports: [
+ n.Port {
+ port: 80
+ public: True
+ }
+ ]
+}
+```
+
+## Schema Port
+
+Port defines the exposed port of a Workload, which can be used to describe how the Workload gets accessed.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**port** `required`|int|The exposed port of the Workload.|80|
+|**protocol** `required`|"TCP" \| "UDP"|The protocol to access the port.|"TCP"|
+|**public** `required`|bool|Public defines whether the port can be accessed through Internet.|False|
+|**targetPort**|int|The backend container port. If empty, it defaults to the same value as port.||
+
+### Examples
+
+```python
+import network as n
+
+port = n.Port {
+ port: 80
+ targetPort: 8080
+ protocol: "TCP"
+ public: True
+}
+```
+
+
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/opensearch/opensearch.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/opensearch/opensearch.md
new file mode 100644
index 00000000..49c0ebc4
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/opensearch/opensearch.md
@@ -0,0 +1,32 @@
+# opensearch
+
+## Index
+
+- [OpenSearch](#opensearch)
+
+## Schemas
+
+### OpenSearch
+
+OpenSearch is the module schema for OpenSearch. Currently, it only supports the AWS OpenSearch Service.
+
+#### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**domainName** `required`|str|Name of the domain.||
+|**engineVersion**|str|Either Elasticsearch_X.Y or OpenSearch_X.Y to specify the engine version for the Amazon OpenSearch Service domain. For example, OpenSearch_1.0 or Elasticsearch_7.9. Defaults to the latest version of OpenSearch.||
+
+#### Examples
+
+```python
+import opensearch as o
+
+accessories: {
+ "opensearch": o.OpenSearch {
+ domainName: "example"
+ engineVersion: "OpenSearch_1.0"
+ }
+}
+```
+
+
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/opsrule/opsrule.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/opsrule/opsrule.md
new file mode 100644
index 00000000..8313090a
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/opsrule/opsrule.md
@@ -0,0 +1,35 @@
+# opsrule
+
+## Schema OpsRule
+
+OpsRule describes operation rules for various Day-2 Operations. Once declared, these
operation rules will be checked before any Day-2 operations.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**maxUnavailable**|int \| str|The maximum percentage of the total pod instances in the component that can be
simultaneously unhealthy.|"25%"|
+
+### Examples
+
+```python
+import opsrule as o
+import kam.v1.app_configuration as ac
+import kam.v1.workload as wl
+import kam.v1.workload.container as c
+
+helloworld : ac.AppConfiguration {
+ workload: wl.Service {
+ containers: {
+ "nginx": c.Container {
+ image: "nginx:v1"
+ }
+ }
+ }
+ accessories: {
+ "opsrule" : o.OpsRule {
+ maxUnavailable: "30%"
+ }
+ }
+}
+```
+
+
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/workload/job.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/workload/job.md
new file mode 100644
index 00000000..52194488
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/workload/job.md
@@ -0,0 +1,251 @@
+# job
+
+## Schemas
+- [Job](#schema-job)
+ - [Container](#schema-container)
+ - [Filespec](#schema-filespec)
+ - [LifeCycle](#schema-lifecycle)
+ - [Probe](#schema-probe)
+ - [Exec](#schema-exec)
+ - [Http](#schema-http)
+ - [Tcp](#schema-tcp)
+ - [Secret](#schema-secret)
+
+## Schema Job
+
+Job is a kind of workload profile that describes how to run your application code. This
is typically used for tasks that take from a few seconds to a few days to complete.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**annotations**|{str:str}|Annotations are key/value pairs that attach arbitrary non-identifying metadata to the workload.||
+|**containers** `required`|{str:[Container](../internal/container#schema-container)}|Containers defines the templates of the containers to be run.
More info: https://kubernetes.io/docs/concepts/containers||
+|**labels**|{str:str}|Labels are key/value pairs that are attached to the workload.||
+|**replicas**|int|Number of container replicas based on this configuration that should be run.||
+|**schedule** `required`|str|The scheduling strategy in Cron format. More info: https://en.wikipedia.org/wiki/Cron.||
+|**secrets**|{str:[Secret](../internal/secret/secret.md#schema-secret)}|Secrets can be used to store small amounts of sensitive data, e.g. passwords or tokens.||
+
+### Examples
+```python
+# Instantiate a job with the busybox image that runs every hour
+
+import kam.workload as wl
+import kam.workload.container as c
+
+echoJob : wl.Job {
+ containers: {
+ "busybox": c.Container{
+ image: "busybox:1.28"
+ command: ["/bin/sh", "-c", "echo hello"]
+ }
+ }
+ schedule: "0 * * * *"
+}
+```
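+
+Beyond schedule, the replicas and labels attributes documented above can be set on the same job; a hypothetical sketch (the command and label values are illustrative):
+
+```python
+import kam.workload as wl
+import kam.workload.container as c
+
+batchJob : wl.Job {
+    containers: {
+        "worker": c.Container {
+            image: "busybox:1.28"
+            # Illustrative command; replace with your own batch entrypoint.
+            command: ["/bin/sh", "-c", "run-batch.sh"]
+        }
+    }
+    replicas: 2
+    labels: {
+        "team": "batch"
+    }
+    # Run every day at 02:30.
+    schedule: "30 2 * * *"
+}
+```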
+
+### Base Schema
+[WorkloadBase](../internal/common#schema-workloadbase)
+
+## Schema Container
+
+Container describes how the Application's tasks are expected to be run. Depending on
the replicas parameter, one or more containers can be created from each template.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**args**|[str]|Arguments to the entrypoint.
Args will overwrite the CMD value set in the Dockerfile; otherwise the Docker
image's CMD is used if this is not provided.||
+|**command**|[str]|Entrypoint array. Not executed within a shell.
Command will overwrite the ENTRYPOINT value set in the Dockerfile; otherwise the Docker
image's ENTRYPOINT is used if this is not provided.||
+|**dirs**|{str:str}|Collection of volumes mounted into the container's filesystem.
The dirs parameter is a dict with the key being the folder name in the container and the value
being the referenced volume.||
+|**env**|{str:str}|List of environment variables to set in the container.
The value of the environment variable may be static text or a value from a secret.||
+|**files**|{str:[FileSpec](#schema-filespec)}|List of files to create in the container.
The files parameter is a dict with the key being the file name in the container and the value
being the target file specification.||
+|**image** `required`|str|Image refers to the Docker image name to run for this container.
More info: https://kubernetes.io/docs/concepts/containers/images||
+|**lifecycle**|[lc.Lifecycle](../internal/container/lifecycle/lifecycle.md#schema-lifecycle)|Lifecycle refers to actions that the management system should take in response to container lifecycle events.||
+|**livenessProbe**|[p.Probe](../internal/container/probe/probe.md#schema-probe)|LivenessProbe indicates if a running process is healthy.
Container will be restarted if the probe fails.||
+|**readinessProbe**|[p.Probe](../internal/container/probe/probe.md#schema-probe)|ReadinessProbe indicates whether an application is available to handle requests.||
+|**resources**|{str:str}|Map of resource requirements the container should run with.
The resources parameter is a dict with the key being the resource name and the value being
the resource value.||
+|**startupProbe**|[p.Probe](../internal/container/probe/probe.md#schema-probe)|StartupProbe indicates that the container has started for the first time.
Container will be restarted if the probe fails.||
+|**workingDir**|str|The working directory of the running process defined in entrypoint.
If not specified, the container runtime's default will be used.||
+
+### Examples
+```python
+import kam.workload.container as c
+
+web = c.Container {
+ image: "nginx:latest"
+ command: ["/bin/sh", "-c", "echo hi"]
+ env: {
+ "name": "value"
+ }
+ resources: {
+ "cpu": "2"
+ "memory": "4Gi"
+ }
+}
+```
+
+## Schema FileSpec
+
+FileSpec defines the target file in a Container.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**content**|str|File content in plain text.||
+|**contentFrom**|str|Source for the file content, a reference to a secret or configmap value.||
+|**mode** `required`|str|Mode bits used to set permissions on this file, must be an octal value
between 0000 and 0777 or a decimal value between 0 and 511.|"0644"|
+
+### Examples
+```python
+import kam.workload.container as c
+
+tmpFile = c.FileSpec {
+ content: "some file contents"
+ mode: "0777"
+}
+```
+
+### Schema Lifecycle
+
+Lifecycle describes actions that the management system should take in response to container lifecycle events.
+
+#### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**postStart**| | |The action to be taken after a container is created.
More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks||
+|**preStop**| | |The action to be taken before a container is terminated due to an API request or
management event such as liveness/startup probe failure, preemption, resource contention, etc.
More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks||
+#### Examples
+
+```python
+import kam.workload.container.probe as p
+import kam.workload.container.lifecycle as lc
+
+lifecycleHook = lc.Lifecycle {
+ preStop: p.Exec {
+ command: ["preStop.sh"]
+ }
+ postStart: p.Http {
+ url: "http://localhost:80"
+ }
+}
+```
+
+### Schema Exec
+
+Exec describes a "run in container" action.
+
+#### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**command** `required`|[str]|The command line to execute inside the container.||
+#### Examples
+
+```python
+import kam.workload.container.probe as p
+
+execProbe = p.Exec {
+ command: ["probe.sh"]
+}
+```
+
+### Schema Http
+
+Http describes an action based on HTTP Get requests.
+
+#### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**headers**|{str:str}|Collection of custom headers to set in the request.||
+|**url** `required`|str|The fully qualified URL to send HTTP requests to.||
+#### Examples
+
+```python
+import kam.workload.container.probe as p
+
+httpProbe = p.Http {
+ url: "http://localhost:80"
+ headers: {
+ "X-HEADER": "VALUE"
+ }
+}
+```
+
+### Schema Probe
+
+Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. There are three probe types: readiness, liveness, and startup.
+
+#### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**failureThreshold**|int|Minimum consecutive failures for the probe to be considered failed after having succeeded.||
+|**initialDelaySeconds**|int|The number of seconds before health checking is activated.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes||
+|**periodSeconds**|int|How often (in seconds) to perform the probe.||
+|**probeHandler** `required`|[Exec](#schema-exec) \| [Http](#schema-http) \| [Tcp](#schema-tcp)|The action taken to determine whether a container is alive or healthy.||
+|**successThreshold**|int|Minimum consecutive successes for the probe to be considered successful after having failed.||
+|**terminationGracePeriod**|int|Duration in seconds to wait before terminating gracefully upon probe failure.||
+|**timeoutSeconds**|int|The number of seconds after which the probe times out.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes||
+#### Examples
+
+```python
+import kam.workload.container.probe as p
+
+probe = p.Probe {
+ probeHandler: p.Http {
+ path: "/healthz"
+ }
+ initialDelaySeconds: 10
+}
+```
+
+### Schema Tcp
+
+Tcp describes an action based on opening a socket.
+
+#### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**url** `required`|str|The fully qualified URL to open a socket to.||
+#### Examples
+
+```python
+import kam.workload.container.probe as p
+
+tcpProbe = p.Tcp {
+ url: "tcp://localhost:1234"
+}
+```
+
+## Schema Secret
+
+Secret can be used to store sensitive data.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**data**|{str:str}|Data contains the non-binary secret data in string form.||
+|**immutable**|bool|Immutable, if set to true, ensures that data stored in the Secret cannot be updated.||
+|**params**|{str:str}|Collection of parameters used to facilitate programmatic handling of secret data.||
+|**type** `required`|"basic" \| "token" \| "opaque" \| "certificate" \| "external"|Type of secret, used to facilitate programmatic handling of secret data.||
+
+### Examples
+```python
+import kam.workload.secret as sec
+
+basicAuth = sec.Secret {
+ type: "basic"
+ data: {
+ "username": ""
+ "password": ""
+ }
+}
+```
+
+
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/workload/service.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/workload/service.md
new file mode 100644
index 00000000..8dc74ccf
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/1-developer-schemas/workload/service.md
@@ -0,0 +1,248 @@
+# service
+
+## Schemas
+- [Service](#schema-service)
+ - [Container](#schema-container)
+ - [Filespec](#schema-filespec)
+ - [LifeCycle](#schema-lifecycle)
+ - [Probe](#schema-probe)
+ - [Exec](#schema-exec)
+ - [Http](#schema-http)
+ - [Tcp](#schema-tcp)
+ - [Secret](#schema-secret)
+
+## Schema Service
+
+Service is a kind of workload profile that describes how to run your application code. This
is typically used for long-running web applications that should "never" go down, and handle
short-lived latency-sensitive web requests, or events.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**annotations**|{str:str}|Annotations are key/value pairs that attach arbitrary non-identifying metadata to the workload.||
+|**containers** `required`|{str:[Container](../internal/container#schema-container)}|Containers defines the templates of the containers to be run.
More info: https://kubernetes.io/docs/concepts/containers||
+|**labels**|{str:str}|Labels are key/value pairs that are attached to the workload.||
+|**replicas**|int|Number of container replicas based on this configuration that should be run.||
+|**secrets**|{str:[Secret](../internal/secret/secret.md#schema-secret)}|Secrets can be used to store small amounts of sensitive data, e.g. passwords or tokens.||
+
+### Examples
+```python
+# Instantiate a long-running service whose image is "nginx:v1"
+
+import kam.workload as wl
+import kam.workload.container as c
+
+nginxSvc : wl.Service {
+ containers: {
+ "nginx": c.Container {
+ image: "nginx:v1"
+ }
+ }
+}
+```
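+
+A service can also pin its replica count and attach metadata through the replicas and labels attributes above; a hypothetical sketch (the label values are illustrative):
+
+```python
+import kam.workload as wl
+import kam.workload.container as c
+
+webSvc : wl.Service {
+    containers: {
+        "nginx": c.Container {
+            image: "nginx:v1"
+        }
+    }
+    # Run three replicas of the container template.
+    replicas: 3
+    labels: {
+        "tier": "frontend"
+    }
+}
+```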
+
+### Base Schema
+[WorkloadBase](../internal/common#schema-workloadbase)
+
+## Schema Container
+
+Container describes how the Application's tasks are expected to be run. Depending on
the replicas parameter, one or more containers can be created from each template.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**args**|[str]|Arguments to the entrypoint.
Args will overwrite the CMD value set in the Dockerfile; otherwise the Docker
image's CMD is used if this is not provided.||
+|**command**|[str]|Entrypoint array. Not executed within a shell.
Command will overwrite the ENTRYPOINT value set in the Dockerfile; otherwise the Docker
image's ENTRYPOINT is used if this is not provided.||
+|**dirs**|{str:str}|Collection of volumes mounted into the container's filesystem.
The dirs parameter is a dict with the key being the folder name in the container and the value
being the referenced volume.||
+|**env**|{str:str}|List of environment variables to set in the container.
The value of the environment variable may be static text or a value from a secret.||
+|**files**|{str:[FileSpec](#schema-filespec)}|List of files to create in the container.
The files parameter is a dict with the key being the file name in the container and the value
being the target file specification.||
+|**image** `required`|str|Image refers to the Docker image name to run for this container.
More info: https://kubernetes.io/docs/concepts/containers/images||
+|**lifecycle**|[lc.Lifecycle](../internal/container/lifecycle/lifecycle.md#schema-lifecycle)|Lifecycle refers to actions that the management system should take in response to container lifecycle events.||
+|**livenessProbe**|[p.Probe](../internal/container/probe/probe.md#schema-probe)|LivenessProbe indicates if a running process is healthy.
Container will be restarted if the probe fails.||
+|**readinessProbe**|[p.Probe](../internal/container/probe/probe.md#schema-probe)|ReadinessProbe indicates whether an application is available to handle requests.||
+|**resources**|{str:str}|Map of resource requirements the container should run with.
The resources parameter is a dict with the key being the resource name and the value being
the resource value.||
+|**startupProbe**|[p.Probe](../internal/container/probe/probe.md#schema-probe)|StartupProbe indicates that the container has started for the first time.
Container will be restarted if the probe fails.||
+|**workingDir**|str|The working directory of the running process defined in entrypoint.
If not specified, the container runtime's default will be used.||
+
+### Examples
+```python
+import kam.workload.container as c
+
+web = c.Container {
+ image: "nginx:latest"
+ command: ["/bin/sh", "-c", "echo hi"]
+ env: {
+ "name": "value"
+ }
+ resources: {
+ "cpu": "2"
+ "memory": "4Gi"
+ }
+}
+```
+
+## Schema FileSpec
+
+FileSpec defines the target file in a Container.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**content**|str|File content in plain text.||
+|**contentFrom**|str|Source for the file content, a reference to a secret or configmap value.||
+|**mode** `required`|str|Mode bits used to set permissions on this file, must be an octal value
between 0000 and 0777 or a decimal value between 0 and 511.|"0644"|
+
+### Examples
+```python
+import kam.workload.container as c
+
+tmpFile = c.FileSpec {
+ content: "some file contents"
+ mode: "0777"
+}
+```
+
+### Schema Lifecycle
+
+Lifecycle describes actions that the management system should take in response to container lifecycle events.
+
+#### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**postStart**| | |The action to be taken after a container is created.
More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks||
+|**preStop**| | |The action to be taken before a container is terminated due to an API request or
management event such as liveness/startup probe failure, preemption, resource contention, etc.
More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks||
+#### Examples
+
+```python
+import kam.workload.container.probe as p
+import kam.workload.container.lifecycle as lc
+
+lifecycleHook = lc.Lifecycle {
+ preStop: p.Exec {
+ command: ["preStop.sh"]
+ }
+ postStart: p.Http {
+ url: "http://localhost:80"
+ }
+}
+```
+
+### Schema Exec
+
+Exec describes a "run in container" action.
+
+#### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**command** `required`|[str]|The command line to execute inside the container.||
+#### Examples
+
+```python
+import kam.workload.container.probe as p
+
+execProbe = p.Exec {
+ command: ["probe.sh"]
+}
+```
+
+### Schema Http
+
+Http describes an action based on HTTP Get requests.
+
+#### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**headers**|{str:str}|Collection of custom headers to set in the request.||
+|**url** `required`|str|The fully qualified URL to send HTTP requests to.||
+#### Examples
+
+```python
+import kam.workload.container.probe as p
+
+httpProbe = p.Http {
+ url: "http://localhost:80"
+ headers: {
+ "X-HEADER": "VALUE"
+ }
+}
+```
+
+### Schema Probe
+
+Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic. There are three probe types: readiness, liveness, and startup.
+
+#### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**failureThreshold**|int|Minimum consecutive failures for the probe to be considered failed after having succeeded.||
+|**initialDelaySeconds**|int|The number of seconds before health checking is activated.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes||
+|**periodSeconds**|int|How often (in seconds) to perform the probe.||
+|**probeHandler** `required`|[Exec](#schema-exec) \| [Http](#schema-http) \| [Tcp](#schema-tcp)|The action taken to determine whether a container is alive or healthy.||
+|**successThreshold**|int|Minimum consecutive successes for the probe to be considered successful after having failed.||
+|**terminationGracePeriod**|int|Duration in seconds to wait before terminating gracefully upon probe failure.||
+|**timeoutSeconds**|int|The number of seconds after which the probe times out.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes||
+#### Examples
+
+```python
+import kam.workload.container.probe as p
+
+probe = p.Probe {
+ probeHandler: p.Http {
+ path: "/healthz"
+ }
+ initialDelaySeconds: 10
+}
+```
+
+### Schema Tcp
+
+Tcp describes an action based on opening a socket.
+
+#### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**url** `required`|str|The fully qualified URL to open a socket to.||
+#### Examples
+
+```python
+import kam.workload.container.probe as p
+
+tcpProbe = p.Tcp {
+ url: "tcp://localhost:1234"
+}
+```
+
+## Schema Secret
+
+Secret can be used to store sensitive data.
+
+### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**data**|{str:str}|Data contains the non-binary secret data in string form.||
+|**immutable**|bool|Immutable, if set to true, ensures that data stored in the Secret cannot be updated.||
+|**params**|{str:str}|Collection of parameters used to facilitate programmatic handling of secret data.||
+|**type** `required`|"basic" \| "token" \| "opaque" \| "certificate" \| "external"|Type of secret, used to facilitate programmatic handling of secret data.||
+
+### Examples
+```python
+import kam.workload.secret as sec
+
+basicAuth = sec.Secret {
+ type: "basic"
+ data: {
+ "username": ""
+ "password": ""
+ }
+}
+```
+
+
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/_category_.json b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/_category_.json
new file mode 100644
index 00000000..81444988
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Workspace Configs"
+}
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/database/mysql.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/database/mysql.md
new file mode 100644
index 00000000..66225f5b
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/database/mysql.md
@@ -0,0 +1,52 @@
+# mysql
+
+## Module MySQL
+
+MySQL describes the attributes to locally deploy or create a cloud provider managed mysql database instance for the workload.
+
+### Attributes
+
+|Name and Description|Type|Default Value|Required|
+|--------------------|----|-------------|--------|
+|**cloud** <br />Cloud specifies the type of the cloud vendor. |"aws" \| "alicloud"|Undefined|**required**|
+|**username** <br />Username specifies the operation account for the mysql database. |str|"root"|optional|
+|**category** <br />Category specifies the edition of the mysql instance provided by the cloud vendor. |str|"Basic"|optional|
+|**securityIPs** <br />SecurityIPs specifies the list of IP addresses allowed to access the mysql instance provided by the cloud vendor. |[str]|["0.0.0.0/0"]|optional|
+|**privateRouting** <br />PrivateRouting specifies whether the host address of the cloud mysql instance for the workload to connect with is via public network or private network of the cloud vendor. |bool|true|optional|
+|**size** <br />Size specifies the allocated storage size of the mysql instance. |int|10|optional|
+|**subnetID** <br />SubnetID specifies the virtual subnet ID associated with the VPC that the cloud mysql instance will be created in. |str|Undefined|optional|
+|**databaseName** <br />DatabaseName specifies the database name. |str|Undefined|optional|
+
+### Examples
+
+```yaml
+# MySQL workspace configs for AWS RDS
+modules:
+  mysql:
+    path: oci://ghcr.io/kusionstack/mysql
+    version: 0.2.0
+    configs:
+      default:
+        cloud: aws
+        size: 20
+        instanceType: db.t3.micro
+        privateRouting: false
+        databaseName: "my-mysql"
+```
+
+```yaml
+# MySQL workspace configs for Alicloud RDS
+modules:
+  mysql:
+    path: oci://ghcr.io/kusionstack/mysql
+    version: 0.2.0
+    configs:
+      default:
+        cloud: alicloud
+        size: 20
+        instanceType: mysql.n2.serverless.1c
+        category: serverless_basic
+        privateRouting: false
+        subnetID: [your-subnet-id]
+        databaseName: "my-mysql"
+```
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/database/postgres.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/database/postgres.md
new file mode 100644
index 00000000..aed20616
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/database/postgres.md
@@ -0,0 +1,55 @@
+# postgres
+
+## Module PostgreSQL
+
+PostgreSQL describes the attributes to locally deploy or create a cloud provider managed postgres database instance for the workload.
+
+### Attributes
+
+|Name and Description|Type|Default Value|Required|
+|--------------------|----|-------------|--------|
+|**cloud** <br />Cloud specifies the type of the cloud vendor. |"aws" \| "alicloud"|Undefined|**required**|
+|**username** <br />Username specifies the operation account for the postgres database. |str|"root"|optional|
+|**category** <br />Category specifies the edition of the postgres instance provided by the cloud vendor. |str|"Basic"|optional|
+|**securityIPs** <br />SecurityIPs specifies the list of IP addresses allowed to access the postgres instance provided by the cloud vendor. |[str]|["0.0.0.0/0"]|optional|
+|**privateRouting** <br />PrivateRouting specifies whether the host address of the cloud postgres instance for the workload to connect with is via public network or private network of the cloud vendor. |bool|true|optional|
+|**size** <br />Size specifies the allocated storage size of the postgres instance. |int|10|optional|
+|**subnetID** <br />SubnetID specifies the virtual subnet ID associated with the VPC that the cloud postgres instance will be created in. |str|Undefined|optional|
+|**databaseName** <br />DatabaseName specifies the database name. |str|Undefined|optional|
+
+### Examples
+
+```yaml
+# PostgreSQL workspace configs for AWS RDS
+modules:
+  postgres:
+    path: oci://ghcr.io/kusionstack/postgres
+    version: 0.2.0
+    configs:
+      default:
+        cloud: aws
+        size: 20
+        instanceType: db.t3.micro
+        securityIPs:
+          - 0.0.0.0/0
+        databaseName: "my-postgres"
+```
+
+```yaml
+# PostgreSQL workspace configs for Alicloud RDS
+modules:
+  postgres:
+    path: oci://ghcr.io/kusionstack/postgres
+    version: 0.2.0
+    configs:
+      default:
+        cloud: alicloud
+        size: 20
+        instanceType: pg.n2.serverless.1c
+        category: serverless_basic
+        privateRouting: false
+        subnetID: [your-subnet-id]
+        securityIPs:
+          - 0.0.0.0/0
+        databaseName: "my-postgres"
+```
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/inference/inference.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/inference/inference.md
new file mode 100644
index 00000000..4cdb853a
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/inference/inference.md
@@ -0,0 +1,50 @@
+# inference
+
+## Index
+
+- v1
+  - [Inference](#inference)
+
+## Schemas
+
+### Inference
+
+Inference is a module schema consisting of the model, the framework, and related inference parameters.
+
+#### Attributes
+
+| name | type | description | default value |
+| --- | --- | --- | --- |
+|**framework** `required`|"Ollama" \| "KubeRay"|The framework or environment in which the model operates.||
+|**model** `required`|str|The model name to be used for inference.||
+|**num_ctx**|int|The size of the context window used to generate the next token.|2048|
+|**num_predict**|int|Maximum number of tokens to predict when generating text.|128|
+|**system**|str|The system message, which will be set in the template.|""|
+|**temperature**|float|A parameter that determines whether the model's output is more random and creative or more predictable.|0.8|
+|**template**|str|The full prompt template, which will be sent to the model.|""|
+|**top_k**|int|A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative.|40|
+|**top_p**|float|A higher value (e.g. 0.9) will give more diverse answers, while a lower value (e.g. 0.5) will be more conservative.|0.9|
+#### Examples
+
+```
+import inference.v1.infer
+
+accessories: {
+    "inference@v0.1.0": infer.Inference {
+        model: "llama3"
+        framework: "Ollama"
+
+        system: "You are Mario from super mario bros, acting as an assistant."
+        template: "{{ if .System }}<|im_start|>system {{ .System }}<|im_end|> {{ end }}{{ if .Prompt }}<|im_start|>user {{ .Prompt }}<|im_end|> {{ end }}<|im_start|>assistant"
+
+        top_k: 40
+        top_p: 0.9
+        temperature: 0.8
+
+        num_predict: 128
+        num_ctx: 2048
+    }
+}
+```
+
+
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/k8s_manifest/k8s_manifest.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/k8s_manifest/k8s_manifest.md
new file mode 100644
index 00000000..ab960c65
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/k8s_manifest/k8s_manifest.md
@@ -0,0 +1,25 @@
+# k8s_manifest
+
+## Module K8sManifest
+
+K8sManifest defines the paths of the YAML files, or the directories of the raw Kubernetes manifests, which will be jointly appended to the Resources of Spec.
+
+### Attributes
+
+|Name and Description|Type|Default Value|Required|
+|--------------------|----|-------------|--------|
+|**paths** <br />The paths of the YAML files, or the directories of the raw Kubernetes manifests. |[str]|Undefined|optional|
+
+### Examples
+
+```yaml
+modules:
+  k8s_manifest:
+    path: oci://ghcr.io/kusionstack/k8s_manifest
+    version: 0.1.0
+    configs:
+      default:
+        paths:
+          - /path/to/k8s_manifest.yaml
+          - /dir/to/k8s_manifest/
+```
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/monitoring/prometheus.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/monitoring/prometheus.md
new file mode 100644
index 00000000..55628423
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/monitoring/prometheus.md
@@ -0,0 +1,43 @@
+# monitoring
+
+`monitoring` can be used to define workspace-level monitoring configurations.
+
+## Attributes
+
+|Name and Description|Type|Default Value|Required|
+|--------------------|----|-------------|--------|
+|**operatorMode** <br />Whether the Prometheus instance installed in the cluster runs as a Kubernetes operator or not. This determines the different kinds of resources Kusion manages.|true \| false|false|optional|
+|**monitorType** <br />The kind of monitor to create. It only applies when operatorMode is set to True.|"Service" \| "Pod"|"Service"|optional|
+|**interval** <br />The time interval at which Prometheus scrapes metrics data. Only applicable when operator mode is set to true.<br />When operator mode is set to false, the scraping interval can only be set in the scraping job configuration, which Kusion does not have permission to manage directly.|str|30s|optional|
+|**timeout** <br />The timeout when Prometheus scrapes metrics data. Only applicable when operator mode is set to true.<br />When operator mode is set to false, the scraping timeout can only be set in the scraping job configuration, which Kusion does not have permission to manage directly.|str|15s|optional|
+|**scheme** <br />The scheme to scrape metrics from. Possible values are http and https.|"http" \| "https"|http|optional|
+
+### Examples
+```yaml
+modules:
+  monitoring:
+    path: oci://ghcr.io/kusionstack/monitoring
+    version: 0.2.0
+    configs:
+      default:
+        operatorMode: True
+        monitorType: Pod
+        scheme: http
+        interval: 30s
+        timeout: 15s
+      low_frequency:
+        operatorMode: False
+        interval: 2m
+        timeout: 1m
+        projectSelector:
+          - foo
+          - bar
+      high_frequency:
+        monitorType: Service
+        interval: 10s
+        timeout: 5s
+        projectSelector:
+          - helloworld
+          - wordpress
+          - prometheus-sample-app
+```
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/networking/network.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/networking/network.md
new file mode 100644
index 00000000..05609acc
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/networking/network.md
@@ -0,0 +1,26 @@
+# network
+
+`network` can be used to define workspace-level networking configurations.
+
+## Attributes
+
+|Name and Description|Type|Default Value|Required|
+|--------------------|----|-------------|--------|
+|**type** <br />The specific cloud vendor that provides the load balancer.|"alicloud" \| "aws"|Undefined|**required**|
+|**labels** <br />The attached labels of the port.|{str:str}|Undefined|optional|
+|**annotations** <br />The attached annotations of the port.|{str:str}|Undefined|optional|
+
+### Examples
+
+```yaml
+modules:
+  network:
+    path: oci://ghcr.io/kusionstack/network
+    version: 0.2.0
+    configs:
+      default:
+        type: alicloud
+        labels:
+          kusionstack.io/control: "true"
+        annotations:
+          service.beta.kubernetes.io/alibaba-cloud-loadbalancer-spec: slb.s1.small
+```
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/opensearch/opensearch.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/opensearch/opensearch.md
new file mode 100644
index 00000000..0a77898a
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/opensearch/opensearch.md
@@ -0,0 +1,66 @@
+# opensearch
+
+`opensearch` can be used to define an AWS OpenSearch Service.
+
+## Attributes
+
+### Schema OpenSearch
+
+|Name and Description|Type|Default Value|Required|
+|--------------------|----|-------------|--------|
+|**clusterConfig** <br />The configurations for the cluster of the domain.|[ClusterConfig](#schema-clusterconfig)|Undefined|False|
+|**ebsOptions** <br />The options for EBS volumes attached to data nodes in the domain.|[EbsOptions](#schema-ebsoptions)|Undefined|False|
+|**region** <br />The AWS region.|str|Undefined|True|
+|**statement** <br />The statement of the OpenSearch Service.|[Statement](#schema-statement)[]||True|
+
+### Schema ClusterConfig
+
+|Name and Description|Type|Default Value|Required|
+|--------------------|----|-------------|--------|
+|**instanceType** <br />The instance type of data nodes in the cluster.|str|Undefined|False|
+
+### Schema EbsOptions
+
+|Name and Description|Type|Default Value|Required|
+|--------------------|----|-------------|--------|
+|**ebsEnabled** <br />Whether EBS volumes are attached to data nodes in the domain.|bool|False|False|
+|**volumeSize** <br />The size of EBS volumes attached to data nodes (in GiB).|int|Undefined|Required if ebsEnabled is set to True|
+
+### Schema Statement
+
+|Name and Description|Type|Default Value|Required|
+|--------------------|----|-------------|--------|
+|**effect** <br />Whether this statement allows or denies the given actions. Valid values are Allow and Deny.|"Allow" \| "Deny"|"Allow"|True|
+|**principals** <br />The configuration block for principals.|[Principal](#schema-principal)|Undefined|False|
+|**action** <br />The list of actions that this statement either allows or denies.|[]str|Undefined|False|
+
+### Schema Principal
+
+|Name and Description|Type|Default Value|Required|
+|--------------------|----|-------------|--------|
+|**type** <br />The type of principal. Valid values include AWS, Service, Federated, CanonicalUser and *.|str|Undefined|False|
+|**identifiers** <br />The list of identifiers for principals.|[]str|Undefined|False|
+
+## Examples
+
+```yaml
+modules:
+  opensearch:
+    path: oci://ghcr.io/kusionstack/opensearch
+    version: 0.1.0
+    configs:
+      default:
+        region: us-east-1
+        clusterConfig:
+          instanceType: r6g.large.search
+        ebsEnabled: true
+        volumeSize: 10
+        statement:
+          - effect: Allow
+            principals:
+              - type: AWS
+                identifiers:
+                  - "*"
+            action:
+              - es:*
+```
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/opsrule/opsrule.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/opsrule/opsrule.md
new file mode 100644
index 00000000..0c3d29c1
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/opsrule/opsrule.md
@@ -0,0 +1,22 @@
+# opsrule
+
+`opsrule` can be used to define workspace-level operational rule configurations.
+
+## Attributes
+
+|Name and Description|Type|Default Value|Required|
+|--------------------|----|-------------|--------|
+|**maxUnavailable** <br />The maximum percentage of the total pod instances in the component that can be simultaneously unhealthy.|int \| str|Undefined|optional|
+
+
+### Examples
+
+```yaml
+modules:
+  opsrule:
+    path: oci://ghcr.io/kusionstack/opsrule
+    version: 0.2.0
+    configs:
+      default:
+        maxUnavailable: "40%"
+```
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/workload/job.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/workload/job.md
new file mode 100644
index 00000000..da659136
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/workload/job.md
@@ -0,0 +1,26 @@
+# job
+
+`job` can be used to define workspace-level job configuration.
+
+### Attributes
+
+|Name and Description|Type|Default Value|Required|
+|--------------------|----|-------------|--------|
+| **replicas** <br />Number of container replicas based on this configuration that should be run. |int|2| optional |
+| **labels** <br />Labels are key/value pairs that are attached to the workload. |{str: str}|Undefined| optional |
+| **annotations** <br />Annotations are key/value pairs that attach arbitrary non-identifying metadata to the workload. |{str: str}|Undefined| optional |
+
+### Examples
+```yaml
+modules:
+  job:
+    path: oci://ghcr.io/kusionstack/job
+    version: 0.1.0
+    configs:
+      default:
+        replicas: 3
+        labels:
+          label-key: label-value
+        annotations:
+          annotation-key: annotation-value
+```
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/workload/service.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/workload/service.md
new file mode 100644
index 00000000..9c76a44c
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/2-workspace-configs/workload/service.md
@@ -0,0 +1,28 @@
+# service
+
+`service` can be used to define workspace-level service configuration.
+
+### Attributes
+
+|Name and Description|Type|Default Value|Required|
+|--------------------|----|-------------|--------|
+| **replicas** <br />Number of container replicas based on this configuration that should be run. |int|2| optional |
+| **labels** <br />Labels are key/value pairs that are attached to the workload. |{str: str}|Undefined| optional |
+| **annotations** <br />Annotations are key/value pairs that attach arbitrary non-identifying metadata to the workload. |{str: str}|Undefined| optional |
+| **type** <br />Type represents the type of workload used by this Service. Currently, it supports several types, including Deployment and CollaSet. |"Deployment" \| "CollaSet"| Deployment |**required**|
+
+### Examples
+```yaml
+modules:
+  service:
+    path: oci://ghcr.io/kusionstack/service
+    version: 0.2.0
+    configs:
+      default:
+        replicas: 3
+        labels:
+          label-key: label-value
+        annotations:
+          annotation-key: annotation-value
+        type: CollaSet
+```
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/3-naming-conventions.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/3-naming-conventions.md
new file mode 100644
index 00000000..ab7f668c
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/3-naming-conventions.md
@@ -0,0 +1,34 @@
+---
+id: naming-conventions
+sidebar_label: Resource Naming Conventions
+---
+
+# Resource Naming Conventions
+
+Kusion automatically creates Kubernetes or Terraform resources for applications, many of which users do not need to be aware of. This document introduces the naming conventions for these resources.
+
+## Kubernetes Resources
+
+Kusion adheres to specific rules when generating the Kubernetes resources for users' applications. The table below lists some common Kubernetes resource naming conventions. Note that `Namespace` can now be specified by users.
+
+| Resource | Concatenation Rule | Example ID |
+| -------- | ------------------ | ---------- |
+| Namespace | `` | v1:Namespace:wordpress-local-db |
+| Deployment | ``-``-`` | apps/v1:Deployment:wordpress-local-db:wordpress-local-db-dev-wordpress |
+| CronJob | ``-``-`` | batch/v1:CronJob:helloworld:helloworld-dev-helloworld |
+| Service | ``-``-``-` or ` | v1:Service:helloworld:helloworld-dev-helloworld-public |
+
+## Terraform Resources
+
+Similarly, Kusion also adheres to specific naming conventions when generating the Terraform Resources. Some common resources are listed below.
+
+| Resource | Concatenation Rule | Example ID |
+| -------- | ------------------ | ---------- |
+| random_password | ``-`` | hashicorp:random:random_password:wordpress-db-mysql |
+| aws_security_group | ``-`` | hashicorp:aws:aws_security_group:wordpress-db-mysql |
+| aws_db_instance | `` | hashicorp:aws:aws_db_instance:wordpress-db |
+| alicloud_db_instance | `` | aliyun:alicloud:alicloud_db_instance:wordpress-db |
+| alicloud_db_connection | `` | aliyun:alicloud:alicloud_db_connection:wordpress |
+| alicloud_rds_account | `` | aliyun:alicloud:alicloud_rds_account:wordpress |
+
+The `` is composed of two parts: the `key` of the database declared in `AppConfiguration`, and the `suffix` declared in the `workspace` configuration. Kusion concatenates the database key and suffix, converts the result to uppercase, and replaces `-` with `_`. The `` supported now includes `mysql` and `postgres`.
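
As a sketch, the uppercase-and-underscore transformation described above can be illustrated in Python (the `database_key` and `suffix` values below are hypothetical examples, not values taken from a real workspace):

```python
def terraform_resource_suffix(database_key: str, suffix: str) -> str:
    # Concatenate the database key (from AppConfiguration) with the
    # suffix (from the workspace configuration), uppercase the result,
    # and replace "-" with "_".
    return (database_key + suffix).upper().replace("-", "_")

print(terraform_resource_suffix("wordpress-db", "-mysql"))  # WORDPRESS_DB_MYSQL
```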
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/_category_.json b/docs_versioned_docs/version-v0.14/6-reference/2-modules/_category_.json
new file mode 100644
index 00000000..4dadaa75
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Kusion Modules"
+}
diff --git a/docs_versioned_docs/version-v0.14/6-reference/2-modules/index.md b/docs_versioned_docs/version-v0.14/6-reference/2-modules/index.md
new file mode 100644
index 00000000..744892c4
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/2-modules/index.md
@@ -0,0 +1,45 @@
+# Kusion Modules
+
+KusionStack provides preset application configuration models written in KCL, each called a **Kusion Model**. These models are stored in the GitHub repository [KusionStack/catalog](https://github.com/KusionStack/catalog), known as the **Kusion Model Library**.
+
+The Kusion Model is designed to enhance the efficiency and improve the experience of YAML users. Through a unified application model defined in code, it abstracts and encapsulates complex configuration items, omits repetitive and derivable configurations, and supplements them with the necessary validation logic. Only the essential attributes are exposed, so users get an out-of-the-box, easy-to-understand configuration interface that reduces the difficulty and improves the reliability of configuration work.
+
+Kusion Model Library currently provides the Kusion Model `AppConfiguration`. The design of `AppConfiguration` is developer-centric, based on Ant Group's decades of practice in building and managing a hyperscale IDP (Internal Developer Platform) and on best practices from the community. `AppConfiguration` describes the full lifecycle of an application.
+
+A simple example of using `AppConfiguration` to describe an application is as follows:
+
+```python
+wordpress: ac.AppConfiguration {
+    workload: service.Service {
+        containers: {
+            "wordpress": c.Container {
+                image: "wordpress:latest"
+                env: {
+                    "WORDPRESS_DB_HOST": "secret://wordpress-db/hostAddress"
+                    "WORDPRESS_DB_PASSWORD": "secret://wordpress-db/password"
+                }
+                resources: {
+                    "cpu": "1"
+                    "memory": "2Gi"
+                }
+            }
+        }
+        replicas: 2
+        ports: [
+            n.Port {
+                port: 80
+                public: True
+            }
+        ]
+    }
+
+    database: db.Database {
+        type: "alicloud"
+        engine: "MySQL"
+        version: "5.7"
+        size: 20
+        instanceType: "mysql.n2.serverless.1c"
+        category: "serverless_basic"
+    }
+}
+```
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/6-reference/3-roadmap.md b/docs_versioned_docs/version-v0.14/6-reference/3-roadmap.md
new file mode 100644
index 00000000..f411009e
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/3-roadmap.md
@@ -0,0 +1,15 @@
+# Roadmap
+
+For a finer-grained view of our roadmap and what is being worked on for a release, please refer to the [Roadmap](https://github.com/orgs/KusionStack/projects/24).
+
+## Expand Kusion Module Ecosystem to meet more scenarios
+
+We plan to expand the range of Kusion modules. This includes not only cloud services but also popular cloud-native projects such as Prometheus, Backstage, Crossplane, etc. By leveraging the ecosystem of CNCF projects and Terraform providers, we aim to enrich the Kusion module ecosystem to meet more scenarios.
+
+## LLM (Large Language Models) Operation
+
+Kusion is essentially designed to tackle team collaboration challenges. LLM operations also involve many collaborative tasks. We believe Kusion can boost the operational efficiency of LLM engineers in this setting as well.
+
+## Kusion Server
+
+Currently, Kusion is a command-line tool, which has its pros and cons. Through our discussions with community users, we've discovered that some of them prefer a long-running service with a web portal. We're planning to build this form of Kusion, and have already started developing some features.
diff --git a/docs_versioned_docs/version-v0.14/6-reference/_category_.json b/docs_versioned_docs/version-v0.14/6-reference/_category_.json
new file mode 100644
index 00000000..a3b4dd92
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/6-reference/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Reference"
+}
diff --git a/docs_versioned_docs/version-v0.14/7-faq/1-install-error.md b/docs_versioned_docs/version-v0.14/7-faq/1-install-error.md
new file mode 100644
index 00000000..a0fde76a
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/7-faq/1-install-error.md
@@ -0,0 +1,39 @@
+---
+sidebar_position: 1
+---
+
+# Installation
+
+## 1. Could not find `libintl.dylib`
+
+This problem occurs because some tools depend on the `gettext` library, which macOS does not provide by default. You can try to solve it in the following ways:
+
+1. (Skip this step on non-Apple-Silicon Macs) On Apple Silicon (M1) macOS, make sure you have an arm64 version of Homebrew installed in /opt/homebrew; otherwise, install it with the following command:
+
+```
+/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
+# add to path
+export PATH=/opt/homebrew/bin:$PATH
+```
+
+2. Run `brew install gettext`
+3. Make sure `libintl.8.dylib` exists in the `/usr/local/opt/gettext/lib` directory
+4. If brew is installed in another directory, copy the library to the corresponding location
+
+## 2. macOS system SSL related errors
+
+This covers errors such as "OpenSSL dylib library not found" or "SSL module is not available".
+
+1. (Skip this step on non-Apple-Silicon Macs) On Apple Silicon (M1) macOS, make sure you have an arm64 version of Homebrew installed in /opt/homebrew; otherwise, install it with the following command:
+
+```
+/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
+# add to path
+export PATH=/opt/homebrew/bin:$PATH
+```
+
+2. Install openssl (version 1.1) via brew
+
+```
+brew install openssl@1.1
+```
diff --git a/docs_versioned_docs/version-v0.14/7-faq/2-kcl.md b/docs_versioned_docs/version-v0.14/7-faq/2-kcl.md
new file mode 100644
index 00000000..596aa881
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/7-faq/2-kcl.md
@@ -0,0 +1,7 @@
+---
+sidebar_position: 2
+---
+
+# KCL
+
+Visit the [KCL website](https://kcl-lang.io/docs/user_docs/support/faq-kcl) for more documentation.
\ No newline at end of file
diff --git a/docs_versioned_docs/version-v0.14/7-faq/_category_.json b/docs_versioned_docs/version-v0.14/7-faq/_category_.json
new file mode 100644
index 00000000..7c4b229f
--- /dev/null
+++ b/docs_versioned_docs/version-v0.14/7-faq/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "FAQ"
+}
diff --git a/docs_versioned_sidebars/version-v0.14-sidebars.json b/docs_versioned_sidebars/version-v0.14-sidebars.json
new file mode 100644
index 00000000..9dc70e9f
--- /dev/null
+++ b/docs_versioned_sidebars/version-v0.14-sidebars.json
@@ -0,0 +1,8 @@
+{
+ "kusion": [
+ {
+ "type": "autogenerated",
+ "dirName": "."
+ }
+ ]
+}
diff --git a/docs_versions.json b/docs_versions.json
index 3c2a6773..ae1173cf 100644
--- a/docs_versions.json
+++ b/docs_versions.json
@@ -1,4 +1,5 @@
[
+ "v0.14",
"v0.13",
"v0.12",
"v0.11",
diff --git a/i18n/en/docusaurus-plugin-content-docs-docs/current.json b/i18n/en/docusaurus-plugin-content-docs-docs/current.json
index 427f5ca2..ff1faea1 100644
--- a/i18n/en/docusaurus-plugin-content-docs-docs/current.json
+++ b/i18n/en/docusaurus-plugin-content-docs-docs/current.json
@@ -1,6 +1,6 @@
{
"version.label": {
- "message": "v0.14 🚧",
+ "message": "v0.15 🚧",
"description": "The label for version current"
},
"sidebar.kusion.category.What is Kusion?": {
diff --git a/src/pages/index.js b/src/pages/index.js
index 6ba4579e..7a314b20 100644
--- a/src/pages/index.js
+++ b/src/pages/index.js
@@ -68,7 +68,7 @@ function Home() {
"button button--primary button--lg",
styles.button,
)}
- to="/docs/getting-started/install-kusion"
+ to="/docs/getting-started/getting-started-with-kusion-cli/install-kusion"
>
Install