English | [中文](README_CN.md)

# Introduction

AxVisor is a hypervisor built on the ArceOS unikernel framework. Its goal is to leverage the basic operating system functionality provided by ArceOS as a foundation for a unified, componentized hypervisor.

**Unified** means using the same codebase to support the x86_64, Arm (aarch64), and RISC-V architectures, maximizing the reuse of architecture-agnostic code and reducing development and maintenance effort.

**Componentized** means that the hypervisor's functionality is decomposed into multiple independently usable components. Each component implements a specific function, and components communicate through standardized interfaces to achieve decoupling and reuse.
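
As a rough illustration of what this componentization looks like in practice, the sketch below shows how such a hypervisor could assemble its pieces as ordinary Rust crates in `Cargo.toml`. The crate names and versions are assumptions chosen for illustration, not AxVisor's actual dependency list.

```toml
# Hypothetical manifest fragment for a componentized hypervisor built on ArceOS.
# Each dependency stands for one independently usable component; the crate names
# and versions are illustrative assumptions, not AxVisor's real Cargo.toml.
[dependencies]
axvm = "0.1"          # virtual machine management component
axvcpu = "0.1"        # architecture-agnostic virtual CPU interface
axaddrspace = "0.1"   # guest address-space (stage-2 translation) component
axdevice = "0.1"      # emulated and passthrough device component
```

Because each piece sits behind a standardized interface, an architecture-specific backend can be replaced without touching the rest of the hypervisor, which is what keeps the shared codebase architecture-agnostic.
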
## Architecture

Currently, AxVisor has been verified on the following platforms:

- [x] QEMU ARM64 virt (qemu-max)
- [x] Rockchip RK3568 / RK3588
- [x] PhytiumPi

## Guest VMs

Currently, AxVisor has been verified with the following systems as guests:

- [Starry-OS](https://github.com/Starry-OS)
- [NimbOS](https://github.com/equation314/nimbos)
- Linux

## Load and run from file system

Loading from the filesystem refers to the method where the AxVisor image, the guest Linux image, and its device tree are deployed independently in the filesystem on the storage device. After AxVisor starts, it loads the guest image and its device tree from the filesystem to boot the guest.
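
As a concrete illustration of this layout, a guest VM configuration for filesystem loading might contain entries along the lines of the sketch below. The key names and the `"fs"` value are assumptions made for illustration rather than the exact schema; the files under `configs/vms/` are the authoritative reference.

```toml
# Illustrative sketch of a guest VM configuration that loads from the filesystem.
# Key names and values are assumptions, not the exact schema; see the examples
# under configs/vms/ for the real format.
image_location = "fs"           # hypothetical value: fetch guest artifacts from the filesystem
kernel_path = "/guest/Image"    # hypothetical on-disk path of the guest kernel image
dtb_path = "/guest/linux.dtb"   # hypothetical on-disk path of the guest device tree blob
```
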
### NimbOS as guest

1. Execute the script to download and prepare the NimbOS image.
4. Execute `./axvisor.sh run` to build AxVisor and start it in QEMU.

### More guests

TODO
## Load and run from memory

Loading from memory refers to the method where the AxVisor image, the guest image, and its device tree are packaged together at build time, so only AxVisor itself needs to be deployed in the filesystem on the storage device. After AxVisor starts, it loads the guest image and its device tree from memory to boot the guest.
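
For reference, `image_location = "memory"` is what selects loading from memory, while `kernel_path` and `dtb_path` give the locations of the guest kernel image and device tree in the workspace. The fragment below is only a sketch of how these keys might appear; the exact layout is an assumption, so treat the shipped `configs/vms/linux-aarch64-qemu-smp1.toml` as the reference.

```toml
# Sketch of the memory-loading entries in a guest VM configuration.  The exact
# file layout is an assumption; use configs/vms/linux-aarch64-qemu-smp1.toml
# shipped with the project as the authoritative example.
image_location = "memory"                     # package the guest image with AxVisor and load it from memory
kernel_path = "tmp/Image"                     # hypothetical workspace path of the guest kernel image
dtb_path = "tmp/linux-aarch64-qemu-smp1.dtb"  # workspace path of the DTB produced in step 3 below
```
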
### Linux as guest

1. Prepare the working directory:

   ```console
   mkdir -p tmp
   cp configs/vms/linux-aarch64-qemu-smp1.toml tmp/
   cp configs/vms/linux-aarch64-qemu-smp1.dts tmp/
   ```

2. [See the Linux build help](https://github.com/arceos-hypervisor/guest-test-linux) to get the guest `Image` and `rootfs.img`, then copy them into the `tmp` directory.

3. Execute `dtc -O dtb -I dts -o tmp/linux-aarch64-qemu-smp1.dtb tmp/linux-aarch64-qemu-smp1.dts` to compile the guest device tree source into the device tree blob (`.dtb`).

4. Execute `./axvisor.sh defconfig`, then edit the `.hvconfig.toml` file and set the `vmconfigs` item to the path of your guest configuration file, for example:

   ```toml
   arceos_args = [
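   # ...
   # Illustrative assumption, not part of the original example: the `vmconfigs`
   # entry mentioned in step 4 would point at the guest configuration copied in
   # step 1, along the lines of:
   # vmconfigs = ["tmp/linux-aarch64-qemu-smp1.toml"]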