The arch system section should be separate from the os.
This change will fully read in a compatibility request (YAML),
then map requested fields to extractor output fields,
with the option to add on-the-fly custom metadata.
We output a finished compatibility artifact that is ready
to be pushed to a registry and later paired with an
image. Note that I still need to run a final jsonSchema
validation on generation (before save).

Signed-off-by: vsoch <[email protected]>
README.md
@@ -15,6 +15,7 @@ This is a prototype compatibility checking tool. Right now our aim is to use in

## TODO

- metadata namespace and exposure: someone writing a spec to create an artifact needs to know the extract namespace (and what is available) for the mapping.
- create: the final step of create should be validation of the spec with the jsonSchema linked (not done yet)
- tests: matrix that has several different flavors of builds, generating compspec json output to validate generation and correctness
- likely we want a common configuration file to take an extraction -> check recipe
Note that we will eventually add a description column - it's not really warranted yet!

## Create

The create command is how you take a compatibility request (a YAML file that maps the extractors defined by this tool to your compatibility metadata namespace) and generate an artifact. The artifact typically will be a JSON dump of key value pairs, scoped under different namespaces, that you might push to a registry to live alongside a container image, with the intention of eventually using it to check compatibility against a new system. To run create we can use the example in the top level repository:
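A minimal sketch of that invocation, assuming the `--in` flag and the `./examples/lammps-experiment.yaml` request file that appear in the commands later in this section:

```bash
# Generate a compatibility artifact from the example request
# (flag and path are taken from the commands shown below; no custom fields yet)
./bin/compspec create --in ./examples/lammps-experiment.yaml
```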
Note that you'll see some errors about fields not being found! This is because we've implemented this so that the fields can be added as custom metadata on the command line.
The idea here is that you can add custom metadata fields during your build, which can be easier than adding support for automated extraction. Let's add them now.

```bash
# -a stands for "append" and it can write a new field or overwrite an existing one
./bin/compspec create --in ./examples/lammps-experiment.yaml -a custom.gpu.available=yes
```
Awesome! That, as simple as it is, is our compatibility artifact. I ran the command on my host just now, but running it for a container image during a build will generate it for that context. We would want to save this to file:

```bash
./bin/compspec create --in ./examples/lammps-experiment.yaml -a custom.gpu.available=yes -o ./examples/generated-compatibility-spec.json
```
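To sanity check the result, you could pretty print the saved file. This step is my addition rather than part of the project docs, and it only assumes jq is installed:

```bash
# Pretty print the generated artifact to confirm the custom field made it in
jq . ./examples/generated-compatibility-spec.json
```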
And that's it! We would next (likely during CI) push this compatibility artifact to a URI that is (TBA) linked to the image.
For now we will manually remember the pairing, at least until the compatibility working group figures out the final design!
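As an illustration of what that push step might look like, here is a sketch using the ORAS CLI; this is an assumption on my part, not a project convention, and the registry reference, tag, and media types are placeholders:

```bash
# Hypothetical push of the artifact to an OCI registry with the ORAS CLI;
# the registry URI and artifact type below are placeholders, not project conventions
oras push ghcr.io/<org>/lammps-compatibility:latest \
  --artifact-type application/vnd.example.compspec+json \
  ./examples/generated-compatibility-spec.json:application/json
```
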
## Extract
Extraction has two use cases, and likely you won't be running this manually, but within the context of another command:

1. Extracting metadata about the container image at build time to generate an artifact (done via "create")
2. Extracting metadata about the host at image selection time, and comparing against a set of contender container images to select the best one (done via "check")

However, for the advanced or interested user, you can run extract as a standalone utility to inspect or otherwise save metadata from extractors.
For example, if you want to extract metadata to your local machine, you can use extract! Either just run all extractors and dump to the terminal:
```bash
# Not recommended, it's a lot!
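# Assumed invocation (not verified): run every extractor and dump everything to the terminal
./bin/compspec extract
```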
@@ -84,7 +154,17 @@ the full ability to specify:

2. One or more specific sections known to an extractor
3. Saving to json metadata instead of dumping to terminal (see the sketch after this list)
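As a sketch of what such a filtered run could look like: the `--name` flag and the `library[mpi]` selector are my assumptions and may not match the actual CLI, while the output file name matches the `test-library.json` inspected below:

```bash
# Assumed flags (not verified against the CLI): run only the "library" extractor,
# restrict it to the "mpi" section, and write JSON to a file instead of the terminal
./bin/compspec extract --name "library[mpi]" -o ./test-library.json
```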
The library extractor currently just has one section for "mpi"

@@ -96,7 +176,7 @@
 --Result for library
  -- Section mpi
 mpi.variant: mpich
-mpi.version: 4.0
+mpi.version: 4.1.1
 Extraction has run!
 ```
@@ -119,7 +199,7 @@ cat test-library.json
   "sections": {
     "mpi": {
       "mpi.variant": "mpich",
-      "mpi.version": "4.0"
+      "mpi.version": "4.1.1"
     }
   }
 }

@@ -129,32 +209,45 @@ cat test-library.json
That shows the generic structure of an extractor output. The "library" extractor owns a set of groups (sections) each with their own namespaced attributes.

-### System
+#### System
The system extractor supports three sections

- cpu: Basic CPU counts and metadata
- processor: detailed information on every processor