Commit 691dafa

enhance resource interpreter test framework

Signed-off-by: zhzhuang-zju <[email protected]>

1 parent a0dbc48

4 files changed, +361 -16 lines changed

Lines changed: 155 additions & 0 deletions

# Thirdparty Resource Interpreter

This directory contains third-party resource interpreters for Karmada. These interpreters define how Karmada should handle custom resources from various third-party applications and operators.

## Files

- `thirdparty.go` - Main implementation of the third-party resource interpreter
- `thirdparty_test.go` - Test suite for validating resource interpreter customizations
- `resourcecustomizations/` - Directory containing resource customization definitions organized by API version and kind

## Directory Structure

The resource customizations are organized in the following structure:

```
resourcecustomizations/
├── <group>/
│   └── <version>/
│       └── <kind>/
│           ├── customizations.yaml        # Resource interpreter customization rules
│           ├── customizations_tests.yaml  # Test cases for the customizations
│           └── testdata/                  # Test input and expected output files
│               ├── desired_xxx.yaml       # Input resource for desired state
│               ├── observed_xxx.yaml      # Input resource for observed state
│               ├── status_xxx.yaml        # Input aggregated status items
│               ├── output_xxx.yaml        # Expected output for various operations
```
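
Each `customizations.yaml` holds the interpreter rules for one kind, typically declared as a `ResourceInterpreterCustomization` manifest whose rules are written in Lua. A minimal sketch, assuming the `config.karmada.io/v1alpha1` format; the target kind and the Lua rule below are illustrative placeholders, not taken from this repository:

```yaml
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: example-customization        # hypothetical name
spec:
  target:
    apiVersion: example.io/v1        # placeholder group/version of the custom resource
    kind: Workload                   # placeholder kind
  customizations:
    healthInterpretation:
      luaScript: |
        function InterpretHealth(observedObj)
          -- illustrative rule: healthy once all desired replicas are ready
          return observedObj.status.readyReplicas == observedObj.spec.replicas
        end
```
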
## How to Test

### Running Tests

To run all third-party resource interpreter tests:

```bash
cd pkg/resourceinterpreter/default/thirdparty
go test -v
```
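
Equivalently (assuming a standard Go toolchain), the same package can be targeted from the repository root:

```bash
go test ./pkg/resourceinterpreter/default/thirdparty/... -v
```
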
### Creating Test Cases

#### 1. Create Test Structure

For a new resource type, create the directory structure:

```bash
mkdir -p resourcecustomizations/<group>/<version>/<kind>/testdata
```
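
For example, the FlinkDeployment customizations already in this repository follow exactly this layout:

```bash
mkdir -p resourcecustomizations/flink.apache.org/v1beta1/FlinkDeployment/testdata
```
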
#### 2. Create Test Data Files

Test data files are generally divided into four categories:

- `desired_xxx.yaml`: Resource definitions deployed on the control plane.
- `observed_xxx.yaml`: Resource definitions observed in a member cluster.
- `status_xxx.yaml`: Status information of the resource on each member cluster, structured as `[]workv1alpha2.AggregatedStatusItem` (see the sketch below).
- `output_xxx.yaml`: Expected output for various operations, structured as `map[string]interface{}`.

Multiple test data files can be created for each category as needed. Use distinct, descriptive names, since the files are referenced by path in `customizations_tests.yaml`.
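
As a sketch, a status file is a YAML list with one `AggregatedStatusItem` per member cluster; the cluster names and status fields below are illustrative only, not taken from the actual test data:

```yaml
- clusterName: member1
  applied: true
  health: Healthy
  status:
    jobManagerDeploymentStatus: READY
    jobStatus:
      state: RUNNING
- clusterName: member2
  applied: true
  health: Healthy
  status:
    jobManagerDeploymentStatus: READY
    jobStatus:
      state: RUNNING
```
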
#### 3. Create Test Configuration

Test configuration is defined in `customizations_tests.yaml` within the resource customization directory. It specifies the test cases to be executed. Its structure is as follows:

```go
type TestStructure struct {
    Tests []IndividualTest `yaml:"tests"`
}

type IndividualTest struct {
    DesiredInputPath  string `yaml:"desiredInputPath,omitempty"`  // the path of desired_xxx.yaml
    ObservedInputPath string `yaml:"observedInputPath,omitempty"` // the path of observed_xxx.yaml
    StatusInputPath   string `yaml:"statusInputPath,omitempty"`   // the path of status_xxx.yaml
    InputReplicas     int64  `yaml:"inputReplicas,omitempty"`     // the input replicas for the revise operation
    OutputResultsPath string `yaml:"outputResultsPath,omitempty"` // the path of output_xxx.yaml
    Operation         string `yaml:"operation"`                   // the resource interpreter operation to run
}
```
Create `customizations_tests.yaml` to define test cases:

```yaml
tests:
  - desiredInputPath: testdata/desired-flinkdeployment.yaml
    statusInputPath: testdata/status-file.yaml
    operation: AggregateStatus
    outputResultsPath: testdata/output-flinkdeployment.yaml
  - desiredInputPath: testdata/desired-flinkdeployment.yaml
    operation: InterpretReplica
    outputResultsPath: testdata/output-flinkdeployment.yaml
```

Where:

- `operation` specifies the resource interpreter operation to test.
- `outputResultsPath` defines the file path of the expected output results. The output results are a key-value mapping in which each key is the field name of an expected result and the value is the expected result itself.

For each operation, the keys in the output results correspond to the `Name` field of the results returned by the corresponding resource interpreter rule (`RuleResult.Results`).

For example:

```go
func (h *healthInterpretationRule) Run(interpreter *declarative.ConfigurableInterpreter, args RuleArgs) *RuleResult {
    obj, err := args.getObjectOrError()
    if err != nil {
        return newRuleResultWithError(err)
    }
    healthy, enabled, err := interpreter.InterpretHealth(obj)
    if err != nil {
        return newRuleResultWithError(err)
    }
    if !enabled {
        return newRuleResultWithError(fmt.Errorf("rule is not enabled"))
    }
    return newRuleResult().add("healthy", healthy)
}
```

The output results for operation `InterpretHealth` should contain the `healthy` key.
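
In the FlinkDeployment test data added by this commit, for instance, `output-flinkdeployment.yaml` carries exactly that entry:

```yaml
# the expected output of operation 'InterpretHealth' for a FlinkDeployment resource.
healthy: true
```
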
### Supported Operations

The test framework supports the following operations:

- `InterpretReplica` - Extract the replica count from the resource
- `InterpretComponent` - Extract component information from the resource
- `ReviseReplica` - Modify the replica count in the resource
- `InterpretStatus` - Extract status information
- `InterpretHealth` - Determine resource health status
- `InterpretDependency` - Extract resource dependencies
- `AggregateStatus` - Aggregate status from multiple clusters
- `Retain` - Retain fields of the observed resource when producing the desired resource template
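
For reference, the FlinkDeployment expected-output file in this commit uses the following top-level keys for the operations it covers (structured values elided here):

```yaml
aggregatedStatus: {}   # AggregateStatus
replica: 2             # InterpretReplica - replica count
requires: {}           # InterpretReplica - replica requirements
components: []         # InterpretComponent
healthy: true          # InterpretHealth
status: {}             # InterpretStatus
```
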
### Test Validation

The test framework validates:

1. **Lua Script Syntax** - Ensures all Lua scripts are syntactically correct
2. **Execution Results** - Compares actual results with expected results

### Debugging Tests

To debug failing tests:

1. **Check Lua Script Syntax** - Ensure your Lua scripts are valid
2. **Verify Test Data** - Confirm test input files are properly formatted
3. **Review Expected Results** - Make sure expected results match the actual operation output
4. **Use Verbose Output** - Run tests with the `-v` flag for detailed output

### Best Practices

1. **Comprehensive Coverage** - Test all supported operations for your resource type
2. **Edge Cases** - Include tests for edge cases and error conditions
3. **Realistic Data** - Use realistic resource definitions in test data
4. **Clear Naming** - Use descriptive names for test files and cases

For more information about resource interpreter customizations, see the [Karmada documentation](https://karmada.io/docs/userguide/globalview/customizing-resource-interpreter/).

pkg/resourceinterpreter/default/thirdparty/resourcecustomizations/flink.apache.org/v1beta1/FlinkDeployment/customizations_tests.yaml

Lines changed: 8 additions & 1 deletion
@@ -2,9 +2,16 @@ tests:
   - desiredInputPath: testdata/desired-flinkdeployment.yaml
     statusInputPath: testdata/status-file.yaml
     operation: AggregateStatus
-  - observedInputPath: testdata/observed-flinkdeployment.yaml
+    outputResultsPath: testdata/output-flinkdeployment.yaml
+  - desiredInputPath: testdata/desired-flinkdeployment.yaml
     operation: InterpretReplica
+    outputResultsPath: testdata/output-flinkdeployment.yaml
+  - desiredInputPath: testdata/desired-flinkdeployment.yaml
+    operation: InterpretComponent
+    outputResultsPath: testdata/output-flinkdeployment.yaml
   - observedInputPath: testdata/observed-flinkdeployment.yaml
     operation: InterpretHealth
+    outputResultsPath: testdata/output-flinkdeployment.yaml
   - observedInputPath: testdata/observed-flinkdeployment.yaml
     operation: InterpretStatus
+    outputResultsPath: testdata/output-flinkdeployment.yaml

Lines changed: 105 additions & 0 deletions

# the expected output of operation 'AggregateStatus' for a FlinkDeployment resource.
aggregatedStatus:
  apiVersion: flink.apache.org/v1beta1
  kind: FlinkDeployment
  metadata:
    name: basic-example
    namespace: test-namespace
  spec:
    flinkConfiguration:
      taskmanager.numberOfTaskSlots: "2"
    flinkVersion: v1_17
    image: flink:1.17
    job:
      jarURI: local:///opt/flink/examples/streaming/StateMachineExample.jar
      parallelism: 2
      upgradeMode: stateless
    jobManager:
      replicas: 1
      resource:
        cpu: 1
        memory: 2048m
    mode: native
    serviceAccount: flink
    taskManager:
      resource:
        cpu: 1
        memory: 2048m
  status:
    clusterInfo:
      flink-revision: 2750d5c @ 2023-05-19T10:45:46+02:00
      flink-version: 1.17.1
      total-cpu: "2.0"
      total-memory: "4294967296"
    jobManagerDeploymentStatus: READY
    jobStatus:
      checkpointInfo:
        lastPeriodicCheckpointTimestamp: 0
      jobId: 44cc5573945d1d4925732d915c70b9ac
      jobName: Minimal Spec Example
      savepointInfo:
        lastPeriodicSavepointTimestamp: 0
      startTime: "1717599166365"
      state: RUNNING
      updateTime: "1717599182544"
    lifecycleState: STABLE
    observedGeneration: 1
    reconciliationStatus:
      lastReconciledSpec: '{"spec":{"job":{"jarURI":"local:///opt/flink/examples/streaming/StateMachineExample.jar","parallelism":2,"entryClass":null,"args":[],"state":"running","savepointTriggerNonce":null,"initialSavepointPath":null,"checkpointTriggerNonce":null,"upgradeMode":"stateless","allowNonRestoredState":null,"savepointRedeployNonce":null},"restartNonce":null,"flinkConfiguration":{"taskmanager.numberOfTaskSlots":"2"},"image":"flink:1.17","imagePullPolicy":null,"serviceAccount":"flink","flinkVersion":"v1_17","ingress":null,"podTemplate":null,"jobManager":{"resource":{"cpu":1.0,"memory":"2048m","ephemeralStorage":null},"replicas":1,"podTemplate":null},"taskManager":{"resource":{"cpu":1.0,"memory":"2048m","ephemeralStorage":null},"replicas":null,"podTemplate":null},"logConfiguration":null,"mode":null},"resource_metadata":{"apiVersion":"flink.apache.org/v1beta1","metadata":{"generation":2},"firstDeployment":true}}'
      lastStableSpec: '{"spec":{"job":{"jarURI":"local:///opt/flink/examples/streaming/StateMachineExample.jar","parallelism":2,"entryClass":null,"args":[],"state":"running","savepointTriggerNonce":null,"initialSavepointPath":null,"checkpointTriggerNonce":null,"upgradeMode":"stateless","allowNonRestoredState":null,"savepointRedeployNonce":null},"restartNonce":null,"flinkConfiguration":{"taskmanager.numberOfTaskSlots":"2"},"image":"flink:1.17","imagePullPolicy":null,"serviceAccount":"flink","flinkVersion":"v1_17","ingress":null,"podTemplate":null,"jobManager":{"resource":{"cpu":1.0,"memory":"2048m","ephemeralStorage":null},"replicas":1,"podTemplate":null},"taskManager":{"resource":{"cpu":1.0,"memory":"2048m","ephemeralStorage":null},"replicas":null,"podTemplate":null},"logConfiguration":null,"mode":null},"resource_metadata":{"apiVersion":"flink.apache.org/v1beta1","metadata":{"generation":2},"firstDeployment":true}}'
      reconciliationTimestamp: 1717599148930
      state: DEPLOYED
    taskManager:
      labelSelector: component=taskmanager,app=basic-example
      replicas: 1
# the expected output of operation 'InterpretReplica' for a FlinkDeployment resource.
replica: 2
requires:
  resourceRequest:
    "cpu": "1"
    "memory": "2048m"
  namespace: "test-namespace"
# the expected output of operation 'InterpretComponent' for a FlinkDeployment resource.
components:
  - name: jobmanager
    replicas: 1
    replicaRequirements:
      resourceRequest:
        "cpu": "1"
        "memory": "2048m"
  - name: taskmanager
    replicas: 1
    replicaRequirements:
      resourceRequest:
        "cpu": "1"
        "memory": "2048m"
# the expected output of operation 'InterpretHealth' for a FlinkDeployment resource.
healthy: true
# the expected output of operation 'InterpretStatus' for a FlinkDeployment resource.
status:
  clusterInfo:
    flink-revision: 2750d5c @ 2023-05-19T10:45:46+02:00
    flink-version: 1.17.1
    total-cpu: "2.0"
    total-memory: "4294967296"
  jobManagerDeploymentStatus: READY
  jobStatus:
    checkpointInfo:
      lastPeriodicCheckpointTimestamp: 0
    jobId: 44cc5573945d1d4925732d915c70b9ac
    jobName: Minimal Spec Example
    savepointInfo:
      lastPeriodicSavepointTimestamp: 0
    startTime: "1717599166365"
    state: RUNNING
    updateTime: "1717599182544"
  lifecycleState: STABLE
  observedGeneration: 1
  reconciliationStatus:
    lastReconciledSpec: '{"spec":{"job":{"jarURI":"local:///opt/flink/examples/streaming/StateMachineExample.jar","parallelism":2,"entryClass":null,"args":[],"state":"running","savepointTriggerNonce":null,"initialSavepointPath":null,"checkpointTriggerNonce":null,"upgradeMode":"stateless","allowNonRestoredState":null,"savepointRedeployNonce":null},"restartNonce":null,"flinkConfiguration":{"taskmanager.numberOfTaskSlots":"2"},"image":"flink:1.17","imagePullPolicy":null,"serviceAccount":"flink","flinkVersion":"v1_17","ingress":null,"podTemplate":null,"jobManager":{"resource":{"cpu":1.0,"memory":"2048m","ephemeralStorage":null},"replicas":1,"podTemplate":null},"taskManager":{"resource":{"cpu":1.0,"memory":"2048m","ephemeralStorage":null},"replicas":null,"podTemplate":null},"logConfiguration":null,"mode":null},"resource_metadata":{"apiVersion":"flink.apache.org/v1beta1","metadata":{"generation":2},"firstDeployment":true}}'
    lastStableSpec: '{"spec":{"job":{"jarURI":"local:///opt/flink/examples/streaming/StateMachineExample.jar","parallelism":2,"entryClass":null,"args":[],"state":"running","savepointTriggerNonce":null,"initialSavepointPath":null,"checkpointTriggerNonce":null,"upgradeMode":"stateless","allowNonRestoredState":null,"savepointRedeployNonce":null},"restartNonce":null,"flinkConfiguration":{"taskmanager.numberOfTaskSlots":"2"},"image":"flink:1.17","imagePullPolicy":null,"serviceAccount":"flink","flinkVersion":"v1_17","ingress":null,"podTemplate":null,"jobManager":{"resource":{"cpu":1.0,"memory":"2048m","ephemeralStorage":null},"replicas":1,"podTemplate":null},"taskManager":{"resource":{"cpu":1.0,"memory":"2048m","ephemeralStorage":null},"replicas":null,"podTemplate":null},"logConfiguration":null,"mode":null},"resource_metadata":{"apiVersion":"flink.apache.org/v1beta1","metadata":{"generation":2},"firstDeployment":true}}'
    reconciliationTimestamp: 1717599148930
    state: DEPLOYED
  taskManager:
    labelSelector: component=taskmanager,app=basic-example
    replicas: 1
