Commit 784e6f7

feat: A new feature [Psupatpos 63] Robot framework runner

2 parents 36b3541 + 8ea661d commit 784e6f7

17 files changed, +1218 −15 lines

Dockerfile

Lines changed: 15 additions & 0 deletions

```diff
@@ -11,6 +11,7 @@ ENV ROBOT_HOME=/opt/robot \
 
 COPY scripts/docker-entrypoint.sh /
 COPY scripts/*.py ${ROBOT_HOME}/
+COPY scripts/adapter-S3 ${ROBOT_HOME}/scripts/adapter-S3
 COPY requirements.txt ${ROBOT_HOME}/requirements.txt
 COPY library ${ROBOT_HOME}/integration-tests-built-in-library
 
@@ -26,6 +27,13 @@ RUN \
 apk-tools \
 py3-yaml \
 ca-certificates \
+inotify-tools \
+# Clean up
+&& rm -rf /var/cache/apk/*
+
+RUN echo 'https://dl-cdn.alpinelinux.org/alpine/edge/testing' >> /etc/apk/repositories \
+&& apk add --update --no-cache \
+s5cmd \
 # Clean up
 && rm -rf /var/cache/apk/*
 
@@ -42,12 +50,19 @@ RUN \
 && python3 -m pip install --no-cache-dir ${ROBOT_HOME}/integration-tests-built-in-library \
 # Clean up
 && rm -rf ${ROBOT_HOME}/integration-tests-built-in-library \
+# Create output directory
+&& mkdir -p ${ROBOT_HOME}/output \
 # Set permissions
 && chmod +x /docker-entrypoint.sh \
+&& chmod -R 775 ${ROBOT_HOME} \
+&& chown -R ${USER_ID}:0 ${ROBOT_HOME} \
+&& chgrp -R 0 ${ROBOT_HOME} \
 && chgrp 0 /docker-entrypoint.sh
 
 WORKDIR ${ROBOT_HOME}
 
+USER 1000:0
+
 ENTRYPOINT ["/docker-entrypoint.sh"]
 CMD ["run-robot"]
 
```

README.md

Lines changed: 36 additions & 8 deletions

````diff
@@ -3,10 +3,10 @@
 The `Docker Integration Tests` (aka `BDI`) is image for integration tests.
 Supposed that this image will not be used to execute integration tests directly
 but real images for integration tests will use this image as basic
-(`FROM` command in the particular docker file). BDI builds some `sandbox` which
+(`FROM` command in the particular Docker file). BDI builds some `sandbox` which
 includes `python` interpreter, Robot Framework, some useful tools such as
 `bash`, `shadow`, `vim`, `rsync`, `ttyd`, common custom Robot Framework libraries
-(for example, `PlatformLibrary`) and customized docker entry point script.
+(for example, `PlatformLibrary`) and customized Docker entry point script.
 
 * [Introduction](#introduction)
 * [Pre-installed tools](#pre-installed-tools)
@@ -46,7 +46,7 @@ and move `PlatformLibrary.html` file to documentation directory.
 
 ## Docker Entry point Script
 
-A docker entry point script is a script which will be executed after docker container is created.
+A Docker entry point script is a script which will be executed after Docker container is created.
 If you override the image, its entry point script
 will be executed by default. But if you override the entry point as well, your own entry point will be run.
 Docker Integration Tests contains simple and customized entry point script - `/docker-entrypoint.sh`
@@ -59,7 +59,7 @@ with the following command (possible `CMD` arguments):
 
 * `run-ttyd` command starts `ttyd` tool. `ttyd` is Web-console which rather useful for dev and troubleshooting purposes.
 
-* `custom` command executes custom bash script if this script's path is provided.
+* `custom` command executes custom Bash script if this script's path is provided.
 * To provide custom script this script should exist within container
 and environment variable `CUSTOM_ENTRYPOINT_SCRIPT` should contain path to the script.
 Actually, `custom` command is equivalent to overriding the entry point
@@ -211,7 +211,7 @@ STATUS_CUSTOM_RESOURCE_NAME=zookeeper
 ```
 
 If your k8s pod with integration tests always writes status to well-known custom resource you can override all this environment
-variables (excluding `STATUS_CUSTOM_RESOURCE_NAMESPACE`) in your docker file and set namespace in helm chart.
+variables (excluding `STATUS_CUSTOM_RESOURCE_NAMESPACE`) in your Docker file and set namespace in helm chart.
 
 Both of this approaches work with native k8s entities too. For example:
 
@@ -223,7 +223,7 @@ STATUS_CUSTOM_RESOURCE_PLURAL=deployments
 STATUS_CUSTOM_RESOURCE_NAME=zookeeper-1
 ```
 
-If feature is available `write_status.py` script is called two times. The first time immediately after docker
+If feature is available `write_status.py` script is called two times. The first time immediately after Docker
 entrypoint script was started to set `In progress` condition. The second time after tests are finished and parsed by
 `analyze results` script to set in the `message` field tests results. Default analyzer script is `write_status.py`
 but inheritor image can override it by `WRITE_STATUS_SCRIPT` environment variable which contains path to custom
@@ -244,7 +244,7 @@ should be placed in the `result.txt` file and the first line will be used as sho
 
 **Note!** This feature (write status to k8s entities) is disabled by default! To turn on it please set the
 `STATUS_WRITING_ENABLED` environment variable to `true`.
-For example in your docker file as
+For example in your Docker file as
 
 ```ini
 ENV STATUS_WRITING_ENABLED=true
@@ -276,5 +276,33 @@ Docker Integration Tests uses the following environment variables:
 * IS_STATUS_BOOLEAN
 
 All of them instead of `TAGS`, `ONLY_INTEGRATION_TESTS`, `STATUS_CUSTOM_RESOURCE_NAMESPACE`,
-`STATUS_CUSTOM_RESOURCE_PATH` and maybe `DEBUG` we recommend overriding in the docker file and do not
+`STATUS_CUSTOM_RESOURCE_PATH` and maybe `DEBUG` we recommend overriding in the Docker file and do not
+forward them to the integration tests deployment environment.
+
+### ATP Storage (S3) Variables
+
+These variables enable automatic upload of test results and reports to S3-compatible storage:
+
+* `ATP_STORAGE_PROVIDER` - Storage provider type: `aws` (AWS S3), `minio` (MinIO), or `s3` (S3-compatible storage)
+* `ATP_STORAGE_SERVER_URL` - S3 API endpoint URL (required for MinIO and S3-compatible storage, e.g., <https://minio.example.com> or <https://s3.amazonaws.com>)
+* `ATP_STORAGE_SERVER_UI_URL` - S3 UI URL for browsing uploaded files (optional, e.g., <https://minio-ui.example.com>)
+* `ATP_STORAGE_BUCKET` - S3 bucket name for storing test results. If empty, S3 integration is disabled
+* `ATP_STORAGE_REGION` - S3 region (optional, default: us-east-1). Required for AWS S3
+* `ATP_STORAGE_USERNAME` - S3 credentials username:
+  * For AWS S3: AWS Access Key ID
+  * For MinIO: MinIO Access Key
+  * Required when `ATP_STORAGE_BUCKET` is set
+* `ATP_STORAGE_PASSWORD` - S3 credentials password:
+  * For AWS S3: AWS Secret Access Key
+  * For MinIO: MinIO Secret Key
+  * Required when `ATP_STORAGE_BUCKET` is set
+* `ATP_REPORT_VIEW_UI_URL` - URL for viewing Allure reports (optional, e.g., <https://allure.example.com>)
+* `ENVIRONMENT_NAME` - Environment name for organizing test results in S3 (e.g., dev, staging, prod). Results are stored in: `s3://{bucket}/Result/{environment}/{date}/{time}/`
+* `UPLOAD_METHOD` - Upload method: `sync` (directory sync) or `cp` (file-by-file upload, default: sync)
+
+**Note:** Test results are automatically uploaded to S3 with the following structure:
+- Allure results: `s3://{bucket}/Result/{environment}/{date}/{time}/allure-results/`
+- Attachments: `s3://{bucket}/Report/{environment}/{date}/{time}/attachments/`
+
+It is recommended to override all ATP Storage variables except `ENVIRONMENT_NAME` in the Docker file and do not
 forward them to the integration tests deployment environment.
````
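The destination layout and the `sync`/`cp` distinction described above can be sketched as follows. This is illustrative only: the date/time formats, the default values, and the exact `s5cmd` invocations are assumptions, not taken from the image's actual upload scripts.

```shell
#!/bin/bash
# Sketch: build the S3 destination prefix following the documented layout
# s3://{bucket}/Result/{environment}/{date}/{time}/ and print the upload
# command that a given UPLOAD_METHOD would correspond to. The s5cmd
# syntax shown here is illustrative, not the image's real invocation.
set -euo pipefail

ATP_STORAGE_BUCKET="${ATP_STORAGE_BUCKET:-test-results}"
ENVIRONMENT_NAME="${ENVIRONMENT_NAME:-dev}"
UPLOAD_METHOD="${UPLOAD_METHOD:-sync}"
# Date/time components of the prefix (format is an assumption).
RUN_DATE="$(date +%Y-%m-%d)"
RUN_TIME="$(date +%H-%M-%S)"

dest="s3://${ATP_STORAGE_BUCKET}/Result/${ENVIRONMENT_NAME}/${RUN_DATE}/${RUN_TIME}/allure-results/"

case "$UPLOAD_METHOD" in
  sync) cmd="s5cmd sync ./allure-results/ $dest" ;;  # whole-directory sync
  cp)   cmd="s5cmd cp ./allure-results/* $dest" ;;   # file-by-file upload
  *)    echo "Unknown UPLOAD_METHOD: $UPLOAD_METHOD" >&2; exit 1 ;;
esac

echo "$cmd"
```

Running it with the defaults prints the `sync` variant of the command with a freshly timestamped prefix.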

demo/docker/Dockerfile

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,3 +1,3 @@
 FROM ghcr.io/netcracker/qubership-docker-integration-tests:main
 
-COPY docker/robot ${ROBOT_HOME}
+COPY docker/robot ${ROBOT_HOME}
```

requirements.txt

Lines changed: 1 addition & 0 deletions

```diff
@@ -8,6 +8,7 @@ pyyaml==6.0.3
 requests==2.32.5
 requests_oauthlib==2.0.0
 robotframework-requests==0.9.7
+allure-robotframework==2.15.0
 robotframework==7.3.2
 setuptools~=80.10.0
 urllib3~=2.3.0
```
Lines changed: 51 additions & 0 deletions

```shell
#!/bin/bash
set -e

# Main test job entrypoint script - coordinates all modules
echo "Starting test job entrypoint script..."
echo "Working directory: $(pwd)"
echo "Timestamp: $(date)"

# Set default upload method
export UPLOAD_METHOD="${UPLOAD_METHOD:-sync}"
echo "Upload method: $UPLOAD_METHOD"
echo "Report view host URL: $ATP_REPORT_VIEW_UI_URL"
echo "S3 bucket: ${ATP_STORAGE_BUCKET:-<not set>}"
echo "S3 provider: ${ATP_STORAGE_PROVIDER:-<not set>}"
echo "S3 API host: ${ATP_STORAGE_SERVER_URL:-<not set>}"
echo "S3 UI URL: ${ATP_STORAGE_SERVER_UI_URL:-<not set>}"
echo "Environment name: $ENVIRONMENT_NAME"

# Import modular components
source "${ROBOT_HOME}/scripts/adapter-S3/init.sh"
source "${ROBOT_HOME}/scripts/adapter-S3/test-runner.sh"
source "${ROBOT_HOME}/scripts/adapter-S3/upload-monitor.sh"

# Execute main workflow
echo "Starting test execution workflow..."

# Store all arguments passed to this script
echo "Robot arguments: $*"

# Initialize environment
init_environment

# Start upload monitoring only if S3 is enabled
if [[ -n "${ATP_STORAGE_BUCKET}" ]]; then
  start_upload_monitoring
else
  echo "Skipping upload monitoring (S3 integration disabled)"
fi

# Run tests
run_tests "$@"

# Finalize upload only if S3 is enabled
if [[ -n "${ATP_STORAGE_BUCKET}" ]]; then
  finalize_upload
else
  echo "Skipping upload finalization (S3 integration disabled)"
  echo "Test results are available locally at: $TMP_DIR"
fi

echo "Test job finished successfully!"
```
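The `upload-monitor.sh` module sourced above is not shown in this diff. A plausible sketch of what it might do is below; the function name `upload_changes`, the watch logic, and the use of `inotifywait` (from the `inotify-tools` package the Dockerfile now installs) are assumptions, with a polling fallback so the sketch runs even where inotify is unavailable.

```shell
#!/bin/bash
# Hypothetical sketch of an upload monitor: watch a directory and invoke
# an upload callback when files appear. inotifywait is preferred; a
# simple polling loop is the fallback. upload_changes is a placeholder
# for the real S3 upload (e.g. an s5cmd call).
set -euo pipefail

WATCH_DIR="${WATCH_DIR:-$(mktemp -d)}"
uploads=0

upload_changes() {
  # Placeholder: the real implementation would upload to S3 here.
  uploads=$((uploads + 1))
  echo "upload #$uploads triggered for $WATCH_DIR"
}

monitor_once() {
  if command -v inotifywait >/dev/null 2>&1; then
    # Wait up to 5s for a filesystem event, then sync either way.
    inotifywait -q -t 5 -e create -e modify -e moved_to "$WATCH_DIR" || true
    upload_changes
  else
    # Polling fallback: upload when the file count changes.
    local before after
    before=$(find "$WATCH_DIR" -type f | wc -l)
    sleep 1
    after=$(find "$WATCH_DIR" -type f | wc -l)
    if [ "$after" -ne "$before" ]; then
      upload_changes
    fi
  fi
}

# Demo: create a file while the monitor runs one iteration.
( sleep 0.2; touch "$WATCH_DIR/report.html" ) &
monitor_once
wait
echo "uploads=$uploads"
```

A real monitor would loop over `monitor_once` in the background for the lifetime of the test run, which is what `start_upload_monitoring` presumably arranges.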
Lines changed: 172 additions & 0 deletions

```shell
#!/bin/bash

# Calculate Test Pass Rate from Allure Results
#
# This script analyzes test results from allure-results directory
# and calculates the overall pass rate, then exports it as environment variable
#
# Dependencies:
# - jq (for JSON parsing)
# - bc (for floating point calculations)

set -eo pipefail

# Logging functions
log_info() {
  echo "INFO: $1"
}

log_success() {
  echo "SUCCESS: $1"
}

log_warning() {
  echo "WARNING: $1"
}

log_error() {
  echo "ERROR: $1"
}

# Get script directory (exported for potential external use)
# shellcheck disable=SC2034
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# Default allure-results directory (now in parent directory)
ALLURE_RESULTS_DIR="${1:-/tmp/clone/allure-results}"

# Check if allure-results directory exists
if [ ! -d "$ALLURE_RESULTS_DIR" ]; then
  log_error "Allure results directory not found: $ALLURE_RESULTS_DIR"
  exit 1
fi

# Check if jq is available
if ! command -v jq &>/dev/null; then
  log_error "jq is required but not installed. Please install jq to parse JSON files."
  exit 1
fi

# Check if bc is available, if not we'll use awk for calculations
BC_AVAILABLE=false
if command -v bc &>/dev/null; then
  BC_AVAILABLE=true
fi

log_info "Analyzing test results from: $ALLURE_RESULTS_DIR"

# Initialize counters
total_tests=0
passed_tests=0
failed_tests=0
skipped_tests=0

# Initialize test details arrays
declare -a test_details=()
# Add table header
test_details+=("$(printf "%-12s" "Status") | Test Name")
test_details+=("------------ | ------------------------------------------------------------")

# Process each result file
for result_file in "$ALLURE_RESULTS_DIR"/*-result.json; do
  if [ -f "$result_file" ]; then
    log_info "Processing: $(basename "$result_file")"

    # Extract test status using jq
    status=$(jq -r '.status' "$result_file" 2>/dev/null || echo "unknown")
    test_name=$(jq -r '.name' "$result_file" 2>/dev/null || echo "Unknown Test")

    case "$status" in
      "passed")
        passed_tests=$((passed_tests + 1))
        log_success "PASSED: $test_name"
        test_details+=("PASSED | $test_name")
        ;;
      "failed")
        failed_tests=$((failed_tests + 1))
        log_error "FAILED: $test_name"
        test_details+=("FAILED | $test_name")
        ;;
      "skipped")
        skipped_tests=$((skipped_tests + 1))
        log_warning "SKIPPED: $test_name"
        test_details+=("SKIPPED | $test_name")
        ;;
      *)
        log_warning "? $test_name (status: $status)"
        test_details+=("UNKNOWN | $test_name")
        ;;
    esac

    total_tests=$((total_tests + 1))
  fi
done

# Calculate pass rate
if [ $total_tests -eq 0 ]; then
  log_error "No test results found in $ALLURE_RESULTS_DIR"
  exit 1
fi

# Calculate pass rate as percentage (passed / total * 100)
if [ "$BC_AVAILABLE" = true ]; then
  pass_rate=$(echo "scale=2; $passed_tests * 100 / $total_tests" | bc)
  pass_rate_rounded=$(echo "scale=0; $passed_tests * 100 / $total_tests" | bc)
else
  # Use awk for calculations if bc is not available
  pass_rate=$(awk "BEGIN {printf \"%.2f\", $passed_tests * 100 / $total_tests}")
  pass_rate_rounded=$(awk "BEGIN {printf \"%.0f\", $passed_tests * 100 / $total_tests}")
fi

# Determine overall status
if [ "$pass_rate_rounded" -eq 100 ]; then
  overall_status="PASSED"
elif [ "$pass_rate_rounded" -ge 80 ]; then
  overall_status="PARTIAL"
else
  overall_status="FAILED"
fi

# Export results as environment variables
export TEST_PASS_RATE="$pass_rate"
export TEST_PASS_RATE_ROUNDED="$pass_rate_rounded"
export TEST_TOTAL_COUNT="$total_tests"
export TEST_PASSED_COUNT="$passed_tests"
export TEST_FAILED_COUNT="$failed_tests"
export TEST_SKIPPED_COUNT="$skipped_tests"
export TEST_OVERALL_STATUS="$overall_status"

# Create test details string
TEST_DETAILS_STRING=""
for test_detail in "${test_details[@]}"; do
  if [ -n "$TEST_DETAILS_STRING" ]; then
    TEST_DETAILS_STRING="$TEST_DETAILS_STRING\n$test_detail"
  else
    TEST_DETAILS_STRING="$test_detail"
  fi
done
export TEST_DETAILS_STRING="$TEST_DETAILS_STRING"

# Display summary
echo ""
log_info "=== Test Results Summary ==="
echo "Overall Status: $overall_status"
echo "Pass Rate: ${pass_rate}%"
echo "Total Tests: $total_tests"
echo "Passed: $passed_tests"
echo "Failed: $failed_tests"
echo "Skipped: $skipped_tests"
echo ""

# Export variables for use in other scripts
log_info "Environment variables exported:"
echo "TEST_PASS_RATE=$pass_rate"
echo "TEST_PASS_RATE_ROUNDED=$pass_rate_rounded"
echo "TEST_TOTAL_COUNT=$total_tests"
echo "TEST_PASSED_COUNT=$passed_tests"
echo "TEST_FAILED_COUNT=$failed_tests"
echo "TEST_SKIPPED_COUNT=$skipped_tests"
echo "TEST_OVERALL_STATUS=$overall_status"
echo "TEST_DETAILS_STRING=<multiline string with test details>"

log_success "Pass rate calculation completed successfully"
```
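As a usage sketch, the pass-rate calculation above can be reproduced on a couple of hand-made result files. Here `grep`/`awk` stand in for `jq` so the example runs without extra dependencies; the file names and JSON fields follow the `*-result.json` convention the script expects.

```shell
#!/bin/bash
# Sketch: create two sample Allure result files and compute the pass
# rate the same way the script above does (passed / total * 100),
# using grep/awk instead of jq.
set -euo pipefail

results_dir="$(mktemp -d)/allure-results"
mkdir -p "$results_dir"
echo '{"name": "Test A", "status": "passed"}' > "$results_dir/a-result.json"
echo '{"name": "Test B", "status": "failed"}' > "$results_dir/b-result.json"

total=0
passed=0
for f in "$results_dir"/*-result.json; do
  # Pull the "status" value out of the one-line JSON file.
  status=$(grep -o '"status": *"[a-z]*"' "$f" | awk -F'"' '{print $4}')
  if [ "$status" = "passed" ]; then
    passed=$((passed + 1))
  fi
  total=$((total + 1))
done

pass_rate=$(awk "BEGIN {printf \"%.2f\", $passed * 100 / $total}")
echo "TEST_PASS_RATE=$pass_rate"
echo "TEST_TOTAL_COUNT=$total"
```

With one passed and one failed test this prints a pass rate of 50.00, matching what the full script would export in `TEST_PASS_RATE`.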
