Merged
17 commits
a5b18a2
Add unit tests for ConfigurationCheckModule
devanshjainms Oct 21, 2025
cf52097
Enhance HAClusterValidator to support multiple DR provider configurat…
devanshjainms Oct 21, 2025
1c4e82b
Fix parameter logging in HAClusterValidator and update global INI con…
devanshjainms Oct 21, 2025
28174e8
Refactor conditionals in configuration checks and update Azure Load B…
devanshjainms Oct 22, 2025
a96cd4e
Enhance configuration checks to support IBM DB2 alongside SAP HANA, u…
devanshjainms Oct 22, 2025
025b38e
Standardize DB2 naming across configuration files to ensure consisten…
devanshjainms Oct 22, 2025
c6c46e1
Standardize naming for DB2 to Db2 across configuration files for cons…
devanshjainms Oct 22, 2025
de7b627
Enhance Azure Load Balancer module to handle various IP address forma…
devanshjainms Oct 22, 2025
53e2e11
Refactor user parameter handling in CommandCollector and update DB2 u…
devanshjainms Oct 22, 2025
2e871f8
Add min_list validation for kernel parameters and update related tests
devanshjainms Oct 22, 2025
5aaa577
Improve validation logic in ConfigurationCheckModule to handle intege…
devanshjainms Oct 22, 2025
6ab09fa
Update DB2 user handling and command execution in db2.yml; include db…
devanshjainms Oct 22, 2025
468f84f
Enhance min_list validation in ConfigurationCheckModule to handle exc…
devanshjainms Oct 22, 2025
9ba6b87
Update documentation and improve configuration check reporting; enhan…
devanshjainms Oct 22, 2025
495116e
Refactor test cases in get_azure_lb_test.py to streamline frontend_ip…
devanshjainms Oct 23, 2025
9c1e470
Refactor AzureLoadBalancer to simplify private IP address retrieval; …
devanshjainms Oct 23, 2025
e1f159a
Add Azure login instructions to CONFIGURATION_CHECKS.md; update secti…
devanshjainms Oct 23, 2025
29 changes: 12 additions & 17 deletions README.md
@@ -11,48 +11,42 @@

## 🔍 Overview

The SAP Testing Automation Framework is an open-source orchestration tool designed to validate SAP deployments on Microsoft Azure. It enables you to assess system configurations against SAP on Azure best practices and guidelines. Additionally, the framework facilitates automation for various testing scenarios, including High Availability (HA) functional testing.
The SAP Testing Automation Framework is an open-source orchestration tool designed to validate SAP deployments on Microsoft Azure. It enables you to assess system configurations against SAP on Azure best practices and guidelines, and facilitates automation for various testing scenarios.

> **NOTE**: This repository is currently in public preview and is intended for testing and feedback purposes. As this is an early release, it is not yet production-ready, and breaking changes can be introduced at any time.
![SAP Testing Automation Framework](./docs/images/sap-testing-automation-framework.png)

## Supported Configuration Matrix

The following SAP components are supported in a two-node Pacemaker cluster running on SUSE Linux Enterprise Server (SLES) or Red Hat Enterprise Linux (RHEL):

- **SAP HANA Scale-Up**
- **SAP Central Services**

For additional information on supported configuration patterns, such as cluster types (Azure Fence Agent or SBD) and storage options (Azure Files or Azure NetApp Files) in this automated testing framework, refer to [supported high availability configuration](./docs/HIGH_AVAILABILITY.md).

## 📊 Key Features

- **High Availability Testing** - Thorough validation of the SAP HANA scale-up and SAP Central Services failover mechanism in a two node pacemaker cluster, ensuring the system operates correctly across various test cases.
- **Configuration Validation** - Ensures that SAP HANA scale-up and SAP Central Services configurations comply with SAP on Azure best practices and guidelines.
- **Functional Testing** - Executes test scenarios on the high availability setup to identify potential issues, whether during a new system deployment or before implementing cluster changes in a production environment.
- **Configuration Checks** - Validates OS parameters, database settings, Azure resources, and storage configurations against SAP and Azure best practices for supported databases. Performs comprehensive validation including kernel parameters, filesystem mounts, VM sizing, and network setup to ensure compliance with recommended guidelines.
- **Detailed Reporting** - Generates comprehensive reports, highlighting configuration mismatches or deviations from recommended best practices. Includes failover test outcomes, any failures encountered, and logs with insights to aid in troubleshooting identified issues.

## 🏆 Purpose

Testing is crucial for keeping SAP systems running smoothly, especially for critical business operations. This framework helps by addressing key challenges:

- **Preventing Risks** - It simulates system failures like node crashes, network issues, and storage failures to check if recovery mechanisms work properly, helping to catch problems before they affect real operations.
- **Meeting Compliance Requirements** - Many businesses need to prove their SAP systems are reliable. This framework provides detailed reports and logs that help with audits and ensure compliance with internal and regulatory standards.
- **Ensuring Quality** - The framework runs automated tests to verify whether the failover behavior of SAP components functions as expected on Azure across various test scenarios. It also ensures that the cluster and resource configurations are set up correctly, helping to maintain system reliability.
- **Automating Testing**: Manually testing high availability (HA) setups is slow and error-prone. This framework automates the process from setup to reporting, saving time and ensuring more accurate and consistent results.
- **Preventing Risks** - Identifies configuration issues and validates system behavior before problems affect production operations. It simulates system failures like node crashes, network issues, and storage failures to check if recovery mechanisms work properly, helping to catch potential issues early.
- **Meeting Compliance Requirements** - Provides detailed reports and logs that help with audits and ensure compliance with internal and regulatory standards.
- **Ensuring Quality** - The framework runs automated tests to verify whether the failover behavior of SAP components functions as expected on Azure across various test scenarios. It also ensures that the cluster and resource configurations are set up correctly, helping to maintain system reliability.
- **Automating Testing** - Automates validation processes from configuration checks to reporting, saving time and ensuring consistent results.

## 🚦 Get Started

There are two primary ways to get started with the SAP Testing Automation Framework. Choose the path that best fits your current environment and objectives:

### Option 1: [Integration with SAP Deployment Automation Framework (SDAF)](./docs/SDAF_INTEGRATION.md)
### Option 1: Integration with SAP Deployment Automation Framework (SDAF)

If you already have an [SDAF](https://github.com/Azure/sap-automation) environment set up, integrating the SAP Testing Automation Framework is a natural extension that allows you to leverage existing deployment pipelines and configurations.

### Option 2: [Getting Started with High Availability Testing (Standalone)](./docs/HIGH_AVAILABILITY.md)
### Option 2: Getting Started with High Availability Testing (Standalone)

For users focused solely on validating SAP functionality and configurations, the standalone approach offers a streamlined process to test critical SAP components without the complexity of full deployment integration.
- For High Availability testing details, see the [High Availability documentation](./docs/HIGH_AVAILABILITY.md).
- For Configuration Checks and Testing details, see the [Configuration Checks documentation](./docs/CONFIGURATION_CHECKS.md).

## 🏗️ Architecture and Components

@@ -68,7 +62,8 @@ For support and questions, please:
## 📚 Additional Resources

- [Azure SAP Documentation](https://docs.microsoft.com/azure/sap)
- [SAP on Azure: High Availability Guide](https://docs.microsoft.com/azure/sap/workloads/sap-high-availability-guide-start)
- [Configuration Checks Guide](./docs/CONFIGURATION_CHECKS.md)
- [High Availability Testing Guide](./docs/HIGH_AVAILABILITY.md)

## 🤝 Contributing

46 changes: 35 additions & 11 deletions docs/CONFIGURATION_CHECKS.md
@@ -29,17 +29,17 @@ Configuration validation serves as a critical quality gate in the SAP deployment
- Storage account redundancy settings
- Disk caching policies

**SAP HANA Configuration**
- Memory allocation
- System replication parameters
**SAP Database Configuration**
- SAP HANA: Memory allocation, system replication parameters
- IBM DB2: Hardware requirements, system language, OS tuning parameters

**Pacemaker Cluster**
**Pacemaker Cluster (HANA only)**
- Resource agent versions and parameters
- Fencing (STONITH) configuration
- Resource constraints and colocation rules
- Cluster communication settings

**SAP HA Resources**
**SAP HA Resources (HANA only)**
- Virtual hostname configuration
- File system mount options
- Service startup ordering
@@ -56,8 +56,33 @@ Update the `TEST_TYPE` parameter in [`vars.yaml`](./../vars.yaml) file to `Confi

Follow the steps (2.1 - 2.2) in [Setup Guide for SAP Testing Automation Framework](./SETUP.MD#2-system-configuration) to configure your system details.

> **Note**: High Availability (HA) configuration checks and functional tests are currently supported only for SAP HANA databases. For IBM DB2 databases, only non-HA configuration checks are available.

### 3. Test Execution
### 3. Required Access and Permissions

Ensure that the managed identity or service principal used by the controller virtual machine has the necessary permissions to access Azure resources and SAP systems for configuration validation:
1. Assign the "Reader" role to the user-assigned managed identity on the resource group containing the SAP VMs and the Azure Load Balancer.
1. Assign the "Reader" role to the user-assigned managed identity on the resource group containing the Azure NetApp Files account (if using Azure NetApp Files as shared storage).
1. Assign the "Reader" role to the user-assigned managed identity on the resource group containing the storage account (if using Azure File Share as shared storage).
1. Assign the "Reader" role to the user-assigned managed identity on the resource group containing the managed disks (if using Azure Managed Disks for SAP HANA data and log volumes).
1. Assign the "Reader" role to the user-assigned managed identity on the resource group containing the shared disks (if using Azure Shared Disks for SBD devices).

### 4. Azure Login (required)

Ensure that you are logged in to the Azure CLI on the controller VM with the appropriate subscription context:

```bash
# Login to Azure using System Assigned Managed Identity
az login --identity

# Login to Azure using User Assigned Managed Identity
az login --identity -u <client-id-of-user-assigned-managed-identity>

# Set the desired subscription context
az account set --subscription <subscription-id>
```

### 5. Test Execution

To execute the script, run the following command:

@@ -71,7 +96,7 @@ To execute the script, run following command:
# Run checks with verbose logging
./scripts/sap_automation_qa.sh -vv

# Run only Database (HANA) configuration checks
# Run only Database configuration checks (supports both HANA and DB2)
./scripts/sap_automation_qa.sh --extra-vars='{"configuration_test_type":"Database"}'

# Run only ASCS/ERS configuration checks
@@ -81,7 +106,7 @@ To execute the script, run following command:
./scripts/sap_automation_qa.sh --extra-vars='{"configuration_test_type":"ApplicationInstances"}'
```

### 4. Viewing Test Results
### 6. Viewing Test Results

After the test execution completes, a detailed HTML report is generated that summarizes the PASS/FAIL status of each test case and includes detailed execution logs for every step of the automation run.

@@ -99,12 +124,11 @@ After the test execution completes, a detailed HTML report is generated that sum
The report file is named using the following format:

```
HA_{SAP_TIER}_{DATABASE_TYPE}_{OS_DISTRO_NAME}_{INVOCATION_ID}.html
CONFIG_{SAP_SID}_{DATABASE_TYPE}_{INVOCATION_ID}.html
```

- `SAP_TIER`: The SAP tier tested (e.g., DB, SCS)
- `SAP_SID`: The SAP system ID (e.g., HN1, NWP)
- `DATABASE_TYPE`: The database type (e.g., HANA)
- `OS_DISTRO_NAME`: The operating system distribution (e.g., SLES15SP4)
- `INVOCATION_ID`: A unique identifier (Group invocation ID) for the test run which is logged at the end of test execution. Find example screenshot below:

![Test Execution Completion Screenshot](./images/execution_screenshot.png)
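As a quick illustration, the naming scheme can be reproduced with a Python f-string. All values below are hypothetical placeholders; the real invocation ID is printed at the end of the test run:

```python
# Hypothetical example values for illustrating the report file name format.
sap_sid = "HN1"
database_type = "HANA"
invocation_id = "a1b2c3d4"

# Mirrors the CONFIG_{SAP_SID}_{DATABASE_TYPE}_{INVOCATION_ID}.html pattern.
report_name = f"CONFIG_{sap_sid}_{database_type}_{invocation_id}.html"
print(report_name)  # CONFIG_HN1_HANA_a1b2c3d4.html
```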
2 changes: 1 addition & 1 deletion docs/SETUP.MD
@@ -44,7 +44,7 @@ For the framework to access the properties of the Azure Load Balancer in a high
**Permissions required for Configuration Checks:**
1. "Reader" role to the user-assigned managed identity on the resource group containing the SAP VMs and the Azure Load Balancer.
1. "Reader" role to the user-assigned managed identity on the resource group containing the Azure NetApp Files account (if using Azure NetApp Files as shared storage).
1. "Storage Account Reader" role to the user-assigned managed identity on the resource group containing the storage account (if using Azure File Share as shared storage).
1. "Reader" role to the user-assigned managed identity on the resource group containing the storage account (if using Azure File Share as shared storage).
1. "Reader" role to the user-assigned managed identity on the resource group containing the managed disks (if using Azure Managed Disks for SAP HANA data and log volumes).
1. "Reader" role to the user-assigned managed identity on the resource group containing the shared disks (if using Azure Shared Disks for SBD devices).

4 changes: 4 additions & 0 deletions src/module_utils/collector.py
@@ -128,6 +128,10 @@ def collect(self, check, context) -> str:
if not re.match(r"^[a-zA-Z0-9_-]+$", user):
self.parent.log(logging.ERROR, f"Invalid user parameter: {user}")
return f"ERROR: Invalid user parameter: {user}"

if user == "db2sid":
user = f"db2{context.get('database_sid', '').lower()}"

command = f"sudo -u {shlex.quote(user)} {command}"

return self.parent.execute_command_subprocess(
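The `db2sid` placeholder substitution added to `collector.py` can be sketched in isolation. The function name and sample SID below are illustrative, not part of the module:

```python
def resolve_command_user(user: str, database_sid: str) -> str:
    """Expand the literal placeholder "db2sid" to the DB2 instance owner.

    By IBM DB2 convention the instance owner OS account is db2<sid>,
    lower-cased (e.g. db2hn1 for SID HN1). Any other user is passed through.
    """
    if user == "db2sid":
        return f"db2{database_sid.lower()}"
    return user

print(resolve_command_user("db2sid", "HN1"))  # db2hn1
print(resolve_command_user("root", "HN1"))    # root
```

Note that the substitution runs after the regex validation in the module, which `db2sid` itself passes, so the expanded name is what ultimately reaches `sudo -u`.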
61 changes: 60 additions & 1 deletion src/modules/configuration_check_module.py
@@ -9,6 +9,7 @@
import time
import json
import re
import sys
from typing import Optional, Dict, Any, List, Type
from datetime import datetime
from concurrent.futures import ThreadPoolExecutor
@@ -91,6 +92,7 @@ def _init_validator_registry(self) -> Dict[str, Any]:
"string": self.validate_string,
"range": self.validate_numeric_range,
"list": self.validate_list,
"min_list": self.validate_min_list,
"check_support": self.validate_vm_support,
"properties": self.validate_properties,
}
@@ -497,6 +499,58 @@ def validate_list(self, check: Check, collected_data: str) -> Dict[str, Any]:
),
}

def validate_min_list(self, check: Check, collected_data: str) -> Dict[str, Any]:
"""
Validate that each value in a space-separated list meets or exceeds minimum values.
Used for kernel parameters like kernel.sem where actual values must be >= minimum required.

:param check: Check definition containing min_values and separator in validator_args
:type check: Check
:param collected_data: Space-separated string of values from system
:type collected_data: str
:return: Validation result dictionary
:rtype: Dict[str, Any]
"""
min_values = check.validator_args.get("min_values", [])
separator = check.validator_args.get("separator", " ")
try:

if not isinstance(min_values, list):
return {
"status": TestStatus.ERROR.value,
}

collected_values = (
str(collected_data).strip().split(separator) if collected_data else []
)
collected_values = [val.strip() for val in collected_values if val.strip()]
if len(collected_values) != len(min_values):
return {
"status": self._create_validation_result(check.severity, False),
}
all_valid = True
for actual, minimum in zip(collected_values, min_values):
try:
actual_int = int(actual)
minimum_int = int(minimum)
if actual_int > sys.maxsize or minimum_int > sys.maxsize:
continue
if actual_int < minimum_int:
all_valid = False
break
except (ValueError, OverflowError):
all_valid = False
break

return {
"status": self._create_validation_result(check.severity, all_valid),
}
except Exception as ex:
self.log(logging.ERROR, f"Error while validating min list {ex}")
return {
"status": TestStatus.ERROR.value,
}

def validate_vm_support(self, check: Check, collected_data: str) -> Dict[str, Any]:
"""
Validates if a VM SKU is supported for the given role and database type
@@ -609,6 +663,11 @@ def create_result(
valid_list = check.validator_args.get("valid_list", [])
if isinstance(valid_list, list) and valid_list:
expected_value = ", ".join(str(v) for v in valid_list)
elif check.validator_type == "min_list":
min_values = check.validator_args.get("min_values", [])
separator = check.validator_args.get("separator", " ")
if isinstance(min_values, list) and min_values:
expected_value = f"Min: {separator.join(str(v) for v in min_values)}"
elif check.validator_type == "properties":
props = check.validator_args.get("properties", [])
if isinstance(props, list) and props:
@@ -875,7 +934,7 @@ def run(self):
context["hostname"] = custom_hostname

self.set_context(context)
if self.context.get("check_type", {}).get("file_name") == "hana":
if self.context.get("check_type", {}).get("file_name") in ["hana", "db2"]:
temp_context = FileSystemCollector(parent=self).collect(
check=None, context=self.context
)
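The core rule of the `min_list` validator introduced in this file can be reduced to a small standalone sketch. This is a simplified illustration (names are mine): it drops the module's severity handling and `sys.maxsize` guard and keeps only the comparison logic:

```python
def min_list_ok(collected: str, min_values: list, separator: str = " ") -> bool:
    # Counts must match, and each collected value must be an integer
    # greater than or equal to its positional minimum.
    values = [v.strip() for v in str(collected).strip().split(separator) if v.strip()]
    if len(values) != len(min_values):
        return False
    try:
        return all(int(a) >= int(m) for a, m in zip(values, min_values))
    except ValueError:
        return False

# kernel.sem reports four fields: SEMMSL SEMMNS SEMOPM SEMMNI
print(min_list_ok("32000 1024000000 500 32000", [250, 256000, 32, 1024]))  # True
print(min_list_ok("100 1024000000 500 32000", [250, 256000, 32, 1024]))    # False
```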
47 changes: 41 additions & 6 deletions src/modules/get_azure_lb.py
@@ -233,31 +233,66 @@ def get_load_balancers_details(self) -> None:
for inbound_rule in inbound_rules
if "privateIpAddress" in inbound_rule
)

self.log(logging.INFO, f"Looking for load balancers with IPs: {load_balancer_ips}")

found_load_balancer = None

def get_private_ip_from_config(config):
"""
Extract private IP from frontend config, handling different key variations.
Azure SDK might return different structures based on authentication context.
"""
private_ip = (
config.get("private_ip_address")
or config.get("privateIpAddress")
)
return private_ip

found_load_balancer = next(
(
lb
for lb in load_balancers
for frontend_ip_config in lb["frontend_ip_configurations"]
if frontend_ip_config["private_ip_address"] in load_balancer_ips
for frontend_ip_config in lb.get("frontend_ip_configurations", [])
if get_private_ip_from_config(frontend_ip_config) in load_balancer_ips
),
None,
)

if not found_load_balancer and load_balancers:
available_ips = []
self.log(
logging.WARNING, f"No matching load balancer found for IPs: {load_balancer_ips}"
)
for lb in load_balancers:
lb_name = lb.get("name", "unknown")
for config in lb.get("frontend_ip_configurations", []):
private_ip = get_private_ip_from_config(config)
if private_ip:
available_ips.append(f"{lb_name}:{private_ip}")
else:
self.log(
logging.DEBUG,
f"Frontend config structure for {lb_name}: {list(config.keys())}",
)
self.log(logging.WARNING, f"Available load balancers and private IPs: {available_ips}")
parameters = []

def check_parameters(entity, parameters_dict, entity_type):
for key, value_object in parameters_dict.items():
entity_value = entity.get(key, "N/A")
expected_value = value_object.get("value", "")

parameters.append(
Parameters(
category=entity_type,
id=entity["name"],
id=entity.get("name", "unknown"),
name=key,
value=str(entity[key]),
expected_value=str(value_object.get("value", "")),
value=str(entity_value),
expected_value=str(expected_value),
status=(
TestStatus.SUCCESS.value
if entity[key] == value_object.get("value", "")
if entity_value == expected_value
else TestStatus.ERROR.value
),
).to_dict()
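The key-variation handling in this change boils down to probing both key spellings before giving up. A standalone sketch (function name illustrative), assuming the frontend configuration arrives as a plain dictionary:

```python
def get_private_ip(frontend_config: dict):
    # Depending on how the Azure SDK response was serialized, the frontend
    # IP configuration may expose snake_case or camelCase keys; try both.
    return (
        frontend_config.get("private_ip_address")
        or frontend_config.get("privateIpAddress")
    )

print(get_private_ip({"private_ip_address": "10.0.0.4"}))  # 10.0.0.4
print(get_private_ip({"privateIpAddress": "10.0.0.4"}))    # 10.0.0.4
print(get_private_ip({"name": "frontend-1"}))              # None
```

Returning `None` when neither key is present is what lets the caller fall through to the diagnostic logging of available load balancer IPs.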