47 changes: 47 additions & 0 deletions README.md
@@ -32,6 +32,53 @@ Executing the following will compile and run the code:
cargo run
```

# Testing

The project includes a comprehensive test suite of unit and integration tests.

## Running Tests

To run all unit tests:

```bash
cargo test
```

This will run 20+ unit tests that don't require external dependencies.

## Integration Tests

Some tests are marked as ignored because they require environment variables and external API access. To run these tests, first set up the required environment variables:

```bash
export GOVEE_API_KEY="your_govee_api_key"
export ACCESS_TOKEN="your_access_token"
export GOVEE_ROOT_URL="https://developer-api.govee.com"
# ... plus all OFFICE_* device environment variables
```

Then run only the ignored tests:

```bash
cargo test -- --ignored
```

Or run every test, ignored ones included:

```bash
cargo test -- --include-ignored
```
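An ignored test might look like the following sketch; the helper and test names are illustrative, not taken from the project's actual suite:

```rust
// Hypothetical helper: read a required variable, panicking with a clear
// message so a missing-configuration failure is actionable.
fn require_env(name: &str) -> String {
    std::env::var(name)
        .unwrap_or_else(|_| panic!("{name} must be set for integration tests"))
}

#[test]
#[ignore = "requires GOVEE_API_KEY"]
fn get_all_devices_requires_api_key() {
    // Runs only when `--ignored` or `--include-ignored` is passed.
    let key = require_env("GOVEE_API_KEY");
    assert!(!key.is_empty());
}
```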

## Test Coverage

The test suite covers:
- **Wrapper functions**: Device data transformation and mapping
- **Error handlers**: Error response formatting and serialization
- **Route handlers**: HTTP endpoint behavior (healthcheck, home redirect)
- **Service layer**: Light setup and device control logic
- **Data structures**: Constants, enums, and type definitions
- **Authorization**: Token validation and authentication flows

# Dependencies:

Make sure to install xcode-select:
166 changes: 166 additions & 0 deletions TESTING.md
@@ -0,0 +1,166 @@
# Testing Documentation

This document provides detailed information about the test suite for the Cyberlight project.

## Test Organization

Tests are organized in the `src/tests/` directory with the following structure:

```
src/tests/
├── test_constants/ # Tests for constants and enums
│ └── test_enums.rs
├── test_error_handlers/ # Tests for error handling
│ ├── test_error_handlers.rs
│ └── test_error_implementations.rs
├── test_implementations/ # Tests for implementations (e.g., auth)
│ └── test_access_token.rs
├── test_routes/ # Tests for HTTP routes
│ ├── test_all_devices_routes.rs
│ ├── test_healthcheck_routes.rs
│ └── test_home_routes.rs
├── test_services/ # Tests for service layer
│ └── test_light_setup_service.rs
└── test_wrappers/ # Tests for wrapper functions
└── test_all_devices_wrapper.rs
```

## Test Categories

### Unit Tests (No External Dependencies)

These tests run without requiring environment variables or external API access:

1. **Wrapper Tests** (`test_wrappers/`)
- `test_wrap_devices`: Tests device data transformation
- `test_wrap_model_and_devices`: Tests model and device mapping
- `test_wrap_device_status`: Tests device status filtering

2. **Error Implementation Tests** (`test_error_handlers/test_error_implementations.rs`)
- Serialization/deserialization of error types
- AuthError, NotFoundError, ServerError handling

3. **Service Tests** (`test_services/`)
- `test_office_setup_creates_payload_with_on_command`: Tests on command payload creation
- `test_office_setup_creates_payload_with_off_command`: Tests off command payload creation
- `test_office_setup_handles_all_device_types`: Tests all device type variants

4. **Constants Tests** (`test_constants/`)
- Device struct creation and field access
- OfficeDevices enum variants
- All device types can be instantiated

5. **Route Tests** (`test_routes/`)
- `test_healthcheck_handler`: Tests healthcheck endpoint response
- `test_home_redirects_to_status`: Tests home route redirect behavior
- Error handler tests (404, 401, 500)

### Integration Tests (Require Environment Setup)

These tests are marked with `#[ignore]` and require environment variables:

1. **API Integration Tests** (`test_routes/test_all_devices_routes.rs`)
- `test_get_all_devices_handler`: Requires `GOVEE_API_KEY`
- `test_get_status_for_all_devices`: Requires `GOVEE_API_KEY`
- `test_get_status_for_device`: Requires `GOVEE_API_KEY`

2. **Authorization Tests** (`test_implementations/test_access_token.rs`)
- `test_valid_authorization_token`: Requires `ACCESS_TOKEN`, `GOVEE_API_KEY`
- `test_missing_authorization_header`: Requires `ACCESS_TOKEN`
- `test_invalid_authorization_token`: Requires `ACCESS_TOKEN`

## Running Tests

### Run All Unit Tests
```bash
cargo test
```
Expected output: 20+ tests passing, 6 tests ignored

### Run Specific Test Module
```bash
# Run wrapper tests only
cargo test test_wrap

# Run error handler tests only
cargo test test_error

# Run service tests only
cargo test test_light_setup
```

### Run Integration Tests

First, set up environment variables (create a `.env` file or export them):

```bash
# Required for API integration tests
export GOVEE_API_KEY="your_govee_api_key"
export GOVEE_ROOT_URL="https://developer-api.govee.com"
export ACCESS_TOKEN="your_access_token"

# Required for device control tests
export OFFICE_BOARD_LED_ID="your_board_led_id"
export OFFICE_BOARD_LED_MODEL="your_board_led_model"
export OFFICE_TABLE_LED_ID="your_table_led_id"
export OFFICE_TABLE_LED_MODEL="your_table_led_model"
export OFFICE_WINDOW_LED_ID="your_window_led_id"
export OFFICE_WINDOW_LED_MODEL="your_window_led_model"
export OFFICE_STANDING_LEFT_LED_ID="your_standing_left_led_id"
export OFFICE_STANDING_LEFT_LED_MODEL="your_standing_left_led_model"
export OFFICE_STANDING_RIGHT_LED_ID="your_standing_right_led_id"
export OFFICE_STANDING_RIGHT_LED_MODEL="your_standing_right_led_model"
export OFFICE_HUMIDIFIER_ID="your_humidifier_id"
export OFFICE_HUMIDIFIER_MODEL="your_humidifier_model"
```
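The `*_ID`/`*_MODEL` pairs above presumably map onto the project's `Device` struct (`device_id`, `model`). The helper below is a hypothetical illustration of that wiring, with a local copy of the struct so the sketch stands alone:

```rust
#[derive(Debug, PartialEq)]
struct Device {
    device_id: String,
    model: String,
}

// Hypothetical helper: prefix "OFFICE_TABLE_LED" reads
// OFFICE_TABLE_LED_ID and OFFICE_TABLE_LED_MODEL.
fn device_from_env(prefix: &str) -> Device {
    let read = |suffix: &str| {
        let name = format!("{prefix}_{suffix}");
        std::env::var(&name).unwrap_or_else(|_| panic!("{name} must be set"))
    };
    Device {
        device_id: read("ID"),
        model: read("MODEL"),
    }
}
```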

Then run the ignored tests:

```bash
# Run only ignored tests
cargo test -- --ignored

# Run all tests (including ignored ones)
cargo test -- --include-ignored
```

## Test Maintenance

When adding new features, please ensure:

1. **Unit tests** are added for pure functions and logic that doesn't require external dependencies
2. **Integration tests** are added for API endpoints and routes
3. Tests requiring environment variables are marked with `#[ignore = "reason"]`
4. Test names clearly describe what is being tested
5. Tests are organized in the appropriate test module
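Guidelines 1, 3, and 4 above can be sketched together; the function and test names here are hypothetical, not from the project's suite:

```rust
// A pure function: unit-testable with no environment or network access.
fn normalize_model(model: &str) -> String {
    model.trim().to_uppercase()
}

// Descriptive name, no external dependencies: runs under plain `cargo test`.
#[test]
fn normalize_model_trims_whitespace_and_uppercases() {
    assert_eq!(normalize_model("  h6159 "), "H6159");
}

// Environment-dependent: ignored by default, with the reason stated.
#[test]
#[ignore = "requires GOVEE_API_KEY and network access"]
fn device_list_endpoint_returns_devices() {
    let key = std::env::var("GOVEE_API_KEY").expect("set GOVEE_API_KEY first");
    assert!(!key.is_empty());
}
```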

## Coverage Improvements Made

The recent test suite enhancements include:

- ✅ Uncommented and fixed wrapper tests (3 tests)
- ✅ Added home route redirect test (1 test)
- ✅ Added service layer tests (3 tests)
- ✅ Added error implementation tests (6 tests)
- ✅ Added constants/enums tests (3 tests)
- ✅ Marked integration tests as ignored when environment variables are not set
- ✅ Organized tests into logical modules

Total: 20+ passing unit tests, 6 integration tests (ignored by default)

## Continuous Integration

In CI/CD pipelines, you can:

1. Run unit tests without environment setup: `cargo test`
2. Set up environment secrets and run all tests: `cargo test -- --include-ignored`

## Future Test Improvements

Potential areas for additional test coverage:

- Mock external API calls for integration tests
- Add performance/benchmark tests for critical paths
- Add property-based tests for data transformations
- Add tests for edge cases and error conditions in routes
- Add tests for concurrent request handling
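As a sketch of the property-based idea: a crate such as `proptest` would normally generate the inputs, but a loop of hand-generated cases can stand in, and `wrap_devices` plus its local `Device` are hypothetical stand-ins for the project's wrapper:

```rust
#[derive(Debug, PartialEq)]
struct Device {
    device_id: String,
    model: String,
}

// Stand-in for the project's wrapper: (id, model) pairs into Device values.
fn wrap_devices(raw: &[(String, String)]) -> Vec<Device> {
    raw.iter()
        .map(|(id, model)| Device {
            device_id: id.clone(),
            model: model.clone(),
        })
        .collect()
}

#[test]
fn wrap_devices_preserves_length_and_order() {
    // Property: for any input, length and pairwise order are preserved.
    for n in 0..50 {
        let raw: Vec<(String, String)> = (0..n)
            .map(|i| (format!("id{i}"), format!("m{i}")))
            .collect();
        let wrapped = wrap_devices(&raw);
        assert_eq!(wrapped.len(), raw.len());
        for (device, (id, model)) in wrapped.iter().zip(&raw) {
            assert_eq!(&device.device_id, id);
            assert_eq!(&device.model, model);
        }
    }
}
```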
3 changes: 3 additions & 0 deletions src/tests.rs
@@ -1,3 +1,6 @@
pub mod test_wrappers;
pub mod test_error_handlers;
pub mod test_routes;
pub mod test_implementations;
pub mod test_services;
pub mod test_constants;
1 change: 1 addition & 0 deletions src/tests/test_constants.rs
@@ -0,0 +1 @@
pub mod test_enums;
73 changes: 73 additions & 0 deletions src/tests/test_constants/test_enums.rs
@@ -0,0 +1,73 @@
#[cfg(test)]
mod tests {
    use crate::constants::enums::{Device, OfficeDevices};

    #[test]
    fn test_device_creation() {
        let device = Device {
            device_id: "test_id".to_string(),
            model: "test_model".to_string(),
        };
        assert_eq!(device.device_id, "test_id");
        assert_eq!(device.model, "test_model");
    }

    #[test]
    fn test_office_devices_variants() {
        let table_led = OfficeDevices::TableLED(Device {
            device_id: "table_id".to_string(),
            model: "table_model".to_string(),
        });
        match table_led {
            OfficeDevices::TableLED(device) => {
                assert_eq!(device.device_id, "table_id");
                assert_eq!(device.model, "table_model");
            }
            _ => panic!("Expected TableLED variant"),
        }

        let standing_right = OfficeDevices::StandingRightLED(Device {
            device_id: "right_id".to_string(),
            model: "right_model".to_string(),
        });
        match standing_right {
            OfficeDevices::StandingRightLED(device) => {
                assert_eq!(device.device_id, "right_id");
                assert_eq!(device.model, "right_model");
            }
            _ => panic!("Expected StandingRightLED variant"),
        }
    }

    #[test]
    fn test_all_office_device_variants_can_be_created() {
        let devices = vec![
            OfficeDevices::TableLED(Device {
                device_id: "1".to_string(),
                model: "m1".to_string(),
            }),
            OfficeDevices::StandingRightLED(Device {
                device_id: "2".to_string(),
                model: "m2".to_string(),
            }),
            OfficeDevices::StandingLeftLED(Device {
                device_id: "3".to_string(),
                model: "m3".to_string(),
            }),
            OfficeDevices::WindowLED(Device {
                device_id: "4".to_string(),
                model: "m4".to_string(),
            }),
            OfficeDevices::BoardLED(Device {
                device_id: "5".to_string(),
                model: "m5".to_string(),
            }),
            OfficeDevices::Humidifier(Device {
                device_id: "6".to_string(),
                model: "m6".to_string(),
            }),
        ];
        // Test that all 6 variants can be created
        assert_eq!(devices.len(), 6);
    }
}
1 change: 1 addition & 0 deletions src/tests/test_error_handlers.rs
@@ -1 +1,2 @@
pub mod test_error_handlers;
pub mod test_error_implementations;
55 changes: 55 additions & 0 deletions src/tests/test_error_handlers/test_error_implementations.rs
@@ -0,0 +1,55 @@
#[cfg(test)]
mod tests {
    use crate::error_handlers::error_implementations::{AuthError, NotFoundError, ServerError};

    #[test]
    fn test_auth_error_serialization() {
        let error = AuthError {
            error: "Unauthorized access".to_string(),
        };
        let json = serde_json::to_string(&error).expect("serialize auth error");
        assert!(json.contains("Unauthorized access"));
        assert!(json.contains("error"));
    }

    #[test]
    fn test_auth_error_deserialization() {
        let json = r#"{"error":"Invalid token"}"#;
        let error: AuthError = serde_json::from_str(json).expect("deserialize auth error");
        assert_eq!(error.error, "Invalid token");
    }

    #[test]
    fn test_not_found_error_serialization() {
        let error = NotFoundError {
            error: "Resource not found".to_string(),
        };
        let json = serde_json::to_string(&error).expect("serialize not found error");
        assert!(json.contains("Resource not found"));
        assert!(json.contains("error"));
    }

    #[test]
    fn test_not_found_error_deserialization() {
        let json = r#"{"error":"Page not found"}"#;
        let error: NotFoundError = serde_json::from_str(json).expect("deserialize not found error");
        assert_eq!(error.error, "Page not found");
    }

    #[test]
    fn test_server_error_serialization() {
        let error = ServerError {
            error: "Internal server error".to_string(),
        };
        let json = serde_json::to_string(&error).expect("serialize server error");
        assert!(json.contains("Internal server error"));
        assert!(json.contains("error"));
    }

    #[test]
    fn test_server_error_deserialization() {
        let json = r#"{"error":"Something went wrong"}"#;
        let error: ServerError = serde_json::from_str(json).expect("deserialize server error");
        assert_eq!(error.error, "Something went wrong");
    }
}
1 change: 1 addition & 0 deletions src/tests/test_implementations.rs
@@ -0,0 +1 @@
pub mod test_access_token;